Compare commits

..

252 Commits

Author SHA1 Message Date
973bd8c480 adaptive_cloth: AdaptiveMesh: use params for collapsible test
Use the `aspect_ratio_min` available in the params for the edge
collapsible test instead of the hard-coded value.
2021-09-06 00:40:19 +05:30
6748f5cfbb adaptive_cloth: face sizing: change_in_vertex_normal_max as a param
Create the GUI and use `change_in_vertex_normal_max` for the dynamic
face sizing calculation instead of the hard-coded value.
2021-09-06 00:26:41 +05:30
e07f8a6214 adaptive_cloth: AdaptiveMesh: face sizing: use params
Use values from the given params instead of the hard-coded parameter
values.
2021-09-05 21:55:48 +05:30
dcbd6f4257 adaptive_cloth: gui for edge_length_max and aspect_ratio_min 2021-09-05 21:47:00 +05:30
7a0683cf05 adaptive_cloth: parameter name from size_min to edge_length_min 2021-09-05 20:35:07 +05:30
fd9b3419ea adaptive_cloth: AdaptiveMesh: dynamic face sizing (for curvature)
Compute the dynamic face sizing with respect to curvature and do eigen
decomposition of the final sizing to add constraints with respect to
edge lengths and aspect ratio.
2021-09-05 11:05:24 +05:30
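
A minimal sketch of the idea with simplified stand-in types (not the
actual float2x2 code in this commit): eigen-decompose the symmetric
2x2 sizing tensor in closed form, then clamp the eigenvalues so the
implied edge lengths respect `edge_length_min`/`edge_length_max` and
the anisotropy respects `aspect_ratio_min`. An eigenvalue acts as
1/length^2 along its eigenvector, so length bounds become eigenvalue
bounds.

    #include <algorithm>
    #include <cmath>

    struct EigenDecomp2x2 {
      float values[2];     /* eigenvalues, values[0] >= values[1] */
      float vectors[2][2]; /* unit eigenvector i is vectors[i] */
    };

    /* Closed-form eigen decomposition of the symmetric matrix
     * [[a, b], [b, c]]. */
    static EigenDecomp2x2 eigen_decompose(float a, float b, float c)
    {
      const float mean = 0.5f * (a + c);
      const float radius = std::sqrt(0.25f * (a - c) * (a - c) + b * b);
      EigenDecomp2x2 d;
      d.values[0] = mean + radius;
      d.values[1] = mean - radius;
      for (int i = 0; i < 2; i++) {
        float vx, vy;
        if (b != 0.0f) {
          /* (b, lambda - a) solves (M - lambda * I) * v = 0. */
          vx = b;
          vy = d.values[i] - a;
        }
        else {
          /* Diagonal matrix: eigenvectors are the coordinate axes. */
          const bool along_x = (i == 0) == (a >= c);
          vx = along_x ? 1.0f : 0.0f;
          vy = along_x ? 0.0f : 1.0f;
        }
        const float len = std::sqrt(vx * vx + vy * vy);
        d.vectors[i][0] = vx / len;
        d.vectors[i][1] = vy / len;
      }
      return d;
    }

    /* Length bounds become eigenvalue bounds; the aspect-ratio bound
     * becomes a floor relative to the larger eigenvalue. */
    static void constrain_sizing(EigenDecomp2x2 &d,
                                 float edge_length_min,
                                 float edge_length_max,
                                 float aspect_ratio_min)
    {
      const float lo = 1.0f / (edge_length_max * edge_length_max);
      const float hi = 1.0f / (edge_length_min * edge_length_min);
      for (int i = 0; i < 2; i++) {
        d.values[i] = std::clamp(d.values[i], lo, hi);
      }
      const float floor_value = std::max(d.values[0], d.values[1]) *
                                aspect_ratio_min * aspect_ratio_min;
      for (int i = 0; i < 2; i++) {
        d.values[i] = std::max(d.values[i], floor_value);
      }
    }
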
9d32df72b4 adaptive_cloth: AdaptiveMesh: calculate derivative
Calculate derivative of the given float3s with respect to the uv space
coordinates of the given face.
2021-09-05 11:04:23 +05:30
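
A sketch of this computation with stand-in types (the real code uses
blender::float3/float2 and the float2x2/float3x2 classes added below):
the 3x2 derivative D solves [x1 - x0 | x2 - x0] = D * [u1 - u0 | u2 - u0],
so D = dx * inverse(du), where du is the 2x2 UV difference matrix.

    #include <array>

    using float3 = std::array<float, 3>;
    using float2 = std::array<float, 2>;

    /* Derivative of per-vertex float3 values with respect to the
     * face's UV coordinates; returns the columns d/du and d/dv. */
    static std::array<float3, 2> derivative(const float3 x[3],
                                            const float2 uv[3])
    {
      const float du[2][2] = {
          {uv[1][0] - uv[0][0], uv[2][0] - uv[0][0]},
          {uv[1][1] - uv[0][1], uv[2][1] - uv[0][1]},
      };
      const float det = du[0][0] * du[1][1] - du[0][1] * du[1][0];
      const float inv[2][2] = {
          {du[1][1] / det, -du[0][1] / det},
          {-du[1][0] / det, du[0][0] / det},
      };
      std::array<float3, 2> d;
      for (int r = 0; r < 3; r++) {
        const float dx1 = x[1][r] - x[0][r];
        const float dx2 = x[2][r] - x[0][r];
        d[0][r] = dx1 * inv[0][0] + dx2 * inv[1][0];
        d[1][r] = dx1 * inv[0][1] + dx2 * inv[1][1];
      }
      return d;
    }
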
030e53da23 adaptive_cloth: Mesh: Node: get normal 2021-09-05 11:03:36 +05:30
6c5a5cb83d float2x3 and float3x2: transpose and multiplication with intern file
Transpose and multiplication require access to the other structure as
well; this cannot be done through header files only, at least not
easily. So add separate implementation files for each, with the
respective functions that are required.
2021-09-05 11:01:44 +05:30
c5433fdcb2 float3x2: constructor through columns of the matrix 2021-09-05 10:56:14 +05:30
4b8dc73996 float2x2: transpose 2021-09-05 10:55:47 +05:30
04496a2e01 float2x2: inverse 2021-09-05 10:55:38 +05:30
653da43627 float2x2: constructor through direct values of the matrix 2021-09-05 10:55:22 +05:30
672e4c3fa5 float2x2: constructor through columns of the matrix 2021-09-05 10:54:59 +05:30
b37ecf5855 float2x2: eigen decomposition 2021-09-04 20:19:57 +05:30
0514f64a0f adaptive_cloth: AdaptiveMesh: dynamic: vert sizing from face sizing
Calculate the vert sizing by taking the uv area weighted average of
the sizing of the adjacent faces of the vert.
2021-09-02 20:55:41 +05:30
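
A sketch of the weighting with simplified stand-in types (the real
Sizing is a float2x2 stored per Vert): accumulate each adjacent face's
sizing scaled by its UV area, then divide by the total area.

    #include <vector>

    struct Sizing {
      float m[2][2] = {{0.0f, 0.0f}, {0.0f, 0.0f}};
    };

    struct FaceSample {
      Sizing sizing;
      float uv_area;
    };

    /* Vert sizing as the UV-area-weighted average of the sizings of
     * the faces adjacent to the vert. */
    static Sizing vert_sizing(const std::vector<FaceSample> &adjacent_faces)
    {
      Sizing result;
      float area_sum = 0.0f;
      for (const FaceSample &f : adjacent_faces) {
        for (int i = 0; i < 2; i++) {
          for (int j = 0; j < 2; j++) {
            result.m[i][j] += f.sizing.m[i][j] * f.uv_area;
          }
        }
        area_sum += f.uv_area;
      }
      if (area_sum == 0.0f) {
        return result; /* degenerate: no adjacent UV area */
      }
      for (int i = 0; i < 2; i++) {
        for (int j = 0; j < 2; j++) {
          result.m[i][j] /= area_sum;
        }
      }
      return result;
    }
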
c5461f624e adaptive_cloth: AdaptiveMesh: dynamic: vert sizing calc overview 2021-09-02 20:07:36 +05:30
189c38dced adaptive_cloth: AdaptiveMesh: set uv area for faces 2021-09-02 20:07:25 +05:30
b04ba2e8eb adaptive_cloth: AdaptiveMesh: FaceData: initial, store uv area 2021-09-02 20:06:30 +05:30
48278754a7 adaptive_cloth: Add dynamic remeshing selection to GUI
Add remeshing type support to both the cloth modifier and the adaptive
remesh modifier.

The basic call for dynamic remesh is also set up, only need to work on
finding the vertex sizing dynamically.
2021-09-02 14:43:54 +05:30
ffcc059abf adaptive_cloth: fix: Mesh: collapse edge: n2 not updated for v1
A simple fix that was extremely difficult to debug. This particular
part of the collapse edge function runs rarely, and what makes this
bug even harder to find is that only a subsequent operation shows any
sign of a problem.

One way to trigger this bug is to static remesh Suzanne (Blender
monkey) at 0.005 or lower minimum size. This leads to a crash due to a
bad optional access of a node.

The vert refers to the node but the node doesn't refer to the vert. So
once this sort of situation is created, when that vert's nodes need to
be used, it leads to a bad optional access.

Such a simple fix :)
2021-09-01 20:31:07 +05:30
a5c01917e8 adaptive_cloth: squash unused parameters warnings in release mode
All unused parameter warnings in BKE_cloth_remesh.hh and
cloth_remesh.cc have been fixed either by adding a #ifndef NDEBUG
directive or by changing the code slightly.
2021-09-01 14:56:17 +05:30
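
A purely illustrative sketch of the NDEBUG pattern: a parameter used
only inside an assert is reported as unused in release builds, where
assert() expands to nothing.

    #include <cassert>

    /* `size` and `capacity` are used solely in the assert, so with
     * NDEBUG defined the compiler would warn about unused parameters;
     * void-casting them in that branch silences the warning. */
    static void check_capacity(int size, int capacity)
    {
    #ifndef NDEBUG
      assert(size <= capacity);
    #else
      (void)size;
      (void)capacity;
    #endif
    }
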
5bdf40dcda adaptive_cloth: AdaptiveMesh: verts of new sewing edge as preserve
Newly created sewing edges' verts should also be marked as preserve.
2021-09-01 13:50:49 +05:30
940ee5d587 adaptive_cloth: AdaptiveMesh: force split for sewing
Add an option to split the opposite edge even if it does not meet the
size criterion. This ensures that no sewing edge is missed when an
edge is split (one can be missed if the opposite edge doesn't meet
the size criterion).
2021-08-31 21:17:59 +05:30
9635768cda adaptive_cloth: AdaptiveMesh: mark sewing edge verts as preserve
Mark all verts attached to sewing edge(s) as preserve. This ensures
that no sewing edges are removed, which would otherwise lead to
results that are not in line with what the artist wants.
2021-08-31 20:32:55 +05:30
1600f3de96 adaptive_cloth: AdaptiveMesh: ensure edge between sewing edges
While trying to create the sewing edges, ensure that the vert in
question is between 2 or more edges that are between sewing edges.

Also ensure that the opposite edge is between sewing edges.
2021-08-31 19:11:38 +05:30
79f70b1b09 adaptive_cloth: AdaptiveMesh: no need to add flag after split edge
Since split edge triangulate already handles copying the extra data,
there is no need to try to add the flag `EDGE_BETWEEN_SEWING_EDGES` to
the newly split edges.
2021-08-30 18:08:16 +05:30
d1b89d1a71 adaptive_cloth: Mesh: split edge: option to copy extra data
An extra option to copy the extra data from the edge that is split
to the edges that are formed by the split. This does not include
the other edges added for triangulation purposes.
2021-08-30 18:06:35 +05:30
28732a6ee1 adaptive_cloth: AdaptiveMesh: setting EdgeData flags after split
When an edge that had EDGE_BETWEEN_SEWING_EDGES is split, the new
edges should also have EDGE_BETWEEN_SEWING_EDGES.
2021-08-29 20:22:00 +05:30
3b54908fd7 adaptive_cloth: AdaptiveMesh: mark edges between sewing edges
Add a new flag for EdgeData that stores whether the edge is between
sewing edges or not.

Add a function that marks all the edges that are between sewing edges.

Call this function in the initialization of the static remeshing if
sewing is enabled.
2021-08-29 20:19:47 +05:30
68e2240857 adaptive_cloth: AdaptiveMesh: sewing: dump file after adding edge
Dump the serialized Mesh after adding the loose (sewing) edge.
2021-08-28 17:59:33 +05:30
f2e0b6b21c adaptive_cloth: AdaptiveMesh: do not split a loose edge 2021-08-28 17:47:59 +05:30
f0721e7b16 adaptive_cloth: AdaptiveMesh: sewing: add the sewing edge if needed
If the opposite edge still exists and it can be split, add a sewing
edge between `vert` and the newly created vert.
2021-08-28 17:30:43 +05:30
8abda209a4 adaptive_cloth: AdaptiveMesh: split edge: verts added during split
Split edge now appends the flip edges mesh diff to the split edge mesh
diff and returns this complete mesh diff along with the verts that were
added during the split operation.
2021-08-28 17:28:36 +05:30
3192103d22 adaptive_cloth: AdaptiveMesh: flip edges: return complete MeshDiff
Return a complete MeshDiff of all the operations done in flip edges by
appending the MeshDiff(s) after each operation.
2021-08-28 16:32:02 +05:30
25cfaf2e5c adaptive_cloth: AdaptiveMesh: is edge splittable
Abstract out the edge splittability check into a function.
2021-08-28 16:28:59 +05:30
01881edbe9 adaptive_cloth: AdaptiveMesh: compute_info_element functions
AdaptiveMesh specific compute info for elements. This internally calls
the Mesh specific compute info to make function calls easier.
2021-08-28 16:22:20 +05:30
89e7d6b02f adaptive_cloth: MeshDiff: append one MeshDiff to another 2021-08-28 16:09:23 +05:30
f1c057006b adaptive_cloth: MeshDiff: remove elements that don't exist in mesh
It is possible to create a MeshDiff which is updated to remove a
certain element that was initially added. So the added element index
is no longer valid and should be removed.

This function removes elements from the `added_elements` lists that no
longer exist in the mesh.
2021-08-28 16:07:17 +05:30
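
A sketch of that pruning with stand-in types (`Index` and the
existence predicate stand in for the real generational-arena types):
the standard erase-remove idiom drops indices whose elements no longer
exist.

    #include <cstddef>
    #include <algorithm>
    #include <functional>
    #include <vector>

    using Index = std::size_t;

    /* Drop indices from an `added_elements`-style list when the
     * element no longer exists in the mesh. */
    static void remove_non_existing(std::vector<Index> &added_elements,
                                    const std::function<bool(Index)> &has_element)
    {
      added_elements.erase(
          std::remove_if(added_elements.begin(),
                         added_elements.end(),
                         [&](Index i) { return !has_element(i); }),
          added_elements.end());
    }
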
33a37c854e adaptive_cloth: MeshDiff: add elements
Functions to add elements to `added_elements`.
2021-08-28 16:05:28 +05:30
c47d67260e adaptive_cloth: Mesh: add checked loose edge
Adds a loose edge to the mesh with the given vert indices and ensures
that these vert indices do not already have an edge between them and
that they exist.
2021-08-28 16:04:20 +05:30
8f283d50d7 adaptive_cloth: Mesh: compute info separate functions for each type
`compute_info()` now calls separate functions for each element type
instead of computing everything within that function. This allows
other parts of the code to compute an element's info when that is
easier than creating a mesh diff.
2021-08-28 16:02:05 +05:30
611172fd8c adaptive_cloth: Mesh: functions to check if mesh has that element
Given the element's index, these check whether the element still
exists in the mesh.
2021-08-28 16:00:33 +05:30
8e8946c545 adaptive_cloth: AdaptiveMesh: sewing edge: get set of opposite edges
Given a vertex, the function is supposed to add a sewing edge to it if
possible.

Currently the function gets the set of opposite edges. An opposite
edge is an edge that is in between 2 loose edges and these loose edges
are connected to edges that connect to the given vert.

            e1   vert   e5
    e1_ov.________.________.e4_ov
         |                 |
       e2|                 |e4
         ._________________.
    e2_ov   opposite_edge   e3_ov
                 (e3)

What needs to be done: With the set of opposite edges, if the edge is
splittable then it should be split and a new edge should be added
between vert and the newly created vert (vert created when splitting
the opposite edge).
2021-08-28 00:15:57 +05:30
7608be58f1 adaptive_cloth: Mesh: Edge: get checked other vert 2021-08-28 00:15:27 +05:30
ab857c4ce9 adaptive_cloth: fix: Mesh: split edge: elements missed in MeshDiff
Split edge function didn't add the new node and the new vert(s) to
MeshDiff.
2021-08-28 00:14:02 +05:30
ee7425dab7 adaptive_cloth: AdaptiveMesh: flag for sewing with gui where needed
Added a new flag to have sewing enabled. The flag doesn't do anything
yet (no functionality).

Added the required code to pass this sewing option to
adaptive_remesh().

Created the GUI for this flag in the AdaptiveRemesh modifier.
2021-08-28 00:14:02 +05:30
4465d236d2 float2x3: initial 2021-08-27 16:16:00 +05:30
6acfed0712 float3x2: initial 2021-08-27 15:32:21 +05:30
c4f5736275 float2x2: documentation 2021-08-27 15:32:08 +05:30
d134678da7 float4x4: documentation 2021-08-27 15:31:26 +05:30
d17b21604d adaptive_cloth: ClothNodeData: fix typo for ClothVertex flags interp 2021-08-26 10:06:00 +05:30
950857656d adaptive_cloth: ClothNodeData: interp goal correctly
The goal of the ClothVertex causes the vertex to move towards xconst
so it doesn't make sense to interpolate the goal since this will lead
to a gradient for the goal and this is not artist friendly, there
would be no way for the artist to define only a specific vertex to be
attached to a point in 3D space.
2021-08-19 18:53:36 +05:30
bf0f118419 adaptive_cloth: ClothNodeData: interp flags correctly 2021-08-19 17:32:58 +05:30
e68cf37413 cloth: fix: reset per vertex spring count when building springs
While building the structural springs, the spring count of the
vertices that connect the springs is incremented.

Since the springs are rebuilt entirely, the spring count should also
be reset.
2021-08-19 16:58:26 +05:30
5afe71daf5 adaptive_mesh: serialize and dump the cloth mesh every frame 2021-08-19 16:24:56 +05:30
111458f6c7 adaptive_cloth: fix: Mesh: collapse edge: can lead to loose verts
The previous commit can lead to loose verts (verts that don't have any
edges attached). Such verts should be deleted.
2021-08-15 16:08:07 +05:30
53033ffd11 adaptive_cloth: fix: Mesh: collapse edge: can lead to loose edges
There are some edge cases (:P) that can lead to the creation of loose
edges when collapsing the edge.

Suppose v2 has an edge that contains only 1 face and this face also
contains v1: the face is deleted, but the edge between v2 and ov
remains. This causes the loose edge. A simple check and deletion of
such an edge fixes this bug.
2021-08-14 14:48:31 +05:30
e4165cdcf9 adaptive_cloth: AdaptiveMesh: fix: aspect ratio, wrong area
Must take the non-directional area (absolute value of the area) since
the order of the verts of the face may not always correspond to the
normal of the face. The aspect ratio calculation should not be doing
inversion testing as well. (It could be used for inversion testing,
but without proper math backing it, don't want to use it right now.)
2021-08-13 19:39:58 +05:30
161fa5b4da adaptive_cloth: AdaptiveMesh: aspect ratio test when collapsing edge
Test whether the faces generated when collapsing the edge would fail
to meet the aspect ratio criterion.

There is a lot more information about which algorithm is used for
calculating the aspect ratio in the code comments. The gist is that
there are many different ways to calculate the aspect ratio of a
triangle and each provides a different end result suitable for
different needs. This makes it hard to select the right algorithm.
2021-08-13 14:35:38 +05:30
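
For illustration only (the commit does not say which definition was
chosen): one common aspect-ratio measure normalizes the triangle's
area against its squared edge lengths, giving 1.0 for an equilateral
triangle and 0.0 for a degenerate one.

    #include <cmath>

    /* Aspect ratio from the three edge lengths, via Heron's formula
     * for the area; scaled so an equilateral triangle scores 1.0. */
    static float triangle_aspect_ratio(float a, float b, float c)
    {
      const float s = 0.5f * (a + b + c); /* semi-perimeter */
      const float area_sq = s * (s - a) * (s - b) * (s - c);
      const float area = std::sqrt(std::fmax(area_sq, 0.0f));
      return 4.0f * std::sqrt(3.0f) * area / (a * a + b * b + c * c);
    }
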
1a1185b599 adaptive_cloth: cloth: prev_frame_mesh is invalid if frame incorrect
When the current frame (framenr) does not immediately follow the
stored last_frame, the `prev_frame_mesh` is invalid and must be
deleted. When remesh is on, even the clothObject is invalid, so delete
that as well.
2021-08-12 22:28:14 +05:30
72d10c1edc adaptive_cloth: Store previous frame mesh information for remeshing
The modifier stack doesn't give the previously evaluated `Mesh`, so it
needs to be stored within the `ClothModifierData`.

Some parts of the cloth modifier copy information from the given
`Mesh` to the `clothObject` every frame; this should not happen when
remeshing is on.

In `BKE_cloth_remesh()`, the previous frame mesh must be used if
available for remeshing.

TODO: In case the user goes back a frame, `clmd->prev_frame_mesh` must
be invalidated, otherwise the simulation will not know that it must be
restarted.
2021-08-12 20:21:08 +05:30
bd0a9f1943 adaptive_cloth: Mesh: msgpack: store the type of Mesh on first line
This allows the code that deserializes it to decide which type of Mesh
structure to deserialize to. Since msgpack is compact, it doesn't
store information about the size (explicitly or implicitly) of the
element, which means that deserialization can lead to weird output if
the wrong data structures are used.
2021-08-12 13:25:37 +05:30
aa7fceb7a1 adaptive_cloth: set cloth information after remeshing
The cloth object doesn't know about the updated mesh, so update the
entire cloth object to utilize the new mesh.
2021-08-12 11:48:26 +05:30
0cbad198c7 adaptive_cloth: AdaptiveMesh: more info in file dump filename
Store the edge index as well.
2021-08-12 00:05:17 +05:30
ba82bc87aa adaptive_cloth: AdaptiveMesh: collapse edge: face inversion test
An edge is collapsible only if it doesn't invert any faces.
2021-08-11 12:35:02 +05:30
bb31b5bca0 adaptive_cloth: Mesh: get vert indices of edge aligned with n1
A set of "3D edges" don't always have n1 and n2 (through the verts of
the edge) in the same order, useful to get it in the same order.
2021-08-11 11:50:18 +05:30
b3d8a748a3 adaptive_cloth: AdaptiveMesh: call compute_info() where needed 2021-08-11 00:06:56 +05:30
f9c2f52517 adaptive_cloth: AdaptiveMesh: compute info for newly added elements
AdaptiveMesh needs some more calculations, like edge size.
2021-08-11 00:06:07 +05:30
1717a4d55c adaptive_cloth: Mesh: compute info for newly added elements
Using MeshDiff which provides the newly added elements, iterate over
these elements and calculate certain required information. This makes
it easier to take care of things. If there is a MeshDiff produced,
always call compute_info() with it. This ensures all required
information is always available.
2021-08-11 00:05:50 +05:30
31d2283c3c adaptive_cloth: AdaptiveMesh: no edge flip if inverted face normals
The edge flippability test should prevent flipping an edge that would
lead to inverted face normals.
2021-08-10 22:12:33 +05:30
1501a34915 adaptive_cloth: remesh: compute face normals after creating mesh 2021-08-10 22:08:52 +05:30
99b9417f42 adaptive_cloth: Mesh: compute face normal(s) 2021-08-10 22:08:44 +05:30
be8d07b349 adaptive_cloth: AdaptiveMesh: preserve vert
Need to mark verts that should be preserved (ones that are on seams or
boundaries) at the beginning of the remeshing process so that the
initial state is not altered.

Collapse edges should not collapse a vert that should be preserved
into another vert.
2021-08-09 16:36:57 +05:30
3aca3d8ad3 adaptive_cloth: FilenameGen: increase number of leading zeros
Now that many more operations take place during remeshing, the count
exceeds 1000, so the number of leading zeros needs to be updated.
2021-08-09 16:36:07 +05:30
e57f77f6bf adaptive_cloth: AdaptiveMesh: collapse edges: edge size criterion
If the edge were to be collapsed, the newly formed edges shouldn't
exceed the edge size criterion (1.0 - small_value).
2021-08-08 19:07:26 +05:30
c73bc92e06 adaptive_cloth: AdaptiveMesh: collapse edges: run flip edges
It is important that the total update to faces of the mesh (collapse
followed by flip edges) is added to the new_active_faces.
2021-08-08 18:28:04 +05:30
7519d40091 adaptive_cloth: AdaptiveMesh: collapse edges: initial steps
There are todos but the code roughly works.
2021-08-08 18:09:22 +05:30
ba26266997 adaptive_cloth: mesh: is vert on seam or boundary 2021-08-08 18:08:44 +05:30
2f60cb83c5 adaptive_cloth: mesh: is edge collapseable: make function const 2021-08-08 18:07:42 +05:30
b19335d5f3 adaptive_cloth: mesh: const version of get checked node of vert 2021-08-08 18:07:06 +05:30
c7a231c75d adaptive_cloth: AdaptiveMesh: flippability: consider abs of 2 terms
Consider the absolute value of the cross_2d values generated, because
they only calculate the area, and the assumption is that the
orientation of the triangles shouldn't matter for this test.
2021-08-07 12:01:23 +05:30
78c1afa8fd adaptive_cloth: should remesh dump file macro
This helps turn off dumping of the serialized mesh easily.
2021-08-07 11:33:45 +05:30
9de42e7030 adaptive_cloth: mesh: split edge: correct orientation for new face
Set the correct orientation for the new faces formed during the split
edge operation.

By swapping the unwanted vert for the new vert, the orientation is
preserved.
2021-08-06 18:51:13 +05:30
fb67bf4941 adaptive_cloth: AdaptiveMesh: edge flip test: only if edge size is ok
Allow the edge flip only if the flipped edge doesn't exceed the edge
size requirement.
2021-08-06 16:42:52 +05:30
5bef741e58 adaptive_cloth: Mesh: edge flippable: no connecting edge ov1, ov2
Ensure that the verts the flipped edge would connect do not already
have a connecting edge. Otherwise this leads to a variety of problems,
like overlapping duplicate faces, or deletion of faces if no
duplication is allowed. An edge flip should only change the
orientation of the edge, purely through a connectivity change; it
should not change the number of edges/faces.
2021-08-06 11:13:01 +05:30
acaf87d88c adaptive_cloth: AdaptiveMesh: flip edges: set edge size
It is important to set the edge size for all the newly created edges.

Limit the number of loops; otherwise it can easily become an infinite
loop. There should be a better solution for this but cannot think of
one as of right now.

Always check if the edge is still flippable or not. Again, this
couldn't be solved with a different flip edge algorithm where the new
edge is always added, but right now, that isn't possible.
2021-08-06 10:35:20 +05:30
e3947dfd10 adaptive_cloth: AdaptiveMesh: splittable edges indices set changes
Earlier, an edge wouldn't be tested if one of its verts was already
selected (for another edge). This seems almost correct but leads to
non-symmetric remeshing. So it makes sense to get all the splittable
edges, sort them based on their size, and then split an edge only if
it still exists in the mesh.

So the changes added:
Give the entire set of splittable edges instead of a maximally
independent set.
Sort the set based on the edge size.
2021-08-05 15:09:00 +05:30
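
A sketch of that flow; `MeshT` and its methods here are stand-ins, not
the real API. Collect every splittable edge, sort by size (largest
first), then split each edge only if it still exists, since earlier
splits may have removed it.

    #include <cstddef>
    #include <algorithm>
    #include <vector>

    struct EdgeRef {
      std::size_t index;
      float size;
    };

    template<typename MeshT>
    static void split_edges_sorted(MeshT &mesh, std::vector<EdgeRef> edges)
    {
      std::sort(edges.begin(), edges.end(),
                [](const EdgeRef &a, const EdgeRef &b) {
                  return a.size > b.size; /* largest edges first */
                });
      for (const EdgeRef &e : edges) {
        /* A previous split may have removed or shrunk this edge. */
        if (mesh.has_edge(e.index) && mesh.is_edge_splittable(e.index)) {
          mesh.split_edge(e.index);
        }
      }
    }
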
84562e2eac adaptive_cloth: fix: AdaptiveMesh: anisotropic flip check
Based on the next paper by the same authors, "Folding and Crumpling
Adaptive Sheets", whose edge flip criterion is different. So using
that now.
2021-08-05 14:05:28 +05:30
4e2a10352d adaptive_cloth: fix: mesh: flip edge: edge might already exist
It is possible that there is already an edge between ov1 and ov2, so
it is best not to create a new one, because edges between verts should
always be unique for all the algorithms to work correctly.

It is a case that shouldn't show up too often, but when it does, it
will be interesting to see what happens in the static remeshing part.
The number of faces remains the same but the number of edges can
change (it can only decrease).
2021-08-05 11:14:30 +05:30
cb9424b99c modifier: adaptive_remesh: collapse edge only if collapseable 2021-08-04 12:29:19 +05:30
44c82975ec adaptive_cloth: mesh: collapse edges: remove dump file statements 2021-08-04 12:28:25 +05:30
cae9f4dd95 adaptive_cloth: mesh: is_edge_collapseable()
The collapse edge operation doesn't support one type of edge collapse.
When the collapse is across seams, it is possible for one vert v1 to
be asked to collapse into multiple verts v2. Deciding which v2 to pick
is a difficult task, so it is not handled right now.
2021-08-04 11:27:33 +05:30
fdda6f3f63 adaptive_cloth: mesh: collapse edge: tackle edge case
During an across-seams collapse edge, it is possible that n1 might
still have v1 attached to it. Take the example of an icosphere's
bottommost vert collapsed into some other neighbouring vert.

For this, make v1.node point to n1; essentially, v1 has been converted
to a v2 instead of being removed.
2021-08-04 10:42:32 +05:30
29c73ddd2e adaptive_cloth: mesh: face edge linkage checks improvement
It is not necessary for the face to have its verts available, because
delete_edge() can remove the verts from the face. So this check
ensures that there is no out-of-bounds access.
2021-08-04 10:16:41 +05:30
dc1144f59f adaptive_cloth: mesh: collapse edge across seams
It works for most cases. There is one case that is difficult to
handle: when v1 has two v2s to collapse into. Will add a check to make
sure the edge is collapsed only if collapsible.
2021-08-04 09:31:57 +05:30
bffe7c58d9 adaptive_cloth: mesh: collapse edge rewrite to fix bugs
Found a lot of bugs in the collapse edge routine using the debug tool,
so this function needed a rewrite.

Still need to add across-seams support.
2021-08-03 22:15:59 +05:30
20d24a9995 adaptive_cloth: msgpack: store more information for easier debugging
Ideally this wouldn't be needed, but it makes it easier to debug the
deserialization part. In case msgpack is used for final caching, this
information needs to be removed.
2021-08-02 11:15:26 +05:30
764ddf1d8f adaptive_cloth: static_remesh: dump file after each split and flip 2021-08-01 22:43:58 +05:30
dd364de831 adaptive_cloth: filename_gen: number has 0's to make length 3 2021-08-01 22:42:59 +05:30
094e0e740c modifier: adaptive_remesh: filename has edge_i with 0s 2021-08-01 18:50:47 +05:30
4e154c59a2 modifier: adaptive_remesh: dump file pre and post edge operation 2021-08-01 11:35:34 +05:30
d1c296519c adaptive_cloth: serialize Mesh and store to /tmp/test.msgpack 2021-07-27 14:59:21 +05:30
0772a217b9 msgpack: adaptive_cloth: Mesh: packer implementation 2021-07-26 23:59:21 +05:30
c90331221c msgpack: adaptive_cloth: Mesh Elements: packer implementation 2021-07-26 23:58:54 +05:30
b04f0c47e1 msgpack: adaptive_cloth: internal: packer implementation 2021-07-26 23:57:59 +05:30
3170667177 msgpack: blender::generational_arena::Index: packer implementation 2021-07-26 23:57:08 +05:30
6852a81516 msgpack: blender::Map<Key, Value>: packer implementation 2021-07-26 23:56:44 +05:30
0d68431f94 msgpack: blender::float3: packer implementation 2021-07-26 23:56:31 +05:30
609435bcff msgpack: blender::float2: packer implementation 2021-07-26 23:55:59 +05:30
f0f29153d2 msgpack: blender::generational_arena::Arena: packer implementation 2021-07-26 18:10:14 +05:30
a4350ffaec msgpack: blender::Vector<T>: packer implementation
Implement msgpack's packer for blender::Vector<T>.
2021-07-26 17:54:39 +05:30
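
A hedged sketch of what such a packer can look like with msgpack-c's
adaptor mechanism (the actual implementation in this commit may
differ): specialize msgpack::adaptor::pack for blender::Vector<T> and
emit the contents as a msgpack array.

    #include <msgpack.hpp>

    #include "BLI_vector.hh" /* Blender's blender::Vector<T> */

    namespace msgpack {
    MSGPACK_API_VERSION_NAMESPACE(MSGPACK_DEFAULT_API_NS)
    {
    namespace adaptor {

    template<typename T> struct pack<blender::Vector<T>> {
      template<typename Stream>
      packer<Stream> &operator()(packer<Stream> &o,
                                 const blender::Vector<T> &v) const
      {
        /* A Vector serializes naturally as an array of its items. */
        o.pack_array(static_cast<uint32_t>(v.size()));
        for (const T &item : v) {
          o.pack(item);
        }
        return o;
      }
    };

    }  // namespace adaptor
    }  // MSGPACK_API_VERSION_NAMESPACE
    }  // namespace msgpack
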
c277047cb3 extern: msgpack: add this header only library
This will allow serialization of the adaptive cloth mesh structure
more easily.

From the website: https://msgpack.org
MessagePack is an efficient binary serialization format. It lets you
exchange data among multiple languages like JSON. But it's faster and
smaller. Small integers are encoded into a single byte, and typical
short strings require only one extra byte in addition to the strings
themselves.
2021-07-26 16:49:06 +05:30
1479175a9a adaptive_cloth: AdaptiveMesh: split_edges: run flip_edges()
On the faces that were added by the split edge operation, run the
flip_edges() function.
2021-07-22 17:41:29 +05:30
11ceeef00c adaptive_cloth: AdaptiveMesh: flip_edges()
Flips edges of the `active_faces` if needed.

* Get the maximally independent set of flippable `AdaptiveEdge`s.
* Flip each edge in this set and update `active_faces` with the faces
affected by this operation.
* Repeat until the set has 0 flippable `AdaptiveEdge`s
2021-07-22 17:39:03 +05:30
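
A sketch of that loop; all types and method names here are stand-ins,
not the real API. The set of flippable edges is re-computed each
round, and a loop cap (mentioned in a commit above) guards against
flip cycles that never terminate.

    template<typename AdaptiveMeshT, typename FaceSetT>
    static void flip_edges(AdaptiveMeshT &mesh, FaceSetT &active_faces)
    {
      const int max_loops = 100; /* guard against endless flip cycles */
      for (int i = 0; i < max_loops; i++) {
        auto flippable = mesh.get_flippable_edge_indices_set(active_faces);
        if (flippable.empty()) {
          break; /* nothing left to flip */
        }
        for (const auto &edge_index : flippable) {
          /* A previous flip may have invalidated this edge. */
          if (!mesh.is_edge_flippable(edge_index)) {
            continue;
          }
          auto diff = mesh.flip_edge_triangulate(edge_index);
          mesh.compute_info(diff);
          /* Faces touched by the flip stay active for the next round. */
          active_faces.update_from(diff);
        }
      }
    }
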
f81d1d4ad6 adaptive_cloth: AdaptiveMesh: get_flippable_edge_indices_set()
Similar to get_splittable_edge_indices_set(), but it checks the
flippability of the `AdaptiveEdge` instead of the "size" of the
`AdaptiveEdge`.
2021-07-22 17:37:31 +05:30
dcb069ed25 adaptive_cloth: AdaptiveMesh: is_edge_flippable_anisotropic_aware()
Function to check whether the given `Edge` is flippable by adding an
anisotropic-aware check.

Based on reference [1].
2021-07-22 17:36:56 +05:30
bec5b06b55 adaptive_cloth: AdaptiveMesh: VertFlags: VERT_SELECTED_FOR_SPLIT
Rename `VERT_SELECTED` to `VERT_SELECTED_FOR_SPLIT` for a more
explicit meaning.
2021-07-22 17:31:37 +05:30
249115583c adaptive_cloth: AdaptiveMesh: Sizing: overload add and mul ops
Overload the operators to allow operations on `Sizing` itself. This
will help in case `Sizing` needs to change in the future, maybe for
adaptive tearing support.
2021-07-22 17:27:11 +05:30
d71ee2d5e3 adaptive_cloth: Mesh: does_element_exist()
Set of functions to test if the element still exists in the `Mesh`
when given its index.
2021-07-22 17:26:26 +05:30
1ab96c45a0 adaptive_cloth: Mesh: checked and unchecked get_other_vert_index()
`get_checked_other_vert_index()` a new checked version of
`get_other_vert_index()`

Update `get_other_vert_index()` to have better checks
2021-07-22 17:25:06 +05:30
612ebd73f9 adaptive_cloth: Mesh: Edge: get_checked_verts() 2021-07-22 17:24:33 +05:30
059fe4ff63 adaptive_cloth: Mesh: make get_edge_indices_of_face() as protected 2021-07-22 17:23:00 +05:30
151a8a3894 adaptive_cloth: Mesh: is_edge_loose_or_on_seam_or_boundary() 2021-07-22 11:39:13 +05:30
87e3b5a52b adaptive_cloth: Mesh: is_edge_on_seam() 2021-07-22 11:38:45 +05:30
d2f7b1a1e8 adaptive_cloth: Mesh: is_edge_loose() 2021-07-22 11:38:26 +05:30
358030554a modifier: adaptive_remesh: UI for static remeshing of the mesh
Simple UI to allow `adaptive_remesh()`, through the simple wrapper, to
be called and controlled easily.

This should help with testing, since the cloth simulator, when
applied, uses the (point) cached `Mesh` instead of running the
simulation, which would call `adaptive_remesh()` for us. This will
need to be fixed later, but it is an easier approach for continuing
development of the adaptive remesher.
2021-07-21 23:27:17 +05:30
b5f262b3e0 adaptive_cloth: adaptive_remesh: create an "Empty" remesh function
It is useful to call `adaptive_remesh()` outside of the cloth
simulator, mainly for testing purposes. Because `adaptive_remesh()` is
a template and is not defined in the header file (which isn't possible
in this case for other reasons), a template instantiation needs to
happen wherever the function is called. This leads to problems when
`adaptive_remesh()` is called from some other file.

The easiest fix was to create a new wrapper function without templates
and call that from other files.
2021-07-21 23:23:22 +05:30
c080ca1b37 modifier: adaptive_remesh: use dna defaults through initData
Adding defaults for a new modifier is not as simple as just adding the
new structure to the defaults macro file. The defaults need to be
copied over to the ModifierData to actually take effect.
2021-07-21 23:19:59 +05:30
1780f2aaf8 adaptive_cloth: create an abstraction for adaptive_remesh
The adaptive remeshing algorithm is not dependent on the cloth
modifier, so it makes sense to abstract it at least to the point of
not needing all of the `ClothVertex` data.

This will help with testing of the implementation by adding it to the
`AdaptiveRemesh` modifier.
2021-07-21 21:55:59 +05:30
5567ceea92 adaptive_cloth: AdaptiveMesh: set edge size after splitting edge
Need to set the edge size for all edges affected by the split operation.
2021-07-21 18:30:21 +05:30
642288268c adaptive_cloth: MeshDiff: operator<< for ostream 2021-07-21 18:18:17 +05:30
fbef3efcf9 adaptive_cloth: fix: Mesh: split edge: edge vert references fault
References between the `Edge`s and `Vert`s were not created.
2021-07-21 17:51:06 +05:30
1549da9678 adaptive_cloth: Add UI for size_min of static_remesh 2021-07-19 22:25:09 +05:30
6f39ef31ea adaptive_cloth: implement and run static_remesh()
Sets the same "sizing" for all the `Vert`s of the `AdaptiveMesh` and
then runs `mesh.split_edges()`.

TODO(ish): Add mesh.collapse_edges() within static_remesh()
2021-07-19 22:05:53 +05:30
d7531b6a9a adaptive_cloth: AdaptiveMesh: split_edges()
Splits edges whose "size" is greater than 1.0

TODO(ish): Need to flip edges of those faces that have been affected
by the split edge operation.
2021-07-19 22:05:23 +05:30
c4a08df49e adaptive_cloth: AdaptiveMesh: get_splittable_edge_indices_set()
Gets the maximal independent set of splittable edge indices in the
`AdaptiveMesh`.
2021-07-19 22:01:38 +05:30
790f69d122 adaptive_cloth: AdaptiveMesh: VertData: add flag
Add flag to be able to tag the `Vert`.
2021-07-19 22:00:19 +05:30
8801d58ce8 adaptive_cloth: AdaptiveMesh: extra data interp
Add interp() function to the extra data of `AdaptiveMesh`
2021-07-19 21:59:13 +05:30
4d3cfce00f bli: float2x2: linear_blend()
Linearly blend between the 2 matrices.
2021-07-19 21:55:32 +05:30
460fea65ad adaptive_cloth: Mesh elements: get_self_index() 2021-07-19 21:55:00 +05:30
a21cb0b63a adaptive_cloth: AdaptiveMesh: set_edge_sizes()
Based on the `Sizing` stored in the `Vert`s of the `Edge`s, store the
"size" of the `Edge` in its extra data.
2021-07-18 21:39:25 +05:30
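
A sketch of the metric in the spirit of the adaptive-remeshing papers
this work follows; the exact expression in the code may differ, and
the types are simplified stand-ins. The squared "size" of an edge
measures its UV vector through the average of its endpoint sizing
tensors; a value above 1.0 marks the edge as too long for the desired
sizing (the split threshold used in the commits above).

    #include <array>

    using float2 = std::array<float, 2>;

    struct Sizing {
      float m[2][2];
    };

    /* size^2(edge) = du^T * (0.5 * (M1 + M2)) * du, where du is the
     * UV difference of the edge's verts and M1, M2 their sizings. */
    static float edge_size_squared(const float2 &uv1, const float2 &uv2,
                                   const Sizing &s1, const Sizing &s2)
    {
      const float du[2] = {uv1[0] - uv2[0], uv1[1] - uv2[1]};
      float size_sq = 0.0f;
      for (int i = 0; i < 2; i++) {
        for (int j = 0; j < 2; j++) {
          size_sq += du[i] * 0.5f * (s1.m[i][j] + s2.m[i][j]) * du[j];
        }
      }
      return size_sq;
    }
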
1bfedda596 adaptive_cloth: Mesh: change visibility of many utility functions
Make them protected instead of private.

Functions that get some element from the `Mesh` (maybe through other
elements) can be used by derived classes. It might make sense to make
these functions public directly instead of protected but will take
that decision later.
2021-07-18 21:34:01 +05:30
3ccc9cdbd4 adaptive_cloth: Vert: get_uv() and get_uv_mut() 2021-07-18 21:32:45 +05:30
d2244cb40b adaptive_cloth: Mesh: get extra data from elements
Simple functions to get extra data from elements.
It is possible to make `extra_data` public but this reduces
readability of the code. By having these functions, the intent is
obvious.
2021-07-18 21:29:30 +05:30
79a90776d2 adaptive_cloth: AdaptiveMesh: set and retrieve Nodes extra data
From the `Cloth::verts`, set the extra data in the `Mesh::nodes`.

Delete the `Cloth::verts` to not have duplicate data.

Allocate data for `Cloth::verts`, and set from the `Mesh::nodes`.
2021-07-16 23:40:47 +05:30
fee8367e4a adaptive_cloth: Mesh: get elements mutably
Useful to get the elements mutably.

This does make `Mesh` less "safe" since anyone can add or delete
elements without creating/deleting the necessary references between
the elements, but this shouldn't be too much of a problem because it
would lead to an instant forced crash when the `Mesh` is used for
anything.
2021-07-16 23:30:34 +05:30
d6b35a45a9 adaptive_cloth: BKE_cloth_remesh: convert to and from AdaptiveMesh 2021-07-16 22:14:05 +05:30
2c6d3ab678 adaptive_cloth: Sizing: get edge size squared 2021-07-16 22:06:11 +05:30
57e374f089 adaptive_cloth: VertData: initial implementation
Used to store extra `Vert` data in the `Mesh`.
2021-07-16 22:02:04 +05:30
4b94a2be22 adaptive_cloth: NodeData: initial implementation
Used to store extra `Node` data in the `Mesh`.
2021-07-16 21:56:41 +05:30
04cec203e3 adaptive_cloth: BKE_cloth_remesh() add to namespace blender::bke 2021-07-16 21:54:44 +05:30
b34235a2eb bli: float2x2: multiply matrix with float 2021-07-16 20:52:23 +05:30
c07dad40c9 bli: float2x2: add 2 matrices 2021-07-16 20:52:03 +05:30
581d46112a bli: float2x2: initial implementation 2021-07-16 14:58:50 +05:30
18e84a915a adaptive_cloth: Mesh: collapse edge: all v1 should point to n1
Sometimes, `n1` will still have verts attached to it, so it makes
sense to make those verts point to `n2` but keep their UV coordinates,
since attempting to merge verts not joined by edges can lead to a lot
of issues. So essentially, extra `v1`s become `v2`s.
2021-07-15 11:31:18 +05:30
a027f4ea4b adaptive_cloth: fix: Mesh: split edge: reference is broken
The reference to the edge is broken because adding or deleting an edge
can realloc memory, which means all the references become invalid. So
the references just need to be recreated after an addition or deletion
of an element.

Would really appreciate a borrow checker to handle such
instances.
2021-07-15 09:38:45 +05:30
a1a8608fce adaptive_cloth: Mesh: flip_edge: do operation by deleting and adding
Sometimes, the user needs to know which elements have been affected;
for this, delete the old elements and add new ones. In theory this
shouldn't have any effect on performance, because the arena should add
the elements back in the slots where they were deleted, but this
hasn't been tested.
2021-07-14 18:09:53 +05:30
1b5a2cf325 adaptive_cloth: Mesh: is_edge_flippable(): works with across_seams
It is useful to know if the "3D edge" is flippable or not. This is
essentially done by going over all the `Edge`s formed by the `Node`s
of the given `Edge` and working with all those faces as a whole. Here
we also need to ensure that the edge is not on a seam. It actually
might be possible to optimize this and remove the boundary check
entirely.
2021-07-14 13:03:12 +05:30
2f8a2fad29 adaptive_cloth: Mesh: is_edge_on_boundary()
Checks if the given edge lies on the boundary of the mesh.
2021-07-14 12:30:42 +05:30
79cad4c0b6 adaptive_cloth: Mesh: flip_edge_triangulate() initial implementation
Need to implement across seams support.
2021-07-14 10:46:35 +05:30
5bb6a05d7a adaptive_cloth: Mesh: collapse_edge_triangulate(): across_seams option 2021-07-13 18:49:46 +05:30
0913090091 adaptive_cloth: fix: Mesh: collapse_edge_triangulate(): delete node only if no other vert refers to it 2021-07-13 11:55:26 +05:30
c5ed6fab1f adaptive_cloth: fix: Mesh: collapse_edge_triangulate() invalid faces generated
The edge between v1 and ov should be removed only if the edge doesn't have any linked faces. If it has any linked faces, removing it will make those faces invalid and cause problems down the line: for the face to have (v2, ov, vx) becomes impossible since (v1, ov) has been removed, which is wrong.
2021-07-13 10:36:06 +05:30
84a0df0776 adaptive_cloth: Mesh: collapse_edge_triangulate() version 2 2021-07-12 19:17:41 +05:30
316ddc19e2 adaptive_cloth: fix: Mesh: split_edge_triangulate() crash when edges are realloced
Took the new edge index by reference instead of by copy, leading to wrong memory access when the edges are reallocated by the arena.
2021-07-11 22:21:11 +05:30
ac431c17f4 adaptive_cloth: Mesh: make split_edge_triangulate() work with new deletion methods 2021-07-11 18:05:59 +05:30
f1f9559345 adaptive_cloth: Mesh: delink face edges 2021-07-11 18:05:31 +05:30
eb010b33a2 adaptive_cloth: Mesh: const get checked elements 2021-07-11 18:03:09 +05:30
02fab35860 adaptive_cloth: Mesh: better deletion of the elements
This new way of deleting elements ensures that no links can be missed. The deletion of the elements has to happen in a certain order, leading to fewer errors.

This is API breaking, so all functions referring to deletion of the elements have to be updated. This includes Mesh::split_edge_triangulate() and Mesh::collapse_edge_triangulate().
2021-07-11 13:32:10 +05:30
71c8a2d6bc modifier: adaptive_remesh: select split or collapse edge operation 2021-07-10 17:15:57 +05:30
740f622a03 adaptive_cloth: Mesh: collapse_edge_triangulate() first draft 2021-07-10 17:13:47 +05:30
4d7b919703 adaptive_cloth: Mesh: write to MeshIO: allow gaps in the Arena(s)
There can be gaps in the `Arena`s introduced by removing elements in the middle of the `Arena`. This can lead to wrong indexing when writing the positions and the uvs. For this, create a `Map` between the element's arena index and its true index in the vectors created for them.
2021-07-09 11:28:42 +05:30
a33623b553 adaptive_cloth: Mesh: split_edge_triangulate(): support for across seams
In the `Mesh` structure, `Edge` stores a tuple of `Vert`s, which means
it is based on the UV coordinates. Although this is quite useful,
sometimes it is important to have the mesh operation happen across the
UV seams as well. So consider all the edges formed by the `Vert`s
stored in the `Node`s of the given `Edge`'s `Vert`s. Confusing, yes,
but simple.
2021-07-08 23:48:15 +05:30
daec1377a9 adaptive_cloth: Mesh: append non duplicate vert indices in node.verts 2021-07-08 18:25:13 +05:30
0bf9f99834 adaptive_cloth: Mesh: split edge: remove completed todo comment 2021-07-08 17:47:57 +05:30
ef439454c7 adaptive_cloth: fix: Mesh: delete_face(): wrong starting index 2021-07-08 13:01:23 +05:30
8ceb995670 adaptive_cloth: MeshIO: DNA Mesh: combine repeating UVs 2021-07-08 12:24:33 +05:30
6ff81cd658 modifier: adaptive_remesh: Run split edge of specified edge index
This is for testing the cloth_remesh.cc::Mesh::split_edge_triangulate() function.
2021-07-07 19:24:00 +05:30
3b5cb1d54b adaptive_cloth: Mesh: split_edge_triangulate() 2021-07-07 19:20:21 +05:30
2871ad0f3b adaptive_cloth: fix: Mesh: delete elements bunch of fixes 2021-07-07 19:19:22 +05:30
8926ac9c6b adaptive_cloth: Mesh: get_checked_other_vert() 2021-07-07 19:18:34 +05:30
2bc5186aa3 adaptive_cloth: Mesh: get_checked_node_of_vert() 2021-07-07 19:18:04 +05:30
ad6c40022e adaptive_cloth: Mesh: get_checked_verts_of_edge() 2021-07-07 19:17:22 +05:30
f547143533 adaptive_cloth: Mesh: get_checked_element() 2021-07-07 19:15:15 +05:30
5143a10288 adaptive_cloth: fix: add_empty_interp_element() crash sometimes
Since the elements have a reference to the `Arena`, if the `Arena` changes in size, which can happen because of `add_empty_element()`, then the references can become invalid. Realloc can create memory space elsewhere and copy the contents; the references to the previous memory space are not updated, which makes them invalid.

There are three ways to fix this:
 - Create a copy and work on the copy, this is inefficient
 - Get a reference to that element again, this is efficient but may not be possible always
 - Don't use the reference after an operation that can lead to the invalidation of the reference, this is the most efficient way but may not be possible always

Decided to go with the third way in this case.

Side note: Having a borrow checker completely eliminates such problems. Rust should be the way forward.
2021-07-07 17:54:44 +05:30
3e4a201eae adaptive_cloth: Face: add has_vert_index(), has_edge() 2021-07-07 17:52:15 +05:30
fd467923b9 adaptive_cloth: Edge: make has_vert() const, add get_verts() 2021-07-07 17:50:02 +05:30
3d95f774c8 bli: generational_arena: fix: begin() points to EntryNoExist
When the first element of `Arena::data` is `EntryNoExist`, must iterate over the vector until the first `EntryExist` is found.

For `Arena::end()` this shouldn't be necessary because it should point to the last element + 1 anyway, so the `operator*` on this is meaningless.
2021-07-07 17:40:41 +05:30
128ad36b06 adaptive_cloth: Mesh: delete elements 2021-07-05 21:24:54 +05:30
ab8f971656 adaptive_cloth: Mesh: add empty element with interpolation 2021-07-05 21:24:10 +05:30
bf0438bd46 adaptive_cloth: Mesh: get other vert when given edge and face indices 2021-07-05 21:21:54 +05:30
53309357df adaptive_cloth: MeshDiff: initial 2021-07-04 17:46:34 +05:30
5d0c79ec8a adaptive_cloth: tests: Mesh to MeshIO, include only valid normals and uvs 2021-07-03 20:29:11 +05:30
98ac359545 adaptive_cloth: tests: Mesh: write loose edges 2021-07-03 19:00:08 +05:30
3157770deb adaptive_cloth: Mesh: Write: support loose edges 2021-07-03 18:59:39 +05:30
6a0ed670fd bli: generational_arena: fix: compile error when gtests not enabled 2021-07-03 18:13:33 +05:30
04ca6418f9 adaptive_cloth: Mesh: support loose edges (read only) 2021-07-03 15:03:04 +05:30
90ea4d03ed adaptive_cloth: tests: MeshIO: write DNA Mesh loose edges 2021-07-02 23:23:43 +05:30
86ce842ff3 adaptive_cloth: MeshIO: support line indices for write to DNA Mesh 2021-07-02 23:23:25 +05:30
198f8d5e99 adaptive_cloth: tests: MeshIO: read DNA Mesh loose edges 2021-07-02 20:35:03 +05:30
efacb6fda0 adaptive_cloth: tests: MeshIO: write OBJ loose edges 2021-07-02 20:32:17 +05:30
28fca5d58c adaptive_cloth: tests: MeshIO: read OBJ loose edges 2021-07-02 20:31:51 +05:30
e2eec91e40 adaptive_cloth: MeshIO: support line indices for read DNA Mesh and write OBJ 2021-07-02 20:29:46 +05:30
2fff595dbc cloth: cloth_to_object: create copy only if needed 2021-07-02 15:46:37 +05:30
52ea334923 cloth: create copy of mesh only when needed 2021-07-02 13:33:51 +05:30
440b486603 adaptive_cloth: MeshIO: write DNA Mesh 2021-07-02 12:49:33 +05:30
08d33d0203 adaptive_cloth: MeshIO: test: ReadDNAMesh: finish up 2021-07-01 19:44:45 +05:30
bf82a15db7 adaptive_cloth: fix: MeshIO: write obj normals started with 'v ' instead of 'vn ' 2021-07-01 18:55:36 +05:30
ed500956c3 adaptive_cloth: MeshIO: read DNA Mesh
The conversion was easy enough, but the test case for it is extremely difficult.

Creating `Mesh` is not possible without initializing `idtype` using `BKE_idtype_init()` (there should actually be a check for this, at least in debug mode).

Converting this `Mesh` to `BMesh` is simple without any hassles.

Using the `BMesh`, create a cube. It turns out that is also simple once `BMO_op_callf()` is known about. The previous approach was to create a copy of the code in `bmo_primitive.c` and use that. This also leads to problems, because without the `BMOperator` functions pre-applied on the `BMesh`, `BMO_face_flag_enable()` crashes without any proper indication of what could have caused the error. The simplest way to fix that is to just remove the flag check. Then found out about `BMO_op_callf()`; this really needs documentation since it has variadic arguments and magically calls the required operator (printf style).

Now that we have the cube in the `BMesh`, converting it back to `Mesh` is another massive problem. There are many functions that can do this; `BM_mesh_bm_to_me()` and `BKE_mesh_from_bmesh_eval_nomain()` are the most appropriate. These functions write the CustomData blocks to `Mesh`, but before doing so, set `mvert->co` and such. For some reason, the `BMesh` has not updated the blocks, so `mvert->co` just has (0.0, 0.0, 0.0). After spending multiple hours on this, found one way to fix it: copy the `BMesh`, which initializes the blocks in the new `BMesh`, and that one can be used in `BM_mesh_bm_to_me()` to get the `Mesh` that is needed.
2021-07-01 18:34:51 +05:30
95d9b5810b adaptive_cloth: tests: create separate executable for faster compilation 2021-07-01 10:44:51 +05:30
a218df7918 adaptive_cloth: MeshIO: FileType to IOType 2021-06-30 18:25:16 +05:30
a2b0c06663 adaptive_cloth: update references 2021-06-30 10:30:39 +05:30
ea058d61a4 bli: generational_arena: add references 2021-06-30 10:30:15 +05:30
02d8f6a646 adaptive_cloth: Mesh: write()
Creates a MeshIO object. There are limitations to this as of right
now, check the inline comments for details (there is a todo); the
basic problem is when the arena has `EntryNoExist` entries in between
elements. This will cause major problems.
2021-06-24 20:19:55 +05:30
0126abfd6e adaptive_cloth: MeshIO: write obj 2021-06-24 18:14:41 +05:30
72ea9b9e06 adaptive_cloth: MeshReader to MeshIO 2021-06-24 16:50:29 +05:30
e28bb2266b adaptive_cloth: mesh: fix failing test Mesh_Read
Turns out `auto` doesn't deduce a reference even if the function is returning a reference; need to use `auto &`.
2021-06-24 13:35:57 +05:30
8bcd17be05 adaptive_cloth: better debug print 2021-06-24 13:28:50 +05:30
f60100b8ca adaptive_cloth: operator << overload for debug printing 2021-06-24 12:05:19 +05:30
b9fa792e08 bli: generational_arena: ConstIterator
ConstIterator is an iterator that cannot change the value it points to.
2021-06-24 11:02:00 +05:30
acdfde3598 adaptive_cloth: test: mesh: read(), fails, need to find fix 2021-06-24 10:30:00 +05:30
3b28d1486e adaptive_cloth: mesh: abstract out read over MeshReader and fix compiler errors 2021-06-23 13:32:41 +05:30
4796acef66 adaptive_cloth: mesh: read obj to Mesh 2021-06-23 12:17:34 +05:30
255edfa5e1 adaptive_cloth: fix: mesh_reader: obj: 'vt' considered under 'v'
Consider
v 1.0 1.0 1.0
vt 1.0 1.0

Here, both lines start with "v"; hence, if the check is "line starts
with 'v'", both satisfy it, which is not correct.

The correct check is "line starts with 'v '" (note the trailing space).
2021-06-22 23:32:25 +05:30
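
A sketch of the corrected check (names are illustrative): match the
record prefix including the trailing space so that "v " does not also
match "vt" or "vn" lines.

    #include <string>

    static bool line_starts_with(const std::string &line, const char *prefix)
    {
      /* rfind anchored at position 0 is the standard prefix test. */
      return line.rfind(prefix, 0) == 0;
    }

    /* line_starts_with("vt 1.0 1.0", "v ")    -> false
     * line_starts_with("v 1.0 1.0 1.0", "v ") -> true */
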
fed0cdf3cb adaptive_cloth: test: MeshReader simple cube read 2021-06-22 23:31:13 +05:30
2446e978df adaptive_cloth: mesh_reader: obj parse 2021-06-22 21:49:49 +05:30
32441a7ae5 adaptive_cloth: mesh_reader: initial setup 2021-06-22 17:43:38 +05:30
9457c541a5 adaptive_cloth: mesh: add empty elements 2021-06-22 12:25:40 +05:30
11ad8fa732 adaptive_cloth: mesh elements, better constructors 2021-06-22 12:25:04 +05:30
ff2db09f55 bli: generational_arena: add iterator support
Made as a bidirectional iterator since movement can be in both
directions. A random access iterator is not possible since there can
be gaps in between the elements.

Had a very annoying bug: using the iterator with the standard library
would throw an error "`iterator_category` not defined" even though it
was defined. Spent a lot of time to realize that `difference_type`
must also be defined, while other types within the iterator class are
optional. For some reason the compiler does not create the
`iterator_traits` correctly if `difference_type` is missing but throws
an error about `iterator_category`, which is extremely strange and
time consuming to debug :(
2021-06-21 23:39:56 +05:30
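
A sketch of the member types involved (the value type here is a
placeholder): std::iterator_traits is only generated when all five
members are present, so omitting `difference_type` silently empties
the traits, yet downstream errors point at `iterator_category`.

    #include <cstddef>
    #include <iterator>

    class Iterator {
     public:
      using difference_type = std::ptrdiff_t; /* the easy one to forget */
      using value_type = int;
      using pointer = int *;
      using reference = int &;
      using iterator_category = std::bidirectional_iterator_tag;
      /* ... operator*, operator++, operator-- etc. ... */
    };
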
476610dfae bli: generational_arena: insert_with() and try_insert_with() 2021-06-19 23:54:06 +05:30
b58ba3aa47 bli: generational_arena: use better constructor and better if-else 2021-06-19 23:53:38 +05:30
d5f8d9380f adaptive_cloth: mesh: initial class setup 2021-06-18 14:52:22 +05:30
30b9dc6a01 bli: generational_arena: better code styling 2021-06-17 13:36:16 +05:30
7f194771f7 bli: generational_arena: test for ensuring next_free list is correct 2021-06-16 19:17:34 +05:30
e48607a0c0 bli: generational_arena: extra insert() test case with fix
When the capacity of the `Arena` is 0, need to handle it specially.
2021-06-16 14:53:11 +05:30
0ba4520162 bli: generational_arena: remove() with test 2021-06-16 14:21:15 +05:30
7de7ac55d2 bli: generational_arena: get related tests and respective fixes 2021-06-16 13:56:28 +05:30
0620fbbb94 bli: generational_arena: tests, size(), capacity(), fix some errors 2021-06-16 12:45:39 +05:30
805e2ccb07 bli: generational_arena: get(), get_no_gen(), get_no_gen_index() 2021-06-15 22:18:33 +05:30
20497c448c bli: generational_arena: insert() 2021-06-15 21:42:33 +05:30
bc62878f98 bli: generational_arena: constructors and reserve() 2021-06-15 12:48:10 +05:30
1deefd940f bli: generational_arena: doc: how it works 2021-06-15 12:01:57 +05:30
1e50ce9284 bli: generational_arena: fix: compile errors 2021-06-15 11:40:38 +05:30
ebfeab8fde bli: generational_arena: class declaration setup 2021-06-14 22:02:01 +05:30
07b52c9fb6 adaptive_cloth: run BKE_cloth_remesh() when remesh flag is on 2021-06-11 21:19:39 +05:30
ad5d6448f2 adaptive_cloth: ui: add remesh option within shape 2021-06-11 19:42:55 +05:30
87ade2a279 adaptive_cloth: do_step_cloth() should update the input mesh 2021-06-11 13:22:52 +05:30
d4be1bfb81 adaptive_cloth: clothModifier_do() returns a new Mesh if simulation was successful
This is an important change that allows for remeshing operations in
the cloth simulator. `clothModifier_do()` returns the resulting mesh
if it was successful, otherwise NULL.
2021-06-11 12:19:52 +05:30
57c1f52d2e adaptive_cloth: initial conversion of cloth modifier to modifyMesh
from `deformVerts`

Adaptive remeshing requires the mesh connectivity to change, so the
modifier should be of `eModifierTypeType_Nonconstructive` type and use
the `modifyMesh()` function instead of the `deformVerts()` function.

The next step is to make the `clothModifier_do()` function take
advantage of the mesh given to it, since now that mesh can be edited
by it and returned directly.
2021-06-10 12:06:03 +05:30
c31f095edf adaptive_remesh: create a new modifier 2021-06-09 10:11:24 +05:30
6552 changed files with 446614 additions and 597907 deletions

View File

@@ -180,7 +180,6 @@ ForEachMacros:
- CTX_DATA_BEGIN_WITH_ID
- DEG_OBJECT_ITER_BEGIN
- DEG_OBJECT_ITER_FOR_RENDER_ENGINE_BEGIN
- DRW_ENABLED_ENGINE_ITER
- DRIVER_TARGETS_LOOPER_BEGIN
- DRIVER_TARGETS_USED_LOOPER_BEGIN
- FOREACH_BASE_IN_EDIT_MODE_BEGIN
@@ -266,8 +265,4 @@ ForEachMacros:
- VECTOR_SET_SLOT_PROBING_BEGIN
StatementMacros:
- PyObject_HEAD
- PyObject_VAR_HEAD
MacroBlockBegin: "^BSDF_CLOSURE_CLASS_BEGIN$"
MacroBlockEnd: "^BSDF_CLOSURE_CLASS_END$"

View File

@@ -12,8 +12,6 @@ Checks: >
-readability-avoid-const-params-in-decls,
-readability-simplify-boolean-expr,
-readability-make-member-function-const,
-readability-suspicious-call-argument,
-readability-redundant-member-init,
-readability-misleading-indentation,
@@ -27,8 +25,6 @@ Checks: >
-bugprone-branch-clone,
-bugprone-macro-parentheses,
-bugprone-reserved-identifier,
-bugprone-easily-swappable-parameters,
-bugprone-implicit-widening-of-multiplication-result,
-bugprone-sizeof-expression,
-bugprone-integer-division,
@@ -44,8 +40,7 @@ Checks: >
-modernize-pass-by-value,
# Cannot be enabled yet, because using raw string literals in tests breaks
# the windows compiler currently.
-modernize-raw-string-literal,
-modernize-return-braced-init-list
-modernize-raw-string-literal
CheckOptions:
- key: modernize-use-default-member-init.UseAssignment

View File

@@ -1,5 +0,0 @@
This repository is only used as a mirror of git.blender.org. Blender development happens on
https://developer.blender.org.
To get started with contributing code, please see:
https://wiki.blender.org/wiki/Process/Contributing_Code

View File

@@ -30,7 +30,7 @@ if(${CMAKE_SOURCE_DIR} STREQUAL ${CMAKE_BINARY_DIR})
"CMake generation for blender is not allowed within the source directory!"
"\n Remove \"${CMAKE_SOURCE_DIR}/CMakeCache.txt\" and try again from another folder, e.g.:"
"\n "
"\n rm -rf CMakeCache.txt CMakeFiles"
"\n rm CMakeCache.txt"
"\n cd .."
"\n mkdir cmake-make"
"\n cd cmake-make"
@@ -110,10 +110,6 @@ if(POLICY CMP0074)
cmake_policy(SET CMP0074 NEW)
endif()
# Install CODE|SCRIPT allow the use of generator expressions.
if(POLICY CMP0087)
cmake_policy(SET CMP0087 NEW)
endif()
#-----------------------------------------------------------------------------
# Load some macros.
include(build_files/cmake/macros.cmake)
@@ -156,15 +152,6 @@ get_blender_version()
option(WITH_BLENDER "Build blender (disable to build only the blender player)" ON)
mark_as_advanced(WITH_BLENDER)
if(APPLE)
# Currently this causes a build error linking, disable.
set(WITH_BLENDER_THUMBNAILER OFF)
elseif(WIN32)
option(WITH_BLENDER_THUMBNAILER "Build \"BlendThumb.dll\" helper for Windows explorer integration" ON)
else()
option(WITH_BLENDER_THUMBNAILER "Build \"blender-thumbnailer\" thumbnail extraction utility" ON)
endif()
option(WITH_INTERNATIONAL "Enable I18N (International fonts and text)" ON)
option(WITH_PYTHON "Enable Embedded Python API (only disable for development)" ON)
@@ -187,13 +174,6 @@ mark_as_advanced(CPACK_OVERRIDE_PACKAGENAME)
mark_as_advanced(BUILDINFO_OVERRIDE_DATE)
mark_as_advanced(BUILDINFO_OVERRIDE_TIME)
if(${CMAKE_VERSION} VERSION_GREATER_EQUAL "3.16")
option(WITH_UNITY_BUILD "Enable unity build for modules that support it to improve compile times" ON)
mark_as_advanced(WITH_UNITY_BUILD)
else()
set(WITH_UNITY_BUILD OFF)
endif()
option(WITH_IK_ITASC "Enable ITASC IK solver (only disable for development & for incompatible C++ compilers)" ON)
option(WITH_IK_SOLVER "Enable Legacy IK solver (only disable for development)" ON)
option(WITH_FFTW3 "Enable FFTW3 support (Used for smoke, ocean sim, and audio effects)" ON)
@@ -369,7 +349,7 @@ mark_as_advanced(WITH_SYSTEM_GLOG)
option(WITH_FREESTYLE "Enable Freestyle (advanced edges rendering)" ON)
# Misc
if(WIN32 OR APPLE)
if(WIN32)
option(WITH_INPUT_IME "Enable Input Method Editor (IME) for complex Asian character input" ON)
endif()
option(WITH_INPUT_NDOF "Enable NDOF input devices (SpaceNavigator and friends)" ON)
@@ -404,73 +384,45 @@ if(WITH_PYTHON_INSTALL)
set(PYTHON_REQUESTS_PATH "" CACHE PATH "Path to python site-packages or dist-packages containing 'requests' module")
mark_as_advanced(PYTHON_REQUESTS_PATH)
endif()
option(WITH_PYTHON_INSTALL_ZSTANDARD "Copy zstandard into the blender install folder" ON)
set(PYTHON_ZSTANDARD_PATH "" CACHE PATH "Path to python site-packages or dist-packages containing 'zstandard' module")
mark_as_advanced(PYTHON_ZSTANDARD_PATH)
endif()
option(WITH_CPU_SIMD "Enable SIMD instruction if they're detected on the host machine" ON)
mark_as_advanced(WITH_CPU_SIMD)
# Cycles
option(WITH_CYCLES "Enable Cycles Render Engine" ON)
option(WITH_CYCLES_OSL "Build Cycles with OpenShadingLanguage support" ON)
option(WITH_CYCLES_EMBREE "Build Cycles with Embree support" ON)
option(WITH_CYCLES_LOGGING "Build Cycles with logging support" ON)
option(WITH_CYCLES_DEBUG "Build Cycles with options useful for debugging (e.g., MIS)" OFF)
option(WITH_CYCLES_STANDALONE "Build Cycles standalone application" OFF)
option(WITH_CYCLES_STANDALONE_GUI "Build Cycles standalone with GUI" OFF)
option(WITH_CYCLES_DEBUG_NAN "Build Cycles with additional asserts for detecting NaNs and invalid values" OFF)
option(WITH_CYCLES_NATIVE_ONLY "Build Cycles with native kernel only (which fits current CPU, use for development only)" OFF)
option(WITH_CYCLES_KERNEL_ASAN "Build Cycles kernels with address sanitizer when WITH_COMPILER_ASAN is on, even if it's very slow" OFF)
set(CYCLES_TEST_DEVICES CPU CACHE STRING "Run regression tests on the specified device types (CPU CUDA OPTIX HIP)" )
option(WITH_CYCLES "Enable Cycles Render Engine" ON)
option(WITH_CYCLES_STANDALONE "Build Cycles standalone application" OFF)
option(WITH_CYCLES_STANDALONE_GUI "Build Cycles standalone with GUI" OFF)
option(WITH_CYCLES_OSL "Build Cycles with OSL support" ON)
option(WITH_CYCLES_EMBREE "Build Cycles with Embree support" ON)
option(WITH_CYCLES_CUDA_BINARIES "Build Cycles CUDA binaries" OFF)
option(WITH_CYCLES_CUBIN_COMPILER "Build cubins with nvrtc based compiler instead of nvcc" OFF)
option(WITH_CYCLES_CUDA_BUILD_SERIAL "Build cubins one after another (useful on machines with limited RAM)" OFF)
mark_as_advanced(WITH_CYCLES_CUDA_BUILD_SERIAL)
set(CYCLES_TEST_DEVICES CPU CACHE STRING "Run regression tests on the specified device types (CPU CUDA OPTIX OPENCL)" )
set(CYCLES_CUDA_BINARIES_ARCH sm_30 sm_35 sm_37 sm_50 sm_52 sm_60 sm_61 sm_70 sm_75 sm_86 compute_75 CACHE STRING "CUDA architectures to build binaries for")
mark_as_advanced(CYCLES_CUDA_BINARIES_ARCH)
unset(PLATFORM_DEFAULT)
option(WITH_CYCLES_LOGGING "Build Cycles with logging support" ON)
option(WITH_CYCLES_DEBUG "Build Cycles with extra debug capabilities" OFF)
option(WITH_CYCLES_NATIVE_ONLY "Build Cycles with native kernel only (which fits current CPU, use for development only)" OFF)
option(WITH_CYCLES_KERNEL_ASAN "Build Cycles kernels with address sanitizer when WITH_COMPILER_ASAN is on, even if it's very slow" OFF)
mark_as_advanced(WITH_CYCLES_KERNEL_ASAN)
mark_as_advanced(WITH_CYCLES_CUBIN_COMPILER)
mark_as_advanced(WITH_CYCLES_LOGGING)
mark_as_advanced(WITH_CYCLES_DEBUG_NAN)
mark_as_advanced(WITH_CYCLES_DEBUG)
mark_as_advanced(WITH_CYCLES_NATIVE_ONLY)
# NVIDIA CUDA & OptiX
if(NOT APPLE)
option(WITH_CYCLES_DEVICE_CUDA "Enable Cycles NVIDIA CUDA compute support" ON)
option(WITH_CYCLES_DEVICE_OPTIX "Enable Cycles NVIDIA OptiX support" ON)
mark_as_advanced(WITH_CYCLES_DEVICE_CUDA)
option(WITH_CYCLES_DEVICE_CUDA "Enable Cycles CUDA compute support" ON)
option(WITH_CYCLES_DEVICE_OPTIX "Enable Cycles OptiX support" OFF)
option(WITH_CYCLES_DEVICE_OPENCL "Enable Cycles OpenCL compute support" ON)
option(WITH_CYCLES_NETWORK "Enable Cycles compute over network support (EXPERIMENTAL and unfinished)" OFF)
mark_as_advanced(WITH_CYCLES_DEVICE_CUDA)
mark_as_advanced(WITH_CYCLES_DEVICE_OPENCL)
mark_as_advanced(WITH_CYCLES_NETWORK)
option(WITH_CYCLES_CUDA_BINARIES "Build Cycles NVIDIA CUDA binaries" OFF)
set(CYCLES_CUDA_BINARIES_ARCH sm_30 sm_35 sm_37 sm_50 sm_52 sm_60 sm_61 sm_70 sm_75 sm_86 compute_75 CACHE STRING "CUDA architectures to build binaries for")
option(WITH_CYCLES_CUBIN_COMPILER "Build cubins with nvrtc based compiler instead of nvcc" OFF)
option(WITH_CYCLES_CUDA_BUILD_SERIAL "Build cubins one after another (useful on machines with limited RAM)" OFF)
option(WITH_CUDA_DYNLOAD "Dynamically load CUDA libraries at runtime (for developers, makes cuda-gdb work)" ON)
mark_as_advanced(CYCLES_CUDA_BINARIES_ARCH)
mark_as_advanced(WITH_CYCLES_CUBIN_COMPILER)
mark_as_advanced(WITH_CYCLES_CUDA_BUILD_SERIAL)
mark_as_advanced(WITH_CUDA_DYNLOAD)
endif()
# AMD HIP
if(NOT APPLE)
if(WIN32)
option(WITH_CYCLES_DEVICE_HIP "Enable Cycles AMD HIP support" ON)
else()
option(WITH_CYCLES_DEVICE_HIP "Enable Cycles AMD HIP support" OFF)
endif()
option(WITH_CYCLES_HIP_BINARIES "Build Cycles AMD HIP binaries" OFF)
set(CYCLES_HIP_BINARIES_ARCH gfx1010 gfx1011 gfx1012 gfx1030 gfx1031 gfx1032 gfx1034 CACHE STRING "AMD HIP architectures to build binaries for")
mark_as_advanced(WITH_CYCLES_DEVICE_HIP)
mark_as_advanced(CYCLES_HIP_BINARIES_ARCH)
endif()
# Apple Metal
if(APPLE)
option(WITH_CYCLES_DEVICE_METAL "Enable Cycles Apple Metal compute support" ON)
endif()
# Draw Manager
option(WITH_DRAW_DEBUG "Add extra debug capabilities to Draw Manager" OFF)
mark_as_advanced(WITH_DRAW_DEBUG)
option(WITH_CUDA_DYNLOAD "Dynamically load CUDA libraries at runtime" ON)
mark_as_advanced(WITH_CUDA_DYNLOAD)
# LLVM
option(WITH_LLVM "Use LLVM" OFF)
@@ -511,10 +463,11 @@ if(WIN32)
endif()
# This should be turned off when Blender enters beta/rc/release
if("${BLENDER_VERSION_CYCLE}" STREQUAL "alpha")
set(WITH_EXPERIMENTAL_FEATURES ON)
else()
if("${BLENDER_VERSION_CYCLE}" STREQUAL "release" OR
"${BLENDER_VERSION_CYCLE}" STREQUAL "rc")
set(WITH_EXPERIMENTAL_FEATURES OFF)
else()
set(WITH_EXPERIMENTAL_FEATURES ON)
endif()
# Unit testing
@@ -536,14 +489,12 @@ option(WITH_OPENGL "When off limits visibility of the opengl header
option(WITH_GLEW_ES "Switches to experimental copy of GLEW that has support for OpenGL ES. (temporary option for development purposes)" OFF)
option(WITH_GL_EGL "Use the EGL OpenGL system library instead of the platform specific OpenGL system library (CGL, glX, or WGL)" OFF)
option(WITH_GL_PROFILE_ES20 "Support using OpenGL ES 2.0. (through either EGL or the AGL/WGL/XGL 'es20' profile)" OFF)
option(WITH_GPU_SHADER_BUILDER "Shader builder is a developer option enabling linting on GLSL during compilation" OFF)
mark_as_advanced(
WITH_OPENGL
WITH_GLEW_ES
WITH_GL_EGL
WITH_GL_PROFILE_ES20
WITH_GPU_SHADER_BUILDER
)
if(WIN32)
@@ -561,14 +512,12 @@ if(WIN32)
set(CPACK_INSTALL_PREFIX ${CMAKE_GENERIC_PROGRAM_FILES}/${})
endif()
# Compiler tool-chain.
if(UNIX AND NOT APPLE)
if(CMAKE_COMPILER_IS_GNUCC)
option(WITH_LINKER_GOLD "Use ld.gold linker which is usually faster than ld.bfd" ON)
mark_as_advanced(WITH_LINKER_GOLD)
option(WITH_LINKER_LLD "Use ld.lld linker which is usually faster than ld.gold" OFF)
mark_as_advanced(WITH_LINKER_LLD)
endif()
# Compiler toolchain
if(CMAKE_COMPILER_IS_GNUCC)
option(WITH_LINKER_GOLD "Use ld.gold linker which is usually faster than ld.bfd" ON)
mark_as_advanced(WITH_LINKER_GOLD)
option(WITH_LINKER_LLD "Use ld.lld linker which is usually faster than ld.gold" OFF)
mark_as_advanced(WITH_LINKER_LLD)
endif()
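WITH_LINKER_GOLD and WITH_LINKER_LLD are only declared here; the code that honors them lives elsewhere. The usual wiring appends the matching -fuse-ld flag to the link flags, roughly as follows (a sketch of the common pattern, not the exact code from this file):

if(WITH_LINKER_LLD)
  string(APPEND CMAKE_EXE_LINKER_FLAGS " -fuse-ld=lld")
elseif(WITH_LINKER_GOLD)
  string(APPEND CMAKE_EXE_LINKER_FLAGS " -fuse-ld=gold")
endif()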
option(WITH_COMPILER_ASAN "Build and link against address sanitizer (only for Debug & RelWithDebInfo targets)." OFF)
@@ -655,6 +604,12 @@ if(WIN32)
option(WITH_WINDOWS_FIND_MODULES "Use find_package to locate libraries" OFF)
mark_as_advanced(WITH_WINDOWS_FIND_MODULES)
option(WINDOWS_USE_VISUAL_STUDIO_PROJECT_FOLDERS "Organize the visual studio projects according to source folder structure." ON)
mark_as_advanced(WINDOWS_USE_VISUAL_STUDIO_PROJECT_FOLDERS)
option(WINDOWS_USE_VISUAL_STUDIO_SOURCE_FOLDERS "Organize the source files in filters matching the source folders." ON)
mark_as_advanced(WINDOWS_USE_VISUAL_STUDIO_SOURCE_FOLDERS)
option(WINDOWS_PYTHON_DEBUG "Include the files needed for debugging python scripts with visual studio 2017+." OFF)
mark_as_advanced(WINDOWS_PYTHON_DEBUG)
@@ -667,23 +622,11 @@ if(WIN32)
option(WITH_WINDOWS_PDB "Generate a pdb file for client side stacktraces" ON)
mark_as_advanced(WITH_WINDOWS_PDB)
option(WITH_WINDOWS_STRIPPED_PDB "Use a stripped PDB file" ON)
option(WITH_WINDOWS_STRIPPED_PDB "Use a stripped PDB file" On)
mark_as_advanced(WITH_WINDOWS_STRIPPED_PDB)
endif()
if(WIN32 OR XCODE)
option(IDE_GROUP_SOURCES_IN_FOLDERS "Organize the source files in filters matching the source folders." ON)
mark_as_advanced(IDE_GROUP_SOURCES_IN_FOLDERS)
option(IDE_GROUP_PROJECTS_IN_FOLDERS "Organize the projects according to source folder structure." ON)
mark_as_advanced(IDE_GROUP_PROJECTS_IN_FOLDERS)
if (IDE_GROUP_PROJECTS_IN_FOLDERS)
set_property(GLOBAL PROPERTY USE_FOLDERS ON)
endif()
endif()
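The set_property(GLOBAL PROPERTY USE_FOLDERS ON) call above only enables IDE folder support globally; individual targets are grouped through their FOLDER property. A minimal illustration (the target, source file, and folder names are invented for the example):

# Group a target under a folder in the Visual Studio / Xcode project tree.
add_library(bf_example STATIC example.c)
set_target_properties(bf_example PROPERTIES FOLDER "source/example")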
if(UNIX)
# See WITH_WINDOWS_SCCACHE for Windows.
option(WITH_COMPILER_CCACHE "Use ccache to improve rebuild times (Works with Ninja, Makefiles and Xcode)" OFF)
@@ -860,7 +803,7 @@ if(WITH_AUDASPACE)
endif()
# Auto-enable CUDA dynload if toolkit is not found.
if(WITH_CYCLES AND WITH_CYCLES_DEVICE_CUDA AND NOT WITH_CUDA_DYNLOAD)
if(NOT WITH_CUDA_DYNLOAD)
find_package(CUDA)
if(NOT CUDA_FOUND)
message(STATUS "CUDA toolkit not found, using dynamic runtime loading of libraries (WITH_CUDA_DYNLOAD) instead")
@@ -868,11 +811,6 @@ if(WITH_CYCLES AND WITH_CYCLES_DEVICE_CUDA AND NOT WITH_CUDA_DYNLOAD)
endif()
endif()
if(WITH_CYCLES_DEVICE_HIP)
# Currently HIP must be dynamically loaded; this may change in future toolkits
set(WITH_HIP_DYNLOAD ON)
endif()
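The CUDA hunk above cuts off before the fallback itself; the usual completion of this pattern is to switch WITH_CUDA_DYNLOAD back on when the toolkit is missing. A sketch of the whole block under that assumption (the final set() line is not shown in the diff):

if(WITH_CYCLES AND WITH_CYCLES_DEVICE_CUDA AND NOT WITH_CUDA_DYNLOAD)
  find_package(CUDA)
  if(NOT CUDA_FOUND)
    message(STATUS "CUDA toolkit not found, using dynamic runtime loading of libraries (WITH_CUDA_DYNLOAD) instead")
    # Assumed fallback, elided by the hunk boundary above:
    set(WITH_CUDA_DYNLOAD ON)
  endif()
endif()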
#-----------------------------------------------------------------------------
# Check if submodules are cloned
@@ -898,7 +836,7 @@ if(WITH_PYTHON)
# because UNIX will search for the old Python paths which may not exist,
# giving errors about missing paths before this case is met.
if(DEFINED PYTHON_VERSION AND "${PYTHON_VERSION}" VERSION_LESS "3.9")
message(FATAL_ERROR "At least Python 3.9 is required to build, but found Python ${PYTHON_VERSION}")
message(FATAL_ERROR "At least Python 3.9 is required to build")
endif()
file(GLOB RESULT "${CMAKE_SOURCE_DIR}/release/scripts/addons")
@@ -1090,7 +1028,7 @@ if(MSVC)
add_definitions(-D__LITTLE_ENDIAN__)
# OSX-Note: as we do cross-compiling with specific set architecture,
# endianness-detection and auto-setting is counterproductive
# endianess-detection and auto-setting is counterproductive
# so we just set endianness according CMAKE_OSX_ARCHITECTURES
elseif(CMAKE_OSX_ARCHITECTURES MATCHES i386 OR CMAKE_OSX_ARCHITECTURES MATCHES x86_64 OR CMAKE_OSX_ARCHITECTURES MATCHES arm64)
@@ -1646,9 +1584,6 @@ elseif(CMAKE_C_COMPILER_ID MATCHES "Clang")
ADD_CHECK_C_COMPILER_FLAG(C_WARNINGS C_WARN_UNUSED_PARAMETER -Wunused-parameter)
ADD_CHECK_CXX_COMPILER_FLAG(CXX_WARNINGS CXX_WARN_ALL -Wall)
# Using C++20 features while having C++17 as the project language isn't allowed by MSVC.
ADD_CHECK_CXX_COMPILER_FLAG(CXX_WARNINGS CXX_CXX20_DESIGNATOR -Wc++20-designator)
ADD_CHECK_CXX_COMPILER_FLAG(CXX_WARNINGS CXX_WARN_NO_AUTOLOGICAL_COMPARE -Wno-tautological-compare)
ADD_CHECK_CXX_COMPILER_FLAG(CXX_WARNINGS CXX_WARN_NO_UNKNOWN_PRAGMAS -Wno-unknown-pragmas)
ADD_CHECK_CXX_COMPILER_FLAG(CXX_WARNINGS CXX_WARN_NO_CHAR_SUBSCRIPTS -Wno-char-subscripts)
@@ -1768,26 +1703,24 @@ if(WITH_PYTHON)
elseif(WITH_PYTHON_INSTALL_REQUESTS)
find_python_package(requests "")
endif()
if(WIN32 OR APPLE)
# pass, we have this in lib/python/site-packages
elseif(WITH_PYTHON_INSTALL_ZSTANDARD)
find_python_package(zstandard "")
endif()
endif()
# Select C++17 as the standard for C++ projects.
set(CMAKE_CXX_STANDARD 17)
# If C++17 is not available, downgrading to an earlier standard is NOT OK.
set(CMAKE_CXX_STANDARD_REQUIRED ON)
# Do not enable compiler specific language extensions.
set(CMAKE_CXX_EXTENSIONS OFF)
# Make MSVC properly report the value of the __cplusplus preprocessor macro
# Available MSVC 15.7 (1914) and up, without this it reports 199711L regardless
# of the C++ standard chosen above.
if(MSVC AND MSVC_VERSION GREATER 1913)
string(APPEND CMAKE_CXX_FLAGS " /Zc:__cplusplus")
if(MSVC)
string(APPEND CMAKE_CXX_FLAGS " /std:c++17")
# Make MSVC properly report the value of the __cplusplus preprocessor macro
# Available MSVC 15.7 (1914) and up, without this it reports 199711L regardless
# of the C++ standard chosen above
if(MSVC_VERSION GREATER 1913)
string(APPEND CMAKE_CXX_FLAGS " /Zc:__cplusplus")
endif()
elseif(
CMAKE_COMPILER_IS_GNUCC OR
CMAKE_C_COMPILER_ID MATCHES "Clang" OR
CMAKE_C_COMPILER_ID MATCHES "Intel"
)
string(APPEND CMAKE_CXX_FLAGS " -std=c++17")
else()
message(FATAL_ERROR "Unknown compiler ${CMAKE_C_COMPILER_ID}, can't enable C++17 build")
endif()
# Visual Studio has all standards it supports available by default
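Both sides of this hunk select C++17; the newer side replaces per-compiler flag appending with CMake's built-in standard machinery. The portable core, condensed from the lines above:

set(CMAKE_CXX_STANDARD 17)
set(CMAKE_CXX_STANDARD_REQUIRED ON)  # refuse to fall back to an older standard
set(CMAKE_CXX_EXTENSIONS OFF)        # no compiler-specific language extensions
if(MSVC AND MSVC_VERSION GREATER 1913)
  string(APPEND CMAKE_CXX_FLAGS " /Zc:__cplusplus")  # report the real __cplusplus value
endif()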
@@ -1908,9 +1841,6 @@ elseif(WITH_CYCLES_STANDALONE)
if(WITH_CUDA_DYNLOAD)
add_subdirectory(extern/cuew)
endif()
if(WITH_HIP_DYNLOAD)
add_subdirectory(extern/hipew)
endif()
if(NOT WITH_SYSTEM_GLEW)
add_subdirectory(extern/glew)
endif()
@@ -1985,7 +1915,6 @@ if(FIRST_RUN)
info_cfg_option(WITH_IK_ITASC)
info_cfg_option(WITH_IK_SOLVER)
info_cfg_option(WITH_INPUT_NDOF)
info_cfg_option(WITH_INPUT_IME)
info_cfg_option(WITH_INTERNATIONAL)
info_cfg_option(WITH_OPENCOLLADA)
info_cfg_option(WITH_OPENCOLORIO)
@@ -2045,7 +1974,6 @@ if(FIRST_RUN)
endif()
info_cfg_option(WITH_PYTHON_INSTALL)
info_cfg_option(WITH_PYTHON_INSTALL_NUMPY)
info_cfg_option(WITH_PYTHON_INSTALL_ZSTANDARD)
info_cfg_option(WITH_PYTHON_MODULE)
info_cfg_option(WITH_PYTHON_SAFETY)


@@ -27,7 +27,7 @@
define HELP_TEXT
Blender Convenience Targets
Provided for building Blender (multiple targets can be used at once).
Provided for building Blender, (multiple at once can be used).
* debug: Build a debug binary.
* full: Enable all supported dependencies & options.
@@ -40,8 +40,6 @@ Blender Convenience Targets
* ninja: Use ninja build tool for faster builds.
* ccache: Use ccache for faster rebuilds.
Note: when passing in multiple targets their order is not important.
So for a fast build you can e.g. run 'make lite ccache ninja'.
Note: passing the argument 'BUILD_DIR=path' when calling make will override the default build dir.
Note: passing the argument 'BUILD_CMAKE_ARGS=args' lets you add cmake arguments.
@@ -51,7 +49,7 @@ Other Convenience Targets
* config: Run cmake configuration tool to set build options.
* deps: Build library dependencies (intended only for platform maintainers).
The existence of locally built dependencies overrides the pre-built dependencies from subversion.
The existence of locally built dependancies overrides the pre-built dependencies from subversion.
These must be manually removed from '../lib/' to go back to using the pre-compiled libraries.
Project Files
@@ -65,7 +63,7 @@ Package Targets
* package_debian: Build a debian package.
* package_pacman: Build an arch linux pacman package.
* package_archive: Build an archive package.
Testing Targets
Not associated with building Blender.
@@ -169,7 +167,7 @@ endef
# This makefile is not meant for Windows
ifeq ($(OS),Windows_NT)
$(error On Windows, use "cmd //c make.bat" instead of "make")
endif
# System Vars
@@ -381,7 +379,7 @@ deps: .FORCE
@cmake -H"$(DEPS_SOURCE_DIR)" \
-B"$(DEPS_BUILD_DIR)" \
-DHARVEST_TARGET=$(DEPS_INSTALL_DIR)
@echo
@echo Building dependencies ...
@@ -458,8 +456,7 @@ project_eclipse: .FORCE
check_cppcheck: .FORCE
$(CMAKE_CONFIG)
cd "$(BUILD_DIR)" ; \
$(PYTHON) \
"$(BLENDER_DIR)/build_files/cmake/cmake_static_check_cppcheck.py" 2> \
$(PYTHON) "$(BLENDER_DIR)/build_files/cmake/cmake_static_check_cppcheck.py" 2> \
"$(BLENDER_DIR)/check_cppcheck.txt"
@echo "written: check_cppcheck.txt"
@@ -521,9 +518,8 @@ source_archive: .FORCE
python3 ./build_files/utils/make_source_archive.py
source_archive_complete: .FORCE
cmake \
-S "$(BLENDER_DIR)/build_files/build_environment" -B"$(BUILD_DIR)/source_archive" \
-DCMAKE_BUILD_TYPE_INIT:STRING=$(BUILD_TYPE) -DPACKAGE_USE_UPSTREAM_SOURCES=OFF
cmake -S "$(BLENDER_DIR)/build_files/build_environment" -B"$(BUILD_DIR)/source_archive" \
-DCMAKE_BUILD_TYPE_INIT:STRING=$(BUILD_TYPE) -DPACKAGE_USE_UPSTREAM_SOURCES=OFF
# This assumes CMake is still using a default `PACKAGE_DIR` variable:
python3 ./build_files/utils/make_source_archive.py --include-packages "$(BUILD_DIR)/source_archive/packages"
@@ -531,11 +527,9 @@ source_archive_complete: .FORCE
INKSCAPE_BIN?="inkscape"
icons: .FORCE
BLENDER_BIN=$(BLENDER_BIN) INKSCAPE_BIN=$(INKSCAPE_BIN) \
"$(BLENDER_DIR)/release/datafiles/blender_icons_update.py"
INKSCAPE_BIN=$(INKSCAPE_BIN) \
"$(BLENDER_DIR)/release/datafiles/prvicons_update.py"
INKSCAPE_BIN=$(INKSCAPE_BIN) \
"$(BLENDER_DIR)/release/datafiles/alert_icons_update.py"
"$(BLENDER_DIR)/release/datafiles/blender_icons_update.py"
BLENDER_BIN=$(BLENDER_BIN) INKSCAPE_BIN=$(INKSCAPE_BIN) \
"$(BLENDER_DIR)/release/datafiles/prvicons_update.py"
icons_geom: .FORCE
BLENDER_BIN=$(BLENDER_BIN) \
@@ -549,7 +543,7 @@ update_code: .FORCE
format: .FORCE
PATH="../lib/${OS_NCASE}_${CPU}/llvm/bin/:../lib/${OS_NCASE}_centos7_${CPU}/llvm/bin/:../lib/${OS_NCASE}/llvm/bin/:$(PATH)" \
$(PYTHON) source/tools/utils_maintenance/clang_format_paths.py $(PATHS)
# -----------------------------------------------------------------------------
@@ -559,9 +553,8 @@ format: .FORCE
# Simple version of ./doc/python_api/sphinx_doc_gen.sh with no PDF generation.
doc_py: .FORCE
ASAN_OPTIONS=halt_on_error=0:${ASAN_OPTIONS} \
$(BLENDER_BIN) \
--background -noaudio --factory-startup \
--python doc/python_api/sphinx_doc_gen.py
$(BLENDER_BIN) --background -noaudio --factory-startup \
--python doc/python_api/sphinx_doc_gen.py
sphinx-build -b html -j $(NPROCS) doc/python_api/sphinx-in doc/python_api/sphinx-out
@echo "docs written into: '$(BLENDER_DIR)/doc/python_api/sphinx-out/index.html'"
@@ -570,9 +563,8 @@ doc_doxy: .FORCE
@echo "docs written into: '$(BLENDER_DIR)/doc/doxygen/html/index.html'"
doc_dna: .FORCE
$(BLENDER_BIN) \
--background -noaudio --factory-startup \
--python doc/blender_file_format/BlendFileDnaExporter_25.py
$(BLENDER_BIN) --background -noaudio --factory-startup \
--python doc/blender_file_format/BlendFileDnaExporter_25.py
@echo "docs written into: '$(BLENDER_DIR)/doc/blender_file_format/dna.html'"
doc_man: .FORCE


@@ -56,7 +56,6 @@ else()
endif()
include(cmake/zlib.cmake)
include(cmake/zstd.cmake)
include(cmake/openal.cmake)
include(cmake/png.cmake)
include(cmake/jpeg.cmake)
@@ -82,11 +81,7 @@ if(UNIX)
endif()
include(cmake/openimageio.cmake)
include(cmake/tiff.cmake)
if(WIN32)
include(cmake/flexbison.cmake)
elseif(UNIX AND NOT APPLE)
include(cmake/flex.cmake)
endif()
include(cmake/flexbison.cmake)
include(cmake/osl.cmake)
include(cmake/tbb.cmake)
include(cmake/openvdb.cmake)
@@ -169,7 +164,6 @@ endif()
if(UNIX AND NOT APPLE)
include(cmake/libglu.cmake)
include(cmake/mesa.cmake)
include(cmake/wayland_protocols.cmake)
endif()
include(cmake/harvest.cmake)


@@ -87,10 +87,7 @@ download_source(LIBGLU)
download_source(MESA)
download_source(NASM)
download_source(XR_OPENXR_SDK)
download_source(WL_PROTOCOLS)
download_source(ISPC)
download_source(GMP)
download_source(POTRACE)
download_source(HARU)
download_source(ZSTD)
download_source(FLEX)


@@ -30,7 +30,6 @@ if(WIN32)
--enable-w32threads
--disable-pthreads
--enable-libopenjpeg
--disable-mediafoundation
)
if("${CMAKE_SIZEOF_VOID_P}" EQUAL "4")
set(FFMPEG_EXTRA_FLAGS


@@ -1,28 +0,0 @@
# ***** BEGIN GPL LICENSE BLOCK *****
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
#
# ***** END GPL LICENSE BLOCK *****
ExternalProject_Add(external_flex
URL file://${PACKAGE_DIR}/${FLEX_FILE}
URL_HASH ${FLEX_HASH_TYPE}=${FLEX_HASH}
DOWNLOAD_DIR ${DOWNLOAD_DIR}
PREFIX ${BUILD_DIR}/flex
CONFIGURE_COMMAND ${CONFIGURE_ENV} && cd ${BUILD_DIR}/flex/src/external_flex/ && ${CONFIGURE_COMMAND} --prefix=${LIBDIR}/flex
BUILD_COMMAND ${CONFIGURE_ENV} && cd ${BUILD_DIR}/flex/src/external_flex/ && make -j${MAKE_THREADS}
INSTALL_COMMAND ${CONFIGURE_ENV} && cd ${BUILD_DIR}/flex/src/external_flex/ && make install
INSTALL_DIR ${LIBDIR}/flex
)


@@ -38,6 +38,13 @@ elseif(UNIX AND NOT APPLE)
)
endif()
if(BLENDER_PLATFORM_ARM)
set(GMP_OPTIONS
${GMP_OPTIONS}
--disable-assembly
)
endif()
ExternalProject_Add(external_gmp
URL file://${PACKAGE_DIR}/${GMP_FILE}
DOWNLOAD_DIR ${DOWNLOAD_DIR}
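The set(GMP_OPTIONS ${GMP_OPTIONS} ...) idiom in the hunk above works, though list(APPEND) expresses the same append more directly in modern CMake:

if(BLENDER_PLATFORM_ARM)
  list(APPEND GMP_OPTIONS --disable-assembly)
endif()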


@@ -17,7 +17,7 @@
# ***** END GPL LICENSE BLOCK *****
########################################################################
# Copy all generated files to the proper structure as blender prefers
# Copy all generated files to the proper strucure as blender prefers
########################################################################
if(NOT DEFINED HARVEST_TARGET)
@@ -106,7 +106,6 @@ harvest(llvm/include llvm/include "*")
harvest(llvm/bin llvm/bin "llvm-config")
harvest(llvm/lib llvm/lib "libLLVM*.a")
harvest(llvm/lib llvm/lib "libclang*.a")
harvest(llvm/lib/clang llvm/lib/clang "*.h")
if(APPLE)
harvest(openmp/lib openmp/lib "*")
harvest(openmp/include openmp/include "*.h")
@@ -127,8 +126,6 @@ if(UNIX AND NOT APPLE)
harvest(xml2/include xml2/include "*.h")
harvest(xml2/lib xml2/lib "*.a")
harvest(wayland-protocols/share/wayland-protocols wayland-protocols/share/wayland-protocols/ "*.xml")
else()
harvest(blosc/lib openvdb/lib "*.a")
harvest(xml2/lib opencollada/lib "*.a")
@@ -193,8 +190,6 @@ harvest(potrace/include potrace/include "*.h")
harvest(potrace/lib potrace/lib "*.a")
harvest(haru/include haru/include "*.h")
harvest(haru/lib haru/lib "*.a")
harvest(zstd/include zstd/include "*.h")
harvest(zstd/lib zstd/lib "*.a")
if(UNIX AND NOT APPLE)
harvest(libglu/lib mesa/lib "*.so*")
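The harvest() helper is defined earlier in harvest.cmake and is not shown in these hunks; judging from its call sites it copies files matching a glob from the staged library layout into HARVEST_TARGET. A hypothetical reimplementation, for orientation only (the real macro may differ):

function(harvest from to pattern)
  # Copy everything under LIBDIR/<from> matching <pattern> into HARVEST_TARGET/<to>.
  install(
    DIRECTORY ${LIBDIR}/${from}/
    DESTINATION ${HARVEST_TARGET}/${to}
    FILES_MATCHING PATTERN "${pattern}")
endfunction()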


@@ -35,7 +35,6 @@ elseif(APPLE)
else()
set(ISPC_EXTRA_ARGS_APPLE
-DBISON_EXECUTABLE=/usr/local/opt/bison/bin/bison
-DFLEX_EXECUTABLE=/usr/local/opt/flex/bin/flex
-DARM_ENABLED=Off
)
endif()
@@ -44,7 +43,6 @@ elseif(UNIX)
-DCMAKE_C_COMPILER=${LIBDIR}/llvm/bin/clang
-DCMAKE_CXX_COMPILER=${LIBDIR}/llvm/bin/clang++
-DARM_ENABLED=Off
-DFLEX_EXECUTABLE=${LIBDIR}/flex/bin/flex
)
endif()
@@ -84,9 +82,4 @@ if(WIN32)
external_ispc
external_flexbison
)
elseif(UNIX AND NOT APPLE)
add_dependencies(
external_ispc
external_flex
)
endif()


@@ -66,11 +66,7 @@ ExternalProject_Add(ll
if(MSVC)
if(BUILD_MODE STREQUAL Release)
set(LLVM_HARVEST_COMMAND
${CMAKE_COMMAND} -E copy_directory ${LIBDIR}/llvm/lib ${HARVEST_TARGET}/llvm/lib &&
${CMAKE_COMMAND} -E copy_directory ${LIBDIR}/llvm/include ${HARVEST_TARGET}/llvm/include &&
${CMAKE_COMMAND} -E copy ${LIBDIR}/llvm/bin/clang-format.exe ${HARVEST_TARGET}/llvm/bin/clang-format.exe
)
set(LLVM_HARVEST_COMMAND ${CMAKE_COMMAND} -E copy_directory ${LIBDIR}/llvm/ ${HARVEST_TARGET}/llvm/ )
else()
set(LLVM_HARVEST_COMMAND
${CMAKE_COMMAND} -E copy_directory ${LIBDIR}/llvm/lib/ ${HARVEST_TARGET}/llvm/debug/lib/ &&


@@ -42,7 +42,6 @@ ExternalProject_Add(nanovdb
URL_HASH ${NANOVDB_HASH_TYPE}=${NANOVDB_HASH}
PREFIX ${BUILD_DIR}/nanovdb
SOURCE_SUBDIR nanovdb
PATCH_COMMAND ${PATCH_CMD} -p 1 -d ${BUILD_DIR}/nanovdb/src/nanovdb < ${PATCH_DIR}/nanovdb.diff
CMAKE_ARGS -DCMAKE_INSTALL_PREFIX=${LIBDIR}/nanovdb ${DEFAULT_CMAKE_FLAGS} ${NANOVDB_EXTRA_ARGS}
INSTALL_DIR ${LIBDIR}/nanovdb
)


@@ -38,6 +38,7 @@ ExternalProject_Add(external_numpy
PREFIX ${BUILD_DIR}/numpy
PATCH_COMMAND ${NUMPY_PATCH}
CONFIGURE_COMMAND ""
PATCH_COMMAND COMMAND ${PATCH_CMD} -p 1 -d ${BUILD_DIR}/numpy/src/external_numpy < ${PATCH_DIR}/numpy.diff
LOG_BUILD 1
BUILD_COMMAND ${PYTHON_BINARY} ${BUILD_DIR}/numpy/src/external_numpy/setup.py build ${NUMPY_BUILD_OPTION} install --old-and-unmanageable
INSTALL_COMMAND ""


@@ -45,6 +45,7 @@ ExternalProject_Add(external_openimagedenoise
DOWNLOAD_DIR ${DOWNLOAD_DIR}
URL_HASH ${OIDN_HASH_TYPE}=${OIDN_HASH}
PREFIX ${BUILD_DIR}/openimagedenoise
PATCH_COMMAND ${PATCH_CMD} -p 1 -N -d ${BUILD_DIR}/openimagedenoise/src/external_openimagedenoise < ${PATCH_DIR}/oidn.diff
CMAKE_ARGS -DCMAKE_INSTALL_PREFIX=${LIBDIR}/openimagedenoise ${DEFAULT_CMAKE_FLAGS} ${OIDN_EXTRA_ARGS}
INSTALL_DIR ${LIBDIR}/openimagedenoise
)


@@ -16,20 +16,15 @@
#
# ***** END GPL LICENSE BLOCK *****
if(APPLE)
set(OPENMP_PATCH_COMMAND ${PATCH_CMD} -p 1 -d ${BUILD_DIR}/openmp/src/external_openmp < ${PATCH_DIR}/openmp.diff)
else()
set(OPENMP_PATCH_COMMAND)
endif()
ExternalProject_Add(external_openmp
URL file://${PACKAGE_DIR}/${OPENMP_FILE}
DOWNLOAD_DIR ${DOWNLOAD_DIR}
URL_HASH ${OPENMP_HASH_TYPE}=${OPENMP_HASH}
PREFIX ${BUILD_DIR}/openmp
PATCH_COMMAND ${OPENMP_PATCH_COMMAND}
PATCH_COMMAND ${PATCH_CMD} -p 1 -d ${BUILD_DIR}/openmp/src/external_openmp < ${PATCH_DIR}/openmp.diff
CMAKE_ARGS -DCMAKE_INSTALL_PREFIX=${LIBDIR}/openmp ${DEFAULT_CMAKE_FLAGS}
INSTALL_COMMAND cd ${BUILD_DIR}/openmp/src/external_openmp-build && install_name_tool -id @rpath/libomp.dylib runtime/src/libomp.dylib && make install
INSTALL_COMMAND cd ${BUILD_DIR}/openmp/src/external_openmp-build && install_name_tool -id @executable_path/../Resources/lib/libomp.dylib runtime/src/libomp.dylib && make install
INSTALL_DIR ${LIBDIR}/openmp
)


@@ -39,7 +39,7 @@ endif()
set(DOWNLOAD_DIR "${CMAKE_CURRENT_BINARY_DIR}/downloads" CACHE STRING "Path for downloaded files")
# This path must be hard-coded like this, so that the GNUmakefile knows where it is and can pass it to make_source_archive.py:
set(PACKAGE_DIR "${CMAKE_CURRENT_BINARY_DIR}/packages")
option(PACKAGE_USE_UPSTREAM_SOURCES "Use sources upstream to download the package sources, when OFF the blender mirror will be used" ON)
option(PACKAGE_USE_UPSTREAM_SOURCES "Use soures upstream to download the package sources, when OFF the blender mirror will be used" ON)
file(TO_CMAKE_PATH ${DOWNLOAD_DIR} DOWNLOAD_DIR)
file(TO_CMAKE_PATH ${PACKAGE_DIR} PACKAGE_DIR)


@@ -20,10 +20,12 @@ if(WIN32)
set(OSL_CMAKE_CXX_STANDARD_LIBRARIES "kernel32${LIBEXT} user32${LIBEXT} gdi32${LIBEXT} winspool${LIBEXT} shell32${LIBEXT} ole32${LIBEXT} oleaut32${LIBEXT} uuid${LIBEXT} comdlg32${LIBEXT} advapi32${LIBEXT} psapi${LIBEXT}")
set(OSL_FLEX_BISON -DFLEX_EXECUTABLE=${LIBDIR}/flexbison/win_flex.exe -DBISON_EXECUTABLE=${LIBDIR}/flexbison/win_bison.exe)
set(OSL_SIMD_FLAGS -DOIIO_NOSIMD=1 -DOIIO_SIMD=sse2)
SET(OSL_PLATFORM_FLAGS -DLINKSTATIC=ON)
else()
set(OSL_CMAKE_CXX_STANDARD_LIBRARIES)
set(OSL_FLEX_BISON)
set(OSL_OPENIMAGEIO_LIBRARY "${LIBDIR}/openimageio/lib/${LIBPREFIX}OpenImageIO${LIBEXT};${LIBDIR}/openimageio/lib/${LIBPREFIX}OpenImageIO_Util${LIBEXT};${LIBDIR}/png/lib/${LIBPREFIX}png16${LIBEXT};${LIBDIR}/jpg/lib/${LIBPREFIX}jpeg${LIBEXT};${LIBDIR}/tiff/lib/${LIBPREFIX}tiff${LIBEXT};${LIBDIR}/openexr/lib/${LIBPREFIX}IlmImf${OPENEXR_VERSION_POSTFIX}${LIBEXT}")
SET(OSL_PLATFORM_FLAGS)
endif()
set(OSL_ILMBASE_CUSTOM_LIBRARIES "${LIBDIR}/openexr/lib/Imath${OPENEXR_VERSION_POSTFIX}.lib^^${LIBDIR}/openexr/lib/Half${OPENEXR_VERSION_POSTFIX}.lib^^${LIBDIR}/openexr/lib/IlmThread${OPENEXR_VERSION_POSTFIX}.lib^^${LIBDIR}/openexr/lib/Iex${OPENEXR_VERSION_POSTFIX}.lib")
@@ -49,13 +51,12 @@ set(OSL_EXTRA_ARGS
-DOpenImageIO_ROOT=${LIBDIR}/openimageio/
-DOSL_BUILD_TESTS=OFF
-DOSL_BUILD_MATERIALX=OFF
-DPNG_ROOT=${LIBDIR}/png
-DZLIB_LIBRARY=${LIBDIR}/zlib/lib/${ZLIB_LIBRARY}
-DZLIB_INCLUDE_DIR=${LIBDIR}/zlib/include/
${OSL_FLEX_BISON}
-DCMAKE_CXX_STANDARD_LIBRARIES=${OSL_CMAKE_CXX_STANDARD_LIBRARIES}
-DBUILD_SHARED_LIBS=OFF
-DLINKSTATIC=ON
${OSL_PLATFORM_FLAGS}
-DOSL_BUILD_PLUGINS=OFF
-DSTOP_ON_WARNING=OFF
-DUSE_LLVM_BITCODE=OFF
@@ -68,9 +69,13 @@ set(OSL_EXTRA_ARGS
${OSL_SIMD_FLAGS}
-Dpugixml_ROOT=${LIBDIR}/pugixml
-DUSE_PYTHON=OFF
-DCMAKE_CXX_STANDARD=14
)
# Apple arm64 uses LLVM 11, and LLVM 10+ requires C++14
if (APPLE AND "${CMAKE_OSX_ARCHITECTURES}" STREQUAL "arm64")
list(APPEND OSL_EXTRA_ARGS -DCMAKE_CXX_STANDARD=14)
endif()
ExternalProject_Add(external_osl
URL file://${PACKAGE_DIR}/${OSL_FILE}
DOWNLOAD_DIR ${DOWNLOAD_DIR}
@@ -88,20 +93,10 @@ add_dependencies(
ll
external_openexr
external_zlib
external_flexbison
external_openimageio
external_pugixml
)
if(WIN32)
add_dependencies(
external_osl
external_flexbison
)
elseif(UNIX AND NOT APPLE)
add_dependencies(
external_osl
external_flex
)
endif()
if(WIN32)
if(BUILD_MODE STREQUAL Release)


@@ -24,7 +24,7 @@ if(MSVC)
add_custom_command(
OUTPUT ${PYTARGET}/bin/python${PYTHON_POSTFIX}.exe
COMMAND echo packaging python
COMMAND echo this should output at ${PYTARGET}/bin/python${PYTHON_POSTFIX}.exe
COMMAND echo this should ouput at ${PYTARGET}/bin/python${PYTHON_POSTFIX}.exe
COMMAND ${CMAKE_COMMAND} -E make_directory ${PYTARGET}/libs
COMMAND ${CMAKE_COMMAND} -E copy ${PYSRC}/libs/python${PYTHON_SHORT_VERSION_NO_DOTS}.lib ${PYTARGET}/libs/python${PYTHON_SHORT_VERSION_NO_DOTS}.lib
COMMAND ${CMAKE_COMMAND} -E copy ${PYSRC}/python.exe ${PYTARGET}/bin/python.exe
@@ -43,7 +43,7 @@ if(MSVC)
add_custom_command(
OUTPUT ${PYTARGET}/bin/python${PYTHON_POSTFIX}.exe
COMMAND echo packaging python
COMMAND echo this should output at ${PYTARGET}/bin/python${PYTHON_POSTFIX}.exe
COMMAND echo this should ouput at ${PYTARGET}/bin/python${PYTHON_POSTFIX}.exe
COMMAND ${CMAKE_COMMAND} -E make_directory ${PYTARGET}/libs
COMMAND ${CMAKE_COMMAND} -E copy ${PYSRC}/libs/python${PYTHON_SHORT_VERSION_NO_DOTS}${PYTHON_POSTFIX}.lib ${PYTARGET}/libs/python${PYTHON_SHORT_VERSION_NO_DOTS}${PYTHON_POSTFIX}.lib
COMMAND ${CMAKE_COMMAND} -E copy ${PYSRC}/python${PYTHON_POSTFIX}.exe ${PYTARGET}/bin/python${PYTHON_POSTFIX}.exe


@@ -23,7 +23,7 @@ set(PNG_EXTRA_ARGS
)
if(BLENDER_PLATFORM_ARM)
set(PNG_EXTRA_ARGS ${PNG_EXTRA_ARGS} -DPNG_HARDWARE_OPTIMIZATIONS=ON -DPNG_ARM_NEON=on -DCMAKE_SYSTEM_PROCESSOR="aarch64")
set(PNG_EXTRA_ARGS ${PNG_EXTRA_ARGS} -DPNG_HARDWARE_OPTIMIZATIONS=ON -DPNG_ARM_NEON=ON -DCMAKE_SYSTEM_PROCESSOR="aarch64")
endif()
ExternalProject_Add(external_png


@@ -18,20 +18,14 @@
if(WIN32 AND BUILD_MODE STREQUAL Debug)
set(SITE_PACKAGES_EXTRA --global-option build --global-option --debug)
# zstandard is determined to build and link release mode libs in a debug
# configuration; the only way to make it happy is to bend to its will
# and give it a library to link with.
set(PIP_CONFIGURE_COMMAND ${CMAKE_COMMAND} -E copy ${LIBDIR}/python/libs/python${PYTHON_SHORT_VERSION_NO_DOTS}_d.lib ${LIBDIR}/python/libs/python${PYTHON_SHORT_VERSION_NO_DOTS}.lib)
else()
set(PIP_CONFIGURE_COMMAND echo ".")
endif()
ExternalProject_Add(external_python_site_packages
DOWNLOAD_COMMAND ""
CONFIGURE_COMMAND ${PIP_CONFIGURE_COMMAND}
CONFIGURE_COMMAND ""
BUILD_COMMAND ""
PREFIX ${BUILD_DIR}/site_packages
INSTALL_COMMAND ${PYTHON_BINARY} -m pip install ${SITE_PACKAGES_EXTRA} cython==${CYTHON_VERSION} idna==${IDNA_VERSION} charset-normalizer==${CHARSET_NORMALIZER_VERSION} urllib3==${URLLIB3_VERSION} certifi==${CERTIFI_VERSION} requests==${REQUESTS_VERSION} zstandard==${ZSTANDARD_VERSION} --no-binary :all:
INSTALL_COMMAND ${PYTHON_BINARY} -m pip install ${SITE_PACKAGES_EXTRA} cython==${CYTHON_VERSION} idna==${IDNA_VERSION} chardet==${CHARDET_VERSION} urllib3==${URLLIB3_VERSION} certifi==${CERTIFI_VERSION} requests==${REQUESTS_VERSION} --no-binary :all:
)
if(USE_PIP_NUMPY)


@@ -79,7 +79,6 @@ if(WIN32)
# Normal collection of build artifacts
COMMAND ${CMAKE_COMMAND} -E copy ${LIBDIR}/tbb/lib/tbb_debug.lib ${HARVEST_TARGET}/tbb/lib/tbb_debug.lib
COMMAND ${CMAKE_COMMAND} -E copy ${LIBDIR}/tbb/bin/tbb_debug.dll ${HARVEST_TARGET}/tbb/bin/tbb_debug.dll
COMMAND ${CMAKE_COMMAND} -E copy ${LIBDIR}/tbb/lib/tbbmalloc_debug.lib ${HARVEST_TARGET}/tbb/lib/tbbmalloc_debug.lib
COMMAND ${CMAKE_COMMAND} -E copy ${LIBDIR}/tbb/lib/tbbmalloc_proxy_debug.lib ${HARVEST_TARGET}/tbb/lib/tbbmalloc_proxy_debug.lib
COMMAND ${CMAKE_COMMAND} -E copy ${LIBDIR}/tbb/bin/tbbmalloc_debug.dll ${HARVEST_TARGET}/tbb/bin/tbbmalloc_debug.dll
COMMAND ${CMAKE_COMMAND} -E copy ${LIBDIR}/tbb/bin/tbbmalloc_proxy_debug.dll ${HARVEST_TARGET}/tbb/bin/tbbmalloc_proxy_debug.dll


@@ -152,28 +152,35 @@ set(OPENCOLORIO_HASH 1a2e3478b6cd9a1549f24e1b2205e3f0)
set(OPENCOLORIO_HASH_TYPE MD5)
set(OPENCOLORIO_FILE OpenColorIO-${OPENCOLORIO_VERSION}.tar.gz)
set(LLVM_VERSION 12.0.0)
set(LLVM_URI https://github.com/llvm/llvm-project/releases/download/llvmorg-${LLVM_VERSION}/llvm-project-${LLVM_VERSION}.src.tar.xz)
set(LLVM_HASH 5a4fab4d7fc84aefffb118ac2c8a4fc0)
set(LLVM_HASH_TYPE MD5)
set(LLVM_FILE llvm-project-${LLVM_VERSION}.src.tar.xz)
if(BLENDER_PLATFORM_ARM)
# Newer version required by ISPC with arm support.
set(LLVM_VERSION 11.0.1)
set(LLVM_URI https://github.com/llvm/llvm-project/releases/download/llvmorg-${LLVM_VERSION}/llvm-project-${LLVM_VERSION}.src.tar.xz)
set(LLVM_HASH e700af40ab83463e4e9ab0ba3708312e)
set(LLVM_HASH_TYPE MD5)
set(LLVM_FILE llvm-project-${LLVM_VERSION}.src.tar.xz)
if(APPLE)
# Cloth physics test is crashing due to this bug:
# https://bugs.llvm.org/show_bug.cgi?id=50579
set(OPENMP_VERSION 9.0.1)
set(OPENMP_URI https://github.com/llvm/llvm-project/releases/download/llvmorg-${OPENMP_VERSION}/openmp-${OPENMP_VERSION}.src.tar.xz)
set(OPENMP_HASH 6eade16057edbdecb3c4eef9daa2bfcf)
set(OPENMP_HASH_TYPE MD5)
set(OPENMP_FILE openmp-${OPENMP_VERSION}.src.tar.xz)
else()
set(OPENMP_VERSION ${LLVM_VERSION})
set(OPENMP_HASH ac48ce3e4582ccb82f81ab59eb3fc9dc)
endif()
set(OPENMP_URI https://github.com/llvm/llvm-project/releases/download/llvmorg-${OPENMP_VERSION}/openmp-${OPENMP_VERSION}.src.tar.xz)
set(OPENMP_HASH_TYPE MD5)
set(OPENMP_FILE openmp-${OPENMP_VERSION}.src.tar.xz)
set(LLVM_VERSION 9.0.1)
set(LLVM_URI https://github.com/llvm/llvm-project/releases/download/llvmorg-${LLVM_VERSION}/llvm-project-${LLVM_VERSION}.tar.xz)
set(LLVM_HASH b4268e733dfe352960140dc07ef2efcb)
set(LLVM_HASH_TYPE MD5)
set(LLVM_FILE llvm-project-${LLVM_VERSION}.tar.xz)
set(OPENIMAGEIO_VERSION 2.2.15.1)
set(OPENMP_URI https://github.com/llvm/llvm-project/releases/download/llvmorg-${LLVM_VERSION}/openmp-${LLVM_VERSION}.src.tar.xz)
set(OPENMP_HASH 6eade16057edbdecb3c4eef9daa2bfcf)
set(OPENMP_HASH_TYPE MD5)
set(OPENMP_FILE openmp-${LLVM_VERSION}.src.tar.xz)
endif()
set(OPENIMAGEIO_VERSION 2.1.15.0)
set(OPENIMAGEIO_URI https://github.com/OpenImageIO/oiio/archive/Release-${OPENIMAGEIO_VERSION}.tar.gz)
set(OPENIMAGEIO_HASH 3db5c5f0b3dc91597c75e5df09eb9072)
set(OPENIMAGEIO_HASH f03aa5e3ac4795af04771ee4146e9832)
set(OPENIMAGEIO_HASH_TYPE MD5)
set(OPENIMAGEIO_FILE OpenImageIO-${OPENIMAGEIO_VERSION}.tar.gz)
@@ -183,17 +190,17 @@ set(TIFF_HASH 2165e7aba557463acc0664e71a3ed424)
set(TIFF_HASH_TYPE MD5)
set(TIFF_FILE tiff-${TIFF_VERSION}.tar.gz)
set(OSL_VERSION 1.11.14.1)
set(OSL_VERSION 1.11.10.0)
set(OSL_URI https://github.com/imageworks/OpenShadingLanguage/archive/Release-${OSL_VERSION}.tar.gz)
set(OSL_HASH 1abd7ce40481771a9fa937f19595d2f2)
set(OSL_HASH dfdc23597aeef083832cbada62211756)
set(OSL_HASH_TYPE MD5)
set(OSL_FILE OpenShadingLanguage-${OSL_VERSION}.tar.gz)
set(PYTHON_VERSION 3.9.7)
set(PYTHON_VERSION 3.9.2)
set(PYTHON_SHORT_VERSION 3.9)
set(PYTHON_SHORT_VERSION_NO_DOTS 39)
set(PYTHON_URI https://www.python.org/ftp/python/${PYTHON_VERSION}/Python-${PYTHON_VERSION}.tar.xz)
set(PYTHON_HASH fddb060b483bc01850a3f412eea1d954)
set(PYTHON_HASH f0dc9000312abeb16de4eccce9a870ab)
set(PYTHON_HASH_TYPE MD5)
set(PYTHON_FILE Python-${PYTHON_VERSION}.tar.xz)
@@ -209,24 +216,23 @@ set(OPENVDB_HASH 01b490be16cc0e15c690f9a153c21461)
set(OPENVDB_HASH_TYPE MD5)
set(OPENVDB_FILE openvdb-${OPENVDB_VERSION}.tar.gz)
set(NANOVDB_GIT_UID dc37d8a631922e7bef46712947dc19b755f3e841)
set(NANOVDB_GIT_UID e62f7a0bf1e27397223c61ddeaaf57edf111b77f)
set(NANOVDB_URI https://github.com/AcademySoftwareFoundation/openvdb/archive/${NANOVDB_GIT_UID}.tar.gz)
set(NANOVDB_HASH e7b9e863ec2f3b04ead171dec2322807)
set(NANOVDB_HASH 90919510bc6ccd630fedc56f748cb199)
set(NANOVDB_HASH_TYPE MD5)
set(NANOVDB_FILE nano-vdb-${NANOVDB_GIT_UID}.tar.gz)
set(IDNA_VERSION 3.2)
set(CHARSET_NORMALIZER_VERSION 2.0.6)
set(URLLIB3_VERSION 1.26.7)
set(CERTIFI_VERSION 2021.10.8)
set(REQUESTS_VERSION 2.26.0)
set(CYTHON_VERSION 0.29.24)
set(ZSTANDARD_VERSION 0.15.2)
set(IDNA_VERSION 2.10)
set(CHARDET_VERSION 4.0.0)
set(URLLIB3_VERSION 1.26.3)
set(CERTIFI_VERSION 2020.12.5)
set(REQUESTS_VERSION 2.25.1)
set(CYTHON_VERSION 0.29.21)
set(NUMPY_VERSION 1.21.2)
set(NUMPY_SHORT_VERSION 1.21)
set(NUMPY_VERSION 1.19.5)
set(NUMPY_SHORT_VERSION 1.19)
set(NUMPY_URI https://github.com/numpy/numpy/releases/download/v${NUMPY_VERSION}/numpy-${NUMPY_VERSION}.zip)
set(NUMPY_HASH 5638d5dae3ca387be562912312db842e)
set(NUMPY_HASH f6a1b48717c552bbc18f1adc3cc1fe0e)
set(NUMPY_HASH_TYPE MD5)
set(NUMPY_FILE numpy-${NUMPY_VERSION}.zip)
@@ -364,18 +370,12 @@ set(PUGIXML_HASH 0c208b0664c7fb822bf1b49ad035e8fd)
set(PUGIXML_HASH_TYPE MD5)
set(PUGIXML_FILE pugixml-${PUGIXML_VERSION}.tar.gz)
set(FLEXBISON_VERSION 2.5.24)
set(FLEXBISON_VERSION 2.5.5)
set(FLEXBISON_URI http://prdownloads.sourceforge.net/winflexbison/win_flex_bison-${FLEXBISON_VERSION}.zip)
set(FLEXBISON_HASH 6b549d43e34ece0e8ed05af92daa31c4)
set(FLEXBISON_HASH d87a3938194520d904013abef3df10ce)
set(FLEXBISON_HASH_TYPE MD5)
set(FLEXBISON_FILE win_flex_bison-${FLEXBISON_VERSION}.zip)
set(FLEX_VERSION 2.6.4)
set(FLEX_URI https://github.com/westes/flex/releases/download/v${FLEX_VERSION}/flex-${FLEX_VERSION}.tar.gz)
set(FLEX_HASH 2882e3179748cc9f9c23ec593d6adc8d)
set(FLEX_HASH_TYPE MD5)
set(FLEX_FILE flex-${FLEX_VERSION}.tar.gz)
# Libraries to keep Python modules static on Linux.
# NOTE: the bzip.org domain no longer belongs to the BZip2 project, so we download
@@ -432,9 +432,9 @@ set(USD_HASH 1dd1e2092d085ed393c1f7c450a4155a)
set(USD_HASH_TYPE MD5)
set(USD_FILE usd-v${USD_VERSION}.tar.gz)
set(OIDN_VERSION 1.4.1)
set(OIDN_VERSION 1.3.0)
set(OIDN_URI https://github.com/OpenImageDenoise/oidn/releases/download/v${OIDN_VERSION}/oidn-${OIDN_VERSION}.src.tar.gz)
set(OIDN_HASH df4007b0ab93b1c41cdf223b075d01c0)
set(OIDN_HASH 301a5a0958d375a942014df0679b9270)
set(OIDN_HASH_TYPE MD5)
set(OIDN_FILE oidn-${OIDN_VERSION}.src.tar.gz)
@@ -444,10 +444,10 @@ set(LIBGLU_HASH 151aef599b8259efe9acd599c96ea2a3)
set(LIBGLU_HASH_TYPE MD5)
set(LIBGLU_FILE glu-${LIBGLU_VERSION}.tar.xz)
set(MESA_VERSION 21.1.5)
set(MESA_VERSION 20.3.4)
set(MESA_URI ftp://ftp.freedesktop.org/pub/mesa/mesa-${MESA_VERSION}.tar.xz)
set(MESA_HASH 022c7293074aeeced2278c872db4fa693147c70f8595b076cf3f1ef81520766d)
set(MESA_HASH_TYPE SHA256)
set(MESA_HASH 556338446aef8ae947a789b3e0b5e056)
set(MESA_HASH_TYPE MD5)
set(MESA_FILE mesa-${MESA_VERSION}.tar.xz)
set(NASM_VERSION 2.15.02)
@@ -456,27 +456,29 @@ set(NASM_HASH aded8b796c996a486a56e0515c83e414116decc3b184d88043480b32eb0a8589)
set(NASM_HASH_TYPE SHA256)
set(NASM_FILE nasm-${NASM_VERSION}.tar.gz)
set(XR_OPENXR_SDK_VERSION 1.0.17)
set(XR_OPENXR_SDK_VERSION 1.0.14)
set(XR_OPENXR_SDK_URI https://github.com/KhronosGroup/OpenXR-SDK/archive/release-${XR_OPENXR_SDK_VERSION}.tar.gz)
set(XR_OPENXR_SDK_HASH bf0fd8828837edff01047474e90013e1)
set(XR_OPENXR_SDK_HASH 0df6b2fd6045423451a77ff6bc3e1a75)
set(XR_OPENXR_SDK_HASH_TYPE MD5)
set(XR_OPENXR_SDK_FILE OpenXR-SDK-${XR_OPENXR_SDK_VERSION}.tar.gz)
set(WL_PROTOCOLS_VERSION 1.21)
set(WL_PROTOCOLS_FILE wayland-protocols-${WL_PROTOCOLS_VERSION}.tar.gz)
set(WL_PROTOCOLS_URI https://gitlab.freedesktop.org/wayland/wayland-protocols/-/archive/${WL_PROTOCOLS_VERSION}/${WL_PROTOCOLS_FILE})
set(WL_PROTOCOLS_HASH af5ca07e13517cdbab33504492cef54a)
set(WL_PROTOCOLS_HASH_TYPE MD5)
if(BLENDER_PLATFORM_ARM)
# Unreleased version with macOS arm support.
set(ISPC_URI https://github.com/ispc/ispc/archive/f5949c055eb9eeb93696978a3da4bfb3a6a30b35.zip)
set(ISPC_HASH d382fea18d01dbd0cd05d9e1ede36d7d)
set(ISPC_HASH_TYPE MD5)
set(ISPC_FILE f5949c055eb9eeb93696978a3da4bfb3a6a30b35.zip)
else()
set(ISPC_VERSION v1.14.1)
set(ISPC_URI https://github.com/ispc/ispc/archive/${ISPC_VERSION}.tar.gz)
set(ISPC_HASH 968fbc8dfd16a60ba4e32d2e0e03ea7a)
set(ISPC_HASH_TYPE MD5)
set(ISPC_FILE ispc-${ISPC_VERSION}.tar.gz)
endif()
set(ISPC_VERSION v1.16.0)
set(ISPC_URI https://github.com/ispc/ispc/archive/${ISPC_VERSION}.tar.gz)
set(ISPC_HASH 2e3abedbc0ea9aaec17d6562c632454d)
set(ISPC_HASH_TYPE MD5)
set(ISPC_FILE ispc-${ISPC_VERSION}.tar.gz)
set(GMP_VERSION 6.2.1)
set(GMP_VERSION 6.2.0)
set(GMP_URI https://gmplib.org/download/gmp/gmp-${GMP_VERSION}.tar.xz)
set(GMP_HASH 0b82665c4a92fd2ade7440c13fcaa42b)
set(GMP_HASH a325e3f09e6d91e62101e59f9bda3ec1)
set(GMP_HASH_TYPE MD5)
set(GMP_FILE gmp-${GMP_VERSION}.tar.xz)
@@ -492,11 +494,5 @@ set(HARU_HASH 4f916aa49c3069b3a10850013c507460)
set(HARU_HASH_TYPE MD5)
set(HARU_FILE libharu-${HARU_VERSION}.tar.gz)
set(ZSTD_VERSION 1.5.0)
set(ZSTD_URI https://github.com/facebook/zstd/releases/download/v${ZSTD_VERSION}/zstd-${ZSTD_VERSION}.tar.gz)
set(ZSTD_HASH 5194fbfa781fcf45b98c5e849651aa7b3b0a008c6b72d4a0db760f3002291e94)
set(ZSTD_HASH_TYPE SHA256)
set(ZSTD_FILE zstd-${ZSTD_VERSION}.tar.gz)
set(SSE2NEON_GIT https://github.com/DLTcollab/sse2neon.git)
set(SSE2NEON_GIT_HASH fe5ff00bb8d19b327714a3c290f3e2ce81ba3525)


@@ -1,27 +0,0 @@
# ***** BEGIN GPL LICENSE BLOCK *****
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
#
# ***** END GPL LICENSE BLOCK *****
ExternalProject_Add(external_wayland_protocols
URL file://${PACKAGE_DIR}/${WL_PROTOCOLS_FILE}
DOWNLOAD_DIR ${DOWNLOAD_DIR}
URL_HASH ${WL_PROTOCOLS_HASH_TYPE}=${WL_PROTOCOLS_HASH}
PREFIX ${BUILD_DIR}/wayland-protocols
CONFIGURE_COMMAND meson --prefix ${LIBDIR}/wayland-protocols . ../external_wayland_protocols -Dtests=false
BUILD_COMMAND ninja
INSTALL_COMMAND ninja install
)


@@ -1,51 +0,0 @@
# ***** BEGIN GPL LICENSE BLOCK *****
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
#
# ***** END GPL LICENSE BLOCK *****
set(ZSTD_EXTRA_ARGS
-DZSTD_BUILD_PROGRAMS=OFF
-DZSTD_BUILD_SHARED=OFF
-DZSTD_BUILD_STATIC=ON
-DZSTD_BUILD_TESTS=OFF
-DZSTD_LEGACY_SUPPORT=OFF
-DZSTD_LZ4_SUPPORT=OFF
-DZSTD_LZMA_SUPPORT=OFF
-DZSTD_MULTITHREAD_SUPPORT=ON
-DZSTD_PROGRAMS_LINK_SHARED=OFF
-DZSTD_USE_STATIC_RUNTIME=OFF
-DZSTD_ZLIB_SUPPORT=OFF
)
ExternalProject_Add(external_zstd
URL file://${PACKAGE_DIR}/${ZSTD_FILE}
DOWNLOAD_DIR ${DOWNLOAD_DIR}
URL_HASH ${ZSTD_HASH_TYPE}=${ZSTD_HASH}
PREFIX ${BUILD_DIR}/zstd
SOURCE_SUBDIR build/cmake
CMAKE_ARGS -DCMAKE_INSTALL_PREFIX=${LIBDIR}/zstd ${DEFAULT_CMAKE_FLAGS} ${ZSTD_EXTRA_ARGS}
INSTALL_DIR ${LIBDIR}/zstd
)
if(WIN32)
if(BUILD_MODE STREQUAL Release)
ExternalProject_Add_Step(external_zstd after_install
COMMAND ${CMAKE_COMMAND} -E copy ${LIBDIR}/zstd/lib/zstd_static${LIBEXT} ${HARVEST_TARGET}/zstd/lib/zstd_static${LIBEXT}
COMMAND ${CMAKE_COMMAND} -E copy_directory ${LIBDIR}/zstd/include/ ${HARVEST_TARGET}/zstd/include/
DEPENDEES install
)
endif()
endif()

File diff suppressed because it is too large.


@@ -68,34 +68,3 @@
+
return ret;
}
--- a/libavcodec/rl.c
+++ b/libavcodec/rl.c
@@ -71,17 +71,19 @@
av_cold void ff_rl_init_vlc(RLTable *rl, unsigned static_size)
{
int i, q;
- VLC_TYPE table[1500][2] = {{0}};
+ VLC_TYPE (*table)[2] = av_calloc(sizeof(VLC_TYPE), 1500 * 2);
VLC vlc = { .table = table, .table_allocated = static_size };
- av_assert0(static_size <= FF_ARRAY_ELEMS(table));
+ av_assert0(static_size < 1500);
init_vlc(&vlc, 9, rl->n + 1, &rl->table_vlc[0][1], 4, 2, &rl->table_vlc[0][0], 4, 2, INIT_VLC_USE_NEW_STATIC);
for (q = 0; q < 32; q++) {
int qmul = q * 2;
int qadd = (q - 1) | 1;
- if (!rl->rl_vlc[q])
+ if (!rl->rl_vlc[q]){
+ av_free(table);
return;
+ }
if (q == 0) {
qmul = 1;
@@ -113,4 +115,5 @@
rl->rl_vlc[q][i].run = run;
}
}
+ av_free(table);
}


@@ -1,374 +0,0 @@
Index: nanovdb/nanovdb/NanoVDB.h
===================================================================
--- a/nanovdb/nanovdb/NanoVDB.h (revision 62751)
+++ b/nanovdb/nanovdb/NanoVDB.h (working copy)
@@ -152,8 +152,8 @@
#endif // __CUDACC_RTC__
-#ifdef __CUDACC__
-// Only define __hostdev__ when using NVIDIA CUDA compiler
+#if defined(__CUDACC__) || defined(__HIP__)
+// Only define __hostdev__ when using NVIDIA CUDA or HIP compiler
#define __hostdev__ __host__ __device__
#else
#define __hostdev__
@@ -461,7 +461,7 @@
/// Maximum floating-point values
template<typename T>
struct Maximum;
-#ifdef __CUDA_ARCH__
+#if defined(__CUDA_ARCH__) || defined(__HIP__)
template<>
struct Maximum<int>
{
@@ -1006,10 +1006,10 @@
using Vec3i = Vec3<int>;
/// @brief Return a single precision floating-point vector of this coordinate
-Vec3f Coord::asVec3s() const { return Vec3f(float(mVec[0]), float(mVec[1]), float(mVec[2])); }
+inline __hostdev__ Vec3f Coord::asVec3s() const { return Vec3f(float(mVec[0]), float(mVec[1]), float(mVec[2])); }
/// @brief Return a double precision floating-point vector of this coordinate
-Vec3d Coord::asVec3d() const { return Vec3d(double(mVec[0]), double(mVec[1]), double(mVec[2])); }
+inline __hostdev__ Vec3d Coord::asVec3d() const { return Vec3d(double(mVec[0]), double(mVec[1]), double(mVec[2])); }
// ----------------------------> Vec4 <--------------------------------------
@@ -1820,7 +1820,7 @@
}; // Map
template<typename Mat4T>
-void Map::set(const Mat4T& mat, const Mat4T& invMat, double taper)
+__hostdev__ void Map::set(const Mat4T& mat, const Mat4T& invMat, double taper)
{
float * mf = mMatF, *vf = mVecF;
float* mif = mInvMatF;
@@ -2170,7 +2170,7 @@
}; // Class Grid
template<typename TreeT>
-int Grid<TreeT>::findBlindDataForSemantic(GridBlindDataSemantic semantic) const
+__hostdev__ int Grid<TreeT>::findBlindDataForSemantic(GridBlindDataSemantic semantic) const
{
for (uint32_t i = 0, n = blindDataCount(); i < n; ++i)
if (blindMetaData(i).mSemantic == semantic)
@@ -2328,7 +2328,7 @@
}; // Tree class
template<typename RootT>
-void Tree<RootT>::extrema(ValueType& min, ValueType& max) const
+__hostdev__ void Tree<RootT>::extrema(ValueType& min, ValueType& max) const
{
min = this->root().valueMin();
max = this->root().valueMax();
@@ -2336,7 +2336,7 @@
template<typename RootT>
template<typename NodeT>
-const NodeT* Tree<RootT>::getNode(uint32_t i) const
+__hostdev__ const NodeT* Tree<RootT>::getNode(uint32_t i) const
{
static_assert(is_same<TreeNodeT<NodeT::LEVEL>, NodeT>::value, "Tree::getNode: unvalid node type");
NANOVDB_ASSERT(i < DataType::mCount[NodeT::LEVEL]);
@@ -2345,7 +2345,7 @@
template<typename RootT>
template<int LEVEL>
-const typename TreeNode<Tree<RootT>, LEVEL>::type* Tree<RootT>::getNode(uint32_t i) const
+__hostdev__ const typename TreeNode<Tree<RootT>, LEVEL>::type* Tree<RootT>::getNode(uint32_t i) const
{
NANOVDB_ASSERT(i < DataType::mCount[LEVEL]);
return reinterpret_cast<const TreeNodeT<LEVEL>*>(reinterpret_cast<const uint8_t*>(this) + DataType::mBytes[LEVEL]) + i;
@@ -2353,7 +2353,7 @@
template<typename RootT>
template<typename NodeT>
-NodeT* Tree<RootT>::getNode(uint32_t i)
+__hostdev__ NodeT* Tree<RootT>::getNode(uint32_t i)
{
static_assert(is_same<TreeNodeT<NodeT::LEVEL>, NodeT>::value, "Tree::getNode: invalid node type");
NANOVDB_ASSERT(i < DataType::mCount[NodeT::LEVEL]);
@@ -2362,7 +2362,7 @@
template<typename RootT>
template<int LEVEL>
-typename TreeNode<Tree<RootT>, LEVEL>::type* Tree<RootT>::getNode(uint32_t i)
+__hostdev__ typename TreeNode<Tree<RootT>, LEVEL>::type* Tree<RootT>::getNode(uint32_t i)
{
NANOVDB_ASSERT(i < DataType::mCount[LEVEL]);
return reinterpret_cast<TreeNodeT<LEVEL>*>(reinterpret_cast<uint8_t*>(this) + DataType::mBytes[LEVEL]) + i;
@@ -2370,7 +2370,7 @@
template<typename RootT>
template<typename NodeT>
-uint32_t Tree<RootT>::getNodeID(const NodeT& node) const
+__hostdev__ uint32_t Tree<RootT>::getNodeID(const NodeT& node) const
{
static_assert(is_same<TreeNodeT<NodeT::LEVEL>, NodeT>::value, "Tree::getNodeID: invalid node type");
const NodeT* first = reinterpret_cast<const NodeT*>(reinterpret_cast<const uint8_t*>(this) + DataType::mBytes[NodeT::LEVEL]);
@@ -2380,7 +2380,7 @@
template<typename RootT>
template<typename NodeT>
-uint32_t Tree<RootT>::getLinearOffset(const NodeT& node) const
+__hostdev__ uint32_t Tree<RootT>::getLinearOffset(const NodeT& node) const
{
return this->getNodeID(node) + DataType::mPFSum[NodeT::LEVEL];
}
@@ -3366,7 +3366,7 @@
}; // LeafNode class
template<typename ValueT, typename CoordT, template<uint32_t> class MaskT, uint32_t LOG2DIM>
-inline void LeafNode<ValueT, CoordT, MaskT, LOG2DIM>::updateBBox()
+inline __hostdev__ void LeafNode<ValueT, CoordT, MaskT, LOG2DIM>::updateBBox()
{
static_assert(LOG2DIM == 3, "LeafNode::updateBBox: only supports LOGDIM = 3!");
if (!this->isActive()) return;
Index: nanovdb/nanovdb/util/SampleFromVoxels.h
===================================================================
--- a/nanovdb/nanovdb/util/SampleFromVoxels.h (revision 62751)
+++ b/nanovdb/nanovdb/util/SampleFromVoxels.h (working copy)
@@ -22,7 +22,7 @@
#define NANOVDB_SAMPLE_FROM_VOXELS_H_HAS_BEEN_INCLUDED
// Only define __hostdev__ when compiling as NVIDIA CUDA
-#ifdef __CUDACC__
+#if defined(__CUDACC__) || defined(__HIP__)
#define __hostdev__ __host__ __device__
#else
#include <cmath> // for floor
@@ -136,7 +136,7 @@
template<typename TreeOrAccT>
template<typename Vec3T>
-typename TreeOrAccT::ValueType SampleFromVoxels<TreeOrAccT, 0, true>::operator()(const Vec3T& xyz) const
+__hostdev__ typename TreeOrAccT::ValueType SampleFromVoxels<TreeOrAccT, 0, true>::operator()(const Vec3T& xyz) const
{
const CoordT ijk = Round<CoordT>(xyz);
if (ijk != mPos) {
@@ -147,7 +147,7 @@
}
template<typename TreeOrAccT>
-typename TreeOrAccT::ValueType SampleFromVoxels<TreeOrAccT, 0, true>::operator()(const CoordT& ijk) const
+__hostdev__ typename TreeOrAccT::ValueType SampleFromVoxels<TreeOrAccT, 0, true>::operator()(const CoordT& ijk) const
{
if (ijk != mPos) {
mPos = ijk;
@@ -158,7 +158,7 @@
template<typename TreeOrAccT>
template<typename Vec3T>
-typename TreeOrAccT::ValueType SampleFromVoxels<TreeOrAccT, 0, false>::operator()(const Vec3T& xyz) const
+__hostdev__ typename TreeOrAccT::ValueType SampleFromVoxels<TreeOrAccT, 0, false>::operator()(const Vec3T& xyz) const
{
return mAcc.getValue(Round<CoordT>(xyz));
}
@@ -195,7 +195,7 @@
}; // TrilinearSamplerBase
template<typename TreeOrAccT>
-void TrilinearSampler<TreeOrAccT>::stencil(CoordT& ijk, ValueT (&v)[2][2][2]) const
+__hostdev__ void TrilinearSampler<TreeOrAccT>::stencil(CoordT& ijk, ValueT (&v)[2][2][2]) const
{
v[0][0][0] = mAcc.getValue(ijk); // i, j, k
@@ -224,7 +224,7 @@
template<typename TreeOrAccT>
template<typename RealT, template<typename...> class Vec3T>
-typename TreeOrAccT::ValueType TrilinearSampler<TreeOrAccT>::sample(const Vec3T<RealT> &uvw, const ValueT (&v)[2][2][2])
+__hostdev__ typename TreeOrAccT::ValueType TrilinearSampler<TreeOrAccT>::sample(const Vec3T<RealT> &uvw, const ValueT (&v)[2][2][2])
{
#if 0
auto lerp = [](ValueT a, ValueT b, ValueT w){ return fma(w, b-a, a); };// = w*(b-a) + a
@@ -239,7 +239,7 @@
template<typename TreeOrAccT>
template<typename RealT, template<typename...> class Vec3T>
-Vec3T<typename TreeOrAccT::ValueType> TrilinearSampler<TreeOrAccT>::gradient(const Vec3T<RealT> &uvw, const ValueT (&v)[2][2][2])
+__hostdev__ Vec3T<typename TreeOrAccT::ValueType> TrilinearSampler<TreeOrAccT>::gradient(const Vec3T<RealT> &uvw, const ValueT (&v)[2][2][2])
{
static_assert(std::is_floating_point<ValueT>::value, "TrilinearSampler::gradient requires a floating-point type");
#if 0
@@ -270,7 +270,7 @@
}
template<typename TreeOrAccT>
-bool TrilinearSampler<TreeOrAccT>::zeroCrossing(const ValueT (&v)[2][2][2])
+__hostdev__ bool TrilinearSampler<TreeOrAccT>::zeroCrossing(const ValueT (&v)[2][2][2])
{
static_assert(std::is_floating_point<ValueT>::value, "TrilinearSampler::zeroCrossing requires a floating-point type");
const bool less = v[0][0][0] < ValueT(0);
@@ -363,7 +363,7 @@
template<typename TreeOrAccT>
template<typename RealT, template<typename...> class Vec3T>
-typename TreeOrAccT::ValueType SampleFromVoxels<TreeOrAccT, 1, true>::operator()(Vec3T<RealT> xyz) const
+__hostdev__ typename TreeOrAccT::ValueType SampleFromVoxels<TreeOrAccT, 1, true>::operator()(Vec3T<RealT> xyz) const
{
this->cache(xyz);
return BaseT::sample(xyz, mVal);
@@ -370,7 +370,7 @@
}
template<typename TreeOrAccT>
-typename TreeOrAccT::ValueType SampleFromVoxels<TreeOrAccT, 1, true>::operator()(const CoordT &ijk) const
+__hostdev__ typename TreeOrAccT::ValueType SampleFromVoxels<TreeOrAccT, 1, true>::operator()(const CoordT &ijk) const
{
return ijk == mPos ? mVal[0][0][0] : BaseT::mAcc.getValue(ijk);
}
@@ -377,7 +377,7 @@
template<typename TreeOrAccT>
template<typename RealT, template<typename...> class Vec3T>
-Vec3T<typename TreeOrAccT::ValueType> SampleFromVoxels<TreeOrAccT, 1, true>::gradient(Vec3T<RealT> xyz) const
+__hostdev__ Vec3T<typename TreeOrAccT::ValueType> SampleFromVoxels<TreeOrAccT, 1, true>::gradient(Vec3T<RealT> xyz) const
{
this->cache(xyz);
return BaseT::gradient(xyz, mVal);
@@ -393,7 +393,7 @@
template<typename TreeOrAccT>
template<typename RealT, template<typename...> class Vec3T>
-void SampleFromVoxels<TreeOrAccT, 1, true>::cache(Vec3T<RealT>& xyz) const
+__hostdev__ void SampleFromVoxels<TreeOrAccT, 1, true>::cache(Vec3T<RealT>& xyz) const
{
CoordT ijk = Floor<CoordT>(xyz);
if (ijk != mPos) {
@@ -406,7 +406,7 @@
template<typename TreeOrAccT>
template<typename RealT, template<typename...> class Vec3T>
-typename TreeOrAccT::ValueType SampleFromVoxels<TreeOrAccT, 1, false>::operator()(Vec3T<RealT> xyz) const
+__hostdev__ typename TreeOrAccT::ValueType SampleFromVoxels<TreeOrAccT, 1, false>::operator()(Vec3T<RealT> xyz) const
{
ValueT val[2][2][2];
CoordT ijk = Floor<CoordT>(xyz);
@@ -418,7 +418,7 @@
template<typename TreeOrAccT>
template<typename RealT, template<typename...> class Vec3T>
-typename TreeOrAccT::ValueType SampleFromVoxels<TreeOrAccT, 1, false>::operator()(Vec3T<RealT> xyz) const
+__hostdev__ typename TreeOrAccT::ValueType SampleFromVoxels<TreeOrAccT, 1, false>::operator()(Vec3T<RealT> xyz) const
{
auto lerp = [](ValueT a, ValueT b, RealT w) { return a + ValueT(w) * (b - a); };
@@ -463,7 +463,7 @@
template<typename TreeOrAccT>
template<typename RealT, template<typename...> class Vec3T>
-inline Vec3T<typename TreeOrAccT::ValueType> SampleFromVoxels<TreeOrAccT, 1, false>::gradient(Vec3T<RealT> xyz) const
+inline __hostdev__ Vec3T<typename TreeOrAccT::ValueType> SampleFromVoxels<TreeOrAccT, 1, false>::gradient(Vec3T<RealT> xyz) const
{
ValueT val[2][2][2];
CoordT ijk = Floor<CoordT>(xyz);
@@ -473,7 +473,7 @@
template<typename TreeOrAccT>
template<typename RealT, template<typename...> class Vec3T>
-bool SampleFromVoxels<TreeOrAccT, 1, false>::zeroCrossing(Vec3T<RealT> xyz) const
+__hostdev__ bool SampleFromVoxels<TreeOrAccT, 1, false>::zeroCrossing(Vec3T<RealT> xyz) const
{
ValueT val[2][2][2];
CoordT ijk = Floor<CoordT>(xyz);
@@ -510,7 +510,7 @@
}; // TriquadraticSamplerBase
template<typename TreeOrAccT>
-void TriquadraticSampler<TreeOrAccT>::stencil(const CoordT &ijk, ValueT (&v)[3][3][3]) const
+__hostdev__ void TriquadraticSampler<TreeOrAccT>::stencil(const CoordT &ijk, ValueT (&v)[3][3][3]) const
{
CoordT p(ijk[0] - 1, 0, 0);
for (int dx = 0; dx < 3; ++dx, ++p[0]) {
@@ -526,7 +526,7 @@
template<typename TreeOrAccT>
template<typename RealT, template<typename...> class Vec3T>
-typename TreeOrAccT::ValueType TriquadraticSampler<TreeOrAccT>::sample(const Vec3T<RealT> &uvw, const ValueT (&v)[3][3][3])
+__hostdev__ typename TreeOrAccT::ValueType TriquadraticSampler<TreeOrAccT>::sample(const Vec3T<RealT> &uvw, const ValueT (&v)[3][3][3])
{
auto kernel = [](const ValueT* value, double weight)->ValueT {
return weight * (weight * (0.5f * (value[0] + value[2]) - value[1]) +
@@ -545,7 +545,7 @@
}
template<typename TreeOrAccT>
-bool TriquadraticSampler<TreeOrAccT>::zeroCrossing(const ValueT (&v)[3][3][3])
+__hostdev__ bool TriquadraticSampler<TreeOrAccT>::zeroCrossing(const ValueT (&v)[3][3][3])
{
static_assert(std::is_floating_point<ValueT>::value, "TrilinearSampler::zeroCrossing requires a floating-point type");
const bool less = v[0][0][0] < ValueT(0);
@@ -624,7 +624,7 @@
template<typename TreeOrAccT>
template<typename RealT, template<typename...> class Vec3T>
-typename TreeOrAccT::ValueType SampleFromVoxels<TreeOrAccT, 2, true>::operator()(Vec3T<RealT> xyz) const
+__hostdev__ typename TreeOrAccT::ValueType SampleFromVoxels<TreeOrAccT, 2, true>::operator()(Vec3T<RealT> xyz) const
{
this->cache(xyz);
return BaseT::sample(xyz, mVal);
@@ -631,7 +631,7 @@
}
template<typename TreeOrAccT>
-typename TreeOrAccT::ValueType SampleFromVoxels<TreeOrAccT, 2, true>::operator()(const CoordT &ijk) const
+__hostdev__ typename TreeOrAccT::ValueType SampleFromVoxels<TreeOrAccT, 2, true>::operator()(const CoordT &ijk) const
{
return ijk == mPos ? mVal[1][1][1] : BaseT::mAcc.getValue(ijk);
}
@@ -646,7 +646,7 @@
template<typename TreeOrAccT>
template<typename RealT, template<typename...> class Vec3T>
-void SampleFromVoxels<TreeOrAccT, 2, true>::cache(Vec3T<RealT>& xyz) const
+__hostdev__ void SampleFromVoxels<TreeOrAccT, 2, true>::cache(Vec3T<RealT>& xyz) const
{
CoordT ijk = Floor<CoordT>(xyz);
if (ijk != mPos) {
@@ -657,7 +657,7 @@
template<typename TreeOrAccT>
template<typename RealT, template<typename...> class Vec3T>
-typename TreeOrAccT::ValueType SampleFromVoxels<TreeOrAccT, 2, false>::operator()(Vec3T<RealT> xyz) const
+__hostdev__ typename TreeOrAccT::ValueType SampleFromVoxels<TreeOrAccT, 2, false>::operator()(Vec3T<RealT> xyz) const
{
ValueT val[3][3][3];
CoordT ijk = Floor<CoordT>(xyz);
@@ -667,7 +667,7 @@
template<typename TreeOrAccT>
template<typename RealT, template<typename...> class Vec3T>
-bool SampleFromVoxels<TreeOrAccT, 2, false>::zeroCrossing(Vec3T<RealT> xyz) const
+__hostdev__ bool SampleFromVoxels<TreeOrAccT, 2, false>::zeroCrossing(Vec3T<RealT> xyz) const
{
ValueT val[3][3][3];
CoordT ijk = Floor<CoordT>(xyz);
@@ -710,7 +710,7 @@
}; // TricubicSampler
template<typename TreeOrAccT>
-void TricubicSampler<TreeOrAccT>::stencil(const CoordT& ijk, ValueT (&C)[64]) const
+__hostdev__ void TricubicSampler<TreeOrAccT>::stencil(const CoordT& ijk, ValueT (&C)[64]) const
{
auto fetch = [&](int i, int j, int k) -> ValueT& { return C[((i + 1) << 4) + ((j + 1) << 2) + k + 1]; };
@@ -929,7 +929,7 @@
template<typename TreeOrAccT>
template<typename RealT, template<typename...> class Vec3T>
-typename TreeOrAccT::ValueType SampleFromVoxels<TreeOrAccT, 3, true>::operator()(Vec3T<RealT> xyz) const
+__hostdev__ typename TreeOrAccT::ValueType SampleFromVoxels<TreeOrAccT, 3, true>::operator()(Vec3T<RealT> xyz) const
{
this->cache(xyz);
return BaseT::sample(xyz, mC);
@@ -937,7 +937,7 @@
template<typename TreeOrAccT>
template<typename RealT, template<typename...> class Vec3T>
-void SampleFromVoxels<TreeOrAccT, 3, true>::cache(Vec3T<RealT>& xyz) const
+__hostdev__ void SampleFromVoxels<TreeOrAccT, 3, true>::cache(Vec3T<RealT>& xyz) const
{
CoordT ijk = Floor<CoordT>(xyz);
if (ijk != mPos) {
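
The sample() kernel shown above (truncated in this view) evaluates, per axis, a quadratic through three stencil values. A minimal Python sketch of that 1-D building block, assuming the standard three-point quadratic fit the visible first line of the kernel suggests:

def quad(v0, v1, v2, t):
    # Quadratic through v(-1) = v0, v(0) = v1, v(1) = v2, evaluated at offset t in [0, 1).
    return t * (t * (0.5 * (v0 + v2) - v1) + 0.5 * (v2 - v0)) + v1

# Exactly reproduces a parabola, e.g. f(x) = (x + 1)**2 sampled at x = 0.5:
print(quad(0.0, 1.0, 4.0, 0.5))  # 2.25

The full triquadratic sampler applies this along x, then y, then z over the 3x3x3 stencil.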

View File

@@ -0,0 +1,27 @@
diff --git a/numpy/distutils/system_info.py b/numpy/distutils/system_info.py
index ba2b1f4..b10f7df 100644
--- a/numpy/distutils/system_info.py
+++ b/numpy/distutils/system_info.py
@@ -2164,8 +2164,8 @@ class accelerate_info(system_info):
'accelerate' in libraries):
if intel:
args.extend(['-msse3'])
- else:
- args.extend(['-faltivec'])
+# else:
+# args.extend(['-faltivec'])
args.extend([
'-I/System/Library/Frameworks/vecLib.framework/Headers'])
link_args.extend(['-Wl,-framework', '-Wl,Accelerate'])
@@ -2174,8 +2174,8 @@ class accelerate_info(system_info):
'veclib' in libraries):
if intel:
args.extend(['-msse3'])
- else:
- args.extend(['-faltivec'])
+# else:
+# args.extend(['-faltivec'])
args.extend([
'-I/System/Library/Frameworks/vecLib.framework/Headers'])
link_args.extend(['-Wl,-framework', '-Wl,vecLib'])

View File

@@ -0,0 +1,40 @@
diff -Naur oidn-1.3.0/cmake/FindTBB.cmake external_openimagedenoise/cmake/FindTBB.cmake
--- oidn-1.3.0/cmake/FindTBB.cmake 2021-02-04 16:20:26 -0700
+++ external_openimagedenoise/cmake/FindTBB.cmake 2021-02-12 09:35:53 -0700
@@ -332,20 +332,22 @@
${TBB_ROOT}/lib/${TBB_ARCH}/${TBB_VCVER}
${TBB_ROOT}/lib
)
-
# On Windows, also search the DLL so that the client may install it.
file(GLOB DLL_NAMES
${TBB_ROOT}/bin/${TBB_ARCH}/${TBB_VCVER}/${LIB_NAME}.dll
${TBB_ROOT}/bin/${LIB_NAME}.dll
+ ${TBB_ROOT}/lib/${LIB_NAME}.dll
${TBB_ROOT}/redist/${TBB_ARCH}/${TBB_VCVER}/${LIB_NAME}.dll
${TBB_ROOT}/redist/${TBB_ARCH}/${TBB_VCVER}/${LIB_NAME_GLOB1}.dll
${TBB_ROOT}/redist/${TBB_ARCH}/${TBB_VCVER}/${LIB_NAME_GLOB2}.dll
${TBB_ROOT}/../redist/${TBB_ARCH}/tbb/${TBB_VCVER}/${LIB_NAME}.dll
${TBB_ROOT}/../redist/${TBB_ARCH}_win/tbb/${TBB_VCVER}/${LIB_NAME}.dll
)
- list(GET DLL_NAMES 0 DLL_NAME)
- get_filename_component(${BIN_DIR_VAR} "${DLL_NAME}" DIRECTORY)
- set(${DLL_VAR} "${DLL_NAME}" CACHE PATH "${COMPONENT_NAME} ${BUILD_CONFIG} dll path")
+ if (DLL_NAMES)
+ list(GET DLL_NAMES 0 DLL_NAME)
+ get_filename_component(${BIN_DIR_VAR} "${DLL_NAME}" DIRECTORY)
+ set(${DLL_VAR} "${DLL_NAME}" CACHE PATH "${COMPONENT_NAME} ${BUILD_CONFIG} dll path")
+ endif()
elseif(APPLE)
set(LIB_PATHS ${TBB_ROOT}/lib)
else()
--- external_openimagedenoise/cmake/oidn_ispc.cmake 2021-02-15 17:29:34.000000000 +0100
+++ external_openimagedenoise/cmake/oidn_ispc.cmake2 2021-02-15 17:29:28.000000000 +0100
@@ -98,7 +98,7 @@
elseif(OIDN_ARCH STREQUAL "ARM64")
set(ISPC_ARCHITECTURE "aarch64")
if(APPLE)
- set(ISPC_TARGET_OS "--target-os=ios")
+ set(ISPC_TARGET_OS "--target-os=macos")
endif()
endif()

View File

@@ -34,3 +34,24 @@ diff -Naur orig/src/include/OpenImageIO/platform.h external_openimageio/src/incl
# include <windows.h>
#endif
diff -Naur orig/src/libutil/ustring.cpp external_openimageio/src/libutil/ustring.cpp
--- orig/src/libutil/ustring.cpp 2020-05-11 05:43:52.000000000 +0200
+++ external_openimageio/src/libutil/ustring.cpp 2020-11-26 12:06:08.000000000 +0100
@@ -337,6 +337,8 @@
// the std::string to make it point to our chars! In such a case, the
// destructor will be careful not to allow a deallocation.
+ // Disable internal std::string for Apple silicon based Macs
+#if !(defined(__APPLE__) && defined(__arm64__))
#if defined(__GNUC__) && !defined(_LIBCPP_VERSION) \
&& defined(_GLIBCXX_USE_CXX11_ABI) && _GLIBCXX_USE_CXX11_ABI
// NEW gcc ABI
@@ -382,7 +384,7 @@
return;
}
#endif
-
+#endif
// Remaining cases - just assign the internal string. This may result
// in double allocation for the chars. If you care about that, do
// something special for your platform, much like we did for gcc and

View File

@@ -1,3 +1,18 @@
diff -Naur OpenShadingLanguage-Release-1.9.9/src/cmake/flexbison.cmake.rej external_osl/src/cmake/flexbison.cmake.rej
--- OpenShadingLanguage-Release-1.9.9/src/cmake/flexbison.cmake.rej 1969-12-31 17:00:00 -0700
+++ external_osl/src/cmake/flexbison.cmake.rej 2018-08-24 17:42:11 -0600
@@ -0,0 +1,11 @@
+--- src/cmake/flexbison.cmake 2018-05-01 16:39:02 -0600
++++ src/cmake/flexbison.cmake 2018-08-24 10:24:03 -0600
+@@ -77,7 +77,7 @@
+ DEPENDS ${${compiler_headers}}
+ WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR} )
+ ADD_CUSTOM_COMMAND ( OUTPUT ${flexoutputcxx}
+- COMMAND ${FLEX_EXECUTABLE} -o ${flexoutputcxx} "${CMAKE_CURRENT_SOURCE_DIR}/${flexsrc}"
++ COMMAND ${FLEX_EXECUTABLE} ${FLEX_EXTRA_OPTIONS} -o ${flexoutputcxx} "${CMAKE_CURRENT_SOURCE_DIR}/${flexsrc}"
+ MAIN_DEPENDENCY ${flexsrc}
+ DEPENDS ${${compiler_headers}}
+ WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR} )
diff -Naur OpenShadingLanguage-Release-1.9.9/src/include/OSL/llvm_util.h external_osl/src/include/OSL/llvm_util.h
--- OpenShadingLanguage-Release-1.9.9/src/include/OSL/llvm_util.h 2018-05-01 16:39:02 -0600
+++ external_osl/src/include/OSL/llvm_util.h 2018-08-25 14:05:00 -0600
@@ -48,50 +63,19 @@ diff -Naur org/CMakeLists.txt external_osl/CMakeLists.txt
set (OSL_NO_DEFAULT_TEXTURESYSTEM OFF CACHE BOOL "Do not use create a raw OIIO::TextureSystem")
if (OSL_NO_DEFAULT_TEXTURESYSTEM)
diff --git a/CMakeLists.txt b/CMakeLists.txt
index 990f50d69..46ef7351d 100644
--- a/CMakeLists.txt
+++ b/CMakeLists.txt
@@ -252,11 +252,9 @@ install (EXPORT OSL_EXPORTED_TARGETS
FILE ${OSL_TARGETS_EXPORT_NAME}
NAMESPACE ${PROJECT_NAME}::)
-
-
-
-osl_add_all_tests()
-
+if (${PROJECT_NAME}_BUILD_TESTS AND NOT ${PROJECT_NAME}_IS_SUBPROJECT)
+ osl_add_all_tests()
+endif ()
if (NOT ${PROJECT_NAME}_IS_SUBPROJECT)
include (packaging)
diff -Naur external_osl_orig/src/cmake/externalpackages.cmake external_osl/src/cmake/externalpackages.cmake
--- external_osl_orig/src/cmake/externalpackages.cmake 2021-06-01 13:44:18 -0600
+++ external_osl/src/cmake/externalpackages.cmake 2021-06-28 07:44:32 -0600
@@ -80,6 +80,7 @@
checked_find_package (ZLIB REQUIRED) # Needed by several packages
+checked_find_package (PNG REQUIRED) # Needed since OIIO needs it
# IlmBase & OpenEXR
checked_find_package (OpenEXR REQUIRED
diff -Naur external_osl_orig/src/liboslcomp/oslcomp.cpp external_osl/src/liboslcomp/oslcomp.cpp
--- external_osl_orig/src/liboslcomp/oslcomp.cpp 2021-06-01 13:44:18 -0600
+++ external_osl/src/liboslcomp/oslcomp.cpp 2021-06-28 09:11:06 -0600
@@ -21,6 +21,13 @@
#if !defined(__STDC_CONSTANT_MACROS)
# define __STDC_CONSTANT_MACROS 1
diff --git a/src/liboslexec/llvm_util.cpp b/src/liboslexec/llvm_util.cpp
index 445f6400..3d468de2 100644
--- a/src/liboslexec/llvm_util.cpp
+++ b/src/liboslexec/llvm_util.cpp
@@ -3430,8 +3430,9 @@ LLVM_Util::call_function (llvm::Value *func, cspan<llvm::Value *> args)
#endif
//llvm_gen_debug_printf (std::string("start ") + std::string(name));
#if OSL_LLVM_VERSION >= 110
- OSL_DASSERT(llvm::isa<llvm::Function>(func));
- llvm::Value *r = builder().CreateCall(llvm::cast<llvm::Function>(func), llvm::ArrayRef<llvm::Value *>(args.data(), args.size()));
+ llvm::Value* r = builder().CreateCall(
+ llvm::cast<llvm::FunctionType>(func->getType()->getPointerElementType()), func,
+ llvm::ArrayRef<llvm::Value*>(args.data(), args.size()));
#else
llvm::Value *r = builder().CreateCall (func, llvm::ArrayRef<llvm::Value *>(args.data(), args.size()));
#endif
+
+// clang uses CALLBACK in its templates which causes issues if it is already defined
+#if defined(_WIN32) && defined(CALLBACK)
+# undef CALLBACK
+#endif
+
+//
#include <clang/Basic/TargetInfo.h>
#include <clang/Frontend/CompilerInstance.h>
#include <clang/Frontend/TextDiagnosticPrinter.h>

View File

@@ -197,38 +197,3 @@ index 67ec0d15f..6dc3e85a0 100644
#else
#error Unknown architecture.
#endif
diff --git a/pxr/base/arch/demangle.cpp b/pxr/base/arch/demangle.cpp
index 67ec0d15f..6dc3e85a0 100644
--- a/pxr/base/arch/demangle.cpp
+++ b/pxr/base/arch/demangle.cpp
@@ -36,6 +36,7 @@
#if (ARCH_COMPILER_GCC_MAJOR == 3 && ARCH_COMPILER_GCC_MINOR >= 1) || \
ARCH_COMPILER_GCC_MAJOR > 3 || defined(ARCH_COMPILER_CLANG)
#define _AT_LEAST_GCC_THREE_ONE_OR_CLANG
+#include <cxxabi.h>
#endif
PXR_NAMESPACE_OPEN_SCOPE
@@ -138,7 +139,6 @@
#endif
#if defined(_AT_LEAST_GCC_THREE_ONE_OR_CLANG)
-#include <cxxabi.h>
/*
* This routine doesn't work when you get to gcc3.4.
diff --git a/pxr/base/work/singularTask.h b/pxr/base/work/singularTask.h
index 67ec0d15f..6dc3e85a0 100644
--- a/pxr/base/work/singularTask.h
+++ b/pxr/base/work/singularTask.h
@@ -120,7 +120,7 @@
// case we go again to ensure the task can do whatever it
// was awakened to do. Once we successfully take the count
// to zero, we stop.
- size_t old = count;
+ std::size_t old = count;
do { _fn(); } while (
!count.compare_exchange_strong(old, 0));
});

View File

@@ -79,9 +79,6 @@ set STAGING=%BUILD_DIR%\S
rem for python module build
set MSSdk=1
set DISTUTILS_USE_SDK=1
-rem if you let pip pick its own build dirs, it'll stick it somewhere deep inside the user profile
-rem and cython will refuse to link due to a path that gets too long.
-set TMPDIR=c:\t\
rem for python externals source to be shared between the various archs and compilers
mkdir %BUILD_DIR%\downloads\externals

View File

@@ -20,24 +20,8 @@ if(NOT CLANG_ROOT_DIR AND NOT $ENV{CLANG_ROOT_DIR} STREQUAL "")
set(CLANG_ROOT_DIR $ENV{CLANG_ROOT_DIR})
endif()
-if(NOT LLVM_ROOT_DIR)
-if(DEFINED LLVM_VERSION)
-message(running llvm-config-${LLVM_VERSION})
-find_program(LLVM_CONFIG llvm-config-${LLVM_VERSION})
-endif()
-if(NOT LLVM_CONFIG)
-find_program(LLVM_CONFIG llvm-config)
-endif()
-execute_process(COMMAND ${LLVM_CONFIG} --prefix
-OUTPUT_VARIABLE LLVM_ROOT_DIR
-OUTPUT_STRIP_TRAILING_WHITESPACE)
-set(LLVM_ROOT_DIR ${LLVM_ROOT_DIR} CACHE PATH "Path to the LLVM installation")
-endif()
set(_CLANG_SEARCH_DIRS
${CLANG_ROOT_DIR}
${LLVM_ROOT_DIR}
/opt/lib/clang
)

View File

@@ -472,7 +472,8 @@ if(NOT GFLAGS_FOUND)
gflags_report_not_found(
"Could not find gflags include directory, set GFLAGS_INCLUDE_DIR "
"to directory containing gflags/gflags.h")
-endif()
+endif(NOT GFLAGS_INCLUDE_DIR OR
+NOT EXISTS ${GFLAGS_INCLUDE_DIR})
find_library(GFLAGS_LIBRARY NAMES gflags
PATHS ${GFLAGS_LIBRARY_DIR_HINTS}
@@ -483,7 +484,8 @@ if(NOT GFLAGS_FOUND)
gflags_report_not_found(
"Could not find gflags library, set GFLAGS_LIBRARY "
"to full path to libgflags.")
-endif()
+endif(NOT GFLAGS_LIBRARY OR
+NOT EXISTS ${GFLAGS_LIBRARY})
# gflags typically requires a threading library (which is OS dependent), note
# that this defines the CMAKE_THREAD_LIBS_INIT variable. If we are able to
@@ -558,7 +560,8 @@ if(NOT GFLAGS_FOUND)
gflags_report_not_found(
"Caller defined GFLAGS_INCLUDE_DIR:"
" ${GFLAGS_INCLUDE_DIR} does not contain gflags/gflags.h header.")
-endif()
+endif(GFLAGS_INCLUDE_DIR AND
+NOT EXISTS ${GFLAGS_INCLUDE_DIR}/gflags/gflags.h)
# TODO: This regex for gflags library is pretty primitive, we use lowercase
# for comparison to handle Windows using CamelCase library names, could
# this check be better?
@@ -568,7 +571,8 @@ if(NOT GFLAGS_FOUND)
gflags_report_not_found(
"Caller defined GFLAGS_LIBRARY: "
"${GFLAGS_LIBRARY} does not match gflags.")
-endif()
+endif(GFLAGS_LIBRARY AND
+NOT "${LOWERCASE_GFLAGS_LIBRARY}" MATCHES ".*gflags[^/]*")
gflags_reset_find_library_prefix()

View File

@@ -1,81 +0,0 @@
# - Find HIP compiler
#
# This module defines
# HIP_HIPCC_EXECUTABLE, the full path to the hipcc executable
# HIP_VERSION, the HIP compiler version
#
# HIP_FOUND, if the HIP toolkit is found.
#=============================================================================
# Copyright 2021 Blender Foundation.
#
# Distributed under the OSI-approved BSD 3-Clause License,
# see accompanying file BSD-3-Clause-license.txt for details.
#=============================================================================
# If HIP_ROOT_DIR was defined in the environment, use it.
if(NOT HIP_ROOT_DIR AND NOT $ENV{HIP_ROOT_DIR} STREQUAL "")
set(HIP_ROOT_DIR $ENV{HIP_ROOT_DIR})
endif()
set(_hip_SEARCH_DIRS
${HIP_ROOT_DIR}
)
find_program(HIP_HIPCC_EXECUTABLE
NAMES
hipcc
HINTS
${_hip_SEARCH_DIRS}
PATH_SUFFIXES
bin
)
if(HIP_HIPCC_EXECUTABLE AND NOT EXISTS ${HIP_HIPCC_EXECUTABLE})
message(WARNING "Cached or directly specified hipcc executable does not exist.")
set(HIP_FOUND FALSE)
elseif(HIP_HIPCC_EXECUTABLE)
set(HIP_FOUND TRUE)
set(HIP_VERSION_MAJOR 0)
set(HIP_VERSION_MINOR 0)
set(HIP_VERSION_PATCH 0)
# Get version from the output.
execute_process(COMMAND ${HIP_HIPCC_EXECUTABLE} --version
OUTPUT_VARIABLE HIP_VERSION_RAW
ERROR_QUIET
OUTPUT_STRIP_TRAILING_WHITESPACE)
# Parse parts.
if(HIP_VERSION_RAW MATCHES "HIP version: .*")
# Strip the HIP prefix and get list of individual version components.
string(REGEX REPLACE
".*HIP version: ([.0-9]+).*" "\\1"
HIP_SEMANTIC_VERSION "${HIP_VERSION_RAW}")
string(REPLACE "." ";" HIP_VERSION_PARTS "${HIP_SEMANTIC_VERSION}")
list(LENGTH HIP_VERSION_PARTS NUM_HIP_VERSION_PARTS)
# Extract components into corresponding variables.
if(NUM_HIP_VERSION_PARTS GREATER 0)
list(GET HIP_VERSION_PARTS 0 HIP_VERSION_MAJOR)
endif()
if(NUM_HIP_VERSION_PARTS GREATER 1)
list(GET HIP_VERSION_PARTS 1 HIP_VERSION_MINOR)
endif()
if(NUM_HIP_VERSION_PARTS GREATER 2)
list(GET HIP_VERSION_PARTS 2 HIP_VERSION_PATCH)
endif()
# Unset temp variables.
unset(NUM_HIP_VERSION_PARTS)
unset(HIP_SEMANTIC_VERSION)
unset(HIP_VERSION_PARTS)
endif()
# Construct full semantic version.
set(HIP_VERSION "${HIP_VERSION_MAJOR}.${HIP_VERSION_MINOR}.${HIP_VERSION_PATCH}")
unset(HIP_VERSION_RAW)
else()
set(HIP_FOUND FALSE)
endif()
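
The FindHIP module above recovers the toolkit version by regex-matching hipcc's "HIP version: x.y.z" banner and splitting on dots, defaulting each missing component to 0. A small Python sketch of the same parsing logic (the sample banner string is hypothetical):

import re

def parse_hip_version(raw):
    # Mirror the CMake above: extract the semver after "HIP version:" and split on '.'.
    major, minor, patch = 0, 0, 0
    match = re.search(r"HIP version: ([.0-9]+)", raw)
    if match:
        parts = match.group(1).split(".")
        if len(parts) > 0:
            major = parts[0]
        if len(parts) > 1:
            minor = parts[1]
        if len(parts) > 2:
            patch = parts[2]
    return "{}.{}.{}".format(major, minor, patch)

print(parse_hip_version("HIP version: 4.2.21155"))  # -> 4.2.21155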

View File

@@ -40,7 +40,7 @@ FIND_PACKAGE_HANDLE_STANDARD_ARGS(NanoVDB DEFAULT_MSG
IF(NANOVDB_FOUND)
SET(NANOVDB_INCLUDE_DIRS ${NANOVDB_INCLUDE_DIR})
-ENDIF()
+ENDIF(NANOVDB_FOUND)
MARK_AS_ADVANCED(
NANOVDB_INCLUDE_DIR

View File

@@ -46,7 +46,7 @@ SET(_opencollada_FIND_COMPONENTS
)
# Fedora openCOLLADA package links these statically
-# note that order is important here or it won't link
+# note that order is important here ot it wont link
SET(_opencollada_FIND_STATIC_COMPONENTS
buffer
ftoa

View File

@@ -21,7 +21,7 @@ ENDIF()
SET(_optix_SEARCH_DIRS
${OPTIX_ROOT_DIR}
"$ENV{PROGRAMDATA}/NVIDIA Corporation/OptiX SDK 7.3.0"
"$ENV{PROGRAMDATA}/NVIDIA Corporation/OptiX SDK 7.0.0"
)
FIND_PATH(OPTIX_INCLUDE_DIR
@@ -33,23 +33,11 @@ FIND_PATH(OPTIX_INCLUDE_DIR
include
)
-IF(EXISTS "${OPTIX_INCLUDE_DIR}/optix.h")
-FILE(STRINGS "${OPTIX_INCLUDE_DIR}/optix.h" _optix_version REGEX "^#define OPTIX_VERSION[ \t].*$")
-STRING(REGEX MATCHALL "[0-9]+" _optix_version ${_optix_version})
-MATH(EXPR _optix_version_major "${_optix_version} / 10000")
-MATH(EXPR _optix_version_minor "(${_optix_version} % 10000) / 100")
-MATH(EXPR _optix_version_patch "${_optix_version} % 100")
-SET(OPTIX_VERSION "${_optix_version_major}.${_optix_version_minor}.${_optix_version_patch}")
-ENDIF()
# handle the QUIETLY and REQUIRED arguments and set OPTIX_FOUND to TRUE if
# all listed variables are TRUE
INCLUDE(FindPackageHandleStandardArgs)
-FIND_PACKAGE_HANDLE_STANDARD_ARGS(OptiX
-REQUIRED_VARS OPTIX_INCLUDE_DIR
-VERSION_VAR OPTIX_VERSION)
+FIND_PACKAGE_HANDLE_STANDARD_ARGS(OptiX DEFAULT_MSG
+OPTIX_INCLUDE_DIR)
IF(OPTIX_FOUND)
SET(OPTIX_INCLUDE_DIRS ${OPTIX_INCLUDE_DIR})
@@ -57,7 +45,6 @@ ENDIF()
MARK_AS_ADVANCED(
OPTIX_INCLUDE_DIR
-OPTIX_VERSION
)
UNSET(_optix_SEARCH_DIRS)
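
For reference, the version-detection block removed above decodes the packed OPTIX_VERSION integer as major*10000 + minor*100 + patch. A one-line Python sketch of that arithmetic:

def decode_optix_version(packed):
    # Same arithmetic as the MATH(EXPR ...) lines in the removed block.
    return "{}.{}.{}".format(packed // 10000, (packed % 10000) // 100, packed % 100)

print(decode_optix_version(70300))  # -> 7.3.0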

View File

@@ -44,7 +44,7 @@ SET(PYTHON_LINKFLAGS "-Xlinker -export-dynamic" CACHE STRING "Linker flags for p
MARK_AS_ADVANCED(PYTHON_LINKFLAGS)
-# if the user passes these defines as args, we don't want to overwrite
+# if the user passes these defines as args, we dont want to overwrite
SET(_IS_INC_DEF OFF)
SET(_IS_INC_CONF_DEF OFF)
SET(_IS_LIB_DEF OFF)
@@ -143,7 +143,7 @@ IF((NOT _IS_INC_DEF) OR (NOT _IS_INC_CONF_DEF) OR (NOT _IS_LIB_DEF) OR (NOT _IS_
SET(_PYTHON_ABI_FLAGS "${_CURRENT_ABI_FLAGS}")
break()
ELSE()
-# ensure we don't find values from 2 different ABI versions
+# ensure we dont find values from 2 different ABI versions
IF(NOT _IS_INC_DEF)
UNSET(PYTHON_INCLUDE_DIR CACHE)
ENDIF()

View File

@@ -1,66 +0,0 @@
# - Find Zstd library
# Find the native Zstd includes and library
# This module defines
# ZSTD_INCLUDE_DIRS, where to find zstd.h, Set when
# ZSTD_INCLUDE_DIR is found.
# ZSTD_LIBRARIES, libraries to link against to use Zstd.
# ZSTD_ROOT_DIR, The base directory to search for Zstd.
# This can also be an environment variable.
# ZSTD_FOUND, If false, do not try to use Zstd.
#
# also defined, but not for general use are
# ZSTD_LIBRARY, where to find the Zstd library.
#=============================================================================
# Copyright 2019 Blender Foundation.
#
# Distributed under the OSI-approved BSD License (the "License");
# see accompanying file Copyright.txt for details.
#
# This software is distributed WITHOUT ANY WARRANTY; without even the
# implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
# See the License for more information.
#=============================================================================
# If ZSTD_ROOT_DIR was defined in the environment, use it.
IF(NOT ZSTD_ROOT_DIR AND NOT $ENV{ZSTD_ROOT_DIR} STREQUAL "")
SET(ZSTD_ROOT_DIR $ENV{ZSTD_ROOT_DIR})
ENDIF()
SET(_zstd_SEARCH_DIRS
${ZSTD_ROOT_DIR}
)
FIND_PATH(ZSTD_INCLUDE_DIR
NAMES
zstd.h
HINTS
${_zstd_SEARCH_DIRS}
PATH_SUFFIXES
include
)
FIND_LIBRARY(ZSTD_LIBRARY
NAMES
zstd
HINTS
${_zstd_SEARCH_DIRS}
PATH_SUFFIXES
lib64 lib
)
# handle the QUIETLY and REQUIRED arguments and set ZSTD_FOUND to TRUE if
# all listed variables are TRUE
INCLUDE(FindPackageHandleStandardArgs)
FIND_PACKAGE_HANDLE_STANDARD_ARGS(Zstd DEFAULT_MSG
ZSTD_LIBRARY ZSTD_INCLUDE_DIR)
IF(ZSTD_FOUND)
SET(ZSTD_LIBRARIES ${ZSTD_LIBRARY})
SET(ZSTD_INCLUDE_DIRS ${ZSTD_INCLUDE_DIR})
ENDIF()
MARK_AS_ADVANCED(
ZSTD_INCLUDE_DIR
ZSTD_LIBRARY
)

View File

@@ -0,0 +1,49 @@
# - Find MessagePack (msgpack) library
# Find the native MessagePack includes and library
# This module defines
# MSGPACK_INCLUDE_DIRS, where to find spnav.h, Set when
# MSGPACK_INCLUDE_DIR is found.
# MSGPACK_ROOT_DIR, The base directory to search for msgpack.
# This can also be an environment variable.
# MSGPACK_FOUND, If false, do not try to use msgpack.
#
#=============================================================================
# Copyright 2021 Blender Foundation.
#
# Distributed under the OSI-approved BSD 3-Clause License,
# see accompanying file BSD-3-Clause-license.txt for details.
#=============================================================================
# If MSGPACK_ROOT_DIR was defined in the environment, use it.
IF(NOT MSGPACK_ROOT_DIR AND NOT $ENV{MSGPACK_ROOT_DIR} STREQUAL "")
SET(MSGPACK_ROOT_DIR $ENV{MSGPACK_ROOT_DIR})
ENDIF()
SET(_msgpack_SEARCH_DIRS
${MSGPACK_ROOT_DIR}
)
FIND_PATH(MSGPACK_INCLUDE_DIR
NAMES
msgpack/include/msgpack.hpp
HINTS
${_msgpack_SEARCH_DIRS}
PATH_SUFFIXES
include/msgpack
)
# handle the QUIETLY and REQUIRED arguments and set MSGPACK_FOUND to TRUE if
# all listed variables are TRUE
INCLUDE(FindPackageHandleStandardArgs)
FIND_PACKAGE_HANDLE_STANDARD_ARGS(Msgpack DEFAULT_MSG
MSGPACK_INCLUDE_DIR)
IF(MSGPACK_FOUND)
SET(MSGPACK_INCLUDE_DIRS ${MSGPACK_INCLUDE_DIR})
ENDIF()
MARK_AS_ADVANCED(
MSGPACK_INCLUDE_DIR
)
UNSET(_msgpack_SEARCH_DIRS)

View File

@@ -40,7 +40,7 @@ FIND_PACKAGE_HANDLE_STANDARD_ARGS(sse2neon DEFAULT_MSG
IF(SSE2NEON_FOUND)
SET(SSE2NEON_INCLUDE_DIRS ${SSE2NEON_INCLUDE_DIR})
-ENDIF()
+ENDIF(SSE2NEON_FOUND)
MARK_AS_ADVANCED(
SSE2NEON_INCLUDE_DIR

View File

@@ -168,7 +168,7 @@ def function_parm_wash_tokens(parm):
# if tokens[-1].kind == To
# remove trailing char
if tokens[-1].kind == TokenKind.PUNCTUATION:
-if tokens[-1].spelling in {",", ")", ";"}:
+if tokens[-1].spelling in (",", ")", ";"):
tokens.pop()
# else:
# print(tokens[-1].spelling)
@@ -179,7 +179,7 @@ def function_parm_wash_tokens(parm):
t_spelling = t.spelling
ok = True
if t_kind == TokenKind.KEYWORD:
-if t_spelling in {"const", "restrict", "volatile"}:
+if t_spelling in ("const", "restrict", "volatile"):
ok = False
elif t_spelling.startswith("__"):
ok = False # __restrict
@@ -305,7 +305,7 @@ def file_check_arg_sizes(tu):
for i, node_child in enumerate(children):
children = list(node_child.get_children())
-# skip if we don't have an index...
+# skip if we dont have an index...
size_def = args_size_definition.get(i, -1)
if size_def == -1:
@@ -354,7 +354,7 @@ def file_check_arg_sizes(tu):
filepath # always the same but useful when running threaded
))
-# we don't really care what we are looking at, just scan entire file for
+# we dont really care what we are looking at, just scan entire file for
# function calls.
def recursive_func_call_check(node):

View File

@@ -114,7 +114,7 @@ def is_c_header(filename: str) -> bool:
def is_c(filename: str) -> bool:
ext = splitext(filename)[1]
return (ext in {".c", ".cpp", ".cxx", ".m", ".mm", ".rc", ".cc", ".inl", ".metal"})
return (ext in {".c", ".cpp", ".cxx", ".m", ".mm", ".rc", ".cc", ".inl"})
def is_c_any(filename: str) -> bool:

View File

@@ -8,9 +8,6 @@ IGNORE_SOURCE = (
# specific source files
"extern/audaspace/",
-# Use for `WIN32` only.
-"source/creator/blender_launcher_win32.c",
# specific source files
"extern/bullet2/src/BulletCollision/CollisionDispatch/btBox2dBox2dCollisionAlgorithm.cpp",
"extern/bullet2/src/BulletCollision/CollisionDispatch/btConvex2dConvex2dAlgorithm.cpp",

View File

@@ -82,7 +82,7 @@ def create_nb_project_main():
make_exe = cmake_cache_var("CMAKE_MAKE_PROGRAM")
make_exe_basename = os.path.basename(make_exe)
-# --------------- NetBeans specific.
+# --------------- NB specific
defines = [("%s=%s" % cdef) if cdef[1] else cdef[0] for cdef in defines]
defines += [cdef.replace("#define", "").strip() for cdef in cmake_compiler_defines()]
@@ -180,7 +180,7 @@ def create_nb_project_main():
f.write(' </logicalFolder>\n')
f.write(' </logicalFolder>\n')
-# default, but this dir is in fact not in blender dir so we can ignore it
+# default, but this dir is infact not in blender dir so we can ignore it
# f.write(' <sourceFolderFilter>^(nbproject)$</sourceFolderFilter>\n')
f.write(r' <sourceFolderFilter>^(nbproject|__pycache__|.*\.py|.*\.html|.*\.blend)$</sourceFolderFilter>\n')

View File

@@ -24,7 +24,6 @@ import project_source_info
import subprocess
import sys
import os
-import tempfile
from typing import (
Any,
@@ -36,6 +35,7 @@ USE_QUIET = (os.environ.get("QUIET", None) is not None)
CHECKER_IGNORE_PREFIX = [
"extern",
"intern/moto",
]
CHECKER_BIN = "cppcheck"
@@ -47,19 +47,13 @@ CHECKER_ARGS = [
"--max-configs=1", # speeds up execution
# "--check-config", # when includes are missing
"--enable=all", # if you want sixty hundred pedantic suggestions
-# Quiet output, otherwise all defines/includes are printed (overly verbose).
-# Only enable this for troubleshooting (if defines are not set as expected for example).
-"--quiet",
-# NOTE: `--cppcheck-build-dir=<dir>` is added later as a temporary directory.
]
+if USE_QUIET:
+CHECKER_ARGS.append("--quiet")
-def cppcheck() -> None:
+def main() -> None:
source_info = project_source_info.build_info(ignore_prefix_list=CHECKER_IGNORE_PREFIX)
source_defines = project_source_info.build_defines_as_args()
@@ -84,10 +78,7 @@ def cppcheck() -> None:
percent_str = "[" + ("%.2f]" % percent).rjust(7) + " %:"
sys.stdout.flush()
sys.stdout.write("%s %s\n" % (
percent_str,
os.path.relpath(c, project_source_info.SOURCE_DIR)
))
sys.stdout.write("%s " % percent_str)
return subprocess.Popen(cmd)
@@ -99,11 +90,5 @@ def cppcheck() -> None:
print("Finished!")
-def main() -> None:
-with tempfile.TemporaryDirectory() as temp_dir:
-CHECKER_ARGS.append("--cppcheck-build-dir=" + temp_dir)
-cppcheck()
if __name__ == "__main__":
main()

View File

@@ -7,6 +7,7 @@
set(WITH_ASSERT_ABORT ON CACHE BOOL "" FORCE)
set(WITH_BUILDINFO OFF CACHE BOOL "" FORCE)
set(WITH_COMPILER_ASAN ON CACHE BOOL "" FORCE)
+set(WITH_CYCLES_DEBUG ON CACHE BOOL "" FORCE)
set(WITH_CYCLES_NATIVE_ONLY ON CACHE BOOL "" FORCE)
set(WITH_DOC_MANPAGE OFF CACHE BOOL "" FORCE)
set(WITH_GTESTS ON CACHE BOOL "" FORCE)

View File

@@ -29,7 +29,6 @@ set(WITH_IMAGE_OPENEXR ON CACHE BOOL "" FORCE)
set(WITH_IMAGE_OPENJPEG ON CACHE BOOL "" FORCE)
set(WITH_IMAGE_TIFF ON CACHE BOOL "" FORCE)
set(WITH_INPUT_NDOF ON CACHE BOOL "" FORCE)
-set(WITH_INPUT_IME ON CACHE BOOL "" FORCE)
set(WITH_INTERNATIONAL ON CACHE BOOL "" FORCE)
set(WITH_LIBMV ON CACHE BOOL "" FORCE)
set(WITH_LIBMV_SCHUR_SPECIALIZATIONS ON CACHE BOOL "" FORCE)

View File

@@ -9,7 +9,6 @@ set(WITH_INSTALL_PORTABLE ON CACHE BOOL "" FORCE)
set(WITH_ALEMBIC OFF CACHE BOOL "" FORCE)
set(WITH_AUDASPACE OFF CACHE BOOL "" FORCE)
set(WITH_BLENDER_THUMBNAILER OFF CACHE BOOL "" FORCE)
-set(WITH_BOOST OFF CACHE BOOL "" FORCE)
set(WITH_BUILDINFO OFF CACHE BOOL "" FORCE)
set(WITH_BULLET OFF CACHE BOOL "" FORCE)
@@ -19,6 +18,9 @@ set(WITH_CODEC_SNDFILE OFF CACHE BOOL "" FORCE)
set(WITH_COMPOSITOR OFF CACHE BOOL "" FORCE)
set(WITH_COREAUDIO OFF CACHE BOOL "" FORCE)
set(WITH_CYCLES OFF CACHE BOOL "" FORCE)
+set(WITH_CYCLES_DEVICE_OPTIX OFF CACHE BOOL "" FORCE)
+set(WITH_CYCLES_EMBREE OFF CACHE BOOL "" FORCE)
+set(WITH_CYCLES_OSL OFF CACHE BOOL "" FORCE)
set(WITH_DRACO OFF CACHE BOOL "" FORCE)
set(WITH_FFTW3 OFF CACHE BOOL "" FORCE)
set(WITH_FREESTYLE OFF CACHE BOOL "" FORCE)

View File

@@ -30,7 +30,6 @@ set(WITH_IMAGE_OPENEXR ON CACHE BOOL "" FORCE)
set(WITH_IMAGE_OPENJPEG ON CACHE BOOL "" FORCE)
set(WITH_IMAGE_TIFF ON CACHE BOOL "" FORCE)
set(WITH_INPUT_NDOF ON CACHE BOOL "" FORCE)
-set(WITH_INPUT_IME ON CACHE BOOL "" FORCE)
set(WITH_INTERNATIONAL ON CACHE BOOL "" FORCE)
set(WITH_LIBMV ON CACHE BOOL "" FORCE)
set(WITH_LIBMV_SCHUR_SPECIALIZATIONS ON CACHE BOOL "" FORCE)
@@ -61,7 +60,6 @@ set(WITH_MEM_JEMALLOC ON CACHE BOOL "" FORCE)
# platform dependent options
if(APPLE)
set(WITH_COREAUDIO ON CACHE BOOL "" FORCE)
-set(WITH_CYCLES_DEVICE_METAL ON CACHE BOOL "" FORCE)
endif()
if(NOT WIN32)
set(WITH_JACK ON CACHE BOOL "" FORCE)
@@ -82,5 +80,4 @@ if(NOT APPLE)
set(WITH_CYCLES_DEVICE_OPTIX ON CACHE BOOL "" FORCE)
set(WITH_CYCLES_CUDA_BINARIES ON CACHE BOOL "" FORCE)
set(WITH_CYCLES_CUBIN_COMPILER OFF CACHE BOOL "" FORCE)
-set(WITH_CYCLES_HIP_BINARIES ON CACHE BOOL "" FORCE)
endif()

View File

@@ -208,7 +208,7 @@ function(blender_source_group
)
# if enabled, use the sources directories as filters.
-if(IDE_GROUP_SOURCES_IN_FOLDERS)
+if(WINDOWS_USE_VISUAL_STUDIO_SOURCE_FOLDERS)
foreach(_SRC ${sources})
# remove ../'s
get_filename_component(_SRC_DIR ${_SRC} REALPATH)
@@ -240,8 +240,8 @@ function(blender_source_group
endforeach()
endif()
-# if enabled, set the FOLDER property for the projects
-if(IDE_GROUP_PROJECTS_IN_FOLDERS)
+# if enabled, set the FOLDER property for visual studio projects
+if(WINDOWS_USE_VISUAL_STUDIO_PROJECT_FOLDERS)
get_filename_component(FolderDir ${CMAKE_CURRENT_SOURCE_DIR} DIRECTORY)
string(REPLACE ${CMAKE_SOURCE_DIR} "" FolderDir ${FolderDir})
set_target_properties(${name} PROPERTIES FOLDER ${FolderDir})
@@ -529,7 +529,7 @@ function(SETUP_LIBDIRS)
# NOTE: For all new libraries, use absolute library paths.
# This should eventually be phased out.
-# APPLE platform uses full paths for linking libraries, and avoids link_directories.
+# APPLE plaform uses full paths for linking libraries, and avoids link_directories.
if(NOT MSVC AND NOT APPLE)
link_directories(${JPEG_LIBPATH} ${PNG_LIBPATH} ${ZLIB_LIBPATH} ${FREETYPE_LIBPATH})
@@ -694,7 +694,7 @@ macro(message_first_run)
endmacro()
# when we have warnings as errors applied globally this
-# needs to be removed for some external libs which we don't maintain.
+# needs to be removed for some external libs which we dont maintain.
# utility macro
macro(remove_cc_flag
@@ -794,7 +794,7 @@ macro(remove_extra_strict_flags)
endmacro()
# note, we can only append flags on a single file so we need to negate the options.
-# at the moment we can't shut up ffmpeg deprecations, so use this, but will
+# at the moment we cant shut up ffmpeg deprecations, so use this, but will
# probably add more removals here.
macro(remove_strict_c_flags_file
filenames)
@@ -963,6 +963,14 @@ macro(blender_project_hack_post)
unset(_reset_standard_cflags_rel)
unset(_reset_standard_cxxflags_rel)
+# ------------------------------------------------------------------
+# workaround for omission in cmake 2.8.4's GNU.cmake, fixed in 2.8.5
+if(CMAKE_COMPILER_IS_GNUCC)
+if(NOT DARWIN)
+set(CMAKE_INCLUDE_SYSTEM_FLAG_C "-isystem ")
+endif()
+endif()
endmacro()
# pair of macros to allow libraries to be specify files to install, but to
@@ -1292,6 +1300,29 @@ macro(openmp_delayload
endif()
endmacro()
+macro(blender_precompile_headers target cpp header)
+if(MSVC)
+# get the name for the pch output file
+get_filename_component(pchbase ${cpp} NAME_WE)
+set(pchfinal "${CMAKE_CURRENT_BINARY_DIR}/${CMAKE_CFG_INTDIR}/${pchbase}.pch")
+# mark the cpp as the one outputting the pch
+set_property(SOURCE ${cpp} APPEND PROPERTY OBJECT_OUTPUTS "${pchfinal}")
+# get all sources for the target
+get_target_property(sources ${target} SOURCES)
+# make all sources depend on the pch to enforce the build order
+foreach(src ${sources})
+set_property(SOURCE ${src} APPEND PROPERTY OBJECT_DEPENDS "${pchfinal}")
+endforeach()
+target_sources(${target} PRIVATE ${cpp} ${header})
+set_target_properties(${target} PROPERTIES COMPILE_FLAGS "/Yu${header} /Fp${pchfinal} /FI${header}")
+set_source_files_properties(${cpp} PROPERTIES COMPILE_FLAGS "/Yc${header} /Fp${pchfinal}")
+endif()
+endmacro()
macro(set_and_warn_dependency
_dependency _setting _val)
# when $_dependency is disabled, forces $_setting = $_val

View File

@@ -257,6 +257,9 @@ if(WITH_BOOST)
if(WITH_INTERNATIONAL)
list(APPEND _boost_FIND_COMPONENTS locale)
endif()
+if(WITH_CYCLES_NETWORK)
+list(APPEND _boost_FIND_COMPONENTS serialization)
+endif()
if(WITH_OPENVDB)
list(APPEND _boost_FIND_COMPONENTS iostreams)
endif()
@@ -336,7 +339,7 @@ if(WITH_LLVM)
endif()
-if(WITH_CYCLES AND WITH_CYCLES_OSL)
+if(WITH_CYCLES_OSL)
set(CYCLES_OSL ${LIBDIR}/osl)
find_library(OSL_LIB_EXEC NAMES oslexec PATHS ${CYCLES_OSL}/lib)
@@ -356,7 +359,7 @@ if(WITH_CYCLES AND WITH_CYCLES_OSL)
endif()
endif()
-if(WITH_CYCLES AND WITH_CYCLES_EMBREE)
+if(WITH_CYCLES_EMBREE)
find_package(Embree 3.8.0 REQUIRED)
# Increase stack size for Embree, only works for executables.
if(NOT WITH_PYTHON_MODULE)
@@ -385,10 +388,6 @@ endif()
if(WITH_TBB)
find_package(TBB)
-if(NOT TBB_FOUND)
-message(WARNING "TBB not found, disabling WITH_TBB")
-set(WITH_TBB OFF)
-endif()
endif()
if(WITH_POTRACE)
@@ -401,16 +400,32 @@ endif()
# CMake FindOpenMP doesn't know about AppleClang before 3.12, so provide custom flags.
if(WITH_OPENMP)
if(CMAKE_C_COMPILER_ID MATCHES "Clang")
if(CMAKE_C_COMPILER_ID MATCHES "Clang" AND CMAKE_C_COMPILER_VERSION VERSION_GREATER_EQUAL "7.0")
# Use OpenMP from our precompiled libraries.
message(STATUS "Using ${LIBDIR}/openmp for OpenMP")
set(OPENMP_CUSTOM ON)
set(OPENMP_FOUND ON)
set(OpenMP_C_FLAGS "-Xclang -fopenmp -I'${LIBDIR}/openmp/include'")
set(OpenMP_CXX_FLAGS "-Xclang -fopenmp -I'${LIBDIR}/openmp/include'")
-set(OpenMP_LIBRARY_DIR "${LIBDIR}/openmp/lib/")
-set(OpenMP_LINKER_FLAGS "-L'${OpenMP_LIBRARY_DIR}' -lomp")
-set(OpenMP_LIBRARY "${OpenMP_LIBRARY_DIR}/libomp.dylib")
+set(OpenMP_LINKER_FLAGS "-L'${LIBDIR}/openmp/lib' -lomp")
# Copy libomp.dylib to allow executables like datatoc and tests to work.
# `@executable_path/../Resources/lib/` `LC_ID_DYLIB` is added by the deps builder.
# For single config generator datatoc, tests etc.
execute_process(
COMMAND mkdir -p ${CMAKE_BINARY_DIR}/Resources/lib
COMMAND cp -p ${LIBDIR}/openmp/lib/libomp.dylib ${CMAKE_BINARY_DIR}/Resources/lib/libomp.dylib
)
# For multi-config generator datatoc, etc.
execute_process(
COMMAND mkdir -p ${CMAKE_BINARY_DIR}/bin/Resources/lib
COMMAND cp -p ${LIBDIR}/openmp/lib/libomp.dylib ${CMAKE_BINARY_DIR}/bin/Resources/lib/libomp.dylib
)
# For multi-config generator tests.
execute_process(
COMMAND mkdir -p ${CMAKE_BINARY_DIR}/bin/tests/Resources/lib
COMMAND cp -p ${LIBDIR}/openmp/lib/libomp.dylib ${CMAKE_BINARY_DIR}/bin/tests/Resources/lib/libomp.dylib
)
endif()
endif()
@@ -438,9 +453,6 @@ if(WITH_HARU)
endif()
endif()
-set(ZSTD_ROOT_DIR ${LIBDIR}/zstd)
-find_package(Zstd REQUIRED)
if(EXISTS ${LIBDIR})
without_system_libs_end()
endif()
@@ -464,8 +476,10 @@ else()
set(CMAKE_CXX_FLAGS_RELEASE "-O2 -mdynamic-no-pic")
endif()
-# Clang has too low template depth of 128 for libmv.
-string(APPEND CMAKE_CXX_FLAGS " -ftemplate-depth=1024")
+if(${XCODE_VERSION} VERSION_EQUAL 5 OR ${XCODE_VERSION} VERSION_GREATER 5)
+# Xcode 5 is always using CLANG, which has too low template depth of 128 for libmv
+string(APPEND CMAKE_CXX_FLAGS " -ftemplate-depth=1024")
+endif()
# Avoid conflicts with Luxrender, and other plug-ins that may use the same
# libraries as Blender with a different version or build options.
@@ -495,15 +509,3 @@ if(WITH_COMPILER_CCACHE)
endif()
endif()
endif()
-# For binaries that are built but not installed (also not distributed) (datatoc,
-# makesdna, tests, etc.), we add an rpath to the OpenMP library dir through
-# CMAKE_BUILD_RPATH. This avoids having to make many copies of the dylib next to each binary.
-#
-# For the installed Python module and installed Blender executable, CMAKE_INSTALL_RPATH
-# is modified to find the dylib in an adjacent folder. Install step puts the libraries there.
-set(CMAKE_SKIP_BUILD_RPATH FALSE)
-list(APPEND CMAKE_BUILD_RPATH "${OpenMP_LIBRARY_DIR}")
-set(CMAKE_SKIP_INSTALL_RPATH FALSE)
-list(APPEND CMAKE_INSTALL_RPATH "@loader_path/../Resources/${BLENDER_VERSION}/lib")

View File

@@ -96,7 +96,7 @@ else()
# Detect SDK version to use.
if(NOT DEFINED OSX_SYSTEM)
execute_process(
-COMMAND xcrun --sdk macosx --show-sdk-version
+COMMAND xcrun --show-sdk-version
OUTPUT_VARIABLE OSX_SYSTEM
OUTPUT_STRIP_TRAILING_WHITESPACE)
endif()

View File

@@ -18,7 +18,7 @@
# All rights reserved.
# ***** END GPL LICENSE BLOCK *****
-# Libraries configuration for any *nix system including Linux and Unix (excluding APPLE).
+# Libraries configuration for any *nix system including Linux and Unix.
# Detect precompiled library directory
if(NOT DEFINED LIBDIR)
@@ -99,7 +99,6 @@ endif()
find_package_wrapper(JPEG REQUIRED)
find_package_wrapper(PNG REQUIRED)
find_package_wrapper(ZLIB REQUIRED)
-find_package_wrapper(Zstd REQUIRED)
find_package_wrapper(Freetype REQUIRED)
if(WITH_PYTHON)
@@ -241,7 +240,7 @@ if(WITH_INPUT_NDOF)
endif()
endif()
-if(WITH_CYCLES AND WITH_CYCLES_OSL)
+if(WITH_CYCLES_OSL)
set(CYCLES_OSL ${LIBDIR}/osl CACHE PATH "Path to OpenShadingLanguage installation")
if(EXISTS ${CYCLES_OSL} AND NOT OSL_ROOT)
set(OSL_ROOT ${CYCLES_OSL})
@@ -314,7 +313,7 @@ if(WITH_BOOST)
endif()
set(Boost_USE_MULTITHREADED ON)
set(__boost_packages filesystem regex thread date_time)
-if(WITH_CYCLES AND WITH_CYCLES_OSL)
+if(WITH_CYCLES_OSL)
if(NOT (${OSL_LIBRARY_VERSION_MAJOR} EQUAL "1" AND ${OSL_LIBRARY_VERSION_MINOR} LESS "6"))
list(APPEND __boost_packages wave)
else()
@@ -323,6 +322,9 @@ if(WITH_BOOST)
if(WITH_INTERNATIONAL)
list(APPEND __boost_packages locale)
endif()
+if(WITH_CYCLES_NETWORK)
+list(APPEND __boost_packages serialization)
+endif()
if(WITH_OPENVDB)
list(APPEND __boost_packages iostreams)
endif()
@@ -400,7 +402,7 @@ if(WITH_OPENCOLORIO)
endif()
endif()
-if(WITH_CYCLES AND WITH_CYCLES_EMBREE)
+if(WITH_CYCLES_EMBREE)
find_package(Embree 3.8.0 REQUIRED)
endif()
@@ -455,10 +457,6 @@ endif()
if(WITH_TBB)
find_package_wrapper(TBB)
-if(NOT TBB_FOUND)
-message(WARNING "TBB not found, disabling WITH_TBB")
-set(WITH_TBB OFF)
-endif()
endif()
if(WITH_XR_OPENXR)

View File

@@ -27,7 +27,7 @@ if(NOT MSVC)
endif()
if(CMAKE_C_COMPILER_ID MATCHES "Clang")
-set(MSVC_CLANG ON)
+set(MSVC_CLANG On)
set(VC_TOOLS_DIR $ENV{VCToolsRedistDir} CACHE STRING "Location of the msvc redistributables")
set(MSVC_REDIST_DIR ${VC_TOOLS_DIR})
if(DEFINED MSVC_REDIST_DIR)
@@ -53,10 +53,12 @@ if(CMAKE_C_COMPILER_ID MATCHES "Clang")
endif()
if(WITH_WINDOWS_STRIPPED_PDB)
message(WARNING "stripped pdb not supported with clang, disabling..")
-set(WITH_WINDOWS_STRIPPED_PDB OFF)
+set(WITH_WINDOWS_STRIPPED_PDB Off)
endif()
endif()
set_property(GLOBAL PROPERTY USE_FOLDERS ${WINDOWS_USE_VISUAL_STUDIO_PROJECT_FOLDERS})
if(NOT WITH_PYTHON_MODULE)
set_property(DIRECTORY PROPERTY VS_STARTUP_PROJECT blender)
endif()
@@ -151,15 +153,15 @@ if(MSVC_CLANG) # Clangs version of cl doesn't support all flags
string(APPEND CMAKE_CXX_FLAGS " ${CXX_WARN_FLAGS} /nologo /J /Gd /EHsc -Wno-unused-command-line-argument -Wno-microsoft-enum-forward-reference ")
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} /nologo /J /Gd -Wno-unused-command-line-argument -Wno-microsoft-enum-forward-reference")
else()
string(APPEND CMAKE_CXX_FLAGS " /nologo /J /Gd /MP /EHsc /bigobj /Zc:inline")
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} /nologo /J /Gd /MP /bigobj /Zc:inline")
string(APPEND CMAKE_CXX_FLAGS " /nologo /J /Gd /MP /EHsc /bigobj")
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} /nologo /J /Gd /MP /bigobj")
endif()
# X64 ASAN is available and usable on MSVC 16.9 preview 4 and up)
if(WITH_COMPILER_ASAN AND MSVC AND NOT MSVC_CLANG)
if(CMAKE_CXX_COMPILER_VERSION VERSION_GREATER_EQUAL 19.28.29828)
#set a flag so we don't have to do this comparison all the time
-SET(MSVC_ASAN ON)
+SET(MSVC_ASAN On)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} /fsanitize=address")
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} /fsanitize=address")
string(APPEND CMAKE_EXE_LINKER_FLAGS_DEBUG " /INCREMENTAL:NO")
@@ -179,22 +181,22 @@ endif()
if(WITH_WINDOWS_SCCACHE AND CMAKE_VS_MSBUILD_COMMAND)
message(WARNING "Disabling sccache, sccache is not supported with msbuild")
-set(WITH_WINDOWS_SCCACHE OFF)
+set(WITH_WINDOWS_SCCACHE Off)
endif()
# Debug Symbol format
# sccache # MSVC_ASAN # format # why
-# ON # ON # Z7 # sccache will only play nice with Z7
-# ON # OFF # Z7 # sccache will only play nice with Z7
-# OFF # ON # Zi # Asan will not play nice with Edit and Continue
-# OFF # OFF # ZI # Neither asan nor sscache is enabled Edit and Continue is available
+# On # On # Z7 # sccache will only play nice with Z7
+# On # Off # Z7 # sccache will only play nice with Z7
+# Off # On # Zi # Asan will not play nice with Edit and Continue
+# Off # Off # ZI # Neither asan nor sscache is enabled Edit and Continue is available
# Release Symbol format
# sccache # MSVC_ASAN # format # why
-# ON # ON # Z7 # sccache will only play nice with Z7
-# ON # OFF # Z7 # sccache will only play nice with Z7
-# OFF # ON # Zi # Asan will not play nice with Edit and Continue
-# OFF # OFF # Zi # Edit and Continue disables some optimizations
+# On # On # Z7 # sccache will only play nice with Z7
+# On # Off # Z7 # sccache will only play nice with Z7
+# Off # On # Zi # Asan will not play nice with Edit and Continue
+# Off # Off # Zi # Edit and Continue disables some optimizations
if(WITH_WINDOWS_SCCACHE)
@@ -215,8 +217,8 @@ else()
endif()
if(WITH_WINDOWS_PDB)
-set(PDB_INFO_OVERRIDE_FLAGS "${SYMBOL_FORMAT_RELEASE}")
-set(PDB_INFO_OVERRIDE_LINKER_FLAGS "/DEBUG /OPT:REF /OPT:ICF /INCREMENTAL:NO")
+set(PDB_INFO_OVERRIDE_FLAGS "${SYMBOL_FORMAT_RELEASE}")
+set(PDB_INFO_OVERRIDE_LINKER_FLAGS "/DEBUG /OPT:REF /OPT:ICF /INCREMENTAL:NO")
endif()
string(APPEND CMAKE_CXX_FLAGS_DEBUG " /MDd ${SYMBOL_FORMAT}")
@@ -259,10 +261,8 @@ if(NOT DEFINED LIBDIR)
else()
message(FATAL_ERROR "32 bit compiler detected, blender no longer provides pre-build libraries for 32 bit windows, please set the LIBDIR cmake variable to your own library folder")
endif()
-if(CMAKE_CXX_COMPILER_VERSION VERSION_GREATER_EQUAL 19.30.30423)
-message(STATUS "Visual Studio 2022 detected.")
-set(LIBDIR ${CMAKE_SOURCE_DIR}/../lib/${LIBDIR_BASE}_vc15)
-elseif(MSVC_VERSION GREATER 1919)
+# Can be 1910..1912
+if(MSVC_VERSION GREATER 1919)
message(STATUS "Visual Studio 2019 detected.")
set(LIBDIR ${CMAKE_SOURCE_DIR}/../lib/${LIBDIR_BASE}_vc15)
elseif(MSVC_VERSION GREATER 1909)
@@ -288,7 +288,7 @@ if(CMAKE_GENERATOR MATCHES "^Visual Studio.+" AND # Only supported in the VS IDE
"EnableMicrosoftCodeAnalysis=false"
"EnableClangTidyCodeAnalysis=true"
)
-set(VS_CLANG_TIDY ON)
+set(VS_CLANG_TIDY On)
endif()
# Mark libdir as system headers with a lower warn level, to resolve some warnings
@@ -469,7 +469,7 @@ if(WITH_PYTHON)
set(PYTHON_INCLUDE_DIR ${LIBDIR}/python/${_PYTHON_VERSION_NO_DOTS}/include)
set(PYTHON_NUMPY_INCLUDE_DIRS ${LIBDIR}/python/${_PYTHON_VERSION_NO_DOTS}/lib/site-packages/numpy/core/include)
-set(NUMPY_FOUND ON)
+set(NUMPY_FOUND On)
unset(_PYTHON_VERSION_NO_DOTS)
# uncached vars
set(PYTHON_INCLUDE_DIRS "${PYTHON_INCLUDE_DIR}")
@@ -477,7 +477,7 @@ if(WITH_PYTHON)
endif()
if(WITH_BOOST)
-if(WITH_CYCLES AND WITH_CYCLES_OSL)
+if(WITH_CYCLES_OSL)
set(boost_extra_libs wave)
endif()
if(WITH_INTERNATIONAL)
@@ -520,7 +520,7 @@ if(WITH_BOOST)
debug ${BOOST_LIBPATH}/libboost_thread-${BOOST_DEBUG_POSTFIX}
debug ${BOOST_LIBPATH}/libboost_chrono-${BOOST_DEBUG_POSTFIX}
)
-if(WITH_CYCLES AND WITH_CYCLES_OSL)
+if(WITH_CYCLES_OSL)
set(BOOST_LIBRARIES ${BOOST_LIBRARIES}
optimized ${BOOST_LIBPATH}/libboost_wave-${BOOST_POSTFIX}
debug ${BOOST_LIBPATH}/libboost_wave-${BOOST_DEBUG_POSTFIX})
@@ -548,6 +548,7 @@ if(WITH_OPENIMAGEIO)
set(OPENIMAGEIO_LIBRARIES ${OIIO_OPTIMIZED} ${OIIO_DEBUG})
set(OPENIMAGEIO_DEFINITIONS "-DUSE_TBB=0")
set(OPENCOLORIO_DEFINITIONS "-DDOpenColorIO_SKIP_IMPORTS")
set(OPENIMAGEIO_IDIFF "${OPENIMAGEIO}/bin/idiff.exe")
add_definitions(-DOIIO_STATIC_DEFINE)
add_definitions(-DOIIO_NO_SSE=1)
@@ -593,7 +594,7 @@ if(WITH_OPENCOLORIO)
debug ${OPENCOLORIO_LIBPATH}/libexpatdMD.lib
debug ${OPENCOLORIO_LIBPATH}/pystring_d.lib
)
set(OPENCOLORIO_DEFINITIONS "-DOpenColorIO_SKIP_IMPORTS")
set(OPENCOLORIO_DEFINITIONS)
endif()
if(WITH_OPENVDB)
@@ -678,7 +679,6 @@ if(WITH_TBB)
set(TBB_INCLUDE_DIR ${LIBDIR}/tbb/include)
set(TBB_INCLUDE_DIRS ${TBB_INCLUDE_DIR})
if(WITH_TBB_MALLOC_PROXY)
set(TBB_MALLOC_LIBRARIES optimized ${LIBDIR}/tbb/lib/tbbmalloc.lib debug ${LIBDIR}/tbb/lib/tbbmalloc_debug.lib)
add_definitions(-DWITH_TBB_MALLOC)
endif()
endif()
@@ -708,7 +708,7 @@ if(WITH_CODEC_SNDFILE)
set(LIBSNDFILE_LIBRARIES ${LIBSNDFILE_LIBPATH}/libsndfile-1.lib)
endif()
-if(WITH_CYCLES AND WITH_CYCLES_OSL)
+if(WITH_CYCLES_OSL)
set(CYCLES_OSL ${LIBDIR}/osl CACHE PATH "Path to OpenShadingLanguage installation")
set(OSL_SHADER_DIR ${CYCLES_OSL}/shaders)
# Shaders have moved around a bit between OSL versions, check multiple locations
@@ -741,7 +741,7 @@ if(WITH_CYCLES AND WITH_CYCLES_OSL)
endif()
endif()
-if(WITH_CYCLES AND WITH_CYCLES_EMBREE)
+if(WITH_CYCLES_EMBREE)
windows_find_package(Embree)
if(NOT EMBREE_FOUND)
set(EMBREE_INCLUDE_DIRS ${LIBDIR}/embree/include)
@@ -853,18 +853,18 @@ if(WITH_GMP)
set(GMP_INCLUDE_DIRS ${LIBDIR}/gmp/include)
set(GMP_LIBRARIES ${LIBDIR}/gmp/lib/libgmp-10.lib optimized ${LIBDIR}/gmp/lib/libgmpxx.lib debug ${LIBDIR}/gmp/lib/libgmpxx_d.lib)
set(GMP_ROOT_DIR ${LIBDIR}/gmp)
-set(GMP_FOUND ON)
+set(GMP_FOUND On)
endif()
if(WITH_POTRACE)
set(POTRACE_INCLUDE_DIRS ${LIBDIR}/potrace/include)
set(POTRACE_LIBRARIES ${LIBDIR}/potrace/lib/potrace.lib)
-set(POTRACE_FOUND ON)
+set(POTRACE_FOUND On)
endif()
if(WITH_HARU)
if(EXISTS ${LIBDIR}/haru)
-set(HARU_FOUND ON)
+set(HARU_FOUND On)
set(HARU_ROOT_DIR ${LIBDIR}/haru)
set(HARU_INCLUDE_DIRS ${HARU_ROOT_DIR}/include)
set(HARU_LIBRARIES ${HARU_ROOT_DIR}/lib/libhpdfs.lib)
@@ -873,6 +873,3 @@ if(WITH_HARU)
set(WITH_HARU OFF)
endif()
endif()
-set(ZSTD_INCLUDE_DIRS ${LIBDIR}/zstd/include)
-set(ZSTD_LIBRARIES ${LIBDIR}/zstd/lib/zstd_static.lib)

View File

@@ -27,7 +27,7 @@ if(WITH_WINDOWS_BUNDLE_CRT)
# Install the CRT to the blender.crt Sub folder.
install(FILES ${CMAKE_INSTALL_SYSTEM_RUNTIME_LIBS} DESTINATION ./blender.crt COMPONENT Libraries)
# Generating the manifest is a relatively expensive operation since
# Generating the manifest is a relativly expensive operation since
# it is collecting an sha1 hash for every file required. so only do
# this work when the libs have either changed or the manifest does
# not exist yet.

View File

@@ -243,9 +243,7 @@ def build_defines_as_args() -> List[str]:
# use this module.
def queue_processes(
process_funcs: Sequence[Tuple[Callable[..., subprocess.Popen[Any]], Tuple[Any, ...]]],
-*,
job_total: int = -1,
sleep: float = 0.1,
) -> None:
""" Takes a list of function arg pairs, each function must return a process
"""
@@ -273,20 +271,14 @@ def queue_processes(
if len(processes) <= job_total:
break
time.sleep(sleep)
else:
time.sleep(0.1)
sys.stdout.flush()
sys.stderr.flush()
processes.append(func(*args))
# Don't return until all jobs have finished.
while 1:
processes[:] = [p for p in processes if p.poll() is None]
if not processes:
break
time.sleep(sleep)
def main() -> None:
if not os.path.exists(join(CMAKE_DIR, "CMakeCache.txt")):
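
The queue_processes helper above throttles subprocess launches: finished children (poll() returning non-None) are dropped from the list, and a new job starts only once the live count is below job_total. A self-contained Python sketch of that pattern, with an illustrative command list:

import subprocess
import time

def run_throttled(commands, job_total=4, sleep=0.1):
    processes = []
    for cmd in commands:
        # Wait until a slot frees up.
        while True:
            processes[:] = [p for p in processes if p.poll() is None]
            if len(processes) < job_total:
                break
            time.sleep(sleep)
        processes.append(subprocess.Popen(cmd))
    # Don't return until all jobs have finished.
    while processes:
        processes[:] = [p for p in processes if p.poll() is None]
        time.sleep(sleep)

run_throttled([["echo", "a"], ["echo", "b"], ["echo", "c"]], job_total=2)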

View File

@@ -1,10 +1,8 @@
Pipeline Config
===============
-The `yaml` configuration file is used by buildbot build pipeline `update-code` step.
+This configuration file is used by buildbot new pipeline for the `update-code` step.
The file allows setting branches or specific commits for both git submodules and svn artifacts. It can also define various build package versions for use by build workers. Especially useful in experimental and release branches.
+It will soon be used by the ../utils/make_update.py script.
-NOTE:
-* The configuration file is ```NOT``` used by the `../utils/make_update.py` script.
-* That will be implemented in the future.
Both buildbot and developers will eventually use the same configuration file.

View File

@@ -0,0 +1,87 @@
{
"update-code":
{
"git" :
{
"submodules":
[
{ "path": "release/scripts/addons", "branch": "master", "commit_id": "HEAD" },
{ "path": "release/scripts/addons_contrib", "branch": "master", "commit_id": "HEAD" },
{ "path": "release/datafiles/locale", "branch": "master", "commit_id": "HEAD" },
{ "path": "source/tools", "branch": "master", "commit_id": "HEAD" }
]
},
"svn":
{
"tests": { "path": "lib/tests", "branch": "trunk", "commit_id": "HEAD" },
"libraries":
{
"darwin-x86_64": { "path": "lib/darwin", "branch": "trunk", "commit_id": "HEAD" },
"darwin-arm64": { "path": "lib/darwin_arm64", "branch": "trunk", "commit_id": "HEAD" },
"linux-x86_64": { "path": "lib/linux_centos7_x86_64", "branch": "trunk", "commit_id": "HEAD" },
"windows-amd64": { "path": "lib/win64_vc15", "branch": "trunk", "commit_id": "HEAD" }
}
}
},
"buildbot":
{
"gcc":
{
"version": "9.0"
},
"sdks":
{
"optix":
{
"version": "7.1.0"
},
"cuda10":
{
"version": "10.1"
},
"cuda11":
{
"version": "11.3"
}
},
"cmake":
{
"default":
{
"version": "any",
"overrides":
{
}
},
"darwin-x86_64":
{
"overrides":
{
}
},
"darwin-arm64":
{
"overrides":
{
}
},
"linux-x86_64":
{
"overrides":
{
}
},
"windows-amd64":
{
"overrides":
{
}
}
}
}
}

View File

@@ -1,70 +0,0 @@
#
# Used by Buildbot build pipeline make_update.py script only for now
# We intended to update the make_update.py in the branches to use this file eventually
#
update-code:
git:
submodules:
- branch: master
commit_id: HEAD
path: release/scripts/addons
- branch: master
commit_id: HEAD
path: release/scripts/addons_contrib
- branch: master
commit_id: HEAD
path: release/datafiles/locale
- branch: master
commit_id: HEAD
path: source/tools
svn:
libraries:
darwin-arm64:
branch: trunk
commit_id: HEAD
path: lib/darwin_arm64
darwin-x86_64:
branch: trunk
commit_id: HEAD
path: lib/darwin
linux-x86_64:
branch: trunk
commit_id: HEAD
path: lib/linux_centos7_x86_64
windows-amd64:
branch: trunk
commit_id: HEAD
path: lib/win64_vc15
tests:
branch: trunk
commit_id: HEAD
path: lib/tests
benchmarks:
branch: trunk
commit_id: HEAD
path: lib/benchmarks
#
# Buildbot only configs
#
buildbot:
gcc:
version: '9.0.0'
cuda10:
version: '10.1.243'
cuda11:
version: '11.4.1'
optix:
version: '7.3.0'
cmake:
default:
version: any
overrides: {}
darwin-arm64:
overrides: {}
darwin-x86_64:
overrides: {}
linux-x86_64:
overrides: {}
windows-amd64:
overrides: {}

View File

@@ -31,7 +31,6 @@ def parse_arguments():
parser.add_argument("--no-submodules", action="store_true")
parser.add_argument("--use-tests", action="store_true")
parser.add_argument("--svn-command", default="svn")
parser.add_argument("--svn-branch", default=None)
parser.add_argument("--git-command", default="git")
parser.add_argument("--use-centos-libraries", action="store_true")
return parser.parse_args()
@@ -47,7 +46,7 @@ def svn_update(args, release_version):
svn_non_interactive = [args.svn_command, '--non-interactive']
lib_dirpath = os.path.join(get_blender_git_root(), '..', 'lib')
-svn_url = make_utils.svn_libraries_base_url(release_version, args.svn_branch)
+svn_url = make_utils.svn_libraries_base_url(release_version)
# Checkout precompiled libraries
if sys.platform == 'darwin':
@@ -171,28 +170,26 @@ def submodules_update(args, release_version, branch):
sys.stderr.write("git not found, can't update code\n")
sys.exit(1)
-# Update submodules to appropriate given branch,
-# falling back to master if none is given and/or found in a sub-repository.
-branch_fallback = "master"
-if not branch:
-branch = branch_fallback
+# Update submodules to latest master or appropriate release branch.
+if not release_version:
+branch = "master"
submodules = [
("release/scripts/addons", branch, branch_fallback),
("release/scripts/addons_contrib", branch, branch_fallback),
("release/datafiles/locale", branch, branch_fallback),
("source/tools", branch, branch_fallback),
("release/scripts/addons", branch),
("release/scripts/addons_contrib", branch),
("release/datafiles/locale", branch),
("source/tools", branch),
]
# Initialize submodules only if needed.
-for submodule_path, submodule_branch, submodule_branch_fallback in submodules:
+for submodule_path, submodule_branch in submodules:
if not os.path.exists(os.path.join(submodule_path, ".git")):
call([args.git_command, "submodule", "update", "--init", "--recursive"])
break
# Checkout appropriate branch and pull changes.
skip_msg = ""
-for submodule_path, submodule_branch, submodule_branch_fallback in submodules:
+for submodule_path, submodule_branch in submodules:
cwd = os.getcwd()
try:
os.chdir(submodule_path)
@@ -200,20 +197,10 @@ def submodules_update(args, release_version, branch):
if msg:
skip_msg += submodule_path + " skipped: " + msg + "\n"
else:
-# Find a matching branch that exists.
-call([args.git_command, "fetch", "origin"])
-if make_utils.git_branch_exists(args.git_command, submodule_branch):
-pass
-elif make_utils.git_branch_exists(args.git_command, submodule_branch_fallback):
-submodule_branch = submodule_branch_fallback
-else:
-submodule_branch = None
-# Switch to branch and pull.
-if submodule_branch:
-if make_utils.git_branch(args.git_command) != submodule_branch:
-call([args.git_command, "checkout", submodule_branch])
-call([args.git_command, "pull", "--rebase", "origin", submodule_branch])
+if make_utils.git_branch(args.git_command) != submodule_branch:
+call([args.git_command, "fetch", "origin"])
+call([args.git_command, "checkout", submodule_branch])
+call([args.git_command, "pull", "--rebase", "origin", submodule_branch])
finally:
os.chdir(cwd)
@@ -227,10 +214,6 @@ if __name__ == "__main__":
# Test if we are building a specific release version.
branch = make_utils.git_branch(args.git_command)
-if branch == 'HEAD':
-sys.stderr.write('Blender git repository is in detached HEAD state, must be in a branch\n')
-sys.exit(1)
tag = make_utils.git_tag(args.git_command)
release_version = make_utils.git_branch_release_version(branch, tag)

View File

@@ -8,19 +8,14 @@ import subprocess
import sys
-def call(cmd, exit_on_error=True, silent=False):
-if not silent:
-print(" ".join(cmd))
+def call(cmd, exit_on_error=True):
+print(" ".join(cmd))
# Flush to ensure correct order output on Windows.
sys.stdout.flush()
sys.stderr.flush()
-if silent:
-retcode = subprocess.call(cmd, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
-else:
-retcode = subprocess.call(cmd)
+retcode = subprocess.call(cmd)
if exit_on_error and retcode != 0:
sys.exit(retcode)
return retcode
@@ -43,11 +38,6 @@ def check_output(cmd, exit_on_error=True):
return output.strip()
-def git_branch_exists(git_command, branch):
-return call([git_command, "rev-parse", "--verify", branch], exit_on_error=False, silent=True) == 0 or \
-call([git_command, "rev-parse", "--verify", "remotes/origin/" + branch], exit_on_error=False, silent=True) == 0
def git_branch(git_command):
# Get current branch name.
try:
@@ -80,11 +70,9 @@ def git_branch_release_version(branch, tag):
return release_version
-def svn_libraries_base_url(release_version, branch=None):
+def svn_libraries_base_url(release_version):
if release_version:
svn_branch = "tags/blender-" + release_version + "-release"
-elif branch:
-svn_branch = "branches/" + branch
else:
svn_branch = "trunk"
return "https://svn.blender.org/svnroot/bf-blender/" + svn_branch + "/lib/"

View File

@@ -1,12 +1,9 @@
echo No explicit msvc version requested, autodetecting version.
call "%~dp0\detect_msvc2019.cmd"
if %ERRORLEVEL% EQU 0 goto DetectionComplete
call "%~dp0\detect_msvc2017.cmd"
if %ERRORLEVEL% EQU 0 goto DetectionComplete
call "%~dp0\detect_msvc2022.cmd"
call "%~dp0\detect_msvc2019.cmd"
if %ERRORLEVEL% EQU 0 goto DetectionComplete
echo Compiler Detection failed. Use verbose switch for more information.

View File

@@ -1,6 +1,5 @@
if "%BUILD_VS_YEAR%"=="2017" set BUILD_VS_LIBDIRPOST=vc15
if "%BUILD_VS_YEAR%"=="2019" set BUILD_VS_LIBDIRPOST=vc15
if "%BUILD_VS_YEAR%"=="2022" set BUILD_VS_LIBDIRPOST=vc15
set BUILD_VS_SVNDIR=win64_%BUILD_VS_LIBDIRPOST%
set BUILD_VS_LIBDIR="%BLENDER_DIR%..\lib\%BUILD_VS_SVNDIR%"

View File

@@ -19,10 +19,10 @@ if "%WITH_PYDEBUG%"=="1" (
set PYDEBUG_CMAKE_ARGS=-DWINDOWS_PYTHON_DEBUG=On
)
if "%BUILD_VS_YEAR%"=="2017" (
set BUILD_GENERATOR_POST=%WINDOWS_ARCH%
) else (
if "%BUILD_VS_YEAR%"=="2019" (
set BUILD_PLATFORM_SELECT=-A %MSBUILD_PLATFORM%
) else (
set BUILD_GENERATOR_POST=%WINDOWS_ARCH%
)
set BUILD_CMAKE_ARGS=%BUILD_CMAKE_ARGS% -G "Visual Studio %BUILD_VS_VER% %BUILD_VS_YEAR%%BUILD_GENERATOR_POST%" %BUILD_PLATFORM_SELECT% %TESTS_CMAKE_ARGS% %CLANG_CMAKE_ARGS% %ASAN_CMAKE_ARGS% %PYDEBUG_CMAKE_ARGS%

View File

@@ -1,3 +0,0 @@
set BUILD_VS_VER=17
set BUILD_VS_YEAR=2022
call "%~dp0\detect_msvc_vswhere.cmd"

View File

@@ -1,34 +0,0 @@
set SOURCEDIR=%BLENDER_DIR%/doc/python_api/sphinx-in
set BUILDDIR=%BLENDER_DIR%/doc/python_api/sphinx-out
if "%BF_LANG%" == "" set BF_LANG=en
set SPHINXOPTS=-j auto -D language=%BF_LANG%
call "%~dp0\find_sphinx.cmd"
if EXIST "%SPHINX_BIN%" (
goto detect_sphinx_done
)
echo unable to locate sphinx-build, run "set sphinx_BIN=full_path_to_sphinx-build.exe"
exit /b 1
:detect_sphinx_done
call "%~dp0\find_blender.cmd"
if EXIST "%BLENDER_BIN%" (
goto detect_blender_done
)
echo unable to locate blender, run "set BLENDER_BIN=full_path_to_blender.exe"
exit /b 1
:detect_blender_done
%BLENDER_BIN% ^
--background -noaudio --factory-startup ^
--python %BLENDER_DIR%/doc/python_api/sphinx_doc_gen.py
"%SPHINX_BIN%" -b html %SPHINXOPTS% %O% %SOURCEDIR% %BUILDDIR%
:EOF

View File

@@ -1,28 +0,0 @@
REM First see if there is an environment variable set
if EXIST "%BLENDER_BIN%" (
goto detect_blender_done
)
REM Check the build folder next, if ninja was used there will be no
REM debug/release folder
set BLENDER_BIN=%BUILD_DIR%\bin\blender.exe
if EXIST "%BLENDER_BIN%" (
goto detect_blender_done
)
REM Check the release folder next
set BLENDER_BIN=%BUILD_DIR%\bin\release\blender.exe
if EXIST "%BLENDER_BIN%" (
goto detect_blender_done
)
REM Check the debug folder next
set BLENDER_BIN=%BUILD_DIR%\bin\debug\blender.exe
if EXIST "%BLENDER_BIN%" (
goto detect_blender_done
)
REM at this point, we don't know where blender is, clear the variable
set BLENDER_BIN=
:detect_blender_done

View File

@@ -3,7 +3,7 @@ for %%X in (svn.exe) do (set SVN=%%~$PATH:X)
for %%X in (cmake.exe) do (set CMAKE=%%~$PATH:X)
for %%X in (ctest.exe) do (set CTEST=%%~$PATH:X)
for %%X in (git.exe) do (set GIT=%%~$PATH:X)
-set PYTHON=%BLENDER_DIR%\..\lib\win64_vc15\python\39\bin\python.exe
+set PYTHON=%BLENDER_DIR%\..\lib\win64_vc15\python\37\bin\python.exe
if NOT "%verbose%" == "" (
echo svn : "%SVN%"
echo cmake : "%CMAKE%"

View File

@@ -1,21 +0,0 @@
REM First see if there is an environment variable set
if EXIST "%INKSCAPE_BIN%" (
goto detect_inkscape_done
)
REM Then see if inkscape is available in the path
for %%X in (inkscape.exe) do (set INKSCAPE_BIN=%%~$PATH:X)
if EXIST "%INKSCAPE_BIN%" (
goto detect_inkscape_done
)
REM Finally see if it is perhaps installed at the default location
set INKSCAPE_BIN=%ProgramFiles%\Inkscape\bin\inkscape.exe
if EXIST "%INKSCAPE_BIN%" (
goto detect_inkscape_done
)
REM If still not found clear the variable
set INKSCAPE_BIN=
:detect_inkscape_done

View File

@@ -1,23 +0,0 @@
REM First see if there is an environment variable set
if EXIST "%SPHINX_BIN%" (
goto detect_sphinx_done
)
REM Then see if sphinx-build is available in the path
for %%X in (sphinx-build.exe) do (set SPHINX_BIN=%%~$PATH:X)
if EXIST "%SPHINX_BIN%" (
goto detect_sphinx_done
)
echo.The 'sphinx-build' command was not found. Make sure you have Sphinx
echo.installed, then set the SPHINX_BIN environment variable to point
echo.to the full path of the 'sphinx-build' executable. Alternatively you
echo.may add the Sphinx directory to PATH.
echo.
echo.If you don't have Sphinx installed, grab it from
echo.http://sphinx-doc.org/
REM If still not found clear the variable
set SPHINX_BIN=
:detect_sphinx_done

View File

@@ -10,7 +10,7 @@ exit /b 1
echo found clang-format in %CF_PATH%
if EXIST %PYTHON% (
set PYTHON=%BLENDER_DIR%\..\lib\win64_vc15\python\39\bin\python.exe
set PYTHON=%BLENDER_DIR%\..\lib\win64_vc15\python\37\bin\python.exe
goto detect_python_done
)

View File

@@ -1,42 +0,0 @@
if EXIST "%PYTHON%" (
goto detect_python_done
)
set PYTHON=%BLENDER_DIR%\..\lib\win64_vc15\python\39\bin\python.exe
if EXIST %PYTHON% (
goto detect_python_done
)
echo python not found at %PYTHON%
exit /b 1
:detect_python_done
echo found python (%PYTHON%)
call "%~dp0\find_inkscape.cmd"
if EXIST "%INKSCAPE_BIN%" (
goto detect_inkscape_done
)
echo unable to locate inkscape, run "set INKSCAPE_BIN=full_path_to_inkscape.exe"
exit /b 1
:detect_inkscape_done
call "%~dp0\find_blender.cmd"
if EXIST "%BLENDER_BIN%" (
goto detect_blender_done
)
echo unable to locate blender, run "set BLENDER_BIN=full_path_to_blender.exe"
exit /b 1
:detect_blender_done
%PYTHON% -B %BLENDER_DIR%\release\datafiles\blender_icons_update.py
%PYTHON% -B %BLENDER_DIR%\release\datafiles\prvicons_update.py
%PYTHON% -B %BLENDER_DIR%\release\datafiles\alert_icons_update.py
:EOF

View File

@@ -1,29 +0,0 @@
if EXIST %PYTHON% (
goto detect_python_done
)
set PYTHON=%BLENDER_DIR%\..\lib\win64_vc15\python\39\bin\python.exe
if EXIST %PYTHON% (
goto detect_python_done
)
echo python not found at %PYTHON%
exit /b 1
:detect_python_done
echo found python (%PYTHON%)
call "%~dp0\find_blender.cmd"
if EXIST "%BLENDER_BIN%" (
goto detect_blender_done
)
echo unable to locate blender, run "set BLENDER_BIN=full_path_to_blender.exe"
exit /b 1
:detect_blender_done
%PYTHON% -B %BLENDER_DIR%\release\datafiles\blender_icons_geom_update.py
:EOF

View File

@@ -66,14 +66,6 @@ if NOT "%1" == "" (
) else if "%1" == "2019b" (
set BUILD_VS_YEAR=2019
set VSWHERE_ARGS=-products Microsoft.VisualStudio.Product.BuildTools
) else if "%1" == "2022" (
set BUILD_VS_YEAR=2022
) else if "%1" == "2022pre" (
set BUILD_VS_YEAR=2022
set VSWHERE_ARGS=-prerelease
) else if "%1" == "2022b" (
set BUILD_VS_YEAR=2022
set VSWHERE_ARGS=-products Microsoft.VisualStudio.Product.BuildTools
) else if "%1" == "packagename" (
set BUILD_CMAKE_ARGS=%BUILD_CMAKE_ARGS% -DCPACK_OVERRIDE_PACKAGENAME="%2"
shift /1
@@ -107,18 +99,6 @@ if NOT "%1" == "" (
set FORMAT=1
set FORMAT_ARGS=%2 %3 %4 %5 %6 %7 %8 %9
goto EOF
) else if "%1" == "icons" (
set ICONS=1
goto EOF
) else if "%1" == "icons_geom" (
set ICONS_GEOM=1
goto EOF
) else if "%1" == "doc_py" (
set DOC_PY=1
goto EOF
) else if "%1" == "svnfix" (
set SVN_FIX=1
goto EOF
) else (
echo Command "%1" unknown, aborting!
goto ERR

View File

@@ -31,6 +31,3 @@ set PYDEBUG_CMAKE_ARGS=
set FORMAT=
set TEST=
set BUILD_WITH_SCCACHE=
set ICONS=
set ICONS_GEOM=
set DOC_PY=

View File

@@ -31,10 +31,6 @@ echo - 2019 ^(build with visual studio 2019^)
echo - 2019pre ^(build with visual studio 2019 pre-release^)
echo - 2019b ^(build with visual studio 2019 Build Tools^)
echo.
echo Documentation Targets ^(Not associated with building^)
echo - doc_py ^(Generate sphinx python api docs^)
echo.
echo Experimental options
echo - with_opengl_tests ^(enable both the render and draw opengl test suites^)

View File

@@ -1,26 +0,0 @@
if "%BUILD_VS_YEAR%"=="2017" set BUILD_VS_LIBDIRPOST=vc15
if "%BUILD_VS_YEAR%"=="2019" set BUILD_VS_LIBDIRPOST=vc15
if "%BUILD_VS_YEAR%"=="2022" set BUILD_VS_LIBDIRPOST=vc15
set BUILD_VS_SVNDIR=win64_%BUILD_VS_LIBDIRPOST%
set BUILD_VS_LIBDIR="%BLENDER_DIR%..\lib\%BUILD_VS_SVNDIR%"
echo Starting cleanup in %BUILD_VS_LIBDIR%.
cd %BUILD_VS_LIBDIR%
:RETRY
"%SVN%" cleanup
"%SVN%" update
if errorlevel 1 (
set /p LibRetry= "Error during update, retry? y/n"
if /I "!LibRetry!"=="Y" (
goto RETRY
)
echo.
echo Error: Download of external libraries failed.
echo This is needed for building, please manually run 'svn cleanup' and 'svn update' in
echo %BUILD_VS_LIBDIR% , until this is resolved you CANNOT make a successful blender build
echo.
exit /b 1
)
echo Cleanup complete
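The retry loop here (cleanup, update, prompt on failure) is easy to mirror; a hedged Python sketch, assuming svn is on PATH and svn_update_with_retry is a hypothetical helper rather than part of the build scripts:

import subprocess

def svn_update_with_retry(libdir):
    # Mirrors the :RETRY label: always run cleanup before update, and on a
    # non-zero exit offer another attempt before giving up.
    while True:
        subprocess.run(["svn", "cleanup"], cwd=libdir)
        if subprocess.run(["svn", "update"], cwd=libdir).returncode == 0:
            return True
        if input("Error during update, retry? y/n ").strip().lower() != "y":
            return False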

View File

@@ -123,7 +123,7 @@ def Align(handle):
class BlendFile:
'''
Reads a blendfile and store the header, all the fileblocks, and catalogue
structs found in the DNA fileblock
structs foound in the DNA fileblock
- BlendFile.Header (BlendFileHeader instance)
- BlendFile.Blocks (list of BlendFileBlock instances)

View File

@@ -1,4 +1,4 @@
# Doxyfile 1.9.1
# Doxyfile 1.8.16
# This file describes the settings to be used by the documentation system
# doxygen (www.doxygen.org) for a project.
@@ -38,7 +38,7 @@ PROJECT_NAME = Blender
# could be handy for archiving the generated documentation or if some version
# control system is used.
PROJECT_NUMBER = V3.1
PROJECT_NUMBER = "V3.0"
# Using the PROJECT_BRIEF tag one can provide an optional one line description
# for a project that appears at the top of each page and should give viewer a
@@ -227,14 +227,6 @@ QT_AUTOBRIEF = NO
MULTILINE_CPP_IS_BRIEF = NO
# By default Python docstrings are displayed as preformatted text and doxygen's
# special commands cannot be used. By setting PYTHON_DOCSTRING to NO the
# doxygen's special commands can be used and the contents of the docstring
# documentation blocks is shown as doxygen documentation.
# The default value is: YES.
PYTHON_DOCSTRING = YES
# If the INHERIT_DOCS tag is set to YES then an undocumented member inherits the
# documentation from any documented member that it re-implements.
# The default value is: YES.
@@ -271,6 +263,12 @@ TAB_SIZE = 4
ALIASES =
# This tag can be used to specify a number of word-keyword mappings (TCL only).
# A mapping has the form "name=value". For example adding "class=itcl::class"
# will allow you to use the command class in the itcl::class meaning.
TCL_SUBST =
# Set the OPTIMIZE_OUTPUT_FOR_C tag to YES if your project consists of C sources
# only. Doxygen will then generate output that is more tailored for C. For
# instance, some of the names that are used will be different. The list of all
@@ -311,22 +309,19 @@ OPTIMIZE_OUTPUT_SLICE = NO
# parses. With this tag you can assign which parser to use for a given
# extension. Doxygen has a built-in mapping, but you can override or extend it
# using this tag. The format is ext=language, where ext is a file extension, and
# language is one of the parsers supported by doxygen: IDL, Java, JavaScript,
# Csharp (C#), C, C++, D, PHP, md (Markdown), Objective-C, Python, Slice, VHDL,
# language is one of the parsers supported by doxygen: IDL, Java, Javascript,
# Csharp (C#), C, C++, D, PHP, md (Markdown), Objective-C, Python, Slice,
# Fortran (fixed format Fortran: FortranFixed, free formatted Fortran:
# FortranFree, unknown formatted Fortran: Fortran. In the latter case the parser
# tries to guess whether the code is fixed or free formatted code, this is the
# default for Fortran type files). For instance to make doxygen treat .inc files
# as Fortran files (default is PHP), and .f files as C (default is Fortran),
# use: inc=Fortran f=C.
# default for Fortran type files), VHDL, tcl. For instance to make doxygen treat
# .inc files as Fortran files (default is PHP), and .f files as C (default is
# Fortran), use: inc=Fortran f=C.
#
# Note: For files without extension you can use no_extension as a placeholder.
#
# Note that for custom extensions you also need to set FILE_PATTERNS otherwise
# the files are not read by doxygen. When specifying no_extension you should add
# * to the FILE_PATTERNS.
#
# Note see also the list of default file extension mappings.
# the files are not read by doxygen.
EXTENSION_MAPPING =
@@ -460,19 +455,6 @@ TYPEDEF_HIDES_STRUCT = NO
LOOKUP_CACHE_SIZE = 3
# The NUM_PROC_THREADS specifies the number of threads doxygen is allowed to use
# during processing. When set to 0 doxygen will base this on the number of
# cores available in the system. You can set it explicitly to a value larger
# than 0 to get more control over the balance between CPU load and processing
# speed. At this moment only the input processing can be done using multiple
# threads. Since this is still an experimental feature the default is set to 1,
# which effectively disables parallel processing. Please report any issues you
# encounter. Generating dot graphs in parallel is controlled by the
# DOT_NUM_THREADS setting.
# Minimum value: 0, maximum value: 32, default value: 1.
NUM_PROC_THREADS = 1
#---------------------------------------------------------------------------
# Build related configuration options
#---------------------------------------------------------------------------
@@ -536,13 +518,6 @@ EXTRACT_LOCAL_METHODS = NO
EXTRACT_ANON_NSPACES = NO
# If this flag is set to YES, the name of an unnamed parameter in a declaration
# will be determined by the corresponding definition. By default unnamed
# parameters remain unnamed in the output.
# The default value is: YES.
RESOLVE_UNNAMED_PARAMS = YES
# If the HIDE_UNDOC_MEMBERS tag is set to YES, doxygen will hide all
# undocumented members inside documented classes or files. If set to NO these
# members will be included in the various overviews, but no documentation
@@ -560,8 +535,8 @@ HIDE_UNDOC_MEMBERS = NO
HIDE_UNDOC_CLASSES = NO
# If the HIDE_FRIEND_COMPOUNDS tag is set to YES, doxygen will hide all friend
# declarations. If set to NO, these declarations will be included in the
# documentation.
# (class|struct|union) declarations. If set to NO, these declarations will be
# included in the documentation.
# The default value is: NO.
HIDE_FRIEND_COMPOUNDS = NO
@@ -580,18 +555,11 @@ HIDE_IN_BODY_DOCS = NO
INTERNAL_DOCS = YES
# With the correct setting of option CASE_SENSE_NAMES doxygen will better be
# able to match the capabilities of the underlying filesystem. In case the
# filesystem is case sensitive (i.e. it supports files in the same directory
# whose names only differ in casing), the option must be set to YES to properly
# deal with such files in case they appear in the input. For filesystems that
# are not case sensitive the option should be set to NO to properly deal with
# output files written for symbols that only differ in casing, such as for two
# classes, one named CLASS and the other named Class, and to also support
# references to files without having to specify the exact matching casing. On
# Windows (including Cygwin) and MacOS, users should typically set this option
# to NO, whereas on Linux or other Unix flavors it should typically be set to
# YES.
# If the CASE_SENSE_NAMES tag is set to NO then doxygen will only generate file
# names in lower-case letters. If set to YES, upper-case letters are also
# allowed. This is useful if you have classes or files whose names only differ
# in case and if your file system supports case sensitive file names. Windows
# (including Cygwin) and Mac users are advised to set this option to NO.
# The default value is: system dependent.
CASE_SENSE_NAMES = YES
@@ -830,10 +798,7 @@ WARN_IF_DOC_ERROR = YES
WARN_NO_PARAMDOC = NO
# If the WARN_AS_ERROR tag is set to YES then doxygen will immediately stop when
# a warning is encountered. If the WARN_AS_ERROR tag is set to FAIL_ON_WARNINGS
# then doxygen will continue running as if WARN_AS_ERROR tag is set to NO, but
# at the end of the doxygen process doxygen will return with a non-zero status.
# Possible values are: NO, YES and FAIL_ON_WARNINGS.
# a warning is encountered.
# The default value is: NO.
WARN_AS_ERROR = NO
@@ -875,8 +840,8 @@ INPUT = doxygen.main.h \
# This tag can be used to specify the character encoding of the source files
# that doxygen parses. Internally doxygen uses the UTF-8 encoding. Doxygen uses
# libiconv (or the iconv built into libc) for the transcoding. See the libiconv
# documentation (see:
# https://www.gnu.org/software/libiconv/) for the list of possible encodings.
# documentation (see: https://www.gnu.org/software/libiconv/) for the list of
# possible encodings.
# The default value is: UTF-8.
INPUT_ENCODING = UTF-8
@@ -889,15 +854,11 @@ INPUT_ENCODING = UTF-8
# need to set EXTENSION_MAPPING for the extension otherwise the files are not
# read by doxygen.
#
# Note the list of default checked file patterns might differ from the list of
# default file extension mappings.
#
# If left blank the following patterns are tested:*.c, *.cc, *.cxx, *.cpp,
# *.c++, *.java, *.ii, *.ixx, *.ipp, *.i++, *.inl, *.idl, *.ddl, *.odl, *.h,
# *.hh, *.hxx, *.hpp, *.h++, *.cs, *.d, *.php, *.php4, *.php5, *.phtml, *.inc,
# *.m, *.markdown, *.md, *.mm, *.dox (to be provided as doxygen C comment),
# *.py, *.pyw, *.f90, *.f95, *.f03, *.f08, *.f18, *.f, *.for, *.vhd, *.vhdl,
# *.ucf, *.qsf and *.ice.
# *.m, *.markdown, *.md, *.mm, *.dox, *.py, *.pyw, *.f90, *.f95, *.f03, *.f08,
# *.f, *.for, *.tcl, *.vhd, *.vhdl, *.ucf, *.qsf and *.ice.
FILE_PATTERNS =
@@ -1125,6 +1086,13 @@ VERBATIM_HEADERS = YES
ALPHABETICAL_INDEX = YES
# The COLS_IN_ALPHA_INDEX tag can be used to specify the number of columns in
# which the alphabetical index list will be split.
# Minimum value: 1, maximum value: 20, default value: 5.
# This tag requires that the tag ALPHABETICAL_INDEX is set to YES.
COLS_IN_ALPHA_INDEX = 5
# In case all classes in a project start with a common prefix, all classes will
# be put under the same header in the alphabetical index. The IGNORE_PREFIX tag
# can be used to specify a prefix (or a list of prefixes) that should be ignored
@@ -1263,9 +1231,9 @@ HTML_TIMESTAMP = YES
# If the HTML_DYNAMIC_MENUS tag is set to YES then the generated HTML
# documentation will contain a main index with vertical navigation menus that
# are dynamically created via JavaScript. If disabled, the navigation index will
# are dynamically created via Javascript. If disabled, the navigation index will
# consists of multiple levels of tabs that are statically embedded in every HTML
# page. Disable this option to support browsers that do not have JavaScript,
# page. Disable this option to support browsers that do not have Javascript,
# like the Qt help browser.
# The default value is: YES.
# This tag requires that the tag GENERATE_HTML is set to YES.
@@ -1295,11 +1263,10 @@ HTML_INDEX_NUM_ENTRIES = 100
# If the GENERATE_DOCSET tag is set to YES, additional index files will be
# generated that can be used as input for Apple's Xcode 3 integrated development
# environment (see:
# https://developer.apple.com/xcode/), introduced with OSX 10.5 (Leopard). To
# create a documentation set, doxygen will generate a Makefile in the HTML
# output directory. Running make will produce the docset in that directory and
# running make install will install the docset in
# environment (see: https://developer.apple.com/xcode/), introduced with OSX
# 10.5 (Leopard). To create a documentation set, doxygen will generate a
# Makefile in the HTML output directory. Running make will produce the docset in
# that directory and running make install will install the docset in
# ~/Library/Developer/Shared/Documentation/DocSets so that Xcode will find it at
# startup. See https://developer.apple.com/library/archive/featuredarticles/Doxy
# genXcode/_index.html for more information.
@@ -1341,8 +1308,8 @@ DOCSET_PUBLISHER_NAME = Publisher
# If the GENERATE_HTMLHELP tag is set to YES then doxygen generates three
# additional HTML index files: index.hhp, index.hhc, and index.hhk. The
# index.hhp is a project file that can be read by Microsoft's HTML Help Workshop
# (see:
# https://www.microsoft.com/en-us/download/details.aspx?id=21138) on Windows.
# (see: https://www.microsoft.com/en-us/download/details.aspx?id=21138) on
# Windows.
#
# The HTML Help Workshop contains a compiler that can convert all HTML output
# generated by doxygen into a single compiled HTML file (.chm). Compiled HTML
@@ -1372,7 +1339,7 @@ CHM_FILE = blender.chm
HHC_LOCATION = "C:/Program Files (x86)/HTML Help Workshop/hhc.exe"
# The GENERATE_CHI flag controls if a separate .chi index file is generated
# (YES) or that it should be included in the main .chm file (NO).
# (YES) or that it should be included in the master .chm file (NO).
# The default value is: NO.
# This tag requires that the tag GENERATE_HTMLHELP is set to YES.
@@ -1417,8 +1384,7 @@ QCH_FILE =
# The QHP_NAMESPACE tag specifies the namespace to use when generating Qt Help
# Project output. For more information please see Qt Help Project / Namespace
# (see:
# https://doc.qt.io/archives/qt-4.8/qthelpproject.html#namespace).
# (see: https://doc.qt.io/archives/qt-4.8/qthelpproject.html#namespace).
# The default value is: org.doxygen.Project.
# This tag requires that the tag GENERATE_QHP is set to YES.
@@ -1426,8 +1392,8 @@ QHP_NAMESPACE = org.doxygen.Project
# The QHP_VIRTUAL_FOLDER tag specifies the namespace to use when generating Qt
# Help Project output. For more information please see Qt Help Project / Virtual
# Folders (see:
# https://doc.qt.io/archives/qt-4.8/qthelpproject.html#virtual-folders).
# Folders (see: https://doc.qt.io/archives/qt-4.8/qthelpproject.html#virtual-
# folders).
# The default value is: doc.
# This tag requires that the tag GENERATE_QHP is set to YES.
@@ -1435,16 +1401,16 @@ QHP_VIRTUAL_FOLDER = doc
# If the QHP_CUST_FILTER_NAME tag is set, it specifies the name of a custom
# filter to add. For more information please see Qt Help Project / Custom
# Filters (see:
# https://doc.qt.io/archives/qt-4.8/qthelpproject.html#custom-filters).
# Filters (see: https://doc.qt.io/archives/qt-4.8/qthelpproject.html#custom-
# filters).
# This tag requires that the tag GENERATE_QHP is set to YES.
QHP_CUST_FILTER_NAME =
# The QHP_CUST_FILTER_ATTRS tag specifies the list of the attributes of the
# custom filter to add. For more information please see Qt Help Project / Custom
# Filters (see:
# https://doc.qt.io/archives/qt-4.8/qthelpproject.html#custom-filters).
# Filters (see: https://doc.qt.io/archives/qt-4.8/qthelpproject.html#custom-
# filters).
# This tag requires that the tag GENERATE_QHP is set to YES.
QHP_CUST_FILTER_ATTRS =
@@ -1456,9 +1422,9 @@ QHP_CUST_FILTER_ATTRS =
QHP_SECT_FILTER_ATTRS =
# The QHG_LOCATION tag can be used to specify the location (absolute path
# including file name) of Qt's qhelpgenerator. If non-empty doxygen will try to
# run qhelpgenerator on the generated .qhp file.
# The QHG_LOCATION tag can be used to specify the location of Qt's
# qhelpgenerator. If non-empty doxygen will try to run qhelpgenerator on the
# generated .qhp file.
# This tag requires that the tag GENERATE_QHP is set to YES.
QHG_LOCATION =
@@ -1535,17 +1501,6 @@ TREEVIEW_WIDTH = 246
EXT_LINKS_IN_WINDOW = NO
# If the HTML_FORMULA_FORMAT option is set to svg, doxygen will use the pdf2svg
# tool (see https://github.com/dawbarton/pdf2svg) or inkscape (see
# https://inkscape.org) to generate formulas as SVG images instead of PNGs for
# the HTML output. These images will generally look nicer at scaled resolutions.
# Possible values are: png (the default) and svg (looks nicer but requires the
# pdf2svg or inkscape tool).
# The default value is: png.
# This tag requires that the tag GENERATE_HTML is set to YES.
HTML_FORMULA_FORMAT = png
# Use this tag to change the font size of LaTeX formulas included as images in
# the HTML documentation. When you change the font size after a successful
# doxygen run you need to manually remove any form_*.png images from the HTML
@@ -1566,14 +1521,8 @@ FORMULA_FONTSIZE = 10
FORMULA_TRANSPARENT = YES
# The FORMULA_MACROFILE can contain LaTeX \newcommand and \renewcommand commands
# to create new LaTeX commands to be used in formulas as building blocks. See
# the section "Including formulas" for details.
FORMULA_MACROFILE =
# Enable the USE_MATHJAX option to render LaTeX formulas using MathJax (see
# https://www.mathjax.org) which uses client side JavaScript for the rendering
# https://www.mathjax.org) which uses client side Javascript for the rendering
# instead of using pre-rendered bitmaps. Use this if you do not have LaTeX
# installed or if you want formulas to look prettier in the HTML output. When
# enabled you may also need to install MathJax separately and configure the path
@@ -1585,7 +1534,7 @@ USE_MATHJAX = NO
# When MathJax is enabled you can set the default output format to be used for
# the MathJax output. See the MathJax site (see:
# http://docs.mathjax.org/en/v2.7-latest/output.html) for more details.
# http://docs.mathjax.org/en/latest/output.html) for more details.
# Possible values are: HTML-CSS (which is slower, but has the best
# compatibility), NativeMML (i.e. MathML) and SVG.
# The default value is: HTML-CSS.
@@ -1601,7 +1550,7 @@ MATHJAX_FORMAT = HTML-CSS
# Content Delivery Network so you can quickly see the result without installing
# MathJax. However, it is strongly recommended to install a local copy of
# MathJax from https://www.mathjax.org before deployment.
# The default value is: https://cdn.jsdelivr.net/npm/mathjax@2.
# The default value is: https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.5/.
# This tag requires that the tag USE_MATHJAX is set to YES.
MATHJAX_RELPATH = http://www.mathjax.org/mathjax
@@ -1615,8 +1564,7 @@ MATHJAX_EXTENSIONS =
# The MATHJAX_CODEFILE tag can be used to specify a file with javascript pieces
# of code that will be used on startup of the MathJax code. See the MathJax site
# (see:
# http://docs.mathjax.org/en/v2.7-latest/output.html) for more details. For an
# (see: http://docs.mathjax.org/en/latest/output.html) for more details. For an
# example see the documentation.
# This tag requires that the tag USE_MATHJAX is set to YES.
@@ -1644,7 +1592,7 @@ MATHJAX_CODEFILE =
SEARCHENGINE = NO
# When the SERVER_BASED_SEARCH tag is enabled the search engine will be
# implemented using a web server instead of a web client using JavaScript. There
# implemented using a web server instead of a web client using Javascript. There
# are two flavors of web server based searching depending on the EXTERNAL_SEARCH
# setting. When disabled, doxygen will generate a PHP script for searching and
# an index file used by the script. When EXTERNAL_SEARCH is enabled the indexing
@@ -1663,8 +1611,7 @@ SERVER_BASED_SEARCH = NO
#
# Doxygen ships with an example indexer (doxyindexer) and search engine
# (doxysearch.cgi) which are based on the open source search engine library
# Xapian (see:
# https://xapian.org/).
# Xapian (see: https://xapian.org/).
#
# See the section "External Indexing and Searching" for details.
# The default value is: NO.
@@ -1677,9 +1624,8 @@ EXTERNAL_SEARCH = NO
#
# Doxygen ships with an example indexer (doxyindexer) and search engine
# (doxysearch.cgi) which are based on the open source search engine library
# Xapian (see:
# https://xapian.org/). See the section "External Indexing and Searching" for
# details.
# Xapian (see: https://xapian.org/). See the section "External Indexing and
# Searching" for details.
# This tag requires that the tag SEARCHENGINE is set to YES.
SEARCHENGINE_URL =
@@ -1843,11 +1789,9 @@ LATEX_EXTRA_FILES =
PDF_HYPERLINKS = NO
# If the USE_PDFLATEX tag is set to YES, doxygen will use the engine as
# specified with LATEX_CMD_NAME to generate the PDF file directly from the LaTeX
# files. Set this option to YES, to get a higher quality PDF documentation.
#
# See also section LATEX_CMD_NAME for selecting the engine.
# If the USE_PDFLATEX tag is set to YES, doxygen will use pdflatex to generate
# the PDF file directly from the LaTeX files. Set this option to YES, to get a
# higher quality PDF documentation.
# The default value is: YES.
# This tag requires that the tag GENERATE_LATEX is set to YES.
@@ -2182,8 +2126,7 @@ INCLUDE_FILE_PATTERNS =
# recursively expanded use the := operator instead of the = operator.
# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.
PREDEFINED = BUILD_DATE \
DOXYGEN=1
PREDEFINED = BUILD_DATE
# If the MACRO_EXPANSION and EXPAND_ONLY_PREDEF tags are set to YES then this
# tag can be used to specify a list of macro names that should be expanded. The
@@ -2360,31 +2303,9 @@ UML_LOOK = YES
# but if the number exceeds 15, the total amount of fields shown is limited to
# 10.
# Minimum value: 0, maximum value: 100, default value: 10.
# This tag requires that the tag UML_LOOK is set to YES.
UML_LIMIT_NUM_FIELDS = 10
# If the DOT_UML_DETAILS tag is set to NO, doxygen will show attributes and
# methods without types and arguments in the UML graphs. If the DOT_UML_DETAILS
# tag is set to YES, doxygen will add type and arguments for attributes and
# methods in the UML graphs. If the DOT_UML_DETAILS tag is set to NONE, doxygen
# will not generate fields with class member information in the UML graphs. The
# class diagrams will look similar to the default class diagrams but using UML
# notation for the relationships.
# Possible values are: NO, YES and NONE.
# The default value is: NO.
# This tag requires that the tag UML_LOOK is set to YES.
DOT_UML_DETAILS = NO
# The DOT_WRAP_THRESHOLD tag can be used to set the maximum number of characters
# to display on a single line. If the actual line length exceeds this threshold
# significantly it will be wrapped across multiple lines. Some heuristics are
# applied to avoid ugly line breaks.
# Minimum value: 0, maximum value: 1000, default value: 17.
# This tag requires that the tag HAVE_DOT is set to YES.
DOT_WRAP_THRESHOLD = 17
UML_LIMIT_NUM_FIELDS = 10
# If the TEMPLATE_RELATIONS tag is set to YES then the inheritance and
# collaboration graphs will show the relations between templates and their
@@ -2575,11 +2496,9 @@ DOT_MULTI_TARGETS = YES
GENERATE_LEGEND = YES
# If the DOT_CLEANUP tag is set to YES, doxygen will remove the intermediate
# If the DOT_CLEANUP tag is set to YES, doxygen will remove the intermediate dot
# files that are used to generate the various graphs.
#
# Note: This setting is not only used for dot files but also for msc and
# plantuml temporary files.
# The default value is: YES.
# This tag requires that the tag HAVE_DOT is set to YES.
DOT_CLEANUP = YES

View File

@@ -6,87 +6,91 @@
* as part of the normal development process.
*/
/* TODO: other modules.
* - `libmv`
* - `cycles`
* - `opencolorio`
* - `opensubdiv`
* - `openvdb`
* - `quadriflow`
*/
/** \defgroup MEM Guarded memory (de)allocation
* \ingroup intern
*/
/** \defgroup intern_atomic Atomic Operations
* \ingroup intern */
/** \defgroup clog C-Logging (CLOG)
* \ingroup intern
*/
/** \defgroup intern_clog C-Logging (CLOG)
* \ingroup intern */
/** \defgroup ctr container
* \ingroup intern
*/
/** \defgroup intern_eigen Eigen
* \ingroup intern */
/** \defgroup iksolver iksolver
* \ingroup intern
*/
/** \defgroup intern_glew-mx GLEW with Multiple Rendering Context's
* \ingroup intern */
/** \defgroup itasc itasc
* \ingroup intern
*/
/** \defgroup intern_iksolver Inverse Kinematics (Solver)
* \ingroup intern */
/** \defgroup memutil memutil
* \ingroup intern
*/
/** \defgroup intern_itasc Inverse Kinematics (ITASC)
* \ingroup intern */
/** \defgroup mikktspace mikktspace
* \ingroup intern
*/
/** \defgroup intern_libc_compat libc Compatibility For Linux
* \ingroup intern */
/** \defgroup moto moto
* \ingroup intern
*/
/** \defgroup intern_locale Locale
* \ingroup intern */
/** \defgroup eigen eigen
* \ingroup intern
*/
/** \defgroup intern_mantaflow Manta-Flow Fluid Simulation
* \ingroup intern */
/** \defgroup smoke smoke
* \ingroup intern
*/
/** \defgroup intern_mem Guarded Memory (de)allocation
* \ingroup intern */
/** \defgroup intern_memutil Memory Utilities (memutil)
* \ingroup intern */
/** \defgroup intern_mikktspace MikktSpace
* \ingroup intern */
/** \defgroup intern_rigidbody Rigid-Body C-API
* \ingroup intern */
/** \defgroup intern_sky_model Sky Model
* \ingroup intern */
/** \defgroup intern_utf_conv UTF-8/16 Conversion (utfconv)
* \ingroup intern */
/** \defgroup string string
* \ingroup intern
*/
/** \defgroup audaspace Audaspace
* \ingroup intern undoc
* \todo add to doxygen */
* \todo add to doxygen
*/
/** \defgroup audcoreaudio Audaspace CoreAudio
* \ingroup audaspace */
* \ingroup audaspace
*/
/** \defgroup audfx Audaspace FX
* \ingroup audaspace */
* \ingroup audaspace
*/
/** \defgroup audopenal Audaspace OpenAL
* \ingroup audaspace */
* \ingroup audaspace
*/
/** \defgroup audpulseaudio Audaspace PulseAudio
* \ingroup audaspace */
* \ingroup audaspace
*/
/** \defgroup audwasapi Audaspace WASAPI
* \ingroup audaspace */
* \ingroup audaspace
*/
/** \defgroup audpython Audaspace Python
* \ingroup audaspace */
* \ingroup audaspace
*/
/** \defgroup audsdl Audaspace SDL
* \ingroup audaspace */
* \ingroup audaspace
*/
/** \defgroup audsrc Audaspace SRC
* \ingroup audaspace */
*
* \ingroup audaspace
*/
/** \defgroup audffmpeg Audaspace FFMpeg
* \ingroup audaspace */
* \ingroup audaspace
*/
/** \defgroup audfftw Audaspace FFTW
* \ingroup audaspace */
* \ingroup audaspace
*/
/** \defgroup audjack Audaspace Jack
* \ingroup audaspace */
* \ingroup audaspace
*/
/** \defgroup audsndfile Audaspace sndfile
* \ingroup audaspace */
* \ingroup audaspace
*/
/** \defgroup GHOST GHOST API
* \ingroup intern GUI

View File

@@ -5,8 +5,7 @@
/** \defgroup bmesh BMesh
* \ingroup blender
*/
/** \defgroup compositor Compositing
* \ingroup blender */
/** \defgroup compositor Compositing */
/** \defgroup python Python
* \ingroup blender
@@ -79,8 +78,7 @@
* \ingroup blender
*/
/** \defgroup data DNA, RNA and .blend access
* \ingroup blender */
/** \defgroup data DNA, RNA and .blend access*/
/** \defgroup gpu GPU
* \ingroup blender
@@ -103,12 +101,11 @@
* merged in docs.
*/
/**
* \defgroup gui GUI
* \ingroup blender */
/** \defgroup gui GUI */
/** \defgroup wm Window Manager
* \ingroup gui */
* \ingroup blender gui
*/
/* ================================ */
@@ -282,8 +279,7 @@
* \ingroup gui
*/
/** \defgroup externformats External Formats
* \ingroup blender */
/** \defgroup externformats External Formats */
/** \defgroup collada COLLADA
* \ingroup externformats
@@ -312,7 +308,4 @@
/* ================================ */
/** \defgroup undoc Undocumented
*
* \brief Modules and libraries that are still undocumented,
* or lacking proper integration into the doxygen system, are marked in this group.
*/
* \brief Modules and libraries that are still undocumented, or lacking proper integration into the doxygen system, are marked in this group. */

View File

@@ -61,7 +61,7 @@ def blender_extract_info(blender_bin: str) -> Dict[str, str]:
stdout=subprocess.PIPE,
).stdout.decode(encoding="utf-8")
blender_version_output = subprocess.run(
blender_version_ouput = subprocess.run(
[blender_bin, "--version"],
env=blender_env,
check=True,
@@ -73,7 +73,7 @@ def blender_extract_info(blender_bin: str) -> Dict[str, str]:
# check for each line's prefix to ensure these aren't included.
blender_version = ""
blender_date = ""
for l in blender_version_output.split("\n"):
for l in blender_version_ouput.split("\n"):
if l.startswith("Blender "):
# Remove 'Blender' prefix.
blender_version = l.split(" ", 1)[1].strip()
@@ -122,7 +122,7 @@ is a full-featured 3D application. It supports the entirety of the 3D pipeline -
'''modeling, rigging, animation, simulation, rendering, compositing, motion tracking, and video editing.
Use Blender to create 3D images and animations, films and commercials, content for games, '''
r'''architectural and industrial visualizations, and scientific visualizations.
r'''architectural and industrial visualizatons, and scientific visualizations.
https://www.blender.org''')

View File

@@ -14,7 +14,7 @@ sound = aud.Sound('music.ogg')
# play the audio, this return a handle to control play/pause
handle = device.play(sound)
# if the audio is not too big and will be used often you can buffer it
sound_buffered = aud.Sound.cache(sound)
sound_buffered = aud.Sound.buffer(sound)
handle_buffered = device.play(sound_buffered)
# stop the sounds (otherwise they play until their ends)

View File

@@ -12,7 +12,6 @@ such cases, lock the interface (Render → Lock Interface or
Below is an example of a mesh that is altered from a handler:
"""
def frame_change_pre(scene):
# A triangle that shifts in the z direction
zshift = scene.frame_current * 0.1
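The hunk is cut off above; for completeness, such a handler only runs once it is registered. A one-line sketch of the usual hookup (frame_change_pre below refers to the function defined above; bpy.app.handlers.frame_change_pre is the handler list it is appended to):

import bpy

# Register the handler so it fires before every frame change.
bpy.app.handlers.frame_change_pre.append(frame_change_pre)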

View File

@@ -11,17 +11,15 @@ import queue
execution_queue = queue.Queue()
# This function can safely be called in another thread.
# This function can savely be called in another thread.
# The function will be executed when the timer runs the next time.
def run_in_main_thread(function):
execution_queue.put(function)
def execute_queued_functions():
while not execution_queue.empty():
function = execution_queue.get()
function()
return 1.0
bpy.app.timers.register(execute_queued_functions)
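A hypothetical producer on the other side of the queue, assuming the snippet above has been loaded: the worker thread never touches bpy directly, it only enqueues callables, and the registered timer drains the queue on the main thread where Blender's API is safe to call.

import threading

def worker():
    # Safe from any thread: only the queue is touched here.
    run_in_main_thread(lambda: print("ran on the main thread"))

threading.Thread(target=worker).start()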

View File

@@ -31,13 +31,11 @@ owner = object()
subscribe_to = bpy.context.object.location
def msgbus_callback(*args):
# This will print:
# Something changed! (1, 2, 3)
print("Something changed!", args)
bpy.msgbus.subscribe_rna(
key=subscribe_to,
owner=owner,
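Subscriptions are keyed on the owner object, so teardown is a single call; a short sketch using the standard counterpart bpy.msgbus.clear_by_owner:

# Later, remove every subscription that was registered with this owner.
bpy.msgbus.clear_by_owner(owner)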

View File

@@ -1,40 +0,0 @@
"""
This method enables conversions between Local and Pose space for bones in
the middle of updating the armature without having to update dependencies
after each change, by manually carrying updated matrices in a recursive walk.
"""
def set_pose_matrices(obj, matrix_map):
"Assign pose space matrices of all bones at once, ignoring constraints."
def rec(pbone, parent_matrix):
matrix = matrix_map[pbone.name]
## Instead of:
# pbone.matrix = matrix
# bpy.context.view_layer.update()
# Compute and assign local matrix, using the new parent matrix
if pbone.parent:
pbone.matrix_basis = pbone.bone.convert_local_to_pose(
matrix,
pbone.bone.matrix_local,
parent_matrix=parent_matrix,
parent_matrix_local=pbone.parent.bone.matrix_local,
invert=True
)
else:
pbone.matrix_basis = pbone.bone.convert_local_to_pose(
matrix,
pbone.bone.matrix_local,
invert=True
)
# Recursively process children, passing the new matrix through
for child in pbone.children:
rec(child, matrix)
# Scan all bone trees from their roots
for pbone in obj.pose.bones:
if not pbone.parent:
rec(pbone, None)
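A hypothetical call site, assuming an armature object is active: snapshot every bone's current pose-space matrix, edit the map freely, then push the whole map back in one recursive pass with no intermediate depsgraph updates.

import bpy

obj = bpy.context.object
# Snapshot pose-space matrices keyed by bone name.
matrix_map = {pbone.name: pbone.matrix.copy() for pbone in obj.pose.bones}
# ... modify matrix_map entries here ...
set_pose_matrices(obj, matrix_map)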

View File

@@ -44,7 +44,7 @@ class OBJECT_OT_object_to_curve(bpy.types.Operator):
# Remove temporary curve.
obj.to_curve_clear()
# Invoke to_curve() with applying modifiers.
curve_with_modifiers = obj.to_curve(depsgraph, apply_modifiers=True)
curve_with_modifiers = obj.to_curve(depsgraph, apply_modifiers = True)
self.report({'INFO'}, f"{len(curve_with_modifiers.splines)} splines in new curve with modifiers.")
# Remove temporary curve.
obj.to_curve_clear()

View File

@@ -42,13 +42,8 @@ class SimpleMouseOperator(bpy.types.Operator):
self.y = event.mouse_y
return self.execute(context)
# Only needed if you want to add into a dynamic menu
def menu_func(self, context):
self.layout.operator(SimpleMouseOperator.bl_idname, text="Simple Mouse Operator")
# Register and add to the view menu (required to also use F3 search "Simple Mouse Operator" for quick access)
bpy.utils.register_class(SimpleMouseOperator)
bpy.types.VIEW3D_MT_view.append(menu_func)
# Test call to the newly defined operator.
# Here we call the operator and invoke it, meaning that the settings are taken
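The menu append has a matching teardown when this pattern is used inside an add-on; a minimal sketch of the conventional counterpart (unregister here is a hypothetical add-on entry point, not part of the example above):

def unregister():
    bpy.types.VIEW3D_MT_view.remove(menu_func)
    bpy.utils.unregister_class(SimpleMouseOperator)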

View File

@@ -43,7 +43,7 @@ def menu_func(self, context):
self.layout.operator(ExportSomeData.bl_idname, text="Text Export Operator")
# Register and add to the file selector (required to also use F3 search "Text Export Operator" for quick access)
# Register and add to the file selector
bpy.utils.register_class(ExportSomeData)
bpy.types.TOPBAR_MT_file_export.append(menu_func)

View File

@@ -27,14 +27,8 @@ class DialogOperator(bpy.types.Operator):
wm = context.window_manager
return wm.invoke_props_dialog(self)
# Only needed if you want to add into a dynamic menu
def menu_func(self, context):
self.layout.operator(DialogOperator.bl_idname, text="Dialog Operator")
# Register and add to the object menu (required to also use F3 search "Dialog Operator" for quick access)
bpy.utils.register_class(DialogOperator)
bpy.types.VIEW3D_MT_object.append(menu_func)
# Test call.
bpy.ops.object.dialog_operator('INVOKE_DEFAULT')

View File

@@ -41,13 +41,8 @@ class CustomDrawOperator(bpy.types.Operator):
col.prop(self, "my_string")
# Only needed if you want to add into a dynamic menu
def menu_func(self, context):
self.layout.operator(CustomDrawOperator.bl_idname, text="Custom Draw Operator")
# Register and add to the object menu (required to also use F3 search "Custom Draw Operator" for quick access)
bpy.utils.register_class(CustomDrawOperator)
bpy.types.VIEW3D_MT_object.append(menu_func)
# test call
bpy.ops.object.custom_draw('INVOKE_DEFAULT')

View File

@@ -55,13 +55,8 @@ class ModalOperator(bpy.types.Operator):
context.window_manager.modal_handler_add(self)
return {'RUNNING_MODAL'}
# Only needed if you want to add into a dynamic menu
def menu_func(self, context):
self.layout.operator(ModalOperator.bl_idname, text="Modal Operator")
# Register and add to the object menu (required to also use F3 search "Modal Operator" for quick access)
bpy.utils.register_class(ModalOperator)
bpy.types.VIEW3D_MT_object.append(menu_func)
# test call
bpy.ops.object.modal_operator('INVOKE_DEFAULT')

View File

@@ -31,13 +31,8 @@ class SearchEnumOperator(bpy.types.Operator):
context.window_manager.invoke_search_popup(self)
return {'RUNNING_MODAL'}
# Only needed if you want to add into a dynamic menu
def menu_func(self, context):
self.layout.operator(SearchEnumOperator.bl_idname, text="Search Enum Operator")
# Register and add to the object menu (required to also use F3 search "Search Enum Operator" for quick access)
bpy.utils.register_class(SearchEnumOperator)
bpy.types.VIEW3D_MT_object.append(menu_func)
# test call
bpy.ops.object.search_enum_operator('INVOKE_DEFAULT')

View File

@@ -22,13 +22,8 @@ class HelloWorldOperator(bpy.types.Operator):
print("Hello World")
return {'FINISHED'}
# Only needed if you want to add into a dynamic menu
def menu_func(self, context):
self.layout.operator(HelloWorldOperator.bl_idname, text="Hello World Operator")
# Register and add to the view menu (required to also use F3 search "Hello World Operator" for quick access)
bpy.utils.register_class(HelloWorldOperator)
bpy.types.VIEW3D_MT_view.append(menu_func)
# test call to the newly defined operator
bpy.ops.wm.hello_world()

Some files were not shown because too many files have changed in this diff Show More