Compare commits

..

147 Commits

Author SHA1 Message Date
b6232c8c13 Bumped version to 1.4.1 2016-07-27 18:38:10 +02:00
6d4ba51c6c Added missing callback argument 2016-07-27 18:37:29 +02:00
b9caecfce9 Removed some unused code 2016-07-26 16:02:15 +02:00
4ce8db88c6 Tagged version as 1.4.0 2016-07-26 15:30:23 +02:00
dfff0cb55b Bumped pillarsdk requirement to 1.5.0 2016-07-26 15:30:16 +02:00
1a515bfbda Texture browser: removed the blue line :( 2016-07-22 18:25:24 +02:00
a47dfa8f32 Texture browser: UI polish for HDRi variation selector 2016-07-22 18:25:20 +02:00
8890ad5421 Texture browser: storing desired HDRi variation on image, not window mgr
Also added an HDRi panel in the node properties.
2016-07-22 18:06:13 +02:00
f0d42ed2cc Texture browser: on click on HDRi, always just download the smallest image.
The larger ones can be downloaded later from the GUI.
The addon assumes that the first image in the node.properties.files list is
the smallest one. This can be ensured on a per-project basis by running
'manage.py hdri_sort {project URL}' on the Pillar server.
2016-07-22 17:49:29 +02:00
76ca59251b Texture browser: nicer handling of still-loading menu items. 2016-07-22 17:47:37 +02:00
b33ec74347 Texture browser: Use node name as file name 2016-07-22 17:47:37 +02:00
b5e33c52c1 Texture browser: simplified HDRi replacing.
It now just loads a new image into the existing image datablock.
2016-07-22 17:47:37 +02:00
8b56918989 Texture browser: HDRi variation selector now defaults to variation of the current image. 2016-07-22 17:47:31 +02:00
99257bd88b Added HDRi variation swap operator.
The variation/resolution selector isn't final yet.
2016-07-22 15:39:08 +02:00
c2363d248b Settings sync: solved etag mismatch issue.
This was caused by doing a cached request, which would always return
the etag of the previous request.
2016-07-22 12:57:15 +02:00
3776246d70 Texture browser: load images with relative path if needed.
If the local texture path of the current scene is relative, the image
will also be stored in a relative path.
2016-07-22 12:56:37 +02:00
9bc8c30443 Moved invoke-calling execute function to AsyncModalOperatorMixin
because this is what you'd generally want from an async operator
2016-07-22 12:48:50 +02:00
56b622a723 Texture browser: save Node document with downloaded image. 2016-07-21 16:25:58 +02:00
8edf9c7428 Texture browser: Don't show spinner for HDRi files 2016-07-21 14:09:52 +02:00
10bf3e62ec Marked as beta release in setup.py 2016-07-21 12:11:02 +02:00
3ec1a3d26d Made HDRi browsing more efficient.
It now uses the thumbnail of the node for each file, instead of trying
to download each file's thumbnail individually.
2016-07-21 12:10:29 +02:00
3ce89ad5f4 Show file size for HDRi files. 2016-07-21 11:17:53 +02:00
7cf858855e PEP8 formatting 2016-07-21 11:03:30 +02:00
a10b4a804c Added support for HDRi nodes.
These nodes are like textures, except that here the user should choose
which variation to download (instead of downloading them all).
2016-07-21 11:03:23 +02:00
514968de40 Texture browser: set downloaded image as active in image editor.
If the current context is the image editor, that is.
2016-07-20 17:08:57 +02:00
c73dce169f Added a panel that shows custom properties in the image editor. 2016-07-20 16:54:06 +02:00
369e082880 Added GPL License block to the top of each .py file. 2016-07-20 16:32:01 +02:00
6cd9cb1713 Texture browser: clicking on HDRi node no longer causes exception.
The browser still downloads all HDRi files, though.
2016-07-20 16:17:48 +02:00
37f701edaf Added texture browser to the image menu.
The Ctrl+Alt+Shift+A shortcut still works everywhere, but now it's also
easy to find in the GUI.
2016-07-20 16:09:56 +02:00
b04f9adb40 Texture browser: Don't use file name as menu item label.
Just using the node name is clearer, as it only depends on the node, and
no longer on the linked files themselves. This also makes it easier to
get compatible with HDRi nodes (as those files won't be named
"{name}-{maptype}".
2016-07-20 16:02:56 +02:00
70a0aba10a Allow browsing group_hdri nodes.
Nodes of type 'hdri' don't work well yet.
2016-07-20 15:58:09 +02:00
f6d05c4c84 Bumped version to 1.4.0 (otherwise we don't get HDRi projects from Cloud) 2016-07-20 14:27:00 +02:00
8e9d62b5c5 Include addon version in all Pillar HTTP requests 2016-07-20 14:26:36 +02:00
e300c32d64 Bumped version to 1.3.3 2016-07-20 11:13:31 +02:00
63eaaf7dc9 Added addon-bundle dir 2016-07-20 10:59:17 +02:00
6fcea9469f Limit scrolling to content area. 2016-07-19 18:13:32 +02:00
61f86d63e0 Scrolling on MacOS X 2016-07-19 18:13:18 +02:00
0d69b1d7ec Removed trailing period from bl_desc 2016-07-19 18:13:09 +02:00
d5139c767e Texture browser: Added scrolling.
You can scroll indefinitely for now. Might fix that in a later commit.
2016-07-15 17:01:24 +02:00
f0d829da49 Renamed some constants to all-caps 2016-07-15 16:59:52 +02:00
a4817259c8 Moved import 2016-07-15 16:56:55 +02:00
f899f6d1ab Started pagination support, but it isn't used yet. 2016-07-15 16:56:39 +02:00
9a0873eea4 Renamed gui.py to texture_browser.py
Also discovered double-unregister of a class, so that fixed an old bug.
Removed the workaround for that bug.
2016-07-15 14:27:42 +02:00
388a059400 Bumped version to 1.3.2 2016-07-15 14:02:01 +02:00
80d2b5b2e7 Move "Share on Cloud" button from image header to menu. 2016-07-15 14:01:21 +02:00
53ab2fc6df Bumped version to 1.3.1 2016-07-14 11:50:19 +02:00
1e2c74e82d Made screenshot the default target for image sharing.
This way the spacebar-menu takes a screenshot of the current area and
shares it. The other targets need a 'name' property set, so those won't
work from the spacebar-menu anyway.

I also added some extra options for the screenshotting, to mirror the
bpy.ops.screen.screenshot() operator options.

The full-window screenshot operator is now also placed in the Window menu.
2016-07-14 11:49:30 +02:00
ecb8f8575f Added missing logger 2016-07-14 11:47:50 +02:00
acd62b4917 Added screenshot functionality 2016-07-14 11:13:09 +02:00
65faeba7b0 Bumped requirement pillarsdk 1.3.0 → 1.4.0 2016-07-13 11:06:51 +02:00
8f8e14b66e Bumped version to 1.3.0 2016-07-12 18:00:00 +02:00
250939dc32 Added custom cloud icon 2016-07-08 17:00:44 +02:00
2e617287fd Remove now-unused PILLAR_WEB_SERVER_URL 2016-07-08 17:00:21 +02:00
36bbead1e1 Handling 413 Request Entity Too Large while uploading synced settings.
Non-subscribers are limited in the file size they can upload.
2016-07-08 12:38:58 +02:00
89a9055aa4 No longer use theatre_link, and show URL in GUI 2016-07-07 17:03:28 +02:00
6339f75406 Fix: added some missing return statements 2016-07-07 16:14:51 +02:00
a9aa961b92 Open browser at the short URL. 2016-07-07 15:43:36 +02:00
4da601be0c Share image after uploading it to Pillar. 2016-07-07 15:19:21 +02:00
3c9e4e2873 Give users the option to open a webbrowser after sharing an image.
The addon now also uses the home project URL from the project itself,
rather than hard-coding it.
2016-07-07 11:43:01 +02:00
4762f0292d Added support for sharing packed images. 2016-07-07 11:09:30 +02:00
959e83229b Allow execution of the file sharing operator
(rather than requiring INVOKE_DEFAULT)
2016-07-06 16:25:31 +02:00
662b6cf221 Choose default target='DATABLOCK' 2016-07-06 16:25:10 +02:00
96616dbdff Always create new nodes on the cloud, and prevent cache issue.
The cache issue: this caused etag mismatches when sharing a file multiple
times using always_create_new_node=False. Even though we don't use this
option right now, it should be easy to enable.
2016-07-06 16:24:55 +02:00
dbbffcc28e Some protection against sharing dirty image datablocks.
We can save dirty files, either to disk or the cloud, but I think that's
a bad idea to:

- Share unsaved data to the cloud; users can assume it's saved
  to disk and close blender, losing their file.
- Save unsaved data first; this can overwrite a file a user
  didn't want to overwrite.

The clearest way is simply to refuse to handle dirty datablocks.
2016-07-06 16:23:33 +02:00
0a1f1972da Support for uploading render results. 2016-07-06 15:48:55 +02:00
c9a92dd5d1 Added start of image sharing.
Sharing an image datablock works, if it has been saved and not packed.
Directly sharing a file, and dirty/packed datablocks are for a future
commit.
2016-07-06 15:20:50 +02:00
1c2def3b84 Moved some code from settings_sync.py to home_project.py and pillar.py 2016-07-05 17:26:26 +02:00
e29b61b649 Using pillarsdk.Node.create_asset_from_file() 2016-07-05 16:47:37 +02:00
1d1c8cf3d6 Bumped version to 1.2.2 2016-06-30 18:43:51 +02:00
fc01e32f0d Added note about restarting Blender 2016-06-30 14:58:26 +02:00
7577b348a5 Prevent writing bytecode when building zip 2016-06-30 14:53:27 +02:00
be99bcb250 Using "your Blender Cloud" instead of "your home project". 2016-06-30 14:43:04 +02:00
2190bd795e Bumped version to 1.2.1 2016-06-29 11:39:43 +02:00
76d1f88c4e Prevent syncing of any file path in the 'Files' tab. 2016-06-29 11:32:25 +02:00
f0b7a0451d Some UI tweaks 2016-06-28 16:55:35 +02:00
6eab5ba0af Work around RuntimeError unregistering the texture browser operator. 2016-06-28 16:41:31 +02:00
d457c77b19 Monkey-patch Requests < 2.6.1 to prevent crash on 2.77a/Mac 2016-06-28 16:07:08 +02:00
ef70d20a77 Bumped version to 1.2.0 in setup.py 2016-06-28 15:34:05 +02:00
db10495e7f Bumped pillarsdk requirement to 1.3.0 2016-06-28 15:32:05 +02:00
586905a183 If there are multiple wheels that match, load the latest one.
This should allow users to upgrade the addon by overwriting an older
version, instead of requiring a remove-and-install sequence.
2016-06-28 15:31:56 +02:00
822c8daf07 Gracefully handle use of sync feature without home project access.
This is for people that aren't part of the AB-testing group, who still
used this version of the addon.
2016-06-28 15:00:14 +02:00
e044607d91 Allow non-subscribers to use Blender Sync 2016-06-28 14:29:51 +02:00
e484d6496c Don't clear report when there was an error getting available Blender versions 2016-06-28 14:29:40 +02:00
78d567793e Depend on Pillar to create the 'Blender Sync' group node. 2016-06-28 14:29:12 +02:00
7e105167c0 Don't sync bookmarks and recent files (for now).
These files can be restored when we allow users to pick what they sync.
2016-06-28 14:28:39 +02:00
d53938e03b Check specific roles for specific addon features. 2016-06-24 15:22:12 +02:00
0f26551368 Texture browser now uses pillar.PillarOperatorMixin too. 2016-06-24 15:00:38 +02:00
645529bf35 Sync: gracefully handle credential sync errors 2016-06-24 14:46:27 +02:00
4d2314e08f Hide some things in the UI except when bpy.app.debug=True 2016-06-24 14:46:13 +02:00
a5df609d95 Bumped version to 1.2.0, and moved from TESTING to OFFICIAL support 2016-06-24 14:02:19 +02:00
e9a08c11b3 Renamed addon to just 'Blender Cloud'
It does more than just be the texture browser, namely Blender Sync!
2016-06-24 13:48:55 +02:00
7bdfa28a3f After pushing, change the 'pull' version to the current version of Blender.
Or to the latest version, if by some mistake somewhere the current push
isn't available after all.
2016-06-24 13:03:10 +02:00
e73e9d3df7 Nice UI and proper refreshing versions & loading settings. 2016-06-24 12:53:49 +02:00
671e9f31fa Nicer UI, and Blender Sync all in one operator. 2016-06-23 19:00:47 +02:00
6de026c8e2 Sync: new operator allows to choose which Blender version to pull.
The user gets a popup with the Blender versions for which they have
synced settings, can select one, and it'll pull those settings in.

There is an issue, though: the PULL action operator doesn't report to
the GUI the way it's written now. Will look at that later.
2016-06-23 11:09:19 +02:00
6470feac7c Moved some functions outside of sync operator 2016-06-23 10:35:30 +02:00
6462561f2d Moved some code around. 2016-06-22 16:48:16 +02:00
2080f92558 Removed now-unused code 2016-06-22 16:26:38 +02:00
a6f5a16583 Don't create folder structure on Cloud when pulling settings.
Instead, an error is shown that there are no synced settings. This will
have to be replaced, allowing the user to select from settings that
are available for other Blender versions.
2016-06-22 16:22:20 +02:00
6f376027e5 Update userpref.blend with machine-local settings before moving it into place.
After pulling settings from the Cloud, we update userpref.blend with
machine-local settings before we move it to ~/.config/blender/{ver}/config
2016-06-22 16:04:03 +02:00
2ee9d1ebfa Added callback that can be an 'async def' function 2016-06-22 15:17:35 +02:00
ed02816872 Sync to Blender version specific group node 2016-06-21 17:55:18 +02:00
d100232428 Also mention my fork of CacheControl in requirements.txt 2016-06-21 17:54:58 +02:00
9044bfadb9 Sync pull: Make a backup copy of the files before overwriting them 2016-06-21 16:30:58 +02:00
4cdf2cee9c Also save userprefs after restoring local-only settings. 2016-06-21 16:30:42 +02:00
9c527520a9 Prevent overwriting of certain user preferences.
Those prefs are considered system-specific, such as the temporary
directory and CUDA compute device.
2016-06-17 16:47:32 +02:00
56137c485f Nicer resetting of _loop_kicking_operator_running 2016-06-17 16:27:45 +02:00
eb77461ca0 Removed more caching + added explanation why caching is dangerous here. 2016-06-17 16:22:11 +02:00
884d68ebe8 Let check_credentials return the user ID 2016-06-17 16:22:11 +02:00
36d62082f3 Sync: downloading files from Cloud 2016-06-17 16:22:08 +02:00
af53d61cf2 Sync: upload caching fix
A POST to create a new node didn't invalidate the preceding GET on
/nodes to find whether the node already exists. As a result, the negative
answer was cached, and new nodes were created even though the node
already existed.
2016-06-17 15:48:22 +02:00
332c32ca9c Allow downloading files with None file_loading/file_loaded/map_type 2016-06-17 15:48:22 +02:00
988dc72ba1 Use Sybren's fork of CacheControl to fix caching issue.
We need my clone until pull request #125 has been merged & released.
See https://github.com/ionrock/cachecontrol/pull/125 for more info.
2016-06-17 15:48:22 +02:00
82c7560c7b Allow easy switching between cached and uncached requests to Pillar 2016-06-17 15:48:22 +02:00
73e2fd77e2 Added reloading of home file after pulling (not implemented pull yet)
Pull is easy, we can already download files from Cloud. Had to jump
through some hoops to make the reload work reliably, though.
2016-06-17 15:48:17 +02:00
483e847ffe Added checking credentials for settings sync 2016-06-17 13:14:10 +02:00
ef822208c8 Use blend_data.is_dirty, seems to be more reliable than is_saved 2016-06-17 13:11:34 +02:00
791b3f480c Uploading setting files to home project works. 2016-06-16 17:19:49 +02:00
efb1456596 Started working on synchronising settings 2016-06-16 16:33:35 +02:00
58785977e7 Easy access to pillar user ID 2016-06-16 16:33:21 +02:00
8a5efc18db Separated async-task-operator code from texture browser GUI code. 2016-06-16 16:33:05 +02:00
b970530f44 Added 'clear_wheels' script 2016-06-16 10:37:28 +02:00
ded05b6ca9 Tweaked debug message 2016-06-15 09:29:09 +02:00
5f5f0d8db9 Prevent double map types in the filename. 2016-05-20 16:20:33 +02:00
30f71ac9fc Fixed typo. I'm a moron. 2016-05-20 14:34:35 +02:00
bdef942b0b Replaced log.warning with debug msg.
We can now list all available projects, so there is no need to warn.
2016-05-20 11:33:26 +02:00
2a0ef39b12 Bumped SDK requirement to 1.2.0 2016-05-20 11:31:45 +02:00
c57a3bc902 Bumped version to 1.1.0 2016-05-18 16:36:56 +02:00
b94998d12e Fall back on texture.properties.files[0].file if texture.picture doesn't exist. 2016-05-18 16:27:04 +02:00
1cd42e246e Use current_path in log 2016-05-18 15:55:08 +02:00
079689a532 Client-side sorting of nodes.
The sorting happens after obtaining the individual nodes, as this is done
in parallel in unpredictable order.
2016-05-18 15:13:44 +02:00
597ba6de1c Use project name in download path, rather than UUID.
Filenames are now also sanitized.
2016-05-18 15:13:29 +02:00
7b59391872 Place map type (col, spec, etc) at end of filename instead of start. 2016-05-18 14:14:38 +02:00
8201ba7691 Fix node type name 2016-05-18 14:12:21 +02:00
8f2b0f8faa Allow querying for multiple node types. 2016-05-18 14:11:49 +02:00
33b52cc8a9 CPU-friendlier by lowering fixed redraw rate.
The GUI is still redrawn on other events, such as mouse move, so it still
responds quickly to that. This is just regarding background updates of the
data model, such as when loading thumbnails.
2016-05-18 13:01:48 +02:00
be46b9cf81 Handling more cases of login/credentials issues 2016-05-18 13:01:04 +02:00
ba4c951d32 Use /bcloud/texture-library end point to fetch texture library projects. 2016-05-18 12:50:51 +02:00
5c7343f8c9 Make sure we can always go up again (except at top level) 2016-05-18 12:17:07 +02:00
64d36818fe Start browsing at project overview, instead of inside one project.
Also moved from using project_uuid and node_uuid to using CloudPath
objects.
2016-05-18 11:57:36 +02:00
07f28d3072 Debug log reason why module can't be imported.
Usually this will be because someone just wants to use the wheel, but
during development this can be caused by other issues, and shouldn't
be silenced.
2016-05-18 11:57:36 +02:00
48ca91a364 Skip nodes of unsupported node_type (instead of raising exception) 2016-05-17 17:30:57 +02:00
7ee052f71b Use project UUID from prefs 2016-05-17 17:30:38 +02:00
2bb859efd9 Increased pillarsdk required version 2016-05-10 15:04:49 +02:00
ac3943fe6c Bumped version to 1.0.1 2016-05-10 15:01:15 +02:00
5eaee872bf Added check for user's roles -- disallow usage by non-subscribers.
This makes it clear from the get-go that users need to subscribe. Otherwise
they'll get unexpected errors once they try to download something.
2016-05-10 14:52:51 +02:00
6ce4399407 Show default mouse cursor, instead of the one belonging to the editor. 2016-05-10 14:33:02 +02:00
21 changed files with 3917 additions and 859 deletions

.gitignore

@@ -7,3 +7,4 @@ blender_cloud/wheels/*.whl
 /test_*.py
 /dist/
 /build/
+/addon-bundle/*.zip

README.md

@@ -24,6 +24,8 @@ Installing the addon
 * If you don't have one already, sign up for an account at
   the [Blender ID site](https://www.blender.org/id/).
+* If you had a previous version of the addon installed, deactivate it
+  and restart Blender.
 * Install and log in with the
   [Blender ID addon](https://developer.blender.org/diffusion/BIA/).
 * Install the Blender Cloud addon in Blender (User Preferences →

addon-bundle/README.txt (new file)

@@ -0,0 +1,52 @@
Blender Cloud Addon
===================
Congratulations on downloading the Blender Cloud addon. For your
convenience, we have bundled it with the Blender ID addon.
To use the Blender Cloud addon, perform the following steps:
- Use Blender (File, User Preferences, Addons, Install from file)
to install blender_id-x.x.x.addon.zip
- If you had a previous version of the Blender Cloud addon installed,
restart Blender now.
- Log in with your Blender ID.
- Use Blender to install blender_cloud-x.x.x.addon.zip
If you don't see the addon in the list, enable the Testing
category.
- Press Ctrl+Alt+Shift+A to start the texture browser.
- Visit the User Preferences, Addons panel, to use the Blender Sync
feature.
Support for Blenders not from blender.org
-----------------------------------------
If you use Blender from a source other than blender.org, such as an
Ubuntu package, you have to make sure that the Python package
"requests" is installed. On Ubuntu Linux this can be
done with the command
sudo apt-get install python3-requests
On other platforms & distributions this might be different.
Blender uses Python 3.5, so make sure you install the package for the
correct version of Python.
Subscribing to the Blender Cloud
--------------------------------
The Blender Sync feature is free to use for everybody with a Blender
ID account. In order to use the Texture Browser you need to have a
Blender Cloud subscription. If you didn't subscribe yet, go to:
https://cloud.blender.org/join

addon-bundle/bundle.sh (new executable file)

@@ -0,0 +1,33 @@
#!/bin/bash
# ##### BEGIN GPL LICENSE BLOCK #####
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
#
# ##### END GPL LICENSE BLOCK #####
cd $(dirname $(readlink -f $0))
BCLOUD=$(ls ../dist/blender_cloud-*.addon.zip | tail -n 1)
BID=$(ls ../../../blender-id-addon/dist/blender_id-*.addon.zip | tail -n 1)
cp -va $BCLOUD $BID .
BUNDLE=$(basename $BCLOUD)
BUNDLE=${BUNDLE/.addon.zip/-bundle-UNZIP_ME_FIRST.zip}
zip -9 $BUNDLE $(basename $BCLOUD) $(basename $BID) README.txt
dolphin --select $BUNDLE 2>/dev/null >/dev/null & disown
echo "CREATED: $BUNDLE"

blender_cloud/__init__.py

@@ -19,19 +19,21 @@
 # <pep8 compliant>
 
 bl_info = {
-    'name': 'Blender Cloud Texture Browser',
+    'name': 'Blender Cloud',
     'author': 'Sybren A. Stüvel and Francesco Siddi',
-    'version': (0, 2, 0),
+    'version': (1, 4, 1),
     'blender': (2, 77, 0),
-    'location': 'Ctrl+Shift+Alt+A anywhere',
-    'description': 'Allows downloading of textures from the Blender Cloud. Requires '
-                   'the Blender ID addon and Blender 2.77a or newer.',
+    'location': 'Addon Preferences panel, and Ctrl+Shift+Alt+A anywhere for texture browser',
+    'description': 'Texture library browser and Blender Sync. Requires the Blender ID addon '
+                   'and Blender 2.77a or newer.',
     'wiki_url': 'http://wiki.blender.org/index.php/Extensions:2.6/Py/'
                 'Scripts/System/BlenderCloud',
     'category': 'System',
-    'support': 'TESTING'
+    'support': 'OFFICIAL'
 }
 
+import logging
+
 # Support reloading
 if 'pillar' in locals():
     import importlib
@@ -43,16 +45,21 @@ if 'pillar' in locals():
     cache = importlib.reload(cache)
 else:
     from . import wheels
     wheels.load_wheels()
 
     from . import pillar, cache
 
+log = logging.getLogger(__name__)
+
 
 def register():
     """Late-loads and registers the Blender-dependent submodules."""
     import sys
 
+    _monkey_patch_requests()
+
     # Support reloading
     if '%s.blender' % __name__ in sys.modules:
         import importlib
@@ -63,23 +70,50 @@ def register():
             sys.modules[modname] = module
             return module
 
+        reload_mod('blendfile')
+        reload_mod('home_project')
+        reload_mod('utils')
+
         blender = reload_mod('blender')
-        gui = reload_mod('gui')
         async_loop = reload_mod('async_loop')
+        texture_browser = reload_mod('texture_browser')
+        settings_sync = reload_mod('settings_sync')
+        image_sharing = reload_mod('image_sharing')
     else:
-        from . import blender, gui, async_loop
+        from . import (blender, texture_browser, async_loop, settings_sync, blendfile, home_project,
+                       image_sharing)
 
     async_loop.setup_asyncio_executor()
     async_loop.register()
+    texture_browser.register()
     blender.register()
-    gui.register()
+    settings_sync.register()
+    image_sharing.register()
+
+
+def _monkey_patch_requests():
+    """Monkey-patch old versions of Requests.
+
+    This is required for the Mac version of Blender 2.77a.
+    """
+
+    import requests
+
+    if requests.__build__ >= 0x020601:
+        return
+
+    log.info('Monkey-patching requests version %s', requests.__version__)
+    from requests.packages.urllib3.response import HTTPResponse
+    HTTPResponse.chunked = False
+    HTTPResponse.chunk_left = None
 
 
 def unregister():
-    from . import blender, gui, async_loop
+    from . import blender, texture_browser, async_loop, settings_sync, image_sharing
 
-    gui.unregister()
+    image_sharing.unregister()
+    settings_sync.unregister()
     blender.unregister()
+    texture_browser.unregister()
     async_loop.unregister()

blender_cloud/async_loop.py

@@ -1,3 +1,21 @@
+# ##### BEGIN GPL LICENSE BLOCK #####
+#
+# This program is free software; you can redistribute it and/or
+# modify it under the terms of the GNU General Public License
+# as published by the Free Software Foundation; either version 2
+# of the License, or (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software Foundation,
+# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
+#
+# ##### END GPL LICENSE BLOCK #####
+
 """Manages the asyncio loop."""
 
 import asyncio
@@ -87,6 +105,15 @@ def ensure_async_loop():
     log.debug('Result of starting modal operator is %r', result)
 
 
+def erase_async_loop():
+    global _loop_kicking_operator_running
+
+    log.debug('Erasing async loop')
+
+    loop = asyncio.get_event_loop()
+    loop.stop()
+
+
 class AsyncLoopModalOperator(bpy.types.Operator):
     bl_idname = 'asyncio.loop'
     bl_label = 'Runs the asyncio main loop'
@@ -94,6 +121,14 @@ class AsyncLoopModalOperator(bpy.types.Operator):
     timer = None
     log = logging.getLogger(__name__ + '.AsyncLoopModalOperator')
 
+    def __del__(self):
+        global _loop_kicking_operator_running
+
+        # This can be required when the operator is running while Blender
+        # (re)loads a file. The operator then doesn't get the chance to
+        # finish the async tasks, hence stop_after_this_kick is never True.
+        _loop_kicking_operator_running = False
+
     def execute(self, context):
         return self.invoke(context, None)
 
@@ -115,6 +150,12 @@ class AsyncLoopModalOperator(bpy.types.Operator):
     def modal(self, context, event):
         global _loop_kicking_operator_running
 
+        # If _loop_kicking_operator_running is set to False, someone called
+        # erase_async_loop(). This is a signal that we really should stop
+        # running.
+        if not _loop_kicking_operator_running:
+            return {'FINISHED'}
+
         if event.type != 'TIMER':
             return {'PASS_THROUGH'}
 
@@ -130,6 +171,87 @@ class AsyncLoopModalOperator(bpy.types.Operator):
         return {'RUNNING_MODAL'}
 
 
+# noinspection PyAttributeOutsideInit
+class AsyncModalOperatorMixin:
+    async_task = None  # asyncio task for fetching thumbnails
+    signalling_future = None  # asyncio future for signalling that we want to cancel everything.
+    log = logging.getLogger('%s.AsyncModalOperatorMixin' % __name__)
+
+    _state = 'INITIALIZING'
+
+    def invoke(self, context, event):
+        context.window_manager.modal_handler_add(self)
+        self.timer = context.window_manager.event_timer_add(1 / 15, context.window)
+        return {'RUNNING_MODAL'}
+
+    def execute(self, context):
+        return self.invoke(context, None)
+
+    def modal(self, context, event):
+        task = self.async_task
+        if self._state != 'EXCEPTION' and task and task.done() and not task.cancelled():
+            ex = task.exception()
+            if ex is not None:
+                self._state = 'EXCEPTION'
+                self.log.error('Exception while running task: %s', ex)
+                return {'RUNNING_MODAL'}
+
+        if self._state == 'QUIT':
+            self._finish(context)
+            return {'FINISHED'}
+
+        return {'PASS_THROUGH'}
+
+    def _finish(self, context):
+        self._stop_async_task()
+        context.window_manager.event_timer_remove(self.timer)
+
+    def _new_async_task(self, async_task: asyncio.coroutine, future: asyncio.Future = None):
+        """Stops the currently running async task, and starts another one."""
+
+        self.log.debug('Setting up a new task %r, so any existing task must be stopped', async_task)
+        self._stop_async_task()
+
+        # Download the previews asynchronously.
+        self.signalling_future = future or asyncio.Future()
+        self.async_task = asyncio.ensure_future(async_task)
+        self.log.debug('Created new task %r', self.async_task)
+
+        # Start the async manager so everything happens.
+        ensure_async_loop()
+
+    def _stop_async_task(self):
+        self.log.debug('Stopping async task')
+        if self.async_task is None:
+            self.log.debug('No async task, trivially stopped')
+            return
+
+        # Signal that we want to stop.
+        self.async_task.cancel()
+        if not self.signalling_future.done():
+            self.log.info("Signalling that we want to cancel anything that's running.")
+            self.signalling_future.cancel()
+
+        # Wait until the asynchronous task is done.
+        if not self.async_task.done():
+            self.log.info("blocking until async task is done.")
+            loop = asyncio.get_event_loop()
+            try:
+                loop.run_until_complete(self.async_task)
+            except asyncio.CancelledError:
+                self.log.info('Asynchronous task was cancelled')
+                return
+
+        # noinspection PyBroadException
+        try:
+            self.async_task.result()  # This re-raises any exception of the task.
+        except asyncio.CancelledError:
+            self.log.info('Asynchronous task was cancelled')
+        except Exception:
+            self.log.exception("Exception from asynchronous task")
+
+
 def register():
     bpy.utils.register_class(AsyncLoopModalOperator)
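
For orientation, here is a minimal sketch of how a concrete operator could build on AsyncModalOperatorMixin, using only what the code above provides; the operator name, the async_execute() coroutine, and the import path are illustrative assumptions, not part of this changeset:

import asyncio

import bpy

# Assumes the addon is installed as the 'blender_cloud' package.
from blender_cloud.async_loop import AsyncModalOperatorMixin


class EXAMPLE_OT_async_demo(AsyncModalOperatorMixin, bpy.types.Operator):
    """Hypothetical operator that runs one async task, then quits."""
    bl_idname = 'example.async_demo'
    bl_label = 'Async demo'

    def invoke(self, context, event):
        # Schedule the coroutine; the mixin starts the loop-kicking modal operator.
        self._new_async_task(self.async_execute())
        return super().invoke(context, event)

    async def async_execute(self):
        await asyncio.sleep(1)  # stand-in for real work, e.g. a Pillar request
        self.log.info('Task done, asking modal() to finish the operator.')
        self._state = 'QUIT'

Setting self._state = 'QUIT' is what makes the mixin's modal() call _finish() and return {'FINISHED'}.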

blender_cloud/blender.py

@@ -1,15 +1,35 @@
+# ##### BEGIN GPL LICENSE BLOCK #####
+#
+# This program is free software; you can redistribute it and/or
+# modify it under the terms of the GNU General Public License
+# as published by the Free Software Foundation; either version 2
+# of the License, or (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software Foundation,
+# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
+#
+# ##### END GPL LICENSE BLOCK #####
+
 """Blender-specific code.
 
 Separated from __init__.py so that we can import & run from non-Blender environments.
 """
 
 import logging
+import os.path
 
 import bpy
-from bpy.types import AddonPreferences, Operator, WindowManager, Scene
-from bpy.props import StringProperty
+from bpy.types import AddonPreferences, Operator, WindowManager, Scene, PropertyGroup
+from bpy.props import StringProperty, EnumProperty, PointerProperty, BoolProperty
+import rna_prop_ui
 
-from . import pillar, gui
+from . import pillar
 
 PILLAR_SERVER_URL = 'https://cloudapi.blender.org/'
 # PILLAR_SERVER_URL = 'http://localhost:5000/'
@@ -17,32 +37,96 @@ PILLAR_SERVER_URL = 'https://cloudapi.blender.org/'
ADDON_NAME = 'blender_cloud' ADDON_NAME = 'blender_cloud'
log = logging.getLogger(__name__) log = logging.getLogger(__name__)
icons = None
def redraw(self, context):
context.area.tag_redraw()
def blender_syncable_versions(self, context):
bss = context.window_manager.blender_sync_status
versions = bss.available_blender_versions
if not versions:
return [('', 'No settings stored in your Blender Cloud', '')]
return [(v, v, '') for v in versions]
class SyncStatusProperties(PropertyGroup):
status = EnumProperty(
items=[
('NONE', 'NONE', 'We have done nothing at all yet.'),
('IDLE', 'IDLE', 'User requested something, which is done, and we are now idle.'),
('SYNCING', 'SYNCING', 'Synchronising with Blender Cloud.'),
],
name='status',
description='Current status of Blender Sync',
update=redraw)
version = EnumProperty(
items=blender_syncable_versions,
name='Version of Blender from which to pull',
description='Version of Blender from which to pull')
message = StringProperty(name='message', update=redraw)
level = EnumProperty(
items=[
('INFO', 'INFO', ''),
('WARNING', 'WARNING', ''),
('ERROR', 'ERROR', ''),
('SUBSCRIBE', 'SUBSCRIBE', ''),
],
name='level',
update=redraw)
def report(self, level: set, message: str):
assert len(level) == 1, 'level should be a set of one string, not %r' % level
self.level = level.pop()
self.message = message
# Message can also be empty, just to erase it from the GUI.
# No need to actually log those.
if message:
try:
loglevel = logging._nameToLevel[self.level]
except KeyError:
loglevel = logging.WARNING
log.log(loglevel, message)
# List of syncable versions is stored in 'available_blender_versions' ID property,
# because I don't know how to store a variable list of strings in a proper RNA property.
@property
def available_blender_versions(self) -> list:
return self.get('available_blender_versions', [])
@available_blender_versions.setter
def available_blender_versions(self, new_versions):
self['available_blender_versions'] = new_versions
class BlenderCloudPreferences(AddonPreferences): class BlenderCloudPreferences(AddonPreferences):
bl_idname = ADDON_NAME bl_idname = ADDON_NAME
# The following two properties are read-only to limit the scope of the # The following two properties are read-only to limit the scope of the
# addon and allow for proper testing within this scope. # addon and allow for proper testing within this scope.
pillar_server = bpy.props.StringProperty( pillar_server = StringProperty(
name='Blender Cloud Server', name='Blender Cloud Server',
description='URL of the Blender Cloud backend server', description='URL of the Blender Cloud backend server',
default=PILLAR_SERVER_URL, default=PILLAR_SERVER_URL,
get=lambda self: PILLAR_SERVER_URL get=lambda self: PILLAR_SERVER_URL
) )
# TODO: Move to the Scene properties?
project_uuid = bpy.props.StringProperty(
name='Project UUID',
description='UUID of the current Blender Cloud project',
default='5672beecc0261b2005ed1a33',
get=lambda self: '5672beecc0261b2005ed1a33'
)
local_texture_dir = StringProperty( local_texture_dir = StringProperty(
name='Default Blender Cloud texture storage directory', name='Default Blender Cloud texture storage directory',
subtype='DIR_PATH', subtype='DIR_PATH',
default='//textures') default='//textures')
open_browser_after_share = BoolProperty(
name='Open browser after sharing file',
description='When enabled, Blender will open a webbrowser',
default=True
)
def draw(self, context): def draw(self, context):
import textwrap import textwrap
@@ -58,53 +142,115 @@ class BlenderCloudPreferences(AddonPreferences):
blender_id_profile = blender_id.get_active_profile() blender_id_profile = blender_id.get_active_profile()
if blender_id is None: if blender_id is None:
icon = 'ERROR' msg_icon = 'ERROR'
text = 'This add-on requires Blender ID' text = 'This add-on requires Blender ID'
help_text = 'Make sure that the Blender ID add-on is installed and activated' help_text = 'Make sure that the Blender ID add-on is installed and activated'
elif not blender_id_profile: elif not blender_id_profile:
icon = 'ERROR' msg_icon = 'ERROR'
text = 'You are logged out.' text = 'You are logged out.'
help_text = 'To login, go to the Blender ID add-on preferences.' help_text = 'To login, go to the Blender ID add-on preferences.'
elif pillar.SUBCLIENT_ID not in blender_id_profile.subclients: elif bpy.app.debug and pillar.SUBCLIENT_ID not in blender_id_profile.subclients:
icon = 'QUESTION' msg_icon = 'QUESTION'
text = 'No Blender Cloud credentials.' text = 'No Blender Cloud credentials.'
help_text = ('You are logged in on Blender ID, but your credentials have not ' help_text = ('You are logged in on Blender ID, but your credentials have not '
'been synchronized with Blender Cloud yet. Press the Update ' 'been synchronized with Blender Cloud yet. Press the Update '
'Credentials button.') 'Credentials button.')
else: else:
icon = 'WORLD_DATA' msg_icon = 'WORLD_DATA'
text = 'You are logged in as %s.' % blender_id_profile.username text = 'You are logged in as %s.' % blender_id_profile.username
help_text = ('To logout or change profile, ' help_text = ('To logout or change profile, '
'go to the Blender ID add-on preferences.') 'go to the Blender ID add-on preferences.')
sub = layout.column(align=True) # Authentication stuff
sub.label(text=text, icon=icon) auth_box = layout.box()
auth_box.label(text=text, icon=msg_icon)
help_lines = textwrap.wrap(help_text, 80) help_lines = textwrap.wrap(help_text, 80)
for line in help_lines: for line in help_lines:
sub.label(text=line) auth_box.label(text=line)
if bpy.app.debug:
auth_box.operator("pillar.credentials_update")
sub = layout.column() # Texture browser stuff
sub.label(text='Local directory for downloaded textures') texture_box = layout.box()
texture_box.enabled = msg_icon != 'ERROR'
sub = texture_box.column()
sub.label(text='Local directory for downloaded textures', icon_value=icon('CLOUD'))
sub.prop(self, "local_texture_dir", text='Default') sub.prop(self, "local_texture_dir", text='Default')
sub.prop(context.scene, "local_texture_dir", text='Current scene') sub.prop(context.scene, "local_texture_dir", text='Current scene')
# options for Pillar # Blender Sync stuff
sub = layout.column() bss = context.window_manager.blender_sync_status
sub.enabled = icon != 'ERROR' bsync_box = layout.box()
bsync_box.enabled = msg_icon != 'ERROR'
row = bsync_box.row().split(percentage=0.33)
row.label('Blender Sync with Blender Cloud', icon_value=icon('CLOUD'))
# TODO: let users easily pick a project. For now, we just use the icon_for_level = {
# hard-coded server URL and UUID of the textures project. 'INFO': 'NONE',
# sub.prop(self, "pillar_server") 'WARNING': 'INFO',
# sub.prop(self, "project_uuid") 'ERROR': 'ERROR',
sub.operator("pillar.credentials_update") 'SUBSCRIBE': 'ERROR',
}
msg_icon = icon_for_level[bss.level] if bss.message else 'NONE'
message_container = row.row()
message_container.label(bss.message, icon=msg_icon)
sub = bsync_box.column()
if bss.level == 'SUBSCRIBE':
self.draw_subscribe_button(sub)
self.draw_sync_buttons(sub, bss)
# Image Share stuff
share_box = layout.box()
share_box.label('Image Sharing on Blender Cloud', icon_value=icon('CLOUD'))
texture_box.enabled = msg_icon != 'ERROR'
share_box.prop(self, 'open_browser_after_share')
def draw_subscribe_button(self, layout):
layout.operator('pillar.subscribe', icon='WORLD')
def draw_sync_buttons(self, layout, bss):
layout.enabled = bss.status in {'NONE', 'IDLE'}
buttons = layout.column()
row_buttons = buttons.row().split(percentage=0.5)
row_push = row_buttons.row()
row_pull = row_buttons.row(align=True)
row_push.operator('pillar.sync',
text='Save %i.%i settings' % bpy.app.version[:2],
icon='TRIA_UP').action = 'PUSH'
versions = bss.available_blender_versions
version = bss.version
if bss.status in {'NONE', 'IDLE'}:
if not versions or not version:
row_pull.operator('pillar.sync',
text='Find version to load',
icon='TRIA_DOWN').action = 'REFRESH'
else:
props = row_pull.operator('pillar.sync',
text='Load %s settings' % version,
icon='TRIA_DOWN')
props.action = 'PULL'
props.blender_version = version
row_pull.operator('pillar.sync',
text='',
icon='DOTSDOWN').action = 'SELECT'
else:
row_pull.label('Cloud Sync is running.')
class PillarCredentialsUpdate(Operator): class PillarCredentialsUpdate(pillar.PillarOperatorMixin,
Operator):
"""Updates the Pillar URL and tests the new URL.""" """Updates the Pillar URL and tests the new URL."""
bl_idname = 'pillar.credentials_update' bl_idname = 'pillar.credentials_update'
bl_label = 'Update credentials' bl_label = 'Update credentials'
log = logging.getLogger('bpy.ops.%s' % bl_idname)
@classmethod @classmethod
def poll(cls, context): def poll(cls, context):
# Only allow activation when the user is actually logged in. # Only allow activation when the user is actually logged in.
@@ -130,7 +276,7 @@ class PillarCredentialsUpdate(Operator):
         try:
             loop = asyncio.get_event_loop()
-            loop.run_until_complete(pillar.refresh_pillar_credentials())
+            loop.run_until_complete(self.check_credentials(context, set()))
         except blender_id.BlenderIdCommError as ex:
             log.exception('Error sending subclient-specific token to Blender ID')
             self.report({'ERROR'}, 'Failed to sync Blender ID to Blender Cloud')
@@ -144,24 +290,81 @@
         return {'FINISHED'}
 
 
+class PILLAR_OT_subscribe(Operator):
+    """Opens a browser to subscribe the user to the Cloud."""
+    bl_idname = 'pillar.subscribe'
+    bl_label = 'Subscribe to the Cloud'
+
+    def execute(self, context):
+        import webbrowser
+
+        webbrowser.open_new_tab('https://cloud.blender.org/join')
+        self.report({'INFO'}, 'We just started a browser for you.')
+
+        return {'FINISHED'}
+
+
+class PILLAR_PT_image_custom_properties(rna_prop_ui.PropertyPanel, bpy.types.Panel):
+    """Shows custom properties in the image editor."""
+
+    bl_space_type = 'IMAGE_EDITOR'
+    bl_region_type = 'UI'
+    bl_label = 'Custom Properties'
+
+    _context_path = 'edit_image'
+    _property_type = bpy.types.Image
+
+
 def preferences() -> BlenderCloudPreferences:
     return bpy.context.user_preferences.addons[ADDON_NAME].preferences
 
 
+def load_custom_icons():
+    global icons
+
+    if icons is not None:
+        # Already loaded
+        return
+
+    import bpy.utils.previews
+
+    icons = bpy.utils.previews.new()
+    my_icons_dir = os.path.join(os.path.dirname(__file__), 'icons')
+    icons.load('CLOUD', os.path.join(my_icons_dir, 'icon-cloud.png'), 'IMAGE')
+
+
+def unload_custom_icons():
+    global icons
+
+    if icons is None:
+        # Already unloaded
+        return
+
+    bpy.utils.previews.remove(icons)
+    icons = None
+
+
+def icon(icon_name: str) -> int:
+    """Returns the icon ID for the named icon.
+
+    Use with layout.operator('pillar.image_share', icon_value=icon('CLOUD'))
+    """
+
+    return icons[icon_name].icon_id
+
+
 def register():
     bpy.utils.register_class(BlenderCloudPreferences)
     bpy.utils.register_class(PillarCredentialsUpdate)
+    bpy.utils.register_class(SyncStatusProperties)
+    bpy.utils.register_class(PILLAR_OT_subscribe)
+    bpy.utils.register_class(PILLAR_PT_image_custom_properties)
 
-    WindowManager.blender_cloud_project = StringProperty(
-        name="Blender Cloud project UUID",
-        default='5672beecc0261b2005ed1a33')  # TODO: don't hard-code this
-
-    WindowManager.blender_cloud_node = StringProperty(
-        name="Blender Cloud node UUID",
-        default='')  # empty == top-level of project
-
     addon_prefs = preferences()
 
+    WindowManager.last_blender_cloud_location = StringProperty(
+        name="Last Blender Cloud browser location",
+        default="/")
+
     def default_if_empty(scene, context):
         """The scene's local_texture_dir, if empty, reverts to the addon prefs."""
@@ -174,13 +377,19 @@ def register():
         default=addon_prefs.local_texture_dir,
         update=default_if_empty)
 
+    WindowManager.blender_sync_status = PointerProperty(type=SyncStatusProperties)
+
+    load_custom_icons()
+
 
 def unregister():
-    gui.unregister()
+    unload_custom_icons()
     bpy.utils.unregister_class(PillarCredentialsUpdate)
     bpy.utils.unregister_class(BlenderCloudPreferences)
+    bpy.utils.unregister_class(SyncStatusProperties)
+    bpy.utils.unregister_class(PILLAR_OT_subscribe)
+    bpy.utils.unregister_class(PILLAR_PT_image_custom_properties)
 
-    del WindowManager.blender_cloud_project
-    del WindowManager.blender_cloud_node
-    del WindowManager.blender_cloud_thumbnails
+    del WindowManager.last_blender_cloud_location
+    del WindowManager.blender_sync_status

blender_cloud/blendfile.py (new file)

@@ -0,0 +1,929 @@
# ***** BEGIN GPL LICENSE BLOCK *****
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
#
# ***** END GPL LICENCE BLOCK *****
#
# (c) 2009, At Mind B.V. - Jeroen Bakker
# (c) 2014, Blender Foundation - Campbell Barton
import gzip
import logging
import os
import struct
import tempfile
log = logging.getLogger("blendfile")
FILE_BUFFER_SIZE = 1024 * 1024
# -----------------------------------------------------------------------------
# module global routines
#
# read routines
# open a filename
# determine if the file is compressed
# and returns a handle
def open_blend(filename, access="rb"):
"""Opens a blend file for reading or writing pending on the access
supports 2 kind of blend files. Uncompressed and compressed.
Known issue: does not support packaged blend files
"""
handle = open(filename, access)
magic_test = b"BLENDER"
magic = handle.read(len(magic_test))
if magic == magic_test:
log.debug("normal blendfile detected")
handle.seek(0, os.SEEK_SET)
bfile = BlendFile(handle)
bfile.is_compressed = False
bfile.filepath_orig = filename
return bfile
elif magic[:2] == b'\x1f\x8b':
log.debug("gzip blendfile detected")
handle.close()
log.debug("decompressing started")
fs = gzip.open(filename, "rb")
data = fs.read(FILE_BUFFER_SIZE)
magic = data[:len(magic_test)]
if magic == magic_test:
handle = tempfile.TemporaryFile()
while data:
handle.write(data)
data = fs.read(FILE_BUFFER_SIZE)
log.debug("decompressing finished")
fs.close()
log.debug("resetting decompressed file")
handle.seek(os.SEEK_SET, 0)
bfile = BlendFile(handle)
bfile.is_compressed = True
bfile.filepath_orig = filename
return bfile
else:
raise Exception("filetype inside gzip not a blend")
else:
raise Exception("filetype not a blend or a gzip blend")
def pad_up_4(offset):
return (offset + 3) & ~3
# -----------------------------------------------------------------------------
# module classes
class BlendFile:
"""
Blend file.
"""
__slots__ = (
# file (result of open())
"handle",
# str (original name of the file path)
"filepath_orig",
# BlendFileHeader
"header",
# struct.Struct
"block_header_struct",
# BlendFileBlock
"blocks",
# [DNAStruct, ...]
"structs",
# dict {b'StructName': sdna_index}
# (where the index is an index into 'structs')
"sdna_index_from_id",
# dict {addr_old: block}
"block_from_offset",
# int
"code_index",
# bool (did we make a change)
"is_modified",
# bool (is file gzipped)
"is_compressed",
)
def __init__(self, handle):
log.debug("initializing reading blend-file")
self.handle = handle
self.header = BlendFileHeader(handle)
self.block_header_struct = self.header.create_block_header_struct()
self.blocks = []
self.code_index = {}
block = BlendFileBlock(handle, self)
while block.code != b'ENDB':
if block.code == b'DNA1':
(self.structs,
self.sdna_index_from_id,
) = BlendFile.decode_structs(self.header, block, handle)
else:
handle.seek(block.size, os.SEEK_CUR)
self.blocks.append(block)
self.code_index.setdefault(block.code, []).append(block)
block = BlendFileBlock(handle, self)
self.is_modified = False
self.blocks.append(block)
# cache (could lazy init, incase we never use?)
self.block_from_offset = {block.addr_old: block for block in self.blocks if block.code != b'ENDB'}
def __enter__(self):
return self
def __exit__(self, type, value, traceback):
self.close()
def find_blocks_from_code(self, code):
assert(type(code) == bytes)
if code not in self.code_index:
return []
return self.code_index[code]
def find_block_from_offset(self, offset):
# same as looking looping over all blocks,
# then checking ``block.addr_old == offset``
assert(type(offset) is int)
return self.block_from_offset.get(offset)
def close(self):
"""
Close the blend file
writes the blend file to disk if changes has happened
"""
handle = self.handle
if self.is_modified:
if self.is_compressed:
log.debug("close compressed blend file")
handle.seek(os.SEEK_SET, 0)
log.debug("compressing started")
fs = gzip.open(self.filepath_orig, "wb")
data = handle.read(FILE_BUFFER_SIZE)
while data:
fs.write(data)
data = handle.read(FILE_BUFFER_SIZE)
fs.close()
log.debug("compressing finished")
handle.close()
def ensure_subtype_smaller(self, sdna_index_curr, sdna_index_next):
# never refine to a smaller type
if (self.structs[sdna_index_curr].size >
self.structs[sdna_index_next].size):
raise RuntimeError("cant refine to smaller type (%s -> %s)" %
(self.structs[sdna_index_curr].dna_type_id.decode('ascii'),
self.structs[sdna_index_next].dna_type_id.decode('ascii')))
@staticmethod
def decode_structs(header, block, handle):
"""
DNACatalog is a catalog of all information in the DNA1 file-block
"""
log.debug("building DNA catalog")
shortstruct = DNA_IO.USHORT[header.endian_index]
shortstruct2 = struct.Struct(header.endian_str + b'HH')
intstruct = DNA_IO.UINT[header.endian_index]
data = handle.read(block.size)
types = []
names = []
structs = []
sdna_index_from_id = {}
offset = 8
names_len = intstruct.unpack_from(data, offset)[0]
offset += 4
log.debug("building #%d names" % names_len)
for i in range(names_len):
tName = DNA_IO.read_data0_offset(data, offset)
offset = offset + len(tName) + 1
names.append(DNAName(tName))
del names_len
offset = pad_up_4(offset)
offset += 4
types_len = intstruct.unpack_from(data, offset)[0]
offset += 4
log.debug("building #%d types" % types_len)
for i in range(types_len):
dna_type_id = DNA_IO.read_data0_offset(data, offset)
# None will be replaced by the DNAStruct, below
types.append(DNAStruct(dna_type_id))
offset += len(dna_type_id) + 1
offset = pad_up_4(offset)
offset += 4
log.debug("building #%d type-lengths" % types_len)
for i in range(types_len):
tLen = shortstruct.unpack_from(data, offset)[0]
offset = offset + 2
types[i].size = tLen
del types_len
offset = pad_up_4(offset)
offset += 4
structs_len = intstruct.unpack_from(data, offset)[0]
offset += 4
log.debug("building #%d structures" % structs_len)
for sdna_index in range(structs_len):
d = shortstruct2.unpack_from(data, offset)
struct_type_index = d[0]
offset += 4
dna_struct = types[struct_type_index]
sdna_index_from_id[dna_struct.dna_type_id] = sdna_index
structs.append(dna_struct)
fields_len = d[1]
dna_offset = 0
for field_index in range(fields_len):
d2 = shortstruct2.unpack_from(data, offset)
field_type_index = d2[0]
field_name_index = d2[1]
offset += 4
dna_type = types[field_type_index]
dna_name = names[field_name_index]
if dna_name.is_pointer or dna_name.is_method_pointer:
dna_size = header.pointer_size * dna_name.array_size
else:
dna_size = dna_type.size * dna_name.array_size
field = DNAField(dna_type, dna_name, dna_size, dna_offset)
dna_struct.fields.append(field)
dna_struct.field_from_name[dna_name.name_only] = field
dna_offset += dna_size
return structs, sdna_index_from_id
class BlendFileBlock:
"""
Instance of a struct.
"""
__slots__ = (
# BlendFile
"file",
"code",
"size",
"addr_old",
"sdna_index",
"count",
"file_offset",
"user_data",
)
def __str__(self):
return ("<%s.%s (%s), size=%d at %s>" %
# fields=[%s]
(self.__class__.__name__,
self.dna_type.dna_type_id.decode('ascii'),
self.code.decode(),
self.size,
# b", ".join(f.dna_name.name_only for f in self.dna_type.fields).decode('ascii'),
hex(self.addr_old),
))
def __init__(self, handle, bfile):
OLDBLOCK = struct.Struct(b'4sI')
self.file = bfile
self.user_data = None
data = handle.read(bfile.block_header_struct.size)
# header size can be 8, 20, or 24 bytes long
# 8: old blend files ENDB block (exception)
# 20: normal headers 32 bit platform
# 24: normal headers 64 bit platform
if len(data) > 15:
blockheader = bfile.block_header_struct.unpack(data)
self.code = blockheader[0].partition(b'\0')[0]
if self.code != b'ENDB':
self.size = blockheader[1]
self.addr_old = blockheader[2]
self.sdna_index = blockheader[3]
self.count = blockheader[4]
self.file_offset = handle.tell()
else:
self.size = 0
self.addr_old = 0
self.sdna_index = 0
self.count = 0
self.file_offset = 0
else:
blockheader = OLDBLOCK.unpack(data)
self.code = blockheader[0].partition(b'\0')[0]
self.code = DNA_IO.read_data0(blockheader[0])
self.size = 0
self.addr_old = 0
self.sdna_index = 0
self.count = 0
self.file_offset = 0
@property
def dna_type(self):
return self.file.structs[self.sdna_index]
def refine_type_from_index(self, sdna_index_next):
assert(type(sdna_index_next) is int)
sdna_index_curr = self.sdna_index
self.file.ensure_subtype_smaller(sdna_index_curr, sdna_index_next)
self.sdna_index = sdna_index_next
def refine_type(self, dna_type_id):
assert(type(dna_type_id) is bytes)
self.refine_type_from_index(self.file.sdna_index_from_id[dna_type_id])
def get_file_offset(self, path,
default=...,
sdna_index_refine=None,
base_index=0,
):
"""
Return (offset, length)
"""
assert(type(path) is bytes)
ofs = self.file_offset
if base_index != 0:
assert(base_index < self.count)
ofs += (self.size // self.count) * base_index
self.file.handle.seek(ofs, os.SEEK_SET)
if sdna_index_refine is None:
sdna_index_refine = self.sdna_index
else:
self.file.ensure_subtype_smaller(self.sdna_index, sdna_index_refine)
dna_struct = self.file.structs[sdna_index_refine]
field = dna_struct.field_from_path(
self.file.header, self.file.handle, path)
return (self.file.handle.tell(), field.dna_name.array_size)
def get(self, path,
default=...,
sdna_index_refine=None,
use_nil=True, use_str=True,
base_index=0,
):
ofs = self.file_offset
if base_index != 0:
assert(base_index < self.count)
ofs += (self.size // self.count) * base_index
self.file.handle.seek(ofs, os.SEEK_SET)
if sdna_index_refine is None:
sdna_index_refine = self.sdna_index
else:
self.file.ensure_subtype_smaller(self.sdna_index, sdna_index_refine)
dna_struct = self.file.structs[sdna_index_refine]
return dna_struct.field_get(
self.file.header, self.file.handle, path,
default=default,
use_nil=use_nil, use_str=use_str,
)
def get_recursive_iter(self, path, path_root=b"",
default=...,
sdna_index_refine=None,
use_nil=True, use_str=True,
base_index=0,
):
if path_root:
path_full = (
(path_root if type(path_root) is tuple else (path_root, )) +
(path if type(path) is tuple else (path, )))
else:
path_full = path
try:
yield (path_full, self.get(path_full, default, sdna_index_refine, use_nil, use_str, base_index))
except NotImplementedError as ex:
msg, dna_name, dna_type = ex.args
struct_index = self.file.sdna_index_from_id.get(dna_type.dna_type_id, None)
if struct_index is None:
yield (path_full, "<%s>" % dna_type.dna_type_id.decode('ascii'))
else:
struct = self.file.structs[struct_index]
for f in struct.fields:
yield from self.get_recursive_iter(
f.dna_name.name_only, path_full, default, None, use_nil, use_str, 0)
def items_recursive_iter(self):
for k in self.keys():
yield from self.get_recursive_iter(k, use_str=False)
def get_data_hash(self):
"""
Generates a 'hash' that can be used instead of addr_old as block id, and that should be 'stable' across .blend
file load & save (i.e. it does not changes due to pointer addresses variations).
"""
# TODO This implementation is most likely far from optimal... and CRC32 is not renown as the best hashing
# algo either. But for now does the job!
import zlib
def _is_pointer(self, k):
return self.file.structs[self.sdna_index].field_from_path(
self.file.header, self.file.handle, k).dna_name.is_pointer
hsh = 1
for k, v in self.items_recursive_iter():
if not _is_pointer(self, k):
hsh = zlib.adler32(str(v).encode(), hsh)
return hsh
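# Illustrative sketch (not part of the original file; 'block_a'/'block_b' are
# hypothetical BlendFileBlocks): the hash can be used to match blocks between two
# saves of the same file, where addr_old would differ:
#   if block_a.get_data_hash() == block_b.get_data_hash():
#       ...  # same non-pointer content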
def set(self, path, value,
sdna_index_refine=None,
):
if sdna_index_refine is None:
sdna_index_refine = self.sdna_index
else:
self.file.ensure_subtype_smaller(self.sdna_index, sdna_index_refine)
dna_struct = self.file.structs[sdna_index_refine]
self.file.handle.seek(self.file_offset, os.SEEK_SET)
self.file.is_modified = True
return dna_struct.field_set(
self.file.header, self.file.handle, path, value)
# ---------------
# Utility get/set
#
# avoid inline pointer casting
def get_pointer(
self, path,
default=...,
sdna_index_refine=None,
base_index=0,
):
if sdna_index_refine is None:
sdna_index_refine = self.sdna_index
result = self.get(path, default, sdna_index_refine=sdna_index_refine, base_index=base_index)
# default
if type(result) is not int:
return result
assert(self.file.structs[sdna_index_refine].field_from_path(
self.file.header, self.file.handle, path).dna_name.is_pointer)
if result != 0:
# possible (but unlikely)
# that this fails and returns None
# maybe we want to raise some exception in this case
return self.file.find_block_from_offset(result)
else:
return None
# ----------------------
# Python convenience API
# dict like access
def __getitem__(self, item):
return self.get(item, use_str=False)
def __setitem__(self, item, value):
self.set(item, value)
def keys(self):
return (f.dna_name.name_only for f in self.dna_type.fields)
def values(self):
for k in self.keys():
try:
yield self[k]
except NotImplementedError as ex:
msg, dna_name, dna_type = ex.args
yield "<%s>" % dna_type.dna_type_id.decode('ascii')
def items(self):
for k in self.keys():
try:
yield (k, self[k])
except NotImplementedError as ex:
msg, dna_name, dna_type = ex.args
yield (k, "<%s>" % dna_type.dna_type_id.decode('ascii'))
# -----------------------------------------------------------------------------
# Read Magic
#
# magic = str
# pointer_size = int
# is_little_endian = bool
# version = int
class BlendFileHeader:
"""
BlendFileHeader parses the first 12 bytes of a blend-file;
it contains information about the hardware architecture.
"""
__slots__ = (
# str
"magic",
# int 4/8
"pointer_size",
# bool
"is_little_endian",
# int
"version",
# str, used to pass to 'struct'
"endian_str",
# int, used to index common types
"endian_index",
)
def __init__(self, handle):
FILEHEADER = struct.Struct(b'7s1s1s3s')
log.debug("reading blend-file-header")
values = FILEHEADER.unpack(handle.read(FILEHEADER.size))
self.magic = values[0]
pointer_size_id = values[1]
if pointer_size_id == b'-':
self.pointer_size = 8
elif pointer_size_id == b'_':
self.pointer_size = 4
else:
assert(0)
endian_id = values[2]
if endian_id == b'v':
self.is_little_endian = True
self.endian_str = b'<'
self.endian_index = 0
elif endian_id == b'V':
self.is_little_endian = False
self.endian_index = 1
self.endian_str = b'>'
else:
assert(0)
version_id = values[3]
self.version = int(version_id)
def create_block_header_struct(self):
return struct.Struct(b''.join((
self.endian_str,
b'4sI',
b'I' if self.pointer_size == 4 else b'Q',
b'II',
)))
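# Illustrative note (not part of the original file): a 12-byte header such as
# b'BLENDER-v277' decodes to pointer_size=8 ('-'), little-endian ('v') and
# version=277; b'BLENDER_V269' would mean 32-bit pointers ('_') and big-endian ('V').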
class DNAName:
"""
DNAName is a C-type name stored in the DNA
"""
__slots__ = (
"name_full",
"name_only",
"is_pointer",
"is_method_pointer",
"array_size",
)
def __init__(self, name_full):
self.name_full = name_full
self.name_only = self.calc_name_only()
self.is_pointer = self.calc_is_pointer()
self.is_method_pointer = self.calc_is_method_pointer()
self.array_size = self.calc_array_size()
def __repr__(self):
return '%s(%r)' % (type(self).__qualname__, self.name_full)
def as_reference(self, parent):
if parent is None:
result = b''
else:
result = parent + b'.'
result = result + self.name_only
return result
def calc_name_only(self):
result = self.name_full.strip(b'*()')
index = result.find(b'[')
if index != -1:
result = result[:index]
return result
def calc_is_pointer(self):
return (b'*' in self.name_full)
def calc_is_method_pointer(self):
return (b'(*' in self.name_full)
def calc_array_size(self):
result = 1
temp = self.name_full
index = temp.find(b'[')
while index != -1:
index_2 = temp.find(b']')
result *= int(temp[index + 1:index_2])
temp = temp[index_2 + 1:]
index = temp.find(b'[')
return result
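# Illustrative note (not part of the original file): for name_full=b'*mat[4][4]' this
# yields name_only=b'mat', is_pointer=True, is_method_pointer=False, array_size=16;
# a plain b'cursor' gives array_size=1.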
class DNAField:
"""
DNAField is a coupled DNAStruct and DNAName,
and caches the offset for reuse.
"""
__slots__ = (
# DNAName
"dna_name",
# tuple of 3 items
# [bytes (struct name), int (struct size), DNAStruct]
"dna_type",
# size on-disk
"dna_size",
# cached info (avoid looping over fields each time)
"dna_offset",
)
def __init__(self, dna_type, dna_name, dna_size, dna_offset):
self.dna_type = dna_type
self.dna_name = dna_name
self.dna_size = dna_size
self.dna_offset = dna_offset
class DNAStruct:
"""
DNAStruct is a C-type structure stored in the DNA
"""
__slots__ = (
"dna_type_id",
"size",
"fields",
"field_from_name",
"user_data",
)
def __init__(self, dna_type_id):
self.dna_type_id = dna_type_id
self.fields = []
self.field_from_name = {}
self.user_data = None
def __repr__(self):
return '%s(%r)' % (type(self).__qualname__, self.dna_type_id)
def field_from_path(self, header, handle, path):
"""
Support lookups as bytes or a tuple of bytes and optional index.
C style 'id.name' --> (b'id', b'name')
C style 'array[4]' --> ('array', 4)
"""
if type(path) is tuple:
name = path[0]
if len(path) >= 2 and type(path[1]) is not bytes:
name_tail = path[2:]
index = path[1]
assert(type(index) is int)
else:
name_tail = path[1:]
index = 0
else:
name = path
name_tail = None
index = 0
assert(type(name) is bytes)
field = self.field_from_name.get(name)
if field is not None:
handle.seek(field.dna_offset, os.SEEK_CUR)
if index != 0:
if field.dna_name.is_pointer:
index_offset = header.pointer_size * index
else:
index_offset = field.dna_type.size * index
assert(index_offset < field.dna_size)
handle.seek(index_offset, os.SEEK_CUR)
if not name_tail: # None or ()
return field
else:
return field.dna_type.field_from_path(header, handle, name_tail)
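# Illustrative sketch (not part of the original file; 'block' is a hypothetical
# BlendFileBlock): the same path notation is what BlendFileBlock.get() passes down:
#   block.get(b'name')                 # simple field
#   block.get((b'id', b'name'))        # nested struct member, C-style 'id.name'
#   block.get((b'loc', 1))             # indexed element, C-style 'loc[1]'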
def field_get(self, header, handle, path,
default=...,
use_nil=True, use_str=True,
):
field = self.field_from_path(header, handle, path)
if field is None:
if default is not ...:
return default
else:
raise KeyError("%r not found in %r (%r)" %
(path, [f.dna_name.name_only for f in self.fields], self.dna_type_id))
dna_type = field.dna_type
dna_name = field.dna_name
if dna_name.is_pointer:
return DNA_IO.read_pointer(handle, header)
elif dna_type.dna_type_id == b'int':
if dna_name.array_size > 1:
return [DNA_IO.read_int(handle, header) for i in range(dna_name.array_size)]
return DNA_IO.read_int(handle, header)
elif dna_type.dna_type_id == b'short':
if dna_name.array_size > 1:
return [DNA_IO.read_short(handle, header) for i in range(dna_name.array_size)]
return DNA_IO.read_short(handle, header)
elif dna_type.dna_type_id == b'uint64_t':
if dna_name.array_size > 1:
return [DNA_IO.read_ulong(handle, header) for i in range(dna_name.array_size)]
return DNA_IO.read_ulong(handle, header)
elif dna_type.dna_type_id == b'float':
if dna_name.array_size > 1:
return [DNA_IO.read_float(handle, header) for i in range(dna_name.array_size)]
return DNA_IO.read_float(handle, header)
elif dna_type.dna_type_id == b'char':
if use_str:
if use_nil:
return DNA_IO.read_string0(handle, dna_name.array_size)
else:
return DNA_IO.read_string(handle, dna_name.array_size)
else:
if use_nil:
return DNA_IO.read_bytes0(handle, dna_name.array_size)
else:
return DNA_IO.read_bytes(handle, dna_name.array_size)
else:
raise NotImplementedError("%r exists but isn't pointer, can't resolve field %r" %
(path, dna_name.name_only), dna_name, dna_type)
def field_set(self, header, handle, path, value):
assert(type(path) == bytes)
field = self.field_from_path(header, handle, path)
if field is None:
raise KeyError("%r not found in %r" %
(path, [f.dna_name.name_only for f in self.fields]))
dna_type = field.dna_type
dna_name = field.dna_name
if dna_type.dna_type_id == b'char':
if type(value) is str:
return DNA_IO.write_string(handle, value, dna_name.array_size)
else:
return DNA_IO.write_bytes(handle, value, dna_name.array_size)
elif dna_type.dna_type_id == b'int':
DNA_IO.write_int(handle, header, value)
else:
raise NotImplementedError("Setting %r is not yet supported for %r" %
(dna_type, dna_name), dna_name, dna_type)
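# Illustrative sketch (not part of the original file; 'block' and the field names are
# hypothetical): only char and int fields can currently be written back, e.g.:
#   block.set(b'name', 'NewName')      # 0-terminated string into a char array
#   block.set(b'flag', 0)              # single int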
class DNA_IO:
"""
Module like class, for read-write utility functions.
Only stores static methods & constants.
"""
__slots__ = ()
def __new__(cls, *args, **kwargs):
raise RuntimeError("%s should not be instantiated" % cls)
@staticmethod
def write_string(handle, astring, fieldlen):
assert(isinstance(astring, str))
if len(astring) >= fieldlen:
stringw = astring[0:fieldlen]
else:
stringw = astring + '\0'
handle.write(stringw.encode('utf-8'))
@staticmethod
def write_bytes(handle, astring, fieldlen):
assert(isinstance(astring, (bytes, bytearray)))
if len(astring) >= fieldlen:
stringw = astring[0:fieldlen]
else:
stringw = astring + b'\0'
handle.write(stringw)
@staticmethod
def read_bytes(handle, length):
data = handle.read(length)
return data
@staticmethod
def read_bytes0(handle, length):
data = handle.read(length)
return DNA_IO.read_data0(data)
@staticmethod
def read_string(handle, length):
return DNA_IO.read_bytes(handle, length).decode('utf-8')
@staticmethod
def read_string0(handle, length):
return DNA_IO.read_bytes0(handle, length).decode('utf-8')
@staticmethod
def read_data0_offset(data, offset):
add = data.find(b'\0', offset) - offset
return data[offset:offset + add]
@staticmethod
def read_data0(data):
add = data.find(b'\0')
return data[:add]
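# Illustrative note (not part of the original file): the *0 variants cut at the first
# NUL byte, so DNA_IO.read_data0(b'Scene\x00\x00junk') returns b'Scene'.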
USHORT = struct.Struct(b'<H'), struct.Struct(b'>H')
@staticmethod
def read_ushort(handle, fileheader):
st = DNA_IO.USHORT[fileheader.endian_index]
return st.unpack(handle.read(st.size))[0]
SSHORT = struct.Struct(b'<h'), struct.Struct(b'>h')
@staticmethod
def read_short(handle, fileheader):
st = DNA_IO.SSHORT[fileheader.endian_index]
return st.unpack(handle.read(st.size))[0]
UINT = struct.Struct(b'<I'), struct.Struct(b'>I')
@staticmethod
def read_uint(handle, fileheader):
st = DNA_IO.UINT[fileheader.endian_index]
return st.unpack(handle.read(st.size))[0]
SINT = struct.Struct(b'<i'), struct.Struct(b'>i')
@staticmethod
def read_int(handle, fileheader):
st = DNA_IO.SINT[fileheader.endian_index]
return st.unpack(handle.read(st.size))[0]
@staticmethod
def write_int(handle, fileheader, value):
assert isinstance(value, int), 'value must be int, but is %r: %r' % (type(value), value)
st = DNA_IO.SINT[fileheader.endian_index]
to_write = st.pack(value)
handle.write(to_write)
FLOAT = struct.Struct(b'<f'), struct.Struct(b'>f')
@staticmethod
def read_float(handle, fileheader):
st = DNA_IO.FLOAT[fileheader.endian_index]
return st.unpack(handle.read(st.size))[0]
ULONG = struct.Struct(b'<Q'), struct.Struct(b'>Q')
@staticmethod
def read_ulong(handle, fileheader):
st = DNA_IO.ULONG[fileheader.endian_index]
return st.unpack(handle.read(st.size))[0]
@staticmethod
def read_pointer(handle, header):
"""
reads a pointer from a file handle
the pointer size is given by the header (BlendFileHeader)
"""
if header.pointer_size == 4:
st = DNA_IO.UINT[header.endian_index]
return st.unpack(handle.read(st.size))[0]
if header.pointer_size == 8:
st = DNA_IO.ULONG[header.endian_index]
return st.unpack(handle.read(st.size))[0]

View File

@@ -1,3 +1,21 @@
# ##### BEGIN GPL LICENSE BLOCK #####
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
#
# ##### END GPL LICENSE BLOCK #####
"""HTTP Cache management. """HTTP Cache management.
This module configures a cached session for the Requests package. This module configures a cached session for the Requests package.

View File

@@ -1,729 +0,0 @@
# ##### BEGIN GPL LICENSE BLOCK #####
#
# Copyright (C) 2014 Blender Aid
# http://www.blendearaid.com
# blenderaid@gmail.com
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
# ##### END GPL LICENSE BLOCK #####
import asyncio
import logging
import threading
import bpy
import bgl
import blf
import os
from bpy.types import AddonPreferences
from bpy.props import (BoolProperty, EnumProperty,
FloatProperty, FloatVectorProperty,
IntProperty, StringProperty)
import pillarsdk
from . import async_loop, pillar, cache
icon_width = 128
icon_height = 128
target_item_width = 400
target_item_height = 128
library_path = '/tmp'
library_icons_path = os.path.join(os.path.dirname(__file__), "icons")
class UpNode(pillarsdk.Node):
def __init__(self):
super().__init__()
self['_id'] = 'UP'
self['node_type'] = 'UP'
class MenuItem:
"""GUI menu item for the 3D View GUI."""
icon_margin_x = 4
icon_margin_y = 4
text_margin_x = 6
text_height = 16
text_width = 72
DEFAULT_ICONS = {
'FOLDER': os.path.join(library_icons_path, 'folder.png'),
'SPINNER': os.path.join(library_icons_path, 'spinner.png'),
}
SUPPORTED_NODE_TYPES = {'UP', 'group_texture', 'texture'}
def __init__(self, node, file_desc, thumb_path: str, label_text):
if node['node_type'] not in self.SUPPORTED_NODE_TYPES:
raise TypeError('Node of type %r not supported; supported are %r.' % (
node['node_type'], self.SUPPORTED_NODE_TYPES))
self.node = node # pillarsdk.Node, contains 'node_type' key to indicate type
self.file_desc = file_desc # pillarsdk.File object, or None if a 'folder' node.
self.label_text = label_text
self._thumb_path = ''
self.icon = None
self._is_folder = node['node_type'] == 'group_texture' or isinstance(node, UpNode)
self.thumb_path = thumb_path
# Updated when drawing the image
self.x = 0
self.y = 0
self.width = 0
self.height = 0
@property
def thumb_path(self) -> str:
return self._thumb_path
@thumb_path.setter
def thumb_path(self, new_thumb_path: str):
self._thumb_path = self.DEFAULT_ICONS.get(new_thumb_path, new_thumb_path)
if self._thumb_path:
self.icon = bpy.data.images.load(filepath=self._thumb_path)
else:
self.icon = None
@property
def node_uuid(self) -> str:
return self.node['_id']
def update(self, node, file_desc, thumb_path: str, label_text):
# We can get updated information about our Node, but a MenuItem should
# always represent one node, and it shouldn't be shared between nodes.
if self.node_uuid != node['_id']:
raise ValueError("Don't change the node ID this MenuItem reflects, "
"just create a new one.")
self.node = node
self.file_desc = file_desc # pillarsdk.File object, or None if a 'folder' node.
self.thumb_path = thumb_path
self.label_text = label_text
@property
def is_folder(self) -> bool:
return self._is_folder
def update_placement(self, x, y, width, height):
"""Use OpenGL to draw this one menu item."""
self.x = x
self.y = y
self.width = width
self.height = height
def draw(self, highlighted: bool):
bgl.glEnable(bgl.GL_BLEND)
if highlighted:
bgl.glColor4f(0.555, 0.555, 0.555, 0.8)
else:
bgl.glColor4f(0.447, 0.447, 0.447, 0.8)
bgl.glRectf(self.x, self.y, self.x + self.width, self.y + self.height)
texture = self.icon
err = texture.gl_load(filter=bgl.GL_NEAREST, mag=bgl.GL_NEAREST)
assert not err, 'OpenGL error: %i' % err
bgl.glColor4f(0.0, 0.0, 1.0, 0.5)
# bgl.glLineWidth(1.5)
# ------ TEXTURE ---------#
bgl.glBindTexture(bgl.GL_TEXTURE_2D, texture.bindcode[0])
bgl.glEnable(bgl.GL_TEXTURE_2D)
bgl.glBlendFunc(bgl.GL_SRC_ALPHA, bgl.GL_ONE_MINUS_SRC_ALPHA)
bgl.glColor4f(1, 1, 1, 1)
bgl.glBegin(bgl.GL_QUADS)
bgl.glTexCoord2d(0, 0)
bgl.glVertex2d(self.x + self.icon_margin_x, self.y)
bgl.glTexCoord2d(0, 1)
bgl.glVertex2d(self.x + self.icon_margin_x, self.y + icon_height)
bgl.glTexCoord2d(1, 1)
bgl.glVertex2d(self.x + self.icon_margin_x + icon_width, self.y + icon_height)
bgl.glTexCoord2d(1, 0)
bgl.glVertex2d(self.x + self.icon_margin_x + icon_width, self.y)
bgl.glEnd()
bgl.glDisable(bgl.GL_TEXTURE_2D)
bgl.glDisable(bgl.GL_BLEND)
texture.gl_free()
# draw some text
font_id = 0
blf.position(font_id,
self.x + self.icon_margin_x + icon_width + self.text_margin_x,
self.y + icon_height * 0.5 - 0.25 * self.text_height, 0)
blf.size(font_id, self.text_height, self.text_width)
blf.draw(font_id, self.label_text)
def hits(self, mouse_x: int, mouse_y: int) -> bool:
return self.x < mouse_x < self.x + self.width and self.y < mouse_y < self.y + self.height
class BlenderCloudBrowser(bpy.types.Operator):
bl_idname = 'pillar.browser'
bl_label = 'Blender Cloud Texture Browser'
_draw_handle = None
_state = 'INITIALIZING'
project_uuid = '5672beecc0261b2005ed1a33' # Blender Cloud project UUID
node = None # The Node object we're currently showing, or None if we're at the project top.
node_uuid = '' # Blender Cloud node UUID we're currently showing, i.e. None-safe self.node['_id']
# This contains a stack of Node objects that lead up to the currently browsed node.
# This allows us to display the "up" item.
path_stack = []
async_task = None # asyncio task for fetching thumbnails
signalling_future = None # asyncio future for signalling that we want to cancel everything.
timer = None
log = logging.getLogger('%s.BlenderCloudBrowser' % __name__)
_menu_item_lock = threading.Lock()
current_path = ''
current_display_content = []
loaded_images = set()
thumbnails_cache = ''
maximized_area = False
mouse_x = 0
mouse_y = 0
def invoke(self, context, event):
# Refuse to start if the file hasn't been saved.
if not context.blend_data.is_saved:
self.report({'ERROR'}, 'Please save your Blend file before using '
'the Blender Cloud addon.')
return {'CANCELLED'}
wm = context.window_manager
self.project_uuid = wm.blender_cloud_project
self.node_uuid = wm.blender_cloud_node
self.path_stack = []
self.thumbnails_cache = cache.cache_directory('thumbnails')
self.mouse_x = event.mouse_x
self.mouse_y = event.mouse_y
# See if we have to maximize the current area
if not context.screen.show_fullscreen:
self.maximized_area = True
bpy.ops.screen.screen_full_area(use_hide_panels=True)
# Add the region OpenGL drawing callback
# draw in view space with 'POST_VIEW' and 'PRE_VIEW'
self._draw_handle = context.space_data.draw_handler_add(
self.draw_menu, (context,), 'WINDOW', 'POST_PIXEL')
self.current_display_content = []
self.loaded_images = set()
self.check_credentials()
context.window_manager.modal_handler_add(self)
self.timer = context.window_manager.event_timer_add(1 / 30, context.window)
return {'RUNNING_MODAL'}
def modal(self, context, event):
task = self.async_task
if self._state != 'EXCEPTION' and task.done() and not task.cancelled():
ex = task.exception()
if ex is not None:
self._state = 'EXCEPTION'
self.log.error('Exception while running task: %s', ex)
return {'RUNNING_MODAL'}
if self._state == 'QUIT':
self._finish(context)
return {'FINISHED'}
if event.type == 'TAB' and event.value == 'RELEASE':
self.log.info('Ensuring async loop is running')
async_loop.ensure_async_loop()
if event.type == 'TIMER':
context.area.tag_redraw()
return {'RUNNING_MODAL'}
if 'MOUSE' in event.type:
context.area.tag_redraw()
self.mouse_x = event.mouse_x
self.mouse_y = event.mouse_y
if self._state == 'BROWSING' and event.type == 'LEFTMOUSE' and event.value == 'RELEASE':
selected = self.get_clicked()
if selected is None:
# No item clicked, ignore it.
return {'RUNNING_MODAL'}
if selected.is_folder:
self.descend_node(selected.node)
else:
if selected.file_desc is None:
# This can happen when the thumbnail information isn't loaded yet.
# Just ignore the click for now.
# TODO: think of a way to handle this properly.
return {'RUNNING_MODAL'}
self.handle_item_selection(context, selected)
elif event.type in {'RIGHTMOUSE', 'ESC'}:
self._finish(context)
return {'CANCELLED'}
return {'RUNNING_MODAL'}
def check_credentials(self):
self._state = 'CHECKING_CREDENTIALS'
self.log.debug('Checking credentials')
self._new_async_task(self._check_credentials())
async def _check_credentials(self):
"""Checks credentials with Pillar, and if ok goes to the BROWSING state."""
try:
await pillar.check_pillar_credentials()
except pillar.CredentialsNotSyncedError:
self.log.info('Credentials not synced, re-syncing automatically.')
else:
self.log.info('Credentials okay, browsing assets.')
await self.async_download_previews()
return
try:
await pillar.refresh_pillar_credentials()
except pillar.UserNotLoggedInError:
self.error('User not logged in on Blender ID.')
else:
self.log.info('Credentials refreshed and ok, browsing assets.')
await self.async_download_previews()
return
raise pillar.UserNotLoggedInError()
# self._new_async_task(self._check_credentials())
def descend_node(self, node):
"""Descends the node hierarchy by visiting this node.
Also keeps track of the current node, so that we know where the "up" button should go.
"""
# Going up or down?
if self.path_stack and isinstance(node, UpNode):
self.log.debug('Going up, pop the stack; pre-pop stack is %r', self.path_stack)
node = self.path_stack.pop()
else:
# Going down, keep track of where we were (project top-level is None)
self.path_stack.append(self.node)
self.log.debug('Going down, push the stack; post-push stack is %r', self.path_stack)
# Set 'current' to the given node
self.node_uuid = node['_id'] if node else None
self.node = node
self.browse_assets()
def _stop_async_task(self):
self.log.debug('Stopping async task')
if self.async_task is None:
self.log.debug('No async task, trivially stopped')
return
# Signal that we want to stop.
self.async_task.cancel()
if not self.signalling_future.done():
self.log.info("Signalling that we want to cancel anything that's running.")
self.signalling_future.cancel()
# Wait until the asynchronous task is done.
if not self.async_task.done():
self.log.info("blocking until async task is done.")
loop = asyncio.get_event_loop()
try:
loop.run_until_complete(self.async_task)
except asyncio.CancelledError:
self.log.info('Asynchronous task was cancelled')
return
# noinspection PyBroadException
try:
self.async_task.result() # This re-raises any exception of the task.
except asyncio.CancelledError:
self.log.info('Asynchronous task was cancelled')
except Exception:
self.log.exception("Exception from asynchronous task")
def _finish(self, context):
self.log.debug('Finishing the modal operator')
self._stop_async_task()
self.clear_images()
context.space_data.draw_handler_remove(self._draw_handle, 'WINDOW')
context.window_manager.event_timer_remove(self.timer)
if self.maximized_area:
bpy.ops.screen.screen_full_area(use_hide_panels=True)
context.area.tag_redraw()
self.log.debug('Modal operator finished')
def clear_images(self):
"""Removes all images we loaded from Blender's memory."""
for image in bpy.data.images:
if image.filepath_raw not in self.loaded_images:
continue
image.user_clear()
bpy.data.images.remove(image)
self.loaded_images.clear()
self.current_display_content.clear()
def add_menu_item(self, *args) -> MenuItem:
menu_item = MenuItem(*args)
# Just make this thread-safe to be on the safe side.
with self._menu_item_lock:
self.current_display_content.append(menu_item)
self.loaded_images.add(menu_item.icon.filepath_raw)
return menu_item
def update_menu_item(self, node, *args) -> MenuItem:
node_uuid = node['_id']
# Just make this thread-safe to be on the safe side.
with self._menu_item_lock:
for menu_item in self.current_display_content:
if menu_item.node_uuid == node_uuid:
menu_item.update(node, *args)
self.loaded_images.add(menu_item.icon.filepath_raw)
break
else:
raise ValueError('Unable to find MenuItem(node_uuid=%r)' % node_uuid)
async def async_download_previews(self):
self._state = 'BROWSING'
thumbnails_directory = self.thumbnails_cache
self.log.info('Asynchronously downloading previews to %r', thumbnails_directory)
self.clear_images()
def thumbnail_loading(node, texture_node):
self.add_menu_item(node, None, 'SPINNER', texture_node['name'])
def thumbnail_loaded(node, file_desc, thumb_path):
self.update_menu_item(node, file_desc, thumb_path, file_desc['filename'])
# Download either by group_texture node UUID or by project UUID (which
# shows all top-level nodes)
if self.node_uuid:
self.log.debug('Getting subnodes for parent node %r', self.node_uuid)
children = await pillar.get_nodes(parent_node_uuid=self.node_uuid,
node_type='group_textures')
# Make sure we can go up again.
if self.path_stack:
self.add_menu_item(UpNode(), None, 'FOLDER', '.. up ..')
elif self.project_uuid:
self.log.debug('Getting subnodes for project node %r', self.project_uuid)
children = await pillar.get_nodes(self.project_uuid, '')
else:
# TODO: add "nothing here" icon and trigger re-draw
self.log.warning("Not node UUID and no project UUID, I can't do anything!")
return
# Download all child nodes
self.log.debug('Iterating over child nodes of %r', self.node_uuid)
for child in children:
# print(' - %(_id)s = %(name)s' % child)
self.add_menu_item(child, None, 'FOLDER', child['name'])
# There are only sub-nodes at the project level, no texture nodes,
# so we won't have to bother looking for textures.
if not self.node_uuid:
return
directory = os.path.join(thumbnails_directory, self.project_uuid, self.node_uuid)
os.makedirs(directory, exist_ok=True)
self.log.debug('Fetching texture thumbnails for node %r', self.node_uuid)
await pillar.fetch_texture_thumbs(self.node_uuid, 's', directory,
thumbnail_loading=thumbnail_loading,
thumbnail_loaded=thumbnail_loaded,
future=self.signalling_future)
def browse_assets(self):
self.log.debug('Browsing assets at project %r node %r', self.project_uuid, self.node_uuid)
self._new_async_task(self.async_download_previews())
def _new_async_task(self, async_task: asyncio.coroutine, future: asyncio.Future=None):
"""Stops the currently running async task, and starts another one."""
self.log.debug('Setting up a new task %r, so any existing task must be stopped', async_task)
self._stop_async_task()
# Download the previews asynchronously.
self.signalling_future = future or asyncio.Future()
self.async_task = asyncio.ensure_future(async_task)
self.log.debug('Created new task %r', self.async_task)
# Start the async manager so everything happens.
async_loop.ensure_async_loop()
def draw_menu(self, context):
"""Draws the GUI with OpenGL."""
drawers = {
'CHECKING_CREDENTIALS': self._draw_checking_credentials,
'BROWSING': self._draw_browser,
'DOWNLOADING_TEXTURE': self._draw_downloading,
'EXCEPTION': self._draw_exception,
}
if self._state in drawers:
drawer = drawers[self._state]
drawer(context)
# For debugging: draw the state
font_id = 0
bgl.glColor4f(1.0, 1.0, 1.0, 1.0)
blf.size(font_id, 20, 72)
blf.position(font_id, 5, 5, 0)
blf.draw(font_id, self._state)
bgl.glDisable(bgl.GL_BLEND)
@staticmethod
def _window_region(context):
window_regions = [region
for region in context.area.regions
if region.type == 'WINDOW']
return window_regions[0]
def _draw_browser(self, context):
"""OpenGL drawing code for the BROWSING state."""
margin_x = 5
margin_y = 5
padding_x = 5
window_region = self._window_region(context)
content_width = window_region.width - margin_x * 2
content_height = window_region.height - margin_y * 2
content_x = margin_x
content_y = context.area.height - margin_y - target_item_height
col_count = content_width // target_item_width
item_width = (content_width - (col_count * padding_x)) / col_count
item_height = target_item_height
block_width = item_width + padding_x
block_height = item_height + margin_y
bgl.glEnable(bgl.GL_BLEND)
bgl.glColor4f(0.0, 0.0, 0.0, 0.6)
bgl.glRectf(0, 0, window_region.width, window_region.height)
if self.current_display_content:
for item_idx, item in enumerate(self.current_display_content):
x = content_x + (item_idx % col_count) * block_width
y = content_y - (item_idx // col_count) * block_height
item.update_placement(x, y, item_width, item_height)
item.draw(highlighted=item.hits(self.mouse_x, self.mouse_y))
else:
font_id = 0
text = "Communicating with Blender Cloud"
bgl.glColor4f(1.0, 1.0, 1.0, 1.0)
blf.size(font_id, 20, 72)
text_width, text_height = blf.dimensions(font_id, text)
blf.position(font_id,
content_x + content_width * 0.5 - text_width * 0.5,
content_y - content_height * 0.3 + text_height * 0.5, 0)
blf.draw(font_id, text)
bgl.glDisable(bgl.GL_BLEND)
# bgl.glColor4f(0.0, 0.0, 0.0, 1.0)
def _draw_downloading(self, context):
"""OpenGL drawing code for the DOWNLOADING_TEXTURE state."""
self._draw_text_on_colour(context,
'Downloading texture from Blender Cloud',
(0.0, 0.0, 0.2, 0.6))
def _draw_checking_credentials(self, context):
"""OpenGL drawing code for the CHECKING_CREDENTIALS state."""
self._draw_text_on_colour(context,
'Checking login credentials',
(0.0, 0.0, 0.2, 0.6))
def _draw_text_on_colour(self, context, text, bgcolour):
content_height, content_width = self._window_size(context)
bgl.glEnable(bgl.GL_BLEND)
bgl.glColor4f(*bgcolour)
bgl.glRectf(0, 0, content_width, content_height)
font_id = 0
bgl.glColor4f(1.0, 1.0, 1.0, 1.0)
blf.size(font_id, 20, 72)
text_width, text_height = blf.dimensions(font_id, text)
blf.position(font_id,
content_width * 0.5 - text_width * 0.5,
content_height * 0.7 + text_height * 0.5, 0)
blf.draw(font_id, text)
bgl.glDisable(bgl.GL_BLEND)
def _window_size(self, context):
window_region = self._window_region(context)
content_width = window_region.width
content_height = window_region.height
return content_height, content_width
def _draw_exception(self, context):
"""OpenGL drawing code for the EXCEPTION state."""
import textwrap
content_height, content_width = self._window_size(context)
bgl.glEnable(bgl.GL_BLEND)
bgl.glColor4f(0.2, 0.0, 0.0, 0.6)
bgl.glRectf(0, 0, content_width, content_height)
font_id = 0
ex = self.async_task.exception()
if isinstance(ex, pillar.UserNotLoggedInError):
ex_msg = 'You are not logged in on Blender ID. Please log in at User Preferences, ' \
'System, Blender ID.'
else:
ex_msg = str(ex)
if not ex_msg:
ex_msg = str(type(ex))
text = "An error occurred:\n%s" % ex_msg
lines = textwrap.wrap(text)
bgl.glColor4f(1.0, 1.0, 1.0, 1.0)
blf.size(font_id, 20, 72)
_, text_height = blf.dimensions(font_id, 'yhBp')
def position(line_nr):
blf.position(font_id,
content_width * 0.1,
content_height * 0.8 - line_nr * text_height, 0)
for line_idx, line in enumerate(lines):
position(line_idx)
blf.draw(font_id, line)
bgl.glDisable(bgl.GL_BLEND)
def get_clicked(self) -> MenuItem:
for item in self.current_display_content:
if item.hits(self.mouse_x, self.mouse_y):
return item
return None
def handle_item_selection(self, context, item: MenuItem):
"""Called when the user clicks on a menu item that doesn't represent a folder."""
self.clear_images()
self._state = 'DOWNLOADING_TEXTURE'
node_path_components = [node['name'] for node in self.path_stack if node is not None]
local_path_components = [self.project_uuid] + node_path_components + [self.node['name']]
top_texture_directory = bpy.path.abspath(context.scene.local_texture_dir)
local_path = os.path.join(top_texture_directory, *local_path_components)
meta_path = os.path.join(top_texture_directory, '.blender_cloud')
self.log.info('Downloading texture %r to %s', item.node_uuid, local_path)
self.log.debug('Metadata will be stored at %s', meta_path)
file_paths = []
def texture_downloading(file_path, file_desc, *args):
self.log.info('Texture downloading to %s', file_path)
def texture_downloaded(file_path, file_desc, *args):
self.log.info('Texture downloaded to %r.', file_path)
image_dblock = bpy.data.images.load(filepath=file_path)
image_dblock['bcloud_file_uuid'] = file_desc['_id']
image_dblock['bcloud_texture_node_uuid'] = item.node_uuid
file_paths.append(file_path)
def texture_download_completed(_):
self.log.info('Texture download complete, inspect:\n%s', '\n'.join(file_paths))
self._state = 'QUIT'
signalling_future = asyncio.Future()
self._new_async_task(pillar.download_texture(item.node, local_path,
metadata_directory=meta_path,
texture_loading=texture_downloading,
texture_loaded=texture_downloaded,
future=signalling_future))
self.async_task.add_done_callback(texture_download_completed)
# store keymaps here to access after registration
addon_keymaps = []
def menu_draw(self, context):
layout = self.layout
layout.separator()
layout.operator(BlenderCloudBrowser.bl_idname, icon='MOD_SCREW')
def register():
bpy.utils.register_class(BlenderCloudBrowser)
# bpy.types.INFO_MT_mesh_add.append(menu_draw)
# handle the keymap
wm = bpy.context.window_manager
kc = wm.keyconfigs.addon
if not kc:
print('No addon key configuration space found, so no custom hotkeys added.')
return
km = kc.keymaps.new(name='Screen')
kmi = km.keymap_items.new('pillar.browser', 'A', 'PRESS', ctrl=True, shift=True, alt=True)
addon_keymaps.append((km, kmi))
def unregister():
bpy.utils.unregister_class(BlenderCloudBrowser)
# handle the keymap
for km, kmi in addon_keymaps:
km.keymap_items.remove(kmi)
addon_keymaps.clear()
if __name__ == "__main__":
register()
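# Illustrative note (not part of the original file): once registered, the browser can
# be started with the keymap defined above (Ctrl+Shift+Alt+A in the Screen keymap) or
# directly from Python:
#   bpy.ops.pillar.browser('INVOKE_DEFAULT')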

View File

@@ -0,0 +1,50 @@
# ##### BEGIN GPL LICENSE BLOCK #####
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
#
# ##### END GPL LICENSE BLOCK #####
import logging
import pillarsdk
from pillarsdk import exceptions as sdk_exceptions
from .pillar import pillar_call
log = logging.getLogger(__name__)
HOME_PROJECT_ENDPOINT = '/bcloud/home-project'
async def get_home_project(params=None) -> pillarsdk.Project:
"""Returns the home project."""
log.debug('Getting home project')
try:
return await pillar_call(pillarsdk.Project.find_from_endpoint,
HOME_PROJECT_ENDPOINT, params=params)
except sdk_exceptions.ForbiddenAccess:
log.warning('Access to the home project was denied. '
'Double-check that you are logged in with valid BlenderID credentials.')
raise
except sdk_exceptions.ResourceNotFound:
log.warning('No home project available.')
raise
async def get_home_project_id() -> str:
"""Returns just the ID of the home project."""
home_proj = await get_home_project({'projection': {'_id': 1}})
home_proj_id = home_proj['_id']
return home_proj_id
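# Illustrative sketch (not part of the original file): both coroutines are meant to be
# awaited on the addon's asyncio loop, e.g. from another async operator:
#   home_uuid = await home_project.get_home_project_id()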

Binary file not shown (image, 1.6 KiB).

View File

@@ -0,0 +1,338 @@
# ##### BEGIN GPL LICENSE BLOCK #####
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
#
# ##### END GPL LICENSE BLOCK #####
import logging
import os.path
import tempfile
import datetime
import bpy
import pillarsdk
from pillarsdk import exceptions as sdk_exceptions
from .pillar import pillar_call
from . import async_loop, pillar, home_project, blender
REQUIRES_ROLES_FOR_IMAGE_SHARING = {'subscriber', 'demo'}
IMAGE_SHARING_GROUP_NODE_NAME = 'Image sharing'
log = logging.getLogger(__name__)
async def find_image_sharing_group_id(home_project_id, user_id):
# Find the top-level image sharing group node.
try:
share_group, created = await pillar.find_or_create_node(
where={'project': home_project_id,
'node_type': 'group',
'parent': None,
'name': IMAGE_SHARING_GROUP_NODE_NAME},
additional_create_props={
'user': user_id,
'properties': {},
},
projection={'_id': 1},
may_create=True)
except pillar.PillarError:
log.exception('Pillar error caught')
raise pillar.PillarError('Unable to find image sharing folder on the Cloud')
return share_group['_id']
class PILLAR_OT_image_share(pillar.PillarOperatorMixin,
async_loop.AsyncModalOperatorMixin,
bpy.types.Operator):
bl_idname = 'pillar.image_share'
bl_label = 'Share an image/screenshot via Blender Cloud'
bl_description = 'Uploads an image for sharing via Blender Cloud'
log = logging.getLogger('bpy.ops.%s' % bl_idname)
home_project_id = None
home_project_url = 'home'
share_group_id = None # top-level share group node ID
user_id = None
target = bpy.props.EnumProperty(
items=[
('FILE', 'File', 'Share an image file'),
('DATABLOCK', 'Datablock', 'Share an image datablock'),
('SCREENSHOT', 'Screenshot', 'Share a screenshot'),
],
name='target',
default='SCREENSHOT')
name = bpy.props.StringProperty(name='name',
description='File or datablock name to sync')
screenshot_show_multiview = bpy.props.BoolProperty(
name='screenshot_show_multiview',
description='Enable Multi-View',
default=False)
screenshot_use_multiview = bpy.props.BoolProperty(
name='screenshot_use_multiview',
description='Use Multi-View',
default=False)
screenshot_full = bpy.props.BoolProperty(
name='screenshot_full',
description='Full Screen, Capture the whole window (otherwise only capture the active area)',
default=False)
def invoke(self, context, event):
# Do a quick test on datablock dirtiness. If it's not packed and dirty,
# the user should save it first.
if self.target == 'DATABLOCK':
if not self.name:
self.report({'ERROR'}, 'No name given for the datablock to share.')
return {'CANCELLED'}
datablock = bpy.data.images[self.name]
if datablock.type == 'IMAGE' and datablock.is_dirty and not datablock.packed_file:
self.report({'ERROR'}, 'Datablock is dirty, save it first.')
return {'CANCELLED'}
async_loop.AsyncModalOperatorMixin.invoke(self, context, event)
self.log.info('Starting sharing')
self._new_async_task(self.async_execute(context))
return {'RUNNING_MODAL'}
async def async_execute(self, context):
"""Entry point of the asynchronous operator."""
self.report({'INFO'}, 'Communicating with Blender Cloud')
try:
# Refresh credentials
try:
self.user_id = await self.check_credentials(context,
REQUIRES_ROLES_FOR_IMAGE_SHARING)
self.log.debug('Found user ID: %s', self.user_id)
except pillar.NotSubscribedToCloudError:
self.log.exception('User not subscribed to cloud.')
self.report({'ERROR'}, 'Please subscribe to the Blender Cloud.')
self._state = 'QUIT'
return
except pillar.CredentialsNotSyncedError:
self.log.exception('Error checking/refreshing credentials.')
self.report({'ERROR'}, 'Please log in on Blender ID first.')
self._state = 'QUIT'
return
# Find the home project.
try:
home_proj = await home_project.get_home_project({
'projection': {'_id': 1, 'url': 1}
})
except sdk_exceptions.ForbiddenAccess:
self.log.exception('Forbidden access to home project.')
self.report({'ERROR'}, 'Did not get access to home project.')
self._state = 'QUIT'
return
except sdk_exceptions.ResourceNotFound:
self.report({'ERROR'}, 'Home project not found.')
self._state = 'QUIT'
return
self.home_project_id = home_proj['_id']
self.home_project_url = home_proj['url']
try:
gid = await find_image_sharing_group_id(self.home_project_id,
self.user_id)
self.share_group_id = gid
self.log.debug('Found group node ID: %s', self.share_group_id)
except sdk_exceptions.ForbiddenAccess:
self.log.exception('Unable to find Group ID')
self.report({'ERROR'}, 'Unable to find sync folder.')
self._state = 'QUIT'
return
await self.share_image(context)
except Exception as ex:
self.log.exception('Unexpected exception caught.')
self.report({'ERROR'}, 'Unexpected error %s: %s' % (type(ex), ex))
self._state = 'QUIT'
async def share_image(self, context):
"""Sends files to the Pillar server."""
if self.target == 'FILE':
self.report({'INFO'}, "Uploading %s '%s'" % (self.target.lower(), self.name))
node = await self.upload_file(self.name)
elif self.target == 'SCREENSHOT':
node = await self.upload_screenshot(context)
else:
self.report({'INFO'}, "Uploading %s '%s'" % (self.target.lower(), self.name))
node = await self.upload_datablock(context)
self.report({'INFO'}, 'Upload complete, creating link to share.')
share_info = await pillar_call(node.share)
url = share_info.get('short_link')
context.window_manager.clipboard = url
self.report({'INFO'}, 'The link has been copied to your clipboard: %s' % url)
await self.maybe_open_browser(url)
async def upload_file(self, filename: str, fileobj=None) -> pillarsdk.Node:
"""Uploads a file to the cloud, attached to the image sharing node.
Returns the node.
"""
self.log.info('Uploading file %s', filename)
node = await pillar_call(pillarsdk.Node.create_asset_from_file,
self.home_project_id,
self.share_group_id,
'image',
filename,
extra_where={'user': self.user_id},
always_create_new_node=True,
fileobj=fileobj,
caching=False)
node_id = node['_id']
self.log.info('Created node %s', node_id)
self.report({'INFO'}, 'File successfully uploaded to the cloud!')
return node
async def maybe_open_browser(self, url):
prefs = blender.preferences()
if not prefs.open_browser_after_share:
return
import webbrowser
self.log.info('Opening browser at %s', url)
webbrowser.open_new_tab(url)
async def upload_datablock(self, context) -> pillarsdk.Node:
"""Saves a datablock to file if necessary, then upload.
Returns the node.
"""
self.log.info("Uploading datablock '%s'" % self.name)
datablock = bpy.data.images[self.name]
if datablock.type == 'RENDER_RESULT':
# Construct a sensible name for this render.
filename = '%s-%s-render%s' % (
os.path.splitext(os.path.basename(context.blend_data.filepath))[0],
context.scene.name,
context.scene.render.file_extension)
return await self.upload_via_tempdir(datablock, filename)
if datablock.packed_file is not None:
return await self.upload_packed_file(datablock)
if datablock.is_dirty:
# We can handle dirty datablocks like this if we want.
# However, I (Sybren) do NOT think it's a good idea to:
# - Share unsaved data to the cloud; users can assume it's saved
# to disk and close blender, losing their file.
# - Save unsaved data first; this can overwrite a file a user
# didn't want to overwrite.
filename = bpy.path.basename(datablock.filepath)
return await self.upload_via_tempdir(datablock, filename)
filepath = bpy.path.abspath(datablock.filepath)
return await self.upload_file(filepath)
async def upload_via_tempdir(self, datablock, filename_on_cloud) -> pillarsdk.Node:
"""Saves the datablock to file, and uploads it to the cloud.
Saving is done to a temporary directory, which is removed afterwards.
Returns the node.
"""
with tempfile.TemporaryDirectory() as tmpdir:
filepath = os.path.join(tmpdir, filename_on_cloud)
self.log.debug('Saving %s to %s', datablock, filepath)
datablock.save_render(filepath)
return await self.upload_file(filepath)
async def upload_packed_file(self, datablock) -> pillarsdk.Node:
"""Uploads a packed file directly from memory.
Returns the node.
"""
import io
filename = '%s.%s' % (datablock.name, datablock.file_format.lower())
fileobj = io.BytesIO(datablock.packed_file.data)
fileobj.seek(0) # ensure PillarSDK reads the file from the beginning.
self.log.info('Uploading packed file directly from memory to %r.', filename)
return await self.upload_file(filename, fileobj=fileobj)
async def upload_screenshot(self, context) -> pillarsdk.Node:
"""Takes a screenshot, saves it to a temp file, and uploads it."""
self.name = datetime.datetime.now().strftime('Screenshot-%Y-%m-%d-%H:%M:%S.png')
self.report({'INFO'}, "Uploading %s '%s'" % (self.target.lower(), self.name))
with tempfile.TemporaryDirectory() as tmpdir:
filepath = os.path.join(tmpdir, self.name)
self.log.debug('Saving screenshot to %s', filepath)
bpy.ops.screen.screenshot(filepath=filepath,
show_multiview=self.screenshot_show_multiview,
use_multiview=self.screenshot_use_multiview,
full=self.screenshot_full)
return await self.upload_file(filepath)
def image_editor_menu(self, context):
image = context.space_data.image
box = self.layout.row()
if image and image.has_data:
text = 'Share on Blender Cloud'
if image.type == 'IMAGE' and image.is_dirty and not image.packed_file:
box.enabled = False
text = 'Save image before sharing on Blender Cloud'
props = box.operator(PILLAR_OT_image_share.bl_idname, text=text,
icon_value=blender.icon('CLOUD'))
props.target = 'DATABLOCK'
props.name = image.name
def window_menu(self, context):
props = self.layout.operator(PILLAR_OT_image_share.bl_idname,
text='Share screenshot via Blender Cloud',
icon_value=blender.icon('CLOUD'))
props.target = 'SCREENSHOT'
props.screenshot_full = True
def register():
bpy.utils.register_class(PILLAR_OT_image_share)
bpy.types.IMAGE_MT_image.append(image_editor_menu)
bpy.types.INFO_MT_window.append(window_menu)
def unregister():
bpy.utils.unregister_class(PILLAR_OT_image_share)
bpy.types.IMAGE_MT_image.remove(image_editor_menu)
bpy.types.INFO_MT_window.remove(window_menu)
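# Illustrative note (not part of the original file): besides the Image and Window menu
# entries appended above, the operator can be invoked directly, e.g. to share an image
# datablock by name:
#   bpy.ops.pillar.image_share(target='DATABLOCK', name='my_texture.png')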

View File

@@ -1,9 +1,29 @@
# ##### BEGIN GPL LICENSE BLOCK #####
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
#
# ##### END GPL LICENSE BLOCK #####
import asyncio
+import datetime
import json
import os
import functools
import logging
from contextlib import closing, contextmanager
+import urllib.parse
import pathlib
import requests
@@ -16,8 +36,11 @@ from pillarsdk.utils import sanitize_filename
from . import cache
SUBCLIENT_ID = 'PILLAR'
+TEXTURE_NODE_TYPES = {'texture', 'hdri'}
+RFC1123_DATE_FORMAT = '%a, %d %b %Y %H:%M:%S GMT'
-_pillar_api = None  # will become a pillarsdk.Api object.
+_pillar_api = {}  # will become a mapping from bool (cached/non-cached) to pillarsdk.Api objects.
log = logging.getLogger(__name__)
uncached_session = requests.session()
_testing_blender_id_profile = None  # Just for testing, overrides what is returned by blender_id_profile.
@@ -31,14 +54,15 @@ class UserNotLoggedInError(RuntimeError):
""" """
def __str__(self): def __str__(self):
return 'UserNotLoggedInError' return self.__class__.__name__
class CredentialsNotSyncedError(UserNotLoggedInError): class CredentialsNotSyncedError(UserNotLoggedInError):
"""Raised when the user may be logged in on Blender ID, but has no Blender Cloud token.""" """Raised when the user may be logged in on Blender ID, but has no Blender Cloud token."""
def __str__(self):
return 'CredentialsNotSyncedError' class NotSubscribedToCloudError(UserNotLoggedInError):
"""Raised when the user may be logged in on Blender ID, but has no Blender Cloud token."""
class PillarError(RuntimeError): class PillarError(RuntimeError):
@@ -62,6 +86,8 @@ class CloudPath(pathlib.PurePosixPath):
@property
def project_uuid(self) -> str:
assert self.parts[0] == '/'
+if len(self.parts) <= 1:
+return None
return self.parts[1]
@property
@@ -71,11 +97,10 @@ class CloudPath(pathlib.PurePosixPath):
@property
def node_uuid(self) -> str:
-node_uuids = self.node_uuids
-if not node_uuids:
+if len(self.parts) <= 2:
return None
-return node_uuids[-1]
+return self.parts[-1]
@contextmanager
@@ -107,60 +132,87 @@ def blender_id_profile() -> 'blender_id.BlenderIdProfile':
return blender_id.get_active_profile()
+def blender_id_subclient() -> dict:
+"""Returns the subclient dict, containing the 'subclient_user_id' and 'token' keys."""
+profile = blender_id_profile()
+if not profile:
+raise UserNotLoggedInError()
+subclient = profile.subclients.get(SUBCLIENT_ID)
+if not subclient:
+raise CredentialsNotSyncedError()
+return subclient
-def pillar_api(pillar_endpoint: str = None) -> pillarsdk.Api:
+def pillar_api(pillar_endpoint: str = None, caching=True) -> pillarsdk.Api:
"""Returns the Pillar SDK API object for the current user.
The user must be logged in.
:param pillar_endpoint: URL of the Pillar server, for testing purposes. If not specified,
it will use the addon preferences.
+:param caching: whether to return a caching or non-caching API
"""
global _pillar_api
# Only return the Pillar API object if the user is still logged in.
-profile = blender_id_profile()
-if not profile:
-raise UserNotLoggedInError()
-subclient = profile.subclients.get(SUBCLIENT_ID)
-if not subclient:
-raise CredentialsNotSyncedError()
+subclient = blender_id_subclient()
-if _pillar_api is None:
+if not _pillar_api:
# Allow overriding the endpoint before importing Blender-specific stuff.
if pillar_endpoint is None:
from . import blender
pillar_endpoint = blender.preferences().pillar_server
-pillarsdk.Api.requests_session = cache.requests_session()
-_pillar_api = pillarsdk.Api(endpoint=pillar_endpoint,
-username=subclient['subclient_user_id'],
-password=SUBCLIENT_ID,
-token=subclient['token'])
-return _pillar_api
+_caching_api = pillarsdk.Api(endpoint=pillar_endpoint,
+username=subclient['subclient_user_id'],
+password=SUBCLIENT_ID,
+token=subclient['token'])
+_caching_api.requests_session = cache.requests_session()
+_noncaching_api = pillarsdk.Api(endpoint=pillar_endpoint,
+username=subclient['subclient_user_id'],
+password=SUBCLIENT_ID,
+token=subclient['token'])
+_noncaching_api.requests_session = uncached_session
+# Send the addon version as HTTP header.
+from blender_cloud import bl_info
+addon_version = '.'.join(str(v) for v in bl_info['version'])
+_caching_api.global_headers['Blender-Cloud-Addon'] = addon_version
+_noncaching_api.global_headers['Blender-Cloud-Addon'] = addon_version
+_pillar_api = {
+True: _caching_api,
+False: _noncaching_api,
+}
+return _pillar_api[caching]
# No more than this many Pillar calls should be made simultaneously
pillar_semaphore = asyncio.Semaphore(3)
-async def pillar_call(pillar_func, *args, **kwargs):
+async def pillar_call(pillar_func, *args, caching=True, **kwargs):
-partial = functools.partial(pillar_func, *args, api=pillar_api(), **kwargs)
+partial = functools.partial(pillar_func, *args, api=pillar_api(caching=caching), **kwargs)
loop = asyncio.get_event_loop()
async with pillar_semaphore:
return await loop.run_in_executor(None, partial)
-async def check_pillar_credentials():
+async def check_pillar_credentials(required_roles: set):
"""Tries to obtain the user at Pillar using the user's credentials.
+:param required_roles: set of roles to require -- having one of those is enough.
:raises UserNotLoggedInError: when the user is not logged in on Blender ID.
:raises CredentialsNotSyncedError: when the user is logged in on Blender ID but
doesn't have a valid subclient token for Pillar.
+:returns: the Pillar User ID of the current user.
"""
profile = blender_id_profile()
@@ -171,13 +223,28 @@ async def check_pillar_credentials():
if not subclient:
raise CredentialsNotSyncedError()
-try:
-await get_project_uuid('textures')  # Any query will do.
-except pillarsdk.UnauthorizedAccess:
-raise CredentialsNotSyncedError()
+pillar_user_id = subclient['subclient_user_id']
+if not pillar_user_id:
+raise CredentialsNotSyncedError()
+try:
+db_user = await pillar_call(pillarsdk.User.me)
+except (pillarsdk.UnauthorizedAccess, pillarsdk.ResourceNotFound, pillarsdk.ForbiddenAccess):
+raise CredentialsNotSyncedError()
+roles = db_user.roles or set()
+log.debug('User has roles %r', roles)
+if required_roles and not required_roles.intersection(set(roles)):
+# Delete the subclient info. This forces a re-check later, which can
+# then pick up on the user's new status.
+del profile.subclients[SUBCLIENT_ID]
+profile.save_json()
+raise NotSubscribedToCloudError()
+return pillar_user_id
-async def refresh_pillar_credentials():
+async def refresh_pillar_credentials(required_roles: set):
"""Refreshes the authentication token on Pillar.
:raises blender_id.BlenderIdCommError: when Blender ID refuses to send a token to Pillar.
@@ -193,11 +260,15 @@ async def refresh_pillar_credentials():
# Create a subclient token and send it to Pillar.
# May raise a blender_id.BlenderIdCommError
-blender_id.create_subclient_token(SUBCLIENT_ID, pillar_endpoint)
+try:
+blender_id.create_subclient_token(SUBCLIENT_ID, pillar_endpoint)
+except blender_id.communication.BlenderIdCommError as ex:
+log.warning("Unable to create authentication token: %s", ex)
+raise CredentialsNotSyncedError()
# Test the new URL
_pillar_api = None
-await get_project_uuid('textures')  # Any query will do.
+return await check_pillar_credentials(required_roles)
async def get_project_uuid(project_url: str) -> str:
@@ -217,7 +288,7 @@ async def get_project_uuid(project_url: str) -> str:
async def get_nodes(project_uuid: str = None, parent_node_uuid: str = None,
-node_type: str = None) -> list:
+node_type=None, max_results=None) -> list:
"""Gets nodes for either a project or given a parent node.
@param project_uuid: the UUID of the project, or None if only querying by parent_node_uuid.
@@ -242,16 +313,43 @@ async def get_nodes(project_uuid: str = None, parent_node_uuid: str = None,
where['project'] = project_uuid where['project'] = project_uuid
if node_type: if node_type:
where['node_type'] = node_type if isinstance(node_type, str):
where['node_type'] = node_type
else:
# Convert set & tuple to list
where['node_type'] = {'$in': list(node_type)}
params = {'projection': {'name': 1, 'parent': 1, 'node_type': 1, 'properties.order': 1,
'properties.status': 1, 'properties.files': 1,
'properties.content_type': 1, 'picture': 1},
'where': where,
'embed': ['parent']}
# Pagination
if max_results:
params['max_results'] = int(max_results)
children = await pillar_call(pillarsdk.Node.all, params)
return children['_items']
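# Usage sketch (illustrative, not part of the add-on source): the reworked
# get_nodes() accepts either a single node-type string or any iterable of type
# names, and max_results caps the number of returned documents. The project
# UUID below is a placeholder.
async def _example_get_nodes():
    project_uuid = '0123456789abcdef01234567'  # hypothetical project ID
    # Single node type, as before.
    groups = await get_nodes(project_uuid=project_uuid, node_type='group_texture')
    # Several node types at once, limited to the first 10 matches.
    assets = await get_nodes(project_uuid=project_uuid,
                             node_type={'texture', 'hdri'},
                             max_results=10)
    return groups, assets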
async def get_texture_projects(max_results=None) -> list:
"""Returns project dicts that contain textures."""
params = {}
# Pagination
if max_results:
params['max_results'] = int(max_results)
try:
children = await pillar_call(pillarsdk.Project.all_from_endpoint,
'/bcloud/texture-libraries',
params=params)
except pillarsdk.ResourceNotFound as ex:
log.warning('Unable to find texture projects: %s', ex)
raise PillarError('Unable to find texture projects: %s' % ex)
return children['_items']
@@ -367,7 +465,7 @@ async def fetch_thumbnail_info(file: pillarsdk.File, directory: str, desired_siz
finished.
"""
thumb_link = await pillar_call(file.thumbnail, desired_size)
if thumb_link is None:
raise ValueError("File {} has no thumbnail of size {}"
@@ -403,7 +501,7 @@ async def fetch_texture_thumbs(parent_node_uuid: str, desired_size: str,
# Download all texture nodes in parallel.
log.debug('Getting child nodes of node %r', parent_node_uuid)
texture_nodes = await get_nodes(parent_node_uuid=parent_node_uuid,
node_type=TEXTURE_NODE_TYPES)
if is_cancelled(future):
log.warning('fetch_texture_thumbs: Texture downloading cancelled')
@@ -429,7 +527,7 @@ async def download_texture_thumbnail(texture_node, desired_size: str,
thumbnail_loaded: callable,
future: asyncio.Future = None):
# Skip non-texture nodes, as we can't thumbnail them anyway.
if texture_node['node_type'] not in TEXTURE_NODE_TYPES:
return
if is_cancelled(future):
@@ -439,11 +537,26 @@ async def download_texture_thumbnail(texture_node, desired_size: str,
loop = asyncio.get_event_loop()
# Find out which file to use for the thumbnail picture.
pic_uuid = texture_node.picture
if not pic_uuid:
# Fall back to the first texture file, if it exists.
log.debug('Node %r does not have a picture, falling back to first file.',
texture_node['_id'])
files = texture_node.properties and texture_node.properties.files
if not files:
log.info('Node %r does not have a picture nor files, skipping.', texture_node['_id'])
return
pic_uuid = files[0].file
if not pic_uuid:
log.info('Node %r does not have a picture nor files, skipping.', texture_node['_id'])
return
# Load the File that belongs to this texture node's picture.
loop.call_soon_threadsafe(thumbnail_loading, texture_node, texture_node)
file_desc = await pillar_call(pillarsdk.File.find, pic_uuid, params={
'projection': {'filename': 1, 'variations': 1, 'width': 1, 'height': 1,
'length': 1},
})
if file_desc is None:
@@ -472,14 +585,80 @@ async def download_texture_thumbnail(texture_node, desired_size: str,
loop.call_soon_threadsafe(thumbnail_loaded, texture_node, file_desc, thumb_path)
async def fetch_node_files(node: pillarsdk.Node,
*,
file_doc_loading: callable,
file_doc_loaded: callable,
future: asyncio.Future = None):
"""Fetches all files of a texture/hdri node.
@param node: Node document to fetch all file docs for.
@param file_doc_loading: callback function that takes (file_id, ) parameters,
which is called before a file document will be downloaded. This allows you to
show a "downloading" indicator.
@param file_doc_loaded: callback function that takes (file_id, pillarsdk.File object)
parameters, which is called for every thumbnail after it's been downloaded.
@param future: Future that's inspected; if it is not None and cancelled, texture downloading
is aborted.
"""
# Download all thumbnails in parallel.
if is_cancelled(future):
log.warning('fetch_texture_thumbs: Texture downloading cancelled')
return
coros = (download_file_doc(file_ref.file,
file_doc_loading=file_doc_loading,
file_doc_loaded=file_doc_loaded,
future=future)
for file_ref in node.properties.files)
# raises any exception from failed handle_texture_node() calls.
await asyncio.gather(*coros)
log.info('fetch_node_files: Done downloading %i files', len(node.properties.files))
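# Illustrative way to drive fetch_node_files() from plain callbacks (the
# callback names and the 'hdri_node' variable are hypothetical; file_desc can
# be None when the File document was not found):
def _on_file_doc_loading(file_id):
    log.debug('loading file document %s', file_id)

def _on_file_doc_loaded(file_id, file_desc):
    log.debug('file document %s loaded: %s', file_id, file_desc)

# await fetch_node_files(hdri_node,
#                        file_doc_loading=_on_file_doc_loading,
#                        file_doc_loaded=_on_file_doc_loaded)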
async def download_file_doc(file_id,
*,
file_doc_loading: callable,
file_doc_loaded: callable,
future: asyncio.Future = None):
if is_cancelled(future):
log.debug('fetch_texture_thumbs cancelled before finding File for file_id %s', file_id)
return
loop = asyncio.get_event_loop()
# Load the File that belongs to this texture node's picture.
loop.call_soon_threadsafe(file_doc_loading, file_id)
file_desc = await pillar_call(pillarsdk.File.find, file_id, params={
'projection': {'filename': 1, 'variations': 1, 'width': 1, 'height': 1,
'length': 1},
})
if file_desc is None:
log.warning('Unable to find File for file_id %s', file_id)
loop.call_soon_threadsafe(file_doc_loaded, file_id, file_desc)
async def download_file_by_uuid(file_uuid,
target_directory: str,
metadata_directory: str,
*,
filename: str = None,
map_type: str = None,
file_loading: callable = None,
file_loaded: callable = None,
file_loaded_sync: callable = None,
future: asyncio.Future):
"""Downloads a file from Pillar by its UUID.
:param filename: overrules the filename in file_doc['filename'] if given.
The extension from file_doc['filename'] is still used, though.
"""
if is_cancelled(future):
log.debug('download_file_by_uuid(%r) cancelled.', file_uuid)
return
@@ -488,18 +667,27 @@ async def download_file_by_uuid(file_uuid,
# Find the File document.
file_desc = await pillar_call(pillarsdk.File.find, file_uuid, params={
'projection': {'link': 1, 'filename': 1, 'length': 1},
})
# Save the file document to disk
metadata_file = os.path.join(metadata_directory, 'files', '%s.json' % file_uuid)
save_as_json(file_desc, metadata_file)
# Let the caller override the filename root.
root, ext = os.path.splitext(file_desc['filename'])
if filename:
root, _ = os.path.splitext(filename)
if not map_type or root.endswith(map_type):
target_filename = '%s%s' % (root, ext)
else:
target_filename = '%s-%s%s' % (root, map_type, ext)
file_path = os.path.join(target_directory, sanitize_filename(target_filename))
file_url = file_desc['link']
# log.debug('Texture %r:\n%s', file_uuid, pprint.pformat(file_desc.to_dict()))
if file_loading is not None:
loop.call_soon_threadsafe(file_loading, file_path, file_desc, map_type)
# Cached headers are stored in the project space
header_store = os.path.join(metadata_directory, 'files',
@@ -507,7 +695,10 @@ async def download_file_by_uuid(file_uuid,
await download_to_file(file_url, file_path, header_store=header_store, future=future)
if file_loaded is not None:
loop.call_soon_threadsafe(file_loaded, file_path, file_desc, map_type)
if file_loaded_sync is not None:
await file_loaded_sync(file_path, file_desc, map_type)
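# Small sketch of the target-filename rule above, using hypothetical inputs;
# the helper below just mirrors the root/map_type/extension composition and is
# not part of the add-on:
def _compose_target_filename(original: str, filename: str = None, map_type: str = None) -> str:
    root, ext = os.path.splitext(original)
    if filename:
        root, _ = os.path.splitext(filename)
    if not map_type or root.endswith(map_type):
        return '%s%s' % (root, ext)
    return '%s-%s%s' % (root, map_type, ext)

# _compose_target_filename('stones_col.png', map_type='col')                     -> 'stones_col.png'
# _compose_target_filename('stones.png', map_type='nor')                         -> 'stones-nor.png'
# _compose_target_filename('4837.exr', 'studio.taken_from_file', map_type='1k')  -> 'studio-1k.exr'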
async def download_texture(texture_node,
@@ -517,23 +708,185 @@ async def download_texture(texture_node,
texture_loading: callable,
texture_loaded: callable,
future: asyncio.Future):
node_type_name = texture_node['node_type']
if node_type_name not in TEXTURE_NODE_TYPES:
raise TypeError("Node type should be in %r, not %r" %
(TEXTURE_NODE_TYPES, node_type_name))
filename = '%s.taken_from_file' % sanitize_filename(texture_node['name'])
# Download every file. Eve doesn't support embedding from a list-of-dicts.
downloaders = []
for file_info in texture_node['properties']['files']:
dlr = download_file_by_uuid(file_info['file'],
target_directory,
metadata_directory,
filename=filename,
map_type=file_info.map_type or file_info.resolution,
file_loading=texture_loading,
file_loaded=texture_loaded,
future=future)
downloaders.append(dlr)
return await asyncio.gather(*downloaders, return_exceptions=True)
async def upload_file(project_id: str, file_path: pathlib.Path, *,
future: asyncio.Future) -> str:
"""Uploads a file to the Blender Cloud, returning a file document ID."""
from .blender import PILLAR_SERVER_URL
loop = asyncio.get_event_loop()
url = urllib.parse.urljoin(PILLAR_SERVER_URL, '/storage/stream/%s' % project_id)
# Upload the file in a different thread.
def upload():
auth_token = blender_id_subclient()['token']
with file_path.open(mode='rb') as infile:
return uncached_session.post(url,
files={'file': infile},
auth=(auth_token, SUBCLIENT_ID))
# Check for cancellation even before we start our POST request
if is_cancelled(future):
log.debug('Uploading was cancelled before doing the POST')
raise asyncio.CancelledError('Uploading was cancelled')
log.debug('Performing POST %s', url)
response = await loop.run_in_executor(None, upload)
log.debug('Status %i from POST %s', response.status_code, url)
response.raise_for_status()
resp = response.json()
log.debug('Upload response: %s', resp)
try:
file_id = resp['file_id']
except KeyError:
log.error('No file ID in upload response: %s', resp)
raise PillarError('No file ID in upload response: %s' % resp)
log.info('Uploaded %s to file ID %s', file_path, file_id)
return file_id
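# Minimal calling sketch for upload_file(); the project ID and path are
# placeholders, and the future would normally be the operator's signalling
# future:
async def _example_upload(future: asyncio.Future = None):
    file_id = await upload_file('0123456789abcdef01234567',
                                pathlib.Path('/tmp/screenshot.png'),
                                future=future)
    return file_id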
def is_cancelled(future: asyncio.Future) -> bool:
# assert future is not None  # for debugging purposes.
cancelled = future is not None and future.cancelled()
return cancelled
class PillarOperatorMixin:
async def check_credentials(self, context, required_roles) -> bool:
"""Checks credentials with Pillar, and if ok returns the user ID.
Returns None if the user cannot be found, or if the user is not a Cloud subscriber.
"""
# self.report({'INFO'}, 'Checking Blender Cloud credentials')
try:
user_id = await check_pillar_credentials(required_roles)
except NotSubscribedToCloudError:
self._log_subscription_needed()
raise
except CredentialsNotSyncedError:
self.log.info('Credentials not synced, re-syncing automatically.')
else:
self.log.info('Credentials okay.')
return user_id
try:
user_id = await refresh_pillar_credentials(required_roles)
except NotSubscribedToCloudError:
self._log_subscription_needed()
raise
except UserNotLoggedInError:
self.log.error('User not logged in on Blender ID.')
else:
self.log.info('Credentials refreshed and ok.')
return user_id
return None
def _log_subscription_needed(self):
self.log.warning(
'Please subscribe to the blender cloud at https://cloud.blender.org/join')
self.report({'INFO'},
'Please subscribe to the blender cloud at https://cloud.blender.org/join')
async def find_or_create_node(where: dict,
additional_create_props: dict = None,
projection: dict = None,
may_create: bool = True) -> (pillarsdk.Node, bool):
"""Finds a node by the `filter_props`, creates it using the additional props.
:returns: tuple (node, created), where 'created' is a bool indicating whether
a new node was created, or an existing one is returned.
"""
params = {
'where': where,
}
if projection:
params['projection'] = projection
found_node = await pillar_call(pillarsdk.Node.find_first, params, caching=False)
if found_node is not None:
return found_node, False
if not may_create:
return None, False
# Augment the node properties to form a complete node.
node_props = where.copy()
if additional_create_props:
node_props.update(additional_create_props)
log.debug('Creating new node %s', node_props)
created_node = pillarsdk.Node.new(node_props)
created_ok = await pillar_call(created_node.create)
if not created_ok:
log.error('Blender Cloud addon: unable to create node on the Cloud.')
raise PillarError('Unable to create node on the Cloud')
return created_node, True
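# Sketch of typical use of find_or_create_node(): look a group node up by a
# unique field combination and create it with extra properties if allowed.
# The field values below are placeholders.
async def _example_sync_group(project_id: str, user_id: str):
    node, created = await find_or_create_node(
        where={'project': project_id,
               'node_type': 'group',
               'parent': None,
               'name': 'Blender Sync',
               'user': user_id},
        additional_create_props={'properties': {'status': 'published'}},
        projection={'_id': 1},
        may_create=True)
    return node, created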
async def attach_file_to_group(file_path: pathlib.Path,
home_project_id: str,
group_node_id: str,
user_id: str = None) -> pillarsdk.Node:
"""Creates an Asset node and attaches a file document to it."""
node = await pillar_call(pillarsdk.Node.create_asset_from_file,
home_project_id,
group_node_id,
'file',
str(file_path),
extra_where=user_id and {'user': user_id},
caching=False)
return node
def node_to_id(node: pillarsdk.Node) -> dict:
"""Converts a Node to a dict we can store in an ID property.
ID properties only support a handful of Python classes, so we have
to convert datetime.datetime to a string and remove None values.
"""
def to_rna(value):
if isinstance(value, dict):
return {k: to_rna(v) for k, v in value.items()}
if isinstance(value, datetime.datetime):
return value.strftime(RFC1123_DATE_FORMAT)
return value
as_dict = to_rna(node.to_dict())
return pillarsdk.utils.remove_none_attributes(as_dict)
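# Illustrative round trip through node_to_id(): datetimes come out as strings
# formatted with RFC1123_DATE_FORMAT and None values are dropped, so the result
# only contains ID-property-safe types. The node below is hypothetical.
_example_node = pillarsdk.Node.new({'name': 'startup.blend',
                                    '_updated': datetime.datetime(2016, 7, 27, 18, 38, 10),
                                    'description': None})
# node_to_id(_example_node) -> {'name': 'startup.blend', '_updated': '<RFC 1123 timestamp>'}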


@@ -0,0 +1,526 @@
# ##### BEGIN GPL LICENSE BLOCK #####
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
#
# ##### END GPL LICENSE BLOCK #####
"""Synchronises settings & startup file with the Cloud.
Caching is disabled on many PillarSDK calls, as synchronisation can happen
rapidly between multiple machines. This means that information can be outdated
in seconds, rather than the minutes the cache system assumes.
"""
import functools
import logging
import pathlib
import tempfile
import shutil
import bpy
import asyncio
import pillarsdk
from pillarsdk import exceptions as sdk_exceptions
from .pillar import pillar_call
from . import async_loop, pillar, cache, blendfile, home_project
SETTINGS_FILES_TO_UPLOAD = ['userpref.blend', 'startup.blend']
# These are RNA keys inside the userpref.blend file, and their
# Python properties names. These settings will not be synced.
LOCAL_SETTINGS_RNA = [
(b'dpi', 'system.dpi'),
(b'virtual_pixel', 'system.virtual_pixel_mode'),
(b'compute_device_id', 'system.compute_device'),
(b'compute_device_type', 'system.compute_device_type'),
(b'fontdir', 'filepaths.font_directory'),
(b'textudir', 'filepaths.texture_directory'),
(b'renderdir', 'filepaths.render_output_directory'),
(b'pythondir', 'filepaths.script_directory'),
(b'sounddir', 'filepaths.sound_directory'),
(b'tempdir', 'filepaths.temporary_directory'),
(b'render_cachedir', 'filepaths.render_cache_directory'),
(b'i18ndir', 'filepaths.i18n_branches_directory'),
(b'image_editor', 'filepaths.image_editor'),
(b'anim_player', 'filepaths.animation_player'),
]
REQUIRES_ROLES_FOR_SYNC = set() # no roles needed.
SYNC_GROUP_NODE_NAME = 'Blender Sync'
SYNC_GROUP_NODE_DESC = 'The [Blender Cloud Addon](https://cloud.blender.org/services' \
'#blender-addon) will synchronize your Blender settings here.'
log = logging.getLogger(__name__)
def set_blender_sync_status(set_status: str):
def decorator(func):
@functools.wraps(func)
def wrapper(*args, **kwargs):
bss = bpy.context.window_manager.blender_sync_status
bss.status = set_status
try:
return func(*args, **kwargs)
finally:
bss.status = 'IDLE'
return wrapper
return decorator
def async_set_blender_sync_status(set_status: str):
def decorator(func):
@functools.wraps(func)
async def wrapper(*args, **kwargs):
bss = bpy.context.window_manager.blender_sync_status
bss.status = set_status
try:
return await func(*args, **kwargs)
finally:
bss.status = 'IDLE'
return wrapper
return decorator
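# Usage sketch for the synchronous decorator (the async variant is applied to
# async_execute() further down); the wrapped function is illustrative and
# 'SYNCING' is one of the known status values:
@set_blender_sync_status('SYNCING')
def _example_blocking_sync():
    # While this runs, window_manager.blender_sync_status.status == 'SYNCING';
    # the finally clause in the wrapper resets it to 'IDLE' afterwards.
    pass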
async def find_sync_group_id(home_project_id: str,
user_id: str,
blender_version: str,
*,
may_create=True) -> str:
"""Finds the group node in which to store sync assets.
If the group node doesn't exist and may_create=True, it creates it.
"""
# Find the top-level sync group node. This should have been
# created by Pillar while creating the home project.
try:
sync_group, created = await pillar.find_or_create_node(
where={'project': home_project_id,
'node_type': 'group',
'parent': None,
'name': SYNC_GROUP_NODE_NAME,
'user': user_id},
projection={'_id': 1},
may_create=False)
except pillar.PillarError:
raise pillar.PillarError('Unable to find sync folder on the Cloud')
if not may_create and sync_group is None:
log.info("Sync folder doesn't exist, and not creating it either.")
return None, None
# Find/create the sub-group for the requested Blender version
try:
sub_sync_group, created = await pillar.find_or_create_node(
where={'project': home_project_id,
'node_type': 'group',
'parent': sync_group['_id'],
'name': blender_version,
'user': user_id},
additional_create_props={
'description': 'Sync folder for Blender %s' % blender_version,
'properties': {'status': 'published'},
},
projection={'_id': 1},
may_create=may_create)
except pillar.PillarError:
raise pillar.PillarError('Unable to create sync folder on the Cloud')
if not may_create and sub_sync_group is None:
log.info("Sync folder for Blender version %s doesn't exist, "
"and not creating it either.", blender_version)
return sync_group['_id'], None
return sync_group['_id'], sub_sync_group['_id']
@functools.lru_cache()
async def available_blender_versions(home_project_id: str, user_id: str) -> list:
bss = bpy.context.window_manager.blender_sync_status
# Get the available Blender versions.
sync_group = await pillar_call(
pillarsdk.Node.find_first,
params={
'where': {'project': home_project_id,
'node_type': 'group',
'parent': None,
'name': SYNC_GROUP_NODE_NAME,
'user': user_id},
'projection': {'_id': 1},
},
caching=False)
if sync_group is None:
bss.report({'ERROR'}, 'No synced Blender settings in your Blender Cloud')
log.debug('-- unable to find sync group for home_project_id=%r and user_id=%r',
home_project_id, user_id)
return []
sync_nodes = await pillar_call(
pillarsdk.Node.all,
params={
'where': {'project': home_project_id,
'node_type': 'group',
'parent': sync_group['_id'],
'user': user_id},
'projection': {'_id': 1, 'name': 1},
'sort': '-name',
},
caching=False)
if not sync_nodes or not sync_nodes._items:
bss.report({'ERROR'}, 'No synced Blender settings in your Blender Cloud.')
return []
versions = [node.name for node in sync_nodes._items]
log.debug('Versions: %s', versions)
return versions
# noinspection PyAttributeOutsideInit
class PILLAR_OT_sync(pillar.PillarOperatorMixin,
async_loop.AsyncModalOperatorMixin,
bpy.types.Operator):
bl_idname = 'pillar.sync'
bl_label = 'Synchronise with Blender Cloud'
bl_description = 'Synchronises Blender settings with Blender Cloud'
log = logging.getLogger('bpy.ops.%s' % bl_idname)
home_project_id = None
sync_group_id = None # top-level sync group node ID
sync_group_versioned_id = None # sync group node ID for the given Blender version.
action = bpy.props.EnumProperty(
items=[
('PUSH', 'Push', 'Push settings to the Blender Cloud'),
('PULL', 'Pull', 'Pull settings from the Blender Cloud'),
('REFRESH', 'Refresh', 'Refresh available versions'),
('SELECT', 'Select', 'Select version to sync'),
],
name='action')
CURRENT_BLENDER_VERSION = '%i.%i' % bpy.app.version[:2]
blender_version = bpy.props.StringProperty(name='blender_version',
description='Blender version to sync for',
default=CURRENT_BLENDER_VERSION)
def bss_report(self, level, message):
bss = bpy.context.window_manager.blender_sync_status
bss.report(level, message)
def invoke(self, context, event):
if self.action == 'SELECT':
# Synchronous action
return self.action_select(context)
if self.action in {'PUSH', 'PULL'} and not self.blender_version:
self.bss_report({'ERROR'}, 'No Blender version to sync for was given.')
return {'CANCELLED'}
async_loop.AsyncModalOperatorMixin.invoke(self, context, event)
self.log.info('Starting synchronisation')
self._new_async_task(self.async_execute(context))
return {'RUNNING_MODAL'}
def action_select(self, context):
"""Allows selection of the Blender version to use.
This is a synchronous action, as it requires a dialog box.
"""
self.log.info('Performing action SELECT')
# Do a refresh before we can show the dropdown.
fut = asyncio.ensure_future(self.async_execute(context, action_override='REFRESH'))
loop = asyncio.get_event_loop()
loop.run_until_complete(fut)
self._state = 'SELECTING'
return context.window_manager.invoke_props_dialog(self)
def draw(self, context):
bss = bpy.context.window_manager.blender_sync_status
self.layout.prop(bss, 'version', text='Blender version')
def execute(self, context):
if self.action != 'SELECT':
log.debug('Ignoring execute() for action %r', self.action)
return {'FINISHED'}
log.debug('Performing execute() for action %r', self.action)
# Perform the sync when the user closes the dialog box.
bss = bpy.context.window_manager.blender_sync_status
bpy.ops.pillar.sync('INVOKE_DEFAULT',
action='PULL',
blender_version=bss.version)
return {'FINISHED'}
@async_set_blender_sync_status('SYNCING')
async def async_execute(self, context, *, action_override=None):
"""Entry point of the asynchronous operator."""
action = action_override or self.action
self.bss_report({'INFO'}, 'Communicating with Blender Cloud')
self.log.info('Performing action %s', action)
try:
# Refresh credentials
try:
self.user_id = await self.check_credentials(context, REQUIRES_ROLES_FOR_SYNC)
log.debug('Found user ID: %s', self.user_id)
except pillar.NotSubscribedToCloudError:
self.log.exception('User not subscribed to cloud.')
self.bss_report({'SUBSCRIBE'}, 'Please subscribe to the Blender Cloud.')
self._state = 'QUIT'
return
except pillar.CredentialsNotSyncedError:
self.log.exception('Error checking/refreshing credentials.')
self.bss_report({'ERROR'}, 'Please log in on Blender ID first.')
self._state = 'QUIT'
return
# Find the home project.
try:
self.home_project_id = await home_project.get_home_project_id()
except sdk_exceptions.ForbiddenAccess:
self.log.exception('Forbidden access to home project.')
self.bss_report({'ERROR'}, 'Did not get access to home project.')
self._state = 'QUIT'
return
except sdk_exceptions.ResourceNotFound:
self.bss_report({'ERROR'}, 'Home project not found.')
self._state = 'QUIT'
return
# Only create the folder structure if we're pushing.
may_create = self.action == 'PUSH'
try:
gid, subgid = await find_sync_group_id(self.home_project_id,
self.user_id,
self.blender_version,
may_create=may_create)
self.sync_group_id = gid
self.sync_group_versioned_id = subgid
self.log.debug('Found top-level group node ID: %s', self.sync_group_id)
self.log.debug('Found group node ID for %s: %s',
self.blender_version, self.sync_group_versioned_id)
except sdk_exceptions.ForbiddenAccess:
self.log.exception('Unable to find Group ID')
self.bss_report({'ERROR'}, 'Unable to find sync folder.')
self._state = 'QUIT'
return
# Perform the requested action.
action_method = {
'PUSH': self.action_push,
'PULL': self.action_pull,
'REFRESH': self.action_refresh,
}[action]
await action_method(context)
except Exception as ex:
self.log.exception('Unexpected exception caught.')
self.bss_report({'ERROR'}, 'Unexpected error: %s' % ex)
self._state = 'QUIT'
async def action_push(self, context):
"""Sends files to the Pillar server."""
self.log.info('Saved user preferences to disk before pushing to cloud.')
bpy.ops.wm.save_userpref()
config_dir = pathlib.Path(bpy.utils.user_resource('CONFIG'))
for fname in SETTINGS_FILES_TO_UPLOAD:
path = config_dir / fname
if not path.exists():
self.log.debug('Skipping non-existing %s', path)
continue
if self.signalling_future.cancelled():
self.bss_report({'WARNING'}, 'Upload aborted.')
return
self.bss_report({'INFO'}, 'Uploading %s' % fname)
try:
await pillar.attach_file_to_group(path,
self.home_project_id,
self.sync_group_versioned_id,
self.user_id)
except sdk_exceptions.RequestEntityTooLarge as ex:
self.log.error('File too big to upload: %s' % ex)
self.log.error('To upload larger files, please subscribe to Blender Cloud.')
self.bss_report({'SUBSCRIBE'}, 'File %s too big to upload. '
'Subscribe for unlimited space.' % fname)
self._state = 'QUIT'
return
await self.action_refresh(context)
# After pushing, change the 'pull' version to the current version of Blender.
# Or to the latest version, if by some mistake somewhere the current push
# isn't available after all.
bss = bpy.context.window_manager.blender_sync_status
if self.CURRENT_BLENDER_VERSION in bss.available_blender_versions:
bss.version = self.CURRENT_BLENDER_VERSION
else:
bss.version = max(bss.available_blender_versions)
self.bss_report({'INFO'}, 'Settings pushed to Blender Cloud.')
async def action_pull(self, context):
"""Loads files from the Pillar server."""
# If the sync group node doesn't exist, offer a list of groups that do.
if self.sync_group_id is None:
self.bss_report({'ERROR'},
'There are no synced Blender settings in your Blender Cloud.')
return
if self.sync_group_versioned_id is None:
self.bss_report({'ERROR'}, 'There are no synced Blender settings for version %s' %
self.blender_version)
return
self.bss_report({'INFO'}, 'Pulling settings from Blender Cloud')
with tempfile.TemporaryDirectory(prefix='bcloud-sync') as tempdir:
for fname in SETTINGS_FILES_TO_UPLOAD:
await self.download_settings_file(fname, tempdir)
self.bss_report({'WARNING'}, 'Settings pulled from Cloud, restart Blender to load them.')
async def action_refresh(self, context):
self.bss_report({'INFO'}, 'Refreshing available Blender versions.')
# Clear the LRU cache of available_blender_versions so that we can
# obtain new versions (if someone synced from somewhere else, for example)
available_blender_versions.cache_clear()
versions = await available_blender_versions(self.home_project_id, self.user_id)
bss = bpy.context.window_manager.blender_sync_status
bss.available_blender_versions = versions
if versions:
# There are versions to sync, so we can remove the status message.
# However, if there aren't any, the status message shows why, and
# shouldn't be erased.
self.bss_report({'INFO'}, '')
async def download_settings_file(self, fname: str, temp_dir: str):
config_dir = pathlib.Path(bpy.utils.user_resource('CONFIG'))
meta_path = cache.cache_directory('home-project', 'blender-sync')
self.bss_report({'INFO'}, 'Downloading %s from Cloud' % fname)
# Get the asset node
node_props = {'project': self.home_project_id,
'node_type': 'asset',
'parent': self.sync_group_versioned_id,
'name': fname}
node = await pillar_call(pillarsdk.Node.find_first, {
'where': node_props,
'projection': {'_id': 1, 'properties.file': 1}
}, caching=False)
if node is None:
self.bss_report({'INFO'}, 'Unable to find %s on Blender Cloud' % fname)
self.log.info('Unable to find node on Blender Cloud for %s', fname)
return
async def file_downloaded(file_path: str, file_desc: pillarsdk.File, map_type: str):
# Allow the caller to adjust the file before we move it into place.
if fname.lower() == 'userpref.blend':
await self.update_userpref_blend(file_path)
# Move the file next to the final location; as it may be on a
# different filesystem than the temporary directory, this can
# fail, and we don't want to destroy the existing file.
local_temp = config_dir / (fname + '~')
local_final = config_dir / fname
# Make a backup copy of the file as it was before pulling.
if local_final.exists():
local_bak = config_dir / (fname + '-pre-bcloud-pull')
self.move_file(local_final, local_bak)
self.move_file(file_path, local_temp)
self.move_file(local_temp, local_final)
file_id = node.properties.file
await pillar.download_file_by_uuid(file_id,
temp_dir,
str(meta_path),
file_loaded_sync=file_downloaded,
future=self.signalling_future)
def move_file(self, src, dst):
self.log.info('Moving %s to %s', src, dst)
shutil.move(str(src), str(dst))
async def update_userpref_blend(self, file_path: str):
self.log.info('Overriding machine-local settings in %s', file_path)
# Remember some settings that should not be overwritten from the Cloud.
up = bpy.context.user_preferences
remembered = {}
for rna_key, python_key in LOCAL_SETTINGS_RNA:
assert '.' in python_key, 'Sorry, this code assumes there is a dot in the Python key'
try:
value = up.path_resolve(python_key)
except ValueError:
# Setting doesn't exist. This can happen, for example Cycles
# settings on a build that doesn't have Cycles enabled.
continue
# Map enums from strings (in Python) to ints (in DNA).
dot_index = python_key.rindex('.')
parent_key, prop_key = python_key[:dot_index], python_key[dot_index + 1:]
parent = up.path_resolve(parent_key)
prop = parent.bl_rna.properties[prop_key]
if prop.type == 'ENUM':
log.debug('Rewriting %s from %r to %r',
python_key, value, prop.enum_items[value].value)
value = prop.enum_items[value].value
else:
log.debug('Keeping value of %s: %r', python_key, value)
remembered[rna_key] = value
log.debug('Overriding values: %s', remembered)
# Rewrite the userprefs.blend file to override the options.
with blendfile.open_blend(file_path, 'rb+') as blend:
prefs = next(block for block in blend.blocks
if block.code == b'USER')
for key, value in remembered.items():
self.log.debug('prefs[%r] = %r' % (key, prefs[key]))
self.log.debug(' -> setting prefs[%r] = %r' % (key, value))
prefs[key] = value
def register():
bpy.utils.register_class(PILLAR_OT_sync)
def unregister():
bpy.utils.unregister_class(PILLAR_OT_sync)

File diff suppressed because it is too large

blender_cloud/utils.py Normal file

@@ -0,0 +1,31 @@
# ##### BEGIN GPL LICENSE BLOCK #####
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
#
# ##### END GPL LICENSE BLOCK #####
def sizeof_fmt(num: int, suffix='B') -> str:
"""Returns a human-readable size.
Source: http://stackoverflow.com/a/1094933/875379
"""
for unit in ['', 'Ki', 'Mi', 'Gi', 'Ti', 'Pi', 'Ei', 'Zi']:
if abs(num) < 1024:
return '%.1f %s%s' % (num, unit, suffix)
num /= 1024
return '%.1f Yi%s' % (num, suffix)
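# Quick illustration of the helper (values follow from the loop above):
assert sizeof_fmt(0) == '0.0 B'
assert sizeof_fmt(2048) == '2.0 KiB'
assert sizeof_fmt(5 * 1024 ** 3) == '5.0 GiB'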


@@ -1,3 +1,21 @@
# ##### BEGIN GPL LICENSE BLOCK #####
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
#
# ##### END GPL LICENSE BLOCK #####
"""External dependencies loader.""" """External dependencies loader."""
import glob import glob
@@ -18,8 +36,9 @@ def load_wheel(module_name, fname_prefix):
try:
module = __import__(module_name)
except ImportError as ex:
log.debug('Unable to import %s directly, will try wheel: %s',
module_name, ex)
else:
log.debug('Was able to load %s from %s, no need to load wheel %s',
module_name, module.__file__, fname_prefix)
@@ -30,7 +49,9 @@ def load_wheel(module_name, fname_prefix):
if not wheels:
raise RuntimeError('Unable to find wheel at %r' % path_pattern)
# If there are multiple wheels that match, load the latest one.
wheels.sort()
sys.path.append(wheels[-1])
module = __import__(module_name)
log.debug('Loaded %s from %s', module_name, module.__file__)
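# Note: picking wheels[-1] after sort() relies on lexicographic ordering of the
# filenames, which is fine for the single-digit version components used here;
# a hypothetical illustration:
_example_wheels = ['pillarsdk-1.4.0-py3-none-any.whl', 'pillarsdk-1.5.0-py3-none-any.whl']
_example_wheels.sort()
assert _example_wheels[-1] == 'pillarsdk-1.5.0-py3-none-any.whl'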

clear_wheels.sh Executable file

@@ -0,0 +1,8 @@
#!/bin/bash
git clean -n -d -X blender_cloud/wheels/
echo "Press [ENTER] to actually delete those files."
read dummy
git clean -f -d -X blender_cloud/wheels/


@@ -1,7 +1,7 @@
# Primary requirements:
-e git+https://github.com/sybrenstuvel/cachecontrol.git@sybren-filecache-delete-crash-fix#egg=CacheControl
lockfile==0.12.2
pillarsdk==1.5.0
wheel==0.29.0
# Secondary requirements:


@@ -1,4 +1,22 @@
#!/usr/bin/env python3
# ##### BEGIN GPL LICENSE BLOCK #####
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
#
# ##### END GPL LICENSE BLOCK #####
import glob
import sys
import shutil
@@ -14,6 +32,7 @@ from distutils.command.install_egg_info import install_egg_info
from setuptools import setup, find_packages
requirement_re = re.compile('[><=]+')
sys.dont_write_bytecode = True
def set_default_path(var, default):
@@ -83,9 +102,13 @@ class BuildWheels(Command):
# Build CacheControl.
if not list(self.wheels_path.glob('CacheControl*.whl')):
log.info('Building CacheControl in %s', self.cachecontrol_path)
# self.git_clone(self.cachecontrol_path,
# 'https://github.com/ionrock/cachecontrol.git',
# 'v%s' % requirements['CacheControl'][1])
# FIXME: we need my clone until pull request #125 has been merged & released
self.git_clone(self.cachecontrol_path,
'https://github.com/sybrenstuvel/cachecontrol.git',
'sybren-filecache-delete-crash-fix')
self.build_copy_wheel(self.cachecontrol_path)
# Ensure that the wheels are added to the data files.
@@ -173,7 +196,7 @@ setup(
'wheels': BuildWheels},
name='blender_cloud',
description='The Blender Cloud addon allows browsing the Blender Cloud from Blender.',
version='1.4.1',
author='Sybren A. Stüvel',
author_email='sybren@stuvel.eu',
packages=find_packages('.'),