672 Commits

Author SHA1 Message Date
Anna Sirota b240481789 Forgot to update poetry.lock 2021-03-18 18:09:59 +01:00
Anna Sirota 866f3aafa3 Sort deps, pin setuptools and wheel 2021-03-18 17:34:26 +01:00
Anna Sirota fa6d6acfde Also install setuptools and wheel 2021-03-18 13:55:38 +01:00
Anna Sirota b255d4c54b Install the same version of cryptography in buildwheels image 2021-03-18 13:30:36 +01:00
Anna Sirota a04bb4c8ce Remove mention of DEPLOY_BRANCH, which is not actually used anywhere 2021-03-18 12:21:13 +01:00
Anna Sirota 78c2dfb6ab Pin everything, including cryptography 2021-03-18 11:32:05 +01:00
Anna Sirota a12ea33cbe Revert everything except the pip fix and the branch flag 2021-03-18 10:52:16 +01:00
Anna Sirota d2bf1d8d4f Allow using a different branch to avoid having to create a branch for **each** of the deps 2021-03-17 18:23:17 +01:00
Anna Sirota 53d60969e0 Echo all scripts in the wheelbuilder 2021-03-17 16:31:06 +01:00
Anna Sirota a458f4153e Try using CRYPTOGRAPHY_DONT_BUILD_RUST again 2021-03-17 16:09:24 +01:00
Anna Sirota a352eae35f Hopefully install pip, setuptools and wheel in each base image 2021-03-17 15:44:55 +01:00
Anna Sirota 8a8d3f6d64 Install the right version of cryptography in another build.sh 2021-03-16 18:24:20 +01:00
Anna Sirota 6a04549b95 Add rustc to all base images 2021-03-16 18:00:34 +01:00
Anna Sirota 2806244dd3 Use CRYPTOGRAPHY_DONT_BUILD_RUST=1 2021-03-12 11:27:58 +01:00
Anna Sirota 539b2932ea Or just give up and install rustc already 2021-03-12 11:04:03 +01:00
Anna Sirota 0f5b2a4af2 Another pip install 2021-03-12 10:34:32 +01:00
Anna Sirota 815b978b75 Forgot to update poetry.lock 2021-03-12 10:15:30 +01:00
Anna Sirota ff654dcc4b Another try 2021-03-12 09:57:59 +01:00
Anna Sirota 7143b46891 Try to work around Rust cryptography issue 2021-03-09 11:16:30 +01:00
81c5687f02 Deploy: Pin poetry version to 1.0
It looks like more recent versions of poetry do not handle nested
local package deps well.
https://github.com/python-poetry/poetry/issues/3098
2021-01-20 11:08:23 +01:00
Anna Sirota 6b835e74f1 Handle date_deletion_requested in user-modified webhook D10139 2021-01-20 10:43:11 +01:00
b6127c736d Update gulp-sass 2020-07-23 18:43:17 +02:00
7280f1dbc1 Update poetry.lock 2020-07-23 18:43:01 +02:00
28ee78ec02 Learn: New featured projects 2020-07-23 12:20:05 +02:00
96a54695af Homepage: New featured projects 2020-07-23 12:19:37 +02:00
cbdcd04423 Fixed copy-paste bug in remoteip.conf 2020-04-17 15:15:16 +02:00
c618e6cf17 Fixed typo in Dockerfile 2020-04-17 14:09:43 +02:00
b8defe329e Apache: enabled & configured mod_remoteip
This module makes it possible to do access control & logging based on the
client's real IP address, rather than the internal IP address of HAProxy.
2020-04-17 11:38:53 +02:00
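(For context on the commit above: a minimal mod_remoteip setup of this kind could look like the sketch below. The header name matches what HAProxy typically forwards; the internal proxy range and log format are assumptions, not taken from the actual remoteip.conf.)

    # Take the real client address from the header set by HAProxy
    RemoteIPHeader X-Forwarded-For
    # Only trust that header when the request comes from the internal proxy
    RemoteIPInternalProxy 172.16.0.0/12
    # Log %a (the resolved client address) instead of %h
    LogFormat "%a %l %u %t \"%r\" %>s %b" common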
cf887d8f5f Homepage: Fix project name
From workshop to workflow
2020-04-10 10:09:53 +02:00
11effcb580 ToS: Update base pricing
Closes T74691.
2020-04-09 20:08:51 +02:00
35ba45445f ToS: Update Blender Institute Address 2020-04-09 20:08:07 +02:00
6c720b7b08 Homepage: Update banners 2020-04-09 20:02:03 +02:00
344a66f0eb Update package-lock.json 2020-04-09 16:28:19 +02:00
941ed4a0e0 Deploy: Add redirect for coffee-run and settlers 2020-04-09 16:28:19 +02:00
acfffce48d Configured Poetry to not use virtualenvs in ./.venv
Having a virtualenv in `.venv` is very convenient because many tools
automatically pick up on it. However, the virtualenv then also gets created
during the construction of the Docker images, which subsequently breaks the build.

Until a proper fix is found, it's easiest to just put the virtualenv
outside of the project.
2020-03-19 17:42:45 +01:00
b6b097483a Update pages with assets featuring latest content 2019-11-14 12:12:50 +01:00
b76be2a7ba Add /design-system endpoint
This is where the representation of the design system will reside.
When the application runs in production (with DEBUG = False), the URL
will return 404.
2019-11-13 18:47:28 +01:00
c168c09293 Tweak to gulp all command
First run gulp in pillar and other dependencies, then run gulp in the
current repo.
2019-11-13 10:43:25 +01:00
db74c89e6f UI Libraries Template: Wrong link to characters project.
Thanks @kednar for the report!
2019-07-26 12:27:32 +02:00
23d7e50df2 Fix link in HDRi section 2019-06-20 19:41:04 +02:00
a8afccba00 Fix T65655 2019-06-10 17:51:55 +02:00
24118b6777 Re-locked dependencies 2019-05-31 17:05:05 +02:00
76825fda39 Render avatar of current user using Vue.js
Requires Pillar 47474ac936ffb1d179161c8a3cac5d20e6005659
2019-05-31 17:05:05 +02:00
d49f69ecbd Upgraded Gulp 3.9 → 4.0 and removed gulp-livereload 2019-05-31 12:27:35 +02:00
c91a52046d Fixed deprecation warning from WTForms 2019-05-29 16:40:51 +02:00
577bf8b964 MongoDB: fixed deprecation warnings
- collection.count() → either counting the result or using count_documents()
- collection.update() → replaced by update_one()
2019-05-29 16:40:51 +02:00
049d71dc77 UnitTest.assertEquals → assertEqual 2019-05-29 16:40:51 +02:00
2075c8a790 Re-locked dependencies 2019-05-29 16:40:51 +02:00
2d6b5d4b67 Webhook: Update users' avatars with Celery task when changed on Blender ID 2019-05-28 16:19:01 +02:00
5f497fc645 Docker-compose: Upgraded Mongo 3.4.2 → 3.4 (so latest micro in 3.4.x)
This currently upgrades to 3.4.20
2019-05-28 16:18:27 +02:00
a321d3501a Docker-compose: upgraded Redis 3.2.8 → 5.0 2019-05-28 16:18:27 +02:00
d2d4a52846 Re-locked dependencies after Pillar updated deps 2019-05-28 16:18:27 +02:00
ae176cbbdf Re-locked dependencies 2019-05-23 13:54:59 +02:00
d5f5b63b3f Werkzeug update 0.15.2 → 0.15.4 2019-05-22 10:33:23 +02:00
274766d6f4 Added little note about rerunning poetry update after dependencies changed 2019-05-14 12:02:14 +02:00
e74d573063 Re-locked dependencies 2019-05-14 11:34:38 +02:00
f514cc4176 README: documented use of Poetry 2019-05-14 10:36:15 +02:00
3d567ff6f8 Docker: use variables instead of hard-coded stuff
WHEELHOUSE: since we're defining the variable we might as well use it.
DOCKER_IMAGE_NAME: introduced to prevent duplications of the name, and to
    add a little confirmation message when the script is done.
2019-05-14 10:36:15 +02:00
ba69dd46a0 Staging: be more selective about which branch of pillar-python-sdk to use
Because pillar-python-sdk doesn't have a `production` branch, it was always
using `master`. Now it's only using `master` if `STAGING_BRANCH`=`production`.
2019-05-14 10:36:15 +02:00
cfbb3d7e5a Poetry'ising the docker stuff 2019-05-14 10:36:15 +02:00
c9bbf26a71 Moved to Poetry 2019-05-14 10:36:15 +02:00
35675866ee Build our own HAProxy docker image
The HAProxy docker image we were using is no longer maintained (hasn't been
for years), but is built upon Alpine Linux, which has a serious security
vulnerability:
https://talosintelligence.com/vulnerability_reports/TALOS-2019-0782

The vulnerability is fixed in this build of the docker image, but we should
move to something else (like Træfik).
2019-05-09 14:12:02 +02:00
d813935f43 Fixed unittest
Broke in 468fc85751
2019-04-26 12:53:26 +02:00
947dab3185 Use 16_9 picture for project thumbnail
This allows us to use picture_header as an actual header from now on.
2019-04-19 13:00:04 +02:00
c0cb80ceec Use absolute url of Open Graph image links 2019-04-19 12:54:22 +02:00
48c8f79371 Use _opengraph macro in landing.pug 2019-04-19 12:53:58 +02:00
53b22641f2 Improve readability of _opengraph macro 2019-04-19 12:53:24 +02:00
4f9699c7ae Remove 16_9 image from extension props
This property is now available on Project level.
2019-04-19 12:52:47 +02:00
a04e62e3e9 Rename project_type to category in Project
Requires renaming custom_props.cloud.project_type fields to
custom_props.cloud.category in all documents of the projects
collection.
2019-04-19 11:13:31 +02:00
5df18b670a Display field description if available 2019-04-19 10:43:38 +02:00
a5cd12ad87 Remove unneeded if statement
When rendering this template we do not provide the hidden_fields
list (this code was partially copied from project edit.pug).
2019-04-19 10:43:16 +02:00
2c51407196 Fix typo 2019-04-19 10:06:15 +02:00
0e0716449a UI Footer: Add link to Films. 2019-04-18 15:33:38 +02:00
f0c4f1576c UI Footer: Rename links to sections.
LEARN -> TRAINING
RESOURCES -> CLOUD
2019-04-18 15:33:23 +02:00
42edd9486b UI Footer: Fix link to YouTube 2019-04-18 15:32:26 +02:00
8534cdbaeb Services: Use 16_9 image for opengraph. 2019-04-18 14:45:55 +02:00
7209a3c525 UI Homepage: Three cards for featured projects. 2019-04-15 12:46:31 +02:00
81a564a9d9 UI Learn: swap thumbnails and link to asset/project in courses and workshops. 2019-04-12 17:32:49 +02:00
1c47197fd2 UI Learn: tweak in wording. 2019-04-12 17:32:00 +02:00
ea95e7b2b2 UI Learn: Minor layout adjustment. 2019-04-12 17:31:38 +02:00
be28b2d13d UI Learn: add quick links to 3 items per category. 2019-04-12 17:31:16 +02:00
ea9da2acdb UI Libraries: swap thumbnail and link to asset. 2019-04-12 17:30:27 +02:00
c66592a5ae Libraries: cleanup leftover. 2019-04-12 17:29:54 +02:00
1e3041d997 UI Libraries: Remove hands-on section. 2019-04-12 17:29:17 +02:00
f5db3c8da2 UI Libraries: Layout adjustments. 2019-04-12 17:29:04 +02:00
224e4dc1e0 UI Libraries: wording tweaks. 2019-04-12 17:28:41 +02:00
c1e52ae320 UI Libraries: Swap Textures for HDR Images 2019-04-12 17:28:19 +02:00
b8be19c729 UI Libraries: Add quick links to 3 items. 2019-04-12 17:27:28 +02:00
51d081971c Libraries: cleanup unused scripts. 2019-04-12 17:26:02 +02:00
b438c319b0 UI: Layout adjustments to category_list components. 2019-04-12 17:22:55 +02:00
87bc2b5378 Utility for marking the first item in a list as 'new'.
The span element of the first child will include a 'new' label on it.

Usage: add the class 'list-first-new' to a list.
2019-04-12 17:21:47 +02:00
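(Illustration of the usage note in the commit above; the list contents are made up, only the 'list-first-new' class comes from the commit.)

    <ul class="list-first-new">
      <li><span>Newest item (its span gets the 'new' label)</span></li>
      <li><span>Older item</span></li>
    </ul>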
c3261ed83a New images for gallery, training, and libraries. 2019-04-12 17:17:59 +02:00
468fc85751 Homepage: increase random featured assets to six. 2019-04-10 17:19:35 +02:00
049fdf3b63 Homepage: bring back two column homepage, only on XL screens. 2019-04-10 17:19:17 +02:00
ad31b6338f Homepage: sass file for homepage styling. 2019-04-10 17:18:23 +02:00
61b1ab0c20 Remove whitespace 2019-04-08 16:44:29 +02:00
08de073464 Remove whitespace 2019-04-08 16:44:29 +02:00
a756fb5f6e UI Landing: Fix alignment on Firefox.
Thanks Ines for the report!
2019-04-04 18:48:18 +02:00
2050f4b7d8 Front-page update 2019-04-04 16:47:44 +02:00
51e22eb414 UI Landing: padding on browse button. 2019-04-04 16:41:28 +02:00
cabfce12c0 UI Landing: Use 16 by 9 image for opengraph. 2019-04-04 16:41:28 +02:00
b6d9039e82 Add navigation and extension links to /browse 2019-04-04 15:31:52 +02:00
4a6e971e71 Show only groups and assets in browse endpoint 2019-04-04 15:31:28 +02:00
eeab0a407e UI Landing: Style tweaks 2019-04-04 14:20:19 +02:00
eafc1e981f UI Landing: Timeline -> Project Timeline 2019-04-04 14:20:09 +02:00
dfbd03e448 UI Landing: Jumbotron padding tweak and mobile. 2019-04-04 14:19:52 +02:00
12c64f13a2 UI: Rename 'Explore' to 'Browse' 2019-04-04 14:19:36 +02:00
b62f500b2e Cast empty string value in form_field to None
For FilmProjectForm, when no value is specified we want to save it
as None in the project document.
2019-04-04 13:09:53 +02:00
6d47946b1b Fix for exception in /open-movies
When extension_props.cloud.poster was set to an empty string, we would
try to get the file anyway and we would set the has_poster
convenience attribute to true. This would lead to an exception when
trying to access the poster file object in the template.
2019-04-04 11:49:12 +02:00
b4ecf93485 UI Browse: Remove description. 2019-04-04 02:04:00 +02:00
3d6b2452d6 Fix in is_cloud_project
Handle missing extension_props attribute.
2019-04-04 00:50:35 +02:00
4afe23e284 Introducing top level browsing
We introduce a new /p/<project_url>/browse endpoint, which allows us to
see all top-level nodes of a project.
2019-04-04 00:27:14 +02:00
479153af9b Do not show hidden pages in project landing 2019-04-04 00:27:14 +02:00
aee369cc5a UI Landing: alt name on image. 2019-04-04 00:26:38 +02:00
8f0670d017 UI Landing: Link icon, text and Explore button to project_explore_url.
To be replaced with the actual 'explore' endpoint.
2019-04-03 23:44:37 +02:00
858bed66f4 UI Landing: padding and column size adjustments. 2019-04-03 23:40:13 +02:00
7a02f86a5b UI Landing: open video_url in the page overlay. 2019-04-03 23:39:53 +02:00
37667424ab Cleanup: remove unused font-pillar.css link.
Those styles are built into project-main.sass now.
2019-04-03 23:10:33 +02:00
b4739521ed Layout Template: Introducing announcements.
Used for non-subscribers (current_user without .has-cap('subscriber')),
to give a friendly reminder about cool promos!
2019-04-03 22:54:00 +02:00
d125a6ac55 config_local: Example for announcements to non-subscribers. 2019-04-03 22:50:37 +02:00
4473d56379 Fix for exception
Check that ‘extension_props’ exists in the project before looking for
EXTENSION_NAME.
2019-04-03 17:36:27 +02:00
c971f799ef Override /p/<project_url>
By overriding this Pillar endpoint, we allow more control over how
the landing page of a project is rendered, based on the presence
of the ‘cloud’ extension property.
2019-04-03 17:00:37 +02:00
36f31caf04 UI Landing: Show logo and watch url if any. 2019-04-03 16:59:25 +02:00
d54d5ec157 Use poster file as preview for film projects 2019-04-03 16:43:48 +02:00
4a180e3784 Introducing setup_for_film functionality
It is now possible, only for users with the admin capability, to set up a
project as ‘film’. This action can be performed via CLI using
./manage.py cloud setup_for_film <project_url> or via the web
interface in the Cloud settings area.
Setting up a project for film creates a number of extension props
under the ‘cloud’ key. Such properties are listed in the
cloud_extension_props variable in setup.py.

At this moment the functionality exists for a very specific purpose:
improving the presentation of public Film projects in the Blender
Cloud. It can be further extended to improve the presentation of
Training and Libraries later on.
2019-04-03 15:54:45 +02:00
6ac75c6a14 UI Production: use same header and opengraph as other collections. 2019-04-03 15:48:38 +02:00
4b7cc3f58e UI Libraries: Fix wrong URL for characters project. 2019-04-03 15:47:59 +02:00
cbbaf90002 Templates: Add opengraph to collections. 2019-04-03 15:43:53 +02:00
c5154240ca Cleanup. One line for block page_title. 2019-04-03 15:43:01 +02:00
1e62aff62c Cleanup: Unused mixin include. 2019-04-03 15:42:37 +02:00
b015cc8fa4 Cleanup: Remove URL from category_list_header component. 2019-04-03 15:33:04 +02:00
3be54da5c3 Cleanup: Remove unused components mixin. 2019-04-03 15:32:32 +02:00
5c1b94544a UI: Tweaks to descriptions in Learn. 2019-04-03 15:31:38 +02:00
1ed2a3937e UI: Tweaks to descriptions in Libraries. 2019-04-03 15:31:14 +02:00
80b69438ed UI: Tweaks to descriptions in Index Collection. 2019-04-03 15:30:46 +02:00
3f3112f272 Pug Components: category_list_item component.
Taken from Pillar. Used in Libraries, Training, etc.
2019-04-03 15:29:14 +02:00
1d7cbd8ba5 UI Films: minor style tweaks. 2019-04-03 15:03:52 +02:00
a53dc51680 UI Landing: Align header to top. 2019-04-03 15:03:30 +02:00
29ff9609e7 UI Films: use variable for project URL.
Instead of calling url_for() many times.
2019-04-03 15:03:18 +02:00
bd07a63acd Index Collection Template: Use header and opengraph macros. 2019-04-03 15:02:11 +02:00
9bf6c0fcb3 Templates: New template for films. 2019-04-03 13:04:25 +02:00
142dd36e3c CSS: Add alias for pi-blender-cloud from font-pillar. 2019-04-03 13:01:39 +02:00
e03cf2f5da CSS: Include font-pillar as part of main.css 2019-04-03 13:01:08 +02:00
79b409c83a CSS: Include variables in project-main.sass 2019-04-03 11:49:17 +02:00
a67af63459 Template Libraries: Use header and opengraph macros. 2019-04-03 11:41:42 +02:00
362694f4b6 Template Learn: Use header and opengraph macros. 2019-04-03 11:41:33 +02:00
364fa76956 UI Landing: Use variable instead of magic number for background. 2019-04-03 11:40:55 +02:00
144dcf7a76 Sass: Introducing variables.sass file.
For Blender Cloud specific variables.
2019-04-03 11:40:01 +02:00
f8995fb657 Template Services: Use header and opengraph macros. 2019-04-03 11:39:11 +02:00
95762acf14 Templates: Introducing components.pug
For Blender Cloud specific components.

No need for them to be part of Pillar.
2019-04-03 11:38:27 +02:00
35a9986290 Templates: New macro for Opengraph.
To be used inside the opengraph block.

e.g.
{% block og %}
  {{ opengraph(title, description, image, url) }}
{% endblock %}
2019-04-02 19:48:51 +02:00
e8c878d0f3 UI Blog: Light background color and border for edit bar.
Makes it stand out more, especially when there is no image in the post.
2019-04-01 14:57:34 +02:00
fdf05d16de UI: Light background color for sidebar container. 2019-04-01 14:56:59 +02:00
49bffd108d UI Index Collection: Match style with Training and Libraries. 2019-04-01 12:33:47 +02:00
87c1eae1a6 UI Homepage: Replace 'film in production' with just Spring.
Since the film is no longer in production! We are done!
2019-04-01 12:32:49 +02:00
163aee6bbc UI Project: Show sidebar by default. 2019-03-29 15:46:43 +01:00
5f01b0d6ba CSS Cleanup
An oversight. We already styled node-details-description a few lines above.
2019-03-29 15:37:46 +01:00
da955ce4af UI Landing: No need to set 'landing' as title.
Just use the default since project landing is the same as project home.
2019-03-29 15:37:46 +01:00
ad1a55c5d5 UI Landing: Use dark background only on project home landing page. 2019-03-29 15:37:46 +01:00
db976a4d50 UI Landing: Bigger text in node description. 2019-03-29 15:37:46 +01:00
42215d2b02 UI Layout: New jinja block for adding custom classes to the body.
Usage: {% block bodyclasses %}custom-class-name{% endblock %}
2019-03-29 15:37:46 +01:00
9cf3a7147e Merge branch 'production' 2019-03-29 15:27:58 +01:00
fb016c3e3b build wheels using the correct Docker image 2019-03-29 15:18:10 +01:00
039ab1ce70 UI Project sidebar: padding and border classes adjustment. 2019-03-28 21:12:24 +01:00
1bd1c6e5fd Merge branch 'master' into wip-breadcrumbs 2019-03-28 21:04:13 +01:00
353660c7ba UI Project: Sidebar toggle button. 2019-03-28 21:03:24 +01:00
b77f3aaa49 Breadcrumbs: clicking a breadcrumb now calls displayNode(nodeId)
Requires Pillar 1fd17303a53016c15f685deebda65bea8e71be90
2019-03-28 16:43:15 +01:00
50c0842e72 Breadcrumbs: Move from sidebar to project context.
Since we will be able to toggle the sidebar from the breadcrumbs.
2019-03-28 16:06:32 +01:00
5d2ba7fc96 Main Stylesheet: Include breadcrumbs component from Pillar. 2019-03-28 16:04:56 +01:00
6972e1662d Show breadcrumbs in project navigation
Requires Pillar 4499f911.
2019-03-28 12:41:57 +01:00
af6022c997 UI Services: Minor layout adjustments.
* Align all items to the left.
* Match header with Training and Libraries.
* Make images a link.
2019-03-27 16:00:19 +01:00
2fc1738e99 UI Libraries: Use single-column listing.
Uses component from components.pug
2019-03-27 16:00:19 +01:00
149013d64c UI Learn: Use single-column listing.
Uses component from components.pug
2019-03-27 16:00:19 +01:00
e13a1bff1b UI Navigation: Move Training after Films.
So the category listings are grouped together (Training/Libraries/Services).
2019-03-27 16:00:19 +01:00
9ac60fe922 Tweak build-all.sh to run with bash 3.2
The ;& statement is only supported in bash 4.0 or higher, which is not natively
provided on macOS.
2019-03-27 15:38:22 +01:00
2f78f36f61 Update package-lock.json
The previous update was not sufficient apparently.
2019-03-27 15:10:36 +01:00
5edcf931e9 Update package-lock.json
The flatmap-stream dependency was removed from NPM since it was
malicious.
2019-03-27 14:02:21 +01:00
b417f25811 UI Homepage: Single-column layout. 2019-03-27 13:08:44 +01:00
691c1411bc Added jwtkeys/README.md 2019-03-27 12:58:08 +01:00
9f3946ba9e Documented deployment scripts 2019-03-27 12:57:50 +01:00
269daa0d43 UI Navigation: Padding adjustment. 2019-03-27 12:55:08 +01:00
39516e63cc UI Navigation: Cleanup (adding comments). 2019-03-27 12:54:54 +01:00
84881743ef UI Navigation: Category title 'Open Projects' rename to 'Films' 2019-03-27 12:54:10 +01:00
e975492869 UI Navigation: Link to project category.
Without it, going from one project to another in the same category (like
going from the Textures library to HDRI) is cumbersome, as you always have to go
through the homepage first.
2019-03-27 12:53:27 +01:00
78e1c728fa UI Landing: Cleanup 2019-03-27 12:36:39 +01:00
be18dfb985 UI Landing: Margin top of project description. 2019-03-27 12:35:58 +01:00
8797b18754 UI Landing: Cleanup, remove unused attribute. 2019-03-27 12:34:44 +01:00
727ba3fd58 UI Landing: Simplify javascript code for overlay. 2019-03-27 12:34:11 +01:00
a369c04b38 UI Landing: Remove project name and gallery title. 2019-03-27 12:31:51 +01:00
fad5f803e8 UI Landing: Fade header to background color. 2019-03-27 12:18:15 +01:00
143cd27c55 UI Landing: Use dark background. 2019-03-27 12:17:15 +01:00
2f854ebeee UI Landing: Replace Blog updates with Timeline. 2019-03-27 12:14:24 +01:00
6fa3af50cf UI Landing: Show link to project edit. 2019-03-27 12:13:36 +01:00
1fd2443bd7 UI Landing: Show project name and summary in header. 2019-03-27 12:12:59 +01:00
c5f8add5f5 Made it easier to rebuild the Docker image after someone else built it
Because we only pushed the final image to Docker Hub, it was impossible to
pull the base image someone else created and "quickly" build a new deploy
image.

Now the deploy scripts push some of the intermediate images as well,
making it possible to pull them later. I've added `build-pull.sh` and
`full-pull.sh` to perform this pull and build up from the pulled images.
2019-03-13 15:47:12 +01:00
a8e5d593ac Address concern rBC6cbd5ca369ed
Improvements to the deployment script.
2019-02-13 12:12:36 +01:00
fc986b0ab6 Renamed docker/4_run/deploy to docker/4_run/staging
"Staging" covers the meaning of what is actually happening better than
"deploy". I want to keep "deploy" for actually deploying onto a production
server.
2019-02-13 10:39:18 +01:00
b7b6543043 Navigation: Unified cloud navigation
* Removed main drop down menu
* Added "My cloud" to user menu
* Attract/Flamenco is found under Production Tools menu
* Attract/Flamenco has the same navigation as its project
2019-02-07 14:45:55 +01:00
d32c44e50c Navigation: Unified cloud navigation
The welcome page was missing the Blender Cloud logo
2019-02-06 11:42:39 +01:00
369161e29f Navigation: Unified cloud navigation
* Removed main drop down menu
* Added "My cloud" to user menu
* Attract/Flamenco is found under Production Tools menu
* Attract/Flamenco has the same navigation as its project
2019-02-06 10:31:36 +01:00
3cd55e2a83 Added two scripts to make deployment a bit easier 2019-02-06 09:34:55 +01:00
1071915f27 Gulp fix for NodeJS 10 2019-01-04 14:21:27 +01:00
5f9406edd2 Vue Comments: Comments ported to Vue + DnD fileupload
* Drag and drop files onto the comment editor to add a file attachment
* Using Vue to render comments

Since comments now have attachments, we need to update the schemas:
./manage.py maintenance replace_pillar_node_type_schemas
2018-12-12 11:45:46 +01:00
5bf1693d5b Removed RabbitMQ docker container from docker-compose.yml
Now that Celery switched to using Redis as broker, we no longer need
RabbitMQ. Celery has been running on Redis for a while now and it all seems
fine, so it's time to wave the Rabbit goodbye.
2018-12-04 17:57:49 +01:00
27caff7e6e Docker: added a little list of Redis database numbers we're using 2018-12-04 11:30:48 +01:00
63d25d1dca Fix broken thumbnail in Blog index 2018-11-23 14:56:57 +01:00
2950a4347a Quick-Search: Added Quick-search in the topbar
Changed how and what we store in Elasticsearch to unify it with how we store
things in MongoDB, so we can have more generic JavaScript code
to render the data.

Elastic changes:
  Added:
  Node.project.url

  Altered to store id instead of url:
  Node.picture

  Made Post searchable

./manage.py elastic reset_index
./manage.py elastic reindex

Thanks to Pablo and Sybren
2018-11-22 15:31:52 +01:00
76a707e5bf Gulp: Watch for changes in both blender-cloud and pillar folders. 2018-11-20 19:19:22 +01:00
5218bd17e3 Project-Timeline: Introduced timeline on projects
Limited to projects of category assets and film for now.
2018-11-20 16:29:01 +01:00
37fe235d47 Lazy Home: Lazy load latest blog posts and assets and group by week and
project.

The JavaScript files tutti.js and timeline.js are needed, and then the following to
init the timeline:

$('.timeline')
    .timeline({
        url: '/api/timeline'
    });

# JavaScript Notes:
## ES6 transpile:
* Files in src/scripts/js/es6/common will be transpiled from
modern ES6 JS to old ES5 JS, and then added to tutti.js
* Files in src/scripts/js/es6/individual will be transpiled from
modern ES6 JS to old ES5 JS into individual module files
## JS Testing
* Added the Jest test framework to write JavaScript tests.
* `npm test` will run all the JavaScript tests

Thanks to Sybren for reviewing
2018-11-12 12:57:24 +01:00
1a49b24f8e Blog Bug fix: Unable to render blog before first post 2018-10-23 15:09:02 +02:00
0b2a3c99ce Loading bar: Introduced two event listeners on window 'pillar:workStart' and 'pillar:workStop' that (de)activate the loading bar.
Reason:
* To decouple code
* Have the loading bar active until the whole page has stopped working
* Have local loading info

Usage:
$('.myClass')
   .on('pillar:workStart', function(){
    ... do stuff locally while loading ...
    })
   .on('pillar:workStop', function(){
   ... stop doing stuff locally once loading is done ...
   })

$('.myClass .mySubClass').trigger('pillar:workStart')
... do stuff ...
$('.myClass .mySubClass').trigger('pillar:workStop')
2018-10-23 13:57:02 +02:00
a674de4db5 Remove CELERY_BEAT_SCHEDULE from config_local
CELERY_BEAT_SCHEDULE shouldn't need any changes in config_local for
production; the default should be production-ready.
2018-10-10 14:58:52 +02:00
d2815acd80 Free assets: Assets should not be advertised as free if the user is a logged in subscriber. 2018-10-04 17:44:08 +02:00
670c600382 Organizations: Added null check to properly render new Organizations 2018-10-04 15:48:26 +02:00
f2207bc4d4 Asset list item: Don't show user.full_name in latest and random assets 2018-10-04 12:30:05 +02:00
a9848c3fad Random asset bug fix: If 2 assets from the same project were returned, the second one would get a corrupt URL 2018-10-04 10:11:38 +02:00
c81711de53 Video Duration: The duration of a video is now shown on thumbnails and below the video player
Asset nodes now have a new field called "properties.duration_seconds". This holds a copy of the duration stored on the referenced video file and stays in sync using eve hooks.

To migrate existing duration times from files to nodes you need to run the following:
./manage.py maintenance reconcile_node_video_duration -ag

There are 2 more maintenance commands to be used to determine if there are any missing durations in either files or nodes:
find_video_files_without_duration
find_video_nodes_without_duration

FFProbe is now used to detect what duration a video file has.

Reviewed by Sybren.
2018-10-03 18:30:40 +02:00
2a8a109d83 Homepage: Update sidebar image for Spring 2018-10-03 11:14:11 +02:00
35b1106ccc Tagged Asset: Added metadata
Video duration, Project link and pretty date
2018-09-26 11:29:15 +02:00
1f326b2728 Asset: Fix video progress not filling up correctly 2018-09-25 12:19:22 +02:00
ccb3187c17 Assets: Fix video progress not showing 2018-09-24 13:32:08 +02:00
734a8db145 Index collection: use gradient on header 2018-09-21 16:56:01 +02:00
82c6c30a0a Tweaks to featured item in index collection 2018-09-21 16:52:21 +02:00
594af19b2b Main dropdown tweaks for responsive.
Most of the changes are done in Pillar, in 0_navbar.js (part of tutti).
2018-09-21 16:19:13 +02:00
3de73ac35e CSS: Use bootstrap variable for button roundness 2018-09-21 16:14:58 +02:00
97c549de08 Project Landing: cleanup unused classes 2018-09-21 12:19:48 +02:00
b5ff89f4ca Node details: Center only on landing 2018-09-21 12:10:44 +02:00
ef8bd8d22b Use node.properties.status instead of node.status 2018-09-20 19:04:43 +02:00
4ff52a8af0 Add post status to posts query for homepage 2018-09-20 19:03:57 +02:00
218ba3831c Navigation: add Art Gallery to the libraries nav 2018-09-20 18:13:41 +02:00
a753f29ccc Navigation: Add Learn and Libraries to homepage nav
Also remove Courses, Workshops
2018-09-20 18:11:26 +02:00
1a7be4b565 Navigation: Move links in the main dropdown to their own macros
So they can be easily re-used in other templates.
2018-09-20 16:37:02 +02:00
765f36261a Blog: Remove unused macro 2018-09-20 16:36:00 +02:00
bce054d47d Tagged Assets: Set the loading bar when loading images 2018-09-20 15:27:41 +02:00
89ea34724b Tagged Assets: Initial 8, load 8 more 2018-09-20 15:27:22 +02:00
7983a7b038 Use loadingBar utility. 2018-09-20 15:20:58 +02:00
7ba8ff7580 New templates for /learn and /libraries 2018-09-20 15:00:10 +02:00
fd1db5d2e0 Production: make asset title a link 2018-09-20 13:17:50 +02:00
205e34289f Blog: show status if not published 2018-09-20 13:01:57 +02:00
e7190f09dc Landing: Center text 2018-09-20 12:14:47 +02:00
7e61d218b9 Navigation: Blender Cloud -> Homepage 2018-09-20 12:07:08 +02:00
12936c80ea Navigation: Remove Blog entry 2018-09-20 12:06:57 +02:00
dcac1317bf Rename secondary_navigation to navigation_project
And move to _navigation.pug with the other navigations macros.
2018-09-20 12:06:43 +02:00
afa0c96156 Navigation: add custom nav for services 2018-09-19 19:34:49 +02:00
012eaaef11 Production: Load 4 assets, load 4 more 2018-09-19 19:34:07 +02:00
54e4e76945 Footer: Production lessons
Also move Art Gallery to Libraries
2018-09-19 19:06:52 +02:00
6cbd5ca369 Improve asset building process
After running ./gulp for every project, we delete node_modules.
2018-09-19 16:57:07 +02:00
690b35bab1 Add more tags to NODE_TAGS 2018-09-19 16:15:31 +02:00
356e4705b3 Add digital-painting to NODE_TAGS 2018-09-19 15:53:56 +02:00
748190d15b Fix jumbotron in index collection 2018-09-19 15:50:03 +02:00
05ff27a12d Navigation: Position icons 2018-09-19 15:42:33 +02:00
74e18bb500 Fix navigation 2018-09-19 15:39:58 +02:00
c460359b31 Run ./gulp in every subproject dir
This is necessary since in our gulp files we reference assets in
node_modules using relative paths. This makes the asset building
process much slower, and should be addressed in the future.
2018-09-19 14:48:16 +02:00
14daead15d Use correct permission format for gulp-chmod 2018-09-19 14:45:56 +02:00
8e9d63df2b Follow art direction for Spring banner 2018-09-19 12:42:12 +02:00
10addb1521 Spring background for index collection 2018-09-19 12:38:35 +02:00
735e6400e3 Background for spring project 2018-09-19 12:38:35 +02:00
a1d84196cd Add additional dependencies to package.json 2018-09-19 12:00:37 +02:00
678a03dbf1 Update NODE_TAGS 2018-09-19 11:34:22 +02:00
811dc4d65b Mark Production Lessons as new 2018-09-19 11:33:40 +02:00
265794d4b7 Pass title to /production 2018-09-19 11:20:32 +02:00
ece0ba4ae7 Dropdowns tweaks based on feedback 2018-09-19 11:20:17 +02:00
2395bd8090 Production Lessons: Added more tags 2018-09-18 16:54:57 +02:00
fef7d5feac Homepage: update image for Spring 2018-09-18 13:57:44 +02:00
f0f96bf2f1 Home project: Fix creating new projects 2018-09-18 13:57:25 +02:00
e05a0c0e04 Tagged assets: Style 'load more items' button 2018-09-18 12:55:14 +02:00
dbba955afe Homepage: use asset list template for random assets 2018-09-18 12:54:57 +02:00
17240f5793 Landing: fix styling of gallery 2018-09-18 12:54:32 +02:00
8ff8975dbb Welcome page: Styling 2018-09-17 18:42:04 +02:00
0a144ec12d Blog: Edit post link 2018-09-17 18:34:43 +02:00
6d9fa89d90 Project: Darker tree 2018-09-17 18:15:49 +02:00
06e7ea53bb Footer: Fix broken links 2018-09-17 18:15:36 +02:00
00cd29befc Layout: move footer and main menu into their own files 2018-09-17 17:18:43 +02:00
a5c7ec285d Style tweaks 2018-09-17 17:09:43 +02:00
7fff47c5c5 Use spans for index_collection navigation 2018-09-17 15:04:07 +02:00
534e212802 Project Landing: Don't set title
As it's set by the pages themselves using node.properties.url
2018-09-17 15:03:53 +02:00
1fac97e3f8 Homepage: style sidebar and cleanup CSS
homepage.sass is like 10 lines now :)
2018-09-17 12:52:01 +02:00
0556c5ae9a Homepage: Style comments 2018-09-17 12:16:52 +02:00
bb2c351460 Generic classes for styling 2018-09-17 11:36:57 +02:00
a65d771bd6 Tagged Assets: Support passing arguments
Pass LOAD_INITIAL_COUNT and LOAD_NEXT_COUNT

Also only show 'Load more' if LOAD_NEXT_COUNT is not set to 0
2018-09-16 06:30:48 +02:00
b50a3e1fb3 Tagged assets: add video progress and watched label 2018-09-16 05:52:20 +02:00
6f88de3b20 Menu: Remove columns for trainings
Experiment with a more compact menu, more readable with less text.
2018-09-16 05:06:41 +02:00
6569e22fa8 Spacing 2018-09-16 05:03:12 +02:00
c773145bd6 Blog: use jumbotron overlay 2018-09-16 04:28:37 +02:00
ae907719d0 Index Collection: Limit columns to 3 2018-09-16 03:43:21 +02:00
88f936772d Blog: Layout adjustments 2018-09-16 03:06:08 +02:00
0f1088702d Blog: Fix showing wrong single post
Also center comments and other minor tweaks
2018-09-16 02:03:25 +02:00
40f6ebd99c Project Landing: Fix links in latest updates
Part of T56813
2018-09-15 22:19:47 +02:00
fca2b0f44f Project Landing: Center titles
Part of T56813
2018-09-15 22:15:05 +02:00
08b1b03802 Blog: name in title 2018-09-15 22:09:12 +02:00
23bf27ca75 Layout: Add Art Gallery to menu 2018-09-15 21:36:32 +02:00
15264877e6 Blog minor fixes and tweaks 2018-09-15 21:33:11 +02:00
2eb969f7ee Blog listing: Show posts as cards 2018-09-15 21:23:45 +02:00
1196f178e8 One class too many 2018-09-15 17:26:56 +02:00
aaeecc1429 Profile page: Styling and layout 2018-09-15 16:41:47 +02:00
df33a1803e Navigation menu: re-order items and minor tweaks 2018-09-15 06:15:49 +02:00
dc59bb53de New global navigation menu. 2018-09-15 05:36:23 +02:00
8fdd54eaad Class names and minor blog style tweak 2018-09-14 20:30:58 +02:00
11f44560bb Projects: Use render_secondary_navigation macro 2018-09-14 17:13:54 +02:00
1015254d93 Bring project-main from Pillar
Rename project-landing to _project-landing and include it in project-main.
It's only a few lines of code, too few to be worth keeping as a separate CSS file.
2018-09-14 17:13:23 +02:00
dfa0c14bb0 Fix Scss paths 2018-09-14 14:43:55 +02:00
bbb643e371 Update package-lock.json 2018-09-14 13:12:07 +02:00
7d3c24d712 Project view: update videojs path
The new path removes the version number; check package.json for that.
2018-09-14 01:25:50 +02:00
e1433c3c2a Layout: Cleanup
* jQuery and Bootstrap are now part of tutti, built by Pillar.
* Markdown is no longer used as Pillar has its own converter.
2018-09-14 01:21:22 +02:00
90d6685add Gulp: Cleanup dead code
The task for building tutti was never used, since all functionality is
built by pillar.

Also remove the dependencies for jQuery and Bootstrap in package.json
2018-09-14 01:19:45 +02:00
9fd233a8dc Initial styling of production lessons. 2018-09-13 18:10:12 +02:00
a40eb5d6e4 Use the new navigation_links provided by pillar 2018-09-13 16:36:41 +02:00
fab0d412fa Navbar: padding for items 2018-09-12 19:00:54 +02:00
c5287da78c Layout: Search icon in the navbar 2018-09-11 19:41:16 +02:00
37726bee0f Cleanup 2018-09-11 19:35:05 +02:00
10f15185e0 Homepage: Use macro for listing assets 2018-09-11 17:46:09 +02:00
c90cd41e23 Use list-asset() mixin for homepage latest assets 2018-09-10 19:01:43 +02:00
c8261e5df6 User menu tweaks 2018-09-10 17:12:34 +02:00
34ae8e55c3 Project view: show blog and pages in header 2018-09-10 16:12:09 +02:00
f9368c0729 Layout: Use mixin for top navigation 2018-09-07 18:12:01 +02:00
b4c51007ab Style production listing 2018-09-07 18:11:46 +02:00
a17253c482 Project Landing: Fix missing pillar-font 2018-09-07 18:11:37 +02:00
e348b003b1 Use mixin components from Pillar 2018-09-07 17:19:36 +02:00
23fbb68cfc Replace #project-loading spinning icon with a .loader-bar 2018-09-07 14:56:38 +02:00
4a4e75ee59 Project view: Cleanup and minor layout tweaks 2018-09-07 12:47:31 +02:00
9aae856ac8 Project View: fix alignment of edit tools 2018-09-06 18:16:07 +02:00
94c2c6e550 Start of "production videos", a.k.a. tagged assets overview
Tagged assets are shown in a list per tag. The list is dynamically
loaded with JavaScript.
2018-09-06 16:08:23 +02:00
0b1f295480 Services: Fix layout 2018-09-06 14:33:44 +02:00
a64d3902fd Bootstrap popovers are no longer used. 2018-09-06 14:24:31 +02:00
8dd1de1018 Merge branch 'wip-redesign'
# Conflicts:
#	src/templates/homepage.pug
#	src/templates/services.pug
2018-09-06 14:13:22 +02:00
b12b320cf0 Project view: include video_plugins.min.js 2018-09-06 13:32:57 +02:00
95e51e90de Homepage cleanup. 2018-09-06 13:03:40 +02:00
92106459a0 Use Navigation Tabs for homepage and index collections 2018-09-06 13:03:22 +02:00
982047fc3b Pug: Tweaks to components 2018-09-06 12:59:48 +02:00
5a36888f61 Navigation: Dropdowns for Services and Open Projects 2018-09-06 12:58:56 +02:00
59e0adf3ca Pug: Bring new templates from Pillar 2018-09-06 12:54:15 +02:00
3fdbb92b93 Fixed typo 2018-09-05 15:11:46 +02:00
cf98883633 Make <meta property="og:url"> tags have an absolute URL
Most of those can simply use `{{ request.url }}`, as this already contains
the absolute URL of the currently displayed page.
2018-09-05 13:58:13 +02:00
7b32b97203 Remove https://cloud.blender.org from URLs
URLs should be host-relative, so that they also work on devservers.
URLs in emails should remain absolute, though; we may want to change those
to use {{ url_for(..., _external=True) }} at some point.
2018-09-05 13:57:15 +02:00
7f58be4568 Updated Blender Cloud add-on to 1.9.0
Also change the config_local.py so that we only have to change one variable
for a new version.
2018-09-05 13:40:24 +02:00
c25df6f0ad Cleanup 2018-08-31 19:33:14 +02:00
813750a006 Layout cleanup 2018-08-31 19:08:52 +02:00
9ba2735c8c CSS Cleanup 2018-08-31 19:08:23 +02:00
73e8a81f3c Minor style tweaks 2018-08-31 13:58:26 +02:00
9ffcde3348 Use styling from Pillar 2018-08-31 13:58:08 +02:00
f0b18e88f4 Use pug mixins for header, cards and navigation 2018-08-31 13:57:51 +02:00
ed211f9473 Introducing Pug mixin components
For now added jumbotron, secondary navigation, card decks and individual cards.

Thanks @sybren for the suggestion.
2018-08-31 13:57:09 +02:00
7a4c7d75f6 Project Landing uses new CSS 2018-08-31 13:55:35 +02:00
cc16351136 NPM: Upgrade libraries
The new dependency is jQuery; we already depended on it, but now we use it
from the npm package for easier version control and upgrades.
2018-08-31 13:54:39 +02:00
7dc1e6f9a1 Gulp: Only chmod files when in --production 2018-08-31 13:53:37 +02:00
fcf715b5b1 Services page: move to bootstrap 4 2018-08-31 13:52:40 +02:00
099984f97c Added #!/bin/sh at top of shell script 2018-08-30 12:53:34 +02:00
8bfb40ce54 Various Docker image upgrades, read the entire commit message!
- Ubuntu 17.10 → 18.04.
- Python 3.6.3 → 3.6.6.
- Use `DEBIAN_FRONTEND=noninteractive` to prevent prompts during
  installation.
- Install `tzdata` in the base image as it's required by subimages.
- Correctly set maintainer in Dockerfile.
2018-08-30 12:53:34 +02:00
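(A hedged sketch of the Dockerfile lines behind the DEBIAN_FRONTEND and tzdata points above; whether ARG or ENV is used, and the exact install command, are assumptions rather than the repository's actual Dockerfile.)

    FROM ubuntu:18.04
    # Prevent apt/dpkg prompts during the image build
    ARG DEBIAN_FRONTEND=noninteractive
    # tzdata is required by the sub-images, so install it in the base image
    RUN apt-get update && apt-get install -y tzdata && rm -rf /var/lib/apt/lists/*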
d60a65c9f0 End BLENDER_ID_ENDPOINT with a slash 2018-08-30 12:46:58 +02:00
9cd2853e49 Upgrade pip after building Python 2018-08-30 12:31:31 +02:00
3d5554d9ce BLENDER_ID_ENDPOINT should end with a slash
There is always a path component in a URL.
2018-08-29 14:20:23 +02:00
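(Why the trailing slash matters, illustrated with Python's urllib.parse.urljoin; the endpoint URL here is made up. Without the slash, joining a relative path replaces the last path component instead of appending to it.)

    >>> from urllib.parse import urljoin
    >>> urljoin('https://id.example.org/api', 'u/me')
    'https://id.example.org/u/me'
    >>> urljoin('https://id.example.org/api/', 'u/me')
    'https://id.example.org/api/u/me'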
764ccfa78e Add -e . to requirements-dev.txt 2018-08-29 12:24:45 +02:00
0b8ebecfea Don't import flask_login.request
`flask_login.request` is the exact same thing as `flask.request`, so
importing it from `flask_login` makes no sense. Also, it's been removed
from the new Flask-Login.
2018-08-29 12:19:40 +02:00
68d09dc886 CSS: cleanup 2018-08-28 15:56:06 +02:00
169a7f51f0 navbar-container: cleanup 2018-08-28 15:55:32 +02:00
f48a4883ae Index collection redesign 2018-08-27 16:58:01 +02:00
01b6693324 Cleanup styling. Use bootstrap classes instead 2018-08-27 16:57:34 +02:00
012ba06655 Use system fonts (see main.sass) 2018-08-27 16:55:48 +02:00
2be601d0b0 Introducing Bootstrap 4
Bootstrap (and its dependency popper.js) are now used from npm packages,
allowing better version control and custom building of only the required
components for both styling and JavaScript.

At this moment the whole styling of Bootstrap is included; once the Cloud
redesign is over, it will be stripped down to only the used components.
2018-08-27 16:52:22 +02:00
4dc11b075a Gulp: watch Pillar styles folder for changes and compile Sass 2018-08-27 15:17:00 +02:00
4696d09fed Corrected rewrite rule for Caminandes 2018-07-06 14:56:40 +02:00
663cf7bf2d Update README with further setup instructions 2018-06-25 18:58:59 +02:00
4e8530478a Remove trailing slash from BLENDER_ID_ENDPOINT 2018-06-22 19:40:44 +02:00
b2d10b5ca7 Expose BLENDER_ID_ENDPOINT in example config 2018-06-22 19:40:10 +02:00
fad1aa75e9 Add virtualenv creation instructions 2018-06-22 16:23:44 +02:00
d9cf2d8631 Include package-lock.json for this project 2018-06-22 16:14:28 +02:00
34cb45d5f1 Fix link to Pillar website in README.md 2018-06-22 16:13:56 +02:00
3c74e79e4a Update README.md featuring Development setup instructions 2018-06-22 16:09:29 +02:00
fb2f245f1e Introducing the "./gulp all" command
It automates running the ./gulp command in all blender-cloud
related repos. Useful when setting up a project for the first time.
2018-06-22 15:39:24 +02:00
8c6fa1e423 Add config_local.example.py for development setup. 2018-06-21 19:08:07 +02:00
b66b6cf445 Reduced verbosity of mongo-backup.sh 2018-06-14 11:57:36 +02:00
b153cae70e Increased WSGI thread count 32 → 64 2018-06-13 10:47:28 +02:00
368e3f6d40 Use high-res image for landing header 2018-05-07 15:25:10 +02:00
e9ec850d90 Styling: tweaks for mobile 2018-04-17 00:28:49 +02:00
280d26801e Reduce scope of h2 styling 2018-04-16 20:36:23 +02:00
6c285c078f Cleanup for landing page 2018-04-16 18:35:16 +02:00
debfc4e356 Layout tweaks for landing page 2018-04-16 17:30:34 +02:00
541663ce0c Skip nodes with no image in landing gallery 2018-04-16 16:36:03 +02:00
1fb044e7c1 Use macro for rendering secondary navigation 2018-04-16 16:24:00 +02:00
4769e2e904 New landing page for Hero project 2018-04-16 14:38:08 +02:00
cf99383b9c Only load clipboard.min.js when authenticated
This is used in the attachments form, which is only available to
authenticated users.
2018-04-03 11:28:13 +02:00
8613ac7244 Merge branch 'master' into production 2018-04-03 11:10:01 +02:00
8c48a61114 Switched attachment rendering to shortcode system
Requires Pillar 3b452d14ce32e1a744fc526a922e1bb60b83ef25 or newer.
2018-04-03 11:02:33 +02:00
0259c5e0ec Add clipboard.min.js to layout
This is needed for the copy button on file asset uploading. Rather than
including it on all pages that could feature the file uploader, it's now
loaded globally.
2018-03-29 17:34:22 +02:00
a726fd1fbe Remove time from logs; timestamp is added by Apache anyway. 2018-03-29 11:01:51 +02:00
df71738623 Nodes preview now uses typewatch and csrf_token 2018-03-28 23:38:09 +02:00
6a698daaa0 Remove time from logs; timestamp is added by Apache anyway. 2018-03-27 16:42:43 +02:00
7f538bdaee Replace "Intel" with "Intel Software" logo 2018-03-27 12:12:26 +02:00
5f07c7ce17 Use new hashing of static file names.
Every time the docker image is rebuilt, a random hash is chosen.

Requires Pillar d560f89704e3a6f4490df57712525048c469bed2 or newer.
2018-03-23 17:39:04 +01:00
5a42e2dcb8 Flip condition to unindent pretty much all the code
No semantic changes.
2018-03-23 17:24:36 +01:00
d5a54b7cf1 Formatting 2018-03-23 16:37:58 +01:00
7cb4b37ae2 Renamed some docker files to Dockerfile
This makes it simpler to manage by using the default name. It also helps
my editor to recognise the file and highlight it properly.
2018-03-23 12:42:56 +01:00
1fca473257 Moved Apache files into separate subdir 2018-03-23 12:12:39 +01:00
5a6035a494 Change hostname from blender-cloud → cloud.local in docker-compose.yml 2018-03-23 12:11:59 +01:00
98698be7eb Add redirect for waking-the-forest project
Waking the Forest was originally part of the Art Gallery, but was
moved to its own dedicated workshop to increase visibility.
2018-03-19 11:17:45 +01:00
d4f072480c Add convenience redirect from /hero to /p/hero 2018-03-14 14:35:27 +01:00
2d036ee657 Fix rsync of MongoDB backup to Swami
- Forcing IPv4 no longer necessary
- The directory on Swami is determined by the rrsync parameters in Swami's
  .ssh/authorized_keys file.
2018-03-09 14:02:39 +01:00
1bb762db6b Document simplified deployment procedure 2018-03-07 15:44:13 +01:00
11743c54e2 Fixed quarterly pricing + all pricing layout tweak 2018-03-07 15:12:36 +01:00
29d1d02bfd Prevent error when there are no Mongo backups to remove yet. 2018-02-22 10:01:59 +01:00
f7e5db2174 Copy files from work directory instead of $DOCKER_DEPLOYDIR
Those files aren't run from inside docker, so it's unnecessary to rebuild
the docker image when we want to deploy new versions.
2018-02-21 11:30:48 +01:00
2141aed06c Added script to run on server for nightly MongoDB backups
Forced to use IPv4 due to IPv6 connectivity issues with Swami.
2018-02-21 11:30:48 +01:00
6b56df9e9c Welcome page additions
New projects: Hero, Spring, and Minecraft Animation Workshop!
2018-02-20 18:56:18 +01:00
5a4519659a Welcome page: typo in Quarterly
Thanks to Michael Glass @WebSmith for reporting!
2018-02-14 12:06:20 +01:00
c3ddc831aa Stricter XSendFilePath in Apache config 2018-02-14 11:07:47 +01:00
1a63b51c48 Ignore files created by running celery-beat outside docker 2018-02-13 12:33:24 +01:00
484ac34c50 T53983: Explicitly version the picohttp docker image 2018-02-13 12:32:19 +01:00
908360eb1c Scripts for easier deployment without leaving ./deploy/
You can choose between build-quick.sh (which only does 4_run/build.sh) and
build-all.sh (which does a full Docker image rebuild).
2018-02-13 11:46:29 +01:00
3bf1c3ea1b About: introducing team profiles
CSS should be refactored, probably in the page itself.
2018-02-12 00:32:12 +01:00
fc58fbef5b Resize and cleanup team profile pictures 2018-02-12 00:29:22 +01:00
9e961580d3 Fix bad URL to store 2018-02-06 10:59:23 +01:00
87cf5a9844 Revert "Add https://cloud*/* as virtual host to haproxy config"
This reverts commit 3be926b9b3.
2018-02-02 13:00:57 +01:00
5a3a7a3883 LetsEncrypt fixes
- Changed virtual host weight for the letsencrypt docker so that it is
  higher than any other weight
- Copy the renewal script to the server (previously it was available
  to the host at /data/git/blender-cloud/…, but no longer).
2018-02-02 12:39:07 +01:00
3be926b9b3 Add https://cloud*/* as virtual host to haproxy config
This allows testing on https://cloud3/ for example, without having to
edit the docker-compose.yml file on the cloud3 server.
2018-02-02 12:20:39 +01:00
ff5af22771 Added missing hostname in README 2018-02-02 12:20:00 +01:00
6f73222dcd Add the most-changing files as last step for faster Docker rebuilds. 2018-02-02 12:09:16 +01:00
1617a119db No more branch check for subprojects
We only need the locations of those subprojects to get their Git URL, and
the state of the work directory doesn't matter.
2018-02-02 12:04:47 +01:00
bef402a6b0 Updated documentation for the new way to deploy Blender Cloud 2018-02-02 12:01:47 +01:00
94ef616593 Placing code + assets directly into Docker image
This radically changes the way we deploy to the production server, as a
Git checkout is no longer required there. All the necessary files are
now inside the docker image. As a result, /data/git should no longer be
mounted as a Docker volume.

- Renamed docker/build.sh → docker/full_rebuild.sh
  This makes it clearer that it performs a full rebuild of the Docker images.
- Full rebuilds should be done on a regular basis to pull in Ubuntu
  security updates.
- Removed rsync_ui.sh; we no longer need it. Other projects can also
  remove their rsync_ui.sh.
- Moved deploy.sh → deploy/2docker.sh and added deploy/2server.sh
2018-02-02 12:01:17 +01:00
ffc4f271e8 Upgrade Docker base image to Ubuntu 17.10 2018-02-01 15:53:33 +01:00
917417a8d5 Email: send link to homepage via login endpoint
This way we avoid displaying the welcome page if users are not
logged in.
2018-01-26 17:14:57 +01:00
48fd9f401a Email: tweaks to welcome message layout and content
According to the discussion after commit c2518e9ae1.
2018-01-26 17:09:45 +01:00
7ba1c55609 Billing page: moved user classification logic from template → view code
This is the kind of stuff that's much easier expressed in Python than in
the template code.

API calls removed:
  - fetching the user isn't necessary, since we have
    pillar.auth.current_user anyway.
  - a call for each group the user is a member of, since we only used it to
    check whether the user has demo access anyway.
2018-01-25 14:06:13 +01:00
33aa819040 Override Roles & Caps explanation with something more specific for Cloud 2018-01-25 14:06:13 +01:00
70f28074a0 Use Jinja2 inheritance to render user settings pages.
This requires Pillar e55d38261b36756a2850716a453c08c9ee6be9e2 or newer.
2018-01-25 14:06:13 +01:00
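(The commit above refers to the standard Jinja2 inheritance pattern; the template and block names below are hypothetical, not the project's actual ones.)

    {# parent template, provided by Pillar #}
    {% block settings_page_content %}{% endblock %}

    {# child template for one settings page #}
    {% extends 'user_settings/base.html' %}
    {% block settings_page_content %}
      <h2>Subscription</h2>
    {% endblock %}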
14d77da47a store.blender.org → EXTERNAL_SUBSCRIPTIONS_MANAGEMENT_SERVER
I've only done this on the billing page, because there it's important
for debugging purposes to know which URL was actually checked to obtain
the subscription expiry information.
2018-01-25 14:06:13 +01:00
0908a13519 Avoid error when Store cannot be reached. 2018-01-25 14:06:13 +01:00
06414ab0ed Override for comments_for_node
Make use of the render_comments_for_node function, and define custom logic to determine the value of can_post_comments.
2018-01-20 00:45:57 +01:00
67ce16fc78 Limit video max-width in homepage blog list 2018-01-18 11:57:53 +01:00
d00c32310b Update stats URL to the new Kibana dashboard. 2018-01-12 14:03:42 +01:00
add20f0c6c Fixed indentation 2018-01-12 12:33:48 +01:00
dde590d388 gulp: Don't pass --production to Pillar's gulp when deploying 2018-01-12 12:24:39 +01:00
996beaf090 Bump version of elasticproxy to 1.2 2018-01-12 11:55:13 +01:00
37b84cf75a Upgraded Elasticsearch and Kibana to 6.1.1
Requires a reset + reindex of everything (well, that's the easiest way to
get things indexed properly again), which will lose us the Cloud stats.
Before doing this, export those to MongoDB and upgrade the statscollector
to the version that I'll be committing soon.
2018-01-12 11:55:13 +01:00
eb2a058ce2 Python 3.6.3 → 3.6.4 2018-01-12 11:55:13 +01:00
4891803552 Docker: never cache the base image when rebuilding 2018-01-12 11:55:13 +01:00
ab11f98331 Removed notifserv from docker-compose.yml 2018-01-12 11:55:13 +01:00
0ed03240e7 Made docker-compose.yml indentation consistent. 2018-01-12 11:55:07 +01:00
5c26756626 remove algolia 2018-01-12 11:54:39 +01:00
de6cdbaf19 Fixed broken networking with new docker-compose.yml
- No more 'links'; all containers can reach each other by name
- Added 'depends_on', which handles the startup sequence
- Allowed haproxy connection to the docker daemon socket
- Told haproxy explicitly which services to proxy. The 'docker:' prefix
  comes from the fact that the directory containing the docker-compose.yml
  file is called 'docker'.
2018-01-03 15:27:28 +01:00
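(A minimal sketch of the docker-compose pattern described in the commit above; the service names are examples only.)

    version: '3.4'
    services:
      haproxy:
        depends_on:
          - blender_cloud   # startup order; no 'links' needed, name resolution just works
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock   # access to the docker daemon socket
      blender_cloud:
        # other services reach this container simply as 'blender_cloud'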
e641565e6a Docker: Limit logging for celery worker 2017-12-22 11:58:47 +01:00
617b600ce8 Upgraded docker-compose.yaml file format from 1 to 3.4
This allows us to set logging options, which weren't available in version 1.
I've also added newlines around each service definition, and made the
formatting consistent across the entire file (using align-yaml, one of the
tools of the atom-beautify plugin for Atom).
2017-12-22 11:54:06 +01:00
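(For reference, the kind of per-service logging options the 3.x compose format allows; the service name and values below are illustrative, not the repository's actual settings.)

    services:
      celery_worker:
        logging:
          driver: json-file
          options:
            max-size: "200k"
            max-file: "10"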
4c2632669b PEP8 formatting 2017-12-21 15:26:32 +01:00
c2518e9ae1 Send welcome email to new Cloud subscribers 2017-12-21 15:26:23 +01:00
0b34c5c1c6 Also create user when member of organisation 2017-12-20 14:22:26 +01:00
fd68f3fc8b Allow user creation from Blender ID webhook "user modified"
When the webhook indicates that the user has a Cloud subscription (demo,
active, or renewable), the user is immediately created.
2017-12-20 12:56:48 +01:00
a2c24375e5 Webhooks: smarter way to find the user the webhook applies to.
We now also match on Blender ID (through the 'auth' section of the user
document), so that even when we have missed an email change we can still
apply the changes from the webhook.
2017-12-20 12:22:51 +01:00
16e378b7ad Removed unit test for unknown emails
This behaviour is going to change, and by splitting that change up into
two commits the diff makes much more sense.
2017-12-20 11:43:07 +01:00
be4b36a661 Also accept user-modified webhook when old email address is unknown.
When the old email address is unknown, and the new one does map to a user,
use the webhook to update the new user.
2017-12-20 11:40:56 +01:00
ae700b5eef Added payload description of Blender ID webhook 2017-12-20 11:19:15 +01:00
3712ad6ddc Added test for store.py 2017-12-19 11:09:45 +01:00
b2cad0fb9a Fixed mistake in capability check 2017-12-19 10:51:59 +01:00
5e15185166 HaProxy: Explicitly configure allowed TLS ciphers 2017-12-13 14:00:51 +01:00
2a35c3e157 Deploy script: notify Sentry instead of Bugsnag of the deploy 2017-12-13 12:50:45 +01:00
8a4f6c3649 Switch from macros to blocks for navigation menus
For more information, see a7693aa78dcf0a0a77e113f34afa63fb4f615441 in pillar.git
2017-12-13 11:13:47 +01:00
9c2e6f9081 Fixed forced login for user switching 2017-12-12 11:25:48 +01:00
be62a38cd5 Merge branch 'production' 2017-12-12 10:22:14 +01:00
501fb76c7e Revert "Revert "Removed flamenco-user and attract-user role linking to subscriber/demo/admin""
This reverts commit d46b5d645b.
2017-12-12 10:22:07 +01:00
d46b5d645b Revert "Removed flamenco-user and attract-user role linking to subscriber/demo/admin"
Temporarily reverting due to an issue with missing roles and permissions.

This reverts commit c031c1e8ae.
2017-12-08 17:42:35 +01:00
5efca08c13 Docker Apache: added svnman XSendFilePath 2017-12-08 17:06:56 +01:00
abb0ded3f2 Lowered log level for webhook call for unknown user 2017-12-08 14:08:04 +01:00
f948969b20 Fixed webhook so that users' full_name isn't reset to the empty string. 2017-12-08 14:04:11 +01:00
1ae090789b Deploy: added PermitLocalCommand=no to SSH command
I'm using a LocalCommand invocation to change the terminal colours when
I SSH into a production server; this shouldn't be done by the deploy
script.
2017-12-08 10:12:55 +01:00
c031c1e8ae Removed flamenco-user and attract-user role linking to subscriber/demo/admin
It was a hack due to the lack of a capabilities system.
2017-12-07 17:09:06 +01:00
00805c3372 Upgraded Python to 3.6.3 2017-12-07 14:02:35 +01:00
035382d1e5 docker: allow running 1_base/build.sh with --no-cache
This rebuilds the image from scratch, also pulling in security updates.
2017-12-07 14:02:20 +01:00
7dcc0d5ead Size-based Apache log rotation 2017-12-07 10:21:55 +01:00
bff51e1e83 Added /renew endpoint to redirect to Blender Store renewal URL. 2017-12-06 14:36:02 +01:00
561db0428d Using new blender_id.get_user_blenderid() function 2017-12-05 11:56:33 +01:00
f7ce8d74a7 Implemented Blender ID webhook for user change notifications
Handles changes in:

 - roles
 - full name
 - email address
2017-12-01 19:06:16 +01:00
532bf60041 Obtain subscription info from Store.
This was in Pillar, and is now moved to Blender Cloud. It's used to obtain
the subscription expiration date from Blender Store.
2017-11-30 15:45:40 +01:00
d5d189de8c Reworked subscription reconciliation to use Blender ID instead of Store 2017-11-30 15:30:41 +01:00
5d281dc6ce Introduce 'has_subscription' role with 'can-renew-subscription' capability
This is in preparation for presenting different messages to users when their
subscription should be renewed, or when they should buy a (new)
subscription.
2017-11-30 15:30:08 +01:00
efc18c5628 Open "this user on" links in new tab 2017-11-29 14:09:13 +01:00
127e422775 Updated Blender ID link to the new Blender ID 2017-11-29 14:05:39 +01:00
59f4edf192 Moved 'This user on Blender Store/ID' links from Pillar to Blender Cloud 2017-11-29 13:59:12 +01:00
5ea47b6f8f Fix navbar-toggle not visible on mobile 2017-11-28 16:06:11 +01:00
e27d8a3f08 Welcome: Replace all store URLs with a variable
Also link Hjalti's video to the actual video asset
2017-11-28 15:10:40 +01:00
c6407b4f7a Welcome page: Fix links to the store
And make some titles links as well
2017-11-28 14:44:54 +01:00
71bdff82b3 Home: In Production section for ongoing projects 2017-11-27 10:39:34 +01:00
98f695c54b Home: In Production section for ongoing projects 2017-11-24 19:36:43 +01:00
30fe3e1360 Welcome page: Explore should go to cloud.blender.org when logged in
Pointed out by Dr. Sybren
2017-11-24 16:47:32 +01:00
fd6c9ec02d Welcome page: Update prices 2017-11-23 18:31:57 +01:00
62dcd93584 Only load markdown.min.js if current_user has subscriber capabilities
Since commenting and node editing are the only places we use JS markdown now.
2017-11-23 16:51:31 +01:00
8fc9529752 Homepage: Re-use render_blog_post macro to display posts
No need to have custom code when we have a nice little macro for it
2017-11-23 16:17:38 +01:00
f4e277f1d1 Homepage: Render attachments on posts 2017-11-23 16:16:10 +01:00
22f5ea7b62 Fixed creation of image sharing group node 2017-11-21 15:16:38 +01:00
1dd3fd3623 Simplified code a bit 2017-11-21 15:16:22 +01:00
48dc3837b5 cloud_share_img.py: stop using my personal hardcoded IDs 2017-11-21 15:09:13 +01:00
c34ffb66db Added CLI script to share images to the Cloud.
Assumes that you are logged in on Blender ID with the Blender ID Add-on.

Haven't tested what happens when you've never shared any images yet.
2017-11-21 15:00:23 +01:00
59d165b6d0 Subscription refresh: refresh the page on a 403
A 403 indicates the session isn't valid any more, and a window reload will
fix that (either by automatically logging in via Blender ID or by
providing a login prompt).
2017-11-17 12:13:25 +01:00
9dfe75ed1e Include pillar-svnman in docker image 2017-11-14 16:28:52 +01:00
494f307900 Fixed deploy script for SVNMan 2017-11-14 16:25:03 +01:00
59dd3352cd Added svnman to deploy.sh 2017-11-14 16:13:03 +01:00
4207d051a9 Added subscriber-pro role.
This is used in svnman to bind to the svn-use capability.
2017-11-14 16:09:29 +01:00
f1d990c03d Added SVNMan extension 2017-11-14 16:09:29 +01:00
08f56877ca Homepage: Small cleanup of classes and more space between posts
Also fix selectors for comments/assets
2017-11-10 17:35:19 +01:00
0c4db84940 Fix link to comments 2017-11-09 19:38:56 +01:00
5be0f3baf0 Update cache for CSS/JS 2017-11-09 18:59:44 +01:00
8288b4eab1 New Homepage: Featuring more in the blog posts 2017-11-09 18:37:32 +01:00
a34f3435c1 gitignore: ignore cloud/static/assets/css 2017-11-09 17:15:26 +01:00
ba9a2eb134 gitignore: absolute path for cloud/templates 2017-11-09 17:15:16 +01:00
73a0de3157 Merge branch 'master' of git.blender.org:blender-cloud 2017-11-08 16:25:12 +01:00
bcab6ac5b7 Stats files are no longer needed 2017-11-08 16:24:58 +01:00
c2492e8710 When logging in from /welcome, redirect to index
Backend logic simplified: use the next arg if provided, otherwise redirect to request.referrer (the last visited page).
2017-11-07 18:25:42 +01:00
08b388f363 Display Log in on button only if current_user.is_anonymous 2017-11-07 18:25:42 +01:00
470fdf89f4 Add alt tag to images in welcome page 2017-11-07 18:25:42 +01:00
c00121d71c Introducing main.sass stylesheet
Blender Cloud used main.css from Pillar

As we try to strip Blender Cloud-specific content from Pillar,
this commit brings three .sass files over to this repository.

There are still plenty of Blender Cloud classes all over Pillar,
they will be brought here over time.
2017-11-07 16:56:45 +01:00
97dff66159 Add rewrite rule for new project 2017-10-25 15:52:33 +02:00
047737ef41 New block in the navigation menu for user specific entries
Called "navigation_user", used at the moment by the
/welcome page for turning "Login" into "Login and Explore"

Also added the block name to {% endblock %} where it makes sense.
There is no functional change, just easier to read.
2017-10-25 15:43:16 +02:00
5eaa202d49 Welcome Page: Fix count of awesome people (Cloud subscribers) 2017-10-23 15:34:31 +02:00
a9788e70c9 Welcome Page: Add links to titles and images 2017-10-23 15:34:00 +02:00
e19a38959e Merge branch 'production' 2017-10-19 14:59:38 +02:00
0c4fbcf65d Refresh the /welcome page 2017-10-18 20:06:34 +02:00
e86cc9df77 Remove deprecated classes 2017-10-18 20:06:11 +02:00
cb08c113af Fix missing background on /about page
Fixes T52867
2017-10-17 15:45:22 +02:00
5d26e3940e Bugsnag app revision: use timestamp instead of Git hash
This turns the 'revision' we send to Bugsnag into an incremental number
that denotes the current time. This is more indicative of the application
version than the git revision of the Blender Cloud repository, as the
latter doesn't include any changes in the other repositories.
2017-10-17 12:32:15 +02:00
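In a deploy script, such a timestamp-based revision can be produced with a plain `date` call; the variable name below is illustrative, not the actual deploy code.
```
# Use the current Unix time as the revision reported to Bugsnag,
# instead of the git hash of this repository alone.
APP_REVISION="$(date +%s)"
echo "Deploying revision ${APP_REVISION}"
```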
5dc4b398c2 updateTitle is no longer needed since we now have DocumentTitleAPI
taking care of titles in 0_navbar.js (part of tutti.min.js)
2017-10-05 15:33:50 +02:00
c25aae82b5 Update title when notifications count has been updated 2017-10-02 19:55:08 +02:00
14d20edbb7 Gulp: converted package.json indentation to tabs 2017-09-28 15:42:50 +02:00
2c4527c4d6 Gulp: added 'cleanup' task that erases all gulp-generated files.
This runs automatically when using --production
2017-09-28 15:37:33 +02:00
3a7f5f7a7d Gulp: replaced hardcoded paths with variables. 2017-09-28 15:37:13 +02:00
ec99b3207a Gulp: fixed license expression 2017-09-28 15:36:42 +02:00
ac7dd4c60a Gulp: fixed project name and repo URL 2017-09-28 15:36:17 +02:00
e45976d4e5 Added ElasticProxy to the docker/README.md file. 2017-09-26 12:27:36 +02:00
565e5e95a5 Disabled web frontend for Grafista.
It still collects daily statistics, until we're 100% sure our own
pillar-statscollector is doing its job correctly.
2017-09-26 12:27:24 +02:00
c2fff6a9cb Removed -verbose from elasticproxy CLI 2017-09-26 11:58:58 +02:00
fa249d1952 Kibana: include config file that doesn't refer to xpack
The default Kibana config file still has an option to enable X-pack, which
is now removed.
2017-09-26 11:53:22 +02:00
d02de8e18b Disable Dev Tools panel. 2017-09-26 11:47:09 +02:00
c6d645f664 Removed old X-pack options
These aren't necessary any more now that we use an image with X-pack.
2017-09-26 11:46:59 +02:00
d90794a4f0 Put elasticproxy in between Kibana and ElasticSearch
This blocks any data-changing HTTP request from reaching ElasticSearch.
2017-09-26 11:27:45 +02:00
13ed89c480 Add terms and conditions and privacy statement 2017-09-21 22:17:52 +02:00
ab41f3afcd Spaces to tabs in pug file 2017-09-21 22:17:05 +02:00
5128675f55 Homepage Featured Project: summary link to project 2017-09-21 19:13:50 +02:00
5d0b8a1dc4 Added static page that embeds stats from Kibana.
It just has a full-width iframe that embeds the dashboard from
https://stats.cloud.blender.org/
2017-09-21 14:25:14 +02:00
82a4bcd185 Allow letsencrypt to run on stats.cloud.blender.org 2017-09-21 13:50:44 +02:00
4432e5b645 Added note in docker/README about permissions on /data/storage/elasticsearch 2017-09-21 13:50:31 +02:00
5948ee7c6f Kibana: more frequent garbage collection to limit memory usage
See https://github.com/elastic/kibana/issues/5170#issuecomment-163042525
2017-09-21 11:52:22 +02:00
5b5005f33e Oops 2017-09-21 11:46:01 +02:00
98177df7bd Elastic: moved memory limit to environment variable
This allows us to easily change it without rebuilding the Docker image.
2017-09-21 11:38:17 +02:00
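For ElasticSearch 5.x the JVM heap is controlled via the `ES_JAVA_OPTS` environment variable; the image name and heap values below are placeholders, not the production settings.
```
# Pass the memory limit at container start time, so changing it
# does not require rebuilding the Docker image.
docker run -d --name elasticsearch \
  -e ES_JAVA_OPTS="-Xms512m -Xmx512m" \
  armadillica/elasticsearch
```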
8e6fc604e3 Build custom images for ElasticSearch & Kibana
We can then remove X-Pack and control ElasticSearch's memory usage.

This also gives us the opportunity to let Kibana do its optimization when
we build the image, rather than every time the container is recreated.
2017-09-21 11:28:16 +02:00
2e2fc791e1 Added Kibana to the docker-compose stack 2017-09-20 16:58:18 +02:00
00eb6d8685 Upgraded ElasticSearch 5.6.0 → 5.6.1 2017-09-19 13:48:53 +02:00
41d47cdeb8 PEP8 formatting 2017-09-19 13:45:48 +02:00
19ccffa4ae Cache the homepage template context.
This requires pillar-python-sdk c8eec9fa9d8a198df198538a38ca1ad2367bb3e6
or newer.
2017-09-19 13:45:10 +02:00
aaf95f96a7 Fixed user switching.
Basically this copies bd976e6c2e7867fee70c8654cc887bf1d3973bc1 from Pillar.
2017-09-19 13:39:41 +02:00
9034c36564 Added ElasticSearch docker container.
So far it's just a standalone docker container, as there is no publicly
accessible Kibana container yet. To use, just SSH-tunnel port 9200.
2017-09-18 17:38:55 +02:00
f3e0484328 Revert "Cache the entire homepage for 5 minutes."
This reverts commit 9a692d475b.
2017-09-18 13:05:29 +02:00
f74e05229c Replace pillar-web with blender-cloud, and add HTTPS support
Other vhosts are already configured to use the 'blender-cloud' hostname,
and now the main one is too. It also adds HTTPS support, so that you can
test locally without having to set FORCE_SSL to false. This does require
you to create a TLS certificate in /data/certs/blender-cloud.pem, using:

openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -days 365 -nodes
cat key.pem cert.pem > blender-cloud.pem
rm key.pem cert.pem
2017-09-18 12:21:33 +02:00
6d9192f0ab Upgrade Python 3.6.1 → 3.6.2 2017-09-18 12:17:50 +02:00
869e228069 Homepage Random Featured: Show project more prominently for first item 2017-09-17 20:12:47 +02:00
9a692d475b Cache the entire homepage for 5 minutes.
This means that comments and blog posts will take at most 5 minutes to show
up on /.
2017-09-15 17:46:20 +02:00
0b61e517d2 Added missing attached_to.url assignment 2017-09-15 17:03:31 +02:00
0c251c0470 Added missing projection 2017-09-15 15:46:52 +02:00
e103a712b3 Performance improvements for the homepage (activity, blog, random featured) 2017-09-15 15:25:27 +02:00
a7889214cf Locally hosting the Roboto font.
This removes the loadCSS() call, which uses setTimer() to repeatedly do
stuff while other stuff is loading. This saves quite a bit of CPU time
spent on JavaScript.
2017-09-15 11:45:09 +02:00
b554756f54 HaProxy: enable gzip compression for certain content types. 2017-09-15 11:03:28 +02:00
ea8ef9c8bb Restart Celery Beat docker container after deploy 2017-09-14 17:21:24 +02:00
f2ad7b8868 Bumped thread count to 32, lowered proc count to 2
We have 2 CPUs at our DigitalOcean droplet, hence the proc count.
2017-09-14 17:17:45 +02:00
22b8ed0bc0 Added "celery beat" docker container 2017-09-14 15:58:03 +02:00
8df67bfeb5 After building the image, tell the user the image name
This makes pushing easier, as you can copy-paste the name into the push
command.
2017-09-12 12:10:32 +02:00
f4ac9635a9 Fixed two mistakes in stats query count_nodes()
- 'p.x' → 'project.x'
- is_private: {$ne: true} → is_private: False, because true is the default.
2017-09-12 11:29:36 +02:00
ea0f88b4b8 rsync_ui: delete old files when rsyncing.
This is especially necessary for custom node templates, as those are used
when they exist, with fallback to generic templates. When those files are
removed from the repository, they should be removed from the production
server too.
2017-09-07 15:48:38 +02:00
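In practice this boils down to something like the following; the paths and hostname are placeholders, not the actual rsync_ui.sh contents.
```
# --delete removes files on the server that no longer exist locally,
# e.g. custom node templates that were deleted from the repository.
rsync -av --delete cloud/templates/ cloudserver:/path/to/templates/
```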
020f70b2cb rsync_ui: nicer error message 2017-09-07 15:47:42 +02:00
2bac3997f6 Link to Blender Cloud add-on (and not the obsolete bundle link) 2017-09-05 11:32:24 +02:00
ff8e338f86 Don't include raw & chars in HTML 2017-09-05 11:31:39 +02:00
d80fc48635 Redirect properly after login
- Use next=xxx URL parameter
- Redirect to / when no next=xxx was given and the referrer is /welcome
2017-09-01 17:04:41 +02:00
9a49cb536b Run ./gulp script when deploying, rather than running gulp directly
This ensures dependencies are rebuilt when they have changed.
2017-09-01 11:29:42 +02:00
a9280dd05f Introducing settings for blender-cloud
Moved Blender Cloud specific settings from Pillar into this application.
2017-08-30 23:13:06 +02:00
0b2583c22e Layout: Headers backdrops are no longer used 2017-08-30 15:42:03 +02:00
835851c1d7 Gulp: Fix livereload 2017-08-30 15:06:09 +02:00
1fb3c88a8f Override /login endpoint
For Blender Cloud we want to point directly to blender-id OAuth.
2017-08-25 11:56:43 +02:00
732fe7bc7c Fixed calls to do_badger to use keyword arguments for role(s)
- updated call to pass multiple roles at once
- updated call to pass single role with keyword arg

This requires Pillar 87afbc52f6c7d5eb63f0da745625d05b01e81783
2017-08-23 09:32:01 +02:00
838d85a2b7 Use capabilities system to check whether the user is a subscriber. 2017-08-22 16:40:05 +02:00
2e4ed6f3c8 Also allow demo users access to Flamenco & Attract 2017-08-15 15:25:53 +02:00
fd600c9fa0 Fix for broken url_for 2017-07-27 09:30:09 +02:00
7f65b77824 New redirect from /training to /courses 2017-07-26 16:51:35 +02:00
4575e8bf8b Public project changes
Now we organize training content into 'courses' and 'workshops'. This commit updates various endpoints and menus to reflect that decision.
2017-07-26 16:51:16 +02:00
fe408c3151 Moved projects index_collection from Pillar 2017-07-26 16:49:40 +02:00
9bf6a36bdd Return cached view for welcome page only if user is anonymous 2017-07-16 00:06:18 +02:00
560c13d3a8 Add node_modules to .gitignore 2017-07-16 00:05:35 +02:00
f33bb946ac Fix for static path 2017-07-14 12:57:58 +02:00
bda679a3bf Remove about.html
This is built with Gulp from a .pug file now.
2017-07-14 12:45:35 +02:00
0ea8b02920 Tweaks to rsync_ui.sh
Move Blender Cloud checks before Pillar deploy commands. Also add prefixes to distinguish between Pillar and Cloud assets.
2017-07-14 12:44:32 +02:00
ae33a6f71e Moving Blender Cloud specific pages from Pillar
These pages were originally in the pillar repository, but they actually belong here. We also extended rsync_ui.sh to build and rsync to the Blender Cloud server.
2017-07-13 18:35:18 +02:00
f6df37ec24 Introducing the gulp command
From now on we work with .pug files for templates.
2017-07-13 18:26:54 +02:00
fca78faca0 Replace blurry image 2017-07-13 12:46:48 +02:00
c648fe0ca5 Thumbnails for Services page 2017-07-11 18:45:46 +02:00
b155b0916e CLI reconcile_users: process 10 users in parallel 2017-07-11 14:57:57 +02:00
f493d0a566 CLI reconcile_subscribers: show user count + index 2017-07-11 14:44:00 +02:00
093d80905f CLI reconcile_subscribers: show user count before starting 2017-07-11 14:40:46 +02:00
54e9aca16f CLI reconcile_subscribers uses do_badger & re-grants the subscriber role
This ensures that subscriber-linked roles like flamenco-user are synced
along with the subscriber status, and that Algolia picks up on those
changes as well.
2017-07-11 14:29:14 +02:00
3e5ad280b1 CLI command to create standard groups admin/demo/subscriber
This is part of what setup_db also does, but only for creating those
groups. Only really useful for debugging/developing.
2017-07-11 14:27:33 +02:00
0a9f0ebddb Roles '{flamenco,attract}-user' are now linked to 'subscriber'
All users who get 'subscriber' role automatically get 'flamenco-user' and
'attract-user', and when 'subscriber' is revoked, so are
'{flamenco,attract}-user'.
2017-07-11 12:40:13 +02:00
a43e84f93e Made script executable 2017-07-07 12:03:18 +02:00
56af1cd96c Added renewal script for Let's Encrypt 2017-07-07 12:02:49 +02:00
69862b8416 Docker: removed blender-cloud/.well-known from letsencrypt
That domain is handled by a different host.
2017-07-07 11:40:27 +02:00
b9a7f501ba Docker: Dropping support for cloudapi.blender.org 2017-07-07 11:40:25 +02:00
6fc77522ea Removed localhost declaration 2017-07-07 11:24:14 +02:00
538a10af60 Added docker container for serving letsencrypt webroot. 2017-07-07 11:23:38 +02:00
fd700ca6d9 Inject current_user_is_subscriber into Jinja context
This variable is set to True iff the user has a subscriber/demo/admin role.
It's an attempt to get some of the Cloud subscription specific stuff out
of Pillar.
2017-06-14 16:26:29 +02:00
e69ac4c5b9 Restart Celery worker after deploying new version. 2017-06-08 11:55:52 +02:00
a62d1e3d7d Updated celery-worker.sh to actually start the Celery worker. 2017-06-07 17:19:02 +02:00
d56e1afe73 Upgraded docker images Python 3.6.0 → 3.6.1 2017-06-07 13:22:57 +02:00
8b110df32a Convert all Redirects to RewriteRules in Apache conf
This allows for more precise redirection, using the correct code (301 Moved Permanently) and the L flag (last rule, preventing any further evaluation).
2017-06-02 11:18:57 +02:00
9da841efc3 Added Celery worker docker container
This docker container uses the Blender Cloud image, but a different entry
point. It is not intended to be network-reachable from the outside world.
All it needs are connections to the databases (mongo, redis, rabbit).
2017-06-02 10:55:18 +02:00
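Conceptually the worker container can be started from the same image with a different entry point; the network name and entry-point path below are assumptions for illustration, not the real docker-compose definition.
```
# Same image as the web app, different entry point; no ports are
# published, it only needs to reach mongo, redis and rabbit internally.
docker run -d --name celery_worker \
  --network blendercloud_default \
  --entrypoint /celery-worker.sh \
  armadillica/blender_cloud
```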
c73efd5998 Added RabbitMQ, for serving as Celery broker. 2017-06-02 10:55:18 +02:00
93edfb789f Add Blender Cloud specific redirects
Originally hardcoded in Pillar, these redirects are rarely changed or added.
2017-06-01 17:27:00 +02:00
f0e53245a6 New images for the homepage 2017-05-22 16:01:35 +02:00
c775126cba Docker startup: create ${APACHE_LOG_DIR} if it doesn't exist yet 2017-05-17 10:34:13 +02:00
39ce21f7f9 Copy /lib/systemd/system/docker.service to /etc/systemd/system/docker.service
This allows later upgrading of docker without overwriting the changes
indicated in the README.
2017-05-17 10:12:22 +02:00
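The copy itself is a one-liner, followed by the usual systemd reload so the unit in /etc/systemd/system takes effect.
```
sudo cp /lib/systemd/system/docker.service /etc/systemd/system/docker.service
# edit /etc/systemd/system/docker.service as described in the README, then:
sudo systemctl daemon-reload
sudo systemctl restart docker
```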
3308018887 Docker: replaced "latest-py36" with "latest" in README.md 2017-05-17 10:11:49 +02:00
b93448f682 Start cron when starting the Cloud docker
This will regularly run logrotate, preventing the build-up of log files
of multiple gigabytes.
2017-05-17 10:08:27 +02:00
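A sketch of what starting cron typically looks like in a Debian/Ubuntu-based entry point; the actual entrypoint script is not shown here.
```
# Started alongside Apache so that cron can run logrotate on schedule.
service cron start
```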
2892a46a27 Docker: store Cloud container /var/log on host in /data/log 2017-05-17 10:07:40 +02:00
e2efc70e44 Import new subscription info function 2017-05-04 18:17:37 +02:00
bd9265e4f1 Tag blender_cloud Docker image as latest 2017-04-11 12:52:44 +02:00
57aeea337b Switch from id --group to id -g
This makes the script compatible with macOS.
2017-04-11 12:34:07 +02:00
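For reference, the two spellings of the same call:
```
id --group   # GNU coreutils only
id -g        # POSIX form, also works on macOS
```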
5c3d76c660 Added echo command to deploy.sh
We had a problem where the 'docker exec' command would hang, and having
an explicit "I'm now doing this thing" in the console helps analyse this.
2017-04-07 12:27:30 +02:00
9b0e996c10 Background images for Agent 327 and Andy's Waking the Forest 2017-03-22 21:57:12 +01:00
cad77f001c Fix start date of Blender Cloud in the About page 2017-03-18 12:47:08 +01:00
4c149b0d24 Use -W instead of -w for ping in deploy.sh
Use the waittime argument instead of deadline, since the latter is not supported on macOS.
2017-03-17 12:02:05 +01:00
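A sketch of the reachability check with the per-packet wait time; the host variable and timeout value are illustrative, and note that -W is interpreted in seconds on Linux but in milliseconds on macOS.
```
# -w (deadline) is not available on macOS; -W (per-packet wait) is.
ping -q -c 1 -W 2 "$host" > /dev/null || {
  echo "Host $host is not reachable, aborting." >&2
  exit 1
}
```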
28f9ec5ffa Support the before query for user count and Blender Sync users 2017-03-17 11:41:44 +01:00
56b5c89edf Fix missing cast for the before var 2017-03-17 11:41:00 +01:00
7472cf2b16 Properly count Blender Sync users
Use a dedicated aggregation pipeline to filter one startup.blend per home project and count them.
2017-03-17 10:46:29 +01:00
e09c73ec39 Introducing endpoint for stats
This endpoint can be queried on a daily basis to retrieve cloud usage stats. For assets and comments we take into consideration only those that belong to public projects.
2017-03-17 09:40:50 +01:00
59ffebd33e Mount grafista storage volume in the correct path 2017-03-12 12:38:59 +01:00
e5e05bf086 Add cronjob for grafista data collection 2017-03-12 12:15:44 +01:00
3e9e075b1f Use production branch when checking out repos
Also added grafista repo to the list.
2017-03-12 12:15:21 +01:00
5c01c6ec3f Fixed path error in runserver.wsgi + no more hardcoded path. 2017-03-10 16:52:16 +01:00
343398a813 Improve image quality pictures in About page 2017-03-10 16:45:33 +01:00
88ff479140 New photo for Sybren 2017-03-10 16:22:54 +01:00
34affb0eac Removed executable permission from cli.py 2017-03-10 16:16:36 +01:00
17dfdc1825 Removed executable permission from images 2017-03-10 16:14:23 +01:00
c9aaebb370 Add images to about page 2017-03-10 16:04:42 +01:00
1d687d5c01 Introducing Cloud extension
We use a Pillar extension to register Blender Cloud specific endpoints.
2017-03-10 15:36:55 +01:00
2f1a32178a Explicitly pass hostname to deploy script.
The script now also pings the hostname to deploy to, to see if it's alive
before doing anything else.
2017-03-10 14:24:22 +01:00
6cfe00c3ca Docker-compose: pinning new version of haproxy
This is actually the one we used in production.
2017-03-10 11:16:28 +01:00
727707e611 Allow deploying to either production or staging.
Requires that you set up 'cloud2' as a hostname for the staging server.
2017-03-10 09:55:04 +01:00
6e1425ab25 Docker: removed superfluous ; 2017-03-10 09:54:27 +01:00
85f3f19c34 Added some more deployment documentation 2017-03-09 15:45:51 +01:00
557ce1b922 Docker: add useful tail /var/log/apache2/access.log to bash_history 2017-03-09 15:30:45 +01:00
d2d04b2398 Docker: added missing libraries for JPEG and PNG support in Pillow. 2017-03-09 15:30:32 +01:00
e31b3cf8b4 Docker: pin specific versions for images, for reproducible deploys. 2017-03-09 11:02:04 +01:00
85c2b1bcd6 Docker: stop on errors in 3_buildwheels/build.sh 2017-03-09 11:01:36 +01:00
79b8194b2a Docker: exec single commands
This replaces bash with the docker command, freeing memory and
automatically returning the exit code of the docker command as the exit
code of the shell script.
2017-03-09 11:01:24 +01:00
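The pattern looks roughly like this; the container name and command are placeholders. Because `exec` replaces the shell with the docker process, no extra shell stays resident and the script's exit code is exactly docker's exit code.
```
#!/usr/bin/env bash
set -e
# ...preparation steps...
exec docker exec blender_cloud echo "placeholder command"
```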
06cc338b08 Docker: always apt-get update before apt-get install 2017-03-09 11:00:21 +01:00
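The pattern referred to above, with a placeholder package name:
```
# Updating and installing in one step avoids installing from a stale
# package index (and, in a Dockerfile, from a stale cached layer).
apt-get update && apt-get install -y --no-install-recommends rsync
```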
b0ab696e49 Started documenting steps to set up a production machine from scratch. 2017-03-08 17:23:00 +01:00
30c9cfd538 Use armadillica/blender_cloud:latest-py36 in this branch 2017-03-08 17:18:52 +01:00
3af92b4436 Don't install subpackages as editable in requirements.txt
Doing this would require editable (and thus writable) checkouts, which
we don't have on our production machines.
2017-03-08 17:17:53 +01:00
2f6049edee Docker images: renamed pillar_py:3.6 to armadillica/pillar_py:3.6
This allows us to push the Python image to Docker Hub.
2017-03-08 13:55:02 +01:00
71a1a69f16 Updated paths for XSendFilePath
Now that we use egg links, and not symlinks, to install our packages,
we can use the actual paths.
2017-03-08 13:02:36 +01:00
fab68aa802 Removed virtualenv from manage.sh, and using exec 2017-03-08 12:38:16 +01:00
e27f5b7cec Docker-compose: use /data/git as one volume, instead of mapping all subdirs 2017-03-08 12:38:00 +01:00
d42762b457 Cleaned up runserver.wsgi to not depend on flup
It's not necessary; we no longer install it anyway.
2017-03-08 12:37:26 +01:00
a332f627a4 Tweaked docker-entrypoint.sh to properly install packages. 2017-03-08 12:36:06 +01:00
9fe7d7fd2b WIP: More docker tweaks 2017-03-08 12:35:35 +01:00
6adf45a94a Be more selective in what we install on the production docker image. 2017-03-08 12:34:48 +01:00
e443885460 Create links python and pip to python3 and pip3. 2017-03-08 12:33:15 +01:00
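A minimal sketch of those links, assuming the Python built in /opt/python mentioned elsewhere in this log:
```
ln -s /opt/python/bin/python3 /opt/python/bin/python
ln -s /opt/python/bin/pip3 /opt/python/bin/pip
```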
e086862567 WIP: building mod_wsgi against Python 3.6
The module is included in the built Python directory, in
/opt/python/mod-wsgi/mod_wsgi.so
2017-03-08 12:32:54 +01:00
5ad9f275ef Strip Python install, saves roughly 90 MB in final image size. 2017-03-07 23:01:19 +01:00
faf38dea7e Uncommented some accidentally commented-out stuff 2017-03-07 23:00:10 +01:00
d7e4995cfa WIP: more work on the docker structure, still not finished with 4_run
1_base: builds a base image, based on Ubuntu 16.10
2_buildpy: builds two images:
	2a: an image that can build Python 3.6
	2b: an image that contains the built Python 3.6 in /opt/python
3_buildwheels: builds an image to build wheel files, puts them in ../4_run
4_run: the production runtime image, which can't build anything and just runs.
2017-03-07 22:41:05 +01:00
af14910fa9 WIP breaking stuff: updating docker image build process for Python 3.6
This requires a new way to pass requirements.txt files to Docker (since
they now link to each other), as well as building Python ourselves (since
even Ubuntu 16.10 doesn't have a decent Python 3.6).

This is just a WIP commit, will be fixed soon(ish).
2017-03-07 16:51:51 +01:00
b6f729f35e Added requirements-dev.txt
It just links to the requirements-dev.txt files of the subprojects.
2017-03-07 14:21:15 +01:00
d3427bb73a Updated README for Python 3.6
Also mentioned Flamenco, made the "mkdir" command a bit more efficient, and used Python 3's pip.
2017-03-07 13:59:49 +01:00
039983dc00 README: added missing slashes to URLs 2017-03-07 13:57:41 +01:00
4a148f9163 Re-enabled Flamenco, as it seems to be working on Py36 2017-03-03 17:37:13 +01:00
df137c3258 Re-enabled Attract, it seems to work on Py3.6 2017-03-03 17:00:02 +01:00
3b239130d8 Removed all requirements; referring to other requirements.txt files
Also using -e to install required packages that aren't pip-installable.
2017-03-03 14:41:14 +01:00
d4984c495e Python 3.6: removed unnecessary __future__ import 2017-03-03 14:40:35 +01:00
566c89d745 Disabled Flamenco and Attract, until they are also ported to Python 3.6 2017-03-03 14:40:01 +01:00
227 changed files with 18581 additions and 612 deletions

.gitignore (vendored, 14 changed lines)

@@ -3,6 +3,12 @@
.coverage
*.pyc
__pycache__
*.js.map
*.css.map
/cloud/templates/
/cloud/static/assets/
node_modules/
/config_local.py
@@ -12,4 +18,10 @@ __pycache__
/.eggs/
/dump/
/google_app*.json
/docker/3_run/wheelhouse/
/docker/2_buildpy/python/
/docker/4_run/wheelhouse/
/docker/4_run/deploy/
/docker/4_run/staging/
/celerybeat-schedule.bak
/celerybeat-schedule.dat
/celerybeat-schedule.dir

README.md (140 changed lines)

@@ -1,43 +1,102 @@
# Blender Cloud
Welcome to the [Blender Cloud](https://cloud.blender.org) code repo!
Blender Cloud runs on the [Pillar](https://pillarframework.org) framework.
Welcome to the [Blender Cloud](https://cloud.blender.org/) code repo!
Blender Cloud runs on the [Pillar](https://pillarframework.org/) framework.
## Quick setup
Set up a node with these commands. Note that this script is already outdated...
## Development setup
Jumpstart Blender Cloud development with this simple guide.
### System setup
Blender Cloud relies on a number of services in order to run. Check out the [Pillar system setup](
https://pillarframework.org/development/system_setup/#step-by-step-setup) to set this up.
Add `cloud.local` to the `/etc/hosts` file on localhost. This is a development convention.
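For example, assuming the standard hosts file location:
```
echo "127.0.0.1  cloud.local" | sudo tee -a /etc/hosts
```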
### Check out the code
Go to the local development directory and check out the following repositories, next to each other.
```
#!/usr/bin/env bash
mkdir -p /data/git
mkdir -p /data/storage
mkdir -p /data/config
mkdir -p /data/certs
sudo apt-get update
sudo apt-get -y install python-pip
pip install docker-compose
cd /data/git
cd /home/guest/Developer
git clone git://git.blender.org/pillar-python-sdk.git
git clone git://git.blender.org/pillar-server.git pillar
git clone git://git.blender.org/pillar.git
git clone git://git.blender.org/attract.git
git clone git://git.blender.org/flamenco.git
git clone git://git.blender.org/pillar-svnman.git
git clone git://git.blender.org/blender-cloud.git
```
## Deploying to production server
### Initial setup and configuration
First of all, add those aliases to the `[alias]` section of your `~/.gitconfig`
Create a virtualenv for the project and install the requirements. Dependencies are managed via
[Poetry](https://poetry.eustace.io/). Install it using `pip install -U --user poetry`.
```
cd blender-cloud
pip install --user -U poetry
poetry install
```
NOTE: After a dependency has changed its own dependencies (say a new library was added as a dependency of
Pillar), you need to run `poetry update`. This will take the new dependencies into account and write
them to the `poetry.lock` file.
Build assets and templates for all Blender Cloud dependencies using Gulp.
```
./gulp all
```
Make a copy of the config_local example, which will be further edited as the application is
configured.
```
cp config_local.example.py config_local.py
```
Set up the database with the initial collections and the admin user.
```
poetry run ./manage.py setup setup_db your_email
```
The command will return the following message:
```
Created project <project_id> for user <user_id>
```
Copy the value of `<project_id>` and assign it as value for `MAIN_PROJECT_ID`.
Run the application:
```
poetry run ./manage.py runserver
```
## Development
When ready to commit, change the remotes to work over SSH. For example:
`git remote set-url origin git@git.blender.org:blender-cloud.git`
For more information, check out [this guide](https://wiki.blender.org/wiki/Tools/Git#Commit_Access).
## Preparing the production branch for deployment
All revisions to deploy to production should be on the `production` branches of all the relevant
repositories.
Make sure you have these aliases in the `[alias]` section of your `~/.gitconfig`:
```
prod = "!git checkout production && git fetch origin production && gitk --all"
ff = "merge --ff-only"
pp = "!git push && if [ -e deploy.sh ]; then ./deploy.sh; fi && git checkout master"
```
The following commands should be executed for each subproject; specifically for
Pillar and Attract:
The following commands should be executed for each (sub)project; specifically for
the current repository, Pillar, Attract, Flamenco, and Pillar-SVNMan:
```
cd $projectdir
@@ -48,7 +107,7 @@ git stash # if you still have local stuff.
# pull from master, run unittests, push your changes to master.
git pull
py.test
poetry run py.test
git push
# Switch to production branch, and investigate the situation.
@@ -58,37 +117,12 @@ git prod
git ff master
# Run tests again
py.test
poetry run py.test
# Push the production branch and run dummy deploy script.
git pp # pp = "Push to Production"
# The above alias waits for [ENTER] until all deploys are done.
# Let it wait, perform the other commands in another terminal.
# Push the production branch.
git push
```
Now follow the above recipe on the Blender Cloud project as well.
Unlike the subprojects, `git pp` will actually perform the deploy
for real.
## Deploying to production server
Now you can press `[ENTER]` in the Pillar and Attract terminals that
were still waiting for it.
After everything is done, your (sub)projects should all be back on
the master branch.
## Updating dependencies via Docker images
To update dependencies that need compiling, you need the `2_build` docker
container. To rebuild the lot, run `docker/build.sh`.
Follow these steps to deploy the new container on production:
1. run `docker/build.sh`
2. `docker push armadillica/blender_cloud`
On the production machine:
1. `docker pull armadillica/blender_cloud`
2. `docker-compose up -d` (from the `/data/git/blender-cloud/docker` directory)
See [deploy/README.md](deploy/README.md).

cloud/__init__.py (new file, 155 lines)

@@ -0,0 +1,155 @@
import logging

import flask
from werkzeug.local import LocalProxy

import pillarsdk
import pillar.auth
from pillar.api.utils import authorization
from pillar.extension import PillarExtension

EXTENSION_NAME = 'cloud'


class CloudExtension(PillarExtension):
    has_context_processor = True
    user_roles = {'subscriber-pro', 'has_subscription'}
    user_roles_indexable = {'subscriber-pro', 'has_subscription'}

    user_caps = {
        'has_subscription': {'can-renew-subscription'},
    }

    def __init__(self):
        self._log = logging.getLogger('%s.CloudExtension' % __name__)

    @property
    def name(self):
        return EXTENSION_NAME

    def flask_config(self):
        """Returns extension-specific defaults for the Flask configuration.

        Use this to set sensible default values for configuration settings
        introduced by the extension.

        :rtype: dict
        """
        # Just so that it registers the management commands.
        from . import cli

        return {
            'EXTERNAL_SUBSCRIPTIONS_MANAGEMENT_SERVER': 'https://store.blender.org/api/',
            'EXTERNAL_SUBSCRIPTIONS_TIMEOUT_SECS': 10,
            'BLENDER_ID_WEBHOOK_USER_CHANGED_SECRET': 'oos9wah1Zoa0Yau6ahThohleiChephoi',
            'NODE_TAGS': ['animation', 'modeling', 'rigging', 'sculpting', 'shading', 'texturing', 'lighting',
                          'character-pipeline', 'effects', 'video-editing', 'digital-painting', 'production-design',
                          'walk-through'],
        }

    def eve_settings(self):
        """Returns extensions to the Eve settings.

        Currently only the DOMAIN key is used to insert new resources into
        Eve's configuration.

        :rtype: dict
        """
        return {}

    def blueprints(self):
        """Returns the list of top-level blueprints for the extension.

        These blueprints will be mounted at the url prefix given to
        app.load_extension().

        :rtype: list of flask.Blueprint objects.
        """
        from . import routes
        import cloud.stats.routes

        return [
            routes.blueprint,
            cloud.stats.routes.blueprint,
        ]

    @property
    def template_path(self):
        import os.path
        return os.path.join(os.path.dirname(__file__), 'templates')

    @property
    def static_path(self):
        import os.path
        return os.path.join(os.path.dirname(__file__), 'static')

    def context_processor(self):
        return {
            'current_user_is_subscriber': authorization.user_has_cap('subscriber')
        }

    def is_cloud_project(self, project):
        """Returns whether the project is set up for Blender Cloud.

        Requires the presence of the 'cloud' key in extension_props.
        """
        if project.extension_props is None:
            # There are no extension_props on this project
            return False

        try:
            pprops = project.extension_props[EXTENSION_NAME]
        except AttributeError:
            self._log.warning("is_cloud_project: Project url=%r doesn't have any "
                              "extension properties.", project['url'])
            if self._log.isEnabledFor(logging.DEBUG):
                import pprint
                self._log.debug('Project: %s', pprint.pformat(project.to_dict()))
            return False
        except KeyError:
            # Not set up for Blender Cloud
            return False

        if pprops is None:
            self._log.debug("is_cloud_project: Project url=%r doesn't have Blender Cloud"
                            " extension properties.", project['url'])
            return False
        return True

    @property
    def has_project_settings(self) -> bool:
        # Only available for admins
        return pillar.auth.current_user.has_cap('admin')

    def project_settings(self, project: pillarsdk.Project, **template_args: dict) -> flask.Response:
        """Renders the project settings page for this extension.

        Set YourExtension.has_project_settings = True and Pillar will call this function.

        :param project: the project for which to render the settings.
        :param template_args: additional template arguments.
        :returns: a Flask HTTP response
        """
        from cloud.routes import project_settings
        return project_settings(project, **template_args)

    def setup_app(self, app):
        from . import routes, webhooks, eve_hooks, email

        routes.setup_app(app)
        app.register_api_blueprint(webhooks.blueprint, '/webhooks')
        eve_hooks.setup_app(app)
        email.setup_app(app)


def _get_current_cloud():
    """Returns the Cloud extension of the current application."""
    return flask.current_app.pillar_extensions[EXTENSION_NAME]


current_cloud = LocalProxy(_get_current_cloud)
"""Cloud extension of the current app."""

cloud/cli.py (new file, 139 lines)

@@ -0,0 +1,139 @@
#!/usr/bin/env python

import logging
from urllib.parse import urljoin

from flask import current_app
from flask_script import Manager
import requests

from pillar.cli import manager
from pillar.api import service
from pillar.api.utils import authentication

import cloud.setup

log = logging.getLogger(__name__)

manager_cloud = Manager(
    current_app, usage="Blender Cloud scripts")


@manager_cloud.command
def create_groups():
    """Creates the admin/demo/subscriber groups."""
    import pprint

    group_ids = {}
    groups_coll = current_app.db('groups')

    for group_name in ['admin', 'demo', 'subscriber']:
        if groups_coll.find({'name': group_name}).count():
            log.info('Group %s already exists, skipping', group_name)
            continue
        result = groups_coll.insert_one({'name': group_name})
        group_ids[group_name] = result.inserted_id

    service.fetch_role_to_group_id_map()
    log.info('Created groups:\n%s', pprint.pformat(group_ids))


@manager_cloud.command
def reconcile_subscribers():
    """For every user, check their subscription status with the store."""
    import threading
    import concurrent.futures

    from pillar.auth import UserClass
    from pillar.api import blender_id
    from pillar.api.blender_cloud.subscription import do_update_subscription

    sessions = threading.local()

    service.fetch_role_to_group_id_map()

    users_coll = current_app.data.driver.db['users']
    found = users_coll.find({'auth.provider': 'blender-id'})
    count_users = found.count()
    count_skipped = count_processed = 0
    log.info('Processing %i users', count_users)

    lock = threading.Lock()

    real_current_app = current_app._get_current_object()
    api_token = real_current_app.config['BLENDER_ID_USER_INFO_TOKEN']
    api_url = real_current_app.config['BLENDER_ID_USER_INFO_API']

    def do_user(idx, user):
        nonlocal count_skipped, count_processed

        log.info('Processing %i/%i %s', idx + 1, count_users, user['email'])

        # Get the Requests session for this thread.
        try:
            sess = sessions.session
        except AttributeError:
            sess = sessions.session = requests.Session()

        # Get the info from Blender ID
        bid_user_id = blender_id.get_user_blenderid(user)
        if not bid_user_id:
            with lock:
                count_skipped += 1
            return
        url = urljoin(api_url, bid_user_id)
        resp = sess.get(url, headers={'Authorization': f'Bearer {api_token}'})

        if resp.status_code == 404:
            log.info('User %s with Blender ID %s not found, skipping', user['email'], bid_user_id)
            with lock:
                count_skipped += 1
            return

        if resp.status_code != 200:
            log.error('Unable to reach Blender ID (code %d), aborting', resp.status_code)
            with lock:
                count_skipped += 1
            return

        bid_user = resp.json()
        if not bid_user:
            log.error('Unable to parse response for user %s, aborting', user['email'])
            with lock:
                count_skipped += 1
            return

        # Actually update the user, and do it thread-safe just to be sure.
        with real_current_app.app_context():
            local_user = UserClass.construct('', user)
            with lock:
                do_update_subscription(local_user, bid_user)
                count_processed += 1

    with concurrent.futures.ThreadPoolExecutor(max_workers=10) as executor:
        future_to_user = {executor.submit(do_user, idx, user): user
                          for idx, user in enumerate(found)}
        for future in concurrent.futures.as_completed(future_to_user):
            user = future_to_user[future]
            try:
                future.result()
            except Exception as ex:
                log.exception('Error updating user %s', user)

    log.info('Done reconciling %d subscribers', count_users)
    log.info('    processed: %d', count_processed)
    log.info('    skipped  : %d', count_skipped)


@manager_cloud.command
def setup_for_film(project_url):
    """Adds Blender Cloud film custom properties to a project."""
    authentication.force_cli_user()
    cloud.setup.setup_for_film(project_url)


manager.add_command("cloud", manager_cloud)

cloud/email.py (new file, 33 lines)

@@ -0,0 +1,33 @@
import functools
import logging

import flask

from pillar.auth import UserClass

log = logging.getLogger(__name__)


def queue_welcome_mail(user: UserClass):
    """Queue a welcome email for execution by Celery."""
    assert user.email
    log.info('queueing welcome email to %s', user.email)

    subject = 'Welcome to Blender Cloud'
    render = functools.partial(flask.render_template, subject=subject, user=user)
    text = render('emails/welcome.txt')
    html = render('emails/welcome.html')

    from pillar.celery import email_tasks
    email_tasks.send_email.delay(user.full_name, user.email, subject, text, html)


def user_subscription_changed(user: UserClass, *, grant_roles: set, revoke_roles: set):
    if user.has_cap('subscriber') and 'has_subscription' in grant_roles:
        log.info('user %s just got a new subscription', user.email)
        queue_welcome_mail(user)


def setup_app(app):
    from pillar.api.blender_cloud import subscription
    subscription.user_subscription_updated.connect(user_subscription_changed)

cloud/eve_hooks.py (new file, 39 lines)

@@ -0,0 +1,39 @@
import logging
import typing

from pillar.auth import UserClass

from . import email

log = logging.getLogger(__name__)


def welcome_new_user(user_doc: dict):
    """Sends a welcome email to a new user."""
    user_email = user_doc.get('email')
    if not user_email:
        log.warning('user %s has no email address', user_doc.get('_id', '-no-id-'))
        return

    # Only send mail to new users when they actually are subscribers.
    user = UserClass.construct('', user_doc)
    if not (user.has_cap('subscriber') or user.has_cap('can-renew-subscription')):
        log.debug('user %s is new, but not a subscriber, so no email for them.', user_email)
        return

    email.queue_welcome_mail(user)


def welcome_new_users(user_docs: typing.List[dict]):
    """Sends a welcome email to new users."""
    for user_doc in user_docs:
        try:
            welcome_new_user(user_doc)
        except Exception:
            log.exception('error sending welcome mail to user %s', user_doc)


def setup_app(app):
    app.on_inserted_users += welcome_new_users

cloud/forms.py (new file, 15 lines)

@@ -0,0 +1,15 @@
from flask_wtf import FlaskForm
from wtforms import BooleanField, StringField
from wtforms.fields.html5 import URLField
from wtforms.validators import URL

from pillar.web.utils.forms import FileSelectField


class FilmProjectForm(FlaskForm):
    video_url = URLField(validators=[URL()])
    poster = FileSelectField('Poster Image', file_format='image')
    logo = FileSelectField('Logo', file_format='image')
    is_in_production = BooleanField('In Production')
    is_featured = BooleanField('Featured')
    theme_color = StringField('Theme Color')

cloud/routes.py (new file, 658 lines)

@@ -0,0 +1,658 @@
import functools
import json
import logging
import typing
import bson
from flask_login import login_required
import flask
import werkzeug.exceptions as wz_exceptions
from flask import Blueprint, render_template, redirect, session, url_for, abort, flash, request
from pillarsdk import Node, Project, User, exceptions as sdk_exceptions, Group
from pillarsdk.exceptions import ResourceNotFound
import pillar
import pillarsdk
from pillar import current_app
from pillar.api.utils import authorization
from pillar.auth import current_user
from pillar.web.users import forms
from pillar.web.utils import system_util, get_file, current_user_is_authenticated
from pillar.web.utils import attach_project_pictures
from pillar.web.settings import blueprint as blueprint_settings
from pillar.web.nodes.routes import url_for_node
from pillar.web.projects.routes import render_project
from pillar.web.projects.routes import find_project_or_404
from pillar.web.projects.routes import project_view
from pillar.web.projects.routes import project_navigation_links
from cloud import current_cloud
from cloud.forms import FilmProjectForm
from . import EXTENSION_NAME
blueprint = Blueprint('cloud', __name__)
log = logging.getLogger(__name__)
@blueprint.route('/')
def homepage():
if current_user.is_anonymous:
return redirect(url_for('cloud.welcome'))
return render_template(
'homepage.html',
api=system_util.pillar_api(),
**_homepage_context(),
)
def _homepage_context() -> dict:
"""Returns homepage template context variables."""
# Get latest blog posts
api = system_util.pillar_api()
# Get latest comments to any node
latest_comments = Node.latest('comments', api=api)
# Get a list of random featured assets
random_featured = get_random_featured_nodes()
# Parse results for replies
to_remove = []
@functools.lru_cache()
def _find_parent(parent_node_id) -> Node:
return Node.find(parent_node_id,
{'projection': {
'_id': 1,
'name': 1,
'node_type': 1,
'project': 1,
'parent': 1,
'properties.url': 1,
}},
api=api)
for idx, comment in enumerate(latest_comments._items):
if comment.properties.is_reply:
try:
comment.attached_to = _find_parent(comment.parent.parent)
except ResourceNotFound:
# Remove this comment
to_remove.append(idx)
else:
comment.attached_to = comment.parent
for idx in reversed(to_remove):
del latest_comments._items[idx]
for comment in latest_comments._items:
if not comment.attached_to:
continue
comment.attached_to.url = url_for_node(node=comment.attached_to)
comment.url = url_for_node(node=comment)
main_project = Project.find(current_app.config['MAIN_PROJECT_ID'], api=api)
main_project.picture_header = get_file(main_project.picture_header, api=api)
return dict(
main_project=main_project,
latest_comments=latest_comments._items,
random_featured=random_featured)
@blueprint.route('/design-system')
def design_system():
"""Display the design system page.
This endpoint is intended for development only, and returns a
rendered template only if the app is running in debug mode.
"""
if not current_app.config['DEBUG']:
abort(404)
return render_template('design_system.html')
@blueprint.route('/login')
def login():
from flask import request
if request.args.get('force'):
log.debug('Forcing logout of user before rendering login page.')
pillar.auth.logout_user()
next_after_login = request.args.get('next')
if not next_after_login:
next_after_login = request.referrer
session['next_after_login'] = next_after_login
return redirect(url_for('users.oauth_authorize', provider='blender-id'))
@blueprint.route('/welcome')
def welcome():
# Workaround to cache rendering of a page if user not logged in
@current_app.cache.cached(timeout=3600, unless=current_user_is_authenticated)
def render_page():
return render_template('welcome.html')
return render_page()
@blueprint.route('/about')
def about():
return render_template('about.html')
@blueprint.route('/services')
def services():
return render_template('services.html')
@blueprint.route('/learn')
def learn():
return render_template('learn.html')
@blueprint.route('/libraries')
def libraries():
return render_template('libraries.html')
@blueprint.route('/stats')
def stats():
return render_template('stats.html')
@blueprint.route('/join')
def join():
"""Join page"""
return redirect('https://store.blender.org/product/membership/')
@blueprint.route('/renew')
def renew_subscription():
return render_template('renew_subscription.html')
def get_projects(category):
"""Utility to get projects based on category. Should be moved on the API
and improved with more extensive filtering capabilities.
"""
api = system_util.pillar_api()
projects = Project.all({
'where': {
'category': category,
'is_private': False},
'sort': '-_created',
}, api=api)
for project in projects._items:
attach_project_pictures(project, api)
return projects
@blueprint.route('/courses')
def courses():
@current_app.cache.cached(timeout=3600, unless=current_user_is_authenticated)
def render_page():
projects = get_projects('course')
return render_template(
'projects_index_collection.html',
title='courses',
projects=projects._items,
api=system_util.pillar_api())
return render_page()
@blueprint.route('/open-projects')
def open_projects():
@current_app.cache.cached(timeout=3600, unless=current_user_is_authenticated)
def render_page():
api = system_util.pillar_api()
projects = Project.all({
'where': {
'category': 'film',
'is_private': False
},
'sort': '-_created',
}, api=api)
for project in projects._items:
# Attach poster file (ensure the extension_props.cloud.poster
# attributes exists)
try:
# If the attribute exists, but is None, continue
if not project['extension_props'][EXTENSION_NAME]['poster']:
continue
# Fetch the file and embed it in the document
project.extension_props.cloud.poster = get_file(
project.extension_props.cloud.poster, api=api)
# Add convenience attribute that specifies the presence of the
# poster file
project.has_poster = True
# If there was a key error because one of the nested attributes is
# missing,
except KeyError:
continue
return render_template(
'films.html',
title='films',
projects=projects._items,
api=system_util.pillar_api())
return render_page()
@blueprint.route('/workshops')
def workshops():
@current_app.cache.cached(timeout=3600, unless=current_user_is_authenticated)
def render_page():
projects = get_projects('workshop')
return render_template(
'projects_index_collection.html',
title='workshops',
projects=projects._items,
api=system_util.pillar_api())
return render_page()
def get_random_featured_nodes() -> typing.List[dict]:
"""Returns a list of project/node combinations for featured nodes.
A random subset of 3 featured nodes from all public projects is returned.
Assumes that the user actually has access to the public projects' nodes.
The dict is a node, with a 'project' key that contains a projected project.
"""
proj_coll = current_app.db('projects')
featured_nodes = proj_coll.aggregate([
{'$match': {'is_private': False}},
{'$project': {'nodes_featured': True,
'url': True,
'name': True,
'summary': True,
'picture_square': True}},
{'$unwind': {'path': '$nodes_featured'}},
{'$sample': {'size': 6}},
{'$lookup': {'from': 'nodes',
'localField': 'nodes_featured',
'foreignField': '_id',
'as': 'node'}},
{'$unwind': {'path': '$node'}},
{'$match': {'node._deleted': {'$ne': True}}},
{'$project': {'url': True,
'name': True,
'summary': True,
'picture_square': True,
'node._id': True,
'node.name': True,
'node.permissions': True,
'node.picture': True,
'node.properties.content_type': True,
'node.properties.duration_seconds': True,
'node.properties.url': True,
'node._created': True,
}
},
])
featured_node_documents = []
api = system_util.pillar_api()
for node_info in featured_nodes:
# Turn the project-with-node doc into a node-with-project doc.
node_document = node_info.pop('node')
node_document['project'] = node_info
node_document['_id'] = str(node_document['_id'])
node = Node(node_document)
node.picture = get_file(node.picture, api=api)
node.project.picture_square = get_file(node.project.picture_square, api=api)
featured_node_documents.append(node)
return featured_node_documents
@blueprint_settings.route('/emails', methods=['GET', 'POST'])
@login_required
def emails():
"""Main email settings.
"""
if current_user.has_role('protected'):
return abort(404) # TODO: make this 403, handle template properly
api = system_util.pillar_api()
user = User.find(current_user.objectid, api=api)
# Force creation of settings for the user (safely remove this code once
# implemented on account creation level, and after adding settings to all
# existing users)
if not user.settings:
user.settings = dict(email_communications=1)
user.update(api=api)
if user.settings.email_communications is None:
user.settings.email_communications = 1
user.update(api=api)
# Generate form
form = forms.UserSettingsEmailsForm(
email_communications=user.settings.email_communications)
if form.validate_on_submit():
try:
user.settings.email_communications = form.email_communications.data
user.update(api=api)
flash("Profile updated", 'success')
except sdk_exceptions.ResourceInvalid as e:
message = json.loads(e.content)
flash(message)
return render_template('users/settings/emails.html', form=form, title='emails')
@blueprint_settings.route('/billing')
@login_required
def billing():
"""View the subscription status of a user
"""
from . import store
log.debug('START OF REQUEST')
if current_user.has_role('protected'):
return abort(404) # TODO: make this 403, handle template properly
expiration_date = 'No subscription to expire'
# Classify the user based on their roles and capabilities
cap_subs = current_user.has_cap('subscriber')
if current_user.has_role('demo'):
user_cls = 'demo'
elif not cap_subs and current_user.has_cap('can-renew-subscription'):
# This user has an inactive but renewable subscription.
user_cls = 'subscriber-expired'
elif cap_subs:
if current_user.has_role('subscriber'):
# This user pays for their own subscription. Only in this case do we need to fetch
# the expiration date from the Store.
user_cls = 'subscriber'
store_user = store.fetch_subscription_info(current_user.email)
if store_user is None:
expiration_date = 'Unable to reach Blender Store to check'
else:
expiration_date = store_user['expiration_date'][:10]
elif current_user.has_role('org-subscriber'):
# An organisation pays for this subscription.
user_cls = 'subscriber-org'
else:
# This user gets the subscription cap from somewhere else (like an organisation).
user_cls = 'subscriber-other'
else:
user_cls = 'outsider'
return render_template(
'users/settings/billing.html',
user_cls=user_cls,
expiration_date=expiration_date,
title='billing')
@blueprint.route('/terms-and-conditions')
def terms_and_conditions():
return render_template('terms_and_conditions.html')
@blueprint.route('/privacy')
def privacy():
return render_template('privacy.html')
@blueprint.route('/production')
def production():
return render_template(
'production.html',
title='production')
@blueprint.route('/emails/welcome.send')
@login_required
def emails_welcome_send():
from cloud import email
email.queue_welcome_mail(current_user)
return f'queued mail to {current_user.email}'
@blueprint.route('/emails/welcome.html')
@login_required
def emails_welcome_html():
return render_template('emails/welcome.html',
subject='Welcome to Blender Cloud',
user=current_user)
@blueprint.route('/emails/welcome.txt')
@login_required
def emails_welcome_txt():
txt = render_template('emails/welcome.txt',
subject='Welcome to Blender Cloud',
user=current_user)
return flask.Response(txt, content_type='text/plain; charset=utf-8')
@blueprint.route('/p/<project_url>')
def project_landing(project_url):
"""Override Pillar project_view endpoint completely.
The first part of the function is identical to the one in Pillar, but the
second part (starting with 'Load custom project properties') extends the
behaviour to support film project landing pages.
"""
template_name = None
if request.args.get('format') == 'jstree':
log.warning('projects.view(%r) endpoint called with format=jstree, '
'redirecting to proper endpoint. URL is %s; referrer is %s',
project_url, request.url, request.referrer)
return redirect(url_for('projects.jstree', project_url=project_url))
api = system_util.pillar_api()
project = find_project_or_404(project_url,
embedded={'header_node': 1},
api=api)
# Load the header video file, if there is any.
header_video_file = None
header_video_node = None
if project.header_node and project.header_node.node_type == 'asset' and \
project.header_node.properties.content_type == 'video':
header_video_node = project.header_node
header_video_file = get_file(project.header_node.properties.file)
header_video_node.picture = get_file(header_video_node.picture)
extra_context = {
'header_video_file': header_video_file,
'header_video_node': header_video_node}
# Load custom project properties. If the project has a 'cloud' extension prop,
# render it using the projects/landing.html template and try to attach a
# number of additional attributes (pages, images, etc.).
if 'extension_props' in project and EXTENSION_NAME in project['extension_props']:
extension_props = project['extension_props'][EXTENSION_NAME]
extension_props['logo'] = get_file(extension_props['logo'])
pages = Node.all({
'where': {
'project': project._id,
'node_type': 'page',
'_deleted': {'$ne': True}},
'projection': {'name': 1}}, api=api)
extra_context.update({'pages': pages._items})
template_name = 'projects/landing.html'
return render_project(project, api,
extra_context=extra_context,
template_name=template_name)
@blueprint.route('/p/<project_url>/browse')
@project_view()
def project_browse(project: pillarsdk.Project):
"""Project view displaying all top-level nodes.
We render a regular project view, but we introduce an additional template
variable: browse. By doing that we prevent the regular project view
from loading and fetch via AJAX a "group" node-like view instead (see
project_browse_view_nodes).
"""
return render_template(
'projects/view.html',
api=system_util.pillar_api(),
project=project,
node=None,
show_project=True,
browse=True,
og_picture=None,
navigation_links=project_navigation_links(project, system_util.pillar_api()),
extension_sidebar_links=current_app.extension_sidebar_links(project))
@blueprint.route('/p/<project_url>/browse/nodes')
@project_view()
def project_browse_view_nodes(project: pillarsdk.Project):
"""Display top-level nodes for a Project.
This view is always meant to be served embedded, as part of project_browse.
"""
api = system_util.pillar_api()
# Get top level nodes
projection = {
'project': 1,
'name': 1,
'picture': 1,
'node_type': 1,
'properties.order': 1,
'properties.status': 1,
'user': 1,
'properties.content_type': 1,
'permissions.world': 1}
where = {
'project': project['_id'],
'parent': {'$exists': False},
'properties.status': 'published',
'_deleted': {'$ne': True},
'node_type': {'$in': ['group', 'asset']},
}
try:
nodes = Node.all({
'projection': projection,
'where': where,
'sort': [('properties.order', 1), ('name', 1)]}, api=api)
except pillarsdk.exceptions.ForbiddenAccess:
return render_template('errors/403_embed.html')
nodes = nodes._items
for child in nodes:
child.picture = get_file(child.picture, api=api)
return render_template(
'projects/browse_embed.html',
nodes=nodes)
def project_settings(project: pillarsdk.Project, **template_args: dict):
"""Renders the project settings page for Blender Cloud projects.
If the project has been setup for Blender Cloud, check for the cloud.category
property, to render the proper form.
"""
# Based on the project state, we can render a different template.
if not current_cloud.is_cloud_project(project):
return render_template('project_settings/offer_setup.html',
project=project, **template_args)
cloud_props = project['extension_props'][EXTENSION_NAME]
category = cloud_props['category']
if category != 'film':
log.error('No interface available to edit %s projects, yet' % category)
form = FilmProjectForm()
# Iterate over the form fields and set the data if exists in the project document
for field_name in form.data:
if field_name not in cloud_props:
continue
# Skip csrf_token field
if field_name == 'csrf_token':
continue
form_field = getattr(form, field_name)
form_field.data = cloud_props[field_name]
return render_template('project_settings/settings.html',
project=project,
form=form,
**template_args)
@blueprint.route('/<project_url>/settings/film', methods=['POST'])
@authorization.require_login(require_cap='admin')
@project_view()
def save_film_settings(project: pillarsdk.Project):
# Ensure that the project is setup for Cloud (see @attract_project_view for example)
form = FilmProjectForm()
if not form.validate_on_submit():
log.debug('Form submission failed')
# Return list of validation errors
updated_extension_props = {}
for field_name in form.data:
# Skip csrf_token field
if field_name == 'csrf_token':
continue
form_field = getattr(form, field_name)
# TODO(fsiddi) if form_field type is FileSelectField, convert it to ObjectId
# Currently this raises TypeError: Object of type 'ObjectId' is not JSON serializable
if form_field.data == '':
form_field.data = None
updated_extension_props[field_name] = form_field.data
# Update extension props and save project
extension_props = project['extension_props'][EXTENSION_NAME]
# Project is a Resource, so we update properties iteratively
for k, v in updated_extension_props.items():
extension_props[k] = v
project.update(api=system_util.pillar_api())
return '', 204
@blueprint.route('/<project_url>/setup-for-film', methods=['POST'])
@login_required
@project_view()
def setup_for_film(project: pillarsdk.Project):
import cloud.setup
project_id = project._id
if not project.has_method('PUT'):
log.warning('User %s tries to set up project %s for Blender Cloud, but has no PUT rights.',
current_user, project_id)
raise wz_exceptions.Forbidden()
log.info('User %s sets up project %s for Blender Cloud', current_user, project_id)
cloud.setup.setup_for_film(project.url)
return '', 204
def setup_app(app):
global _homepage_context
cached = app.cache.cached(timeout=300)
_homepage_context = cached(_homepage_context)

54
cloud/setup.py Normal file

@@ -0,0 +1,54 @@
"""Setting up projects for Blender Cloud."""
import logging
from bson import ObjectId
from eve.methods.put import put_internal
from flask import current_app
from pillar.api.utils import remove_private_keys
from . import EXTENSION_NAME
log = logging.getLogger(__name__)
def setup_for_film(project_url):
"""Add Blender Cloud extension_props specific for film projects.
Returns the updated project.
"""
projects_collection = current_app.data.driver.db['projects']
# Find the project in the database.
project = projects_collection.find_one({'url': project_url})
if not project:
raise RuntimeError('Project %s does not exist.' % project_url)
# Set default extension properties. Be careful not to overwrite any properties that
# are already there.
all_extension_props = project.setdefault('extension_props', {})
cloud_extension_props = {
'category': 'film',
'theme_css': '',
# The accent color (can be 'blue', '#FFBBAA', or 'rgba(1, 1, 1, 1)')
'theme_color': '',
'is_in_production': False,
'video_url': '', # oEmbed-able URL
'poster': None, # File ObjectId
'logo': None, # File ObjectId
# TODO(fsiddi) when we introduce other setup_for_* in Blender Cloud, make available
# at a higher scope
'is_featured': False,
}
all_extension_props.setdefault(EXTENSION_NAME, cloud_extension_props)
project_id = ObjectId(project['_id'])
project = remove_private_keys(project)
result, _, _, status_code = put_internal('projects', project, _id=project_id)
if status_code != 200:
raise RuntimeError("Can't update project %s, issues: %s" % (project_id, result))
log.info('Project %s was updated for Blender Cloud.', project_url)
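
A minimal illustration of the setdefault() behaviour relied on above: extension properties that already exist survive a second setup run, and only missing keys are added ('cloud' stands in for EXTENSION_NAME; the values are made up).
```
# Existing props survive a re-run of the setup; only missing keys are added.
existing = {'extension_props': {'cloud': {'category': 'film', 'is_featured': True}}}
props = existing.setdefault('extension_props', {})
props.setdefault('cloud', {'category': 'film', 'is_featured': False})
assert existing['extension_props']['cloud']['is_featured'] is True  # unchanged
```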

19 new binary image files added (not shown; 14–91 KiB each).

89
cloud/stats/__init__.py Normal file

@@ -0,0 +1,89 @@
"""Interesting usage metrics"""
from flask import current_app
def count_nodes(query=None) -> int:
pipeline = [
{'$match': {'_deleted': {'$ne': 'true'}}},
{
'$lookup':
{
'from': "projects",
'localField': "project",
'foreignField': "_id",
'as': "project",
}
},
{
'$unwind':
{
'path': '$project',
}
},
{
'$project':
{
'project.is_private': 1,
}
},
{'$match': {'project.is_private': False}},
{'$count': 'tot'}
]
c = current_app.db()['nodes']
# If we provide a query, we extend the first $match step in the aggregation pipeline
# with the extra parameters (for example node_type).
if query:
pipeline[0]['$match'].update(query)
# Return either a list with one item or an empty list
r = list(c.aggregate(pipeline=pipeline))
count = 0 if not r else r[0]['tot']
return count
def count_users(query=None) -> int:
u = current_app.db()['users']
return u.count(query)
def count_blender_sync(query=None) -> int:
pipeline = [
# 0 Find all startup.blend files that are not deleted
{
'$match': {
'_deleted': {'$ne': 'true'},
'name': 'startup.blend',
}
},
# 1 Group them per project (drops any duplicates)
{'$group': {'_id': '$project'}},
# 2 Join the project info
{
'$lookup':
{
'from': "projects",
'localField': "_id",
'foreignField': "_id",
'as': "project",
}
},
# 3 Unwind the project list (there is always only one project)
{
'$unwind':
{
'path': '$project',
}
},
# 4 Find all home projects
{'$match': {'project.category': 'home'}},
{'$count': 'tot'}
]
c = current_app.db()['nodes']
# If we provide a query, we extend the first $match step in the aggregation pipeline
# with the extra parameters (for example _created).
if query:
pipeline[0]['$match'].update(query)
# Return either a list with one item or an empty list
r = list(c.aggregate(pipeline=pipeline))
count = 0 if not r else r[0]['tot']
return count
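
A minimal usage sketch of the query-merging behaviour described in the comments above; `app` is assumed to be the configured Flask application, and the field values are illustrative.
```
# The query dict is merged into the pipeline's first $match stage, so this
# counts only video assets that live in public projects.
from cloud.stats import count_nodes

with app.app_context():  # `app`: the configured application (assumption)
    n_videos = count_nodes({
        'node_type': 'asset',
        'properties.content_type': 'video',
    })
    print('video assets in public projects:', n_videos)
```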

57
cloud/stats/routes.py Normal file

@@ -0,0 +1,57 @@
import logging
import datetime
import functools
from flask import Blueprint, jsonify
from cloud.stats import count_nodes, count_users, count_blender_sync
blueprint = Blueprint('cloud.stats', __name__, url_prefix='/s')
log = logging.getLogger(__name__)
@functools.lru_cache()
def get_stats(before: datetime.datetime):
query_comments = {'node_type': 'comment'}
query_assets = {'node_type': 'asset'}
date_query = {}
if before:
date_query = {'_created': {'$lt': before}}
query_comments.update(date_query)
query_assets.update(date_query)
stats = {
'comments': count_nodes(query_comments),
'assets': count_nodes(query_assets),
'users_total': count_users(date_query),
'users_blender_sync': count_blender_sync(date_query),
}
return stats
@blueprint.route('/')
@blueprint.route('/before/<int:before>')
def index(before: int=0):
"""
This endpoint is queried on a daily basis by grafista to retrieve cloud usage
stats. For assets and comments we take into consideration only those that belong
to public projects.
This is the data we retrieve:
- Comments count
- Assets count (video, images and files)
- Users count (subscribers count goes via store)
- Blender Sync users
"""
# TODO: Implement project-level metrics (and update at every child update)
if before:
before = datetime.datetime.strptime(str(before), '%Y%m%d')
else:
today = datetime.date.today()
before = datetime.datetime(today.year, today.month, today.day)
return jsonify(get_stats(before))
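
A hedged example of querying this endpoint the way a stats collector would; the hostname is illustrative, and the date path element uses the %Y%m%d format parsed above.
```
import requests

resp = requests.get('https://cloud.blender.org/s/before/20210101', timeout=30)
resp.raise_for_status()
stats = resp.json()
print(stats['assets'], stats['comments'], stats['users_total'], stats['users_blender_sync'])
```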

72
cloud/store.py Normal file

@@ -0,0 +1,72 @@
"""Blender Store interface."""
import logging
import typing
from pillar import current_app
log = logging.getLogger(__name__)
def fetch_subscription_info(email: str) -> typing.Optional[dict]:
"""Returns the user info dict from the external subscriptions management server.
:returns: the store user info, or None if the user can't be found or there
was an error communicating. A dict like this is returned:
{
"shop_id": 700,
"cloud_access": 1,
"paid_balance": 314.75,
"balance_currency": "EUR",
"start_date": "2014-08-25 17:05:46",
"expiration_date": "2016-08-24 13:38:45",
"subscription_status": "wc-active",
"expiration_date_approximate": true
}
"""
from requests.adapters import HTTPAdapter
import requests.exceptions
external_subscriptions_server = current_app.config['EXTERNAL_SUBSCRIPTIONS_MANAGEMENT_SERVER']
if log.isEnabledFor(logging.DEBUG):
import urllib.parse
log_email = urllib.parse.quote(email)
log.debug('Connecting to store at %s?blenderid=%s',
external_subscriptions_server, log_email)
# Retry a few times when contacting the store.
s = requests.Session()
s.mount(external_subscriptions_server, HTTPAdapter(max_retries=5))
try:
r = s.get(external_subscriptions_server,
params={'blenderid': email},
verify=current_app.config['TLS_CERT_FILE'],
timeout=current_app.config.get('EXTERNAL_SUBSCRIPTIONS_TIMEOUT_SECS', 10))
except requests.exceptions.ConnectionError as ex:
log.error('Error connecting to %s: %s', external_subscriptions_server, ex)
return None
except requests.exceptions.Timeout as ex:
log.error('Timeout communicating with %s: %s', external_subscriptions_server, ex)
return None
except requests.exceptions.RequestException as ex:
log.error('Some error communicating with %s: %s', external_subscriptions_server, ex)
return None
if r.status_code != 200:
log.warning("Error communicating with %s, code=%i, unable to check "
"subscription status of user %s",
external_subscriptions_server, r.status_code, email)
return None
store_user = r.json()
if log.isEnabledFor(logging.DEBUG):
import json
log.debug('Received JSON from store API: %s',
json.dumps(store_user, sort_keys=False, indent=4))
return store_user
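
A minimal sketch of calling the lookup above; it assumes an application context with EXTERNAL_SUBSCRIPTIONS_MANAGEMENT_SERVER and TLS_CERT_FILE configured, and the email address is made up.
```
from cloud.store import fetch_subscription_info

with app.app_context():  # `app`: the configured application (assumption)
    info = fetch_subscription_info('jane@example.com')
    if info is None:
        print('unknown user, or the store could not be reached')
    elif info.get('cloud_access'):
        print('subscription active until', info.get('expiration_date'))
```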

3
cloud/tagged/__init__.py Normal file

@@ -0,0 +1,3 @@
"""Routes for fetching tagged assets."""

16
cloud/tagged/routes.py Normal file

@@ -0,0 +1,16 @@
import logging
import datetime
import functools
from flask import Blueprint, jsonify
blueprint = Blueprint('cloud.tagged', __name__, url_prefix='/tagged')
log = logging.getLogger(__name__)
@blueprint.route('/')
def index():
"""Return all tagged assets as JSON, grouped by tag."""

263
cloud/webhooks.py Normal file

@@ -0,0 +1,263 @@
"""Blender ID webhooks."""
import functools
import hashlib
import hmac
import json
import logging
import typing
from flask import Blueprint, request
import werkzeug.exceptions as wz_exceptions
from pillar import current_app
from pillar.api.blender_cloud import subscription
from pillar.api.utils.authentication import create_new_user_document, make_unique_username
from pillar.auth import UserClass
blueprint = Blueprint('cloud-webhooks', __name__)
log = logging.getLogger(__name__)
WEBHOOK_MAX_BODY_SIZE = 1024 * 10 # 10 kB is large enough for the webhook payloads we expect
def webhook_payload(hmac_secret: str) -> dict:
"""Obtains the webhook payload from the request, verifying its HMAC.
:returns: the webhook payload as a dictionary.
"""
# Check the content type
if request.content_type != 'application/json':
log.info('request from %s to %s had bad content type %s',
request.remote_addr, request, request.content_type)
raise wz_exceptions.BadRequest('Content type not supported')
# Check the length of the body
if request.content_length > WEBHOOK_MAX_BODY_SIZE:
raise wz_exceptions.BadRequest('Request too large')
body = request.get_data()
if len(body) > request.content_length:
raise wz_exceptions.BadRequest('Larger body than Content-Length header')
# Validate the request
mac = hmac.new(hmac_secret.encode(), body, hashlib.sha256)
req_hmac = request.headers.get('X-Webhook-HMAC', '')
our_hmac = mac.hexdigest()
if not hmac.compare_digest(req_hmac, our_hmac):
log.info('request from %s to %s had bad HMAC %r, expected %r',
request.remote_addr, request, req_hmac, our_hmac)
raise wz_exceptions.BadRequest('Bad HMAC')
try:
return json.loads(body)
except json.JSONDecodeError as ex:
log.warning('request from %s to %s had bad JSON: %s',
request.remote_addr, request, ex)
raise wz_exceptions.BadRequest('Bad JSON')
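
A sketch of how a caller would sign a request body so that the check above passes; the secret, endpoint URL and payload are placeholders, not values from this repository.
```
import hashlib
import hmac
import json

import requests

secret = b'shared-webhook-secret'  # placeholder
body = json.dumps({'id': 12345, 'email': 'new@example.com'}).encode()
signature = hmac.new(secret, body, hashlib.sha256).hexdigest()

requests.post('http://cloud.local:5001/user-modified',  # placeholder URL
              data=body,
              headers={'Content-Type': 'application/json',
                       'X-Webhook-HMAC': signature})
```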
def score(wh_payload: dict, user: dict) -> int:
"""Determine how likely it is that this is the correct user to modify.
:param wh_payload: the info we received from Blender ID;
see user_modified()
:param user: the user in our database
:return: the score for this user
"""
bid_str = str(wh_payload['id'])
try:
match_on_bid = any((auth['provider'] == 'blender-id' and auth['user_id'] == bid_str)
for auth in user['auth'])
except KeyError:
match_on_bid = False
match_on_old_email = user.get('email', 'none') == wh_payload.get('old_email', 'nothere')
match_on_new_email = user.get('email', 'none') == wh_payload.get('email', 'nothere')
return match_on_bid * 10 + match_on_old_email + match_on_new_email * 2
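
A worked example of the scoring arithmetic (the documents are made up): a Blender ID match plus a match on the new email address scores 1*10 + 0 + 1*2 == 12, which outranks a user matching only on the old email address (score 1).
```
from cloud.webhooks import score

payload = {'id': 12345, 'old_email': 'old@example.com', 'email': 'new@example.com'}
candidate = {'auth': [{'provider': 'blender-id', 'user_id': '12345'}],
             'email': 'new@example.com'}
assert score(payload, candidate) == 12
```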
def insert_or_fetch_user(wh_payload: dict) -> typing.Optional[dict]:
"""Fetch the user from the DB or create it.
Only creates it if the webhook payload indicates they could actually use
Blender Cloud (i.e. demo or subscriber). This prevents us from creating
Cloud accounts for Blender Network users.
:returns: the user document, or None when no user was created.
"""
users_coll = current_app.db('users')
my_log = log.getChild('insert_or_fetch_user')
bid_str = str(wh_payload['id'])
email = wh_payload['email']
# Find the user by their Blender ID, or any of their email addresses.
# We use one query to find all matching users. This is done as a
# consistency check; if more than one user is returned, we know the
# database is inconsistent with Blender ID and can emit a warning
# about this.
query = {'$or': [
{'auth.provider': 'blender-id', 'auth.user_id': bid_str},
{'email': {'$in': [wh_payload['old_email'], email]}},
]}
db_users = list(users_coll.find(query))
user_count = len(db_users)
if user_count > 1:
# Now we have to pay the price for finding users in one query; we
# have to prioritise them and return the one we think is most reliable.
calc_score = functools.partial(score, wh_payload)
best_score = max(db_users, key=calc_score)
my_log.error('%d users found for query %s, picking user %s (%s)',
user_count, query, best_score['_id'], best_score['email'])
return best_score
if user_count:
db_user = db_users[0]
my_log.debug('found user %s', db_user['email'])
return db_user
if wh_payload.get('date_deletion_requested'):
my_log.info('Received update for a deleted user %s, not creating', bid_str)
return None
# Pretend to create the user, so that we can inspect the resulting
# capabilities. This is more future-proof than looking at the list
# of roles in the webhook payload.
username = make_unique_username(email)
user_doc = create_new_user_document(email, bid_str, username,
provider='blender-id',
full_name=wh_payload['full_name'])
# Figure out the user's eventual roles. These aren't stored in the document yet,
# because that's handled by the badger service.
eventual_roles = [subscription.ROLES_BID_TO_PILLAR[r]
for r in wh_payload.get('roles', [])
if r in subscription.ROLES_BID_TO_PILLAR]
user_ob = UserClass.construct('', user_doc)
user_ob.roles = eventual_roles
user_ob.collect_capabilities()
create = (user_ob.has_cap('subscriber') or
user_ob.has_cap('can-renew-subscription') or
current_app.org_manager.user_is_unknown_member(email))
if not create:
my_log.info('Received update for unknown user %r without Cloud access (caps=%s)',
wh_payload['old_email'], user_ob.capabilities)
return None
# Actually create the user in the database.
r, _, _, status = current_app.post_internal('users', user_doc)
if status != 201:
my_log.error('unable to create user %s: %r %r', email, status, r)
raise wz_exceptions.InternalServerError('unable to create user')
user_doc.update(r)
my_log.info('created user %r = %s to allow immediate Cloud access', email, user_doc['_id'])
return user_doc
@blueprint.route('/user-modified', methods=['POST'])
def user_modified():
"""Update the local user based on the info from Blender ID.
If the payload indicates the user has access to Blender Cloud (or at least
a renewable subscription), create the user if not already in our DB.
The payload we expect is a dictionary like:
{'id': 12345, # the user's ID in Blender ID
'old_email': 'old@example.com',
'full_name': 'Harry',
'email': 'new@example.com',
'avatar_changed': True,
'roles': ['role1', 'role2', …]}
"""
my_log = log.getChild('user_modified')
my_log.debug('Received request from %s', request.remote_addr)
hmac_secret = current_app.config['BLENDER_ID_WEBHOOK_USER_CHANGED_SECRET']
payload = webhook_payload(hmac_secret)
my_log.info('payload: %s', payload)
# Update the user
db_user = insert_or_fetch_user(payload)
if not db_user:
my_log.info('Received update for unknown user %r', payload['old_email'])
return '', 204
if payload.get('date_deletion_requested'):
delete_user(db_user, payload)
return '', 204
# Use direct database updates to change the email and full name.
# Also updates the db_user dict so that local_user below will have
# the updated information.
updates = {}
if db_user['email'] != payload['email']:
my_log.info('User changed email from %s to %s', payload['old_email'], payload['email'])
updates['email'] = payload['email']
db_user['email'] = payload['email']
if db_user['full_name'] != payload['full_name']:
my_log.info('User changed full name from %r to %r',
db_user['full_name'], payload['full_name'])
if payload['full_name']:
updates['full_name'] = payload['full_name']
else:
# Fall back to the username when the full name was erased.
updates['full_name'] = db_user['username']
db_user['full_name'] = updates['full_name']
if payload.get('avatar_changed'):
import pillar.celery.avatar
my_log.info('User %s changed avatar, scheduling download', db_user['_id'])
pillar.celery.avatar.sync_avatar_for_user.delay(str(db_user['_id']))
if updates:
users_coll = current_app.db('users')
update_res = users_coll.update_one({'_id': db_user['_id']},
{'$set': updates})
if update_res.matched_count != 1:
my_log.error('Unable to find user %s to update, even though '
'we found them by email address %s',
db_user['_id'], payload['old_email'])
# Defer to Pillar to do the role updates.
local_user = UserClass.construct('', db_user)
subscription.do_update_subscription(local_user, payload)
return '', 204
def delete_user(db_user, payload):
"""Handle deletion request coming from BID."""
my_log = log.getChild('delete_user')
date_deletion_requested = payload['date_deletion_requested']
bid_str = str(payload['id'])
local_id = db_user['_id']
my_log.info(
'User %s with BID=%s requested deletion on %s, soft-deleting the user',
local_id, bid_str, date_deletion_requested,
)
# Delete all session tokens linked to this user
token_coll = current_app.db('tokens')
delete_res = token_coll.delete_many({'user': local_id})
my_log.info('Deleted %s session tokens of user %s', delete_res.deleted_count, local_id)
# Soft-delete the user and clear their PII
users_coll = current_app.db('users')
updates = {
'_deleted': True,
'email': None,
'full_name': None,
'username': None,
'auth': [],
}
update_res = users_coll.update_one({'_id': local_id}, {'$set': updates})
if update_res.matched_count != 1:
my_log.error(
'Soft-deleted %s users %s with BID=%s',
update_res.matched_count, local_id, bid_str,
)
else:
my_log.warning('Soft-deleted user %s with BID=%s', local_id, bid_str)

296
cloud_share_img.py Executable file

@@ -0,0 +1,296 @@
#!/usr/bin/env python3
from __future__ import print_function
"""CLI command for sharing an image via Blender Cloud.
Assumes that you are logged in on Blender ID with the Blender ID Add-on.
The user_config_dir and user_data_dir functions come from
https://github.com/ActiveState/appdirs/blob/master/appdirs.py and
are licensed under the MIT license.
"""
import argparse
import json
import mimetypes
import os.path
import pprint
import sys
import webbrowser
from urllib.parse import urljoin
import requests
cli = argparse.Namespace() # CLI args from argparser
sess = requests.Session()
IMAGE_SHARING_GROUP_NODE_NAME = 'Image sharing'
if sys.platform.startswith('java'):
import platform
os_name = platform.java_ver()[3][0]
if os_name.startswith('Windows'): # "Windows XP", "Windows 7", etc.
system = 'win32'
elif os_name.startswith('Mac'): # "Mac OS X", etc.
system = 'darwin'
else: # "Linux", "SunOS", "FreeBSD", etc.
# Setting this to "linux2" is not ideal, but only Windows or Mac
# are actually checked for and the rest of the module expects
# *sys.platform* style strings.
system = 'linux2'
else:
system = sys.platform
def request(method: str, rel_url: str, **kwargs) -> requests.Response:
kwargs.setdefault('auth', (cli.token, ''))
url = urljoin(cli.server_url, rel_url)
return sess.request(method, url, **kwargs)
def get(rel_url: str, **kwargs) -> requests.Response:
return request('GET', rel_url, **kwargs)
def post(rel_url: str, **kwargs) -> requests.Response:
return request('POST', rel_url, **kwargs)
def find_user_id() -> str:
"""Returns the current user ID."""
print(15 * '=', 'User info', 15 * '=')
resp = get('/api/users/me')
resp.raise_for_status()
user_info = resp.json()
print('You are logged in as %(full_name)s (%(_id)s)' % user_info)
return user_info['_id']
def find_home_project_id() -> dict:
resp = get('/api/bcloud/home-project')
resp.raise_for_status()
proj = resp.json()
proj_id = proj['_id']
print('Your home project ID is %s' % proj_id)
return proj_id
def find_image_sharing_group_id(home_project_id, user_id) -> str:
"""Find the top-level image sharing group node."""
node_doc = {
'project': home_project_id,
'node_type': 'group',
'name': IMAGE_SHARING_GROUP_NODE_NAME,
'user': user_id,
}
resp = get('/api/nodes', params={'where': json.dumps(node_doc)})
resp.raise_for_status()
items = resp.json()['_items']
if not items:
print('Share group not found, creating one.')
node_doc.update({
'properties': {},
})
resp = post('/api/nodes', json=node_doc)
resp.raise_for_status()
share_group = resp.json()
else:
share_group = items[0]
# print('Share group:', share_group)
return share_group['_id']
def upload_image():
user_id = find_user_id()
home_project_id = find_home_project_id()
group_id = find_image_sharing_group_id(home_project_id, user_id)
basename = os.path.basename(cli.imgfile)
print('Sharing group ID is %s' % group_id)
# Upload the image to the project.
print('Uploading %r' % cli.imgfile)
mimetype, _ = mimetypes.guess_type(cli.imgfile, strict=False)
with open(cli.imgfile, mode='rb') as infile:
resp = post('api/storage/stream/%s' % home_project_id,
files={'file': (basename, infile, mimetype)})
resp.raise_for_status()
file_upload_resp = resp.json()
file_upload_status = file_upload_resp.get('_status') or file_upload_resp.get('status')
if file_upload_status != 'ok':
raise ValueError('Received bad status %s from Pillar: %s' %
(file_upload_status, json.dumps(file_upload_resp)))
file_id = file_upload_resp['file_id']
print('File ID is', file_id)
# Create the asset node
asset_node = {
'project': home_project_id,
'node_type': 'asset',
'name': basename,
'parent': group_id,
'properties': {
'content_type': mimetype,
'file': file_id,
},
}
resp = post('api/nodes', json=asset_node)
resp.raise_for_status()
node_info = resp.json()
node_id = node_info['_id']
print('Created asset node', node_id)
# Share the node to get a public URL.
resp = post('api/nodes/%s/share' % node_id)
resp.raise_for_status()
share_info = resp.json()
print(json.dumps(share_info, indent=4))
url = share_info.get('short_link')
print('Opening %s in a browser' % url)
webbrowser.open_new_tab(url)
def find_credentials():
"""Finds BlenderID credentials.
:rtype: str
:returns: the authentication token to use.
"""
import glob
# Find BlenderID profile file.
configpath = user_config_dir('blender', 'Blender Foundation', roaming=True)
found = glob.glob(os.path.join(configpath, '*'))
for confpath in reversed(sorted(found)):
profiles_path = os.path.join(confpath, 'config', 'blender_id', 'profiles.json')
if not os.path.exists(profiles_path):
continue
print('Reading credentials from %s' % profiles_path)
with open(profiles_path) as infile:
profiles = json.load(infile)
if profiles:
break
else:
print('Unable to find Blender ID credentials. Log in with the Blender ID add-on in '
'Blender first.')
raise SystemExit()
active_profile = profiles[u'active_profile']
profile = profiles[u'profiles'][active_profile]
print('Logging in as %s' % profile[u'username'])
return profile[u'token']
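
For reference, the shape of profiles.json assumed by the lookup above (field names taken from the code; the values are invented):
```
PROFILES_EXAMPLE = {
    'active_profile': '42',
    'profiles': {
        '42': {'username': 'jane', 'token': 'SECRET-TOKEN'},
    },
}
```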
def main():
global cli
parser = argparse.ArgumentParser()
parser.add_argument('imgfile', help='The image file to share.')
parser.add_argument('-u', '--server-url', default='https://cloud.blender.org/',
help='URL of the Blender Cloud server.')
parser.add_argument('-t', '--token',
help='Authentication token to use. If not given, your token from the '
'Blender ID add-on is used.')
cli = parser.parse_args()
if not cli.token:
cli.token = find_credentials()
upload_image()
def user_config_dir(appname=None, appauthor=None, version=None, roaming=False):
r"""Return full path to the user-specific config dir for this application.
"appname" is the name of application.
If None, just the system directory is returned.
"appauthor" (only used on Windows) is the name of the
appauthor or distributing body for this application. Typically
it is the owning company name. This falls back to appname. You may
pass False to disable it.
"version" is an optional version path element to append to the
path. You might want to use this if you want multiple versions
of your app to be able to run independently. If used, this
would typically be "<major>.<minor>".
Only applied when appname is present.
"roaming" (boolean, default False) can be set True to use the Windows
roaming appdata directory. That means that for users on a Windows
network setup for roaming profiles, this user data will be
sync'd on login. See
<http://technet.microsoft.com/en-us/library/cc766489(WS.10).aspx>
for a discussion of issues.
Typical user config directories are:
Mac OS X: same as user_data_dir
Unix: ~/.config/<AppName> # or in $XDG_CONFIG_HOME, if defined
Win *: same as user_data_dir
For Unix, we follow the XDG spec and support $XDG_CONFIG_HOME.
That means, by default "~/.config/<AppName>".
"""
if system in {"win32", "darwin"}:
path = user_data_dir(appname, appauthor, None, roaming)
else:
path = os.getenv('XDG_CONFIG_HOME', os.path.expanduser("~/.config"))
if appname:
path = os.path.join(path, appname)
if appname and version:
path = os.path.join(path, version)
return path
def user_data_dir(appname=None, appauthor=None, version=None, roaming=False):
r"""Return full path to the user-specific data dir for this application.
"appname" is the name of application.
If None, just the system directory is returned.
"appauthor" (only used on Windows) is the name of the
appauthor or distributing body for this application. Typically
it is the owning company name. This falls back to appname. You may
pass False to disable it.
"version" is an optional version path element to append to the
path. You might want to use this if you want multiple versions
of your app to be able to run independently. If used, this
would typically be "<major>.<minor>".
Only applied when appname is present.
"roaming" (boolean, default False) can be set True to use the Windows
roaming appdata directory. That means that for users on a Windows
network setup for roaming profiles, this user data will be
sync'd on login. See
<http://technet.microsoft.com/en-us/library/cc766489(WS.10).aspx>
for a discussion of issues.
Typical user data directories are:
Mac OS X: ~/Library/Application Support/<AppName>
Unix: ~/.local/share/<AppName> # or in $XDG_DATA_HOME, if defined
Win XP (not roaming): C:\Documents and Settings\<username>\Application Data\<AppAuthor>\<AppName>
Win XP (roaming): C:\Documents and Settings\<username>\Local Settings\Application Data\<AppAuthor>\<AppName>
Win 7 (not roaming): C:\Users\<username>\AppData\Local\<AppAuthor>\<AppName>
Win 7 (roaming): C:\Users\<username>\AppData\Roaming\<AppAuthor>\<AppName>
For Unix, we follow the XDG spec and support $XDG_DATA_HOME.
That means, by default "~/.local/share/<AppName>".
"""
if system == "win32":
raise RuntimeError("Sorry, Windows is not supported for now.")
elif system == 'darwin':
path = os.path.expanduser('~/Library/Application Support/')
if appname:
path = os.path.join(path, appname)
else:
path = os.getenv('XDG_DATA_HOME', os.path.expanduser("~/.local/share"))
if appname:
path = os.path.join(path, appname)
if appname and version:
path = os.path.join(path, version)
return path
if __name__ == '__main__':
main()

43
config_local.example.py Normal file

@@ -0,0 +1,43 @@
import os
DEBUG = True
BLENDER_ID_ENDPOINT = 'http://id.local:8000/'
SERVER_NAME = 'cloud.local:5001'
SCHEME = 'http'
PILLAR_SERVER_ENDPOINT = f'{SCHEME}://{SERVER_NAME}/api/'
os.environ['OAUTHLIB_INSECURE_TRANSPORT'] = 'true'
os.environ['PILLAR_MONGO_DBNAME'] = 'cloud'
os.environ['PILLAR_MONGO_PORT'] = '27017'
os.environ['PILLAR_MONGO_HOST'] = 'mongo'
os.environ['PILLAR_SERVER_ENDPOINT'] = PILLAR_SERVER_ENDPOINT
SECRET_KEY = '##DEFINE##'
OAUTH_CREDENTIALS = {
'blender-id': {
'id': 'CLOUD-OF-SNOWFLAKES-42',
'secret': '##DEFINE##',
}
}
MAIN_PROJECT_ID = '##DEFINE##'
URLER_SERVICE_AUTH_TOKEN = '##DEFINE##'
ZENCODER_API_KEY = '##DEFINE##'
ZENCODER_NOTIFICATIONS_SECRET = '##DEFINE##'
ZENCODER_NOTIFICATIONS_URL = 'http://zencoderfetcher/'
# Special announcement on top of every page, for non-subscribers.
# category: 'string', can be 'info', 'warning', 'danger', or 'success'.
# message: 'string', any text, it gets markdowned.
# icon: 'string', any icon in font-pillar. e.g. 'pi-heart-filled'
UI_ANNOUNCEMENT_NON_SUBSCRIBERS = {
'category': 'danger',
'message': 'Spring will swing away the gray clouds, until then, '
'[take cover under Blender Cloud](https://cloud.blender.org)!',
'icon': 'pi-heart-filled',
}

124
deploy.sh

@@ -1,124 +0,0 @@
#!/bin/bash -e
# Deploys the current production branch to the production machine.
PROJECT_NAME="blender-cloud"
DOCKER_NAME="blender_cloud"
REMOTE_ROOT="/data/git/${PROJECT_NAME}"
SSH="ssh -o ClearAllForwardings=yes cloud.blender.org"
# macOS does not support readlink -f, so we use greadlink instead
if [[ `uname` == 'Darwin' ]]; then
command -v greadlink 2>/dev/null 2>&1 || { echo >&2 "Install greadlink using brew."; exit 1; }
readlink='greadlink'
else
readlink='readlink'
fi
ROOT="$(dirname "$($readlink -f "$0")")"
cd ${ROOT}
# Check that we're on production branch.
if [ $(git rev-parse --abbrev-ref HEAD) != "production" ]; then
echo "You are NOT on the production branch, refusing to deploy." >&2
exit 1
fi
# Check that production branch has been pushed.
if [ -n "$(git log origin/production..production --oneline)" ]; then
echo "WARNING: not all changes to the production branch have been pushed."
echo "Press [ENTER] to continue deploying current origin/production, CTRL+C to abort."
read dummy
fi
function find_module()
{
MODULE_NAME=$1
MODULE_DIR=$(python <<EOT
from __future__ import print_function
import os.path
try:
import ${MODULE_NAME}
except ImportError:
raise SystemExit('${MODULE_NAME} not found on Python path. Are you in the correct venv?')
print(os.path.dirname(os.path.dirname(${MODULE_NAME}.__file__)))
EOT
)
if [ $(git -C $MODULE_DIR rev-parse --abbrev-ref HEAD) != "production" ]; then
echo "${MODULE_NAME}: ($MODULE_DIR) NOT on the production branch, refusing to deploy." >&2
exit 1
fi
echo $MODULE_DIR
}
# Find our modules
PILLAR_DIR=$(find_module pillar)
ATTRACT_DIR=$(find_module attract)
FLAMENCO_DIR=$(find_module flamenco)
echo "Pillar : $PILLAR_DIR"
echo "Attract : $ATTRACT_DIR"
echo "Flamenco: $FLAMENCO_DIR"
if [ -z "$PILLAR_DIR" -o -z "$ATTRACT_DIR" -o -z "$FLAMENCO_DIR" ];
then
exit 1
fi
# SSH to cloud to pull all files in
function git_pull() {
PROJECT_NAME="$1"
BRANCH="$2"
REMOTE_ROOT="/data/git/${PROJECT_NAME}"
echo "==================================================================="
echo "UPDATING FILES ON ${PROJECT_NAME}"
${SSH} git -C ${REMOTE_ROOT} fetch origin ${BRANCH}
${SSH} git -C ${REMOTE_ROOT} log origin/${BRANCH}..${BRANCH} --oneline
${SSH} git -C ${REMOTE_ROOT} merge --ff-only origin/${BRANCH}
}
git_pull pillar-python-sdk master
git_pull pillar production
git_pull attract production
git_pull flamenco production
git_pull blender-cloud production
# Update the virtualenv
#${SSH} -t docker exec ${DOCKER_NAME} /data/venv/bin/pip install -U -r ${REMOTE_ROOT}/requirements.txt --exists-action w
# RSync the world
$ATTRACT_DIR/rsync_ui.sh
$FLAMENCO_DIR/rsync_ui.sh
./rsync_ui.sh
# Notify Bugsnag of this new deploy.
echo
echo "==================================================================="
GIT_REVISION=$(${SSH} git -C ${REMOTE_ROOT} describe --always)
echo "Notifying Bugsnag of this new deploy of revision ${GIT_REVISION}."
BUGSNAG_API_KEY=$(${SSH} python -c "\"import sys; sys.path.append('${REMOTE_ROOT}'); import config_local; print(config_local.BUGSNAG_API_KEY)\"")
curl --data "apiKey=${BUGSNAG_API_KEY}&revision=${GIT_REVISION}" https://notify.bugsnag.com/deploy
echo
# Wait for [ENTER] to restart the server
echo
echo "==================================================================="
echo "NOTE: If you want to edit config_local.py on the server, do so now."
echo "NOTE: Press [ENTER] to continue and restart the server process."
read dummy
${SSH} docker exec ${DOCKER_NAME} apache2ctl graceful
echo "Server process restarted"
echo
echo "==================================================================="
echo "Clearing front page from Redis cache."
${SSH} docker exec redis redis-cli DEL pwview//
echo
echo "==================================================================="
echo "Deploy of ${PROJECT_NAME} is done."
echo "==================================================================="

130
deploy/2docker.sh Executable file

@@ -0,0 +1,130 @@
#!/bin/bash -e
STAGING_BRANCH=${STAGING_BRANCH:-production}
# macOS does not support readlink -f, so we use greadlink instead
if [[ `uname` == 'Darwin' ]]; then
command -v greadlink 2>/dev/null 2>&1 || { echo >&2 "Install greadlink using brew."; exit 1; }
readlink='greadlink'
else
readlink='readlink'
fi
ROOT="$(dirname "$(dirname "$($readlink -f "$0")")")"
STAGINGDIR="$ROOT/docker/4_run/staging"
PROJECT_NAME="$(basename $ROOT)"
if [ -e $STAGINGDIR ]; then
echo "$STAGINGDIR already exists, press [ENTER] to destroy and re-install, Ctrl+C to abort."
read dummy
rm -rf $STAGINGDIR
else
echo -n "Installing into $STAGINGDIR"
echo "press [ENTER] to continue, Ctrl+C to abort."
read dummy
fi
cd ${ROOT}
mkdir -p $STAGINGDIR
REMOTE_ROOT="$STAGINGDIR/$PROJECT_NAME"
if [ -z "$SKIP_BRANCH_CHECK" ]; then
# Check that we're on the $STAGING_BRANCH branch.
if [ $(git rev-parse --abbrev-ref HEAD) != "$STAGING_BRANCH" ]; then
echo "You are NOT on the $STAGING_BRANCH branch, refusing to stage." >&2
exit 1
fi
# Check that the $STAGING_BRANCH branch has been pushed.
if [ -n "$(git log origin/$STAGING_BRANCH..$STAGING_BRANCH --oneline)" ]; then
echo "WARNING: not all changes to the $STAGING_BRANCH branch have been pushed."
echo "Press [ENTER] to continue staging current origin/$STAGING_BRANCH, CTRL+C to abort."
read dummy
fi
fi
function find_module()
{
MODULE_NAME=$1
MODULE_DIR=$(python <<EOT
from __future__ import print_function
import os.path
try:
import ${MODULE_NAME}
except ImportError:
raise SystemExit('${MODULE_NAME} not found on Python path. Are you in the correct venv?')
print(os.path.dirname(os.path.dirname(${MODULE_NAME}.__file__)))
EOT
)
echo $MODULE_DIR
}
# Find our modules
echo "==================================================================="
echo "LOCAL MODULE LOCATIONS"
PILLAR_DIR=$(find_module pillar)
ATTRACT_DIR=$(find_module attract)
FLAMENCO_DIR=$(find_module flamenco)
SVNMAN_DIR=$(find_module svnman)
SDK_DIR=$(find_module pillarsdk)
echo "Pillar : $PILLAR_DIR"
echo "Attract : $ATTRACT_DIR"
echo "Flamenco: $FLAMENCO_DIR"
echo "SVNMan : $SVNMAN_DIR"
echo "SDK : $SDK_DIR"
if [ -z "$PILLAR_DIR" -o -z "$ATTRACT_DIR" -o -z "$FLAMENCO_DIR" -o -z "$SVNMAN_DIR" -o -z "$SDK_DIR" ];
then
exit 1
fi
function git_clone() {
PROJECT_NAME="$1"
BRANCH="$2"
LOCAL_ROOT="$3"
echo "==================================================================="
echo "CLONING REPO ON $PROJECT_NAME @$BRANCH"
URL=$(git -C $LOCAL_ROOT remote get-url origin)
git -C $STAGINGDIR clone --depth 1 --branch $BRANCH $URL $PROJECT_NAME
}
if [ "$STAGING_BRANCH" == "production" ]; then
SDK_STAGING_BRANCH=master # SDK doesn't have a production branch
else
SDK_STAGING_BRANCH=$STAGING_BRANCH
fi
git_clone pillar-python-sdk $SDK_STAGING_BRANCH $SDK_DIR
git_clone pillar $STAGING_BRANCH $PILLAR_DIR
git_clone attract $STAGING_BRANCH $ATTRACT_DIR
git_clone flamenco $STAGING_BRANCH $FLAMENCO_DIR
git_clone pillar-svnman $STAGING_BRANCH $SVNMAN_DIR
git_clone blender-cloud $STAGING_BRANCH $ROOT
# Gulp everywhere
GULP=$ROOT/node_modules/.bin/gulp
if [ ! -e $GULP -o gulpfile.js -nt $GULP ]; then
npm install
touch $GULP # installer doesn't always touch this after a build, so we do.
fi
# List of projects
PROJECTS="pillar attract flamenco pillar-svnman blender-cloud"
# Run ./gulp for every project
for PROJECT in $PROJECTS; do
pushd $STAGINGDIR/$PROJECT; ./gulp --production; popd;
done
# Remove node_modules (only after all projects with interdependencies have been built)
for PROJECT in $PROJECTS; do
pushd $STAGINGDIR/$PROJECT; rm -r node_modules; popd;
done
echo
echo "==================================================================="
echo "Staging of ${PROJECT_NAME} is ready for dockerisation."
echo "==================================================================="

80
deploy/2server.sh Executable file

@@ -0,0 +1,80 @@
#!/bin/bash -e
# macOS does not support readlink -f, so we use greadlink instead
if [[ `uname` == 'Darwin' ]]; then
command -v greadlink 2>/dev/null 2>&1 || { echo >&2 "Install greadlink using brew."; exit 1; }
readlink='greadlink'
else
readlink='readlink'
fi
ROOT="$(dirname "$(dirname "$($readlink -f "$0")")")"
PROJECT_NAME="$(basename $ROOT)"
DOCKER_IMAGE="armadillica/blender_cloud:latest"
REMOTE_SECRET_CONFIG_DIR="/data/config"
REMOTE_DOCKER_COMPOSE_DIR="/root/docker"
#################################################################################
case $1 in
cloud*)
DEPLOYHOST="$1"
;;
*)
echo "Use $0 cloud{nr}|cloud.blender.org" >&2
exit 1
esac
SSH_OPTS="-o ClearAllForwardings=yes -o PermitLocalCommand=no"
SSH="ssh $SSH_OPTS $DEPLOYHOST"
SCP="scp $SSH_OPTS"
echo -n "Deploying to $DEPLOYHOST"
if ! ping $DEPLOYHOST -q -c 1 -W 2 >/dev/null; then
echo "host $DEPLOYHOST cannot be pinged, refusing to deploy." >&2
exit 2
fi
cat <<EOT
[ping OK]
Make sure that you have pushed the $DOCKER_IMAGE
docker image to Docker Hub.
press [ENTER] to continue, Ctrl+C to abort.
EOT
read dummy
#################################################################################
echo "==================================================================="
echo "Bringing remote Docker up to date…"
$SSH mkdir -p $REMOTE_DOCKER_COMPOSE_DIR
$SCP \
$ROOT/docker/{docker-compose.yml,renew-letsencrypt.sh,mongo-backup.{cron,sh}} \
$DEPLOYHOST:$REMOTE_DOCKER_COMPOSE_DIR
$SSH -T <<EOT
set -e
cd $REMOTE_DOCKER_COMPOSE_DIR
docker pull $DOCKER_IMAGE
docker-compose up -d
echo
echo "==================================================================="
echo "Clearing front page from Redis cache."
docker exec redis redis-cli DEL pwview//
EOT
# Notify Sentry of this new deploy.
# See https://sentry.io/blender-institute/blender-cloud/settings/release-tracking/
# and https://docs.sentry.io/api/releases/post-organization-releases/
# and https://sentry.io/api/
echo
echo "==================================================================="
REVISION=$(date +'%Y%m%d.%H%M%S.%Z')
echo "Notifying Sentry of this new deploy of revision $REVISION."
SENTRY_RELEASE_URL="$($SSH env PYTHONPATH="$REMOTE_SECRET_CONFIG_DIR" python3 -c "\"import config_secrets; print(config_secrets.SENTRY_RELEASE_URL)\"")"
curl -s "$SENTRY_RELEASE_URL" -XPOST -H 'Content-Type: application/json' -d "{\"version\": \"$REVISION\"}" | json_pp
echo
echo
echo "==================================================================="
echo "Deploy to $DEPLOYHOST done."
echo "==================================================================="

39
deploy/README.md Normal file

@@ -0,0 +1,39 @@
# Deploying to Production
```
workon blender-cloud # activate your virtualenv
cd $projectdir/deploy
./full-pull.sh
```
## The Details
Deployment consists of a few steps:
1. Populate a staging directory with the files from the production branches of the various projects.
2. Create Docker images.
3. Push the docker images to Docker Hub.
4. Pull the docker images on the production server and rebuild+restart the containers.
The scripts involved are:
- `2docker.sh`: performs step 1. above.
- `build-{xxx}.sh`: performs steps 2. and 3. above.
- `2server.sh`: performs step 4. above.
The `full-{xxx}.sh` scripts perform all the steps, and call into `build-{xxx}.sh`.
For `xxx` there are:
- `all`: Rebuild all Docker images from scratch. This is good for getting the latest updates to the
base image.
- `pull`: Pull the base and intermediate images from Docker Hub so that they are the same as the
last time someone pushed to production, then rebuilds the final Docker image.
- `quick`: Just rebuild the final Docker image. Only use this if the last time a deployment to
the production server was done was by you, on the machine you're working on now.
## Hacking Stuff
To deploy another branch than `production`, do `export STAGING_BRANCH=otherbranch` before starting
the above commands.

43
deploy/build-all.sh Executable file

@@ -0,0 +1,43 @@
#!/bin/bash -e
# macOS does not support readlink -f, so we use greadlink instead
if [[ `uname` == 'Darwin' ]]; then
command -v greadlink 2>/dev/null 2>&1 || { echo >&2 "Install greadlink using brew."; exit 1; }
readlink='greadlink'
else
readlink='readlink'
fi
ROOT="$(dirname "$(dirname "$($readlink -f "$0")")")"
case "$(basename "$0")" in
build-pull.sh)
docker pull armadillica/pillar_py:3.6
docker pull armadillica/pillar_wheelbuilder:latest
pushd "$ROOT/docker/3_buildwheels"
./build.sh
popd
pushd "$ROOT/docker/4_run"
./build.sh
;;
build-quick.sh)
pushd "$ROOT/docker/4_run"
./build.sh
;;
build-all.sh)
pushd "$ROOT/docker"
./full_rebuild.sh
;;
*)
echo "Unknown script $0, aborting" >&2
exit 1
esac
popd
echo
echo "Press [ENTER] to push the new Docker images."
read dummy
docker push armadillica/pillar_py:3.6
docker push armadillica/pillar_wheelbuilder:latest
docker push armadillica/blender_cloud:latest
echo
echo "Build is done, ready to update the server."

1
deploy/build-pull.sh Symbolic link

@@ -0,0 +1 @@
build-quick.sh

1
deploy/build-quick.sh Symbolic link

@@ -0,0 +1 @@
build-all.sh

9
deploy/full-all.sh Executable file

@@ -0,0 +1,9 @@
#!/bin/bash
set -e
NAME="$(basename "$0")"
./2docker.sh
./${NAME/full-/build-}
./2server.sh cloud2

1
deploy/full-pull.sh Symbolic link

@@ -0,0 +1 @@
full-all.sh

1
deploy/full-quick.sh Symbolic link

@@ -0,0 +1 @@
full-all.sh

10
docker/1_base/Dockerfile Normal file

@@ -0,0 +1,10 @@
FROM ubuntu:18.04
LABEL maintainer="Sybren A. Stüvel <sybren@blender.studio>"
RUN set -ex; \
apt-get update; \
DEBIAN_FRONTEND=noninteractive apt-get install \
-qyy -o APT::Install-Recommends=false -o APT::Install-Suggests=false \
tzdata openssl ca-certificates locales; \
locale-gen en_US.UTF-8 en_GB.UTF-8 nl_NL.UTF-8
ENV LANG en_US.UTF-8


@@ -1,16 +0,0 @@
FROM ubuntu:16.04
MAINTAINER Francesco Siddi <francesco@blender.org>
RUN apt-get update && apt-get install -qyy \
-o APT::Install-Recommends=false -o APT::Install-Suggests=false \
python-pip libffi6 openssl ffmpeg rsyslog logrotate
RUN mkdir -p /data/git/pillar \
&& mkdir -p /data/storage \
&& mkdir -p /data/config \
&& mkdir -p /data/venv \
&& mkdir -p /data/wheelhouse
RUN pip install virtualenv
RUN virtualenv /data/venv
RUN . /data/venv/bin/activate && pip install -U pip && pip install wheel

3
docker/1_base/build.sh Normal file → Executable file

@@ -1,3 +1,4 @@
#!/usr/bin/env bash
docker build -t pillar_base -f base.docker .;
# Uses --no-cache to always get the latest upstream (security) upgrades.
exec docker build --no-cache "$@" -t pillar_base .


@@ -1,3 +0,0 @@
#!/usr/bin/env bash
. /data/venv/bin/activate && pip wheel --wheel-dir=/data/wheelhouse -r /requirements.txt


@@ -1,26 +0,0 @@
FROM pillar_base
MAINTAINER Francesco Siddi <francesco@blender.org>
RUN apt-get update && apt-get install -qy \
git \
gcc \
libffi-dev \
libssl-dev \
pypy-dev \
python-dev \
python-imaging \
zlib1g-dev \
libjpeg-dev \
libtiff-dev \
python-crypto \
python-openssl
ENV WHEELHOUSE=/data/wheelhouse
ENV PIP_WHEEL_DIR=/data/wheelhouse
ENV PIP_FIND_LINKS=/data/wheelhouse
VOLUME /data/wheelhouse
ADD requirements.txt /requirements.txt
ADD build-wheels.sh /build-wheels.sh
ENTRYPOINT ["bash", "build-wheels.sh"]


@@ -1,11 +0,0 @@
#!/usr/bin/env bash
mkdir -p ../3_run/wheelhouse;
cp ../../requirements.txt .;
docker build -t pillar_build -f build.docker .;
docker run --rm \
-v "$(pwd)"/../3_run/wheelhouse:/data/wheelhouse \
pillar_build;
rm requirements.txt;


@@ -0,0 +1 @@
c3f30a0aff425dda77d19e02f420d6ba Python-3.6.6.tar.xz

61
docker/2_buildpy/build.sh Executable file

@@ -0,0 +1,61 @@
#!/usr/bin/env bash
set -e
# macOS does not support readlink -f, so we use greadlink instead
if [ $(uname) == 'Darwin' ]; then
command -v greadlink 2>/dev/null 2>&1 || { echo >&2 "Install greadlink using brew."; exit 1; }
readlink='greadlink'
else
readlink='readlink'
fi
PYTHONTARGET=$($readlink -f ./python)
mkdir -p "$PYTHONTARGET"
echo "Python will be built to $PYTHONTARGET"
docker build -t pillar_build -f buildpy.docker .
# Use the docker image to build Python 3.6 and mod-wsgi
GID=$(id -g)
docker run --rm -i \
-v "$PYTHONTARGET:/opt/python" \
pillar_build <<EOT
set -e
cd \$PYTHONSOURCE
./configure \
--prefix=/opt/python \
--enable-ipv6 \
--enable-shared \
--with-ensurepip=upgrade
make -j8 install
# Make sure we can run Python
ldconfig
# Upgrade pip
/opt/python/bin/python3 -m pip install -U pip
# Build mod-wsgi-py3 for Python 3.6
cd /dpkg/mod-wsgi-*
./configure --with-python=/opt/python/bin/python3
make -j8 install
mkdir -p /opt/python/mod-wsgi
cp /usr/lib/apache2/modules/mod_wsgi.so /opt/python/mod-wsgi
chown -R $UID:$GID /opt/python/*
EOT
# Strip some stuff we don't need from the Python install.
rm -rf $PYTHONTARGET/lib/python3.*/test
rm -rf $PYTHONTARGET/lib/python3.*/config-3.*/libpython3.*.a
find $PYTHONTARGET/lib -name '*.so.*' -o -name '*.so' | while read libname; do
chmod u+w "$libname"
strip "$libname"
done
# Create another docker image which contains the actual Python.
# This one will serve as base for the Wheel builder and the
# production image.
docker build -t armadillica/pillar_py:3.6 -f includepy.docker .


@@ -0,0 +1,35 @@
FROM pillar_base
LABEL maintainer="Sybren A. Stüvel <sybren@blender.studio>"
RUN sed -i 's/^# deb-src/deb-src/' /etc/apt/sources.list && \
apt-get update && \
DEBIAN_FRONTEND=noninteractive apt-get install -qy \
build-essential \
apache2-dev \
checkinstall \
curl
RUN apt-get build-dep -y python3.6
ADD Python-3.6.6.tar.xz.md5 /Python-3.6.6.tar.xz.md5
# Install Python sources
RUN curl -O https://www.python.org/ftp/python/3.6.6/Python-3.6.6.tar.xz && \
md5sum -c Python-3.6.6.tar.xz.md5 && \
tar xf Python-3.6.6.tar.xz && \
rm -v Python-3.6.6.tar.xz
# Install mod-wsgi sources
RUN mkdir -p /dpkg && cd /dpkg && apt-get source libapache2-mod-wsgi-py3
# To be able to install Python outside the docker.
VOLUME /opt/python
# To be able to run Python; after building, ldconfig has to be re-run to do this.
# This makes it easier to use Python right after building (for example to build
# mod-wsgi for Python 3.6).
RUN echo /opt/python/lib > /etc/ld.so.conf.d/python.conf
RUN ldconfig
ENV PATH=/opt/python/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
ENV PYTHONSOURCE=/Python-3.6.6


@@ -0,0 +1,13 @@
FROM pillar_base
LABEL maintainer="Sybren A. Stüvel <sybren@blender.studio>"
ADD python /opt/python
RUN echo /opt/python/lib > /etc/ld.so.conf.d/python.conf
RUN ldconfig
RUN echo Python is installed in /opt/python/ > README.python
ENV PATH=/opt/python/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
RUN cd /opt/python/bin && \
ln -s python3 python


@@ -0,0 +1,20 @@
FROM armadillica/pillar_py:3.6
LABEL maintainer="Sybren A. Stüvel <sybren@blender.studio>"
RUN set -ex; \
apt-get update; \
DEBIAN_FRONTEND=noninteractive apt-get install -qy \
git \
build-essential \
checkinstall \
libffi-dev \
libssl-dev \
libjpeg-dev \
zlib1g-dev
ENV WHEELHOUSE=/data/wheelhouse
ENV PIP_WHEEL_DIR=/data/wheelhouse
ENV PIP_FIND_LINKS=/data/wheelhouse
RUN mkdir -p $WHEELHOUSE
VOLUME /data/wheelhouse

62
docker/3_buildwheels/build.sh Executable file

@@ -0,0 +1,62 @@
#!/usr/bin/env bash
DOCKER_IMAGE_NAME=armadillica/pillar_wheelbuilder
set -e
# macOS does not support readlink -f, so we use greadlink instead
if [ $(uname) == 'Darwin' ]; then
command -v greadlink 2>/dev/null 2>&1 || { echo >&2 "Install greadlink using brew."; exit 1; }
readlink='greadlink'
else
readlink='readlink'
fi
TOPDEVDIR="$($readlink -f ../../..)"
echo "Top-level development dir is $TOPDEVDIR"
WHEELHOUSE="$($readlink -f ../4_run/wheelhouse)"
if [ -z "$WHEELHOUSE" ]; then
echo "Error, ../4_run might not exist." >&2
exit 2
fi
echo "Wheelhouse is $WHEELHOUSE"
mkdir -p "$WHEELHOUSE"
rm -f "$WHEELHOUSE"/*
docker build -t $DOCKER_IMAGE_NAME:latest .
GID=$(id -g)
docker run --rm -i \
-v "$WHEELHOUSE:/data/wheelhouse" \
-v "$TOPDEVDIR:/data/topdev" \
$DOCKER_IMAGE_NAME <<EOT
set -e
set -x
# Globally upgrade Pip, so that we can get a compatible version of the cryptography package.
# See https://github.com/pyca/cryptography/issues/5771
pip3 install --upgrade pip setuptools wheel
# Pin poetry to 1.0, as more recent versions do not support nested filesystem package
# dependencies.
pip3 install wheel poetry==1.0 cryptography==2.7
# Build wheels for all dependencies.
cd /data/topdev/blender-cloud
poetry install --no-dev
# Apparently pip doesn't like projects without setup.py, so it thinks we have 'pillar-svnman' as a
# requirement (because that's the name of the directory). We have to grep that out.
poetry run pip3 freeze | grep -v '\(pillar\)\|\(^-[ef] \)' > \$WHEELHOUSE/requirements.txt
pip3 wheel --wheel-dir=\$WHEELHOUSE -r \$WHEELHOUSE/requirements.txt
chown -R $UID:$GID \$WHEELHOUSE
EOT
# Remove our own projects, they shouldn't be installed as wheel (for now).
rm -f $WHEELHOUSE/{attract,flamenco,pillar,pillarsdk}*.whl
echo "Build of $DOCKER_IMAGE_NAME:latest is done."


@@ -1,39 +0,0 @@
<VirtualHost *:80>
# EnableSendfile on
XSendFile on
XSendFilePath /data/storage/pillar
XSendFilePath /data/git/pillar
XSendFilePath /data/venv/lib/python2.7/site-packages/attract/static/
XSendFilePath /data/venv/lib/python2.7/site-packages/flamenco/static/
XsendFilePath /data/git/blender-cloud
ServerAdmin webmaster@localhost
DocumentRoot /var/www/html
# Available loglevels: trace8, ..., trace1, debug, info, notice, warn,
# error, crit, alert, emerg.
# It is also possible to configure the loglevel for particular
# modules, e.g.
# LogLevel info ssl:warn
ErrorLog ${APACHE_LOG_DIR}/error.log
CustomLog ${APACHE_LOG_DIR}/access.log combined
WSGIDaemonProcess cloud processes=4 threads=1 maximum-requests=10000
WSGIPassAuthorization On
WSGIScriptAlias / /data/git/blender-cloud/runserver.wsgi \
process-group=cloud application-group=%{GLOBAL}
<Directory /data/git/blender-cloud>
<Files runserver.wsgi>
Require all granted
</Files>
</Directory>
# Temporary edit to remap the old cloudapi.blender.org to cloud.blender.org/api
RewriteEngine On
RewriteCond "%{HTTP_HOST}" "^cloudapi\.blender\.org" [NC]
RewriteRule (.*) /api$1 [PT]
</VirtualHost>


@@ -1,5 +0,0 @@
#!/usr/bin/env bash
cp ../../requirements.txt .;
docker build -t armadillica/blender_cloud -f run.docker .;
rm requirements.txt;


@@ -1,25 +0,0 @@
#!/usr/bin/env bash
if [ ! -f /installed ]; then
echo "Installing pillar and pillar-sdk"
# TODO: currently doing pip install -e takes a long time, so we symlink
# . /data/venv/bin/activate && pip install -e /data/git/pillar
ln -s /data/git/pillar/pillar /data/venv/lib/python2.7/site-packages/pillar
# . /data/venv/bin/activate && pip install -e /data/git/attract
ln -s /data/git/attract/attract /data/venv/lib/python2.7/site-packages/attract
# . /data/venv/bin/activate && pip install -e /data/git/flamenco/packages/flamenco
ln -s /data/git/flamenco/packages/flamenco/flamenco/ /data/venv/lib/python2.7/site-packages/flamenco
# . /data/venv/bin/activate && pip install -e /data/git/pillar-python-sdk
ln -s /data/git/pillar-python-sdk/pillarsdk /data/venv/lib/python2.7/site-packages/pillarsdk
touch installed
fi
if [ "$DEV" = "true" ]; then
echo "Running in development mode"
cd /data/git/blender-cloud
bash /manage.sh runserver --host='0.0.0.0'
else
# Run Apache
a2enmod rewrite
/usr/sbin/apache2ctl -D FOREGROUND
fi


@@ -1,5 +0,0 @@
#!/usr/bin/env bash -e
. /data/venv/bin/activate
cd /data/git/blender-cloud
python manage.py "$@"


@@ -1,46 +0,0 @@
FROM pillar_base
RUN apt-get update && apt-get install -qyy \
-o APT::Install-Recommends=true -o APT::Install-Suggests=false \
git \
apache2 \
libapache2-mod-wsgi \
libapache2-mod-xsendfile \
libjpeg8 \
libtiff5 \
nano vim curl \
&& rm -rf /var/lib/apt/lists/*
ENV APACHE_RUN_USER www-data
ENV APACHE_RUN_GROUP www-data
ENV APACHE_LOG_DIR /var/log/apache2
ENV APACHE_PID_FILE /var/run/apache2.pid
ENV APACHE_RUN_DIR /var/run/apache2
ENV APACHE_LOCK_DIR /var/lock/apache2
RUN mkdir -p $APACHE_RUN_DIR $APACHE_LOCK_DIR $APACHE_LOG_DIR
ADD requirements.txt /requirements.txt
ADD wheelhouse /data/wheelhouse
RUN . /data/venv/bin/activate \
&& pip install --no-index --find-links=/data/wheelhouse -r requirements.txt \
&& rm /requirements.txt
VOLUME /data/git/blender-cloud
VOLUME /data/git/pillar
VOLUME /data/git/pillar-python-sdk
VOLUME /data/config
VOLUME /data/storage
ENV USE_X_SENDFILE True
EXPOSE 80
EXPOSE 5000
ADD apache2.conf /etc/apache2/apache2.conf
ADD 000-default.conf /etc/apache2/sites-available/000-default.conf
ADD docker-entrypoint.sh /docker-entrypoint.sh
ADD manage.sh /manage.sh
ENTRYPOINT ["bash", "/docker-entrypoint.sh"]

65
docker/4_run/Dockerfile Executable file

@@ -0,0 +1,65 @@
FROM armadillica/pillar_py:3.6
LABEL maintainer="Sybren A. Stüvel <sybren@blender.studio>"
RUN set -ex; \
apt-get update; \
DEBIAN_FRONTEND=noninteractive apt-get install -qy \
-o APT::Install-Recommends=false -o APT::Install-Suggests=false \
git \
apache2 \
libapache2-mod-xsendfile \
libjpeg8 \
libtiff5 \
ffmpeg \
rsyslog logrotate \
nano vim-tiny curl; \
rm -rf /var/lib/apt/lists/*
RUN ln -s /usr/bin/vim.tiny /usr/bin/vim
ENV APACHE_RUN_USER www-data
ENV APACHE_RUN_GROUP www-data
ENV APACHE_LOG_DIR /var/log/apache2
ENV APACHE_PID_FILE /var/run/apache2.pid
ENV APACHE_RUN_DIR /var/run/apache2
ENV APACHE_LOCK_DIR /var/lock/apache2
RUN mkdir -p $APACHE_RUN_DIR $APACHE_LOCK_DIR $APACHE_LOG_DIR
ADD wheelhouse /data/wheelhouse
RUN pip3 install --no-index --find-links=/data/wheelhouse -r /data/wheelhouse/requirements.txt
VOLUME /data/config
VOLUME /data/storage
VOLUME /var/log
ENV USE_X_SENDFILE True
EXPOSE 80
EXPOSE 5000
ADD apache/remoteip.conf /etc/apache2/mods-available/
ADD apache/wsgi-py36.* /etc/apache2/mods-available/
RUN a2enmod remoteip && a2enmod rewrite && a2enmod wsgi-py36
ADD apache/apache2.conf /etc/apache2/apache2.conf
ADD apache/000-default.conf /etc/apache2/sites-available/000-default.conf
ADD apache/logrotate.conf /etc/logrotate.d/apache2
ADD *.sh /
# Remove some empty top-level directories we won't use anyway.
RUN rmdir /media /home 2>/dev/null || true
# This file includes some useful commands to have in the shell history
# for easy access.
ADD bash_history /root/.bash_history
ENTRYPOINT /docker-entrypoint.sh
# Add the most-changing files as last step for faster rebuilds.
ADD config_local.py /data/git/blender-cloud/
ADD staging /data/git
RUN python3 -c "import re, secrets; \
f = open('/data/git/blender-cloud/config_local.py', 'a'); \
h = re.sub(r'[_.~-]', '', secrets.token_urlsafe())[:8]; \
print(f'STATIC_FILE_HASH = {h!r}', file=f)"


@@ -0,0 +1,56 @@
<VirtualHost *:80>
XSendFile on
XSendFilePath /data/storage/pillar
XSendFilePath /data/git/pillar/pillar/web/static/
XSendFilePath /data/git/attract/attract/static/
XSendFilePath /data/git/flamenco/flamenco/static/
XsendFilePath /data/git/pillar-svnman/svnman/static/
XsendFilePath /data/git/blender-cloud/static/
XsendFilePath /data/git/blender-cloud/cloud/static/
ServerAdmin webmaster@localhost
DocumentRoot /var/www/html
# Available loglevels: trace8, ..., trace1, debug, info, notice, warn,
# error, crit, alert, emerg.
# It is also possible to configure the loglevel for particular
# modules, e.g.
# LogLevel info ssl:warn
ErrorLog ${APACHE_LOG_DIR}/error.log
CustomLog ${APACHE_LOG_DIR}/access.log combined
WSGIDaemonProcess cloud processes=2 threads=64 maximum-requests=10000
WSGIPassAuthorization On
WSGIScriptAlias / /data/git/blender-cloud/runserver.wsgi \
process-group=cloud application-group=%{GLOBAL}
<Directory /data/git/blender-cloud>
<Files runserver.wsgi>
Require all granted
</Files>
</Directory>
# Temporary edit to remap the old cloudapi.blender.org to cloud.blender.org/api
RewriteEngine On
RewriteCond "%{HTTP_HOST}" "^cloudapi\.blender\.org" [NC]
RewriteRule (.*) /api$1 [PT]
# Redirects for blender-cloud projects
RewriteRule "^/p/blender-cloud/?$" "/blog" [R=301,L]
RewriteRule "^/agent327/?$" "/p/agent-327" [R=301,L]
RewriteRule "^/caminandes/?$" "/p/caminandes-3" [R=301,L]
RewriteRule "^/cf2/?$" "/p/creature-factory-2" [R=301,L]
RewriteRule "^/characters/?$" "/p/characters" [R=301,L]
RewriteRule "^/gallery/?$" "/p/gallery" [R=301,L]
RewriteRule "^/hdri/?$" "/p/hdri" [R=301,L]
RewriteRule "^/textures/?$" "/p/textures" [R=301,L]
RewriteRule "^/training/?$" "/courses" [R=301,L]
RewriteRule "^/spring/?$" "/p/spring" [R=301,L]
RewriteRule "^/hero/?$" "/p/hero" [R=301,L]
RewriteRule "^/coffee-run/?$" "/p/coffee-run" [R=301,L]
RewriteRule "^/settlers/?$" "/p/settlers" [R=301,L]
# Waking the forest was moved from the art gallery to its own workshop
RewriteRule "^/p/gallery/58cfec4f88ac8f1440aeb309/?$" "/p/waking-the-forest" [R=301,L]
</VirtualHost>

@@ -133,9 +133,9 @@ AccessFileName .htaccess
# Note that the use of %{X-Forwarded-For}i instead of %h is not recommended.
# Use mod_remoteip instead.
#
LogFormat "%v:%p %h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" vhost_combined
LogFormat "%h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" combined
LogFormat "%h %l %u %t \"%r\" %>s %O" common
LogFormat "%v:%p %a %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" vhost_combined
LogFormat "%a %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" combined
LogFormat "%a %l %u %t \"%r\" %>s %O" common
LogFormat "%{Referer}i -> %U" referer
LogFormat "%{User-agent}i" agent

@@ -0,0 +1,21 @@
/var/log/apache2/*.log {
daily
missingok
rotate 14
size 100M
compress
delaycompress
notifempty
create 640 root adm
sharedscripts
postrotate
if /etc/init.d/apache2 status > /dev/null ; then \
/etc/init.d/apache2 reload > /dev/null; \
fi;
endscript
prerotate
if [ -d /etc/logrotate.d/httpd-prerotate ]; then \
run-parts /etc/logrotate.d/httpd-prerotate; \
fi; \
endscript
}

@@ -0,0 +1,2 @@
RemoteIPHeader X-Forwarded-For
RemoteIPInternalProxy 172.16.0.0/12

@@ -0,0 +1,122 @@
<IfModule mod_wsgi.c>
#This config file is provided to give an overview of the directives,
#which are only allowed in the 'server config' context.
#For a detailed description of all available directives please read
#http://code.google.com/p/modwsgi/wiki/ConfigurationDirectives
#WSGISocketPrefix: Configure directory to use for daemon sockets.
#
#Apache's DEFAULT_REL_RUNTIMEDIR should be the proper place for WSGI's
#Socket. In case you want to mess with the permissions of the directory,
#you need to define WSGISocketPrefix to an alternative directory.
#See http://code.google.com/p/modwsgi/wiki/ConfigurationIssues for more
#information
#WSGISocketPrefix /var/run/apache2/wsgi
#WSGIPythonOptimize: Enables basic Python optimisation features.
#
#Sets the level of Python compiler optimisations. The default is '0'
#which means no optimisations are applied.
#Setting the optimisation level to '1' or above will have the effect
#of enabling basic Python optimisations and changes the filename
#extension for compiled (bytecode) files from .pyc to .pyo.
#When the optimisation level is set to '2', doc strings will not be
#generated and retained. This will result in a smaller memory footprint,
#but may cause some Python packages which interrogate doc strings in some
#way to fail.
#WSGIPythonOptimize 0
#WSGIPythonPath: Additional directories to search for Python modules,
# overriding the PYTHONPATH environment variable.
#
#Used to specify additional directories to search for Python modules.
#If multiple directories are specified they should be separated by a ':'.
WSGIPythonPath /opt/python/lib/python3.6/site-packages
#WSGIPythonEggs: Directory to use for Python eggs cache.
#
#Used to specify the directory to be used as the Python eggs cache
#directory for all sub interpreters created within embedded mode.
#This directive achieves the same effect as having set the
#PYTHON_EGG_CACHE environment variable.
#Note that the directory specified must exist and be writable by the user
#that the Apache child processes run as. The directive only applies to
#mod_wsgi embedded mode. To set the Python eggs cache directory for
#mod_wsgi daemon processes, use the 'python-eggs' option to the
#WSGIDaemonProcess directive instead.
#WSGIPythonEggs directory
#WSGIRestrictEmbedded: Enable restrictions on use of embedded mode.
#
#The WSGIRestrictEmbedded directive determines whether mod_wsgi embedded
#mode is enabled or not. If set to 'On' and the restriction on embedded
#mode is therefore enabled, any attempt to make a request against a
#WSGI application which hasn't been properly configured so as to be
#delegated to a daemon mode process will fail with an HTTP internal server
#error response.
#WSGIRestrictEmbedded On|Off
#WSGIRestrictStdin: Enable restrictions on use of STDIN.
#WSGIRestrictStdout: Enable restrictions on use of STDOUT.
#WSGIRestrictSignal: Enable restrictions on use of signal().
#
#Well behaved WSGI applications neither should try to read/write from/to
#STDIN/STDOUT, nor should they try to register signal handlers. If your
#application needs an exception from this rule, you can disable the
#restrictions here.
#WSGIRestrictStdin On
#WSGIRestrictStdout On
#WSGIRestrictSignal On
#WSGIAcceptMutex: Specify type of accept mutex used by daemon processes.
#
#The WSGIAcceptMutex directive sets the method that mod_wsgi will use to
#serialize multiple daemon processes in a process group accepting requests
#on a socket connection from the Apache child processes. If this directive
#is not defined then the same type of mutex mechanism as used by Apache for
#the main Apache child processes when accepting connections from a client
#will be used. If set the method types are the same as for the Apache
#AcceptMutex directive.
#WSGIAcceptMutex default
#WSGIImportScript: Specify a script file to be loaded on process start.
#
#The WSGIImportScript directive can be used to specify a script file to be
#loaded when a process starts. Options must be provided to indicate the
#name of the process group and the application group into which the script
#will be loaded.
#WSGIImportScript process-group=name application-group=name
#WSGILazyInitialization: Enable/disable lazy initialisation of Python.
#
#The WSGILazyInitialization directive sets whether or not the Python
#interpreter is preinitialised within the Apache parent process or whether
#lazy initialisation is performed, and the Python interpreter only
#initialised in the Apache server processes or mod_wsgi daemon processes
#after they have forked from the Apache parent process.
#WSGILazyInitialization On|Off
</IfModule>

@@ -0,0 +1 @@
LoadModule wsgi_module /opt/python/mod-wsgi/mod_wsgi.so

@@ -0,0 +1,9 @@
bash docker-entrypoint.sh
env | sort
apache2ctl start
apache2ctl graceful
/manage.sh operations worker -- -C
celery status --broker amqp://guest:guest@rabbit:5672//
celery events --broker amqp://guest:guest@rabbit:5672//
tail -n 40 -f /var/log/apache2/access.log
tail -n 40 -f /var/log/apache2/error.log

docker/4_run/build.sh (Executable file)
@@ -0,0 +1,5 @@
#!/bin/bash -e
docker build -t armadillica/blender_cloud:latest .
echo "Done, built armadillica/blender_cloud:latest"

docker/4_run/celery-beat.sh (Executable file)
@@ -0,0 +1,6 @@
#!/usr/bin/env bash
source /install_scripts.sh
source /manage.sh celery beat -- \
--schedule /data/storage/pillar/celerybeat-schedule.db \
--pid /data/storage/pillar/celerybeat.pid

docker/4_run/celery-worker.sh (Executable file)
@@ -0,0 +1,4 @@
#!/usr/bin/env bash
source /install_scripts.sh
source /manage.sh celery worker -- -C

@@ -0,0 +1,123 @@
import os
from collections import defaultdict
DEBUG = False
SCHEME = 'https'
PREFERRED_URL_SCHEME = 'https'
SERVER_NAME = 'cloud.blender.org'
# os.environ['OAUTHLIB_INSECURE_TRANSPORT'] = 'true'
os.environ['PILLAR_MONGO_DBNAME'] = 'cloud'
os.environ['PILLAR_MONGO_PORT'] = '27017'
os.environ['PILLAR_MONGO_HOST'] = 'mongo'
USE_X_SENDFILE = True
STORAGE_BACKEND = 'gcs'
CDN_SERVICE_DOMAIN = 'blendercloud-pro.r.worldssl.net'
CDN_CONTENT_SUBFOLDER = ''
CDN_STORAGE_ADDRESS = 'push-11.cdnsun.com'
CACHE_TYPE = 'redis' # null
CACHE_KEY_PREFIX = 'pw_'
CACHE_REDIS_HOST = 'redis'
CACHE_REDIS_PORT = '6379'
CACHE_REDIS_URL = 'redis://redis:6379'
PILLAR_SERVER_ENDPOINT = 'https://cloud.blender.org/api/'
BLENDER_ID_ENDPOINT = 'https://www.blender.org/id/'
GCLOUD_APP_CREDENTIALS = '/data/config/google_app.json'
GCLOUD_PROJECT = 'blender-cloud'
MAIN_PROJECT_ID = '563a9c8cf0e722006ce97b03'
# MAIN_PROJECT_ID = '57aa07c088bef606e89078bd'
ALGOLIA_INDEX_USERS = 'pro_Users'
ALGOLIA_INDEX_NODES = 'pro_Nodes'
ZENCODER_NOTIFICATIONS_URL = 'https://cloud.blender.org/api/encoding/zencoder/notifications'
FILE_LINK_VALIDITY = defaultdict(
lambda: 3600 * 24 * 30, # default of 1 month.
gcs=3600 * 23, # 23 hours for Google Cloud Storage.
cdnsun=3600 * 23
)
LOGGING = {
'version': 1,
'formatters': {
'default': {'format': '%(levelname)8s %(name)s %(message)s'}
},
'handlers': {
'console': {
'class': 'logging.StreamHandler',
'formatter': 'default',
'stream': 'ext://sys.stderr',
}
},
'loggers': {
'pillar': {'level': 'INFO'},
# 'pillar.auth': {'level': 'DEBUG'},
# 'pillar.api.blender_id': {'level': 'DEBUG'},
# 'pillar.api.blender_cloud.subscription': {'level': 'DEBUG'},
'bcloud': {'level': 'INFO'},
'cloud': {'level': 'INFO'},
'attract': {'level': 'INFO'},
'flamenco': {'level': 'INFO'},
# 'pillar.api.file_storage': {'level': 'DEBUG'},
# 'pillar.api.file_storage.ensure_valid_link': {'level': 'INFO'},
'pillar.api.file_storage.refresh_links_for_backend': {'level': 'DEBUG'},
'werkzeug': {'level': 'DEBUG'},
'eve': {'level': 'WARNING'},
# 'elasticsearch': {'level': 'DEBUG'},
},
'root': {
'level': 'WARNING',
'handlers': [
'console',
],
}
}
# Latest version of the add-on.
BLENDER_CLOUD_ADDON_VERSION = '1.9.0'
REDIRECTS = {
# For old links, refer to the services page (hopefully it refreshes then)
'downloads/blender_cloud-latest-bundle.zip': 'https://cloud.blender.org/services#blender-addon',
# Latest Blender Cloud add-on.
'downloads/blender_cloud-latest-addon.zip':
f'https://storage.googleapis.com/institute-storage/addons/'
f'blender_cloud-{BLENDER_CLOUD_ADDON_VERSION}.addon.zip',
# Redirect old Grafista endpoint to /stats
'/stats/': '/stats',
}
UTM_LINKS = {
'cartoon_brew': {
'image': 'https://imgur.com/13nQTi3.png',
'link': 'https://store.blender.org/product/membership/'
}
}
SVNMAN_REPO_URL = 'https://svn.blender.cloud/repo/'
SVNMAN_API_URL = 'https://svn.blender.cloud/api/'
# Mail options, see pillar.celery.email_tasks.
SMTP_HOST = 'proog.blender.org'
SMTP_PORT = 25
SMTP_USERNAME = 'server@blender.cloud'
SMTP_TIMEOUT = 30 # timeout in seconds, https://docs.python.org/3/library/smtplib.html#smtplib.SMTP
MAIL_RETRY = 180 # in seconds, delay until trying to send an email again.
MAIL_DEFAULT_FROM_NAME = 'Blender Cloud'
MAIL_DEFAULT_FROM_ADDR = 'cloudsupport@blender.org'
# MUST be 8 characters long, see pillar.flask_extra.HashedPathConverter
# STATIC_FILE_HASH = '12345678'
# The value used in production is appended from Dockerfile.

@@ -0,0 +1,15 @@
#!/usr/bin/env bash
source /install_scripts.sh
# Make sure that log rotation works.
mkdir -p ${APACHE_LOG_DIR}
service cron start
if [ "$DEV" = "true" ]; then
echo "Running in development mode"
cd /data/git/blender-cloud
exec bash /manage.sh runserver --host='0.0.0.0'
else
exec /usr/sbin/apache2ctl -D FOREGROUND
fi

@@ -0,0 +1,21 @@
#!/bin/sh
if [ -f /installed ]; then
return
fi
SITEPKG=$(echo /opt/python/lib/python3.*/site-packages)
echo "Installing Blender Cloud packages into $SITEPKG"
# TODO: 'pip3 install -e' runs 'setup.py develop', which runs 'setup.py egg_info',
# which can't write the egg info to the read-only /data/git volume. This is why
# we manually install the links.
for SUBPROJ in /data/git/{pillar,pillar-python-sdk,attract,flamenco,pillar-svnman}; do
NAME=$(python3 $SUBPROJ/setup.py --name)
echo "... $NAME"
echo $SUBPROJ >> $SITEPKG/easy-install.pth
echo $SUBPROJ > $SITEPKG/$NAME.egg-link
done
echo "All packages installed."
touch /installed

docker/4_run/manage.sh (Executable file)
@@ -0,0 +1,5 @@
#!/usr/bin/env bash
set -e
cd /data/git/blender-cloud
exec python manage.py "$@"

docker/README.md (Normal file)
@@ -0,0 +1,96 @@
# Setting up a production machine
To get the Docker stack up and running, we use the following steps on an Ubuntu 16.10 machine.
## 0. Basic stuff
Install the machine and use `locale-gen nl_NL.UTF-8` or similar commands to generate locale
definitions. Set up automatic security updates and backups, the usual.
## 1. Install Docker
Install Docker itself, as described in the
[Docker CE for Ubuntu manual](https://store.docker.com/editions/community/docker-ce-server-ubuntu?tab=description):
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) stable"
apt-get update
apt-get install docker-ce
## 2. Configure Docker to use "overlay"
Configure Docker to use "overlay" instead of "aufs" for the images. This prevents
[segfaults in auplink](https://bugs.launchpad.net/ubuntu/+source/aufs-tools/+bug/1442568).
1. Set `DOCKER_OPTS="-s overlay"` in `/etc/default/docker`
2. Copy `/lib/systemd/system/docker.service` to `/etc/systemd/system/docker.service`.
   This allows later upgrading of Docker without overwriting the changes we're about to make.
3. Edit the `[Service]` section of `/etc/systemd/system/docker.service`:
    1. Add `EnvironmentFile=/etc/default/docker`
    2. Append ` $DOCKER_OPTS` to the `ExecStart` line
4. Run `systemctl daemon-reload`
5. Remove all your containers and images.
6. Restart Docker: `systemctl restart docker` (a command sketch of these steps follows below)
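A rough sketch of the above, assuming a systemd-based host; the `ExecStart` line shown is illustrative only, so check the actual line in your unit file:
# Steps 1-2: switch the storage driver and copy the unit file so Docker
# upgrades don't overwrite our changes.
echo 'DOCKER_OPTS="-s overlay"' >> /etc/default/docker
cp /lib/systemd/system/docker.service /etc/systemd/system/docker.service
# Step 3: the [Service] section of the copied file then needs something like:
#   EnvironmentFile=/etc/default/docker
#   ExecStart=/usr/bin/dockerd -H fd:// $DOCKER_OPTS
# Steps 4-6: reload systemd, remove containers/images, restart Docker.
systemctl daemon-reload
systemctl restart docker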
## 3. Pull the Blender Cloud docker image
`docker pull armadillica/blender_cloud:latest`
## 4. Get docker-compose + our repositories
See the [Quick setup](../README.md) for how to get those. Then run:
cd /data/git/blender-cloud/docker
docker-compose up -d
Set up permissions for the Docker volumes; the following paths should be writable as described (see the commands sketched after this list):
- `/data/storage/pillar`: writable by `www-data` and `root` (do a `chown root:www-data`
and `chmod 2770`).
- `/data/storage/db`: writable by uid 999.
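A minimal sketch of those permission changes, assuming the directories already exist and `www-data` is present on the host (uid 999 is the database user inside the `mongo` container):
chown root:www-data /data/storage/pillar
chmod 2770 /data/storage/pillar
chown 999 /data/storage/db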
## 5. Set up TLS
Place TLS certificates in `/data/certs/{cloud,cloudapi}.blender.org.pem`.
They should contain (in order) the private key, the host certificate, and the
CA certificate.
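For example, a bundle could be assembled like this (the input file names are hypothetical; only the order matters):
# order: private key, host certificate, CA certificate
cat privkey.pem certificate.pem ca.pem > /data/certs/cloud.blender.org.pem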
## 6. Create a local config
Blender Cloud expects the following files to exist:
- `/data/git/blender-cloud/config_local.py` with machine-local configuration overrides
- `/data/config/google_app.json` with Google Cloud Storage credentials.
When run from Docker, the `docker/4_run/config_local.py` file will be used. Overrides for that file
can be placed in `/data/config/config_secrets.py`.
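As an illustration only: the override file is presumably plain Python (it is passed to the containers via `PILLAR_CONFIG`, see docker-compose.yml below), and the setting names here are simply copied from `config_local.py` above:
cat > /data/config/config_secrets.py <<'EOF'
# Hypothetical overrides; any setting from config_local.py can be redefined here.
SMTP_USERNAME = 'server@blender.cloud'
SMTP_HOST = 'proog.blender.org'
EOF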
## 7. ElasticSearch & Kibana
ElasticSearch and Kibana run in our self-rolled images. This is needed because by default
- ElasticSearch uses up to 2 GB of RAM, which is too much for our droplet, and
- the Docker images contain the proprietary X-Pack plugin, which we don't want.
This also gives us the opportunity to let Kibana do its optimization when we build the image, rather
than every time the container is recreated.
`/data/storage/elasticsearch` needs to be writable by UID 1000, GID 1000.
Kibana connects to [ElasticProxy](https://github.com/armadillica/elasticproxy), which only allows
GET, HEAD, and some specific POST requests. This ensures that the public-facing Kibana cannot be
used to change the ElasticSearch database.
Production Kibana can be placed in read-only mode, but this is not necessary now that we use
ElasticProxy. However, I've left this in here as reference.
`curl -XPUT 'localhost:9200/.kibana/_settings' -d '{ "index.blocks.read_only" : true }'`
If editing is desired, temporarily turn off read-only mode:
`curl -XPUT 'localhost:9200/.kibana/_settings' -d '{ "index.blocks.read_only" : false }'`

@@ -1,16 +0,0 @@
#!/usr/bin/env bash
set -x;
set -e;
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
cd $DIR;
cd 1_base/;
bash build.sh;
cd ../2_build/;
bash build.sh;
cd ../3_run/;
bash build.sh;

@@ -1,68 +1,183 @@
mongo:
image: mongo
container_name: mongo
restart: always
volumes:
- /data/storage/db:/data/db
ports:
- "127.0.0.1:27017:27017"
redis:
image: redis
container_name: redis
restart: always
blender_cloud:
image: armadillica/blender_cloud
container_name: blender_cloud
restart: always
environment:
VIRTUAL_HOST: http://cloudapi.blender.org/*,https://cloudapi.blender.org/*,http://cloud.blender.org/*,https://cloud.blender.org/*,http://pillar-web/*
VIRTUAL_HOST_WEIGHT: 10
FORCE_SSL: "true"
volumes:
- /data/git/blender-cloud:/data/git/blender-cloud:ro
- /data/git/attract:/data/git/attract:ro
- /data/git/flamenco:/data/git/flamenco:ro
- /data/git/pillar:/data/git/pillar:ro
- /data/git/pillar-python-sdk:/data/git/pillar-python-sdk:ro
- /data/config:/data/config:ro
- /data/storage/pillar:/data/storage/pillar
links:
- mongo
- redis
# notifserv:
# container_name: notifserv
# image: armadillica/pillar-notifserv:cd8fa678436563ac3b800b2721e36830c32e4656
# restart: always
# links:
# - mongo
# environment:
# VIRTUAL_HOST: https://cloud.blender.org/notifications*,http://pillar-web/notifications*
# VIRTUAL_HOST_WEIGHT: 20
# FORCE_SSL: true
grafista:
image: armadillica/grafista
container_name: grafista
restart: always
environment:
VIRTUAL_HOST: http://cloud.blender.org/stats/*,https://cloud.blender.org/stats/*,http://blender-cloud/stats/*
VIRTUAL_HOST_WEIGHT: 20
FORCE_SSL: "true"
volumes:
- /data/git/grafista:/data/git/grafista:ro
- /data/storage/grafista:/data/storage
haproxy:
image: dockercloud/haproxy
container_name: haproxy
restart: always
ports:
- "443:443"
- "80:80"
environment:
- CERT_FOLDER=/certs/
- TIMEOUT=connect 5s, client 5m, server 10m
links:
- blender_cloud
- grafista
# - notifserv
volumes:
- '/data/certs:/certs'
version: '3.4'
services:
mongo:
image: mongo:3.4
container_name: mongo
restart: always
volumes:
- /data/storage/db:/data/db
- /data/storage/db-bak:/data/db-bak # for backing up stuff etc.
ports:
- "127.0.0.1:27017:27017"
logging:
driver: "json-file"
options:
max-size: "200k"
max-file: "20"
# Databases in use:
# 0: Flask Cache
# 1: Celery (backend)
# 2: Celery (broker)
redis:
image: redis:5.0
container_name: redis
restart: always
ports:
- "127.0.0.1:6379:6379"
logging:
driver: "json-file"
options:
max-size: "200k"
max-file: "20"
elastic:
# This image is defined in blender-cloud/docker/elastic
image: armadillica/elasticsearch:6.1.1
container_name: elastic
restart: always
volumes:
# NOTE: this path must be writable by UID=1000 GID=1000.
- /data/storage/elastic:/usr/share/elasticsearch/data
ports:
- "127.0.0.1:9200:9200"
environment:
ES_JAVA_OPTS: "-Xms256m -Xmx256m"
logging:
driver: "json-file"
options:
max-size: "200k"
max-file: "20"
elasticproxy:
image: armadillica/elasticproxy:1.2
container_name: elasticproxy
restart: always
command: /elasticproxy -elastic http://elastic:9200/
depends_on:
- elastic
logging:
driver: "json-file"
options:
max-size: "200k"
max-file: "20"
kibana:
# This image is defined in blender-cloud/docker/elastic
image: armadillica/kibana:6.1.1
container_name: kibana
restart: always
environment:
SERVER_NAME: "stats.cloud.blender.org"
ELASTICSEARCH_URL: http://elasticproxy:9200
CONSOLE_ENABLED: 'false'
VIRTUAL_HOST: http://stats.cloud.blender.org/*,https://stats.cloud.blender.org/*,http://stats.cloud.local/*,https://stats.cloud.local/*
VIRTUAL_HOST_WEIGHT: 20
FORCE_SSL: "true"
# See https://github.com/elastic/kibana/issues/5170#issuecomment-163042525
NODE_OPTIONS: "--max-old-space-size=200"
depends_on:
- elasticproxy
logging:
driver: "json-file"
options:
max-size: "200k"
max-file: "20"
blender_cloud:
image: armadillica/blender_cloud:latest
container_name: blender_cloud
restart: always
environment:
VIRTUAL_HOST: http://cloud.blender.org/*,https://cloud.blender.org/*,http://cloud.local/*,https://cloud.local/*
VIRTUAL_HOST_WEIGHT: 10
FORCE_SSL: "true"
GZIP_COMPRESSION_TYPE: "text/html text/plain text/css application/javascript"
PILLAR_CONFIG: /data/config/config_secrets.py
volumes:
# format: HOST:CONTAINER
- /data/config:/data/config:ro
- /data/storage/pillar:/data/storage/pillar
- /data/log:/var/log
depends_on:
- mongo
- redis
celery_worker:
image: armadillica/blender_cloud:latest
entrypoint: /celery-worker.sh
container_name: celery_worker
restart: always
environment:
PILLAR_CONFIG: /data/config/config_secrets.py
volumes:
# format: HOST:CONTAINER
- /data/config:/data/config:ro
- /data/storage/pillar:/data/storage/pillar
- /data/log:/var/log
depends_on:
- mongo
- redis
logging:
driver: "json-file"
options:
max-size: "200k"
max-file: "20"
celery_beat:
image: armadillica/blender_cloud:latest
entrypoint: /celery-beat.sh
container_name: celery_beat
restart: always
environment:
PILLAR_CONFIG: /data/config/config_secrets.py
volumes:
# format: HOST:CONTAINER
- /data/config:/data/config:ro
- /data/storage/pillar:/data/storage/pillar
- /data/log:/var/log
depends_on:
- mongo
- redis
- celery_worker
logging:
driver: "json-file"
options:
max-size: "200k"
max-file: "20"
letsencrypt:
image: armadillica/picohttp:1.0
container_name: letsencrypt
restart: always
environment:
WEBROOT: /data/letsencrypt
LISTEN: '[::]:80'
VIRTUAL_HOST: http://cloud.blender.org/.well-known/*, http://stats.cloud.blender.org/.well-known/*
VIRTUAL_HOST_WEIGHT: 30
volumes:
- /data/letsencrypt:/data/letsencrypt
haproxy:
# This image is defined in blender-cloud/docker/haproxy
image: armadillica/haproxy:1.6.7
container_name: haproxy
restart: always
ports:
- "443:443"
- "80:80"
environment:
- ADDITIONAL_SERVICES=docker:blender_cloud,docker:letsencrypt,docker:kibana
- CERT_FOLDER=/certs/
- TIMEOUT=connect 5s, client 5m, server 10m
- SSL_BIND_CIPHERS=ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!MD5:!DSS
- SSL_BIND_OPTIONS=no-sslv3
- EXTRA_GLOBAL_SETTINGS=tune.ssl.default-dh-param 2048
depends_on:
- blender_cloud
- letsencrypt
- kibana
volumes:
- '/data/certs:/certs'
- /var/run/docker.sock:/var/run/docker.sock

@@ -0,0 +1,10 @@
FROM docker.elastic.co/elasticsearch/elasticsearch:6.1.1
LABEL maintainer Sybren A. Stüvel <sybren@blender.studio>
RUN elasticsearch-plugin remove --purge x-pack
ADD elasticsearch.yml jvm.options /usr/share/elasticsearch/config/
USER root
RUN chown -R elasticsearch:elasticsearch /usr/share/elasticsearch/config/
USER elasticsearch

@@ -0,0 +1,6 @@
FROM docker.elastic.co/kibana/kibana:6.1.1
LABEL maintainer Sybren A. Stüvel <sybren@blender.studio>
RUN bin/kibana-plugin remove x-pack
ADD kibana.yml /usr/share/kibana/config/kibana.yml
RUN kibana 2>&1 | grep -m 1 "Optimization of .* complete"

docker/elastic/build.sh (Executable file)
@@ -0,0 +1,15 @@
#!/bin/bash -e
# When updating this, also update the versions in Dockerfile-*, and make sure that
# it matches the versions of the elasticsearch and elasticsearch_dsl packages
# used in Pillar. Those don't have to match exactly, but the major version should.
VERSION=6.1.1
docker build -t armadillica/elasticsearch:${VERSION} -f Dockerfile-elastic .
docker build -t armadillica/kibana:${VERSION} -f Dockerfile-kibana .
docker tag armadillica/elasticsearch:${VERSION} armadillica/elasticsearch:latest
docker tag armadillica/kibana:${VERSION} armadillica/kibana:latest
echo "Done, built armadillica/elasticsearch:${VERSION} and armadillica/kibana:${VERSION}"
echo "Also tagged as armadillica/elasticsearch:latest and armadillica/kibana:latest"

@@ -0,0 +1,7 @@
cluster.name: "blender-cloud"
network.host: 0.0.0.0
# minimum_master_nodes need to be explicitly set when bound on a public IP
# set to 1 to allow single node clusters
# Details: https://github.com/elastic/elasticsearch/pull/17288
discovery.zen.minimum_master_nodes: 1

docker/elastic/jvm.options (Normal file)
@@ -0,0 +1,112 @@
## JVM configuration
################################################################
## IMPORTANT: JVM heap size
################################################################
##
## You should always set the min and max JVM heap
## size to the same value. For example, to set
## the heap to 4 GB, set:
##
## -Xms4g
## -Xmx4g
##
## See https://www.elastic.co/guide/en/elasticsearch/reference/current/heap-size.html
## for more information
##
################################################################
# Xms represents the initial size of total heap space
# Xmx represents the maximum size of total heap space
# Sybren: commented out so that we can set those options using the ES_JAVA_OPTS environment variable.
#-Xms512m
#-Xmx512m
################################################################
## Expert settings
################################################################
##
## All settings below this section are considered
## expert settings. Don't tamper with them unless
## you understand what you are doing
##
################################################################
## GC configuration
-XX:+UseConcMarkSweepGC
-XX:CMSInitiatingOccupancyFraction=75
-XX:+UseCMSInitiatingOccupancyOnly
## optimizations
# pre-touch memory pages used by the JVM during initialization
-XX:+AlwaysPreTouch
## basic
# force the server VM (remove on 32-bit client JVMs)
-server
# explicitly set the stack size (reduce to 320k on 32-bit client JVMs)
-Xss1m
# set to headless, just in case
-Djava.awt.headless=true
# ensure UTF-8 encoding by default (e.g. filenames)
-Dfile.encoding=UTF-8
# use our provided JNA always versus the system one
-Djna.nosys=true
# use old-style file permissions on JDK9
-Djdk.io.permissionsUseCanonicalPath=true
# flags to configure Netty
-Dio.netty.noUnsafe=true
-Dio.netty.noKeySetOptimization=true
-Dio.netty.recycler.maxCapacityPerThread=0
# log4j 2
-Dlog4j.shutdownHookEnabled=false
-Dlog4j2.disable.jmx=true
-Dlog4j.skipJansi=true
## heap dumps
# generate a heap dump when an allocation from the Java heap fails
# heap dumps are created in the working directory of the JVM
-XX:+HeapDumpOnOutOfMemoryError
# specify an alternative path for heap dumps
# ensure the directory exists and has sufficient space
#-XX:HeapDumpPath=${heap.dump.path}
## GC logging
#-XX:+PrintGCDetails
#-XX:+PrintGCTimeStamps
#-XX:+PrintGCDateStamps
#-XX:+PrintClassHistogram
#-XX:+PrintTenuringDistribution
#-XX:+PrintGCApplicationStoppedTime
# log GC status to a file with time stamps
# ensure the directory exists
#-Xloggc:${loggc}
# By default, the GC log file will not rotate.
# By uncommenting the lines below, the GC log file
# will be rotated every 128MB at most 32 times.
#-XX:+UseGCLogFileRotation
#-XX:NumberOfGCLogFiles=32
#-XX:GCLogFileSize=128M
# Elasticsearch 5.0.0 will throw an exception on unquoted field names in JSON.
# If documents were already indexed with unquoted fields in a previous version
# of Elasticsearch, some operations may throw errors.
#
# WARNING: This option will be removed in Elasticsearch 6.0.0 and is provided
# only for migration purposes.
#-Delasticsearch.json.allow_unquoted_field_names=true

@@ -0,0 +1,8 @@
---
server.name: kibana
server.host: "0"
elasticsearch.url: http://elasticsearch:9200
# Hide dev tools
console.enabled: false

docker/full_rebuild.sh (Executable file)
@@ -0,0 +1,17 @@
#!/usr/bin/env bash
set -xe
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
cd $DIR/1_base
bash build.sh
cd $DIR/2_buildpy
bash build.sh
cd $DIR/3_buildwheels
bash build.sh
cd $DIR/4_run
bash build.sh

@@ -0,0 +1,5 @@
FROM dockercloud/haproxy:1.6.7
LABEL maintainer="Sybren A. Stüvel <sybren@blender.studio>"
# Fix https://talosintelligence.com/vulnerability_reports/TALOS-2019-0782
RUN sed 's/root::/root:!:/' -i /etc/shadow

docker/haproxy/build.sh (Executable file)
@@ -0,0 +1,10 @@
#!/bin/bash -e
# When updating this, also update the version in Dockerfile
VERSION=1.6.7
docker build -t armadillica/haproxy:${VERSION} .
docker tag armadillica/haproxy:${VERSION} armadillica/haproxy:latest
echo "Done, built armadillica/haproxy:${VERSION}"
echo "Also tagged as armadillica/haproxy:latest"

docker/mongo-backup.cron (Normal file)
@@ -0,0 +1,5 @@
# Change to suit your needs, then place in /etc/cron.d/mongo-backup
# (so remove the .cron from the name)
MAILTO=yourname@youraddress.org
30 5 * * * root /root/docker/mongo-backup.sh

docker/mongo-backup.sh (Executable file)
@@ -0,0 +1,28 @@
#!/bin/bash -e
BACKUPDIR=/data/storage/db-bak
DATE=$(date +'%Y-%m-%d-%H%M')
ARCHIVE=$BACKUPDIR/mongo-live-$DATE.tar.xz
# Just a sanity check before we give it to 'rm -rf'
if [ -z "$DATE" ]; then
echo "Empty string found where the date should be, aborting."
exit 1
fi
# /data/db-bak in Docker is /data/storage/db-bak on the host.
docker exec mongo mongodump -d cloud \
--out /data/db-bak/dump-$DATE \
--excludeCollection tokens \
--excludeCollection flamenco_task_logs \
--quiet
cd $BACKUPDIR
tar -Jcf $ARCHIVE dump-$DATE/
rm -rf dump-$DATE
TO_DELETE="$(ls $BACKUPDIR/mongo-live-*.tar.xz | head -n -7)"
[ -z "$TO_DELETE" ] || rm $TO_DELETE  # unquoted on purpose: ls prints one path per line
rsync -a $BACKUPDIR/mongo-live-*.tar.xz cloud-backup@swami-direct.blender.cloud:

docker/renew-letsencrypt.sh (Executable file)
@@ -0,0 +1,27 @@
#!/bin/bash -e
# First time creating a certificate for a domain, use:
# certbot certonly --webroot -w /data/letsencrypt -d $DOMAINNAME
cd /data/letsencrypt
certbot renew
echo
echo "Recreating HAProxy certificates"
for certdir in /etc/letsencrypt/live/*; do
domain=$(basename $certdir)
echo " - $domain"
cat $certdir/privkey.pem $certdir/fullchain.pem > $domain.pem
mv $domain.pem /data/certs/
done
echo
echo -n "Restarting "
docker restart haproxy
echo "Certificate renewal completed."

gulp (Executable file)
@@ -0,0 +1,33 @@
#!/bin/bash -ex
GULP=./node_modules/.bin/gulp
function install() {
npm install
touch $GULP # installer doesn't always touch this after a build, so we do.
}
# Rebuild Gulp if missing or outdated.
[ -e $GULP ] || install
[ gulpfile.js -nt $GULP ] && install
if [ "$1" == "watch" ]; then
# Treat "gulp watch" as "gulp && gulp watch"
$GULP
elif [ "$1" == "all" ]; then
pushd .
# This is useful when building the Blender Cloud project for the first time.
# Run ./gulp in all depending projects (pillar, attract, flamenco, pillar-svnman)
declare -a repos=("pillar" "attract" "flamenco" "pillar-svnman")
for r in "${repos[@]}"
do
cd ../$r
./gulp
done
popd
# Run "gulp" once inside the repo
$GULP
exit 1
fi
exec $GULP "$@"

gulpfile.js (Normal file)
@@ -0,0 +1,130 @@
let argv = require('minimist')(process.argv.slice(2));
let autoprefixer = require('gulp-autoprefixer');
let cache = require('gulp-cached');
let chmod = require('gulp-chmod');
let concat = require('gulp-concat');
let git = require('gulp-git');
let gulp = require('gulp');
let gulpif = require('gulp-if');
let pug = require('gulp-pug');
let plumber = require('gulp-plumber');
let rename = require('gulp-rename');
let sass = require('gulp-sass');
let sourcemaps = require('gulp-sourcemaps');
let uglify = require('gulp-uglify-es').default;
let enabled = {
uglify: argv.production,
maps: !argv.production,
failCheck: !argv.production,
prettyPug: !argv.production,
cachify: !argv.production,
cleanup: argv.production,
chmod: argv.production,
};
let destination = {
css: 'cloud/static/assets/css',
pug: 'cloud/templates',
js: 'cloud/static/assets/js',
}
let source = {
pillar: '../pillar/'
}
/* CSS */
gulp.task('styles', function(done) {
gulp.src('src/styles/**/*.sass')
.pipe(gulpif(enabled.failCheck, plumber()))
.pipe(gulpif(enabled.maps, sourcemaps.init()))
.pipe(sass({
outputStyle: 'compressed'}
))
.pipe(autoprefixer("last 3 versions"))
.pipe(gulpif(enabled.maps, sourcemaps.write(".")))
.pipe(gulp.dest(destination.css));
done();
});
/* Templates - Pug */
gulp.task('templates', function(done) {
gulp.src('src/templates/**/*.pug')
.pipe(gulpif(enabled.failCheck, plumber()))
.pipe(gulpif(enabled.cachify, cache('templating')))
.pipe(pug({
pretty: enabled.prettyPug
}))
.pipe(gulp.dest(destination.pug));
// TODO(venomgfx): please check why 'gulp watch' doesn't pick up on .txt changes.
gulp.src('src/templates/**/*.txt')
.pipe(gulpif(enabled.failCheck, plumber()))
.pipe(gulpif(enabled.cachify, cache('templating')))
.pipe(gulp.dest(destination.pug));
done();
});
/* Tutti gets built by Pillar. See gulpfile.js in pillar.*/
/* Individual Uglified Scripts */
gulp.task('scripts', function(done) {
gulp.src('src/scripts/*.js')
.pipe(gulpif(enabled.failCheck, plumber()))
.pipe(gulpif(enabled.cachify, cache('scripting')))
.pipe(gulpif(enabled.maps, sourcemaps.init()))
.pipe(gulpif(enabled.uglify, uglify()))
.pipe(rename({suffix: '.min'}))
.pipe(gulpif(enabled.maps, sourcemaps.write(".")))
.pipe(gulpif(enabled.chmod, chmod(0o644)))
.pipe(gulp.dest(destination.js));
done();
});
// While developing, run 'gulp watch'
gulp.task('watch',function(done) {
let watchStyles = [
'src/styles/**/*.sass',
source.pillar + 'src/styles/**/*.sass',
];
let watchScripts = [
'src/scripts/**/*.js',
source.pillar + 'src/scripts/**/*.js',
];
let watchTemplates = [
'src/templates/**/*.pug',
source.pillar + 'src/templates/**/*.pug',
];
gulp.watch(watchStyles, gulp.series('styles'));
gulp.watch(watchScripts, gulp.series('scripts'));
gulp.watch(watchTemplates, gulp.series('templates'));
done();
});
// Erases all generated files in output directories.
gulp.task('cleanup', function(done) {
let paths = [];
for (attr in destination) {
paths.push(destination[attr]);
}
git.clean({ args: '-f -X ' + paths.join(' ') }, function (err) {
if(err) throw err;
});
done();
});
// Run 'gulp' to build everything at once
let tasks = [];
if (enabled.cleanup) tasks.push('cleanup');
gulp.task('default', gulp.parallel(tasks.concat(['styles', 'templates', 'scripts'])));

jwtkeys/README.md (Normal file)
@@ -0,0 +1,6 @@
# Flamenco Server JWT keys
To generate a keypair for `ES256`:
openssl ecparam -genkey -name prime256v1 -noout -out es256-private.pem
openssl ec -in es256-private.pem -pubout -out es256-public.pem

@@ -1,40 +1,7 @@
#!/usr/bin/env python
from __future__ import print_function
import logging
from flask import current_app
from pillar import cli
from pillar.cli import manager_maintenance
from cloud import app
log = logging.getLogger(__name__)
@manager_maintenance.command
def reconcile_subscribers():
"""For every user, check their subscription status with the store."""
from pillar.auth.subscriptions import fetch_user
users_coll = current_app.data.driver.db['users']
unsubscribed_users = []
for user in users_coll.find({'roles': 'subscriber'}):
print('Processing %s' % user['email'])
print(' Checking subscription')
user_store = fetch_user(user['email'])
if user_store['cloud_access'] == 0:
print(' Removing subscriber role')
users_coll.update(
{'_id': user['_id']},
{'$pull': {'roles': 'subscriber'}})
unsubscribed_users.append(user['email'])
if not unsubscribed_users:
return
print('The following users have been unsubscribed')
for user in unsubscribed_users:
print(user)
from runserver import app
cli.manager.app = app
cli.manager.run()

package-lock.json (generated Normal file, 6417 lines): diff suppressed because it is too large.

package.json (Normal file)
@@ -0,0 +1,32 @@
{
"name": "blender-cloud",
"license": "GPL-2.0+",
"author": "Blender Institute",
"repository": {
"type": "git",
"url": "git://git.blender.org/blender-cloud.git"
},
"devDependencies": {
"gulp": "~4.0",
"gulp-autoprefixer": "~6.0.0",
"gulp-cached": "~1.1.1",
"gulp-chmod": "~2.0.0",
"gulp-concat": "~2.6.1",
"gulp-if": "^2.0.2",
"gulp-git": "~2.8.0",
"gulp-plumber": "~1.2.0",
"gulp-pug": "~4.0.1",
"gulp-rename": "~1.4.0",
"gulp-sass": "~4.1.0",
"gulp-sourcemaps": "~2.6.4",
"gulp-uglify-es": "^1.0.4",
"minimist": "^1.2.0"
},
"dependencies": {
"bootstrap": "^4.1.3",
"jquery": "^3.3.1",
"natives": "^1.1.6",
"popper.js": "^1.14.4",
"video.js": "^7.2.2"
}
}

poetry.lock (generated Normal file, 1917 lines): diff suppressed because it is too large.

Some files were not shown because too many files have changed in this diff.