Compare commits: wip-readme...wip-open-p (294 commits)
.gitignore  (vendored, 11 changed lines)

@@ -4,6 +4,10 @@
 *.pyc
 __pycache__
 
+/cloud/templates/
+/cloud/static/assets/css/
+node_modules/
+
 /config_local.py
 
 /build
@@ -12,4 +16,9 @@ __pycache__
 /.eggs/
 /dump/
 /google_app*.json
-/docker/3_run/wheelhouse/
+/docker/2_buildpy/python/
+/docker/4_run/wheelhouse/
+/docker/4_run/deploy/
+/celerybeat-schedule.bak
+/celerybeat-schedule.dat
+/celerybeat-schedule.dir
README.md  (84 changed lines)

@@ -1,43 +1,49 @@
 # Blender Cloud
 
-Welcome to the [Blender Cloud](https://cloud.blender.org) code repo!
-Blender Cloud runs on the [Pillar](https://pillarframework.org) framework.
+Welcome to the [Blender Cloud](https://cloud.blender.org/) code repo!
+Blender Cloud runs on the [Pillar](https://pillarframework.org/) framework.
 
 ## Quick setup
-Set up a node with these commands. Note that that script is already outdated...
+Set up a node with these commands.
 
 ```
 #!/usr/bin/env bash
 
-mkdir -p /data/git
-mkdir -p /data/storage
-mkdir -p /data/config
-mkdir -p /data/certs
-
+sudo mkdir -p /data/{git,storage,config,certs}
 sudo apt-get update
-sudo apt-get -y install python-pip
-pip install docker-compose
+sudo apt-get -y install python3-pip
+pip3 install docker-compose
 
 cd /data/git
 git clone git://git.blender.org/pillar-python-sdk.git
-git clone git://git.blender.org/pillar-server.git pillar
-git clone git://git.blender.org/attract.git
-git clone git://git.blender.org/flamenco.git
-git clone git://git.blender.org/blender-cloud.git
+git clone git://git.blender.org/pillar.git -b production
+git clone git://git.blender.org/attract.git -b production
+git clone git://git.blender.org/flamenco.git -b production
+git clone git://git.blender.org/blender-cloud.git -b production
+git clone https://github.com/armadillica/grafista.git -b production
+
+echo '0 8 * * * root docker exec -d grafista bash manage.sh collect' > /etc/cron.d/grafista
 ```
 
-## Deploying to production server
-After these commands, run `deploy.sh` to build the static files and deploy
-those too (see below).
-
-First of all, add those aliases to the `[alias]` section of your `~/.gitconfig`
+## Preparing the production branch for deployment
+
+All revisions to deploy to production should be on the `production` branches of all the relevant
+repositories.
+
+Make sure you have these aliases in the `[alias]` section of your `~/.gitconfig`:
 
 ```
 prod = "!git checkout production && git fetch origin production && gitk --all"
 ff = "merge --ff-only"
 pp = "!git push && if [ -e deploy.sh ]; then ./deploy.sh; fi && git checkout master"
 ```
 
-The following commands should be executed for each subproject; specifically for
-Pillar and Attract:
+The following commands should be executed for each (sub)project; specifically for
+the current repository, Pillar, Attract, Flamenco, and Pillar-SVNMan:
 
 ```
 cd $projectdir
@@ -60,35 +66,19 @@ git ff master
 # Run tests again
 py.test
 
-# Push the production branch and run dummy deploy script.
-git pp # pp = "Push to Production"
-
-# The above alias waits for [ENTER] until all deploys are done.
-# Let it wait, perform the other commands in another terminal.
+# Push the production branch.
+git push
 ```
 
-Now follow the above recipe on the Blender Cloud project as well.
-Contrary to the subprojects, `git pp` will actually perform the deploy
-for real.
+## Deploying to production server
 
-Now you can press `[ENTER]` in the Pillar and Attract terminals that
-were still waiting for it.
+```
+workon blender-cloud # activate your virtualenv
+cd $projectdir/deploy
+./2docker.sh
+./build-all.sh # or ./build-quick.sh
+./2server.sh servername
+```
 
 After everything is done, your (sub)projects should all be back on
 the master branch.
 
 
 ## Updating dependencies via Docker images
 
 To update dependencies that need compiling, you need the `2_build` docker
 container. To rebuild the lot, run `docker/build.sh`.
 
-Follow these steps to deploy the new container on production:
-
-1. run `docker/build.sh`
-2. `docker push armadillica/blender_cloud`
-
 On the production machine:
 
 1. `docker pull armadillica/blender_cloud`
 2. `docker-compose up -d` (from the `/data/git/blender-cloud/docker` directory)
+To deploy another branch than `production`, do `export DEPLOY_BRANCH=otherbranch` before starting
+the above commands.
cloud/__init__.py  (new file, 102 lines)

@@ -0,0 +1,102 @@
import logging

import flask
from werkzeug.local import LocalProxy

from pillar.api.utils import authorization
from pillar.extension import PillarExtension

EXTENSION_NAME = 'cloud'


class CloudExtension(PillarExtension):
    has_context_processor = True
    user_roles = {'subscriber-pro', 'has_subscription'}
    user_roles_indexable = {'subscriber-pro', 'has_subscription'}

    user_caps = {
        'has_subscription': {'can-renew-subscription'},
    }

    def __init__(self):
        self._log = logging.getLogger('%s.CloudExtension' % __name__)

    @property
    def name(self):
        return EXTENSION_NAME

    def flask_config(self):
        """Returns extension-specific defaults for the Flask configuration.

        Use this to set sensible default values for configuration settings
        introduced by the extension.

        :rtype: dict
        """

        # Just so that it registers the management commands.
        from . import cli

        return {
            'EXTERNAL_SUBSCRIPTIONS_MANAGEMENT_SERVER': 'https://store.blender.org/api/',
            'EXTERNAL_SUBSCRIPTIONS_TIMEOUT_SECS': 10,
            'BLENDER_ID_WEBHOOK_USER_CHANGED_SECRET': 'oos9wah1Zoa0Yau6ahThohleiChephoi',
        }

    def eve_settings(self):
        """Returns extensions to the Eve settings.

        Currently only the DOMAIN key is used to insert new resources into
        Eve's configuration.

        :rtype: dict
        """

        return {}

    def blueprints(self):
        """Returns the list of top-level blueprints for the extension.

        These blueprints will be mounted at the url prefix given to
        app.load_extension().

        :rtype: list of flask.Blueprint objects.
        """
        from . import routes
        import cloud.stats.routes
        return [
            routes.blueprint,
            cloud.stats.routes.blueprint,
        ]

    @property
    def template_path(self):
        import os.path
        return os.path.join(os.path.dirname(__file__), 'templates')

    @property
    def static_path(self):
        import os.path
        return os.path.join(os.path.dirname(__file__), 'static')

    def context_processor(self):
        return {
            'current_user_is_subscriber': authorization.user_has_cap('subscriber')
        }

    def setup_app(self, app):
        from . import routes, webhooks, eve_hooks, email

        routes.setup_app(app)
        app.register_api_blueprint(webhooks.blueprint, '/webhooks')
        eve_hooks.setup_app(app)
        email.setup_app(app)


def _get_current_cloud():
    """Returns the Cloud extension of the current application."""
    return flask.current_app.pillar_extensions[EXTENSION_NAME]


current_cloud = LocalProxy(_get_current_cloud)
"""Cloud extension of the current app."""
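The module ends with a `LocalProxy` so that other code can do `from cloud import current_cloud` without holding a reference to one particular app: the lookup is deferred until an attribute is actually accessed. A minimal stdlib-only sketch of that idea (`LazyProxy`, `_extensions` and `CloudStub` are illustrative stand-ins, not Pillar or Werkzeug APIs):

```python
class LazyProxy:
    """Forward attribute access to whatever object a lookup function returns *now*."""

    def __init__(self, lookup):
        # Store via object.__setattr__ so we do not trigger our own machinery.
        object.__setattr__(self, '_lookup', lookup)

    def __getattr__(self, name):
        # Called for attributes not found on the proxy itself: resolve and forward.
        return getattr(object.__getattribute__(self, '_lookup')(), name)


# Stand-in for flask.current_app.pillar_extensions:
_extensions = {}


def _get_current_cloud():
    return _extensions['cloud']


# The proxy is created before the extension exists; it only resolves on use.
current_cloud = LazyProxy(_get_current_cloud)


class CloudStub:
    name = 'cloud'


_extensions['cloud'] = CloudStub()
print(current_cloud.name)  # -> cloud
```

Werkzeug's real `LocalProxy` additionally forwards operators and special methods, but the deferred-lookup principle is the same.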
cloud/cli.py  (new file, 129 lines)

@@ -0,0 +1,129 @@
#!/usr/bin/env python

import logging
from urllib.parse import urljoin

from flask import current_app
from flask_script import Manager
import requests

from pillar.cli import manager
from pillar.api import service

log = logging.getLogger(__name__)

manager_cloud = Manager(
    current_app, usage="Blender Cloud scripts")


@manager_cloud.command
def create_groups():
    """Creates the admin/demo/subscriber groups."""

    import pprint

    group_ids = {}
    groups_coll = current_app.db('groups')

    for group_name in ['admin', 'demo', 'subscriber']:
        if groups_coll.find({'name': group_name}).count():
            log.info('Group %s already exists, skipping', group_name)
            continue
        result = groups_coll.insert_one({'name': group_name})
        group_ids[group_name] = result.inserted_id

    service.fetch_role_to_group_id_map()
    log.info('Created groups:\n%s', pprint.pformat(group_ids))


@manager_cloud.command
def reconcile_subscribers():
    """For every user, check their subscription status with the store."""

    import threading
    import concurrent.futures

    from pillar.auth import UserClass
    from pillar.api import blender_id
    from pillar.api.blender_cloud.subscription import do_update_subscription

    sessions = threading.local()

    service.fetch_role_to_group_id_map()

    users_coll = current_app.data.driver.db['users']
    found = users_coll.find({'auth.provider': 'blender-id'})
    count_users = found.count()
    count_skipped = count_processed = 0
    log.info('Processing %i users', count_users)

    lock = threading.Lock()

    real_current_app = current_app._get_current_object()

    api_token = real_current_app.config['BLENDER_ID_USER_INFO_TOKEN']
    api_url = real_current_app.config['BLENDER_ID_USER_INFO_API']

    def do_user(idx, user):
        nonlocal count_skipped, count_processed

        log.info('Processing %i/%i %s', idx + 1, count_users, user['email'])

        # Get the Requests session for this thread.
        try:
            sess = sessions.session
        except AttributeError:
            sess = sessions.session = requests.Session()

        # Get the info from Blender ID
        bid_user_id = blender_id.get_user_blenderid(user)
        if not bid_user_id:
            with lock:
                count_skipped += 1
            return

        url = urljoin(api_url, bid_user_id)
        resp = sess.get(url, headers={'Authorization': f'Bearer {api_token}'})

        if resp.status_code == 404:
            log.info('User %s with Blender ID %s not found, skipping', user['email'], bid_user_id)
            with lock:
                count_skipped += 1
            return

        if resp.status_code != 200:
            log.error('Unable to reach Blender ID (code %d), aborting', resp.status_code)
            with lock:
                count_skipped += 1
            return

        bid_user = resp.json()
        if not bid_user:
            log.error('Unable to parse response for user %s, aborting', user['email'])
            with lock:
                count_skipped += 1
            return

        # Actually update the user, and do it thread-safe just to be sure.
        with real_current_app.app_context():
            local_user = UserClass.construct('', user)
            with lock:
                do_update_subscription(local_user, bid_user)
                count_processed += 1

    with concurrent.futures.ThreadPoolExecutor(max_workers=10) as executor:
        future_to_user = {executor.submit(do_user, idx, user): user
                          for idx, user in enumerate(found)}
        for future in concurrent.futures.as_completed(future_to_user):
            user = future_to_user[future]
            try:
                future.result()
            except Exception:
                log.exception('Error updating user %s', user)

    log.info('Done reconciling %d subscribers', count_users)
    log.info(' processed: %d', count_processed)
    log.info(' skipped : %d', count_skipped)


manager.add_command("cloud", manager_cloud)
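`reconcile_subscribers` combines three concurrency patterns: a lazily created per-thread session (`threading.local`), a `ThreadPoolExecutor` fan-out drained with `as_completed`, and a `threading.Lock` around shared counters. A self-contained sketch of that skeleton with the HTTP work stubbed out (`get_session` and `process_one` are illustrative names, not part of the module):

```python
import concurrent.futures
import threading

sessions = threading.local()   # each worker thread lazily gets its own "session"
lock = threading.Lock()        # guards the shared counter below
count_processed = 0


def get_session():
    # Reuse this thread's session; create it on first use, as do_user() does.
    try:
        return sessions.session
    except AttributeError:
        sessions.session = object()  # stand-in for requests.Session()
        return sessions.session


def process_one(user_id):
    global count_processed
    get_session()  # the real code would perform the HTTP call here
    with lock:     # the counter is shared by all worker threads
        count_processed += 1


with concurrent.futures.ThreadPoolExecutor(max_workers=10) as executor:
    futures = [executor.submit(process_one, uid) for uid in range(100)]
    for future in concurrent.futures.as_completed(futures):
        future.result()  # re-raises any exception from the worker

print(count_processed)  # -> 100
```

The per-thread session matters because a `requests.Session` is not guaranteed thread-safe, while creating one per task would lose connection pooling; one per thread is the usual compromise.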
cloud/email.py  (new file, 33 lines)

@@ -0,0 +1,33 @@
import functools
import logging

import flask

from pillar.auth import UserClass

log = logging.getLogger(__name__)


def queue_welcome_mail(user: UserClass):
    """Queue a welcome email for execution by Celery."""
    assert user.email
    log.info('queueing welcome email to %s', user.email)

    subject = 'Welcome to Blender Cloud'
    render = functools.partial(flask.render_template, subject=subject, user=user)
    text = render('emails/welcome.txt')
    html = render('emails/welcome.html')

    from pillar.celery import email_tasks
    email_tasks.send_email.delay(user.full_name, user.email, subject, text, html)


def user_subscription_changed(user: UserClass, *, grant_roles: set, revoke_roles: set):
    if user.has_cap('subscriber') and 'has_subscription' in grant_roles:
        log.info('user %s just got a new subscription', user.email)
        queue_welcome_mail(user)


def setup_app(app):
    from pillar.api.blender_cloud import subscription
    subscription.user_subscription_updated.connect(user_subscription_changed)
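`queue_welcome_mail` uses `functools.partial` to bind the context shared by the plain-text and HTML renders once, so the two calls cannot drift apart. The same idea sketched with `str.format` standing in for `flask.render_template` (the `render` helper and sample values are illustrative):

```python
import functools


def render(template, **ctx):
    # Stand-in for flask.render_template: fill placeholders from keyword args.
    return template.format(**ctx)


# Bind the arguments shared by both variants exactly once:
welcome = functools.partial(render, subject='Welcome to Blender Cloud', name='Alice')

text = welcome('{subject}, {name}! (plain text)')
html = welcome('<h1>{subject}</h1><p>{name}</p>')

print(text)  # -> Welcome to Blender Cloud, Alice! (plain text)
```

If the shared context ever grows (extra template variables, locale, and so on), only the `partial(...)` line needs to change.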
cloud/eve_hooks.py  (new file, 39 lines)

@@ -0,0 +1,39 @@
import logging
import typing

from pillar.auth import UserClass

from . import email

log = logging.getLogger(__name__)


def welcome_new_user(user_doc: dict):
    """Sends a welcome email to a new user."""

    user_email = user_doc.get('email')
    if not user_email:
        log.warning('user %s has no email address', user_doc.get('_id', '-no-id-'))
        return

    # Only send mail to new users when they actually are subscribers.
    user = UserClass.construct('', user_doc)
    if not (user.has_cap('subscriber') or user.has_cap('can-renew-subscription')):
        log.debug('user %s is new, but not a subscriber, so no email for them.', user_email)
        return

    email.queue_welcome_mail(user)


def welcome_new_users(user_docs: typing.List[dict]):
    """Sends a welcome email to new users."""

    for user_doc in user_docs:
        try:
            welcome_new_user(user_doc)
        except Exception:
            log.exception('error sending welcome mail to user %s', user_doc)


def setup_app(app):
    app.on_inserted_users += welcome_new_users
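`welcome_new_users` wraps each document in its own try/except so that one malformed user cannot abort the rest of the Eve batch hook. A runnable sketch of that per-item isolation (the `welcome` and `welcome_all` helpers here are illustrative, not the module's API):

```python
import logging

log = logging.getLogger(__name__)


def welcome(user_doc: dict):
    # Deliberately strict: a document without an email is an error here.
    if 'email' not in user_doc:
        raise KeyError('no email')
    return f"mail sent to {user_doc['email']}"


def welcome_all(user_docs):
    """Process every document; one bad document must not abort the batch."""
    sent = []
    for doc in user_docs:
        try:
            sent.append(welcome(doc))
        except Exception:
            # Log with traceback and keep going with the remaining docs.
            log.exception('error sending welcome mail to user %s', doc)
    return sent


# The empty dict fails, but the users before and after it are still handled.
sent = welcome_all([{'email': 'a@example.com'}, {}, {'email': 'b@example.com'}])
```

Catching broad `Exception` is appropriate exactly here: the hook runs as a side effect of a database insert, and a mail failure should never fail the insert itself.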
cloud/routes.py  (new file, 468 lines)

@@ -0,0 +1,468 @@
import functools
import json
import logging
import typing

from flask_login import current_user, login_required
import flask
from flask import Blueprint, render_template, redirect, session, url_for, abort, flash
from pillarsdk import Node, Project, User, exceptions as sdk_exceptions, Group
from pillarsdk.exceptions import ResourceNotFound

from pillar import current_app
import pillar.api
from pillar.web.users import forms
from pillar.web.utils import system_util, get_file, current_user_is_authenticated
from pillar.web.utils import attach_project_pictures
from pillar.web.settings import blueprint as blueprint_settings
from pillar.web.nodes.routes import url_for_node
from pillar.web.nodes.custom.comments import render_comments_for_node
from pillar.web.projects.routes import render_project
from pillar.web.projects.routes import find_project_or_404


blueprint = Blueprint('cloud', __name__)
log = logging.getLogger(__name__)


@blueprint.route('/')
def homepage():
    if current_user.is_anonymous:
        return redirect(url_for('cloud.welcome'))

    return render_template(
        'homepage.html',
        api=system_util.pillar_api(),
        **_homepage_context(),
    )


def _homepage_context() -> dict:
    """Returns homepage template context variables."""

    # Get latest blog posts
    api = system_util.pillar_api()
    latest_posts = Node.all({
        'projection': {
            'name': 1,
            'project': 1,
            'node_type': 1,
            'picture': 1,
            'properties.url': 1,
            'properties.content': 1,
            'properties.attachments': 1
        },
        'where': {'node_type': 'post', 'properties.status': 'published'},
        'embedded': {'project': 1},
        'sort': '-_created',
        'max_results': '3'
    }, api=api)

    # Append picture Files to last_posts
    for post in latest_posts._items:
        post.picture = get_file(post.picture, api=api)
        post.url = url_for_node(node=post)

    # Get latest assets added to any project
    latest_assets = Node.latest('assets', api=api)

    # Append picture Files to latest_assets
    for asset in latest_assets._items:
        asset.picture = get_file(asset.picture, api=api)
        asset.url = url_for_node(node=asset)

    # Get latest comments to any node
    latest_comments = Node.latest('comments', api=api)

    # Get a list of random featured assets
    random_featured = get_random_featured_nodes()

    # Parse results for replies
    to_remove = []

    @functools.lru_cache()
    def _find_parent(parent_node_id) -> Node:
        return Node.find(parent_node_id,
                         {'projection': {
                             '_id': 1,
                             'name': 1,
                             'node_type': 1,
                             'project': 1,
                             'properties.url': 1,
                         }},
                         api=api)

    for idx, comment in enumerate(latest_comments._items):
        if comment.properties.is_reply:
            try:
                comment.attached_to = _find_parent(comment.parent.parent)
            except ResourceNotFound:
                # Remove this comment
                to_remove.append(idx)
        else:
            comment.attached_to = comment.parent

    for idx in reversed(to_remove):
        del latest_comments._items[idx]

    for comment in latest_comments._items:
        if not comment.attached_to:
            continue
        comment.attached_to.url = url_for_node(node=comment.attached_to)
        comment.url = url_for_node(node=comment)

    main_project = Project.find(current_app.config['MAIN_PROJECT_ID'], api=api)
    main_project.picture_header = get_file(main_project.picture_header, api=api)

    # Merge latest assets and comments into one activity stream.
    def sort_key(item):
        return item._created

    activity_stream = sorted(latest_assets._items, key=sort_key, reverse=True)

    for node in activity_stream:
        node.url = url_for_node(node=node)

    return dict(
        main_project=main_project,
        latest_posts=latest_posts._items,
        latest_comments=latest_comments._items,
        activity_stream=activity_stream,
        random_featured=random_featured)


@blueprint.route('/login')
def login():
    from flask import request

    if request.args.get('force'):
        log.debug('Forcing logout of user before rendering login page.')
        pillar.auth.logout_user()

    next_after_login = request.args.get('next')

    if not next_after_login:
        next_after_login = request.referrer

    session['next_after_login'] = next_after_login
    return redirect(url_for('users.oauth_authorize', provider='blender-id'))


@blueprint.route('/welcome')
def welcome():
    # Workaround to cache rendering of a page if user not logged in
    @current_app.cache.cached(timeout=3600, unless=current_user_is_authenticated)
    def render_page():
        return render_template('welcome.html')

    return render_page()


@blueprint.route('/about')
def about():
    return render_template('about.html')


@blueprint.route('/services')
def services():
    return render_template('services.html')


@blueprint.route('/stats')
def stats():
    return render_template('stats.html')


@blueprint.route('/join')
def join():
    """Join page"""
    return redirect('https://store.blender.org/product/membership/')


@blueprint.route('/renew')
def renew_subscription():
    return render_template('renew_subscription.html')


def get_projects(category):
    """Utility to get projects based on category. Should be moved on the API
    and improved with more extensive filtering capabilities.
    """
    api = system_util.pillar_api()
    projects = Project.all({
        'where': {
            'category': category,
            'is_private': False},
        'sort': '-_created',
    }, api=api)
    for project in projects._items:
        attach_project_pictures(project, api)
    return projects


@blueprint.route('/courses')
def courses():
    @current_app.cache.cached(timeout=3600, unless=current_user_is_authenticated)
    def render_page():
        projects = get_projects('course')
        return render_template(
            'projects_index_collection.html',
            title='courses',
            projects=projects._items,
            api=system_util.pillar_api())

    return render_page()


@blueprint.route('/open-projects')
def open_projects():
    @current_app.cache.cached(timeout=3600, unless=current_user_is_authenticated)
    def render_page():
        projects = get_projects('film')
        return render_template(
            'projects_index_collection.html',
            title='open-projects',
            projects=projects._items,
            api=system_util.pillar_api())

    return render_page()


@blueprint.route('/workshops')
def workshops():
    @current_app.cache.cached(timeout=3600, unless=current_user_is_authenticated)
    def render_page():
        projects = get_projects('workshop')
        return render_template(
            'projects_index_collection.html',
            title='workshops',
            projects=projects._items,
            api=system_util.pillar_api())

    return render_page()


def get_random_featured_nodes() -> typing.List[dict]:
    """Returns a list of project/node combinations for featured nodes.

    A random subset of 3 featured nodes from all public projects is returned.
    Assumes that the user actually has access to the public projects' nodes.

    The dict is a node, with a 'project' key that contains a projected project.
    """

    proj_coll = current_app.db('projects')
    featured_nodes = proj_coll.aggregate([
        {'$match': {'is_private': False}},
        {'$project': {'nodes_featured': True,
                      'url': True,
                      'name': True,
                      'summary': True,
                      'picture_square': True}},
        {'$unwind': {'path': '$nodes_featured'}},
        {'$sample': {'size': 3}},
        {'$lookup': {'from': 'nodes',
                     'localField': 'nodes_featured',
                     'foreignField': '_id',
                     'as': 'node'}},
        {'$unwind': {'path': '$node'}},
        {'$project': {'url': True,
                      'name': True,
                      'summary': True,
                      'picture_square': True,
                      'node._id': True,
                      'node.name': True,
                      'node.permissions': True,
                      'node.picture': True,
                      'node.properties.content_type': True,
                      'node.properties.url': True}},
    ])

    featured_node_documents = []
    api = system_util.pillar_api()
    for node_info in featured_nodes:
        # Turn the project-with-node doc into a node-with-project doc.
        node_document = node_info.pop('node')
        node_document['project'] = node_info

        node = Node(node_document)
        node.picture = get_file(node.picture, api=api)
        node.url = url_for_node(node=node)
        node.project.url = url_for('projects.view', project_url=node.project.url)
        node.project.picture_square = get_file(node.project.picture_square, api=api)
        featured_node_documents.append(node)

    return featured_node_documents


@blueprint_settings.route('/emails', methods=['GET', 'POST'])
@login_required
def emails():
    """Main email settings."""
    if current_user.has_role('protected'):
        return abort(404)  # TODO: make this 403, handle template properly
    api = system_util.pillar_api()
    user = User.find(current_user.objectid, api=api)

    # Force creation of settings for the user (safely remove this code once
    # implemented on account creation level, and after adding settings to all
    # existing users)
    if not user.settings:
        user.settings = dict(email_communications=1)
        user.update(api=api)

    if user.settings.email_communications is None:
        user.settings.email_communications = 1
        user.update(api=api)

    # Generate form
    form = forms.UserSettingsEmailsForm(
        email_communications=user.settings.email_communications)

    if form.validate_on_submit():
        try:
            user.settings.email_communications = form.email_communications.data
            user.update(api=api)
            flash("Profile updated", 'success')
        except sdk_exceptions.ResourceInvalid as e:
            message = json.loads(e.content)
            flash(message)

    return render_template('users/settings/emails.html', form=form, title='emails')


@blueprint_settings.route('/billing')
@login_required
def billing():
    """View the subscription status of a user"""
    from . import store

    log.debug('START OF REQUEST')

    if current_user.has_role('protected'):
        return abort(404)  # TODO: make this 403, handle template properly

    expiration_date = 'No subscription to expire'

    # Classify the user based on their roles and capabilities
    cap_subs = current_user.has_cap('subscriber')
    if current_user.has_role('demo'):
        user_cls = 'demo'
    elif not cap_subs and current_user.has_cap('can-renew-subscription'):
        # This user has an inactive but renewable subscription.
        user_cls = 'subscriber-expired'
    elif cap_subs:
        if current_user.has_role('subscriber'):
            # This user pays for their own subscription. Only in this case do we need to fetch
            # the expiration date from the Store.
            user_cls = 'subscriber'
            store_user = store.fetch_subscription_info(current_user.email)
            if store_user is None:
                expiration_date = 'Unable to reach Blender Store to check'
            else:
                expiration_date = store_user['expiration_date'][:10]

        elif current_user.has_role('org-subscriber'):
            # An organisation pays for this subscription.
            user_cls = 'subscriber-org'
        else:
            # This user gets the subscription cap from somewhere else (like an organisation).
            user_cls = 'subscriber-other'
    else:
        user_cls = 'outsider'

    return render_template(
        'users/settings/billing.html',
        user_cls=user_cls,
        expiration_date=expiration_date,
        title='billing')


@blueprint.route('/terms-and-conditions')
def terms_and_conditions():
    return render_template('terms_and_conditions.html')


@blueprint.route('/privacy')
def privacy():
    return render_template('privacy.html')


@blueprint.route('/emails/welcome.send')
@login_required
def emails_welcome_send():
    from cloud import email
    email.queue_welcome_mail(current_user)
    return f'queued mail to {current_user.email}'


@blueprint.route('/emails/welcome.html')
@login_required
def emails_welcome_html():
    return render_template('emails/welcome.html',
                           subject='Welcome to Blender Cloud',
                           user=current_user)


@blueprint.route('/emails/welcome.txt')
@login_required
def emails_welcome_txt():
    txt = render_template('emails/welcome.txt',
                          subject='Welcome to Blender Cloud',
                          user=current_user)
    return flask.Response(txt, content_type='text/plain; charset=utf-8')
|
||||
|
||||
|
||||
@blueprint.route('/nodes/<string(length=24):node_id>/comments')
|
||||
def comments_for_node(node_id):
|
||||
"""Overrides the default render_comments_for_node.
|
||||
|
||||
This is done in order to extend can_post_comments by requiring the
|
||||
subscriber capability.
|
||||
"""
|
||||
|
||||
api = system_util.pillar_api()
|
||||
|
||||
node = Node.find(node_id, api=api)
|
||||
project = Project({'_id': node.project})
|
||||
can_post_comments = project.node_type_has_method('comment', 'POST', api=api)
|
||||
can_comment_override = flask.request.args.get('can_comment', 'True') == 'True'
|
||||
can_post_comments = can_post_comments and can_comment_override and current_user.has_cap(
|
||||
'subscriber')
|
||||
|
||||
return render_comments_for_node(node_id, can_post_comments=can_post_comments)
|
||||
|
||||
|
||||
@blueprint.route('/p/hero')
|
||||
def project_hero():
|
||||
api = system_util.pillar_api()
|
||||
project = find_project_or_404('hero',
|
||||
embedded={'header_node': 1},
|
||||
api=api)
|
||||
# Load the header video file, if there is any.
|
||||
header_video_file = None
|
||||
header_video_node = None
|
||||
if project.header_node and project.header_node.node_type == 'asset' and \
|
||||
project.header_node.properties.content_type == 'video':
|
||||
header_video_node = project.header_node
|
||||
header_video_file = get_file(project.header_node.properties.file)
|
||||
header_video_node.picture = get_file(header_video_node.picture)
|
||||
|
||||
pages = Node.all({
|
||||
'where': {'project': project._id, 'node_type': 'page'},
|
||||
'projection': {'name': 1}}, api=api)
|
||||
|
||||
return render_project(project, api,
|
||||
extra_context={'header_video_file': header_video_file,
|
||||
'header_video_node': header_video_node,
|
||||
'pages': pages._items,},
|
||||
template_name='projects/landing.html')
|
||||
|
||||
|
||||
def setup_app(app):
|
||||
global _homepage_context
|
||||
cached = app.cache.cached(timeout=300)
|
||||
_homepage_context = cached(_homepage_context)
|
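The role/capability ladder in `billing()` above can be isolated into a small, testable function. This is an illustrative sketch only: `StubUser` and its `has_role`/`has_cap` methods are stand-ins invented here for Pillar's real user object.

```python
# Illustrative sketch of the billing() classification; StubUser is a stand-in
# for Pillar's user object, not part of the codebase.
class StubUser:
    def __init__(self, roles, caps):
        self.roles, self.caps = set(roles), set(caps)

    def has_role(self, role):
        return role in self.roles

    def has_cap(self, cap):
        return cap in self.caps


def classify(user):
    cap_subs = user.has_cap('subscriber')
    if user.has_role('demo'):
        return 'demo'
    if not cap_subs and user.has_cap('can-renew-subscription'):
        # Inactive but renewable subscription.
        return 'subscriber-expired'
    if cap_subs:
        if user.has_role('subscriber'):
            return 'subscriber'  # pays for their own subscription
        if user.has_role('org-subscriber'):
            return 'subscriber-org'  # an organisation pays
        return 'subscriber-other'  # cap granted some other way
    return 'outsider'


print(classify(StubUser({'subscriber'}, {'subscriber'})))     # subscriber
print(classify(StubUser(set(), {'can-renew-subscription'})))  # subscriber-expired
```

Note how the capability check (`cap_subs`) gates the inner role checks, so a user whose subscription cap comes via an organisation still classifies correctly.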
BIN cloud/static/img/2014_03_09_sxsw.jpg (new file, 41 KiB)
BIN cloud/static/img/2014_03_10_cosmos.jpg (new file, 46 KiB)
BIN cloud/static/img/2015_10_30_glass.jpg (new file, 51 KiB)
BIN cloud/static/img/2015_11_19_art.jpg (new file, 72 KiB)
BIN cloud/static/img/2015_11_24_bip.jpg (new file, 14 KiB)
BIN cloud/static/img/2015_12_01_blenrig.jpg (new file, 45 KiB)
BIN cloud/static/img/2015_12_23_textures.jpg (new file, 53 KiB)
BIN cloud/static/img/2016_01_05_charlib.jpg (new file, 60 KiB)
BIN cloud/static/img/2016_01_30_llamigos.jpg (new file, 70 KiB)
BIN cloud/static/img/2016_03_01_sybren.jpg (new file, 36 KiB)
BIN cloud/static/img/2016_05_03_projects.jpg (new file, 27 KiB)
BIN cloud/static/img/2016_05_09_projectsharing.jpg (new file, 34 KiB)
BIN cloud/static/img/2016_05_11_addon.jpg (new file, 30 KiB)
BIN cloud/static/img/2016_05_23_privtextures.jpg (new file, 30 KiB)
BIN cloud/static/img/2016_06_30_sync.jpg (new file, 46 KiB)
BIN cloud/static/img/2016_07_14_image.jpg (new file, 35 KiB)
BIN cloud/static/img/2016_07_27_hdri.jpg (new file, 91 KiB)
BIN cloud/static/img/2016_12_06_toon.jpg (new file, 50 KiB)
BIN cloud/static/img/2017_03_10_agent.jpg (new file, 68 KiB)
cloud/stats/__init__.py (new file, 89 lines)
@@ -0,0 +1,89 @@
"""Interesting usage metrics"""

from flask import current_app


def count_nodes(query=None) -> int:
    pipeline = [
        {'$match': {'_deleted': {'$ne': 'true'}}},
        {'$lookup': {
            'from': 'projects',
            'localField': 'project',
            'foreignField': '_id',
            'as': 'project',
        }},
        {'$unwind': {'path': '$project'}},
        {'$project': {'project.is_private': 1}},
        {'$match': {'project.is_private': False}},
        {'$count': 'tot'},
    ]
    c = current_app.db()['nodes']
    # If a query was provided, extend the first $match step of the aggregation
    # pipeline with the extra parameters (for example node_type).
    if query:
        pipeline[0]['$match'].update(query)
    # The aggregation returns either a list with one item or an empty list.
    r = list(c.aggregate(pipeline=pipeline))
    count = 0 if not r else r[0]['tot']
    return count


def count_users(query=None) -> int:
    u = current_app.db()['users']
    return u.count(query)


def count_blender_sync(query=None) -> int:
    pipeline = [
        # 0 Find all startup.blend files that are not deleted
        {'$match': {
            '_deleted': {'$ne': 'true'},
            'name': 'startup.blend',
        }},
        # 1 Group them per project (drops any duplicates)
        {'$group': {'_id': '$project'}},
        # 2 Join the project info
        {'$lookup': {
            'from': 'projects',
            'localField': '_id',
            'foreignField': '_id',
            'as': 'project',
        }},
        # 3 Unwind the project list (there is always exactly one project)
        {'$unwind': {'path': '$project'}},
        # 4 Keep only home projects
        {'$match': {'project.category': 'home'}},
        {'$count': 'tot'},
    ]
    c = current_app.db()['nodes']
    # If a query was provided, extend the first $match step of the aggregation
    # pipeline with the extra parameters (for example _created).
    if query:
        pipeline[0]['$match'].update(query)
    # The aggregation returns either a list with one item or an empty list.
    r = list(c.aggregate(pipeline=pipeline))
    count = 0 if not r else r[0]['tot']
    return count
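The way `count_nodes()` extends its aggregation pipeline with a caller-supplied query can be shown with plain dicts; no MongoDB connection is involved, and the `node_type` query below is a hypothetical example.

```python
# Sketch: merging an optional caller query into the pipeline's first $match
# stage, exactly as count_nodes() does. Plain dicts only, no database needed.
pipeline = [
    {'$match': {'_deleted': {'$ne': 'true'}}},
    {'$count': 'tot'},
]
query = {'node_type': 'comment'}  # hypothetical caller-supplied filter
if query:
    pipeline[0]['$match'].update(query)
print(pipeline[0]['$match'])
# {'_deleted': {'$ne': 'true'}, 'node_type': 'comment'}
```

Because the extra conditions land in the very first `$match`, MongoDB can filter documents before the comparatively expensive `$lookup` join runs.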
cloud/stats/routes.py (new file, 57 lines)
@@ -0,0 +1,57 @@
import datetime
import functools
import logging

from flask import Blueprint, jsonify

from cloud.stats import count_blender_sync, count_nodes, count_users

blueprint = Blueprint('cloud.stats', __name__, url_prefix='/s')

log = logging.getLogger(__name__)


@functools.lru_cache()
def get_stats(before: datetime.datetime):
    query_comments = {'node_type': 'comment'}
    query_assets = {'node_type': 'asset'}

    date_query = {}

    if before:
        date_query = {'_created': {'$lt': before}}
        query_comments.update(date_query)
        query_assets.update(date_query)

    stats = {
        'comments': count_nodes(query_comments),
        'assets': count_nodes(query_assets),
        'users_total': count_users(date_query),
        'users_blender_sync': count_blender_sync(date_query),
    }
    return stats


@blueprint.route('/')
@blueprint.route('/before/<int:before>')
def index(before: int = 0):
    """
    This endpoint is queried on a daily basis by grafista to retrieve cloud usage
    stats. For assets and comments we only take into consideration those that
    belong to public projects.

    This is the data we retrieve:

    - Comments count
    - Assets count (videos, images and files)
    - Users count (subscriber counts go via the Store)
    - Blender Sync users
    """
    # TODO: implement project-level metrics (and update at every child update)
    if before:
        before = datetime.datetime.strptime(str(before), '%Y%m%d')
    else:
        today = datetime.date.today()
        before = datetime.datetime(today.year, today.month, today.day)

    return jsonify(get_stats(before))
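The date handling in `index()` can be sketched on its own: an integer such as `20170310` parses via `strptime('%Y%m%d')`, and the default is midnight of the current day, which also gives `functools.lru_cache` on `get_stats()` a stable key for the whole day. `parse_before` is a name invented for this sketch.

```python
# Sketch of the /s/before/<int:before> date handling; parse_before is a
# hypothetical helper name, the logic mirrors index() above.
import datetime


def parse_before(before: int = 0) -> datetime.datetime:
    if before:
        # An int like 20170310 becomes datetime(2017, 3, 10, 0, 0).
        return datetime.datetime.strptime(str(before), '%Y%m%d')
    # Default: midnight of the current day, so every request made on the
    # same day hits the same lru_cache entry.
    today = datetime.date.today()
    return datetime.datetime(today.year, today.month, today.day)


print(parse_before(20170310))  # 2017-03-10 00:00:00
```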
cloud/store.py (new file, 72 lines)
@@ -0,0 +1,72 @@
"""Blender Store interface."""

import logging
import typing

from pillar import current_app

log = logging.getLogger(__name__)


def fetch_subscription_info(email: str) -> typing.Optional[dict]:
    """Returns the user info dict from the external subscriptions management server.

    :returns: the store user info, or None if the user can't be found or there
        was an error communicating. A dict like this is returned:
        {
            "shop_id": 700,
            "cloud_access": 1,
            "paid_balance": 314.75,
            "balance_currency": "EUR",
            "start_date": "2014-08-25 17:05:46",
            "expiration_date": "2016-08-24 13:38:45",
            "subscription_status": "wc-active",
            "expiration_date_approximate": true
        }
    """
    from requests.adapters import HTTPAdapter
    import requests.exceptions

    external_subscriptions_server = current_app.config['EXTERNAL_SUBSCRIPTIONS_MANAGEMENT_SERVER']

    if log.isEnabledFor(logging.DEBUG):
        import urllib.parse

        log_email = urllib.parse.quote(email)
        log.debug('Connecting to store at %s?blenderid=%s',
                  external_subscriptions_server, log_email)

    # Retry a few times when contacting the store.
    s = requests.Session()
    s.mount(external_subscriptions_server, HTTPAdapter(max_retries=5))

    try:
        r = s.get(external_subscriptions_server,
                  params={'blenderid': email},
                  verify=current_app.config['TLS_CERT_FILE'],
                  timeout=current_app.config.get('EXTERNAL_SUBSCRIPTIONS_TIMEOUT_SECS', 10))
    except requests.exceptions.ConnectionError as ex:
        log.error('Error connecting to %s: %s', external_subscriptions_server, ex)
        return None
    except requests.exceptions.Timeout as ex:
        log.error('Timeout communicating with %s: %s', external_subscriptions_server, ex)
        return None
    except requests.exceptions.RequestException as ex:
        log.error('Some error communicating with %s: %s', external_subscriptions_server, ex)
        return None

    if r.status_code != 200:
        log.warning('Error communicating with %s, code=%i, unable to check '
                    'subscription status of user %s',
                    external_subscriptions_server, r.status_code, email)
        return None

    store_user = r.json()

    if log.isEnabledFor(logging.DEBUG):
        import json
        log.debug('Received JSON from store API: %s',
                  json.dumps(store_user, sort_keys=False, indent=4))

    return store_user
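The retry setup in `fetch_subscription_info()` mounts an `HTTPAdapter` with `max_retries` on the session for the store's URL prefix, so transient connection failures are retried transparently. A minimal sketch, with a placeholder URL and no actual network request made:

```python
# Sketch of the retry setup used in fetch_subscription_info(); the URL is a
# placeholder, and no request is sent here.
import requests
from requests.adapters import HTTPAdapter

server_url = 'https://store.example.com/subscriptions'  # placeholder
s = requests.Session()
# Every request to a URL starting with server_url will retry up to 5 times
# on connection-level failures.
s.mount(server_url, HTTPAdapter(max_retries=5))

adapter = s.get_adapter(server_url)
print(adapter.max_retries.total)  # 5
```

Mounting on the specific prefix (rather than `'https://'`) keeps the retry policy scoped to the store; other requests on the same session keep the default behaviour.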
cloud/webhooks.py (new file, 216 lines)
@@ -0,0 +1,216 @@
"""Blender ID webhooks."""

import functools
import hashlib
import hmac
import json
import logging
import typing

from flask import Blueprint, request
import werkzeug.exceptions as wz_exceptions

from pillar import current_app
from pillar.api.blender_cloud import subscription
from pillar.api.utils.authentication import create_new_user_document, make_unique_username
from pillar.auth import UserClass

blueprint = Blueprint('cloud-webhooks', __name__)
log = logging.getLogger(__name__)
WEBHOOK_MAX_BODY_SIZE = 1024 * 10  # 10 kB is large enough for these payloads.


def webhook_payload(hmac_secret: str) -> dict:
    """Obtains the webhook payload from the request, verifying its HMAC.

    :returns: the webhook payload as a dictionary.
    """
    # Check the content type.
    if request.content_type != 'application/json':
        log.info('request from %s to %s had bad content type %s',
                 request.remote_addr, request, request.content_type)
        raise wz_exceptions.BadRequest('Content type not supported')

    # Check the length of the body.
    if request.content_length > WEBHOOK_MAX_BODY_SIZE:
        raise wz_exceptions.BadRequest('Request too large')
    body = request.get_data()
    if len(body) > request.content_length:
        raise wz_exceptions.BadRequest('Larger body than Content-Length header')

    # Validate the request.
    mac = hmac.new(hmac_secret.encode(), body, hashlib.sha256)
    req_hmac = request.headers.get('X-Webhook-HMAC', '')
    our_hmac = mac.hexdigest()
    if not hmac.compare_digest(req_hmac, our_hmac):
        log.info('request from %s to %s had bad HMAC %r, expected %r',
                 request.remote_addr, request, req_hmac, our_hmac)
        raise wz_exceptions.BadRequest('Bad HMAC')

    try:
        return json.loads(body)
    except json.JSONDecodeError as ex:
        log.warning('request from %s to %s had bad JSON: %s',
                    request.remote_addr, request, ex)
        raise wz_exceptions.BadRequest('Bad JSON')


def score(wh_payload: dict, user: dict) -> int:
    """Determine how likely it is that this is the correct user to modify.

    :param wh_payload: the info we received from Blender ID;
        see user_modified()
    :param user: the user in our database
    :return: the score for this user
    """
    bid_str = str(wh_payload['id'])
    try:
        match_on_bid = any((auth['provider'] == 'blender-id' and auth['user_id'] == bid_str)
                           for auth in user['auth'])
    except KeyError:
        match_on_bid = False

    match_on_old_email = user.get('email', 'none') == wh_payload.get('old_email', 'nothere')
    match_on_new_email = user.get('email', 'none') == wh_payload.get('email', 'nothere')
    return match_on_bid * 10 + match_on_old_email + match_on_new_email * 2


def insert_or_fetch_user(wh_payload: dict) -> typing.Optional[dict]:
    """Fetch the user from the DB or create it.

    Only creates it if the webhook payload indicates they could actually use
    Blender Cloud (i.e. demo or subscriber). This prevents us from creating
    Cloud accounts for Blender Network users.

    :returns: the user document, or None when not created.
    """
    users_coll = current_app.db('users')
    my_log = log.getChild('insert_or_fetch_user')

    bid_str = str(wh_payload['id'])
    email = wh_payload['email']

    # Find the user by their Blender ID, or any of their email addresses.
    # We use one query to find all matching users. This is done as a
    # consistency check; if more than one user is returned, we know the
    # database is inconsistent with Blender ID and can emit a warning
    # about this.
    query = {'$or': [
        {'auth.provider': 'blender-id', 'auth.user_id': bid_str},
        {'email': {'$in': [wh_payload['old_email'], email]}},
    ]}
    db_users = users_coll.find(query)
    user_count = db_users.count()
    if user_count > 1:
        # Now we have to pay the price for finding users in one query; we
        # have to prioritise them and return the one we think is most reliable.
        calc_score = functools.partial(score, wh_payload)
        best_user = max(db_users, key=calc_score)

        my_log.error('%d users found for query %s, picking user %s (%s)',
                     user_count, query, best_user['_id'], best_user['email'])
        return best_user
    if user_count:
        db_user = db_users[0]
        my_log.debug('found user %s', db_user['email'])
        return db_user

    # Pretend to create the user, so that we can inspect the resulting
    # capabilities. This is more future-proof than looking at the list
    # of roles in the webhook payload.
    username = make_unique_username(email)
    user_doc = create_new_user_document(email, bid_str, username,
                                        provider='blender-id',
                                        full_name=wh_payload['full_name'])

    # Figure out the user's eventual roles. These aren't stored in the document yet,
    # because that's handled by the badger service.
    eventual_roles = [subscription.ROLES_BID_TO_PILLAR[r]
                      for r in wh_payload.get('roles', [])
                      if r in subscription.ROLES_BID_TO_PILLAR]
    user_ob = UserClass.construct('', user_doc)
    user_ob.roles = eventual_roles
    user_ob.collect_capabilities()
    create = (user_ob.has_cap('subscriber') or
              user_ob.has_cap('can-renew-subscription') or
              current_app.org_manager.user_is_unknown_member(email))
    if not create:
        my_log.info('Received update for unknown user %r without Cloud access (caps=%s)',
                    wh_payload['old_email'], user_ob.capabilities)
        return None

    # Actually create the user in the database.
    r, _, _, status = current_app.post_internal('users', user_doc)
    if status != 201:
        my_log.error('unable to create user %s: %r %r', email, status, r)
        raise wz_exceptions.InternalServerError('unable to create user')

    user_doc.update(r)
    my_log.info('created user %r = %s to allow immediate Cloud access', email, user_doc['_id'])
    return user_doc


@blueprint.route('/user-modified', methods=['POST'])
def user_modified():
    """Update the local user based on the info from Blender ID.

    If the payload indicates the user has access to Blender Cloud (or at least
    a renewable subscription), create the user if not already in our DB.

    The payload we expect is a dictionary like:
    {'id': 12345,  # the user's ID in Blender ID
     'old_email': 'old@example.com',
     'full_name': 'Harry',
     'email': 'new@example.com',
     'roles': ['role1', 'role2', …]}
    """
    my_log = log.getChild('user_modified')
    my_log.debug('Received request from %s', request.remote_addr)

    hmac_secret = current_app.config['BLENDER_ID_WEBHOOK_USER_CHANGED_SECRET']
    payload = webhook_payload(hmac_secret)

    my_log.info('payload: %s', payload)

    # Update the user.
    db_user = insert_or_fetch_user(payload)
    if not db_user:
        my_log.info('Received update for unknown user %r', payload['old_email'])
        return '', 204

    # Use direct database updates to change the email and full name.
    # Also updates the db_user dict so that local_user below will have
    # the updated information.
    updates = {}
    if db_user['email'] != payload['email']:
        my_log.info('User changed email from %s to %s', payload['old_email'], payload['email'])
        updates['email'] = payload['email']
        db_user['email'] = payload['email']

    if db_user['full_name'] != payload['full_name']:
        my_log.info('User changed full name from %r to %r',
                    db_user['full_name'], payload['full_name'])
        if payload['full_name']:
            updates['full_name'] = payload['full_name']
        else:
            # Fall back to the username when the full name was erased.
            updates['full_name'] = db_user['username']
        db_user['full_name'] = updates['full_name']

    if updates:
        users_coll = current_app.db('users')
        update_res = users_coll.update_one({'_id': db_user['_id']},
                                           {'$set': updates})
        if update_res.matched_count != 1:
            my_log.error('Unable to find user %s to update, even though '
                         'we found them by email address %s',
                         db_user['_id'], payload['old_email'])

    # Defer to Pillar to do the role updates.
    local_user = UserClass.construct('', db_user)
    subscription.do_update_subscription(local_user, payload)

    return '', 204
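The HMAC verification in `webhook_payload()` reduces to a few lines of stdlib code: both sides compute a SHA-256 HMAC over the raw request body with the shared secret, and the hex digests are compared in constant time with `hmac.compare_digest()`. The secret and body below are made up for illustration.

```python
# Stdlib-only sketch of the webhook HMAC check; secret and body are made up.
import hashlib
import hmac

secret = 'shhh-shared-secret'  # stand-in for BLENDER_ID_WEBHOOK_USER_CHANGED_SECRET
body = b'{"id": 12345, "email": "new@example.com"}'

# Sender side: sign the raw body and put the digest in X-Webhook-HMAC.
sender_hmac = hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()

# Receiver side: recompute the digest over the exact bytes received and
# compare in constant time to avoid timing side-channels.
our_hmac = hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
print(hmac.compare_digest(sender_hmac, our_hmac))  # True
```

Signing the raw bytes (before any JSON parsing) is what makes the later size and JSON checks safe: a request that fails the HMAC never reaches the parser with attacker-controlled content accepted as authentic.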
296
cloud_share_img.py
Executable file
@@ -0,0 +1,296 @@
|
||||
#!/usr/bin/env python3
|
||||
|
||||
from __future__ import print_function
|
||||
|
||||
"""CLI command for sharing an image via Blender Cloud.
|
||||
|
||||
Assumes that you are logged in on Blender ID with the Blender ID Add-on.
|
||||
|
||||
The user_config_dir and user_data_dir functions come from
|
||||
https://github.com/ActiveState/appdirs/blob/master/appdirs.py and
|
||||
are licensed under the MIT license.
|
||||
"""
|
||||
|
||||
import argparse
|
||||
import json
|
||||
import mimetypes
|
||||
import os.path
|
||||
import pprint
|
||||
import sys
|
||||
import webbrowser
|
||||
|
||||
from urllib.parse import urljoin
|
||||
|
||||
import requests
|
||||
|
||||
cli = argparse.Namespace() # CLI args from argparser
|
||||
sess = requests.Session()
|
||||
IMAGE_SHARING_GROUP_NODE_NAME = 'Image sharing'
|
||||
|
||||
if sys.platform.startswith('java'):
|
||||
import platform
|
||||
|
||||
os_name = platform.java_ver()[3][0]
|
||||
if os_name.startswith('Windows'): # "Windows XP", "Windows 7", etc.
|
||||
system = 'win32'
|
||||
elif os_name.startswith('Mac'): # "Mac OS X", etc.
|
||||
system = 'darwin'
|
||||
else: # "Linux", "SunOS", "FreeBSD", etc.
|
||||
# Setting this to "linux2" is not ideal, but only Windows or Mac
|
||||
# are actually checked for and the rest of the module expects
|
||||
# *sys.platform* style strings.
|
||||
system = 'linux2'
|
||||
else:
|
||||
system = sys.platform
|
||||
|
||||
|
||||
def request(method: str, rel_url: str, **kwargs) -> requests.Response:
|
||||
kwargs.setdefault('auth', (cli.token, ''))
|
||||
url = urljoin(cli.server_url, rel_url)
|
||||
return sess.request(method, url, **kwargs)
|
||||
|
||||
|
||||
def get(rel_url: str, **kwargs) -> requests.Response:
|
||||
return request('GET', rel_url, **kwargs)
|
||||
|
||||
|
||||
def post(rel_url: str, **kwargs) -> requests.Response:
|
||||
return request('POST', rel_url, **kwargs)
|
||||
|
||||
|
||||
def find_user_id() -> str:
|
||||
"""Returns the current user ID."""
|
||||
|
||||
print(15 * '=', 'User info', 15 * '=')
|
||||
resp = get('/api/users/me')
|
||||
resp.raise_for_status()
|
||||
|
||||
user_info = resp.json()
|
||||
print('You are logged in as %(full_name)s (%(_id)s)' % user_info)
|
||||
|
||||
return user_info['_id']
|
||||
|
||||
|
||||
def find_home_project_id() -> dict:
|
||||
resp = get('/api/bcloud/home-project')
|
||||
resp.raise_for_status()
|
||||
|
||||
proj = resp.json()
|
||||
proj_id = proj['_id']
|
||||
print('Your home project ID is %s' % proj_id)
|
||||
return proj_id
|
||||
|
||||
|
||||
def find_image_sharing_group_id(home_project_id, user_id) -> str:
|
||||
"""Find the top-level image sharing group node."""
|
||||
|
||||
node_doc = {
|
||||
'project': home_project_id,
|
||||
'node_type': 'group',
|
||||
'name': IMAGE_SHARING_GROUP_NODE_NAME,
|
||||
'user': user_id,
|
||||
}
|
||||
|
||||
resp = get('/api/nodes', params={'where': json.dumps(node_doc)})
|
||||
resp.raise_for_status()
|
||||
items = resp.json()['_items']
|
||||
|
||||
if not items:
|
||||
print('Share group not found, creating one.')
|
||||
node_doc.update({
|
||||
'properties': {},
|
||||
})
|
||||
resp = post('/api/nodes', json=node_doc)
|
||||
resp.raise_for_status()
|
||||
share_group = resp.json()
|
||||
else:
|
||||
share_group = items[0]
|
||||
|
||||
# print('Share group:', share_group)
|
||||
return share_group['_id']
|
||||
|
||||
|
||||
def upload_image():
|
||||
user_id = find_user_id()
|
||||
home_project_id = find_home_project_id()
|
||||
group_id = find_image_sharing_group_id(home_project_id, user_id)
|
||||
basename = os.path.basename(cli.imgfile)
|
||||
print('Sharing group ID is %s' % group_id)
|
||||
|
||||
# Upload the image to the project.
|
||||
print('Uploading %r' % cli.imgfile)
|
||||
mimetype, _ = mimetypes.guess_type(cli.imgfile, strict=False)
|
||||
with open(cli.imgfile, mode='rb') as infile:
|
||||
resp = post('api/storage/stream/%s' % home_project_id,
|
||||
files={'file': (basename, infile, mimetype)})
|
||||
resp.raise_for_status()
|
||||
file_upload_resp = resp.json()
|
||||
file_upload_status = file_upload_resp.get('_status') or file_upload_resp.get('status')
|
||||
if file_upload_status != 'ok':
|
||||
raise ValueError('Received bad status %s from Pillar: %s' %
|
||||
(file_upload_status, json.dumps(file_upload_resp)))
|
||||
file_id = file_upload_resp['file_id']
|
||||
print('File ID is', file_id)
|
||||
|
||||
# Create the asset node
|
||||
asset_node = {
|
||||
'project': home_project_id,
|
||||
'node_type': 'asset',
|
||||
'name': basename,
|
||||
'parent': group_id,
|
||||
'properties': {
|
||||
'content_type': mimetype,
|
||||
'file': file_id,
|
||||
},
|
||||
}
|
||||
resp = post('api/nodes', json=asset_node)
|
||||
resp.raise_for_status()
|
||||
node_info = resp.json()
|
||||
node_id = node_info['_id']
|
||||
print('Created asset node', node_id)
|
||||
|
||||
# Share the node to get a public URL.
|
||||
resp = post('api/nodes/%s/share' % node_id)
|
||||
resp.raise_for_status()
|
||||
share_info = resp.json()
|
||||
print(json.dumps(share_info, indent=4))
|
||||
|
||||
url = share_info.get('short_link')
|
||||
print('Opening %s in a browser' % url)
|
||||
webbrowser.open_new_tab(url)
|
||||
|
||||
|
||||
def find_credentials():
|
||||
"""Finds BlenderID credentials.
|
||||
|
||||
:rtype: str
|
||||
:returns: the authentication token to use.
|
||||
"""
|
||||
import glob
|
||||
|
||||
# Find BlenderID profile file.
|
||||
configpath = user_config_dir('blender', 'Blender Foundation', roaming=True)
|
||||
found = glob.glob(os.path.join(configpath, '*'))
|
||||
for confpath in reversed(sorted(found)):
|
||||
profiles_path = os.path.join(confpath, 'config', 'blender_id', 'profiles.json')
|
||||
if not os.path.exists(profiles_path):
|
||||
continue
|
||||
|
||||
print('Reading credentials from %s' % profiles_path)
|
||||
with open(profiles_path) as infile:
|
||||
profiles = json.load(infile, encoding='utf8')
|
||||
if profiles:
|
||||
break
|
||||
else:
|
||||
print('Unable to find Blender ID credentials. Log in with the Blender ID add-on in '
|
||||
'Blender first.')
|
||||
raise SystemExit()
|
||||
|
||||
active_profile = profiles[u'active_profile']
|
||||
profile = profiles[u'profiles'][active_profile]
|
||||
print('Logging in as %s' % profile[u'username'])
|
||||
|
||||
return profile[u'token']
|
||||
|
||||
|
||||
def main():
|
||||
global cli
|
||||
|
||||
parser = argparse.ArgumentParser()
|
||||
parser.add_argument('imgfile', help='The image file to share.')
|
||||
parser.add_argument('-u', '--server-url', default='https://cloud.blender.org/',
|
||||
help='URL of the Flamenco server.')
|
||||
parser.add_argument('-t', '--token',
|
||||
help='Authentication token to use. If not given, your token from the '
|
||||
'Blender ID add-on is used.')
|
||||
|
||||
cli = parser.parse_args()
|
||||
if not cli.token:
|
||||
cli.token = find_credentials()
|
||||
|
||||
upload_image()
|
||||
|
||||
|
||||
def user_config_dir(appname=None, appauthor=None, version=None, roaming=False):
|
||||
r"""Return full path to the user-specific config dir for this application.
|
||||
"appname" is the name of application.
|
||||
If None, just the system directory is returned.
|
||||
"appauthor" (only used on Windows) is the name of the
|
||||
appauthor or distributing body for this application. Typically
|
||||
it is the owning company name. This falls back to appname. You may
|
||||
pass False to disable it.
|
||||
"version" is an optional version path element to append to the
|
||||
path. You might want to use this if you want multiple versions
|
||||
of your app to be able to run independently. If used, this
|
||||
would typically be "<major>.<minor>".
|
||||
Only applied when appname is present.
|
||||
"roaming" (boolean, default False) can be set True to use the Windows
|
||||
roaming appdata directory. That means that for users on a Windows
|
||||
network setup for roaming profiles, this user data will be
|
||||
        sync'd on login. See
        <http://technet.microsoft.com/en-us/library/cc766489(WS.10).aspx>
        for a discussion of issues.

    Typical user config directories are:
        Mac OS X: same as user_data_dir
        Unix:     ~/.config/<AppName>    # or in $XDG_CONFIG_HOME, if defined
        Win *:    same as user_data_dir

    For Unix, we follow the XDG spec and support $XDG_CONFIG_HOME.
    That means, by default "~/.config/<AppName>".
    """
    if system in {"win32", "darwin"}:
        path = user_data_dir(appname, appauthor, None, roaming)
    else:
        path = os.getenv('XDG_CONFIG_HOME', os.path.expanduser("~/.config"))
        if appname:
            path = os.path.join(path, appname)
    if appname and version:
        path = os.path.join(path, version)
    return path


def user_data_dir(appname=None, appauthor=None, version=None, roaming=False):
    r"""Return full path to the user-specific data dir for this application.

    "appname" is the name of application.
        If None, just the system directory is returned.
    "appauthor" (only used on Windows) is the name of the
        appauthor or distributing body for this application. Typically
        it is the owning company name. This falls back to appname. You may
        pass False to disable it.
    "version" is an optional version path element to append to the
        path. You might want to use this if you want multiple versions
        of your app to be able to run independently. If used, this
        would typically be "<major>.<minor>".
        Only applied when appname is present.
    "roaming" (boolean, default False) can be set True to use the Windows
        roaming appdata directory. That means that for users on a Windows
        network setup for roaming profiles, this user data will be
        sync'd on login. See
        <http://technet.microsoft.com/en-us/library/cc766489(WS.10).aspx>
        for a discussion of issues.

    Typical user data directories are:
        Mac OS X:             ~/Library/Application Support/<AppName>
        Unix:                 ~/.local/share/<AppName>    # or in $XDG_DATA_HOME, if defined
        Win XP (not roaming): C:\Documents and Settings\<username>\Application Data\<AppAuthor>\<AppName>
        Win XP (roaming):     C:\Documents and Settings\<username>\Local Settings\Application Data\<AppAuthor>\<AppName>
        Win 7 (not roaming):  C:\Users\<username>\AppData\Local\<AppAuthor>\<AppName>
        Win 7 (roaming):      C:\Users\<username>\AppData\Roaming\<AppAuthor>\<AppName>

    For Unix, we follow the XDG spec and support $XDG_DATA_HOME.
    That means, by default "~/.local/share/<AppName>".
    """
    if system == "win32":
        raise RuntimeError("Sorry, Windows is not supported for now.")
    elif system == 'darwin':
        path = os.path.expanduser('~/Library/Application Support/')
        if appname:
            path = os.path.join(path, appname)
    else:
        path = os.getenv('XDG_DATA_HOME', os.path.expanduser("~/.local/share"))
        if appname:
            path = os.path.join(path, appname)
    if appname and version:
        path = os.path.join(path, version)
    return path


if __name__ == '__main__':
    main()
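The Unix branches of both functions reduce to the same XDG fallback chain: the `$XDG_*_HOME` variable if set, otherwise a hard-coded default under `$HOME`, then optional `appname` and `version` components. A minimal standalone sketch of that lookup (the helper `xdg_dir` is hypothetical, not part of the code above):

```python
import os
import os.path


def xdg_dir(env_var, default, appname=None, version=None):
    # Hypothetical helper mirroring the Unix branches above:
    # $XDG_*_HOME if set, else the default under $HOME, then app/version.
    path = os.getenv(env_var, os.path.expanduser(default))
    if appname:
        path = os.path.join(path, appname)
        if version:
            path = os.path.join(path, version)
    return path


os.environ['XDG_CONFIG_HOME'] = '/tmp/xdg'
print(xdg_dir('XDG_CONFIG_HOME', '~/.config', 'blender-cloud', '1.0'))
# → /tmp/xdg/blender-cloud/1.0
```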
deploy.sh
@@ -1,124 +0,0 @@
#!/bin/bash -e

# Deploys the current production branch to the production machine.
PROJECT_NAME="blender-cloud"
DOCKER_NAME="blender_cloud"
REMOTE_ROOT="/data/git/${PROJECT_NAME}"

SSH="ssh -o ClearAllForwardings=yes cloud.blender.org"

# macOS does not support readlink -f, so we use greadlink instead
if [[ `uname` == 'Darwin' ]]; then
    command -v greadlink >/dev/null 2>&1 || { echo >&2 "Install greadlink using brew."; exit 1; }
    readlink='greadlink'
else
    readlink='readlink'
fi

ROOT="$(dirname "$($readlink -f "$0")")"
cd ${ROOT}

# Check that we're on the production branch.
if [ $(git rev-parse --abbrev-ref HEAD) != "production" ]; then
    echo "You are NOT on the production branch, refusing to deploy." >&2
    exit 1
fi

# Check that the production branch has been pushed.
if [ -n "$(git log origin/production..production --oneline)" ]; then
    echo "WARNING: not all changes to the production branch have been pushed."
    echo "Press [ENTER] to continue deploying current origin/production, CTRL+C to abort."
    read dummy
fi

function find_module()
{
    MODULE_NAME=$1
    MODULE_DIR=$(python <<EOT
from __future__ import print_function
import os.path

try:
    import ${MODULE_NAME}
except ImportError:
    raise SystemExit('${MODULE_NAME} not found on Python path. Are you in the correct venv?')

print(os.path.dirname(os.path.dirname(${MODULE_NAME}.__file__)))
EOT
)

    if [ $(git -C $MODULE_DIR rev-parse --abbrev-ref HEAD) != "production" ]; then
        echo "${MODULE_NAME}: ($MODULE_DIR) NOT on the production branch, refusing to deploy." >&2
        exit 1
    fi

    echo $MODULE_DIR
}

# Find our modules
PILLAR_DIR=$(find_module pillar)
ATTRACT_DIR=$(find_module attract)
FLAMENCO_DIR=$(find_module flamenco)

echo "Pillar  : $PILLAR_DIR"
echo "Attract : $ATTRACT_DIR"
echo "Flamenco: $FLAMENCO_DIR"

if [ -z "$PILLAR_DIR" -o -z "$ATTRACT_DIR" -o -z "$FLAMENCO_DIR" ];
then
    exit 1
fi

# SSH to cloud to pull all files in
function git_pull() {
    PROJECT_NAME="$1"
    BRANCH="$2"
    REMOTE_ROOT="/data/git/${PROJECT_NAME}"

    echo "==================================================================="
    echo "UPDATING FILES ON ${PROJECT_NAME}"
    ${SSH} git -C ${REMOTE_ROOT} fetch origin ${BRANCH}
    ${SSH} git -C ${REMOTE_ROOT} log origin/${BRANCH}..${BRANCH} --oneline
    ${SSH} git -C ${REMOTE_ROOT} merge --ff-only origin/${BRANCH}
}

git_pull pillar-python-sdk master
git_pull pillar production
git_pull attract production
git_pull flamenco production
git_pull blender-cloud production

# Update the virtualenv
#${SSH} -t docker exec ${DOCKER_NAME} /data/venv/bin/pip install -U -r ${REMOTE_ROOT}/requirements.txt --exists-action w

# RSync the world
$ATTRACT_DIR/rsync_ui.sh
$FLAMENCO_DIR/rsync_ui.sh
./rsync_ui.sh

# Notify Bugsnag of this new deploy.
echo
echo "==================================================================="
GIT_REVISION=$(${SSH} git -C ${REMOTE_ROOT} describe --always)
echo "Notifying Bugsnag of this new deploy of revision ${GIT_REVISION}."
BUGSNAG_API_KEY=$(${SSH} python -c "\"import sys; sys.path.append('${REMOTE_ROOT}'); import config_local; print(config_local.BUGSNAG_API_KEY)\"")
curl --data "apiKey=${BUGSNAG_API_KEY}&revision=${GIT_REVISION}" https://notify.bugsnag.com/deploy
echo

# Wait for [ENTER] to restart the server
echo
echo "==================================================================="
echo "NOTE: If you want to edit config_local.py on the server, do so now."
echo "NOTE: Press [ENTER] to continue and restart the server process."
read dummy
${SSH} docker exec ${DOCKER_NAME} apache2ctl graceful
echo "Server process restarted"

echo
echo "==================================================================="
echo "Clearing front page from Redis cache."
${SSH} docker exec redis redis-cli DEL pwview//

echo
echo "==================================================================="
echo "Deploy of ${PROJECT_NAME} is done."
echo "==================================================================="
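The `find_module` heredoc locates a package's checkout directory by importing it and walking two `dirname()` steps up from its `__file__`. The same logic as a standalone sketch (the function name `module_checkout_dir` is made up for illustration):

```python
from __future__ import print_function
import os.path


def module_checkout_dir(module_name):
    # Two dirname() calls walk from .../checkout/<pkg>/__init__.py
    # up to .../checkout, just like the heredoc in find_module.
    try:
        module = __import__(module_name)
    except ImportError:
        raise SystemExit(
            '%s not found on Python path. Are you in the correct venv?' % module_name)
    return os.path.dirname(os.path.dirname(module.__file__))


# The stdlib 'json' package stands in here for pillar/attract/flamenco.
print(module_checkout_dir('json'))
```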
deploy/2docker.sh
Executable file
@@ -0,0 +1,116 @@
#!/bin/bash -e

DEPLOY_BRANCH=${DEPLOY_BRANCH:-production}

# macOS does not support readlink -f, so we use greadlink instead
if [[ `uname` == 'Darwin' ]]; then
    command -v greadlink >/dev/null 2>&1 || { echo >&2 "Install greadlink using brew."; exit 1; }
    readlink='greadlink'
else
    readlink='readlink'
fi

ROOT="$(dirname "$(dirname "$($readlink -f "$0")")")"
DEPLOYDIR="$ROOT/docker/4_run/deploy"
PROJECT_NAME="$(basename $ROOT)"

if [ -e $DEPLOYDIR ]; then
    echo "$DEPLOYDIR already exists, press [ENTER] to DESTROY AND DEPLOY, Ctrl+C to abort."
    read dummy
    rm -rf $DEPLOYDIR
else
    echo -n "Deploying to $DEPLOYDIR… "
    echo "press [ENTER] to continue, Ctrl+C to abort."
    read dummy
fi

cd ${ROOT}
mkdir -p $DEPLOYDIR
REMOTE_ROOT="$DEPLOYDIR/$PROJECT_NAME"

if [ -z "$SKIP_BRANCH_CHECK" ]; then
    # Check that we're on the deploy branch.
    if [ $(git rev-parse --abbrev-ref HEAD) != "$DEPLOY_BRANCH" ]; then
        echo "You are NOT on the $DEPLOY_BRANCH branch, refusing to deploy." >&2
        exit 1
    fi

    # Check that the deploy branch has been pushed.
    if [ -n "$(git log origin/$DEPLOY_BRANCH..$DEPLOY_BRANCH --oneline)" ]; then
        echo "WARNING: not all changes to the $DEPLOY_BRANCH branch have been pushed."
        echo "Press [ENTER] to continue deploying current origin/$DEPLOY_BRANCH, CTRL+C to abort."
        read dummy
    fi
fi

function find_module()
{
    MODULE_NAME=$1
    MODULE_DIR=$(python <<EOT
from __future__ import print_function
import os.path

try:
    import ${MODULE_NAME}
except ImportError:
    raise SystemExit('${MODULE_NAME} not found on Python path. Are you in the correct venv?')

print(os.path.dirname(os.path.dirname(${MODULE_NAME}.__file__)))
EOT
)
    echo $MODULE_DIR
}

# Find our modules
echo "==================================================================="
echo "LOCAL MODULE LOCATIONS"
PILLAR_DIR=$(find_module pillar)
ATTRACT_DIR=$(find_module attract)
FLAMENCO_DIR=$(find_module flamenco)
SVNMAN_DIR=$(find_module svnman)
SDK_DIR=$(find_module pillarsdk)

echo "Pillar  : $PILLAR_DIR"
echo "Attract : $ATTRACT_DIR"
echo "Flamenco: $FLAMENCO_DIR"
echo "SVNMan  : $SVNMAN_DIR"
echo "SDK     : $SDK_DIR"

if [ -z "$PILLAR_DIR" -o -z "$ATTRACT_DIR" -o -z "$FLAMENCO_DIR" -o -z "$SVNMAN_DIR" -o -z "$SDK_DIR" ];
then
    exit 1
fi

function git_clone() {
    PROJECT_NAME="$1"
    BRANCH="$2"
    LOCAL_ROOT="$3"

    echo "==================================================================="
    echo "CLONING REPO ON $PROJECT_NAME @$BRANCH"
    URL=$(git -C $LOCAL_ROOT remote get-url origin)
    git -C $DEPLOYDIR clone --depth 1 --branch $BRANCH $URL $PROJECT_NAME
}

git_clone pillar-python-sdk master $SDK_DIR
git_clone pillar $DEPLOY_BRANCH $PILLAR_DIR
git_clone attract $DEPLOY_BRANCH $ATTRACT_DIR
git_clone flamenco $DEPLOY_BRANCH $FLAMENCO_DIR
git_clone pillar-svnman $DEPLOY_BRANCH $SVNMAN_DIR
git_clone blender-cloud $DEPLOY_BRANCH $ROOT

# Gulp everywhere
GULP=$ROOT/node_modules/.bin/gulp
if [ ! -e $GULP -o gulpfile.js -nt $GULP ]; then
    npm install
    touch $GULP  # installer doesn't always touch this after a build, so we do.
fi
$GULP --cwd $DEPLOYDIR/pillar --production
$GULP --cwd $DEPLOYDIR/attract --production
$GULP --cwd $DEPLOYDIR/flamenco --production
$GULP --cwd $DEPLOYDIR/pillar-svnman --production
$GULP --cwd $DEPLOYDIR/blender-cloud --production

echo
echo "==================================================================="
echo "Deploy of ${PROJECT_NAME} is ready for dockerisation."
echo "==================================================================="
deploy/2server.sh
Executable file
@@ -0,0 +1,81 @@
#!/bin/bash -e

# macOS does not support readlink -f, so we use greadlink instead
if [[ `uname` == 'Darwin' ]]; then
    command -v greadlink >/dev/null 2>&1 || { echo >&2 "Install greadlink using brew."; exit 1; }
    readlink='greadlink'
else
    readlink='readlink'
fi
ROOT="$(dirname "$(dirname "$($readlink -f "$0")")")"
PROJECT_NAME="$(basename $ROOT)"
DOCKER_DEPLOYDIR="$ROOT/docker/4_run/deploy"
DOCKER_IMAGE="armadillica/blender_cloud:latest"
REMOTE_SECRET_CONFIG_DIR="/data/config"
REMOTE_DOCKER_COMPOSE_DIR="/root/docker"

#################################################################################
case $1 in
    cloud*)
        DEPLOYHOST="$1"
        ;;
    *)
        echo "Use $0 cloud{nr}|cloud.blender.org" >&2
        exit 1
esac
SSH_OPTS="-o ClearAllForwardings=yes -o PermitLocalCommand=no"
SSH="ssh $SSH_OPTS $DEPLOYHOST"
SCP="scp $SSH_OPTS"

echo -n "Deploying to $DEPLOYHOST… "

if ! ping $DEPLOYHOST -q -c 1 -W 2 >/dev/null; then
    echo "host $DEPLOYHOST cannot be pinged, refusing to deploy." >&2
    exit 2
fi

cat <<EOT
[ping OK]

Make sure that you have pushed the $DOCKER_IMAGE
docker image to Docker Hub.

press [ENTER] to continue, Ctrl+C to abort.
EOT
read dummy

#################################################################################
echo "==================================================================="
echo "Bringing remote Docker up to date…"
$SSH mkdir -p $REMOTE_DOCKER_COMPOSE_DIR
$SCP \
    $ROOT/docker/{docker-compose.yml,renew-letsencrypt.sh,mongo-backup.{cron,sh}} \
    $DEPLOYHOST:$REMOTE_DOCKER_COMPOSE_DIR
$SSH -T <<EOT
set -e
cd $REMOTE_DOCKER_COMPOSE_DIR
docker pull $DOCKER_IMAGE
docker-compose up -d

echo
echo "==================================================================="
echo "Clearing front page from Redis cache."
docker exec redis redis-cli DEL pwview//
EOT

# Notify Sentry of this new deploy.
# See https://sentry.io/blender-institute/blender-cloud/settings/release-tracking/
# and https://docs.sentry.io/api/releases/post-organization-releases/
# and https://sentry.io/api/
echo
echo "==================================================================="
REVISION=$(date +'%Y%m%d.%H%M%S.%Z')
echo "Notifying Sentry of this new deploy of revision $REVISION."
SENTRY_RELEASE_URL="$($SSH env PYTHONPATH="$REMOTE_SECRET_CONFIG_DIR" python3 -c "\"import config_secrets; print(config_secrets.SENTRY_RELEASE_URL)\"")"
curl -s "$SENTRY_RELEASE_URL" -XPOST -H 'Content-Type: application/json' -d "{\"version\": \"$REVISION\"}" | json_pp
echo

echo
echo "==================================================================="
echo "Deploy to $DEPLOYHOST done."
echo "==================================================================="
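The Sentry notification is a plain JSON POST whose only field is a timestamp-based version string. A sketch of the payload the `curl` call sends, in Python (the helper name is made up; the real script builds this inline with `date` and `curl`):

```python
import datetime
import json


def sentry_deploy_payload():
    # Same revision scheme as the script: date +'%Y%m%d.%H%M%S.%Z'
    # (%Z is empty for a naive datetime; the shell version includes the TZ name).
    revision = datetime.datetime.now().strftime('%Y%m%d.%H%M%S.%Z')
    return json.dumps({'version': revision})


print(sentry_deploy_payload())
```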
deploy/build-all.sh
Symbolic link
@@ -0,0 +1 @@
build-quick.sh
deploy/build-quick.sh
Executable file
@@ -0,0 +1,34 @@
#!/bin/bash -e

# macOS does not support readlink -f, so we use greadlink instead
if [[ `uname` == 'Darwin' ]]; then
    command -v greadlink >/dev/null 2>&1 || { echo >&2 "Install greadlink using brew."; exit 1; }
    readlink='greadlink'
else
    readlink='readlink'
fi
ROOT="$(dirname "$(dirname "$($readlink -f "$0")")")"
DOCKERDIR="$ROOT/docker/4_run"

case "$(basename "$0")" in
    build-quick.sh)
        pushd "$ROOT/docker/4_run"
        ./build.sh
        ;;
    build-all.sh)
        pushd "$ROOT/docker"
        ./full_rebuild.sh
        ;;
    *)
        echo "Unknown script $0, aborting" >&2
        exit 1
esac

popd
echo
echo "Press [ENTER] to push the new Docker image."
read dummy
docker push armadillica/blender_cloud:latest
echo
echo "Build is done, ready to update the server."
docker/1_base/Dockerfile
Normal file
@@ -0,0 +1,6 @@
FROM ubuntu:17.10
MAINTAINER Francesco Siddi <francesco@blender.org>

RUN apt-get update && apt-get install -qyy \
    -o APT::Install-Recommends=false -o APT::Install-Suggests=false \
    openssl ca-certificates
@@ -1,16 +0,0 @@
FROM ubuntu:16.04
MAINTAINER Francesco Siddi <francesco@blender.org>

RUN apt-get update && apt-get install -qyy \
    -o APT::Install-Recommends=false -o APT::Install-Suggests=false \
    python-pip libffi6 openssl ffmpeg rsyslog logrotate

RUN mkdir -p /data/git/pillar \
    && mkdir -p /data/storage \
    && mkdir -p /data/config \
    && mkdir -p /data/venv \
    && mkdir -p /data/wheelhouse

RUN pip install virtualenv
RUN virtualenv /data/venv
RUN . /data/venv/bin/activate && pip install -U pip && pip install wheel
docker/1_base/build.sh
Normal file → Executable file
@@ -1,3 +1,4 @@
#!/usr/bin/env bash

docker build -t pillar_base -f base.docker .;
# Uses --no-cache to always get the latest upstream (security) upgrades.
exec docker build --no-cache "$@" -t pillar_base .
@@ -1,3 +0,0 @@
#!/usr/bin/env bash

. /data/venv/bin/activate && pip wheel --wheel-dir=/data/wheelhouse -r /requirements.txt
@@ -1,26 +0,0 @@
FROM pillar_base
MAINTAINER Francesco Siddi <francesco@blender.org>

RUN apt-get update && apt-get install -qy \
    git \
    gcc \
    libffi-dev \
    libssl-dev \
    pypy-dev \
    python-dev \
    python-imaging \
    zlib1g-dev \
    libjpeg-dev \
    libtiff-dev \
    python-crypto \
    python-openssl

ENV WHEELHOUSE=/data/wheelhouse
ENV PIP_WHEEL_DIR=/data/wheelhouse
ENV PIP_FIND_LINKS=/data/wheelhouse

VOLUME /data/wheelhouse

ADD requirements.txt /requirements.txt
ADD build-wheels.sh /build-wheels.sh
ENTRYPOINT ["bash", "build-wheels.sh"]
@@ -1,11 +0,0 @@
#!/usr/bin/env bash

mkdir -p ../3_run/wheelhouse;
cp ../../requirements.txt .;

docker build -t pillar_build -f build.docker .;
docker run --rm \
    -v "$(pwd)"/../3_run/wheelhouse:/data/wheelhouse \
    pillar_build;

rm requirements.txt;
docker/2_buildpy/Python-3.6.4.tar.xz.md5
Normal file
@@ -0,0 +1 @@
1325134dd525b4a2c3272a1a0214dd54  Python-3.6.4.tar.xz
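This file is in `md5sum`'s `<digest>  <filename>` format, which the build's `md5sum -c` step consumes. The same check done by hand in Python (the helper is hypothetical; the data below is the well-known MD5 test vector, not the Python tarball):

```python
import hashlib


def check_md5sum_line(line, data):
    # md5sum -c reads "digest  filename" pairs; compare the digest
    # against the given bytes instead of reading the file from disk.
    digest, filename = line.split()
    return hashlib.md5(data).hexdigest() == digest, filename


ok, name = check_md5sum_line(
    '9e107d9d372bb6826bd81d3542a419d6  fox.txt',
    b'The quick brown fox jumps over the lazy dog')
print(ok, name)  # → True fox.txt
```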
docker/2_buildpy/build.sh
Executable file
@@ -0,0 +1,58 @@
#!/usr/bin/env bash

set -e

# macOS does not support readlink -f, so we use greadlink instead
if [ $(uname) == 'Darwin' ]; then
    command -v greadlink >/dev/null 2>&1 || { echo >&2 "Install greadlink using brew."; exit 1; }
    readlink='greadlink'
else
    readlink='readlink'
fi

PYTHONTARGET=$($readlink -f ./python)

mkdir -p "$PYTHONTARGET"
echo "Python will be built to $PYTHONTARGET"

docker build -t pillar_build -f buildpy.docker .

# Use the docker image to build Python 3.6 and mod-wsgi
GID=$(id -g)
docker run --rm -i \
    -v "$PYTHONTARGET:/opt/python" \
    pillar_build <<EOT
set -e
cd \$PYTHONSOURCE
./configure \
    --prefix=/opt/python \
    --enable-ipv6 \
    --enable-shared \
    --with-ensurepip=upgrade
make -j8 install

# Make sure we can run Python
ldconfig

# Build mod-wsgi-py3 for Python 3.6
cd /dpkg/mod-wsgi-*
./configure --with-python=/opt/python/bin/python3
make -j8 install
mkdir -p /opt/python/mod-wsgi
cp /usr/lib/apache2/modules/mod_wsgi.so /opt/python/mod-wsgi

chown -R $UID:$GID /opt/python/*
EOT

# Strip some stuff we don't need from the Python install.
rm -rf $PYTHONTARGET/lib/python3.*/test
rm -rf $PYTHONTARGET/lib/python3.*/config-3.*/libpython3.*.a
find $PYTHONTARGET/lib -name '*.so.*' -o -name '*.so' | while read libname; do
    chmod u+w "$libname"
    strip "$libname"
done

# Create another docker image which contains the actual Python.
# This one will serve as base for the Wheel builder and the
# production image.
docker build -t armadillica/pillar_py:3.6 -f includepy.docker .
docker/2_buildpy/buildpy.docker
Normal file
@@ -0,0 +1,35 @@
FROM pillar_base
LABEL maintainer Sybren A. Stüvel <sybren@blender.studio>

RUN sed -i 's/^# deb-src/deb-src/' /etc/apt/sources.list && \
    apt-get update && \
    apt-get install -qy \
        build-essential \
        apache2-dev \
        checkinstall \
        curl

RUN apt-get build-dep -y python3.6

ADD Python-3.6.4.tar.xz.md5 /Python-3.6.4.tar.xz.md5

# Install Python sources
RUN curl -O https://www.python.org/ftp/python/3.6.4/Python-3.6.4.tar.xz && \
    md5sum -c Python-3.6.4.tar.xz.md5 && \
    tar xf Python-3.6.4.tar.xz && \
    rm -v Python-3.6.4.tar.xz

# Install mod-wsgi sources
RUN mkdir -p /dpkg && cd /dpkg && apt-get source libapache2-mod-wsgi-py3

# To be able to install Python outside the docker.
VOLUME /opt/python

# To be able to run Python; after building, ldconfig has to be re-run to do this.
# This makes it easier to use Python right after building (for example to build
# mod-wsgi for Python 3.6).
RUN echo /opt/python/lib > /etc/ld.so.conf.d/python.conf
RUN ldconfig
ENV PATH=/opt/python/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

ENV PYTHONSOURCE=/Python-3.6.4
docker/2_buildpy/includepy.docker
Normal file
@@ -0,0 +1,14 @@
FROM pillar_base
LABEL maintainer Sybren A. Stüvel <sybren@blender.studio>

ADD python /opt/python

RUN echo /opt/python/lib > /etc/ld.so.conf.d/python.conf
RUN ldconfig

RUN echo Python is installed in /opt/python/ > README.python
ENV PATH=/opt/python/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

RUN cd /opt/python/bin && \
    ln -s python3 python && \
    ln -s pip3 pip
docker/3_buildwheels/Dockerfile
Normal file
@@ -0,0 +1,18 @@
FROM armadillica/pillar_py:3.6
LABEL maintainer Sybren A. Stüvel <sybren@blender.studio>

RUN apt-get update && apt-get install -qy \
    git \
    build-essential \
    checkinstall \
    libffi-dev \
    libssl-dev \
    libjpeg-dev \
    zlib1g-dev

ENV WHEELHOUSE=/data/wheelhouse
ENV PIP_WHEEL_DIR=/data/wheelhouse
ENV PIP_FIND_LINKS=/data/wheelhouse
RUN mkdir -p $WHEELHOUSE

VOLUME /data/wheelhouse
docker/3_buildwheels/build.sh
Executable file
@@ -0,0 +1,45 @@
#!/usr/bin/env bash

set -e

# macOS does not support readlink -f, so we use greadlink instead
if [ $(uname) == 'Darwin' ]; then
    command -v greadlink >/dev/null 2>&1 || { echo >&2 "Install greadlink using brew."; exit 1; }
    readlink='greadlink'
else
    readlink='readlink'
fi

TOPDEVDIR="$($readlink -f ../../..)"
echo "Top-level development dir is $TOPDEVDIR"

WHEELHOUSE="$($readlink -f ../4_run/wheelhouse)"
if [ -z "$WHEELHOUSE" ]; then
    echo "Error, ../4_run might not exist." >&2
    exit 2
fi

echo "Wheelhouse is $WHEELHOUSE"
mkdir -p "$WHEELHOUSE"

docker build -t pillar_wheelbuilder .

GID=$(id -g)
docker run --rm -i \
    -v "$WHEELHOUSE:/data/wheelhouse" \
    -v "$TOPDEVDIR:/data/topdev" \
    pillar_wheelbuilder <<EOT
set -e
# Build wheels for all dependencies.
cd /data/topdev/blender-cloud
pip3 install wheel
pip3 wheel --wheel-dir=/data/wheelhouse -r requirements.txt
chown -R $UID:$GID /data/wheelhouse

# Install the dependencies so that we can get a full freeze.
pip3 install --no-index --find-links=/data/wheelhouse -r requirements.txt
pip3 freeze | grep -v '^-[ef] ' > /data/wheelhouse/requirements.txt
EOT

# Remove our own projects, they shouldn't be installed as wheels (for now).
rm -f $WHEELHOUSE/{attract,flamenco,pillar,pillarsdk}*.whl
@@ -1,39 +0,0 @@
<VirtualHost *:80>
    # EnableSendfile on
    XSendFile on
    XSendFilePath /data/storage/pillar
    XSendFilePath /data/git/pillar
    XSendFilePath /data/venv/lib/python2.7/site-packages/attract/static/
    XSendFilePath /data/venv/lib/python2.7/site-packages/flamenco/static/
    XSendFilePath /data/git/blender-cloud

    ServerAdmin webmaster@localhost
    DocumentRoot /var/www/html

    # Available loglevels: trace8, ..., trace1, debug, info, notice, warn,
    # error, crit, alert, emerg.
    # It is also possible to configure the loglevel for particular
    # modules, e.g.
    # LogLevel info ssl:warn

    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined

    WSGIDaemonProcess cloud processes=4 threads=1 maximum-requests=10000
    WSGIPassAuthorization On

    WSGIScriptAlias / /data/git/blender-cloud/runserver.wsgi \
        process-group=cloud application-group=%{GLOBAL}

    <Directory /data/git/blender-cloud>
        <Files runserver.wsgi>
            Require all granted
        </Files>
    </Directory>

    # Temporary edit to remap the old cloudapi.blender.org to cloud.blender.org/api
    RewriteEngine On
    RewriteCond "%{HTTP_HOST}" "^cloudapi\.blender\.org" [NC]
    RewriteRule (.*) /api$1 [PT]

</VirtualHost>
@@ -1,5 +0,0 @@
#!/usr/bin/env bash

cp ../../requirements.txt .;
docker build -t armadillica/blender_cloud -f run.docker .;
rm requirements.txt;
@@ -1,25 +0,0 @@
#!/usr/bin/env bash

if [ ! -f /installed ]; then
    echo "Installing pillar and pillar-sdk"
    # TODO: currently doing pip install -e takes a long time, so we symlink
    # . /data/venv/bin/activate && pip install -e /data/git/pillar
    ln -s /data/git/pillar/pillar /data/venv/lib/python2.7/site-packages/pillar
    # . /data/venv/bin/activate && pip install -e /data/git/attract
    ln -s /data/git/attract/attract /data/venv/lib/python2.7/site-packages/attract
    # . /data/venv/bin/activate && pip install -e /data/git/flamenco/packages/flamenco
    ln -s /data/git/flamenco/packages/flamenco/flamenco/ /data/venv/lib/python2.7/site-packages/flamenco
    # . /data/venv/bin/activate && pip install -e /data/git/pillar-python-sdk
    ln -s /data/git/pillar-python-sdk/pillarsdk /data/venv/lib/python2.7/site-packages/pillarsdk
    touch installed
fi

if [ "$DEV" = "true" ]; then
    echo "Running in development mode"
    cd /data/git/blender-cloud
    bash /manage.sh runserver --host='0.0.0.0'
else
    # Run Apache
    a2enmod rewrite
    /usr/sbin/apache2ctl -D FOREGROUND
fi
@@ -1,5 +0,0 @@
#!/usr/bin/env bash -e

. /data/venv/bin/activate
cd /data/git/blender-cloud
python manage.py "$@"
@@ -1,46 +0,0 @@
FROM pillar_base

RUN apt-get update && apt-get install -qyy \
    -o APT::Install-Recommends=true -o APT::Install-Suggests=false \
    git \
    apache2 \
    libapache2-mod-wsgi \
    libapache2-mod-xsendfile \
    libjpeg8 \
    libtiff5 \
    nano vim curl \
    && rm -rf /var/lib/apt/lists/*

ENV APACHE_RUN_USER www-data
ENV APACHE_RUN_GROUP www-data
ENV APACHE_LOG_DIR /var/log/apache2
ENV APACHE_PID_FILE /var/run/apache2.pid
ENV APACHE_RUN_DIR /var/run/apache2
ENV APACHE_LOCK_DIR /var/lock/apache2

RUN mkdir -p $APACHE_RUN_DIR $APACHE_LOCK_DIR $APACHE_LOG_DIR

ADD requirements.txt /requirements.txt
ADD wheelhouse /data/wheelhouse

RUN . /data/venv/bin/activate \
    && pip install --no-index --find-links=/data/wheelhouse -r requirements.txt \
    && rm /requirements.txt

VOLUME /data/git/blender-cloud
VOLUME /data/git/pillar
VOLUME /data/git/pillar-python-sdk
VOLUME /data/config
VOLUME /data/storage

ENV USE_X_SENDFILE True

EXPOSE 80
EXPOSE 5000

ADD apache2.conf /etc/apache2/apache2.conf
ADD 000-default.conf /etc/apache2/sites-available/000-default.conf
ADD docker-entrypoint.sh /docker-entrypoint.sh
ADD manage.sh /manage.sh

ENTRYPOINT ["bash", "/docker-entrypoint.sh"]
docker/4_run/Dockerfile
Executable file
@@ -0,0 +1,62 @@
FROM armadillica/pillar_py:3.6
LABEL maintainer Sybren A. Stüvel <sybren@blender.studio>

RUN apt-get update && apt-get install -qyy \
    -o APT::Install-Recommends=false -o APT::Install-Suggests=false \
    git \
    apache2 \
    libapache2-mod-xsendfile \
    libjpeg8 \
    libtiff5 \
    ffmpeg \
    rsyslog logrotate \
    nano vim-tiny curl \
    && rm -rf /var/lib/apt/lists/*

RUN ln -s /usr/bin/vim.tiny /usr/bin/vim

ENV APACHE_RUN_USER www-data
ENV APACHE_RUN_GROUP www-data
ENV APACHE_LOG_DIR /var/log/apache2
ENV APACHE_PID_FILE /var/run/apache2.pid
ENV APACHE_RUN_DIR /var/run/apache2
ENV APACHE_LOCK_DIR /var/lock/apache2

RUN mkdir -p $APACHE_RUN_DIR $APACHE_LOCK_DIR $APACHE_LOG_DIR

ADD wheelhouse /data/wheelhouse
RUN pip3 install --no-index --find-links=/data/wheelhouse -r /data/wheelhouse/requirements.txt

VOLUME /data/config
VOLUME /data/storage
VOLUME /var/log

ENV USE_X_SENDFILE True

EXPOSE 80
EXPOSE 5000

ADD apache/wsgi-py36.* /etc/apache2/mods-available/
RUN a2enmod rewrite && a2enmod wsgi-py36

ADD apache/apache2.conf /etc/apache2/apache2.conf
ADD apache/000-default.conf /etc/apache2/sites-available/000-default.conf
ADD apache/logrotate.conf /etc/logrotate.d/apache2
ADD *.sh /

# Remove some empty top-level directories we won't use anyway.
RUN rmdir /media /home 2>/dev/null || true

# This file includes some useful commands to have in the shell history
# for easy access.
ADD bash_history /root/.bash_history

ENTRYPOINT /docker-entrypoint.sh

# Add the most-changing files as last step for faster rebuilds.
ADD config_local.py /data/git/blender-cloud/
ADD deploy /data/git
RUN python3 -c "import re, secrets; \
    f = open('/data/git/blender-cloud/config_local.py', 'a'); \
    h = re.sub(r'[_.~-]', '', secrets.token_urlsafe())[:8]; \
    print(f'STATIC_FILE_HASH = {h!r}', file=f)"
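The final `RUN` step appends a random 8-character `STATIC_FILE_HASH` to `config_local.py` at build time, so every image gets a fresh cache-busting token for static files. The same token generation, runnable on its own (the function wrapper is added for illustration):

```python
import re
import secrets


def static_file_hash():
    # token_urlsafe() emits ~43 chars of [A-Za-z0-9_-]; drop the URL
    # punctuation and keep the first 8, as the RUN step above does.
    return re.sub(r'[_.~-]', '', secrets.token_urlsafe())[:8]


print(static_file_hash())
```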
docker/4_run/apache/000-default.conf
Normal file
@@ -0,0 +1,54 @@
<VirtualHost *:80>
    XSendFile on
    XSendFilePath /data/storage/pillar
    XSendFilePath /data/git/pillar/pillar/web/static/
    XSendFilePath /data/git/attract/attract/static/
    XSendFilePath /data/git/flamenco/flamenco/static/
    XSendFilePath /data/git/pillar-svnman/svnman/static/
    XSendFilePath /data/git/blender-cloud/static/
    XSendFilePath /data/git/blender-cloud/cloud/static/

    ServerAdmin webmaster@localhost
    DocumentRoot /var/www/html

    # Available loglevels: trace8, ..., trace1, debug, info, notice, warn,
    # error, crit, alert, emerg.
    # It is also possible to configure the loglevel for particular
    # modules, e.g.
    # LogLevel info ssl:warn

    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined

    WSGIDaemonProcess cloud processes=2 threads=32 maximum-requests=10000
    WSGIPassAuthorization On

    WSGIScriptAlias / /data/git/blender-cloud/runserver.wsgi \
        process-group=cloud application-group=%{GLOBAL}

    <Directory /data/git/blender-cloud>
        <Files runserver.wsgi>
            Require all granted
        </Files>
    </Directory>

    # Temporary edit to remap the old cloudapi.blender.org to cloud.blender.org/api
    RewriteEngine On
    RewriteCond "%{HTTP_HOST}" "^cloudapi\.blender\.org" [NC]
    RewriteRule (.*) /api$1 [PT]

    # Redirects for blender-cloud projects
    RewriteRule "^/p/blender-cloud/?$" "/blog" [R=301,L]
    RewriteRule "^/agent327/?$" "/p/agent-327" [R=301,L]
    RewriteRule "^/caminandes/?$" "/p/caminandes" [R=301,L]
    RewriteRule "^/cf2/?$" "/p/creature-factory-2" [R=301,L]
    RewriteRule "^/characters/?$" "/p/characters" [R=301,L]
    RewriteRule "^/gallery/?$" "/p/gallery" [R=301,L]
    RewriteRule "^/hdri/?$" "/p/hdri" [R=301,L]
    RewriteRule "^/textures/?$" "/p/textures" [R=301,L]
    RewriteRule "^/training/?$" "/courses" [R=301,L]
    RewriteRule "^/spring/?$" "/p/spring" [R=301,L]
    RewriteRule "^/hero/?$" "/p/hero" [R=301,L]
    # Waking the Forest was moved from the art gallery to its own workshop
    RewriteRule "^/p/gallery/58cfec4f88ac8f1440aeb309/?$" "/p/waking-the-forest" [R=301,L]
</VirtualHost>
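The project-redirect rules above are plain regex matches against the request path. As a sanity check outside Apache, a few of them can be mirrored in Python; the rule list and `apply_redirects` helper below are an illustration, not how mod_rewrite actually evaluates its rule chain.

```python
import re

# A hypothetical mirror of a few RewriteRule patterns from the vhost config,
# for checking the regexes outside Apache.
REDIRECT_RULES = [
    (r"^/p/blender-cloud/?$", "/blog"),
    (r"^/agent327/?$", "/p/agent-327"),
    (r"^/training/?$", "/courses"),
]


def apply_redirects(path: str) -> str:
    """Return the redirect target for the first matching rule, else the path."""
    for pattern, target in REDIRECT_RULES:
        if re.match(pattern, path):
            return target
    return path


print(apply_redirects("/agent327/"))   # -> /p/agent-327
print(apply_redirects("/unrelated"))   # -> /unrelated
```

The trailing `/?` in each pattern is what makes both `/agent327` and `/agent327/` redirect to the same target.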
docker/4_run/apache/logrotate.conf (new file, 21 lines)
@@ -0,0 +1,21 @@
/var/log/apache2/*.log {
    daily
    missingok
    rotate 14
    size 100M
    compress
    delaycompress
    notifempty
    create 640 root adm
    sharedscripts
    postrotate
        if /etc/init.d/apache2 status > /dev/null ; then \
            /etc/init.d/apache2 reload > /dev/null; \
        fi;
    endscript
    prerotate
        if [ -d /etc/logrotate.d/httpd-prerotate ]; then \
            run-parts /etc/logrotate.d/httpd-prerotate; \
        fi; \
    endscript
}
docker/4_run/apache/wsgi-py36.conf (new file, 122 lines)
@@ -0,0 +1,122 @@
<IfModule mod_wsgi.c>

    # This config file is provided to give an overview of the directives,
    # which are only allowed in the 'server config' context.
    # For a detailed description of all available directives please read
    # http://code.google.com/p/modwsgi/wiki/ConfigurationDirectives

    # WSGISocketPrefix: Configure directory to use for daemon sockets.
    #
    # Apache's DEFAULT_REL_RUNTIMEDIR should be the proper place for WSGI's
    # socket. In case you want to mess with the permissions of the directory,
    # you need to define WSGISocketPrefix to an alternative directory.
    # See http://code.google.com/p/modwsgi/wiki/ConfigurationIssues for more
    # information.

    #WSGISocketPrefix /var/run/apache2/wsgi

    # WSGIPythonOptimize: Enables basic Python optimisation features.
    #
    # Sets the level of Python compiler optimisations. The default is '0',
    # which means no optimisations are applied.
    # Setting the optimisation level to '1' or above will have the effect
    # of enabling basic Python optimisations and changes the filename
    # extension for compiled (bytecode) files from .pyc to .pyo.
    # When the optimisation level is set to '2', doc strings will not be
    # generated and retained. This will result in a smaller memory footprint,
    # but may cause some Python packages which interrogate doc strings in some
    # way to fail.

    #WSGIPythonOptimize 0

    # WSGIPythonPath: Additional directories to search for Python modules,
    # overriding the PYTHONPATH environment variable.
    #
    # Used to specify additional directories to search for Python modules.
    # If multiple directories are specified they should be separated by a ':'.

    WSGIPythonPath /opt/python/lib/python3.6/site-packages

    # WSGIPythonEggs: Directory to use for Python eggs cache.
    #
    # Used to specify the directory to be used as the Python eggs cache
    # directory for all sub interpreters created within embedded mode.
    # This directive achieves the same effect as having set the
    # PYTHON_EGG_CACHE environment variable.
    # Note that the directory specified must exist and be writable by the user
    # that the Apache child processes run as. The directive only applies to
    # mod_wsgi embedded mode. To set the Python eggs cache directory for
    # mod_wsgi daemon processes, use the 'python-eggs' option to the
    # WSGIDaemonProcess directive instead.

    #WSGIPythonEggs directory

    # WSGIRestrictEmbedded: Enable restrictions on use of embedded mode.
    #
    # The WSGIRestrictEmbedded directive determines whether mod_wsgi embedded
    # mode is enabled or not. If set to 'On' and the restriction on embedded
    # mode is therefore enabled, any attempt to make a request against a
    # WSGI application which hasn't been properly configured so as to be
    # delegated to a daemon mode process will fail with an HTTP internal
    # server error response.

    #WSGIRestrictEmbedded On|Off

    # WSGIRestrictStdin: Enable restrictions on use of STDIN.
    # WSGIRestrictStdout: Enable restrictions on use of STDOUT.
    # WSGIRestrictSignal: Enable restrictions on use of signal().
    #
    # Well-behaved WSGI applications should neither try to read/write from/to
    # STDIN/STDOUT, nor try to register signal handlers. If your
    # application needs an exception from this rule, you can disable the
    # restrictions here.

    #WSGIRestrictStdin On
    #WSGIRestrictStdout On
    #WSGIRestrictSignal On

    # WSGIAcceptMutex: Specify type of accept mutex used by daemon processes.
    #
    # The WSGIAcceptMutex directive sets the method that mod_wsgi will use to
    # serialize multiple daemon processes in a process group accepting requests
    # on a socket connection from the Apache child processes. If this directive
    # is not defined then the same type of mutex mechanism as used by Apache for
    # the main Apache child processes when accepting connections from a client
    # will be used. If set, the method types are the same as for the Apache
    # AcceptMutex directive.

    #WSGIAcceptMutex default

    # WSGIImportScript: Specify a script file to be loaded on process start.
    #
    # The WSGIImportScript directive can be used to specify a script file to be
    # loaded when a process starts. Options must be provided to indicate the
    # name of the process group and the application group into which the script
    # will be loaded.

    #WSGIImportScript process-group=name application-group=name

    # WSGILazyInitialization: Enable/disable lazy initialisation of Python.
    #
    # The WSGILazyInitialization directive sets whether the Python interpreter
    # is preinitialised within the Apache parent process, or whether lazy
    # initialisation is performed and the Python interpreter is only
    # initialised in the Apache server processes or mod_wsgi daemon processes
    # after they have forked from the Apache parent process.

    #WSGILazyInitialization On|Off

</IfModule>
docker/4_run/apache/wsgi-py36.load (new file, 1 line)
@@ -0,0 +1 @@
LoadModule wsgi_module /opt/python/mod-wsgi/mod_wsgi.so
docker/4_run/bash_history (new file, 9 lines)
@@ -0,0 +1,9 @@
bash docker-entrypoint.sh
env | sort
apache2ctl start
apache2ctl graceful
/manage.sh operations worker -- -C
celery status --broker amqp://guest:guest@rabbit:5672//
celery events --broker amqp://guest:guest@rabbit:5672//
tail -n 40 -f /var/log/apache2/access.log
tail -n 40 -f /var/log/apache2/error.log
docker/4_run/build.sh (new executable file, 5 lines)
@@ -0,0 +1,5 @@
#!/bin/bash -e

docker build -t armadillica/blender_cloud:latest .

echo "Done, built armadillica/blender_cloud:latest"
docker/4_run/celery-beat.sh (new executable file, 6 lines)
@@ -0,0 +1,6 @@
#!/usr/bin/env bash

source /install_scripts.sh
source /manage.sh celery beat -- \
    --schedule /data/storage/pillar/celerybeat-schedule.db \
    --pid /data/storage/pillar/celerybeat.pid
docker/4_run/celery-worker.sh (new executable file, 4 lines)
@@ -0,0 +1,4 @@
#!/usr/bin/env bash

source /install_scripts.sh
source /manage.sh celery worker -- -C
docker/4_run/config_local.py (new file, 131 lines)
@@ -0,0 +1,131 @@
import os
from collections import defaultdict

DEBUG = False

SCHEME = 'https'
PREFERRED_URL_SCHEME = 'https'
SERVER_NAME = 'cloud.blender.org'

# os.environ['OAUTHLIB_INSECURE_TRANSPORT'] = 'true'
os.environ['PILLAR_MONGO_DBNAME'] = 'cloud'
os.environ['PILLAR_MONGO_PORT'] = '27017'
os.environ['PILLAR_MONGO_HOST'] = 'mongo'

USE_X_SENDFILE = True

STORAGE_BACKEND = 'gcs'

CDN_SERVICE_DOMAIN = 'blendercloud-pro.r.worldssl.net'
CDN_CONTENT_SUBFOLDER = ''
CDN_STORAGE_ADDRESS = 'push-11.cdnsun.com'

CACHE_TYPE = 'redis'  # null
CACHE_KEY_PREFIX = 'pw_'
CACHE_REDIS_HOST = 'redis'
CACHE_REDIS_PORT = '6379'
CACHE_REDIS_URL = 'redis://redis:6379'

PILLAR_SERVER_ENDPOINT = 'https://cloud.blender.org/api/'

BLENDER_ID_ENDPOINT = 'https://www.blender.org/id/'

GCLOUD_APP_CREDENTIALS = '/data/config/google_app.json'
GCLOUD_PROJECT = 'blender-cloud'

MAIN_PROJECT_ID = '563a9c8cf0e722006ce97b03'
# MAIN_PROJECT_ID = '57aa07c088bef606e89078bd'

ALGOLIA_INDEX_USERS = 'pro_Users'
ALGOLIA_INDEX_NODES = 'pro_Nodes'

ZENCODER_NOTIFICATIONS_URL = 'https://cloud.blender.org/api/encoding/zencoder/notifications'

FILE_LINK_VALIDITY = defaultdict(
    lambda: 3600 * 24 * 30,  # default of 1 month.
    gcs=3600 * 23,  # 23 hours for Google Cloud Storage.
    cdnsun=3600 * 23
)

LOGGING = {
    'version': 1,
    'formatters': {
        'default': {'format': '%(levelname)8s %(name)s %(message)s'}
    },
    'handlers': {
        'console': {
            'class': 'logging.StreamHandler',
            'formatter': 'default',
            'stream': 'ext://sys.stderr',
        }
    },
    'loggers': {
        'pillar': {'level': 'INFO'},
        # 'pillar.auth': {'level': 'DEBUG'},
        # 'pillar.api.blender_id': {'level': 'DEBUG'},
        # 'pillar.api.blender_cloud.subscription': {'level': 'DEBUG'},
        'bcloud': {'level': 'INFO'},
        'cloud': {'level': 'INFO'},
        'attract': {'level': 'INFO'},
        'flamenco': {'level': 'INFO'},
        # 'pillar.api.file_storage': {'level': 'DEBUG'},
        # 'pillar.api.file_storage.ensure_valid_link': {'level': 'INFO'},
        'pillar.api.file_storage.refresh_links_for_backend': {'level': 'DEBUG'},
        'werkzeug': {'level': 'DEBUG'},
        'eve': {'level': 'WARNING'},
        # 'elasticsearch': {'level': 'DEBUG'},
    },
    'root': {
        'level': 'WARNING',
        'handlers': [
            'console',
        ],
    }
}

REDIRECTS = {
    # For old links, refer to the services page (hopefully it refreshes then)
    'downloads/blender_cloud-latest-bundle.zip': 'https://cloud.blender.org/services#blender-addon',

    # Latest Blender Cloud add-on; remember to update BLENDER_CLOUD_ADDON_VERSION.
    'downloads/blender_cloud-latest-addon.zip':
        'https://storage.googleapis.com/institute-storage/addons/blender_cloud-1.8.0.addon.zip',

    # Redirect old Grafista endpoint to /stats
    '/stats/': '/stats',
}

# Latest version of the add-on; remember to update REDIRECTS.
BLENDER_CLOUD_ADDON_VERSION = '1.8.0'

UTM_LINKS = {
    'cartoon_brew': {
        'image': 'https://imgur.com/13nQTi3.png',
        'link': 'https://store.blender.org/product/membership/'
    }
}

# Disabled until we have regenerated the majority of the links.
CELERY_BEAT_SCHEDULE = {
    'regenerate-expired-links': {
        'task': 'pillar.celery.file_link_tasks.regenerate_all_expired_links',
        'schedule': 600,  # every N seconds
        'args': ('gcs', 500)
    },
}

SVNMAN_REPO_URL = 'https://svn.blender.cloud/repo/'
SVNMAN_API_URL = 'https://svn.blender.cloud/api/'

# Mail options, see pillar.celery.email_tasks.
SMTP_HOST = 'proog.blender.org'
SMTP_PORT = 25
SMTP_USERNAME = 'server@blender.cloud'
SMTP_TIMEOUT = 30  # timeout in seconds, https://docs.python.org/3/library/smtplib.html#smtplib.SMTP
MAIL_RETRY = 180  # in seconds, delay until trying to send an email again.
MAIL_DEFAULT_FROM_NAME = 'Blender Cloud'
MAIL_DEFAULT_FROM_ADDR = 'cloudsupport@blender.org'

# MUST be 8 characters long, see pillar.flask_extra.HashedPathConverter
# STATIC_FILE_HASH = '12345678'
# The value used in production is appended from Dockerfile.
docker/4_run/docker-entrypoint.sh (new executable file, 15 lines)
@@ -0,0 +1,15 @@
#!/usr/bin/env bash

source /install_scripts.sh

# Make sure that log rotation works.
mkdir -p ${APACHE_LOG_DIR}
service cron start

if [ "$DEV" = "true" ]; then
    echo "Running in development mode"
    cd /data/git/blender-cloud
    exec bash /manage.sh runserver --host='0.0.0.0'
else
    exec /usr/sbin/apache2ctl -D FOREGROUND
fi
docker/4_run/install_scripts.sh (new file, 20 lines)
@@ -0,0 +1,20 @@
if [ -f /installed ]; then
    return
fi

SITEPKG=$(echo /opt/python/lib/python3.*/site-packages)
echo "Installing Blender Cloud packages into $SITEPKG"

# TODO: 'pip3 install -e' runs 'setup.py develop', which runs 'setup.py egg_info',
# which can't write the egg info to the read-only /data/git volume. This is why
# we manually install the links.
for SUBPROJ in /data/git/{pillar,pillar-python-sdk,attract,flamenco,pillar-svnman}; do
    NAME=$(python3 $SUBPROJ/setup.py --name)
    echo "... $NAME"
    echo $SUBPROJ >> $SITEPKG/easy-install.pth
    echo $SUBPROJ > $SITEPKG/$NAME.egg-link
done
echo "All packages installed."

touch /installed
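The script above emulates `pip install -e` by appending each project directory to `easy-install.pth`. The mechanism it relies on is Python's site machinery: every path listed in a `*.pth` file inside a site directory is added to `sys.path`. A small sketch, using temporary directories in place of the real `/opt/python/...` and `/data/git/...` paths:

```python
import site
import sys
import tempfile
from pathlib import Path

# Stand-ins for $SITEPKG and a /data/git/<project> checkout.
site_dir = Path(tempfile.mkdtemp())
fake_project = Path(tempfile.mkdtemp())

# This is what 'echo $SUBPROJ >> $SITEPKG/easy-install.pth' does.
(site_dir / 'easy-install.pth').write_text(str(fake_project) + '\n')

# site.addsitedir() processes the .pth files in a site directory; the
# interpreter does the same for its real site-packages at startup.
site.addsitedir(str(site_dir))

print(str(fake_project) in sys.path)  # True
```

Paths listed in a `.pth` file are only added when they exist on disk, which is why the real script can safely list project directories that are bind-mounted into the container.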
docker/4_run/manage.sh (new executable file, 5 lines)
@@ -0,0 +1,5 @@
#!/usr/bin/env bash

set -e
cd /data/git/blender-cloud
exec python manage.py "$@"
docker/README.md (new file, 96 lines)
@@ -0,0 +1,96 @@
# Setting up a production machine

To get the Docker stack up and running, we use the following, on an Ubuntu 16.10 machine.

## 0. Basic stuff

Install the machine, use `locale-gen nl_NL.UTF-8` or similar commands to generate locale
definitions. Set up automatic security updates and backups, the usual.

## 1. Install Docker

Install Docker itself, as described in the
[Docker CE for Ubuntu manual](https://store.docker.com/editions/community/docker-ce-server-ubuntu?tab=description):

    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
    add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
        $(lsb_release -cs) stable"
    apt-get update
    apt-get install docker-ce

## 2. Configure Docker to use "overlay"

Configure Docker to use "overlay" instead of "aufs" for the images. This prevents
[segfaults in auplink](https://bugs.launchpad.net/ubuntu/+source/aufs-tools/+bug/1442568).

1. Set `DOCKER_OPTS="-s overlay"` in `/etc/default/docker`.
2. Copy `/lib/systemd/system/docker.service` to `/etc/systemd/system/docker.service`.
   This allows later upgrading of Docker without overwriting the changes we're about to make.
3. Edit the `[Service]` section of `/etc/systemd/system/docker.service`:
   1. Add `EnvironmentFile=/etc/default/docker`
   2. Append ` $DOCKER_OPTS` to the `ExecStart` line
4. Run `systemctl daemon-reload`.
5. Remove all your containers and images.
6. Restart Docker: `systemctl restart docker`.

## 3. Pull the Blender Cloud docker image

`docker pull armadillica/blender_cloud:latest`

## 4. Get docker-compose + our repositories

See the [Quick setup](../README.md) on how to get those. Then run:

    cd /data/git/blender-cloud/docker
    docker-compose up -d

Set up permissions for the Docker volumes; the following paths should be writable as
indicated:

- `/data/storage/pillar`: writable by `www-data` and `root` (do a `chown root:www-data`
  and `chmod 2770`).
- `/data/storage/db`: writable by uid 999.

## 5. Set up TLS

Place TLS certificates in `/data/certs/{cloud,cloudapi}.blender.org.pem`.
They should contain (in order) the private key, the host certificate, and the
CA certificate.

## 6. Create a local config

Blender Cloud expects the following files to exist:

- `/data/git/blender_cloud/config_local.py` with machine-local configuration overrides
- `/data/config/google_app.json` with Google Cloud Storage credentials.

When run from Docker, the `docker/4_run/config_local.py` file will be used. Overrides for that file
can be placed in `/data/config/config_secrets.py`.

## 7. ElasticSearch & Kibana

ElasticSearch and Kibana run in our self-rolled images. This is needed because by default

- ElasticSearch uses up to 2 GB of RAM, which is too much for our droplet, and
- the Docker images contain the proprietary X-Pack plugin, which we don't want.

This also gives us the opportunity to let Kibana do its optimization when we build the image, rather
than every time the container is recreated.

`/data/storage/elasticsearch` needs to be writable by UID 1000, GID 1000.

Kibana connects to [ElasticProxy](https://github.com/armadillica/elasticproxy), which only allows
GET, HEAD, and some specific POST requests. This ensures that the public-facing Kibana cannot be
used to change the ElasticSearch database.

Production Kibana can be placed in read-only mode, but this is not necessary now that we use
ElasticProxy. However, I've left this in here as reference.

`curl -XPUT 'localhost:9200/.kibana/_settings' -d '{ "index.blocks.read_only" : true }'`

If editing is desired, temporarily turn off read-only mode:

`curl -XPUT 'localhost:9200/.kibana/_settings' -d '{ "index.blocks.read_only" : false }'`
@@ -1,16 +0,0 @@
#!/usr/bin/env bash

set -x;
set -e;

DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
cd $DIR;

cd 1_base/;
bash build.sh;

cd ../2_build/;
bash build.sh;

cd ../3_run/;
bash build.sh;
@@ -1,68 +1,191 @@
mongo:
  image: mongo
  container_name: mongo
  restart: always
  volumes:
    - /data/storage/db:/data/db
  ports:
    - "127.0.0.1:27017:27017"
redis:
  image: redis
  container_name: redis
  restart: always
blender_cloud:
  image: armadillica/blender_cloud
  container_name: blender_cloud
  restart: always
  environment:
    VIRTUAL_HOST: http://cloudapi.blender.org/*,https://cloudapi.blender.org/*,http://cloud.blender.org/*,https://cloud.blender.org/*,http://pillar-web/*
    VIRTUAL_HOST_WEIGHT: 10
    FORCE_SSL: "true"
  volumes:
    - /data/git/blender-cloud:/data/git/blender-cloud:ro
    - /data/git/attract:/data/git/attract:ro
    - /data/git/flamenco:/data/git/flamenco:ro
    - /data/git/pillar:/data/git/pillar:ro
    - /data/git/pillar-python-sdk:/data/git/pillar-python-sdk:ro
    - /data/config:/data/config:ro
    - /data/storage/pillar:/data/storage/pillar
  links:
    - mongo
    - redis
# notifserv:
#   container_name: notifserv
#   image: armadillica/pillar-notifserv:cd8fa678436563ac3b800b2721e36830c32e4656
#   restart: always
#   links:
#     - mongo
#   environment:
#     VIRTUAL_HOST: https://cloud.blender.org/notifications*,http://pillar-web/notifications*
#     VIRTUAL_HOST_WEIGHT: 20
#     FORCE_SSL: true
grafista:
  image: armadillica/grafista
  container_name: grafista
  restart: always
  environment:
    VIRTUAL_HOST: http://cloud.blender.org/stats/*,https://cloud.blender.org/stats/*,http://blender-cloud/stats/*
    VIRTUAL_HOST_WEIGHT: 20
    FORCE_SSL: "true"
  volumes:
    - /data/git/grafista:/data/git/grafista:ro
    - /data/storage/grafista:/data/storage
haproxy:
  image: dockercloud/haproxy
  container_name: haproxy
  restart: always
  ports:
    - "443:443"
    - "80:80"
  environment:
    - CERT_FOLDER=/certs/
    - TIMEOUT=connect 5s, client 5m, server 10m
  links:
    - blender_cloud
    - grafista
    # - notifserv
  volumes:
    - '/data/certs:/certs'
version: '3.4'
services:
  mongo:
    image: mongo:3.4.2
    container_name: mongo
    restart: always
    volumes:
      - /data/storage/db:/data/db
      - /data/storage/db-bak:/data/db-bak  # for backing up stuff etc.
    ports:
      - "127.0.0.1:27017:27017"
    logging:
      driver: "json-file"
      options:
        max-size: "200k"
        max-file: "20"

  redis:
    image: redis:3.2.8
    container_name: redis
    restart: always
    ports:
      - "127.0.0.1:6379:6379"
    logging:
      driver: "json-file"
      options:
        max-size: "200k"
        max-file: "20"

  rabbit:
    image: rabbitmq:3.6.10
    container_name: rabbit
    restart: always
    ports:
      - "127.0.0.1:5672:5672"
    logging:
      driver: "json-file"
      options:
        max-size: "200k"
        max-file: "20"

  elastic:
    image: armadillica/elasticsearch:6.1.1
    container_name: elastic
    restart: always
    volumes:
      # NOTE: this path must be writable by UID=1000 GID=1000.
      - /data/storage/elastic:/usr/share/elasticsearch/data
    ports:
      - "127.0.0.1:9200:9200"
    environment:
      ES_JAVA_OPTS: "-Xms256m -Xmx256m"
    logging:
      driver: "json-file"
      options:
        max-size: "200k"
        max-file: "20"

  elasticproxy:
    image: armadillica/elasticproxy:1.2
    container_name: elasticproxy
    restart: always
    command: /elasticproxy -elastic http://elastic:9200/
    depends_on:
      - elastic
    logging:
      driver: "json-file"
      options:
        max-size: "200k"
        max-file: "20"

  kibana:
    image: armadillica/kibana:6.1.1
    container_name: kibana
    restart: always
    environment:
      SERVER_NAME: "stats.cloud.blender.org"
      ELASTICSEARCH_URL: http://elasticproxy:9200
      CONSOLE_ENABLED: 'false'
      VIRTUAL_HOST: http://stats.cloud.blender.org/*,https://stats.cloud.blender.org/*,http://stats.cloud.local/*,https://stats.cloud.local/*
      VIRTUAL_HOST_WEIGHT: 20
      FORCE_SSL: "true"

      # See https://github.com/elastic/kibana/issues/5170#issuecomment-163042525
      NODE_OPTIONS: "--max-old-space-size=200"
    depends_on:
      - elasticproxy
    logging:
      driver: "json-file"
      options:
        max-size: "200k"
        max-file: "20"

  blender_cloud:
    image: armadillica/blender_cloud:latest
    container_name: blender_cloud
    restart: always
    environment:
      VIRTUAL_HOST: http://cloud.blender.org/*,https://cloud.blender.org/*,http://cloud.local/*,https://cloud.local/*
      VIRTUAL_HOST_WEIGHT: 10
      FORCE_SSL: "true"
      GZIP_COMPRESSION_TYPE: "text/html text/plain text/css application/javascript"
      PILLAR_CONFIG: /data/config/config_secrets.py
    volumes:
      # format: HOST:CONTAINER
      - /data/config:/data/config:ro
      - /data/storage/pillar:/data/storage/pillar
      - /data/log:/var/log
    depends_on:
      - mongo
      - redis
      - rabbit

  celery_worker:
    image: armadillica/blender_cloud:latest
    entrypoint: /celery-worker.sh
    container_name: celery_worker
    restart: always
    environment:
      PILLAR_CONFIG: /data/config/config_secrets.py
    volumes:
      # format: HOST:CONTAINER
      - /data/config:/data/config:ro
      - /data/storage/pillar:/data/storage/pillar
      - /data/log:/var/log
    depends_on:
      - mongo
      - redis
      - rabbit
    logging:
      driver: "json-file"
      options:
        max-size: "200k"
        max-file: "20"

  celery_beat:
    image: armadillica/blender_cloud:latest
    entrypoint: /celery-beat.sh
    container_name: celery_beat
    restart: always
    environment:
      PILLAR_CONFIG: /data/config/config_secrets.py
    volumes:
      # format: HOST:CONTAINER
      - /data/config:/data/config:ro
      - /data/storage/pillar:/data/storage/pillar
      - /data/log:/var/log
    depends_on:
      - mongo
      - redis
      - rabbit
      - celery_worker
    logging:
      driver: "json-file"
      options:
        max-size: "200k"
        max-file: "20"

  letsencrypt:
    image: armadillica/picohttp:1.0
    container_name: letsencrypt
    restart: always
    environment:
      WEBROOT: /data/letsencrypt
      LISTEN: '[::]:80'
      VIRTUAL_HOST: http://cloud.blender.org/.well-known/*, http://stats.cloud.blender.org/.well-known/*
      VIRTUAL_HOST_WEIGHT: 30
    volumes:
      - /data/letsencrypt:/data/letsencrypt

  haproxy:
    image: dockercloud/haproxy:1.5.3
    container_name: haproxy
    restart: always
    ports:
      - "443:443"
      - "80:80"
    environment:
      - ADDITIONAL_SERVICES=docker:blender_cloud,docker:letsencrypt,docker:kibana
      - CERT_FOLDER=/certs/
      - TIMEOUT=connect 5s, client 5m, server 10m
      - SSL_BIND_CIPHERS=ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!MD5:!DSS
      - SSL_BIND_OPTIONS=no-sslv3
      - EXTRA_GLOBAL_SETTINGS=tune.ssl.default-dh-param 2048
    depends_on:
      - blender_cloud
      - letsencrypt
      - kibana
    volumes:
      - '/data/certs:/certs'
      - /var/run/docker.sock:/var/run/docker.sock
docker/elastic/Dockerfile-elastic (new file, 10 lines)
@@ -0,0 +1,10 @@
FROM docker.elastic.co/elasticsearch/elasticsearch:6.1.1
LABEL maintainer Sybren A. Stüvel <sybren@blender.studio>

RUN elasticsearch-plugin remove --purge x-pack

ADD elasticsearch.yml jvm.options /usr/share/elasticsearch/config/

USER root
RUN chown -R elasticsearch:elasticsearch /usr/share/elasticsearch/config/
USER elasticsearch
docker/elastic/Dockerfile-kibana (new file, 6 lines)
@@ -0,0 +1,6 @@
FROM docker.elastic.co/kibana/kibana:6.1.1
LABEL maintainer Sybren A. Stüvel <sybren@blender.studio>

RUN bin/kibana-plugin remove x-pack
ADD kibana.yml /usr/share/kibana/config/kibana.yml
RUN kibana 2>&1 | grep -m 1 "Optimization of .* complete"
docker/elastic/build.sh (new executable file, 15 lines)
@@ -0,0 +1,15 @@
#!/bin/bash -e

# When updating this, also update the versions in Dockerfile-*, and make sure that
# it matches the versions of the elasticsearch and elasticsearch_dsl packages
# used in Pillar. Those don't have to match exactly, but the major version should.
VERSION=6.1.1

docker build -t armadillica/elasticsearch:${VERSION} -f Dockerfile-elastic .
docker build -t armadillica/kibana:${VERSION} -f Dockerfile-kibana .

docker tag armadillica/elasticsearch:${VERSION} armadillica/elasticsearch:latest
docker tag armadillica/kibana:${VERSION} armadillica/kibana:latest

echo "Done, built armadillica/elasticsearch:${VERSION} and armadillica/kibana:${VERSION}"
echo "Also tagged as armadillica/elasticsearch:latest and armadillica/kibana:latest"
docker/elastic/elasticsearch.yml (new file, 7 lines)
@@ -0,0 +1,7 @@
cluster.name: "blender-cloud"
network.host: 0.0.0.0

# minimum_master_nodes needs to be explicitly set when bound on a public IP;
# set to 1 to allow single-node clusters.
# Details: https://github.com/elastic/elasticsearch/pull/17288
discovery.zen.minimum_master_nodes: 1
docker/elastic/jvm.options (new file, 112 lines)
@@ -0,0 +1,112 @@
## JVM configuration

################################################################
## IMPORTANT: JVM heap size
################################################################
##
## You should always set the min and max JVM heap
## size to the same value. For example, to set
## the heap to 4 GB, set:
##
## -Xms4g
## -Xmx4g
##
## See https://www.elastic.co/guide/en/elasticsearch/reference/current/heap-size.html
## for more information
##
################################################################

# Xms represents the initial size of total heap space
# Xmx represents the maximum size of total heap space

# Sybren: commented out so that we can set those options using the ES_JAVA_OPTS environment variable.
#-Xms512m
#-Xmx512m

################################################################
## Expert settings
################################################################
##
## All settings below this section are considered
## expert settings. Don't tamper with them unless
## you understand what you are doing
##
################################################################

## GC configuration
-XX:+UseConcMarkSweepGC
-XX:CMSInitiatingOccupancyFraction=75
-XX:+UseCMSInitiatingOccupancyOnly

## optimizations

# pre-touch memory pages used by the JVM during initialization
-XX:+AlwaysPreTouch

## basic

# force the server VM (remove on 32-bit client JVMs)
-server

# explicitly set the stack size (reduce to 320k on 32-bit client JVMs)
-Xss1m

# set to headless, just in case
-Djava.awt.headless=true

# ensure UTF-8 encoding by default (e.g. filenames)
-Dfile.encoding=UTF-8

# use our provided JNA always versus the system one
-Djna.nosys=true

# use old-style file permissions on JDK9
-Djdk.io.permissionsUseCanonicalPath=true

# flags to configure Netty
-Dio.netty.noUnsafe=true
-Dio.netty.noKeySetOptimization=true
-Dio.netty.recycler.maxCapacityPerThread=0

# log4j 2
-Dlog4j.shutdownHookEnabled=false
-Dlog4j2.disable.jmx=true
-Dlog4j.skipJansi=true

## heap dumps

# generate a heap dump when an allocation from the Java heap fails
# heap dumps are created in the working directory of the JVM
-XX:+HeapDumpOnOutOfMemoryError

# specify an alternative path for heap dumps
# ensure the directory exists and has sufficient space
#-XX:HeapDumpPath=${heap.dump.path}

## GC logging

#-XX:+PrintGCDetails
#-XX:+PrintGCTimeStamps
#-XX:+PrintGCDateStamps
#-XX:+PrintClassHistogram
#-XX:+PrintTenuringDistribution
#-XX:+PrintGCApplicationStoppedTime

# log GC status to a file with time stamps
# ensure the directory exists
#-Xloggc:${loggc}

# By default, the GC log file will not rotate.
# By uncommenting the lines below, the GC log file
# will be rotated every 128MB at most 32 times.
#-XX:+UseGCLogFileRotation
#-XX:NumberOfGCLogFiles=32
#-XX:GCLogFileSize=128M

# Elasticsearch 5.0.0 will throw an exception on unquoted field names in JSON.
# If documents were already indexed with unquoted fields in a previous version
# of Elasticsearch, some operations may throw errors.
#
# WARNING: This option will be removed in Elasticsearch 6.0.0 and is provided
# only for migration purposes.
#-Delasticsearch.json.allow_unquoted_field_names=true
docker/elastic/kibana.yml (new file, 8 lines)
@@ -0,0 +1,8 @@
---

server.name: kibana
server.host: "0"
elasticsearch.url: http://elasticsearch:9200

# Hide dev tools
console.enabled: false
docker/full_rebuild.sh (new executable file, 17 lines)
@@ -0,0 +1,17 @@
#!/usr/bin/env bash

set -xe

DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"

cd $DIR/1_base
bash build.sh

cd $DIR/2_buildpy
bash build.sh

cd $DIR/3_buildwheels
bash build.sh

cd $DIR/4_run
bash build.sh
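The rebuild script chains four sub-builds under `set -xe`: `-e` aborts the whole chain as soon as any `build.sh` fails, and `-x` echoes each command as it runs. A minimal sketch of that fail-fast behaviour (the step names here are illustrative, not the real build stages):

```shell
# -e stops the script at the first failing command, so step2 never runs.
bash -c 'set -e; echo step1; false; echo step2' || echo "chain aborted"
```

Without `-e`, `false` would be ignored and `step2` would still execute.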
docker/mongo-backup.cron (new file, 5 lines)
@@ -0,0 +1,5 @@
# Change to suit your needs, then place in /etc/cron.d/mongo-backup
# (so remove the .cron from the name)

MAILTO=yourname@youraddress.org
30 5 * * * root /root/docker/mongo-backup.sh
docker/mongo-backup.sh (new executable file, 28 lines)
@@ -0,0 +1,28 @@
#!/bin/bash -e

BACKUPDIR=/data/storage/db-bak
DATE=$(date +'%Y-%m-%d-%H%M')
ARCHIVE=$BACKUPDIR/mongo-live-$DATE.tar.xz

# Just a sanity check before we give it to 'rm -rf'
if [ -z "$DATE" ]; then
    echo "Empty string found where the date should be, aborting."
    exit 1
fi


# /data/db-bak in Docker is /data/storage/db-bak on the host.
docker exec mongo mongodump -d cloud \
    --out /data/db-bak/dump-$DATE \
    --excludeCollection tokens \
    --excludeCollection flamenco_task_logs \
    --quiet

cd $BACKUPDIR
tar -Jcf $ARCHIVE dump-$DATE/
rm -rf dump-$DATE

TO_DELETE="$(ls $BACKUPDIR/mongo-live-*.tar.xz | head -n -7)"
[ -z "$TO_DELETE" ] || rm -v $TO_DELETE

rsync -va $BACKUPDIR/mongo-live-*.tar.xz cloud-backup@swami-direct.blender.cloud:
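The retention line relies on `ls` sorting archive names lexicographically; because the timestamp format `%Y-%m-%d-%H%M` sorts the same way chronologically, `head -n -7` lists everything except the newest seven archives. A sketch of that pruning step with dummy files in a throwaway directory:

```shell
demo=$(mktemp -d)
cd "$demo"
for day in 01 02 03 04 05 06 07 08 09 10; do
    touch "mongo-live-2017-05-$day-0530.tar.xz"
done

# Everything except the 7 lexicographically-last (i.e. newest) archives.
TO_DELETE="$(ls mongo-live-*.tar.xz | head -n -7)"
[ -z "$TO_DELETE" ] || rm -v $TO_DELETE

ls mongo-live-*.tar.xz   # the 7 newest files remain
```

Note that `head -n -N` ("all but the last N lines") is a GNU coreutils extension.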
docker/renew-letsencrypt.sh (new executable file, 27 lines)
@@ -0,0 +1,27 @@
#!/bin/bash -e

# First time creating a certificate for a domain, use:
# certbot certonly --webroot -w /data/letsencrypt -d $DOMAINNAME

cd /data/letsencrypt

certbot renew

echo
echo "Recreating HAProxy certificates"

for certdir in /etc/letsencrypt/live/*; do
    domain=$(basename $certdir)
    echo " - $domain"

    cat $certdir/privkey.pem $certdir/fullchain.pem > $domain.pem
    mv $domain.pem /data/certs/
done


echo
echo -n "Restarting "
docker restart haproxy

echo "Certificate renewal completed."
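HAProxy expects a single PEM per domain containing both the private key and the full certificate chain, which is why the loop concatenates `privkey.pem` and `fullchain.pem` for each domain. A sketch with placeholder file contents standing in for the real files under `/etc/letsencrypt/live/<domain>/`:

```shell
certdir=$(mktemp -d)
printf 'PRIVATE KEY\n' > "$certdir/privkey.pem"
printf 'CERT CHAIN\n'  > "$certdir/fullchain.pem"

# Key first, then the chain, bundled the way HAProxy expects.
cat "$certdir/privkey.pem" "$certdir/fullchain.pem" > "$certdir/example.org.pem"
cat "$certdir/example.org.pem"
```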
gulp (new executable file, 19 lines)
@@ -0,0 +1,19 @@
#!/bin/bash -ex

GULP=./node_modules/.bin/gulp

function install() {
    npm install
    touch $GULP  # installer doesn't always touch this after a build, so we do.
}

# Rebuild Gulp if missing or outdated.
[ -e $GULP ] || install
[ gulpfile.js -nt $GULP ] && install

if [ "$1" == "watch" ]; then
    # Treat "gulp watch" as "gulp && gulp watch"
    $GULP
fi

exec $GULP "$@"
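The wrapper decides whether to reinstall by comparing modification times: `[ a -nt b ]` is true when `a` is newer than `b` (or when `b` does not exist). A small demonstration of that test, using a stamp file as a stand-in for `./node_modules/.bin/gulp`:

```shell
work=$(mktemp -d)
cd "$work"
touch gulpfile.js
sleep 1
touch stamp    # stand-in for ./node_modules/.bin/gulp

# Stamp is newer than the gulpfile: nothing to do.
[ gulpfile.js -nt stamp ] && echo "rebuild" || echo "up to date"

sleep 1
touch gulpfile.js              # "edit" the gulpfile again
[ gulpfile.js -nt stamp ] && echo "rebuild" || echo "up to date"
```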
gulpfile.js (new file, 128 lines)
@@ -0,0 +1,128 @@
var argv = require('minimist')(process.argv.slice(2));
var autoprefixer = require('gulp-autoprefixer');
var chmod = require('gulp-chmod');
var concat = require('gulp-concat');
var git = require('gulp-git');
var gulp = require('gulp');
var gulpif = require('gulp-if');
var pug = require('gulp-pug');
var livereload = require('gulp-livereload');
var plumber = require('gulp-plumber');
var rename = require('gulp-rename');
var sass = require('gulp-sass');
var sourcemaps = require('gulp-sourcemaps');
var uglify = require('gulp-uglify');
var cache = require('gulp-cached');

var enabled = {
    uglify: argv.production,
    maps: argv.production,
    failCheck: !argv.production,
    prettyPug: !argv.production,
    cachify: !argv.production,
    cleanup: argv.production,
};

var destination = {
    css: 'cloud/static/assets/css',
    pug: 'cloud/templates',
    js: 'cloud/static/assets/js',
};


/* CSS */
gulp.task('styles', function() {
    gulp.src('src/styles/**/*.sass')
        .pipe(gulpif(enabled.failCheck, plumber()))
        .pipe(gulpif(enabled.maps, sourcemaps.init()))
        .pipe(sass({outputStyle: 'compressed'}))
        .pipe(autoprefixer("last 3 versions"))
        .pipe(gulpif(enabled.maps, sourcemaps.write(".")))
        .pipe(gulp.dest(destination.css))
        .pipe(gulpif(argv.livereload, livereload()));
});


/* Templates - Pug */
gulp.task('templates', function() {
    gulp.src('src/templates/**/*.pug')
        .pipe(gulpif(enabled.failCheck, plumber()))
        .pipe(gulpif(enabled.cachify, cache('templating')))
        .pipe(pug({
            pretty: enabled.prettyPug
        }))
        .pipe(gulp.dest(destination.pug))
        .pipe(gulpif(argv.livereload, livereload()));
    // TODO(venomgfx): please check why 'gulp watch' doesn't pick up on .txt changes.
    gulp.src('src/templates/**/*.txt')
        .pipe(gulpif(enabled.failCheck, plumber()))
        .pipe(gulpif(enabled.cachify, cache('templating')))
        .pipe(gulp.dest(destination.pug))
        .pipe(gulpif(argv.livereload, livereload()));
});


/* Individual Uglified Scripts */
gulp.task('scripts', function() {
    gulp.src('src/scripts/*.js')
        .pipe(gulpif(enabled.failCheck, plumber()))
        .pipe(gulpif(enabled.cachify, cache('scripting')))
        .pipe(gulpif(enabled.maps, sourcemaps.init()))
        .pipe(gulpif(enabled.uglify, uglify()))
        .pipe(rename({suffix: '.min'}))
        .pipe(gulpif(enabled.maps, sourcemaps.write(".")))
        .pipe(chmod(644))
        .pipe(gulp.dest(destination.js))
        .pipe(gulpif(argv.livereload, livereload()));
});


/* Collection of scripts in src/scripts/tutti/ to merge into tutti.min.js */
/* Since it's always loaded, it's only for functions that we want site-wide */
gulp.task('scripts_concat_tutti', function() {
    gulp.src('src/scripts/tutti/**/*.js')
        .pipe(gulpif(enabled.failCheck, plumber()))
        .pipe(gulpif(enabled.maps, sourcemaps.init()))
        .pipe(concat("tutti.min.js"))
        .pipe(gulpif(enabled.uglify, uglify()))
        .pipe(gulpif(enabled.maps, sourcemaps.write(".")))
        .pipe(chmod(644))
        .pipe(gulp.dest(destination.js))
        .pipe(gulpif(argv.livereload, livereload()));
});


// While developing, run 'gulp watch'
gulp.task('watch', function() {
    // Only listen for live reloads if run with --livereload
    if (argv.livereload) {
        livereload.listen();
    }

    gulp.watch('src/styles/**/*.sass', ['styles']);
    gulp.watch('src/templates/**/*.pug', ['templates']);
    gulp.watch('src/scripts/*.js', ['scripts']);
    gulp.watch('src/scripts/tutti/**/*.js', ['scripts_concat_tutti']);
});


// Erases all generated files in output directories.
gulp.task('cleanup', function() {
    var paths = [];
    for (var attr in destination) {
        paths.push(destination[attr]);
    }

    git.clean({ args: '-f -X ' + paths.join(' ') }, function (err) {
        if (err) throw err;
    });
});


// Run 'gulp' to build everything at once
var tasks = [];
if (enabled.cleanup) tasks.push('cleanup');
gulp.task('default', tasks.concat(['styles', 'templates', 'scripts', 'scripts_concat_tutti']));
manage.py (35 lines changed)
@@ -1,40 +1,7 @@
#!/usr/bin/env python

from __future__ import print_function

import logging
from flask import current_app
from pillar import cli
from pillar.cli import manager_maintenance
from cloud import app

log = logging.getLogger(__name__)


@manager_maintenance.command
def reconcile_subscribers():
    """For every user, check their subscription status with the store."""
    from pillar.auth.subscriptions import fetch_user

    users_coll = current_app.data.driver.db['users']
    unsubscribed_users = []
    for user in users_coll.find({'roles': 'subscriber'}):
        print('Processing %s' % user['email'])
        print('  Checking subscription')
        user_store = fetch_user(user['email'])
        if user_store['cloud_access'] == 0:
            print('  Removing subscriber role')
            users_coll.update(
                {'_id': user['_id']},
                {'$pull': {'roles': 'subscriber'}})
            unsubscribed_users.append(user['email'])

    if not unsubscribed_users:
        return

    print('The following users have been unsubscribed')
    for user in unsubscribed_users:
        print(user)
from runserver import app

cli.manager.app = app
cli.manager.run()
package.json (new file, 27 lines)
@@ -0,0 +1,27 @@
{
    "name": "blender-cloud",
    "license": "GPL-2.0+",
    "author": "Blender Institute",
    "repository": {
        "type": "git",
        "url": "git://git.blender.org/blender-cloud.git"
    },
    "devDependencies": {
        "gulp": "~3.9.1",
        "gulp-autoprefixer": "~2.3.1",
        "gulp-cached": "~1.1.0",
        "gulp-chmod": "~1.3.0",
        "gulp-concat": "~2.6.0",
        "gulp-if": "^2.0.1",
        "gulp-git": "~2.4.2",
        "gulp-pug": "~3.2.0",
        "gulp-jade": "~1.1.0",
        "gulp-livereload": "~3.8.1",
        "gulp-plumber": "~1.1.0",
        "gulp-rename": "~1.2.2",
        "gulp-sass": "~2.3.1",
        "gulp-sourcemaps": "~1.6.0",
        "gulp-uglify": "~1.5.3",
        "minimist": "^1.2.0"
    }
}
requirements-dev.txt (new file, 11 lines)
@@ -0,0 +1,11 @@
-r ../pillar-python-sdk/requirements-dev.txt
-r ../pillar/requirements-dev.txt
-r ../attract/requirements-dev.txt
-r ../flamenco/requirements-dev.txt
-r ../pillar-svnman/requirements-dev.txt

-e ../pillar-python-sdk
-e ../pillar
-e ../attract
-e ../flamenco
-e ../pillar-svnman
@@ -1,67 +1,4 @@
# Primary requirements
# pillarsdk
# pillar
# attract
# flamenco

# Secondary requirements (i.e. pulled in from primary requirements)
algoliasearch==1.8.0
attrs==16.2.0
bcrypt==2.0.0
blinker==1.4
bugsnag==2.3.1
bleach==1.4.3
Cerberus==0.9.2
cffi==1.7.0
commonmark==0.7.2
cryptography==1.4
enum34==1.1.6
Eve==0.6.3
Events==0.2.1
Flask==0.10.1
Flask-Cache==0.13.1
Flask-Script==2.0.5
Flask-Login==0.3.2
Flask-OAuthlib==0.9.3
Flask-PyMongo==0.4.1
Flask-WTF==0.12
flup==1.0.2
future==0.15.2
gcloud==0.12.0
google-apitools==0.4.11
googleapis-common-protos==1.2.0
html5lib==0.9999999
httplib2==0.9.2
idna==2.0
ipaddress==1.0.16
itsdangerous==0.24
Jinja2==2.8
MarkupSafe==0.23
markdown==2.6.7
ndg-httpsclient==0.4.0
oauth2client==3.0.0
oauthlib==1.1.2
pathlib2==2.2.1
Pillow==2.8.1
protobuf==3.0.0
protorpc==0.11.1
pyasn1==0.1.9
pyasn1-modules==0.0.8
pycparser==2.14
pycrypto==2.6.1
pylru==1.0.4
pymongo==3.3.0
pyOpenSSL==0.15.1
python-dateutil==2.5.3
redis==2.10.5
requests==2.9.1
requests-oauthlib==0.6.2
rsa==3.4.2
scandir==1.4
simplejson==3.8.2
six==1.10.0
svn==0.3.43
WebOb==1.5.0
Werkzeug==0.11.10
WTForms==2.1
zencoder==0.6.5
-r ../pillar/requirements.txt
-r ../attract/requirements.txt
-r ../flamenco/requirements.txt
-r ../pillar-svnman/requirements.txt
rsync_ui.sh (deleted file, 40 lines)
@@ -1,40 +0,0 @@
#!/usr/bin/env bash

PILLAR_DIR=$(python <<EOT
from __future__ import print_function
import os.path
import pillar

print(os.path.dirname(os.path.dirname(pillar.__file__)))
EOT
)

ASSETS="$PILLAR_DIR/pillar/web/static/assets/"
TEMPLATES="$PILLAR_DIR/pillar/web/templates/"

if [ ! -d "$ASSETS" ]; then
    echo "Unable to find assets dir $ASSETS"
    exit 1
fi

cd $PILLAR_DIR
if [ $(git rev-parse --abbrev-ref HEAD) != "production" ]; then
    echo "You are NOT on the production branch, refusing to rsync_ui." >&2
    exit 1
fi

echo
echo "*** GULPA GULPA ***"
if [ -x ./node_modules/.bin/gulp ]; then
    ./node_modules/.bin/gulp --production
else
    gulp --production
fi

echo
echo "*** SYNCING ASSETS ***"
rsync -avh $ASSETS root@cloud.blender.org:/data/git/pillar/pillar/web/static/assets/

echo
echo "*** SYNCING TEMPLATES ***"
rsync -avh $TEMPLATES root@cloud.blender.org:/data/git/pillar/pillar/web/templates/
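The deleted script located the Pillar checkout by asking Python for the package's `__file__` and taking its grandparent directory. The same heredoc trick works for any importable module; a sketch using the stdlib `os` module as a stand-in for `pillar`:

```shell
# Print the grandparent directory of an installed module's __file__.
PKG_DIR=$(python3 <<'EOT'
import os.path
import os

print(os.path.dirname(os.path.dirname(os.__file__)))
EOT
)
echo "$PKG_DIR"
```

The quoted `<<'EOT'` delimiter keeps the shell from expanding anything inside the embedded Python.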
@@ -3,13 +3,19 @@
from pillar import PillarServer
from attract import AttractExtension
from flamenco import FlamencoExtension
from svnman import SVNManExtension
from cloud import CloudExtension

attract = AttractExtension()
flamenco = FlamencoExtension()
svnman = SVNManExtension()
cloud = CloudExtension()

app = PillarServer('.')
app.load_extension(attract, '/attract')
app.load_extension(flamenco, '/flamenco')
app.load_extension(svnman, '/svn')
app.load_extension(cloud, None)
app.process_extensions()

if __name__ == '__main__':
@@ -1,22 +1,23 @@
from os.path import abspath, dirname
import sys

activate_this = '/data/venv/bin/activate_this.py'
execfile(activate_this, dict(__file__=activate_this))
from flup.server.fcgi import WSGIServer
my_path = dirname(abspath(__file__))
sys.path.append(my_path)

from pillar import PillarServer
from attract import AttractExtension
from flamenco import FlamencoExtension

sys.path.append('/data/git/blender-cloud/')
from svnman import SVNManExtension
from cloud import CloudExtension

attract = AttractExtension()
flamenco = FlamencoExtension()
svnman = SVNManExtension()
cloud = CloudExtension()

application = PillarServer(dirname(abspath(__file__)))
application = PillarServer(my_path)
application.load_extension(attract, '/attract')
application.load_extension(flamenco, '/flamenco')
application.load_extension(svnman, '/svn')
application.load_extension(cloud, None)
application.process_extensions()

if __name__ == '__main__':
    WSGIServer(application).run()
setup.cfg (new file, 5 lines)
@@ -0,0 +1,5 @@
[tool:pytest]
addopts = -v --cov cloud --cov-report term-missing --ignore node_modules --ignore docker

[pep8]
max-line-length = 100
src/styles/_about.sass (new file, 9 lines)
@@ -0,0 +1,9 @@
section.team
  h2, .people-container
    text-align: center
  h3
    margin-bottom: 0
  h3 small
    display: block
  .people-intro, .row
    margin-bottom: 20px
src/styles/_homepage.sass (new file, 752 lines)
@@ -0,0 +1,752 @@
.dashboard-container
  +container-behavior
  +media-xs
    flex-direction: column
  align-content: center
  align-items: flex-start
  display: flex
  justify-content: space-around
  word-break: break-word

  section.dashboard-main,
  section.dashboard-secondary
    +media-xs
      width: 100%
      margin: 20px auto

    img
      max-width: 100%

  section.dashboard-main
    +container-box
    width: 52%

  section.dashboard-secondary
    width: 46%
    flex-direction: column
    margin-right: auto

span.section-lead
  display: block
  padding: 10px 0
  color: $color-text-dark-secondary

section.dashboard-main,
section.dashboard-secondary
  h4
    padding-bottom: 5px
    margin-bottom: 20px
    position: relative

    &:before
      position: absolute
      width: 50px
      height: 2px
      top: 125%
      content: ' '
      display: block
      background-color: $color-primary

  a
    color: $color-text

    &:hover
      color: $color-primary
      cursor: pointer

nav#nav-tabs,
nav#sub-nav-tabs
  ul#nav-tabs__list,
  ul#sub-nav-tabs__list
    margin: 0
    padding: 0
    list-style: none
    border-bottom: thin solid $color-background
    +clearfix

    li.nav-tabs__list-tab
      float: left
      border: none
      border-bottom: 3px solid transparent
      color: $color-text-dark-primary
      user-select: none

      &:hover
        border-color: rgba($color-secondary, .3)
        cursor: pointer
        color: $color-text-dark
        a
          color: $color-text-dark

      a
        display: block
        text-decoration: none
        padding: 10px 15px 5px
        color: $color-text-dark-primary

        i
          margin-right: 5px
          color: $color-text-dark-secondary
          font-size: .9em

          &.pi-blender
            margin-right: 10px

        span
          color: $color-text-dark-hint
          margin-left: 5px

      &.active
        border-color: $color-secondary
        color: $color-secondary-dark
        a, i
          color: $color-secondary-dark

      &.disabled
        border-color: $color-background-light
        color: $color-text-dark-hint
        cursor: default
        a, i
          color: $color-text-dark-hint

        &:hover
          border-color: $color-background-light
          pointer-events: none

    li.create
      cursor: pointer
      display: inline-block
      float: right
      font:
        size: 1.2em
        weight: 400
      padding: 5px 10px
      margin-top: 3px

      a
        color: $color-success
        text-decoration: none

      &.disabled
        cursor: wait
        border-color: $color-success
        opacity: .8
        a
          cursor: wait

section.stream
  background-color: white
  border-bottom: thin solid $color-background-dark

ul.activity-stream__list
  list-style: none
  margin: 0
  padding: 0

  $activity-stream-thumbnail-size: 110px
  > li
    position: relative
    display: flex
    padding: 10px 0
    overflow: hidden
    border-top: thin solid $color-background-dark

    &:first-child
      border: none

    &.active .activity-stream__list-details .title
      color: $color-primary

    &:hover
      .title
        text-decoration: underline
    &.video
      a.image
        &:hover
          i
            font-size: 3.5em
          img
            opacity: .9
        img
          opacity: .7
          z-index: 0
          transition: opacity 150ms ease-in-out

        i
          +position-center-translate
          z-index: 1
          color: rgba(white, .6)
          font-size: 3em
          transition: font-size 100ms ease-in-out

    &.comment
      .activity-stream__list-thumbnail
        background: transparent
        color: $node-type-comment
        font-size: 1.2em
        box-shadow: none

        i
          +position-center-translate
          left: 22px
          top: 19px

      .activity-stream__list-details
        padding: 0
        .title
          color: $color-text-dark
          padding: 7px 10px 2px 10px
          font-size: 1em
          margin: 0

        ul.meta
          padding: 0 10px 7px 10px

          li
            &.where-parent:before
              content: '\e83a'
              font-family: 'pillar-font'

            &.what:before
              display: none

    &.post
      .activity-stream__list-thumbnail
        border-color: $node-type-post
        background-color: $node-type-post
      .activity-stream__list-details .title
        color: darken($node-type-post, 15%)
        font:
          size: 1.3em
          weight: 500

    &.asset, &.comment, &.post
      &:hover
        cursor: pointer
    &.empty
      display: none
      color: $color-text-dark-primary
      padding: 20px
      text-align: center
      span
        color: $color-primary
        &:hover
          text-decoration: underline
          cursor: pointer

    &.with-picture
      min-height: $activity-stream-thumbnail-size

      .activity-stream__list-thumbnail
        background-color: black
        width: $activity-stream-thumbnail-size * 1.69
        min-width: $activity-stream-thumbnail-size * 1.69

      .activity-stream__list-thumbnail-icon
        position: absolute
        top: 0
        left: 0
        right: 0
        bottom: 0
        font-size: 1.3em
        text-shadow: 1px 1px 0 rgba(black, .2)
        background-image: linear-gradient(10deg, rgba(black, .5) 0%, transparent 40%)

        i
          position: absolute
          bottom: -8px
          left: 20px
          top: initial
          right: initial
          color: white

    .activity-stream__list-thumbnail
      position: relative
      display: flex
      justify-content: center
      align-items: center
      overflow: hidden
      width: 35px
      height: auto
      min-width: 35px
      min-height: auto

      +media-xs
        display: none


      &.image i
        color: $node-type-asset_image
      &.file i
        color: $node-type-asset_file
      &.video i
        color: $node-type-asset_video

      i
        +position-center-translate
        left: 23px
        top: 21px
        font-size: 1.1em

      img
        max-height: $activity-stream-thumbnail-size
        +position-center-translate

    .activity-stream__list-details
      display: flex
      flex-direction: column
      justify-content: space-around
      flex: 1
      overflow: hidden
      position: relative
      max-width: 100%
      margin-right: auto
      padding: 10px 0

      +media-xs
        margin-left: 0

      .ribbon
        +ribbon
        right: -47px
        top: 5px
        font:
          size: 12px
          weight: 500

        span
          padding: 1px 50px

      .title
        display: inline-block
        padding: 0 10px
        color: $color-text-dark
        font-size: 1.1em

        span
          @include badge(hsl(hue($color-success), 60%, 45%), 3px)
          font-size: .7em
          padding: 1px 5px
          margin-right: 5px

      ul.meta
        +list-meta
        padding: 5px 10px 0 10px
        font-size: .85em
        color: $color-text-dark-secondary
        display: flex
        white-space: nowrap

        &.extra
          margin-top: auto

        li
          padding-left: 10px
          &:before
            left: -5px
          &.where-project
            +text-overflow-ellipsis

section.comments
  padding: 0 15px 5px

  ul
    padding: 0

  > ul
    list-style-type: none
    margin: 10px 0 0

    > li
      +text-overflow-ellipsis
      border-top: thin solid $color-background-dark
      padding: 10px 0

      &:first-child
        border: none

      > a
        +text-overflow-ellipsis
        color: $color-text
        display: block
        padding-bottom: 5px

section.blog-stream
  +media-md
    padding-left: 10px
  +media-sm
    padding-left: 10px
  position: relative

  .feed
    position: absolute
    top: 10px
    right: 10px
    font-size: 1.4em
    color: lighten($color-text-dark-hint, 10%)

    &:hover
      color: $color-primary

  > ul
    margin: 0
    padding: 0
    list-style: none
    border-top: thin solid $color-background

  .blog_index-item
    +container-box
    display: flex
    flex-direction: column
    margin-bottom: 50px

    &:before
      height: 1px
      background-color: $color-background-dark
      position: absolute
      bottom: -26px
      left: 25px
      right: 25px
      content: ' '

    &:last-child
      margin-bottom: 0

      &:before
        display: none

    video
      max-width: 100%

    a.item-title
      font-size: 1.6em
      padding: 5px 15px
      display: block
      color: $color-text

      &:hover
        color: $color-primary

    ul.meta
      +list-meta
      font-size: .9em
      padding: 15px 15px 5px


    &.blog-non-featured
      border-radius: 0
      margin: 0

    .item-content
      +node-details-description
      padding: 10px 15px

  .blog-stream__list-details
    .title
      color: $color-text-dark-primary
      display: block
      font-size: 1.3em

      &:hover
        color: $color-primary

    ul.meta
      +list-meta
      padding-top: 5px
      font-size: .9em
      color: $color-text-dark-secondary

      li
        padding-left: 10px
        &:before
          left: -5px

  .blog_index-header
    display: block
    position: relative

    img
      border-top-left-radius: 3px
      border-top-right-radius: 3px
      width: 100%

  .more
    text-align: center

    a
      color: $color-text
      display: block
      padding: 25px 0
      text-decoration: underline
      width: 100%

      &:hover
        color: $color-primary

section.random-asset
  border-bottom: thin solid $color-background-dark

  ul.random-asset__list
    list-style: none
    padding: 0

    > li
      align-items: center
      border-top: thin solid $color-background
      display: flex
      padding: 7px 0
      position: relative
      overflow: hidden

      &:first-child
        border-top: none

      .ribbon
        +ribbon
        right: -47px
        top: 5px
        font:
          size: 12px
          weight: 500

        z-index: 1

        span
          padding: 1px 50px

      .random-asset__list-thumbnail
        background-color: $color-background
        display: block
        height: 50px
        margin-right: 15px
        min-height: 50px
        min-width: 50px
        overflow: hidden
        position: relative
        width: 50px

        img
          width: 100%

        i
          +position-center-translate
          font-size: 1.6em
          color: $color-text-light

        &.image
          background-color: $node-type-asset_image
        &.file
          background-color: $node-type-asset_file
          font-size: .8em
        &.video
          background-color: $node-type-asset_video
          font-size: .8em
        &.None
          background-color: $node-type-group

      .random-asset__list-details
        .title
          display: block
          font-size: 1em
          color: $color-text-dark-primary

          &:hover
            color: $color-primary

        ul.meta
          +list-meta
          padding-top: 5px
          font-size: .9em

          li
            &:before
              left: -5px
            &.what
              text-transform: capitalize

      &.featured
        align-items: flex-start
        flex-direction: column
        padding: 0

        a.title
          font-size: 1.1em
          padding: 10px 0 5px
          display: block
          color: $color-text

          &:hover
            color: $color-primary

        a.random-asset__thumbnail
          display: block
          position: relative

          &.video
            background-color: black
            img
              opacity: .7

          img
            transition: opacity 150ms ease-in-out
            width: 100%
            max-width: 100%

          i
            +position-center-translate
            color: white
            font-size: 3em
            text-shadow: 0 0 25px black
            transition: font-size 150ms ease-in-out

          &:hover
            i
              font-size: 3.5em
            img
              opacity: .85

        ul.meta
          +list-meta
          padding-bottom: 10px


section.announcement
  +container-box
  margin-left: 15px
  margin-right: 15px

  .header-icons
    display: flex
    align-items: center
    justify-content: center
    padding: 20px 0 5px 0

    i
      font-size: 2.5em
      color: $color-info

      &.pi-heart-filled
        color: $color-danger
        margin-left: 5px

  img.header
    width: 100%
    margin: 0 auto
    border-top-left-radius: 3px
    border-top-right-radius: 3px

  iframe
    width: 100%
    position: relative
    left: 15px
    margin: 25px auto

    +media-sm
      height: 500px
    +media-md
      height: 520px
    +media-lg
      height: 580px

  .text
    padding: 15px

    .title
      padding-bottom: 10px
      font:
        family: $font-body
        size: 1.4em
        weight: 300

      +media-xs
        font-size: 1.4em

      strong
        color: $color-primary-dark

      a
        color: $color-text-dark-primary

    .lead
      font-size: 1em
      +list-bullets

      ul
        margin-top: 10px
        padding-left: 10px

    hr
      border: none
      height: 1px
      width: 100%
      margin: 10px 0
      background-color: $color-background
      clear: both

    +media-xs
      padding-left: 10px

  .buttons
    margin: 15px auto 0 auto
    display: flex
    align-items: center
    justify-content: space-around
    flex-wrap: wrap

    a
      +button($color-text-light, 3px)
      padding: 5px 0
      margin:
        bottom: 5px
        right: auto
        left: auto
      font-size: .9em
      opacity: 1
      flex: 1

      +media-xs
        margin: 10px auto
        width: 100%

      &:first-child
        margin-right: 15px

      &.blue
        +button(hsl(hue($color-info), 60%, 45%), 3px)

      &.orange
        +button(hsl(hue($color-secondary), 50%, 50%), 3px)
        padding: 5px 15px

      &.green
        +button(hsl(hue($color-success), 60%, 40%), 3px, true)


section.dashboard-in-production
  .in-production-project
    border-bottom: thin solid $color-background-dark
    color: $color-text-dark-primary
    display: block
    font-size: 1.1em
    margin-bottom: 15px

    > img
      margin-bottom: 15px

body.homepage
  .dashboard-container
    .dashboard-main
      +media-xs
        width: 100%
      background-color: transparent
      box-shadow: none
      width: 60%

    .dashboard-secondary
      +container-box
      +media-xs
        width: 100%
      width: 38%

      > section
        padding: 15px
39
src/styles/_services.sass
Normal file
@@ -0,0 +1,39 @@
.services
  #page-header
    text-shadow: 1px 1px 0 rgba(black, .2)
    border-bottom: none
    align-items: initial

    .page-title
      text-align: left
      font-size: 3em
      margin: 0

    .page-title-summary
      max-width: 900px
      text-align: left
      font-size: 1.4em

  .page-card-side
    img
      max-width: 100%
      border-radius: 3px

  .tip
    margin-top: 15px
    color: $color-text-dark-secondary
    font-size: .8em

    a
      color: $color-primary
      text-decoration: underline

  span.text-background
    +text-background(white, #358, 2px, 5px 0)

  .navbar-backdrop-overlay
    background-image: linear-gradient(rgba(black, .3), rgba(black, .8))

#blender-sync
  small strong
    color: $color-success
401
src/styles/_welcome.sass
Normal file
@@ -0,0 +1,401 @@
.join
  position: relative

  nav.navbar
    background-color: white

    .navbar-brand
      color: $color-text

    li a.navbar-item
      color: $color-text

    .navbar-container
      +container-behavior

    .navbar-toggle
      border: 2px solid $color-text-dark-primary
      color: $color-text

    .navbar-nav
      +media-xs
        padding: 10px

  #page-header
    +media-lg
      min-height: 600px
    +media-xs
      background-size: cover
      background-position: right
      min-height: 500px

    .page-title,
    .page-title-summary
      line-height: 1.2em
      text-align: left
      text-shadow: 1px 1px 0 rgba(black, .5), 0 0 25px rgba(black, .5)

    .page-title-summary
      font-size: 1.4em

      li.special
        color: $color-success
        font-weight: bold

  .page-card
    &:nth-child(even)
      background-color: white

    &-header
      margin-bottom: 0

    &-side
      +media-xs
        width: 100%
        max-width: 100%

      &.text
        +media-xs
          padding-left: 50px
        padding:
          bottom: 150px
          top: 150px

    &.right
      background-color: $color-background-light

      .page-card-title,
      .page-card-summary
        padding-left: 25px

      .page-card-title:after
        left: 25px

      .text
        +media-xs
          padding-left: 25px

    &.intro
      +media-xs
        margin-left: 50px

      .page-card-side
        +media-sm
          max-width: 500px
        max-width: 600px

        padding:
          bottom: 0
          top: 50px

        &+.page-card-header
          padding-top: 0

      .page-card-summary
        font-size: 1.2em

    .page-card-image
      align-items: center
      display: flex
      height: 100%
      justify-content: center

    &.light
      overflow: hidden
      position: relative

      .learn
        color: white
        text-decoration: underline

      .page-card
        &-title
          color: white
          font-weight: bold

          &:after
            border-color: white

        &-summary
          color: white
          position: relative
          text-shadow: 1px 1px 0 rgba(black, .5), 0 0 25px rgba(black, .2)
          z-index: 1

          p
            font-size: .9em

          &.summary-action
            font-size: .8em

            a.page-card-cta
              font-size: .9em

          a
            color: white
            text-decoration: underline


      a.page-card-cta
        background: #ff4970
        border-radius: 3px
        border: none
        box-shadow: 1px 1px 0 rgba(black, .2)
        font-weight: bold
        margin-right: 15px
        margin-top: 25px
        padding: 7px 20px
        text-decoration: none
        text-shadow: none

        &:hover
          background: lighten(#ff4970, 5%)

      .page-card-image
        img
          +media-xs
            display: none
          max-width: initial
          position: absolute
          width: initial
          z-index: 0

      &.training
        margin-top: 45px

        a.page-card-cta
          background: #0082ff

        .page-card-image
          img
            right: 50px
            top: 50%
            transform: translateY(-50%)

      &.open-movies
        .page-card-image
          img
            top: 50px
            left: 50px

      &.services
        .page-card-image
          img
            left: 50px

    &.subscribe
      padding: 40px 0
      text-align: center

      span strong
        color: $color-danger

      .page-card-side
        width: 100%
        max-width: 100%

      .page-card-title
        line-height: 1.5em
        color: white
        text-align: center
        padding-bottom: 15px

        &:after
          border: none

      .page-card-summary
        color: $color-text-light-primary

      .page-card-cta
        text-align: center
        font-size: 1.3em
        padding: 7px 35px
        margin-right: initial
        border: thin solid rgba(white, .8)
        border-radius: 3px


  .training-other
    color: $color-text-dark-secondary
    font-size: .9em
    padding: 30px
    text-align: center

    a
      color: $color-text-dark-secondary
      text-decoration: underline

      &:hover
        color: $color-text-dark-primary

  .pricing
    +media-xs
      padding-top: 30px
    margin-bottom: 0
    padding-bottom: 100px


  .supported-by
    padding: 50px 0
    text-align: center
    background-color: $color-background
    border-top: 5px solid $color-background-dark

    img.logos
      padding: 60px 0 100px 0
      max-height: 500px
      max-width: 100%

      +media-xs
        max-width: 100%

  .assets
    padding-bottom: 80px

  .flex
    +media-xs
      flex-direction: column

    display: flex

    .box
      flex: 1
      margin: 20px

  .page-triplet-container
    .triplet-card
      +media-xs
        margin-top: 20px

  .navbar
    .nav-item-sign-in
      a.navbar-item
        background-color: $color-primary
        border: none
        border-radius: 3px
        color: white
        height: auto
        font-weight: bold
        margin-top: 5px
        margin-left: 10px
        padding: 10px 20px

        &:hover
          background-color: lighten($color-primary, 10%)
          box-shadow: none

  .container.wide-on-sm
    +media-sm
      width: 100%


section.pricing
  padding: 50px 0
  margin-bottom: 25px
  background-color: $color-background-light
  +clearfix
  +media-xs
    padding: 0

  h2
    text-align: center
    font-size: 2.3em
    text-shadow: 1px 1px 0 white
    margin:
      top: 10px
      bottom: 50px
    padding: 0

    +media-xs
      font-size: 1.6em

  .box
    margin-top: 40px
    padding: 20px 20px 60px 20px
    border: 1px solid $color-text-dark-hint
    background-color: white
    position: relative
    text-align: center
    border-top: 3px solid rgba($color-text-dark-secondary, .5)
    border-bottom-left-radius: 3px
    border-bottom-right-radius: 3px

    +media-xs
      margin-bottom: 15px

    &.yearly
      border-top: 3px solid $color-info
      transform: scale(1.1)

      +media-xs
        transform: scale(1)

      a.sign-up-now
        +button($color-primary, 3px, true)

    h3
      font:
        size: 1.8em
        family: $font-body
      padding-bottom: 0
      margin: 25px 0 0 10px

    .pricing-display
      position: relative
      color: $color-info
      padding: 10px 0

      .currency-sign,
      .digit-dec
        font-size: 1.7em
        position: relative
        top: -25px
        font-weight: 100

      .digit-int
        font-size: 4.5em
        font-weight: 100

    .pricing-caption
      color: $color-text-dark-hint
      text-align: center
      padding-bottom: 25px

      ul
        text-align: left

        +list-bullets

        ul
          color: $color-text-dark-primary
          padding-left: 10px
          margin-top: 15px

        li
          padding-bottom: 8px

    .sign-up-now
      position: absolute
      bottom: 25px
      width: 65%
      left: 50%
      transform: translateX(-50%)
      font-size: 1.2em

      +button($color-primary, 3px)
      padding: 5px 25px
      white-space: nowrap
      text-align: center

  .education
    color: $color-text-dark-primary
    font-size: 1.1em
    padding: 25px 15px 0
    text-align: center

    h3
      color: $color-primary-dark

    .btn
      margin-top: 15px
      min-width: 200px
26
src/styles/main.sass
Normal file
@@ -0,0 +1,26 @@
@import ../../../pillar/src/styles/_normalize
@import ../../../pillar/src/styles/_config
@import ../../../pillar/src/styles/_utils

/* Generic styles (comments, notifications, etc.) come from base.css */

/* Blender Cloud specific styles */
@import ../../../pillar/src/styles/_project
@import ../../../pillar/src/styles/_project-sharing
@import ../../../pillar/src/styles/_project-dashboard
@import ../../../pillar/src/styles/_user
@import _welcome
@import _homepage
@import _services
@import _about
@import ../../../pillar/src/styles/_search
@import ../../../pillar/src/styles/_organizations

/* services, about, etc. */
@import ../../../pillar/src/styles/_pages

/* Plugins are included here; don't include them in base unless needed by other pillar apps */
@import ../../../pillar/src/styles/plugins/_jstree
@import ../../../pillar/src/styles/plugins/_js_select2

/* CSS for pillar-font comes from fontello.com using static/assets/font/config.json */
347
src/styles/project-landing.sass
Normal file
@@ -0,0 +1,347 @@
@import ../../../pillar/src/styles/_config
@import ../../../pillar/src/styles/_utils

$node-latest-thumbnail-size: 160px
$node-latest-gallery-thumbnail-size: 200px

body
  background-color: white

.page-body
  background-color: white

nav.navbar
  background-color: white

  .navbar-brand
    color: $color-text

  li a.navbar-item
    color: $color-text
    &:hover
      color: black
    &:focus
      color: black
    &.active
      color: black

  .dropdown.open
    a
      background-color: white

  .dropdown.libraries
    &:hover
      background: none
    ul.dropdown-menu
      background-color: white
      li
        a
          color: $color-text
          &:hover
            color: black
            background-color: white

  .navbar-container
    +container-behavior

  .navbar-toggle
    border: 2px solid $color-text-dark-primary
    color: $color-text

  .navbar-nav
    +media-xs
      padding: 10px

  .search-input
    display: none

.node-details-container
  max-width: 620px
  font-family: $font-body
  font-size: 1.3em
  line-height: 1.5em
  margin: 0 auto 40px auto
  padding-bottom: 40px
  border-bottom: thin solid $color-background

  p
    margin-bottom: 1.3em

header
  display: flex
  flex-direction: column /* stack flex items vertically */
  position: relative

  img.header
    width: 100%
    flex-direction: column /* stack flex items vertically */
    position: relative

  a.page-card-cta
    position: absolute
    left: 76%
    top: 50%
    transform: translate(-50%, -50%)
    color: white
    font-weight: bold
    background: #ff4970
    border-radius: 3px
    border: none
    box-shadow: 1px 1px 0 rgba(black, .2)
    padding: 7px 20px
    text-decoration: none
    text-shadow: none

    &:hover
      background: lighten(#ff4970, 5%)


h2
  text-align: center
  margin-bottom: 40px

section
  max-width: 1024px
  padding-top: 20px
  border-top: thin solid $color-background
  margin: 0 auto

a.btn
  margin: 20px auto
  font-size: 1.3em
  padding: 9px 18px
  border-radius: 8px
  color: $color-text-dark

.navbar-secondary
  max-width: 620px
  margin: 0 auto

  .navbar-container
    border-bottom: 1px solid #dddddd

  .navbar-collapse
    padding-left: 0

  li
    a
      padding-left: 20px
      padding-right: 20px
      color: $color-text
      &:hover,
      &.active
        background: none
        color: black
        box-shadow: 0px 2px 0 rgba(red, .8)

.node-extra
  display: flex
  flex-direction: column

  //padding: 0 20px
  width: 100%


  .node-updates
    flex: 1
    font-size: 1.1em

    ul
      padding: 0
      margin: 0 0 15px 0
      display: flex
      flex-direction: row
      flex-wrap: wrap

      li
        display: flex
        flex-direction: column
        list-style: none
        padding: 5px
        cursor: pointer
        width: 33.3333%

        +media-xs
          width: 100%

        &.texture, &.group_texture
          width: 25%

        &:hover
          img
            opacity: .9
          a.title
            //color: $color-primary
            text-decoration: underline

        &.post
          .info .title
            //color: $node-type-post
            font-size: 1.1em
          a.image
            border: none
            //border-color: $node-type-post
            background-color: hsl(hue($node-type-post), 20%, 55%)

        &.asset.image a.image
          border-color: $node-type-asset_image
          background-color: hsl(hue($node-type-asset_image), 20%, 55%)
        &.asset.file a.image
          border-color: $node-type-asset_file
          background-color: hsl(hue($node-type-asset_file), 20%, 55%)
        &.asset.video a.image
          border-color: $node-type-asset_video
          background-color: hsl(hue($node-type-asset_video), 20%, 55%)

        .image
          width: 100%
          height: $node-latest-thumbnail-size
          min-height: $node-latest-thumbnail-size
          max-height: $node-latest-thumbnail-size
          background-color: $color-background
          margin: 5px auto 10px auto
          position: relative
          overflow: hidden
          border-radius: 0

          img
            max-height: $node-latest-thumbnail-size
            +position-center-translate

          i
            color: rgba(white, .9)
            font-size: 1.8em
            position: absolute
            bottom: 3px
            left: 5px
            text-shadow: 1px 1px 0 rgba(black, .2)

            &.pi-file-archive
              font-size: 1.5em
              bottom: 5px
            &.pi-newspaper
              font-size: 1.6em
              left: 7px

          .ribbon
            +ribbon

        .info
          width: 100%
          height: 100%
          display: flex
          flex-direction: column
          justify-content: space-between
          word-break: break-word

          .description
            font-size: 1em
            line-height: 1.8em
            padding-top: 8px
            color: $color-text-dark-primary

          .title
            display: block
            font-size: 1.3em
            color: $color-text-dark
            font-weight: 600
            +clearfix
            +text-overflow-ellipsis

          span.details
            width: 100%
            display: block
            font-size: 1em
            line-height: 1.2em
            padding: 5px 0
            color: $color-text-dark-secondary
            +clearfix

            .who
              margin-left: 3px
            .what
              text-transform: capitalize


$bg-color: #444
$bg-color2: #666
$yellow: rgb(249,229,89)
$almost-white: rgb(255,255,255)
$btn-transparent-color: rgba(249,229,89,1)
$btn-transparent-bg: rgba(249,229,89,0)


section.gallery
  max-width: 1024px
  margin: 60px auto 0 auto
  text-align: center
  padding-bottom: 40px

  p
    color: $almost-white
    padding: 0 40px


  .thumbnail
    float: left
    position: relative
    width: 23%
    padding-bottom: 23%
    margin: 0.83%
    overflow: hidden
    &:hover
      box-shadow: 2px 2px 50px 0 rgba(0,0,0,0.3)

    .img-container
      position: absolute
      width: 100%
      height: 100%

      img
        width: 300%
        transform: translate(-20%,-10%)

      &:hover .img-caption
        top: 0
        left: 0
        .btn-trans
          background: rgba(255,255,255,0.4)

    .img-caption
      position: absolute
      width: 100%
      height: 100%
      background: rgba(0, 0, 0, 0.3)
      text-align: center

      .table
        display: table
        .table-cell
          display: table-cell
          vertical-align: bottom
          border: none

@media screen and (max-width: 992px)
  .thumbnail
    width: 22%
    padding-bottom: 22%
    margin: 1.5%

    .img-container:hover .img-caption
      top: 0
      left: 0

    .img-caption
      position: absolute
      width: 100%
      height: 100%
      background: rgba(0, 0, 0, .7)
      text-align: center
      a
        color: $yellow

@media screen and (max-width: 720px)
  .thumbnail
    width: 29%
    padding-bottom: 29%
    margin: 2.16%

@media screen and (max-width: 470px)
  .thumbnail
    width: 44%
    padding-bottom: 44%
    margin: 3%
374
src/templates/about.pug
Normal file
@@ -0,0 +1,374 @@
| {% extends 'layout.html' %}
| {% block page_title %}Welcome{% endblock %}

| {% block css %}
| {{ super() }}
style.
  .page-card-side {
    padding: 60px 10px !important;
  }
| {% endblock css %}

| {% block body %}
#page-container
  #page-header(style="background-image: url({{ url_for('static', filename='assets/img/backgrounds/pattern_01.jpg')}})")
    .page-title(style='text-align: left')
      em ABOUT
      i.pi-blender-cloud-logo
    .page-title-summary
      | Blender Cloud means inspiration, knowledge, and tools in one place.
      br
      | Started in 2014, it has been pushing the meaning of recurring crowdfunding ever since.
      br
      | By subscribing to Blender Cloud you support the creation of open content,
      br
      | the development of high-end production tools like Blender, and get access to a
      br
      | unique set of learning and creative resources.
  #page-content

    section.team
      .container
        h2.
          Meet a restless team of artists and developers <br/>
          that wants to share their work with you.

        .people-container
          .people-intro
            h3 Blender Institute
            span Amsterdam, The Netherlands

          .row
            .col-md-3
              a.face(
                href="https://twitter.com/tonroosendaal",
                data-blenderhead='ton')
                img(alt="Ton", src="{{ url_for('static', filename='assets/img/people/ton.jpg')}}")
              .bio
                h3 Ton Roosendaal
                small CEO Blender Foundation. Producer Blender Institute
                span The Netherlands
            .col-md-3
              a.face(
                href="https://twitter.com/fsiddi",
                data-blenderhead='francesco')
                img(alt="Francesco", src="{{ url_for('static', filename='assets/img/people/francesco.jpg')}}")
              .bio
                h3 Francesco Siddi
                small Pipeline Tools & Back-end Web Development
                span Italy
            .col-md-3
              a.face(
                href="https://twitter.com/hjalti",
                data-blenderhead='hjalti')
                img(alt="Hjalti", src="{{ url_for('static', filename='assets/img/people/hjalti.jpg')}}")
              .bio
                h3 Hjalti Hjálmarsson
                small Director. Animation. Layout.
                span Iceland
            .col-md-3
              a.face(
                href="https://twitter.com/PabloVazquez_",
                data-blenderhead='pablo')
                img(alt="Pablo", src="{{ url_for('static', filename='assets/img/people/pablo.jpg')}}")
              .bio
                h3 Pablo Vázquez
                small Lighting, Rendering. Front-end Web Development
                span Argentina
          .row
            .col-md-3
              a.face(
                href="https://twitter.com/artificial3d",
                data-blenderhead='andy')
                img(alt="Andy", src="{{ url_for('static', filename='assets/img/people/andy.jpg')}}")
              .bio
                h3 Andy Goralczyk
                small Shading, Lighting, Rendering, FX
                span Germany
            .col-md-3
              a.face(
                href="https://developer.blender.org/p/sergey/",
                data-blenderhead='sergey')
                img(alt="Sergey", src="{{ url_for('static', filename='assets/img/people/sergey.jpg')}}")
              .bio
                h3 Sergey Sharybin
                small Blender & Cycles Core Developer
                span Russia
            .col-md-3
              a.face(
                href="https://twitter.com/sastuvel",
                data-blenderhead='sybren')
                img(alt="Sybren", src="{{ url_for('static', filename='assets/img/people/sybren.jpg')}}")
              .bio
                h3 Sybren Stüvel
                small Blender Cloud Developer
                span The Netherlands
            .col-md-3
              a.face(
                href="https://twitter.com/dfelinto",
                data-blenderhead='dalai')
                img(alt="Dalai", src="{{ url_for('static', filename='assets/img/people/dalai.jpg')}}")
              .bio
                h3 Dalai Felinto
                small Blender Developer
                span Brazil

        .people-container.online
          .people-intro
            h3 Online Collaborators
            span Contributing to Blender Cloud from all over the globe.
          .row
            .col-md-3
              a.face(
                href="https://twitter.com/davidrevoy",
                data-blenderhead='david')
                img(alt="David", src="{{ url_for('static', filename='assets/img/people/david.jpg')}}")
              .bio
                h3 David Revoy
                small Illustrator & Concept Artist
                span France
            .col-md-3
              a.face(
                href="https://twitter.com/s_koenig",
                data-blenderhead='sebastian')
                img(alt="Sebastian", src="{{ url_for('static', filename='assets/img/people/sebastian.jpg')}}")
              .bio
                h3 Sebastian König
                small VFX
                span Germany
            .col-md-3
              a.face(
                href="https://twitter.com/gleb_alexandrov",
                data-blenderhead='gleb')
                img(alt="Gleb", src="{{ url_for('static', filename='assets/img/people/gleb.jpg')}}")
              .bio
                h3 Gleb Alexandrov
                small Lighting & Shading
                span Belarus
            .col-md-3
              a.face(
                href="https://twitter.com/the_mantissa",
                data-blenderhead='midge')
                img(alt="Midge", src="{{ url_for('static', filename='assets/img/people/midge.jpg')}}")
              .bio
                h3 Midge Sinnaeve
                small Motion Graphics
                span Belgium

          .row
            .col-md-3
              a.face(
                href="https://twitter.com/jpbouza",
                data-blenderhead='jpbouza')
                img(alt="Juan Pablo", src="{{ url_for('static', filename='assets/img/people/jpbouza.jpg')}}")
              .bio
                h3 Juan Pablo Bouza
                small Rigging
                span Argentina
||||
section.page-card
|
||||
h2 A bit of History
|
||||
section.page-card
|
||||
.page-card-side
|
||||
h2.page-card-title
|
||||
| Launch at SXSW
|
||||
small March 9th, 2014
|
||||
.page-card-summary
|
||||
| First happy cloud video and crowdfunding for Cosmos Laundromat Pilot.
|
||||
.page-card-side
|
||||
a(href='https://gooseberry.blender.org/gooseberry-campaign-launched-we-need-10k-people-to-help/')
|
||||
img.img-responsive(src="{{ url_for('static_cloud', filename='img/2014_03_09_sxsw.jpg') }}", alt="SXSW")
|
||||
section.page-card
|
||||
.page-card-side
|
||||
a(href='https://gooseberry.blender.org/gooseberry-campaign-launched-we-need-10k-people-to-help/')
|
||||
img.img-responsive(src="{{ url_for('static_cloud', filename='img/2014_03_10_cosmos.jpg') }}", alt="Cosmos Laundromat")
|
||||
.page-card-side
|
||||
h2.page-card-title
|
||||
| Gooseberry | Cosmos Laundromat
|
||||
small March 10th, 2015
|
||||
.page-card-summary
|
||||
| Weekly folders with updates for subscribers. Initial development of Attract, which will become the new cloud some months later on.
|
||||
section.page-card
|
||||
.page-card-side
|
||||
h2.page-card-title
|
||||
| Glass Half
|
||||
small October 30th, 2015
|
||||
.page-card-summary
|
||||
| Introducing integrated blogs in Blender Cloud projects. Glass Half is the first project fully developed on the new Blender Cloud. It's also the first and only project to have share its
|
||||
a(href='https://cloud.blender.org/p/glass-half/5627bb22f0e7220061109c9f') animation dailies
|
||||
| ! But the biggest outcome from Glass Half was definitely
|
||||
a(href='https://cloud.blender.org/p/glass-half/569d6044c379cf445461293e') Flexirig
|
||||
| .
|
||||
.page-card-side
|
||||
a(href='https://cloud.blender.org/p/glass-half/blog/glass-half-premiere')
|
||||
img.img-responsive(src="{{ url_for('static_cloud', filename='img/2015_10_30_glass.jpg') }}", alt="Glass Half")
|
||||
section.page-card
|
||||
.page-card-side
|
||||
a(href='https://cloud.blender.org/blog/new-art-gallery-with-gleb-alexandrov')
|
||||
img.img-responsive(src="{{ url_for('static_cloud', filename='img/2015_11_19_art.jpg') }}", alt="Art Gallery")
|
||||
.page-card-side
|
||||
h2.page-card-title
|
||||
| Art Gallery
|
||||
small November 19th, 2015
|
||||
.page-card-summary
|
||||
| Learn by example. Introducing a place for amazing artwork to be shared, along with its blendfiles and breakdowns.
|
||||
section.page-card
|
||||
.page-card-side
|
||||
h2.page-card-title
|
||||
| Blender Institute Podcast
|
||||
small November 24th, 2015
|
||||
.page-card-summary
|
||||
| With so much going on in the Cloud at at the studio. The Blender Institute Podcast was born! Sharing our daily studio work, Blender community news, and interacting with the awesome Blender Cloud subscribers.
|
||||
.page-card-side
|
||||
a(href='https://cloud.blender.org/blog/introducing-blender-institute-podcast')
|
||||
img.img-responsive(src="{{ url_for('static_cloud', filename='img/2015_11_24_bip.jpg') }}", alt="Blender Institute Podcast")
|
||||
section.page-card
|
||||
.page-card-side
|
||||
a(href='https://cloud.blender.org/p/blenrig/blog/welcome-to-the-blenrig-project')
|
||||
img.img-responsive(src="{{ url_for('static_cloud', filename='img/2015_12_01_blenrig.jpg') }}", alt="Blenrig")
|
||||
.page-card-side
|
||||
h2.page-card-title
|
||||
| Blenrig
|
||||
small December 1st, 2015
|
||||
.page-card-summary
|
||||
| The most powerful and versatile rigging framework for Blender, used and tested through Cosmos Laundromat and the Caminandes series, is now part of Blender Cloud!
|
||||
section.page-card
|
||||
.page-card-side
|
||||
h2.page-card-title
|
||||
| Texture Library
|
||||
small December 23rd, 2015
|
||||
.page-card-summary
|
||||
| The biggest source for CC0/Public Domain textures on the interwebs goes live. First as beta, as a quick gift right before Xmas 2015!
|
||||
.page-card-side
|
||||
a(href='https://cloud.blender.org/blog/new-texture-library')
|
||||
img.img-responsive(src="{{ url_for('static_cloud', filename='img/2015_12_23_textures.jpg') }}", alt="Texture Library")
|
||||
section.page-card
|
||||
.page-card-side
|
||||
a(href='https://cloud.blender.org/blog/nraryew-the-character-lib')
|
||||
img.img-responsive(src="{{ url_for('static_cloud', filename='img/2016_01_05_charlib.jpg') }}", alt="Character Library")
|
||||
.page-card-side
|
||||
h2.page-card-title
|
||||
| Character Library
|
||||
small January 5th, 2016
|
||||
.page-card-summary
|
||||
| High-quality, animation-ready characters collection from all the Blender Institute open projects, plus a brand new one: Vincent!
|
||||
section.page-card
|
||||
.page-card-side
|
||||
h2.page-card-title
|
||||
| Caminandes: Llamigos
|
||||
small January 30th, 2016
|
||||
.page-card-summary
|
||||
| The
|
||||
a(href='https://www.youtube.com/watch?v=SkVqJ1SGeL0') third episode
|
||||
| of the Caminandes series was completely done -and sponsored! through Blender Cloud. It's also the only project til date to have
|
||||
a(href='https://www.youtube.com/watch?v=kQH897V9bDg&list=PLI2TkLMzCSr_H6ppmzDtU0ut0RwxGvXjv') nicely edited Weekly video reports
|
||||
| .
|
||||
.page-card-side
|
||||
a(href='https://cloud.blender.org/p/caminandes-3/blog/caminandes-llamigos')
|
||||
img.img-responsive(src="{{ url_for('static_cloud', filename='img/2016_01_30_llamigos.jpg') }}", alt="Caminandes: Llamigos")
|
||||
section.page-card
|
||||
.page-card-side
|
||||
a(href='https://cloud.blender.org/blog/welcome-sybren')
|
||||
img.img-responsive(src="{{ url_for('static_cloud', filename='img/2016_03_01_sybren.jpg') }}", alt="Dr. Sybren!")
|
||||
.page-card-side
|
||||
h2.page-card-title
|
||||
| Sybren
|
||||
small March 1st, 2016
|
||||
.page-card-summary
|
||||
| Dr. Sybren Stüvel starts working at the Blender Institute!
|
||||
section.page-card
|
||||
.page-card-side
|
||||
h2.page-card-title
|
||||
| Private Projects
|
||||
small May 3rd, 2016
|
||||
.page-card-summary
|
||||
| Create your own private projects on Blender Cloud.
|
||||
.page-card-side
|
||||
a(href='https://cloud.blender.org/blog/welcome-sybren')
|
||||
img.img-responsive(src="{{ url_for('static_cloud', filename='img/2016_05_03_projects.jpg') }}", alt="Projects")
|
||||
section.page-card
|
||||
.page-card-side
|
||||
a(href='https://cloud.blender.org/blog/introducing-project-sharing')
|
      img.img-responsive(src="{{ url_for('static_cloud', filename='img/2016_05_09_projectsharing.jpg') }}", alt="Sharing")
  .page-card-side
    h2.page-card-title
      | Project Sharing
    small May 9th, 2016
    .page-card-summary
      | Team work! Share your projects with other Blender Cloud subscribers.

section.page-card
  .page-card-side
    h2.page-card-title
      | Blender Cloud add-on with Texture Library
    small May 11th, 2016
    .page-card-summary
      | Browse the textures from within Blender!
  .page-card-side
    a(href='https://cloud.blender.org/blog/introducing-project-sharing')
      img.img-responsive(src="{{ url_for('static_cloud', filename='img/2016_05_11_addon.jpg') }}", alt="Blender Cloud Add-on")

section.page-card
  .page-card-side
    a(href='https://cloud.blender.org/blog/introducing-private-texture-libraries')
      img.img-responsive(src="{{ url_for('static_cloud', filename='img/2016_05_23_privtextures.jpg') }}", alt="Texture Libraries")
  .page-card-side
    h2.page-card-title
      | Private Texture Libraries
    small May 23rd, 2016
    .page-card-summary
      | Create your own private texture library and browse it in Blender with our add-on.

section.page-card
  .page-card-side
    h2.page-card-title
      | Blender Sync
    small June 30th, 2016
    .page-card-summary
      | Sync your Blender preferences across multiple devices.
  .page-card-side
    a(href='https://cloud.blender.org/blog/introducing-blender-sync')
      img.img-responsive(src="{{ url_for('static_cloud', filename='img/2016_06_30_sync.jpg') }}", alt="Blender Sync")

section.page-card
  .page-card-side
    a(href='https://cloud.blender.org/blog/introducing-image-sharing')
      img.img-responsive(src="{{ url_for('static_cloud', filename='img/2016_07_14_image.jpg') }}", alt="Image Sharing")
  .page-card-side
    h2.page-card-title
      | Image Sharing
    small July 14th, 2016
    .page-card-summary
      | Quickly share renders and Blender screenshots within Blender with our add-on.

section.page-card
  .page-card-side
    h2.page-card-title
      a(href='https://cloud.blender.org/blog/introducing-the-hdri-library')
        | HDRI Library
    small July 27th, 2016
    .page-card-summary
      | High-dynamic range images are now available on Blender Cloud, with their own special viewer. Also available via the Blender Cloud add-on.
  .page-card-side
    a(href='https://cloud.blender.org/blog/introducing-the-hdri-library')
      img.img-responsive(src="{{ url_for('static_cloud', filename='img/2016_07_27_hdri.jpg') }}", alt="HDRI Library")
section.page-card
  .page-card-side
    a(href='https://cloud.blender.org/blog/new-training-toon-character-workflow')
      img.img-responsive(src="{{ url_for('static_cloud', filename='img/2016_12_06_toon.jpg') }}", alt="Toon Character Workflow")
  .page-card-side
    h2.page-card-title
      a(href='https://cloud.blender.org/blog/new-training-toon-character-workflow')
        | Toon Character Workflow
    small December 6th, 2016
    .page-card-summary
      | YouTube star Dillon Gu joins Blender Cloud for a new tutorial series that will guide you from the basics to a finished toon-shaded character.
section.page-card
  .page-card-side
    h2.page-card-title
      | Agent 327 - Barbershop
    .page-card-summary
      p
        | Follow the ongoing progress of the Barbershop fight scene, an animation test for the Agent 327 project. By subscribing to Blender Cloud, you get access
        | to all resources and training produced so far!
      a.page-card-cta(href='https://store.blender.org/product/membership/') Subscribe
  .page-card-side
    a(href='https://cloud.blender.org/p/agent-327')
      img.img-responsive(src="{{ url_for('static_cloud', filename='img/2017_03_10_agent.jpg') }}", alt="Agent 327")

| {% endblock body %}
|
src/templates/emails/layout.pug (new file, 93 lines)
@@ -0,0 +1,93 @@
doctype html
html
  head
    title {% block title %}{{ subject }}{% endblock %}
    style.
      @import url('https://fonts.googleapis.com/css?family=Roboto');

      html, body {
        font-family: 'Roboto', 'Noto Sans', sans-serif;
        font-size: 11pt;
        background-color: #eaebec;
        color: black;
      }

      section {
        max-width: 522px;
        /*width: 522px;*/
        margin: 0 auto 15px auto;
        padding: 10px 25px;
        box-shadow: rgba(0, 0, 0, 0.298039) 0px 1px 4px -1px;
        background-color: white;
      }

      a:link {
        color: #2e99b8;
      }

      a:link, a:visited {
        text-decoration: none;
      }

      section.about, p.ps {
        color: #888;
        font-size: smaller;
      }

      section.about a:link, p.ps a:link {
        color: #7297ab;
      }

      h1 {
        text-shadow: 1px 1px 1px rgba(0, 0, 0, 0.5), 0 0 25px rgba(0, 0, 0, 0.6);
        color: white;
        font-size: 30px;
        background: #d5d0cb url('https://cloud.blender.org/static/assets/img/email/background_caminandes_3_03.jpg') no-repeat top center;
        -webkit-background-size: cover;
        -moz-background-size: cover;
        -o-background-size: cover;
        background-size: cover;
        padding: 35px 25px;
        max-width: 522px;
        /*width: 522px;*/
        margin: 15px auto 0 auto;
      }

      h2, h3 {
        font-weight: 300;
        color: #eb5e28;
        position: relative;
      }

      p {
        line-height: 150%;
      }

      p.closing {
        margin-top: 2.5em;
      }

      p.buttons {
        text-align: center;
        margin: 4ex;
      }

      a.button {
        text-align: center;
        max-width: 30ex;
        padding: 5px 30px;
        text-decoration: none;
        border-radius: 3px;
        border: thin solid #2e99b8;
        color: #2e99b8;
        background-color: transparent;
        text-shadow: none;
        transition: color 350ms ease-out, border 150ms ease-in-out, opacity 150ms ease-in-out, background-color 150ms ease-in-out;
      }

      a.button:hover {
        color: white;
        background-color: #2e99b8;
      }
  body
    | {% block body %}{% endblock %}
src/templates/emails/layout.txt (new file, 1 line)
@@ -0,0 +1 @@
{% block body %}{% endblock %}
src/templates/emails/welcome.pug (new file, 53 lines)
@@ -0,0 +1,53 @@
| {% extends "emails/layout.html" %}
| {% block body %}

h1 Welcome to Blender Cloud!
section
  h2 Hi {{ user.full_name or user.email }},

  p.
    Thanks for joining Blender Cloud, the Open Content creation platform! Your subscription helps
    our team to create more Open Projects, training, services and of course to make Blender the best
    CG pipeline in the world. You rock!
  p.buttons
    a.button(href="{{ abs_url('cloud.login', next='/') }}", target='_blank') Explore Now >

  p.
    Here is a quick guide to help you get started with Blender Cloud.

  h2 Discover the Training Content
  p.
    Our high quality training is organised in #[a(href="{{ abs_url('cloud.courses') }}", target='_blank') Courses],
    where experienced trainers teach you specific techniques step by step, and
    #[a(href="{{ abs_url('cloud.workshops') }}", target='_blank') Workshops],
    where you peek over the shoulder of an artist explaining their creative workflow.

  h2 Try our Services
  p.
    Make sure you download the #[a(href="{{ abs_url('cloud.services') }}", target='_blank') Blender Cloud Add-on],
    so you can synchronize your Blender settings across multiple computers with
    #[a(href="https://cloud.blender.org/blog/introducing-blender-sync", target='_blank') Blender Sync],
    access our Texture and HDRI libraries directly within Blender, and much more.

  h2 Follow the Open Projects
  p.
    Follow #[a(href='https://cloud.blender.org/p/hero/') Hero] and
    #[a(href='https://cloud.blender.org/p/spring/') Spring], access exclusive making-of content and
    assets from our current and past
    #[a(href="{{ abs_url('cloud.open_projects') }}", target='_blank') Open Projects].

  h2 We are here for you
  p.
    Do you have any questions about your subscription? Any suggestions on how to improve Blender Cloud?
    Just reply to this message or write us at
    #[a(href='mailto:cloudsupport@blender.org') cloudsupport@blender.org] – on working days we'll
    get back to you within a day.

  p
    | Cheers,
    br
    | Sybren and the Blender Cloud Team

  hr
  small.
    PS: If you do not want to receive other emails from us,
    #[a(href="{{ abs_url('settings.emails') }}") we've got you covered].

| {% endblock %}