1 Commit

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
|  | db4ad6ec40 | WIP instructions for dev installation from scratch | 2017-02-09 23:24:12 +01:00 |
227 changed files with 634 additions and 18581 deletions

14
.gitignore vendored
View File

@@ -3,12 +3,6 @@
.coverage
*.pyc
__pycache__
*.js.map
*.css.map
/cloud/templates/
/cloud/static/assets/
node_modules/
/config_local.py
@@ -18,10 +12,4 @@ node_modules/
/.eggs/
/dump/
/google_app*.json
/docker/2_buildpy/python/
/docker/4_run/wheelhouse/
/docker/4_run/deploy/
/docker/4_run/staging/
/celerybeat-schedule.bak
/celerybeat-schedule.dat
/celerybeat-schedule.dir
/docker/3_run/wheelhouse/

164
README.md
View File

@@ -1,102 +1,43 @@
# Blender Cloud
Welcome to the [Blender Cloud](https://cloud.blender.org/) code repo!
Blender Cloud runs on the [Pillar](https://pillarframework.org/) framework.
## Development setup
Jumpstart Blender Cloud development with this simple guide.
### System setup
Blender Cloud relies on a number of services in order to run. Check out the [Pillar system setup](
https://pillarframework.org/development/system_setup/#step-by-step-setup) to set this up.
Add a `cloud.local` entry (e.g. `127.0.0.1 cloud.local`) to your `/etc/hosts` file; developing against this hostname is a project convention.
### Check out the code
Go to the local development directory and check out the following repositories, next to each other.
## Quick setup
Set up a node with these commands. Note that this script is already outdated...
```
#!/usr/bin/env bash
cd /home/guest/Developer
mkdir -p /data/git
mkdir -p /data/storage
mkdir -p /data/config
mkdir -p /data/certs
sudo apt-get update
sudo apt-get -y install python-pip
pip install docker-compose
cd /data/git
git clone git://git.blender.org/pillar-python-sdk.git
git clone git://git.blender.org/pillar.git
git clone git://git.blender.org/pillar-server.git pillar
git clone git://git.blender.org/attract.git
git clone git://git.blender.org/flamenco.git
git clone git://git.blender.org/pillar-svnman.git
git clone git://git.blender.org/blender-cloud.git
```
### Initial setup and configuration
Create a virtualenv for the project and install the requirements. Dependencies are managed via
[Poetry](https://poetry.eustace.io/). Install it using `pip install -U --user poetry`.
```
cd blender-cloud
pip install --user -U poetry
poetry install
```
NOTE: after a dependency changes its own dependencies (say, a new library is added as a
dependency of Pillar), you need to run `poetry update`. This takes the new dependencies
into account and writes them to the `poetry.lock` file.
Build assets and templates for all Blender Cloud dependencies using Gulp.
```
./gulp all
```
Make a copy of the config_local example, which will be further edited as the application is
configured.
```
cp config_local.example.py config_local.py
```
Set up the database with the initial collections and the admin user.
```
poetry run ./manage.py setup setup_db your_email
```
The command will return the following message:
```
Created project <project_id> for user <user_id>
```
Copy the value of `<project_id>` and assign it to `MAIN_PROJECT_ID` in your `config_local.py`.
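For example, a hedged sketch of the relevant `config_local.py` line (keep the placeholder until `setup_db` prints the real ID):
```
MAIN_PROJECT_ID = '<project_id>'
```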
Run the application:
```
poetry run ./manage.py runserver
```
## Development
When ready to commit, change the remotes to work over SSH. For example:
`git remote set-url origin git@git.blender.org:blender-cloud.git`
For more information, check out [this guide](https://wiki.blender.org/wiki/Tools/Git#Commit_Access).
## Preparing the production branch for deployment
All revisions to deploy to production should be on the `production` branches of all the relevant
repositories.
Make sure you have these aliases in the `[alias]` section of your `~/.gitconfig`:
```
prod = "!git checkout production && git fetch origin production && gitk --all"
ff = "merge --ff-only"
pp = "!git push && if [ -e deploy.sh ]; then ./deploy.sh; fi && git checkout master"
```
The following commands should be executed for each (sub)project; specifically for
the current repository, Pillar, Attract, Flamenco, and Pillar-SVNMan:
```
cd $projectdir
@@ -107,7 +48,7 @@ git stash # if you still have local stuff.
# pull from master, run unittests, push your changes to master.
git pull
poetry run py.test
git push
# Switch to production branch, and investigate the situation.
@@ -117,12 +58,61 @@ git prod
git ff master
# Run tests again
poetry run py.test
# Push the production branch and run the dummy deploy script.
git pp # pp = "Push to Production"
# The above alias waits for [ENTER] until all deploys are done.
# Let it wait, perform the other commands in another terminal.
```
## Deploying to production server
Now follow the above recipe for the Blender Cloud project as well.
Contrary to the subprojects, `git pp` will actually perform the deploy
for real.
See [deploy/README.md](deploy/README.md).
Now you can press `[ENTER]` in the Pillar and Attract terminals that
were still waiting for it.
After everything is done, your (sub)projects should all be back on
the master branch.
## Updating dependencies via Docker images
To update dependencies that need compiling, you need the `2_build` docker
container. To rebuild the lot, run `docker/build.sh`.
Follow these steps to deploy the new container on production:
1. run `docker/build.sh`
2. `docker push armadillica/blender_cloud`
On the production machine:
1. `docker pull armadillica/blender_cloud`
2. `docker-compose up -d` (from the `/data/git/blender-cloud/docker` directory)
## Development setup
Here are some notes on how to get the whole Blender Cloud dev environment set up
from scratch. First off, we agree to develop on bare metal and use Docker only for
production.
Initial requirements:
- Python 2.7
- NPM
- Blender ID server up and running (see blender-id for more)
Here are the steps:
- Clone the fantastic 5 (pillar, pillar-python-sdk, attract, flamenco and
blender-cloud)
- Run `npm install` in pillar, attract and flamenco
- Create `config_local.py` in blender-cloud
- `pip install -e .` in every repo except blender-cloud
- `pip install -r requirements.txt` in blender-cloud
- `setup_db`
- `create_urler_account`

View File

@@ -3,19 +3,13 @@
from pillar import PillarServer
from attract import AttractExtension
from flamenco import FlamencoExtension
from svnman import SVNManExtension
from cloud import CloudExtension
attract = AttractExtension()
flamenco = FlamencoExtension()
svnman = SVNManExtension()
cloud = CloudExtension()
app = PillarServer('.')
app.load_extension(attract, '/attract')
app.load_extension(flamenco, '/flamenco')
app.load_extension(svnman, '/svn')
app.load_extension(cloud, None)
app.process_extensions()
if __name__ == '__main__':

View File

@@ -1,155 +0,0 @@
import logging
import flask
from werkzeug.local import LocalProxy
import pillarsdk
import pillar.auth
from pillar.api.utils import authorization
from pillar.extension import PillarExtension
EXTENSION_NAME = 'cloud'
class CloudExtension(PillarExtension):
has_context_processor = True
user_roles = {'subscriber-pro', 'has_subscription'}
user_roles_indexable = {'subscriber-pro', 'has_subscription'}
user_caps = {
'has_subscription': {'can-renew-subscription'},
}
def __init__(self):
self._log = logging.getLogger('%s.CloudExtension' % __name__)
@property
def name(self):
return EXTENSION_NAME
def flask_config(self):
"""Returns extension-specific defaults for the Flask configuration.
Use this to set sensible default values for configuration settings
introduced by the extension.
:rtype: dict
"""
# Just so that it registers the management commands.
from . import cli
return {
'EXTERNAL_SUBSCRIPTIONS_MANAGEMENT_SERVER': 'https://store.blender.org/api/',
'EXTERNAL_SUBSCRIPTIONS_TIMEOUT_SECS': 10,
'BLENDER_ID_WEBHOOK_USER_CHANGED_SECRET': 'oos9wah1Zoa0Yau6ahThohleiChephoi',
'NODE_TAGS': ['animation', 'modeling', 'rigging', 'sculpting', 'shading', 'texturing', 'lighting',
'character-pipeline', 'effects', 'video-editing', 'digital-painting', 'production-design',
'walk-through'],
}
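# NOTE: these are only defaults; a deployment can override them in its own
# config. Hedged sketch of a config_local.py override (values hypothetical):
#   EXTERNAL_SUBSCRIPTIONS_MANAGEMENT_SERVER = 'https://store.example.org/api/'
#   EXTERNAL_SUBSCRIPTIONS_TIMEOUT_SECS = 5
#   BLENDER_ID_WEBHOOK_USER_CHANGED_SECRET = 'replace-this-secret'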
def eve_settings(self):
"""Returns extensions to the Eve settings.
Currently only the DOMAIN key is used to insert new resources into
Eve's configuration.
:rtype: dict
"""
return {}
def blueprints(self):
"""Returns the list of top-level blueprints for the extension.
These blueprints will be mounted at the url prefix given to
app.load_extension().
:rtype: list of flask.Blueprint objects.
"""
from . import routes
import cloud.stats.routes
return [
routes.blueprint,
cloud.stats.routes.blueprint,
]
@property
def template_path(self):
import os.path
return os.path.join(os.path.dirname(__file__), 'templates')
@property
def static_path(self):
import os.path
return os.path.join(os.path.dirname(__file__), 'static')
def context_processor(self):
return {
'current_user_is_subscriber': authorization.user_has_cap('subscriber')
}
def is_cloud_project(self, project):
"""Returns whether the project is set up for Blender Cloud.
Requires the presence of the 'cloud' key in extension_props
"""
if project.extension_props is None:
# There are no extension_props on this project
return False
try:
pprops = project.extension_props[EXTENSION_NAME]
except AttributeError:
self._log.warning("is_cloud_project: Project url=%r doesn't have any "
"extension properties.", project['url'])
if self._log.isEnabledFor(logging.DEBUG):
import pprint
self._log.debug('Project: %s', pprint.pformat(project.to_dict()))
return False
except KeyError:
# Not set up for Blender Cloud
return False
if pprops is None:
self._log.debug("is_cloud_project: Project url=%r doesn't have Blender Cloud"
" extension properties.", project['url'])
return False
return True
@property
def has_project_settings(self) -> bool:
# Only available for admins
return pillar.auth.current_user.has_cap('admin')
def project_settings(self, project: pillarsdk.Project, **template_args: dict) -> flask.Response:
"""Renders the project settings page for this extension.
Set YourExtension.has_project_settings = True and Pillar will call this function.
:param project: the project for which to render the settings.
:param template_args: additional template arguments.
:returns: a Flask HTTP response
"""
from cloud.routes import project_settings
return project_settings(project, **template_args)
def setup_app(self, app):
from . import routes, webhooks, eve_hooks, email
routes.setup_app(app)
app.register_api_blueprint(webhooks.blueprint, '/webhooks')
eve_hooks.setup_app(app)
email.setup_app(app)
def _get_current_cloud():
"""Returns the Cloud extension of the current application."""
return flask.current_app.pillar_extensions[EXTENSION_NAME]
current_cloud = LocalProxy(_get_current_cloud)
"""Cloud extension of the current app."""

View File

@@ -1,139 +0,0 @@
#!/usr/bin/env python
import logging
from urllib.parse import urljoin
from flask import current_app
from flask_script import Manager
import requests
from pillar.cli import manager
from pillar.api import service
from pillar.api.utils import authentication
import cloud.setup
log = logging.getLogger(__name__)
manager_cloud = Manager(
current_app, usage="Blender Cloud scripts")
@manager_cloud.command
def create_groups():
"""Creates the admin/demo/subscriber groups."""
import pprint
group_ids = {}
groups_coll = current_app.db('groups')
for group_name in ['admin', 'demo', 'subscriber']:
if groups_coll.find({'name': group_name}).count():
log.info('Group %s already exists, skipping', group_name)
continue
result = groups_coll.insert_one({'name': group_name})
group_ids[group_name] = result.inserted_id
service.fetch_role_to_group_id_map()
log.info('Created groups:\n%s', pprint.pformat(group_ids))
@manager_cloud.command
def reconcile_subscribers():
"""For every user, check their subscription status with the store."""
import threading
import concurrent.futures
from pillar.auth import UserClass
from pillar.api import blender_id
from pillar.api.blender_cloud.subscription import do_update_subscription
sessions = threading.local()
service.fetch_role_to_group_id_map()
users_coll = current_app.data.driver.db['users']
found = users_coll.find({'auth.provider': 'blender-id'})
count_users = found.count()
count_skipped = count_processed = 0
log.info('Processing %i users', count_users)
lock = threading.Lock()
real_current_app = current_app._get_current_object()
api_token = real_current_app.config['BLENDER_ID_USER_INFO_TOKEN']
api_url = real_current_app.config['BLENDER_ID_USER_INFO_API']
def do_user(idx, user):
nonlocal count_skipped, count_processed
log.info('Processing %i/%i %s', idx + 1, count_users, user['email'])
# Get the Requests session for this thread.
try:
sess = sessions.session
except AttributeError:
sess = sessions.session = requests.Session()
# Get the info from Blender ID
bid_user_id = blender_id.get_user_blenderid(user)
if not bid_user_id:
with lock:
count_skipped += 1
return
url = urljoin(api_url, bid_user_id)
resp = sess.get(url, headers={'Authorization': f'Bearer {api_token}'})
if resp.status_code == 404:
log.info('User %s with Blender ID %s not found, skipping', user['email'], bid_user_id)
with lock:
count_skipped += 1
return
if resp.status_code != 200:
log.error('Unable to reach Blender ID (code %d), aborting', resp.status_code)
with lock:
count_skipped += 1
return
bid_user = resp.json()
if not bid_user:
log.error('Unable to parse response for user %s, aborting', user['email'])
with lock:
count_skipped += 1
return
# Actually update the user, and do it thread-safe just to be sure.
with real_current_app.app_context():
local_user = UserClass.construct('', user)
with lock:
do_update_subscription(local_user, bid_user)
count_processed += 1
with concurrent.futures.ThreadPoolExecutor(max_workers=10) as executor:
future_to_user = {executor.submit(do_user, idx, user): user
for idx, user in enumerate(found)}
for future in concurrent.futures.as_completed(future_to_user):
user = future_to_user[future]
try:
future.result()
except Exception as ex:
log.exception('Error updating user %s', user)
log.info('Done reconciling %d subscribers', count_users)
log.info(' processed: %d', count_processed)
log.info(' skipped : %d', count_skipped)
@manager_cloud.command
def setup_for_film(project_url):
"""Adds Blender Cloud film custom properties to a project."""
authentication.force_cli_user()
cloud.setup.setup_for_film(project_url)
manager.add_command("cloud", manager_cloud)

View File

@@ -1,33 +0,0 @@
import functools
import logging
import flask
from pillar.auth import UserClass
log = logging.getLogger(__name__)
def queue_welcome_mail(user: UserClass):
"""Queue a welcome email for execution by Celery."""
assert user.email
log.info('queueing welcome email to %s', user.email)
subject = 'Welcome to Blender Cloud'
render = functools.partial(flask.render_template, subject=subject, user=user)
text = render('emails/welcome.txt')
html = render('emails/welcome.html')
from pillar.celery import email_tasks
email_tasks.send_email.delay(user.full_name, user.email, subject, text, html)
def user_subscription_changed(user: UserClass, *, grant_roles: set, revoke_roles: set):
if user.has_cap('subscriber') and 'has_subscription' in grant_roles:
log.info('user %s just got a new subscription', user.email)
queue_welcome_mail(user)
def setup_app(app):
from pillar.api.blender_cloud import subscription
subscription.user_subscription_updated.connect(user_subscription_changed)
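For context, a hedged sketch of how the connected signal fires; this assumes `user_subscription_updated` is a blinker signal sent with the user as sender, as the `.connect()` call above implies:
```
from pillar.api.blender_cloud import subscription
from pillar.auth import UserClass

def notify_roles_changed(user: UserClass):
    # Fires the signal the way Pillar would after a role change;
    # user_subscription_changed() receives it and queues the welcome mail
    # when the user also has the 'subscriber' capability.
    subscription.user_subscription_updated.send(
        user,
        grant_roles={'has_subscription'},
        revoke_roles=set(),
    )
```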

View File

@@ -1,39 +0,0 @@
import logging
import typing
from pillar.auth import UserClass
from . import email
log = logging.getLogger(__name__)
def welcome_new_user(user_doc: dict):
"""Sends a welcome email to a new user."""
user_email = user_doc.get('email')
if not user_email:
log.warning('user %s has no email address', user_doc.get('_id', '-no-id-'))
return
# Only send mail to new users when they actually are subscribers.
user = UserClass.construct('', user_doc)
if not (user.has_cap('subscriber') or user.has_cap('can-renew-subscription')):
log.debug('user %s is new, but not a subscriber, so no email for them.', user_email)
return
email.queue_welcome_mail(user)
def welcome_new_users(user_docs: typing.List[dict]):
"""Sends a welcome email to new users."""
for user_doc in user_docs:
try:
welcome_new_user(user_doc)
except Exception:
log.exception('error sending welcome mail to user %s', user_doc)
def setup_app(app):
app.on_inserted_users += welcome_new_users

View File

@@ -1,15 +0,0 @@
from flask_wtf import FlaskForm
from wtforms import BooleanField, StringField
from wtforms.fields.html5 import URLField
from wtforms.validators import URL
from pillar.web.utils.forms import FileSelectField
class FilmProjectForm(FlaskForm):
video_url = URLField(validators=[URL()])
poster = FileSelectField('Poster Image', file_format='image')
logo = FileSelectField('Logo', file_format='image')
is_in_production = BooleanField('In Production')
is_featured = BooleanField('Featured')
theme_color = StringField('Theme Color')

View File

@@ -1,658 +0,0 @@
import functools
import json
import logging
import typing
import bson
from flask_login import login_required
import flask
import werkzeug.exceptions as wz_exceptions
from flask import Blueprint, render_template, redirect, session, url_for, abort, flash, request
from pillarsdk import Node, Project, User, exceptions as sdk_exceptions, Group
from pillarsdk.exceptions import ResourceNotFound
import pillar
import pillarsdk
from pillar import current_app
from pillar.api.utils import authorization
from pillar.auth import current_user
from pillar.web.users import forms
from pillar.web.utils import system_util, get_file, current_user_is_authenticated
from pillar.web.utils import attach_project_pictures
from pillar.web.settings import blueprint as blueprint_settings
from pillar.web.nodes.routes import url_for_node
from pillar.web.projects.routes import render_project
from pillar.web.projects.routes import find_project_or_404
from pillar.web.projects.routes import project_view
from pillar.web.projects.routes import project_navigation_links
from cloud import current_cloud
from cloud.forms import FilmProjectForm
from . import EXTENSION_NAME
blueprint = Blueprint('cloud', __name__)
log = logging.getLogger(__name__)
@blueprint.route('/')
def homepage():
if current_user.is_anonymous:
return redirect(url_for('cloud.welcome'))
return render_template(
'homepage.html',
api=system_util.pillar_api(),
**_homepage_context(),
)
def _homepage_context() -> dict:
"""Returns homepage template context variables."""
# Get latest blog posts
api = system_util.pillar_api()
# Get latest comments to any node
latest_comments = Node.latest('comments', api=api)
# Get a list of random featured assets
random_featured = get_random_featured_nodes()
# Parse results for replies
to_remove = []
@functools.lru_cache()
def _find_parent(parent_node_id) -> Node:
return Node.find(parent_node_id,
{'projection': {
'_id': 1,
'name': 1,
'node_type': 1,
'project': 1,
'parent': 1,
'properties.url': 1,
}},
api=api)
for idx, comment in enumerate(latest_comments._items):
if comment.properties.is_reply:
try:
comment.attached_to = _find_parent(comment.parent.parent)
except ResourceNotFound:
# Remove this comment
to_remove.append(idx)
else:
comment.attached_to = comment.parent
for idx in reversed(to_remove):
del latest_comments._items[idx]
for comment in latest_comments._items:
if not comment.attached_to:
continue
comment.attached_to.url = url_for_node(node=comment.attached_to)
comment.url = url_for_node(node=comment)
main_project = Project.find(current_app.config['MAIN_PROJECT_ID'], api=api)
main_project.picture_header = get_file(main_project.picture_header, api=api)
return dict(
main_project=main_project,
latest_comments=latest_comments._items,
random_featured=random_featured)
@blueprint.route('/design-system')
def design_system():
"""Display the design system page.
This endpoint is intended for development only, and returns a
rendered template only if the app is running in debug mode.
"""
if not current_app.config['DEBUG']:
abort(404)
return render_template('design_system.html')
@blueprint.route('/login')
def login():
from flask import request
if request.args.get('force'):
log.debug('Forcing logout of user before rendering login page.')
pillar.auth.logout_user()
next_after_login = request.args.get('next')
if not next_after_login:
next_after_login = request.referrer
session['next_after_login'] = next_after_login
return redirect(url_for('users.oauth_authorize', provider='blender-id'))
@blueprint.route('/welcome')
def welcome():
# Workaround to cache rendering of a page if user not logged in
@current_app.cache.cached(timeout=3600, unless=current_user_is_authenticated)
def render_page():
return render_template('welcome.html')
return render_page()
@blueprint.route('/about')
def about():
return render_template('about.html')
@blueprint.route('/services')
def services():
return render_template('services.html')
@blueprint.route('/learn')
def learn():
return render_template('learn.html')
@blueprint.route('/libraries')
def libraries():
return render_template('libraries.html')
@blueprint.route('/stats')
def stats():
return render_template('stats.html')
@blueprint.route('/join')
def join():
"""Join page"""
return redirect('https://store.blender.org/product/membership/')
@blueprint.route('/renew')
def renew_subscription():
return render_template('renew_subscription.html')
def get_projects(category):
"""Utility to get projects based on category. Should be moved on the API
and improved with more extensive filtering capabilities.
"""
api = system_util.pillar_api()
projects = Project.all({
'where': {
'category': category,
'is_private': False},
'sort': '-_created',
}, api=api)
for project in projects._items:
attach_project_pictures(project, api)
return projects
@blueprint.route('/courses')
def courses():
@current_app.cache.cached(timeout=3600, unless=current_user_is_authenticated)
def render_page():
projects = get_projects('course')
return render_template(
'projects_index_collection.html',
title='courses',
projects=projects._items,
api=system_util.pillar_api())
return render_page()
@blueprint.route('/open-projects')
def open_projects():
@current_app.cache.cached(timeout=3600, unless=current_user_is_authenticated)
def render_page():
api = system_util.pillar_api()
projects = Project.all({
'where': {
'category': 'film',
'is_private': False
},
'sort': '-_created',
}, api=api)
for project in projects._items:
# Attach poster file (ensure the extension_props.cloud.poster
# attributes exists)
try:
# If the attribute exists, but is None, continue
if not project['extension_props'][EXTENSION_NAME]['poster']:
continue
# Fetch the file and embed it in the document
project.extension_props.cloud.poster = get_file(
project.extension_props.cloud.poster, api=api)
# Add convenience attribute that specifies the presence of the
# poster file
project.has_poster = True
# If there was a key error because one of the nested attributes is
# missing,
except KeyError:
continue
return render_template(
'films.html',
title='films',
projects=projects._items,
api=system_util.pillar_api())
return render_page()
@blueprint.route('/workshops')
def workshops():
@current_app.cache.cached(timeout=3600, unless=current_user_is_authenticated)
def render_page():
projects = get_projects('workshop')
return render_template(
'projects_index_collection.html',
title='workshops',
projects=projects._items,
api=system_util.pillar_api())
return render_page()
def get_random_featured_nodes() -> typing.List[dict]:
"""Returns a list of project/node combinations for featured nodes.
A random subset of 3 featured nodes from all public projects is returned.
Assumes that the user actually has access to the public projects' nodes.
The dict is a node, with a 'project' key that contains a projected project.
"""
proj_coll = current_app.db('projects')
featured_nodes = proj_coll.aggregate([
{'$match': {'is_private': False}},
{'$project': {'nodes_featured': True,
'url': True,
'name': True,
'summary': True,
'picture_square': True}},
{'$unwind': {'path': '$nodes_featured'}},
{'$sample': {'size': 6}},
{'$lookup': {'from': 'nodes',
'localField': 'nodes_featured',
'foreignField': '_id',
'as': 'node'}},
{'$unwind': {'path': '$node'}},
{'$match': {'node._deleted': {'$ne': True}}},
{'$project': {'url': True,
'name': True,
'summary': True,
'picture_square': True,
'node._id': True,
'node.name': True,
'node.permissions': True,
'node.picture': True,
'node.properties.content_type': True,
'node.properties.duration_seconds': True,
'node.properties.url': True,
'node._created': True,
}
},
])
featured_node_documents = []
api = system_util.pillar_api()
for node_info in featured_nodes:
# Turn the project-with-node doc into a node-with-project doc.
node_document = node_info.pop('node')
node_document['project'] = node_info
node_document['_id'] = str(node_document['_id'])
node = Node(node_document)
node.picture = get_file(node.picture, api=api)
node.project.picture_square = get_file(node.project.picture_square, api=api)
featured_node_documents.append(node)
return featured_node_documents
@blueprint_settings.route('/emails', methods=['GET', 'POST'])
@login_required
def emails():
"""Main email settings.
"""
if current_user.has_role('protected'):
return abort(404) # TODO: make this 403, handle template properly
api = system_util.pillar_api()
user = User.find(current_user.objectid, api=api)
# Force creation of settings for the user (safely remove this code once
# implemented on account creation level, and after adding settings to all
# existing users)
if not user.settings:
user.settings = dict(email_communications=1)
user.update(api=api)
if user.settings.email_communications is None:
user.settings.email_communications = 1
user.update(api=api)
# Generate form
form = forms.UserSettingsEmailsForm(
email_communications=user.settings.email_communications)
if form.validate_on_submit():
try:
user.settings.email_communications = form.email_communications.data
user.update(api=api)
flash("Profile updated", 'success')
except sdk_exceptions.ResourceInvalid as e:
message = json.loads(e.content)
flash(message)
return render_template('users/settings/emails.html', form=form, title='emails')
@blueprint_settings.route('/billing')
@login_required
def billing():
"""View the subscription status of a user
"""
from . import store
log.debug('START OF REQUEST')
if current_user.has_role('protected'):
return abort(404) # TODO: make this 403, handle template properly
expiration_date = 'No subscription to expire'
# Classify the user based on their roles and capabilities
cap_subs = current_user.has_cap('subscriber')
if current_user.has_role('demo'):
user_cls = 'demo'
elif not cap_subs and current_user.has_cap('can-renew-subscription'):
# This user has an inactive but renewable subscription.
user_cls = 'subscriber-expired'
elif cap_subs:
if current_user.has_role('subscriber'):
# This user pays for their own subscription. Only in this case do we need to fetch
# the expiration date from the Store.
user_cls = 'subscriber'
store_user = store.fetch_subscription_info(current_user.email)
if store_user is None:
expiration_date = 'Unable to reach Blender Store to check'
else:
expiration_date = store_user['expiration_date'][:10]
elif current_user.has_role('org-subscriber'):
# An organisation pays for this subscription.
user_cls = 'subscriber-org'
else:
# This user gets the subscription cap from somewhere else (like an organisation).
user_cls = 'subscriber-other'
else:
user_cls = 'outsider'
return render_template(
'users/settings/billing.html',
user_cls=user_cls,
expiration_date=expiration_date,
title='billing')
@blueprint.route('/terms-and-conditions')
def terms_and_conditions():
return render_template('terms_and_conditions.html')
@blueprint.route('/privacy')
def privacy():
return render_template('privacy.html')
@blueprint.route('/production')
def production():
return render_template(
'production.html',
title='production')
@blueprint.route('/emails/welcome.send')
@login_required
def emails_welcome_send():
from cloud import email
email.queue_welcome_mail(current_user)
return f'queued mail to {current_user.email}'
@blueprint.route('/emails/welcome.html')
@login_required
def emails_welcome_html():
return render_template('emails/welcome.html',
subject='Welcome to Blender Cloud',
user=current_user)
@blueprint.route('/emails/welcome.txt')
@login_required
def emails_welcome_txt():
txt = render_template('emails/welcome.txt',
subject='Welcome to Blender Cloud',
user=current_user)
return flask.Response(txt, content_type='text/plain; charset=utf-8')
@blueprint.route('/p/<project_url>')
def project_landing(project_url):
"""Override Pillar project_view endpoint completely.
The first part of the function is identical to the one in Pillar, but the
second part (starting with 'Load custom project properties') extends the
behaviour to support film project landing pages.
"""
template_name = None
if request.args.get('format') == 'jstree':
log.warning('projects.view(%r) endpoint called with format=jstree, '
'redirecting to proper endpoint. URL is %s; referrer is %s',
project_url, request.url, request.referrer)
return redirect(url_for('projects.jstree', project_url=project_url))
api = system_util.pillar_api()
project = find_project_or_404(project_url,
embedded={'header_node': 1},
api=api)
# Load the header video file, if there is any.
header_video_file = None
header_video_node = None
if project.header_node and project.header_node.node_type == 'asset' and \
project.header_node.properties.content_type == 'video':
header_video_node = project.header_node
header_video_file = get_file(project.header_node.properties.file)
header_video_node.picture = get_file(header_video_node.picture)
extra_context = {
'header_video_file': header_video_file,
'header_video_node': header_video_node}
# Load custom project properties. If the project has a 'cloud' extension prop,
# render it using the projects/landing.html template and try to attach a
# number of additional attributes (pages, images, etc.).
if 'extension_props' in project and EXTENSION_NAME in project['extension_props']:
extension_props = project['extension_props'][EXTENSION_NAME]
extension_props['logo'] = get_file(extension_props['logo'])
pages = Node.all({
'where': {
'project': project._id,
'node_type': 'page',
'_deleted': {'$ne': True}},
'projection': {'name': 1}}, api=api)
extra_context.update({'pages': pages._items})
template_name = 'projects/landing.html'
return render_project(project, api,
extra_context=extra_context,
template_name=template_name)
@blueprint.route('/p/<project_url>/browse')
@project_view()
def project_browse(project: pillarsdk.Project):
"""Project view displaying all top-level nodes.
We render a regular project view, but we introduce an additional template
variable: browse. By doing that we prevent the regular project view
from loading and fetch via AJAX a "group" node-like view instead (see
project_browse_view_nodes).
"""
return render_template(
'projects/view.html',
api=system_util.pillar_api(),
project=project,
node=None,
show_project=True,
browse=True,
og_picture=None,
navigation_links=project_navigation_links(project, system_util.pillar_api()),
extension_sidebar_links=current_app.extension_sidebar_links(project))
@blueprint.route('/p/<project_url>/browse/nodes')
@project_view()
def project_browse_view_nodes(project: pillarsdk.Project):
"""Display top-level nodes for a Project.
This view is always meant to be served embedded, as part of project_browse.
"""
api = system_util.pillar_api()
# Get top level nodes
projection = {
'project': 1,
'name': 1,
'picture': 1,
'node_type': 1,
'properties.order': 1,
'properties.status': 1,
'user': 1,
'properties.content_type': 1,
'permissions.world': 1}
where = {
'project': project['_id'],
'parent': {'$exists': False},
'properties.status': 'published',
'_deleted': {'$ne': True},
'node_type': {'$in': ['group', 'asset']},
}
try:
nodes = Node.all({
'projection': projection,
'where': where,
'sort': [('properties.order', 1), ('name', 1)]}, api=api)
except pillarsdk.exceptions.ForbiddenAccess:
return render_template('errors/403_embed.html')
nodes = nodes._items
for child in nodes:
child.picture = get_file(child.picture, api=api)
return render_template(
'projects/browse_embed.html',
nodes=nodes)
def project_settings(project: pillarsdk.Project, **template_args: dict):
"""Renders the project settings page for Blender Cloud projects.
If the project has been setup for Blender Cloud, check for the cloud.category
property, to render the proper form.
"""
# Based on the project state, we can render a different template.
if not current_cloud.is_cloud_project(project):
return render_template('project_settings/offer_setup.html',
project=project, **template_args)
cloud_props = project['extension_props'][EXTENSION_NAME]
category = cloud_props['category']
if category != 'film':
log.error('No interface available to edit %s projects, yet', category)
form = FilmProjectForm()
# Iterate over the form fields and set the data if exists in the project document
for field_name in form.data:
if field_name not in cloud_props:
continue
# Skip csrf_token field
if field_name == 'csrf_token':
continue
form_field = getattr(form, field_name)
form_field.data = cloud_props[field_name]
return render_template('project_settings/settings.html',
project=project,
form=form,
**template_args)
@blueprint.route('/<project_url>/settings/film', methods=['POST'])
@authorization.require_login(require_cap='admin')
@project_view()
def save_film_settings(project: pillarsdk.Project):
# Ensure that the project is setup for Cloud (see @attract_project_view for example)
form = FilmProjectForm()
if not form.validate_on_submit():
log.debug('Form submission failed')
# Return list of validation errors
updated_extension_props = {}
for field_name in form.data:
# Skip csrf_token field
if field_name == 'csrf_token':
continue
form_field = getattr(form, field_name)
# TODO(fsiddi) if form_field type is FileSelectField, convert it to ObjectId
# Currently this raises TypeError: Object of type 'ObjectId' is not JSON serializable
if form_field.data == '':
form_field.data = None
updated_extension_props[field_name] = form_field.data
# Update extension props and save project
extension_props = project['extension_props'][EXTENSION_NAME]
# Project is a Resource, so we update properties iteratively
for k, v in updated_extension_props.items():
extension_props[k] = v
project.update(api=system_util.pillar_api())
return '', 204
@blueprint.route('/<project_url>/setup-for-film', methods=['POST'])
@login_required
@project_view()
def setup_for_film(project: pillarsdk.Project):
import cloud.setup
project_id = project._id
if not project.has_method('PUT'):
log.warning('User %s tries to set up project %s for Blender Cloud, but has no PUT rights.',
current_user, project_id)
raise wz_exceptions.Forbidden()
log.info('User %s sets up project %s for Blender Cloud', current_user, project_id)
cloud.setup.setup_for_film(project.url)
return '', 204
def setup_app(app):
global _homepage_context
cached = app.cache.cached(timeout=300)
_homepage_context = cached(_homepage_context)
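The `setup_app()` above rebinds the module-global `_homepage_context` to a cached wrapper, so the homepage queries run at most once per five minutes. A minimal sketch of the same pattern (names hypothetical):
```
def expensive_context() -> dict:
    ...  # slow database work goes here

def setup_app(app):
    global expensive_context
    # Swap the module-global for a caching wrapper; existing callers
    # transparently go through the cache from now on.
    expensive_context = app.cache.cached(timeout=300)(expensive_context)
```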

View File

@@ -1,54 +0,0 @@
"""Setting up projects for Blender Cloud."""
import logging
from bson import ObjectId
from eve.methods.put import put_internal
from flask import current_app
from pillar.api.utils import remove_private_keys
from . import EXTENSION_NAME
log = logging.getLogger(__name__)
def setup_for_film(project_url):
"""Add Blender Cloud extension_props specific for film projects.
Returns the updated project.
"""
projects_collection = current_app.data.driver.db['projects']
# Find the project in the database.
project = projects_collection.find_one({'url': project_url})
if not project:
raise RuntimeError('Project %s does not exist.' % project_url)
# Set default extension properties. Be careful not to overwrite any properties that
# are already there.
all_extension_props = project.setdefault('extension_props', {})
cloud_extension_props = {
'category': 'film',
'theme_css': '',
# The accent color (can be 'blue', '#FFBBAA' or 'rgba(1, 1, 1, 1)')
'theme_color': '',
'is_in_production': False,
'video_url': '', # Oembeddable url
'poster': None, # File ObjectId
'logo': None, # File ObjectId
# TODO(fsiddi) when we introduce other setup_for_* in Blender Cloud, make available
# at a higher scope
'is_featured': False,
}
all_extension_props.setdefault(EXTENSION_NAME, cloud_extension_props)
project_id = ObjectId(project['_id'])
project = remove_private_keys(project)
result, _, _, status_code = put_internal('projects', project, _id=project_id)
if status_code != 200:
raise RuntimeError("Can't update project %s, issues: %s", project_id, result)
log.info('Project %s was updated for Blender Cloud.', project_url)

(19 binary image files deleted in this commit; previews not shown.)

View File

@@ -1,89 +0,0 @@
"""Interesting usage metrics"""
from flask import current_app
def count_nodes(query=None) -> int:
pipeline = [
{'$match': {'_deleted': {'$ne': True}}},  # _deleted is a boolean, not the string 'true'
{
'$lookup':
{
'from': "projects",
'localField': "project",
'foreignField': "_id",
'as': "project",
}
},
{
'$unwind':
{
'path': '$project',
}
},
{
'$project':
{
'project.is_private': 1,
}
},
{'$match': {'project.is_private': False}},
{'$count': 'tot'}
]
c = current_app.db()['nodes']
# If we provide a query, we extend the first $match step in the aggregation
# pipeline with the extra parameters (for example node_type).
if query:
pipeline[0]['$match'].update(query)
# Return either a list with one item or an empty list
r = list(c.aggregate(pipeline=pipeline))
count = 0 if not r else r[0]['tot']
return count
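# Hedged usage sketch: the extra query merges into the first $match stage,
# so type and date filters compose with the _deleted check, e.g.:
#   count_nodes({'node_type': 'asset'})
#   count_nodes({'node_type': 'comment',
#                '_created': {'$lt': datetime.datetime(2017, 1, 1)}})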
def count_users(query=None) -> int:
u = current_app.db()['users']
return u.count(query)
def count_blender_sync(query=None) -> int:
pipeline = [
# 0 Find all startup.blend files that are not deleted
{
'$match': {
'_deleted': {'$ne': True},  # boolean, not the string 'true'
'name': 'startup.blend',
}
},
# 1 Group them per project (drops any duplicates)
{'$group': {'_id': '$project'}},
# 2 Join the project info
{
'$lookup':
{
'from': "projects",
'localField': "_id",
'foreignField': "_id",
'as': "project",
}
},
# 3 Unwind the project list (there is always only one project)
{
'$unwind':
{
'path': '$project',
}
},
# 4 Find all home projects
{'$match': {'project.category': 'home'}},
{'$count': 'tot'}
]
c = current_app.db()['nodes']
# If we provide a query, we extend the first $match step in the aggregation
# pipeline with the extra parameters (for example _created).
if query:
pipeline[0]['$match'].update(query)
# Return either a list with one item or an empty list
r = list(c.aggregate(pipeline=pipeline))
count = 0 if not r else r[0]['tot']
return count

View File

@@ -1,57 +0,0 @@
import logging
import datetime
import functools
from flask import Blueprint, jsonify
from cloud.stats import count_nodes, count_users, count_blender_sync
blueprint = Blueprint('cloud.stats', __name__, url_prefix='/s')
log = logging.getLogger(__name__)
@functools.lru_cache()
def get_stats(before: datetime.datetime):
query_comments = {'node_type': 'comment'}
query_assets = {'node_type': 'asset'}
date_query = {}
if before:
date_query = {'_created': {'$lt': before}}
query_comments.update(date_query)
query_assets.update(date_query)
stats = {
'comments': count_nodes(query_comments),
'assets': count_nodes(query_assets),
'users_total': count_users(date_query),
'users_blender_sync': count_blender_sync(date_query),
}
return stats
@blueprint.route('/')
@blueprint.route('/before/<int:before>')
def index(before: int = 0):
"""
This endpoint is queried on a daily basis by grafista to retrieve cloud usage
stats. For assets and comments we take into consideration only those that belong
to public projects.
This is the data we retrieve:
- Comments count
- Assets count (video, images and files)
- Users count (subscribers count goes via store)
- Blender Sync users
"""
# TODO: Implement project-level metrics (and update at every child update)
if before:
before = datetime.datetime.strptime(str(before), '%Y%m%d')
else:
today = datetime.date.today()
before = datetime.datetime(today.year, today.month, today.day)
return jsonify(get_stats(before))
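A hedged sketch of how a consumer such as grafista might query this endpoint (host and date are examples):
```
import requests

# The blueprint is mounted at /s, so this asks for stats on everything
# created before 2017-01-01.
resp = requests.get('https://cloud.blender.org/s/before/20170101')
resp.raise_for_status()
print(resp.json()['users_total'])
```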

View File

@@ -1,72 +0,0 @@
"""Blender Store interface."""
import logging
import typing
from pillar import current_app
log = logging.getLogger(__name__)
def fetch_subscription_info(email: str) -> typing.Optional[dict]:
"""Returns the user info dict from the external subscriptions management server.
:returns: the store user info, or None if the user can't be found or there
was an error communicating. A dict like this is returned:
{
"shop_id": 700,
"cloud_access": 1,
"paid_balance": 314.75,
"balance_currency": "EUR",
"start_date": "2014-08-25 17:05:46",
"expiration_date": "2016-08-24 13:38:45",
"subscription_status": "wc-active",
"expiration_date_approximate": true
}
"""
from requests.adapters import HTTPAdapter
import requests.exceptions
external_subscriptions_server = current_app.config['EXTERNAL_SUBSCRIPTIONS_MANAGEMENT_SERVER']
if log.isEnabledFor(logging.DEBUG):
import urllib.parse
log_email = urllib.parse.quote(email)
log.debug('Connecting to store at %s?blenderid=%s',
external_subscriptions_server, log_email)
# Retry a few times when contacting the store.
s = requests.Session()
s.mount(external_subscriptions_server, HTTPAdapter(max_retries=5))
try:
r = s.get(external_subscriptions_server,
params={'blenderid': email},
verify=current_app.config['TLS_CERT_FILE'],
timeout=current_app.config.get('EXTERNAL_SUBSCRIPTIONS_TIMEOUT_SECS', 10))
except requests.exceptions.ConnectionError as ex:
log.error('Error connecting to %s: %s', external_subscriptions_server, ex)
return None
except requests.exceptions.Timeout as ex:
log.error('Timeout communicating with %s: %s', external_subscriptions_server, ex)
return None
except requests.exceptions.RequestException as ex:
log.error('Some error communicating with %s: %s', external_subscriptions_server, ex)
return None
if r.status_code != 200:
log.warning("Error communicating with %s, code=%i, unable to check "
"subscription status of user %s",
external_subscriptions_server, r.status_code, email)
return None
store_user = r.json()
if log.isEnabledFor(logging.DEBUG):
import json
log.debug('Received JSON from store API: %s',
json.dumps(store_user, sort_keys=False, indent=4))
return store_user
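A hedged usage sketch; the call must run inside a Pillar application context, since it reads `current_app.config`:
```
info = fetch_subscription_info('user@example.com')
if info is None:
    print('store unreachable, or no such user')
else:
    # expiration_date looks like 'YYYY-MM-DD HH:MM:SS'; keep the date part.
    print('expires:', info['expiration_date'][:10])
```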

View File

@@ -1,3 +0,0 @@
"""Routes for fetching tagged assets."""

View File

@@ -1,16 +0,0 @@
import logging
import datetime
import functools
from flask import Blueprint, jsonify
blueprint = Blueprint('cloud.tagged', __name__, url_prefix='/tagged')
log = logging.getLogger(__name__)
@blueprint.route('/')
def index():
"""Return all tagged assets as JSON, grouped by tag."""

View File

@@ -1,263 +0,0 @@
"""Blender ID webhooks."""
import functools
import hashlib
import hmac
import json
import logging
import typing
from flask import Blueprint, request
import werkzeug.exceptions as wz_exceptions
from pillar import current_app
from pillar.api.blender_cloud import subscription
from pillar.api.utils.authentication import create_new_user_document, make_unique_username
from pillar.auth import UserClass
blueprint = Blueprint('cloud-webhooks', __name__)
log = logging.getLogger(__name__)
WEBHOOK_MAX_BODY_SIZE = 1024 * 10  # 10 kB is large enough for the payloads we expect.
def webhook_payload(hmac_secret: str) -> dict:
"""Obtains the webhook payload from the request, verifying its HMAC.
:returns the webhook payload as dictionary.
"""
# Check the content type
if request.content_type != 'application/json':
log.info('request from %s to %s had bad content type %s',
request.remote_addr, request, request.content_type)
raise wz_exceptions.BadRequest('Content type not supported')
# Check the length of the body
if request.content_length > WEBHOOK_MAX_BODY_SIZE:
raise wz_exceptions.BadRequest('Request too large')
body = request.get_data()
if len(body) > request.content_length:
raise wz_exceptions.BadRequest('Larger body than Content-Length header')
# Validate the request
mac = hmac.new(hmac_secret.encode(), body, hashlib.sha256)
req_hmac = request.headers.get('X-Webhook-HMAC', '')
our_hmac = mac.hexdigest()
if not hmac.compare_digest(req_hmac, our_hmac):
log.info('request from %s to %s had bad HMAC %r, expected %r',
request.remote_addr, request, req_hmac, our_hmac)
raise wz_exceptions.BadRequest('Bad HMAC')
try:
return json.loads(body)
except json.JSONDecodeError as ex:
log.warning('request from %s to %s had bad JSON: %s',
request.remote_addr, request, ex)
raise wz_exceptions.BadRequest('Bad JSON')
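# For reference, a hedged sketch of the sending side (not part of this module):
# the caller signs the exact request body with the shared secret and sends the
# hex digest in the X-Webhook-HMAC header. The URL is assumed from the
# '/webhooks' prefix used when this blueprint is registered in
# CloudExtension.setup_app().
#   body = json.dumps(payload).encode()
#   digest = hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
#   requests.post('https://cloud.local/api/webhooks/user-modified', data=body,
#                 headers={'Content-Type': 'application/json',
#                          'X-Webhook-HMAC': digest})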
def score(wh_payload: dict, user: dict) -> int:
"""Determine how likely it is that this is the correct user to modify.
:param wh_payload: the info we received from Blender ID;
see user_modified()
:param user: the user in our database
:return: the score for this user
"""
bid_str = str(wh_payload['id'])
try:
match_on_bid = any((auth['provider'] == 'blender-id' and auth['user_id'] == bid_str)
for auth in user['auth'])
except KeyError:
match_on_bid = False
match_on_old_email = user.get('email', 'none') == wh_payload.get('old_email', 'nothere')
match_on_new_email = user.get('email', 'none') == wh_payload.get('email', 'nothere')
return match_on_bid * 10 + match_on_old_email + match_on_new_email * 2
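# Hedged worked example: a Blender ID match scores 10 and a new-email match
# scores 2, so this user scores 12 and beats one matching only the old email:
#   wh = {'id': 12345, 'old_email': 'old@example.com', 'email': 'new@example.com'}
#   user = {'email': 'new@example.com',
#           'auth': [{'provider': 'blender-id', 'user_id': '12345'}]}
#   score(wh, user)  # -> 12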
def insert_or_fetch_user(wh_payload: dict) -> typing.Optional[dict]:
"""Fetch the user from the DB or create it.
Only creates it if the webhook payload indicates they could actually use
Blender Cloud (i.e. demo or subscriber). This prevents us from creating
Cloud accounts for Blender Network users.
:returns the user document, or None when not created.
"""
users_coll = current_app.db('users')
my_log = log.getChild('insert_or_fetch_user')
bid_str = str(wh_payload['id'])
email = wh_payload['email']
# Find the user by their Blender ID, or any of their email addresses.
# We use one query to find all matching users. This is done as a
# consistency check; if more than one user is returned, we know the
# database is inconsistent with Blender ID and can emit a warning
# about this.
query = {'$or': [
{'auth.provider': 'blender-id', 'auth.user_id': bid_str},
{'email': {'$in': [wh_payload['old_email'], email]}},
]}
db_users = list(users_coll.find(query))
user_count = len(db_users)
if user_count > 1:
# Now we have to pay the price for finding users in one query; we
# have to prioritise them and return the one we think is most reliable.
calc_score = functools.partial(score, wh_payload)
best_user = max(db_users, key=calc_score)
my_log.error('%d users found for query %s, picking user %s (%s)',
user_count, query, best_user['_id'], best_user['email'])
return best_user
if user_count:
db_user = db_users[0]
my_log.debug('found user %s', db_user['email'])
return db_user
if wh_payload.get('date_deletion_requested'):
my_log.info('Received update for a deleted user %s, not creating', bid_str)
return None
# Pretend to create the user, so that we can inspect the resulting
# capabilities. This is more future-proof than looking at the list
# of roles in the webhook payload.
username = make_unique_username(email)
user_doc = create_new_user_document(email, bid_str, username,
provider='blender-id',
full_name=wh_payload['full_name'])
# Figure out the user's eventual roles. These aren't stored in the document yet,
# because that's handled by the badger service.
eventual_roles = [subscription.ROLES_BID_TO_PILLAR[r]
for r in wh_payload.get('roles', [])
if r in subscription.ROLES_BID_TO_PILLAR]
user_ob = UserClass.construct('', user_doc)
user_ob.roles = eventual_roles
user_ob.collect_capabilities()
create = (user_ob.has_cap('subscriber') or
user_ob.has_cap('can-renew-subscription') or
current_app.org_manager.user_is_unknown_member(email))
if not create:
my_log.info('Received update for unknown user %r without Cloud access (caps=%s)',
wh_payload['old_email'], user_ob.capabilities)
return None
# Actually create the user in the database.
r, _, _, status = current_app.post_internal('users', user_doc)
if status != 201:
my_log.error('unable to create user %s: %r %r', email, status, r)
raise wz_exceptions.InternalServerError('unable to create user')
user_doc.update(r)
my_log.info('created user %r = %s to allow immediate Cloud access', email, user_doc['_id'])
return user_doc
@blueprint.route('/user-modified', methods=['POST'])
def user_modified():
"""Update the local user based on the info from Blender ID.
If the payload indicates the user has access to Blender Cloud (or at least
a renewable subscription), create the user if not already in our DB.
The payload we expect is a dictionary like:
{'id': 12345, # the user's ID in Blender ID
'old_email': 'old@example.com',
'full_name': 'Harry',
'email': 'new@example.com',
'avatar_changed': True,
'roles': ['role1', 'role2', …]}
"""
my_log = log.getChild('user_modified')
my_log.debug('Received request from %s', request.remote_addr)
hmac_secret = current_app.config['BLENDER_ID_WEBHOOK_USER_CHANGED_SECRET']
payload = webhook_payload(hmac_secret)
my_log.info('payload: %s', payload)
# Update the user
db_user = insert_or_fetch_user(payload)
if not db_user:
my_log.info('Received update for unknown user %r', payload['old_email'])
return '', 204
if payload.get('date_deletion_requested'):
delete_user(db_user, payload)
return '', 204
# Use direct database updates to change the email and full name.
# Also updates the db_user dict so that local_user below will have
# the updated information.
updates = {}
if db_user['email'] != payload['email']:
my_log.info('User changed email from %s to %s', payload['old_email'], payload['email'])
updates['email'] = payload['email']
db_user['email'] = payload['email']
if db_user['full_name'] != payload['full_name']:
my_log.info('User changed full name from %r to %r',
db_user['full_name'], payload['full_name'])
if payload['full_name']:
updates['full_name'] = payload['full_name']
else:
# Fall back to the username when the full name was erased.
updates['full_name'] = db_user['username']
db_user['full_name'] = updates['full_name']
if payload.get('avatar_changed'):
import pillar.celery.avatar
my_log.info('User %s changed avatar, scheduling download', db_user['_id'])
pillar.celery.avatar.sync_avatar_for_user.delay(str(db_user['_id']))
if updates:
users_coll = current_app.db('users')
update_res = users_coll.update_one({'_id': db_user['_id']},
{'$set': updates})
if update_res.matched_count != 1:
my_log.error('Unable to find user %s to update, even though '
'we found them by email address %s',
db_user['_id'], payload['old_email'])
# Defer to Pillar to do the role updates.
local_user = UserClass.construct('', db_user)
subscription.do_update_subscription(local_user, payload)
return '', 204
def delete_user(db_user, payload):
"""Handle deletion request coming from BID."""
my_log = log.getChild('delete_user')
date_deletion_requested = payload['date_deletion_requested']
bid_str = str(payload['id'])
local_id = db_user['_id']
my_log.info(
'User %s with BID=%s requested deletion on %s, soft-deleting the user',
local_id, bid_str, date_deletion_requested,
)
# Delete all session tokens linked to this user
token_coll = current_app.db('tokens')
delete_res = token_coll.delete_many({'user': local_id})
my_log.info('Deleted %s session tokens of user %s', delete_res.deleted_count, local_id)
# Soft-delete the user and clear their PII
users_coll = current_app.db('users')
updates = {
'_deleted': True,
'email': None,
'full_name': None,
'username': None,
'auth': [],
}
update_res = users_coll.update_one({'_id': local_id}, {'$set': updates})
if update_res.matched_count != 1:
my_log.error(
'Soft-deleted %s users %s with BID=%s',
update_res.matched_count, local_id, bid_str,
)
else:
my_log.warning('Soft-deleted user %s with BID=%s', local_id, bid_str)

View File

@@ -1,296 +0,0 @@
#!/usr/bin/env python3
"""CLI command for sharing an image via Blender Cloud.
Assumes that you are logged in on Blender ID with the Blender ID Add-on.
The user_config_dir and user_data_dir functions come from
https://github.com/ActiveState/appdirs/blob/master/appdirs.py and
are licensed under the MIT license.
"""
import argparse
import json
import mimetypes
import os.path
import pprint
import sys
import webbrowser
from urllib.parse import urljoin
import requests
cli = argparse.Namespace()  # CLI args from argparse
sess = requests.Session()
IMAGE_SHARING_GROUP_NODE_NAME = 'Image sharing'
if sys.platform.startswith('java'):
import platform
os_name = platform.java_ver()[3][0]
if os_name.startswith('Windows'): # "Windows XP", "Windows 7", etc.
system = 'win32'
elif os_name.startswith('Mac'): # "Mac OS X", etc.
system = 'darwin'
else: # "Linux", "SunOS", "FreeBSD", etc.
# Setting this to "linux2" is not ideal, but only Windows or Mac
# are actually checked for and the rest of the module expects
# *sys.platform* style strings.
system = 'linux2'
else:
system = sys.platform
def request(method: str, rel_url: str, **kwargs) -> requests.Response:
kwargs.setdefault('auth', (cli.token, ''))
url = urljoin(cli.server_url, rel_url)
return sess.request(method, url, **kwargs)
def get(rel_url: str, **kwargs) -> requests.Response:
return request('GET', rel_url, **kwargs)
def post(rel_url: str, **kwargs) -> requests.Response:
return request('POST', rel_url, **kwargs)
def find_user_id() -> str:
"""Returns the current user ID."""
print(15 * '=', 'User info', 15 * '=')
resp = get('/api/users/me')
resp.raise_for_status()
user_info = resp.json()
print('You are logged in as %(full_name)s (%(_id)s)' % user_info)
return user_info['_id']
def find_home_project_id() -> dict:
resp = get('/api/bcloud/home-project')
resp.raise_for_status()
proj = resp.json()
proj_id = proj['_id']
print('Your home project ID is %s' % proj_id)
return proj_id
def find_image_sharing_group_id(home_project_id, user_id) -> str:
"""Find the top-level image sharing group node."""
node_doc = {
'project': home_project_id,
'node_type': 'group',
'name': IMAGE_SHARING_GROUP_NODE_NAME,
'user': user_id,
}
resp = get('/api/nodes', params={'where': json.dumps(node_doc)})
resp.raise_for_status()
items = resp.json()['_items']
if not items:
print('Share group not found, creating one.')
node_doc.update({
'properties': {},
})
resp = post('/api/nodes', json=node_doc)
resp.raise_for_status()
share_group = resp.json()
else:
share_group = items[0]
# print('Share group:', share_group)
return share_group['_id']
def upload_image():
user_id = find_user_id()
home_project_id = find_home_project_id()
group_id = find_image_sharing_group_id(home_project_id, user_id)
basename = os.path.basename(cli.imgfile)
print('Sharing group ID is %s' % group_id)
# Upload the image to the project.
print('Uploading %r' % cli.imgfile)
mimetype, _ = mimetypes.guess_type(cli.imgfile, strict=False)
with open(cli.imgfile, mode='rb') as infile:
resp = post('api/storage/stream/%s' % home_project_id,
files={'file': (basename, infile, mimetype)})
resp.raise_for_status()
file_upload_resp = resp.json()
file_upload_status = file_upload_resp.get('_status') or file_upload_resp.get('status')
if file_upload_status != 'ok':
raise ValueError('Received bad status %s from Pillar: %s' %
(file_upload_status, json.dumps(file_upload_resp)))
file_id = file_upload_resp['file_id']
print('File ID is', file_id)
# Create the asset node
asset_node = {
'project': home_project_id,
'node_type': 'asset',
'name': basename,
'parent': group_id,
'properties': {
'content_type': mimetype,
'file': file_id,
},
}
resp = post('api/nodes', json=asset_node)
resp.raise_for_status()
node_info = resp.json()
node_id = node_info['_id']
print('Created asset node', node_id)
# Share the node to get a public URL.
resp = post('api/nodes/%s/share' % node_id)
resp.raise_for_status()
share_info = resp.json()
print(json.dumps(share_info, indent=4))
url = share_info.get('short_link')
print('Opening %s in a browser' % url)
webbrowser.open_new_tab(url)
def find_credentials():
"""Finds BlenderID credentials.
:rtype: str
:returns: the authentication token to use.
"""
import glob
# Find BlenderID profile file.
configpath = user_config_dir('blender', 'Blender Foundation', roaming=True)
found = glob.glob(os.path.join(configpath, '*'))
for confpath in reversed(sorted(found)):
profiles_path = os.path.join(confpath, 'config', 'blender_id', 'profiles.json')
if not os.path.exists(profiles_path):
continue
print('Reading credentials from %s' % profiles_path)
with open(profiles_path) as infile:
profiles = json.load(infile, encoding='utf8')
if profiles:
break
else:
print('Unable to find Blender ID credentials. Log in with the Blender ID add-on in '
'Blender first.')
raise SystemExit()
active_profile = profiles[u'active_profile']
profile = profiles[u'profiles'][active_profile]
print('Logging in as %s' % profile[u'username'])
return profile[u'token']
def main():
global cli
parser = argparse.ArgumentParser()
parser.add_argument('imgfile', help='The image file to share.')
parser.add_argument('-u', '--server-url', default='https://cloud.blender.org/',
help='URL of the Flamenco server.')
parser.add_argument('-t', '--token',
help='Authentication token to use. If not given, your token from the '
'Blender ID add-on is used.')
cli = parser.parse_args()
if not cli.token:
cli.token = find_credentials()
upload_image()
def user_config_dir(appname=None, appauthor=None, version=None, roaming=False):
r"""Return full path to the user-specific config dir for this application.
"appname" is the name of application.
If None, just the system directory is returned.
"appauthor" (only used on Windows) is the name of the
appauthor or distributing body for this application. Typically
it is the owning company name. This falls back to appname. You may
pass False to disable it.
"version" is an optional version path element to append to the
path. You might want to use this if you want multiple versions
of your app to be able to run independently. If used, this
would typically be "<major>.<minor>".
Only applied when appname is present.
"roaming" (boolean, default False) can be set True to use the Windows
roaming appdata directory. That means that for users on a Windows
network setup for roaming profiles, this user data will be
sync'd on login. See
<http://technet.microsoft.com/en-us/library/cc766489(WS.10).aspx>
for a discussion of issues.
Typical user config directories are:
Mac OS X: same as user_data_dir
Unix: ~/.config/<AppName> # or in $XDG_CONFIG_HOME, if defined
Win *: same as user_data_dir
For Unix, we follow the XDG spec and support $XDG_CONFIG_HOME.
That means, by default "~/.config/<AppName>".
"""
if system in {"win32", "darwin"}:
path = user_data_dir(appname, appauthor, None, roaming)
else:
path = os.getenv('XDG_CONFIG_HOME', os.path.expanduser("~/.config"))
if appname:
path = os.path.join(path, appname)
if appname and version:
path = os.path.join(path, version)
return path
def user_data_dir(appname=None, appauthor=None, version=None, roaming=False):
r"""Return full path to the user-specific data dir for this application.
"appname" is the name of application.
If None, just the system directory is returned.
"appauthor" (only used on Windows) is the name of the
appauthor or distributing body for this application. Typically
it is the owning company name. This falls back to appname. You may
pass False to disable it.
"version" is an optional version path element to append to the
path. You might want to use this if you want multiple versions
of your app to be able to run independently. If used, this
would typically be "<major>.<minor>".
Only applied when appname is present.
"roaming" (boolean, default False) can be set True to use the Windows
roaming appdata directory. That means that for users on a Windows
network setup for roaming profiles, this user data will be
sync'd on login. See
<http://technet.microsoft.com/en-us/library/cc766489(WS.10).aspx>
for a discussion of issues.
Typical user data directories are:
Mac OS X: ~/Library/Application Support/<AppName>
Unix: ~/.local/share/<AppName> # or in $XDG_DATA_HOME, if defined
Win XP (not roaming): C:\Documents and Settings\<username>\Application Data\<AppAuthor>\<AppName>
Win XP (roaming): C:\Documents and Settings\<username>\Local Settings\Application Data\<AppAuthor>\<AppName>
Win 7 (not roaming): C:\Users\<username>\AppData\Local\<AppAuthor>\<AppName>
Win 7 (roaming): C:\Users\<username>\AppData\Roaming\<AppAuthor>\<AppName>
For Unix, we follow the XDG spec and support $XDG_DATA_HOME.
That means, by default "~/.local/share/<AppName>".
"""
if system == "win32":
raise RuntimeError("Sorry, Windows is not supported for now.")
elif system == 'darwin':
path = os.path.expanduser('~/Library/Application Support/')
if appname:
path = os.path.join(path, appname)
else:
path = os.getenv('XDG_DATA_HOME', os.path.expanduser("~/.local/share"))
if appname:
path = os.path.join(path, appname)
if appname and version:
path = os.path.join(path, version)
return path
if __name__ == '__main__':
main()
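
For reference, invoking the script (saved here as `share_image.py`, a hypothetical filename) against the default server or a local development server looks like:

```
python3 share_image.py screenshot.png
python3 share_image.py --server-url http://cloud.local:5001/ --token YOUR_TOKEN screenshot.png
```

The only third-party dependency is `requests`; when `--token` is not given, the token is read from the Blender ID add-on's `profiles.json`.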


@@ -1,43 +0,0 @@
import os
DEBUG = True
BLENDER_ID_ENDPOINT = 'http://id.local:8000/'
SERVER_NAME = 'cloud.local:5001'
SCHEME = 'http'
PILLAR_SERVER_ENDPOINT = f'{SCHEME}://{SERVER_NAME}/api/'
os.environ['OAUTHLIB_INSECURE_TRANSPORT'] = 'true'
os.environ['PILLAR_MONGO_DBNAME'] = 'cloud'
os.environ['PILLAR_MONGO_PORT'] = '27017'
os.environ['PILLAR_MONGO_HOST'] = 'mongo'
os.environ['PILLAR_SERVER_ENDPOINT'] = PILLAR_SERVER_ENDPOINT
SECRET_KEY = '##DEFINE##'
OAUTH_CREDENTIALS = {
    'blender-id': {
        'id': 'CLOUD-OF-SNOWFLAKES-42',
        'secret': '##DEFINE##',
    }
}
MAIN_PROJECT_ID = '##DEFINE##'
URLER_SERVICE_AUTH_TOKEN = '##DEFINE##'
ZENCODER_API_KEY = '##DEFINE##'
ZENCODER_NOTIFICATIONS_SECRET = '##DEFINE##'
ZENCODER_NOTIFICATIONS_URL = 'http://zencoderfetcher/'
# Special announcement on top of every page, for non-subscribers.
# category: 'string', can be 'info', 'warning', 'danger', or 'success'.
# message: 'string', any text, it gets markdowned.
# icon: 'string', any icon in font-pillar. e.g. 'pi-heart-filled'
UI_ANNOUNCEMENT_NON_SUBSCRIBERS = {
    'category': 'danger',
    'message': 'Spring will swing away the gray clouds, until then, '
               '[take cover under Blender Cloud](https://cloud.blender.org)!',
    'icon': 'pi-heart-filled',
}

deploy.sh Executable file

@@ -0,0 +1,124 @@
#!/bin/bash -e
# Deploys the current production branch to the production machine.
PROJECT_NAME="blender-cloud"
DOCKER_NAME="blender_cloud"
REMOTE_ROOT="/data/git/${PROJECT_NAME}"
SSH="ssh -o ClearAllForwardings=yes cloud.blender.org"
# macOS does not support readlink -f, so we use greadlink instead
if [[ `uname` == 'Darwin' ]]; then
    command -v greadlink >/dev/null 2>&1 || { echo >&2 "Install greadlink using brew."; exit 1; }
    readlink='greadlink'
else
    readlink='readlink'
fi
ROOT="$(dirname "$($readlink -f "$0")")"
cd ${ROOT}
# Check that we're on production branch.
if [ $(git rev-parse --abbrev-ref HEAD) != "production" ]; then
    echo "You are NOT on the production branch, refusing to deploy." >&2
    exit 1
fi

# Check that the production branch has been pushed.
if [ -n "$(git log origin/production..production --oneline)" ]; then
    echo "WARNING: not all changes to the production branch have been pushed."
    echo "Press [ENTER] to continue deploying current origin/production, CTRL+C to abort."
    read dummy
fi
function find_module()
{
    MODULE_NAME=$1
    MODULE_DIR=$(python <<EOT
from __future__ import print_function

import os.path

try:
    import ${MODULE_NAME}
except ImportError:
    raise SystemExit('${MODULE_NAME} not found on Python path. Are you in the correct venv?')

print(os.path.dirname(os.path.dirname(${MODULE_NAME}.__file__)))
EOT
)

    if [ $(git -C $MODULE_DIR rev-parse --abbrev-ref HEAD) != "production" ]; then
        echo "${MODULE_NAME}: ($MODULE_DIR) NOT on the production branch, refusing to deploy." >&2
        exit 1
    fi

    echo $MODULE_DIR
}
# Find our modules
PILLAR_DIR=$(find_module pillar)
ATTRACT_DIR=$(find_module attract)
FLAMENCO_DIR=$(find_module flamenco)
echo "Pillar : $PILLAR_DIR"
echo "Attract : $ATTRACT_DIR"
echo "Flamenco: $FLAMENCO_DIR"
if [ -z "$PILLAR_DIR" -o -z "$ATTRACT_DIR" -o -z "$FLAMENCO_DIR" ]; then
    exit 1
fi
# SSH to cloud to pull all files in
function git_pull() {
    PROJECT_NAME="$1"
    BRANCH="$2"
    REMOTE_ROOT="/data/git/${PROJECT_NAME}"

    echo "==================================================================="
    echo "UPDATING FILES ON ${PROJECT_NAME}"
    ${SSH} git -C ${REMOTE_ROOT} fetch origin ${BRANCH}
    ${SSH} git -C ${REMOTE_ROOT} log origin/${BRANCH}..${BRANCH} --oneline
    ${SSH} git -C ${REMOTE_ROOT} merge --ff-only origin/${BRANCH}
}
git_pull pillar-python-sdk master
git_pull pillar production
git_pull attract production
git_pull flamenco production
git_pull blender-cloud production
# Update the virtualenv
#${SSH} -t docker exec ${DOCKER_NAME} /data/venv/bin/pip install -U -r ${REMOTE_ROOT}/requirements.txt --exists-action w
# RSync the world
$ATTRACT_DIR/rsync_ui.sh
$FLAMENCO_DIR/rsync_ui.sh
./rsync_ui.sh
# Notify Bugsnag of this new deploy.
echo
echo "==================================================================="
GIT_REVISION=$(${SSH} git -C ${REMOTE_ROOT} describe --always)
echo "Notifying Bugsnag of this new deploy of revision ${GIT_REVISION}."
BUGSNAG_API_KEY=$(${SSH} python -c "\"import sys; sys.path.append('${REMOTE_ROOT}'); import config_local; print(config_local.BUGSNAG_API_KEY)\"")
curl --data "apiKey=${BUGSNAG_API_KEY}&revision=${GIT_REVISION}" https://notify.bugsnag.com/deploy
echo
# Wait for [ENTER] to restart the server
echo
echo "==================================================================="
echo "NOTE: If you want to edit config_local.py on the server, do so now."
echo "NOTE: Press [ENTER] to continue and restart the server process."
read dummy
${SSH} docker exec ${DOCKER_NAME} apache2ctl graceful
echo "Server process restarted"
echo
echo "==================================================================="
echo "Clearing front page from Redis cache."
${SSH} docker exec redis redis-cli DEL pwview//
echo
echo "==================================================================="
echo "Deploy of ${PROJECT_NAME} is done."
echo "==================================================================="


@@ -1,130 +0,0 @@
#!/bin/bash -e
STAGING_BRANCH=${STAGING_BRANCH:-production}
# macOS does not support readlink -f, so we use greadlink instead
if [[ `uname` == 'Darwin' ]]; then
    command -v greadlink >/dev/null 2>&1 || { echo >&2 "Install greadlink using brew."; exit 1; }
    readlink='greadlink'
else
    readlink='readlink'
fi
ROOT="$(dirname "$(dirname "$($readlink -f "$0")")")"
STAGINGDIR="$ROOT/docker/4_run/staging"
PROJECT_NAME="$(basename $ROOT)"
if [ -e $STAGINGDIR ]; then
    echo "$STAGINGDIR already exists, press [ENTER] to destroy and re-install, Ctrl+C to abort."
    read dummy
    rm -rf $STAGINGDIR
else
    echo "Installing into $STAGINGDIR;"
    echo "press [ENTER] to continue, Ctrl+C to abort."
    read dummy
fi
cd ${ROOT}
mkdir -p $STAGINGDIR
REMOTE_ROOT="$STAGINGDIR/$PROJECT_NAME"
if [ -z "$SKIP_BRANCH_CHECK" ]; then
    # Check that we're on the branch that is to be staged.
    if [ $(git rev-parse --abbrev-ref HEAD) != "$STAGING_BRANCH" ]; then
        echo "You are NOT on the $STAGING_BRANCH branch, refusing to stage." >&2
        exit 1
    fi

    # Check that the $STAGING_BRANCH branch has been pushed.
    if [ -n "$(git log origin/$STAGING_BRANCH..$STAGING_BRANCH --oneline)" ]; then
        echo "WARNING: not all changes to the $STAGING_BRANCH branch have been pushed."
        echo "Press [ENTER] to continue staging current origin/$STAGING_BRANCH, CTRL+C to abort."
        read dummy
    fi
fi
function find_module()
{
    MODULE_NAME=$1
    MODULE_DIR=$(python <<EOT
from __future__ import print_function

import os.path

try:
    import ${MODULE_NAME}
except ImportError:
    raise SystemExit('${MODULE_NAME} not found on Python path. Are you in the correct venv?')

print(os.path.dirname(os.path.dirname(${MODULE_NAME}.__file__)))
EOT
)

    echo $MODULE_DIR
}
# Find our modules
echo "==================================================================="
echo "LOCAL MODULE LOCATIONS"
PILLAR_DIR=$(find_module pillar)
ATTRACT_DIR=$(find_module attract)
FLAMENCO_DIR=$(find_module flamenco)
SVNMAN_DIR=$(find_module svnman)
SDK_DIR=$(find_module pillarsdk)
echo "Pillar : $PILLAR_DIR"
echo "Attract : $ATTRACT_DIR"
echo "Flamenco: $FLAMENCO_DIR"
echo "SVNMan : $SVNMAN_DIR"
echo "SDK : $SDK_DIR"
if [ -z "$PILLAR_DIR" -o -z "$ATTRACT_DIR" -o -z "$FLAMENCO_DIR" -o -z "$SVNMAN_DIR" -o -z "$SDK_DIR" ]; then
    exit 1
fi

function git_clone() {
    PROJECT_NAME="$1"
    BRANCH="$2"
    LOCAL_ROOT="$3"

    echo "==================================================================="
    echo "CLONING REPO ON $PROJECT_NAME @$BRANCH"
    URL=$(git -C $LOCAL_ROOT remote get-url origin)
    git -C $STAGINGDIR clone --depth 1 --branch $BRANCH $URL $PROJECT_NAME
}

if [ "$STAGING_BRANCH" == "production" ]; then
    SDK_STAGING_BRANCH=master  # SDK doesn't have a production branch
else
    SDK_STAGING_BRANCH=$STAGING_BRANCH
fi
git_clone pillar-python-sdk $SDK_STAGING_BRANCH $SDK_DIR
git_clone pillar $STAGING_BRANCH $PILLAR_DIR
git_clone attract $STAGING_BRANCH $ATTRACT_DIR
git_clone flamenco $STAGING_BRANCH $FLAMENCO_DIR
git_clone pillar-svnman $STAGING_BRANCH $SVNMAN_DIR
git_clone blender-cloud $STAGING_BRANCH $ROOT
# Gulp everywhere
GULP=$ROOT/node_modules/.bin/gulp
if [ ! -e $GULP -o gulpfile.js -nt $GULP ]; then
    npm install
    touch $GULP  # installer doesn't always touch this after a build, so we do.
fi
# List of projects
PROJECTS="pillar attract flamenco pillar-svnman blender-cloud"
# Run ./gulp for every project
for PROJECT in $PROJECTS; do
    pushd $STAGINGDIR/$PROJECT; ./gulp --production; popd;
done

# Remove node_modules (only after all projects with interdependencies have been built)
for PROJECT in $PROJECTS; do
    pushd $STAGINGDIR/$PROJECT; rm -r node_modules; popd;
done
echo
echo "==================================================================="
echo "Staging of ${PROJECT_NAME} is ready for dockerisation."
echo "==================================================================="


@@ -1,80 +0,0 @@
#!/bin/bash -e
# macOS does not support readlink -f, so we use greadlink instead
if [[ `uname` == 'Darwin' ]]; then
    command -v greadlink >/dev/null 2>&1 || { echo >&2 "Install greadlink using brew."; exit 1; }
    readlink='greadlink'
else
    readlink='readlink'
fi
ROOT="$(dirname "$(dirname "$($readlink -f "$0")")")"
PROJECT_NAME="$(basename $ROOT)"
DOCKER_IMAGE="armadillica/blender_cloud:latest"
REMOTE_SECRET_CONFIG_DIR="/data/config"
REMOTE_DOCKER_COMPOSE_DIR="/root/docker"
#################################################################################
case $1 in
    cloud*)
        DEPLOYHOST="$1"
        ;;
    *)
        echo "Use $0 cloud{nr}|cloud.blender.org" >&2
        exit 1
esac
SSH_OPTS="-o ClearAllForwardings=yes -o PermitLocalCommand=no"
SSH="ssh $SSH_OPTS $DEPLOYHOST"
SCP="scp $SSH_OPTS"
echo -n "Deploying to $DEPLOYHOST"
if ! ping $DEPLOYHOST -q -c 1 -W 2 >/dev/null; then
    echo "host $DEPLOYHOST cannot be pinged, refusing to deploy." >&2
    exit 2
fi
cat <<EOT
[ping OK]
Make sure that you have pushed the $DOCKER_IMAGE
docker image to Docker Hub.
press [ENTER] to continue, Ctrl+C to abort.
EOT
read dummy
#################################################################################
echo "==================================================================="
echo "Bringing remote Docker up to date…"
$SSH mkdir -p $REMOTE_DOCKER_COMPOSE_DIR
$SCP \
    $ROOT/docker/{docker-compose.yml,renew-letsencrypt.sh,mongo-backup.{cron,sh}} \
    $DEPLOYHOST:$REMOTE_DOCKER_COMPOSE_DIR
$SSH -T <<EOT
set -e
cd $REMOTE_DOCKER_COMPOSE_DIR
docker pull $DOCKER_IMAGE
docker-compose up -d
echo
echo "==================================================================="
echo "Clearing front page from Redis cache."
docker exec redis redis-cli DEL pwview//
EOT
# Notify Sentry of this new deploy.
# See https://sentry.io/blender-institute/blender-cloud/settings/release-tracking/
# and https://docs.sentry.io/api/releases/post-organization-releases/
# and https://sentry.io/api/
echo
echo "==================================================================="
REVISION=$(date +'%Y%m%d.%H%M%S.%Z')
echo "Notifying Sentry of this new deploy of revision $REVISION."
SENTRY_RELEASE_URL="$($SSH env PYTHONPATH="$REMOTE_SECRET_CONFIG_DIR" python3 -c "\"import config_secrets; print(config_secrets.SENTRY_RELEASE_URL)\"")"
curl -s "$SENTRY_RELEASE_URL" -XPOST -H 'Content-Type: application/json' -d "{\"version\": \"$REVISION\"}" | json_pp
echo
echo
echo "==================================================================="
echo "Deploy to $DEPLOYHOST done."
echo "==================================================================="


@@ -1,39 +0,0 @@
# Deploying to Production
```
workon blender-cloud # activate your virtualenv
cd $projectdir/deploy
./full-pull.sh
```
## The Details
Deployment consists of a few steps:
1. Populate a staging directory with the files from the production branches of the various projects.
2. Create Docker images.
3. Push the docker images to Docker Hub.
4. Pull the docker images on the production server and rebuild+restart the containers.
The scripts involved are:
- `2docker.sh`: performs step 1. above.
- `build-{xxx}.sh`: performs steps 2. and 3. above.
- `2server.sh`: performs step 4. above.
The `full-{xxx}.sh` scripts perform all the steps, and call into `build-{xxx}.sh`.
For `xxx` there are:
- `all`: Rebuild all Docker images from scratch. This is good for getting the latest updates to
  the base image.
- `pull`: Pull the base and intermediate images from Docker Hub so that they are the same as the
  last time someone pushed to production, then rebuild the final Docker image.
- `quick`: Just rebuild the final Docker image. Only use this if you performed the most recent
  deployment to the production server from the machine you're working on now.
## Hacking Stuff
To deploy another branch than `production`, do `export STAGING_BRANCH=otherbranch` before starting
the above commands.
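
For example, staging and deploying a test branch end to end could look like this (a sketch; `wip-feature` is a hypothetical branch name):

```
export STAGING_BRANCH=wip-feature
./full-pull.sh
```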


@@ -1,43 +0,0 @@
#!/bin/bash -e
# macOS does not support readlink -f, so we use greadlink instead
if [[ `uname` == 'Darwin' ]]; then
command -v greadlink 2>/dev/null 2>&1 || { echo >&2 "Install greadlink using brew."; exit 1; }
readlink='greadlink'
else
readlink='readlink'
fi
ROOT="$(dirname "$(dirname "$($readlink -f "$0")")")"
case "$(basename "$0")" in
build-pull.sh)
docker pull armadillica/pillar_py:3.6
docker pull armadillica/pillar_wheelbuilder:latest
pushd "$ROOT/docker/3_buildwheels"
./build.sh
popd
pushd "$ROOT/docker/4_run"
./build.sh
;;
build-quick.sh)
pushd "$ROOT/docker/4_run"
./build.sh
;;
build-all.sh)
pushd "$ROOT/docker"
./full_rebuild.sh
;;
*)
echo "Unknown script $0, aborting" >&2
exit 1
esac
popd
echo
echo "Press [ENTER] to push the new Docker images."
read dummy
docker push armadillica/pillar_py:3.6
docker push armadillica/pillar_wheelbuilder:latest
docker push armadillica/blender_cloud:latest
echo
echo "Build is done, ready to update the server."


@@ -1 +0,0 @@
build-quick.sh


@@ -1 +0,0 @@
build-all.sh


@@ -1,9 +0,0 @@
#!/bin/bash
set -e
NAME="$(basename "$0")"
./2docker.sh
./${NAME/full-/build-}
./2server.sh cloud2


@@ -1 +0,0 @@
full-all.sh


@@ -1 +0,0 @@
full-all.sh


@@ -1,10 +0,0 @@
FROM ubuntu:18.04
LABEL maintainer="Sybren A. Stüvel <sybren@blender.studio>"
RUN set -ex; \
apt-get update; \
DEBIAN_FRONTEND=noninteractive apt-get install \
-qyy -o APT::Install-Recommends=false -o APT::Install-Suggests=false \
tzdata openssl ca-certificates locales; \
locale-gen en_US.UTF-8 en_GB.UTF-8 nl_NL.UTF-8
ENV LANG en_US.UTF-8

docker/1_base/base.docker Executable file

@@ -0,0 +1,16 @@
FROM ubuntu:16.04
MAINTAINER Francesco Siddi <francesco@blender.org>
RUN apt-get update && apt-get install -qyy \
-o APT::Install-Recommends=false -o APT::Install-Suggests=false \
python-pip libffi6 openssl ffmpeg rsyslog logrotate
RUN mkdir -p /data/git/pillar \
&& mkdir -p /data/storage \
&& mkdir -p /data/config \
&& mkdir -p /data/venv \
&& mkdir -p /data/wheelhouse
RUN pip install virtualenv
RUN virtualenv /data/venv
RUN . /data/venv/bin/activate && pip install -U pip && pip install wheel

docker/1_base/build.sh Executable file → Normal file

@@ -1,4 +1,3 @@
#!/usr/bin/env bash
# Uses --no-cache to always get the latest upstream (security) upgrades.
exec docker build --no-cache "$@" -t pillar_base .
docker build -t pillar_base -f base.docker .;


@@ -0,0 +1,3 @@
#!/usr/bin/env bash
. /data/venv/bin/activate && pip wheel --wheel-dir=/data/wheelhouse -r /requirements.txt

docker/2_build/build.docker Executable file

@@ -0,0 +1,26 @@
FROM pillar_base
MAINTAINER Francesco Siddi <francesco@blender.org>
RUN apt-get update && apt-get install -qy \
git \
gcc \
libffi-dev \
libssl-dev \
pypy-dev \
python-dev \
python-imaging \
zlib1g-dev \
libjpeg-dev \
libtiff-dev \
python-crypto \
python-openssl
ENV WHEELHOUSE=/data/wheelhouse
ENV PIP_WHEEL_DIR=/data/wheelhouse
ENV PIP_FIND_LINKS=/data/wheelhouse
VOLUME /data/wheelhouse
ADD requirements.txt /requirements.txt
ADD build-wheels.sh /build-wheels.sh
ENTRYPOINT ["bash", "build-wheels.sh"]

docker/2_build/build.sh Executable file

@@ -0,0 +1,11 @@
#!/usr/bin/env bash
mkdir -p ../3_run/wheelhouse;
cp ../../requirements.txt .;
docker build -t pillar_build -f build.docker .;
docker run --rm \
    -v "$(pwd)"/../3_run/wheelhouse:/data/wheelhouse \
    pillar_build;
rm requirements.txt;


@@ -1 +0,0 @@
c3f30a0aff425dda77d19e02f420d6ba Python-3.6.6.tar.xz


@@ -1,61 +0,0 @@
#!/usr/bin/env bash
set -e
# macOS does not support readlink -f, so we use greadlink instead
if [ $(uname) == 'Darwin' ]; then
    command -v greadlink >/dev/null 2>&1 || { echo >&2 "Install greadlink using brew."; exit 1; }
    readlink='greadlink'
else
    readlink='readlink'
fi
PYTHONTARGET=$($readlink -f ./python)
mkdir -p "$PYTHONTARGET"
echo "Python will be built to $PYTHONTARGET"
docker build -t pillar_build -f buildpy.docker .
# Use the docker image to build Python 3.6 and mod-wsgi
GID=$(id -g)
docker run --rm -i \
    -v "$PYTHONTARGET:/opt/python" \
    pillar_build <<EOT
set -e
cd \$PYTHONSOURCE
./configure \
--prefix=/opt/python \
--enable-ipv6 \
--enable-shared \
--with-ensurepip=upgrade
make -j8 install
# Make sure we can run Python
ldconfig
# Upgrade pip
/opt/python/bin/python3 -m pip install -U pip
# Build mod-wsgi-py3 for Python 3.6
cd /dpkg/mod-wsgi-*
./configure --with-python=/opt/python/bin/python3
make -j8 install
mkdir -p /opt/python/mod-wsgi
cp /usr/lib/apache2/modules/mod_wsgi.so /opt/python/mod-wsgi
chown -R $UID:$GID /opt/python/*
EOT
# Strip some stuff we don't need from the Python install.
rm -rf $PYTHONTARGET/lib/python3.*/test
rm -rf $PYTHONTARGET/lib/python3.*/config-3.*/libpython3.*.a
find $PYTHONTARGET/lib -name '*.so.*' -o -name '*.so' | while read libname; do
    chmod u+w "$libname"
    strip "$libname"
done
# Create another docker image which contains the actual Python.
# This one will serve as base for the Wheel builder and the
# production image.
docker build -t armadillica/pillar_py:3.6 -f includepy.docker .


@@ -1,35 +0,0 @@
FROM pillar_base
LABEL maintainer="Sybren A. Stüvel <sybren@blender.studio>"
RUN sed -i 's/^# deb-src/deb-src/' /etc/apt/sources.list && \
apt-get update && \
DEBIAN_FRONTEND=noninteractive apt-get install -qy \
build-essential \
apache2-dev \
checkinstall \
curl
RUN apt-get build-dep -y python3.6
ADD Python-3.6.6.tar.xz.md5 /Python-3.6.6.tar.xz.md5
# Install Python sources
RUN curl -O https://www.python.org/ftp/python/3.6.6/Python-3.6.6.tar.xz && \
md5sum -c Python-3.6.6.tar.xz.md5 && \
tar xf Python-3.6.6.tar.xz && \
rm -v Python-3.6.6.tar.xz
# Install mod-wsgi sources
RUN mkdir -p /dpkg && cd /dpkg && apt-get source libapache2-mod-wsgi-py3
# To be able to install Python outside the docker.
VOLUME /opt/python
# To be able to run Python; after building, ldconfig has to be re-run to do this.
# This makes it easier to use Python right after building (for example to build
# mod-wsgi for Python 3.6).
RUN echo /opt/python/lib > /etc/ld.so.conf.d/python.conf
RUN ldconfig
ENV PATH=/opt/python/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
ENV PYTHONSOURCE=/Python-3.6.6


@@ -1,13 +0,0 @@
FROM pillar_base
LABEL maintainer="Sybren A. Stüvel <sybren@blender.studio>"
ADD python /opt/python
RUN echo /opt/python/lib > /etc/ld.so.conf.d/python.conf
RUN ldconfig
RUN echo Python is installed in /opt/python/ > README.python
ENV PATH=/opt/python/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
RUN cd /opt/python/bin && \
ln -s python3 python


@@ -1,20 +0,0 @@
FROM armadillica/pillar_py:3.6
LABEL maintainer="Sybren A. Stüvel <sybren@blender.studio>"
RUN set -ex; \
apt-get update; \
DEBIAN_FRONTEND=noninteractive apt-get install -qy \
git \
build-essential \
checkinstall \
libffi-dev \
libssl-dev \
libjpeg-dev \
zlib1g-dev
ENV WHEELHOUSE=/data/wheelhouse
ENV PIP_WHEEL_DIR=/data/wheelhouse
ENV PIP_FIND_LINKS=/data/wheelhouse
RUN mkdir -p $WHEELHOUSE
VOLUME /data/wheelhouse


@@ -1,62 +0,0 @@
#!/usr/bin/env bash
DOCKER_IMAGE_NAME=armadillica/pillar_wheelbuilder
set -e
# macOS does not support readlink -f, so we use greadlink instead
if [ $(uname) == 'Darwin' ]; then
    command -v greadlink >/dev/null 2>&1 || { echo >&2 "Install greadlink using brew."; exit 1; }
    readlink='greadlink'
else
    readlink='readlink'
fi
TOPDEVDIR="$($readlink -f ../../..)"
echo "Top-level development dir is $TOPDEVDIR"
WHEELHOUSE="$($readlink -f ../4_run/wheelhouse)"
if [ -z "$WHEELHOUSE" ]; then
    echo "Error, ../4_run might not exist." >&2
    exit 2
fi
echo "Wheelhouse is $WHEELHOUSE"
mkdir -p "$WHEELHOUSE"
rm -f "$WHEELHOUSE"/*
docker build -t $DOCKER_IMAGE_NAME:latest .
GID=$(id -g)
docker run --rm -i \
    -v "$WHEELHOUSE:/data/wheelhouse" \
    -v "$TOPDEVDIR:/data/topdev" \
    $DOCKER_IMAGE_NAME <<EOT
set -e
set -x
# Globally upgrade Pip, so that we can get a compatible version of the cryptography package.
# See https://github.com/pyca/cryptography/issues/5771
pip3 install --upgrade pip setuptools wheel
# Pin poetry to 1.0, as more recent versions do not support nested filesystem package
# dependencies.
pip3 install wheel poetry==1.0 cryptography==2.7
# Build wheels for all dependencies.
cd /data/topdev/blender-cloud
poetry install --no-dev
# Apparently pip doesn't like projects without setup.py, so it thinks we have 'pillar-svnman' as
# a requirement (because that's the name of the directory). We have to grep that out.
poetry run pip3 freeze | grep -v '\(pillar\)\|\(^-[ef] \)' > \$WHEELHOUSE/requirements.txt
pip3 wheel --wheel-dir=\$WHEELHOUSE -r \$WHEELHOUSE/requirements.txt
chown -R $UID:$GID \$WHEELHOUSE
EOT
# Remove our own projects, they shouldn't be installed as wheel (for now).
rm -f $WHEELHOUSE/{attract,flamenco,pillar,pillarsdk}*.whl
echo "Build of $DOCKER_IMAGE_NAME:latest is done."


@@ -0,0 +1,39 @@
<VirtualHost *:80>
    # EnableSendfile on
    XSendFile on
    XSendFilePath /data/storage/pillar
    XSendFilePath /data/git/pillar
    XSendFilePath /data/venv/lib/python2.7/site-packages/attract/static/
    XSendFilePath /data/venv/lib/python2.7/site-packages/flamenco/static/
    XSendFilePath /data/git/blender-cloud

    ServerAdmin webmaster@localhost
    DocumentRoot /var/www/html

    # Available loglevels: trace8, ..., trace1, debug, info, notice, warn,
    # error, crit, alert, emerg.
    # It is also possible to configure the loglevel for particular
    # modules, e.g.
    # LogLevel info ssl:warn

    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined

    WSGIDaemonProcess cloud processes=4 threads=1 maximum-requests=10000
    WSGIPassAuthorization On
    WSGIScriptAlias / /data/git/blender-cloud/runserver.wsgi \
        process-group=cloud application-group=%{GLOBAL}

    <Directory /data/git/blender-cloud>
        <Files runserver.wsgi>
            Require all granted
        </Files>
    </Directory>

    # Temporary edit to remap the old cloudapi.blender.org to cloud.blender.org/api
    RewriteEngine On
    RewriteCond "%{HTTP_HOST}" "^cloudapi\.blender\.org" [NC]
    RewriteRule (.*) /api$1 [PT]
</VirtualHost>


@@ -133,9 +133,9 @@ AccessFileName .htaccess
# Note that the use of %{X-Forwarded-For}i instead of %h is not recommended.
# Use mod_remoteip instead.
#
LogFormat "%v:%p %a %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" vhost_combined
LogFormat "%a %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" combined
LogFormat "%a %l %u %t \"%r\" %>s %O" common
LogFormat "%v:%p %h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" vhost_combined
LogFormat "%h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" combined
LogFormat "%h %l %u %t \"%r\" %>s %O" common
LogFormat "%{Referer}i -> %U" referer
LogFormat "%{User-agent}i" agent

docker/3_run/build.sh Executable file

@@ -0,0 +1,5 @@
#!/usr/bin/env bash
cp ../../requirements.txt .;
docker build -t armadillica/blender_cloud -f run.docker .;
rm requirements.txt;


@@ -0,0 +1,25 @@
#!/usr/bin/env bash
if [ ! -f /installed ]; then
    echo "Installing pillar and pillar-sdk"
    # TODO: currently doing 'pip install -e' takes a long time, so we symlink instead.
    # . /data/venv/bin/activate && pip install -e /data/git/pillar
    ln -s /data/git/pillar/pillar /data/venv/lib/python2.7/site-packages/pillar
    # . /data/venv/bin/activate && pip install -e /data/git/attract
    ln -s /data/git/attract/attract /data/venv/lib/python2.7/site-packages/attract
    # . /data/venv/bin/activate && pip install -e /data/git/flamenco/packages/flamenco
    ln -s /data/git/flamenco/packages/flamenco/flamenco/ /data/venv/lib/python2.7/site-packages/flamenco
    # . /data/venv/bin/activate && pip install -e /data/git/pillar-python-sdk
    ln -s /data/git/pillar-python-sdk/pillarsdk /data/venv/lib/python2.7/site-packages/pillarsdk
    touch /installed
fi

if [ "$DEV" = "true" ]; then
    echo "Running in development mode"
    cd /data/git/blender-cloud
    bash /manage.sh runserver --host='0.0.0.0'
else
    # Run Apache
    a2enmod rewrite
    /usr/sbin/apache2ctl -D FOREGROUND
fi

docker/3_run/manage.sh Executable file

@@ -0,0 +1,5 @@
#!/usr/bin/env bash
set -e
. /data/venv/bin/activate
cd /data/git/blender-cloud
python manage.py "$@"

docker/3_run/run.docker Executable file

@@ -0,0 +1,46 @@
FROM pillar_base
RUN apt-get update && apt-get install -qyy \
-o APT::Install-Recommends=true -o APT::Install-Suggests=false \
git \
apache2 \
libapache2-mod-wsgi \
libapache2-mod-xsendfile \
libjpeg8 \
libtiff5 \
nano vim curl \
&& rm -rf /var/lib/apt/lists/*
ENV APACHE_RUN_USER www-data
ENV APACHE_RUN_GROUP www-data
ENV APACHE_LOG_DIR /var/log/apache2
ENV APACHE_PID_FILE /var/run/apache2.pid
ENV APACHE_RUN_DIR /var/run/apache2
ENV APACHE_LOCK_DIR /var/lock/apache2
RUN mkdir -p $APACHE_RUN_DIR $APACHE_LOCK_DIR $APACHE_LOG_DIR
ADD requirements.txt /requirements.txt
ADD wheelhouse /data/wheelhouse
RUN . /data/venv/bin/activate \
&& pip install --no-index --find-links=/data/wheelhouse -r requirements.txt \
&& rm /requirements.txt
VOLUME /data/git/blender-cloud
VOLUME /data/git/pillar
VOLUME /data/git/pillar-python-sdk
VOLUME /data/config
VOLUME /data/storage
ENV USE_X_SENDFILE True
EXPOSE 80
EXPOSE 5000
ADD apache2.conf /etc/apache2/apache2.conf
ADD 000-default.conf /etc/apache2/sites-available/000-default.conf
ADD docker-entrypoint.sh /docker-entrypoint.sh
ADD manage.sh /manage.sh
ENTRYPOINT ["bash", "/docker-entrypoint.sh"]


@@ -1,65 +0,0 @@
FROM armadillica/pillar_py:3.6
LABEL maintainer="Sybren A. Stüvel <sybren@blender.studio>"
RUN set -ex; \
apt-get update; \
DEBIAN_FRONTEND=noninteractive apt-get install -qy \
-o APT::Install-Recommends=false -o APT::Install-Suggests=false \
git \
apache2 \
libapache2-mod-xsendfile \
libjpeg8 \
libtiff5 \
ffmpeg \
rsyslog logrotate \
nano vim-tiny curl; \
rm -rf /var/lib/apt/lists/*
RUN ln -s /usr/bin/vim.tiny /usr/bin/vim
ENV APACHE_RUN_USER www-data
ENV APACHE_RUN_GROUP www-data
ENV APACHE_LOG_DIR /var/log/apache2
ENV APACHE_PID_FILE /var/run/apache2.pid
ENV APACHE_RUN_DIR /var/run/apache2
ENV APACHE_LOCK_DIR /var/lock/apache2
RUN mkdir -p $APACHE_RUN_DIR $APACHE_LOCK_DIR $APACHE_LOG_DIR
ADD wheelhouse /data/wheelhouse
RUN pip3 install --no-index --find-links=/data/wheelhouse -r /data/wheelhouse/requirements.txt
VOLUME /data/config
VOLUME /data/storage
VOLUME /var/log
ENV USE_X_SENDFILE True
EXPOSE 80
EXPOSE 5000
ADD apache/remoteip.conf /etc/apache2/mods-available/
ADD apache/wsgi-py36.* /etc/apache2/mods-available/
RUN a2enmod remoteip && a2enmod rewrite && a2enmod wsgi-py36
ADD apache/apache2.conf /etc/apache2/apache2.conf
ADD apache/000-default.conf /etc/apache2/sites-available/000-default.conf
ADD apache/logrotate.conf /etc/logrotate.d/apache2
ADD *.sh /
# Remove some empty top-level directories we won't use anyway.
RUN rmdir /media /home 2>/dev/null || true
# This file includes some useful commands to have in the shell history
# for easy access.
ADD bash_history /root/.bash_history
ENTRYPOINT /docker-entrypoint.sh
# Add the most-changing files as last step for faster rebuilds.
ADD config_local.py /data/git/blender-cloud/
ADD staging /data/git
RUN python3 -c "import re, secrets; \
f = open('/data/git/blender-cloud/config_local.py', 'a'); \
h = re.sub(r'[_.~-]', '', secrets.token_urlsafe())[:8]; \
print(f'STATIC_FILE_HASH = {h!r}', file=f)"


@@ -1,56 +0,0 @@
<VirtualHost *:80>
    XSendFile on
    XSendFilePath /data/storage/pillar
    XSendFilePath /data/git/pillar/pillar/web/static/
    XSendFilePath /data/git/attract/attract/static/
    XSendFilePath /data/git/flamenco/flamenco/static/
    XSendFilePath /data/git/pillar-svnman/svnman/static/
    XSendFilePath /data/git/blender-cloud/static/
    XSendFilePath /data/git/blender-cloud/cloud/static/

    ServerAdmin webmaster@localhost
    DocumentRoot /var/www/html

    # Available loglevels: trace8, ..., trace1, debug, info, notice, warn,
    # error, crit, alert, emerg.
    # It is also possible to configure the loglevel for particular
    # modules, e.g.
    # LogLevel info ssl:warn

    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined

    WSGIDaemonProcess cloud processes=2 threads=64 maximum-requests=10000
    WSGIPassAuthorization On
    WSGIScriptAlias / /data/git/blender-cloud/runserver.wsgi \
        process-group=cloud application-group=%{GLOBAL}

    <Directory /data/git/blender-cloud>
        <Files runserver.wsgi>
            Require all granted
        </Files>
    </Directory>

    # Temporary edit to remap the old cloudapi.blender.org to cloud.blender.org/api
    RewriteEngine On
    RewriteCond "%{HTTP_HOST}" "^cloudapi\.blender\.org" [NC]
    RewriteRule (.*) /api$1 [PT]

    # Redirects for blender-cloud projects
    RewriteRule "^/p/blender-cloud/?$" "/blog" [R=301,L]
    RewriteRule "^/agent327/?$" "/p/agent-327" [R=301,L]
    RewriteRule "^/caminandes/?$" "/p/caminandes-3" [R=301,L]
    RewriteRule "^/cf2/?$" "/p/creature-factory-2" [R=301,L]
    RewriteRule "^/characters/?$" "/p/characters" [R=301,L]
    RewriteRule "^/gallery/?$" "/p/gallery" [R=301,L]
    RewriteRule "^/hdri/?$" "/p/hdri" [R=301,L]
    RewriteRule "^/textures/?$" "/p/textures" [R=301,L]
    RewriteRule "^/training/?$" "/courses" [R=301,L]
    RewriteRule "^/spring/?$" "/p/spring" [R=301,L]
    RewriteRule "^/hero/?$" "/p/hero" [R=301,L]
    RewriteRule "^/coffee-run/?$" "/p/coffee-run" [R=301,L]
    RewriteRule "^/settlers/?$" "/p/settlers" [R=301,L]

    # Waking the forest was moved from the art gallery to its own workshop
    RewriteRule "^/p/gallery/58cfec4f88ac8f1440aeb309/?$" "/p/waking-the-forest" [R=301,L]
</VirtualHost>


@@ -1,21 +0,0 @@
/var/log/apache2/*.log {
    daily
    missingok
    rotate 14
    size 100M
    compress
    delaycompress
    notifempty
    create 640 root adm
    sharedscripts
    postrotate
        if /etc/init.d/apache2 status > /dev/null ; then \
            /etc/init.d/apache2 reload > /dev/null; \
        fi;
    endscript
    prerotate
        if [ -d /etc/logrotate.d/httpd-prerotate ]; then \
            run-parts /etc/logrotate.d/httpd-prerotate; \
        fi; \
    endscript
}


@@ -1,2 +0,0 @@
RemoteIPHeader X-Forwarded-For
RemoteIPInternalProxy 172.16.0.0/12


@@ -1,122 +0,0 @@
<IfModule mod_wsgi.c>
#This config file is provided to give an overview of the directives,
#which are only allowed in the 'server config' context.
#For a detailed description of all available directives please read
#http://code.google.com/p/modwsgi/wiki/ConfigurationDirectives
#WSGISocketPrefix: Configure directory to use for daemon sockets.
#
#Apache's DEFAULT_REL_RUNTIMEDIR should be the proper place for WSGI's
#Socket. In case you want to mess with the permissions of the directory,
#you need to define WSGISocketPrefix to an alternative directory.
#See http://code.google.com/p/modwsgi/wiki/ConfigurationIssues for more
#information
#WSGISocketPrefix /var/run/apache2/wsgi
#WSGIPythonOptimize: Enables basic Python optimisation features.
#
#Sets the level of Python compiler optimisations. The default is '0'
#which means no optimisations are applied.
#Setting the optimisation level to '1' or above will have the effect
#of enabling basic Python optimisations and changes the filename
#extension for compiled (bytecode) files from .pyc to .pyo.
#When the optimisation level is set to '2', doc strings will not be
#generated and retained. This will result in a smaller memory footprint,
#but may cause some Python packages which interrogate doc strings in some
#way to fail.
#WSGIPythonOptimize 0
#WSGIPythonPath: Additional directories to search for Python modules,
# overriding the PYTHONPATH environment variable.
#
#Used to specify additional directories to search for Python modules.
#If multiple directories are specified they should be separated by a ':'.
WSGIPythonPath /opt/python/lib/python3.6/site-packages
#WSGIPythonEggs: Directory to use for Python eggs cache.
#
#Used to specify the directory to be used as the Python eggs cache
#directory for all sub interpreters created within embedded mode.
#This directive achieves the same effect as having set the
#PYTHON_EGG_CACHE environment variable.
#Note that the directory specified must exist and be writable by the user
#that the Apache child processes run as. The directive only applies to
#mod_wsgi embedded mode. To set the Python eggs cache directory for
#mod_wsgi daemon processes, use the 'python-eggs' option to the
#WSGIDaemonProcess directive instead.
#WSGIPythonEggs directory
#WSGIRestrictEmbedded: Enable restrictions on use of embedded mode.
#
#The WSGIRestrictEmbedded directive determines whether mod_wsgi embedded
#mode is enabled or not. If set to 'On' and the restriction on embedded
#mode is therefore enabled, any attempt to make a request against a
#WSGI application which hasn't been properly configured so as to be
#delegated to a daemon mode process will fail with a HTTP internal server
#error response.
#WSGIRestrictEmbedded On|Off
#WSGIRestrictStdin: Enable restrictions on use of STDIN.
#WSGIRestrictStdout: Enable restrictions on use of STDOUT.
#WSGIRestrictSignal: Enable restrictions on use of signal().
#
#Well behaved WSGI applications neither should try to read/write from/to
#STDIN/STDOUT, nor should they try to register signal handlers. If your
#application needs an exception from this rule, you can disable the
#restrictions here.
#WSGIRestrictStdin On
#WSGIRestrictStdout On
#WSGIRestrictSignal On
#WSGIAcceptMutex: Specify type of accept mutex used by daemon processes.
#
#The WSGIAcceptMutex directive sets the method that mod_wsgi will use to
#serialize multiple daemon processes in a process group accepting requests
#on a socket connection from the Apache child processes. If this directive
#is not defined then the same type of mutex mechanism as used by Apache for
#the main Apache child processes when accepting connections from a client
#will be used. If set the method types are the same as for the Apache
#AcceptMutex directive.
#WSGIAcceptMutex default
#WSGIImportScript: Specify a script file to be loaded on process start.
#
#The WSGIImportScript directive can be used to specify a script file to be
#loaded when a process starts. Options must be provided to indicate the
#name of the process group and the application group into which the script
#will be loaded.
#WSGIImportScript process-group=name application-group=name
#WSGILazyInitialization: Enable/disable lazy initialisation of Python.
#
#The WSGILazyInitialization directives sets whether or not the Python
#interpreter is preinitialised within the Apache parent process or whether
#lazy initialisation is performed, and the Python interpreter only
#initialised in the Apache server processes or mod_wsgi daemon processes
#after they have forked from the Apache parent process.
#WSGILazyInitialization On|Off
</IfModule>


@@ -1 +0,0 @@
LoadModule wsgi_module /opt/python/mod-wsgi/mod_wsgi.so


@@ -1,9 +0,0 @@
bash docker-entrypoint.sh
env | sort
apache2ctl start
apache2ctl graceful
/manage.sh operations worker -- -C
celery status --broker amqp://guest:guest@rabbit:5672//
celery events --broker amqp://guest:guest@rabbit:5672//
tail -n 40 -f /var/log/apache2/access.log
tail -n 40 -f /var/log/apache2/error.log


@@ -1,5 +0,0 @@
#!/bin/bash -e
docker build -t armadillica/blender_cloud:latest .
echo "Done, built armadillica/blender_cloud:latest"


@@ -1,6 +0,0 @@
#!/usr/bin/env bash
source /install_scripts.sh
source /manage.sh celery beat -- \
    --schedule /data/storage/pillar/celerybeat-schedule.db \
    --pid /data/storage/pillar/celerybeat.pid


@@ -1,4 +0,0 @@
#!/usr/bin/env bash
source /install_scripts.sh
source /manage.sh celery worker -- -C


@@ -1,123 +0,0 @@
import os
from collections import defaultdict
DEBUG = False
SCHEME = 'https'
PREFERRED_URL_SCHEME = 'https'
SERVER_NAME = 'cloud.blender.org'
# os.environ['OAUTHLIB_INSECURE_TRANSPORT'] = 'true'
os.environ['PILLAR_MONGO_DBNAME'] = 'cloud'
os.environ['PILLAR_MONGO_PORT'] = '27017'
os.environ['PILLAR_MONGO_HOST'] = 'mongo'
USE_X_SENDFILE = True
STORAGE_BACKEND = 'gcs'
CDN_SERVICE_DOMAIN = 'blendercloud-pro.r.worldssl.net'
CDN_CONTENT_SUBFOLDER = ''
CDN_STORAGE_ADDRESS = 'push-11.cdnsun.com'
CACHE_TYPE = 'redis' # null
CACHE_KEY_PREFIX = 'pw_'
CACHE_REDIS_HOST = 'redis'
CACHE_REDIS_PORT = '6379'
CACHE_REDIS_URL = 'redis://redis:6379'
PILLAR_SERVER_ENDPOINT = 'https://cloud.blender.org/api/'
BLENDER_ID_ENDPOINT = 'https://www.blender.org/id/'
GCLOUD_APP_CREDENTIALS = '/data/config/google_app.json'
GCLOUD_PROJECT = 'blender-cloud'
MAIN_PROJECT_ID = '563a9c8cf0e722006ce97b03'
# MAIN_PROJECT_ID = '57aa07c088bef606e89078bd'
ALGOLIA_INDEX_USERS = 'pro_Users'
ALGOLIA_INDEX_NODES = 'pro_Nodes'
ZENCODER_NOTIFICATIONS_URL = 'https://cloud.blender.org/api/encoding/zencoder/notifications'
FILE_LINK_VALIDITY = defaultdict(
    lambda: 3600 * 24 * 30,  # default of 1 month.
    gcs=3600 * 23,  # 23 hours for Google Cloud Storage.
    cdnsun=3600 * 23
)

LOGGING = {
    'version': 1,
    'formatters': {
        'default': {'format': '%(levelname)8s %(name)s %(message)s'}
    },
    'handlers': {
        'console': {
            'class': 'logging.StreamHandler',
            'formatter': 'default',
            'stream': 'ext://sys.stderr',
        }
    },
    'loggers': {
        'pillar': {'level': 'INFO'},
        # 'pillar.auth': {'level': 'DEBUG'},
        # 'pillar.api.blender_id': {'level': 'DEBUG'},
        # 'pillar.api.blender_cloud.subscription': {'level': 'DEBUG'},
        'bcloud': {'level': 'INFO'},
        'cloud': {'level': 'INFO'},
        'attract': {'level': 'INFO'},
        'flamenco': {'level': 'INFO'},
        # 'pillar.api.file_storage': {'level': 'DEBUG'},
        # 'pillar.api.file_storage.ensure_valid_link': {'level': 'INFO'},
        'pillar.api.file_storage.refresh_links_for_backend': {'level': 'DEBUG'},
        'werkzeug': {'level': 'DEBUG'},
        'eve': {'level': 'WARNING'},
        # 'elasticsearch': {'level': 'DEBUG'},
    },
    'root': {
        'level': 'WARNING',
        'handlers': [
            'console',
        ],
    }
}

# Latest version of the add-on.
BLENDER_CLOUD_ADDON_VERSION = '1.9.0'

REDIRECTS = {
    # For old links, refer to the services page (hopefully it refreshes then)
    'downloads/blender_cloud-latest-bundle.zip': 'https://cloud.blender.org/services#blender-addon',

    # Latest Blender Cloud add-on.
    'downloads/blender_cloud-latest-addon.zip':
        f'https://storage.googleapis.com/institute-storage/addons/'
        f'blender_cloud-{BLENDER_CLOUD_ADDON_VERSION}.addon.zip',

    # Redirect old Grafista endpoint to /stats
    '/stats/': '/stats',
}

UTM_LINKS = {
    'cartoon_brew': {
        'image': 'https://imgur.com/13nQTi3.png',
        'link': 'https://store.blender.org/product/membership/'
    }
}
SVNMAN_REPO_URL = 'https://svn.blender.cloud/repo/'
SVNMAN_API_URL = 'https://svn.blender.cloud/api/'
# Mail options, see pillar.celery.email_tasks.
SMTP_HOST = 'proog.blender.org'
SMTP_PORT = 25
SMTP_USERNAME = 'server@blender.cloud'
SMTP_TIMEOUT = 30 # timeout in seconds, https://docs.python.org/3/library/smtplib.html#smtplib.SMTP
MAIL_RETRY = 180 # in seconds, delay until trying to send an email again.
MAIL_DEFAULT_FROM_NAME = 'Blender Cloud'
MAIL_DEFAULT_FROM_ADDR = 'cloudsupport@blender.org'
# MUST be 8 characters long, see pillar.flask_extra.HashedPathConverter
# STATIC_FILE_HASH = '12345678'
# The value used in production is appended from Dockerfile.


@@ -1,15 +0,0 @@
#!/usr/bin/env bash
source /install_scripts.sh
# Make sure that log rotation works.
mkdir -p ${APACHE_LOG_DIR}
service cron start
if [ "$DEV" = "true" ]; then
echo "Running in development mode"
cd /data/git/blender-cloud
exec bash /manage.sh runserver --host='0.0.0.0'
else
exec /usr/sbin/apache2ctl -D FOREGROUND
fi


@@ -1,21 +0,0 @@
#!/bin/sh
if [ -f /installed ]; then
    return
fi

SITEPKG=$(echo /opt/python/lib/python3.*/site-packages)
echo "Installing Blender Cloud packages into $SITEPKG"

# TODO: 'pip3 install -e' runs 'setup.py develop', which runs 'setup.py egg_info',
# which can't write the egg info to the read-only /data/git volume. This is why
# we manually install the links.
for SUBPROJ in /data/git/{pillar,pillar-python-sdk,attract,flamenco,pillar-svnman}; do
    NAME=$(python3 $SUBPROJ/setup.py --name)
    echo "... $NAME"
    echo $SUBPROJ >> $SITEPKG/easy-install.pth
    echo $SUBPROJ > $SITEPKG/$NAME.egg-link
done
echo "All packages installed."
touch /installed


@@ -1,5 +0,0 @@
#!/usr/bin/env bash
set -e
cd /data/git/blender-cloud
exec python manage.py "$@"


@@ -1,96 +0,0 @@
# Setting up a production machine
To get the docker stack up and running, we use the following, on an Ubuntu 16.10 machine.
## 0. Basic stuff
Install the machine, use `locale-gen nl_NL.UTF-8` or similar commands to generate locale
definitions. Set up automatic security updates and backups, the usual.
## 1. Install Docker
Install Docker itself, as described in the
[Docker CE for Ubuntu manual](https://store.docker.com/editions/community/docker-ce-server-ubuntu?tab=description):
    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
    add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
        $(lsb_release -cs) stable"
    apt-get update
    apt-get install docker-ce
## 2. Configure Docker to use "overlay"
Configure Docker to use "overlay" instead of "aufs" for the images. This prevents
[segfaults in auplink](https://bugs.launchpad.net/ubuntu/+source/aufs-tools/+bug/1442568).
1. Set `DOCKER_OPTS="-s overlay"` in `/etc/default/docker`
2. Copy `/lib/systemd/system/docker.service` to `/etc/systemd/system/docker.service`.
   This allows later upgrading of Docker without overwriting the changes we're about to make.
3. Edit the `[Service]` section of `/etc/systemd/system/docker.service`:
   1. Add `EnvironmentFile=/etc/default/docker`
   2. Append ` $DOCKER_OPTS` to the `ExecStart` line
4. Run `systemctl daemon-reload`
5. Remove all your containers and images.
6. Restart Docker: `systemctl restart docker`
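
After these edits, the relevant part of `/etc/systemd/system/docker.service` should look roughly like this (a sketch; the exact `ExecStart` binary and flags depend on the installed Docker version):

    [Service]
    EnvironmentFile=/etc/default/docker
    ExecStart=/usr/bin/dockerd -H fd:// $DOCKER_OPTS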
## 3. Pull the Blender Cloud docker image
`docker pull armadillica/blender_cloud:latest`
## 4. Get docker-compose + our repositories
See the [Quick setup](../README.md) for how to get those. Then run:

    cd /data/git/blender-cloud/docker
    docker-compose up -d
Set up permissions for the Docker volumes:

- `/data/storage/pillar`: writable by `www-data` and `root` (do a `chown root:www-data`
  and `chmod 2770`).
- `/data/storage/db`: writable by uid 999.
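
A minimal sketch of those permission commands (assuming the directories already exist):

    chown root:www-data /data/storage/pillar
    chmod 2770 /data/storage/pillar
    chown 999 /data/storage/db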
## 5. Set up TLS
Place TLS certificates in `/data/certs/{cloud,cloudapi}.blender.org.pem`.
They should contain (in order) the private key, the host certificate, and the
CA certificate.
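
For illustration, such a bundle could be assembled like this (hypothetical input filenames):

    cat privkey.pem cert.pem ca.pem > /data/certs/cloud.blender.org.pem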
## 6. Create a local config
Blender Cloud expects the following files to exist:
- `/data/git/blender-cloud/config_local.py` with machine-local configuration overrides
- `/data/config/google_app.json` with Google Cloud Storage credentials.
When run from Docker, the `docker/4_run/config_local.py` file will be used. Overrides for that file
can be placed in `/data/config/config_secrets.py`.
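
As an illustration, `/data/config/config_secrets.py` could override machine-specific secrets like this (the setting names come from `config_local.py`; the values are placeholders):

    SECRET_KEY = 'some-long-random-string'
    OAUTH_CREDENTIALS = {
        'blender-id': {
            'id': 'CLOUD-OF-SNOWFLAKES-42',
            'secret': 'the-actual-oauth-secret',
        }
    }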
## 7. ElasticSearch & kibana
ElasticSearch and Kibana run in our self-rolled images. This is needed because by default
- ElasticSearch uses up to 2 GB of RAM, which is too much for our droplet, and
- the Docker images contain the proprietary X-Pack plugin, which we don't want.
This also gives us the opportunity to let Kibana do its optimization when we build the image, rather
than every time the container is recreated.
`/data/storage/elasticsearch` needs to be writable by UID 1000, GID 1000.
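
For example:

    chown -R 1000:1000 /data/storage/elasticsearch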
Kibana connects to [ElasticProxy](https://github.com/armadillica/elasticproxy), which only allows
GET, HEAD, and some specific POST requests. This ensures that the public-facing Kibana cannot be
used to change the ElasticSearch database.
Production Kibana can be placed in read-only mode, but this is not necessary now that we use
ElasticProxy. However, I've left it here for reference:
`curl -XPUT 'localhost:9200/.kibana/_settings' -d '{ "index.blocks.read_only" : true }'`
If editing is desired, temporarily turn off read-only mode:
`curl -XPUT 'localhost:9200/.kibana/_settings' -d '{ "index.blocks.read_only" : false }'`

docker/build.sh Executable file

@@ -0,0 +1,16 @@
#!/usr/bin/env bash
set -x;
set -e;
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
cd $DIR;
cd 1_base/;
bash build.sh;
cd ../2_build/;
bash build.sh;
cd ../3_run/;
bash build.sh;


@@ -1,183 +1,68 @@
version: '3.4'
services:
  mongo:
    image: mongo:3.4
    container_name: mongo
    restart: always
    volumes:
      - /data/storage/db:/data/db
      - /data/storage/db-bak:/data/db-bak  # for backing up stuff etc.
    ports:
      - "127.0.0.1:27017:27017"
    logging:
      driver: "json-file"
      options:
        max-size: "200k"
        max-file: "20"
  # Databases in use:
  # 0: Flask Cache
  # 1: Celery (backend)
  # 2: Celery (broker)
  redis:
    image: redis:5.0
    container_name: redis
    restart: always
    ports:
      - "127.0.0.1:6379:6379"
    logging:
      driver: "json-file"
      options:
        max-size: "200k"
        max-file: "20"
  elastic:
    # This image is defined in blender-cloud/docker/elastic
    image: armadillica/elasticsearch:6.1.1
    container_name: elastic
    restart: always
    volumes:
      # NOTE: this path must be writable by UID=1000 GID=1000.
      - /data/storage/elastic:/usr/share/elasticsearch/data
    ports:
      - "127.0.0.1:9200:9200"
    environment:
      ES_JAVA_OPTS: "-Xms256m -Xmx256m"
    logging:
      driver: "json-file"
      options:
        max-size: "200k"
        max-file: "20"
  elasticproxy:
    image: armadillica/elasticproxy:1.2
    container_name: elasticproxy
    restart: always
    command: /elasticproxy -elastic http://elastic:9200/
    depends_on:
      - elastic
    logging:
      driver: "json-file"
      options:
        max-size: "200k"
        max-file: "20"
  kibana:
    # This image is defined in blender-cloud/docker/elastic
    image: armadillica/kibana:6.1.1
    container_name: kibana
    restart: always
    environment:
      SERVER_NAME: "stats.cloud.blender.org"
      ELASTICSEARCH_URL: http://elasticproxy:9200
      CONSOLE_ENABLED: 'false'
      VIRTUAL_HOST: http://stats.cloud.blender.org/*,https://stats.cloud.blender.org/*,http://stats.cloud.local/*,https://stats.cloud.local/*
      VIRTUAL_HOST_WEIGHT: 20
      FORCE_SSL: "true"
      # See https://github.com/elastic/kibana/issues/5170#issuecomment-163042525
      NODE_OPTIONS: "--max-old-space-size=200"
    depends_on:
      - elasticproxy
    logging:
      driver: "json-file"
      options:
        max-size: "200k"
        max-file: "20"
  blender_cloud:
    image: armadillica/blender_cloud:latest
    container_name: blender_cloud
    restart: always
    environment:
      VIRTUAL_HOST: http://cloud.blender.org/*,https://cloud.blender.org/*,http://cloud.local/*,https://cloud.local/*
      VIRTUAL_HOST_WEIGHT: 10
      FORCE_SSL: "true"
      GZIP_COMPRESSION_TYPE: "text/html text/plain text/css application/javascript"
      PILLAR_CONFIG: /data/config/config_secrets.py
    volumes:
      # format: HOST:CONTAINER
      - /data/config:/data/config:ro
      - /data/storage/pillar:/data/storage/pillar
      - /data/log:/var/log
    depends_on:
      - mongo
      - redis
  celery_worker:
    image: armadillica/blender_cloud:latest
    entrypoint: /celery-worker.sh
    container_name: celery_worker
    restart: always
    environment:
      PILLAR_CONFIG: /data/config/config_secrets.py
    volumes:
      # format: HOST:CONTAINER
      - /data/config:/data/config:ro
      - /data/storage/pillar:/data/storage/pillar
      - /data/log:/var/log
    depends_on:
      - mongo
      - redis
    logging:
      driver: "json-file"
      options:
        max-size: "200k"
        max-file: "20"
  celery_beat:
    image: armadillica/blender_cloud:latest
    entrypoint: /celery-beat.sh
    container_name: celery_beat
    restart: always
    environment:
      PILLAR_CONFIG: /data/config/config_secrets.py
    volumes:
      # format: HOST:CONTAINER
      - /data/config:/data/config:ro
      - /data/storage/pillar:/data/storage/pillar
      - /data/log:/var/log
    depends_on:
      - mongo
      - redis
      - celery_worker
    logging:
      driver: "json-file"
      options:
        max-size: "200k"
        max-file: "20"
  letsencrypt:
    image: armadillica/picohttp:1.0
    container_name: letsencrypt
    restart: always
    environment:
      WEBROOT: /data/letsencrypt
      LISTEN: '[::]:80'
      VIRTUAL_HOST: http://cloud.blender.org/.well-known/*, http://stats.cloud.blender.org/.well-known/*
      VIRTUAL_HOST_WEIGHT: 30
    volumes:
      - /data/letsencrypt:/data/letsencrypt
  haproxy:
    # This image is defined in blender-cloud/docker/haproxy
    image: armadillica/haproxy:1.6.7
    container_name: haproxy
    restart: always
    ports:
      - "443:443"
      - "80:80"
    environment:
      - ADDITIONAL_SERVICES=docker:blender_cloud,docker:letsencrypt,docker:kibana
      - CERT_FOLDER=/certs/
      - TIMEOUT=connect 5s, client 5m, server 10m
      - SSL_BIND_CIPHERS=ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!MD5:!DSS
      - SSL_BIND_OPTIONS=no-sslv3
      - EXTRA_GLOBAL_SETTINGS=tune.ssl.default-dh-param 2048
    depends_on:
      - blender_cloud
      - letsencrypt
      - kibana
    volumes:
      - '/data/certs:/certs'
      - /var/run/docker.sock:/var/run/docker.sock

mongo:
  image: mongo
  container_name: mongo
  restart: always
  volumes:
    - /data/storage/db:/data/db
  ports:
    - "127.0.0.1:27017:27017"
redis:
  image: redis
  container_name: redis
  restart: always
blender_cloud:
  image: armadillica/blender_cloud
  container_name: blender_cloud
  restart: always
  environment:
    VIRTUAL_HOST: http://cloudapi.blender.org/*,https://cloudapi.blender.org/*,http://cloud.blender.org/*,https://cloud.blender.org/*,http://pillar-web/*
    VIRTUAL_HOST_WEIGHT: 10
    FORCE_SSL: "true"
  volumes:
    - /data/git/blender-cloud:/data/git/blender-cloud:ro
    - /data/git/attract:/data/git/attract:ro
    - /data/git/flamenco:/data/git/flamenco:ro
    - /data/git/pillar:/data/git/pillar:ro
    - /data/git/pillar-python-sdk:/data/git/pillar-python-sdk:ro
    - /data/config:/data/config:ro
    - /data/storage/pillar:/data/storage/pillar
  links:
    - mongo
    - redis
# notifserv:
#   container_name: notifserv
#   image: armadillica/pillar-notifserv:cd8fa678436563ac3b800b2721e36830c32e4656
#   restart: always
#   links:
#     - mongo
#   environment:
#     VIRTUAL_HOST: https://cloud.blender.org/notifications*,http://pillar-web/notifications*
#     VIRTUAL_HOST_WEIGHT: 20
#     FORCE_SSL: true
grafista:
  image: armadillica/grafista
  container_name: grafista
  restart: always
  environment:
    VIRTUAL_HOST: http://cloud.blender.org/stats/*,https://cloud.blender.org/stats/*,http://blender-cloud/stats/*
    VIRTUAL_HOST_WEIGHT: 20
    FORCE_SSL: "true"
  volumes:
    - /data/git/grafista:/data/git/grafista:ro
    - /data/storage/grafista:/data/storage
haproxy:
  image: dockercloud/haproxy
  container_name: haproxy
  restart: always
  ports:
    - "443:443"
    - "80:80"
  environment:
    - CERT_FOLDER=/certs/
    - TIMEOUT=connect 5s, client 5m, server 10m
  links:
    - blender_cloud
    - grafista
    # - notifserv
  volumes:
    - '/data/certs:/certs'


@@ -1,10 +0,0 @@
FROM docker.elastic.co/elasticsearch/elasticsearch:6.1.1
LABEL maintainer Sybren A. Stüvel <sybren@blender.studio>
RUN elasticsearch-plugin remove --purge x-pack
ADD elasticsearch.yml jvm.options /usr/share/elasticsearch/config/
USER root
RUN chown -R elasticsearch:elasticsearch /usr/share/elasticsearch/config/
USER elasticsearch


@@ -1,6 +0,0 @@
FROM docker.elastic.co/kibana/kibana:6.1.1
LABEL maintainer Sybren A. Stüvel <sybren@blender.studio>
RUN bin/kibana-plugin remove x-pack
ADD kibana.yml /usr/share/kibana/config/kibana.yml
RUN kibana 2>&1 | grep -m 1 "Optimization of .* complete"


@@ -1,15 +0,0 @@
#!/bin/bash -e
# When updating this, also update the versions in Dockerfile-*, and make sure that
# it matches the versions of the elasticsearch and elasticsearch_dsl packages
# used in Pillar. Those don't have to match exactly, but the major version should.
VERSION=6.1.1
docker build -t armadillica/elasticsearch:${VERSION} -f Dockerfile-elastic .
docker build -t armadillica/kibana:${VERSION} -f Dockerfile-kibana .
docker tag armadillica/elasticsearch:${VERSION} armadillica/elasticsearch:latest
docker tag armadillica/kibana:${VERSION} armadillica/kibana:latest
echo "Done, built armadillica/elasticsearch:${VERSION} and armadillica/kibana:${VERSION}"
echo "Also tagged as armadillica/elasticsearch:latest and armadillica/kibana:latest"


@@ -1,7 +0,0 @@
cluster.name: "blender-cloud"
network.host: 0.0.0.0
# minimum_master_nodes need to be explicitly set when bound on a public IP
# set to 1 to allow single node clusters
# Details: https://github.com/elastic/elasticsearch/pull/17288
discovery.zen.minimum_master_nodes: 1
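With minimum_master_nodes set to 1 a single node can elect itself master; whether it actually did can be checked once the container is up (a sketch, assuming port 9200 is reachable from the host):

```
# A single-node cluster typically reports "yellow" (no replicas) or "green".
curl -s http://localhost:9200/_cluster/health?pretty
```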


@@ -1,112 +0,0 @@
## JVM configuration
################################################################
## IMPORTANT: JVM heap size
################################################################
##
## You should always set the min and max JVM heap
## size to the same value. For example, to set
## the heap to 4 GB, set:
##
## -Xms4g
## -Xmx4g
##
## See https://www.elastic.co/guide/en/elasticsearch/reference/current/heap-size.html
## for more information
##
################################################################
# Xms represents the initial size of total heap space
# Xmx represents the maximum size of total heap space
# Sybren: commented out so that we can set those options using the ES_JAVA_OPTS environment variable.
#-Xms512m
#-Xmx512m
################################################################
## Expert settings
################################################################
##
## All settings below this section are considered
## expert settings. Don't tamper with them unless
## you understand what you are doing
##
################################################################
## GC configuration
-XX:+UseConcMarkSweepGC
-XX:CMSInitiatingOccupancyFraction=75
-XX:+UseCMSInitiatingOccupancyOnly
## optimizations
# pre-touch memory pages used by the JVM during initialization
-XX:+AlwaysPreTouch
## basic
# force the server VM (remove on 32-bit client JVMs)
-server
# explicitly set the stack size (reduce to 320k on 32-bit client JVMs)
-Xss1m
# set to headless, just in case
-Djava.awt.headless=true
# ensure UTF-8 encoding by default (e.g. filenames)
-Dfile.encoding=UTF-8
# use our provided JNA always versus the system one
-Djna.nosys=true
# use old-style file permissions on JDK9
-Djdk.io.permissionsUseCanonicalPath=true
# flags to configure Netty
-Dio.netty.noUnsafe=true
-Dio.netty.noKeySetOptimization=true
-Dio.netty.recycler.maxCapacityPerThread=0
# log4j 2
-Dlog4j.shutdownHookEnabled=false
-Dlog4j2.disable.jmx=true
-Dlog4j.skipJansi=true
## heap dumps
# generate a heap dump when an allocation from the Java heap fails
# heap dumps are created in the working directory of the JVM
-XX:+HeapDumpOnOutOfMemoryError
# specify an alternative path for heap dumps
# ensure the directory exists and has sufficient space
#-XX:HeapDumpPath=${heap.dump.path}
## GC logging
#-XX:+PrintGCDetails
#-XX:+PrintGCTimeStamps
#-XX:+PrintGCDateStamps
#-XX:+PrintClassHistogram
#-XX:+PrintTenuringDistribution
#-XX:+PrintGCApplicationStoppedTime
# log GC status to a file with time stamps
# ensure the directory exists
#-Xloggc:${loggc}
# By default, the GC log file will not rotate.
# By uncommenting the lines below, the GC log file
# will be rotated every 128MB at most 32 times.
#-XX:+UseGCLogFileRotation
#-XX:NumberOfGCLogFiles=32
#-XX:GCLogFileSize=128M
# Elasticsearch 5.0.0 will throw an exception on unquoted field names in JSON.
# If documents were already indexed with unquoted fields in a previous version
# of Elasticsearch, some operations may throw errors.
#
# WARNING: This option will be removed in Elasticsearch 6.0.0 and is provided
# only for migration purposes.
#-Delasticsearch.json.allow_unquoted_field_names=true
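Since the -Xms/-Xmx lines above are commented out, the heap has to be sized from the environment when the container starts; a sketch (image name from the build script earlier; 2g is only an example figure):

```
docker run -d --name elasticsearch \
    -e ES_JAVA_OPTS='-Xms2g -Xmx2g' \
    armadillica/elasticsearch:6.1.1
```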


@@ -1,8 +0,0 @@
---
server.name: kibana
server.host: "0"
elasticsearch.url: http://elasticsearch:9200
# Hide dev tools
console.enabled: false


@@ -1,17 +0,0 @@
#!/usr/bin/env bash
set -xe
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
cd $DIR/1_base
bash build.sh
cd $DIR/2_buildpy
bash build.sh
cd $DIR/3_buildwheels
bash build.sh
cd $DIR/4_run
bash build.sh


@@ -1,5 +0,0 @@
FROM dockercloud/haproxy:1.6.7
LABEL maintainer="Sybren A. Stüvel <sybren@blender.studio>"
# Fix https://talosintelligence.com/vulnerability_reports/TALOS-2019-0782
RUN sed 's/root::/root:!:/' -i /etc/shadow
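To confirm the fix took effect, the root entry of the built image can be inspected (a quick sketch):

```
# Expect "root:!:..." (locked) rather than "root::..." (passwordless).
docker run --rm armadillica/haproxy:1.6.7 grep '^root:' /etc/shadow
```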


@@ -1,10 +0,0 @@
#!/bin/bash -e
# When updating this, also update the version in Dockerfile
VERSION=1.6.7
docker build -t armadillica/haproxy:${VERSION} .
docker tag armadillica/haproxy:${VERSION} armadillica/haproxy:latest
echo "Done, built armadillica/haproxy:${VERSION}"
echo "Also tagged as armadillica/haproxy:latest"


@@ -1,5 +0,0 @@
# Change to suit your needs, then place in /etc/cron.d/mongo-backup
# (so remove the .cron from the name)
MAILTO=yourname@youraddress.org
30 5 * * * root /root/docker/mongo-backup.sh
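The .cron suffix has to go because cron skips files in /etc/cron.d whose names contain a dot; installing it could look like this (a sketch, assuming the checked-in copy is named mongo-backup.cron):

```
sudo install -m 644 mongo-backup.cron /etc/cron.d/mongo-backup
```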


@@ -1,28 +0,0 @@
#!/bin/bash -e
BACKUPDIR=/data/storage/db-bak
DATE=$(date +'%Y-%m-%d-%H%M')
ARCHIVE=$BACKUPDIR/mongo-live-$DATE.tar.xz
# Just a sanity check before we give it to 'rm -rf'
if [ -z "$DATE" ]; then
echo "Empty string found where the date should be, aborting."
exit 1
fi
# /data/db-bak in Docker is /data/storage/db-bak on the host.
docker exec mongo mongodump -d cloud \
--out /data/db-bak/dump-$DATE \
--excludeCollection tokens \
--excludeCollection flamenco_task_logs \
--quiet
cd $BACKUPDIR
tar -Jcf $ARCHIVE dump-$DATE/
rm -rf dump-$DATE
TO_DELETE="$(ls $BACKUPDIR/mongo-live-*.tar.xz | head -n -7)"
[ -z "$TO_DELETE" ] || rm "$TO_DELETE"
rsync -a $BACKUPDIR/mongo-live-*.tar.xz cloud-backup@swami-direct.blender.cloud:
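Restoring from one of these archives is roughly the reverse; a sketch (the archive name is an example, and remember that the tokens and flamenco_task_logs collections are deliberately absent from the dump):

```
cd /data/storage/db-bak
tar -xJf mongo-live-2017-02-09-0530.tar.xz
# /data/storage/db-bak on the host is /data/db-bak inside the mongo container.
docker exec mongo mongorestore --db cloud /data/db-bak/dump-2017-02-09-0530/cloud
```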


@@ -1,27 +0,0 @@
#!/bin/bash -e
# First time creating a certificate for a domain, use:
# certbot certonly --webroot -w /data/letsencrypt -d $DOMAINNAME
cd /data/letsencrypt
certbot renew
echo
echo "Recreating HAProxy certificates"
for certdir in /etc/letsencrypt/live/*; do
domain=$(basename $certdir)
echo " - $domain"
cat $certdir/privkey.pem $certdir/fullchain.pem > $domain.pem
mv $domain.pem /data/certs/
done
echo
echo -n "Restarting "
docker restart haproxy
echo "Certificate renewal completed."

gulp

@@ -1,33 +0,0 @@
#!/bin/bash -ex

GULP=./node_modules/.bin/gulp

function install() {
    npm install
    touch $GULP  # installer doesn't always touch this after a build, so we do.
}

# Rebuild Gulp if missing or outdated.
[ -e $GULP ] || install
[ gulpfile.js -nt $GULP ] && install

if [ "$1" == "watch" ]; then
    # Treat "gulp watch" as "gulp && gulp watch"
    $GULP
elif [ "$1" == "all" ]; then
    pushd .
    # This is useful when building the Blender Cloud project for the first time.
    # Run ./gulp in all depending projects (pillar, attract, flamenco, pillar-svnman)
    declare -a repos=("pillar" "attract" "flamenco" "pillar-svnman")
    for r in "${repos[@]}"; do
        cd ../$r
        ./gulp
    done
    popd
    # Run "gulp" once inside this repo, then report success.
    $GULP
    exit 0
fi

exec $GULP "$@"


@@ -1,130 +0,0 @@
let argv = require('minimist')(process.argv.slice(2));
let autoprefixer = require('gulp-autoprefixer');
let cache = require('gulp-cached');
let chmod = require('gulp-chmod');
let concat = require('gulp-concat');
let git = require('gulp-git');
let gulp = require('gulp');
let gulpif = require('gulp-if');
let pug = require('gulp-pug');
let plumber = require('gulp-plumber');
let rename = require('gulp-rename');
let sass = require('gulp-sass');
let sourcemaps = require('gulp-sourcemaps');
let uglify = require('gulp-uglify-es').default;

let enabled = {
    uglify: argv.production,
    maps: !argv.production,
    failCheck: !argv.production,
    prettyPug: !argv.production,
    cachify: !argv.production,
    cleanup: argv.production,
    chmod: argv.production,
};

let destination = {
    css: 'cloud/static/assets/css',
    pug: 'cloud/templates',
    js: 'cloud/static/assets/js',
};

let source = {
    pillar: '../pillar/'
};

/* CSS */
gulp.task('styles', function(done) {
    gulp.src('src/styles/**/*.sass')
        .pipe(gulpif(enabled.failCheck, plumber()))
        .pipe(gulpif(enabled.maps, sourcemaps.init()))
        .pipe(sass({outputStyle: 'compressed'}))
        .pipe(autoprefixer("last 3 versions"))
        .pipe(gulpif(enabled.maps, sourcemaps.write(".")))
        .pipe(gulp.dest(destination.css));
    done();
});

/* Templates - Pug */
gulp.task('templates', function(done) {
    gulp.src('src/templates/**/*.pug')
        .pipe(gulpif(enabled.failCheck, plumber()))
        .pipe(gulpif(enabled.cachify, cache('templating')))
        .pipe(pug({pretty: enabled.prettyPug}))
        .pipe(gulp.dest(destination.pug));

    // TODO(venomgfx): please check why 'gulp watch' doesn't pick up on .txt changes.
    gulp.src('src/templates/**/*.txt')
        .pipe(gulpif(enabled.failCheck, plumber()))
        .pipe(gulpif(enabled.cachify, cache('templating')))
        .pipe(gulp.dest(destination.pug));
    done();
});

/* Tutti gets built by Pillar. See gulpfile.js in pillar. */

/* Individual Uglified Scripts */
gulp.task('scripts', function(done) {
    gulp.src('src/scripts/*.js')
        .pipe(gulpif(enabled.failCheck, plumber()))
        .pipe(gulpif(enabled.cachify, cache('scripting')))
        .pipe(gulpif(enabled.maps, sourcemaps.init()))
        .pipe(gulpif(enabled.uglify, uglify()))
        .pipe(rename({suffix: '.min'}))
        .pipe(gulpif(enabled.maps, sourcemaps.write(".")))
        .pipe(gulpif(enabled.chmod, chmod(0o644)))
        .pipe(gulp.dest(destination.js));
    done();
});

// While developing, run 'gulp watch'
gulp.task('watch', function(done) {
    let watchStyles = [
        'src/styles/**/*.sass',
        source.pillar + 'src/styles/**/*.sass',
    ];
    let watchScripts = [
        'src/scripts/**/*.js',
        source.pillar + 'src/scripts/**/*.js',
    ];
    let watchTemplates = [
        'src/templates/**/*.pug',
        source.pillar + 'src/templates/**/*.pug',
    ];

    gulp.watch(watchStyles, gulp.series('styles'));
    gulp.watch(watchScripts, gulp.series('scripts'));
    gulp.watch(watchTemplates, gulp.series('templates'));
    done();
});

// Erases all generated files in output directories.
gulp.task('cleanup', function(done) {
    let paths = [];
    for (let attr in destination) {
        paths.push(destination[attr]);
    }

    git.clean({args: '-f -X ' + paths.join(' ')}, function(err) {
        if (err) throw err;
    });
    done();
});

// Run 'gulp' to build everything at once.
let tasks = [];
if (enabled.cleanup) tasks.push('cleanup');
gulp.task('default', gulp.parallel(tasks.concat(['styles', 'templates', 'scripts'])));
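The enabled map at the top means a single flag flips every dev/production toggle at once; typical invocations through the wrapper script (a sketch):

```
./gulp --production   # uglify, chmod 644, cleanup; no sourcemaps, caching or pretty Pug
./gulp watch          # dev build once, then rebuild on changes
```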


@@ -1,6 +0,0 @@
# Flamenco Server JWT keys

To generate a keypair for `ES256`:

    openssl ecparam -genkey -name prime256v1 -noout -out es256-private.pem
    openssl ec -in es256-private.pem -pubout -out es256-public.pem
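Before handing the public key to Flamenco Server it's worth checking the pair is well-formed; a sketch:

```
openssl ec -in es256-private.pem -check -noout          # key consistency
openssl ec -pubin -in es256-public.pem -text -noout     # should report prime256v1
```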


@@ -1,7 +1,40 @@
#!/usr/bin/env python
from __future__ import print_function

import logging

from flask import current_app
from pillar import cli
from runserver import app
from pillar.cli import manager_maintenance
from cloud import app

log = logging.getLogger(__name__)


@manager_maintenance.command
def reconcile_subscribers():
    """For every user, check their subscription status with the store."""
    from pillar.auth.subscriptions import fetch_user

    users_coll = current_app.data.driver.db['users']
    unsubscribed_users = []

    for user in users_coll.find({'roles': 'subscriber'}):
        print('Processing %s' % user['email'])
        print(' Checking subscription')
        user_store = fetch_user(user['email'])
        if user_store['cloud_access'] == 0:
            print(' Removing subscriber role')
            users_coll.update(
                {'_id': user['_id']},
                {'$pull': {'roles': 'subscriber'}})
            unsubscribed_users.append(user['email'])

    if not unsubscribed_users:
        return

    print('The following users have been unsubscribed')
    for user in unsubscribed_users:
        print(user)

cli.manager.app = app
cli.manager.run()
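Pillar groups the manager_maintenance commands under a sub-command of manage.py; assuming that group is named maintenance (check ./manage.py --help for the exact name), the new command would run as:

```
poetry run ./manage.py maintenance reconcile_subscribers
```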

package-lock.json (generated)

File diff suppressed because it is too large.


@@ -1,32 +0,0 @@
{
  "name": "blender-cloud",
  "license": "GPL-2.0+",
  "author": "Blender Institute",
  "repository": {
    "type": "git",
    "url": "git://git.blender.org/blender-cloud.git"
  },
  "devDependencies": {
    "gulp": "~4.0",
    "gulp-autoprefixer": "~6.0.0",
    "gulp-cached": "~1.1.1",
    "gulp-chmod": "~2.0.0",
    "gulp-concat": "~2.6.1",
    "gulp-if": "^2.0.2",
    "gulp-git": "~2.8.0",
    "gulp-plumber": "~1.2.0",
    "gulp-pug": "~4.0.1",
    "gulp-rename": "~1.4.0",
    "gulp-sass": "~4.1.0",
    "gulp-sourcemaps": "~2.6.4",
    "gulp-uglify-es": "^1.0.4",
    "minimist": "^1.2.0"
  },
  "dependencies": {
    "bootstrap": "^4.1.3",
    "jquery": "^3.3.1",
    "natives": "^1.1.6",
    "popper.js": "^1.14.4",
    "video.js": "^7.2.2"
  }
}

Some files were not shown because too many files have changed in this diff.