Compare commits


1 commit

535 changed files with 70804 additions and 53666 deletions


@@ -1,6 +0,0 @@
{
"project_id" : "Pillar Server",
"conduit_uri" : "https://developer.blender.org/",
"git.default-relative-commit" : "origin/master",
"arc.land.update.default" : "rebase"
}


@@ -1,3 +0,0 @@
{
"presets": ["@babel/preset-env"]
}

.gitignore (vendored): 8 changed lines

@@ -12,24 +12,18 @@ config_local.py
 /build
 /.cache
-/.pytest_cache/
-*.egg-info/
+/*.egg-info/
 profile.stats
 /dump/
 /.eggs
-/devdeps/pip-wheel-metadata/
 /node_modules
 /.sass-cache
 *.css.map
 *.js.map
-/translations/*/LC_MESSAGES/*.mo
 pillar/web/static/assets/css/*.css
 pillar/web/static/assets/js/*.min.js
-pillar/web/static/assets/js/vendor/video.min.js
 pillar/web/static/storage/
 pillar/web/static/uploads/
 pillar/web/templates/
-/poetry.lock


@@ -1,85 +0,0 @@
Pillar
======
This is the latest iteration on the Attract project. We are building a unified
framework called Pillar. Pillar will combine Blender Cloud and Attract. You
can see Pillar in action on the [Blender Cloud](https://cloud.blender.org).
## Custom fonts
The icons on the website are drawn using a custom font, stored in
[pillar/web/static/font](pillar/web/static/font).
This font is generated via [Fontello](http://fontello.com/) by uploading
[pillar/web/static/font/config.json](pillar/web/static/font/config.json).
Note that we only use the WOFF and WOFF2 formats, and discard the others
supplied by Fontello.
After replacing the font files & `config.json`, edit the Fontello-supplied
`font.css` to remove all font formats except `woff` and `woff2`. Then upload
it to [css2sass](http://css2sass.herokuapp.com/) to convert it to SASS, and
place it in [src/styles/font-pillar.sass](src/styles/font-pillar.sass).
Don't forget to Gulp!
## Installation
Dependencies are managed via [Poetry](https://poetry.eustace.io/).
Make sure your /data directory exists and is writable by the current user.
Alternatively, provide a `pillar/config_local.py` that changes the relevant
settings.
```
git clone git@git.blender.org:pillar-python-sdk.git ../pillar-python-sdk
pip install -U --user poetry
poetry install
```
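For illustration only (this sketch is not part of the original README), a minimal `pillar/config_local.py` overriding the storage location might look like the following. `STORAGE_DIR` and `DEBUG` are settings referenced elsewhere in this repository; the exact keys a deployment needs may differ.

```python
# pillar/config_local.py -- minimal illustrative sketch, not a canonical config.
# STORAGE_DIR and DEBUG are settings referenced by manage.py and the Pillar app;
# adjust or extend with whatever your local setup actually requires.
import os

STORAGE_DIR = os.path.expanduser('~/pillar-storage')  # instead of the default /data location
DEBUG = True
```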
## HDRi viewer
The HDRi viewer uses [Google VRView](https://github.com/googlevr/vrview). To upgrade,
get those files:
* [three.min.js](https://raw.githubusercontent.com/googlevr/vrview/master/build/three.min.js)
* [embed.min.js](https://raw.githubusercontent.com/googlevr/vrview/master/build/embed.min.js)
* [loading.gif](https://raw.githubusercontent.com/googlevr/vrview/master/images/loading.gif)
and place them in `pillar/web/static/assets/vrview`. Replace `images/loading.gif` in `embed.min.js` with `static/pillar/assets/vrview/loading.gif`.
You may also want to compare their
[index.html](https://raw.githubusercontent.com/googlevr/vrview/master/index.html) to our
`src/templates/vrview.pug`.
When on an HDRi page with the viewer embedded, use this JavaScript call to find the current
yaw: `vrview_window.contentWindow.yaw()`. The value can be passed as the `default_yaw` parameter to
the iframe.
## Celery
Pillar requires [Celery](http://www.celeryproject.org/) for background task processing. This in
turn requires a backend and a broker, for which the default Pillar configuration uses Redis and
RabbitMQ.
You can run the Celery Worker using `manage.py celery worker`.
Find other Celery operations with the `manage.py celery` command.
## Elasticsearch
Pillar uses [Elasticsearch](https://www.elastic.co/products/elasticsearch) to power the search engine.
You will need to run the `manage.py elastic reset_index` command to initialize the index.
If you need to reindex your documents in Elasticsearch, run the `manage.py elastic reindex` command.
## Translations
If the language you want to support doesn't exist yet, initialize it first, for example: `translations init es_AR`.
Every time a new string is marked for translation, update the catalog: `translations update`.
Once more strings have been translated, compile the translations: `translations compile`.
*To mark strings for translation in Python scripts, wrap them with the
`flask_babel.gettext` function. In .pug templates, wrap them with `_()`.*
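As a hedged illustration (not part of the original README), marking a string for translation in a Flask view could look like this; the blueprint and route names are invented for the example.

```python
# Illustrative sketch: marking a user-visible string for translation.
# After adding strings like this, run `translations update` and, once
# translated, `translations compile`.
from flask import Blueprint
from flask_babel import gettext as _

example = Blueprint('example', __name__)

@example.route('/greeting')
def greeting():
    # This literal is picked up into the translation catalog.
    return _('Welcome to Blender Cloud!')
```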


@@ -1,3 +1,3 @@
 #!/bin/bash -ex
-mongodump -h localhost:27018 -d cloud --out dump/$(date +'%Y-%m-%d-%H%M') --excludeCollection tokens --excludeCollection flamenco_task_logs
+mongodump -h localhost:27018 -d cloud --out dump/$(date +'%Y-%m-%d-%H%M') --excludeCollection tokens


@@ -1,16 +0,0 @@
[tool.poetry]
name = "pillar-devdeps"
version = "1.0"
description = ""
authors = [
"Francesco Siddi <francesco@blender.org>",
"Pablo Vazquez <pablo@blender.studio>",
"Sybren Stüvel <sybren@blender.studio>",
]
[tool.poetry.dependencies]
python = "~3.6"
mypy = "^0.501"
pytest = "~4.4"
pytest-cov = "~2.7"
responses = "~0.10"

gulp: 19 changed lines

@@ -1,19 +0,0 @@
#!/bin/bash -ex
GULP=./node_modules/.bin/gulp
function install() {
npm install
touch $GULP # installer doesn't always touch this after a build, so we do.
}
# Rebuild Gulp if missing or outdated.
[ -e $GULP ] || install
[ gulpfile.js -nt $GULP ] && install
if [ "$1" == "watch" ]; then
# Treat "gulp watch" as "gulp && gulp watch"
$GULP
fi
exec $GULP "$@"


@@ -1,51 +1,27 @@
let argv = require('minimist')(process.argv.slice(2)); var argv = require('minimist')(process.argv.slice(2));
let autoprefixer = require('gulp-autoprefixer'); var autoprefixer = require('gulp-autoprefixer');
let cache = require('gulp-cached'); var chmod = require('gulp-chmod');
let chmod = require('gulp-chmod'); var concat = require('gulp-concat');
let concat = require('gulp-concat'); var gulp = require('gulp');
let git = require('gulp-git'); var gulpif = require('gulp-if');
let gulpif = require('gulp-if'); var jade = require('gulp-jade');
let gulp = require('gulp'); var livereload = require('gulp-livereload');
let livereload = require('gulp-livereload'); var plumber = require('gulp-plumber');
let plumber = require('gulp-plumber'); var rename = require('gulp-rename');
let pug = require('gulp-pug'); var sass = require('gulp-sass');
let rename = require('gulp-rename'); var sourcemaps = require('gulp-sourcemaps');
let sass = require('gulp-sass'); var uglify = require('gulp-uglify');
let sourcemaps = require('gulp-sourcemaps');
let uglify = require('gulp-uglify-es').default;
let browserify = require('browserify');
let babelify = require('babelify');
let sourceStream = require('vinyl-source-stream');
let glob = require('glob');
let es = require('event-stream');
let path = require('path');
let buffer = require('vinyl-buffer');
let enabled = { var enabled = {
uglify: argv.production, uglify: argv.production,
maps: !argv.production, maps: argv.production,
failCheck: !argv.production, failCheck: argv.production,
prettyPug: !argv.production, prettyPug: !argv.production,
cachify: !argv.production, liveReload: !argv.production
cleanup: argv.production,
chmod: argv.production,
}; };
let destination = { /* CSS */
css: 'pillar/web/static/assets/css', gulp.task('styles', function() {
pug: 'pillar/web/templates',
js: 'pillar/web/static/assets/js',
}
let source = {
bootstrap: 'node_modules/bootstrap/',
jquery: 'node_modules/jquery/',
popper: 'node_modules/popper.js/',
vue: 'node_modules/vue/',
}
/* Stylesheets */
gulp.task('styles', function(done) {
gulp.src('src/styles/**/*.sass') gulp.src('src/styles/**/*.sass')
.pipe(gulpif(enabled.failCheck, plumber())) .pipe(gulpif(enabled.failCheck, plumber()))
.pipe(gulpif(enabled.maps, sourcemaps.init())) .pipe(gulpif(enabled.maps, sourcemaps.init()))
@ -54,181 +30,75 @@ gulp.task('styles', function(done) {
)) ))
.pipe(autoprefixer("last 3 versions")) .pipe(autoprefixer("last 3 versions"))
.pipe(gulpif(enabled.maps, sourcemaps.write("."))) .pipe(gulpif(enabled.maps, sourcemaps.write(".")))
.pipe(gulp.dest(destination.css)) .pipe(gulp.dest('pillar/web/static/assets/css'))
.pipe(gulpif(argv.livereload, livereload())); .pipe(gulpif(enabled.liveReload, livereload()));
done();
}); });
/* Templates */ /* Templates - Jade */
gulp.task('templates', function(done) { gulp.task('templates', function() {
gulp.src('src/templates/**/*.pug') gulp.src('src/templates/**/*.jade')
.pipe(gulpif(enabled.failCheck, plumber())) .pipe(gulpif(enabled.failCheck, plumber()))
.pipe(gulpif(enabled.cachify, cache('templating'))) .pipe(jade({
.pipe(pug({
pretty: enabled.prettyPug pretty: enabled.prettyPug
})) }))
.pipe(gulp.dest(destination.pug)) .pipe(gulp.dest('pillar/web/templates/'))
.pipe(gulpif(argv.livereload, livereload())); .pipe(gulpif(enabled.liveReload, livereload()));
done();
}); });
/* Individual Uglified Scripts */ /* Individual Uglified Scripts */
gulp.task('scripts', function(done) { gulp.task('scripts', function() {
gulp.src('src/scripts/*.js') gulp.src('src/scripts/*.js')
.pipe(gulpif(enabled.failCheck, plumber())) .pipe(gulpif(enabled.failCheck, plumber()))
.pipe(gulpif(enabled.cachify, cache('scripting')))
.pipe(gulpif(enabled.maps, sourcemaps.init())) .pipe(gulpif(enabled.maps, sourcemaps.init()))
.pipe(gulpif(enabled.uglify, uglify())) .pipe(gulpif(enabled.uglify, uglify()))
.pipe(rename({suffix: '.min'})) .pipe(rename({suffix: '.min'}))
.pipe(gulpif(enabled.maps, sourcemaps.write("."))) .pipe(gulpif(enabled.maps, sourcemaps.write(".")))
.pipe(gulpif(enabled.chmod, chmod(0o644))) .pipe(chmod(644))
.pipe(gulp.dest(destination.js)) .pipe(gulp.dest('pillar/web/static/assets/js/'))
.pipe(gulpif(argv.livereload, livereload())); .pipe(gulpif(enabled.liveReload, livereload()));
done();
});
function browserify_base(entry) {
let pathSplited = path.dirname(entry).split(path.sep);
let moduleName = pathSplited[pathSplited.length - 1];
return browserify({
entries: [entry],
standalone: 'pillar.' + moduleName,
})
.transform(babelify, { "presets": ["@babel/preset-env"] })
.bundle()
.pipe(gulpif(enabled.failCheck, plumber()))
.pipe(sourceStream(path.basename(entry)))
.pipe(buffer())
.pipe(rename({
basename: moduleName,
extname: '.min.js'
}));
}
/**
* Transcompile and package common modules to be included in tutti.js.
*
* Example:
* src/scripts/js/es6/common/api/init.js
* src/scripts/js/es6/common/events/init.js
* Everything exported in api/init.js will end up in module pillar.api.*, and everything exported in events/init.js
* will end up in pillar.events.*
*/
function browserify_common() {
return glob.sync('src/scripts/js/es6/common/**/init.js').map(browserify_base);
}
/**
* Transcompile and package individual modules.
*
* Example:
* src/scripts/js/es6/individual/coolstuff/init.js
* Will create a coolstuff.js and everything exported in init.js will end up in namespace pillar.coolstuff.*
*/
gulp.task('scripts_browserify', function(done) {
glob('src/scripts/js/es6/individual/**/init.js', function(err, files) {
if(err) done(err);
var tasks = files.map(function(entry) {
return browserify_base(entry)
.pipe(gulpif(enabled.maps, sourcemaps.init()))
.pipe(gulpif(enabled.uglify, uglify()))
.pipe(gulpif(enabled.maps, sourcemaps.write(".")))
.pipe(gulp.dest(destination.js));
});
es.merge(tasks).on('end', done);
})
}); });
/* Collection of scripts in src/scripts/tutti/ and src/scripts/js/es6/common/ to merge into tutti.min.js /* Collection of scripts in src/scripts/tutti/ to merge into tutti.min.js */
* Since it's always loaded, it's only for functions that we want site-wide. /* Since it's always loaded, it's only for functions that we want site-wide */
* It also includes jQuery and Bootstrap (and its dependency popper), since gulp.task('scripts_concat_tutti', function() {
* the site doesn't work without it anyway.*/ gulp.src('src/scripts/tutti/**/*.js')
gulp.task('scripts_concat_tutti', function(done) {
let toUglify = [
source.jquery + 'dist/jquery.min.js',
source.vue + (enabled.uglify ? 'dist/vue.min.js' : 'dist/vue.js'),
source.popper + 'dist/umd/popper.min.js',
source.bootstrap + 'js/dist/index.js',
source.bootstrap + 'js/dist/util.js',
source.bootstrap + 'js/dist/alert.js',
source.bootstrap + 'js/dist/collapse.js',
source.bootstrap + 'js/dist/dropdown.js',
source.bootstrap + 'js/dist/tooltip.js',
'src/scripts/tutti/**/*.js'
];
es.merge(gulp.src(toUglify), ...browserify_common())
.pipe(gulpif(enabled.failCheck, plumber())) .pipe(gulpif(enabled.failCheck, plumber()))
.pipe(gulpif(enabled.maps, sourcemaps.init())) .pipe(gulpif(enabled.maps, sourcemaps.init()))
.pipe(concat("tutti.min.js")) .pipe(concat("tutti.min.js"))
.pipe(gulpif(enabled.uglify, uglify())) .pipe(gulpif(enabled.uglify, uglify()))
.pipe(gulpif(enabled.maps, sourcemaps.write("."))) .pipe(gulpif(enabled.maps, sourcemaps.write(".")))
.pipe(gulpif(enabled.chmod, chmod(0o644))) .pipe(chmod(644))
.pipe(gulp.dest(destination.js)) .pipe(gulp.dest('pillar/web/static/assets/js/'))
.pipe(gulpif(argv.livereload, livereload())); .pipe(gulpif(enabled.liveReload, livereload()));
done();
}); });
gulp.task('scripts_concat_markdown', function() {
/* Simply move these vendor scripts from node_modules. */ gulp.src('src/scripts/markdown/**/*.js')
gulp.task('scripts_move_vendor', function(done) { .pipe(gulpif(enabled.failCheck, plumber()))
.pipe(gulpif(enabled.maps, sourcemaps.init()))
let toMove = [ .pipe(concat("markdown.min.js"))
'node_modules/video.js/dist/video.min.js', .pipe(gulpif(enabled.uglify, uglify()))
]; .pipe(gulpif(enabled.maps, sourcemaps.write(".")))
.pipe(chmod(644))
gulp.src(toMove) .pipe(gulp.dest('pillar/web/static/assets/js/'))
.pipe(gulp.dest(destination.js + '/vendor/')); .pipe(gulpif(enabled.liveReload, livereload()));
done();
}); });
// While developing, run 'gulp watch' // While developing, run 'gulp watch'
gulp.task('watch',function(done) { gulp.task('watch',function() {
// Only listen for live reloads if ran with --livereload
if (argv.livereload){
livereload.listen(); livereload.listen();
}
gulp.watch('src/styles/**/*.sass',gulp.series('styles')); gulp.watch('src/styles/**/*.sass',['styles']);
gulp.watch('src/templates/**/*.pug',gulp.series('templates')); gulp.watch('src/templates/**/*.jade',['templates']);
gulp.watch('src/scripts/*.js',gulp.series('scripts')); gulp.watch('src/scripts/*.js',['scripts']);
gulp.watch('src/scripts/tutti/**/*.js',gulp.series('scripts_concat_tutti')); gulp.watch('src/scripts/tutti/**/*.js',['scripts_concat_tutti']);
gulp.watch('src/scripts/js/**/*.js',gulp.series(['scripts_browserify', 'scripts_concat_tutti'])); gulp.watch('src/scripts/markdown/**/*.js',['scripts_concat_markdown']);
done();
});
// Erases all generated files in output directories.
gulp.task('cleanup', function(done) {
let paths = [];
for (attr in destination) {
paths.push(destination[attr]);
}
git.clean({ args: '-f -X ' + paths.join(' ') }, function (err) {
if(err) throw err;
});
done();
}); });
// Run 'gulp' to build everything at once // Run 'gulp' to build everything at once
let tasks = []; gulp.task('default', ['styles', 'templates', 'scripts', 'scripts_concat_tutti', 'scripts_concat_markdown']);
if (enabled.cleanup) tasks.push('cleanup');
// gulp.task('default', gulp.parallel('styles', 'templates', 'scripts', 'scripts_tutti'));
gulp.task('default', gulp.parallel(tasks.concat([
'styles',
'templates',
'scripts',
'scripts_concat_tutti',
'scripts_move_vendor',
'scripts_browserify',
])));


@@ -1,180 +0,0 @@
// For a detailed explanation regarding each configuration property, visit:
// https://jestjs.io/docs/en/configuration.html
module.exports = {
// All imported modules in your tests should be mocked automatically
// automock: false,
// Stop running tests after the first failure
// bail: false,
// Respect "browser" field in package.json when resolving modules
// browser: false,
// The directory where Jest should store its cached dependency information
// cacheDirectory: "/tmp/jest_rs",
// Automatically clear mock calls and instances between every test
clearMocks: true,
// Indicates whether the coverage information should be collected while executing the test
// collectCoverage: false,
// An array of glob patterns indicating a set of files for which coverage information should be collected
// collectCoverageFrom: null,
// The directory where Jest should output its coverage files
// coverageDirectory: null,
// An array of regexp pattern strings used to skip coverage collection
// coveragePathIgnorePatterns: [
// "/node_modules/"
// ],
// A list of reporter names that Jest uses when writing coverage reports
// coverageReporters: [
// "json",
// "text",
// "lcov",
// "clover"
// ],
// An object that configures minimum threshold enforcement for coverage results
// coverageThreshold: null,
// Make calling deprecated APIs throw helpful error messages
// errorOnDeprecated: false,
// Force coverage collection from ignored files using an array of glob patterns
// forceCoverageMatch: [],
// A path to a module which exports an async function that is triggered once before all test suites
// globalSetup: null,
// A path to a module which exports an async function that is triggered once after all test suites
// globalTeardown: null,
// A set of global variables that need to be available in all test environments
// globals: {},
// An array of directory names to be searched recursively up from the requiring module's location
// moduleDirectories: [
// "node_modules"
// ],
// An array of file extensions your modules use
// moduleFileExtensions: [
// "js",
// "json",
// "jsx",
// "node"
// ],
// A map from regular expressions to module names that allow to stub out resources with a single module
// moduleNameMapper: {},
// An array of regexp pattern strings, matched against all module paths before considered 'visible' to the module loader
// modulePathIgnorePatterns: [],
// Activates notifications for test results
// notify: false,
// An enum that specifies notification mode. Requires { notify: true }
// notifyMode: "always",
// A preset that is used as a base for Jest's configuration
// preset: null,
// Run tests from one or more projects
// projects: null,
// Use this configuration option to add custom reporters to Jest
// reporters: undefined,
// Automatically reset mock state between every test
// resetMocks: false,
// Reset the module registry before running each individual test
// resetModules: false,
// A path to a custom resolver
// resolver: null,
// Automatically restore mock state between every test
// restoreMocks: false,
// The root directory that Jest should scan for tests and modules within
// rootDir: null,
// A list of paths to directories that Jest should use to search for files in
// roots: [
// "<rootDir>"
// ],
// Allows you to use a custom runner instead of Jest's default test runner
// runner: "jest-runner",
// The paths to modules that run some code to configure or set up the testing environment before each test
setupFiles: ["<rootDir>/src/scripts/js/es6/test_config/test-env.js"],
// The path to a module that runs some code to configure or set up the testing framework before each test
// setupTestFrameworkScriptFile: null,
// A list of paths to snapshot serializer modules Jest should use for snapshot testing
// snapshotSerializers: [],
// The test environment that will be used for testing
testEnvironment: "jsdom",
// Options that will be passed to the testEnvironment
// testEnvironmentOptions: {},
// Adds a location field to test results
// testLocationInResults: false,
// The glob patterns Jest uses to detect test files
// testMatch: [
// "**/__tests__/**/*.js?(x)",
// "**/?(*.)+(spec|test).js?(x)"
// ],
// An array of regexp pattern strings that are matched against all test paths, matched tests are skipped
// testPathIgnorePatterns: [
// "/node_modules/"
// ],
// The regexp pattern Jest uses to detect test files
// testRegex: "",
// This option allows the use of a custom results processor
// testResultsProcessor: null,
// This option allows use of a custom test runner
// testRunner: "jasmine2",
// This option sets the URL for the jsdom environment. It is reflected in properties such as location.href
// testURL: "http://localhost",
// Setting this value to "fake" allows the use of fake timers for functions such as "setTimeout"
// timers: "real",
// A map from regular expressions to paths to transformers
// transform: null,
// An array of regexp pattern strings that are matched against all source file paths, matched files will skip transformation
// transformIgnorePatterns: [
// "/node_modules/"
// ],
// An array of regexp pattern strings that are matched against all modules before the module loader will automatically return a mock for them
// unmockedModulePathPatterns: undefined,
// Indicates whether each individual test should be reported during the run
// verbose: null,
// An array of regexp patterns that are matched against all source file paths before re-running tests in watch mode
// watchPathIgnorePatterns: [],
// Whether to use watchman for file crawling
// watchman: true,
};

old-src/manage.py (new file): 783 changed lines

@@ -0,0 +1,783 @@
#!/usr/bin/env python
from __future__ import division
from __future__ import print_function
import copy
import logging
import os
from bson.objectid import ObjectId
from eve.methods.post import post_internal
from eve.methods.put import put_internal
from flask.ext.script import Manager
# Use a sensible default when running manage.py commands.
if not os.environ.get('EVE_SETTINGS'):
settings_path = os.path.join(os.path.dirname(os.path.abspath(__file__)),
'pillar', 'eve_settings.py')
os.environ['EVE_SETTINGS'] = settings_path
# from pillar import app
from pillar.api.node_types.asset import node_type_asset
from pillar.api.node_types import node_type_blog
from pillar.api.node_types.comment import node_type_comment
from pillar.api.node_types.group import node_type_group
from pillar.api.node_types.post import node_type_post
from pillar.api.node_types import node_type_storage
from pillar.api.node_types.texture import node_type_texture
manager = Manager()
log = logging.getLogger('manage')
log.setLevel(logging.INFO)
MONGO_HOST = os.environ.get('MONGO_HOST', 'localhost')
@manager.command
def runserver(**options):
# Automatic creation of STORAGE_DIR path if it's missing
if not os.path.exists(app.config['STORAGE_DIR']):
os.makedirs(app.config['STORAGE_DIR'])
app.run(host=app.config['HOST'],
port=app.config['PORT'],
debug=app.config['DEBUG'],
**options)
@manager.command
def runserver_memlimit(limit_kb=1000000):
import resource
limit_b = int(limit_kb) * 1024
for rsrc in (resource.RLIMIT_AS, resource.RLIMIT_DATA, resource.RLIMIT_RSS):
resource.setrlimit(rsrc, (limit_b, limit_b))
runserver()
@manager.command
def runserver_profile(pfile='profile.stats'):
import cProfile
cProfile.run('runserver(use_reloader=False)', pfile)
def each_project_node_type(node_type_name=None):
"""Generator, yields (project, node_type) tuples for all projects and node types.
When a node type name is given, only yields those node types.
"""
projects_coll = app.data.driver.db['projects']
for project in projects_coll.find():
for node_type in project['node_types']:
if node_type_name is None or node_type['name'] == node_type_name:
yield project, node_type
def post_item(entry, data):
return post_internal(entry, data)
def put_item(collection, item):
item_id = item['_id']
internal_fields = ['_id', '_etag', '_updated', '_created']
for field in internal_fields:
item.pop(field, None)
# print item
# print type(item_id)
p = put_internal(collection, item, **{'_id': item_id})
if p[0]['_status'] == 'ERR':
print(p)
print(item)
@manager.command
def setup_db(admin_email):
"""Setup the database
- Create admin, subscriber and demo Group collection
- Create admin user (must use valid blender-id credentials)
- Create one project
"""
# Create default groups
groups_list = []
for group in ['admin', 'subscriber', 'demo']:
g = {'name': group}
g = post_internal('groups', g)
groups_list.append(g[0]['_id'])
print("Creating group {0}".format(group))
# Create admin user
user = {'username': admin_email,
'groups': groups_list,
'roles': ['admin', 'subscriber', 'demo'],
'settings': {'email_communications': 1},
'auth': [],
'full_name': admin_email,
'email': admin_email}
result, _, _, status = post_internal('users', user)
if status != 201:
raise SystemExit('Error creating user {}: {}'.format(admin_email, result))
user.update(result)
print("Created user {0}".format(user['_id']))
# Create a default project by faking a POST request.
with app.test_request_context(data={'project_name': u'Default Project'}):
from flask import g
from pillar.api import projects
g.current_user = {'user_id': user['_id'],
'groups': user['groups'],
'roles': set(user['roles'])}
projects.create_project(overrides={'url': 'default-project',
'is_private': False})
def _default_permissions():
"""Returns a dict of default permissions.
Usable for projects, node types, and others.
:rtype: dict
"""
from pillar.api.projects import DEFAULT_ADMIN_GROUP_PERMISSIONS
groups_collection = app.data.driver.db['groups']
admin_group = groups_collection.find_one({'name': 'admin'})
default_permissions = {
'world': ['GET'],
'users': [],
'groups': [
{'group': admin_group['_id'],
'methods': DEFAULT_ADMIN_GROUP_PERMISSIONS[:]},
]
}
return default_permissions
@manager.command
def setup_for_attract(project_uuid, replace=False):
"""Adds Attract node types to the project.
:param project_uuid: the UUID of the project to update
:type project_uuid: str
:param replace: whether to replace existing Attract node types (True),
or to keep existing node types (False, the default).
:type replace: bool
"""
from pillar.api.node_types import node_type_act
from pillar.api.node_types.scene import node_type_scene
from pillar.api.node_types import node_type_shot
# Copy permissions from the project, then give everyone with PUT
# access also DELETE access.
project = _get_project(project_uuid)
permissions = copy.deepcopy(project['permissions'])
for perms in permissions.values():
for perm in perms:
methods = set(perm['methods'])
if 'PUT' not in perm['methods']:
continue
methods.add('DELETE')
perm['methods'] = list(methods)
node_type_act['permissions'] = permissions
node_type_scene['permissions'] = permissions
node_type_shot['permissions'] = permissions
# Add the missing node types.
for node_type in (node_type_act, node_type_scene, node_type_shot):
found = [nt for nt in project['node_types']
if nt['name'] == node_type['name']]
if found:
assert len(found) == 1, 'node type name should be unique (found %ix)' % len(found)
# TODO: validate that the node type contains all the properties Attract needs.
if replace:
log.info('Replacing existing node type %s', node_type['name'])
project['node_types'].remove(found[0])
else:
continue
project['node_types'].append(node_type)
_update_project(project_uuid, project)
log.info('Project %s was updated for Attract.', project_uuid)
def _get_project(project_uuid):
"""Find a project in the database, or SystemExit()s.
:param project_uuid: UUID of the project
:type: str
:return: the project
:rtype: dict
"""
projects_collection = app.data.driver.db['projects']
project_id = ObjectId(project_uuid)
# Find the project in the database.
project = projects_collection.find_one(project_id)
if not project:
log.error('Project %s does not exist.', project_uuid)
raise SystemExit()
return project
def _update_project(project_uuid, project):
"""Updates a project in the database, or SystemExit()s.
:param project_uuid: UUID of the project
:type: str
:param project: the project data, should be the entire project document
:type: dict
:return: the project
:rtype: dict
"""
from pillar.api.utils import remove_private_keys
project_id = ObjectId(project_uuid)
project = remove_private_keys(project)
result, _, _, _ = put_internal('projects', project, _id=project_id)
if result['_status'] != 'OK':
log.error("Can't update project %s, issues: %s", project_uuid, result['_issues'])
raise SystemExit()
@manager.command
def refresh_project_permissions():
"""Replaces the admin group permissions of each project with the defaults."""
from pillar.api.projects import DEFAULT_ADMIN_GROUP_PERMISSIONS
proj_coll = app.data.driver.db['projects']
result = proj_coll.update_many({}, {'$set': {
'permissions.groups.0.methods': DEFAULT_ADMIN_GROUP_PERMISSIONS
}})
print('Matched %i documents' % result.matched_count)
print('Updated %i documents' % result.modified_count)
@manager.command
def refresh_home_project_permissions():
"""Replaces the home project comment node type permissions with proper ones."""
proj_coll = app.data.driver.db['projects']
from pillar.api.blender_cloud import home_project
from pillar.api import service
service.fetch_role_to_group_id_map()
fake_node_type = home_project.assign_permissions(node_type_comment,
subscriber_methods=[u'GET', u'POST'],
world_methods=[u'GET'])
perms = fake_node_type['permissions']
result = proj_coll.update_many(
{'category': 'home', 'node_types.name': 'comment'},
{'$set': {'node_types.$.permissions': perms}})
print('Matched %i documents' % result.matched_count)
print('Updated %i documents' % result.modified_count)
@manager.command
def clear_db():
"""Wipes the database
"""
from pymongo import MongoClient
client = MongoClient(MONGO_HOST, 27017)
db = client.eve
db.drop_collection('nodes')
db.drop_collection('node_types')
db.drop_collection('tokens')
db.drop_collection('users')
@manager.command
def add_parent_to_nodes():
"""Find the parent of any node in the nodes collection"""
import codecs
import sys
UTF8Writer = codecs.getwriter('utf8')
sys.stdout = UTF8Writer(sys.stdout)
nodes_collection = app.data.driver.db['nodes']
def find_parent_project(node):
if node and 'parent' in node:
parent = nodes_collection.find_one({'_id': node['parent']})
return find_parent_project(parent)
if node:
return node
else:
return None
nodes = nodes_collection.find()
nodes_index = 0
nodes_orphan = 0
for node in nodes:
nodes_index += 1
if node['node_type'] == ObjectId("55a615cfea893bd7d0489f2d"):
print(u"Skipping project node - {0}".format(node['name']))
else:
project = find_parent_project(node)
if project:
nodes_collection.update({'_id': node['_id']},
{"$set": {'project': project['_id']}})
print(u"{0} {1}".format(node['_id'], node['name']))
else:
nodes_orphan += 1
nodes_collection.remove({'_id': node['_id']})
print("Removed {0} {1}".format(node['_id'], node['name']))
print("Edited {0} nodes".format(nodes_index))
print("Orphan {0} nodes".format(nodes_orphan))
@manager.command
def make_project_public(project_id):
"""Convert every node of a project from pending to public"""
DRY_RUN = False
nodes_collection = app.data.driver.db['nodes']
for n in nodes_collection.find({'project': ObjectId(project_id)}):
n['properties']['status'] = 'published'
print(u"Publishing {0} {1}".format(n['_id'], n['name'].encode('ascii', 'ignore')))
if not DRY_RUN:
put_item('nodes', n)
@manager.command
def set_attachment_names():
"""Loop through all existing nodes and assign proper ContentDisposition
metadata to referenced files that are using GCS.
"""
from pillar.api.utils.gcs import update_file_name
nodes_collection = app.data.driver.db['nodes']
for n in nodes_collection.find():
print("Updating node {0}".format(n['_id']))
update_file_name(n)
@manager.command
def files_verify_project():
"""Verify for missing or conflicting node/file ids"""
nodes_collection = app.data.driver.db['nodes']
files_collection = app.data.driver.db['files']
issues = dict(missing=[], conflicting=[], processing=[])
def _parse_file(item, file_id):
f = files_collection.find_one({'_id': file_id})
if f:
if 'project' in item and 'project' in f:
if item['project'] != f['project']:
issues['conflicting'].append(item['_id'])
if 'status' in item['properties'] \
and item['properties']['status'] == 'processing':
issues['processing'].append(item['_id'])
else:
issues['missing'].append(
"{0} missing {1}".format(item['_id'], file_id))
for item in nodes_collection.find():
print("Verifying node {0}".format(item['_id']))
if 'file' in item['properties']:
_parse_file(item, item['properties']['file'])
elif 'files' in item['properties']:
for f in item['properties']['files']:
_parse_file(item, f['file'])
print("===")
print("Issues detected:")
for k, v in issues.iteritems():
print("{0}:".format(k))
for i in v:
print(i)
print("===")
def replace_node_type(project, node_type_name, new_node_type):
"""Update or create the specified node type. We rely on the fact that
node_types have a unique name in a project.
"""
old_node_type = next(
(item for item in project['node_types'] if item.get('name') \
and item['name'] == node_type_name), None)
if old_node_type:
for i, v in enumerate(project['node_types']):
if v['name'] == node_type_name:
project['node_types'][i] = new_node_type
else:
project['node_types'].append(new_node_type)
@manager.command
def project_upgrade_node_types(project_id):
projects_collection = app.data.driver.db['projects']
project = projects_collection.find_one({'_id': ObjectId(project_id)})
replace_node_type(project, 'group', node_type_group)
replace_node_type(project, 'asset', node_type_asset)
replace_node_type(project, 'storage', node_type_storage)
replace_node_type(project, 'comment', node_type_comment)
replace_node_type(project, 'blog', node_type_blog)
replace_node_type(project, 'post', node_type_post)
replace_node_type(project, 'texture', node_type_texture)
put_item('projects', project)
@manager.command
def test_put_item(node_id):
import pprint
nodes_collection = app.data.driver.db['nodes']
node = nodes_collection.find_one(ObjectId(node_id))
pprint.pprint(node)
put_item('nodes', node)
@manager.command
def test_post_internal(node_id):
import pprint
nodes_collection = app.data.driver.db['nodes']
node = nodes_collection.find_one(ObjectId(node_id))
internal_fields = ['_id', '_etag', '_updated', '_created']
for field in internal_fields:
node.pop(field, None)
pprint.pprint(node)
print(post_internal('nodes', node))
@manager.command
def algolia_push_users():
"""Loop through all users and push them to Algolia"""
from pillar.api.utils.algolia import algolia_index_user_save
users_collection = app.data.driver.db['users']
for user in users_collection.find():
print("Pushing {0}".format(user['username']))
algolia_index_user_save(user)
@manager.command
def algolia_push_nodes():
"""Loop through all nodes and push them to Algolia"""
from pillar.api.utils.algolia import algolia_index_node_save
nodes_collection = app.data.driver.db['nodes']
for node in nodes_collection.find():
print(u"Pushing {0}: {1}".format(node['_id'], node['name'].encode(
'ascii', 'ignore')))
algolia_index_node_save(node)
@manager.command
def files_make_public_t():
"""Loop through all files and if they are images on GCS, make the size t
public
"""
from gcloud.exceptions import InternalServerError
from pillar.api.utils.gcs import GoogleCloudStorageBucket
files_collection = app.data.driver.db['files']
for f in files_collection.find({'backend': 'gcs'}):
if 'variations' not in f:
continue
variation_t = next((item for item in f['variations']
if item['size'] == 't'), None)
if not variation_t:
continue
try:
storage = GoogleCloudStorageBucket(str(f['project']))
blob = storage.Get(variation_t['file_path'], to_dict=False)
if not blob:
print('Unable to find blob for project %s file %s' % (f['project'], f['_id']))
continue
print('Making blob public: {0}'.format(blob.path))
blob.make_public()
except InternalServerError as ex:
print('Internal Server Error: ', ex)
@manager.command
def subscribe_node_owners():
"""Automatically subscribe node owners to notifications for items created
in the past.
"""
from pillar.api.nodes import after_inserting_nodes
nodes_collection = app.data.driver.db['nodes']
for n in nodes_collection.find():
if 'parent' in n:
after_inserting_nodes([n])
@manager.command
def refresh_project_links(project, chunk_size=50, quiet=False):
"""Regenerates almost-expired file links for a certain project."""
if quiet:
import logging
from pillar import log
logging.getLogger().setLevel(logging.WARNING)
log.setLevel(logging.WARNING)
chunk_size = int(chunk_size) # CLI parameters are passed as strings
from pillar.api import file_storage
file_storage.refresh_links_for_project(project, chunk_size, 2 * 3600)
@manager.command
def register_local_user(email, password):
from pillar.api.local_auth import create_local_user
create_local_user(email, password)
@manager.command
def add_group_to_projects(group_name):
"""Prototype to add a specific group, in read-only mode, to all node_types
for all projects.
"""
methods = ['GET']
groups_collection = app.data.driver.db['groups']
projects_collections = app.data.driver.db['projects']
group = groups_collection.find_one({'name': group_name})
for project in projects_collections.find():
print("Processing: {}".format(project['name']))
for node_type in project['node_types']:
node_type_name = node_type['name']
base_node_types = ['group', 'asset', 'blog', 'post', 'page',
'comment', 'group_texture', 'storage', 'texture']
if node_type_name in base_node_types:
print("Processing: {0}".format(node_type_name))
# Check if group already exists in the permissions
g = next((g for g in node_type['permissions']['groups']
if g['group'] == group['_id']), None)
# If not, we add it
if g is None:
print("Adding permissions")
permissions = {
'group': group['_id'],
'methods': methods}
node_type['permissions']['groups'].append(permissions)
projects_collections.update(
{'_id': project['_id']}, project)
@manager.command
def add_license_props():
"""Add license fields to all node types asset for every project."""
projects_collections = app.data.driver.db['projects']
for project in projects_collections.find():
print("Processing {}".format(project['_id']))
for node_type in project['node_types']:
if node_type['name'] == 'asset':
node_type['dyn_schema']['license_notes'] = {'type': 'string'}
node_type['dyn_schema']['license_type'] = {
'type': 'string',
'allowed': [
'cc-by',
'cc-0',
'cc-by-sa',
'cc-by-nd',
'cc-by-nc',
'copyright'
],
'default': 'cc-by'
}
node_type['form_schema']['license_notes'] = {}
node_type['form_schema']['license_type'] = {}
projects_collections.update(
{'_id': project['_id']}, project)
@manager.command
def refresh_file_sizes():
"""Computes & stores the 'length_aggregate_in_bytes' fields of all files."""
from pillar.api import file_storage
matched = 0
unmatched = 0
total_size = 0
files_collection = app.data.driver.db['files']
for file_doc in files_collection.find():
file_storage.compute_aggregate_length(file_doc)
length = file_doc['length_aggregate_in_bytes']
total_size += length
result = files_collection.update_one({'_id': file_doc['_id']},
{'$set': {'length_aggregate_in_bytes': length}})
if result.matched_count != 1:
log.warning('Unable to update document %s', file_doc['_id'])
unmatched += 1
else:
matched += 1
log.info('Updated %i file documents.', matched)
if unmatched:
log.warning('Unable to update %i documents.', unmatched)
log.info('%i bytes (%.3f GiB) storage used in total.',
total_size, total_size / 1024 ** 3)
@manager.command
def project_stats():
import csv
import sys
from collections import defaultdict
from functools import partial
from pillar.api import projects
proj_coll = app.data.driver.db['projects']
nodes = app.data.driver.db['nodes']
aggr = defaultdict(partial(defaultdict, int))
csvout = csv.writer(sys.stdout)
csvout.writerow(['project ID', 'owner', 'private', 'file size',
'nr of nodes', 'nr of top-level nodes', ])
for proj in proj_coll.find(projection={'user': 1,
'name': 1,
'is_private': 1,
'_id': 1}):
project_id = proj['_id']
is_private = proj.get('is_private', False)
row = [str(project_id),
unicode(proj['user']).encode('utf-8'),
is_private]
file_size = projects.project_total_file_size(project_id)
row.append(file_size)
node_count_result = nodes.aggregate([
{'$match': {'project': project_id}},
{'$project': {'parent': 1,
'is_top': {'$cond': [{'$gt': ['$parent', None]}, 0, 1]},
}},
{'$group': {
'_id': None,
'all': {'$sum': 1},
'top': {'$sum': '$is_top'},
}}
])
try:
node_counts = next(node_count_result)
nodes_all = node_counts['all']
nodes_top = node_counts['top']
except StopIteration:
# No result from the nodes means nodeless project.
nodes_all = 0
nodes_top = 0
row.append(nodes_all)
row.append(nodes_top)
for collection in aggr[None], aggr[is_private]:
collection['project_count'] += 1
collection['project_count'] += 1
collection['file_size'] += file_size
collection['node_count'] += nodes_all
collection['top_nodes'] += nodes_top
csvout.writerow(row)
csvout.writerow([
'public', '', '%i projects' % aggr[False]['project_count'],
aggr[False]['file_size'], aggr[False]['node_count'], aggr[False]['top_nodes'],
])
csvout.writerow([
'private', '', '%i projects' % aggr[True]['project_count'],
aggr[True]['file_size'], aggr[True]['node_count'], aggr[True]['top_nodes'],
])
csvout.writerow([
'total', '', '%i projects' % aggr[None]['project_count'],
aggr[None]['file_size'], aggr[None]['node_count'], aggr[None]['top_nodes'],
])
@manager.command
def add_node_types():
"""Add texture and group_texture node types to all projects"""
from pillar.api.node_types.texture import node_type_texture
from pillar.api.node_types.group_texture import node_type_group_texture
from pillar.api.utils import project_get_node_type
projects_collections = app.data.driver.db['projects']
for project in projects_collections.find():
print("Processing {}".format(project['_id']))
if not project_get_node_type(project, 'group_texture'):
project['node_types'].append(node_type_group_texture)
print("Added node type: {}".format(node_type_group_texture['name']))
if not project_get_node_type(project, 'texture'):
project['node_types'].append(node_type_texture)
print("Added node type: {}".format(node_type_texture['name']))
projects_collections.update(
{'_id': project['_id']}, project)
@manager.command
def update_texture_node_type():
"""Update allowed values for textures node_types"""
projects_collections = app.data.driver.db['projects']
for project in projects_collections.find():
print("Processing {}".format(project['_id']))
for node_type in project['node_types']:
if node_type['name'] == 'texture':
allowed = [
'color',
'specular',
'bump',
'normal',
'translucency',
'emission',
'alpha'
]
node_type['dyn_schema']['files']['schema']['schema']['map_type'][
'allowed'] = allowed
projects_collections.update(
{'_id': project['_id']}, project)
@manager.command
def update_texture_nodes_maps():
"""Update abbreviated texture map types to the extended version"""
nodes_collection = app.data.driver.db['nodes']
remap = {
'col': 'color',
'spec': 'specular',
'nor': 'normal'}
for node in nodes_collection.find({'node_type': 'texture'}):
for v in node['properties']['files']:
try:
updated_map_types = remap[v['map_type']]
print("Updating {} to {}".format(v['map_type'], updated_map_types))
v['map_type'] = updated_map_types
except KeyError:
print("Skipping {}".format(v['map_type']))
nodes_collection.update({'_id': node['_id']}, node)
if __name__ == '__main__':
manager.run()

package-lock.json (generated): 10475 changed lines; diff suppressed because it is too large.


@@ -1,54 +1,24 @@
{ {
"name": "pillar", "name": "pillar",
"license": "GPL-2.0+",
"author": "Blender Institute",
"repository": { "repository": {
"type": "git", "type": "git",
"url": "git://git.blender.org/pillar.git" "url": "https://github.com/armadillica/pillar.git"
}, },
"author": "Blender Institute",
"license": "GPL",
"devDependencies": { "devDependencies": {
"@babel/core": "7.1.6", "gulp": "~3.9.1",
"@babel/preset-env": "7.1.6", "gulp-sass": "~2.3.1",
"acorn": "5.7.3", "gulp-autoprefixer": "~2.3.1",
"babel-core": "7.0.0-bridge.0", "gulp-if": "^2.0.1",
"babelify": "10.0.0", "gulp-jade": "~1.1.0",
"browserify": "16.2.3", "gulp-sourcemaps": "~1.6.0",
"gulp": "4.0.0", "gulp-plumber": "~1.1.0",
"gulp-autoprefixer": "6.0.0", "gulp-livereload": "~3.8.1",
"gulp-babel": "8.0.0", "gulp-concat": "~2.6.0",
"gulp-cached": "1.1.1", "gulp-uglify": "~1.5.3",
"gulp-chmod": "2.0.0", "gulp-rename": "~1.2.2",
"gulp-concat": "2.6.1", "gulp-chmod": "~1.3.0",
"gulp-git": "2.8.0", "minimist": "^1.2.0"
"gulp-if": "2.0.2",
"gulp-livereload": "4.0.0",
"gulp-plumber": "1.2.0",
"gulp-pug": "4.0.1",
"gulp-rename": "1.4.0",
"gulp-sass": "4.1.0",
"gulp-sourcemaps": "2.6.4",
"gulp-uglify-es": "1.0.4",
"jest": "^24.8.0",
"minimist": "1.2.0",
"vinyl-buffer": "1.0.1",
"vinyl-source-stream": "2.0.0"
},
"dependencies": {
"bootstrap": "^4.3.1",
"glob": "7.1.3",
"jquery": "^3.4.1",
"natives": "^1.1.6",
"popper.js": "1.14.4",
"video.js": "7.2.2",
"vue": "2.5.17"
},
"scripts": {
"test": "jest"
},
"__COMMENTS__": [
"natives@1.1.6 for Gulp 3.x on Node 10.x: https://github.com/gulpjs/gulp/issues/2162#issuecomment-385197164"
],
"resolutions": {
"natives": "1.1.6"
} }
} }


@@ -1,62 +1,25 @@
"""Pillar server.""" """Pillar server."""
import collections
import contextlib
import copy import copy
import json
import logging import logging
import logging.config import logging.config
import subprocess import subprocess
import tempfile import tempfile
import typing
import os
import os.path
import pathlib
import warnings
# These warnings have to be suppressed before the first import.
# Eve is falling behind on Cerberus. See https://github.com/pyeve/eve/issues/1278
warnings.filterwarnings(
'ignore', category=DeprecationWarning,
message="Methods for type testing are deprecated, use TypeDefinition and the "
"'types_mapping'-property of a Validator-instance instead")
# Werkzeug deprecated Request.is_xhr, but it works fine with jQuery and we don't need a reminder
# every time a unit test is run.
warnings.filterwarnings('ignore', category=DeprecationWarning,
message="'Request.is_xhr' is deprecated as of version 0.13 and will be "
"removed in version 1.0.")
import jinja2 import jinja2
import flask import os
import os.path
from eve import Eve from eve import Eve
from flask import g, render_template, request
from flask_babel import Babel, gettext as _
from flask.templating import TemplateNotFound
import pymongo.database
from werkzeug.local import LocalProxy
# Declare pillar.current_app before importing other Pillar modules.
def _get_current_app():
"""Returns the current application."""
return flask.current_app
current_app: 'PillarServer' = LocalProxy(_get_current_app)
"""the current app, annotated as PillarServer"""
from pillar.api import custom_field_validation from pillar.api import custom_field_validation
from pillar.api.utils import authentication from pillar.api.utils import authentication
import pillar.web.jinja from pillar.api.utils import gravatar
from pillar.web.utils import pretty_date
from pillar.web.nodes.routes import url_for_node
from . import api from . import api
from . import web from . import web
from . import auth from . import auth
from . import sentry_extra
import pillar.api.organizations
empty_settings = { empty_settings = {
# Use a random URL prefix when booting Eve, to ensure that any # Use a random URL prefix when booting Eve, to ensure that any
@ -67,49 +30,11 @@ empty_settings = {
} }
class ConfigurationMissingError(SystemExit): class PillarServer(Eve):
"""Raised when a vital configuration key is missing. def __init__(self, app_root, **kwargs):
Causes Python to exit.
"""
class BlinkerCompatibleEve(Eve):
"""Workaround for https://github.com/pyeve/eve/issues/1087"""
def __getattr__(self, name):
if name in {"im_self", "im_func"}:
raise AttributeError("type object '%s' has no attribute '%s'" %
(self.__class__.__name__, name))
return super().__getattr__(name)
class PillarServer(BlinkerCompatibleEve):
def __init__(self, app_root: str, **kwargs) -> None:
from .extension import PillarExtension
from celery import Celery
from flask_wtf.csrf import CSRFProtect
kwargs.setdefault('validator', custom_field_validation.ValidateCustomFields) kwargs.setdefault('validator', custom_field_validation.ValidateCustomFields)
super(PillarServer, self).__init__(settings=empty_settings, **kwargs) super(PillarServer, self).__init__(settings=empty_settings, **kwargs)
# mapping from extension name to extension object.
map_type = typing.MutableMapping[str, PillarExtension]
self.pillar_extensions: map_type = collections.OrderedDict()
self.pillar_extensions_template_paths = [] # list of paths
# The default roles Pillar uses. Will probably all move to extensions at some point.
self._user_roles: typing.Set[str] = {
'demo', 'admin', 'subscriber', 'homeproject',
'protected', 'org-subscriber', 'video-encoder',
'service', 'badger', 'svner',
}
self._user_roles_indexable: typing.Set[str] = {'demo', 'admin', 'subscriber'}
# Mapping from role name to capabilities given to that role.
self._user_caps: typing.MutableMapping[str, typing.FrozenSet[str]] = \
collections.defaultdict(frozenset)
self.app_root = os.path.abspath(app_root) self.app_root = os.path.abspath(app_root)
self._load_flask_config() self._load_flask_config()
self._config_logging() self._config_logging()
@ -117,13 +42,9 @@ class PillarServer(BlinkerCompatibleEve):
self.log = logging.getLogger('%s.%s' % (__name__, self.__class__.__name__)) self.log = logging.getLogger('%s.%s' % (__name__, self.__class__.__name__))
self.log.info('Creating new instance from %r', self.app_root) self.log.info('Creating new instance from %r', self.app_root)
self._config_url_map()
self._config_auth_token_hmac_key()
self._config_tempdirs() self._config_tempdirs()
self._config_git() self._config_git()
self._config_bugsnag()
self.sentry: typing.Optional[sentry_extra.PillarSentry] = None
self._config_sentry()
self._config_google_cloud_storage() self._config_google_cloud_storage()
self.algolia_index_users = None self.algolia_index_users = None
@ -141,34 +62,14 @@ class PillarServer(BlinkerCompatibleEve):
'api', 'eve_settings.py') 'api', 'eve_settings.py')
# self.settings = self.config['EVE_SETTINGS_PATH'] # self.settings = self.config['EVE_SETTINGS_PATH']
self.load_config() self.load_config()
self._validate_config()
# Configure authentication # Configure authentication
self.login_manager = auth.config_login_manager(self) self.login_manager = auth.config_login_manager(self)
self.oauth_blender_id = auth.config_oauth_login(self)
self._config_caching() self._config_caching()
self._config_translations() self.before_first_request(self.setup_db_indices)
# Celery itself is configured after all extensions have loaded.
self.celery: Celery = None
self.org_manager = pillar.api.organizations.OrgManager()
# Make CSRF protection available to the application. By default it is
# disabled on all endpoints. More info at WTF_CSRF_CHECK_DEFAULT in config.py
self.csrf = CSRFProtect(self)
def _validate_config(self):
if not self.config.get('SECRET_KEY'):
raise ConfigurationMissingError('SECRET_KEY configuration key is missing')
server_name = self.config.get('SERVER_NAME')
if not server_name:
raise ConfigurationMissingError('SERVER_NAME configuration key is missing, should be a '
'FQDN with TLD')
if server_name != 'localhost' and '.' not in server_name:
raise ConfigurationMissingError('SERVER_NAME should contain a FQDN with TLD')
def _load_flask_config(self): def _load_flask_config(self):
# Load configuration from different sources, to make it easy to override # Load configuration from different sources, to make it easy to override
@ -190,30 +91,6 @@ class PillarServer(BlinkerCompatibleEve):
if self.config['DEBUG']: if self.config['DEBUG']:
log.info('Pillar starting, debug=%s', self.config['DEBUG']) log.info('Pillar starting, debug=%s', self.config['DEBUG'])
def _config_url_map(self):
"""Extend Flask url_map with our own converters."""
import secrets, re
from . import flask_extra
if not self.config.get('STATIC_FILE_HASH'):
self.log.warning('STATIC_FILE_HASH is empty, generating random one')
h = re.sub(r'[_.~-]', '', secrets.token_urlsafe())[:8]
self.config['STATIC_FILE_HASH'] = h
self.url_map.converters['hashed_path'] = flask_extra.HashedPathConverter
def _config_auth_token_hmac_key(self):
"""Load AUTH_TOKEN_HMAC_KEY, falling back to SECRET_KEY."""
hmac_key = self.config.get('AUTH_TOKEN_HMAC_KEY')
if not hmac_key:
self.log.warning('AUTH_TOKEN_HMAC_KEY not set, falling back to SECRET_KEY')
hmac_key = self.config['AUTH_TOKEN_HMAC_KEY'] = self.config['SECRET_KEY']
if isinstance(hmac_key, str):
self.log.warning('Converting AUTH_TOKEN_HMAC_KEY to bytes')
self.config['AUTH_TOKEN_HMAC_KEY'] = hmac_key.encode('utf8')
def _config_tempdirs(self): def _config_tempdirs(self):
storage_dir = self.config['STORAGE_DIR'] storage_dir = self.config['STORAGE_DIR']
if not os.path.exists(storage_dir): if not os.path.exists(storage_dir):
@ -239,18 +116,25 @@ class PillarServer(BlinkerCompatibleEve):
self.config['GIT_REVISION'] = 'unknown' self.config['GIT_REVISION'] = 'unknown'
self.log.info('Git revision %r', self.config['GIT_REVISION']) self.log.info('Git revision %r', self.config['GIT_REVISION'])
def _config_sentry(self): def _config_bugsnag(self):
# TODO(Sybren): keep Sentry unconfigured when running CLI commands. # Configure Bugsnag
sentry_dsn = self.config.get('SENTRY_CONFIG', {}).get('dsn') if self.config.get('TESTING') or not self.config.get('BUGSNAG_API_KEY'):
if self.config.get('TESTING') or sentry_dsn in {'', '-set-in-config-local-'}: self.log.info('Bugsnag NOT configured.')
self.log.warning('Sentry NOT configured.')
self.sentry = None
return return
self.sentry = sentry_extra.PillarSentry( import bugsnag
self, logging=True, level=logging.WARNING, from bugsnag.flask import handle_exceptions
logging_exclusions=('werkzeug',)) from bugsnag.handlers import BugsnagHandler
self.log.debug('Sentry setup complete')
bugsnag.configure(
api_key=self.config['BUGSNAG_API_KEY'],
project_root="/data/git/pillar/pillar",
)
handle_exceptions(self)
bs_handler = BugsnagHandler()
bs_handler.setLevel(logging.ERROR)
self.log.addHandler(bs_handler)
def _config_google_cloud_storage(self): def _config_google_cloud_storage(self):
# Google Cloud project # Google Cloud project
@ -258,17 +142,17 @@ class PillarServer(BlinkerCompatibleEve):
os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = \ os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = \
self.config['GCLOUD_APP_CREDENTIALS'] self.config['GCLOUD_APP_CREDENTIALS']
except KeyError: except KeyError:
raise ConfigurationMissingError('GCLOUD_APP_CREDENTIALS configuration is missing') raise SystemExit('GCLOUD_APP_CREDENTIALS configuration is missing')
# Storage backend (GCS) # Storage backend (GCS)
try: try:
os.environ['GCLOUD_PROJECT'] = self.config['GCLOUD_PROJECT'] os.environ['GCLOUD_PROJECT'] = self.config['GCLOUD_PROJECT']
except KeyError: except KeyError:
raise ConfigurationMissingError('GCLOUD_PROJECT configuration value is missing') raise SystemExit('GCLOUD_PROJECT configuration value is missing')
def _config_algolia(self): def _config_algolia(self):
# Algolia search # Algolia search
if 'algolia' not in self.config['SEARCH_BACKENDS']: if self.config['SEARCH_BACKEND'] != 'algolia':
return return
from algoliasearch import algoliasearch from algoliasearch import algoliasearch
@ -282,172 +166,46 @@ class PillarServer(BlinkerCompatibleEve):
def _config_encoding_backend(self): def _config_encoding_backend(self):
# Encoding backend # Encoding backend
if self.config['ENCODING_BACKEND'] != 'zencoder': if self.config['ENCODING_BACKEND'] != 'zencoder':
self.log.warning('Encoding backend %r not supported, no video encoding possible!',
self.config['ENCODING_BACKEND'])
return return
self.log.info('Setting up video encoding backend %r',
self.config['ENCODING_BACKEND'])
from zencoder import Zencoder from zencoder import Zencoder
self.encoding_service_client = Zencoder(self.config['ZENCODER_API_KEY']) self.encoding_service_client = Zencoder(self.config['ZENCODER_API_KEY'])
def _config_caching(self): def _config_caching(self):
from flask_caching import Cache from flask_cache import Cache
self.cache = Cache(self) self.cache = Cache(self)
def set_languages(self, translations_folder: pathlib.Path):
"""Set the supported languages based on translations folders
English is an optional language included by default, since we will
never have a translations folder for it.
"""
self.default_locale = self.config['DEFAULT_LOCALE']
self.config['BABEL_DEFAULT_LOCALE'] = self.default_locale
# Determine available languages.
languages = list()
# The available languages will be determined based on available
# translations in the //translations/ folder. The exception is (American) English
# since all the text is originally in English already.
# That said, if rare occasions we may want to never show
# the site in English.
if self.config['SUPPORT_ENGLISH']:
languages.append('en_US')
base_path = pathlib.Path(self.app_root) / 'translations'
if not base_path.is_dir():
self.log.debug('Project has no translations folder: %s', base_path)
else:
languages.extend(i.name for i in base_path.iterdir() if i.is_dir())
# Use set for quicker lookup
self.languages = set(languages)
self.log.info('Available languages: %s' % ', '.join(self.languages))
def _config_translations(self):
"""
Initialize translations variable.
        BABEL_TRANSLATION_DIRECTORIES holds the folder with the compiled
        translation files; extension folders are appended using ';' as separator.
"""
self.log.info('Configure translations')
translations_path = pathlib.Path(__file__).parents[1].joinpath('translations')
self.config['BABEL_TRANSLATION_DIRECTORIES'] = str(translations_path)
babel = Babel(self)
self.set_languages(translations_path)
# get_locale() is registered as a callback for locale selection.
# That prevents the function from being garbage collected.
@babel.localeselector
def get_locale() -> str:
"""
Callback runs before each request to give us a chance to choose the
language to use when producing its response.
We set g.locale to be able to access it from the template pages.
We still need to return it explicitly, since this function is
            called as part of the Babel translation framework.
            We use the 'Accept-Language' header to match the available
            translations with the languages the user supports.
"""
locale = request.accept_languages.best_match(
self.languages, self.default_locale)
g.locale = locale
return locale
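For reference, the same Accept-Language matching in a minimal standalone Flask app, assuming Flask-Babel's legacy `localeselector` decorator used above; the app factory and the hard-coded locale set are illustrative only:

```python
from flask import Flask, g, request
from flask_babel import Babel

SUPPORTED_LOCALES = {'en_US', 'nl_NL', 'pt_BR'}  # normally derived from translations/ dirs
DEFAULT_LOCALE = 'en_US'


def create_app() -> Flask:
    app = Flask(__name__)
    app.config['BABEL_DEFAULT_LOCALE'] = DEFAULT_LOCALE
    babel = Babel(app)

    @babel.localeselector
    def get_locale() -> str:
        # Match the request's Accept-Language header against our locales,
        # falling back to the default when nothing matches.
        locale = request.accept_languages.best_match(SUPPORTED_LOCALES, DEFAULT_LOCALE)
        g.locale = locale
        return locale

    return app
```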
def load_extension(self, pillar_extension, url_prefix): def load_extension(self, pillar_extension, url_prefix):
from .extension import PillarExtension from .extension import PillarExtension
if not isinstance(pillar_extension, PillarExtension): self.log.info('Initialising extension %r', pillar_extension)
if self.config.get('DEBUG'): assert isinstance(pillar_extension, PillarExtension)
for cls in type(pillar_extension).mro():
self.log.error('class %42r (%i) is %42r (%i): %s',
cls, id(cls), PillarExtension, id(PillarExtension),
cls is PillarExtension)
raise AssertionError('Extension has wrong type %r' % type(pillar_extension))
self.log.info('Loading extension %s', pillar_extension.name)
# Remember this extension, and disallow duplicates.
if pillar_extension.name in self.pillar_extensions:
            raise ValueError('Extension with name %s already loaded' % pillar_extension.name)
self.pillar_extensions[pillar_extension.name] = pillar_extension
# Load extension Flask configuration # Load extension Flask configuration
for key, value in pillar_extension.flask_config().items(): for key, value in pillar_extension.flask_config():
self.config.setdefault(key, value) self.config.setdefault(key, value)
# Load extension blueprint(s) # Load extension blueprint(s)
for blueprint in pillar_extension.blueprints(): for blueprint in pillar_extension.blueprints():
if blueprint.url_prefix: self.register_blueprint(blueprint, url_prefix=url_prefix)
if not url_prefix:
# If we registered the extension with url_prefix=None
url_prefix = ''
blueprint_prefix = url_prefix + blueprint.url_prefix
else:
blueprint_prefix = url_prefix
self.register_blueprint(blueprint, url_prefix=blueprint_prefix)
# Load template paths
tpath = pillar_extension.template_path
if tpath:
self.log.info('Extension %s: adding template path %s',
pillar_extension.name, tpath)
if not os.path.exists(tpath):
                raise ValueError('Template path %s for extension %s does not exist.'
                                 % (tpath, pillar_extension.name))
self.pillar_extensions_template_paths.append(tpath)
# Load extension Eve settings # Load extension Eve settings
eve_settings = pillar_extension.eve_settings() eve_settings = pillar_extension.eve_settings()
if 'DOMAIN' in eve_settings:
pillar_ext_prefix = pillar_extension.name + '_'
pillar_url_prefix = pillar_extension.name + '/'
for key, collection in eve_settings['DOMAIN'].items(): for key, collection in eve_settings['DOMAIN'].items():
assert key.startswith(pillar_ext_prefix), \ source = '%s.%s' % (pillar_extension.name, key)
'Eve collection names of %s MUST start with %r' % \ url = '%s/%s' % (pillar_extension.name, key)
(pillar_extension.name, pillar_ext_prefix)
url = key.replace(pillar_ext_prefix, pillar_url_prefix)
collection.setdefault('datasource', {}).setdefault('source', key) collection.setdefault('datasource', {}).setdefault('source', source)
collection.setdefault('url', url) collection.setdefault('url', url)
self.config['DOMAIN'].update(eve_settings['DOMAIN']) self.config['DOMAIN'].update(eve_settings['DOMAIN'])
# Configure the extension translations
trpath = pillar_extension.translations_path
if not trpath:
self.log.debug('Extension %s does not have a translations folder',
pillar_extension.name)
return
self.log.info('Extension %s: adding translations path %s',
pillar_extension.name, trpath)
# Babel requires semi-colon string separation
self.config['BABEL_TRANSLATION_DIRECTORIES'] += ';' + str(trpath)
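A rough sketch of the interface `load_extension()` consumes; the extension name, config key, and Eve schema below are made up, and the real `PillarExtension` base class may require more overrides than shown here:

```python
import flask

from pillar.extension import PillarExtension


class DemoExtension(PillarExtension):  # 'DemoExtension' is an illustrative name
    @property
    def name(self) -> str:
        return 'demo'

    def flask_config(self) -> dict:
        return {'DEMO_GREETING': 'hello'}

    def blueprints(self) -> list:
        return [flask.Blueprint('demo', __name__, url_prefix='/demo')]

    def eve_settings(self) -> dict:
        # Eve collection names MUST start with '<extension name>_',
        # as enforced by the assertion in load_extension() above.
        return {'DOMAIN': {
            'demo_items': {'schema': {'title': {'type': 'string'}}},
        }}

    @property
    def template_path(self):
        return None  # or an absolute path containing Jinja templates

    @property
    def translations_path(self):
        return None  # or a path containing compiled Babel translations
```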
def _config_jinja_env(self): def _config_jinja_env(self):
# Start with the extensions...
paths_list = [
jinja2.FileSystemLoader(path)
for path in reversed(self.pillar_extensions_template_paths)
]
# ...then load Pillar paths.
pillar_dir = os.path.dirname(os.path.realpath(__file__)) pillar_dir = os.path.dirname(os.path.realpath(__file__))
parent_theme_path = os.path.join(pillar_dir, 'web', 'templates') parent_theme_path = os.path.join(pillar_dir, 'web', 'templates')
current_path = os.path.join(self.app_root, 'templates') current_path = os.path.join(self.app_root, 'templates')
paths_list += [ paths_list = [
jinja2.FileSystemLoader(current_path), jinja2.FileSystemLoader(current_path),
jinja2.FileSystemLoader(parent_theme_path), jinja2.FileSystemLoader(parent_theme_path),
self.jinja_loader self.jinja_loader
@ -458,116 +216,36 @@ class PillarServer(BlinkerCompatibleEve):
custom_jinja_loader = jinja2.ChoiceLoader(paths_list) custom_jinja_loader = jinja2.ChoiceLoader(paths_list)
self.jinja_loader = custom_jinja_loader self.jinja_loader = custom_jinja_loader
pillar.web.jinja.setup_jinja_env(self.jinja_env, self.config) def format_pretty_date(d):
return pretty_date(d)
# Register context processors from extensions def format_pretty_date_time(d):
for ext in self.pillar_extensions.values(): return pretty_date(d, detail=True)
if not ext.has_context_processor:
continue
self.log.debug('Registering context processor for %s', ext.name) self.jinja_env.filters['pretty_date'] = format_pretty_date
self.context_processor(ext.context_processor) self.jinja_env.filters['pretty_date_time'] = format_pretty_date_time
self.jinja_env.globals['url_for_node'] = url_for_node
def _config_static_dirs(self): def _config_static_dirs(self):
pillar_dir = os.path.dirname(os.path.realpath(__file__))
# Setup static folder for the instanced app # Setup static folder for the instanced app
self.static_folder = os.path.join(self.app_root, 'static') self.static_folder = os.path.join(self.app_root, 'static')
# Setup static folder for Pillar # Setup static folder for Pillar
pillar_dir = os.path.dirname(os.path.realpath(__file__)) self.pillar_static_folder = os.path.join(pillar_dir, 'web', 'static')
pillar_static_folder = os.path.join(pillar_dir, 'web', 'static')
self.register_static_file_endpoint('/static/pillar', 'static_pillar', pillar_static_folder)
# Setup static folders for extensions from flask.views import MethodView
for name, ext in self.pillar_extensions.items(): from flask import send_from_directory
if not ext.static_path: from flask import current_app
continue
self.register_static_file_endpoint('/static/%s' % name,
'static_%s' % name,
ext.static_path)
def _config_celery(self): class PillarStaticFile(MethodView):
from celery import Celery def get(self, filename):
return send_from_directory(current_app.pillar_static_folder,
filename)
self.log.info('Configuring Celery') self.add_url_rule('/static/pillar/<path:filename>',
view_func=PillarStaticFile.as_view('static_pillar'))
# Pillar-defined Celery task modules:
celery_task_modules = [
'pillar.celery.avatar',
'pillar.celery.badges',
'pillar.celery.email_tasks',
'pillar.celery.file_link_tasks',
'pillar.celery.search_index_tasks',
'pillar.celery.tasks',
]
# Allow Pillar extensions from defining their own Celery tasks.
for extension in self.pillar_extensions.values():
celery_task_modules.extend(extension.celery_task_modules)
self.celery = Celery(
'pillar.celery',
backend=self.config['CELERY_BACKEND'],
broker=self.config['CELERY_BROKER'],
include=celery_task_modules,
task_track_started=True,
result_expires=3600,
)
# This configures the Celery task scheduler in such a way that we don't
# have to import the pillar.celery.XXX modules. Remember to run
# 'manage.py celery beat' too, otherwise those will never run.
beat_schedule = self.config.get('CELERY_BEAT_SCHEDULE')
if beat_schedule:
self.celery.conf.beat_schedule = beat_schedule
self.log.info('Pinging Celery workers')
self.log.info('Response: %s', self.celery.control.ping())
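Outside of Pillar, the same Celery wiring looks roughly like this; the broker/backend URLs and the beat task path are placeholders, not actual Pillar configuration:

```python
from celery import Celery

celery = Celery(
    'pillar.celery',
    backend='redis://redis/1',                    # placeholder CELERY_BACKEND
    broker='amqp://guest:guest@rabbit:5672//',    # placeholder CELERY_BROKER
    include=['pillar.celery.tasks'],              # extensions append their own modules
    task_track_started=True,
    result_expires=3600,
)

# Entries here only fire when a beat process runs alongside the workers
# ('manage.py celery beat' in Pillar's case).
celery.conf.beat_schedule = {
    'example-every-10-minutes': {
        'task': 'pillar.celery.tasks.example_task',  # placeholder task path
        'schedule': 600,
    },
}
```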
def _config_user_roles(self):
"""Gathers all user roles from extensions.
The union of all user roles can be obtained from self.user_roles.
"""
for extension in self.pillar_extensions.values():
indexed_but_not_defined = extension.user_roles_indexable - extension.user_roles
if indexed_but_not_defined:
                raise ValueError('Extension %s has roles %s indexable but not in user_roles'
                                 % (extension.name, indexed_but_not_defined))
self._user_roles.update(extension.user_roles)
self._user_roles_indexable.update(extension.user_roles_indexable)
self.log.info('Loaded %i user roles from extensions, %i of which are indexable',
len(self._user_roles), len(self._user_roles_indexable))
def _config_user_caps(self):
"""Merges all capability settings from app config and extensions."""
app_caps = collections.defaultdict(frozenset, **self.config['USER_CAPABILITIES'])
for extension in self.pillar_extensions.values():
ext_caps = extension.user_caps
for role, caps in ext_caps.items():
union_caps = frozenset(app_caps[role] | caps)
app_caps[role] = union_caps
self._user_caps = app_caps
if self.log.isEnabledFor(logging.DEBUG):
import pprint
self.log.debug('Configured user capabilities: %s', pprint.pformat(self._user_caps))
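The capability merge itself is plain set arithmetic; a tiny standalone illustration with made-up capability names:

```python
import collections

app_caps = collections.defaultdict(frozenset, {
    'subscriber': frozenset({'subscriber', 'home-project'}),
})
ext_caps = {'subscriber': frozenset({'view-demo-assets'})}  # from one extension

# Union each extension's capabilities per role into the app-level mapping.
for role, caps in ext_caps.items():
    app_caps[role] = frozenset(app_caps[role] | caps)

assert app_caps['subscriber'] == {'subscriber', 'home-project', 'view-demo-assets'}
```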
def register_static_file_endpoint(self, url_prefix, endpoint_name, static_folder):
from pillar.web.staticfile import PillarStaticFile
view_func = PillarStaticFile.as_view(endpoint_name, static_folder=static_folder)
self.add_url_rule(f'{url_prefix}/<hashed_path:filename>', view_func=view_func)
def process_extensions(self): def process_extensions(self):
"""This is about Eve extensions, not Pillar extensions."""
# Re-initialise Eve after we allowed Pillar submodules to be loaded. # Re-initialise Eve after we allowed Pillar submodules to be loaded.
# EVIL STARTS HERE. It just copies part of the Eve.__init__() method. # EVIL STARTS HERE. It just copies part of the Eve.__init__() method.
self.set_defaults() self.set_defaults()
@ -590,159 +268,17 @@ class PillarServer(BlinkerCompatibleEve):
self.finish_startup() self.finish_startup()
def register_error_handlers(self):
super(PillarServer, self).register_error_handlers()
# Register error handlers per code.
for code in (403, 404, 412, 500):
self.register_error_handler(code, self.pillar_error_handler)
# Register error handlers per exception.
from pillarsdk import exceptions as sdk_exceptions
sdk_handlers = [
(sdk_exceptions.UnauthorizedAccess, self.handle_sdk_unauth),
(sdk_exceptions.ForbiddenAccess, self.handle_sdk_forbidden),
(sdk_exceptions.ResourceNotFound, self.handle_sdk_resource_not_found),
(sdk_exceptions.ResourceInvalid, self.handle_sdk_resource_invalid),
(sdk_exceptions.MethodNotAllowed, self.handle_sdk_method_not_allowed),
(sdk_exceptions.PreconditionFailed, self.handle_sdk_precondition_failed),
]
for (eclass, handler) in sdk_handlers:
self.register_error_handler(eclass, handler)
def handle_sdk_unauth(self, error):
"""Global exception handling for pillarsdk UnauthorizedAccess
        Currently the API is fully locked down, so we need to constantly
        check for user authorization.
"""
return flask.redirect(flask.url_for('users.login'))
def handle_sdk_forbidden(self, error):
self.log.info('Forwarding ForbiddenAccess exception to client: %s', error, exc_info=True)
error.code = 403
return self.pillar_error_handler(error)
def handle_sdk_resource_not_found(self, error):
self.log.info('Forwarding ResourceNotFound exception to client: %s', error, exc_info=True)
content = getattr(error, 'content', None)
if content:
try:
error_content = json.loads(content)
except ValueError:
error_content = None
if error_content and error_content.get('_deleted', False):
# This document used to exist, but doesn't any more. Let the user know.
doc_name = error_content.get('name')
node_type = error_content.get('node_type')
if node_type:
node_type = node_type.replace('_', ' ').title()
if doc_name:
description = '%s "%s" was deleted.' % (node_type, doc_name)
else:
description = 'This %s was deleted.' % (node_type,)
else:
if doc_name:
description = '"%s" was deleted.' % doc_name
else:
description = None
error.description = description
error.code = 404
return self.pillar_error_handler(error)
def handle_sdk_precondition_failed(self, error):
self.log.info('Forwarding PreconditionFailed exception to client: %s', error)
error.code = 412
return self.pillar_error_handler(error)
def handle_sdk_resource_invalid(self, error):
self.log.exception('Forwarding ResourceInvalid exception to client: %s', error, exc_info=True)
        # Raising a Werkzeug 422 exception doesn't work, as Flask turns it into a 500.
return _('The submitted data could not be validated.'), 422
def handle_sdk_method_not_allowed(self, error):
"""Forwards 405 Method Not Allowed to the client.
This is actually not fair, as a 405 between Pillar and Pillar-Web
doesn't imply that the request the client did on Pillar-Web is not
allowed. However, it does allow us to debug this if it happens, by
watching for 405s in the browser.
"""
from flask import request
self.log.info('Forwarding MethodNotAllowed exception to client: %s', error, exc_info=True)
self.log.info('HTTP Referer is %r', request.referrer)
        # Raising a Werkzeug 405 exception doesn't work, as Flask turns it into a 500.
return 'The requested HTTP method is not allowed on this URL.', 405
def pillar_error_handler(self, error_ob):
# 'error_ob' can be any exception. If it's not a Werkzeug exception,
# handle it as a 500.
if not hasattr(error_ob, 'code'):
error_ob.code = 500
if not hasattr(error_ob, 'description'):
error_ob.description = str(error_ob)
if request.full_path.startswith('/%s/' % self.config['URL_PREFIX']):
from pillar.api.utils import jsonify
# This is an API request, so respond in JSON.
return jsonify({
'_status': 'ERR',
'_code': error_ob.code,
'_message': error_ob.description,
}, status=error_ob.code)
# See whether we should return an embedded page or a regular one.
if request.is_xhr:
fname = 'errors/%i_embed.html' % error_ob.code
else:
fname = 'errors/%i.html' % error_ob.code
# Also handle the case where we didn't create a template for this error.
try:
return render_template(fname, description=error_ob.description), error_ob.code
except TemplateNotFound:
self.log.warning('Error template %s for code %i not found',
fname, error_ob.code)
return render_template('errors/500.html'), error_ob.code
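The same strategy in a bare Flask app, for illustration: API URLs get the `_status`/`_code`/`_message` JSON document, everything else gets an error template. The `/api/` prefix and template name are placeholders, and the template is assumed to exist:

```python
import flask

app = flask.Flask(__name__)


@app.errorhandler(404)
def handle_404(error):
    if flask.request.path.startswith('/api/'):
        # API clients get a machine-readable error document.
        body = {'_status': 'ERR', '_code': 404, '_message': error.description}
        return flask.jsonify(body), 404
    # Web clients get a rendered error page.
    return flask.render_template('errors/404.html',
                                 description=error.description), 404
```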
def finish_startup(self): def finish_startup(self):
self.log.info('Using MongoDB database %r', self.config['MONGO_DBNAME']) self.log.info('Using MongoDB database %r', self.config['MONGO_DBNAME'])
with self.app_context():
self.setup_db_indices()
self._config_celery()
api.setup_app(self) api.setup_app(self)
web.setup_app(self) web.setup_app(self)
authentication.setup_app(self) authentication.setup_app(self)
# Register Flask Debug Toolbar (disabled by default).
from flask_debugtoolbar import DebugToolbarExtension
DebugToolbarExtension(self)
for ext in self.pillar_extensions.values():
self.log.info('Setting up extension %s', ext.name)
ext.setup_app(self)
self._config_jinja_env() self._config_jinja_env()
self._config_static_dirs() self._config_static_dirs()
self._config_user_roles()
self._config_user_caps()
# Only enable this when debugging. # Only enable this when debugging.
# TODO(fsiddi): Consider removing this in favor of the routes tab in Flask Debug Toolbar.
# self._list_routes() # self._list_routes()
def setup_db_indices(self): def setup_db_indices(self):
@ -762,7 +298,6 @@ class PillarServer(BlinkerCompatibleEve):
coll = db['tokens'] coll = db['tokens']
coll.create_index([('user', pymongo.ASCENDING)]) coll.create_index([('user', pymongo.ASCENDING)])
coll.create_index([('token', pymongo.ASCENDING)]) coll.create_index([('token', pymongo.ASCENDING)])
coll.create_index([('token_hashed', pymongo.ASCENDING)])
coll = db['notifications'] coll = db['notifications']
coll.create_index([('user', pymongo.ASCENDING)]) coll.create_index([('user', pymongo.ASCENDING)])
@ -778,22 +313,6 @@ class PillarServer(BlinkerCompatibleEve):
coll.create_index([('parent', pymongo.ASCENDING)]) coll.create_index([('parent', pymongo.ASCENDING)])
coll.create_index([('short_code', pymongo.ASCENDING)], coll.create_index([('short_code', pymongo.ASCENDING)],
sparse=True, unique=True) sparse=True, unique=True)
# Used for latest assets & comments
coll.create_index([('properties.status', pymongo.ASCENDING),
('node_type', pymongo.ASCENDING),
('_created', pymongo.DESCENDING)])
# Used for asset tags
coll.create_index([('properties.tags', pymongo.ASCENDING)])
coll = db['projects']
# This index is used for statistics, and for fetching public projects.
coll.create_index([('is_private', pymongo.ASCENDING)])
coll.create_index([('category', pymongo.ASCENDING)])
coll = db['organizations']
coll.create_index([('ip_ranges.start', pymongo.ASCENDING)])
coll.create_index([('ip_ranges.end', pymongo.ASCENDING)])
self.log.debug('Created database indices')
def register_api_blueprint(self, blueprint, url_prefix): def register_api_blueprint(self, blueprint, url_prefix):
# TODO: use Eve config variable instead of hard-coded '/api' # TODO: use Eve config variable instead of hard-coded '/api'
@ -805,50 +324,32 @@ class PillarServer(BlinkerCompatibleEve):
return 'basic ' + base64.b64encode('%s:%s' % (username, subclient_id)) return 'basic ' + base64.b64encode('%s:%s' % (username, subclient_id))
def post_internal(self, resource: str, payl=None, skip_validation=False): def post_internal(self, resource, payl=None, skip_validation=False):
"""Workaround for Eve issue https://github.com/pyeve/eve/issues/810""" """Workaround for Eve issue https://github.com/nicolaiarocci/eve/issues/810"""
from eve.methods.post import post_internal from eve.methods.post import post_internal
url = self.config['URLS'][resource] with self.test_request_context(method='POST', path='%s/%s' % (self.api_prefix, resource)):
path = '%s/%s' % (self.api_prefix, url) return post_internal(resource, payl=payl, skip_validation=skip_validation)
with self.__fake_request_url_rule('POST', path): def put_internal(self, resource, payload=None, concurrency_check=False,
return post_internal(resource, payl=payl, skip_validation=skip_validation)[:4]
def put_internal(self, resource: str, payload=None, concurrency_check=False,
skip_validation=False, **lookup): skip_validation=False, **lookup):
"""Workaround for Eve issue https://github.com/pyeve/eve/issues/810""" """Workaround for Eve issue https://github.com/nicolaiarocci/eve/issues/810"""
from eve.methods.put import put_internal from eve.methods.put import put_internal
url = self.config['URLS'][resource] path = '%s/%s/%s' % (self.api_prefix, resource, lookup['_id'])
path = '%s/%s/%s' % (self.api_prefix, url, lookup['_id']) with self.test_request_context(method='PUT', path=path):
with self.__fake_request_url_rule('PUT', path):
return put_internal(resource, payload=payload, concurrency_check=concurrency_check, return put_internal(resource, payload=payload, concurrency_check=concurrency_check,
skip_validation=skip_validation, **lookup)[:4] skip_validation=skip_validation, **lookup)
def patch_internal(self, resource: str, payload=None, concurrency_check=False, def patch_internal(self, resource, payload=None, concurrency_check=False,
skip_validation=False, **lookup): skip_validation=False, **lookup):
"""Workaround for Eve issue https://github.com/pyeve/eve/issues/810""" """Workaround for Eve issue https://github.com/nicolaiarocci/eve/issues/810"""
from eve.methods.patch import patch_internal from eve.methods.patch import patch_internal
url = self.config['URLS'][resource] path = '%s/%s/%s' % (self.api_prefix, resource, lookup['_id'])
path = '%s/%s/%s' % (self.api_prefix, url, lookup['_id']) with self.test_request_context(method='PATCH', path=path):
with self.__fake_request_url_rule('PATCH', path):
return patch_internal(resource, payload=payload, concurrency_check=concurrency_check, return patch_internal(resource, payload=payload, concurrency_check=concurrency_check,
skip_validation=skip_validation, **lookup)[:4] skip_validation=skip_validation, **lookup)
def delete_internal(self, resource: str, concurrency_check=False,
suppress_callbacks=False, **lookup):
"""Workaround for Eve issue https://github.com/pyeve/eve/issues/810"""
from eve.methods.delete import deleteitem_internal
url = self.config['URLS'][resource]
path = '%s/%s/%s' % (self.api_prefix, url, lookup['_id'])
with self.__fake_request_url_rule('DELETE', path):
return deleteitem_internal(resource,
concurrency_check=concurrency_check,
suppress_callbacks=suppress_callbacks,
**lookup)[:4]
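Callers use these helpers whenever they need to create or modify documents without an incoming HTTP request; a usage sketch, assuming an application context and with placeholder ObjectIds:

```python
import logging

from bson import ObjectId
from pillar import current_app

log = logging.getLogger(__name__)

payload = {'user': ObjectId(), 'activity': ObjectId()}  # illustrative notification document

# Each helper returns the familiar Eve 4-tuple: (document, _, _, status_code).
doc, _, _, status = current_app.post_internal('notifications', payload)
if status != 201:
    log.error('Creating notification failed with status %i: %s', status, doc)
```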
def _list_routes(self): def _list_routes(self):
from pprint import pprint from pprint import pprint
@ -866,82 +367,8 @@ class PillarServer(BlinkerCompatibleEve):
# and rules that require parameters # and rules that require parameters
if "GET" in rule.methods and has_no_empty_params(rule): if "GET" in rule.methods and has_no_empty_params(rule):
url = url_for(rule.endpoint, **(rule.defaults or {})) url = url_for(rule.endpoint, **(rule.defaults or {}))
links.append((url, rule.endpoint, rule.methods)) links.append((url, rule.endpoint))
if "PATCH" in rule.methods:
args = {arg: arg for arg in rule.arguments}
url = url_for(rule.endpoint, **args)
links.append((url, rule.endpoint, rule.methods))
links.sort(key=lambda t: (('/api/' in t[0]), len(t[0]))) links.sort(key=lambda t: len(t[0]) + 100 * ('/api/' in t[0]))
pprint(links, width=300) pprint(links)
def db(self, collection_name: str = None) \
-> typing.Union[pymongo.collection.Collection, pymongo.database.Database]:
"""Returns the MongoDB database, or the collection (if given)"""
if collection_name:
return self.data.driver.db[collection_name]
return self.data.driver.db
def extension_sidebar_links(self, project):
"""Returns the sidebar links for the given projects.
:returns: HTML as a string for the sidebar.
"""
if not project:
return ''
return jinja2.Markup(''.join(ext.sidebar_links(project)
for ext in self.pillar_extensions.values()))
@contextlib.contextmanager
def __fake_request_url_rule(self, method: str, url_path: str):
"""Tries to force-set the request URL rule.
This is required by Eve (since 0.70) to be able to construct a
Location HTTP header that points to the resource item.
See post_internal, put_internal and patch_internal.
"""
import werkzeug.exceptions as wz_exceptions
with self.test_request_context(method=method, path=url_path) as ctx:
try:
rule, _ = ctx.url_adapter.match(url_path, method=method, return_rule=True)
except (wz_exceptions.MethodNotAllowed, wz_exceptions.NotFound):
# We're POSTing things that we haven't told Eve are POSTable. Try again using the
# GET method.
rule, _ = ctx.url_adapter.match(url_path, method='GET', return_rule=True)
current_request = request._get_current_object()
current_request.url_rule = rule
yield ctx
def validator_for_resource(self,
resource_name: str) -> custom_field_validation.ValidateCustomFields:
schema = self.config['DOMAIN'][resource_name]['schema']
validator = self.validator(schema, resource_name)
return validator
@property
def user_roles(self) -> typing.FrozenSet[str]:
return frozenset(self._user_roles)
@property
def user_roles_indexable(self) -> typing.FrozenSet[str]:
return frozenset(self._user_roles_indexable)
@property
def user_caps(self) -> typing.Mapping[str, typing.FrozenSet[str]]:
return self._user_caps
@property
def real_app(self) -> 'PillarServer':
"""The real application object.
Can be used to obtain the real app object from a LocalProxy.
"""
return self
View File
@ -1,20 +1,15 @@
def setup_app(app): def setup_app(app):
from . import encoding, blender_id, projects, local_auth, file_storage from . import encoding, blender_id, projects, local_auth, file_storage
from . import users, nodes, latest, blender_cloud, service, activities, timeline from . import users, nodes, latest, blender_cloud, service, activities
from . import organizations
from . import search
encoding.setup_app(app, url_prefix='/encoding') encoding.setup_app(app, url_prefix='/encoding')
blender_id.setup_app(app, url_prefix='/blender_id') blender_id.setup_app(app, url_prefix='/blender_id')
search.setup_app(app, url_prefix='/newsearch')
projects.setup_app(app, api_prefix='/p') projects.setup_app(app, api_prefix='/p')
local_auth.setup_app(app, url_prefix='/auth') local_auth.setup_app(app, url_prefix='/auth')
file_storage.setup_app(app, url_prefix='/storage') file_storage.setup_app(app, url_prefix='/storage')
latest.setup_app(app, url_prefix='/latest') latest.setup_app(app, url_prefix='/latest')
timeline.setup_app(app, url_prefix='/timeline')
blender_cloud.setup_app(app, url_prefix='/bcloud') blender_cloud.setup_app(app, url_prefix='/bcloud')
users.setup_app(app, api_prefix='/users') users.setup_app(app, api_prefix='/users')
service.setup_app(app, api_prefix='/service') service.setup_app(app, api_prefix='/service')
nodes.setup_app(app, url_prefix='/nodes') nodes.setup_app(app, url_prefix='/nodes')
activities.setup_app(app) activities.setup_app(app)
organizations.setup_app(app)
View File
@ -1,10 +1,5 @@
import logging from flask import g, request, current_app
from pillar.api.utils import gravatar
from flask import request, current_app
import pillar.api.users.avatar
from pillar.auth import current_user
log = logging.getLogger(__name__)
def notification_parse(notification): def notification_parse(notification):
@ -18,11 +13,6 @@ def notification_parse(notification):
if activity is None or activity['object_type'] != 'node': if activity is None or activity['object_type'] != 'node':
return return
node = nodes_collection.find_one({'_id': activity['object']}) node = nodes_collection.find_one({'_id': activity['object']})
if not node:
# This can happen when a notification is generated and then the
# node is deleted.
return
# Initial support only for node_type comments # Initial support only for node_type comments
if node['node_type'] != 'comment': if node['node_type'] != 'comment':
return return
@ -31,7 +21,7 @@ def notification_parse(notification):
object_name = '' object_name = ''
object_id = activity['object'] object_id = activity['object']
if node['parent']['user'] == current_user.user_id: if node['parent']['user'] == g.current_user['user_id']:
owner = "your {0}".format(node['parent']['node_type']) owner = "your {0}".format(node['parent']['node_type'])
else: else:
parent_comment_user = users_collection.find_one( parent_comment_user = users_collection.find_one(
@ -53,7 +43,7 @@ def notification_parse(notification):
action = activity['verb'] action = activity['verb']
lookup = { lookup = {
'user': current_user.user_id, 'user': g.current_user['user_id'],
'context_object_type': 'node', 'context_object_type': 'node',
'context_object': context_object_id, 'context_object': context_object_id,
} }
@ -68,7 +58,7 @@ def notification_parse(notification):
if actor: if actor:
parsed_actor = { parsed_actor = {
'username': actor['username'], 'username': actor['username'],
'avatar': pillar.api.users.avatar.url(actor)} 'avatar': gravatar(actor['email'])}
else: else:
parsed_actor = None parsed_actor = None
@ -91,14 +81,14 @@ def notification_parse(notification):
def notification_get_subscriptions(context_object_type, context_object_id, actor_user_id): def notification_get_subscriptions(context_object_type, context_object_id, actor_user_id):
subscriptions_collection = current_app.db('activities-subscriptions') subscriptions_collection = current_app.data.driver.db['activities-subscriptions']
lookup = { lookup = {
'user': {"$ne": actor_user_id}, 'user': {"$ne": actor_user_id},
'context_object_type': context_object_type, 'context_object_type': context_object_type,
'context_object': context_object_id, 'context_object': context_object_id,
'is_subscribed': True, 'is_subscribed': True,
} }
return subscriptions_collection.find(lookup), subscriptions_collection.count_documents(lookup) return subscriptions_collection.find(lookup)
def activity_subscribe(user_id, context_object_type, context_object_id): def activity_subscribe(user_id, context_object_type, context_object_id):
@ -119,8 +109,6 @@ def activity_subscribe(user_id, context_object_type, context_object_id):
# If no subscription exists, we create one # If no subscription exists, we create one
if not subscription: if not subscription:
# Workaround for issue: https://github.com/pyeve/eve/issues/1174
lookup['notifications'] = {}
current_app.post_internal('activities-subscriptions', lookup) current_app.post_internal('activities-subscriptions', lookup)
@ -140,74 +128,30 @@ def activity_object_add(actor_user_id, verb, object_type, object_id,
:param object_id: object id, to be traced with object_type_id :param object_id: object id, to be traced with object_type_id
""" """
subscriptions, subscription_count = notification_get_subscriptions( subscriptions = notification_get_subscriptions(
context_object_type, context_object_id, actor_user_id) context_object_type, context_object_id, actor_user_id)
if subscription_count == 0: if subscriptions.count() > 0:
return activity = dict(
actor_user=actor_user_id,
verb=verb,
object_type=object_type,
object=object_id,
context_object_type=context_object_type,
context_object=context_object_id
)
info, status = register_activity(actor_user_id, verb, object_type, object_id, activity = current_app.post_internal('activities', activity)
context_object_type, context_object_id) if activity[3] != 201:
if status != 201:
            # If creation failed for any reason, do not create any notification                                   # If creation failed for any reason, do not create any notification
return return
for subscription in subscriptions: for subscription in subscriptions:
notification = dict( notification = dict(
user=subscription['user'], user=subscription['user'],
activity=info['_id']) activity=activity[0]['_id'])
current_app.post_internal('notifications', notification) current_app.post_internal('notifications', notification)
def register_activity(actor_user_id, verb, object_type, object_id,
context_object_type, context_object_id,
project_id=None,
node_type=None):
"""Registers an activity.
This works using the following pattern:
ACTOR -> VERB -> OBJECT -> CONTEXT
:param actor_user_id: id of the user who is changing the object
:param verb: the action on the object ('commented', 'replied')
:param object_type: hardcoded name, see database schema
:param object_id: object id, to be traced with object_type
:param context_object_type: the type of the context object, like 'project' or 'node',
see database schema
:param context_object_id:
:param project_id: optional project ID to make the activity easily queryable
per project.
:param node_type: optional, node type of the node receiving the activity.
    :returns: tuple (info, status_code), where a successful operation should have
              status_code=201. If it is not 201, an error is logged.
"""
activity = {
'actor_user': actor_user_id,
'verb': verb,
'object_type': object_type,
'object': object_id,
'context_object_type': context_object_type,
'context_object': context_object_id}
if project_id:
activity['project'] = project_id
if node_type:
activity['node_type'] = node_type
info, _, _, status_code = current_app.post_internal('activities', activity)
if status_code != 201:
log.error('register_activity: code %i creating activity %s: %s',
status_code, activity, info)
else:
log.info('register_activity: user %s "%s" on %s %s, context %s %s',
actor_user_id, verb, object_type, object_id,
context_object_type, context_object_id)
return info, status_code
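A usage sketch based on the docstring above, assuming an application context and that the function lives in `pillar.api.activities` as the registration further up suggests; every ObjectId is a placeholder:

```python
from bson import ObjectId

from pillar.api.activities import register_activity

info, status = register_activity(
    actor_user_id=ObjectId(),            # the commenting user
    verb='commented',
    object_type='node',
    object_id=ObjectId(),                # the new comment node
    context_object_type='node',
    context_object_id=ObjectId(),        # the asset being commented on
    project_id=ObjectId(),
    node_type='comment',
)
if status != 201:
    # register_activity() has already logged the error; decide how to proceed here.
    pass
```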
def before_returning_item_notifications(response): def before_returning_item_notifications(response):
if request.args.get('parse'): if request.args.get('parse'):
notification_parse(response) notification_parse(response)
View File
@ -24,8 +24,7 @@ def blender_cloud_addon_version():
def setup_app(app, url_prefix): def setup_app(app, url_prefix):
from . import texture_libs, home_project, subscription from . import texture_libs, home_project
texture_libs.setup_app(app, url_prefix=url_prefix) texture_libs.setup_app(app, url_prefix=url_prefix)
home_project.setup_app(app, url_prefix=url_prefix) home_project.setup_app(app, url_prefix=url_prefix)
subscription.setup_app(app, url_prefix=url_prefix)
View File
@ -1,11 +1,12 @@
import copy import copy
import logging import logging
from bson import ObjectId import datetime
from bson import ObjectId, tz_util
from eve.methods.get import get from eve.methods.get import get
from flask import Blueprint, current_app, request from flask import Blueprint, g, current_app, request
from pillar.api import utils from pillar.api import utils
from pillar.api.utils import authentication, authorization, utcnow from pillar.api.utils import authentication, authorization
from werkzeug import exceptions as wz_exceptions from werkzeug import exceptions as wz_exceptions
from pillar.api.projects import utils as proj_utils from pillar.api.projects import utils as proj_utils
@ -17,7 +18,7 @@ log = logging.getLogger(__name__)
HOME_PROJECT_USERS = set() HOME_PROJECT_USERS = set()
# Users with any of these roles will get full write access to their home project. # Users with any of these roles will get full write access to their home project.
HOME_PROJECT_WRITABLE_USERS = {'subscriber', 'demo'} HOME_PROJECT_WRITABLE_USERS = {u'subscriber', u'demo'}
HOME_PROJECT_DESCRIPTION = ('# Your home project\n\n' HOME_PROJECT_DESCRIPTION = ('# Your home project\n\n'
'This is your home project. It allows synchronisation ' 'This is your home project. It allows synchronisation '
@ -29,7 +30,7 @@ HOME_PROJECT_SUMMARY = 'This is your home project. Here you can sync your Blende
# 'as a pastebin for text, images and other assets, and ' # 'as a pastebin for text, images and other assets, and '
# 'allows synchronisation of your Blender settings.') # 'allows synchronisation of your Blender settings.')
# HOME_PROJECT_SUMMARY = 'This is your home project. Pastebin and Blender settings sync in one!' # HOME_PROJECT_SUMMARY = 'This is your home project. Pastebin and Blender settings sync in one!'
SYNC_GROUP_NODE_NAME = 'Blender Sync' SYNC_GROUP_NODE_NAME = u'Blender Sync'
SYNC_GROUP_NODE_DESC = ('The [Blender Cloud Addon](https://cloud.blender.org/services' SYNC_GROUP_NODE_DESC = ('The [Blender Cloud Addon](https://cloud.blender.org/services'
'#blender-addon) will synchronize your Blender settings here.') '#blender-addon) will synchronize your Blender settings here.')
@ -112,7 +113,7 @@ def create_home_project(user_id, write_access):
# Re-validate the authentication token, so that the put_internal call sees the # Re-validate the authentication token, so that the put_internal call sees the
# new group created for the project. # new group created for the project.
authentication.validate_token(force=True) authentication.validate_token()
# There are a few things in the on_insert_projects hook we need to adjust. # There are a few things in the on_insert_projects hook we need to adjust.
@ -134,8 +135,8 @@ def create_home_project(user_id, write_access):
# This allows people to comment on shared images and see comments. # This allows people to comment on shared images and see comments.
node_type_comment = assign_permissions( node_type_comment = assign_permissions(
node_type_comment, node_type_comment,
subscriber_methods=['GET', 'POST'], subscriber_methods=[u'GET', u'POST'],
world_methods=['GET']) world_methods=[u'GET'])
project['node_types'] = [ project['node_types'] = [
node_type_group, node_type_group,
@ -200,10 +201,8 @@ def home_project():
Eve projections are supported, but at least the following fields must be present: Eve projections are supported, but at least the following fields must be present:
'permissions', 'category', 'user' 'permissions', 'category', 'user'
""" """
from pillar.auth import current_user user_id = g.current_user['user_id']
roles = g.current_user.get('roles', ())
user_id = current_user.user_id
roles = current_user.roles
log.debug('Possibly creating home project for user %s with roles %s', user_id, roles) log.debug('Possibly creating home project for user %s with roles %s', user_id, roles)
if HOME_PROJECT_USERS and not HOME_PROJECT_USERS.intersection(roles): if HOME_PROJECT_USERS and not HOME_PROJECT_USERS.intersection(roles):
@ -216,7 +215,7 @@ def home_project():
write_access = write_access_with_roles(roles) write_access = write_access_with_roles(roles)
create_home_project(user_id, write_access) create_home_project(user_id, write_access)
resp, _, _, status, _ = get('projects', category='home', user=user_id) resp, _, _, status, _ = get('projects', category=u'home', user=user_id)
if status != 200: if status != 200:
return utils.jsonify(resp), status return utils.jsonify(resp), status
@ -249,18 +248,18 @@ def home_project_permissions(write_access):
""" """
if write_access: if write_access:
return ['GET', 'PUT', 'POST', 'DELETE'] return [u'GET', u'PUT', u'POST', u'DELETE']
return ['GET'] return [u'GET']
def has_home_project(user_id): def has_home_project(user_id):
"""Returns True iff the user has a home project.""" """Returns True iff the user has a home project."""
proj_coll = current_app.data.driver.db['projects'] proj_coll = current_app.data.driver.db['projects']
return proj_coll.count_documents({'user': user_id, 'category': 'home', '_deleted': False}) > 0 return proj_coll.count({'user': user_id, 'category': 'home', '_deleted': False}) > 0
def get_home_project(user_id: ObjectId, projection=None) -> dict: def get_home_project(user_id, projection=None):
"""Returns the home project""" """Returns the home project"""
proj_coll = current_app.data.driver.db['projects'] proj_coll = current_app.data.driver.db['projects']
@ -272,7 +271,7 @@ def is_home_project(project_id, user_id):
"""Returns True iff the given project exists and is the user's home project.""" """Returns True iff the given project exists and is the user's home project."""
proj_coll = current_app.data.driver.db['projects'] proj_coll = current_app.data.driver.db['projects']
return proj_coll.count_documents({'_id': project_id, return proj_coll.count({'_id': project_id,
'user': user_id, 'user': user_id,
'category': 'home', 'category': 'home',
'_deleted': False}) > 0 '_deleted': False}) > 0
@ -281,7 +280,7 @@ def is_home_project(project_id, user_id):
def mark_node_updated(node_id): def mark_node_updated(node_id):
"""Uses pymongo to set the node's _updated to "now".""" """Uses pymongo to set the node's _updated to "now"."""
now = utcnow() now = datetime.datetime.now(tz=tz_util.utc)
nodes_coll = current_app.data.driver.db['nodes'] nodes_coll = current_app.data.driver.db['nodes']
return nodes_coll.update_one({'_id': node_id}, return nodes_coll.update_one({'_id': node_id},
View File
@ -1,180 +0,0 @@
import logging
import typing
import blinker
from flask import Blueprint, Response
import requests
from requests.adapters import HTTPAdapter
from pillar import auth, current_app
from pillar.api import blender_id
from pillar.api.utils import authorization, jsonify
from pillar.auth import current_user
log = logging.getLogger(__name__)
blueprint = Blueprint('blender_cloud.subscription', __name__)
# Mapping from roles on Blender ID to roles here in Pillar.
# Roles not mentioned here will not be synced from Blender ID.
ROLES_BID_TO_PILLAR = {
'cloud_subscriber': 'subscriber',
'cloud_demo': 'demo',
'cloud_has_subscription': 'has_subscription',
}
user_subscription_updated = blinker.NamedSignal(
'user_subscription_updated',
'The sender is a UserClass instance, kwargs includes "revoke_roles" and "grant_roles".')
@blueprint.route('/update-subscription')
@authorization.require_login()
def update_subscription() -> typing.Tuple[str, int]:
"""Updates the subscription status of the current user.
Returns an empty HTTP response.
"""
my_log: logging.Logger = log.getChild('update_subscription')
real_current_user = auth.get_current_user() # multiple accesses, just get unproxied.
try:
bid_user = blender_id.fetch_blenderid_user()
except blender_id.LogoutUser:
auth.logout_user()
return '', 204
if not bid_user:
my_log.warning('Logged in user %s has no BlenderID account! '
'Unable to update subscription status.', real_current_user.user_id)
return '', 204
do_update_subscription(real_current_user, bid_user)
return '', 204
@blueprint.route('/update-subscription-for/<user_id>', methods=['POST'])
@authorization.require_login(require_cap='admin')
def update_subscription_for(user_id: str):
"""Updates the user based on their info at Blender ID."""
from urllib.parse import urljoin
from pillar.api.utils import str2id
my_log = log.getChild('update_subscription_for')
bid_session = requests.Session()
bid_session.mount('https://', HTTPAdapter(max_retries=5))
bid_session.mount('http://', HTTPAdapter(max_retries=5))
users_coll = current_app.db('users')
db_user = users_coll.find_one({'_id': str2id(user_id)})
if not db_user:
my_log.warning('User %s not found in database', user_id)
return Response(f'User {user_id} not found in our database', status=404)
log.info('Updating user %s from Blender ID on behalf of %s',
db_user['email'], current_user.email)
bid_user_id = blender_id.get_user_blenderid(db_user)
if not bid_user_id:
my_log.info('User %s has no Blender ID', user_id)
return Response('User has no Blender ID', status=404)
# Get the user info from Blender ID, and handle errors.
api_url = current_app.config['BLENDER_ID_USER_INFO_API']
api_token = current_app.config['BLENDER_ID_USER_INFO_TOKEN']
url = urljoin(api_url, bid_user_id)
resp = bid_session.get(url, headers={'Authorization': f'Bearer {api_token}'})
if resp.status_code == 404:
my_log.info('User %s has a Blender ID %s but Blender ID itself does not find it',
user_id, bid_user_id)
return Response(f'User {bid_user_id} does not exist at Blender ID', status=404)
if resp.status_code != 200:
my_log.info('Error code %s getting user %s from Blender ID (resp = %s)',
resp.status_code, user_id, resp.text)
return Response(f'Error code {resp.status_code} from Blender ID', status=resp.status_code)
# Update the user in our database.
local_user = auth.UserClass.construct('', db_user)
bid_user = resp.json()
do_update_subscription(local_user, bid_user)
return '', 204
def do_update_subscription(local_user: auth.UserClass, bid_user: dict):
"""Updates the subscription status of the user given the Blender ID user info.
Uses the badger service to update the user's roles from Blender ID.
bid_user should be a dict like:
{'id': 1234,
'full_name': 'मूंगफली मक्खन प्रेमी',
'email': 'here@example.com',
'roles': {'cloud_demo': True}}
    The 'roles' key can also be an iterable of role names instead of a dict.
"""
from pillar.api import service
my_log: logging.Logger = log.getChild('do_update_subscription')
try:
email = bid_user['email']
except KeyError:
email = '-missing email-'
# Transform the BID roles from a dict to a set.
bidr = bid_user.get('roles', set())
if isinstance(bidr, dict):
bid_roles = {role
for role, has_role in bid_user.get('roles', {}).items()
if has_role}
else:
bid_roles = set(bidr)
# Handle the role changes via the badger service functionality.
plr_roles = set(local_user.roles)
grant_roles = set()
revoke_roles = set()
for bid_role, plr_role in ROLES_BID_TO_PILLAR.items():
if bid_role in bid_roles and plr_role not in plr_roles:
grant_roles.add(plr_role)
continue
if bid_role not in bid_roles and plr_role in plr_roles:
revoke_roles.add(plr_role)
user_id = local_user.user_id
if grant_roles:
if my_log.isEnabledFor(logging.INFO):
my_log.info('granting roles to user %s (Blender ID %s): %s',
user_id, email, ', '.join(sorted(grant_roles)))
service.do_badger('grant', roles=grant_roles, user_id=user_id)
if revoke_roles:
if my_log.isEnabledFor(logging.INFO):
my_log.info('revoking roles to user %s (Blender ID %s): %s',
user_id, email, ', '.join(sorted(revoke_roles)))
service.do_badger('revoke', roles=revoke_roles, user_id=user_id)
# Let the world know this user's subscription was updated.
final_roles = (plr_roles - revoke_roles).union(grant_roles)
local_user.roles = list(final_roles)
local_user.collect_capabilities()
user_subscription_updated.send(local_user,
grant_roles=grant_roles,
revoke_roles=revoke_roles)
# Re-index the user in the search database.
from pillar.api.users import hooks
hooks.push_updated_user_to_search({'_id': user_id}, {})
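The grant/revoke computation above is plain set logic over ROLES_BID_TO_PILLAR; a standalone illustration using the same mapping, with example role sets:

```python
ROLES_BID_TO_PILLAR = {
    'cloud_subscriber': 'subscriber',
    'cloud_demo': 'demo',
    'cloud_has_subscription': 'has_subscription',
}

bid_roles = {'cloud_subscriber', 'cloud_has_subscription'}  # as reported by Blender ID
plr_roles = {'demo', 'has_subscription'}                    # current Pillar roles

grant_roles, revoke_roles = set(), set()
for bid_role, plr_role in ROLES_BID_TO_PILLAR.items():
    if bid_role in bid_roles and plr_role not in plr_roles:
        grant_roles.add(plr_role)
    elif bid_role not in bid_roles and plr_role in plr_roles:
        revoke_roles.add(plr_role)

assert grant_roles == {'subscriber'}
assert revoke_roles == {'demo'}
```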
def setup_app(app, url_prefix):
log.info('Registering blueprint at %s', url_prefix)
app.register_api_blueprint(blueprint, url_prefix=url_prefix)
View File
@ -3,14 +3,12 @@ import logging
from eve.methods.get import get from eve.methods.get import get
from eve.utils import config as eve_config from eve.utils import config as eve_config
from flask import Blueprint, request, current_app from flask import Blueprint, request, current_app, g
from werkzeug.datastructures import MultiDict
from werkzeug.exceptions import InternalServerError
from pillar.api import utils from pillar.api import utils
from pillar.api.utils.authentication import current_user_id from pillar.api.utils.authentication import current_user_id
from pillar.api.utils.authorization import require_login from pillar.api.utils.authorization import require_login
from pillar.auth import current_user from werkzeug.datastructures import MultiDict
from werkzeug.exceptions import InternalServerError
FIRST_ADDON_VERSION_WITH_HDRI = (1, 4, 0) FIRST_ADDON_VERSION_WITH_HDRI = (1, 4, 0)
TL_PROJECTION = utils.dumps({'name': 1, 'url': 1, 'permissions': 1,}) TL_PROJECTION = utils.dumps({'name': 1, 'url': 1, 'permissions': 1,})
@ -27,8 +25,8 @@ log = logging.getLogger(__name__)
def keep_fetching_texture_libraries(proj_filter): def keep_fetching_texture_libraries(proj_filter):
groups = current_user.group_ids groups = g.current_user['groups']
user_id = current_user.user_id user_id = g.current_user['user_id']
page = 1 page = 1
max_page = float('inf') max_page = float('inf')
@ -76,7 +74,7 @@ def texture_libraries():
# of the Blender Cloud Addon. If the addon version is None, we're dealing # of the Blender Cloud Addon. If the addon version is None, we're dealing
# with a version of the BCA that's so old it doesn't send its version along. # with a version of the BCA that's so old it doesn't send its version along.
addon_version = blender_cloud_addon_version() addon_version = blender_cloud_addon_version()
return_hdri = addon_version is not None and addon_version >= FIRST_ADDON_VERSION_WITH_HDRI return_hdri = addon_version >= FIRST_ADDON_VERSION_WITH_HDRI
log.debug('User %s has Blender Cloud Addon version %s; return_hdri=%s', log.debug('User %s has Blender Cloud Addon version %s; return_hdri=%s',
current_user_id(), addon_version, return_hdri) current_user_id(), addon_version, return_hdri)
@ -104,7 +102,7 @@ def has_texture_node(proj, return_hdri=True):
if return_hdri: if return_hdri:
node_types.append('group_hdri') node_types.append('group_hdri')
count = nodes_collection.count_documents( count = nodes_collection.count(
{'node_type': {'$in': node_types}, {'node_type': {'$in': node_types},
'project': proj['_id'], 'project': proj['_id'],
'parent': None}) 'parent': None})
View File
@ -4,57 +4,20 @@ Also contains functionality for other parts of Pillar to perform communication
with Blender ID. with Blender ID.
""" """
import datetime
import logging import logging
from urllib.parse import urljoin
import datetime
import requests import requests
from bson import tz_util from bson import tz_util
from rauth import OAuth2Session from flask import Blueprint, request, current_app, jsonify
from flask import Blueprint, request, jsonify, session from pillar.api.utils import authentication, remove_private_keys
from requests.adapters import HTTPAdapter from requests.adapters import HTTPAdapter
import urllib3.util.retry from werkzeug import exceptions as wz_exceptions
from pillar import current_app
from pillar.auth import get_blender_id_oauth_token
from pillar.api.utils import authentication, utcnow
from pillar.api.utils.authentication import find_user_in_db, upsert_user
blender_id = Blueprint('blender_id', __name__) blender_id = Blueprint('blender_id', __name__)
log = logging.getLogger(__name__) log = logging.getLogger(__name__)
class LogoutUser(Exception):
"""Raised when Blender ID tells us the current user token is invalid.
This indicates the user should be immediately logged out.
"""
class Session(requests.Session):
"""Requests Session suitable for Blender ID communication."""
def __init__(self):
super().__init__()
retries = urllib3.util.retry.Retry(
total=10,
backoff_factor=0.05,
)
http_adapter = requests.adapters.HTTPAdapter(max_retries=retries)
self.mount('https://', http_adapter)
self.mount('http://', http_adapter)
def authenticate(self):
"""Attach the current user's authentication token to the request."""
bid_token = get_blender_id_oauth_token()
if not bid_token:
raise TypeError('authenticate() requires current user to be logged in with Blender ID')
self.headers['Authorization'] = f'Bearer {bid_token}'
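A sketch of using this Session for an authenticated Blender ID call; it assumes a request context with a logged-in Blender ID user, and the `api/user` path mirrors the one used further below:

```python
from urllib.parse import urljoin

from pillar import current_app
from pillar.api.blender_id import Session

s = Session()
s.authenticate()  # raises TypeError when no Blender ID token is in the session
url = urljoin(current_app.config['BLENDER_ID_ENDPOINT'], 'api/user')
resp = s.get(url, timeout=5)
resp.raise_for_status()
print(resp.json())
```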
@blender_id.route('/store_scst', methods=['POST']) @blender_id.route('/store_scst', methods=['POST'])
def store_subclient_token(): def store_subclient_token():
"""Verifies & stores a user's subclient-specific token.""" """Verifies & stores a user's subclient-specific token."""
@ -74,6 +37,13 @@ def store_subclient_token():
'subclient_user_id': str(db_user['_id'])}), status 'subclient_user_id': str(db_user['_id'])}), status
def blender_id_endpoint():
"""Gets the endpoint for the authentication API. If the env variable
is defined, it's possible to override the (default) production address.
"""
return current_app.config['BLENDER_ID_ENDPOINT'].rstrip('/')
def validate_create_user(blender_id_user_id, token, oauth_subclient_id): def validate_create_user(blender_id_user_id, token, oauth_subclient_id):
"""Validates a user against Blender ID, creating the user in our database. """Validates a user against Blender ID, creating the user in our database.
@ -94,23 +64,78 @@ def validate_create_user(blender_id_user_id, token, oauth_subclient_id):
# Blender ID can be queried without user ID, and will always include the # Blender ID can be queried without user ID, and will always include the
# correct user ID in its response. # correct user ID in its response.
log.debug('Obtained user info from Blender ID: %s', user_info) log.debug('Obtained user info from Blender ID: %s', user_info)
blender_id_user_id = user_info['id']
# Store the user info in MongoDB. # Store the user info in MongoDB.
db_user = find_user_in_db(user_info) db_user = find_user_in_db(blender_id_user_id, user_info)
db_id, status = upsert_user(db_user) db_id, status = upsert_user(db_user, blender_id_user_id)
# Store the token in MongoDB. # Store the token in MongoDB.
ip_based_roles = current_app.org_manager.roles_for_request() authentication.store_token(db_id, token, token_expiry, oauth_subclient_id)
authentication.store_token(db_id, token, token_expiry, oauth_subclient_id,
org_roles=ip_based_roles)
if current_app.org_manager is not None:
roles = current_app.org_manager.refresh_roles(db_id)
db_user['roles'] = list(roles)
return db_user, status return db_user, status
def upsert_user(db_user, blender_id_user_id):
"""Inserts/updates the user in MongoDB.
Retries a few times when there are uniqueness issues in the username.
:returns: the user's database ID and the status of the PUT/POST.
The status is 201 on insert, and 200 on update.
:type: (ObjectId, int)
"""
if u'subscriber' in db_user.get('groups', []):
log.error('Non-ObjectID string found in user.groups: %s', db_user)
raise wz_exceptions.InternalServerError('Non-ObjectID string found in user.groups: %s' % db_user)
r = {}
for retry in range(5):
if '_id' in db_user:
# Update the existing user
attempted_eve_method = 'PUT'
db_id = db_user['_id']
r, _, _, status = current_app.put_internal('users', remove_private_keys(db_user),
_id=db_id)
if status == 422:
log.error('Status %i trying to PUT user %s with values %s, should not happen! %s',
status, db_id, remove_private_keys(db_user), r)
else:
# Create a new user, retry for non-unique usernames.
attempted_eve_method = 'POST'
r, _, _, status = current_app.post_internal('users', db_user)
if status not in {200, 201}:
log.error('Status %i trying to create user for BlenderID %s with values %s: %s',
status, blender_id_user_id, db_user, r)
raise wz_exceptions.InternalServerError()
db_id = r['_id']
db_user.update(r) # update with database/eve-generated fields.
if status == 422:
# Probably non-unique username, so retry a few times with different usernames.
log.info('Error creating new user: %s', r)
username_issue = r.get('_issues', {}).get(u'username', '')
if u'not unique' in username_issue:
# Retry
db_user['username'] = authentication.make_unique_username(db_user['email'])
continue
# Saving was successful, or at least didn't break on a non-unique username.
break
else:
log.error('Unable to create new user %s: %s', db_user, r)
raise wz_exceptions.InternalServerError()
if status not in (200, 201):
log.error('internal response from %s to Eve: %r %r', attempted_eve_method, status, r)
raise wz_exceptions.InternalServerError()
return db_id, status
def validate_token(user_id, token, oauth_subclient_id): def validate_token(user_id, token, oauth_subclient_id):
"""Verifies a subclient token with Blender ID. """Verifies a subclient token with Blender ID.
@ -134,35 +159,23 @@ def validate_token(user_id, token, oauth_subclient_id):
payload = {'user_id': user_id, payload = {'user_id': user_id,
'token': token} 'token': token}
if oauth_subclient_id: if oauth_subclient_id:
# If the subclient ID is set, the token belongs to another OAuth Client,
# in which case we do not set the client_id field.
payload['subclient_id'] = oauth_subclient_id payload['subclient_id'] = oauth_subclient_id
else:
# We only want to accept Blender Cloud tokens.
payload['client_id'] = current_app.config['OAUTH_CREDENTIALS']['blender-id']['id']
blender_id_endpoint = current_app.config['BLENDER_ID_ENDPOINT'] url = '{0}/u/validate_token'.format(blender_id_endpoint())
url = urljoin(blender_id_endpoint, 'u/validate_token')
log.debug('POSTing to %r', url) log.debug('POSTing to %r', url)
# Retry a few times when POSTing to BlenderID fails.
# Source: http://stackoverflow.com/a/15431343/875379
s = requests.Session()
s.mount(blender_id_endpoint(), HTTPAdapter(max_retries=5))
# POST to Blender ID, handling errors as negative verification results. # POST to Blender ID, handling errors as negative verification results.
s = Session()
try: try:
r = s.post(url, data=payload, timeout=5, r = s.post(url, data=payload, timeout=5,
verify=current_app.config['TLS_CERT_FILE']) verify=current_app.config['TLS_CERT_FILE'])
except requests.exceptions.ConnectionError: except requests.exceptions.ConnectionError as e:
log.error('Connection error trying to POST to %s, handling as invalid token.', url) log.error('Connection error trying to POST to %s, handling as invalid token.', url)
return None, None return None, None
except requests.exceptions.ReadTimeout:
log.error('Read timeout trying to POST to %s, handling as invalid token.', url)
return None, None
except requests.exceptions.RequestException as ex:
log.error('Requests error "%s" trying to POST to %s, handling as invalid token.', ex, url)
return None, None
except IOError as ex:
log.error('Unknown I/O error "%s" trying to POST to %s, handling as invalid token.',
ex, url)
return None, None
if r.status_code != 200: if r.status_code != 200:
log.debug('Token %s invalid, HTTP status %i returned', token, r.status_code) log.debug('Token %s invalid, HTTP status %i returned', token, r.status_code)
@ -186,118 +199,43 @@ def _compute_token_expiry(token_expires_string):
the token. the token.
""" """
# requirement is called python-dateutil, so PyCharm doesn't find it. date_format = current_app.config['RFC1123_DATE_FORMAT']
# noinspection PyPackageRequirements blid_expiry = datetime.datetime.strptime(token_expires_string, date_format)
from dateutil import parser blid_expiry = blid_expiry.replace(tzinfo=tz_util.utc)
our_expiry = datetime.datetime.now(tz=tz_util.utc) + datetime.timedelta(hours=1)
blid_expiry = parser.parse(token_expires_string)
blid_expiry = blid_expiry.astimezone(tz_util.utc)
our_expiry = utcnow() + datetime.timedelta(hours=1)
return min(blid_expiry, our_expiry) return min(blid_expiry, our_expiry)
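Condensed into a standalone helper, the expiry computation is simply the earlier of Blender ID's reported expiry and one hour from now; the example timestamp below is made up:

```python
import datetime

from bson import tz_util
from dateutil import parser


def compute_token_expiry(token_expires_string: str) -> datetime.datetime:
    # Parse Blender ID's expiry timestamp and normalise it to UTC.
    blid_expiry = parser.parse(token_expires_string).astimezone(tz_util.utc)
    our_expiry = datetime.datetime.now(tz=tz_util.utc) + datetime.timedelta(hours=1)
    return min(blid_expiry, our_expiry)


print(compute_token_expiry('Fri, 31 Dec 2027 23:59:59 GMT'))
```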
def get_user_blenderid(db_user: dict) -> str: def find_user_in_db(blender_id_user_id, user_info):
"""Returns the Blender ID user ID for this Pillar user. """Find the user in our database, creating/updating the returned document where needed.
Takes the string from 'auth.*.user_id' for the '*' where 'provider' Does NOT update the user in the database.
is 'blender-id'.
:returns the user ID, or the empty string when the user has none.
""" """
bid_user_ids = [auth['user_id'] users = current_app.data.driver.db['users']
for auth in db_user['auth']
if auth['provider'] == 'blender-id']
try:
return bid_user_ids[0]
except IndexError:
return ''
query = {'auth': {'$elemMatch': {'user_id': str(blender_id_user_id),
'provider': 'blender-id'}}}
log.debug('Querying: %s', query)
db_user = users.find_one(query)
def fetch_blenderid_user() -> dict: if db_user:
"""Returns the user info of the currently logged in user from BlenderID. log.debug('User blender_id_user_id=%r already in our database, '
'updating with info from Blender ID.', blender_id_user_id)
db_user['email'] = user_info['email']
else:
log.debug('User %r not yet in our database, create a new one.', blender_id_user_id)
db_user = authentication.create_new_user_document(
email=user_info['email'],
user_id=blender_id_user_id,
username=user_info['full_name'])
db_user['username'] = authentication.make_unique_username(user_info['email'])
if not db_user['full_name']:
db_user['full_name'] = db_user['username']
Returns an empty dict if communication fails. return db_user
Example dict:
{
"email": "some@email.example.com",
"full_name": "dr. Sybren A. St\u00fcvel",
"id": 5555,
"roles": {
"admin": true,
"bfct_trainer": false,
"cloud_has_subscription": true,
"cloud_subscriber": true,
"conference_speaker": true,
"network_member": true
}
}
:raises LogoutUser: when Blender ID tells us the current token is
invalid, and the user should be logged out.
"""
import httplib2 # used by the oauth2 package
my_log = log.getChild('fetch_blenderid_user')
bid_url = urljoin(current_app.config['BLENDER_ID_ENDPOINT'], 'api/user')
my_log.debug('Fetching user info from %s', bid_url)
credentials = current_app.config['OAUTH_CREDENTIALS']['blender-id']
oauth_token = session.get('blender_id_oauth_token')
if not oauth_token:
my_log.warning('no Blender ID oauth token found in user session')
return {}
assert isinstance(oauth_token, str), f'oauth token must be str, not {type(oauth_token)}'
oauth_session = OAuth2Session(
credentials['id'], credentials['secret'],
access_token=oauth_token)
try:
bid_resp = oauth_session.get(bid_url)
except httplib2.HttpLib2Error:
my_log.exception('Error getting %s from BlenderID', bid_url)
return {}
if bid_resp.status_code == 403:
my_log.warning('Error %i from BlenderID %s, logging out user', bid_resp.status_code, bid_url)
raise LogoutUser()
if bid_resp.status_code != 200:
my_log.warning('Error %i from BlenderID %s: %s', bid_resp.status_code, bid_url, bid_resp.text)
return {}
payload = bid_resp.json()
if not payload:
my_log.warning('Empty data returned from BlenderID %s', bid_url)
return {}
my_log.debug('BlenderID returned %s', payload)
return payload
def avatar_url(blenderid_user_id: str) -> str:
"""Return the URL to the user's avatar on Blender ID.
This avatar should be downloaded, and not served from the Blender ID URL.
"""
bid_url = urljoin(current_app.config['BLENDER_ID_ENDPOINT'],
f'api/user/{blenderid_user_id}/avatar')
return bid_url
def setup_app(app, url_prefix): def setup_app(app, url_prefix):
app.register_api_blueprint(blender_id, url_prefix=url_prefix) app.register_api_blueprint(blender_id, url_prefix=url_prefix)
def switch_user_url(next_url: str) -> str:
from urllib.parse import quote
base_url = urljoin(current_app.config['BLENDER_ID_ENDPOINT'], 'switch')
if next_url:
return '%s?next=%s' % (base_url, quote(next_url))
return base_url

View File

@ -1,58 +1,39 @@
from datetime import datetime
import logging import logging
from bson import ObjectId, tz_util from bson import ObjectId
from datetime import datetime
from eve.io.mongo import Validator from eve.io.mongo import Validator
from flask import current_app from flask import current_app
from pillar import markdown
log = logging.getLogger(__name__) log = logging.getLogger(__name__)
class ValidateCustomFields(Validator):
# TODO: split this into a convert_property(property, schema) and call that from this function.
def convert_properties(self, properties, node_schema):
"""Converts datetime strings and ObjectId strings to actual Python objects."""
date_format = current_app.config['RFC1123_DATE_FORMAT'] date_format = current_app.config['RFC1123_DATE_FORMAT']
for prop in node_schema: for prop in node_schema:
if prop not in properties: if not prop in properties:
continue continue
schema_prop = node_schema[prop] schema_prop = node_schema[prop]
prop_type = schema_prop['type'] prop_type = schema_prop['type']
if prop_type == 'dict': if prop_type == 'dict':
try: properties[prop] = self.convert_properties(
dict_valueschema = schema_prop['schema'] properties[prop], schema_prop['schema'])
properties[prop] = self.convert_properties(properties[prop], dict_valueschema) if prop_type == 'list':
except KeyError:
# Cerberus 1.3 changed valueschema to valuesrules.
dict_valueschema = schema_prop.get('valuesrules') or \
schema_prop.get('valueschema')
if dict_valueschema is None:
raise KeyError(f"missing 'valuesrules' key in schema of property {prop}")
self.convert_dict_values(properties[prop], dict_valueschema)
elif prop_type == 'list':
if properties[prop] in ['', '[]']: if properties[prop] in ['', '[]']:
properties[prop] = [] properties[prop] = []
if 'schema' in schema_prop:
for k, val in enumerate(properties[prop]): for k, val in enumerate(properties[prop]):
if not 'schema' in schema_prop:
continue
item_schema = {'item': schema_prop['schema']} item_schema = {'item': schema_prop['schema']}
item_prop = {'item': properties[prop][k]} item_prop = {'item': properties[prop][k]}
properties[prop][k] = self.convert_properties( properties[prop][k] = self.convert_properties(
item_prop, item_schema)['item'] item_prop, item_schema)['item']
# Convert datetime string to RFC1123 datetime # Convert datetime string to RFC1123 datetime
elif prop_type == 'datetime': elif prop_type == 'datetime':
prop_val = properties[prop] prop_val = properties[prop]
prop_naive = datetime.strptime(prop_val, date_format) properties[prop] = datetime.strptime(prop_val, date_format)
prop_aware = prop_naive.replace(tzinfo=tz_util.utc)
properties[prop] = prop_aware
elif prop_type == 'objectid': elif prop_type == 'objectid':
prop_val = properties[prop] prop_val = properties[prop]
if prop_val: if prop_val:
@ -62,26 +43,7 @@ class ValidateCustomFields(Validator):
return properties return properties
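To make the coercion performed by `convert_properties()` above more tangible, here is a hedged, standalone sketch of the two simplest cases it handles, datetime strings and ObjectId strings; the field names, date format and example values are assumptions for illustration, not taken from the project's schemas:

```
import datetime
from bson import ObjectId

RFC1123_DATE_FORMAT = '%a, %d %b %Y %H:%M:%S GMT'  # assumed to match the app config

def coerce_properties(properties: dict, schema: dict) -> dict:
    """Convert string values to datetime/ObjectId objects according to a tiny schema."""
    for name, rules in schema.items():
        if name not in properties:
            continue
        if rules['type'] == 'datetime':
            naive = datetime.datetime.strptime(properties[name], RFC1123_DATE_FORMAT)
            properties[name] = naive.replace(tzinfo=datetime.timezone.utc)
        elif rules['type'] == 'objectid':
            properties[name] = ObjectId(properties[name]) if properties[name] else None
    return properties

print(coerce_properties(
    {'when': 'Mon, 01 Jan 2018 12:00:00 GMT', 'file': '5a4b3c2d1e0f9a8b7c6d5e4f'},
    {'when': {'type': 'datetime'}, 'file': {'type': 'objectid'}}))
```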
def convert_dict_values(self, dict_property, dict_valueschema):
"""Calls convert_properties() for the values in the dict.
Only validates the dict values, not the keys. Modifies the given dict in-place.
"""
assert dict_valueschema['type'] == 'dict'
assert isinstance(dict_property, dict)
for key, val in dict_property.items():
item_schema = {'item': dict_valueschema}
item_prop = {'item': val}
dict_property[key] = self.convert_properties(item_prop, item_schema)['item']
def _validate_valid_properties(self, valid_properties, field, value): def _validate_valid_properties(self, valid_properties, field, value):
"""Fake property that triggers node dynamic property validation.
The rule's arguments are validated against this schema:
{'type': 'boolean'}
"""
from pillar.api.utils import project_get_node_type from pillar.api.utils import project_get_node_type
projects_collection = current_app.data.driver.db['projects'] projects_collection = current_app.data.driver.db['projects']
@ -110,91 +72,11 @@ class ValidateCustomFields(Validator):
except Exception as e: except Exception as e:
log.warning("Error converting form properties", exc_info=True) log.warning("Error converting form properties", exc_info=True)
v = self.__class__(schema=node_type['dyn_schema']) v = Validator(node_type['dyn_schema'])
val = v.validate(value) val = v.validate(value)
if val: if val:
# This ensures the modifications made by v's coercion rules are
# visible to this validator's output.
self.document[field] = v.document
return True return True
log.warning('Error validating properties for node %s: %s', self.document, v.errors) log.warning('Error validating properties for node %s: %s', self.document, v.errors)
self._error(field, "Error validating properties") self._error(field, "Error validating properties")
def _validate_required_after_creation(self, required_after_creation, field, value):
"""Makes a value required after creation only.
Combine "required_after_creation=True" with "required=False" to allow
pre-insert hooks to set default values.
The rule's arguments are validated against this schema:
{'type': 'boolean'}
"""
if not required_after_creation:
# Setting required_after_creation=False is the same as not mentioning this
# validator at all.
return
if self.document_id is None:
# This is a creation call, in which case this validator shouldn't run.
return
if not value:
self._error(field, "Value is required once the document was created")
def _check_with_iprange(self, field_name: str, value: str):
"""Ensure the field contains a valid IP address.
Supports both IPv6 and IPv4 ranges. Requires the IPy module.
"""
from IPy import IP
try:
ip = IP(value, make_net=True)
except ValueError as ex:
self._error(field_name, str(ex))
return
if ip.prefixlen() == 0:
self._error(field_name, 'Zero-length prefix is not allowed')
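`_check_with_iprange` above relies on the IPy module to parse a human-readable range and to reject zero-length prefixes. The same check, extracted from Cerberus into a plain function for illustration (the sample ranges are arbitrary documentation addresses):

```
from IPy import IP

def validate_ip_range(value: str) -> str:
    """Return an error message for an invalid range, or '' when it is acceptable."""
    try:
        ip = IP(value, make_net=True)  # handles both IPv4 and IPv6 notations
    except ValueError as ex:
        return str(ex)
    if ip.prefixlen() == 0:
        return 'Zero-length prefix is not allowed'
    return ''

print(validate_ip_range('192.0.2.0/24'))   # '' -> accepted
print(validate_ip_range('2001:db8::/32'))  # '' -> accepted
print(validate_ip_range('0.0.0.0/0'))      # rejected: zero-length prefix
```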
def _normalize_coerce_markdown(self, markdown_field: str) -> str:
"""
Cache markdown as html.
:param markdown_field: name of the field containing Markdown
:return: html string
"""
my_log = log.getChild('_normalize_coerce_markdown')
mdown = self.document.get(markdown_field, '')
html = markdown.markdown(mdown)
my_log.debug('Generated html for markdown field %s in doc with id %s',
markdown_field, id(self.document))
return html
if __name__ == '__main__':
from pprint import pprint
v = ValidateCustomFields()
v.schema = {
'foo': {'type': 'string', 'check_with': 'markdown'},
'foo_html': {'type': 'string'},
'nested': {
'type': 'dict',
'schema': {
'bar': {'type': 'string', 'check_with': 'markdown'},
'bar_html': {'type': 'string'},
}
}
}
print('Valid :', v.validate({
'foo': '# Title\n\nHeyyyy',
'nested': {'bar': 'bhahaha'},
}))
print('Document:')
pprint(v.document)
print('Errors :', v.errors)

View File

@ -1,16 +1,15 @@
import datetime
import json
import logging import logging
import os
from bson import ObjectId import datetime
import os
from bson import ObjectId, tz_util
from flask import Blueprint from flask import Blueprint
from flask import abort from flask import abort
from flask import current_app from flask import current_app
from flask import request from flask import request
from pillar.api import utils from pillar.api import utils
from pillar.api.file_storage_backends import Bucket from pillar.api.utils.gcs import GoogleCloudStorageBucket
from pillar.api.utils import skip_when_testing
encoding = Blueprint('encoding', __name__) encoding = Blueprint('encoding', __name__)
log = logging.getLogger(__name__) log = logging.getLogger(__name__)
@ -33,7 +32,6 @@ def size_descriptor(width, height):
1280: '720p', 1280: '720p',
1920: '1080p', 1920: '1080p',
2048: '2k', 2048: '2k',
3840: 'UHD',
4096: '4k', 4096: '4k',
} }
@ -44,6 +42,13 @@ def size_descriptor(width, height):
return '%ip' % height return '%ip' % height
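`size_descriptor()` maps an encoded video's width to a human-readable label and falls back to '<height>p' for widths it does not know. Part of the lookup table sits outside this hunk, so the standalone copy below is a plausible reconstruction rather than the exact original:

```
def size_descriptor(width: int, height: int) -> str:
    """Map a video width to a label such as '720p' or '4k', falling back to '<height>p'."""
    widths = {
        1280: '720p',
        1920: '1080p',
        2048: '2k',
        3840: 'UHD',
        4096: '4k',
    }
    return widths.get(width, '%ip' % height)

print(size_descriptor(1920, 1080))  # '1080p'
print(size_descriptor(1440, 810))   # '810p' (fallback)
```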
@skip_when_testing
def rename_on_gcs(bucket_name, from_path, to_path):
gcs = GoogleCloudStorageBucket(str(bucket_name))
blob = gcs.bucket.blob(from_path)
gcs.bucket.rename_blob(blob, to_path)
@encoding.route('/zencoder/notifications', methods=['POST'])
def zencoder_notifications():
"""
@ -97,24 +102,25 @@ def zencoder_notifications():
file_doc['processing']['status'] = job_state file_doc['processing']['status'] = job_state
if job_state == 'failed': if job_state == 'failed':
log.warning('Zencoder job %s for file %s failed: %s', zencoder_job_id, file_id, log.warning('Zencoder job %i for file %s failed.', zencoder_job_id, file_id)
json.dumps(data, sort_keys=True, indent=4)) # Log what Zencoder told us went wrong.
for output in data['outputs']:
if not any('error' in key for key in output):
continue
log.warning('Errors for output %s:', output['url'])
for key in output:
if 'error' in key:
log.info(' %s: %s', key, output[key])
file_doc['status'] = 'failed' file_doc['status'] = 'failed'
current_app.put_internal('files', file_doc, _id=file_id) current_app.put_internal('files', file_doc, _id=file_id)
# This is 'okay' because we handled the Zencoder notification properly.
return "You failed, but that's okay.", 200 return "You failed, but that's okay.", 200
log.info('Zencoder job %s for file %s completed with status %s.', zencoder_job_id, file_id, log.info('Zencoder job %s for file %s completed with status %s.', zencoder_job_id, file_id,
job_state) job_state)
# For every variation encoded, try to update the file object # For every variation encoded, try to update the file object
storage_name, _ = os.path.splitext(file_doc['file_path']) root, _ = os.path.splitext(file_doc['file_path'])
nice_name, _ = os.path.splitext(file_doc['filename'])
bucket_class = Bucket.for_backend(file_doc['backend'])
bucket = bucket_class(str(file_doc['project']))
for output in data['outputs']: for output in data['outputs']:
video_format = output['format'] video_format = output['format']
@ -135,16 +141,16 @@ def zencoder_notifications():
# Rename the file to include the now-known size descriptor. # Rename the file to include the now-known size descriptor.
size = size_descriptor(output['width'], output['height']) size = size_descriptor(output['width'], output['height'])
new_fname = f'{storage_name}-{size}.{video_format}' new_fname = '{}-{}.{}'.format(root, size, video_format)
# Rename the file on the storage. # Rename on Google Cloud Storage
blob = bucket.blob(variation['file_path'])
try: try:
new_blob = bucket.rename_blob(blob, new_fname) rename_on_gcs(file_doc['project'],
new_blob.update_filename(f'{nice_name}-{size}.{video_format}') '_/' + variation['file_path'],
'_/' + new_fname)
except Exception: except Exception:
log.warning('Unable to rename blob %r to %r. Keeping old name.', log.warning('Unable to rename GCS blob %r to %r. Keeping old name.',
blob, new_fname, exc_info=True) variation['file_path'], new_fname, exc_info=True)
else: else:
variation['file_path'] = new_fname variation['file_path'] = new_fname
@ -161,12 +167,9 @@ def zencoder_notifications():
file_doc['status'] = 'complete' file_doc['status'] = 'complete'
# Force an update of the links on the next load of the file. # Force an update of the links on the next load of the file.
file_doc['link_expires'] = utils.utcnow() - datetime.timedelta(days=1) file_doc['link_expires'] = datetime.datetime.now(tz=tz_util.utc) - datetime.timedelta(days=1)
r, _, _, status = current_app.put_internal('files', file_doc, _id=file_id) current_app.put_internal('files', file_doc, _id=file_id)
if status != 200:
log.error('unable to save file %s after Zencoder notification: %s', file_id, r)
return json.dumps(r), 500
return '', 204 return '', 204

View File

@ -1,8 +1,5 @@
import os import os
from pillar.api.node_types.utils import markdown_fields
STORAGE_BACKENDS = ["local", "pillar", "cdnsun", "gcs", "unittest"]
URL_PREFIX = 'api' URL_PREFIX = 'api'
# Enable reads (GET), inserts (POST) and DELETE for resources/collections # Enable reads (GET), inserts (POST) and DELETE for resources/collections
@ -91,8 +88,8 @@ users_schema = {
} }
}, },
'auth': { 'auth': {
# Storage of authentication credentials (one will be able to auth with multiple providers on # Storage of authentication credentials (one will be able to auth with
# the same account) # multiple providers on the same account)
'type': 'list', 'type': 'list',
'required': True, 'required': True,
'schema': { 'schema': {
@ -100,12 +97,13 @@ users_schema = {
'schema': { 'schema': {
'provider': { 'provider': {
'type': 'string', 'type': 'string',
'allowed': ['local', 'blender-id', 'facebook', 'google'], 'allowed': ["blender-id", "local"],
}, },
'user_id': { 'user_id': {
'type': 'string' 'type': 'string'
}, },
# A token is considered a "password" in case the provider is "local". # A token is considered a "password" in case the provider is
# "local".
'token': { 'token': {
'type': 'string' 'type': 'string'
} }
@ -122,80 +120,14 @@ users_schema = {
} }
}, },
'service': { 'service': {
'type': 'dict',
'allow_unknown': True,
},
'avatar': {
'type': 'dict', 'type': 'dict',
'schema': { 'schema': {
'file': { 'badger': {
'type': 'objectid', 'type': 'list',
'data_relation': { 'schema': {'type': 'string'}
'resource': 'files', }
'field': '_id', }
}, }
},
# For only downloading when things really changed:
'last_downloaded_url': {
'type': 'string',
},
'last_modified': {
'type': 'string',
},
},
},
# Node-specific information for this user.
'nodes': {
'type': 'dict',
'schema': {
# Per watched video info about where the user left off, both in time and in percent.
'view_progress': {
'type': 'dict',
# Keyed by Node ID of the video asset. MongoDB doesn't support using
# ObjectIds as key, so we cast them to string instead.
'keysrules': {'type': 'string'},
'valuesrules': {
'type': 'dict',
'schema': {
'progress_in_sec': {'type': 'float', 'min': 0},
'progress_in_percent': {'type': 'integer', 'min': 0, 'max': 100},
# When the progress was last updated, so we can limit this history to
# the last-watched N videos if we want, or show stuff in chrono order.
'last_watched': {'type': 'datetime'},
# True means progress_in_percent = 100, for easy querying
'done': {'type': 'boolean', 'default': False},
},
},
},
},
},
'badges': {
'type': 'dict',
'schema': {
'html': {'type': 'string'}, # HTML fetched from Blender ID.
'expires': {'type': 'datetime'}, # When we should fetch it again.
},
},
# Properties defined by extensions. Extensions should use their name (see the
# PillarExtension.name property) as the key, and are free to use whatever they want as value,
# but we suggest a dict for future extendability.
# Properties can be of two types:
# - public: they will be visible to the world (for example as part of the User.find() query)
# - private: visible only to their user
'extension_props_public': {
'type': 'dict',
'required': False,
},
'extension_props_private': {
'type': 'dict',
'required': False,
},
} }
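A hypothetical user-document fragment that would satisfy the nested `nodes.view_progress` and `badges` schemas above; every ID and value here is invented for illustration. Note that the video's node ID is stored as a string key, since MongoDB cannot use ObjectIds as dictionary keys:

```
import datetime

user_fragment = {
    'nodes': {
        'view_progress': {
            # keyed by the video node's ObjectId, cast to str
            '5a4b3c2d1e0f9a8b7c6d5e4f': {
                'progress_in_sec': 123.4,
                'progress_in_percent': 37,
                'last_watched': datetime.datetime(2018, 1, 1, 12, 0,
                                                  tzinfo=datetime.timezone.utc),
                'done': False,
            },
        },
    },
    'badges': {
        'html': '<svg><!-- badge markup fetched from Blender ID --></svg>',
        'expires': datetime.datetime(2018, 1, 2, 12, 0, tzinfo=datetime.timezone.utc),
    },
}
```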
organizations_schema = { organizations_schema = {
@ -205,7 +137,19 @@ organizations_schema = {
'maxlength': 128, 'maxlength': 128,
'required': True 'required': True
}, },
**markdown_fields('description', maxlength=256), 'email': {
'type': 'string'
},
'url': {
'type': 'string',
'minlength': 1,
'maxlength': 128,
'required': True
},
'description': {
'type': 'string',
'maxlength': 256,
},
'website': { 'website': {
'type': 'string', 'type': 'string',
'maxlength': 256, 'maxlength': 256,
@ -217,15 +161,7 @@ organizations_schema = {
'picture': dict( 'picture': dict(
nullable=True, nullable=True,
**_file_embedded_schema), **_file_embedded_schema),
'admin_uid': { 'users': {
'type': 'objectid',
'data_relation': {
'resource': 'users',
'field': '_id',
},
'required': True,
},
'members': {
'type': 'list', 'type': 'list',
'default': [], 'default': [],
'schema': { 'schema': {
@ -233,52 +169,51 @@ organizations_schema = {
'data_relation': { 'data_relation': {
'resource': 'users', 'resource': 'users',
'field': '_id', 'field': '_id',
'embeddable': True
} }
} }
}, },
'unknown_members': { 'teams': {
'type': 'list', # of email addresses of yet-to-register users.
'default': [],
'schema': {
'type': 'string',
},
},
# Maximum size of the organization, i.e. len(members) + len(unknown_members) may
# not exceed this.
'seat_count': {
'type': 'integer',
'required': True,
},
# Roles that the members of this organization automatically get.
'org_roles': {
'type': 'list', 'type': 'list',
'default': [], 'default': [],
'schema': {
'type': 'string',
},
},
# Identification of the subscription that pays for this organisation
# in an external subscription/payment management system.
'payment_subscription_id': {
'type': 'string',
},
'ip_ranges': {
'type': 'list',
'schema': { 'schema': {
'type': 'dict', 'type': 'dict',
'schema': { 'schema': {
# see _validate_type_{typename} in ValidateCustomFields: # Team name
'start': {'type': 'binary', 'required': True}, 'name': {
'end': {'type': 'binary', 'required': True}, 'type': 'string',
'prefix': {'type': 'integer', 'required': True}, 'minlength': 1,
'human': {'type': 'string', 'required': True, 'check_with': 'iprange'}, 'maxlength': 128,
'required': True
},
# List of user ids for the team
'users': {
'type': 'list',
'default': [],
'schema': {
'type': 'objectid',
'data_relation': {
'resource': 'users',
'field': '_id',
}
} }
}, },
}, # List of groups assigned to the team (this will automatically
# update the groups property of each user in the team)
'groups': {
'type': 'list',
'default': [],
'schema': {
'type': 'objectid',
'data_relation': {
'resource': 'groups',
'field': '_id',
}
}
}
}
}
}
} }
permissions_embedded_schema = { permissions_embedded_schema = {
@ -338,7 +273,9 @@ nodes_schema = {
'maxlength': 128, 'maxlength': 128,
'required': True, 'required': True,
}, },
**markdown_fields('description'), 'description': {
'type': 'string',
},
'picture': _file_embedded_schema, 'picture': _file_embedded_schema,
'order': { 'order': {
'type': 'integer', 'type': 'integer',
@ -371,7 +308,7 @@ nodes_schema = {
'properties': { 'properties': {
'type': 'dict', 'type': 'dict',
'valid_properties': True, 'valid_properties': True,
'required': True 'required': True,
}, },
'permissions': { 'permissions': {
'type': 'dict', 'type': 'dict',
@ -391,10 +328,6 @@ tokens_schema = {
'type': 'string', 'type': 'string',
'required': True, 'required': True,
}, },
'token_hashed': {
'type': 'string',
'required': False,
},
'expire_time': { 'expire_time': {
'type': 'datetime', 'type': 'datetime',
'required': True, 'required': True,
@ -402,22 +335,6 @@ tokens_schema = {
'is_subclient_token': { 'is_subclient_token': {
'type': 'boolean', 'type': 'boolean',
'required': False, 'required': False,
},
# Roles this user gets while this token is valid.
'org_roles': {
'type': 'list',
'default': [],
'schema': {
'type': 'string',
},
},
# OAuth scopes granted to this token.
'oauth_scopes': {
'type': 'list',
'default': [],
'schema': {'type': 'string'},
} }
} }
@ -476,7 +393,7 @@ files_schema = {
'backend': { 'backend': {
'type': 'string', 'type': 'string',
'required': True, 'required': True,
'allowed': STORAGE_BACKENDS, 'allowed': ["attract-web", "pillar", "cdnsun", "gcs", "unittest"]
}, },
# Where the file is in the backend storage itself. In the case of GCS, # Where the file is in the backend storage itself. In the case of GCS,
@ -588,7 +505,9 @@ projects_schema = {
'maxlength': 128, 'maxlength': 128,
'required': True, 'required': True,
}, },
**markdown_fields('description'), 'description': {
'type': 'string',
},
# Short summary for the project # Short summary for the project
'summary': { 'summary': {
'type': 'string', 'type': 'string',
@ -598,8 +517,6 @@ projects_schema = {
'picture_square': _file_embedded_schema, 'picture_square': _file_embedded_schema,
# Header # Header
'picture_header': _file_embedded_schema, 'picture_header': _file_embedded_schema,
# Picture with a 16:9 aspect ratio (for Open Graph)
'picture_16_9': _file_embedded_schema,
'header_node': dict( 'header_node': dict(
nullable=True, nullable=True,
**_node_embedded_schema **_node_embedded_schema
@ -616,9 +533,8 @@ projects_schema = {
'category': { 'category': {
'type': 'string', 'type': 'string',
'allowed': [ 'allowed': [
'course', 'training',
'film', 'film',
'workshop',
'assets', 'assets',
'software', 'software',
'game', 'game',
@ -707,16 +623,7 @@ projects_schema = {
'permissions': { 'permissions': {
'type': 'dict', 'type': 'dict',
'schema': permissions_embedded_schema 'schema': permissions_embedded_schema
}, }
# Properties defined by extensions. Extensions should use their name
# (see the PillarExtension.name property) as the key, and are free to
# use whatever they want as value (but we suggest a dict for future
# extendability).
'extension_props': {
'type': 'dict',
'required': False,
},
} }
activities_subscriptions_schema = { activities_subscriptions_schema = {
@ -760,19 +667,6 @@ activities_schema = {
'type': 'objectid', 'type': 'objectid',
'required': True 'required': True
}, },
'project': {
'type': 'objectid',
'data_relation': {
'resource': 'projects',
'field': '_id',
},
'required': False,
},
# If the object type is 'node', the node type can be stored here.
'node_type': {
'type': 'string',
'required': False,
}
} }
notifications_schema = { notifications_schema = {
@ -801,9 +695,13 @@ users = {
'cache_expires': 10, 'cache_expires': 10,
'resource_methods': ['GET'], 'resource_methods': ['GET'],
'item_methods': ['GET', 'PUT'], 'item_methods': ['GET', 'PUT', 'PATCH'],
'public_item_methods': ['GET'], 'public_item_methods': ['GET'],
# By default don't include the 'auth' field. It can still be obtained
# using projections, though, so we block that in hooks.
'datasource': {'projection': {u'auth': 0}},
'schema': users_schema 'schema': users_schema
} }
@ -817,12 +715,11 @@ tokens = {
} }
files = { files = {
'schema': files_schema,
'resource_methods': ['GET', 'POST'], 'resource_methods': ['GET', 'POST'],
'item_methods': ['GET', 'PATCH'], 'item_methods': ['GET', 'PATCH'],
'public_methods': ['GET'], 'public_methods': ['GET'],
'public_item_methods': ['GET'], 'public_item_methods': ['GET'],
'soft_delete': True, 'schema': files_schema
} }
groups = { groups = {
@ -834,11 +731,8 @@ groups = {
organizations = { organizations = {
'schema': organizations_schema, 'schema': organizations_schema,
'resource_methods': ['GET', 'POST'], 'public_item_methods': ['GET'],
'item_methods': ['GET'], 'public_methods': ['GET']
'public_item_methods': [],
'public_methods': [],
'soft_delete': True,
} }
projects = { projects = {
@ -882,9 +776,4 @@ UPSET_ON_PUT = False # do not create new document on PUT of non-existent URL.
X_DOMAINS = '*' X_DOMAINS = '*'
X_ALLOW_CREDENTIALS = True X_ALLOW_CREDENTIALS = True
X_HEADERS = 'Authorization' X_HEADERS = 'Authorization'
RENDERERS = ['eve.render.JSONRenderer'] XML = False
# TODO(Sybren): this is a quick workaround to make /p/{url}/jstree work again.
# Apparently Eve is now stricter in checking against MONGO_QUERY_BLACKLIST, and
# blocks our use of $regex.
MONGO_QUERY_BLACKLIST = ['$where']

View File

@ -1,38 +1,32 @@
import datetime
import io import io
import logging import logging
import mimetypes import mimetypes
import os
import pathlib
import tempfile import tempfile
import time
import typing
import uuid import uuid
from hashlib import md5 from hashlib import md5
import bson.tz_util
import datetime
import eve.utils import eve.utils
import os
import pymongo import pymongo
import werkzeug.exceptions as wz_exceptions import werkzeug.exceptions as wz_exceptions
import werkzeug.datastructures
from bson import ObjectId from bson import ObjectId
from flask import Blueprint from flask import Blueprint
from flask import current_app from flask import current_app
from flask import g
from flask import jsonify from flask import jsonify
from flask import request from flask import request
from flask import send_from_directory from flask import send_from_directory
from flask import url_for, helpers from flask import url_for, helpers
from pillar.api import utils from pillar.api import utils
from pillar.api.file_storage_backends.gcs import GoogleCloudStorageBucket, \ from pillar.api.utils.imaging import generate_local_thumbnails
GoogleCloudStorageBlob from pillar.api.utils import remove_private_keys, authentication
from pillar.api.utils import remove_private_keys, imaging from pillar.api.utils.authorization import require_login, user_has_role, \
from pillar.api.utils.authorization import require_login, \
user_matches_roles user_matches_roles
from pillar.api.utils.cdn import hash_file_path from pillar.api.utils.cdn import hash_file_path
from pillar.api.utils.encoding import Encoder from pillar.api.utils.encoding import Encoder
from pillar.api.file_storage_backends import default_storage_backend, Bucket from pillar.api.utils.gcs import GoogleCloudStorageBucket
from pillar.auth import current_user
log = logging.getLogger(__name__) log = logging.getLogger(__name__)
@ -51,6 +45,31 @@ mimetypes.add_type('application/x-radiance-hdr', '.hdr')
mimetypes.add_type('application/x-exr', '.exr') mimetypes.add_type('application/x-exr', '.exr')
@file_storage.route('/gcs/<bucket_name>/<subdir>/')
@file_storage.route('/gcs/<bucket_name>/<subdir>/<path:file_path>')
def browse_gcs(bucket_name, subdir, file_path=None):
"""Browse the content of a Google Cloud Storage bucket"""
# Initialize storage client
storage = GoogleCloudStorageBucket(bucket_name, subdir=subdir)
if file_path:
# If we provided a file_path, we try to fetch it
file_object = storage.Get(file_path)
if file_object:
# If it exists, return file properties in a dictionary
return jsonify(file_object)
else:
listing = storage.List(file_path)
return jsonify(listing)
# We always return an empty listing even if the directory does not
# exist. This can be changed later.
# return abort(404)
else:
listing = storage.List('')
return jsonify(listing)
@file_storage.route('/file', methods=['POST']) @file_storage.route('/file', methods=['POST'])
@file_storage.route('/file/<path:file_name>', methods=['GET', 'POST']) @file_storage.route('/file/<path:file_name>', methods=['GET', 'POST'])
def index(file_name=None): def index(file_name=None):
@ -84,10 +103,7 @@ def index(file_name=None):
return jsonify({'url': url_for('file_storage.index', file_name=file_name)}) return jsonify({'url': url_for('file_storage.index', file_name=file_name)})
def _process_image(bucket: Bucket, def _process_image(gcs, file_id, local_file, src_file):
file_id: ObjectId,
local_file: tempfile._TemporaryFileWrapper,
src_file: dict):
from PIL import Image from PIL import Image
im = Image.open(local_file) im = Image.open(local_file)
@ -97,9 +113,8 @@ def _process_image(bucket: Bucket,
# Generate previews # Generate previews
log.info('Generating thumbnails for file %s', file_id) log.info('Generating thumbnails for file %s', file_id)
local_path = pathlib.Path(local_file.name) src_file['variations'] = generate_local_thumbnails(src_file['name'],
name_base = pathlib.Path(src_file['name']).stem local_file.name)
src_file['variations'] = imaging.generate_local_thumbnails(name_base, local_path)
# Send those previews to Google Cloud Storage. # Send those previews to Google Cloud Storage.
log.info('Uploading %i thumbnails for file %s to Google Cloud Storage ' log.info('Uploading %i thumbnails for file %s to Google Cloud Storage '
@ -109,11 +124,11 @@ def _process_image(bucket: Bucket,
for variation in src_file['variations']: for variation in src_file['variations']:
fname = variation['file_path'] fname = variation['file_path']
if current_app.config['TESTING']: if current_app.config['TESTING']:
log.warning(' - NOT sending thumbnail %s to %s', fname, bucket) log.warning(' - NOT sending thumbnail %s to GCS', fname)
else: else:
blob = bucket.blob(fname) log.debug(' - Sending thumbnail %s to GCS', fname)
log.debug(' - Sending thumbnail %s to %s', fname, blob) blob = gcs.bucket.blob('_/' + fname, chunk_size=256 * 1024 * 2)
blob.upload_from_path(pathlib.Path(variation['local_path']), blob.upload_from_filename(variation['local_path'],
content_type=variation['content_type']) content_type=variation['content_type'])
if variation.get('size') == 't': if variation.get('size') == 't':
@ -131,162 +146,11 @@ def _process_image(bucket: Bucket,
src_file['status'] = 'complete' src_file['status'] = 'complete'
def _video_duration_seconds(filename: pathlib.Path) -> typing.Optional[int]: def _process_video(gcs, file_id, local_file, src_file):
"""Get the duration of a video file using ffprobe """Video is processed by Zencoder; the file isn't even stored locally."""
https://superuser.com/questions/650291/how-to-get-video-duration-in-seconds
:param filename: file path to video
:return: video duration in seconds
"""
import subprocess
def run(cli_args):
if log.isEnabledFor(logging.INFO):
import shlex
cmd = ' '.join(shlex.quote(s) for s in cli_args)
log.info('Calling %s', cmd)
ffprobe = subprocess.run(
cli_args,
stdin=subprocess.DEVNULL,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
timeout=10, # seconds
)
if ffprobe.returncode:
import shlex
cmd = ' '.join(shlex.quote(s) for s in cli_args)
log.error('Error running %s: stopped with return code %i',
cmd, ffprobe.returncode)
log.error('Output was: %s', ffprobe.stdout)
return None
try:
return int(float(ffprobe.stdout))
except ValueError as e:
log.exception('ffprobe produced invalid number: %s', ffprobe.stdout)
return None
ffprobe_from_container_args = [
current_app.config['BIN_FFPROBE'],
'-v', 'error',
'-show_entries', 'format=duration',
'-of', 'default=noprint_wrappers=1:nokey=1',
str(filename),
]
ffprobe_from_stream_args = [
current_app.config['BIN_FFPROBE'],
'-v', 'error',
'-hide_banner',
'-select_streams', 'v:0', # we only care about the first video stream
'-show_entries', 'stream=duration',
'-of', 'default=noprint_wrappers=1:nokey=1',
str(filename),
]
duration = run(ffprobe_from_stream_args) or \
run(ffprobe_from_container_args) or \
None
return duration
def _video_size_pixels(filename: pathlib.Path) -> typing.Tuple[int, int]:
"""Figures out the size (in pixels) of the video file.
Returns (0, 0) if there was any error detecting the size.
"""
import json
import subprocess
cli_args = [
current_app.config['BIN_FFPROBE'],
'-loglevel', 'error',
'-hide_banner',
'-print_format', 'json',
'-select_streams', 'v:0', # we only care about the first video stream
'-show_streams',
str(filename),
]
if log.isEnabledFor(logging.INFO):
import shlex
cmd = ' '.join(shlex.quote(s) for s in cli_args)
log.info('Calling %s', cmd)
ffprobe = subprocess.run(
cli_args,
stdin=subprocess.DEVNULL,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
timeout=10, # seconds
)
if ffprobe.returncode:
import shlex
cmd = ' '.join(shlex.quote(s) for s in cli_args)
log.error('Error running %s: stopped with return code %i',
cmd, ffprobe.returncode)
log.error('Output was: %s', ffprobe.stdout)
return 0, 0
try:
ffprobe_info = json.loads(ffprobe.stdout)
except json.JSONDecodeError:
log.exception('ffprobe produced invalid JSON: %s', ffprobe.stdout)
return 0, 0
try:
stream_info = ffprobe_info['streams'][0]
return stream_info['width'], stream_info['height']
except (KeyError, IndexError):
log.exception('ffprobe produced unexpected JSON: %s', ffprobe.stdout)
return 0, 0
def _video_cap_at_1080(width: int, height: int) -> typing.Tuple[int, int]:
"""Returns an appropriate width/height for a video capped at 1920x1080.
Takes into account that h264 has limitations:
- the width must be a multiple of 16
- the height must be a multiple of 8
"""
if width > 1920:
# The height must be a multiple of 8
new_height = height / width * 1920
height = new_height - (new_height % 8)
width = 1920
if height > 1080:
# The width must be a multiple of 16
new_width = width / height * 1080
width = new_width - (new_width % 16)
height = 1080
return int(width), int(height)
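The h264 constraints spelled out above (width a multiple of 16, height a multiple of 8 when scaling down to fit 1920x1080) are easiest to verify with a couple of worked inputs. This standalone copy of the capping rule only reproduces the arithmetic for illustration:

```
def cap_at_1080(width: int, height: int) -> tuple:
    """Scale down to fit 1920x1080 while keeping h264-friendly dimensions."""
    if width > 1920:
        new_height = height / width * 1920
        height = new_height - (new_height % 8)   # height must be a multiple of 8
        width = 1920
    if height > 1080:
        new_width = width / height * 1080
        width = new_width - (new_width % 16)     # width must be a multiple of 16
        height = 1080
    return int(width), int(height)

print(cap_at_1080(3840, 2160))  # UHD          -> (1920, 1080)
print(cap_at_1080(4096, 1716))  # wide source  -> (1920, 800)
print(cap_at_1080(1280, 720))   # small enough -> (1280, 720)
```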
def _process_video(gcs,
file_id: ObjectId,
local_file: tempfile._TemporaryFileWrapper,
src_file: dict):
"""Video is processed by Zencoder."""
log.info('Processing video for file %s', file_id) log.info('Processing video for file %s', file_id)
# Use ffprobe to find the size (in pixels) of the video.
# Even though Zencoder can do resizing to a maximum resolution without upscaling,
# by determining the video size here we already have this information in the file
# document before Zencoder calls our notification URL. It also opens up possibilities
# for other encoding backends that don't support this functionality.
video_path = pathlib.Path(local_file.name)
video_width, video_height = _video_size_pixels(video_path)
capped_video_width, capped_video_height = _video_cap_at_1080(video_width, video_height)
video_duration = _video_duration_seconds(video_path)
# Create variations # Create variations
root, _ = os.path.splitext(src_file['file_path']) root, _ = os.path.splitext(src_file['file_path'])
src_file['variations'] = [] src_file['variations'] = []
@ -298,13 +162,12 @@ def _process_video(gcs,
content_type='video/{}'.format(v), content_type='video/{}'.format(v),
file_path='{}-{}.{}'.format(root, v, v), file_path='{}-{}.{}'.format(root, v, v),
size='', size='',
width=capped_video_width, duration=0,
height=capped_video_height, width=0,
height=0,
length=0, length=0,
md5='', md5='',
) )
if video_duration:
file_variation['duration'] = video_duration
# Append file variation. Originally mp4 and webm were the available options, # Append file variation. Originally mp4 and webm were the available options,
# that's why we build a list. # that's why we build a list.
src_file['variations'].append(file_variation) src_file['variations'].append(file_variation)
@ -312,8 +175,8 @@ def _process_video(gcs,
if current_app.config['TESTING']: if current_app.config['TESTING']:
log.warning('_process_video: NOT sending out encoding job due to ' log.warning('_process_video: NOT sending out encoding job due to '
'TESTING=%r', current_app.config['TESTING']) 'TESTING=%r', current_app.config['TESTING'])
j = {'process_id': 'fake-process-id', j = type('EncoderJob', (), {'process_id': 'fake-process-id',
'backend': 'fake'} 'backend': 'fake'})
else: else:
j = Encoder.job_create(src_file) j = Encoder.job_create(src_file)
if j is None: if j is None:
@ -331,14 +194,14 @@ def _process_video(gcs,
'backend': j['backend']} 'backend': j['backend']}
def process_file(bucket: Bucket, def process_file(gcs, file_id, local_file):
file_id: typing.Union[str, ObjectId],
local_file: tempfile._TemporaryFileWrapper):
"""Process the file by creating thumbnails, sending to Zencoder, etc. """Process the file by creating thumbnails, sending to Zencoder, etc.
:param file_id: '_id' key of the file :param file_id: '_id' key of the file
:type file_id: ObjectId or str
:param local_file: locally stored file, or None if no local processing is :param local_file: locally stored file, or None if no local processing is
needed. needed.
:type local_file: file
""" """
file_id = ObjectId(file_id) file_id = ObjectId(file_id)
@ -355,8 +218,8 @@ def process_file(bucket: Bucket,
# TODO: overrule the content type based on file extension & magic numbers.
mime_category, src_file['format'] = src_file['content_type'].split('/', 1) mime_category, src_file['format'] = src_file['content_type'].split('/', 1)
# Only allow video encoding when the user has the correct capability. # Prevent video handling for non-admins.
if not current_user.has_cap('encode-video') and mime_category == 'video': if not user_has_role(u'admin') and mime_category == 'video':
if src_file['format'].startswith('x-'): if src_file['format'].startswith('x-'):
xified = src_file['format'] xified = src_file['format']
else: else:
@ -364,10 +227,10 @@ def process_file(bucket: Bucket,
src_file['content_type'] = 'application/%s' % xified src_file['content_type'] = 'application/%s' % xified
mime_category = 'application' mime_category = 'application'
log.info('Not processing video file %s for non-video-encoding user', file_id) log.info('Not processing video file %s for non-admin user', file_id)
# Run the required processor, based on the MIME category. # Run the required processor, based on the MIME category.
processors: typing.Mapping[str, typing.Callable] = { processors = {
'image': _process_image, 'image': _process_image,
'video': _process_video, 'video': _process_video,
} }
@ -386,7 +249,7 @@ def process_file(bucket: Bucket,
update_file_doc(file_id, status='processing') update_file_doc(file_id, status='processing')
try: try:
processor(bucket, file_id, local_file, src_file) processor(gcs, file_id, local_file, src_file)
except Exception: except Exception:
log.warning('process_file(%s): error when processing file, ' log.warning('process_file(%s): error when processing file, '
'resetting status to ' 'resetting status to '
@ -402,42 +265,62 @@ def process_file(bucket: Bucket,
file_id, status, r) file_id, status, r)
def generate_link(backend, file_path: str, project_id: str=None, is_public=False) -> str: def delete_file(file_item):
def process_file_delete(file_item):
"""Given a file item, delete the actual file from the storage backend.
This function can be probably made self-calling."""
if file_item['backend'] == 'gcs':
storage = GoogleCloudStorageBucket(str(file_item['project']))
storage.Delete(file_item['file_path'])
# Delete any file variation found in the file_item document
if 'variations' in file_item:
for v in file_item['variations']:
storage.Delete(v['file_path'])
return True
elif file_item['backend'] == 'pillar':
pass
elif file_item['backend'] == 'cdnsun':
pass
else:
pass
files_collection = current_app.data.driver.db['files']
# Collect children (variations) of the original file
children = files_collection.find({'parent': file_item['_id']})
for child in children:
process_file_delete(child)
# Finally remove the original file
process_file_delete(file_item)
def generate_link(backend, file_path, project_id=None, is_public=False):
"""Hook to check the backend of a file resource, to build an appropriate link """Hook to check the backend of a file resource, to build an appropriate link
that can be used by the client to retrieve the actual file. that can be used by the client to retrieve the actual file.
""" """
# TODO: replace config['TESTING'] with mocking GCS. if backend == 'gcs':
if backend == 'gcs' and current_app.config['TESTING']: if current_app.config['TESTING']:
log.info('Skipping GCS link generation, and returning a fake link ' log.info('Skipping GCS link generation, and returning a fake link '
'instead.') 'instead.')
return '/path/to/testing/gcs/%s' % file_path return '/path/to/testing/gcs/%s' % file_path
if backend in {'gcs', 'local'}: storage = GoogleCloudStorageBucket(project_id)
from ..file_storage_backends import Bucket blob = storage.Get(file_path)
bucket_cls = Bucket.for_backend(backend)
storage = bucket_cls(project_id)
blob = storage.get_blob(file_path)
if blob is None: if blob is None:
log.warning('generate_link(%r, %r): unable to find blob for file'
' path, returning empty link.', backend, file_path)
return '' return ''
return blob.get_url(is_public=is_public) if is_public:
return blob['public_url']
return blob['signed_url']
if backend == 'pillar': # obsolete, replace with local. if backend == 'pillar':
return url_for('file_storage.index', file_name=file_path, return url_for('file_storage.index', file_name=file_path,
_external=True, _scheme=current_app.config['SCHEME']) _external=True, _scheme=current_app.config['SCHEME'])
if backend == 'cdnsun': if backend == 'cdnsun':
return hash_file_path(file_path, None) return hash_file_path(file_path, None)
if backend == 'unittest': if backend == 'unittest':
return 'https://unit.test/%s' % md5(file_path.encode()).hexdigest() return md5(file_path).hexdigest()
log.warning('generate_link(): Unknown backend %r, returning empty string '
'as new link.',
backend)
return '' return ''
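One design point worth noting in `generate_link()`: the 'unittest' backend returns a deterministic fake URL derived from the file path's MD5, so tests get stable links without touching real storage. That fallback on its own:

```
from hashlib import md5

def unittest_link(file_path: str) -> str:
    """Deterministic stand-in link for the 'unittest' storage backend."""
    return 'https://unit.test/%s' % md5(file_path.encode()).hexdigest()

print(unittest_link('pictures/cube.png'))
```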
@ -451,8 +334,12 @@ def before_returning_file(response):
def strip_link_and_variations(response): def strip_link_and_variations(response):
# Check the access level of the user. # Check the access level of the user.
capability = current_app.config['FULL_FILE_ACCESS_CAP'] if g.current_user is None:
has_full_access = current_user.has_cap(capability) has_full_access = False
else:
user_roles = g.current_user['roles']
access_roles = current_app.config['FULL_FILE_ACCESS_ROLES']
has_full_access = bool(user_roles.intersection(access_roles))
# Strip all file variations (unless image) and link to the actual file. # Strip all file variations (unless image) and link to the actual file.
if not has_full_access: if not has_full_access:
@ -470,7 +357,7 @@ def before_returning_files(response):
ensure_valid_link(item) ensure_valid_link(item)
def ensure_valid_link(response: dict) -> None: def ensure_valid_link(response):
"""Ensures the file item has valid file links using generate_link(...).""" """Ensures the file item has valid file links using generate_link(...)."""
# Log to function-specific logger, so we can easily turn it off. # Log to function-specific logger, so we can easily turn it off.
@ -478,7 +365,7 @@ def ensure_valid_link(response: dict) -> None:
# log.debug('Inspecting link for file %s', response['_id']) # log.debug('Inspecting link for file %s', response['_id'])
# Check link expiry. # Check link expiry.
now = utils.utcnow() now = datetime.datetime.now(tz=bson.tz_util.utc)
if 'link_expires' in response: if 'link_expires' in response:
link_expires = response['link_expires'] link_expires = response['link_expires']
if now < link_expires: if now < link_expires:
@ -492,29 +379,21 @@ def ensure_valid_link(response: dict) -> None:
else: else:
log_link.debug('No expiry date for link; generating new link') log_link.debug('No expiry date for link; generating new link')
generate_all_links(response, now) _generate_all_links(response, now)
def generate_all_links(response: dict, now: datetime.datetime) -> None: def _generate_all_links(response, now):
"""Generate a new link for the file and all its variations. """Generate a new link for the file and all its variations.
:param response: the file document that should be updated. :param response: the file document that should be updated.
:param now: datetime that reflects 'now', for consistent expiry generation. :param now: datetime that reflects 'now', for consistent expiry generation.
""" """
assert isinstance(response, dict), f'response must be dict, is {response!r}'
project_id = str( project_id = str(
response['project']) if 'project' in response else None response['project']) if 'project' in response else None
# TODO: add project id to all files # TODO: add project id to all files
backend = response['backend'] backend = response['backend']
if 'file_path' in response:
response['link'] = generate_link(backend, response['file_path'], project_id) response['link'] = generate_link(backend, response['file_path'], project_id)
else:
import pprint
log.error('File without file_path property, unable to generate links: %s',
pprint.pformat(response))
return
variations = response.get('variations') variations = response.get('variations')
if variations: if variations:
@ -527,12 +406,6 @@ def generate_all_links(response: dict, now: datetime.datetime) -> None:
response['link_expires'] = now + datetime.timedelta(seconds=validity_secs) response['link_expires'] = now + datetime.timedelta(seconds=validity_secs)
patch_info = remove_private_keys(response) patch_info = remove_private_keys(response)
# The project could have been soft-deleted, in which case it's fine to
# update the links to the file. However, Eve/Cerberus doesn't allow this;
# removing the 'project' key from the PATCH works around this.
patch_info.pop('project', None)
file_id = ObjectId(response['_id']) file_id = ObjectId(response['_id'])
(patch_resp, _, _, _) = current_app.patch_internal('files', patch_info, (patch_resp, _, _, _) = current_app.patch_internal('files', patch_info,
_id=file_id) _id=file_id)
@ -550,28 +423,25 @@ def generate_all_links(response: dict, now: datetime.datetime) -> None:
response['_etag'] = etag_doc['_etag'] response['_etag'] = etag_doc['_etag']
def before_deleting_file(item):
delete_file(item)
def on_pre_get_files(_, lookup): def on_pre_get_files(_, lookup):
# Override the HTTP header, we always want to fetch the document from # Override the HTTP header, we always want to fetch the document from
# MongoDB. # MongoDB.
parsed_req = eve.utils.parse_request('files') parsed_req = eve.utils.parse_request('files')
parsed_req.if_modified_since = None parsed_req.if_modified_since = None
# If there is no lookup, we would refresh *all* file documents,
# which is far too heavy to do in one client HTTP request.
if not lookup:
return
# Only fetch it if the date got expired. # Only fetch it if the date got expired.
now = utils.utcnow() now = datetime.datetime.now(tz=bson.tz_util.utc)
lookup_expired = lookup.copy() lookup_expired = lookup.copy()
lookup_expired['link_expires'] = {'$lte': now} lookup_expired['link_expires'] = {'$lte': now}
cursor, _ = current_app.data.find('files', parsed_req, lookup_expired, perform_count=False) cursor = current_app.data.find('files', parsed_req, lookup_expired)
for idx, file_doc in enumerate(cursor): for file_doc in cursor:
if idx == 0:
log.debug('Updating expired links for files that matched lookup %s', lookup_expired)
# log.debug('Updating expired links for file %r.', file_doc['_id']) # log.debug('Updating expired links for file %r.', file_doc['_id'])
generate_all_links(file_doc, now) _generate_all_links(file_doc, now)
def refresh_links_for_project(project_uuid, chunk_size, expiry_seconds): def refresh_links_for_project(project_uuid, chunk_size, expiry_seconds):
@ -584,7 +454,7 @@ def refresh_links_for_project(project_uuid, chunk_size, expiry_seconds):
# Retrieve expired links. # Retrieve expired links.
files_collection = current_app.data.driver.db['files'] files_collection = current_app.data.driver.db['files']
now = utils.utcnow() now = datetime.datetime.now(tz=bson.tz_util.utc)
expire_before = now + datetime.timedelta(seconds=expiry_seconds) expire_before = now + datetime.timedelta(seconds=expiry_seconds)
log.info('Limiting to links that expire before %s', expire_before) log.info('Limiting to links that expire before %s', expire_before)
@ -593,115 +463,95 @@ def refresh_links_for_project(project_uuid, chunk_size, expiry_seconds):
'link_expires': {'$lt': expire_before}, 'link_expires': {'$lt': expire_before},
}).sort([('link_expires', pymongo.ASCENDING)]).limit(chunk_size) }).sort([('link_expires', pymongo.ASCENDING)]).limit(chunk_size)
refresh_count = 0 if to_refresh.count() == 0:
log.info('No links to refresh.')
return
for file_doc in to_refresh: for file_doc in to_refresh:
log.debug('Refreshing links for file %s', file_doc['_id']) log.debug('Refreshing links for file %s', file_doc['_id'])
generate_all_links(file_doc, now) _generate_all_links(file_doc, now)
refresh_count += 1
if refresh_count: log.info('Refreshed %i links', min(chunk_size, to_refresh.count()))
log.info('Refreshed %i links', refresh_count)
def refresh_links_for_backend(backend_name, chunk_size, expiry_seconds): def refresh_links_for_backend(backend_name, chunk_size, expiry_seconds):
import gcloud.exceptions import gcloud.exceptions
my_log = log.getChild(f'refresh_links_for_backend.{backend_name}')
start_time = time.time()
# Retrieve expired links. # Retrieve expired links.
files_collection = current_app.data.driver.db['files'] files_collection = current_app.data.driver.db['files']
proj_coll = current_app.data.driver.db['projects'] proj_coll = current_app.data.driver.db['projects']
now = utils.utcnow() now = datetime.datetime.now(tz=bson.tz_util.utc)
expire_before = now + datetime.timedelta(seconds=expiry_seconds) expire_before = now + datetime.timedelta(seconds=expiry_seconds)
my_log.info('Limiting to links that expire before %s', expire_before) log.info('Limiting to links that expire before %s', expire_before)
base_query = {'backend': backend_name, '_deleted': {'$ne': True}} to_refresh = files_collection.find(
to_refresh_query = { {'$or': [{'backend': backend_name, 'link_expires': None},
'$or': [{'link_expires': None, **base_query}, {'backend': backend_name, 'link_expires': {
{'link_expires': {'$lt': expire_before}, **base_query}, '$lt': expire_before}},
{'link': None, **base_query}] {'backend': backend_name, 'link': None}]
} }).sort([('link_expires', pymongo.ASCENDING)]).limit(
chunk_size).batch_size(5)
document_count = files_collection.count_documents(to_refresh_query) if to_refresh.count() == 0:
if document_count == 0: log.info('No links to refresh.')
my_log.info('No links to refresh.')
return return
if 0 < chunk_size == document_count:
my_log.info('Found %d documents to refresh, probably limited by the chunk size %d',
document_count, chunk_size)
else:
my_log.info('Found %d documents to refresh, chunk size=%d', document_count, chunk_size)
to_refresh = files_collection.find(to_refresh_query)\
.sort([('link_expires', pymongo.ASCENDING)])\
.limit(chunk_size)\
.batch_size(5)
refreshed = 0 refreshed = 0
report_chunks = min(max(5, document_count // 25), 100)
for file_doc in to_refresh: for file_doc in to_refresh:
try: try:
file_id = file_doc['_id'] file_id = file_doc['_id']
project_id = file_doc.get('project') project_id = file_doc.get('project')
if project_id is None: if project_id is None:
my_log.debug('Skipping file %s, it has no project.', file_id) log.debug('Skipping file %s, it has no project.', file_id)
continue continue
count = proj_coll.count_documents({'_id': project_id, '$or': [ count = proj_coll.count({'_id': project_id, '$or': [
{'_deleted': {'$exists': False}}, {'_deleted': {'$exists': False}},
{'_deleted': False}, {'_deleted': False},
]}) ]})
if count == 0: if count == 0:
my_log.debug('Skipping file %s, project %s does not exist.', log.debug('Skipping file %s, project %s does not exist.',
file_id, project_id) file_id, project_id)
continue continue
if 'file_path' not in file_doc: if 'file_path' not in file_doc:
my_log.warning("Skipping file %s, missing 'file_path' property.", log.warning("Skipping file %s, missing 'file_path' property.",
file_id) file_id)
continue continue
my_log.debug('Refreshing links for file %s', file_id) log.debug('Refreshing links for file %s', file_id)
try: try:
generate_all_links(file_doc, now) _generate_all_links(file_doc, now)
except gcloud.exceptions.Forbidden: except gcloud.exceptions.Forbidden:
my_log.warning('Skipping file %s, GCS forbids us access to ' log.warning('Skipping file %s, GCS forbids us access to '
'project %s bucket.', file_id, project_id) 'project %s bucket.', file_id, project_id)
continue continue
refreshed += 1 refreshed += 1
if refreshed % report_chunks == 0:
my_log.info('Refreshed %i links', refreshed)
except KeyboardInterrupt: except KeyboardInterrupt:
my_log.warning('Aborting due to KeyboardInterrupt after refreshing %i ' log.warning('Aborting due to KeyboardInterrupt after refreshing %i '
'links', refreshed) 'links', refreshed)
return return
if refreshed % report_chunks != 0: log.info('Refreshed %i links', refreshed)
my_log.info('Refreshed %i links', refreshed)
my_log.info('Refresh took %s', datetime.timedelta(seconds=time.time() - start_time))
@require_login() @require_login()
def create_file_doc(name, filename, content_type, length, project, def create_file_doc(name, filename, content_type, length, project,
backend=None, **extra_fields): backend='gcs', **extra_fields):
"""Creates a minimal File document for storage in MongoDB. """Creates a minimal File document for storage in MongoDB.
Doesn't save it to MongoDB yet. Doesn't save it to MongoDB yet.
""" """
if backend is None: current_user = g.get('current_user')
backend = current_app.config['STORAGE_BACKEND']
file_doc = {'name': name, file_doc = {'name': name,
'filename': filename, 'filename': filename,
'file_path': '', 'file_path': '',
'user': current_user.user_id, 'user': current_user['user_id'],
'backend': backend, 'backend': backend,
'md5': '', 'md5': '',
'content_type': content_type, 'content_type': content_type,
@ -747,10 +597,10 @@ def override_content_type(uploaded_file):
del uploaded_file._parsed_content_type del uploaded_file._parsed_content_type
def assert_file_size_allowed(file_size: int): def assert_file_size_allowed(file_size):
"""Asserts that the current user is allowed to upload a file of the given size. """Asserts that the current user is allowed to upload a file of the given size.
:raises wz_exceptions.RequestEntityTooLarge: :raises
""" """
roles = current_app.config['ROLES_FOR_UNLIMITED_UPLOADS'] roles = current_app.config['ROLES_FOR_UNLIMITED_UPLOADS']
@ -764,7 +614,7 @@ def assert_file_size_allowed(file_size: int):
filesize_limit_mb = filesize_limit / 2.0 ** 20 filesize_limit_mb = filesize_limit / 2.0 ** 20
log.info('User %s tried to upload a %.3f MiB file, but is only allowed ' log.info('User %s tried to upload a %.3f MiB file, but is only allowed '
'%.3f MiB.', '%.3f MiB.',
current_user.user_id, file_size / 2.0 ** 20, authentication.current_user_id(), file_size / 2.0 ** 20,
filesize_limit_mb) filesize_limit_mb)
raise wz_exceptions.RequestEntityTooLarge( raise wz_exceptions.RequestEntityTooLarge(
'To upload files larger than %i MiB, subscribe to Blender Cloud' % 'To upload files larger than %i MiB, subscribe to Blender Cloud' %
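`assert_file_size_allowed()` above lets users with certain roles upload without limit and rejects everyone else past a configured size with HTTP 413. A simplified, self-contained sketch of that policy; the role names and the 32 MiB limit are invented example values, not Pillar's configuration:

```
import werkzeug.exceptions as wz_exceptions

ROLES_FOR_UNLIMITED_UPLOADS = {'admin', 'subscriber'}  # assumed example roles
FILESIZE_LIMIT_BYTES = 32 * 2 ** 20                    # assumed example limit: 32 MiB

def assert_file_size_allowed(user_roles: set, file_size: int) -> None:
    """Raise RequestEntityTooLarge when a non-privileged user uploads too much."""
    if user_roles & ROLES_FOR_UNLIMITED_UPLOADS:
        return
    if file_size <= FILESIZE_LIMIT_BYTES:
        return
    limit_mb = FILESIZE_LIMIT_BYTES / 2 ** 20
    raise wz_exceptions.RequestEntityTooLarge(
        'To upload files larger than %i MiB, subscribe to Blender Cloud' % limit_mb)

assert_file_size_allowed({'subscriber'}, 500 * 2 ** 20)  # allowed: unlimited role
# assert_file_size_allowed(set(), 64 * 2 ** 20)          # would raise a 413 error
```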
@ -773,7 +623,7 @@ def assert_file_size_allowed(file_size: int):
@file_storage.route('/stream/<string:project_id>', methods=['POST', 'OPTIONS']) @file_storage.route('/stream/<string:project_id>', methods=['POST', 'OPTIONS'])
@require_login() @require_login()
def stream_to_storage(project_id: str): def stream_to_gcs(project_id):
project_oid = utils.str2id(project_id) project_oid = utils.str2id(project_id)
projects = current_app.data.driver.db['projects'] projects = current_app.data.driver.db['projects']
@ -783,16 +633,8 @@ def stream_to_storage(project_id: str):
raise wz_exceptions.NotFound('Project %s does not exist' % project_id) raise wz_exceptions.NotFound('Project %s does not exist' % project_id)
log.info('Streaming file to bucket for project=%s user_id=%s', project_id, log.info('Streaming file to bucket for project=%s user_id=%s', project_id,
current_user.user_id) authentication.current_user_id())
log.info('request.headers[Origin] = %r', request.headers.get('Origin')) log.info('request.headers[Origin] = %r', request.headers.get('Origin'))
log.info('request.content_length = %r', request.content_length)
# Try a check for the content length before we access request.files[].
# This allows us to abort the upload early. The entire body content length
# is always a bit larger than the actual file size, so if we accept here,
# we're sure it'll be accepted in subsequent checks as well.
if request.content_length:
assert_file_size_allowed(request.content_length)
uploaded_file = request.files['file'] uploaded_file = request.files['file']
@ -805,101 +647,90 @@ def stream_to_storage(project_id: str):
override_content_type(uploaded_file) override_content_type(uploaded_file)
if not uploaded_file.content_type: if not uploaded_file.content_type:
log.warning('File uploaded to project %s without content type.', log.warning('File uploaded to project %s without content type.', project_oid)
project_oid)
raise wz_exceptions.BadRequest('Missing content type.') raise wz_exceptions.BadRequest('Missing content type.')
if uploaded_file.content_type.startswith('image/') or uploaded_file.content_type.startswith( if uploaded_file.content_type.startswith('image/'):
'video/'): # We need to do local thumbnailing, so we have to write the stream
# We need to do local thumbnailing and ffprobe, so we have to write the stream
# both to Google Cloud Storage and to local storage. # both to Google Cloud Storage and to local storage.
local_file = tempfile.NamedTemporaryFile( local_file = tempfile.NamedTemporaryFile(dir=current_app.config['STORAGE_DIR'])
dir=current_app.config['STORAGE_DIR'])
uploaded_file.save(local_file) uploaded_file.save(local_file)
local_file.seek(0) # Make sure that re-read starts from the beginning. local_file.seek(0) # Make sure that a re-read starts from the beginning.
stream_for_gcs = local_file
else: else:
local_file = uploaded_file.stream local_file = None
stream_for_gcs = uploaded_file.stream
result = upload_and_process(local_file, uploaded_file, project_id)
# Local processing is done, we can close the local file so it is removed.
local_file.close()
resp = jsonify(result)
resp.status_code = result['status_code']
add_access_control_headers(resp)
return resp
def upload_and_process(local_file: typing.Union[io.BytesIO, typing.BinaryIO],
uploaded_file: werkzeug.datastructures.FileStorage,
project_id: str,
*,
may_process_file=True) -> dict:
# Figure out the file size, as we need to pass this in explicitly to GCloud. # Figure out the file size, as we need to pass this in explicitly to GCloud.
# Otherwise it always uses os.fstat(file_obj.fileno()).st_size, which isn't # Otherwise it always uses os.fstat(file_obj.fileno()).st_size, which isn't
# supported by a BytesIO object (even though it does have a fileno # supported by a BytesIO object (even though it does have a fileno attribute).
# attribute). if isinstance(stream_for_gcs, io.BytesIO):
if isinstance(local_file, io.BytesIO): file_size = len(stream_for_gcs.getvalue())
file_size = len(local_file.getvalue())
else: else:
file_size = os.fstat(local_file.fileno()).st_size file_size = os.fstat(stream_for_gcs.fileno()).st_size
# Check the file size again, now that we know its size for sure. # Check the file size again, now that we know its size for sure.
assert_file_size_allowed(file_size) assert_file_size_allowed(file_size)
# Create file document in MongoDB. # Create file document in MongoDB.
file_id, internal_fname, status = create_file_doc_for_upload(project_id, uploaded_file) file_id, internal_fname, status = create_file_doc_for_upload(project_oid, uploaded_file)
# Copy the file into storage. if current_app.config['TESTING']:
bucket = default_storage_backend(project_id) log.warning('NOT streaming to GCS because TESTING=%r', current_app.config['TESTING'])
blob = bucket.blob(internal_fname) # Fake a Blob object.
blob.create_from_file(local_file, gcs = None
file_size=file_size, blob = type('Blob', (), {'size': file_size})
content_type=uploaded_file.mimetype) else:
log.debug('Marking uploaded file id=%s, fname=%s, '
'size=%i as "queued_for_processing"',
file_id, internal_fname, file_size)
update_file_doc(file_id,
status='queued_for_processing' if may_process_file else 'complete',
file_path=internal_fname,
length=blob.size,
content_type=uploaded_file.mimetype)
if may_process_file:
log.debug('Processing uploaded file id=%s, fname=%s, size=%i', file_id,
internal_fname, blob.size)
process_file(bucket, file_id, local_file)
log.debug('Handled uploaded file id=%s, fname=%s, size=%i, status=%i',
file_id, internal_fname, blob.size, status)
# Status is 200 if the file already existed, and 201 if it was newly
# created.
# TODO: add a link to a thumbnail in the response.
return dict(status='ok', file_id=str(file_id), status_code=status)
from ..file_storage_backends.abstract import FileType
def stream_to_gcs(file_id: ObjectId, file_size: int, internal_fname: str, project_id: ObjectId,
stream_for_gcs: FileType, content_type: str) \
-> typing.Tuple[GoogleCloudStorageBlob, GoogleCloudStorageBucket]:
# Upload the file to GCS. # Upload the file to GCS.
from gcloud.streaming import transfer
log.debug('Streaming file to GCS bucket; id=%s, fname=%s, size=%i',
file_id, internal_fname, file_size)
# Files larger than this many bytes will be streamed directly from disk, smaller
# ones will be read into memory and then uploaded.
transfer.RESUMABLE_UPLOAD_THRESHOLD = 102400
try: try:
bucket = GoogleCloudStorageBucket(str(project_id)) gcs = GoogleCloudStorageBucket(project_id)
blob = bucket.blob(internal_fname) blob = gcs.bucket.blob('_/' + internal_fname, chunk_size=256 * 1024 * 2)
blob.create_from_file(stream_for_gcs, file_size=file_size, content_type=content_type) blob.upload_from_file(stream_for_gcs, size=file_size,
content_type=uploaded_file.mimetype)
except Exception: except Exception:
log.exception('Error uploading file to Google Cloud Storage (GCS),' log.exception('Error uploading file to Google Cloud Storage (GCS),'
' aborting handling of uploaded file (id=%s).', file_id) ' aborting handling of uploaded file (id=%s).', file_id)
update_file_doc(file_id, status='failed') update_file_doc(file_id, status='failed')
raise wz_exceptions.InternalServerError( raise wz_exceptions.InternalServerError('Unable to stream file to Google Cloud Storage')
'Unable to stream file to Google Cloud Storage')
return blob, bucket if stream_for_gcs.closed:
log.error('Eek, GCS closed its stream, Andy is not going to like this.')
# Reload the blob to get the file size according to Google.
blob.reload()
log.debug('Marking uploaded file id=%s, fname=%s, size=%i as "queued_for_processing"',
file_id, internal_fname, blob.size)
update_file_doc(file_id,
status='queued_for_processing',
file_path=internal_fname,
length=blob.size,
content_type=uploaded_file.mimetype)
log.debug('Processing uploaded file id=%s, fname=%s, size=%i', file_id, internal_fname, blob.size)
process_file(gcs, file_id, local_file)
# Local processing is done, we can close the local file so it is removed.
if local_file is not None:
local_file.close()
log.debug('Handled uploaded file id=%s, fname=%s, size=%i, status=%i',
file_id, internal_fname, blob.size, status)
# Status is 200 if the file already existed, and 201 if it was newly created.
# TODO: add a link to a thumbnail in the response.
resp = jsonify(status='ok', file_id=str(file_id))
resp.status_code = status
add_access_control_headers(resp)
return resp
def add_access_control_headers(resp): def add_access_control_headers(resp):
@ -913,6 +744,15 @@ def add_access_control_headers(resp):
return resp return resp
def update_file_doc(file_id, **updates):
files = current_app.data.driver.db['files']
res = files.update_one({'_id': ObjectId(file_id)},
{'$set': updates})
log.debug('update_file_doc(%s, %s): %i matched, %i updated.',
file_id, updates, res.matched_count, res.modified_count)
return res
def create_file_doc_for_upload(project_id, uploaded_file): def create_file_doc_for_upload(project_id, uploaded_file):
"""Creates a secure filename and a document in MongoDB for the file. """Creates a secure filename and a document in MongoDB for the file.
@ -984,55 +824,14 @@ def compute_aggregate_length_items(file_docs):
compute_aggregate_length(file_doc) compute_aggregate_length(file_doc)
def get_file_url(file_id: ObjectId, variation='') -> str:
"""Return the URL of a file in storage.
Note that this function is cached, see setup_app().
:param file_id: the ID of the file
:param variation: if non-empty, indicates the variation of the file
to return the URL for; if empty, returns the URL of the original.
:return: the URL, or an empty string if the file/variation does not exist.
"""
file_coll = current_app.db('files')
db_file = file_coll.find_one({'_id': file_id})
if not db_file:
return ''
ensure_valid_link(db_file)
if variation:
variations = db_file.get('variations', ())
for file_var in variations:
if file_var['size'] == variation:
return file_var['link']
return ''
return db_file['link']
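# Illustrative usage sketch, not part of the original module. Assumes a Flask
# application context and an existing file document; the ObjectId value and the
# variation key 't' are made up for the example.
def _example_get_file_url_usage() -> tuple:
    example_file_id = ObjectId('55f338f92beb3300c4ff99fe')  # placeholder ID
    original_url = get_file_url(example_file_id)
    thumbnail_url = get_file_url(example_file_id, variation='t')
    # setup_app() below memoizes get_file_url() for 10 seconds via app.cache,
    # so repeated calls in quick succession do not hit MongoDB again.
    return original_url, thumbnail_url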
def update_file_doc(file_id, **updates):
files = current_app.data.driver.db['files']
res = files.update_one({'_id': ObjectId(file_id)},
{'$set': updates})
log.debug('update_file_doc(%s, %s): %i matched, %i updated.',
file_id, updates, res.matched_count, res.modified_count)
return res
def setup_app(app, url_prefix): def setup_app(app, url_prefix):
global get_file_url
cached = app.cache.memoize(timeout=10)
get_file_url = cached(get_file_url)
app.on_pre_GET_files += on_pre_get_files app.on_pre_GET_files += on_pre_get_files
app.on_fetched_item_files += before_returning_file app.on_fetched_item_files += before_returning_file
app.on_fetched_resource_files += before_returning_files app.on_fetched_resource_files += before_returning_files
app.on_delete_item_files += before_deleting_file
app.on_update_files += compute_aggregate_length app.on_update_files += compute_aggregate_length
app.on_replace_files += compute_aggregate_length app.on_replace_files += compute_aggregate_length
app.on_insert_files += compute_aggregate_length_items app.on_insert_files += compute_aggregate_length_items


@ -1,199 +0,0 @@
"""Code for moving files between backends."""
import logging
import os
import tempfile
import requests
import requests.exceptions
from bson import ObjectId
from flask import current_app
from pillar.api import utils
from . import stream_to_gcs, generate_all_links, ensure_valid_link
__all__ = ['PrerequisiteNotMetError', 'change_file_storage_backend', 'move_to_bucket']
log = logging.getLogger(__name__)
class PrerequisiteNotMetError(RuntimeError):
"""Raised when a file cannot be moved due to unmet prerequisites."""
def change_file_storage_backend(file_id, dest_backend):
"""Given a file document, move it to the specified backend (if not already
there) and update the document to reflect that.
Files on the original backend are not deleted automatically.
"""
dest_backend = str(dest_backend)
file_id = ObjectId(file_id)
# Fetch file document
files_collection = current_app.data.driver.db['files']
f = files_collection.find_one(file_id)
if f is None:
raise ValueError('File with _id: {} not found'.format(file_id))
# Check that new backend differs from current one
if dest_backend == f['backend']:
raise PrerequisiteNotMetError('Destination backend ({}) matches the current backend, we '
'are not moving the file'.format(dest_backend))
# TODO Check that new backend is allowed (make conf var)
# Check that the file has a project; without project, we don't know
# which bucket to store the file into.
try:
project_id = f['project']
except KeyError:
raise PrerequisiteNotMetError('File document does not have a project')
# Ensure that all links are up to date before we even attempt a download.
ensure_valid_link(f)
# Upload file and variations to the new backend
variations = f.get('variations', ())
try:
copy_file_to_backend(file_id, project_id, f, f['backend'], dest_backend)
except requests.exceptions.HTTPError as ex:
# Tolerate the main file having already been removed from storage (404/410).
if ex.response.status_code not in {404, 410}:
raise
if not variations:
raise PrerequisiteNotMetError('Main file ({link}) does not exist on server, '
'and no variations exist either'.format(**f))
log.warning('Main file %s does not exist; skipping main and visiting variations', f['link'])
for var in variations:
copy_file_to_backend(file_id, project_id, var, f['backend'], dest_backend)
# Generate new links for the file & all variations. This also saves
# the new backend we set here.
f['backend'] = dest_backend
generate_all_links(f, utils.utcnow())
def copy_file_to_backend(file_id, project_id, file_or_var, src_backend, dest_backend):
# Filenames on GCS do not contain paths, by our convention
internal_fname = os.path.basename(file_or_var['file_path'])
file_or_var['file_path'] = internal_fname
# If the file is not local already, fetch it
if src_backend == 'pillar':
local_finfo = fetch_file_from_local(file_or_var)
else:
local_finfo = fetch_file_from_link(file_or_var['link'])
try:
# Upload to GCS
if dest_backend != 'gcs':
raise ValueError('Only dest_backend="gcs" is supported now.')
if current_app.config['TESTING']:
log.warning('Skipping actual upload to GCS due to TESTING')
else:
# TODO check for name collisions
stream_to_gcs(file_id, local_finfo['file_size'],
internal_fname=internal_fname,
project_id=project_id,
stream_for_gcs=local_finfo['local_file'],
content_type=local_finfo['content_type'])
finally:
# No longer needed, so it can be closed & disposed of.
local_finfo['local_file'].close()
def fetch_file_from_link(link):
"""Utility to download a file from a remote location and return it with
additional info (for upload to a different storage backend).
"""
log.info('Downloading %s', link)
r = requests.get(link, stream=True)
r.raise_for_status()
local_file = tempfile.NamedTemporaryFile(dir=current_app.config['STORAGE_DIR'])
log.info('Downloading to %s', local_file.name)
for chunk in r.iter_content(chunk_size=1024):
if chunk:
local_file.write(chunk)
local_file.seek(0)
file_dict = {
'file_size': os.fstat(local_file.fileno()).st_size,
'content_type': r.headers.get('content-type', 'application/octet-stream'),
'local_file': local_file
}
return file_dict
def fetch_file_from_local(file_doc):
"""Mimicks fetch_file_from_link(), but just returns the local file.
:param file_doc: dict with 'link' key pointing to a path in STORAGE_DIR, and
'content_type' key.
:type file_doc: dict
:rtype: dict
"""
local_file = open(os.path.join(current_app.config['STORAGE_DIR'], file_doc['file_path']), 'rb')
local_finfo = {
'file_size': os.fstat(local_file.fileno()).st_size,
'content_type': file_doc['content_type'],
'local_file': local_file
}
return local_finfo
def move_to_bucket(file_id: ObjectId, dest_project_id: ObjectId, *, skip_storage=False):
"""Move a file + variations from its own bucket to the new project_id bucket.
:param file_id: ID of the file to move.
:param dest_project_id: Project to move to.
:param skip_storage: If True, the storage bucket will not be touched.
Only use this when you know what you're doing.
"""
files_coll = current_app.db('files')
f = files_coll.find_one(file_id)
if f is None:
raise ValueError(f'File with _id: {file_id} not found')
# Move file and variations to the new bucket.
if skip_storage:
log.warning('NOT ACTUALLY MOVING file %s on storage, just updating MongoDB', file_id)
else:
from pillar.api.file_storage_backends import Bucket
bucket_class = Bucket.for_backend(f['backend'])
src_bucket = bucket_class(str(f['project']))
dst_bucket = bucket_class(str(dest_project_id))
src_blob = src_bucket.get_blob(f['file_path'])
src_bucket.copy_blob(src_blob, dst_bucket)
for var in f.get('variations', []):
src_blob = src_bucket.get_blob(var['file_path'])
src_bucket.copy_blob(src_blob, dst_bucket)
# Update the file document after moving was successful.
# No need to update _etag or _updated, since that'll be done when
# the links are regenerated at the end of this function.
log.info('Switching file %s to project %s', file_id, dest_project_id)
update_result = files_coll.update_one({'_id': file_id},
{'$set': {'project': dest_project_id}})
if update_result.matched_count != 1:
raise RuntimeError(
'Unable to update file %s in MongoDB: matched_count=%i; modified_count=%i' % (
file_id, update_result.matched_count, update_result.modified_count))
log.info('Switching file %s: matched_count=%i; modified_count=%i',
file_id, update_result.matched_count, update_result.modified_count)
# Regenerate the links for this file
f['project'] = dest_project_id
generate_all_links(f, now=utils.utcnow())
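# Illustrative usage sketch, not part of the original module. Assumes a Flask
# application context and existing file/project documents; the IDs are
# placeholders and error handling is omitted.
def _example_move(file_id: ObjectId, dest_project_id: ObjectId):
    # Copy the file and its variations to GCS and regenerate the links;
    # the originals on the old backend are left in place.
    change_file_storage_backend(file_id, 'gcs')
    # Move the file and its variations into another project's bucket.
    move_to_bucket(file_id, dest_project_id)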


@ -1,29 +0,0 @@
"""Storage backends.
To obtain a storage backend, use either of the two forms:
>>> bucket = default_storage_backend('bucket_name')
>>> BucketClass = Bucket.for_backend('backend_name')
>>> bucket = BucketClass('bucket_name')
"""
from .abstract import Bucket
# Import the other backends so that they register.
from . import local
from . import gcs
def default_storage_backend(name: str) -> Bucket:
"""Returns an instance of a Bucket, based on the default backend.
Depending on the backend this may actually create the bucket.
"""
from flask import current_app
backend_name = current_app.config['STORAGE_BACKEND']
backend_cls = Bucket.for_backend(backend_name)
return backend_cls(name)
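# Illustrative usage sketch, not part of the original module. Both forms from
# the docstring above resolve to the same Bucket subclass when the app's
# STORAGE_BACKEND is 'local'; the bucket name is a placeholder project ID.
def _example_obtain_bucket() -> Bucket:
    bucket = default_storage_backend('5c1c9e8e8e8e8e8e8e8e8e8e')
    assert type(bucket) is Bucket.for_backend('local')  # holds only for STORAGE_BACKEND='local'
    return bucket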


@ -1,167 +0,0 @@
import abc
import io
import logging
import typing
import pathlib
from bson import ObjectId
__all__ = ['Bucket', 'Blob', 'Path', 'FileType']
# Shorthand for the type of path we use.
Path = pathlib.PurePosixPath
# This is a mess: typing.IO keeps mypy-0.501 happy, but not in all cases,
# and io.FileIO + io.BytesIO keeps PyCharm-2017.1 happy.
FileType = typing.Union[typing.IO, io.FileIO, io.BytesIO]
class Bucket(metaclass=abc.ABCMeta):
"""Can be a GCS bucket or simply a project folder in Pillar
:type name: string
:param name: Name of the bucket. As a convention, we use the ID of
the project to name the bucket.
"""
# Mapping from backend name to Bucket class
backends: typing.Dict[str, typing.Type['Bucket']] = {}
backend_name: str = None # define in subclass.
def __init__(self, name: str) -> None:
self.name = str(name)
def __init_subclass__(cls):
assert cls.backend_name, '%s.backend_name must be non-empty string' % cls
cls.backends[cls.backend_name] = cls
def __repr__(self):
return f'<{self.__class__.__name__} name={self.name!r}>'
@classmethod
def for_backend(cls, backend_name: str) -> typing.Type['Bucket']:
"""Returns the Bucket subclass for the given backend."""
return cls.backends[backend_name]
@abc.abstractmethod
def blob(self, blob_name: str) -> 'Blob':
"""Factory constructor for blob object.
:param blob_name: The path of the blob to be instantiated.
"""
@abc.abstractmethod
def get_blob(self, blob_name: str) -> typing.Optional['Blob']:
"""Get a blob object by name.
If the blob exists return the object, otherwise None.
"""
@abc.abstractmethod
def copy_blob(self, blob: 'Blob', to_bucket: 'Bucket'):
"""Copies a blob from the current bucket to the other bucket.
Implementations only need to support copying between buckets of the
same storage backend.
"""
@abc.abstractmethod
def rename_blob(self, blob: 'Blob', new_name: str) -> 'Blob':
"""Rename the blob, returning the new Blob."""
@classmethod
def copy_to_bucket(cls, blob_name, src_project_id: ObjectId, dest_project_id: ObjectId):
"""Copies a file from one bucket to the other."""
src_storage = cls(str(src_project_id))
dest_storage = cls(str(dest_project_id))
blob = src_storage.get_blob(blob_name)
src_storage.copy_blob(blob, dest_storage)
Bu = typing.TypeVar('Bu', bound=Bucket)
class Blob(metaclass=abc.ABCMeta):
"""A wrapper for file or blob objects."""
def __init__(self, name: str, bucket: Bucket) -> None:
self.name = name
"""Name of this blob in the bucket."""
self.bucket = bucket
self._size_in_bytes: typing.Optional[int] = None
self._log = logging.getLogger(f'{__name__}.Blob')
def __repr__(self):
return f'<{self.__class__.__name__} bucket={self.bucket.name!r} name={self.name!r}>'
@property
def size(self) -> typing.Optional[int]:
"""Size of the object, in bytes.
:returns: The size of the blob or ``None`` if the property
is not set locally.
"""
size = self._size_in_bytes
if size is None:
return None
return int(size)
@abc.abstractmethod
def create_from_file(self, file_obj: FileType, *,
content_type: str,
file_size: int = -1):
"""Copies the file object to the storage.
:param file_obj: The file object to send to storage.
:param content_type: The content type of the file.
:param file_size: The size of the file in bytes, or -1 if unknown
"""
def upload_from_path(self, path: pathlib.Path, content_type: str):
file_size = path.stat().st_size
with path.open('rb') as infile:
self.create_from_file(infile, content_type=content_type,
file_size=file_size)
@abc.abstractmethod
def update_filename(self, filename: str, *, is_attachment=True):
"""Sets the filename which is used when downloading the file.
Not all storage backends support this, and will use the on-disk filename instead.
"""
@abc.abstractmethod
def update_content_type(self, content_type: str, content_encoding: str = ''):
"""Set the content type (and optionally content encoding).
Not all storage backends support this.
"""
@abc.abstractmethod
def get_url(self, *, is_public: bool) -> str:
"""Returns the URL to access this blob.
Note that this may involve API calls to generate a signed URL.
"""
@abc.abstractmethod
def make_public(self):
"""Makes the blob publicly available.
Only performs an actual action on backends that support temporary links.
"""
@abc.abstractmethod
def exists(self) -> bool:
"""Returns True iff the file exists on the storage backend."""
Bl = typing.TypeVar('Bl', bound=Blob)
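# Illustrative sketch, not part of the original module: because
# __init_subclass__ records every concrete subclass in Bucket.backends, a
# hypothetical extra backend only needs a backend_name and the abstract
# methods to become reachable via Bucket.for_backend(). Bodies are stubbed.
class _ExampleInMemoryBucket(Bucket):
    backend_name = 'example-memory'

    def blob(self, blob_name: str) -> 'Blob': ...
    def get_blob(self, blob_name: str) -> typing.Optional['Blob']: ...
    def copy_blob(self, blob: 'Blob', to_bucket: 'Bucket'): ...
    def rename_blob(self, blob: 'Blob', new_name: str) -> 'Blob': ...

assert Bucket.for_backend('example-memory') is _ExampleInMemoryBucket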


@ -1,273 +0,0 @@
import os
import datetime
import logging
import typing
from bson import ObjectId
from gcloud.storage.client import Client
import gcloud.storage.blob
import gcloud.exceptions as gcloud_exc
from flask import current_app, g
from werkzeug.local import LocalProxy
from pillar.api import utils
from .abstract import Bucket, Blob, FileType
log = logging.getLogger(__name__)
def get_client() -> Client:
"""Stores the GCS client on the global Flask object.
The GCS client is not user-specific anyway.
"""
_gcs = getattr(g, '_gcs_client', None)
if _gcs is None:
_gcs = g._gcs_client = Client()
return _gcs
# This hides the specifics of how/where we store the GCS client,
# and allows the rest of the code to use 'gcs' as a simple variable
# that does the right thing.
gcs: Client = LocalProxy(get_client)
class GoogleCloudStorageBucket(Bucket):
"""Cloud Storage bucket interface. We create a bucket for every project. In
the bucket we create first level subdirs as follows:
- '_' (will contain hashed assets, and stays on top of default listing)
- 'svn' (svn checkout mirror)
- 'shared' (any additional folder of static files that is accessed via a
node of 'storage' node_type)
:type bucket_name: string
:param bucket_name: Name of the bucket.
:type subdir: string
:param subdir: The local entry point to browse the bucket.
"""
backend_name = 'gcs'
def __init__(self, name: str, subdir='_') -> None:
super().__init__(name=name)
self._log = logging.getLogger(f'{__name__}.GoogleCloudStorageBucket')
try:
self._gcs_bucket = gcs.get_bucket(name)
except gcloud_exc.NotFound:
self._gcs_bucket = gcs.bucket(name)
# Hardcode the bucket location to EU
self._gcs_bucket.location = 'EU'
# Optionally enable CORS from * (currently only used for vrview)
# self.gcs_bucket.cors = [
# {
# "origin": ["*"],
# "responseHeader": ["Content-Type"],
# "method": ["GET", "HEAD", "DELETE"],
# "maxAgeSeconds": 3600
# }
# ]
self._gcs_bucket.create()
log.info('Created GCS instance for project %s', name)
self.subdir = subdir
def blob(self, blob_name: str) -> 'GoogleCloudStorageBlob':
return GoogleCloudStorageBlob(name=blob_name, bucket=self)
def get_blob(self, internal_fname: str) -> typing.Optional['GoogleCloudStorageBlob']:
blob = self.blob(internal_fname)
if not blob.gblob.exists():
return None
return blob
def _gcs_get(self, path: str, *, chunk_size=None) -> gcloud.storage.Blob:
"""Get selected file info if the path matches.
:param path: The path to the file, relative to the bucket's subdir.
"""
path = os.path.join(self.subdir, path)
blob = self._gcs_bucket.blob(path, chunk_size=chunk_size)
return blob
def _gcs_post(self, full_path, *, path=None) -> typing.Optional[gcloud.storage.Blob]:
"""Create new blob and upload data to it.
"""
path = path if path else os.path.join(self.subdir, os.path.basename(full_path))
gblob = self._gcs_bucket.blob(path)
if gblob.exists():
self._log.error(f'Trying to upload to {path}, but that blob already exists. '
f'Not uploading.')
return None
gblob.upload_from_filename(full_path)
return gblob
# return self.blob_to_dict(blob) # Has issues with threading
def delete_blob(self, path: str) -> bool:
"""Deletes the blob (when removing an asset or replacing a preview)"""
# We want to get the actual blob to delete
gblob = self._gcs_get(path)
try:
gblob.delete()
return True
except gcloud_exc.NotFound:
return False
def copy_blob(self, blob: Blob, to_bucket: Bucket):
"""Copies the given blob from this bucket to the other bucket.
Returns the new blob.
"""
assert isinstance(blob, GoogleCloudStorageBlob)
assert isinstance(to_bucket, GoogleCloudStorageBucket)
self._log.info('Copying %s to bucket %s', blob, to_bucket)
return self._gcs_bucket.copy_blob(blob.gblob, to_bucket._gcs_bucket)
def rename_blob(self, blob: 'GoogleCloudStorageBlob', new_name: str) \
-> 'GoogleCloudStorageBlob':
"""Rename the blob, returning the new Blob."""
assert isinstance(blob, GoogleCloudStorageBlob)
new_name = os.path.join(self.subdir, new_name)
self._log.info('Renaming %s to %r', blob, new_name)
new_gblob = self._gcs_bucket.rename_blob(blob.gblob, new_name)
return GoogleCloudStorageBlob(new_gblob.name, self, gblob=new_gblob)
class GoogleCloudStorageBlob(Blob):
"""GCS blob interface."""
def __init__(self, name: str, bucket: GoogleCloudStorageBucket,
*, gblob: gcloud.storage.blob.Blob=None) -> None:
super().__init__(name, bucket)
self._log = logging.getLogger(f'{__name__}.GoogleCloudStorageBlob')
self.gblob = gblob or bucket._gcs_get(name, chunk_size=256 * 1024 * 2)
def create_from_file(self, file_obj: FileType, *,
content_type: str,
file_size: int = -1) -> None:
from gcloud.streaming import transfer
self._log.debug('Streaming file to GCS bucket %r, size=%i', self, file_size)
# Files larger than this many bytes will be streamed directly from disk,
# smaller ones will be read into memory and then uploaded.
transfer.RESUMABLE_UPLOAD_THRESHOLD = 102400
self.gblob.upload_from_file(file_obj,
size=file_size,
content_type=content_type)
# Reload the blob to get the file size according to Google.
self.gblob.reload()
self._size_in_bytes = self.gblob.size
def update_filename(self, filename: str, *, is_attachment=True):
"""Set the ContentDisposition metadata so that when a file is downloaded
it has a human-readable name.
"""
if '"' in filename:
raise ValueError(f'Filename is not allowed to have double quote in it: {filename!r}')
if is_attachment:
self.gblob.content_disposition = f'attachment; filename="{filename}"'
else:
self.gblob.content_disposition = f'filename="{filename}"'
self.gblob.patch()
def update_content_type(self, content_type: str, content_encoding: str = ''):
"""Set the content type (and optionally content encoding)."""
self.gblob.content_type = content_type
self.gblob.content_encoding = content_encoding
self.gblob.patch()
def get_url(self, *, is_public: bool) -> str:
if is_public:
return self.gblob.public_url
expiration = utils.utcnow() + datetime.timedelta(days=1)
return self.gblob.generate_signed_url(expiration)
def make_public(self):
self.gblob.make_public()
def exists(self) -> bool:
# Reload to get the actual file properties from Google.
try:
self.gblob.reload()
except gcloud_exc.NotFound:
return False
return self.gblob.exists()
def update_file_name(node):
"""Assign to the CGS blob the same name of the asset node. This way when
downloading an asset we get a human-readable name.
"""
# Skip files that are still being processed.
if node['properties'].get('status', '') == 'processing':
return
def _format_name(name, override_ext, size=None, map_type=''):
root, _ = os.path.splitext(name)
size = '-{}'.format(size) if size else ''
map_type = '-{}'.format(map_type) if map_type else ''
return '{}{}{}{}'.format(root, size, map_type, override_ext)
def _update_name(file_id, file_props):
files_collection = current_app.data.driver.db['files']
file_doc = files_collection.find_one({'_id': ObjectId(file_id)})
if file_doc is None or file_doc.get('backend') != 'gcs':
return
# For textures -- the map type should be part of the name.
map_type = file_props.get('map_type', '')
storage = GoogleCloudStorageBucket(str(node['project']))
blob = storage.get_blob(file_doc['file_path'])
if blob is None:
log.warning('Unable to find blob for file %s in project %s',
file_doc['file_path'], file_doc['project'])
return
# Pick file extension from original filename
_, ext = os.path.splitext(file_doc['filename'])
name = _format_name(node['name'], ext, map_type=map_type)
blob.update_filename(name)
# Assign the same name to variations
for v in file_doc.get('variations', []):
_, override_ext = os.path.splitext(v['file_path'])
name = _format_name(node['name'], override_ext, v['size'], map_type=map_type)
blob = storage.get_blob(v['file_path'])
if blob is None:
log.info('Unable to find blob for file %s in project %s. This can happen if the '
'video encoding is still processing.', v['file_path'], node['project'])
continue
blob.update_filename(name)
# Currently we search for 'file' and 'files' keys in the object properties.
# This could become a bit more flexible and rely on a true reference of the
# file object type from the schema.
if 'file' in node['properties']:
_update_name(node['properties']['file'], {})
if 'files' in node['properties']:
for file_props in node['properties']['files']:
_update_name(file_props['file'], file_props)


@ -1,134 +0,0 @@
import logging
import pathlib
import typing
from flask import current_app
__all__ = ['LocalBucket', 'LocalBlob']
from .abstract import Bucket, Blob, FileType, Path
class LocalBucket(Bucket):
backend_name = 'local'
def __init__(self, name: str) -> None:
super().__init__(name)
self._log = logging.getLogger(f'{__name__}.LocalBucket')
# For local storage, the name is actually a partial path, relative
# to the local storage root.
self.root = pathlib.Path(current_app.config['STORAGE_DIR'])
self.bucket_path = pathlib.PurePosixPath(self.name[:2]) / self.name
self.abspath = self.root / self.bucket_path
def blob(self, blob_name: str) -> 'LocalBlob':
return LocalBlob(name=blob_name, bucket=self)
def get_blob(self, blob_name: str) -> typing.Optional['LocalBlob']:
# TODO: Check if file exists, otherwise None
return self.blob(blob_name)
def copy_blob(self, blob: Blob, to_bucket: Bucket):
"""Copies a blob from the current bucket to the other bucket.
Implementations only need to support copying between buckets of the
same storage backend.
"""
assert isinstance(blob, LocalBlob)
assert isinstance(to_bucket, LocalBucket)
self._log.info('Copying %s to bucket %s', blob, to_bucket)
dest_blob = to_bucket.blob(blob.name)
# TODO: implement content type handling for local storage.
self._log.warning('Unable to set correct file content type for %s', dest_blob)
fpath = blob.abspath()
if not fpath.exists():
if not fpath.parent.exists():
raise FileNotFoundError(f'File {fpath} does not exist, and neither does its parent,'
f' unable to copy to {to_bucket}')
raise FileNotFoundError(f'File {fpath} does not exist, unable to copy to {to_bucket}')
with open(fpath, 'rb') as src_file:
dest_blob.create_from_file(src_file, content_type='application/x-octet-stream')
def rename_blob(self, blob: 'LocalBlob', new_name: str) -> 'LocalBlob':
"""Rename the blob, returning the new Blob."""
assert isinstance(blob, LocalBlob)
self._log.info('Renaming %s to %r', blob, new_name)
new_blob = LocalBlob(new_name, self)
old_path = blob.abspath()
new_path = new_blob.abspath()
new_path.parent.mkdir(parents=True, exist_ok=True)
old_path.rename(new_path)
return new_blob
class LocalBlob(Blob):
"""Blob representing a local file on the filesystem."""
bucket: LocalBucket
def __init__(self, name: str, bucket: LocalBucket) -> None:
super().__init__(name, bucket)
self._log = logging.getLogger(f'{__name__}.LocalBlob')
self.partial_path = Path(name[:2]) / name
def abspath(self) -> pathlib.Path:
"""Returns a concrete, absolute path to the local file."""
return pathlib.Path(self.bucket.abspath / self.partial_path)
def get_url(self, *, is_public: bool) -> str:
from flask import url_for
path = self.bucket.bucket_path / self.partial_path
url = url_for('file_storage.index', file_name=str(path), _external=True,
_scheme=current_app.config['SCHEME'])
return url
def create_from_file(self, file_obj: FileType, *,
content_type: str,
file_size: int = -1):
assert hasattr(file_obj, 'read')
import shutil
# Ensure path exists before saving
my_path = self.abspath()
my_path.parent.mkdir(exist_ok=True, parents=True)
with my_path.open('wb') as outfile:
shutil.copyfileobj(typing.cast(typing.IO, file_obj), outfile)
self._size_in_bytes = file_size
def update_filename(self, filename: str, *, is_attachment=True):
# TODO: implement this for local storage.
self._log.info('update_filename(%r) not supported', filename)
def update_content_type(self, content_type: str, content_encoding: str = ''):
self._log.info('update_content_type(%r, %r) not supported', content_type, content_encoding)
def make_public(self):
# No-op on this storage backend.
pass
def exists(self) -> bool:
return self.abspath().exists()
def touch(self):
"""Touch the file, creating parent directories if needed."""
path = self.abspath()
path.parent.mkdir(parents=True, exist_ok=True)
path.touch(exist_ok=True)
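# Illustrative sketch, not part of the original module: how bucket and blob
# names shard into the on-disk layout built above. Requires a Flask app
# context with STORAGE_DIR configured; the names are placeholders.
def _example_local_path() -> pathlib.Path:
    bucket = LocalBucket('5c1c9e8e8e8e8e8e8e8e8e8e')   # project ID used as bucket name
    blob = bucket.blob('a1b2c3d4e5.png')               # internal file name
    # Resolves to <STORAGE_DIR>/5c/5c1c9e8e8e8e8e8e8e8e8e8e/a1/a1b2c3d4e5.png
    return blob.abspath()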


@ -1,6 +1,5 @@
import typing import itertools
import bson
import pymongo import pymongo
from flask import Blueprint, current_app from flask import Blueprint, current_app
@ -9,85 +8,103 @@ from pillar.api.utils import jsonify
blueprint = Blueprint('latest', __name__) blueprint = Blueprint('latest', __name__)
def _public_project_ids() -> typing.List[bson.ObjectId]: def keep_fetching(collection, db_filter, projection, sort, py_filter,
"""Returns a list of ObjectIDs of public projects. batch_size=12):
"""Yields results for which py_filter returns True"""
Memoized in setup_app(). projection['_deleted'] = 1
""" curs = collection.find(db_filter, projection).sort(sort)
curs.batch_size(batch_size)
proj_coll = current_app.db('projects') for doc in curs:
result = proj_coll.find({'is_private': False}, {'_id': 1}) if doc.get('_deleted'):
return [p['_id'] for p in result] continue
doc.pop('_deleted', None)
if py_filter(doc):
yield doc
def latest_nodes(db_filter, projection, limit): def latest_nodes(db_filter, projection, py_filter, limit):
"""Returns the latest nodes, of a certain type, of public projects. nodes = current_app.data.driver.db['nodes']
Also includes information about the project and the user of each node.
"""
proj = { proj = {
'_created': 1, '_created': 1,
'_updated': 1, '_updated': 1,
'project._id': 1,
'project.url': 1,
'project.name': 1,
'name': 1,
'node_type': 1,
'parent': 1,
**projection,
} }
proj.update(projection)
nodes_coll = current_app.db('nodes') latest = keep_fetching(nodes, db_filter, proj,
pipeline = [ [('_created', pymongo.DESCENDING)],
{'$match': {'_deleted': {'$ne': True}}}, py_filter, limit)
{'$match': db_filter},
{'$match': {'project': {'$in': _public_project_ids()}}},
{'$sort': {'_created': pymongo.DESCENDING}},
{'$limit': limit},
{'$lookup': {"from": "users",
"localField": "user",
"foreignField": "_id",
"as": "user"}},
{'$unwind': {'path': "$user"}},
{'$lookup': {"from": "projects",
"localField": "project",
"foreignField": "_id",
"as": "project"}},
{'$unwind': {'path': "$project"}},
{'$project': proj},
]
latest = nodes_coll.aggregate(pipeline) result = list(itertools.islice(latest, limit))
return list(latest) return result
def has_public_project(node_doc):
"""Returns True iff the project the node belongs to is public."""
project_id = node_doc.get('project')
return is_project_public(project_id)
# TODO: cache result, for a limited amount of time, or for this HTTP request.
def is_project_public(project_id):
"""Returns True iff the project is public."""
project = current_app.data.driver.db['projects'].find_one(project_id)
if not project:
return False
return not project.get('is_private')
@blueprint.route('/assets') @blueprint.route('/assets')
def latest_assets(): def latest_assets():
latest = latest_nodes({'node_type': 'asset', latest = latest_nodes({'node_type': 'asset',
'properties.status': 'published'}, 'properties.status': 'published'},
{'name': 1, 'node_type': 1, {'name': 1, 'project': 1, 'user': 1, 'node_type': 1,
'parent': 1, 'picture': 1, 'properties.status': 1, 'parent': 1, 'picture': 1, 'properties.status': 1,
'properties.content_type': 1, 'properties.content_type': 1,
'properties.duration_seconds': 1,
'permissions.world': 1}, 'permissions.world': 1},
12) has_public_project, 12)
embed_user(latest)
embed_project(latest)
return jsonify({'_items': latest}) return jsonify({'_items': latest})
def embed_user(latest):
users = current_app.data.driver.db['users']
for comment in latest:
user_id = comment['user']
comment['user'] = users.find_one(user_id, {
'auth': 0, 'groups': 0, 'roles': 0, 'settings': 0, 'email': 0,
'_created': 0, '_updated': 0, '_etag': 0})
def embed_project(latest):
projects = current_app.data.driver.db['projects']
for comment in latest:
project_id = comment['project']
comment['project'] = projects.find_one(project_id, {'_id': 1, 'name': 1,
'url': 1})
@blueprint.route('/comments') @blueprint.route('/comments')
def latest_comments(): def latest_comments():
latest = latest_nodes({'node_type': 'comment', latest = latest_nodes({'node_type': 'comment',
'properties.status': 'published'}, 'properties.status': 'published'},
{'parent': 1, 'user.full_name': 1, {'project': 1, 'parent': 1, 'user': 1,
'properties.content': 1, 'node_type': 1, 'properties.content': 1, 'node_type': 1,
'properties.status': 1, 'properties.status': 1,
'properties.is_reply': 1}, 'properties.is_reply': 1},
10) has_public_project, 6)
# Embed the comments' parents. # Embed the comments' parents.
# TODO: move to aggregation pipeline.
nodes = current_app.data.driver.db['nodes'] nodes = current_app.data.driver.db['nodes']
parents = {} parents = {}
for comment in latest: for comment in latest:
@ -101,12 +118,11 @@ def latest_comments():
parents[parent_id] = parent parents[parent_id] = parent
comment['parent'] = parent comment['parent'] = parent
embed_project(latest)
embed_user(latest)
return jsonify({'_items': latest}) return jsonify({'_items': latest})
def setup_app(app, url_prefix): def setup_app(app, url_prefix):
global _public_project_ids
app.register_api_blueprint(blueprint, url_prefix=url_prefix) app.register_api_blueprint(blueprint, url_prefix=url_prefix)
cached = app.cache.cached(timeout=3600)
_public_project_ids = cached(_public_project_ids)


@ -1,16 +1,15 @@
import base64 import base64
import datetime
import hashlib import hashlib
import logging import logging
import typing
import bcrypt import bcrypt
import datetime
import rsa.randnum
from bson import tz_util
from flask import abort, Blueprint, current_app, jsonify, request from flask import abort, Blueprint, current_app, jsonify, request
from pillar.api.utils.authentication import create_new_user_document from pillar.api.utils.authentication import create_new_user_document
from pillar.api.utils.authentication import make_unique_username from pillar.api.utils.authentication import make_unique_username
from pillar.api.utils.authentication import store_token from pillar.api.utils.authentication import store_token
from pillar.api.utils import utcnow
blueprint = Blueprint('authentication', __name__) blueprint = Blueprint('authentication', __name__)
log = logging.getLogger(__name__) log = logging.getLogger(__name__)
@ -38,7 +37,17 @@ def create_local_user(email, password):
return r['_id'] return r['_id']
def get_local_user(username, password): @blueprint.route('/make-token', methods=['POST'])
def make_token():
"""Direct login for a user, without OAuth, using local database. Generates
a token that is passed back to Pillar Web and used in subsequent
transactions.
:return: a token string
"""
username = request.form['username']
password = request.form['password']
# Look up user in db # Look up user in db
users_collection = current_app.data.driver.db['users'] users_collection = current_app.data.driver.db['users']
user = users_collection.find_one({'username': username}) user = users_collection.find_one({'username': username})
@ -53,63 +62,35 @@ def get_local_user(username, password):
hashed_password = hash_password(password, salt) hashed_password = hash_password(password, salt)
if hashed_password != credentials['token']: if hashed_password != credentials['token']:
return abort(403) return abort(403)
return user
@blueprint.route('/make-token', methods=['POST'])
def make_token():
"""Direct login for a user, without OAuth, using local database. Generates
a token that is passed back to Pillar Web and used in subsequent
transactions.
:return: a token string
"""
username = request.form['username']
password = request.form['password']
user = get_local_user(username, password)
token = generate_and_store_token(user['_id']) token = generate_and_store_token(user['_id'])
return jsonify(token=token['token']) return jsonify(token=token['token'])
def generate_and_store_token(user_id, days=15, prefix=b'') -> dict: def generate_and_store_token(user_id, days=15, prefix=''):
"""Generates token based on random bits. """Generates token based on random bits.
NOTE: the returned document includes the plain-text token.
DO NOT STORE OR LOG THIS unless there is a good reason to.
:param user_id: ObjectId of the owning user. :param user_id: ObjectId of the owning user.
:param days: token will expire in this many days. :param days: token will expire in this many days.
:param prefix: the token will be prefixed by these bytes, for easy identification. :param prefix: the token will be prefixed by this string, for easy identification.
:return: the token document with the token in plain text as well as hashed. :return: the token document.
""" """
if not isinstance(prefix, bytes): random_bits = rsa.randnum.read_random_bits(256)
raise TypeError('prefix must be bytes, not %s' % type(prefix))
import secrets
random_bits = secrets.token_bytes(32)
# Use 'xy' as altargs to prevent + and / characters from appearing. # Use 'xy' as altargs to prevent + and / characters from appearing.
# We never have to b64decode the string anyway. # We never have to b64decode the string anyway.
token = prefix + base64.b64encode(random_bits, altchars=b'xy').strip(b'=') token = prefix + base64.b64encode(random_bits, altchars='xy').strip('=')
token_expiry = utcnow() + datetime.timedelta(days=days) token_expiry = datetime.datetime.now(tz=tz_util.utc) + datetime.timedelta(days=days)
return store_token(user_id, token.decode('ascii'), token_expiry) return store_token(user_id, token, token_expiry)
def hash_password(password: str, salt: typing.Union[str, bytes]) -> str: def hash_password(password, salt):
password = password.encode() if isinstance(salt, unicode):
if isinstance(salt, str):
salt = salt.encode('utf-8') salt = salt.encode('utf-8')
encoded_password = base64.b64encode(hashlib.sha256(password).digest())
hash = hashlib.sha256(password).digest() return bcrypt.hashpw(encoded_password, salt)
encoded_password = base64.b64encode(hash)
hashed_password = bcrypt.hashpw(encoded_password, salt)
return hashed_password.decode('ascii')
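# Illustrative sketch, not part of the original module, restating the scheme of
# hash_password() above: SHA-256 the password, base64-encode the digest (bcrypt
# only looks at the first 72 bytes of its input), then bcrypt it with the
# per-user salt stored in the credentials document. This mirrors how
# get_local_user() above verifies a login.
def _example_verify_password(candidate: str, salt: bytes, stored_hash: str) -> bool:
    digest = base64.b64encode(hashlib.sha256(candidate.encode()).digest())
    return bcrypt.hashpw(digest, salt).decode('ascii') == stored_hash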
def setup_app(app, url_prefix): def setup_app(app, url_prefix):


@ -6,91 +6,3 @@ _file_embedded_schema = {
'embeddable': True 'embeddable': True
} }
} }
ATTACHMENT_SLUG_REGEX = r'[a-zA-Z0-9_\-]+'
attachments_embedded_schema = {
'type': 'dict',
'keysrules': {
'type': 'string',
'regex': '^%s$' % ATTACHMENT_SLUG_REGEX,
},
'valuesrules': {
'type': 'dict',
'schema': {
'oid': {
'type': 'objectid',
'required': True,
},
'collection': {
'type': 'string',
'allowed': ['files'],
'default': 'files',
},
},
},
}
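# Illustrative sketch, not part of the original module: a node's 'attachments'
# property that would validate against the schema above. The slug and the
# ObjectId value are placeholders.
def _example_attachments() -> dict:
    from bson import ObjectId  # only needed for the illustrative value
    return {
        'header-image': {                                  # key must match ATTACHMENT_SLUG_REGEX
            'oid': ObjectId('55f338f92beb3300c4ff99fe'),   # placeholder file ID
            'collection': 'files',
        },
    }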
# TODO (fsiddi) reference this schema in all node_types that allow ratings
ratings_embedded_schema = {
'type': 'dict',
# Total count of positive ratings (updated at every rating action)
'schema': {
'positive': {
'type': 'integer',
},
# Total count of negative ratings (updated at every rating action)
'negative': {
'type': 'integer',
},
# Collection of ratings, keyed by user
'ratings': {
'type': 'list',
'schema': {
'type': 'dict',
'schema': {
'user': {
'type': 'objectid',
'data_relation': {
'resource': 'users',
'field': '_id',
'embeddable': False
}
},
'is_positive': {
'type': 'boolean'
},
# Weight of the rating based on user rep and the context.
# Currently we have the following weights:
# - 1 auto null
# - 2 manual null
# - 3 auto valid
# - 4 manual valid
'weight': {
'type': 'integer'
}
}
}
},
'hot': {'type': 'float'},
},
}
# Import after defining the common embedded schemas, to prevent dependency cycles.
from pillar.api.node_types.asset import node_type_asset
from pillar.api.node_types.blog import node_type_blog
from pillar.api.node_types.comment import node_type_comment
from pillar.api.node_types.group import node_type_group
from pillar.api.node_types.group_hdri import node_type_group_hdri
from pillar.api.node_types.group_texture import node_type_group_texture
from pillar.api.node_types.hdri import node_type_hdri
from pillar.api.node_types.page import node_type_page
from pillar.api.node_types.post import node_type_post
from pillar.api.node_types.storage import node_type_storage
from pillar.api.node_types.text import node_type_text
from pillar.api.node_types.texture import node_type_texture
PILLAR_NODE_TYPES = (node_type_asset, node_type_blog, node_type_comment, node_type_group,
node_type_group_hdri, node_type_group_texture, node_type_hdri, node_type_page,
node_type_post, node_type_storage, node_type_text, node_type_texture)
PILLAR_NAMED_NODE_TYPES = {nt['name']: nt for nt in PILLAR_NODE_TYPES}
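# Illustrative sketch, not part of the original module: the mapping above
# allows node type definitions to be looked up by name, for example when
# deciding which dynamic schema applies to a node.
def _example_node_type_lookup() -> list:
    asset_node_type = PILLAR_NAMED_NODE_TYPES['asset']
    return sorted(asset_node_type['dyn_schema'].keys())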


@ -0,0 +1,5 @@
node_type_act = {
'name': 'act',
'description': 'Act node type',
'parent': []
}


@ -1,4 +1,4 @@
from pillar.api.node_types import _file_embedded_schema, attachments_embedded_schema from pillar.api.node_types import _file_embedded_schema
node_type_asset = { node_type_asset = {
'name': 'asset', 'name': 'asset',
@ -24,14 +24,29 @@ node_type_asset = {
'content_type': { 'content_type': {
'type': 'string' 'type': 'string'
}, },
# The duration of a video asset in seconds.
'duration_seconds': {
'type': 'integer'
},
# We point to the original file (and use it to extract any relevant # We point to the original file (and use it to extract any relevant
# variation useful for our scope). # variation useful for our scope).
'file': _file_embedded_schema, 'file': _file_embedded_schema,
'attachments': attachments_embedded_schema, 'attachments': {
'type': 'list',
'schema': {
'type': 'dict',
'schema': {
'field': {'type': 'string'},
'files': {
'type': 'list',
'schema': {
'type': 'dict',
'schema': {
'file': _file_embedded_schema,
'slug': {'type': 'string', 'minlength': 1},
'size': {'type': 'string'}
}
}
}
}
}
},
# Tags for search # Tags for search
'tags': { 'tags': {
'type': 'list', 'type': 'list',
@ -43,30 +58,17 @@ node_type_asset = {
# this schema: "Root > Nested Category > One More Nested Category" # this schema: "Root > Nested Category > One More Nested Category"
'categories': { 'categories': {
'type': 'string' 'type': 'string'
}, }
'license_type': {
'default': 'cc-by',
'type': 'string',
'allowed': [
'cc-by',
'cc-0',
'cc-by-sa',
'cc-by-nd',
'cc-by-nc',
'copyright'
]
},
'license_notes': {
'type': 'string'
},
}, },
'form_schema': { 'form_schema': {
'status': {},
'content_type': {'visible': False}, 'content_type': {'visible': False},
'duration_seconds': {'visible': False}, 'file': {},
'attachments': {'visible': False},
'order': {'visible': False}, 'order': {'visible': False},
'tags': {'visible': False}, 'tags': {'visible': False},
'categories': {'visible': False}, 'categories': {'visible': False}
'license_type': {'visible': False},
'license_notes': {'visible': False},
}, },
'permissions': {
}
} }


@ -2,6 +2,10 @@ node_type_blog = {
'name': 'blog', 'name': 'blog',
'description': 'Container for node_type post.', 'description': 'Container for node_type post.',
'dyn_schema': { 'dyn_schema': {
# Path for a custom template to be used for rendering the posts
'template': {
'type': 'string',
},
'categories' : { 'categories' : {
'type': 'list', 'type': 'list',
'schema': { 'schema': {
@ -14,4 +18,12 @@ node_type_blog = {
'template': {}, 'template': {},
}, },
'parent': ['project',], 'parent': ['project',],
'permissions': {
# 'groups': [{
# 'group': app.config['ADMIN_USER_GROUP'],
# 'methods': ['GET', 'PUT', 'POST']
# }],
# 'users': [],
# 'world': ['GET']
}
} }


@ -1,15 +1,12 @@
from pillar.api.node_types import attachments_embedded_schema
from pillar.api.node_types.utils import markdown_fields
node_type_comment = { node_type_comment = {
'name': 'comment', 'name': 'comment',
'description': 'Comments for asset nodes, pages, etc.', 'description': 'Comments for asset nodes, pages, etc.',
'dyn_schema': { 'dyn_schema': {
# The actual comment content # The actual comment content (initially Markdown format)
**markdown_fields( 'content': {
'content', 'type': 'string',
minlength=5, 'minlength': 5,
required=True), },
'status': { 'status': {
'type': 'string', 'type': 'string',
'allowed': [ 'allowed': [
@ -51,9 +48,18 @@ node_type_comment = {
} }
}, },
'confidence': {'type': 'float'}, 'confidence': {'type': 'float'},
'is_reply': {'type': 'boolean'}, 'is_reply': {'type': 'boolean'}
'attachments': attachments_embedded_schema, },
'form_schema': {
'content': {},
'status': {},
'rating_positive': {},
'rating_negative': {},
'ratings': {},
'confidence': {},
'is_reply': {}
}, },
'form_schema': {},
'parent': ['asset', 'comment'], 'parent': ['asset', 'comment'],
'permissions': {
}
} }


@ -1,9 +1,9 @@
node_type_group = { node_type_group = {
'name': 'group', 'name': 'group',
'description': 'Folder node type', 'description': 'Generic group node type edited',
'parent': ['group', 'project'], 'parent': ['group', 'project'],
'dyn_schema': { 'dyn_schema': {
# Used for sorting within the context of a group
'order': { 'order': {
'type': 'integer' 'type': 'integer'
}, },
@ -20,12 +20,14 @@ node_type_group = {
'notes': { 'notes': {
'type': 'string', 'type': 'string',
'maxlength': 256, 'maxlength': 256,
} },
}, },
'form_schema': { 'form_schema': {
'url': {'visible': False}, 'url': {'visible': False},
'status': {},
'notes': {'visible': False}, 'notes': {'visible': False},
'order': {'visible': False} 'order': {'visible': False}
}, },
'permissions': {
}
} }


@ -15,5 +15,8 @@ node_type_group_hdri = {
], ],
} }
}, },
'form_schema': {}, 'form_schema': {
'status': {},
'order': {}
}
} }


@ -15,5 +15,8 @@ node_type_group_texture = {
], ],
} }
}, },
'form_schema': {}, 'form_schema': {
'status': {},
'order': {}
}
} }


@ -7,11 +7,6 @@ node_type_hdri = {
'description': 'HDR Image', 'description': 'HDR Image',
'parent': ['group_hdri'], 'parent': ['group_hdri'],
'dyn_schema': { 'dyn_schema': {
# Default yaw angle in degrees.
'default_yaw': {
'type': 'float',
'default': 0.0
},
'status': { 'status': {
'type': 'string', 'type': 'string',
'allowed': [ 'allowed': [
@ -67,5 +62,5 @@ node_type_hdri = {
'content_type': {'visible': False}, 'content_type': {'visible': False},
'tags': {'visible': False}, 'tags': {'visible': False},
'categories': {'visible': False}, 'categories': {'visible': False},
}, }
} }


@ -1,9 +1,16 @@
from pillar.api.node_types import attachments_embedded_schema from pillar.api.node_types import _file_embedded_schema
node_type_page = { node_type_page = {
'name': 'page', 'name': 'page',
'description': 'A single page', 'description': 'A single page',
'dyn_schema': { 'dyn_schema': {
# The page content (Markdown format)
'content': {
'type': 'string',
'minlength': 5,
'maxlength': 90000,
'required': True
},
'status': { 'status': {
'type': 'string', 'type': 'string',
'allowed': [ 'allowed': [
@ -15,10 +22,33 @@ node_type_page = {
'url': { 'url': {
'type': 'string' 'type': 'string'
}, },
'attachments': attachments_embedded_schema, 'attachments': {
'type': 'list',
'schema': {
'type': 'dict',
'schema': {
'field': {'type': 'string'},
'files': {
'type': 'list',
'schema': {
'type': 'dict',
'schema': {
'file': _file_embedded_schema,
'slug': {'type': 'string', 'minlength': 1},
'size': {'type': 'string'}
}
}
}
}
}
}
}, },
'form_schema': { 'form_schema': {
'content': {},
'status': {},
'url': {},
'attachments': {'visible': False}, 'attachments': {'visible': False},
}, },
'parent': ['project', ], 'parent': ['project', ],
'permissions': {}
} }


@ -1,14 +1,16 @@
from pillar.api.node_types import attachments_embedded_schema from pillar.api.node_types import _file_embedded_schema
from pillar.api.node_types.utils import markdown_fields
node_type_post = { node_type_post = {
'name': 'post', 'name': 'post',
'description': 'A blog post, for any project', 'description': 'A blog post, for any project',
'dyn_schema': { 'dyn_schema': {
**markdown_fields('content', # The blogpost content (Markdown format)
minlength=5, 'content': {
maxlength=90000, 'type': 'string',
required=True), 'minlength': 5,
'maxlength': 90000,
'required': True
},
'status': { 'status': {
'type': 'string', 'type': 'string',
'allowed': [ 'allowed': [
@ -24,10 +26,34 @@ node_type_post = {
'url': { 'url': {
'type': 'string' 'type': 'string'
}, },
'attachments': attachments_embedded_schema, 'attachments': {
'type': 'list',
'schema': {
'type': 'dict',
'schema': {
'field': {'type': 'string'},
'files': {
'type': 'list',
'schema': {
'type': 'dict',
'schema': {
'file': _file_embedded_schema,
'slug': {'type': 'string', 'minlength': 1},
'size': {'type': 'string'}
}
}
}
}
}
}
}, },
'form_schema': { 'form_schema': {
'content': {},
'status': {},
'category': {},
'url': {},
'attachments': {'visible': False}, 'attachments': {'visible': False},
}, },
'parent': ['blog', ], 'parent': ['blog', ],
'permissions': {}
} }


@ -0,0 +1,124 @@
from pillar.api.node_types import _file_embedded_schema
node_type_project = {
'name': 'project',
'parent': {},
'description': 'The official project type',
'dyn_schema': {
'category': {
'type': 'string',
'allowed': [
'training',
'film',
'assets',
'software',
'game'
],
'required': True,
},
'is_private': {
'type': 'boolean'
},
'url': {
'type': 'string'
},
'organization': {
'type': 'objectid',
'nullable': True,
'data_relation': {
'resource': 'organizations',
'field': '_id',
'embeddable': True
},
},
'owners': {
'type': 'dict',
'schema': {
'users': {
'type': 'list',
'schema': {
'type': 'objectid',
}
},
'groups': {
'type': 'list',
'schema': {
'type': 'objectid',
'data_relation': {
'resource': 'groups',
'field': '_id',
'embeddable': True
}
}
}
}
},
'status': {
'type': 'string',
'allowed': [
'published',
'pending',
],
},
# Logo
'picture_square': _file_embedded_schema,
# Header
'picture_header': _file_embedded_schema,
# Short summary for the project
'summary': {
'type': 'string',
'maxlength': 128
},
# Latest nodes being edited
'nodes_latest': {
'type': 'list',
'schema': {
'type': 'objectid',
}
},
# Featured nodes, manually added
'nodes_featured': {
'type': 'list',
'schema': {
'type': 'objectid',
}
},
# Latest blog posts, manually added
'nodes_blog': {
'type': 'list',
'schema': {
'type': 'objectid',
}
}
},
'form_schema': {
'is_private': {},
# TODO add group parsing
'category': {},
'url': {},
'organization': {},
'picture_square': {},
'picture_header': {},
'summary': {},
'owners': {
'schema': {
'users': {},
'groups': {
'items': [('Group', 'name')],
},
}
},
'status': {},
'nodes_featured': {},
'nodes_latest': {},
'nodes_blog': {}
},
'permissions': {
# 'groups': [{
# 'group': app.config['ADMIN_USER_GROUP'],
# 'methods': ['GET', 'PUT', 'POST']
# }],
# 'users': [],
# 'world': ['GET']
}
}


@ -0,0 +1,5 @@
node_type_scene = {
'name': 'scene',
'description': 'Scene node type',
'parent': ['act'],
}

View File

@ -0,0 +1,45 @@
node_type_shot = {
'name': 'shot',
'description': 'Shot Node Type, for shots',
'dyn_schema': {
'url': {
'type': 'string',
},
'cut_in': {
'type': 'integer'
},
'cut_out': {
'type': 'integer'
},
'status': {
'type': 'string',
'allowed': [
'on_hold',
'todo',
'in_progress',
'review',
'final'
],
},
'notes': {
'type': 'string',
'maxlength': 256,
},
'shot_group': {
'type': 'string',
#'data_relation': {
# 'resource': 'nodes',
# 'field': '_id',
#},
},
},
'form_schema': {
'url': {},
'cut_in': {},
'cut_out': {},
'status': {},
'notes': {},
'shot_group': {}
},
'parent': ['scene']
}
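
To make the shot schema concrete, the editable properties of one shot could look like this (all values are invented for illustration):

    example_shot_properties = {
        'url': '01_02_a',
        'cut_in': 1001,
        'cut_out': 1150,
        'status': 'in_progress',
        'notes': 'Waiting on final layout before lighting.',
        'shot_group': 'seq-01',
    }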

View File

@ -16,11 +16,22 @@ node_type_storage = {
         'subdir': {
             'type': 'string',
         },
-        # Which backend is used to store the files (gcs, local)
+        # Which backend is used to store the files (gcs, pillar, bam, cdnsun)
         'backend': {
             'type': 'string',
         },
     },
-    'form_schema': {},
+    'form_schema': {
+        'subdir': {},
+        'project': {},
+        'backend': {}
+    },
     'parent': ['group', 'project'],
+    'permissions': {
+        # 'groups': [{
+        #     'group': app.config['ADMIN_USER_GROUP'],
+        #     'methods': ['GET', 'PUT', 'POST']
+        # }],
+        # 'users': [],
+    }
 }

View File

@ -0,0 +1,107 @@
node_type_task = {
'name': 'task',
'description': 'Task Node Type, for tasks',
'dyn_schema': {
'status': {
'type': 'string',
'allowed': [
'todo',
'in_progress',
'on_hold',
'approved',
'cbb',
'final',
'review'
],
'required': True,
},
'filepath': {
'type': 'string',
},
'revision': {
'type': 'integer',
},
'owners': {
'type': 'dict',
'schema': {
'users': {
'type': 'list',
'schema': {
'type': 'objectid',
}
},
'groups': {
'type': 'list',
'schema': {
'type': 'objectid',
}
}
}
},
'time': {
'type': 'dict',
'schema': {
'start': {
'type': 'datetime'
},
'duration': {
'type': 'integer'
},
'chunks': {
'type': 'list',
'schema': {
'type': 'dict',
'schema': {
'start': {
'type': 'datetime',
},
'duration': {
'type': 'integer',
}
}
}
},
}
},
'is_conflicting' : {
'type': 'boolean'
},
'is_processing' : {
'type': 'boolean'
},
'is_open' : {
'type': 'boolean'
}
},
'form_schema': {
'status': {},
'filepath': {},
'revision': {},
'owners': {
'schema': {
'users':{
'items': [('User', 'first_name')],
},
'groups': {}
}
},
'time': {
'schema': {
'start': {},
'duration': {},
'chunks': {
'visible': False,
'schema': {
'start': {},
'duration': {}
}
}
}
},
'is_conflicting': {},
'is_open': {},
'is_processing': {},
},
'parent': ['shot']
}

View File

@ -24,5 +24,5 @@ node_type_text = {
     },
     'form_schema': {
         'shared_slug': {'visible': False},
-    },
+    }
 }

View File

@ -27,19 +27,13 @@ node_type_texture = {
                     'map_type': {
                         'type': 'string',
                         'allowed': [
-                            "alpha",
-                            "ambient occlusion",
-                            "bump",
-                            "color",
-                            "displacement",
-                            "emission",
-                            "glossiness",
-                            "id",
-                            "mask",
-                            "normal",
-                            "roughness",
-                            "specular",
-                            "translucency",
+                            'color',
+                            'specular',
+                            'bump',
+                            'normal',
+                            'translucency',
+                            'emission',
+                            'alpha'
                         ]}
                 }
             }
@ -64,8 +58,15 @@ node_type_texture = {
         }
     },
     'form_schema': {
+        'status': {},
         'content_type': {'visible': False},
+        'files': {},
+        'is_tileable': {},
+        'is_landscape': {},
+        'resolution': {},
+        'aspect_ratio': {},
+        'order': {},
         'tags': {'visible': False},
         'categories': {'visible': False},
-    },
+    }
 }

View File

@ -1,34 +0,0 @@
from pillar import markdown
def markdown_fields(field: str, **kwargs) -> dict:
"""
Creates a field for the markdown, and a field for the cached html.
Example usage:
schema = {'myDoc': {
'type': 'list',
'schema': {
'type': 'dict',
'schema': {
**markdown_fields('content', required=True),
}
},
}}
:param field:
:return:
"""
cache_field = markdown.cache_field_name(field)
return {
field: {
'type': 'string',
**kwargs
},
cache_field: {
'type': 'string',
'readonly': True,
'default': field, # Name of the field containing the markdown. Will be input to the coerce function.
'coerce': 'markdown',
}
}
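
Spreading markdown_fields('content', maxlength=90000, required=True) into a dyn_schema therefore yields two sibling fields: the Markdown source and a read-only cached-HTML field. A rough sketch of the result follows; the exact cache field name is whatever markdown.cache_field_name('content') returns, assumed here to be '_content_html' to match the projection used elsewhere in this commit:

    {
        'content': {
            'type': 'string',
            'maxlength': 90000,
            'required': True,
        },
        '_content_html': {         # assumed name, see markdown.cache_field_name()
            'type': 'string',
            'readonly': True,
            'default': 'content',  # the markdown source field, fed to the 'markdown' coercer
            'coerce': 'markdown',
        },
    }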

View File

@ -1,19 +1,23 @@
 import base64
-import datetime
 import logging
+import urlparse

 import pymongo.errors
+import rsa.randnum
 import werkzeug.exceptions as wz_exceptions
-from flask import current_app, Blueprint, request
+from bson import ObjectId
+from flask import current_app, g, Blueprint, request

-from pillar.api.nodes import eve_hooks, comments, activities
+from pillar.api import file_storage
+from pillar.api.activities import activity_subscribe, activity_object_add
+from pillar.api.utils.algolia import algolia_index_node_delete
+from pillar.api.utils.algolia import algolia_index_node_save
 from pillar.api.utils import str2id, jsonify
 from pillar.api.utils.authorization import check_permissions, require_login
-from pillar.web.utils import pretty_date
+from pillar.api.utils.gcs import update_file_name

 log = logging.getLogger(__name__)

 blueprint = Blueprint('nodes_api', __name__)
-ROLES_FOR_SHARING = ROLES_FOR_COMMENTING = {'subscriber', 'demo'}
+ROLES_FOR_SHARING = {u'subscriber', u'demo'}


 @blueprint.route('/<node_id>/share', methods=['GET', 'POST'])
@ -30,8 +34,6 @@ def share_node(node_id):
         'node_type': 1,
         'short_code': 1
     })
-    if not node:
-        raise wz_exceptions.NotFound('Node %s does not exist.' % node_id)

     check_permissions('nodes', node, request.method)

@ -48,121 +50,7 @@ def share_node(node_id):
     else:
         return '', 204

-    return jsonify(eve_hooks.short_link_info(short_code), status=status)
+    return jsonify(short_link_info(short_code), status=status)
@blueprint.route('/<string(length=24):node_path>/comments', methods=['GET'])
def get_node_comments(node_path: str):
node_id = str2id(node_path)
return comments.get_node_comments(node_id)
@blueprint.route('/<string(length=24):node_path>/comments', methods=['POST'])
@require_login(require_roles=ROLES_FOR_COMMENTING)
def post_node_comment(node_path: str):
node_id = str2id(node_path)
msg = request.json['msg']
attachments = request.json.get('attachments', {})
return comments.post_node_comment(node_id, msg, attachments)
@blueprint.route('/<string(length=24):node_path>/comments/<string(length=24):comment_path>', methods=['PATCH'])
@require_login(require_roles=ROLES_FOR_COMMENTING)
def patch_node_comment(node_path: str, comment_path: str):
node_id = str2id(node_path)
comment_id = str2id(comment_path)
msg = request.json['msg']
attachments = request.json.get('attachments', {})
return comments.patch_node_comment(node_id, comment_id, msg, attachments)
@blueprint.route('/<string(length=24):node_path>/comments/<string(length=24):comment_path>/vote', methods=['POST'])
@require_login(require_roles=ROLES_FOR_COMMENTING)
def post_node_comment_vote(node_path: str, comment_path: str):
node_id = str2id(node_path)
comment_id = str2id(comment_path)
vote_str = request.json['vote']
vote = int(vote_str)
return comments.post_node_comment_vote(node_id, comment_id, vote)
@blueprint.route('/<string(length=24):node_path>/activities', methods=['GET'])
def activities_for_node(node_path: str):
node_id = str2id(node_path)
return jsonify(activities.for_node(node_id))
@blueprint.route('/tagged/')
@blueprint.route('/tagged/<tag>')
def tagged(tag=''):
"""Return all tagged nodes of public projects as JSON."""
from pillar.auth import current_user
# We explicitly register the tagless endpoint to raise a 404, otherwise the PATCH
# handler on /api/nodes/<node_id> will return a 405 Method Not Allowed.
if not tag:
raise wz_exceptions.NotFound()
# Build the (cached) list of tagged nodes
agg_list = _tagged(tag)
for node in agg_list:
if node['properties'].get('duration_seconds'):
node['properties']['duration'] = datetime.timedelta(seconds=node['properties']['duration_seconds'])
if node.get('_created') is not None:
node['pretty_created'] = pretty_date(node['_created'])
# If the user is anonymous, no more information is needed and we return
if current_user.is_anonymous:
return jsonify(agg_list)
# If the user is authenticated, attach view_progress for video assets
view_progress = current_user.nodes['view_progress']
for node in agg_list:
node_id = str(node['_id'])
# View progress should be added only for nodes of type 'asset' and
# with content_type 'video', only if the video was already in the watched
# list for the current user.
if node_id in view_progress:
node['view_progress'] = view_progress[node_id]
return jsonify(agg_list)
def _tagged(tag: str):
"""Fetch all public nodes with the given tag.
This function is cached, see setup_app().
"""
nodes_coll = current_app.db('nodes')
agg = nodes_coll.aggregate([
{'$match': {'properties.tags': tag,
'_deleted': {'$ne': True}}},
# Only get nodes from public projects. This is done after matching the
# tagged nodes, because most likely nobody else will be able to tag
# nodes anyway.
{'$lookup': {
'from': 'projects',
'localField': 'project',
'foreignField': '_id',
'as': '_project',
}},
{'$unwind': '$_project'},
{'$match': {'_project.is_private': False}},
{'$addFields': {
'project._id': '$_project._id',
'project.name': '$_project.name',
'project.url': '$_project.url',
}},
# Don't return the entire project/file for each node.
{'$project': {'_project': False}},
{'$sort': {'_created': -1}}
])
return list(agg)
 def generate_and_store_short_code(node):
@ -211,7 +99,7 @@ def make_world_gettable(node):
     log.debug('Ensuring the world can read node %s', node_id)

     world_perms = set(node.get('permissions', {}).get('world', []))
-    world_perms.add('GET')
+    world_perms.add(u'GET')
     world_perms = list(world_perms)

     result = nodes_coll.update_one({'_id': node_id},
@ -223,52 +111,307 @@ def make_world_gettable(node):
                   node_id)


-def create_short_code(node) -> str:
+def create_short_code(node):
     """Generates a new 'short code' for the node."""

-    import secrets
-
     length = current_app.config['SHORT_CODE_LENGTH']
-
-    # Base64 encoding will expand it a bit, so we'll cut that off later.
-    # It's a good idea to start with enough bytes, though.
-    bits = secrets.token_bytes(length)
-    short_code = base64.b64encode(bits, altchars=b'xy').rstrip(b'=')
-    short_code = short_code[:length].decode('ascii')
+    bits = rsa.randnum.read_random_bits(32)
+    short_code = base64.b64encode(bits, altchars='xy').rstrip('=')
+    short_code = short_code[:length]

     return short_code


-def setup_app(app, url_prefix):
-    global _tagged
-
-    cached = app.cache.memoize(timeout=300)
-    _tagged = cached(_tagged)
+def short_link_info(short_code):
+    """Returns the short link info in a dict."""

+    short_link = urlparse.urljoin(current_app.config['SHORT_LINK_BASE_URL'], short_code)

+    return {
+        'short_code': short_code,
+        'short_link': short_link,
+    }
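
Both variants of create_short_code boil down to the same recipe: random bytes, base64 with substitute characters instead of '+' and '/', strip the padding, truncate. A standalone sketch of the secrets-based variant, assuming SHORT_CODE_LENGTH is 6:

    import base64
    import secrets

    SHORT_CODE_LENGTH = 6  # assumed value of current_app.config['SHORT_CODE_LENGTH']

    bits = secrets.token_bytes(SHORT_CODE_LENGTH)
    short_code = base64.b64encode(bits, altchars=b'xy').rstrip(b'=')
    short_code = short_code[:SHORT_CODE_LENGTH].decode('ascii')
    print(short_code)  # e.g. 'fXk3xy': six URL-friendly characters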
def item_parse_attachments(response):
"""Before returning a response, check if the 'attachments' property is
defined. If yes, load the file (for the moment only images) in the required
variation, get the link and build a Markdown representation. Search in the
'field' specified in the attachment and replace the 'slug' tag with the
generated link.
"""
attachments = response.get('properties', {}).get('attachments', None)
if not attachments:
return
files_collection = current_app.data.driver.db['files']
for attachment in attachments:
# Make a list from the property path
field_name_path = attachment['field'].split('.')
# This currently allow to access only properties inside of
# the properties property
if len(field_name_path) > 1:
field_content = response[field_name_path[0]][field_name_path[1]]
# This is for the "normal" first level property
else:
field_content = response[field_name_path[0]]
for af in attachment['files']:
slug = af['slug']
slug_tag = "[{0}]".format(slug)
f = files_collection.find_one({'_id': ObjectId(af['file'])})
if f is None:
af['file'] = None
continue
size = f['size'] if 'size' in f else 'l'
# Get the correct variation from the file
file_storage.ensure_valid_link(f)
thumbnail = next((item for item in f['variations'] if
item['size'] == size), None)
# Build Markdown img string
l = '![{0}]({1} "{2}")'.format(slug, thumbnail['link'], f['name'])
# Parse the content of the file and replace the attachment
# tag with the actual image link
field_content = field_content.replace(slug_tag, l)
# Apply the parsed value back to the property. See above for
# clarifications on how this is done.
if len(field_name_path) > 1:
response[field_name_path[0]][field_name_path[1]] = field_content
else:
response[field_name_path[0]] = field_content
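
The slug substitution in item_parse_attachments is plain string replacement; reduced to its essentials, with an invented slug, link and file name:

    slug = 'header-image'
    slug_tag = '[{0}]'.format(slug)
    link = 'https://example.com/storage/header-image-l.jpg'   # assumed variation link
    markdown_img = '![{0}]({1} "{2}")'.format(slug, link, 'header-image.jpg')

    field_content = 'Intro text.\n\n[header-image]\n\nMore text.'
    field_content = field_content.replace(slug_tag, markdown_img)
    # -> the [header-image] tag is now an inline Markdown image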
def resource_parse_attachments(response):
for item in response['_items']:
item_parse_attachments(item)
def before_replacing_node(item, original):
check_permissions('nodes', original, 'PUT')
update_file_name(item)
def after_replacing_node(item, original):
"""Push an update to the Algolia index when a node item is updated. If the
project is private, prevent public indexing.
"""
projects_collection = current_app.data.driver.db['projects']
project = projects_collection.find_one({'_id': item['project']})
if project.get('is_private', False):
# Skip index updating and return
return
from algoliasearch.client import AlgoliaException
status = item['properties'].get('status', 'unpublished')
if status == 'published':
try:
algolia_index_node_save(item)
except AlgoliaException as ex:
log.warning('Unable to push node info to Algolia for node %s; %s',
item.get('_id'), ex)
else:
try:
algolia_index_node_delete(item)
except AlgoliaException as ex:
log.warning('Unable to delete node info to Algolia for node %s; %s',
item.get('_id'), ex)
def before_inserting_nodes(items):
"""Before inserting a node in the collection we check if the user is allowed
and we append the project id to it.
"""
nodes_collection = current_app.data.driver.db['nodes']
def find_parent_project(node):
"""Recursive function that finds the ultimate parent of a node."""
if node and 'parent' in node:
parent = nodes_collection.find_one({'_id': node['parent']})
return find_parent_project(parent)
if node:
return node
else:
return None
for item in items:
check_permissions('nodes', item, 'POST')
if 'parent' in item and 'project' not in item:
parent = nodes_collection.find_one({'_id': item['parent']})
project = find_parent_project(parent)
if project:
item['project'] = project['_id']
# Default the 'user' property to the current user.
item.setdefault('user', g.current_user['user_id'])
def after_inserting_nodes(items):
for item in items:
# Skip subscriptions for first level items (since the context is not a
# node, but a project).
# TODO: support should be added for mixed context
if 'parent' not in item:
return
context_object_id = item['parent']
if item['node_type'] == 'comment':
nodes_collection = current_app.data.driver.db['nodes']
parent = nodes_collection.find_one({'_id': item['parent']})
# Always subscribe to the parent node
activity_subscribe(item['user'], 'node', item['parent'])
if parent['node_type'] == 'comment':
# If the parent is a comment, we provide its own parent as
# context. We do this in order to point the user to an asset
# or group when viewing the notification.
verb = 'replied'
context_object_id = parent['parent']
# Subscribe to the parent of the parent comment (post or group)
activity_subscribe(item['user'], 'node', parent['parent'])
else:
activity_subscribe(item['user'], 'node', item['_id'])
verb = 'commented'
else:
verb = 'posted'
activity_subscribe(item['user'], 'node', item['_id'])
activity_object_add(
item['user'],
verb,
'node',
item['_id'],
'node',
context_object_id
)
def deduct_content_type(node_doc, original=None):
"""Deduct the content type from the attached file, if any."""
if node_doc['node_type'] != 'asset':
log.debug('deduct_content_type: called on node type %r, ignoring', node_doc['node_type'])
return
node_id = node_doc.get('_id')
try:
file_id = ObjectId(node_doc['properties']['file'])
except KeyError:
if node_id is None:
# Creation of a file-less node is allowed, but updates aren't.
return
log.warning('deduct_content_type: Asset without properties.file, rejecting.')
raise wz_exceptions.UnprocessableEntity('Missing file property for asset node')
files = current_app.data.driver.db['files']
file_doc = files.find_one({'_id': file_id},
{'content_type': 1})
if not file_doc:
log.warning('deduct_content_type: Node %s refers to non-existing file %s, rejecting.',
node_id, file_id)
raise wz_exceptions.UnprocessableEntity('File property refers to non-existing file')
# Guess the node content type from the file content type
file_type = file_doc['content_type']
if file_type.startswith('video/'):
content_type = 'video'
elif file_type.startswith('image/'):
content_type = 'image'
else:
content_type = 'file'
node_doc['properties']['content_type'] = content_type
def nodes_deduct_content_type(nodes):
for node in nodes:
deduct_content_type(node)
def before_returning_node(node):
# Run validation process, since GET on nodes entry point is public
check_permissions('nodes', node, 'GET', append_allowed_methods=True)
# Embed short_link_info if the node has a short_code.
short_code = node.get('short_code')
if short_code:
node['short_link'] = short_link_info(short_code)['short_link']
def before_returning_nodes(nodes):
for node in nodes['_items']:
before_returning_node(node)
def node_set_default_picture(node, original=None):
"""Uses the image of an image asset or colour map of texture node as picture."""
if node.get('picture'):
log.debug('Node %s already has a picture, not overriding', node.get('_id'))
return
node_type = node.get('node_type')
props = node.get('properties', {})
content = props.get('content_type')
if node_type == 'asset' and content == 'image':
image_file_id = props.get('file')
elif node_type == 'texture':
# Find the colour map, defaulting to the first image map available.
image_file_id = None
for image in props.get('files', []):
if image_file_id is None or image.get('map_type') == u'color':
image_file_id = image.get('file')
else:
log.debug('Not setting default picture on node type %s content type %s',
node_type, content)
return
if image_file_id is None:
log.debug('Nothing to set the picture to.')
return
log.debug('Setting default picture for node %s to %s', node.get('_id'), image_file_id)
node['picture'] = image_file_id
def nodes_set_default_picture(nodes):
for node in nodes:
node_set_default_picture(node)
def after_deleting_node(item):
from algoliasearch.client import AlgoliaException
try:
algolia_index_node_delete(item)
except AlgoliaException as ex:
log.warning('Unable to delete node info to Algolia for node %s; %s',
item.get('_id'), ex)
+def setup_app(app, url_prefix):
     from . import patch
     patch.setup_app(app, url_prefix=url_prefix)

-    app.on_fetched_item_nodes += eve_hooks.before_returning_node
-    app.on_fetched_resource_nodes += eve_hooks.before_returning_nodes
-    app.on_replace_nodes += eve_hooks.before_replacing_node
-    app.on_replace_nodes += eve_hooks.texture_sort_files
-    app.on_replace_nodes += eve_hooks.deduct_content_type_and_duration
-    app.on_replace_nodes += eve_hooks.node_set_default_picture
-    app.on_replaced_nodes += eve_hooks.after_replacing_node
-    app.on_insert_nodes += eve_hooks.before_inserting_nodes
-    app.on_insert_nodes += eve_hooks.nodes_deduct_content_type_and_duration
-    app.on_insert_nodes += eve_hooks.nodes_set_default_picture
-    app.on_insert_nodes += eve_hooks.textures_sort_files
-    app.on_inserted_nodes += eve_hooks.after_inserting_nodes
-    app.on_update_nodes += eve_hooks.texture_sort_files
-    app.on_delete_item_nodes += eve_hooks.before_deleting_node
-    app.on_deleted_item_nodes += eve_hooks.after_deleting_node
+    app.on_fetched_item_nodes += before_returning_node
+    app.on_fetched_resource_nodes += before_returning_nodes
+    app.on_fetched_item_nodes += item_parse_attachments
+    app.on_fetched_resource_nodes += resource_parse_attachments
+    app.on_replace_nodes += before_replacing_node
+    app.on_replace_nodes += deduct_content_type
+    app.on_replace_nodes += node_set_default_picture
+    app.on_replaced_nodes += after_replacing_node
+    app.on_insert_nodes += before_inserting_nodes
+    app.on_insert_nodes += nodes_deduct_content_type
+    app.on_insert_nodes += nodes_set_default_picture
+    app.on_inserted_nodes += after_inserting_nodes
+    app.on_deleted_item_nodes += after_deleting_node

     app.register_api_blueprint(blueprint, url_prefix=url_prefix)
-
-    activities.setup_app(app)

View File

@ -1,43 +0,0 @@
from eve.methods import get
import pillar.api.users.avatar
def for_node(node_id):
activities, _, _, status, _ =\
get('activities',
{
'$or': [
{'object_type': 'node',
'object': node_id},
{'context_object_type': 'node',
'context_object': node_id},
],
},)
for act in activities['_items']:
act['actor_user'] = _user_info(act['actor_user'])
return activities
def _user_info(user_id):
users, _, _, status, _ = get('users', {'_id': user_id})
if len(users['_items']) > 0:
user = users['_items'][0]
user['avatar'] = pillar.api.users.avatar.url(user)
public_fields = {'full_name', 'username', 'avatar'}
for field in list(user.keys()):
if field not in public_fields:
del user[field]
return user
return {}
def setup_app(app):
global _user_info
decorator = app.cache.memoize(timeout=300, make_name='%s.public_user_info' % __name__)
_user_info = decorator(_user_info)

View File

@ -1,302 +0,0 @@
import logging
from datetime import datetime
import pymongo
import typing
import bson
import attr
import werkzeug.exceptions as wz_exceptions
import pillar
from pillar import current_app, shortcodes
import pillar.api.users.avatar
from pillar.api.nodes.custom.comment import patch_comment
from pillar.api.utils import jsonify
from pillar.auth import current_user
import pillar.markdown
log = logging.getLogger(__name__)
@attr.s(auto_attribs=True)
class UserDO:
id: str
full_name: str
avatar_url: str
badges_html: str
@attr.s(auto_attribs=True)
class CommentPropertiesDO:
attachments: typing.Dict
rating_positive: int = 0
rating_negative: int = 0
@attr.s(auto_attribs=True)
class CommentDO:
id: bson.ObjectId
parent: bson.ObjectId
project: bson.ObjectId
user: UserDO
msg_html: str
msg_markdown: str
properties: CommentPropertiesDO
created: datetime
updated: datetime
etag: str
replies: typing.List['CommentDO'] = []
current_user_rating: typing.Optional[bool] = None
@attr.s(auto_attribs=True)
class CommentTreeDO:
node_id: bson.ObjectId
project: bson.ObjectId
nbr_of_comments: int = 0
comments: typing.List[CommentDO] = []
def _get_markdowned_html(document: dict, field_name: str) -> str:
cache_field_name = pillar.markdown.cache_field_name(field_name)
html = document.get(cache_field_name)
if html is None:
markdown_src = document.get(field_name) or ''
html = pillar.markdown.markdown(markdown_src)
return html
def jsonify_data_object(data_object: attr):
return jsonify(
attr.asdict(data_object,
recurse=True)
)
class CommentTreeBuilder:
def __init__(self, node_id: bson.ObjectId):
self.node_id = node_id
self.nbr_of_Comments: int = 0
def build(self) -> CommentTreeDO:
enriched_comments = self.child_comments(
self.node_id,
sort={'properties.rating_positive': pymongo.DESCENDING,
'_created': pymongo.DESCENDING})
project_id = self.get_project_id()
return CommentTreeDO(
node_id=self.node_id,
project=project_id,
nbr_of_comments=self.nbr_of_Comments,
comments=enriched_comments
)
def child_comments(self, node_id: bson.ObjectId, sort: dict) -> typing.List[CommentDO]:
raw_comments = self.mongodb_comments(node_id, sort)
return [self.enrich(comment) for comment in raw_comments]
def enrich(self, mongo_comment: dict) -> CommentDO:
self.nbr_of_Comments += 1
comment = to_comment_data_object(mongo_comment)
comment.replies = self.child_comments(mongo_comment['_id'],
sort={'_created': pymongo.ASCENDING})
return comment
def get_project_id(self):
nodes_coll = current_app.db('nodes')
result = nodes_coll.find_one({'_id': self.node_id})
return result['project']
@classmethod
def mongodb_comments(cls, node_id: bson.ObjectId, sort: dict) -> typing.Iterator:
nodes_coll = current_app.db('nodes')
return nodes_coll.aggregate([
{'$match': {'node_type': 'comment',
'_deleted': {'$ne': True},
'properties.status': 'published',
'parent': node_id}},
{'$lookup': {"from": "users",
"localField": "user",
"foreignField": "_id",
"as": "user"}},
{'$unwind': {'path': "$user"}},
{'$sort': sort},
])
def get_node_comments(node_id: bson.ObjectId):
comments_tree = CommentTreeBuilder(node_id).build()
return jsonify_data_object(comments_tree)
def post_node_comment(parent_id: bson.ObjectId, markdown_msg: str, attachments: dict):
parent_node = find_node_or_raise(parent_id,
'User %s tried to update comment with bad parent_id %s',
current_user.objectid,
parent_id)
is_reply = parent_node['node_type'] == 'comment'
comment = dict(
parent=parent_id,
project=parent_node['project'],
name='Comment',
user=current_user.objectid,
node_type='comment',
properties=dict(
content=markdown_msg,
status='published',
is_reply=is_reply,
confidence=0,
rating_positive=0,
rating_negative=0,
attachments=attachments,
),
permissions=dict(
users=[dict(
user=current_user.objectid,
methods=['PUT'])
]
)
)
r, _, _, status = current_app.post_internal('nodes', comment)
if status != 201:
log.warning('Unable to post comment on %s as %s: %s',
parent_id, current_user.objectid, r)
raise wz_exceptions.InternalServerError('Unable to create comment')
comment_do = get_comment(parent_id, r['_id'])
return jsonify_data_object(comment_do), 201
def find_node_or_raise(node_id, *args):
nodes_coll = current_app.db('nodes')
node_to_comment = nodes_coll.find_one({
'_id': node_id,
'_deleted': {'$ne': True},
})
if not node_to_comment:
log.warning(args)
raise wz_exceptions.UnprocessableEntity()
return node_to_comment
def patch_node_comment(parent_id: bson.ObjectId,
comment_id: bson.ObjectId,
markdown_msg: str,
attachments: dict):
_, _ = find_parent_and_comment_or_raise(parent_id, comment_id)
patch = dict(
op='edit',
content=markdown_msg,
attachments=attachments
)
json_result = patch_comment(comment_id, patch)
if json_result.json['result'] != 200:
raise wz_exceptions.InternalServerError('Failed to update comment')
comment_do = get_comment(parent_id, comment_id)
return jsonify_data_object(comment_do), 200
def find_parent_and_comment_or_raise(parent_id, comment_id):
parent = find_node_or_raise(parent_id,
'User %s tried to update comment with bad parent_id %s',
current_user.objectid,
parent_id)
comment = find_node_or_raise(comment_id,
'User %s tried to update comment with bad id %s',
current_user.objectid,
comment_id)
validate_comment_parent_relation(comment, parent)
return parent, comment
def validate_comment_parent_relation(comment, parent):
if comment['parent'] != parent['_id']:
log.warning('User %s tried to update comment with bad parent/comment pair.'
' parent_id: %s comment_id: %s',
current_user.objectid, parent['_id'], comment['_id'])
raise wz_exceptions.BadRequest()
def get_comment(parent_id: bson.ObjectId, comment_id: bson.ObjectId) -> CommentDO:
nodes_coll = current_app.db('nodes')
mongo_comment = list(nodes_coll.aggregate([
{'$match': {'node_type': 'comment',
'_deleted': {'$ne': True},
'properties.status': 'published',
'parent': parent_id,
'_id': comment_id}},
{'$lookup': {"from": "users",
"localField": "user",
"foreignField": "_id",
"as": "user"}},
{'$unwind': {'path': "$user"}},
]))[0]
return to_comment_data_object(mongo_comment)
def to_comment_data_object(mongo_comment: dict) -> CommentDO:
def current_user_rating():
if current_user.is_authenticated:
for rating in mongo_comment['properties'].get('ratings', ()):
if str(rating['user']) != current_user.objectid:
continue
return rating['is_positive']
return None
user_dict = mongo_comment['user']
user = UserDO(
id=str(mongo_comment['user']['_id']),
full_name=user_dict['full_name'],
avatar_url=pillar.api.users.avatar.url(user_dict),
badges_html=user_dict.get('badges', {}).get('html', '')
)
html = _get_markdowned_html(mongo_comment['properties'], 'content')
html = shortcodes.render_commented(html, context=mongo_comment['properties'])
return CommentDO(
id=mongo_comment['_id'],
parent=mongo_comment['parent'],
project=mongo_comment['project'],
user=user,
msg_html=html,
msg_markdown=mongo_comment['properties']['content'],
current_user_rating=current_user_rating(),
created=mongo_comment['_created'],
updated=mongo_comment['_updated'],
etag=mongo_comment['_etag'],
properties=CommentPropertiesDO(
attachments=mongo_comment['properties'].get('attachments', {}),
rating_positive=mongo_comment['properties']['rating_positive'],
rating_negative=mongo_comment['properties']['rating_negative']
)
)
def post_node_comment_vote(parent_id: bson.ObjectId, comment_id: bson.ObjectId, vote: int):
normalized_vote = min(max(vote, -1), 1)
_, _ = find_parent_and_comment_or_raise(parent_id, comment_id)
actions = {
1: 'upvote',
0: 'revoke',
-1: 'downvote',
}
patch = dict(
op=actions[normalized_vote]
)
json_result = patch_comment(comment_id, patch)
if json_result.json['_status'] != 'OK':
raise wz_exceptions.InternalServerError('Failed to vote on comment')
comment_do = get_comment(parent_id, comment_id)
return jsonify_data_object(comment_do), 200
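
post_node_comment_vote clamps the incoming vote before mapping it to a PATCH operation; in isolation (the helper name below is invented):

    def vote_to_operation(vote: int) -> str:
        normalized_vote = min(max(vote, -1), 1)
        actions = {1: 'upvote', 0: 'revoke', -1: 'downvote'}
        return actions[normalized_vote]

    assert vote_to_operation(5) == 'upvote'
    assert vote_to_operation(0) == 'revoke'
    assert vote_to_operation(-2) == 'downvote'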

View File

@ -1,55 +1,33 @@
"""PATCH support for comment nodes.""" """PATCH support for comment nodes."""
import logging import logging
from flask import current_app
import werkzeug.exceptions as wz_exceptions import werkzeug.exceptions as wz_exceptions
from flask import current_app
from pillar.api.utils import authorization, authentication, jsonify, remove_private_keys from pillar.api.utils import authorization, authentication, jsonify
from . import register_patch_handler from . import register_patch_handler
log = logging.getLogger(__name__) log = logging.getLogger(__name__)
COMMENT_VOTING_OPS = {'upvote', 'downvote', 'revoke'} ROLES_FOR_COMMENT_VOTING = {u'subscriber', u'demo'}
VALID_COMMENT_OPERATIONS = COMMENT_VOTING_OPS.union({'edit'}) VALID_COMMENT_OPERATIONS = {u'upvote', u'downvote', u'revoke'}
@register_patch_handler('comment') @register_patch_handler(u'comment')
def patch_comment(node_id, patch): def patch_comment(node_id, patch):
assert_is_valid_patch(node_id, patch) assert_is_valid_patch(node_id, patch)
user_id = authentication.current_user_id() user_id = authentication.current_user_id()
if patch['op'] in COMMENT_VOTING_OPS: # Find the node
result, node = vote_comment(user_id, node_id, patch)
else:
assert patch['op'] == 'edit', 'Invalid patch operation %s' % patch['op']
result, node = edit_comment(user_id, node_id, patch)
return jsonify({'_status': 'OK',
'result': result,
'properties': node['properties']
})
def vote_comment(user_id, node_id, patch):
"""Performs a voting operation."""
# Find the node. Includes a query on the properties.ratings array so
# that we only get the current user's rating.
nodes_coll = current_app.data.driver.db['nodes'] nodes_coll = current_app.data.driver.db['nodes']
node_query = {'_id': node_id, node_query = {'_id': node_id,
'$or': [{'properties.ratings.$.user': {'$exists': False}}, '$or': [{'properties.ratings.$.user': {'$exists': False}},
{'properties.ratings.$.user': user_id}]} {'properties.ratings.$.user': user_id}]}
node = nodes_coll.find_one(node_query, node = nodes_coll.find_one(node_query,
projection={'properties': 1, 'user': 1}) projection={'properties': 1})
if node is None: if node is None:
log.warning('User %s wanted to patch non-existing node %s' % (user_id, node_id)) log.warning('How can the node not be found?')
raise wz_exceptions.NotFound('Node %s not found' % node_id) raise wz_exceptions.NotFound('Node %s not found' % node_id)
# We don't allow the user to down/upvote their own nodes.
if user_id == node['user']:
raise wz_exceptions.Forbidden('You cannot vote on your own node')
props = node['properties'] props = node['properties']
# Find the current rating (if any) # Find the current rating (if any)
@ -97,14 +75,13 @@ def vote_comment(user_id, node_id, patch):
return update return update
actions = { actions = {
'upvote': upvote, u'upvote': upvote,
'downvote': downvote, u'downvote': downvote,
'revoke': revoke, u'revoke': revoke,
} }
action = actions[patch['op']] action = actions[patch['op']]
mongo_update = action() mongo_update = action()
nodes_coll = current_app.data.driver.db['nodes']
if mongo_update: if mongo_update:
log.info('Running %s', mongo_update) log.info('Running %s', mongo_update)
if rating: if rating:
@ -120,50 +97,10 @@ def vote_comment(user_id, node_id, patch):
projection={'properties.rating_positive': 1, projection={'properties.rating_positive': 1,
'properties.rating_negative': 1}) 'properties.rating_negative': 1})
return result, node return jsonify({'_status': 'OK',
'result': result,
'properties': node['properties']
def edit_comment(user_id, node_id, patch):
"""Edits a single comment.
Doesn't do permission checking; users are allowed to edit their own
comment, and this is not something you want to revoke anyway. Admins
can edit all comments.
"""
# Find the node. We need to fetch some more info than we use here, so that
# we can pass this stuff to Eve's patch_internal; that way the validation &
# authorisation system has enough info to work.
nodes_coll = current_app.data.driver.db['nodes']
node = nodes_coll.find_one(node_id)
if node is None:
log.warning('User %s wanted to patch non-existing node %s' % (user_id, node_id))
raise wz_exceptions.NotFound('Node %s not found' % node_id)
if node['user'] != user_id and not authorization.user_has_role('admin'):
raise wz_exceptions.Forbidden('You can only edit your own comments.')
node = remove_private_keys(node)
node['properties']['content'] = patch['content']
node['properties']['attachments'] = patch.get('attachments', {})
# Use Eve to PUT this node, as that also updates the etag and we want to replace attachments.
r, _, _, status = current_app.put_internal('nodes',
node,
concurrency_check=False,
_id=node_id)
if status != 200:
log.error('Error %i editing comment %s for user %s: %s',
status, node_id, user_id, r)
raise wz_exceptions.InternalServerError('Internal error %i from Eve' % status)
else:
log.info('User %s edited comment %s', user_id, node_id)
# Fetch the new content, so the client can show these without querying again.
node = nodes_coll.find_one(node_id, projection={
'properties.content': 1,
'properties._content_html': 1,
}) })
return status, node
def assert_is_valid_patch(node_id, patch): def assert_is_valid_patch(node_id, patch):
@ -178,12 +115,8 @@ def assert_is_valid_patch(node_id, patch):
raise wz_exceptions.BadRequest('Operation should be one of %s', raise wz_exceptions.BadRequest('Operation should be one of %s',
', '.join(VALID_COMMENT_OPERATIONS)) ', '.join(VALID_COMMENT_OPERATIONS))
if op not in COMMENT_VOTING_OPS:
# We can't check here, we need the node owner for that.
return
# See whether the user is allowed to patch # See whether the user is allowed to patch
if authorization.user_matches_roles(current_app.config['ROLES_FOR_COMMENT_VOTING']): if authorization.user_matches_roles(ROLES_FOR_COMMENT_VOTING):
log.debug('User is allowed to upvote/downvote comment') log.debug('User is allowed to upvote/downvote comment')
return return

View File

@ -1,336 +0,0 @@
import collections
import functools
import logging
import urllib.parse
from bson import ObjectId
from werkzeug import exceptions as wz_exceptions
from pillar import current_app
from pillar.api.activities import activity_subscribe, activity_object_add
from pillar.api.file_storage_backends.gcs import update_file_name
from pillar.api.node_types import PILLAR_NAMED_NODE_TYPES
from pillar.api.utils import random_etag
from pillar.api.utils.authorization import check_permissions
log = logging.getLogger(__name__)
def before_returning_node(node):
# Run validation process, since GET on nodes entry point is public
check_permissions('nodes', node, 'GET', append_allowed_methods=True)
# Embed short_link_info if the node has a short_code.
short_code = node.get('short_code')
if short_code:
node['short_link'] = short_link_info(short_code)['short_link']
def before_returning_nodes(nodes):
for node in nodes['_items']:
before_returning_node(node)
def only_for_node_type_decorator(*required_node_type_names):
"""Returns a decorator that checks its first argument's node type.
If the node type is not of the required node type, returns None,
otherwise calls the wrapped function.
>>> deco = only_for_node_type_decorator('comment')
>>> @deco
... def handle_comment(node): pass
>>> deco = only_for_node_type_decorator('comment', 'post')
>>> @deco
... def handle_comment_or_post(node): pass
"""
# Convert to a set for efficient 'x in required_node_type_names' queries.
required_node_type_names = set(required_node_type_names)
def only_for_node_type(wrapped):
@functools.wraps(wrapped)
def wrapper(node, *args, **kwargs):
if node.get('node_type') not in required_node_type_names:
return
return wrapped(node, *args, **kwargs)
return wrapper
only_for_node_type.__doc__ = "Decorator, immediately returns when " \
"the first argument is not of type %s." % required_node_type_names
return only_for_node_type
def before_replacing_node(item, original):
check_permissions('nodes', original, 'PUT')
update_file_name(item)
def after_replacing_node(item, original):
"""Push an update to the Algolia index when a node item is updated. If the
project is private, prevent public indexing.
"""
from pillar.celery import search_index_tasks as index
projects_collection = current_app.data.driver.db['projects']
project = projects_collection.find_one({'_id': item['project']})
if project.get('is_private', False):
# Skip index updating and return
return
status = item['properties'].get('status', 'unpublished')
node_id = str(item['_id'])
if status == 'published':
index.node_save.delay(node_id)
else:
index.node_delete.delay(node_id)
def before_inserting_nodes(items):
"""Before inserting a node in the collection we check if the user is allowed
and we append the project id to it.
"""
from pillar.auth import current_user
nodes_collection = current_app.data.driver.db['nodes']
def find_parent_project(node):
"""Recursive function that finds the ultimate parent of a node."""
if node and 'parent' in node:
parent = nodes_collection.find_one({'_id': node['parent']})
return find_parent_project(parent)
if node:
return node
else:
return None
for item in items:
check_permissions('nodes', item, 'POST')
if 'parent' in item and 'project' not in item:
parent = nodes_collection.find_one({'_id': item['parent']})
project = find_parent_project(parent)
if project:
item['project'] = project['_id']
# Default the 'user' property to the current user.
item.setdefault('user', current_user.user_id)
def get_comment_verb_and_context_object_id(comment):
nodes_collection = current_app.data.driver.db['nodes']
verb = 'commented'
parent = nodes_collection.find_one({'_id': comment['parent']})
context_object_id = comment['parent']
while parent['node_type'] == 'comment':
# If the parent is a comment, we provide its own parent as
# context. We do this in order to point the user to an asset
# or group when viewing the notification.
verb = 'replied'
context_object_id = parent['parent']
parent = nodes_collection.find_one({'_id': parent['parent']})
return verb, context_object_id
def after_inserting_nodes(items):
for item in items:
context_object_id = None
# TODO: support should be added for mixed context
if item['node_type'] in PILLAR_NAMED_NODE_TYPES:
activity_subscribe(item['user'], 'node', item['_id'])
verb = 'posted'
context_object_id = item.get('parent')
if item['node_type'] == 'comment':
# Always subscribe to the parent node
activity_subscribe(item['user'], 'node', item['parent'])
verb, context_object_id = get_comment_verb_and_context_object_id(item)
# Subscribe to the parent of the parent comment (post or group)
activity_subscribe(item['user'], 'node', context_object_id)
if context_object_id and item['node_type'] in PILLAR_NAMED_NODE_TYPES:
# * Skip activity for first level items (since the context is not a
# node, but a project).
# * Don't automatically create activities for non-Pillar node types,
# as we don't know what would be a suitable verb (among other things).
activity_object_add(
item['user'],
verb,
'node',
item['_id'],
'node',
context_object_id
)
def deduct_content_type_and_duration(node_doc, original=None):
"""Deduct the content type from the attached file, if any."""
if node_doc['node_type'] != 'asset':
log.debug('deduct_content_type: called on node type %r, ignoring', node_doc['node_type'])
return
node_id = node_doc.get('_id')
try:
file_id = ObjectId(node_doc['properties']['file'])
except KeyError:
if node_id is None:
# Creation of a file-less node is allowed, but updates aren't.
return
log.warning('deduct_content_type: Asset without properties.file, rejecting.')
raise wz_exceptions.UnprocessableEntity('Missing file property for asset node')
files = current_app.data.driver.db['files']
file_doc = files.find_one({'_id': file_id},
{'content_type': 1,
'variations': 1})
if not file_doc:
log.warning('deduct_content_type: Node %s refers to non-existing file %s, rejecting.',
node_id, file_id)
raise wz_exceptions.UnprocessableEntity('File property refers to non-existing file')
# Guess the node content type from the file content type
file_type = file_doc['content_type']
if file_type.startswith('video/'):
content_type = 'video'
elif file_type.startswith('image/'):
content_type = 'image'
else:
content_type = 'file'
node_doc['properties']['content_type'] = content_type
if content_type == 'video':
duration = file_doc['variations'][0].get('duration')
if duration:
node_doc['properties']['duration_seconds'] = duration
else:
log.warning('Video file %s has no duration', file_id)
def nodes_deduct_content_type_and_duration(nodes):
for node in nodes:
deduct_content_type_and_duration(node)
def node_set_default_picture(node, original=None):
"""Uses the image of an image asset or colour map of texture node as picture."""
if node.get('picture'):
log.debug('Node %s already has a picture, not overriding', node.get('_id'))
return
node_type = node.get('node_type')
props = node.get('properties', {})
content = props.get('content_type')
if node_type == 'asset' and content == 'image':
image_file_id = props.get('file')
elif node_type == 'texture':
# Find the colour map, defaulting to the first image map available.
image_file_id = None
for image in props.get('files', []):
if image_file_id is None or image.get('map_type') == 'color':
image_file_id = image.get('file')
else:
log.debug('Not setting default picture on node type %s content type %s',
node_type, content)
return
if image_file_id is None:
log.debug('Nothing to set the picture to.')
return
log.debug('Setting default picture for node %s to %s', node.get('_id'), image_file_id)
node['picture'] = image_file_id
def nodes_set_default_picture(nodes):
for node in nodes:
node_set_default_picture(node)
def before_deleting_node(node: dict):
check_permissions('nodes', node, 'DELETE')
remove_project_references(node)
def remove_project_references(node):
project_id = node.get('project')
if not project_id:
return
node_id = node['_id']
log.info('Removing references to node %s from project %s', node_id, project_id)
projects_col = current_app.db('projects')
project = projects_col.find_one({'_id': project_id})
updates = collections.defaultdict(dict)
if project.get('header_node') == node_id:
updates['$unset']['header_node'] = node_id
project_reference_lists = ('nodes_blog', 'nodes_featured', 'nodes_latest')
for list_name in project_reference_lists:
references = project.get(list_name)
if not references:
continue
try:
references.remove(node_id)
except ValueError:
continue
updates['$set'][list_name] = references
if not updates:
return
updates['$set']['_etag'] = random_etag()
result = projects_col.update_one({'_id': project_id}, updates)
if result.modified_count != 1:
log.warning('Removing references to node %s from project %s resulted in %d modified documents (expected 1)',
node_id, project_id, result.modified_count)
def after_deleting_node(item):
from pillar.celery import search_index_tasks as index
index.node_delete.delay(str(item['_id']))
only_for_textures = only_for_node_type_decorator('texture')
@only_for_textures
def texture_sort_files(node, original=None):
"""Sort files alphabetically by map type, with colour map first."""
try:
files = node['properties']['files']
except KeyError:
return
# Sort the map types alphabetically, ensuring 'color' comes first.
as_dict = {f['map_type']: f for f in files}
types = sorted(as_dict.keys(), key=lambda k: '\0' if k == 'color' else k)
node['properties']['files'] = [as_dict[map_type] for map_type in types]
def textures_sort_files(nodes):
for node in nodes:
texture_sort_files(node)
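
The '\0' sort key in texture_sort_files is only there to force 'color' ahead of the otherwise alphabetical map types; in isolation:

    files = [{'map_type': 'specular'}, {'map_type': 'color'}, {'map_type': 'bump'}]
    as_dict = {f['map_type']: f for f in files}
    types = sorted(as_dict.keys(), key=lambda k: '\0' if k == 'color' else k)
    print(types)  # ['color', 'bump', 'specular']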
def short_link_info(short_code):
"""Returns the short link info in a dict."""
short_link = urllib.parse.urljoin(
current_app.config['SHORT_LINK_BASE_URL'], short_code)
return {
'short_code': short_code,
'short_link': short_link,
}

View File

@ -1,110 +0,0 @@
"""Code for moving around nodes."""
import attr
import pymongo.database
from bson import ObjectId
from pillar import attrs_extra
import pillar.api.file_storage.moving
@attr.s
class NodeMover(object):
db = attr.ib(validator=attr.validators.instance_of(pymongo.database.Database))
skip_gcs = attr.ib(default=False, validator=attr.validators.instance_of(bool))
_log = attrs_extra.log('%s.NodeMover' % __name__)
def change_project(self, node, dest_proj):
"""Moves a node and children to a new project."""
assert isinstance(node, dict)
assert isinstance(dest_proj, dict)
for move_node in self._children(node):
self._change_project(move_node, dest_proj)
def _change_project(self, node, dest_proj):
"""Changes the project of a single node, non-recursively."""
node_id = node['_id']
proj_id = dest_proj['_id']
self._log.info('Moving node %s to project %s', node_id, proj_id)
# Find all files in the node.
moved_files = set()
self._move_files(moved_files, dest_proj, self._files(node.get('picture', None)))
self._move_files(moved_files, dest_proj, self._files(node['properties'], 'file'))
self._move_files(moved_files, dest_proj, self._files(node['properties'], 'files', 'file'))
self._move_files(moved_files, dest_proj,
self._files(node['properties'], 'attachments', 'files', 'file'))
# Switch the node's project after its files have been moved.
self._log.info('Switching node %s to project %s', node_id, proj_id)
nodes_coll = self.db['nodes']
update_result = nodes_coll.update_one({'_id': node_id},
{'$set': {'project': proj_id}})
if update_result.matched_count != 1:
raise RuntimeError(
'Unable to update node %s in MongoDB: matched_count=%i; modified_count=%i' % (
node_id, update_result.matched_count, update_result.modified_count))
def _move_files(self, moved_files, dest_proj, file_generator):
"""Tries to find all files from the given properties."""
for file_id in file_generator:
if file_id in moved_files:
continue
moved_files.add(file_id)
self.move_file(dest_proj, file_id)
def move_file(self, dest_proj, file_id):
"""Moves a single file to another project"""
self._log.info('Moving file %s to project %s', file_id, dest_proj['_id'])
pillar.api.file_storage.moving.move_to_bucket(file_id, dest_proj['_id'],
skip_storage=self.skip_gcs)
def _files(self, file_ref, *properties):
"""Yields file ObjectIDs."""
# Degenerate cases.
if not file_ref:
return
# Single ObjectID
if isinstance(file_ref, ObjectId):
assert not properties
yield file_ref
return
# List of ObjectIDs
if isinstance(file_ref, list):
for item in file_ref:
for subitem in self._files(item, *properties):
yield subitem
return
# Dict, use properties[0] as key
if isinstance(file_ref, dict):
try:
subref = file_ref[properties[0]]
except KeyError:
# Silently skip non-existing keys.
return
for subitem in self._files(subref, *properties[1:]):
yield subitem
return
raise TypeError('File ref is of type %s, not implemented' % type(file_ref))
def _children(self, node):
"""Generator, recursively yields the node and its children."""
yield node
nodes_coll = self.db['nodes']
for child in nodes_coll.find({'parent': node['_id']}):
# "yield from self.children(child)" was introduced in Python 3.3
for grandchild in self._children(child):
yield grandchild
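
Concretely, a call such as self._files(node['properties'], 'attachments', 'files', 'file') descends into dicts by the given keys and fans out over lists until it reaches ObjectIds; with an invented properties dict:

    from bson import ObjectId

    properties = {
        'attachments': [
            {'files': [{'file': ObjectId('55f338f92beb3300c4ff99fe')},
                       {'file': ObjectId('55f338f92beb3300c4ff99ff')}]},
        ],
    }
    # _files(properties, 'attachments', 'files', 'file') yields both ObjectIds:
    # dict['attachments'] -> list -> dict['files'] -> list -> dict['file'] -> ObjectId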

View File

@ -1,444 +0,0 @@
"""Organization management.
Assumes role names that are given to users by organization membership
start with the string "org-".
"""
import logging
import typing
import attr
import bson
import flask
import werkzeug.exceptions as wz_exceptions
from pillar import attrs_extra, current_app
from pillar.api.utils import remove_private_keys, utcnow
class OrganizationError(Exception):
"""Superclass for all Organization-related errors."""
@attr.s
class NotEnoughSeats(OrganizationError):
"""Thrown when trying to add too many members to the organization."""
org_id = attr.ib(validator=attr.validators.instance_of(bson.ObjectId))
seat_count = attr.ib(validator=attr.validators.instance_of(int))
attempted_seat_count = attr.ib(validator=attr.validators.instance_of(int))
@attr.s
class OrgManager:
"""Organization manager.
Performs actions on an Organization. Does *NOT* test user permissions -- the caller
is responsible for that.
"""
_log = attrs_extra.log('%s.OrgManager' % __name__)
def create_new_org(self,
name: str,
admin_uid: bson.ObjectId,
seat_count: int,
*,
org_roles: typing.Iterable[str] = None) -> dict:
"""Creates a new Organization.
Returns the new organization document.
"""
assert isinstance(admin_uid, bson.ObjectId)
org_doc = {
'name': name,
'admin_uid': admin_uid,
'seat_count': seat_count,
}
if org_roles:
org_doc['org_roles'] = list(org_roles)
r, _, _, status = current_app.post_internal('organizations', org_doc)
if status != 201:
self._log.error('Error creating organization; status should be 201, not %i: %s',
status, r)
raise ValueError(f'Unable to create organization, status code {status}')
org_doc.update(r)
return org_doc
def assign_users(self,
org_id: bson.ObjectId,
emails: typing.List[str]) -> dict:
"""Assigns users to the organization.
Checks the seat count and throws a NotEnoughSeats exception when the
seat count is not sufficient to assign the requested users.
Users are looked up by email address, and known users are
automatically mapped.
:returns: the new organization document.
"""
self._log.info('Adding %i new members to organization %s', len(emails), org_id)
users_coll = current_app.db('users')
existing_user_docs = list(users_coll.find({'email': {'$in': emails}},
projection={'_id': 1, 'email': 1}))
unknown_users = set(emails) - {user['email'] for user in existing_user_docs}
existing_users = {user['_id'] for user in existing_user_docs}
return self._assign_users(org_id, unknown_users, existing_users)
def assign_single_user(self, org_id: bson.ObjectId, *, user_id: bson.ObjectId) -> dict:
"""Assigns a single, known user to the organization.
:returns: the new organization document.
"""
self._log.info('Adding new member %s to organization %s', user_id, org_id)
return self._assign_users(org_id, set(), {user_id})
def _assign_users(self, org_id: bson.ObjectId,
unknown_users: typing.Set[str],
existing_users: typing.Set[bson.ObjectId]) -> dict:
if self._log.isEnabledFor(logging.INFO):
self._log.info(' - found users: %s', ', '.join(str(uid) for uid in existing_users))
self._log.info(' - unknown users: %s', ', '.join(unknown_users))
org_doc = self._get_org(org_id)
# Compute the new members.
members = set(org_doc.get('members') or []) | existing_users
unknown_members = set(org_doc.get('unknown_members') or []) | unknown_users
# Make sure we don't exceed the current seat count.
new_seat_count = len(members) + len(unknown_members)
if new_seat_count > org_doc['seat_count']:
self._log.warning('assign_users(%s, ...): Trying to increase seats to %i, '
'but org only has %i seats.',
org_id, new_seat_count, org_doc['seat_count'])
raise NotEnoughSeats(org_id, org_doc['seat_count'], new_seat_count)
# Update the organization.
org_doc['members'] = list(members)
org_doc['unknown_members'] = list(unknown_members)
r, _, _, status = current_app.put_internal('organizations',
remove_private_keys(org_doc),
_id=org_id)
if status != 200:
self._log.error('Error updating organization; status should be 200, not %i: %s',
status, r)
raise ValueError(f'Unable to update organization, status code {status}')
org_doc.update(r)
# Update the roles for the affected members
for uid in existing_users:
self.refresh_roles(uid)
return org_doc
def assign_admin(self, org_id: bson.ObjectId, *, user_id: bson.ObjectId):
"""Assigns a user as admin user for this organization."""
assert isinstance(org_id, bson.ObjectId)
assert isinstance(user_id, bson.ObjectId)
org_coll = current_app.db('organizations')
users_coll = current_app.db('users')
if users_coll.count_documents({'_id': user_id}) == 0:
raise ValueError('User not found')
self._log.info('Updating organization %s, setting admin user to %s', org_id, user_id)
org_coll.update_one({'_id': org_id},
{'$set': {'admin_uid': user_id}})
def remove_user(self,
org_id: bson.ObjectId,
*,
user_id: bson.ObjectId = None,
email: str = None) -> dict:
"""Removes a user from the organization.
The user can be identified by either user ID or email.
Returns the new organization document.
"""
users_coll = current_app.db('users')
assert user_id or email
# Collect the email address if not given. This ensures the removal
# if the email was accidentally in the unknown_members list.
if email is None:
user_doc = users_coll.find_one(user_id, projection={'email': 1})
if user_doc is not None:
email = user_doc['email']
# See if we know this user.
if user_id is None:
user_doc = users_coll.find_one({'email': email}, projection={'_id': 1})
if user_doc is not None:
user_id = user_doc['_id']
if user_id and not users_coll.count_documents({'_id': user_id}):
raise wz_exceptions.UnprocessableEntity('User does not exist')
self._log.info('Removing user %s / %s from organization %s', user_id, email, org_id)
org_doc = self._get_org(org_id)
# Compute the new members.
if user_id:
members = set(org_doc.get('members') or []) - {user_id}
org_doc['members'] = list(members)
if email:
unknown_members = set(org_doc.get('unknown_members')) - {email}
org_doc['unknown_members'] = list(unknown_members)
r, _, _, status = current_app.put_internal('organizations',
remove_private_keys(org_doc),
_id=org_id)
if status != 200:
self._log.error('Error updating organization; status should be 200, not %i: %s',
status, r)
raise ValueError(f'Unable to update organization, status code {status}')
org_doc.update(r)
# Update the roles for the affected member.
if user_id:
self.refresh_roles(user_id)
return org_doc
def _get_org(self, org_id: bson.ObjectId, *, projection=None):
"""Returns the organization, or raises a ValueError."""
assert isinstance(org_id, bson.ObjectId)
org_coll = current_app.db('organizations')
org = org_coll.find_one(org_id, projection=projection)
if org is None:
raise ValueError(f'Organization {org_id} not found')
return org
def refresh_all_user_roles(self, org_id: bson.ObjectId):
"""Refreshes the roles of all members."""
assert isinstance(org_id, bson.ObjectId)
org = self._get_org(org_id, projection={'members': 1})
members = org.get('members')
if not members:
self._log.info('Organization %s has no members, nothing to refresh.', org_id)
return
for uid in members:
self.refresh_roles(uid)
def refresh_roles(self, user_id: bson.ObjectId) -> typing.Set[str]:
"""Refreshes the user's roles to own roles + organizations' roles.
:returns: the applied set of roles.
"""
assert isinstance(user_id, bson.ObjectId)
from pillar.api.service import do_badger
self._log.info('Refreshing roles for user %s', user_id)
org_coll = current_app.db('organizations')
tokens_coll = current_app.db('tokens')
def aggr_roles(coll, match: dict) -> typing.Set[str]:
query = coll.aggregate([
{'$match': match},
{'$project': {'org_roles': 1}},
{'$unwind': {'path': '$org_roles'}},
{'$group': {
'_id': None,
'org_roles': {'$addToSet': '$org_roles'},
}}])
# If the user has no organizations/tokens at all, the query will have no results.
try:
org_roles_doc = query.next()
except StopIteration:
return set()
return set(org_roles_doc['org_roles'])
# Join all organization-given roles and roles from the tokens collection.
org_roles = aggr_roles(org_coll, {'members': user_id})
self._log.debug('Organization-given roles for user %s: %s', user_id, org_roles)
token_roles = aggr_roles(tokens_coll, {
'user': user_id,
'expire_time': {"$gt": utcnow()},
})
self._log.debug('Token-given roles for user %s: %s', user_id, token_roles)
org_roles.update(token_roles)
users_coll = current_app.db('users')
user_doc = users_coll.find_one(user_id, projection={'roles': 1})
if not user_doc:
self._log.warning('Trying to refresh roles of non-existing user %s, ignoring', user_id)
return set()
all_user_roles = set(user_doc.get('roles') or [])
existing_org_roles = {role for role in all_user_roles
if role.startswith('org-')}
grant_roles = org_roles - all_user_roles
revoke_roles = existing_org_roles - org_roles
if grant_roles:
do_badger('grant', roles=grant_roles, user_id=user_id)
if revoke_roles:
do_badger('revoke', roles=revoke_roles, user_id=user_id)
return all_user_roles.union(grant_roles) - revoke_roles
def user_is_admin(self, org_id: bson.ObjectId) -> bool:
"""Returns whether the currently logged in user is the admin of the organization."""
from pillar.api.utils.authentication import current_user_id
uid = current_user_id()
if uid is None:
return False
org = self._get_org(org_id, projection={'admin_uid': 1})
return org.get('admin_uid') == uid
def unknown_member_roles(self, member_email: str) -> typing.Set[str]:
"""Returns the set of organization roles for this user.
Assumes the user is not yet known, i.e. part of the unknown_members lists.
"""
org_coll = current_app.db('organizations')
# Aggregate all org-given roles for this user.
query = org_coll.aggregate([
{'$match': {'unknown_members': member_email}},
{'$project': {'org_roles': 1}},
{'$unwind': {'path': '$org_roles'}},
{'$group': {
'_id': None,
'org_roles': {'$addToSet': '$org_roles'},
}}])
# If the user has no organizations at all, the query will have no results.
try:
org_roles_doc = query.next()
except StopIteration:
return set()
return set(org_roles_doc['org_roles'])
def make_member_known(self, member_uid: bson.ObjectId, member_email: str):
"""Moves the given member from the unknown_members to the members lists."""
# This uses a direct PyMongo query rather than using Eve's put_internal,
# to prevent simultaneous updates from dropping users.
org_coll = current_app.db('organizations')
for org in org_coll.find({'unknown_members': member_email}):
self._log.info('Updating organization %s, marking member %s/%s as known',
org['_id'], member_uid, member_email)
org_coll.update_one({'_id': org['_id']},
{'$addToSet': {'members': member_uid},
'$pull': {'unknown_members': member_email}
})
def org_members(self, member_sting_ids: typing.Iterable[str]) -> typing.List[dict]:
"""Returns the user documents of the organization members.
This is a workaround to provide membership information for
organizations without giving 'mortal' users access to /api/users.
"""
from pillar.api.utils import str2id
if not member_sting_ids:
return []
member_ids = [str2id(uid) for uid in member_sting_ids]
users_coll = current_app.db('users')
users = users_coll.find({'_id': {'$in': member_ids}},
projection={'_id': 1, 'full_name': 1, 'email': 1, 'avatar': 1})
return list(users)
def user_has_organizations(self, user_id: bson.ObjectId) -> bool:
"""Returns True iff the user has anything to do with organizations.
That is, if the user is admin for and/or member of any organization.
"""
org_coll = current_app.db('organizations')
org_count = org_coll.count_documents({'$or': [
{'admin_uid': user_id},
{'members': user_id}
]})
return bool(org_count)
def user_is_unknown_member(self, member_email: str) -> bool:
"""Return True iff the email is an unknown member of some org."""
org_coll = current_app.db('organizations')
org_count = org_coll.count_documents({'unknown_members': member_email})
return bool(org_count)
def roles_for_ip_address(self, remote_addr: str) -> typing.Set[str]:
"""Find the roles given to the user via org IP range definitions."""
from . import ip_ranges
org_coll = current_app.db('organizations')
try:
q = ip_ranges.query(remote_addr)
except ValueError as ex:
self._log.warning('Invalid remote address %s, ignoring IP-based roles: %s',
remote_addr, ex)
return set()
orgs = org_coll.find(
{'ip_ranges': q},
projection={'org_roles': True},
)
return set(role
for org in orgs
for role in org.get('org_roles', []))
def roles_for_request(self) -> typing.Set[str]:
"""Find roles for user via the request's remote IP address."""
try:
remote_addr = flask.request.access_route[0]
except IndexError:
return set()
if not remote_addr:
return set()
roles = self.roles_for_ip_address(remote_addr)
self._log.debug('Roles for IP address %s: %s', remote_addr, roles)
return roles
def setup_app(app):
from . import patch, hooks
hooks.setup_app(app)
patch.setup_app(app)
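
The grant/revoke computation in `refresh_roles()` above is plain set arithmetic over role names. A minimal sketch with invented role names (not taken from any real organization):

```python
# Invented example data, mirroring the variables in refresh_roles() above.
all_user_roles = {'subscriber', 'org-oldcorp'}   # user's current roles
org_roles = {'org-acme'}                         # roles given via orgs/tokens

existing_org_roles = {r for r in all_user_roles if r.startswith('org-')}
grant_roles = org_roles - all_user_roles           # {'org-acme'}    -> do_badger('grant', ...)
revoke_roles = existing_org_roles - org_roles      # {'org-oldcorp'} -> do_badger('revoke', ...)

assert all_user_roles.union(grant_roles) - revoke_roles == {'subscriber', 'org-acme'}
```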

View File

@ -1,48 +0,0 @@
import werkzeug.exceptions as wz_exceptions
from pillar.api.utils.authentication import current_user
def pre_get_organizations(request, lookup):
user = current_user()
if user.is_anonymous:
raise wz_exceptions.Forbidden()
if user.has_cap('admin'):
# Allow all lookups to admins.
return
# Only allow users to see their own organizations.
lookup['$or'] = [{'admin_uid': user.user_id}, {'members': user.user_id}]
def on_fetched_item_organizations(org_doc: dict):
"""Filter out binary data.
Eve cannot return binary data, at least not until we upgrade to a version
that depends on Cerberus >= 1.0.
"""
for ipr in org_doc.get('ip_ranges') or []:
ipr.pop('start', None)
ipr.pop('end', None)
ipr.pop('prefix', None) # not binary, but useless without the other fields.
def on_fetched_resource_organizations(response: dict):
for org_doc in response.get('_items', []):
on_fetched_item_organizations(org_doc)
def pre_post_organizations(request):
user = current_user()
if not user.has_cap('create-organization'):
raise wz_exceptions.Forbidden()
def setup_app(app):
app.on_pre_GET_organizations += pre_get_organizations
app.on_pre_POST_organizations += pre_post_organizations
app.on_fetched_item_organizations += on_fetched_item_organizations
app.on_fetched_resource_organizations += on_fetched_resource_organizations

View File

@ -1,75 +0,0 @@
"""IP range support for Organizations."""
from IPy import IP
# 128 bits all set to 1
ONES_128 = 2 ** 128 - 1
def doc(iprange: str, min_prefixlen6: int=0, min_prefixlen4: int=0) -> dict:
"""Convert a human-readable string like '1.2.3.4/24' to a Mongo document.
This converts the address to IPv6 and computes the start/end addresses
of the range. The address, its prefix size, and start and end address,
are returned as a dict.
Addresses are stored as big-endian binary data because MongoDB doesn't
support 128 bits integers.
:param iprange: the IP address and mask size, can be IPv6 or IPv4.
:param min_prefixlen6: if given, causes a ValueError when the mask size
is too low. Note that the mask size is always
evaluated only for IPv6 addresses.
:param min_prefixlen4: if given, causes a ValueError when the mask size
is too low. Note that the mask size is always
evaluated only for IPv4 addresses.
:returns: a dict like: {
'start': b'xxxxx' with the lowest IP address in the range.
'end': b'yyyyy' with the highest IP address in the range.
'human': 'aaaa:bbbb::cc00/120' with the human-readable representation.
'prefix': 120, the prefix length of the netmask in bits.
}
"""
ip = IP(iprange, make_net=True)
prefixlen = ip.prefixlen()
if ip.version() == 4:
if prefixlen < min_prefixlen4:
raise ValueError(f'Prefix length {prefixlen} smaller than allowed {min_prefixlen4}')
ip = ip.v46map()
else:
if prefixlen < min_prefixlen6:
raise ValueError(f'Prefix length {prefixlen} smaller than allowed {min_prefixlen6}')
addr = ip.int()
# Set all address bits to 1 where the mask is 0 to obtain the largest address.
end = addr | (ONES_128 % ip.netmask().int())
# This ensures that even a single host is represented as /128 in the human-readable form.
ip.NoPrefixForSingleIp = False
return {
'start': addr.to_bytes(16, 'big'),
'end': end.to_bytes(16, 'big'),
'human': ip.strCompressed(),
'prefix': ip.prefixlen(),
}
def query(address: str) -> dict:
"""Return a dict usable for querying all organizations whose IP range matches the given one.
:returns: a dict like:
{$elemMatch: {'start': {$lte: b'xxxxx'}, 'end': {$gte: b'xxxxx'}}}
"""
ip = IP(address)
if ip.version() == 4:
ip = ip.v46map()
for_mongo = ip.ip.to_bytes(16, 'big')
return {'$elemMatch': {
'start': {'$lte': for_mongo},
'end': {'$gte': for_mongo},
}}
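
A hedged usage sketch of the two helpers above; the import path and printed values are illustrative, and IPy must be installed:

```python
from pillar.api.organizations import ip_ranges  # assumed module path

ipr = ip_ranges.doc('192.168.0.0/16', min_prefixlen4=8)
print(ipr['human'], ipr['prefix'])          # IPv6-mapped human-readable range + prefix length
print(len(ipr['start']), len(ipr['end']))   # both bounds are 16-byte big-endian values

# Filter usable on the organizations collection, as roles_for_ip_address() builds it:
mongo_filter = {'ip_ranges': ip_ranges.query('192.168.3.4')}
```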

View File

@ -1,228 +0,0 @@
"""Organization patching support."""
import logging
import bson
from flask import Blueprint, jsonify
import werkzeug.exceptions as wz_exceptions
from pillar.api.utils.authentication import current_user
from pillar.api.utils import authorization, str2id, jsonify
from pillar.api import patch_handler
from pillar import current_app
log = logging.getLogger(__name__)
patch_api_blueprint = Blueprint('pillar.api.organizations.patch', __name__)
class OrganizationPatchHandler(patch_handler.AbstractPatchHandler):
item_name = 'organization'
@authorization.require_login()
def patch_assign_users(self, org_id: bson.ObjectId, patch: dict):
"""Assigns users to an organization.
The calling user must be admin of the organization.
"""
from . import NotEnoughSeats
self._assert_is_admin(org_id)
# Do some basic validation.
try:
emails = patch['emails']
except KeyError:
raise wz_exceptions.BadRequest('No key "emails" in patch.')
# Skip empty emails.
emails = [stripped
for stripped in (email.strip() for email in emails)
if stripped]
log.info('User %s uses PATCH to add users to organization %s',
current_user().user_id, org_id)
try:
org_doc = current_app.org_manager.assign_users(org_id, emails)
except NotEnoughSeats:
resp = jsonify({'_message': f'Not enough seats to assign {len(emails)} users'})
resp.status_code = 422
return resp
return jsonify(org_doc)
@authorization.require_login()
def patch_assign_user(self, org_id: bson.ObjectId, patch: dict):
"""Assigns a single user by User ID to an organization.
The calling user must be admin of the organization.
"""
from . import NotEnoughSeats
self._assert_is_admin(org_id)
# Do some basic validation.
try:
user_id = patch['user_id']
except KeyError:
raise wz_exceptions.BadRequest('No key "user_id" in patch.')
user_oid = str2id(user_id)
log.info('User %s uses PATCH to add user %s to organization %s',
current_user().user_id, user_oid, org_id)
try:
org_doc = current_app.org_manager.assign_single_user(org_id, user_id=user_oid)
except NotEnoughSeats:
resp = jsonify({'_message': f'Not enough seats to assign this user'})
resp.status_code = 422
return resp
return jsonify(org_doc)
@authorization.require_login()
def patch_assign_admin(self, org_id: bson.ObjectId, patch: dict):
"""Assigns a single user by User ID as admin of the organization.
The calling user must be admin of the organization.
"""
self._assert_is_admin(org_id)
# Do some basic validation.
try:
user_id = patch['user_id']
except KeyError:
raise wz_exceptions.BadRequest('No key "user_id" in patch.')
user_oid = str2id(user_id)
log.info('User %s uses PATCH to set user %s as admin for organization %s',
current_user().user_id, user_oid, org_id)
current_app.org_manager.assign_admin(org_id, user_id=user_oid)
@authorization.require_login()
def patch_remove_user(self, org_id: bson.ObjectId, patch: dict):
"""Removes a user from an organization.
The calling user must be admin of the organization.
"""
# Do some basic validation.
email = patch.get('email') or None
user_id = patch.get('user_id')
user_oid = str2id(user_id) if user_id else None
# Users require admin rights on the org, except when removing themselves.
current_user_id = current_user().user_id
if user_oid is None or user_oid != current_user_id:
self._assert_is_admin(org_id)
log.info('User %s uses PATCH to remove user %s from organization %s',
current_user_id, user_oid, org_id)
org_doc = current_app.org_manager.remove_user(org_id, user_id=user_oid, email=email)
return jsonify(org_doc)
def _assert_is_admin(self, org_id):
om = current_app.org_manager
if current_user().has_cap('admin'):
# Always allow admins to edit every organization.
return
if not om.user_is_admin(org_id):
log.warning('User %s uses PATCH to edit organization %s, '
'but is not admin of that Organization. Request denied.',
current_user().user_id, org_id)
raise wz_exceptions.Forbidden()
@authorization.require_login()
def patch_edit_from_web(self, org_id: bson.ObjectId, patch: dict):
"""Updates Organization fields from the web.
The PATCH command supports the following payload. The 'name' field must
be set, all other fields are optional. When an optional field is
omitted, it will be handled as an instruction to clear that field.
{'name': str,
'description': str,
'website': str,
'location': str,
'ip_ranges': list of human-readable IP ranges}
"""
from pymongo.results import UpdateResult
from . import ip_ranges
self._assert_is_admin(org_id)
user = current_user()
current_user_id = user.user_id
# Only take known fields from the patch, don't just copy everything.
update = {
'name': patch['name'].strip(),
'description': patch.get('description', '').strip(),
'website': patch.get('website', '').strip(),
'location': patch.get('location', '').strip(),
}
unset = {}
# Special transformation for IP ranges
iprs = patch.get('ip_ranges')
if iprs:
ipr_docs = []
for r in iprs:
try:
doc = ip_ranges.doc(r, min_prefixlen6=48, min_prefixlen4=8)
except ValueError as ex:
raise wz_exceptions.UnprocessableEntity(f'Invalid IP range {r!r}: {ex}')
ipr_docs.append(doc)
update['ip_ranges'] = ipr_docs
else:
unset['ip_ranges'] = True
refresh_user_roles = False
if user.has_cap('admin'):
if 'seat_count' in patch:
update['seat_count'] = int(patch['seat_count'])
if 'org_roles' in patch:
org_roles = [stripped for stripped in (role.strip() for role in patch['org_roles'])
if stripped]
if not all(role.startswith('org-') for role in org_roles):
raise wz_exceptions.UnprocessableEntity(
'Invalid role given, all roles must start with "org-"')
update['org_roles'] = org_roles
refresh_user_roles = True
self.log.info('User %s edits Organization %s: %s', current_user_id, org_id, update)
validator = current_app.validator_for_resource('organizations')
if not validator.validate_update(update, org_id, persisted_document={}):
resp = jsonify({
'_errors': validator.errors,
'_message': ', '.join(f'{field}: {error}'
for field, error in validator.errors.items()),
})
resp.status_code = 422
return resp
# Figure out what to set and what to unset
for_mongo = {'$set': update}
if unset:
for_mongo['$unset'] = unset
organizations_coll = current_app.db('organizations')
result: UpdateResult = organizations_coll.update_one({'_id': org_id}, for_mongo)
if result.matched_count != 1:
self.log.warning('User %s edits Organization %s but update matched %i items',
current_user_id, org_id, result.matched_count)
raise wz_exceptions.BadRequest()
if refresh_user_roles:
self.log.info('Organization roles set for org %s, refreshing users', org_id)
current_app.org_manager.refresh_all_user_roles(org_id)
return '', 204
def setup_app(app):
OrganizationPatchHandler(patch_api_blueprint)
app.register_api_blueprint(patch_api_blueprint, url_prefix='/organizations')
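
Hypothetical PATCH bodies for the handlers above, to illustrate how the operation names map to the `patch_*` methods (IDs, emails, and field values are invented; the full URL depends on where the API blueprint is mounted):

```python
assign_users = {'op': 'assign-users', 'emails': ['artist@example.com']}
assign_user = {'op': 'assign-user', 'user_id': '563aca02c379cf0005e8e17d'}
assign_admin = {'op': 'assign-admin', 'user_id': '563aca02c379cf0005e8e17d'}
remove_user = {'op': 'remove-user', 'email': 'artist@example.com'}
edit = {
    'op': 'edit-from-web',
    'name': 'Example Org',            # required
    'description': 'Example studio',  # optional; omitting a field clears it
    'website': 'https://example.com/',
    'location': 'Amsterdam',
    'ip_ranges': ['2001:db8::/48'],   # human-readable ranges, validated server-side
}
```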

View File

@ -1,92 +0,0 @@
"""Handler for PATCH requests.
This supports PATCH requests in the sense described by William Durand:
http://williamdurand.fr/2014/02/14/please-do-not-patch-like-an-idiot/
Each PATCH should be a JSON dict with at least a key 'op' with the
name of the operation to perform.
"""
import logging
import flask
from pillar.api.utils import authorization
log = logging.getLogger(__name__)
class AbstractPatchHandler:
"""Abstract PATCH handler supporting multiple operations.
Each operation, i.e. possible value of the 'op' key in the PATCH body,
should be matched to a similarly named "patch_xxx" function in a subclass.
For example, the operation "set-owner" is mapped to "patch_set_owner".
:cvar route: the Flask/Werkzeug route to attach this handler to.
For most handlers, the default will be fine.
:cvar item_name: the name of the things to patch, like "job", "task" etc.
Only used for logging.
"""
route: str = '/<object_id>'
item_name: str = None
def __init_subclass__(cls, **kwargs):
if not cls.route:
raise ValueError('Subclass must set route')
if not cls.item_name:
raise ValueError('Subclass must set item_name')
def __init__(self, blueprint: flask.Blueprint):
self.log: logging.Logger = log.getChild(self.__class__.__name__)
self.patch_handlers = {
name[6:].replace('_', '-'): getattr(self, name)
for name in dir(self)
if name.startswith('patch_') and callable(getattr(self, name))
}
if self.log.isEnabledFor(logging.INFO):
self.log.info('Creating PATCH handler %s.%s%s for operations: %s',
blueprint.name, self.patch.__name__, self.route,
sorted(self.patch_handlers.keys()))
blueprint.add_url_rule(self.route,
self.patch.__name__,
self.patch,
methods=['PATCH'])
@authorization.require_login()
def patch(self, object_id: str):
from flask import request
import werkzeug.exceptions as wz_exceptions
from pillar.api.utils import str2id, authentication
# Parse the request
real_object_id = str2id(object_id)
patch = request.get_json()
if not patch:
self.log.info('Bad PATCH request, did not contain JSON')
raise wz_exceptions.BadRequest('Patch must contain JSON')
try:
patch_op = patch['op']
except KeyError:
self.log.info("Bad PATCH request, did not contain 'op' key")
raise wz_exceptions.BadRequest("PATCH should contain 'op' key to denote operation.")
log.debug('User %s wants to PATCH "%s" %s %s',
authentication.current_user_id(), patch_op, self.item_name, real_object_id)
# Find the PATCH handler for the operation.
try:
handler = self.patch_handlers[patch_op]
except KeyError:
log.warning('No %s PATCH handler for operation %r', self.item_name, patch_op)
raise wz_exceptions.BadRequest('Operation %r not supported' % patch_op)
# Let the PATCH handler do its thing.
response = handler(real_object_id, patch)
if response is None:
return '', 204
return response
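
A hedged sketch of a concrete subclass, showing how an 'op' value such as 'set-owner' is routed to a `patch_set_owner` method; the item name and blueprint are invented for illustration:

```python
import bson
import flask

example_blueprint = flask.Blueprint('example.patch', __name__)

class ExamplePatchHandler(AbstractPatchHandler):
    item_name = 'example'

    def patch_set_owner(self, object_id: bson.ObjectId, patch: dict):
        # Called for PATCH bodies like {'op': 'set-owner', 'user_id': '...'}.
        # Returning None results in a '204 No Content' response.
        ...

ExamplePatchHandler(example_blueprint)  # registers the PATCH route on the blueprint
```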

View File

@ -3,25 +3,15 @@ from .routes import blueprint_api
def setup_app(app, api_prefix): def setup_app(app, api_prefix):
from . import patch
patch.setup_app(app)
app.on_replace_projects += hooks.override_is_private_field app.on_replace_projects += hooks.override_is_private_field
app.on_replace_projects += hooks.before_edit_check_permissions app.on_replace_projects += hooks.before_edit_check_permissions
app.on_replace_projects += hooks.protect_sensitive_fields app.on_replace_projects += hooks.protect_sensitive_fields
app.on_replace_projects += hooks.parse_markdown
app.on_update_projects += hooks.override_is_private_field app.on_update_projects += hooks.override_is_private_field
app.on_update_projects += hooks.before_edit_check_permissions app.on_update_projects += hooks.before_edit_check_permissions
app.on_update_projects += hooks.protect_sensitive_fields app.on_update_projects += hooks.protect_sensitive_fields
app.on_delete_item_projects += hooks.before_delete_project app.on_delete_item_projects += hooks.before_delete_project
app.on_deleted_item_projects += hooks.after_delete_project
app.on_insert_projects += hooks.before_inserting_override_is_private_field app.on_insert_projects += hooks.before_inserting_override_is_private_field
app.on_insert_projects += hooks.before_inserting_projects app.on_insert_projects += hooks.before_inserting_projects
app.on_insert_projects += hooks.parse_markdowns
app.on_inserted_projects += hooks.after_inserting_projects app.on_inserted_projects += hooks.after_inserting_projects
app.on_fetched_item_projects += hooks.before_returning_project_permissions app.on_fetched_item_projects += hooks.before_returning_project_permissions

View File

@ -1,20 +1,17 @@
import copy import copy
import logging import logging
from flask import request, abort from flask import request, abort, current_app
from gcloud import exceptions as gcs_exceptions
import pillar
from pillar import current_app
from pillar.api.node_types.asset import node_type_asset from pillar.api.node_types.asset import node_type_asset
from pillar.api.node_types.comment import node_type_comment from pillar.api.node_types.comment import node_type_comment
from pillar.api.node_types.group import node_type_group from pillar.api.node_types.group import node_type_group
from pillar.api.node_types.group_texture import node_type_group_texture from pillar.api.node_types.group_texture import node_type_group_texture
from pillar.api.node_types.texture import node_type_texture from pillar.api.node_types.texture import node_type_texture
from pillar.api.file_storage_backends import default_storage_backend from pillar.api.utils.gcs import GoogleCloudStorageBucket
from pillar.api.utils import authorization, authentication from pillar.api.utils import authorization, authentication
from pillar.api.utils import remove_private_keys from pillar.api.utils import remove_private_keys
from pillar.api.utils.authorization import user_has_role, check_permissions from pillar.api.utils.authorization import user_has_role, check_permissions
from pillar.auth import current_user
from .utils import abort_with_error from .utils import abort_with_error
log = logging.getLogger(__name__) log = logging.getLogger(__name__)
@ -31,7 +28,7 @@ def before_inserting_projects(items):
""" """
# Allow admin users to do whatever they want. # Allow admin users to do whatever they want.
if user_has_role('admin'): if user_has_role(u'admin'):
return return
for item in items: for item in items:
@ -60,39 +57,30 @@ def before_inserting_override_is_private_field(projects):
def before_edit_check_permissions(document, original): def before_edit_check_permissions(document, original):
# Allow admin users to do whatever they want.
# TODO: possibly move this into the check_permissions function.
if user_has_role(u'admin'):
return
check_permissions('projects', original, request.method) check_permissions('projects', original, request.method)
def before_delete_project(document): def before_delete_project(document):
"""Checks permissions before we allow deletion""" """Checks permissions before we allow deletion"""
check_permissions('projects', document, request.method) # Allow admin users to do whatever they want.
log.info('Deleting project %s on behalf of user %s', document['_id'], current_user) # TODO: possibly move this into the check_permissions function.
if user_has_role(u'admin'):
def after_delete_project(project: dict):
"""Perform delete on the project's files too."""
from werkzeug.exceptions import NotFound
from eve.methods.delete import delete
pid = project['_id']
log.info('Project %s was deleted, also deleting its files.', pid)
try:
r, _, _, status = delete('files', {'project': pid})
except NotFound:
# There were no files, and that's fine.
return return
if status != 204:
# Will never happen because bloody Eve always returns 204 or raises an exception. check_permissions('projects', document, request.method)
log.warning('Unable to delete files of project %s: %s', pid, r)
def protect_sensitive_fields(document, original): def protect_sensitive_fields(document, original):
"""When not logged in as admin, prevents update to certain fields.""" """When not logged in as admin, prevents update to certain fields."""
# Allow admin users to do whatever they want. # Allow admin users to do whatever they want.
if user_has_role('admin'): if user_has_role(u'admin'):
return return
def revert(name): def revert(name):
@ -130,8 +118,6 @@ def after_inserting_projects(projects):
def after_inserting_project(project, db_user): def after_inserting_project(project, db_user):
from pillar.auth import UserClass
project_id = project['_id'] project_id = project['_id']
user_id = db_user['_id'] user_id = db_user['_id']
@ -157,8 +143,7 @@ def after_inserting_project(project, db_user):
log.debug('Made user %s member of group %s', user_id, admin_group_id) log.debug('Made user %s member of group %s', user_id, admin_group_id)
# Assign the group to the project with admin rights # Assign the group to the project with admin rights
owner_user = UserClass.construct('', db_user) is_admin = authorization.is_admin(db_user)
is_admin = authorization.is_admin(owner_user)
world_permissions = ['GET'] if is_admin else [] world_permissions = ['GET'] if is_admin else []
permissions = { permissions = {
'world': world_permissions, 'world': world_permissions,
@ -191,8 +176,18 @@ def after_inserting_project(project, db_user):
else: else:
project['url'] = "p-{!s}".format(project_id) project['url'] = "p-{!s}".format(project_id)
# Initialize storage using the default specified in STORAGE_BACKEND # Initialize storage page (defaults to GCS)
default_storage_backend(str(project_id)) if current_app.config.get('TESTING'):
log.warning('Not creating Google Cloud Storage bucket while running unit tests!')
else:
try:
gcs_storage = GoogleCloudStorageBucket(str(project_id))
if gcs_storage.bucket.exists():
log.info('Created GCS instance for project %s', project_id)
else:
log.warning('Unable to create GCS instance for project %s', project_id)
except gcs_exceptions.Forbidden as ex:
log.warning('GCS forbids me to create CGS instance for project %s: %s', project_id, ex)
# Commit the changes directly to the MongoDB; a PUT is not allowed yet, # Commit the changes directly to the MongoDB; a PUT is not allowed yet,
# as the project doesn't have a valid permission structure. # as the project doesn't have a valid permission structure.
@ -200,7 +195,7 @@ def after_inserting_project(project, db_user):
result = projects_collection.update_one({'_id': project_id}, result = projects_collection.update_one({'_id': project_id},
{'$set': remove_private_keys(project)}) {'$set': remove_private_keys(project)})
if result.matched_count != 1: if result.matched_count != 1:
log.error('Unable to update project %s: %s', project_id, result.raw_result) log.warning('Unable to update project %s: %s', project_id, result.raw_result)
abort_with_error(500) abort_with_error(500)
@ -249,35 +244,3 @@ def projects_node_type_has_method(response):
project_node_type_has_method(project) project_node_type_has_method(project)
def parse_markdown(project, original=None):
schema = current_app.config['DOMAIN']['projects']['schema']
def find_markdown_fields(schema, project):
"""Find and process all Markdown coerced fields.
- look for fields with a 'coerce': 'markdown' property
- parse the name of the field and generate the sibling field name (_<field_name>_html -> <field_name>)
- parse the content of the <field_name> field as markdown and save it in _<field_name>_html
"""
for field_name, field_value in schema.items():
if not isinstance(field_value, dict):
continue
if field_value.get('coerce') != 'markdown':
continue
if field_name not in project:
continue
# Construct markdown source field name (strip the leading '_' and the trailing '_html')
source_field_name = field_name[1:-5]
html = pillar.markdown.markdown(project[source_field_name])
project[field_name] = html
if isinstance(project, dict) and field_name in project:
find_markdown_fields(field_value, project[field_name])
find_markdown_fields(schema, project)
def parse_markdowns(items):
for item in items:
parse_markdown(item)

View File

@ -1,47 +0,0 @@
"""Code for merging projects."""
import logging
from bson import ObjectId
from pillar import current_app
from pillar.api.file_storage.moving import move_to_bucket
from pillar.api.utils import random_etag, utcnow
log = logging.getLogger(__name__)
def merge_project(pid_from: ObjectId, pid_to: ObjectId):
"""Move nodes and files from one project to another.
Note that this may invalidate the nodes, as their node type definition
may differ between projects.
"""
log.info('Moving project contents from %s to %s', pid_from, pid_to)
assert isinstance(pid_from, ObjectId)
assert isinstance(pid_to, ObjectId)
files_coll = current_app.db('files')
nodes_coll = current_app.db('nodes')
# Move the files first. Since this requires API calls to an external
# service, this is more likely to go wrong than moving the nodes.
query = {'project': pid_from}
to_move = files_coll.find(query, projection={'_id': 1})
to_move_count = files_coll.count_documents(query)
log.info('Moving %d files to project %s', to_move_count, pid_to)
for file_doc in to_move:
fid = file_doc['_id']
log.debug('moving file %s to project %s', fid, pid_to)
move_to_bucket(fid, pid_to)
# Mass-move the nodes.
etag = random_etag()
result = nodes_coll.update_many(
query,
{'$set': {'project': pid_to,
'_etag': etag,
'_updated': utcnow(),
}}
)
log.info('Moved %d nodes to project %s', result.modified_count, pid_to)
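
A hedged usage sketch; the import path is a guess and the ObjectIds are placeholders. The call needs a Pillar application context, since it touches the files and nodes collections:

```python
from bson import ObjectId
from pillar.api.projects.merging import merge_project  # assumed module path

merge_project(ObjectId('5a0000000000000000000001'),   # source project
              ObjectId('5a0000000000000000000002'))   # destination project
```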

View File

@ -1,85 +0,0 @@
"""Project patching support."""
import logging
import flask
from flask import Blueprint, request
import werkzeug.exceptions as wz_exceptions
from pillar import current_app
from pillar.auth import current_user
from pillar.api.utils import random_etag, str2id, utcnow
from pillar.api.utils import authorization
log = logging.getLogger(__name__)
blueprint = Blueprint('projects.patch', __name__)
@blueprint.route('/<project_id>', methods=['PATCH'])
@authorization.require_login()
def patch_project(project_id: str):
"""Undelete a project.
This is done via a custom PATCH due to the lack of transactions of MongoDB;
we cannot undelete both project-referenced files and file-referenced
projects in one atomic operation.
"""
# Parse the request
pid = str2id(project_id)
patch = request.get_json()
if not patch:
raise wz_exceptions.BadRequest('Expected JSON body')
log.debug('User %s wants to PATCH project %s: %s', current_user, pid, patch)
# 'undelete' is the only operation we support now, so no fancy handler registration.
op = patch.get('op', '')
if op != 'undelete':
log.warning('User %s sent unsupported PATCH op %r to project %s: %s',
current_user, op, pid, patch)
raise wz_exceptions.BadRequest(f'unsupported operation {op!r}')
# Get the project to find the user's permissions.
proj_coll = current_app.db('projects')
proj = proj_coll.find_one({'_id': pid})
if not proj:
raise wz_exceptions.NotFound(f'project {pid} not found')
allowed = authorization.compute_allowed_methods('projects', proj)
if 'PUT' not in allowed:
log.warning('User %s tried to undelete project %s but only has permissions %r',
current_user, pid, allowed)
raise wz_exceptions.Forbidden(f'no PUT access to project {pid}')
if not proj.get('_deleted', False):
raise wz_exceptions.BadRequest(f'project {pid} was not deleted, unable to undelete')
# Undelete the files. We cannot do this via Eve, as it doesn't support
# PATCHing collections, so direct MongoDB modification is used to set
# _deleted=False and provide new _etag and _updated values.
new_etag = random_etag()
log.debug('undeleting files before undeleting project %s', pid)
files_coll = current_app.db('files')
update_result = files_coll.update_many(
{'project': pid},
{'$set': {'_deleted': False,
'_etag': new_etag,
'_updated': utcnow()}})
log.info('undeleted %d of %d file documents of project %s',
update_result.modified_count, update_result.matched_count, pid)
log.info('undeleting project %s on behalf of user %s', pid, current_user)
update_result = proj_coll.update_one({'_id': pid},
{'$set': {'_deleted': False}})
log.info('undeleted %d project document %s', update_result.modified_count, pid)
resp = flask.Response('', status=204)
resp.location = flask.url_for('projects.view', project_url=proj['url'])
return resp
def setup_app(app):
# This needs to be on the same URL prefix as Eve uses for the collection,
# and not /p as used for the other Projects API calls.
app.register_api_blueprint(blueprint, url_prefix='/projects')
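
A hedged client-side example of the only supported operation, 'undelete'; the host, project ID, and authentication are placeholders, not taken from this codebase:

```python
import requests

resp = requests.patch(
    'https://example.test/api/projects/5a0000000000000000000001',
    json={'op': 'undelete'},
    # plus whatever authentication headers the deployment requires
)
assert resp.status_code == 204   # Location header points at the undeleted project
```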

View File

@ -2,14 +2,11 @@ import json
import logging import logging
from bson import ObjectId from bson import ObjectId
from flask import Blueprint, request, current_app, make_response, url_for from flask import Blueprint, g, request, current_app, make_response, url_for
from werkzeug import exceptions as wz_exceptions
import pillar.api.users.avatar
from pillar.api.utils import authorization, jsonify, str2id from pillar.api.utils import authorization, jsonify, str2id
from pillar.api.utils import mongo from pillar.api.utils import mongo
from pillar.api.utils.authorization import require_login, check_permissions from pillar.api.utils.authorization import require_login, check_permissions
from pillar.auth import current_user from werkzeug import exceptions as wz_exceptions
from . import utils from . import utils
@ -19,7 +16,7 @@ blueprint_api = Blueprint('projects_api', __name__)
@blueprint_api.route('/create', methods=['POST']) @blueprint_api.route('/create', methods=['POST'])
@authorization.require_login(require_cap='subscriber') @authorization.require_login(require_roles={u'admin', u'subscriber', u'demo'})
def create_project(overrides=None): def create_project(overrides=None):
"""Creates a new project.""" """Creates a new project."""
@ -27,7 +24,7 @@ def create_project(overrides=None):
project_name = request.json['name'] project_name = request.json['name']
else: else:
project_name = request.form['project_name'] project_name = request.form['project_name']
user_id = current_user.user_id user_id = g.current_user['user_id']
project = utils.create_new_project(project_name, user_id, overrides) project = utils.create_new_project(project_name, user_id, overrides)
@ -44,8 +41,6 @@ def project_manage_users():
No changes are done on the project itself. No changes are done on the project itself.
""" """
from pillar.api.utils import str2id
projects_collection = current_app.data.driver.db['projects'] projects_collection = current_app.data.driver.db['projects']
users_collection = current_app.data.driver.db['users'] users_collection = current_app.data.driver.db['users']
@ -55,29 +50,23 @@ def project_manage_users():
project = projects_collection.find_one({'_id': ObjectId(project_id)}) project = projects_collection.find_one({'_id': ObjectId(project_id)})
admin_group_id = project['permissions']['groups'][0]['group'] admin_group_id = project['permissions']['groups'][0]['group']
users = list(users_collection.find( users = users_collection.find(
{'groups': {'$in': [admin_group_id]}}, {'groups': {'$in': [admin_group_id]}},
{'username': 1, 'email': 1, 'full_name': 1, 'avatar': 1})) {'username': 1, 'email': 1, 'full_name': 1})
for user in users: return jsonify({'_status': 'OK', '_items': list(users)})
user['avatar_url'] = pillar.api.users.avatar.url(user)
user.pop('avatar', None)
return jsonify({'_status': 'OK', '_items': users})
# The request is not a form, since it comes from the API sdk # The request is not a form, since it comes from the API sdk
data = json.loads(request.data) data = json.loads(request.data)
project_id = str2id(data['project_id']) project_id = ObjectId(data['project_id'])
target_user_id = str2id(data['user_id']) target_user_id = ObjectId(data['user_id'])
action = data['action'] action = data['action']
current_user_id = current_user.user_id current_user_id = g.current_user['user_id']
project = projects_collection.find_one({'_id': project_id}) project = projects_collection.find_one({'_id': project_id})
# Check if the current_user is owner of the project, or removing themselves. # Check if the current_user is owner of the project, or removing themselves.
if not authorization.user_has_role('admin'):
remove_self = target_user_id == current_user_id and action == 'remove' remove_self = target_user_id == current_user_id and action == 'remove'
if project['user'] != current_user_id and not remove_self: if project['user'] != current_user_id and not remove_self:
log.warning('User %s tries to %s %s to/from project %s, but is not allowed',
current_user_id, action, target_user_id, project_id)
utils.abort_with_error(403) utils.abort_with_error(403)
admin_group = utils.get_admin_group(project) admin_group = utils.get_admin_group(project)
@ -96,7 +85,7 @@ def project_manage_users():
action, current_user_id) action, current_user_id)
raise wz_exceptions.UnprocessableEntity() raise wz_exceptions.UnprocessableEntity()
users_collection.update_one({'_id': target_user_id}, users_collection.update({'_id': target_user_id},
{operation: {'groups': admin_group['_id']}}) {operation: {'groups': admin_group['_id']}})
user = users_collection.find_one({'_id': target_user_id}, user = users_collection.find_one({'_id': target_user_id},
@ -145,3 +134,5 @@ def get_allowed_methods(project_id=None, node_type=None):
resp.status_code = 204 resp.status_code = 204
return resp return resp

View File

@ -1,14 +1,10 @@
import logging import logging
import typing
from bson import ObjectId from bson import ObjectId
from flask import current_app
from werkzeug import exceptions as wz_exceptions from werkzeug import exceptions as wz_exceptions
from werkzeug.exceptions import abort from werkzeug.exceptions import abort
from pillar import current_app
from pillar.auth import current_user
from pillar.api import file_storage_backends
log = logging.getLogger(__name__) log = logging.getLogger(__name__)
@ -31,30 +27,12 @@ def project_total_file_size(project_id):
return 0 return 0
def get_admin_group_id(project_id: ObjectId) -> ObjectId: def get_admin_group(project):
assert isinstance(project_id, ObjectId)
project = current_app.db('projects').find_one({'_id': project_id},
{'permissions': 1})
if not project:
raise ValueError(f'Project {project_id} does not exist.')
# TODO: search through all groups to find the one with the project ID as its name,
# or identify "the admin group" in a different way (for example the group with DELETE rights).
try:
admin_group_id = ObjectId(project['permissions']['groups'][0]['group'])
except KeyError:
raise ValueError(f'Project {project_id} does not seem to have an admin group')
return admin_group_id
def get_admin_group(project: dict) -> dict:
"""Returns the admin group for the project.""" """Returns the admin group for the project."""
groups_collection = current_app.data.driver.db['groups'] groups_collection = current_app.data.driver.db['groups']
# TODO: see get_admin_group_id # TODO: search through all groups to find the one with the project ID as its name.
admin_group_id = ObjectId(project['permissions']['groups'][0]['group']) admin_group_id = ObjectId(project['permissions']['groups'][0]['group'])
group = groups_collection.find_one({'_id': admin_group_id}) group = groups_collection.find_one({'_id': admin_group_id})
@ -62,27 +40,11 @@ def get_admin_group(project: dict) -> dict:
raise ValueError('Unable to handle project without admin group.') raise ValueError('Unable to handle project without admin group.')
if group['name'] != str(project['_id']): if group['name'] != str(project['_id']):
log.error('User %s tries to get admin group for project %s, '
'but that does not have the project ID as group name: %s',
current_user.user_id, project.get('_id', '-unknown-'), group)
return abort_with_error(403) return abort_with_error(403)
return group return group
def user_rights_in_project(project_id: ObjectId) -> frozenset:
"""Returns the set of HTTP methods allowed on the given project for the current user."""
from pillar.api.utils import authorization
assert isinstance(project_id, ObjectId)
proj_coll = current_app.db().projects
proj = proj_coll.find_one({'_id': project_id})
return frozenset(authorization.compute_allowed_methods('projects', proj))
def abort_with_error(status): def abort_with_error(status):
"""Aborts with the given status, or 500 if the status doesn't indicate an error. """Aborts with the given status, or 500 if the status doesn't indicate an error.
@ -128,87 +90,3 @@ def create_new_project(project_name, user_id, overrides):
log.info('Created project %s for user %s', project['_id'], user_id) log.info('Created project %s for user %s', project['_id'], user_id)
return project return project
def get_node_type(project, node_type_name):
"""Returns the named node type, or None if it doesn't exist."""
return next((nt for nt in project['node_types']
if nt['name'] == node_type_name), None)
def node_type_dict(project: dict) -> typing.Dict[str, dict]:
"""Return the node types of the project as dictionary.
The returned dictionary will be keyed by the node type name.
"""
return {nt['name']: nt for nt in project['node_types']}
def project_id(project_url: str) -> ObjectId:
"""Returns the object ID, or raises a ValueError when not found."""
proj_coll = current_app.db('projects')
proj = proj_coll.find_one({'url': project_url}, projection={'_id': True})
if not proj:
raise ValueError(f'project with url={project_url!r} not found')
return proj['_id']
def get_project_url(project_id: ObjectId) -> str:
"""Returns the project URL, or raises a ValueError when not found."""
proj_coll = current_app.db('projects')
proj = proj_coll.find_one({'_id': project_id, '_deleted': {'$ne': True}},
projection={'url': True})
if not proj:
raise ValueError(f'project with id={project_id} not found')
return proj['url']
def get_project(project_url: str) -> dict:
"""Find a project in the database, raises ValueError if not found.
:param project_url: URL of the project
"""
proj_coll = current_app.db('projects')
project = proj_coll.find_one({'url': project_url, '_deleted': {'$ne': True}})
if not project:
raise ValueError(f'project url={project_url!r} does not exist')
return project
def put_project(project: dict):
"""Puts a project into the database via Eve.
:param project: the project data, should be the entire project document
:raises ValueError: if the project cannot be saved.
"""
from pillar.api.utils import remove_private_keys
from pillarsdk.utils import remove_none_attributes
pid = ObjectId(project['_id'])
proj_no_priv = remove_private_keys(project)
proj_no_none = remove_none_attributes(proj_no_priv)
result, _, _, status_code = current_app.put_internal('projects', proj_no_none, _id=pid)
if status_code != 200:
message = f"Can't update project {pid}, status {status_code} with issues: {result}"
log.error(message)
raise ValueError(message)
def storage(project_id: ObjectId) -> file_storage_backends.Bucket:
"""Return the storage bucket for this project.
For now this returns a bucket in the default storage backend, since
individual projects do not have a 'storage backend' setting (this is
set per file, not per project).
"""
return file_storage_backends.default_storage_backend(str(project_id))

View File

@ -1,9 +0,0 @@
from .routes import blueprint_search
from . import queries
def setup_app(app, url_prefix: str = None):
app.register_api_blueprint(
blueprint_search, url_prefix=url_prefix)
queries.setup_app(app)

View File

@ -1,40 +0,0 @@
import logging
from algoliasearch.helpers import AlgoliaException
log = logging.getLogger(__name__)
def push_updated_user(user_to_index: dict):
"""Push an update to the index when a user document is updated."""
from pillar.api.utils.algolia import index_user_save
try:
index_user_save(user_to_index)
except AlgoliaException as ex:
log.warning(
'Unable to push user info to Algolia for user "%s", id=%s; %s', # noqa
user_to_index.get('username'),
user_to_index.get('objectID'), ex)
def index_node_save(node_to_index: dict):
"""Save parsed node document to the index."""
from pillar.api.utils import algolia
try:
algolia.index_node_save(node_to_index)
except AlgoliaException as ex:
log.warning(
'Unable to push node info to Algolia for node %s; %s', node_to_index, ex) # noqa
def index_node_delete(delete_id: str):
"""Delete node using id."""
from pillar.api.utils import algolia
try:
algolia.index_node_delete(delete_id)
except AlgoliaException as ex:
log.warning('Unable to delete node info from Algolia for node %s; %s', delete_id, ex) # noqa

View File

@ -1,193 +0,0 @@
"""
Define elasticsearch document mapping.
Elasticsearch consist of two parts:
- Part 1: Define the documents, specifying which fields will be indexed.
- Part 2: Build the Elasticsearch JSON queries.
BOTH of these parts are equally important to have a search API that returns
relevant results.
"""
import logging
import typing
import elasticsearch_dsl as es
from elasticsearch_dsl import analysis
log = logging.getLogger(__name__)
edge_ngram_filter = analysis.token_filter(
'edge_ngram_filter',
type='edge_ngram',
min_gram=1,
max_gram=15
)
autocomplete = es.analyzer(
'autocomplete',
tokenizer='standard',
filter=['standard', 'asciifolding', 'lowercase', edge_ngram_filter]
)
class User(es.DocType):
"""Elastic document describing user."""
objectID = es.Keyword()
username = es.Text(fielddata=True, analyzer=autocomplete)
username_exact = es.Keyword()
full_name = es.Text(fielddata=True, analyzer=autocomplete)
roles = es.Keyword(multi=True)
groups = es.Keyword(multi=True)
email = es.Text(fielddata=True, analyzer=autocomplete)
email_exact = es.Keyword()
class Meta:
index = 'users'
class Node(es.DocType):
"""
Elastic document describing a node.
"""
node_type = es.Keyword()
objectID = es.Keyword()
name = es.Text(
fielddata=True,
analyzer=autocomplete
)
user = es.Object(
fields={
'id': es.Keyword(),
'name': es.Text(
fielddata=True,
analyzer=autocomplete)
}
)
description = es.Text()
is_free = es.Boolean()
project = es.Object(
fields={
'id': es.Keyword(),
'name': es.Keyword(),
'url': es.Keyword(),
}
)
media = es.Keyword()
picture = es.Keyword()
tags = es.Keyword(multi=True)
license_notes = es.Text()
created_at = es.Date()
updated_at = es.Date()
class Meta:
index = 'nodes'
def create_doc_from_user_data(user_to_index: dict) -> typing.Optional[User]:
"""
Create the document to store in a search engine for this user.
See pillar.celery.search_index_task
:returns: an ElasticSearch document or None if user_to_index has no data.
"""
if not user_to_index:
return None
doc_id = str(user_to_index.get('objectID', ''))
if not doc_id:
log.error('USER ID is missing %s', user_to_index)
raise KeyError('Trying to create document without id')
doc = User(_id=doc_id)
doc.objectID = str(user_to_index['objectID'])
doc.username = user_to_index['username']
doc.username_exact = user_to_index['username']
doc.full_name = user_to_index['full_name']
doc.roles = list(map(str, user_to_index['roles']))
doc.groups = list(map(str, user_to_index['groups']))
doc.email = user_to_index['email']
doc.email_exact = user_to_index['email']
return doc
def create_doc_from_node_data(node_to_index: dict) -> typing.Optional[Node]:
"""
Create the document to store in a search engine for this node.
See pillar.celery.search_index_task
:returns: an ElasticSearch document or None if node_to_index has no data.
"""
if not node_to_index:
return None
# node stuff
doc_id = str(node_to_index.get('objectID', ''))
if not doc_id:
log.error('ID missing %s', node_to_index)
return None
doc = Node(_id=doc_id)
doc.objectID = str(node_to_index['objectID'])
doc.node_type = node_to_index['node_type']
doc.name = node_to_index['name']
doc.description = node_to_index.get('description')
doc.user.id = str(node_to_index['user']['_id'])
doc.user.name = node_to_index['user']['full_name']
doc.project.id = str(node_to_index['project']['_id'])
doc.project.name = node_to_index['project']['name']
doc.project.url = node_to_index['project']['url']
if node_to_index['node_type'] == 'asset':
doc.media = node_to_index['media']
doc.picture = str(node_to_index.get('picture'))
doc.tags = node_to_index.get('tags')
doc.license_notes = node_to_index.get('license_notes')
doc.is_free = node_to_index.get('is_free')
doc.created_at = node_to_index['created']
doc.updated_at = node_to_index['updated']
return doc
def create_doc_from_user(user_to_index: dict) -> User:
"""
Create a user document from user
"""
doc_id = str(user_to_index['objectID'])
doc = User(_id=doc_id)
doc.objectID = str(user_to_index['objectID'])
doc.full_name = user_to_index['full_name']
doc.username = user_to_index['username']
doc.roles = user_to_index['roles']
doc.groups = user_to_index['groups']
doc.email = user_to_index['email']
return doc
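
A small illustration of the user mapping above; the field values are invented. Building the document and serialising it needs no Elasticsearch connection, only saving it does:

```python
user_doc = create_doc_from_user_data({
    'objectID': '563aca02c379cf0005e8e17d',
    'username': 'artist',
    'full_name': 'An Artist',
    'roles': ['subscriber'],
    'groups': [],
    'email': 'artist@example.com',
})
print(user_doc.to_dict())   # the JSON body that would be indexed
# user_doc.save(index='users') would write it, given a configured connection.
```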

View File

@ -1,65 +0,0 @@
import logging
from elasticsearch_dsl.connections import connections
from elasticsearch.exceptions import NotFoundError
from pillar import current_app
from . import documents
log = logging.getLogger(__name__)
elk_hosts = current_app.config['ELASTIC_SEARCH_HOSTS']
connections.create_connection(
hosts=elk_hosts,
sniff_on_start=False,
timeout=20)
def push_updated_user(user_to_index: dict):
"""
Push an update to the Elastic index when a user item is updated.
"""
if not user_to_index:
return
doc = documents.create_doc_from_user_data(user_to_index)
if not doc:
return
index = current_app.config['ELASTIC_INDICES']['USER']
log.debug('Index %r update user doc %s in ElasticSearch.', index, doc._id)
doc.save(index=index)
def index_node_save(node_to_index: dict):
"""
Push an update to the Elastic index when a node item is saved.
"""
if not node_to_index:
return
doc = documents.create_doc_from_node_data(node_to_index)
if not doc:
return
index = current_app.config['ELASTIC_INDICES']['NODE']
log.debug('Index %r update node doc %s in ElasticSearch.', index, doc._id)
doc.save(index=index)
def index_node_delete(delete_id: str):
"""
Delete a node document from the Elastic index using a node id.
"""
index = current_app.config['ELASTIC_INDICES']['NODE']
log.debug('Index %r node doc delete %s', index, delete_id)
try:
doc: documents.Node = documents.Node.get(id=delete_id)
doc.delete(index=index)
except NotFoundError:
# seems to be gone already..
pass

View File

@ -1,64 +0,0 @@
import logging
from typing import List
from elasticsearch.exceptions import NotFoundError
from elasticsearch_dsl.connections import connections
import elasticsearch_dsl as es
from pillar import current_app
from . import documents
log = logging.getLogger(__name__)
class ResetIndexTask(object):
""" Clear and build index / mapping """
# Key into the ELASTIC_INDICES dict in the app config.
index_key: str = ''
# List of elastic document types
doc_types: List = []
name = 'remove index'
def __init__(self):
if not self.index_key:
raise ValueError("No index specified")
if not self.doc_types:
raise ValueError("No doc_types specified")
connections.create_connection(
hosts=current_app.config['ELASTIC_SEARCH_HOSTS'],
# sniff_on_start=True,
retry_on_timeout=True,
)
def execute(self):
index = current_app.config['ELASTIC_INDICES'][self.index_key]
idx = es.Index(index)
try:
idx.delete(ignore=404)
except NotFoundError:
log.warning("Could not delete index '%s', ignoring", index)
else:
log.info("Deleted index %s", index)
# create doc types
for dt in self.doc_types:
idx.doc_type(dt)
# create index
idx.create()
class ResetNodeIndex(ResetIndexTask):
index_key = 'NODE'
doc_types = [documents.Node]
class ResetUserIndex(ResetIndexTask):
index_key = 'USER'
doc_types = [documents.User]
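
Hedged usage: the reset tasks are meant to run inside an application context (they read `current_app.config`), for instance from a management command:

```python
# Drops and recreates each index with its document mappings.
ResetNodeIndex().execute()
ResetUserIndex().execute()
```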

View File

@ -1,215 +0,0 @@
import json
import logging
import typing
from elasticsearch import Elasticsearch
from elasticsearch_dsl import Search, Q, MultiSearch
from elasticsearch_dsl.query import Query
from pillar import current_app
log = logging.getLogger(__name__)
BOOLEAN_TERMS = ['is_free']
NODE_AGG_TERMS = ['node_type', 'media', 'tags', *BOOLEAN_TERMS]
USER_AGG_TERMS = ['roles', ]
ITEMS_PER_PAGE = 10
USER_SOURCE_INCLUDE = ['full_name', 'objectID', 'username']
# Will be set in setup_app()
client: Elasticsearch = None
def add_aggs_to_search(search, agg_terms):
"""
Add facets / aggregations to the search result
"""
for term in agg_terms:
search.aggs.bucket(term, 'terms', field=term)
def make_filter(must: list, terms: dict) -> list:
""" Given term parameters append must queries to the must list """
for field, value in terms.items():
if value not in (None, ''):
must.append({'term': {field: value}})
return must
def nested_bool(filters: list, should: list, terms: dict, *, index_alias: str) -> Search:
"""
Create a nested bool, where the aggregation selection is a must.
:param index_alias: 'USER' or 'NODE', see ELASTIC_INDICES config.
"""
filters = make_filter(filters, terms)
bool_query = Q('bool', should=should)
bool_query = Q('bool', must=bool_query, filter=filters)
index = current_app.config['ELASTIC_INDICES'][index_alias]
search = Search(using=client, index=index)
search.query = bool_query
return search
def do_multi_node_search(queries: typing.List[dict]) -> typing.List[dict]:
"""
Given user query input and term refinements
search for public, published nodes
"""
search = create_multi_node_search(queries)
return _execute_multi(search)
def do_node_search(query: str, terms: dict, page: int, project_id: str='') -> dict:
"""
Given user query input and term refinements
search for public, published nodes
"""
search = create_node_search(query, terms, page, project_id)
return _execute(search)
def create_multi_node_search(queries: typing.List[dict]) -> MultiSearch:
search = MultiSearch(using=client)
for q in queries:
search = search.add(create_node_search(**q))
return search
def create_node_search(query: str, terms: dict, page: int, project_id: str='') -> Search:
terms = _transform_terms(terms)
should = [
Q('match', name=query),
{"match": {"project.name": query}},
{"match": {"user.name": query}},
Q('match', description=query),
Q('term', media=query),
Q('term', tags=query),
]
filters = []
if project_id:
filters.append({'term': {'project.id': project_id}})
if not query:
should = []
search = nested_bool(filters, should, terms, index_alias='NODE')
if not query:
search = search.sort('-created_at')
add_aggs_to_search(search, NODE_AGG_TERMS)
search = paginate(search, page)
if log.isEnabledFor(logging.DEBUG):
log.debug(json.dumps(search.to_dict(), indent=4))
return search
def do_user_search(query: str, terms: dict, page: int) -> dict:
"""Return user objects represented in the Elasticsearch result dict."""
search = create_user_search(query, terms, page)
return _execute(search)
def _common_user_search(query: str) -> (typing.List[Query], typing.List[Query]):
"""Construct (filter,should) for regular + admin user search."""
if not query:
return [], []
should = []
if '@' in query:
should.append({'term': {'email_exact': {'value': query, 'boost': 50}}})
email_boost = 25
else:
email_boost = 1
should.extend([
Q('match', username=query),
Q('match', full_name=query),
{'match': {'email': {'query': query, 'boost': email_boost}}},
{'term': {'username_exact': {'value': query, 'boost': 50}}},
])
return [], should
def do_user_search_admin(query: str, terms: dict, page: int) -> dict:
"""
Return the user search result dict object.
Search all user fields and provide aggregation information.
"""
search = create_user_admin_search(query, terms, page)
return _execute(search)
def _execute(search: Search) -> dict:
if log.isEnabledFor(logging.DEBUG):
log.debug(json.dumps(search.to_dict(), indent=4))
resp = search.execute()
if log.isEnabledFor(logging.DEBUG):
log.debug(json.dumps(resp.to_dict(), indent=4))
return resp.to_dict()
def _execute_multi(search: typing.List[Search]) -> typing.List[dict]:
if log.isEnabledFor(logging.DEBUG):
log.debug(json.dumps(search.to_dict(), indent=4))
resp = search.execute()
if log.isEnabledFor(logging.DEBUG):
log.debug(json.dumps(resp.to_dict(), indent=4))
return [r.to_dict() for r in resp]
def create_user_admin_search(query: str, terms: dict, page: int) -> Search:
terms = _transform_terms(terms)
filters, should = _common_user_search(query)
if query:
# We most likely got an id field. We should find it.
if len(query) == len('563aca02c379cf0005e8e17d'):
should.append({'term': {
'objectID': {
'value': query, # the thing we're looking for
'boost': 100, # how much more it counts for the score
}
}})
search = nested_bool(filters, should, terms, index_alias='USER')
add_aggs_to_search(search, USER_AGG_TERMS)
search = paginate(search, page)
return search
def create_user_search(query: str, terms: dict, page: int) -> Search:
search = create_user_admin_search(query, terms, page)
return search.source(include=USER_SOURCE_INCLUDE)
def paginate(search: Search, page_idx: int) -> Search:
return search[page_idx * ITEMS_PER_PAGE:(page_idx + 1) * ITEMS_PER_PAGE]
def _transform_terms(terms: dict) -> dict:
"""
Ugly hack! Elastic uses 1/0 for boolean values in its aggregate response,
but expects true/false in queries.
"""
transformed = terms.copy()
for t in BOOLEAN_TERMS:
orig = transformed.get(t)
if orig in ('1', '0'):
transformed[t] = bool(int(orig))
return transformed
def setup_app(app):
global client
hosts = app.config['ELASTIC_SEARCH_HOSTS']
log.getChild('setup_app').info('Creating ElasticSearch client for %s', hosts)
client = Elasticsearch(hosts)
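
A hedged example of a node search as the HTTP routes in the next file issue it; `setup_app()` must have created the client first, and the term values are invented:

```python
terms = {'node_type': 'asset', 'media': '', 'tags': '', 'is_free': '1',
         'projectname': '', 'roles': ''}
result = do_node_search('spring', terms, page=0)   # dict with hits + aggregations
```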

View File

@ -1,106 +0,0 @@
import logging
from flask import Blueprint, request
import elasticsearch.exceptions as elk_ex
from werkzeug import exceptions as wz_exceptions
from pillar.api.utils import authorization, jsonify
from . import queries
log = logging.getLogger(__name__)
blueprint_search = Blueprint('elksearch', __name__)
TERMS = [
'node_type', 'media',
'tags', 'is_free', 'projectname',
'roles',
]
def _term_filters(args) -> dict:
"""
Check whether the frontend wants to filter on specific fields, AKA facets.
Return a mapping from term field name to the user-provided term value.
"""
return {term: args.get(term, '') for term in TERMS}
def _page_index(page) -> int:
"""Return the page index from the query string."""
try:
page_idx = int(page)
except (TypeError, ValueError):
log.info('invalid page number %r received', page)
raise wz_exceptions.BadRequest()
return page_idx
@blueprint_search.route('/', methods=['GET'])
def search_nodes():
searchword = request.args.get('q', '')
project_id = request.args.get('project', '')
terms = _term_filters(request.args)
page_idx = _page_index(request.args.get('page', 0))
result = queries.do_node_search(searchword, terms, page_idx, project_id)
return jsonify(result)
@blueprint_search.route('/multisearch', methods=['POST'])
def multi_search_nodes():
if len(request.args) != 1:
log.info(f'Expected 1 argument, received {len(request.args)}')
json_obj = request.json
q = []
for row in json_obj:
q.append({
'query': row.get('q', ''),
'project_id': row.get('project', ''),
'terms': _term_filters(row),
'page': _page_index(row.get('page', 0))
})
result = queries.do_multi_node_search(q)
return jsonify(result)
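# Illustrative request body (hypothetical values), mirroring the keys read in
# the loop above; each entry becomes one query in the multi-search:
#     [
#         {"q": "spring", "project": "<project id>", "page": 0},
#         {"q": "rig", "node_type": "asset"}
#     ]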
@blueprint_search.route('/user')
def search_user():
searchword = request.args.get('q', '')
terms = _term_filters(request.args)
page_idx = _page_index(request.args.get('page', 0))
# result is the raw Elasticsearch output;
# we need to filter fields in case of user objects.
try:
result = queries.do_user_search(searchword, terms, page_idx)
except elk_ex.ElasticsearchException as ex:
resp = jsonify({'_message': str(ex)})
resp.status_code = 500
return resp
return jsonify(result)
@blueprint_search.route('/admin/user')
@authorization.require_login(require_cap='admin')
def search_user_admin():
"""
User search over all fields.
"""
searchword = request.args.get('q', '')
terms = _term_filters(request.args)
page_idx = _page_index(request.args.get('page', 0))
try:
result = queries.do_user_search_admin(searchword, terms, page_idx)
except elk_ex.ElasticsearchException as ex:
resp = jsonify({'_message': str(ex)})
resp.status_code = 500
return resp
return jsonify(result)

View File

@ -1,31 +1,24 @@
"""Service accounts.""" """Service accounts."""
import logging import logging
import typing
import blinker import blinker
import bson
from flask import Blueprint, current_app, request from flask import Blueprint, current_app, request
from werkzeug import exceptions as wz_exceptions
from pillar.api import local_auth from pillar.api import local_auth
from pillar.api.utils import authorization, authentication from pillar.api.utils import mongo
from pillar.api.utils import authorization, authentication, str2id, jsonify
from werkzeug import exceptions as wz_exceptions
blueprint = Blueprint('service', __name__) blueprint = Blueprint('service', __name__)
log = logging.getLogger(__name__) log = logging.getLogger(__name__)
signal_user_changed_role = blinker.NamedSignal('badger:user_changed_role') signal_user_changed_role = blinker.NamedSignal('badger:user_changed_role')
ROLES_WITH_GROUPS = {'admin', 'demo', 'subscriber'} ROLES_WITH_GROUPS = {u'admin', u'demo', u'subscriber'}
# Map of role name to group ID, for the above groups. # Map of role name to group ID, for the above groups.
role_to_group_id = {} role_to_group_id = {}
class ServiceAccountCreationError(Exception):
"""Raised when a service account cannot be created."""
@blueprint.before_app_first_request @blueprint.before_app_first_request
def fetch_role_to_group_id_map(): def fetch_role_to_group_id_map():
"""Fills the _role_to_group_id mapping upon application startup.""" """Fills the _role_to_group_id mapping upon application startup."""
@ -45,7 +38,7 @@ def fetch_role_to_group_id_map():
@blueprint.route('/badger', methods=['POST']) @blueprint.route('/badger', methods=['POST'])
@authorization.require_login(require_roles={'service', 'badger'}, require_all=True) @authorization.require_login(require_roles={u'service', u'badger'}, require_all=True)
def badger(): def badger():
if request.mimetype != 'application/json': if request.mimetype != 'application/json':
log.debug('Received %s instead of application/json', request.mimetype) log.debug('Received %s instead of application/json', request.mimetype)
@ -77,76 +70,42 @@ def badger():
action, user_email, role, action, role) action, user_email, role, action, role)
return 'Role not allowed', 403 return 'Role not allowed', 403
return do_badger(action, role=role, user_email=user_email) return do_badger(action, user_email, role)
def do_badger(action: str, *, def do_badger(action, user_email, role):
role: str=None, roles: typing.Iterable[str]=None, """Performs a badger action, returning a HTTP response."""
user_email: str = '', user_id: bson.ObjectId = None):
"""Performs a badger action, returning a HTTP response.
Either role or roles must be given.
Either user_email or user_id must be given.
"""
if action not in {'grant', 'revoke'}: if action not in {'grant', 'revoke'}:
log.error('do_badger(%r, %r, %r, %r): action %r not supported.',
action, role, user_email, user_id, action)
raise wz_exceptions.BadRequest('Action %r not supported' % action) raise wz_exceptions.BadRequest('Action %r not supported' % action)
if not user_email and user_id is None: if not user_email:
log.error('do_badger(%r, %r, %r, %r): neither email nor user_id given.',
action, role, user_email, user_id)
raise wz_exceptions.BadRequest('User email not given') raise wz_exceptions.BadRequest('User email not given')
if bool(role) == bool(roles): if not role:
log.error('do_badger(%r, role=%r, roles=%r, %r, %r): ' raise wz_exceptions.BadRequest('Role not given')
'either "role" or "roles" must be given.',
action, role, roles, user_email, user_id)
raise wz_exceptions.BadRequest('Invalid role(s) given')
# If only a single role was given, handle it as a set of one role.
if not roles:
roles = {role}
del role
users_coll = current_app.data.driver.db['users'] users_coll = current_app.data.driver.db['users']
# Fetch the user # Fetch the user
if user_email: db_user = users_coll.find_one({'email': user_email}, projection={'roles': 1, 'groups': 1})
query = {'email': user_email}
else:
query = user_id
db_user = users_coll.find_one(query, projection={'roles': 1, 'groups': 1})
if db_user is None: if db_user is None:
log.warning('badger(%s, roles=%s, user_email=%s, user_id=%s): user not found', log.warning('badger(%s, %s, %s): user not found', action, user_email, role)
action, roles, user_email, user_id)
return 'User not found', 404 return 'User not found', 404
# Apply the action # Apply the action
user_roles = set(db_user.get('roles') or []) roles = set(db_user.get('roles') or [])
if action == 'grant': if action == 'grant':
user_roles |= roles roles.add(role)
else: else:
user_roles -= roles roles.discard(role)
groups = None
for role in roles:
groups = manage_user_group_membership(db_user, role, action) groups = manage_user_group_membership(db_user, role, action)
if groups is None: updates = {'roles': list(roles)}
# No change for this role
continue
# Also update db_user for the next iteration.
db_user['groups'] = groups
updates = {'roles': list(user_roles)}
if groups is not None: if groups is not None:
updates['groups'] = list(groups) updates['groups'] = list(groups)
log.debug('badger(%s, %s, user_email=%s, user_id=%s): applying updates %r',
action, role, user_email, user_id, updates)
users_coll.update_one({'_id': db_user['_id']}, users_coll.update_one({'_id': db_user['_id']},
{'$set': updates}) {'$set': updates})
@ -157,6 +116,19 @@ def do_badger(action: str, *,
return '', 204 return '', 204
@blueprint.route('/urler/<project_id>', methods=['GET'])
@authorization.require_login(require_roles={u'service', u'urler'}, require_all=True)
def urler(project_id):
"""Returns the URL of any project."""
project_id = str2id(project_id)
project = mongo.find_one_or_404('projects', project_id,
projection={'url': 1})
return jsonify({
'_id': project_id,
'url': project['url']})
def manage_user_group_membership(db_user, role, action): def manage_user_group_membership(db_user, role, action):
"""Some roles have associated groups; this function maintains group & role membership. """Some roles have associated groups; this function maintains group & role membership.
@ -190,52 +162,37 @@ def manage_user_group_membership(db_user, role, action):
return user_groups return user_groups
def create_service_account(email: str, roles: typing.Iterable, service: dict, def create_service_account(email, roles, service):
*, full_name: str=None):
"""Creates a service account with the given roles + the role 'service'. """Creates a service account with the given roles + the role 'service'.
:param email: optional email address associated with the account. :param email: email address associated with the account
:type email: str
:param roles: iterable of role names :param roles: iterable of role names
:param service: dict of the 'service' key in the user. :param service: dict of the 'service' key in the user.
:param full_name: Full name of the service account. If None, will be set to :type service: dict
something reasonable.
:return: tuple (user doc, token doc) :return: tuple (user doc, token doc)
""" """
# Create a user with the correct roles. # Create a user with the correct roles.
roles = sorted(set(roles).union({'service'})) roles = list(set(roles).union({u'service'}))
user_id = bson.ObjectId() user = {'username': email,
log.info('Creating service account %s with roles %s', user_id, roles)
user = {'_id': user_id,
'username': f'SRV-{user_id}',
'groups': [], 'groups': [],
'roles': roles, 'roles': roles,
'settings': {'email_communications': 0}, 'settings': {'email_communications': 0},
'auth': [], 'auth': [],
'full_name': full_name or f'SRV-{user_id}', 'full_name': email,
'email': email,
'service': service} 'service': service}
if email:
user['email'] = email
result, _, _, status = current_app.post_internal('users', user) result, _, _, status = current_app.post_internal('users', user)
if status != 201: if status != 201:
raise ServiceAccountCreationError('Error creating user {}: {}'.format(user_id, result)) raise SystemExit('Error creating user {}: {}'.format(email, result))
user.update(result) user.update(result)
# Create an authentication token that won't expire for a long time. # Create an authentication token that won't expire for a long time.
token = generate_auth_token(user['_id']) token = local_auth.generate_and_store_token(user['_id'], days=36500, prefix='SRV')
return user, token return user, token
def generate_auth_token(service_account_id) -> dict:
"""Generates an authentication token for a service account."""
token_info = local_auth.generate_and_store_token(service_account_id, days=36500, prefix=b'SRV')
return token_info
def setup_app(app, api_prefix): def setup_app(app, api_prefix):
app.register_api_blueprint(blueprint, url_prefix=api_prefix) app.register_api_blueprint(blueprint, url_prefix=api_prefix)

View File

@ -1,374 +0,0 @@
import itertools
import typing
from datetime import datetime
from operator import itemgetter
import attr
import bson
import pymongo
from flask import Blueprint, current_app, request, url_for
import pillar
from pillar import shortcodes
from pillar.api.utils import jsonify, pretty_duration, str2id
blueprint = Blueprint('timeline', __name__)
@attr.s(auto_attribs=True)
class TimelineDO:
groups: typing.List['GroupDO'] = []
continue_from: typing.Optional[float] = None
@attr.s(auto_attribs=True)
class GroupDO:
label: typing.Optional[str] = None
url: typing.Optional[str] = None
items: typing.Dict = {}
groups: typing.Iterable['GroupDO'] = []
class SearchHelper:
def __init__(self, nbr_of_weeks: int, continue_from: typing.Optional[datetime],
project_ids: typing.List[bson.ObjectId], sort_direction: str):
self._nbr_of_weeks = nbr_of_weeks
self._continue_from = continue_from
self._project_ids = project_ids
self.sort_direction = sort_direction
def _match(self, continue_from: typing.Optional[datetime]) -> dict:
created = {}
if continue_from:
if self.sort_direction == 'desc':
created = {'_created': {'$lt': continue_from}}
else:
created = {'_created': {'$gt': continue_from}}
return {'_deleted': {'$ne': True},
'node_type': {'$in': ['asset', 'post']},
'properties.status': {'$eq': 'published'},
'project': {'$in': self._project_ids},
**created,
}
def raw_weeks_from_mongo(self) -> pymongo.collection.Collection:
direction = pymongo.DESCENDING if self.sort_direction == 'desc' else pymongo.ASCENDING
nodes_coll = current_app.db('nodes')
return nodes_coll.aggregate([
{'$match': self._match(self._continue_from)},
{'$lookup': {"from": "projects",
"localField": "project",
"foreignField": "_id",
"as": "project"}},
{'$unwind': {'path': "$project"}},
{'$lookup': {"from": "users",
"localField": "user",
"foreignField": "_id",
"as": "user"}},
{'$unwind': {'path': "$user"}},
{'$project': {
'_created': 1,
'project._id': 1,
'project.url': 1,
'project.name': 1,
'user._id': 1,
'user.full_name': 1,
'name': 1,
'node_type': 1,
'picture': 1,
'properties': 1,
'permissions': 1,
}},
{'$group': {
'_id': {'year': {'$isoWeekYear': '$_created'},
'week': {'$isoWeek': '$_created'}},
'nodes': {'$push': '$$ROOT'}
}},
{'$sort': {'_id.year': direction,
'_id.week': direction}},
{'$limit': self._nbr_of_weeks}
])
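# Sketch of one document produced by this pipeline (values are illustrative):
# each element groups the matching nodes of a single ISO week, as built by the
# $group stage above:
#     {'_id': {'year': 2018, 'week': 32},
#      'nodes': [{'name': ..., 'node_type': 'asset', 'project': {...}, ...}]}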
def has_more(self, continue_from: datetime) -> bool:
nodes_coll = current_app.db('nodes')
result = nodes_coll.count_documents(self._match(continue_from))
return bool(result)
class Grouper:
@classmethod
def label(cls, node):
return None
@classmethod
def url(cls, node):
return None
@classmethod
def group_key(cls) -> typing.Callable[[dict], typing.Any]:
raise NotImplementedError()
@classmethod
def sort_key(cls) -> typing.Callable[[dict], typing.Any]:
raise NotImplementedError()
class ProjectGrouper(Grouper):
@classmethod
def label(cls, project: dict):
return project['name']
@classmethod
def url(cls, project: dict):
return url_for('projects.view', project_url=project['url'])
@classmethod
def group_key(cls) -> typing.Callable[[dict], typing.Any]:
return itemgetter('project')
@classmethod
def sort_key(cls) -> typing.Callable[[dict], typing.Any]:
return lambda node: node['project']['_id']
class UserGrouper(Grouper):
@classmethod
def label(cls, user):
return user['full_name']
@classmethod
def group_key(cls) -> typing.Callable[[dict], typing.Any]:
return itemgetter('user')
@classmethod
def sort_key(cls) -> typing.Callable[[dict], typing.Any]:
return lambda node: node['user']['_id']
class TimeLineBuilder:
def __init__(self, search_helper: SearchHelper, grouper: typing.Type[Grouper]):
self.search_helper = search_helper
self.grouper = grouper
self.continue_from = None
def build(self) -> TimelineDO:
raw_weeks = self.search_helper.raw_weeks_from_mongo()
clean_weeks = (self.create_week_group(week) for week in raw_weeks)
return TimelineDO(
groups=list(clean_weeks),
continue_from=self.continue_from.timestamp() if self.search_helper.has_more(self.continue_from) else None
)
def create_week_group(self, week: dict) -> GroupDO:
nodes = week['nodes']
nodes.sort(key=itemgetter('_created'), reverse=True)
self.update_continue_from(nodes)
groups = self.create_groups(nodes)
return GroupDO(
label=f'Week {week["_id"]["week"]}, {week["_id"]["year"]}',
groups=groups
)
def create_groups(self, nodes: typing.List[dict]) -> typing.List[GroupDO]:
self.sort_nodes(nodes) # groupby assumes that the list is sorted
nodes_grouped = itertools.groupby(nodes, self.grouper.group_key())
groups = (self.clean_group(grouped_by, group) for grouped_by, group in nodes_grouped)
groups_sorted = sorted(groups, key=self.group_row_sorter, reverse=True)
return groups_sorted
def sort_nodes(self, nodes: typing.List[dict]):
nodes.sort(key=itemgetter('node_type'))
nodes.sort(key=self.grouper.sort_key())
def update_continue_from(self, sorted_nodes: typing.List[dict]):
if self.search_helper.sort_direction == 'desc':
first_created = sorted_nodes[-1]['_created']
candidate = self.continue_from or first_created
self.continue_from = min(candidate, first_created)
else:
last_created = sorted_nodes[0]['_created']
candidate = self.continue_from or last_created
self.continue_from = max(candidate, last_created)
def clean_group(self, grouped_by: typing.Any, group: typing.Iterable[dict]) -> GroupDO:
items = self.create_items(group)
return GroupDO(
label=self.grouper.label(grouped_by),
url=self.grouper.url(grouped_by),
items=items
)
def create_items(self, group) -> typing.List[dict]:
by_node_type = itertools.groupby(group, key=itemgetter('node_type'))
items = {}
for node_type, nodes in by_node_type:
items[node_type] = [self.node_prettyfy(n) for n in nodes]
return items
@classmethod
def node_prettyfy(cls, node: dict) -> dict:
duration_seconds = node['properties'].get('duration_seconds')
if duration_seconds is not None:
node['properties']['duration'] = pretty_duration(duration_seconds)
if node['node_type'] == 'post':
html = _get_markdowned_html(node['properties'], 'content')
html = shortcodes.render_commented(html, context=node['properties'])
node['properties']['pretty_content'] = html
return node
@classmethod
def group_row_sorter(cls, row: GroupDO) -> typing.Tuple[datetime, datetime]:
'''
Groups that contain posts are more interesting, so they are sorted higher up.
:param row: the group to compute a sort key for
:return: tuple with newest post date and newest asset date
'''
def newest_created(nodes: typing.List[dict]) -> datetime:
if nodes:
return nodes[0]['_created']
return datetime.fromtimestamp(0, tz=bson.tz_util.utc)
newest_post_date = newest_created(row.items.get('post'))
newest_asset_date = newest_created(row.items.get('asset'))
return newest_post_date, newest_asset_date
def _public_project_ids() -> typing.List[bson.ObjectId]:
"""Returns a list of ObjectIDs of public projects.
Memoized in setup_app().
"""
proj_coll = current_app.db('projects')
result = proj_coll.find({'is_private': False}, {'_id': 1})
return [p['_id'] for p in result]
def _get_markdowned_html(document: dict, field_name: str) -> str:
cache_field_name = pillar.markdown.cache_field_name(field_name)
html = document.get(cache_field_name)
if html is None:
markdown_src = document.get(field_name) or ''
html = pillar.markdown.markdown(markdown_src)
return html
@blueprint.route('/', methods=['GET'])
def global_timeline():
continue_from_str = request.args.get('from')
continue_from = parse_continue_from(continue_from_str)
nbr_of_weeks_str = request.args.get('weeksToLoad')
nbr_of_weeks = parse_nbr_of_weeks(nbr_of_weeks_str)
sort_direction = request.args.get('dir', 'desc')
return _global_timeline(continue_from, nbr_of_weeks, sort_direction)
@blueprint.route('/p/<string(length=24):pid_path>', methods=['GET'])
def project_timeline(pid_path: str):
continue_from_str = request.args.get('from')
continue_from = parse_continue_from(continue_from_str)
nbr_of_weeks_str = request.args.get('weeksToLoad')
nbr_of_weeks = parse_nbr_of_weeks(nbr_of_weeks_str)
sort_direction = request.args.get('dir', 'desc')
pid = str2id(pid_path)
return _project_timeline(continue_from, nbr_of_weeks, sort_direction, pid)
def parse_continue_from(from_arg) -> typing.Optional[datetime]:
try:
from_float = float(from_arg)
except (TypeError, ValueError):
return None
return datetime.fromtimestamp(from_float, tz=bson.tz_util.utc)
def parse_nbr_of_weeks(weeks_to_load: str) -> int:
try:
return int(weeks_to_load)
except (TypeError, ValueError):
return 3
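# Illustrative query string for the two endpoints above (hypothetical URL
# prefix and values):
#     <url_prefix>/?from=1514764800.0&weeksToLoad=3&dir=asc
# 'from' is a float UTC timestamp, 'weeksToLoad' an int, 'dir' 'asc' or 'desc'.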
def _global_timeline(continue_from: typing.Optional[datetime], nbr_of_weeks: int, sort_direction: str):
"""Returns an aggregated view of what has happened on the site
Memoized in setup_app().
:param continue_from: Python utc timestamp where to begin aggregation
:param nbr_of_weeks: Number of weeks to return
Example output:
{
groups: [{
label: 'Week 32',
groups: [{
label: 'Spring',
url: '/p/spring',
items:{
post: [blogPostDoc, blogPostDoc],
asset: [assetDoc, assetDoc]
},
groups: ...
}]
}],
continue_from: 123456.2 // python timestamp
}
"""
builder = TimeLineBuilder(
SearchHelper(nbr_of_weeks, continue_from, _public_project_ids(), sort_direction),
ProjectGrouper
)
return jsonify_timeline(builder.build())
def jsonify_timeline(timeline: TimelineDO):
return jsonify(
attr.asdict(timeline,
recurse=True,
filter=lambda att, value: value is not None)
)
def _project_timeline(continue_from: typing.Optional[datetime], nbr_of_weeks: int, sort_direction, pid: bson.ObjectId):
"""Returns an aggregated view of what has happened on the site
Memoized in setup_app().
:param continue_from: Python utc timestamp where to begin aggregation
:param nbr_of_weeks: Number of weeks to return
Example output:
{
groups: [{
label: 'Week 32',
groups: [{
label: 'Tobias Johansson',
items:{
post: [blogPostDoc, blogPostDoc],
asset: [assetDoc, assetDoc]
},
groups: ...
}]
}],
continue_from: 123456.2 // python timestamp
}
"""
builder = TimeLineBuilder(
SearchHelper(nbr_of_weeks, continue_from, [pid], sort_direction),
UserGrouper
)
return jsonify_timeline(builder.build())
def setup_app(app, url_prefix):
global _public_project_ids
global _global_timeline
global _project_timeline
app.register_api_blueprint(blueprint, url_prefix=url_prefix)
cached = app.cache.cached(timeout=3600)
_public_project_ids = cached(_public_project_ids)
memoize = app.cache.memoize(timeout=60)
_global_timeline = memoize(_global_timeline)
_project_timeline = memoize(_project_timeline)

View File

@ -1,82 +1,15 @@
import logging
import bson
from flask import current_app
from . import hooks from . import hooks
from .routes import blueprint_api from .routes import blueprint_api
log = logging.getLogger(__name__)
def remove_user_from_group(user_id: bson.ObjectId, group_id: bson.ObjectId):
"""Removes the user from the given group.
Directly uses MongoDB, so that it doesn't require any special permissions.
"""
log.info('Removing user %s from group %s', user_id, group_id)
user_group_action(user_id, group_id, '$pull')
def add_user_to_group(user_id: bson.ObjectId, group_id: bson.ObjectId):
"""Makes the user member of the given group.
Directly uses MongoDB, so that it doesn't require any special permissions.
"""
log.info('Adding user %s to group %s', user_id, group_id)
user_group_action(user_id, group_id, '$addToSet')
def user_group_action(user_id: bson.ObjectId, group_id: bson.ObjectId, action: str):
"""Performs a group action (add/remove).
:param user_id: the user's ObjectID.
:param group_id: the group's ObjectID.
:param action: either '$pull' to remove from a group, or '$addToSet' to add to a group.
"""
from pymongo.results import UpdateResult
assert isinstance(user_id, bson.ObjectId)
assert isinstance(group_id, bson.ObjectId)
assert action in {'$pull', '$addToSet'}
users_coll = current_app.db('users')
result: UpdateResult = users_coll.update_one(
{'_id': user_id},
{action: {'groups': group_id}},
)
if result.matched_count == 0:
raise ValueError(f'Unable to {action} user {user_id} membership of group {group_id}; '
f'user not found.')
def _update_search_user_changed_role(sender, user: dict):
log.debug('Sending updated user %s to Algolia due to role change', user['_id'])
hooks.push_updated_user_to_search(user, original=None)
def setup_app(app, api_prefix): def setup_app(app, api_prefix):
from pillar.api import service
from . import patch
patch.setup_app(app, url_prefix=api_prefix)
app.on_pre_GET_users += hooks.check_user_access app.on_pre_GET_users += hooks.check_user_access
app.on_post_GET_users += hooks.post_GET_user app.on_post_GET_users += hooks.post_GET_user
app.on_pre_PUT_users += hooks.check_put_access app.on_pre_PUT_users += hooks.check_put_access
app.on_pre_PUT_users += hooks.before_replacing_user app.on_pre_PUT_users += hooks.before_replacing_user
app.on_replaced_users += hooks.push_updated_user_to_search app.on_replaced_users += hooks.push_updated_user_to_algolia
app.on_replaced_users += hooks.send_blinker_signal_roles_changed app.on_replaced_users += hooks.send_blinker_signal_roles_changed
app.on_fetched_item_users += hooks.after_fetching_user app.on_fetched_item_users += hooks.after_fetching_user
app.on_fetched_resource_users += hooks.after_fetching_user_resource app.on_fetched_resource_users += hooks.after_fetching_user_resource
app.on_insert_users += hooks.before_inserting_users
app.on_inserted_users += hooks.after_inserting_users
app.register_api_blueprint(blueprint_api, url_prefix=api_prefix) app.register_api_blueprint(blueprint_api, url_prefix=api_prefix)
service.signal_user_changed_role.connect(_update_search_user_changed_role)

View File

@ -1,159 +0,0 @@
import functools
import io
import logging
import mimetypes
import typing
from bson import ObjectId
from eve.methods.get import getitem_internal
import flask
from pillar import current_app
from pillar.api import blender_id
from pillar.api.blender_cloud import home_project
import pillar.api.file_storage
from werkzeug.datastructures import FileStorage
log = logging.getLogger(__name__)
DEFAULT_AVATAR = 'assets/img/default_user_avatar.png'
def url(user: dict) -> str:
"""Return the avatar URL for this user.
:param user: dictionary from the MongoDB 'users' collection.
"""
assert isinstance(user, dict), f'user must be dict, not {type(user)}'
avatar_id = user.get('avatar', {}).get('file')
if not avatar_id:
return _default_avatar()
# The file may not exist, in which case we get an empty string back.
return pillar.api.file_storage.get_file_url(avatar_id) or _default_avatar()
@functools.lru_cache(maxsize=1)
def _default_avatar() -> str:
"""Return the URL path of the default avatar.
Doesn't change after the app has started, so we just cache it.
"""
return flask.url_for('static_pillar', filename=DEFAULT_AVATAR)
def _extension_for_mime(mime_type: str) -> str:
# Take the longest extension. I'd rather have '.jpeg' than the weird '.jpe'.
extensions: typing.List[str] = mimetypes.guess_all_extensions(mime_type)
try:
return max(extensions, key=len)
except ValueError:
# Raised when extensions is empty, e.g. when the mime type is unknown.
return ''
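# Illustrative examples (not part of the original module); exact results depend
# on the platform's mimetypes database:
#     _extension_for_mime('image/jpeg')             # typically '.jpeg'
#     _extension_for_mime('application/x-unknown')  # '' for unknown types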
def _get_file_link(file_id: ObjectId) -> str:
# Get the file document via Eve to make it update the link.
file_doc, _, _, status = getitem_internal('files', _id=file_id)
assert status == 200
return file_doc['link']
def sync_avatar(user_id: ObjectId) -> str:
"""Fetch the user's avatar from Blender ID and save to storage.
Errors are logged but do not raise an exception.
:return: the link to the avatar, or '' if it was not processed.
"""
users_coll = current_app.db('users')
db_user = users_coll.find_one({'_id': user_id})
old_avatar_info = db_user.get('avatar', {})
if isinstance(old_avatar_info, ObjectId):
old_avatar_info = {'file': old_avatar_info}
home_proj = home_project.get_home_project(user_id)
if not home_proj:
log.error('Home project of user %s does not exist, unable to store avatar', user_id)
return ''
bid_userid = blender_id.get_user_blenderid(db_user)
if not bid_userid:
log.error('User %s has no Blender ID user-id, unable to fetch avatar', user_id)
return ''
avatar_url = blender_id.avatar_url(bid_userid)
bid_session = blender_id.Session()
# Avoid re-downloading the same avatar.
request_headers = {}
if avatar_url == old_avatar_info.get('last_downloaded_url') and \
old_avatar_info.get('last_modified'):
request_headers['If-Modified-Since'] = old_avatar_info.get('last_modified')
log.info('Downloading avatar for user %s from %s', user_id, avatar_url)
resp = bid_session.get(avatar_url, headers=request_headers, allow_redirects=True)
if resp.status_code == 304:
# File was not modified, we can keep the old file.
log.debug('Avatar for user %s was not modified on Blender ID, not re-downloading', user_id)
return _get_file_link(old_avatar_info['file'])
resp.raise_for_status()
mime_type = resp.headers['Content-Type']
file_extension = _extension_for_mime(mime_type)
if not file_extension:
log.error('No file extension known for mime type %s, unable to handle avatar of user %s',
mime_type, user_id)
return ''
filename = f'avatar-{user_id}{file_extension}'
fake_local_file = io.BytesIO(resp.content)
fake_local_file.name = filename
# Act as if this file was just uploaded by the user, so we can reuse
# existing Pillar file-handling code.
log.debug("Uploading avatar for user %s to storage", user_id)
uploaded_file = FileStorage(
stream=fake_local_file,
filename=filename,
headers=resp.headers,
content_type=mime_type,
content_length=resp.headers['Content-Length'],
)
with pillar.auth.temporary_user(db_user):
upload_data = pillar.api.file_storage.upload_and_process(
fake_local_file,
uploaded_file,
str(home_proj['_id']),
# Disallow image processing, as it's a tiny file anyway and
# we'll just serve the original.
may_process_file=False,
)
file_id = ObjectId(upload_data['file_id'])
avatar_info = {
'file': file_id,
'last_downloaded_url': resp.url,
'last_modified': resp.headers.get('Last-Modified'),
}
# Update the user to store the reference to their avatar.
old_avatar_file_id = old_avatar_info.get('file')
update_result = users_coll.update_one({'_id': user_id},
{'$set': {'avatar': avatar_info}})
if update_result.matched_count == 1:
log.debug('Updated avatar for user ID %s to file %s', user_id, file_id)
else:
log.warning('Matched %d users while setting avatar for user ID %s to file %s',
update_result.matched_count, user_id, file_id)
if old_avatar_file_id:
current_app.delete_internal('files', _id=old_avatar_file_id)
return _get_file_link(file_id)

View File

@ -2,146 +2,107 @@ import copy
import json import json
from eve.utils import parse_request from eve.utils import parse_request
from werkzeug import exceptions as wz_exceptions from flask import current_app, g
from pillar import current_app
from pillar.api.users.routes import log from pillar.api.users.routes import log
import pillar.api.users.avatar from pillar.api.utils.authorization import user_has_role
import pillar.auth from werkzeug.exceptions import Forbidden
USER_EDITABLE_FIELDS = {'full_name', 'username', 'email', 'settings'}
# These fields nobody is allowed to touch directly, not even admins.
USER_ALWAYS_RESTORE_FIELDS = {'auth'}
def before_replacing_user(request, lookup): def before_replacing_user(request, lookup):
"""Prevents changes to any field of the user doc, except USER_EDITABLE_FIELDS.""" """Loads the auth field from the database, preventing any changes."""
# Find the user that is being replaced # Find the user that is being replaced
req = parse_request('users') req = parse_request('users')
req.projection = json.dumps({key: 0 for key in USER_EDITABLE_FIELDS}) req.projection = json.dumps({'auth': 1})
original = current_app.data.find_one('users', req, **lookup) original = current_app.data.find_one('users', req, **lookup)
# Make sure that the replacement has a valid auth field. # Make sure that the replacement has a valid auth field.
put_data = request.get_json() updates = request.get_json()
if put_data is None: assert updates is request.get_json() # We should get a ref to the cached JSON, and not a copy.
raise wz_exceptions.BadRequest('No JSON data received')
# We should get a ref to the cached JSON, and not a copy. This will allow us to if 'auth' in original:
# modify the cached JSON so that Eve sees our modifications. updates['auth'] = copy.deepcopy(original['auth'])
assert put_data is request.get_json()
# Reset fields that shouldn't be edited to their original values. This is only
# needed when users are editing themselves; admins are allowed to edit much more.
if not pillar.auth.current_user.has_cap('admin'):
for db_key, db_value in original.items():
if db_key[0] == '_' or db_key in USER_EDITABLE_FIELDS:
continue
if db_key in original:
put_data[db_key] = copy.deepcopy(original[db_key])
# Remove fields added by this PUT request, except when they are user-editable.
for put_key in list(put_data.keys()):
if put_key[0] == '_' or put_key in USER_EDITABLE_FIELDS:
continue
if put_key not in original:
del put_data[put_key]
# Always restore those fields
for db_key in USER_ALWAYS_RESTORE_FIELDS:
if db_key in original:
put_data[db_key] = copy.deepcopy(original[db_key])
else: else:
del put_data[db_key] updates.pop('auth', None)
# Regular users should always have an email address
if 'service' not in put_data.get('roles', ()):
if not put_data.get('email'):
raise wz_exceptions.UnprocessableEntity(
'email field must be given')
def push_updated_user_to_search(user, original): def push_updated_user_to_algolia(user, original):
""" """Push an update to the Algolia index when a user item is updated"""
Push an update to the Search index when a user
item is updated
"""
from pillar.celery import search_index_tasks as searchindex from algoliasearch.client import AlgoliaException
from pillar.api.utils.algolia import algolia_index_user_save
searchindex.updated_user.delay(str(user['_id'])) try:
algolia_index_user_save(user)
except AlgoliaException as ex:
log.warning('Unable to push user info to Algolia for user "%s", id=%s; %s',
user.get('username'), user.get('_id'), ex)
def send_blinker_signal_roles_changed(user, original): def send_blinker_signal_roles_changed(user, original):
""" """Sends a Blinker signal that the user roles were changed, so others can respond."""
Sends a Blinker signal that the user roles were
changed, so others can respond.
"""
current_roles = set(user.get('roles', [])) if user.get('roles') == original.get('roles'):
original_roles = set(original.get('roles', []))
if current_roles == original_roles:
return return
from pillar.api.service import signal_user_changed_role from pillar.api.service import signal_user_changed_role
log.info('User %s changed roles to %s, sending Blinker signal', log.info('User %s changed roles to %s, sending Blinker signal',
user.get('_id'), current_roles) user.get('_id'), user.get('roles'))
signal_user_changed_role.send(current_app, user=user) signal_user_changed_role.send(current_app, user=user)
def check_user_access(request, lookup): def check_user_access(request, lookup):
"""Modifies the lookup dict to limit returned user info.""" """Modifies the lookup dict to limit returned user info."""
user = pillar.auth.get_current_user() # No access when not logged in.
current_user = g.get('current_user')
current_user_id = current_user['user_id'] if current_user else None
# Admins can do anything and get everything, except the 'auth' block. # Admins can do anything and get everything, except the 'auth' block.
if user.has_cap('admin'): if user_has_role(u'admin'):
return return
if not lookup and user.is_anonymous: if not lookup and not current_user:
raise wz_exceptions.Forbidden() raise Forbidden()
# Add a filter to only return the current user. # Add a filter to only return the current user.
if '_id' not in lookup: if '_id' not in lookup:
lookup['_id'] = user.user_id lookup['_id'] = current_user['user_id']
def check_put_access(request, lookup): def check_put_access(request, lookup):
"""Only allow PUT to the current user, or all users if admin.""" """Only allow PUT to the current user, or all users if admin."""
user = pillar.auth.get_current_user() if user_has_role(u'admin'):
if user.has_cap('admin'):
return return
if user.is_anonymous: current_user = g.get('current_user')
raise wz_exceptions.Forbidden() if not current_user:
raise Forbidden()
if str(lookup['_id']) != str(user.user_id): if str(lookup['_id']) != str(current_user['user_id']):
raise wz_exceptions.Forbidden() raise Forbidden()
def after_fetching_user(user: dict) -> None: def after_fetching_user(user):
# Deny access to auth block; authentication stuff is managed by # Deny access to auth block; authentication stuff is managed by
# custom end-points. # custom end-points.
user.pop('auth', None) user.pop('auth', None)
current_user = pillar.auth.get_current_user() current_user = g.get('current_user')
current_user_id = current_user['user_id'] if current_user else None
# Admins can do anything and get everything, except the 'auth' block. # Admins can do anything and get everything, except the 'auth' block.
if current_user.has_cap('admin'): if user_has_role(u'admin'):
return return
# Only allow full access to the current user. # Only allow full access to the current user.
if current_user.is_authenticated and str(user['_id']) == str(current_user.user_id): if str(user['_id']) == str(current_user_id):
return return
# Remove all fields except public ones. # Remove all fields except public ones.
public_fields = {'full_name', 'username', 'email', 'extension_props_public', 'badges'} public_fields = {'full_name', 'email'}
for field in list(user.keys()): for field in list(user.keys()):
if field not in public_fields: if field not in public_fields:
del user[field] del user[field]
@ -160,46 +121,3 @@ def post_GET_user(request, payload):
# json_data['computed_permissions'] = \ # json_data['computed_permissions'] = \
# compute_permissions(json_data['_id'], app.data.driver) # compute_permissions(json_data['_id'], app.data.driver)
payload.data = json.dumps(json_data) payload.data = json.dumps(json_data)
def grant_org_roles(user_doc):
"""Handle any organization this user may be part of."""
email = user_doc.get('email')
if not email:
log.info('Unable to check new user for organization membership, no email address: %r',
user_doc)
return
org_roles = current_app.org_manager.unknown_member_roles(email)
if not org_roles:
log.debug('No organization roles for user %r', email)
return
log.info('Granting organization roles %r to user %r', org_roles, email)
new_roles = set(user_doc.get('roles') or []) | org_roles
user_doc['roles'] = list(new_roles)
def before_inserting_users(user_docs):
"""Grants organization roles to the created users."""
for user_doc in user_docs:
grant_org_roles(user_doc)
def after_inserting_users(user_docs):
"""Moves the users from the unknown_members to the members list of their organizations."""
om = current_app.org_manager
for user_doc in user_docs:
user_id = user_doc.get('_id')
user_email = user_doc.get('email')
if not user_id or not user_email:
# Missing emails can happen when creating a service account, it's fine.
log.info('User created with _id=%r and email=%r, unable to check organizations',
user_id, user_email)
continue
om.make_member_known(user_id, user_email)

View File

@ -1,45 +0,0 @@
"""User patching support."""
import logging
import bson
from flask import Blueprint
import werkzeug.exceptions as wz_exceptions
from pillar import current_app
from pillar.auth import current_user
from pillar.api.utils import authorization, jsonify, remove_private_keys
from pillar.api import patch_handler
log = logging.getLogger(__name__)
patch_api_blueprint = Blueprint('users.patch', __name__)
class UserPatchHandler(patch_handler.AbstractPatchHandler):
item_name = 'user'
@authorization.require_login()
def patch_set_username(self, user_id: bson.ObjectId, patch: dict):
"""Updates a user's username."""
if user_id != current_user.user_id:
log.info('User %s tried to change username of user %s',
current_user.user_id, user_id)
raise wz_exceptions.Forbidden('You may only change your own username')
new_username = patch['username']
log.info('User %s uses PATCH to set username to %r', current_user.user_id, new_username)
users_coll = current_app.db('users')
db_user = users_coll.find_one({'_id': user_id})
db_user['username'] = new_username
# Save via Eve to check the schema and trigger update hooks.
response, _, _, status = current_app.put_internal(
'users', remove_private_keys(db_user), _id=user_id)
return jsonify(response), status
def setup_app(app, url_prefix):
UserPatchHandler(patch_api_blueprint)
app.register_api_blueprint(patch_api_blueprint, url_prefix=url_prefix)

View File

@ -1,13 +1,9 @@
import logging import logging
from eve.methods.get import get from eve.methods.get import get
from flask import Blueprint, request from flask import g, Blueprint
import werkzeug.exceptions as wz_exceptions from pillar.api.utils import jsonify
from pillar import current_app
from pillar.api import utils
from pillar.api.utils.authorization import require_login from pillar.api.utils.authorization import require_login
from pillar.auth import current_user
log = logging.getLogger(__name__) log = logging.getLogger(__name__)
blueprint_api = Blueprint('users_api', __name__) blueprint_api = Blueprint('users_api', __name__)
@ -16,129 +12,8 @@ blueprint_api = Blueprint('users_api', __name__)
@blueprint_api.route('/me') @blueprint_api.route('/me')
@require_login() @require_login()
def my_info(): def my_info():
eve_resp, _, _, status, _ = get('users', {'_id': current_user.user_id}) eve_resp, _, _, status, _ = get('users', {'_id': g.current_user['user_id']})
resp = utils.jsonify(eve_resp['_items'][0], status=status) resp = jsonify(eve_resp['_items'][0], status=status)
return resp return resp
@blueprint_api.route('/video/<video_id>/progress')
@require_login()
def get_video_progress(video_id: str):
"""Return video progress information.
Either a `204 No Content` is returned (no information stored),
or a `200 Ok` with JSON from Eve's 'users' schema, from the key
video.view_progress.<video_id>.
"""
# Validation of the video ID; raises a BadRequest when it's not an ObjectID.
# This isn't strictly necessary, but it makes this function behave symmetrically
# to the set_video_progress() function.
utils.str2id(video_id)
users_coll = current_app.db('users')
user_doc = users_coll.find_one(current_user.user_id, projection={'nodes.view_progress': True})
try:
progress = user_doc['nodes']['view_progress'][video_id]
except KeyError:
return '', 204
if not progress:
return '', 204
return utils.jsonify(progress)
@blueprint_api.route('/video/<video_id>/progress', methods=['POST'])
@require_login()
def set_video_progress(video_id: str):
"""Save progress information about a certain video.
Expected parameters:
- progress_in_sec: float number of seconds
- progress_in_perc: integer percentage of video watched (interval [0-100])
"""
my_log = log.getChild('set_video_progress')
my_log.debug('Setting video progress for user %r video %r', current_user.user_id, video_id)
# Constructing this response requires an active app, and thus can't be done on module load.
no_video_response = utils.jsonify({'_message': 'No such video'}, status=404)
try:
progress_in_sec = float(request.form['progress_in_sec'])
progress_in_perc = int(request.form['progress_in_perc'])
except KeyError as ex:
my_log.debug('Missing POST field in request: %s', ex)
raise wz_exceptions.BadRequest(f'Missing form field: {ex}')
except ValueError as ex:
my_log.debug('Invalid value for POST field in request: %s', ex)
raise wz_exceptions.BadRequest(f'Invalid value for field: {ex}')
users_coll = current_app.db('users')
nodes_coll = current_app.db('nodes')
# First check whether this is actually an existing video
video_oid = utils.str2id(video_id)
video_doc = nodes_coll.find_one(video_oid, projection={
'node_type': True,
'properties.content_type': True,
'properties.file': True,
})
if not video_doc:
my_log.debug('Node %r not found, unable to set progress for user %r',
video_oid, current_user.user_id)
return no_video_response
try:
is_video = (video_doc['node_type'] == 'asset'
and video_doc['properties']['content_type'] == 'video')
except KeyError:
is_video = False
if not is_video:
my_log.info('Node %r is not a video, unable to set progress for user %r',
video_oid, current_user.user_id)
# There is no video found at this URL, so act as if it doesn't even exist.
return no_video_response
# Compute the progress
percent = min(100, max(0, progress_in_perc))
progress = {
'progress_in_sec': progress_in_sec,
'progress_in_percent': percent,
'last_watched': utils.utcnow(),
}
# After watching a certain percentage of the video, we consider it 'done'
#
# Total Credit start Total Credit Percent
# HH:MM:SS HH:MM:SS sec sec of duration
# Sintel 00:14:48 00:12:24 888 744 83.78%
# Tears of Steel 00:12:14 00:09:49 734 589 80.25%
# Cosmos Laundro 00:12:10 00:10:05 730 605 82.88%
# Agent 327 00:03:51 00:03:26 231 206 89.18%
# Caminandes 3 00:02:30 00:02:18 150 138 92.00%
# Glass Half 00:03:13 00:02:52 193 172 89.12%
# Big Buck Bunny 00:09:56 00:08:11 596 491 82.38%
# Elephants Drea 00:10:54 00:09:25 654 565 86.39%
#
# Median 85.09%
# Average 85.75%
#
# For training videos, marking them as done at 85% of the video may be a bit
# early, since those probably won't have (long) credits. This is why we
# stick to 90% here.
if percent >= 90:
progress['done'] = True
# Setting each property individually prevents us from overwriting any
# existing {done: true} fields.
updates = {f'nodes.view_progress.{video_id}.{k}': v
for k, v in progress.items()}
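# Illustrative shape of the resulting update document (hypothetical video ID):
#     {'nodes.view_progress.5c1a...e17d.progress_in_sec': 12.5,
#      'nodes.view_progress.5c1a...e17d.progress_in_percent': 40,
#      'nodes.view_progress.5c1a...e17d.last_watched': <utcnow()>}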
result = users_coll.update_one({'_id': current_user.user_id},
{'$set': updates})
if result.matched_count == 0:
my_log.error('Current user %r could not be updated', current_user.user_id)
raise wz_exceptions.InternalServerError('Unable to find logged-in user')
return '', 204

View File

@ -1,60 +1,30 @@
import base64
import copy import copy
import datetime
import functools
import hashlib import hashlib
import json import json
import urllib
import datetime
import functools
import logging import logging
import random
import typing
import urllib.request, urllib.parse, urllib.error
import warnings
import bson.objectid import bson.objectid
import bson.tz_util
from eve import RFC1123_DATE_FORMAT from eve import RFC1123_DATE_FORMAT
from flask import current_app from flask import current_app
from werkzeug import exceptions as wz_exceptions from werkzeug import exceptions as wz_exceptions
import pymongo.results import pymongo.results
__all__ = ('remove_private_keys', 'PillarJSONEncoder')
log = logging.getLogger(__name__) log = logging.getLogger(__name__)
def node_setattr(node, key, value):
"""Sets a node property by dotted key.
Modifies the node in-place. Deletes None values.
:type node: dict
:type key: str
:param value: the value to set, or None to delete the key.
"""
set_on = node
while key and '.' in key:
head, key = key.split('.', 1)
set_on = set_on[head]
if value is None:
set_on.pop(key, None)
else:
set_on[key] = value
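# Illustrative usage (not part of the original module):
#     node_setattr(node, 'properties.status', 'published')
#     # sets node['properties']['status'] = 'published'
#     node_setattr(node, 'properties.status', None)
#     # removes the 'status' key from node['properties']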
def remove_private_keys(document): def remove_private_keys(document):
"""Removes any key that starts with an underscore, returns result as new """Removes any key that starts with an underscore, returns result as new
dictionary. dictionary.
""" """
def do_remove(doc):
for key in list(doc.keys()):
if key.startswith('_'):
del doc[key]
elif isinstance(doc[key], dict):
doc[key] = do_remove(doc[key])
return doc
doc_copy = copy.deepcopy(document) doc_copy = copy.deepcopy(document)
do_remove(doc_copy) for key in list(doc_copy.keys()):
if key.startswith('_'):
del doc_copy[key]
try: try:
del doc_copy['allowed_methods'] del doc_copy['allowed_methods']
@ -64,39 +34,6 @@ def remove_private_keys(document):
return doc_copy return doc_copy
def pretty_duration(seconds: typing.Union[None, int, float]):
if seconds is None:
return ''
seconds = round(seconds)
hours, seconds = divmod(seconds, 3600)
minutes, seconds = divmod(seconds, 60)
if hours > 0:
return f'{hours:02}:{minutes:02}:{seconds:02}'
else:
return f'{minutes:02}:{seconds:02}'
def pretty_duration_fractional(seconds: typing.Union[None, int, float]):
if seconds is None:
return ''
# Remove fraction of seconds from the seconds so that the rest is done as integers.
seconds, fracs = divmod(seconds, 1)
hours, seconds = divmod(int(seconds), 3600)
minutes, seconds = divmod(seconds, 60)
msec = int(round(fracs * 1000))
if msec == 0:
msec_str = ''
else:
msec_str = f'.{msec:03}'
if hours > 0:
return f'{hours:02}:{minutes:02}:{seconds:02}{msec_str}'
else:
return f'{minutes:02}:{seconds:02}{msec_str}'
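# Illustrative examples (not part of the original module):
#     pretty_duration(3661)             # -> '01:01:01'
#     pretty_duration(75)               # -> '01:15'
#     pretty_duration_fractional(61.5)  # -> '01:01.500'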
class PillarJSONEncoder(json.JSONEncoder): class PillarJSONEncoder(json.JSONEncoder):
"""JSON encoder with support for Pillar resources.""" """JSON encoder with support for Pillar resources."""
@ -104,9 +41,6 @@ class PillarJSONEncoder(json.JSONEncoder):
if isinstance(obj, datetime.datetime): if isinstance(obj, datetime.datetime):
return obj.strftime(RFC1123_DATE_FORMAT) return obj.strftime(RFC1123_DATE_FORMAT)
if isinstance(obj, datetime.timedelta):
return pretty_duration(obj.total_seconds())
if isinstance(obj, bson.ObjectId): if isinstance(obj, bson.ObjectId):
return str(obj) return str(obj)
@ -131,29 +65,16 @@ def jsonify(mongo_doc, status=200, headers=None):
headers=headers) headers=headers)
def bsonify(mongo_doc, status=200, headers=None):
"""BSonifies a Mongo document into a Flask response object."""
import bson
data = bson.BSON.encode(mongo_doc)
return current_app.response_class(data,
mimetype='application/bson',
status=status,
headers=headers)
def skip_when_testing(func): def skip_when_testing(func):
"""Decorator, skips the decorated function when app.config['TESTING']""" """Decorator, skips the decorated function when app.config['TESTING']"""
@functools.wraps(func) @functools.wraps(func)
def wrapper(*args, **kwargs): def wrapper(*args, **kwargs):
if current_app.config['TESTING']: if current_app.config['TESTING']:
log.debug('Skipping call to %s(...) due to TESTING', func.__name__) log.debug('Skipping call to %s(...) due to TESTING', func.func_name)
return None return None
return func(*args, **kwargs) return func(*args, **kwargs)
return wrapper return wrapper
@ -169,9 +90,11 @@ def project_get_node_type(project_document, node_type_node_name):
if node_type['name'] == node_type_node_name), None) if node_type['name'] == node_type_node_name), None)
def str2id(document_id: str) -> bson.ObjectId: def str2id(document_id):
"""Returns the document ID as ObjectID, or raises a BadRequest exception. """Returns the document ID as ObjectID, or raises a BadRequest exception.
:type document_id: str
:rtype: bson.ObjectId
:raises: wz_exceptions.BadRequest :raises: wz_exceptions.BadRequest
""" """
@ -181,128 +104,13 @@ def str2id(document_id: str) -> bson.ObjectId:
try: try:
return bson.ObjectId(document_id) return bson.ObjectId(document_id)
except (bson.objectid.InvalidId, TypeError): except bson.objectid.InvalidId:
log.debug('str2id(%r): Invalid Object ID', document_id) log.debug('str2id(%r): Invalid Object ID', document_id)
raise wz_exceptions.BadRequest('Invalid object ID %r' % document_id) raise wz_exceptions.BadRequest('Invalid object ID %r' % document_id)
def gravatar(email: str, size=64) -> typing.Optional[str]: def gravatar(email, size=64):
"""Deprecated: return the Gravatar URL.
.. deprecated::
Use of Gravatar is deprecated, in favour of our self-hosted avatars.
See pillar.api.users.avatar.url(user).
"""
warnings.warn('pillar.api.utils.gravatar() is deprecated, '
'use pillar.api.users.avatar.url() instead',
category=DeprecationWarning)
if email is None:
return None
parameters = {'s': str(size), 'd': 'mm'} parameters = {'s': str(size), 'd': 'mm'}
return "https://www.gravatar.com/avatar/" + \ return "https://www.gravatar.com/avatar/" + \
hashlib.md5(email.encode()).hexdigest() + \ hashlib.md5(str(email)).hexdigest() + \
"?" + urllib.parse.urlencode(parameters) "?" + urllib.urlencode(parameters)
class MetaFalsey(type):
def __bool__(cls):
return False
class DoesNotExistMeta(MetaFalsey):
def __repr__(cls) -> str:
return 'DoesNotExist'
class DoesNotExist(object, metaclass=DoesNotExistMeta):
"""Returned as value by doc_diff if a value does not exist."""
def doc_diff(doc1, doc2, *, falsey_is_equal=True, superkey: str = None):
"""Generator, yields differences between documents.
Yields changes as (key, value in doc1, value in doc2) tuples, where
the value can also be the DoesNotExist class. Does not report changed
private keys (i.e. the standard Eve keys starting with underscores).
Sub-documents (i.e. dicts) are recursed, and dot notation is used
for the keys if changes are found.
If falsey_is_equal=True, all Falsey values compare as equal, i.e. this
function won't report differences between DoesNotExist, False, '', and 0.
"""
def is_private(key):
return str(key).startswith('_')
def combine_key(some_key):
"""Combine this key with the superkey.
Keep the key type the same, unless we have to combine with a superkey.
"""
if not superkey:
return some_key
if isinstance(some_key, str) and some_key[0] == '[':
return f'{superkey}{some_key}'
return f'{superkey}.{some_key}'
if doc1 is doc2:
return
if falsey_is_equal and not bool(doc1) and not bool(doc2):
return
if isinstance(doc1, dict) and isinstance(doc2, dict):
for key in set(doc1.keys()).union(set(doc2.keys())):
if is_private(key):
continue
val1 = doc1.get(key, DoesNotExist)
val2 = doc2.get(key, DoesNotExist)
yield from doc_diff(val1, val2,
falsey_is_equal=falsey_is_equal,
superkey=combine_key(key))
return
if isinstance(doc1, list) and isinstance(doc2, list):
for idx in range(max(len(doc1), len(doc2))):
try:
item1 = doc1[idx]
except IndexError:
item1 = DoesNotExist
try:
item2 = doc2[idx]
except IndexError:
item2 = DoesNotExist
subkey = f'[{idx}]'
if item1 is DoesNotExist or item2 is DoesNotExist:
yield combine_key(subkey), item1, item2
else:
yield from doc_diff(item1, item2,
falsey_is_equal=falsey_is_equal,
superkey=combine_key(subkey))
return
if doc1 != doc2:
yield superkey, doc1, doc2
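# Illustrative example (not part of the original module): with the default
# falsey_is_equal=True, '' and a missing key compare as equal, so only the
# changed field is reported:
#     list(doc_diff({'name': 'Suzanne', 'note': ''}, {'name': 'Monkey'}))
#     # -> [('name', 'Suzanne', 'Monkey')]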
def random_etag() -> str:
"""Random string usable as etag."""
randbytes = random.getrandbits(256).to_bytes(32, 'big')
return base64.b64encode(randbytes)[:-1].decode()
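# Illustrative note (not part of the original module): 256 random bits become
# 32 bytes, whose base64 encoding is 44 characters ending in '='; stripping
# that padding yields a 43-character etag string.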
def utcnow() -> datetime.datetime:
"""Construct timezone-aware 'now' in UTC with millisecond precision."""
now = datetime.datetime.now(tz=bson.tz_util.utc)
# MongoDB stores in millisecond precision, so truncate the microseconds.
# This way the returned datetime can be round-tripped via MongoDB and stay the same.
trunc_now = now.replace(microsecond=now.microsecond - (now.microsecond % 1000))
return trunc_now
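# Illustrative note (not part of the original module): a 'now' of
# 12:34:56.789123 UTC is truncated to 12:34:56.789000, so the value survives a
# MongoDB round-trip (millisecond precision) unchanged.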

View File

@ -1,33 +1,98 @@
import logging import logging
from bson import ObjectId from bson import ObjectId
from flask import current_app
from pillar import current_app from pillar.api.file_storage import generate_link
from . import skip_when_testing from . import skip_when_testing
log = logging.getLogger(__name__) log = logging.getLogger(__name__)
INDEX_ALLOWED_USER_ROLES = {'admin', 'subscriber', 'demo'}
INDEX_ALLOWED_NODE_TYPES = {'asset', 'texture', 'group', 'hdri'}
@skip_when_testing @skip_when_testing
def index_user_save(to_index_user: dict): def algolia_index_user_save(user):
index_users = current_app.algolia_index_users if current_app.algolia_index_users is None:
if not index_users:
log.debug('No Algolia index defined, so nothing to do.')
return return
# Strip unneeded roles
if 'roles' in user:
roles = set(user['roles']).intersection(INDEX_ALLOWED_USER_ROLES)
else:
roles = set()
if current_app.algolia_index_users:
# Create or update Algolia index for the user # Create or update Algolia index for the user
index_users.save_object(to_index_user) current_app.algolia_index_users.save_object({
'objectID': user['_id'],
'full_name': user['full_name'],
'username': user['username'],
'roles': list(roles),
'groups': user['groups'],
'email': user['email']
})
@skip_when_testing @skip_when_testing
def index_node_save(node_to_index): def algolia_index_node_save(node):
if not current_app.algolia_index_nodes: if not current_app.algolia_index_nodes:
return return
current_app.algolia_index_nodes.save_object(node_to_index) if node['node_type'] not in INDEX_ALLOWED_NODE_TYPES:
return
# If a nodes does not have status published, do not index
if node['properties'].get('status') != 'published':
return
projects_collection = current_app.data.driver.db['projects']
project = projects_collection.find_one({'_id': ObjectId(node['project'])})
users_collection = current_app.data.driver.db['users']
user = users_collection.find_one({'_id': ObjectId(node['user'])})
node_ob = {
'objectID': node['_id'],
'name': node['name'],
'project': {
'_id': project['_id'],
'name': project['name']
},
'created': node['_created'],
'updated': node['_updated'],
'node_type': node['node_type'],
'user': {
'_id': user['_id'],
'full_name': user['full_name']
},
}
if 'description' in node and node['description']:
node_ob['description'] = node['description']
if 'picture' in node and node['picture']:
files_collection = current_app.data.driver.db['files']
lookup = {'_id': ObjectId(node['picture'])}
picture = files_collection.find_one(lookup)
if picture['backend'] == 'gcs':
variation_t = next((item for item in picture['variations'] \
if item['size'] == 't'), None)
if variation_t:
node_ob['picture'] = generate_link(picture['backend'],
variation_t['file_path'], project_id=str(picture['project']),
is_public=True)
# If the node has world permissions, compute the Free permission
if 'permissions' in node and 'world' in node['permissions']:
if 'GET' in node['permissions']['world']:
node_ob['is_free'] = True
# Append the media key if the node is of node_type 'asset'
if node['node_type'] == 'asset':
node_ob['media'] = node['properties']['content_type']
# Add tags
if 'tags' in node['properties']:
node_ob['tags'] = node['properties']['tags']
current_app.algolia_index_nodes.save_object(node_ob)
@skip_when_testing @skip_when_testing
def index_node_delete(delete_id): def algolia_index_node_delete(node):
if current_app.algolia_index_nodes is None: if current_app.algolia_index_nodes is None:
return return
current_app.algolia_index_nodes.delete_object(delete_id) current_app.algolia_index_nodes.delete_object(node['_id'])

View File

@ -5,105 +5,18 @@ unique usernames from emails. Calls out to the pillar_server.modules.blender_id
module for Blender ID communication. module for Blender ID communication.
""" """
import base64
import datetime
import hmac
import hashlib
import logging import logging
import typing import datetime
import bson from bson import tz_util
from flask import g, current_app, session from flask import g
from flask import request from flask import request
from werkzeug import exceptions as wz_exceptions from flask import current_app
from pillar.api.utils import remove_private_keys, utcnow
log = logging.getLogger(__name__) log = logging.getLogger(__name__)
# Construction is done when requested, since constructing a UserClass instance
# requires an application context to look up capabilities. We set the initial
# value to a not-None singleton to be able to differentiate between
# g.current_user set to "not logged in" or "uninitialised CLI_USER".
CLI_USER = ...
def validate_token():
def force_cli_user():
"""Sets g.current_user to the CLI_USER object.
This is used as a marker to avoid authorization checks and just allow everything.
"""
global CLI_USER
from pillar.auth import UserClass
if CLI_USER is ...:
CLI_USER = UserClass.construct('CLI', {
'_id': 'CLI',
'groups': [],
'roles': {'admin'},
'email': 'local@nowhere',
'username': 'CLI',
})
log.info('CONSTRUCTED CLI USER %s of type %s', id(CLI_USER), id(type(CLI_USER)))
log.info('Logging in as CLI_USER (%s) of type %s, circumventing authentication.',
id(CLI_USER), id(type(CLI_USER)))
g.current_user = CLI_USER
def find_user_in_db(user_info: dict, provider='blender-id') -> dict:
"""Find the user in our database, creating/updating the returned document where needed.
First, search for the user using its id from the provider, then try to look the user up via the
email address.
Does NOT update the user in the database.
:param user_info: Information (id, email and full_name) from the auth provider
:param provider: One of the supported providers
"""
users = current_app.data.driver.db['users']
user_id = user_info['id']
query = {'$or': [
{'auth': {'$elemMatch': {
'user_id': str(user_id),
'provider': provider}}},
{'email': user_info['email']},
]}
log.debug('Querying: %s', query)
db_user = users.find_one(query)
if db_user:
log.debug('User with %s id %s already in our database, updating with info from %s',
provider, user_id, provider)
db_user['email'] = user_info['email']
# Find out if an auth entry for the current provider already exists
provider_entry = [element for element in db_user['auth'] if element['provider'] == provider]
if not provider_entry:
db_user['auth'].append({
'provider': provider,
'user_id': str(user_id),
'token': ''})
else:
log.debug('User %r not yet in our database, creating a new one.', user_id)
db_user = create_new_user_document(
email=user_info['email'],
user_id=user_id,
username=user_info['full_name'],
provider=provider)
db_user['username'] = make_unique_username(user_info['email'])
if not db_user['full_name']:
db_user['full_name'] = db_user['username']
return db_user
def validate_token(*, force=False) -> bool:
"""Validate the token provided in the request and populate the current_user """Validate the token provided in the request and populate the current_user
flask.g object, so that permissions and access to a resource can be defined flask.g object, so that permissions and access to a resource can be defined
from it. from it.
@ -111,44 +24,26 @@ def validate_token(*, force=False) -> bool:
When the token is successfully validated, sets `g.current_user` to contain When the token is successfully validated, sets `g.current_user` to contain
the user information, otherwise it is set to None. the user information, otherwise it is set to None.
:param force: don't trust g.current_user and force a re-check. @returns True iff the user is logged in with a valid Blender ID token.
:returns: True iff the user is logged in with a valid Blender ID token.
""" """
import pillar.auth
# Trust a pre-existing g.current_user
if not force:
cur = getattr(g, 'current_user', None)
if cur is not None and cur.is_authenticated:
log.debug('skipping token check because current user is already set to %s', cur)
return True
auth_header = request.headers.get('Authorization') or ''
if request.authorization: if request.authorization:
token = request.authorization.username token = request.authorization.username
oauth_subclient = request.authorization.password oauth_subclient = request.authorization.password
elif auth_header.startswith('Bearer '):
token = auth_header[7:].strip()
oauth_subclient = ''
else: else:
# Check the session, the user might be logged in through Flask-Login. # Check the session, the user might be logged in through Flask-Login.
from pillar import auth
# The user has a logged-in session; trust only if this request passes a CSRF check. token = auth.get_blender_id_oauth_token()
# FIXME(Sybren): we should stop saving the token as 'user_id' in the session. token = auth.get_blender_id_oauth_token()
token = session.get('user_id') token = token[0]
if token:
log.debug('skipping token check because current user already has a session')
current_app.csrf.protect()
else:
token = pillar.auth.get_blender_id_oauth_token()
oauth_subclient = None oauth_subclient = None
if not token: if not token:
# If no authorization headers are provided, we are getting a request # If no authorization headers are provided, we are getting a request
# from a non logged in user. Proceed accordingly. # from a non logged in user. Proceed accordingly.
log.debug('No authentication headers, so not logged in.') log.debug('No authentication headers, so not logged in.')
g.current_user = pillar.auth.AnonymousUser() g.current_user = None
return False return False
return validate_this_token(token, oauth_subclient) is not None return validate_this_token(token, oauth_subclient) is not None
@ -161,14 +56,14 @@ def validate_this_token(token, oauth_subclient=None):
:rtype: dict :rtype: dict
""" """
from pillar.auth import UserClass, AnonymousUser, user_authenticated
g.current_user = None g.current_user = None
_delete_expired_tokens() _delete_expired_tokens()
# Check the users to see if there is one with this Blender ID token. # Check the users to see if there is one with this Blender ID token.
db_token = find_token(token, oauth_subclient) db_token = find_token(token, oauth_subclient)
if not db_token: if not db_token:
log.debug('Token %s not found in our local database.', token)
# If no valid token is found in our local database, we issue a new # If no valid token is found in our local database, we issue a new
# request to the Blender ID server to verify the validity of the token # request to the Blender ID server to verify the validity of the token
# passed via the HTTP header. We will get basic user info if the user # passed via the HTTP header. We will get basic user info if the user
@ -183,70 +78,37 @@ def validate_this_token(token, oauth_subclient=None):
if db_user is None: if db_user is None:
log.debug('Validation failed, user not logged in') log.debug('Validation failed, user not logged in')
g.current_user = AnonymousUser()
return None return None
g.current_user = UserClass.construct(token, db_user) g.current_user = {'user_id': db_user['_id'],
user_authenticated.send(g.current_user) 'groups': db_user['groups'],
'roles': set(db_user.get('roles', []))}
return db_user return db_user
def remove_token(token: str):
"""Removes the token from the database."""
tokens_coll = current_app.db('tokens')
token_hashed = hash_auth_token(token)
# TODO: remove matching on hashed tokens once all hashed tokens have expired.
lookup = {'$or': [{'token': token}, {'token_hashed': token_hashed}]}
del_res = tokens_coll.delete_many(lookup)
log.debug('Removed token %r, matched %d documents', token, del_res.deleted_count)
def find_token(token, is_subclient_token=False, **extra_filters): def find_token(token, is_subclient_token=False, **extra_filters):
"""Returns the token document, or None if it doesn't exist (or is expired).""" """Returns the token document, or None if it doesn't exist (or is expired)."""
tokens_coll = current_app.db('tokens') tokens_collection = current_app.data.driver.db['tokens']
token_hashed = hash_auth_token(token)
# TODO: remove matching on hashed tokens once all hashed tokens have expired. # TODO: remove expired tokens from collection.
lookup = {'$or': [{'token': token}, {'token_hashed': token_hashed}], lookup = {'token': token,
'is_subclient_token': True if is_subclient_token else {'$in': [False, None]}, 'is_subclient_token': True if is_subclient_token else {'$in': [False, None]},
'expire_time': {"$gt": utcnow()}} 'expire_time': {"$gt": datetime.datetime.now(tz=tz_util.utc)}}
lookup.update(extra_filters) lookup.update(extra_filters)
db_token = tokens_coll.find_one(lookup) db_token = tokens_collection.find_one(lookup)
return db_token return db_token
def hash_auth_token(token: str) -> str: def store_token(user_id, token, token_expiry, oauth_subclient_id=False):
"""Returns the hashed authentication token.
The token is hashed using HMAC and then base64-encoded.
"""
hmac_key = current_app.config['AUTH_TOKEN_HMAC_KEY']
token_hmac = hmac.new(hmac_key, msg=token.encode('utf8'), digestmod=hashlib.sha256)
digest = token_hmac.digest()
return base64.b64encode(digest).decode('ascii')
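A standalone sketch of the same hashing scheme, runnable outside the Flask app context; the key below is an example value, whereas Pillar reads it from the AUTH_TOKEN_HMAC_KEY setting.
import base64
import hashlib
import hmac

hmac_key = b'example-secret-key'  # stand-in for current_app.config['AUTH_TOKEN_HMAC_KEY']
token = 'some-oauth-token'
digest = hmac.new(hmac_key, msg=token.encode('utf8'), digestmod=hashlib.sha256).digest()
token_hashed = base64.b64encode(digest).decode('ascii')
print(token_hashed)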
def store_token(user_id,
token: str,
token_expiry,
oauth_subclient_id=False,
*,
org_roles: typing.Set[str] = frozenset(),
oauth_scopes: typing.Optional[typing.List[str]] = None,
):
"""Stores an authentication token. """Stores an authentication token.
:returns: the token document from MongoDB :returns: the token document from MongoDB
""" """
assert isinstance(token, str), 'token must be string type, not %r' % type(token) assert isinstance(token, (str, unicode)), 'token must be string type, not %r' % type(token)
token_data = { token_data = {
'user': user_id, 'user': user_id,
@ -255,10 +117,6 @@ def store_token(user_id,
} }
if oauth_subclient_id: if oauth_subclient_id:
token_data['is_subclient_token'] = True token_data['is_subclient_token'] = True
if org_roles:
token_data['org_roles'] = sorted(org_roles)
if oauth_scopes:
token_data['oauth_scopes'] = oauth_scopes
r, _, _, status = current_app.post_internal('tokens', token_data) r, _, _, status = current_app.post_internal('tokens', token_data)
@ -286,13 +144,13 @@ def create_new_user(email, username, user_id):
def create_new_user_document(email, user_id, username, provider='blender-id', def create_new_user_document(email, user_id, username, provider='blender-id',
token='', *, full_name=''): token=''):
"""Creates a new user document, without storing it in MongoDB. The token """Creates a new user document, without storing it in MongoDB. The token
parameter is a password in case provider is "local". parameter is a password in case provider is "local".
""" """
user_data = { user_data = {
'full_name': full_name or username, 'full_name': username,
'username': username, 'username': username,
'email': email, 'email': email,
'auth': [{ 'auth': [{
@ -345,103 +203,22 @@ def _delete_expired_tokens():
token_coll = current_app.data.driver.db['tokens'] token_coll = current_app.data.driver.db['tokens']
expiry_date = utcnow() - datetime.timedelta(days=7) now = datetime.datetime.now(tz_util.utc)
expiry_date = now - datetime.timedelta(days=7)
result = token_coll.delete_many({'expire_time': {"$lt": expiry_date}}) result = token_coll.delete_many({'expire_time': {"$lt": expiry_date}})
# log.debug('Deleted %i expired authentication tokens', result.deleted_count) # log.debug('Deleted %i expired authentication tokens', result.deleted_count)
def current_user_id() -> typing.Optional[bson.ObjectId]: def current_user_id():
"""None-safe fetching of user ID. Can return None itself, though.""" """None-safe fetching of user ID. Can return None itself, though."""
user = current_user() current_user = g.get('current_user') or {}
return user.user_id return current_user.get('user_id')
def current_user():
"""Returns the current user, or an AnonymousUser if not logged in.
:rtype: pillar.auth.UserClass
"""
import pillar.auth
user: pillar.auth.UserClass = g.get('current_user')
if user is None:
return pillar.auth.AnonymousUser()
return user
def setup_app(app): def setup_app(app):
@app.before_request @app.before_request
def validate_token_at_each_request(): def validate_token_at_each_request():
# Skip token validation if this is a static asset
# to avoid spamming Blender ID for no good reason
if request.path.startswith('/static/'):
return
validate_token() validate_token()
return None
def upsert_user(db_user):
"""Inserts/updates the user in MongoDB.
Retries a few times when there are uniqueness issues in the username.
:returns: the user's database ID and the status of the PUT/POST.
The status is 201 on insert, and 200 on update.
:type: (ObjectId, int)
"""
if 'subscriber' in db_user.get('groups', []):
log.error('Non-ObjectID string found in user.groups: %s', db_user)
raise wz_exceptions.InternalServerError(
'Non-ObjectID string found in user.groups: %s' % db_user)
if not db_user['full_name']:
# Blender ID doesn't need a full name, but we do.
db_user['full_name'] = db_user['username']
r = {}
for retry in range(5):
if '_id' in db_user:
# Update the existing user
attempted_eve_method = 'PUT'
db_id = db_user['_id']
r, _, _, status = current_app.put_internal('users', remove_private_keys(db_user),
_id=db_id)
if status == 422:
log.error('Status %i trying to PUT user %s with values %s, should not happen! %s',
status, db_id, remove_private_keys(db_user), r)
else:
# Create a new user, retry for non-unique usernames.
attempted_eve_method = 'POST'
r, _, _, status = current_app.post_internal('users', db_user)
if status not in {200, 201}:
log.error('Status %i trying to create user with values %s: %s',
status, db_user, r)
raise wz_exceptions.InternalServerError()
db_id = r['_id']
db_user.update(r) # update with database/eve-generated fields.
if status == 422:
# Probably non-unique username, so retry a few times with different usernames.
log.info('Error creating new user: %s', r)
username_issue = r.get('_issues', {}).get('username', '')
if 'not unique' in username_issue:
# Retry
db_user['username'] = make_unique_username(db_user['email'])
continue
# Saving was successful, or at least didn't break on a non-unique username.
break
else:
log.error('Unable to create new user %s: %s', db_user, r)
raise wz_exceptions.InternalServerError()
if status not in (200, 201):
log.error('internal response from %s to Eve: %r %r', attempted_eve_method, status, r)
raise wz_exceptions.InternalServerError()
return db_id, status
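Putting the helpers in this module together, a typical login flow could look roughly like the sketch below; the user_info values are invented and error handling is omitted.
# Hypothetical flow: info from the OAuth provider -> user document -> MongoDB.
user_info = {'id': 1234, 'email': 'jane@example.com', 'full_name': 'Jane Doe'}
db_user = find_user_in_db(user_info, provider='blender-id')  # find or build the user document
db_id, status = upsert_user(db_user)                         # 201 on insert, 200 on update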

View File

@ -1,6 +1,5 @@
import logging import logging
import functools import functools
import typing
from bson import ObjectId from bson import ObjectId
from flask import g from flask import g
@ -8,14 +7,13 @@ from flask import abort
from flask import current_app from flask import current_app
from werkzeug.exceptions import Forbidden from werkzeug.exceptions import Forbidden
CHECK_PERMISSIONS_IMPLEMENTED_FOR = {'projects', 'nodes', 'flamenco_jobs'} CHECK_PERMISSIONS_IMPLEMENTED_FOR = {'projects', 'nodes'}
log = logging.getLogger(__name__) log = logging.getLogger(__name__)
def check_permissions(collection_name: str, resource: dict, method: str, def check_permissions(collection_name, resource, method, append_allowed_methods=False,
append_allowed_methods=False, check_node_type=None):
check_node_type: typing.Optional[str] = None):
"""Check user permissions to access a node. We look up node permissions from """Check user permissions to access a node. We look up node permissions from
world to groups to users and match them with the computed user permissions. world to groups to users and match them with the computed user permissions.
If there is no match, we raise 403. If there is no match, we raise 403.
@ -29,12 +27,6 @@ def check_permissions(collection_name: str, resource: dict, method: str,
:param check_node_type: node type to check. Only valid when collection_name='projects'. :param check_node_type: node type to check. Only valid when collection_name='projects'.
:type check_node_type: str :type check_node_type: str
""" """
from pillar.auth import get_current_user
from .authentication import CLI_USER
if get_current_user() is CLI_USER:
log.debug('Short-circuiting check_permissions() for CLI user')
return
if not has_permissions(collection_name, resource, method, append_allowed_methods, if not has_permissions(collection_name, resource, method, append_allowed_methods,
check_node_type): check_node_type):
@ -53,8 +45,6 @@ def compute_allowed_methods(collection_name, resource, check_node_type=None):
:rtype: set :rtype: set
""" """
import pillar.auth
# Check some input values. # Check some input values.
if collection_name not in CHECK_PERMISSIONS_IMPLEMENTED_FOR: if collection_name not in CHECK_PERMISSIONS_IMPLEMENTED_FOR:
raise ValueError('compute_allowed_methods only implemented for %s, not for %s', raise ValueError('compute_allowed_methods only implemented for %s, not for %s',
@ -72,18 +62,15 @@ def compute_allowed_methods(collection_name, resource, check_node_type=None):
# Accumulate allowed methods from the user, group and world level. # Accumulate allowed methods from the user, group and world level.
allowed_methods = set() allowed_methods = set()
user = pillar.auth.get_current_user() current_user = g.current_user
if current_user:
if user.is_authenticated:
user_is_admin = is_admin(user)
# If the user is authenticated, proceed to compare the group permissions # If the user is authenticated, proceed to compare the group permissions
for permission in computed_permissions.get('groups', ()): for permission in computed_permissions.get('groups', ()):
if user_is_admin or permission['group'] in user.group_ids: if permission['group'] in current_user['groups']:
allowed_methods.update(permission['methods']) allowed_methods.update(permission['methods'])
for permission in computed_permissions.get('users', ()): for permission in computed_permissions.get('users', ()):
if user_is_admin or user.user_id == permission['user']: if current_user['user_id'] == permission['user']:
allowed_methods.update(permission['methods']) allowed_methods.update(permission['methods'])
# Check if the node is public or private. This must be set for non logged # Check if the node is public or private. This must be set for non logged
@ -95,9 +82,8 @@ def compute_allowed_methods(collection_name, resource, check_node_type=None):
return allowed_methods return allowed_methods
def has_permissions(collection_name: str, resource: dict, method: str, def has_permissions(collection_name, resource, method, append_allowed_methods=False,
append_allowed_methods=False, check_node_type=None):
check_node_type: typing.Optional[str] = None):
"""Check user permissions to access a node. We look up node permissions from """Check user permissions to access a node. We look up node permissions from
world to groups to users and match them with the computed user permissions. world to groups to users and match them with the computed user permissions.
@ -146,14 +132,6 @@ def compute_aggr_permissions(collection_name, resource, check_node_type=None):
if check_node_type is None: if check_node_type is None:
return project['permissions'] return project['permissions']
node_type_name = check_node_type node_type_name = check_node_type
elif 'node_type' not in resource:
# Neither a project, nor a node, therefore is another collection
projects_collection = current_app.data.driver.db['projects']
project = projects_collection.find_one(
ObjectId(resource['project']),
{'permissions': 1})
return project['permissions']
else: else:
# Not a project, so it's a node. # Not a project, so it's a node.
assert 'project' in resource assert 'project' in resource
@ -177,7 +155,7 @@ def compute_aggr_permissions(collection_name, resource, check_node_type=None):
project_permissions = project['permissions'] project_permissions = project['permissions']
# Find the node type from the project. # Find the node type from the project.
node_type = next((node_type for node_type in project.get('node_types', ()) node_type = next((node_type for node_type in project['node_types']
if node_type['name'] == node_type_name), None) if node_type['name'] == node_type_name), None)
if node_type is None: # This node type is not known, so doesn't give permissions. if node_type is None: # This node type is not known, so doesn't give permissions.
node_type_permissions = {} node_type_permissions = {}
@ -225,8 +203,6 @@ def merge_permissions(*args):
:returns: combined list of permissions. :returns: combined list of permissions.
""" """
from pillar.auth import current_user
if not args: if not args:
return {} return {}
@ -248,35 +224,25 @@ def merge_permissions(*args):
from0 = args[0].get(plural_name, []) from0 = args[0].get(plural_name, [])
from1 = args[1].get(plural_name, []) from1 = args[1].get(plural_name, [])
try:
asdict0 = {permission[field_name]: permission['methods'] for permission in from0} asdict0 = {permission[field_name]: permission['methods'] for permission in from0}
except KeyError:
log.exception('KeyError creating asdict0 for %r permissions; user=%s; args[0]=%r',
field_name, current_user.user_id, args[0])
asdict0 = {}
try:
asdict1 = {permission[field_name]: permission['methods'] for permission in from1} asdict1 = {permission[field_name]: permission['methods'] for permission in from1}
except KeyError:
log.exception('KeyError creating asdict1 for %r permissions; user=%s; args[1]=%r',
field_name, current_user.user_id, args[1])
asdict1 = {}
keys = set(asdict0.keys()).union(set(asdict1.keys())) keys = set(asdict0.keys() + asdict1.keys())
for key in maybe_sorted(keys): for key in maybe_sorted(keys):
methods0 = asdict0.get(key, []) methods0 = asdict0.get(key, [])
methods1 = asdict1.get(key, []) methods1 = asdict1.get(key, [])
methods = maybe_sorted(set(methods0).union(set(methods1))) methods = maybe_sorted(set(methods0).union(set(methods1)))
effective.setdefault(plural_name, []).append({field_name: key, 'methods': methods}) effective.setdefault(plural_name, []).append({field_name: key, u'methods': methods})
merge('user') merge(u'user')
merge('group') merge(u'group')
# Gather permissions for world # Gather permissions for world
world0 = args[0].get('world', []) world0 = args[0].get('world', [])
world1 = args[1].get('world', []) world1 = args[1].get('world', [])
world_methods = set(world0).union(set(world1)) world_methods = set(world0).union(set(world1))
if world_methods: if world_methods:
effective['world'] = maybe_sorted(world_methods) effective[u'world'] = maybe_sorted(world_methods)
# Recurse for longer merges # Recurse for longer merges
if len(args) > 2: if len(args) > 2:
@ -285,84 +251,39 @@ def merge_permissions(*args):
return effective return effective
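A small worked example of what merge_permissions() produces; the group and user IDs are placeholders (real ones are ObjectIds), and the exact list ordering depends on maybe_sorted().
project_perms = {
    'groups': [{'group': 'gid-1', 'methods': ['GET']}],
    'world': ['GET'],
}
node_perms = {
    'groups': [{'group': 'gid-1', 'methods': ['PUT']}],
    'users': [{'user': 'uid-9', 'methods': ['DELETE']}],
}
# merge_permissions(project_perms, node_perms) yields something like:
# {'groups': [{'group': 'gid-1', 'methods': ['GET', 'PUT']}],
#  'users': [{'user': 'uid-9', 'methods': ['DELETE']}],
#  'world': ['GET']}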
def require_login(*, require_roles=set(), def require_login(require_roles=set(),
require_cap='', require_all=False):
require_all=False,
redirect_to_login=False,
error_view=None):
"""Decorator that enforces users to authenticate. """Decorator that enforces users to authenticate.
Optionally only allows access to users with a certain role and/or capability. Optionally only allows access to users with a certain role.
Either check on roles or on a capability, but never on both. There is no
require_all check for capabilities; if you need to check for multiple
capabilities at once, it's a sign that you need to add another capability
and give it to everybody that needs it.
:param require_roles: set of roles. :param require_roles: set of roles.
:param require_cap: a capability.
:param require_all: :param require_all:
When False (the default): if the user's roles have a When False (the default): if the user's roles have a
non-empty intersection with the given roles, access is granted. non-empty intersection with the given roles, access is granted.
When True: require the user to have all given roles before access is When True: require the user to have all given roles before access is
granted. granted.
:param redirect_to_login: Determines the behaviour when the user is not
logged in. When False (the default), a 403 Forbidden response is
returned; this is suitable for API calls. When True, the user is
redirected to the login page; this is suitable for user-facing web
requests, and mimics the flask_login behaviour.
:param error_view: Callable that returns a Flask response object. This is
sent back to the client instead of the default 403 Forbidden.
""" """
from flask import request, redirect, url_for, Response
if not isinstance(require_roles, set): if not isinstance(require_roles, set):
raise TypeError(f'require_roles param should be a set, but is {type(require_roles)!r}') raise TypeError('require_roles param should be a set, but is a %r' % type(require_roles))
if not isinstance(require_cap, str):
raise TypeError(f'require_caps param should be a str, but is {type(require_cap)!r}')
if require_roles and require_cap:
raise ValueError('either use require_roles or require_cap, but not both')
if require_all and not require_roles: if require_all and not require_roles:
raise ValueError('require_login(require_all=True) cannot be used with empty require_roles.') raise ValueError('require_login(require_all=True) cannot be used with empty require_roles.')
def render_error() -> Response:
if error_view is None:
resp = Forbidden().get_response()
else:
resp = error_view()
resp.status_code = 403
return resp
def decorator(func): def decorator(func):
@functools.wraps(func) @functools.wraps(func)
def wrapper(*args, **kwargs): def wrapper(*args, **kwargs):
import pillar.auth if not user_matches_roles(require_roles, require_all):
if g.current_user is None:
current_user = pillar.auth.get_current_user()
if current_user.is_anonymous:
# We don't need to log at a higher level, as this is very common. # We don't need to log at a higher level, as this is very common.
# Many browsers first try to see whether authentication is needed # Many browsers first try to see whether authentication is needed
# at all, before sending the password. # at all, before sending the password.
log.debug('Unauthenticated access to %s attempted.', func) log.debug('Unauthenticated access to %s attempted.', func)
if redirect_to_login: else:
# Redirect using a 303 See Other, since even a POST log.warning('User %s is authenticated, but does not have required roles %s to '
# request should cause a GET on the login page. 'access %s', g.current_user['user_id'], require_roles, func)
return redirect(url_for('users.login', next=request.url), 303) abort(403)
return render_error()
if require_roles and not current_user.matches_roles(require_roles, require_all):
log.info('User %s is authenticated, but does not have required roles %s to '
'access %s', current_user.user_id, require_roles, func)
return render_error()
if require_cap and not current_user.has_cap(require_cap):
log.info('User %s is authenticated, but does not have required capability %s to '
'access %s', current_user.user_id, require_cap, func)
return render_error()
return func(*args, **kwargs) return func(*args, **kwargs)
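As a usage sketch of the newer decorator signature, a Flask view could be protected as below; the blueprint and endpoint names are invented, and require_login is assumed to be imported from this module.
from flask import Blueprint

example_bp = Blueprint('example', __name__)  # hypothetical blueprint

@example_bp.route('/admin/reindex')
@require_login(require_roles={'admin'})  # role-based check, 403 for anonymous users
def reindex():
    return 'reindex started'

@example_bp.route('/subscribers-only')
@require_login(require_cap='subscriber', redirect_to_login=True)  # capability-based check
def subscribers_only():
    return 'welcome'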
@ -405,36 +326,14 @@ def ab_testing(require_roles=set(),
def user_has_role(role, user=None): def user_has_role(role, user=None):
"""Returns True iff the user is logged in and has the given role.""" """Returns True iff the user is logged in and has the given role."""
import pillar.auth if user is None:
user = g.get('current_user')
if user is None: if user is None:
user = pillar.auth.get_current_user()
if user is not None and not isinstance(user, pillar.auth.UserClass):
raise TypeError(f'pillar.auth.current_user should be instance of UserClass, '
f'not {type(user)}')
elif not isinstance(user, pillar.auth.UserClass):
raise TypeError(f'user should be instance of UserClass, not {type(user)}')
if user.is_anonymous:
return False return False
return user.has_role(role) roles = user.get('roles') or ()
return role in roles
def user_has_cap(capability: str, user=None) -> bool:
"""Returns True iff the user is logged in and has the given capability."""
import pillar.auth
assert capability
if user is None:
user = pillar.auth.get_current_user()
if not isinstance(user, pillar.auth.UserClass):
raise TypeError(f'user should be instance of UserClass, not {type(user)}')
return user.has_cap(capability)
def user_matches_roles(require_roles=set(), def user_matches_roles(require_roles=set(),
@ -449,16 +348,25 @@ def user_matches_roles(require_roles=set(),
returning True. returning True.
""" """
import pillar.auth if not isinstance(require_roles, set):
raise TypeError('require_roles param should be a set, but is a %r' % type(require_roles))
user = pillar.auth.get_current_user() if require_all and not require_roles:
if not isinstance(user, pillar.auth.UserClass): raise ValueError('require_login(require_all=True) cannot be used with empty require_roles.')
raise TypeError(f'user should be instance of UserClass, not {type(user)}')
return user.matches_roles(require_roles, require_all) current_user = g.get('current_user')
if current_user is None:
return False
intersection = require_roles.intersection(current_user['roles'])
if require_all:
return len(intersection) == len(require_roles)
return not bool(require_roles) or bool(intersection)
def is_admin(user): def is_admin(user):
"""Returns True iff the given user has the admin capability.""" """Returns True iff the given user has the admin role."""
return user_has_cap('admin', user) return user_has_role(u'admin', user)

View File

@ -1,7 +1,5 @@
import datetime import datetime
from hashlib import md5 from hashlib import md5
import base64
from flask import current_app from flask import current_app
@ -19,20 +17,19 @@ def hash_file_path(file_path, expiry_timestamp=None):
if current_app.config['CDN_USE_URL_SIGNING']: if current_app.config['CDN_USE_URL_SIGNING']:
url_signing_key = current_app.config['CDN_URL_SIGNING_KEY'] url_signing_key = current_app.config['CDN_URL_SIGNING_KEY']
to_hash = domain_subfolder + file_path + url_signing_key hash_string = domain_subfolder + file_path + url_signing_key
if not expiry_timestamp: if not expiry_timestamp:
expiry_timestamp = datetime.datetime.now() + datetime.timedelta(hours=24) expiry_timestamp = datetime.datetime.now() + datetime.timedelta(hours=24)
expiry_timestamp = expiry_timestamp.strftime('%s') expiry_timestamp = expiry_timestamp.strftime('%s')
to_hash = expiry_timestamp + to_hash hash_string = expiry_timestamp + hash_string
if isinstance(to_hash, str):
to_hash = to_hash.encode()
expiry_timestamp = "," + str(expiry_timestamp) expiry_timestamp = "," + str(expiry_timestamp)
hashed_file_path = base64.b64encode(md5(to_hash).digest())[:-1].decode() hashed_file_path = md5(hash_string).digest().encode('base64')[:-1]
hashed_file_path = hashed_file_path.replace('+', '-').replace('/', '_') hashed_file_path = hashed_file_path.replace('+', '-')
hashed_file_path = hashed_file_path.replace('/', '_')
asset_url = asset_url + \ asset_url = asset_url + \
'?secure=' + \ '?secure=' + \
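The signing scheme above can be reproduced standalone as in the sketch below; the key, domain, subfolder and file path are all invented, and strftime('%s') assumes a POSIX platform, as in the code above.
import base64
import datetime
from hashlib import md5

url_signing_key = 'example-key'          # stand-in for CDN_URL_SIGNING_KEY
domain_subfolder = '/storage/'
file_path = 'ab/abcdef123456.jpg'

expiry = datetime.datetime.now() + datetime.timedelta(hours=24)
expiry_ts = expiry.strftime('%s')

to_hash = (expiry_ts + domain_subfolder + file_path + url_signing_key).encode()
token = base64.b64encode(md5(to_hash).digest())[:-1].decode()
token = token.replace('+', '-').replace('/', '_')

signed_url = ('https://cdn.example.com' + domain_subfolder + file_path
              + '?secure=' + token + ',' + expiry_ts)
print(signed_url)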

View File

@ -31,10 +31,7 @@ class Encoder:
options = dict(notifications=current_app.config['ZENCODER_NOTIFICATIONS_URL']) options = dict(notifications=current_app.config['ZENCODER_NOTIFICATIONS_URL'])
outputs = [{'format': v['format'], outputs = [{'format': v['format'],
'url': os.path.join(storage_base, v['file_path']), 'url': os.path.join(storage_base, v['file_path'])}
'upscale': False,
'size': '{width}x{height}'.format(**v),
}
for v in src_file['variations']] for v in src_file['variations']]
r = current_app.encoding_service_client.job.create(file_input, r = current_app.encoding_service_client.job.create(file_input,
outputs=outputs, outputs=outputs,

224 pillar/api/utils/gcs.py Normal file
View File

@ -0,0 +1,224 @@
import os
import time
import datetime
import logging
from bson import ObjectId
from gcloud.storage.client import Client
from gcloud.exceptions import NotFound
from flask import current_app, g
from werkzeug.local import LocalProxy
log = logging.getLogger(__name__)
def get_client():
"""Stores the GCS client on the global Flask object.
The GCS client is not user-specific anyway.
:rtype: Client
"""
_gcs = getattr(g, '_gcs_client', None)
if _gcs is None:
_gcs = g._gcs_client = Client()
return _gcs
# This hides the specifics of how/where we store the GCS client,
# and allows the rest of the code to use 'gcs' as a simple variable
# that does the right thing.
gcs = LocalProxy(get_client)
class GoogleCloudStorageBucket(object):
"""Cloud Storage bucket interface. We create a bucket for every project. In
the bucket we create first level subdirs as follows:
- '_' (will contain hashed assets, and stays on top of default listing)
- 'svn' (svn checkout mirror)
- 'shared' (any additional folder of static folder that is accessed via a
node of 'storage' node_type)
:type bucket_name: string
:param bucket_name: Name of the bucket.
:type subdir: string
:param subdir: The local entry point to browse the bucket.
"""
def __init__(self, bucket_name, subdir='_/'):
try:
self.bucket = gcs.get_bucket(bucket_name)
except NotFound:
self.bucket = gcs.bucket(bucket_name)
# Hardcode the bucket location to EU
self.bucket.location = 'EU'
# Optionally enable CORS from * (currently only used for vrview)
# self.bucket.cors = [
# {
# "origin": ["*"],
# "responseHeader": ["Content-Type"],
# "method": ["GET", "HEAD", "DELETE"],
# "maxAgeSeconds": 3600
# }
# ]
self.bucket.create()
self.subdir = subdir
def List(self, path=None):
"""Display the content of a subdir in the project bucket. If the path
points to a file the listing is simply empty.
:type path: string
:param path: The relative path to the directory or asset.
"""
if path and not path.endswith('/'):
path += '/'
prefix = os.path.join(self.subdir, path)
fields_to_return = 'nextPageToken,items(name,size,contentType),prefixes'
req = self.bucket.list_blobs(fields=fields_to_return, prefix=prefix,
delimiter='/')
files = []
for f in req:
filename = os.path.basename(f.name)
if filename != '': # Skip own folder name
files.append(dict(
path=os.path.relpath(f.name, self.subdir),
text=filename,
type=f.content_type))
directories = []
for dir_path in req.prefixes:
directory_name = os.path.basename(os.path.normpath(dir_path))
directories.append(dict(
text=directory_name,
path=os.path.relpath(dir_path, self.subdir),
type='group_storage',
children=True))
# print os.path.basename(os.path.normpath(path))
list_dict = dict(
name=os.path.basename(os.path.normpath(path)),
type='group_storage',
children=files + directories
)
return list_dict
def blob_to_dict(self, blob):
blob.reload()
expiration = datetime.datetime.now() + datetime.timedelta(days=1)
expiration = int(time.mktime(expiration.timetuple()))
return dict(
updated=blob.updated,
name=os.path.basename(blob.name),
size=blob.size,
content_type=blob.content_type,
signed_url=blob.generate_signed_url(expiration),
public_url=blob.public_url)
def Get(self, path, to_dict=True):
"""Get selected file info if the path matches.
:type path: string
:param path: The relative path to the file.
:type to_dict: bool
:param to_dict: Return the object as a dictionary.
"""
path = os.path.join(self.subdir, path)
blob = self.bucket.blob(path)
if blob.exists():
if to_dict:
return self.blob_to_dict(blob)
else:
return blob
else:
return None
def Post(self, full_path, path=None):
"""Create new blob and upload data to it.
"""
path = path if path else os.path.join('_', os.path.basename(full_path))
blob = self.bucket.blob(path)
if blob.exists():
return None
blob.upload_from_filename(full_path)
return blob
# return self.blob_to_dict(blob) # Has issues with threading
def Delete(self, path):
"""Delete blob (when removing an asset or replacing a preview)"""
# We want to get the actual blob to delete
blob = self.Get(path, to_dict=False)
try:
blob.delete()
return True
except NotFound:
return None
def update_name(self, blob, name):
"""Set the ContentDisposition metadata so that when a file is downloaded
it has a human-readable name.
"""
blob.content_disposition = u'attachment; filename="{0}"'.format(name)
blob.patch()
def update_file_name(node):
"""Assign to the CGS blob the same name of the asset node. This way when
downloading an asset we get a human-readable name.
"""
# Skip nodes that are still being processed
if node['properties'].get('status', '') == 'processing':
return
def _format_name(name, override_ext, size=None, map_type=u''):
root, _ = os.path.splitext(name)
size = u'-{}'.format(size) if size else u''
map_type = u'-{}'.format(map_type) if map_type else u''
return u'{}{}{}{}'.format(root, size, map_type, override_ext)
def _update_name(file_id, file_props):
files_collection = current_app.data.driver.db['files']
file_doc = files_collection.find_one({'_id': ObjectId(file_id)})
if file_doc is None or file_doc.get('backend') != 'gcs':
return
# For textures -- the map type should be part of the name.
map_type = file_props.get('map_type', u'')
storage = GoogleCloudStorageBucket(str(node['project']))
blob = storage.Get(file_doc['file_path'], to_dict=False)
# Pick file extension from original filename
_, ext = os.path.splitext(file_doc['filename'])
name = _format_name(node['name'], ext, map_type=map_type)
storage.update_name(blob, name)
# Assign the same name to variations
for v in file_doc.get('variations', []):
_, override_ext = os.path.splitext(v['file_path'])
name = _format_name(node['name'], override_ext, v['size'], map_type=map_type)
blob = storage.Get(v['file_path'], to_dict=False)
if blob is None:
log.info('Unable to find blob for file %s in project %s. This can happen if the '
'video encoding is still processing.', v['file_path'], node['project'])
continue
storage.update_name(blob, name)
# Currently we search for 'file' and 'files' keys in the object properties.
# This could become a bit more flexible and rely on a true reference of the
# file object type from the schema.
if 'file' in node['properties']:
_update_name(node['properties']['file'], {})
if 'files' in node['properties']:
for file_props in node['properties']['files']:
_update_name(file_props['file'], file_props)
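A short usage sketch of the bucket wrapper above; it needs a Flask application context and valid Google Cloud credentials, and the project ID and paths are made up.
from pillar.api.utils.gcs import GoogleCloudStorageBucket

storage = GoogleCloudStorageBucket('55f338f92beb3300c4ff99f1')  # one bucket per project
listing = storage.List('renders/')           # dict with 'children': files and directories
info = storage.Get('renders/shot_010.png')   # dict with signed_url, size, content_type, ...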

View File

@ -1,61 +1,47 @@
import json
import typing
import os import os
import pathlib import json
import subprocess import subprocess
from PIL import Image from PIL import Image
from flask import current_app from flask import current_app
# Images with these modes will be thumbed to PNG, others to JPEG.
MODES_FOR_PNG = {'RGBA', 'LA'}
def generate_local_thumbnails(name_base, src):
def generate_local_thumbnails(fp_base: str, src: pathlib.Path):
"""Given a source image, use Pillow to generate thumbnails according to the """Given a source image, use Pillow to generate thumbnails according to the
application settings. application settings.
:param fp_base: the thumbnail will get a field :param name_base: the thumbnail will get a field 'name': '{basename}-{thumbsize}.jpg'
'file_path': '{fp_base}-{thumbsize}.{ext}' :type name_base: str
:param src: the path of the image to be thumbnailed :param src: the path of the image to be thumbnailed
:type src: str
""" """
thumbnail_settings = current_app.config['UPLOADS_LOCAL_STORAGE_THUMBNAILS'] thumbnail_settings = current_app.config['UPLOADS_LOCAL_STORAGE_THUMBNAILS']
thumbnails = [] thumbnails = []
for size, settings in thumbnail_settings.items(): save_to_base, _ = os.path.splitext(src)
im = Image.open(src) name_base, _ = os.path.splitext(name_base)
extra_args = {}
# If the source image has transparency, save as PNG for size, settings in thumbnail_settings.iteritems():
if im.mode in MODES_FOR_PNG: dst = '{0}-{1}{2}'.format(save_to_base, size, '.jpg')
suffix = '.png' name = '{0}-{1}{2}'.format(name_base, size, '.jpg')
imformat = 'PNG'
else:
suffix = '.jpg'
imformat = 'JPEG'
extra_args = {'quality': 95}
dst = src.with_name(f'{src.stem}-{size}{suffix}')
if settings['crop']: if settings['crop']:
im = resize_and_crop(im, settings['size']) resize_and_crop(src, dst, settings['size'])
width, height = settings['size']
else: else:
im.thumbnail(settings['size'], resample=Image.LANCZOS) im = Image.open(src).convert('RGB')
im.thumbnail(settings['size'])
im.save(dst, "JPEG")
width, height = im.size width, height = im.size
if imformat == 'JPEG':
im = im.convert('RGB')
im.save(dst, format=imformat, optimize=True, **extra_args)
thumb_info = {'size': size, thumb_info = {'size': size,
'file_path': f'{fp_base}-{size}{suffix}', 'file_path': name,
'local_path': str(dst), 'local_path': dst,
'length': dst.stat().st_size, 'length': os.stat(dst).st_size,
'width': width, 'width': width,
'height': height, 'height': height,
'md5': '', 'md5': '',
'content_type': f'image/{imformat.lower()}'} 'content_type': 'image/jpeg'}
if size == 't': if size == 't':
thumb_info['is_public'] = True thumb_info['is_public'] = True
@ -65,40 +51,63 @@ def generate_local_thumbnails(fp_base: str, src: pathlib.Path):
return thumbnails return thumbnails
def resize_and_crop(img: Image, size: typing.Tuple[int, int]) -> Image: def resize_and_crop(img_path, modified_path, size, crop_type='middle'):
"""Resize and crop an image to fit the specified size. """
Resize and crop an image to fit the specified size. Thanks to:
https://gist.github.com/sigilioso/2957026
Thanks to: https://gist.github.com/sigilioso/2957026 args:
img_path: path for the image to resize.
modified_path: path to store the modified image.
size: `(width, height)` tuple.
crop_type: can be 'top', 'middle' or 'bottom', depending on this
value, the image will be cropped, taking the 'top/left', 'middle' or
'bottom/right' of the image to fit the size.
raises:
Exception: if the file at img_path cannot be opened, or there are
problems saving the image.
ValueError: if an invalid `crop_type` is provided.
:param img: opened PIL.Image to work on
:param size: `(width, height)` tuple.
""" """
# If height is higher we resize vertically, if not we resize horizontally # If height is higher we resize vertically, if not we resize horizontally
img = Image.open(img_path).convert('RGB')
# Get current and desired ratio for the images # Get current and desired ratio for the images
cur_w, cur_h = img.size # current img_ratio = img.size[0] / float(img.size[1])
img_ratio = cur_w / cur_h ratio = size[0] / float(size[1])
w, h = size # desired
ratio = w / h
# The image is scaled/cropped vertically or horizontally depending on the ratio # The image is scaled/cropped vertically or horizontally depending on the ratio
if ratio > img_ratio: if ratio > img_ratio:
uncropped_h = (w * cur_h) // cur_w img = img.resize((size[0], int(round(size[0] * img.size[1] / img.size[0]))),
img = img.resize((w, uncropped_h), Image.ANTIALIAS) Image.ANTIALIAS)
box = (0, (uncropped_h - h) // 2, # Crop in the top, middle or bottom
w, (uncropped_h + h) // 2) if crop_type == 'top':
box = (0, 0, img.size[0], size[1])
elif crop_type == 'middle':
box = (0, int(round((img.size[1] - size[1]) / 2)), img.size[0],
int(round((img.size[1] + size[1]) / 2)))
elif crop_type == 'bottom':
box = (0, img.size[1] - size[1], img.size[0], img.size[1])
else:
raise ValueError('ERROR: invalid value for crop_type')
img = img.crop(box) img = img.crop(box)
elif ratio < img_ratio: elif ratio < img_ratio:
uncropped_w = (h * cur_w) // cur_h img = img.resize((int(round(size[1] * img.size[0] / img.size[1])), size[1]),
img = img.resize((uncropped_w, h), Image.ANTIALIAS) Image.ANTIALIAS)
box = ((uncropped_w - w) // 2, 0, # Crop in the top, middle or bottom
(uncropped_w + w) // 2, h) if crop_type == 'top':
box = (0, 0, size[0], img.size[1])
elif crop_type == 'middle':
box = (int(round((img.size[0] - size[0]) / 2)), 0,
int(round((img.size[0] + size[0]) / 2)), img.size[1])
elif crop_type == 'bottom':
box = (img.size[0] - size[0], 0, img.size[0], img.size[1])
else:
raise ValueError('ERROR: invalid value for crop_type')
img = img.crop(box) img = img.crop(box)
else: else:
img = img.resize((w, h), Image.ANTIALIAS) img = img.resize((size[0], size[1]),
Image.ANTIALIAS)
# If the scale is the same, we do not need to crop # If the scale is the same, we do not need to crop
return img img.save(modified_path, "JPEG")
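To illustrate the integer arithmetic of the newer resize_and_crop() above, here is a worked example for a 1920x1080 source scaled and cropped to 640x480 (numbers chosen for illustration only).
cur_w, cur_h = 1920, 1080              # source size, ratio ~1.78
w, h = 640, 480                        # target size, ratio ~1.33
# target ratio < source ratio, so the image is scaled to the target height first:
uncropped_w = (h * cur_w) // cur_h     # (480 * 1920) // 1080 == 853
box = ((uncropped_w - w) // 2, 0,      # (106, 0,
       (uncropped_w + w) // 2, h)      #  746, 480) -> a centred 640x480 crop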
def get_video_data(filepath): def get_video_data(filepath):
@ -134,7 +143,7 @@ def get_video_data(filepath):
res_y=video_stream['height'], res_y=video_stream['height'],
) )
if video_stream['sample_aspect_ratio'] != '1:1': if video_stream['sample_aspect_ratio'] != '1:1':
print('[warning] Pixel aspect ratio is not square!') print '[warning] Pixel aspect ratio is not square!'
return outdata return outdata
@ -181,14 +190,14 @@ def ffmpeg_encode(src, format, res_y=720):
dst = os.path.splitext(src) dst = os.path.splitext(src)
dst = "{0}-{1}p.{2}".format(dst[0], res_y, format) dst = "{0}-{1}p.{2}".format(dst[0], res_y, format)
args.append(dst) args.append(dst)
print("Encoding {0} to {1}".format(src, format)) print "Encoding {0} to {1}".format(src, format)
returncode = subprocess.call([current_app.config['BIN_FFMPEG']] + args) returncode = subprocess.call([current_app.config['BIN_FFMPEG']] + args)
if returncode == 0: if returncode == 0:
print("Successfully encoded {0}".format(dst)) print "Successfully encoded {0}".format(dst)
else: else:
print("Error during encode") print "Error during encode"
print("Code: {0}".format(returncode)) print "Code: {0}".format(returncode)
print("Command: {0}".format(current_app.config['BIN_FFMPEG'] + " " + " ".join(args))) print "Command: {0}".format(current_app.config['BIN_FFMPEG'] + " " + " ".join(args))
dst = None dst = None
# return path of the encoded video # return path of the encoded video
return dst return dst

View File

@ -1,86 +0,0 @@
import copy
import logging
import types
log = logging.getLogger(__name__)
def assign_permissions(project, node_types, permission_callback):
"""Generator, yields the node types with certain permissions set.
The permission_callback is called for each node type, and each user
and group permission in the project, and should return the appropriate
extra permissions for that node type.
Yields copies of the given node types with new permissions.
The callback is called as permission_callback(node_type, ugw, ident, proj_methods), where
- 'node_type' is the node type dict
- 'ugw' is either 'user', 'group', or 'world',
- 'ident' is the group or user ID, or None when ugw is 'world',
- 'proj_methods' is the list of already-allowed project methods.
"""
proj_perms = project['permissions']
for nt in node_types:
permissions = {}
for key in ('users', 'groups'):
perms = proj_perms.get(key)
if not perms:
continue
singular = key.rstrip('s')
for perm in perms:
assert isinstance(perm, dict), 'perm should be dict, but is %r' % perm
ident = perm[singular] # group or user ID.
methods_to_allow = permission_callback(nt, singular, ident, perm['methods'])
if not methods_to_allow:
continue
permissions.setdefault(key, []).append(
{singular: ident,
'methods': methods_to_allow}
)
# World permissions are simpler.
world_methods_to_allow = permission_callback(nt, 'world', None,
permissions.get('world', []))
if world_methods_to_allow:
permissions.setdefault('world', []).extend(world_methods_to_allow)
node_type = copy.deepcopy(nt)
if permissions:
node_type['permissions'] = permissions
yield node_type
def add_to_project(project, node_types, replace_existing):
"""Adds the given node types to the project.
Overwrites any existing node type with the same name when replace_existing=True.
"""
assert isinstance(project, dict)
assert isinstance(node_types, (list, set, frozenset, tuple, types.GeneratorType)), \
'node_types is of wrong type %s' % type(node_types)
project_id = project['_id']
for node_type in node_types:
found = [nt for nt in project['node_types']
if nt['name'] == node_type['name']]
if found:
assert len(found) == 1, 'node type name should be unique (found %ix)' % len(found)
# TODO: validate that the node type contains all the properties Attract needs.
if replace_existing:
log.info('Replacing existing node type %s on project %s',
node_type['name'], project_id)
project['node_types'].remove(found[0])
else:
continue
project['node_types'].append(node_type)

View File

@ -1,87 +0,0 @@
# These functions come from Reddit
# https://github.com/reddit/reddit/blob/master/r2/r2/lib/db/_sorts.pyx
# Additional resources
# http://www.redditblog.com/2009/10/reddits-new-comment-sorting-system.html
# http://www.evanmiller.org/how-not-to-sort-by-average-rating.html
# http://amix.dk/blog/post/19588
from datetime import datetime, timezone
from math import log
from math import sqrt
epoch = datetime(1970, 1, 1, 0, 0, 0, 0, timezone.utc)
def epoch_seconds(date):
"""Returns the number of seconds from the epoch to date."""
td = date - epoch
return td.days * 86400 + td.seconds + (float(td.microseconds) / 1000000)
def score(ups, downs):
return ups - downs
def hot(ups, downs, date):
"""The hot formula. Reddit's hot ranking uses the logarithm function to
weight the first votes higher than the rest.
The first 10 upvotes have the same weight as the next 100 upvotes which
have the same weight as the next 1000, etc.
Dillo authors: we modified the formula to give more weight to negative
votes when an entry is controversial.
TODO: make this function more dynamic so that different defaults can be
specified depending on the item that is being rated.
"""
s = score(ups, downs)
order = log(max(abs(s), 1), 10)
sign = 1 if s > 0 else -1 if s < 0 else 0
seconds = epoch_seconds(date) - 1134028003
base_hot = round(sign * order + seconds / 45000, 7)
if downs > 1:
rating_delta = 100 * (downs - ups) / downs
if rating_delta < 25:
# The post is controversial
return base_hot
base_hot = base_hot - (downs * 6)
return base_hot
def _confidence(ups, downs):
n = ups + downs
if n == 0:
return 0
z = 1.0 #1.0 = 85%, 1.6 = 95%
phat = float(ups) / n
return sqrt(phat+z*z/(2*n)-z*((phat*(1-phat)+z*z/(4*n))/n))/(1+z*z/n)
def confidence(ups, downs):
if ups + downs == 0:
return 0
else:
return _confidence(ups, downs)
def update_hot(document):
"""Update the hotness of a document given its current ratings.
We expect the document to implement the ratings_embedded_schema in
a 'ratings' property.
"""
dt = document['_created']
dt = dt.replace(tzinfo=timezone.utc)
document['properties']['ratings']['hot'] = hot(
document['properties']['ratings']['positive'],
document['properties']['ratings']['negative'],
dt,
)
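As a quick, hypothetical illustration of the ranking behaviour (assuming the functions above are in scope): newer entries and higher upvote counts both raise the 'hot' score.
from datetime import datetime, timezone

d1 = datetime(2016, 1, 1, tzinfo=timezone.utc)
d2 = datetime(2016, 1, 2, tzinfo=timezone.utc)   # one day later

print(hot(10, 2, d1))   # older entry
print(hot(10, 2, d2))   # same votes, one day newer -> higher score
print(hot(50, 2, d2))   # more upvotes -> higher still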

View File

@ -1 +1,83 @@
"""Utility for managing storage backends and files.""" import subprocess
import os
from flask import current_app
from pillar.api.utils.gcs import GoogleCloudStorageBucket
def get_sizedata(filepath):
outdata = dict(
size=int(os.stat(filepath).st_size)
)
return outdata
def rsync(path, remote_dir=''):
BIN_SSH = current_app.config['BIN_SSH']
BIN_RSYNC = current_app.config['BIN_RSYNC']
DRY_RUN = False
arguments = ['--verbose', '--ignore-existing', '--recursive', '--human-readable']
logs_path = current_app.config['CDN_SYNC_LOGS']
storage_address = current_app.config['CDN_STORAGE_ADDRESS']
user = current_app.config['CDN_STORAGE_USER']
rsa_key_path = current_app.config['CDN_RSA_KEY']
known_hosts_path = current_app.config['CDN_KNOWN_HOSTS']
if DRY_RUN:
arguments.append('--dry-run')
folder_arguments = list(arguments)
if rsa_key_path:
folder_arguments.append(
'-e ' + BIN_SSH + ' -i ' + rsa_key_path + ' -o "StrictHostKeyChecking=no"')
# if known_hosts_path:
# folder_arguments.append("-o UserKnownHostsFile " + known_hosts_path)
folder_arguments.append("--log-file=" + logs_path + "/rsync.log")
folder_arguments.append(path)
folder_arguments.append(user + "@" + storage_address + ":/public/" + remote_dir)
# print (folder_arguments)
devnull = open(os.devnull, 'wb')
# DEBUG CONFIG
# print folder_arguments
# proc = subprocess.Popen(['rsync'] + folder_arguments)
# stdout, stderr = proc.communicate()
subprocess.Popen(['nohup', BIN_RSYNC] + folder_arguments, stdout=devnull, stderr=devnull)
def remote_storage_sync(path): # can be both folder and file
if os.path.isfile(path):
filename = os.path.split(path)[1]
rsync(path, filename[:2] + '/')
else:
if os.path.exists(path):
rsync(path)
else:
raise IOError('ERROR: path not found')
def push_to_storage(project_id, full_path, backend='cgs'):
"""Move a file from temporary/processing local storage to a storage endpoint.
By default we store items in a Google Cloud Storage bucket named after the
project id.
"""
def push_single_file(project_id, full_path, backend):
if backend == 'cgs':
storage = GoogleCloudStorageBucket(project_id, subdir='_')
blob = storage.Post(full_path)
# XXX Make public on the fly if it's an image and small preview.
# This should happen by reading the database (push to storage
# should change to accommodate it).
if blob is not None and full_path.endswith('-t.jpg'):
blob.make_public()
os.remove(full_path)
if os.path.isfile(full_path):
push_single_file(project_id, full_path, backend)
else:
if os.path.exists(full_path):
for root, dirs, files in os.walk(full_path):
for name in files:
push_single_file(project_id, os.path.join(root, name), backend)
else:
raise IOError('ERROR: path not found')
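A hedged usage sketch of the two entry points above; the project ID and file path are invented, and both calls require a Flask application context for current_app.config.
push_to_storage('55f338f92beb3300c4ff99f1', '/data/storage/uploads/abcdef-t.jpg')  # single file to GCS
remote_storage_sync('/data/storage/uploads/abcdef-t.jpg')                          # rsync to the CDN origin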

View File

@ -1,16 +0,0 @@
"""Extra functionality for attrs."""
import functools
import logging
import attr
string = functools.partial(attr.ib, validator=attr.validators.instance_of(str))
def log(name):
"""Returns a logger
:param name: name to pass to logging.getLogger()
"""
return logging.getLogger(name)

View File

@ -1,120 +1,27 @@
"""Authentication code common to the web and api modules.""" """Authentication code common to the web and api modules."""
import collections
import contextlib
import copy
import functools
import logging import logging
import typing
import blinker from flask import current_app, session
from bson import ObjectId
from flask import session, g
import flask_login import flask_login
from werkzeug.local import LocalProxy import flask_oauthlib.client
from pillar import current_app from ..api import utils, blender_id
from ..api.utils import authentication
# The sender is the user that was just authenticated.
user_authenticated = blinker.Signal('Sent whenever a user was authenticated')
user_logged_in = blinker.Signal('Sent whenever a user logged in on the web')
log = logging.getLogger(__name__) log = logging.getLogger(__name__)
# Mapping from user role to capabilities obtained by users with that role.
CAPABILITIES = collections.defaultdict(**{
'subscriber': {'subscriber', 'home-project'},
'demo': {'subscriber', 'home-project'},
'admin': {'video-encoding', 'admin',
'view-pending-nodes', 'edit-project-node-types'},
}, default_factory=frozenset)
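For illustration, this is how collect_capabilities() (further down) combines these sets for a user; the roles list here is hypothetical.
roles = ['subscriber', 'admin']
caps = set().union(*(CAPABILITIES[role] for role in roles))
# -> {'subscriber', 'home-project', 'video-encoding', 'admin',
#     'view-pending-nodes', 'edit-project-node-types'}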
class UserClass(flask_login.UserMixin): class UserClass(flask_login.UserMixin):
def __init__(self, token: typing.Optional[str]): def __init__(self, token):
# We store the Token instead of ID # We store the Token instead of ID
self.id = token self.id = token
self.auth_token = token self.username = None
self.username: str = None self.full_name = None
self.full_name: str = None self.objectid = None
self.user_id: ObjectId = None self.gravatar = None
self.objectid: str = None self.email = None
self.email: str = None self.roles = []
self.roles: typing.List[str] = []
self.groups: typing.List[str] = [] # NOTE: these are stringified object IDs.
self.group_ids: typing.List[ObjectId] = []
self.capabilities: typing.Set[str] = set()
self.nodes: dict = {} # see the 'nodes' key in eve_settings.py::user_schema.
self.badges_html: str = ''
# Stored when constructing a user from the database
self._db_user = {}
# Lazily evaluated
self._has_organizations: typing.Optional[bool] = None
@classmethod
def construct(cls, token: str, db_user: dict) -> 'UserClass':
"""Constructs a new UserClass instance from a Mongo user document."""
user = cls(token)
user._db_user = copy.deepcopy(db_user)
user.user_id = db_user.get('_id')
user.roles = db_user.get('roles') or []
user.group_ids = db_user.get('groups') or []
user.email = db_user.get('email') or ''
user.username = db_user.get('username') or ''
user.full_name = db_user.get('full_name') or ''
user.badges_html = db_user.get('badges', {}).get('html') or ''
# Be a little more specific than just db_user['nodes'] or db_user['avatar']
user.nodes = {
'view_progress': db_user.get('nodes', {}).get('view_progress', {}),
}
# Derived properties
user.objectid = str(user.user_id or '')
user.groups = [str(g) for g in user.group_ids]
user.collect_capabilities()
return user
def __repr__(self):
return f'UserClass(user_id={self.user_id})'
def __str__(self):
return f'{self.__class__.__name__}(id={self.user_id}, email={self.email!r}'
def __getitem__(self, item):
"""Compatibility layer with old dict-based g.current_user object."""
if item == 'user_id':
return self.user_id
if item == 'groups':
return self.group_ids
if item == 'roles':
return set(self.roles)
raise KeyError(f'No such key {item!r}')
def get(self, key, default=None):
"""Compatibility layer with old dict-based g.current_user object."""
try:
return self[key]
except KeyError:
return default
def collect_capabilities(self):
"""Constructs the capabilities set given the user's current roles.
Requires an application context to be active.
"""
app_caps = current_app.user_caps
self.capabilities = set().union(*(app_caps[role] for role in self.roles))
def has_role(self, *roles): def has_role(self, *roles):
"""Returns True iff the user has one or more of the given roles.""" """Returns True iff the user has one or more of the given roles."""
@ -124,109 +31,33 @@ class UserClass(flask_login.UserMixin):
return bool(set(self.roles).intersection(set(roles))) return bool(set(self.roles).intersection(set(roles)))
def has_cap(self, *capabilities: typing.Iterable[str]) -> bool:
"""Returns True iff the user has one or more of the given capabilities."""
if not self.capabilities:
return False
return bool(set(self.capabilities).intersection(set(capabilities)))
def matches_roles(self,
require_roles=set(),
require_all=False) -> bool:
"""Returns True iff the user's roles matches the query.
:param require_roles: set of roles.
:param require_all:
When False (the default): if the user's roles have a
non-empty intersection with the given roles, returns True.
When True: require the user to have all given roles before
returning True.
"""
if not isinstance(require_roles, set):
raise TypeError(f'require_roles param should be a set, but is {type(require_roles)!r}')
if require_all and not require_roles:
raise ValueError('require_login(require_all=True) cannot be used with '
'empty require_roles.')
intersection = require_roles.intersection(self.roles)
if require_all:
return len(intersection) == len(require_roles)
return not bool(require_roles) or bool(intersection)
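    # Usage sketch (illustrative): by default any overlap suffices, while
    # require_all=True demands that every requested role is present.
    #
    #   user.roles = ['subscriber', 'admin']
    #   user.matches_roles({'admin', 'demo'})                    # -> True
    #   user.matches_roles({'admin', 'demo'}, require_all=True)  # -> False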
def has_organizations(self) -> bool:
"""Returns True iff this user administers or is member of any organization."""
if self._has_organizations is None:
assert self.user_id
self._has_organizations = current_app.org_manager.user_has_organizations(self.user_id)
return bool(self._has_organizations)
def frontend_info(self) -> dict:
"""Return a dictionary of user info for injecting into the page."""
return {
'user_id': str(self.user_id),
'username': self.username,
'full_name': self.full_name,
'avatar_url': self.avatar_url,
'email': self.email,
'capabilities': list(self.capabilities),
'badges_html': self.badges_html,
'is_authenticated': self.is_authenticated,
}
@property
@functools.lru_cache(maxsize=1)
def avatar_url(self) -> str:
"""Return the Avatar image URL for this user.
:return: The avatar URL (the default one if the user has no avatar).
"""
import pillar.api.users.avatar
return pillar.api.users.avatar.url(self._db_user)
class AnonymousUser(flask_login.AnonymousUserMixin, UserClass):
def __init__(self):
super().__init__(token=None)
    def has_role(self, *roles):
        return False
    def has_cap(self, *capabilities):
        return False
    def has_organizations(self) -> bool:
        return False
def _load_user(token) -> typing.Union[UserClass, AnonymousUser]:
    """Loads a user by their token.
    :returns: returns a UserClass instance if logged in, or an AnonymousUser() if not.
    """
    from ..api.utils import authentication
    if not token:
        return AnonymousUser()
    db_user = authentication.validate_this_token(token)
    if not db_user:
        # There is a token, but it's not valid. We should reset the user's session.
        session.clear()
        return AnonymousUser()
    user = UserClass.construct(token, db_user)
    return user
def config_login_manager(app):
@@ -235,7 +66,6 @@ def config_login_manager(app):
    login_manager = flask_login.LoginManager()
    login_manager.init_app(app)
    login_manager.login_view = "users.login"
    login_manager.login_message = ''
    login_manager.anonymous_user = AnonymousUser
    # noinspection PyTypeChecker
    login_manager.user_loader(_load_user)
@@ -243,97 +73,32 @@ def config_login_manager(app):
    return login_manager
def login_user(oauth_token: str, *, load_from_db=False):
    """Log in the user identified by the given token."""
    if load_from_db:
        user = _load_user(oauth_token)
    else:
        user = UserClass(oauth_token)
    login_user_object(user)
def login_user_object(user: UserClass):
    """Log in the given user."""
    flask_login.login_user(user, remember=True)
    g.current_user = user
    user_authenticated.send(user)
    user_logged_in.send(user)
def logout_user():
    """Forces a logout of the current user."""
    from ..api.utils import authentication
    token = get_blender_id_oauth_token()
    if token:
        authentication.remove_token(token)
    session.clear()
    flask_login.logout_user()
    g.current_user = AnonymousUser()
@contextlib.contextmanager
def temporary_user(db_user: dict):
"""Temporarily sets the given user as 'current user'.
Does not trigger login signals, as this is not a real login action.
"""
try:
actual_current_user = g.current_user
except AttributeError:
actual_current_user = AnonymousUser()
temp_user = UserClass.construct('', db_user)
try:
g.current_user = temp_user
yield
finally:
g.current_user = actual_current_user
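# Usage sketch (illustrative; the processing function is hypothetical): run
# code on behalf of a user outside a normal request/login, e.g. from a Celery
# task.
#
#   db_user = current_app.db('users').find_one({'_id': user_id})
#   with temporary_user(db_user):
#       process_nodes_as_that_user()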
def get_blender_id_oauth_token() -> str:
"""Returns the Blender ID auth token, or an empty string if there is none."""
from flask import request
token = session.get('blender_id_oauth_token')
if token:
if isinstance(token, (tuple, list)):
# In a past version of Pillar we accidentally stored tuples in the session.
# Such sessions should be actively fixed.
# TODO(anyone, after 2017-12-01): refactor this if-block so that it just converts
# the token value to a string and use that instead.
token = token[0]
session['blender_id_oauth_token'] = token
return token
if request.authorization and request.authorization.username:
return request.authorization.username
if current_user.is_authenticated and current_user.id:
return current_user.id
return ''
def get_current_user() -> UserClass:
"""Returns the current user as a UserClass instance.
Never returns None; returns an AnonymousUser() instance instead.
This function is intended to be used when pillar.auth.current_user is
accessed many times in the same scope. Calling this function is then
more efficient, since it doesn't have to resolve the LocalProxy for
each access to the returned object.
"""
from ..api.utils.authentication import current_user
return current_user()
current_user: UserClass = LocalProxy(get_current_user)
"""The current user."""

View File

@@ -1,48 +0,0 @@
"""Support for adding CORS headers to responses."""
import functools
import flask
import werkzeug.wrappers as wz_wrappers
import werkzeug.exceptions as wz_exceptions
def allow(*, allow_credentials=False):
"""Flask endpoint decorator, adds CORS headers to the response.
If the request has a non-empty 'Origin' header, the response header
'Access-Control-Allow-Origin' is set to the value of that request header,
and some other CORS headers are set.
"""
def decorator(wrapped):
@functools.wraps(wrapped)
def wrapper(*args, **kwargs):
request_origin = flask.request.headers.get('Origin')
if not request_origin:
# No CORS headers requested, so don't bother touching the response.
return wrapped(*args, **kwargs)
try:
response = wrapped(*args, **kwargs)
except wz_exceptions.HTTPException as ex:
response = ex.get_response()
else:
if isinstance(response, tuple):
response = flask.make_response(*response)
elif isinstance(response, str):
response = flask.make_response(response)
elif isinstance(response, wz_wrappers.Response):
pass
else:
raise TypeError(f'unknown response type {type(response)}')
assert isinstance(response, wz_wrappers.Response)
response.headers.set('Access-Control-Allow-Origin', request_origin)
response.headers.set('Access-Control-Allow-Headers', 'x-requested-with')
if allow_credentials:
response.headers.set('Access-Control-Allow-Credentials', 'true')
return response
return wrapper
return decorator
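# Usage sketch (illustrative; the blueprint and route are hypothetical):
#
#   @blueprint.route('/badges/<user_id>')
#   @allow(allow_credentials=True)
#   def badges(user_id):
#       return flask.jsonify({'user_id': user_id})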

View File

@@ -1,228 +0,0 @@
import abc
import json
import logging
import typing
import attr
from rauth import OAuth2Service
from flask import current_app, url_for, request, redirect, session, Response
@attr.s
class OAuthUserResponse:
"""Represents user information requested to an OAuth provider after
authenticating.
"""
id = attr.ib(validator=attr.validators.instance_of(str))
email = attr.ib(validator=attr.validators.instance_of(str))
access_token = attr.ib(validator=attr.validators.instance_of(str))
scopes: typing.List[str] = attr.ib(validator=attr.validators.instance_of(list))
class OAuthError(Exception):
"""Superclass of all exceptions raised by this module."""
class ProviderConfigurationMissing(OAuthError):
"""Raised when an OAuth provider is used but not configured."""
class ProviderNotImplemented(OAuthError):
"""Raised when a provider is requested that does not exist."""
class OAuthCodeNotProvided(OAuthError):
"""Raised when the 'code' arg is not provided in the OAuth callback."""
class ProviderNotConfigured:
"""Dummy class that indicates a provider isn't configured."""
class OAuthSignIn(metaclass=abc.ABCMeta):
provider_name: str = None # set in each subclass.
_providers = None # initialized in get_provider()
_log = logging.getLogger(f'{__name__}.OAuthSignIn')
def __init__(self):
credentials = current_app.config['OAUTH_CREDENTIALS'].get(self.provider_name)
if not credentials:
raise ProviderConfigurationMissing(
f'Missing OAuth credentials for {self.provider_name}')
self.consumer_id = credentials['id']
self.consumer_secret = credentials['secret']
# Set in a subclass
self.service: OAuth2Service = None
@abc.abstractmethod
def authorize(self) -> Response:
"""Redirect to the correct authorization endpoint for the current provider.
Depending on the provider, we sometimes have to specify a different
'scope'.
"""
pass
@abc.abstractmethod
def callback(self) -> OAuthUserResponse:
"""Callback performed after authorizing the user.
This is usually a request to a protected /me endpoint to query for
user information, such as user id and email address.
"""
pass
def get_callback_url(self):
return url_for('users.oauth_callback', provider=self.provider_name,
_external=True, _scheme=current_app.config['SCHEME'])
@staticmethod
def auth_code_from_request() -> str:
try:
return request.args['code']
except KeyError:
raise OAuthCodeNotProvided('A code argument was not provided in the request')
@staticmethod
def decode_json(payload):
return json.loads(payload.decode('utf-8'))
def make_oauth_session(self):
return self.service.get_auth_session(
data={'code': self.auth_code_from_request(),
'grant_type': 'authorization_code',
'redirect_uri': self.get_callback_url()},
decoder=self.decode_json
)
@classmethod
def get_provider(cls, provider_name) -> 'OAuthSignIn':
if cls._providers is None:
cls._init_providers()
try:
provider = cls._providers[provider_name]
except KeyError:
raise ProviderNotImplemented(f'No such OAuth provider {provider_name}')
if provider is ProviderNotConfigured:
raise ProviderConfigurationMissing(f'OAuth provider {provider_name} not configured')
return provider
@classmethod
def _init_providers(cls):
cls._providers = {}
for provider_class in cls.__subclasses__():
try:
provider = provider_class()
except ProviderConfigurationMissing:
cls._log.info('OAuth provider %s not configured',
provider_class.provider_name)
provider = ProviderNotConfigured
cls._providers[provider_class.provider_name] = provider
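# Usage sketch (illustrative; the surrounding view functions are hypothetical).
# In an authorize view:
#
#   oauth = OAuthSignIn.get_provider('blender-id')
#   return oauth.authorize()       # redirects the browser to the provider
#
# and in the callback view:
#
#   oauth = OAuthSignIn.get_provider(provider_name)
#   user_resp = oauth.callback()   # OAuthUserResponse with id/email/token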
class BlenderIdSignIn(OAuthSignIn):
provider_name = 'blender-id'
scopes = ['email', 'badge']
def __init__(self):
from urllib.parse import urljoin
super().__init__()
base_url = current_app.config['BLENDER_ID_ENDPOINT']
self.service = OAuth2Service(
name='blender-id',
client_id=self.consumer_id,
client_secret=self.consumer_secret,
authorize_url=urljoin(base_url, 'oauth/authorize'),
access_token_url=urljoin(base_url, 'oauth/token'),
base_url=urljoin(base_url, 'api/'),
)
def authorize(self):
return redirect(self.service.get_authorize_url(
scope=' '.join(self.scopes),
response_type='code',
redirect_uri=self.get_callback_url())
)
def callback(self):
oauth_session = self.make_oauth_session()
# TODO handle exception for failed oauth or not authorized
access_token = oauth_session.access_token
assert isinstance(access_token, str), f'oauth token must be str, not {type(access_token)}'
session['blender_id_oauth_token'] = access_token
me = oauth_session.get('user').json()
# Blender ID doesn't tell us which scopes were granted by the user, so
# for now assume we got all the scopes we requested.
# (see https://github.com/jazzband/django-oauth-toolkit/issues/644)
return OAuthUserResponse(str(me['id']), me['email'], access_token, self.scopes)
class FacebookSignIn(OAuthSignIn):
provider_name = 'facebook'
def __init__(self):
super().__init__()
self.service = OAuth2Service(
name='facebook',
client_id=self.consumer_id,
client_secret=self.consumer_secret,
authorize_url='https://graph.facebook.com/oauth/authorize',
access_token_url='https://graph.facebook.com/oauth/access_token',
base_url='https://graph.facebook.com/'
)
def authorize(self):
return redirect(self.service.get_authorize_url(
scope='email',
response_type='code',
redirect_uri=self.get_callback_url())
)
def callback(self):
oauth_session = self.make_oauth_session()
me = oauth_session.get('me?fields=id,email').json()
# TODO handle case when user chooses not to disclose an email
# see https://developers.facebook.com/docs/graph-api/reference/user/
return OAuthUserResponse(me['id'], me.get('email'), '', [])
class GoogleSignIn(OAuthSignIn):
provider_name = 'google'
def __init__(self):
super().__init__()
self.service = OAuth2Service(
name='google',
client_id=self.consumer_id,
client_secret=self.consumer_secret,
authorize_url='https://accounts.google.com/o/oauth2/auth',
access_token_url='https://accounts.google.com/o/oauth2/token',
base_url='https://www.googleapis.com/oauth2/v1/'
)
def authorize(self):
return redirect(self.service.get_authorize_url(
scope='https://www.googleapis.com/auth/userinfo.email',
response_type='code',
redirect_uri=self.get_callback_url())
)
def callback(self):
oauth_session = self.make_oauth_session()
me = oauth_session.get('userinfo').json()
return OAuthUserResponse(str(me['id']), me['email'], '', [])

View File

@@ -0,0 +1,51 @@
"""Cloud subscription info.
Connects to the external subscription server to obtain user info.
"""
import logging
from flask import current_app
import requests
from requests.adapters import HTTPAdapter
log = logging.getLogger(__name__)
def fetch_user(email):
"""Returns the user info dict from the external subscriptions management server.
:returns: the store user info, or None if the user can't be found or there
was an error communicating. A dict like this is returned:
{
"shop_id": 700,
"cloud_access": 1,
"paid_balance": 314.75,
"balance_currency": "EUR",
"start_date": "2014-08-25 17:05:46",
"expiration_date": "2016-08-24 13:38:45",
"subscription_status": "wc-active",
"expiration_date_approximate": true
}
:rtype: dict
"""
external_subscriptions_server = current_app.config['EXTERNAL_SUBSCRIPTIONS_MANAGEMENT_SERVER']
log.debug('Connecting to store at %s?blenderid=%s', external_subscriptions_server, email)
# Retry a few times when contacting the store.
s = requests.Session()
s.mount(external_subscriptions_server, HTTPAdapter(max_retries=5))
r = s.get(external_subscriptions_server, params={'blenderid': email},
verify=current_app.config['TLS_CERT_FILE'])
if r.status_code != 200:
log.warning("Error communicating with %s, code=%i, unable to check "
"subscription status of user %s",
external_subscriptions_server, r.status_code, email)
return None
store_user = r.json()
return store_user
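# Usage sketch (illustrative): deciding whether a user has cloud access based
# on the store info, using the 'cloud_access' key shown in the docstring above.
#
#   store_user = fetch_user('user@example.com')
#   has_cloud_access = bool(store_user and store_user.get('cloud_access'))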

View File

@@ -1,266 +0,0 @@
import collections
import datetime
import logging
import typing
from urllib.parse import urljoin
import bson
import requests
from pillar import current_app, auth
from pillar.api.utils import utcnow
SyncUser = collections.namedtuple('SyncUser', 'user_id token bid_user_id')
BadgeHTML = collections.namedtuple('BadgeHTML', 'html expires')
log = logging.getLogger(__name__)
class StopRefreshing(Exception):
"""Indicates that Blender ID is having problems.
Further badge refreshes should be put on hold to avoid bludgeoning
a suffering Blender ID.
"""
def find_user_to_sync(user_id: bson.ObjectId) -> typing.Optional[SyncUser]:
"""Return user information for syncing badges for a specific user.
Returns None if the user cannot be synced (no 'badge' scope on a token,
or no Blender ID user_id known).
"""
my_log = log.getChild('refresh_single_user')
now = utcnow()
tokens_coll = current_app.db('tokens')
users_coll = current_app.db('users')
token_info = tokens_coll.find_one({
'user': user_id,
'token': {'$exists': True},
'oauth_scopes': 'badge',
'expire_time': {'$gt': now},
})
if not token_info:
my_log.debug('No token with scope "badge" for user %s', user_id)
return None
user_info = users_coll.find_one({'_id': user_id})
# TODO(Sybren): do this filtering in the MongoDB query:
bid_user_ids = [auth_info.get('user_id')
for auth_info in user_info.get('auth', [])
if auth_info.get('provider', '') == 'blender-id' and auth_info.get('user_id')]
if not bid_user_ids:
my_log.debug('No Blender ID user_id for user %s', user_id)
return None
bid_user_id = bid_user_ids[0]
return SyncUser(user_id=user_id, token=token_info['token'], bid_user_id=bid_user_id)
def find_users_to_sync() -> typing.Iterable[SyncUser]:
"""Return user information of syncable users with badges."""
now = utcnow()
tokens_coll = current_app.db('tokens')
cursor = tokens_coll.aggregate([
# Find all users who have a 'badge' scope in their OAuth token.
{'$match': {
'token': {'$exists': True},
'oauth_scopes': 'badge',
'expire_time': {'$gt': now},
# TODO(Sybren): save real token expiry time but keep checking tokens hourly when they are used!
}},
{'$lookup': {
'from': 'users',
'localField': 'user',
'foreignField': '_id',
'as': 'user'
}},
# Prevent 'user' from being an array.
{'$unwind': {'path': '$user'}},
# Get the Blender ID user ID only.
{'$unwind': {'path': '$user.auth'}},
{'$match': {'user.auth.provider': 'blender-id'}},
# Only select those users whose badge doesn't exist or has expired.
{'$match': {
'user.badges.expires': {'$not': {'$gt': now}}
}},
# Make sure that the badges that expire last are also refreshed last.
{'$sort': {'user.badges.expires': 1}},
# Reduce the document to the info we're after.
{'$project': {
'token': True,
'user._id': True,
'user.auth.user_id': True,
}},
])
log.debug('Aggregating tokens and users')
for user_info in cursor:
log.debug('User %s has badges %s',
user_info['user']['_id'], user_info['user'].get('badges'))
yield SyncUser(
user_id=user_info['user']['_id'],
token=user_info['token'],
bid_user_id=user_info['user']['auth']['user_id'])
def fetch_badge_html(session: requests.Session, user: SyncUser, size: str) \
-> str:
"""Fetch a Blender ID badge for this user.
:param session:
:param user:
:param size: Size indication for the badge images, see the Blender ID
documentation/code. As of this writing valid sizes are {'s', 'm', 'l'}.
"""
my_log = log.getChild('fetch_badge_html')
blender_id_endpoint = current_app.config['BLENDER_ID_ENDPOINT']
url = urljoin(blender_id_endpoint, f'api/badges/{user.bid_user_id}/html/{size}')
my_log.debug('Fetching badge HTML at %s for user %s', url, user.user_id)
try:
resp = session.get(url, headers={'Authorization': f'Bearer {user.token}'})
except requests.ConnectionError as ex:
my_log.warning('Unable to connect to Blender ID at %s: %s', url, ex)
raise StopRefreshing()
if resp.status_code == 204:
my_log.debug('No badges for user %s', user.user_id)
return ''
if resp.status_code == 403:
# TODO(Sybren): this indicates the token is invalid, so we could just as well delete it.
my_log.warning('Tried fetching %s for user %s but received a 403: %s',
url, user.user_id, resp.text)
return ''
if resp.status_code == 400:
my_log.warning('Blender ID did not accept our GET request at %s for user %s: %s',
url, user.user_id, resp.text)
return ''
if resp.status_code == 500:
my_log.warning('Blender ID returned an internal server error on %s for user %s, '
'aborting all badge refreshes: %s', url, user.user_id, resp.text)
raise StopRefreshing()
if resp.status_code == 404:
my_log.warning('Blender ID has no user %s for our user %s', user.bid_user_id, user.user_id)
return ''
resp.raise_for_status()
return resp.text
def refresh_all_badges(only_user_id: typing.Optional[bson.ObjectId] = None, *,
dry_run=False,
timelimit: datetime.timedelta):
"""Re-fetch all badges for all users, except when already refreshed recently.
:param only_user_id: Only refresh this user. This is expected to be used
sparingly during manual maintenance / debugging sessions only. It does
fetch all users to refresh, and in Python code skips all except the
given one.
:param dry_run: if True the changes are described in the log, but not performed.
:param timelimit: Refreshing will stop after this time. This allows for cron(-like)
jobs to run without overlapping, even when the number of badges to refresh
becomes larger than possible within the period of the cron job.
"""
my_log = log.getChild('refresh_all_badges')
# Test the config before we start looping over the world.
badge_expiry = badge_expiry_config()
if not badge_expiry or not isinstance(badge_expiry, datetime.timedelta):
raise ValueError('BLENDER_ID_BADGE_EXPIRY not configured properly, should be a timedelta')
session = _get_requests_session()
deadline = utcnow() + timelimit
num_updates = 0
for user_info in find_users_to_sync():
if utcnow() > deadline:
my_log.info('Stopping badge refresh because the timelimit %s (H:MM:SS) was hit.',
timelimit)
break
if only_user_id and user_info.user_id != only_user_id:
my_log.debug('Skipping user %s', user_info.user_id)
continue
try:
badge_html = fetch_badge_html(session, user_info, 's')
except StopRefreshing:
my_log.error('Blender ID has internal problems, stopping badge refreshing at user %s',
user_info)
break
num_updates += 1
update_badges(user_info, badge_html, badge_expiry, dry_run=dry_run)
my_log.info('Updated badges of %d users%s', num_updates, ' (dry-run)' if dry_run else '')
def _get_requests_session() -> requests.Session:
from requests.adapters import HTTPAdapter
session = requests.Session()
session.mount('https://', HTTPAdapter(max_retries=5))
return session
def refresh_single_user(user_id: bson.ObjectId):
"""Refresh badges for a single user."""
my_log = log.getChild('refresh_single_user')
badge_expiry = badge_expiry_config()
if not badge_expiry:
my_log.warning('Skipping badge fetching, BLENDER_ID_BADGE_EXPIRY not configured')
my_log.debug('Fetching badges for user %s', user_id)
session = _get_requests_session()
user_info = find_user_to_sync(user_id)
if not user_info:
return
try:
badge_html = fetch_badge_html(session, user_info, 's')
except StopRefreshing:
my_log.error('Blender ID has internal problems, stopping badge refreshing at user %s',
user_info)
return
update_badges(user_info, badge_html, badge_expiry, dry_run=False)
my_log.info('Updated badges of user %s', user_id)
def update_badges(user_info: SyncUser, badge_html: str, badge_expiry: datetime.timedelta,
*, dry_run: bool):
my_log = log.getChild('update_badges')
users_coll = current_app.db('users')
update = {'badges': {
'html': badge_html,
'expires': utcnow() + badge_expiry,
}}
my_log.info('Updating badges HTML for Blender ID %s, user %s',
user_info.bid_user_id, user_info.user_id)
if dry_run:
return
result = users_coll.update_one({'_id': user_info.user_id},
{'$set': update})
if result.matched_count != 1:
my_log.warning('Unable to update badges for user %s', user_info.user_id)
def badge_expiry_config() -> datetime.timedelta:
return current_app.config.get('BLENDER_ID_BADGE_EXPIRY')
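# Configuration sketch (illustrative value): BLENDER_ID_BADGE_EXPIRY must be a
# datetime.timedelta, e.g. in config_local.py:
#
#   BLENDER_ID_BADGE_EXPIRY = datetime.timedelta(hours=4)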
@auth.user_logged_in.connect
def sync_badge_upon_login(sender: auth.UserClass, **kwargs):
"""Auto-sync badges when a user logs in."""
log.info('Refreshing badge of %s because they logged in', sender.user_id)
refresh_single_user(sender.user_id)

View File

@@ -1,52 +0,0 @@
# Keys in the user's session dictionary that are removed before sending to Bugsnag.
SESSION_KEYS_TO_REMOVE = ('blender_id_oauth_token', 'user_id')
def add_pillar_request_to_notification(notification):
"""Adds request metadata to the Bugsnag notifications.
This basically copies bugsnag.flask.add_flask_request_to_notification,
but is altered to include Pillar-specific metadata.
"""
from flask import request, session
from bugsnag.wsgi import request_path
import pillar.auth
if not request:
return
notification.context = "%s %s" % (request.method,
request_path(request.environ))
if 'id' not in notification.user:
user: pillar.auth.UserClass = pillar.auth.current_user._get_current_object()
notification.set_user(id=user.user_id,
email=user.email,
name=user.username)
notification.user['roles'] = sorted(user.roles)
notification.user['capabilities'] = sorted(user.capabilities)
session_dict = dict(session)
for key in SESSION_KEYS_TO_REMOVE:
try:
del session_dict[key]
except KeyError:
pass
notification.add_tab("session", session_dict)
notification.add_tab("environment", dict(request.environ))
remote_addr = request.remote_addr
forwarded_for = request.headers.get('X-Forwarded-For')
if forwarded_for:
remote_addr = f'{forwarded_for} (proxied via {remote_addr})'
notification.add_tab("request", {
"method": request.method,
"url": request.base_url,
"headers": dict(request.headers),
"params": dict(request.form),
"data": {'request.data': request.data,
'request.json': request.get_json()},
"endpoint": request.endpoint,
"remote_addr": remote_addr,
})

Some files were not shown because too many files have changed in this diff.