Compare commits

1774 Commits

Author SHA1 Message Date
Anna Sirota
8f3a03d311 Log exception on each ResourceInvalid to make debugging easier 2021-04-26 17:40:03 +02:00
Anna Sirota
d9d3b73070 Don't validate tokens for each static asset URL 2021-03-19 10:28:28 +01:00
Anna Sirota
2bce52e189 Pin poetry deps to work around cryptography requiring Rust issue 2021-03-18 18:49:10 +01:00
9f76657603 Remove debug-log when auth token cannot be found 2021-02-16 13:55:28 +01:00
b4982c4128 Pillar: Wider scrollbars 2020-07-29 22:53:01 +02:00
970303577a Update gulp-sass 2020-07-23 18:49:12 +02:00
5d9bae1f0f Blender Cloud: Fix responsive issues on navigation. 2020-07-22 18:32:48 +02:00
2e41b7a4dd Blender Cloud: Fix responsive issues on timeline. 2020-07-22 18:32:35 +02:00
b4207cce47 Blender Cloud: Fix responsive issues on blog. 2020-07-22 18:32:22 +02:00
5ab4086cbe Notifications: Regulate fetching via cookie
We introduce a doNotQueryNotifications cookie with a short lifetime,
which is used to determine whether getNotifications should be called
or not. This prevents notifications from being fetched on every page
load until the cookie expires.
2020-04-17 13:32:27 +02:00
86206d42dc Notifications: Increase timeout from 30 to 60 seconds
This slightly reduces server load, as clients that keep a page open
will query less often.
2020-04-17 13:32:27 +02:00
Ankit
7c238571bf Fix T73490 Hyperlink bug
Fix typo in the link to Blender Cloud

Maniphest Tasks: T73490

Differential Revision: https://developer.blender.org/D7218
2020-03-27 09:52:51 +01:00
7dc0cadc46 Fix issue with Cerberus
Cerberus has a clause `… and X in self.persisted_document`, which fails
when `persisted_document` is `None` (which is the default value for the
parameter). This code can be found in the function `_normalize_default()`
in `.venv/lib/python3.6/site-packages/cerberus/validator.py:922`.
2020-03-19 16:57:50 +01:00
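For context, a minimal sketch of that failure mode (a hypothetical reconstruction, not the actual Cerberus source): membership tests against `None` raise `TypeError`, so the clause needs a guard before the `in` check.

    # Hypothetical reconstruction of the problem, in the spirit of
    # Cerberus's _normalize_default(); names are illustrative.
    def _normalize_default(mapping, schema, field, persisted_document=None):
        # Broken: raises TypeError when persisted_document is None (the default):
        #   if 'default' in schema[field] and field in persisted_document: ...
        already_persisted = (persisted_document is not None
                             and field in persisted_document)
        if 'default' in schema[field] and not already_persisted:
            mapping.setdefault(field, schema[field]['default'])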
47474ac936 Replaced Gravatar with self-hosted avatars
Avatars are now obtained from Blender ID. They are downloaded from
Blender ID and stored in the users' home project storage.

Avatars can be synced via Celery and triggered from a webhook.

The avatar can be obtained from the current user object in Python, or
via pillar.api.users.avatar.url(user_dict).

Avatars can be shown in the web frontend by:

- an explicit image (like before but with a non-Gravatar URL)
- a Vue.js component `user-avatar`
- a Vue.js component `current-user-avatar`

The latter is the most efficient for the current user, as it uses user
info that's already injected into the webpage (so requires no extra
queries).
2019-05-31 16:49:24 +02:00
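A hedged usage sketch of the Python API mentioned above; `pillar.api.users.avatar.url()` is named in the commit message, while the surrounding code is illustrative.

    from pillar.api.users import avatar

    def avatar_img_src(user_dict: dict) -> str:
        # user_dict is a MongoDB user document; the returned URL points at
        # the avatar synced into the user's home project storage.
        return avatar.url(user_dict)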
8a19efe7a7 Reformatted code and added import to resolve PyCharm warnings 2019-05-31 13:55:06 +02:00
3904c188ac Removed trailing spaces 2019-05-31 13:55:06 +02:00
26e20ca571 Fix for now-allowed PATCH on users
Commit 0f0a4be4 introduced using PATCH on users to set the username.
An old unit test failed, as it checks that PATCH is not allowed (i.e.
expects a 405 Method Not Allowed response).
2019-05-31 10:24:11 +02:00
e57ec4bede Moved user_to_dict() function out of pillar.web.jinja module 2019-05-31 10:23:25 +02:00
3705b60f25 Fixed unit test by doing late import
For some reason the old pillar.auth stuck around, failing the
`isinstance(some_object, auth.UserClass)` check because it compared to the
old class and not the reloaded one.
2019-05-31 10:22:46 +02:00
0f0a4be412 Fixed updating username in settings view
The timestamps used by the 'last viewed' property of the video progress
feature were converted to strings when sending to the frontend, but never
changed back to timestamps when PUTting via the SDK. I solved it by not
PUTting the user at all, but using PATCH to set the username instead.
2019-05-29 18:37:01 +02:00
23f8c1a446 Ran npm audit fix --force
This fixed 64 security vulnerabilities and hopefully didn't break too much.
2019-05-29 17:06:41 +02:00
1f5f781ecf Suppress warnings from Werkzeug
- Werkzeug deprecated Request.is_xhr, but it works fine with jQuery and we
  don't need a reminder every time a unit test is run. When we upgrade to
  Werkzeug 1.0 (once that's released) we'll see things break and fix them.
- Werkzeug deprecated their Atom feed. This we should act on; tracked in
  https://developer.blender.org/T65274.
2019-05-29 15:22:45 +02:00
4425771117 Suppress Cerberus deprecation warning caused by Eve
Eve is falling behind on Cerberus. See my bug report on
https://github.com/pyeve/eve/issues/1278 for more info.
2019-05-29 14:32:46 +02:00
931c29a21f MongoDB: db.collection_names() is deprecated → db.list_collection_names() 2019-05-29 13:46:53 +02:00
2aa79d3f09 MongoDB: more changing count() → count_documents() 2019-05-29 13:46:53 +02:00
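A small sketch of the two PyMongo renames above (database name and query are illustrative):

    from pymongo import MongoClient

    db = MongoClient().get_database('cloud')

    names = db.list_collection_names()  # was: db.collection_names()
    n_assets = db['nodes'].count_documents(
        {'node_type': 'asset'})         # was: cursor.count()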
6f8fd4cd72 Cerberus 1.3 renamed 'validator' → 'check_with'
This results in a change in schemas as well as in validator function names.
2019-05-29 12:58:40 +02:00
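A sketch of the rename, assuming a custom markdown check similar to Pillar's (field and check names are illustrative):

    from cerberus import Validator

    class PillarValidator(Validator):
        def _check_with_markdown(self, field, value):  # was: _validator_markdown
            if not isinstance(value, str):
                self._error(field, 'must be markdown text')

    schema = {'description': {
        'type': 'string',
        'check_with': 'markdown',  # was: 'validator': 'markdown'
    }}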
f53217cabf Added some type declarations 2019-05-29 12:58:40 +02:00
8b42e88817 Cerberus 1.3 renamed '{value,key}schema' to '{values,keys}rules'
'valueschema' and 'keyschema' have been replaced by 'valuesrules' and
'keysrules'. Note the change from 2x singular ('value' and 'schema') to
2x plural ('values' and 'rules').
2019-05-29 12:57:38 +02:00
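As a sketch, the same dict field before and after the rename (field names illustrative):

    # Before (Cerberus < 1.3):
    old_rule = {'attachments': {
        'type': 'dict',
        'keyschema': {'type': 'string'},
        'valueschema': {'type': 'dict'},
    }}

    # After (Cerberus 1.3):
    new_rule = {'attachments': {
        'type': 'dict',
        'keysrules': {'type': 'string'},
        'valuesrules': {'type': 'dict'},
    }}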
dd5cd5b61a Compatibility with Eve 0.9.1
Note that Eve's update from 0.9 → 0.9.1 had a breaking API change, as the
return type of `app.data.find(...)` changed...
2019-05-29 10:50:55 +02:00
459a579964 Some extra type annotations 2019-05-28 16:13:14 +02:00
0b32e973a9 More thorough retrying in Blender ID communication 2019-05-28 16:13:14 +02:00
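A hedged sketch of what such retrying typically looks like with requests; the exact knobs used for the Blender ID communication are not shown in the commit message.

    import requests
    from requests.adapters import HTTPAdapter
    from urllib3.util.retry import Retry

    session = requests.Session()
    retry = Retry(total=5, backoff_factor=0.5,
                  status_forcelist=[500, 502, 503, 504])
    session.mount('https://', HTTPAdapter(max_retries=retry))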
c6e70dc5d9 Removed and gitignored poetry.lock
The poetry.lock files are only relevant for repeatable deployments,
and the one in this project isn't used for that (only the Blender
Cloud project file is used, and that's still there).
2019-05-28 16:13:14 +02:00
1b90dd16ae Re-locked dependencies 2019-05-28 16:13:14 +02:00
1e823a9dbe MongoCollection.count() and update() are deprecated
Eve doesn't have any counting methods on `current_app.data`, so there is
no one-to-one translation for `cursor.count()` in
`file_storage/__init__.py`. Since the call was only used in a debug log
entry, I just removed it altogether.

I removed `pillar.cli.operations.index_users_rebuild()`, as it was
importing `pillar.api.utils.algolia.algolia_index_user_save` which doesn't
exist any more, so the code was dead anyway.
2019-05-28 16:13:14 +02:00
47d5c6cbad UnitTest.assertEquals is deprecated, replaced by assertEqual 2019-05-28 16:13:14 +02:00
b66247881b Relaxed required versions of all our dependencies
Some packages were upgraded; the rename from `CommonMark` to `commonmark`
was the only change breaking the unit tests.
2019-05-28 16:13:14 +02:00
90e5868b31 Dependencies: remove requests, it's pulled in via python-pillar-sdk anyway 2019-05-28 16:13:14 +02:00
94efa948ac Development dependencies updates to their latest versions 2019-05-28 16:13:14 +02:00
ec344ba894 Generate Blender ID URL based on configuration 2019-05-23 13:48:24 +02:00
cb8c9f1225 Merge branch 'production' 2019-05-22 10:27:25 +02:00
51ed7a647d put_project(project_dict): also log the error when we cannot PUT
Previously only a ValueError was raised, which was sometimes swallowed.
Instead of looking up the culprit and solving this properly, I just log the
error now.
2019-05-22 10:15:25 +02:00
c396c7d371 Allow web projects to un-attach project pictures
This makes it possible to PUT a project after attach_project_pictures()
has been called on it (which embeds the picture file documents).

This will be used in SVNman.
2019-05-22 10:14:19 +02:00
2d7425b591 Added 'idna' package as dependency
It's required by pyopenssl but for some reason wasn't installed by Poetry.
2019-05-14 11:19:03 +02:00
3f875ad722 Gitignore devdeps metadata directory 2019-05-14 10:42:15 +02:00
9c517b67c5 Documenting use of Poetry for dependency management 2019-05-14 10:42:15 +02:00
dd9a96d111 README: Removed trailing whitespace 2019-05-14 10:42:15 +02:00
3d6ff9a7bc Moving to Poetry 2019-05-14 10:42:15 +02:00
8ba7122a01 Forms: Use own label element for fields instead of wtforms.
This way we can do two things:
* Tag the field for translation
* Use a filter (like undertitle for nicer labels)
2019-04-24 21:29:55 +02:00
15d5ac687c Attach all project pictures when viewing node
The Open Graph rendering code is not completely refactored yet,
so it still requires a mix of project.picture_header and
project.picture_16_9. By attaching all project pictures we prevent
unexpected errors.
2019-04-19 15:30:55 +02:00
402f9f23b5 Use picture_16_9 as og_image
Previously we used picture_header, which did not guarantee a suitable
aspect ratio for an Open Graph image.
2019-04-19 14:12:43 +02:00
486fb20dcf Enhance project with attach_project_pictures
Instead of individually attaching project images, use the utility
function.
2019-04-19 14:11:42 +02:00
34f2372082 Add picture_16_9 when attaching project pictures 2019-04-19 14:10:19 +02:00
c217ec194f Save 16_9 picture via Project edit form 2019-04-19 14:09:54 +02:00
b68af6da8b Rename 16x9 to 16_9
We do this to reduce ambiguity about resolution vs aspect ratio.
2019-04-19 11:50:41 +02:00
06f5bc8f01 Add picture_16x9 attribute for Project
This image can be used as a source for Open Graph tags, as well as for
displaying a project thumbnail with a known (or at least expected)
aspect ratio.
2019-04-19 10:57:46 +02:00
53eb9f30fd Bumped Jinja2 2.10 → 2.10.1
GitHub poked us about this being a security update.
2019-04-18 10:15:41 +02:00
43d464c60c Fix missing icons. 2019-04-15 12:42:49 +02:00
d0ef76c19e CSS: Utility classes for column count property. 2019-04-12 17:16:06 +02:00
a43eca4237 Timeline: Less prominent project title. 2019-04-10 17:08:14 +02:00
af020d4653 Cleanup CSS.
Extend Bootstrap classes instead of using own styling.
2019-04-10 17:08:01 +02:00
2c207b35e2 UI Asset List: Add custom class to meta items. 2019-04-10 14:14:04 +02:00
3f3172e00e Allow PUT method for owner on comment creation
Make use of the permission system and allow PUT method for the creator
of a Node of type comment. This enables comment owners to edit their
own posts.
2019-04-09 01:09:08 +02:00
26a09a900f PEP8 formatting 2019-04-09 01:01:58 +02:00
90154896fb PEP8 formatting 2019-04-09 01:01:49 +02:00
95d611d0c5 Cleanup: remove unused import and blank line 2019-04-08 23:55:26 +02:00
dc7d7bab4a Extend projects/view.html for page templates
Using projects/landing.html was causing an exception, since the landing
template expects project attributes that are available only for
projects that are setup_for_film.
2019-04-08 16:43:20 +02:00
d047943a07 Cleanup duplicate code. 2019-04-04 14:21:34 +02:00
b64b75eecb Jumbotron: Subtle text shadow on text 2019-04-04 14:21:34 +02:00
152dc50715 UI Timeline: Make buttons outline white when dark background. 2019-04-04 14:21:34 +02:00
73edd5c5d2 Remove unused import 2019-04-04 14:15:03 +02:00
3d8ee61b03 Clean up: Whitespace 2019-04-04 11:34:13 +02:00
ee5a1a8bb7 Use kebab-case for Vue event names
https://vuejs.org/v2/guide/components-custom-events.html#Event-Names
2019-04-04 11:33:43 +02:00
ccc78af742 white space clean up 2019-04-04 10:44:43 +02:00
de40b4b2b6 Specify prop type 2019-04-04 10:44:22 +02:00
fe2f350013 Silence warning about changing prop value 2019-04-04 10:18:42 +02:00
1b42d114ad Whitespace cleanup 2019-04-04 10:18:42 +02:00
e58db61d2a Add missing closing bracket to components 2019-04-04 10:18:42 +02:00
c6333cecfe Better initial component values 2019-04-04 10:18:42 +02:00
ee6fd3386d Fix wrong prop type 2019-04-04 10:18:42 +02:00
700e7d2fc4 Bind vue component key 2019-04-04 10:18:42 +02:00
619dfda6fa Only use minified vue if built as production 2019-04-04 10:18:42 +02:00
985e96f20b Wrong type was passed into component 2019-04-04 10:18:42 +02:00
37e09c2943 Remove unused parameter 2019-04-04 10:18:42 +02:00
62af8c2cbf Add example of usage 2019-04-04 10:18:42 +02:00
0b12436a31 UI Page: Fix link on header. 2019-04-04 00:26:15 +02:00
7f12c9b4ad UI Pages: Hide title if there is an image. 2019-04-04 00:24:37 +02:00
1171a8e437 UI Theatre: margin around comments container. 2019-04-03 23:15:09 +02:00
54abda883d Cleanup: remove unused font-pillar link.
They are now built into the main stylesheets.
2019-04-03 23:12:17 +02:00
ad0f9b939a CSS: include font-pillar into the main stylesheets. 2019-04-03 23:11:57 +02:00
4d5a8613af UI Alerts: minor style tweaks.
Remove margin from paragraphs and remove redundant text-align.
2019-04-03 22:47:04 +02:00
ff314c0a7d Cleanup: remove blender-cloud specific pug component. 2019-04-03 15:28:06 +02:00
18ec206a40 UI Breadcrumbs: Always show. 2019-04-02 16:40:01 +02:00
8f3f3b6698 UI Fix: Show sidebar on project edit. 2019-04-02 16:40:01 +02:00
ad5dbdf094 Remove unused data property 2019-04-02 14:09:49 +02:00
67a56dc797 Fix typo 2019-04-02 14:09:49 +02:00
093f4101cf UI Comments: Minor style adjustments and fixes. 2019-04-02 13:53:55 +02:00
b96731a939 UI jstree: Fix collapse of folders with one click.
Two clicks is too much work. This was removed by mistake in a previous commit.
2019-04-02 12:27:09 +02:00
4f5746e0b7 UI Page: style the Edit bar.
With light background color and border, so it stands out.
2019-04-01 14:53:57 +02:00
1d65ea9de0 UI Pages: Add page title. 2019-04-01 14:53:57 +02:00
c31ef97c9e UI Timeline: scale the placeholder to almost fit the screen.
So the timeline has some initial height (75% of viewport height), and
once the content shows up the page doesn't jump much.
2019-04-01 14:53:57 +02:00
3906bab2ac Cleanup: Tweak comments and sort classes. 2019-04-01 14:53:57 +02:00
c93393ad10 Export vue component user-avatar 2019-04-01 14:25:45 +02:00
a37aec61b2 Vue getting started links 2019-04-01 11:23:25 +02:00
1b96c6e37e Added comments 2019-04-01 10:34:35 +02:00
119900337d Mark as deprecated and recommend Vue instead 2019-04-01 10:34:35 +02:00
1d476d03d7 UI Project: Show sidebar by default.
Change the logic to hide, instead.
2019-03-29 15:47:29 +01:00
77a7b15a73 Merge branch 'production' 2019-03-29 15:43:07 +01:00
562e21d57a UI Page: Set page url as title.
So it's highlighted in the navigation.
2019-03-29 15:35:19 +01:00
c80234bac2 UI Page: style node description with its own class.
Instead of relying on 'landing'.
2019-03-29 15:34:56 +01:00
f31253dd17 UI Pages: Show Edit Post link. 2019-03-29 15:19:28 +01:00
46bbd1297b UI Pages: Only show header div if there is a picture. 2019-03-29 15:19:28 +01:00
5556bfee52 UI Page: Style like a regular page, not like the landing template (dark background). 2019-03-29 15:19:28 +01:00
72a42c2bf8 Template Cleanup: Remove unused 'title' variable.
'title' is set by the extended template ('landing').
2019-03-29 15:19:28 +01:00
da337df82b HACK to get page editing to not 500 Internal Server Error on us 2019-03-29 15:06:21 +01:00
50aec93515 HACK to get page editing to not 500 Internal Server Error on us 2019-03-29 14:54:20 +01:00
4187d17f1f Formatting 2019-03-29 14:54:20 +01:00
ba299b2a4c Documentation of es6 transcompile and packaging 2019-03-29 10:44:04 +01:00
c8adfc5595 UI Jstree: Small padding and height adjustment of anchors. 2019-03-28 21:15:22 +01:00
50d17de278 UI Project: move sticky breadcrumbs when sidebar is visible. 2019-03-28 20:59:39 +01:00
f72c1fffca UI Jstree: Spacing and style adjustments. 2019-03-28 20:59:04 +01:00
afc8acff83 Breadcrumbs: Take into account breadcrumbs when scaling project container. 2019-03-28 20:57:59 +01:00
4c857e63b2 UI: Toggle project sidebar logic. 2019-03-28 20:46:52 +01:00
48cb216c4a Removed unnecessary <template> element
Vue.js uses `<template>` when we don't want to output an element but still
want to set some attributes (like `v-if`) on a piece of text. Since we're
outputting a `<span>`, we can just move the attributes there.
2019-03-28 16:40:01 +01:00
1fd17303a5 Breadcrumbs: emit 'navigate' event when clicking on the link
Clicking on the breadcrumb link now doesn't follow the link any more,
but by keeping it as a link users can still open it in a new tab.
2019-03-28 16:38:28 +01:00
d5a4c247b0 Breadcrumbs: Initial styling. 2019-03-28 16:03:50 +01:00
a3b8a8933c Breadcrumbs: Use <span> element in last item (_self).
To be able to style it similarly to the links, but without a link.
2019-03-28 16:03:24 +01:00
5c8181ae41 Refactored Date columns to have a common base 2019-03-28 14:36:30 +01:00
ff43fa19fd Add Created and Updated column 2019-03-28 12:48:45 +01:00
f73b7e5c41 Corrected comment 2019-03-28 12:40:33 +01:00
c089b0b603 Added little clarification 2019-03-28 12:40:33 +01:00
4499f911de Node breadcrumbs
Breadcrumbs are served as JSON at `/nodes/{node ID}/breadcrumbs`, with
the top-level parent listed first and the node itself listed last:

    {breadcrumbs: [
        ...
        {_id: "parentID",
         name: "The Parent Node",
         node_type: "group",
         url: "/p/project/parentID"},
        {_id: "deadbeefbeefbeefbeeffeee",
         name: "The Node Itself",
         node_type: "asset",
         url: "/p/project/nodeID",
         _self: true},
    ]}

When a parent node is missing, it has a breadcrumb like this:

    {_id: "deadbeefbeefbeefbeeffeee",
     _exists': false,
     name': '-unknown-'}

Of course this will be the first in the breadcrumbs list, as we won't be
able to determine the parent of a deleted/non-existing node.

Breadcrumbs are rendered with Vue.js in Blender Cloud (not in Pillar);
see projects/view.pug.
2019-03-28 12:40:33 +01:00
465f1eb87e Store filter/column settings in localStorage
The filter and column settings in tables are stored per project and
context in the browser's localStorage. This makes the table keep the
settings even if the browser is refreshed or restarted.

The table emits a "componentStateChanged" event containing the table's
current state (filter/column settings), which is then saved by the top
level component.
2019-03-28 10:29:13 +01:00
f6056f4f7e UI: New mixin component for listing categories.
E.g. for Blender Cloud's Learn, Libraries, etc.
2019-03-27 15:51:41 +01:00
64cb7abcba Removed unused imports 2019-03-27 15:51:24 +01:00
1f671a2375 Update package-lock.json
The current packages were failing to build libsass on macOS.
2019-03-27 14:22:33 +01:00
898379d0d3 UI: Font-size tweak for node description in timeline. 2019-03-27 14:11:05 +01:00
87ff681750 UI: Font-size tweak to node description for blog and project. 2019-03-27 14:09:48 +01:00
db11b03c39 Fix typo 2019-03-27 12:12:17 +01:00
1525ceafd5 Fix for find_markdown_fields project hook
Original commit 3b59d3ee9aacae517b06bf25346efa3f2dae0fe7
Breaking commit 32e25ce129612010a4c14dfee0d21d1a93666108

The breaking commit was actually meant to remove the need for this
hook logic entirely, by relying on a custom validator instead.
This works for nodes, but it currently does not work for projects.
The issue needs to be further investigated via T63006.
2019-03-27 12:12:17 +01:00
9c1e345252 Newline at end of file 2019-03-27 12:12:17 +01:00
237c135c31 UI Timeline: support for dark backgrounds.
Simply place the +timeline(project_id) mixin inside a div with a 'timeline-dark' class.
2019-03-27 12:07:06 +01:00
85706fc264 Updated bug report URLs
The project was apparently moved. The issues are closed too, though, so
at some point we could check whether our workarounds can be removed.
2019-03-27 11:58:48 +01:00
4cd182e2d2 Cleanup: spaces to tabs. 2019-03-27 11:19:11 +01:00
69806d96a9 UI: Narrower column for text in jumbotron component.
Leaves some room to see the image on the right.
2019-03-27 11:04:39 +01:00
4977829da7 Cleanup: Remove legacy Bootstrap 3 minified CSS file.
* Our Pillar apps now use Bootstrap 4.
* Pillar builds its own CSS from Bootstrap 4 components (from node_modules)
2019-03-26 18:31:54 +01:00
cd94eb237f Cleanup: One indentation level too much. 2019-03-26 17:45:33 +01:00
97cda1ef6b UI: Fix hidden fields showing up in project edit.
The 'hidden' class got renamed to d-none in Bootstrap 4.
2019-03-26 15:21:15 +01:00
5cba6f53f5 Make sure sort buttons are always clickable
Hide the overflowing part of the column label if there is not enough room
2019-03-22 14:10:18 +01:00
072a1793e4 Add missing tooltips in table 2019-03-22 14:07:29 +01:00
375182a781 Add css class per task type to table columns 2019-03-22 14:06:54 +01:00
022fc9a1b2 Removed possibility to toggle selected in table 2019-03-22 14:06:17 +01:00
6c4e6088d3 UI: Vertically center badges under comment avatar. 2019-03-21 01:03:59 +01:00
5aed4ceff7 Avoid emitting duplicate selectedItemsChanged 2019-03-20 15:19:37 +01:00
dfd61c8bd8 Update pillar table props 2019-03-20 15:18:50 +01:00
6bae6a39df Mark pillar table rows as corrupt if init fails 2019-03-20 15:14:50 +01:00
66e6ba1467 Move table css from attract to pillar repo 2019-03-20 15:12:19 +01:00
a104117618 Added pillar.auth.cors.allow() decorator
Use this decorator on Flask endpoints that should respond with CORS
headers. These headers are sent in a reply when the browser sends an
`Origin` request header; for more info see [1].

This commit rolls back the previous commit (0ee1d0d3), as this new
approach with a separate decorator is both easier to use and less
error-prone.

[1] https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS
2019-03-19 10:55:15 +01:00
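A hedged usage sketch of the decorator; `pillar.auth.cors.allow()` is named in the commit, while the blueprint, route, and arguments are illustrative.

    from flask import Blueprint, jsonify
    from pillar.auth import cors

    example = Blueprint('example', __name__)

    @example.route('/api/example')
    @cors.allow()  # adds CORS headers when the request has an Origin header
    def example_endpoint():
        return jsonify(status='ok')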
0ee1d0d3da Allow HTTP headers to be set for @require_login() error responses
This makes the `require_login` decorator always return a Flask response.
Previously it could also raise a `Forbidden` exception; now it returns a
403 Forbidden response in that case too.
2019-03-18 14:42:00 +01:00
cfff5ef189 Fixed redirects ignoring the 'next_after_login` session variable
There were a few redirects (for example, trying to log in while already
logged in) that would incorrectly redirect to the main page. They use the
`next_after_login` session variable now.
2019-03-18 14:37:20 +01:00
58ff236a99 Generalized table to not depend on project id 2019-03-15 10:18:23 +01:00
ace091c998 Row selection before the table was fully initialized failed
If a row was selected before the table was fully initialized, it would
be unselected once the initialization completed.
2019-03-14 10:53:47 +01:00
4136da110f Added comments and minor refactoring 2019-03-14 10:53:46 +01:00
01da240f54 Attract multi edit: Shift + mouse to select all between
and hopefully the Command key on Mac now works for multi-select.
2019-03-13 15:27:16 +01:00
379f743864 Attract multi edit: Edit multiple tasks/shots/assets at the same time
For the user:
Ctrl + L-Mouse to select multiple tasks/shots/assets and then edit
the nodes as before. When multiple items are selected a chain icon
can be seen in the editor next to the fields. If the chain is broken,
it indicates that the values are not the same on all the selected
items.

When a field has been edited it will be marked with a green background
color.

The items are saved one by one in parallel. This means that one item
could fail to be saved, while the others get updated.

For developers:
The editor and activities have been ported to Vue. The table has
been updated to support multi-select.

MultiEditEngine is the core of the multi edit. It keeps track of
which values differ and what has been edited.
2019-03-13 13:53:40 +01:00
d22c4182bf UI: Align 'Linked' comment tag with comment metadata. 2019-03-12 20:27:30 +01:00
69251de995 UI: Set max-width variable for select2. 2019-03-12 14:27:29 +01:00
57a180dc00 UI: Don't set font-size on node-details-description.
This is used for comments, nodes, everywhere. So each component should set
its own size.
2019-03-12 14:27:06 +01:00
12d8a282aa Fix T62049: Wrong sorting of comment replies 2019-03-11 10:32:40 +01:00
fbcd4c9250 UI: Fix emojis margin-top on node description utility. 2019-03-11 03:12:07 +01:00
a3f58ef8fe Bumped some secondary requirements
The cryptography package was getting old, and since Flamenco is going to
issue JWT tokens soon, I wanted to be up to date with security fixes.

Also requires updating pillar-python-sdk.
2019-03-07 17:39:06 +01:00
c7b0842779 CSS: Remove primary buttons gradient.
Doesn't always look nice; fall back to the default Bootstrap primary color instead.
2019-02-28 03:55:01 +01:00
5bcfa5218a UI: Minor style fixes to node-details-description.
Blockquotes and unordered lists could have the first line badly indented
since we introduced single-line comments. Now they both break the line
before being displayed.
2019-02-23 02:17:39 +01:00
da14d34551 Added jinja filter pretty_duration_fractional that includes milliseconds 2019-02-21 17:38:37 +01:00
32e25ce129 Notifications regression: Notifications not created
Notifications for when someone posted a comment on your node
were not created.

Root cause was that default values defined in the schema were not set,
resulting in activity subscriptions not being active.
There were 2 bugs preventing them from being set:
* The way the caching of markdown as html was implemented caused
  default values not to be set.
* Eve/Cerberus regression causes nested default values to fail
  https://github.com/pyeve/eve/issues/1174

Also, a 3rd bug caused nodes without a parent not to have a
subscription.

Migration scripts:
How markdown fields are cached has changed, and unused properties
of attachments have been removed.
./manage.py maintenance replace_pillar_node_type_schemas

Set the default values of activities-subscription
./manage.py maintenance fix_missing_activities_subscription_defaults
2019-02-19 14:16:28 +01:00
250c7e2631 Vue Attract: Default sort shots by cut_in_timeline_in_frames 2019-02-12 12:59:01 +01:00
2f5f73843d Vue Attract: Sort/filterable table based on Vue
Initial commit implementing sortable and filterable tables for Attract
using Vue.
2019-02-12 09:08:37 +01:00
a5bae513e1 Navigation: Unified cloud navigation
* Removed main drop down menu
* Added "My cloud" to user menu
* Attract/Flamenco is found under Production Tools menu
* Attract/Flamenco has the same navigation as its project
2019-02-06 10:31:36 +01:00
1101b8e716 Fix Regression: Heart filled icon was shown on all voted comments
Heart filled icon should be an indication that the current user has
voted. Thanks to Pablo Vazquez for pointing it out
2019-02-04 10:16:50 +01:00
f35c2529a6 UI: Make blog title link to the actual blog entry 2019-02-02 04:03:39 +01:00
ecfd27094c UI: Blog title in timeline more prominent 2019-02-02 04:01:56 +01:00
f531685ba8 Updated unit test for FFmpeg 4 2019-01-31 14:57:38 +01:00
ef89b9a1dd CSS: Increase space between avatar and content 2019-01-30 23:15:29 +01:00
c505694b2d Formatting 2019-01-30 23:12:35 +01:00
3b59d3ee9a Projects Bug: Projects page not showing project description
Cache field _description_html was never updated when a project was
inserted/updated. Added an Eve hook, similar to how this cache works
with Nodes.
2019-01-21 14:48:40 +01:00
5eae0f6122 Added convenience url_for() wrapper for use in unit tests 2019-01-08 19:07:14 +01:00
b5a74ce7b9 Utility function for easily getting the project URL given its ID 2019-01-08 19:06:56 +01:00
a32fb6a208 Storage: added function for setting content type, encoding, and attachmentness
These are used by Flamenco to store task logs as gzipped text files, but to
send them to the browser with such HTTP headers that the browser can gunzip
them and display directly (rather than having to download & gunzip yourself).
2019-01-08 15:07:47 +01:00
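An illustrative sketch of the intent; the real function's name and signature are not given in the commit message, so everything below is an assumption.

    import gzip

    def store_task_log(bucket, path: str, log_text: str):
        blob = bucket.blob(path)                               # hypothetical API
        blob.upload(gzip.compress(log_text.encode('utf-8')))   # hypothetical API
        # The new function would then set HTTP metadata along these lines:
        #   Content-Type: text/plain; charset=utf-8  -> display, don't download
        #   Content-Encoding: gzip                    -> browser gunzips on the fly
        #   Content-Disposition: inline               -> no attachment prompt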
974ac6867c Moved storage backend names to a module-global constant
This allows others to import the constant and have proper 'allowed' values
for backends. This will be used by Flamenco for storing task logs.
2019-01-08 14:45:55 +01:00
a756632cad Added pillar.api.projects.utils.storage(project_id) function
For now this returns a bucket in the default storage backend, since
individual projects do not have a 'storage backend' setting (this is
set per file, not per project).
2019-01-08 14:13:30 +01:00
c28d3e333a Storage backends: removed unused Blob.filename attribute
Just use Blob.update_filename() instead.
2019-01-08 14:12:49 +01:00
004bd47e22 Gulp fix for NodeJS 10 2019-01-04 14:20:16 +01:00
64bd2150a4 AbstractPillarTest.create_valid_auth_token() now also accepts string user ID
Strings were already passed to this function, even though it was declared
as taking an ObjectID. Instead of updating all callers, I just made it
convert strings to ObjectID.
2019-01-04 12:46:37 +01:00
a23e063002 Don't use attr.ib to declare a logger
This doesn't work well when overriding in subclasses; it keeps using the
superclass logger. Simply returning a logger fixes this.
2019-01-04 12:45:47 +01:00
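A sketch of the pitfall and the fix: an `attr.ib` default is evaluated once for the declaring class, so subclasses inherit the superclass logger; resolving the logger at access time avoids that. Class names are illustrative.

    import logging
    import attr

    @attr.s
    class Worker:
        # Broken: always Worker's logger, even on subclasses:
        #   log = attr.ib(default=logging.getLogger('Worker'))

        @property
        def log(self) -> logging.Logger:
            cls = type(self)  # the concrete (sub)class, per instance
            return logging.getLogger(f'{cls.__module__}.{cls.__qualname__}')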
903fbf8b0d Missing import & typo 2018-12-20 13:08:23 +01:00
beac125ff9 Nicer logging when refreshing file links 2018-12-20 12:51:53 +01:00
ef259345ce Formatting 2018-12-20 12:51:32 +01:00
b87c5b3728 User Search Bug: Failed to render users without roles 2018-12-20 11:37:30 +01:00
efeea87249 Markdown preview regression: Markdown preview failed in edit project 2018-12-18 17:38:04 +01:00
fb28059ae7 Rebuilt package-lock.json with Node 10 / NPM 6.4 2018-12-18 15:39:18 +01:00
a84d4d13a0 DnD file upload in comments in Firefox bug: CSS seems to be the cause 2018-12-18 15:04:08 +02:00
cb265e1975 Formatting 2018-12-18 12:53:06 +01:00
5b3de5f551 Missing JS parameter 2018-12-18 12:53:02 +01:00
fbcce7a6d8 Vue Comments: Comments ported to Vue + DnD fileupload
* Drag and drop files to comment editor to add a file attachment
* Using Vue to render comments

Since comments now have attachments we need to update the schemas
./manage.py maintenance replace_pillar_node_type_schemas
2018-12-12 11:45:47 +01:00
bba1448acd Added two more maintenance cmds for finding & fixing projectless files
This is about fixing file documents that do not have a `project` key at
all. Those were deleted by the `delete_projectless_files` management
command and restored manually. These commands can fix those file
documents properly, by checking which project they're referenced in, and
setting their `project` property.

Finding the references (`manage.py maintenance find_projects_for_files`)
is a heavy operation as it inspects all nodes and all projects. This can
be done offline on a cloned database, and the result stored in a JSON
file. This JSON file can then be processed on the production server
(`manage.py maintenance fix_projects_for_files /path/to/file.json --go`)
to perform the fix.
2018-12-05 14:23:34 +01:00
da7dc19f66 Expanded test for delete_projectless_files CLI command
It now also checks that _updated and _etag have been updated correctly,
and that the other properties haven't been touched.
2018-12-04 18:03:13 +01:00
de8633a5a4 Formatting 2018-12-04 17:44:35 +01:00
de5c7a98a5 Added CLI command for soft-deleting projectless files
Run `./manage.py maintenance delete_projectless_files --help` for more info.
2018-12-04 17:44:29 +01:00
ac092587af Switch Celery broker from RabbitMQ to Redis
This should work around a bug in Celery where long Celery tasks would
time out and be re-queued, causing an infinite loop.

See https://github.com/celery/celery/issues/3430 for more info.
2018-12-04 10:22:20 +01:00
a10b42afe6 Find only non deleted comments 2018-12-03 22:56:20 +01:00
6377379144 Fix T58116: Timeline does not exclude Posts with 'pending' status 2018-11-28 16:58:24 +01:00
82071bf922 Quick Search: Queries containing equal sign (=) failed 2018-11-27 10:00:44 +01:00
1c0476699a Update default comments sorting
Confidence is not necessary, as we only allow rating_positive.
2018-11-26 23:48:52 +01:00
411a6f75c5 Change default comments sorting
Comments were sorted by descending creation date. Now they are sorted by
descending confidence and descending creation date.
2018-11-26 19:48:12 +01:00
07821c7f97 Timeline Firefox bug fix: load more not working properly
Firefox failed to redraw the page properly when loading more weeks.
2018-11-23 14:55:58 +01:00
64b4ce3ba9 Minor layout and style adjustments. 2018-11-22 21:52:07 +01:00
72417a9abb Minor layout and style adjustments. 2018-11-22 21:35:27 +01:00
6ae9a5ddeb Quick-Search: Added Quick-search in the topbar
Changed how and what we store in Elastic to unify it with how we store
things in MongoDB, so we can have more generic JavaScript code
to render the data.

Elastic changes:
  Added:
  Node.project.url

  Altered to store id instead of url
  Node.picture

  Made Post searchable

./manage.py elastic reset_index
./manage.py elastic reindex

Thanks to Pablo and Sybren
2018-11-22 15:31:53 +01:00
a897e201ba Timeline Fix: Attachment in post did not work 2018-11-22 14:39:25 +01:00
3985a00c6f Timeline: Style and layout adjustments 2018-11-21 20:32:27 +01:00
119291f817 Timeline: Remove header and lead from posts.
Headers don't really match with the rest of the listing.
2018-11-21 20:24:12 +01:00
801cda88bf Project View: Labels for sections 2018-11-21 20:23:07 +01:00
fc99713732 Project-Timeline: Introduced timeline on projects
Limited to projects of category assets and film for now.
2018-11-20 16:29:01 +01:00
1d909faf49 CSS: Override margin-bottom for emoji images. 2018-11-16 23:57:00 +01:00
ed35c54361 CSS: Fix alignment on list with custom bullets. 2018-11-16 23:57:00 +01:00
411b15b1a0 Pin versions in package.json
This should lead to predictable results when running ./gulp.
2018-11-16 15:45:46 +01:00
9b85a938f3 Add npm deps: acorn and glob 2018-11-16 14:31:46 +01:00
989a40a7f7 Add missing dependency for transpiling es6 2018-11-16 14:06:50 +01:00
64cc4dc9bf Bug fix: Sharing files failing
Found using Sentry
2018-11-16 12:46:30 +01:00
9182188647 CSS: Minor style tweaks to user login.
Don't use hardcoded white color for container-box mixin.
2018-11-16 12:38:40 +01:00
5896f4cfdd CSS: Use generic colors for inputs border colors.
More reliable when theming.
2018-11-16 02:31:13 +01:00
f9a407054d CSS: Fix emoji set as block.
When parent styling sets images to be block, emoji should always be inline.
2018-11-15 23:54:16 +01:00
1c46e4c96b CSS: Fix !default setting in config 2018-11-14 02:06:22 +01:00
2990738b5d Lazy Home: Lazy load latest blog posts and assets and group by week and
project.

JavaScript tutti.js and timeline.js are needed, and then the following to
init the timeline:

$('.timeline')
    .timeline({
        url: '/api/timeline'
    });

# Javascript Notes:
## ES6 transpile:
* Files in src/scripts/js/es6/common will be transpiled from
modern es6 js to old es5 js, and then added to tutti.js
* Files in src/scripts/js/es6/individual will be transpiled from
modern es6 js to old es5 js to individual module files
## JS Testing
* Added the Jest test framework to write javascript tests.
* `npm test` will run all the javascript tests

Thanks to Sybren for reviewing
2018-11-12 12:57:25 +01:00
e2432f6e9f NPM: Upgrade to Gulp 4
No functional changes, besides being slightly faster thanks to parallel tasks, and more future-proof.
2018-11-10 01:08:30 +01:00
aa63389b4f Remove duplicated file
The file was copy-pasted in api/search.
2018-11-04 11:48:08 +01:00
5075cd5bd0 Introducing Flask Debug Toolbar
Display useful information for debugging.
2018-11-01 02:19:13 +01:00
ceef04455c Video player in project header bug (Firefox):
Unable to play video in project header in Firefox.

Reason:
Firefox is missing ResizeObserver, so as a workaround videoJs inserts an
iframe below the video and listens to resize events on that. This iframe
lands in front of the video when we use the class ".embed-responsive",
and therefore we cannot start the video.

Solution:
I could not see any difference in how the page was rendered
with/without this class, so I removed it.
2018-10-24 13:34:08 +02:00
c8e62e3610 Loading bar: Introduced two event listeners on window, 'pillar:workStart' and 'pillar:workStop', that (de)activate the loading bar.
Reason:
* To decouple code
* Have the loading bar active until the whole page has stopped working
* Have local loading info

Usage:
$('.myClass')
    .on('pillar:workStart', function(){
        ... do stuff locally while loading ...
    })
    .on('pillar:workStop', function(){
        ... stop doing stuff locally while loading ...
    })

$('.myClass .mySubClass').trigger('pillar:workStart')
... do stuff ...
$('.myClass .mySubClass').trigger('pillar:workStop')
2018-10-23 13:57:02 +02:00
ce7cf52d70 Refresh badges every 10 minutes
Now that they are new, they should be snappy!
2018-10-11 10:04:16 +02:00
dc2105fbb8 Enabled badges in comments 2018-10-10 16:55:10 +02:00
71185af880 Added json jinja filter for debugging purposes 2018-10-10 16:55:10 +02:00
041f8914b2 Show badges on user profile page 2018-10-10 16:55:06 +02:00
b4ee5b59bd Sync Blender ID badge as soon as user logs in
This adds a new Blinker signal `user_logged_in` that is only sent when
the user logs in via the web interface (and not on every token
authentication and every API call).
2018-10-10 16:54:58 +02:00
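A hedged sketch of reacting to the new signal; `user_logged_in` is named in the commit, its import location is an assumption.

    from pillar.auth import user_logged_in  # assumed import location

    @user_logged_in.connect
    def on_web_login(sender, **kwargs):
        # Fires only on interactive web logins, not on per-request token
        # auth, so this is a safe place for one-off work like a badge sync.
        ...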
314ce40e71 Send logged-in user in user_authenticated signal 2018-10-10 15:30:35 +02:00
7e941e2299 Added TODOs and removed fetching unused field from MongoDB 2018-10-10 14:40:45 +02:00
53811363ce Search bug fix: Missing video plugins resulted in wrong volume and progress. 2018-10-05 14:37:32 +02:00
51057e4d63 Search bug fix: Grid/List toggle on group nodes also affected the way search results were presented 2018-10-05 12:37:48 +02:00
a1a48c1941 Elasticsearch: Added documentation on how to set up the indexing. 2018-10-05 11:35:02 +02:00
19fdc75e60 Free assets: Assets should not be advertised as free if the user is a logged-in subscriber. 2018-10-04 17:44:08 +02:00
879bcffc2b Asset list item: Don't show user.full_name in latest and random assets 2018-10-04 12:30:05 +02:00
6ad12d0098 Video Duration: The duration of a video is now shown on thumbnails and below the video player
Asset nodes now have a new field called "properties.duration_seconds". This holds a copy of the duration stored on the referenced video file and stays in sync using Eve hooks.

To migrate existing duration times from files to nodes you need to run the following:
./manage.py maintenance reconcile_node_video_duration -ag

There are 2 more maintenance commands to be used to determine if there are any missing durations in either files or nodes:
find_video_files_without_duration
find_video_nodes_without_duration

FFProbe is now used to detect what duration a video file has.

Reviewed by Sybren.
2018-10-03 18:30:40 +02:00
a738cdcad8 Fix and tweaks to theatre mode
* Only show width/height if available (would be None otherwise)
* If image width/height is not available, allow zooming
* Fix styling and cleanup
* Remove footer (reported by Vulp35 on Twitter, thanks!)
2018-10-01 11:56:52 +02:00
199f37c5d7 Tagged Asset: Added metadata
Video duration, Project link and pretty date
2018-09-26 11:29:15 +02:00
4cf93f00f6 Assets: Fix video progress not showing 2018-09-24 13:31:48 +02:00
eaf9235fa9 Fix users listing styling 2018-09-21 17:11:26 +02:00
24ecf36896 CSS: Brighter primary button 2018-09-21 16:51:45 +02:00
86aa494aed CSS: Use 3 cards even on media-xl 2018-09-21 16:25:48 +02:00
5a5b97d362 Introducing Main Dropdown navigation for mobile 2018-09-21 16:13:50 +02:00
831858a336 CSS: Make buttons use Bootstrap's variable for roundness 2018-09-21 16:13:50 +02:00
e9d247fe97 Added assertion in test to verify that the asset was deleted 2018-09-21 14:24:37 +02:00
1ddd8525c7 Remove references to node from projects when the node is deleted.
Removes node references in project fields header_node, nodes_blog, nodes_featured, nodes_latest.
2018-09-21 14:23:47 +02:00
c43941807c Node details: Center only on landing 2018-09-21 12:11:11 +02:00
bbad8eb5c5 Remove unused project macros file
The only macro was render_secondary_navigation, which is in the _navigation.pug
template together with the other Blender Cloud navigation macros.
2018-09-20 16:38:17 +02:00
04f00cdd4f Loading Bar: Utility to turn it on/off 2018-09-20 15:20:29 +02:00
66d9fd0908 Center node-details-description 2018-09-20 12:15:08 +02:00
516ef2ddc7 Navigation: if category is Assets, then call it Libraries 2018-09-20 12:10:35 +02:00
35fb07ee64 Navigation: Move marker on left side
On the right it looks like a scrollbar.
2018-09-20 12:10:09 +02:00
f1d67894dc Rename secondary_navigation to navigation_project 2018-09-20 12:05:46 +02:00
aef2cf8c2d Navigation: Fix notification number 2018-09-19 19:43:49 +02:00
d347ddac2c Navigation: Films -> Open Projects
And show navigation when in the Blog
2018-09-19 19:33:01 +02:00
186ba167f1 Navigation: remove extra 's' for assets project
Such a lame solution. We need better categories.
2018-09-19 19:09:04 +02:00
847e97fe8c Project: remove arrow left/right navigation hotkey 2018-09-19 18:33:53 +02:00
7ace5f4292 Search: use proper navigation
Also remove failing projectBrowseTypeList js
2018-09-19 18:22:27 +02:00
6cb85b06dc Project: Dark navbar for edit project 2018-09-19 18:21:47 +02:00
5c019e8d1c Landing: Set project title as active 2018-09-19 15:50:23 +02:00
7796179021 Navigation: Position icons 2018-09-19 15:42:18 +02:00
26aca917c8 Use correct permission format for gulp-chmod 2018-09-19 14:45:43 +02:00
e262a5c240 Jumbotron: take content if defined in the block 2018-09-19 12:39:18 +02:00
e079ac4da1 CSS adjustments to dropdowns, cards, responsive 2018-09-19 11:33:20 +02:00
83097cf473 Projects: Explore -> Browse 2018-09-18 18:53:55 +02:00
f4ade9cda7 Allow empty content for card-deck component
In cases like the tags groups we want an empty card-deck because its
content is filled up via javascript.
2018-09-18 16:56:08 +02:00
31244a89e5 Merge branch 'master' into production 2018-09-18 15:50:55 +02:00
749c3dbd58 Gulp: Add bootstrap's collapse and alert js to tutti 2018-09-18 15:25:20 +02:00
b1d97e723f Cards: Smaller ribbon for vertical aligned cards 2018-09-18 15:25:20 +02:00
46bdd4f51c Pages: Don't show date and page title
It's already in the jumbotron
2018-09-18 15:25:20 +02:00
93720e226c Badges: don't display them just yet 2018-09-18 15:25:20 +02:00
9a0da126e6 Fix failing tests
Failure was due to a new ‘slug’ key in the link dict.
2018-09-18 15:14:27 +02:00
45672565e9 Card style fixes 2018-09-18 12:53:34 +02:00
3e1273d56c CSS: zoom-in cursor utility 2018-09-18 12:49:06 +02:00
fe86f76617 Search: styling 2018-09-17 19:04:42 +02:00
008d9b8880 Comments: padding 2018-09-17 18:35:04 +02:00
13b606df45 CSS cleanup and use classes for styling 2018-09-17 18:16:42 +02:00
57f5836829 Cleanup and replace custom styles with bootstrap classes. 2018-09-17 17:08:46 +02:00
e40ba69872 Project style adjustments. 2018-09-17 17:07:10 +02:00
0aeae2cabd Navigation: Highlight current page in the navbar 2018-09-17 15:02:54 +02:00
601b94e23a Pages: Set title from page properties url 2018-09-17 15:02:24 +02:00
00c4ec8741 Navigation Links: Pass the slug
So we can style the items by comparing it to the page 'title'.
2018-09-17 15:01:57 +02:00
caee114d48 Posts: Remove unused title and pages 2018-09-17 15:01:23 +02:00
7fccf02e68 Posts: Pass navigation_links
Otherwise pages won't show up when looking at a project blog
2018-09-17 15:00:55 +02:00
1c42e8fd07 Nodes View: Remove unnecessary containers
#node-container and #node-overlay were not used.
2018-09-17 14:26:37 +02:00
77f855be3e Remove jQuery Montage
No longer used since we list assets with a macro.
2018-09-17 14:25:19 +02:00
cede3e75db Remove more Markdown references 2018-09-17 13:47:03 +02:00
02a7014bf4 Cleanup and title-underline utility 2018-09-17 12:54:07 +02:00
04e51a9d3f CSS: Break to large size a bit earlier 2018-09-17 12:53:25 +02:00
d4fd6b5cda Asset Listing: display author name (when available) 2018-09-17 12:52:48 +02:00
2935b442d8 Remove outdated remarkdown_comments management command 2018-09-17 09:14:11 +02:00
567247f3fd Rename hooks.py to eve_hooks.py
Follow naming convention started in Attract and Flamenco.
2018-09-17 09:09:46 +02:00
def52944bf CSS tweaks for embeds, videos and iframe 2018-09-16 23:56:31 +02:00
8753a12dee Tweak unit test to support new embed code 2018-09-16 22:04:22 +02:00
77e3c476f0 Move node hooks into own file 2018-09-16 13:04:12 +02:00
842ddaeab0 Assets: Display similar assets based on tags
Experimental.
2018-09-16 06:29:19 +02:00
85e5cb4f71 Projects: Only display category for public projects 2018-09-16 05:02:52 +02:00
6648f8d074 Minor style adjustments 2018-09-16 05:02:16 +02:00
a5bc36b1cf Jumbotron overlay is now optional.
Just add the jumbotron-overlay class, or jumbotron-overlay-gradient
2018-09-16 04:28:11 +02:00
e56b3ec61f Use Pillar's built-in markdown when editing projects/creating posts. 2018-09-16 04:27:24 +02:00
9624f6bd76 Style pages 2018-09-16 04:05:37 +02:00
4e5a53a19b Option to limit card-deck to a maximum N columns
Only 3 supported for now
2018-09-16 03:42:48 +02:00
fbc7c0fce7 CSS: media breakpoints
from Bootstrap and added a couple more for super big screens
2018-09-16 03:39:54 +02:00
bb483e72aa CSS cleanup (blog, comments) 2018-09-16 03:05:34 +02:00
baf27fa560 Blog: Fix and css cleanup 2018-09-16 02:04:14 +02:00
845ba953cb Make YouTube shortcode embeds responsive
Part of T56813
2018-09-15 22:32:03 +02:00
e5b7905a5c Project: Sort navigation links
See T56813
2018-09-15 22:12:12 +02:00
88c0ef0e7c Blog: fixes and tweaks 2018-09-15 21:32:54 +02:00
f8d992400e Extend attachment shortcode rendering
The previous implementation only supported rendering
attachments within the context of a node or project document.
Now it also supports node.properties. This is a temporary
solution, as noted in the TODO comments.
2018-09-15 19:01:58 +02:00
263d68071e Add view_progress to nodes of type asset 2018-09-15 17:59:30 +02:00
0f7f7d5a66 Profile styling, layout and cleanup. 2018-09-15 16:42:29 +02:00
6b29c70212 Navigation menu: Style see-more items 2018-09-15 06:16:06 +02:00
07670dce96 Fix view type list for folders 2018-09-15 05:50:42 +02:00
fe288b1cc2 Typo 2018-09-15 05:50:10 +02:00
2e9555e160 Layout and style for new global menu. 2018-09-15 05:41:15 +02:00
b0311af6b5 CSS: $primary-accent color and gradient utils 2018-09-15 05:40:29 +02:00
35a22cab4b Fix wrong url 2018-09-14 23:12:02 +02:00
0055633732 Blog: Styling and cleanup 2018-09-14 20:30:04 +02:00
78b186c8e4 Blog: Unify all post viewing in one template
Over the years we went from a site-wide blog, to a project blog, to a
post view inside a project, to a full one-page post view. This left us
with multiple ways to see the same content.

This commit brings all post related stuff to always use index.pug
(or index_archive if we are looking at blasts from the past).
2018-09-14 20:29:44 +02:00
232321cc2c Blog: Cleanup CSS 2018-09-14 17:29:13 +02:00
a6d662b690 Refactor render_secondary_navigation macro
* Use navigation_links instead of pages.
* Use secondary navigation mixin.
* Always include project category.
* Always include Explore tab.

Should be eventually moved to Blender Cloud repo.
2018-09-14 16:58:48 +02:00
32c7ffbc99 Move project-main to Blender Cloud
Also remove calls to project-landing; it is now part of project-main.
It was just a few lines of code, not worth having a separate CSS file.
2018-09-14 16:56:35 +02:00
cfcc629b61 Update package-lock.json 2018-09-14 13:11:49 +02:00
8ea0310956 Remove old videojs 2018-09-14 01:58:30 +02:00
c1958d2da7 Gulp: task to move vendor scripts
Only videojs at the moment.
2018-09-14 01:57:55 +02:00
030c5494a8 Cleanup: jQuery and Bootstrap are now part of tutti
Also remove font loading from Google, we use system fonts now.
2018-09-14 00:52:58 +02:00
462f31406a Package.json: videojs as new dependency
So it's easier to keep track of the version number.
2018-09-14 00:52:58 +02:00
1a1f67cf00 Cleanup: Remove markdown js scripts
Pillar has its own way to convert markdown (commonmark via backend) so it
no longer needs these files.
2018-09-14 00:52:58 +02:00
8d5bdf04aa Mixins no longer used 2018-09-13 18:10:39 +02:00
9a9d15ce47 Generate project_navigation_links
This function generates a list of selected links for important nodes such
as Pages and Blog. This list of links is used in the templates to provide
high level navigation of a Project.
2018-09-13 16:35:53 +02:00
c795015a3c Remove blog and page node types from jstree
They will be visible in project_navigation_links (see next commit).
2018-09-13 16:35:53 +02:00
afda0062f5 Navbar: Padding for items 2018-09-12 19:00:29 +02:00
a97c8ffc93 Search: Layout and styling 2018-09-12 19:00:16 +02:00
c5fa6b9535 Sass: set project_nav-width sizes 2018-09-12 18:59:12 +02:00
2be41a7145 Show author badges on assets and comments
Comments layout is still broken, marked as TODO(Pablo).
2018-09-12 15:58:29 +02:00
e8fb77c39b Badge sync: also support removal of all badges
Removal is stored as '' for the HTML. This way there is still the expiry
date, which means we won't repeatedly check for changes.
2018-09-12 15:29:45 +02:00
40933d51cf Show badges to users in their profile settings 2018-09-12 15:02:19 +02:00
9a9ca1bf8b Synchronise badges with Blender ID
Synchronisation is performed in the background by the Celery Beat, every
10 minutes. It has a time limit of 9 minutes to prevent multiple refresh
tasks from running at the same time.

Synchronisation is also possible with the `manage.py badges sync` CLI
command, which can sync either a single user or all users.
2018-09-12 15:02:19 +02:00
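An illustrative Celery beat entry matching that description; the task name and exact options are assumptions.

    from celery import Celery

    celery = Celery('pillar')
    celery.conf.beat_schedule = {
        'sync-badges': {
            'task': 'pillar.celery.badges.sync_all',  # hypothetical task name
            'schedule': 600,                 # every 10 minutes
            'options': {'time_limit': 540},  # 9-minute limit, avoids overlap
        },
    }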
0983474e76 Store Blender ID OAuth scopes in MongoDB + request badge scope too
This also changes the way we treat Blender ID tokens. Before, the Blender ID
token was discarded and a random token was generated & stored. Now the
actual Blender ID token is stored.

The Facebook and Google OAuth code still uses the old approach of generating
a new token. Not sure what the added value is, though, because once the
Django session is gone there is nothing left to authenticate the user and
thus the random token is useless anyway.
2018-09-12 15:02:19 +02:00
6bcce87bb9 Sort celery task modules alphabetically 2018-09-12 15:02:19 +02:00
1401a6168f Always use urljoin to construct Blender ID URLs 2018-09-12 15:02:19 +02:00
85eab0c6cb No longer hash auth tokens + store the token scopes
This partially reverts commit c57aefd48b10ca3cabc9df162bc32efa62a6a21e.
The code to check against hashed tokens remains, because existing tokens
should still work.

The unhashed tokens are necessary for fetching badges from Blender ID.
2018-09-12 15:02:19 +02:00
a753637e70 Thicker progress bar on cards 2018-09-11 19:45:42 +02:00
f87c7a25df Asset: style and cleanup listing
Font pillar aliases for asset icons
2018-09-11 19:37:22 +02:00
3ae16d7750 Tweaks to asset listing 2018-09-11 17:45:33 +02:00
c546dd2881 Video: new macro for showing video progress
Import video_progress_bar from '_macros/_asset_video_progress.html'
and pass it the video and current_user.
2018-09-11 16:11:05 +02:00
48df0583ab Layout and styling of asset groups 2018-09-11 15:16:37 +02:00
094d15116e Video progress: fixed issue in group node view_embed when the video was never watched 2018-09-11 15:01:11 +02:00
534d06ca8f Include video progress data in UserClass
See src/templates/nodes/custom/group/view_embed.pug for a crude example.
2018-09-11 14:06:45 +02:00
df078b395d Video progress: skip 'only reporting when paused' when forcing report
This ensures that the final pause at the end of a non-looping video is
also reported.
2018-09-11 14:06:45 +02:00
5df92ca4cf Use list-asset() mixin component for project index 2018-09-10 19:02:27 +02:00
ecace8c55b Navbar: style tweaks 2018-09-10 17:09:37 +02:00
bcacdfb7ea Project view: List of pages 2018-09-10 16:11:21 +02:00
d7fd90ded1 Videoplayer: Custom playback speed 2018-09-10 15:23:05 +02:00
b9268337c3 Videoplayer: Move loop functions outside of videojs() 2018-09-10 15:22:05 +02:00
9b62daec74 Search: Cleanup and minor fixes. 2018-09-10 11:56:31 +02:00
5cc5698477 Pillar Font: A couple new icons and update.
Also added comments on how to update this file in the future.
2018-09-10 11:55:59 +02:00
00ba98d279 Search: replace spinning loader with page-bar loader 2018-09-10 11:10:25 +02:00
e818c92d4e Assets: License style 2018-09-07 18:17:50 +02:00
612862c048 Use bootstrap classes where possible 2018-09-07 18:13:04 +02:00
6b3f025e16 Project Edit: Cleanup and styling 2018-09-07 17:21:02 +02:00
8a90cd00e9 Pug mixin components for jumbotron, secondary navigation and more. 2018-09-07 17:20:22 +02:00
17a69b973e Videoplayer: thicker progress bar 2018-09-07 14:55:42 +02:00
8380270128 Fixes on buttons/dropdown layout 2018-09-07 14:55:27 +02:00
35225a189d Replace #project-loading spinning icon with a .loader-bar 2018-09-07 14:55:04 +02:00
be98a95fc0 Assets: Fix download dropdown 2018-09-07 12:27:37 +02:00
95c1f913c6 Videoplayer small improvements
* Disable volume change on scroll
* Add L key shortcut to toggle loop
* Minor style fixes (missing font family)
2018-09-07 11:49:34 +02:00
9bcd6cec89 Cleanup and minor tweaks for apps with a sidebar
Like Attract or Flamenco
2018-09-06 18:18:22 +02:00
4532c1ea39 Updated package-lock.json 2018-09-06 16:09:25 +02:00
e19dd27099 API endpoint /api/nodes/tagged/<tag>
This endpoint returns nodes in public projects that have the given tag.
The returned JSON is cached for 5 minutes.
2018-09-06 15:42:50 +02:00
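A hedged sketch of such an endpoint; the route is from the commit message, while the blueprint, query helper, and caching hint are illustrative.

    from flask import Blueprint, jsonify

    api = Blueprint('nodes_api', __name__)

    def _nodes_tagged_with(tag: str) -> list:
        return []  # placeholder for the query over public projects' nodes

    @api.route('/api/nodes/tagged/<tag>')
    def tagged_nodes(tag: str):
        # Real code would cache this JSON for 5 minutes, e.g. with
        # Flask-Caching's @cache.cached(timeout=300).
        return jsonify(_nodes_tagged_with(tag))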
f54e56bad8 Allow predefined tags on nodes
Whenever a node has a 'tags' property of type 'list' it will be handled as
if it has {'allowed': app.config['NODE_TAGS']} in the node type definition.
2018-09-06 15:42:20 +02:00
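In other words (tag values illustrative), a node-type field written as the first dict below is validated as if it were the second:

    as_written = {'tags': {'type': 'list'}}

    as_validated = {'tags': {
        'type': 'list',
        'allowed': ['animation', 'rigging'],  # app.config['NODE_TAGS'] values
    }}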
eb851ce6e1 Added some type declarations
I added those for a certain use that ended up not being committed, but
those declarations are useful anyway.
2018-09-06 15:42:20 +02:00
586d9c0d3b Create MongoDB indices at Pillar startup, and not at first request
This makes things a little more predictable, and allowed me to actually
find & fix a bug in a unittest.
2018-09-06 15:42:20 +02:00
ac23c7b00b Bootstrap popovers are no longer used. 2018-09-06 14:24:09 +02:00
811edc5a2a Gulp: generate sourcemaps when not in production 2018-09-06 14:14:15 +02:00
cb95bf989a Updated package.lock by running ./gulp 2018-09-06 13:44:03 +02:00
e4fa32b8e4 Fixed bug in attachment code 2018-09-06 13:36:01 +02:00
08bf63c2ee Merge branch 'wip-redesign'
# Conflicts:
#	src/templates/projects/view.pug
2018-09-06 13:30:24 +02:00
0baf5b38c3 Project view: dim title link 2018-09-06 12:52:54 +02:00
858a75af8d Pug: Move project home templates to blender-cloud
These are super hard-coded to the Cloud anyway.
2018-09-06 12:51:58 +02:00
6b1a5e24e8 Pug: Use templates from blender-cloud
Affects the following templates:

/projects/view.pug
/projects/index_dashboard.pug
/organizations/index.pug

A lot of this layout is hardcoded for blender-cloud anyway. Eventually
Pillar should have its own templates to use as starting point for building
other Pillar apps. This should be built using the minimal amount of code
possible and rely on styling possible via Bootstrap.
2018-09-06 12:46:33 +02:00
1500e20291 Blog: cleanup of layout and style
Simpler markup reusing bootstrap 4 classes.
2018-09-06 12:42:37 +02:00
d347534fea Pug: Move navigation macro to blender-cloud 2018-09-06 12:19:28 +02:00
4546469d37 Pug: Move blog macros to blender-cloud 2018-09-06 12:19:00 +02:00
b0d8da821f CSS: Blog cleanup 2018-09-06 12:11:18 +02:00
1821bb6b7d CSS general cleanup and minor style tweaks 2018-09-06 12:11:10 +02:00
278eebd235 Style jsTree 2018-09-06 12:06:14 +02:00
2777c37085 Style videoplayer. 2018-09-06 12:05:45 +02:00
5e07cfb9b2 Send the request URL to Sentry
Also removed some dead code.
2018-09-05 14:58:34 +02:00
bc16bb6e56 Send the request URL to Sentry
Also removed some dead code.
2018-09-05 14:54:30 +02:00
0fcafddbd1 Added unit test for creating comments
We had an issue creating comments, so I wrote a test for it. The test
succeeds on a new project, so the problem lies with the older projects.
In the end it was the comment node type that still had
`{'coerce': 'markdown'}`.
2018-09-05 14:54:08 +02:00
f29e01c78e Video player: remember volume in local storage 2018-09-04 12:16:24 +02:00
2698be3e12 Saving & restoring video watching progress
Video progress updates:

- Mark as 'done' when 90% or more is watched.
- Keep 'done' flag when re-watching.

The video progress is stored on these events, whichever comes first:

- Every 30 seconds of video.
- Every 10% of the video.
- Every pause/stop/navigation to another page.
- When we detect the video is looping.
2018-09-04 12:16:24 +02:00
9c2ded79dd CSS: Cleanup and simplification
Mainly to rely more on bootstrap styling
2018-08-31 19:32:17 +02:00
b4acfb89fa Layout: use bootstrap classes 2018-08-31 19:31:36 +02:00
3f8e0396cf VideoJS: don't use videojs.registerPlugin() to start Google Analytics
The `registerPlugin()` call should only be done once, and not for every
video shown.

This removes the warning about the 'analytics' plugin already being
registered, which you see when navigating from one video to another via
the JSTree.
2018-08-31 17:19:27 +02:00
05c488c484 Authentication: also accept user from session on API calls
When loading the user from the session, a CSRF check is performed.
2018-08-31 17:18:46 +02:00
33bd2c5880 Sass: Import modules on top level 2018-08-31 14:26:42 +02:00
76338b4568 Sass config: Bootstrap overrides 2018-08-31 14:24:25 +02:00
7405e198eb Use .displayAs() instead of .show()
Needed for CSS display to be set as inline-block instead of show()'s inline.
2018-08-31 14:23:23 +02:00
2332bc0960 jQuery: Small utility to set CSS display type
Showing elements with jQuery's native .show() sets display as 'inline',
but sometimes we need to set 'flex' or 'inline-block'.
2018-08-31 14:20:59 +02:00
ac3a599bb6 Gulp: build our own bootstrap js only using the needed modules.
At this point we only use tooltip and dropdown code, but we could use
tabs or carousels in the future. Just add them to the toUglify list.
2018-08-31 14:19:09 +02:00
814275fc95 Gulp: only chmod when running --production 2018-08-31 14:17:39 +02:00
40c19a3cb0 pillar.api.utils.utcnow() now truncates microseconds to milliseconds
MongoDB stores datetimes with millisecond precision; to keep datetimes the
same when round-tripping via MongoDB we now truncate the microseconds.
2018-08-31 11:26:32 +02:00
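
A minimal sketch of such a truncating helper (implementation details assumed, not necessarily Pillar's exact code):

    from datetime import datetime, timezone

    def utcnow() -> datetime:
        """Return the current UTC time, truncated to millisecond precision."""
        now = datetime.now(tz=timezone.utc)
        # MongoDB stores datetimes with millisecond precision; dropping the
        # sub-millisecond part keeps values stable across a MongoDB round-trip.
        return now.replace(microsecond=now.microsecond - now.microsecond % 1000)
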
a67527d6af Use app_context() instead of test_request_context()
There is no request context needed here.
2018-08-30 18:28:17 +02:00
791906521f Added a test context manager to log in when doing Flask test client requests 2018-08-30 18:27:55 +02:00
2ad5b20880 Quick hack to get /p/{url}/jstree working again
Apparently Eve is now stricter in checking against MONGO_QUERY_BLACKLIST,
and blocks our use of $regex when getting child nodes. See
`jstree.py::jstree_get_children()`
2018-08-30 13:59:23 +02:00
f6fd9228e5 Upgrade Celery (fixes a problem with workers not starting) 2018-08-30 12:31:54 +02:00
e9f303f330 Re-pinned dependency versions 2018-08-30 12:04:57 +02:00
00a7406a1e Ignore .pytest_cache 2018-08-30 11:00:36 +02:00
82aa521b5f Merge branch 'master' into wip-flask-one 2018-08-30 10:59:00 +02:00
f7220924bc Replaced deprecated call to collection.count() 2018-08-30 10:33:30 +02:00
46b0d6d663 Upgrade npm dependencies
Change gulp-uglify for gulp-uglify-es which has support for ES6.

New dependencies:
* bootstrap
* jquery
* popper.js (required by bootstrap)
2018-08-29 16:30:17 +02:00
595bb48741 Silence warning of Flask-Caching about NULL cache during testing 2018-08-29 15:23:47 +02:00
1c430044b9 More urljoin() instead of string concatenation 2018-08-29 14:28:24 +02:00
73bc084417 Cerberus or Eve apparently changed validator._id to document_id 2018-08-29 14:18:24 +02:00
37ca803162 Flask wrapped Response replaced json() function with json property 2018-08-29 14:18:07 +02:00
939bb97f13 Revert 9389fef8ba96a3e0eb03d4d600f8b85af1190fde 2018-08-29 14:17:38 +02:00
2c40665271 Use urljoin() to compose OAuth URLs instead of string concatenation
String concatenation is bound to mess up; in this case it was producing
double slashes instead of single ones when `BLENDER_ID_ENDPOINT` ends in
a slash. Since URLs generally end in a slash, this should be supported.
2018-08-29 14:17:17 +02:00
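
To illustrate the double-slash problem (endpoint value hypothetical):

    from urllib.parse import urljoin

    # Hypothetical endpoint value; note the trailing slash.
    endpoint = 'https://id.example.com/oauth/'

    print(endpoint + '/token')         # https://id.example.com/oauth//token
    print(urljoin(endpoint, 'token'))  # https://id.example.com/oauth/token
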
e8123b7839 Apparently the test client now uses 'https://localhost.local/' as URL
Previously this was 'http://localhost/'.
2018-08-29 11:27:00 +02:00
6d6a40b8c0 Empty lists don't seem to be stored in MongoDB any more
It looks like with the new Eve (or one of its dependencies) empty lists
aren't stored any more; rather than storing `{'org_roles': []}`, it skips
the `'org_roles'` key altogether. Not sure what caused this, as it was
mentioned in neither the Eve nor the PyMongo changelog.
2018-08-29 11:26:19 +02:00
efd345ec46 Upgrade attachments CLI cmd: added compatibility with new 'validator' key
We now support both the old coerce=markdown and the new validator=markdown.
Probably support for the old can be removed, but I'm keeping it around
just to be sure.
2018-08-29 11:24:44 +02:00
d655d2b749 Users schema: don't supply schema when allow_unknown=True
Apparently the new Cerberus doesn't like this, and will check against the
schema only and ignore `allow_unknown` when it's there.
2018-08-29 11:23:19 +02:00
a58e616769 Markdown validator: gracefully handle partial document validation
Validation of partial documents can happen when validating an update.
Missing data is fine then.
2018-08-29 11:22:39 +02:00
a8a7166e78 Use self.assertRaises as context manager 2018-08-28 17:45:58 +02:00
1649591d75 Create a copy in the validator's self.document
This ensures that further modifications (like setting '_etag' etc.) aren't
done in-place.
2018-08-28 17:45:44 +02:00
9389fef8ba Explicitly install pyasn1, solves certain build/test problems 2018-08-28 17:29:53 +02:00
6737aa1123 Markdown validator now also updates the doc with post_internal
The post_internal function does `document = validator.document`, replacing
the to-be-posted document with the copy that Cerberus made (to which we
cannot add keys, because Cerberus iterates over the keys and the dict size
thus isn't allowed to change).

I hope this doesn't break other validators who expect to be able to write
to `self.document`.
2018-08-28 17:29:29 +02:00
40f79af49d Tooltips: Cleanup 2018-08-28 15:54:14 +02:00
84608500b9 CSS: Split dropdown styling 2018-08-28 15:53:47 +02:00
819300f954 Navbar cleanup 2018-08-28 15:52:56 +02:00
b569829343 General cleanup 2018-08-28 15:52:50 +02:00
c35fb6202b render_secondary_navigation: Bootstrap 4 tweaks 2018-08-28 15:51:56 +02:00
d0ff519980 Font Pillar: Aliases for CC license icons
Also comments about updating the font from fontello.com
2018-08-27 17:03:13 +02:00
6ff4ee8fa1 Minor Dashboard style tweaks 2018-08-27 17:02:36 +02:00
b5535a8773 CSS: New primary color and navbar height 2018-08-27 17:02:07 +02:00
2ded541955 CSS Cleanup: remove font-body specifics 2018-08-27 17:01:43 +02:00
3965061bde CSS: Split into modules
Don't place pure styling in top-level files (those that don't begin with an underscore).
Instead, import them as individual files.
2018-08-27 17:01:08 +02:00
5238e2c26d Pillar Font: Use variable for path 2018-08-22 19:57:22 +02:00
469f24d113 Fix for {validate: markdown} when used in Eve
Eve's Validator has not only a validate() function, but also
validate_update() and validate_replace(). Those set
self.persisted_document, so if that attribute exists we just use it.
2018-07-13 17:14:06 +02:00
8a0f582a80 Removed dependency on flask_pymongo 2018-07-13 17:08:06 +02:00
559e212c55 Removed debug prints + added TODO(fsiddi) 2018-07-13 17:04:23 +02:00
61278730c6 De-indent the code a bit 2018-07-13 17:02:47 +02:00
0fdcbc3947 Restored MarkDown conversion using 'validator': 'markdown' 2018-07-13 17:02:38 +02:00
8dc3296bd5 Schema change for IP range, use validator instead of type
Custom types became rather useless in Cerberus 1.0 since the type checker
is crippled (doesn't know the field name, cannot return useful/detailed error
messages). Instead we use a validator now.
2018-07-13 15:03:35 +02:00
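
A sketch of what such a validator-based rule can look like in Cerberus 1.x, with the rule's own schema in the docstring (see the 'magic custom validation rule schemas in docstrings' commit below); class name, rule name, and message are illustrative:

    import ipaddress

    from cerberus import Validator

    class ValidateCustomFields(Validator):
        def _validate_valid_ip_range(self, constraint, field, value):
            """{'type': 'boolean'}"""  # the schema of this rule itself
            if not constraint:
                return
            try:
                # strict=False also accepts a network with host bits set.
                ipaddress.ip_network(value, strict=False)
            except ValueError as ex:
                self._error(field, f'not a valid IP range: {ex}')
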
a699138fd6 Merge branch 'master' into wip-flask-one 2018-07-13 13:50:24 +02:00
466adabbb0 Added unit tests for IP range validation 2018-07-13 13:50:01 +02:00
7da741f354 Re-enabled PATCH handler for organisations 2018-07-13 13:36:59 +02:00
41369d134c Fix bloody Eve raising exceptions instead of returning status code 2018-07-13 12:45:58 +02:00
61ed083218 Don't change the global schema! 2018-07-13 12:33:22 +02:00
46777f7f8c Removed unnecessary ['schema'] 2018-07-13 12:06:48 +02:00
ef94c68177 Re-enabled the 'valid_properties': True in nodes_schema 2018-07-13 12:06:38 +02:00
aaf452e18b Fixed Cerberus canary unit test
Apparently it's no longer possible for Cerberus to validate its own schemas.
2018-07-13 12:02:40 +02:00
c607eaf23d Added magic custom validation rule schemas in docstrings 2018-07-13 12:02:18 +02:00
baa77a7de5 Merge branch 'master' into wip-flask-one 2018-07-13 11:43:57 +02:00
5fb40eb32b Simple unittests for Cerberus validation 2018-07-13 11:42:31 +02:00
c83a1a21b8 Unpinned a bunch of package versions
This helps us get the latest versions and test with those, instead.
2018-07-13 11:01:22 +02:00
549cf0a3e8 WIP on libraries upgrade 2018-07-12 15:23:57 +02:00
9f380751f5 Support for capabilities check in any shortcode
Use the @capcheck decorator on any shortcode that should support
this. Currently used by iframe and youtube.
2018-07-11 12:32:00 +02:00
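
A minimal sketch of a @capcheck-style decorator, assuming the (context, content, pargs, kwargs) handler signature of the shortcodes library and a current_user object with a has_cap() method; the import path is an assumption:

    import functools

    from pillar.auth import current_user  # assumed import path

    def capcheck(handler):
        """Only run the shortcode handler if the user has the 'cap' capability."""
        @functools.wraps(handler)
        def wrapper(context, content, pargs, kwargs):
            cap = kwargs.get('cap', '')
            if cap and not current_user.has_cap(cap):
                # Show the 'nocap' message, or nothing at all.
                return kwargs.get('nocap', '')
            return handler(context, content, pargs, kwargs)
        return wrapper
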
49075cbc60 Local development server uses http, not https 2018-06-23 01:25:35 +02:00
81848c2c44 Introducing package-lock.json 2018-06-22 19:38:49 +02:00
9ee7b742ab Make more consistent use of BLENDER_ID_ENDPOINT
Now BLENDER_ID_ENDPOINT is used for the Blender ID OAuth config,
and it's directly accessed when building requests for Blender ID token
validation (without using utility functions).
2018-06-22 19:38:27 +02:00
58c33074c3 Fix unittest for jinja.do_markdown
We were passing invalid html to do_markdown, which was returning a valid
version, by closing the <script> tag.
2018-06-22 17:10:38 +02:00
756427b34e Link Markdown Cheatsheet to CommonMark help 2018-06-10 10:03:56 +02:00
7e06212cd5 CSS: Tweaks to pre/code 2018-06-10 09:41:26 +02:00
ef3912b647 CSS: Fix for emojis on lists 2018-06-10 09:01:44 +02:00
151484dee3 Support parsing of bare links in Markdown text 2018-06-08 19:35:14 +02:00
bec1f209ba Update bleach library from 1.4.3 to 2.1.3 2018-06-08 19:34:39 +02:00
0e14bdd09f Introduce rating functions
These hotness and confidence calculation algorithms come from Reddit
and have been tweaked based on our experience on the Dillo project.
2018-06-03 02:09:20 +02:00
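
For reference, the classic Reddit 'hot' ranking reads roughly as follows (constants are Reddit's published ones; the Dillo tweaks are not reproduced here):

    from datetime import datetime, timezone
    from math import log10

    EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)

    def hot(ups: int, downs: int, date: datetime) -> float:
        """Higher scores and newer dates rank higher."""
        score = ups - downs
        order = log10(max(abs(score), 1))
        sign = 1 if score > 0 else -1 if score < 0 else 0
        seconds = (date - EPOCH).total_seconds() - 1134028003
        return round(sign * order + seconds / 45000, 7)
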
ce6df542cc Add ratings_embedded_schema to node_types
Ratings, like attachments, are a common feature in node_types.
By adding this schema definition, we reduce code duplication.
No functional changes are introduced in this commit.
2018-05-11 01:32:39 +02:00
530302b74f Fix deprecation warning, rename Form to FlaskForm
Starting with flask_wtf version 1.0, Form will be dropped in favor
of FlaskForm.
2018-05-09 22:50:26 +02:00
1bfb6cd2f6 Use high-res image for page and blog headers 2018-05-07 15:26:42 +02:00
53b6210531 Remove unneeded file opening
The statement has been moved to the Dockerfile of blender-cloud,
where we actually append a generated STATIC_FILE_HASH.
2018-04-21 18:09:42 +02:00
aeaa03ed80 Handle embedded featured nodes to get node_id 2018-04-16 17:30:02 +02:00
3319a578b9 Move secondary navigation rendering to a macro 2018-04-16 16:23:19 +02:00
24d47f0848 Add page node_type to CUSTOM_VIEW_NODE_TYPES
Due to the new templates, we do not need to embed pages in the
project view anymore.
2018-04-16 16:22:38 +02:00
505e3c3a6d New design for project landing pages and blogs 2018-04-16 14:33:38 +02:00
e5259bb56c Config: provide a correct suggestion for SERVER_NAME 2018-04-14 19:31:57 +02:00
8c0c22d801 Home project: sort synced Blender versions by _updated 2018-04-09 13:41:16 +02:00
Kael Baldwin
157eed8321 Fix layout template is_authenticated call
Differential revision: https://developer.blender.org/D3136
2018-04-07 21:39:38 +02:00
9ed526510f Fix preview on public assets 2018-04-05 16:47:06 +02:00
ec2e4dee46 Minor style tweak to assets 2018-04-05 16:47:06 +02:00
c9789f46db {iframe} shortcode no longer requires cap=xxx 2018-04-04 15:44:52 +02:00
289dc39e50 Fixed T54149: Link markup in project description doesn't work
Don't convert Markdown in JavaScript when we already do it in Python.
2018-04-03 16:23:58 +02:00
22b6673346 Link to {attachment} shortcode documentation 2018-04-03 15:58:23 +02:00
3e7722a567 Expand image for {attachment slug link=self}
Clicking on the image will no longer open it directly, but expand it
instead.
2018-04-03 15:44:24 +02:00
1ba1da49c3 Pass positional arguments to attachment render functions
This allows handling `{attachment slug link}` as a synonym for
`{attachment slug link=self}`.
2018-04-03 15:42:47 +02:00
a71de3a727 Added link to shortcodes documentation
https://pillarframework.org/shortcodes/
2018-04-03 14:43:47 +02:00
67e8e7c082 Disallow spaces in attachment slugs
Slugs shouldn't have spaces; spaces also interfere with using slugs in
shortcodes.
2018-04-03 13:59:31 +02:00
cbb5d546ef Fixed CLI cmd upgrade_attachment_schema
It didn't add the {'coerce': 'markdown'}, which caused the
upgrade_attachment_usage CLI command to skip 'upgraded' nodes.
2018-04-03 12:49:34 +02:00
a86920fc73 Disallow spaces in attachment slugs 2018-04-03 12:24:42 +02:00
14b31174dc Fixes to upgrade_attachment_schema() for URL-less projects 2018-04-03 11:47:18 +02:00
1cb3a24e2f Only load clipboard.min.js when authenticated
This is used in the attachments form, which is only available to
authenticated users.
2018-04-03 11:27:20 +02:00
a052e754f9 Button "Copy to clipboard" instead of "Add to description"
This allows the user to paste the code wherever they need. For example,
a blog post takes its contents from 'properties.content' and not from
the description field.

I also added an explanation for new attachment shortcode.
2018-04-03 10:59:20 +02:00
3b452d14ce Render attachments with shortcodes rather than slugs
The attachments should now be rendered using `{attachment slug}` instead
of `@[slug]`. The `link` attribute can be specified in the shortcode
(for attachments that support it), rather than in the attachment itself.

The attachment subdocument is now reduced to `{oid: File ObjectID}`, and
nodes without attachments should NOT have an `attachment` property at
all (previously it would be an empty dict). This makes querying for
nodes with/out attachments easier.

The CLI command `upgrade_attachment_schema` can do dry-run and remove
empty attachments:

- Added --go to actually perform the database changes.
- Remove empty attachments, so that a node either has one or more
  attachments or no attachments sub-document at all.

The CLI command `upgrade_attachment_usage` converts `@[slug]` to
`{attachment slug}`. It also takes into account 'link' and 'link_custom'
fields on the attachment. After conversion those fields are removed from
the attachment itself.

Simplified maintenance CLI commands that iterate over all projects:
I've moved the common approach (either run on one project or all of
them, skipping deleted ones, giving a message upon dry-run, and showing
duration of the command) to a new _db_projects() function. The new
function is now used by two recently-touched CLI commands; more of them
could be migrated to use this.
2018-04-03 10:59:20 +02:00
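
A hedged sketch of the _db_projects() pattern described above (collection access and handler signature are assumptions):

    import time

    def _db_projects(projects_coll, handler, proj_url=None, *, go=False):
        """Run 'handler' on one project (by URL) or on all non-deleted projects."""
        start = time.time()
        query = {'_deleted': {'$ne': True}}
        if proj_url:
            query['url'] = proj_url
        for project in projects_coll.find(query):
            handler(project, go=go)
        if not go:
            print('Dry run, use --go to actually perform the changes.')
        print(f'Took {time.time() - start:.1f} seconds')
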
f4e0b9185b Shortcodes for YouTube and iframes
Added shortcodes 2.5.0 as a dependency; earlier versions corrupted
non-ASCII characters, see
https://github.com/dmulholland/shortcodes/issues/6

The rendered elements have a `shortcode` CSS class.

The YouTube shortcode supports various ways to refer to a video:

    - `{youtube VideoID}`
    - `{youtube youtube.com or youtu.be URL}`

URLs containing an '=' should be quoted, otherwise the shortcodes
library will parse them as a "key=value" pair.

The IFrame shortcode supports the `cap` and `nocap` attributes. `cap`
indicates the required capability the user should have in order to
render the tag. If `nocap` is given, its contents are shown as a message
to users who do not have this capability; without it, the iframe is silently
hidden.

`{iframe src='https://source' cap='subscriber' nocap='Subscribe to view'}`

Merged test code + added HTML class for shortcode iframes
2018-04-03 10:49:00 +02:00
0841d52dd1 Removed unused imports 2018-04-03 10:48:40 +02:00
f32630237a Fix Cerberus github URL so that it doesn't require SSH authentication 2018-03-29 16:49:20 +02:00
9ee816d366 Ignore _xxx properties in form generation 2018-03-29 10:38:25 +02:00
d10bdea6c5 Use typewatch for previewing comments 2018-03-28 23:35:59 +02:00
5b061af3a5 WIP on using the new nodes.preview_markdown for comments 2018-03-28 22:53:27 +02:00
e69f991aa6 Update flask_wtf to 0.14.2 and make CSRFProtect available to current_app
By default CSRF protection is disabled for all views, since most
web endpoints and all API endpoints do not need it.
On the views that require it, we use the 
current_app.csrf.protect() method.
2018-03-28 22:05:54 +02:00
fc9c518c2a Merge branch 'wip-asset-obscure'
All asset templates now extend view_base, only overriding what's needed via jinja blocks.

Yay for less duplicated code!
2018-03-28 12:46:37 +02:00
dcde2a4551 Merge branch 'master' into wip-asset-obscure 2018-03-28 12:42:42 +02:00
fe7e078f8b Added unit test
Should have been part of prev commit.
2018-03-28 12:42:36 +02:00
8288455468 Fixed a KeyError when editing a comment. 2018-03-28 12:36:03 +02:00
5eb464a1f3 Minor tweaks to layout when able to re-new subscriptions 2018-03-28 12:17:11 +02:00
ab6b277293 Minor tweaks and cleanup on group_hdri, group_texture and texture templates 2018-03-27 19:47:48 +02:00
a4e415f1e3 Assets: Trim the first part of the asset type
Usually "image" or "application".

Also special treatment for .blend files
e.g. application/x-blender becomes blend logo
2018-03-27 19:46:34 +02:00
ebfd3d542c Generic template for node preview when not subscribed 2018-03-27 19:40:44 +02:00
8f227076fd Node details is now part of view_base 2018-03-27 19:40:18 +02:00
a7cb3b9658 Use view_base for assets 2018-03-27 19:39:49 +02:00
641f29ab30 Introducing: view_base template for nodes
Contains all the basics divided in blocks:
* node_preview
* node_details
* node_details_meta_extra (for additional list items)
* node_download - to override the download button
* node_comments
* node_scripts - for node specific scripts, like hdri or video
* footer_scripts
2018-03-27 19:38:45 +02:00
17792df85e Cleanup: Unused block 2018-03-27 19:13:14 +02:00
bca8fac4cd Cleanup: Unused templates 2018-03-27 19:08:28 +02:00
d3ff88e5cf Also replace node types when key with underscore changed
Previously all keys starting with an underscore were ignored (so changes
to _created wouldn't count as "different"), but this clashes with saving
Markdown output to _xxx_html keys.
2018-03-27 17:56:38 +02:00
f22dc4d92a Fixed PATCHing comments 2018-03-27 17:42:29 +02:00
540dd28861 Short-circuit check_permissions() when logged in as CLI user
The CLI user should just be able to do anything.
2018-03-27 17:42:12 +02:00
218c3f0dca Fixed comment rendering 2018-03-27 17:13:12 +02:00
dfaac59e20 Cache Markdown'ed HTML in database
This is done via coercion rules. To cache the field 'content' in the
database, include this in your Eve schema:

    {'content': {'type': 'string', 'coerce': 'markdown'},
     '_content_html': {'type': 'string'}}

The `_content_html` field will be filled automatically when saving the
document via Eve.

To display the cached HTML, and fall back to display-time rendering if it
is not there, use `{{ document | markdowned('content') }}` in your template.

Still needs unit testing, a CLI command for regenerating the caches, and
a CLI command for migrating the node type definitions in existing projects.
2018-03-27 16:34:32 +02:00
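
A minimal sketch of how the markdowned filter can prefer the cached value and fall back to render-time conversion, reusing the existing do_markdown filter mentioned elsewhere in this log; the exact field handling is an assumption:

    import jinja2

    from pillar.web.jinja import do_markdown  # existing filter, assumed import

    def do_markdowned(document: dict, field_name: str) -> jinja2.Markup:
        """Prefer the cached '_<field>_html' value; fall back to render time."""
        cached_html = document.get(f'_{field_name}_html')
        if cached_html:
            return jinja2.Markup(cached_html)
        return do_markdown(document.get(field_name) or '')
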
08ce84fe31 Drop 'template' from blog node type 2018-03-27 15:56:06 +02:00
d2a0a5ae26 Added CLI command 'maintenance purge_home_projects'
This command soft-deletes home projects when their owning user is no longer
there.
2018-03-27 15:45:32 +02:00
bf498b829c @manager.command and @manager.option are sometimes mutually exclusive
@manager.option also registers the function as command, so the double use
is generally unnecessary.

Furthermore, @manager.command will register CLI options based on the
function parameters, which potentially conflict with the ones registered
with the following @manager.options decorators.

Note that positional arguments should be given in reverse order.
2018-03-27 15:45:32 +02:00
195edf679c Improved replace_pillar_node_type_schemas CLI cmd further 2018-03-27 15:32:36 +02:00
d24715a224 Smarter upgrades of node type definitions
- No changes are applied unless the new --go CLI arg is used.
- Differences to node types are actually shown.
- Dynamic form definitions are kept.
2018-03-27 12:03:18 +02:00
dee0b18429 utils.doc_diff() now also supports list values 2018-03-27 11:50:23 +02:00
de8bff51b5 Added TODO: keep Sentry unconfigured when running CLI commands.
When running CLI stuff the logging is seen by human eyes anyway, so we
don't need to send things to Sentry.
2018-03-27 11:50:23 +02:00
318ccb2d95 Reduce log level
WARNING and higher are sent to Sentry, which isn't necessary here.
2018-03-27 11:50:23 +02:00
12272750c3 T53890: Improving static content serving
Static files are now served with an 8-character hash before the last
extension. For example, `tutti.min.js` is now served as
`tutti.min.abcd1234.js`. When doing a request the hash is removed before
serving the static file.

The hash must be 8 characters long, and is taken from STATIC_FILE_HASH.
It is up to the deployment to change this configuration variable
whenever static files change. This forces browsers that download newly
deployed HTML to also refresh the dependencies (most importantly
JS/CSS).

For this to work, the URL must be built with `url_for('static_xxx',
filename='/path/to/file')`. The 'static' module still returns regular,
hashless URLs.
2018-03-23 17:36:14 +01:00
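
The hash-stripping half of this scheme could look like this sketch (regex and helper name are illustrative):

    import re

    # 'tutti.min.abcd1234.js' -> 'tutti.min.js'
    _HASH_SUFFIX = re.compile(r'\.[a-zA-Z0-9]{8}(\.[^.]+)$')

    def strip_static_file_hash(filename: str) -> str:
        """Remove the 8-character cache-busting hash before the last extension."""
        return _HASH_SUFFIX.sub(r'\1', filename)

    assert strip_static_file_hash('tutti.min.abcd1234.js') == 'tutti.min.js'
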
0cf45c0d78 Use capability check instead of role check in strip_link_and_variations() 2018-03-23 14:23:47 +01:00
e4f229cc70 Fix T51678: 16bit greyscale PNG images thumbnailing fails
generate_local_thumbnails() now uses pathlib and f-string formatting too,
making the code a lot simpler. Furthermore, I removed unused bits of
resize_and_crop() and simplified the rest.
2018-03-22 17:53:14 +01:00
f8ccb8aaaa Follow the convention for error formatting 2018-03-21 20:21:10 +01:00
fb2852acdc Tweak to function docstring 2018-03-21 20:21:10 +01:00
e6edd00e46 Introducing /nodes/preview-markdown
This endpoint receives POST requests and parses the content field,
returning the rendered Markdown. Useful for partially previewing node edits.
2018-03-21 20:21:10 +01:00
479a435ec5 Work in progress in blurring asset preview and minor CSS/template cleanups 2018-03-21 20:15:29 +01:00
d30a11c8f7 Do not index a document if it's empty
The prepare_node_data function returns an empty dict if the node
is not of the INDEX_ALLOWED_NODE_TYPES, or if it's not published, etc.
2018-03-21 02:17:58 +01:00
67a24e9d4e Provide debug log info when nodes are not indexed 2018-03-21 02:15:46 +01:00
2bf0bf1064 Formatting 2018-03-21 02:15:07 +01:00
678f72766e Change elif to if _validate_config
Elif is not needed after a raise.
2018-03-20 10:05:17 +01:00
66e4229b9b Merge branch 'production' 2018-03-18 20:14:17 +01:00
99e0eb7a7a Require SERVER_NAME in the configuration
Since we rely more and more on the presence of SERVER_NAME in the
configuration, we make it a hard requirement, before checking if it is
a FQDN.
2018-03-18 18:53:08 +01:00
6a0e0721e9 Require SERVER_NAME to be a FQDN with TLD
A fully-qualified domain name, including a top-level domain name, is
required for Chrome to accept session cookies. For more info, see
https://stackoverflow.com/questions/27254013/why-does-the-session-cookie-work-when-serving-from-a-domain-but-not-when-using-a#27276450
2018-03-15 11:39:20 +01:00
97091457a8 Check for capabilites instead of roles in allow_link 2018-03-14 22:05:00 +01:00
6f69fe5b8a CSS: Style kbd tag in node description 2018-03-14 21:59:50 +01:00
7292c534ed Fix scrollToLinkedComment()
The test was done against location.hash, which contains a hash symbol.
Strip it for the test.
2018-03-14 21:59:50 +01:00
df6297d40f Fixed project search
The project ID wasn't used at all when searching in a project's context.
2018-03-13 12:24:29 +01:00
257793dcd5 Simplified some code 2018-03-13 12:21:41 +01:00
6e1d255dfc CSS: Style buttons with 'disabled' class 2018-03-11 23:40:23 +01:00
f236845374 CSS Login: Minor tweaks and maintenance 2018-03-07 21:16:35 +01:00
450dde56b7 Pass our OAuth2 client ID to Blender ID when validating tokens
This is a security measure, as it ensures that valid Blender ID OAuth2
tokens that were not generated for Blender Cloud are rejected.
2018-02-21 10:49:33 +01:00
854bc7cfaf Sentry: include extra user information
We perform authentication of the user while handling the request,
but Sentry calls get_user_info() in a before-request handler. This means
that Sentry would miss user info in many cases. This fixes that.
2018-02-14 13:52:52 +01:00
0c7abdb99a Avoid error when there is no #cloud-search element 2018-02-14 10:22:13 +01:00
b10369a867 Removed unused imports 2018-02-13 16:51:42 +01:00
05187cacea Add comment to config.py to point at SERVER_NAME 2018-02-13 16:51:28 +01:00
f79642d69f Refuse to merge projects when SERVER_NAME is unset 2018-02-13 16:50:37 +01:00
1f2fb774b4 Converted another datetime.utcnow() to utils.utcnow() 2018-02-13 16:50:11 +01:00
de801e41e3 CLI command for moving all nodes+files to another project
`manage.py operations merge_project src_url dst_url` moves all nodes and
files from the project with `src_url` to the project with `dst_url`.
This also moves soft-deleted files/nodes, as it ignores the _deleted
field. The actual files on the storage backend are copied rather than
moved.

Note that this may invalidate the nodes, as their node type definition
may differ between projects. Since we use direct MongoDB queries the
nodes are moved to the new project anyway. This allows for a
move-first-then-fix approach.
2018-02-13 15:52:21 +01:00
cd42ce6cba Moving blobs between nodes now uses storage API
Instead of only being GCS-specific, it now works for all storage
backends.
2018-02-13 15:36:11 +01:00
eb18e5b933 Formatting 2018-02-13 14:36:23 +01:00
350cf85cd2 Removed unused imports 2018-02-13 14:36:16 +01:00
f2888069db Added pillar.api.utils.utcnow() which returns a datetime for 'now'
This replaces pillar.web.utils.datetime_now() and can be used in a wider
setting (since we don't import web stuff in the api, but we do vice versa).
2018-02-13 14:36:05 +01:00
d0520484bb User admin: Show selected user as 'active' 2018-02-13 10:24:49 +01:00
d114b5631a User admin: removed cancel button
It didn't do anything useful, but did break the GUI.
2018-02-13 10:21:43 +01:00
ce33ce994f Elastic: Allow resetting and reindexing in one CLI command
Use `manage.py elastic reindex [indexname] --reset` to reset first and then
reindex.
2018-02-13 10:19:05 +01:00
05d5882c68 Remove deploy.sh 2018-02-02 12:47:47 +01:00
0c238284b0 Typo 2018-02-01 15:18:43 +01:00
d85c45f10f Not using let in JS, as Gulp minify doesn't understand it :( 2018-02-01 14:28:12 +01:00
06b2adf923 Added all the in-use texture map types to the texture node type
Note that we now name 'occlusion' (as it's used in production) as
'ambient occlusion'. The database needs to be migrated for this.
2018-02-01 14:13:50 +01:00
1ca2f336c4 Proper error handling for node type editor 2018-02-01 14:13:01 +01:00
284873ddd4 Unify and simplify texture map type labels 2018-02-01 12:04:12 +01:00
d86c215c34 More texture map types
These are actually in use in production.
2018-02-01 11:10:16 +01:00
1b57b333df Removed the URLer service
We don't have a need for it any more, so it can go.
2018-01-31 14:33:41 +01:00
08a814525b Implement project undelete as PATCH
This is done via a custom PATCH due to MongoDB's lack of transactions;
we cannot undelete both project-referenced files and file-referenced
projects in one atomic operation.
2018-01-31 14:15:23 +01:00
27153bd74a Remove use of URLER service, replaced it with direct MongoDB query. 2018-01-31 10:08:17 +01:00
20d80dee61 cache_for_request should take function itself into account too
Previously it only looked at the arguments to the function, but not the
function itself.
2018-01-31 10:08:17 +01:00
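
A hedged sketch of such a per-request memoisation decorator, keyed on the function object as well as its arguments (storing the cache on flask.g is an assumption):

    import functools

    from flask import g

    def cache_for_request():
        """Memoise a function for the duration of the current request."""
        def decorator(func):
            @functools.wraps(func)
            def wrapper(*args, **kwargs):
                cache = getattr(g, '_request_cache', None)
                if cache is None:
                    cache = g._request_cache = {}
                # Key on the function object too, so different functions
                # called with identical arguments don't share an entry.
                key = (func, args, tuple(sorted(kwargs.items())))
                if key not in cache:
                    cache[key] = func(*args, **kwargs)
                return cache[key]
            return wrapper
        return decorator
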
ca7d528c85 Mass-attaching project pictures for /p/ endpoint
Previously we did an API call for each picture_square and picture_header
for each project listed in /p/. Now we do one API call that fetches only
the pictures needed, in one go; in other words, it fetches less data in
less HTTP calls.
2018-01-31 10:08:17 +01:00
f8ff30fb4d (un)delete on project also (un)delete file documents.
Note that undeleting files cannot be done via Eve, as it doesn't support
PATCHing collections. Instead, direct MongoDB modification is used to set
_deleted=False and provide new _etag and _updated values.
2018-01-31 10:08:17 +01:00
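
The undelete half could look roughly like this direct MongoDB update (collection and helper names assumed):

    import uuid

    from pillar.api.utils import utcnow  # the helper introduced above

    def undelete_project_files(files_coll, project_id):
        # Eve cannot PATCH a whole collection, so modify MongoDB directly
        # and provide fresh _etag/_updated values ourselves.
        files_coll.update_many(
            {'project': project_id, '_deleted': True},
            {'$set': {'_deleted': False,
                      '_etag': uuid.uuid4().hex,
                      '_updated': utcnow()}})
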
7d1b08bf58 refresh_links_for_backend: log to backend-specific child logger. 2018-01-31 10:08:17 +01:00
60abf6d4a9 on_pre_get_files: remove refresh of files
It is possible to perform a GET request that has an empty `lookup`
param, like {'_id': {'$in': ['objectID1', 'objectID2', ...]}}. Such a
request would cause a refresh of *all* file documents, which is far too
heavy to do in one client HTTP request.
2018-01-31 10:08:17 +01:00
7c384d1f45 Link refresh: allow refreshing links for soft-deleted projects.
Since files aren't deleted (yet) when projects are deleted, it can happen
that a file is refreshed but then cannot reference a deleted project.
By removing the project ID from the PATCH, Eve doesn't have enough info to
check this, and it'll work fine.
2018-01-31 10:08:17 +01:00
f18d5580c1 Fixed unicode/bytes issues in CDN file path hashing. 2018-01-31 10:08:17 +01:00
9177f77e69 Link refresh: gracefully handle case where 'file_path' is not set.
In this case an error is logged and the entire link regeneration is
aborted. In other words, variations aren't refreshed either.
2018-01-31 10:08:17 +01:00
4b5a961e14 Speed up authentication by trusting g.current_user if set. 2018-01-31 10:08:17 +01:00
ed1e348d67 Display publishing status of a texture node only to editors 2018-01-26 16:11:56 +01:00
660b7a3811 Added 'maintenance refresh_content_disposition' CLI command
This command fixes the filename in the Content-Disposition header of file
variations on Google Cloud Storage. This is to fix the existing files after
fixing T51477.
2018-01-26 16:03:11 +01:00
e5fb156224 Fix logging in check_home_project_groups CLI command 2018-01-26 15:05:49 +01:00
de1eab4596 GCS: the slash is a separator, and not part of the directory name itself. 2018-01-26 14:58:42 +01:00
de1c227ccd GCS storage: use self.subdir instead of hard-coded '_' 2018-01-26 14:58:25 +01:00
230b2c669c Implement rename-after-Zencoder-is-done using our Storage API.
This means it's no longer GCS-specific, and can be tested using the local
storage implementation.

Required implementation of a rename operation. To mirror Google's API, I've
implemented the renaming of a Blob as a function on the Bucket class.
To me this makes sense, as it requires creating a new Blob instance, which
shouldn't be done by another Blob.
2018-01-26 14:54:34 +01:00
2e2314c16b Replace storage backend 'pillar' with 'local'
The backend 'pillar' is obsolete; 'local' is the modern replacement and
uses our nice storage API.
2018-01-26 14:13:03 +01:00
75e2402420 Removed unused import 2018-01-26 12:49:53 +01:00
fb121a9601 Test actual Zencoder notification.
This notification was taken from our API history and was actually sent to
us by Zencoder.
2018-01-26 12:49:45 +01:00
f8c3408f18 Properly handle errors when saving updated file doc after Zencoder notif.
We never checked the return values from the put_internal() call, so errors
would have passed silently into the night.
2018-01-26 12:49:15 +01:00
89ca0516a9 Better handling of Zencoder error notifications. 2018-01-26 12:29:22 +01:00
5ae98507e3 Added missing unittest for encoding.size_descriptor() 2018-01-26 12:15:56 +01:00
66ac8c6587 Modernised ZencoderNotificationTest 2018-01-26 12:15:42 +01:00
fd95135f66 PEP 8 formatting and removal of unused import 2018-01-26 11:42:56 +01:00
987d6d03a6 Fix T49280: Make texture files (in texture node) sortable
Rather than making them sortable, I made them automatically sorted upon
saving the node. The colour map comes first, then the other maps in
alphabetical order.
2018-01-26 11:42:42 +01:00
9b3a836c83 Fix for project-less files 2018-01-26 10:45:11 +01:00
741cdf6e12 Elastic: regenerate picture URL before inserting into ElasticSearch
This ensures the thumbnail URL is public so that it won't expire.
Since this now requires API calls to Google, I've increased the number of
parallel threads used for indexing, since they'll be waiting for network
I/O more.
2018-01-26 10:29:28 +01:00
0744aeb42f app → current_app
'app' doesn't exist.
2018-01-26 09:37:25 +01:00
ae7489d8e7 Don't hide form items in CSS
If those should be hidden, it should be done in the form_schema of the
node types, and not with CSS.
2018-01-25 17:31:13 +01:00
666da0adda Show status in texture view_embed 2018-01-25 16:17:26 +01:00
889b5dc1c5 Removed redundant if around for 2018-01-25 16:17:18 +01:00
b3a36f2833 Fix T49930: bug in texture count 2018-01-25 15:51:15 +01:00
dd8d19178b Removed unused function 2018-01-25 15:50:48 +01:00
840e8ba29b Fix issue when editing org without IP range.
Stupid JavaScript ''.split('\n') results in Array('') instead of Array().
2018-01-25 14:35:33 +01:00
6a17949fdf Added Roles & Capabilities page to user settings
Thanks @fsiddi for helping with the explanatory text.
2018-01-25 14:01:28 +01:00
0a0c47205f Use Jinja2 inheritance to render settings pages.
This gives us more flexibility than using {% include %}.
2018-01-25 14:01:28 +01:00
fd3e795824 Store IP-based org-given roles in the user document.
This is a two-stage approach that happens when a new token is verified
with Blender ID and stored in our local MongoDB:

  - Given the remote IP address of the HTTP request, compute and store the
    org roles in the token document.
  - Recompute the user's roles based on their own roles, regular org roles,
    and the roles stored in non-expired token documents.

This happens once per hour, since that's how long we store tokens in our
database.
2018-01-25 14:01:28 +01:00
270bb21646 Support IP range editing in Organization view_embed 2018-01-25 14:01:28 +01:00
d3f97358d9 Work around Eve not supporting returning binary data 2018-01-25 14:01:28 +01:00
c44f0489bc Backend support for organization IP ranges.
We can now store IP ranges with Organizations. The aim is to have any user
logging in with a remote IP address within such a range will get the
organization roles assigned to the user object stored in the Flask session.

This commit just contains the MongoDB storage and querying, and not yet the
updates to the user.
2018-01-25 14:01:28 +01:00
9bd41ed5d7 Added urljoin Jinja2 filter
Use as {{ config['THESERVER'] | urljoin('path/on/server') }}. It uses
the urllib.parse.urljoin() function to do this.
2018-01-25 14:01:28 +01:00
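
Since urllib.parse.urljoin() already has the right signature for a two-argument filter, registration can be as simple as this sketch (setup function name assumed):

    from urllib.parse import urljoin

    def setup_jinja_env(jinja_env):
        # {{ base | urljoin('path/on/server') }} -> urljoin(base, 'path/on/server')
        jinja_env.filters['urljoin'] = urljoin
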
0eca0c706f Introducing overridable comments list rendering
By refactoring part of comments_for_node into a dedicated function called render_comments_for_node, we enable Pillar apps to override the comment URL and determine, per app, the conditions that allow a user to post.
Further, we introduce an extensible and overridable list_embed.pug, which currently defines custom blocks for when the user is and is not allowed to post a comment.
2018-01-20 00:43:54 +01:00
4da7a84c86 Fix for broken urls in blog list
This actually undoes commits 90c62664a6713a9330a4fe44f2cfe58124b29ed3 and 18fe240b931ab8dbd3a8a9fb6760c8b8cdba7ad3 and simply adds the node.url property when rendering a post in the posts_view function. This is what the template macro actually expected in the first place.
2018-01-18 16:02:29 +01:00
2b2910a1ac Fix typo in comment 2018-01-18 15:59:45 +01:00
90c62664a6 Fix for broken url in blog post title and meta 2018-01-18 12:30:06 +01:00
18fe240b93 Fix for broken url in blog post header image 2018-01-18 11:37:55 +01:00
bdff391440 Support for rendering of video file attachments 2018-01-17 15:55:25 +01:00
46beaece75 Implemented pillar.flask_extra.ensure_schema(url)
This function ensures that the URL has the correct schema, given the
app configuration. This is required because the Flask instance can sit
behind an SSL-terminating proxy like HAProxy and not know that it is
reachable via HTTPS.
2018-01-12 17:21:38 +01:00
15ce143356 Include Last-Modified HTTP header in Blog feed response 2018-01-12 16:49:43 +01:00
7245dac1ca Elastic: reverted previous two commits
The ngrams cause too much noise in the search results.
2018-01-12 16:27:09 +01:00
6748fd0006 Elastic: use ngrams for user search but not assets 2018-01-12 15:36:15 +01:00
b2bd01117e Elastic: tweaked user indexing
This makes it a bit more "fuzzy", so users are also matched on N-grams
and not just N-grams-from-the-start-of-the-word.
2018-01-12 15:28:33 +01:00
31ca4f3d23 Gracefully handle Elastic errors in user search 2018-01-12 15:27:56 +01:00
8326d8e7fe Elastic query: remove matching on document type.
ElasticSearch 6 removes support for different document types in one index,
so we don't have to match for it any more. Furthermore, elasticsearch_dsl
seems to save with the generic 'doc' type, so this check is
counter-productive.
2018-01-12 11:17:52 +01:00
3e5ccaf8fd Elastic indexing: explicitly save into the configured index. 2018-01-12 11:16:42 +01:00
68b6e43649 Upgraded ElasticSearch to 6.1.x. 2018-01-11 10:29:15 +01:00
ca3d99c52c Search JS: fixed node type display 2018-01-10 17:23:11 +01:00
55ccd39960 removed debug log entries 2018-01-10 17:23:06 +01:00
61673ef273 Search: implemented pagination
- Got rid of the nasty off-by-one logic in the JavaScript.
- Implemented pagination at the API.
2018-01-10 17:07:21 +01:00
82a2e9a523 Search: Disable Algolia backend 2018-01-10 15:53:56 +01:00
36da289746 user search: boost exact matches on username 2018-01-10 15:12:39 +01:00
7f892601f4 Reformatting 2018-01-10 15:12:22 +01:00
1d08f6850b Elastic: parallelise reindexing
It's marginally faster (on our production DB user reindexing goes down from
5+ minutes to 4 minutes), but will likely become significantly faster when
we run ElasticSearch on its own machine.
2018-01-09 17:05:31 +01:00
408db5e060 Elastic: log how long reindexing took 2018-01-09 16:57:31 +01:00
740e088cc5 Elastic: Doc & return type polish 2018-01-09 16:21:08 +01:00
4fdcd2a343 Elastic: include exact searches on email address 2018-01-09 16:16:38 +01:00
6e40b9a44a Elastic: delegated common user search functionality to a single function 2018-01-09 16:16:26 +01:00
c20aa41b5c Formatting 2018-01-09 16:09:49 +01:00
d96be99d1d Hacked jquery.autocomplete to not use algolia-autocomplete CSS class
Instead, it now uses search-autocomplete.
2018-01-05 17:53:24 +01:00
6a9c27f8bf Removed console.log JS 2018-01-05 17:39:38 +01:00
284d822a8a Asset search: order by newest-first on empty query 2018-01-05 17:29:59 +01:00
9d39995d0f Asset search: debug-log instead of print query + result 2018-01-05 17:29:47 +01:00
36aad45b26 Asset search JS: show creation timestamps 2018-01-05 17:23:40 +01:00
83a38ff50e Admin user search: include aggregations 2018-01-05 17:03:09 +01:00
67851752fa Search JS: more stupid and thus more reliable repeat query filtering 2018-01-05 17:02:42 +01:00
33c051bf28 Search: include explicit indices to search
This makes it possible to search for 'everything' without explicitly
stating the document type.
2018-01-05 16:42:08 +01:00
b6f7958dfe Admin user search: Include aggregations 2018-01-05 16:41:36 +01:00
acd7a40fe6 Renamed do_search → do_node_search 2018-01-05 16:40:54 +01:00
999c1a3fa6 Formatting 2018-01-05 16:26:41 +01:00
a574a75610 Advanced search JS: avoid re-querying for the same thing 2018-01-05 16:10:24 +01:00
af69b4fa58 Admin user search: boost user ID matching 2018-01-05 16:02:49 +01:00
01d8ad5ca2 Admin user search: also match on user ID
And removed user ID matching for regular user search.
2018-01-05 15:52:25 +01:00
57ce554feb Log search queries instead of printing them 2018-01-05 15:52:01 +01:00
72c01cc743 Admin user search actually uses the right end-point 2018-01-05 15:51:41 +01:00
75a6de18b2 Regular user search now also finds by email address 2018-01-05 15:51:30 +01:00
fdab66a500 Elastic: prevent indexing deleted nodes 2018-01-05 15:33:40 +01:00
dcd67b6114 Simplified ElasticSearch connection 2018-01-05 15:24:57 +01:00
f6cf8d29f0 Elastic: search indexing logging tweaks 2018-01-05 15:24:47 +01:00
0b6969bf0c Search JS: removed console.log debug calls 2018-01-05 15:24:29 +01:00
b4a5cdec55 search JS: gracefully handle errors 2018-01-05 15:24:19 +01:00
11b5be9d8e Fixed missing video.js errors in asset search page 2018-01-05 15:23:54 +01:00
90883eddb9 xhrErrorResponseMessage: nice message when unable to connect 2018-01-05 15:23:43 +01:00
d240a979ba scrollToLinkedComment: Check for valid ObjectID before passing to jQuery 2018-01-05 15:08:00 +01:00
8f6966978f Elastic: more progress logging when reindexing users 2018-01-05 14:42:10 +01:00
7f33826d1d prepare_user_data: Always return a dict 2018-01-05 14:41:59 +01:00
b09e4463bd Elastic: don't regenerate picture links
Thumbnail links shouldn't expire anyway.
2018-01-05 14:27:46 +01:00
1bfda7769a Elastic reindexing: more verbose logging at info level.
Including some progress report every 100 nodes.
2018-01-05 14:22:38 +01:00
bbdb731043 Slightly nicer return value & dict creation. 2018-01-05 14:22:38 +01:00
4381ed6671 Elastic: handle pictures without variations or project ID
This happens on old file documents.
2018-01-05 14:22:38 +01:00
2ca960a73f Hard-code 'elastic:9200' as ElasticSearch host 2018-01-05 13:10:39 +00:00
2433a1b981 minor documentation / annotation fixes 2018-01-05 11:58:33 +01:00
2ed2aaf58f merge 2018-01-05 10:58:32 +01:00
24d38fe52e Merge branch 'master' of git.blender.org:pillar into elastic 2018-01-05 10:56:46 +01:00
de8c6a8b63 improve elastic server settings 2018-01-05 10:56:41 +01:00
96428d3c73 Elastic: use different indices when running unit tests. 2018-01-03 18:34:55 +01:00
520f327f5a Default ELASTIC_SEARCH_HOSTS to the host/docker name we use in production
Also removed some comments that didn't add any new information.
2018-01-03 18:34:33 +01:00
f1b3409052 Merge branch 'master' into elastic 2018-01-03 17:42:01 +01:00
91660fefe4 Lowering log level to DEBUG for internal SDK call exceptions.
The exception is re-raised anyway, so it may be handled by the caller in
a way that doesn't warrant a warning/error at all.
2018-01-03 14:39:02 +01:00
fdb9970792 Prevent crash when session['blender_id_oauth_token'] doesn't exist 2018-01-03 12:19:03 +01:00
1c6599fc30 More detailed logging in fetch_blenderid_user 2018-01-03 12:18:43 +01:00
a938342611 Reduced log level when checking user without email for org membership
Service accounts may not have an email address, which is fine for now.
2018-01-03 12:08:06 +01:00
656a878c6a Include stack trace when logging an SDK exception.
Possibly the exception shouldn't be logged at all (or just at debug level),
since it's also re-raised and should be handled by the caller instead.
2018-01-03 11:44:49 +01:00
ef2cc44ceb Reduce log level when user lacks required roles/caps
This prevents logging those at Sentry.
2018-01-03 11:12:17 +01:00
c7ba775048 Removed some traces of Bugsnag 2018-01-03 11:10:01 +01:00
85d6f76000 better error reporting 2017-12-29 17:08:34 +01:00
ebe524ded3 make javascript more secure.. 2017-12-29 16:47:29 +01:00
f4625cfe06 remove dead code 2017-12-29 15:11:47 +01:00
99131374cd javascript debugging 2017-12-29 14:52:39 +01:00
04684c5f65 remove algolia from css and vendor stuff 2017-12-29 14:19:22 +01:00
d726e15ed8 Merge branch 'master' of git.blender.org:pillar into elastic 2017-12-29 12:19:47 +01:00
d73146ff62 Formatting 2017-12-22 16:27:16 +01:00
8f73dab36e Allow project undeletion, fixes T51244
Projects can be undeleted within a month of deletion.
2017-12-22 16:27:05 +01:00
46612a9f68 JavaScript function for getting reasonable error message from an XHR response 2017-12-22 16:25:39 +01:00
766e766f50 Declare some parameter types 2017-12-22 16:25:12 +01:00
f47a45f9a3 Removed unused code 2017-12-22 14:42:24 +01:00
8fb22931f5 Remove unused imports 2017-12-22 14:42:18 +01:00
8f9d21cdd8 Fixed bug in parsing jstree
A projection was missing, causing warnings that the node doesn't have a
content type.
2017-12-22 12:29:51 +01:00
054eced7de Added SMTP Auth support 2017-12-22 10:59:15 +01:00
8ca6b4cdb0 Added Celery task for queued email sending.
Upon IOError or OSError (which includes SMTP protocol errors) the mail
sending task is retried after MAIL_RETRY seconds. It is retried three
times (default setting of Celery) only.
2017-12-21 13:17:57 +01:00
01f81ce4d5 Send a Blinker signal when someone's subscription status changes
This is very close to the 'roles changed' signal, with the difference that
it is sent only once for multiple role changes.
2017-12-21 12:59:32 +01:00
ef1609efc2 Added abs_url() Jinja function for proper absolute URLs
abs_url(x) is a shortcut for url_for(x, _external=True,
 _scheme=app.config['SCHEMA']), and should be used for all URLs that should
include the hostname and scheme.
2017-12-21 12:58:06 +01:00
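
A sketch of such a helper (Flask's url_for() takes the _scheme keyword; the 'SCHEMA' config key follows the commit message):

    from flask import current_app, url_for

    def abs_url(endpoint: str, **kwargs) -> str:
        """url_for() shortcut that always produces an absolute URL."""
        return url_for(endpoint, _external=True,
                       _scheme=current_app.config['SCHEMA'], **kwargs)
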
b7bf29c06e Added user_is_unknown_member() to OrgManager 2017-12-20 14:57:55 +01:00
dab8fbae6d create_new_user_document: allow passing the full name 2017-12-20 13:34:27 +01:00
c545053b85 Declare return type 2017-12-20 13:34:17 +01:00
05ad824dcb Allow UserClass instantiation without database ID
This allows us to inspect the capabilities that would be given to a user,
without actually creating the user in the database first.
2017-12-20 13:34:11 +01:00
92fe39ddac Prevent shadowing of name from outer scope 2017-12-19 10:45:34 +01:00
10732f9a10 wip D2950 2017-12-15 17:57:47 +01:00
7c6425ff4d wip D2950 2017-12-15 17:33:06 +01:00
e0604fc217 Reduce log level for something that's fine
Missing emails can happen when creating a service account, we shouldn't
log a warning for this.
2017-12-15 11:23:16 +01:00
a7693aa78d Switch from macros to blocks for navigation menus
This affects the user and notifications menus. It happens for two reasons:
- the only argument passed to the macros was current_user, which is always available
- we want to enable overriding and adding items to the menus via extensions

At the moment only the user menu takes advantage of the base template, since the blender-cloud extension makes use of it, while notifications.pug does not need it yet.
2017-12-13 11:08:33 +01:00
20ca3f8ee4 Rename blender_id url to blender-id
This fixes an exception raised by the Django implementation of Blender ID, as the old URL name was not compliant with RFC 1178. The issue is debated here: https://code.djangoproject.com/ticket/20264.
2017-12-12 18:49:52 +01:00
6d37046933 Fixed "leave shared project" javascript
Now the project is actually removed from the page. This isn't optimal; see
T53546 for a followup.
2017-12-12 11:48:48 +01:00
ae8c6e92fc Fix forced login for user switching 2017-12-12 11:25:32 +01:00
1d1e588d57 Switch: Always follow PREFERRED_URL_SCHEME instead of the request scheme
When getting an _external=True URL, we shouldn't use the scheme of the
current request at all (this depends on HaProxy forwarding the correct
headers, which might fail when misconfigured) and just always use the
preferred URL scheme. This fixes it at least for the user switching,
because Blender ID will refuse to redirect back to a http:// URL.
2017-12-12 10:56:34 +01:00
8206186426 Merge branch 'elastic' of git.blender.org:pillar into elastic 2017-12-08 17:09:11 +01:00
d38f7fda3e T53161 start working on elastic..
T53161 process feedback sybren, replace Algolia with search

T53161 WIP create elasticsearch app / doc / stuff

T53161 elasticsearch can index nodes now. cli command. NOTE config changes!!

T53161 WIP javascript search WIP WIP

T53161 Proof of Concept working

T53161 Proof of Concept working USER search. WIP js.

Merge branch 'elastic' of git.blender.org:pillar into elastic

T53161 project user search now also elastic

T53161 simplification tips from sybren.

T53161 javascript  search stuff almost complete.

Merge branch 'master' of git.blender.org:pillar into elastic

search is completely working in frontend now

Merge branch 'master' into elastic

Added missing ElasticSearch requirements

T52710 search on id works

Merge branch 'elastic' of git.blender.org:pillar into elastic

T52710 pytests work

T53161 all py.test things PASSES

doc

Differential Revision: https://developer.blender.org/D2950
2017-12-08 17:08:59 +01:00
88939ba51d Cleaned up ElasticSearch CLI interface 2017-12-08 16:54:08 +01:00
c15fffa11f Allow importing pillar.api.search.index outside of app context 2017-12-08 16:07:10 +01:00
b77527e9a2 No '…'.format(…) in logging 2017-12-08 14:52:38 +01:00
3bdd5197f5 T53161 all py.test things PASSES 2017-12-08 14:47:04 +01:00
199c6b1f77 Auth: also support Bearer token authentication
This is commonly used in OAuth-authenticated calls, and can help us break
away from the username-is-auth-token stuff currently in use.
2017-12-08 14:46:58 +01:00
3ea2504e8c Log more information in Sentry 2017-12-08 14:46:01 +01:00
8eee0d57b6 Update token expiry in tests to be a bit more into the future. 2017-12-08 14:03:45 +01:00
8a400c5c0f Gracefully handle users with empty full_name 2017-12-08 14:03:30 +01:00
fccd3e306e T52710 pytests work 2017-12-08 14:00:30 +01:00
d467f000a7 Merge branch 'elastic' of git.blender.org:pillar into elastic 2017-12-08 13:13:02 +01:00
533544117b T52710 search on id works 2017-12-08 13:12:39 +01:00
3b21027d6f Added missing ElasticSearch requirements 2017-12-08 13:01:46 +01:00
b7773e69c7 Merge branch 'master' into elastic 2017-12-08 12:55:57 +01:00
821f11393c Link to 'edit profile' page on Blender ID directly 2017-12-08 10:42:43 +01:00
ca25078b30 Removed editing of full name from Cloud profile
We take the full name from Blender ID instead.
2017-12-07 17:31:26 +01:00
785145e1c1 Nicer message when username already exists 2017-12-07 17:31:07 +01:00
e1646adff6 More modern use of super() 2017-12-07 17:30:10 +01:00
d20f3d5668 Removed manual bad JSON-encoding 2017-12-07 17:29:49 +01:00
dfc224d8a9 Added capability 'encode-video' and role 'video-encoder'.
Both 'video-encoder' and 'admin' roles get 'encode-video' capability,
which allows users to upload video that gets encoded & displayed as a
video. For users without this capability videos are handled as regular
downloads.
2017-12-07 16:51:16 +01:00
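
Assuming a role-to-capabilities mapping, this boils down to entries along these lines (dict name and structure assumed):

    # Map each role to the capabilities it grants.
    USER_CAPABILITIES = {
        'video-encoder': {'encode-video'},
        'admin': {'encode-video'},  # plus whatever else 'admin' grants
    }
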
5c7f37a100 Lowered dependency versions to satisfy Eve 2017-12-07 13:02:23 +01:00
fc25ca9c03 Replaced Bugsnag with Sentry - requires config changes!
Note that pillar/bugsnag_extra.py still exists; I'm keeping it around for
a while until we know what info we miss in Sentry, can port it, and then
remove/refactor it.
2017-12-07 12:58:21 +01:00
6c4dd8ae02 Fix T53339: Downgraded Werkzeug 0.12.2 → 0.11.15 2017-12-07 12:44:05 +01:00
9fdcfff4fc Direct users to renewal page on Store instead of /join
/join should only be used when someone can actually buy a new subscription.
/renew should be used when someone already has a subscription that needs
to be renewed.

Since url_for('cloud.xxxx') makes no sense in Pillar, I just hard-coded
/renew instead.
2017-12-06 14:39:30 +01:00
2bcc26860f Removed 'subscriber' cap from 'admin' role
This allows admins to test what happens when users do not have a
subscription. To give the user subscriber capability, just grant demo role
as well.
2017-12-06 12:09:21 +01:00
1e012f860b Registered org-subscriber role so that it shows in the admin 2017-12-06 11:58:21 +01:00
87fe1887e8 Added "Update from Blender ID" button
Added this button in the /u/ user/embed view, so that admins can easily force a re-check from Blender ID without requiring the user themselves to perform any actions.
2017-12-05 11:45:42 +01:00
c8221ea0e4 Simplified javascript in users/edit_embed_base.pug
There is no need to use jQuery, a unique ID for the a-element, and an
invalid href value, just to bind on-click functionality to a link.
2017-12-05 11:44:05 +01:00
517b283893 Accept roles from Blender ID in two formats
This supports {'role_name': bool} dicts (the old format) and any iterable
of strings {'role_name', ...}
2017-12-01 18:10:33 +01:00
b6a93452cd search is completely working in frontend now 2017-12-01 16:36:08 +01:00
1cba014948 search is completely working in frontend now 2017-12-01 16:32:57 +01:00
b0d6f724ef Merge branch 'master' of git.blender.org:pillar into elastic 2017-12-01 16:24:56 +01:00
0b218eb656 Use Blender ID to obtain subscription status.
Instead of performing a call to the Blender Store, call to Blender ID to
get the user's subscription status.

Currently this is performed as a second HTTP call after logging in; in the
future we may want to include the roles in the login response from Blender
ID, so that we can do this in one call instead of two.
2017-11-30 15:28:35 +01:00
8ba4cc5c0c Allow extension of users/edit_embed.pug
This is done by moving users/edit_embed.pug to users/edit_embed_base.pug,
and having a new users/edit_embed.pug extend that. Projects like Blender
Cloud can then provide their own users/edit_embed.pug and override certain
blocks.
2017-11-29 13:58:47 +01:00
2ad65b0485 Project: Remove collapse of sidebar
Since projects went full-width it's not needed anymore.
2017-11-28 15:36:31 +01:00
ce7754ffe4 Renaming IDs to classes 2017-11-24 19:34:58 +01:00
9cd3d97c75 T53161 javascript search stuff almost complete. 2017-11-24 17:47:38 +01:00
7252055e4a Markdown: Convert Markdown via Jinja filter in the template
This gets rid of the use of javascript for converting node/post description.
Now we only use markdown.js for real time as-we-type stuff, like node/post
editing or commenting.
2017-11-23 16:49:19 +01:00
c086cff36e Comments: Only load JS for post/edit comments if we can actually comment 2017-11-23 16:40:58 +01:00
eeba87d333 Blog: Unify the looks of blog posts
Now that the render_blog_post macro is shared with the homepage
2017-11-23 16:20:29 +01:00
cb7a23bc69 Fixed home project menu 2017-11-21 14:48:59 +01:00
1bda98228c T53161 simplification tips from sybren. 2017-11-17 18:04:29 +01:00
8b789d408e T53161 project user search now also elastic 2017-11-17 17:41:43 +01:00
d355e58f2f Merge branch 'elastic' of git.blender.org:pillar into elastic 2017-11-17 16:09:23 +01:00
b03e8d5bd7 T53161 Proof of Concept working USER search. WIP js. 2017-11-17 16:06:51 +01:00
76bb68dcc8 T53161 Proof of Concept working 2017-11-17 16:06:51 +01:00
41eb5e256f T53161 WIP javascript search WIP WIP 2017-11-17 16:06:51 +01:00
d2a8f2a47f T53161 elasticsearch can index nodes now. cli command. NOTE config changes!! 2017-11-17 16:06:51 +01:00
43fa8f1a45 T53161 WIP create elasticsearch app / doc / stuff 2017-11-17 16:06:01 +01:00
fcf19de786 T53161 process feedback sybren, replace Algolia with search 2017-11-17 16:06:01 +01:00
2233d015f3 T53161 start working on elastic.. 2017-11-17 16:06:01 +01:00
8b25024f6f T53161 Proof of Concept working USER search. WIP js. 2017-11-17 16:05:22 +01:00
235a88d613 T53161 Proof of Concept working 2017-11-17 14:05:24 +01:00
49a6a6a758 Delete the auth token when logging out.
Before this, authentication tokens were kept in the database, even when
someone logged out. This is unwanted behaviour: logging in will create
yet another token anyway, so there is no reason to keep the old token around.
2017-11-17 12:10:21 +01:00
5a9a2c4268 T53161 WIP javascript search WIP WIP 2017-11-10 19:09:14 +01:00
491c5e1b8c A bit more margin around iframe on node descriptions 2017-11-10 17:57:52 +01:00
3466320d98 File Upload: Don't clear all upload toastr notifications
Define the toastr on setup and clear only that one once it's finished
2017-11-10 17:57:17 +01:00
a8849ec823 T53161 elasticsearch can index nodes now. cli command. NOTE config changes!! 2017-11-10 16:05:12 +01:00
0cea8d622d Update cache for CSS/JS 2017-11-09 19:00:01 +01:00
a2df79d614 Minimum height for youtube embeds 2017-11-09 18:38:21 +01:00
15d2f1ce18 Added links to Blender Store and ID on /u/ user page. 2017-11-09 16:56:34 +01:00
fee242ad07 Allow a custom error view with @require_login() 2017-11-09 11:09:24 +01:00
cde86db44e @require_login(): made all arguments keyword-only
This allows us to remove the require_roles kwarg at some point, ensuring
that it doesn't fall back to assigning to require_cap instead when that
happens. It's also more explicit everywhere, so it's clearer when we check
for roles or caps.
2017-11-09 11:09:22 +01:00
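
Combined with the custom error view from the commit above, the keyword-only signature could look like this sketch (parameter handling assumed, not Pillar's exact code):

    import functools

    from flask import abort

    from pillar.auth import current_user  # assumed import path

    def require_login(*, require_cap: str = '', error_view=None):
        """Only allow authenticated users, optionally with a capability.
        All arguments are keyword-only, so every call site is explicit."""
        def decorator(func):
            @functools.wraps(func)
            def wrapper(*args, **kwargs):
                allowed = current_user.is_authenticated and (
                    not require_cap or current_user.has_cap(require_cap))
                if not allowed:
                    return error_view() if error_view is not None else abort(403)
                return func(*args, **kwargs)
            return wrapper
        return decorator
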
7d5785da62 Handle exception when users are not allowed to update nodes_latest
When editing a node, the user should not be required to have PUT permission on the project the node belongs to. The function project_update_nodes_list should not be called within edit, but should rather be implemented as a hook for specific cases only.
2017-11-08 23:56:30 +01:00
ac9aa59924 Comments: When editing, resize the textarea
Minor tweaks:
* Remove the 'editing' class after cancel/save
* Style <code>, <pre>, etc tags
2017-11-08 22:49:56 +01:00
cb0272fe60 Comments: Put comment content inside a span
So when editing we do not override the author's name.
2017-11-08 22:01:45 +01:00
8ef89d0b53 Fix for crash when element with id 'description' was not found 2017-11-08 21:18:49 +01:00
e01f915abf Comments: Unbind event before binding
Prevents flashing of comments when posting
2017-11-08 20:29:45 +01:00
c6a138ff43 Introducing 00_utils.js
Utilities that can be used across all Pillar apps

This includes general purpose functions,
small jQuery plugins, and so on.

For example: the autoResize jQuery plugin that automatically
resizes a textarea according to its content has been used on all
major Pillar apps so far (Attract, Flamenco, Blender Cloud
and Dillo), and has proven to be needed everywhere.
2017-11-08 16:59:38 +01:00
22d65f1e9c put_project now also removes None values 2017-11-08 16:19:30 +01:00
4f282c1587 Bit more space + scrolling for users at /u/ 2017-11-08 15:45:41 +01:00
d651791f22 Set remember=True on login_user to persist login sessions
Before this, after closing the browser a user had to login again.
2017-11-07 23:18:46 +01:00
0cf57a633c Move Blender Cloud specific Sass files to blender-cloud repository
Three Sass files have been moved so far:
_welcome
_homepage
_services

The '_stats.sass' file is no longer used since Kibana
2017-11-07 16:53:03 +01:00
5d6e0af605 Comments: Auto-resize textarea field as we type 2017-11-04 01:59:02 +01:00
1fe88819f4 T53161 WIP create elasticsearch app / doc / stuff 2017-11-03 18:18:12 +01:00
8187a8a0dd Moved some useful code from Flamenco to Pillar 2017-11-03 17:39:54 +01:00
b6af919fa9 T53161 Process feedback from Sybren, replace Algolia with search 2017-11-03 16:40:02 +01:00
390d687f61 Added utility to find project ID from URL.
This is mostly useful for the CLI interface, as the majority of our Pillar
code actually needs more project information than just the ID.
2017-11-03 14:33:19 +01:00
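A minimal sketch of such a utility, assuming a 'projects' collection with a 'url' field (the function name is illustrative, not the real helper's):

    from pillar import current_app

    def project_id_from_url(project_url: str):
        """Return only the project's ObjectId; fetch nothing else."""
        proj = current_app.db('projects').find_one(
            {'url': project_url}, projection={'_id': 1})
        if proj is None:
            raise ValueError('project %r not found' % project_url)
        return proj['_id']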
33d3ff07db Added missing newline at end of file 2017-11-03 14:32:24 +01:00
d0f10779f9 Added useful 'string' alias to attrs_extra 2017-11-03 14:32:13 +01:00
8427f03df4 Fixed bug loading extension config defaults 2017-11-03 14:31:56 +01:00
d66bfe6166 Upgraded dependencies to fix problem with Flask-Script
- Flask-Script 2.0.5 → 2.0.6

Along with this came:

- Flask      0.12    → 0.12.2
- Werkzeug   0.11.15 → 0.12.2
- MarkupSafe 0.23    → 1.0
2017-11-03 11:39:53 +01:00
04f7869e8e Comments: Display actual date on mouse over (not pretty_date) 2017-11-02 15:38:37 +01:00
545165c97f Sass: Don't specify strong/b, let the browser decide 2017-11-02 12:41:04 +01:00
021c9e03bb Sass select2: Replace hardcoded values for our variables 2017-11-02 00:39:55 +01:00
53aabc8c84 Translations: Mark more strings for translation 2017-10-27 00:54:05 +02:00
3202b3f0b5 Comments: Style blockquote 2017-10-26 02:28:07 +02:00
e41fd36952 Responsive tweaks for blog sidebar on projects 2017-10-25 18:43:19 +02:00
3636db1793 Blog: Style tweak for project blogs 2017-10-25 17:45:46 +02:00
5732b1a938 Videoplayer: Style tweaks and minor cleanup 2017-10-25 17:27:55 +02:00
3585443508 Use cache for jstree css 2017-10-25 17:27:55 +02:00
47a1db07dc T53161 start working on elastic.. 2017-10-25 17:09:10 +02:00
99ed8fff5d Remove unneeded properties on create_blog 2017-10-25 16:22:55 +02:00
a6ab7dda97 Title for loop button on videoplayer 2017-10-25 16:17:08 +02:00
6564fa000d Titles for node details 2017-10-25 16:16:58 +02:00
bb33ddd9fb Blog: Style tweaks and minor cleanup
Mainly removing unused classes such as blog_post-container
2017-10-25 16:02:02 +02:00
5fbe62105a Blog: Minor layout tweaks
* Make header image clickable
* Make thumbnails on blog list clickable
* Put action buttons in .blog-action for easier positioning
* Cleanup
2017-10-25 16:02:02 +02:00
771b93b169 Blog: Don't display author name on sidebar 2017-10-25 16:02:02 +02:00
f13310d71b Menu: "Log in" instead of "Login and Explore" 2017-10-25 16:02:02 +02:00
243442694c Log warning when someone is denied a project sharing action
This indicates that the web frontend showed something that wasn't allowed.
2017-10-25 14:59:17 +02:00
a4addbfd22 Log as error when project admin group isn't properly configured. 2017-10-25 14:58:02 +02:00
e983d0756d Rename 'Log in' button to 'Log in and Explore' 2017-10-24 15:46:16 +02:00
52cd30b947 Rename _join.sass to _welcome.sass 2017-10-24 15:43:20 +02:00
ed55a73d04 VideoJS: Upgrade and stuff
* Upgrade to the latest stable version 6.2.8
* Move JS files to blender-cloud
* Introducing Hotkeys support (à la YouTube)
* Introducing Loop button (and a way to easily add new buttons)
* Fix Analytics plugin to work with the VideoJS 6
* Minor style tweaks to work with the latest update
2017-10-24 12:49:39 +02:00
4f3fc91c0a VideoJS: Upgrade and stuff
* Upgrade to the latest stable version 6.2.8
* Move JS files to blender-cloud
* Introducing Hotkeys support (à la YouTube)
* Introducing Loop button (and a way to easily add new buttons)
* Minor style tweaks to work with the latest update
2017-10-24 12:38:11 +02:00
5c3524706f Welcome Page: Minor style tweaks 2017-10-23 15:35:31 +02:00
2290e019e6 Cleanup
We don't use navbar-fixed-top anymore.
2017-10-18 20:05:24 +02:00
6fe6345b13 Refresh styling on /welcome 2017-10-18 20:05:24 +02:00
53fe047bca Fix bug in getting Blender ID error response 2017-10-17 12:44:26 +02:00
d9c3705c47 Fix tuples in existing session['blender_id_oauth_token']
In a past version of Pillar we accidentally stored tuples in the session.
Such sessions should be actively fixed.
2017-10-17 12:40:33 +02:00
88ffd64706 get_blender_id_oauth_token() now consistently returns a str
Before it could return either of str, tuple, or None.
2017-10-17 12:16:56 +02:00
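A hedged sketch of the normalisation, given the tuple-valued sessions mentioned in the surrounding commits:

    from flask import session

    def get_blender_id_oauth_token() -> str:
        """Always return a str, even for legacy sessions storing a tuple."""
        token = session.get('blender_id_oauth_token')
        if isinstance(token, (tuple, list)):
            token = token[0]  # legacy format: ('token-value', '')
        return token or ''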
a897282400 Added some type checks before assigning to session['blender_id_oauth_token']
There were some sporadic TypeErrors where the session var was set to a
tuple instead of a string; this is a way to figure out where that happens.
2017-10-17 12:16:20 +02:00
cfbb05530a Took unrelated code out of the try-body.
The try-body should only contain code that can actually raise the caught
exception.
2017-10-17 12:14:12 +02:00
72f440f509 Fix AttributeError
Exceptions aren't guaranteed to have a 'message' attribute. They do have
'args', but str(ex) is probably more useful, as it's likely to include
the exception type.
2017-10-17 11:32:25 +02:00
9c3667b51f Include HTTP method in bugsnag report 2017-10-17 11:32:25 +02:00
6ffbd52a36 Comments: Simpler login message
No point in having a disabled input, since we're leaving
the page to log in anyway
2017-10-15 05:56:43 +02:00
49feaa8281 Don't use hard-coded white background for notifications flyout.
Use $color-background instead
2017-10-15 05:55:12 +02:00
72507d7bb2 Minor style tweaks to comments 2017-10-15 05:52:49 +02:00
3bcf4eaebd Icons
New: pi-social-youtube, pi-social-reddit, pi-moon, pi-off
Replaced: pi-spin, pi-comment, pi-download
Removed: pi-log-in, pi-log-out, pi-circle-notch
2017-10-14 03:15:12 +02:00
d637e322f7 Replace hardcoded colors with variables and more sane colors.
Makes it possible to theme the comments by replacing color variables.
2017-10-14 03:06:08 +02:00
43cb7a5f65 File Upload: Clear notifications before success/error
So we don't end up with both file-upload and success at the same time
when uploading files that go up fast.
2017-10-11 23:08:20 +02:00
7fd7fcc16c Minor style tweaks for responsiveness 2017-10-08 23:41:18 +02:00
e01197a5d5 Comments: minor tweaks to strings 2017-10-07 00:07:24 +02:00
be4ce024f4 Introducing public and private extension_props for users
- public: they will be visible to the world (for example as a result of the User.find() query)
- private: visible only to their user
2017-10-06 00:13:22 +02:00
98527c72f4 Support for extra_template_args in node view
This allows for wrapping the view function in another function which will provide additional args. Originally implemented in order to allow the Dillo post view to provide the Project object to the view_embed template.
2017-10-05 23:46:24 +02:00
053e681d00 Create MongoDB index on tokens.token_hashed 2017-10-05 19:40:41 +02:00
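With PyMongo, creating that index amounts to something like the following (the sparse option is an assumption, for documents predating hashed tokens):

    from pillar import current_app

    def create_token_hashed_index():
        # Token lookups now happen on the hashed value, so index it.
        current_app.db('tokens').create_index('token_hashed', sparse=True)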
6d8870ba25 Blog: Check if blog_archive_prev/next exist before showing link
Also minor cleanup of the classes and excessive indenting.
2017-10-05 17:32:55 +02:00
01dab4188c Style tweaks to blog archive 2017-10-05 17:32:55 +02:00
eca5f48d41 Fix project_blog_archive pagination
Was missing the 'page' argument.

Fixed by Dr. Sybren
2017-10-05 17:32:55 +02:00
73b50556be Added blog archive.
May still need some style tweaking.
2017-10-05 17:32:55 +02:00
e724c9c2ad Blog: Simplified looping over blog posts. 2017-10-05 17:32:55 +02:00
f42334453c Deduplicated code for image expansion into the page overlay
It now also supports WEBP links, and is compatible with Google Cloud
Storage (which adds ?blablabla to links).
2017-10-05 17:32:55 +02:00
2603c4f44f Deduplicated blog templates by using macros. 2017-10-05 17:32:55 +02:00
bd93625119 Use super() call instead of copy-pasting the contents of the parent block 2017-10-05 17:32:55 +02:00
8fe6b472e4 Notifications: Fix and documentation
Fixed notification-toggle not working because we were accessing
the selectors before they were available in the DOM.

Now use the ID selector directly, re-use when possible.

Also added comments describing obscure variables.
2017-10-05 15:29:30 +02:00
68c7a88fed Use the nifty new DocumentTitleAPI to update notification count
and page titles when browsing assets.

This removes the need for updateTitle()
2017-10-05 15:29:30 +02:00
f9e10976b8 Added a little DocumentTitleAPI object for easier updating of the title. 2017-10-05 15:29:30 +02:00
f17453ba10 Added 'operations hash_auth_tokens' CLI command. 2017-10-05 13:04:44 +02:00
c57aefd48b Hash authentication tokens before storing in the database. 2017-10-05 12:57:16 +02:00
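The exact scheme isn't visible in this log; illustratively, an HMAC of the token keyed with the server secret, storing only the digest:

    import hashlib
    import hmac

    def hash_auth_token(token: str, secret: bytes) -> str:
        """Illustrative only: store this digest instead of the raw token."""
        return hmac.new(secret, token.encode('utf-8'), hashlib.sha256).hexdigest()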
Pablo Vazquez
389413ab8a Notifications: Define selectors once and re-use
Plus some comments and simplifying the switch/adding of icons
by just adding a css class that controls it.

No functional changes
2017-10-04 00:39:00 +02:00
a2ffb8a342 Minor padding tweaks and color for blog title links 2017-10-03 12:53:41 +02:00
6f0a03ce60 Set page title on edit_embed 2017-10-03 12:52:50 +02:00
Pablo Vazquez
053848607d Don't add padding on bottom of node-details-description mixin 2017-10-03 01:26:41 +02:00
Pablo Vazquez
94a0868ba5 Comments: Style cancel button 2017-10-03 01:26:25 +02:00
Pablo Vazquez
c6d8a4c46f Comments: Send text should be inside a span 2017-10-03 01:26:06 +02:00
58a34d6cdb Typo (one bracket too much) 2017-10-02 19:49:33 +02:00
b0c7997128 Notifications: Don't update titles directly, fire an event instead 2017-10-02 19:49:20 +02:00
1bf2a87726 Use the updateTitle() function to update page titles 2017-10-02 19:48:37 +02:00
d3cd6a884e Use toastr notifications instead of statusBarSet() 2017-10-02 19:31:52 +02:00
Pablo Vazquez
9870979c54 Comments: Style tweak linked comment
Move "Linked Comment" to the bottom, add padding, no borders.
2017-10-02 01:54:13 +02:00
Pablo Vazquez
bb067f77c3 Fix styling for login/local 2017-10-02 00:41:04 +02:00
Pablo Vazquez
da7be50c61 Comments: Cleanup and convert more IDs to classes when possible
No functional changes.
2017-09-30 23:21:54 +02:00
Pablo Vazquez
e4c5743852 Comments: trigger comments-loaded event on comments load.
Useful when apps want to know or do something with the comments,
like Dillo parsing emojis, for example.
2017-09-30 23:18:09 +02:00
e9233ff7c0 Introducing embed_project
By specifying the 'embed_project' argument, the node's Project will be fetched and embedded in the document. This is useful in specific cases, where a project property needs to be accessed when rendering the view_embed template.
2017-09-30 22:13:10 +02:00
2d01cd8761 Simplified posts_view a bit
Removed some redundancy, avoided rendering attachments for posts that'll
never be shown, and made the flow a bit clearer.
2017-09-29 10:45:29 +02:00
45a44d08eb Don't manually construct JSON as strings, just use dicts. The modernity! 2017-09-29 10:45:29 +02:00
ddc52b969e Make it possible for node types to have a 'custom view'
This 'custom view' means that the URL to view the node (as returned by
url_for_node(…)) cannot be loaded via XHR and embedded in the project
viewer, but should rather be used as the actual browser URL instead.

Currently only blogs use this.
2017-09-29 10:45:29 +02:00
8ad2ee8729 Registered 'node finder' for blog nodes. 2017-09-28 17:47:41 +02:00
d160999535 Removed unused import 2017-09-28 17:46:34 +02:00
e4fff8df81 Added missing project_container variable. 2017-09-28 17:43:48 +02:00
923cbe62d1 Gulp: indented package.json with tabs
This is what Atom does automatically, and also what we have in Flamenco
and Attract.
2017-09-28 15:32:06 +02:00
0612bd1a21 Gulp: run 'cleanup' task when running with --production. 2017-09-28 15:32:06 +02:00
a1fd48752e Gulp: added 'cleanup' task that erases all gulp-generated files.
It uses Git to erase those files, so anything that's tracked in Git (such
as the JS/CSS vendor directories) are kept as-is.
2017-09-28 15:32:06 +02:00
0e48c18579 Gulp: sorted require statements 2017-09-28 15:09:11 +02:00
db1e3239e8 Gulp: sorted dependencies 2017-09-28 15:09:11 +02:00
34353f773b Gulp: fixed license expression
Running 'npm install --save-dev something' also moved the author & license
keys to the top, which I just kept.
2017-09-28 15:09:11 +02:00
ad0253a461 Gulp: replaced hardcoded paths with variables. 2017-09-28 15:09:08 +02:00
6a541e0662 Improved bugsnag reporting
- Include release stage, which should be 'production' or 'development',
  and gets postfixed by '-debug' when running in debug mode.
- Properly logging remote IP address when proxied through HAProxy;
- Log user ID, email, username, roles, and capabilities;
- Remove authentication tokens from logged session;
- Log request data and JSON separately.
- Added request endpoint.
2017-09-28 13:28:19 +02:00
ec42d033b3 Comment padding/color tweaks for the blog and projects 2017-09-25 10:15:05 +02:00
Pablo Vazquez
8fd577c733 Comments: Capitalize actions and color Reply 2017-09-25 01:24:30 +02:00
Pablo Vazquez
4eeccb6107 Comments: More compact layout 2017-09-25 00:39:34 +02:00
Pablo Vazquez
6688ae66fa Use Toastr to notify 2017-09-25 00:37:03 +02:00
Pablo Vazquez
38e960eb3f Comments: Fix multiple posting when using a hotkey
On every new item loaded with comments, we would bind the click without
unbinding first, leading to multiple posting when triggering the comment
submission.
2017-09-25 00:35:35 +02:00
Pablo Vazquez
dbde681aff Fix alignment for notifications count 2017-09-24 23:11:08 +02:00
Pablo Vazquez
76a5555ff4 Toastr Notifications: Brighter text color 2017-09-23 00:01:16 +02:00
7d740c74e3 Move comment posting into its own function 2017-09-22 19:12:13 +02:00
58b8cea757 Homepage: Don't capitalize project summary on random featured assets 2017-09-22 15:32:27 +02:00
dc75980eba Blog: limit the max-width of post content 2017-09-21 19:14:30 +02:00
f311dbbe30 Fix fluid headers on video 2017-09-21 19:04:55 +02:00
Pablo Vazquez
b5f0c59511 Comments: Style tweaks
More compact and also convert IDs to classes (when not used by javascript)
2017-09-21 01:14:53 +02:00
bc5a8fba61 Prevent node edit form display if PUT is not allowed for the node 2017-09-20 16:40:06 +02:00
1e6180be48 More fixes for video fluid 2017-09-20 16:37:16 +02:00
ebe3118f79 Project home video should not have vjs-fluid, it already is fluid 2017-09-20 16:35:19 +02:00
714455a4eb Fix navigation tree not scrolling until the bottom 2017-09-20 16:31:07 +02:00
0089a8a98e Video: Previews are already fluid, no need for vjs-fluid 2017-09-20 16:18:05 +02:00
9fa4da284b Homepage: Style tweak to random featured project 2017-09-20 16:14:46 +02:00
e3f1f101e2 Assets: No need to calculate preview aspect ratio anymore
No longer used since we went full width
2017-09-20 16:06:42 +02:00
386571157a Assets: Check if user is subscriber or asset is public 2017-09-20 14:23:21 +02:00
5012d19180 Video asset: Show link to login as well as subscribe 2017-09-20 14:19:39 +02:00
54b5e8e3d4 Video: Fix Download button 2017-09-20 14:19:39 +02:00
Pablo Vazquez
5e0e71aa1c SASS: Set all config variables as !default
So other Pillar apps can override them
2017-09-19 21:21:59 +02:00
bd976e6c2e Fixed user switching. 2017-09-19 13:38:48 +02:00
9cce441f6c Removed unused code 2017-09-19 13:38:30 +02:00
6af0cd2be8 Don't show "Create a project" button to non-subscribers, and make btn clickable 2017-09-18 13:51:31 +02:00
82a1e0796c Project Home: Scale header image fit as cover 2017-09-17 21:55:59 +02:00
8d0c0894cb Textures: No background color for download button 2017-09-17 21:55:10 +02:00
b98771f067 Improvements to image thumbnailing
- Optimize JPEGs and increase quality from 75 to 95
- Don't always convert to RGB; first check if RGBA and save as optimized PNG

Thanks to Dr. Sybren and Francesco for review and feedback
2017-09-17 21:49:55 +02:00
b2cfe46438 Style tweaks on projects
Plus re-ordering and minor cleanup
2017-09-17 20:11:13 +02:00
53ac29cfd1 Project: Show edit button in sidebar
So we can access it from everywhere and not only from project home
2017-09-17 20:11:13 +02:00
4e153413d9 Fix Download button showing when not logged in 2017-09-17 20:11:13 +02:00
Dalai Felinto
7d48c02fa3 Expand user schema to support extension_props 2017-09-17 00:25:09 +02:00
5df68c4ead Comments: Only show if there are actually comments
No negative text "no comments"
2017-09-16 20:11:50 +02:00
1563f4142d Fix broken layout on project blog posts
Fixes T52764
2017-09-16 20:11:50 +02:00
1177f516ba Set status as 'published' when creating a blog
If a blog is not set as published it won't be visible in the navbar.
2017-09-16 19:20:12 +02:00
2d18057c6e Added DB index for latest assets/comments 2017-09-15 17:09:15 +02:00
970376ed56 Removed debug print 2017-09-15 17:04:23 +02:00
e2ea8d491b Added a bit of input validation 2017-09-15 16:50:27 +02:00
62954ac157 Latest assets/comments: using Mongo aggregation instead of Python code 2017-09-15 16:47:40 +02:00
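A hedged sketch of what 'latest assets' can look like as a single aggregation (collection and field names assumed):

    from pillar import current_app

    def latest_assets(project_id, limit=10):
        pipeline = [
            {'$match': {'project': project_id,
                        'node_type': 'asset',
                        'properties.status': 'published'}},
            {'$sort': {'_created': -1}},  # newest first
            {'$limit': limit},
        ]
        return list(current_app.db('nodes').aggregate(pipeline))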
1c70d80b99 Removed unused imports 2017-09-15 15:26:43 +02:00
dc50d6e941 Add more logging to find cause of KeyError
There can be a KeyError accessing permission['methods'], but our current
logging doesn't provide enough information as to determine when this
happens. Rather than bluntly fixing the issue, I added logging to try and
find out how we get a 'methods'-less permission dict in the first place.
2017-09-15 11:02:31 +02:00
8377dc63c0 Fix attribute error accessing response.text
The response object *should* be a requests.Response object, which *should*
have a .text property. However, there are situations where this is not the
case, and in those cases we now won't produce an AttributeError.
2017-09-15 10:06:06 +02:00
54bb506e10 Orphan finder: also interpret 24-char hex strings as ObjectIDs
This is necessary as some dynamic node properties have ObjectIDs saved
as strings.
2017-09-14 17:43:23 +02:00
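One way to implement that interpretation (a sketch, not the finder's actual code):

    import re
    from bson import ObjectId

    HEX24 = re.compile(r'\A[0-9a-fA-F]{24}\Z')

    def maybe_object_id(value):
        """Return an ObjectId for ObjectIds and 24-char hex strings, else None."""
        if isinstance(value, ObjectId):
            return value
        if isinstance(value, str) and HEX24.match(value):
            return ObjectId(value)
        return None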
d4facbf2e3 Orphan finder: store the orphan-files.txt file in STORAGE_DIR
This allows running the orphan finder inside a docker container.
2017-09-14 17:34:02 +02:00
ddc8fc0f5e Clarify celery beat schedule a bit 2017-09-14 17:15:11 +02:00
82d2921424 Added support for periodic Celery tasks.
You have to run "manage.py celery beat" for this to work too. Run
"manage.py celery beat -- --help" to get CLI option help.
2017-09-14 16:00:59 +02:00
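A hedged example of a beat schedule entry (task name and interval are illustrative, not Pillar's actual configuration):

    from celery.schedules import crontab

    CELERYBEAT_SCHEDULE = {
        'refresh-file-links': {
            'task': 'pillar.celery.file_link_tasks.refresh_expired_links',
            'schedule': crontab(minute='*/10'),  # every ten minutes
        },
    }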
5d137ac997 Added Celery task for refreshing file links
This includes a CLI command to kick off a single run for the Celery task.

This does *NOT* include a check to see whether the task is already running!
2017-09-14 15:12:25 +02:00
b06e17acf0 Added a little reminder about what to do when you add a Celery module 2017-09-14 15:10:54 +02:00
c272b5d4bb refresh_backend_links CLI: don't convert str → int multiple times 2017-09-14 15:10:32 +02:00
eba28b4eb4 File link refresh: report on every N refreshed links
This makes it easier to see what the Celery worker is actually working on
when refreshing a large number of links.

It'll report on every N refreshed links, where N = link_count/25 but
clamped to N ∈ [5, 100]
2017-09-14 15:10:09 +02:00
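The reporting interval from the message, as code (refresh_link is a hypothetical per-file helper):

    import logging

    log = logging.getLogger(__name__)

    def refresh_file_links(to_refresh, link_count: int):
        report_every = min(100, max(5, link_count // 25))  # clamp N to [5, 100]
        for idx, file_doc in enumerate(to_refresh, 1):
            refresh_link(file_doc)  # hypothetical per-file refresh
            if idx % report_every == 0:
                log.info('Refreshed %i of %i file links', idx, link_count)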
44f473221f File link refresh: ignore soft-deleted files 2017-09-14 15:06:37 +02:00
3be47056a0 Orphan finder: drop the per-project finding
Overall finding is much faster, at the expense of a bit more RAM.
2017-09-14 12:18:10 +02:00
be6746f7ab Fixed bug when parsing node without content type property 2017-09-14 12:09:54 +02:00
8cb506731f find_for_other: fail early when no project is given 2017-09-14 12:08:32 +02:00
e09649586f find_url_for_node: fail early when node has no valid project 2017-09-14 12:08:08 +02:00
230c15d51c Fix snag that happens when PUTting a user document without roles key. 2017-09-14 11:23:35 +02:00
c4a765e73b Change url of EXAMPLE_PROJECT
This prevents tests from breaking as we start implementing the concept of 'main' project.
2017-09-13 23:30:23 +02:00
18eb84fa9d Log capabilities at DEBUG level. 2017-09-13 16:36:36 +02:00
1a505bb0a2 Work around bugsnag issue
3263f0a551b663f93d1f2ec8087ef43302752f8b didn't fix it in production.
2017-09-13 16:36:29 +02:00
86caa3a044 Fixed inconsistency between requirements.txt and setup.py 2017-09-13 16:36:01 +02:00
3263f0a551 Upgraded bugsnag 2.3.1 → 3.1.1
I hope this fixes this error; I no longer see it locally:

Traceback (most recent call last):
  File "/data/git/blender-cloud/runserver.wsgi", line 16, in <module>
    application = PillarServer(my_path)
  File "/data/git/pillar/pillar/__init__.py", line 96, in __init__
    self._config_bugsnag()
  File "/data/git/pillar/pillar/__init__.py", line 191, in _config_bugsnag
    handle_exceptions(self)
  File "/opt/python/lib/python3.6/site-packages/bugsnag/flask/__init__.py", line 27, in handle_exceptions
    got_request_exception.connect(__log_exception, app)
  File "/opt/python/lib/python3.6/site-packages/blinker/base.py", line 130, in connect
    sender_ref = reference(sender, self._cleanup_sender)
  File "/opt/python/lib/python3.6/site-packages/blinker/_utilities.py", line 134, in reference
    weak = callable_reference(object, callback)
  File "/opt/python/lib/python3.6/site-packages/blinker/_utilities.py", line 145, in callable_reference
    return BoundMethodWeakref(target=object, on_delete=callback)
  File "/opt/python/lib/python3.6/site-packages/blinker/_saferef.py", line 135, in __new__
    key = cls.calculate_key(target)
  File "/opt/python/lib/python3.6/site-packages/blinker/_saferef.py", line 196, in calculate_key
    return (id(get_self(target)), id(get_func(target)))
  File "/opt/python/lib/python3.6/site-packages/events/events.py", line 41, in __getattr__
    (self.__class__.__name__, name))
AttributeError: type object 'PillarServer' has no attribute '__self__'
2017-09-13 16:03:38 +02:00
8aa6bb61dd Slightly nicer initialisation of Bugsnag 2017-09-13 16:02:48 +02:00
f89bec5089 Allow <pre> tags in comments
This is useful for code blocks.
2017-09-13 15:34:36 +02:00
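Assuming comment HTML is sanitised against a tag whitelist (for example with the bleach library), the change boils down to one more allowed tag:

    import bleach

    ALLOWED_COMMENT_TAGS = ['a', 'b', 'blockquote', 'code', 'em', 'i',
                            'li', 'ol', 'p', 'strong', 'ul',
                            'pre']  # newly allowed, for code blocks

    def sanitize_comment(html: str) -> str:
        return bleach.clean(html, tags=ALLOWED_COMMENT_TAGS, strip=True)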
896784a351 Clear session when token is invalid
Before this, the user's authentication token would still be stored in
the session even when it's found to be invalid. This caused a login
action to fail, but not in such a way that we would redirect to the login
page of Blender ID. Rather, it would keep you not logged in. By clearing
the session we're sure that the invalid token is forgotten, and the next
request will handle the login properly.
2017-09-13 15:23:38 +02:00
6488f4677e Be more graceful when URLer service isn't configured properly.
Errors are still logged, but find_url_for_node() will just act as if the
node doesn't exist when the URLer authentication token is invalid.
2017-09-13 15:22:04 +02:00
f650835c07 Orphan finder: soft-delete orphan files
This uses the orphan-files.txt file output by find_orphan_files() to
mark those files as deleted. This allows for a two-stage approach, where
file IDs are found on one machine (against a read-only MongoDB slave, for
example) and soft-deleted on another machine (against a writable master).
2017-09-13 14:05:28 +02:00
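A sketch of that second stage, using the _deleted flag added to the files schema below:

    from pathlib import Path

    from bson import ObjectId
    from pillar import current_app

    def soft_delete_orphans(path=Path('orphan-files.txt')):
        oids = [ObjectId(line.strip())
                for line in path.read_text().splitlines()
                if line.strip()]
        result = current_app.db('files').update_many(
            {'_id': {'$in': oids}},
            {'$set': {'_deleted': True}})
        print('Soft-deleted %i of %i files' % (result.modified_count, len(oids)))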
33feaa81ca Orphan finder: refuse to find orphans when orphan-files.txt exists. 2017-09-13 14:05:28 +02:00
16bf193b0e Added soft-delete to the files schema.
This allows us to soft-delete orphan files, at least until we know
that the orphan file detection is solid and can really be trusted.
2017-09-13 14:05:28 +02:00
ce59bc3335 Orphan finder: write file object IDs to orphan-files.txt 2017-09-13 14:05:28 +02:00
46d834f5aa Orphan finder: log duration of orphan file search 2017-09-13 14:05:28 +02:00
4be05c8b57 Orphan finder: when iterating all projects, gracefully stop at CTRL+C 2017-09-13 14:05:28 +02:00
5ce02bbbfe Orphan finder: fix bug when no orphan files are found 2017-09-13 14:05:28 +02:00
d01b498ad5 Orphan finder: Default project._deleted to False 2017-09-13 14:05:28 +02:00
b1d69b2304 Added orphan file finder. Works per project or pass 'all' for all projects.
This is quite a heavy thing to run, since it goes over all files of a
project, and then goes over every document in (almost) every collection
which has a property 'project' that's set to the project ID. It then goes
over every document to find all ObjectIDs and removes those from the set
of file ObjectIDs for that project. The remaining ObjectIDs are considered
orphans.

This is a very thorough search, but it doesn't require any knowledge of
the document and collection structure, so it should be future-proof.
2017-09-13 14:05:28 +02:00
9ac870e0a5 Fixed scrolling to comment when the comment ID is in the URL hash. 2017-09-13 14:05:18 +02:00
a8511c9db5 Gracefully handle read timeouts when communicating with BlenderID 2017-09-12 16:30:11 +02:00
ab7d623d27 Create some indices used for statistics 2017-09-12 11:58:31 +02:00
901fea3361 Do not assume that users and groups keys exist in permissions 2017-09-11 22:35:44 +02:00
Dalai Felinto
3329788be8 Localization: Fix setup.py install/develop
After I removed the @translation_manager.command decorators the setup
install was failing because translations was not found in pillar.cli.
2017-09-11 21:54:34 +02:00
Dalai Felinto
303a33c3bf Internationalization: Backend support to localization based on user browser
User experience
===============
For users it means we can provide localized websites to enrich their
overall experience.

Although for the Blender Cloud this doesn't make much sense (since the
content is in English), Flamenco and Attract can really benefit from
this.

New configuration settings
==========================
There are two new parameters in config.py:

* DEFAULT_LOCALE='en_US'
* SUPPORT_ENGLISH=True

They are both properly documented in the `config.py` file.

Technical details
==================
We are using the 'Accept-Language' header to match the
available translations with the user's supported languages.

If an extension has a `translations` folder, it's used for translations.
However the main application (e.g., Blender Cloud) is the one that
determines the supported languages based on its `languages` folder.

How to mark strings for translation
===================================
See the documentation in README.md.

But as an example, 404.pug and pillar/__init__.py::handle_sdk_resource_invalid
have marked up strings that will be extracted once you install pillar,
or run any of the translation commands.

Remember to **gulp** after you update the template files.

How to setup translations
=========================
You will need to create translation for the main project, and for each
extension that you want to see translated. I added a new entry-point to
the installation of Pillar.

So all you need is to use the `translations`
script to initialize, update and compile your translations.

Pending tasks
=============
Aside from marking more strings for extraction and start the translation
effort it would be interesting to replace the pretty_date routine with
momentjs.

Acknowledgement
===============
Many thanks to Sybren Stüvel for the suggestions and thorough code
review. Thanks also to Francesco Siddi for the original documentation
and for suggesting that I tackle this. And kudos to Pablo Vazquez for the
motivational support and for the upcoming "strings mark up" task force!

The core of the implementation is based on the i18n chapter of
Miguel Grinberg's great 'The Flask Mega-Tutorial'.

Reviewers: sybren

Differential Revision: https://developer.blender.org/D2826
2017-09-09 00:26:18 +02:00
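A minimal sketch of the header matching described above (SUPPORTED_LOCALES is an assumed config name; in reality the `languages` folder determines the list):

    from flask import current_app, request

    def browser_locale() -> str:
        supported = current_app.config.get('SUPPORTED_LOCALES', ['en_US'])
        return (request.accept_languages.best_match(supported)
                or current_app.config['DEFAULT_LOCALE'])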
Dalai Felinto
b769cc6c3c Fix app_root in unittests 2017-09-08 14:06:55 +02:00
4e5ce71a52 File storage link refreshing: log nr of documents to refresh. 2017-09-07 15:53:16 +02:00
7dbd0c7896 Exclude Flamenco task logs when dumping the database. 2017-09-07 15:52:57 +02:00
Dalai Felinto
d5a55f71a7 Fix Flask and Eve dependency issue
Eve expects 'flask>=0.10.1,<=0.12' so we can't use flask==0.12.2 in
pillar for now.

This needs to be fixed upstream (Eve), but for now run `python setup.py
install`.
2017-09-06 17:51:21 +02:00
ebcd280886 Updated requirements in setup.py 2017-09-06 17:30:55 +02:00
Dalai Felinto
216b9278af A user should not be able to vote on own content
This should be hidden in the UI as well, but the backend should support this too.
We also want to set initial rating of 1 for contents that need it.

This commit includes a new unittest for this case.

Reviewers: sybren

Differential Revision: https://developer.blender.org/D2825
2017-09-06 13:51:32 +02:00
eb467474f8 Make our require_login() optionally redirect to the login page
This mimics the behaviour of flask_login. In our case, it only redirects
when redirect_to_login=True and the user is anonymous. Otherwise it still
results in a 403 Forbidden response.
2017-09-06 12:07:20 +02:00
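A hedged sketch of that behaviour (the login endpoint name is an assumption):

    import functools

    from flask import abort, redirect, request, url_for
    from pillar.auth import current_user

    def require_login(*, redirect_to_login: bool = False):
        def decorator(view):
            @functools.wraps(view)
            def wrapper(*args, **kwargs):
                if current_user.is_anonymous:
                    if redirect_to_login:
                        # assumed endpoint name for the login view
                        return redirect(url_for('users.login', next=request.url))
                    abort(403)
                return view(*args, **kwargs)
            return wrapper
        return decorator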
e5b80297ba Fixed user ID, we now pass a UserClass instance to compute the capabilities 2017-09-05 16:05:57 +02:00
6b32c4ede8 Use has_cap('admin') instead of has_role('admin') 2017-09-05 16:05:39 +02:00
2db0ee11db Fixed case for packages.
'pip freeze -r requirements.txt' otherwise complains about this.
2017-09-05 13:38:04 +02:00
146bf97432 Removed doubly-listed CommonMark package 2017-09-05 13:37:43 +02:00
fc5177d58b Removed pycrypto package; it's not used. 2017-09-05 13:35:30 +02:00
97564022c3 Renamed 'child' to 'variation', since it's not about child nodes. 2017-09-05 11:56:41 +02:00
941fd8dd9c Show max filesize of variations, rather than original file size.
The original file isn't always accessible, and can be of completely
different size than the downloadable variations. This is mostly applicable
to videos.
2017-09-05 11:56:28 +02:00
b6b7aad046 Have a nicer 403 Forbidden message when the user isn't logged in.
Since we don't know who the user is, just stating that they don't have
access isn't correct.
2017-09-05 11:35:21 +02:00
a8e912fcb1 Include 'next' URL when logging in through a 403 Forbidden page 2017-09-05 11:35:21 +02:00
903bb6cfe9 Typo 2017-09-04 19:46:42 +02:00
94efdcd9b5 Small layout style fixes 2017-09-04 19:16:47 +02:00
b8153b5c9a Assets: Move details to its own file and share across assets
And new styling for the details as well
2017-09-04 19:13:38 +02:00
923b4bd9d6 Notifications: no minimum width 2017-09-04 16:19:17 +02:00
4c25248e5f Project: breadcrumbs no longer exist 2017-09-04 16:19:17 +02:00
06dd24bcf9 Project Edit: Node types editing embedded 2017-09-04 16:19:17 +02:00
941ec94313 Node Type Edit: Save node_type description 2017-09-04 16:19:17 +02:00
adcbebd7b6 Videos: Keep a 16:9 aspect ratio 2017-09-04 16:19:17 +02:00
4a68821dee JS: Fix 50px offset on containerResizeY 2017-09-04 16:19:17 +02:00
4b1bb6283e Let Flask know our preferred URL scheme 2017-09-01 16:20:37 +02:00
3a3b3c3269 Revert "Replaced config SCHEME with Flask's own PREFERRED_URL_SCHEME setting."
This reverts commit 8318d4b1f69846e21002acafd4f410f5003af6f6.
2017-09-01 16:19:58 +02:00
fe64a0c70e Changed organizations endpoint /orgs → /o 2017-09-01 11:39:59 +02:00
ea9af92bd4 Organizations: Click anywhere in the list item to open
Suggestion by Dr. Sybren
2017-09-01 11:39:46 +02:00
dd3cfe80ef CSS: don't use cursor pointer for organizations/projects lists 2017-09-01 11:35:55 +02:00
314f0b8dbe Quote token when logging
This helps when debugging strange tokens.
2017-09-01 11:21:02 +02:00
6a4f571b05 Organizations list styling 2017-08-31 16:35:31 +02:00
30b3d6abaf Full width projects, search, and top navbar 2017-08-31 16:35:31 +02:00
01c8f6cdae Inputs: Dim the placeholder text 2017-08-31 16:35:31 +02:00
8318d4b1f6 Replaced config SCHEME with Flask's own PREFERRED_URL_SCHEME setting.
This prevents us from explicitly passing SCHEME to url_for() calls.

NOTE: this possibly requires an update to your config_local.py
2017-08-31 14:37:35 +02:00
d6dd0d69d0 Fix for missing underscore in _scheme arg
Be more careful next time!
2017-08-31 14:32:39 +02:00
2d3b54b80b Use app config SCHEME to enforce https when doing oauth redirects 2017-08-31 14:22:07 +02:00
89f24ac4e6 Buttons: Don't force uppercase text 2017-08-30 23:18:59 +02:00
7890cd2622 Introducing settings blueprint
Now settings live in a dedicated space, the settings blueprint can be used by Pillar applications, and the templates can be extended or overridden. Moved subscription and email settings to the blender-cloud repository.
2017-08-30 23:10:28 +02:00
b6bd40f956 Theatre View: backdrop block not needed anymore 2017-08-30 15:34:13 +02:00
c0b380f215 Gulp: Fix livereload 2017-08-30 15:05:38 +02:00
811236cff4 Migrate Jade to Pug template engine
Jade templates engine has been renamed to Pug.

We are already using Pug in the Blender Cloud repository; Flamenco and Attract will follow
2017-08-30 14:04:15 +02:00
62542f0329 Rolled back some flask_login and g.current_user integration
Setting flask_login.current_user ourselves was a bad idea, and messed up
flask_login's internal administration. Our code now just manages
g.current_user in these specific instances, which works fine.
2017-08-30 12:39:46 +02:00
6825b8bf74 Fixed infinite recursion. 2017-08-29 12:31:52 +02:00
bdd603fb17 Using new UserClass instances everywhere:
- No more direct access to g.current_user, unless unavoidable.
  - Using pillar.auth.current_user instead of g.current_user or
    flask_login.current_user.
  - p.a.current_user is never checked against None.
  - p.a.current_user.is_authenticated or is_anonymous is used, and never
    together with a negation (instead of 'not is_anon' use 'is_auth').
  - No more accessing current_user as a dict.
  - No more checks for admin role, use capability check instead.
2017-08-29 11:34:48 +02:00
86e76aaa5f Use UserClass instead of assigning dict to g.current_user 2017-08-29 11:34:48 +02:00
88af86ae61 Toastr: Style buttons in notifications 2017-08-27 17:44:40 +02:00
a6f56a4811 OAuth test: checking email address too 2017-08-25 12:53:21 +02:00
c7c867f1c7 OAuth signin: streamlined instantiation of OAuthSignIn subclasses 2017-08-25 12:35:08 +02:00
add2538655 Prevent JS error by sync-loading jquery.autocomplete….js 2017-08-25 12:03:52 +02:00
ff1b14d980 Project sharing: Simplified user selection JS code. 2017-08-25 12:03:52 +02:00
a12838032f Introducing exception handling in the application code 2017-08-25 11:47:40 +02:00
6edd0e2f8d Fix for embedded template paths
When editing a custom node type we were building the wrong path, and not passing the project argument to the render_template function.
2017-08-25 10:57:29 +02:00
398bbbc316 Fix up config_testing values to conform with tests 2017-08-25 10:55:35 +02:00
41a82c44c5 Tests for providers callbacks
Also added SERVER_NAME in config_testing and pre-populated the keys of OAUTH_CREDENTIALS, since the implementation of providers is part of the application.
2017-08-25 10:55:35 +02:00
cecf81a07d Initial tests for OAuthSignIn 2017-08-25 10:55:35 +02:00
45275c3831 Switch to class-based OAuthUserResponse
Instead of returning an arbitrary number of items, we provide a standardized and better-documented response.
2017-08-25 10:55:35 +02:00
99866542a1 Style Google oauth login 2017-08-25 10:53:30 +02:00
6b3e523036 Remove Flask-OAuthlib and oauth_blender_id from Pillar
We switch completely to a rauth-based approach, allowing multiple providers for authentication.
2017-08-25 10:53:22 +02:00
6e9a539d61 Fix typo 2017-08-25 10:52:52 +02:00
c9b2eb25b2 Add default OAUTH_CREDENTIALS in config 2017-08-25 10:51:45 +02:00
23b856b073 Move Blender ID to extensible OAuth
Also, added support for Google OAuth.
2017-08-25 10:51:45 +02:00
e0520e265d Style login page 2017-08-25 10:51:45 +02:00
9b9e0488d3 New login page
Exposes all available login providers
2017-08-25 10:51:45 +02:00
c827dc4ed2 Initial work to support multiple OAuth clients 2017-08-25 10:51:45 +02:00
d48a308cc6 Renamed pillar.auth.current_web_user to pillar.auth.current_user
This is an in-between change. In the future, we want to always set
g.current_user so that it's never None (but rather an AnonymousUser
instance). However, there is still some code that assumes that when
g.current_user is not None the user is logged in. This should be
addressed first.
2017-08-24 14:28:18 +02:00
b9ae4396e5 Orgs: show "My Organizations" in the user's menu
This is shown only when the user is member of or administrator for one or
more organizations, otherwise it's hidden.
2017-08-24 14:28:18 +02:00
95dc799692 Orgs: made org properties for non-admins a bit nicer 2017-08-24 14:28:18 +02:00
be12bd7d99 Orgs: allow users to leave an organization 2017-08-24 14:28:18 +02:00
0445c3bd86 Orgs: assign capabilities to org-subscriber role 2017-08-24 14:28:18 +02:00
694e04cc50 Orgs: UI tweak 2017-08-24 14:28:18 +02:00
598b59c0c6 Orgs: gracefully handle 'not enough seats' error 2017-08-24 14:28:18 +02:00
1e1bd83baf Orgs: refresh all members' roles after org changed roles 2017-08-24 14:28:18 +02:00
d41e2bbce4 Orgs: fixed "Create New Organization" button
It now actually creates the new org and shows it.
2017-08-24 14:28:18 +02:00
5f607fa2cf Orgs: Moved some JS around, no real semantic changes 2017-08-24 14:28:18 +02:00
cd417bb9db Orgs: styling tweaks to make member list a bit nicer 2017-08-24 14:28:18 +02:00
65518f2208 Spaces to tabs 2017-08-24 14:28:18 +02:00
30902bc9cd Orgs: made the admin picker a bit nicer to work with
Also it now asks for a confirmation before transferring admin-ship to
the new admin user.
2017-08-24 14:28:18 +02:00
37b1a6e0c1 Orgs: added labels to organization form 2017-08-24 14:28:18 +02:00
f1edb901d1 Orgs: allow setting org admin via web interface / PATCH request 2017-08-24 14:28:18 +02:00
a5d11ec31b Refactored user search as JQuery plugin 2017-08-24 14:28:18 +02:00
2bf95223b7 Orgs: layout tweaks 2017-08-24 14:28:18 +02:00
08294e2f14 Orgs: allow admins to set seat count and org_roles 2017-08-24 14:28:18 +02:00
1c9f425a40 Orgs: use flask_login.current_user to avoid calling current_user() all the time 2017-08-24 14:28:18 +02:00
4116357447 Orgs: some small fixes, mostly for stability / corner cases 2017-08-24 14:28:18 +02:00
e9cb235640 Added web interface for organizations.
It looks like crap, but it allows you to edit the details and the members.
2017-08-24 14:28:18 +02:00
64eab850c5 Orgs: pillar admins can always edit an organization 2017-08-24 14:28:17 +02:00
c6eebc4eae Orgs: allow setting location field by PATCH 2017-08-24 14:28:17 +02:00
1bd6e07fe2 Orgs: Allow adding individual known users by user ID.
This is used for the selection by user search.
2017-08-24 14:28:17 +02:00
1ad13d048f Some extra type safety checks 2017-08-24 14:28:17 +02:00
cfde720b1d Orgs: PATCH op to batch-add emails as members now strip()s emails
It also refuses to add empty emails.
2017-08-24 14:28:11 +02:00
5d17d892a4 Orgs: Use current_user() in PATCH handler 2017-08-24 14:28:02 +02:00
40172bf8b5 Orgs: Use create-organization capability to control access
This is more explicit and future-proof than checking for admin cap.
2017-08-24 14:27:52 +02:00
72404d0fd9 Handle registration of previously unknown organization members.
When a new user is created, two things happen:
  - before inserting into MongoDB, the organizational roles are given
  - after inserting, the organizations are updated to move the user from
    `unknown_members` to `members`.
2017-08-24 14:26:19 +02:00
b53d485960 Added access control to organizations Eve endpoints 2017-08-24 14:26:19 +02:00
cf51d1a280 Added utility function current_user() that acts like flask_login.current_user
This actually returns an AnonymousUser object, instead of None, when the
user is not logged in.

For compatibility with existing code, this function doesn't set
g.current_user to that AnonymousUser instance. We may decide to do this
later.
2017-08-24 14:26:19 +02:00
efc1890871 Added PATCH support for organizations
With a PATCH request you can now:
  - assign users,
  - remove a user,
  - edit the name, description, and website fields.

Only the organization admin user can do this.
2017-08-24 14:26:19 +02:00
93d534fe94 Added Organization Manager.
This is a Flamenco/Attract-style Manager object that's instantiated by
the PillarApplication. It can create Organizations and assign/remove
users.

Also I updated the Organization schema to reflect the currently desired
design.

NOTA BENE: this does not include any security/authorisation checks on Eve's
organizations collection.
2017-08-24 14:25:52 +02:00
87afbc52f6 Updated do_badger to take an optional set of roles.
The 'role' parameter now must be passed as keyword arg instead of
positional arg. Either 'role' or 'roles' must be given.
2017-08-23 08:59:23 +02:00
15de24214a Decouple upload_and_process from stream_to_storage
The stream_to_storage function is still quite large, and this is a first step towards refactoring it. stream_to_storage can be used for files that are uploaded to the server without the /stream endpoint (for example downloaded from a link).
2017-08-22 13:26:12 +02:00
2b09711eb0 Load user capabilities from Pillar config and allow extensions to extend.
Default caps can be overridden using the USER_CAPABILITIES name in
config_local.py. These can be extended by Pillar Extensions.
2017-08-22 11:31:17 +02:00
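A hedged example of such an override in config_local.py (role and capability names are illustrative):

    # Maps a role to the set of capabilities it grants.
    USER_CAPABILITIES = {
        'subscriber': {'subscriber'},
        'demo': {'subscriber'},
        'admin': {'admin', 'switch-user', 'encode-video'},
    }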
566f2a4835 Late-initialise CLI user & late-import UserClass class
This may fix some unit tests issues.
2017-08-22 09:41:38 +02:00
575a7ed1a7 Introduced role-based capability system.
It's still rather limited and hard-coded, but it works.
2017-08-18 14:47:42 +02:00
566a23d3b6 Unified user representation for web and API calls
Both approaches now use a pillar.auth.UserClass instance. g.current_user
is now always set to that instance, even for web entry points.

This UserClass instance can still be keyed like the old dict, but this is
for temporary compatibility and shouldn't be relied on in new or touched
code.
2017-08-18 13:19:34 +02:00
6473ad3de7 Allow iframes that contain content from our Google Cloud storage. 2017-08-17 12:59:42 +02:00
6285e81883 Add course and workshop project types to admin interface 2017-07-27 17:18:20 +02:00
4c896ae6b7 Introducing new icons
graduation-cap and lightbulb thanks to @venomgfx.
2017-07-26 16:55:42 +02:00
b3aee6c8bc Introducing new types of projects
We reorganized training projects into courses and workshops. Project types should be expandable by extensions to avoid this kind of change.
2017-07-26 16:55:02 +02:00
e18ed79c7b Move training and open-projects to blender-cloud repo 2017-07-26 16:52:45 +02:00
9aa73c5306 Moved project index_collection to blender-cloud repo 2017-07-26 16:52:05 +02:00
e430b2b9a1 Update url_for from main.join to cloud.join
This text should be moved to Blender Cloud.
2017-07-16 01:00:24 +02:00
502e494083 Clean up local login
Use generate_and_store_token and get_local_user directly instead of the /make-token endpoint.
2017-07-14 21:41:40 +02:00
e752a5dc87 On new project creation, use the backend storage set in config 2017-07-14 12:04:24 +02:00
5ec76f8801 Remove Blender Cloud specific pages
They are now available in the blender-cloud repository. This is an effort to make Pillar a generic package.
2017-07-13 18:24:43 +02:00
7f336cb47c Merge branch 'production' 2017-07-13 17:31:28 +02:00
bd13d89817 Added permission check to DELETE of nodes. 2017-07-13 17:29:46 +02:00
8a8f654657 Project sidebar: tweak to active item state 2017-07-13 15:44:00 +02:00
b88594958d Convert spaces to tabs for jade files 2017-07-13 12:36:06 +02:00
3d1757476a Support for OpenGraph and Twitter cards in blogposts 2017-07-13 12:36:06 +02:00
c9af6fe44f Services: Re-design and welcome Flamenco!
Thanks to @fsiddi for feedback
2017-07-11 18:44:30 +02:00
5c21443f9b Homepage: Hide In Production (for now... : 2017-07-11 18:44:30 +02:00
758df5bc26 Gulp: Fix crashing after error (Plumber not plumbing) 2017-07-11 18:44:30 +02:00
4c273671e4 CLI index_users_rebuild() made parallel 2017-07-11 15:29:17 +02:00
f3e79bcfb5 Formatting 2017-07-11 12:56:40 +02:00
b04abef20f Also push user to Algolia when its role changes through the badger
This may cause some superfluous pushes, though.
2017-07-11 12:56:32 +02:00
73d4a77881 Role change blinker: make comparison set-based
This makes it impervious to changes in order and duplicate roles.
2017-07-11 12:17:06 +02:00
c974b388b6 Formatting 2017-06-29 11:05:14 +02:00
66ebfc669a No need to pass ?embed=1 any more 2017-06-29 11:05:14 +02:00
e061d6c29d Allow editing users' email address via /u/
Also reloads the user info after a successful edit.
2017-06-29 11:05:01 +02:00
08cb2b8438 Simplified string 2017-06-16 14:05:13 +02:00
a19ad751a8 Removed obsolete file upload stuff. 2017-06-16 14:05:02 +02:00
fc4ab9d6ba Removed obsolete comment file + function 2017-06-16 13:40:31 +02:00
2482381999 Added ability to add missing node types to replace_pillar_node_type_schemas 2017-06-16 12:40:10 +02:00
6e6ea6082d Renamed _attachments_embedded_schema to attachments_embedded_schema
It's used in multiple files, and thus shouldn't be marked as 'private'.
2017-06-16 12:39:51 +02:00
50108201cf Removed 'content' property from page node type
... because it doesn't work when it's there.
2017-06-16 12:38:51 +02:00
964526924d Save thumbnails with explicit quality setting.
This should have been the default value anyway, but T49477 suggests it
may not be. This should solve that.
2017-06-15 16:56:23 +02:00
05f6fb6016 Upgraded Pillow 2.8.1 → 4.1.1
Version 4.0.0 was actually the first one to officially support Python 3.6,
so we've been lucky so far that it worked at all ;-)
2017-06-15 16:53:06 +02:00
7ed053b6c1 Little clarification
... because I always forget this myself...
2017-06-15 14:52:43 +02:00
8e02de32ab Pillar Extensions can now determine which user roles to index in Algola 2017-06-15 11:31:48 +02:00
8d94901bab Use app.user_roles to construct the roles field in /u 2017-06-15 11:13:44 +02:00
13b67702b4 Let Pillar extensions register new roles.
These will be available via the app.user_roles property.
2017-06-15 11:06:01 +02:00
cdb148fc0d Just import the forms module, not every single form in it separately. 2017-06-15 11:04:44 +02:00
4fd193ba2b Use (eek) the current_user_is_subscriber var injected by the BCloud extension 2017-06-14 16:26:57 +02:00
efa2321ac3 Pillar extensions can now register global Jinja2 context processors. 2017-06-14 16:10:11 +02:00
94d12c6b66 Menu: Style sidebar nav-item-sign-in 2017-06-14 15:01:37 +02:00
73c5032a48 Convert timezone, not replace it 2017-06-14 12:06:20 +02:00
5955b71459 Comments: Use toastr for notifications 2017-06-12 19:47:00 +02:00
b091044bc2 Small tweaks to buttons 2017-06-12 19:06:02 +02:00
3a500f3ea3 Simplify Errors (404 & 403)
TODO: Use a generic error template and pass error/text
2017-06-12 16:49:43 +02:00
de96e8e189 Move _errors Sass from main to base
That way errors are automatically styled on all other apps (Flamenco, Attract...)
2017-06-12 15:03:06 +02:00
10e14e8c24 Project Edit: tweak to node types 2017-06-09 17:57:20 +02:00
6f7aa86e8b Project Edit: Set container size and header width 2017-06-09 17:57:01 +02:00
1b6fbb940b Flamenco Project Settings: Flip buttons to the right 2017-06-09 17:56:40 +02:00
df40560c5a Make notification and user menus a macro 2017-06-09 16:31:14 +02:00
3713f1eee2 Style .btn as buttons 2017-06-09 14:59:46 +02:00
07ca1ad16e Project Edit: Minor style tweaks 2017-06-09 14:59:37 +02:00
de5557a68f Use own icons for toastr notification type 2017-06-09 14:59:18 +02:00
93087f7fa9 Project Edit: Use folder icon for sidebar instead of tree 2017-06-09 14:58:19 +02:00
41bc4b6a6f Project Edit: Nicer listing for node_type 2017-06-09 14:58:05 +02:00
36a5190349 CSS tweaks to js select2 style from Flamenco 2017-06-08 16:43:51 +02:00
293961097f Merge branch 'production' 2017-06-08 11:45:16 +02:00
740df09b9d User edit form: prevent accidentally revoking roles
Prevent accidentally revoking roles that were not part of the form.
2017-06-08 11:35:33 +02:00
263c274774 Allow indexing of flamenco-user role.
Role handling should be refactored so that extensions can also declare
roles, and whether they should be pushed to Algolia or not.
2017-06-08 11:34:53 +02:00
a9c506c290 Set Toastr defaults 2017-06-07 21:10:10 +02:00
85f2c6093d Introducing Toastr for toast notifications 2017-06-07 20:58:27 +02:00
91807ad022 Add comment to STORAGE_BACKEND config 2017-06-07 19:40:19 +02:00
5ce78a2fb3 Create <p> tag, do not use existing tags 2017-06-07 18:59:46 +02:00
155ddf243a Partial revert of "Added @project_view() decorator to reduce duplicated code."
This reverts parts of commit 0cf96e47e84590a4c540f0cf118a76c05a41710b.
The decorator is still there, and it's used by new code (also in Flamenco),
but it's not used by pre-existing code.
2017-06-07 17:06:26 +02:00
31b71eb244 Escape text when presenting search results 2017-06-07 16:22:39 +02:00
1ce4654673 Autodetect timestamp format in Blender ID token expiry.
The new Blender ID uses a different timestamp format than the old one.
We can alter Blender ID, but using the ISO 8601 is a good idea anyway.
2017-06-07 09:00:51 +02:00
72cbd2ce2b Added 'repr' Jinja2 filter.
This can help with debugging, for example by showing the difference between
a string ID and an ObjectID.
2017-06-06 18:29:33 +02:00
3d273d4284 Expose Flask session to Jinja 2017-06-06 18:06:46 +02:00
d920d9e9fe Also mock .s() and .si() celery signature functions. 2017-06-06 17:35:56 +02:00
c2bc52718a Fixed string formatting in exception raising 2017-06-06 17:35:56 +02:00
c3ea8228a1 Less padding on buttons 2017-06-06 17:03:24 +02:00
5047803e3c CSS: progress-bar styling part of pillar's base 2017-06-06 17:03:24 +02:00
1c566c6259 Fixed bug in GoogleCloudStorageBlob.exists() 2017-06-06 16:35:14 +02:00
2ad8c5186c Storage backends: added exists() method
This method returns whether the file exists on the backend.
2017-06-06 15:33:05 +02:00
d6506b6402 Moved Celery CLI commands to 'manage.py celery' submodule + added extensions:
Added a 'celery queue' command, which is supposed to show queued
Celery tasks (but doesn't work quite as I'd expect).

Added a 'celery purge' command, which purges queued Celery tasks.
2017-06-06 15:05:18 +02:00
5bde262ad7 Ping Celery workers when starting up.
This makes debugging Celery/RabbitMQ/Redis issues much easier, as it
happens at application startup, rather than when we first create a Celery
task.
2017-06-02 17:33:15 +02:00
27ad3459c1 Testing: make Celery tasks execute immediately when called. 2017-06-02 16:15:46 +02:00
6f16e20bf6 Flask Request.json() is deprecated, use get_json() instead.
See http://flask.pocoo.org/docs/0.12/api/#flask.Request.json
2017-06-02 16:03:45 +02:00
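The replacement is mechanical; a tiny self-contained example:

    from flask import Flask, jsonify, request

    app = Flask(__name__)

    @app.route('/echo', methods=['POST'])
    def echo():
        payload = request.get_json()  # preferred over the deprecated request.json
        return jsonify(payload)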
3e67db50f0 Tests: added some code to easily enter the Flask app context
This can't be trivially enabled globally, since it seems to leak certain
things like authentication info between calls.
2017-06-02 16:02:18 +02:00
c7e225e81b Added a bit about Celery to the README.md 2017-06-02 11:50:02 +02:00
878bf22695 Migrated Algolia push/delete of nodes to Celery background tasks. 2017-06-02 10:44:37 +02:00
d0c30cfeca Mock Celery while testing, to prevent actual background task creation. 2017-06-02 10:44:37 +02:00
5af54237b9 Integrated Celery startup / management / config with PillarServer. 2017-06-02 10:44:37 +02:00
e7d268bde6 Algolia: Use Celery to push user updates in a background task. 2017-06-02 10:44:37 +02:00
ed4ee5228a Added Celery for background tasks.
The implementation is still rather simple, using hard-coded configuration
values. This will change in subsequent commits.

The worker can be started with "manage.py operations worker". Celery
Worker CLI options can be passed after a double dash, like this:

    ./manage.py operations worker -- -C -E
2017-06-02 10:44:37 +02:00
f152521041 Algolia user push: simplified & streamlined the code a bit. 2017-06-02 10:44:37 +02:00
2b36b4c514 PEP8 formatting 2017-06-02 10:44:37 +02:00
5f2153ae5a Sorted imports 2017-06-02 10:44:37 +02:00
22301a0e9a Removed unused import 2017-06-02 10:44:37 +02:00
96ffee49cf Update url_for statements for project redirects
Now we always point to the full project urls.
2017-06-01 17:40:16 +02:00
8c38861c2f Remove Blender Cloud specific redirects
They have been moved to the Apache config file in the blender-cloud repo.
2017-06-01 17:24:46 +02:00
80a0643747 Updated font location in README 2017-06-01 12:10:33 +02:00
92536aa7ac Renamed readme.md to README.md 2017-06-01 12:10:24 +02:00
7ac9203753 Pillar-font: added pi-link and pi-unlink 2017-06-01 12:10:00 +02:00
d67f65019e Escape HTML when displaying search results 2017-05-31 17:14:17 +02:00
a806f294b2 Some extensions to make Flamenco tests possible 2017-05-31 17:13:57 +02:00
bfbcdee926 CLI 'index_users_rebuild' should gracefully stop when Algolia isn't configured. 2017-05-31 11:03:33 +02:00
34b9be4efa Don't use str.format() when logging.
The correct way to log is to use %-formatting, and pass the format args to
the logging function. This prevents the string from being formatted at all
when the log item isn't logged anywhere (in this case, when the log level
is WARNING or higher).
2017-05-31 11:03:04 +02:00
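For example:

    import logging

    log = logging.getLogger(__name__)
    user_id, roles = 'abc123', {'subscriber'}  # example values

    # Good: formatting is deferred until the record is actually emitted.
    log.debug('Updating user %s with roles %s', user_id, roles)

    # Avoid: the string is built even when DEBUG logging is disabled.
    log.debug('Updating user {} with roles {}'.format(user_id, roles))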
2c78697e80 Pass extension pages to all extensions' "project settings" pages. 2017-05-31 10:35:49 +02:00
f953f1e51b Moved common Jade code for project edit pages into projects/edit_layout.jade 2017-05-31 10:35:49 +02:00
207dacf816 Reindex users that have not been deleted 2017-05-28 20:21:06 +02:00
add1c8b9b3 Make ROLES_FOR_COMMENT_VOTING a config value
This way we can override it when extending Pillar for other projects that might not require the 'subscriber' or 'demo' roles.
2017-05-28 19:04:11 +02:00
85922f4493 Fix to support missing roles key in user 2017-05-24 19:42:44 +02:00
a7d3ba24b4 Refactor cli scripts in submodules 2017-05-24 19:41:35 +02:00
10c584daab Skip user indexing if user has service group 2017-05-24 18:11:36 +02:00
fe56b04f7d Remove bottom link on sitemap 2017-05-24 18:03:49 +02:00
43d4a36a0d CLI functions for index management
When developing locally, it is important to set up a dedicated indexing backend. With these two operations functions it is now possible:
- index_users_rebuild: Clear users index, update settings and reindex all users.
- index_users_update_settings: Configure indexing backend as required by the project
2017-05-24 18:02:39 +02:00
12a8a34bdc Fixed JS injection vulnerability.
JavaScript in the user's full name or username was executed when adding
that user to a project.
2017-05-24 16:32:05 +02:00
85b6ff2d7f Use str2id(x) instead of ObjectId(x)
The latter produces an internal server error if 'x' is not a valid ObjectId,
whereas the former produces a 400 Bad Request.
2017-05-24 16:31:15 +02:00
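A sketch of the difference in behaviour:

    from bson import ObjectId
    from bson.errors import InvalidId
    from werkzeug.exceptions import BadRequest

    def str2id(document_id: str) -> ObjectId:
        try:
            return ObjectId(document_id)
        except (InvalidId, TypeError):
            # A clean 400 Bad Request instead of an uncaught exception (500).
            raise BadRequest('Invalid object ID %r' % document_id)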
4edbcd6a98 PEP8 formatting 2017-05-24 15:48:27 +02:00
2ba52e2467 Allow extensions to have a project settings page. 2017-05-24 15:48:27 +02:00
b7bccfeee3 Annotate sidebar_links(project) param + return type 2017-05-24 15:48:27 +02:00
43a04880e0 Allow extensions to declare their icon.
The PillarExtension.icon() property returns the icon HTML class,
for use like i.pi-{{ext.icon}}
2017-05-24 15:48:27 +02:00
0cf96e47e8 Added @project_view() decorator to reduce duplicated code. 2017-05-24 15:48:27 +02:00
7fbe648d99 Import current_app from pillar instead of flask 2017-05-24 15:48:27 +02:00
1ce13b71a3 Add type annotation to app.pillar_extensions 2017-05-24 15:48:27 +02:00
4e268510f2 Declare pillar.current_app before importing other Pillar modules.
This makes it easier/possible to just do "from pillar import current_app"
in submodules.
2017-05-24 15:48:27 +02:00
1f2dd34683 No longer using deprecated @abc.abstractproperty
See https://docs.python.org/3/library/abc.html#abc.abstractproperty for
more info.
2017-05-24 15:48:27 +02:00
c50f745744 Bottom sitemap: Fix URLs 2017-05-24 14:51:20 +02:00
5e721c61b9 Added function to easily remove someone from a group. 2017-05-24 10:56:53 +02:00
8c1dbf984b Homepage update for Blender Cloud
Now with more Agent!
2017-05-22 15:59:43 +02:00
38df6e873b Extracted function to generate authentication tokens for service accounts. 2017-05-19 12:02:00 +02:00
ef2d8d14a0 Added PillarServer.validator_for_resource()
This makes it possible to perform Cerberus validation on documents.
2017-05-18 16:32:05 +02:00
1f0a855510 Added pillar.current_app local proxy
This proxy is annotated as PillarServer instance, so using it in an IDE
will give you much better autocompletion.
2017-05-18 16:31:43 +02:00
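The trick is a typed proxy; a sketch of the idea:

    from typing import TYPE_CHECKING

    from flask import current_app as flask_current_app
    from werkzeug.local import LocalProxy

    if TYPE_CHECKING:
        from pillar import PillarServer

    def _get_app() -> 'PillarServer':
        return flask_current_app._get_current_object()

    # The annotation is what gives IDEs full autocompletion:
    current_app: 'PillarServer' = LocalProxy(_get_app)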
50d62f17b8 Allow specification of full name when creating service account 2017-05-18 15:46:02 +02:00
c12b646b09 More logging in PATCH handler 2017-05-18 15:46:02 +02:00
cbe182a298 Gravatar: support None email addresses 2017-05-18 15:46:02 +02:00
59a95450e5 Updated Eve, Flask, and Werkzeug. Adjusted code to make Pillar work again.
Eve     : 0.6.3   → 0.7.3
Flask   : 0.10.1  → 0.12.2
Werkzeug: 0.11.10 → 0.11.15

Also updated some secondary requirements.
2017-05-18 15:46:02 +02:00
e4f221ab13 Take default crappy secret key from config.py
This forces anyone installing Pillar to actually generate a proper secret.
2017-05-18 15:46:02 +02:00
4ad82a1eb3 Updated algoliasearch requirement in setup.py 2017-05-18 15:46:02 +02:00
47b81055fd PEP8 formatting 2017-05-18 15:46:02 +02:00
19d9684a67 Raise ConfigurationMissingError instead of SystemExit 2017-05-18 15:46:02 +02:00
091c70e735 Project homepage responsive tweak
Full width for latest cards
2017-05-15 11:56:13 +02:00
abcb0c27a0 Remove Agent 327 special content lock 2017-05-15 11:55:43 +02:00
71403e6f28 Tests: Allow specification of user's email address 2017-05-12 14:48:36 +02:00
9a10c86329 Added pillar.auth.current_web_user to easily get the current UserClass. 2017-05-12 13:55:55 +02:00
fdb9154b85 Allow login_user() to load the user from the database
This makes it easier to properly log someone in from a unit test.
2017-05-12 13:55:55 +02:00
2703617179 Added 'groups' property to UserClass
This property was created by _load_user(), but never had a default
value set in UserClass.__init__().
2017-05-12 13:55:55 +02:00
9f752e2584 Made AnonymousUser a subclass of UserClass 2017-05-12 13:55:55 +02:00
82437724cc Added some type annotation
The web layer uses string IDs, whereas the API layer uses ObjectIDs.
Those annotations make it a bit more explicit what is used where.
2017-05-12 13:55:55 +02:00
080d98f57c Removed unused imports 2017-05-12 13:55:55 +02:00
ad9a981cda Added p.a.users.add_user_to_group() function 2017-05-12 13:55:55 +02:00
7c5aef033d Some more checks on p.a.project.utils.get_admin_group_id() 2017-05-12 13:55:55 +02:00
d2f548faf9 Proper type annotations for PillarServer.db() 2017-05-12 13:55:55 +02:00
203c6418fd Added pillar.flask_extra.vary_xhr() decorator
This produces a 'Vary: X-Requested-With' header on the response of
decorated view functions, which indicates to the browser (or intermediate
proxy servers) that the response may/will be different for XHR and
non-XHR requests.
2017-05-12 13:55:55 +02:00
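A sketch of such a decorator (a straightforward wrapper that appends to the Vary header):

    import functools
    from flask import make_response

    def vary_xhr():
        """View decorator: add 'Vary: X-Requested-With' to the response."""
        def decorator(view_func):
            @functools.wraps(view_func)
            def wrapper(*args, **kwargs):
                response = make_response(view_func(*args, **kwargs))
                response.headers.add('Vary', 'X-Requested-With')
                return response
            return wrapper
        return decorator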
736686390f Move activities styling to Pillar
Since activities is a core part of Pillar
2017-05-10 15:58:56 +02:00
c66a6e67c8 Added p.a.project.utils.user_rights_in_project()
This returns the allowed HTTP method for the current user in the given
project. This is used for access control on Flamenco, for example.
2017-05-10 12:09:48 +02:00
a139e8c41a Added p.a.projects.utils.get_admin_group_id() 2017-05-10 12:09:09 +02:00
ee7af393a0 Use annotations to declare types (instead of docstring) 2017-05-10 12:08:45 +02:00
a6617cae68 Allow current_app.db('collections-name')
This mimics the use in Flamenco (current_flamenco.db('collection_name')),
and makes calling code a bit nicer (db('coll') instead of db()['coll'])
2017-05-10 12:08:11 +02:00
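A sketch of the convenience overload (assuming Eve exposes the PyMongo database as self.data.driver.db):

    def db(self, collection_name: str = None):
        """Return the MongoDB database object, or one of its collections."""
        if collection_name is None:
            return self.data.driver.db
        return self.data.driver.db[collection_name]

With this, current_app.db('nodes').find_one(...) replaces current_app.db()['nodes'].find_one(...).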
319f815985 Some more logging in pillar.api.blender_cloud.subscription.update_subscription 2017-05-10 12:04:34 +02:00
c77a6b9d21 More logging in pillar.api.service.do_badger() 2017-05-10 11:15:29 +02:00
c854ccbb4b Generic PATCH handler class.
A class-based approach is easier to extend than the function-based approach
used in the nodes. That one is still there, though -- might look at it
at a later time. This handler is primarily for Flamenco.
2017-05-09 14:08:35 +02:00
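A sketch of what such a class-based dispatcher can look like (names are illustrative; the idea is to route a PATCH by its 'op' field to a patch_<op> method that subclasses add):

    import flask
    import werkzeug.exceptions as wz_exceptions

    class AbstractPatchHandler:
        """Dispatch a PATCH request to a patch_<op> method by its 'op' field."""

        def patch(self, object_id: str):
            patch = flask.request.get_json()
            op = patch.get('op', '')
            handler = getattr(self, 'patch_%s' % op.replace('-', '_'), None)
            if handler is None:
                raise wz_exceptions.BadRequest('Unknown PATCH op %r' % op)
            return handler(object_id, patch)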
fdaf4af31a Modernised some unit tests 2017-05-05 14:40:37 +02:00
69d7c5c5ce Allow service accounts to be email-less
This removes the ability to update service accounts through the CLI
(something we never used anyway), now that service accounts cannot be
uniquely identified by their email address.
2017-05-05 14:34:18 +02:00
095f1cda0c Added "Switch user" functionality.
The user isn't logged out until the new user logs in. This allows you to
click on "Log in as different user", hit the back button, and still be
logged in.
2017-05-05 12:56:19 +02:00
c3eb97e24c Log redirect URL for users after logging in with Blender ID. 2017-05-05 12:55:29 +02:00
b1b91a7b29 Timeout (10s) on store API calls + better exception handling
We now log connection errors, timeouts, and other Requests errors, and
return None so that the login flow of the user can continue.
2017-05-05 12:55:05 +02:00
870800e8d2 Stop flashing 'Please log in to access this page.'
This message was "flashed" (http://flask.pocoo.org/docs/0.12/patterns/flashing/)
by Flask-Login. This happens on every unauthorised request, so also on
AJAX requests (like for the notifications). As a result, a user could be
spammed by a screen full of these messages if they left their window open
and their session timed out.
2017-05-05 10:40:08 +02:00
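Flask-Login only flashes when its login_message is set, so the fix can be as small as this sketch (assuming a LoginManager is configured on the app):

    import flask_login

    login_manager = flask_login.LoginManager()
    # With a None/empty message, Flask-Login skips flask.flash() entirely
    # on unauthorised requests, AJAX included.
    login_manager.login_message = None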
379d40837b Fixed issues logging in.
The API call to /api/bcloud/update-subscription is now performed via the
SDK, to ensure proper authentication. Also streamlined some other code.
2017-05-05 10:29:16 +02:00
10a40ddabd Make Blender ID URL work with live URL too 2017-05-04 18:29:11 +02:00
118de12712 Always return a HTTP response 2017-05-04 18:24:08 +02:00
cfa31ab542 JS mistake 2017-05-04 18:23:55 +02:00
47ba5e18a3 Give users a "Re-check my subscription" button. 2017-05-04 18:15:35 +02:00
1a54b723aa Reworked subscription/demo role management from web to API level.
In the old situation, users had to be able to change their own roles. This
is inherently insecure.
2017-05-04 17:49:18 +02:00
d0557445cd Fix privilege escalation leak
A PUT request on /api/user/{user-id} by the user themselves would allow
too much: self-granting of roles (including admin), joining arbitrary
groups (and thus any project), and pretending to be a service account.
2017-05-04 12:48:30 +02:00
1ad3e7910c Upgrade algoliasearch 2017-04-11 12:08:57 +02:00
49895805e3 Display project description instead of summary 2017-04-07 09:02:08 +02:00
bd3f8d597a Allow upload of videos > 1080p
Videos that are larger than 1920x1080 pixels are scaled down so that they
fit that size. Care is taken to keep the width a multiple of 16 pixels and
the height a multiple of 8.
2017-03-31 14:52:58 +02:00
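A sketch of that dimension computation (hypothetical helper; rounds down to the required multiples):

    def scaled_dimensions(width: int, height: int,
                          max_width: int = 1920, max_height: int = 1080) -> tuple:
        """Fit within 1920x1080, width a multiple of 16, height a multiple of 8."""
        factor = min(max_width / width, max_height / height, 1.0)
        new_width = int(width * factor) // 16 * 16
        new_height = int(height * factor) // 8 * 8
        return new_width, new_height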
c711a04e6c Added some type annotations (no functional differences) 2017-03-31 13:14:07 +02:00
1cb7a92e40 Removed old mock-EncoderJob and replaced it with a dict
The real value is a dict too, anyway.
2017-03-31 13:12:08 +02:00
d8640df115 Made markdown jinja filter None-safe 2017-03-30 09:37:48 +02:00
4c704c8cda Pipe description & content of featured nodes through markdown 2017-03-30 09:23:59 +02:00
6f9feea8a9 Locally hosting jstree, instead of linking to cloudflare.
Should speed up the site, and remove a possible point of failure.

This also upgrades JSTree to the latest version (3.3.3).
2017-03-29 16:57:26 +02:00
dde5526022 Updated HDRi specifics in the "join" page 2017-03-29 16:46:49 +02:00
34a6fb064c Removed illegal 'home_project' tag 2017-03-29 16:43:08 +02:00
cecc9bc7fb Added "Copy yaw" button to HDRIs
The button is only shown to people with the right to edit the current
node. I've also simplified some CSS, with the help of @venomgfx.
2017-03-28 18:11:08 +02:00
9ccf4474bc Fix for missing tag in Markdown validator 2017-03-28 17:54:10 +02:00
3622fad9c2 Merge remote-tracking branch 'origin/master' 2017-03-28 16:43:22 +02:00
c846ee9823 Add support for video tag 2017-03-28 16:43:12 +02:00
fd541d2243 Changed interpretation of '' in form handling of integer properties. 2017-03-28 16:04:40 +02:00
fcaa4fb936 Upgraded VRViewer:
- upgraded vrviewer to latest master (ffbc9ff4bf0c550cc79003ec188ca16e9e83c31e)
- added some notes on how to upgrade to the readme
- added support for setting default yaw angle
- added support for float properties
2017-03-28 16:04:26 +02:00
ddfb69edb2 group_hdri: remove double title 2017-03-28 14:08:15 +02:00
fac56943ee group_hdri: remove unused preview
This div contained the same image as otherwise shown, except that it was
never shown.
2017-03-28 14:08:15 +02:00
8330488c0a group_hdri: show unpublished children for current user.
Previously you weren't even allowed to see your own "pending" items.
2017-03-28 14:08:15 +02:00
6001f1936c Tweak to utm_source handling 2017-03-24 17:14:29 +01:00
46c019a758 Add support for utm_source 2017-03-24 11:47:12 +01:00
b2ed441bf7 Fix width of node add/edit 2017-03-23 18:53:39 +01:00
373be03c80 Groups: browse type list tweaks to match browse as icons 2017-03-23 18:53:24 +01:00
ce5e27814a Blog: Fix top bar not full width 2017-03-23 18:52:24 +01:00
24468159e7 Gulp: Only cache templates/scripts if not gulping for production 2017-03-23 12:44:33 +01:00
7153c8adde Added mypy to requirements-dev.txt 2017-03-23 12:07:03 +01:00
41414319a5 Merge branch 'master' into wip-storage-backend 2017-03-23 12:06:36 +01:00
a3513aa45c Update background for Agent 327 on join pages 2017-03-22 21:56:43 +01:00
1ed31d26e5 Be more explicit in logging which video encoding service is used 2017-03-22 17:08:15 +01:00
b36dc63335 Added simple mocking test for GCS 2017-03-22 16:43:17 +01:00
c02c6d89b0 Removed unused import 2017-03-22 16:43:04 +01:00
563bb2c244 Added unittest for Bucket.copy_to_bucket() 2017-03-22 16:14:06 +01:00
6b526f6b53 Fixed bug in local file storage URL generation. 2017-03-22 16:05:38 +01:00
cce388049d Big refactoring of file storage handling
- Moved pillar.api.utils.{gcs,storage} to pillar.api.file_storage_backends
- Implemented GCS and local storage using abstract Bucket and Blob classes
- Removed file processing from the Blob class, and kept it in the
  file_storage/__init__.py class. That way storage and processing are
  kept separate.
2017-03-22 15:49:56 +01:00
fdfdd38c38 Removed route for direct GCS storage browsing 2017-03-22 15:49:52 +01:00
56b631d4a2 Added type annotations 2017-03-22 15:49:52 +01:00
6eadc09c10 Use __init_subclass__ to register storage backends
See https://docs.python.org/3.6/whatsnew/3.6.html#pep-487-simpler-customization-of-class-creation
2017-03-22 15:49:52 +01:00
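PEP 487 lets the abstract base register every concrete subclass at class-definition time, with no decorators or metaclasses; a minimal sketch:

    class Bucket:
        """Abstract storage bucket; concrete backends register themselves."""
        backends = {}      # maps backend name -> Bucket subclass
        backend_name = ''  # set by each concrete subclass

        def __init_subclass__(cls, **kwargs):
            super().__init_subclass__(**kwargs)
            if cls.backend_name:
                Bucket.backends[cls.backend_name] = cls

    class LocalBucket(Bucket):
        backend_name = 'local'

    assert Bucket.backends['local'] is LocalBucket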
1f3d699a0c Ran 2to3 on pillar/api/utils/storage.py 2017-03-22 15:49:52 +01:00
0eb2f17624 Raise exception on not-implemented methods, instead of just pass'ing 2017-03-22 15:49:52 +01:00
47eba6786a Renamed 'file_in_storage' to 'blob' 2017-03-22 15:49:52 +01:00
ec1db0a725 Groups view: Background gradient gray doesn't look good 2017-03-22 15:49:51 +01:00
04a235ca12 Larger image for collections header 2017-03-22 15:49:51 +01:00
5b59d6b6b1 Slightly bigger thumbnail size for posts in homepage 2017-03-22 15:49:51 +01:00
0e6bdf9467 Update CSS caches 2017-03-22 15:49:51 +01:00
6d1f81b26a Minor tweaks to homepage listing
Thumbnails are now slightly larger (22px wider), and did some rearrangement
2017-03-22 15:49:51 +01:00
a000176db9 Tweaks to group listing
Non-square thumbnails, always display icon type
2017-03-22 15:49:51 +01:00
9f49140230 Video Player: Vertical volume slider and loop by default 2017-03-22 15:49:51 +01:00
8934eb8b8d Fix for crash on extension blueprints loading
If the extension was registered with url_prefix=None, we set url_prefix to an empty string so it can be added to blueprint.url_prefix.
2017-03-22 15:49:51 +01:00
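The fix boils down to treating a None prefix as an empty string before concatenating; a sketch of the registration logic (names illustrative):

    url_prefix = (extension_url_prefix or '') + (blueprint.url_prefix or '')
    app.register_blueprint(blueprint, url_prefix=url_prefix)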
26f8a3fec7 Synced dev requirements with Pillar Python SDK 2017-03-22 15:49:51 +01:00
49500570d4 Added missing redis requirement 2017-03-22 15:49:51 +01:00
b2b3e8acf1 Fixed version conflict with pillarsdk dev requirements 2017-03-22 15:49:51 +01:00
67bce48de8 Auto-install -e pillar
It uses ../pillar instead of . so that it is a valid path from blender-cloud as well.
2017-03-22 15:49:51 +01:00
3d1c8625a3 Remove redundant requirement
It is already defined in requirements.txt in pillar-python-sdk.
2017-03-22 15:49:51 +01:00
41600f208e Remove Linux venv specific dependencies
Was giving install error on macOS.
2017-03-22 15:49:51 +01:00
ef6e76f153 Upgraded development requirements to speed up code coverage recording
Removed requests from requirements.txt file, because it's already a req
of pillar-python-sdk.
2017-03-22 15:49:51 +01:00
ae5009c9ef Python 3.6: Fixed issue with gravatar function
Hashing of a string object doesn't work. Also added a deprecation warning
that pillar.api.utils.gravatar should be used; pillar.web.utils.gravatar
is just a copy.
2017-03-22 15:49:51 +01:00
dcdcd99393 Python 3.6 compatibility: random bits & bcrypt
Switched from Sybren's RSA library to the new stdlib module 'secrets' to
generate secret tokens. This also means that the rsa library was demoted
to secondary requirement.
2017-03-22 15:49:51 +01:00
2e41c074b5 Python 3.6 compatibility: bytes vs strings stuff
These changes mostly revolve around the change in ObjectId constructor
when running on Python 3.6. Where on 2.7 the constructor would accept
12- and 24-byte strings, now only 12-byte bytes and 24-character strings
are accepted. Good thing, but required some changes in our code.

Other changes include hashing of strings, which isn't supported, so they
are converted to bytes first, and sometimes converted back afterwards.
2017-03-22 15:49:51 +01:00
c2206e6b27 Python 3.6 compatibility: Prevent comparison with None 2017-03-22 15:49:51 +01:00
c58d616bfc Don't run failed unittests first.
This can cause false positives when the failure was caused by inter-test
interference.
2017-03-22 15:49:51 +01:00
fb25e3e03f Fixup of syntax error introduced by 2to3 2017-03-22 15:49:51 +01:00
16b2b119fd Referring to Pillar Python SDK requirements
This forces us to remove common requirements from Pillar's requirements.txt
file (which is a good thing).
2017-03-22 15:49:51 +01:00
4e138d38a1 Removed a no-longer-needed Python 3 compatibility thingy 2017-03-22 15:49:51 +01:00
bced6cae68 Ran 2to3 on unittests, same sort of manual fixups as before 2017-03-22 15:49:51 +01:00
e0c525389f Renamed static.py to staticfile.py
Python 3 supports 'namespace packages', and thus can see a directory
without __init__.py as something importable. This caused a name conflict,
since there were both the file static.py and the dir static.
2017-03-22 15:49:51 +01:00
663627358f Ran 2to3 on pillar + some manual fixups
The 'manual fixups' are:

- incorrect use of dict.items() where dict.iteritems() was meant; this
  results in list(dict.items()), which I changed to dict.items().
- removal of 'from __future__ import' lines, which 2to3 changes into
  empty lines; I removed the empty lines.
2017-03-22 15:49:51 +01:00
10b3318419 Added development requirements to requirements-dev.txt
In this commit (and the previous ones on requirements files) I haven't
changed the package versions. Upgrading our dependencies is for another
time.
2017-03-22 15:49:51 +01:00
c2c19bd6f3 Removed development requirements and unused secondary requirements. 2017-03-22 15:49:51 +01:00
1266d4b5d9 Made requirements.txt py36-compatible
Some packages had to be removed; they are deployment-specific anyway,
and may not even be needed any more.

I've also added some secondary requirements that weren't specified yet.
The next steps will be to split into runtime and development requirements.
2017-03-22 15:49:51 +01:00
e6fb64621f Introducing GOOGLE_SITE_VERIFICATION
Used for cross-verification on various Google sites (e.g. YouTube). By default it is not rendered in the pages.
2017-03-21 15:45:36 +01:00
fde50f6525 Groups view: Background gradient gray doesn't look good 2017-03-20 18:07:13 +01:00
f19be0ae17 Larger image for collections header 2017-03-20 16:00:37 +01:00
b70bc07a75 Slightly bigger thumbnail size for posts in homepage 2017-03-20 15:54:36 +01:00
508a28aeae Update CSS caches 2017-03-17 18:05:12 +01:00
ba5923044a Minor tweaks to homepage listing
Thumbnails are now slightly larger (22px wider), and did some rearrangement
2017-03-17 18:03:06 +01:00
c52bfd2236 Tweaks to group listing
Non-square thumbnails, always display icon type
2017-03-17 16:53:05 +01:00
bfb5f4f44e Video Player: Vertical volume slider and loop by default 2017-03-13 17:11:27 +01:00
1eb1cd7b64 Fix for crash on extension blueprints loading
If the extension was registered with url_prefix=None, we set url_prefix to an empty string so it can be added to blueprint.url_prefix.
2017-03-12 18:46:08 +01:00
9abdd1ee90 Synced dev requirements with Pillar Python SDK 2017-03-09 10:18:19 +01:00
db98c681a2 Added missing redis requirement 2017-03-08 17:16:44 +01:00
a7cd515fdb Fixed version conflict with pillarsdk dev requirements 2017-03-08 17:06:55 +01:00
7967b80ab3 Auto-install -e pillar
It uses ../pillar instead of . so that it is a valid path from blender-cloud as well.
2017-03-07 14:25:56 +01:00
4e36ea5aae Remove redundant requirement
It is already defined in requirements.txt in pillar-python-sdk.
2017-03-07 12:50:03 +01:00
47ca614ea3 Remove Linux venv specific dependencies
Was giving install error on macOS.
2017-03-07 12:49:17 +01:00
662f1276d2 Upgraded development requirements to speed up code coverage recording
Removed requests from requirements.txt file, because it's already a req
of pillar-python-sdk.
2017-03-03 14:53:44 +01:00
a0a8257df0 Python 3.6: Fixed issue with gravatar function
Hashing of a string object doesn't work. Also added a deprecation warning
that pillar.api.utils.gravatar should be used; pillar.web.utils.gravatar
is just a copy.
2017-03-03 14:42:13 +01:00
3fe9472d27 Python 3.6 compatibility: random bits & bcrypt
Switched from Sybren's RSA library to the new stdlib module 'secrets' to
generate secret tokens. This also means that the rsa library was demoted
to secondary requirement.
2017-03-03 14:16:29 +01:00
a9e40ccf10 Python 3.6 compatibility: bytes vs strings stuff
These changes mostly revolve around the change in ObjectId constructor
when running on Python 3.6. Where on 2.7 the constructor would accept
12- and 24-byte strings, now only 12-byte bytes and 24-character strings
are accepted. Good thing, but required some changes in our code.

Other changes include hashing of strings, which isn't supported, so they
are converted to bytes first, and sometimes converted back afterwards.
2017-03-03 14:14:36 +01:00
6fb58a3f26 Python 3.6 compatibility: Prevent comparison with None 2017-03-03 14:10:47 +01:00
86b13557fb Don't run failed unittests first.
This can cause false positives when the failure was caused by inter-test
interference.
2017-03-03 12:33:48 +01:00
60c608d095 Fixup of syntax error introduced by 2to3 2017-03-03 12:33:23 +01:00
72b002491a Referring to Pillar Python SDK requirements
This forces us to remove common requirements from Pillar's requirements.txt
file (which is a good thing).
2017-03-03 12:33:01 +01:00
82157af84b Removed a no-longer-needed Python 3 compatibility thingy 2017-03-03 12:32:21 +01:00
b454b011b0 Ran 2to3 on unittests, same sort of manual fixups as before 2017-03-03 12:07:18 +01:00
b65dd49aa6 Renamed static.py to staticfile.py
Python 3 supports 'namespace packages', and thus can see a directory
without __init__.py as something importable. This caused a name conflict,
since there were both the file static.py and the dir static.
2017-03-03 12:01:38 +01:00
7c055b5f56 Ran 2to3 on pillar + some manual fixups
The 'manual fixups' are:

- incorrect use of dict.items() where dict.iteritems() was meant; this
  results in list(dict.items()), which I changed to dict.items().
- removal of 'from __future__ import' lines, which 2to3 changes into
  empty lines; I removed the empty lines.
2017-03-03 12:00:30 +01:00
2d6bdd350f Added development requirements to requirements-dev.txt
In this commit (and the previous ones on requirements files) I haven't
changed the package versions. Upgrading our dependencies is for another
time.
2017-03-03 11:42:23 +01:00
eadb91abc9 Removed development requirements and unused secondary requirements. 2017-03-03 11:37:55 +01:00
3e7152bb93 Made requirements.txt py36-compatible
Some packages had to be removed; they are deployment-specific anyway,
and may not even be needed any more.

I've also added some secondary requirements that weren't specified yet.
The next steps will be to split into runtime and development requirements.
2017-03-03 11:30:24 +01:00
647ae0f3d6 Fixed create_from_file(filename) bug (should be file obj, not name) 2017-03-01 08:56:26 +01:00
e5b4ce0890 GoogleCloudStorageBucket.gcs_bucket → _gcs_bucket
Added a few FIXME comments where _gcs_bucket is used outside of the class.
2017-03-01 08:56:26 +01:00
27df603299 Started moving processing function in subclasses 2017-03-01 08:56:26 +01:00
4d6bf65a99 Attempt at proper naming
Using Bucket and Blob as base classes.
2017-03-01 08:56:26 +01:00
c06533db5b Breaking stuff by introducing decorator & abstract base class stuff. 2017-03-01 08:56:26 +01:00
b3b9c68486 Fixed uploading of images.
Thumbnailing is still broken, though.
2017-03-01 08:56:26 +01:00
aecab0561e WIP introducing STORAGE_BACKEND
We introduce two new classes StorageBackend and FileInStorage, which
are subclassed by GCS and local Pillar. This makes supporting multiple
storage solutions easier.
2017-03-01 08:56:26 +01:00
4570b4637b Move attachment parsing on the node level 2017-02-27 16:23:21 +01:00
e381ca774e On Page load use replaceState instead of pushState
Fix T50797 and replace the id-based URL with a custom URL for the page in the browser's history.
2017-02-27 13:08:56 +01:00
6765276519 Introducing attachments fixes for blog posts and assets.
Requires migration of attachments schema using
python manage.py maintenance upgrade_attachment_schema --all
2017-02-21 18:08:42 +01:00
eca4ade9d8 Linking to Blender Cloud add-on (and no longer to bundle)
Added a note that states the add-on requires Blender 2.78+. Even though
this isn't strictly true (it also supports 2.77a if you manually install
the Blender ID add-on), it simplifies things greatly.

Fixes T49721
2017-02-21 11:14:46 +01:00
2e00e81b30 Raise z-index of col_right by 1 2017-02-14 16:03:37 +01:00
0a86ad357f Analytics for videojs 2017-02-08 16:27:52 +01:00
02f736dcc4 Hide missing summaries on projects homepage 2017-02-08 15:27:20 +01:00
d8eae2c44b Fix OG crash on projects without picture_header 2017-02-08 15:26:56 +01:00
c98cd82b3f OpenGraph: Check if we have a description/post content 2017-02-08 14:48:55 +01:00
69b3e06b1c Use project picture as fallback if og_picture/node is undefined 2017-02-07 18:03:35 +01:00
7b9fef2fc8 Update caches 2017-02-06 14:44:05 +01:00
528887b1a6 Unify Twitter cards and Open Graph data 2017-02-06 14:37:53 +01:00
10df0af355 Fix search list not scrolling 2017-02-06 14:35:51 +01:00
ae38bec218 Fix project header videos 2017-02-06 12:07:05 +01:00
3ef0bf6761 Typo 2017-02-02 18:08:21 +01:00
1e56ca5227 Only load videojs when there are sources, and minor style tweaks 2017-02-02 18:05:30 +01:00
b8ad0cd18f Update cache version 2017-02-02 17:40:32 +01:00
e049ab0a08 Fire videojs via js 2017-02-02 17:40:04 +01:00
089b0f1535 Own copy of videojs 5.8.8 2017-02-02 16:57:31 +01:00
bf0ebce81a Videojs for project video headers 2017-02-02 16:57:18 +01:00
eb02fa5eec Replace Flowplayer with the open source Video.js library 2017-02-02 16:06:41 +01:00
bc6f526b72 Don't use ?format=amp after url_for()
url_for() is smart enough to add variables to the query string if there is
no route parameter for them.
2017-01-24 16:35:02 +01:00
0e07cb2b1d Link to AMP view if we're in a node 2017-01-24 16:01:05 +01:00
2b528f0fff Added pillar.api.utils.bsonify(some_dict)
It was used in an experiment in Flamenco as an alternative to JSON; it
might still be used in the future if BSON turns out to be significantly
faster to generate.
2017-01-24 09:19:24 +01:00
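A sketch of such a helper, mirroring Flask's jsonify() (assuming pymongo 3.x, where bson.BSON.encode() is available):

    import bson
    from flask import Response

    def bsonify(mongo_doc: dict, status: int = 200) -> Response:
        """Like Flask's jsonify(), but producing BSON instead of JSON."""
        return Response(bson.BSON.encode(mongo_doc),
                        status=status,
                        mimetype='application/bson')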
9b90070191 AMP: break too long words 2017-01-23 16:15:53 +01:00
68fcae64ae AMP: Use srcset to load different headers depending on screen size 2017-01-23 15:56:41 +01:00
e3fc5d1b9b Initial support for AMP (Accelerated Mobile Pages)
https://www.ampproject.org/

Basic implementation. Still needs the node description to be parsed,
as <img> tags need to become <amp-img> tags with special attributes.
2017-01-23 15:47:14 +01:00
85988bb8c9 Fix for some project names breaking javascript 2017-01-20 17:35:08 +01:00
85dba5e9e9 Blog: Re-order hideOverlay to be re-used 2017-01-20 13:13:11 +01:00
350577033c Blog: Expand images when clicking on them (and the link is an image)
Duplicated in both the index and post view to get it out for today's Cycles post; wrote a note to fix this.
2017-01-20 12:38:50 +01:00
eb5fb4eb09 Fix undefined projectTree 2017-01-20 12:10:23 +01:00
181cbc07d6 Blog: Center images on posts 2017-01-20 12:05:28 +01:00
784c1ed0bb CSS: top border for active status on table rows 2017-01-19 16:57:41 +01:00
604d6c1a07 Added pillar.web.utils.last_page_index()
This returns the last page number (base-1) of a paged Eve result.
2017-01-19 15:13:01 +01:00
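A sketch, assuming Eve's pagination metadata with 'total' and 'max_results' keys:

    def last_page_index(meta: dict) -> int:
        """Return the last page number (base-1) of a paged Eve result."""
        total = meta['total']
        per_page = meta['max_results']
        if total == 0:
            return 1
        # Ceiling division without floats.
        return (total + per_page - 1) // per_page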
129ec94608 Renamed flamenco.jobs to flamenco_jobs 2016-12-14 14:48:37 +01:00
01cc52bba9 Allow user updates in create_service_account() calls. 2016-12-14 14:41:06 +01:00
8115bc2ad5 Collections are now named flamenco_xxx instead of flamenco.xxx
The dot notation disallowed Eve hooks, as the collection names weren't
valid Python identifiers.
2016-12-14 14:40:38 +01:00
a100d73a8b Collections in extension eve_settings now should start with the ext name.
Instead of Pillar automagically prepending 'attract.' or 'flamenco.' to the
names this should now be done explicitly in the extension's Eve settings.
This allows for more explicit configuration, and ensures foreign key
definitions are unambiguous.
2016-12-14 11:26:28 +01:00
11197e669c Remove /about endpoint 2016-12-02 18:02:29 +01:00
7a6e1d3386 refresh css 2016-12-02 17:54:12 +01:00
6bb491aadc Support for page urls
Now we can access pages with the following url
/p/<project_url>/<page-url>. Internally we use the existing view_node,
but if we detect that the node_id is not an ObjectId we treat it as a
page URL and resolve the node and project via render_node_page().
2016-12-02 16:57:51 +01:00
bc456f9387 Fix typo 2016-12-02 16:25:47 +01:00
1beb3ca488 Better join page for the agent project 2016-12-02 16:18:17 +01:00
0190cf944a Show free assets 2016-12-02 15:39:44 +01:00
5f590a2063 Search point to Join page for not subscribers 2016-12-02 14:46:22 +01:00
c284156723 Project thumbnail link to project root, not about 2016-12-02 12:43:15 +01:00
7219c5ca72 Disable Learn More on projects for now 2016-12-02 12:42:58 +01:00
86b5c1b242 Fix scrolling on sidebar for posts 2016-12-01 16:42:17 +01:00
ffdffdeb96 Bigger thumbnail for posts 2016-12-01 16:39:20 +01:00
455bfdfc49 Update CSS 2016-12-01 16:31:03 +01:00
2ad3c8a7ed Show Browse Project on top of the list 2016-12-01 16:30:27 +01:00
08f3467406 Fix width on containers 2016-12-01 16:30:17 +01:00
2bae7c2fef Thumbnail on list of blogs on sidebar 2016-12-01 16:21:02 +01:00
b6b517688e Display blog list and posts within the project
TODO: Edit within the project as well
2016-12-01 15:57:59 +01:00
f2942a20fe Refactor manage commands using subcommands
This way we clean up the output of manage.py and sort the commands in
three main categories:
- setup: Setup utilities, like setup_db() or create_blog()
- maintenance:  Maintenance scripts, to update user groups
- operations: Backend operations, like moving nodes across projects
2016-12-01 00:33:24 +01:00
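Assuming Flask-Script (which manage.py used at the time), the grouping can be expressed with sub-managers; a sketch:

    from flask_script import Manager

    manager = Manager(app)
    manager_setup = Manager(app, usage='Setup utilities, like setup_db() or create_blog()')
    manager_maintenance = Manager(app, usage='Maintenance scripts, to update user groups')
    manager_operations = Manager(app, usage='Backend operations, like moving nodes across projects')

    manager.add_command('setup', manager_setup)
    manager.add_command('maintenance', manager_maintenance)
    manager.add_command('operations', manager_operations)

    @manager_setup.command
    def setup_db():
        """Set up the database for first use."""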
d9b56f485b Extend CHECK_PERMISSIONS_IMPLEMENTED_FOR
We support flamenco.jobs. This is a temporary workaround until we
implement check permissions in a way that can be extended by extensions.
2016-11-30 23:50:21 +01:00
f06b3c94eb join_agent page for the agent project 2016-11-30 23:32:46 +01:00
742a16fb9f Better 403 error message 2016-11-30 22:11:27 +01:00
e72f02711d Temporary tweak to join mechanism
TODO: move this to the external app (blender-cloud).
2016-11-30 15:57:11 +01:00
48ebdf11b3 Update project-main 2016-11-29 18:49:49 +01:00
e43f99593a Vertical spacing on hdri thumbnails 2016-11-29 18:43:32 +01:00
476e7be826 Update CSS 2016-11-29 18:22:43 +01:00
8654503f5a Show free ribbon on project view 2016-11-29 18:17:35 +01:00
98295305fd Only show lock icon when we don't have a valid role 2016-11-29 18:00:54 +01:00
e43b0cbccf Responsive layout for HDRI listing 2016-11-29 16:58:11 +01:00
462ef953bc Update CSS 2016-11-29 16:12:53 +01:00
29629f6647 Update CSS 2016-11-29 16:06:04 +01:00
e3fc265408 Bigger thumbnail for HDRIs 2016-11-29 16:02:56 +01:00
a67774c6e8 textures and hdris can also have the public icon 2016-11-29 16:01:51 +01:00
dea6dd5242 Show Public status on textures 2016-11-29 15:58:21 +01:00
a79ca80f28 Limit free icon on jstree for asset/texture items 2016-11-29 15:51:18 +01:00
7fb94a86e8 Display a nice icon on jstree if item is free 2016-11-29 15:35:12 +01:00
9783711818 Add New File button: avoid selection of text and highlight when active 2016-11-29 14:50:59 +01:00
bf5b457141 Node description for HDRI/Textures folders 2016-11-29 14:44:41 +01:00
3fbee33369 Open jstree folders on load, and set parent as selected as well
So when we open a node inside a folder, it highlights both itself and its parent folder
2016-11-29 14:39:47 +01:00
2c71168677 In some cases HDR files can be read as None 2016-11-29 13:03:57 +01:00
51d7eed164 Fix alignment of text on status-bar 2016-11-29 13:03:42 +01:00
64ce091f11 Fix sidebar height missing navbar height into account 2016-11-29 12:25:46 +01:00
4a5d553bc8 No blog on activity stream 2016-11-25 13:32:17 +01:00
f75c43055f Blog on frontpage 2016-11-25 13:32:05 +01:00
f2d9df8b61 Add note about status parsing during the node tree creation 2016-11-25 12:56:41 +01:00
c73ad07e83 Remove whitespaces 2016-11-25 12:45:29 +01:00
a93d9be632 Remove whitespace 2016-11-25 12:43:59 +01:00
89689db96e Move tooltips/popovers code to layout 2016-11-24 19:43:11 +01:00
01e79f8565 Show icons on project homepage list 2016-11-24 19:42:12 +01:00
5866cc54aa Style group_texture 2016-11-24 19:16:34 +01:00
e8b03de444 Update css 2016-11-24 19:04:17 +01:00
1e1d9e57e7 Show description/content of posts/assets 2016-11-24 19:03:43 +01:00
5617f89c99 Style posts and assets on project homepage 2016-11-24 18:47:15 +01:00
b30aba2463 Fix clicking on posts 2016-11-24 18:46:41 +01:00
c8ae748bd6 Move colors for node types to config 2016-11-24 18:46:26 +01:00
3e6a9909da Update CSS 2016-11-24 18:21:18 +01:00
d35f2aa8c9 style tweaks to homepage activity stream 2016-11-24 18:17:40 +01:00
32ac0a64fb navbar is now opaque 2016-11-24 18:17:23 +01:00
3125ff75ca Style tweaks to sidebar 2016-11-24 18:17:13 +01:00
62b518c81e Show updated time on page template 2016-11-24 18:16:25 +01:00
8865ae02e4 Merge nodes_blog and nodes_featured 2016-11-24 18:16:15 +01:00
44c4182a86 Remove blog from sidebar and use folder icon 2016-11-24 18:15:45 +01:00
f59086c025 Style blog and page items on the tree 2016-11-24 18:15:20 +01:00
081a7f96ca No transparent navbar anymore 2016-11-24 18:14:46 +01:00
b1a0e1e3b6 Show blog on the tree 2016-11-24 18:14:25 +01:00
6910d3da49 We always include the picture now 2016-11-24 18:14:07 +01:00
b9c3d6b0fb Merge featured assets and blog posts into one activity stream 2016-11-24 18:13:46 +01:00
f99869f57e 10 featured/latest items 2016-11-24 18:12:38 +01:00
85bfbdb5e3 Display 10 comments on frontpage 2016-11-24 18:12:16 +01:00
ee20926233 List style for homepage activities 2016-11-24 16:31:36 +01:00
f732f1e08b Expand > Toggle 2016-11-24 16:31:36 +01:00
f899fb48ce Lighter background for navtree 2016-11-24 16:31:36 +01:00
4f071260f7 Fix tooltips not visible 2016-11-24 16:31:36 +01:00
6ed772278c Tooltips on the right and better text for them 2016-11-24 16:31:36 +01:00
Dalai Felinto
b04ed3f5b6 Fix problem pip install failing
Repeated elements here make it fail (at least in WSL, the Windows Subsystem for Linux)
2016-11-21 23:03:52 +01:00
738c3e82d7 Remove box for containers on posts 2016-11-21 12:37:03 +01:00
9e952b0436 Fix on scrollbars 2016-11-21 12:29:09 +01:00
6ef2c5ca0d Refresh CSS cache 2016-11-17 15:24:00 +01:00
c025aa3aac Move table classes up a level so they can have effect without being nested 2016-11-17 14:55:04 +01:00
a41bda6859 Minor tweaks to tree/nav tree 2016-11-16 17:58:52 +01:00
9210285089 Make status-bar one line 2016-11-16 17:57:38 +01:00
f1661f7efb Use native scrollbars 2016-11-16 17:48:35 +01:00
8959fac415 Tooltips/popovers without delay 2016-11-11 20:04:08 +01:00
9b469cee7d Style tweaks to jstree 2016-11-11 20:03:45 +01:00
bbb3f5c7c0 Don't display extra content on /about 2016-11-11 18:16:16 +01:00
3139ba5368 Style tweak to nav header 2016-11-11 18:16:02 +01:00
df810c0c4e Fix icon 2016-11-11 18:04:53 +01:00
29b4ebd09a Link to project blog 2016-11-11 17:55:23 +01:00
76a5d9c9e1 Blog and Latest assets are shown bigger now 2016-11-11 17:48:38 +01:00
fe848525b1 Small refactor of jstree style
Still needs some work but it's a bit cleaner
2016-11-11 17:11:35 +01:00
24ede3f4ee Include node_type on jstree list item 2016-11-11 17:11:35 +01:00
756e3d2d89 Template for pages 2016-11-11 17:11:35 +01:00
684afb8cd5 Style .container.box 2016-11-11 17:11:35 +01:00
52a1602a7c Allow overriding whether the user can comment from URL.
Not really secure (user can still post comments via API and by changing the
URL and re-requesting the embedded comment form), but at least normal users
are blocked from commenting this way.
2016-11-11 16:01:56 +01:00
ce6020702e Don't check for hardcoded caminandes-3 url
We now have the header_video_file feature for it
2016-11-11 15:37:46 +01:00
76f2367e66 Added extra role to UserAdminTest. 2016-11-11 15:23:25 +01:00
5f0092cfa1 Fixed bug in /u/ where home project group membership was lost after edit.
Rather than understanding the code, I rewrote the editing and added a
unit test for it.
2016-11-11 15:06:29 +01:00
4b84e6506b CLI command to check home project group membership 2016-11-11 15:05:43 +01:00
a13937e500 Log error when unable to update home project 2016-11-11 12:44:47 +01:00
b9e27a4cbf Quote activity verb in log 2016-11-11 08:40:49 +01:00
3b694a91af Fix alignment of header 2016-11-10 11:26:26 +01:00
f651ece343 Set color for navigation on sidebar 2016-11-10 11:21:11 +01:00
595a690473 Removed activity 'extra fields', as it wasn't used and half-built. 2016-11-10 09:50:10 +01:00
1702b40812 hover color of active list items 2016-11-10 00:54:06 +01:00
9612e99806 Unify active state of list and table items 2016-11-10 00:52:36 +01:00
c17993418c Fix animated stripes background not aligned on tables 2016-11-10 00:10:29 +01:00
60e43c368d Active statuses for tables and list items 2016-11-09 23:37:17 +01:00
2f3e5a513b Unify inputs with other apps 2016-11-09 23:14:15 +01:00
54fccfc3ad Status colors
From Attract, but will be used also in Flamenco and others in the future
2016-11-09 23:01:32 +01:00
b6b62babd2 Some fixes and utils from Attract 2016-11-09 22:42:53 +01:00
ad3f2c0119 Introducing apps_base.sass, contains basic layout/generic classes 2016-11-09 22:36:55 +01:00
dc70705b1e Don't show "Join the conversation" to demo users
And minor style tweaks
2016-11-09 18:15:34 +01:00
ab375b2126 Moved node_setattr() from Attract to Pillar 2016-11-09 12:50:30 +01:00
fcecc75c3d Update CSS cache version 2016-11-08 18:29:46 +01:00
15be184816 Align edit header to the right 2016-11-08 18:25:40 +01:00
45328b629b Escape html when building jstree 2016-11-08 18:25:23 +01:00
cce45b96e1 Fix special characters on document title 2016-11-08 18:08:30 +01:00
edad85ee34 Display private/public label on projects 'shared with me' 2016-11-08 17:56:56 +01:00
116ed9f08a More padding on error message 2016-11-08 16:03:39 +01:00
7391f40cba Users list: copy to clipboard for IDs
Feature request by Francesco
2016-11-08 15:59:14 +01:00
e54bfa4520 Clipboard.js, brought over from Attract, we'll use it here as well 2016-11-08 15:49:47 +01:00
d272896787 Use a lock icon (instead of download icon) when there's no permission to download 2016-11-08 15:05:44 +01:00
724fe6ceeb 'Join the conversation' wasn't accurate for subscribers without POST permission 2016-11-08 14:19:23 +01:00
865259d40e pretty_date('some string') now tries to parse the string as datetime.
dateutil.parser.parse('some string') is used for this.
2016-11-08 13:38:36 +01:00
65b554986c pretty_date(None) now returns None 2016-11-08 12:56:19 +01:00
fb6e326a14 Also support future dates and times in pretty_date 2016-11-08 12:24:55 +01:00
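Taken together, the pretty_date() changes in these three entries sketch out roughly like this (illustrative only):

    import datetime
    import dateutil.parser

    def pretty_date(when):
        """None-safe, string-tolerant, future-aware relative date."""
        if when is None:
            return None
        if isinstance(when, str):
            # Strings are parsed into datetimes via dateutil.
            when = dateutil.parser.parse(when)
        now = datetime.datetime.now(tz=when.tzinfo)
        diff = now - when
        future = diff.total_seconds() < 0
        days = abs(diff).days
        if days == 0:
            # Same day (or later today): just print the time.
            return when.strftime('%H:%M')
        suffix = 'from now' if future else 'ago'
        return '%i day%s %s' % (days, 's' if days != 1 else '', suffix)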
920a1de263 No need to format known number 2016-11-08 12:24:55 +01:00
0da4e3bafc Public/Private label for list of own projects 2016-11-08 12:00:15 +01:00
89be4efe6f If day is in the future, just print the time (not empty) 2016-11-07 17:10:41 +01:00
ba591da2fc Store js libraries locally 2016-11-07 12:20:23 +01:00
4c6a51c501 Fixed some package version conflicts between Pillar and the SDK. 2016-11-07 10:56:31 +01:00
76174046ad Use our own perfect scrollbar, not cdn 2016-11-04 16:11:04 +01:00
7b79270481 Auto-open dropdown menus only on nav bars 2016-11-04 11:22:22 +01:00
a1dca29382 Quick fix for layout of attachments file upload 2016-11-04 11:05:19 +01:00
c1427cf6a2 avoid horizontal scroll on notifications 2016-11-03 18:27:50 +01:00
a89ada7c2f Ported yesno Django filter to Jinja2 2016-11-03 18:26:11 +01:00
84a86a690e Gracefully handle replies on comments on deleted nodes. 2016-11-03 17:45:25 +01:00
0a0db88701 Style disabled buttons 2016-11-03 15:35:00 +01:00
27bad1be8a Fix markdown on comments 2016-11-03 15:34:50 +01:00
e98b158886 Disabled auto-slug feature.
It broke file uploads. Thanks @venomgfx for joining in solving this.
2016-11-03 14:04:40 +01:00
324d500edb Tweaks to style of file attachments 2016-11-02 19:42:44 +01:00
ef326a2193 Fix width of project header when page is not fully loaded 2016-11-02 19:05:43 +01:00
5ade876784 Labels for fields 2016-11-02 18:55:26 +01:00
738c20b36b Undertitle field labels 2016-11-02 18:51:51 +01:00
3c6642d879 Undertitle labels for checkboxes
Avoids ugly 'is_tileable' label on textures
2016-11-02 18:50:20 +01:00
e43405a349 Fix for empty File field not showing when there are no files
Committing on behalf of Dr. Sybren
2016-11-02 18:43:41 +01:00
f394907dd2 CLI replace_pillar_node_type_schemas: abort when unable to save 2016-11-02 18:20:44 +01:00
e117432f3d CLI replace_pillar_node_type_schemas: allow setting license types on public project nodes. 2016-11-02 18:15:23 +01:00
295c821b9d Simplified code 2016-11-02 17:55:37 +01:00
865f777152 CLI replace_pillar_node_type_schemas: using PILLAR_NAMED_NODE_TYPES 2016-11-02 17:21:50 +01:00
36e7cc56ef Removed colon for easy copy & paste of IDs 2016-11-02 17:21:50 +01:00
aa3340ddbe CLI upgrade_attachment_schema: stop when a node cannot be saved. 2016-11-02 17:21:50 +01:00
4280e0175b CLI upgrade_attachment_schema: only upgrade non-deleted nodes 2016-11-02 17:21:50 +01:00
cc562a9fb1 Fix attachment rendering for nodes without description. 2016-11-02 17:21:50 +01:00
4ec3268a23 Reloading comment list via event 'pillar:comment-posted' on body element. 2016-11-02 17:21:50 +01:00
80601f75ed Remove deprecated +button-rounded-filled mixin
We now use just 'button', as roundness and filled are configurable
2016-11-02 16:36:47 +01:00
9ac2f38042 Warn if there's no slug to append 2016-11-02 16:21:10 +01:00
4bd334e403 Add button to 'Add Attachment to Description' 2016-11-02 16:16:20 +01:00
ae859d3ea7 Minor style tweaks to file form widgets 2016-11-02 16:16:20 +01:00
e69393e95e WIP: endpoint for posting new comments without comment list.
We need to determine what happens when such a comment is successfully
posted, as we can't just reload the comment list. In other words, this is
dependent on where we are embedded, and cannot be handled just locally.
2016-11-02 15:40:26 +01:00
2cc21583d9 On-create activities are only created for Pillar nodes.
This allows Attract to use custom on-create activities.
2016-11-02 15:39:16 +01:00
0ac0f482ac Merge branch 'production' 2016-11-02 14:52:37 +01:00
f30cdd5246 Minor style tweaks to attachments form 2016-11-02 14:51:10 +01:00
48157254c1 Fixed snag. 2016-11-02 14:43:19 +01:00
3fc08bcafd Set the slug based on the file name 2016-11-02 14:07:02 +01:00
ff94cc57a3 Only show image size if it's an image
Otherwise it'd be rendered as 'NonexNone'
2016-11-02 12:51:49 +01:00
cf28e5a3f4 Unified "Add New File" and ".. Attachment" buttons. 2016-11-02 12:29:38 +01:00
6ea7386bd3 "Add new attachment" button works. 2016-11-02 12:28:45 +01:00
90c6fdc377 Handle empty attachments (no slug nor oid) and reject duplicate slugs 2016-11-02 12:28:45 +01:00
2a5b3dc53e Removed unused code. 2016-11-02 12:28:45 +01:00
dabc1a44b8 Set icon for error message 2016-11-02 11:42:49 +01:00
eb1561136b Fix typo in attachments code 2016-11-02 11:42:23 +01:00
d24677992e Datetimes in dynamic properties are now timezone-aware (but hardcoded). 2016-11-02 10:52:44 +01:00
e143b9cb72 Use undertitle filter when displaying node status 2016-11-01 19:36:04 +01:00
6faea83372 Fix rating on comments 2016-11-01 19:28:53 +01:00
d36dcad773 Fix rated status for comments (was missing space between classes) 2016-11-01 19:28:53 +01:00
a385a373b9 Typo in comments 2016-11-01 19:28:53 +01:00
8fa135d52e Add license types and notes to asset node_type 2016-11-01 19:05:14 +01:00
6f460ee127 Fix for non existing attachments 2016-11-01 18:05:26 +01:00
8cc2cfb189 Don't use hardcode url for homepage 2016-11-01 17:29:27 +01:00
c672bc07fe Only load comments on assets or posts
Was trying to load comments on groups, textures, etc.
2016-11-01 17:17:33 +01:00
656944f1ce Allow add_to_project() to take generator for node types 2016-11-01 16:47:55 +01:00
ab9d5c1793 CLI upgrade_attachment_schema: skip already upgraded nodes. 2016-11-01 16:47:55 +01:00
fe4d70c0d1 CLI upgrade_attachment_schema: also remove attachments form_schema
Previously they would have {'attachments': {'visible': False}}, but this
is no longer needed.
2016-11-01 16:47:55 +01:00
964e807721 Give admin explicit permissions, instead of blindly granting everything.
This ensures that the allowed_methods properties are properly set. Admin
users get the union of all permissions given to all groups and users.
2016-11-01 16:47:55 +01:00
3cf71a365f Forms for attachments work, VERY HACKISH Hardcodedness™ 2016-11-01 16:47:55 +01:00
5bd2c101fe Restore DB from 'cloud' subdir 2016-11-01 16:47:55 +01:00
aef7754537 Attachment rendering for posts & node descriptions. 2016-11-01 16:47:55 +01:00
d50d206e77 Gracefully handle non-existing files when renaming asset nodes. 2016-11-01 16:47:55 +01:00
28223159e7 Allow admin users to do everything.
This makes things more consistent (previously admins could create projects,
but not nodes in those projects).
2016-11-01 16:47:55 +01:00
a38e053c1a Added CLI command to create blogs. 2016-11-01 16:47:55 +01:00
62ac12deff Some more simplification 2016-11-01 16:47:55 +01:00
64ece74404 Cleaned up some blog post viewing code 2016-11-01 16:47:55 +01:00
bffbbad323 Support Cerberus valueschema in ValidateCustomFields 2016-11-01 16:47:55 +01:00
8fb64c38d6 Removed API-side attachment parsing. 2016-11-01 16:47:55 +01:00
f72890cc59 Define standard set of node types 2016-11-01 16:47:55 +01:00
0929a80f2b New data structure for attachments. 2016-11-01 16:47:55 +01:00
ff7101c3fe Small improvements in ValidateCustomFields() 2016-11-01 16:47:55 +01:00
590d075735 New schema for attachments, using propertyschema/valueschema. 2016-11-01 16:47:55 +01:00
fa3406b7d0 only_for_node_type_decorator() now supports checking multiple node types 2016-11-01 16:47:32 +01:00
5805f4eb2a Comments is now part of the base style 2016-11-01 15:53:40 +01:00
53cbe78ec1 Use #comments-embed for embedding comments. Avoid duplicate ID 2016-11-01 15:53:40 +01:00
f4b5e49c26 Return service account info from create_service_account() 2016-11-01 14:00:00 +01:00
499af03473 Gracefully handle 404 in get_user_info() 2016-11-01 14:00:00 +01:00
51c2c1d568 Make it possible for Pillar extensions to add service accounts. 2016-11-01 14:00:00 +01:00
144c5b8894 Use statusBarSet() js function from Pillar 2016-11-01 12:30:53 +01:00
c9d7da3a42 Attract and Flamenco icons 2016-10-21 20:41:41 +02:00
b59fcb5cba Prevent {{ url_for_node(...) }} crashing the planet when node doesn't exist.
Now None is returned as URL, and a warning is logged, rather than crashing
with a 500. A situation like this occurs when an activity refers to a
no longer existing node.
2016-10-21 16:00:03 +02:00
7be8e9b967 Show a nicer 404 error when something was deleted (instead of just "not there") 2016-10-21 15:27:17 +02:00
041722f71a Allow custom messages in the 404_embed.jade template 2016-10-21 14:38:57 +02:00
457a63ddcb Notifications: Fix alignment of mark as read button 2016-10-21 11:43:40 +02:00
5677ae8532 Prevent errors when notification is linked to non-existing node 2016-10-20 17:43:51 +02:00
8d99f8fc2e No more on-focus resizing; the "POST COMMENT" button moves away when you click it 2016-10-20 17:30:39 +02:00
09a21510a2 Comments: fixed issue cancelling reply & then posting top-level comment
This would still post as a reply, rather than as a top-level comment.
2016-10-20 17:29:45 +02:00
73641ecc8a Allow more tags in comments, including iframe (for video embedding) 2016-10-20 17:14:20 +02:00
b1da6de46e Comment textarea min height set when editing + only transition border-color 2016-10-20 17:04:02 +02:00
fceac01505 Set a nice minimum height when editing a comment 2016-10-20 17:02:07 +02:00
8b64f9140b Allow resizing of comment textarea 2016-10-20 17:01:58 +02:00
e1678537c0 Editing comments via PATCH on pillar-web, and some other comment fixes 2016-10-20 16:47:04 +02:00
d8686e5a14 Fixed comment rating 2016-10-20 16:34:33 +02:00
e71e6a7b32 API for editing comments via PATCH 2016-10-20 16:22:11 +02:00
8352fafd21 Replaced markdown with commonmark module 2016-10-20 13:05:43 +02:00
db2680be81 Removed unused import 2016-10-20 13:05:43 +02:00
c456696600 Added TODO 2016-10-20 13:05:43 +02:00
ad1816c617 log.warning → .info 2016-10-20 13:05:43 +02:00
8d3c4745aa Remove unnecessary form_schema fields. 2016-10-20 13:05:43 +02:00
3afeeaccd0 Removed permission keys from node type definitions.
This prevents replace_pillar_node_type_schemas() from overwriting existing
permissions.
2016-10-20 13:05:43 +02:00
7f4ad85781 Count comments and replies, not just top-level comments 2016-10-19 17:16:27 +02:00
ea2be0f13d Major revision of comment system.
- Comments are stored in HTML as well as Markdown, so that conversion
  only happens when saving (rather than when viewing).
- Added 'markdown' Jinja filter for easy development. This is quite
  a heavy filter, so it shouldn't be used (much) in production.
- Added CLI command to update schemas on existing node types.
2016-10-19 16:57:17 +02:00
eea934a86a Added username to public user fields 2016-10-19 16:57:17 +02:00
f2f66d7a6c Moved subquery.py from Attract to Pillar, as it's useful for comments too.
It's an attempt to speed up common queries which would ordinarily be
embedded by Eve. We want to move away from embedding due to security
issues (allowing the embedding of users leaks privacy-sensitive info).
2016-10-18 15:34:39 +02:00
aca54d76e0 Moved find_url_for_node() to its own module and made more pluggable.
Extensions can now register custom node URL finders using the
@pillar.web.nodes.finders.register_node_finder(node_type_name) decorator.
2016-10-18 12:03:06 +02:00
646ab58395 Style sidebar icons 2016-10-18 11:34:53 +02:00
d99ddca410 Split base styles into base.css
That way we can load this css in other projects to bring the basic stuff
such as normalize, navbar, notifications, custom scrollbars, and so on.
2016-10-17 16:17:23 +02:00
87f3093503 Delete attract main.sass, attract has its own 2016-10-17 15:40:14 +02:00
ae723b1655 update css 2016-10-14 15:57:11 +02:00
0a606ae15c Fix Free tag overflow 2016-10-14 15:19:40 +02:00
6af3dfdb51 Use local bootstrap 3.3.7 2016-10-13 16:02:38 +02:00
eca3f47eb8 Style form-upload-progress-bar when uploading
Had the same green hue for completed/uploading, which made it confusing.
2016-10-13 14:25:18 +02:00
8043caf187 Font Pillar: Question mark icon 2016-10-13 14:25:18 +02:00
aa953f76a1 Cache FlaskInternalApi object on request keyed by authentication token. 2016-10-13 10:01:29 +02:00
10ecb2158e Log error when URLer service is used but not configured. 2016-10-13 10:01:11 +02:00
96c9e12f7f doc_diff() optionally no longer reports differences between falsey values.
If falsey_is_equal=True, all Falsey values compare as equal, i.e. this
function won't report differences between DoesNotExist, False, '', and 0.
2016-10-12 17:09:48 +02:00
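A sketch of the comparison with that option:

    def doc_diff(doc1: dict, doc2: dict, *, falsey_is_equal: bool = False):
        """Yield (key, value1, value2) for each difference between the dicts."""
        for key in set(doc1.keys()) | set(doc2.keys()):
            val1 = doc1.get(key)
            val2 = doc2.get(key)
            if val1 == val2:
                continue
            if falsey_is_equal and not val1 and not val2:
                # Missing keys, False, '' and 0 all compare as equal.
                continue
            yield key, val1, val2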
7c310e12ef Added util function to compute the difference between two dicts. 2016-10-12 16:01:30 +02:00
26aa155b9e Cache Pillar API Object on request object. 2016-10-12 14:29:47 +02:00
0146b568c0 Allow extra fields in activities. 2016-10-12 14:29:28 +02:00
ade62033ba Added only_for_node_type_decorator(node_type_name) decorator factory func
This allows you to create a decorator for Eve hooks. The decorator returns
a decorator that checks its first argument's node type.

If the node type is not of the required node type, returns None,
otherwise calls the wrapped function.
2016-10-12 13:41:16 +02:00
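A sketch of the factory (written to accept multiple node types, as per the fa3406b7d0 entry above):

    import functools

    def only_for_node_type_decorator(*required_node_types):
        """Create a decorator for Eve hooks that only apply to certain node types."""
        def only_for_node_type(wrapped):
            @functools.wraps(wrapped)
            def wrapper(node, *args, **kwargs):
                # Skip (return None) unless the first argument has the right type.
                if node.get('node_type') not in required_node_types:
                    return None
                return wrapped(node, *args, **kwargs)
            return wrapper
        return only_for_node_type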
8aab88bdc2 Activities now have explicit project ID
This allows for directly querying activity on a certain project.
Used in Attract for task/shot activity streams.
2016-10-12 13:40:27 +02:00
f4b34f1d02 Error handler: set 'code' and 'description' defaults separately. 2016-10-12 10:22:25 +02:00
4eb8319697 Better logging of OAuth issues, in the hope to figure out what's going on. 2016-10-11 17:09:02 +02:00
5dd49fa5dd Pillar Extensions can now add links to the sidebar. 2016-10-11 16:33:44 +02:00
6429c3df21 Modernised flask.ext.login imports → flask_login 2016-10-11 15:23:40 +02:00
3561cb61c6 Fix favicon 2016-10-10 17:29:13 +02:00
a52c263733 Homepage: Fix long comments 2016-10-10 16:39:36 +02:00
c9d4a06486 Swap Blender Sync with Agent 327 project announcement 2016-10-07 16:42:42 +02:00
8a35fe3a16 Swap blog stream with random featured assets 2016-10-07 15:12:27 +02:00
620107fdc0 If there's no content_type, display node_type
As with textures, which have a node_type but no content_type
2016-10-07 15:06:29 +02:00
14a8be6329 Fix 'Latest Assets' list not being updated
Was simply missing project_id
2016-10-07 15:05:57 +02:00
77b17e31e0 Homepage: Minor style tweaks to make feed a bit more compact 2016-10-07 14:52:39 +02:00
2028891e7a No need to cache Sass, it's so fast anyway 2016-10-07 14:51:46 +02:00
abe0c28a99 Flowplayer: Fix fullscreen icon 2016-10-06 11:35:10 +02:00
c71186f318 Allow project membership to be managed by ppl with admin role.
This was already mentioned as possible in the frontend, but not implemented
in the backend.
2016-10-05 14:36:07 +02:00
4e0db78ff1 Made the use of the term "Team member" consistent on the proj sharing page.
Also clarified that project owners *and* team members can edit the project,
and that team members can also delete assets.
2016-10-04 12:51:23 +02:00
d1610da5f9 JStree: HREF attribute link to actual node instead of #
This allows things like middle click on an item to load in a separate tab, yay!

Idea and help by Dr. Sybren
2016-10-04 12:38:08 +02:00
73ec464292 py.test: run with -x (stop at first error) and --ff (failed test first) 2016-10-04 11:58:46 +02:00
0de8772c98 Removed __all__, as we didn't keep it up to date anyway. 2016-10-04 11:58:46 +02:00
91b116aa74 Slightly smarter ./gulp script (taken from Attract) 2016-10-04 11:58:46 +02:00
6537332b26 Don't use # as link on group nodes listing, use the actual link 2016-09-30 18:07:36 +02:00
001d310d76 Fix double pushState when browsing group nodes
Was calling displayNode() twice
2016-09-30 18:07:36 +02:00
e2921c8da8 nodes_latest was missing the content_type 2016-09-30 18:07:36 +02:00
d1d48553e5 Fix link to blog items not working 2016-09-30 18:07:36 +02:00
dd58d4ad04 Created AbstractPillarTest.create_project_admin() function. 2016-09-30 12:54:21 +02:00
b429933737 Added 'required_after_creation' Eve schema validator. 2016-09-30 12:54:21 +02:00
2cc22f4f90 Fix scrolling on mobile 2016-09-30 11:28:21 +02:00
e2236864e7 Filter out '^attract_.*' node types from jstree
While we're at it, also filter out comment & post from the query, rather
than later in Python code.
2016-09-29 17:34:24 +02:00
74d86487a9 Added self-building gulp command 2016-09-29 10:01:31 +02:00
d7fe196af0 Some dependency cleanups. 2016-09-29 10:01:15 +02:00
dcef372e4f Gracefully handle project without node types.
This can happen when a projection excludes node types.
2016-09-29 09:55:49 +02:00
7931428312 Clipboard icons on pillar-font 2016-09-27 17:01:07 +02:00
407aefb9ad Added CLI command for moving top-level nodes between projects.
Also introduces a slightly nicer way to get the database interface, and
an object-oriented way to allow dependency injection.
2016-09-27 12:57:57 +02:00
c64fbf61ba Removed project node type 2016-09-27 12:57:57 +02:00
063023c69a PEP8 2016-09-27 12:57:57 +02:00
2c7d2e7dfd Move font-pillar into its own css file
So we can easily link it from attract/flamenco/etc
2016-09-23 17:29:35 +02:00
7968c6ca37 Added node_type_utils to assign permissions to certain node types.
This separates "mechanism" from "policy".
2016-09-23 17:13:26 +02:00
91e3ec659f Added ProjectUtils.projectUrl() 2016-09-23 10:12:57 +02:00
e0f92b6185 Don't log entire exception when forwarding a 412 Precondition Failed. 2016-09-23 09:40:05 +02:00
0bf07b4ba4 ProjectUtils: add context
Currently used in Attract for the shots/tasks list
2016-09-22 18:59:55 +02:00
dfe398458b Tutti: Check if algoliaIndex is defined 2016-09-22 18:59:55 +02:00
30215bf87c Tutti: Check if tooltip/popover exist 2016-09-22 18:59:55 +02:00
0f23ee7a08 Added handler for 412 Precondition Failed from SDK. 2016-09-22 18:09:43 +02:00
9514066893 Gulp: Don't livereload by default
When running gulp watch, we were livereloading by default, which meant we couldn't have multiple 'gulp watch' processes at once.
2016-09-22 18:07:05 +02:00
cd8707207b Made format_undertitle() Jinja filter None-safe 2016-09-22 10:33:51 +02:00
7f9f89853d Properly handle embed/non-embed error renders for some SDK exceptions. 2016-09-22 09:25:59 +02:00
78824c9c2a Allow extensions to define custom project properties 2016-09-20 15:59:39 +02:00
40896fc70b Better logging when bad extension class is given.
This was necessary to debug an issue with different unit tests influencing
each other in Attract.
2016-09-20 15:59:39 +02:00
7598ad0b57 Gulp: Avoid re-building unchanged files by caching the results 2016-09-20 15:17:19 +02:00
4b11aab429 Update cloud headline 2016-09-19 16:53:11 +02:00
ad91e37d14 Art of Blender is selling out! 2016-09-19 12:34:03 +02:00
df8afb8b14 Append license notes to Algolia index
So we can keep nodes without description or uploaded by other users (like
textures), with clean names and still be able to search them easily by
their copyright notes.

Reviewers: sybren, fsiddi

Reviewed By: sybren, fsiddi

Differential Revision: https://developer.blender.org/D2225
2016-09-14 09:39:19 +02:00
55b2911665 Added .arcconfig for phabricator integration 2016-09-14 09:39:19 +02:00
1680475d92 Expose License notes on Textures, if any 2016-09-12 18:57:57 +02:00
d116439b57 correct text when there are no hdris 2016-09-12 18:11:25 +02:00
56c669874d Agent in the frontpage 2016-09-12 18:01:11 +02:00
76b0f5fc46 Moved login-code into a separate function.
This makes it easier to log in users by their token from unittests.
2016-09-08 12:03:51 +02:00
68666f0650 Updated unittest code so that we can create 100% valid projects.
This means also creating a user and groups so that the references are
valid.
2016-09-08 12:03:17 +02:00
4313284dab Added 'hide_none' Jinja filter, which replaces None with an empty string 2016-09-07 17:01:56 +02:00
9e6b998c50 Refactored static file handling so that extensions can provide static files 2016-09-07 16:36:25 +02:00
b2e8711ac4 Moved Jinja2 stuff to its own module, and added |undertitle filter. 2016-09-07 16:03:40 +02:00
f03566a10f Added template for embedded error 500 2016-09-07 14:57:05 +02:00
2730a7a2b2 Added error handlers for some PillarSDK exceptions. 2016-09-07 12:23:48 +02:00
f21b708085 Made it easier for extensions to register multiple blueprints at different URLs
The blueprint's own url_prefix='/xxx' setting is now taken into account.
2016-09-07 11:40:24 +02:00
8a6cd96198 Added pi-users icon + documented regeneration of pillar-font. 2016-09-07 11:14:36 +02:00
4ae36a0dc3 Allow custom template dirs for extensions 2016-09-06 18:39:35 +02:00
eac49ab810 Use BLENDER_ID_ENDPOINT to get roles from BlenderID
Also refactored some code.
2016-09-06 17:27:14 +02:00
49c08cba10 Custom error handlers: also properly handle non-Werkzeug exceptions. 2016-09-06 17:10:50 +02:00
cf30bb5d62 Use BlenderID-side roles to grant demo role. 2016-09-06 16:42:48 +02:00
ab5a4a6b6c Custom error pages.
These make a distinction between API requests on /api/ (which will return
a JSON response) and other requests (which will return HTML).

Fixes T49212
2016-09-06 14:22:52 +02:00
e04b2ef7ea Fix background color for nav container 2016-09-06 12:41:52 +02:00
52ca2adc19 User admin: actually show the search hit container. 2016-09-06 12:16:25 +02:00
29a0bed39b Fix background color of node-container on /about 2016-09-06 12:11:47 +02:00
634ad86fa1 Fix search on blog and tweaks to navbar 2016-09-06 12:04:40 +02:00
574178cffc Prevent accessing /nodes/undefined/view from search pages.
`firstHit.attr('data-hit-id')` can be undefined; in that case we just
ignore the situation.

Furthermore, I've removed the call to clearTimeout(), as it is only
called after the timeout has been hit, and thus is a no-op.
2016-09-06 11:56:54 +02:00
305d9b44ec re-indented algolia_search.js so that it uses 4-space indents. 2016-09-06 11:52:26 +02:00
3bb55fd3db User admin: properly handle AJAX errors.
Added specific handling for clicking on non-existing users. The styling
might need some tweaking (it's pretty ugly), but then again, it's just
for us admins.
2016-09-06 11:27:49 +02:00
486686f1f9 File upload: Removed JS-side file size check.
Instead, the size of the entire HTTP request body is checked against the
maximum file size. This allows for slightly smaller files (in the order
of 200-300 bytes), which shouldn't be noticeable given our 32 MiB limit
for non-subscribers. This check is performed before accessing
request.files[], and thus before the file even starts uploading.

This also allows unlimited file uploads to subscribers and demo users.
This was already possible using the API, so now the web interface is
consistent. Limits can be set using config[_local].py.

This closes T49264: Allow large uploads for admins
2016-09-06 10:33:28 +02:00
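A rough sketch of the check described above (identifiers are illustrative, not Pillar's actual code):

    from flask import abort, request

    MAX_BODY_SIZE = 32 * 2 ** 20  # 32 MiB limit for non-subscribers

    def assert_upload_size_allowed():
        # The entire HTTP request body is checked, so the effective file limit
        # is a few hundred bytes smaller; this runs before request.files[] is
        # accessed, and thus before the file even starts uploading.
        if request.content_length and request.content_length > MAX_BODY_SIZE:
            abort(413)  # Request Entity Too Large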
52cc61b143 Use Roboto font for headings as well 2016-09-05 19:40:46 +02:00
e4763d809b Project view: Fix transparent background of tree/sidebar 2016-09-05 18:55:49 +02:00
4cf7fde5bf Welcome Colin and Beau! 2016-09-05 16:00:45 +02:00
e58f29a9d0 Fix missing pictures on latest blog posts and node updates 2016-09-05 16:00:45 +02:00
fa050da8e2 Display Blog on the sidebar, if available 2016-09-05 16:00:45 +02:00
3d9b9e40d4 Added PillarExtension.setup_app(app)
It's called on each extension after all extensions have been processed,
and after all built-in Pillar modules have had their setup_app() called.
Call order is random.
2016-08-31 16:03:45 +02:00
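A hypothetical extension hooking into this (a sketch; only the hook itself is described by the
commit above):

    from pillar.extension import PillarExtension

    class MyExtension(PillarExtension):
        def setup_app(self, app):
            # Called after all built-in Pillar modules have had their
            # setup_app() run; don't rely on the order of extensions here.
            app.config.setdefault('MY_EXTENSION_ENABLED', True)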
4cf779e040 Keep reference to loaded extension, and refuse to load twice.
The Pillar extensions are now stored, by their name, in a dictionary.
2016-08-31 16:02:55 +02:00
a0cc76259e Renamed TestPillarServer to PillarTestServer
TestXXX classes are seen as unit tests by py.test, so anything that's not
a unit test should not be called TestXXX.
2016-08-31 11:29:16 +02:00
54bc0e87ce Updated test requirements 2016-08-31 11:28:38 +02:00
cb5128907c Removed old-src folder, use the last-before-fusion tag instead.
The 'last-before-fusion' tag points to the last revision before the
fusion with Pillar-Web. Any old source can be looked up there.
2016-08-31 11:10:44 +02:00
34921ece76 Added quotes around node type name 2016-08-30 16:00:16 +02:00
5ebec42e6d Removed unused, commented-out code 2016-08-30 15:58:58 +02:00
4529d0597b Gracefully handle nodes of a type for which we don't have a template.
Before, it would simply return a 500 Internal Server Error.
2016-08-30 15:52:55 +02:00
3f9d519753 Added Dummy deploy script for people with a 'git pp' alias
For people with a 'git pp' alias to push to production. These are the
aliases I use to push & deploy changes to production:

    prod = "!git checkout production && git fetch origin production && gitk --all"
    ff = "merge --ff-only"
    pp = "!git push && if [ -e deploy.sh ]; then ./deploy.sh; fi && git checkout master"

Those are handy to make branch switches easy, and to ensure that you don't
accidentally continue work on the production branch after deploying.
2016-08-30 14:37:36 +02:00
3039aef7d3 Removed Attract node types.
Those are moved into the new Blender Cloud server's Attract module.
2016-08-30 14:24:14 +02:00
cb84e6f0b7 Allow CLI commands to set the current user to a non-existing admin user. 2016-08-30 14:24:14 +02:00
88b5537df4 Avoid crash when there is no current user 2016-08-30 14:24:14 +02:00
88dd574797 No longer using flask.ext.XXX, more imports have to change too. 2016-08-30 14:24:14 +02:00
8d6df947c8 Use our own jQuery 2016-08-30 14:10:04 +02:00
b9b993fe4a Extension system: allow empty Eve settings.
Extensions are now able to return an empty dict from their eve_settings()
method.
2016-08-30 13:55:43 +02:00
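With this change, a minimal extension can simply return an empty dict (sketch):

    from pillar.extension import PillarExtension

    class MinimalExtension(PillarExtension):
        def eve_settings(self) -> dict:
            return {}  # no Eve collections to register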
2c62bd4016 When replying, use @username only 2016-08-30 13:54:59 +02:00
06ed6af2a9 Use Blender Cloud add-on version from config 2016-08-30 12:17:59 +02:00
32c130ed93 Fall back to application/octet-stream when there is no content-type header 2016-08-26 17:57:52 +02:00
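The fallback amounts to something like this (a sketch, assuming Flask's request object):

    from flask import request

    content_type = request.headers.get('Content-Type') or 'application/octet-stream'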
634b233685 mass_copy_between_backends: Also catch unexpected exceptions, and simply move on to the next file. 2016-08-26 17:50:40 +02:00
eb7b875122 Copying files to other backend now works 2016-08-26 15:52:02 +02:00
c4a3601939 Broke file_storage.py up into file_storage/{__init__,moving}.py 2016-08-26 15:36:34 +02:00
225f9ae054 WIP for changing file backends 2016-08-26 15:36:34 +02:00
163db3f2b8 Let generated links for 'unittest' backend actually be a valid link. 2016-08-26 15:35:18 +02:00
dd6fc8bde4 generate_link: warn when GCS blob can't be found. 2016-08-26 15:34:58 +02:00
ff692d287c Added 'check_cdnsun' management command.
This command performs a HEAD on each file stored at CDNSun, including its
variations. Logs missing variations and missing main files (but only when
there are no variations).
2016-08-26 14:16:05 +02:00
1fe86fa000 backup-db.sh now uses the new 'cloud' database 2016-08-24 14:50:08 +02:00
04c9c010f0 p.view_node(): check node_id for validity, before sending it to the API
This prevents a pillarsdk.exceptions.MethodNotAllowed exception, which
would result in a 500 Internal Server Error on the frontend.
2016-08-24 14:49:30 +02:00
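A sketch of the guard (assuming MongoDB ObjectIds; not the exact Pillar code):

    from bson import ObjectId
    from flask import abort

    def view_node(node_id: str):
        if not ObjectId.is_valid(node_id):  # e.g. the literal string 'undefined'
            abort(404)
        # ...fetch the node via the API as before...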
b6c623cca8 Don't import every function from pillar.web.utils individually.
Instead, just "from pillar.web import utils" and then use utils.X to
get to the util function.
2016-08-24 14:26:47 +02:00
9b2a419d9b Extra debug logging for file uploads 2016-08-24 11:33:02 +02:00
d5cf3b8246 Moved TLS cert file to post() call instead of session.
Another way to make it work is to set it on the session, and explicitly
specify verify=True in the post() call.
2016-08-23 17:45:31 +02:00
0d3ed3af2c Explicitly use certificate chain. 2016-08-23 17:45:08 +02:00
751a321aa6 Document return type 2016-08-23 17:42:42 +02:00
207d821564 Override image/x-exr mimetype with application/x-exr
This prevents us from handling EXR files as images, at least until the
time when we can properly thumbnail those.
2016-08-23 17:41:56 +02:00
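In sketch form (the function name is an illustration, not Pillar's identifier):

    def override_mimetype(mimetype: str) -> str:
        # Treat EXRs as generic binary data until we can thumbnail them.
        if mimetype == 'image/x-exr':
            return 'application/x-exr'
        return mimetype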
d7b71e38e8 Don't show upvote button on own comment 2016-08-23 16:25:09 +02:00
07691db874 Check subscription status on login. 2016-08-23 16:09:47 +02:00
dcbefc33ae Revert an oops in f3bf380bb7fa66b63010e3f6b3b477a8943479e7 2016-08-23 14:57:11 +02:00
751c692e6a Use urlparse.urlunsplit() to join url parts together.
This also works when there is no scheme and no hostname.
2016-08-23 14:34:15 +02:00
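For example, with Python 3's `urllib.parse` (the `urlparse` module in Python 2, as in this commit):

    from urllib.parse import urlunsplit

    urlunsplit(('', '', '/path/to/file', 'foo=bar', ''))   # '/path/to/file?foo=bar'
    urlunsplit(('https', 'example.com', '/path', '', ''))  # 'https://example.com/path'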
00a34e7e24 py.test now ignores node.js tests
There's one file node_modules/node-gyp/gyp/pylib/gyp/generator/ninja_test.py
which would otherwise be picked up by py.test.
2016-08-23 14:33:45 +02:00
2e0ba4c6cd test_sdk: load BlenderDesktopLogo.png from the correct path 2016-08-23 14:32:59 +02:00
9d1181330b Fix T49138: "learn more" buttons break history / back-button
The "learn more" links are now actually links, and the location is simply
set using `window.location.ref = url` instead of `window.location.replace()`.
2016-08-23 14:09:20 +02:00
f3bf380bb7 current_user.is_authenticated is a function, and thus should be called. 2016-08-23 14:09:20 +02:00
27eee380d2 Missing pillar-font 2016-08-23 13:56:35 +02:00
57620fd49a Added some more documentation for refresh_backend_links mgmt cmd 2016-08-23 12:57:49 +02:00
becf7e6381 manage.py refresh_backend_links: properly set up logging 2016-08-23 12:06:59 +02:00
c440465cf1 Removed pillar.manage_extra module.
It's no longer used, and empty.
2016-08-23 12:05:19 +02:00
25fb4ce842 Fix scrollbars on project_tree
(brought from pillar-web)
2016-08-22 23:12:56 +02:00
9c59b06ab9 Use a boolean to define whether the button-rounded mixin is filled or not
Also includes a fix to the blog.
2016-08-22 23:04:06 +02:00
bd9ce3182d Typo: Create* 2016-08-22 12:31:12 +02:00
4398d250a7 Fix broken upload widgets on posts 2016-08-19 11:47:06 +02:00
2c5dc34ea2 Introducing Pillar Framework
Refactor of pillar-server and pillar-web into a single Python package. This
simplifies the overall architecture of Pillar applications.

Special thanks @sybren and @venomgfx
2016-08-19 09:19:06 +02:00
516 changed files with 68104 additions and 7784 deletions

6
.arcconfig Normal file

@@ -0,0 +1,6 @@
{
"project_id" : "Pillar Server",
"conduit_uri" : "https://developer.blender.org/",
"git.default-relative-commit" : "origin/master",
"arc.land.update.default" : "rebase"
}

3
.babelrc Normal file

@@ -0,0 +1,3 @@
{
"presets": ["@babel/preset-env"]
}

24
.gitignore vendored

@@ -6,14 +6,30 @@
*.ropeproject*
*.swp
/pillar/config_local.py
config_local.py
.ropeproject/*
/pillar/application/static/storage/
/build
/.cache
/pillar/pillar.egg-info/
/pillar/google_app.json
/.pytest_cache/
*.egg-info/
profile.stats
/dump/
/.eggs
/devdeps/pip-wheel-metadata/
/node_modules
/.sass-cache
*.css.map
*.js.map
/translations/*/LC_MESSAGES/*.mo
pillar/web/static/assets/css/*.css
pillar/web/static/assets/js/*.min.js
pillar/web/static/assets/js/vendor/video.min.js
pillar/web/static/storage/
pillar/web/static/uploads/
pillar/web/templates/
/poetry.lock

85
README.md Normal file

@@ -0,0 +1,85 @@
Pillar
======
This is the latest iteration on the Attract project. We are building a unified
framework called Pillar. Pillar will combine Blender Cloud and Attract. You
can see Pillar in action on the [Blender Cloud](https://cloud.blender.org).
## Custom fonts
The icons on the website are drawn using a custom font, stored in
[pillar/web/static/font](pillar/web/static/font).
This font is generated via [Fontello](http://fontello.com/) by uploading
[pillar/web/static/font/config.json](pillar/web/static/font/config.json).
Note that we only use the WOFF and WOFF2 formats, and discard the others
supplied by Fontello.
After replacing the font files & `config.json`, edit the Fontello-supplied
`font.css` to remove all font formats except `woff` and `woff2`. Then upload
it to [css2sass](http://css2sass.herokuapp.com/) to convert it to SASS, and
place it in [src/styles/font-pillar.sass](src/styles/font-pillar.sass).
Don't forget to Gulp!
## Installation
Dependencies are managed via [Poetry](https://poetry.eustace.io/).
Make sure your /data directory exists and is writable by the current user.
Alternatively, provide a `pillar/config_local.py` that changes the relevant
settings.
```
git clone git@git.blender.org:pillar-python-sdk.git ../pillar-python-sdk
pip install -U --user poetry
poetry install
```
## HDRi viewer
The HDRi viewer uses [Google VRView](https://github.com/googlevr/vrview). To upgrade,
get those files:
* [three.min.js](https://raw.githubusercontent.com/googlevr/vrview/master/build/three.min.js)
* [embed.min.js](https://raw.githubusercontent.com/googlevr/vrview/master/build/embed.min.js)
* [loading.gif](https://raw.githubusercontent.com/googlevr/vrview/master/images/loading.gif)
and place them in `pillar/web/static/assets/vrview`. Replace `images/loading.gif` in `embed.min.js` with `static/pillar/assets/vrview/loading.gif`.
You may also want to compare their
[index.html](https://raw.githubusercontent.com/googlevr/vrview/master/index.html) to our
`src/templates/vrview.pug`.
When on an HDRi page with the viewer embedded, use this JavaScript code to find the current
yaw: `vrview_window.contentWindow.yaw()`. This can be passed as the `default_yaw` parameter to
the iframe.
## Celery
Pillar requires [Celery](http://www.celeryproject.org/) for background task processing. This in
turn requires a backend and a broker, for which the default Pillar configuration uses Redis and
RabbitMQ.
You can run the Celery Worker using `manage.py celery worker`.
Find other Celery operations with the `manage.py celery` command.
## Elasticsearch
Pillar uses [Elasticsearch](https://www.elastic.co/products/elasticsearch) to power the search engine.
You will need to run the `manage.py elastic reset_index` command to initialize the indexing.
If you need to reindex your documents in Elasticsearch, run the `manage.py elastic reindex` command.
## Translations
If the language you want to support doesn't exist, you need to run: `translations init es_AR`.
Every time a new string is marked for translation, you need to update the entire catalog: `translations update`.
And once more strings are translated, you need to compile the translations: `translations compile`.
*To mark strings for translation in Python scripts, you need to
wrap them with the `flask_babel.gettext` function.
For .pug templates, wrap them with `_()`.*
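For example, a minimal sketch of marking a string for translation in Python (the flashed message is illustrative):
```
from flask import flash
from flask_babel import gettext

flash(gettext('Your profile was updated.'))
```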


@@ -1,3 +1,3 @@
#!/bin/bash
#!/bin/bash -ex
mongodump -h localhost:27018 -d eve --out dump/$(date +'%Y-%m-%d-%H%M') --excludeCollection tokens
mongodump -h localhost:27018 -d cloud --out dump/$(date +'%Y-%m-%d-%H%M') --excludeCollection tokens --excludeCollection flamenco_task_logs


@@ -1,57 +0,0 @@
#!/bin/bash -e
# Deploys the current production branch to the production machine.
PROJECT_NAME="pillar"
DOCKER_NAME="pillar"
REMOTE_ROOT="/data/git/${PROJECT_NAME}"
SSH="ssh -o ClearAllForwardings=yes cloud.blender.org"
ROOT="$(dirname "$(readlink -f "$0")")"
cd ${ROOT}
# Check that we're on production branch.
if [ $(git rev-parse --abbrev-ref HEAD) != "production" ]; then
echo "You are NOT on the production branch, refusing to deploy." >&2
exit 1
fi
# Check that production branch has been pushed.
if [ -n "$(git log origin/production..production --oneline)" ]; then
echo "WARNING: not all changes to the production branch have been pushed."
echo "Press [ENTER] to continue deploying current origin/production, CTRL+C to abort."
read dummy
fi
# SSH to cloud to pull all files in
echo "==================================================================="
echo "UPDATING FILES ON ${PROJECT_NAME}"
${SSH} git -C ${REMOTE_ROOT} fetch origin production
${SSH} git -C ${REMOTE_ROOT} log origin/production..production --oneline
${SSH} git -C ${REMOTE_ROOT} merge --ff-only origin/production
# Update the virtualenv
${SSH} -t docker exec ${DOCKER_NAME} /data/venv/bin/pip install -U -r ${REMOTE_ROOT}/requirements.txt --exists-action w
# Notify Bugsnag of this new deploy.
echo
echo "==================================================================="
GIT_REVISION=$(${SSH} git -C ${REMOTE_ROOT} describe --always)
echo "Notifying Bugsnag of this new deploy of revision ${GIT_REVISION}."
BUGSNAG_API_KEY=$(${SSH} python -c "\"import sys; sys.path.append('${REMOTE_ROOT}/${PROJECT_NAME}'); import config_local; print(config_local.BUGSNAG_API_KEY)\"")
curl --data "apiKey=${BUGSNAG_API_KEY}&revision=${GIT_REVISION}" https://notify.bugsnag.com/deploy
echo
# Wait for [ENTER] to restart the server
echo
echo "==================================================================="
echo "NOTE: If you want to edit config_local.py on the server, do so now."
echo "NOTE: Press [ENTER] to continue and restart the server process."
read dummy
${SSH} docker exec ${DOCKER_NAME} kill -HUP 1
echo "Server process restarted"
echo
echo "==================================================================="
echo "Deploy of ${PROJECT_NAME} is done."
echo "==================================================================="

16
devdeps/pyproject.toml Normal file

@@ -0,0 +1,16 @@
[tool.poetry]
name = "pillar-devdeps"
version = "1.0"
description = ""
authors = [
"Francesco Siddi <francesco@blender.org>",
"Pablo Vazquez <pablo@blender.studio>",
"Sybren Stüvel <sybren@blender.studio>",
]
[tool.poetry.dependencies]
python = "~3.6"
mypy = "^0.501"
pytest = "~4.4"
pytest-cov = "~2.7"
responses = "~0.10"


@@ -1,17 +0,0 @@
#!/usr/bin/env bash
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
echo $DIR
if [[ $1 == 'pro' || $1 == 'dev' ]]; then
# Copy requirements.txt into pro folder
cp ../requirements.txt $1/requirements.txt
# Build image
docker build -t armadillica/pillar_$1 $1
# Remove requirements.txt
rm $1/requirements.txt
else
echo "POS. Your options are 'pro' or 'dev'"
fi


@@ -1,48 +0,0 @@
FROM ubuntu:14.04
MAINTAINER Francesco Siddi <francesco@blender.org>
RUN apt-get update && apt-get install -y \
python \
python-dev \
python-pip \
vim \
nano \
zlib1g-dev \
libjpeg-dev \
python-crypto \
python-openssl \
libssl-dev \
libffi-dev \
software-properties-common \
git
RUN add-apt-repository ppa:mc3man/trusty-media \
&& apt-get update && apt-get install -y \
ffmpeg
RUN mkdir -p /data/git/pillar \
&& mkdir -p /data/storage/shared \
&& mkdir -p /data/storage/pillar \
&& mkdir -p /data/config \
&& mkdir -p /data/storage/logs
RUN pip install virtualenv \
&& virtualenv /data/venv
ENV PIP_PACKAGES_VERSION 2
ADD requirements.txt /requirements.txt
RUN . /data/venv/bin/activate && pip install -r /requirements.txt
VOLUME /data/git/pillar
VOLUME /data/config
VOLUME /data/storage/shared
VOLUME /data/storage/pillar
ENV MONGO_HOST mongo_pillar
EXPOSE 5000
ADD runserver.sh /runserver.sh
ENTRYPOINT ["bash", "/runserver.sh"]


@@ -1,3 +0,0 @@
#!/bin/bash
. /data/venv/bin/activate && python /data/git/pillar/pillar/manage.py runserver


@@ -1,47 +0,0 @@
<VirtualHost *:80>
# The ServerName directive sets the request scheme, hostname and port that
# the server uses to identify itself. This is used when creating
# redirection URLs. In the context of virtual hosts, the ServerName
# specifies what hostname must appear in the request's Host: header to
# match this virtual host. For the default virtual host (this file) this
# value is not decisive as it is used as a last resort host regardless.
# However, you must set it for any further virtual host explicitly.
#ServerName 127.0.0.1
# EnableSendfile on
XSendFile on
XSendFilePath /data/storage/pillar
ServerAdmin webmaster@localhost
DocumentRoot /var/www/html
# Available loglevels: trace8, ..., trace1, debug, info, notice, warn,
# error, crit, alert, emerg.
# It is also possible to configure the loglevel for particular
# modules, e.g.
#LogLevel info ssl:warn
ErrorLog ${APACHE_LOG_DIR}/error.log
CustomLog ${APACHE_LOG_DIR}/access.log combined
# For most configuration files from conf-available/, which are
# enabled or disabled at a global level, it is possible to
# include a line for only one particular virtual host. For example the
# following line enables the CGI configuration for this host only
# after it has been globally disabled with "a2disconf".
#Include conf-available/serve-cgi-bin.conf
WSGIDaemonProcess pillar
WSGIPassAuthorization On
WSGIScriptAlias / /data/git/pillar/pillar/runserver.wsgi \
process-group=pillar application-group=%{GLOBAL}
<Directory /data/git/pillar/pillar>
<Files runserver.wsgi>
Require all granted
</Files>
</Directory>
</VirtualHost>
# vim: syntax=apache ts=4 sw=4 sts=4 sr noet


@@ -1,61 +0,0 @@
FROM ubuntu:14.04
MAINTAINER Francesco Siddi <francesco@blender.org>
RUN apt-get update && apt-get install -y \
python \
python-dev \
python-pip \
vim \
nano \
zlib1g-dev \
libjpeg-dev \
python-crypto \
python-openssl \
libssl-dev \
libffi-dev \
software-properties-common \
apache2-mpm-event \
libapache2-mod-wsgi \
libapache2-mod-xsendfile \
git
RUN add-apt-repository ppa:mc3man/trusty-media \
&& apt-get update && apt-get install -y \
ffmpeg
RUN mkdir -p /data/git/pillar \
&& mkdir -p /data/storage/shared \
&& mkdir -p /data/storage/pillar \
&& mkdir -p /data/config \
&& mkdir -p /data/storage/logs
ENV APACHE_RUN_USER www-data
ENV APACHE_RUN_GROUP www-data
ENV APACHE_LOG_DIR /var/log/apache2
ENV APACHE_PID_FILE /var/run/apache2.pid
ENV APACHE_RUN_DIR /var/run/apache2
ENV APACHE_LOCK_DIR /var/lock/apache2
RUN mkdir -p $APACHE_RUN_DIR $APACHE_LOCK_DIR $APACHE_LOG_DIR
RUN pip install virtualenv \
&& virtualenv /data/venv
ENV PIP_PACKAGES_VERSION 2
ADD requirements.txt /requirements.txt
RUN . /data/venv/bin/activate \
&& pip install -r /requirements.txt
VOLUME /data/git/pillar
VOLUME /data/config
VOLUME /data/storage/shared
VOLUME /data/storage/pillar
ENV MONGO_HOST mongo_pillar
EXPOSE 80
ADD 000-default.conf /etc/apache2/sites-available/000-default.conf
CMD ["/usr/sbin/apache2", "-D", "FOREGROUND"]

19
gulp Executable file

@@ -0,0 +1,19 @@
#!/bin/bash -ex
GULP=./node_modules/.bin/gulp
function install() {
npm install
touch $GULP # installer doesn't always touch this after a build, so we do.
}
# Rebuild Gulp if missing or outdated.
[ -e $GULP ] || install
[ gulpfile.js -nt $GULP ] && install
if [ "$1" == "watch" ]; then
# Treat "gulp watch" as "gulp && gulp watch"
$GULP
fi
exec $GULP "$@"

234
gulpfile.js Normal file

@@ -0,0 +1,234 @@
let argv = require('minimist')(process.argv.slice(2));
let autoprefixer = require('gulp-autoprefixer');
let cache = require('gulp-cached');
let chmod = require('gulp-chmod');
let concat = require('gulp-concat');
let git = require('gulp-git');
let gulpif = require('gulp-if');
let gulp = require('gulp');
let livereload = require('gulp-livereload');
let plumber = require('gulp-plumber');
let pug = require('gulp-pug');
let rename = require('gulp-rename');
let sass = require('gulp-sass');
let sourcemaps = require('gulp-sourcemaps');
let uglify = require('gulp-uglify-es').default;
let browserify = require('browserify');
let babelify = require('babelify');
let sourceStream = require('vinyl-source-stream');
let glob = require('glob');
let es = require('event-stream');
let path = require('path');
let buffer = require('vinyl-buffer');
let enabled = {
uglify: argv.production,
maps: !argv.production,
failCheck: !argv.production,
prettyPug: !argv.production,
cachify: !argv.production,
cleanup: argv.production,
chmod: argv.production,
};
let destination = {
css: 'pillar/web/static/assets/css',
pug: 'pillar/web/templates',
js: 'pillar/web/static/assets/js',
}
let source = {
bootstrap: 'node_modules/bootstrap/',
jquery: 'node_modules/jquery/',
popper: 'node_modules/popper.js/',
vue: 'node_modules/vue/',
}
/* Stylesheets */
gulp.task('styles', function(done) {
gulp.src('src/styles/**/*.sass')
.pipe(gulpif(enabled.failCheck, plumber()))
.pipe(gulpif(enabled.maps, sourcemaps.init()))
.pipe(sass({
outputStyle: 'compressed'}
))
.pipe(autoprefixer("last 3 versions"))
.pipe(gulpif(enabled.maps, sourcemaps.write(".")))
.pipe(gulp.dest(destination.css))
.pipe(gulpif(argv.livereload, livereload()));
done();
});
/* Templates */
gulp.task('templates', function(done) {
gulp.src('src/templates/**/*.pug')
.pipe(gulpif(enabled.failCheck, plumber()))
.pipe(gulpif(enabled.cachify, cache('templating')))
.pipe(pug({
pretty: enabled.prettyPug
}))
.pipe(gulp.dest(destination.pug))
.pipe(gulpif(argv.livereload, livereload()));
done();
});
/* Individual Uglified Scripts */
gulp.task('scripts', function(done) {
gulp.src('src/scripts/*.js')
.pipe(gulpif(enabled.failCheck, plumber()))
.pipe(gulpif(enabled.cachify, cache('scripting')))
.pipe(gulpif(enabled.maps, sourcemaps.init()))
.pipe(gulpif(enabled.uglify, uglify()))
.pipe(rename({suffix: '.min'}))
.pipe(gulpif(enabled.maps, sourcemaps.write(".")))
.pipe(gulpif(enabled.chmod, chmod(0o644)))
.pipe(gulp.dest(destination.js))
.pipe(gulpif(argv.livereload, livereload()));
done();
});
function browserify_base(entry) {
let pathSplited = path.dirname(entry).split(path.sep);
let moduleName = pathSplited[pathSplited.length - 1];
return browserify({
entries: [entry],
standalone: 'pillar.' + moduleName,
})
.transform(babelify, { "presets": ["@babel/preset-env"] })
.bundle()
.pipe(gulpif(enabled.failCheck, plumber()))
.pipe(sourceStream(path.basename(entry)))
.pipe(buffer())
.pipe(rename({
basename: moduleName,
extname: '.min.js'
}));
}
/**
* Transcompile and package common modules to be included in tutti.js.
*
* Example:
* src/scripts/js/es6/common/api/init.js
* src/scripts/js/es6/common/events/init.js
* Everything exported in api/init.js will end up in module pillar.api.*, and everything exported in events/init.js
* will end up in pillar.events.*
*/
function browserify_common() {
return glob.sync('src/scripts/js/es6/common/**/init.js').map(browserify_base);
}
/**
* Transcompile and package individual modules.
*
* Example:
* src/scripts/js/es6/individual/coolstuff/init.js
* Will create a coolstuff.js and everything exported in init.js will end up in namespace pillar.coolstuff.*
*/
gulp.task('scripts_browserify', function(done) {
glob('src/scripts/js/es6/individual/**/init.js', function(err, files) {
if (err) return done(err);
var tasks = files.map(function(entry) {
return browserify_base(entry)
.pipe(gulpif(enabled.maps, sourcemaps.init()))
.pipe(gulpif(enabled.uglify, uglify()))
.pipe(gulpif(enabled.maps, sourcemaps.write(".")))
.pipe(gulp.dest(destination.js));
});
es.merge(tasks).on('end', done);
})
});
/* Collection of scripts in src/scripts/tutti/ and src/scripts/js/es6/common/ to merge into tutti.min.js
* Since it's always loaded, it's only for functions that we want site-wide.
* It also includes jQuery and Bootstrap (and its dependency popper), since
* the site doesn't work without it anyway.*/
gulp.task('scripts_concat_tutti', function(done) {
let toUglify = [
source.jquery + 'dist/jquery.min.js',
source.vue + (enabled.uglify ? 'dist/vue.min.js' : 'dist/vue.js'),
source.popper + 'dist/umd/popper.min.js',
source.bootstrap + 'js/dist/index.js',
source.bootstrap + 'js/dist/util.js',
source.bootstrap + 'js/dist/alert.js',
source.bootstrap + 'js/dist/collapse.js',
source.bootstrap + 'js/dist/dropdown.js',
source.bootstrap + 'js/dist/tooltip.js',
'src/scripts/tutti/**/*.js'
];
es.merge(gulp.src(toUglify), ...browserify_common())
.pipe(gulpif(enabled.failCheck, plumber()))
.pipe(gulpif(enabled.maps, sourcemaps.init()))
.pipe(concat("tutti.min.js"))
.pipe(gulpif(enabled.uglify, uglify()))
.pipe(gulpif(enabled.maps, sourcemaps.write(".")))
.pipe(gulpif(enabled.chmod, chmod(0o644)))
.pipe(gulp.dest(destination.js))
.pipe(gulpif(argv.livereload, livereload()));
done();
});
/* Simply move these vendor scripts from node_modules. */
gulp.task('scripts_move_vendor', function(done) {
let toMove = [
'node_modules/video.js/dist/video.min.js',
];
gulp.src(toMove)
.pipe(gulp.dest(destination.js + '/vendor/'));
done();
});
// While developing, run 'gulp watch'
gulp.task('watch',function(done) {
// Only listen for live reloads if ran with --livereload
if (argv.livereload){
livereload.listen();
}
gulp.watch('src/styles/**/*.sass',gulp.series('styles'));
gulp.watch('src/templates/**/*.pug',gulp.series('templates'));
gulp.watch('src/scripts/*.js',gulp.series('scripts'));
gulp.watch('src/scripts/tutti/**/*.js',gulp.series('scripts_concat_tutti'));
gulp.watch('src/scripts/js/**/*.js',gulp.series(['scripts_browserify', 'scripts_concat_tutti']));
done();
});
// Erases all generated files in output directories.
gulp.task('cleanup', function(done) {
let paths = [];
for (let attr in destination) {
paths.push(destination[attr]);
}
git.clean({ args: '-f -X ' + paths.join(' ') }, function (err) {
if(err) throw err;
});
done();
});
// Run 'gulp' to build everything at once
let tasks = [];
if (enabled.cleanup) tasks.push('cleanup');
// gulp.task('default', gulp.parallel('styles', 'templates', 'scripts', 'scripts_tutti'));
gulp.task('default', gulp.parallel(tasks.concat([
'styles',
'templates',
'scripts',
'scripts_concat_tutti',
'scripts_move_vendor',
'scripts_browserify',
])));

180
jest.config.js Normal file

@@ -0,0 +1,180 @@
// For a detailed explanation regarding each configuration property, visit:
// https://jestjs.io/docs/en/configuration.html
module.exports = {
// All imported modules in your tests should be mocked automatically
// automock: false,
// Stop running tests after the first failure
// bail: false,
// Respect "browser" field in package.json when resolving modules
// browser: false,
// The directory where Jest should store its cached dependency information
// cacheDirectory: "/tmp/jest_rs",
// Automatically clear mock calls and instances between every test
clearMocks: true,
// Indicates whether the coverage information should be collected while executing the test
// collectCoverage: false,
// An array of glob patterns indicating a set of files for which coverage information should be collected
// collectCoverageFrom: null,
// The directory where Jest should output its coverage files
// coverageDirectory: null,
// An array of regexp pattern strings used to skip coverage collection
// coveragePathIgnorePatterns: [
// "/node_modules/"
// ],
// A list of reporter names that Jest uses when writing coverage reports
// coverageReporters: [
// "json",
// "text",
// "lcov",
// "clover"
// ],
// An object that configures minimum threshold enforcement for coverage results
// coverageThreshold: null,
// Make calling deprecated APIs throw helpful error messages
// errorOnDeprecated: false,
// Force coverage collection from ignored files using an array of glob patterns
// forceCoverageMatch: [],
// A path to a module which exports an async function that is triggered once before all test suites
// globalSetup: null,
// A path to a module which exports an async function that is triggered once after all test suites
// globalTeardown: null,
// A set of global variables that need to be available in all test environments
// globals: {},
// An array of directory names to be searched recursively up from the requiring module's location
// moduleDirectories: [
// "node_modules"
// ],
// An array of file extensions your modules use
// moduleFileExtensions: [
// "js",
// "json",
// "jsx",
// "node"
// ],
// A map from regular expressions to module names that allow to stub out resources with a single module
// moduleNameMapper: {},
// An array of regexp pattern strings, matched against all module paths before considered 'visible' to the module loader
// modulePathIgnorePatterns: [],
// Activates notifications for test results
// notify: false,
// An enum that specifies notification mode. Requires { notify: true }
// notifyMode: "always",
// A preset that is used as a base for Jest's configuration
// preset: null,
// Run tests from one or more projects
// projects: null,
// Use this configuration option to add custom reporters to Jest
// reporters: undefined,
// Automatically reset mock state between every test
// resetMocks: false,
// Reset the module registry before running each individual test
// resetModules: false,
// A path to a custom resolver
// resolver: null,
// Automatically restore mock state between every test
// restoreMocks: false,
// The root directory that Jest should scan for tests and modules within
// rootDir: null,
// A list of paths to directories that Jest should use to search for files in
// roots: [
// "<rootDir>"
// ],
// Allows you to use a custom runner instead of Jest's default test runner
// runner: "jest-runner",
// The paths to modules that run some code to configure or set up the testing environment before each test
setupFiles: ["<rootDir>/src/scripts/js/es6/test_config/test-env.js"],
// The path to a module that runs some code to configure or set up the testing framework before each test
// setupTestFrameworkScriptFile: null,
// A list of paths to snapshot serializer modules Jest should use for snapshot testing
// snapshotSerializers: [],
// The test environment that will be used for testing
testEnvironment: "jsdom",
// Options that will be passed to the testEnvironment
// testEnvironmentOptions: {},
// Adds a location field to test results
// testLocationInResults: false,
// The glob patterns Jest uses to detect test files
// testMatch: [
// "**/__tests__/**/*.js?(x)",
// "**/?(*.)+(spec|test).js?(x)"
// ],
// An array of regexp pattern strings that are matched against all test paths, matched tests are skipped
// testPathIgnorePatterns: [
// "/node_modules/"
// ],
// The regexp pattern Jest uses to detect test files
// testRegex: "",
// This option allows the use of a custom results processor
// testResultsProcessor: null,
// This option allows use of a custom test runner
// testRunner: "jasmine2",
// This option sets the URL for the jsdom environment. It is reflected in properties such as location.href
// testURL: "http://localhost",
// Setting this value to "fake" allows the use of fake timers for functions such as "setTimeout"
// timers: "real",
// A map from regular expressions to paths to transformers
// transform: null,
// An array of regexp pattern strings that are matched against all source file paths, matched files will skip transformation
// transformIgnorePatterns: [
// "/node_modules/"
// ],
// An array of regexp pattern strings that are matched against all modules before the module loader will automatically return a mock for them
// unmockedModulePathPatterns: undefined,
// Indicates whether each individual test should be reported during the run
// verbose: null,
// An array of regexp patterns that are matched against all source file paths before re-running tests in watch mode
// watchPathIgnorePatterns: [],
// Whether to use watchman for file crawling
// watchman: true,
};

10475
package-lock.json generated Normal file

File diff suppressed because it is too large.

54
package.json Normal file

@@ -0,0 +1,54 @@
{
"name": "pillar",
"license": "GPL-2.0+",
"author": "Blender Institute",
"repository": {
"type": "git",
"url": "git://git.blender.org/pillar.git"
},
"devDependencies": {
"@babel/core": "7.1.6",
"@babel/preset-env": "7.1.6",
"acorn": "5.7.3",
"babel-core": "7.0.0-bridge.0",
"babelify": "10.0.0",
"browserify": "16.2.3",
"gulp": "4.0.0",
"gulp-autoprefixer": "6.0.0",
"gulp-babel": "8.0.0",
"gulp-cached": "1.1.1",
"gulp-chmod": "2.0.0",
"gulp-concat": "2.6.1",
"gulp-git": "2.8.0",
"gulp-if": "2.0.2",
"gulp-livereload": "4.0.0",
"gulp-plumber": "1.2.0",
"gulp-pug": "4.0.1",
"gulp-rename": "1.4.0",
"gulp-sass": "4.1.0",
"gulp-sourcemaps": "2.6.4",
"gulp-uglify-es": "1.0.4",
"jest": "^24.8.0",
"minimist": "1.2.0",
"vinyl-buffer": "1.0.1",
"vinyl-source-stream": "2.0.0"
},
"dependencies": {
"bootstrap": "^4.3.1",
"glob": "7.1.3",
"jquery": "^3.4.1",
"natives": "^1.1.6",
"popper.js": "1.14.4",
"video.js": "7.2.2",
"vue": "2.5.17"
},
"scripts": {
"test": "jest"
},
"__COMMENTS__": [
"natives@1.1.6 for Gulp 3.x on Node 10.x: https://github.com/gulpjs/gulp/issues/2162#issuecomment-385197164"
],
"resolutions": {
"natives": "1.1.6"
}
}

947
pillar/__init__.py Normal file

@@ -0,0 +1,947 @@
"""Pillar server."""
import collections
import contextlib
import copy
import json
import logging
import logging.config
import subprocess
import tempfile
import typing
import os
import os.path
import pathlib
import warnings
# These warnings have to be suppressed before the first import.
# Eve is falling behind on Cerberus. See https://github.com/pyeve/eve/issues/1278
warnings.filterwarnings(
'ignore', category=DeprecationWarning,
message="Methods for type testing are deprecated, use TypeDefinition and the "
"'types_mapping'-property of a Validator-instance instead")
# Werkzeug deprecated Request.is_xhr, but it works fine with jQuery and we don't need a reminder
# every time a unit test is run.
warnings.filterwarnings('ignore', category=DeprecationWarning,
message="'Request.is_xhr' is deprecated as of version 0.13 and will be "
"removed in version 1.0.")
import jinja2
import flask
from eve import Eve
from flask import g, render_template, request
from flask_babel import Babel, gettext as _
from flask.templating import TemplateNotFound
import pymongo.database
from werkzeug.local import LocalProxy
# Declare pillar.current_app before importing other Pillar modules.
def _get_current_app():
"""Returns the current application."""
return flask.current_app
current_app: 'PillarServer' = LocalProxy(_get_current_app)
"""the current app, annotated as PillarServer"""
from pillar.api import custom_field_validation
from pillar.api.utils import authentication
import pillar.web.jinja
from . import api
from . import web
from . import auth
from . import sentry_extra
import pillar.api.organizations
empty_settings = {
# Use a random URL prefix when booting Eve, to ensure that any
# Flask route that's registered *before* we load our own config
# won't interfere with Pillar itself.
'URL_PREFIX': 'pieQui4vah9euwieFai6naivaV4thahchoochiiwazieBe5o',
'DOMAIN': {},
}
class ConfigurationMissingError(SystemExit):
"""Raised when a vital configuration key is missing.
Causes Python to exit.
"""
class BlinkerCompatibleEve(Eve):
"""Workaround for https://github.com/pyeve/eve/issues/1087"""
def __getattr__(self, name):
if name in {"im_self", "im_func"}:
raise AttributeError("type object '%s' has no attribute '%s'" %
(self.__class__.__name__, name))
return super().__getattr__(name)
class PillarServer(BlinkerCompatibleEve):
def __init__(self, app_root: str, **kwargs) -> None:
from .extension import PillarExtension
from celery import Celery
from flask_wtf.csrf import CSRFProtect
kwargs.setdefault('validator', custom_field_validation.ValidateCustomFields)
super(PillarServer, self).__init__(settings=empty_settings, **kwargs)
# mapping from extension name to extension object.
map_type = typing.MutableMapping[str, PillarExtension]
self.pillar_extensions: map_type = collections.OrderedDict()
self.pillar_extensions_template_paths = [] # list of paths
# The default roles Pillar uses. Will probably all move to extensions at some point.
self._user_roles: typing.Set[str] = {
'demo', 'admin', 'subscriber', 'homeproject',
'protected', 'org-subscriber', 'video-encoder',
'service', 'badger', 'svner',
}
self._user_roles_indexable: typing.Set[str] = {'demo', 'admin', 'subscriber'}
# Mapping from role name to capabilities given to that role.
self._user_caps: typing.MutableMapping[str, typing.FrozenSet[str]] = \
collections.defaultdict(frozenset)
self.app_root = os.path.abspath(app_root)
self._load_flask_config()
self._config_logging()
self.log = logging.getLogger('%s.%s' % (__name__, self.__class__.__name__))
self.log.info('Creating new instance from %r', self.app_root)
self._config_url_map()
self._config_auth_token_hmac_key()
self._config_tempdirs()
self._config_git()
self.sentry: typing.Optional[sentry_extra.PillarSentry] = None
self._config_sentry()
self._config_google_cloud_storage()
self.algolia_index_users = None
self.algolia_index_nodes = None
self.algolia_client = None
self._config_algolia()
self.encoding_service_client = None
self._config_encoding_backend()
try:
self.settings = os.environ['EVE_SETTINGS']
except KeyError:
self.settings = os.path.join(os.path.dirname(os.path.abspath(__file__)),
'api', 'eve_settings.py')
# self.settings = self.config['EVE_SETTINGS_PATH']
self.load_config()
self._validate_config()
# Configure authentication
self.login_manager = auth.config_login_manager(self)
self._config_caching()
self._config_translations()
# Celery itself is configured after all extensions have loaded.
self.celery: Celery = None
self.org_manager = pillar.api.organizations.OrgManager()
# Make CSRF protection available to the application. By default it is
# disabled on all endpoints. More info at WTF_CSRF_CHECK_DEFAULT in config.py
self.csrf = CSRFProtect(self)
def _validate_config(self):
if not self.config.get('SECRET_KEY'):
raise ConfigurationMissingError('SECRET_KEY configuration key is missing')
server_name = self.config.get('SERVER_NAME')
if not server_name:
raise ConfigurationMissingError('SERVER_NAME configuration key is missing, should be a '
'FQDN with TLD')
if server_name != 'localhost' and '.' not in server_name:
raise ConfigurationMissingError('SERVER_NAME should contain a FQDN with TLD')
def _load_flask_config(self):
# Load configuration from different sources, to make it easy to override
# settings with secrets, as well as for development & testing.
self.config.from_pyfile(os.path.join(os.path.dirname(__file__), 'config.py'), silent=False)
self.config.from_pyfile(os.path.join(self.app_root, 'config.py'), silent=True)
self.config.from_pyfile(os.path.join(self.app_root, 'config_local.py'), silent=True)
from_envvar = os.environ.get('PILLAR_CONFIG')
if from_envvar:
# Don't use from_envvar, as we want different behaviour. If the envvar
# is not set, it's fine (i.e. silent=True), but if it is set and the
# configfile doesn't exist, it should error out (i.e. silent=False).
self.config.from_pyfile(from_envvar, silent=False)
def _config_logging(self):
# Configure logging
logging.config.dictConfig(self.config['LOGGING'])
log = logging.getLogger(__name__)
if self.config['DEBUG']:
log.info('Pillar starting, debug=%s', self.config['DEBUG'])
def _config_url_map(self):
"""Extend Flask url_map with our own converters."""
import secrets, re
from . import flask_extra
if not self.config.get('STATIC_FILE_HASH'):
self.log.warning('STATIC_FILE_HASH is empty, generating random one')
h = re.sub(r'[_.~-]', '', secrets.token_urlsafe())[:8]
self.config['STATIC_FILE_HASH'] = h
self.url_map.converters['hashed_path'] = flask_extra.HashedPathConverter
def _config_auth_token_hmac_key(self):
"""Load AUTH_TOKEN_HMAC_KEY, falling back to SECRET_KEY."""
hmac_key = self.config.get('AUTH_TOKEN_HMAC_KEY')
if not hmac_key:
self.log.warning('AUTH_TOKEN_HMAC_KEY not set, falling back to SECRET_KEY')
hmac_key = self.config['AUTH_TOKEN_HMAC_KEY'] = self.config['SECRET_KEY']
if isinstance(hmac_key, str):
self.log.warning('Converting AUTH_TOKEN_HMAC_KEY to bytes')
self.config['AUTH_TOKEN_HMAC_KEY'] = hmac_key.encode('utf8')
def _config_tempdirs(self):
storage_dir = self.config['STORAGE_DIR']
if not os.path.exists(storage_dir):
self.log.info('Creating storage directory %r', storage_dir)
os.makedirs(storage_dir)
# Set the TMP environment variable to manage where uploads are stored.
# These are all used by tempfile.mkstemp(), but we don't know in which
# order. As such, we remove all used variables but the one we set.
tempfile.tempdir = storage_dir
os.environ['TMP'] = storage_dir
os.environ.pop('TEMP', None)
os.environ.pop('TMPDIR', None)
def _config_git(self):
# Get the Git hash
try:
git_cmd = ['git', '-C', self.app_root, 'describe', '--always']
description = subprocess.check_output(git_cmd)
self.config['GIT_REVISION'] = description.strip()
except (subprocess.CalledProcessError, OSError) as ex:
self.log.warning('Unable to run "git describe" to get git revision: %s', ex)
self.config['GIT_REVISION'] = 'unknown'
self.log.info('Git revision %r', self.config['GIT_REVISION'])
def _config_sentry(self):
# TODO(Sybren): keep Sentry unconfigured when running CLI commands.
sentry_dsn = self.config.get('SENTRY_CONFIG', {}).get('dsn')
if self.config.get('TESTING') or sentry_dsn in {'', '-set-in-config-local-'}:
self.log.warning('Sentry NOT configured.')
self.sentry = None
return
self.sentry = sentry_extra.PillarSentry(
self, logging=True, level=logging.WARNING,
logging_exclusions=('werkzeug',))
self.log.debug('Sentry setup complete')
def _config_google_cloud_storage(self):
# Google Cloud project
try:
os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = \
self.config['GCLOUD_APP_CREDENTIALS']
except KeyError:
raise ConfigurationMissingError('GCLOUD_APP_CREDENTIALS configuration is missing')
# Storage backend (GCS)
try:
os.environ['GCLOUD_PROJECT'] = self.config['GCLOUD_PROJECT']
except KeyError:
raise ConfigurationMissingError('GCLOUD_PROJECT configuration value is missing')
def _config_algolia(self):
# Algolia search
if 'algolia' not in self.config['SEARCH_BACKENDS']:
return
from algoliasearch import algoliasearch
client = algoliasearch.Client(self.config['ALGOLIA_USER'],
self.config['ALGOLIA_API_KEY'])
self.algolia_client = client
self.algolia_index_users = client.init_index(self.config['ALGOLIA_INDEX_USERS'])
self.algolia_index_nodes = client.init_index(self.config['ALGOLIA_INDEX_NODES'])
def _config_encoding_backend(self):
# Encoding backend
if self.config['ENCODING_BACKEND'] != 'zencoder':
self.log.warning('Encoding backend %r not supported, no video encoding possible!',
self.config['ENCODING_BACKEND'])
return
self.log.info('Setting up video encoding backend %r',
self.config['ENCODING_BACKEND'])
from zencoder import Zencoder
self.encoding_service_client = Zencoder(self.config['ZENCODER_API_KEY'])
def _config_caching(self):
from flask_caching import Cache
self.cache = Cache(self)
def set_languages(self, translations_folder: pathlib.Path):
"""Set the supported languages based on translations folders
English is an optional language included by default, since we will
never have a translations folder for it.
"""
self.default_locale = self.config['DEFAULT_LOCALE']
self.config['BABEL_DEFAULT_LOCALE'] = self.default_locale
# Determine available languages.
languages = list()
# The available languages will be determined based on available
# translations in the //translations/ folder. The exception is (American) English
# since all the text is originally in English already.
# That said, on rare occasions we may want to never show
# the site in English.
if self.config['SUPPORT_ENGLISH']:
languages.append('en_US')
base_path = pathlib.Path(self.app_root) / 'translations'
if not base_path.is_dir():
self.log.debug('Project has no translations folder: %s', base_path)
else:
languages.extend(i.name for i in base_path.iterdir() if i.is_dir())
# Use set for quicker lookup
self.languages = set(languages)
self.log.info('Available languages: %s' % ', '.join(self.languages))
def _config_translations(self):
"""
Initialize translations variable.
The BABEL_TRANSLATION_DIRECTORIES has the folder for the compiled
translations files. It uses ; separation for the extension folders.
"""
self.log.info('Configure translations')
translations_path = pathlib.Path(__file__).parents[1].joinpath('translations')
self.config['BABEL_TRANSLATION_DIRECTORIES'] = str(translations_path)
babel = Babel(self)
self.set_languages(translations_path)
# get_locale() is registered as a callback for locale selection.
# That prevents the function from being garbage collected.
@babel.localeselector
def get_locale() -> str:
"""
Callback runs before each request to give us a chance to choose the
language to use when producing its response.
We set g.locale to be able to access it from the template pages.
We still need to return it explicitly, since this function is
called as part of the babel translation framework.
We are using the 'Accept-Languages' header to match the available
translations with the user supported languages.
"""
locale = request.accept_languages.best_match(
self.languages, self.default_locale)
g.locale = locale
return locale
def load_extension(self, pillar_extension, url_prefix):
from .extension import PillarExtension
if not isinstance(pillar_extension, PillarExtension):
if self.config.get('DEBUG'):
for cls in type(pillar_extension).mro():
self.log.error('class %42r (%i) is %42r (%i): %s',
cls, id(cls), PillarExtension, id(PillarExtension),
cls is PillarExtension)
raise AssertionError('Extension has wrong type %r' % type(pillar_extension))
self.log.info('Loading extension %s', pillar_extension.name)
# Remember this extension, and disallow duplicates.
if pillar_extension.name in self.pillar_extensions:
raise ValueError('Extension with name %s already loaded' % pillar_extension.name)
self.pillar_extensions[pillar_extension.name] = pillar_extension
# Load extension Flask configuration
for key, value in pillar_extension.flask_config().items():
self.config.setdefault(key, value)
# Load extension blueprint(s)
for blueprint in pillar_extension.blueprints():
if blueprint.url_prefix:
if not url_prefix:
# If we registered the extension with url_prefix=None
url_prefix = ''
blueprint_prefix = url_prefix + blueprint.url_prefix
else:
blueprint_prefix = url_prefix
self.register_blueprint(blueprint, url_prefix=blueprint_prefix)
# Load template paths
tpath = pillar_extension.template_path
if tpath:
self.log.info('Extension %s: adding template path %s',
pillar_extension.name, tpath)
if not os.path.exists(tpath):
raise ValueError('Template path %s for extension %s does not exist.'
% (tpath, pillar_extension.name))
self.pillar_extensions_template_paths.append(tpath)
# Load extension Eve settings
eve_settings = pillar_extension.eve_settings()
if 'DOMAIN' in eve_settings:
pillar_ext_prefix = pillar_extension.name + '_'
pillar_url_prefix = pillar_extension.name + '/'
for key, collection in eve_settings['DOMAIN'].items():
assert key.startswith(pillar_ext_prefix), \
'Eve collection names of %s MUST start with %r' % \
(pillar_extension.name, pillar_ext_prefix)
url = key.replace(pillar_ext_prefix, pillar_url_prefix)
collection.setdefault('datasource', {}).setdefault('source', key)
collection.setdefault('url', url)
self.config['DOMAIN'].update(eve_settings['DOMAIN'])
# Configure the extension translations
trpath = pillar_extension.translations_path
if not trpath:
self.log.debug('Extension %s does not have a translations folder',
pillar_extension.name)
return
self.log.info('Extension %s: adding translations path %s',
pillar_extension.name, trpath)
# Babel requires semi-colon string separation
self.config['BABEL_TRANSLATION_DIRECTORIES'] += ';' + str(trpath)
def _config_jinja_env(self):
# Start with the extensions...
paths_list = [
jinja2.FileSystemLoader(path)
for path in reversed(self.pillar_extensions_template_paths)
]
# ...then load Pillar paths.
pillar_dir = os.path.dirname(os.path.realpath(__file__))
parent_theme_path = os.path.join(pillar_dir, 'web', 'templates')
current_path = os.path.join(self.app_root, 'templates')
paths_list += [
jinja2.FileSystemLoader(current_path),
jinja2.FileSystemLoader(parent_theme_path),
self.jinja_loader
]
# Set up a custom loader, so that Jinja searches for a theme file first
# in the current theme dir, and if it fails it searches in the default
# location.
custom_jinja_loader = jinja2.ChoiceLoader(paths_list)
self.jinja_loader = custom_jinja_loader
pillar.web.jinja.setup_jinja_env(self.jinja_env, self.config)
# Register context processors from extensions
for ext in self.pillar_extensions.values():
if not ext.has_context_processor:
continue
self.log.debug('Registering context processor for %s', ext.name)
self.context_processor(ext.context_processor)
def _config_static_dirs(self):
# Setup static folder for the instanced app
self.static_folder = os.path.join(self.app_root, 'static')
# Setup static folder for Pillar
pillar_dir = os.path.dirname(os.path.realpath(__file__))
pillar_static_folder = os.path.join(pillar_dir, 'web', 'static')
self.register_static_file_endpoint('/static/pillar', 'static_pillar', pillar_static_folder)
# Setup static folders for extensions
for name, ext in self.pillar_extensions.items():
if not ext.static_path:
continue
self.register_static_file_endpoint('/static/%s' % name,
'static_%s' % name,
ext.static_path)
def _config_celery(self):
from celery import Celery
self.log.info('Configuring Celery')
# Pillar-defined Celery task modules:
celery_task_modules = [
'pillar.celery.avatar',
'pillar.celery.badges',
'pillar.celery.email_tasks',
'pillar.celery.file_link_tasks',
'pillar.celery.search_index_tasks',
'pillar.celery.tasks',
]
# Allow Pillar extensions from defining their own Celery tasks.
for extension in self.pillar_extensions.values():
celery_task_modules.extend(extension.celery_task_modules)
self.celery = Celery(
'pillar.celery',
backend=self.config['CELERY_BACKEND'],
broker=self.config['CELERY_BROKER'],
include=celery_task_modules,
task_track_started=True,
result_expires=3600,
)
# This configures the Celery task scheduler in such a way that we don't
# have to import the pillar.celery.XXX modules. Remember to run
# 'manage.py celery beat' too, otherwise those will never run.
beat_schedule = self.config.get('CELERY_BEAT_SCHEDULE')
if beat_schedule:
self.celery.conf.beat_schedule = beat_schedule
self.log.info('Pinging Celery workers')
self.log.info('Response: %s', self.celery.control.ping())
def _config_user_roles(self):
"""Gathers all user roles from extensions.
The union of all user roles can be obtained from self.user_roles.
"""
for extension in self.pillar_extensions.values():
indexed_but_not_defined = extension.user_roles_indexable - extension.user_roles
if indexed_but_not_defined:
raise ValueError('Extension %s has roles %s indexable but not in user_roles'
% (extension.name, indexed_but_not_defined))
self._user_roles.update(extension.user_roles)
self._user_roles_indexable.update(extension.user_roles_indexable)
self.log.info('Loaded %i user roles from extensions, %i of which are indexable',
len(self._user_roles), len(self._user_roles_indexable))
def _config_user_caps(self):
"""Merges all capability settings from app config and extensions."""
app_caps = collections.defaultdict(frozenset, **self.config['USER_CAPABILITIES'])
for extension in self.pillar_extensions.values():
ext_caps = extension.user_caps
for role, caps in ext_caps.items():
union_caps = frozenset(app_caps[role] | caps)
app_caps[role] = union_caps
self._user_caps = app_caps
if self.log.isEnabledFor(logging.DEBUG):
import pprint
self.log.debug('Configured user capabilities: %s', pprint.pformat(self._user_caps))
def register_static_file_endpoint(self, url_prefix, endpoint_name, static_folder):
from pillar.web.staticfile import PillarStaticFile
view_func = PillarStaticFile.as_view(endpoint_name, static_folder=static_folder)
self.add_url_rule(f'{url_prefix}/<hashed_path:filename>', view_func=view_func)
def process_extensions(self):
"""This is about Eve extensions, not Pillar extensions."""
# Re-initialise Eve after we allowed Pillar submodules to be loaded.
# EVIL STARTS HERE. It just copies part of the Eve.__init__() method.
self.set_defaults()
self.validate_config()
self.validate_domain_struct()
self._init_url_rules()
self._init_media_endpoint()
self._init_schema_endpoint()
if self.config['OPLOG'] is True:
self._init_oplog()
domain_copy = copy.deepcopy(self.config['DOMAIN'])
for resource, settings in domain_copy.items():
self.register_resource(resource, settings)
self.register_error_handlers()
# EVIL ENDS HERE. No guarantees, though.
self.finish_startup()
def register_error_handlers(self):
super(PillarServer, self).register_error_handlers()
# Register error handlers per code.
for code in (403, 404, 412, 500):
self.register_error_handler(code, self.pillar_error_handler)
# Register error handlers per exception.
from pillarsdk import exceptions as sdk_exceptions
sdk_handlers = [
(sdk_exceptions.UnauthorizedAccess, self.handle_sdk_unauth),
(sdk_exceptions.ForbiddenAccess, self.handle_sdk_forbidden),
(sdk_exceptions.ResourceNotFound, self.handle_sdk_resource_not_found),
(sdk_exceptions.ResourceInvalid, self.handle_sdk_resource_invalid),
(sdk_exceptions.MethodNotAllowed, self.handle_sdk_method_not_allowed),
(sdk_exceptions.PreconditionFailed, self.handle_sdk_precondition_failed),
]
for (eclass, handler) in sdk_handlers:
self.register_error_handler(eclass, handler)
def handle_sdk_unauth(self, error):
"""Global exception handling for pillarsdk UnauthorizedAccess
Currently the api is fully locked down so we need to constantly
check for user authorization.
"""
return flask.redirect(flask.url_for('users.login'))
def handle_sdk_forbidden(self, error):
self.log.info('Forwarding ForbiddenAccess exception to client: %s', error, exc_info=True)
error.code = 403
return self.pillar_error_handler(error)
def handle_sdk_resource_not_found(self, error):
self.log.info('Forwarding ResourceNotFound exception to client: %s', error, exc_info=True)
content = getattr(error, 'content', None)
if content:
try:
error_content = json.loads(content)
except ValueError:
error_content = None
if error_content and error_content.get('_deleted', False):
# This document used to exist, but doesn't any more. Let the user know.
doc_name = error_content.get('name')
node_type = error_content.get('node_type')
if node_type:
node_type = node_type.replace('_', ' ').title()
if doc_name:
description = '%s "%s" was deleted.' % (node_type, doc_name)
else:
description = 'This %s was deleted.' % (node_type,)
else:
if doc_name:
description = '"%s" was deleted.' % doc_name
else:
description = None
error.description = description
error.code = 404
return self.pillar_error_handler(error)
def handle_sdk_precondition_failed(self, error):
self.log.info('Forwarding PreconditionFailed exception to client: %s', error)
error.code = 412
return self.pillar_error_handler(error)
def handle_sdk_resource_invalid(self, error):
self.log.exception('Forwarding ResourceInvalid exception to client: %s', error, exc_info=True)
# Raising a Werkzeug 422 exception doesn't work, as Flask turns it into a 500.
return _('The submitted data could not be validated.'), 422
def handle_sdk_method_not_allowed(self, error):
"""Forwards 405 Method Not Allowed to the client.
This is actually not fair, as a 405 between Pillar and Pillar-Web
doesn't imply that the request the client did on Pillar-Web is not
allowed. However, it does allow us to debug this if it happens, by
watching for 405s in the browser.
"""
from flask import request
self.log.info('Forwarding MethodNotAllowed exception to client: %s', error, exc_info=True)
self.log.info('HTTP Referer is %r', request.referrer)
# Raising a Werkzeug 405 exception doesn't work, as Flask turns it into a 500.
return 'The requested HTTP method is not allowed on this URL.', 405
def pillar_error_handler(self, error_ob):
# 'error_ob' can be any exception. If it's not a Werkzeug exception,
# handle it as a 500.
if not hasattr(error_ob, 'code'):
error_ob.code = 500
if not hasattr(error_ob, 'description'):
error_ob.description = str(error_ob)
if request.full_path.startswith('/%s/' % self.config['URL_PREFIX']):
from pillar.api.utils import jsonify
# This is an API request, so respond in JSON.
return jsonify({
'_status': 'ERR',
'_code': error_ob.code,
'_message': error_ob.description,
}, status=error_ob.code)
# See whether we should return an embedded page or a regular one.
if request.is_xhr:
fname = 'errors/%i_embed.html' % error_ob.code
else:
fname = 'errors/%i.html' % error_ob.code
# Also handle the case where we didn't create a template for this error.
try:
return render_template(fname, description=error_ob.description), error_ob.code
except TemplateNotFound:
self.log.warning('Error template %s for code %i not found',
fname, error_ob.code)
return render_template('errors/500.html'), error_ob.code
def finish_startup(self):
self.log.info('Using MongoDB database %r', self.config['MONGO_DBNAME'])
with self.app_context():
self.setup_db_indices()
self._config_celery()
api.setup_app(self)
web.setup_app(self)
authentication.setup_app(self)
# Register Flask Debug Toolbar (disabled by default).
from flask_debugtoolbar import DebugToolbarExtension
DebugToolbarExtension(self)
for ext in self.pillar_extensions.values():
self.log.info('Setting up extension %s', ext.name)
ext.setup_app(self)
self._config_jinja_env()
self._config_static_dirs()
self._config_user_roles()
self._config_user_caps()
# Only enable this when debugging.
# TODO(fsiddi): Consider removing this in favor of the routes tab in Flask Debug Toolbar.
# self._list_routes()
def setup_db_indices(self):
"""Adds missing database indices.
This does NOT drop and recreate existing indices,
nor does it reconfigure existing indices.
If you want that, drop them manually first.
"""
self.log.debug('Adding any missing database indices.')
import pymongo
db = self.data.driver.db
coll = db['tokens']
coll.create_index([('user', pymongo.ASCENDING)])
coll.create_index([('token', pymongo.ASCENDING)])
coll.create_index([('token_hashed', pymongo.ASCENDING)])
coll = db['notifications']
coll.create_index([('user', pymongo.ASCENDING)])
coll = db['activities-subscriptions']
coll.create_index([('context_object', pymongo.ASCENDING)])
coll = db['nodes']
# This index is used for queries on project, and for queries on
# the combination (project, node type).
coll.create_index([('project', pymongo.ASCENDING),
('node_type', pymongo.ASCENDING)])
coll.create_index([('parent', pymongo.ASCENDING)])
coll.create_index([('short_code', pymongo.ASCENDING)],
sparse=True, unique=True)
# Used for latest assets & comments
coll.create_index([('properties.status', pymongo.ASCENDING),
('node_type', pymongo.ASCENDING),
('_created', pymongo.DESCENDING)])
# Used for asset tags
coll.create_index([('properties.tags', pymongo.ASCENDING)])
coll = db['projects']
# This index is used for statistics, and for fetching public projects.
coll.create_index([('is_private', pymongo.ASCENDING)])
coll.create_index([('category', pymongo.ASCENDING)])
coll = db['organizations']
coll.create_index([('ip_ranges.start', pymongo.ASCENDING)])
coll.create_index([('ip_ranges.end', pymongo.ASCENDING)])
self.log.debug('Created database indices')
def register_api_blueprint(self, blueprint, url_prefix):
# TODO: use Eve config variable instead of hard-coded '/api'
self.register_blueprint(blueprint, url_prefix='/api' + url_prefix)
def make_header(self, username, subclient_id=''):
"""Returns a Basic HTTP Authentication header value."""
import base64
# b64encode() requires bytes in Python 3, so encode before and decode after.
credentials = ('%s:%s' % (username, subclient_id)).encode()
return 'basic ' + base64.b64encode(credentials).decode()
def post_internal(self, resource: str, payl=None, skip_validation=False):
"""Workaround for Eve issue https://github.com/pyeve/eve/issues/810"""
from eve.methods.post import post_internal
url = self.config['URLS'][resource]
path = '%s/%s' % (self.api_prefix, url)
with self.__fake_request_url_rule('POST', path):
return post_internal(resource, payl=payl, skip_validation=skip_validation)[:4]
def put_internal(self, resource: str, payload=None, concurrency_check=False,
skip_validation=False, **lookup):
"""Workaround for Eve issue https://github.com/pyeve/eve/issues/810"""
from eve.methods.put import put_internal
url = self.config['URLS'][resource]
path = '%s/%s/%s' % (self.api_prefix, url, lookup['_id'])
with self.__fake_request_url_rule('PUT', path):
return put_internal(resource, payload=payload, concurrency_check=concurrency_check,
skip_validation=skip_validation, **lookup)[:4]
def patch_internal(self, resource: str, payload=None, concurrency_check=False,
skip_validation=False, **lookup):
"""Workaround for Eve issue https://github.com/pyeve/eve/issues/810"""
from eve.methods.patch import patch_internal
url = self.config['URLS'][resource]
path = '%s/%s/%s' % (self.api_prefix, url, lookup['_id'])
with self.__fake_request_url_rule('PATCH', path):
return patch_internal(resource, payload=payload, concurrency_check=concurrency_check,
skip_validation=skip_validation, **lookup)[:4]
def delete_internal(self, resource: str, concurrency_check=False,
suppress_callbacks=False, **lookup):
"""Workaround for Eve issue https://github.com/pyeve/eve/issues/810"""
from eve.methods.delete import deleteitem_internal
url = self.config['URLS'][resource]
path = '%s/%s/%s' % (self.api_prefix, url, lookup['_id'])
with self.__fake_request_url_rule('DELETE', path):
return deleteitem_internal(resource,
concurrency_check=concurrency_check,
suppress_callbacks=suppress_callbacks,
**lookup)[:4]
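A minimal usage sketch for these wrappers (the payload is hypothetical; the four-tuple is Eve's (document, last_modified, etag, status), as implied by the [:4] slicing above):
with current_app.app_context():
    doc, _, _, status = current_app.post_internal('nodes', {'name': 'Example node'})
    if status != 201:
        current_app.log.error('Unable to create node: %s', doc)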
def _list_routes(self):
from pprint import pprint
from flask import url_for
def has_no_empty_params(rule):
defaults = rule.defaults if rule.defaults is not None else ()
arguments = rule.arguments if rule.arguments is not None else ()
return len(defaults) >= len(arguments)
links = []
with self.test_request_context():
for rule in self.url_map.iter_rules():
# Filter out rules we can't navigate to in a browser
# and rules that require parameters
if "GET" in rule.methods and has_no_empty_params(rule):
url = url_for(rule.endpoint, **(rule.defaults or {}))
links.append((url, rule.endpoint, rule.methods))
if "PATCH" in rule.methods:
args = {arg: arg for arg in rule.arguments}
url = url_for(rule.endpoint, **args)
links.append((url, rule.endpoint, rule.methods))
links.sort(key=lambda t: (('/api/' in t[0]), len(t[0])))
pprint(links, width=300)
def db(self, collection_name: str = None) \
-> typing.Union[pymongo.collection.Collection, pymongo.database.Database]:
"""Returns the MongoDB database, or the collection (if given)"""
if collection_name:
return self.data.driver.db[collection_name]
return self.data.driver.db
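For illustration, both call styles (the collection name and lookup are hypothetical):
users_coll = current_app.db('users')  # a pymongo Collection
whole_db = current_app.db()           # the pymongo Database itself
user = users_coll.find_one({'email': 'someone@example.com'})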
def extension_sidebar_links(self, project):
"""Returns the sidebar links for the given projects.
:returns: HTML as a string for the sidebar.
"""
if not project:
return ''
return jinja2.Markup(''.join(ext.sidebar_links(project)
for ext in self.pillar_extensions.values()))
@contextlib.contextmanager
def __fake_request_url_rule(self, method: str, url_path: str):
"""Tries to force-set the request URL rule.
This is required by Eve (since 0.70) to be able to construct a
Location HTTP header that points to the resource item.
See post_internal, put_internal and patch_internal.
"""
import werkzeug.exceptions as wz_exceptions
with self.test_request_context(method=method, path=url_path) as ctx:
try:
rule, _ = ctx.url_adapter.match(url_path, method=method, return_rule=True)
except (wz_exceptions.MethodNotAllowed, wz_exceptions.NotFound):
# We're POSTing things that we haven't told Eve are POSTable. Try again using the
# GET method.
rule, _ = ctx.url_adapter.match(url_path, method='GET', return_rule=True)
current_request = request._get_current_object()
current_request.url_rule = rule
yield ctx
def validator_for_resource(self,
resource_name: str) -> custom_field_validation.ValidateCustomFields:
schema = self.config['DOMAIN'][resource_name]['schema']
validator = self.validator(schema, resource_name)
return validator
@property
def user_roles(self) -> typing.FrozenSet[str]:
return frozenset(self._user_roles)
@property
def user_roles_indexable(self) -> typing.FrozenSet[str]:
return frozenset(self._user_roles_indexable)
@property
def user_caps(self) -> typing.Mapping[str, typing.FrozenSet[str]]:
return self._user_caps
@property
def real_app(self) -> 'PillarServer':
"""The real application object.
Can be used to obtain the real app object from a LocalProxy.
"""
return self
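For example, code that holds the LocalProxy exported as pillar.current_app can unwrap it like this:
from pillar import current_app
real_server = current_app.real_app  # a PillarServer, not a LocalProxy
assert real_server.real_app is real_server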

20
pillar/api/__init__.py Normal file
View File

@ -0,0 +1,20 @@
def setup_app(app):
from . import encoding, blender_id, projects, local_auth, file_storage
from . import users, nodes, latest, blender_cloud, service, activities, timeline
from . import organizations
from . import search
encoding.setup_app(app, url_prefix='/encoding')
blender_id.setup_app(app, url_prefix='/blender_id')
search.setup_app(app, url_prefix='/newsearch')
projects.setup_app(app, api_prefix='/p')
local_auth.setup_app(app, url_prefix='/auth')
file_storage.setup_app(app, url_prefix='/storage')
latest.setup_app(app, url_prefix='/latest')
timeline.setup_app(app, url_prefix='/timeline')
blender_cloud.setup_app(app, url_prefix='/bcloud')
users.setup_app(app, api_prefix='/users')
service.setup_app(app, api_prefix='/service')
nodes.setup_app(app, url_prefix='/nodes')
activities.setup_app(app)
organizations.setup_app(app)

View File

@ -1,7 +1,10 @@
from flask import g
from flask import current_app
from eve.methods.post import post_internal
from application.modules.users import gravatar
import logging
from flask import request, current_app
import pillar.api.users.avatar
from pillar.auth import current_user
log = logging.getLogger(__name__)
def notification_parse(notification):
@ -15,6 +18,11 @@ def notification_parse(notification):
if activity is None or activity['object_type'] != 'node':
return
node = nodes_collection.find_one({'_id': activity['object']})
if not node:
# This can happen when a notification is generated and then the
# node is deleted.
return
# Initial support only for node_type comments
if node['node_type'] != 'comment':
return
@ -23,7 +31,7 @@ def notification_parse(notification):
object_name = ''
object_id = activity['object']
if node['parent']['user'] == g.current_user['user_id']:
if node['parent']['user'] == current_user.user_id:
owner = "your {0}".format(node['parent']['node_type'])
else:
parent_comment_user = users_collection.find_one(
@ -45,7 +53,7 @@ def notification_parse(notification):
action = activity['verb']
lookup = {
'user': g.current_user['user_id'],
'user': current_user.user_id,
'context_object_type': 'node',
'context_object': context_object_id,
}
@ -60,7 +68,7 @@ def notification_parse(notification):
if actor:
parsed_actor = {
'username': actor['username'],
'avatar': gravatar(actor['email'])}
'avatar': pillar.api.users.avatar.url(actor)}
else:
parsed_actor = None
@ -83,14 +91,14 @@ def notification_parse(notification):
def notification_get_subscriptions(context_object_type, context_object_id, actor_user_id):
subscriptions_collection = current_app.data.driver.db['activities-subscriptions']
subscriptions_collection = current_app.db('activities-subscriptions')
lookup = {
'user': {"$ne": actor_user_id},
'context_object_type': context_object_type,
'context_object': context_object_id,
'is_subscribed': True,
}
return subscriptions_collection.find(lookup)
return subscriptions_collection.find(lookup), subscriptions_collection.count_documents(lookup)
def activity_subscribe(user_id, context_object_type, context_object_id):
@ -111,7 +119,9 @@ def activity_subscribe(user_id, context_object_type, context_object_id):
# If no subscription exists, we create one
if not subscription:
post_internal('activities-subscriptions', lookup)
# Workaround for issue: https://github.com/pyeve/eve/issues/1174
lookup['notifications'] = {}
current_app.post_internal('activities-subscriptions', lookup)
def activity_object_add(actor_user_id, verb, object_type, object_id,
@ -130,25 +140,85 @@ def activity_object_add(actor_user_id, verb, object_type, object_id,
:param object_id: object id, to be traced with object_type_id
"""
subscriptions = notification_get_subscriptions(
subscriptions, subscription_count = notification_get_subscriptions(
context_object_type, context_object_id, actor_user_id)
if subscriptions.count() > 0:
activity = dict(
actor_user=actor_user_id,
verb=verb,
object_type=object_type,
object=object_id,
context_object_type=context_object_type,
context_object=context_object_id
)
if subscription_count == 0:
return
activity = post_internal('activities', activity)
if activity[3] != 201:
# If creation failed for any reason, do not create any notification
return
for subscription in subscriptions:
notification = dict(
user=subscription['user'],
activity=activity[0]['_id'])
post_internal('notifications', notification)
info, status = register_activity(actor_user_id, verb, object_type, object_id,
context_object_type, context_object_id)
if status != 201:
# If creation failed for any reason, do not create any notification
return
for subscription in subscriptions:
notification = dict(
user=subscription['user'],
activity=info['_id'])
current_app.post_internal('notifications', notification)
def register_activity(actor_user_id, verb, object_type, object_id,
context_object_type, context_object_id,
project_id=None,
node_type=None):
"""Registers an activity.
This works using the following pattern:
ACTOR -> VERB -> OBJECT -> CONTEXT
:param actor_user_id: id of the user who is changing the object
:param verb: the action on the object ('commented', 'replied')
:param object_type: hardcoded name, see database schema
:param object_id: object id, to be traced with object_type
:param context_object_type: the type of the context object, like 'project' or 'node',
see database schema
:param context_object_id:
:param project_id: optional project ID to make the activity easily queryable
per project.
:param node_type: optional, node type of the node receiving the activity.
:returns: tuple (info, status_code), where a successful operation should have
status_code=201. If it is not 201, a warning is logged.
"""
activity = {
'actor_user': actor_user_id,
'verb': verb,
'object_type': object_type,
'object': object_id,
'context_object_type': context_object_type,
'context_object': context_object_id}
if project_id:
activity['project'] = project_id
if node_type:
activity['node_type'] = node_type
info, _, _, status_code = current_app.post_internal('activities', activity)
if status_code != 201:
log.error('register_activity: code %i creating activity %s: %s',
status_code, activity, info)
else:
log.info('register_activity: user %s "%s" on %s %s, context %s %s',
actor_user_id, verb, object_type, object_id,
context_object_type, context_object_id)
return info, status_code
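A hedged usage sketch (all IDs are hypothetical) showing how a comment activity could be registered:
info, status = register_activity(
    actor_user_id=user_id, verb='commented',
    object_type='node', object_id=comment_node_id,
    context_object_type='node', context_object_id=parent_node_id,
    project_id=project_id, node_type='comment')
# A non-201 status has already been logged by register_activity itself.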
def before_returning_item_notifications(response):
if request.args.get('parse'):
notification_parse(response)
def before_returning_resource_notifications(response):
for item in response['_items']:
if request.args.get('parse'):
notification_parse(item)
def setup_app(app):
app.on_fetched_item_notifications += before_returning_item_notifications
app.on_fetched_resource_notifications += before_returning_resource_notifications

View File

@ -24,7 +24,8 @@ def blender_cloud_addon_version():
def setup_app(app, url_prefix):
from . import texture_libs, home_project
from . import texture_libs, home_project, subscription
texture_libs.setup_app(app, url_prefix=url_prefix)
home_project.setup_app(app, url_prefix=url_prefix)
subscription.setup_app(app, url_prefix=url_prefix)

View File

@ -1,17 +1,14 @@
import copy
import logging
import datetime
from bson import ObjectId, tz_util
from eve.methods.post import post_internal
from eve.methods.put import put_internal
from bson import ObjectId
from eve.methods.get import get
from flask import Blueprint, g, current_app, request
from flask import Blueprint, current_app, request
from pillar.api import utils
from pillar.api.utils import authentication, authorization, utcnow
from werkzeug import exceptions as wz_exceptions
from application.modules import projects
from application import utils
from application.utils import authentication, authorization
from pillar.api.projects import utils as proj_utils
blueprint = Blueprint('blender_cloud.home_project', __name__)
log = logging.getLogger(__name__)
@ -20,7 +17,7 @@ log = logging.getLogger(__name__)
HOME_PROJECT_USERS = set()
# Users with any of these roles will get full write access to their home project.
HOME_PROJECT_WRITABLE_USERS = {u'subscriber', u'demo'}
HOME_PROJECT_WRITABLE_USERS = {'subscriber', 'demo'}
HOME_PROJECT_DESCRIPTION = ('# Your home project\n\n'
'This is your home project. It allows synchronisation '
@ -32,7 +29,7 @@ HOME_PROJECT_SUMMARY = 'This is your home project. Here you can sync your Blende
# 'as a pastebin for text, images and other assets, and '
# 'allows synchronisation of your Blender settings.')
# HOME_PROJECT_SUMMARY = 'This is your home project. Pastebin and Blender settings sync in one!'
SYNC_GROUP_NODE_NAME = u'Blender Sync'
SYNC_GROUP_NODE_NAME = 'Blender Sync'
SYNC_GROUP_NODE_DESC = ('The [Blender Cloud Addon](https://cloud.blender.org/services'
'#blender-addon) will synchronize your Blender settings here.')
@ -73,7 +70,7 @@ def create_blender_sync_node(project_id, admin_group_id, user_id):
}
}
r, _, _, status = post_internal('nodes', node)
r, _, _, status = current_app.post_internal('nodes', node)
if status != 201:
log.warning('Unable to create Blender Sync node for home project %s: %s',
project_id, r)
@ -109,13 +106,13 @@ def create_home_project(user_id, write_access):
project = deleted_proj
else:
log.debug('User %s does not have a deleted project', user_id)
project = projects.create_new_project(project_name='Home',
user_id=ObjectId(user_id),
overrides=overrides)
project = proj_utils.create_new_project(project_name='Home',
user_id=ObjectId(user_id),
overrides=overrides)
# Re-validate the authentication token, so that the put_internal call sees the
# new group created for the project.
authentication.validate_token()
authentication.validate_token(force=True)
# There are a few things in the on_insert_projects hook we need to adjust.
@ -124,10 +121,10 @@ def create_home_project(user_id, write_access):
# Set up the correct node types. No need to set permissions for them,
# as the inherited project permissions are fine.
from manage_extra.node_types.group import node_type_group
from manage_extra.node_types.asset import node_type_asset
# from manage_extra.node_types.text import node_type_text
from manage_extra.node_types.comment import node_type_comment
from pillar.api.node_types.group import node_type_group
from pillar.api.node_types.asset import node_type_asset
# from pillar.api.node_types.text import node_type_text
from pillar.api.node_types.comment import node_type_comment
# For non-subscribers: take away write access from the admin group,
# and grant it to certain node types.
@ -137,8 +134,8 @@ def create_home_project(user_id, write_access):
# This allows people to comment on shared images and see comments.
node_type_comment = assign_permissions(
node_type_comment,
subscriber_methods=[u'GET', u'POST'],
world_methods=[u'GET'])
subscriber_methods=['GET', 'POST'],
world_methods=['GET'])
project['node_types'] = [
node_type_group,
@ -147,8 +144,8 @@ def create_home_project(user_id, write_access):
node_type_comment,
]
result, _, _, status = put_internal('projects', utils.remove_private_keys(project),
_id=project['_id'])
result, _, _, status = current_app.put_internal('projects', utils.remove_private_keys(project),
_id=project['_id'])
if status != 200:
log.error('Unable to update home project %s for user %s: %s',
project['_id'], user_id, result)
@ -166,7 +163,7 @@ def create_home_project(user_id, write_access):
def assign_permissions(node_type, subscriber_methods, world_methods):
"""Assigns permissions to the node type object.
:param node_type: a node type from manage_extra.node_types.
:param node_type: a node type from pillar.api.node_types.
:type node_type: dict
:param subscriber_methods: allowed HTTP methods for users of role 'subscriber',
'demo' and 'admin'.
@ -177,7 +174,7 @@ def assign_permissions(node_type, subscriber_methods, world_methods):
:rtype: dict
"""
from application.modules import service
from pillar.api import service
nt_with_perms = copy.deepcopy(node_type)
@ -203,8 +200,10 @@ def home_project():
Eve projections are supported, but at least the following fields must be present:
'permissions', 'category', 'user'
"""
user_id = g.current_user['user_id']
roles = g.current_user.get('roles', ())
from pillar.auth import current_user
user_id = current_user.user_id
roles = current_user.roles
log.debug('Possibly creating home project for user %s with roles %s', user_id, roles)
if HOME_PROJECT_USERS and not HOME_PROJECT_USERS.intersection(roles):
@ -217,7 +216,7 @@ def home_project():
write_access = write_access_with_roles(roles)
create_home_project(user_id, write_access)
resp, _, _, status, _ = get('projects', category=u'home', user=user_id)
resp, _, _, status, _ = get('projects', category='home', user=user_id)
if status != 200:
return utils.jsonify(resp), status
@ -250,18 +249,18 @@ def home_project_permissions(write_access):
"""
if write_access:
return [u'GET', u'PUT', u'POST', u'DELETE']
return [u'GET']
return ['GET', 'PUT', 'POST', 'DELETE']
return ['GET']
def has_home_project(user_id):
"""Returns True iff the user has a home project."""
proj_coll = current_app.data.driver.db['projects']
return proj_coll.count({'user': user_id, 'category': 'home', '_deleted': False}) > 0
return proj_coll.count_documents({'user': user_id, 'category': 'home', '_deleted': False}) > 0
def get_home_project(user_id, projection=None):
def get_home_project(user_id: ObjectId, projection=None) -> dict:
"""Returns the home project"""
proj_coll = current_app.data.driver.db['projects']
@ -273,16 +272,16 @@ def is_home_project(project_id, user_id):
"""Returns True iff the given project exists and is the user's home project."""
proj_coll = current_app.data.driver.db['projects']
return proj_coll.count({'_id': project_id,
'user': user_id,
'category': 'home',
'_deleted': False}) > 0
return proj_coll.count_documents({'_id': project_id,
'user': user_id,
'category': 'home',
'_deleted': False}) > 0
def mark_node_updated(node_id):
"""Uses pymongo to set the node's _updated to "now"."""
now = datetime.datetime.now(tz=tz_util.utc)
now = utcnow()
nodes_coll = current_app.data.driver.db['nodes']
return nodes_coll.update_one({'_id': node_id},
@ -391,7 +390,7 @@ def user_changed_role(sender, user):
user_id = user['_id']
if not has_home_project(user_id):
log.debug('User %s does not have a home project', user_id)
log.debug('User %s does not have a home project, not changing access permissions', user_id)
return
proj_coll = current_app.data.driver.db['projects']
@ -414,12 +413,12 @@ def user_changed_role(sender, user):
def setup_app(app, url_prefix):
app.register_blueprint(blueprint, url_prefix=url_prefix)
app.register_api_blueprint(blueprint, url_prefix=url_prefix)
app.on_insert_nodes += check_home_project_nodes_permissions
app.on_inserted_nodes += mark_parents_as_updated
app.on_updated_nodes += mark_parent_as_updated
app.on_replaced_nodes += mark_parent_as_updated
from application.modules import service
from pillar.api import service
service.signal_user_changed_role.connect(user_changed_role)

View File

@ -0,0 +1,180 @@
import logging
import typing
import blinker
from flask import Blueprint, Response
import requests
from requests.adapters import HTTPAdapter
from pillar import auth, current_app
from pillar.api import blender_id
from pillar.api.utils import authorization, jsonify
from pillar.auth import current_user
log = logging.getLogger(__name__)
blueprint = Blueprint('blender_cloud.subscription', __name__)
# Mapping from roles on Blender ID to roles here in Pillar.
# Roles not mentioned here will not be synced from Blender ID.
ROLES_BID_TO_PILLAR = {
'cloud_subscriber': 'subscriber',
'cloud_demo': 'demo',
'cloud_has_subscription': 'has_subscription',
}
user_subscription_updated = blinker.NamedSignal(
'user_subscription_updated',
'The sender is a UserClass instance, kwargs includes "revoke_roles" and "grant_roles".')
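As a sketch, an extension could subscribe to this blinker signal like so (the receiver name is hypothetical):
@user_subscription_updated.connect
def _on_subscription_updated(user, *, grant_roles, revoke_roles, **_kwargs):
    log.info('subscription of %s changed: +%s -%s',
             user.user_id, sorted(grant_roles), sorted(revoke_roles))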
@blueprint.route('/update-subscription')
@authorization.require_login()
def update_subscription() -> typing.Tuple[str, int]:
"""Updates the subscription status of the current user.
Returns an empty HTTP response.
"""
my_log: logging.Logger = log.getChild('update_subscription')
real_current_user = auth.get_current_user() # multiple accesses, just get unproxied.
try:
bid_user = blender_id.fetch_blenderid_user()
except blender_id.LogoutUser:
auth.logout_user()
return '', 204
if not bid_user:
my_log.warning('Logged in user %s has no BlenderID account! '
'Unable to update subscription status.', real_current_user.user_id)
return '', 204
do_update_subscription(real_current_user, bid_user)
return '', 204
@blueprint.route('/update-subscription-for/<user_id>', methods=['POST'])
@authorization.require_login(require_cap='admin')
def update_subscription_for(user_id: str):
"""Updates the user based on their info at Blender ID."""
from urllib.parse import urljoin
from pillar.api.utils import str2id
my_log = log.getChild('update_subscription_for')
bid_session = requests.Session()
bid_session.mount('https://', HTTPAdapter(max_retries=5))
bid_session.mount('http://', HTTPAdapter(max_retries=5))
users_coll = current_app.db('users')
db_user = users_coll.find_one({'_id': str2id(user_id)})
if not db_user:
my_log.warning('User %s not found in database', user_id)
return Response(f'User {user_id} not found in our database', status=404)
log.info('Updating user %s from Blender ID on behalf of %s',
db_user['email'], current_user.email)
bid_user_id = blender_id.get_user_blenderid(db_user)
if not bid_user_id:
my_log.info('User %s has no Blender ID', user_id)
return Response('User has no Blender ID', status=404)
# Get the user info from Blender ID, and handle errors.
api_url = current_app.config['BLENDER_ID_USER_INFO_API']
api_token = current_app.config['BLENDER_ID_USER_INFO_TOKEN']
url = urljoin(api_url, bid_user_id)
resp = bid_session.get(url, headers={'Authorization': f'Bearer {api_token}'})
if resp.status_code == 404:
my_log.info('User %s has a Blender ID %s but Blender ID itself does not find it',
user_id, bid_user_id)
return Response(f'User {bid_user_id} does not exist at Blender ID', status=404)
if resp.status_code != 200:
my_log.info('Error code %s getting user %s from Blender ID (resp = %s)',
resp.status_code, user_id, resp.text)
return Response(f'Error code {resp.status_code} from Blender ID', status=resp.status_code)
# Update the user in our database.
local_user = auth.UserClass.construct('', db_user)
bid_user = resp.json()
do_update_subscription(local_user, bid_user)
return '', 204
def do_update_subscription(local_user: auth.UserClass, bid_user: dict):
"""Updates the subscription status of the user given the Blender ID user info.
Uses the badger service to update the user's roles from Blender ID.
bid_user should be a dict like:
{'id': 1234,
'full_name': 'मूंगफली मक्खन प्रेमी',
'email': 'here@example.com',
'roles': {'cloud_demo': True}}
The 'roles' key can also be an iterable of role names instead of a dict.
"""
from pillar.api import service
my_log: logging.Logger = log.getChild('do_update_subscription')
try:
email = bid_user['email']
except KeyError:
email = '-missing email-'
# Transform the BID roles from a dict to a set.
bidr = bid_user.get('roles', set())
if isinstance(bidr, dict):
bid_roles = {role
for role, has_role in bid_user.get('roles', {}).items()
if has_role}
else:
bid_roles = set(bidr)
# Handle the role changes via the badger service functionality.
plr_roles = set(local_user.roles)
grant_roles = set()
revoke_roles = set()
for bid_role, plr_role in ROLES_BID_TO_PILLAR.items():
if bid_role in bid_roles and plr_role not in plr_roles:
grant_roles.add(plr_role)
continue
if bid_role not in bid_roles and plr_role in plr_roles:
revoke_roles.add(plr_role)
user_id = local_user.user_id
if grant_roles:
if my_log.isEnabledFor(logging.INFO):
my_log.info('granting roles to user %s (Blender ID %s): %s',
user_id, email, ', '.join(sorted(grant_roles)))
service.do_badger('grant', roles=grant_roles, user_id=user_id)
if revoke_roles:
if my_log.isEnabledFor(logging.INFO):
my_log.info('revoking roles from user %s (Blender ID %s): %s',
user_id, email, ', '.join(sorted(revoke_roles)))
service.do_badger('revoke', roles=revoke_roles, user_id=user_id)
# Let the world know this user's subscription was updated.
final_roles = (plr_roles - revoke_roles).union(grant_roles)
local_user.roles = list(final_roles)
local_user.collect_capabilities()
user_subscription_updated.send(local_user,
grant_roles=grant_roles,
revoke_roles=revoke_roles)
# Re-index the user in the search database.
from pillar.api.users import hooks
hooks.push_updated_user_to_search({'_id': user_id}, {})
def setup_app(app, url_prefix):
log.info('Registering blueprint at %s', url_prefix)
app.register_api_blueprint(blueprint, url_prefix=url_prefix)

View File

@ -1,15 +1,16 @@
import functools
import logging
from flask import Blueprint, request, current_app, g
from eve.methods.get import get
from eve.utils import config as eve_config
from flask import Blueprint, request, current_app
from werkzeug.datastructures import MultiDict
from werkzeug.exceptions import InternalServerError
from application import utils
from application.utils.authentication import current_user_id
from application.utils.authorization import require_login
from pillar.api import utils
from pillar.api.utils.authentication import current_user_id
from pillar.api.utils.authorization import require_login
from pillar.auth import current_user
FIRST_ADDON_VERSION_WITH_HDRI = (1, 4, 0)
TL_PROJECTION = utils.dumps({'name': 1, 'url': 1, 'permissions': 1,})
@ -26,8 +27,8 @@ log = logging.getLogger(__name__)
def keep_fetching_texture_libraries(proj_filter):
groups = g.current_user['groups']
user_id = g.current_user['user_id']
groups = current_user.group_ids
user_id = current_user.user_id
page = 1
max_page = float('inf')
@ -75,7 +76,7 @@ def texture_libraries():
# of the Blender Cloud Addon. If the addon version is None, we're dealing
# with a version of the BCA that's so old it doesn't send its version along.
addon_version = blender_cloud_addon_version()
return_hdri = addon_version >= FIRST_ADDON_VERSION_WITH_HDRI
return_hdri = addon_version is not None and addon_version >= FIRST_ADDON_VERSION_WITH_HDRI
log.debug('User %s has Blender Cloud Addon version %s; return_hdri=%s',
current_user_id(), addon_version, return_hdri)
@ -103,7 +104,7 @@ def has_texture_node(proj, return_hdri=True):
if return_hdri:
node_types.append('group_hdri')
count = nodes_collection.count(
count = nodes_collection.count_documents(
{'node_type': {'$in': node_types},
'project': proj['_id'],
'parent': None})
@ -144,4 +145,4 @@ def setup_app(app, url_prefix):
app.on_replace_nodes += sort_by_image_width
app.on_insert_nodes += sort_nodes_by_image_width
app.register_blueprint(blueprint, url_prefix=url_prefix)
app.register_api_blueprint(blueprint, url_prefix=url_prefix)

303
pillar/api/blender_id.py Normal file
View File

@ -0,0 +1,303 @@
"""Blender ID subclient endpoint.
Also contains functionality for other parts of Pillar to perform communication
with Blender ID.
"""
import datetime
import logging
from urllib.parse import urljoin
import requests
from bson import tz_util
from rauth import OAuth2Session
from flask import Blueprint, request, jsonify, session
from requests.adapters import HTTPAdapter
import urllib3.util.retry
from pillar import current_app
from pillar.auth import get_blender_id_oauth_token
from pillar.api.utils import authentication, utcnow
from pillar.api.utils.authentication import find_user_in_db, upsert_user
blender_id = Blueprint('blender_id', __name__)
log = logging.getLogger(__name__)
class LogoutUser(Exception):
"""Raised when Blender ID tells us the current user token is invalid.
This indicates the user should be immediately logged out.
"""
class Session(requests.Session):
"""Requests Session suitable for Blender ID communication."""
def __init__(self):
super().__init__()
retries = urllib3.util.retry.Retry(
total=10,
backoff_factor=0.05,
)
http_adapter = requests.adapters.HTTPAdapter(max_retries=retries)
self.mount('https://', http_adapter)
self.mount('http://', http_adapter)
def authenticate(self):
"""Attach the current user's authentication token to the request."""
bid_token = get_blender_id_oauth_token()
if not bid_token:
raise TypeError('authenticate() requires current user to be logged in with Blender ID')
self.headers['Authorization'] = f'Bearer {bid_token}'
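A minimal usage sketch, assuming a logged-in user (the endpoint path mirrors fetch_blenderid_user() further down in this file):
s = Session()
s.authenticate()
resp = s.get(urljoin(current_app.config['BLENDER_ID_ENDPOINT'], 'api/user'))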
@blender_id.route('/store_scst', methods=['POST'])
def store_subclient_token():
"""Verifies & stores a user's subclient-specific token."""
user_id = request.form['user_id'] # User ID at BlenderID
subclient_id = request.form['subclient_id']
scst = request.form['token']
db_user, status = validate_create_user(user_id, scst, subclient_id)
if db_user is None:
log.warning('Unable to verify subclient token with Blender ID.')
return jsonify({'status': 'fail',
'error': 'BLENDER ID ERROR'}), 403
return jsonify({'status': 'success',
'subclient_user_id': str(db_user['_id'])}), status
def validate_create_user(blender_id_user_id, token, oauth_subclient_id):
"""Validates a user against Blender ID, creating the user in our database.
:param blender_id_user_id: the user ID at the BlenderID server.
:param token: the OAuth access token.
:param oauth_subclient_id: the subclient ID, or empty string if not a subclient.
:returns: (user in MongoDB, HTTP status 200 or 201)
"""
# Verify with Blender ID
log.debug('Storing token for BlenderID user %s', blender_id_user_id)
user_info, token_expiry = validate_token(blender_id_user_id, token, oauth_subclient_id)
if user_info is None:
log.debug('Unable to verify token with Blender ID.')
return None, None
# Blender ID can be queried without user ID, and will always include the
# correct user ID in its response.
log.debug('Obtained user info from Blender ID: %s', user_info)
# Store the user info in MongoDB.
db_user = find_user_in_db(user_info)
db_id, status = upsert_user(db_user)
# Store the token in MongoDB.
ip_based_roles = current_app.org_manager.roles_for_request()
authentication.store_token(db_id, token, token_expiry, oauth_subclient_id,
org_roles=ip_based_roles)
if current_app.org_manager is not None:
roles = current_app.org_manager.refresh_roles(db_id)
db_user['roles'] = list(roles)
return db_user, status
def validate_token(user_id, token, oauth_subclient_id):
"""Verifies a subclient token with Blender ID.
:returns: (user info, token expiry) on success, or (None, None) on failure.
The user information from Blender ID is returned as dict
{'email': 'a@b', 'full_name': 'AB'}, and the token expiry as a datetime.datetime.
:rtype: tuple
"""
our_subclient_id = current_app.config['BLENDER_ID_SUBCLIENT_ID']
# Check that IF there is a subclient ID given, it is the correct one.
if oauth_subclient_id and our_subclient_id != oauth_subclient_id:
log.warning('validate_token(): BlenderID user %s is trying to use the wrong subclient '
'ID %r; treating as invalid login.', user_id, oauth_subclient_id)
return None, None
# Validate against BlenderID.
log.debug('Validating subclient token for BlenderID user %r, subclient %r', user_id,
oauth_subclient_id)
payload = {'user_id': user_id,
'token': token}
if oauth_subclient_id:
# If the subclient ID is set, the token belongs to another OAuth Client,
# in which case we do not set the client_id field.
payload['subclient_id'] = oauth_subclient_id
else:
# We only want to accept Blender Cloud tokens.
payload['client_id'] = current_app.config['OAUTH_CREDENTIALS']['blender-id']['id']
blender_id_endpoint = current_app.config['BLENDER_ID_ENDPOINT']
url = urljoin(blender_id_endpoint, 'u/validate_token')
log.debug('POSTing to %r', url)
# POST to Blender ID, handling errors as negative verification results.
s = Session()
try:
r = s.post(url, data=payload, timeout=5,
verify=current_app.config['TLS_CERT_FILE'])
except requests.exceptions.ConnectionError:
log.error('Connection error trying to POST to %s, handling as invalid token.', url)
return None, None
except requests.exceptions.ReadTimeout:
log.error('Read timeout trying to POST to %s, handling as invalid token.', url)
return None, None
except requests.exceptions.RequestException as ex:
log.error('Requests error "%s" trying to POST to %s, handling as invalid token.', ex, url)
return None, None
except IOError as ex:
log.error('Unknown I/O error "%s" trying to POST to %s, handling as invalid token.',
ex, url)
return None, None
if r.status_code != 200:
log.debug('Token %s invalid, HTTP status %i returned', token, r.status_code)
return None, None
resp = r.json()
if resp['status'] != 'success':
log.warning('Failed response from %s: %s', url, resp)
return None, None
expires = _compute_token_expiry(resp['token_expires'])
return resp['user'], expires
def _compute_token_expiry(token_expires_string):
"""Computes token expiry based on current time and BlenderID expiry.
Expires our side of the token when either the BlenderID token expires,
or in one hour. The latter case is to ensure we periodically verify
the token.
"""
# The package is named python-dateutil, so PyCharm doesn't find it by its import name.
# noinspection PyPackageRequirements
from dateutil import parser
blid_expiry = parser.parse(token_expires_string)
blid_expiry = blid_expiry.astimezone(tz_util.utc)
our_expiry = utcnow() + datetime.timedelta(hours=1)
return min(blid_expiry, our_expiry)
def get_user_blenderid(db_user: dict) -> str:
"""Returns the Blender ID user ID for this Pillar user.
Takes the string from 'auth.*.user_id' for the '*' where 'provider'
is 'blender-id'.
:returns: the user ID, or an empty string when the user has none.
"""
bid_user_ids = [auth['user_id']
for auth in db_user['auth']
if auth['provider'] == 'blender-id']
try:
return bid_user_ids[0]
except IndexError:
return ''
def fetch_blenderid_user() -> dict:
"""Returns the user info of the currently logged in user from BlenderID.
Returns an empty dict if communication fails.
Example dict:
{
"email": "some@email.example.com",
"full_name": "dr. Sybren A. St\u00fcvel",
"id": 5555,
"roles": {
"admin": true,
"bfct_trainer": false,
"cloud_has_subscription": true,
"cloud_subscriber": true,
"conference_speaker": true,
"network_member": true
}
}
:raises LogoutUser: when Blender ID tells us the current token is
invalid, and the user should be logged out.
"""
import httplib2 # used by the oauth2 package
my_log = log.getChild('fetch_blenderid_user')
bid_url = urljoin(current_app.config['BLENDER_ID_ENDPOINT'], 'api/user')
my_log.debug('Fetching user info from %s', bid_url)
credentials = current_app.config['OAUTH_CREDENTIALS']['blender-id']
oauth_token = session.get('blender_id_oauth_token')
if not oauth_token:
my_log.warning('no Blender ID oauth token found in user session')
return {}
assert isinstance(oauth_token, str), f'oauth token must be str, not {type(oauth_token)}'
oauth_session = OAuth2Session(
credentials['id'], credentials['secret'],
access_token=oauth_token)
try:
bid_resp = oauth_session.get(bid_url)
except httplib2.HttpLib2Error:
my_log.exception('Error getting %s from BlenderID', bid_url)
return {}
if bid_resp.status_code == 403:
my_log.warning('Error %i from BlenderID %s, logging out user', bid_resp.status_code, bid_url)
raise LogoutUser()
if bid_resp.status_code != 200:
my_log.warning('Error %i from BlenderID %s: %s', bid_resp.status_code, bid_url, bid_resp.text)
return {}
payload = bid_resp.json()
if not payload:
my_log.warning('Empty data returned from BlenderID %s', bid_url)
return {}
my_log.debug('BlenderID returned %s', payload)
return payload
def avatar_url(blenderid_user_id: str) -> str:
"""Return the URL to the user's avatar on Blender ID.
This avatar should be downloaded, and not served from the Blender ID URL.
"""
bid_url = urljoin(current_app.config['BLENDER_ID_ENDPOINT'],
f'api/user/{blenderid_user_id}/avatar')
return bid_url
def setup_app(app, url_prefix):
app.register_api_blueprint(blender_id, url_prefix=url_prefix)
def switch_user_url(next_url: str) -> str:
from urllib.parse import quote
base_url = urljoin(current_app.config['BLENDER_ID_ENDPOINT'], 'switch')
if next_url:
return '%s?next=%s' % (base_url, quote(next_url))
return base_url

View File

@ -0,0 +1,200 @@
from datetime import datetime
import logging
from bson import ObjectId, tz_util
from eve.io.mongo import Validator
from flask import current_app
from pillar import markdown
log = logging.getLogger(__name__)
class ValidateCustomFields(Validator):
# TODO: split this into a convert_property(property, schema) and call that from this function.
def convert_properties(self, properties, node_schema):
"""Converts datetime strings and ObjectId strings to actual Python objects."""
date_format = current_app.config['RFC1123_DATE_FORMAT']
for prop in node_schema:
if prop not in properties:
continue
schema_prop = node_schema[prop]
prop_type = schema_prop['type']
if prop_type == 'dict':
try:
dict_valueschema = schema_prop['schema']
properties[prop] = self.convert_properties(properties[prop], dict_valueschema)
except KeyError:
# Cerberus 1.3 changed valueschema to valuesrules.
dict_valueschema = schema_prop.get('valuesrules') or \
schema_prop.get('valueschema')
if dict_valueschema is None:
raise KeyError(f"missing 'valuesrules' key in schema of property {prop}")
self.convert_dict_values(properties[prop], dict_valueschema)
elif prop_type == 'list':
if properties[prop] in ['', '[]']:
properties[prop] = []
if 'schema' in schema_prop:
for k, val in enumerate(properties[prop]):
item_schema = {'item': schema_prop['schema']}
item_prop = {'item': properties[prop][k]}
properties[prop][k] = self.convert_properties(
item_prop, item_schema)['item']
# Convert datetime string to RFC1123 datetime
elif prop_type == 'datetime':
prop_val = properties[prop]
prop_naive = datetime.strptime(prop_val, date_format)
prop_aware = prop_naive.replace(tzinfo=tz_util.utc)
properties[prop] = prop_aware
elif prop_type == 'objectid':
prop_val = properties[prop]
if prop_val:
properties[prop] = ObjectId(prop_val)
else:
properties[prop] = None
return properties
def convert_dict_values(self, dict_property, dict_valueschema):
"""Calls convert_properties() for the values in the dict.
Only validates the dict values, not the keys. Modifies the given dict in-place.
"""
assert dict_valueschema['type'] == 'dict'
assert isinstance(dict_property, dict)
for key, val in dict_property.items():
item_schema = {'item': dict_valueschema}
item_prop = {'item': val}
dict_property[key] = self.convert_properties(item_prop, item_schema)['item']
def _validate_valid_properties(self, valid_properties, field, value):
"""Fake property that triggers node dynamic property validation.
The rule's arguments are validated against this schema:
{'type': 'boolean'}
"""
from pillar.api.utils import project_get_node_type
projects_collection = current_app.data.driver.db['projects']
lookup = {'_id': ObjectId(self.document['project'])}
project = projects_collection.find_one(lookup, {
'node_types.name': 1,
'node_types.dyn_schema': 1,
})
if project is None:
log.warning('Unknown project %s, declared by node %s',
lookup, self.document.get('_id'))
self._error(field, 'Unknown project')
return False
node_type_name = self.document['node_type']
node_type = project_get_node_type(project, node_type_name)
if node_type is None:
log.warning('Project %s has no node type %s, declared by node %s',
project, node_type_name, self.document.get('_id'))
self._error(field, 'Unknown node type')
return False
try:
value = self.convert_properties(value, node_type['dyn_schema'])
except Exception as e:
log.warning("Error converting form properties", exc_info=True)
v = self.__class__(schema=node_type['dyn_schema'])
val = v.validate(value)
if val:
# This ensures the modifications made by v's coercion rules are
# visible to this validator's output.
self.document[field] = v.document
return True
log.warning('Error validating properties for node %s: %s', self.document, v.errors)
self._error(field, "Error validating properties")
def _validate_required_after_creation(self, required_after_creation, field, value):
"""Makes a value required after creation only.
Combine "required_after_creation=True" with "required=False" to allow
pre-insert hooks to set default values.
The rule's arguments are validated against this schema:
{'type': 'boolean'}
"""
if not required_after_creation:
# Setting required_after_creation=False is the same as not mentioning this
# validator at all.
return
if self.document_id is None:
# This is a creation call, in which case this validator shouldn't run.
return
if not value:
self._error(field, "Value is required once the document was created")
def _check_with_iprange(self, field_name: str, value: str):
"""Ensure the field contains a valid IP address.
Supports both IPv6 and IPv4 ranges. Requires the IPy module.
"""
from IPy import IP
try:
ip = IP(value, make_net=True)
except ValueError as ex:
self._error(field_name, str(ex))
return
if ip.prefixlen() == 0:
self._error(field_name, 'Zero-length prefix is not allowed')
def _normalize_coerce_markdown(self, markdown_field: str) -> str:
"""
Cache markdown as html.
:param markdown_field: name of the field containing Markdown
:return: html string
"""
my_log = log.getChild('_normalize_coerce_markdown')
mdown = self.document.get(markdown_field, '')
html = markdown.markdown(mdown)
my_log.debug('Generated html for markdown field %s in doc with id %s',
markdown_field, id(self.document))
return html
if __name__ == '__main__':
from pprint import pprint
v = ValidateCustomFields()
v.schema = {
'foo': {'type': 'string', 'check_with': 'markdown'},
'foo_html': {'type': 'string'},
'nested': {
'type': 'dict',
'schema': {
'bar': {'type': 'string', 'check_with': 'markdown'},
'bar_html': {'type': 'string'},
}
}
}
print('Valid :', v.validate({
'foo': '# Title\n\nHeyyyy',
'nested': {'bar': 'bhahaha'},
}))
print('Document:')
pprint(v.document)
print('Errors :', v.errors)

View File

@ -1,17 +1,16 @@
import logging
import datetime
import json
import logging
import os
from bson import ObjectId, tz_util
from eve.methods.put import put_internal
from bson import ObjectId
from flask import Blueprint
from flask import abort
from flask import request
from flask import current_app
from application import utils
from application.utils import skip_when_testing
from application.utils.gcs import GoogleCloudStorageBucket
from flask import request
from pillar.api import utils
from pillar.api.file_storage_backends import Bucket
encoding = Blueprint('encoding', __name__)
log = logging.getLogger(__name__)
@ -34,6 +33,7 @@ def size_descriptor(width, height):
1280: '720p',
1920: '1080p',
2048: '2k',
3840: 'UHD',
4096: '4k',
}
@ -44,13 +44,6 @@ def size_descriptor(width, height):
return '%ip' % height
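For instance, a 1920-pixel-wide output maps to '1080p' via the table above, while an unlisted width such as 1000×540 falls back to the height-based form '540p'.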
@skip_when_testing
def rename_on_gcs(bucket_name, from_path, to_path):
gcs = GoogleCloudStorageBucket(str(bucket_name))
blob = gcs.bucket.blob(from_path)
gcs.bucket.rename_blob(blob, to_path)
@encoding.route('/zencoder/notifications', methods=['POST'])
def zencoder_notifications():
"""
@ -104,25 +97,24 @@ def zencoder_notifications():
file_doc['processing']['status'] = job_state
if job_state == 'failed':
log.warning('Zencoder job %i for file %s failed.', zencoder_job_id, file_id)
# Log what Zencoder told us went wrong.
for output in data['outputs']:
if not any('error' in key for key in output):
continue
log.warning('Errors for output %s:', output['url'])
for key in output:
if 'error' in key:
log.info(' %s: %s', key, output[key])
log.warning('Zencoder job %s for file %s failed: %s', zencoder_job_id, file_id,
json.dumps(data, sort_keys=True, indent=4))
file_doc['status'] = 'failed'
put_internal('files', file_doc, _id=file_id)
current_app.put_internal('files', file_doc, _id=file_id)
# This is 'okay' because we handled the Zencoder notification properly.
return "You failed, but that's okay.", 200
log.info('Zencoder job %s for file %s completed with status %s.', zencoder_job_id, file_id,
job_state)
# For every variation encoded, try to update the file object
root, _ = os.path.splitext(file_doc['file_path'])
storage_name, _ = os.path.splitext(file_doc['file_path'])
nice_name, _ = os.path.splitext(file_doc['filename'])
bucket_class = Bucket.for_backend(file_doc['backend'])
bucket = bucket_class(str(file_doc['project']))
for output in data['outputs']:
video_format = output['format']
@ -143,16 +135,16 @@ def zencoder_notifications():
# Rename the file to include the now-known size descriptor.
size = size_descriptor(output['width'], output['height'])
new_fname = '{}-{}.{}'.format(root, size, video_format)
new_fname = f'{storage_name}-{size}.{video_format}'
# Rename on Google Cloud Storage
# Rename the file on the storage.
blob = bucket.blob(variation['file_path'])
try:
rename_on_gcs(file_doc['project'],
'_/' + variation['file_path'],
'_/' + new_fname)
new_blob = bucket.rename_blob(blob, new_fname)
new_blob.update_filename(f'{nice_name}-{size}.{video_format}')
except Exception:
log.warning('Unable to rename GCS blob %r to %r. Keeping old name.',
variation['file_path'], new_fname, exc_info=True)
log.warning('Unable to rename blob %r to %r. Keeping old name.',
blob, new_fname, exc_info=True)
else:
variation['file_path'] = new_fname
@ -169,8 +161,15 @@ def zencoder_notifications():
file_doc['status'] = 'complete'
# Force an update of the links on the next load of the file.
file_doc['link_expires'] = datetime.datetime.now(tz=tz_util.utc) - datetime.timedelta(days=1)
file_doc['link_expires'] = utils.utcnow() - datetime.timedelta(days=1)
put_internal('files', file_doc, _id=file_id)
r, _, _, status = current_app.put_internal('files', file_doc, _id=file_id)
if status != 200:
log.error('unable to save file %s after Zencoder notification: %s', file_id, r)
return json.dumps(r), 500
return '', 204
def setup_app(app, url_prefix):
app.register_api_blueprint(encoding, url_prefix=url_prefix)

View File

@ -1,5 +1,10 @@
import os
from pillar.api.node_types.utils import markdown_fields
STORAGE_BACKENDS = ["local", "pillar", "cdnsun", "gcs", "unittest"]
URL_PREFIX = 'api'
# Enable reads (GET), inserts (POST) and DELETE for resources/collections
# (if you omit this line, the API will default to ['GET'] and provide
# read-only access to the endpoint).
@ -86,8 +91,8 @@ users_schema = {
}
},
'auth': {
# Storage of authentication credentials (one will be able to auth with
# multiple providers on the same account)
# Storage of authentication credentials (one will be able to auth with multiple providers on
# the same account)
'type': 'list',
'required': True,
'schema': {
@ -95,13 +100,12 @@ users_schema = {
'schema': {
'provider': {
'type': 'string',
'allowed': ["blender-id", "local"],
'allowed': ['local', 'blender-id', 'facebook', 'google'],
},
'user_id': {
'type': 'string'
},
# A token is considered a "password" in case the provider is
# "local".
# A token is considered a "password" in case the provider is "local".
'token': {
'type': 'string'
}
@ -118,14 +122,80 @@ users_schema = {
}
},
'service': {
'type': 'dict',
'allow_unknown': True,
},
'avatar': {
'type': 'dict',
'schema': {
'badger': {
'type': 'list',
'schema': {'type': 'string'}
}
}
}
'file': {
'type': 'objectid',
'data_relation': {
'resource': 'files',
'field': '_id',
},
},
# For only downloading when things really changed:
'last_downloaded_url': {
'type': 'string',
},
'last_modified': {
'type': 'string',
},
},
},
# Node-specific information for this user.
'nodes': {
'type': 'dict',
'schema': {
# Per watched video info about where the user left off, both in time and in percent.
'view_progress': {
'type': 'dict',
# Keyed by Node ID of the video asset. MongoDB doesn't support using
# ObjectIds as key, so we cast them to string instead.
'keysrules': {'type': 'string'},
'valuesrules': {
'type': 'dict',
'schema': {
'progress_in_sec': {'type': 'float', 'min': 0},
'progress_in_percent': {'type': 'integer', 'min': 0, 'max': 100},
# When the progress was last updated, so we can limit this history to
# the last-watched N videos if we want, or show stuff in chrono order.
'last_watched': {'type': 'datetime'},
# True means progress_in_percent = 100, for easy querying
'done': {'type': 'boolean', 'default': False},
},
},
},
},
},
'badges': {
'type': 'dict',
'schema': {
'html': {'type': 'string'}, # HTML fetched from Blender ID.
'expires': {'type': 'datetime'}, # When we should fetch it again.
},
},
# Properties defined by extensions. Extensions should use their name (see the
# PillarExtension.name property) as the key, and are free to use whatever they want as value,
# but we suggest a dict for future extendability.
# Properties can be of two types:
# - public: they will be visible to the world (for example as part of the User.find() query)
# - private: visible only to their user
'extension_props_public': {
'type': 'dict',
'required': False,
},
'extension_props_private': {
'type': 'dict',
'required': False,
},
}
organizations_schema = {
@ -135,19 +205,7 @@ organizations_schema = {
'maxlength': 128,
'required': True
},
'email': {
'type': 'string'
},
'url': {
'type': 'string',
'minlength': 1,
'maxlength': 128,
'required': True
},
'description': {
'type': 'string',
'maxlength': 256,
},
**markdown_fields('description', maxlength=256),
'website': {
'type': 'string',
'maxlength': 256,
@ -159,7 +217,15 @@ organizations_schema = {
'picture': dict(
nullable=True,
**_file_embedded_schema),
'users': {
'admin_uid': {
'type': 'objectid',
'data_relation': {
'resource': 'users',
'field': '_id',
},
'required': True,
},
'members': {
'type': 'list',
'default': [],
'schema': {
@ -167,51 +233,52 @@ organizations_schema = {
'data_relation': {
'resource': 'users',
'field': '_id',
'embeddable': True
}
}
},
'teams': {
'unknown_members': {
'type': 'list', # of email addresses of yet-to-register users.
'default': [],
'schema': {
'type': 'string',
},
},
# Maximum size of the organization, i.e. len(members) + len(unknown_members) may
# not exceed this.
'seat_count': {
'type': 'integer',
'required': True,
},
# Roles that the members of this organization automatically get.
'org_roles': {
'type': 'list',
'default': [],
'schema': {
'type': 'string',
},
},
# Identification of the subscription that pays for this organisation
# in an external subscription/payment management system.
'payment_subscription_id': {
'type': 'string',
},
'ip_ranges': {
'type': 'list',
'schema': {
'type': 'dict',
'schema': {
# Team name
'name': {
'type': 'string',
'minlength': 1,
'maxlength': 128,
'required': True
},
# List of user ids for the team
'users': {
'type': 'list',
'default': [],
'schema': {
'type': 'objectid',
'data_relation': {
'resource': 'users',
'field': '_id',
}
}
},
# List of groups assigned to the team (this will automatically
# update the groups property of each user in the team)
'groups': {
'type': 'list',
'default': [],
'schema': {
'type': 'objectid',
'data_relation': {
'resource': 'groups',
'field': '_id',
}
}
}
# see _validate_type_{typename} in ValidateCustomFields:
'start': {'type': 'binary', 'required': True},
'end': {'type': 'binary', 'required': True},
'prefix': {'type': 'integer', 'required': True},
'human': {'type': 'string', 'required': True, 'check_with': 'iprange'},
}
}
}
},
},
}
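The **markdown_fields(...) entries above come from pillar.api.node_types.utils. A hypothetical sketch of what that helper might expand to (the companion field name and the 'readonly' flag are assumptions; the 'coerce': 'markdown' rule is what _normalize_coerce_markdown() in the validator hooks into):
def markdown_fields(field_name: str, **kwargs) -> dict:
    # The Markdown source field plus a cached-HTML companion that the
    # validator fills in via its 'markdown' coercion rule.
    return {
        field_name: {'type': 'string', 'coerce': 'markdown', **kwargs},
        f'_{field_name}_html': {'type': 'string', 'readonly': True},
    }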
permissions_embedded_schema = {
@ -271,9 +338,7 @@ nodes_schema = {
'maxlength': 128,
'required': True,
},
'description': {
'type': 'string',
},
**markdown_fields('description'),
'picture': _file_embedded_schema,
'order': {
'type': 'integer',
@ -306,7 +371,7 @@ nodes_schema = {
'properties': {
'type': 'dict',
'valid_properties': True,
'required': True,
'required': True
},
'permissions': {
'type': 'dict',
@ -326,6 +391,10 @@ tokens_schema = {
'type': 'string',
'required': True,
},
'token_hashed': {
'type': 'string',
'required': False,
},
'expire_time': {
'type': 'datetime',
'required': True,
@ -333,6 +402,22 @@ tokens_schema = {
'is_subclient_token': {
'type': 'boolean',
'required': False,
},
# Roles this user gets while this token is valid.
'org_roles': {
'type': 'list',
'default': [],
'schema': {
'type': 'string',
},
},
# OAuth scopes granted to this token.
'oauth_scopes': {
'type': 'list',
'default': [],
'schema': {'type': 'string'},
}
}
@ -375,14 +460,15 @@ files_schema = {
},
'length_aggregate_in_bytes': { # Size of file + all variations
'type': 'integer',
'required': False, # it's computed on the fly anyway, so clients don't need to provide it.
'required': False,
# it's computed on the fly anyway, so clients don't need to provide it.
},
'md5': {
'type': 'string',
'required': True,
},
# Original filename as given by the user, possibly cleaned-up to make it safe.
# Original filename as given by the user, cleaned-up to make it safe.
'filename': {
'type': 'string',
'required': True,
@ -390,7 +476,7 @@ files_schema = {
'backend': {
'type': 'string',
'required': True,
'allowed': ["attract-web", "pillar", "cdnsun", "gcs", "unittest"]
'allowed': STORAGE_BACKENDS,
},
# Where the file is in the backend storage itself. In the case of GCS,
@ -502,9 +588,7 @@ projects_schema = {
'maxlength': 128,
'required': True,
},
'description': {
'type': 'string',
},
**markdown_fields('description'),
# Short summary for the project
'summary': {
'type': 'string',
@ -514,6 +598,8 @@ projects_schema = {
'picture_square': _file_embedded_schema,
# Header
'picture_header': _file_embedded_schema,
# Picture with a 16:9 aspect ratio (for Open Graph)
'picture_16_9': _file_embedded_schema,
'header_node': dict(
nullable=True,
**_node_embedded_schema
@ -530,8 +616,9 @@ projects_schema = {
'category': {
'type': 'string',
'allowed': [
'training',
'course',
'film',
'workshop',
'assets',
'software',
'game',
@ -620,7 +707,16 @@ projects_schema = {
'permissions': {
'type': 'dict',
'schema': permissions_embedded_schema
}
},
# Properties defined by extensions. Extensions should use their name
# (see the PillarExtension.name property) as the key, and are free to
# use whatever they want as value (but we suggest a dict for future
# extendability).
'extension_props': {
'type': 'dict',
'required': False,
},
}
activities_subscriptions_schema = {
@ -664,6 +760,19 @@ activities_schema = {
'type': 'objectid',
'required': True
},
'project': {
'type': 'objectid',
'data_relation': {
'resource': 'projects',
'field': '_id',
},
'required': False,
},
# If the object type is 'node', the node type can be stored here.
'node_type': {
'type': 'string',
'required': False,
}
}
notifications_schema = {
@ -695,10 +804,6 @@ users = {
'item_methods': ['GET', 'PUT'],
'public_item_methods': ['GET'],
# By default don't include the 'auth' field. It can still be obtained
# using projections, though, so we block that in hooks.
'datasource': {'projection': {u'auth': 0}},
'schema': users_schema
}
@ -712,10 +817,12 @@ tokens = {
}
files = {
'schema': files_schema,
'resource_methods': ['GET', 'POST'],
'item_methods': ['GET', 'PATCH'],
'public_methods': ['GET'],
'public_item_methods': ['GET'],
'schema': files_schema
'soft_delete': True,
}
groups = {
@ -727,8 +834,11 @@ groups = {
organizations = {
'schema': organizations_schema,
'public_item_methods': ['GET'],
'public_methods': ['GET']
'resource_methods': ['GET', 'POST'],
'item_methods': ['GET'],
'public_item_methods': [],
'public_methods': [],
'soft_delete': True,
}
projects = {
@ -763,13 +873,18 @@ DOMAIN = {
'notifications': notifications
}
MONGO_HOST = os.environ.get('MONGO_HOST', 'localhost')
MONGO_PORT = os.environ.get('MONGO_PORT', 27017)
MONGO_DBNAME = os.environ.get('MONGO_DBNAME', 'eve')
MONGO_HOST = os.environ.get('PILLAR_MONGO_HOST', 'localhost')
MONGO_PORT = int(os.environ.get('PILLAR_MONGO_PORT', 27017))
MONGO_DBNAME = os.environ.get('PILLAR_MONGO_DBNAME', 'eve')
CACHE_EXPIRES = 60
HATEOAS = False
UPSERT_ON_PUT = False  # do not create a new document on PUT of a non-existent URL.
X_DOMAINS = '*'
X_ALLOW_CREDENTIALS = True
X_HEADERS = 'Authorization'
XML = False
RENDERERS = ['eve.render.JSONRenderer']
# TODO(Sybren): this is a quick workaround to make /p/{url}/jstree work again.
# Apparently Eve is now stricter in checking against MONGO_QUERY_BLACKLIST, and
# blocks our use of $regex.
MONGO_QUERY_BLACKLIST = ['$where']

File diff suppressed because it is too large

View File

@ -0,0 +1,199 @@
"""Code for moving files between backends."""
import logging
import os
import tempfile
import requests
import requests.exceptions
from bson import ObjectId
from flask import current_app
from pillar.api import utils
from . import stream_to_gcs, generate_all_links, ensure_valid_link
__all__ = ['PrerequisiteNotMetError', 'change_file_storage_backend', 'move_to_bucket']
log = logging.getLogger(__name__)
class PrerequisiteNotMetError(RuntimeError):
"""Raised when a file cannot be moved due to unmet prerequisites."""
def change_file_storage_backend(file_id, dest_backend):
"""Given a file document, move it to the specified backend (if not already
there) and update the document to reflect that.
Files on the original backend are not deleted automatically.
"""
dest_backend = str(dest_backend)
file_id = ObjectId(file_id)
# Fetch file document
files_collection = current_app.data.driver.db['files']
f = files_collection.find_one(file_id)
if f is None:
raise ValueError('File with _id: {} not found'.format(file_id))
# Check that new backend differs from current one
if dest_backend == f['backend']:
raise PrerequisiteNotMetError('Destination backend ({}) matches the current backend, we '
'are not moving the file'.format(dest_backend))
# TODO Check that new backend is allowed (make conf var)
# Check that the file has a project; without project, we don't know
# which bucket to store the file into.
try:
project_id = f['project']
except KeyError:
raise PrerequisiteNotMetError('File document does not have a project')
# Ensure that all links are up to date before we even attempt a download.
ensure_valid_link(f)
# Upload file and variations to the new backend
variations = f.get('variations', ())
try:
copy_file_to_backend(file_id, project_id, f, f['backend'], dest_backend)
except requests.exceptions.HTTPError as ex:
# allow the main file to be removed from storage.
if ex.response.status_code not in {404, 410}:
raise
if not variations:
raise PrerequisiteNotMetError('Main file ({link}) does not exist on server, '
'and no variations exist either'.format(**f))
log.warning('Main file %s does not exist; skipping main and visiting variations', f['link'])
for var in variations:
copy_file_to_backend(file_id, project_id, var, f['backend'], dest_backend)
# Generate new links for the file & all variations. This also saves
# the new backend we set here.
f['backend'] = dest_backend
generate_all_links(f, utils.utcnow())
def copy_file_to_backend(file_id, project_id, file_or_var, src_backend, dest_backend):
# Filenames on GCS do not contain paths, by our convention
internal_fname = os.path.basename(file_or_var['file_path'])
file_or_var['file_path'] = internal_fname
# If the file is not local already, fetch it
if src_backend == 'pillar':
local_finfo = fetch_file_from_local(file_or_var)
else:
local_finfo = fetch_file_from_link(file_or_var['link'])
try:
# Upload to GCS
if dest_backend != 'gcs':
raise ValueError('Only dest_backend="gcs" is supported now.')
if current_app.config['TESTING']:
log.warning('Skipping actual upload to GCS due to TESTING')
else:
# TODO check for name collisions
stream_to_gcs(file_id, local_finfo['file_size'],
internal_fname=internal_fname,
project_id=project_id,
stream_for_gcs=local_finfo['local_file'],
content_type=local_finfo['content_type'])
finally:
# No longer needed, so it can be closed & disposed of.
local_finfo['local_file'].close()
def fetch_file_from_link(link):
"""Utility to download a file from a remote location and return it with
additional info (for upload to a different storage backend).
"""
log.info('Downloading %s', link)
r = requests.get(link, stream=True)
r.raise_for_status()
local_file = tempfile.NamedTemporaryFile(dir=current_app.config['STORAGE_DIR'])
log.info('Downloading to %s', local_file.name)
for chunk in r.iter_content(chunk_size=1024):
if chunk:
local_file.write(chunk)
local_file.seek(0)
file_dict = {
'file_size': os.fstat(local_file.fileno()).st_size,
'content_type': r.headers.get('content-type', 'application/octet-stream'),
'local_file': local_file
}
return file_dict
def fetch_file_from_local(file_doc):
"""Mimicks fetch_file_from_link(), but just returns the local file.
:param file_doc: dict with 'link' key pointing to a path in STORAGE_DIR, and
'content_type' key.
:type file_doc: dict
:rtype: dict
"""
local_file = open(os.path.join(current_app.config['STORAGE_DIR'], file_doc['file_path']), 'rb')
local_finfo = {
'file_size': os.fstat(local_file.fileno()).st_size,
'content_type': file_doc['content_type'],
'local_file': local_file
}
return local_finfo
def move_to_bucket(file_id: ObjectId, dest_project_id: ObjectId, *, skip_storage=False):
"""Move a file + variations from its own bucket to the new project_id bucket.
:param file_id: ID of the file to move.
:param dest_project_id: Project to move to.
:param skip_storage: If True, the storage bucket will not be touched.
Only use this when you know what you're doing.
"""
files_coll = current_app.db('files')
f = files_coll.find_one(file_id)
if f is None:
raise ValueError(f'File with _id: {file_id} not found')
# Move file and variations to the new bucket.
if skip_storage:
log.warning('NOT ACTUALLY MOVING file %s on storage, just updating MongoDB', file_id)
else:
from pillar.api.file_storage_backends import Bucket
bucket_class = Bucket.for_backend(f['backend'])
src_bucket = bucket_class(str(f['project']))
dst_bucket = bucket_class(str(dest_project_id))
src_blob = src_bucket.get_blob(f['file_path'])
src_bucket.copy_blob(src_blob, dst_bucket)
for var in f.get('variations', []):
src_blob = src_bucket.get_blob(var['file_path'])
src_bucket.copy_blob(src_blob, dst_bucket)
# Update the file document after moving was successful.
# No need to update _etag or _updated, since that'll be done when
# the links are regenerated at the end of this function.
log.info('Switching file %s to project %s', file_id, dest_project_id)
update_result = files_coll.update_one({'_id': file_id},
{'$set': {'project': dest_project_id}})
if update_result.matched_count != 1:
raise RuntimeError(
'Unable to update file %s in MongoDB: matched_count=%i; modified_count=%i' % (
file_id, update_result.matched_count, update_result.modified_count))
log.info('Switching file %s: matched_count=%i; modified_count=%i',
file_id, update_result.matched_count, update_result.modified_count)
# Regenerate the links for this file
f['project'] = dest_project_id
generate_all_links(f, now=utils.utcnow())

View File

@ -0,0 +1,29 @@
"""Storage backends.
To obtain a storage backend, use either of the two forms:
>>> bucket = default_storage_backend('bucket_name')
>>> BucketClass = Bucket.for_backend('backend_name')
>>> bucket = BucketClass('bucket_name')
"""
from .abstract import Bucket
# Import the other backends so that they register.
from . import local
from . import gcs
def default_storage_backend(name: str) -> Bucket:
"""Returns an instance of a Bucket, based on the default backend.
Depending on the backend this may actually create the bucket.
"""
from flask import current_app
backend_name = current_app.config['STORAGE_BACKEND']
backend_cls = Bucket.for_backend(backend_name)
return backend_cls(name)

View File

@ -0,0 +1,167 @@
import abc
import io
import logging
import typing
import pathlib
from bson import ObjectId
__all__ = ['Bucket', 'Blob', 'Path', 'FileType']
# Shorthand for the type of path we use.
Path = pathlib.PurePosixPath
# This is a mess: typing.IO keeps mypy-0.501 happy, but not in all cases,
# and io.FileIO + io.BytesIO keeps PyCharm-2017.1 happy.
FileType = typing.Union[typing.IO, io.FileIO, io.BytesIO]
class Bucket(metaclass=abc.ABCMeta):
"""Can be a GCS bucket or simply a project folder in Pillar
:type name: string
:param name: Name of the bucket. As a convention, we use the ID of
the project to name the bucket.
"""
# Mapping from backend name to Bucket class
backends: typing.Dict[str, typing.Type['Bucket']] = {}
backend_name: str = None # define in subclass.
def __init__(self, name: str) -> None:
self.name = str(name)
def __init_subclass__(cls):
assert cls.backend_name, '%s.backend_name must be non-empty string' % cls
cls.backends[cls.backend_name] = cls
def __repr__(self):
return f'<{self.__class__.__name__} name={self.name!r}>'
@classmethod
def for_backend(cls, backend_name: str) -> typing.Type['Bucket']:
"""Returns the Bucket subclass for the given backend."""
return cls.backends[backend_name]
@abc.abstractmethod
def blob(self, blob_name: str) -> 'Blob':
"""Factory constructor for blob object.
:param blob_name: The path of the blob to be instantiated.
"""
@abc.abstractmethod
def get_blob(self, blob_name: str) -> typing.Optional['Blob']:
"""Get a blob object by name.
If the blob exists return the object, otherwise None.
"""
@abc.abstractmethod
def copy_blob(self, blob: 'Blob', to_bucket: 'Bucket'):
"""Copies a blob from the current bucket to the other bucket.
Implementations only need to support copying between buckets of the
same storage backend.
"""
@abc.abstractmethod
def rename_blob(self, blob: 'Blob', new_name: str) -> 'Blob':
"""Rename the blob, returning the new Blob."""
@classmethod
def copy_to_bucket(cls, blob_name, src_project_id: ObjectId, dest_project_id: ObjectId):
"""Copies a file from one bucket to the other."""
src_storage = cls(str(src_project_id))
dest_storage = cls(str(dest_project_id))
blob = src_storage.get_blob(blob_name)
src_storage.copy_blob(blob, dest_storage)
Bu = typing.TypeVar('Bu', bound=Bucket)
class Blob(metaclass=abc.ABCMeta):
"""A wrapper for file or blob objects."""
def __init__(self, name: str, bucket: Bucket) -> None:
self.name = name
"""Name of this blob in the bucket."""
self.bucket = bucket
self._size_in_bytes: typing.Optional[int] = None
self._log = logging.getLogger(f'{__name__}.Blob')
def __repr__(self):
return f'<{self.__class__.__name__} bucket={self.bucket.name!r} name={self.name!r}>'
@property
def size(self) -> typing.Optional[int]:
"""Size of the object, in bytes.
:returns: The size of the blob or ``None`` if the property
is not set locally.
"""
size = self._size_in_bytes
if size is None:
return None
return int(size)
@abc.abstractmethod
def create_from_file(self, file_obj: FileType, *,
content_type: str,
file_size: int = -1):
"""Copies the file object to the storage.
:param file_obj: The file object to send to storage.
:param content_type: The content type of the file.
:param file_size: The size of the file in bytes, or -1 if unknown
"""
def upload_from_path(self, path: pathlib.Path, content_type: str):
file_size = path.stat().st_size
with path.open('rb') as infile:
self.create_from_file(infile, content_type=content_type,
file_size=file_size)
@abc.abstractmethod
def update_filename(self, filename: str, *, is_attachment=True):
"""Sets the filename which is used when downloading the file.
Not all storage backends support this, and will use the on-disk filename instead.
"""
@abc.abstractmethod
def update_content_type(self, content_type: str, content_encoding: str = ''):
"""Set the content type (and optionally content encoding).
Not all storage backends support this.
"""
@abc.abstractmethod
def get_url(self, *, is_public: bool) -> str:
"""Returns the URL to access this blob.
Note that this may involve API calls to generate a signed URL.
"""
@abc.abstractmethod
def make_public(self):
"""Makes the blob publicly available.
Only performs an actual action on backends that support temporary links.
"""
@abc.abstractmethod
def exists(self) -> bool:
"""Returns True iff the file exists on the storage backend."""
Bl = typing.TypeVar('Bl', bound=Blob)
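# Illustrative sketch (not from the diff): the __init_subclass__ hook above
# auto-registers every concrete subclass in Bucket.backends under its
# backend_name, which is what Bucket.for_backend() looks up. DummyBucket is
# hypothetical; a real backend would implement the abstract methods for real.
class DummyBucket(Bucket):
    backend_name = 'dummy'  # picked up automatically by __init_subclass__

    def blob(self, blob_name): ...
    def get_blob(self, blob_name): ...
    def copy_blob(self, blob, to_bucket): ...
    def rename_blob(self, blob, new_name): ...

assert Bucket.for_backend('dummy') is DummyBucket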

View File

@ -0,0 +1,273 @@
import os
import datetime
import logging
import typing
from bson import ObjectId
from gcloud.storage.client import Client
import gcloud.storage.blob
import gcloud.exceptions as gcloud_exc
from flask import current_app, g
from werkzeug.local import LocalProxy
from pillar.api import utils
from .abstract import Bucket, Blob, FileType
log = logging.getLogger(__name__)
def get_client() -> Client:
"""Stores the GCS client on the global Flask object.
The GCS client is not user-specific anyway.
"""
_gcs = getattr(g, '_gcs_client', None)
if _gcs is None:
_gcs = g._gcs_client = Client()
return _gcs
# This hides the specifics of how/where we store the GCS client,
# and allows the rest of the code to use 'gcs' as a simple variable
# that does the right thing.
gcs: Client = LocalProxy(get_client)
class GoogleCloudStorageBucket(Bucket):
"""Cloud Storage bucket interface. We create a bucket for every project. In
the bucket we create first level subdirs as follows:
- '_' (will contain hashed assets, and stays on top of default listing)
- 'svn' (svn checkout mirror)
- 'shared' (any additional folder of static assets that is accessed via a
node of 'storage' node_type)
:type bucket_name: string
:param bucket_name: Name of the bucket.
:type subdir: string
:param subdir: The local entry point to browse the bucket.
"""
backend_name = 'gcs'
def __init__(self, name: str, subdir='_') -> None:
super().__init__(name=name)
self._log = logging.getLogger(f'{__name__}.GoogleCloudStorageBucket')
try:
self._gcs_bucket = gcs.get_bucket(name)
except gcloud_exc.NotFound:
self._gcs_bucket = gcs.bucket(name)
# Hardcode the bucket location to EU
self._gcs_bucket.location = 'EU'
# Optionally enable CORS from * (currently only used for vrview)
# self.gcs_bucket.cors = [
# {
# "origin": ["*"],
# "responseHeader": ["Content-Type"],
# "method": ["GET", "HEAD", "DELETE"],
# "maxAgeSeconds": 3600
# }
# ]
self._gcs_bucket.create()
log.info('Created GCS instance for project %s', name)
self.subdir = subdir
def blob(self, blob_name: str) -> 'GoogleCloudStorageBlob':
return GoogleCloudStorageBlob(name=blob_name, bucket=self)
def get_blob(self, internal_fname: str) -> typing.Optional['GoogleCloudStorageBlob']:
blob = self.blob(internal_fname)
if not blob.gblob.exists():
return None
return blob
def _gcs_get(self, path: str, *, chunk_size=None) -> gcloud.storage.Blob:
"""Get selected file info if the path matches.
:param path: The path to the file, relative to the bucket's subdir.
"""
path = os.path.join(self.subdir, path)
blob = self._gcs_bucket.blob(path, chunk_size=chunk_size)
return blob
def _gcs_post(self, full_path, *, path=None) -> typing.Optional[gcloud.storage.Blob]:
"""Create new blob and upload data to it.
"""
path = path if path else os.path.join(self.subdir, os.path.basename(full_path))
gblob = self._gcs_bucket.blob(path)
if gblob.exists():
self._log.error(f'Trying to upload to {path}, but that blob already exists. '
f'Not uploading.')
return None
gblob.upload_from_filename(full_path)
return gblob
# return self.blob_to_dict(blob) # Has issues with threading
def delete_blob(self, path: str) -> bool:
"""Deletes the blob (when removing an asset or replacing a preview)"""
# We want to get the actual blob to delete
gblob = self._gcs_get(path)
try:
gblob.delete()
return True
except gcloud_exc.NotFound:
return False
def copy_blob(self, blob: Blob, to_bucket: Bucket):
"""Copies the given blob from this bucket to the other bucket.
Returns the new blob.
"""
assert isinstance(blob, GoogleCloudStorageBlob)
assert isinstance(to_bucket, GoogleCloudStorageBucket)
self._log.info('Copying %s to bucket %s', blob, to_bucket)
return self._gcs_bucket.copy_blob(blob.gblob, to_bucket._gcs_bucket)
def rename_blob(self, blob: 'GoogleCloudStorageBlob', new_name: str) \
-> 'GoogleCloudStorageBlob':
"""Rename the blob, returning the new Blob."""
assert isinstance(blob, GoogleCloudStorageBlob)
new_name = os.path.join(self.subdir, new_name)
self._log.info('Renaming %s to %r', blob, new_name)
new_gblob = self._gcs_bucket.rename_blob(blob.gblob, new_name)
return GoogleCloudStorageBlob(new_gblob.name, self, gblob=new_gblob)
class GoogleCloudStorageBlob(Blob):
"""GCS blob interface."""
def __init__(self, name: str, bucket: GoogleCloudStorageBucket,
*, gblob: gcloud.storage.blob.Blob=None) -> None:
super().__init__(name, bucket)
self._log = logging.getLogger(f'{__name__}.GoogleCloudStorageBlob')
self.gblob = gblob or bucket._gcs_get(name, chunk_size=256 * 1024 * 2)
def create_from_file(self, file_obj: FileType, *,
content_type: str,
file_size: int = -1) -> None:
from gcloud.streaming import transfer
self._log.debug('Streaming file to GCS bucket %r, size=%i', self, file_size)
# Files larger than this many bytes will be streamed directly from disk,
# smaller ones will be read into memory and then uploaded.
transfer.RESUMABLE_UPLOAD_THRESHOLD = 102400
self.gblob.upload_from_file(file_obj,
size=file_size,
content_type=content_type)
# Reload the blob to get the file size according to Google.
self.gblob.reload()
self._size_in_bytes = self.gblob.size
def update_filename(self, filename: str, *, is_attachment=True):
"""Set the ContentDisposition metadata so that when a file is downloaded
it has a human-readable name.
"""
if '"' in filename:
raise ValueError(f'Filename is not allowed to have double quote in it: {filename!r}')
if is_attachment:
self.gblob.content_disposition = f'attachment; filename="{filename}"'
else:
self.gblob.content_disposition = f'filename="{filename}"'
self.gblob.patch()
def update_content_type(self, content_type: str, content_encoding: str = ''):
"""Set the content type (and optionally content encoding)."""
self.gblob.content_type = content_type
self.gblob.content_encoding = content_encoding
self.gblob.patch()
def get_url(self, *, is_public: bool) -> str:
if is_public:
return self.gblob.public_url
expiration = utils.utcnow() + datetime.timedelta(days=1)
return self.gblob.generate_signed_url(expiration)
def make_public(self):
self.gblob.make_public()
def exists(self) -> bool:
# Reload to get the actual file properties from Google.
try:
self.gblob.reload()
except gcloud_exc.NotFound:
return False
return self.gblob.exists()
def update_file_name(node):
"""Assign to the CGS blob the same name of the asset node. This way when
downloading an asset we get a human-readable name.
"""
# Process only files that are not processing
if node['properties'].get('status', '') == 'processing':
return
def _format_name(name, override_ext, size=None, map_type=''):
root, _ = os.path.splitext(name)
size = '-{}'.format(size) if size else ''
map_type = '-{}'.format(map_type) if map_type else ''
return '{}{}{}{}'.format(root, size, map_type, override_ext)
def _update_name(file_id, file_props):
files_collection = current_app.data.driver.db['files']
file_doc = files_collection.find_one({'_id': ObjectId(file_id)})
if file_doc is None or file_doc.get('backend') != 'gcs':
return
# For textures -- the map type should be part of the name.
map_type = file_props.get('map_type', '')
storage = GoogleCloudStorageBucket(str(node['project']))
blob = storage.get_blob(file_doc['file_path'])
if blob is None:
log.warning('Unable to find blob for file %s in project %s',
file_doc['file_path'], file_doc['project'])
return
# Pick file extension from original filename
_, ext = os.path.splitext(file_doc['filename'])
name = _format_name(node['name'], ext, map_type=map_type)
blob.update_filename(name)
# Assign the same name to variations
for v in file_doc.get('variations', []):
_, override_ext = os.path.splitext(v['file_path'])
name = _format_name(node['name'], override_ext, v['size'], map_type=map_type)
blob = storage.get_blob(v['file_path'])
if blob is None:
log.info('Unable to find blob for file %s in project %s. This can happen if the '
'video encoding is still processing.', v['file_path'], node['project'])
continue
blob.update_filename(name)
# Currently we search for 'file' and 'files' keys in the object properties.
# This could become a bit more flexible and rely on a true reference of the
# file object type from the schema.
if 'file' in node['properties']:
_update_name(node['properties']['file'], {})
if 'files' in node['properties']:
for file_props in node['properties']['files']:
_update_name(file_props['file'], file_props)

View File

@ -0,0 +1,134 @@
import logging
import pathlib
import typing
from flask import current_app
__all__ = ['LocalBucket', 'LocalBlob']
from .abstract import Bucket, Blob, FileType, Path
class LocalBucket(Bucket):
backend_name = 'local'
def __init__(self, name: str) -> None:
super().__init__(name)
self._log = logging.getLogger(f'{__name__}.LocalBucket')
# For local storage, the name is actually a partial path, relative
# to the local storage root.
self.root = pathlib.Path(current_app.config['STORAGE_DIR'])
self.bucket_path = pathlib.PurePosixPath(self.name[:2]) / self.name
self.abspath = self.root / self.bucket_path
def blob(self, blob_name: str) -> 'LocalBlob':
return LocalBlob(name=blob_name, bucket=self)
def get_blob(self, blob_name: str) -> typing.Optional['LocalBlob']:
# TODO: Check if file exists, otherwise None
return self.blob(blob_name)
def copy_blob(self, blob: Blob, to_bucket: Bucket):
"""Copies a blob from the current bucket to the other bucket.
Implementations only need to support copying between buckets of the
same storage backend.
"""
assert isinstance(blob, LocalBlob)
assert isinstance(to_bucket, LocalBucket)
self._log.info('Copying %s to bucket %s', blob, to_bucket)
dest_blob = to_bucket.blob(blob.name)
# TODO: implement content type handling for local storage.
self._log.warning('Unable to set correct file content type for %s', dest_blob)
fpath = blob.abspath()
if not fpath.exists():
if not fpath.parent.exists():
raise FileNotFoundError(f'File {fpath} does not exist, and neither does its parent,'
f' unable to copy to {to_bucket}')
raise FileNotFoundError(f'File {fpath} does not exist, unable to copy to {to_bucket}')
with open(fpath, 'rb') as src_file:
dest_blob.create_from_file(src_file, content_type='application/x-octet-stream')
def rename_blob(self, blob: 'LocalBlob', new_name: str) -> 'LocalBlob':
"""Rename the blob, returning the new Blob."""
assert isinstance(blob, LocalBlob)
self._log.info('Renaming %s to %r', blob, new_name)
new_blob = LocalBlob(new_name, self)
old_path = blob.abspath()
new_path = new_blob.abspath()
new_path.parent.mkdir(parents=True, exist_ok=True)
old_path.rename(new_path)
return new_blob
class LocalBlob(Blob):
"""Blob representing a local file on the filesystem."""
bucket: LocalBucket
def __init__(self, name: str, bucket: LocalBucket) -> None:
super().__init__(name, bucket)
self._log = logging.getLogger(f'{__name__}.LocalBlob')
self.partial_path = Path(name[:2]) / name
def abspath(self) -> pathlib.Path:
"""Returns a concrete, absolute path to the local file."""
return pathlib.Path(self.bucket.abspath / self.partial_path)
def get_url(self, *, is_public: bool) -> str:
from flask import url_for
path = self.bucket.bucket_path / self.partial_path
url = url_for('file_storage.index', file_name=str(path), _external=True,
_scheme=current_app.config['SCHEME'])
return url
def create_from_file(self, file_obj: FileType, *,
content_type: str,
file_size: int = -1):
assert hasattr(file_obj, 'read')
import shutil
# Ensure path exists before saving
my_path = self.abspath()
my_path.parent.mkdir(exist_ok=True, parents=True)
with my_path.open('wb') as outfile:
shutil.copyfileobj(typing.cast(typing.IO, file_obj), outfile)
self._size_in_bytes = file_size
def update_filename(self, filename: str, *, is_attachment=True):
# TODO: implement this for local storage.
self._log.info('update_filename(%r) not supported', filename)
def update_content_type(self, content_type: str, content_encoding: str = ''):
self._log.info('update_content_type(%r, %r) not supported', content_type, content_encoding)
def make_public(self):
# No-op on this storage backend.
pass
def exists(self) -> bool:
return self.abspath().exists()
def touch(self):
"""Touch the file, creating parent directories if needed."""
path = self.abspath()
path.parent.mkdir(parents=True, exist_ok=True)
path.touch(exist_ok=True)

pillar/api/latest.py Normal file
View File

@ -0,0 +1,112 @@
import typing
import bson
import pymongo
from flask import Blueprint, current_app
from pillar.api.utils import jsonify
blueprint = Blueprint('latest', __name__)
def _public_project_ids() -> typing.List[bson.ObjectId]:
"""Returns a list of ObjectIDs of public projects.
Memoized in setup_app().
"""
proj_coll = current_app.db('projects')
result = proj_coll.find({'is_private': False}, {'_id': 1})
return [p['_id'] for p in result]
def latest_nodes(db_filter, projection, limit):
"""Returns the latest nodes, of a certain type, of public projects.
Also includes information about the project and the user of each node.
"""
proj = {
'_created': 1,
'_updated': 1,
'project._id': 1,
'project.url': 1,
'project.name': 1,
'name': 1,
'node_type': 1,
'parent': 1,
**projection,
}
nodes_coll = current_app.db('nodes')
pipeline = [
{'$match': {'_deleted': {'$ne': True}}},
{'$match': db_filter},
{'$match': {'project': {'$in': _public_project_ids()}}},
{'$sort': {'_created': pymongo.DESCENDING}},
{'$limit': limit},
{'$lookup': {"from": "users",
"localField": "user",
"foreignField": "_id",
"as": "user"}},
{'$unwind': {'path': "$user"}},
{'$lookup': {"from": "projects",
"localField": "project",
"foreignField": "_id",
"as": "project"}},
{'$unwind': {'path': "$project"}},
{'$project': proj},
]
latest = nodes_coll.aggregate(pipeline)
return list(latest)
@blueprint.route('/assets')
def latest_assets():
latest = latest_nodes({'node_type': 'asset',
'properties.status': 'published'},
{'name': 1, 'node_type': 1,
'parent': 1, 'picture': 1, 'properties.status': 1,
'properties.content_type': 1,
'properties.duration_seconds': 1,
'permissions.world': 1},
12)
return jsonify({'_items': latest})
@blueprint.route('/comments')
def latest_comments():
latest = latest_nodes({'node_type': 'comment',
'properties.status': 'published'},
{'parent': 1, 'user.full_name': 1,
'properties.content': 1, 'node_type': 1,
'properties.status': 1,
'properties.is_reply': 1},
10)
# Embed the comments' parents.
# TODO: move to aggregation pipeline.
nodes = current_app.data.driver.db['nodes']
parents = {}
for comment in latest:
parent_id = comment['parent']
if parent_id in parents:
comment['parent'] = parents[parent_id]
continue
parent = nodes.find_one(parent_id)
parents[parent_id] = parent
comment['parent'] = parent
return jsonify({'_items': latest})
def setup_app(app, url_prefix):
global _public_project_ids
app.register_api_blueprint(blueprint, url_prefix=url_prefix)
cached = app.cache.cached(timeout=3600)
_public_project_ids = cached(_public_project_ids)
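# Illustrative sketch of the memoization pattern used by setup_app() above:
# the module-level function is rebound (via `global`) to a cached wrapper, so
# every later caller that resolves the name through the module hits the cache.
# Names below are hypothetical.
def _slow_lookup():
    ...  # e.g. an expensive database query

def _setup(app):
    global _slow_lookup
    _slow_lookup = app.cache.cached(timeout=3600)(_slow_lookup)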

View File

@ -2,16 +2,15 @@ import base64
import datetime
import hashlib
import logging
import rsa.randnum
import typing
import bcrypt
from bson import tz_util
from eve.methods.post import post_internal
from flask import abort, Blueprint, current_app, jsonify, request
from application.utils.authentication import store_token
from application.utils.authentication import create_new_user_document
from application.utils.authentication import make_unique_username
from pillar.api.utils.authentication import create_new_user_document
from pillar.api.utils.authentication import make_unique_username
from pillar.api.utils.authentication import store_token
from pillar.api.utils import utcnow
blueprint = Blueprint('authentication', __name__)
log = logging.getLogger(__name__)
@ -31,7 +30,7 @@ def create_local_user(email, password):
# Make username unique
db_user['username'] = make_unique_username(email)
# Create the user
r, _, _, status = post_internal('users', db_user)
r, _, _, status = current_app.post_internal('users', db_user)
if status != 201:
log.error('internal response: %r %r', status, r)
return abort(500)
@ -39,17 +38,7 @@ def create_local_user(email, password):
return r['_id']
@blueprint.route('/make-token', methods=['POST'])
def make_token():
"""Direct login for a user, without OAuth, using local database. Generates
a token that is passed back to Pillar Web and used in subsequent
transactions.
:return: a token string
"""
username = request.form['username']
password = request.form['password']
def get_local_user(username, password):
# Look up user in db
users_collection = current_app.data.driver.db['users']
user = users_collection.find_one({'username': username})
@ -64,36 +53,64 @@ def make_token():
hashed_password = hash_password(password, salt)
if hashed_password != credentials['token']:
return abort(403)
return user
@blueprint.route('/make-token', methods=['POST'])
def make_token():
"""Direct login for a user, without OAuth, using local database. Generates
a token that is passed back to Pillar Web and used in subsequent
transactions.
:return: a token string
"""
username = request.form['username']
password = request.form['password']
user = get_local_user(username, password)
token = generate_and_store_token(user['_id'])
return jsonify(token=token['token'])
def generate_and_store_token(user_id, days=15, prefix=''):
def generate_and_store_token(user_id, days=15, prefix=b'') -> dict:
"""Generates token based on random bits.
NOTE: the returned document includes the plain-text token.
DO NOT STORE OR LOG THIS unless there is a good reason to.
:param user_id: ObjectId of the owning user.
:param days: token will expire in this many days.
:param prefix: the token will be prefixed by this string, for easy identification.
:return: the token document.
:param prefix: the token will be prefixed by these bytes, for easy identification.
:return: the token document with the token in plain text as well as hashed.
"""
random_bits = rsa.randnum.read_random_bits(256)
if not isinstance(prefix, bytes):
raise TypeError('prefix must be bytes, not %s' % type(prefix))
import secrets
random_bits = secrets.token_bytes(32)
# Use 'xy' as altchars to prevent + and / characters from appearing.
# We never have to b64decode the string anyway.
token = prefix + base64.b64encode(random_bits, altchars='xy').strip('=')
token = prefix + base64.b64encode(random_bits, altchars=b'xy').strip(b'=')
token_expiry = datetime.datetime.now(tz=tz_util.utc) + datetime.timedelta(days=days)
return store_token(user_id, token, token_expiry)
token_expiry = utcnow() + datetime.timedelta(days=days)
return store_token(user_id, token.decode('ascii'), token_expiry)
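# Illustrative, standalone version of the encoding step above: altchars=b'xy'
# replaces the '+' and '/' of standard base64, and .strip(b'=') drops the
# padding, so the token is safe in URLs and HTTP headers. The b'SC' prefix is
# a hypothetical value for the `prefix` parameter.
import base64
import secrets

token = b'SC' + base64.b64encode(secrets.token_bytes(32), altchars=b'xy').strip(b'=')
# 32 random bytes -> 43 base64 characters once the padding is stripped.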
def hash_password(password, salt):
if isinstance(salt, unicode):
def hash_password(password: str, salt: typing.Union[str, bytes]) -> str:
password = password.encode()
if isinstance(salt, str):
salt = salt.encode('utf-8')
encoded_password = base64.b64encode(hashlib.sha256(password).digest())
return bcrypt.hashpw(encoded_password, salt)
hash = hashlib.sha256(password).digest()
encoded_password = base64.b64encode(hash)
hashed_password = bcrypt.hashpw(encoded_password, salt)
return hashed_password.decode('ascii')
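# Illustrative usage of hash_password() above, relying on bcrypt's convention
# that the salt is embedded in its output: verifying a candidate password
# re-hashes it with the stored hash acting as the salt.
import bcrypt

salt = bcrypt.gensalt()
stored = hash_password('correct horse', salt)
assert hash_password('correct horse', stored) == stored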
def setup_app(app, url_prefix):
app.register_blueprint(blueprint, url_prefix=url_prefix)
app.register_api_blueprint(blueprint, url_prefix=url_prefix)

View File

@ -0,0 +1,96 @@
_file_embedded_schema = {
'type': 'objectid',
'data_relation': {
'resource': 'files',
'field': '_id',
'embeddable': True
}
}
ATTACHMENT_SLUG_REGEX = r'[a-zA-Z0-9_\-]+'
attachments_embedded_schema = {
'type': 'dict',
'keysrules': {
'type': 'string',
'regex': '^%s$' % ATTACHMENT_SLUG_REGEX,
},
'valuesrules': {
'type': 'dict',
'schema': {
'oid': {
'type': 'objectid',
'required': True,
},
'collection': {
'type': 'string',
'allowed': ['files'],
'default': 'files',
},
},
},
}
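# Illustrative fragment (made-up ObjectId) that validates against the schema
# above; the slug key must match ATTACHMENT_SLUG_REGEX.
from bson import ObjectId

_example_attachments = {
    'header-image': {
        'oid': ObjectId('5e8f0a1b2c3d4e5f6a7b8c9d'),
        'collection': 'files',  # the only allowed value, and the default
    },
}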
# TODO (fsiddi) reference this schema in all node_types that allow ratings
ratings_embedded_schema = {
'type': 'dict',
# Total count of positive ratings (updated at every rating action)
'schema': {
'positive': {
'type': 'integer',
},
# Total count of negative ratings (updated at every rating action)
'negative': {
'type': 'integer',
},
# Collection of ratings, keyed by user
'ratings': {
'type': 'list',
'schema': {
'type': 'dict',
'schema': {
'user': {
'type': 'objectid',
'data_relation': {
'resource': 'users',
'field': '_id',
'embeddable': False
}
},
'is_positive': {
'type': 'boolean'
},
# Weight of the rating based on user rep and the context.
# Currently we have the following weights:
# - 1 auto null
# - 2 manual null
# - 3 auto valid
# - 4 manual valid
'weight': {
'type': 'integer'
}
}
}
},
'hot': {'type': 'float'},
},
}
# Import after defining the common embedded schemas, to prevent dependency cycles.
from pillar.api.node_types.asset import node_type_asset
from pillar.api.node_types.blog import node_type_blog
from pillar.api.node_types.comment import node_type_comment
from pillar.api.node_types.group import node_type_group
from pillar.api.node_types.group_hdri import node_type_group_hdri
from pillar.api.node_types.group_texture import node_type_group_texture
from pillar.api.node_types.hdri import node_type_hdri
from pillar.api.node_types.page import node_type_page
from pillar.api.node_types.post import node_type_post
from pillar.api.node_types.storage import node_type_storage
from pillar.api.node_types.text import node_type_text
from pillar.api.node_types.texture import node_type_texture
PILLAR_NODE_TYPES = (node_type_asset, node_type_blog, node_type_comment, node_type_group,
node_type_group_hdri, node_type_group_texture, node_type_hdri, node_type_page,
node_type_post, node_type_storage, node_type_text, node_type_texture)
PILLAR_NAMED_NODE_TYPES = {nt['name']: nt for nt in PILLAR_NODE_TYPES}

View File

@ -1,4 +1,4 @@
from manage_extra.node_types import _file_embedded_schema
from pillar.api.node_types import _file_embedded_schema, attachments_embedded_schema
node_type_asset = {
'name': 'asset',
@ -24,29 +24,14 @@ node_type_asset = {
'content_type': {
'type': 'string'
},
# The duration of a video asset in seconds.
'duration_seconds': {
'type': 'integer'
},
# We point to the original file (and use it to extract any relevant
# variation useful for our scope).
'file': _file_embedded_schema,
'attachments': {
'type': 'list',
'schema': {
'type': 'dict',
'schema': {
'field': {'type': 'string'},
'files': {
'type': 'list',
'schema': {
'type': 'dict',
'schema': {
'file': _file_embedded_schema,
'slug': {'type': 'string', 'minlength': 1},
'size': {'type': 'string'}
}
}
}
}
}
},
'attachments': attachments_embedded_schema,
# Tags for search
'tags': {
'type': 'list',
@ -58,17 +43,30 @@ node_type_asset = {
# this schema: "Root > Nested Category > One More Nested Category"
'categories': {
'type': 'string'
}
},
'license_type': {
'default': 'cc-by',
'type': 'string',
'allowed': [
'cc-by',
'cc-0',
'cc-by-sa',
'cc-by-nd',
'cc-by-nc',
'copyright'
]
},
'license_notes': {
'type': 'string'
},
},
'form_schema': {
'status': {},
'content_type': {'visible': False},
'file': {},
'attachments': {'visible': False},
'duration_seconds': {'visible': False},
'order': {'visible': False},
'tags': {'visible': False},
'categories': {'visible': False}
'categories': {'visible': False},
'license_type': {'visible': False},
'license_notes': {'visible': False},
},
'permissions': {
}
}

View File

@ -0,0 +1,17 @@
node_type_blog = {
'name': 'blog',
'description': 'Container for node_type post.',
'dyn_schema': {
'categories': {
'type': 'list',
'schema': {
'type': 'string'
}
}
},
'form_schema': {
'categories': {},
'template': {},
},
'parent': ['project', ],
}

View File

@ -1,12 +1,15 @@
from pillar.api.node_types import attachments_embedded_schema
from pillar.api.node_types.utils import markdown_fields
node_type_comment = {
'name': 'comment',
'description': 'Comments for asset nodes, pages, etc.',
'dyn_schema': {
# The actual comment content (initially Markdown format)
'content': {
'type': 'string',
'minlength': 5,
},
# The actual comment content
**markdown_fields(
'content',
minlength=5,
required=True),
'status': {
'type': 'string',
'allowed': [
@ -48,18 +51,9 @@ node_type_comment = {
}
},
'confidence': {'type': 'float'},
'is_reply': {'type': 'boolean'}
},
'form_schema': {
'content': {},
'status': {},
'rating_positive': {},
'rating_negative': {},
'ratings': {},
'confidence': {},
'is_reply': {}
'is_reply': {'type': 'boolean'},
'attachments': attachments_embedded_schema,
},
'form_schema': {},
'parent': ['asset', 'comment'],
'permissions': {
}
}

View File

@ -1,9 +1,9 @@
node_type_group = {
'name': 'group',
'description': 'Generic group node type edited',
'description': 'Folder node type',
'parent': ['group', 'project'],
'dyn_schema': {
# Used for sorting within the context of a group
'order': {
'type': 'integer'
},
@ -20,14 +20,12 @@ node_type_group = {
'notes': {
'type': 'string',
'maxlength': 256,
},
}
},
'form_schema': {
'url': {'visible': False},
'status': {},
'notes': {'visible': False},
'order': {'visible': False}
},
'permissions': {
}
}

View File

@ -15,8 +15,5 @@ node_type_group_hdri = {
],
}
},
'form_schema': {
'status': {},
'order': {}
}
'form_schema': {},
}

View File

@ -15,8 +15,5 @@ node_type_group_texture = {
],
}
},
'form_schema': {
'status': {},
'order': {}
}
'form_schema': {},
}

View File

@ -1,4 +1,4 @@
from manage_extra.node_types import _file_embedded_schema
from pillar.api.node_types import _file_embedded_schema
node_type_hdri = {
# When adding this node type, make sure to enable CORS from * on the GCS
@ -7,6 +7,11 @@ node_type_hdri = {
'description': 'HDR Image',
'parent': ['group_hdri'],
'dyn_schema': {
# Default yaw angle in degrees.
'default_yaw': {
'type': 'float',
'default': 0.0
},
'status': {
'type': 'string',
'allowed': [
@ -62,5 +67,5 @@ node_type_hdri = {
'content_type': {'visible': False},
'tags': {'visible': False},
'categories': {'visible': False},
}
},
}

View File

@ -0,0 +1,24 @@
from pillar.api.node_types import attachments_embedded_schema
node_type_page = {
'name': 'page',
'description': 'A single page',
'dyn_schema': {
'status': {
'type': 'string',
'allowed': [
'published',
'pending'
],
'default': 'pending'
},
'url': {
'type': 'string'
},
'attachments': attachments_embedded_schema,
},
'form_schema': {
'attachments': {'visible': False},
},
'parent': ['project', ],
}

View File

@ -0,0 +1,33 @@
from pillar.api.node_types import attachments_embedded_schema
from pillar.api.node_types.utils import markdown_fields
node_type_post = {
'name': 'post',
'description': 'A blog post, for any project',
'dyn_schema': {
**markdown_fields('content',
minlength=5,
maxlength=90000,
required=True),
'status': {
'type': 'string',
'allowed': [
'published',
'pending'
],
'default': 'pending'
},
# Global categories, will be enforced to be 1 word
'category': {
'type': 'string',
},
'url': {
'type': 'string'
},
'attachments': attachments_embedded_schema,
},
'form_schema': {
'attachments': {'visible': False},
},
'parent': ['blog', ],
}

View File

@ -16,22 +16,11 @@ node_type_storage = {
'subdir': {
'type': 'string',
},
# Which backend is used to store the files (gcs, pillar, bam, cdnsun)
# Which backend is used to store the files (gcs, local)
'backend': {
'type': 'string',
},
},
'form_schema': {
'subdir': {},
'project': {},
'backend': {}
},
'form_schema': {},
'parent': ['group', 'project'],
'permissions': {
# 'groups': [{
# 'group': app.config['ADMIN_USER_GROUP'],
# 'methods': ['GET', 'PUT', 'POST']
# }],
# 'users': [],
}
}

View File

@ -24,5 +24,5 @@ node_type_text = {
},
'form_schema': {
'shared_slug': {'visible': False},
}
},
}

View File

@ -1,4 +1,4 @@
from manage_extra.node_types import _file_embedded_schema
from pillar.api.node_types import _file_embedded_schema
node_type_texture = {
'name': 'texture',
@ -27,13 +27,19 @@ node_type_texture = {
'map_type': {
'type': 'string',
'allowed': [
'color',
'specular',
'bump',
'normal',
'translucency',
'emission',
'alpha'
"alpha",
"ambient occlusion",
"bump",
"color",
"displacement",
"emission",
"glossiness",
"id",
"mask",
"normal",
"roughness",
"specular",
"translucency",
]}
}
}
@ -58,15 +64,8 @@ node_type_texture = {
}
},
'form_schema': {
'status': {},
'content_type': {'visible': False},
'files': {},
'is_tileable': {},
'is_landscape': {},
'resolution': {},
'aspect_ratio': {},
'order': {},
'tags': {'visible': False},
'categories': {'visible': False},
}
},
}

View File

@ -0,0 +1,34 @@
from pillar import markdown
def markdown_fields(field: str, **kwargs) -> dict:
"""
Creates a field for the markdown, and a field for the cached html.
Example usage:
schema = {'myDoc': {
'type': 'list',
'schema': {
'type': 'dict',
'schema': {
**markdown_fields('content', required=True),
}
},
}}
:param field:
:return:
"""
cache_field = markdown.cache_field_name(field)
return {
field: {
'type': 'string',
**kwargs
},
cache_field: {
'type': 'string',
'readonly': True,
'default': field, # Name of the field containing the markdown. Will be input to the coerce function.
'coerce': 'markdown',
}
}
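# Illustratively, markdown_fields('content', required=True) expands to roughly
# the following. The cache-field name is produced by markdown.cache_field_name(),
# so '_content_html' is an assumption here, not the verified output.
_example_expansion = {
    'content': {'type': 'string', 'required': True},
    '_content_html': {
        'type': 'string',
        'readonly': True,
        'default': 'content',  # name of the markdown source field, fed to the 'markdown' coerce rule
        'coerce': 'markdown',
    },
}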

View File

@ -0,0 +1,274 @@
import base64
import datetime
import logging
import pymongo.errors
import werkzeug.exceptions as wz_exceptions
from flask import current_app, Blueprint, request
from pillar.api.nodes import eve_hooks, comments, activities
from pillar.api.utils import str2id, jsonify
from pillar.api.utils.authorization import check_permissions, require_login
from pillar.web.utils import pretty_date
log = logging.getLogger(__name__)
blueprint = Blueprint('nodes_api', __name__)
ROLES_FOR_SHARING = ROLES_FOR_COMMENTING = {'subscriber', 'demo'}
@blueprint.route('/<node_id>/share', methods=['GET', 'POST'])
@require_login(require_roles=ROLES_FOR_SHARING)
def share_node(node_id):
"""Shares a node, or returns sharing information."""
node_id = str2id(node_id)
nodes_coll = current_app.data.driver.db['nodes']
node = nodes_coll.find_one({'_id': node_id},
projection={
'project': 1,
'node_type': 1,
'short_code': 1
})
if not node:
raise wz_exceptions.NotFound('Node %s does not exist.' % node_id)
check_permissions('nodes', node, request.method)
log.info('Sharing node %s', node_id)
short_code = node.get('short_code')
status = 200
if not short_code:
if request.method == 'POST':
short_code = generate_and_store_short_code(node)
make_world_gettable(node)
status = 201
else:
return '', 204
return jsonify(eve_hooks.short_link_info(short_code), status=status)
@blueprint.route('/<string(length=24):node_path>/comments', methods=['GET'])
def get_node_comments(node_path: str):
node_id = str2id(node_path)
return comments.get_node_comments(node_id)
@blueprint.route('/<string(length=24):node_path>/comments', methods=['POST'])
@require_login(require_roles=ROLES_FOR_COMMENTING)
def post_node_comment(node_path: str):
node_id = str2id(node_path)
msg = request.json['msg']
attachments = request.json.get('attachments', {})
return comments.post_node_comment(node_id, msg, attachments)
@blueprint.route('/<string(length=24):node_path>/comments/<string(length=24):comment_path>', methods=['PATCH'])
@require_login(require_roles=ROLES_FOR_COMMENTING)
def patch_node_comment(node_path: str, comment_path: str):
node_id = str2id(node_path)
comment_id = str2id(comment_path)
msg = request.json['msg']
attachments = request.json.get('attachments', {})
return comments.patch_node_comment(node_id, comment_id, msg, attachments)
@blueprint.route('/<string(length=24):node_path>/comments/<string(length=24):comment_path>/vote', methods=['POST'])
@require_login(require_roles=ROLES_FOR_COMMENTING)
def post_node_comment_vote(node_path: str, comment_path: str):
node_id = str2id(node_path)
comment_id = str2id(comment_path)
vote_str = request.json['vote']
vote = int(vote_str)
return comments.post_node_comment_vote(node_id, comment_id, vote)
@blueprint.route('/<string(length=24):node_path>/activities', methods=['GET'])
def activities_for_node(node_path: str):
node_id = str2id(node_path)
return jsonify(activities.for_node(node_id))
@blueprint.route('/tagged/')
@blueprint.route('/tagged/<tag>')
def tagged(tag=''):
"""Return all tagged nodes of public projects as JSON."""
from pillar.auth import current_user
# We explicitly register the tagless endpoint to raise a 404, otherwise the PATCH
# handler on /api/nodes/<node_id> will return a 405 Method Not Allowed.
if not tag:
raise wz_exceptions.NotFound()
# Build the (cached) list of tagged nodes
agg_list = _tagged(tag)
for node in agg_list:
if node['properties'].get('duration_seconds'):
node['properties']['duration'] = datetime.timedelta(seconds=node['properties']['duration_seconds'])
if node.get('_created') is not None:
node['pretty_created'] = pretty_date(node['_created'])
# If the user is anonymous, no more information is needed and we return
if current_user.is_anonymous:
return jsonify(agg_list)
# If the user is authenticated, attach view_progress for video assets
view_progress = current_user.nodes['view_progress']
for node in agg_list:
node_id = str(node['_id'])
# View progress should be added only for nodes of type 'asset' and
# with content_type 'video', only if the video was already in the watched
# list for the current user.
if node_id in view_progress:
node['view_progress'] = view_progress[node_id]
return jsonify(agg_list)
def _tagged(tag: str):
"""Fetch all public nodes with the given tag.
This function is cached, see setup_app().
"""
nodes_coll = current_app.db('nodes')
agg = nodes_coll.aggregate([
{'$match': {'properties.tags': tag,
'_deleted': {'$ne': True}}},
# Only get nodes from public projects. This is done after matching the
# tagged nodes, because most likely nobody else will be able to tag
# nodes anyway.
{'$lookup': {
'from': 'projects',
'localField': 'project',
'foreignField': '_id',
'as': '_project',
}},
{'$unwind': '$_project'},
{'$match': {'_project.is_private': False}},
{'$addFields': {
'project._id': '$_project._id',
'project.name': '$_project.name',
'project.url': '$_project.url',
}},
# Don't return the entire project/file for each node.
{'$project': {'_project': False}},
{'$sort': {'_created': -1}}
])
return list(agg)
def generate_and_store_short_code(node):
nodes_coll = current_app.data.driver.db['nodes']
node_id = node['_id']
log.debug('Creating new short link for node %s', node_id)
max_attempts = 10
for attempt in range(1, max_attempts):
# Generate a new short code
short_code = create_short_code(node)
log.debug('Created short code for node %s: %s', node_id, short_code)
node['short_code'] = short_code
# Store it in MongoDB
try:
result = nodes_coll.update_one({'_id': node_id},
{'$set': {'short_code': short_code}})
break
except pymongo.errors.DuplicateKeyError:
log.info('Duplicate key while creating short code, retrying (attempt %i/%i)',
attempt, max_attempts)
pass
else:
log.error('Unable to find unique short code for node %s after %i attempts, failing!',
node_id, max_attempts)
raise wz_exceptions.InternalServerError('Unable to create unique short code for node %s' %
node_id)
# We were able to store a short code, now let's verify the result.
if result.matched_count != 1:
log.warning('Unable to update node %s with new short_links=%r', node_id, node['short_code'])
raise wz_exceptions.InternalServerError('Unable to update node %s with new short links' %
node_id)
return short_code
def make_world_gettable(node):
nodes_coll = current_app.data.driver.db['nodes']
node_id = node['_id']
log.debug('Ensuring the world can read node %s', node_id)
world_perms = set(node.get('permissions', {}).get('world', []))
world_perms.add('GET')
world_perms = list(world_perms)
result = nodes_coll.update_one({'_id': node_id},
{'$set': {'permissions.world': world_perms}})
if result.matched_count != 1:
log.warning('Unable to update node %s with new permissions.world=%r', node_id, world_perms)
raise wz_exceptions.InternalServerError('Unable to update node %s with new permissions' %
node_id)
def create_short_code(node) -> str:
"""Generates a new 'short code' for the node."""
import secrets
length = current_app.config['SHORT_CODE_LENGTH']
# Base64 encoding will expand it a bit, so we'll cut that off later.
# It's a good idea to start with enough bytes, though.
bits = secrets.token_bytes(length)
short_code = base64.b64encode(bits, altchars=b'xy').rstrip(b'=')
short_code = short_code[:length].decode('ascii')
return short_code
def setup_app(app, url_prefix):
global _tagged
cached = app.cache.memoize(timeout=300)
_tagged = cached(_tagged)
from . import patch
patch.setup_app(app, url_prefix=url_prefix)
app.on_fetched_item_nodes += eve_hooks.before_returning_node
app.on_fetched_resource_nodes += eve_hooks.before_returning_nodes
app.on_replace_nodes += eve_hooks.before_replacing_node
app.on_replace_nodes += eve_hooks.texture_sort_files
app.on_replace_nodes += eve_hooks.deduct_content_type_and_duration
app.on_replace_nodes += eve_hooks.node_set_default_picture
app.on_replaced_nodes += eve_hooks.after_replacing_node
app.on_insert_nodes += eve_hooks.before_inserting_nodes
app.on_insert_nodes += eve_hooks.nodes_deduct_content_type_and_duration
app.on_insert_nodes += eve_hooks.nodes_set_default_picture
app.on_insert_nodes += eve_hooks.textures_sort_files
app.on_inserted_nodes += eve_hooks.after_inserting_nodes
app.on_update_nodes += eve_hooks.texture_sort_files
app.on_delete_item_nodes += eve_hooks.before_deleting_node
app.on_deleted_item_nodes += eve_hooks.after_deleting_node
app.register_api_blueprint(blueprint, url_prefix=url_prefix)
activities.setup_app(app)

View File

@ -0,0 +1,43 @@
from eve.methods import get
import pillar.api.users.avatar
def for_node(node_id):
activities, _, _, status, _ =\
get('activities',
{
'$or': [
{'object_type': 'node',
'object': node_id},
{'context_object_type': 'node',
'context_object': node_id},
],
},)
for act in activities['_items']:
act['actor_user'] = _user_info(act['actor_user'])
return activities
def _user_info(user_id):
users, _, _, status, _ = get('users', {'_id': user_id})
if len(users['_items']) > 0:
user = users['_items'][0]
user['avatar'] = pillar.api.users.avatar.url(user)
public_fields = {'full_name', 'username', 'avatar'}
for field in list(user.keys()):
if field not in public_fields:
del user[field]
return user
return {}
def setup_app(app):
global _user_info
decorator = app.cache.memoize(timeout=300, make_name='%s.public_user_info' % __name__)
_user_info = decorator(_user_info)

View File

@ -0,0 +1,302 @@
import logging
from datetime import datetime
import pymongo
import typing
import bson
import attr
import werkzeug.exceptions as wz_exceptions
import pillar
from pillar import current_app, shortcodes
import pillar.api.users.avatar
from pillar.api.nodes.custom.comment import patch_comment
from pillar.api.utils import jsonify
from pillar.auth import current_user
import pillar.markdown
log = logging.getLogger(__name__)
@attr.s(auto_attribs=True)
class UserDO:
id: str
full_name: str
avatar_url: str
badges_html: str
@attr.s(auto_attribs=True)
class CommentPropertiesDO:
attachments: typing.Dict
rating_positive: int = 0
rating_negative: int = 0
@attr.s(auto_attribs=True)
class CommentDO:
id: bson.ObjectId
parent: bson.ObjectId
project: bson.ObjectId
user: UserDO
msg_html: str
msg_markdown: str
properties: CommentPropertiesDO
created: datetime
updated: datetime
etag: str
replies: typing.List['CommentDO'] = []
current_user_rating: typing.Optional[bool] = None
@attr.s(auto_attribs=True)
class CommentTreeDO:
node_id: bson.ObjectId
project: bson.ObjectId
nbr_of_comments: int = 0
comments: typing.List[CommentDO] = []
def _get_markdowned_html(document: dict, field_name: str) -> str:
cache_field_name = pillar.markdown.cache_field_name(field_name)
html = document.get(cache_field_name)
if html is None:
markdown_src = document.get(field_name) or ''
html = pillar.markdown.markdown(markdown_src)
return html
def jsonify_data_object(data_object: attr):
return jsonify(
attr.asdict(data_object,
recurse=True)
)
class CommentTreeBuilder:
def __init__(self, node_id: bson.ObjectId):
self.node_id = node_id
self.nbr_of_Comments: int = 0
def build(self) -> CommentTreeDO:
enriched_comments = self.child_comments(
self.node_id,
sort={'properties.rating_positive': pymongo.DESCENDING,
'_created': pymongo.DESCENDING})
project_id = self.get_project_id()
return CommentTreeDO(
node_id=self.node_id,
project=project_id,
nbr_of_comments=self.nbr_of_Comments,
comments=enriched_comments
)
def child_comments(self, node_id: bson.ObjectId, sort: dict) -> typing.List[CommentDO]:
raw_comments = self.mongodb_comments(node_id, sort)
return [self.enrich(comment) for comment in raw_comments]
def enrich(self, mongo_comment: dict) -> CommentDO:
self.nbr_of_Comments += 1
comment = to_comment_data_object(mongo_comment)
comment.replies = self.child_comments(mongo_comment['_id'],
sort={'_created': pymongo.ASCENDING})
return comment
def get_project_id(self):
nodes_coll = current_app.db('nodes')
result = nodes_coll.find_one({'_id': self.node_id})
return result['project']
@classmethod
def mongodb_comments(cls, node_id: bson.ObjectId, sort: dict) -> typing.Iterator:
nodes_coll = current_app.db('nodes')
return nodes_coll.aggregate([
{'$match': {'node_type': 'comment',
'_deleted': {'$ne': True},
'properties.status': 'published',
'parent': node_id}},
{'$lookup': {"from": "users",
"localField": "user",
"foreignField": "_id",
"as": "user"}},
{'$unwind': {'path': "$user"}},
{'$sort': sort},
])
def get_node_comments(node_id: bson.ObjectId):
comments_tree = CommentTreeBuilder(node_id).build()
return jsonify_data_object(comments_tree)
def post_node_comment(parent_id: bson.ObjectId, markdown_msg: str, attachments: dict):
parent_node = find_node_or_raise(parent_id,
'User %s tried to update comment with bad parent_id %s',
current_user.objectid,
parent_id)
is_reply = parent_node['node_type'] == 'comment'
comment = dict(
parent=parent_id,
project=parent_node['project'],
name='Comment',
user=current_user.objectid,
node_type='comment',
properties=dict(
content=markdown_msg,
status='published',
is_reply=is_reply,
confidence=0,
rating_positive=0,
rating_negative=0,
attachments=attachments,
),
permissions=dict(
users=[dict(
user=current_user.objectid,
methods=['PUT'])
]
)
)
r, _, _, status = current_app.post_internal('nodes', comment)
if status != 201:
log.warning('Unable to post comment on %s as %s: %s',
parent_id, current_user.objectid, r)
raise wz_exceptions.InternalServerError('Unable to create comment')
comment_do = get_comment(parent_id, r['_id'])
return jsonify_data_object(comment_do), 201
def find_node_or_raise(node_id, *args):
nodes_coll = current_app.db('nodes')
node_to_comment = nodes_coll.find_one({
'_id': node_id,
'_deleted': {'$ne': True},
})
if not node_to_comment:
log.warning(args)
raise wz_exceptions.UnprocessableEntity()
return node_to_comment
def patch_node_comment(parent_id: bson.ObjectId,
comment_id: bson.ObjectId,
markdown_msg: str,
attachments: dict):
_, _ = find_parent_and_comment_or_raise(parent_id, comment_id)
patch = dict(
op='edit',
content=markdown_msg,
attachments=attachments
)
json_result = patch_comment(comment_id, patch)
if json_result.json['result'] != 200:
raise wz_exceptions.InternalServerError('Failed to update comment')
comment_do = get_comment(parent_id, comment_id)
return jsonify_data_object(comment_do), 200
def find_parent_and_comment_or_raise(parent_id, comment_id):
parent = find_node_or_raise(parent_id,
'User %s tried to update comment with bad parent_id %s',
current_user.objectid,
parent_id)
comment = find_node_or_raise(comment_id,
'User %s tried to update comment with bad id %s',
current_user.objectid,
comment_id)
validate_comment_parent_relation(comment, parent)
return parent, comment
def validate_comment_parent_relation(comment, parent):
if comment['parent'] != parent['_id']:
log.warning('User %s tried to update comment with bad parent/comment pair.'
' parent_id: %s comment_id: %s',
current_user.objectid, parent['_id'], comment['_id'])
raise wz_exceptions.BadRequest()
def get_comment(parent_id: bson.ObjectId, comment_id: bson.ObjectId) -> CommentDO:
nodes_coll = current_app.db('nodes')
mongo_comment = list(nodes_coll.aggregate([
{'$match': {'node_type': 'comment',
'_deleted': {'$ne': True},
'properties.status': 'published',
'parent': parent_id,
'_id': comment_id}},
{'$lookup': {"from": "users",
"localField": "user",
"foreignField": "_id",
"as": "user"}},
{'$unwind': {'path': "$user"}},
]))[0]
return to_comment_data_object(mongo_comment)
def to_comment_data_object(mongo_comment: dict) -> CommentDO:
def current_user_rating():
if current_user.is_authenticated:
for rating in mongo_comment['properties'].get('ratings', ()):
if str(rating['user']) != current_user.objectid:
continue
return rating['is_positive']
return None
user_dict = mongo_comment['user']
user = UserDO(
id=str(mongo_comment['user']['_id']),
full_name=user_dict['full_name'],
avatar_url=pillar.api.users.avatar.url(user_dict),
badges_html=user_dict.get('badges', {}).get('html', '')
)
html = _get_markdowned_html(mongo_comment['properties'], 'content')
html = shortcodes.render_commented(html, context=mongo_comment['properties'])
return CommentDO(
id=mongo_comment['_id'],
parent=mongo_comment['parent'],
project=mongo_comment['project'],
user=user,
msg_html=html,
msg_markdown=mongo_comment['properties']['content'],
current_user_rating=current_user_rating(),
created=mongo_comment['_created'],
updated=mongo_comment['_updated'],
etag=mongo_comment['_etag'],
properties=CommentPropertiesDO(
attachments=mongo_comment['properties'].get('attachments', {}),
rating_positive=mongo_comment['properties']['rating_positive'],
rating_negative=mongo_comment['properties']['rating_negative']
)
)
def post_node_comment_vote(parent_id: bson.ObjectId, comment_id: bson.ObjectId, vote: int):
normalized_vote = min(max(vote, -1), 1)
_, _ = find_parent_and_comment_or_raise(parent_id, comment_id)
actions = {
1: 'upvote',
0: 'revoke',
-1: 'downvote',
}
patch = dict(
op=actions[normalized_vote]
)
json_result = patch_comment(comment_id, patch)
if json_result.json['_status'] != 'OK':
raise wz_exceptions.InternalServerError('Failed to vote on comment')
comment_do = get_comment(parent_id, comment_id)
return jsonify_data_object(comment_do), 200
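# Hedged client-side sketch of the voting flow; the endpoint URL is an
# assumption, only the {'op': ...} payload shape comes from this module.
def _example_vote(api_url: str, comment_url_id: str, vote: int) -> dict:
    import requests
    op = {1: 'upvote', 0: 'revoke', -1: 'downvote'}[min(max(vote, -1), 1)]
    resp = requests.patch(f'{api_url}/nodes/{comment_url_id}', json={'op': op})
    resp.raise_for_status()
    return resp.json()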


@ -1,33 +1,55 @@
"""PATCH support for comment nodes."""
import logging
from flask import current_app
import werkzeug.exceptions as wz_exceptions
from application.utils import authorization, authentication, jsonify
from pillar.api.utils import authorization, authentication, jsonify, remove_private_keys
from . import register_patch_handler
log = logging.getLogger(__name__)
ROLES_FOR_COMMENT_VOTING = {u'subscriber', u'demo'}
VALID_COMMENT_OPERATIONS = {u'upvote', u'downvote', u'revoke'}
COMMENT_VOTING_OPS = {'upvote', 'downvote', 'revoke'}
VALID_COMMENT_OPERATIONS = COMMENT_VOTING_OPS.union({'edit'})
@register_patch_handler(u'comment')
@register_patch_handler('comment')
def patch_comment(node_id, patch):
assert_is_valid_patch(node_id, patch)
user_id = authentication.current_user_id()
# Find the node
if patch['op'] in COMMENT_VOTING_OPS:
result, node = vote_comment(user_id, node_id, patch)
else:
assert patch['op'] == 'edit', 'Invalid patch operation %s' % patch['op']
result, node = edit_comment(user_id, node_id, patch)
return jsonify({'_status': 'OK',
'result': result,
'properties': node['properties']
})
def vote_comment(user_id, node_id, patch):
"""Performs a voting operation."""
# Find the node. Includes a query on the properties.ratings array so
# that we only get the current user's rating.
nodes_coll = current_app.data.driver.db['nodes']
node_query = {'_id': node_id,
'$or': [{'properties.ratings.$.user': {'$exists': False}},
{'properties.ratings.$.user': user_id}]}
node = nodes_coll.find_one(node_query,
projection={'properties': 1})
projection={'properties': 1, 'user': 1})
if node is None:
log.warning('How can the node not be found?')
log.warning('User %s wanted to patch non-existing node %s' % (user_id, node_id))
raise wz_exceptions.NotFound('Node %s not found' % node_id)
# We don't allow the user to down/upvote their own nodes.
if user_id == node['user']:
raise wz_exceptions.Forbidden('You cannot vote on your own node')
props = node['properties']
# Find the current rating (if any)
@ -75,13 +97,14 @@ def patch_comment(node_id, patch):
return update
actions = {
u'upvote': upvote,
u'downvote': downvote,
u'revoke': revoke,
'upvote': upvote,
'downvote': downvote,
'revoke': revoke,
}
action = actions[patch['op']]
mongo_update = action()
nodes_coll = current_app.data.driver.db['nodes']
if mongo_update:
log.info('Running %s', mongo_update)
if rating:
@ -97,10 +120,50 @@ def patch_comment(node_id, patch):
projection={'properties.rating_positive': 1,
'properties.rating_negative': 1})
return jsonify({'_status': 'OK',
'result': result,
'properties': node['properties']
})
return result, node
def edit_comment(user_id, node_id, patch):
"""Edits a single comment.
Doesn't do permission checking; users are allowed to edit their own
comment, and this is not something you want to revoke anyway. Admins
can edit all comments.
"""
# Find the node. We need to fetch some more info than we use here, so that
# we can pass this stuff to Eve's patch_internal; that way the validation &
# authorisation system has enough info to work.
nodes_coll = current_app.data.driver.db['nodes']
node = nodes_coll.find_one(node_id)
if node is None:
log.warning('User %s wanted to patch non-existing node %s' % (user_id, node_id))
raise wz_exceptions.NotFound('Node %s not found' % node_id)
if node['user'] != user_id and not authorization.user_has_role('admin'):
raise wz_exceptions.Forbidden('You can only edit your own comments.')
node = remove_private_keys(node)
node['properties']['content'] = patch['content']
node['properties']['attachments'] = patch.get('attachments', {})
# Use Eve to PUT this node, as that also updates the etag and we want to replace attachments.
r, _, _, status = current_app.put_internal('nodes',
node,
concurrency_check=False,
_id=node_id)
if status != 200:
log.error('Error %i editing comment %s for user %s: %s',
status, node_id, user_id, r)
raise wz_exceptions.InternalServerError('Internal error %i from Eve' % status)
else:
log.info('User %s edited comment %s', user_id, node_id)
# Fetch the new content, so the client can show these without querying again.
node = nodes_coll.find_one(node_id, projection={
'properties.content': 1,
'properties._content_html': 1,
})
return status, node
def assert_is_valid_patch(node_id, patch):
@ -115,8 +178,12 @@ def assert_is_valid_patch(node_id, patch):
raise wz_exceptions.BadRequest('Operation should be one of %s'
% ', '.join(VALID_COMMENT_OPERATIONS))
if op not in COMMENT_VOTING_OPS:
# We can't check here, we need the node owner for that.
return
# See whether the user is allowed to patch
if authorization.user_matches_roles(ROLES_FOR_COMMENT_VOTING):
if authorization.user_matches_roles(current_app.config['ROLES_FOR_COMMENT_VOTING']):
log.debug('User is allowed to upvote/downvote comment')
return


@ -0,0 +1,336 @@
import collections
import functools
import logging
import urllib.parse
from bson import ObjectId
from werkzeug import exceptions as wz_exceptions
from pillar import current_app
from pillar.api.activities import activity_subscribe, activity_object_add
from pillar.api.file_storage_backends.gcs import update_file_name
from pillar.api.node_types import PILLAR_NAMED_NODE_TYPES
from pillar.api.utils import random_etag
from pillar.api.utils.authorization import check_permissions
log = logging.getLogger(__name__)
def before_returning_node(node):
# Run validation process, since GET on nodes entry point is public
check_permissions('nodes', node, 'GET', append_allowed_methods=True)
# Embed short_link_info if the node has a short_code.
short_code = node.get('short_code')
if short_code:
node['short_link'] = short_link_info(short_code)['short_link']
def before_returning_nodes(nodes):
for node in nodes['_items']:
before_returning_node(node)
def only_for_node_type_decorator(*required_node_type_names):
"""Returns a decorator that checks its first argument's node type.
If the node type is not of the required node type, returns None,
otherwise calls the wrapped function.
>>> deco = only_for_node_type_decorator('comment')
>>> @deco
... def handle_comment(node): pass
>>> deco = only_for_node_type_decorator('comment', 'post')
>>> @deco
... def handle_comment_or_post(node): pass
"""
# Convert to a set for efficient 'x in required_node_type_names' queries.
required_node_type_names = set(required_node_type_names)
def only_for_node_type(wrapped):
@functools.wraps(wrapped)
def wrapper(node, *args, **kwargs):
if node.get('node_type') not in required_node_type_names:
return
return wrapped(node, *args, **kwargs)
return wrapper
only_for_node_type.__doc__ = "Decorator, immediately returns when " \
"the first argument is not of type %s." % required_node_type_names
return only_for_node_type
def before_replacing_node(item, original):
check_permissions('nodes', original, 'PUT')
update_file_name(item)
def after_replacing_node(item, original):
"""Push an update to the Algolia index when a node item is updated. If the
project is private, prevent public indexing.
"""
from pillar.celery import search_index_tasks as index
projects_collection = current_app.data.driver.db['projects']
project = projects_collection.find_one({'_id': item['project']})
if project.get('is_private', False):
# Skip index updating and return
return
status = item['properties'].get('status', 'unpublished')
node_id = str(item['_id'])
if status == 'published':
index.node_save.delay(node_id)
else:
index.node_delete.delay(node_id)
def before_inserting_nodes(items):
"""Before inserting a node in the collection we check if the user is allowed
and we append the project id to it.
"""
from pillar.auth import current_user
nodes_collection = current_app.data.driver.db['nodes']
def find_parent_project(node):
"""Recursive function that finds the ultimate parent of a node."""
if node and 'parent' in node:
parent = nodes_collection.find_one({'_id': node['parent']})
return find_parent_project(parent)
if node:
return node
else:
return None
for item in items:
check_permissions('nodes', item, 'POST')
if 'parent' in item and 'project' not in item:
parent = nodes_collection.find_one({'_id': item['parent']})
project = find_parent_project(parent)
if project:
item['project'] = project['_id']
# Default the 'user' property to the current user.
item.setdefault('user', current_user.user_id)
def get_comment_verb_and_context_object_id(comment):
nodes_collection = current_app.data.driver.db['nodes']
verb = 'commented'
parent = nodes_collection.find_one({'_id': comment['parent']})
context_object_id = comment['parent']
while parent['node_type'] == 'comment':
# If the parent is a comment, we provide its own parent as
# context. We do this in order to point the user to an asset
# or group when viewing the notification.
verb = 'replied'
context_object_id = parent['parent']
parent = nodes_collection.find_one({'_id': parent['parent']})
return verb, context_object_id
def after_inserting_nodes(items):
for item in items:
context_object_id = None
# TODO: support should be added for mixed context
if item['node_type'] in PILLAR_NAMED_NODE_TYPES:
activity_subscribe(item['user'], 'node', item['_id'])
verb = 'posted'
context_object_id = item.get('parent')
if item['node_type'] == 'comment':
# Always subscribe to the parent node
activity_subscribe(item['user'], 'node', item['parent'])
verb, context_object_id = get_comment_verb_and_context_object_id(item)
# Subscribe to the parent of the parent comment (post or group)
activity_subscribe(item['user'], 'node', context_object_id)
if context_object_id and item['node_type'] in PILLAR_NAMED_NODE_TYPES:
# * Skip activity for first level items (since the context is not a
# node, but a project).
# * Don't automatically create activities for non-Pillar node types,
# as we don't know what would be a suitable verb (among other things).
activity_object_add(
item['user'],
verb,
'node',
item['_id'],
'node',
context_object_id
)
def deduct_content_type_and_duration(node_doc, original=None):
"""Deduct the content type from the attached file, if any."""
if node_doc['node_type'] != 'asset':
log.debug('deduct_content_type: called on node type %r, ignoring', node_doc['node_type'])
return
node_id = node_doc.get('_id')
try:
file_id = ObjectId(node_doc['properties']['file'])
except KeyError:
if node_id is None:
# Creation of a file-less node is allowed, but updates aren't.
return
log.warning('deduct_content_type: Asset without properties.file, rejecting.')
raise wz_exceptions.UnprocessableEntity('Missing file property for asset node')
files = current_app.data.driver.db['files']
file_doc = files.find_one({'_id': file_id},
{'content_type': 1,
'variations': 1})
if not file_doc:
log.warning('deduct_content_type: Node %s refers to non-existing file %s, rejecting.',
node_id, file_id)
raise wz_exceptions.UnprocessableEntity('File property refers to non-existing file')
# Guess the node content type from the file content type
file_type = file_doc['content_type']
if file_type.startswith('video/'):
content_type = 'video'
elif file_type.startswith('image/'):
content_type = 'image'
else:
content_type = 'file'
node_doc['properties']['content_type'] = content_type
if content_type == 'video':
duration = file_doc['variations'][0].get('duration')
if duration:
node_doc['properties']['duration_seconds'] = duration
else:
log.warning('Video file %s has no duration', file_id)
def nodes_deduct_content_type_and_duration(nodes):
for node in nodes:
deduct_content_type_and_duration(node)
def node_set_default_picture(node, original=None):
"""Uses the image of an image asset or colour map of texture node as picture."""
if node.get('picture'):
log.debug('Node %s already has a picture, not overriding', node.get('_id'))
return
node_type = node.get('node_type')
props = node.get('properties', {})
content = props.get('content_type')
if node_type == 'asset' and content == 'image':
image_file_id = props.get('file')
elif node_type == 'texture':
# Find the colour map, defaulting to the first image map available.
image_file_id = None
for image in props.get('files', []):
if image_file_id is None or image.get('map_type') == 'color':
image_file_id = image.get('file')
else:
log.debug('Not setting default picture on node type %s content type %s',
node_type, content)
return
if image_file_id is None:
log.debug('Nothing to set the picture to.')
return
log.debug('Setting default picture for node %s to %s', node.get('_id'), image_file_id)
node['picture'] = image_file_id
def nodes_set_default_picture(nodes):
for node in nodes:
node_set_default_picture(node)
def before_deleting_node(node: dict):
check_permissions('nodes', node, 'DELETE')
remove_project_references(node)
def remove_project_references(node):
project_id = node.get('project')
if not project_id:
return
node_id = node['_id']
log.info('Removing references to node %s from project %s', node_id, project_id)
projects_col = current_app.db('projects')
project = projects_col.find_one({'_id': project_id})
updates = collections.defaultdict(dict)
if project.get('header_node') == node_id:
updates['$unset']['header_node'] = node_id
project_reference_lists = ('nodes_blog', 'nodes_featured', 'nodes_latest')
for list_name in project_reference_lists:
references = project.get(list_name)
if not references:
continue
try:
references.remove(node_id)
except ValueError:
continue
updates['$set'][list_name] = references
if not updates:
return
updates['$set']['_etag'] = random_etag()
result = projects_col.update_one({'_id': project_id}, updates)
if result.modified_count != 1:
log.warning('Removing references to node %s from project %s resulted in %d modified documents (expected 1)',
node_id, project_id, result.modified_count)
def after_deleting_node(item):
from pillar.celery import search_index_tasks as index
index.node_delete.delay(str(item['_id']))
only_for_textures = only_for_node_type_decorator('texture')
@only_for_textures
def texture_sort_files(node, original=None):
"""Sort files alphabetically by map type, with colour map first."""
try:
files = node['properties']['files']
except KeyError:
return
# Sort the map types alphabetically, ensuring 'color' comes first.
as_dict = {f['map_type']: f for f in files}
types = sorted(as_dict.keys(), key=lambda k: '\0' if k == 'color' else k)
node['properties']['files'] = [as_dict[map_type] for map_type in types]
def textures_sort_files(nodes):
for node in nodes:
texture_sort_files(node)
def short_link_info(short_code):
"""Returns the short link info in a dict."""
short_link = urllib.parse.urljoin(
current_app.config['SHORT_LINK_BASE_URL'], short_code)
return {
'short_code': short_code,
'short_link': short_link,
}

pillar/api/nodes/moving.py

@ -0,0 +1,110 @@
"""Code for moving around nodes."""
import attr
import pymongo.database
from bson import ObjectId
from pillar import attrs_extra
import pillar.api.file_storage.moving
@attr.s
class NodeMover(object):
db = attr.ib(validator=attr.validators.instance_of(pymongo.database.Database))
skip_gcs = attr.ib(default=False, validator=attr.validators.instance_of(bool))
_log = attrs_extra.log('%s.NodeMover' % __name__)
def change_project(self, node, dest_proj):
"""Moves a node and children to a new project."""
assert isinstance(node, dict)
assert isinstance(dest_proj, dict)
for move_node in self._children(node):
self._change_project(move_node, dest_proj)
def _change_project(self, node, dest_proj):
"""Changes the project of a single node, non-recursively."""
node_id = node['_id']
proj_id = dest_proj['_id']
self._log.info('Moving node %s to project %s', node_id, proj_id)
# Find all files in the node.
moved_files = set()
self._move_files(moved_files, dest_proj, self._files(node.get('picture', None)))
self._move_files(moved_files, dest_proj, self._files(node['properties'], 'file'))
self._move_files(moved_files, dest_proj, self._files(node['properties'], 'files', 'file'))
self._move_files(moved_files, dest_proj,
self._files(node['properties'], 'attachments', 'files', 'file'))
# Switch the node's project after its files have been moved.
self._log.info('Switching node %s to project %s', node_id, proj_id)
nodes_coll = self.db['nodes']
update_result = nodes_coll.update_one({'_id': node_id},
{'$set': {'project': proj_id}})
if update_result.matched_count != 1:
raise RuntimeError(
'Unable to update node %s in MongoDB: matched_count=%i; modified_count=%i' % (
node_id, update_result.matched_count, update_result.modified_count))
def _move_files(self, moved_files, dest_proj, file_generator):
"""Tries to find all files from the given properties."""
for file_id in file_generator:
if file_id in moved_files:
continue
moved_files.add(file_id)
self.move_file(dest_proj, file_id)
def move_file(self, dest_proj, file_id):
"""Moves a single file to another project"""
self._log.info('Moving file %s to project %s', file_id, dest_proj['_id'])
pillar.api.file_storage.moving.move_to_bucket(file_id, dest_proj['_id'],
skip_storage=self.skip_gcs)
def _files(self, file_ref, *properties):
"""Yields file ObjectIDs."""
# Degenerate cases.
if not file_ref:
return
# Single ObjectID
if isinstance(file_ref, ObjectId):
assert not properties
yield file_ref
return
# List of ObjectIDs
if isinstance(file_ref, list):
for item in file_ref:
for subitem in self._files(item, *properties):
yield subitem
return
# Dict, use properties[0] as key
if isinstance(file_ref, dict):
try:
subref = file_ref[properties[0]]
except KeyError:
# Silently skip non-existing keys.
return
for subitem in self._files(subref, *properties[1:]):
yield subitem
return
raise TypeError('File ref is of type %s, not implemented' % type(file_ref))
def _children(self, node):
"""Generator, recursively yields the node and its children."""
yield node
nodes_coll = self.db['nodes']
for child in nodes_coll.find({'parent': node['_id']}):
# "yield from self.children(child)" was introduced in Python 3.3
for grandchild in self._children(child):
yield grandchild
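# Minimal usage sketch, assuming an already-configured PyMongo connection;
# the database name and the two lookups are placeholders.
def _example_move(db: pymongo.database.Database):
    node = db['nodes'].find_one({'node_type': 'asset'})    # placeholder lookup
    dest_proj = db['projects'].find_one({'url': 'home'})   # placeholder lookup
    # skip_gcs=True moves only the Mongo documents, not the stored files.
    NodeMover(db=db, skip_gcs=True).change_project(node, dest_proj)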


@ -5,11 +5,11 @@ Depends on node_type-specific patch handlers in submodules.
import logging
from flask import Blueprint, request
import werkzeug.exceptions as wz_exceptions
from application.utils import str2id
from application.utils import authorization, mongo, authentication
from flask import Blueprint, request
from pillar.api.utils import mongo
from pillar.api.utils import authorization, authentication
from pillar.api.utils import str2id
from . import custom
@ -48,4 +48,4 @@ def patch_node(node_id):
def setup_app(app, url_prefix):
app.register_blueprint(blueprint, url_prefix=url_prefix)
app.register_api_blueprint(blueprint, url_prefix=url_prefix)


@ -0,0 +1,444 @@
"""Organization management.
Assumes role names that are given to users by organization membership
start with the string "org-".
"""
import logging
import typing
import attr
import bson
import flask
import werkzeug.exceptions as wz_exceptions
from pillar import attrs_extra, current_app
from pillar.api.utils import remove_private_keys, utcnow
class OrganizationError(Exception):
"""Superclass for all Organization-related errors."""
@attr.s
class NotEnoughSeats(OrganizationError):
"""Thrown when trying to add too many members to the organization."""
org_id = attr.ib(validator=attr.validators.instance_of(bson.ObjectId))
seat_count = attr.ib(validator=attr.validators.instance_of(int))
attempted_seat_count = attr.ib(validator=attr.validators.instance_of(int))
@attr.s
class OrgManager:
"""Organization manager.
Performs actions on an Organization. Does *NOT* test user permissions -- the caller
is responsible for that.
"""
_log = attrs_extra.log('%s.OrgManager' % __name__)
def create_new_org(self,
name: str,
admin_uid: bson.ObjectId,
seat_count: int,
*,
org_roles: typing.Iterable[str] = None) -> dict:
"""Creates a new Organization.
Returns the new organization document.
"""
assert isinstance(admin_uid, bson.ObjectId)
org_doc = {
'name': name,
'admin_uid': admin_uid,
'seat_count': seat_count,
}
if org_roles:
org_doc['org_roles'] = list(org_roles)
r, _, _, status = current_app.post_internal('organizations', org_doc)
if status != 201:
self._log.error('Error creating organization; status should be 201, not %i: %s',
status, r)
raise ValueError(f'Unable to create organization, status code {status}')
org_doc.update(r)
return org_doc
def assign_users(self,
org_id: bson.ObjectId,
emails: typing.List[str]) -> dict:
"""Assigns users to the organization.
Checks the seat count and throws a NotEnoughSeats exception when the
seat count is not sufficient to assign the requested users.
Users are looked up by email address, and known users are
automatically mapped.
:returns: the new organization document.
"""
self._log.info('Adding %i new members to organization %s', len(emails), org_id)
users_coll = current_app.db('users')
existing_user_docs = list(users_coll.find({'email': {'$in': emails}},
projection={'_id': 1, 'email': 1}))
unknown_users = set(emails) - {user['email'] for user in existing_user_docs}
existing_users = {user['_id'] for user in existing_user_docs}
return self._assign_users(org_id, unknown_users, existing_users)
def assign_single_user(self, org_id: bson.ObjectId, *, user_id: bson.ObjectId) -> dict:
"""Assigns a single, known user to the organization.
:returns: the new organization document.
"""
self._log.info('Adding new member %s to organization %s', user_id, org_id)
return self._assign_users(org_id, set(), {user_id})
def _assign_users(self, org_id: bson.ObjectId,
unknown_users: typing.Set[str],
existing_users: typing.Set[bson.ObjectId]) -> dict:
if self._log.isEnabledFor(logging.INFO):
self._log.info(' - found users: %s', ', '.join(str(uid) for uid in existing_users))
self._log.info(' - unknown users: %s', ', '.join(unknown_users))
org_doc = self._get_org(org_id)
# Compute the new members.
members = set(org_doc.get('members') or []) | existing_users
unknown_members = set(org_doc.get('unknown_members') or []) | unknown_users
# Make sure we don't exceed the current seat count.
new_seat_count = len(members) + len(unknown_members)
if new_seat_count > org_doc['seat_count']:
self._log.warning('assign_users(%s, ...): Trying to increase seats to %i, '
'but org only has %i seats.',
org_id, new_seat_count, org_doc['seat_count'])
raise NotEnoughSeats(org_id, org_doc['seat_count'], new_seat_count)
# Update the organization.
org_doc['members'] = list(members)
org_doc['unknown_members'] = list(unknown_members)
r, _, _, status = current_app.put_internal('organizations',
remove_private_keys(org_doc),
_id=org_id)
if status != 200:
self._log.error('Error updating organization; status should be 200, not %i: %s',
status, r)
raise ValueError(f'Unable to update organization, status code {status}')
org_doc.update(r)
# Update the roles for the affected members
for uid in existing_users:
self.refresh_roles(uid)
return org_doc
def assign_admin(self, org_id: bson.ObjectId, *, user_id: bson.ObjectId):
"""Assigns a user as admin user for this organization."""
assert isinstance(org_id, bson.ObjectId)
assert isinstance(user_id, bson.ObjectId)
org_coll = current_app.db('organizations')
users_coll = current_app.db('users')
if users_coll.count_documents({'_id': user_id}) == 0:
raise ValueError('User not found')
self._log.info('Updating organization %s, setting admin user to %s', org_id, user_id)
org_coll.update_one({'_id': org_id},
{'$set': {'admin_uid': user_id}})
def remove_user(self,
org_id: bson.ObjectId,
*,
user_id: bson.ObjectId = None,
email: str = None) -> dict:
"""Removes a user from the organization.
The user can be identified by either user ID or email.
Returns the new organization document.
"""
users_coll = current_app.db('users')
assert user_id or email
# Collect the email address if not given. This ensures removal also works
# when the email accidentally ended up in the unknown_members list.
if email is None:
user_doc = users_coll.find_one(user_id, projection={'email': 1})
if user_doc is not None:
email = user_doc['email']
# See if we know this user.
if user_id is None:
user_doc = users_coll.find_one({'email': email}, projection={'_id': 1})
if user_doc is not None:
user_id = user_doc['_id']
if user_id and not users_coll.count_documents({'_id': user_id}):
raise wz_exceptions.UnprocessableEntity('User does not exist')
self._log.info('Removing user %s / %s from organization %s', user_id, email, org_id)
org_doc = self._get_org(org_id)
# Compute the new members.
if user_id:
members = set(org_doc.get('members') or []) - {user_id}
org_doc['members'] = list(members)
if email:
unknown_members = set(org_doc.get('unknown_members')) - {email}
org_doc['unknown_members'] = list(unknown_members)
r, _, _, status = current_app.put_internal('organizations',
remove_private_keys(org_doc),
_id=org_id)
if status != 200:
self._log.error('Error updating organization; status should be 200, not %i: %s',
status, r)
raise ValueError(f'Unable to update organization, status code {status}')
org_doc.update(r)
# Update the roles for the affected member.
if user_id:
self.refresh_roles(user_id)
return org_doc
def _get_org(self, org_id: bson.ObjectId, *, projection=None):
"""Returns the organization, or raises a ValueError."""
assert isinstance(org_id, bson.ObjectId)
org_coll = current_app.db('organizations')
org = org_coll.find_one(org_id, projection=projection)
if org is None:
raise ValueError(f'Organization {org_id} not found')
return org
def refresh_all_user_roles(self, org_id: bson.ObjectId):
"""Refreshes the roles of all members."""
assert isinstance(org_id, bson.ObjectId)
org = self._get_org(org_id, projection={'members': 1})
members = org.get('members')
if not members:
self._log.info('Organization %s has no members, nothing to refresh.', org_id)
return
for uid in members:
self.refresh_roles(uid)
def refresh_roles(self, user_id: bson.ObjectId) -> typing.Set[str]:
"""Refreshes the user's roles to own roles + organizations' roles.
:returns: the applied set of roles.
"""
assert isinstance(user_id, bson.ObjectId)
from pillar.api.service import do_badger
self._log.info('Refreshing roles for user %s', user_id)
org_coll = current_app.db('organizations')
tokens_coll = current_app.db('tokens')
def aggr_roles(coll, match: dict) -> typing.Set[str]:
query = coll.aggregate([
{'$match': match},
{'$project': {'org_roles': 1}},
{'$unwind': {'path': '$org_roles'}},
{'$group': {
'_id': None,
'org_roles': {'$addToSet': '$org_roles'},
}}])
# If the user has no organizations/tokens at all, the query will have no results.
try:
org_roles_doc = query.next()
except StopIteration:
return set()
return set(org_roles_doc['org_roles'])
# Join all organization-given roles and roles from the tokens collection.
org_roles = aggr_roles(org_coll, {'members': user_id})
self._log.debug('Organization-given roles for user %s: %s', user_id, org_roles)
token_roles = aggr_roles(tokens_coll, {
'user': user_id,
'expire_time': {"$gt": utcnow()},
})
self._log.debug('Token-given roles for user %s: %s', user_id, token_roles)
org_roles.update(token_roles)
users_coll = current_app.db('users')
user_doc = users_coll.find_one(user_id, projection={'roles': 1})
if not user_doc:
self._log.warning('Trying to refresh roles of non-existing user %s, ignoring', user_id)
return set()
all_user_roles = set(user_doc.get('roles') or [])
existing_org_roles = {role for role in all_user_roles
if role.startswith('org-')}
grant_roles = org_roles - all_user_roles
revoke_roles = existing_org_roles - org_roles
if grant_roles:
do_badger('grant', roles=grant_roles, user_id=user_id)
if revoke_roles:
do_badger('revoke', roles=revoke_roles, user_id=user_id)
return all_user_roles.union(grant_roles) - revoke_roles
def user_is_admin(self, org_id: bson.ObjectId) -> bool:
"""Returns whether the currently logged in user is the admin of the organization."""
from pillar.api.utils.authentication import current_user_id
uid = current_user_id()
if uid is None:
return False
org = self._get_org(org_id, projection={'admin_uid': 1})
return org.get('admin_uid') == uid
def unknown_member_roles(self, member_email: str) -> typing.Set[str]:
"""Returns the set of organization roles for this user.
Assumes the user is not yet known, i.e. part of the unknown_members lists.
"""
org_coll = current_app.db('organizations')
# Aggregate all org-given roles for this user.
query = org_coll.aggregate([
{'$match': {'unknown_members': member_email}},
{'$project': {'org_roles': 1}},
{'$unwind': {'path': '$org_roles'}},
{'$group': {
'_id': None,
'org_roles': {'$addToSet': '$org_roles'},
}}])
# If the user has no organizations at all, the query will have no results.
try:
org_roles_doc = query.next()
except StopIteration:
return set()
return set(org_roles_doc['org_roles'])
def make_member_known(self, member_uid: bson.ObjectId, member_email: str):
"""Moves the given member from the unknown_members to the members lists."""
# This uses a direct PyMongo query rather than using Eve's put_internal,
# to prevent simultaneous updates from dropping users.
org_coll = current_app.db('organizations')
for org in org_coll.find({'unknown_members': member_email}):
self._log.info('Updating organization %s, marking member %s/%s as known',
org['_id'], member_uid, member_email)
org_coll.update_one({'_id': org['_id']},
{'$addToSet': {'members': member_uid},
'$pull': {'unknown_members': member_email}
})
def org_members(self, member_string_ids: typing.Iterable[str]) -> typing.List[dict]:
"""Returns the user documents of the organization members.
This is a workaround to provide membership information for
organizations without giving 'mortal' users access to /api/users.
"""
from pillar.api.utils import str2id
if not member_string_ids:
return []
member_ids = [str2id(uid) for uid in member_string_ids]
users_coll = current_app.db('users')
users = users_coll.find({'_id': {'$in': member_ids}},
projection={'_id': 1, 'full_name': 1, 'email': 1, 'avatar': 1})
return list(users)
def user_has_organizations(self, user_id: bson.ObjectId) -> bool:
"""Returns True iff the user has anything to do with organizations.
That is, if the user is admin for and/or member of any organization.
"""
org_coll = current_app.db('organizations')
org_count = org_coll.count_documents({'$or': [
{'admin_uid': user_id},
{'members': user_id}
]})
return bool(org_count)
def user_is_unknown_member(self, member_email: str) -> bool:
"""Return True iff the email is an unknown member of some org."""
org_coll = current_app.db('organizations')
org_count = org_coll.count_documents({'unknown_members': member_email})
return bool(org_count)
def roles_for_ip_address(self, remote_addr: str) -> typing.Set[str]:
"""Find the roles given to the user via org IP range definitions."""
from . import ip_ranges
org_coll = current_app.db('organizations')
try:
q = ip_ranges.query(remote_addr)
except ValueError as ex:
self._log.warning('Invalid remote address %s, ignoring IP-based roles: %s',
remote_addr, ex)
return set()
orgs = org_coll.find(
{'ip_ranges': q},
projection={'org_roles': True},
)
return set(role
for org in orgs
for role in org.get('org_roles', []))
def roles_for_request(self) -> typing.Set[str]:
"""Find roles for user via the request's remote IP address."""
try:
remote_addr = flask.request.access_route[0]
except IndexError:
return set()
if not remote_addr:
return set()
roles = self.roles_for_ip_address(remote_addr)
self._log.debug('Roles for IP address %s: %s', remote_addr, roles)
return roles
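# Hedged usage sketch (names and values are made up); requires an application
# context, and in production code the manager is reached via
# current_app.org_manager rather than instantiated directly.
def _example_org_flow(admin_uid: bson.ObjectId) -> dict:
    om = OrgManager()
    org = om.create_new_org('Example Org', admin_uid, seat_count=10,
                            org_roles=['org-example'])
    om.assign_users(org['_id'], ['member@example.com'])
    om.refresh_all_user_roles(org['_id'])
    return org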
def setup_app(app):
from . import patch, hooks
hooks.setup_app(app)
patch.setup_app(app)


@ -0,0 +1,48 @@
import werkzeug.exceptions as wz_exceptions
from pillar.api.utils.authentication import current_user
def pre_get_organizations(request, lookup):
user = current_user()
if user.is_anonymous:
raise wz_exceptions.Forbidden()
if user.has_cap('admin'):
# Allow all lookups to admins.
return
# Only allow users to see their own organizations.
lookup['$or'] = [{'admin_uid': user.user_id}, {'members': user.user_id}]
def on_fetched_item_organizations(org_doc: dict):
"""Filter out binary data.
Eve cannot return binary data, at least not until we upgrade to a version
that depends on Cerberus >= 1.0.
"""
for ipr in org_doc.get('ip_ranges') or []:
ipr.pop('start', None)
ipr.pop('end', None)
ipr.pop('prefix', None) # not binary, but useless without the other fields.
def on_fetched_resource_organizations(response: dict):
for org_doc in response.get('_items', []):
on_fetched_item_organizations(org_doc)
def pre_post_organizations(request):
user = current_user()
if not user.has_cap('create-organization'):
raise wz_exceptions.Forbidden()
def setup_app(app):
app.on_pre_GET_organizations += pre_get_organizations
app.on_pre_POST_organizations += pre_post_organizations
app.on_fetched_item_organizations += on_fetched_item_organizations
app.on_fetched_resource_organizations += on_fetched_resource_organizations


@ -0,0 +1,75 @@
"""IP range support for Organizations."""
from IPy import IP
# 128 bits all set to 1
ONES_128 = 2 ** 128 - 1
def doc(iprange: str, min_prefixlen6: int=0, min_prefixlen4: int=0) -> dict:
"""Convert a human-readable string like '1.2.3.4/24' to a Mongo document.
This converts the address to IPv6 and computes the start/end addresses
of the range. The address, its prefix size, and start and end address,
are returned as a dict.
Addresses are stored as big-endian binary data because MongoDB doesn't
support 128 bits integers.
:param iprange: the IP address and mask size, can be IPv6 or IPv4.
:param min_prefixlen6: if given, causes a ValueError when the mask size
is too low. Note that the mask size is only
evaluated for IPv6 addresses.
:param min_prefixlen4: if given, causes a ValueError when the mask size
is too low. Note that the mask size is only
evaluated for IPv4 addresses.
:returns: a dict like: {
'start': b'xxxxx' with the lowest IP address in the range.
'end': b'yyyyy' with the highest IP address in the range.
'human': 'aaaa:bbbb::cc00/120' with the human-readable representation.
'prefix': 120, the prefix length of the netmask in bits.
}
"""
ip = IP(iprange, make_net=True)
prefixlen = ip.prefixlen()
if ip.version() == 4:
if prefixlen < min_prefixlen4:
raise ValueError(f'Prefix length {prefixlen} smaller than allowed {min_prefixlen4}')
ip = ip.v46map()
else:
if prefixlen < min_prefixlen6:
raise ValueError(f'Prefix length {prefixlen} smaller than allowed {min_prefixlen6}')
addr = ip.int()
# Set all address bits to 1 where the mask is 0 to obtain the largest address.
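# (ONES_128 % netmask equals ONES_128 - netmask for any prefix length of at
# least 1, which is exactly the mask of the host bits.)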
end = addr | (ONES_128 % ip.netmask().int())
# This ensures that even a single host is represented as /128 in the human-readable form.
ip.NoPrefixForSingleIp = False
return {
'start': addr.to_bytes(16, 'big'),
'end': end.to_bytes(16, 'big'),
'human': ip.strCompressed(),
'prefix': ip.prefixlen(),
}
def query(address: str) -> dict:
"""Return a dict usable for querying all organizations whose IP range matches the given one.
:returns: a dict like:
{$elemMatch: {'start': {$lte: b'xxxxx'}, 'end': {$gte: b'xxxxx'}}}
"""
ip = IP(address)
if ip.version() == 4:
ip = ip.v46map()
for_mongo = ip.ip.to_bytes(16, 'big')
return {'$elemMatch': {
'start': {'$lte': for_mongo},
'end': {'$gte': for_mongo},
}}
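# Sketch of how doc() and query() combine; org_coll is assumed to be the
# organizations collection, and the addresses are placeholders.
def _example_orgs_for_ip(org_coll):
    ipr = doc('192.168.0.0/16', min_prefixlen4=8)
    # ipr holds 16-byte big-endian 'start'/'end' values, the human-readable
    # form, and the IPv6 prefix length (112 for an IPv4 /16).
    org_coll.update_one({'name': 'Example Org'},  # placeholder organization
                        {'$addToSet': {'ip_ranges': ipr}})
    return list(org_coll.find({'ip_ranges': query('192.168.3.254')},
                              projection={'org_roles': True}))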


@ -0,0 +1,228 @@
"""Organization patching support."""
import logging
import bson
from flask import Blueprint, jsonify
import werkzeug.exceptions as wz_exceptions
from pillar.api.utils.authentication import current_user
from pillar.api.utils import authorization, str2id, jsonify
from pillar.api import patch_handler
from pillar import current_app
log = logging.getLogger(__name__)
patch_api_blueprint = Blueprint('pillar.api.organizations.patch', __name__)
class OrganizationPatchHandler(patch_handler.AbstractPatchHandler):
item_name = 'organization'
@authorization.require_login()
def patch_assign_users(self, org_id: bson.ObjectId, patch: dict):
"""Assigns users to an organization.
The calling user must be admin of the organization.
"""
from . import NotEnoughSeats
self._assert_is_admin(org_id)
# Do some basic validation.
try:
emails = patch['emails']
except KeyError:
raise wz_exceptions.BadRequest('No key "email" in patch.')
# Skip empty emails.
emails = [stripped
for stripped in (email.strip() for email in emails)
if stripped]
log.info('User %s uses PATCH to add users to organization %s',
current_user().user_id, org_id)
try:
org_doc = current_app.org_manager.assign_users(org_id, emails)
except NotEnoughSeats:
resp = jsonify({'_message': f'Not enough seats to assign {len(emails)} users'})
resp.status_code = 422
return resp
return jsonify(org_doc)
@authorization.require_login()
def patch_assign_user(self, org_id: bson.ObjectId, patch: dict):
"""Assigns a single user by User ID to an organization.
The calling user must be admin of the organization.
"""
from . import NotEnoughSeats
self._assert_is_admin(org_id)
# Do some basic validation.
try:
user_id = patch['user_id']
except KeyError:
raise wz_exceptions.BadRequest('No key "user_id" in patch.')
user_oid = str2id(user_id)
log.info('User %s uses PATCH to add user %s to organization %s',
current_user().user_id, user_oid, org_id)
try:
org_doc = current_app.org_manager.assign_single_user(org_id, user_id=user_oid)
except NotEnoughSeats:
resp = jsonify({'_message': f'Not enough seats to assign this user'})
resp.status_code = 422
return resp
return jsonify(org_doc)
@authorization.require_login()
def patch_assign_admin(self, org_id: bson.ObjectId, patch: dict):
"""Assigns a single user by User ID as admin of the organization.
The calling user must be admin of the organization.
"""
self._assert_is_admin(org_id)
# Do some basic validation.
try:
user_id = patch['user_id']
except KeyError:
raise wz_exceptions.BadRequest('No key "user_id" in patch.')
user_oid = str2id(user_id)
log.info('User %s uses PATCH to set user %s as admin for organization %s',
current_user().user_id, user_oid, org_id)
current_app.org_manager.assign_admin(org_id, user_id=user_oid)
@authorization.require_login()
def patch_remove_user(self, org_id: bson.ObjectId, patch: dict):
"""Removes a user from an organization.
The calling user must be admin of the organization.
"""
# Do some basic validation.
email = patch.get('email') or None
user_id = patch.get('user_id')
user_oid = str2id(user_id) if user_id else None
# Users require admin rights on the org, except when removing themselves.
current_user_id = current_user().user_id
if user_oid is None or user_oid != current_user_id:
self._assert_is_admin(org_id)
log.info('User %s uses PATCH to remove user %s from organization %s',
current_user_id, user_oid, org_id)
org_doc = current_app.org_manager.remove_user(org_id, user_id=user_oid, email=email)
return jsonify(org_doc)
def _assert_is_admin(self, org_id):
om = current_app.org_manager
if current_user().has_cap('admin'):
# Always allow admins to edit every organization.
return
if not om.user_is_admin(org_id):
log.warning('User %s uses PATCH to edit organization %s, '
'but is not admin of that Organization. Request denied.',
current_user().user_id, org_id)
raise wz_exceptions.Forbidden()
@authorization.require_login()
def patch_edit_from_web(self, org_id: bson.ObjectId, patch: dict):
"""Updates Organization fields from the web.
The PATCH command supports the following payload. The 'name' field must
be set; all other fields are optional. When an optional field is
omitted, it is handled as an instruction to clear that field.
{'name': str,
'description': str,
'website': str,
'location': str,
'ip_ranges': list of human-readable IP ranges}
"""
from pymongo.results import UpdateResult
from . import ip_ranges
self._assert_is_admin(org_id)
user = current_user()
current_user_id = user.user_id
# Only take known fields from the patch, don't just copy everything.
update = {
'name': patch['name'].strip(),
'description': patch.get('description', '').strip(),
'website': patch.get('website', '').strip(),
'location': patch.get('location', '').strip(),
}
unset = {}
# Special transformation for IP ranges
iprs = patch.get('ip_ranges')
if iprs:
ipr_docs = []
for r in iprs:
try:
doc = ip_ranges.doc(r, min_prefixlen6=48, min_prefixlen4=8)
except ValueError as ex:
raise wz_exceptions.UnprocessableEntity(f'Invalid IP range {r!r}: {ex}')
ipr_docs.append(doc)
update['ip_ranges'] = ipr_docs
else:
unset['ip_ranges'] = True
refresh_user_roles = False
if user.has_cap('admin'):
if 'seat_count' in patch:
update['seat_count'] = int(patch['seat_count'])
if 'org_roles' in patch:
org_roles = [stripped for stripped in (role.strip() for role in patch['org_roles'])
if stripped]
if not all(role.startswith('org-') for role in org_roles):
raise wz_exceptions.UnprocessableEntity(
'Invalid role given, all roles must start with "org-"')
update['org_roles'] = org_roles
refresh_user_roles = True
self.log.info('User %s edits Organization %s: %s', current_user_id, org_id, update)
validator = current_app.validator_for_resource('organizations')
if not validator.validate_update(update, org_id, persisted_document={}):
resp = jsonify({
'_errors': validator.errors,
'_message': ', '.join(f'{field}: {error}'
for field, error in validator.errors.items()),
})
resp.status_code = 422
return resp
# Figure out what to set and what to unset
for_mongo = {'$set': update}
if unset:
for_mongo['$unset'] = unset
organizations_coll = current_app.db('organizations')
result: UpdateResult = organizations_coll.update_one({'_id': org_id}, for_mongo)
if result.matched_count != 1:
self.log.warning('User %s edits Organization %s but update matched %i items',
current_user_id, org_id, result.matched_count)
raise wz_exceptions.BadRequest()
if refresh_user_roles:
self.log.info('Organization roles set for org %s, refreshing users', org_id)
current_app.org_manager.refresh_all_user_roles(org_id)
return '', 204
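# Hypothetical client-side sketch of the payload documented above. The op
# name 'edit-from-web' follows AbstractPatchHandler's patch_<name> ->
# '<name-with-dashes>' mapping; the URL and session handling are assumptions.
def _example_edit_from_web(session, api_url: str, org_id: str):
    payload = {
        'op': 'edit-from-web',
        'name': 'Example Org',
        'description': 'An example organization',
        'website': 'https://example.com/',
        'location': 'Amsterdam',
        'ip_ranges': ['192.168.0.0/16'],
    }
    resp = session.patch(f'{api_url}/organizations/{org_id}', json=payload)
    resp.raise_for_status()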
def setup_app(app):
OrganizationPatchHandler(patch_api_blueprint)
app.register_api_blueprint(patch_api_blueprint, url_prefix='/organizations')


@ -0,0 +1,92 @@
"""Handler for PATCH requests.
This supports PATCH requests in the sense described by William Durand:
http://williamdurand.fr/2014/02/14/please-do-not-patch-like-an-idiot/
Each PATCH should be a JSON dict with at least a key 'op' with the
name of the operation to perform.
"""
import logging
import flask
from pillar.api.utils import authorization
log = logging.getLogger(__name__)
class AbstractPatchHandler:
"""Abstract PATCH handler supporting multiple operations.
Each operation, i.e. possible value of the 'op' key in the PATCH body,
should be matched to a similarly named "patch_xxx" function in a subclass.
For example, the operation "set-owner" is mapped to "patch_set_owner".
:cvar route: the Flask/Werkzeug route to attach this handler to.
For most handlers, the default will be fine.
:cvar item_name: the name of the things to patch, like "job", "task" etc.
Only used for logging.
"""
route: str = '/<object_id>'
item_name: str = None
def __init_subclass__(cls, **kwargs):
if not cls.route:
raise ValueError('Subclass must set route')
if not cls.item_name:
raise ValueError('Subclass must set item_name')
def __init__(self, blueprint: flask.Blueprint):
self.log: logging.Logger = log.getChild(self.__class__.__name__)
self.patch_handlers = {
name[6:].replace('_', '-'): getattr(self, name)
for name in dir(self)
if name.startswith('patch_') and callable(getattr(self, name))
}
if self.log.isEnabledFor(logging.INFO):
self.log.info('Creating PATCH handler %s.%s%s for operations: %s',
blueprint.name, self.patch.__name__, self.route,
sorted(self.patch_handlers.keys()))
blueprint.add_url_rule(self.route,
self.patch.__name__,
self.patch,
methods=['PATCH'])
@authorization.require_login()
def patch(self, object_id: str):
from flask import request
import werkzeug.exceptions as wz_exceptions
from pillar.api.utils import str2id, authentication
# Parse the request
real_object_id = str2id(object_id)
patch = request.get_json()
if not patch:
self.log.info('Bad PATCH request, did not contain JSON')
raise wz_exceptions.BadRequest('Patch must contain JSON')
try:
patch_op = patch['op']
except KeyError:
self.log.info("Bad PATCH request, did not contain 'op' key")
raise wz_exceptions.BadRequest("PATCH should contain 'op' key to denote operation.")
log.debug('User %s wants to PATCH "%s" %s %s',
authentication.current_user_id(), patch_op, self.item_name, real_object_id)
# Find the PATCH handler for the operation.
try:
handler = self.patch_handlers[patch_op]
except KeyError:
log.warning('No %s PATCH handler for operation %r', self.item_name, patch_op)
raise wz_exceptions.BadRequest('Operation %r not supported' % patch_op)
# Let the PATCH handler do its thing.
response = handler(real_object_id, patch)
if response is None:
return '', 204
return response
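# Minimal illustrative subclass: a PATCH body {'op': 'set-description', ...}
# is routed to patch_set_description(). All names here are hypothetical.
example_blueprint = flask.Blueprint('example.patch', __name__)

class ExamplePatchHandler(AbstractPatchHandler):
    item_name = 'example'

    def patch_set_description(self, object_id, patch: dict):
        self.log.info('Setting description of %s to %r',
                      object_id, patch.get('description'))
        # Returning None results in a '204 No Content' response.

ExamplePatchHandler(example_blueprint)
# ... and during application setup (hypothetical):
# app.register_api_blueprint(example_blueprint, url_prefix='/examples')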


@ -0,0 +1,32 @@
from . import hooks
from .routes import blueprint_api
def setup_app(app, api_prefix):
from . import patch
patch.setup_app(app)
app.on_replace_projects += hooks.override_is_private_field
app.on_replace_projects += hooks.before_edit_check_permissions
app.on_replace_projects += hooks.protect_sensitive_fields
app.on_replace_projects += hooks.parse_markdown
app.on_update_projects += hooks.override_is_private_field
app.on_update_projects += hooks.before_edit_check_permissions
app.on_update_projects += hooks.protect_sensitive_fields
app.on_delete_item_projects += hooks.before_delete_project
app.on_deleted_item_projects += hooks.after_delete_project
app.on_insert_projects += hooks.before_inserting_override_is_private_field
app.on_insert_projects += hooks.before_inserting_projects
app.on_insert_projects += hooks.parse_markdowns
app.on_inserted_projects += hooks.after_inserting_projects
app.on_fetched_item_projects += hooks.before_returning_project_permissions
app.on_fetched_resource_projects += hooks.before_returning_project_resource_permissions
app.on_fetched_item_projects += hooks.project_node_type_has_method
app.on_fetched_resource_projects += hooks.projects_node_type_has_method
app.register_api_blueprint(blueprint_api, url_prefix=api_prefix)


@ -0,0 +1,283 @@
import copy
import logging
from flask import request, abort
import pillar
from pillar import current_app
from pillar.api.node_types.asset import node_type_asset
from pillar.api.node_types.comment import node_type_comment
from pillar.api.node_types.group import node_type_group
from pillar.api.node_types.group_texture import node_type_group_texture
from pillar.api.node_types.texture import node_type_texture
from pillar.api.file_storage_backends import default_storage_backend
from pillar.api.utils import authorization, authentication
from pillar.api.utils import remove_private_keys
from pillar.api.utils.authorization import user_has_role, check_permissions
from pillar.auth import current_user
from .utils import abort_with_error
log = logging.getLogger(__name__)
# Default project permissions for the admin group.
DEFAULT_ADMIN_GROUP_PERMISSIONS = ['GET', 'PUT', 'POST', 'DELETE']
def before_inserting_projects(items):
"""Strip unwanted properties, that will be assigned after creation. Also,
verify permission to create a project (check quota, check role).
:param items: List of project docs that have been inserted (normally one)
"""
# Allow admin users to do whatever they want.
if user_has_role('admin'):
return
for item in items:
item.pop('url', None)
def override_is_private_field(project, original):
"""Override the 'is_private' property from the world permissions.
:param project: the project, which will be updated
"""
# No permissions, no access.
if 'permissions' not in project:
project['is_private'] = True
return
world_perms = project['permissions'].get('world', [])
is_private = 'GET' not in world_perms
project['is_private'] = is_private
def before_inserting_override_is_private_field(projects):
for project in projects:
override_is_private_field(project, None)
def before_edit_check_permissions(document, original):
check_permissions('projects', original, request.method)
def before_delete_project(document):
"""Checks permissions before we allow deletion"""
check_permissions('projects', document, request.method)
log.info('Deleting project %s on behalf of user %s', document['_id'], current_user)
def after_delete_project(project: dict):
"""Perform delete on the project's files too."""
from werkzeug.exceptions import NotFound
from eve.methods.delete import delete
pid = project['_id']
log.info('Project %s was deleted, also deleting its files.', pid)
try:
r, _, _, status = delete('files', {'project': pid})
except NotFound:
# There were no files, and that's fine.
return
if status != 204:
# Will never happen because bloody Eve always returns 204 or raises an exception.
log.warning('Unable to delete files of project %s: %s', pid, r)
def protect_sensitive_fields(document, original):
"""When not logged in as admin, prevents update to certain fields."""
# Allow admin users to do whatever they want.
if user_has_role('admin'):
return
def revert(name):
if name not in original:
try:
del document[name]
except KeyError:
pass
return
document[name] = original[name]
revert('status')
revert('category')
revert('user')
if 'url' in original:
revert('url')
def after_inserting_projects(projects):
"""After inserting a project in the collection we do some processing such as:
- apply the right permissions
- define basic node types
- optionally generate a url
- initialize storage space
:param projects: List of project docs that have been inserted (normally one)
"""
users_collection = current_app.data.driver.db['users']
for project in projects:
owner_id = project.get('user', None)
owner = users_collection.find_one(owner_id)
after_inserting_project(project, owner)
def after_inserting_project(project, db_user):
from pillar.auth import UserClass
project_id = project['_id']
user_id = db_user['_id']
# Create a project-specific admin group (with name matching the project id)
result, _, _, status = current_app.post_internal('groups', {'name': str(project_id)})
if status != 201:
log.error('Unable to create admin group for new project %s: %s',
project_id, result)
return abort_with_error(status)
admin_group_id = result['_id']
log.debug('Created admin group %s for project %s', admin_group_id, project_id)
# Assign the current user to the group
db_user.setdefault('groups', []).append(admin_group_id)
result, _, _, status = current_app.patch_internal('users', {'groups': db_user['groups']},
_id=user_id)
if status != 200:
log.error('Unable to add user %s as member of admin group %s for new project %s: %s',
user_id, admin_group_id, project_id, result)
return abort_with_error(status)
log.debug('Made user %s member of group %s', user_id, admin_group_id)
# Assign the group to the project with admin rights
owner_user = UserClass.construct('', db_user)
is_admin = authorization.is_admin(owner_user)
world_permissions = ['GET'] if is_admin else []
permissions = {
'world': world_permissions,
'users': [],
'groups': [
{'group': admin_group_id,
'methods': DEFAULT_ADMIN_GROUP_PERMISSIONS[:]},
]
}
def with_permissions(node_type):
copied = copy.deepcopy(node_type)
copied['permissions'] = permissions
return copied
# Assign permissions to the project itself, as well as to the node_types
project['permissions'] = permissions
project['node_types'] = [
with_permissions(node_type_group),
with_permissions(node_type_asset),
with_permissions(node_type_comment),
with_permissions(node_type_texture),
with_permissions(node_type_group_texture),
]
# Allow admin users to use whatever url they want.
if not is_admin or not project.get('url'):
if project.get('category', '') == 'home':
project['url'] = 'home'
else:
project['url'] = "p-{!s}".format(project_id)
# Initialize storage using the default specified in STORAGE_BACKEND
default_storage_backend(str(project_id))
# Commit the changes directly to the MongoDB; a PUT is not allowed yet,
# as the project doesn't have a valid permission structure.
projects_collection = current_app.data.driver.db['projects']
result = projects_collection.update_one({'_id': project_id},
{'$set': remove_private_keys(project)})
if result.matched_count != 1:
log.error('Unable to update project %s: %s', project_id, result.raw_result)
abort_with_error(500)
def before_returning_project_permissions(response):
# Run validation process, since GET on nodes entry point is public
check_permissions('projects', response, 'GET', append_allowed_methods=True)
def before_returning_project_resource_permissions(response):
# Return only those projects the user has access to.
allow = []
for project in response['_items']:
if authorization.has_permissions('projects', project,
'GET', append_allowed_methods=True):
allow.append(project)
else:
log.debug('User %s requested project %s, but has no access to it; filtered out.',
authentication.current_user_id(), project['_id'])
response['_items'] = allow
def project_node_type_has_method(response):
"""Check for a specific request arg, and check generate the allowed_methods
list for the required node_type.
"""
node_type_name = request.args.get('node_type', '')
# Proceed only if a node_type has been requested
if not node_type_name:
return
# Look up the node type in the project document
if not any(node_type.get('name') == node_type_name
for node_type in response['node_types']):
return abort(404)
# Check permissions and append the allowed_methods to the node_type
check_permissions('projects', response, 'GET', append_allowed_methods=True,
check_node_type=node_type_name)
def projects_node_type_has_method(response):
for project in response['_items']:
project_node_type_has_method(project)
def parse_markdown(project, original=None):
schema = current_app.config['DOMAIN']['projects']['schema']
def find_markdown_fields(schema, project):
"""Find and process all Markdown coerced fields.
- look for fields with a 'coerce': 'markdown' property
- parse the name of the field and generate the sibling field name (_<field_name>_html -> <field_name>)
- parse the content of the <field_name> field as markdown and save it in _<field_name>_html
"""
for field_name, field_value in schema.items():
if not isinstance(field_value, dict):
continue
if field_value.get('coerce') != 'markdown':
continue
if field_name not in project:
continue
# Construct markdown source field name (strip the leading '_' and the trailing '_html')
source_field_name = field_name[1:-5]
html = pillar.markdown.markdown(project[source_field_name])
project[field_name] = html
if isinstance(project, dict) and field_name in project:
find_markdown_fields(field_value, project[field_name])
find_markdown_fields(schema, project)
def parse_markdowns(items):
for item in items:
parse_markdown(item)
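# Minimal sketch (hedged) of the naming convention find_markdown_fields() relies
# on, kept inside comments so this diff listing stays intact:
#
#     source_field_name = '_description_html'[1:-5]   # -> 'description'
#
# i.e. the cached field '_<name>_html' always has a sibling source field '<name>'.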

View File

@@ -0,0 +1,47 @@
"""Code for merging projects."""
import logging
from bson import ObjectId
from pillar import current_app
from pillar.api.file_storage.moving import move_to_bucket
from pillar.api.utils import random_etag, utcnow
log = logging.getLogger(__name__)
def merge_project(pid_from: ObjectId, pid_to: ObjectId):
"""Move nodes and files from one project to another.
Note that this may invalidate the nodes, as their node type definition
may differ between projects.
"""
log.info('Moving project contents from %s to %s', pid_from, pid_to)
assert isinstance(pid_from, ObjectId)
assert isinstance(pid_to, ObjectId)
files_coll = current_app.db('files')
nodes_coll = current_app.db('nodes')
# Move the files first. Since this requires API calls to an external
# service, this is more likely to go wrong than moving the nodes.
query = {'project': pid_from}
to_move = files_coll.find(query, projection={'_id': 1})
to_move_count = files_coll.count_documents(query)
log.info('Moving %d files to project %s', to_move_count, pid_to)
for file_doc in to_move:
fid = file_doc['_id']
log.debug('moving file %s to project %s', fid, pid_to)
move_to_bucket(fid, pid_to)
# Mass-move the nodes.
etag = random_etag()
result = nodes_coll.update_many(
query,
{'$set': {'project': pid_to,
'_etag': etag,
'_updated': utcnow(),
}}
)
log.info('Moved %d nodes to project %s', result.modified_count, pid_to)
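# Hedged usage sketch (the ObjectIds are placeholders; an application context is
# assumed):
#
#     merge_project(ObjectId('5a0e8f0d2f2c5d0f4e6b0001'),
#                   ObjectId('5a0e8f0d2f2c5d0f4e6b0002'))
#
# Files are moved one at a time before the nodes, so a failure part-way through
# leaves all nodes untouched in the source project.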

View File

@@ -0,0 +1,85 @@
"""Project patching support."""
import logging
import flask
from flask import Blueprint, request
import werkzeug.exceptions as wz_exceptions
from pillar import current_app
from pillar.auth import current_user
from pillar.api.utils import random_etag, str2id, utcnow
from pillar.api.utils import authorization
log = logging.getLogger(__name__)
blueprint = Blueprint('projects.patch', __name__)
@blueprint.route('/<project_id>', methods=['PATCH'])
@authorization.require_login()
def patch_project(project_id: str):
"""Undelete a project.
This is done via a custom PATCH due to the lack of transactions in MongoDB;
we cannot undelete both project-referenced files and file-referenced
projects in one atomic operation.
"""
# Parse the request
pid = str2id(project_id)
patch = request.get_json()
if not patch:
raise wz_exceptions.BadRequest('Expected JSON body')
log.debug('User %s wants to PATCH project %s: %s', current_user, pid, patch)
# 'undelete' is the only operation we support now, so no fancy handler registration.
op = patch.get('op', '')
if op != 'undelete':
log.warning('User %s sent unsupported PATCH op %r to project %s: %s',
current_user, op, pid, patch)
raise wz_exceptions.BadRequest(f'unsupported operation {op!r}')
# Get the project to find the user's permissions.
proj_coll = current_app.db('projects')
proj = proj_coll.find_one({'_id': pid})
if not proj:
raise wz_exceptions.NotFound(f'project {pid} not found')
allowed = authorization.compute_allowed_methods('projects', proj)
if 'PUT' not in allowed:
log.warning('User %s tried to undelete project %s but only has permissions %r',
current_user, pid, allowed)
raise wz_exceptions.Forbidden(f'no PUT access to project {pid}')
if not proj.get('_deleted', False):
raise wz_exceptions.BadRequest(f'project {pid} was not deleted, unable to undelete')
# Undelete the files. We cannot do this via Eve, as it doesn't support
# PATCHing collections, so direct MongoDB modification is used to set
# _deleted=False and provide new _etag and _updated values.
new_etag = random_etag()
log.debug('undeleting files before undeleting project %s', pid)
files_coll = current_app.db('files')
update_result = files_coll.update_many(
{'project': pid},
{'$set': {'_deleted': False,
'_etag': new_etag,
'_updated': utcnow()}})
log.info('undeleted %d of %d file documents of project %s',
update_result.modified_count, update_result.matched_count, pid)
log.info('undeleting project %s on behalf of user %s', pid, current_user)
update_result = proj_coll.update_one({'_id': pid},
{'$set': {'_deleted': False}})
log.info('undeleted %d project document %s', update_result.modified_count, pid)
resp = flask.Response('', status=204)
resp.location = flask.url_for('projects.view', project_url=proj['url'])
return resp
def setup_app(app):
# This needs to be on the same URL prefix as Eve uses for the collection,
# and not /p as used for the other Projects API calls.
app.register_api_blueprint(blueprint, url_prefix='/projects')
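# Hedged client-side sketch of the only supported PATCH op; the host is a
# placeholder, and the /api prefix is an assumption about how the app is mounted:
#
#     curl -X PATCH https://example.org/api/projects/<project_id> \
#          -H 'Content-Type: application/json' \
#          -d '{"op": "undelete"}'
#
# On success, the 204 response carries the project's URL in the Location header.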

View File

@@ -0,0 +1,147 @@
import json
import logging
from bson import ObjectId
from flask import Blueprint, request, current_app, make_response, url_for
from werkzeug import exceptions as wz_exceptions
import pillar.api.users.avatar
from pillar.api.utils import authorization, jsonify, str2id
from pillar.api.utils import mongo
from pillar.api.utils.authorization import require_login, check_permissions
from pillar.auth import current_user
from . import utils
log = logging.getLogger(__name__)
blueprint_api = Blueprint('projects_api', __name__)
@blueprint_api.route('/create', methods=['POST'])
@authorization.require_login(require_cap='subscriber')
def create_project(overrides=None):
"""Creates a new project."""
if request.mimetype == 'application/json':
project_name = request.json['name']
else:
project_name = request.form['project_name']
user_id = current_user.user_id
project = utils.create_new_project(project_name, user_id, overrides)
# Return the project in the response.
loc = url_for('projects|item_lookup', _id=project['_id'])
return jsonify(project, status=201, headers={'Location': loc})
@blueprint_api.route('/users', methods=['GET', 'POST'])
@authorization.require_login()
def project_manage_users():
"""Manage users of a project. In this initial implementation, we handle
addition and removal of a user to the admin group of a project.
No changes are done on the project itself.
"""
from pillar.api.utils import str2id
projects_collection = current_app.data.driver.db['projects']
users_collection = current_app.data.driver.db['users']
# TODO: check if user is admin of the project before anything
if request.method == 'GET':
project_id = request.args['project_id']
project = projects_collection.find_one({'_id': ObjectId(project_id)})
admin_group_id = project['permissions']['groups'][0]['group']
users = list(users_collection.find(
{'groups': {'$in': [admin_group_id]}},
{'username': 1, 'email': 1, 'full_name': 1, 'avatar': 1}))
for user in users:
user['avatar_url'] = pillar.api.users.avatar.url(user)
user.pop('avatar', None)
return jsonify({'_status': 'OK', '_items': users})
# The request is not a form, since it comes from the API sdk
data = json.loads(request.data)
project_id = str2id(data['project_id'])
target_user_id = str2id(data['user_id'])
action = data['action']
current_user_id = current_user.user_id
project = projects_collection.find_one({'_id': project_id})
# Check if the current_user is owner of the project, or removing themselves.
if not authorization.user_has_role('admin'):
remove_self = target_user_id == current_user_id and action == 'remove'
if project['user'] != current_user_id and not remove_self:
log.warning('User %s tries to %s %s to/from project %s, but is not allowed',
current_user_id, action, target_user_id, project_id)
utils.abort_with_error(403)
admin_group = utils.get_admin_group(project)
# Get the user and add the admin group to it
if action == 'add':
operation = '$addToSet'
log.info('project_manage_users: Adding user %s to admin group of project %s',
target_user_id, project_id)
elif action == 'remove':
log.info('project_manage_users: Removing user %s from admin group of project %s',
target_user_id, project_id)
operation = '$pull'
else:
log.warning('project_manage_users: Unsupported action %r called by user %s',
action, current_user_id)
raise wz_exceptions.UnprocessableEntity()
users_collection.update_one({'_id': target_user_id},
{operation: {'groups': admin_group['_id']}})
user = users_collection.find_one({'_id': target_user_id},
{'username': 1, 'email': 1,
'full_name': 1})
if not user:
return jsonify({'_status': 'ERROR'}), 404
user['_status'] = 'OK'
return jsonify(user)
@blueprint_api.route('/<string:project_id>/quotas')
@require_login()
def project_quotas(project_id):
"""Returns information about the project's limits."""
# Check that the user has GET permissions on the project itself.
project = mongo.find_one_or_404('projects', project_id)
check_permissions('projects', project, 'GET')
file_size_used = utils.project_total_file_size(project_id)
info = {
'file_size_quota': None, # TODO: implement this later.
'file_size_used': file_size_used,
}
return jsonify(info)
@blueprint_api.route('/<project_id>/<node_type>', methods=['OPTIONS', 'GET'])
def get_allowed_methods(project_id=None, node_type=None):
"""Returns allowed methods to create a node of a certain type.
Either project_id or parent_node_id must be given. If the latter is given,
the former is deduced from it.
"""
project = mongo.find_one_or_404('projects', str2id(project_id))
proj_methods = authorization.compute_allowed_methods('projects', project, node_type)
resp = make_response()
resp.headers['Allowed'] = ', '.join(sorted(proj_methods))
resp.status_code = 204
return resp
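# Hedged example of the response generated above, assuming the requester may
# fetch and create nodes of the given type but nothing else:
#
#     HTTP/1.1 204 NO CONTENT
#     Allowed: GET, POST
#
# Note the custom 'Allowed' header; this is distinct from the standard 'Allow'.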

View File

@@ -0,0 +1,214 @@
import logging
import typing
from bson import ObjectId
from werkzeug import exceptions as wz_exceptions
from werkzeug.exceptions import abort
from pillar import current_app
from pillar.auth import current_user
from pillar.api import file_storage_backends
log = logging.getLogger(__name__)
def project_total_file_size(project_id):
"""Returns the total number of bytes used by files of this project."""
files = current_app.data.driver.db['files']
file_size_used = files.aggregate([
{'$match': {'project': ObjectId(project_id)}},
{'$project': {'length_aggregate_in_bytes': 1}},
{'$group': {'_id': None,
'all_files': {'$sum': '$length_aggregate_in_bytes'}}}
])
# The aggregate function returns a cursor, not a document.
try:
return next(file_size_used)['all_files']
except StopIteration:
# No files used at all.
return 0
def get_admin_group_id(project_id: ObjectId) -> ObjectId:
assert isinstance(project_id, ObjectId)
project = current_app.db('projects').find_one({'_id': project_id},
{'permissions': 1})
if not project:
raise ValueError(f'Project {project_id} does not exist.')
# TODO: search through all groups to find the one with the project ID as its name,
# or identify "the admin group" in a different way (for example the group with DELETE rights).
try:
admin_group_id = ObjectId(project['permissions']['groups'][0]['group'])
except KeyError:
raise ValueError(f'Project {project_id} does not seem to have an admin group')
return admin_group_id
def get_admin_group(project: dict) -> dict:
"""Returns the admin group for the project."""
groups_collection = current_app.data.driver.db['groups']
# TODO: see get_admin_group_id
admin_group_id = ObjectId(project['permissions']['groups'][0]['group'])
group = groups_collection.find_one({'_id': admin_group_id})
if group is None:
raise ValueError('Unable to handle project without admin group.')
if group['name'] != str(project['_id']):
log.error('User %s tries to get admin group for project %s, '
'but that does not have the project ID as group name: %s',
current_user.user_id, project.get('_id', '-unknown-'), group)
return abort_with_error(403)
return group
def user_rights_in_project(project_id: ObjectId) -> frozenset:
"""Returns the set of HTTP methods allowed on the given project for the current user."""
from pillar.api.utils import authorization
assert isinstance(project_id, ObjectId)
proj_coll = current_app.db().projects
proj = proj_coll.find_one({'_id': project_id})
return frozenset(authorization.compute_allowed_methods('projects', proj))
def abort_with_error(status):
"""Aborts with the given status, or 500 if the status doesn't indicate an error.
If the status is < 400, status 500 is used instead.
"""
abort(status if status // 100 >= 4 else 500)
raise wz_exceptions.InternalServerError('abort() should have aborted!')
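# For example: abort_with_error(403) aborts with 403 Forbidden, whereas
# abort_with_error(201) aborts with 500, since 201 does not indicate an error.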
def create_new_project(project_name, user_id, overrides):
"""Creates a new project owned by the given user."""
log.info('Creating new project "%s" for user %s', project_name, user_id)
# Create the project itself, the rest will be done by the after-insert hook.
project = {'description': '',
'name': project_name,
'node_types': [],
'status': 'published',
'user': user_id,
'is_private': True,
'permissions': {},
'url': '',
'summary': '',
'category': 'assets', # TODO: allow the user to choose this.
}
if overrides is not None:
project.update(overrides)
result, _, _, status = current_app.post_internal('projects', project)
if status != 201:
log.error('Unable to create project "%s": %s', project_name, result)
return abort_with_error(status)
project.update(result)
# Now re-fetch the project, as both the initial document and the returned
# result do not contain the same etag as the database. This also updates
# other fields set by hooks.
document = current_app.data.driver.db['projects'].find_one(project['_id'])
project.update(document)
log.info('Created project %s for user %s', project['_id'], user_id)
return project
def get_node_type(project, node_type_name):
"""Returns the named node type, or None if it doesn't exist."""
return next((nt for nt in project['node_types']
if nt['name'] == node_type_name), None)
def node_type_dict(project: dict) -> typing.Dict[str, dict]:
"""Return the node types of the project as dictionary.
The returned dictionary will be keyed by the node type name.
"""
return {nt['name']: nt for nt in project['node_types']}
def project_id(project_url: str) -> ObjectId:
"""Returns the object ID, or raises a ValueError when not found."""
proj_coll = current_app.db('projects')
proj = proj_coll.find_one({'url': project_url}, projection={'_id': True})
if not proj:
raise ValueError(f'project with url={project_url!r} not found')
return proj['_id']
def get_project_url(project_id: ObjectId) -> str:
"""Returns the project URL, or raises a ValueError when not found."""
proj_coll = current_app.db('projects')
proj = proj_coll.find_one({'_id': project_id, '_deleted': {'$ne': True}},
projection={'url': True})
if not proj:
raise ValueError(f'project with id={project_id} not found')
return proj['url']
def get_project(project_url: str) -> dict:
"""Find a project in the database, raises ValueError if not found.
:param project_url: URL of the project
"""
proj_coll = current_app.db('projects')
project = proj_coll.find_one({'url': project_url, '_deleted': {'$ne': True}})
if not project:
raise ValueError(f'project url={project_url!r} does not exist')
return project
def put_project(project: dict):
"""Puts a project into the database via Eve.
:param project: the project data, should be the entire project document
:raises ValueError: if the project cannot be saved.
"""
from pillar.api.utils import remove_private_keys
from pillarsdk.utils import remove_none_attributes
pid = ObjectId(project['_id'])
proj_no_priv = remove_private_keys(project)
proj_no_none = remove_none_attributes(proj_no_priv)
result, _, _, status_code = current_app.put_internal('projects', proj_no_none, _id=pid)
if status_code != 200:
message = f"Can't update project {pid}, status {status_code} with issues: {result}"
log.error(message)
raise ValueError(message)
def storage(project_id: ObjectId) -> file_storage_backends.Bucket:
"""Return the storage bucket for this project.
For now this returns a bucket in the default storage backend, since
individual projects do not have a 'storage backend' setting (this is
set per file, not per project).
"""
return file_storage_backends.default_storage_backend(str(project_id))
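# Hedged usage sketch of the lookup helpers above (application context assumed;
# the URL is a placeholder):
#
#     pid = project_id('my-project')     # -> ObjectId, or ValueError if missing
#     url = get_project_url(pid)         # -> 'my-project', ignoring deleted projects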

View File

@@ -0,0 +1,9 @@
from .routes import blueprint_search
from . import queries
def setup_app(app, url_prefix: str = None):
app.register_api_blueprint(
blueprint_search, url_prefix=url_prefix)
queries.setup_app(app)

View File

@@ -0,0 +1,40 @@
import logging
from algoliasearch.helpers import AlgoliaException
log = logging.getLogger(__name__)
def push_updated_user(user_to_index: dict):
"""Push an update to the index when a user document is updated."""
from pillar.api.utils.algolia import index_user_save
try:
index_user_save(user_to_index)
except AlgoliaException as ex:
log.warning(
'Unable to push user info to Algolia for user "%s", id=%s; %s', # noqa
user_to_index.get('username'),
user_to_index.get('objectID'), ex)
def index_node_save(node_to_index: dict):
"""Save parsed node document to the index."""
from pillar.api.utils import algolia
try:
algolia.index_node_save(node_to_index)
except AlgoliaException as ex:
log.warning(
'Unable to push node info to Algolia for node %s; %s', node_to_index, ex) # noqa
def index_node_delete(delete_id: str):
"""Delete node using id."""
from pillar.api.utils import algolia
try:
algolia.index_node_delete(delete_id)
except AlgoliaException as ex:
log.warning('Unable to delete node info to Algolia for node %s; %s', delete_id, ex) # noqa

View File

@@ -0,0 +1,193 @@
"""
Define elasticsearch document mapping.
Elasticsearch consists of two parts:
- Part 1: Defining the documents, in which you specify which fields will be indexed.
- Part 2: Building elasticsearch JSON queries.
BOTH of these parts are equally important to have a search API that returns
relevant results.
"""
import logging
import typing
import elasticsearch_dsl as es
from elasticsearch_dsl import analysis
log = logging.getLogger(__name__)
edge_ngram_filter = analysis.token_filter(
'edge_ngram_filter',
type='edge_ngram',
min_gram=1,
max_gram=15
)
autocomplete = es.analyzer(
'autocomplete',
tokenizer='standard',
filter=['standard', 'asciifolding', 'lowercase', edge_ngram_filter]
)
class User(es.DocType):
"""Elastic document describing user."""
objectID = es.Keyword()
username = es.Text(fielddata=True, analyzer=autocomplete)
username_exact = es.Keyword()
full_name = es.Text(fielddata=True, analyzer=autocomplete)
roles = es.Keyword(multi=True)
groups = es.Keyword(multi=True)
email = es.Text(fielddata=True, analyzer=autocomplete)
email_exact = es.Keyword()
class Meta:
index = 'users'
class Node(es.DocType):
"""
Elastic document describing a node.
"""
node_type = es.Keyword()
objectID = es.Keyword()
name = es.Text(
fielddata=True,
analyzer=autocomplete
)
user = es.Object(
fields={
'id': es.Keyword(),
'name': es.Text(
fielddata=True,
analyzer=autocomplete)
}
)
description = es.Text()
is_free = es.Boolean()
project = es.Object(
fields={
'id': es.Keyword(),
'name': es.Keyword(),
'url': es.Keyword(),
}
)
media = es.Keyword()
picture = es.Keyword()
tags = es.Keyword(multi=True)
license_notes = es.Text()
created_at = es.Date()
updated_at = es.Date()
class Meta:
index = 'nodes'
def create_doc_from_user_data(user_to_index: dict) -> typing.Optional[User]:
"""
Create the document to store in a search engine for this user.
See pillar.celery.search_index_task
:returns: an ElasticSearch document or None if user_to_index has no data.
"""
if not user_to_index:
return None
doc_id = str(user_to_index.get('objectID', ''))
if not doc_id:
log.error('USER ID is missing %s', user_to_index)
raise KeyError('Trying to create document without id')
doc = User(_id=doc_id)
doc.objectID = str(user_to_index['objectID'])
doc.username = user_to_index['username']
doc.username_exact = user_to_index['username']
doc.full_name = user_to_index['full_name']
doc.roles = list(map(str, user_to_index['roles']))
doc.groups = list(map(str, user_to_index['groups']))
doc.email = user_to_index['email']
doc.email_exact = user_to_index['email']
return doc
def create_doc_from_node_data(node_to_index: dict) -> typing.Optional[Node]:
"""
Create the document to store in a search engine for this node.
See pillar.celery.search_index_task
:returns: an ElasticSearch document or None if node_to_index has no data.
"""
if not node_to_index:
return None
# node stuff
doc_id = str(node_to_index.get('objectID', ''))
if not doc_id:
log.error('ID missing %s', node_to_index)
return None
doc = Node(_id=doc_id)
doc.objectID = str(node_to_index['objectID'])
doc.node_type = node_to_index['node_type']
doc.name = node_to_index['name']
doc.description = node_to_index.get('description')
doc.user.id = str(node_to_index['user']['_id'])
doc.user.name = node_to_index['user']['full_name']
doc.project.id = str(node_to_index['project']['_id'])
doc.project.name = node_to_index['project']['name']
doc.project.url = node_to_index['project']['url']
if node_to_index['node_type'] == 'asset':
doc.media = node_to_index['media']
doc.picture = str(node_to_index.get('picture'))
doc.tags = node_to_index.get('tags')
doc.license_notes = node_to_index.get('license_notes')
doc.is_free = node_to_index.get('is_free')
doc.created_at = node_to_index['created']
doc.updated_at = node_to_index['updated']
return doc
def create_doc_from_user(user_to_index: dict) -> User:
"""
Create a user document from a user dictionary.
"""
doc_id = str(user_to_index['objectID'])
doc = User(_id=doc_id)
doc.objectID = str(user_to_index['objectID'])
doc.full_name = user_to_index['full_name']
doc.username = user_to_index['username']
doc.roles = user_to_index['roles']
doc.groups = user_to_index['groups']
doc.email = user_to_index['email']
return doc
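# Hedged example input for create_doc_from_user_data(); the keys mirror the
# attribute assignments above, and all values are placeholders:
#
#     create_doc_from_user_data({
#         'objectID': '563aca02c379cf0005e8e17d',
#         'username': 'alice',
#         'full_name': 'Alice Example',
#         'roles': ['subscriber'],
#         'groups': [],
#         'email': 'alice@example.org',
#     })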

View File

@@ -0,0 +1,65 @@
import logging
from elasticsearch_dsl.connections import connections
from elasticsearch.exceptions import NotFoundError
from pillar import current_app
from . import documents
log = logging.getLogger(__name__)
elk_hosts = current_app.config['ELASTIC_SEARCH_HOSTS']
connections.create_connection(
hosts=elk_hosts,
sniff_on_start=False,
timeout=20)
def push_updated_user(user_to_index: dict):
"""
Push an update to the Elastic index when a user item is updated.
"""
if not user_to_index:
return
doc = documents.create_doc_from_user_data(user_to_index)
if not doc:
return
index = current_app.config['ELASTIC_INDICES']['USER']
log.debug('Index %r update user doc %s in ElasticSearch.', index, doc._id)
doc.save(index=index)
def index_node_save(node_to_index: dict):
"""
Push an update to the Elastic index when a node item is saved.
"""
if not node_to_index:
return
doc = documents.create_doc_from_node_data(node_to_index)
if not doc:
return
index = current_app.config['ELASTIC_INDICES']['NODE']
log.debug('Index %r update node doc %s in ElasticSearch.', index, doc._id)
doc.save(index=index)
def index_node_delete(delete_id: str):
"""
Delete a node document from the Elastic index using its node id.
"""
index = current_app.config['ELASTIC_INDICES']['NODE']
log.debug('Index %r node doc delete %s', index, delete_id)
try:
doc: documents.Node = documents.Node.get(id=delete_id)
doc.delete(index=index)
except NotFoundError:
# seems to be gone already..
pass

View File

@@ -0,0 +1,64 @@
import logging
from typing import List
from elasticsearch.exceptions import NotFoundError
from elasticsearch_dsl.connections import connections
import elasticsearch_dsl as es
from pillar import current_app
from . import documents
log = logging.getLogger(__name__)
class ResetIndexTask(object):
""" Clear and build index / mapping """
# Key into the ELASTIC_INDICES dict in the app config.
index_key: str = ''
# List of elastic document types
doc_types: List = []
name = 'remove index'
def __init__(self):
if not self.index_key:
raise ValueError("No index specified")
if not self.doc_types:
raise ValueError("No doc_types specified")
connections.create_connection(
hosts=current_app.config['ELASTIC_SEARCH_HOSTS'],
# sniff_on_start=True,
retry_on_timeout=True,
)
def execute(self):
index = current_app.config['ELASTIC_INDICES'][self.index_key]
idx = es.Index(index)
try:
idx.delete(ignore=404)
except NotFoundError:
log.warning("Could not delete index '%s', ignoring", index)
else:
log.info("Deleted index %s", index)
# create doc types
for dt in self.doc_types:
idx.doc_type(dt)
# create index
idx.create()
class ResetNodeIndex(ResetIndexTask):
index_key = 'NODE'
doc_types = [documents.Node]
class ResetUserIndex(ResetIndexTask):
index_key = 'USER'
doc_types = [documents.User]
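# Hedged usage sketch: dropping and rebuilding the node index, e.g. from a
# management command:
#
#     ResetNodeIndex().execute()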

View File

@@ -0,0 +1,215 @@
import json
import logging
import typing
from elasticsearch import Elasticsearch
from elasticsearch_dsl import Search, Q, MultiSearch
from elasticsearch_dsl.query import Query
from pillar import current_app
log = logging.getLogger(__name__)
BOOLEAN_TERMS = ['is_free']
NODE_AGG_TERMS = ['node_type', 'media', 'tags', *BOOLEAN_TERMS]
USER_AGG_TERMS = ['roles', ]
ITEMS_PER_PAGE = 10
USER_SOURCE_INCLUDE = ['full_name', 'objectID', 'username']
# Will be set in setup_app()
client: Elasticsearch = None
def add_aggs_to_search(search, agg_terms):
"""
Add facets / aggregations to the search result
"""
for term in agg_terms:
search.aggs.bucket(term, 'terms', field=term)
def make_filter(must: list, terms: dict) -> list:
""" Given term parameters append must queries to the must list """
for field, value in terms.items():
if value not in (None, ''):
must.append({'term': {field: value}})
return must
def nested_bool(filters: list, should: list, terms: dict, *, index_alias: str) -> Search:
"""
Create a nested bool, where the aggregation selection is a must.
:param index_alias: 'USER' or 'NODE', see ELASTIC_INDICES config.
"""
filters = make_filter(filters, terms)
bool_query = Q('bool', should=should)
bool_query = Q('bool', must=bool_query, filter=filters)
index = current_app.config['ELASTIC_INDICES'][index_alias]
search = Search(using=client, index=index)
search.query = bool_query
return search
def do_multi_node_search(queries: typing.List[dict]) -> typing.List[dict]:
"""
Given user query input and term refinements,
search for public, published nodes.
"""
search = create_multi_node_search(queries)
return _execute_multi(search)
def do_node_search(query: str, terms: dict, page: int, project_id: str = '') -> dict:
"""
Given user query input and term refinements,
search for public, published nodes.
"""
search = create_node_search(query, terms, page, project_id)
return _execute(search)
def create_multi_node_search(queries: typing.List[dict]) -> MultiSearch:
search = MultiSearch(using=client)
for q in queries:
search = search.add(create_node_search(**q))
return search
def create_node_search(query: str, terms: dict, page: int, project_id: str = '') -> Search:
terms = _transform_terms(terms)
should = [
Q('match', name=query),
{"match": {"project.name": query}},
{"match": {"user.name": query}},
Q('match', description=query),
Q('term', media=query),
Q('term', tags=query),
]
filters = []
if project_id:
filters.append({'term': {'project.id': project_id}})
if not query:
should = []
search = nested_bool(filters, should, terms, index_alias='NODE')
if not query:
search = search.sort('-created_at')
add_aggs_to_search(search, NODE_AGG_TERMS)
search = paginate(search, page)
if log.isEnabledFor(logging.DEBUG):
log.debug(json.dumps(search.to_dict(), indent=4))
return search
def do_user_search(query: str, terms: dict, page: int) -> dict:
""" return user objects represented in elasicsearch result dict"""
search = create_user_search(query, terms, page)
return _execute(search)
def _common_user_search(query: str) -> typing.Tuple[typing.List[Query], typing.List[Query]]:
"""Construct (filter,should) for regular + admin user search."""
if not query:
return [], []
should = []
if '@' in query:
should.append({'term': {'email_exact': {'value': query, 'boost': 50}}})
email_boost = 25
else:
email_boost = 1
should.extend([
Q('match', username=query),
Q('match', full_name=query),
{'match': {'email': {'query': query, 'boost': email_boost}}},
{'term': {'username_exact': {'value': query, 'boost': 50}}},
])
return [], should
def do_user_search_admin(query: str, terms: dict, page: int) -> dict:
"""
Return a user search result dict.
Search all user fields and provide aggregation information.
"""
search = create_user_admin_search(query, terms, page)
return _execute(search)
def _execute(search: Search) -> dict:
if log.isEnabledFor(logging.DEBUG):
log.debug(json.dumps(search.to_dict(), indent=4))
resp = search.execute()
if log.isEnabledFor(logging.DEBUG):
log.debug(json.dumps(resp.to_dict(), indent=4))
return resp.to_dict()
def _execute_multi(search: MultiSearch) -> typing.List[dict]:
if log.isEnabledFor(logging.DEBUG):
log.debug(json.dumps(search.to_dict(), indent=4))
resp = search.execute()
if log.isEnabledFor(logging.DEBUG):
log.debug(json.dumps(resp.to_dict(), indent=4))
return [r.to_dict() for r in resp]
def create_user_admin_search(query: str, terms: dict, page: int) -> Search:
terms = _transform_terms(terms)
filters, should = _common_user_search(query)
if query:
# We most likely got an ID; we should find it.
if len(query) == len('563aca02c379cf0005e8e17d'):
should.append({'term': {
'objectID': {
'value': query, # the thing we're looking for
'boost': 100, # how much more it counts for the score
}
}})
search = nested_bool(filters, should, terms, index_alias='USER')
add_aggs_to_search(search, USER_AGG_TERMS)
search = paginate(search, page)
return search
def create_user_search(query: str, terms: dict, page: int) -> Search:
search = create_user_admin_search(query, terms, page)
return search.source(include=USER_SOURCE_INCLUDE)
def paginate(search: Search, page_idx: int) -> Search:
return search[page_idx * ITEMS_PER_PAGE:(page_idx + 1) * ITEMS_PER_PAGE]
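# For example, with ITEMS_PER_PAGE = 10: paginate(search, 0) slices search[0:10],
# and paginate(search, 2) slices search[20:30].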
def _transform_terms(terms: dict) -> dict:
"""
Ugly hack! Elastic uses 1/0 for boolean values in its aggregate response,
but expects true/false in queries.
"""
transformed = terms.copy()
for t in BOOLEAN_TERMS:
orig = transformed.get(t)
if orig in ('1', '0'):
transformed[t] = bool(int(orig))
return transformed
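# Hedged example; only keys listed in BOOLEAN_TERMS are normalised:
#
#     _transform_terms({'is_free': '1', 'media': 'video'})
#     # -> {'is_free': True, 'media': 'video'}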
def setup_app(app):
global client
hosts = app.config['ELASTIC_SEARCH_HOSTS']
log.getChild('setup_app').info('Creating ElasticSearch client for %s', hosts)
client = Elasticsearch(hosts)

106
pillar/api/search/routes.py Normal file
View File

@@ -0,0 +1,106 @@
import logging
from flask import Blueprint, request
import elasticsearch.exceptions as elk_ex
from werkzeug import exceptions as wz_exceptions
from pillar.api.utils import authorization, jsonify
from . import queries
log = logging.getLogger(__name__)
blueprint_search = Blueprint('elksearch', __name__)
TERMS = [
'node_type', 'media',
'tags', 'is_free', 'projectname',
'roles',
]
def _term_filters(args) -> dict:
"""
Check if the frontend wants to filter on specific fields (AKA facets).
Return a mapping of term field name to user-provided term value.
"""
return {term: args.get(term, '') for term in TERMS}
def _page_index(page) -> int:
"""Return the page index from the query string."""
try:
page_idx = int(page)
except (TypeError, ValueError):
log.info('invalid page number %r received', request.args.get('page'))
raise wz_exceptions.BadRequest()
return page_idx
@blueprint_search.route('/', methods=['GET'])
def search_nodes():
searchword = request.args.get('q', '')
project_id = request.args.get('project', '')
terms = _term_filters(request.args)
page_idx = _page_index(request.args.get('page', 0))
result = queries.do_node_search(searchword, terms, page_idx, project_id)
return jsonify(result)
@blueprint_search.route('/multisearch', methods=['POST'])
def multi_search_nodes():
if len(request.args) != 1:
log.info(f'Expected 1 argument, received {len(request.args)}')
json_obj = request.json
q = []
for row in json_obj:
q.append({
'query': row.get('q', ''),
'project_id': row.get('project', ''),
'terms': _term_filters(row),
'page': _page_index(row.get('page', 0))
})
result = queries.do_multi_node_search(q)
return jsonify(result)
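# Hedged example request body for POST /multisearch; the keys follow the parsing
# above, and the project ID is a placeholder:
#
#     [{"q": "tree", "project": "", "page": 0, "node_type": "asset"},
#      {"q": "rig", "project": "563aca02c379cf0005e8e17d", "page": 1}]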
@blueprint_search.route('/user')
def search_user():
searchword = request.args.get('q', '')
terms = _term_filters(request.args)
page_idx = _page_index(request.args.get('page', 0))
# result is the raw elasticseach output.
# we need to filter fields in case of user objects.
try:
result = queries.do_user_search(searchword, terms, page_idx)
except elk_ex.ElasticsearchException as ex:
resp = jsonify({'_message': str(ex)})
resp.status_code = 500
return resp
return jsonify(result)
@blueprint_search.route('/admin/user')
@authorization.require_login(require_cap='admin')
def search_user_admin():
"""
User search over all fields.
"""
searchword = request.args.get('q', '')
terms = _term_filters(request.args)
page_idx = _page_index(request.args.get('page', 0))
try:
result = queries.do_user_search_admin(searchword, terms, page_idx)
except elk_ex.ElasticsearchException as ex:
resp = jsonify({'_message': str(ex)})
resp.status_code = 500
return resp
return jsonify(result)

View File

@@ -1,24 +1,31 @@
"""Service accounts."""
import logging
import typing
import blinker
from flask import Blueprint, current_app, g, request
import bson
from flask import Blueprint, current_app, request
from werkzeug import exceptions as wz_exceptions
from application.utils import authorization, authentication, str2id, mongo, jsonify
from application.modules import local_auth
from pillar.api import local_auth
from pillar.api.utils import authorization, authentication
blueprint = Blueprint('service', __name__)
log = logging.getLogger(__name__)
signal_user_changed_role = blinker.NamedSignal('badger:user_changed_role')
ROLES_WITH_GROUPS = {u'admin', u'demo', u'subscriber'}
ROLES_WITH_GROUPS = {'admin', 'demo', 'subscriber'}
# Map of role name to group ID, for the above groups.
role_to_group_id = {}
class ServiceAccountCreationError(Exception):
"""Raised when a service account cannot be created."""
@blueprint.before_app_first_request
def fetch_role_to_group_id_map():
"""Fills the _role_to_group_id mapping upon application startup."""
@@ -38,7 +45,7 @@ def fetch_role_to_group_id_map():
@blueprint.route('/badger', methods=['POST'])
@authorization.require_login(require_roles={u'service', u'badger'}, require_all=True)
@authorization.require_login(require_roles={'service', 'badger'}, require_all=True)
def badger():
if request.mimetype != 'application/json':
log.debug('Received %s instead of application/json', request.mimetype)
@@ -70,42 +77,76 @@ def badger():
action, user_email, role, action, role)
return 'Role not allowed', 403
return do_badger(action, user_email, role)
return do_badger(action, role=role, user_email=user_email)
def do_badger(action, user_email, role):
"""Performs a badger action, returning a HTTP response."""
def do_badger(action: str, *,
role: str=None, roles: typing.Iterable[str]=None,
user_email: str = '', user_id: bson.ObjectId = None):
"""Performs a badger action, returning a HTTP response.
Either role or roles must be given.
Either user_email or user_id must be given.
"""
if action not in {'grant', 'revoke'}:
log.error('do_badger(%r, %r, %r, %r): action %r not supported.',
action, role, user_email, user_id, action)
raise wz_exceptions.BadRequest('Action %r not supported' % action)
if not user_email:
if not user_email and user_id is None:
log.error('do_badger(%r, %r, %r, %r): neither email nor user_id given.',
action, role, user_email, user_id)
raise wz_exceptions.BadRequest('User email not given')
if not role:
raise wz_exceptions.BadRequest('Role not given')
if bool(role) == bool(roles):
log.error('do_badger(%r, role=%r, roles=%r, %r, %r): '
'either "role" or "roles" must be given.',
action, role, roles, user_email, user_id)
raise wz_exceptions.BadRequest('Invalid role(s) given')
# If only a single role was given, handle it as a set of one role.
if not roles:
roles = {role}
del role
users_coll = current_app.data.driver.db['users']
# Fetch the user
db_user = users_coll.find_one({'email': user_email}, projection={'roles': 1, 'groups': 1})
if user_email:
query = {'email': user_email}
else:
query = user_id
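# pymongo accepts a non-dict filter and treats it as a query on '_id'.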
db_user = users_coll.find_one(query, projection={'roles': 1, 'groups': 1})
if db_user is None:
log.warning('badger(%s, %s, %s): user not found', action, user_email, role)
log.warning('badger(%s, roles=%s, user_email=%s, user_id=%s): user not found',
action, roles, user_email, user_id)
return 'User not found', 404
# Apply the action
roles = set(db_user.get('roles') or [])
user_roles = set(db_user.get('roles') or [])
if action == 'grant':
roles.add(role)
user_roles |= roles
else:
roles.discard(role)
user_roles -= roles
groups = manage_user_group_membership(db_user, role, action)
groups = None
for role in roles:
groups = manage_user_group_membership(db_user, role, action)
updates = {'roles': list(roles)}
if groups is None:
# No change for this role
continue
# Also update db_user for the next iteration.
db_user['groups'] = groups
updates = {'roles': list(user_roles)}
if groups is not None:
updates['groups'] = list(groups)
log.debug('badger(%s, %s, user_email=%s, user_id=%s): applying updates %r',
action, role, user_email, user_id, updates)
users_coll.update_one({'_id': db_user['_id']},
{'$set': updates})
@@ -116,19 +157,6 @@ def do_badger(action, user_email, role):
return '', 204
@blueprint.route('/urler/<project_id>', methods=['GET'])
@authorization.require_login(require_roles={u'service', u'urler'}, require_all=True)
def urler(project_id):
"""Returns the URL of any project."""
project_id = str2id(project_id)
project = mongo.find_one_or_404('projects', project_id,
projection={'url': 1})
return jsonify({
'_id': project_id,
'url': project['url']})
def manage_user_group_membership(db_user, role, action):
"""Some roles have associated groups; this function maintains group & role membership.
@@ -162,38 +190,52 @@ def manage_user_group_membership(db_user, role, action):
return user_groups
def create_service_account(email, roles, service):
def create_service_account(email: str, roles: typing.Iterable, service: dict,
*, full_name: str=None):
"""Creates a service account with the given roles + the role 'service'.
:param email: email address associated with the account
:type email: str
:param email: optional email address associated with the account.
:param roles: iterable of role names
:param service: dict of the 'service' key in the user.
:type service: dict
:param full_name: Full name of the service account. If None, will be set to
something reasonable.
:return: tuple (user doc, token doc)
"""
from eve.methods.post import post_internal
# Create a user with the correct roles.
roles = list(set(roles).union({u'service'}))
user = {'username': email,
roles = sorted(set(roles).union({'service'}))
user_id = bson.ObjectId()
log.info('Creating service account %s with roles %s', user_id, roles)
user = {'_id': user_id,
'username': f'SRV-{user_id}',
'groups': [],
'roles': roles,
'settings': {'email_communications': 0},
'auth': [],
'full_name': email,
'email': email,
'full_name': full_name or f'SRV-{user_id}',
'service': service}
result, _, _, status = post_internal('users', user)
if email:
user['email'] = email
result, _, _, status = current_app.post_internal('users', user)
if status != 201:
raise SystemExit('Error creating user {}: {}'.format(email, result))
raise ServiceAccountCreationError('Error creating user {}: {}'.format(user_id, result))
user.update(result)
# Create an authentication token that won't expire for a long time.
token = local_auth.generate_and_store_token(user['_id'], days=36500, prefix='SRV')
token = generate_auth_token(user['_id'])
return user, token
def setup_app(app, url_prefix):
app.register_blueprint(blueprint, url_prefix=url_prefix)
def generate_auth_token(service_account_id) -> dict:
"""Generates an authentication token for a service account."""
token_info = local_auth.generate_and_store_token(service_account_id, days=36500, prefix=b'SRV')
return token_info
def setup_app(app, api_prefix):
app.register_api_blueprint(blueprint, url_prefix=api_prefix)

374
pillar/api/timeline.py Normal file
View File

@@ -0,0 +1,374 @@
import itertools
import typing
from datetime import datetime
from operator import itemgetter
import attr
import bson
import pymongo
from flask import Blueprint, current_app, request, url_for
import pillar
from pillar import shortcodes
from pillar.api.utils import jsonify, pretty_duration, str2id
blueprint = Blueprint('timeline', __name__)
@attr.s(auto_attribs=True)
class TimelineDO:
groups: typing.List['GroupDO'] = []
continue_from: typing.Optional[float] = None
@attr.s(auto_attribs=True)
class GroupDO:
label: typing.Optional[str] = None
url: typing.Optional[str] = None
items: typing.Dict = {}
groups: typing.Iterable['GroupDO'] = []
class SearchHelper:
def __init__(self, nbr_of_weeks: int, continue_from: typing.Optional[datetime],
project_ids: typing.List[bson.ObjectId], sort_direction: str):
self._nbr_of_weeks = nbr_of_weeks
self._continue_from = continue_from
self._project_ids = project_ids
self.sort_direction = sort_direction
def _match(self, continue_from: typing.Optional[datetime]) -> dict:
created = {}
if continue_from:
if self.sort_direction == 'desc':
created = {'_created': {'$lt': continue_from}}
else:
created = {'_created': {'$gt': continue_from}}
return {'_deleted': {'$ne': True},
'node_type': {'$in': ['asset', 'post']},
'properties.status': {'$eq': 'published'},
'project': {'$in': self._project_ids},
**created,
}
def raw_weeks_from_mongo(self) -> pymongo.command_cursor.CommandCursor:
direction = pymongo.DESCENDING if self.sort_direction == 'desc' else pymongo.ASCENDING
nodes_coll = current_app.db('nodes')
return nodes_coll.aggregate([
{'$match': self._match(self._continue_from)},
{'$lookup': {"from": "projects",
"localField": "project",
"foreignField": "_id",
"as": "project"}},
{'$unwind': {'path': "$project"}},
{'$lookup': {"from": "users",
"localField": "user",
"foreignField": "_id",
"as": "user"}},
{'$unwind': {'path': "$user"}},
{'$project': {
'_created': 1,
'project._id': 1,
'project.url': 1,
'project.name': 1,
'user._id': 1,
'user.full_name': 1,
'name': 1,
'node_type': 1,
'picture': 1,
'properties': 1,
'permissions': 1,
}},
{'$group': {
'_id': {'year': {'$isoWeekYear': '$_created'},
'week': {'$isoWeek': '$_created'}},
'nodes': {'$push': '$$ROOT'}
}},
{'$sort': {'_id.year': direction,
'_id.week': direction}},
{'$limit': self._nbr_of_weeks}
])
def has_more(self, continue_from: datetime) -> bool:
nodes_coll = current_app.db('nodes')
result = nodes_coll.count_documents(self._match(continue_from))
return bool(result)
class Grouper:
@classmethod
def label(cls, node):
return None
@classmethod
def url(cls, node):
return None
@classmethod
def group_key(cls) -> typing.Callable[[dict], typing.Any]:
raise NotImplementedError()
@classmethod
def sort_key(cls) -> typing.Callable[[dict], typing.Any]:
raise NotImplementedError()
class ProjectGrouper(Grouper):
@classmethod
def label(cls, project: dict):
return project['name']
@classmethod
def url(cls, project: dict):
return url_for('projects.view', project_url=project['url'])
@classmethod
def group_key(cls) -> typing.Callable[[dict], typing.Any]:
return itemgetter('project')
@classmethod
def sort_key(cls) -> typing.Callable[[dict], typing.Any]:
return lambda node: node['project']['_id']
class UserGrouper(Grouper):
@classmethod
def label(cls, user):
return user['full_name']
@classmethod
def group_key(cls) -> typing.Callable[[dict], typing.Any]:
return itemgetter('user')
@classmethod
def sort_key(cls) -> typing.Callable[[dict], typing.Any]:
return lambda node: node['user']['_id']
class TimeLineBuilder:
def __init__(self, search_helper: SearchHelper, grouper: typing.Type[Grouper]):
self.search_helper = search_helper
self.grouper = grouper
self.continue_from = None
def build(self) -> TimelineDO:
raw_weeks = self.search_helper.raw_weeks_from_mongo()
clean_weeks = (self.create_week_group(week) for week in raw_weeks)
return TimelineDO(
groups=list(clean_weeks),
continue_from=self.continue_from.timestamp() if self.search_helper.has_more(self.continue_from) else None
)
def create_week_group(self, week: dict) -> GroupDO:
nodes = week['nodes']
nodes.sort(key=itemgetter('_created'), reverse=True)
self.update_continue_from(nodes)
groups = self.create_groups(nodes)
return GroupDO(
label=f'Week {week["_id"]["week"]}, {week["_id"]["year"]}',
groups=groups
)
def create_groups(self, nodes: typing.List[dict]) -> typing.List[GroupDO]:
self.sort_nodes(nodes) # groupby assumes that the list is sorted
nodes_grouped = itertools.groupby(nodes, self.grouper.group_key())
groups = (self.clean_group(grouped_by, group) for grouped_by, group in nodes_grouped)
groups_sorted = sorted(groups, key=self.group_row_sorter, reverse=True)
return groups_sorted
def sort_nodes(self, nodes: typing.List[dict]):
nodes.sort(key=itemgetter('node_type'))
nodes.sort(key=self.grouper.sort_key())
def update_continue_from(self, sorted_nodes: typing.List[dict]):
if self.search_helper.sort_direction == 'desc':
first_created = sorted_nodes[-1]['_created']
candidate = self.continue_from or first_created
self.continue_from = min(candidate, first_created)
else:
last_created = sorted_nodes[0]['_created']
candidate = self.continue_from or last_created
self.continue_from = max(candidate, last_created)
def clean_group(self, grouped_by: typing.Any, group: typing.Iterable[dict]) -> GroupDO:
items = self.create_items(group)
return GroupDO(
label=self.grouper.label(grouped_by),
url=self.grouper.url(grouped_by),
items=items
)
def create_items(self, group) -> typing.Dict[str, typing.List[dict]]:
by_node_type = itertools.groupby(group, key=itemgetter('node_type'))
items = {}
for node_type, nodes in by_node_type:
items[node_type] = [self.node_prettyfy(n) for n in nodes]
return items
@classmethod
def node_prettyfy(cls, node: dict) -> dict:
duration_seconds = node['properties'].get('duration_seconds')
if duration_seconds is not None:
node['properties']['duration'] = pretty_duration(duration_seconds)
if node['node_type'] == 'post':
html = _get_markdowned_html(node['properties'], 'content')
html = shortcodes.render_commented(html, context=node['properties'])
node['properties']['pretty_content'] = html
return node
@classmethod
def group_row_sorter(cls, row: GroupDO) -> typing.Tuple[datetime, datetime]:
'''
Groups that contain posts are more interesting, and are therefore sorted higher up.
:param row:
:return: tuple with newest post date and newest asset date
'''
def newest_created(nodes: typing.List[dict]) -> datetime:
if nodes:
return nodes[0]['_created']
return datetime.fromtimestamp(0, tz=bson.tz_util.utc)
newest_post_date = newest_created(row.items.get('post'))
newest_asset_date = newest_created(row.items.get('asset'))
return newest_post_date, newest_asset_date
def _public_project_ids() -> typing.List[bson.ObjectId]:
"""Returns a list of ObjectIDs of public projects.
Memoized in setup_app().
"""
proj_coll = current_app.db('projects')
result = proj_coll.find({'is_private': False}, {'_id': 1})
return [p['_id'] for p in result]
def _get_markdowned_html(document: dict, field_name: str) -> str:
cache_field_name = pillar.markdown.cache_field_name(field_name)
html = document.get(cache_field_name)
if html is None:
markdown_src = document.get(field_name) or ''
html = pillar.markdown.markdown(markdown_src)
return html
@blueprint.route('/', methods=['GET'])
def global_timeline():
continue_from_str = request.args.get('from')
continue_from = parse_continue_from(continue_from_str)
nbr_of_weeks_str = request.args.get('weeksToLoad')
nbr_of_weeks = parse_nbr_of_weeks(nbr_of_weeks_str)
sort_direction = request.args.get('dir', 'desc')
return _global_timeline(continue_from, nbr_of_weeks, sort_direction)
@blueprint.route('/p/<string(length=24):pid_path>', methods=['GET'])
def project_timeline(pid_path: str):
continue_from_str = request.args.get('from')
continue_from = parse_continue_from(continue_from_str)
nbr_of_weeks_str = request.args.get('weeksToLoad')
nbr_of_weeks = parse_nbr_of_weeks(nbr_of_weeks_str)
sort_direction = request.args.get('dir', 'desc')
pid = str2id(pid_path)
return _project_timeline(continue_from, nbr_of_weeks, sort_direction, pid)
def parse_continue_from(from_arg) -> typing.Optional[datetime]:
try:
from_float = float(from_arg)
except (TypeError, ValueError):
return None
return datetime.fromtimestamp(from_float, tz=bson.tz_util.utc)
def parse_nbr_of_weeks(weeks_to_load: str) -> int:
try:
return int(weeks_to_load)
except (TypeError, ValueError):
return 3
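# For example: parse_nbr_of_weeks('5') -> 5, parse_nbr_of_weeks(None) -> 3.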
def _global_timeline(continue_from: typing.Optional[datetime], nbr_of_weeks: int, sort_direction: str):
"""Returns an aggregated view of what has happened on the site
Memoized in setup_app().
:param continue_from: Python utc timestamp where to begin aggregation
:param nbr_of_weeks: Number of weeks to return
Example output:
{
groups: [{
label: 'Week 32',
groups: [{
label: 'Spring',
url: '/p/spring',
items:{
post: [blogPostDoc, blogPostDoc],
asset: [assetDoc, assetDoc]
},
groups: ...
}]
}],
continue_from: 123456.2 // python timestamp
}
"""
builder = TimeLineBuilder(
SearchHelper(nbr_of_weeks, continue_from, _public_project_ids(), sort_direction),
ProjectGrouper
)
return jsonify_timeline(builder.build())
def jsonify_timeline(timeline: TimelineDO):
return jsonify(
attr.asdict(timeline,
recurse=True,
filter=lambda att, value: value is not None)
)
def _project_timeline(continue_from: typing.Optional[datetime], nbr_of_weeks: int, sort_direction, pid: bson.ObjectId):
"""Returns an aggregated view of what has happened on the site
Memoized in setup_app().
:param continue_from: Python utc timestamp where to begin aggregation
:param nbr_of_weeks: Number of weeks to return
Example output:
{
groups: [{
label: 'Week 32',
groups: [{
label: 'Tobias Johansson',
items:{
post: [blogPostDoc, blogPostDoc],
asset: [assetDoc, assetDoc]
},
groups: ...
}]
}],
continue_from: 123456.2 // python timestamp
}
"""
builder = TimeLineBuilder(
SearchHelper(nbr_of_weeks, continue_from, [pid], sort_direction),
UserGrouper
)
return jsonify_timeline(builder.build())
def setup_app(app, url_prefix):
global _public_project_ids
global _global_timeline
global _project_timeline
app.register_api_blueprint(blueprint, url_prefix=url_prefix)
cached = app.cache.cached(timeout=3600)
_public_project_ids = cached(_public_project_ids)
memoize = app.cache.memoize(timeout=60)
_global_timeline = memoize(_global_timeline)
_project_timeline = memoize(_project_timeline)
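# Note: after setup_app() runs, the module-level globals point at the cached
# wrappers, so calls within the timeout (60s / 3600s above) serve memoized results.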

View File

@@ -0,0 +1,82 @@
import logging
import bson
from flask import current_app
from . import hooks
from .routes import blueprint_api
log = logging.getLogger(__name__)
def remove_user_from_group(user_id: bson.ObjectId, group_id: bson.ObjectId):
"""Removes the user from the given group.
Directly uses MongoDB, so that it doesn't require any special permissions.
"""
log.info('Removing user %s from group %s', user_id, group_id)
user_group_action(user_id, group_id, '$pull')
def add_user_to_group(user_id: bson.ObjectId, group_id: bson.ObjectId):
"""Makes the user member of the given group.
Directly uses MongoDB, so that it doesn't require any special permissions.
"""
log.info('Adding user %s to group %s', user_id, group_id)
user_group_action(user_id, group_id, '$addToSet')
def user_group_action(user_id: bson.ObjectId, group_id: bson.ObjectId, action: str):
"""Performs a group action (add/remove).
:param user_id: the user's ObjectID.
:param group_id: the group's ObjectID.
:param action: either '$pull' to remove from a group, or '$addToSet' to add to a group.
"""
from pymongo.results import UpdateResult
assert isinstance(user_id, bson.ObjectId)
assert isinstance(group_id, bson.ObjectId)
assert action in {'$pull', '$addToSet'}
users_coll = current_app.db('users')
result: UpdateResult = users_coll.update_one(
{'_id': user_id},
{action: {'groups': group_id}},
)
if result.matched_count == 0:
raise ValueError(f'Unable to {action} user {user_id} membership of group {group_id}; '
f'user not found.')
def _update_search_user_changed_role(sender, user: dict):
log.debug('Sending updated user %s to Algolia due to role change', user['_id'])
hooks.push_updated_user_to_search(user, original=None)
def setup_app(app, api_prefix):
from pillar.api import service
from . import patch
patch.setup_app(app, url_prefix=api_prefix)
app.on_pre_GET_users += hooks.check_user_access
app.on_post_GET_users += hooks.post_GET_user
app.on_pre_PUT_users += hooks.check_put_access
app.on_pre_PUT_users += hooks.before_replacing_user
app.on_replaced_users += hooks.push_updated_user_to_search
app.on_replaced_users += hooks.send_blinker_signal_roles_changed
app.on_fetched_item_users += hooks.after_fetching_user
app.on_fetched_resource_users += hooks.after_fetching_user_resource
app.on_insert_users += hooks.before_inserting_users
app.on_inserted_users += hooks.after_inserting_users
app.register_api_blueprint(blueprint_api, url_prefix=api_prefix)
service.signal_user_changed_role.connect(_update_search_user_changed_role)

159
pillar/api/users/avatar.py Normal file
View File

@@ -0,0 +1,159 @@
import functools
import io
import logging
import mimetypes
import typing
from bson import ObjectId
from eve.methods.get import getitem_internal
import flask
from pillar import current_app
from pillar.api import blender_id
from pillar.api.blender_cloud import home_project
import pillar.api.file_storage
import pillar.auth
from werkzeug.datastructures import FileStorage
log = logging.getLogger(__name__)
DEFAULT_AVATAR = 'assets/img/default_user_avatar.png'
def url(user: dict) -> str:
"""Return the avatar URL for this user.
:param user: dictionary from the MongoDB 'users' collection.
"""
assert isinstance(user, dict), f'user must be dict, not {type(user)}'
avatar_id = user.get('avatar', {}).get('file')
if not avatar_id:
return _default_avatar()
# The file may not exist, in which case we get an empty string back.
return pillar.api.file_storage.get_file_url(avatar_id) or _default_avatar()
@functools.lru_cache(maxsize=1)
def _default_avatar() -> str:
"""Return the URL path of the default avatar.
Doesn't change after the app has started, so we just cache it.
"""
return flask.url_for('static_pillar', filename=DEFAULT_AVATAR)
def _extension_for_mime(mime_type: str) -> str:
# Take the longest extension. I'd rather have '.jpeg' than the weird '.jpe'.
extensions: typing.List[str] = mimetypes.guess_all_extensions(mime_type)
try:
return max(extensions, key=len)
except ValueError:
# Raised when extensions is empty, e.g. when the mime type is unknown.
return ''
def _get_file_link(file_id: ObjectId) -> str:
# Get the file document via Eve to make it update the link.
file_doc, _, _, status = getitem_internal('files', _id=file_id)
assert status == 200
return file_doc['link']
def sync_avatar(user_id: ObjectId) -> str:
"""Fetch the user's avatar from Blender ID and save to storage.
Errors are logged but do not raise an exception.
:return: the link to the avatar, or '' if it was not processed.
"""
users_coll = current_app.db('users')
db_user = users_coll.find_one({'_id': user_id})
old_avatar_info = db_user.get('avatar', {})
if isinstance(old_avatar_info, ObjectId):
old_avatar_info = {'file': old_avatar_info}
home_proj = home_project.get_home_project(user_id)
if not home_proj:
log.error('Home project of user %s does not exist, unable to store avatar', user_id)
return ''
bid_userid = blender_id.get_user_blenderid(db_user)
if not bid_userid:
log.error('User %s has no Blender ID user-id, unable to fetch avatar', user_id)
return ''
avatar_url = blender_id.avatar_url(bid_userid)
bid_session = blender_id.Session()
# Avoid re-downloading the same avatar.
request_headers = {}
if avatar_url == old_avatar_info.get('last_downloaded_url') and \
old_avatar_info.get('last_modified'):
request_headers['If-Modified-Since'] = old_avatar_info.get('last_modified')
log.info('Downloading avatar for user %s from %s', user_id, avatar_url)
resp = bid_session.get(avatar_url, headers=request_headers, allow_redirects=True)
if resp.status_code == 304:
# File was not modified, we can keep the old file.
log.debug('Avatar for user %s was not modified on Blender ID, not re-downloading', user_id)
return _get_file_link(old_avatar_info['file'])
resp.raise_for_status()
mime_type = resp.headers['Content-Type']
file_extension = _extension_for_mime(mime_type)
if not file_extension:
log.error('No file extension known for mime type %s, unable to handle avatar of user %s',
mime_type, user_id)
return ''
filename = f'avatar-{user_id}{file_extension}'
fake_local_file = io.BytesIO(resp.content)
fake_local_file.name = filename
# Act as if this file was just uploaded by the user, so we can reuse
# existing Pillar file-handling code.
log.debug("Uploading avatar for user %s to storage", user_id)
uploaded_file = FileStorage(
stream=fake_local_file,
filename=filename,
headers=resp.headers,
content_type=mime_type,
content_length=resp.headers['Content-Length'],
)
with pillar.auth.temporary_user(db_user):
upload_data = pillar.api.file_storage.upload_and_process(
fake_local_file,
uploaded_file,
str(home_proj['_id']),
# Disallow image processing, as it's a tiny file anyway and
# we'll just serve the original.
may_process_file=False,
)
file_id = ObjectId(upload_data['file_id'])
avatar_info = {
'file': file_id,
'last_downloaded_url': resp.url,
'last_modified': resp.headers.get('Last-Modified'),
}
# Update the user to store the reference to their avatar.
old_avatar_file_id = old_avatar_info.get('file')
update_result = users_coll.update_one({'_id': user_id},
{'$set': {'avatar': avatar_info}})
if update_result.matched_count == 1:
log.debug('Updated avatar for user ID %s to file %s', user_id, file_id)
else:
log.warning('Matched %d users while setting avatar for user ID %s to file %s',
update_result.matched_count, user_id, file_id)
if old_avatar_file_id:
current_app.delete_internal('files', _id=old_avatar_file_id)
return _get_file_link(file_id)
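
For illustration (not part of this changeset), a minimal usage sketch of this module, assuming an active Pillar application context; the user ID is a placeholder:

# Usage sketch; assumes an active application context and wiring not shown here.
from bson import ObjectId

import pillar.api.users.avatar as avatar

user_doc = {'_id': ObjectId('5f0c1a2b3c4d5e6f7a8b9c0d'), 'avatar': {}}
print(avatar.url(user_doc))  # no avatar file set, so this falls back to the default

link = avatar.sync_avatar(user_doc['_id'])  # fetch from Blender ID, store in home project
if not link:
    print('avatar not synced; see the logs')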

pillar/api/users/hooks.py (new file, 205 lines)

@ -0,0 +1,205 @@
import copy
import json
from eve.utils import parse_request
from werkzeug import exceptions as wz_exceptions
from pillar import current_app
from pillar.api.users.routes import log
import pillar.api.users.avatar
import pillar.auth
USER_EDITABLE_FIELDS = {'full_name', 'username', 'email', 'settings'}
# These fields nobody is allowed to touch directly, not even admins.
USER_ALWAYS_RESTORE_FIELDS = {'auth'}
def before_replacing_user(request, lookup):
"""Prevents changes to any field of the user doc, except USER_EDITABLE_FIELDS."""
# Find the user that is being replaced
req = parse_request('users')
req.projection = json.dumps({key: 0 for key in USER_EDITABLE_FIELDS})
original = current_app.data.find_one('users', req, **lookup)
# Make sure that the replacement has a valid auth field.
put_data = request.get_json()
if put_data is None:
raise wz_exceptions.BadRequest('No JSON data received')
# We should get a ref to the cached JSON, and not a copy. This will allow us to
# modify the cached JSON so that Eve sees our modifications.
assert put_data is request.get_json()
# Reset fields that shouldn't be edited to their original values. This is only
# needed when users are editing themselves; admins are allowed to edit much more.
if not pillar.auth.current_user.has_cap('admin'):
for db_key, db_value in original.items():
if db_key[0] == '_' or db_key in USER_EDITABLE_FIELDS:
continue
if db_key in original:
put_data[db_key] = copy.deepcopy(original[db_key])
# Remove fields added by this PUT request, except when they are user-editable.
for put_key in list(put_data.keys()):
if put_key[0] == '_' or put_key in USER_EDITABLE_FIELDS:
continue
if put_key not in original:
del put_data[put_key]
# Always restore those fields
for db_key in USER_ALWAYS_RESTORE_FIELDS:
if db_key in original:
put_data[db_key] = copy.deepcopy(original[db_key])
else:
del put_data[db_key]
# Regular users should always have an email address
if 'service' not in put_data.get('roles', ()):
if not put_data.get('email'):
raise wz_exceptions.UnprocessableEntity(
'email field must be given')
def push_updated_user_to_search(user, original):
"""
Push an update to the Search index when a user
item is updated
"""
from pillar.celery import search_index_tasks as searchindex
searchindex.updated_user.delay(str(user['_id']))
def send_blinker_signal_roles_changed(user, original):
"""
Sends a Blinker signal that the user roles were
changed, so others can respond.
"""
current_roles = set(user.get('roles', []))
original_roles = set(original.get('roles', []))
if current_roles == original_roles:
return
from pillar.api.service import signal_user_changed_role
log.info('User %s changed roles to %s, sending Blinker signal',
user.get('_id'), current_roles)
signal_user_changed_role.send(current_app, user=user)
def check_user_access(request, lookup):
"""Modifies the lookup dict to limit returned user info."""
user = pillar.auth.get_current_user()
# Admins can do anything and get everything, except the 'auth' block.
if user.has_cap('admin'):
return
if not lookup and user.is_anonymous:
raise wz_exceptions.Forbidden()
# Add a filter to only return the current user.
if '_id' not in lookup:
lookup['_id'] = user.user_id
def check_put_access(request, lookup):
"""Only allow PUT to the current user, or all users if admin."""
user = pillar.auth.get_current_user()
if user.has_cap('admin'):
return
if user.is_anonymous:
raise wz_exceptions.Forbidden()
if str(lookup['_id']) != str(user.user_id):
raise wz_exceptions.Forbidden()
def after_fetching_user(user: dict) -> None:
# Deny access to auth block; authentication stuff is managed by
# custom end-points.
user.pop('auth', None)
current_user = pillar.auth.get_current_user()
# Admins can do anything and get everything, except the 'auth' block.
if current_user.has_cap('admin'):
return
# Only allow full access to the current user.
if current_user.is_authenticated and str(user['_id']) == str(current_user.user_id):
return
# Remove all fields except public ones.
public_fields = {'full_name', 'username', 'email', 'extension_props_public', 'badges'}
for field in list(user.keys()):
if field not in public_fields:
del user[field]
def after_fetching_user_resource(response):
for user in response['_items']:
after_fetching_user(user)
def post_GET_user(request, payload):
json_data = json.loads(payload.data)
# Check if we are querying the users endpoint (instead of the single user)
if json_data.get('_id') is None:
return
# json_data['computed_permissions'] = \
# compute_permissions(json_data['_id'], app.data.driver)
payload.data = json.dumps(json_data)
def grant_org_roles(user_doc):
"""Handle any organization this user may be part of."""
email = user_doc.get('email')
if not email:
log.info('Unable to check new user for organization membership, no email address: %r',
user_doc)
return
org_roles = current_app.org_manager.unknown_member_roles(email)
if not org_roles:
log.debug('No organization roles for user %r', email)
return
log.info('Granting organization roles %r to user %r', org_roles, email)
new_roles = set(user_doc.get('roles') or []) | org_roles
user_doc['roles'] = list(new_roles)
def before_inserting_users(user_docs):
"""Grants organization roles to the created users."""
for user_doc in user_docs:
grant_org_roles(user_doc)
def after_inserting_users(user_docs):
"""Moves the users from the unknown_members to the members list of their organizations."""
om = current_app.org_manager
for user_doc in user_docs:
user_id = user_doc.get('_id')
user_email = user_doc.get('email')
if not user_id or not user_email:
# Missing emails can happen when creating a service account, it's fine.
log.info('User created with _id=%r and email=%r, unable to check organizations',
user_id, user_email)
continue
om.make_member_known(user_id, user_email)
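
These callbacks follow Eve's event-hook signatures; below is a hedged sketch of how they might be registered (the actual wiring lives elsewhere in Pillar, and the setup function here is hypothetical):

# Hypothetical registration; Pillar's real setup code is not shown in this diff.
from pillar.api.users import hooks

def setup_user_hooks(app):
    # (request, lookup) hooks run before the HTTP method is handled.
    app.on_pre_GET_users += hooks.check_user_access
    app.on_pre_PUT_users += hooks.check_put_access
    app.on_pre_PUT_users += hooks.before_replacing_user
    app.on_post_GET_users += hooks.post_GET_user
    # Document-level hooks.
    app.on_replaced_users += hooks.push_updated_user_to_search
    app.on_replaced_users += hooks.send_blinker_signal_roles_changed
    app.on_fetched_item_users += hooks.after_fetching_user
    app.on_fetched_resource_users += hooks.after_fetching_user_resource
    app.on_insert_users += hooks.before_inserting_users
    app.on_inserted_users += hooks.after_inserting_users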

pillar/api/users/patch.py (new file, 45 lines)

@ -0,0 +1,45 @@
"""User patching support."""
import logging
import bson
from flask import Blueprint
import werkzeug.exceptions as wz_exceptions
from pillar import current_app
from pillar.auth import current_user
from pillar.api.utils import authorization, jsonify, remove_private_keys
from pillar.api import patch_handler
log = logging.getLogger(__name__)
patch_api_blueprint = Blueprint('users.patch', __name__)
class UserPatchHandler(patch_handler.AbstractPatchHandler):
item_name = 'user'
@authorization.require_login()
def patch_set_username(self, user_id: bson.ObjectId, patch: dict):
"""Updates a user's username."""
if user_id != current_user.user_id:
log.info('User %s tried to change username of user %s',
current_user.user_id, user_id)
raise wz_exceptions.Forbidden('You may only change your own username')
new_username = patch['username']
log.info('User %s uses PATCH to set username to %r', current_user.user_id, new_username)
users_coll = current_app.db('users')
db_user = users_coll.find_one({'_id': user_id})
db_user['username'] = new_username
# Save via Eve to check the schema and trigger update hooks.
response, _, _, status = current_app.put_internal(
'users', remove_private_keys(db_user), _id=user_id)
return jsonify(response), status
def setup_app(app, url_prefix):
UserPatchHandler(patch_api_blueprint)
app.register_api_blueprint(patch_api_blueprint, url_prefix=url_prefix)
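
Client-side, this handler would presumably be invoked with a PATCH whose op name matches the method above; a hedged sketch (the host, URL layout, and the 'set-username' op name are assumptions inferred from the method name):

# Hypothetical client call; URL layout and op name are assumptions.
import requests

resp = requests.patch(
    'https://cloud.example.com/api/users/5f0c1a2b3c4d5e6f7a8b9c0d',
    json={'op': 'set-username', 'username': 'new-username'},
    headers={'Authorization': 'Bearer <token>'},
)
print(resp.status_code, resp.text)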

pillar/api/users/routes.py (new file, 144 lines)

@ -0,0 +1,144 @@
import logging
from eve.methods.get import get
from flask import Blueprint, request
import werkzeug.exceptions as wz_exceptions
from pillar import current_app
from pillar.api import utils
from pillar.api.utils.authorization import require_login
from pillar.auth import current_user
log = logging.getLogger(__name__)
blueprint_api = Blueprint('users_api', __name__)
@blueprint_api.route('/me')
@require_login()
def my_info():
eve_resp, _, _, status, _ = get('users', {'_id': current_user.user_id})
resp = utils.jsonify(eve_resp['_items'][0], status=status)
return resp
@blueprint_api.route('/video/<video_id>/progress')
@require_login()
def get_video_progress(video_id: str):
"""Return video progress information.
Either a `204 No Content` is returned (no information stored),
or a `200 Ok` with JSON from Eve's 'users' schema, from the key
video.view_progress.<video_id>.
"""
# Validation of the video ID; raises a BadRequest when it's not an ObjectID.
    # This isn't strictly necessary, but it makes this function behave symmetrically
    # to the set_video_progress() function.
utils.str2id(video_id)
users_coll = current_app.db('users')
user_doc = users_coll.find_one(current_user.user_id, projection={'nodes.view_progress': True})
try:
progress = user_doc['nodes']['view_progress'][video_id]
except KeyError:
return '', 204
if not progress:
return '', 204
return utils.jsonify(progress)
@blueprint_api.route('/video/<video_id>/progress', methods=['POST'])
@require_login()
def set_video_progress(video_id: str):
"""Save progress information about a certain video.
Expected parameters:
- progress_in_sec: float number of seconds
- progress_in_perc: integer percentage of video watched (interval [0-100])
"""
my_log = log.getChild('set_video_progress')
my_log.debug('Setting video progress for user %r video %r', current_user.user_id, video_id)
# Constructing this response requires an active app, and thus can't be done on module load.
no_video_response = utils.jsonify({'_message': 'No such video'}, status=404)
try:
progress_in_sec = float(request.form['progress_in_sec'])
progress_in_perc = int(request.form['progress_in_perc'])
except KeyError as ex:
my_log.debug('Missing POST field in request: %s', ex)
        raise wz_exceptions.BadRequest(f'Missing form field: {ex}')
except ValueError as ex:
my_log.debug('Invalid value for POST field in request: %s', ex)
raise wz_exceptions.BadRequest(f'Invalid value for field: {ex}')
users_coll = current_app.db('users')
nodes_coll = current_app.db('nodes')
# First check whether this is actually an existing video
video_oid = utils.str2id(video_id)
video_doc = nodes_coll.find_one(video_oid, projection={
'node_type': True,
'properties.content_type': True,
'properties.file': True,
})
if not video_doc:
my_log.debug('Node %r not found, unable to set progress for user %r',
video_oid, current_user.user_id)
return no_video_response
try:
is_video = (video_doc['node_type'] == 'asset'
and video_doc['properties']['content_type'] == 'video')
except KeyError:
is_video = False
if not is_video:
my_log.info('Node %r is not a video, unable to set progress for user %r',
video_oid, current_user.user_id)
# There is no video found at this URL, so act as if it doesn't even exist.
return no_video_response
# Compute the progress
percent = min(100, max(0, progress_in_perc))
progress = {
'progress_in_sec': progress_in_sec,
'progress_in_percent': percent,
'last_watched': utils.utcnow(),
}
# After watching a certain percentage of the video, we consider it 'done'
#
# Total Credit start Total Credit Percent
# HH:MM:SS HH:MM:SS sec sec of duration
# Sintel 00:14:48 00:12:24 888 744 83.78%
# Tears of Steel 00:12:14 00:09:49 734 589 80.25%
# Cosmos Laundro 00:12:10 00:10:05 730 605 82.88%
# Agent 327 00:03:51 00:03:26 231 206 89.18%
# Caminandes 3 00:02:30 00:02:18 150 138 92.00%
# Glass Half 00:03:13 00:02:52 193 172 89.12%
# Big Buck Bunny 00:09:56 00:08:11 596 491 82.38%
# Elephants Drea 00:10:54 00:09:25 654 565 86.39%
#
# Median 85.09%
# Average 85.75%
#
    # For training videos, marking them as done at 85% of the video may be a bit
# early, since those probably won't have (long) credits. This is why we
# stick to 90% here.
if percent >= 90:
progress['done'] = True
# Setting each property individually prevents us from overwriting any
# existing {done: true} fields.
updates = {f'nodes.view_progress.{video_id}.{k}': v
for k, v in progress.items()}
result = users_coll.update_one({'_id': current_user.user_id},
{'$set': updates})
if result.matched_count == 0:
my_log.error('Current user %r could not be updated', current_user.user_id)
raise wz_exceptions.InternalServerError('Unable to find logged-in user')
return '', 204
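
For illustration (not part of the diff), client calls against these two endpoints might look like this; the host, token, and URL prefix are placeholders:

# Hypothetical client usage; the /api prefix and host are assumptions.
import requests

url = 'https://cloud.example.com/api/users/video/5f0c1a2b3c4d5e6f7a8b9c0d/progress'
headers = {'Authorization': 'Bearer <token>'}

# Store progress: 120.5 seconds in, 42% watched; a 204 is expected on success.
requests.post(url, data={'progress_in_sec': 120.5, 'progress_in_perc': 42}, headers=headers)

# Fetch it back; a 204 means no progress has been stored yet.
resp = requests.get(url, headers=headers)
if resp.status_code == 200:
    print(resp.json())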


@ -0,0 +1,308 @@
import base64
import copy
import datetime
import functools
import hashlib
import json
import logging
import random
import typing
import urllib.request, urllib.parse, urllib.error
import warnings
import bson.objectid
import bson.tz_util
from eve import RFC1123_DATE_FORMAT
from flask import current_app
from werkzeug import exceptions as wz_exceptions
import pymongo.results
log = logging.getLogger(__name__)
def node_setattr(node, key, value):
"""Sets a node property by dotted key.
Modifies the node in-place. Deletes None values.
:type node: dict
:type key: str
:param value: the value to set, or None to delete the key.
"""
set_on = node
while key and '.' in key:
head, key = key.split('.', 1)
set_on = set_on[head]
if value is None:
set_on.pop(key, None)
else:
set_on[key] = value
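
For illustration (not part of the diff), a few example calls; note that intermediate dicts must already exist:

# Example usage of node_setattr(); intermediate dicts must already exist.
node = {'properties': {'ratings': {'positive': 3}}}
node_setattr(node, 'properties.ratings.positive', 4)      # update a nested key
node_setattr(node, 'properties.ratings.hot', 12.5)        # add a new leaf
node_setattr(node, 'properties.ratings.positive', None)   # None deletes the key
assert node == {'properties': {'ratings': {'hot': 12.5}}}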
def remove_private_keys(document):
"""Removes any key that starts with an underscore, returns result as new
dictionary.
"""
def do_remove(doc):
for key in list(doc.keys()):
if key.startswith('_'):
del doc[key]
elif isinstance(doc[key], dict):
doc[key] = do_remove(doc[key])
return doc
doc_copy = copy.deepcopy(document)
do_remove(doc_copy)
try:
del doc_copy['allowed_methods']
except KeyError:
pass
return doc_copy
def pretty_duration(seconds: typing.Union[None, int, float]):
if seconds is None:
return ''
seconds = round(seconds)
hours, seconds = divmod(seconds, 3600)
minutes, seconds = divmod(seconds, 60)
if hours > 0:
return f'{hours:02}:{minutes:02}:{seconds:02}'
else:
return f'{minutes:02}:{seconds:02}'
def pretty_duration_fractional(seconds: typing.Union[None, int, float]):
if seconds is None:
return ''
# Remove fraction of seconds from the seconds so that the rest is done as integers.
seconds, fracs = divmod(seconds, 1)
hours, seconds = divmod(int(seconds), 3600)
minutes, seconds = divmod(seconds, 60)
msec = int(round(fracs * 1000))
if msec == 0:
msec_str = ''
else:
msec_str = f'.{msec:03}'
if hours > 0:
return f'{hours:02}:{minutes:02}:{seconds:02}{msec_str}'
else:
return f'{minutes:02}:{seconds:02}{msec_str}'
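
A few example values for these formatters (not part of the diff):

# Example outputs of the duration formatters.
assert pretty_duration(None) == ''
assert pretty_duration(75) == '01:15'
assert pretty_duration(3675) == '01:01:15'
assert pretty_duration_fractional(75.25) == '01:15.250'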
class PillarJSONEncoder(json.JSONEncoder):
"""JSON encoder with support for Pillar resources."""
def default(self, obj):
if isinstance(obj, datetime.datetime):
return obj.strftime(RFC1123_DATE_FORMAT)
if isinstance(obj, datetime.timedelta):
return pretty_duration(obj.total_seconds())
if isinstance(obj, bson.ObjectId):
return str(obj)
if isinstance(obj, pymongo.results.UpdateResult):
return obj.raw_result
# Let the base class default method raise the TypeError
return json.JSONEncoder.default(self, obj)
def dumps(mongo_doc, **kwargs):
"""json.dumps() for MongoDB documents."""
return json.dumps(mongo_doc, cls=PillarJSONEncoder, **kwargs)
def jsonify(mongo_doc, status=200, headers=None):
"""JSonifies a Mongo document into a Flask response object."""
return current_app.response_class(dumps(mongo_doc),
mimetype='application/json',
status=status,
headers=headers)
def bsonify(mongo_doc, status=200, headers=None):
"""BSonifies a Mongo document into a Flask response object."""
import bson
data = bson.BSON.encode(mongo_doc)
return current_app.response_class(data,
mimetype='application/bson',
status=status,
headers=headers)
def skip_when_testing(func):
"""Decorator, skips the decorated function when app.config['TESTING']"""
@functools.wraps(func)
def wrapper(*args, **kwargs):
if current_app.config['TESTING']:
log.debug('Skipping call to %s(...) due to TESTING', func.__name__)
return None
return func(*args, **kwargs)
return wrapper
def project_get_node_type(project_document, node_type_node_name):
"""Return a node_type subdocument for a project. If none is found, return
None.
"""
if project_document is None:
return None
return next((node_type for node_type in project_document['node_types']
if node_type['name'] == node_type_node_name), None)
def str2id(document_id: str) -> bson.ObjectId:
"""Returns the document ID as ObjectID, or raises a BadRequest exception.
:raises: wz_exceptions.BadRequest
"""
if not document_id:
log.debug('str2id(%r): Invalid Object ID', document_id)
raise wz_exceptions.BadRequest('Invalid object ID %r' % document_id)
try:
return bson.ObjectId(document_id)
except (bson.objectid.InvalidId, TypeError):
log.debug('str2id(%r): Invalid Object ID', document_id)
raise wz_exceptions.BadRequest('Invalid object ID %r' % document_id)
def gravatar(email: str, size=64) -> typing.Optional[str]:
"""Deprecated: return the Gravatar URL.
.. deprecated::
Use of Gravatar is deprecated, in favour of our self-hosted avatars.
See pillar.api.users.avatar.url(user).
"""
warnings.warn('pillar.api.utils.gravatar() is deprecated, '
'use pillar.api.users.avatar.url() instead',
category=DeprecationWarning)
if email is None:
return None
parameters = {'s': str(size), 'd': 'mm'}
return "https://www.gravatar.com/avatar/" + \
hashlib.md5(email.encode()).hexdigest() + \
"?" + urllib.parse.urlencode(parameters)
class MetaFalsey(type):
def __bool__(cls):
return False
class DoesNotExistMeta(MetaFalsey):
def __repr__(cls) -> str:
return 'DoesNotExist'
class DoesNotExist(object, metaclass=DoesNotExistMeta):
"""Returned as value by doc_diff if a value does not exist."""
def doc_diff(doc1, doc2, *, falsey_is_equal=True, superkey: str = None):
"""Generator, yields differences between documents.
Yields changes as (key, value in doc1, value in doc2) tuples, where
the value can also be the DoesNotExist class. Does not report changed
private keys (i.e. the standard Eve keys starting with underscores).
Sub-documents (i.e. dicts) are recursed, and dot notation is used
for the keys if changes are found.
If falsey_is_equal=True, all Falsey values compare as equal, i.e. this
function won't report differences between DoesNotExist, False, '', and 0.
"""
def is_private(key):
return str(key).startswith('_')
def combine_key(some_key):
"""Combine this key with the superkey.
Keep the key type the same, unless we have to combine with a superkey.
"""
if not superkey:
return some_key
if isinstance(some_key, str) and some_key[0] == '[':
return f'{superkey}{some_key}'
return f'{superkey}.{some_key}'
if doc1 is doc2:
return
if falsey_is_equal and not bool(doc1) and not bool(doc2):
return
if isinstance(doc1, dict) and isinstance(doc2, dict):
for key in set(doc1.keys()).union(set(doc2.keys())):
if is_private(key):
continue
val1 = doc1.get(key, DoesNotExist)
val2 = doc2.get(key, DoesNotExist)
yield from doc_diff(val1, val2,
falsey_is_equal=falsey_is_equal,
superkey=combine_key(key))
return
if isinstance(doc1, list) and isinstance(doc2, list):
for idx in range(max(len(doc1), len(doc2))):
try:
item1 = doc1[idx]
except IndexError:
item1 = DoesNotExist
try:
item2 = doc2[idx]
except IndexError:
item2 = DoesNotExist
subkey = f'[{idx}]'
if item1 is DoesNotExist or item2 is DoesNotExist:
yield combine_key(subkey), item1, item2
else:
yield from doc_diff(item1, item2,
falsey_is_equal=falsey_is_equal,
superkey=combine_key(subkey))
return
if doc1 != doc2:
yield superkey, doc1, doc2
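
For illustration (not part of the diff), comparing two small documents:

# Example usage of doc_diff(); private keys like '_etag' are not reported.
old = {'name': 'Agent 327', 'tags': ['a', 'b'], '_etag': 'abc'}
new = {'name': 'Agent 327', 'tags': ['a', 'c', 'd']}

for key, val1, val2 in doc_diff(old, new):
    print(key, val1, val2)
# Prints:
#   tags[1] b c
#   tags[2] DoesNotExist d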
def random_etag() -> str:
"""Random string usable as etag."""
randbytes = random.getrandbits(256).to_bytes(32, 'big')
return base64.b64encode(randbytes)[:-1].decode()
def utcnow() -> datetime.datetime:
"""Construct timezone-aware 'now' in UTC with millisecond precision."""
now = datetime.datetime.now(tz=bson.tz_util.utc)
# MongoDB stores in millisecond precision, so truncate the microseconds.
# This way the returned datetime can be round-tripped via MongoDB and stay the same.
trunc_now = now.replace(microsecond=now.microsecond - (now.microsecond % 1000))
return trunc_now


@ -0,0 +1,33 @@
import logging
from bson import ObjectId
from pillar import current_app
from . import skip_when_testing
log = logging.getLogger(__name__)
@skip_when_testing
def index_user_save(to_index_user: dict):
index_users = current_app.algolia_index_users
if not index_users:
log.debug('No Algolia index defined, so nothing to do.')
return
# Create or update Algolia index for the user
index_users.save_object(to_index_user)
@skip_when_testing
def index_node_save(node_to_index):
if not current_app.algolia_index_nodes:
return
current_app.algolia_index_nodes.save_object(node_to_index)
@skip_when_testing
def index_node_delete(delete_id):
if current_app.algolia_index_nodes is None:
return
current_app.algolia_index_nodes.delete_object(delete_id)


@ -0,0 +1,447 @@
"""Generic authentication.
Contains functionality to validate tokens, create users and tokens, and make
unique usernames from emails. Calls out to the pillar_server.modules.blender_id
module for Blender ID communication.
"""
import base64
import datetime
import hmac
import hashlib
import logging
import typing
import bson
from flask import g, current_app, session
from flask import request
from werkzeug import exceptions as wz_exceptions
from pillar.api.utils import remove_private_keys, utcnow
log = logging.getLogger(__name__)
# Construction is done when requested, since constructing a UserClass instance
# requires an application context to look up capabilities. We set the initial
# value to a not-None singleton to be able to differentiate between
# g.current_user set to "not logged in" or "uninitialised CLI_USER".
CLI_USER = ...
def force_cli_user():
"""Sets g.current_user to the CLI_USER object.
This is used as a marker to avoid authorization checks and just allow everything.
"""
global CLI_USER
from pillar.auth import UserClass
if CLI_USER is ...:
CLI_USER = UserClass.construct('CLI', {
'_id': 'CLI',
'groups': [],
'roles': {'admin'},
'email': 'local@nowhere',
'username': 'CLI',
})
log.info('CONSTRUCTED CLI USER %s of type %s', id(CLI_USER), id(type(CLI_USER)))
log.info('Logging in as CLI_USER (%s) of type %s, circumventing authentication.',
id(CLI_USER), id(type(CLI_USER)))
g.current_user = CLI_USER
def find_user_in_db(user_info: dict, provider='blender-id') -> dict:
"""Find the user in our database, creating/updating the returned document where needed.
First, search for the user using its id from the provider, then try to look the user up via the
email address.
Does NOT update the user in the database.
:param user_info: Information (id, email and full_name) from the auth provider
:param provider: One of the supported providers
"""
users = current_app.data.driver.db['users']
user_id = user_info['id']
query = {'$or': [
{'auth': {'$elemMatch': {
'user_id': str(user_id),
'provider': provider}}},
{'email': user_info['email']},
]}
log.debug('Querying: %s', query)
db_user = users.find_one(query)
if db_user:
log.debug('User with %s id %s already in our database, updating with info from %s',
provider, user_id, provider)
db_user['email'] = user_info['email']
# Find out if an auth entry for the current provider already exists
provider_entry = [element for element in db_user['auth'] if element['provider'] == provider]
if not provider_entry:
db_user['auth'].append({
'provider': provider,
'user_id': str(user_id),
'token': ''})
else:
log.debug('User %r not yet in our database, create a new one.', user_id)
db_user = create_new_user_document(
email=user_info['email'],
user_id=user_id,
username=user_info['full_name'],
provider=provider)
db_user['username'] = make_unique_username(user_info['email'])
if not db_user['full_name']:
db_user['full_name'] = db_user['username']
return db_user
def validate_token(*, force=False) -> bool:
"""Validate the token provided in the request and populate the current_user
flask.g object, so that permissions and access to a resource can be defined
from it.
When the token is successfully validated, sets `g.current_user` to contain
the user information, otherwise it is set to None.
:param force: don't trust g.current_user and force a re-check.
:returns: True iff the user is logged in with a valid Blender ID token.
"""
import pillar.auth
# Trust a pre-existing g.current_user
if not force:
cur = getattr(g, 'current_user', None)
if cur is not None and cur.is_authenticated:
log.debug('skipping token check because current user is already set to %s', cur)
return True
auth_header = request.headers.get('Authorization') or ''
if request.authorization:
token = request.authorization.username
oauth_subclient = request.authorization.password
elif auth_header.startswith('Bearer '):
token = auth_header[7:].strip()
oauth_subclient = ''
else:
# Check the session, the user might be logged in through Flask-Login.
# The user has a logged-in session; trust only if this request passes a CSRF check.
        # FIXME(Sybren): we should stop saving the token as 'user_id' in the session.
token = session.get('user_id')
if token:
log.debug('skipping token check because current user already has a session')
current_app.csrf.protect()
else:
token = pillar.auth.get_blender_id_oauth_token()
oauth_subclient = None
if not token:
        # If no authorization headers are provided, we are getting a request
        # from a user who is not logged in. Proceed accordingly.
log.debug('No authentication headers, so not logged in.')
g.current_user = pillar.auth.AnonymousUser()
return False
return validate_this_token(token, oauth_subclient) is not None
def validate_this_token(token, oauth_subclient=None):
"""Validates a given token, and sets g.current_user.
:returns: the user in MongoDB, or None if not a valid token.
:rtype: dict
"""
from pillar.auth import UserClass, AnonymousUser, user_authenticated
g.current_user = None
_delete_expired_tokens()
# Check the users to see if there is one with this Blender ID token.
db_token = find_token(token, oauth_subclient)
if not db_token:
# If no valid token is found in our local database, we issue a new
# request to the Blender ID server to verify the validity of the token
# passed via the HTTP header. We will get basic user info if the user
# is authorized, and we will store the token in our local database.
from pillar.api import blender_id
db_user, status = blender_id.validate_create_user('', token, oauth_subclient)
else:
# log.debug("User is already in our database and token hasn't expired yet.")
users = current_app.data.driver.db['users']
db_user = users.find_one(db_token['user'])
if db_user is None:
log.debug('Validation failed, user not logged in')
g.current_user = AnonymousUser()
return None
g.current_user = UserClass.construct(token, db_user)
user_authenticated.send(g.current_user)
return db_user
def remove_token(token: str):
"""Removes the token from the database."""
tokens_coll = current_app.db('tokens')
token_hashed = hash_auth_token(token)
# TODO: remove matching on hashed tokens once all hashed tokens have expired.
lookup = {'$or': [{'token': token}, {'token_hashed': token_hashed}]}
del_res = tokens_coll.delete_many(lookup)
log.debug('Removed token %r, matched %d documents', token, del_res.deleted_count)
def find_token(token, is_subclient_token=False, **extra_filters):
"""Returns the token document, or None if it doesn't exist (or is expired)."""
tokens_coll = current_app.db('tokens')
token_hashed = hash_auth_token(token)
# TODO: remove matching on hashed tokens once all hashed tokens have expired.
lookup = {'$or': [{'token': token}, {'token_hashed': token_hashed}],
'is_subclient_token': True if is_subclient_token else {'$in': [False, None]},
'expire_time': {"$gt": utcnow()}}
lookup.update(extra_filters)
db_token = tokens_coll.find_one(lookup)
return db_token
def hash_auth_token(token: str) -> str:
"""Returns the hashed authentication token.
The token is hashed using HMAC and then base64-encoded.
"""
hmac_key = current_app.config['AUTH_TOKEN_HMAC_KEY']
token_hmac = hmac.new(hmac_key, msg=token.encode('utf8'), digestmod=hashlib.sha256)
digest = token_hmac.digest()
return base64.b64encode(digest).decode('ascii')
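
Because HMAC-SHA256 is deterministic for a fixed key, find_token() above can match a stored token_hashed value simply by recomputing it. A standalone illustration (not part of the diff) with a placeholder key:

# Standalone illustration; the key is a placeholder, not Pillar's real config value.
import base64
import hashlib
import hmac

hmac_key = b'placeholder-hmac-key'
digest = hmac.new(hmac_key, msg=b'some-auth-token', digestmod=hashlib.sha256).digest()
token_hashed = base64.b64encode(digest).decode('ascii')
# Recomputing with the same key and token yields the same string, so it can be
# used directly in an equality lookup on the 'tokens' collection.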
def store_token(user_id,
token: str,
token_expiry,
oauth_subclient_id=False,
*,
org_roles: typing.Set[str] = frozenset(),
oauth_scopes: typing.Optional[typing.List[str]] = None,
):
"""Stores an authentication token.
:returns: the token document from MongoDB
"""
assert isinstance(token, str), 'token must be string type, not %r' % type(token)
token_data = {
'user': user_id,
'token': token,
'expire_time': token_expiry,
}
if oauth_subclient_id:
token_data['is_subclient_token'] = True
if org_roles:
token_data['org_roles'] = sorted(org_roles)
if oauth_scopes:
token_data['oauth_scopes'] = oauth_scopes
r, _, _, status = current_app.post_internal('tokens', token_data)
if status not in {200, 201}:
log.error('Unable to store authentication token: %s', r)
raise RuntimeError('Unable to store authentication token.')
token_data.update(r)
return token_data
def create_new_user(email, username, user_id):
"""Creates a new user in our local database.
@param email: the user's email
@param username: the username, which is also used as full name.
@param user_id: the user ID from the Blender ID server.
@returns: the user ID from our local database.
"""
user_data = create_new_user_document(email, user_id, username)
r = current_app.post_internal('users', user_data)
user_id = r[0]['_id']
return user_id
def create_new_user_document(email, user_id, username, provider='blender-id',
token='', *, full_name=''):
"""Creates a new user document, without storing it in MongoDB. The token
parameter is a password in case provider is "local".
"""
user_data = {
'full_name': full_name or username,
'username': username,
'email': email,
'auth': [{
'provider': provider,
'user_id': str(user_id),
'token': token}],
'settings': {
'email_communications': 1
},
'groups': [],
}
return user_data
def make_unique_username(email):
"""Creates a unique username from the email address.
@param email: the email address
@returns: the new username
@rtype: str
"""
username = email.split('@')[0]
# Check for min length of username (otherwise validation fails)
username = "___{0}".format(username) if len(username) < 3 else username
users = current_app.data.driver.db['users']
user_from_username = users.find_one({'username': username})
if not user_from_username:
return username
# Username exists, make it unique by adding some number after it.
suffix = 1
while True:
unique_name = '%s%i' % (username, suffix)
user_from_username = users.find_one({'username': unique_name})
if user_from_username is None:
return unique_name
suffix += 1
def _delete_expired_tokens():
"""Deletes tokens that have expired.
For debugging, we keep expired tokens around for a few days, so that we
can determine that a token was expired rather than not created in the
first place. It also grants some leeway in clock synchronisation.
"""
token_coll = current_app.data.driver.db['tokens']
expiry_date = utcnow() - datetime.timedelta(days=7)
result = token_coll.delete_many({'expire_time': {"$lt": expiry_date}})
# log.debug('Deleted %i expired authentication tokens', result.deleted_count)
def current_user_id() -> typing.Optional[bson.ObjectId]:
"""None-safe fetching of user ID. Can return None itself, though."""
user = current_user()
return user.user_id
def current_user():
"""Returns the current user, or an AnonymousUser if not logged in.
:rtype: pillar.auth.UserClass
"""
import pillar.auth
user: pillar.auth.UserClass = g.get('current_user')
if user is None:
return pillar.auth.AnonymousUser()
return user
def setup_app(app):
@app.before_request
def validate_token_at_each_request():
# Skip token validation if this is a static asset
# to avoid spamming Blender ID for no good reason
if request.path.startswith('/static/'):
return
validate_token()
def upsert_user(db_user):
"""Inserts/updates the user in MongoDB.
Retries a few times when there are uniqueness issues in the username.
:returns: the user's database ID and the status of the PUT/POST.
The status is 201 on insert, and 200 on update.
:type: (ObjectId, int)
"""
if 'subscriber' in db_user.get('groups', []):
log.error('Non-ObjectID string found in user.groups: %s', db_user)
raise wz_exceptions.InternalServerError(
'Non-ObjectID string found in user.groups: %s' % db_user)
if not db_user['full_name']:
# Blender ID doesn't need a full name, but we do.
db_user['full_name'] = db_user['username']
r = {}
for retry in range(5):
if '_id' in db_user:
# Update the existing user
attempted_eve_method = 'PUT'
db_id = db_user['_id']
r, _, _, status = current_app.put_internal('users', remove_private_keys(db_user),
_id=db_id)
if status == 422:
log.error('Status %i trying to PUT user %s with values %s, should not happen! %s',
status, db_id, remove_private_keys(db_user), r)
else:
# Create a new user, retry for non-unique usernames.
attempted_eve_method = 'POST'
r, _, _, status = current_app.post_internal('users', db_user)
if status not in {200, 201}:
log.error('Status %i trying to create user with values %s: %s',
status, db_user, r)
raise wz_exceptions.InternalServerError()
db_id = r['_id']
db_user.update(r) # update with database/eve-generated fields.
if status == 422:
# Probably non-unique username, so retry a few times with different usernames.
log.info('Error creating new user: %s', r)
username_issue = r.get('_issues', {}).get('username', '')
if 'not unique' in username_issue:
# Retry
db_user['username'] = make_unique_username(db_user['email'])
continue
# Saving was successful, or at least didn't break on a non-unique username.
break
else:
log.error('Unable to create new user %s: %s', db_user, r)
raise wz_exceptions.InternalServerError()
if status not in (200, 201):
log.error('internal response from %s to Eve: %r %r', attempted_eve_method, status, r)
raise wz_exceptions.InternalServerError()
return db_id, status


@ -1,5 +1,6 @@
import logging
import functools
import typing
from bson import ObjectId
from flask import g
@ -7,13 +8,14 @@ from flask import abort
from flask import current_app
from werkzeug.exceptions import Forbidden
CHECK_PERMISSIONS_IMPLEMENTED_FOR = {'projects', 'nodes'}
CHECK_PERMISSIONS_IMPLEMENTED_FOR = {'projects', 'nodes', 'flamenco_jobs'}
log = logging.getLogger(__name__)
def check_permissions(collection_name, resource, method, append_allowed_methods=False,
check_node_type=None):
def check_permissions(collection_name: str, resource: dict, method: str,
append_allowed_methods=False,
check_node_type: typing.Optional[str] = None):
"""Check user permissions to access a node. We look up node permissions from
world to groups to users and match them with the computed user permissions.
    If there is no match, we raise 403.
@ -27,6 +29,12 @@ def check_permissions(collection_name, resource, method, append_allowed_methods=
:param check_node_type: node type to check. Only valid when collection_name='projects'.
:type check_node_type: str
"""
from pillar.auth import get_current_user
from .authentication import CLI_USER
if get_current_user() is CLI_USER:
log.debug('Short-circuiting check_permissions() for CLI user')
return
if not has_permissions(collection_name, resource, method, append_allowed_methods,
check_node_type):
@ -45,6 +53,8 @@ def compute_allowed_methods(collection_name, resource, check_node_type=None):
:rtype: set
"""
import pillar.auth
# Check some input values.
if collection_name not in CHECK_PERMISSIONS_IMPLEMENTED_FOR:
raise ValueError('compute_allowed_methods only implemented for %s, not for %s',
@ -62,15 +72,18 @@ def compute_allowed_methods(collection_name, resource, check_node_type=None):
# Accumulate allowed methods from the user, group and world level.
allowed_methods = set()
current_user = g.current_user
if current_user:
user = pillar.auth.get_current_user()
if user.is_authenticated:
user_is_admin = is_admin(user)
# If the user is authenticated, proceed to compare the group permissions
for permission in computed_permissions.get('groups', ()):
if permission['group'] in current_user['groups']:
if user_is_admin or permission['group'] in user.group_ids:
allowed_methods.update(permission['methods'])
for permission in computed_permissions.get('users', ()):
if current_user['user_id'] == permission['user']:
if user_is_admin or user.user_id == permission['user']:
allowed_methods.update(permission['methods'])
# Check if the node is public or private. This must be set for non logged
@ -82,8 +95,9 @@ def compute_allowed_methods(collection_name, resource, check_node_type=None):
return allowed_methods
def has_permissions(collection_name, resource, method, append_allowed_methods=False,
check_node_type=None):
def has_permissions(collection_name: str, resource: dict, method: str,
append_allowed_methods=False,
check_node_type: typing.Optional[str] = None):
"""Check user permissions to access a node. We look up node permissions from
world to groups to users and match them with the computed user permissions.
@ -132,6 +146,14 @@ def compute_aggr_permissions(collection_name, resource, check_node_type=None):
if check_node_type is None:
return project['permissions']
node_type_name = check_node_type
elif 'node_type' not in resource:
# Neither a project, nor a node, therefore is another collection
projects_collection = current_app.data.driver.db['projects']
project = projects_collection.find_one(
ObjectId(resource['project']),
{'permissions': 1})
return project['permissions']
else:
# Not a project, so it's a node.
assert 'project' in resource
@ -155,7 +177,7 @@ def compute_aggr_permissions(collection_name, resource, check_node_type=None):
project_permissions = project['permissions']
# Find the node type from the project.
node_type = next((node_type for node_type in project['node_types']
node_type = next((node_type for node_type in project.get('node_types', ())
if node_type['name'] == node_type_name), None)
if node_type is None: # This node type is not known, so doesn't give permissions.
node_type_permissions = {}
@ -203,6 +225,8 @@ def merge_permissions(*args):
:returns: combined list of permissions.
"""
from pillar.auth import current_user
if not args:
return {}
@ -224,25 +248,35 @@ def merge_permissions(*args):
from0 = args[0].get(plural_name, [])
from1 = args[1].get(plural_name, [])
asdict0 = {permission[field_name]: permission['methods'] for permission in from0}
asdict1 = {permission[field_name]: permission['methods'] for permission in from1}
try:
asdict0 = {permission[field_name]: permission['methods'] for permission in from0}
except KeyError:
log.exception('KeyError creating asdict0 for %r permissions; user=%s; args[0]=%r',
field_name, current_user.user_id, args[0])
asdict0 = {}
try:
asdict1 = {permission[field_name]: permission['methods'] for permission in from1}
except KeyError:
log.exception('KeyError creating asdict1 for %r permissions; user=%s; args[1]=%r',
field_name, current_user.user_id, args[1])
asdict1 = {}
keys = set(asdict0.keys() + asdict1.keys())
keys = set(asdict0.keys()).union(set(asdict1.keys()))
for key in maybe_sorted(keys):
methods0 = asdict0.get(key, [])
methods1 = asdict1.get(key, [])
methods = maybe_sorted(set(methods0).union(set(methods1)))
effective.setdefault(plural_name, []).append({field_name: key, u'methods': methods})
effective.setdefault(plural_name, []).append({field_name: key, 'methods': methods})
merge(u'user')
merge(u'group')
merge('user')
merge('group')
# Gather permissions for world
world0 = args[0].get('world', [])
world1 = args[1].get('world', [])
world_methods = set(world0).union(set(world1))
if world_methods:
effective[u'world'] = maybe_sorted(world_methods)
effective['world'] = maybe_sorted(world_methods)
# Recurse for longer merges
if len(args) > 2:
@ -251,39 +285,84 @@ def merge_permissions(*args):
return effective
def require_login(require_roles=set(),
require_all=False):
def require_login(*, require_roles=set(),
require_cap='',
require_all=False,
redirect_to_login=False,
error_view=None):
"""Decorator that enforces users to authenticate.
Optionally only allows access to users with a certain role.
Optionally only allows access to users with a certain role and/or capability.
Either check on roles or on a capability, but never on both. There is no
require_all check for capabilities; if you need to check for multiple
capabilities at once, it's a sign that you need to add another capability
and give it to everybody that needs it.
:param require_roles: set of roles.
:param require_cap: a capability.
:param require_all:
When False (the default): if the user's roles have a
non-empty intersection with the given roles, access is granted.
When True: require the user to have all given roles before access is
granted.
:param redirect_to_login: Determines the behaviour when the user is not
logged in. When False (the default), a 403 Forbidden response is
returned; this is suitable for API calls. When True, the user is
redirected to the login page; this is suitable for user-facing web
        requests, and mimics the flask_login behaviour.
:param error_view: Callable that returns a Flask response object. This is
sent back to the client instead of the default 403 Forbidden.
"""
from flask import request, redirect, url_for, Response
if not isinstance(require_roles, set):
raise TypeError('require_roles param should be a set, but is a %r' % type(require_roles))
raise TypeError(f'require_roles param should be a set, but is {type(require_roles)!r}')
if not isinstance(require_cap, str):
raise TypeError(f'require_caps param should be a str, but is {type(require_cap)!r}')
if require_roles and require_cap:
raise ValueError('either use require_roles or require_cap, but not both')
if require_all and not require_roles:
raise ValueError('require_login(require_all=True) cannot be used with empty require_roles.')
def render_error() -> Response:
if error_view is None:
resp = Forbidden().get_response()
else:
resp = error_view()
resp.status_code = 403
return resp
def decorator(func):
@functools.wraps(func)
def wrapper(*args, **kwargs):
if not user_matches_roles(require_roles, require_all):
if g.current_user is None:
# We don't need to log at a higher level, as this is very common.
# Many browsers first try to see whether authentication is needed
# at all, before sending the password.
log.debug('Unauthenticated acces to %s attempted.', func)
else:
log.warning('User %s is authenticated, but does not have required roles %s to '
'access %s', g.current_user['user_id'], require_roles, func)
abort(403)
import pillar.auth
current_user = pillar.auth.get_current_user()
if current_user.is_anonymous:
# We don't need to log at a higher level, as this is very common.
# Many browsers first try to see whether authentication is needed
# at all, before sending the password.
log.debug('Unauthenticated access to %s attempted.', func)
if redirect_to_login:
# Redirect using a 303 See Other, since even a POST
# request should cause a GET on the login page.
return redirect(url_for('users.login', next=request.url), 303)
return render_error()
if require_roles and not current_user.matches_roles(require_roles, require_all):
log.info('User %s is authenticated, but does not have required roles %s to '
'access %s', current_user.user_id, require_roles, func)
return render_error()
if require_cap and not current_user.has_cap(require_cap):
log.info('User %s is authenticated, but does not have required capability %s to '
'access %s', current_user.user_id, require_cap, func)
return render_error()
return func(*args, **kwargs)
@ -326,14 +405,36 @@ def ab_testing(require_roles=set(),
def user_has_role(role, user=None):
"""Returns True iff the user is logged in and has the given role."""
if user is None:
user = g.get('current_user')
import pillar.auth
if user is None:
user = pillar.auth.get_current_user()
if user is not None and not isinstance(user, pillar.auth.UserClass):
raise TypeError(f'pillar.auth.current_user should be instance of UserClass, '
f'not {type(user)}')
elif not isinstance(user, pillar.auth.UserClass):
raise TypeError(f'user should be instance of UserClass, not {type(user)}')
if user.is_anonymous:
return False
roles = user.get('roles') or ()
return role in roles
return user.has_role(role)
def user_has_cap(capability: str, user=None) -> bool:
"""Returns True iff the user is logged in and has the given capability."""
import pillar.auth
assert capability
if user is None:
user = pillar.auth.get_current_user()
if not isinstance(user, pillar.auth.UserClass):
raise TypeError(f'user should be instance of UserClass, not {type(user)}')
return user.has_cap(capability)
def user_matches_roles(require_roles=set(),
@ -348,25 +449,16 @@ def user_matches_roles(require_roles=set(),
returning True.
"""
if not isinstance(require_roles, set):
raise TypeError('require_roles param should be a set, but is a %r' % type(require_roles))
import pillar.auth
if require_all and not require_roles:
raise ValueError('require_login(require_all=True) cannot be used with empty require_roles.')
user = pillar.auth.get_current_user()
if not isinstance(user, pillar.auth.UserClass):
raise TypeError(f'user should be instance of UserClass, not {type(user)}')
current_user = g.get('current_user')
if current_user is None:
return False
intersection = require_roles.intersection(current_user['roles'])
if require_all:
return len(intersection) == len(require_roles)
return not bool(require_roles) or bool(intersection)
return user.matches_roles(require_roles, require_all)
def is_admin(user):
"""Returns True iff the given user has the admin role."""
"""Returns True iff the given user has the admin capability."""
return user_has_role(u'admin', user)
return user_has_cap('admin', user)
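
For illustration (not part of the diff), the reworked decorator might be used like this; the capability and role names are only examples:

# Hypothetical view functions; capability/role names are illustrative.
from pillar.api.utils.authorization import require_login

@require_login(require_cap='admin')
def admin_only_api_view():
    ...

@require_login(require_roles={'subscriber', 'demo'}, redirect_to_login=True)
def subscriber_web_page():
    ...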


@ -1,5 +1,7 @@
import datetime
from hashlib import md5
import base64
from flask import current_app
@ -17,19 +19,20 @@ def hash_file_path(file_path, expiry_timestamp=None):
if current_app.config['CDN_USE_URL_SIGNING']:
url_signing_key = current_app.config['CDN_URL_SIGNING_KEY']
hash_string = domain_subfolder + file_path + url_signing_key
to_hash = domain_subfolder + file_path + url_signing_key
if not expiry_timestamp:
expiry_timestamp = datetime.datetime.now() + datetime.timedelta(hours=24)
expiry_timestamp = expiry_timestamp.strftime('%s')
hash_string = expiry_timestamp + hash_string
to_hash = expiry_timestamp + to_hash
if isinstance(to_hash, str):
to_hash = to_hash.encode()
expiry_timestamp = "," + str(expiry_timestamp)
hashed_file_path = md5(hash_string).digest().encode('base64')[:-1]
hashed_file_path = hashed_file_path.replace('+', '-')
hashed_file_path = hashed_file_path.replace('/', '_')
hashed_file_path = base64.b64encode(md5(to_hash).digest())[:-1].decode()
hashed_file_path = hashed_file_path.replace('+', '-').replace('/', '_')
asset_url = asset_url + \
'?secure=' + \


@ -3,8 +3,6 @@ import os
from flask import current_app
from application import encoding_service_client
log = logging.getLogger(__name__)
@ -18,7 +16,7 @@ class Encoder:
"""Create an encoding job. Return the backend used as well as an id.
"""
if current_app.config['ENCODING_BACKEND'] != 'zencoder' or \
encoding_service_client is None:
current_app.encoding_service_client is None:
log.error('I can only work with Zencoder, check the config file.')
return None
@ -33,11 +31,14 @@ class Encoder:
options = dict(notifications=current_app.config['ZENCODER_NOTIFICATIONS_URL'])
outputs = [{'format': v['format'],
'url': os.path.join(storage_base, v['file_path'])}
'url': os.path.join(storage_base, v['file_path']),
'upscale': False,
'size': '{width}x{height}'.format(**v),
}
for v in src_file['variations']]
r = encoding_service_client.job.create(file_input,
outputs=outputs,
options=options)
r = current_app.encoding_service_client.job.create(file_input,
outputs=outputs,
options=options)
if r.code != 201:
log.error('Error %i creating Zencoder job: %s', r.code, r.body)
return None
@ -47,8 +48,10 @@ class Encoder:
@staticmethod
def job_progress(job_id):
if isinstance(encoding_service_client, Zencoder):
r = encoding_service_client.job.progress(int(job_id))
from zencoder import Zencoder
if isinstance(current_app.encoding_service_client, Zencoder):
r = current_app.encoding_service_client.job.progress(int(job_id))
return r.body
else:
return None


@ -1,47 +1,61 @@
import os
import json
import typing
import os
import pathlib
import subprocess
from PIL import Image
from flask import current_app
# Images with these modes will be thumbed to PNG, others to JPEG.
MODES_FOR_PNG = {'RGBA', 'LA'}
def generate_local_thumbnails(name_base, src):
def generate_local_thumbnails(fp_base: str, src: pathlib.Path):
"""Given a source image, use Pillow to generate thumbnails according to the
application settings.
:param name_base: the thumbnail will get a field 'name': '{basename}-{thumbsize}.jpg'
:type name_base: str
:param fp_base: the thumbnail will get a field
'file_path': '{fp_base}-{thumbsize}.{ext}'
:param src: the path of the image to be thumbnailed
:type src: str
"""
thumbnail_settings = current_app.config['UPLOADS_LOCAL_STORAGE_THUMBNAILS']
thumbnails = []
save_to_base, _ = os.path.splitext(src)
name_base, _ = os.path.splitext(name_base)
for size, settings in thumbnail_settings.items():
im = Image.open(src)
extra_args = {}
for size, settings in thumbnail_settings.iteritems():
dst = '{0}-{1}{2}'.format(save_to_base, size, '.jpg')
name = '{0}-{1}{2}'.format(name_base, size, '.jpg')
# If the source image has transparency, save as PNG
if im.mode in MODES_FOR_PNG:
suffix = '.png'
imformat = 'PNG'
else:
suffix = '.jpg'
imformat = 'JPEG'
extra_args = {'quality': 95}
dst = src.with_name(f'{src.stem}-{size}{suffix}')
if settings['crop']:
resize_and_crop(src, dst, settings['size'])
width, height = settings['size']
im = resize_and_crop(im, settings['size'])
else:
im = Image.open(src).convert('RGB')
im.thumbnail(settings['size'])
im.save(dst, "JPEG")
width, height = im.size
im.thumbnail(settings['size'], resample=Image.LANCZOS)
width, height = im.size
if imformat == 'JPEG':
im = im.convert('RGB')
im.save(dst, format=imformat, optimize=True, **extra_args)
thumb_info = {'size': size,
'file_path': name,
'local_path': dst,
'length': os.stat(dst).st_size,
'file_path': f'{fp_base}-{size}{suffix}',
'local_path': str(dst),
'length': dst.stat().st_size,
'width': width,
'height': height,
'md5': '',
'content_type': 'image/jpeg'}
'content_type': f'image/{imformat.lower()}'}
if size == 't':
thumb_info['is_public'] = True
@ -51,63 +65,40 @@ def generate_local_thumbnails(name_base, src):
return thumbnails
def resize_and_crop(img_path, modified_path, size, crop_type='middle'):
"""
Resize and crop an image to fit the specified size. Thanks to:
https://gist.github.com/sigilioso/2957026
def resize_and_crop(img: Image, size: typing.Tuple[int, int]) -> Image:
"""Resize and crop an image to fit the specified size.
args:
img_path: path for the image to resize.
modified_path: path to store the modified image.
size: `(width, height)` tuple.
crop_type: can be 'top', 'middle' or 'bottom', depending on this
value, the image will cropped getting the 'top/left', 'middle' or
'bottom/right' of the image to fit the size.
raises:
Exception: if can not open the file in img_path of there is problems
to save the image.
ValueError: if an invalid `crop_type` is provided.
Thanks to: https://gist.github.com/sigilioso/2957026
:param img: opened PIL.Image to work on
:param size: `(width, height)` tuple.
"""
# If height is higher we resize vertically, if not we resize horizontally
img = Image.open(img_path).convert('RGB')
# Get current and desired ratio for the images
img_ratio = img.size[0] / float(img.size[1])
ratio = size[0] / float(size[1])
cur_w, cur_h = img.size # current
img_ratio = cur_w / cur_h
w, h = size # desired
ratio = w / h
# The image is scaled/cropped vertically or horizontally depending on the ratio
if ratio > img_ratio:
img = img.resize((size[0], int(round(size[0] * img.size[1] / img.size[0]))),
Image.ANTIALIAS)
# Crop in the top, middle or bottom
if crop_type == 'top':
box = (0, 0, img.size[0], size[1])
elif crop_type == 'middle':
box = (0, int(round((img.size[1] - size[1]) / 2)), img.size[0],
int(round((img.size[1] + size[1]) / 2)))
elif crop_type == 'bottom':
box = (0, img.size[1] - size[1], img.size[0], img.size[1])
else:
raise ValueError('ERROR: invalid value for crop_type')
uncropped_h = (w * cur_h) // cur_w
img = img.resize((w, uncropped_h), Image.ANTIALIAS)
box = (0, (uncropped_h - h) // 2,
w, (uncropped_h + h) // 2)
img = img.crop(box)
elif ratio < img_ratio:
img = img.resize((int(round(size[1] * img.size[0] / img.size[1])), size[1]),
Image.ANTIALIAS)
# Crop in the top, middle or bottom
if crop_type == 'top':
box = (0, 0, size[0], img.size[1])
elif crop_type == 'middle':
box = (int(round((img.size[0] - size[0]) / 2)), 0,
int(round((img.size[0] + size[0]) / 2)), img.size[1])
elif crop_type == 'bottom':
box = (img.size[0] - size[0], 0, img.size[0], img.size[1])
else:
raise ValueError('ERROR: invalid value for crop_type')
uncropped_w = (h * cur_w) // cur_h
img = img.resize((uncropped_w, h), Image.ANTIALIAS)
box = ((uncropped_w - w) // 2, 0,
(uncropped_w + w) // 2, h)
img = img.crop(box)
else:
img = img.resize((size[0], size[1]),
Image.ANTIALIAS)
img = img.resize((w, h), Image.ANTIALIAS)
# If the scale is the same, we do not need to crop
img.save(modified_path, "JPEG")
return img
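
For illustration (not part of the diff), producing a 64x64 centre-cropped JPEG with the rewritten function; file names are placeholders:

# Example usage; file names are placeholders.
from PIL import Image

im = Image.open('photo.jpg')
thumb = resize_and_crop(im, (64, 64))
thumb.convert('RGB').save('photo-64.jpg', format='JPEG', quality=95)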
def get_video_data(filepath):
@ -143,7 +134,7 @@ def get_video_data(filepath):
res_y=video_stream['height'],
)
if video_stream['sample_aspect_ratio'] != '1:1':
print '[warning] Pixel aspect ratio is not square!'
print('[warning] Pixel aspect ratio is not square!')
return outdata
@ -190,14 +181,14 @@ def ffmpeg_encode(src, format, res_y=720):
dst = os.path.splitext(src)
dst = "{0}-{1}p.{2}".format(dst[0], res_y, format)
args.append(dst)
print "Encoding {0} to {1}".format(src, format)
print("Encoding {0} to {1}".format(src, format))
returncode = subprocess.call([current_app.config['BIN_FFMPEG']] + args)
if returncode == 0:
print "Successfully encoded {0}".format(dst)
print("Successfully encoded {0}".format(dst))
else:
print "Error during encode"
print "Code: {0}".format(returncode)
print "Command: {0}".format(current_app.config['BIN_FFMPEG'] + " " + " ".join(args))
print("Error during encode")
print("Code: {0}".format(returncode))
print("Command: {0}".format(current_app.config['BIN_FFMPEG'] + " " + " ".join(args)))
dst = None
# return path of the encoded video
return dst


@ -0,0 +1,86 @@
import copy
import logging
import types
log = logging.getLogger(__name__)
def assign_permissions(project, node_types, permission_callback):
"""Generator, yields the node types with certain permissions set.
The permission_callback is called for each node type, and each user
and group permission in the project, and should return the appropriate
extra permissions for that node type.
Yields copies of the given node types with new permissions.
    permission_callback(node_type, ugw, ident, proj_methods) is called, where
- 'node_type' is the node type dict
- 'ugw' is either 'user', 'group', or 'world',
- 'ident' is the group or user ID, or None when ugw is 'world',
- 'proj_methods' is the list of already-allowed project methods.
"""
proj_perms = project['permissions']
for nt in node_types:
permissions = {}
for key in ('users', 'groups'):
perms = proj_perms.get(key)
if not perms:
continue
singular = key.rstrip('s')
for perm in perms:
assert isinstance(perm, dict), 'perm should be dict, but is %r' % perm
ident = perm[singular] # group or user ID.
methods_to_allow = permission_callback(nt, singular, ident, perm['methods'])
if not methods_to_allow:
continue
permissions.setdefault(key, []).append(
{singular: ident,
'methods': methods_to_allow}
)
# World permissions are simpler.
world_methods_to_allow = permission_callback(nt, 'world', None,
permissions.get('world', []))
if world_methods_to_allow:
permissions.setdefault('world', []).extend(world_methods_to_allow)
node_type = copy.deepcopy(nt)
if permissions:
node_type['permissions'] = permissions
yield node_type
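
For illustration (not part of the diff), a hypothetical permission_callback that grants GET on every node type to any group that may already GET the project:

# Hypothetical callback and data; real values would come from MongoDB.
from bson import ObjectId

project = {'permissions': {
    'groups': [{'group': ObjectId('5f0c1a2b3c4d5e6f7a8b9c0d'), 'methods': ['GET', 'PUT']}],
}}
node_types = [{'name': 'asset'}, {'name': 'comment'}]

def grant_get(node_type, ugw, ident, proj_methods):
    # Allow GET on this node type wherever the project already allows GET.
    return ['GET'] if 'GET' in proj_methods else []

node_types_with_perms = list(assign_permissions(project, node_types, grant_get))
# Each yielded node type now carries {'groups': [{'group': ..., 'methods': ['GET']}]}.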
def add_to_project(project, node_types, replace_existing):
"""Adds the given node types to the project.
    Overwrites any existing node type with the same name when replace_existing=True.
"""
assert isinstance(project, dict)
assert isinstance(node_types, (list, set, frozenset, tuple, types.GeneratorType)), \
'node_types is of wrong type %s' % type(node_types)
project_id = project['_id']
for node_type in node_types:
found = [nt for nt in project['node_types']
if nt['name'] == node_type['name']]
if found:
assert len(found) == 1, 'node type name should be unique (found %ix)' % len(found)
# TODO: validate that the node type contains all the properties Attract needs.
if replace_existing:
log.info('Replacing existing node type %s on project %s',
node_type['name'], project_id)
project['node_types'].remove(found[0])
else:
continue
project['node_types'].append(node_type)
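# A minimal usage sketch (hypothetical project and node-type documents): grant
# POST on these node types to every user/group that may already POST to the project.
def _allow_post(node_type, ugw, ident, proj_methods):
    return ['POST'] if 'POST' in proj_methods else []

if __name__ == '__main__':
    project = {'_id': 'fake-project-id', 'node_types': [],
               'permissions': {'users': [], 'groups': [], 'world': []}}
    comment_type = {'name': 'comment'}
    add_to_project(project, assign_permissions(project, [comment_type], _allow_post),
                   replace_existing=True)
    print(project['node_types'])  # -> [{'name': 'comment'}]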

View File

@@ -0,0 +1,87 @@
# These functions come from Reddit
# https://github.com/reddit/reddit/blob/master/r2/r2/lib/db/_sorts.pyx
# Additional resources
# http://www.redditblog.com/2009/10/reddits-new-comment-sorting-system.html
# http://www.evanmiller.org/how-not-to-sort-by-average-rating.html
# http://amix.dk/blog/post/19588
from datetime import datetime, timezone
from math import log
from math import sqrt
epoch = datetime(1970, 1, 1, 0, 0, 0, 0, timezone.utc)
def epoch_seconds(date):
"""Returns the number of seconds from the epoch to date."""
td = date - epoch
return td.days * 86400 + td.seconds + (float(td.microseconds) / 1000000)
def score(ups, downs):
return ups - downs
def hot(ups, downs, date):
"""The hot formula. Reddit's hot ranking uses the logarithm function to
weight the first votes higher than the rest.
The first 10 upvotes have the same weight as the next 100 upvotes which
have the same weight as the next 1000, etc.
Dillo authors: we modified the formula to give more weight to negative
votes when an entry is controversial.
TODO: make this function more dynamic so that different defaults can be
specified depending on the item that is being rated.
"""
s = score(ups, downs)
order = log(max(abs(s), 1), 10)
sign = 1 if s > 0 else -1 if s < 0 else 0
seconds = epoch_seconds(date) - 1134028003
base_hot = round(sign * order + seconds / 45000, 7)
if downs > 1:
rating_delta = 100 * (downs - ups) / downs
if rating_delta < 25:
# Not controversial enough: keep the base hotness as-is.
return base_hot
# Controversial: give the negative votes extra weight.
base_hot = base_hot - (downs * 6)
return base_hot
def _confidence(ups, downs):
n = ups + downs
if n == 0:
return 0
z = 1.0 #1.0 = 85%, 1.6 = 95%
phat = float(ups) / n
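# Note: the expression below matches Reddit's original code verbatim; the
# textbook Wilson lower bound puts sqrt() around the variance term only:
#   (phat + z*z/(2*n) - z*sqrt((phat*(1-phat) + z*z/(4*n))/n)) / (1 + z*z/n)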
return sqrt(phat+z*z/(2*n)-z*((phat*(1-phat)+z*z/(4*n))/n))/(1+z*z/n)
def confidence(ups, downs):
if ups + downs == 0:
return 0
else:
return _confidence(ups, downs)
def update_hot(document):
"""Update the hotness of a document given its current ratings.
We expect the document to implement the ratings_embedded_schema in
a 'ratings' property.
"""
dt = document['_created']
dt = dt.replace(tzinfo=timezone.utc)
document['properties']['ratings']['hot'] = hot(
document['properties']['ratings']['positive'],
document['properties']['ratings']['negative'],
dt,
)
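# A quick sanity check of the helpers above (hypothetical vote counts):
if __name__ == '__main__':
    posted = datetime(2016, 1, 1, tzinfo=timezone.utc)
    print(hot(10, 2, posted))   # few downvotes: rating_delta < 25, no penalty
    print(hot(10, 40, posted))  # controversial: penalised by downs * 6
    print(confidence(10, 2))    # lower-bound score at z=1.0 (~85%)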

View File

@@ -0,0 +1 @@
"""Utility for managing storage backends and files."""

View File

@@ -1,268 +0,0 @@
import logging.config
import os
import subprocess
import tempfile
from bson import ObjectId
from datetime import datetime
from flask import g
from flask import request
from flask import abort
from eve import Eve
from eve.auth import TokenAuth
from eve.io.mongo import Validator
from application.utils import project_get_node_type
RFC1123_DATE_FORMAT = '%a, %d %b %Y %H:%M:%S GMT'
class ValidateCustomFields(Validator):
def convert_properties(self, properties, node_schema):
for prop in node_schema:
if prop not in properties:
continue
schema_prop = node_schema[prop]
prop_type = schema_prop['type']
if prop_type == 'dict':
properties[prop] = self.convert_properties(
properties[prop], schema_prop['schema'])
if prop_type == 'list':
if properties[prop] in ['', '[]']:
properties[prop] = []
for k, val in enumerate(properties[prop]):
if 'schema' not in schema_prop:
continue
item_schema = {'item': schema_prop['schema']}
item_prop = {'item': properties[prop][k]}
properties[prop][k] = self.convert_properties(
item_prop, item_schema)['item']
# Convert datetime string to RFC1123 datetime
elif prop_type == 'datetime':
prop_val = properties[prop]
properties[prop] = datetime.strptime(prop_val, RFC1123_DATE_FORMAT)
elif prop_type == 'objectid':
prop_val = properties[prop]
if prop_val:
properties[prop] = ObjectId(prop_val)
else:
properties[prop] = None
return properties
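# The conversions above, in isolation (hypothetical values; the 24-hex-char
# ObjectId shown is illustrative):
#   >>> datetime.strptime('Mon, 01 Feb 2016 12:00:00 GMT', RFC1123_DATE_FORMAT)
#   datetime.datetime(2016, 2, 1, 12, 0)
#   >>> ObjectId('56bbfd42c379cf0007b31bc4')
#   ObjectId('56bbfd42c379cf0007b31bc4')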
def _validate_valid_properties(self, valid_properties, field, value):
projects_collection = app.data.driver.db['projects']
lookup = {'_id': ObjectId(self.document['project'])}
project = projects_collection.find_one(lookup, {
'node_types.name': 1,
'node_types.dyn_schema': 1,
})
if project is None:
log.warning('Unknown project %s, declared by node %s',
lookup, self.document.get('_id'))
self._error(field, 'Unknown project')
return False
node_type_name = self.document['node_type']
node_type = project_get_node_type(project, node_type_name)
if node_type is None:
log.warning('Project %s has no node type %s, declared by node %s',
project, node_type_name, self.document.get('_id'))
self._error(field, 'Unknown node type')
return False
try:
value = self.convert_properties(value, node_type['dyn_schema'])
except Exception as e:
log.warning("Error converting form properties", exc_info=True)
v = Validator(node_type['dyn_schema'])
val = v.validate(value)
if val:
return True
log.warning('Error validating properties for node %s: %s', self.document, v.errors)
self._error(field, "Error validating properties")
# We specify a settings.py file because when running on wsgi we can't detect it
# automatically. The default path (which works in Docker) can be overridden with
# an env variable.
settings_path = os.environ.get(
'EVE_SETTINGS', '/data/git/pillar/pillar/settings.py')
app = Eve(settings=settings_path, validator=ValidateCustomFields)
# Load configuration from three different sources, to make it easy to override
# settings with secrets, as well as for development & testing.
app_root = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
app.config.from_pyfile(os.path.join(app_root, 'config.py'), silent=False)
app.config.from_pyfile(os.path.join(app_root, 'config_local.py'), silent=True)
from_envvar = os.environ.get('PILLAR_CONFIG')
if from_envvar:
# Don't use from_envvar, as we want different behaviour. If the envvar
# is not set, it's fine (i.e. silent=True), but if it is set and the
# configfile doesn't exist, it should error out (i.e. silent=False).
app.config.from_pyfile(from_envvar, silent=False)
# Set the TMP environment variable to manage where uploads are stored.
# These are all used by tempfile.mkstemp(), but we don't know in which
# order. As such, we remove all used variables but the one we set.
tempfile.tempdir = app.config['STORAGE_DIR']
os.environ['TMP'] = app.config['STORAGE_DIR']
os.environ.pop('TEMP', None)
os.environ.pop('TMPDIR', None)
# Configure logging
logging.config.dictConfig(app.config['LOGGING'])
log = logging.getLogger(__name__)
if app.config['DEBUG']:
log.info('Pillar starting, debug=%s', app.config['DEBUG'])
# Get the Git hash
try:
git_cmd = ['git', '-C', app_root, 'describe', '--always']
description = subprocess.check_output(git_cmd)
app.config['GIT_REVISION'] = description.strip()
except (subprocess.CalledProcessError, OSError) as ex:
log.warning('Unable to run "git describe" to get git revision: %s', ex)
app.config['GIT_REVISION'] = 'unknown'
log.info('Git revision %r', app.config['GIT_REVISION'])
# Configure Bugsnag
if not app.config.get('TESTING') and app.config.get('BUGSNAG_API_KEY'):
import bugsnag
import bugsnag.flask
import bugsnag.handlers
bugsnag.configure(
api_key=app.config['BUGSNAG_API_KEY'],
project_root="/data/git/pillar/pillar",
)
bugsnag.flask.handle_exceptions(app)
bs_handler = bugsnag.handlers.BugsnagHandler()
bs_handler.setLevel(logging.ERROR)
log.addHandler(bs_handler)
else:
log.info('Bugsnag NOT configured.')
# Google Cloud project
try:
os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = \
app.config['GCLOUD_APP_CREDENTIALS']
except KeyError:
raise SystemExit('GCLOUD_APP_CREDENTIALS configuration is missing')
# Storage backend (GCS)
try:
os.environ['GCLOUD_PROJECT'] = app.config['GCLOUD_PROJECT']
except KeyError:
raise SystemExit('GCLOUD_PROJECT configuration value is missing')
# Algolia search
if app.config['SEARCH_BACKEND'] == 'algolia':
from algoliasearch import algoliasearch
client = algoliasearch.Client(
app.config['ALGOLIA_USER'],
app.config['ALGOLIA_API_KEY'])
algolia_index_users = client.init_index(app.config['ALGOLIA_INDEX_USERS'])
algolia_index_nodes = client.init_index(app.config['ALGOLIA_INDEX_NODES'])
else:
algolia_index_users = None
algolia_index_nodes = None
# Encoding backend
if app.config['ENCODING_BACKEND'] == 'zencoder':
from zencoder import Zencoder
encoding_service_client = Zencoder(app.config['ZENCODER_API_KEY'])
else:
encoding_service_client = None
from utils.authentication import validate_token
from utils.authorization import check_permissions
from utils.activities import notification_parse
from modules.projects import before_inserting_projects
from modules.projects import after_inserting_projects
@app.before_request
def validate_token_at_every_request():
validate_token()
def before_returning_item_notifications(response):
if request.args.get('parse'):
notification_parse(response)
def before_returning_resource_notifications(response):
for item in response['_items']:
if request.args.get('parse'):
notification_parse(item)
app.on_fetched_item_notifications += before_returning_item_notifications
app.on_fetched_resource_notifications += before_returning_resource_notifications
@app.before_first_request
def setup_db_indices():
"""Adds missing database indices.
This does NOT drop and recreate existing indices,
nor does it reconfigure existing indices.
If you want that, drop them manually first.
"""
log.debug('Adding missing database indices.')
import pymongo
db = app.data.driver.db
coll = db['tokens']
coll.create_index([('user', pymongo.ASCENDING)])
coll.create_index([('token', pymongo.ASCENDING)])
coll = db['notifications']
coll.create_index([('user', pymongo.ASCENDING)])
coll = db['activities-subscriptions']
coll.create_index([('context_object', pymongo.ASCENDING)])
coll = db['nodes']
# This index is used for queries on project, and for queries on
# the combination (project, node type).
coll.create_index([('project', pymongo.ASCENDING),
('node_type', pymongo.ASCENDING)])
coll.create_index([('parent', pymongo.ASCENDING)])
coll.create_index([('short_code', pymongo.ASCENDING)],
sparse=True, unique=True)
# The encoding module (receive notification and report progress)
from modules.encoding import encoding
from modules.blender_id import blender_id
from modules import projects
from modules import local_auth
from modules import file_storage
from modules import users
from modules import nodes
from modules import latest
from modules import blender_cloud
from modules import service
app.register_blueprint(encoding, url_prefix='/encoding')
app.register_blueprint(blender_id, url_prefix='/blender_id')
projects.setup_app(app, url_prefix='/p')
local_auth.setup_app(app, url_prefix='/auth')
file_storage.setup_app(app, url_prefix='/storage')
latest.setup_app(app, url_prefix='/latest')
blender_cloud.setup_app(app, url_prefix='/bcloud')
users.setup_app(app, url_prefix='/users')
service.setup_app(app, url_prefix='/service')
nodes.setup_app(app, url_prefix='/nodes')

View File

@@ -1,240 +0,0 @@
"""Blender ID subclient endpoint.
Also contains functionality for other parts of Pillar to perform communication
with Blender ID.
"""
import logging
import datetime
from bson import tz_util
import requests
from requests.adapters import HTTPAdapter
from flask import Blueprint, request, current_app, abort, jsonify
from eve.methods.post import post_internal
from eve.methods.put import put_internal
from werkzeug import exceptions as wz_exceptions
from application.utils import authentication, remove_private_keys
blender_id = Blueprint('blender_id', __name__)
log = logging.getLogger(__name__)
@blender_id.route('/store_scst', methods=['POST'])
def store_subclient_token():
"""Verifies & stores a user's subclient-specific token."""
user_id = request.form['user_id'] # User ID at BlenderID
subclient_id = request.form['subclient_id']
scst = request.form['token']
db_user, status = validate_create_user(user_id, scst, subclient_id)
if db_user is None:
log.warning('Unable to verify subclient token with Blender ID.')
return jsonify({'status': 'fail',
'error': 'BLENDER ID ERROR'}), 403
return jsonify({'status': 'success',
'subclient_user_id': str(db_user['_id'])}), status
def blender_id_endpoint():
"""Gets the endpoint for the authentication API. If the env variable
is defined, it's possible to override the (default) production address.
"""
return current_app.config['BLENDER_ID_ENDPOINT'].rstrip('/')
def validate_create_user(blender_id_user_id, token, oauth_subclient_id):
"""Validates a user against Blender ID, creating the user in our database.
:param blender_id_user_id: the user ID at the BlenderID server.
:param token: the OAuth access token.
:param oauth_subclient_id: the subclient ID, or empty string if not a subclient.
:returns: (user in MongoDB, HTTP status 200 or 201)
"""
# Verify with Blender ID
log.debug('Storing token for BlenderID user %s', blender_id_user_id)
user_info, token_expiry = validate_token(blender_id_user_id, token, oauth_subclient_id)
if user_info is None:
log.debug('Unable to verify token with Blender ID.')
return None, None
# Blender ID can be queried without user ID, and will always include the
# correct user ID in its response.
log.debug('Obtained user info from Blender ID: %s', user_info)
blender_id_user_id = user_info['id']
# Store the user info in MongoDB.
db_user = find_user_in_db(blender_id_user_id, user_info)
db_id, status = upsert_user(db_user, blender_id_user_id)
# Store the token in MongoDB.
authentication.store_token(db_id, token, token_expiry, oauth_subclient_id)
return db_user, status
def upsert_user(db_user, blender_id_user_id):
"""Inserts/updates the user in MongoDB.
Retries a few times when there are uniqueness issues in the username.
:returns: the user's database ID and the status of the PUT/POST.
The status is 201 on insert, and 200 on update.
:type: (ObjectId, int)
"""
if u'subscriber' in db_user.get('groups', []):
log.error('Non-ObjectID string found in user.groups: %s', db_user)
raise wz_exceptions.InternalServerError('Non-ObjectID string found in user.groups: %s' % db_user)
r = {}
for retry in range(5):
if '_id' in db_user:
# Update the existing user
attempted_eve_method = 'PUT'
db_id = db_user['_id']
r, _, _, status = put_internal('users', remove_private_keys(db_user),
_id=db_id)
if status == 422:
log.error('Status %i trying to PUT user %s with values %s, should not happen! %s',
status, db_id, remove_private_keys(db_user), r)
else:
# Create a new user, retry for non-unique usernames.
attempted_eve_method = 'POST'
r, _, _, status = post_internal('users', db_user)
if status not in {200, 201}:
log.error('Status %i trying to create user for BlenderID %s with values %s: %s',
status, blender_id_user_id, db_user, r)
raise wz_exceptions.InternalServerError()
db_id = r['_id']
db_user.update(r) # update with database/eve-generated fields.
if status == 422:
# Probably non-unique username, so retry a few times with different usernames.
log.info('Error creating new user: %s', r)
username_issue = r.get('_issues', {}).get(u'username', '')
if u'not unique' in username_issue:
# Retry
db_user['username'] = authentication.make_unique_username(db_user['email'])
continue
# Saving was successful, or at least didn't break on a non-unique username.
break
else:
log.error('Unable to create new user %s: %s', db_user, r)
raise wz_exceptions.InternalServerError()
if status not in (200, 201):
log.error('internal response from %s to Eve: %r %r', attempted_eve_method, status, r)
raise wz_exceptions.InternalServerError()
return db_id, status
def validate_token(user_id, token, oauth_subclient_id):
"""Verifies a subclient token with Blender ID.
:returns: (user info, token expiry) on success, or (None, None) on failure.
The user information from Blender ID is returned as dict
{'email': 'a@b', 'full_name': 'AB'}, token expiry as a datetime.datetime.
:rtype: (dict, datetime.datetime)
"""
our_subclient_id = current_app.config['BLENDER_ID_SUBCLIENT_ID']
# Check that IF there is a subclient ID given, it is the correct one.
if oauth_subclient_id and our_subclient_id != oauth_subclient_id:
log.warning('validate_token(): BlenderID user %s is trying to use the wrong subclient '
'ID %r; treating as invalid login.', user_id, oauth_subclient_id)
return None, None
# Validate against BlenderID.
log.debug('Validating subclient token for BlenderID user %r, subclient %r', user_id,
oauth_subclient_id)
payload = {'user_id': user_id,
'token': token}
if oauth_subclient_id:
payload['subclient_id'] = oauth_subclient_id
url = '{0}/u/validate_token'.format(blender_id_endpoint())
log.debug('POSTing to %r', url)
# Retry a few times when POSTing to BlenderID fails.
# Source: http://stackoverflow.com/a/15431343/875379
s = requests.Session()
s.mount(blender_id_endpoint(), HTTPAdapter(max_retries=5))
# POST to Blender ID, handling errors as negative verification results.
try:
r = s.post(url, data=payload, timeout=5,
verify=current_app.config['TLS_CERT_FILE'])
except requests.exceptions.ConnectionError as e:
log.error('Connection error trying to POST to %s, handling as invalid token.', url)
return None, None
if r.status_code != 200:
log.debug('Token %s invalid, HTTP status %i returned', token, r.status_code)
return None, None
resp = r.json()
if resp['status'] != 'success':
log.warning('Failed response from %s: %s', url, resp)
return None, None
expires = _compute_token_expiry(resp['token_expires'])
return resp['user'], expires
def _compute_token_expiry(token_expires_string):
"""Computes token expiry based on current time and BlenderID expiry.
Expires our side of the token when either the BlenderID token expires,
or in one hour. The latter case is to ensure we periodically verify
the token.
"""
date_format = current_app.config['RFC1123_DATE_FORMAT']
blid_expiry = datetime.datetime.strptime(token_expires_string, date_format)
blid_expiry = blid_expiry.replace(tzinfo=tz_util.utc)
our_expiry = datetime.datetime.now(tz=tz_util.utc) + datetime.timedelta(hours=1)
return min(blid_expiry, our_expiry)
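# Illustration (hypothetical timestamps): even if Blender ID reports an expiry
# of tomorrow, we store now + 1 hour, so the token is re-verified periodically:
#   min(tomorrow, now + datetime.timedelta(hours=1)) == now + datetime.timedelta(hours=1)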
def find_user_in_db(blender_id_user_id, user_info):
"""Find the user in our database, creating/updating the returned document where needed.
Does NOT update the user in the database.
"""
users = current_app.data.driver.db['users']
query = {'auth': {'$elemMatch': {'user_id': str(blender_id_user_id),
'provider': 'blender-id'}}}
log.debug('Querying: %s', query)
db_user = users.find_one(query)
if db_user:
log.debug('User blender_id_user_id=%r already in our database, '
'updating with info from Blender ID.', blender_id_user_id)
db_user['email'] = user_info['email']
else:
log.debug('User %r not yet in our database, create a new one.', blender_id_user_id)
db_user = authentication.create_new_user_document(
email=user_info['email'],
user_id=blender_id_user_id,
username=user_info['full_name'])
db_user['username'] = authentication.make_unique_username(user_info['email'])
if not db_user['full_name']:
db_user['full_name'] = db_user['username']
return db_user

View File

@@ -1,802 +0,0 @@
import datetime
import logging
import mimetypes
import os
import tempfile
import uuid
import io
from hashlib import md5
import bson.tz_util
import eve.utils
import pymongo
from bson import ObjectId
from bson.errors import InvalidId
from eve.methods.patch import patch_internal
from eve.methods.post import post_internal
from eve.methods.put import put_internal
from flask import Blueprint
from flask import jsonify
from flask import request
from flask import send_from_directory
from flask import url_for, helpers
from flask import current_app
from flask import g
from flask import make_response
import werkzeug.exceptions as wz_exceptions
from application import utils
from application.utils import remove_private_keys, authentication
from application.utils.authorization import require_login, user_has_role, user_matches_roles
from application.utils.cdn import hash_file_path
from application.utils.encoding import Encoder
from application.utils.gcs import GoogleCloudStorageBucket
from application.utils.imaging import generate_local_thumbnails
log = logging.getLogger(__name__)
file_storage = Blueprint('file_storage', __name__,
template_folder='templates',
static_folder='../../static/storage', )
# Overrides for browser-specified mimetypes
OVERRIDE_MIMETYPES = {
# We don't want to thumbnail EXR files right now, so don't handle as image/...
'image/x-exr': 'application/x-exr',
}
# Add our own extensions to the mimetypes package
mimetypes.add_type('application/x-blender', '.blend')
mimetypes.add_type('application/x-radiance-hdr', '.hdr')
mimetypes.add_type('application/x-exr', '.exr')
@file_storage.route('/gcs/<bucket_name>/<subdir>/')
@file_storage.route('/gcs/<bucket_name>/<subdir>/<path:file_path>')
def browse_gcs(bucket_name, subdir, file_path=None):
"""Browse the content of a Google Cloud Storage bucket"""
# Initialize storage client
storage = GoogleCloudStorageBucket(bucket_name, subdir=subdir)
if file_path:
# If we provided a file_path, we try to fetch it
file_object = storage.Get(file_path)
if file_object:
# If it exists, return file properties in a dictionary
return jsonify(file_object)
else:
listing = storage.List(file_path)
return jsonify(listing)
# We always return an empty listing even if the directory does not
# exist. This can be changed later.
# return abort(404)
else:
listing = storage.List('')
return jsonify(listing)
@file_storage.route('/file', methods=['POST'])
@file_storage.route('/file/<path:file_name>', methods=['GET', 'POST'])
def index(file_name=None):
# GET file -> read it
if request.method == 'GET':
return send_from_directory(current_app.config['STORAGE_DIR'], file_name)
# POST file -> save it
# Sanitize the filename; source: http://stackoverflow.com/questions/7406102/
file_name = request.form['name']
keepcharacters = {' ', '.', '_'}
file_name = ''.join(
c for c in file_name if c.isalnum() or c in keepcharacters).strip()
file_name = file_name.lstrip('.')
# Determine & create storage directory
folder_name = file_name[:2]
file_folder_path = helpers.safe_join(current_app.config['STORAGE_DIR'], folder_name)
if not os.path.exists(file_folder_path):
log.info('Creating folder path %r', file_folder_path)
os.mkdir(file_folder_path)
# Save uploaded file
file_path = helpers.safe_join(file_folder_path, file_name)
log.info('Saving file %r', file_path)
request.files['data'].save(file_path)
# TODO: possibly nicer to just return a redirect to the file's URL.
return jsonify({'url': url_for('file_storage.index', file_name=file_name)})
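# The sanitisation above, in isolation (hypothetical filename):
#   >>> name = 'we/ird fi:le.name.jpg'
#   >>> keep = {' ', '.', '_'}
#   >>> ''.join(c for c in name if c.isalnum() or c in keep).strip().lstrip('.')
#   'weird file.name.jpg'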
def _process_image(gcs, file_id, local_file, src_file):
from PIL import Image
im = Image.open(local_file)
res = im.size
src_file['width'] = res[0]
src_file['height'] = res[1]
# Generate previews
log.info('Generating thumbnails for file %s', file_id)
src_file['variations'] = generate_local_thumbnails(src_file['name'],
local_file.name)
# Send those previews to Google Cloud Storage.
log.info('Uploading %i thumbnails for file %s to Google Cloud Storage (GCS)',
len(src_file['variations']), file_id)
# TODO: parallelize this at some point.
for variation in src_file['variations']:
fname = variation['file_path']
if current_app.config['TESTING']:
log.warning(' - NOT sending thumbnail %s to GCS', fname)
else:
log.debug(' - Sending thumbnail %s to GCS', fname)
blob = gcs.bucket.blob('_/' + fname, chunk_size=256 * 1024 * 2)
blob.upload_from_filename(variation['local_path'],
content_type=variation['content_type'])
if variation.get('size') == 't':
blob.make_public()
try:
os.unlink(variation['local_path'])
except OSError:
log.warning('Unable to unlink %s, ignoring this but it will need cleanup later.',
variation['local_path'])
del variation['local_path']
log.info('Done processing file %s', file_id)
src_file['status'] = 'complete'
def _process_video(gcs, file_id, local_file, src_file):
"""Video is processed by Zencoder; the file isn't even stored locally."""
log.info('Processing video for file %s', file_id)
# Create variations
root, _ = os.path.splitext(src_file['file_path'])
src_file['variations'] = []
# Most of these properties will be available after encode.
v = 'mp4'
file_variation = dict(
format=v,
content_type='video/{}'.format(v),
file_path='{}-{}.{}'.format(root, v, v),
size='',
duration=0,
width=0,
height=0,
length=0,
md5='',
)
# Append file variation. Originally mp4 and webm were the available options,
# that's why we build a list.
src_file['variations'].append(file_variation)
if current_app.config['TESTING']:
log.warning('_process_video: NOT sending out encoding job due to TESTING=%r',
current_app.config['TESTING'])
j = type('EncoderJob', (), {'process_id': 'fake-process-id',
'backend': 'fake'})
else:
j = Encoder.job_create(src_file)
if j is None:
log.warning('_process_video: unable to create encoder job for file %s.', file_id)
return
log.info('Created asynchronous Zencoder job %s for file %s', j['process_id'], file_id)
# Add the processing status to the file object
src_file['processing'] = {
'status': 'pending',
'job_id': str(j['process_id']),
'backend': j['backend']}
def process_file(gcs, file_id, local_file):
"""Process the file by creating thumbnails, sending to Zencoder, etc.
:param file_id: '_id' key of the file
:type file_id: ObjectId or str
:param local_file: locally stored file, or None if no local processing is needed.
:type local_file: file
"""
file_id = ObjectId(file_id)
# Fetch the src_file document from MongoDB.
files = current_app.data.driver.db['files']
src_file = files.find_one(file_id)
if not src_file:
log.warning('process_file(%s): no such file document found, ignoring.', file_id)
return
src_file = utils.remove_private_keys(src_file)
# Update the 'format' field from the content type.
# TODO: overrule the content type based on file extension & magic numbers.
mime_category, src_file['format'] = src_file['content_type'].split('/', 1)
# Prevent video handling for non-admins.
if not user_has_role(u'admin') and mime_category == 'video':
if src_file['format'].startswith('x-'):
xified = src_file['format']
else:
xified = 'x-' + src_file['format']
src_file['content_type'] = 'application/%s' % xified
mime_category = 'application'
log.info('Not processing video file %s for non-admin user', file_id)
# Run the required processor, based on the MIME category.
processors = {
'image': _process_image,
'video': _process_video,
}
try:
processor = processors[mime_category]
except KeyError:
log.info("POSTed file %s was of type %r, which isn't thumbnailed/encoded.", file_id,
mime_category)
src_file['status'] = 'complete'
else:
log.debug('process_file(%s): marking file status as "processing"', file_id)
src_file['status'] = 'processing'
update_file_doc(file_id, status='processing')
try:
processor(gcs, file_id, local_file, src_file)
except Exception:
log.warning('process_file(%s): error when processing file, resetting status to '
'"queued_for_processing"', file_id, exc_info=True)
update_file_doc(file_id, status='queued_for_processing')
return
# Update the original file with additional info, e.g. image resolution
r, _, _, status = put_internal('files', src_file, _id=file_id)
if status not in (200, 201):
log.warning('process_file(%s): status %i when saving processed file info to MongoDB: %s',
file_id, status, r)
def delete_file(file_item):
def process_file_delete(file_item):
"""Given a file item, delete the actual file from the storage backend.
This function can probably be made self-calling."""
if file_item['backend'] == 'gcs':
storage = GoogleCloudStorageBucket(str(file_item['project']))
storage.Delete(file_item['file_path'])
# Delete any file variation found in the file_item document
if 'variations' in file_item:
for v in file_item['variations']:
storage.Delete(v['file_path'])
return True
elif file_item['backend'] == 'pillar':
pass
elif file_item['backend'] == 'cdnsun':
pass
else:
pass
files_collection = current_app.data.driver.db['files']
# Collect children (variations) of the original file
children = files_collection.find({'parent': file_item['_id']})
for child in children:
process_file_delete(child)
# Finally remove the original file
process_file_delete(file_item)
def generate_link(backend, file_path, project_id=None, is_public=False):
"""Hook to check the backend of a file resource, to build an appropriate link
that can be used by the client to retrieve the actual file.
"""
if backend == 'gcs':
storage = GoogleCloudStorageBucket(project_id)
blob = storage.Get(file_path)
if blob is None:
return ''
if is_public:
return blob['public_url']
return blob['signed_url']
if backend == 'pillar':
return url_for('file_storage.index', file_name=file_path, _external=True,
_scheme=current_app.config['SCHEME'])
if backend == 'cdnsun':
return hash_file_path(file_path, None)
if backend == 'unittest':
return md5(file_path).hexdigest()
return ''
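# Behaviour sketch for the storage-free branches above (hypothetical path;
# the 'gcs' and 'pillar' branches require an application context):
#   >>> generate_link('unittest', 'path/to/file.jpg')  # -> md5 hex digest of the path
#   >>> generate_link('no-such-backend', 'x')          # -> ''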
def before_returning_file(response):
ensure_valid_link(response)
# Enable this call later, when we have implemented the is_public field on files.
# strip_link_and_variations(response)
def strip_link_and_variations(response):
# Check the access level of the user.
if g.current_user is None:
has_full_access = False
else:
user_roles = g.current_user['roles']
access_roles = current_app.config['FULL_FILE_ACCESS_ROLES']
has_full_access = bool(user_roles.intersection(access_roles))
# Strip all file variations (unless image) and link to the actual file.
if not has_full_access:
response.pop('link', None)
response.pop('link_expires', None)
# Image files have public variations, other files don't.
if not response.get('content_type', '').startswith('image/'):
if response.get('variations') is not None:
response['variations'] = []
def before_returning_files(response):
for item in response['_items']:
ensure_valid_link(item)
def ensure_valid_link(response):
"""Ensures the file item has valid file links using generate_link(...)."""
# Log to function-specific logger, so we can easily turn it off.
log = logging.getLogger('%s.ensure_valid_link' % __name__)
# log.debug('Inspecting link for file %s', response['_id'])
# Check link expiry.
now = datetime.datetime.now(tz=bson.tz_util.utc)
if 'link_expires' in response:
link_expires = response['link_expires']
if now < link_expires:
# Not expired yet, so don't bother regenerating anything.
log.debug('Link expires at %s, which is in the future, so not generating new link',
link_expires)
return
log.debug('Link expired at %s, which is in the past; generating new link', link_expires)
else:
log.debug('No expiry date for link; generating new link')
_generate_all_links(response, now)
def _generate_all_links(response, now):
"""Generate a new link for the file and all its variations.
:param response: the file document that should be updated.
:param now: datetime that reflects 'now', for consistent expiry generation.
"""
project_id = str(
response['project']) if 'project' in response else None # TODO: add project id to all files
backend = response['backend']
response['link'] = generate_link(backend, response['file_path'], project_id)
variations = response.get('variations')
if variations:
for variation in variations:
variation['link'] = generate_link(backend, variation['file_path'], project_id)
# Construct the new expiry datetime.
validity_secs = current_app.config['FILE_LINK_VALIDITY'][backend]
response['link_expires'] = now + datetime.timedelta(seconds=validity_secs)
patch_info = remove_private_keys(response)
file_id = ObjectId(response['_id'])
(patch_resp, _, _, _) = patch_internal('files', patch_info, _id=file_id)
if patch_resp.get('_status') == 'ERR':
log.warning('Unable to save new links for file %s: %r', response['_id'], patch_resp)
# TODO: raise a snag.
response['_updated'] = now
else:
response['_updated'] = patch_resp['_updated']
# Be silly and re-fetch the etag ourselves. TODO: handle this better.
etag_doc = current_app.data.driver.db['files'].find_one({'_id': file_id}, {'_etag': 1})
response['_etag'] = etag_doc['_etag']
def before_deleting_file(item):
delete_file(item)
def on_pre_get_files(_, lookup):
# Override the HTTP header, we always want to fetch the document from MongoDB.
parsed_req = eve.utils.parse_request('files')
parsed_req.if_modified_since = None
# Only fetch it if the date got expired.
now = datetime.datetime.now(tz=bson.tz_util.utc)
lookup_expired = lookup.copy()
lookup_expired['link_expires'] = {'$lte': now}
cursor = current_app.data.find('files', parsed_req, lookup_expired)
for file_doc in cursor:
# log.debug('Updating expired links for file %r.', file_doc['_id'])
_generate_all_links(file_doc, now)
def refresh_links_for_project(project_uuid, chunk_size, expiry_seconds):
if chunk_size:
log.info('Refreshing the first %i links for project %s', chunk_size, project_uuid)
else:
log.info('Refreshing all links for project %s', project_uuid)
# Retrieve expired links.
files_collection = current_app.data.driver.db['files']
now = datetime.datetime.now(tz=bson.tz_util.utc)
expire_before = now + datetime.timedelta(seconds=expiry_seconds)
log.info('Limiting to links that expire before %s', expire_before)
to_refresh = files_collection.find(
{'project': ObjectId(project_uuid),
'link_expires': {'$lt': expire_before},
}).sort([('link_expires', pymongo.ASCENDING)]).limit(chunk_size)
if to_refresh.count() == 0:
log.info('No links to refresh.')
return
for file_doc in to_refresh:
log.debug('Refreshing links for file %s', file_doc['_id'])
_generate_all_links(file_doc, now)
log.info('Refreshed %i links', min(chunk_size, to_refresh.count()))
def refresh_links_for_backend(backend_name, chunk_size, expiry_seconds):
import gcloud.exceptions
# Retrieve expired links.
files_collection = current_app.data.driver.db['files']
proj_coll = current_app.data.driver.db['projects']
now = datetime.datetime.now(tz=bson.tz_util.utc)
expire_before = now + datetime.timedelta(seconds=expiry_seconds)
log.info('Limiting to links that expire before %s', expire_before)
to_refresh = files_collection.find(
{'$or': [{'backend': backend_name, 'link_expires': None},
{'backend': backend_name, 'link_expires': {'$lt': expire_before}},
{'backend': backend_name, 'link': None}]
}).sort([('link_expires', pymongo.ASCENDING)]).limit(chunk_size).batch_size(5)
if to_refresh.count() == 0:
log.info('No links to refresh.')
return
refreshed = 0
for file_doc in to_refresh:
try:
file_id = file_doc['_id']
project_id = file_doc.get('project')
if project_id is None:
log.debug('Skipping file %s, it has no project.', file_id)
continue
count = proj_coll.count({'_id': project_id, '$or': [
{'_deleted': {'$exists': False}},
{'_deleted': False},
]})
if count == 0:
log.debug('Skipping file %s, project %s does not exist.', file_id, project_id)
continue
if 'file_path' not in file_doc:
log.warning("Skipping file %s, missing 'file_path' property.", file_id)
continue
log.debug('Refreshing links for file %s', file_id)
try:
_generate_all_links(file_doc, now)
except gcloud.exceptions.Forbidden:
log.warning('Skipping file %s, GCS forbids us access to project %s bucket.',
file_id, project_id)
continue
refreshed += 1
except KeyboardInterrupt:
log.warning('Aborting due to KeyboardInterrupt after refreshing %i links',
refreshed)
return
log.info('Refreshed %i links', refreshed)
@require_login()
def create_file_doc(name, filename, content_type, length, project, backend='gcs',
**extra_fields):
"""Creates a minimal File document for storage in MongoDB.
Doesn't save it to MongoDB yet.
"""
current_user = g.get('current_user')
file_doc = {'name': name,
'filename': filename,
'file_path': '',
'user': current_user['user_id'],
'backend': backend,
'md5': '',
'content_type': content_type,
'length': length,
'project': project}
file_doc.update(extra_fields)
return file_doc
def override_content_type(uploaded_file):
"""Overrides the content type based on file extensions.
:param uploaded_file: file from request.files['form-key']
:type uploaded_file: werkzeug.datastructures.FileStorage
"""
# Possibly use the browser-provided mime type
mimetype = uploaded_file.mimetype
try:
mimetype = OVERRIDE_MIMETYPES[mimetype]
except KeyError:
pass
if '/' in mimetype:
mimecat = mimetype.split('/')[0]
if mimecat in {'video', 'audio', 'image'}:
# The browser's mime type is probably ok, just use it.
return
# And then use it to set the mime type.
(mimetype, encoding) = mimetypes.guess_type(uploaded_file.filename)
# Only override the mime type if we can detect it, otherwise just
# keep whatever the browser gave us.
if mimetype:
# content_type property can't be set directly
uploaded_file.headers['content-type'] = mimetype
# It has this attribute because we used uploaded_file.mimetype earlier in this function.
del uploaded_file._parsed_content_type
def assert_file_size_allowed(file_size):
"""Asserts that the current user is allowed to upload a file of the given size.
:raises wz_exceptions.RequestEntityTooLarge: if the file is larger than the user's allowance.
"""
roles = current_app.config['ROLES_FOR_UNLIMITED_UPLOADS']
if user_matches_roles(require_roles=roles):
return
filesize_limit = current_app.config['FILESIZE_LIMIT_BYTES_NONSUBS']
if file_size < filesize_limit:
return
filesize_limit_mb = filesize_limit / 2.0 ** 20
log.info('User %s tried to upload a %.3f MiB file, but is only allowed %.3f MiB.',
authentication.current_user_id(), file_size / 2.0 ** 20, filesize_limit_mb)
raise wz_exceptions.RequestEntityTooLarge(
'To upload files larger than %i MiB, subscribe to Blender Cloud' % filesize_limit_mb)
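# Worked example (hypothetical limit): with FILESIZE_LIMIT_BYTES_NONSUBS set to
# 32 * 2 ** 20, a non-subscriber's 33 MiB upload raises RequestEntityTooLarge,
# while a 31 MiB upload returns silently.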
@file_storage.route('/stream/<string:project_id>', methods=['POST', 'OPTIONS'])
@require_login()
def stream_to_gcs(project_id):
project_oid = utils.str2id(project_id)
projects = current_app.data.driver.db['projects']
project = projects.find_one(project_oid, projection={'_id': 1})
if not project:
raise wz_exceptions.NotFound('Project %s does not exist' % project_id)
log.info('Streaming file to bucket for project=%s user_id=%s', project_id,
authentication.current_user_id())
log.info('request.headers[Origin] = %r', request.headers.get('Origin'))
uploaded_file = request.files['file']
# Not every upload has a Content-Length header. If it was passed, we might as
# well check for its value before we require the user to upload the entire file.
# (At least I hope that this part of the code is processed before the body is
# read in its entirety)
if uploaded_file.content_length:
assert_file_size_allowed(uploaded_file.content_length)
override_content_type(uploaded_file)
if not uploaded_file.content_type:
log.warning('File uploaded to project %s without content type.', project_oid)
raise wz_exceptions.BadRequest('Missing content type.')
if uploaded_file.content_type.startswith('image/'):
# We need to do local thumbnailing, so we have to write the stream
# both to Google Cloud Storage and to local storage.
local_file = tempfile.NamedTemporaryFile(dir=current_app.config['STORAGE_DIR'])
uploaded_file.save(local_file)
local_file.seek(0) # Make sure that a re-read starts from the beginning.
stream_for_gcs = local_file
else:
local_file = None
stream_for_gcs = uploaded_file.stream
# Figure out the file size, as we need to pass this in explicitly to GCloud.
# Otherwise it always uses os.fstat(file_obj.fileno()).st_size, which isn't
# supported by a BytesIO object (even though it does have a fileno attribute).
if isinstance(stream_for_gcs, io.BytesIO):
file_size = len(stream_for_gcs.getvalue())
else:
file_size = os.fstat(stream_for_gcs.fileno()).st_size
# Check the file size again, now that we know its size for sure.
assert_file_size_allowed(file_size)
# Create file document in MongoDB.
file_id, internal_fname, status = create_file_doc_for_upload(project_oid, uploaded_file)
if current_app.config['TESTING']:
log.warning('NOT streaming to GCS because TESTING=%r', current_app.config['TESTING'])
# Fake a Blob object.
gcs = None
blob = type('Blob', (), {'size': file_size})
else:
# Upload the file to GCS.
from gcloud.streaming import transfer
# Files larger than this many bytes will be streamed directly from disk, smaller
# ones will be read into memory and then uploaded.
transfer.RESUMABLE_UPLOAD_THRESHOLD = 102400
try:
gcs = GoogleCloudStorageBucket(project_id)
blob = gcs.bucket.blob('_/' + internal_fname, chunk_size=256 * 1024 * 2)
blob.upload_from_file(stream_for_gcs, size=file_size,
content_type=uploaded_file.mimetype)
except Exception:
log.exception('Error uploading file to Google Cloud Storage (GCS),'
' aborting handling of uploaded file (id=%s).', file_id)
update_file_doc(file_id, status='failed')
raise wz_exceptions.InternalServerError('Unable to stream file to Google Cloud Storage')
if stream_for_gcs.closed:
log.error('Eek, GCS closed its stream, Andy is not going to like this.')
# Reload the blob to get the file size according to Google.
blob.reload()
update_file_doc(file_id,
status='queued_for_processing',
file_path=internal_fname,
length=blob.size,
content_type=uploaded_file.mimetype)
process_file(gcs, file_id, local_file)
# Local processing is done, we can close the local file so it is removed.
if local_file is not None:
local_file.close()
log.debug('Handled uploaded file id=%s, fname=%s, size=%i', file_id, internal_fname, blob.size)
# Status is 200 if the file already existed, and 201 if it was newly created.
# TODO: add a link to a thumbnail in the response.
resp = jsonify(status='ok', file_id=str(file_id))
resp.status_code = status
add_access_control_headers(resp)
return resp
def add_access_control_headers(resp):
"""Allows cross-site requests from the configured domain."""
if 'Origin' not in request.headers:
return resp
resp.headers['Access-Control-Allow-Origin'] = request.headers['Origin']
resp.headers['Access-Control-Allow-Credentials'] = 'true'
return resp
def update_file_doc(file_id, **updates):
files = current_app.data.driver.db['files']
res = files.update_one({'_id': ObjectId(file_id)},
{'$set': updates})
log.debug('update_file_doc(%s, %s): %i matched, %i updated.',
file_id, updates, res.matched_count, res.modified_count)
return res
def create_file_doc_for_upload(project_id, uploaded_file):
"""Creates a secure filename and a document in MongoDB for the file.
The (project_id, filename) tuple should be unique. If such a document already
exists, it is updated with the new file.
:param uploaded_file: file from request.files['form-key']
:type uploaded_file: werkzeug.datastructures.FileStorage
:returns: a tuple (file_id, filename, status), where 'filename' is the internal
filename used on GCS.
"""
project_id = ObjectId(project_id)
# Hash the filename with path info to get the internal name. This should
# be unique for the project.
# internal_filename = uploaded_file.filename
_, ext = os.path.splitext(uploaded_file.filename)
internal_filename = uuid.uuid4().hex + ext
# For now, we don't support overwriting files, and create a new one every time.
# # See if we can find a pre-existing file doc.
# files = current_app.data.driver.db['files']
# file_doc = files.find_one({'project': project_id,
# 'name': internal_filename})
file_doc = None
# TODO: at some point do name-based and content-based content-type sniffing.
new_props = {'filename': uploaded_file.filename,
'content_type': uploaded_file.mimetype,
'length': uploaded_file.content_length,
'project': project_id,
'status': 'uploading'}
if file_doc is None:
# Create a file document on MongoDB for this file.
file_doc = create_file_doc(name=internal_filename, **new_props)
file_fields, _, _, status = post_internal('files', file_doc)
else:
file_doc.update(new_props)
file_fields, _, _, status = put_internal('files', remove_private_keys(file_doc))
if status not in (200, 201):
log.error('Unable to create new file document in MongoDB, status=%i: %s',
status, file_fields)
raise wz_exceptions.InternalServerError()
return file_fields['_id'], internal_filename, status
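# The internal-name scheme above, in isolation (hypothetical upload name; the
# uuid4 hex shown is illustrative):
#   >>> _, ext = os.path.splitext('render_final.png')
#   >>> uuid.uuid4().hex + ext
#   '3f2a0c6d5e7b4b7f9a1d2c3b4a5f6e7d.png'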
def compute_aggregate_length(file_doc, original=None):
"""Computes the total length (in bytes) of the file and all variations.
Stores the result in file_doc['length_aggregate_in_bytes']
"""
# Compute total size of all variations.
variations = file_doc.get('variations', ())
var_length = sum(var.get('length', 0) for var in variations)
file_doc['length_aggregate_in_bytes'] = file_doc.get('length', 0) + var_length
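# Example (hypothetical document): an original of 1000 bytes with variations of
# 200 and 300 bytes yields file_doc['length_aggregate_in_bytes'] == 1500.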
def compute_aggregate_length_items(file_docs):
for file_doc in file_docs:
compute_aggregate_length(file_doc)
def setup_app(app, url_prefix):
app.on_pre_GET_files += on_pre_get_files
app.on_fetched_item_files += before_returning_file
app.on_fetched_resource_files += before_returning_files
app.on_delete_item_files += before_deleting_file
app.on_update_files += compute_aggregate_length
app.on_replace_files += compute_aggregate_length
app.on_insert_files += compute_aggregate_length_items
app.register_blueprint(file_storage, url_prefix=url_prefix)

View File

@@ -1,123 +0,0 @@
import itertools
import pymongo
from flask import Blueprint, current_app
from application.utils import jsonify
blueprint = Blueprint('latest', __name__)
def keep_fetching(collection, db_filter, projection, sort, py_filter, batch_size=12):
"""Yields results for which py_filter returns True"""
projection['_deleted'] = 1
curs = collection.find(db_filter, projection).sort(sort)
curs.batch_size(batch_size)
for doc in curs:
if doc.get('_deleted'):
continue
doc.pop('_deleted', None)
if py_filter(doc):
yield doc
def latest_nodes(db_filter, projection, py_filter, limit):
nodes = current_app.data.driver.db['nodes']
proj = {
'_created': 1,
'_updated': 1,
}
proj.update(projection)
latest = keep_fetching(nodes, db_filter, proj,
[('_created', pymongo.DESCENDING)],
py_filter, limit)
result = list(itertools.islice(latest, limit))
return result
def has_public_project(node_doc):
"""Returns True iff the project the node belongs to is public."""
project_id = node_doc.get('project')
return is_project_public(project_id)
# TODO: cache result, at least for a limited amt. of time, or for this HTTP request.
def is_project_public(project_id):
"""Returns True iff the project is public."""
project = current_app.data.driver.db['projects'].find_one(project_id)
if not project:
return False
return not project.get('is_private')
@blueprint.route('/assets')
def latest_assets():
latest = latest_nodes({'node_type': 'asset', 'properties.status': 'published'},
{'name': 1, 'project': 1, 'user': 1, 'node_type': 1,
'parent': 1, 'picture': 1, 'properties.status': 1,
'properties.content_type': 1,
'permissions.world': 1},
has_public_project, 12)
embed_user(latest)
embed_project(latest)
return jsonify({'_items': latest})
def embed_user(latest):
users = current_app.data.driver.db['users']
for comment in latest:
user_id = comment['user']
comment['user'] = users.find_one(user_id, {'auth': 0, 'groups': 0, 'roles': 0,
'settings': 0, 'email': 0,
'_created': 0, '_updated': 0, '_etag': 0})
def embed_project(latest):
projects = current_app.data.driver.db['projects']
for comment in latest:
project_id = comment['project']
comment['project'] = projects.find_one(project_id, {'_id': 1, 'name': 1, 'url': 1})
@blueprint.route('/comments')
def latest_comments():
latest = latest_nodes({'node_type': 'comment', 'properties.status': 'published'},
{'project': 1, 'parent': 1, 'user': 1,
'properties.content': 1, 'node_type': 1, 'properties.status': 1,
'properties.is_reply': 1},
has_public_project, 6)
# Embed the comments' parents.
nodes = current_app.data.driver.db['nodes']
parents = {}
for comment in latest:
parent_id = comment['parent']
if parent_id in parents:
comment['parent'] = parents[parent_id]
continue
parent = nodes.find_one(parent_id)
parents[parent_id] = parent
comment['parent'] = parent
embed_project(latest)
embed_user(latest)
return jsonify({'_items': latest})
def setup_app(app, url_prefix):
app.register_blueprint(blueprint, url_prefix=url_prefix)

View File

@@ -1,418 +0,0 @@
import base64
import logging
import urlparse
import pymongo.errors
import rsa.randnum
from bson import ObjectId
from flask import current_app, g, Blueprint, request
import werkzeug.exceptions as wz_exceptions
from application.modules import file_storage
from application.utils import str2id, jsonify
from application.utils.authorization import check_permissions, require_login
from application.utils.gcs import update_file_name
from application.utils.activities import activity_subscribe, activity_object_add
from application.utils.algolia import algolia_index_node_delete
from application.utils.algolia import algolia_index_node_save
log = logging.getLogger(__name__)
blueprint = Blueprint('nodes', __name__)
ROLES_FOR_SHARING = {u'subscriber', u'demo'}
@blueprint.route('/<node_id>/share', methods=['GET', 'POST'])
@require_login(require_roles=ROLES_FOR_SHARING)
def share_node(node_id):
"""Shares a node, or returns sharing information."""
node_id = str2id(node_id)
nodes_coll = current_app.data.driver.db['nodes']
node = nodes_coll.find_one({'_id': node_id},
projection={
'project': 1,
'node_type': 1,
'short_code': 1
})
check_permissions('nodes', node, request.method)
log.info('Sharing node %s', node_id)
short_code = node.get('short_code')
status = 200
if not short_code:
if request.method == 'POST':
short_code = generate_and_store_short_code(node)
make_world_gettable(node)
status = 201
else:
return '', 204
return jsonify(short_link_info(short_code), status=status)
def generate_and_store_short_code(node):
nodes_coll = current_app.data.driver.db['nodes']
node_id = node['_id']
log.debug('Creating new short link for node %s', node_id)
max_attempts = 10
for attempt in range(1, max_attempts + 1):
# Generate a new short code
short_code = create_short_code(node)
log.debug('Created short code for node %s: %s', node_id, short_code)
node['short_code'] = short_code
# Store it in MongoDB
try:
result = nodes_coll.update_one({'_id': node_id},
{'$set': {'short_code': short_code}})
break
except pymongo.errors.DuplicateKeyError:
log.info('Duplicate key while creating short code, retrying (attempt %i/%i)',
attempt, max_attempts)
pass
else:
log.error('Unable to find unique short code for node %s after %i attempts, failing!',
node_id, max_attempts)
raise wz_exceptions.InternalServerError('Unable to create unique short code for node %s' %
node_id)
# We were able to store a short code, now let's verify the result.
if result.matched_count != 1:
log.warning('Unable to update node %s with new short_links=%r', node_id, node['short_code'])
raise wz_exceptions.InternalServerError('Unable to update node %s with new short links' %
node_id)
return short_code
def make_world_gettable(node):
nodes_coll = current_app.data.driver.db['nodes']
node_id = node['_id']
log.debug('Ensuring the world can read node %s', node_id)
world_perms = set(node.get('permissions', {}).get('world', []))
world_perms.add(u'GET')
world_perms = list(world_perms)
result = nodes_coll.update_one({'_id': node_id},
{'$set': {'permissions.world': world_perms}})
if result.matched_count != 1:
log.warning('Unable to update node %s with new permissions.world=%r', node_id, world_perms)
raise wz_exceptions.InternalServerError('Unable to update node %s with new permissions' %
node_id)
def create_short_code(node):
"""Generates a new 'short code' for the node."""
length = current_app.config['SHORT_CODE_LENGTH']
bits = rsa.randnum.read_random_bits(32)
short_code = base64.b64encode(bits, altchars='xy').rstrip('=')
short_code = short_code[:length]
return short_code
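# Roughly the same idea with only the standard library (Python 3 sketch; the
# code above is Python 2 and draws its random bits from rsa.randnum):
#   >>> import base64, os
#   >>> base64.b64encode(os.urandom(4), altchars=b'xy').rstrip(b'=')[:length]  # length = SHORT_CODE_LENGTH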
def short_link_info(short_code):
"""Returns the short link info in a dict."""
short_link = urlparse.urljoin(current_app.config['SHORT_LINK_BASE_URL'], short_code)
return {
'short_code': short_code,
'short_link': short_link,
}
def item_parse_attachments(response):
"""Before returning a response, check if the 'attachments' property is
defined. If yes, load the file (for the moment only images) in the required
variation, get the link and build a Markdown representation. Search in the
'field' specified in the attachment and replace the 'slug' tag with the
generated link.
"""
attachments = response.get('properties', {}).get('attachments', None)
if not attachments:
return
files_collection = current_app.data.driver.db['files']
for attachment in attachments:
# Make a list from the property path
field_name_path = attachment['field'].split('.')
# This currently only allows access to properties nested directly
# inside the 'properties' property
if len(field_name_path) > 1:
field_content = response[field_name_path[0]][field_name_path[1]]
# This is for the "normal" first level property
else:
field_content = response[field_name_path[0]]
for af in attachment['files']:
slug = af['slug']
slug_tag = "[{0}]".format(slug)
f = files_collection.find_one({'_id': ObjectId(af['file'])})
if f is None:
af['file'] = None
continue
size = f['size'] if 'size' in f else 'l'
# Get the correct variation from the file
file_storage.ensure_valid_link(f)
thumbnail = next((item for item in f['variations'] if
item['size'] == size), None)
# Build Markdown img string
l = '![{0}]({1} "{2}")'.format(slug, thumbnail['link'], f['name'])
# Parse the content of the file and replace the attachment
# tag with the actual image link
field_content = field_content.replace(slug_tag, l)
# Apply the parsed value back to the property. See above for
# clarifications on how this is done.
if len(field_name_path) > 1:
response[field_name_path[0]][field_name_path[1]] = field_content
else:
response[field_name_path[0]] = field_content
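# The substitution above, in isolation (hypothetical slug, link and name):
#   >>> '![{0}]({1} "{2}")'.format('hero', 'https://example.com/hero-l.jpg', 'hero.jpg')
#   '![hero](https://example.com/hero-l.jpg "hero.jpg")'
# field_content.replace('[hero]', <that string>) then swaps the tag in place.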
def resource_parse_attachments(response):
for item in response['_items']:
item_parse_attachments(item)
def before_replacing_node(item, original):
check_permissions('nodes', original, 'PUT')
update_file_name(item)
def after_replacing_node(item, original):
"""Push an update to the Algolia index when a node item is updated. If the
project is private, prevent public indexing.
"""
projects_collection = current_app.data.driver.db['projects']
project = projects_collection.find_one({'_id': item['project']})
if project.get('is_private', False):
# Skip index updating and return
return
from algoliasearch.client import AlgoliaException
status = item['properties'].get('status', 'unpublished')
if status == 'published':
try:
algolia_index_node_save(item)
except AlgoliaException as ex:
log.warning('Unable to push node info to Algolia for node %s; %s',
item.get('_id'), ex)
else:
try:
algolia_index_node_delete(item)
except AlgoliaException as ex:
log.warning('Unable to delete node info from Algolia for node %s; %s',
item.get('_id'), ex)
def before_inserting_nodes(items):
"""Before inserting a node in the collection we check if the user is allowed
and we append the project id to it.
"""
nodes_collection = current_app.data.driver.db['nodes']
def find_parent_project(node):
"""Recursive function that finds the ultimate parent of a node."""
if node and 'parent' in node:
parent = nodes_collection.find_one({'_id': node['parent']})
return find_parent_project(parent)
if node:
return node
else:
return None
for item in items:
check_permissions('nodes', item, 'POST')
if 'parent' in item and 'project' not in item:
parent = nodes_collection.find_one({'_id': item['parent']})
project = find_parent_project(parent)
if project:
item['project'] = project['_id']
# Default the 'user' property to the current user.
item.setdefault('user', g.current_user['user_id'])
def after_inserting_nodes(items):
for item in items:
# Skip subscriptions for first level items (since the context is not a
# node, but a project).
# TODO: support should be added for mixed context
if 'parent' not in item:
return
context_object_id = item['parent']
if item['node_type'] == 'comment':
nodes_collection = current_app.data.driver.db['nodes']
parent = nodes_collection.find_one({'_id': item['parent']})
# Always subscribe to the parent node
activity_subscribe(item['user'], 'node', item['parent'])
if parent['node_type'] == 'comment':
# If the parent is a comment, we provide its own parent as
# context. We do this in order to point the user to an asset
# or group when viewing the notification.
verb = 'replied'
context_object_id = parent['parent']
# Subscribe to the parent of the parent comment (post or group)
activity_subscribe(item['user'], 'node', parent['parent'])
else:
activity_subscribe(item['user'], 'node', item['_id'])
verb = 'commented'
else:
verb = 'posted'
activity_subscribe(item['user'], 'node', item['_id'])
activity_object_add(
item['user'],
verb,
'node',
item['_id'],
'node',
context_object_id
)
def deduct_content_type(node_doc, original=None):
"""Deduct the content type from the attached file, if any."""
if node_doc['node_type'] != 'asset':
log.debug('deduct_content_type: called on node type %r, ignoring', node_doc['node_type'])
return
node_id = node_doc.get('_id')
try:
file_id = ObjectId(node_doc['properties']['file'])
except KeyError:
if node_id is None:
# Creation of a file-less node is allowed, but updates aren't.
return
log.warning('deduct_content_type: Asset without properties.file, rejecting.')
raise wz_exceptions.UnprocessableEntity('Missing file property for asset node')
files = current_app.data.driver.db['files']
file_doc = files.find_one({'_id': file_id},
{'content_type': 1})
if not file_doc:
log.warning('deduct_content_type: Node %s refers to non-existing file %s, rejecting.',
node_id, file_id)
raise wz_exceptions.UnprocessableEntity('File property refers to non-existing file')
# Guess the node content type from the file content type
file_type = file_doc['content_type']
if file_type.startswith('video/'):
content_type = 'video'
elif file_type.startswith('image/'):
content_type = 'image'
else:
content_type = 'file'
node_doc['properties']['content_type'] = content_type
def nodes_deduct_content_type(nodes):
for node in nodes:
deduct_content_type(node)
def before_returning_node(node):
# Run validation process, since GET on nodes entry point is public
check_permissions('nodes', node, 'GET', append_allowed_methods=True)
# Embed short_link_info if the node has a short_code.
short_code = node.get('short_code')
if short_code:
node['short_link'] = short_link_info(short_code)['short_link']
def before_returning_nodes(nodes):
for node in nodes['_items']:
before_returning_node(node)
def node_set_default_picture(node, original=None):
"""Uses the image of an image asset or colour map of texture node as picture."""
if node.get('picture'):
log.debug('Node %s already has a picture, not overriding', node.get('_id'))
return
node_type = node.get('node_type')
props = node.get('properties', {})
content = props.get('content_type')
if node_type == 'asset' and content == 'image':
image_file_id = props.get('file')
elif node_type == 'texture':
# Find the colour map, defaulting to the first image map available.
image_file_id = None
for image in props.get('files', []):
if image_file_id is None or image.get('map_type') == u'color':
image_file_id = image.get('file')
else:
log.debug('Not setting default picture on node type %s content type %s',
node_type, content)
return
if image_file_id is None:
log.debug('Nothing to set the picture to.')
return
log.debug('Setting default picture for node %s to %s', node.get('_id'), image_file_id)
node['picture'] = image_file_id
def nodes_set_default_picture(nodes):
for node in nodes:
node_set_default_picture(node)
def after_deleting_node(item):
from algoliasearch.client import AlgoliaException
try:
algolia_index_node_delete(item)
except AlgoliaException as ex:
log.warning('Unable to delete node info from Algolia for node %s; %s',
item.get('_id'), ex)
def setup_app(app, url_prefix):
from . import patch
patch.setup_app(app, url_prefix=url_prefix)
app.on_fetched_item_nodes += before_returning_node
app.on_fetched_resource_nodes += before_returning_nodes
app.on_fetched_item_nodes += item_parse_attachments
app.on_fetched_resource_nodes += resource_parse_attachments
app.on_replace_nodes += before_replacing_node
app.on_replace_nodes += deduct_content_type
app.on_replace_nodes += node_set_default_picture
app.on_replaced_nodes += after_replacing_node
app.on_insert_nodes += before_inserting_nodes
app.on_insert_nodes += nodes_deduct_content_type
app.on_insert_nodes += nodes_set_default_picture
app.on_inserted_nodes += after_inserting_nodes
app.on_deleted_item_nodes += after_deleting_node
app.register_blueprint(blueprint, url_prefix=url_prefix)

Some files were not shown because too many files have changed in this diff