Compare commits

...

471 Commits

Author SHA1 Message Date
Anna Sirota
8f3a03d311 Log exception on each ResourceInvalid to make debugging easier 2021-04-26 17:40:03 +02:00
Anna Sirota
d9d3b73070 Don't validate tokens for each static asset URL 2021-03-19 10:28:28 +01:00
Anna Sirota
2bce52e189 Pin poetry deps to work around cryptography requiring Rust issue 2021-03-18 18:49:10 +01:00
9f76657603 Remove debug-log when auth token cannot be found 2021-02-16 13:55:28 +01:00
b4982c4128 Pillar: Wider scrollbars 2020-07-29 22:53:01 +02:00
970303577a Update gulp-sass 2020-07-23 18:49:12 +02:00
5d9bae1f0f Blender Cloud: Fix responsive issues on navigation. 2020-07-22 18:32:48 +02:00
2e41b7a4dd Blender Cloud: Fix responsive issues on timeline. 2020-07-22 18:32:35 +02:00
b4207cce47 Blender Cloud: Fix responsive issues on blog. 2020-07-22 18:32:22 +02:00
5ab4086cbe Notifications: Regulate fetching via cookie
We introduce a doNotQueryNotifications cookie with a short lifetime,
which is used to determine whether getNotifications should be called
or not. This prevents notifications from being fetched at every page
load, unless the cookie is expired.
2020-04-17 13:32:27 +02:00
86206d42dc Notifications: Set timeout from 30 to 60 seconds
This slightly reduces server load, as clients that keep a page open
will query less often.
2020-04-17 13:32:27 +02:00
Ankit
7c238571bf Fix T73490 Hyperlink bug
Fix typo in the link to Blender Cloud

Maniphest Tasks: T73490

Differential Revision: https://developer.blender.org/D7218
2020-03-27 09:52:51 +01:00
7dc0cadc46 Fix issue with Cerberus
Cerberus has a clause `… and X in self.persisted_document`, which fails
when `persisted_document` is `None` (which is the default value for the
parameter). This code can be found in the function `_normalize_default()`
in `.venv/lib/python3.6/site-packages/cerberus/validator.py:922`.
2020-03-19 16:57:50 +01:00
47474ac936 Replaced Gravatar with self-hosted avatars
Avatars are now obtained from Blender ID. They are downloaded from
Blender ID and stored in the users' home project storage.

Avatars can be synced via Celery and triggered from a webhook.

The avatar can be obtained from the current user object in Python, or
via pillar.api.users.avatar.url(user_dict).

Avatars can be shown in the web frontend by:

- an explicit image (like before but with a non-Gravatar URL)
- a Vue.js component `user-avatar`
- a Vue.js component `current-user-avatar`

The latter is the most efficient for the current user, as it uses user
info that's already injected into the webpage (so requires no extra
queries).
2019-05-31 16:49:24 +02:00
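For illustration, a minimal sketch of using the avatar helper named above; `user_dict` is assumed to be a regular user document, and the surrounding function is made up:

    # Hypothetical usage of pillar.api.users.avatar.url() (named above).
    from pillar.api.users import avatar

    def avatar_img_tag(user_dict: dict) -> str:
        # Build an <img> tag for a user's avatar; purely illustrative.
        url = avatar.url(user_dict)
        return f'<img src="{url}" alt="{user_dict.get("full_name", "")}">'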
8a19efe7a7 Reformatted code and added import to resolve PyCharm warnings 2019-05-31 13:55:06 +02:00
3904c188ac Removed trailing spaces 2019-05-31 13:55:06 +02:00
26e20ca571 Fix for now-allowed PATCH on users
Commit 0f0a4be4 introduced using PATCH on users to set the username.
An old unit test failed, as it checks that PATCH is not allowed (i.e. it
tests for a 405 Method Not Allowed response).
2019-05-31 10:24:11 +02:00
e57ec4bede Moved user_to_dict() function out of pillar.web.jinja module 2019-05-31 10:23:25 +02:00
3705b60f25 Fixed unit test by doing late import
For some reason the old pillar.auth stuck around, failing the
`isinstance(some_object, auth.UserClass)` check because it compared to the
old class and not the reloaded one.
2019-05-31 10:22:46 +02:00
0f0a4be412 Fixed updating username in settings view
The timestamps used by the 'last viewed' property of the video progress
feature were converted to strings when sending to the frontend, but never
changed back to timestamps when PUTting via the SDK. I solved it by not
PUTting the user at all, but using PATCH to set the username instead.
2019-05-29 18:37:01 +02:00
23f8c1a446 Ran npm audit fix --force
This fixed 64 security vulnerabilities and hopefully didn't break too much.
2019-05-29 17:06:41 +02:00
1f5f781ecf Suppress warnings from Werkzeug
- Werkzeug deprecated Request.is_xhr, but it works fine with jQuery and we
  don't need a reminder every time a unit test is run. When we upgrade to
  Werkzeug 1.0 (once that's released) we'll see things break and fix them.
- Werkzeug deprecated their Atom feed. This we should act on; tracked in
  https://developer.blender.org/T65274.
2019-05-29 15:22:45 +02:00
4425771117 Suppress Cerberus deprecation warning caused by Eve
Eve is falling behind on Cerberus. See my bug report on
https://github.com/pyeve/eve/issues/1278 for more info.
2019-05-29 14:32:46 +02:00
931c29a21f MongoDB: db.collection_names() is deprecated → db.list_collection_names() 2019-05-29 13:46:53 +02:00
2aa79d3f09 MongoDB: more changing count() → count_documents() 2019-05-29 13:46:53 +02:00
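For illustration, the deprecated-to-replacement PyMongo calls behind these two commits (database and collection names are placeholders):

    from pymongo import MongoClient

    db = MongoClient().my_database  # placeholder name

    names = db.list_collection_names()      # was: db.collection_names()
    n_docs = db.nodes.count_documents({})   # was: db.nodes.find().count()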
6f8fd4cd72 Cerberus 1.3 renamed 'validator' → 'check_with'
This results in a change in schemas as well as in validator function names.
2019-05-29 12:58:40 +02:00
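A sketch of the rename, assuming a custom rule called 'markdown' (the rule name here is illustrative; Cerberus 1.3 renamed both the schema key and the `_validator_*` method prefix):

    from cerberus import Validator

    class MyValidator(Validator):
        # Cerberus < 1.3: def _validator_markdown(self, field, value): ...
        def _check_with_markdown(self, field, value):
            if not isinstance(value, str):
                self._error(field, 'must be a markdown string')

    # Cerberus < 1.3: {'content': {'type': 'string', 'validator': 'markdown'}}
    schema = {'content': {'type': 'string', 'check_with': 'markdown'}}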
f53217cabf Added some type declarations 2019-05-29 12:58:40 +02:00
8b42e88817 Cerberus 1.3 renamed '{value,key}schema' to '{values,keys}rules'
'valueschema' and 'keyschema' have been replaced by 'valuesrules' and
'keysrules'. Note the change from 2x singular ('value' and 'schema') to
2x plural ('values' and 'rules').
2019-05-29 12:57:38 +02:00
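A before/after sketch of the same schema under both spellings (the field itself is made up):

    # Cerberus < 1.3
    old = {'tags_per_lang': {'type': 'dict',
                             'keyschema': {'type': 'string'},
                             'valueschema': {'type': 'list'}}}

    # Cerberus 1.3
    new = {'tags_per_lang': {'type': 'dict',
                             'keysrules': {'type': 'string'},
                             'valuesrules': {'type': 'list'}}}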
dd5cd5b61a Compatibility with Eve 0.9.1
Note that Eve's update from 0.9 → 0.9.1 had a breaking API change, as the
return type of `app.data.find(...)` changed...
2019-05-29 10:50:55 +02:00
459a579964 Some extra type annotations 2019-05-28 16:13:14 +02:00
0b32e973a9 More thorough retrying in Blender ID communication 2019-05-28 16:13:14 +02:00
c6e70dc5d9 Removed and gitignored poetry.lock
The poetry.lock files are only relevant for repeatable deployments,
and the one in this project isn't used for that (only the Blender
Cloud project file is used, and that's still there).
2019-05-28 16:13:14 +02:00
1b90dd16ae Re-locked dependencies 2019-05-28 16:13:14 +02:00
1e823a9dbe MongoCollection.count() and update() are deprecated
Eve doesn't have any counting methods on `current_app.data`, so there is
no one-to-one translation for `cursor.count()` in
`file_storage/__init__.py`. Since the call was only used in a debug log
entry, I just removed it altogether.

I removed `pillar.cli.operations.index_users_rebuild()`, as it was
importing `pillar.api.utils.algolia.algolia_index_user_save` which doesn't
exist any more, so the code was dead anyway.
2019-05-28 16:13:14 +02:00
47d5c6cbad UnitTest.assertEquals is deprecated, replaced by assertEqual 2019-05-28 16:13:14 +02:00
b66247881b Relaxed required versions of all our dependencies
Some packages were upgraded; the rename from `CommonMark` to `commonmark`
was the only change breaking the unit tests.
2019-05-28 16:13:14 +02:00
90e5868b31 Dependencies: remove requests, it's pulled in via python-pillar-sdk anyway 2019-05-28 16:13:14 +02:00
94efa948ac Development dependencies updates to their latest versions 2019-05-28 16:13:14 +02:00
ec344ba894 Generate Blender ID URL based on configuration 2019-05-23 13:48:24 +02:00
cb8c9f1225 Merge branch 'production' 2019-05-22 10:27:25 +02:00
51ed7a647d put_project(project_dict): also log the error when we cannot PUT
Previously only a ValueError was raised, which was sometimes swallowed.
Instead of looking up the culprit and solving this properly, I just log the
error now.
2019-05-22 10:15:25 +02:00
c396c7d371 Allow web projects to un-attach project pictures
This makes it possible to PUT a project after attach_project_pictures()
has been called on it (which embeds the picture file documents).

This will be used in SVNman.
2019-05-22 10:14:19 +02:00
2d7425b591 Added 'idna' package as dependency
It's required by pyopenssl but for some reason wasn't installed by Poetry.
2019-05-14 11:19:03 +02:00
3f875ad722 Gitignore devdeps metadata directory 2019-05-14 10:42:15 +02:00
9c517b67c5 Documenting use of Poetry for dependency management 2019-05-14 10:42:15 +02:00
dd9a96d111 README: Removed trailing whitespace 2019-05-14 10:42:15 +02:00
3d6ff9a7bc Moving to Poetry 2019-05-14 10:42:15 +02:00
8ba7122a01 Forms: Use own label element for fields instead of wtforms.
This way we can do two things:
* Tag the field for translation
* Use a filter (like undertitle for nicer labels)
2019-04-24 21:29:55 +02:00
15d5ac687c Attach all project pictures when viewing node
The Open Graph rendering code is not completely refactored yet,
so it still requires a mix of project.picture_header and
project.picture_16_9. By attaching all project pictures we prevent
unexpected errors.
2019-04-19 15:30:55 +02:00
402f9f23b5 Use picture_16_9 as og_image
Previously we used picture_header, which did not guarantee a suitable
aspect ratio for an Open Graph image.
2019-04-19 14:12:43 +02:00
486fb20dcf Enhance project with attach_project_pictures
Instead of individually attaching project images, use the utility
function.
2019-04-19 14:11:42 +02:00
34f2372082 Add picture_16_9 when attaching project pictures 2019-04-19 14:10:19 +02:00
c217ec194f Save 16_9 picture via Project edit form 2019-04-19 14:09:54 +02:00
b68af6da8b Rename 16x9 to 16_9
We do this to reduce ambiguity about resolution vs aspect ratio.
2019-04-19 11:50:41 +02:00
06f5bc8f01 Add picture_16x9 attribute for Project
This image can be used as a source for Open Graph tags, as well as for
displaying a project thumbnail with a known (or at least expected)
aspect ratio.
2019-04-19 10:57:46 +02:00
53eb9f30fd Bumped Jinja2 2.10 → 2.10.1
Github poked us about this being a security update.
2019-04-18 10:15:41 +02:00
43d464c60c Fix missing icons. 2019-04-15 12:42:49 +02:00
d0ef76c19e CSS: Utility classes for column count property. 2019-04-12 17:16:06 +02:00
a43eca4237 Timeline: Less prominent project title. 2019-04-10 17:08:14 +02:00
af020d4653 Cleanup CSS.
Extend Bootstrap classes instead of using own styling.
2019-04-10 17:08:01 +02:00
2c207b35e2 UI Asset List: Add custom class to meta items. 2019-04-10 14:14:04 +02:00
3f3172e00e Allow PUT method for owner on comment creation
Make use of the permission system and allow PUT method for the creator
of a Node of type comment. This enables comment owners to edit their
own posts.
2019-04-09 01:09:08 +02:00
26a09a900f PEP8 formatting 2019-04-09 01:01:58 +02:00
90154896fb PEP8 formatting 2019-04-09 01:01:49 +02:00
95d611d0c5 Cleanup: remove unused import and blank line 2019-04-08 23:55:26 +02:00
dc7d7bab4a Extend projects/view.html for page templates
Using projects/landing.html was causing an exception, since the landing
template expects project attributes that are available only for
projects that are setup_for_film.
2019-04-08 16:43:20 +02:00
d047943a07 Cleanup duplicate code. 2019-04-04 14:21:34 +02:00
b64b75eecb Jumbotron: Subtle text shadow on text 2019-04-04 14:21:34 +02:00
152dc50715 UI Timeline: Make buttons outline white when dark background. 2019-04-04 14:21:34 +02:00
73edd5c5d2 Remove unused import 2019-04-04 14:15:03 +02:00
3d8ee61b03 Clean up: Whitespace 2019-04-04 11:34:13 +02:00
ee5a1a8bb7 Use kebab-case for vue names
https://vuejs.org/v2/guide/components-custom-events.html#Event-Names
2019-04-04 11:33:43 +02:00
ccc78af742 white space clean up 2019-04-04 10:44:43 +02:00
de40b4b2b6 Specify prop type 2019-04-04 10:44:22 +02:00
fe2f350013 Silence warning about changing prop value 2019-04-04 10:18:42 +02:00
1b42d114ad Whitespace cleanup 2019-04-04 10:18:42 +02:00
e58db61d2a Add missing closing bracket to components 2019-04-04 10:18:42 +02:00
c6333cecfe Better initial component values 2019-04-04 10:18:42 +02:00
ee6fd3386d Fix wrong prop type 2019-04-04 10:18:42 +02:00
700e7d2fc4 Bind vue component key 2019-04-04 10:18:42 +02:00
619dfda6fa Only use minified vue if built as production 2019-04-04 10:18:42 +02:00
985e96f20b Wrong type was passed into component 2019-04-04 10:18:42 +02:00
37e09c2943 Remove unused parameter 2019-04-04 10:18:42 +02:00
62af8c2cbf Add example of usage 2019-04-04 10:18:42 +02:00
0b12436a31 UI Page: Fix link on header. 2019-04-04 00:26:15 +02:00
7f12c9b4ad UI Pages: Hide title if there is an image. 2019-04-04 00:24:37 +02:00
1171a8e437 UI Theatre: margin around comments container. 2019-04-03 23:15:09 +02:00
54abda883d Cleanup: remove unused font-pillar link.
They are now built into the main stylesheets.
2019-04-03 23:12:17 +02:00
ad0f9b939a CSS: include font-pillar into the main stylesheets. 2019-04-03 23:11:57 +02:00
4d5a8613af UI Alerts: minor style tweaks.
Remove margin from paragraphs and remove redundant text-align.
2019-04-03 22:47:04 +02:00
ff314c0a7d Cleanup: remove blender-cloud specific pug component. 2019-04-03 15:28:06 +02:00
18ec206a40 UI Breadcrumbs: Always show. 2019-04-02 16:40:01 +02:00
8f3f3b6698 UI Fix: Show sidebar on project edit. 2019-04-02 16:40:01 +02:00
ad5dbdf094 Remove unused data property 2019-04-02 14:09:49 +02:00
67a56dc797 Fix typo 2019-04-02 14:09:49 +02:00
093f4101cf UI Comments: Minor style adjustments and fixes. 2019-04-02 13:53:55 +02:00
b96731a939 UI jstree: Fix collapse of folders with one click.
Two clicks is too much work. It was removed by mistake in a previous commit.
2019-04-02 12:27:09 +02:00
4f5746e0b7 UI Page: style the Edit bar.
With light background color and border, so it stands out.
2019-04-01 14:53:57 +02:00
1d65ea9de0 UI Pages: Add page title. 2019-04-01 14:53:57 +02:00
c31ef97c9e UI Timeline: scale the placeholder to almost fit the screen.
So the timeline has some initial height (75% of viewport height), and
once the content shows up the page doesn't jump much.
2019-04-01 14:53:57 +02:00
3906bab2ac Cleanup: Tweak comments and sort classes. 2019-04-01 14:53:57 +02:00
c93393ad10 Export vue component user-avatar 2019-04-01 14:25:45 +02:00
a37aec61b2 Vue getting started links 2019-04-01 11:23:25 +02:00
1b96c6e37e Added comments 2019-04-01 10:34:35 +02:00
119900337d Mark as deprecated and recommend Vue instead 2019-04-01 10:34:35 +02:00
1d476d03d7 UI Project: Show sidebar by default.
Change the logic to hide, instead.
2019-03-29 15:47:29 +01:00
77a7b15a73 Merge branch 'production' 2019-03-29 15:43:07 +01:00
562e21d57a UI Page: Set page url as title.
So it's highlighted in the navigation.
2019-03-29 15:35:19 +01:00
c80234bac2 UI Page: style node description with its own class.
Instead of relying on 'landing'.
2019-03-29 15:34:56 +01:00
f31253dd17 UI Pages: Show Edit Post link. 2019-03-29 15:19:28 +01:00
46bbd1297b UI Pages: Only show header div if there is a picture. 2019-03-29 15:19:28 +01:00
5556bfee52 UI Page: Style like a regular page, not like the landing template (dark background). 2019-03-29 15:19:28 +01:00
72a42c2bf8 Template Cleanup: Remove unused 'title' variable.
'title' is set by the extended template ('landing').
2019-03-29 15:19:28 +01:00
da337df82b HACK to get page editing to not 500 Internal Server Error on us 2019-03-29 15:06:21 +01:00
50aec93515 HACK to get page editing to not 500 Internal Server Error on us 2019-03-29 14:54:20 +01:00
4187d17f1f Formatting 2019-03-29 14:54:20 +01:00
ba299b2a4c Documentation of es6 transcompile and packaging 2019-03-29 10:44:04 +01:00
c8adfc5595 UI Jstree: Small padding and height adjustment of anchors. 2019-03-28 21:15:22 +01:00
50d17de278 UI Project: move sticky breadcrumbs when sidebar is visible. 2019-03-28 20:59:39 +01:00
f72c1fffca UI Jstree: Spacing and style adjustments. 2019-03-28 20:59:04 +01:00
afc8acff83 Breadcrumbs: Take into account breadcrumbs when scaling project container. 2019-03-28 20:57:59 +01:00
4c857e63b2 UI: Toggle project sidebar logic. 2019-03-28 20:46:52 +01:00
48cb216c4a Removed unnecessary <template> element
Vue.js uses `<template>` when we don't want to output an element but still
want to set some attributes (like `v-if`) on a piece of text. Since we're
outputting a `<span>`, we can just move the attributes there.
2019-03-28 16:40:01 +01:00
1fd17303a5 Breadcrumbs: emit 'navigate' event when clicking on the link
Clicking on a breadcrumb no longer follows the link, but keeping it as
a link means users can still open it in a new tab.
2019-03-28 16:38:28 +01:00
d5a4c247b0 Breadcrumbs: Initial styling. 2019-03-28 16:03:50 +01:00
a3b8a8933c Breadcrumbs: Use <span> element in last item (_self).
To be able to style it similarly to the links, but without a link.
2019-03-28 16:03:24 +01:00
5c8181ae41 Refactored Date columns to have a common base 2019-03-28 14:36:30 +01:00
ff43fa19fd Add Created and Updated column 2019-03-28 12:48:45 +01:00
f73b7e5c41 Corrected comment 2019-03-28 12:40:33 +01:00
c089b0b603 Added little clarification 2019-03-28 12:40:33 +01:00
4499f911de Node breadcrumbs
Breadcrumbs are served as JSON at `/nodes/{node ID}/breadcrumbs`, with
the top-level parent listed first and the node itself listed last:

    {breadcrumbs: [
        ...
        {_id: "parentID",
         name: "The Parent Node",
         node_type: "group",
         url: "/p/project/parentID"},
        {_id: "deadbeefbeefbeefbeeffeee",
         name: "The Node Itself",
         node_type: "asset",
         url: "/p/project/nodeID",
         _self: true},
    ]}

When a parent node is missing, it has a breadcrumb like this:

    {_id: "deadbeefbeefbeefbeeffeee",
     _exists: false,
     name: "-unknown-"}

Of course this will be the first in the breadcrumbs list, as we won't be
able to determine the parent of a deleted/non-existing node.

Breadcrumbs are rendered with Vue.js in Blender Cloud (not in Pillar);
see projects/view.pug.
2019-03-28 12:40:33 +01:00
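For illustration, fetching and printing the breadcrumbs JSON described above (the host is a placeholder, and the node ID reuses the example value):

    import requests

    resp = requests.get(
        'https://cloud.example.com/nodes/deadbeefbeefbeefbeeffeee/breadcrumbs')
    resp.raise_for_status()
    for crumb in resp.json()['breadcrumbs']:
        if not crumb.get('_exists', True):
            print('-unknown- (parent no longer exists)')
        else:
            suffix = ' (self)' if crumb.get('_self') else ''
            print(crumb['name'] + suffix)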
465f1eb87e Store filter/column settings in localStorage
The filter and column settings in tables are stored per project and
context in the browser's localStorage. This makes the table keep the
settings even if the browser is refreshed or restarted.

The table emits a "componentStateChanged" event containing the table's
current state (filter/column settings), which is then saved by the top
level component.
2019-03-28 10:29:13 +01:00
f6056f4f7e UI: New mixin component for listing categories.
For e.g. Blender Cloud's Learn, Libraries, etc.
2019-03-27 15:51:41 +01:00
64cb7abcba Removed unused imports 2019-03-27 15:51:24 +01:00
1f671a2375 Update package-lock.json
The current packages were failing to build libsass on macOS.
2019-03-27 14:22:33 +01:00
898379d0d3 UI: Font-size tweak for node description in timeline. 2019-03-27 14:11:05 +01:00
87ff681750 UI: Font-size tweak to node description for blog and project. 2019-03-27 14:09:48 +01:00
db11b03c39 Fix typo 2019-03-27 12:12:17 +01:00
1525ceafd5 Fix for find_markdown_fields project hook
Original commit 3b59d3ee9aacae517b06bf25346efa3f2dae0fe7
Breaking commit 32e25ce129612010a4c14dfee0d21d1a93666108

The breaking commit was actually meant to remove the need for this
hook logic entirely, by relying on a custom validator instead.
This works for nodes, but it currently does not work for projects.
The issue needs to be further investigated via T63006.
2019-03-27 12:12:17 +01:00
9c1e345252 Newline at end of file 2019-03-27 12:12:17 +01:00
237c135c31 UI Timeline: support for dark backgrounds.
Simply place the +timeline(project_id) mixin inside a div with a 'timeline-dark' class.
2019-03-27 12:07:06 +01:00
85706fc264 Updated bug report URLs
The project was apparently moved. The issues are closed, too, though, so
we could at some point check whether our workarounds can be removed.
2019-03-27 11:58:48 +01:00
4cd182e2d2 Cleanup: spaces to tabs. 2019-03-27 11:19:11 +01:00
69806d96a9 UI: Narrower column for text in jumbotron component.
Leaves some room to see the image on the right.
2019-03-27 11:04:39 +01:00
4977829da7 Cleanup: Remove legacy Bootstrap 3 minified CSS file.
* Our Pillar apps now use Bootstrap 4.
* Pillar builds its own CSS from Bootstrap 4 components (from node_modules)
2019-03-26 18:31:54 +01:00
cd94eb237f Cleanup: One indentation level too much. 2019-03-26 17:45:33 +01:00
97cda1ef6b UI: Fix hidden fields showing up in project edit.
The 'hidden' class got renamed to d-none in Bootstrap 4.
2019-03-26 15:21:15 +01:00
5cba6f53f5 Make sure sort buttons are always clickable
Hide overflowing parts of the column label if there is not enough room
2019-03-22 14:10:18 +01:00
072a1793e4 Add missing tooltips in table 2019-03-22 14:07:29 +01:00
375182a781 Add css class per task type to table columns 2019-03-22 14:06:54 +01:00
022fc9a1b2 Removed possibility to toggle selected in table 2019-03-22 14:06:17 +01:00
6c4e6088d3 UI: Vertically center badges under comment avatar. 2019-03-21 01:03:59 +01:00
5aed4ceff7 Avoid emitting duplicate selectedItemsChanged 2019-03-20 15:19:37 +01:00
dfd61c8bd8 Update pillar table props 2019-03-20 15:18:50 +01:00
6bae6a39df Mark pillar table rows as corrupt if init fails 2019-03-20 15:14:50 +01:00
66e6ba1467 Move table css from attract to pillar repo 2019-03-20 15:12:19 +01:00
a104117618 Added pillar.auth.cors.allow() decorator
Use this decorator on Flask endpoints that should respond with CORS
headers. These headers are sent in a reply when the browser sends an
`Origin` request header; for more info see [1].

This commit rolls back the previous commit (0ee1d0d3), as this new
approach with a separate decorator is both easier to use and less
error-prone.

[1] https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS
2019-03-19 10:55:15 +01:00
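For illustration, how the decorator might be applied (the exact signature of `pillar.auth.cors.allow()` is an assumption):

    from flask import Blueprint, jsonify
    from pillar.auth import cors

    blueprint = Blueprint('example', __name__)

    @blueprint.route('/api/example')
    @cors.allow()  # adds CORS headers when the browser sends an Origin header
    def example_endpoint():
        return jsonify(status='ok')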
0ee1d0d3da Allow HTTP headers to be set for @require_login() error responses
This makes the `require_login` decorator always return a Flask response.
Previously it could also raise a `Forbidden` exception; now it returns a
403 Forbidden response in that case too.
2019-03-18 14:42:00 +01:00
cfff5ef189 Fixed redirects ignoring the `next_after_login` session variable
There were a few redirects (for example, trying to log in while already
logged in) that would incorrectly redirect to the main page. They use the
`next_after_login` session variable now.
2019-03-18 14:37:20 +01:00
58ff236a99 Generalized table to not depend on project id 2019-03-15 10:18:23 +01:00
ace091c998 Row selection before table fully inited failed
If a row was selected before the table was fully initialized, it would
be unselected once the table finished initializing.
2019-03-14 10:53:47 +01:00
4136da110f Added comments and minor refactoring 2019-03-14 10:53:46 +01:00
01da240f54 Attract multi edit: Shift + mouse to select all between
and hopefully the Command key on Mac now works for multiselect.
2019-03-13 15:27:16 +01:00
379f743864 Attract multi edit: Edit multiple tasks/shots/assets at the same time
For the user:
Ctrl + L-Mouse to select multiple tasks/shots/assets and then edit
the nodes as before. When multiple items are selected a chain icon
can be seen in editor next to the fields. If the chain is broken
it indicates that the values are not the same on all the selected
items.

When a field has been edited it will be marked with a green background
color.

The items are saved one by one in parallel. This means that one item
could fail to be saved, while the others get updated.

For developers:
The editor and activities have been ported to Vue. The table has
been updated to support multi-select.

MultiEditEngine is the core of the multi edit. It keeps track of
which values differ and what has been edited.
2019-03-13 13:53:40 +01:00
d22c4182bf UI: Align 'Linked' comment tag with comment metadata. 2019-03-12 20:27:30 +01:00
69251de995 UI: Set max-width variable for select2. 2019-03-12 14:27:29 +01:00
57a180dc00 UI: Don't set font-size on node-details-description.
This is used for comments, nodes, everywhere. So each component should set
its own size.
2019-03-12 14:27:06 +01:00
12d8a282aa Fix T62049: Wrong sorting of comment replies 2019-03-11 10:32:40 +01:00
fbcd4c9250 UI: Fix emojis margin-top on node description utility. 2019-03-11 03:12:07 +01:00
a3f58ef8fe Bumped some secondary requirements
The cryptography package was getting old, and since Flamenco is going to
issue JWT tokens soon, I wanted to be up to date with security fixes.

Also requires updating pillar-python-sdk.
2019-03-07 17:39:06 +01:00
c7b0842779 CSS: Remove primary buttons gradient.
Doesn't always look nice; fall back to the default Bootstrap primary color instead.
2019-02-28 03:55:01 +01:00
5bcfa5218a UI: Minor style fixes to node-details-description.
Blockquotes and unordered lists could have the first line badly indented
since we introduced single-line comments. Now they both break the line
before being displayed.
2019-02-23 02:17:39 +01:00
da14d34551 Added jinja filter pretty_duration_fractional that includes milliseconds 2019-02-21 17:38:37 +01:00
32e25ce129 Notifications regression: Notifications not created
Notifications for when someone posted a comment on your node
were not created.

Root cause was that default values defined in the schema were not set,
resulting in activity subscriptions not being active.
There were 2 bugs preventing them from being set:
* The way the caching of markdown as html was implemented caused
  default values not to be set.
* Eve/Cerberus regression causes nested default values to fail
  https://github.com/pyeve/eve/issues/1174

Also, a 3rd bug caused nodes without a parent not to have a
subscription.

Migration scripts:
How markdown fields are cached has changed, and unused properties
of attachments have been removed.
./manage.py maintenance replace_pillar_node_type_schemas

Set the default values of activities-subscription
./manage.py maintenance fix_missing_activities_subscription_defaults
2019-02-19 14:16:28 +01:00
250c7e2631 Vue Attract: Default sort shots by cut_in_timeline_in_frames 2019-02-12 12:59:01 +01:00
2f5f73843d Vue Attract: Sort/filterable table based on Vue
Initial commit implementing sortable and filterable tables for attract
using Vue.
2019-02-12 09:08:37 +01:00
a5bae513e1 Navigation: Unified cloud navigation
* Removed main drop down menu
* Added "My cloud" to user menu
* Attract/Flamenco is found under Production Tools menu
* Attract/Flamenco has the same navigation as its project
2019-02-06 10:31:36 +01:00
1101b8e716 Fix Regression: Heart filled icon was shown on all voted comments
Heart filled icon should be an indication that the current user has
voted. Thanks to Pablo Vazques for pointing it out
2019-02-04 10:16:50 +01:00
f35c2529a6 UI: Make blog title link to the actual blog entry 2019-02-02 04:03:39 +01:00
ecfd27094c UI: Blog title in timeline more prominent 2019-02-02 04:01:56 +01:00
f531685ba8 Updated unit test for FFmpeg 4 2019-01-31 14:57:38 +01:00
ef89b9a1dd CSS: Increase space between avatar and content 2019-01-30 23:15:29 +01:00
c505694b2d Formatting 2019-01-30 23:12:35 +01:00
3b59d3ee9a Projects Bug: Projects page not showing project description
Cache field _description_html was never updated when a project was
inserted/updated. Added an Eve hook similar to how this cache works
with Nodes.
2019-01-21 14:48:40 +01:00
5eae0f6122 Added convenience url_for() wrapper for use in unit tests 2019-01-08 19:07:14 +01:00
b5a74ce7b9 Utility function for easily getting the project URL given its ID 2019-01-08 19:06:56 +01:00
a32fb6a208 Storage: added function for setting content type, encoding, and attachmentness
These are used by Flamenco to store task logs as gzipped text files, but to
send them to the browser with HTTP headers that let the browser gunzip
them and display them directly (rather than having to download & gunzip yourself).
2019-01-08 15:07:47 +01:00
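For illustration, the HTTP headers this enables, so the browser gunzips and displays a task log inline (the Pillar function for setting them isn't named in the log, so only the headers are sketched):

    headers = {
        'Content-Type': 'text/plain; charset=utf-8',
        'Content-Encoding': 'gzip',       # browser gunzips transparently
        'Content-Disposition': 'inline',  # display instead of download
    }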
974ac6867c Moved storage backend names to a module-global constant
This allows others to import the constant and have proper 'allowed' values
for backends. This will be used by Flamenco for storing task logs.
2019-01-08 14:45:55 +01:00
a756632cad Added pillar.api.projects.utils.storage(project_id) function
For now this returns a bucket in the default storage backend, since
individual projects do not have a 'storage backend' setting (this is
set per file, not per project).
2019-01-08 14:13:30 +01:00
c28d3e333a Storage backends: removed unused Blob.filename attribute
Just use Blob.update_filename() instead.
2019-01-08 14:12:49 +01:00
004bd47e22 Gulp fix for NodeJS 10 2019-01-04 14:20:16 +01:00
64bd2150a4 AbstractPillarTest.create_valid_auth_token() now also accepts string user ID
Strings were already passed to this function, even though it was declared
as taking an ObjectID. Instead of updating all callers, I just made it
convert strings to ObjectID.
2019-01-04 12:46:37 +01:00
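A sketch of the string-to-ObjectId coercion described above (the helper name is made up):

    from bson import ObjectId

    def _to_object_id(user_id) -> ObjectId:
        # Accept both str and ObjectId, as callers already passed strings.
        return user_id if isinstance(user_id, ObjectId) else ObjectId(user_id)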
a23e063002 Don't use attr.ib to declare a logger
This doesn't work well when overriding in subclasses; it keeps using the
superclass logger. Simply returning a logger fixes this.
2019-01-04 12:45:47 +01:00
903fbf8b0d Missing import & typo 2018-12-20 13:08:23 +01:00
beac125ff9 Nicer logging when refreshing file links 2018-12-20 12:51:53 +01:00
ef259345ce Formatting 2018-12-20 12:51:32 +01:00
b87c5b3728 User Search Bug: Failed to render users without roles 2018-12-20 11:37:30 +01:00
efeea87249 Markdown preview regression: Markdown preview failed in edit project 2018-12-18 17:38:04 +01:00
fb28059ae7 Rebuilt package-lock.json with Node 10 / NPM 6.4 2018-12-18 15:39:18 +01:00
a84d4d13a0 DnD fileupload in comments in Firefox bug: CSS seems to be the cause 2018-12-18 15:04:08 +02:00
cb265e1975 Formatting 2018-12-18 12:53:06 +01:00
5b3de5f551 Missing JS parameter 2018-12-18 12:53:02 +01:00
fbcce7a6d8 Vue Comments: Comments ported to Vue + DnD fileupload
* Drag and drop files to comment editor to add a file attachment
* Using Vue to render comments

Since comments now have attachments, we need to update the schemas:
./manage.py maintenance replace_pillar_node_type_schemas
2018-12-12 11:45:47 +01:00
bba1448acd Added two more maintenance cmds for finding & fixing projectless files
This is about fixing file documents that do not have a `project` key at
all. Those were deleted by the `delete_projectless_files` management
command and restored manually. These commands can fix those file
documents properly, by checking which project they're referenced in, and
setting their `project` property.

Finding the references (`manage.py maintenance find_projects_for_files`)
is a heavy operation as it inspects all nodes and all projects. This can
be done offline on a cloned database, and the result stored in a JSON
file. This JSON file can then be processed on the production server
(`manage.py maintenance fix_projects_for_files /path/to/file.json --go`)
to perform the fix.
2018-12-05 14:23:34 +01:00
da7dc19f66 Expanded test for delete_projectless_files CLI command
It now also checks that _updated and _etag have been updated correctly,
and that the other properties haven't been touched.
2018-12-04 18:03:13 +01:00
de8633a5a4 Formatting 2018-12-04 17:44:35 +01:00
de5c7a98a5 Added CLI command for soft-deleting projectless files
Run `./manage.py maintenance delete_projectless_files --help` for more info.
2018-12-04 17:44:29 +01:00
ac092587af Switch Celery broker from RabbitMQ to Redis
This should work around a bug in Celery where long Celery tasks would
time out and be re-queued, causing an infinite loop.

See https://github.com/celery/celery/issues/3430 for more info.
2018-12-04 10:22:20 +01:00
a10b42afe6 Find only non-deleted comments 2018-12-03 22:56:20 +01:00
6377379144 Fix T58116: Timeline does not exclude Posts with 'pending' status 2018-11-28 16:58:24 +01:00
82071bf922 Quick Search: Queries containing an equals sign (=) failed 2018-11-27 10:00:44 +01:00
1c0476699a Update default comments sorting
Confidence is not necessary, as we only allow rating_positive.
2018-11-26 23:48:52 +01:00
411a6f75c5 Change default comments sorting
Comments were sorted by descending creation date. Now they are sorted by
descending confidence and descending creation date.
2018-11-26 19:48:12 +01:00
07821c7f97 Timeline Firefox bug fix: load more not working properly
Firefox failed to redraw the page properly when loading more weeks.
2018-11-23 14:55:58 +01:00
64b4ce3ba9 Minor layout and style adjustments. 2018-11-22 21:52:07 +01:00
72417a9abb Minor layout and style adjustments. 2018-11-22 21:35:27 +01:00
6ae9a5ddeb Quick-Search: Added Quick-search in the topbar
Changed how and what we store in Elastic to unify it with how we store
things in MongoDB, so we can have more generic JavaScript code
to render the data.

Elastic changes:
  Added:
  Node.project.url

  Altered to store id instead of url
  Node.picture

  Made Post searchable

./manage.py elastic reset_index
./manage.py elastic reindex

Thanks to Pablo and Sybren
2018-11-22 15:31:53 +01:00
a897e201ba Timeline Fix: Attachment in post did not work 2018-11-22 14:39:25 +01:00
3985a00c6f Timeline: Style and layout adjustments 2018-11-21 20:32:27 +01:00
119291f817 Timeline: Remove header and lead from posts.
Headers don't really match with the rest of the listing.
2018-11-21 20:24:12 +01:00
801cda88bf Project View: Labels for sections 2018-11-21 20:23:07 +01:00
fc99713732 Project-Timeline: Introduced timeline on projects
Limited to projects of category assets and film for now.
2018-11-20 16:29:01 +01:00
1d909faf49 CSS: Override margin-bottom for emoji images. 2018-11-16 23:57:00 +01:00
ed35c54361 CSS: Fix alignment on list with custom bullets. 2018-11-16 23:57:00 +01:00
411b15b1a0 Pin versions in package.json
This should lead to predictable results when running ./gulp.
2018-11-16 15:45:46 +01:00
9b85a938f3 Add npm deps: acorn and glob 2018-11-16 14:31:46 +01:00
989a40a7f7 Add missing dependency for transpiling es6 2018-11-16 14:06:50 +01:00
64cc4dc9bf Bug fix: Sharing files failing
Found using Sentry
2018-11-16 12:46:30 +01:00
9182188647 CSS: Minor style tweaks to user login.
Don't use hardcoded white color for container-box mixin.
2018-11-16 12:38:40 +01:00
5896f4cfdd CSS: Use generic colors for inputs border colors.
More reliable when theming.
2018-11-16 02:31:13 +01:00
f9a407054d CSS: Fix emoji set as block.
When parent styling set images to be block, emoji should always be inline.
2018-11-15 23:54:16 +01:00
1c46e4c96b CSS: Fix !default setting in config 2018-11-14 02:06:22 +01:00
2990738b5d Lazy Home: Lazy load latest blog posts and assets and group by week and
project.

Javascript tutti.js and timeline.js are needed, and then the following to
init the timeline:

$('.timeline')
    .timeline({
        url: '/api/timeline'
    });

# Javascript Notes:
## ES6 transpile:
* Files in src/scripts/js/es6/common will be transpiled from
modern es6 js to old es5 js, and then added to tutti.js
* Files in src/scripts/js/es6/individual will be transpiled from
modern es6 js to old es5 js to individual module files
## JS Testing
* Added the Jest test framework to write javascript tests.
* `npm test` will run all the javascript tests

Thanks to Sybren for reviewing
2018-11-12 12:57:25 +01:00
e2432f6e9f NPM: Upgrade to Gulp 4
No functional changes. Slightly faster thanks to parallel tasks, and more future-proof.
2018-11-10 01:08:30 +01:00
aa63389b4f Remove duplicated file
The file was copy-pasted in api/search.
2018-11-04 11:48:08 +01:00
5075cd5bd0 Introducing Flask Debug Toolbar
Display useful information for debugging.
2018-11-01 02:19:13 +01:00
ceef04455c Video player in project header bug (firefox):
Unable to play videos in the project header in Firefox.

Reason:
Firefox is missing ResizeObserver, so as a workaround videoJs inserts an
iframe below the video and listens to resize events on that. This iframe
lands in front of the video when we use the class ".embed-responsive",
and therefore we cannot start the video.

Solution:
I could not see any difference in how the page was rendered
with/without this class so I removed it.
2018-10-24 13:34:08 +02:00
c8e62e3610 Loading bar: Introduced two event listeners on window 'pillar:workStart' and 'pillar:workStop' that (de)activate the loading bar.
Reason:
* To decouple code
* Have the loading bar active until the whole page has stopped working
* Have local loading info

Usage:
$('.myClass')
   .on('pillar:workStart', function(){
    ... do stuff locally while loading ...
    })
   .on('pillar:workStop', function(){
   ... stop doing stuff locally once loading is done ...
   })

$('.myClass .mySubClass').trigger('pillar:workStart')
... do stuff ...
$('.myClass .mySubClass').trigger('pillar:workStop')
2018-10-23 13:57:02 +02:00
ce7cf52d70 Refresh badges every 10 minutes
Now that they are new, they should be snappy!
2018-10-11 10:04:16 +02:00
dc2105fbb8 Enabled badges in comments 2018-10-10 16:55:10 +02:00
71185af880 Added json jinja filter for debugging purposes 2018-10-10 16:55:10 +02:00
041f8914b2 Show badges on user profile page 2018-10-10 16:55:06 +02:00
b4ee5b59bd Sync Blender ID badge as soon as user logs in
This adds a new Blinker signal `user_logged_in` that is only sent when
the user logs in via the web interface (and not on every token
authentication and every API call).
2018-10-10 16:54:58 +02:00
314ce40e71 Send logged-in user in user_authenticated signal 2018-10-10 15:30:35 +02:00
7e941e2299 Added TODOs and removed fetching unused field from MongoDB 2018-10-10 14:40:45 +02:00
53811363ce Search bug fix: Missing video plugins resulted in wrong volume and progress. 2018-10-05 14:37:32 +02:00
51057e4d63 Search bug fix: Grid/List toggle on group nodes also affected the way search results were presented 2018-10-05 12:37:48 +02:00
a1a48c1941 Elasticsearch: Added documentation on how to set the indexing. 2018-10-05 11:35:02 +02:00
19fdc75e60 Free assets: Assets should not be advertised as free if the user is a logged-in subscriber. 2018-10-04 17:44:08 +02:00
879bcffc2b Asset list item: Don't show user.full_name in latest and random assets 2018-10-04 12:30:05 +02:00
6ad12d0098 Video Duration: The duration of a video is now shown on thumbnails and below the video player
Asset nodes now have a new field called "properties.duration_seconds". This holds a copy of the duration stored on the referenced video file and stays in sync using Eve hooks.

To migrate existing duration times from files to nodes you need to run the following:
./manage.py maintenance reconcile_node_video_duration -ag

There are 2 more maintenance commands to be used to determine if there are any missing durations in either files or nodes:
find_video_files_without_duration
find_video_nodes_without_duration

FFProbe is now used to detect what duration a video file has.

Reviewed by Sybren.
2018-10-03 18:30:40 +02:00
a738cdcad8 Fix and tweaks to theatre mode
* Only show width/height if available (would be None otherwise)
* If image width/height is not available, allow zooming
* Fix styling and cleanup
* Remove footer (reported by Vulp35 on Twitter, thanks!)
2018-10-01 11:56:52 +02:00
199f37c5d7 Tagged Asset: Added metadata
Video duration, Project link and pretty date
2018-09-26 11:29:15 +02:00
4cf93f00f6 Assets: Fix video progress not showing 2018-09-24 13:31:48 +02:00
eaf9235fa9 Fix users listing styling 2018-09-21 17:11:26 +02:00
24ecf36896 CSS: Brighter primary button 2018-09-21 16:51:45 +02:00
86aa494aed CSS: Use 3 cards even on media-xl 2018-09-21 16:25:48 +02:00
5a5b97d362 Introducing Main Dropdown navigation for mobile 2018-09-21 16:13:50 +02:00
831858a336 CSS: Make buttons use bootstraps' variable for roundness 2018-09-21 16:13:50 +02:00
e9d247fe97 Added assertion in test to verify that the asset was deleted 2018-09-21 14:24:37 +02:00
1ddd8525c7 Remove references to node from projects when the node is deleted.
Removes node references in project fields header_node, nodes_blog, nodes_featured, nodes_latest.
2018-09-21 14:23:47 +02:00
c43941807c Node details: Center only on landing 2018-09-21 12:11:11 +02:00
bbad8eb5c5 Remove unused project macros file
The only macro was render_secondary_navigation, which is in the _navigation.pug
template together with the other Blender Cloud navigation macros.
2018-09-20 16:38:17 +02:00
04f00cdd4f Loading Bar: Utility to turn it on/off 2018-09-20 15:20:29 +02:00
66d9fd0908 Center node-details-description 2018-09-20 12:15:08 +02:00
516ef2ddc7 Navigation: if category is Assets, then call it Libraries 2018-09-20 12:10:35 +02:00
35fb07ee64 Navigation: Move marker on left side
On the right it looks like a scrollbar.
2018-09-20 12:10:09 +02:00
f1d67894dc Rename secondary_navigation to navigation_project 2018-09-20 12:05:46 +02:00
aef2cf8c2d Navigation: Fix notification number 2018-09-19 19:43:49 +02:00
d347ddac2c Navigation: Films -> Open Projects
And show navigation when in the Blog
2018-09-19 19:33:01 +02:00
186ba167f1 Navigation: remove extra 's' for assets project
Such a lame solution. We need better categories.
2018-09-19 19:09:04 +02:00
847e97fe8c Project: remove arrow left/right navigation hotkey 2018-09-19 18:33:53 +02:00
7ace5f4292 Search: use proper navigation
Also remove failing projectBrowseTypeList js
2018-09-19 18:22:27 +02:00
6cb85b06dc Project: Dark navbar for edit project 2018-09-19 18:21:47 +02:00
5c019e8d1c Landing: Set project title as active 2018-09-19 15:50:23 +02:00
7796179021 Navigation: Position icons 2018-09-19 15:42:18 +02:00
26aca917c8 Use correct permission format for gulp-chmod 2018-09-19 14:45:43 +02:00
e262a5c240 Jumbotron: take content if defined in the block 2018-09-19 12:39:18 +02:00
e079ac4da1 CSS adjustments to dropdowns, cards, responsive 2018-09-19 11:33:20 +02:00
83097cf473 Projects: Explore -> Browse 2018-09-18 18:53:55 +02:00
f4ade9cda7 Allow empty content for card-deck component
In cases like the tag groups we want an empty card-deck because its
content is filled up via JavaScript.
2018-09-18 16:56:08 +02:00
31244a89e5 Merge branch 'master' into production 2018-09-18 15:50:55 +02:00
749c3dbd58 Gulp: Add bootstrap's collapse and alert js to tutti 2018-09-18 15:25:20 +02:00
b1d97e723f Cards: Smaller ribbon for vertical aligned cards 2018-09-18 15:25:20 +02:00
46bdd4f51c Pages: Don't show date and page title
It's already in the jumbotron
2018-09-18 15:25:20 +02:00
93720e226c Badges: don't display them just yet 2018-09-18 15:25:20 +02:00
9a0da126e6 Fix failing tests
Failure was due to a new ‘slug’ key in the link dict.
2018-09-18 15:14:27 +02:00
45672565e9 Card style fixes 2018-09-18 12:53:34 +02:00
3e1273d56c CSS: zoom-in cursor utility 2018-09-18 12:49:06 +02:00
fe86f76617 Search: styling 2018-09-17 19:04:42 +02:00
008d9b8880 Comments: padding 2018-09-17 18:35:04 +02:00
13b606df45 CSS cleanup and use classes for styling 2018-09-17 18:16:42 +02:00
57f5836829 Cleanup and replace custom styles with bootstrap classes. 2018-09-17 17:08:46 +02:00
e40ba69872 Project style adjustments. 2018-09-17 17:07:10 +02:00
0aeae2cabd Navigation: Highlight current page in the navbar 2018-09-17 15:02:54 +02:00
601b94e23a Pages: Set title from page properties url 2018-09-17 15:02:24 +02:00
00c4ec8741 Navigation Links: Pass the slug
So we can style the items by comparing it to the page 'title'.
2018-09-17 15:01:57 +02:00
caee114d48 Posts: Remove unused title and pages 2018-09-17 15:01:23 +02:00
7fccf02e68 Posts: Pass navigation_links
Otherwise pages won't show up when looking at a project blog
2018-09-17 15:00:55 +02:00
1c42e8fd07 Nodes View: Remove unnecessary containers
#node-container and #node-overlay were not used.
2018-09-17 14:26:37 +02:00
77f855be3e Remove jQuery Montage
No longer used since we list assets with a macro.
2018-09-17 14:25:19 +02:00
cede3e75db Remove more Markdown references 2018-09-17 13:47:03 +02:00
02a7014bf4 Cleanup and title-underline utility 2018-09-17 12:54:07 +02:00
04e51a9d3f CSS: Break to large size a bit earlier 2018-09-17 12:53:25 +02:00
d4fd6b5cda Asset Listing: display author name (when available) 2018-09-17 12:52:48 +02:00
2935b442d8 Remove outdated remarkdown_comments management command 2018-09-17 09:14:11 +02:00
567247f3fd Rename hooks.py to eve_hooks.py
Follow naming convention started in Attract and Flamenco.
2018-09-17 09:09:46 +02:00
def52944bf CSS tweaks for embeds, videos and iframe 2018-09-16 23:56:31 +02:00
8753a12dee Tweak unit test to support new embed code 2018-09-16 22:04:22 +02:00
77e3c476f0 Move node hooks into own file 2018-09-16 13:04:12 +02:00
842ddaeab0 Assets: Display similar assets based on tags
Experimental.
2018-09-16 06:29:19 +02:00
85e5cb4f71 Projects: Only display category for public projects 2018-09-16 05:02:52 +02:00
6648f8d074 Minor style adjustments 2018-09-16 05:02:16 +02:00
a5bc36b1cf Jumbotron overlay is now optional.
Just add the jumbotron-overlay class, or jumbotron-overlay-gradient
2018-09-16 04:28:11 +02:00
e56b3ec61f Use Pillar's built-in markdown when editing projects/creating posts. 2018-09-16 04:27:24 +02:00
9624f6bd76 Style pages 2018-09-16 04:05:37 +02:00
4e5a53a19b Option to limit card-deck to a maximum N columns
Only 3 supported for now
2018-09-16 03:42:48 +02:00
fbc7c0fce7 CSS: media breakpoints
from Bootstrap and added a couple more for super big screens
2018-09-16 03:39:54 +02:00
bb483e72aa CSS cleanup (blog, comments) 2018-09-16 03:05:34 +02:00
baf27fa560 Blog: Fix and css cleanup 2018-09-16 02:04:14 +02:00
845ba953cb Make YouTube shortcode embeds responsive
Part of T56813
2018-09-15 22:32:03 +02:00
e5b7905a5c Project: Sort navigation links
See T56813
2018-09-15 22:12:12 +02:00
88c0ef0e7c Blog: fixes and tweaks 2018-09-15 21:32:54 +02:00
f8d992400e Extend attachment shortcode rendering
The previous implementation only supported rendering
attachments within the context of a node or project document.
Now it also supports node.properties. This is a temporary
solution, as noted in the TODO comments.
2018-09-15 19:01:58 +02:00
263d68071e Add view_progress to nodes of type asset 2018-09-15 17:59:30 +02:00
0f7f7d5a66 Profile styling, layout and cleanup. 2018-09-15 16:42:29 +02:00
6b29c70212 Navigation menu: Style see-more items 2018-09-15 06:16:06 +02:00
07670dce96 Fix view type list for folders 2018-09-15 05:50:42 +02:00
fe288b1cc2 Typo 2018-09-15 05:50:10 +02:00
2e9555e160 Layout and style for new global menu. 2018-09-15 05:41:15 +02:00
b0311af6b5 CSS: $primary-accent color and gradient utils 2018-09-15 05:40:29 +02:00
35a22cab4b Fix wrong url 2018-09-14 23:12:02 +02:00
0055633732 Blog: Styling and cleanup 2018-09-14 20:30:04 +02:00
78b186c8e4 Blog: Unify all post viewing in one template
Over the years we went from site-wide blog, to project blog, to
post view inside a project, to full one-page post view. This led to
multiple ways of seeing the same content.

This commit brings all post related stuff to always use index.pug
(or index_archive if we are looking at blasts from the past).
2018-09-14 20:29:44 +02:00
232321cc2c Blog: Cleanup CSS 2018-09-14 17:29:13 +02:00
a6d662b690 Refactor render_secondary_navigation macro
* Use navigation_links instead of pages.
* Use secondary navigation mixin.
* Always include project category.
* Always include Explore tab.

Should be eventually moved to Blender Cloud repo.
2018-09-14 16:58:48 +02:00
32c7ffbc99 Move project-main to Blender Cloud
Also remove calls to project-landing; it is now part of project-main.
It was just a few lines of code, not worth a separate CSS file.
2018-09-14 16:56:35 +02:00
cfcc629b61 Update package-lock.json 2018-09-14 13:11:49 +02:00
8ea0310956 Remove old videojs 2018-09-14 01:58:30 +02:00
c1958d2da7 Gulp: task to move vendor scripts
Only videojs at the moment.
2018-09-14 01:57:55 +02:00
030c5494a8 Cleanup: jQuery and Bootstrap are now part of tutti
Also remove font loading from Google, we use system fonts now.
2018-09-14 00:52:58 +02:00
462f31406a Package.json: videojs as new dependency
So it's easier to keep track of the version number.
2018-09-14 00:52:58 +02:00
1a1f67cf00 Cleanup: Remove markdown js scripts
Pillar has its own way to convert markdown (commonmark via backend), so it
no longer needs these files.
2018-09-14 00:52:58 +02:00
8d5bdf04aa Mixins no longer used 2018-09-13 18:10:39 +02:00
9a9d15ce47 Generate project_navigation_links
This function generates a list of selected links for important nodes such
as Pages and Blog. This list of links is used in the templates to provide
high level navigation of a Project.
2018-09-13 16:35:53 +02:00
c795015a3c Remove blog and page node types from jstree
They will be visible in project_navigation_links (see next commit).
2018-09-13 16:35:53 +02:00
afda0062f5 Navbar: Padding for items 2018-09-12 19:00:29 +02:00
a97c8ffc93 Search: Layout and styling 2018-09-12 19:00:16 +02:00
c5fa6b9535 Sass: set project_nav-width sizes 2018-09-12 18:59:12 +02:00
2be41a7145 Show author badges on assets and comments
Comments layout is still broken, marked as TODO(Pablo).
2018-09-12 15:58:29 +02:00
e8fb77c39b Badge sync: also support removal of all badges
Removal is stored as '' for the HTML. This way there is still the expiry
date, which means we won't repeatedly check for changes.
2018-09-12 15:29:45 +02:00
40933d51cf Show badges to users in their profile settings 2018-09-12 15:02:19 +02:00
9a9ca1bf8b Synchronise badges with Blender ID
Synchronisation is performed in the background by the Celery Beat, every
10 minutes. It has a time limit of 9 minutes to prevent multiple refresh
tasks from running at the same time.

Synchronisation is also possible with the `manage.py badges sync` CLI
command, which can sync either a single user or all users.
2018-09-12 15:02:19 +02:00
0983474e76 Store Blender ID OAuth scopes in MongoDB + request badge scope too
This also changes the way we treat Blender ID tokens. Before, the Blender ID
token was discarded and a random token was generated & stored. Now the
actual Blender ID token is stored.

The Facebook and Google OAuth code still uses the old approach of generating
a new token. Not sure what the added value is, though, because once the
Django session is gone there is nothing left to authenticate the user and
thus the random token is useless anyway.
2018-09-12 15:02:19 +02:00
6bcce87bb9 Sort celery task modules alphabetically 2018-09-12 15:02:19 +02:00
1401a6168f Always use urljoin to construct Blender ID URLs 2018-09-12 15:02:19 +02:00
85eab0c6cb No longer hash auth tokens + store the token scopes
This partially reverts commit c57aefd48b10ca3cabc9df162bc32efa62a6a21e.
The code to check against hashed tokens remains, because existing tokens
should still work.

The unhashed tokens are necessary for fetching badges from Blender ID.
2018-09-12 15:02:19 +02:00
a753637e70 Thicker progress bar on cards 2018-09-11 19:45:42 +02:00
f87c7a25df Asset: style and cleanup listing
Font pillar aliases for asset icons
2018-09-11 19:37:22 +02:00
3ae16d7750 Tweaks to asset listing 2018-09-11 17:45:33 +02:00
c546dd2881 Video: new macro for showing video progress
Import video_progress_bar from '_macros/_asset_video_progress.html'
and pass it the video and current_user.
2018-09-11 16:11:05 +02:00
48df0583ab Layout and styling of asset groups 2018-09-11 15:16:37 +02:00
094d15116e Video progress: fixed issue in group node view_embed when the video was never watched 2018-09-11 15:01:11 +02:00
534d06ca8f Include video progress data in UserClass
See src/templates/nodes/custom/group/view_embed.pug for a crude example.
2018-09-11 14:06:45 +02:00
df078b395d Video progress: skip 'only reporting when paused' when forcing report
This ensures that the final pause at the end of a non-looping video is
also reported.
2018-09-11 14:06:45 +02:00
5df92ca4cf Use list-asset() mixin component for project index 2018-09-10 19:02:27 +02:00
ecace8c55b Navbar: style tweaks 2018-09-10 17:09:37 +02:00
bcacdfb7ea Project view: List of pages 2018-09-10 16:11:21 +02:00
d7fd90ded1 Videoplayer: Custom playback speed 2018-09-10 15:23:05 +02:00
b9268337c3 Videoplayer: Move loop functions outside of videojs() 2018-09-10 15:22:05 +02:00
9b62daec74 Search: Cleanup and minor fixes. 2018-09-10 11:56:31 +02:00
5cc5698477 Pillar Font: A couple new icons and update.
Also added comments on how to update this file in the future.
2018-09-10 11:55:59 +02:00
00ba98d279 Search: replace spinning loader with page-bar loader 2018-09-10 11:10:25 +02:00
e818c92d4e Assets: License style 2018-09-07 18:17:50 +02:00
612862c048 Use bootstrap classes where possible 2018-09-07 18:13:04 +02:00
6b3f025e16 Project Edit: Cleanup and styling 2018-09-07 17:21:02 +02:00
8a90cd00e9 Pug mixin components for jumbotron, secondary navigation and more. 2018-09-07 17:20:22 +02:00
17a69b973e Videoplayer: thicker progress bar 2018-09-07 14:55:42 +02:00
8380270128 Fixes on buttons/dropdown layout 2018-09-07 14:55:27 +02:00
35225a189d Replace #project-loading spinning icon with a .loader-bar 2018-09-07 14:55:04 +02:00
be98a95fc0 Assets: Fix download dropdown 2018-09-07 12:27:37 +02:00
95c1f913c6 Videoplayer small improvements
* Disable volume change on scroll
* Add L key shortcut to toggle loop
* Minor style fixes (missing font family)
2018-09-07 11:49:34 +02:00
9bcd6cec89 Cleanup and minor tweaks for apps with a sidebar
Like Attract or Flamenco
2018-09-06 18:18:22 +02:00
4532c1ea39 Updated package-lock.json 2018-09-06 16:09:25 +02:00
e19dd27099 API endpoint /api/nodes/tagged/<tag>
This endpoint returns nodes in public projects that have the given tag.
The returned JSON is cached for 5 minutes.
2018-09-06 15:42:50 +02:00
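For illustration, calling the endpoint above (the host and tag are placeholders; the shape of the returned JSON is an assumption, so inspect it before relying on it):

    import requests

    resp = requests.get('https://cloud.example.com/api/nodes/tagged/animation')
    resp.raise_for_status()   # response is cached server-side for 5 minutes
    nodes = resp.json()       # shape assumed; inspect before relying on it
    print(len(nodes), 'tagged nodes')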
f54e56bad8 Allow predefined tags on nodes
Whenever a node has a 'tags' property of type 'list' it will be handled as
if it has {'allowed': app.config['NODE_TAGS']} in the node type definition.
2018-09-06 15:42:20 +02:00
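A sketch of a node type definition that would trigger this behaviour (the node type itself is made up, and the 'dyn_schema' key is an assumption about Pillar's node type format):

    node_type = {
        'name': 'example_asset',
        'dyn_schema': {
            # A list property named 'tags' is treated as if it had
            # {'allowed': app.config['NODE_TAGS']} in its definition.
            'tags': {'type': 'list'},
        },
    }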
eb851ce6e1 Added some type declarations
I added those for a certain use that ended up not being committed, but
those declarations are useful anyway.
2018-09-06 15:42:20 +02:00
586d9c0d3b Create MongoDB indices at Pillar startup, and not at first request
This makes things a little more predictable, and allowed me to actually
find & fix a bug in a unittest.
2018-09-06 15:42:20 +02:00
ac23c7b00b Bootstrap popovers are no longer used. 2018-09-06 14:24:09 +02:00
811edc5a2a Gulp: generate sourcemaps when not in production 2018-09-06 14:14:15 +02:00
cb95bf989a Updated package.lock by running ./gulp 2018-09-06 13:44:03 +02:00
e4fa32b8e4 Fixed bug in attachment code 2018-09-06 13:36:01 +02:00
08bf63c2ee Merge branch 'wip-redesign'
# Conflicts:
#	src/templates/projects/view.pug
2018-09-06 13:30:24 +02:00
0baf5b38c3 Project view: dim title link 2018-09-06 12:52:54 +02:00
858a75af8d Pug: Move project home templates to blender-cloud
These are super hard-coded to the Cloud anyway.
2018-09-06 12:51:58 +02:00
6b1a5e24e8 Pug: Use templates from blender-cloud
Affects the following templates:

/projects/view.pug
/projects/index_dashboard.pug
/organizations/index.pug

A lot of this layout is hardcoded for blender-cloud anyway. Eventually
Pillar should have its own templates to use as starting point for building
other Pillar apps. This should be built using the minimal amount of code
possible and rely on styling possible via Bootstrap.
2018-09-06 12:46:33 +02:00
1500e20291 Blog: cleanup of layout and style
Simpler markup reusing bootstrap 4 classes.
2018-09-06 12:42:37 +02:00
d347534fea Pug: Move navigation macro to blender-cloud 2018-09-06 12:19:28 +02:00
4546469d37 Pug: Move blog macros to blender-cloud 2018-09-06 12:19:00 +02:00
b0d8da821f CSS: Blog cleanup 2018-09-06 12:11:18 +02:00
1821bb6b7d CSS general cleanup and minor style tweaks 2018-09-06 12:11:10 +02:00
278eebd235 Style jsTree 2018-09-06 12:06:14 +02:00
2777c37085 Style videoplayer. 2018-09-06 12:05:45 +02:00
5e07cfb9b2 Send the request URL to Sentry
Also removed some dead code.
2018-09-05 14:58:34 +02:00
bc16bb6e56 Send the request URL to Sentry
Also removed some dead code.
2018-09-05 14:54:30 +02:00
0fcafddbd1 Added unit test for creating comments
We had an issue creating comments, so I wrote a test for it. The test
succeeds on a new project, so the problem lies with the older projects.
In the end it was the comment node type that still had
`{'coerce': 'markdown'}`.
2018-09-05 14:54:08 +02:00
f29e01c78e Video player: remember volume in local storage 2018-09-04 12:16:24 +02:00
2698be3e12 Saving & restoring video watching progress
Video progress updates:

- Mark as 'done' when 90% or more is watched.
- Keep 'done' flag when re-watching.

The video progress is stored on these events, whichever comes first:

- Every 30 seconds of video.
- Every 10% of the video.
- Every pause/stop/navigation to another page.
- When we detect the video is looping.
2018-09-04 12:16:24 +02:00
9c2ded79dd CSS: Cleanup and simplification
Mainly to rely more on bootstrap styling
2018-08-31 19:32:17 +02:00
b4acfb89fa Layout: use bootstrap classes 2018-08-31 19:31:36 +02:00
3f8e0396cf VideoJS: don't use videojs.registerPlugin() to start Google Analytics
The `registerPlugin()` call should only be done once, and not for every
video shown.

This removes the warning about the 'analytics' plugin already being
registered, which you see when navigating from one video to another via
the JSTree.
2018-08-31 17:19:27 +02:00
05c488c484 Authentication: also accept user from session on API calls
When loading the user from the session, a CSRF check is performed.
2018-08-31 17:18:46 +02:00
33bd2c5880 Sass: Import modules on top level 2018-08-31 14:26:42 +02:00
76338b4568 Sass config: Bootstrap overrides 2018-08-31 14:24:25 +02:00
7405e198eb Use .displayAs() instead of .show()
Needed for CSS display to be set as inline-block instead of show()'s inline.
2018-08-31 14:23:23 +02:00
2332bc0960 jQuery: Small utility to set CSS display type
Showing elements with jQuery's native .show() sets display as 'inline',
but sometimes we need to set 'flex' or 'inline-block'.
2018-08-31 14:20:59 +02:00
ac3a599bb6 Gulp: build our own bootstrap js only using the needed modules.
At this point we only use tooltip and dropdown code, but we could use
tabs or carousels in the future. Just add them to the toUglify list.
2018-08-31 14:19:09 +02:00
814275fc95 Gulp: only chmod when running --production 2018-08-31 14:17:39 +02:00
40c19a3cb0 pillar.api.utils.utcnow() now truncates microseconds to milliseconds
MongoDB stores datetimes in millisecond precision, to keep datetimes the
same when roundtripping via MongoDB we now truncate the microseconds.
2018-08-31 11:26:32 +02:00
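A sketch of the idea, assuming nothing beyond what the commit message states (the exact Pillar implementation may differ):

    import datetime

    def utcnow() -> datetime.datetime:
        now = datetime.datetime.now(tz=datetime.timezone.utc)
        # Truncate microseconds to whole milliseconds, matching MongoDB's
        # millisecond precision so a DB roundtrip returns an equal datetime.
        return now.replace(microsecond=now.microsecond - now.microsecond % 1000)
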
a67527d6af Use app_context() instead of test_request_context()
There is no request context needed here.
2018-08-30 18:28:17 +02:00
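The distinction, sketched with an assumed Eve/Pillar application object `app`:

    # test_request_context() fakes an incoming HTTP request; app_context()
    # only binds the application, which is all that database access needs.
    with app.app_context():
        projects_coll = app.data.driver.db['projects']
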
791906521f Added a test context manager to log in when doing Flask test client requests 2018-08-30 18:27:55 +02:00
2ad5b20880 Quick hack to get /p/{url}/jstree working again
Apparently Eve is now stricter in checking against MONGO_QUERY_BLACKLIST,
and blocks our use of $regex when getting child nodes. See
`jstree.py::jstree_get_children()`
2018-08-30 13:59:23 +02:00
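The workaround amounts to shrinking Eve's query blacklist in the settings (the TODO near the end of the diffs below shows the actual change):

    # Eve's default blacklist is ['$where', '$regex']; allow $regex again
    # so jstree_get_children() can query with it.
    MONGO_QUERY_BLACKLIST = ['$where']
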
f6fd9228e5 Upgrade Celery (fixes a problem with workers not starting) 2018-08-30 12:31:54 +02:00
e9f303f330 Re-pinned dependency versions 2018-08-30 12:04:57 +02:00
00a7406a1e Ignore .pytest_cache 2018-08-30 11:00:36 +02:00
82aa521b5f Merge branch 'master' into wip-flask-one 2018-08-30 10:59:00 +02:00
f7220924bc Replaced deprecated call to collection.count() 2018-08-30 10:33:30 +02:00
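PyMongo 3.7 deprecated Collection.count() and Cursor.count(); the replacement takes the filter directly. A sketch, assuming a PyMongo database handle `db`:

    coll = db['projects']
    # Deprecated: coll.find(query).count() / coll.count(query)
    count = coll.count_documents({'category': 'home', '_deleted': False})
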
46b0d6d663 Upgrade npm dependencies
Change gulp-uglify for gulp-uglify-es which has support for ES6.

New dependencies:
* bootstrap
* jquery
* popper.js (required by bootstrap)
2018-08-29 16:30:17 +02:00
595bb48741 Silence warning of Flask-Caching about NULL cache during testing 2018-08-29 15:23:47 +02:00
1c430044b9 More urljoin() instead of string concatenation 2018-08-29 14:28:24 +02:00
73bc084417 Cerberus or Eve apparently changed validator._id to document_id 2018-08-29 14:18:24 +02:00
37ca803162 Flask wrapped Response replaced json() function with json property 2018-08-29 14:18:07 +02:00
939bb97f13 Revert 9389fef8ba96a3e0eb03d4d600f8b85af1190fde 2018-08-29 14:17:38 +02:00
2c40665271 Use urljoin() to compose OAuth URLs instead of string concatenation
String concatenation is bound to mess up; in this case it was producing
double slashes instead of single ones when `BLENDER_ID_ENDPOINT` ends in
a slash. Since URLs generally end in a slash, this should be supported.
2018-08-29 14:17:17 +02:00
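The failure mode, illustrated with a made-up endpoint:

    from urllib.parse import urljoin

    endpoint = 'https://id.example.com/'   # trailing slash
    endpoint + '/u/validate_token'         # -> 'https://id.example.com//u/validate_token'
    urljoin(endpoint, 'u/validate_token')  # -> 'https://id.example.com/u/validate_token'
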
e8123b7839 Apparently the test client now uses `https://localhost.local/` as URL
Previously this was 'http://localhost/'
2018-08-29 11:27:00 +02:00
6d6a40b8c0 Empty lists don't seem to be stored in MongoDB any more
It looks like with the new Eve (or one of its dependencies) empty lists
aren't stored any more; rather than storing `{'org_roles': []}`, it skips
the `'org_roles'` key altogether. Not sure what caused this, as it was
mentioned in neither the Eve nor the PyMongo changelog.
2018-08-29 11:26:19 +02:00
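Readers of such documents now have to treat the key as optional; a defensive sketch with an illustrative document:

    user_doc = {'email': 'user@example.com'}     # 'org_roles' was skipped on save
    org_roles = user_doc.get('org_roles') or []  # missing key == empty list
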
efd345ec46 Upgrade attachments CLI cmd: added compatibility with new 'validator' key
We now support both the old coerce=markdown and the new validator=markdown.
Probably support for the old can be removed, but I'm keeping it around
just to be sure.
2018-08-29 11:24:44 +02:00
d655d2b749 Users schema: don't supply schema when allow_unknown=True
Apparently the new Cerberus doesn't like this, and will check against the
schema only and ignore `allow_unknown` when it's there.
2018-08-29 11:23:19 +02:00
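A minimal Cerberus sketch of the combination that still works, using the users schema's 'service' field as the example:

    import cerberus

    schema = {'service': {'type': 'dict', 'allow_unknown': True}}  # no 'schema' rule
    v = cerberus.Validator(schema)
    assert v.validate({'service': {'badger': ['some-role']}})
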
a58e616769 Markdown validator: gracefully handle partial document validation
Validation of partial documents can happen when validating an update.
Missing data is fine then.
2018-08-29 11:22:39 +02:00
a8a7166e78 Use self.assertRaises as context manager 2018-08-28 17:45:58 +02:00
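The context-manager form reads better and exposes the raised exception; `some_dict` here is illustrative:

    with self.assertRaises(KeyError) as ctx:
        some_dict['missing-key']
    # ctx.exception is available here when more asserts are needed.
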
1649591d75 Create a copy in the validator's self.document
This ensures that further modifications (like setting '_etag' etc.) aren't
done in-place.
2018-08-28 17:45:44 +02:00
9389fef8ba Explicitly install pyasn1, solves certain build/test problems 2018-08-28 17:29:53 +02:00
6737aa1123 Markdown validator now also updates the doc with post_internal
The post_internal function does `document = validator.document`, replacing
the to-be-posted document by the copy that Cerberus made (and which we
cannot add keys to because it iterates over the keys and the dict size thus
isn't allowed to change).

I hope this doesn't break other validators who expect to be able to write
to `self.document`.
2018-08-28 17:29:29 +02:00
40f79af49d Tooltips: Cleanup 2018-08-28 15:54:14 +02:00
84608500b9 CSS: Split dropdown styling 2018-08-28 15:53:47 +02:00
819300f954 Navbar cleanup 2018-08-28 15:52:56 +02:00
b569829343 General cleanup 2018-08-28 15:52:50 +02:00
c35fb6202b render_secondary_navigation: Bootstrap 4 tweaks 2018-08-28 15:51:56 +02:00
d0ff519980 Font Pillar: Aliases for CC license icons
Also comments about updating the font from fontello.com
2018-08-27 17:03:13 +02:00
6ff4ee8fa1 Minor Dashboard style tweaks 2018-08-27 17:02:36 +02:00
b5535a8773 CSS: New primary color and navbar height 2018-08-27 17:02:07 +02:00
2ded541955 CSS Cleanup: remove font-body specifics 2018-08-27 17:01:43 +02:00
3965061bde CSS: Split into modules
Don't place pure styling on top-level files (those that don't begin with underscore).
Instead, import them as individual files.
2018-08-27 17:01:08 +02:00
5238e2c26d Pillar Font: Use variable for path 2018-08-22 19:57:22 +02:00
469f24d113 Fix for {validate: markdown} when used in Eve
Eve's Validator has not only a validate() function, but also
validate_update() and validate_replace(). Those set
self.persisted_document, so if that attribute exists we just use it.
2018-07-13 17:14:06 +02:00
8a0f582a80 Removed dependency on flask_pymongo 2018-07-13 17:08:06 +02:00
559e212c55 Removed debug prints + added TODO(fsiddi) 2018-07-13 17:04:23 +02:00
61278730c6 De-indent the code a bit 2018-07-13 17:02:47 +02:00
0fdcbc3947 Restored MarkDown conversion using 'validator': 'markdown' 2018-07-13 17:02:38 +02:00
8dc3296bd5 Schema change for IP range, use validator instead of type
Custom types became rather useless in Cerberus 1.0 since the type checker
is crippled (doesn't know field name, cannot return useful/detailed error
messages). Instead we use a validator now.
2018-07-13 15:03:35 +02:00
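A minimal sketch of the new-style rule, matching the 'check_with': 'iprange' schema visible in the diffs below; the real rule uses the IPy module, ipaddress stands in here:

    import cerberus
    import ipaddress

    class Validator(cerberus.Validator):
        def _check_with_iprange(self, field, value):
            try:
                ipaddress.ip_network(value, strict=False)
            except ValueError:
                self._error(field, 'not a valid IP range')

    v = Validator({'human': {'type': 'string', 'check_with': 'iprange'}})
    assert v.validate({'human': '192.168.0.0/16'})
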
a699138fd6 Merge branch 'master' into wip-flask-one 2018-07-13 13:50:24 +02:00
7da741f354 Re-enabled PATCH handler for organisations 2018-07-13 13:36:59 +02:00
41369d134c Fix bloody Eve raising exceptions instead of returning status code 2018-07-13 12:45:58 +02:00
61ed083218 Don't change the global schema! 2018-07-13 12:33:22 +02:00
46777f7f8c Removed unnecessary ['schema'] 2018-07-13 12:06:48 +02:00
ef94c68177 Re-enabled the 'valid_properties': True in nodes_schema 2018-07-13 12:06:38 +02:00
aaf452e18b Fixed Cerberus canary unit test
Apparently it's no longer possible for Cerberus to validate its own schemas.
2018-07-13 12:02:40 +02:00
c607eaf23d Added magic custom validation rule schemas in docstrings 2018-07-13 12:02:18 +02:00
baa77a7de5 Merge branch 'master' into wip-flask-one 2018-07-13 11:43:57 +02:00
c83a1a21b8 Unpinned a bunch of package versions
This helps us get the latest versions and test with those instead.
2018-07-13 11:01:22 +02:00
549cf0a3e8 WIP on libraries upgrade 2018-07-12 15:23:57 +02:00
341 changed files with 27233 additions and 19715 deletions

3
.babelrc Normal file

@ -0,0 +1,3 @@
{
"presets": ["@babel/preset-env"]
}

6
.gitignore vendored

@ -12,10 +12,12 @@ config_local.py
/build /build
/.cache /.cache
/*.egg-info/ /.pytest_cache/
*.egg-info/
profile.stats profile.stats
/dump/ /dump/
/.eggs /.eggs
/devdeps/pip-wheel-metadata/
/node_modules /node_modules
/.sass-cache /.sass-cache
@ -26,6 +28,8 @@ profile.stats
pillar/web/static/assets/css/*.css pillar/web/static/assets/css/*.css
pillar/web/static/assets/js/*.min.js pillar/web/static/assets/js/*.min.js
pillar/web/static/assets/js/vendor/video.min.js
pillar/web/static/storage/ pillar/web/static/storage/
pillar/web/static/uploads/ pillar/web/static/uploads/
pillar/web/templates/ pillar/web/templates/
/poetry.lock

README.md

@ -3,7 +3,7 @@ Pillar
This is the latest iteration on the Attract project. We are building a unified This is the latest iteration on the Attract project. We are building a unified
framework called Pillar. Pillar will combine Blender Cloud and Attract. You framework called Pillar. Pillar will combine Blender Cloud and Attract. You
can see Pillar in action on the [Blender Cloud](https://cloud.bender.org). can see Pillar in action on the [Blender Cloud](https://cloud.blender.org).
## Custom fonts ## Custom fonts
@ -25,15 +25,16 @@ Don't forget to Gulp!
## Installation ## Installation
Dependencies are managed via [Poetry](https://poetry.eustace.io/).
Make sure your /data directory exists and is writable by the current user. Make sure your /data directory exists and is writable by the current user.
Alternatively, provide a `pillar/config_local.py` that changes the relevant Alternatively, provide a `pillar/config_local.py` that changes the relevant
settings. settings.
``` ```
git clone git@git.blender.org:pillar-python-sdk.git ../pillar-python-sdk git clone git@git.blender.org:pillar-python-sdk.git ../pillar-python-sdk
pip install -e ../pillar-python-sdk pip install -U --user poetry
pip install -U -r requirements.txt poetry install
pip install -e .
``` ```
## HDRi viewer ## HDRi viewer
@ -65,6 +66,12 @@ You can run the Celery Worker using `manage.py celery worker`.
Find other Celery operations with the `manage.py celery` command. Find other Celery operations with the `manage.py celery` command.
## Elasticsearch
Pillar uses [Elasticsearch](https://www.elastic.co/products/elasticsearch) to power the search engine.
You will need to run the `manage.py elastic reset_index` command to initialize the indexing.
If you need to reindex your documents in Elastic, run the `manage.py elastic reindex` command.
## Translations ## Translations
If the language you want to support doesn't exist, you need to run: `translations init es_AR`. If the language you want to support doesn't exist, you need to run: `translations init es_AR`.


16
devdeps/pyproject.toml Normal file

@ -0,0 +1,16 @@
[tool.poetry]
name = "pillar-devdeps"
version = "1.0"
description = ""
authors = [
"Francesco Siddi <francesco@blender.org>",
"Pablo Vazquez <pablo@blender.studio>",
"Sybren Stüvel <sybren@blender.studio>",
]
[tool.poetry.dependencies]
python = "~3.6"
mypy = "^0.501"
pytest = "~4.4"
pytest-cov = "~2.7"
responses = "~0.10"

gulpfile.js

@ -1,37 +1,51 @@
var argv = require('minimist')(process.argv.slice(2)); let argv = require('minimist')(process.argv.slice(2));
var autoprefixer = require('gulp-autoprefixer'); let autoprefixer = require('gulp-autoprefixer');
var cache = require('gulp-cached'); let cache = require('gulp-cached');
var chmod = require('gulp-chmod'); let chmod = require('gulp-chmod');
var concat = require('gulp-concat'); let concat = require('gulp-concat');
var git = require('gulp-git'); let git = require('gulp-git');
var gulpif = require('gulp-if'); let gulpif = require('gulp-if');
var gulp = require('gulp'); let gulp = require('gulp');
var livereload = require('gulp-livereload'); let livereload = require('gulp-livereload');
var plumber = require('gulp-plumber'); let plumber = require('gulp-plumber');
var pug = require('gulp-pug'); let pug = require('gulp-pug');
var rename = require('gulp-rename'); let rename = require('gulp-rename');
var sass = require('gulp-sass'); let sass = require('gulp-sass');
var sourcemaps = require('gulp-sourcemaps'); let sourcemaps = require('gulp-sourcemaps');
var uglify = require('gulp-uglify'); let uglify = require('gulp-uglify-es').default;
let browserify = require('browserify');
let babelify = require('babelify');
let sourceStream = require('vinyl-source-stream');
let glob = require('glob');
let es = require('event-stream');
let path = require('path');
let buffer = require('vinyl-buffer');
var enabled = { let enabled = {
uglify: argv.production, uglify: argv.production,
maps: argv.production, maps: !argv.production,
failCheck: !argv.production, failCheck: !argv.production,
prettyPug: !argv.production, prettyPug: !argv.production,
cachify: !argv.production, cachify: !argv.production,
cleanup: argv.production, cleanup: argv.production,
chmod: argv.production,
}; };
var destination = { let destination = {
css: 'pillar/web/static/assets/css', css: 'pillar/web/static/assets/css',
pug: 'pillar/web/templates', pug: 'pillar/web/templates',
js: 'pillar/web/static/assets/js', js: 'pillar/web/static/assets/js',
} }
let source = {
bootstrap: 'node_modules/bootstrap/',
jquery: 'node_modules/jquery/',
popper: 'node_modules/popper.js/',
vue: 'node_modules/vue/',
}
/* CSS */ /* Stylesheets */
gulp.task('styles', function() { gulp.task('styles', function(done) {
gulp.src('src/styles/**/*.sass') gulp.src('src/styles/**/*.sass')
.pipe(gulpif(enabled.failCheck, plumber())) .pipe(gulpif(enabled.failCheck, plumber()))
.pipe(gulpif(enabled.maps, sourcemaps.init())) .pipe(gulpif(enabled.maps, sourcemaps.init()))
@ -42,11 +56,12 @@ gulp.task('styles', function() {
.pipe(gulpif(enabled.maps, sourcemaps.write("."))) .pipe(gulpif(enabled.maps, sourcemaps.write(".")))
.pipe(gulp.dest(destination.css)) .pipe(gulp.dest(destination.css))
.pipe(gulpif(argv.livereload, livereload())); .pipe(gulpif(argv.livereload, livereload()));
done();
}); });
/* Templates - Pug */ /* Templates */
gulp.task('templates', function() { gulp.task('templates', function(done) {
gulp.src('src/templates/**/*.pug') gulp.src('src/templates/**/*.pug')
.pipe(gulpif(enabled.failCheck, plumber())) .pipe(gulpif(enabled.failCheck, plumber()))
.pipe(gulpif(enabled.cachify, cache('templating'))) .pipe(gulpif(enabled.cachify, cache('templating')))
@ -55,11 +70,12 @@ gulp.task('templates', function() {
})) }))
.pipe(gulp.dest(destination.pug)) .pipe(gulp.dest(destination.pug))
.pipe(gulpif(argv.livereload, livereload())); .pipe(gulpif(argv.livereload, livereload()));
done();
}); });
/* Individual Uglified Scripts */ /* Individual Uglified Scripts */
gulp.task('scripts', function() { gulp.task('scripts', function(done) {
gulp.src('src/scripts/*.js') gulp.src('src/scripts/*.js')
.pipe(gulpif(enabled.failCheck, plumber())) .pipe(gulpif(enabled.failCheck, plumber()))
.pipe(gulpif(enabled.cachify, cache('scripting'))) .pipe(gulpif(enabled.cachify, cache('scripting')))
@ -67,56 +83,131 @@ gulp.task('scripts', function() {
.pipe(gulpif(enabled.uglify, uglify())) .pipe(gulpif(enabled.uglify, uglify()))
.pipe(rename({suffix: '.min'})) .pipe(rename({suffix: '.min'}))
.pipe(gulpif(enabled.maps, sourcemaps.write("."))) .pipe(gulpif(enabled.maps, sourcemaps.write(".")))
.pipe(chmod(644)) .pipe(gulpif(enabled.chmod, chmod(0o644)))
.pipe(gulp.dest(destination.js)) .pipe(gulp.dest(destination.js))
.pipe(gulpif(argv.livereload, livereload())); .pipe(gulpif(argv.livereload, livereload()));
done();
});
function browserify_base(entry) {
let pathSplited = path.dirname(entry).split(path.sep);
let moduleName = pathSplited[pathSplited.length - 1];
return browserify({
entries: [entry],
standalone: 'pillar.' + moduleName,
})
.transform(babelify, { "presets": ["@babel/preset-env"] })
.bundle()
.pipe(gulpif(enabled.failCheck, plumber()))
.pipe(sourceStream(path.basename(entry)))
.pipe(buffer())
.pipe(rename({
basename: moduleName,
extname: '.min.js'
}));
}
/**
* Transcompile and package common modules to be included in tutti.js.
*
* Example:
* src/scripts/js/es6/common/api/init.js
* src/scripts/js/es6/common/events/init.js
* Everything exported in api/init.js will end up in module pillar.api.*, and everything exported in events/init.js
* will end up in pillar.events.*
*/
function browserify_common() {
return glob.sync('src/scripts/js/es6/common/**/init.js').map(browserify_base);
}
/**
* Transcompile and package individual modules.
*
* Example:
* src/scripts/js/es6/individual/coolstuff/init.js
* Will create a coolstuff.js and everything exported in init.js will end up in namespace pillar.coolstuff.*
*/
gulp.task('scripts_browserify', function(done) {
glob('src/scripts/js/es6/individual/**/init.js', function(err, files) {
if(err) done(err);
var tasks = files.map(function(entry) {
return browserify_base(entry)
.pipe(gulpif(enabled.maps, sourcemaps.init()))
.pipe(gulpif(enabled.uglify, uglify()))
.pipe(gulpif(enabled.maps, sourcemaps.write(".")))
.pipe(gulp.dest(destination.js));
});
es.merge(tasks).on('end', done);
})
}); });
/* Collection of scripts in src/scripts/tutti/ to merge into tutti.min.js */ /* Collection of scripts in src/scripts/tutti/ and src/scripts/js/es6/common/ to merge into tutti.min.js
/* Since it's always loaded, it's only for functions that we want site-wide */ * Since it's always loaded, it's only for functions that we want site-wide.
gulp.task('scripts_concat_tutti', function() { * It also includes jQuery and Bootstrap (and its dependency popper), since
gulp.src('src/scripts/tutti/**/*.js') * the site doesn't work without it anyway.*/
gulp.task('scripts_concat_tutti', function(done) {
let toUglify = [
source.jquery + 'dist/jquery.min.js',
source.vue + (enabled.uglify ? 'dist/vue.min.js' : 'dist/vue.js'),
source.popper + 'dist/umd/popper.min.js',
source.bootstrap + 'js/dist/index.js',
source.bootstrap + 'js/dist/util.js',
source.bootstrap + 'js/dist/alert.js',
source.bootstrap + 'js/dist/collapse.js',
source.bootstrap + 'js/dist/dropdown.js',
source.bootstrap + 'js/dist/tooltip.js',
'src/scripts/tutti/**/*.js'
];
es.merge(gulp.src(toUglify), ...browserify_common())
.pipe(gulpif(enabled.failCheck, plumber())) .pipe(gulpif(enabled.failCheck, plumber()))
.pipe(gulpif(enabled.maps, sourcemaps.init())) .pipe(gulpif(enabled.maps, sourcemaps.init()))
.pipe(concat("tutti.min.js")) .pipe(concat("tutti.min.js"))
.pipe(gulpif(enabled.uglify, uglify())) .pipe(gulpif(enabled.uglify, uglify()))
.pipe(gulpif(enabled.maps, sourcemaps.write("."))) .pipe(gulpif(enabled.maps, sourcemaps.write(".")))
.pipe(chmod(644)) .pipe(gulpif(enabled.chmod, chmod(0o644)))
.pipe(gulp.dest(destination.js)) .pipe(gulp.dest(destination.js))
.pipe(gulpif(argv.livereload, livereload())); .pipe(gulpif(argv.livereload, livereload()));
done();
}); });
gulp.task('scripts_concat_markdown', function() {
gulp.src('src/scripts/markdown/**/*.js') /* Simply move these vendor scripts from node_modules. */
.pipe(gulpif(enabled.failCheck, plumber())) gulp.task('scripts_move_vendor', function(done) {
.pipe(gulpif(enabled.maps, sourcemaps.init()))
.pipe(concat("markdown.min.js")) let toMove = [
.pipe(gulpif(enabled.uglify, uglify())) 'node_modules/video.js/dist/video.min.js',
.pipe(gulpif(enabled.maps, sourcemaps.write("."))) ];
.pipe(chmod(644))
.pipe(gulp.dest(destination.js)) gulp.src(toMove)
.pipe(gulpif(argv.livereload, livereload())); .pipe(gulp.dest(destination.js + '/vendor/'));
done();
}); });
// While developing, run 'gulp watch' // While developing, run 'gulp watch'
gulp.task('watch',function() { gulp.task('watch',function(done) {
// Only listen for live reloads if ran with --livereload // Only listen for live reloads if ran with --livereload
if (argv.livereload){ if (argv.livereload){
livereload.listen(); livereload.listen();
} }
gulp.watch('src/styles/**/*.sass',['styles']); gulp.watch('src/styles/**/*.sass',gulp.series('styles'));
gulp.watch('src/templates/**/*.pug',['templates']); gulp.watch('src/templates/**/*.pug',gulp.series('templates'));
gulp.watch('src/scripts/*.js',['scripts']); gulp.watch('src/scripts/*.js',gulp.series('scripts'));
gulp.watch('src/scripts/tutti/**/*.js',['scripts_concat_tutti']); gulp.watch('src/scripts/tutti/**/*.js',gulp.series('scripts_concat_tutti'));
gulp.watch('src/scripts/markdown/**/*.js',['scripts_concat_markdown']); gulp.watch('src/scripts/js/**/*.js',gulp.series(['scripts_browserify', 'scripts_concat_tutti']));
done();
}); });
// Erases all generated files in output directories. // Erases all generated files in output directories.
gulp.task('cleanup', function() { gulp.task('cleanup', function(done) {
var paths = []; let paths = [];
for (attr in destination) { for (attr in destination) {
paths.push(destination[attr]); paths.push(destination[attr]);
} }
@ -124,17 +215,20 @@ gulp.task('cleanup', function() {
git.clean({ args: '-f -X ' + paths.join(' ') }, function (err) { git.clean({ args: '-f -X ' + paths.join(' ') }, function (err) {
if(err) throw err; if(err) throw err;
}); });
done();
}); });
// Run 'gulp' to build everything at once // Run 'gulp' to build everything at once
var tasks = []; let tasks = [];
if (enabled.cleanup) tasks.push('cleanup'); if (enabled.cleanup) tasks.push('cleanup');
gulp.task('default', tasks.concat([ // gulp.task('default', gulp.parallel('styles', 'templates', 'scripts', 'scripts_tutti'));
gulp.task('default', gulp.parallel(tasks.concat([
'styles', 'styles',
'templates', 'templates',
'scripts', 'scripts',
'scripts_concat_tutti', 'scripts_concat_tutti',
'scripts_concat_markdown', 'scripts_move_vendor',
])); 'scripts_browserify',
])));

180
jest.config.js Normal file

@ -0,0 +1,180 @@
// For a detailed explanation regarding each configuration property, visit:
// https://jestjs.io/docs/en/configuration.html
module.exports = {
// All imported modules in your tests should be mocked automatically
// automock: false,
// Stop running tests after the first failure
// bail: false,
// Respect "browser" field in package.json when resolving modules
// browser: false,
// The directory where Jest should store its cached dependency information
// cacheDirectory: "/tmp/jest_rs",
// Automatically clear mock calls and instances between every test
clearMocks: true,
// Indicates whether the coverage information should be collected while executing the test
// collectCoverage: false,
// An array of glob patterns indicating a set of files for which coverage information should be collected
// collectCoverageFrom: null,
// The directory where Jest should output its coverage files
// coverageDirectory: null,
// An array of regexp pattern strings used to skip coverage collection
// coveragePathIgnorePatterns: [
// "/node_modules/"
// ],
// A list of reporter names that Jest uses when writing coverage reports
// coverageReporters: [
// "json",
// "text",
// "lcov",
// "clover"
// ],
// An object that configures minimum threshold enforcement for coverage results
// coverageThreshold: null,
// Make calling deprecated APIs throw helpful error messages
// errorOnDeprecated: false,
// Force coverage collection from ignored files using an array of glob patterns
// forceCoverageMatch: [],
// A path to a module which exports an async function that is triggered once before all test suites
// globalSetup: null,
// A path to a module which exports an async function that is triggered once after all test suites
// globalTeardown: null,
// A set of global variables that need to be available in all test environments
// globals: {},
// An array of directory names to be searched recursively up from the requiring module's location
// moduleDirectories: [
// "node_modules"
// ],
// An array of file extensions your modules use
// moduleFileExtensions: [
// "js",
// "json",
// "jsx",
// "node"
// ],
// A map from regular expressions to module names that allow to stub out resources with a single module
// moduleNameMapper: {},
// An array of regexp pattern strings, matched against all module paths before considered 'visible' to the module loader
// modulePathIgnorePatterns: [],
// Activates notifications for test results
// notify: false,
// An enum that specifies notification mode. Requires { notify: true }
// notifyMode: "always",
// A preset that is used as a base for Jest's configuration
// preset: null,
// Run tests from one or more projects
// projects: null,
// Use this configuration option to add custom reporters to Jest
// reporters: undefined,
// Automatically reset mock state between every test
// resetMocks: false,
// Reset the module registry before running each individual test
// resetModules: false,
// A path to a custom resolver
// resolver: null,
// Automatically restore mock state between every test
// restoreMocks: false,
// The root directory that Jest should scan for tests and modules within
// rootDir: null,
// A list of paths to directories that Jest should use to search for files in
// roots: [
// "<rootDir>"
// ],
// Allows you to use a custom runner instead of Jest's default test runner
// runner: "jest-runner",
// The paths to modules that run some code to configure or set up the testing environment before each test
setupFiles: ["<rootDir>/src/scripts/js/es6/test_config/test-env.js"],
// The path to a module that runs some code to configure or set up the testing framework before each test
// setupTestFrameworkScriptFile: null,
// A list of paths to snapshot serializer modules Jest should use for snapshot testing
// snapshotSerializers: [],
// The test environment that will be used for testing
testEnvironment: "jsdom",
// Options that will be passed to the testEnvironment
// testEnvironmentOptions: {},
// Adds a location field to test results
// testLocationInResults: false,
// The glob patterns Jest uses to detect test files
// testMatch: [
// "**/__tests__/**/*.js?(x)",
// "**/?(*.)+(spec|test).js?(x)"
// ],
// An array of regexp pattern strings that are matched against all test paths, matched tests are skipped
// testPathIgnorePatterns: [
// "/node_modules/"
// ],
// The regexp pattern Jest uses to detect test files
// testRegex: "",
// This option allows the use of a custom results processor
// testResultsProcessor: null,
// This option allows use of a custom test runner
// testRunner: "jasmine2",
// This option sets the URL for the jsdom environment. It is reflected in properties such as location.href
// testURL: "http://localhost",
// Setting this value to "fake" allows the use of fake timers for functions such as "setTimeout"
// timers: "real",
// A map from regular expressions to paths to transformers
// transform: null,
// An array of regexp pattern strings that are matched against all source file paths, matched files will skip transformation
// transformIgnorePatterns: [
// "/node_modules/"
// ],
// An array of regexp pattern strings that are matched against all modules before the module loader will automatically return a mock for them
// unmockedModulePathPatterns: undefined,
// Indicates whether each individual test should be reported during the run
// verbose: null,
// An array of regexp patterns that are matched against all source file paths before re-running tests in watch mode
// watchPathIgnorePatterns: [],
// Whether to use watchman for file crawling
// watchman: true,
};

15923
package-lock.json generated

File diff suppressed because it is too large

package.json

@ -1,26 +1,54 @@
{ {
"name": "pillar", "name": "pillar",
"license": "GPL-2.0+", "license": "GPL-2.0+",
"author": "Blender Institute", "author": "Blender Institute",
"repository": { "repository": {
"type": "git", "type": "git",
"url": "https://github.com/armadillica/pillar.git" "url": "git://git.blender.org/pillar.git"
}, },
"devDependencies": { "devDependencies": {
"gulp": "~3.9.1", "@babel/core": "7.1.6",
"gulp-autoprefixer": "~2.3.1", "@babel/preset-env": "7.1.6",
"gulp-cached": "~1.1.0", "acorn": "5.7.3",
"gulp-chmod": "~1.3.0", "babel-core": "7.0.0-bridge.0",
"gulp-concat": "~2.6.0", "babelify": "10.0.0",
"gulp-if": "^2.0.1", "browserify": "16.2.3",
"gulp-git": "~2.4.2", "gulp": "4.0.0",
"gulp-livereload": "~3.8.1", "gulp-autoprefixer": "6.0.0",
"gulp-plumber": "~1.1.0", "gulp-babel": "8.0.0",
"gulp-pug": "~3.2.0", "gulp-cached": "1.1.1",
"gulp-rename": "~1.2.2", "gulp-chmod": "2.0.0",
"gulp-sass": "~2.3.1", "gulp-concat": "2.6.1",
"gulp-sourcemaps": "~1.6.0", "gulp-git": "2.8.0",
"gulp-uglify": "~1.5.3", "gulp-if": "2.0.2",
"minimist": "^1.2.0" "gulp-livereload": "4.0.0",
} "gulp-plumber": "1.2.0",
"gulp-pug": "4.0.1",
"gulp-rename": "1.4.0",
"gulp-sass": "4.1.0",
"gulp-sourcemaps": "2.6.4",
"gulp-uglify-es": "1.0.4",
"jest": "^24.8.0",
"minimist": "1.2.0",
"vinyl-buffer": "1.0.1",
"vinyl-source-stream": "2.0.0"
},
"dependencies": {
"bootstrap": "^4.3.1",
"glob": "7.1.3",
"jquery": "^3.4.1",
"natives": "^1.1.6",
"popper.js": "1.14.4",
"video.js": "7.2.2",
"vue": "2.5.17"
},
"scripts": {
"test": "jest"
},
"__COMMENTS__": [
"natives@1.1.6 for Gulp 3.x on Node 10.x: https://github.com/gulpjs/gulp/issues/2162#issuecomment-385197164"
],
"resolutions": {
"natives": "1.1.6"
}
} }

pillar/__init__.py

@ -12,10 +12,25 @@ import typing
import os import os
import os.path import os.path
import pathlib import pathlib
import warnings
# These warnings have to be suppressed before the first import.
# Eve is falling behind on Cerberus. See https://github.com/pyeve/eve/issues/1278
warnings.filterwarnings(
'ignore', category=DeprecationWarning,
message="Methods for type testing are deprecated, use TypeDefinition and the "
"'types_mapping'-property of a Validator-instance instead")
# Werkzeug deprecated Request.is_xhr, but it works fine with jQuery and we don't need a reminder
# every time a unit test is run.
warnings.filterwarnings('ignore', category=DeprecationWarning,
message="'Request.is_xhr' is deprecated as of version 0.13 and will be "
"removed in version 1.0.")
import jinja2 import jinja2
from eve import Eve
import flask import flask
from eve import Eve
from flask import g, render_template, request from flask import g, render_template, request
from flask_babel import Babel, gettext as _ from flask_babel import Babel, gettext as _
from flask.templating import TemplateNotFound from flask.templating import TemplateNotFound
@ -70,7 +85,7 @@ class BlinkerCompatibleEve(Eve):
class PillarServer(BlinkerCompatibleEve): class PillarServer(BlinkerCompatibleEve):
def __init__(self, app_root, **kwargs): def __init__(self, app_root: str, **kwargs) -> None:
from .extension import PillarExtension from .extension import PillarExtension
from celery import Celery from celery import Celery
from flask_wtf.csrf import CSRFProtect from flask_wtf.csrf import CSRFProtect
@ -140,8 +155,6 @@ class PillarServer(BlinkerCompatibleEve):
self.org_manager = pillar.api.organizations.OrgManager() self.org_manager = pillar.api.organizations.OrgManager()
self.before_first_request(self.setup_db_indices)
# Make CSRF protection available to the application. By default it is # Make CSRF protection available to the application. By default it is
# disabled on all endpoints. More info at WTF_CSRF_CHECK_DEFAULT in config.py # disabled on all endpoints. More info at WTF_CSRF_CHECK_DEFAULT in config.py
self.csrf = CSRFProtect(self) self.csrf = CSRFProtect(self)
@ -280,7 +293,7 @@ class PillarServer(BlinkerCompatibleEve):
self.encoding_service_client = Zencoder(self.config['ZENCODER_API_KEY']) self.encoding_service_client = Zencoder(self.config['ZENCODER_API_KEY'])
def _config_caching(self): def _config_caching(self):
from flask_cache import Cache from flask_caching import Cache
self.cache = Cache(self) self.cache = Cache(self)
def set_languages(self, translations_folder: pathlib.Path): def set_languages(self, translations_folder: pathlib.Path):
@ -479,10 +492,12 @@ class PillarServer(BlinkerCompatibleEve):
# Pillar-defined Celery task modules: # Pillar-defined Celery task modules:
celery_task_modules = [ celery_task_modules = [
'pillar.celery.tasks', 'pillar.celery.avatar',
'pillar.celery.search_index_tasks', 'pillar.celery.badges',
'pillar.celery.file_link_tasks',
'pillar.celery.email_tasks', 'pillar.celery.email_tasks',
'pillar.celery.file_link_tasks',
'pillar.celery.search_index_tasks',
'pillar.celery.tasks',
] ]
# Allow Pillar extensions from defining their own Celery tasks. # Allow Pillar extensions from defining their own Celery tasks.
@ -648,7 +663,7 @@ class PillarServer(BlinkerCompatibleEve):
return self.pillar_error_handler(error) return self.pillar_error_handler(error)
def handle_sdk_resource_invalid(self, error): def handle_sdk_resource_invalid(self, error):
self.log.info('Forwarding ResourceInvalid exception to client: %s', error, exc_info=True) self.log.exception('Forwarding ResourceInvalid exception to client: %s', error, exc_info=True)
# Raising a Werkzeug 422 exception doesn't work, as Flask turns it into a 500. # Raising a Werkzeug 422 exception doesn't work, as Flask turns it into a 500.
return _('The submitted data could not be validated.'), 422 return _('The submitted data could not be validated.'), 422
@ -704,6 +719,8 @@ class PillarServer(BlinkerCompatibleEve):
def finish_startup(self): def finish_startup(self):
self.log.info('Using MongoDB database %r', self.config['MONGO_DBNAME']) self.log.info('Using MongoDB database %r', self.config['MONGO_DBNAME'])
with self.app_context():
self.setup_db_indices()
self._config_celery() self._config_celery()
api.setup_app(self) api.setup_app(self)
@ -711,6 +728,10 @@ class PillarServer(BlinkerCompatibleEve):
authentication.setup_app(self) authentication.setup_app(self)
# Register Flask Debug Toolbar (disabled by default).
from flask_debugtoolbar import DebugToolbarExtension
DebugToolbarExtension(self)
for ext in self.pillar_extensions.values(): for ext in self.pillar_extensions.values():
self.log.info('Setting up extension %s', ext.name) self.log.info('Setting up extension %s', ext.name)
ext.setup_app(self) ext.setup_app(self)
@ -721,6 +742,7 @@ class PillarServer(BlinkerCompatibleEve):
self._config_user_caps() self._config_user_caps()
# Only enable this when debugging. # Only enable this when debugging.
# TODO(fsiddi): Consider removing this in favor of the routes tab in Flask Debug Toolbar.
# self._list_routes() # self._list_routes()
def setup_db_indices(self): def setup_db_indices(self):
@ -760,6 +782,8 @@ class PillarServer(BlinkerCompatibleEve):
coll.create_index([('properties.status', pymongo.ASCENDING), coll.create_index([('properties.status', pymongo.ASCENDING),
('node_type', pymongo.ASCENDING), ('node_type', pymongo.ASCENDING),
('_created', pymongo.DESCENDING)]) ('_created', pymongo.DESCENDING)])
# Used for asset tags
coll.create_index([('properties.tags', pymongo.ASCENDING)])
coll = db['projects'] coll = db['projects']
# This index is used for statistics, and for fetching public projects. # This index is used for statistics, and for fetching public projects.
@ -782,17 +806,18 @@ class PillarServer(BlinkerCompatibleEve):
return 'basic ' + base64.b64encode('%s:%s' % (username, subclient_id)) return 'basic ' + base64.b64encode('%s:%s' % (username, subclient_id))
def post_internal(self, resource: str, payl=None, skip_validation=False): def post_internal(self, resource: str, payl=None, skip_validation=False):
"""Workaround for Eve issue https://github.com/nicolaiarocci/eve/issues/810""" """Workaround for Eve issue https://github.com/pyeve/eve/issues/810"""
from eve.methods.post import post_internal from eve.methods.post import post_internal
url = self.config['URLS'][resource] url = self.config['URLS'][resource]
path = '%s/%s' % (self.api_prefix, url) path = '%s/%s' % (self.api_prefix, url)
with self.__fake_request_url_rule('POST', path): with self.__fake_request_url_rule('POST', path):
return post_internal(resource, payl=payl, skip_validation=skip_validation)[:4] return post_internal(resource, payl=payl, skip_validation=skip_validation)[:4]
def put_internal(self, resource: str, payload=None, concurrency_check=False, def put_internal(self, resource: str, payload=None, concurrency_check=False,
skip_validation=False, **lookup): skip_validation=False, **lookup):
"""Workaround for Eve issue https://github.com/nicolaiarocci/eve/issues/810""" """Workaround for Eve issue https://github.com/pyeve/eve/issues/810"""
from eve.methods.put import put_internal from eve.methods.put import put_internal
url = self.config['URLS'][resource] url = self.config['URLS'][resource]
@ -803,7 +828,7 @@ class PillarServer(BlinkerCompatibleEve):
def patch_internal(self, resource: str, payload=None, concurrency_check=False, def patch_internal(self, resource: str, payload=None, concurrency_check=False,
skip_validation=False, **lookup): skip_validation=False, **lookup):
"""Workaround for Eve issue https://github.com/nicolaiarocci/eve/issues/810""" """Workaround for Eve issue https://github.com/pyeve/eve/issues/810"""
from eve.methods.patch import patch_internal from eve.methods.patch import patch_internal
url = self.config['URLS'][resource] url = self.config['URLS'][resource]
@ -814,7 +839,7 @@ class PillarServer(BlinkerCompatibleEve):
def delete_internal(self, resource: str, concurrency_check=False, def delete_internal(self, resource: str, concurrency_check=False,
suppress_callbacks=False, **lookup): suppress_callbacks=False, **lookup):
"""Workaround for Eve issue https://github.com/nicolaiarocci/eve/issues/810""" """Workaround for Eve issue https://github.com/pyeve/eve/issues/810"""
from eve.methods.delete import deleteitem_internal from eve.methods.delete import deleteitem_internal
url = self.config['URLS'][resource] url = self.config['URLS'][resource]
@ -895,7 +920,8 @@ class PillarServer(BlinkerCompatibleEve):
yield ctx yield ctx
def validator_for_resource(self, resource_name: str) -> custom_field_validation.ValidateCustomFields: def validator_for_resource(self,
resource_name: str) -> custom_field_validation.ValidateCustomFields:
schema = self.config['DOMAIN'][resource_name]['schema'] schema = self.config['DOMAIN'][resource_name]['schema']
validator = self.validator(schema, resource_name) validator = self.validator(schema, resource_name)
return validator return validator

pillar/api/__init__.py

@ -1,6 +1,6 @@
def setup_app(app): def setup_app(app):
from . import encoding, blender_id, projects, local_auth, file_storage from . import encoding, blender_id, projects, local_auth, file_storage
from . import users, nodes, latest, blender_cloud, service, activities from . import users, nodes, latest, blender_cloud, service, activities, timeline
from . import organizations from . import organizations
from . import search from . import search
@ -11,6 +11,7 @@ def setup_app(app):
local_auth.setup_app(app, url_prefix='/auth') local_auth.setup_app(app, url_prefix='/auth')
file_storage.setup_app(app, url_prefix='/storage') file_storage.setup_app(app, url_prefix='/storage')
latest.setup_app(app, url_prefix='/latest') latest.setup_app(app, url_prefix='/latest')
timeline.setup_app(app, url_prefix='/timeline')
blender_cloud.setup_app(app, url_prefix='/bcloud') blender_cloud.setup_app(app, url_prefix='/bcloud')
users.setup_app(app, api_prefix='/users') users.setup_app(app, api_prefix='/users')
service.setup_app(app, api_prefix='/service') service.setup_app(app, api_prefix='/service')

pillar/api/activities.py

@ -1,7 +1,7 @@
import logging import logging
from flask import request, current_app from flask import request, current_app
from pillar.api.utils import gravatar import pillar.api.users.avatar
from pillar.auth import current_user from pillar.auth import current_user
log = logging.getLogger(__name__) log = logging.getLogger(__name__)
@ -68,7 +68,7 @@ def notification_parse(notification):
if actor: if actor:
parsed_actor = { parsed_actor = {
'username': actor['username'], 'username': actor['username'],
'avatar': gravatar(actor['email'])} 'avatar': pillar.api.users.avatar.url(actor)}
else: else:
parsed_actor = None parsed_actor = None
@ -91,14 +91,14 @@ def notification_parse(notification):
def notification_get_subscriptions(context_object_type, context_object_id, actor_user_id): def notification_get_subscriptions(context_object_type, context_object_id, actor_user_id):
subscriptions_collection = current_app.data.driver.db['activities-subscriptions'] subscriptions_collection = current_app.db('activities-subscriptions')
lookup = { lookup = {
'user': {"$ne": actor_user_id}, 'user': {"$ne": actor_user_id},
'context_object_type': context_object_type, 'context_object_type': context_object_type,
'context_object': context_object_id, 'context_object': context_object_id,
'is_subscribed': True, 'is_subscribed': True,
} }
return subscriptions_collection.find(lookup) return subscriptions_collection.find(lookup), subscriptions_collection.count_documents(lookup)
def activity_subscribe(user_id, context_object_type, context_object_id): def activity_subscribe(user_id, context_object_type, context_object_id):
@ -119,6 +119,8 @@ def activity_subscribe(user_id, context_object_type, context_object_id):
# If no subscription exists, we create one # If no subscription exists, we create one
if not subscription: if not subscription:
# Workaround for issue: https://github.com/pyeve/eve/issues/1174
lookup['notifications'] = {}
current_app.post_internal('activities-subscriptions', lookup) current_app.post_internal('activities-subscriptions', lookup)
@ -138,10 +140,10 @@ def activity_object_add(actor_user_id, verb, object_type, object_id,
:param object_id: object id, to be traced with object_type_id :param object_id: object id, to be traced with object_type_id
""" """
subscriptions = notification_get_subscriptions( subscriptions, subscription_count = notification_get_subscriptions(
context_object_type, context_object_id, actor_user_id) context_object_type, context_object_id, actor_user_id)
if subscriptions.count() == 0: if subscription_count == 0:
return return
info, status = register_activity(actor_user_id, verb, object_type, object_id, info, status = register_activity(actor_user_id, verb, object_type, object_id,


@ -257,10 +257,10 @@ def has_home_project(user_id):
"""Returns True iff the user has a home project.""" """Returns True iff the user has a home project."""
proj_coll = current_app.data.driver.db['projects'] proj_coll = current_app.data.driver.db['projects']
return proj_coll.count({'user': user_id, 'category': 'home', '_deleted': False}) > 0 return proj_coll.count_documents({'user': user_id, 'category': 'home', '_deleted': False}) > 0
def get_home_project(user_id, projection=None): def get_home_project(user_id: ObjectId, projection=None) -> dict:
"""Returns the home project""" """Returns the home project"""
proj_coll = current_app.data.driver.db['projects'] proj_coll = current_app.data.driver.db['projects']
@ -272,10 +272,10 @@ def is_home_project(project_id, user_id):
"""Returns True iff the given project exists and is the user's home project.""" """Returns True iff the given project exists and is the user's home project."""
proj_coll = current_app.data.driver.db['projects'] proj_coll = current_app.data.driver.db['projects']
return proj_coll.count({'_id': project_id, return proj_coll.count_documents({'_id': project_id,
'user': user_id, 'user': user_id,
'category': 'home', 'category': 'home',
'_deleted': False}) > 0 '_deleted': False}) > 0
def mark_node_updated(node_id): def mark_node_updated(node_id):


@ -104,7 +104,7 @@ def has_texture_node(proj, return_hdri=True):
if return_hdri: if return_hdri:
node_types.append('group_hdri') node_types.append('group_hdri')
count = nodes_collection.count( count = nodes_collection.count_documents(
{'node_type': {'$in': node_types}, {'node_type': {'$in': node_types},
'project': proj['_id'], 'project': proj['_id'],
'parent': None}) 'parent': None})

pillar/api/blender_id.py

@ -6,14 +6,17 @@ with Blender ID.
import datetime import datetime
import logging import logging
from urllib.parse import urljoin
import requests import requests
from bson import tz_util from bson import tz_util
from rauth import OAuth2Session from rauth import OAuth2Session
from flask import Blueprint, request, jsonify, session from flask import Blueprint, request, jsonify, session
from requests.adapters import HTTPAdapter from requests.adapters import HTTPAdapter
import urllib3.util.retry
from pillar import current_app from pillar import current_app
from pillar.auth import get_blender_id_oauth_token
from pillar.api.utils import authentication, utcnow from pillar.api.utils import authentication, utcnow
from pillar.api.utils.authentication import find_user_in_db, upsert_user from pillar.api.utils.authentication import find_user_in_db, upsert_user
@ -28,6 +31,30 @@ class LogoutUser(Exception):
""" """
class Session(requests.Session):
"""Requests Session suitable for Blender ID communication."""
def __init__(self):
super().__init__()
retries = urllib3.util.retry.Retry(
total=10,
backoff_factor=0.05,
)
http_adapter = requests.adapters.HTTPAdapter(max_retries=retries)
self.mount('https://', http_adapter)
self.mount('http://', http_adapter)
def authenticate(self):
"""Attach the current user's authentication token to the request."""
bid_token = get_blender_id_oauth_token()
if not bid_token:
raise TypeError('authenticate() requires current user to be logged in with Blender ID')
self.headers['Authorization'] = f'Bearer {bid_token}'
@blender_id.route('/store_scst', methods=['POST']) @blender_id.route('/store_scst', methods=['POST'])
def store_subclient_token(): def store_subclient_token():
"""Verifies & stores a user's subclient-specific token.""" """Verifies & stores a user's subclient-specific token."""
@ -114,15 +141,12 @@ def validate_token(user_id, token, oauth_subclient_id):
# We only want to accept Blender Cloud tokens. # We only want to accept Blender Cloud tokens.
payload['client_id'] = current_app.config['OAUTH_CREDENTIALS']['blender-id']['id'] payload['client_id'] = current_app.config['OAUTH_CREDENTIALS']['blender-id']['id']
url = '{0}/u/validate_token'.format(current_app.config['BLENDER_ID_ENDPOINT']) blender_id_endpoint = current_app.config['BLENDER_ID_ENDPOINT']
url = urljoin(blender_id_endpoint, 'u/validate_token')
log.debug('POSTing to %r', url) log.debug('POSTing to %r', url)
# Retry a few times when POSTing to BlenderID fails.
# Source: http://stackoverflow.com/a/15431343/875379
s = requests.Session()
s.mount(current_app.config['BLENDER_ID_ENDPOINT'], HTTPAdapter(max_retries=5))
# POST to Blender ID, handling errors as negative verification results. # POST to Blender ID, handling errors as negative verification results.
s = Session()
try: try:
r = s.post(url, data=payload, timeout=5, r = s.post(url, data=payload, timeout=5,
verify=current_app.config['TLS_CERT_FILE']) verify=current_app.config['TLS_CERT_FILE'])
@ -218,7 +242,7 @@ def fetch_blenderid_user() -> dict:
my_log = log.getChild('fetch_blenderid_user') my_log = log.getChild('fetch_blenderid_user')
bid_url = '%s/api/user' % current_app.config['BLENDER_ID_ENDPOINT'] bid_url = urljoin(current_app.config['BLENDER_ID_ENDPOINT'], 'api/user')
my_log.debug('Fetching user info from %s', bid_url) my_log.debug('Fetching user info from %s', bid_url)
credentials = current_app.config['OAUTH_CREDENTIALS']['blender-id'] credentials = current_app.config['OAUTH_CREDENTIALS']['blender-id']
@ -256,6 +280,16 @@ def fetch_blenderid_user() -> dict:
return payload return payload
def avatar_url(blenderid_user_id: str) -> str:
"""Return the URL to the user's avatar on Blender ID.
This avatar should be downloaded, and not served from the Blender ID URL.
"""
bid_url = urljoin(current_app.config['BLENDER_ID_ENDPOINT'],
f'api/user/{blenderid_user_id}/avatar')
return bid_url
def setup_app(app, url_prefix): def setup_app(app, url_prefix):
app.register_api_blueprint(blender_id, url_prefix=url_prefix) app.register_api_blueprint(blender_id, url_prefix=url_prefix)
@ -263,7 +297,7 @@ def setup_app(app, url_prefix):
def switch_user_url(next_url: str) -> str: def switch_user_url(next_url: str) -> str:
from urllib.parse import quote from urllib.parse import quote
base_url = '%s/switch' % current_app.config['BLENDER_ID_ENDPOINT'] base_url = urljoin(current_app.config['BLENDER_ID_ENDPOINT'], 'switch')
if next_url: if next_url:
return '%s?next=%s' % (base_url, quote(next_url)) return '%s?next=%s' % (base_url, quote(next_url))
return base_url return base_url

pillar/api/custom_field_validation.py

@ -1,17 +1,17 @@
from datetime import datetime
import logging import logging
from bson import ObjectId, tz_util from bson import ObjectId, tz_util
from datetime import datetime
import cerberus.errors
from eve.io.mongo import Validator from eve.io.mongo import Validator
from flask import current_app from flask import current_app
import pillar.markdown from pillar import markdown
log = logging.getLogger(__name__) log = logging.getLogger(__name__)
class ValidateCustomFields(Validator): class ValidateCustomFields(Validator):
# TODO: split this into a convert_property(property, schema) and call that from this function. # TODO: split this into a convert_property(property, schema) and call that from this function.
def convert_properties(self, properties, node_schema): def convert_properties(self, properties, node_schema):
"""Converts datetime strings and ObjectId strings to actual Python objects.""" """Converts datetime strings and ObjectId strings to actual Python objects."""
@ -29,7 +29,11 @@ class ValidateCustomFields(Validator):
dict_valueschema = schema_prop['schema'] dict_valueschema = schema_prop['schema']
properties[prop] = self.convert_properties(properties[prop], dict_valueschema) properties[prop] = self.convert_properties(properties[prop], dict_valueschema)
except KeyError: except KeyError:
dict_valueschema = schema_prop['valueschema'] # Cerberus 1.3 changed valueschema to valuesrules.
dict_valueschema = schema_prop.get('valuesrules') or \
schema_prop.get('valueschema')
if dict_valueschema is None:
raise KeyError(f"missing 'valuesrules' key in schema of property {prop}")
self.convert_dict_values(properties[prop], dict_valueschema) self.convert_dict_values(properties[prop], dict_valueschema)
elif prop_type == 'list': elif prop_type == 'list':
@ -73,6 +77,11 @@ class ValidateCustomFields(Validator):
dict_property[key] = self.convert_properties(item_prop, item_schema)['item'] dict_property[key] = self.convert_properties(item_prop, item_schema)['item']
def _validate_valid_properties(self, valid_properties, field, value): def _validate_valid_properties(self, valid_properties, field, value):
"""Fake property that triggers node dynamic property validation.
The rule's arguments are validated against this schema:
{'type': 'boolean'}
"""
from pillar.api.utils import project_get_node_type from pillar.api.utils import project_get_node_type
projects_collection = current_app.data.driver.db['projects'] projects_collection = current_app.data.driver.db['projects']
@ -107,7 +116,7 @@ class ValidateCustomFields(Validator):
if val: if val:
# This ensures the modifications made by v's coercion rules are # This ensures the modifications made by v's coercion rules are
# visible to this validator's output. # visible to this validator's output.
self.current[field] = v.current self.document[field] = v.document
return True return True
log.warning('Error validating properties for node %s: %s', self.document, v.errors) log.warning('Error validating properties for node %s: %s', self.document, v.errors)
@ -118,6 +127,9 @@ class ValidateCustomFields(Validator):
Combine "required_after_creation=True" with "required=False" to allow Combine "required_after_creation=True" with "required=False" to allow
pre-insert hooks to set default values. pre-insert hooks to set default values.
The rule's arguments are validated against this schema:
{'type': 'boolean'}
""" """
if not required_after_creation: if not required_after_creation:
@ -125,14 +137,14 @@ class ValidateCustomFields(Validator):
# validator at all. # validator at all.
return return
if self._id is None: if self.document_id is None:
# This is a creation call, in which case this validator shouldn't run. # This is a creation call, in which case this validator shouldn't run.
return return
if not value: if not value:
self._error(field, "Value is required once the document was created") self._error(field, "Value is required once the document was created")
def _validate_type_iprange(self, field_name: str, value: str): def _check_with_iprange(self, field_name: str, value: str):
"""Ensure the field contains a valid IP address. """Ensure the field contains a valid IP address.
Supports both IPv6 and IPv4 ranges. Requires the IPy module. Supports both IPv6 and IPv4 ranges. Requires the IPy module.
@ -149,40 +161,19 @@ class ValidateCustomFields(Validator):
if ip.prefixlen() == 0: if ip.prefixlen() == 0:
self._error(field_name, 'Zero-length prefix is not allowed') self._error(field_name, 'Zero-length prefix is not allowed')
def _validate_type_binary(self, field_name: str, value: bytes): def _normalize_coerce_markdown(self, markdown_field: str) -> str:
"""Add support for binary type.
This type was actually introduced in Cerberus 1.0, so we can drop
support for this once Eve starts using that version (or newer).
""" """
Cache markdown as html.
if not isinstance(value, (bytes, bytearray)): :param markdown_field: name of the field containing Markdown
self._error(field_name, f'wrong value type {type(value)}, expected bytes or bytearray') :return: html string
def _validate_coerce(self, coerce, field: str, value):
"""Override Cerberus' _validate_coerce method for richer features.
This now supports named coercion functions (available in Cerberus 1.0+)
and passes the field name to coercion functions as well.
""" """
if isinstance(coerce, str): my_log = log.getChild('_normalize_coerce_markdown')
coerce = getattr(self, f'_normalize_coerce_{coerce}') mdown = self.document.get(markdown_field, '')
html = markdown.markdown(mdown)
try: my_log.debug('Generated html for markdown field %s in doc with id %s',
return coerce(field, value) markdown_field, id(self.document))
except (TypeError, ValueError): return html
self._error(field, cerberus.errors.ERROR_COERCION_FAILED.format(field))
def _normalize_coerce_markdown(self, field: str, value):
"""Render Markdown from this field into {field}_html.
The field name MUST NOT end in `_html`. The Markdown is read from this
field and the rendered HTML is written to the field `{field}_html`.
"""
html = pillar.markdown.markdown(value)
field_name = pillar.markdown.cache_field_name(field)
self.current[field_name] = html
return value
if __name__ == '__main__': if __name__ == '__main__':
@ -190,12 +181,12 @@ if __name__ == '__main__':
v = ValidateCustomFields() v = ValidateCustomFields()
v.schema = { v.schema = {
'foo': {'type': 'string', 'coerce': 'markdown'}, 'foo': {'type': 'string', 'check_with': 'markdown'},
'foo_html': {'type': 'string'}, 'foo_html': {'type': 'string'},
'nested': { 'nested': {
'type': 'dict', 'type': 'dict',
'schema': { 'schema': {
'bar': {'type': 'string', 'coerce': 'markdown'}, 'bar': {'type': 'string', 'check_with': 'markdown'},
'bar_html': {'type': 'string'}, 'bar_html': {'type': 'string'},
} }
} }

pillar/api/eve_settings.py

@ -1,5 +1,8 @@
import os import os
from pillar.api.node_types.utils import markdown_fields
STORAGE_BACKENDS = ["local", "pillar", "cdnsun", "gcs", "unittest"]
URL_PREFIX = 'api' URL_PREFIX = 'api'
# Enable reads (GET), inserts (POST) and DELETE for resources/collections # Enable reads (GET), inserts (POST) and DELETE for resources/collections
@ -121,12 +124,62 @@ users_schema = {
'service': { 'service': {
'type': 'dict', 'type': 'dict',
'allow_unknown': True, 'allow_unknown': True,
},
'avatar': {
'type': 'dict',
'schema': { 'schema': {
'badger': { 'file': {
'type': 'list', 'type': 'objectid',
'schema': {'type': 'string'} 'data_relation': {
} 'resource': 'files',
} 'field': '_id',
},
},
# For only downloading when things really changed:
'last_downloaded_url': {
'type': 'string',
},
'last_modified': {
'type': 'string',
},
},
},
# Node-specific information for this user.
'nodes': {
'type': 'dict',
'schema': {
# Per watched video info about where the user left off, both in time and in percent.
'view_progress': {
'type': 'dict',
# Keyed by Node ID of the video asset. MongoDB doesn't support using
# ObjectIds as key, so we cast them to string instead.
'keysrules': {'type': 'string'},
'valuesrules': {
'type': 'dict',
'schema': {
'progress_in_sec': {'type': 'float', 'min': 0},
'progress_in_percent': {'type': 'integer', 'min': 0, 'max': 100},
# When the progress was last updated, so we can limit this history to
# the last-watched N videos if we want, or show stuff in chrono order.
'last_watched': {'type': 'datetime'},
# True means progress_in_percent = 100, for easy querying
'done': {'type': 'boolean', 'default': False},
},
},
},
},
},
'badges': {
'type': 'dict',
'schema': {
'html': {'type': 'string'}, # HTML fetched from Blender ID.
'expires': {'type': 'datetime'}, # When we should fetch it again.
},
}, },
# Properties defined by extensions. Extensions should use their name (see the # Properties defined by extensions. Extensions should use their name (see the
@@ -152,12 +205,7 @@ organizations_schema = {
         'maxlength': 128,
         'required': True
     },
-    'description': {
-        'type': 'string',
-        'maxlength': 256,
-        'coerce': 'markdown',
-    },
-    '_description_html': {'type': 'string'},
+    **markdown_fields('description', maxlength=256),
     'website': {
         'type': 'string',
         'maxlength': 256,
@@ -227,7 +275,7 @@ organizations_schema = {
             'start': {'type': 'binary', 'required': True},
             'end': {'type': 'binary', 'required': True},
             'prefix': {'type': 'integer', 'required': True},
-            'human': {'type': 'iprange', 'required': True},
+            'human': {'type': 'string', 'required': True, 'check_with': 'iprange'},
         }
     },
 },
@@ -290,11 +338,7 @@ nodes_schema = {
         'maxlength': 128,
         'required': True,
     },
-    'description': {
-        'type': 'string',
-        'coerce': 'markdown',
-    },
-    '_description_html': {'type': 'string'},
+    **markdown_fields('description'),
     'picture': _file_embedded_schema,
     'order': {
         'type': 'integer',
@@ -327,7 +371,7 @@ nodes_schema = {
     'properties': {
         'type': 'dict',
         'valid_properties': True,
-        'required': True,
+        'required': True
     },
     'permissions': {
         'type': 'dict',
@@ -345,11 +389,11 @@ tokens_schema = {
     },
     'token': {
         'type': 'string',
-        'required': False,
+        'required': True,
     },
     'token_hashed': {
         'type': 'string',
-        'required': True,
+        'required': False,
     },
     'expire_time': {
         'type': 'datetime',
@@ -368,6 +412,13 @@ tokens_schema = {
         'type': 'string',
     },
 },
+    # OAuth scopes granted to this token.
+    'oauth_scopes': {
+        'type': 'list',
+        'default': [],
+        'schema': {'type': 'string'},
+    }
 }

 files_schema = {
@@ -425,7 +476,7 @@ files_schema = {
     'backend': {
         'type': 'string',
         'required': True,
-        'allowed': ["local", "pillar", "cdnsun", "gcs", "unittest"]
+        'allowed': STORAGE_BACKENDS,
     },

     # Where the file is in the backend storage itself. In the case of GCS,
@@ -537,11 +588,7 @@ projects_schema = {
         'maxlength': 128,
         'required': True,
     },
-    'description': {
-        'type': 'string',
-        'coerce': 'markdown',
-    },
-    '_description_html': {'type': 'string'},
+    **markdown_fields('description'),

     # Short summary for the project
     'summary': {
         'type': 'string',
@@ -551,6 +598,8 @@ projects_schema = {
     'picture_square': _file_embedded_schema,
     # Header
     'picture_header': _file_embedded_schema,
+    # Picture with a 16:9 aspect ratio (for Open Graph)
+    'picture_16_9': _file_embedded_schema,
     'header_node': dict(
         nullable=True,
         **_node_embedded_schema
@@ -833,4 +882,9 @@ UPSET_ON_PUT = False  # do not create new document on PUT of non-existant URL.
 X_DOMAINS = '*'
 X_ALLOW_CREDENTIALS = True
 X_HEADERS = 'Authorization'
-XML = False
+RENDERERS = ['eve.render.JSONRenderer']
+
+# TODO(Sybren): this is a quick workaround to make /p/{url}/jstree work again.
+# Apparently Eve is now stricter in checking against MONGO_QUERY_BLACKLIST, and
+# blocks our use of $regex.
+MONGO_QUERY_BLACKLIST = ['$where']

View File

@@ -5,6 +5,7 @@ import mimetypes
 import os
 import pathlib
 import tempfile
+import time
 import typing
 import uuid
 from hashlib import md5

@@ -130,6 +131,67 @@ def _process_image(bucket: Bucket,
     src_file['status'] = 'complete'
def _video_duration_seconds(filename: pathlib.Path) -> typing.Optional[int]:
    """Get the duration of a video file using ffprobe
    https://superuser.com/questions/650291/how-to-get-video-duration-in-seconds

    :param filename: file path to video
    :return: video duration in seconds
    """
    import subprocess

    def run(cli_args):
        if log.isEnabledFor(logging.INFO):
            import shlex
            cmd = ' '.join(shlex.quote(s) for s in cli_args)
            log.info('Calling %s', cmd)

        ffprobe = subprocess.run(
            cli_args,
            stdin=subprocess.DEVNULL,
            stdout=subprocess.PIPE,
            stderr=subprocess.STDOUT,
            timeout=10,  # seconds
        )

        if ffprobe.returncode:
            import shlex
            cmd = ' '.join(shlex.quote(s) for s in cli_args)
            log.error('Error running %s: stopped with return code %i',
                      cmd, ffprobe.returncode)
            log.error('Output was: %s', ffprobe.stdout)
            return None

        try:
            return int(float(ffprobe.stdout))
        except ValueError as e:
            log.exception('ffprobe produced invalid number: %s', ffprobe.stdout)
            return None

    ffprobe_from_container_args = [
        current_app.config['BIN_FFPROBE'],
        '-v', 'error',
        '-show_entries', 'format=duration',
        '-of', 'default=noprint_wrappers=1:nokey=1',
        str(filename),
    ]

    ffprobe_from_stream_args = [
        current_app.config['BIN_FFPROBE'],
        '-v', 'error',
        '-hide_banner',
        '-select_streams', 'v:0',  # we only care about the first video stream
        '-show_entries', 'stream=duration',
        '-of', 'default=noprint_wrappers=1:nokey=1',
        str(filename),
    ]

    duration = run(ffprobe_from_stream_args) or \
               run(ffprobe_from_container_args) or \
               None
    return duration

 def _video_size_pixels(filename: pathlib.Path) -> typing.Tuple[int, int]:
     """Figures out the size (in pixels) of the video file.

@@ -220,8 +282,10 @@ def _process_video(gcs,
     # by determining the video size here we already have this information in the file
     # document before Zencoder calls our notification URL. It also opens up possibilities
     # for other encoding backends that don't support this functionality.
-    video_width, video_height = _video_size_pixels(pathlib.Path(local_file.name))
+    video_path = pathlib.Path(local_file.name)
+    video_width, video_height = _video_size_pixels(video_path)
     capped_video_width, capped_video_height = _video_cap_at_1080(video_width, video_height)
+    video_duration = _video_duration_seconds(video_path)

     # Create variations
     root, _ = os.path.splitext(src_file['file_path'])
@@ -234,12 +298,13 @@ def _process_video(gcs,
         content_type='video/{}'.format(v),
         file_path='{}-{}.{}'.format(root, v, v),
         size='',
-        duration=0,
         width=capped_video_width,
         height=capped_video_height,
         length=0,
         md5='',
     )
+    if video_duration:
+        file_variation['duration'] = video_duration
     # Append file variation. Originally mp4 and webm were the available options,
     # that's why we build a list.
     src_file['variations'].append(file_variation)
@@ -405,7 +470,7 @@ def before_returning_files(response):
         ensure_valid_link(item)

-def ensure_valid_link(response):
+def ensure_valid_link(response: dict) -> None:
     """Ensures the file item has valid file links using generate_link(...)."""

     # Log to function-specific logger, so we can easily turn it off.
@@ -430,12 +495,13 @@ def ensure_valid_link(response):
     generate_all_links(response, now)

-def generate_all_links(response, now):
+def generate_all_links(response: dict, now: datetime.datetime) -> None:
     """Generate a new link for the file and all its variations.

     :param response: the file document that should be updated.
     :param now: datetime that reflects 'now', for consistent expiry generation.
     """
+    assert isinstance(response, dict), f'response must be dict, is {response!r}'

     project_id = str(
         response['project']) if 'project' in response else None
@@ -500,13 +566,10 @@ def on_pre_get_files(_, lookup):
     lookup_expired = lookup.copy()
     lookup_expired['link_expires'] = {'$lte': now}

-    cursor = current_app.data.find('files', parsed_req, lookup_expired)
-    if cursor.count() == 0:
-        return
-
-    log.debug('Updating expired links for %d files that matched lookup %s',
-              cursor.count(), lookup_expired)
-    for file_doc in cursor:
+    cursor, _ = current_app.data.find('files', parsed_req, lookup_expired, perform_count=False)
+    for idx, file_doc in enumerate(cursor):
+        if idx == 0:
+            log.debug('Updating expired links for files that matched lookup %s', lookup_expired)
         # log.debug('Updating expired links for file %r.', file_doc['_id'])
         generate_all_links(file_doc, now)
@@ -530,21 +593,21 @@ def refresh_links_for_project(project_uuid, chunk_size, expiry_seconds):
         'link_expires': {'$lt': expire_before},
     }).sort([('link_expires', pymongo.ASCENDING)]).limit(chunk_size)

-    if to_refresh.count() == 0:
-        log.info('No links to refresh.')
-        return
-
+    refresh_count = 0
     for file_doc in to_refresh:
         log.debug('Refreshing links for file %s', file_doc['_id'])
         generate_all_links(file_doc, now)
+        refresh_count += 1

-    log.info('Refreshed %i links', min(chunk_size, to_refresh.count()))
+    if refresh_count:
+        log.info('Refreshed %i links', refresh_count)

 def refresh_links_for_backend(backend_name, chunk_size, expiry_seconds):
     import gcloud.exceptions

     my_log = log.getChild(f'refresh_links_for_backend.{backend_name}')
+    start_time = time.time()

     # Retrieve expired links.
     files_collection = current_app.data.driver.db['files']
@@ -555,23 +618,27 @@ def refresh_links_for_backend(backend_name, chunk_size, expiry_seconds):
     my_log.info('Limiting to links that expire before %s', expire_before)

     base_query = {'backend': backend_name, '_deleted': {'$ne': True}}
-    to_refresh = files_collection.find(
-        {'$or': [{'link_expires': None, **base_query},
-                 {'link_expires': {'$lt': expire_before}, **base_query},
-                 {'link': None, **base_query}]
-         }).sort([('link_expires', pymongo.ASCENDING)]).limit(
-        chunk_size).batch_size(5)
+    to_refresh_query = {
+        '$or': [{'link_expires': None, **base_query},
+                {'link_expires': {'$lt': expire_before}, **base_query},
+                {'link': None, **base_query}]
+    }

-    document_count = to_refresh.count()
+    document_count = files_collection.count_documents(to_refresh_query)
     if document_count == 0:
         my_log.info('No links to refresh.')
         return

     if 0 < chunk_size == document_count:
-        my_log.info('Found %d documents to refresh, probably limited by the chunk size.',
-                    document_count)
+        my_log.info('Found %d documents to refresh, probably limited by the chunk size %d',
+                    document_count, chunk_size)
     else:
-        my_log.info('Found %d documents to refresh.', document_count)
+        my_log.info('Found %d documents to refresh, chunk size=%d', document_count, chunk_size)
+
+    to_refresh = files_collection.find(to_refresh_query)\
+        .sort([('link_expires', pymongo.ASCENDING)])\
+        .limit(chunk_size)\
+        .batch_size(5)

     refreshed = 0
     report_chunks = min(max(5, document_count // 25), 100)
@@ -583,7 +650,7 @@ def refresh_links_for_backend(backend_name, chunk_size, expiry_seconds):
             my_log.debug('Skipping file %s, it has no project.', file_id)
             continue

-        count = proj_coll.count({'_id': project_id, '$or': [
+        count = proj_coll.count_documents({'_id': project_id, '$or': [
             {'_deleted': {'$exists': False}},
             {'_deleted': False},
         ]})
@@ -615,8 +682,10 @@ def refresh_links_for_backend(backend_name, chunk_size, expiry_seconds):
                         'links', refreshed)
             return

-    my_log.info('Refreshed %i links', refreshed)
+    if refreshed % report_chunks != 0:
+        my_log.info('Refreshed %i links', refreshed)
+    my_log.info('Refresh took %s', datetime.timedelta(seconds=time.time() - start_time))

 @require_login()
 def create_file_doc(name, filename, content_type, length, project,
@@ -752,6 +821,10 @@ def stream_to_storage(project_id: str):
     local_file = uploaded_file.stream
     result = upload_and_process(local_file, uploaded_file, project_id)

+    # Local processing is done, we can close the local file so it is removed.
+    local_file.close()
+
     resp = jsonify(result)
     resp.status_code = result['status_code']
     add_access_control_headers(resp)
@@ -760,7 +833,9 @@ def stream_to_storage(project_id: str):
 def upload_and_process(local_file: typing.Union[io.BytesIO, typing.BinaryIO],
                        uploaded_file: werkzeug.datastructures.FileStorage,
-                       project_id: str):
+                       project_id: str,
+                       *,
+                       may_process_file=True) -> dict:
     # Figure out the file size, as we need to pass this in explicitly to GCloud.
     # Otherwise it always uses os.fstat(file_obj.fileno()).st_size, which isn't
     # supported by a BytesIO object (even though it does have a fileno
@@ -787,18 +862,15 @@ def upload_and_process(local_file: typing.Union[io.BytesIO, typing.BinaryIO],
               'size=%i as "queued_for_processing"',
               file_id, internal_fname, file_size)
     update_file_doc(file_id,
-                    status='queued_for_processing',
+                    status='queued_for_processing' if may_process_file else 'complete',
                     file_path=internal_fname,
                     length=blob.size,
                     content_type=uploaded_file.mimetype)

-    log.debug('Processing uploaded file id=%s, fname=%s, size=%i', file_id,
-              internal_fname, blob.size)
-    process_file(bucket, file_id, local_file)
-
-    # Local processing is done, we can close the local file so it is removed.
-    if local_file is not None:
-        local_file.close()
+    if may_process_file:
+        log.debug('Processing uploaded file id=%s, fname=%s, size=%i', file_id,
+                  internal_fname, blob.size)
+        process_file(bucket, file_id, local_file)

     log.debug('Handled uploaded file id=%s, fname=%s, size=%i, status=%i',
               file_id, internal_fname, blob.size, status)
@@ -912,7 +984,50 @@ def compute_aggregate_length_items(file_docs):
         compute_aggregate_length(file_doc)

+def get_file_url(file_id: ObjectId, variation='') -> str:
+    """Return the URL of a file in storage.
+
+    Note that this function is cached, see setup_app().
+
+    :param file_id: the ID of the file
+    :param variation: if non-empty, indicates the variation of the file
+        to return the URL for; if empty, returns the URL of the original.
+
+    :return: the URL, or an empty string if the file/variation does not exist.
+    """
+    file_coll = current_app.db('files')
+    db_file = file_coll.find_one({'_id': file_id})
+    if not db_file:
+        return ''
+
+    ensure_valid_link(db_file)
+
+    if variation:
+        variations = db_file.get('variations', ())
+        for file_var in variations:
+            if file_var['size'] == variation:
+                return file_var['link']
+        return ''
+
+    return db_file['link']
+
+def update_file_doc(file_id, **updates):
+    files = current_app.data.driver.db['files']
+    res = files.update_one({'_id': ObjectId(file_id)},
+                           {'$set': updates})
+    log.debug('update_file_doc(%s, %s): %i matched, %i updated.',
+              file_id, updates, res.matched_count, res.modified_count)
+    return res
+
 def setup_app(app, url_prefix):
+    global get_file_url
+    cached = app.cache.memoize(timeout=10)
+    get_file_url = cached(get_file_url)
+
     app.on_pre_GET_files += on_pre_get_files
     app.on_fetched_item_files += before_returning_file
@@ -923,12 +1038,3 @@ def setup_app(app, url_prefix):
     app.on_insert_files += compute_aggregate_length_items

     app.register_api_blueprint(file_storage, url_prefix=url_prefix)
-
-def update_file_doc(file_id, **updates):
-    files = current_app.data.driver.db['files']
-    res = files.update_one({'_id': ObjectId(file_id)},
-                           {'$set': updates})
-    log.debug('update_file_doc(%s, %s): %i matched, %i updated.',
-              file_id, updates, res.matched_count, res.modified_count)
-    return res
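
Because setup_app() rebinds get_file_url() to a memoized wrapper, repeated calls within the 10-second window return the cached URL. A usage sketch (made-up ID, assumes a Flask app context):

    from bson import ObjectId

    url = get_file_url(ObjectId('5cf1d7e1a0a5c4a8b4b4b4b6'), variation='m')
    if not url:
        print('file or variation not found')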

View File

@@ -90,12 +90,11 @@ class Blob(metaclass=abc.ABCMeta):
     def __init__(self, name: str, bucket: Bucket) -> None:
         self.name = name
+        """Name of this blob in the bucket."""
+
         self.bucket = bucket
         self._size_in_bytes: typing.Optional[int] = None

-        self.filename: str = None
-        """Name of the file for the Content-Disposition header when downloading it."""
-
         self._log = logging.getLogger(f'{__name__}.Blob')

     def __repr__(self):
@@ -133,12 +132,19 @@ class Blob(metaclass=abc.ABCMeta):
                              file_size=file_size)

     @abc.abstractmethod
-    def update_filename(self, filename: str):
+    def update_filename(self, filename: str, *, is_attachment=True):
         """Sets the filename which is used when downloading the file.

         Not all storage backends support this, and will use the on-disk filename instead.
         """

+    @abc.abstractmethod
+    def update_content_type(self, content_type: str, content_encoding: str = ''):
+        """Set the content type (and optionally content encoding).
+
+        Not all storage backends support this.
+        """
+
     @abc.abstractmethod
     def get_url(self, *, is_public: bool) -> str:
         """Returns the URL to access this blob.

View File

@@ -174,7 +174,7 @@ class GoogleCloudStorageBlob(Blob):
         self.gblob.reload()
         self._size_in_bytes = self.gblob.size

-    def update_filename(self, filename: str):
+    def update_filename(self, filename: str, *, is_attachment=True):
         """Set the ContentDisposition metadata so that when a file is downloaded
         it has a human-readable name.
         """
@@ -182,7 +182,17 @@ class GoogleCloudStorageBlob(Blob):
         if '"' in filename:
             raise ValueError(f'Filename is not allowed to have double quote in it: {filename!r}')

-        self.gblob.content_disposition = f'attachment; filename="{filename}"'
+        if is_attachment:
+            self.gblob.content_disposition = f'attachment; filename="{filename}"'
+        else:
+            self.gblob.content_disposition = f'filename="{filename}"'
+        self.gblob.patch()
+
+    def update_content_type(self, content_type: str, content_encoding: str = ''):
+        """Set the content type (and optionally content encoding)."""
+        self.gblob.content_type = content_type
+        self.gblob.content_encoding = content_encoding
         self.gblob.patch()

     def get_url(self, *, is_public: bool) -> str:

View File

@@ -113,10 +113,13 @@ class LocalBlob(Blob):
         self._size_in_bytes = file_size

-    def update_filename(self, filename: str):
+    def update_filename(self, filename: str, *, is_attachment=True):
         # TODO: implement this for local storage.
         self._log.info('update_filename(%r) not supported', filename)

+    def update_content_type(self, content_type: str, content_encoding: str = ''):
+        self._log.info('update_content_type(%r, %r) not supported', content_type, content_encoding)
+
     def make_public(self):
         # No-op on this storage backend.
         pass

View File

@@ -29,7 +29,6 @@ def latest_nodes(db_filter, projection, limit):
     proj = {
         '_created': 1,
         '_updated': 1,
-        'user.full_name': 1,
         'project._id': 1,
         'project.url': 1,
         'project.name': 1,
@@ -70,6 +69,7 @@ def latest_assets():
         {'name': 1, 'node_type': 1,
          'parent': 1, 'picture': 1, 'properties.status': 1,
          'properties.content_type': 1,
+         'properties.duration_seconds': 1,
          'permissions.world': 1},
         12)

@@ -80,7 +80,7 @@ def latest_assets():
 def latest_comments():
     latest = latest_nodes({'node_type': 'comment',
                            'properties.status': 'published'},
-                          {'parent': 1,
+                          {'parent': 1, 'user.full_name': 1,
                            'properties.content': 1, 'node_type': 1,
                            'properties.status': 1,
                            'properties.is_reply': 1},

View File

@@ -94,17 +94,10 @@ def generate_and_store_token(user_id, days=15, prefix=b'') -> dict:
     # Use 'xy' as altchars to prevent + and / characters from appearing.
     # We never have to b64decode the string anyway.
-    token_bytes = prefix + base64.b64encode(random_bits, altchars=b'xy').strip(b'=')
-    token = token_bytes.decode('ascii')
+    token = prefix + base64.b64encode(random_bits, altchars=b'xy').strip(b'=')

     token_expiry = utcnow() + datetime.timedelta(days=days)
-    token_data = store_token(user_id, token, token_expiry)
-
-    # Include the token in the returned document so that it can be stored client-side,
-    # in configuration, etc.
-    token_data['token'] = token
-
-    return token_data
+    return store_token(user_id, token.decode('ascii'), token_expiry)

 def hash_password(password: str, salt: typing.Union[str, bytes]) -> str:

View File

@@ -11,26 +11,17 @@ ATTACHMENT_SLUG_REGEX = r'[a-zA-Z0-9_\-]+'
 attachments_embedded_schema = {
     'type': 'dict',
-    # TODO: will be renamed to 'keyschema' in Cerberus 1.0
-    'propertyschema': {
+    'keysrules': {
         'type': 'string',
         'regex': '^%s$' % ATTACHMENT_SLUG_REGEX,
     },
-    'valueschema': {
+    'valuesrules': {
         'type': 'dict',
         'schema': {
             'oid': {
                 'type': 'objectid',
                 'required': True,
             },
-            'link': {
-                'type': 'string',
-                'allowed': ['self', 'none', 'custom'],
-                'default': 'self',
-            },
-            'link_custom': {
-                'type': 'string',
-            },
             'collection': {
                 'type': 'string',
                 'allowed': ['files'],
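
Under the new keysrules/valuesrules spelling, a valid attachments value would look like this sketch (ObjectId made up):

    from bson import ObjectId

    attachments = {
        'header-image_01': {  # the key must match ATTACHMENT_SLUG_REGEX
            'oid': ObjectId('5cf1d7e1a0a5c4a8b4b4b4b7'),
            'collection': 'files',
        },
    }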

View File

@@ -24,6 +24,10 @@ node_type_asset = {
         'content_type': {
             'type': 'string'
         },
+        # The duration of a video asset in seconds.
+        'duration_seconds': {
+            'type': 'integer'
+        },
         # We point to the original file (and use it to extract any relevant
         # variation useful for our scope).
         'file': _file_embedded_schema,
@@ -58,6 +62,7 @@ node_type_asset = {
     },
     'form_schema': {
         'content_type': {'visible': False},
+        'duration_seconds': {'visible': False},
         'order': {'visible': False},
         'tags': {'visible': False},
         'categories': {'visible': False},

View File

@@ -1,15 +1,15 @@
 from pillar.api.node_types import attachments_embedded_schema
+from pillar.api.node_types.utils import markdown_fields

 node_type_comment = {
     'name': 'comment',
     'description': 'Comments for asset nodes, pages, etc.',
     'dyn_schema': {
         # The actual comment content
-        'content': {
-            'type': 'string',
-            'minlength': 5,
-            'required': True,
-            'coerce': 'markdown',
-        },
-        '_content_html': {'type': 'string'},
+        **markdown_fields(
+            'content',
+            minlength=5,
+            required=True),
         'status': {
             'type': 'string',
             'allowed': [
@@ -51,7 +51,8 @@ node_type_comment = {
             }
         },
         'confidence': {'type': 'float'},
-        'is_reply': {'type': 'boolean'}
+        'is_reply': {'type': 'boolean'},
+        'attachments': attachments_embedded_schema,
     },
     'form_schema': {},
     'parent': ['asset', 'comment'],

View File

@@ -3,7 +3,7 @@ node_type_group = {
     'description': 'Folder node type',
     'parent': ['group', 'project'],
     'dyn_schema': {
+        # Used for sorting within the context of a group
         'order': {
             'type': 'integer'
         },
@@ -20,7 +20,8 @@ node_type_group = {
         'notes': {
             'type': 'string',
             'maxlength': 256,
-        },
+        }
     },
     'form_schema': {
         'url': {'visible': False},

View File

@@ -1,17 +1,14 @@
 from pillar.api.node_types import attachments_embedded_schema
+from pillar.api.node_types.utils import markdown_fields

 node_type_post = {
     'name': 'post',
     'description': 'A blog post, for any project',
     'dyn_schema': {
-        'content': {
-            'type': 'string',
-            'minlength': 5,
-            'maxlength': 90000,
-            'required': True,
-            'coerce': 'markdown',
-        },
-        '_content_html': {'type': 'string'},
+        **markdown_fields('content',
+                          minlength=5,
+                          maxlength=90000,
+                          required=True),
         'status': {
             'type': 'string',
             'allowed': [

View File

@@ -0,0 +1,34 @@
from pillar import markdown

def markdown_fields(field: str, **kwargs) -> dict:
    """
    Creates a field for the markdown, and a field for the cached html.

    Example usage:
    schema = {'myDoc': {
        'type': 'list',
        'schema': {
            'type': 'dict',
            'schema': {
                **markdown_fields('content', required=True),
            }
        },
    }}

    :param field: name of the field that holds the Markdown source
    :return: schema fragment for the field and its cached-HTML companion
    """
    cache_field = markdown.cache_field_name(field)
    return {
        field: {
            'type': 'string',
            **kwargs
        },
        cache_field: {
            'type': 'string',
            'readonly': True,
            'default': field,  # Name of the field containing the markdown. Will be input to the coerce function.
            'coerce': 'markdown',
        }
    }
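
Assuming markdown.cache_field_name() maps 'content' to '_content_html' (as the schemas elsewhere in this changeset suggest), a call like markdown_fields('content', required=True) expands to roughly:

    {
        'content': {'type': 'string', 'required': True},
        '_content_html': {
            'type': 'string',
            'readonly': True,
            'default': 'content',  # fed to the 'markdown' coerce function
            'coerce': 'markdown',
        },
    }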

View File

@@ -1,56 +1,19 @@
 import base64
-import functools
+import datetime
 import logging
-import urllib.parse

 import pymongo.errors
 import werkzeug.exceptions as wz_exceptions
-from bson import ObjectId
 from flask import current_app, Blueprint, request

-from pillar.api.activities import activity_subscribe, activity_object_add
-from pillar.api.node_types import PILLAR_NAMED_NODE_TYPES
-from pillar.api.file_storage_backends.gcs import update_file_name
+from pillar.api.nodes import eve_hooks, comments, activities
 from pillar.api.utils import str2id, jsonify
 from pillar.api.utils.authorization import check_permissions, require_login
+from pillar.web.utils import pretty_date

 log = logging.getLogger(__name__)
 blueprint = Blueprint('nodes_api', __name__)
-ROLES_FOR_SHARING = {'subscriber', 'demo'}
-
-
-def only_for_node_type_decorator(*required_node_type_names):
-    """Returns a decorator that checks its first argument's node type.
-
-    If the node type is not of the required node type, returns None,
-    otherwise calls the wrapped function.
-
-    >>> deco = only_for_node_type_decorator('comment')
-    >>> @deco
-    ... def handle_comment(node): pass
-
-    >>> deco = only_for_node_type_decorator('comment', 'post')
-    >>> @deco
-    ... def handle_comment_or_post(node): pass
-
-    """
-    # Convert to a set for efficient 'x in required_node_type_names' queries.
-    required_node_type_names = set(required_node_type_names)
-
-    def only_for_node_type(wrapped):
-        @functools.wraps(wrapped)
-        def wrapper(node, *args, **kwargs):
-            if node.get('node_type') not in required_node_type_names:
-                return
-            return wrapped(node, *args, **kwargs)
-
-        return wrapper
-
-    only_for_node_type.__doc__ = "Decorator, immediately returns when " \
-                                 "the first argument is not of type %s." % required_node_type_names
-    return only_for_node_type
+ROLES_FOR_SHARING = ROLES_FOR_COMMENTING = {'subscriber', 'demo'}

 @blueprint.route('/<node_id>/share', methods=['GET', 'POST'])
@@ -85,7 +48,121 @@ def share_node(node_id):
     else:
         return '', 204

-    return jsonify(short_link_info(short_code), status=status)
+    return jsonify(eve_hooks.short_link_info(short_code), status=status)
@blueprint.route('/<string(length=24):node_path>/comments', methods=['GET'])
def get_node_comments(node_path: str):
node_id = str2id(node_path)
return comments.get_node_comments(node_id)
@blueprint.route('/<string(length=24):node_path>/comments', methods=['POST'])
@require_login(require_roles=ROLES_FOR_COMMENTING)
def post_node_comment(node_path: str):
node_id = str2id(node_path)
msg = request.json['msg']
attachments = request.json.get('attachments', {})
return comments.post_node_comment(node_id, msg, attachments)
@blueprint.route('/<string(length=24):node_path>/comments/<string(length=24):comment_path>', methods=['PATCH'])
@require_login(require_roles=ROLES_FOR_COMMENTING)
def patch_node_comment(node_path: str, comment_path: str):
node_id = str2id(node_path)
comment_id = str2id(comment_path)
msg = request.json['msg']
attachments = request.json.get('attachments', {})
return comments.patch_node_comment(node_id, comment_id, msg, attachments)
@blueprint.route('/<string(length=24):node_path>/comments/<string(length=24):comment_path>/vote', methods=['POST'])
@require_login(require_roles=ROLES_FOR_COMMENTING)
def post_node_comment_vote(node_path: str, comment_path: str):
node_id = str2id(node_path)
comment_id = str2id(comment_path)
vote_str = request.json['vote']
vote = int(vote_str)
return comments.post_node_comment_vote(node_id, comment_id, vote)
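
A client-side sketch of the new comment endpoint (host, node ID and token are made up; uses the third-party requests package):

    import requests

    resp = requests.post(
        'https://cloud.example.org/api/nodes/5cf1d7e1a0a5c4a8b4b4b4b8/comments',
        json={'msg': 'Looks great!', 'attachments': {}},
        headers={'Authorization': 'Bearer <token>'},
    )
    print(resp.status_code)  # 201 on success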
@blueprint.route('/<string(length=24):node_path>/activities', methods=['GET'])
def activities_for_node(node_path: str):
node_id = str2id(node_path)
return jsonify(activities.for_node(node_id))
@blueprint.route('/tagged/')
@blueprint.route('/tagged/<tag>')
def tagged(tag=''):
"""Return all tagged nodes of public projects as JSON."""
from pillar.auth import current_user
# We explicitly register the tagless endpoint to raise a 404, otherwise the PATCH
# handler on /api/nodes/<node_id> will return a 405 Method Not Allowed.
if not tag:
raise wz_exceptions.NotFound()
# Build the (cached) list of tagged nodes
agg_list = _tagged(tag)
for node in agg_list:
if node['properties'].get('duration_seconds'):
node['properties']['duration'] = datetime.timedelta(seconds=node['properties']['duration_seconds'])
if node.get('_created') is not None:
node['pretty_created'] = pretty_date(node['_created'])
# If the user is anonymous, no more information is needed and we return
if current_user.is_anonymous:
return jsonify(agg_list)
# If the user is authenticated, attach view_progress for video assets
view_progress = current_user.nodes['view_progress']
for node in agg_list:
node_id = str(node['_id'])
# View progress should be added only for nodes of type 'asset' and
# with content_type 'video', only if the video was already in the watched
# list for the current user.
if node_id in view_progress:
node['view_progress'] = view_progress[node_id]
return jsonify(agg_list)
def _tagged(tag: str):
"""Fetch all public nodes with the given tag.
This function is cached, see setup_app().
"""
nodes_coll = current_app.db('nodes')
agg = nodes_coll.aggregate([
{'$match': {'properties.tags': tag,
'_deleted': {'$ne': True}}},
# Only get nodes from public projects. This is done after matching the
# tagged nodes, because most likely nobody else will be able to tag
# nodes anyway.
{'$lookup': {
'from': 'projects',
'localField': 'project',
'foreignField': '_id',
'as': '_project',
}},
{'$unwind': '$_project'},
{'$match': {'_project.is_private': False}},
{'$addFields': {
'project._id': '$_project._id',
'project.name': '$_project.name',
'project.url': '$_project.url',
}},
# Don't return the entire project/file for each node.
{'$project': {'_project': False}},
{'$sort': {'_created': -1}}
])
return list(agg)
 def generate_and_store_short_code(node):
@@ -163,265 +240,35 @@ def create_short_code(node) -> str:
     return short_code
def short_link_info(short_code):
"""Returns the short link info in a dict."""
short_link = urllib.parse.urljoin(
current_app.config['SHORT_LINK_BASE_URL'], short_code)
return {
'short_code': short_code,
'short_link': short_link,
}
def before_replacing_node(item, original):
check_permissions('nodes', original, 'PUT')
update_file_name(item)
def after_replacing_node(item, original):
"""Push an update to the Algolia index when a node item is updated. If the
project is private, prevent public indexing.
"""
from pillar.celery import search_index_tasks as index
projects_collection = current_app.data.driver.db['projects']
project = projects_collection.find_one({'_id': item['project']})
if project.get('is_private', False):
# Skip index updating and return
return
status = item['properties'].get('status', 'unpublished')
node_id = str(item['_id'])
if status == 'published':
index.node_save.delay(node_id)
else:
index.node_delete.delay(node_id)
def before_inserting_nodes(items):
"""Before inserting a node in the collection we check if the user is allowed
and we append the project id to it.
"""
from pillar.auth import current_user
nodes_collection = current_app.data.driver.db['nodes']
def find_parent_project(node):
"""Recursive function that finds the ultimate parent of a node."""
if node and 'parent' in node:
parent = nodes_collection.find_one({'_id': node['parent']})
return find_parent_project(parent)
if node:
return node
else:
return None
for item in items:
check_permissions('nodes', item, 'POST')
if 'parent' in item and 'project' not in item:
parent = nodes_collection.find_one({'_id': item['parent']})
project = find_parent_project(parent)
if project:
item['project'] = project['_id']
# Default the 'user' property to the current user.
item.setdefault('user', current_user.user_id)
def after_inserting_nodes(items):
for item in items:
# Skip subscriptions for first level items (since the context is not a
# node, but a project).
# TODO: support should be added for mixed context
if 'parent' not in item:
return
context_object_id = item['parent']
if item['node_type'] == 'comment':
nodes_collection = current_app.data.driver.db['nodes']
parent = nodes_collection.find_one({'_id': item['parent']})
# Always subscribe to the parent node
activity_subscribe(item['user'], 'node', item['parent'])
if parent['node_type'] == 'comment':
# If the parent is a comment, we provide its own parent as
# context. We do this in order to point the user to an asset
# or group when viewing the notification.
verb = 'replied'
context_object_id = parent['parent']
# Subscribe to the parent of the parent comment (post or group)
activity_subscribe(item['user'], 'node', parent['parent'])
else:
activity_subscribe(item['user'], 'node', item['_id'])
verb = 'commented'
elif item['node_type'] in PILLAR_NAMED_NODE_TYPES:
verb = 'posted'
activity_subscribe(item['user'], 'node', item['_id'])
else:
# Don't automatically create activities for non-Pillar node types,
# as we don't know what would be a suitable verb (among other things).
continue
activity_object_add(
item['user'],
verb,
'node',
item['_id'],
'node',
context_object_id
)
def deduct_content_type(node_doc, original=None):
"""Deduct the content type from the attached file, if any."""
if node_doc['node_type'] != 'asset':
log.debug('deduct_content_type: called on node type %r, ignoring', node_doc['node_type'])
return
node_id = node_doc.get('_id')
try:
file_id = ObjectId(node_doc['properties']['file'])
except KeyError:
if node_id is None:
# Creation of a file-less node is allowed, but updates aren't.
return
log.warning('deduct_content_type: Asset without properties.file, rejecting.')
raise wz_exceptions.UnprocessableEntity('Missing file property for asset node')
files = current_app.data.driver.db['files']
file_doc = files.find_one({'_id': file_id},
{'content_type': 1})
if not file_doc:
log.warning('deduct_content_type: Node %s refers to non-existing file %s, rejecting.',
node_id, file_id)
raise wz_exceptions.UnprocessableEntity('File property refers to non-existing file')
# Guess the node content type from the file content type
file_type = file_doc['content_type']
if file_type.startswith('video/'):
content_type = 'video'
elif file_type.startswith('image/'):
content_type = 'image'
else:
content_type = 'file'
node_doc['properties']['content_type'] = content_type
def nodes_deduct_content_type(nodes):
for node in nodes:
deduct_content_type(node)
def before_returning_node(node):
# Run validation process, since GET on nodes entry point is public
check_permissions('nodes', node, 'GET', append_allowed_methods=True)
# Embed short_link_info if the node has a short_code.
short_code = node.get('short_code')
if short_code:
node['short_link'] = short_link_info(short_code)['short_link']
def before_returning_nodes(nodes):
for node in nodes['_items']:
before_returning_node(node)
def node_set_default_picture(node, original=None):
"""Uses the image of an image asset or colour map of texture node as picture."""
if node.get('picture'):
log.debug('Node %s already has a picture, not overriding', node.get('_id'))
return
node_type = node.get('node_type')
props = node.get('properties', {})
content = props.get('content_type')
if node_type == 'asset' and content == 'image':
image_file_id = props.get('file')
elif node_type == 'texture':
# Find the colour map, defaulting to the first image map available.
image_file_id = None
for image in props.get('files', []):
if image_file_id is None or image.get('map_type') == 'color':
image_file_id = image.get('file')
else:
log.debug('Not setting default picture on node type %s content type %s',
node_type, content)
return
if image_file_id is None:
log.debug('Nothing to set the picture to.')
return
log.debug('Setting default picture for node %s to %s', node.get('_id'), image_file_id)
node['picture'] = image_file_id
def nodes_set_default_picture(nodes):
for node in nodes:
node_set_default_picture(node)
def before_deleting_node(node: dict):
check_permissions('nodes', node, 'DELETE')
def after_deleting_node(item):
from pillar.celery import search_index_tasks as index
index.node_delete.delay(str(item['_id']))
only_for_textures = only_for_node_type_decorator('texture')
@only_for_textures
def texture_sort_files(node, original=None):
"""Sort files alphabetically by map type, with colour map first."""
try:
files = node['properties']['files']
except KeyError:
return
# Sort the map types alphabetically, ensuring 'color' comes first.
as_dict = {f['map_type']: f for f in files}
types = sorted(as_dict.keys(), key=lambda k: '\0' if k == 'color' else k)
node['properties']['files'] = [as_dict[map_type] for map_type in types]
def textures_sort_files(nodes):
for node in nodes:
texture_sort_files(node)
 def setup_app(app, url_prefix):
+    global _tagged
+    cached = app.cache.memoize(timeout=300)
+    _tagged = cached(_tagged)
+
     from . import patch
     patch.setup_app(app, url_prefix=url_prefix)

-    app.on_fetched_item_nodes += before_returning_node
-    app.on_fetched_resource_nodes += before_returning_nodes
-    app.on_replace_nodes += before_replacing_node
-    app.on_replace_nodes += texture_sort_files
-    app.on_replace_nodes += deduct_content_type
-    app.on_replace_nodes += node_set_default_picture
-    app.on_replaced_nodes += after_replacing_node
-    app.on_insert_nodes += before_inserting_nodes
-    app.on_insert_nodes += nodes_deduct_content_type
-    app.on_insert_nodes += nodes_set_default_picture
-    app.on_insert_nodes += textures_sort_files
-    app.on_inserted_nodes += after_inserting_nodes
-    app.on_update_nodes += texture_sort_files
-    app.on_delete_item_nodes += before_deleting_node
-    app.on_deleted_item_nodes += after_deleting_node
+    app.on_fetched_item_nodes += eve_hooks.before_returning_node
+    app.on_fetched_resource_nodes += eve_hooks.before_returning_nodes
+    app.on_replace_nodes += eve_hooks.before_replacing_node
+    app.on_replace_nodes += eve_hooks.texture_sort_files
+    app.on_replace_nodes += eve_hooks.deduct_content_type_and_duration
+    app.on_replace_nodes += eve_hooks.node_set_default_picture
+    app.on_replaced_nodes += eve_hooks.after_replacing_node
+    app.on_insert_nodes += eve_hooks.before_inserting_nodes
+    app.on_insert_nodes += eve_hooks.nodes_deduct_content_type_and_duration
+    app.on_insert_nodes += eve_hooks.nodes_set_default_picture
+    app.on_insert_nodes += eve_hooks.textures_sort_files
+    app.on_inserted_nodes += eve_hooks.after_inserting_nodes
+    app.on_update_nodes += eve_hooks.texture_sort_files
+    app.on_delete_item_nodes += eve_hooks.before_deleting_node
+    app.on_deleted_item_nodes += eve_hooks.after_deleting_node

     app.register_api_blueprint(blueprint, url_prefix=url_prefix)
+    activities.setup_app(app)

View File

@@ -0,0 +1,43 @@
from eve.methods import get

import pillar.api.users.avatar

def for_node(node_id):
    activities, _, _, status, _ = \
        get('activities',
            {
                '$or': [
                    {'object_type': 'node',
                     'object': node_id},
                    {'context_object_type': 'node',
                     'context_object': node_id},
                ],
            },)
    for act in activities['_items']:
        act['actor_user'] = _user_info(act['actor_user'])
    return activities

def _user_info(user_id):
    users, _, _, status, _ = get('users', {'_id': user_id})
    if len(users['_items']) > 0:
        user = users['_items'][0]
        user['avatar'] = pillar.api.users.avatar.url(user)

        public_fields = {'full_name', 'username', 'avatar'}
        for field in list(user.keys()):
            if field not in public_fields:
                del user[field]

        return user
    return {}

def setup_app(app):
    global _user_info

    decorator = app.cache.memoize(timeout=300, make_name='%s.public_user_info' % __name__)
    _user_info = decorator(_user_info)
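
The setup_app() rebinding above follows the same memoize pattern as get_file_url() and _tagged(); in miniature (sketch, assumes a Flask-Caching style cache on the app):

    def _expensive(arg):
        return arg * 2  # stands in for a slow query

    def setup_app(app):
        global _expensive
        # Swap the module-level function for a cached wrapper once the app,
        # and therefore its cache, is available.
        _expensive = app.cache.memoize(timeout=300)(_expensive)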

View File

@@ -0,0 +1,302 @@
import logging
from datetime import datetime
import pymongo
import typing
import bson
import attr
import werkzeug.exceptions as wz_exceptions
import pillar
from pillar import current_app, shortcodes
import pillar.api.users.avatar
from pillar.api.nodes.custom.comment import patch_comment
from pillar.api.utils import jsonify
from pillar.auth import current_user
import pillar.markdown
log = logging.getLogger(__name__)
@attr.s(auto_attribs=True)
class UserDO:
id: str
full_name: str
avatar_url: str
badges_html: str
@attr.s(auto_attribs=True)
class CommentPropertiesDO:
attachments: typing.Dict
rating_positive: int = 0
rating_negative: int = 0
@attr.s(auto_attribs=True)
class CommentDO:
id: bson.ObjectId
parent: bson.ObjectId
project: bson.ObjectId
user: UserDO
msg_html: str
msg_markdown: str
properties: CommentPropertiesDO
created: datetime
updated: datetime
etag: str
replies: typing.List['CommentDO'] = []
current_user_rating: typing.Optional[bool] = None
@attr.s(auto_attribs=True)
class CommentTreeDO:
node_id: bson.ObjectId
project: bson.ObjectId
nbr_of_comments: int = 0
comments: typing.List[CommentDO] = []
def _get_markdowned_html(document: dict, field_name: str) -> str:
cache_field_name = pillar.markdown.cache_field_name(field_name)
html = document.get(cache_field_name)
if html is None:
markdown_src = document.get(field_name) or ''
html = pillar.markdown.markdown(markdown_src)
return html
def jsonify_data_object(data_object: attr):
return jsonify(
attr.asdict(data_object,
recurse=True)
)
class CommentTreeBuilder:
def __init__(self, node_id: bson.ObjectId):
self.node_id = node_id
self.nbr_of_Comments: int = 0
def build(self) -> CommentTreeDO:
enriched_comments = self.child_comments(
self.node_id,
sort={'properties.rating_positive': pymongo.DESCENDING,
'_created': pymongo.DESCENDING})
project_id = self.get_project_id()
return CommentTreeDO(
node_id=self.node_id,
project=project_id,
nbr_of_comments=self.nbr_of_Comments,
comments=enriched_comments
)
def child_comments(self, node_id: bson.ObjectId, sort: dict) -> typing.List[CommentDO]:
raw_comments = self.mongodb_comments(node_id, sort)
return [self.enrich(comment) for comment in raw_comments]
def enrich(self, mongo_comment: dict) -> CommentDO:
self.nbr_of_Comments += 1
comment = to_comment_data_object(mongo_comment)
comment.replies = self.child_comments(mongo_comment['_id'],
sort={'_created': pymongo.ASCENDING})
return comment
def get_project_id(self):
nodes_coll = current_app.db('nodes')
result = nodes_coll.find_one({'_id': self.node_id})
return result['project']
@classmethod
def mongodb_comments(cls, node_id: bson.ObjectId, sort: dict) -> typing.Iterator:
nodes_coll = current_app.db('nodes')
return nodes_coll.aggregate([
{'$match': {'node_type': 'comment',
'_deleted': {'$ne': True},
'properties.status': 'published',
'parent': node_id}},
{'$lookup': {"from": "users",
"localField": "user",
"foreignField": "_id",
"as": "user"}},
{'$unwind': {'path': "$user"}},
{'$sort': sort},
])
def get_node_comments(node_id: bson.ObjectId):
comments_tree = CommentTreeBuilder(node_id).build()
return jsonify_data_object(comments_tree)
def post_node_comment(parent_id: bson.ObjectId, markdown_msg: str, attachments: dict):
parent_node = find_node_or_raise(parent_id,
'User %s tried to update comment with bad parent_id %s',
current_user.objectid,
parent_id)
is_reply = parent_node['node_type'] == 'comment'
comment = dict(
parent=parent_id,
project=parent_node['project'],
name='Comment',
user=current_user.objectid,
node_type='comment',
properties=dict(
content=markdown_msg,
status='published',
is_reply=is_reply,
confidence=0,
rating_positive=0,
rating_negative=0,
attachments=attachments,
),
permissions=dict(
users=[dict(
user=current_user.objectid,
methods=['PUT'])
]
)
)
r, _, _, status = current_app.post_internal('nodes', comment)
if status != 201:
log.warning('Unable to post comment on %s as %s: %s',
parent_id, current_user.objectid, r)
raise wz_exceptions.InternalServerError('Unable to create comment')
comment_do = get_comment(parent_id, r['_id'])
return jsonify_data_object(comment_do), 201
def find_node_or_raise(node_id, *args):
nodes_coll = current_app.db('nodes')
node_to_comment = nodes_coll.find_one({
'_id': node_id,
'_deleted': {'$ne': True},
})
if not node_to_comment:
log.warning(args)
raise wz_exceptions.UnprocessableEntity()
return node_to_comment
def patch_node_comment(parent_id: bson.ObjectId,
comment_id: bson.ObjectId,
markdown_msg: str,
attachments: dict):
_, _ = find_parent_and_comment_or_raise(parent_id, comment_id)
patch = dict(
op='edit',
content=markdown_msg,
attachments=attachments
)
json_result = patch_comment(comment_id, patch)
if json_result.json['result'] != 200:
raise wz_exceptions.InternalServerError('Failed to update comment')
comment_do = get_comment(parent_id, comment_id)
return jsonify_data_object(comment_do), 200
def find_parent_and_comment_or_raise(parent_id, comment_id):
parent = find_node_or_raise(parent_id,
'User %s tried to update comment with bad parent_id %s',
current_user.objectid,
parent_id)
comment = find_node_or_raise(comment_id,
'User %s tried to update comment with bad id %s',
current_user.objectid,
comment_id)
validate_comment_parent_relation(comment, parent)
return parent, comment
def validate_comment_parent_relation(comment, parent):
if comment['parent'] != parent['_id']:
log.warning('User %s tried to update comment with bad parent/comment pair.'
' parent_id: %s comment_id: %s',
current_user.objectid, parent['_id'], comment['_id'])
raise wz_exceptions.BadRequest()
def get_comment(parent_id: bson.ObjectId, comment_id: bson.ObjectId) -> CommentDO:
nodes_coll = current_app.db('nodes')
mongo_comment = list(nodes_coll.aggregate([
{'$match': {'node_type': 'comment',
'_deleted': {'$ne': True},
'properties.status': 'published',
'parent': parent_id,
'_id': comment_id}},
{'$lookup': {"from": "users",
"localField": "user",
"foreignField": "_id",
"as": "user"}},
{'$unwind': {'path': "$user"}},
]))[0]
return to_comment_data_object(mongo_comment)
def to_comment_data_object(mongo_comment: dict) -> CommentDO:
def current_user_rating():
if current_user.is_authenticated:
for rating in mongo_comment['properties'].get('ratings', ()):
if str(rating['user']) != current_user.objectid:
continue
return rating['is_positive']
return None
user_dict = mongo_comment['user']
user = UserDO(
id=str(mongo_comment['user']['_id']),
full_name=user_dict['full_name'],
avatar_url=pillar.api.users.avatar.url(user_dict),
badges_html=user_dict.get('badges', {}).get('html', '')
)
html = _get_markdowned_html(mongo_comment['properties'], 'content')
html = shortcodes.render_commented(html, context=mongo_comment['properties'])
return CommentDO(
id=mongo_comment['_id'],
parent=mongo_comment['parent'],
project=mongo_comment['project'],
user=user,
msg_html=html,
msg_markdown=mongo_comment['properties']['content'],
current_user_rating=current_user_rating(),
created=mongo_comment['_created'],
updated=mongo_comment['_updated'],
etag=mongo_comment['_etag'],
properties=CommentPropertiesDO(
attachments=mongo_comment['properties'].get('attachments', {}),
rating_positive=mongo_comment['properties']['rating_positive'],
rating_negative=mongo_comment['properties']['rating_negative']
)
)
def post_node_comment_vote(parent_id: bson.ObjectId, comment_id: bson.ObjectId, vote: int):
normalized_vote = min(max(vote, -1), 1)
_, _ = find_parent_and_comment_or_raise(parent_id, comment_id)
actions = {
1: 'upvote',
0: 'revoke',
-1: 'downvote',
}
patch = dict(
op=actions[normalized_vote]
)
json_result = patch_comment(comment_id, patch)
if json_result.json['_status'] != 'OK':
raise wz_exceptions.InternalServerError('Failed to vote on comment')
comment_do = get_comment(parent_id, comment_id)
return jsonify_data_object(comment_do), 200
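
Since the data objects above are serialized with attr.asdict(), the response of get_node_comments() has roughly this shape (values made up):

    # {
    #     "node_id": "5cf1...", "project": "5cf1...", "nbr_of_comments": 2,
    #     "comments": [{
    #         "id": "...", "parent": "...",
    #         "user": {"id": "...", "full_name": "...", "avatar_url": "...",
    #                  "badges_html": ""},
    #         "msg_html": "<p>Hi!</p>", "msg_markdown": "Hi!",
    #         "properties": {"attachments": {}, "rating_positive": 0,
    #                        "rating_negative": 0},
    #         "created": "...", "updated": "...", "etag": "...",
    #         "replies": [], "current_user_rating": null
    #     }]
    # }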

View File

@@ -5,7 +5,7 @@ import logging
 from flask import current_app
 import werkzeug.exceptions as wz_exceptions

-from pillar.api.utils import authorization, authentication, jsonify
+from pillar.api.utils import authorization, authentication, jsonify, remove_private_keys
 from . import register_patch_handler
@@ -135,10 +135,7 @@ def edit_comment(user_id, node_id, patch):
     # we can pass this stuff to Eve's patch_internal; that way the validation &
     # authorisation system has enough info to work.
     nodes_coll = current_app.data.driver.db['nodes']
-    projection = {'user': 1,
-                  'project': 1,
-                  'node_type': 1}
-    node = nodes_coll.find_one(node_id, projection=projection)
+    node = nodes_coll.find_one(node_id)
     if node is None:
         log.warning('User %s wanted to patch non-existing node %s' % (user_id, node_id))
         raise wz_exceptions.NotFound('Node %s not found' % node_id)
@@ -146,14 +143,14 @@ def edit_comment(user_id, node_id, patch):
     if node['user'] != user_id and not authorization.user_has_role('admin'):
         raise wz_exceptions.Forbidden('You can only edit your own comments.')

-    # Use Eve to PATCH this node, as that also updates the etag.
-    r, _, _, status = current_app.patch_internal('nodes',
-                                                 {'properties.content': patch['content'],
-                                                  'project': node['project'],
-                                                  'user': node['user'],
-                                                  'node_type': node['node_type']},
-                                                 concurrency_check=False,
-                                                 _id=node_id)
+    node = remove_private_keys(node)
+    node['properties']['content'] = patch['content']
+    node['properties']['attachments'] = patch.get('attachments', {})
+    # Use Eve to PUT this node, as that also updates the etag and we want to replace attachments.
+    r, _, _, status = current_app.put_internal('nodes',
+                                               node,
+                                               concurrency_check=False,
+                                               _id=node_id)
     if status != 200:
         log.error('Error %i editing comment %s for user %s: %s',
                   status, node_id, user_id, r)

View File

@@ -0,0 +1,336 @@
import collections
import functools
import logging
import urllib.parse

from bson import ObjectId
from werkzeug import exceptions as wz_exceptions

from pillar import current_app
from pillar.api.activities import activity_subscribe, activity_object_add
from pillar.api.file_storage_backends.gcs import update_file_name
from pillar.api.node_types import PILLAR_NAMED_NODE_TYPES
from pillar.api.utils import random_etag
from pillar.api.utils.authorization import check_permissions

log = logging.getLogger(__name__)


def before_returning_node(node):
    # Run validation process, since GET on nodes entry point is public
    check_permissions('nodes', node, 'GET', append_allowed_methods=True)

    # Embed short_link_info if the node has a short_code.
    short_code = node.get('short_code')
    if short_code:
        node['short_link'] = short_link_info(short_code)['short_link']


def before_returning_nodes(nodes):
    for node in nodes['_items']:
        before_returning_node(node)


def only_for_node_type_decorator(*required_node_type_names):
    """Returns a decorator that checks its first argument's node type.

    If the node type is not of the required node type, returns None,
    otherwise calls the wrapped function.

    >>> deco = only_for_node_type_decorator('comment')
    >>> @deco
    ... def handle_comment(node): pass

    >>> deco = only_for_node_type_decorator('comment', 'post')
    >>> @deco
    ... def handle_comment_or_post(node): pass

    """

    # Convert to a set for efficient 'x in required_node_type_names' queries.
    required_node_type_names = set(required_node_type_names)

    def only_for_node_type(wrapped):
        @functools.wraps(wrapped)
        def wrapper(node, *args, **kwargs):
            if node.get('node_type') not in required_node_type_names:
                return
            return wrapped(node, *args, **kwargs)

        return wrapper

    only_for_node_type.__doc__ = "Decorator, immediately returns when " \
                                 "the first argument is not of type %s." % required_node_type_names
    return only_for_node_type


def before_replacing_node(item, original):
    check_permissions('nodes', original, 'PUT')
    update_file_name(item)


def after_replacing_node(item, original):
    """Push an update to the Algolia index when a node item is updated. If the
    project is private, prevent public indexing.
    """
    from pillar.celery import search_index_tasks as index

    projects_collection = current_app.data.driver.db['projects']
    project = projects_collection.find_one({'_id': item['project']})
    if project.get('is_private', False):
        # Skip index updating and return
        return

    status = item['properties'].get('status', 'unpublished')
    node_id = str(item['_id'])

    if status == 'published':
        index.node_save.delay(node_id)
    else:
        index.node_delete.delay(node_id)


def before_inserting_nodes(items):
    """Before inserting a node in the collection we check if the user is allowed
    and we append the project id to it.
    """
    from pillar.auth import current_user

    nodes_collection = current_app.data.driver.db['nodes']

    def find_parent_project(node):
        """Recursive function that finds the ultimate parent of a node."""
        if node and 'parent' in node:
            parent = nodes_collection.find_one({'_id': node['parent']})
            return find_parent_project(parent)
        if node:
            return node
        else:
            return None

    for item in items:
        check_permissions('nodes', item, 'POST')
        if 'parent' in item and 'project' not in item:
            parent = nodes_collection.find_one({'_id': item['parent']})
            project = find_parent_project(parent)
            if project:
                item['project'] = project['_id']

        # Default the 'user' property to the current user.
        item.setdefault('user', current_user.user_id)


def get_comment_verb_and_context_object_id(comment):
    nodes_collection = current_app.data.driver.db['nodes']
    verb = 'commented'
    parent = nodes_collection.find_one({'_id': comment['parent']})
    context_object_id = comment['parent']
    while parent['node_type'] == 'comment':
        # If the parent is a comment, we provide its own parent as
        # context. We do this in order to point the user to an asset
        # or group when viewing the notification.
        verb = 'replied'
        context_object_id = parent['parent']
        parent = nodes_collection.find_one({'_id': parent['parent']})
    return verb, context_object_id


def after_inserting_nodes(items):
    for item in items:
        context_object_id = None
        # TODO: support should be added for mixed context
        if item['node_type'] in PILLAR_NAMED_NODE_TYPES:
            activity_subscribe(item['user'], 'node', item['_id'])
            verb = 'posted'
            context_object_id = item.get('parent')
            if item['node_type'] == 'comment':
                # Always subscribe to the parent node
                activity_subscribe(item['user'], 'node', item['parent'])
                verb, context_object_id = get_comment_verb_and_context_object_id(item)
                # Subscribe to the parent of the parent comment (post or group)
                activity_subscribe(item['user'], 'node', context_object_id)

        if context_object_id and item['node_type'] in PILLAR_NAMED_NODE_TYPES:
            # * Skip activity for first level items (since the context is not a
            #   node, but a project).
            # * Don't automatically create activities for non-Pillar node types,
            #   as we don't know what would be a suitable verb (among other things).
            activity_object_add(
                item['user'],
                verb,
                'node',
                item['_id'],
                'node',
                context_object_id
            )


def deduct_content_type_and_duration(node_doc, original=None):
    """Deduce the content type from the attached file, if any."""
    if node_doc['node_type'] != 'asset':
        log.debug('deduct_content_type: called on node type %r, ignoring', node_doc['node_type'])
        return

    node_id = node_doc.get('_id')
    try:
        file_id = ObjectId(node_doc['properties']['file'])
    except KeyError:
        if node_id is None:
            # Creation of a file-less node is allowed, but updates aren't.
            return
        log.warning('deduct_content_type: Asset without properties.file, rejecting.')
        raise wz_exceptions.UnprocessableEntity('Missing file property for asset node')

    files = current_app.data.driver.db['files']
    file_doc = files.find_one({'_id': file_id},
                              {'content_type': 1,
                               'variations': 1})
    if not file_doc:
        log.warning('deduct_content_type: Node %s refers to non-existing file %s, rejecting.',
                    node_id, file_id)
        raise wz_exceptions.UnprocessableEntity('File property refers to non-existing file')

    # Guess the node content type from the file content type
    file_type = file_doc['content_type']
    if file_type.startswith('video/'):
        content_type = 'video'
    elif file_type.startswith('image/'):
        content_type = 'image'
    else:
        content_type = 'file'

    node_doc['properties']['content_type'] = content_type

    if content_type == 'video':
        duration = file_doc['variations'][0].get('duration')
        if duration:
            node_doc['properties']['duration_seconds'] = duration
        else:
            log.warning('Video file %s has no duration', file_id)


def nodes_deduct_content_type_and_duration(nodes):
    for node in nodes:
        deduct_content_type_and_duration(node)


def node_set_default_picture(node, original=None):
    """Uses the image of an image asset or colour map of texture node as picture."""
    if node.get('picture'):
        log.debug('Node %s already has a picture, not overriding', node.get('_id'))
        return

    node_type = node.get('node_type')
    props = node.get('properties', {})
    content = props.get('content_type')

    if node_type == 'asset' and content == 'image':
        image_file_id = props.get('file')
    elif node_type == 'texture':
        # Find the colour map, defaulting to the first image map available.
        image_file_id = None
        for image in props.get('files', []):
            if image_file_id is None or image.get('map_type') == 'color':
                image_file_id = image.get('file')
    else:
        log.debug('Not setting default picture on node type %s content type %s',
                  node_type, content)
        return

    if image_file_id is None:
        log.debug('Nothing to set the picture to.')
        return

    log.debug('Setting default picture for node %s to %s', node.get('_id'), image_file_id)
    node['picture'] = image_file_id


def nodes_set_default_picture(nodes):
    for node in nodes:
        node_set_default_picture(node)


def before_deleting_node(node: dict):
    check_permissions('nodes', node, 'DELETE')
    remove_project_references(node)


def remove_project_references(node):
    project_id = node.get('project')
    if not project_id:
        return

    node_id = node['_id']
    log.info('Removing references to node %s from project %s', node_id, project_id)

    projects_col = current_app.db('projects')
    project = projects_col.find_one({'_id': project_id})
    updates = collections.defaultdict(dict)

    if project.get('header_node') == node_id:
        updates['$unset']['header_node'] = node_id

    project_reference_lists = ('nodes_blog', 'nodes_featured', 'nodes_latest')
    for list_name in project_reference_lists:
        references = project.get(list_name)
        if not references:
            continue
        try:
            references.remove(node_id)
        except ValueError:
            continue

        updates['$set'][list_name] = references

    if not updates:
        return

    updates['$set']['_etag'] = random_etag()
    result = projects_col.update_one({'_id': project_id}, updates)
    if result.modified_count != 1:
        log.warning('Removing references to node %s from project %s resulted in %d'
                    ' modified documents (expected 1)',
                    node_id, project_id, result.modified_count)


def after_deleting_node(item):
    from pillar.celery import search_index_tasks as index
    index.node_delete.delay(str(item['_id']))


only_for_textures = only_for_node_type_decorator('texture')


@only_for_textures
def texture_sort_files(node, original=None):
    """Sort files alphabetically by map type, with colour map first."""
    try:
        files = node['properties']['files']
    except KeyError:
        return

    # Sort the map types alphabetically, ensuring 'color' comes first.
    as_dict = {f['map_type']: f for f in files}
    types = sorted(as_dict.keys(), key=lambda k: '\0' if k == 'color' else k)
    node['properties']['files'] = [as_dict[map_type] for map_type in types]


def textures_sort_files(nodes):
    for node in nodes:
        texture_sort_files(node)


def short_link_info(short_code):
    """Returns the short link info in a dict."""
    short_link = urllib.parse.urljoin(
        current_app.config['SHORT_LINK_BASE_URL'], short_code)

    return {
        'short_code': short_code,
        'short_link': short_link,
    }
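
Note: the short-link construction above relies on urllib.parse.urljoin, so the configured SHORT_LINK_BASE_URL needs a trailing slash for the code to be appended rather than replace the last path segment. A minimal sketch, with made-up base URLs:

import urllib.parse

# Only the trailing-slash variant appends the short code.
assert urllib.parse.urljoin('https://example.net/r/', 'Xyz12') == 'https://example.net/r/Xyz12'
assert urllib.parse.urljoin('https://example.net/r', 'Xyz12') == 'https://example.net/Xyz12'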


@@ -1,7 +1,7 @@
 """Code for moving around nodes."""

 import attr
-import flask_pymongo.wrappers
+import pymongo.database
 from bson import ObjectId

 from pillar import attrs_extra
@@ -10,7 +10,7 @@ import pillar.api.file_storage.moving
 @attr.s
 class NodeMover(object):
-    db = attr.ib(validator=attr.validators.instance_of(flask_pymongo.wrappers.Database))
+    db = attr.ib(validator=attr.validators.instance_of(pymongo.database.Database))
     skip_gcs = attr.ib(default=False, validator=attr.validators.instance_of(bool))
     _log = attrs_extra.log('%s.NodeMover' % __name__)


@@ -153,7 +153,7 @@ class OrgManager:
         org_coll = current_app.db('organizations')
         users_coll = current_app.db('users')

-        if users_coll.count({'_id': user_id}) == 0:
+        if users_coll.count_documents({'_id': user_id}) == 0:
             raise ValueError('User not found')

         self._log.info('Updating organization %s, setting admin user to %s', org_id, user_id)
@@ -189,7 +189,7 @@ class OrgManager:
         if user_doc is not None:
             user_id = user_doc['_id']

-        if user_id and not users_coll.count({'_id': user_id}):
+        if user_id and not users_coll.count_documents({'_id': user_id}):
             raise wz_exceptions.UnprocessableEntity('User does not exist')

         self._log.info('Removing user %s / %s from organization %s', user_id, email, org_id)
@@ -374,7 +374,7 @@ class OrgManager:
         member_ids = [str2id(uid) for uid in member_sting_ids]
         users_coll = current_app.db('users')
         users = users_coll.find({'_id': {'$in': member_ids}},
-                                projection={'_id': 1, 'full_name': 1, 'email': 1})
+                                projection={'_id': 1, 'full_name': 1, 'email': 1, 'avatar': 1})
         return list(users)

     def user_has_organizations(self, user_id: bson.ObjectId) -> bool:
@@ -385,7 +385,7 @@ class OrgManager:
         org_coll = current_app.db('organizations')

-        org_count = org_coll.count({'$or': [
+        org_count = org_coll.count_documents({'$or': [
             {'admin_uid': user_id},
             {'members': user_id}
         ]})
@@ -396,7 +396,7 @@ class OrgManager:
         """Return True iff the email is an unknown member of some org."""
         org_coll = current_app.db('organizations')

-        org_count = org_coll.count({'unknown_members': member_email})
+        org_count = org_coll.count_documents({'unknown_members': member_email})
         return bool(org_count)

     def roles_for_ip_address(self, remote_addr: str) -> typing.Set[str]:


@@ -194,7 +194,7 @@ class OrganizationPatchHandler(patch_handler.AbstractPatchHandler):
         self.log.info('User %s edits Organization %s: %s', current_user_id, org_id, update)

         validator = current_app.validator_for_resource('organizations')
-        if not validator.validate_update(update, org_id):
+        if not validator.validate_update(update, org_id, persisted_document={}):
             resp = jsonify({
                 '_errors': validator.errors,
                 '_message': ', '.join(f'{field}: {error}'


@@ -9,6 +9,7 @@ def setup_app(app, api_prefix):
     app.on_replace_projects += hooks.override_is_private_field
     app.on_replace_projects += hooks.before_edit_check_permissions
     app.on_replace_projects += hooks.protect_sensitive_fields
+    app.on_replace_projects += hooks.parse_markdown

     app.on_update_projects += hooks.override_is_private_field
     app.on_update_projects += hooks.before_edit_check_permissions
@@ -19,6 +20,8 @@ def setup_app(app, api_prefix):
     app.on_insert_projects += hooks.before_inserting_override_is_private_field
     app.on_insert_projects += hooks.before_inserting_projects
+    app.on_insert_projects += hooks.parse_markdowns
+
     app.on_inserted_projects += hooks.after_inserting_projects

     app.on_fetched_item_projects += hooks.before_returning_project_permissions


@@ -3,6 +3,7 @@ import logging
 from flask import request, abort

+import pillar
 from pillar import current_app
 from pillar.api.node_types.asset import node_type_asset
 from pillar.api.node_types.comment import node_type_comment
@@ -71,14 +72,19 @@ def before_delete_project(document):
 def after_delete_project(project: dict):
     """Perform delete on the project's files too."""
+    from werkzeug.exceptions import NotFound
     from eve.methods.delete import delete

     pid = project['_id']
     log.info('Project %s was deleted, also deleting its files.', pid)

-    r, _, _, status = delete('files', {'project': pid})
+    try:
+        r, _, _, status = delete('files', {'project': pid})
+    except NotFound:
+        # There were no files, and that's fine.
+        return

     if status != 204:
+        # Will never happen because bloody Eve always returns 204 or raises an exception.
         log.warning('Unable to delete files of project %s: %s', pid, r)
@@ -241,3 +247,37 @@ def project_node_type_has_method(response):
 def projects_node_type_has_method(response):
     for project in response['_items']:
         project_node_type_has_method(project)
+
+
+def parse_markdown(project, original=None):
+    schema = current_app.config['DOMAIN']['projects']['schema']
+
+    def find_markdown_fields(schema, project):
+        """Find and process all Markdown coerced fields.
+
+        - look for fields with a 'coerce': 'markdown' property
+        - parse the name of the field and generate the sibling field name
+          (_<field_name>_html -> <field_name>)
+        - parse the content of the <field_name> field as markdown and save it
+          in _<field_name>_html
+        """
+        for field_name, field_value in schema.items():
+            if not isinstance(field_value, dict):
+                continue
+            if field_value.get('coerce') != 'markdown':
+                continue
+            if field_name not in project:
+                continue
+            # Construct markdown source field name
+            # (strip the leading '_' and the trailing '_html').
+            source_field_name = field_name[1:-5]
+            html = pillar.markdown.markdown(project[source_field_name])
+            project[field_name] = html
+
+            if isinstance(project, dict) and field_name in project:
+                find_markdown_fields(field_value, project[field_name])
+
+    find_markdown_fields(schema, project)
+
+
+def parse_markdowns(items):
+    for item in items:
+        parse_markdown(item)
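
The field-name convention is easiest to see with a concrete, hypothetical example: a schema field named '_description_html' with 'coerce': 'markdown' is regenerated from its 'description' sibling. A minimal sketch of the string slicing involved:

field_name = '_description_html'
source_field_name = field_name[1:-5]  # strip the leading '_' and the trailing '_html'
assert source_field_name == 'description'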


@@ -25,8 +25,11 @@ def merge_project(pid_from: ObjectId, pid_to: ObjectId):
     # Move the files first. Since this requires API calls to an external
     # service, this is more likely to go wrong than moving the nodes.
-    to_move = files_coll.find({'project': pid_from}, projection={'_id': 1})
-    log.info('Moving %d files to project %s', to_move.count(), pid_to)
+    query = {'project': pid_from}
+    to_move = files_coll.find(query, projection={'_id': 1})
+    to_move_count = files_coll.count_documents(query)
+    log.info('Moving %d files to project %s', to_move_count, pid_to)
     for file_doc in to_move:
         fid = file_doc['_id']
         log.debug('moving file %s to project %s', fid, pid_to)
@@ -35,7 +38,7 @@ def merge_project(pid_from: ObjectId, pid_to: ObjectId):
     # Mass-move the nodes.
     etag = random_etag()
     result = nodes_coll.update_many(
-        {'project': pid_from},
+        query,
         {'$set': {'project': pid_to,
                   '_etag': etag,
                   '_updated': utcnow(),


@@ -5,6 +5,7 @@ from bson import ObjectId
 from flask import Blueprint, request, current_app, make_response, url_for
 from werkzeug import exceptions as wz_exceptions

+import pillar.api.users.avatar
 from pillar.api.utils import authorization, jsonify, str2id
 from pillar.api.utils import mongo
 from pillar.api.utils.authorization import require_login, check_permissions
@@ -54,10 +55,13 @@ def project_manage_users():
         project = projects_collection.find_one({'_id': ObjectId(project_id)})
         admin_group_id = project['permissions']['groups'][0]['group']

-        users = users_collection.find(
+        users = list(users_collection.find(
             {'groups': {'$in': [admin_group_id]}},
-            {'username': 1, 'email': 1, 'full_name': 1})
-        return jsonify({'_status': 'OK', '_items': list(users)})
+            {'username': 1, 'email': 1, 'full_name': 1, 'avatar': 1}))
+        for user in users:
+            user['avatar_url'] = pillar.api.users.avatar.url(user)
+            user.pop('avatar', None)
+        return jsonify({'_status': 'OK', '_items': users})

     # The request is not a form, since it comes from the API sdk
     data = json.loads(request.data)
@@ -92,8 +96,8 @@ def project_manage_users():
                  action, current_user_id)
         raise wz_exceptions.UnprocessableEntity()

-    users_collection.update({'_id': target_user_id},
-                            {operation: {'groups': admin_group['_id']}})
+    users_collection.update_one({'_id': target_user_id},
+                                {operation: {'groups': admin_group['_id']}})

     user = users_collection.find_one({'_id': target_user_id},
                                      {'username': 1, 'email': 1,
@@ -141,5 +145,3 @@ def get_allowed_methods(project_id=None, node_type=None):
     resp.status_code = 204
     return resp


@@ -7,6 +7,7 @@ from werkzeug.exceptions import abort
 from pillar import current_app
 from pillar.auth import current_user

+from pillar.api import file_storage_backends

 log = logging.getLogger(__name__)
@@ -155,6 +156,18 @@ def project_id(project_url: str) -> ObjectId:
     return proj['_id']


+def get_project_url(project_id: ObjectId) -> str:
+    """Returns the project URL, or raises a ValueError when not found."""
+    proj_coll = current_app.db('projects')
+    proj = proj_coll.find_one({'_id': project_id, '_deleted': {'$ne': True}},
+                              projection={'url': True})
+    if not proj:
+        raise ValueError(f'project with id={project_id} not found')
+    return proj['url']
+
+
 def get_project(project_url: str) -> dict:
     """Find a project in the database, raises ValueError if not found.
@@ -185,5 +198,17 @@ def put_project(project: dict):
     result, _, _, status_code = current_app.put_internal('projects', proj_no_none, _id=pid)
     if status_code != 200:
-        raise ValueError(f"Can't update project {pid}, "
-                         f"status {status_code} with issues: {result}")
+        message = f"Can't update project {pid}, status {status_code} with issues: {result}"
+        log.error(message)
+        raise ValueError(message)
+
+
+def storage(project_id: ObjectId) -> file_storage_backends.Bucket:
+    """Return the storage bucket for this project.
+
+    For now this returns a bucket in the default storage backend, since
+    individual projects do not have a 'storage backend' setting (this is
+    set per file, not per project).
+    """
+    return file_storage_backends.default_storage_backend(str(project_id))


@@ -81,6 +81,7 @@ class Node(es.DocType):
         fields={
             'id': es.Keyword(),
             'name': es.Keyword(),
+            'url': es.Keyword(),
         }
     )
@@ -153,18 +154,21 @@ def create_doc_from_node_data(node_to_index: dict) -> typing.Optional[Node]:
     doc.objectID = str(node_to_index['objectID'])
     doc.node_type = node_to_index['node_type']
     doc.name = node_to_index['name']
+    doc.description = node_to_index.get('description')

     doc.user.id = str(node_to_index['user']['_id'])
     doc.user.name = node_to_index['user']['full_name']

     doc.project.id = str(node_to_index['project']['_id'])
     doc.project.name = node_to_index['project']['name']
+    doc.project.url = node_to_index['project']['url']

     if node_to_index['node_type'] == 'asset':
         doc.media = node_to_index['media']
-        doc.picture = node_to_index.get('picture')
+        doc.picture = str(node_to_index.get('picture'))
         doc.tags = node_to_index.get('tags')
         doc.license_notes = node_to_index.get('license_notes')
+        doc.is_free = node_to_index.get('is_free')

     doc.created_at = node_to_index['created']
     doc.updated_at = node_to_index['updated']


@@ -3,16 +3,18 @@ import logging
 import typing

 from elasticsearch import Elasticsearch
-from elasticsearch_dsl import Search, Q
+from elasticsearch_dsl import Search, Q, MultiSearch
 from elasticsearch_dsl.query import Query

 from pillar import current_app

 log = logging.getLogger(__name__)

-NODE_AGG_TERMS = ['node_type', 'media', 'tags', 'is_free']
+BOOLEAN_TERMS = ['is_free']
+NODE_AGG_TERMS = ['node_type', 'media', 'tags', *BOOLEAN_TERMS]
 USER_AGG_TERMS = ['roles', ]
 ITEMS_PER_PAGE = 10
+USER_SOURCE_INCLUDE = ['full_name', 'objectID', 'username']

 # Will be set in setup_app()
 client: Elasticsearch = None
@@ -27,26 +29,25 @@ def add_aggs_to_search(search, agg_terms):
         search.aggs.bucket(term, 'terms', field=term)


-def make_must(must: list, terms: dict) -> list:
+def make_filter(must: list, terms: dict) -> list:
     """ Given term parameters append must queries to the must list """
     for field, value in terms.items():
-        if value:
-            must.append({'match': {field: value}})
+        if value not in (None, ''):
+            must.append({'term': {field: value}})

     return must


-def nested_bool(must: list, should: list, terms: dict, *, index_alias: str) -> Search:
+def nested_bool(filters: list, should: list, terms: dict, *, index_alias: str) -> Search:
     """
     Create a nested bool, where the aggregation selection is a must.

     :param index_alias: 'USER' or 'NODE', see ELASTIC_INDICES config.
     """
-    must = make_must(must, terms)
+    filters = make_filter(filters, terms)
     bool_query = Q('bool', should=should)
-    must.append(bool_query)
-    bool_query = Q('bool', must=must)
+    bool_query = Q('bool', must=bool_query, filter=filters)

     index = current_app.config['ELASTIC_INDICES'][index_alias]
     search = Search(using=client, index=index)
@@ -55,12 +56,34 @@ def nested_bool(filters: list, should: list, terms: dict, *, index_alias: str) -> Search:
     return search


+def do_multi_node_search(queries: typing.List[dict]) -> typing.List[dict]:
+    """
+    Given user query input and term refinements
+    search for public published nodes
+    """
+    search = create_multi_node_search(queries)
+    return _execute_multi(search)
+
+
 def do_node_search(query: str, terms: dict, page: int, project_id: str='') -> dict:
     """
     Given user query input and term refinements
     search for public published nodes
     """
+    search = create_node_search(query, terms, page, project_id)
+    return _execute(search)
+
+
+def create_multi_node_search(queries: typing.List[dict]) -> MultiSearch:
+    search = MultiSearch(using=client)
+    for q in queries:
+        search = search.add(create_node_search(**q))
+
+    return search
+
+
+def create_node_search(query: str, terms: dict, page: int, project_id: str='') -> Search:
+    terms = _transform_terms(terms)
     should = [
         Q('match', name=query),
@@ -71,52 +94,30 @@ def create_node_search(query: str, terms: dict, page: int, project_id: str='') -> Search:
         Q('term', media=query),
         Q('term', tags=query),
     ]

-    must = []
+    filters = []
     if project_id:
-        must.append({'term': {'project.id': project_id}})
+        filters.append({'term': {'project.id': project_id}})

     if not query:
         should = []

-    search = nested_bool(must, should, terms, index_alias='NODE')
+    search = nested_bool(filters, should, terms, index_alias='NODE')
     if not query:
         search = search.sort('-created_at')

     add_aggs_to_search(search, NODE_AGG_TERMS)
     search = paginate(search, page)

     if log.isEnabledFor(logging.DEBUG):
         log.debug(json.dumps(search.to_dict(), indent=4))

-    response = search.execute()
-
-    if log.isEnabledFor(logging.DEBUG):
-        log.debug(json.dumps(response.to_dict(), indent=4))
-
-    return response.to_dict()
+    return search


 def do_user_search(query: str, terms: dict, page: int) -> dict:
     """ return user objects represented in elasticsearch result dict"""
-    must, should = _common_user_search(query)
-    search = nested_bool(must, should, terms, index_alias='USER')
-    add_aggs_to_search(search, USER_AGG_TERMS)
-    search = paginate(search, page)
-
-    if log.isEnabledFor(logging.DEBUG):
-        log.debug(json.dumps(search.to_dict(), indent=4))
-
-    response = search.execute()
-
-    if log.isEnabledFor(logging.DEBUG):
-        log.debug(json.dumps(response.to_dict(), indent=4))
-
-    return response.to_dict()
+    search = create_user_search(query, terms, page)
+    return _execute(search)


 def _common_user_search(query: str) -> (typing.List[Query], typing.List[Query]):
-    """Construct (must,shoud) for regular + admin user search."""
+    """Construct (filter,should) for regular + admin user search."""
     if not query:
         return [], []
@@ -144,8 +145,31 @@ def do_user_search_admin(query: str, terms: dict, page: int) -> dict:
     search all user fields and provide aggregation information
     """
-    must, should = _common_user_search(query)
+    search = create_user_admin_search(query, terms, page)
+    return _execute(search)
+
+
+def _execute(search: Search) -> dict:
+    if log.isEnabledFor(logging.DEBUG):
+        log.debug(json.dumps(search.to_dict(), indent=4))
+    resp = search.execute()
+    if log.isEnabledFor(logging.DEBUG):
+        log.debug(json.dumps(resp.to_dict(), indent=4))
+    return resp.to_dict()
+
+
+def _execute_multi(search: typing.List[Search]) -> typing.List[dict]:
+    if log.isEnabledFor(logging.DEBUG):
+        log.debug(json.dumps(search.to_dict(), indent=4))
+    resp = search.execute()
+    if log.isEnabledFor(logging.DEBUG):
+        log.debug(json.dumps(resp.to_dict(), indent=4))
+    return [r.to_dict() for r in resp]
+
+
+def create_user_admin_search(query: str, terms: dict, page: int) -> Search:
+    terms = _transform_terms(terms)
+    filters, should = _common_user_search(query)
     if query:
-        # We most likely got and id field. we should find it.
+        # We most likely got an id field; we should find it.
         if len(query) == len('563aca02c379cf0005e8e17d'):
@@ -155,26 +179,34 @@ def create_user_admin_search(query: str, terms: dict, page: int) -> Search:
                 'boost': 100,  # how much more it counts for the score
             }
         }})
-    search = nested_bool(must, should, terms, index_alias='USER')
+    search = nested_bool(filters, should, terms, index_alias='USER')
     add_aggs_to_search(search, USER_AGG_TERMS)
     search = paginate(search, page)
+    return search

-    if log.isEnabledFor(logging.DEBUG):
-        log.debug(json.dumps(search.to_dict(), indent=4))
-
-    response = search.execute()
-
-    if log.isEnabledFor(logging.DEBUG):
-        log.debug(json.dumps(response.to_dict(), indent=4))
-
-    return response.to_dict()
+def create_user_search(query: str, terms: dict, page: int) -> Search:
+    search = create_user_admin_search(query, terms, page)
+    return search.source(include=USER_SOURCE_INCLUDE)


 def paginate(search: Search, page_idx: int) -> Search:
     return search[page_idx * ITEMS_PER_PAGE:(page_idx + 1) * ITEMS_PER_PAGE]


+def _transform_terms(terms: dict) -> dict:
+    """
+    Ugly hack! Elastic uses 1/0 for boolean values in its aggregate response,
+    but expects true/false in queries.
+    """
+    transformed = terms.copy()
+    for t in BOOLEAN_TERMS:
+        orig = transformed.get(t)
+        if orig in ('1', '0'):
+            transformed[t] = bool(int(orig))
+    return transformed
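
A quick illustration of the boolean-term transform under the current BOOLEAN_TERMS (values other than '1'/'0' pass through unchanged):

assert _transform_terms({'is_free': '1', 'media': 'video'}) == {'is_free': True, 'media': 'video'}
assert _transform_terms({'is_free': '0'}) == {'is_free': False}
assert _transform_terms({'is_free': ''}) == {'is_free': ''}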
 def setup_app(app):
     global client


@@ -18,7 +18,7 @@ TERMS = [
 ]


-def _term_filters() -> dict:
+def _term_filters(args) -> dict:
     """
     Check if frontend wants to filter stuff
     on specific fields AKA facets
@@ -26,35 +26,52 @@ def _term_filters(args) -> dict:
     return mapping with term field name
     and provided user term value
     """
-    return {term: request.args.get(term, '') for term in TERMS}
+    return {term: args.get(term, '') for term in TERMS}


-def _page_index() -> int:
+def _page_index(page) -> int:
     """Return the page index from the query string."""
     try:
-        page_idx = int(request.args.get('page') or '0')
+        page_idx = int(page)
     except TypeError:
         log.info('invalid page number %r received', request.args.get('page'))
         raise wz_exceptions.BadRequest()
     return page_idx


-@blueprint_search.route('/')
+@blueprint_search.route('/', methods=['GET'])
 def search_nodes():
     searchword = request.args.get('q', '')
     project_id = request.args.get('project', '')
-    terms = _term_filters()
-    page_idx = _page_index()
+    terms = _term_filters(request.args)
+    page_idx = _page_index(request.args.get('page', 0))

     result = queries.do_node_search(searchword, terms, page_idx, project_id)
     return jsonify(result)


+@blueprint_search.route('/multisearch', methods=['POST'])
+def multi_search_nodes():
+    if len(request.args) != 1:
+        log.info(f'Expected 1 argument, received {len(request.args)}')
+
+    json_obj = request.json
+    q = []
+    for row in json_obj:
+        q.append({
+            'query': row.get('q', ''),
+            'project_id': row.get('project', ''),
+            'terms': _term_filters(row),
+            'page': _page_index(row.get('page', 0))
+        })
+
+    result = queries.do_multi_node_search(q)
+    return jsonify(result)
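
For reference, a rough client-side sketch of the new endpoint; the host and API prefix are made up, and the body is a JSON list with one entry per sub-search:

import requests

payload = [
    {'q': 'rig', 'page': 0},
    {'q': 'tree', 'project': '563aca02c379cf0005e8e17d', 'page': 0},
]
resp = requests.post('https://example.net/api/newsearch/multisearch', json=payload)
results = resp.json()  # one result dict per sub-search, in request order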
 @blueprint_search.route('/user')
 def search_user():
     searchword = request.args.get('q', '')
-    terms = _term_filters()
-    page_idx = _page_index()
+    terms = _term_filters(request.args)
+    page_idx = _page_index(request.args.get('page', 0))

     # result is the raw elasticsearch output.
     # we need to filter fields in case of user objects.
@@ -65,27 +82,6 @@ def search_user():
         resp.status_code = 500
         return resp

-    # filter sensitive stuff
-    # we only need objectID, full_name, username
-    hits = result.get('hits', {})
-
-    new_hits = []
-    for hit in hits.get('hits'):
-        source = hit['_source']
-        single_hit = {
-            '_source': {
-                'objectID': source.get('objectID'),
-                'username': source.get('username'),
-                'full_name': source.get('full_name'),
-            }
-        }
-
-        new_hits.append(single_hit)
-
-    # replace search result with safe subset
-    result['hits']['hits'] = new_hits
-
     return jsonify(result)
@@ -97,8 +93,8 @@ def search_user_admin():
     """
     searchword = request.args.get('q', '')
-    terms = _term_filters()
-    page_idx = _page_index()
+    terms = _term_filters(request.args)
+    page_idx = _page_index(_page_index(request.args.get('page', 0)))

     try:
         result = queries.do_user_search_admin(searchword, terms, page_idx)

pillar/api/timeline.py (new file, 374 lines)

@@ -0,0 +1,374 @@
import itertools
import typing
from datetime import datetime
from operator import itemgetter

import attr
import bson
import pymongo
from flask import Blueprint, current_app, request, url_for

import pillar
from pillar import shortcodes
from pillar.api.utils import jsonify, pretty_duration, str2id

blueprint = Blueprint('timeline', __name__)


@attr.s(auto_attribs=True)
class TimelineDO:
    groups: typing.List['GroupDO'] = []
    continue_from: typing.Optional[float] = None


@attr.s(auto_attribs=True)
class GroupDO:
    label: typing.Optional[str] = None
    url: typing.Optional[str] = None
    items: typing.Dict = {}
    groups: typing.Iterable['GroupDO'] = []


class SearchHelper:
    def __init__(self, nbr_of_weeks: int, continue_from: typing.Optional[datetime],
                 project_ids: typing.List[bson.ObjectId], sort_direction: str):
        self._nbr_of_weeks = nbr_of_weeks
        self._continue_from = continue_from
        self._project_ids = project_ids
        self.sort_direction = sort_direction

    def _match(self, continue_from: typing.Optional[datetime]) -> dict:
        created = {}
        if continue_from:
            if self.sort_direction == 'desc':
                created = {'_created': {'$lt': continue_from}}
            else:
                created = {'_created': {'$gt': continue_from}}
        return {'_deleted': {'$ne': True},
                'node_type': {'$in': ['asset', 'post']},
                'properties.status': {'$eq': 'published'},
                'project': {'$in': self._project_ids},
                **created,
                }

    def raw_weeks_from_mongo(self) -> pymongo.collection.Collection:
        direction = pymongo.DESCENDING if self.sort_direction == 'desc' else pymongo.ASCENDING
        nodes_coll = current_app.db('nodes')
        return nodes_coll.aggregate([
            {'$match': self._match(self._continue_from)},
            {'$lookup': {"from": "projects",
                         "localField": "project",
                         "foreignField": "_id",
                         "as": "project"}},
            {'$unwind': {'path': "$project"}},
            {'$lookup': {"from": "users",
                         "localField": "user",
                         "foreignField": "_id",
                         "as": "user"}},
            {'$unwind': {'path': "$user"}},
            {'$project': {
                '_created': 1,
                'project._id': 1,
                'project.url': 1,
                'project.name': 1,
                'user._id': 1,
                'user.full_name': 1,
                'name': 1,
                'node_type': 1,
                'picture': 1,
                'properties': 1,
                'permissions': 1,
            }},
            {'$group': {
                '_id': {'year': {'$isoWeekYear': '$_created'},
                        'week': {'$isoWeek': '$_created'}},
                'nodes': {'$push': '$$ROOT'}
            }},
            {'$sort': {'_id.year': direction,
                       '_id.week': direction}},
            {'$limit': self._nbr_of_weeks}
        ])
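# (Illustrative note: with sort_direction='desc' the pipeline above yields at
# most nbr_of_weeks documents, newest ISO week first, each shaped roughly like
#     {'_id': {'year': 2019, 'week': 21}, 'nodes': [<node doc>, ...]}
# which create_week_group() below turns into a "Week 21, 2019" group.)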
    def has_more(self, continue_from: datetime) -> bool:
        nodes_coll = current_app.db('nodes')
        result = nodes_coll.count_documents(self._match(continue_from))
        return bool(result)


class Grouper:
    @classmethod
    def label(cls, node):
        return None

    @classmethod
    def url(cls, node):
        return None

    @classmethod
    def group_key(cls) -> typing.Callable[[dict], typing.Any]:
        raise NotImplementedError()

    @classmethod
    def sort_key(cls) -> typing.Callable[[dict], typing.Any]:
        raise NotImplementedError()


class ProjectGrouper(Grouper):
    @classmethod
    def label(cls, project: dict):
        return project['name']

    @classmethod
    def url(cls, project: dict):
        return url_for('projects.view', project_url=project['url'])

    @classmethod
    def group_key(cls) -> typing.Callable[[dict], typing.Any]:
        return itemgetter('project')

    @classmethod
    def sort_key(cls) -> typing.Callable[[dict], typing.Any]:
        return lambda node: node['project']['_id']


class UserGrouper(Grouper):
    @classmethod
    def label(cls, user):
        return user['full_name']

    @classmethod
    def group_key(cls) -> typing.Callable[[dict], typing.Any]:
        return itemgetter('user')

    @classmethod
    def sort_key(cls) -> typing.Callable[[dict], typing.Any]:
        return lambda node: node['user']['_id']


class TimeLineBuilder:
    def __init__(self, search_helper: SearchHelper, grouper: typing.Type[Grouper]):
        self.search_helper = search_helper
        self.grouper = grouper
        self.continue_from = None

    def build(self) -> TimelineDO:
        raw_weeks = self.search_helper.raw_weeks_from_mongo()
        clean_weeks = (self.create_week_group(week) for week in raw_weeks)

        return TimelineDO(
            groups=list(clean_weeks),
            continue_from=self.continue_from.timestamp() if self.search_helper.has_more(self.continue_from) else None
        )

    def create_week_group(self, week: dict) -> GroupDO:
        nodes = week['nodes']
        nodes.sort(key=itemgetter('_created'), reverse=True)
        self.update_continue_from(nodes)
        groups = self.create_groups(nodes)

        return GroupDO(
            label=f'Week {week["_id"]["week"]}, {week["_id"]["year"]}',
            groups=groups
        )

    def create_groups(self, nodes: typing.List[dict]) -> typing.List[GroupDO]:
        self.sort_nodes(nodes)  # groupby assumes that the list is sorted
        nodes_grouped = itertools.groupby(nodes, self.grouper.group_key())
        groups = (self.clean_group(grouped_by, group) for grouped_by, group in nodes_grouped)
        groups_sorted = sorted(groups, key=self.group_row_sorter, reverse=True)
        return groups_sorted

    def sort_nodes(self, nodes: typing.List[dict]):
        nodes.sort(key=itemgetter('node_type'))
        nodes.sort(key=self.grouper.sort_key())

    def update_continue_from(self, sorted_nodes: typing.List[dict]):
        if self.search_helper.sort_direction == 'desc':
            first_created = sorted_nodes[-1]['_created']
            candidate = self.continue_from or first_created
            self.continue_from = min(candidate, first_created)
        else:
            last_created = sorted_nodes[0]['_created']
            candidate = self.continue_from or last_created
            self.continue_from = max(candidate, last_created)

    def clean_group(self, grouped_by: typing.Any, group: typing.Iterable[dict]) -> GroupDO:
        items = self.create_items(group)
        return GroupDO(
            label=self.grouper.label(grouped_by),
            url=self.grouper.url(grouped_by),
            items=items
        )

    def create_items(self, group) -> typing.List[dict]:
        by_node_type = itertools.groupby(group, key=itemgetter('node_type'))
        items = {}
        for node_type, nodes in by_node_type:
            items[node_type] = [self.node_prettyfy(n) for n in nodes]
        return items

    @classmethod
    def node_prettyfy(cls, node: dict) -> dict:
        duration_seconds = node['properties'].get('duration_seconds')
        if duration_seconds is not None:
            node['properties']['duration'] = pretty_duration(duration_seconds)
        if node['node_type'] == 'post':
            html = _get_markdowned_html(node['properties'], 'content')
            html = shortcodes.render_commented(html, context=node['properties'])
            node['properties']['pretty_content'] = html
        return node

    @classmethod
    def group_row_sorter(cls, row: GroupDO) -> typing.Tuple[datetime, datetime]:
        '''
        Groups that contain posts are more interesting, so we sort them higher up.

        :param row:
        :return: tuple with newest post date and newest asset date
        '''
        def newest_created(nodes: typing.List[dict]) -> datetime:
            if nodes:
                return nodes[0]['_created']
            return datetime.fromtimestamp(0, tz=bson.tz_util.utc)

        newest_post_date = newest_created(row.items.get('post'))
        newest_asset_date = newest_created(row.items.get('asset'))
        return newest_post_date, newest_asset_date


def _public_project_ids() -> typing.List[bson.ObjectId]:
    """Returns a list of ObjectIDs of public projects.

    Memoized in setup_app().
    """
    proj_coll = current_app.db('projects')
    result = proj_coll.find({'is_private': False}, {'_id': 1})
    return [p['_id'] for p in result]


def _get_markdowned_html(document: dict, field_name: str) -> str:
    cache_field_name = pillar.markdown.cache_field_name(field_name)
    html = document.get(cache_field_name)
    if html is None:
        markdown_src = document.get(field_name) or ''
        html = pillar.markdown.markdown(markdown_src)
    return html


@blueprint.route('/', methods=['GET'])
def global_timeline():
    continue_from_str = request.args.get('from')
    continue_from = parse_continue_from(continue_from_str)
    nbr_of_weeks_str = request.args.get('weeksToLoad')
    nbr_of_weeks = parse_nbr_of_weeks(nbr_of_weeks_str)
    sort_direction = request.args.get('dir', 'desc')
    return _global_timeline(continue_from, nbr_of_weeks, sort_direction)


@blueprint.route('/p/<string(length=24):pid_path>', methods=['GET'])
def project_timeline(pid_path: str):
    continue_from_str = request.args.get('from')
    continue_from = parse_continue_from(continue_from_str)
    nbr_of_weeks_str = request.args.get('weeksToLoad')
    nbr_of_weeks = parse_nbr_of_weeks(nbr_of_weeks_str)
    sort_direction = request.args.get('dir', 'desc')
    pid = str2id(pid_path)
    return _project_timeline(continue_from, nbr_of_weeks, sort_direction, pid)


def parse_continue_from(from_arg) -> typing.Optional[datetime]:
    try:
        from_float = float(from_arg)
    except (TypeError, ValueError):
        return None
    return datetime.fromtimestamp(from_float, tz=bson.tz_util.utc)


def parse_nbr_of_weeks(weeks_to_load: str) -> int:
    try:
        return int(weeks_to_load)
    except (TypeError, ValueError):
        return 3


def _global_timeline(continue_from: typing.Optional[datetime], nbr_of_weeks: int, sort_direction: str):
    """Returns an aggregated view of what has happened on the site.

    Memoized in setup_app().

    :param continue_from: Python utc timestamp where to begin aggregation
    :param nbr_of_weeks: Number of weeks to return

    Example output:
    {
        groups: [{
            label: 'Week 32',
            groups: [{
                label: 'Spring',
                url: '/p/spring',
                items: {
                    post: [blogPostDoc, blogPostDoc],
                    asset: [assetDoc, assetDoc]
                },
                groups: ...
            }]
        }],
        continue_from: 123456.2  // python timestamp
    }
    """
    builder = TimeLineBuilder(
        SearchHelper(nbr_of_weeks, continue_from, _public_project_ids(), sort_direction),
        ProjectGrouper
    )
    return jsonify_timeline(builder.build())


def jsonify_timeline(timeline: TimelineDO):
    return jsonify(
        attr.asdict(timeline,
                    recurse=True,
                    filter=lambda att, value: value is not None)
    )


def _project_timeline(continue_from: typing.Optional[datetime], nbr_of_weeks: int, sort_direction, pid: bson.ObjectId):
    """Returns an aggregated view of what has happened on the site.

    Memoized in setup_app().

    :param continue_from: Python utc timestamp where to begin aggregation
    :param nbr_of_weeks: Number of weeks to return

    Example output:
    {
        groups: [{
            label: 'Week 32',
            groups: [{
                label: 'Tobias Johansson',
                items: {
                    post: [blogPostDoc, blogPostDoc],
                    asset: [assetDoc, assetDoc]
                },
                groups: ...
            }]
        }],
        continue_from: 123456.2  // python timestamp
    }
    """
    builder = TimeLineBuilder(
        SearchHelper(nbr_of_weeks, continue_from, [pid], sort_direction),
        UserGrouper
    )
    return jsonify_timeline(builder.build())


def setup_app(app, url_prefix):
    global _public_project_ids
    global _global_timeline
    global _project_timeline

    app.register_api_blueprint(blueprint, url_prefix=url_prefix)
    cached = app.cache.cached(timeout=3600)
    _public_project_ids = cached(_public_project_ids)
    memoize = app.cache.memoize(timeout=60)
    _global_timeline = memoize(_global_timeline)
    _project_timeline = memoize(_project_timeline)
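
A rough consumer sketch for the endpoint above (the host is made up; the blueprint is mounted under the app's API prefix by setup_app()). Since jsonify_timeline() filters out None values, the continue_from key simply disappears when there is nothing more to load:

import requests

url = 'https://example.net/api/timeline/'
page = requests.get(url, params={'weeksToLoad': 3, 'dir': 'desc'}).json()
while 'continue_from' in page:
    # Pass the returned timestamp back as 'from' to fetch the next batch of weeks.
    page = requests.get(url, params={'from': page['continue_from'], 'dir': 'desc'}).json()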


@@ -61,6 +61,9 @@ def _update_search_user_changed_role(sender, user: dict):
 def setup_app(app, api_prefix):
     from pillar.api import service

+    from . import patch
+    patch.setup_app(app, url_prefix=api_prefix)
+
     app.on_pre_GET_users += hooks.check_user_access
     app.on_post_GET_users += hooks.post_GET_user

pillar/api/users/avatar.py (new file, 159 lines)

@@ -0,0 +1,159 @@
import functools
import io
import logging
import mimetypes
import typing

from bson import ObjectId
from eve.methods.get import getitem_internal
import flask

from pillar import current_app
from pillar.api import blender_id
from pillar.api.blender_cloud import home_project
import pillar.api.file_storage
from werkzeug.datastructures import FileStorage

log = logging.getLogger(__name__)

DEFAULT_AVATAR = 'assets/img/default_user_avatar.png'


def url(user: dict) -> str:
    """Return the avatar URL for this user.

    :param user: dictionary from the MongoDB 'users' collection.
    """
    assert isinstance(user, dict), f'user must be dict, not {type(user)}'

    avatar_id = user.get('avatar', {}).get('file')
    if not avatar_id:
        return _default_avatar()

    # The file may not exist, in which case we get an empty string back.
    return pillar.api.file_storage.get_file_url(avatar_id) or _default_avatar()


@functools.lru_cache(maxsize=1)
def _default_avatar() -> str:
    """Return the URL path of the default avatar.

    Doesn't change after the app has started, so we just cache it.
    """
    return flask.url_for('static_pillar', filename=DEFAULT_AVATAR)


def _extension_for_mime(mime_type: str) -> str:
    # Take the longest extension. I'd rather have '.jpeg' than the weird '.jpe'.
    extensions: typing.List[str] = mimetypes.guess_all_extensions(mime_type)

    try:
        return max(extensions, key=len)
    except ValueError:
        # Raised when extensions is empty, e.g. when the mime type is unknown.
        return ''
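# (Illustrative: mimetypes.guess_all_extensions('image/jpeg') typically returns
# something like ['.jpe', '.jpeg', '.jpg'], and max(..., key=len) then picks '.jpeg'.)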
def _get_file_link(file_id: ObjectId) -> str:
    # Get the file document via Eve to make it update the link.
    file_doc, _, _, status = getitem_internal('files', _id=file_id)
    assert status == 200

    return file_doc['link']


def sync_avatar(user_id: ObjectId) -> str:
    """Fetch the user's avatar from Blender ID and save to storage.

    Errors are logged but do not raise an exception.

    :return: the link to the avatar, or '' if it was not processed.
    """
    users_coll = current_app.db('users')
    db_user = users_coll.find_one({'_id': user_id})
    old_avatar_info = db_user.get('avatar', {})
    if isinstance(old_avatar_info, ObjectId):
        old_avatar_info = {'file': old_avatar_info}

    home_proj = home_project.get_home_project(user_id)
    if not home_proj:
        log.error('Home project of user %s does not exist, unable to store avatar', user_id)
        return ''

    bid_userid = blender_id.get_user_blenderid(db_user)
    if not bid_userid:
        log.error('User %s has no Blender ID user-id, unable to fetch avatar', user_id)
        return ''

    avatar_url = blender_id.avatar_url(bid_userid)
    bid_session = blender_id.Session()

    # Avoid re-downloading the same avatar.
    request_headers = {}
    if avatar_url == old_avatar_info.get('last_downloaded_url') and \
            old_avatar_info.get('last_modified'):
        request_headers['If-Modified-Since'] = old_avatar_info.get('last_modified')

    log.info('Downloading avatar for user %s from %s', user_id, avatar_url)
    resp = bid_session.get(avatar_url, headers=request_headers, allow_redirects=True)
    if resp.status_code == 304:
        # File was not modified, we can keep the old file.
        log.debug('Avatar for user %s was not modified on Blender ID, not re-downloading', user_id)
        return _get_file_link(old_avatar_info['file'])

    resp.raise_for_status()

    mime_type = resp.headers['Content-Type']
    file_extension = _extension_for_mime(mime_type)
    if not file_extension:
        log.error('No file extension known for mime type %s, unable to handle avatar of user %s',
                  mime_type, user_id)
        return ''

    filename = f'avatar-{user_id}{file_extension}'
    fake_local_file = io.BytesIO(resp.content)
    fake_local_file.name = filename

    # Act as if this file was just uploaded by the user, so we can reuse
    # existing Pillar file-handling code.
    log.debug("Uploading avatar for user %s to storage", user_id)
    uploaded_file = FileStorage(
        stream=fake_local_file,
        filename=filename,
        headers=resp.headers,
        content_type=mime_type,
        content_length=resp.headers['Content-Length'],
    )

    with pillar.auth.temporary_user(db_user):
        upload_data = pillar.api.file_storage.upload_and_process(
            fake_local_file,
            uploaded_file,
            str(home_proj['_id']),
            # Disallow image processing, as it's a tiny file anyway and
            # we'll just serve the original.
            may_process_file=False,
        )

    file_id = ObjectId(upload_data['file_id'])

    avatar_info = {
        'file': file_id,
        'last_downloaded_url': resp.url,
        'last_modified': resp.headers.get('Last-Modified'),
    }

    # Update the user to store the reference to their avatar.
    old_avatar_file_id = old_avatar_info.get('file')
    update_result = users_coll.update_one({'_id': user_id},
                                          {'$set': {'avatar': avatar_info}})
    if update_result.matched_count == 1:
        log.debug('Updated avatar for user ID %s to file %s', user_id, file_id)
    else:
        log.warning('Matched %d users while setting avatar for user ID %s to file %s',
                    update_result.matched_count, user_id, file_id)

    if old_avatar_file_id:
        current_app.delete_internal('files', _id=old_avatar_file_id)

    return _get_file_link(file_id)
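
A minimal usage sketch, assuming an active application context (the ObjectId is a made-up example):

from bson import ObjectId
from pillar.api.users import avatar

link = avatar.sync_avatar(ObjectId('563aca02c379cf0005e8e17d'))
if not link:
    print('avatar not synced; see the logs for the reason')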


@@ -1,13 +1,12 @@
 import copy
 import json

-import bson
 from eve.utils import parse_request
 from werkzeug import exceptions as wz_exceptions

 from pillar import current_app
 from pillar.api.users.routes import log
-from pillar.api.utils.authorization import user_has_role
+import pillar.api.users.avatar
 import pillar.auth

 USER_EDITABLE_FIELDS = {'full_name', 'username', 'email', 'settings'}
@@ -126,7 +125,7 @@ def check_put_access(request, lookup):
     raise wz_exceptions.Forbidden()


-def after_fetching_user(user):
+def after_fetching_user(user: dict) -> None:
     # Deny access to auth block; authentication stuff is managed by
     # custom end-points.
     user.pop('auth', None)
@@ -142,7 +141,7 @@ def after_fetching_user(user):
         return

     # Remove all fields except public ones.
-    public_fields = {'full_name', 'username', 'email', 'extension_props_public'}
+    public_fields = {'full_name', 'username', 'email', 'extension_props_public', 'badges'}
     for field in list(user.keys()):
         if field not in public_fields:
             del user[field]

pillar/api/users/patch.py (new file, 45 lines)

@@ -0,0 +1,45 @@
"""User patching support."""
import logging
import bson
from flask import Blueprint
import werkzeug.exceptions as wz_exceptions
from pillar import current_app
from pillar.auth import current_user
from pillar.api.utils import authorization, jsonify, remove_private_keys
from pillar.api import patch_handler
log = logging.getLogger(__name__)
patch_api_blueprint = Blueprint('users.patch', __name__)
class UserPatchHandler(patch_handler.AbstractPatchHandler):
item_name = 'user'
@authorization.require_login()
def patch_set_username(self, user_id: bson.ObjectId, patch: dict):
"""Updates a user's username."""
if user_id != current_user.user_id:
log.info('User %s tried to change username of user %s',
current_user.user_id, user_id)
raise wz_exceptions.Forbidden('You may only change your own username')
new_username = patch['username']
log.info('User %s uses PATCH to set username to %r', current_user.user_id, new_username)
users_coll = current_app.db('users')
db_user = users_coll.find_one({'_id': user_id})
db_user['username'] = new_username
# Save via Eve to check the schema and trigger update hooks.
response, _, _, status = current_app.put_internal(
'users', remove_private_keys(db_user), _id=user_id)
return jsonify(response), status
def setup_app(app, url_prefix):
UserPatchHandler(patch_api_blueprint)
app.register_api_blueprint(patch_api_blueprint, url_prefix=url_prefix)
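
A hypothetical client-side call for the handler above; the host, user ID, and token are made up, and the operation name 'set-username' assumes the usual Pillar convention of mapping a patch_set_username method to a dash-separated op:

import requests

resp = requests.patch(
    'https://example.net/api/users/563aca02c379cf0005e8e17d',
    json={'op': 'set-username', 'username': 'new-name'},
    headers={'Authorization': 'Bearer <token>'},
)
print(resp.status_code)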


@@ -1,9 +1,11 @@
 import logging

 from eve.methods.get import get
-from flask import Blueprint
+from flask import Blueprint, request
+import werkzeug.exceptions as wz_exceptions

-from pillar.api.utils import jsonify
+from pillar import current_app
+from pillar.api import utils
 from pillar.api.utils.authorization import require_login
 from pillar.auth import current_user
@@ -15,7 +17,128 @@ blueprint_api = Blueprint('users_api', __name__)
 @require_login()
 def my_info():
     eve_resp, _, _, status, _ = get('users', {'_id': current_user.user_id})
-    resp = jsonify(eve_resp['_items'][0], status=status)
+    resp = utils.jsonify(eve_resp['_items'][0], status=status)
     return resp
@blueprint_api.route('/video/<video_id>/progress')
@require_login()
def get_video_progress(video_id: str):
"""Return video progress information.
Either a `204 No Content` is returned (no information stored),
or a `200 Ok` with JSON from Eve's 'users' schema, from the key
video.view_progress.<video_id>.
"""
# Validation of the video ID; raises a BadRequest when it's not an ObjectID.
# This isn't strictly necessary, but it makes this function behave symmetrical
# to the set_video_progress() function.
utils.str2id(video_id)
users_coll = current_app.db('users')
user_doc = users_coll.find_one(current_user.user_id, projection={'nodes.view_progress': True})
try:
progress = user_doc['nodes']['view_progress'][video_id]
except KeyError:
return '', 204
if not progress:
return '', 204
return utils.jsonify(progress)
@blueprint_api.route('/video/<video_id>/progress', methods=['POST'])
@require_login()
def set_video_progress(video_id: str):
"""Save progress information about a certain video.
Expected parameters:
- progress_in_sec: float number of seconds
- progress_in_perc: integer percentage of video watched (interval [0-100])
"""
my_log = log.getChild('set_video_progress')
my_log.debug('Setting video progress for user %r video %r', current_user.user_id, video_id)
# Constructing this response requires an active app, and thus can't be done on module load.
no_video_response = utils.jsonify({'_message': 'No such video'}, status=404)
try:
progress_in_sec = float(request.form['progress_in_sec'])
progress_in_perc = int(request.form['progress_in_perc'])
except KeyError as ex:
my_log.debug('Missing POST field in request: %s', ex)
raise wz_exceptions.BadRequest(f'missing a form field')
except ValueError as ex:
my_log.debug('Invalid value for POST field in request: %s', ex)
raise wz_exceptions.BadRequest(f'Invalid value for field: {ex}')
users_coll = current_app.db('users')
nodes_coll = current_app.db('nodes')
# First check whether this is actually an existing video
video_oid = utils.str2id(video_id)
video_doc = nodes_coll.find_one(video_oid, projection={
'node_type': True,
'properties.content_type': True,
'properties.file': True,
})
if not video_doc:
my_log.debug('Node %r not found, unable to set progress for user %r',
video_oid, current_user.user_id)
return no_video_response
try:
is_video = (video_doc['node_type'] == 'asset'
and video_doc['properties']['content_type'] == 'video')
except KeyError:
is_video = False
if not is_video:
my_log.info('Node %r is not a video, unable to set progress for user %r',
video_oid, current_user.user_id)
# There is no video found at this URL, so act as if it doesn't even exist.
return no_video_response
# Compute the progress
percent = min(100, max(0, progress_in_perc))
progress = {
'progress_in_sec': progress_in_sec,
'progress_in_percent': percent,
'last_watched': utils.utcnow(),
}
# After watching a certain percentage of the video, we consider it 'done'
#
# Total Credit start Total Credit Percent
# HH:MM:SS HH:MM:SS sec sec of duration
# Sintel 00:14:48 00:12:24 888 744 83.78%
# Tears of Steel 00:12:14 00:09:49 734 589 80.25%
# Cosmos Laundro 00:12:10 00:10:05 730 605 82.88%
# Agent 327 00:03:51 00:03:26 231 206 89.18%
# Caminandes 3 00:02:30 00:02:18 150 138 92.00%
# Glass Half 00:03:13 00:02:52 193 172 89.12%
# Big Buck Bunny 00:09:56 00:08:11 596 491 82.38%
# Elephants Drea 00:10:54 00:09:25 654 565 86.39%
#
# Median 85.09%
# Average 85.75%
#
# For training videos, marking them as done at 85% of the video may be a bit
# early, since those probably won't have (long) credits. This is why we
# stick to 90% here.
if percent >= 90:
progress['done'] = True
# Setting each property individually prevents us from overwriting any
# existing {done: true} fields.
updates = {f'nodes.view_progress.{video_id}.{k}': v
for k, v in progress.items()}
result = users_coll.update_one({'_id': current_user.user_id},
{'$set': updates})
if result.matched_count == 0:
my_log.error('Current user %r could not be updated', current_user.user_id)
raise wz_exceptions.InternalServerError('Unable to find logged-in user')
return '', 204
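
As a usage sketch, the new endpoint can be exercised with Flask's test client; the URL prefix and fixtures are assumptions, the form fields match the docstring above:

# Hypothetical test sketch for the progress endpoint.
def test_set_video_progress(client, auth_headers, video_id):
    resp = client.post(
        f'/api/users/video/{video_id}/progress',
        data={'progress_in_sec': 123.4, 'progress_in_perc': 56},
        headers=auth_headers,
    )
    assert resp.status_code == 204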


@@ -8,6 +8,7 @@ import logging
 import random
 import typing
 import urllib.request, urllib.parse, urllib.error
+import warnings

 import bson.objectid
 import bson.tz_util

@@ -44,10 +45,16 @@ def remove_private_keys(document):
     """Removes any key that starts with an underscore, returns result as new
     dictionary.
     """
+    def do_remove(doc):
+        for key in list(doc.keys()):
+            if key.startswith('_'):
+                del doc[key]
+            elif isinstance(doc[key], dict):
+                doc[key] = do_remove(doc[key])
+        return doc
+
     doc_copy = copy.deepcopy(document)
-    for key in list(doc_copy.keys()):
-        if key.startswith('_'):
-            del doc_copy[key]
+    do_remove(doc_copy)

     try:
         del doc_copy['allowed_methods']
@@ -57,6 +64,39 @@ def remove_private_keys(document):
     return doc_copy
def pretty_duration(seconds: typing.Union[None, int, float]):
if seconds is None:
return ''
seconds = round(seconds)
hours, seconds = divmod(seconds, 3600)
minutes, seconds = divmod(seconds, 60)
if hours > 0:
return f'{hours:02}:{minutes:02}:{seconds:02}'
else:
return f'{minutes:02}:{seconds:02}'
def pretty_duration_fractional(seconds: typing.Union[None, int, float]):
if seconds is None:
return ''
# Remove fraction of seconds from the seconds so that the rest is done as integers.
seconds, fracs = divmod(seconds, 1)
hours, seconds = divmod(int(seconds), 3600)
minutes, seconds = divmod(seconds, 60)
msec = int(round(fracs * 1000))
if msec == 0:
msec_str = ''
else:
msec_str = f'.{msec:03}'
if hours > 0:
return f'{hours:02}:{minutes:02}:{seconds:02}{msec_str}'
else:
return f'{minutes:02}:{seconds:02}{msec_str}'
 class PillarJSONEncoder(json.JSONEncoder):
     """JSON encoder with support for Pillar resources."""

@@ -64,6 +104,9 @@ class PillarJSONEncoder(json.JSONEncoder):
         if isinstance(obj, datetime.datetime):
             return obj.strftime(RFC1123_DATE_FORMAT)

+        if isinstance(obj, datetime.timedelta):
+            return pretty_duration(obj.total_seconds())
+
         if isinstance(obj, bson.ObjectId):
             return str(obj)

@@ -144,6 +187,16 @@ def str2id(document_id: str) -> bson.ObjectId:

 def gravatar(email: str, size=64) -> typing.Optional[str]:
+    """Deprecated: return the Gravatar URL.
+
+    .. deprecated::
+        Use of Gravatar is deprecated, in favour of our self-hosted avatars.
+        See pillar.api.users.avatar.url(user).
+    """
+    warnings.warn('pillar.api.utils.gravatar() is deprecated, '
+                  'use pillar.api.users.avatar.url() instead',
+                  category=DeprecationWarning)
     if email is None:
         return None

@@ -181,7 +234,8 @@ def doc_diff(doc1, doc2, *, falsey_is_equal=True, superkey: str = None):
     function won't report differences between DoesNotExist, False, '', and 0.
     """

-    private_keys = {'_id', '_etag', '_deleted', '_updated', '_created'}
+    def is_private(key):
+        return str(key).startswith('_')

     def combine_key(some_key):
         """Combine this key with the superkey.

@@ -202,7 +256,7 @@ def doc_diff(doc1, doc2, *, falsey_is_equal=True, superkey: str = None):
     if isinstance(doc1, dict) and isinstance(doc2, dict):
         for key in set(doc1.keys()).union(set(doc2.keys())):
-            if key in private_keys:
+            if is_private(key):
                 continue

             val1 = doc1.get(key, DoesNotExist)

@@ -245,4 +299,10 @@ def random_etag() -> str:

 def utcnow() -> datetime.datetime:
-    return datetime.datetime.now(tz=bson.tz_util.utc)
+    """Construct timezone-aware 'now' in UTC with millisecond precision."""
+    now = datetime.datetime.now(tz=bson.tz_util.utc)
+
+    # MongoDB stores in millisecond precision, so truncate the microseconds.
+    # This way the returned datetime can be round-tripped via MongoDB and stay the same.
+    trunc_now = now.replace(microsecond=now.microsecond - (now.microsecond % 1000))
+    return trunc_now
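
The truncation arithmetic is easy to sanity-check in isolation; a small standalone sketch:

# Sanity sketch for the millisecond truncation in utcnow().
import datetime
import bson.tz_util

now = datetime.datetime.now(tz=bson.tz_util.utc)
trunc = now.replace(microsecond=now.microsecond - (now.microsecond % 1000))
assert trunc.microsecond % 1000 == 0
assert datetime.timedelta(0) <= now - trunc < datetime.timedelta(milliseconds=1)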


@@ -13,7 +13,7 @@ import logging
 import typing

 import bson
-from flask import g, current_app
+from flask import g, current_app, session
 from flask import request

 from werkzeug import exceptions as wz_exceptions

@@ -60,7 +60,7 @@ def find_user_in_db(user_info: dict, provider='blender-id') -> dict:
     email address.

     Does NOT update the user in the database.

     :param user_info: Information (id, email and full_name) from the auth provider
     :param provider: One of the supported providers
     """

@@ -103,7 +103,7 @@ def find_user_in_db(user_info: dict, provider='blender-id') -> dict:
     return db_user


-def validate_token(*, force=False):
+def validate_token(*, force=False) -> bool:
     """Validate the token provided in the request and populate the current_user
     flask.g object, so that permissions and access to a resource can be defined
     from it.

@@ -115,7 +115,7 @@ def validate_token(*, force=False):
     :returns: True iff the user is logged in with a valid Blender ID token.
     """
-    from pillar.auth import AnonymousUser
+    import pillar.auth

     # Trust a pre-existing g.current_user
     if not force:

@@ -133,16 +133,22 @@ def validate_token(*, force=False):
         oauth_subclient = ''
     else:
         # Check the session, the user might be logged in through Flask-Login.
-        from pillar import auth

-        token = auth.get_blender_id_oauth_token()
+        # The user has a logged-in session; trust only if this request passes a CSRF check.
+        # FIXME(Sybren): we should stop saving the token as 'user_id' in the session.
+        token = session.get('user_id')
+        if token:
+            log.debug('skipping token check because current user already has a session')
+            current_app.csrf.protect()
+        else:
+            token = pillar.auth.get_blender_id_oauth_token()
         oauth_subclient = None

     if not token:
         # If no authorization headers are provided, we are getting a request
         # from a non logged in user. Proceed accordingly.
         log.debug('No authentication headers, so not logged in.')
-        g.current_user = AnonymousUser()
+        g.current_user = pillar.auth.AnonymousUser()
         return False

     return validate_this_token(token, oauth_subclient) is not None

@@ -163,8 +169,6 @@ def validate_this_token(token, oauth_subclient=None):
     # Check the users to see if there is one with this Blender ID token.
     db_token = find_token(token, oauth_subclient)
     if not db_token:
-        log.debug('Token %r not found in our local database.', token)
-
         # If no valid token is found in our local database, we issue a new
         # request to the Blender ID server to verify the validity of the token
         # passed via the HTTP header. We will get basic user info if the user

@@ -183,7 +187,7 @@ def validate_this_token(token, oauth_subclient=None):
         return None

     g.current_user = UserClass.construct(token, db_user)
-    user_authenticated.send(None)
+    user_authenticated.send(g.current_user)

     return db_user

@@ -194,7 +198,7 @@ def remove_token(token: str):
     tokens_coll = current_app.db('tokens')
     token_hashed = hash_auth_token(token)

-    # TODO: remove matching on unhashed tokens once all tokens have been hashed.
+    # TODO: remove matching on hashed tokens once all hashed tokens have expired.
     lookup = {'$or': [{'token': token}, {'token_hashed': token_hashed}]}
     del_res = tokens_coll.delete_many(lookup)
     log.debug('Removed token %r, matched %d documents', token, del_res.deleted_count)

@@ -206,7 +210,7 @@ def find_token(token, is_subclient_token=False, **extra_filters):
     tokens_coll = current_app.db('tokens')
     token_hashed = hash_auth_token(token)

-    # TODO: remove matching on unhashed tokens once all tokens have been hashed.
+    # TODO: remove matching on hashed tokens once all hashed tokens have expired.
     lookup = {'$or': [{'token': token}, {'token_hashed': token_hashed}],
               'is_subclient_token': True if is_subclient_token else {'$in': [False, None]},
               'expire_time': {"$gt": utcnow()}}

@@ -229,8 +233,14 @@ def hash_auth_token(token: str) -> str:
     return base64.b64encode(digest).decode('ascii')


-def store_token(user_id, token: str, token_expiry, oauth_subclient_id=False,
-                org_roles: typing.Set[str] = frozenset()):
+def store_token(user_id,
+                token: str,
+                token_expiry,
+                oauth_subclient_id=False,
+                *,
+                org_roles: typing.Set[str] = frozenset(),
+                oauth_scopes: typing.Optional[typing.List[str]] = None,
+                ):
     """Stores an authentication token.

     :returns: the token document from MongoDB

@@ -240,13 +250,15 @@ def store_token(user_id, token: str, token_expiry, oauth_subclient_id=False,
     token_data = {
         'user': user_id,
-        'token_hashed': hash_auth_token(token),
+        'token': token,
         'expire_time': token_expiry,
     }
     if oauth_subclient_id:
         token_data['is_subclient_token'] = True
     if org_roles:
         token_data['org_roles'] = sorted(org_roles)
+    if oauth_scopes:
+        token_data['oauth_scopes'] = oauth_scopes

     r, _, _, status = current_app.post_internal('tokens', token_data)

@@ -363,6 +375,10 @@ def current_user():

 def setup_app(app):
     @app.before_request
     def validate_token_at_each_request():
+        # Skip token validation if this is a static asset
+        # to avoid spamming Blender ID for no good reason
+        if request.path.startswith('/static/'):
+            return
         validate_token()
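
Since org_roles and oauth_scopes are now keyword-only, call sites have to name them. A hypothetical call site (this needs an application context, as store_token() posts via Eve):

# Hypothetical call site for the new store_token() signature.
import datetime

import bson

from pillar.api.utils import utcnow
from pillar.api.utils.authentication import store_token

token_doc = store_token(
    bson.ObjectId(),                         # placeholder user ID
    'some-oauth-token',
    utcnow() + datetime.timedelta(hours=1),  # expiry
    oauth_scopes=['email', 'badge'],
)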


@@ -1,5 +1,6 @@
 import logging
 import functools
+import typing

 from bson import ObjectId
 from flask import g

@@ -12,8 +13,9 @@ CHECK_PERMISSIONS_IMPLEMENTED_FOR = {'projects', 'nodes', 'flamenco_jobs'}
 log = logging.getLogger(__name__)


-def check_permissions(collection_name, resource, method, append_allowed_methods=False,
-                      check_node_type=None):
+def check_permissions(collection_name: str, resource: dict, method: str,
+                      append_allowed_methods=False,
+                      check_node_type: typing.Optional[str] = None):
     """Check user permissions to access a node. We look up node permissions from
     world to groups to users and match them with the computed user permissions.
     If there is no match, we raise 403.

@@ -93,8 +95,9 @@ def compute_allowed_methods(collection_name, resource, check_node_type=None):
     return allowed_methods


-def has_permissions(collection_name, resource, method, append_allowed_methods=False,
-                    check_node_type=None):
+def has_permissions(collection_name: str, resource: dict, method: str,
+                    append_allowed_methods=False,
+                    check_node_type: typing.Optional[str] = None):
     """Check user permissions to access a node. We look up node permissions from
     world to groups to users and match them with the computed user permissions.

@@ -328,8 +331,9 @@ def require_login(*, require_roles=set(),
     def render_error() -> Response:
         if error_view is None:
-            abort(403)
-        resp: Response = error_view()
+            resp = Forbidden().get_response()
+        else:
+            resp = error_view()
         resp.status_code = 403
         return resp


@@ -9,12 +9,8 @@ string = functools.partial(attr.ib, validator=attr.validators.instance_of(str))

 def log(name):
-    """Returns a logger attr.ib
+    """Returns a logger

     :param name: name to pass to logging.getLogger()
-    :rtype: attr.ib
     """
-    return attr.ib(default=logging.getLogger(name),
-                   repr=False,
-                   hash=False,
-                   cmp=False)
+    return logging.getLogger(name)
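
Note the behavioural change: log() now returns a plain logging.Logger, so in an attrs class it becomes a class attribute shared by all instances instead of a per-instance attr.ib. A minimal sketch of the new usage (module path assumed):

# Minimal sketch, assuming this helper lives in pillar.attrs_extra.
import attr

from pillar import attrs_extra

@attr.s
class Downloader:
    _log = attrs_extra.log('pillar.downloader')  # plain Logger, no longer an attr.ib()

    def download(self):
        self._log.info('downloading')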


@@ -1,18 +1,24 @@
 """Authentication code common to the web and api modules."""

 import collections
+import contextlib
+import copy
+import functools
 import logging
 import typing

 import blinker
-import bson
+from bson import ObjectId
 from flask import session, g
 import flask_login
 from werkzeug.local import LocalProxy

 from pillar import current_app

+# The sender is the user that was just authenticated.
 user_authenticated = blinker.Signal('Sent whenever a user was authenticated')
+user_logged_in = blinker.Signal('Sent whenever a user logged in on the web')

 log = logging.getLogger(__name__)

 # Mapping from user role to capabilities obtained by users with that role.

@@ -28,16 +34,21 @@ class UserClass(flask_login.UserMixin):
     def __init__(self, token: typing.Optional[str]):
         # We store the Token instead of ID
         self.id = token
+        self.auth_token = token
         self.username: str = None
         self.full_name: str = None
-        self.user_id: bson.ObjectId = None
+        self.user_id: ObjectId = None
         self.objectid: str = None
-        self.gravatar: str = None
         self.email: str = None
         self.roles: typing.List[str] = []
         self.groups: typing.List[str] = []  # NOTE: these are stringified object IDs.
-        self.group_ids: typing.List[bson.ObjectId] = []
+        self.group_ids: typing.List[ObjectId] = []
         self.capabilities: typing.Set[str] = set()
+        self.nodes: dict = {}  # see the 'nodes' key in eve_settings.py::user_schema.
+        self.badges_html: str = ''
+
+        # Stored when constructing a user from the database
+        self._db_user = {}

         # Lazily evaluated
         self._has_organizations: typing.Optional[bool] = None

@@ -46,20 +57,24 @@ class UserClass(flask_login.UserMixin):
     def construct(cls, token: str, db_user: dict) -> 'UserClass':
         """Constructs a new UserClass instance from a Mongo user document."""

-        from ..api import utils
-
         user = cls(token)
+        user._db_user = copy.deepcopy(db_user)

         user.user_id = db_user.get('_id')
         user.roles = db_user.get('roles') or []
         user.group_ids = db_user.get('groups') or []
         user.email = db_user.get('email') or ''
         user.username = db_user.get('username') or ''
         user.full_name = db_user.get('full_name') or ''
+        user.badges_html = db_user.get('badges', {}).get('html') or ''
+
+        # Be a little more specific than just db_user['nodes'] or db_user['avatar']
+        user.nodes = {
+            'view_progress': db_user.get('nodes', {}).get('view_progress', {}),
+        }

         # Derived properties
         user.objectid = str(user.user_id or '')
-        user.gravatar = utils.gravatar(user.email)
         user.groups = [str(g) for g in user.group_ids]
         user.collect_capabilities()

@@ -152,6 +167,31 @@ class UserClass(flask_login.UserMixin):
         return bool(self._has_organizations)
def frontend_info(self) -> dict:
"""Return a dictionary of user info for injecting into the page."""
return {
'user_id': str(self.user_id),
'username': self.username,
'full_name': self.full_name,
'avatar_url': self.avatar_url,
'email': self.email,
'capabilities': list(self.capabilities),
'badges_html': self.badges_html,
'is_authenticated': self.is_authenticated,
}
@property
@functools.lru_cache(maxsize=1)
def avatar_url(self) -> str:
"""Return the Avatar image URL for this user.
:return: The avatar URL (the default one if the user has no avatar).
"""
import pillar.api.users.avatar
return pillar.api.users.avatar.url(self._db_user)
 class AnonymousUser(flask_login.AnonymousUserMixin, UserClass):
     def __init__(self):

@@ -210,9 +250,15 @@ def login_user(oauth_token: str, *, load_from_db=False):
         user = _load_user(oauth_token)
     else:
         user = UserClass(oauth_token)
+    login_user_object(user)
+
+
+def login_user_object(user: UserClass):
+    """Log in the given user."""
     flask_login.login_user(user, remember=True)
     g.current_user = user
-    user_authenticated.send(None)
+    user_authenticated.send(user)
+    user_logged_in.send(user)


 def logout_user():

@@ -229,6 +275,25 @@ def logout_user():
     g.current_user = AnonymousUser()
@contextlib.contextmanager
def temporary_user(db_user: dict):
"""Temporarily sets the given user as 'current user'.
Does not trigger login signals, as this is not a real login action.
"""
try:
actual_current_user = g.current_user
except AttributeError:
actual_current_user = AnonymousUser()
temp_user = UserClass.construct('', db_user)
try:
g.current_user = temp_user
yield
finally:
g.current_user = actual_current_user
 def get_blender_id_oauth_token() -> str:
     """Returns the Blender ID auth token, or an empty string if there is none."""

pillar/auth/cors.py (new file, 48 lines)

@@ -0,0 +1,48 @@
"""Support for adding CORS headers to responses."""
import functools
import flask
import werkzeug.wrappers as wz_wrappers
import werkzeug.exceptions as wz_exceptions
def allow(*, allow_credentials=False):
"""Flask endpoint decorator, adds CORS headers to the response.
If the request has a non-empty 'Origin' header, the response header
'Access-Control-Allow-Origin' is set to the value of that request header,
and some other CORS headers are set.
"""
def decorator(wrapped):
@functools.wraps(wrapped)
def wrapper(*args, **kwargs):
request_origin = flask.request.headers.get('Origin')
if not request_origin:
# No CORS headers requested, so don't bother touching the response.
return wrapped(*args, **kwargs)
try:
response = wrapped(*args, **kwargs)
except wz_exceptions.HTTPException as ex:
response = ex.get_response()
else:
if isinstance(response, tuple):
response = flask.make_response(*response)
elif isinstance(response, str):
response = flask.make_response(response)
elif isinstance(response, wz_wrappers.Response):
pass
else:
raise TypeError(f'unknown response type {type(response)}')
assert isinstance(response, wz_wrappers.Response)
response.headers.set('Access-Control-Allow-Origin', request_origin)
response.headers.set('Access-Control-Allow-Headers', 'x-requested-with')
if allow_credentials:
response.headers.set('Access-Control-Allow-Credentials', 'true')
return response
return wrapper
return decorator
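
A usage sketch for the new decorator; the blueprint and endpoint are hypothetical:

# Hypothetical endpoint using the new CORS decorator.
import flask

from pillar.auth import cors

blueprint = flask.Blueprint('example', __name__)

@blueprint.route('/badges/<user_id>')
@cors.allow(allow_credentials=True)
def badge_html(user_id: str) -> str:
    return f'<div>badges for {user_id}</div>'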


@@ -1,8 +1,9 @@
 import abc
-import attr
 import json
 import logging
+import typing

+import attr
 from rauth import OAuth2Service
 from flask import current_app, url_for, request, redirect, session, Response

@@ -15,6 +16,8 @@ class OAuthUserResponse:

     id = attr.ib(validator=attr.validators.instance_of(str))
     email = attr.ib(validator=attr.validators.instance_of(str))
+    access_token = attr.ib(validator=attr.validators.instance_of(str))
+    scopes: typing.List[str] = attr.ib(validator=attr.validators.instance_of(list))


 class OAuthError(Exception):

@@ -127,8 +130,10 @@ class OAuthSignIn(metaclass=abc.ABCMeta):

 class BlenderIdSignIn(OAuthSignIn):
     provider_name = 'blender-id'
+    scopes = ['email', 'badge']

     def __init__(self):
+        from urllib.parse import urljoin
         super().__init__()

         base_url = current_app.config['BLENDER_ID_ENDPOINT']

@@ -137,14 +142,14 @@ class BlenderIdSignIn(OAuthSignIn):
             name='blender-id',
             client_id=self.consumer_id,
             client_secret=self.consumer_secret,
-            authorize_url='%s/oauth/authorize' % base_url,
-            access_token_url='%s/oauth/token' % base_url,
-            base_url='%s/api/' % base_url
+            authorize_url=urljoin(base_url, 'oauth/authorize'),
+            access_token_url=urljoin(base_url, 'oauth/token'),
+            base_url=urljoin(base_url, 'api/'),
         )

     def authorize(self):
         return redirect(self.service.get_authorize_url(
-            scope='email',
+            scope=' '.join(self.scopes),
             response_type='code',
             redirect_uri=self.get_callback_url())
         )

@@ -158,7 +163,11 @@ class BlenderIdSignIn(OAuthSignIn):
         session['blender_id_oauth_token'] = access_token

         me = oauth_session.get('user').json()
-        return OAuthUserResponse(str(me['id']), me['email'])
+
+        # Blender ID doesn't tell us which scopes were granted by the user, so
+        # for now assume we got all the scopes we requested.
+        # (see https://github.com/jazzband/django-oauth-toolkit/issues/644)
+        return OAuthUserResponse(str(me['id']), me['email'], access_token, self.scopes)


 class FacebookSignIn(OAuthSignIn):

@@ -188,7 +197,7 @@ class FacebookSignIn(OAuthSignIn):
         me = oauth_session.get('me?fields=id,email').json()
         # TODO handle case when user chooses not to disclose an email
         # see https://developers.facebook.com/docs/graph-api/reference/user/
-        return OAuthUserResponse(me['id'], me.get('email'))
+        return OAuthUserResponse(me['id'], me.get('email'), '', [])


 class GoogleSignIn(OAuthSignIn):

@@ -216,4 +225,4 @@ class GoogleSignIn(OAuthSignIn):
         oauth_session = self.make_oauth_session()
         me = oauth_session.get('userinfo').json()
-        return OAuthUserResponse(str(me['id']), me['email'])
+        return OAuthUserResponse(str(me['id']), me['email'], '', [])
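
The switch to urljoin matters once BLENDER_ID_ENDPOINT carries a trailing slash; a quick comparison of the two approaches:

# Why urljoin: '%s/oauth/token' % base_url breaks on a trailing slash.
from urllib.parse import urljoin

base_url = 'https://id.example.com/'
assert '%s/oauth/token' % base_url == 'https://id.example.com//oauth/token'
assert urljoin(base_url, 'oauth/token') == 'https://id.example.com/oauth/token'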

pillar/badge_sync.py (new file, 266 lines)

@@ -0,0 +1,266 @@
import collections
import datetime
import logging
import typing
from urllib.parse import urljoin
import bson
import requests
from pillar import current_app, auth
from pillar.api.utils import utcnow
SyncUser = collections.namedtuple('SyncUser', 'user_id token bid_user_id')
BadgeHTML = collections.namedtuple('BadgeHTML', 'html expires')
log = logging.getLogger(__name__)
class StopRefreshing(Exception):
"""Indicates that Blender ID is having problems.
Further badge refreshes should be put on hold to avoid bludgeoning
a suffering Blender ID.
"""
def find_user_to_sync(user_id: bson.ObjectId) -> typing.Optional[SyncUser]:
"""Return user information for syncing badges for a specific user.
Returns None if the user cannot be synced (no 'badge' scope on a token,
or no Blender ID user_id known).
"""
my_log = log.getChild('find_user_to_sync')
now = utcnow()
tokens_coll = current_app.db('tokens')
users_coll = current_app.db('users')
token_info = tokens_coll.find_one({
'user': user_id,
'token': {'$exists': True},
'oauth_scopes': 'badge',
'expire_time': {'$gt': now},
})
if not token_info:
my_log.debug('No token with scope "badge" for user %s', user_id)
return None
user_info = users_coll.find_one({'_id': user_id})
# TODO(Sybren): do this filtering in the MongoDB query:
bid_user_ids = [auth_info.get('user_id')
for auth_info in user_info.get('auth', [])
if auth_info.get('provider', '') == 'blender-id' and auth_info.get('user_id')]
if not bid_user_ids:
my_log.debug('No Blender ID user_id for user %s', user_id)
return None
bid_user_id = bid_user_ids[0]
return SyncUser(user_id=user_id, token=token_info['token'], bid_user_id=bid_user_id)
def find_users_to_sync() -> typing.Iterable[SyncUser]:
"""Return user information of syncable users with badges."""
now = utcnow()
tokens_coll = current_app.db('tokens')
cursor = tokens_coll.aggregate([
# Find all users who have a 'badge' scope in their OAuth token.
{'$match': {
'token': {'$exists': True},
'oauth_scopes': 'badge',
'expire_time': {'$gt': now},
# TODO(Sybren): save real token expiry time but keep checking tokens hourly when they are used!
}},
{'$lookup': {
'from': 'users',
'localField': 'user',
'foreignField': '_id',
'as': 'user'
}},
# Prevent 'user' from being an array.
{'$unwind': {'path': '$user'}},
# Get the Blender ID user ID only.
{'$unwind': {'path': '$user.auth'}},
{'$match': {'user.auth.provider': 'blender-id'}},
# Only select those users whose badge doesn't exist or has expired.
{'$match': {
'user.badges.expires': {'$not': {'$gt': now}}
}},
# Make sure that the badges that expire last are also refreshed last.
{'$sort': {'user.badges.expires': 1}},
# Reduce the document to the info we're after.
{'$project': {
'token': True,
'user._id': True,
'user.auth.user_id': True,
}},
])
log.debug('Aggregating tokens and users')
for user_info in cursor:
log.debug('User %s has badges %s',
user_info['user']['_id'], user_info['user'].get('badges'))
yield SyncUser(
user_id=user_info['user']['_id'],
token=user_info['token'],
bid_user_id=user_info['user']['auth']['user_id'])
def fetch_badge_html(session: requests.Session, user: SyncUser, size: str) \
-> str:
"""Fetch a Blender ID badge for this user.
:param session:
:param user:
:param size: Size indication for the badge images, see the Blender ID
documentation/code. As of this writing valid sizes are {'s', 'm', 'l'}.
"""
my_log = log.getChild('fetch_badge_html')
blender_id_endpoint = current_app.config['BLENDER_ID_ENDPOINT']
url = urljoin(blender_id_endpoint, f'api/badges/{user.bid_user_id}/html/{size}')
my_log.debug('Fetching badge HTML at %s for user %s', url, user.user_id)
try:
resp = session.get(url, headers={'Authorization': f'Bearer {user.token}'})
except requests.ConnectionError as ex:
my_log.warning('Unable to connect to Blender ID at %s: %s', url, ex)
raise StopRefreshing()
if resp.status_code == 204:
my_log.debug('No badges for user %s', user.user_id)
return ''
if resp.status_code == 403:
# TODO(Sybren): this indicates the token is invalid, so we could just as well delete it.
my_log.warning('Tried fetching %s for user %s but received a 403: %s',
url, user.user_id, resp.text)
return ''
if resp.status_code == 400:
my_log.warning('Blender ID did not accept our GET request at %s for user %s: %s',
url, user.user_id, resp.text)
return ''
if resp.status_code == 500:
my_log.warning('Blender ID returned an internal server error on %s for user %s, '
'aborting all badge refreshes: %s', url, user.user_id, resp.text)
raise StopRefreshing()
if resp.status_code == 404:
my_log.warning('Blender ID has no user %s for our user %s', user.bid_user_id, user.user_id)
return ''
resp.raise_for_status()
return resp.text
def refresh_all_badges(only_user_id: typing.Optional[bson.ObjectId] = None, *,
dry_run=False,
timelimit: datetime.timedelta):
"""Re-fetch all badges for all users, except when already refreshed recently.
:param only_user_id: Only refresh this user. This is expected to be used
sparingly during manual maintenance / debugging sessions only. It does
fetch all users to refresh, and in Python code skips all except the
given one.
:param dry_run: if True the changes are described in the log, but not performed.
:param timelimit: Refreshing will stop after this time. This allows for cron(-like)
jobs to run without overlapping, even when the number of badges to refresh
becomes larger than possible within the period of the cron job.
"""
my_log = log.getChild('refresh_all_badges')
# Test the config before we start looping over the world.
badge_expiry = badge_expiry_config()
if not badge_expiry or not isinstance(badge_expiry, datetime.timedelta):
raise ValueError('BLENDER_ID_BADGE_EXPIRY not configured properly, should be a timedelta')
session = _get_requests_session()
deadline = utcnow() + timelimit
num_updates = 0
for user_info in find_users_to_sync():
if utcnow() > deadline:
my_log.info('Stopping badge refresh because the timelimit %s (H:MM:SS) was hit.',
timelimit)
break
if only_user_id and user_info.user_id != only_user_id:
my_log.debug('Skipping user %s', user_info.user_id)
continue
try:
badge_html = fetch_badge_html(session, user_info, 's')
except StopRefreshing:
my_log.error('Blender ID has internal problems, stopping badge refreshing at user %s',
user_info)
break
num_updates += 1
update_badges(user_info, badge_html, badge_expiry, dry_run=dry_run)
my_log.info('Updated badges of %d users%s', num_updates, ' (dry-run)' if dry_run else '')
def _get_requests_session() -> requests.Session:
from requests.adapters import HTTPAdapter
session = requests.Session()
session.mount('https://', HTTPAdapter(max_retries=5))
return session
def refresh_single_user(user_id: bson.ObjectId):
"""Refresh badges for a single user."""
my_log = log.getChild('refresh_single_user')
badge_expiry = badge_expiry_config()
if not badge_expiry:
my_log.warning('Skipping badge fetching, BLENDER_ID_BADGE_EXPIRY not configured')
        return
my_log.debug('Fetching badges for user %s', user_id)
session = _get_requests_session()
user_info = find_user_to_sync(user_id)
if not user_info:
return
try:
badge_html = fetch_badge_html(session, user_info, 's')
except StopRefreshing:
my_log.error('Blender ID has internal problems, stopping badge refreshing at user %s',
user_info)
return
update_badges(user_info, badge_html, badge_expiry, dry_run=False)
my_log.info('Updated badges of user %s', user_id)
def update_badges(user_info: SyncUser, badge_html: str, badge_expiry: datetime.timedelta,
*, dry_run: bool):
my_log = log.getChild('update_badges')
users_coll = current_app.db('users')
update = {'badges': {
'html': badge_html,
'expires': utcnow() + badge_expiry,
}}
my_log.info('Updating badges HTML for Blender ID %s, user %s',
user_info.bid_user_id, user_info.user_id)
if dry_run:
return
result = users_coll.update_one({'_id': user_info.user_id},
{'$set': update})
if result.matched_count != 1:
my_log.warning('Unable to update badges for user %s', user_info.user_id)
def badge_expiry_config() -> datetime.timedelta:
return current_app.config.get('BLENDER_ID_BADGE_EXPIRY')
@auth.user_logged_in.connect
def sync_badge_upon_login(sender: auth.UserClass, **kwargs):
"""Auto-sync badges when a user logs in."""
log.info('Refreshing badge of %s because they logged in', sender.user_id)
refresh_single_user(sender.user_id)
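
For manual testing the refresh can be driven from an application shell; a sketch using only what this module exposes:

# Hypothetical manual run from a Flask/application shell.
import datetime

from pillar import badge_sync

badge_sync.refresh_all_badges(
    dry_run=True,                             # log what would happen, change nothing
    timelimit=datetime.timedelta(minutes=10),
)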


@@ -1,38 +0,0 @@
import logging
from algoliasearch.helpers import AlgoliaException
log = logging.getLogger(__name__)
def push_updated_user(user_to_index: dict):
"""Push an update to the Algolia index when a user item is updated"""
from pillar.api.utils.algolia import index_user_save
try:
index_user_save(user_to_index)
except AlgoliaException as ex:
log.warning(
'Unable to push user info to Algolia for user "%s", id=%s; %s', # noqa
user_to_index.get('username'),
user_to_index.get('objectID'), ex)
def index_node_save(node_to_index: dict):
from pillar.api.utils import algolia
try:
algolia.index_node_save(node_to_index)
except AlgoliaException as ex:
log.warning(
'Unable to push node info to Algolia for node %s; %s', node_to_index, ex) # noqa
def index_node_delete(delete_id: str):
from pillar.api.utils import algolia
try:
algolia.index_node_delete(delete_id)
except AlgoliaException as ex:
log.warning('Unable to delete node info to Algolia for node %s; %s', delete_id, ex) # noqa

pillar/celery/avatar.py (new file, 29 lines)

@@ -0,0 +1,29 @@
"""Avatar synchronisation.
Note that this module can only be imported when an application context is
active. Best to late-import this in the functions where it's needed.
"""
import logging
from bson import ObjectId
import celery
from pillar import current_app
from pillar.api.users.avatar import sync_avatar
log = logging.getLogger(__name__)
@current_app.celery.task(bind=True, ignore_result=True, acks_late=True)
def sync_avatar_for_user(self: celery.Task, user_id: str):
"""Downloads the user's avatar from Blender ID."""
# WARNING: when changing the signature of this function, also change the
# self.retry() call below.
uid = ObjectId(user_id)
try:
sync_avatar(uid)
except (IOError, OSError):
log.exception('Error downloading Blender ID avatar for user %s, will retry later', user_id)
self.retry((user_id, ), countdown=current_app.config['AVATAR_DOWNLOAD_CELERY_RETRY'])

pillar/celery/badges.py (new file, 20 lines)

@@ -0,0 +1,20 @@
"""Badge HTML synchronisation.
Note that this module can only be imported when an application context is
active. Best to late-import this in the functions where it's needed.
"""
import datetime
import logging
from pillar import current_app, badge_sync
log = logging.getLogger(__name__)
@current_app.celery.task(ignore_result=True)
def sync_badges_for_users(timelimit_seconds: int):
"""Synchronises Blender ID badges for the most-urgent users."""
timelimit = datetime.timedelta(seconds=timelimit_seconds)
log.info('Refreshing badges, timelimit is %s (H:MM:SS)', timelimit)
badge_sync.refresh_all_badges(timelimit=timelimit)
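
Wiring this task into Celery beat might look like the following; the schedule name, interval, and config style are assumptions, not part of this change:

# Hypothetical Celery beat entry for the new task.
CELERYBEAT_SCHEDULE = {
    'badge-sync': {
        'task': 'pillar.celery.badges.sync_badges_for_users',
        'schedule': 600,   # run every 10 minutes
        'args': (540,),    # timelimit_seconds, kept below the interval
    },
}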


@@ -1,4 +1,6 @@
 import logging

+import bleach
 from bson import ObjectId

 from pillar import current_app

@@ -10,7 +12,7 @@ from pillar.api.search import algolia_indexing

 log = logging.getLogger(__name__)

-INDEX_ALLOWED_NODE_TYPES = {'asset', 'texture', 'group', 'hdri'}
+INDEX_ALLOWED_NODE_TYPES = {'asset', 'texture', 'group', 'hdri', 'post'}


 SEARCH_BACKENDS = {

@@ -28,34 +30,6 @@ def _get_node_from_id(node_id: str):
     return node
def _handle_picture(node: dict, to_index: dict):
"""Add picture URL in-place to the to-be-indexed node."""
picture_id = node.get('picture')
if not picture_id:
return
files_collection = current_app.data.driver.db['files']
lookup = {'_id': ObjectId(picture_id)}
picture = files_collection.find_one(lookup)
for item in picture.get('variations', []):
if item['size'] != 't':
continue
# Not all files have a project...
pid = picture.get('project')
if pid:
link = generate_link(picture['backend'],
item['file_path'],
str(pid),
is_public=True)
else:
link = item['link']
to_index['picture'] = link
break
 def prepare_node_data(node_id: str, node: dict=None) -> dict:
     """Given a node id or a node document, return an indexable version of it.

@@ -86,25 +60,30 @@ def prepare_node_data(node_id: str, node: dict=None) -> dict:
     users_collection = current_app.data.driver.db['users']
     user = users_collection.find_one({'_id': ObjectId(node['user'])})

+    clean_description = bleach.clean(node.get('_description_html') or '', strip=True)
+    if not clean_description and node['node_type'] == 'post':
+        clean_description = bleach.clean(node['properties'].get('_content_html') or '', strip=True)
+
     to_index = {
         'objectID': node['_id'],
         'name': node['name'],
         'project': {
             '_id': project['_id'],
-            'name': project['name']
+            'name': project['name'],
+            'url': project['url'],
         },
         'created': node['_created'],
         'updated': node['_updated'],
         'node_type': node['node_type'],
+        'picture': node.get('picture') or '',
         'user': {
             '_id': user['_id'],
             'full_name': user['full_name']
         },
-        'description': node.get('description'),
+        'description': clean_description or None,
+        'is_free': False
     }

-    _handle_picture(node, to_index)
-
     # If the node has world permissions, compute the Free permission
     if 'world' in node.get('permissions', {}):
         if 'GET' in node['permissions']['world']:


@@ -13,6 +13,7 @@ from pillar.cli.maintenance import manager_maintenance
 from pillar.cli.operations import manager_operations
 from pillar.cli.setup import manager_setup
 from pillar.cli.elastic import manager_elastic
+from . import badges

 from pillar.cli import translations

@@ -24,3 +25,4 @@ manager.add_command("maintenance", manager_maintenance)
 manager.add_command("setup", manager_setup)
 manager.add_command("operations", manager_operations)
 manager.add_command("elastic", manager_elastic)
+manager.add_command("badges", badges.manager)

pillar/cli/badges.py (new file, 39 lines)

@@ -0,0 +1,39 @@
import datetime
import logging
from flask_script import Manager
from pillar import current_app, badge_sync
from pillar.api.utils import utcnow
log = logging.getLogger(__name__)
manager = Manager(current_app, usage="Badge operations")
@manager.option('-u', '--user', dest='email', default='', help='Email address of the user to sync')
@manager.option('-a', '--all', dest='sync_all', action='store_true', default=False,
help='Sync all users')
@manager.option('--go', action='store_true', default=False,
help='Actually perform the sync; otherwise it is a dry-run.')
def sync(email: str = '', sync_all: bool=False, go: bool=False):
if bool(email) == bool(sync_all):
raise ValueError('Use either --user or --all.')
if email:
users_coll = current_app.db('users')
db_user = users_coll.find_one({'email': email}, projection={'_id': True})
if not db_user:
raise ValueError(f'No user with email {email!r} found')
specific_user = db_user['_id']
else:
specific_user = None
if not go:
log.info('Performing dry-run, not going to change the user database.')
start_time = utcnow()
badge_sync.refresh_all_badges(specific_user, dry_run=not go,
timelimit=datetime.timedelta(hours=1))
end_time = utcnow()
log.info('%s took %s (H:MM:SS)',
'Updating user badges' if go else 'Dry-run',
end_time - start_time)
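
Assuming the usual manage.py entry point, the new sub-manager would be invoked like this (a dry-run unless --go is passed):

# Hypothetical shell usage of the new 'badges' command group:
#   python manage.py badges sync --user someone@example.com --go
#   python manage.py badges sync --all        # dry-run over all syncable users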


@@ -1,7 +1,9 @@
+import collections
 import copy
 import datetime
+import json
 import logging
-from pathlib import PurePosixPath
+from pathlib import PurePosixPath, Path
 import re
 import typing

@@ -12,6 +14,7 @@ from flask_script import Manager
 import pymongo

 from pillar import current_app
+import pillar.api.utils

 # Collections to skip when finding file references (during orphan file detection).
 # This collection can be added to from PillarExtension.setup_app().

@@ -303,7 +306,7 @@ def purge_home_projects(go=False):
             yield pid
             continue

-        if users_coll.find({'_id': uid, '_deleted': {'$ne': True}}).count() == 0:
+        if users_coll.count_documents({'_id': uid, '_deleted': {'$ne': True}}) == 0:
             log.info('Project %s has non-existing owner %s', pid, uid)
             bad += 1
             yield pid

@@ -559,50 +562,6 @@ def replace_pillar_node_type_schemas(project_url=None, all_projects=False, missi
                  projects_changed, projects_seen)
@manager_maintenance.command
def remarkdown_comments():
"""Retranslates all Markdown to HTML for all comment nodes.
"""
from pillar.api.nodes import convert_markdown
nodes_collection = current_app.db()['nodes']
comments = nodes_collection.find({'node_type': 'comment'},
projection={'properties.content': 1,
'node_type': 1})
updated = identical = skipped = errors = 0
for node in comments:
convert_markdown(node)
node_id = node['_id']
try:
content_html = node['properties']['content_html']
except KeyError:
log.warning('Node %s has no content_html', node_id)
skipped += 1
continue
result = nodes_collection.update_one(
{'_id': node_id},
{'$set': {'properties.content_html': content_html}}
)
if result.matched_count != 1:
log.error('Unable to update node %s', node_id)
errors += 1
continue
if result.modified_count:
updated += 1
else:
identical += 1
log.info('updated : %i', updated)
log.info('identical: %i', identical)
log.info('skipped : %i', skipped)
log.info('errors : %i', errors)
 @manager_maintenance.option('-p', '--project', dest='proj_url', nargs='?',
                             help='Project URL')
 @manager_maintenance.option('-a', '--all', dest='all_projects', action='store_true', default=False,

@@ -684,8 +643,8 @@ def upgrade_attachment_schema(proj_url=None, all_projects=False, go=False):
             log_proj()
             log.info('Removed %d empty attachment dicts', res.modified_count)
         else:
-            to_remove = nodes_coll.count({'properties.attachments': {},
-                                          'project': project['_id']})
+            to_remove = nodes_coll.count_documents({'properties.attachments': {},
+                                                    'project': project['_id']})
             if to_remove:
                 log_proj()
                 log.info('Would remove %d empty attachment dicts', to_remove)

@@ -767,7 +726,9 @@ def iter_markdown(proj_node_types: dict, some_node: dict, callback: typing.Calla
                 continue
             to_visit.append((subdoc, definition['schema']))
             continue
-        if definition.get('coerce') != 'markdown':
+        coerce = definition.get('coerce')  # Eve < 0.8
+        validator = definition.get('check_with') or definition.get('validator')  # Eve >= 0.8
+        if coerce != 'markdown' and validator != 'markdown':
             continue

         my_log.debug('I have to change %r of %s', key, doc)

@@ -778,113 +739,6 @@ def iter_markdown(proj_node_types: dict, some_node: dict, callback: typing.Calla
         doc[key] = new_value
@manager_maintenance.option('-p', '--project', dest='proj_url', nargs='?',
help='Project URL')
@manager_maintenance.option('-a', '--all', dest='all_projects', action='store_true', default=False,
help='Replace on all projects.')
@manager_maintenance.option('-g', '--go', dest='go', action='store_true', default=False,
help='Actually perform the changes (otherwise just show as dry-run).')
def upgrade_attachment_usage(proj_url=None, all_projects=False, go=False):
"""Replaces '@[slug]' with '{attachment slug}'.
Also moves links from the attachment dict to the attachment shortcode.
"""
if bool(proj_url) == all_projects:
log.error('Use either --project or --all.')
return 1
import html
from pillar.api.projects.utils import node_type_dict
from pillar.api.utils import remove_private_keys
from pillar.api.utils.authentication import force_cli_user
force_cli_user()
nodes_coll = current_app.db('nodes')
total_nodes = 0
failed_node_ids = set()
# Use a mixture of the old slug RE that still allows spaces in the slug
# name and the new RE that allows dashes.
old_slug_re = re.compile(r'@\[([a-zA-Z0-9_\- ]+)\]')
for proj in _db_projects(proj_url, all_projects, go=go):
proj_id = proj['_id']
proj_url = proj.get('url', '-no-url-')
nodes = nodes_coll.find({
'_deleted': {'$ne': True},
'project': proj_id,
'properties.attachments': {'$exists': True},
})
node_count = nodes.count()
if node_count == 0:
log.debug('Skipping project %s (%s)', proj_url, proj_id)
continue
proj_node_types = node_type_dict(proj)
for node in nodes:
attachments = node['properties']['attachments']
replaced = False
# Inner functions because of access to the node's attachments.
def replace(match):
nonlocal replaced
slug = match.group(1)
log.debug(' - OLD STYLE attachment slug %r', slug)
try:
att = attachments[slug]
except KeyError:
log.info("Attachment %r not found for node %s", slug, node['_id'])
link = ''
else:
link = att.get('link', '')
if link == 'self':
link = " link='self'"
elif link == 'custom':
url = att.get('link_custom')
if url:
link = " link='%s'" % html.escape(url)
replaced = True
return '{attachment %r%s}' % (slug.replace(' ', '-'), link)
def update_markdown(value: str) -> str:
return old_slug_re.sub(replace, value)
iter_markdown(proj_node_types, node, update_markdown)
# Remove no longer used properties from attachments
new_attachments = {}
for slug, attachment in attachments.items():
replaced |= 'link' in attachment # link_custom implies link
attachment.pop('link', None)
attachment.pop('link_custom', None)
new_attachments[slug.replace(' ', '-')] = attachment
node['properties']['attachments'] = new_attachments
if replaced:
total_nodes += 1
else:
# Nothing got replaced,
continue
if go:
# Use Eve to PUT, so we have schema checking.
db_node = remove_private_keys(node)
r, _, _, status = current_app.put_internal('nodes', db_node, _id=node['_id'])
if status != 200:
log.error('Error %i storing altered node %s %s', status, node['_id'], r)
failed_node_ids.add(node['_id'])
# raise SystemExit('Error storing node; see log.')
log.debug('Updated node %s: %s', node['_id'], r)
log.info('Project %s (%s) has %d nodes with attachments',
proj_url, proj_id, node_count)
log.info('%s %d nodes', 'Updated' if go else 'Would update', total_nodes)
if failed_node_ids:
log.warning('Failed to update %d of %d nodes: %s', len(failed_node_ids), total_nodes,
', '.join(str(nid) for nid in failed_node_ids))
 def _db_projects(proj_url: str, all_projects: bool, project_id='', *, go: bool) \
         -> typing.Iterable[dict]:
     """Yields a subset of the projects in the database.

@@ -924,14 +778,38 @@ def _db_projects(proj_url: str, all_projects: bool, project_id='', *, go: bool)
     log.info('Command took %s', duration)
def find_object_ids(something: typing.Any) -> typing.Iterable[bson.ObjectId]:
"""Generator, yields all ObjectIDs referenced by the given object.
Assumes 'something' comes from a MongoDB. This function wasn't made for
generic Python objects.
"""
if isinstance(something, bson.ObjectId):
yield something
elif isinstance(something, str) and len(something) == 24:
try:
yield bson.ObjectId(something)
except (bson.objectid.InvalidId, TypeError):
# It apparently wasn't an ObjectID after all.
pass
elif isinstance(something, (list, set, tuple)):
for item in something:
yield from find_object_ids(item)
elif isinstance(something, dict):
for item in something.keys():
yield from find_object_ids(item)
for item in something.values():
yield from find_object_ids(item)
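
A quick sanity sketch of what the generator yields (and ignores), now that it is a module-level function:

# Sanity sketch for find_object_ids(); 24-character hex strings count too.
import bson

from pillar.cli.maintenance import find_object_ids

oid = bson.ObjectId()
doc = {'file': oid, 'nested': {'ids': [str(oid), 'not-an-id']}}
assert set(find_object_ids(doc)) == {oid}  # ObjectId and its 24-char string form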
 def _find_orphan_files() -> typing.Set[bson.ObjectId]:
-    """Finds all non-referenced files for the given project.
+    """Finds all non-referenced files.

     Returns an iterable of all orphan file IDs.
     """
     log.debug('Finding orphan files')

-    # Get all file IDs that belong to this project.
+    # Get all file IDs and make a set; we'll remove any referenced object ID later.
     files_coll = current_app.db('files')
     cursor = files_coll.find({'_deleted': {'$ne': True}}, projection={'_id': 1})
     file_ids = {doc['_id'] for doc in cursor}

@@ -942,26 +820,10 @@ def _find_orphan_files() -> typing.Set[bson.ObjectId]:
     total_file_count = len(file_ids)
     log.debug('Found %d files in total', total_file_count)
def find_object_ids(something: typing.Any) -> typing.Iterable[bson.ObjectId]:
if isinstance(something, bson.ObjectId):
yield something
elif isinstance(something, str) and len(something) == 24:
try:
yield bson.ObjectId(something)
except (bson.objectid.InvalidId, TypeError):
# It apparently wasn't an ObjectID after all.
pass
elif isinstance(something, (list, set, tuple)):
for item in something:
yield from find_object_ids(item)
elif isinstance(something, dict):
for item in something.values():
yield from find_object_ids(item)
     # Find all references by iterating through the project itself and every document that has a
     # 'project' key set to this ObjectId.
     db = current_app.db()
-    for coll_name in sorted(db.collection_names(include_system_collections=False)):
+    for coll_name in sorted(db.list_collection_names()):
         if coll_name in ORPHAN_FINDER_SKIP_COLLECTIONS:
             continue

@@ -987,7 +849,6 @@ def find_orphan_files():
     This is a heavy operation that inspects *everything* in MongoDB. Use with care.
     """
     from jinja2.filters import do_filesizeformat
-    from pathlib import Path

     output_fpath = Path(current_app.config['STORAGE_DIR']) / 'orphan-files.txt'
     if output_fpath.exists():

@@ -1033,7 +894,6 @@ def delete_orphan_files():
     Use 'find_orphan_files' first to generate orphan-files.txt.
     """
     import pymongo.results
-    from pathlib import Path

     output_fpath = Path(current_app.config['STORAGE_DIR']) / 'orphan-files.txt'
     with output_fpath.open('r', encoding='ascii') as infile:

@@ -1064,3 +924,410 @@ def delete_orphan_files():
         log.warning('Soft-deletion modified %d of %d files', res.modified_count, file_count)

     log.info('%d files have been soft-deleted', res.modified_count)
@manager_maintenance.command
def find_video_files_without_duration():
"""Finds video files without any duration
This is a heavy operation. Use with care.
"""
output_fpath = Path(current_app.config['STORAGE_DIR']) / 'video_files_without_duration.txt'
if output_fpath.exists():
log.error('Output filename %s already exists, remove it first.', output_fpath)
return 1
start_timestamp = datetime.datetime.now()
files_coll = current_app.db('files')
starts_with_video = re.compile("^video", re.IGNORECASE)
aggr = files_coll.aggregate([
{'$match': {'content_type': starts_with_video,
'_deleted': {'$ne': True}}},
{'$unwind': '$variations'},
{'$match': {
'variations.duration': {'$not': {'$gt': 0}}
}},
{'$project': {'_id': 1}}
])
file_ids = [str(f['_id']) for f in aggr]
nbr_files = len(file_ids)
log.info('Total nbr video files without duration: %d', nbr_files)
end_timestamp = datetime.datetime.now()
duration = end_timestamp - start_timestamp
log.info('Finding files took %s', duration)
log.info('Writing Object IDs to %s', output_fpath)
with output_fpath.open('w', encoding='ascii') as outfile:
outfile.write('\n'.join(sorted(file_ids)))
@manager_maintenance.command
def find_video_nodes_without_duration():
"""Finds video nodes without any duration
This is a heavy operation. Use with care.
"""
output_fpath = Path(current_app.config['STORAGE_DIR']) / 'video_nodes_without_duration.txt'
if output_fpath.exists():
log.error('Output filename %s already exists, remove it first.', output_fpath)
return 1
start_timestamp = datetime.datetime.now()
nodes_coll = current_app.db('nodes')
aggr = nodes_coll.aggregate([
{'$match': {'node_type': 'asset',
'properties.content_type': 'video',
'_deleted': {'$ne': True},
'properties.duration_seconds': {'$not': {'$gt': 0}}}},
{'$project': {'_id': 1}}
])
file_ids = [str(f['_id']) for f in aggr]
nbr_files = len(file_ids)
log.info('Total nbr video nodes without duration: %d', nbr_files)
end_timestamp = datetime.datetime.now()
duration = end_timestamp - start_timestamp
log.info('Finding nodes took %s', duration)
log.info('Writing Object IDs to %s', output_fpath)
with output_fpath.open('w', encoding='ascii') as outfile:
outfile.write('\n'.join(sorted(file_ids)))
@manager_maintenance.option('-n', '--nodes', dest='nodes_to_update', nargs='*',
help='List of nodes to update')
@manager_maintenance.option('-a', '--all', dest='all_nodes', action='store_true', default=False,
help='Update on all video nodes.')
@manager_maintenance.option('-g', '--go', dest='go', action='store_true', default=False,
help='Actually perform the changes (otherwise just show as dry-run).')
def reconcile_node_video_duration(nodes_to_update=None, all_nodes=False, go=False):
"""Copy video duration from file.variations.duration to node.properties.duraion_seconds
This is a heavy operation. Use with care.
"""
from pillar.api.utils import random_etag, utcnow
if bool(nodes_to_update) == all_nodes:
log.error('Use either --nodes or --all.')
return 1
start_timestamp = datetime.datetime.now()
nodes_coll = current_app.db('nodes')
node_subset = []
if nodes_to_update:
node_subset = [{'$match': {'_id': {'$in': [ObjectId(nid) for nid in nodes_to_update]}}}]
files = nodes_coll.aggregate(
[
*node_subset,
{'$match': {
'node_type': 'asset',
'properties.content_type': 'video',
'_deleted': {'$ne': True}}
},
{'$lookup': {
'from': 'files',
'localField': 'properties.file',
'foreignField': '_id',
'as': '_files',
}},
{'$unwind': '$_files'},
{'$unwind': '$_files.variations'},
{'$match': {'_files.variations.duration': {'$gt': 0}}},
{'$addFields': {
'need_update': {
'$ne': ['$_files.variations.duration', '$properties.duration_seconds']}
}},
{'$match': {'need_update': True}},
{'$project': {
'_id': 1,
'duration': '$_files.variations.duration',
}}]
)
if not go:
log.info('Would try to update %d nodes', len(list(files)))
return 0
modified_count = 0
for f in files:
log.debug('Updating node %s with duration %d', f['_id'], f['duration'])
new_etag = random_etag()
now = utcnow()
resp = nodes_coll.update_one(
{'_id': f['_id']},
{'$set': {
'properties.duration_seconds': f['duration'],
'_etag': new_etag,
'_updated': now,
}}
)
if resp.modified_count == 0:
log.debug('Node %s was already up to date', f['_id'])
modified_count += resp.modified_count
log.info('Updated %d nodes', modified_count)
end_timestamp = datetime.datetime.now()
duration = end_timestamp - start_timestamp
log.info('Operation took %s', duration)
return 0
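For reference, a dry run followed by a real run of this command, using the manage.py maintenance entry point referenced elsewhere in this changeset (the exact invocation is an assumption, not part of this diff):
./manage.py maintenance reconcile_node_video_duration --all
./manage.py maintenance reconcile_node_video_duration --all --go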
@manager_maintenance.option('-g', '--go', dest='go', action='store_true', default=False,
help='Actually perform the changes (otherwise just show as dry-run).')
def delete_projectless_files(go=False):
"""Soft-deletes file documents of projects that have been deleted.
WARNING: this also soft-deletes file documents that do not have a project
property at all.
"""
start_timestamp = datetime.datetime.now()
files_coll = current_app.db('files')
aggr = files_coll.aggregate([
{'$match': {'_deleted': {'$ne': True}}},
{'$lookup': {
'from': 'projects',
'localField': 'project',
'foreignField': '_id',
'as': '_project'
}},
{'$match': {'$or': [
{'_project': []},
{'_project._deleted': True},
]}},
{'$project': {'_id': True}},
])
files_to_delete: typing.List[ObjectId] = [doc['_id'] for doc in aggr]
orphan_count = len(files_to_delete)
total_count = files_coll.count_documents({'_deleted': {'$ne': True}})
log.info('Total nr of orphan files: %d', orphan_count)
log.info('Total nr of files : %d', total_count)
log.info('Orphan percentage : %d%%', 100 * orphan_count / total_count)
if go:
log.info('Soft-deleting all %d projectless files', orphan_count)
now = pillar.api.utils.utcnow()
etag = pillar.api.utils.random_etag()
result = files_coll.update_many(
{'_id': {'$in': files_to_delete}},
{'$set': {
'_deleted': True,
'_updated': now,
'_etag': etag,
}},
)
log.info('Matched count: %d', result.matched_count)
log.info('Modified count: %d', result.modified_count)
end_timestamp = datetime.datetime.now()
duration = end_timestamp - start_timestamp
if go:
verb = 'Soft-deleting'
else:
verb = 'Finding'
log.info('%s orphans took %s', verb, duration)
@manager_maintenance.command
def find_projects_for_files():
"""For file documents without project, tries to find in which project files are used.
This is a heavy operation that inspects *everything* in MongoDB. Use with care.
"""
output_fpath = Path(current_app.config['STORAGE_DIR']) / 'files-without-project.json'
if output_fpath.exists():
log.error('Output filename %s already exists, remove it first.', output_fpath)
return 1
start_timestamp = datetime.datetime.now()
log.info('Finding files to fix...')
files_coll = current_app.db('files')
query = {'project': {'$exists': False},
'_deleted': {'$ne': True}}
files_to_fix = {file_doc['_id']: None for file_doc in files_coll.find(query)}
if not files_to_fix:
log.info('No files without projects found, congratulations.')
return 0
# Find all references by iterating through every node and project, and
# hoping that they reference the file.
projects_coll = current_app.db('projects')
existing_projects: typing.MutableSet[ObjectId] = set()
for doc in projects_coll.find():
project_id = doc['_id']
existing_projects.add(project_id)
for obj_id in find_object_ids(doc):
if obj_id not in files_to_fix:
continue
files_to_fix[obj_id] = project_id
nodes_coll = current_app.db('nodes')
for doc in nodes_coll.find():
project_id = doc.get('project')
if not project_id:
log.warning('Skipping node %s, as it is not part of any project', doc['_id'])
continue
if project_id not in existing_projects:
log.warning('Skipping node %s, as its project %s does not exist',
doc['_id'], project_id)
continue
for obj_id in find_object_ids(doc):
if obj_id not in files_to_fix:
continue
files_to_fix[obj_id] = project_id
orphans = {oid for oid, project_id in files_to_fix.items()
if project_id is None}
fixable = {str(oid): str(project_id)
for oid, project_id in files_to_fix.items()
if project_id is not None}
log.info('Total nr of orphan files : %d', len(orphans))
log.info('Total nr of fixable files: %d', len(fixable))
projects = set(fixable.values())
log.info('Fixable project count : %d', len(projects))
for project_id in projects:
project = projects_coll.find_one(ObjectId(project_id))
log.info(' - %40s /p/%-20s created on %s, ',
project['name'], project['url'], project['_created'])
end_timestamp = datetime.datetime.now()
duration = end_timestamp - start_timestamp
log.info('Finding projects took %s', duration)
log.info('Writing {file_id: project_id} mapping to %s', output_fpath)
with output_fpath.open('w', encoding='ascii') as outfile:
json.dump(fixable, outfile, indent=4, sort_keys=True)
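The resulting files-without-project.json is a flat {file_id: project_id} mapping of string ObjectIds; an illustrative sketch with made-up IDs:
{
    "5a1f9bdcdeadbeefcafef00d": "5672beecc0261b2005ed1a33",
    "5a1f9be0deadbeefcafef00e": "5672beecc0261b2005ed1a33"
}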
@manager_maintenance.option('filepath', type=Path,
help='JSON file produced by find_projects_for_files')
@manager_maintenance.option('-g', '--go', dest='go', action='store_true', default=False,
help='Actually perform the changes (otherwise just show as dry-run).')
def fix_projects_for_files(filepath: Path, go=False):
"""Assigns file documents to projects.
Use 'manage.py maintenance find_projects_for_files' to produce the JSON
file that contains the file ID to project ID mapping.
"""
log.info('Loading %s', filepath)
with filepath.open('r', encoding='ascii') as infile:
mapping: typing.Mapping[str, str] = json.load(infile)
# Group IDs per project for more efficient querying.
log.info('Grouping per project')
project_to_file_ids: typing.Mapping[ObjectId, typing.List[ObjectId]] = \
collections.defaultdict(list)
for file_id, project_id in mapping.items():
project_to_file_ids[ObjectId(project_id)].append(ObjectId(file_id))
MockUpdateResult = collections.namedtuple('MockUpdateResult', 'matched_count modified_count')
files_coll = current_app.db('files')
total_matched = total_modified = 0
for project_oid, file_oids in project_to_file_ids.items():
query = {'_id': {'$in': file_oids}}
if go:
result = files_coll.update_many(query, {'$set': {'project': project_oid}})
else:
found = files_coll.count_documents(query)
result = MockUpdateResult(found, 0)
total_matched += result.matched_count
total_modified += result.modified_count
if result.matched_count != len(file_oids):
log.warning('Matched only %d of %d files; modified %d; for project %s',
result.matched_count, len(file_oids), result.modified_count, project_oid)
else:
log.info('Matched all %d files; modified %d; for project %s',
result.matched_count, result.modified_count, project_oid)
log.info('Done updating %d files (found %d, modified %d) on %d projects',
len(mapping), total_matched, total_modified, len(project_to_file_ids))
@manager_maintenance.option('-u', '--user', dest='user', nargs='?',
help='Update subscriptions for single user.')
@manager_maintenance.option('-o', '--object', dest='context_object', nargs='?',
help='Update subscriptions for context_object.')
@manager_maintenance.option('-g', '--go', dest='go', action='store_true', default=False,
help='Actually perform the changes (otherwise just show as dry-run).')
def fix_missing_activities_subscription_defaults(user=None, context_object=None, go=False):
"""Assign default values to activities-subscriptions documents where values are missing.
"""
subscriptions_collection = current_app.db('activities-subscriptions')
lookup_is_subscribed = {
'is_subscribed': {'$exists': False},
}
lookup_notifications = {
'notifications.web': {'$exists': False},
}
if user:
lookup_is_subscribed['user'] = ObjectId(user)
lookup_notifications['user'] = ObjectId(user)
if context_object:
lookup_is_subscribed['context_object'] = ObjectId(context_object)
lookup_notifications['context_object'] = ObjectId(context_object)
num_need_is_subscribed_update = subscriptions_collection.count_documents(lookup_is_subscribed)
log.info("Found %d documents that needs to be update 'is_subscribed'", num_need_is_subscribed_update)
num_need_notification_web_update = subscriptions_collection.count_documents(lookup_notifications)
log.info("Found %d documents that needs to be update 'notifications.web'", num_need_notification_web_update)
if not go:
return
if num_need_is_subscribed_update > 0:
log.info("Updating 'is_subscribed'")
resp = subscriptions_collection.update_many(
lookup_is_subscribed,
{
'$set': {'is_subscribed': True}
},
upsert=False
)
if resp.modified_count != num_need_is_subscribed_update:
log.warning("Expected % documents to be update, was %d",
num_need_is_subscribed_update, resp['nModified'])
if num_need_notification_web_update > 0:
log.info("Updating 'notifications.web'")
resp = subscriptions_collection.update_many(
lookup_notifications,
{
'$set': {'notifications.web': True}
},
upsert=False
)
if resp.modified_count != num_need_notification_web_update:
log.warning("Expected % documents to be update, was %d",
num_need_notification_web_update, resp['nModified'])
log.info("Done updating 'activities-subscriptions' documents")

View File

@ -165,49 +165,6 @@ def merge_project(src_proj_url, dest_proj_url):
log.info('Done moving.') log.info('Done moving.')
@manager_operations.command
def index_users_rebuild():
"""Clear users index, update settings and reindex all users."""
import concurrent.futures
from pillar.api.utils.algolia import algolia_index_user_save
users_index = current_app.algolia_index_users
if users_index is None:
log.error('Algolia is not configured properly, unable to do anything!')
return 1
log.info('Dropping existing index: %s', users_index)
users_index.clear_index()
index_users_update_settings()
db = current_app.db()
users = db['users'].find({'_deleted': {'$ne': True}})
user_count = users.count()
log.info('Reindexing all %i users', user_count)
real_current_app = current_app._get_current_object()._get_current_object()
def do_user(user):
with real_current_app.app_context():
algolia_index_user_save(user)
with concurrent.futures.ThreadPoolExecutor(max_workers=10) as executor:
future_to_user = {executor.submit(do_user, user): user
for user in users}
for idx, future in enumerate(concurrent.futures.as_completed(future_to_user)):
user = future_to_user[future]
user_ident = user.get('email') or user.get('_id')
try:
future.result()
except Exception:
log.exception('Error updating user %i/%i %s', idx + 1, user_count, user_ident)
else:
log.info('Updated user %i/%i %s', idx + 1, user_count, user_ident)
@manager_operations.command @manager_operations.command
def index_users_update_settings(): def index_users_update_settings():
"""Configure indexing backend as required by the project""" """Configure indexing backend as required by the project"""
@ -234,7 +191,7 @@ def hash_auth_tokens():
tokens_coll = current_app.db('tokens') tokens_coll = current_app.db('tokens')
query = {'token': {'$exists': True}} query = {'token': {'$exists': True}}
cursor = tokens_coll.find(query, projection={'token': 1, '_id': 1}) cursor = tokens_coll.find(query, projection={'token': 1, '_id': 1})
log.info('Updating %d tokens', cursor.count()) log.info('Updating %d tokens', tokens_coll.count_documents(query))
for token_doc in cursor: for token_doc in cursor:
hashed_token = hash_auth_token(token_doc['token']) hashed_token = hash_auth_token(token_doc['token'])
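hash_auth_token itself is outside this diff; given the AUTH_TOKEN_HMAC_KEY setting further down (an empty value falls back to the UTF8-encoded SECRET_KEY), a minimal sketch of what such a helper could look like, with the key lookup and digest choice as assumptions:

import base64
import hashlib
import hmac

from pillar import current_app

def hash_auth_token(token: str) -> str:
    # Assumed HMAC-SHA256 over the token, base64-encoded; a sketch, not necessarily the real implementation.
    key = current_app.config['AUTH_TOKEN_HMAC_KEY'] or current_app.config['SECRET_KEY'].encode('utf8')
    digest = hmac.new(key, token.encode('ascii'), hashlib.sha256).digest()
    return base64.b64encode(digest).decode('ascii')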

View File

@ -1,6 +1,8 @@
from collections import defaultdict
import datetime
import os.path import os.path
from os import getenv from os import getenv
from collections import defaultdict
import requests.certs import requests.certs
# Certificate file for communication with other systems. # Certificate file for communication with other systems.
@ -29,10 +31,11 @@ DEBUG = False
SECRET_KEY = '' SECRET_KEY = ''
# Authentication token hashing key. If empty falls back to UTF8-encoded SECRET_KEY with a warning. # Authentication token hashing key. If empty falls back to UTF8-encoded SECRET_KEY with a warning.
# Not used to hash new tokens, but it is used to check pre-existing hashed tokens.
AUTH_TOKEN_HMAC_KEY = b'' AUTH_TOKEN_HMAC_KEY = b''
# Authentication settings # Authentication settings
BLENDER_ID_ENDPOINT = 'http://id.local:8000' BLENDER_ID_ENDPOINT = 'http://id.local:8000/'
CDN_USE_URL_SIGNING = True CDN_USE_URL_SIGNING = True
CDN_SERVICE_DOMAIN_PROTOCOL = 'https' CDN_SERVICE_DOMAIN_PROTOCOL = 'https'
@ -192,7 +195,7 @@ BLENDER_CLOUD_ADDON_VERSION = '1.4'
TLS_CERT_FILE = requests.certs.where() TLS_CERT_FILE = requests.certs.where()
CELERY_BACKEND = 'redis://redis/1' CELERY_BACKEND = 'redis://redis/1'
CELERY_BROKER = 'amqp://guest:guest@rabbit//' CELERY_BROKER = 'redis://redis/2'
# This configures the Celery task scheduler in such a way that we don't # This configures the Celery task scheduler in such a way that we don't
# have to import the pillar.celery.XXX modules. Remember to run # have to import the pillar.celery.XXX modules. Remember to run
@ -203,8 +206,20 @@ CELERY_BEAT_SCHEDULE = {
'schedule': 600, # every N seconds 'schedule': 600, # every N seconds
'args': ('gcs', 100) 'args': ('gcs', 100)
}, },
'refresh-blenderid-badges': {
'task': 'pillar.celery.badges.sync_badges_for_users',
'schedule': 10 * 60, # every N seconds
'args': (9 * 60, ), # time limit in seconds, keep shorter than 'schedule'
}
} }
# Badges will be re-fetched every timedelta.
# TODO(Sybren): A proper value should be determined after we actually have users with badges.
BLENDER_ID_BADGE_EXPIRY = datetime.timedelta(hours=4)
# How many times the Celery task for downloading an avatar is retried.
AVATAR_DOWNLOAD_CELERY_RETRY = 3
# Mapping from user role to capabilities obtained by users with that role. # Mapping from user role to capabilities obtained by users with that role.
USER_CAPABILITIES = defaultdict(**{ USER_CAPABILITIES = defaultdict(**{
'subscriber': {'subscriber', 'home-project'}, 'subscriber': {'subscriber', 'home-project'},
@ -257,3 +272,14 @@ STATIC_FILE_HASH = ''
# all API endpoints do not need it. On the views that require it, we use the # all API endpoints do not need it. On the views that require it, we use the
# current_app.csrf.protect() method. # current_app.csrf.protect() method.
WTF_CSRF_CHECK_DEFAULT = False WTF_CSRF_CHECK_DEFAULT = False
# Flask Debug Toolbar. Enable it by overriding DEBUG_TB_ENABLED in config_local.py.
DEBUG_TB_ENABLED = False
DEBUG_TB_PANELS = [
'flask_debugtoolbar.panels.versions.VersionDebugPanel',
'flask_debugtoolbar.panels.headers.HeaderDebugPanel',
'flask_debugtoolbar.panels.request_vars.RequestVarsDebugPanel',
'flask_debugtoolbar.panels.config_vars.ConfigVarsDebugPanel',
'flask_debugtoolbar.panels.template.TemplateDebugPanel',
'flask_debugtoolbar.panels.logger.LoggingPanel',
'flask_debugtoolbar.panels.route_list.RouteListDebugPanel']
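As the comment above notes, the toolbar stays disabled unless a deployment overrides it locally, e.g. in config_local.py (file name taken from the comment; contents illustrative):

# config_local.py -- local development only
DEBUG_TB_ENABLED = True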

View File

@ -4,7 +4,7 @@ This is for user-generated stuff, like comments.
""" """
import bleach import bleach
import CommonMark import commonmark
from . import shortcodes from . import shortcodes
@ -44,7 +44,7 @@ ALLOWED_STYLES = [
def markdown(s: str) -> str: def markdown(s: str) -> str:
commented_shortcodes = shortcodes.comment_shortcodes(s) commented_shortcodes = shortcodes.comment_shortcodes(s)
tainted_html = CommonMark.commonmark(commented_shortcodes) tainted_html = commonmark.commonmark(commented_shortcodes)
# Create a Cleaner that supports parsing of bare links (see filters). # Create a Cleaner that supports parsing of bare links (see filters).
cleaner = bleach.Cleaner(tags=ALLOWED_TAGS, cleaner = bleach.Cleaner(tags=ALLOWED_TAGS,

View File

@ -1,3 +1,5 @@
import flask
import raven.breadcrumbs
from raven.contrib.flask import Sentry from raven.contrib.flask import Sentry
from .auth import current_user from .auth import current_user
@ -14,16 +16,14 @@ class PillarSentry(Sentry):
def init_app(self, app, *args, **kwargs): def init_app(self, app, *args, **kwargs):
super().init_app(app, *args, **kwargs) super().init_app(app, *args, **kwargs)
# We perform authentication of the user while handling the request, flask.request_started.connect(self.__add_sentry_breadcrumbs, self)
# so Sentry calls get_user_info() too early.
def get_user_context_again(self, ): def __add_sentry_breadcrumbs(self, sender, **extra):
from flask import request raven.breadcrumbs.record(
message='Request started',
try: category='http',
self.client.user_context(self.get_user_info(request)) data={'url': flask.request.url}
except Exception as e: )
self.client.logger.exception(str(e))
def get_user_info(self, request): def get_user_info(self, request):
user_info = super().get_user_info(request) user_info = super().get_user_info(request)

View File

@ -162,9 +162,12 @@ class YouTube:
if not youtube_id: if not youtube_id:
return html_module.escape('{youtube invalid YouTube ID/URL}') return html_module.escape('{youtube invalid YouTube ID/URL}')
src = f'https://www.youtube.com/embed/{youtube_id}?rel=0' src = f'https://www.youtube.com/embed/{youtube_id}?rel=0'
html = f'<iframe class="shortcode youtube" width="{width}" height="{height}" src="{src}"' \ html = f'<div class="embed-responsive embed-responsive-16by9">' \
f' frameborder="0" allow="autoplay; encrypted-media" allowfullscreen></iframe>' f'<iframe class="shortcode youtube embed-responsive-item"' \
f' width="{width}" height="{height}" src="{src}"' \
f' frameborder="0" allow="autoplay; encrypted-media" allowfullscreen></iframe>' \
f'</div>'
return html return html
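Hedged before/after sketch of this change (the video ID and the 560x315 size are illustrative; only the wrapper div is new):

{youtube dQw4w9WgXcQ}
# old: <iframe class="shortcode youtube" width="560" height="315" src="https://www.youtube.com/embed/dQw4w9WgXcQ?rel=0" frameborder="0" allow="autoplay; encrypted-media" allowfullscreen></iframe>
# new: <div class="embed-responsive embed-responsive-16by9"><iframe class="shortcode youtube embed-responsive-item" width="560" height="315" src="https://www.youtube.com/embed/dQw4w9WgXcQ?rel=0" frameborder="0" allow="autoplay; encrypted-media" allowfullscreen></iframe></div>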
@ -225,12 +228,25 @@ class Attachment:
return self.render(file_doc, pargs, kwargs) return self.render(file_doc, pargs, kwargs)
def sdk_file(self, slug: str, node_properties: dict) -> pillarsdk.File: def sdk_file(self, slug: str, document: dict) -> pillarsdk.File:
"""Return the file document for the attachment with this slug.""" """Return the file document for the attachment with this slug."""
from pillar.web import system_util from pillar.web import system_util
attachments = node_properties.get('attachments', {}) # TODO (fsiddi) Make explicit what 'document' is.
# In some cases we pass the entire node or project documents, in other cases
# we pass node.properties. This should be unified at the level of do_markdown.
# For now we do a quick hack and first look for 'properties' in the doc,
# then we look for 'attachments'.
doc_properties = document.get('properties')
if doc_properties:
# We passed an entire document (all nodes must have 'properties')
attachments = doc_properties.get('attachments', {})
else:
# The value of document could have been defined as 'node.properties'
attachments = document.get('attachments', {})
attachment = attachments.get(slug) attachment = attachments.get(slug)
if not attachment: if not attachment:
raise self.NoSuchSlug(slug) raise self.NoSuchSlug(slug)

View File

@ -1,6 +1,7 @@
# -*- encoding: utf-8 -*- # -*- encoding: utf-8 -*-
import base64 import base64
import contextlib
import copy import copy
import datetime import datetime
import json import json
@ -10,11 +11,7 @@ import pathlib
import sys import sys
import typing import typing
import unittest.mock import unittest.mock
from urllib.parse import urlencode, urljoin
try:
from urllib.parse import urlencode
except ImportError:
from urllib.parse import urlencode
from bson import ObjectId, tz_util from bson import ObjectId, tz_util
@ -27,6 +24,7 @@ from eve.tests import TestMinimal
import pymongo.collection import pymongo.collection
from flask.testing import FlaskClient from flask.testing import FlaskClient
import flask.ctx import flask.ctx
import flask.wrappers
import responses import responses
import pillar import pillar
@ -176,6 +174,10 @@ class AbstractPillarTest(TestMinimal):
for modname in remove: for modname in remove:
del sys.modules[modname] del sys.modules[modname]
def url_for(self, endpoint, **values):
with self.app.app_context():
return flask.url_for(endpoint, **values)
def ensure_file_exists(self, file_overrides=None, *, example_file=None) -> (ObjectId, dict): def ensure_file_exists(self, file_overrides=None, *, example_file=None) -> (ObjectId, dict):
if example_file is None: if example_file is None:
example_file = ctd.EXAMPLE_FILE example_file = ctd.EXAMPLE_FILE
@ -185,7 +187,7 @@ class AbstractPillarTest(TestMinimal):
else: else:
self.ensure_project_exists() self.ensure_project_exists()
with self.app.test_request_context(): with self.app.app_context():
files_collection = self.app.data.driver.db['files'] files_collection = self.app.data.driver.db['files']
assert isinstance(files_collection, pymongo.collection.Collection) assert isinstance(files_collection, pymongo.collection.Collection)
@ -326,15 +328,48 @@ class AbstractPillarTest(TestMinimal):
return user return user
def create_valid_auth_token(self, user_id, token='token'): @contextlib.contextmanager
def login_as(self, user_id: typing.Union[str, ObjectId]):
"""Context manager, within the context the app context is active and the user logged in.
The logging-in happens when a request starts, so it's only active when
e.g. self.get() or self.post() or somesuch request is used.
"""
from pillar.auth import UserClass, login_user_object
if isinstance(user_id, str):
user_oid = ObjectId(user_id)
elif isinstance(user_id, ObjectId):
user_oid = user_id
else:
raise TypeError(f'invalid type {type(user_id)} for parameter user_id')
user_doc = self.fetch_user_from_db(user_oid)
def signal_handler(sender, **kwargs):
login_user_object(user)
with self.app.app_context():
user = UserClass.construct('', user_doc)
with flask.request_started.connected_to(signal_handler, self.app):
yield
# TODO: rename to 'create_auth_token' now that 'expire_in_days' can be negative.
def create_valid_auth_token(self,
user_id: typing.Union[str, ObjectId],
token='token',
*,
oauth_scopes: typing.Optional[typing.List[str]] = None,
expire_in_days=1) -> dict:
from pillar.api.utils import utcnow from pillar.api.utils import utcnow
future = utcnow() + datetime.timedelta(days=1) if isinstance(user_id, str):
user_id = ObjectId(user_id)
future = utcnow() + datetime.timedelta(days=expire_in_days)
with self.app.test_request_context(): with self.app.test_request_context():
from pillar.api.utils import authentication as auth from pillar.api.utils import authentication as auth
token_data = auth.store_token(user_id, token, future, None) token_data = auth.store_token(user_id, token, future, oauth_scopes=oauth_scopes)
return token_data return token_data
@ -364,7 +399,7 @@ class AbstractPillarTest(TestMinimal):
return user_id return user_id
def create_node(self, node_doc): def create_node(self, node_doc) -> ObjectId:
"""Creates a node, returning its ObjectId. """ """Creates a node, returning its ObjectId. """
with self.app.test_request_context(): with self.app.test_request_context():
@ -406,7 +441,7 @@ class AbstractPillarTest(TestMinimal):
"""Sets up Responses to mock unhappy validation flow.""" """Sets up Responses to mock unhappy validation flow."""
responses.add(responses.POST, responses.add(responses.POST,
'%s/u/validate_token' % self.app.config['BLENDER_ID_ENDPOINT'], urljoin(self.app.config['BLENDER_ID_ENDPOINT'], 'u/validate_token'),
json={'status': 'fail'}, json={'status': 'fail'},
status=403) status=403)
@ -414,7 +449,7 @@ class AbstractPillarTest(TestMinimal):
"""Sets up Responses to mock happy validation flow.""" """Sets up Responses to mock happy validation flow."""
responses.add(responses.POST, responses.add(responses.POST,
'%s/u/validate_token' % self.app.config['BLENDER_ID_ENDPOINT'], urljoin(self.app.config['BLENDER_ID_ENDPOINT'], 'u/validate_token'),
json=BLENDER_ID_USER_RESPONSE, json=BLENDER_ID_USER_RESPONSE,
status=200) status=200)
@ -485,11 +520,10 @@ class AbstractPillarTest(TestMinimal):
def client_request(self, method, path, qs=None, expected_status=200, auth_token=None, json=None, def client_request(self, method, path, qs=None, expected_status=200, auth_token=None, json=None,
data=None, headers=None, files=None, content_type=None, etag=None, data=None, headers=None, files=None, content_type=None, etag=None,
environ_overrides=None): environ_overrides=None) -> flask.wrappers.Response:
"""Performs a HTTP request to the server.""" """Performs a HTTP request to the server."""
from pillar.api.utils import dumps from pillar.api.utils import dumps
import json as mod_json
headers = headers or {} headers = headers or {}
environ_overrides = environ_overrides or {} environ_overrides = environ_overrides or {}
@ -522,29 +556,21 @@ class AbstractPillarTest(TestMinimal):
expected_status, resp.status_code, resp.data expected_status, resp.status_code, resp.data
)) ))
def get_json():
if resp.mimetype != 'application/json':
raise TypeError('Unable to load JSON from mimetype %r' % resp.mimetype)
return mod_json.loads(resp.data)
resp.json = get_json
resp.get_json = get_json
return resp return resp
def get(self, *args, **kwargs): def get(self, *args, **kwargs) -> flask.wrappers.Response:
return self.client_request('GET', *args, **kwargs) return self.client_request('GET', *args, **kwargs)
def post(self, *args, **kwargs): def post(self, *args, **kwargs) -> flask.wrappers.Response:
return self.client_request('POST', *args, **kwargs) return self.client_request('POST', *args, **kwargs)
def put(self, *args, **kwargs): def put(self, *args, **kwargs) -> flask.wrappers.Response:
return self.client_request('PUT', *args, **kwargs) return self.client_request('PUT', *args, **kwargs)
def delete(self, *args, **kwargs): def delete(self, *args, **kwargs) -> flask.wrappers.Response:
return self.client_request('DELETE', *args, **kwargs) return self.client_request('DELETE', *args, **kwargs)
def patch(self, *args, **kwargs): def patch(self, *args, **kwargs) -> flask.wrappers.Response:
return self.client_request('PATCH', *args, **kwargs) return self.client_request('PATCH', *args, **kwargs)
def assertAllowsAccess(self, def assertAllowsAccess(self,
@ -561,7 +587,7 @@ class AbstractPillarTest(TestMinimal):
raise TypeError('expected_user_id should be a string or ObjectId, ' raise TypeError('expected_user_id should be a string or ObjectId, '
f'but is {expected_user_id!r}') f'but is {expected_user_id!r}')
resp = self.get('/api/users/me', expected_status=200, auth_token=token).json() resp = self.get('/api/users/me', expected_status=200, auth_token=token).get_json()
if expected_user_id: if expected_user_id:
self.assertEqual(resp['_id'], str(expected_user_id)) self.assertEqual(resp['_id'], str(expected_user_id))
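Combining the new helpers, a typical test could read like this sketch (create_user and the /api/users/me endpoint are assumed from the wider test suite, not defined in this hunk):

def test_users_me(self):
    user_id = self.create_user()
    with self.login_as(user_id):
        resp = self.get('/api/users/me')
    self.assertEqual(resp.get_json()['_id'], str(user_id))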

View File

@ -73,9 +73,9 @@ EXAMPLE_PROJECT = {
'nodes_featured': [], 'nodes_featured': [],
'nodes_latest': [], 'nodes_latest': [],
'permissions': {'groups': [{'group': EXAMPLE_ADMIN_GROUP_ID, 'permissions': {'groups': [{'group': EXAMPLE_ADMIN_GROUP_ID,
'methods': ['GET', 'POST', 'PUT', 'DELETE']}], 'methods': ['GET', 'POST', 'PUT', 'DELETE']}],
'users': [], 'users': [],
'world': ['GET']}, 'world': ['GET']},
'picture_header': ObjectId('5673f260c379cf0007b31bc4'), 'picture_header': ObjectId('5673f260c379cf0007b31bc4'),
'picture_square': ObjectId('5673f256c379cf0007b31bc3'), 'picture_square': ObjectId('5673f256c379cf0007b31bc3'),
'status': 'published', 'status': 'published',

View File

@ -1,9 +1,9 @@
"""Flask configuration file for unit testing.""" """Flask configuration file for unit testing."""
BLENDER_ID_ENDPOINT = 'http://id.local:8001' # Non-existent server BLENDER_ID_ENDPOINT = 'http://id.local:8001/' # Non-existent server
SERVER_NAME = 'localhost' SERVER_NAME = 'localhost.local'
PILLAR_SERVER_ENDPOINT = 'http://localhost/api/' PILLAR_SERVER_ENDPOINT = 'http://localhost.local/api/'
MAIN_PROJECT_ID = '5672beecc0261b2005ed1a33' MAIN_PROJECT_ID = '5672beecc0261b2005ed1a33'
@ -44,3 +44,5 @@ ELASTIC_INDICES = {
# MUST be 8 characters long, see pillar.flask_extra.HashedPathConverter # MUST be 8 characters long, see pillar.flask_extra.HashedPathConverter
STATIC_FILE_HASH = 'abcd1234' STATIC_FILE_HASH = 'abcd1234'
CACHE_NO_NULL_WARNING = True

View File

@ -1,6 +1,7 @@
from pillar.api.eve_settings import * from pillar.api.eve_settings import *
MONGO_DBNAME = 'pillar_test' MONGO_DBNAME = 'pillar_test'
MONGO_USERNAME = None
def override_eve(): def override_eve():
@ -10,5 +11,7 @@ def override_eve():
test_settings.MONGO_HOST = MONGO_HOST test_settings.MONGO_HOST = MONGO_HOST
test_settings.MONGO_PORT = MONGO_PORT test_settings.MONGO_PORT = MONGO_PORT
test_settings.MONGO_DBNAME = MONGO_DBNAME test_settings.MONGO_DBNAME = MONGO_DBNAME
test_settings.MONGO1_USERNAME = MONGO_USERNAME
tests.MONGO_HOST = MONGO_HOST tests.MONGO_HOST = MONGO_HOST
tests.MONGO_DBNAME = MONGO_DBNAME tests.MONGO_DBNAME = MONGO_DBNAME
tests.MONGO_USERNAME = MONGO_USERNAME

View File

@ -10,9 +10,12 @@ import flask_login
import jinja2.filters import jinja2.filters
import jinja2.utils import jinja2.utils
import werkzeug.exceptions as wz_exceptions import werkzeug.exceptions as wz_exceptions
from werkzeug.local import LocalProxy
import pillarsdk import pillarsdk
import pillar.api.utils import pillar.api.utils
from pillar.api.utils import pretty_duration
from pillar.web.utils import pretty_date from pillar.web.utils import pretty_date
from pillar.web.nodes.routes import url_for_node from pillar.web.nodes.routes import url_for_node
import pillar.markdown import pillar.markdown
@ -28,6 +31,14 @@ def format_pretty_date_time(d):
return pretty_date(d, detail=True) return pretty_date(d, detail=True)
def format_pretty_duration(s):
return pretty_duration(s)
def format_pretty_duration_fractional(s):
return pillar.api.utils.pretty_duration_fractional(s)
def format_undertitle(s): def format_undertitle(s):
"""Underscore-replacing title filter. """Underscore-replacing title filter.
@ -200,9 +211,23 @@ def do_yesno(value, arg=None):
return no return no
def do_json(some_object: typing.Any) -> str:
import pillar.auth
if isinstance(some_object, LocalProxy):
return do_json(some_object._get_current_object())
if isinstance(some_object, pillarsdk.Resource):
some_object = some_object.to_dict()
if isinstance(some_object, pillar.auth.UserClass):
some_object = some_object.frontend_info()
return pillar.api.utils.dumps(some_object)
def setup_jinja_env(jinja_env, app_config: dict): def setup_jinja_env(jinja_env, app_config: dict):
jinja_env.filters['pretty_date'] = format_pretty_date jinja_env.filters['pretty_date'] = format_pretty_date
jinja_env.filters['pretty_date_time'] = format_pretty_date_time jinja_env.filters['pretty_date_time'] = format_pretty_date_time
jinja_env.filters['pretty_duration'] = format_pretty_duration
jinja_env.filters['pretty_duration_fractional'] = format_pretty_duration_fractional
jinja_env.filters['undertitle'] = format_undertitle jinja_env.filters['undertitle'] = format_undertitle
jinja_env.filters['hide_none'] = do_hide_none jinja_env.filters['hide_none'] = do_hide_none
jinja_env.filters['pluralize'] = do_pluralize jinja_env.filters['pluralize'] = do_pluralize
@ -212,6 +237,7 @@ def setup_jinja_env(jinja_env, app_config: dict):
jinja_env.filters['yesno'] = do_yesno jinja_env.filters['yesno'] = do_yesno
jinja_env.filters['repr'] = repr jinja_env.filters['repr'] = repr
jinja_env.filters['urljoin'] = functools.partial(urllib.parse.urljoin, allow_fragments=True) jinja_env.filters['urljoin'] = functools.partial(urllib.parse.urljoin, allow_fragments=True)
jinja_env.filters['json'] = do_json
jinja_env.globals['url_for_node'] = do_url_for_node jinja_env.globals['url_for_node'] = do_url_for_node
jinja_env.globals['abs_url'] = functools.partial(flask.url_for, jinja_env.globals['abs_url'] = functools.partial(flask.url_for,
_external=True, _external=True,

View File

@ -1,5 +1,6 @@
import logging import logging
import urllib.parse import urllib.parse
import warnings
from pillarsdk import Node from pillarsdk import Node
from flask import Blueprint from flask import Blueprint
@ -7,7 +8,6 @@ from flask import current_app
from flask import render_template from flask import render_template
from flask import redirect from flask import redirect
from flask import request from flask import request
from werkzeug.contrib.atom import AtomFeed
from pillar.flask_extra import ensure_schema from pillar.flask_extra import ensure_schema
from pillar.web.utils import system_util from pillar.web.utils import system_util
@ -91,6 +91,11 @@ def error_403():
@blueprint.route('/feeds/blogs.atom') @blueprint.route('/feeds/blogs.atom')
def feeds_blogs(): def feeds_blogs():
"""Global feed generator for latest blogposts across all projects""" """Global feed generator for latest blogposts across all projects"""
# Werkzeug deprecated their Atom feed. Tracked in https://developer.blender.org/T65274.
with warnings.catch_warnings():
from werkzeug.contrib.atom import AtomFeed
@current_app.cache.cached(60*5) @current_app.cache.cached(60*5)
def render_page(): def render_page():
feed = AtomFeed('Blender Cloud - Latest updates', feed = AtomFeed('Blender Cloud - Latest updates',

View File

@ -19,10 +19,19 @@ def attachment_form_group_create(schema_prop):
def _attachment_build_single_field(schema_prop): def _attachment_build_single_field(schema_prop):
# 'keyschema' was renamed to 'keysrules' in Cerberus 1.3, but our data may still have the old
# names. Same for 'valueschema' and 'valuesrules'.
keysrules = schema_prop.get('keysrules') or schema_prop.get('keyschema')
if keysrules is None:
raise KeyError(f"missing 'keysrules' key in schema {schema_prop}")
valuesrules = schema_prop.get('valuesrules') or schema_prop.get('valueschema')
if valuesrules is None:
raise KeyError(f"missing 'valuesrules' key in schema {schema_prop}")
# Ugly hard-coded schema. # Ugly hard-coded schema.
fake_schema = { fake_schema = {
'slug': schema_prop['propertyschema'], 'slug': keysrules,
'oid': schema_prop['valueschema']['schema']['oid'], 'oid': valuesrules['schema']['oid'],
} }
file_select_form_group = build_file_select_form(fake_schema) file_select_form_group = build_file_select_form(fake_schema)
return file_select_form_group return file_select_form_group
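The rename is mechanical; the same minimal attachments schema in both spellings (the 'objectid' type for the oid rule is an assumption for illustration):

# Cerberus < 1.3
{'keyschema': {'type': 'string'},
 'valueschema': {'type': 'dict', 'schema': {'oid': {'type': 'objectid'}}}}

# Cerberus >= 1.3
{'keysrules': {'type': 'string'},
 'valuesrules': {'type': 'dict', 'schema': {'oid': {'type': 'objectid'}}}}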

View File

@ -1,236 +0,0 @@
import logging
from flask import current_app
from flask import request
from flask import jsonify
from flask import render_template
from flask_login import login_required, current_user
from pillarsdk import Node
from pillarsdk import Project
import werkzeug.exceptions as wz_exceptions
from pillar.api.utils import utcnow
from pillar.web import subquery
from pillar.web.nodes.routes import blueprint
from pillar.web.utils import gravatar
from pillar.web.utils import pretty_date
from pillar.web.utils import system_util
log = logging.getLogger(__name__)
@blueprint.route('/comments/create', methods=['POST'])
@login_required
def comments_create():
content = request.form['content']
parent_id = request.form.get('parent_id')
if not parent_id:
log.warning('User %s tried to create comment without parent_id', current_user.objectid)
raise wz_exceptions.UnprocessableEntity()
api = system_util.pillar_api()
parent_node = Node.find(parent_id, api=api)
if not parent_node:
log.warning('Unable to create comment for user %s, parent node %r not found',
current_user.objectid, parent_id)
raise wz_exceptions.UnprocessableEntity()
log.info('Creating comment for user %s on parent node %r',
current_user.objectid, parent_id)
comment_props = dict(
project=parent_node.project,
name='Comment',
user=current_user.objectid,
node_type='comment',
properties=dict(
content=content,
status='published',
confidence=0,
rating_positive=0,
rating_negative=0))
if parent_id:
comment_props['parent'] = parent_id
# Get the parent node and check if it's a comment. In which case we flag
# the current comment as a reply.
parent_node = Node.find(parent_id, api=api)
if parent_node.node_type == 'comment':
comment_props['properties']['is_reply'] = True
comment = Node(comment_props)
comment.create(api=api)
return jsonify({'node_id': comment._id}), 201
@blueprint.route('/comments/<string(length=24):comment_id>', methods=['POST'])
@login_required
def comment_edit(comment_id):
"""Allows a user to edit their comment."""
from pillar.web import jinja
api = system_util.pillar_api()
comment = Node({'_id': comment_id})
result = comment.patch({'op': 'edit', 'content': request.form['content']}, api=api)
assert result['_status'] == 'OK'
return jsonify({
'status': 'success',
'data': {
'content': result.properties.content or '',
'content_html': jinja.do_markdowned(result.properties, 'content'),
}})
def format_comment(comment, is_reply=False, is_team=False, replies=None):
"""Format a comment node into a simpler dictionary.
:param comment: the comment object
:param is_reply: True if the comment is a reply to another comment
:param is_team: True if the author belongs to the group that owns the node
:param replies: list of replies (formatted with this function)
"""
try:
is_own = (current_user.objectid == comment.user._id) \
if current_user.is_authenticated else False
except AttributeError:
current_app.bugsnag.notify(Exception(
'Missing user for embedded user ObjectId'),
meta_data={'nodes_info': {'node_id': comment['_id']}})
return
is_rated = False
is_rated_positive = None
if comment.properties.ratings:
for rating in comment.properties.ratings:
if current_user.is_authenticated and rating.user == current_user.objectid:
is_rated = True
is_rated_positive = rating.is_positive
break
return dict(_id=comment._id,
gravatar=gravatar(comment.user.email, size=32),
time_published=pretty_date(comment._created or utcnow(), detail=True),
rating=comment.properties.rating_positive - comment.properties.rating_negative,
author=comment.user.full_name,
author_username=comment.user.username,
content=comment.properties.content,
is_reply=is_reply,
is_own=is_own,
is_rated=is_rated,
is_rated_positive=is_rated_positive,
is_team=is_team,
replies=replies)
@blueprint.route('/<string(length=24):node_id>/comments')
def comments_for_node(node_id):
"""Shows the comments attached to the given node.
The URL can be overridden in order to define can_post_comments in a different way
"""
api = system_util.pillar_api()
node = Node.find(node_id, api=api)
project = Project({'_id': node.project})
can_post_comments = project.node_type_has_method('comment', 'POST', api=api)
can_comment_override = request.args.get('can_comment', 'True') == 'True'
can_post_comments = can_post_comments and can_comment_override
return render_comments_for_node(node_id, can_post_comments=can_post_comments)
def render_comments_for_node(node_id: str, *, can_post_comments: bool):
"""Render the list of comments for a node."""
api = system_util.pillar_api()
# Query for all children, i.e. comments on the node.
comments = Node.all({
'where': {'node_type': 'comment', 'parent': node_id},
}, api=api)
def enrich(some_comment):
some_comment['_user'] = subquery.get_user_info(some_comment['user'])
some_comment['_is_own'] = some_comment['user'] == current_user.objectid
some_comment['_current_user_rating'] = None # tri-state boolean
some_comment[
'_rating'] = some_comment.properties.rating_positive - some_comment.properties.rating_negative
if current_user.is_authenticated:
for rating in some_comment.properties.ratings or ():
if rating.user != current_user.objectid:
continue
some_comment['_current_user_rating'] = rating.is_positive
for comment in comments['_items']:
# Query for all grandchildren, i.e. replies to comments on the node.
comment['_replies'] = Node.all({
'where': {'node_type': 'comment', 'parent': comment['_id']},
}, api=api)
enrich(comment)
for reply in comment['_replies']['_items']:
enrich(reply)
nr_of_comments = sum(1 + comment['_replies']['_meta']['total']
for comment in comments['_items'])
return render_template('nodes/custom/comment/list_embed.html',
node_id=node_id,
comments=comments,
nr_of_comments=nr_of_comments,
show_comments=True,
can_post_comments=can_post_comments)
@blueprint.route('/<string(length=24):node_id>/commentform')
def commentform_for_node(node_id):
"""Shows only the comment for for comments attached to the given node.
i.e. does not show the comments themselves, just the form to post a new comment.
"""
api = system_util.pillar_api()
node = Node.find(node_id, api=api)
project = Project({'_id': node.project})
can_post_comments = project.node_type_has_method('comment', 'POST', api=api)
return render_template('nodes/custom/comment/list_embed.html',
node_id=node_id,
show_comments=False,
can_post_comments=can_post_comments)
@blueprint.route("/comments/<comment_id>/rate/<operation>", methods=['POST'])
@login_required
def comments_rate(comment_id, operation):
"""Comment rating function
:param comment_id: the comment id
:type comment_id: str
:param operation: the operation (one of 'revoke', 'upvote', 'downvote')
:type operation: str
"""
if operation not in {'revoke', 'upvote', 'downvote'}:
raise wz_exceptions.BadRequest('Invalid operation')
api = system_util.pillar_api()
# PATCH the node and return the result.
comment = Node({'_id': comment_id})
result = comment.patch({'op': operation}, api=api)
assert result['_status'] == 'OK'
return jsonify({
'status': 'success',
'data': {
'op': operation,
'rating_positive': result.properties.rating_positive,
'rating_negative': result.properties.rating_negative,
}})

View File

@ -19,6 +19,7 @@ from pillar.web.nodes.routes import url_for_node
from pillar.web.nodes.forms import get_node_form from pillar.web.nodes.forms import get_node_form
import pillar.web.nodes.attachments import pillar.web.nodes.attachments
from pillar.web.projects.routes import project_update_nodes_list from pillar.web.projects.routes import project_update_nodes_list
from pillar.web.projects.routes import project_navigation_links
log = logging.getLogger(__name__) log = logging.getLogger(__name__)
@ -61,16 +62,10 @@ def posts_view(project_id=None, project_url=None, url=None, *, archive=False, pa
post.picture = get_file(post.picture, api=api) post.picture = get_file(post.picture, api=api)
post.url = url_for_node(node=post) post.url = url_for_node(node=post)
# Use the *_main_project.html template for the main blog
is_main_project = project_id == current_app.config['MAIN_PROJECT_ID']
main_project_template = '_main_project' if is_main_project else ''
main_project_template = '_main_project'
index_arch = 'archive' if archive else 'index' index_arch = 'archive' if archive else 'index'
template_path = f'nodes/custom/blog/{index_arch}{main_project_template}.html', template_path = f'nodes/custom/blog/{index_arch}.html',
if url: if url:
template_path = f'nodes/custom/post/view{main_project_template}.html',
post = Node.find_one({ post = Node.find_one({
'where': {'parent': blog._id, 'properties.url': url}, 'where': {'parent': blog._id, 'properties.url': url},
'embedded': {'node_type': 1, 'user': 1}, 'embedded': {'node_type': 1, 'user': 1},
@ -95,6 +90,7 @@ def posts_view(project_id=None, project_url=None, url=None, *, archive=False, pa
can_create_blog_posts = project.node_type_has_method('post', 'POST', api=api) can_create_blog_posts = project.node_type_has_method('post', 'POST', api=api)
# Use functools.partial so we can later pass page=X. # Use functools.partial so we can later pass page=X.
is_main_project = project_id == current_app.config['MAIN_PROJECT_ID']
if is_main_project: if is_main_project:
url_func = functools.partial(url_for, 'main.main_blog_archive') url_func = functools.partial(url_for, 'main.main_blog_archive')
else: else:
@ -112,24 +108,21 @@ def posts_view(project_id=None, project_url=None, url=None, *, archive=False, pa
else: else:
project.blog_archive_prev = None project.blog_archive_prev = None
title = 'blog_main' if is_main_project else 'blog' navigation_links = project_navigation_links(project, api)
extension_sidebar_links = current_app.extension_sidebar_links(project)
pages = Node.all({
'where': {'project': project._id, 'node_type': 'page'},
'projection': {'name': 1}}, api=api)
return render_template( return render_template(
template_path, template_path,
blog=blog, blog=blog,
node=post, node=post, # node is used by the generic comments rendering (see custom/_scripts.pug)
posts=posts._items, posts=posts._items,
posts_meta=pmeta, posts_meta=pmeta,
more_posts_available=pmeta['total'] > pmeta['max_results'], more_posts_available=pmeta['total'] > pmeta['max_results'],
project=project, project=project,
title=title,
node_type_post=project.get_node_type('post'), node_type_post=project.get_node_type('post'),
can_create_blog_posts=can_create_blog_posts, can_create_blog_posts=can_create_blog_posts,
pages=pages._items, navigation_links=navigation_links,
extension_sidebar_links=extension_sidebar_links,
api=api) api=api)

View File

@ -48,7 +48,10 @@ def find_for_comment(project, node):
continue continue
try: try:
parent = Node.find(parent.parent, api=api) parent = Node.find_one({'where': {
'_id': parent.parent,
'_deleted': {'$ne': True}
}}, api=api)
except ResourceNotFound: except ResourceNotFound:
log.warning( log.warning(
'url_for_node(node_id=%r): Unable to find parent node %r', 'url_for_node(node_id=%r): Unable to find parent node %r',
@ -94,6 +97,16 @@ def find_for_post(project, node):
url=node.properties.url) url=node.properties.url)
@register_node_finder('page')
def find_for_page(project, node):
"""Returns the URL for a page."""
project_id = project['_id']
the_project = project_url(project_id, project=project)
return url_for('projects.view_node', project_url=the_project.url, node_id=node.properties.url)
def find_for_other(project, node): def find_for_other(project, node):
"""Fallback: Assets, textures, and other node types. """Fallback: Assets, textures, and other node types.

View File

@ -1,9 +1,10 @@
import functools
import logging import logging
import typing
from datetime import datetime from datetime import datetime
from datetime import date from datetime import date
import pillarsdk import pillarsdk
from flask import current_app
from flask_wtf import FlaskForm from flask_wtf import FlaskForm
from wtforms import StringField from wtforms import StringField
from wtforms import DateField from wtforms import DateField
@ -17,6 +18,8 @@ from wtforms import DateTimeField
from wtforms import SelectMultipleField from wtforms import SelectMultipleField
from wtforms import FieldList from wtforms import FieldList
from wtforms.validators import DataRequired from wtforms.validators import DataRequired
from pillar import current_app
from pillar.web.utils import system_util from pillar.web.utils import system_util
from pillar.web.utils.forms import FileSelectField from pillar.web.utils.forms import FileSelectField
from pillar.web.utils.forms import CustomFormField from pillar.web.utils.forms import CustomFormField
@ -44,6 +47,13 @@ def iter_node_properties(node_type):
yield prop_name, prop_schema, prop_fschema yield prop_name, prop_schema, prop_fschema
@functools.lru_cache(maxsize=1)
def tag_choices() -> typing.List[typing.Tuple[str, str]]:
"""Return (value, label) tuples for the NODE_TAGS config setting."""
tags = current_app.config.get('NODE_TAGS') or []
return [(tag, tag.title()) for tag in tags] # (value, label) tuples
def add_form_properties(form_class, node_type): def add_form_properties(form_class, node_type):
"""Add fields to a form based on the node and form schema provided. """Add fields to a form based on the node and form schema provided.
:type node_schema: dict :type node_schema: dict
@ -60,7 +70,9 @@ def add_form_properties(form_class, node_type):
# Recursive call if detects a dict # Recursive call if detects a dict
field_type = schema_prop['type'] field_type = schema_prop['type']
if field_type == 'dict': if prop_name == 'tags' and field_type == 'list':
field = SelectMultipleField(choices=tag_choices())
elif field_type == 'dict':
assert prop_name == 'attachments' assert prop_name == 'attachments'
field = attachments.attachment_form_group_create(schema_prop) field = attachments.attachment_form_group_create(schema_prop)
elif field_type == 'list': elif field_type == 'list':

View File

@ -1,9 +1,9 @@
import os import os
import json
import logging import logging
from datetime import datetime from datetime import datetime
import pillarsdk import pillarsdk
from pillar import shortcodes
from pillarsdk import Node from pillarsdk import Node
from pillarsdk import Project from pillarsdk import Project
from pillarsdk.exceptions import ResourceNotFound from pillarsdk.exceptions import ResourceNotFound
@ -17,15 +17,12 @@ from flask import request
from flask import jsonify from flask import jsonify
from flask import abort from flask import abort
from flask_login import current_user from flask_login import current_user
from flask_wtf.csrf import validate_csrf
import werkzeug.exceptions as wz_exceptions import werkzeug.exceptions as wz_exceptions
from wtforms import SelectMultipleField from wtforms import SelectMultipleField
from flask_login import login_required from flask_login import login_required
from jinja2.exceptions import TemplateNotFound from jinja2.exceptions import TemplateNotFound
from pillar.api.utils.authorization import check_permissions
from pillar.web.utils import caching
from pillar.markdown import markdown from pillar.markdown import markdown
from pillar.web.nodes.forms import get_node_form from pillar.web.nodes.forms import get_node_form
from pillar.web.nodes.forms import process_node_form from pillar.web.nodes.forms import process_node_form
@ -108,6 +105,11 @@ def view(node_id, extra_template_args: dict=None):
node_type_name = node.node_type node_type_name = node.node_type
if node_type_name == 'page':
# HACK: The 'edit node' page GETs this endpoint, but for pages it's plain wrong,
# so we just redirect to the correct URL.
return redirect(url_for_node(node=node))
if node_type_name == 'post' and not request.args.get('embed'): if node_type_name == 'post' and not request.args.get('embed'):
# Posts shouldn't be shown at this route (unless viewed embedded, typically # Posts shouldn't be shown at this route (unless viewed embedded, typically
# after an edit). Redirect to the correct one. # after an edit). Redirect to the correct one.
@ -487,11 +489,14 @@ def preview_markdown():
current_app.csrf.protect() current_app.csrf.protect()
try: try:
content = request.form['content'] content = request.json['content']
except KeyError: except KeyError:
return jsonify({'_status': 'ERR', return jsonify({'_status': 'ERR',
'message': 'The field "content" was not specified.'}), 400 'message': 'The field "content" was not specified.'}), 400
return jsonify(content=markdown(content)) html = markdown(content)
attachmentsdict = request.json.get('attachments', {})
html = shortcodes.render_commented(html, context={'attachments': attachmentsdict})
return jsonify(content=html)
def ensure_lists_exist_as_empty(node_doc, node_type): def ensure_lists_exist_as_empty(node_doc, node_type):
@ -604,5 +609,94 @@ def url_for_node(node_id=None, node=None):
return finders.find_url_for_node(node) return finders.find_url_for_node(node)
@blueprint.route("/<node_id>/breadcrumbs")
def breadcrumbs(node_id: str):
"""Return breadcrumbs for the given node, as JSON.
Note that a missing parent is still returned in the breadcrumbs,
but with `{_exists: false, name: '-unknown-'}`.
The breadcrumbs start with the top-level parent, and end with the node
itself (marked by {_self: true}). Returns JSON like this:
{breadcrumbs: [
...,
{_id: "parentID",
name: "The Parent Node",
node_type: "group",
url: "/p/project/parentID"},
{_id: "deadbeefbeefbeefbeeffeee",
_self: true,
name: "The Node Itself",
node_type: "asset",
url: "/p/project/nodeID"},
]}
When a parent node is missing, it has a breadcrumb like this:
{_id: "deadbeefbeefbeefbeeffeee",
_exists: false,
name: '-unknown-'}
"""
api = system_util.pillar_api()
is_self = True
def make_crumb(some_node) -> dict:
"""Construct a breadcrumb for this node."""
nonlocal is_self
crumb = {
'_id': some_node._id,
'name': some_node.name,
'node_type': some_node.node_type,
'url': finders.find_url_for_node(some_node),
}
if is_self:
crumb['_self'] = True
is_self = False
return crumb
def make_missing_crumb(some_node_id: str) -> dict:
"""Construct 'missing parent' breadcrumb."""
return {
'_id': some_node_id,
'_exists': False,
'name': '-unknown-',
}
# The first node MUST exist.
try:
node = Node.find(node_id, api=api)
except ResourceNotFound:
log.warning('breadcrumbs(node_id=%r): Unable to find node', node_id)
raise wz_exceptions.NotFound(f'Unable to find node {node_id}')
except ForbiddenAccess:
log.warning('breadcrumbs(node_id=%r): access denied to current user', node_id)
raise wz_exceptions.Forbidden(f'No access to node {node_id}')
crumbs = []
while True:
crumbs.append(make_crumb(node))
child_id = node._id
node_id = node.parent
if not node_id:
break
# If a subsequent node doesn't exist any more, include that in the breadcrumbs.
# Forbidden nodes are handled as if they don't exist.
try:
node = Node.find(node_id, api=api)
except (ResourceNotFound, ForbiddenAccess):
log.warning('breadcrumbs: Unable to find node %r but it is marked as parent of %r',
node_id, child_id)
crumbs.append(make_missing_crumb(node_id))
break
return jsonify({'breadcrumbs': list(reversed(crumbs))})
# Import of custom modules (using the same nodes decorator) # Import of custom modules (using the same nodes decorator)
from .custom import comments, groups, storage, posts from .custom import groups, storage, posts

View File

@ -6,7 +6,8 @@ from flask_login import current_user
import pillar.flask_extra import pillar.flask_extra
from pillar import current_app from pillar import current_app
from pillar.api.utils import authorization, str2id, gravatar, jsonify import pillar.api.users.avatar
from pillar.api.utils import authorization, str2id, jsonify
from pillar.web.system_util import pillar_api from pillar.web.system_util import pillar_api
from pillarsdk import Organization, User from pillarsdk import Organization, User
@ -47,7 +48,7 @@ def view_embed(organization_id: str):
members = om.org_members(organization.members) members = om.org_members(organization.members)
for member in members: for member in members:
member['avatar'] = gravatar(member.get('email')) member['avatar'] = pillar.api.users.avatar.url(member)
member['_id'] = str(member['_id']) member['_id'] = str(member['_id'])
admin_user = User.find(organization.admin_uid, api=api) admin_user = User.find(organization.admin_uid, api=api)

View File

@@ -30,6 +30,7 @@ class ProjectForm(FlaskForm):
                                   ('deleted', 'Deleted')])
    picture_header = FileSelectField('Picture header', file_format='image')
    picture_square = FileSelectField('Picture square', file_format='image')
    picture_16_9 = FileSelectField('Picture 16:9', file_format='image')

    def validate(self):
        rv = FlaskForm.validate(self)

View File

@@ -22,8 +22,10 @@ import werkzeug.exceptions as wz_exceptions
from pillar import current_app
from pillar.api.utils import utcnow
import pillar.api.users.avatar
from pillar.web import system_util
from pillar.web import utils
from pillar.web.nodes import finders
from pillar.web.utils.jstree import jstree_get_children
import pillar.extension
@@ -108,7 +110,6 @@ def index():
    return render_template(
        'projects/index_dashboard.html',
-        gravatar=utils.gravatar(current_user.email, size=128),
        projects_user=projects_user['_items'],
        projects_deleted=projects_deleted['_items'],
        projects_shared=projects_shared['_items'],
@@ -302,9 +303,53 @@ def view(project_url):
                           'header_video_node': header_video_node})
def project_navigation_links(project: Project, api) -> list:
    """Returns a list of nodes for the project, for top navigation display.

    Args:
        project: A Project object.
        api: the API client credential.

    Returns:
        A list of links for the Project.

        For example, we display a link to the project blog if present, as well
        as pages. The list is structured as follows:

        [{'url': '/p/spring/about', 'label': 'About'},
         {'url': '/p/spring/blog', 'label': 'Blog'}]
    """
    links = []

    # Fetch the blog
    blog = Node.find_first({
        'where': {'project': project._id, 'node_type': 'blog', '_deleted': {'$ne': True}},
        'projection': {
            'name': 1,
        }
    }, api=api)

    if blog:
        links.append({'url': finders.find_url_for_node(blog), 'label': blog.name, 'slug': 'blog'})

    # Fetch pages
    pages = Node.all({
        'where': {'project': project._id, 'node_type': 'page', '_deleted': {'$ne': True}},
        'projection': {
            'name': 1,
            'properties.url': 1
        }
    }, api=api)

    # Process the results and append the links to the list
    for p in pages._items:
        links.append({'url': finders.find_url_for_node(p), 'label': p.name, 'slug': p.properties.url})

    return links
def render_project(project, api, extra_context=None, template_name=None):
-    project.picture_square = utils.get_file(project.picture_square, api=api)
-    project.picture_header = utils.get_file(project.picture_header, api=api)
+    utils.attach_project_pictures(project, api)

    def load_latest(list_of_ids, node_type=None):
        """Loads a list of IDs in reversed order."""
@@ -315,6 +360,7 @@ def render_project(project, api, extra_context=None, template_name=None):
    # Construct query parameters outside the loop.
    projection = {'name': 1, 'user': 1, 'node_type': 1, 'project': 1,
                  'properties.url': 1, 'properties.content_type': 1,
                  'properties.duration_seconds': 1,
                  'picture': 1}
    params = {'projection': projection, 'embedded': {'user': 1}}
@@ -356,7 +402,6 @@ def render_project(project, api, extra_context=None, template_name=None):
    template_name = template_name or 'projects/home_index.html'
    return render_template(
        template_name,
-        gravatar=utils.gravatar(current_user.email, size=128),
        project=project,
        api=system_util.pillar_api(),
        **extra_context)
@@ -368,6 +413,7 @@ def render_project(project, api, extra_context=None, template_name=None):
        embed_string = ''
    template_name = "projects/view{0}.html".format(embed_string)

    navigation_links = project_navigation_links(project, api)
    extension_sidebar_links = current_app.extension_sidebar_links(project)

    return render_template(template_name,
@@ -376,8 +422,9 @@ def render_project(project, api, extra_context=None, template_name=None):
                           node=None,
                           show_node=False,
                           show_project=True,
-                           og_picture=project.picture_header,
+                           og_picture=project.picture_16_9,
                           activity_stream=activity_stream,
                           navigation_links=navigation_links,
                           extension_sidebar_links=extension_sidebar_links,
                           **extra_context)
@@ -416,6 +463,7 @@ def view_node(project_url, node_id):
    api = system_util.pillar_api()
    # First we check if it's a simple string, in which case we are looking for
    # a static page. Maybe we could use bson.objectid.ObjectId.is_valid(node_id)
    project: typing.Optional[Project] = None
    if not utils.is_valid_id(node_id):
        # raise wz_exceptions.NotFound('No such node')
        project, node = render_node_page(project_url, node_id, api)
@@ -433,34 +481,33 @@ def view_node(project_url, node_id):
        project = Project.find_one({'where': {"url": project_url, '_id': node.project}},
                                   api=api)
    except ResourceNotFound:
-        # In theatre mode, we don't need access to the project at all.
        if theatre_mode:
-            project = None
+            pass  # In theatre mode, we don't need access to the project at all.
        else:
            raise wz_exceptions.NotFound('No such project')

    navigation_links = []
    extension_sidebar_links = ''

    og_picture = node.picture = utils.get_file(node.picture, api=api)
    if project:
        utils.attach_project_pictures(project, api)
        if not node.picture:
-            og_picture = utils.get_file(project.picture_header, api=api)
+            og_picture = project.picture_16_9
-        project.picture_square = utils.get_file(project.picture_square, api=api)
+        navigation_links = project_navigation_links(project, api)
+        extension_sidebar_links = current_app.extension_sidebar_links(project)

    # Append _theatre to load the proper template
    theatre = '_theatre' if theatre_mode else ''

    if node.node_type == 'page':
-        pages = Node.all({
-            'where': {'project': project._id, 'node_type': 'page'},
-            'projection': {'name': 1}}, api=api)
        return render_template('nodes/custom/page/view_embed.html',
                               api=api,
                               node=node,
                               project=project,
-                               pages=pages._items,
+                               navigation_links=navigation_links,
+                               extension_sidebar_links=extension_sidebar_links,
                               og_picture=og_picture,)

-    extension_sidebar_links = current_app.extension_sidebar_links(project)
    return render_template('projects/view{}.html'.format(theatre),
                           api=api,
                           project=project,
@@ -468,7 +515,8 @@ def view_node(project_url, node_id):
                           show_node=True,
                           show_project=False,
                           og_picture=og_picture,
-                           extension_sidebar_links=extension_sidebar_links)
+                           navigation_links=navigation_links,
+                           extension_sidebar_links=extension_sidebar_links,)

def find_project_or_404(project_url, embedded=None, api=None):
@@ -491,8 +539,7 @@ def search(project_url):
    """Search into a project"""
    api = system_util.pillar_api()
    project = find_project_or_404(project_url, api=api)
-    project.picture_square = utils.get_file(project.picture_square, api=api)
-    project.picture_header = utils.get_file(project.picture_header, api=api)
+    utils.attach_project_pictures(project, api)

    return render_template('nodes/search.html',
                           project=project,
@@ -533,6 +580,8 @@ def edit(project_url):
            project.picture_square = form.picture_square.data
        if form.picture_header.data:
            project.picture_header = form.picture_header.data
        if form.picture_16_9.data:
            project.picture_16_9 = form.picture_16_9.data

        # Update world permissions from is_private checkbox
        if form.is_private.data:
@@ -548,6 +597,8 @@ def edit(project_url):
        form.picture_square.data = project.picture_square._id
    if project.picture_header:
        form.picture_header.data = project.picture_header._id
    if project.picture_16_9:
        form.picture_16_9.data = project.picture_16_9._id

    # List of fields from the form that should be hidden to regular users
    if current_user.has_role('admin'):
@@ -656,15 +707,12 @@ def sharing(project_url):
    api = system_util.pillar_api()
    # Fetch the project or 404
    try:
-        project = Project.find_one({
-            'where': '{"url" : "%s"}' % (project_url)}, api=api)
+        project = Project.find_one({'where': {'url': project_url}}, api=api)
    except ResourceNotFound:
        return abort(404)

    # Fetch users that are part of the admin group
    users = project.get_users(api=api)
-    for user in users['_items']:
-        user['avatar'] = utils.gravatar(user['email'])

    if request.method == 'POST':
        user_id = request.form['user_id']
@@ -674,13 +722,14 @@ def sharing(project_url):
                user = project.add_user(user_id, api=api)
            elif action == 'remove':
                user = project.remove_user(user_id, api=api)
            else:
                raise wz_exceptions.BadRequest(f'invalid action {action}')
        except ResourceNotFound:
            log.info('/p/%s/edit/sharing: User %s not found', project_url, user_id)
            return jsonify({'_status': 'ERROR',
                            'message': 'User %s not found' % user_id}), 404

-        # Add gravatar to user
-        user['avatar'] = utils.gravatar(user['email'])
+        user['avatar'] = pillar.api.users.avatar.url(user)

        return jsonify(user)

    utils.attach_project_pictures(project, api)
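
For illustration, the sharing endpoint's POST interface could be driven like this. A sketch only: the host, the session cookie, and the 'action' form field name are assumptions; only 'user_id' appears verbatim in the code above.

import requests

# Hypothetical host; the endpoint requires an authenticated session cookie.
resp = requests.post(
    'https://cloud.example.com/p/spring/edit/sharing',
    data={'user_id': '5f0c9c0a1f4b2e0001234567', 'action': 'add'},
    cookies={'session': '<valid-session-cookie>'},
)
if resp.status_code == 404:
    print(resp.json()['message'])  # e.g. 'User ... not found'
else:
    user = resp.json()
    print(user['avatar'])  # self-hosted avatar URL, filled in by the endpoint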

View File

@@ -1,13 +1,18 @@
import json
import logging
import urllib.parse

from flask import Blueprint, flash, render_template
-from flask_login import login_required, current_user
+from flask_login import login_required
from werkzeug.exceptions import abort

from pillar import current_app
from pillar.api.utils import jsonify
import pillar.api.users.avatar
from pillar.auth import current_user
from pillar.web import system_util
from pillar.web.users import forms
-from pillarsdk import User, exceptions as sdk_exceptions
+from pillarsdk import File, User, exceptions as sdk_exceptions

log = logging.getLogger(__name__)
blueprint = Blueprint('settings', __name__)
@@ -27,14 +32,20 @@ def profile():
    if form.validate_on_submit():
        try:
-            user.username = form.username.data
-            user.update(api=api)
+            response = user.set_username(form.username.data, api=api)
+            log.info('updated username of %s: %s', current_user, response)
            flash("Profile updated", 'success')
-        except sdk_exceptions.ResourceInvalid as e:
-            message = json.loads(e.content)
+        except sdk_exceptions.ResourceInvalid as ex:
+            log.warning('unable to set username %s to %r: %s',
+                        current_user, form.username.data, ex)
+            message = json.loads(ex.content)
            flash(message)

-    return render_template('users/settings/profile.html', form=form, title='profile')
+    blender_id_endpoint = current_app.config['BLENDER_ID_ENDPOINT']
+    blender_profile_url = urllib.parse.urljoin(blender_id_endpoint, 'settings/profile')
+
+    return render_template('users/settings/profile.html',
+                           form=form, title='profile',
+                           blender_profile_url=blender_profile_url)
@blueprint.route('/roles')

@@ -42,3 +53,19 @@ def profile():
def roles():
    """Show roles and capabilities of the current user."""
    return render_template('users/settings/roles.html', title='roles')
@blueprint.route('/profile/sync-avatar', methods=['POST'])
@login_required
def sync_avatar():
    """Fetch the user's avatar from Blender ID and save to storage.

    This is an API-like endpoint, in the sense that it returns JSON.
    It's here in this file to have it close to the endpoint that
    serves the only page that calls on this endpoint.
    """
    new_url = pillar.api.users.avatar.sync_avatar(current_user.user_id)
    if not new_url:
        return jsonify({'_message': 'Your avatar could not be updated'})
    return new_url
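
A client-side sketch of calling this endpoint; the /settings mount point and cookie-based authentication are assumptions. On failure the endpoint returns JSON with a '_message', on success a bare URL string:

import requests

# Hypothetical host and mount point for the 'settings' blueprint.
resp = requests.post('https://cloud.example.com/settings/profile/sync-avatar',
                     cookies={'session': '<valid-session-cookie>'})
resp.raise_for_status()

if resp.headers.get('Content-Type', '').startswith('application/json'):
    print(resp.json()['_message'])  # avatar could not be updated
else:
    print('New avatar URL:', resp.text)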

File diff suppressed because one or more lines are too long

View File

@@ -872,12 +872,6 @@
      "code": 61930,
      "src": "fontawesome"
    },
-    {
-      "uid": "31972e4e9d080eaa796290349ae6c1fd",
-      "css": "users",
-      "code": 59502,
-      "src": "fontawesome"
-    },
    {
      "uid": "c8585e1e5b0467f28b70bce765d5840c",
      "css": "clipboard-copy",
@@ -990,6 +984,30 @@
      "code": 59394,
      "src": "entypo"
    },
    {
      "uid": "347c38a8b96a509270fdcabc951e7571",
      "css": "database",
      "code": 61888,
      "src": "fontawesome"
    },
    {
      "uid": "3a6f0140c3a390bdb203f56d1bfdefcb",
      "css": "speed",
      "code": 59471,
      "src": "entypo"
    },
    {
      "uid": "4c1ef492f1d2c39a2250ae457cee2a6e",
      "css": "social-instagram",
      "code": 61805,
      "src": "fontawesome"
    },
    {
      "uid": "e36d581e4f2844db345bddc205d15dda",
      "css": "users",
      "code": 59507,
      "src": "elusive"
    },
    {
      "uid": "053a214a098a9453877363eeb45f004e",
      "css": "log-in",

Binary file not shown (new image, 496 B).

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View File

@@ -33,7 +33,8 @@ def get_user_info(user_id):
    # TODO: put those fields into a config var or module-level global.
    return {'email': user.email,
            'full_name': user.full_name,
-            'username': user.username}
+            'username': user.username,
+            'badges_html': (user.badges and user.badges.html) or ''}

def setup_app(app):

View File

@@ -31,8 +31,10 @@ def check_oauth_provider(provider):
@blueprint.route('/authorize/<provider>')
def oauth_authorize(provider):
-    if not current_user.is_anonymous:
-        return redirect(url_for('main.homepage'))
+    if current_user.is_authenticated:
+        next_after_login = session.pop('next_after_login', None) or url_for('main.homepage')
+        log.debug('Redirecting user to %s', next_after_login)
+        return redirect(next_after_login)

    try:
        oauth = OAuthSignIn.get_provider(provider)
@@ -48,8 +50,14 @@ def oauth_authorize(provider):

@blueprint.route('/oauth/<provider>/authorized')
def oauth_callback(provider):
    import datetime
    from pillar.api.utils.authentication import store_token
    from pillar.api.utils import utcnow

    next_after_login = session.pop('next_after_login', None) or url_for('main.homepage')
    if current_user.is_authenticated:
-        return redirect(url_for('main.homepage'))
+        log.debug('Redirecting user to %s', next_after_login)
+        return redirect(next_after_login)

    oauth = OAuthSignIn.get_provider(provider)
    try:
@@ -59,13 +67,23 @@ def oauth_callback(provider):
        raise wz_exceptions.Forbidden()
    if oauth_user.id is None:
        log.debug('Authentication failed for user with {}'.format(provider))
-        return redirect(url_for('main.homepage'))
+        return redirect(next_after_login)

    # Find or create user
    user_info = {'id': oauth_user.id, 'email': oauth_user.email, 'full_name': ''}
    db_user = find_user_in_db(user_info, provider=provider)
    db_id, status = upsert_user(db_user)
-    token = generate_and_store_token(db_id)
+
+    # TODO(Sybren): If the user doesn't have any badges, but the access token
+    # does have 'badge' scope, we should fetch the badges in the background.
+    if oauth_user.access_token:
+        # TODO(Sybren): make nr of days configurable, or get from OAuthSignIn subclass.
+        token_expiry = utcnow() + datetime.timedelta(days=15)
+        token = store_token(db_id, oauth_user.access_token, token_expiry,
+                            oauth_scopes=oauth_user.scopes)
+    else:
+        token = generate_and_store_token(db_id)

    # Login user
    pillar.auth.login_user(token['token'], load_from_db=True)
@@ -74,11 +92,8 @@ def oauth_callback(provider):
    # Check with Blender ID to update certain user roles.
    update_subscription()

-    next_after_login = session.pop('next_after_login', None)
-    if next_after_login:
-        log.debug('Redirecting user to %s', next_after_login)
-        return redirect(next_after_login)
-    return redirect(url_for('main.homepage'))
+    log.debug('Redirecting user to %s', next_after_login)
+    return redirect(next_after_login)

@blueprint.route('/login')

View File

@@ -43,8 +43,38 @@ def attach_project_pictures(project, api):
    This function should be moved into the API, attached to a new Project object.
    """
    # When adding to the list of pictures dealt with here, make sure
    # you update unattach_project_pictures() too.
    project.picture_square = get_file(project.picture_square, api=api)
    project.picture_header = get_file(project.picture_header, api=api)
    project.picture_16_9 = get_file(project.picture_16_9, api=api)
def unattach_project_pictures(project: dict):
    """Reverts the operation of 'attach_project_pictures'.

    This makes it possible to PUT the project again.
    """

    def unattach(property_name: str):
        picture_info = project.get(property_name, None)
        if not picture_info:
            project.pop(property_name, None)
            return

        if not isinstance(picture_info, dict):
            # Assume it already is an ID.
            return

        try:
            picture_id = picture_info['_id']
            project[property_name] = picture_id
        except KeyError:
            return

    unattach('picture_square')
    unattach('picture_header')
    unattach('picture_16_9')
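
A round-trip sketch of why unattach_project_pictures() exists: attaching replaces the picture ID fields with full file documents for rendering, which the storage API would reject on a subsequent PUT. A minimal sketch, assuming a dict-shaped project as the function expects (the file-document shape is illustrative only):

project = {'name': 'Spring',
           'picture_header': {'_id': 'abc123', 'link': 'https://files.example.com/abc123.jpg'}}

unattach_project_pictures(project)
assert project['picture_header'] == 'abc123'  # reduced back to an ID, safe to PUT
assert 'picture_square' not in project        # absent pictures are popped entirely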
def mass_attach_project_pictures(projects: typing.Iterable[pillarsdk.Project], *,

@@ -106,9 +136,16 @@ def mass_attach_project_pictures(projects: typing.Iterable[pillarsdk.Project], *

def gravatar(email: str, size=64):
    """Deprecated: return the Gravatar URL.

    .. deprecated::
        Use of Gravatar is deprecated, in favour of our self-hosted avatars.
        See pillar.api.users.avatar.url(user).
    """
    import warnings
-    warnings.warn("the pillar.web.gravatar function is deprecated; use hashlib instead",
-                  DeprecationWarning, 2)
+    warnings.warn('pillar.web.utils.gravatar() is deprecated, '
+                  'use pillar.api.users.avatar.url() instead',
+                  category=DeprecationWarning, stacklevel=2)

    from pillar.api.utils import gravatar as api_gravatar
    return api_gravatar(email, size)
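
A migration sketch for code that still calls the deprecated helper (the member dict is illustrative; avatar.url() takes a user document, as in the organizations view above):

import pillar.api.users.avatar

def member_avatar(member: dict) -> str:
    # Old, deprecated: utils.gravatar(member.get('email'))
    # New: resolve the self-hosted avatar from the user document.
    return pillar.api.users.avatar.url(member)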

View File

@@ -62,7 +62,7 @@ def jstree_get_children(node_id, project_id=None):
        'where': {
            '$and': [
                {'node_type': {'$regex': '^(?!attract_)'}},
-                {'node_type': {'$not': {'$in': ['comment', 'post']}}},
+                {'node_type': {'$not': {'$in': ['comment', 'post', 'blog', 'page']}}},
            ],
        }
    }
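
The effect of the widened filter can be checked in isolation; this sketch mirrors the two $and clauses of the query (the regex and the type list are taken verbatim from the diff above):

import re

NOT_ATTRACT = re.compile(r'^(?!attract_)')
EXCLUDED_TYPES = {'comment', 'post', 'blog', 'page'}

def shown_in_jstree(node_type: str) -> bool:
    """True when a node type passes both clauses of the $and filter."""
    return bool(NOT_ATTRACT.match(node_type)) and node_type not in EXCLUDED_TYPES

assert shown_in_jstree('asset')
assert not shown_in_jstree('attract_task')  # filtered by the regex
assert not shown_in_jstree('page')          # newly excluded by this change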

pyproject.toml (new file, 64 lines)
View File

@@ -0,0 +1,64 @@
[tool.poetry]
name = "pillar"
version = "2.0"
description = ""
authors = [
"Francesco Siddi <francesco@blender.org>",
"Pablo Vazquez <pablo@blender.studio>",
"Sybren Stüvel <sybren@blender.studio>",
]
[tool.poetry.scripts]
# Must be run after installing/updating:
translations = 'pillar.cli.translations:main'
[tool.poetry.dependencies]
python = "~3.6"
attrs = "~19"
algoliasearch = "~1"
bcrypt = "~3"
blinker = "~1.4"
bleach = "~3.1"
celery = {version = "~4.3",extras = ["redis"]}
cryptography = "2.7"
commonmark = "~0.9"
# These must match the version of ElasticSearch used:
elasticsearch = "~6.1"
elasticsearch-dsl = "~6.1"
Eve = "~0.9"
Flask = "~1.0"
Flask-Babel = "~0.12"
Flask-Caching = "~1.7"
Flask-DebugToolbar = "~0.10"
Flask-Script = "~2.0"
Flask-Login = "~0.4"
Flask-WTF = "~0.14"
gcloud = "~0.18"
google-apitools = "~0.5"
IPy = "~1.00"
MarkupSafe = "~1.1"
ndg-httpsclient = "~0.5"
Pillow = "~6.0"
python-dateutil = "~2.8"
rauth = "~0.7"
raven = {version = "~6.10",extras = ["flask"]}
redis = "~3.2"
shortcodes = "~2.5"
zencoder = "~0.6"
pillarsdk = {path = "../pillar-python-sdk"}
# Secondary requirements that weren't installed automatically:
idna = "~2.8"
[tool.poetry.dev-dependencies]
pillar-devdeps = {path = "./devdeps"}
[build-system]
requires = ["poetry==1.0","cryptography==2.7","setuptools==51.0.0","wheel==0.35.1"]
build-backend = "poetry.masonry.api"

View File

@@ -1,17 +0,0 @@
-r requirements.txt
-r ../pillar-python-sdk/requirements-dev.txt
-e ../pillar # also works from parent project, like blender-cloud
# Development requirements
pytest==3.0.6
responses==0.5.1
pytest-cov==2.4.0
mock==2.0.0
mypy==0.501
# Secondary development requirements
cookies==2.2.1
coverage==4.3.4
pbr==2.0.0
py==1.4.32
typed-ast==1.0.2

Some files were not shown because too many files have changed in this diff.