This is done via a custom PATCH because MongoDB lacks transactions;
we cannot undelete both the project-referenced files and the
file-referenced projects in one atomic operation.
Previously we performed an API call for each picture_square and
picture_header of each project listed in /p/. Now we perform a single API
call that fetches only the pictures needed, in one go; in other words, it
fetches less data in fewer HTTP calls.
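A minimal sketch of such a combined fetch against Eve's HTTP API (the endpoint URL, the IDs, and the projected fields are illustrative assumptions):

```python
import json

import requests

# Hypothetical picture IDs collected from the projects listed on /p/.
picture_ids = ['5927d4d00000000000000000', '5927d4d00000000000000001']

# One GET for all picture_square/picture_header files at once, projecting
# only the fields the project index actually needs.
resp = requests.get(
    'https://cloud.blender.org/api/files',
    params={
        'where': json.dumps({'_id': {'$in': picture_ids}}),
        'projection': json.dumps({'filename': 1, 'variations': 1}),
    },
)
resp.raise_for_status()
pictures_by_id = {f['_id']: f for f in resp.json()['_items']}
```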
Note that undeleting files cannot be done via Eve, as it doesn't support
PATCHing collections. Instead, direct MongoDB modification is used to set
_deleted=False and provide new _etag and _updated values.
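A sketch of that direct modification with pymongo (the database/collection names, the project filter, and the ETag generation are assumptions):

```python
import uuid
from datetime import datetime, timezone

import pymongo
from bson import ObjectId

files_coll = pymongo.MongoClient()['cloud']['files']  # hypothetical DB/collection
project_oid = ObjectId('5927d4d00000000000000002')    # hypothetical project ID

# Eve can't PATCH a whole collection, so undelete the project's files
# directly, maintaining Eve's own metadata fields by hand.
files_coll.update_many(
    {'project': project_oid, '_deleted': True},
    {'$set': {
        '_deleted': False,
        '_etag': uuid.uuid4().hex,  # random stand-in for a fresh ETag
        '_updated': datetime.now(tz=timezone.utc),
    }},
)
```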
It is possible to perform a GET request with an empty `lookup` param
instead of one like {'_id': {'$in': ['objectID1', 'objectID2', ...]}}.
Such a request would cause a refresh of *all* file documents, which is far
too heavy to do in one client HTTP request.
Since files aren't deleted (yet) when their project is deleted, it can
happen that a file needing a refresh still references a deleted project,
which Eve's validation would reject. By removing the project ID from the
PATCH, Eve doesn't have enough information to perform that check, and the
PATCH goes through fine.
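For illustration, the PATCH payload could be built like this (the file document shown is a made-up example):

```python
file_doc = {  # hypothetical refreshed file document
    '_id': '5927d4d00000000000000003',
    '_etag': '0123456789abcdef',
    'project': '5927d4d00000000000000002',
    'filename': 'render.png',
    'link': 'https://storage.example.com/render.png',
}

# Drop Eve's meta fields and the project reference; without the project
# ID, Eve has nothing to validate against a possibly-deleted project.
patch_doc = {key: value for key, value in file_doc.items()
             if not key.startswith('_')}
patch_doc.pop('project', None)
```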
This command fixes the filename in the Content-Disposition header of file
variations on Google Cloud Storage, repairing files that already existed
before the fix for T51477.
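A sketch of the per-blob repair using the google-cloud-storage client (the bucket name, blob path, and filename are assumptions):

```python
from google.cloud import storage

client = storage.Client()
bucket = client.bucket('blender-cloud')  # hypothetical bucket name

# Point the blob's Content-Disposition at the correct filename and push
# only that metadata change to GCS.
blob = bucket.blob('_/5927d4d0/render-1920x1080.png')  # hypothetical path
blob.content_disposition = 'attachment; filename="render.png"'
blob.patch()
```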
This means it's no longer GCS-specific and can be tested using the local
storage implementation. This required implementing a rename operation. To
mirror Google's API, I've implemented renaming a Blob as a function on the
Bucket class. To me this makes sense, as it requires creating a new Blob
instance, which shouldn't be done by another Blob.
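A minimal sketch of that shape on the local storage backend (the class and method names are assumptions, mirroring Google's Bucket.rename_blob()):

```python
import pathlib


class LocalBlob:
    """Minimal stand-in for a locally stored file."""

    def __init__(self, name: str, bucket: 'LocalBucket') -> None:
        self.name = name
        self.bucket = bucket


class LocalBucket:
    def __init__(self, root: pathlib.Path) -> None:
        self.root = root

    def rename_blob(self, blob: LocalBlob, new_name: str) -> LocalBlob:
        # The bucket owns the rename: it moves the file on disk and
        # creates the new Blob instance, so no Blob has to create
        # another Blob.
        (self.root / blob.name).rename(self.root / new_name)
        return LocalBlob(new_name, bucket=self)
```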
Rather than making them sortable, I made them automatically sorted upon
saving the node. The colour map comes first, then the other maps in
alphabetical order.
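The sort key could look like this (the map_type field name and the 'color' value are assumptions):

```python
def sorted_texture_maps(maps: list) -> list:
    # The colour map sorts first; all other maps follow alphabetically.
    return sorted(maps, key=lambda m: (m['map_type'] != 'color', m['map_type']))
```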
This ensures the thumbnail URL is public so that it won't expire.
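With the google-cloud-storage client that could be done like this (the bucket name and blob path are assumptions):

```python
from google.cloud import storage

bucket = storage.Client().bucket('blender-cloud')  # hypothetical bucket name
blob = bucket.blob('_/5927d4d0/thumbnail-b.jpg')   # hypothetical path

# A public URL, unlike a signed URL, never expires.
blob.make_public()
thumbnail_url = blob.public_url
```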
Since this now requires API calls to Google, I've increased the number of
parallel threads used for indexing; the threads spend more of their time
waiting on network I/O.
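For instance (the worker count and helper names are assumptions):

```python
from concurrent.futures import ThreadPoolExecutor


def index_node(node) -> None:
    """Hypothetical per-node indexing task; blocks on Google API calls."""


nodes_to_index: list = []  # filled with node documents in practice

# More workers than CPU cores is reasonable here, since each task spends
# most of its time waiting on network I/O rather than using the CPU.
with ThreadPoolExecutor(max_workers=20) as pool:
    pool.map(index_node, nodes_to_index)
```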
This is a two-stage approach that happens when a new token is verified
with Blender ID and stored in our local MongoDB:
- Given the remote IP address of the HTTP request, compute and store the
org roles in the token document.
- Recompute the user's roles based on their own roles, regular org roles,
and the roles stored in non-expired token documents.
This happens once per hour, since that's how long we store tokens in our
database.
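A sketch of the two stages as helper functions (the collection layout and field names are assumptions; deriving org roles from the remote IP itself is sketched further below):

```python
import datetime


def store_org_roles_in_token(tokens_coll, token_doc: dict, org_roles: set) -> None:
    """Stage 1: store the org roles granted for the request's remote IP
    in the token document."""
    tokens_coll.update_one(
        {'_id': token_doc['_id']},
        {'$set': {'org_roles': sorted(org_roles)}},
    )


def recompute_user_roles(user_doc: dict, tokens_coll) -> set:
    """Stage 2: the user's own roles, regular org roles, and the roles
    stored in token documents that haven't expired yet."""
    now = datetime.datetime.now(tz=datetime.timezone.utc)
    roles = set(user_doc.get('roles', []))
    roles.update(user_doc.get('org_roles', []))
    for token in tokens_coll.find({'user': user_doc['_id'],
                                   'expire_time': {'$gt': now}}):
        roles.update(token.get('org_roles', []))
    return roles
```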
We can now store IP ranges with Organizations. The aim is that any user
logging in from a remote IP address within such a range gets the
organization roles assigned to the user object stored in the Flask session.
This commit contains only the MongoDB storage and querying, not yet the
updates to the user.
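A sketch of how such ranges could be stored and queried (the field names and the 16-byte encoding are assumptions; real code would also need to map IPv4 addresses into the same 16-byte space as IPv6 ones):

```python
import ipaddress

from bson import Binary


def ip_range_doc(cidr: str) -> dict:
    """Store a range as 16-byte start/end addresses MongoDB can compare."""
    net = ipaddress.ip_network(cidr)
    return {
        'start': Binary(int(net.network_address).to_bytes(16, 'big')),
        'end': Binary(int(net.broadcast_address).to_bytes(16, 'big')),
        'human': cidr,  # human-readable form, for display/editing
    }


def orgs_for_ip(orgs_coll, remote_addr: str):
    """Find organizations whose ip_ranges contain the remote address."""
    addr = Binary(int(ipaddress.ip_address(remote_addr)).to_bytes(16, 'big'))
    return orgs_coll.find({'ip_ranges': {
        '$elemMatch': {'start': {'$lte': addr}, 'end': {'$gte': addr}},
    }})
```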
By refactoring part of comments_for_node into a dedicated function,
render_comments_for_node, we enable Pillar apps to override the comment URL
and to decide, per app, the conditions under which a user is allowed to
post. Further, we introduce an extensible and overridable list_embed.pug,
which currently defines custom blocks for when the user is and is not
allowed to post a comment.
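The shape of the hook could look like this (the signature and template path are assumptions):

```python
from flask import render_template


def render_comments_for_node(node_id: str, *, can_post_comments: bool) -> str:
    """Render a node's comments; Pillar apps override this to plug in
    their own comment URL and their own may-this-user-post rule."""
    return render_template(
        'nodes/custom/comment/list_embed.pug',
        node_id=node_id,
        can_post_comments=can_post_comments,
    )
```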
This undoes commits 90c62664a6713a9330a4fe44f2cfe58124b29ed3 and 18fe240b931ab8dbd3a8a9fb6760c8b8cdba7ad3, and simply adds the node.url property when rendering a post in the posts_view function. This is what the template macro expected in the first place.