`manage.py operations merge_project src_url dst_url` moves all nodes and
files from the project with `src_url` to the project with `dst_url`.
This also moves soft-deleted files/nodes, as it ignores the `_deleted`
field. The actual files on the storage backend are copied rather than
moved.
Note that this may invalidate the nodes, as their node type definitions
may differ between projects. Since we use direct MongoDB queries, the
nodes are moved to the new project anyway; this allows for a
move-first-then-fix approach.
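A minimal sketch of that direct-query move with pymongo (collection names and the surrounding wiring are assumptions, not the actual command's code):

```python
from pymongo.database import Database

def merge_project(db: Database, src_url: str, dst_url: str) -> None:
    """Move all nodes and files from the src project to the dst project."""
    src = db.projects.find_one({'url': src_url})
    dst = db.projects.find_one({'url': dst_url})
    # Deliberately no _deleted filter: soft-deleted nodes and files
    # move along with everything else.
    for coll in (db.nodes, db.files):
        coll.update_many({'project': src['_id']},
                         {'$set': {'project': dst['_id']}})
```

Because this bypasses Eve entirely, no node validation happens, which is exactly what makes move-first-then-fix possible.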
This is done via a custom PATCH, due to the lack of transactions in
MongoDB; we cannot undelete both project-referenced files and
file-referenced projects in one atomic operation.
Previously we did an API call for each picture_square and picture_header
of each project listed in /p/. Now we do a single API call that fetches
only the pictures needed, in one go; in other words, it fetches less
data in fewer HTTP calls.
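A sketch of the batched fetch, assuming an Eve-style endpoint that accepts a MongoDB-syntax `where` parameter (the endpoint path and function name are assumptions):

```python
import json
import requests

def fetch_project_pictures(api_url: str, projects: list) -> list:
    """Fetch all picture_square/picture_header files in one request."""
    picture_ids = set()
    for proj in projects:
        picture_ids.update(
            pid for pid in (proj.get('picture_square'),
                            proj.get('picture_header')) if pid)
    # One call with an $in lookup instead of one call per picture.
    where = {'_id': {'$in': sorted(picture_ids)}}
    resp = requests.get(f'{api_url}/files',
                        params={'where': json.dumps(where)})
    resp.raise_for_status()
    return resp.json()['_items']
```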
Note that undeleting files cannot be done via Eve, as it doesn't support
PATCHing collections. Instead, direct MongoDB modification is used to set
`_deleted=False` and to provide new `_etag` and `_updated` values.
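A minimal sketch of that modification with pymongo (the collection handle and the choice of a random hex `_etag` are assumptions):

```python
import uuid
from datetime import datetime, timezone

def undelete_files(files_coll, project_id) -> int:
    """Undelete all files of a project, bypassing Eve."""
    result = files_coll.update_many(
        {'project': project_id, '_deleted': True},
        {'$set': {
            '_deleted': False,
            # Eve normally maintains these two fields; after a direct
            # update we have to hand out fresh values ourselves.
            '_etag': uuid.uuid4().hex,
            '_updated': datetime.now(tz=timezone.utc),
        }})
    return result.modified_count
```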
It is possible to perform a GET request with an empty `lookup` param,
rather than one like `{'_id': {'$in': ['objectID1', 'objectID2', ...]}}`.
Such an unrestricted request would cause a refresh of *all* file
documents, which is far too heavy to do in one client HTTP request.
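For illustration, a guard along these lines could refuse the unrestricted case; the hook name mirrors Eve's `on_pre_GET_<resource>` events, and the refresh helper is hypothetical:

```python
def on_pre_get_files(request, lookup: dict):
    # An empty lookup would match every file document; refreshing them
    # all is far too heavy for a single client HTTP request.
    if not lookup:
        return
    refresh_links_for_lookup(lookup)  # hypothetical helper
```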
Since files aren't deleted (yet) when their project is deleted, it can
happen that a file being refreshed still references a now-deleted
project, which would fail validation. By removing the project ID from
the PATCH, Eve doesn't have enough information to perform that check,
and the refresh works fine.
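A sketch of that workaround, assuming the refresh PATCHes a payload derived from the file document itself (the exact field set is an assumption):

```python
def build_refresh_patch(file_doc: dict) -> dict:
    """Build a PATCH payload that Eve will accept for a file refresh."""
    # Omit 'project' so Eve lacks the information to validate the
    # reference against a (possibly deleted) project; Eve's own meta
    # fields must never be PATCHed either.
    skip = {'project', '_id', '_etag', '_created', '_updated'}
    return {k: v for k, v in file_doc.items() if k not in skip}
```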
This command fixes the filename in the Content-Disposition header of file
variations on Google Cloud Storage. It repairs the files that already
existed before T51477 was fixed.
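With the google-cloud-storage client, the per-file fix could look like this (the bucket handling and function name are assumptions):

```python
from google.cloud import storage

def fix_content_disposition(bucket: storage.Bucket,
                            blob_path: str, filename: str) -> None:
    """Rewrite the Content-Disposition metadata of one file variation."""
    blob = bucket.blob(blob_path)
    blob.content_disposition = 'attachment; filename="%s"' % filename
    # patch() only sends the changed metadata; the object data itself
    # is left untouched.
    blob.patch()
```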
This means it's no longer GCS-specific, and can be tested using the local
storage implementation.
This required implementing a rename operation. To mirror Google's API,
I've implemented renaming a Blob as a method on the Bucket class. To me
this makes sense, as renaming requires creating a new Blob instance,
which shouldn't be done by another Blob.
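A self-contained sketch of that design for the local backend (all class and method names here are assumptions, not the actual storage code):

```python
import os

class Bucket:
    """Minimal stand-in for the abstract bucket class."""
    def blob(self, name: str):
        raise NotImplementedError

class LocalBucket(Bucket):
    def __init__(self, root: str):
        self.root = root

    def blob(self, name: str) -> 'LocalBlob':
        return LocalBlob(self, name)

    def rename_blob(self, blob: 'LocalBlob', new_name: str) -> 'LocalBlob':
        # The Bucket creates the new Blob instance, mirroring
        # Google's bucket.rename_blob(blob, new_name) API.
        new_blob = self.blob(new_name)
        os.rename(blob.abspath(), new_blob.abspath())
        return new_blob

class LocalBlob:
    def __init__(self, bucket: LocalBucket, name: str):
        self.bucket, self.name = bucket, name

    def abspath(self) -> str:
        return os.path.join(self.bucket.root, self.name)
```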
Rather than making them sortable, I made them automatically sorted upon
saving the node. The colour map comes first, then the other maps in
alphabetical order.
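A sketch of the sort, assuming each map is a dict whose `map_type` identifies it and that the colour map uses `'color'` (both assumptions):

```python
def sort_texture_maps(maps: list) -> list:
    """Colour map first, then the remaining maps alphabetically."""
    # False sorts before True, so the colour map is pulled to the front;
    # all other maps fall back to alphabetical map_type order.
    return sorted(maps, key=lambda m: (m['map_type'] != 'color',
                                       m['map_type']))
```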
This ensures the thumbnail URL is public so that it won't expire.
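With the google-cloud-storage client this boils down to something like the following sketch (the surrounding blob handling is an assumption):

```python
from google.cloud import storage

def public_thumbnail_url(blob: storage.Blob) -> str:
    # A signed URL would expire; making the blob public yields a
    # stable URL instead.
    blob.make_public()
    return blob.public_url
```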
Since this now requires API calls to Google, I've increased the number of
parallel threads used for indexing; the threads now spend more of their
time waiting on network I/O.
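A sketch of the idea (the worker count and the indexing callable are assumptions): with I/O-bound tasks, extra threads translate almost directly into throughput.

```python
from concurrent.futures import ThreadPoolExecutor

def index_all(nodes, index_one, max_workers: int = 8) -> list:
    # Each task mostly blocks on a Google API call, so a larger pool
    # keeps work flowing while other threads wait on the network.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(index_one, nodes))
```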