Gitea: Split 'data' storage into separate pools #34
Reference: infrastructure/blender-projects-platform#34
Given that the repo-archive disk filled up very quickly all of a sudden, it's clear there's quite some potential to DDoS the site by simply uploading a lot, creating a large repo or, as in the previous case, issuing a lot of download requests.
It'd be good to split up the current 'data' pool into three pools with different semantics (and storage requirements). Some of this can be done by using separate Ceph mounts, or by using quotas instead.
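For the quota route, a sketch of what this could look like on a CephFS mount: CephFS supports per-directory quotas via virtual extended attributes, so each sub-tree (the paths below are placeholders) could be capped individually without separate pools.

```ini
# Cap the repo-archive directory at 50 GiB (path and size are examples)
setfattr -n ceph.quota.max_bytes -v 53687091200 /mnt/cephfs/gitea/repo-archive

# Inspect the quota that is currently set
getfattr -n ceph.quota.max_bytes /mnt/cephfs/gitea/repo-archive
```

Note that CephFS quotas are enforced cooperatively by clients, so they need a reasonably recent kernel or FUSE client to be effective.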
One thing I wonder about: if the blender repository and other repositories end up in separate pools, are hard links between them still possible, so that not every fork takes up its own space? Or do hard links already not work?
With gitea.com, hard links across pools did not work, although that could just be because we didn't look into it much; it may well have been possible with appropriate tuning.
For the repo archive, issue attachments, LFS etc. you may wish to look into Ceph's radosgw and its S3-compatible API, combined with https://docs.gitea.io/en-us/config-cheat-sheet/#storage-storage; that way you can manage that specific storage outside of the VM config.
S3 storage can't be used for the issues search index, code search index, and git repos themselves (and a few other minor things).
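As a rough sketch of the suggestion above: Gitea's storage section can point the S3-capable subsystems at an S3-compatible endpoint such as radosgw via the `minio` storage type. The endpoint, bucket name, and credentials below are placeholders, not a tested configuration.

```ini
; app.ini — global storage backend for the subsystems that support it
; (attachments, LFS, repo archives, avatars, packages)
[storage]
STORAGE_TYPE = minio
MINIO_ENDPOINT = radosgw.internal:7480   ; placeholder radosgw endpoint
MINIO_ACCESS_KEY_ID = <access-key>
MINIO_SECRET_ACCESS_KEY = <secret-key>
MINIO_BUCKET = gitea
MINIO_USE_SSL = false

; Individual subsystems can also be overridden, e.g. a dedicated
; section just for repo archives:
[storage.repo-archive]
STORAGE_TYPE = minio
MINIO_BUCKET = gitea-repo-archive
```

As noted, the git repositories themselves and the search indexes still need local (or CephFS) storage; only the blob-like data moves to the object store.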
Changed title from "Deployment: Split 'data' storage into separate pools" to "Gitea: Split 'data' storage into separate pools".