BCon Video Processing Tooling #103966
Reference: infrastructure/conference-website#103966
NOTE: Moved to infrastructure/conference-video-processing#1
The goal of this design task is to define a new tool handling all steps of video processing during the Blender Conference.
Now hosted on the official repo. A bcon-2022 tag has been made for the command-line-based tooling used in previous years.

Requirements
All In One
All required steps must be handled inside a single tool/UI/UX. This tool must give a clear, complete overview of the status of each talk.
The processing of each talk is defined in five main stages, each containing various tasks. As much as possible, the user input in one stage affects background processing in the next stage.
Tasks that can be performed automatically in the background, without any user input, are marked with [auto]. Tasks that are blocking (i.e. require completion before the next stages can be started) are marked with [block]. Tasks that can be processed asynchronously are marked with [async].

Processing Stages
Acquisition:
- [auto] [block] [async] Import raw file(s) from the provider (in practice, copy the file(s) from an external SSD to local storage).

Processing:
- [auto] [async] Backup raw files (in practice, copy the file(s) from local storage to an external HDD and/or network drive [aka Synology]).
- [auto] [block] Automatically add 'extras' (title card, fade in/out, logo, etc.) to both video and thumbnail, based on user input.

Export:
- [auto] [block] [async] Export the final video.
- [auto] [block] [async] Export the final thumbnail.
- [auto] [block] Pre-fill the metadata for the YouTube upload (title, etc.) from the schedule database.

Upload:
- [auto] [block] [async] Upload the final video to YouTube, with all necessary metadata, in a private/unpublished state.
- [auto] [block] [async] Upload the thumbnail.
- [auto] [async] Backup the final video (in practice, copy the file from local storage to an external HDD and/or network drive [aka Synology]).
- [auto] [async] Backup the thumbnail.

Publish:
- [auto] [block] Once all other steps are finished, remove the local working files.

Talk Statuses
Each talk has a unique status, and a set of additional reports (errors, warnings, ...).
Statuses are related to processing stages, with two sub-types: Active and Pending. Pending means that the user requested that stage to be the active one, but some tasks from previous stages (or other conditions, like errors) prevent the talk from effectively being in that stage yet.
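The stage/status combinations described above could be modeled with a small enum; a minimal sketch (all names are illustrative, not part of the design):

```python
from enum import Enum

# Each processing stage yields a Pending and an Active status; the
# standalone Future / Done-Pending / Done statuses are added around them.
STAGES = ["Acquisition", "Processing", "Export", "Upload", "Publish"]

Status = Enum(
    "Status",
    ["Future"]
    + [f"{stage}_{sub}" for stage in STAGES for sub in ("Pending", "Active")]
    + ["Done_Pending", "Done"],
)

def effective_status(requested_stage: str, blocking_tasks_done: bool) -> Status:
    """A talk becomes Active in the requested stage only once all blocking
    tasks of previous stages have completed; otherwise it stays Pending."""
    sub = "Active" if blocking_tasks_done else "Pending"
    return Status[f"{requested_stage}_{sub}"]
```

This keeps the Pending/Active decision in one place, so the UI only ever asks for a target stage and lets the model resolve the actual status.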
Three other statuses exist for talks not being processed.
Future: talk processing has not yet started.
Acquisition-Pending: user activated the talk, it is getting ready for acquisition.
Acquisition-Active: user is working on the acquisition stage.
Processing-Pending: user finished the acquisition actions and requested to move to the Processing stage, but some blocking acquisition tasks are still active.
Processing-Active
Export-Pending
Export-Active
Upload-Pending
Upload-Active
Publish-Pending
Publish-Active
Done-Pending: user requested finalization of the talk, but some blocking tasks are still active.
Done: talk has been fully and successfully processed.

Reports can be of three types:
Info: for basic user information.
Warning: for important user information and non-critical issues.
Error: for critical issues blocking the talk processing.

Reports are stored per status, and an Error one should always prevent moving forward in the processing of a talk.

Resilience
Many of these processes (file copying, video encoding, internet uploads) are error-prone. The tool must carefully check the success/failure of each step, and retry/resume a reasonable number of times before reporting failure to the user.
It must also keep track of storage space, and warn the user early enough when free space is getting too small.
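A sketch of what such a retry wrapper and storage check could look like (retry counts and the free-space threshold are arbitrary placeholders, not values from the design):

```python
import shutil
import time

def run_with_retries(task, max_retries=3, delay=5.0):
    """Run a fallible task (copy, encode, upload, ...), retrying a few
    times before surfacing the failure to the user."""
    for attempt in range(1, max_retries + 1):
        try:
            return task()
        except Exception as exc:
            if attempt == max_retries:
                raise RuntimeError(f"task failed after {attempt} attempts") from exc
            time.sleep(delay)

def check_free_space(path, min_free_bytes=50 * 2**30):
    """Return False when free space on the working storage drops below
    the threshold (50 GiB here is an arbitrary example), so the UI can
    warn the user early."""
    return shutil.disk_usage(path).free >= min_free_bytes
```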
Source of Truth for Status
There should be an external 'source of truth' for the tool, to get info about the talks etc., and to reliably store the current status of each task.
Final Export
YouTube recommendations are h264 for video, AAC (>= 128 kbps) for audio.
If the raw video is already in h264, re-encode only the small parts that are edited (like title, fades, ...) into small intermediate files, then use ffmpeg to stitch all the bits together (using ffmpeg concat). Otherwise, re-encode everything.

Youtube Upload
Upload should leave the video unpublished (or private?). The user is responsible for doing a final check before making it public.
Upload should pre-fill as much data as possible and available, either from the schedule database or as edited by the user (title, description, keywords, but also chapters [part of the description]?).
Youtube API also has a Python module.
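With the official Python client (google-api-python-client), the pre-filled metadata could be assembled as a plain dict and handed to videos().insert(). The snippet/status field names below follow the YouTube Data API; the talk fields themselves are illustrative assumptions:

```python
def build_upload_body(talk: dict) -> dict:
    """Assemble the videos().insert() request body from schedule data.
    The video is created private so the user can do a final check
    before publishing."""
    return {
        "snippet": {
            "title": talk["title"],
            "description": talk.get("description", ""),
            "tags": talk.get("keywords", []),
        },
        "status": {"privacyStatus": "private"},
    }

# With google-api-python-client this body would be used roughly as:
#   youtube.videos().insert(part="snippet,status",
#                           body=build_upload_body(talk),
#                           media_body=MediaFileUpload(video_path,
#                                                      resumable=True)).execute()

body = build_upload_body({"title": "Opening Talk", "keywords": ["bcon"]})
```

Keeping the body construction separate from the actual (credentialed, network-bound) upload call makes the metadata pre-fill easy to test offline.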
Implementation Ideas
WIP mockups/UI/UX designs.
All In One Tool
An add-on running in Blender.
Slow tasks (like file copying, video encoding, uploading etc.) run in separate sub-processes in the background.
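For example, the ffmpeg concat step could be launched as such a background sub-process (file names are illustrative; the concat demuxer needs -safe 0 when the list uses absolute paths):

```python
import subprocess

def write_concat_list(parts, list_path):
    """Write the input list consumed by ffmpeg's concat demuxer."""
    with open(list_path, "w") as f:
        for part in parts:
            f.write(f"file '{part}'\n")

def start_stitch(list_path, output_path):
    """Launch ffmpeg in the background; the main (Blender) process can
    poll Popen.poll() later instead of blocking the UI.  '-c copy'
    stitches the parts without re-encoding them."""
    cmd = ["ffmpeg", "-y", "-f", "concat", "-safe", "0",
           "-i", list_path, "-c", "copy", output_path]
    return subprocess.Popen(cmd,
                            stdout=subprocess.DEVNULL,
                            stderr=subprocess.DEVNULL)
```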
Some initial ideas:
Use the VSE.
Talks are listed in a UIList widget, with basic info and status.
Talks being processed each have a matching meta-strip in the VSE. Status of the talks can also be shown here (e.g. with strip color tag?).
There is only one active talk from user PoV, selected/defined either by selecting it in the UIList, or by selecting its matching VSE meta strip.
Only the active talk is not muted in the VSE.
Basic user input is done on the active talk (aka active meta strip):
Other user editing (like handling meta-data for upload etc.) is done in panels under the main talk list.
Each talk has 5 main stages as described above. Only one is active (at user-level, also UI) at a time for a given talk.
General UI Mockup

(mockup image)

UI Workflow Mockup

(mockup image)
Resilience
Validation/error handling is done both in the sub-processes and in the main one.
It includes at least:
Source of Truth / Status
The schedule database also stores general status info about each talk.
Writing to this database is done in as secure a way as possible (i.e. atomically, e.g. by writing to a temp file and doing an atomic file replacement).
Access to this database is only done from a single main process.
Practically, this can be a JSON file.
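A minimal sketch of that atomic write, assuming a JSON file (the temp file must live in the same directory so os.replace stays an atomic rename):

```python
import json
import os
import tempfile

def save_status(db_path, data):
    """Atomically persist the status database: write a temp file in the
    same directory, flush it to disk, then atomically replace the old file,
    so readers never see a half-written JSON file."""
    dir_name = os.path.dirname(os.path.abspath(db_path))
    fd, tmp_path = tempfile.mkstemp(dir=dir_name, suffix=".tmp")
    try:
        with os.fdopen(fd, "w") as f:
            json.dump(data, f, indent=2)
            f.flush()
            os.fsync(f.fileno())
        os.replace(tmp_path, db_path)  # atomic on POSIX and Windows
    except BaseException:
        os.unlink(tmp_path)
        raise
```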
Changed status from 'Needs Triage' to: 'Confirmed'
Added subscriber: @mont29
For file copying you could, for example, run shasum -a 256 at the source as well as at the target after copying, to compare their contents.
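Inside the tool, the suggested check could use Python's hashlib instead of shelling out to shasum; a sketch (the chunk size is an arbitrary choice):

```python
import hashlib

def sha256_of(path, chunk_size=1024 * 1024):
    """Stream the file through SHA-256 so large raw videos
    never need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

def verify_copy(src, dst):
    """Compare source and target checksums after copying."""
    return sha256_of(src) == sha256_of(dst)
```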
Changed title from 'BConf Video Processing Tooling' to 'BCon Video Processing Tooling'.

Closing as this is now essentially done - and it was added to the wrong repo in the first place. Recreated as infrastructure/conference-video-processing#1