BCon Video Processing Tooling #103966

Closed
opened 2023-01-18 13:07:18 +01:00 by Bastien Montagne · 4 comments

NOTE: Moved to infrastructure/conference-video-processing#1


The goal of this design task is to define a new tool handling all steps of video processing during the Blender Conference.

Now hosted on the official repo. A bcon-2022 tag has been made for the previous, command-line-based tooling used in earlier years.

Requirements

All In One

All required steps must be processed inside a single tool/UI/UX. This tool must give a clear complete overview of the status of each talk.

The processing of each talk is defined in five main stages, each containing various tasks. As much as possible, user input in one stage drives background processing for the next.

Tasks that can be performed automatically in the background, without any user input, are marked with [auto]. Tasks that are blocking (i.e. require completion before next stages can be started) are marked with [block]. Tasks that can be processed asynchronously are marked with [async].

Processing Stages

  • Acquisition:

    • Activate a talk.
    • Select raw file(s) from the provider.
    • [auto] [block] [async] Import raw file(s) from the provider (practically, copy the file(s) from an external SSD to local storage).
  • Processing:

    • [auto] [async] Backup raw files (practically, copy the file(s) from local storage to an external HDD and/or network drive (aka Synology)).
    • Let user check validity of raw video, and specify in and out cut points.
    • Let user select a thumbnail frame.
    • [auto] [block] Automatic addition of 'extras' (title card, fade in/out, logo, etc.) to both video and thumbnail, based on user input.
    • Let user validate final edits.
  • Export:

    • [auto] [block] [async] Export the final video.
    • [auto] [block] [async] Export the final thumbnail.
    • [auto] [block] Pre-fill the metadata for the YouTube upload (title, etc.) from the schedule database.
    • Let user validate final exports.
    • Let user check/edit/extend pre-filled meta-data.
  • Upload:

    • [auto] [block] [async] Upload final video to YouTube, with all necessary metadata, in private/unpublished state.
    • [auto] [block] [async] Upload thumbnail.
    • [auto] [async] Backup final video (practically, copy the file from local storage to an external HDD and/or network drive (aka Synology)).
    • [auto] [async] Backup thumbnail.
    • (No user interaction at this stage).
  • Publish:

    • Check/validate the video on youtube, and publish it.
      • This is likely the only step that happens outside the processing tool, on the YouTube website. The webpage can still be opened automatically by the tool.
    • Finalize a talk (user decision).
    • [auto] [block] Once all other steps are finished, remove local working files.
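The stage/task structure above could be modeled roughly as follows (a minimal sketch; the class and field names are illustrative, not taken from the actual tooling):

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class TaskState(Enum):
    TODO = auto()
    RUNNING = auto()
    DONE = auto()
    FAILED = auto()

@dataclass
class Task:
    name: str
    is_auto: bool = False      # [auto]: runs in the background, no user input
    is_blocking: bool = False  # [block]: must finish before the next stage
    is_async: bool = False     # [async]: may keep running during other work
    state: TaskState = TaskState.TODO

@dataclass
class Stage:
    name: str
    tasks: list = field(default_factory=list)

    def blocks_next_stage(self) -> bool:
        # A stage holds the next one back while any blocking task is unfinished.
        return any(t.is_blocking and t.state is not TaskState.DONE
                   for t in self.tasks)

acquisition = Stage("Acquisition", [
    Task("Activate talk"),
    Task("Select raw files"),
    Task("Import raw files", is_auto=True, is_blocking=True, is_async=True),
])
```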

Note


Many of the steps inside each of the five high-level stages defined above can be performed in parallel.

Talk Statuses

Each talk has a unique status, and a set of additional reports (errors, warnings, ...).

Statuses are tied to processing stages, with two sub-types: Active and Pending. Pending means that the user requested that stage to become the active one, but some tasks from previous stages (or other conditions, like errors) prevent the talk from effectively being in that stage yet.

Three other statuses exist for talks not being processed.

  • Future: talk processing has not yet started.
  • Acquisition-Pending: user activated the talk; it is getting ready for acquisition.
  • Acquisition-Active: user is working on the acquisition stage.
  • Processing-Pending: user finished acquisition actions and requested to move to the Processing stage, but some blocking acquisition tasks are still active.
  • Processing-Active
  • Export-Pending
  • Export-Active
  • Upload-Pending
  • Upload-Active
  • Publish-Pending
  • Publish-Active
  • Done-Pending: user requested finalization of the talk, but some blocking tasks are still active.
  • Done: talk has been fully and successfully processed.

Reports can be of three types:

  • Info: basic user information.
  • Warning: important user information and non-critical issues.
  • Error: critical issues blocking the talk processing.

Reports are stored per status, and an Error one should always prevent moving forward in the processing of a talk.
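The "Error blocks forward movement" rule is simple enough to sketch (hypothetical names, not the actual implementation):

```python
from enum import Enum

class ReportType(Enum):
    INFO = "info"        # basic user information
    WARNING = "warning"  # important information / non-critical issue
    ERROR = "error"      # critical issue, blocks processing

def can_move_forward(reports) -> bool:
    """An Error report always prevents advancing the talk to the next stage."""
    return not any(kind is ReportType.ERROR for kind, _message in reports)
```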

Resilience

Many of these processes (file copying, video encoding, internet uploads) are error prone. The tool must carefully check the success/failure of each step, and perform a reasonable number of retries/resumes before reporting failure to the user.
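A retry wrapper for such fallible steps could look like this (a sketch; the attempt count and backoff values are placeholders):

```python
import time

def run_with_retries(step, attempts=3, delay=1.0):
    """Run a fallible step (copy, encode, upload), retrying a few
    times with a simple linear backoff before reporting failure."""
    last_error = None
    for attempt in range(attempts):
        try:
            return step()
        except OSError as exc:  # e.g. a failed copy or a network hiccup
            last_error = exc
            time.sleep(delay * (attempt + 1))
    raise last_error
```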

It must also keep track of storage space, and warn the user early enough when free space is getting too low.
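Free-space monitoring can be done with the standard library; a sketch (the 50 GiB threshold is an arbitrary example, not a decided value):

```python
import shutil

def free_space_warning(path, min_free_gb=50.0):
    """Return a warning string when free space on `path` drops below
    the threshold, or None if there is enough room."""
    free_gb = shutil.disk_usage(path).free / 1024**3
    if free_gb < min_free_gb:
        return f"Only {free_gb:.1f} GiB free on {path}"
    return None
```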

Source of Truth for Status

There should be an external 'source of truth' for the tool, to get info about talks etc., and to reliably store the current status of each task.

Final Export

YouTube recommendations are H.264 for video and AAC (>= 128 kbps) for audio.

If the raw video is already in H.264, only re-encode the small parts that are edited (title, fades, ...) into small intermediate files, then use ffmpeg to stitch all the bits together (using the ffmpeg concat demuxer). Otherwise, re-encode everything.
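The stitching step maps to ffmpeg's concat demuxer with stream copy; a sketch building the command line (file names are illustrative, and all segments must share codec parameters for `-c copy` to work):

```python
def concat_command(list_file, output):
    """Build an ffmpeg concat-demuxer command that stitches pre-encoded
    segments together without re-encoding. `list_file` contains lines like:
        file 'title_card.mp4'
        file 'talk_trimmed.mp4'
        file 'outro.mp4'
    """
    return ["ffmpeg", "-f", "concat", "-safe", "0",
            "-i", list_file, "-c", "copy", output]
```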

Youtube Upload

Upload should leave the video unpublished (or private?). The user is responsible for doing a final check before making it public.

Upload should pre-fill as much metadata as is available, either from the schedule database or as edited by the user (title, description, keywords, but also chapters [part of the description]?).

The YouTube API also has a Python module.
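Pre-filling the upload metadata amounts to building the request body for a YouTube Data API v3 `videos.insert` call; a sketch (the `talk` dict here stands in for a schedule-database record, whose real schema may differ):

```python
def youtube_upload_body(talk):
    """Build the metadata body for a YouTube Data API v3 videos.insert
    request, pre-filled from a schedule record."""
    return {
        "snippet": {
            "title": talk["title"],
            "description": talk.get("description", ""),
            "tags": talk.get("keywords", []),
        },
        "status": {
            # Upload unpublished; the user publishes after a final check.
            "privacyStatus": "private",
        },
    }
```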

Implementation Ideas

WIP mockups/UI/UX designs.

All In One Tool

An add-on running in Blender.

Slow tasks (like file copying, video encoding, uploading etc.) run in separate sub-processes in the background.
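Such a background job could be wrapped as a sub-process that the main process polls without blocking; a minimal sketch (class name illustrative):

```python
import subprocess
import sys

class BackgroundTask:
    """Run a slow job (copy, encode, upload) as a sub-process and
    poll it without blocking the main (UI) process."""

    def __init__(self, command):
        self.process = subprocess.Popen(
            command, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)

    def poll(self):
        """Return None while still running, else the exit code."""
        return self.process.poll()

    def succeeded(self):
        return self.process.returncode == 0

# Example: a trivial sub-process standing in for a long copy/encode job.
task = BackgroundTask([sys.executable, "-c", "pass"])
exit_code = task.process.wait()
```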

Some initial ideas:

  • Use the VSE.

  • Talks are listed in a UIList widget, with basic info and status.

  • Talks being processed each have a matching meta-strip in the VSE. Status of the talks can also be shown here (e.g. with strip color tag?).

    • These can be generated automatically using time info (e.g. generate new talks ten minutes before their scheduled end)?
  • There is only one active talk from user PoV, selected/defined either by selecting it in the UIList, or by selecting its matching VSE meta strip.

  • Only the active talk is not muted in the VSE.

  • Basic user input is done on the active talk (aka active meta strip):

    • Define in and out points.
    • Select thumbnail.
    • Optionally, the active talk can be further edited by entering its meta strip (this is only in case manual editing of the video is required).
  • Other user editing (like handling meta-data for upload etc.) is done in panels under the main talk list.

Each talk has five main stages as described above. Only one is active (at user level, in the UI) at a time for a given talk.

  • The user can navigate between these five stages in both directions.
  • User changes in one stage invalidate (or require updates to) all processing (done or on-going) from the current and next stages.
  • Talks store status info for each stage and each step within it:
    • Not yet started (TODO).
    • Being processed (user-level).
    • Being processed (async/background processes).
    • Done (user-level).
    • Done (async/background processes).
  • Some async processes are blocking (i.e. next stage cannot start before they are done), like initial raw files copy, final video render.
  • Some async processes are not blocking (i.e. next stage can be started while they are still on-going), mainly the backup operations.
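The Pending/Active distinction from the "Talk Statuses" section then falls out of the blocking rule; a sketch:

```python
def effective_status(requested_stage, blocking_tasks_active, has_errors):
    """A talk asked to enter `requested_stage` stays Pending while blocking
    tasks from earlier stages (or error reports) are still in the way."""
    if blocking_tasks_active or has_errors:
        return f"{requested_stage}-Pending"
    return f"{requested_stage}-Active"
```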

General UI Mockup

[Mockup image: general_ui.png (F14175270)]

Some Notes:

  • Not sure about the file browser; unless we can make drag-and-drop work for our use case (doubtful), it may be useless.

UI Workflow Mockup

[Mockup image: workflow_panels_mockup.png (F14197171)]

Some Notes:

  • Path settings panel on top could be in the add-on user preferences instead. On the other hand, we could hack the UI to show a preview of the free space available on each path, which is necessary info to have.
  • The talks UIList filters would, among other things, include:
    • Filter out done talks.
    • Filter out talks that have not yet started.
    • Filter in talks by their room.

Resilience

Validation/error handling is done both in the sub-processes, and in the main one.

It includes at least:

  • Check on sub-processes exit codes.
  • Potentially, refined checks on on-going processes when possible/needed (e.g. partial parsing of ffmpeg output).
  • General easy checks on data validity (at least e.g. file sizes).
  • Usage of the source of truth (see below) as the final status check, also to resume from a crash of the main process in a valid-enough state.
  • Auto-save on main .blend file.

Source of Truth / Status

The schedule database also stores general status info about each talk.

Writing to this database is done in an as-safe-as-possible way (i.e. atomically, e.g. by writing to a temp file and doing an atomic file replacement).

Access to this database is only done from a single main process.

Practically, this can be a JSON file.
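The temp-file-plus-atomic-replace scheme is a few lines of standard library; a sketch:

```python
import json
import os
import tempfile

def atomic_write_json(path, data):
    """Write JSON to a temp file in the target directory, then atomically
    replace the target, so a crash never leaves a half-written file."""
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp_path = tempfile.mkstemp(dir=directory, suffix=".tmp")
    try:
        with os.fdopen(fd, "w") as f:
            json.dump(data, f, indent=2)
        os.replace(tmp_path, path)  # atomic on POSIX and Windows
    except BaseException:
        os.unlink(tmp_path)
        raise

# Usage sketch: persist a talk's status.
status_path = os.path.join(tempfile.mkdtemp(), "status.json")
atomic_write_json(status_path, {"talk": "opening", "status": "Done"})
```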


Changed status from 'Needs Triage' to: 'Confirmed'


Added subscriber: @mont29


For file copying you could for example use `shasum -a 256` at the source as well as at the target after copying, to compare their contents.
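In Python that check could use `hashlib` instead of shelling out; a sketch:

```python
import hashlib
import shutil

def sha256_of(path, chunk_size=1 << 20):
    """Stream a file through SHA-256 (the equivalent of `shasum -a 256`)."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

def copy_verified(src, dst):
    """Copy a file, then confirm source and target digests match."""
    shutil.copyfile(src, dst)
    if sha256_of(src) != sha256_of(dst):
        raise IOError(f"checksum mismatch copying {src} -> {dst}")
```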
Bastien Montagne changed title from BConf Video Processing Tooling to BCon Video Processing Tooling 2023-11-01 11:21:27 +01:00
Bastien Montagne added the Type: Design label and removed the Type: Report label 2023-11-01 11:29:12 +01:00

Closing as this is now essentially done - and it was added to the wrong repo in the first place. Recreated as infrastructure/conference-video-processing#1

Reference: infrastructure/conference-website#103966