Build with Magefile #104341

Merged
Sybren A. Stüvel merged 26 commits from magefile into main 2024-10-04 21:59:46 +02:00
246 changed files with 14552 additions and 3881 deletions
Showing only changes of commit 77e8c42cd6

.gitattributes vendored
View File

@ -3,6 +3,11 @@
/addon/flamenco/manager_README.md linguist-generated=true
/web/app/src/manager-api/** linguist-generated=true
**/*.gen.go linguist-generated=true
/go.sum linguist-generated=true
# In your Git config, set:
# git config core.eol native
# git config core.autocrlf true
# Set the default newline behavior, in case people don't have core.autocrlf set.
* text=auto
@ -19,6 +24,7 @@
*.md text
*.py text
*.sh text
*.sql text
*.svg text
*.toml text
*.txt text

View File

@ -1,9 +1,9 @@
name: Bug Report
about: File a bug report
labels:
- "type::Report"
- "status::Needs Triage"
- "priority::Normal"
- "Type/Report"
- "Status/Needs Triage"
- "Priority/Normal"
body:
- type: markdown
attributes:

View File

@ -1,7 +1,7 @@
name: Design
about: Create a design task (for developers only)
labels:
- "type::Design"
- "Type/Design"
body:
- type: textarea
id: body

View File

@ -0,0 +1,41 @@
name: Custom Job Type
about: Submit your custom job type
labels:
- "Type/Job Type"
body:
- type: markdown
attributes:
value: |
## Thanks for contributing!
With this form you can submit your custom job type for listing on https://flamenco.blender.org/third-party-jobs/
- type: input
id: blender_version
attributes:
label: "Blender Version(s)"
description: "Which version(s) of Blender are known to work with this job type?"
required: true
- type: input
id: flamenco_version
attributes:
label: "Flamenco Version(s)"
description: "Which version(s) of Flamenco are known to work with this job type?"
required: true
- type: textarea
id: description
attributes:
label: "Description"
description: "Please describe what this job type does, what the target audience is, how to use it, etc. Feel free to include images as well."
required: true
- type: markdown
attributes:
value: |
Please understand that both Flamenco and Blender are under constant
development. By their very nature, this means that every once in a while
your job type will need some attention and updating.
- type: checkboxes
id: understanding
attributes:
label: "Will you help to keep things up to date?"
options:
- label: "Yes, I'll check with new versions of Blender and/or Flamenco, and send in a report when updating is necessary"

View File

@ -1,7 +1,7 @@
name: To Do
about: Create a to do task (for developers only)
labels:
- "type::To Do"
- "Type/To Do"
body:
- type: textarea
id: body

View File

@ -4,11 +4,36 @@ This file contains the history of changes to Flamenco. Only changes that might
be interesting for users are listed here, such as new features and fixes for
bugs in actually-released versions.
## 3.5 - in development
## 3.6 - in development
- Add MQTT support. Flamenco Manager can now send internal events to an MQTT broker.
- Change the name of the add-on from "Flamenco 3" to just "Flamenco".
- Add `label` to job settings, to have full control over how they are presented in Blender's job submission GUI. If a job setting does not define a label, its `key` is used to generate one (like Flamenco 3.5 and older).
- Add `shellSplit(someString)` function to the job compiler scripts. This splits a string into an array of strings using shell/CLI semantics.
- Make it possible to script job submissions in Blender, by executing the `bpy.ops.flamenco.submit_job(job_name="jobname")` operator (a minimal sketch follows this list).
- Security updates of some dependencies:
- [GO-2024-2937: Parsing a corrupt or malicious image with invalid color indices can cause a panic](https://pkg.go.dev/vuln/GO-2024-2937)
- Web interface: list the job's worker tag in the job details.
- Ensure the submitted scene is rendered in a multi-scene blend file.
- Security updates of dependencies:
- [GO-2024-3106: Stack exhaustion in Decoder.Decode in encoding/gob](https://pkg.go.dev/vuln/GO-2024-3106)
- Fix bug where database foreign key constraints could be deactivated ([#104305](https://projects.blender.org/studio/flamenco/issues/104305)).
- Fix bug when submitting a file with a non-ASCII name via Shaman ([#104338](https://projects.blender.org/studio/flamenco/issues/104338)).
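
The scripted-submission entry above can be exercised from Blender's Python console. The following is a minimal sketch, assuming the add-on is enabled, the Manager info has been refreshed, and a job type is selected for the scene; only the operator and its `job_name` keyword come from the changelog entry itself:

```python
import bpy

scene = bpy.context.scene

# Optional: the new submit-as-paused property makes the job start 'paused'
# instead of 'queued'.
scene.flamenco_job_submit_as_paused = True

# Submit the current scene as a Flamenco job. Everything else (job type,
# priority, worker tag) uses whatever is currently configured in the scene
# and add-on preferences.
bpy.ops.flamenco.submit_job(job_name="scripted render job")
```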
## 3.5 - released 2024-04-16
- Add MQTT support ([docs](https://flamenco.blender.org/usage/manager-configuration/mqtt/)). Flamenco Manager can now send internal events to an MQTT broker.
- Simplify the preview video filename when a complex set of frames is rendered ([#104285](https://projects.blender.org/studio/flamenco/issues/104285)). Instead of `video-1, 4, 10.mp4` it is now simply `video-1-10.mp4`.
- Make the `blendfile` parameter of a `blender-render` command optional. This makes it possible to pass, for example, a Python file that loads/constructs the blend file, instead of loading one straight from disk.
- Show the farm status in the web frontend. This shows whether the farm is actively working on a job, idle, asleep (all workers are sleeping and no work is queued), waiting (all workers are sleeping, and work is queued), or inoperable (no workers, or all workers are offline). This status is also broadcast as an event via the event bus, and is thus available via SocketIO and MQTT.
- Fix an issue where the columns in the web interface wouldn't correctly resize when the shown information changed.
- Add-on: replace the different 'refresh' buttons (for Manager info & storage location, job types, and worker tags) with a single button that just refreshes everything in one go. The information obtained from Flamenco Manager is now stored in a JSON file on disk, making it independent from Blender auto-saving the user preferences.
- Ensure the web frontend connects to the backend correctly when served over HTTPS ([#104296](https://projects.blender.org/studio/flamenco/pulls/104296)).
- For Workers running on Linux, it is now possible to configure the "OOM score adjustment" for sub-processes. This makes it possible for the out-of-memory killer to target Blender, and not Flamenco Worker itself.
- Security updates of some dependencies:
- [Incorrect forwarding of sensitive headers and cookies on HTTP redirect in net/http](https://pkg.go.dev/vuln/GO-2024-2600)
- [Memory exhaustion in multipart form parsing in net/textproto and net/http](https://pkg.go.dev/vuln/GO-2024-2599)
- [Verify panics on certificates with an unknown public key algorithm in crypto/x509](https://pkg.go.dev/vuln/GO-2024-2598)
- [HTTP/2 CONTINUATION flood in net/http](https://pkg.go.dev/vuln/GO-2024-2687)
## 3.4 - released 2024-01-12

View File

@ -4,7 +4,7 @@ PKG := projects.blender.org/studio/flamenco
# To update the version number in all the relevant places, update the VERSION
# variable below and run `make update-version`.
VERSION := 3.5-alpha1
VERSION := 3.6-alpha5
# "alpha", "beta", or "release".
RELEASE_CYCLE := alpha
@ -240,7 +240,7 @@ swagger-ui:
test:
# Ensure the web-static directory exists, so that `web/web_app.go` can embed something.
mkdir -p ${WEB_STATIC}
go test -short ./...
go test -short -failfast ./...
clean:
@go clean -i -x
@ -355,6 +355,7 @@ release-package:
$(MAKE) -s release-package-linux
$(MAKE) -s release-package-darwin
$(MAKE) -s release-package-windows
$(MAKE) -s clean
.PHONY: release-package-linux
release-package-linux:

View File

@ -1,6 +1,6 @@
# Flamenco 3
# Flamenco
This repository contains the sources for Flamenco 3. The Manager, Worker, and
This repository contains the sources for Flamenco. The Manager, Worker, and
Blender add-on sources are all combined in this one repository.
The documentation is available on https://flamenco.blender.org/, including

View File

@ -1,4 +1,4 @@
# Flamenco 3 Blender add-on
# Flamenco Blender add-on
## Setting up development environment

View File

@ -3,16 +3,16 @@
# <pep8 compliant>
bl_info = {
"name": "Flamenco 3",
"name": "Flamenco",
"author": "Sybren A. Stüvel",
"version": (3, 5),
"version": (3, 6),
"blender": (3, 1, 0),
"description": "Flamenco client for Blender.",
"location": "Output Properties > Flamenco",
"doc_url": "https://flamenco.blender.org/",
"category": "System",
"support": "COMMUNITY",
"warning": "This is version 3.5-alpha1 of the add-on, which is not a stable release",
"warning": "This is version 3.6-alpha5 of the add-on, which is not a stable release",
}
from pathlib import Path
@ -27,6 +27,7 @@ if __is_first_load:
preferences,
projects,
worker_tags,
manager_info,
)
else:
import importlib
@ -38,6 +39,7 @@ else:
preferences = importlib.reload(preferences)
projects = importlib.reload(projects)
worker_tags = importlib.reload(worker_tags)
manager_info = importlib.reload(manager_info)
import bpy
@ -154,12 +156,21 @@ def register() -> None:
max=100,
)
bpy.types.Scene.flamenco_job_submit_as_paused = bpy.props.BoolProperty(
name="Flamenco Job Submit as Paused",
description="Whether the job is paused initially; Checked sets the job to `paused`, and Unchecked sets the job to `queued`",
default=False,
)
preferences.register()
worker_tags.register()
operators.register()
gui.register()
job_types.register()
# Once everything is registered, load the cached manager info from JSON.
manager_info.load_into_cache()
def unregister() -> None:
discard_global_flamenco_data(None)

View File

@ -180,9 +180,9 @@ class PackThread(threading.Thread):
def poll(self, timeout: Optional[int] = None) -> Optional[Message]:
"""Poll the queue, return the first message or None if there is none.
:param timeout: Max time to wait for a message to appear on the queue.
If None, will not wait and just return None immediately (if there is
no queued message).
:param timeout: Max time to wait for a message to appear on the queue,
in seconds. If None, will not wait and just return None immediately
(if there is no queued message).
"""
try:
return self.queue.get(block=timeout is not None, timeout=timeout)

View File

@ -1,6 +1,7 @@
# SPDX-License-Identifier: GPL-3.0-or-later
"""BAT interface for sending files to the Manager via the Shaman API."""
import email.header
import logging
import random
import platform
@ -32,6 +33,7 @@ MAX_FAILED_PATHS = 8
HashableShamanFileSpec = tuple[str, int, str]
"""Tuple of the 'sha', 'size', and 'path' fields of a ShamanFileSpec."""
# Mypy doesn't understand that submodules.pack.Packer exists.
class Packer(submodules.pack.Packer): # type: ignore
"""Creates BAT Packs on a Shaman server."""
@ -286,12 +288,12 @@ class Transferrer(submodules.transfer.FileTransferer): # type: ignore
return None
self.log.debug(" %s: %s", file_spec.status, file_spec.path)
match file_spec.status.value:
case "unknown":
status = file_spec.status.value
if status == "unknown":
to_upload.appendleft(file_spec)
case "uploading":
elif status == "uploading":
to_upload.append(file_spec)
case _:
else:
msg = "Unknown status in response from Shaman: %r" % file_spec
self.log.error(msg)
self.error_set(msg)
@ -365,6 +367,7 @@ class Transferrer(submodules.transfer.FileTransferer): # type: ignore
)
local_filepath = self._rel_to_local_path[file_spec.path]
filename_header = _encode_original_filename_header(file_spec.path)
try:
with local_filepath.open("rb") as file_reader:
self.shaman_api.shaman_file_store(
@ -372,24 +375,25 @@ class Transferrer(submodules.transfer.FileTransferer): # type: ignore
filesize=file_spec.size,
body=file_reader,
x_shaman_can_defer_upload=can_defer,
x_shaman_original_filename=file_spec.path,
x_shaman_original_filename=filename_header,
)
except ApiException as ex:
match ex.status:
case 425: # Too Early, i.e. defer uploading this file.
if ex.status == 425:
# Too Early, i.e. defer uploading this file.
self.log.info(
" %s: someone else is uploading this file, deferring",
file_spec.path,
)
defer(file_spec)
continue
case 417: # Expectation Failed; mismatch of checksum or file size.
elif ex.status == 417:
# Expectation Failed; mismatch of checksum or file size.
msg = "Error from Shaman uploading %s, code %d: %s" % (
file_spec.path,
ex.status,
ex.body,
)
case _: # Unknown error
else: # Unknown error
msg = "API exception\nHeaders: %s\nBody: %s\n" % (
ex.headers,
ex.body,
@ -453,15 +457,11 @@ class Transferrer(submodules.transfer.FileTransferer): # type: ignore
checkoutRequest
)
except ApiException as ex:
match ex.status:
case 424: # Files were missing
if ex.status == 424: # Files were missing
msg = "We did not upload some files, checkout aborted"
case 409: # Checkout already exists
msg = (
"There is already an existing checkout at %s"
% self.checkout_path
)
case _: # Unknown error
elif ex.status == 409: # Checkout already exists
msg = "There is already an existing checkout at %s" % self.checkout_path
else: # Unknown error
msg = "API exception\nHeaders: %s\nBody: %s\n" % (
ex.headers,
ex.body,
@ -529,3 +529,16 @@ def _root_path_strip(path: PurePath) -> PurePosixPath:
if path.is_absolute():
return PurePosixPath(*path.parts[1:])
return PurePosixPath(path)
def _encode_original_filename_header(filename: str) -> str:
"""Encode the 'original filename' as valid HTTP Header.
See the specs for the X-Shaman-Original-Filename header in the OpenAPI
operation `shamanFileStore`, defined in flamenco-openapi.yaml.
"""
# This is a no-op when the filename is already in ASCII.
fake_header = email.header.Header()
fake_header.append(filename, charset="utf-8")
return fake_header.encode()
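
For context, a hedged sketch of the receiving side, which is not part of this diff: the standard library can decode such an RFC 2047 value back into the original filename, and non-ASCII names survive the round trip.

```python
import email.header

def _decode_original_filename_header(header_value: str) -> str:
    # decode_header() splits the value into (data, charset) chunks;
    # make_header() reassembles them, and str() yields the decoded text.
    parts = email.header.decode_header(header_value)
    return str(email.header.make_header(parts))

# Round trip with a non-ASCII filename:
encoded = _encode_original_filename_header("rénder.blend")
assert _decode_original_filename_header(encoded) == "rénder.blend"
```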

View File

@ -3,13 +3,12 @@
# <pep8 compliant>
import logging
import dataclasses
import platform
from typing import TYPE_CHECKING, Optional
from typing import TYPE_CHECKING
from urllib3.exceptions import HTTPError, MaxRetryError
import bpy
from flamenco import manager_info, job_types
_flamenco_client = None
_log = logging.getLogger(__name__)
@ -27,23 +26,6 @@ else:
_SharedStorageLocation = object
@dataclasses.dataclass(frozen=True)
class ManagerInfo:
version: Optional[_FlamencoVersion] = None
storage: Optional[_SharedStorageLocation] = None
error: str = ""
@classmethod
def with_error(cls, error: str) -> "ManagerInfo":
return cls(error=error)
@classmethod
def with_info(
cls, version: _FlamencoVersion, storage: _SharedStorageLocation
) -> "ManagerInfo":
return cls(version=version, storage=storage)
def flamenco_api_client(manager_url: str) -> _ApiClient:
"""Returns an API client for communicating with a Manager."""
global _flamenco_client
@ -87,12 +69,12 @@ def discard_flamenco_data():
_flamenco_client = None
def ping_manager_with_report(
def ping_manager(
window_manager: bpy.types.WindowManager,
scene: bpy.types.Scene,
api_client: _ApiClient,
prefs: _FlamencoPreferences,
) -> tuple[str, str]:
"""Ping the Manager, update preferences, and return a report as string.
"""Fetch Manager info, and update the scene for it.
:returns: tuple (report, level). The report will be something like "<name>
version <version> found", or an error message. The level will be
@ -100,55 +82,49 @@ def ping_manager_with_report(
`Operator.report()`.
"""
info = ping_manager(window_manager, api_client, prefs)
if info.error:
return info.error, "ERROR"
assert info.version is not None
report = "%s version %s found" % (info.version.name, info.version.version)
return report, "INFO"
def ping_manager(
window_manager: bpy.types.WindowManager,
api_client: _ApiClient,
prefs: _FlamencoPreferences,
) -> ManagerInfo:
"""Fetch Manager config & version, and update cached preferences."""
window_manager.flamenco_status_ping = "..."
# Do a late import, so that the API is only imported when actually used.
from flamenco.manager import ApiException
from flamenco.manager.apis import MetaApi
from flamenco.manager.models import FlamencoVersion, SharedStorageLocation
# Remember the old values, as they may have disappeared from the Manager.
old_job_type_name = getattr(scene, "flamenco_job_type", "")
old_tag_name = getattr(scene, "flamenco_worker_tag", "")
meta_api = MetaApi(api_client)
error = ""
try:
version: FlamencoVersion = meta_api.get_version()
storage: SharedStorageLocation = meta_api.get_shared_storage(
"users", platform.system().lower()
)
except ApiException as ex:
error = "Manager cannot be reached: %s" % ex
except MaxRetryError as ex:
# This is the common error, when for example the port number is
# incorrect and nothing is listening. The exception text is not included
# because it's very long and confusing.
error = "Manager cannot be reached"
except HTTPError as ex:
error = "Manager cannot be reached: %s" % ex
if error:
window_manager.flamenco_status_ping = error
return ManagerInfo.with_error(error)
# Store whether this Manager supports the Shaman API.
prefs.is_shaman_enabled = storage.shaman_enabled
prefs.job_storage = storage.location
report = "%s version %s found" % (version.name, version.version)
info = manager_info.fetch(api_client)
except manager_info.FetchError as ex:
report = str(ex)
window_manager.flamenco_status_ping = report
return report, "ERROR"
return ManagerInfo.with_info(version, storage)
manager_info.save(info)
report = "%s version %s found" % (
info.flamenco_version.name,
info.flamenco_version.version,
)
report_level = "INFO"
job_types.refresh_scene_properties(scene, info.job_types)
# Try to restore the old values.
#
# Since you cannot un-set an enum property, and 'empty string' is not a
# valid value either, when the old choice is no longer available we remove
# the underlying ID property.
if old_job_type_name:
try:
scene.flamenco_job_type = old_job_type_name
except TypeError: # Thrown when the old enum value no longer exists.
del scene["flamenco_job_type"]
report = f"Job type {old_job_type_name!r} no longer available, choose another one"
report_level = "WARNING"
if old_tag_name:
try:
scene.flamenco_worker_tag = old_tag_name
except TypeError: # Thrown when the old enum value no longer exists.
del scene["flamenco_worker_tag"]
report = f"Tag {old_tag_name!r} no longer available, choose another one"
report_level = "WARNING"
window_manager.flamenco_status_ping = report
return report, report_level
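
To make the returned (report, level) contract concrete, here is a minimal sketch of an operator that feeds it straight into `Operator.report()`. The operator itself is hypothetical, and the import path and `manager_url` preference attribute are assumptions for illustration only:

```python
import bpy

from flamenco import comms, preferences  # assumed import path for this sketch


class FLAMENCO_OT_ping_example(bpy.types.Operator):
    """Hypothetical operator illustrating how to consume ping_manager()."""

    bl_idname = "flamenco.ping_example"
    bl_label = "Ping Manager (example)"

    def execute(self, context):
        prefs = preferences.get(context)
        api_client = comms.flamenco_api_client(prefs.manager_url)  # assumed attribute
        report, level = comms.ping_manager(
            context.window_manager, context.scene, api_client, prefs
        )
        self.report({level}, report)  # level is "INFO", "WARNING" or "ERROR"
        return {"FINISHED"}
```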

View File

@ -22,7 +22,7 @@ class FLAMENCO_PT_job_submission(bpy.types.Panel):
bl_space_type = "PROPERTIES"
bl_region_type = "WINDOW"
bl_context = "output"
bl_label = "Flamenco 3"
bl_label = "Flamenco"
# A temporary job can be constructed so that dynamic, read-only properties can be evaluated.
# This is only scoped to a single draw() call.
@ -42,24 +42,23 @@ class FLAMENCO_PT_job_submission(bpy.types.Panel):
col = layout.column(align=True)
col.prop(context.scene, "flamenco_job_name", text="Job Name")
col.prop(context.scene, "flamenco_job_priority", text="Priority")
col.prop(
context.scene, "flamenco_job_submit_as_paused", text="Submit as Paused"
)
# Worker tag:
row = col.row(align=True)
row.prop(context.scene, "flamenco_worker_tag", text="Tag")
row.operator("flamenco.fetch_worker_tags", text="", icon="FILE_REFRESH")
layout.separator()
col = layout.column()
# Refreshables:
col = layout.column(align=True)
col.operator(
"flamenco.ping_manager", text="Refresh from Manager", icon="FILE_REFRESH"
)
if not job_types.are_job_types_available():
col.operator("flamenco.fetch_job_types", icon="FILE_REFRESH")
return
col.prop(context.scene, "flamenco_worker_tag", text="Worker Tag")
row = col.row(align=True)
row.prop(context.scene, "flamenco_job_type", text="")
row.operator("flamenco.fetch_job_types", text="", icon="FILE_REFRESH")
self.draw_job_settings(context, layout.column(align=True))
# Job properties:
job_col = layout.column(align=True)
job_col.prop(context.scene, "flamenco_job_type", text="Job Type")
self.draw_job_settings(context, job_col)
layout.separator()

View File

@ -8,7 +8,7 @@ import bpy
from .job_types_propgroup import JobTypePropertyGroup
from .bat.submodules import bpathlib
from . import preferences
from . import manager_info
if TYPE_CHECKING:
from .manager import ApiClient as _ApiClient
@ -33,6 +33,7 @@ log = logging.getLogger(__name__)
def job_for_scene(scene: bpy.types.Scene) -> Optional[_SubmittedJob]:
from flamenco.manager.models import SubmittedJob, JobMetadata
from flamenco.manager.model.job_status import JobStatus
propgroup = getattr(scene, "flamenco_job_settings", None)
assert isinstance(propgroup, JobTypePropertyGroup), "did not expect %s" % (
@ -44,6 +45,12 @@ def job_for_scene(scene: bpy.types.Scene) -> Optional[_SubmittedJob]:
priority = getattr(scene, "flamenco_job_priority", 50)
submit_as_paused = getattr(scene, "flamenco_job_submit_as_paused", False)
if submit_as_paused:
initial_status = JobStatus("paused")
else:
initial_status = JobStatus("queued")
job: SubmittedJob = SubmittedJob(
name=scene.flamenco_job_name,
type=propgroup.job_type.name,
@ -52,6 +59,7 @@ def job_for_scene(scene: bpy.types.Scene) -> Optional[_SubmittedJob]:
metadata=metadata,
submitter_platform=platform.system().lower(),
type_etag=propgroup.job_type.etag,
initial_status=initial_status,
)
worker_tag: str = getattr(scene, "flamenco_worker_tag", "")
@ -133,8 +141,11 @@ def is_file_inside_job_storage(context: bpy.types.Context, blendfile: Path) -> b
blendfile = bpathlib.make_absolute(blendfile)
prefs = preferences.get(context)
job_storage = bpathlib.make_absolute(Path(prefs.job_storage))
info = manager_info.load_cached()
if not info:
raise RuntimeError("Flamenco Manager info unknown, please refresh.")
job_storage = bpathlib.make_absolute(Path(info.shared_storage.location))
log.info("Checking whether the file is already inside the job storage")
log.info(" file : %s", blendfile)

View File

@ -1,14 +1,10 @@
# SPDX-License-Identifier: GPL-3.0-or-later
import json
import logging
from typing import TYPE_CHECKING, Optional, Union
import bpy
from . import job_types_propgroup
_log = logging.getLogger(__name__)
from . import job_types_propgroup, manager_info
if TYPE_CHECKING:
from flamenco.manager import ApiClient as _ApiClient
@ -29,34 +25,34 @@ else:
_available_job_types: Optional[list[_AvailableJobType]] = None
# Enum property value that indicates 'no job type selected'. This is used
# because an empty string seems to be handled by Blender as 'nothing', which
# never seems to match an enum item even when there is one with "" as its 'key'.
_JOB_TYPE_NOT_SELECTED = "-"
_JOB_TYPE_NOT_SELECTED_ENUM_ITEM = (
_JOB_TYPE_NOT_SELECTED,
"Select a Job Type",
"",
0,
0,
)
# Items for a bpy.props.EnumProperty()
_job_type_enum_items: list[
Union[tuple[str, str, str], tuple[str, str, str, int, int]]
] = []
] = [_JOB_TYPE_NOT_SELECTED_ENUM_ITEM]
_selected_job_type_propgroup: Optional[
type[job_types_propgroup.JobTypePropertyGroup]
] = None
def fetch_available_job_types(api_client: _ApiClient, scene: bpy.types.Scene) -> None:
from flamenco.manager import ApiClient
from flamenco.manager.api import jobs_api
from flamenco.manager.model.available_job_types import AvailableJobTypes
assert isinstance(api_client, ApiClient)
job_api_instance = jobs_api.JobsApi(api_client)
response: AvailableJobTypes = job_api_instance.get_job_types()
def refresh_scene_properties(
scene: bpy.types.Scene, available_job_types: _AvailableJobTypes
) -> None:
_clear_available_job_types(scene)
# Store the response JSON on the scene. This is used when the blend file is
# loaded (and thus the _available_job_types global variable is still empty)
# to generate the PropertyGroup of the selected job type.
scene.flamenco_available_job_types_json = json.dumps(response.to_dict())
_store_available_job_types(response)
_store_available_job_types(available_job_types)
update_job_type_properties(scene)
def setting_is_visible(setting: _AvailableJobSetting) -> bool:
@ -120,36 +116,10 @@ def _store_available_job_types(available_job_types: _AvailableJobTypes) -> None:
else:
# Convert from API response type to list suitable for an EnumProperty.
_job_type_enum_items = [
(job_type.name, job_type.label, "") for job_type in job_types
(job_type.name, job_type.label, getattr(job_type, "description", ""))
for job_type in job_types
]
_job_type_enum_items.insert(0, ("", "Select a Job Type", "", 0, 0))
def _available_job_types_from_json(job_types_json: str) -> None:
"""Convert JSON to AvailableJobTypes object, and update global variables for it."""
from flamenco.manager.models import AvailableJobTypes
from flamenco.manager.configuration import Configuration
from flamenco.manager.model_utils import validate_and_convert_types
json_dict = json.loads(job_types_json)
dummy_cfg = Configuration()
try:
job_types = validate_and_convert_types(
json_dict, (AvailableJobTypes,), ["job_types"], True, True, dummy_cfg
)
except TypeError:
_log.warn(
"Flamenco: could not restore cached job types, refresh them from Flamenco Manager"
)
_store_available_job_types(AvailableJobTypes(job_types=[]))
return
assert isinstance(
job_types, AvailableJobTypes
), "expected AvailableJobTypes, got %s" % type(job_types)
_store_available_job_types(job_types)
_job_type_enum_items.insert(0, _JOB_TYPE_NOT_SELECTED_ENUM_ITEM)
def are_job_types_available() -> bool:
@ -199,7 +169,7 @@ def _clear_available_job_types(scene: bpy.types.Scene) -> None:
_clear_job_type_propgroup()
_available_job_types = None
_job_type_enum_items.clear()
_job_type_enum_items = []
scene.flamenco_available_job_types_json = ""
@ -238,26 +208,27 @@ def _get_job_types_enum_items(dummy1, dummy2):
@bpy.app.handlers.persistent
def restore_available_job_types(dummy1, dummy2):
def restore_available_job_types(_filepath, _none):
scene = bpy.context.scene
job_types_json = getattr(scene, "flamenco_available_job_types_json", "")
if not job_types_json:
info = manager_info.load_cached()
if info is None:
_clear_available_job_types(scene)
return
_available_job_types_from_json(job_types_json)
update_job_type_properties(scene)
refresh_scene_properties(scene, info.job_types)
def discard_flamenco_data():
if _available_job_types:
_available_job_types.clear()
if _job_type_enum_items:
_job_type_enum_items.clear()
global _available_job_types
global _job_type_enum_items
_available_job_types = None
_job_type_enum_items = []
def register() -> None:
bpy.types.Scene.flamenco_job_type = bpy.props.EnumProperty(
name="Job Type",
default=0,
items=_get_job_types_enum_items,
update=_update_job_type,
)

View File

@ -304,8 +304,8 @@ def _create_property(job_type: _AvailableJobType, setting: _AvailableJobSetting)
if not setting.get("editable", True):
prop_kwargs["get"] = _create_prop_getter(job_type, setting)
prop_name = _job_setting_key_to_label(setting.key)
prop = prop_type(name=prop_name, **prop_kwargs)
prop_label = _job_setting_label(setting)
prop = prop_type(name=prop_label, **prop_kwargs)
return prop
@ -316,10 +316,10 @@ def _create_autoeval_property(
assert isinstance(setting, AvailableJobSetting)
setting_name = _job_setting_key_to_label(setting.key)
setting_label = _job_setting_label(setting)
prop_descr = (
"Automatically determine the value for %r when the job gets submitted"
% setting_name
% setting_label
)
prop = bpy.props.BoolProperty(
@ -379,13 +379,15 @@ def _job_type_to_class_name(job_type_name: str) -> str:
return job_type_name.title().replace("-", "")
def _job_setting_key_to_label(setting_key: str) -> str:
"""Change 'some_setting_key' to 'Some Setting Key'.
def _job_setting_label(setting: _AvailableJobSetting) -> str:
"""Return a suitable label for this job setting."""
>>> _job_setting_key_to_label('some_setting_key')
'Some Setting Key'
"""
return setting_key.title().replace("_", " ")
label = str(setting.get("label", default=""))
if label:
return label
generated_label: str = setting.key.title().replace("_", " ")
return generated_label
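
A quick worked example of the fallback branch above, using a hypothetical setting key to show what the generated label looks like when no `label` is defined:

```python
>>> "render_output_root".title().replace("_", " ")
'Render Output Root'
```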
def _set_if_available(

View File

@ -10,7 +10,7 @@
"""
__version__ = "3.5-alpha1"
__version__ = "3.6-alpha5"
# import ApiClient
from flamenco.manager.api_client import ApiClient

View File

@ -32,7 +32,6 @@ from flamenco.manager.model.job_mass_deletion_selection import JobMassDeletionSe
from flamenco.manager.model.job_priority_change import JobPriorityChange
from flamenco.manager.model.job_status_change import JobStatusChange
from flamenco.manager.model.job_tasks_summary import JobTasksSummary
from flamenco.manager.model.jobs_query import JobsQuery
from flamenco.manager.model.jobs_query_result import JobsQueryResult
from flamenco.manager.model.submitted_job import SubmittedJob
from flamenco.manager.model.task import Task
@ -437,6 +436,48 @@ class JobsApi(object):
},
api_client=api_client
)
self.fetch_jobs_endpoint = _Endpoint(
settings={
'response_type': (JobsQueryResult,),
'auth': [],
'endpoint_path': '/api/v3/jobs',
'operation_id': 'fetch_jobs',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
],
'required': [],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
},
'attribute_map': {
},
'location_map': {
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client
)
self.fetch_task_endpoint = _Endpoint(
settings={
'response_type': (Task,),
@ -676,56 +717,6 @@ class JobsApi(object):
},
api_client=api_client
)
self.query_jobs_endpoint = _Endpoint(
settings={
'response_type': (JobsQueryResult,),
'auth': [],
'endpoint_path': '/api/v3/jobs/query',
'operation_id': 'query_jobs',
'http_method': 'POST',
'servers': None,
},
params_map={
'all': [
'jobs_query',
],
'required': [
'jobs_query',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'jobs_query':
(JobsQuery,),
},
'attribute_map': {
},
'location_map': {
'jobs_query': 'body',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [
'application/json'
]
},
api_client=api_client
)
self.remove_job_blocklist_endpoint = _Endpoint(
settings={
'response_type': None,
@ -1661,6 +1652,78 @@ class JobsApi(object):
job_id
return self.fetch_job_tasks_endpoint.call_with_http_info(**kwargs)
def fetch_jobs(
self,
**kwargs
):
"""List all jobs in the database. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.fetch_jobs(async_req=True)
>>> result = thread.get()
Keyword Args:
_return_http_data_only (bool): response data without head status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
JobsQueryResult
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_spec_property_naming'] = kwargs.get(
'_spec_property_naming', False
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
return self.fetch_jobs_endpoint.call_with_http_info(**kwargs)
def fetch_task(
self,
task_id,
@ -2041,83 +2104,6 @@ class JobsApi(object):
kwargs['_host_index'] = kwargs.get('_host_index')
return self.get_job_types_endpoint.call_with_http_info(**kwargs)
def query_jobs(
self,
jobs_query,
**kwargs
):
"""Fetch list of jobs. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.query_jobs(jobs_query, async_req=True)
>>> result = thread.get()
Args:
jobs_query (JobsQuery): Specification of which jobs to get.
Keyword Args:
_return_http_data_only (bool): response data without head status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
JobsQueryResult
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_spec_property_naming'] = kwargs.get(
'_spec_property_naming', False
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['jobs_query'] = \
jobs_query
return self.query_jobs_endpoint.call_with_http_info(**kwargs)
def remove_job_blocklist(
self,
job_id,

View File

@ -24,6 +24,7 @@ from flamenco.manager.model_utils import ( # noqa: F401
from flamenco.manager.model.blender_path_check_result import BlenderPathCheckResult
from flamenco.manager.model.blender_path_find_result import BlenderPathFindResult
from flamenco.manager.model.error import Error
from flamenco.manager.model.farm_status_report import FarmStatusReport
from flamenco.manager.model.flamenco_version import FlamencoVersion
from flamenco.manager.model.manager_configuration import ManagerConfiguration
from flamenco.manager.model.manager_variable_audience import ManagerVariableAudience
@ -268,6 +269,48 @@ class MetaApi(object):
},
api_client=api_client
)
self.get_farm_status_endpoint = _Endpoint(
settings={
'response_type': (FarmStatusReport,),
'auth': [],
'endpoint_path': '/api/v3/status',
'operation_id': 'get_farm_status',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
],
'required': [],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
},
'attribute_map': {
},
'location_map': {
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client
)
self.get_shared_storage_endpoint = _Endpoint(
settings={
'response_type': (SharedStorageLocation,),
@ -831,6 +874,78 @@ class MetaApi(object):
kwargs['_host_index'] = kwargs.get('_host_index')
return self.get_configuration_file_endpoint.call_with_http_info(**kwargs)
def get_farm_status(
self,
**kwargs
):
"""Get the status of this Flamenco farm. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_farm_status(async_req=True)
>>> result = thread.get()
Keyword Args:
_return_http_data_only (bool): response data without head status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
FarmStatusReport
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_spec_property_naming'] = kwargs.get(
'_spec_property_naming', False
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
return self.get_farm_status_endpoint.call_with_http_info(**kwargs)
def get_shared_storage(
self,
audience,

View File

@ -444,7 +444,7 @@ class ShamanApi(object):
Keyword Args:
x_shaman_can_defer_upload (bool): The client indicates that it can defer uploading this file. The \"208\" response will not only be returned when the file is already fully known to the Shaman server, but also when someone else is currently uploading this file. . [optional]
x_shaman_original_filename (str): The original filename. If sent along with the request, it will be included in the server logs, which can aid in debugging. . [optional]
x_shaman_original_filename (str): The original filename. If sent along with the request, it will be included in the server logs, which can aid in debugging. MUST either be ASCII or encoded using RFC 2047 (aka MIME encoding). In the latter case the encoding MUST be UTF-8. . [optional]
_return_http_data_only (bool): response data without head status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object

View File

@ -76,7 +76,7 @@ class ApiClient(object):
self.default_headers[header_name] = header_value
self.cookie = cookie
# Set default User-Agent.
self.user_agent = 'Flamenco/3.5-alpha1 (Blender add-on)'
self.user_agent = 'Flamenco/3.6-alpha5 (Blender add-on)'
def __enter__(self):
return self

View File

@ -404,7 +404,7 @@ conf = flamenco.manager.Configuration(
"OS: {env}\n"\
"Python Version: {pyversion}\n"\
"Version of the API: 1.0.0\n"\
"SDK Package Version: 3.5-alpha1".\
"SDK Package Version: 3.6-alpha5".\
format(env=sys.platform, pyversion=sys.version)
def get_host_settings(self):

View File

@ -11,6 +11,7 @@ Name | Type | Description | Notes
**choices** | **[str]** | When given, limit the valid values to these choices. Only usable with string type. | [optional]
**propargs** | **{str: (bool, date, datetime, dict, float, int, list, str, none_type)}** | Any extra arguments to the bpy.props.SomeProperty() call used to create this property. | [optional]
**description** | **bool, date, datetime, dict, float, int, list, str, none_type** | The description/tooltip shown in the user interface. | [optional]
**label** | **bool, date, datetime, dict, float, int, list, str, none_type** | Label for displaying this setting. If not specified, the key is used to generate a reasonable label. | [optional]
**default** | **bool, date, datetime, dict, float, int, list, str, none_type** | The default value shown to the user when determining this setting. | [optional]
**eval** | **str** | Python expression to be evaluated in order to determine the default value for this setting. | [optional]
**eval_info** | [**AvailableJobSettingEvalInfo**](AvailableJobSettingEvalInfo.md) | | [optional]

View File

@ -9,6 +9,7 @@ Name | Type | Description | Notes
**label** | **str** | |
**settings** | [**[AvailableJobSetting]**](AvailableJobSetting.md) | |
**etag** | **str** | Hash of the job type. If the job settings or the label change, this etag will change. This is used on job submission to ensure that the submitted job settings are up to date. |
**description** | **str** | The description/tooltip shown in the user interface. | [optional]
**any string name** | **bool, date, datetime, dict, float, int, list, str, none_type** | any string name can be used but the value must be the correct type | [optional]
[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)

View File

@ -4,7 +4,7 @@
## Properties
Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**value** | **str** | | must be one of ["file_association", "path_envvar", "input_path", ]
**value** | **str** | | must be one of ["file_association", "path_envvar", "input_path", "default", ]
[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)

View File

@ -0,0 +1,11 @@
# EventFarmStatus
## Properties
Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**value** | **FarmStatusReport** | |
[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)

View File

@ -0,0 +1,11 @@
# FarmStatus
## Properties
Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**value** | **str** | | must be one of ["active", "idle", "waiting", "asleep", "inoperative", "unknown", "starting", ]
[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)

View File

@ -0,0 +1,12 @@
# FarmStatusReport
## Properties
Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**status** | [**FarmStatus**](FarmStatus.md) | |
**any string name** | **bool, date, datetime, dict, float, int, list, str, none_type** | any string name can be used but the value must be the correct type | [optional]
[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)

View File

@ -18,6 +18,7 @@ Name | Type | Description | Notes
**metadata** | [**JobMetadata**](JobMetadata.md) | | [optional]
**storage** | [**JobStorageInfo**](JobStorageInfo.md) | | [optional]
**worker_tag** | **str** | Worker tag that should execute this job. When a tag ID is given, only Workers in that tag will be scheduled to work on it. If empty or omitted, all workers can work on this job. | [optional]
**initial_status** | [**JobStatus**](JobStatus.md) | | [optional]
**delete_requested_at** | **datetime** | If job deletion was requested, this is the timestamp at which that request was stored on Flamenco Manager. | [optional]
**any string name** | **bool, date, datetime, dict, float, int, list, str, none_type** | any string name can be used but the value must be the correct type | [optional]

View File

@ -4,7 +4,7 @@
## Properties
Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**value** | **str** | | must be one of ["active", "canceled", "completed", "failed", "paused", "queued", "cancel-requested", "requeueing", "under-construction", ]
**value** | **str** | | must be one of ["active", "canceled", "completed", "failed", "paused", "pause-requested", "queued", "cancel-requested", "requeueing", "under-construction", ]
[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)

View File

@ -12,12 +12,12 @@ Method | HTTP request | Description
[**fetch_job_blocklist**](JobsApi.md#fetch_job_blocklist) | **GET** /api/v3/jobs/{job_id}/blocklist | Fetch the list of workers that are blocked from doing certain task types on this job.
[**fetch_job_last_rendered_info**](JobsApi.md#fetch_job_last_rendered_info) | **GET** /api/v3/jobs/{job_id}/last-rendered | Get the URL that serves the last-rendered images of this job.
[**fetch_job_tasks**](JobsApi.md#fetch_job_tasks) | **GET** /api/v3/jobs/{job_id}/tasks | Fetch a summary of all tasks of the given job.
[**fetch_jobs**](JobsApi.md#fetch_jobs) | **GET** /api/v3/jobs | List all jobs in the database.
[**fetch_task**](JobsApi.md#fetch_task) | **GET** /api/v3/tasks/{task_id} | Fetch a single task.
[**fetch_task_log_info**](JobsApi.md#fetch_task_log_info) | **GET** /api/v3/tasks/{task_id}/log | Get the URL of the task log, and some more info.
[**fetch_task_log_tail**](JobsApi.md#fetch_task_log_tail) | **GET** /api/v3/tasks/{task_id}/logtail | Fetch the last few lines of the task&#39;s log.
[**get_job_type**](JobsApi.md#get_job_type) | **GET** /api/v3/jobs/type/{typeName} | Get single job type and its parameters.
[**get_job_types**](JobsApi.md#get_job_types) | **GET** /api/v3/jobs/types | Get list of job types and their parameters.
[**query_jobs**](JobsApi.md#query_jobs) | **POST** /api/v3/jobs/query | Fetch list of jobs.
[**remove_job_blocklist**](JobsApi.md#remove_job_blocklist) | **DELETE** /api/v3/jobs/{job_id}/blocklist | Remove entries from a job blocklist.
[**set_job_priority**](JobsApi.md#set_job_priority) | **POST** /api/v3/jobs/{job_id}/setpriority |
[**set_job_status**](JobsApi.md#set_job_status) | **POST** /api/v3/jobs/{job_id}/setstatus |
@ -552,6 +552,69 @@ No authorization required
[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)
# **fetch_jobs**
> JobsQueryResult fetch_jobs()
List all jobs in the database.
### Example
```python
import time
import flamenco.manager
from flamenco.manager.api import jobs_api
from flamenco.manager.model.error import Error
from flamenco.manager.model.jobs_query_result import JobsQueryResult
from pprint import pprint
# Defining the host is optional and defaults to http://localhost
# See configuration.py for a list of all supported configuration parameters.
configuration = flamenco.manager.Configuration(
host = "http://localhost"
)
# Enter a context with an instance of the API client
with flamenco.manager.ApiClient() as api_client:
# Create an instance of the API class
api_instance = jobs_api.JobsApi(api_client)
# example, this endpoint has no required or optional parameters
try:
# List all jobs in the database.
api_response = api_instance.fetch_jobs()
pprint(api_response)
except flamenco.manager.ApiException as e:
print("Exception when calling JobsApi->fetch_jobs: %s\n" % e)
```
### Parameters
This endpoint does not need any parameter.
### Return type
[**JobsQueryResult**](JobsQueryResult.md)
### Authorization
No authorization required
### HTTP request headers
- **Content-Type**: Not defined
- **Accept**: application/json
### HTTP response details
| Status code | Description | Response headers |
|-------------|-------------|------------------|
**200** | Normal query response, can be empty list if there are no jobs. | - |
**0** | Error message | - |
[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)
# **fetch_task**
> Task fetch_task(task_id)
@ -880,87 +943,6 @@ No authorization required
[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)
# **query_jobs**
> JobsQueryResult query_jobs(jobs_query)
Fetch list of jobs.
### Example
```python
import time
import flamenco.manager
from flamenco.manager.api import jobs_api
from flamenco.manager.model.error import Error
from flamenco.manager.model.jobs_query import JobsQuery
from flamenco.manager.model.jobs_query_result import JobsQueryResult
from pprint import pprint
# Defining the host is optional and defaults to http://localhost
# See configuration.py for a list of all supported configuration parameters.
configuration = flamenco.manager.Configuration(
host = "http://localhost"
)
# Enter a context with an instance of the API client
with flamenco.manager.ApiClient() as api_client:
# Create an instance of the API class
api_instance = jobs_api.JobsApi(api_client)
jobs_query = JobsQuery(
offset=0,
limit=1,
order_by=[
"order_by_example",
],
status_in=[
JobStatus("active"),
],
metadata={
"key": "key_example",
},
settings={},
) # JobsQuery | Specification of which jobs to get.
# example passing only required values which don't have defaults set
try:
# Fetch list of jobs.
api_response = api_instance.query_jobs(jobs_query)
pprint(api_response)
except flamenco.manager.ApiException as e:
print("Exception when calling JobsApi->query_jobs: %s\n" % e)
```
### Parameters
Name | Type | Description | Notes
------------- | ------------- | ------------- | -------------
**jobs_query** | [**JobsQuery**](JobsQuery.md)| Specification of which jobs to get. |
### Return type
[**JobsQueryResult**](JobsQueryResult.md)
### Authorization
No authorization required
### HTTP request headers
- **Content-Type**: application/json
- **Accept**: application/json
### HTTP response details
| Status code | Description | Response headers |
|-------------|-------------|------------------|
**200** | Normal query response, can be empty list if nothing matched the query. | - |
**0** | Error message | - |
[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)
# **remove_job_blocklist**
> remove_job_blocklist(job_id)
@ -1296,6 +1278,7 @@ with flamenco.manager.ApiClient() as api_client:
shaman_checkout_id="shaman_checkout_id_example",
),
worker_tag="worker_tag_example",
initial_status=JobStatus("active"),
) # SubmittedJob | Job to submit
# example passing only required values which don't have defaults set
@ -1378,6 +1361,7 @@ with flamenco.manager.ApiClient() as api_client:
shaman_checkout_id="shaman_checkout_id_example",
),
worker_tag="worker_tag_example",
initial_status=JobStatus("active"),
) # SubmittedJob | Job to check
# example passing only required values which don't have defaults set

View File

@ -9,6 +9,7 @@ Method | HTTP request | Description
[**find_blender_exe_path**](MetaApi.md#find_blender_exe_path) | **GET** /api/v3/configuration/check/blender | Find one or more CLI commands for use as way to start Blender
[**get_configuration**](MetaApi.md#get_configuration) | **GET** /api/v3/configuration | Get the configuration of this Manager.
[**get_configuration_file**](MetaApi.md#get_configuration_file) | **GET** /api/v3/configuration/file | Retrieve the configuration of Flamenco Manager.
[**get_farm_status**](MetaApi.md#get_farm_status) | **GET** /api/v3/status | Get the status of this Flamenco farm.
[**get_shared_storage**](MetaApi.md#get_shared_storage) | **GET** /api/v3/configuration/shared-storage/{audience}/{platform} | Get the shared storage location of this Manager, adjusted for the given audience and platform.
[**get_variables**](MetaApi.md#get_variables) | **GET** /api/v3/configuration/variables/{audience}/{platform} | Get the variables of this Manager. Used by the Blender add-on to recognise two-way variables, and for the web interface to do variable replacement based on the browser&#39;s platform.
[**get_version**](MetaApi.md#get_version) | **GET** /api/v3/version | Get the Flamenco version of this Manager
@ -341,6 +342,67 @@ No authorization required
[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)
# **get_farm_status**
> FarmStatusReport get_farm_status()
Get the status of this Flamenco farm.
### Example
```python
import time
import flamenco.manager
from flamenco.manager.api import meta_api
from flamenco.manager.model.farm_status_report import FarmStatusReport
from pprint import pprint
# Defining the host is optional and defaults to http://localhost
# See configuration.py for a list of all supported configuration parameters.
configuration = flamenco.manager.Configuration(
host = "http://localhost"
)
# Enter a context with an instance of the API client
with flamenco.manager.ApiClient() as api_client:
# Create an instance of the API class
api_instance = meta_api.MetaApi(api_client)
# example, this endpoint has no required or optional parameters
try:
# Get the status of this Flamenco farm.
api_response = api_instance.get_farm_status()
pprint(api_response)
except flamenco.manager.ApiException as e:
print("Exception when calling MetaApi->get_farm_status: %s\n" % e)
```
### Parameters
This endpoint does not need any parameter.
### Return type
[**FarmStatusReport**](FarmStatusReport.md)
### Authorization
No authorization required
### HTTP request headers
- **Content-Type**: Not defined
- **Accept**: application/json
### HTTP response details
| Status code | Description | Response headers |
|-------------|-------------|------------------|
**200** | normal response | - |
[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)
# **get_shared_storage**
> SharedStorageLocation get_shared_storage(audience, platform)

View File

@ -194,7 +194,7 @@ with flamenco.manager.ApiClient() as api_client:
filesize = 1 # int | Size of the file in bytes.
body = open('/path/to/file', 'rb') # file_type | Contents of the file
x_shaman_can_defer_upload = True # bool | The client indicates that it can defer uploading this file. The \"208\" response will not only be returned when the file is already fully known to the Shaman server, but also when someone else is currently uploading this file. (optional)
x_shaman_original_filename = "X-Shaman-Original-Filename_example" # str | The original filename. If sent along with the request, it will be included in the server logs, which can aid in debugging. (optional)
x_shaman_original_filename = "X-Shaman-Original-Filename_example" # str | The original filename. If sent along with the request, it will be included in the server logs, which can aid in debugging. MUST either be ASCII or encoded using RFC 2047 (aka MIME encoding). In the latter case the encoding MUST be UTF-8. (optional)
# example passing only required values which don't have defaults set
try:
@ -221,7 +221,7 @@ Name | Type | Description | Notes
**filesize** | **int**| Size of the file in bytes. |
**body** | **file_type**| Contents of the file |
**x_shaman_can_defer_upload** | **bool**| The client indicates that it can defer uploading this file. The \&quot;208\&quot; response will not only be returned when the file is already fully known to the Shaman server, but also when someone else is currently uploading this file. | [optional]
**x_shaman_original_filename** | **str**| The original filename. If sent along with the request, it will be included in the server logs, which can aid in debugging. | [optional]
**x_shaman_original_filename** | **str**| The original filename. If sent along with the request, it will be included in the server logs, which can aid in debugging. MUST either be ASCII or encoded using RFC 2047 (aka MIME encoding). In the latter case the encoding MUST be UTF-8. | [optional]
### Return type

View File

@ -14,6 +14,7 @@ Name | Type | Description | Notes
**metadata** | [**JobMetadata**](JobMetadata.md) | | [optional]
**storage** | [**JobStorageInfo**](JobStorageInfo.md) | | [optional]
**worker_tag** | **str** | Worker tag that should execute this job. When a tag ID is given, only Workers in that tag will be scheduled to work on it. If empty or omitted, all workers can work on this job. | [optional]
**initial_status** | [**JobStatus**](JobStatus.md) | | [optional]
**any string name** | **bool, date, datetime, dict, float, int, list, str, none_type** | any string name can be used but the value must be the correct type | [optional]
[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
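The new initial_status field lets a job enter the queue in a non-default state. As a hedged illustration only (the fields besides initial_status are the usual job fields, and the job type name is an assumption), a submission body for a job that should start out paused could look like:

```python
# Hypothetical job submission body using the new initial_status field.
submitted_job = {
    "name": "paused render",
    "type": "simple-blender-render",  # assumed job type name
    "priority": 50,
    "initial_status": "paused",
}
```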

View File

@ -99,6 +99,7 @@ class AvailableJobSetting(ModelNormal):
'choices': ([str],), # noqa: E501
'propargs': ({str: (bool, date, datetime, dict, float, int, list, str, none_type)},), # noqa: E501
'description': (bool, date, datetime, dict, float, int, list, str, none_type,), # noqa: E501
'label': (bool, date, datetime, dict, float, int, list, str, none_type,), # noqa: E501
'default': (bool, date, datetime, dict, float, int, list, str, none_type,), # noqa: E501
'eval': (str,), # noqa: E501
'eval_info': (AvailableJobSettingEvalInfo,), # noqa: E501
@ -119,6 +120,7 @@ class AvailableJobSetting(ModelNormal):
'choices': 'choices', # noqa: E501
'propargs': 'propargs', # noqa: E501
'description': 'description', # noqa: E501
'label': 'label', # noqa: E501
'default': 'default', # noqa: E501
'eval': 'eval', # noqa: E501
'eval_info': 'evalInfo', # noqa: E501
@ -176,6 +178,7 @@ class AvailableJobSetting(ModelNormal):
choices ([str]): When given, limit the valid values to these choices. Only usable with string type.. [optional] # noqa: E501
propargs ({str: (bool, date, datetime, dict, float, int, list, str, none_type)}): Any extra arguments to the bpy.props.SomeProperty() call used to create this property.. [optional] # noqa: E501
description (bool, date, datetime, dict, float, int, list, str, none_type): The description/tooltip shown in the user interface.. [optional] # noqa: E501
label (bool, date, datetime, dict, float, int, list, str, none_type): Label for displaying this setting. If not specified, the key is used to generate a reasonable label.. [optional] # noqa: E501
default (bool, date, datetime, dict, float, int, list, str, none_type): The default value shown to the user when determining this setting.. [optional] # noqa: E501
eval (str): Python expression to be evaluated in order to determine the default value for this setting.. [optional] # noqa: E501
eval_info (AvailableJobSettingEvalInfo): [optional] # noqa: E501
@ -273,6 +276,7 @@ class AvailableJobSetting(ModelNormal):
choices ([str]): When given, limit the valid values to these choices. Only usable with string type.. [optional] # noqa: E501
propargs ({str: (bool, date, datetime, dict, float, int, list, str, none_type)}): Any extra arguments to the bpy.props.SomeProperty() call used to create this property.. [optional] # noqa: E501
description (bool, date, datetime, dict, float, int, list, str, none_type): The description/tooltip shown in the user interface.. [optional] # noqa: E501
label (bool, date, datetime, dict, float, int, list, str, none_type): Label for displaying this setting. If not specified, the key is used to generate a reasonable label.. [optional] # noqa: E501
default (bool, date, datetime, dict, float, int, list, str, none_type): The default value shown to the user when determining this setting.. [optional] # noqa: E501
eval (str): Python expression to be evaluated in order to determine the default value for this setting.. [optional] # noqa: E501
eval_info (AvailableJobSettingEvalInfo): [optional] # noqa: E501
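For illustration, this is roughly how the new label surfaces when building a setting with the generated client models; it assumes the constructor's required arguments are the setting key and its type, as in the rest of this generated model:

```python
from flamenco.manager.model.available_job_setting import AvailableJobSetting
from flamenco.manager.model.available_job_setting_type import AvailableJobSettingType

setting = AvailableJobSetting(
    key="render_output_root",
    type=AvailableJobSettingType("string"),
    label="Render Output Root",  # shown in the UI instead of a label derived from the key
)
print(setting.label)
```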

View File

@ -91,6 +91,7 @@ class AvailableJobType(ModelNormal):
'label': (str,), # noqa: E501
'settings': ([AvailableJobSetting],), # noqa: E501
'etag': (str,), # noqa: E501
'description': (str,), # noqa: E501
}
@cached_property
@ -103,6 +104,7 @@ class AvailableJobType(ModelNormal):
'label': 'label', # noqa: E501
'settings': 'settings', # noqa: E501
'etag': 'etag', # noqa: E501
'description': 'description', # noqa: E501
}
read_only_vars = {
@ -152,6 +154,7 @@ class AvailableJobType(ModelNormal):
Animal class but this time we won't travel
through its discriminator because we passed in
_visited_composed_classes = (Animal,)
description (str): The description/tooltip shown in the user interface.. [optional] # noqa: E501
"""
_check_type = kwargs.pop('_check_type', True)
@ -243,6 +246,7 @@ class AvailableJobType(ModelNormal):
Animal class but this time we won't travel
through its discriminator because we passed in
_visited_composed_classes = (Animal,)
description (str): The description/tooltip shown in the user interface.. [optional] # noqa: E501
"""
_check_type = kwargs.pop('_check_type', True)

View File

@ -55,6 +55,7 @@ class BlenderPathSource(ModelSimple):
'FILE_ASSOCIATION': "file_association",
'PATH_ENVVAR': "path_envvar",
'INPUT_PATH': "input_path",
'DEFAULT': "default",
},
}
@ -106,10 +107,10 @@ class BlenderPathSource(ModelSimple):
Note that value can be passed either in args or in kwargs, but not in both.
Args:
args[0] (str):, must be one of ["file_association", "path_envvar", "input_path", ] # noqa: E501
args[0] (str):, must be one of ["file_association", "path_envvar", "input_path", "default", ] # noqa: E501
Keyword Args:
value (str):, must be one of ["file_association", "path_envvar", "input_path", ] # noqa: E501
value (str):, must be one of ["file_association", "path_envvar", "input_path", "default", ] # noqa: E501
_check_type (bool): if True, values for parameters in openapi_types
will be type checked and a TypeError will be
raised if the wrong type is input.
@ -196,10 +197,10 @@ class BlenderPathSource(ModelSimple):
Note that value can be passed either in args or in kwargs, but not in both.
Args:
args[0] (str):, must be one of ["file_association", "path_envvar", "input_path", ] # noqa: E501
args[0] (str):, must be one of ["file_association", "path_envvar", "input_path", "default", ] # noqa: E501
Keyword Args:
value (str):, must be one of ["file_association", "path_envvar", "input_path", ] # noqa: E501
value (str):, must be one of ["file_association", "path_envvar", "input_path", "default", ] # noqa: E501
_check_type (bool): if True, values for parameters in openapi_types
will be type checked and a TypeError will be
raised if the wrong type is input.

View File

@ -0,0 +1,278 @@
"""
Flamenco manager
Render Farm manager API # noqa: E501
The version of the OpenAPI document: 1.0.0
Generated by: https://openapi-generator.tech
"""
import re # noqa: F401
import sys # noqa: F401
from flamenco.manager.model_utils import ( # noqa: F401
ApiTypeError,
ModelComposed,
ModelNormal,
ModelSimple,
cached_property,
change_keys_js_to_python,
convert_js_args_to_python_args,
date,
datetime,
file_type,
none_type,
validate_get_composed_info,
OpenApiModel
)
from flamenco.manager.exceptions import ApiAttributeError
class EventFarmStatus(ModelSimple):
"""NOTE: This class is auto generated by OpenAPI Generator.
Ref: https://openapi-generator.tech
Do not edit the class manually.
Attributes:
allowed_values (dict): The key is the tuple path to the attribute
and the for var_name this is (var_name,). The value is a dict
with a capitalized key describing the allowed value and an allowed
value. These dicts store the allowed enum values.
validations (dict): The key is the tuple path to the attribute
and the for var_name this is (var_name,). The value is a dict
that stores validations for max_length, min_length, max_items,
min_items, exclusive_maximum, inclusive_maximum, exclusive_minimum,
inclusive_minimum, and regex.
additional_properties_type (tuple): A tuple of classes accepted
as additional properties values.
"""
allowed_values = {
}
validations = {
}
additional_properties_type = None
_nullable = False
@cached_property
def openapi_types():
"""
This must be a method because a model may have properties that are
of type self, this must run after the class is loaded
Returns
openapi_types (dict): The key is attribute name
and the value is attribute type.
"""
return {
'value': (FarmStatusReport,),
}
@cached_property
def discriminator():
return None
attribute_map = {}
read_only_vars = set()
_composed_schemas = None
required_properties = set([
'_data_store',
'_check_type',
'_spec_property_naming',
'_path_to_item',
'_configuration',
'_visited_composed_classes',
])
@convert_js_args_to_python_args
def __init__(self, *args, **kwargs):
"""EventFarmStatus - a model defined in OpenAPI
Note that value can be passed either in args or in kwargs, but not in both.
Args:
args[0] (FarmStatusReport): # noqa: E501
Keyword Args:
value (FarmStatusReport): # noqa: E501
_check_type (bool): if True, values for parameters in openapi_types
will be type checked and a TypeError will be
raised if the wrong type is input.
Defaults to True
_path_to_item (tuple/list): This is a list of keys or values to
drill down to the model in received_data
when deserializing a response
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_configuration (Configuration): the instance to use when
deserializing a file_type parameter.
If passed, type conversion is attempted
If omitted no type conversion is done.
_visited_composed_classes (tuple): This stores a tuple of
classes that we have traveled through so that
if we see that class again we will not use its
discriminator again.
When traveling through a discriminator, the
composed schema that is
is traveled through is added to this set.
For example if Animal has a discriminator
petType and we pass in "Dog", and the class Dog
allOf includes Animal, we move through Animal
once using the discriminator, and pick Dog.
Then in Dog, we will make an instance of the
Animal class but this time we won't travel
through its discriminator because we passed in
_visited_composed_classes = (Animal,)
"""
# required up here when default value is not given
_path_to_item = kwargs.pop('_path_to_item', ())
if 'value' in kwargs:
value = kwargs.pop('value')
elif args:
args = list(args)
value = args.pop(0)
else:
raise ApiTypeError(
"value is required, but not passed in args or kwargs and doesn't have default",
path_to_item=_path_to_item,
valid_classes=(self.__class__,),
)
_check_type = kwargs.pop('_check_type', True)
_spec_property_naming = kwargs.pop('_spec_property_naming', False)
_configuration = kwargs.pop('_configuration', None)
_visited_composed_classes = kwargs.pop('_visited_composed_classes', ())
if args:
raise ApiTypeError(
"Invalid positional arguments=%s passed to %s. Remove those invalid positional arguments." % (
args,
self.__class__.__name__,
),
path_to_item=_path_to_item,
valid_classes=(self.__class__,),
)
self._data_store = {}
self._check_type = _check_type
self._spec_property_naming = _spec_property_naming
self._path_to_item = _path_to_item
self._configuration = _configuration
self._visited_composed_classes = _visited_composed_classes + (self.__class__,)
self.value = value
if kwargs:
raise ApiTypeError(
"Invalid named arguments=%s passed to %s. Remove those invalid named arguments." % (
kwargs,
self.__class__.__name__,
),
path_to_item=_path_to_item,
valid_classes=(self.__class__,),
)
@classmethod
@convert_js_args_to_python_args
def _from_openapi_data(cls, *args, **kwargs):
"""EventFarmStatus - a model defined in OpenAPI
Note that value can be passed either in args or in kwargs, but not in both.
Args:
args[0] (FarmStatusReport): # noqa: E501
Keyword Args:
value (FarmStatusReport): # noqa: E501
_check_type (bool): if True, values for parameters in openapi_types
will be type checked and a TypeError will be
raised if the wrong type is input.
Defaults to True
_path_to_item (tuple/list): This is a list of keys or values to
drill down to the model in received_data
when deserializing a response
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_configuration (Configuration): the instance to use when
deserializing a file_type parameter.
If passed, type conversion is attempted
If omitted no type conversion is done.
_visited_composed_classes (tuple): This stores a tuple of
classes that we have traveled through so that
if we see that class again we will not use its
discriminator again.
When traveling through a discriminator, the
composed schema that is
is traveled through is added to this set.
For example if Animal has a discriminator
petType and we pass in "Dog", and the class Dog
allOf includes Animal, we move through Animal
once using the discriminator, and pick Dog.
Then in Dog, we will make an instance of the
Animal class but this time we won't travel
through its discriminator because we passed in
_visited_composed_classes = (Animal,)
"""
# required up here when default value is not given
_path_to_item = kwargs.pop('_path_to_item', ())
self = super(OpenApiModel, cls).__new__(cls)
if 'value' in kwargs:
value = kwargs.pop('value')
elif args:
args = list(args)
value = args.pop(0)
else:
raise ApiTypeError(
"value is required, but not passed in args or kwargs and doesn't have default",
path_to_item=_path_to_item,
valid_classes=(self.__class__,),
)
_check_type = kwargs.pop('_check_type', True)
_spec_property_naming = kwargs.pop('_spec_property_naming', False)
_configuration = kwargs.pop('_configuration', None)
_visited_composed_classes = kwargs.pop('_visited_composed_classes', ())
if args:
raise ApiTypeError(
"Invalid positional arguments=%s passed to %s. Remove those invalid positional arguments." % (
args,
self.__class__.__name__,
),
path_to_item=_path_to_item,
valid_classes=(self.__class__,),
)
self._data_store = {}
self._check_type = _check_type
self._spec_property_naming = _spec_property_naming
self._path_to_item = _path_to_item
self._configuration = _configuration
self._visited_composed_classes = _visited_composed_classes + (self.__class__,)
self.value = value
if kwargs:
raise ApiTypeError(
"Invalid named arguments=%s passed to %s. Remove those invalid named arguments." % (
kwargs,
self.__class__.__name__,
),
path_to_item=_path_to_item,
valid_classes=(self.__class__,),
)
return self

View File

@ -0,0 +1,287 @@
"""
Flamenco manager
Render Farm manager API # noqa: E501
The version of the OpenAPI document: 1.0.0
Generated by: https://openapi-generator.tech
"""
import re # noqa: F401
import sys # noqa: F401
from flamenco.manager.model_utils import ( # noqa: F401
ApiTypeError,
ModelComposed,
ModelNormal,
ModelSimple,
cached_property,
change_keys_js_to_python,
convert_js_args_to_python_args,
date,
datetime,
file_type,
none_type,
validate_get_composed_info,
OpenApiModel
)
from flamenco.manager.exceptions import ApiAttributeError
class FarmStatus(ModelSimple):
"""NOTE: This class is auto generated by OpenAPI Generator.
Ref: https://openapi-generator.tech
Do not edit the class manually.
Attributes:
allowed_values (dict): The key is the tuple path to the attribute
and the for var_name this is (var_name,). The value is a dict
with a capitalized key describing the allowed value and an allowed
value. These dicts store the allowed enum values.
validations (dict): The key is the tuple path to the attribute
and the for var_name this is (var_name,). The value is a dict
that stores validations for max_length, min_length, max_items,
min_items, exclusive_maximum, inclusive_maximum, exclusive_minimum,
inclusive_minimum, and regex.
additional_properties_type (tuple): A tuple of classes accepted
as additional properties values.
"""
allowed_values = {
('value',): {
'ACTIVE': "active",
'IDLE': "idle",
'WAITING': "waiting",
'ASLEEP': "asleep",
'INOPERATIVE': "inoperative",
'UNKNOWN': "unknown",
'STARTING': "starting",
},
}
validations = {
}
additional_properties_type = None
_nullable = False
@cached_property
def openapi_types():
"""
This must be a method because a model may have properties that are
of type self, this must run after the class is loaded
Returns
openapi_types (dict): The key is attribute name
and the value is attribute type.
"""
return {
'value': (str,),
}
@cached_property
def discriminator():
return None
attribute_map = {}
read_only_vars = set()
_composed_schemas = None
required_properties = set([
'_data_store',
'_check_type',
'_spec_property_naming',
'_path_to_item',
'_configuration',
'_visited_composed_classes',
])
@convert_js_args_to_python_args
def __init__(self, *args, **kwargs):
"""FarmStatus - a model defined in OpenAPI
Note that value can be passed either in args or in kwargs, but not in both.
Args:
args[0] (str):, must be one of ["active", "idle", "waiting", "asleep", "inoperative", "unknown", "starting", ] # noqa: E501
Keyword Args:
value (str):, must be one of ["active", "idle", "waiting", "asleep", "inoperative", "unknown", "starting", ] # noqa: E501
_check_type (bool): if True, values for parameters in openapi_types
will be type checked and a TypeError will be
raised if the wrong type is input.
Defaults to True
_path_to_item (tuple/list): This is a list of keys or values to
drill down to the model in received_data
when deserializing a response
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_configuration (Configuration): the instance to use when
deserializing a file_type parameter.
If passed, type conversion is attempted
If omitted no type conversion is done.
_visited_composed_classes (tuple): This stores a tuple of
classes that we have traveled through so that
if we see that class again we will not use its
discriminator again.
When traveling through a discriminator, the
composed schema that is
is traveled through is added to this set.
For example if Animal has a discriminator
petType and we pass in "Dog", and the class Dog
allOf includes Animal, we move through Animal
once using the discriminator, and pick Dog.
Then in Dog, we will make an instance of the
Animal class but this time we won't travel
through its discriminator because we passed in
_visited_composed_classes = (Animal,)
"""
# required up here when default value is not given
_path_to_item = kwargs.pop('_path_to_item', ())
if 'value' in kwargs:
value = kwargs.pop('value')
elif args:
args = list(args)
value = args.pop(0)
else:
raise ApiTypeError(
"value is required, but not passed in args or kwargs and doesn't have default",
path_to_item=_path_to_item,
valid_classes=(self.__class__,),
)
_check_type = kwargs.pop('_check_type', True)
_spec_property_naming = kwargs.pop('_spec_property_naming', False)
_configuration = kwargs.pop('_configuration', None)
_visited_composed_classes = kwargs.pop('_visited_composed_classes', ())
if args:
raise ApiTypeError(
"Invalid positional arguments=%s passed to %s. Remove those invalid positional arguments." % (
args,
self.__class__.__name__,
),
path_to_item=_path_to_item,
valid_classes=(self.__class__,),
)
self._data_store = {}
self._check_type = _check_type
self._spec_property_naming = _spec_property_naming
self._path_to_item = _path_to_item
self._configuration = _configuration
self._visited_composed_classes = _visited_composed_classes + (self.__class__,)
self.value = value
if kwargs:
raise ApiTypeError(
"Invalid named arguments=%s passed to %s. Remove those invalid named arguments." % (
kwargs,
self.__class__.__name__,
),
path_to_item=_path_to_item,
valid_classes=(self.__class__,),
)
@classmethod
@convert_js_args_to_python_args
def _from_openapi_data(cls, *args, **kwargs):
"""FarmStatus - a model defined in OpenAPI
Note that value can be passed either in args or in kwargs, but not in both.
Args:
args[0] (str):, must be one of ["active", "idle", "waiting", "asleep", "inoperative", "unknown", "starting", ] # noqa: E501
Keyword Args:
value (str):, must be one of ["active", "idle", "waiting", "asleep", "inoperative", "unknown", "starting", ] # noqa: E501
_check_type (bool): if True, values for parameters in openapi_types
will be type checked and a TypeError will be
raised if the wrong type is input.
Defaults to True
_path_to_item (tuple/list): This is a list of keys or values to
drill down to the model in received_data
when deserializing a response
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_configuration (Configuration): the instance to use when
deserializing a file_type parameter.
If passed, type conversion is attempted
If omitted no type conversion is done.
_visited_composed_classes (tuple): This stores a tuple of
classes that we have traveled through so that
if we see that class again we will not use its
discriminator again.
When traveling through a discriminator, the
composed schema that is
is traveled through is added to this set.
For example if Animal has a discriminator
petType and we pass in "Dog", and the class Dog
allOf includes Animal, we move through Animal
once using the discriminator, and pick Dog.
Then in Dog, we will make an instance of the
Animal class but this time we won't travel
through its discriminator because we passed in
_visited_composed_classes = (Animal,)
"""
# required up here when default value is not given
_path_to_item = kwargs.pop('_path_to_item', ())
self = super(OpenApiModel, cls).__new__(cls)
if 'value' in kwargs:
value = kwargs.pop('value')
elif args:
args = list(args)
value = args.pop(0)
else:
raise ApiTypeError(
"value is required, but not passed in args or kwargs and doesn't have default",
path_to_item=_path_to_item,
valid_classes=(self.__class__,),
)
_check_type = kwargs.pop('_check_type', True)
_spec_property_naming = kwargs.pop('_spec_property_naming', False)
_configuration = kwargs.pop('_configuration', None)
_visited_composed_classes = kwargs.pop('_visited_composed_classes', ())
if args:
raise ApiTypeError(
"Invalid positional arguments=%s passed to %s. Remove those invalid positional arguments." % (
args,
self.__class__.__name__,
),
path_to_item=_path_to_item,
valid_classes=(self.__class__,),
)
self._data_store = {}
self._check_type = _check_type
self._spec_property_naming = _spec_property_naming
self._path_to_item = _path_to_item
self._configuration = _configuration
self._visited_composed_classes = _visited_composed_classes + (self.__class__,)
self.value = value
if kwargs:
raise ApiTypeError(
"Invalid named arguments=%s passed to %s. Remove those invalid named arguments." % (
kwargs,
self.__class__.__name__,
),
path_to_item=_path_to_item,
valid_classes=(self.__class__,),
)
return self

View File

@ -0,0 +1,267 @@
"""
Flamenco manager
Render Farm manager API # noqa: E501
The version of the OpenAPI document: 1.0.0
Generated by: https://openapi-generator.tech
"""
import re # noqa: F401
import sys # noqa: F401
from flamenco.manager.model_utils import ( # noqa: F401
ApiTypeError,
ModelComposed,
ModelNormal,
ModelSimple,
cached_property,
change_keys_js_to_python,
convert_js_args_to_python_args,
date,
datetime,
file_type,
none_type,
validate_get_composed_info,
OpenApiModel
)
from flamenco.manager.exceptions import ApiAttributeError
def lazy_import():
from flamenco.manager.model.farm_status import FarmStatus
globals()['FarmStatus'] = FarmStatus
class FarmStatusReport(ModelNormal):
"""NOTE: This class is auto generated by OpenAPI Generator.
Ref: https://openapi-generator.tech
Do not edit the class manually.
Attributes:
allowed_values (dict): The key is the tuple path to the attribute
and the for var_name this is (var_name,). The value is a dict
with a capitalized key describing the allowed value and an allowed
value. These dicts store the allowed enum values.
attribute_map (dict): The key is attribute name
and the value is json key in definition.
discriminator_value_class_map (dict): A dict to go from the discriminator
variable value to the discriminator class name.
validations (dict): The key is the tuple path to the attribute
and the for var_name this is (var_name,). The value is a dict
that stores validations for max_length, min_length, max_items,
min_items, exclusive_maximum, inclusive_maximum, exclusive_minimum,
inclusive_minimum, and regex.
additional_properties_type (tuple): A tuple of classes accepted
as additional properties values.
"""
allowed_values = {
}
validations = {
}
@cached_property
def additional_properties_type():
"""
This must be a method because a model may have properties that are
of type self, this must run after the class is loaded
"""
lazy_import()
return (bool, date, datetime, dict, float, int, list, str, none_type,) # noqa: E501
_nullable = False
@cached_property
def openapi_types():
"""
This must be a method because a model may have properties that are
of type self, this must run after the class is loaded
Returns
openapi_types (dict): The key is attribute name
and the value is attribute type.
"""
lazy_import()
return {
'status': (FarmStatus,), # noqa: E501
}
@cached_property
def discriminator():
return None
attribute_map = {
'status': 'status', # noqa: E501
}
read_only_vars = {
}
_composed_schemas = {}
@classmethod
@convert_js_args_to_python_args
def _from_openapi_data(cls, status, *args, **kwargs): # noqa: E501
"""FarmStatusReport - a model defined in OpenAPI
Args:
status (FarmStatus):
Keyword Args:
_check_type (bool): if True, values for parameters in openapi_types
will be type checked and a TypeError will be
raised if the wrong type is input.
Defaults to True
_path_to_item (tuple/list): This is a list of keys or values to
drill down to the model in received_data
when deserializing a response
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_configuration (Configuration): the instance to use when
deserializing a file_type parameter.
If passed, type conversion is attempted
If omitted no type conversion is done.
_visited_composed_classes (tuple): This stores a tuple of
classes that we have traveled through so that
if we see that class again we will not use its
discriminator again.
When traveling through a discriminator, the
composed schema that is
is traveled through is added to this set.
For example if Animal has a discriminator
petType and we pass in "Dog", and the class Dog
allOf includes Animal, we move through Animal
once using the discriminator, and pick Dog.
Then in Dog, we will make an instance of the
Animal class but this time we won't travel
through its discriminator because we passed in
_visited_composed_classes = (Animal,)
"""
_check_type = kwargs.pop('_check_type', True)
_spec_property_naming = kwargs.pop('_spec_property_naming', False)
_path_to_item = kwargs.pop('_path_to_item', ())
_configuration = kwargs.pop('_configuration', None)
_visited_composed_classes = kwargs.pop('_visited_composed_classes', ())
self = super(OpenApiModel, cls).__new__(cls)
if args:
raise ApiTypeError(
"Invalid positional arguments=%s passed to %s. Remove those invalid positional arguments." % (
args,
self.__class__.__name__,
),
path_to_item=_path_to_item,
valid_classes=(self.__class__,),
)
self._data_store = {}
self._check_type = _check_type
self._spec_property_naming = _spec_property_naming
self._path_to_item = _path_to_item
self._configuration = _configuration
self._visited_composed_classes = _visited_composed_classes + (self.__class__,)
self.status = status
for var_name, var_value in kwargs.items():
if var_name not in self.attribute_map and \
self._configuration is not None and \
self._configuration.discard_unknown_keys and \
self.additional_properties_type is None:
# discard variable.
continue
setattr(self, var_name, var_value)
return self
required_properties = set([
'_data_store',
'_check_type',
'_spec_property_naming',
'_path_to_item',
'_configuration',
'_visited_composed_classes',
])
@convert_js_args_to_python_args
def __init__(self, status, *args, **kwargs): # noqa: E501
"""FarmStatusReport - a model defined in OpenAPI
Args:
status (FarmStatus):
Keyword Args:
_check_type (bool): if True, values for parameters in openapi_types
will be type checked and a TypeError will be
raised if the wrong type is input.
Defaults to True
_path_to_item (tuple/list): This is a list of keys or values to
drill down to the model in received_data
when deserializing a response
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_configuration (Configuration): the instance to use when
deserializing a file_type parameter.
If passed, type conversion is attempted
If omitted no type conversion is done.
_visited_composed_classes (tuple): This stores a tuple of
classes that we have traveled through so that
if we see that class again we will not use its
discriminator again.
When traveling through a discriminator, the
composed schema that is
is traveled through is added to this set.
For example if Animal has a discriminator
petType and we pass in "Dog", and the class Dog
allOf includes Animal, we move through Animal
once using the discriminator, and pick Dog.
Then in Dog, we will make an instance of the
Animal class but this time we won't travel
through its discriminator because we passed in
_visited_composed_classes = (Animal,)
"""
_check_type = kwargs.pop('_check_type', True)
_spec_property_naming = kwargs.pop('_spec_property_naming', False)
_path_to_item = kwargs.pop('_path_to_item', ())
_configuration = kwargs.pop('_configuration', None)
_visited_composed_classes = kwargs.pop('_visited_composed_classes', ())
if args:
raise ApiTypeError(
"Invalid positional arguments=%s passed to %s. Remove those invalid positional arguments." % (
args,
self.__class__.__name__,
),
path_to_item=_path_to_item,
valid_classes=(self.__class__,),
)
self._data_store = {}
self._check_type = _check_type
self._spec_property_naming = _spec_property_naming
self._path_to_item = _path_to_item
self._configuration = _configuration
self._visited_composed_classes = _visited_composed_classes + (self.__class__,)
self.status = status
for var_name, var_value in kwargs.items():
if var_name not in self.attribute_map and \
self._configuration is not None and \
self._configuration.discard_unknown_keys and \
self.additional_properties_type is None:
# discard variable.
continue
setattr(self, var_name, var_value)
if var_name in self.read_only_vars:
raise ApiAttributeError(f"`{var_name}` is a read-only attribute. Use `from_openapi_data` to instantiate "
f"class with read only attributes.")

View File

@ -111,6 +111,7 @@ class Job(ModelComposed):
'metadata': (JobMetadata,), # noqa: E501
'storage': (JobStorageInfo,), # noqa: E501
'worker_tag': (str,), # noqa: E501
'initial_status': (JobStatus,), # noqa: E501
'delete_requested_at': (datetime,), # noqa: E501
}
@ -134,6 +135,7 @@ class Job(ModelComposed):
'metadata': 'metadata', # noqa: E501
'storage': 'storage', # noqa: E501
'worker_tag': 'worker_tag', # noqa: E501
'initial_status': 'initial_status', # noqa: E501
'delete_requested_at': 'delete_requested_at', # noqa: E501
}
@ -190,6 +192,7 @@ class Job(ModelComposed):
metadata (JobMetadata): [optional] # noqa: E501
storage (JobStorageInfo): [optional] # noqa: E501
worker_tag (str): Worker tag that should execute this job. When a tag ID is given, only Workers in that tag will be scheduled to work on it. If empty or omitted, all workers can work on this job. [optional] # noqa: E501
initial_status (JobStatus): [optional] # noqa: E501
delete_requested_at (datetime): If job deletion was requested, this is the timestamp at which that request was stored on Flamenco Manager. . [optional] # noqa: E501
"""
@ -305,6 +308,7 @@ class Job(ModelComposed):
metadata (JobMetadata): [optional] # noqa: E501
storage (JobStorageInfo): [optional] # noqa: E501
worker_tag (str): Worker tag that should execute this job. When a tag ID is given, only Workers in that tag will be scheduled to work on it. If empty or omitted, all workers can work on this job. [optional] # noqa: E501
initial_status (JobStatus): [optional] # noqa: E501
delete_requested_at (datetime): If job deletion was requested, this is the timestamp at which that request was stored on Flamenco Manager. . [optional] # noqa: E501
"""

View File

@ -57,6 +57,7 @@ class JobStatus(ModelSimple):
'COMPLETED': "completed",
'FAILED': "failed",
'PAUSED': "paused",
'PAUSE-REQUESTED': "pause-requested",
'QUEUED': "queued",
'CANCEL-REQUESTED': "cancel-requested",
'REQUEUEING': "requeueing",
@ -112,10 +113,10 @@ class JobStatus(ModelSimple):
Note that value can be passed either in args or in kwargs, but not in both.
Args:
args[0] (str):, must be one of ["active", "canceled", "completed", "failed", "paused", "queued", "cancel-requested", "requeueing", "under-construction", ] # noqa: E501
args[0] (str):, must be one of ["active", "canceled", "completed", "failed", "paused", "pause-requested", "queued", "cancel-requested", "requeueing", "under-construction", ] # noqa: E501
Keyword Args:
value (str):, must be one of ["active", "canceled", "completed", "failed", "paused", "queued", "cancel-requested", "requeueing", "under-construction", ] # noqa: E501
value (str):, must be one of ["active", "canceled", "completed", "failed", "paused", "pause-requested", "queued", "cancel-requested", "requeueing", "under-construction", ] # noqa: E501
_check_type (bool): if True, values for parameters in openapi_types
will be type checked and a TypeError will be
raised if the wrong type is input.
@ -202,10 +203,10 @@ class JobStatus(ModelSimple):
Note that value can be passed either in args or in kwargs, but not in both.
Args:
args[0] (str):, must be one of ["active", "canceled", "completed", "failed", "paused", "queued", "cancel-requested", "requeueing", "under-construction", ] # noqa: E501
args[0] (str):, must be one of ["active", "canceled", "completed", "failed", "paused", "pause-requested", "queued", "cancel-requested", "requeueing", "under-construction", ] # noqa: E501
Keyword Args:
value (str):, must be one of ["active", "canceled", "completed", "failed", "paused", "queued", "cancel-requested", "requeueing", "under-construction", ] # noqa: E501
value (str):, must be one of ["active", "canceled", "completed", "failed", "paused", "pause-requested", "queued", "cancel-requested", "requeueing", "under-construction", ] # noqa: E501
_check_type (bool): if True, values for parameters in openapi_types
will be type checked and a TypeError will be
raised if the wrong type is input.
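With pause-requested added, pausing presumably mirrors cancellation: a client asks for "paused" and the Manager moves the job through "pause-requested" while running tasks wind down. A hedged sketch of making that request with the generated client, assuming an existing api_client and a known job_id:

```python
from flamenco.manager.apis import JobsApi
from flamenco.manager.models import JobStatus, JobStatusChange

jobs_api = JobsApi(api_client)  # api_client: an existing flamenco.manager.ApiClient
jobs_api.set_job_status(
    job_id,
    JobStatusChange(status=JobStatus("paused"), reason="paused by artist"),
)
```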

View File

@ -32,9 +32,11 @@ from flamenco.manager.exceptions import ApiAttributeError
def lazy_import():
from flamenco.manager.model.job_metadata import JobMetadata
from flamenco.manager.model.job_settings import JobSettings
from flamenco.manager.model.job_status import JobStatus
from flamenco.manager.model.job_storage_info import JobStorageInfo
globals()['JobMetadata'] = JobMetadata
globals()['JobSettings'] = JobSettings
globals()['JobStatus'] = JobStatus
globals()['JobStorageInfo'] = JobStorageInfo
@ -100,6 +102,7 @@ class SubmittedJob(ModelNormal):
'metadata': (JobMetadata,), # noqa: E501
'storage': (JobStorageInfo,), # noqa: E501
'worker_tag': (str,), # noqa: E501
'initial_status': (JobStatus,), # noqa: E501
}
@cached_property
@ -117,6 +120,7 @@ class SubmittedJob(ModelNormal):
'metadata': 'metadata', # noqa: E501
'storage': 'storage', # noqa: E501
'worker_tag': 'worker_tag', # noqa: E501
'initial_status': 'initial_status', # noqa: E501
}
read_only_vars = {
@ -171,6 +175,7 @@ class SubmittedJob(ModelNormal):
metadata (JobMetadata): [optional] # noqa: E501
storage (JobStorageInfo): [optional] # noqa: E501
worker_tag (str): Worker tag that should execute this job. When a tag ID is given, only Workers in that tag will be scheduled to work on it. If empty or omitted, all workers can work on this job. [optional] # noqa: E501
initial_status (JobStatus): [optional] # noqa: E501
"""
priority = kwargs.get('priority', 50)
@ -268,6 +273,7 @@ class SubmittedJob(ModelNormal):
metadata (JobMetadata): [optional] # noqa: E501
storage (JobStorageInfo): [optional] # noqa: E501
worker_tag (str): Worker tag that should execute this job. When a tag ID is given, only Workers in that tag will be scheduled to work on it. If empty or omitted, all workers can work on this job. [optional] # noqa: E501
initial_status (JobStatus): [optional] # noqa: E501
"""
priority = kwargs.get('priority', 50)

View File

@ -22,6 +22,7 @@ from flamenco.manager.model.blender_path_find_result import BlenderPathFindResul
from flamenco.manager.model.blender_path_source import BlenderPathSource
from flamenco.manager.model.command import Command
from flamenco.manager.model.error import Error
from flamenco.manager.model.event_farm_status import EventFarmStatus
from flamenco.manager.model.event_job_update import EventJobUpdate
from flamenco.manager.model.event_last_rendered_update import EventLastRenderedUpdate
from flamenco.manager.model.event_life_cycle import EventLifeCycle
@ -29,6 +30,8 @@ from flamenco.manager.model.event_task_log_update import EventTaskLogUpdate
from flamenco.manager.model.event_task_update import EventTaskUpdate
from flamenco.manager.model.event_worker_tag_update import EventWorkerTagUpdate
from flamenco.manager.model.event_worker_update import EventWorkerUpdate
from flamenco.manager.model.farm_status import FarmStatus
from flamenco.manager.model.farm_status_report import FarmStatusReport
from flamenco.manager.model.flamenco_version import FlamencoVersion
from flamenco.manager.model.job import Job
from flamenco.manager.model.job_all_of import JobAllOf

View File

@ -4,9 +4,9 @@ Render Farm manager API
The `flamenco.manager` package is automatically generated by the [OpenAPI Generator](https://openapi-generator.tech) project:
- API version: 1.0.0
- Package version: 3.5-alpha1
- Package version: 3.6-alpha5
- Build package: org.openapitools.codegen.languages.PythonClientCodegen
For more information, please visit [https://flamenco.io/](https://flamenco.io/)
For more information, please visit [https://flamenco.blender.org/](https://flamenco.blender.org/)
## Requirements.
@ -43,7 +43,6 @@ from flamenco.manager.model.job_mass_deletion_selection import JobMassDeletionSe
from flamenco.manager.model.job_priority_change import JobPriorityChange
from flamenco.manager.model.job_status_change import JobStatusChange
from flamenco.manager.model.job_tasks_summary import JobTasksSummary
from flamenco.manager.model.jobs_query import JobsQuery
from flamenco.manager.model.jobs_query_result import JobsQueryResult
from flamenco.manager.model.submitted_job import SubmittedJob
from flamenco.manager.model.task import Task
@ -84,12 +83,12 @@ Class | Method | HTTP request | Description
*JobsApi* | [**fetch_job_blocklist**](flamenco/manager/docs/JobsApi.md#fetch_job_blocklist) | **GET** /api/v3/jobs/{job_id}/blocklist | Fetch the list of workers that are blocked from doing certain task types on this job.
*JobsApi* | [**fetch_job_last_rendered_info**](flamenco/manager/docs/JobsApi.md#fetch_job_last_rendered_info) | **GET** /api/v3/jobs/{job_id}/last-rendered | Get the URL that serves the last-rendered images of this job.
*JobsApi* | [**fetch_job_tasks**](flamenco/manager/docs/JobsApi.md#fetch_job_tasks) | **GET** /api/v3/jobs/{job_id}/tasks | Fetch a summary of all tasks of the given job.
*JobsApi* | [**fetch_jobs**](flamenco/manager/docs/JobsApi.md#fetch_jobs) | **GET** /api/v3/jobs | List all jobs in the database.
*JobsApi* | [**fetch_task**](flamenco/manager/docs/JobsApi.md#fetch_task) | **GET** /api/v3/tasks/{task_id} | Fetch a single task.
*JobsApi* | [**fetch_task_log_info**](flamenco/manager/docs/JobsApi.md#fetch_task_log_info) | **GET** /api/v3/tasks/{task_id}/log | Get the URL of the task log, and some more info.
*JobsApi* | [**fetch_task_log_tail**](flamenco/manager/docs/JobsApi.md#fetch_task_log_tail) | **GET** /api/v3/tasks/{task_id}/logtail | Fetch the last few lines of the task's log.
*JobsApi* | [**get_job_type**](flamenco/manager/docs/JobsApi.md#get_job_type) | **GET** /api/v3/jobs/type/{typeName} | Get single job type and its parameters.
*JobsApi* | [**get_job_types**](flamenco/manager/docs/JobsApi.md#get_job_types) | **GET** /api/v3/jobs/types | Get list of job types and their parameters.
*JobsApi* | [**query_jobs**](flamenco/manager/docs/JobsApi.md#query_jobs) | **POST** /api/v3/jobs/query | Fetch list of jobs.
*JobsApi* | [**remove_job_blocklist**](flamenco/manager/docs/JobsApi.md#remove_job_blocklist) | **DELETE** /api/v3/jobs/{job_id}/blocklist | Remove entries from a job blocklist.
*JobsApi* | [**set_job_priority**](flamenco/manager/docs/JobsApi.md#set_job_priority) | **POST** /api/v3/jobs/{job_id}/setpriority |
*JobsApi* | [**set_job_status**](flamenco/manager/docs/JobsApi.md#set_job_status) | **POST** /api/v3/jobs/{job_id}/setstatus |
@ -101,6 +100,7 @@ Class | Method | HTTP request | Description
*MetaApi* | [**find_blender_exe_path**](flamenco/manager/docs/MetaApi.md#find_blender_exe_path) | **GET** /api/v3/configuration/check/blender | Find one or more CLI commands for use as way to start Blender
*MetaApi* | [**get_configuration**](flamenco/manager/docs/MetaApi.md#get_configuration) | **GET** /api/v3/configuration | Get the configuration of this Manager.
*MetaApi* | [**get_configuration_file**](flamenco/manager/docs/MetaApi.md#get_configuration_file) | **GET** /api/v3/configuration/file | Retrieve the configuration of Flamenco Manager.
*MetaApi* | [**get_farm_status**](flamenco/manager/docs/MetaApi.md#get_farm_status) | **GET** /api/v3/status | Get the status of this Flamenco farm.
*MetaApi* | [**get_shared_storage**](flamenco/manager/docs/MetaApi.md#get_shared_storage) | **GET** /api/v3/configuration/shared-storage/{audience}/{platform} | Get the shared storage location of this Manager, adjusted for the given audience and platform.
*MetaApi* | [**get_variables**](flamenco/manager/docs/MetaApi.md#get_variables) | **GET** /api/v3/configuration/variables/{audience}/{platform} | Get the variables of this Manager. Used by the Blender add-on to recognise two-way variables, and for the web interface to do variable replacement based on the browser's platform.
*MetaApi* | [**get_version**](flamenco/manager/docs/MetaApi.md#get_version) | **GET** /api/v3/version | Get the Flamenco version of this Manager
@ -147,6 +147,7 @@ Class | Method | HTTP request | Description
- [BlenderPathSource](flamenco/manager/docs/BlenderPathSource.md)
- [Command](flamenco/manager/docs/Command.md)
- [Error](flamenco/manager/docs/Error.md)
- [EventFarmStatus](flamenco/manager/docs/EventFarmStatus.md)
- [EventJobUpdate](flamenco/manager/docs/EventJobUpdate.md)
- [EventLastRenderedUpdate](flamenco/manager/docs/EventLastRenderedUpdate.md)
- [EventLifeCycle](flamenco/manager/docs/EventLifeCycle.md)
@ -154,6 +155,8 @@ Class | Method | HTTP request | Description
- [EventTaskUpdate](flamenco/manager/docs/EventTaskUpdate.md)
- [EventWorkerTagUpdate](flamenco/manager/docs/EventWorkerTagUpdate.md)
- [EventWorkerUpdate](flamenco/manager/docs/EventWorkerUpdate.md)
- [FarmStatus](flamenco/manager/docs/FarmStatus.md)
- [FarmStatusReport](flamenco/manager/docs/FarmStatusReport.md)
- [FlamencoVersion](flamenco/manager/docs/FlamencoVersion.md)
- [Job](flamenco/manager/docs/Job.md)
- [JobAllOf](flamenco/manager/docs/JobAllOf.md)

View File

@ -0,0 +1,210 @@
# SPDX-License-Identifier: GPL-3.0-or-later
# <pep8 compliant>
import dataclasses
import json
import platform
from pathlib import Path
from typing import TYPE_CHECKING, Optional, Union
from urllib3.exceptions import HTTPError, MaxRetryError
import bpy
if TYPE_CHECKING:
from flamenco.manager import ApiClient as _ApiClient
from flamenco.manager.models import (
AvailableJobTypes as _AvailableJobTypes,
FlamencoVersion as _FlamencoVersion,
SharedStorageLocation as _SharedStorageLocation,
WorkerTagList as _WorkerTagList,
)
else:
_ApiClient = object
_AvailableJobTypes = object
_FlamencoVersion = object
_SharedStorageLocation = object
_WorkerTagList = object
@dataclasses.dataclass
class ManagerInfo:
"""Cached information obtained from a Flamenco Manager.
This is the root object of what is stored on disk, every time someone
presses a 'refresh' button to update worker tags, job types, etc.
"""
flamenco_version: _FlamencoVersion
shared_storage: _SharedStorageLocation
job_types: _AvailableJobTypes
worker_tags: _WorkerTagList
@staticmethod
def type_info() -> dict[str, type]:
# Do a late import, so that the API is only imported when actually used.
from flamenco.manager.models import (
AvailableJobTypes,
FlamencoVersion,
SharedStorageLocation,
WorkerTagList,
)
# These types cannot be obtained by introspecting the ManagerInfo class, as
# at runtime that doesn't use real type annotations.
return {
"flamenco_version": FlamencoVersion,
"shared_storage": SharedStorageLocation,
"job_types": AvailableJobTypes,
"worker_tags": WorkerTagList,
}
class FetchError(RuntimeError):
"""Raised when the manager info could not be fetched from the Manager."""
class LoadError(RuntimeError):
"""Raised when the manager info could not be loaded from disk cache."""
_cached_manager_info: Optional[ManagerInfo] = None
def fetch(api_client: _ApiClient) -> ManagerInfo:
global _cached_manager_info
# Do a late import, so that the API is only imported when actually used.
from flamenco.manager import ApiException
from flamenco.manager.apis import MetaApi, JobsApi, WorkerMgtApi
from flamenco.manager.models import (
AvailableJobTypes,
FlamencoVersion,
SharedStorageLocation,
WorkerTagList,
)
meta_api = MetaApi(api_client)
jobs_api = JobsApi(api_client)
worker_mgt_api = WorkerMgtApi(api_client)
try:
flamenco_version: FlamencoVersion = meta_api.get_version()
shared_storage: SharedStorageLocation = meta_api.get_shared_storage(
"users", platform.system().lower()
)
job_types: AvailableJobTypes = jobs_api.get_job_types()
worker_tags: WorkerTagList = worker_mgt_api.fetch_worker_tags()
except ApiException as ex:
raise FetchError("Manager cannot be reached: %s" % ex) from ex
except MaxRetryError as ex:
# This is the common error, when for example the port number is
# incorrect and nothing is listening. The exception text is not included
# because it's very long and confusing.
raise FetchError("Manager cannot be reached") from ex
except HTTPError as ex:
raise FetchError("Manager cannot be reached: %s" % ex) from ex
_cached_manager_info = ManagerInfo(
flamenco_version=flamenco_version,
shared_storage=shared_storage,
job_types=job_types,
worker_tags=worker_tags,
)
return _cached_manager_info
class Encoder(json.JSONEncoder):
def default(self, o):
from flamenco.manager.model_utils import OpenApiModel
if isinstance(o, OpenApiModel):
return o.to_dict()
if isinstance(o, ManagerInfo):
# dataclasses.asdict() creates a copy of the OpenAPI models,
# in a way that just doesn't work, hence this workaround.
return {f.name: getattr(o, f.name) for f in dataclasses.fields(o)}
return super().default(o)
def _to_json(info: ManagerInfo) -> str:
return json.dumps(info, indent=" ", cls=Encoder)
def _from_json(contents: Union[str, bytes]) -> ManagerInfo:
# Do a late import, so that the API is only imported when actually used.
from flamenco.manager.configuration import Configuration
from flamenco.manager.model_utils import validate_and_convert_types
json_dict = json.loads(contents)
dummy_cfg = Configuration()
api_models = {}
for name, api_type in ManagerInfo.type_info().items():
api_model = validate_and_convert_types(
json_dict[name],
(api_type,),
[name],
True,
True,
dummy_cfg,
)
api_models[name] = api_model
return ManagerInfo(**api_models)
def _json_filepath() -> Path:
# This is the '~/.config/blender/{version}' path.
user_path = Path(bpy.utils.resource_path(type="USER"))
return user_path / "config" / "flamenco-manager-info.json"
def save(info: ManagerInfo) -> None:
json_path = _json_filepath()
json_path.parent.mkdir(parents=True, exist_ok=True)
as_json = _to_json(info)
json_path.write_text(as_json, encoding="utf8")
def load() -> ManagerInfo:
json_path = _json_filepath()
if not json_path.exists():
raise FileNotFoundError(f"{json_path.name} not found in {json_path.parent}")
try:
as_json = json_path.read_text(encoding="utf8")
except OSError as ex:
raise LoadError(f"Could not read {json_path}: {ex}") from ex
try:
return _from_json(as_json)
except json.JSONDecodeError as ex:
raise LoadError(f"Could not decode JSON in {json_path}") from ex
def load_into_cache() -> Optional[ManagerInfo]:
global _cached_manager_info
_cached_manager_info = None
try:
_cached_manager_info = load()
except FileNotFoundError:
return None
except LoadError as ex:
print(f"Could not load Flamenco Manager info from disk: {ex}")
return None
return _cached_manager_info
def load_cached() -> Optional[ManagerInfo]:
global _cached_manager_info
if _cached_manager_info is not None:
return _cached_manager_info
return load_into_cache()
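To tie the new module together, a hedged sketch of how calling code is expected to use it: prefer the in-memory/disk cache and only contact the Manager when nothing is cached yet. The relative import assumes this lives inside the add-on package, like the operators further down.

```python
from . import manager_info

def get_manager_info(api_client):
    """Return cached Manager info, fetching and saving it when no cache exists yet."""
    info = manager_info.load_cached()
    if info is not None:
        return info
    info = manager_info.fetch(api_client)  # may raise manager_info.FetchError
    manager_info.save(info)
    return info
```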

View File

@ -10,7 +10,7 @@ from urllib3.exceptions import HTTPError, MaxRetryError
import bpy
from . import job_types, job_submission, preferences, worker_tags
from . import job_types, job_submission, preferences, manager_info
from .job_types_propgroup import JobTypePropertyGroup
from .bat.submodules import bpathlib
@ -51,80 +51,6 @@ class FlamencoOpMixin:
return api_client
class FLAMENCO_OT_fetch_job_types(FlamencoOpMixin, bpy.types.Operator):
bl_idname = "flamenco.fetch_job_types"
bl_label = "Fetch Job Types"
bl_description = "Query Flamenco Manager to obtain the available job types"
def execute(self, context: bpy.types.Context) -> set[str]:
api_client = self.get_api_client(context)
from flamenco.manager import ApiException
scene = context.scene
old_job_type_name = getattr(scene, "flamenco_job_type", "")
try:
job_types.fetch_available_job_types(api_client, scene)
except ApiException as ex:
self.report({"ERROR"}, "Error getting job types: %s" % ex)
return {"CANCELLED"}
except MaxRetryError as ex:
# This is the common error, when for example the port number is
# incorrect and nothing is listening.
self.report({"ERROR"}, "Unable to reach Manager")
return {"CANCELLED"}
if old_job_type_name:
try:
scene.flamenco_job_type = old_job_type_name
except TypeError: # Thrown when the old job type no longer exists.
# You cannot un-set an enum property, and 'empty string' is not
# a valid value either, so better to just remove the underlying
# ID property.
del scene["flamenco_job_type"]
self.report(
{"WARNING"},
"Job type %r no longer available, choose another one"
% old_job_type_name,
)
job_types.update_job_type_properties(scene)
return {"FINISHED"}
class FLAMENCO_OT_fetch_worker_tags(FlamencoOpMixin, bpy.types.Operator):
bl_idname = "flamenco.fetch_worker_tags"
bl_label = "Fetch Worker Tags"
bl_description = "Query Flamenco Manager to obtain the available worker tags"
def execute(self, context: bpy.types.Context) -> set[str]:
api_client = self.get_api_client(context)
from flamenco.manager import ApiException
scene = context.scene
old_tag = getattr(scene, "flamenco_worker_tag", "")
try:
worker_tags.refresh(context, api_client)
except ApiException as ex:
self.report({"ERROR"}, "Error getting job types: %s" % ex)
return {"CANCELLED"}
except MaxRetryError as ex:
# This is the common error, when for example the port number is
# incorrect and nothing is listening.
self.report({"ERROR"}, "Unable to reach Manager")
return {"CANCELLED"}
if old_tag:
# TODO: handle cases where the old tag no longer exists.
scene.flamenco_worker_tag = old_tag
return {"FINISHED"}
class FLAMENCO_OT_ping_manager(FlamencoOpMixin, bpy.types.Operator):
bl_idname = "flamenco.ping_manager"
bl_label = "Flamenco: Ping Manager"
@ -132,13 +58,13 @@ class FLAMENCO_OT_ping_manager(FlamencoOpMixin, bpy.types.Operator):
bl_options = {"REGISTER"} # No UNDO.
def execute(self, context: bpy.types.Context) -> set[str]:
from . import comms, preferences
from . import comms
api_client = self.get_api_client(context)
prefs = preferences.get(context)
report, level = comms.ping_manager_with_report(
context.window_manager, api_client, prefs
report, level = comms.ping_manager(
context.window_manager,
context.scene,
api_client,
)
self.report({level}, report)
@ -200,33 +126,53 @@ class FLAMENCO_OT_submit_job(FlamencoOpMixin, bpy.types.Operator):
def poll(cls, context: bpy.types.Context) -> bool:
# Only allow submission when there is a job type selected.
job_type = job_types.active_job_type(context.scene)
cls.poll_message_set("No job type selected")
return job_type is not None
def execute(self, context: bpy.types.Context) -> set[str]:
filepath, ok = self._presubmit_check(context)
if not ok:
return {"CANCELLED"}
is_running = self._submit_files(context, filepath)
if not is_running:
return {"CANCELLED"}
if self.packthread is None:
# If there is no pack thread running, there isn't much we can do.
self.report({"ERROR"}, "No pack thread running, please report a bug")
self._quit(context)
return {"CANCELLED"}
# Keep handling messages from the background thread.
while True:
# Block for 5 seconds at a time. The exact duration doesn't matter,
# as this while-loop is blocking the main thread anyway.
msg = self.packthread.poll(timeout=5)
if not msg:
# No message received, is fine, just wait for another one.
continue
result = self._on_bat_pack_msg(context, msg)
if "RUNNING_MODAL" not in result:
break
self._quit(context)
self.packthread.join(timeout=5)
return {"FINISHED"}
def invoke(self, context: bpy.types.Context, event: bpy.types.Event) -> set[str]:
# Before doing anything, make sure the info we cached about the Manager
# is up to date. A change in job storage directory on the Manager can
# cause nasty error messages when we submit, and it's better to just be
# ahead of the curve and refresh first. This also allows for checking
# the actual Manager version before submitting.
err = self._check_manager(context)
if err:
self.report({"WARNING"}, err)
filepath, ok = self._presubmit_check(context)
if not ok:
return {"CANCELLED"}
if not context.blend_data.filepath:
# The file path needs to be known before the file can be submitted.
self.report(
{"ERROR"}, "Please save your .blend file before submitting to Flamenco"
)
is_running = self._submit_files(context, filepath)
if not is_running:
return {"CANCELLED"}
filepath = self._save_blendfile(context)
# Check the job with the Manager, to see if it would be accepted.
if not self._check_job(context):
return {"CANCELLED"}
return self._submit_files(context, filepath)
context.window_manager.modal_handler_add(self)
return {"RUNNING_MODAL"}
def modal(self, context: bpy.types.Context, event: bpy.types.Event) -> set[str]:
# This function is called for TIMER events to poll the BAT pack thread.
@ -259,29 +205,31 @@ class FLAMENCO_OT_submit_job(FlamencoOpMixin, bpy.types.Operator):
:return: an error string when something went wrong.
"""
from . import comms, preferences
from . import comms
# Get the manager's info. This is cached in the preferences, so
# regardless of whether this function actually responds to version
# mismatches, it has to be called to also refresh the shared storage
# location.
# Get the manager's info. This is cached to disk, so regardless of
# whether this function actually responds to version mismatches, it has
# to be called to also refresh the shared storage location.
api_client = self.get_api_client(context)
prefs = preferences.get(context)
mgrinfo = comms.ping_manager(context.window_manager, api_client, prefs)
if mgrinfo.error:
return mgrinfo.error
report, report_level = comms.ping_manager(
context.window_manager,
context.scene,
api_client,
)
if report_level != "INFO":
return report
# Check the Manager's version.
if not self.ignore_version_mismatch:
my_version = comms.flamenco_client_version()
assert mgrinfo.version is not None
mgrinfo = manager_info.load_cached()
# Safe to assume, as otherwise the ping_manager() call would not have succeeded.
assert mgrinfo is not None
my_version = comms.flamenco_client_version()
mgrversion = mgrinfo.flamenco_version.shortversion
try:
mgrversion = mgrinfo.version.shortversion
except AttributeError:
# shortversion was introduced in Manager version 3.0-beta2, which
# may not be running here yet.
mgrversion = mgrinfo.version.version
if mgrversion != my_version:
context.window_manager.flamenco_version_mismatch = True
return (
@ -299,6 +247,56 @@ class FLAMENCO_OT_submit_job(FlamencoOpMixin, bpy.types.Operator):
# Empty error message indicates 'ok'.
return ""
def _manager_info(
self, context: bpy.types.Context
) -> Optional[manager_info.ManagerInfo]:
"""Load the manager info.
If it cannot be loaded, returns None after emitting an error message and
calling self._quit(context).
"""
manager = manager_info.load_cached()
if not manager:
self.report(
{"ERROR"}, "No information known about Flamenco Manager, refresh first."
)
self._quit(context)
return None
return manager
def _presubmit_check(self, context: bpy.types.Context) -> tuple[Path, bool]:
"""Do a pre-submission check, returning whether submission can continue.
Reports warnings when submission cannot continue, so the caller can just abort.
Returns a tuple (filepath_to_submit, can_submit).
"""
# Before doing anything, make sure the info we cached about the Manager
# is up to date. A change in job storage directory on the Manager can
# cause nasty error messages when we submit, and it's better to just be
# ahead of the curve and refresh first. This also allows for checking
# the actual Manager version before submitting.
err = self._check_manager(context)
if err:
self.report({"WARNING"}, err)
return Path(), False
if not context.blend_data.filepath:
# The file path needs to be known before the file can be submitted.
self.report(
{"ERROR"}, "Please save your .blend file before submitting to Flamenco"
)
return Path(), False
filepath = self._save_blendfile(context)
# Check the job with the Manager, to see if it would be accepted.
if not self._check_job(context):
return Path(), False
return filepath, True
def _save_blendfile(self, context):
"""Save to a different file, specifically for Flamenco.
@ -338,7 +336,14 @@ class FLAMENCO_OT_submit_job(FlamencoOpMixin, bpy.types.Operator):
)
prefs.experimental.use_all_linked_data_direct = True
filepath = Path(context.blend_data.filepath).with_suffix(".flamenco.blend")
filepath = Path(context.blend_data.filepath)
if job_submission.is_file_inside_job_storage(context, filepath):
self.log.info(
"Saving blendfile, already in shared storage: %s", filepath
)
bpy.ops.wm.save_as_mainfile()
else:
filepath = filepath.with_suffix(".flamenco.blend")
self.log.info("Saving copy to temporary file %s", filepath)
bpy.ops.wm.save_as_mainfile(
filepath=str(filepath), compress=True, copy=True
@ -358,18 +363,24 @@ class FLAMENCO_OT_submit_job(FlamencoOpMixin, bpy.types.Operator):
return filepath
def _submit_files(self, context: bpy.types.Context, blendfile: Path) -> set[str]:
"""Ensure that the files are somewhere in the shared storage."""
def _submit_files(self, context: bpy.types.Context, blendfile: Path) -> bool:
"""Ensure that the files are somewhere in the shared storage.
Returns True if a packing thread has been started, and False otherwise.
"""
from .bat import interface as bat_interface
if bat_interface.is_packing():
self.report({"ERROR"}, "Another packing operation is running")
self._quit(context)
return {"CANCELLED"}
return False
prefs = preferences.get(context)
if prefs.is_shaman_enabled:
manager = self._manager_info(context)
if not manager:
return False
if manager.shared_storage.shaman_enabled:
# self.blendfile_on_farm will be set when BAT created the checkout,
# see _on_bat_pack_msg() below.
self.blendfile_on_farm = None
@ -387,13 +398,12 @@ class FLAMENCO_OT_submit_job(FlamencoOpMixin, bpy.types.Operator):
self.blendfile_on_farm = self._bat_pack_filesystem(context, blendfile)
except FileNotFoundError:
self._quit(context)
return {"CANCELLED"}
return False
context.window_manager.modal_handler_add(self)
wm = context.window_manager
self.timer = wm.event_timer_add(self.TIMER_PERIOD, window=context.window)
return {"RUNNING_MODAL"}
return True
def _bat_pack_filesystem(
self, context: bpy.types.Context, blendfile: Path
@ -414,11 +424,14 @@ class FLAMENCO_OT_submit_job(FlamencoOpMixin, bpy.types.Operator):
raise FileNotFoundError()
# Determine where the blend file will be stored.
manager = self._manager_info(context)
if not manager:
raise FileNotFoundError("Manager info not known")
unique_dir = "%s-%s" % (
datetime.datetime.now().isoformat("-").replace(":", ""),
self.job_name,
)
pack_target_dir = Path(prefs.job_storage) / unique_dir
pack_target_dir = Path(manager.shared_storage.location) / unique_dir
# TODO: this should take the blendfile location relative to the project path into account.
pack_target_file = pack_target_dir / blendfile.name
@ -690,8 +703,6 @@ class FLAMENCO3_OT_explore_file_path(bpy.types.Operator):
classes = (
FLAMENCO_OT_fetch_job_types,
FLAMENCO_OT_fetch_worker_tags,
FLAMENCO_OT_ping_manager,
FLAMENCO_OT_eval_setting,
FLAMENCO_OT_submit_job,

View File

@ -5,7 +5,7 @@ from pathlib import Path
import bpy
from . import projects
from . import projects, manager_info
def discard_flamenco_client(context):
@ -16,9 +16,7 @@ def discard_flamenco_client(context):
context.window_manager.flamenco_status_ping = ""
def _refresh_the_planet(
prefs: "FlamencoPreferences", context: bpy.types.Context
) -> None:
def _refresh_the_planet(context: bpy.types.Context) -> None:
"""Refresh all GUI areas."""
for win in context.window_manager.windows:
for area in win.screen.areas:
@ -35,7 +33,8 @@ def _manager_url_updated(prefs, context):
# Warning: be careful which parts of the context you access here. Accessing /
# changing too much can cause crashes, infinite loops, etc.
comms.ping_manager_with_report(context.window_manager, api_client, prefs)
comms.ping_manager(context.window_manager, context.scene, api_client)
_refresh_the_planet(context)
_project_finder_enum_items = [
@ -66,22 +65,6 @@ class FlamencoPreferences(bpy.types.AddonPreferences):
items=_project_finder_enum_items,
)
is_shaman_enabled: bpy.props.BoolProperty( # type: ignore
name="Shaman Enabled",
description="Whether this Manager has the Shaman protocol enabled",
default=False,
update=_refresh_the_planet,
)
# Property that should be editable from Python. It's not exposed to the GUI.
job_storage: bpy.props.StringProperty( # type: ignore
name="Job Storage Directory",
subtype="DIR_PATH",
default="",
options={"HIDDEN"},
description="Directory where blend files are stored when submitting them to Flamenco. This value is determined by Flamenco Manager",
)
# Property that gets its value from the above _job_storage, and cannot be
# set. This makes it read-only in the GUI.
job_storage_for_gui: bpy.props.StringProperty( # type: ignore
@ -90,14 +73,7 @@ class FlamencoPreferences(bpy.types.AddonPreferences):
default="",
options={"SKIP_SAVE"},
description="Directory where blend files are stored when submitting them to Flamenco. This value is determined by Flamenco Manager",
get=lambda prefs: prefs.job_storage,
)
worker_tags: bpy.props.CollectionProperty( # type: ignore
type=WorkerTag,
name="Worker Tags",
description="Cache for the worker tags available on the configured Manager",
options={"HIDDEN"},
get=lambda prefs: prefs._job_storage(),
)
def draw(self, context: bpy.types.Context) -> None:
@ -116,7 +92,9 @@ class FlamencoPreferences(bpy.types.AddonPreferences):
split.label(text="")
split.label(text=label)
if not self.job_storage:
manager = manager_info.load_cached()
if not manager:
text_row(col, "Press the refresh button before using Flamenco")
if context.window_manager.flamenco_status_ping:
@ -126,7 +104,7 @@ class FlamencoPreferences(bpy.types.AddonPreferences):
text_row(aligned, "Press the refresh button to check the connection")
text_row(aligned, "and update the job storage location")
if self.is_shaman_enabled:
if manager and manager.shared_storage.shaman_enabled:
text_row(col, "Shaman enabled")
col.prop(self, "job_storage_for_gui", text="Job Storage")
@ -152,6 +130,12 @@ class FlamencoPreferences(bpy.types.AddonPreferences):
blendfile = Path(bpy.data.filepath)
return projects.for_blendfile(blendfile, self.project_finder)
def _job_storage(self) -> str:
info = manager_info.load_cached()
if not info:
return "Unknown, refresh first."
return str(info.shared_storage.location)
def get(context: bpy.types.Context) -> FlamencoPreferences:
"""Return the add-on preferences."""

View File

@ -2,7 +2,7 @@
# <pep8 compliant>
from pathlib import Path
from typing import Callable, TypeAlias
from typing import Callable
import dataclasses
from .bat.submodules import bpathlib
@ -45,7 +45,7 @@ def _finder_subversion(blendfile: Path) -> Path:
def _search_path_marker(blendfile: Path, marker_path: str) -> Path:
"""Go up the directory hierarchy until a file or directory 'marker_path' is found."""
blendfile_dir = bpathlib.make_absolute(blendfile).parent
blendfile_dir: Path = bpathlib.make_absolute(blendfile).parent
directory = blendfile_dir
while True:
@ -64,7 +64,7 @@ def _search_path_marker(blendfile: Path, marker_path: str) -> Path:
return blendfile_dir
Finder: TypeAlias = Callable[[Path], Path]
Finder = Callable[[Path], Path]
@dataclasses.dataclass

View File

@ -1,57 +1,35 @@
# SPDX-License-Identifier: GPL-3.0-or-later
from typing import TYPE_CHECKING, Union
from typing import Union
import bpy
from . import preferences
if TYPE_CHECKING:
from flamenco.manager import ApiClient as _ApiClient
else:
_ApiClient = object
from . import manager_info
_enum_items: list[Union[tuple[str, str, str], tuple[str, str, str, int, int]]] = []
def refresh(context: bpy.types.Context, api_client: _ApiClient) -> None:
"""Fetch the available worker tags from the Manager."""
from flamenco.manager import ApiClient
from flamenco.manager.api import worker_mgt_api
from flamenco.manager.model.worker_tag_list import WorkerTagList
assert isinstance(api_client, ApiClient)
api = worker_mgt_api.WorkerMgtApi(api_client)
response: WorkerTagList = api.fetch_worker_tags()
# Store on the preferences, so a cached version persists until the next refresh.
prefs = preferences.get(context)
prefs.worker_tags.clear()
for tag in response.tags:
rna_tag = prefs.worker_tags.add()
rna_tag.id = tag.id
rna_tag.name = tag.name
rna_tag.description = getattr(tag, "description", "")
# Preferences have changed, so make sure that Blender saves them (assuming
# auto-save here).
context.preferences.is_dirty = True
def _get_enum_items(self, context):
global _enum_items
prefs = preferences.get(context)
manager = manager_info.load_cached()
if manager is None:
_enum_items = [
(
"-",
"-tags unknown-",
"Refresh to load the available Worker tags from the Manager",
),
]
return _enum_items
_enum_items = [
("-", "All", "No specific tag assigned, any worker can handle this job"),
]
_enum_items.extend(
(tag.id, tag.name, tag.description)
for tag in prefs.worker_tags
)
for tag in manager.worker_tags.tags:
_enum_items.append((tag.id, tag.name, getattr(tag, "description", "")))
return _enum_items
@ -70,9 +48,3 @@ def unregister() -> None:
delattr(ob, attr)
except AttributeError:
pass
if __name__ == "__main__":
import doctest
print(doctest.testmod())

View File

@ -27,6 +27,7 @@ import (
"projects.blender.org/studio/flamenco/internal/manager/api_impl/dummy"
"projects.blender.org/studio/flamenco/internal/manager/config"
"projects.blender.org/studio/flamenco/internal/manager/eventbus"
"projects.blender.org/studio/flamenco/internal/manager/farmstatus"
"projects.blender.org/studio/flamenco/internal/manager/job_compilers"
"projects.blender.org/studio/flamenco/internal/manager/job_deleter"
"projects.blender.org/studio/flamenco/internal/manager/last_rendered"
@ -55,6 +56,10 @@ const (
developmentWebInterfacePort = 8081
webappEntryPoint = "index.html"
// dbOpenTimeout is the time the persistence layer gets to open the database.
// This includes database migrations, which can take some time to perform.
dbOpenTimeout = 1 * time.Minute
)
type shutdownFunc func()
@ -143,7 +148,13 @@ func runFlamencoManager() bool {
log.Fatal().Err(err).Msg("unable to figure out my own URL")
}
ssdp := makeAutoDiscoverable(urls)
// Construct the UPnP/SSDP server.
var ssdp *upnp_ssdp.Server
if configService.Get().SSDPDiscovery {
ssdp = makeAutoDiscoverable(urls)
} else {
log.Debug().Msg("UPnP/SSDP autodiscovery disabled in configuration")
}
// Construct the services.
persist := openDB(*configService)
@ -174,10 +185,12 @@ func runFlamencoManager() bool {
shamanServer := buildShamanServer(configService, isFirstRun)
jobDeleter := job_deleter.NewService(persist, localStorage, eventBroker, shamanServer)
farmStatus := farmstatus.NewService(persist, eventBroker)
flamenco := api_impl.NewFlamenco(
compiler, persist, eventBroker, logStorage, configService,
taskStateMachine, shamanServer, timeService, lastRender,
localStorage, sleepScheduler, jobDeleter)
localStorage, sleepScheduler, jobDeleter, farmStatus)
e := buildWebService(flamenco, persist, ssdp, socketio, urls, localStorage)
@ -278,6 +291,13 @@ func runFlamencoManager() bool {
jobDeleter.Run(mainCtx)
}()
// Run the Farm Status service.
wg.Add(1)
go func() {
defer wg.Done()
farmStatus.Run(mainCtx)
}()
# Log the URLs last; hopefully that makes them more visible and encourages users to visit them.
go func() {
time.Sleep(100 * time.Millisecond)
@ -369,7 +389,7 @@ func openDB(configService config.Service) *persistence.DB {
log.Fatal().Msg("configure the database in flamenco-manager.yaml")
}
dbCtx, dbCtxCancel := context.WithTimeout(context.Background(), 5*time.Second)
dbCtx, dbCtxCancel := context.WithTimeout(context.Background(), dbOpenTimeout)
defer dbCtxCancel()
persist, err := persistence.OpenDB(dbCtx, dsn)
if err != nil {

View File

@ -38,7 +38,7 @@ func findBlender() {
result, err := find_blender.Find(ctx)
switch {
case errors.Is(err, fs.ErrNotExist), errors.Is(err, exec.ErrNotFound):
log.Warn().Msg("Blender could not be found. " + helpMsg)
log.Info().Msg("Blender could not be found. " + helpMsg)
case err != nil:
log.Warn().AnErr("cause", err).Msg("There was an error finding Blender on this system. " + helpMsg)
default:

View File

@ -23,7 +23,9 @@ import (
"projects.blender.org/studio/flamenco/internal/appinfo"
"projects.blender.org/studio/flamenco/internal/worker"
"projects.blender.org/studio/flamenco/internal/worker/cli_runner"
"projects.blender.org/studio/flamenco/pkg/oomscore"
"projects.blender.org/studio/flamenco/pkg/sysinfo"
"projects.blender.org/studio/flamenco/pkg/website"
)
var (
@ -113,6 +115,10 @@ func main() {
findBlender()
findFFmpeg()
// Create the CLI runner before the auto-discovery, to make any configuration
// problems clear before waiting for the Manager to respond.
cliRunner := createCLIRunner(&configWrangler)
// Give the auto-discovery some time to find a Manager.
discoverTimeout := 10 * time.Minute
discoverCtx, discoverCancel := context.WithTimeout(context.Background(), discoverTimeout)
@ -148,7 +154,6 @@ func main() {
return
}
cliRunner := cli_runner.NewCLIRunner()
listener = worker.NewListener(client, buffer)
cmdRunner := worker.NewCommandExecutor(cliRunner, listener, timeService)
taskRunner := worker.NewTaskExecutor(cmdRunner, listener)
@ -296,8 +301,34 @@ func upstreamBufferOrDie(client worker.FlamencoClient, timeService clock.Clock)
func logFatalManagerDiscoveryError(err error, discoverTimeout time.Duration) {
if errors.Is(err, context.DeadlineExceeded) {
log.Fatal().Str("timeout", discoverTimeout.String()).Msg("could not discover Manager in time")
log.Fatal().Stringer("timeout", discoverTimeout).
Msgf("could not discover Manager in time, see %s", website.CannotFindManagerHelpURL)
} else {
log.Fatal().Err(err).Msg("auto-discovery error")
log.Fatal().Err(err).
Msgf("auto-discovery error, see %s", website.CannotFindManagerHelpURL)
}
}
func createCLIRunner(configWrangler *worker.FileConfigWrangler) *cli_runner.CLIRunner {
config, err := configWrangler.WorkerConfig()
if err != nil {
log.Fatal().Err(err).Msg("error loading worker configuration")
}
if config.LinuxOOMScoreAdjust == nil {
log.Debug().Msg("executables will be run without OOM score adjustment")
return cli_runner.NewCLIRunner()
}
if !oomscore.Available() {
log.Warn().
Msgf("config: oom_score_adjust configured, but that is only available on Linux, not this platform. See %s for more information.",
website.OOMScoreAdjURL)
return cli_runner.NewCLIRunner()
}
adjustment := *config.LinuxOOMScoreAdjust
log.Info().Int("oom_score_adjust", adjustment).Msg("executables will be run with OOM score adjustment")
return cli_runner.NewCLIRunnerWithOOMScoreAdjuster(adjustment)
}

View File

@ -21,7 +21,6 @@ import (
"projects.blender.org/studio/flamenco/internal/appinfo"
"projects.blender.org/studio/flamenco/internal/manager/config"
"projects.blender.org/studio/flamenco/internal/manager/persistence"
"projects.blender.org/studio/flamenco/pkg/api"
)
func main() {
@ -72,7 +71,7 @@ func main() {
defer persist.Close()
// Get all jobs from the database.
jobs, err := persist.QueryJobs(ctx, api.JobsQuery{})
jobs, err := persist.FetchJobs(ctx)
if err != nil {
log.Fatal().Err(err).Msg("unable to fetch jobs")
}

View File

@ -0,0 +1,189 @@
package main
// SPDX-License-Identifier: GPL-3.0-or-later
import (
"context"
"database/sql"
"flag"
"fmt"
"os"
"os/signal"
"regexp"
"strings"
"syscall"
"time"
"github.com/mattn/go-colorable"
"github.com/rs/zerolog"
"github.com/rs/zerolog/log"
"gopkg.in/yaml.v2"
_ "modernc.org/sqlite"
)
var (
// Tables and/or indices to skip when writing the schema.
// Anything that is *not* to be seen by sqlc should be listed here.
skips = map[SQLiteSchema]bool{
// Goose manages its own versioning table. SQLC should ignore its existence.
{Type: "table", Name: "goose_db_version"}: true,
}
tableNameDequoter = regexp.MustCompile("^(?:CREATE TABLE )(\"([^\"]+)\")")
)
type SQLiteSchema struct {
Type string
Name string
TableName string
RootPage int
SQL sql.NullString
}
func saveSchema(ctx context.Context, sqlOutPath string) error {
db, err := sql.Open("sqlite", "flamenco-manager.sqlite")
if err != nil {
return err
}
defer db.Close()
rows, err := db.QueryContext(ctx, "select * from sqlite_schema order by type desc, name asc")
if err != nil {
return err
}
defer rows.Close()
sqlBuilder := strings.Builder{}
for rows.Next() {
var data SQLiteSchema
if err := rows.Scan(
&data.Type,
&data.Name,
&data.TableName,
&data.RootPage,
&data.SQL,
); err != nil {
return err
}
if strings.HasPrefix(data.Name, "sqlite_") {
continue
}
if skips[SQLiteSchema{Type: data.Type, Name: data.Name}] {
continue
}
if !data.SQL.Valid {
continue
}
sql := tableNameDequoter.ReplaceAllString(data.SQL.String, "CREATE TABLE $2")
sqlBuilder.WriteString(sql)
sqlBuilder.WriteString(";\n")
}
sqlBytes := []byte(sqlBuilder.String())
if err := os.WriteFile(sqlOutPath, sqlBytes, os.ModePerm); err != nil {
return fmt.Errorf("writing to %s: %w", sqlOutPath, err)
}
log.Info().Str("path", sqlOutPath).Msg("schema written to file")
return nil
}
// SqlcConfig models the minimal subset of the sqlc.yaml we need to parse.
type SqlcConfig struct {
Version string `yaml:"version"`
SQL []struct {
Schema string `yaml:"schema"`
} `yaml:"sql"`
}
func main() {
output := zerolog.ConsoleWriter{Out: colorable.NewColorableStdout(), TimeFormat: time.RFC3339}
log.Logger = log.Output(output)
parseCliArgs()
mainCtx, mainCtxCancel := context.WithCancel(context.Background())
defer mainCtxCancel()
installSignalHandler(mainCtxCancel)
schemaPath := schemaPathFromSqlcYAML()
if err := saveSchema(mainCtx, schemaPath); err != nil {
log.Fatal().Err(err).Msg("couldn't export schema")
}
}
// installSignalHandler spawns a goroutine that handles incoming POSIX signals.
func installSignalHandler(cancelFunc context.CancelFunc) {
signals := make(chan os.Signal, 1)
signal.Notify(signals, os.Interrupt)
signal.Notify(signals, syscall.SIGTERM)
go func() {
for signum := range signals {
log.Info().Str("signal", signum.String()).Msg("signal received, shutting down")
cancelFunc()
}
}()
}
func parseCliArgs() {
var quiet, debug, trace bool
flag.BoolVar(&quiet, "quiet", false, "Only log warning-level and worse.")
flag.BoolVar(&debug, "debug", false, "Enable debug-level logging.")
flag.BoolVar(&trace, "trace", false, "Enable trace-level logging.")
flag.Parse()
var logLevel zerolog.Level
switch {
case trace:
logLevel = zerolog.TraceLevel
case debug:
logLevel = zerolog.DebugLevel
case quiet:
logLevel = zerolog.WarnLevel
default:
logLevel = zerolog.InfoLevel
}
zerolog.SetGlobalLevel(logLevel)
}
func schemaPathFromSqlcYAML() string {
var sqlcConfig SqlcConfig
{
sqlcConfigBytes, err := os.ReadFile("sqlc.yaml")
if err != nil {
log.Fatal().Err(err).Msg("cannot read sqlc.yaml")
}
if err := yaml.Unmarshal(sqlcConfigBytes, &sqlcConfig); err != nil {
log.Fatal().Err(err).Msg("cannot parse sqlc.yaml")
}
}
if sqlcConfig.Version != "2" {
log.Fatal().
Str("version", sqlcConfig.Version).
Str("expected", "2").
Msg("unexpected version in sqlc.yaml")
}
if len(sqlcConfig.SQL) != 1 {
log.Fatal().
Int("sql items", len(sqlcConfig.SQL)).
Msg("sqlc.yaml should contain a single item in the 'sql' list")
}
schema := sqlcConfig.SQL[0].Schema
if schema == "" {
log.Fatal().Msg("sqlc.yaml should have a 'schema' key in the 'sql' item")
}
return schema
}
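For context, the checks above boil down to a very small required shape for sqlc.yaml. A minimal sketch (not part of this commit) of a document that passes them, parsed with the same gopkg.in/yaml.v2 package used above; the schema path is purely illustrative:

package main

import (
	"fmt"

	"gopkg.in/yaml.v2"
)

// Same minimal shape as the SqlcConfig struct above.
type sqlcConfig struct {
	Version string `yaml:"version"`
	SQL     []struct {
		Schema string `yaml:"schema"`
	} `yaml:"sql"`
}

func main() {
	// Illustrative sqlc.yaml contents; only 'version' and a single
	// 'sql' entry with a 'schema' key are required by the checks above.
	doc := []byte("version: \"2\"\nsql:\n  - schema: path/to/schema.sql\n")

	var cfg sqlcConfig
	if err := yaml.Unmarshal(doc, &cfg); err != nil {
		panic(err)
	}
	fmt.Println(cfg.SQL[0].Schema) // prints: path/to/schema.sql
}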

50
go.mod
View File

@ -1,6 +1,6 @@
module projects.blender.org/studio/flamenco
go 1.22
go 1.23.1
require (
github.com/adrg/xdg v0.4.0
@ -14,37 +14,34 @@ require (
github.com/fromkeith/gossdp v0.0.0-20180102154144-1b2c43f6886e
github.com/gertd/go-pluralize v0.2.1
github.com/getkin/kin-openapi v0.88.0
github.com/glebarez/go-sqlite v1.22.0
github.com/glebarez/sqlite v1.10.0
github.com/golang/mock v1.6.0
github.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510
github.com/google/uuid v1.5.0
github.com/graarh/golang-socketio v0.0.0-20170510162725-2c44953b9b5f
github.com/labstack/echo/v4 v4.9.1
github.com/magefile/mage v1.14.0
github.com/magefile/mage v1.15.0
github.com/mattn/go-colorable v0.1.12
github.com/pkg/browser v0.0.0-20210911075715-681adbf594b8
github.com/pressly/goose/v3 v3.15.1
github.com/rs/zerolog v1.26.1
github.com/stretchr/testify v1.8.4
github.com/tc-hib/go-winres v0.3.1
github.com/tc-hib/go-winres v0.3.3
github.com/zcalusic/sysinfo v1.0.1
github.com/ziflex/lecho/v3 v3.1.0
golang.org/x/crypto v0.18.0
golang.org/x/image v0.10.0
golang.org/x/net v0.20.0
golang.org/x/sync v0.6.0
golang.org/x/sys v0.16.0
golang.org/x/vuln v1.0.4
golang.org/x/crypto v0.25.0
golang.org/x/image v0.18.0
golang.org/x/net v0.27.0
golang.org/x/sync v0.7.0
golang.org/x/sys v0.22.0
golang.org/x/vuln v1.1.3
gopkg.in/yaml.v2 v2.4.0
gorm.io/gorm v1.25.5
honnef.co/go/tools v0.4.2
honnef.co/go/tools v0.5.1
modernc.org/sqlite v1.28.0
)
require (
github.com/BurntSushi/toml v1.2.1 // indirect
github.com/cpuguy83/go-md2man/v2 v2.0.0 // indirect
github.com/BurntSushi/toml v1.4.1-0.20240526193622-a339e1f7089c // indirect
github.com/cpuguy83/go-md2man/v2 v2.0.2 // indirect
github.com/davecgh/go-spew v1.1.1 // indirect
github.com/dlclark/regexp2 v1.7.0 // indirect
github.com/dustin/go-humanize v1.0.1 // indirect
@ -56,8 +53,7 @@ require (
github.com/google/pprof v0.0.0-20230207041349-798e818bf904 // indirect
github.com/gorilla/mux v1.8.0 // indirect
github.com/gorilla/websocket v1.5.0 // indirect
github.com/jinzhu/inflection v1.0.0 // indirect
github.com/jinzhu/now v1.1.5 // indirect
github.com/kballard/go-shellquote v0.0.0-20180428030007-95032a82bc51 // indirect
github.com/labstack/gommon v0.4.0 // indirect
github.com/mailru/easyjson v0.7.0 // indirect
github.com/mattn/go-isatty v0.0.20 // indirect
@ -65,17 +61,25 @@ require (
github.com/pmezard/go-difflib v1.0.0 // indirect
github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec // indirect
github.com/russross/blackfriday/v2 v2.1.0 // indirect
github.com/tc-hib/winres v0.1.6 // indirect
github.com/urfave/cli/v2 v2.3.0 // indirect
github.com/tc-hib/winres v0.2.1 // indirect
github.com/urfave/cli/v2 v2.25.7 // indirect
github.com/valyala/bytebufferpool v1.0.0 // indirect
github.com/valyala/fasttemplate v1.2.1 // indirect
golang.org/x/exp/typeparams v0.0.0-20221208152030-732eee02a75a // indirect
golang.org/x/mod v0.14.0 // indirect
golang.org/x/text v0.14.0 // indirect
github.com/xrash/smetrics v0.0.0-20201216005158-039620a65673 // indirect
golang.org/x/exp/typeparams v0.0.0-20231108232855-2478ac86f678 // indirect
golang.org/x/mod v0.19.0 // indirect
golang.org/x/telemetry v0.0.0-20240522233618-39ace7a40ae7 // indirect
golang.org/x/text v0.16.0 // indirect
golang.org/x/time v0.0.0-20210220033141-f8bda1e9f3ba // indirect
golang.org/x/tools v0.17.0 // indirect
golang.org/x/tools v0.23.0 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
lukechampine.com/uint128 v1.3.0 // indirect
modernc.org/cc/v3 v3.41.0 // indirect
modernc.org/ccgo/v3 v3.16.15 // indirect
modernc.org/libc v1.37.6 // indirect
modernc.org/mathutil v1.6.0 // indirect
modernc.org/memory v1.7.2 // indirect
modernc.org/opt v0.1.3 // indirect
modernc.org/strutil v1.2.0 // indirect
modernc.org/token v1.1.0 // indirect
)

111
go.sum generated
View File

@ -1,6 +1,5 @@
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/BurntSushi/toml v1.2.1 h1:9F2/+DoOYIOksmaJFPw1tGFy1eDnIJXg+UHjuD8lTak=
github.com/BurntSushi/toml v1.2.1/go.mod h1:CxXYINrC8qIiEnFrOxCa7Jy5BFHlXnUU2pbicEuybxQ=
github.com/BurntSushi/toml v1.4.1-0.20240526193622-a339e1f7089c h1:pxW6RcqyfI9/kWtOwnv/G+AzdKuy2ZrqINhenH4HyNs=
github.com/BurntSushi/toml v1.4.1-0.20240526193622-a339e1f7089c/go.mod h1:ukJfTF/6rtPPRCnwkur4qwRxa8vTRFBF0uk2lLoLwho=
github.com/adrg/xdg v0.4.0 h1:RzRqFcjH4nE5C6oTAxhBtoE2IRyjBSa62SCbyPidvls=
github.com/adrg/xdg v0.4.0/go.mod h1:N6ag73EX4wyxeaoeHctc1mas01KZgsj5tYiAIwqJE/E=
github.com/alessio/shellescape v1.4.2 h1:MHPfaU+ddJ0/bYWpgIeUnQUqKrlJ1S7BfEYPM4uEoM0=
@ -11,9 +10,8 @@ github.com/chzyer/logex v1.2.0/go.mod h1:9+9sk7u7pGNWYMkh0hdiL++6OeibzJccyQU4p4M
github.com/chzyer/readline v1.5.0/go.mod h1:x22KAscuvRqlLoK9CsoYsmxoXZMMFVyOl86cAH8qUic=
github.com/chzyer/test v0.0.0-20210722231415-061457976a23/go.mod h1:Q3SI9o4m/ZMnBNeIyt5eFwwo7qiLfzFZmjNmxjkiQlU=
github.com/coreos/go-systemd/v22 v22.3.2/go.mod h1:Y58oyj3AT4RCenI/lSvhwexgC+NSVTIJ3seZv2GcEnc=
github.com/cpuguy83/go-md2man/v2 v2.0.0-20190314233015-f79a8a8ca69d/go.mod h1:maD7wRr/U5Z6m/iR4s+kqSMx2CaBsrgA7czyZG/E6dU=
github.com/cpuguy83/go-md2man/v2 v2.0.0 h1:EoUDS0afbrsXAZ9YQ9jdu/mZ2sXgT1/2yyNng4PGlyM=
github.com/cpuguy83/go-md2man/v2 v2.0.0/go.mod h1:maD7wRr/U5Z6m/iR4s+kqSMx2CaBsrgA7czyZG/E6dU=
github.com/cpuguy83/go-md2man/v2 v2.0.2 h1:p1EgwI/C7NhT0JmVkwCD2ZBK8j4aeHQX2pMHHBfMQ6w=
github.com/cpuguy83/go-md2man/v2 v2.0.2/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o=
github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E=
github.com/cyberdelia/templates v0.0.0-20141128023046-ca7fffd4298c/go.mod h1:GyV+0YP4qX0UQ7r2MoYZ+AvYDp12OF5yg4q8rGnyNh4=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
@ -50,10 +48,6 @@ github.com/ghodss/yaml v1.0.0 h1:wQHKEahhL6wmXdzwWG11gIVCkOv05bNOh+Rxn0yngAk=
github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04=
github.com/gin-contrib/sse v0.1.0/go.mod h1:RHrZQHXnP2xjPF+u1gW/2HnVO7nvIa9PG3Gm+fLHvGI=
github.com/gin-gonic/gin v1.7.4/go.mod h1:jD2toBW3GZUr5UMcdrwQA10I7RuaFOl/SGeDjXkfUtY=
github.com/glebarez/go-sqlite v1.22.0 h1:uAcMJhaA6r3LHMTFgP0SifzgXg46yJkgxqyuyec+ruQ=
github.com/glebarez/go-sqlite v1.22.0/go.mod h1:PlBIdHe0+aUEFn+r2/uthrWq4FxbzugL0L8Li6yQJbc=
github.com/glebarez/sqlite v1.10.0 h1:u4gt8y7OND/cCei/NMHmfbLxF6xP2wgKcT/BJf2pYkc=
github.com/glebarez/sqlite v1.10.0/go.mod h1:IJ+lfSOmiekhQsFTJRx/lHtGYmCdtAiTaf5wI9u5uHA=
github.com/go-chi/chi/v5 v5.0.0/go.mod h1:BBug9lr0cqtdAhsu6R4AAdvufI0/XBzAQSsUqJpoZOs=
github.com/go-openapi/jsonpointer v0.19.5 h1:gZr+CIYByUqjcgeLXnQu2gHYQC9o73G2XUeOFYEICuY=
github.com/go-openapi/jsonpointer v0.19.5/go.mod h1:Pl9vOtqEWErmShwVjC8pYs9cog34VGT37dQOVbmoatg=
@ -82,8 +76,8 @@ github.com/golangci/lint-1 v0.0.0-20181222135242-d2cdd8c08219/go.mod h1:/X8TswGS
github.com/google/go-cmdtest v0.4.1-0.20220921163831-55ab3332a786 h1:rcv+Ippz6RAtvaGgKxc+8FQIpxHgsF+HBzPyYL2cyVU=
github.com/google/go-cmdtest v0.4.1-0.20220921163831-55ab3332a786/go.mod h1:apVn/GCasLZUVpAJ6oWAuyP7Ne7CEsQbTnc0plM3m+o=
github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.9 h1:O2Tfq5qg4qc4AmwVlvv0oLiVAGB7enBSJ2x2DqQFi38=
github.com/google/go-cmp v0.5.9/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/google/go-cmp v0.6.0 h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI=
github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/pprof v0.0.0-20230207041349-798e818bf904 h1:4/hN5RUoecvl+RmJRE2YxKWtnnQls6rQjjW5oV7qg2U=
github.com/google/pprof v0.0.0-20230207041349-798e818bf904/go.mod h1:uglQLonpP8qtYCYyzA+8c/9qtqgA3qsXGYqCPKARAFg=
@ -100,10 +94,6 @@ github.com/gorilla/websocket v1.5.0/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/ad
github.com/graarh/golang-socketio v0.0.0-20170510162725-2c44953b9b5f h1:utzdm9zUvVWGRtIpkdE4+36n+Gv60kNb7mFvgGxLElY=
github.com/graarh/golang-socketio v0.0.0-20170510162725-2c44953b9b5f/go.mod h1:8gudiNCFh3ZfvInknmoXzPeV17FSH+X2J5k2cUPIwnA=
github.com/ianlancetaylor/demangle v0.0.0-20220319035150-800ac71e25c2/go.mod h1:aYm2/VgdVmcIU8iMfdMvDMsRAQjcfZSKFby6HOFvi/w=
github.com/jinzhu/inflection v1.0.0 h1:K317FqzuhWc8YvSVlFMCCUb36O/S9MCKRDI7QkRKD/E=
github.com/jinzhu/inflection v1.0.0/go.mod h1:h+uFLlag+Qp1Va5pdKtLDYj+kHp5pxUVkryuEj+Srlc=
github.com/jinzhu/now v1.1.5 h1:/o9tlHleP7gOFmsnYNz3RGnqzefHA47wQpKrrdTIwXQ=
github.com/jinzhu/now v1.1.5/go.mod h1:d3SSVoowX0Lcu0IBviAWJpolVfI5UJVZZ7cO71lE/z8=
github.com/json-iterator/go v1.1.9/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
github.com/json-iterator/go v1.1.12/go.mod h1:e30LSqwooZae/UwlEbR2852Gd8hjQvJoHmT4TnhNGBo=
github.com/kballard/go-shellquote v0.0.0-20180428030007-95032a82bc51 h1:Z9n2FFNUXsshfwJMBgNA0RU6/i7WVaAegv3PtuIHPMs=
@ -133,8 +123,8 @@ github.com/lestrrat-go/httpcc v1.0.0/go.mod h1:tGS/u00Vh5N6FHNkExqGGNId8e0Big+++
github.com/lestrrat-go/iter v1.0.1/go.mod h1:zIdgO1mRKhn8l9vrZJZz9TUMMFbQbLeTsbqPDrJ/OJc=
github.com/lestrrat-go/jwx v1.2.7/go.mod h1:bw24IXWbavc0R2RsOtpXL7RtMyP589yZ1+L7kd09ZGA=
github.com/lestrrat-go/option v1.0.0/go.mod h1:5ZHFbivi4xwXxhxY9XHDe2FHo6/Z7WWmtT7T5nBBp3I=
github.com/magefile/mage v1.14.0 h1:6QDX3g6z1YvJ4olPhT1wksUcSa/V0a1B+pJb73fBjyo=
github.com/magefile/mage v1.14.0/go.mod h1:z5UZb/iS3GoOSn0JgWuiw7dxlurVYTu+/jHXqQg881A=
github.com/magefile/mage v1.15.0 h1:BvGheCMAsG3bWUDbZ8AyXXpCNwU9u5CB6sM+HNb9HYg=
github.com/magefile/mage v1.15.0/go.mod h1:z5UZb/iS3GoOSn0JgWuiw7dxlurVYTu+/jHXqQg881A=
github.com/mailru/easyjson v0.0.0-20190614124828-94de47d64c63/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc=
github.com/mailru/easyjson v0.0.0-20190626092158-b2ccc519800e/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc=
github.com/mailru/easyjson v0.7.0 h1:aizVhC/NAAcKWb+5QsU1iNOZb4Yws5UO2I+aIprQITM=
@ -152,6 +142,8 @@ github.com/mattn/go-isatty v0.0.12/go.mod h1:cbi8OIDigv2wuxKPP5vlRcQ1OAZbq2CE4Ky
github.com/mattn/go-isatty v0.0.14/go.mod h1:7GGIvUiUoEMVVmxf/4nioHXj79iQHKdU27kJ6hsGG94=
github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY=
github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
github.com/mattn/go-sqlite3 v1.14.16 h1:yOQRA0RpS5PFz/oikGwBEqvAWhWg5ufRz4ETLjwpU1Y=
github.com/mattn/go-sqlite3 v1.14.16/go.mod h1:2eHXhiwb8IkHr+BDWZGa96P6+rkvnG63S2DGjv9HUNg=
github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/reflect2 v0.0.0-20180701023420-4b7aa43c6742/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
@ -175,10 +167,8 @@ github.com/rs/xid v1.3.0/go.mod h1:trrq9SKmegXys3aeAKXMUTdJsYXVwGY3RLcfgqegfbg=
github.com/rs/zerolog v1.26.0/go.mod h1:yBiM87lvSqX8h0Ww4sdzNSkVYZ8dL2xjZJG1lAuGZEo=
github.com/rs/zerolog v1.26.1 h1:/ihwxqH+4z8UxyI70wM1z9yCvkWcfz/a3mj48k/Zngc=
github.com/rs/zerolog v1.26.1/go.mod h1:/wSSJWX7lVrsOwlbyTRSOJvqRlc+WjWlfes+CiJ+tmc=
github.com/russross/blackfriday/v2 v2.0.1/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/russross/blackfriday/v2 v2.1.0 h1:JIOH55/0cWyOuilr9/qlrm0BSXldqnqwMsf35Ld67mk=
github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/shurcooL/sanitized_anchor_name v1.0.0/go.mod h1:1NzhyTcUVG4SuEtjjoZeVRXNmyL/1OwPU0+IJeTBvfc=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
@ -187,21 +177,23 @@ github.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/
github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.8.4 h1:CcVxjf3Q8PM0mHUKJCdn+eZZtm5yQwehR5yeSVQQcUk=
github.com/stretchr/testify v1.8.4/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo=
github.com/tc-hib/go-winres v0.3.1 h1:9r67V7Ep34yyx8SL716BzcKePRvEBOjan47SmMnxEdE=
github.com/tc-hib/go-winres v0.3.1/go.mod h1:lTPf0MW3eu6rmvMyLrPXSy6xsSz4t5dRxB7dc5YFP6k=
github.com/tc-hib/winres v0.1.6 h1:qgsYHze+BxQPEYilxIz/KCQGaClvI2+yLBAZs+3+0B8=
github.com/tc-hib/winres v0.1.6/go.mod h1:pe6dOR40VOrGz8PkzreVKNvEKnlE8t4yR8A8naL+t7A=
github.com/tc-hib/go-winres v0.3.3 h1:DQ50qlvDVhqrDOY0svTxZFZWfKWtZtfnXXHPCw6lqh0=
github.com/tc-hib/go-winres v0.3.3/go.mod h1:5NGzOtuvjSqnpIEi2o1h48MKZzP9olvrf+PeY2t1uoA=
github.com/tc-hib/winres v0.2.1 h1:YDE0FiP0VmtRaDn7+aaChp1KiF4owBiJa5l964l5ujA=
github.com/tc-hib/winres v0.2.1/go.mod h1:C/JaNhH3KBvhNKVbvdlDWkbMDO9H4fKKDaN7/07SSuk=
github.com/ugorji/go v1.1.7/go.mod h1:kZn38zHttfInRq0xu/PH0az30d+z6vm202qpg1oXVMw=
github.com/ugorji/go v1.2.6/go.mod h1:anCg0y61KIhDlPZmnH+so+RQbysYVyDko0IMgJv0Nn0=
github.com/ugorji/go/codec v1.1.7/go.mod h1:Ax+UKWsSmolVDwsd+7N3ZtXu+yMGCf907BLYF3GoBXY=
github.com/ugorji/go/codec v1.2.6/go.mod h1:V6TCNZ4PHqoHGFZuSG1W8nrCzzdgA2DozYxWFFpvxTw=
github.com/urfave/cli/v2 v2.3.0 h1:qph92Y649prgesehzOrQjdWyxFOp/QVM+6imKHad91M=
github.com/urfave/cli/v2 v2.3.0/go.mod h1:LJmUH05zAU44vOAcrfzZQKsZbVcdbOG8rtL3/XcUArI=
github.com/urfave/cli/v2 v2.25.7 h1:VAzn5oq403l5pHjc4OhD54+XGO9cdKVL/7lDjF+iKUs=
github.com/urfave/cli/v2 v2.25.7/go.mod h1:8qnjx1vcq5s2/wpsqoZFndg2CE5tNFyrTvS6SinrnYQ=
github.com/valyala/bytebufferpool v1.0.0 h1:GqA5TC/0021Y/b9FG4Oi9Mr3q7XYx6KllzawFIhcdPw=
github.com/valyala/bytebufferpool v1.0.0/go.mod h1:6bBcMArwyJ5K/AmCkWv1jt77kVWyCJ6HpOuEn7z0Csc=
github.com/valyala/fasttemplate v1.0.1/go.mod h1:UQGH1tvbgY+Nz5t2n7tXsz52dQxojPUpymEIMZ47gx8=
github.com/valyala/fasttemplate v1.2.1 h1:TVEnxayobAdVkhQfrfes2IzOB6o+z4roRkPF52WA1u4=
github.com/valyala/fasttemplate v1.2.1/go.mod h1:KHLXt3tVN2HBp8eijSv/kGJopbvo7S+qRAEEKiv+SiQ=
github.com/xrash/smetrics v0.0.0-20201216005158-039620a65673 h1:bAn7/zixMGCfxrRTfdpNzjtPYqr8smhKouy9mxVdGPU=
github.com/xrash/smetrics v0.0.0-20201216005158-039620a65673/go.mod h1:N3UwUGtsrSj3ccvlPHLoLsHnpR27oXr4ZE984MbSER8=
github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.3.5/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k=
github.com/yuin/goldmark v1.4.0/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k=
@ -221,22 +213,19 @@ golang.org/x/crypto v0.0.0-20210711020723-a769d52b0f97/go.mod h1:GvvjBRRGRdwPK5y
golang.org/x/crypto v0.0.0-20210817164053-32db794688a5/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/crypto v0.0.0-20211215165025-cf75a172585e/go.mod h1:P+XmwS30IXTQdn5tA2iutPOUgjI07+tq3H3K9MVA1s8=
golang.org/x/crypto v0.18.0 h1:PGVlW0xEltQnzFZ55hkuX5+KLyrMYhHld1YHO4AKcdc=
golang.org/x/crypto v0.18.0/go.mod h1:R0j02AL6hcrfOiy9T4ZYp/rcWeMxM3L6QYxlOuEG1mg=
golang.org/x/exp/typeparams v0.0.0-20221208152030-732eee02a75a h1:Jw5wfR+h9mnIYH+OtGT2im5wV1YGGDora5vTv/aa5bE=
golang.org/x/exp/typeparams v0.0.0-20221208152030-732eee02a75a/go.mod h1:AbB0pIl9nAr9wVwH+Z2ZpaocVmF5I4GyWCDIsVjR0bk=
golang.org/x/crypto v0.25.0 h1:ypSNr+bnYL2YhwoMt2zPxHFmbAN1KZs/njMG3hxUp30=
golang.org/x/crypto v0.25.0/go.mod h1:T+wALwcMOSE0kXgUAnPAHqTLW+XHgcELELW8VaDgm/M=
golang.org/x/exp/typeparams v0.0.0-20231108232855-2478ac86f678 h1:1P7xPZEwZMoBoz0Yze5Nx2/4pxj6nw9ZqHWXqP0iRgQ=
golang.org/x/exp/typeparams v0.0.0-20231108232855-2478ac86f678/go.mod h1:AbB0pIl9nAr9wVwH+Z2ZpaocVmF5I4GyWCDIsVjR0bk=
golang.org/x/image v0.0.0-20191009234506-e7c1f5e7dbb8/go.mod h1:FeLwcggjj3mMvU+oOTbSwawSJRM1uh48EjtB4UJZlP0=
golang.org/x/image v0.0.0-20201208152932-35266b937fa6/go.mod h1:FeLwcggjj3mMvU+oOTbSwawSJRM1uh48EjtB4UJZlP0=
golang.org/x/image v0.0.0-20210220032944-ac19c3e999fb/go.mod h1:FeLwcggjj3mMvU+oOTbSwawSJRM1uh48EjtB4UJZlP0=
golang.org/x/image v0.10.0 h1:gXjUUtwtx5yOE0VKWq1CH4IJAClq4UGgUA3i+rpON9M=
golang.org/x/image v0.10.0/go.mod h1:jtrku+n79PfroUbvDdeUWMAI+heR786BofxrbiSF+J0=
golang.org/x/image v0.18.0 h1:jGzIakQa/ZXI1I0Fxvaa9W7yP25TqT6cHIHn+6CqvSQ=
golang.org/x/image v0.18.0/go.mod h1:4yyo5vMFQjVjUcVk4jEQcU9MGy/rulF5WvUILseCM2E=
golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.4.1/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.4.2/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4=
golang.org/x/mod v0.8.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
golang.org/x/mod v0.14.0 h1:dGoOF9QVLYng8IHTm7BAyWqCqSheQ5pYWGhzW00YJr0=
golang.org/x/mod v0.14.0/go.mod h1:hTbmBsO62+eylJbnUtE2MGJUyE7QWk4xUqPFrRgJ+7c=
golang.org/x/mod v0.19.0 h1:fEdghXQSo20giMthA7cd28ZC+jts4amQ3YMXiP5oMQ8=
golang.org/x/mod v0.19.0/go.mod h1:hTbmBsO62+eylJbnUtE2MGJUyE7QWk4xUqPFrRgJ+7c=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200822124328-c89045814202/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
@ -246,17 +235,15 @@ golang.org/x/net v0.0.0-20210405180319-a5a99cb37ef4/go.mod h1:p54w0d4576C0XHj96b
golang.org/x/net v0.0.0-20210805182204-aaa1db679c0d/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20210913180222-943fd674d43e/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
golang.org/x/net v0.6.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs=
golang.org/x/net v0.20.0 h1:aCL9BSgETF1k+blQaYUBx9hJ9LOGP3gAVemcZlf1Kpo=
golang.org/x/net v0.20.0/go.mod h1:z8BVo6PvndSri0LbOE3hAn0apkU+1YvI6E70E9jsnvY=
golang.org/x/net v0.27.0 h1:5K3Njcw06/l2y9vpGCSdcxWOYHOUk3dVNGDXN+FvAys=
golang.org/x/net v0.27.0/go.mod h1:dDi0PyhWNoiUOrAS8uXv/vnScO4wnHQO4mj9fn/RytE=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20200625203802-6e8e738ad208/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.6.0 h1:5BMeUDZ7vkXGfEr1x9B4bRcTH4lpkTkpdh0T/J+qjbQ=
golang.org/x/sync v0.6.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sync v0.7.0 h1:YsImfSBoP9QPYL0xyKJPq0gcaJdG3rInoqxTWbfQu9M=
golang.org/x/sync v0.7.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190222072716-a9d3bda3a223/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
@ -284,24 +271,22 @@ golang.org/x/sys v0.0.0-20211103235746-7861aae1554b/go.mod h1:oPkhp1MJrh7nUepCBc
golang.org/x/sys v0.0.0-20220310020820-b874c991c1a5/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.16.0 h1:xWw16ngr6ZMtmxDyKyIgsE93KNKz5HKmMa3b8ALHidU=
golang.org/x/sys v0.16.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.22.0 h1:RI27ohtqKCnwULzJLqkv897zojh5/DwS/ENaMzUOaWI=
golang.org/x/sys v0.22.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/telemetry v0.0.0-20240522233618-39ace7a40ae7 h1:FemxDzfMUcK2f3YY4H+05K9CDzbSVr2+q/JKN45pey0=
golang.org/x/telemetry v0.0.0-20240522233618-39ace7a40ae7/go.mod h1:pRgIJT+bRLFKnoM1ldnzKoxTIn14Yxz928LQRYYgIN0=
golang.org/x/term v0.0.0-20201117132131-f5c789dd3221/go.mod h1:Nr5EML6q2oocZ2LXRh80K7BxOlk5/8JxuGnuhpl+muw=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
golang.org/x/text v0.3.8/go.mod h1:E6s5w1FMmriuDzIBO73fBruAKo1PCIq6d2Q6DHfQ8WQ=
golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
golang.org/x/text v0.11.0/go.mod h1:TvPlkZtksWOMsz7fbANvkp4WM8x/WCo/om8BMLbz+aE=
golang.org/x/text v0.14.0 h1:ScX5w1eTa3QqT8oi6+ziP7dTV1S2+ALU0bI+0zXKWiQ=
golang.org/x/text v0.14.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU=
golang.org/x/text v0.16.0 h1:a94ExnEXNtEwYLGJSIUxnWoxoRz/ZcCsV63ROupILh4=
golang.org/x/text v0.16.0/go.mod h1:GhwF1Be+LQoKShO3cGOHzqOgRrGaYc9AvblQOmPVHnI=
golang.org/x/time v0.0.0-20201208040808-7e3f01d25324/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20210220033141-f8bda1e9f3ba h1:O8mE0/t419eoIwhTFpKVkHiTs/Igowgfkj25AcZrtiE=
golang.org/x/time v0.0.0-20210220033141-f8bda1e9f3ba/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
@ -312,11 +297,10 @@ golang.org/x/tools v0.0.0-20210114065538-d78b04bdf963/go.mod h1:emZCQorbCU4vsT4f
golang.org/x/tools v0.1.1/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
golang.org/x/tools v0.1.7/go.mod h1:LGqMHiF4EqQNHR1JncWGqT5BVaXmza+X+BDGol+dOxo=
golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc=
golang.org/x/tools v0.6.0/go.mod h1:Xwgl3UAJ/d3gWutnCtw505GrjyAbvKui8lOU390QaIU=
golang.org/x/tools v0.17.0 h1:FvmRgNOcs3kOa+T20R1uhfP9F6HgG2mfxDv1vrx1Htc=
golang.org/x/tools v0.17.0/go.mod h1:xsh6VxdV005rRVaS6SSAf9oiAqljS7UZUacMZ8Bnsps=
golang.org/x/vuln v1.0.4 h1:SP0mPeg2PmGCu03V+61EcQiOjmpri2XijexKdzv8Z1I=
golang.org/x/vuln v1.0.4/go.mod h1:NbJdUQhX8jY++FtuhrXs2Eyx0yePo9pF7nPlIjo9aaQ=
golang.org/x/tools v0.23.0 h1:SGsXPZ+2l4JsgaCKkx+FQ9YZ5XEtA1GZYuoDjenLjvg=
golang.org/x/tools v0.23.0/go.mod h1:pnu6ufv6vQkll6szChhK3C3L/ruaIv5eBeztNG8wtsI=
golang.org/x/vuln v1.1.3 h1:NPGnvPOTgnjBc9HTaUx+nj+EaUYxl5SJOWqaDYGaFYw=
golang.org/x/vuln v1.1.3/go.mod h1:7Le6Fadm5FOqE9C926BCD0g12NWyhg7cxV4BwcPFuNY=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
@ -330,7 +314,6 @@ gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntN
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=
gopkg.in/errgo.v2 v2.1.0/go.mod h1:hNsd1EY+bozCKY1Ytp96fpM3vjJbqLJn88ws8XvfDNI=
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.3/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.3.0/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY=
@ -339,16 +322,18 @@ gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C
gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gorm.io/gorm v1.25.5 h1:zR9lOiiYf09VNh5Q1gphfyia1JpiClIWG9hQaxB/mls=
gorm.io/gorm v1.25.5/go.mod h1:hbnx/Oo0ChWMn1BIhpy1oYozzpM15i4YPuHDmfYtwg8=
honnef.co/go/tools v0.4.2 h1:6qXr+R5w+ktL5UkwEbPp+fEvfyoMPche6GkOpGHZcLc=
honnef.co/go/tools v0.4.2/go.mod h1:36ZgoUOrqOk1GxwHhyryEkq8FQWkUO2xGuSMhUCcdvA=
honnef.co/go/tools v0.5.1 h1:4bH5o3b5ZULQ4UrBmP+63W9r7qIkqJClEA9ko5YKx+I=
honnef.co/go/tools v0.5.1/go.mod h1:e9irvo83WDG9/irijV44wr3tbhcFeRnfpVlRqVwpzMs=
lukechampine.com/uint128 v1.3.0 h1:cDdUVfRwDUDovz610ABgFD17nXD4/uDgVHl2sC3+sbo=
lukechampine.com/uint128 v1.3.0/go.mod h1:c4eWIwlEGaxC/+H1VguhU4PHXNWDCDMUlWdIWl2j1gk=
modernc.org/cc/v3 v3.41.0 h1:QoR1Sn3YWlmA1T4vLaKZfawdVtSiGx8H+cEojbC7v1Q=
modernc.org/cc/v3 v3.41.0/go.mod h1:Ni4zjJYJ04CDOhG7dn640WGfwBzfE0ecX8TyMB0Fv0Y=
modernc.org/ccgo/v3 v3.16.15 h1:KbDR3ZAVU+wiLyMESPtbtE/Add4elztFyfsWoNTgxS0=
modernc.org/ccgo/v3 v3.16.15/go.mod h1:yT7B+/E2m43tmMOT51GMoM98/MtHIcQQSleGnddkUNI=
modernc.org/ccorpus v1.11.6 h1:J16RXiiqiCgua6+ZvQot4yUuUy8zxgqbqEEUuGPlISk=
modernc.org/ccorpus v1.11.6/go.mod h1:2gEUTrWqdpH2pXsmTM1ZkjeSrUWDpjMu2T6m29L/ErQ=
modernc.org/httpfs v1.0.6 h1:AAgIpFZRXuYnkjftxTAZwMIiwEqAfk8aVB2/oA6nAeM=
modernc.org/httpfs v1.0.6/go.mod h1:7dosgurJGp0sPaRanU53W4xZYKh14wfzX420oZADeHM=
modernc.org/libc v1.37.6 h1:orZH3c5wmhIQFTXF+Nt+eeauyd+ZIt2BX6ARe+kD+aw=
modernc.org/libc v1.37.6/go.mod h1:YAXkAZ8ktnkCKaN9sw/UDeUVkGYJ/YquGO4FTi5nmHE=
modernc.org/mathutil v1.6.0 h1:fRe9+AmYlaej+64JsEEhoWuAYBkOtQiMEU7n/XgfYi4=
@ -361,5 +346,9 @@ modernc.org/sqlite v1.28.0 h1:Zx+LyDDmXczNnEQdvPuEfcFVA2ZPyaD7UCZDjef3BHQ=
modernc.org/sqlite v1.28.0/go.mod h1:Qxpazz0zH8Z1xCFyi5GSL3FzbtZ3fvbjmywNogldEW0=
modernc.org/strutil v1.2.0 h1:agBi9dp1I+eOnxXeiZawM8F4LawKv4NzGWSaLfyeNZA=
modernc.org/strutil v1.2.0/go.mod h1:/mdcBmfOibveCTBxUl5B5l6W+TTH1FXPLHZE6bTosX0=
modernc.org/tcl v1.15.2 h1:C4ybAYCGJw968e+Me18oW55kD/FexcHbqH2xak1ROSY=
modernc.org/tcl v1.15.2/go.mod h1:3+k/ZaEbKrC8ePv8zJWPtBSW0V7Gg9g8rkmhI1Kfs3c=
modernc.org/token v1.1.0 h1:Xl7Ap9dKaEs5kLoOQeQmPWevfnk/DM5qcLcYlA8ys6Y=
modernc.org/token v1.1.0/go.mod h1:UGzOrNV1mAFSEB63lOFHIpNRUVMvYTc6yu1SMY/XTDM=
modernc.org/z v1.7.3 h1:zDJf6iHjrnB+WRD88stbXokugjyc0/pB91ri1gO6LZY=
modernc.org/z v1.7.3/go.mod h1:Ipv4tsdxZRbQyLq9Q1M6gdbkxYzdlrciF2Hi/lS7nWE=

View File

@ -12,6 +12,7 @@ import (
"time"
"github.com/labstack/echo/v4"
"github.com/rs/zerolog"
"projects.blender.org/studio/flamenco/pkg/api"
)
@ -28,6 +29,7 @@ type Flamenco struct {
localStorage LocalStorage
sleepScheduler WorkerSleepScheduler
jobDeleter JobDeleter
farmstatus FarmStatusService
// The task scheduler can be locked to prevent multiple Workers from getting
// the same task. It is also used for certain other queries, like
@ -55,6 +57,7 @@ func NewFlamenco(
localStorage LocalStorage,
wss WorkerSleepScheduler,
jd JobDeleter,
farmstatus FarmStatusService,
) *Flamenco {
return &Flamenco{
jobCompiler: jc,
@ -69,6 +72,7 @@ func NewFlamenco(
localStorage: localStorage,
sleepScheduler: wss,
jobDeleter: jd,
farmstatus: farmstatus,
done: make(chan struct{}),
}
@ -131,3 +135,12 @@ func sendAPIErrorDBBusy(e echo.Context, message string, args ...interface{}) err
e.Response().Header().Set("Retry-After", strconv.FormatInt(seconds, 10))
return e.JSON(code, apiErr)
}
// handleConnectionClosed logs a message and sends a "418 I'm a teapot" response
// to the HTTP client. The response is unlikely to be seen, as the connection was
// closed. But just in case this function was called by mistake, it's a response
// code that is unlikely to be accepted by the client.
func handleConnectionClosed(e echo.Context, logger zerolog.Logger, logMessage string) error {
logger.Debug().Msg(logMessage)
return e.NoContent(http.StatusTeapot)
}
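A sketch of the intended call pattern (hypothetical handler body, not part of this commit; it mirrors the context.Canceled handling in the job-query handlers below):

dbJobs, err := f.persist.FetchJobs(e.Request().Context())
switch {
case errors.Is(err, context.Canceled):
	// The client already went away; log quietly and bail out.
	return handleConnectionClosed(e, logger, "connection closed while fetching jobs")
case err != nil:
	logger.Warn().Err(err).Msg("error fetching jobs")
	return sendAPIError(e, http.StatusInternalServerError, "error fetching jobs: %v", err)
}
// ...otherwise continue and serialize dbJobs as usual.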

View File

@ -15,6 +15,7 @@ import (
"projects.blender.org/studio/flamenco/internal/manager/config"
"projects.blender.org/studio/flamenco/internal/manager/eventbus"
"projects.blender.org/studio/flamenco/internal/manager/farmstatus"
"projects.blender.org/studio/flamenco/internal/manager/job_compilers"
"projects.blender.org/studio/flamenco/internal/manager/job_deleter"
"projects.blender.org/studio/flamenco/internal/manager/last_rendered"
@ -26,17 +27,20 @@ import (
)
// Generate mock implementations of these interfaces.
//go:generate go run github.com/golang/mock/mockgen -destination mocks/api_impl_mock.gen.go -package mocks projects.blender.org/studio/flamenco/internal/manager/api_impl PersistenceService,ChangeBroadcaster,JobCompiler,LogStorage,ConfigService,TaskStateMachine,Shaman,LastRendered,LocalStorage,WorkerSleepScheduler,JobDeleter
//go:generate go run github.com/golang/mock/mockgen -destination mocks/api_impl_mock.gen.go -package mocks projects.blender.org/studio/flamenco/internal/manager/api_impl PersistenceService,ChangeBroadcaster,JobCompiler,LogStorage,ConfigService,TaskStateMachine,Shaman,LastRendered,LocalStorage,WorkerSleepScheduler,JobDeleter,FarmStatusService
type PersistenceService interface {
StoreAuthoredJob(ctx context.Context, authoredJob job_compilers.AuthoredJob) error
// FetchJob fetches a single job, without fetching its tasks.
FetchJob(ctx context.Context, jobID string) (*persistence.Job, error)
FetchJobs(ctx context.Context) ([]*persistence.Job, error)
SaveJobPriority(ctx context.Context, job *persistence.Job) error
// FetchTask fetches the given task and the accompanying job.
FetchTask(ctx context.Context, taskID string) (*persistence.Task, error)
// FetchTaskJobUUID fetches the UUID of the job this task belongs to.
FetchTaskJobUUID(ctx context.Context, taskID string) (string, error)
FetchTaskFailureList(context.Context, *persistence.Task) ([]*persistence.Worker, error)
SaveTask(ctx context.Context, task *persistence.Task) error
SaveTaskActivity(ctx context.Context, t *persistence.Task) error
// TaskTouchedByWorker marks the task as 'touched' by a worker. This is used for timeout detection.
TaskTouchedByWorker(context.Context, *persistence.Task) error
@ -79,7 +83,6 @@ type PersistenceService interface {
CountTaskFailuresOfWorker(ctx context.Context, job *persistence.Job, worker *persistence.Worker, taskType string) (int, error)
// Database queries.
QueryJobs(ctx context.Context, query api.JobsQuery) ([]*persistence.Job, error)
QueryJobTaskSummaries(ctx context.Context, jobUUID string) ([]*persistence.Task, error)
// SetLastRendered sets this job as the one with the most recent rendered image.
@ -239,3 +242,9 @@ type JobDeleter interface {
}
var _ JobDeleter = (*job_deleter.Service)(nil)
type FarmStatusService interface {
Report() api.FarmStatusReport
}
var _ FarmStatusService = (*farmstatus.Service)(nil)
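The new interface only exposes the cached farm status report. A rough sketch of why it is injected into the API layer (hypothetical handler; the method name and route are assumptions, not part of this commit):

func (f *Flamenco) FetchFarmStatus(e echo.Context) error {
	report := f.farmstatus.Report()
	return e.JSON(http.StatusOK, report)
}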

View File

@ -91,8 +91,12 @@ func (f *Flamenco) SubmitJob(e echo.Context) error {
logger = logger.With().Str("job_id", authoredJob.JobID).Logger()
// TODO: check whether this job should be queued immediately or start paused.
authoredJob.Status = api.JobStatusQueued
submittedJob := api.SubmittedJob(job)
initialStatus := api.JobStatusQueued
if submittedJob.InitialStatus != nil {
initialStatus = *submittedJob.InitialStatus
}
authoredJob.Status = initialStatus
if err := f.persist.StoreAuthoredJob(ctx, *authoredJob); err != nil {
logger.Error().Err(err).Msg("error persisting job in database")
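The effect of the change above is that a submitter can now choose the status a job starts in, instead of always starting queued. A rough client-side sketch (illustrative only; the 'paused' constant name is an assumption, the other field names match the tests further down):

initialStatus := api.JobStatusPaused // assumes a 'paused' job status constant in the generated api package
job := api.SubmittedJob{
	Name:          "example job",
	Type:          "test",
	Priority:      50,
	InitialStatus: &initialStatus,
}
// job can then be sent to the Manager's job submission endpoint as before.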
@ -439,7 +443,7 @@ func (f *Flamenco) FetchTaskLogInfo(e echo.Context, taskID string) error {
return sendAPIError(e, http.StatusBadRequest, "bad task ID")
}
dbTask, err := f.persist.FetchTask(ctx, taskID)
jobUUID, err := f.persist.FetchTaskJobUUID(ctx, taskID)
if err != nil {
if errors.Is(err, persistence.ErrTaskNotFound) {
return sendAPIError(e, http.StatusNotFound, "no such task")
@ -447,9 +451,9 @@ func (f *Flamenco) FetchTaskLogInfo(e echo.Context, taskID string) error {
logger.Error().Err(err).Msg("error fetching task")
return sendAPIError(e, http.StatusInternalServerError, "error fetching task: %v", err)
}
logger = logger.With().Str("job", dbTask.Job.UUID).Logger()
logger = logger.With().Str("job", jobUUID).Logger()
size, err := f.logStorage.TaskLogSize(dbTask.Job.UUID, taskID)
size, err := f.logStorage.TaskLogSize(jobUUID, taskID)
if err != nil {
if errors.Is(err, os.ErrNotExist) {
logger.Debug().Msg("task log unavailable, task has no log on disk")
@ -475,11 +479,11 @@ func (f *Flamenco) FetchTaskLogInfo(e echo.Context, taskID string) error {
taskLogInfo := api.TaskLogInfo{
TaskId: taskID,
JobId: dbTask.Job.UUID,
JobId: jobUUID,
Size: int(size),
}
fullLogPath := f.logStorage.Filepath(dbTask.Job.UUID, taskID)
fullLogPath := f.logStorage.Filepath(jobUUID, taskID)
relPath, err := f.localStorage.RelPath(fullLogPath)
if err != nil {
logger.Error().Err(err).Msg("task log is outside the manager storage, cannot construct its URL for download")
@ -501,7 +505,7 @@ func (f *Flamenco) FetchTaskLogTail(e echo.Context, taskID string) error {
return sendAPIError(e, http.StatusBadRequest, "bad task ID")
}
dbTask, err := f.persist.FetchTask(ctx, taskID)
jobUUID, err := f.persist.FetchTaskJobUUID(ctx, taskID)
if err != nil {
if errors.Is(err, persistence.ErrTaskNotFound) {
return sendAPIError(e, http.StatusNotFound, "no such task")
@ -509,9 +513,9 @@ func (f *Flamenco) FetchTaskLogTail(e echo.Context, taskID string) error {
logger.Error().Err(err).Msg("error fetching task")
return sendAPIError(e, http.StatusInternalServerError, "error fetching task: %v", err)
}
logger = logger.With().Str("job", dbTask.Job.UUID).Logger()
logger = logger.With().Str("job", jobUUID).Logger()
tail, err := f.logStorage.Tail(dbTask.Job.UUID, taskID)
tail, err := f.logStorage.Tail(jobUUID, taskID)
if err != nil {
if errors.Is(err, os.ErrNotExist) {
logger.Debug().Msg("task tail unavailable, task has no log on disk")
@ -700,7 +704,11 @@ func taskDBtoAPI(dbTask *persistence.Task) api.Task {
Status: dbTask.Status,
Activity: dbTask.Activity,
Commands: make([]api.Command, len(dbTask.Commands)),
// TODO: convert this to just store dbTask.WorkerUUID.
Worker: workerToTaskWorker(dbTask.Worker),
JobId: dbTask.JobUUID,
}
if dbTask.Job != nil {

View File

@ -59,20 +59,18 @@ func (f *Flamenco) FetchJob(e echo.Context, jobID string) error {
return e.JSON(http.StatusOK, apiJob)
}
func (f *Flamenco) QueryJobs(e echo.Context) error {
func (f *Flamenco) FetchJobs(e echo.Context) error {
logger := requestLogger(e)
var jobsQuery api.QueryJobsJSONRequestBody
if err := e.Bind(&jobsQuery); err != nil {
logger.Warn().Err(err).Msg("bad request received")
return sendAPIError(e, http.StatusBadRequest, "invalid format")
}
ctx := e.Request().Context()
dbJobs, err := f.persist.QueryJobs(ctx, api.JobsQuery(jobsQuery))
if err != nil {
dbJobs, err := f.persist.FetchJobs(ctx)
switch {
case errors.Is(err, context.Canceled):
logger.Debug().AnErr("cause", err).Msg("could not query for jobs, remote end probably closed the connection")
return sendAPIError(e, http.StatusInternalServerError, "error querying for jobs: %v", err)
case err != nil:
logger.Warn().Err(err).Msg("error querying for jobs")
return sendAPIError(e, http.StatusInternalServerError, "error querying for jobs")
return sendAPIError(e, http.StatusInternalServerError, "error querying for jobs: %v", err)
}
apiJobs := make([]api.Job, len(dbJobs))
@ -97,9 +95,13 @@ func (f *Flamenco) FetchJobTasks(e echo.Context, jobID string) error {
}
tasks, err := f.persist.QueryJobTaskSummaries(ctx, jobID)
if err != nil {
logger.Warn().Err(err).Msg("error querying for jobs")
return sendAPIError(e, http.StatusInternalServerError, "error querying for jobs")
switch {
case errors.Is(err, context.Canceled):
logger.Debug().AnErr("cause", err).Msg("could not fetch job tasks, remote end probably closed connection")
return sendAPIError(e, http.StatusInternalServerError, "error fetching job tasks: %v", err)
case err != nil:
logger.Warn().Err(err).Msg("error fetching job tasks")
return sendAPIError(e, http.StatusInternalServerError, "error fetching job tasks: %v", err)
}
summaries := make([]api.TaskSummary, len(tasks))

View File

@ -8,12 +8,12 @@ import (
"time"
"github.com/golang/mock/gomock"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"projects.blender.org/studio/flamenco/internal/manager/persistence"
"projects.blender.org/studio/flamenco/pkg/api"
)
func TestQueryJobs(t *testing.T) {
func TestFetchJobs(t *testing.T) {
mockCtrl := gomock.NewController(t)
defer mockCtrl.Finish()
@ -48,11 +48,11 @@ func TestQueryJobs(t *testing.T) {
echoCtx := mf.prepareMockedRequest(nil)
ctx := echoCtx.Request().Context()
mf.persistence.EXPECT().QueryJobs(ctx, api.JobsQuery{}).
mf.persistence.EXPECT().FetchJobs(ctx).
Return([]*persistence.Job{&activeJob, &deletionQueuedJob}, nil)
err := mf.flamenco.QueryJobs(echoCtx)
assert.NoError(t, err)
err := mf.flamenco.FetchJobs(echoCtx)
require.NoError(t, err)
expectedJobs := api.JobsQueryResult{
Jobs: []api.Job{
@ -89,6 +89,56 @@ func TestQueryJobs(t *testing.T) {
assertResponseJSON(t, echoCtx, http.StatusOK, expectedJobs)
}
func TestFetchJob(t *testing.T) {
mockCtrl := gomock.NewController(t)
defer mockCtrl.Finish()
mf := newMockedFlamenco(mockCtrl)
dbJob := persistence.Job{
UUID: "afc47568-bd9d-4368-8016-e91d945db36d",
Name: "работа",
JobType: "test",
Priority: 50,
Status: api.JobStatusActive,
Settings: persistence.StringInterfaceMap{
"result": "/render/frames/exploding.kittens",
},
Metadata: persistence.StringStringMap{
"project": "/projects/exploding-kittens",
},
WorkerTag: &persistence.WorkerTag{
UUID: "d86e1b84-5ee2-4784-a178-65963eeb484b",
Name: "Tikkie terug Kees!",
Description: "",
},
}
echoCtx := mf.prepareMockedRequest(nil)
mf.persistence.EXPECT().FetchJob(gomock.Any(), dbJob.UUID).Return(&dbJob, nil)
require.NoError(t, mf.flamenco.FetchJob(echoCtx, dbJob.UUID))
expectedJob := api.Job{
SubmittedJob: api.SubmittedJob{
Name: "работа",
Type: "test",
Priority: 50,
Settings: &api.JobSettings{AdditionalProperties: map[string]interface{}{
"result": "/render/frames/exploding.kittens",
}},
Metadata: &api.JobMetadata{AdditionalProperties: map[string]string{
"project": "/projects/exploding-kittens",
}},
WorkerTag: ptr("d86e1b84-5ee2-4784-a178-65963eeb484b"),
},
Id: "afc47568-bd9d-4368-8016-e91d945db36d",
Status: api.JobStatusActive,
}
assertResponseJSON(t, echoCtx, http.StatusOK, expectedJob)
}
func TestFetchTask(t *testing.T) {
mockCtrl := gomock.NewController(t)
defer mockCtrl.Finish()
@ -160,7 +210,7 @@ func TestFetchTask(t *testing.T) {
Return([]*persistence.Worker{&taskWorker}, nil)
err := mf.flamenco.FetchTask(echoCtx, taskUUID)
assert.NoError(t, err)
require.NoError(t, err)
assertResponseJSON(t, echoCtx, http.StatusOK, expectAPITask)
}

View File

@ -88,7 +88,7 @@ func TestSubmitJobWithoutSettings(t *testing.T) {
echoCtx := mf.prepareMockedJSONRequest(submittedJob)
requestWorkerStore(echoCtx, &worker)
err := mf.flamenco.SubmitJob(echoCtx)
assert.NoError(t, err)
require.NoError(t, err)
}
func TestSubmitJobWithSettings(t *testing.T) {
@ -177,7 +177,7 @@ func TestSubmitJobWithSettings(t *testing.T) {
echoCtx := mf.prepareMockedJSONRequest(submittedJob)
requestWorkerStore(echoCtx, &worker)
err := mf.flamenco.SubmitJob(echoCtx)
assert.NoError(t, err)
require.NoError(t, err)
}
func TestSubmitJobWithEtag(t *testing.T) {
@ -202,7 +202,7 @@ func TestSubmitJobWithEtag(t *testing.T) {
{
echoCtx := mf.prepareMockedJSONRequest(submittedJob)
err := mf.flamenco.SubmitJob(echoCtx)
assert.NoError(t, err)
require.NoError(t, err)
assertResponseAPIError(t, echoCtx,
http.StatusPreconditionFailed, "rejecting job because its settings are outdated, refresh the job type")
}
@ -240,7 +240,7 @@ func TestSubmitJobWithEtag(t *testing.T) {
submittedJob.TypeEtag = ptr("correct etag")
echoCtx := mf.prepareMockedJSONRequest(submittedJob)
err := mf.flamenco.SubmitJob(echoCtx)
assert.NoError(t, err)
require.NoError(t, err)
}
}
@ -318,7 +318,7 @@ func TestSubmitJobWithShamanCheckoutID(t *testing.T) {
echoCtx := mf.prepareMockedJSONRequest(submittedJob)
requestWorkerStore(echoCtx, &worker)
err := mf.flamenco.SubmitJob(echoCtx)
assert.NoError(t, err)
require.NoError(t, err)
}
func TestSubmitJobWithWorkerTag(t *testing.T) {
@ -437,7 +437,33 @@ func TestGetJobTypeHappy(t *testing.T) {
echoCtx := mf.prepareMockedRequest(nil)
err := mf.flamenco.GetJobType(echoCtx, "test-job-type")
assert.NoError(t, err)
require.NoError(t, err)
assertResponseJSON(t, echoCtx, http.StatusOK, jt)
}
func TestGetJobTypeWithDescriptionHappy(t *testing.T) {
mockCtrl := gomock.NewController(t)
defer mockCtrl.Finish()
mf := newMockedFlamenco(mockCtrl)
// Get an existing job type with a description.
description := "This is a test job type"
jt := api.AvailableJobType{
Description: &description,
Etag: "some etag",
Name: "test-job-type",
Label: "Test Job Type",
Settings: []api.AvailableJobSetting{
{Key: "setting", Type: api.AvailableJobSettingTypeString},
},
}
mf.jobCompiler.EXPECT().GetJobType("test-job-type").
Return(jt, nil)
echoCtx := mf.prepareMockedRequest(nil)
err := mf.flamenco.GetJobType(echoCtx, "test-job-type")
require.NoError(t, err)
assertResponseJSON(t, echoCtx, http.StatusOK, jt)
}
@ -453,7 +479,7 @@ func TestGetJobTypeUnknown(t *testing.T) {
echoCtx := mf.prepareMockedRequest(nil)
err := mf.flamenco.GetJobType(echoCtx, "nonexistent-type")
assert.NoError(t, err)
require.NoError(t, err)
assertResponseJSON(t, echoCtx, http.StatusNotFound, api.Error{
Code: http.StatusNotFound,
Message: "no such job type known",
@ -482,7 +508,7 @@ func TestSubmitJobCheckWithEtag(t *testing.T) {
{
echoCtx := mf.prepareMockedJSONRequest(submittedJob)
err := mf.flamenco.SubmitJobCheck(echoCtx)
assert.NoError(t, err)
require.NoError(t, err)
assertResponseAPIError(t, echoCtx,
http.StatusPreconditionFailed, "rejecting job because its settings are outdated, refresh the job type")
}
@ -502,7 +528,7 @@ func TestSubmitJobCheckWithEtag(t *testing.T) {
submittedJob.TypeEtag = ptr("correct etag")
echoCtx := mf.prepareMockedJSONRequest(submittedJob)
err := mf.flamenco.SubmitJobCheck(echoCtx)
assert.NoError(t, err)
require.NoError(t, err)
}
}
@ -516,7 +542,7 @@ func TestGetJobTypeError(t *testing.T) {
Return(api.AvailableJobType{}, errors.New("didn't expect this"))
echoCtx := mf.prepareMockedRequest(nil)
err := mf.flamenco.GetJobType(echoCtx, "error")
assert.NoError(t, err)
require.NoError(t, err)
assertResponseAPIError(t, echoCtx, http.StatusInternalServerError, "error getting job type")
}
@ -537,7 +563,7 @@ func TestSetJobStatus_nonexistentJob(t *testing.T) {
// Do the call.
echoCtx := mf.prepareMockedJSONRequest(statusUpdate)
err := mf.flamenco.SetJobStatus(echoCtx, jobID)
assert.NoError(t, err)
require.NoError(t, err)
assertResponseAPIError(t, echoCtx, http.StatusNotFound, "no such job")
}
@ -571,7 +597,7 @@ func TestSetJobStatus_happy(t *testing.T) {
// Do the call.
echoCtx := mf.prepareMockedJSONRequest(statusUpdate)
err := mf.flamenco.SetJobStatus(echoCtx, jobID)
assert.NoError(t, err)
require.NoError(t, err)
assertResponseNoContent(t, echoCtx)
}
@ -592,7 +618,7 @@ func TestSetJobPrio_nonexistentJob(t *testing.T) {
// Do the call.
echoCtx := mf.prepareMockedJSONRequest(prioUpdate)
err := mf.flamenco.SetJobStatus(echoCtx, jobID)
assert.NoError(t, err)
require.NoError(t, err)
assertResponseAPIError(t, echoCtx, http.StatusNotFound, "no such job")
}
@ -634,7 +660,7 @@ func TestSetJobPrio(t *testing.T) {
mf.broadcaster.EXPECT().BroadcastJobUpdate(expectUpdate)
err := mf.flamenco.SetJobPriority(echoCtx, jobID)
assert.NoError(t, err)
require.NoError(t, err)
assertResponseNoContent(t, echoCtx)
}
@ -668,7 +694,7 @@ func TestSetJobStatusFailedToRequeueing(t *testing.T) {
// Do the call.
err := mf.flamenco.SetJobStatus(echoCtx, jobID)
assert.NoError(t, err)
require.NoError(t, err)
assertResponseNoContent(t, echoCtx)
}
@ -714,7 +740,7 @@ func TestSetTaskStatusQueued(t *testing.T) {
// Do the call.
err := mf.flamenco.SetTaskStatus(echoCtx, taskID)
assert.NoError(t, err)
require.NoError(t, err)
assertResponseNoContent(t, echoCtx)
}
@ -727,38 +753,26 @@ func TestFetchTaskLogTail(t *testing.T) {
jobID := "18a9b096-d77e-438c-9be2-74397038298b"
taskID := "2e020eee-20f8-4e95-8dcf-65f7dfc3ebab"
dbJob := persistence.Job{
UUID: jobID,
Name: "test job",
Status: api.JobStatusActive,
Settings: persistence.StringInterfaceMap{},
Metadata: persistence.StringStringMap{},
}
dbTask := persistence.Task{
UUID: taskID,
Job: &dbJob,
Name: "test task",
}
// The task can be found, but has no on-disk task log.
// This should not cause any error, but instead be returned as "no content".
mf.persistence.EXPECT().FetchTask(gomock.Any(), taskID).Return(&dbTask, nil)
mf.persistence.EXPECT().FetchTaskJobUUID(gomock.Any(), taskID).Return(jobID, nil)
mf.logStorage.EXPECT().Tail(jobID, taskID).
Return("", fmt.Errorf("wrapped error: %w", os.ErrNotExist))
echoCtx := mf.prepareMockedRequest(nil)
err := mf.flamenco.FetchTaskLogTail(echoCtx, taskID)
assert.NoError(t, err)
require.NoError(t, err)
assertResponseNoContent(t, echoCtx)
// Check that a 204 No Content is also returned when the task log file on disk exists, but is empty.
mf.persistence.EXPECT().FetchTask(gomock.Any(), taskID).Return(&dbTask, nil)
mf.persistence.EXPECT().FetchTaskJobUUID(gomock.Any(), taskID).Return(jobID, nil)
mf.logStorage.EXPECT().Tail(jobID, taskID).
Return("", fmt.Errorf("wrapped error: %w", os.ErrNotExist))
echoCtx = mf.prepareMockedRequest(nil)
err = mf.flamenco.FetchTaskLogTail(echoCtx, taskID)
assert.NoError(t, err)
require.NoError(t, err)
assertResponseNoContent(t, echoCtx)
}
@ -770,21 +784,9 @@ func TestFetchTaskLogInfo(t *testing.T) {
jobID := "18a9b096-d77e-438c-9be2-74397038298b"
taskID := "2e020eee-20f8-4e95-8dcf-65f7dfc3ebab"
dbJob := persistence.Job{
UUID: jobID,
Name: "test job",
Status: api.JobStatusActive,
Settings: persistence.StringInterfaceMap{},
Metadata: persistence.StringStringMap{},
}
dbTask := persistence.Task{
UUID: taskID,
Job: &dbJob,
Name: "test task",
}
mf.persistence.EXPECT().
FetchTask(gomock.Any(), taskID).
Return(&dbTask, nil).
FetchTaskJobUUID(gomock.Any(), taskID).
Return(jobID, nil).
AnyTimes()
// The task can be found, but has no on-disk task log.
@ -794,7 +796,7 @@ func TestFetchTaskLogInfo(t *testing.T) {
echoCtx := mf.prepareMockedRequest(nil)
err := mf.flamenco.FetchTaskLogInfo(echoCtx, taskID)
assert.NoError(t, err)
require.NoError(t, err)
assertResponseNoContent(t, echoCtx)
// Check that a 204 No Content is also returned when the task log file on disk exists, but is empty.
@ -803,7 +805,7 @@ func TestFetchTaskLogInfo(t *testing.T) {
echoCtx = mf.prepareMockedRequest(nil)
err = mf.flamenco.FetchTaskLogInfo(echoCtx, taskID)
assert.NoError(t, err)
require.NoError(t, err)
assertResponseNoContent(t, echoCtx)
// Check that otherwise we actually get the info.
@ -813,7 +815,7 @@ func TestFetchTaskLogInfo(t *testing.T) {
echoCtx = mf.prepareMockedRequest(nil)
err = mf.flamenco.FetchTaskLogInfo(echoCtx, taskID)
assert.NoError(t, err)
require.NoError(t, err)
assertResponseJSON(t, echoCtx, http.StatusOK, api.TaskLogInfo{
JobId: jobID,
TaskId: taskID,
@ -842,7 +844,7 @@ func TestFetchJobLastRenderedInfo(t *testing.T) {
echoCtx := mf.prepareMockedRequest(nil)
err := mf.flamenco.FetchJobLastRenderedInfo(echoCtx, jobID)
assert.NoError(t, err)
require.NoError(t, err)
expectBody := api.JobLastRenderedImageInfo{
Base: "/job-files/relative/path",
@ -857,7 +859,7 @@ func TestFetchJobLastRenderedInfo(t *testing.T) {
echoCtx := mf.prepareMockedRequest(nil)
err := mf.flamenco.FetchJobLastRenderedInfo(echoCtx, jobID)
assert.NoError(t, err)
require.NoError(t, err)
assertResponseNoContent(t, echoCtx)
}
}
@ -876,7 +878,7 @@ func TestFetchGlobalLastRenderedInfo(t *testing.T) {
echoCtx := mf.prepareMockedRequest(nil)
err := mf.flamenco.FetchGlobalLastRenderedInfo(echoCtx)
assert.NoError(t, err)
require.NoError(t, err)
assertResponseNoContent(t, echoCtx)
}
@ -893,7 +895,7 @@ func TestFetchGlobalLastRenderedInfo(t *testing.T) {
echoCtx := mf.prepareMockedRequest(nil)
err := mf.flamenco.FetchGlobalLastRenderedInfo(echoCtx)
assert.NoError(t, err)
require.NoError(t, err)
expectBody := api.JobLastRenderedImageInfo{
Base: "/job-files/relative/path",
@ -927,7 +929,7 @@ func TestDeleteJob(t *testing.T) {
// Do the call.
err := mf.flamenco.DeleteJob(echoCtx, jobID)
assert.NoError(t, err)
require.NoError(t, err)
assertResponseNoContent(t, echoCtx)
}

View File

@ -21,6 +21,14 @@ import (
"projects.blender.org/studio/flamenco/pkg/api"
)
var (
ErrSetupConfigUnusableSource = errors.New("sources should not have the 'is_usable' field set to false")
ErrSetupConfigEmptyStorageLocation = errors.New("'storageLocation' field must not be empty")
ErrSetupConfigEmptyPath = errors.New("'path' field must not be empty while using the 'file_association' source")
ErrSetupConfigEmptyPathOrInput = errors.New("'path' or 'input' fields must not be empty while using the 'input_path' or 'path_envvar' sources")
ErrSetupConfigEmptySource = errors.New("'source' field must not be empty")
)
func (f *Flamenco) GetVersion(e echo.Context) error {
return e.JSON(http.StatusOK, api.FlamencoVersion{
Version: appinfo.ExtendedVersion(),
@ -265,11 +273,9 @@ func (f *Flamenco) SaveSetupAssistantConfig(e echo.Context) error {
logger = logger.With().Interface("config", setupAssistantCfg).Logger()
if setupAssistantCfg.StorageLocation == "" ||
!setupAssistantCfg.BlenderExecutable.IsUsable ||
setupAssistantCfg.BlenderExecutable.Path == "" {
logger.Warn().Msg("setup assistant: configuration is incomplete, unable to accept")
return sendAPIError(e, http.StatusBadRequest, "configuration is incomplete")
if err := checkSetupAssistantConfig(setupAssistantCfg); err != nil {
logger.Error().AnErr("cause", err).Msg("setup assistant: configuration is incomplete")
return sendAPIError(e, http.StatusBadRequest, "configuration is incomplete: %v", err)
}
conf := f.config.Get()
@ -277,7 +283,7 @@ func (f *Flamenco) SaveSetupAssistantConfig(e echo.Context) error {
var executable string
switch setupAssistantCfg.BlenderExecutable.Source {
case api.BlenderPathSourceFileAssociation:
case api.BlenderPathSourceFileAssociation, api.BlenderPathSourceDefault:
// The Worker will try to use the file association when the command is set
// to the string "blender".
executable = "blender"
@ -321,6 +327,10 @@ func (f *Flamenco) SaveSetupAssistantConfig(e echo.Context) error {
return e.NoContent(http.StatusNoContent)
}
func (f *Flamenco) GetFarmStatus(e echo.Context) error {
return e.JSON(http.StatusOK, f.farmstatus.Report())
}
func flamencoManagerDir() (string, error) {
exename, err := os.Executable()
if err != nil {
@ -332,3 +342,37 @@ func flamencoManagerDir() (string, error) {
func commandNeedsQuoting(cmd string) bool {
return strings.ContainsAny(cmd, "\n\t;()")
}
func checkSetupAssistantConfig(config api.SetupAssistantConfig) error {
if config.StorageLocation == "" {
return ErrSetupConfigEmptyStorageLocation
}
if !config.BlenderExecutable.IsUsable {
return ErrSetupConfigUnusableSource
}
switch config.BlenderExecutable.Source {
case api.BlenderPathSourceDefault:
return nil
case api.BlenderPathSourceFileAssociation:
if config.BlenderExecutable.Path == "" {
return ErrSetupConfigEmptyPath
}
case api.BlenderPathSourceInputPath, api.BlenderPathSourcePathEnvvar:
if config.BlenderExecutable.Path == "" ||
config.BlenderExecutable.Input == "" {
return ErrSetupConfigEmptyPathOrInput
}
case "":
return ErrSetupConfigEmptySource
default:
return fmt.Errorf("unknown 'source' field value: %v", config.BlenderExecutable.Source)
}
return nil
}
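A small test-style sketch of how the validation above behaves; the paths and values are made up for illustration and are not part of the commit.

func TestCheckSetupAssistantConfig_sketch(t *testing.T) {
	// Hypothetical example values.
	cfg := api.SetupAssistantConfig{
		StorageLocation: "/shared/flamenco",
		BlenderExecutable: api.BlenderPathCheckResult{
			IsUsable: true,
			Source:   api.BlenderPathSourceInputPath,
			Input:    "/opt/blender/blender",
			Path:     "/opt/blender/blender",
		},
	}
	require.NoError(t, checkSetupAssistantConfig(cfg))

	// Clearing the path should trip the matching sentinel error.
	cfg.BlenderExecutable.Path = ""
	assert.ErrorIs(t, checkSetupAssistantConfig(cfg), ErrSetupConfigEmptyPathOrInput)
}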

View File

@ -43,7 +43,7 @@ func TestGetVariables(t *testing.T) {
echoCtx := mf.prepareMockedRequest(nil)
err := mf.flamenco.GetVariables(echoCtx, api.ManagerVariableAudienceWorkers, "linux")
assert.NoError(t, err)
require.NoError(t, err)
assertResponseJSON(t, echoCtx, http.StatusOK, api.ManagerVariables{
AdditionalProperties: map[string]api.ManagerVariable{
"blender": {Value: "/usr/local/blender", IsTwoway: false},
@ -61,7 +61,7 @@ func TestGetVariables(t *testing.T) {
echoCtx := mf.prepareMockedRequest(nil)
err := mf.flamenco.GetVariables(echoCtx, api.ManagerVariableAudienceUsers, "troll")
assert.NoError(t, err)
require.NoError(t, err)
assertResponseJSON(t, echoCtx, http.StatusOK, api.ManagerVariables{})
}
}
@ -208,9 +208,7 @@ func TestCheckSharedStoragePath(t *testing.T) {
echoCtx := mf.prepareMockedJSONRequest(
api.PathCheckInput{Path: path})
err := mf.flamenco.CheckSharedStoragePath(echoCtx)
if !assert.NoError(t, err) {
t.FailNow()
}
require.NoError(t, err)
return echoCtx
}
@ -230,9 +228,8 @@ func TestCheckSharedStoragePath(t *testing.T) {
Cause: "Directory checked successfully",
})
files, err := filepath.Glob(filepath.Join(mf.tempdir, "*"))
if assert.NoError(t, err) {
require.NoError(t, err)
assert.Empty(t, files, "After a query, there should not be any leftovers")
}
// Test inaccessible path.
// For some reason, this doesn't work on Windows, and creating a file in
@ -253,12 +250,9 @@ func TestCheckSharedStoragePath(t *testing.T) {
parentPath := filepath.Join(mf.tempdir, "deep")
testPath := filepath.Join(parentPath, "nesting")
if err := os.Mkdir(parentPath, fs.ModePerm); !assert.NoError(t, err) {
t.FailNow()
}
if err := os.Mkdir(testPath, fs.FileMode(0)); !assert.NoError(t, err) {
t.FailNow()
}
require.NoError(t, os.Mkdir(parentPath, fs.ModePerm))
require.NoError(t, os.Mkdir(testPath, fs.FileMode(0)))
echoCtx := doTest(testPath)
result := api.PathCheckResult{}
getResponseJSON(t, echoCtx, http.StatusOK, &result)
@ -295,9 +289,7 @@ func TestSaveSetupAssistantConfig(t *testing.T) {
// Call the API.
echoCtx := mf.prepareMockedJSONRequest(body)
err := mf.flamenco.SaveSetupAssistantConfig(echoCtx)
if !assert.NoError(t, err) {
t.FailNow()
}
require.NoError(t, err)
assertResponseNoContent(t, echoCtx)
return savedConfig
@ -371,6 +363,27 @@ func TestSaveSetupAssistantConfig(t *testing.T) {
assert.Equal(t, expectBlenderVar, savedConfig.Variables["blender"])
assert.Equal(t, defaultBlenderArgsVar, savedConfig.Variables["blenderArgs"])
}
// Test situation where adding a blender executable was skipped.
{
savedConfig := doTest(api.SetupAssistantConfig{
StorageLocation: mf.tempdir,
BlenderExecutable: api.BlenderPathCheckResult{
IsUsable: true,
Source: api.BlenderPathSourceDefault,
},
})
assert.Equal(t, mf.tempdir, savedConfig.SharedStoragePath)
expectBlenderVar := config.Variable{
Values: config.VariableValues{
{Platform: "linux", Value: "blender"},
{Platform: "windows", Value: "blender"},
{Platform: "darwin", Value: "blender"},
},
}
assert.Equal(t, expectBlenderVar, savedConfig.Variables["blender"])
assert.Equal(t, defaultBlenderArgsVar, savedConfig.Variables["blenderArgs"])
}
}
func metaTestFixtures(t *testing.T) (mockedFlamenco, func()) {
@ -378,9 +391,7 @@ func metaTestFixtures(t *testing.T) (mockedFlamenco, func()) {
mf := newMockedFlamenco(mockCtrl)
tempdir, err := os.MkdirTemp("", "test-temp-dir")
if !assert.NoError(t, err) {
t.FailNow()
}
require.NoError(t, err)
mf.tempdir = tempdir
finish := func() {

View File

@ -1,5 +1,5 @@
// Code generated by MockGen. DO NOT EDIT.
// Source: projects.blender.org/studio/flamenco/internal/manager/api_impl (interfaces: PersistenceService,ChangeBroadcaster,JobCompiler,LogStorage,ConfigService,TaskStateMachine,Shaman,LastRendered,LocalStorage,WorkerSleepScheduler,JobDeleter)
// Source: projects.blender.org/studio/flamenco/internal/manager/api_impl (interfaces: PersistenceService,ChangeBroadcaster,JobCompiler,LogStorage,ConfigService,TaskStateMachine,Shaman,LastRendered,LocalStorage,WorkerSleepScheduler,JobDeleter,FarmStatusService)
// Package mocks is a generated GoMock package.
package mocks
@ -214,6 +214,21 @@ func (mr *MockPersistenceServiceMockRecorder) FetchJobBlocklist(arg0, arg1 inter
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "FetchJobBlocklist", reflect.TypeOf((*MockPersistenceService)(nil).FetchJobBlocklist), arg0, arg1)
}
// FetchJobs mocks base method.
func (m *MockPersistenceService) FetchJobs(arg0 context.Context) ([]*persistence.Job, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "FetchJobs", arg0)
ret0, _ := ret[0].([]*persistence.Job)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// FetchJobs indicates an expected call of FetchJobs.
func (mr *MockPersistenceServiceMockRecorder) FetchJobs(arg0 interface{}) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "FetchJobs", reflect.TypeOf((*MockPersistenceService)(nil).FetchJobs), arg0)
}
// FetchTask mocks base method.
func (m *MockPersistenceService) FetchTask(arg0 context.Context, arg1 string) (*persistence.Task, error) {
m.ctrl.T.Helper()
@ -244,6 +259,21 @@ func (mr *MockPersistenceServiceMockRecorder) FetchTaskFailureList(arg0, arg1 in
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "FetchTaskFailureList", reflect.TypeOf((*MockPersistenceService)(nil).FetchTaskFailureList), arg0, arg1)
}
// FetchTaskJobUUID mocks base method.
func (m *MockPersistenceService) FetchTaskJobUUID(arg0 context.Context, arg1 string) (string, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "FetchTaskJobUUID", arg0, arg1)
ret0, _ := ret[0].(string)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// FetchTaskJobUUID indicates an expected call of FetchTaskJobUUID.
func (mr *MockPersistenceServiceMockRecorder) FetchTaskJobUUID(arg0, arg1 interface{}) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "FetchTaskJobUUID", reflect.TypeOf((*MockPersistenceService)(nil).FetchTaskJobUUID), arg0, arg1)
}
// FetchWorker mocks base method.
func (m *MockPersistenceService) FetchWorker(arg0 context.Context, arg1 string) (*persistence.Worker, error) {
m.ctrl.T.Helper()
@ -349,21 +379,6 @@ func (mr *MockPersistenceServiceMockRecorder) QueryJobTaskSummaries(arg0, arg1 i
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "QueryJobTaskSummaries", reflect.TypeOf((*MockPersistenceService)(nil).QueryJobTaskSummaries), arg0, arg1)
}
// QueryJobs mocks base method.
func (m *MockPersistenceService) QueryJobs(arg0 context.Context, arg1 api.JobsQuery) ([]*persistence.Job, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "QueryJobs", arg0, arg1)
ret0, _ := ret[0].([]*persistence.Job)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// QueryJobs indicates an expected call of QueryJobs.
func (mr *MockPersistenceServiceMockRecorder) QueryJobs(arg0, arg1 interface{}) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "QueryJobs", reflect.TypeOf((*MockPersistenceService)(nil).QueryJobs), arg0, arg1)
}
// RemoveFromJobBlocklist mocks base method.
func (m *MockPersistenceService) RemoveFromJobBlocklist(arg0 context.Context, arg1, arg2, arg3 string) error {
m.ctrl.T.Helper()
@ -392,20 +407,6 @@ func (mr *MockPersistenceServiceMockRecorder) SaveJobPriority(arg0, arg1 interfa
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "SaveJobPriority", reflect.TypeOf((*MockPersistenceService)(nil).SaveJobPriority), arg0, arg1)
}
// SaveTask mocks base method.
func (m *MockPersistenceService) SaveTask(arg0 context.Context, arg1 *persistence.Task) error {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "SaveTask", arg0, arg1)
ret0, _ := ret[0].(error)
return ret0
}
// SaveTask indicates an expected call of SaveTask.
func (mr *MockPersistenceServiceMockRecorder) SaveTask(arg0, arg1 interface{}) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "SaveTask", reflect.TypeOf((*MockPersistenceService)(nil).SaveTask), arg0, arg1)
}
// SaveTaskActivity mocks base method.
func (m *MockPersistenceService) SaveTaskActivity(arg0 context.Context, arg1 *persistence.Task) error {
m.ctrl.T.Helper()
@ -1413,3 +1414,40 @@ func (mr *MockJobDeleterMockRecorder) WhatWouldBeDeleted(arg0 interface{}) *gomo
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "WhatWouldBeDeleted", reflect.TypeOf((*MockJobDeleter)(nil).WhatWouldBeDeleted), arg0)
}
// MockFarmStatusService is a mock of FarmStatusService interface.
type MockFarmStatusService struct {
ctrl *gomock.Controller
recorder *MockFarmStatusServiceMockRecorder
}
// MockFarmStatusServiceMockRecorder is the mock recorder for MockFarmStatusService.
type MockFarmStatusServiceMockRecorder struct {
mock *MockFarmStatusService
}
// NewMockFarmStatusService creates a new mock instance.
func NewMockFarmStatusService(ctrl *gomock.Controller) *MockFarmStatusService {
mock := &MockFarmStatusService{ctrl: ctrl}
mock.recorder = &MockFarmStatusServiceMockRecorder{mock}
return mock
}
// EXPECT returns an object that allows the caller to indicate expected use.
func (m *MockFarmStatusService) EXPECT() *MockFarmStatusServiceMockRecorder {
return m.recorder
}
// Report mocks base method.
func (m *MockFarmStatusService) Report() api.FarmStatusReport {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "Report")
ret0, _ := ret[0].(api.FarmStatusReport)
return ret0
}
// Report indicates an expected call of Report.
func (mr *MockFarmStatusServiceMockRecorder) Report() *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "Report", reflect.TypeOf((*MockFarmStatusService)(nil).Report))
}
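With the generated mock in place, a handler test can stub the farm status report directly. A minimal sketch using the helpers from this test suite, with a zero-value report purely for illustration:

report := api.FarmStatusReport{}
mf.farmstatus.EXPECT().Report().Return(report)

echoCtx := mf.prepareMockedRequest(nil)
require.NoError(t, mf.flamenco.GetFarmStatus(echoCtx))
assertResponseJSON(t, echoCtx, http.StatusOK, report)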

View File

@ -3,6 +3,7 @@ package api_impl
// SPDX-License-Identifier: GPL-3.0-or-later
import (
"mime"
"net/http"
"github.com/labstack/echo/v4"
@ -110,8 +111,22 @@ func (f *Flamenco) ShamanFileStore(e echo.Context, checksum string, filesize int
canDefer = *params.XShamanCanDeferUpload
logCtx = logCtx.Bool("canDefer", canDefer)
}
if params.XShamanOriginalFilename != nil {
origFilename = *params.XShamanOriginalFilename
rawHeadervalue := *params.XShamanOriginalFilename
decoder := mime.WordDecoder{}
var err error // origFilename has to be used from the outer scope.
origFilename, err = decoder.DecodeHeader(rawHeadervalue)
if err != nil {
logger := logCtx.Logger()
logger.Error().
Str("headerValue", rawHeadervalue).
Err(err).
Msg("shaman: received invalid X-Shaman-Original-Filename header")
return sendAPIError(e, http.StatusBadRequest, "invalid X-Shaman-Original-Filename header: %q", rawHeadervalue)
}
logCtx = logCtx.Str("originalFilename", origFilename)
}
logger := logCtx.Logger()
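The header is decoded with Go's standard mime.WordDecoder, so clients can send non-ASCII filenames as RFC 2047 encoded-words. A small standalone illustration (the filename is made up):

package main

import (
	"fmt"
	"mime"
)

func main() {
	dec := mime.WordDecoder{}

	// Plain ASCII values pass through unchanged.
	plain, _ := dec.DecodeHeader("render.blend")
	fmt.Println(plain) // render.blend

	// Encoded-words are decoded back to UTF-8; invalid ones return an error,
	// which the handler above maps to a 400 Bad Request.
	decoded, err := dec.DecodeHeader("=?utf-8?q?sc=C3=A8ne_finale.blend?=")
	fmt.Println(decoded, err) // scène finale.blend <nil>
}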

View File

@ -16,6 +16,7 @@ import (
"github.com/golang/mock/gomock"
"github.com/labstack/echo/v4"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"projects.blender.org/studio/flamenco/internal/manager/api_impl/mocks"
"projects.blender.org/studio/flamenco/internal/manager/config"
@ -37,6 +38,7 @@ type mockedFlamenco struct {
localStorage *mocks.MockLocalStorage
sleepScheduler *mocks.MockWorkerSleepScheduler
jobDeleter *mocks.MockJobDeleter
farmstatus *mocks.MockFarmStatusService
// Place for some tests to store a temporary directory.
tempdir string
@ -54,6 +56,7 @@ func newMockedFlamenco(mockCtrl *gomock.Controller) mockedFlamenco {
localStore := mocks.NewMockLocalStorage(mockCtrl)
wss := mocks.NewMockWorkerSleepScheduler(mockCtrl)
jd := mocks.NewMockJobDeleter(mockCtrl)
fs := mocks.NewMockFarmStatusService(mockCtrl)
clock := clock.NewMock()
mockedNow, err := time.Parse(time.RFC3339, "2022-06-09T11:14:41+02:00")
@ -62,7 +65,7 @@ func newMockedFlamenco(mockCtrl *gomock.Controller) mockedFlamenco {
}
clock.Set(mockedNow)
f := NewFlamenco(jc, ps, cb, logStore, cs, sm, sha, clock, lr, localStore, wss, jd)
f := NewFlamenco(jc, ps, cb, logStore, cs, sm, sha, clock, lr, localStore, wss, jd, fs)
return mockedFlamenco{
flamenco: f,
@ -78,6 +81,7 @@ func newMockedFlamenco(mockCtrl *gomock.Controller) mockedFlamenco {
localStorage: localStore,
sleepScheduler: wss,
jobDeleter: jd,
farmstatus: fs,
}
}
@ -179,14 +183,10 @@ func getResponseJSON(t *testing.T, echoCtx echo.Context, expectStatusCode int, a
}
actualJSON, err := io.ReadAll(resp.Body)
if !assert.NoError(t, err) {
t.FailNow()
}
require.NoError(t, err)
err = json.Unmarshal(actualJSON, actualPayloadPtr)
if !assert.NoError(t, err) {
t.FailNow()
}
require.NoError(t, err)
}
// assertResponseJSON asserts that a recorded response is JSON with the given HTTP status code.
@ -201,14 +201,10 @@ func assertResponseJSON(t *testing.T, echoCtx echo.Context, expectStatusCode int
}
expectJSON, err := json.Marshal(expectBody)
if !assert.NoError(t, err) {
t.FailNow()
}
require.NoError(t, err)
actualJSON, err := io.ReadAll(resp.Body)
if !assert.NoError(t, err) {
t.FailNow()
}
require.NoError(t, err)
assert.JSONEq(t, string(expectJSON), string(actualJSON))
}

View File

@ -243,6 +243,85 @@ func TestReplaceTwoWayVariables(t *testing.T) {
}
}
// TestReplaceTwoWayVariablesFFmpegExpression tests that slashes (for division)
// in an FFmpeg filter expression are NOT replaced with backslashes when sending
// to a Windows worker.
func TestReplaceTwoWayVariablesFFmpegExpression(t *testing.T) {
c := config.DefaultConfig(func(c *config.Conf) {
// Mock that the Manager is running Linux.
c.MockCurrentGOOSForTests("linux")
// Trigger a translation of a path in the FFmpeg command arguments.
c.Variables["project"] = config.Variable{
IsTwoWay: true,
Values: []config.VariableValue{
{Value: "/projects/charge", Platform: config.VariablePlatformLinux, Audience: config.VariableAudienceAll},
{Value: `P:\charge`, Platform: config.VariablePlatformWindows, Audience: config.VariableAudienceAll},
},
}
})
task := api.AssignedTask{
Job: "f0bde4d0-eaaf-4ee0-976b-802a86aa2d02",
JobPriority: 50,
JobType: "simple-blender-render",
Name: "preview-video",
Priority: 50,
Status: api.TaskStatusQueued,
TaskType: "ffmpeg",
Uuid: "fd963a82-2e98-4a39-9bd4-c302e5b8814f",
Commands: []api.Command{
{
Name: "frames-to-video",
Parameters: map[string]interface{}{
"exe": "ffmpeg", // Should not change.
"fps": 24, // Should not change type.
"inputGlob": "/projects/charge/renders/*.webp", // Path, should change.
"outputFile": "/projects/charge/renders/video.mp4", // Path, should change.
"args": []string{
"-vf", "pad=ceil(iw/2)*2:ceil(ih/2)*2", // Should not change.
"-fake-lut", `/projects/charge/ffmpeg.lut`, // Path, should change.
},
},
},
},
}
worker := persistence.Worker{Platform: "windows"}
replacedTask := replaceTaskVariables(&c, task, worker)
expectTask := api.AssignedTask{
Job: "f0bde4d0-eaaf-4ee0-976b-802a86aa2d02",
JobPriority: 50,
JobType: "simple-blender-render",
Name: "preview-video",
Priority: 50,
Status: api.TaskStatusQueued,
TaskType: "ffmpeg",
Uuid: "fd963a82-2e98-4a39-9bd4-c302e5b8814f",
Commands: []api.Command{
{
Name: "frames-to-video",
Parameters: map[string]interface{}{
"exe": "ffmpeg",
"fps": 24,
// These two parameters matched a two-way variable:
"inputGlob": `P:\charge\renders\*.webp`,
"outputFile": `P:\charge\renders\video.mp4`,
"args": []string{
// This parameter should not change:
"-vf", "pad=ceil(iw/2)*2:ceil(ih/2)*2",
// This parameter should change:
"-fake-lut", `P:\charge\ffmpeg.lut`,
},
},
},
},
}
assert.Equal(t, expectTask, replacedTask)
}
func varReplSubmittedJob() api.SubmittedJob {
return api.SubmittedJob{
Type: "simple-blender-render",
@ -273,7 +352,7 @@ func varReplSubmittedJob() api.SubmittedJob {
}
// jsonWash converts the given value to JSON and back.
// This makes sure the types are as closed to what the API will handle as
// This makes sure the types are as close to what the API will handle as
// possible, making the difference between "array of strings" and "array of
// interface{}s that happen to be strings".
func jsonWash[T any](value T) T {

View File

@ -3,6 +3,7 @@ package api_impl
// SPDX-License-Identifier: GPL-3.0-or-later
import (
"context"
"errors"
"net/http"
@ -16,7 +17,10 @@ import (
func (f *Flamenco) FetchWorkers(e echo.Context) error {
logger := requestLogger(e)
dbWorkers, err := f.persist.FetchWorkers(e.Request().Context())
if err != nil {
switch {
case errors.Is(err, context.Canceled):
return handleConnectionClosed(e, logger, "fetching all workers")
case err != nil:
logger.Error().Err(err).Msg("fetching all workers")
return sendAPIError(e, http.StatusInternalServerError, "error fetching workers: %v", err)
}
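handleConnectionClosed is not shown in this commit; a plausible shape for it, assuming it simply logs at debug level and ends the request quietly (a sketch, not the actual helper):

func handleConnectionClosed(e echo.Context, logger zerolog.Logger, activity string) error {
	// The client went away while the Manager was still working; this is not a
	// server-side error, so keep it at debug level.
	logger.Debug().Str("activity", activity).Msg("remote end closed the connection")
	return e.NoContent(http.StatusNoContent)
}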
@ -42,18 +46,23 @@ func (f *Flamenco) FetchWorker(e echo.Context, workerUUID string) error {
ctx := e.Request().Context()
dbWorker, err := f.persist.FetchWorker(ctx, workerUUID)
if errors.Is(err, persistence.ErrWorkerNotFound) {
switch {
case errors.Is(err, persistence.ErrWorkerNotFound):
logger.Debug().Msg("non-existent worker requested")
return sendAPIError(e, http.StatusNotFound, "worker %q not found", workerUUID)
}
if err != nil {
case errors.Is(err, context.Canceled):
return handleConnectionClosed(e, logger, "fetching task assigned to worker")
case err != nil:
logger.Error().Err(err).Msg("fetching worker")
return sendAPIError(e, http.StatusInternalServerError, "error fetching worker: %v", err)
}
dbTask, err := f.persist.FetchWorkerTask(ctx, dbWorker)
if err != nil {
logger.Error().Err(err).Msg("fetching task assigned to worker")
switch {
case errors.Is(err, context.Canceled):
return handleConnectionClosed(e, logger, "fetching task assigned to worker")
case err != nil:
logger.Error().AnErr("cause", err).Msg("fetching task assigned to worker")
return sendAPIError(e, http.StatusInternalServerError, "error fetching task assigned to worker: %v", err)
}
@ -86,11 +95,11 @@ func (f *Flamenco) DeleteWorker(e echo.Context, workerUUID string) error {
// Fetch the worker in order to re-queue its tasks.
worker, err := f.persist.FetchWorker(ctx, workerUUID)
if errors.Is(err, persistence.ErrWorkerNotFound) {
switch {
case errors.Is(err, persistence.ErrWorkerNotFound):
logger.Debug().Msg("deletion of non-existent worker requested")
return sendAPIError(e, http.StatusNotFound, "worker %q not found", workerUUID)
}
if err != nil {
case err != nil:
logger.Error().Err(err).Msg("fetching worker for deletion")
return sendAPIError(e, http.StatusInternalServerError,
"error fetching worker for deletion: %v", err)
@ -105,11 +114,11 @@ func (f *Flamenco) DeleteWorker(e echo.Context, workerUUID string) error {
// Actually delete the worker.
err = f.persist.DeleteWorker(ctx, workerUUID)
if errors.Is(err, persistence.ErrWorkerNotFound) {
switch {
case errors.Is(err, persistence.ErrWorkerNotFound):
logger.Debug().Msg("deletion of non-existent worker requested")
return sendAPIError(e, http.StatusNotFound, "worker %q not found", workerUUID)
}
if err != nil {
case err != nil:
logger.Error().Err(err).Msg("deleting worker")
return sendAPIError(e, http.StatusInternalServerError, "error deleting worker: %v", err)
}
@ -145,11 +154,13 @@ func (f *Flamenco) RequestWorkerStatusChange(e echo.Context, workerUUID string)
// Fetch the worker.
dbWorker, err := f.persist.FetchWorker(e.Request().Context(), workerUUID)
if errors.Is(err, persistence.ErrWorkerNotFound) {
switch {
case errors.Is(err, context.Canceled):
return handleConnectionClosed(e, logger, "fetching worker")
case errors.Is(err, persistence.ErrWorkerNotFound):
logger.Debug().Msg("non-existent worker requested")
return sendAPIError(e, http.StatusNotFound, "worker %q not found", workerUUID)
}
if err != nil {
case err != nil:
logger.Error().Err(err).Msg("fetching worker")
return sendAPIError(e, http.StatusInternalServerError, "error fetching worker: %v", err)
}
@ -168,6 +179,11 @@ func (f *Flamenco) RequestWorkerStatusChange(e echo.Context, workerUUID string)
logger.Info().Msg("worker status change requested")
// All information to do the operation is known, so even when the client
// disconnects, the work should be completed.
ctx, ctxCancel := bgContext()
defer ctxCancel()
if dbWorker.Status == change.Status {
// Requesting that the worker should go to its current status basically
// means cancelling any previous status change request.
@ -177,7 +193,7 @@ func (f *Flamenco) RequestWorkerStatusChange(e echo.Context, workerUUID string)
}
// Store the status change.
if err := f.persist.SaveWorker(e.Request().Context(), dbWorker); err != nil {
if err := f.persist.SaveWorker(ctx, dbWorker); err != nil {
logger.Error().Err(err).Msg("saving worker after status change request")
return sendAPIError(e, http.StatusInternalServerError, "error saving worker: %v", err)
}
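bgContext itself is not part of this diff; conceptually it detaches the database write from the HTTP request, so the status change is still persisted if the client disconnects. A hedged sketch, with the timeout value as an assumption:

func bgContext() (context.Context, context.CancelFunc) {
	// Hypothetical timeout; the real helper may use a different limit.
	return context.WithTimeout(context.Background(), 10*time.Second)
}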
@ -221,6 +237,11 @@ func (f *Flamenco) SetWorkerTags(e echo.Context, workerUUID string) error {
Logger()
logger.Info().Msg("worker tag change requested")
// All information to do the operation is known, so even when the client
// disconnects, the work should be completed.
ctx, ctxCancel := bgContext()
defer ctxCancel()
// Store the new tag assignment.
if err := f.persist.WorkerSetTags(ctx, dbWorker, change.TagIds); err != nil {
logger.Error().Err(err).Msg("saving worker after tag change request")
@ -292,8 +313,11 @@ func (f *Flamenco) FetchWorkerTag(e echo.Context, tagUUID string) error {
logger.Error().Err(err).Msg("fetching worker tag")
return sendAPIError(e, http.StatusInternalServerError, "error fetching worker tag: %v", err)
}
if tag == nil {
panic("Could fetch a worker tag without error, but then the returned tag was still nil")
}
return e.JSON(http.StatusOK, workerTagDBtoAPI(*tag))
return e.JSON(http.StatusOK, workerTagDBtoAPI(tag))
}
func (f *Flamenco) UpdateWorkerTag(e echo.Context, tagUUID string) error {
@ -366,8 +390,8 @@ func (f *Flamenco) FetchWorkerTags(e echo.Context) error {
apiTags := []api.WorkerTag{}
for _, dbTag := range dbTags {
apiTag := workerTagDBtoAPI(*dbTag)
apiTags = append(apiTags, apiTag)
apiTag := workerTagDBtoAPI(dbTag)
apiTags = append(apiTags, *apiTag)
}
tagList := api.WorkerTagList{
@ -422,7 +446,7 @@ func (f *Flamenco) CreateWorkerTag(e echo.Context) error {
sioUpdate := eventbus.NewWorkerTagUpdate(&dbTag)
f.broadcaster.BroadcastNewWorkerTag(sioUpdate)
return e.JSON(http.StatusOK, workerTagDBtoAPI(dbTag))
return e.JSON(http.StatusOK, workerTagDBtoAPI(&dbTag))
}
func workerSummary(w persistence.Worker) api.WorkerSummary {
@ -458,7 +482,7 @@ func workerDBtoAPI(w persistence.Worker) api.Worker {
if len(w.Tags) > 0 {
tags := []api.WorkerTag{}
for i := range w.Tags {
tags = append(tags, workerTagDBtoAPI(*w.Tags[i]))
tags = append(tags, *workerTagDBtoAPI(w.Tags[i]))
}
apiWorker.Tags = &tags
}
@ -466,7 +490,11 @@ func workerDBtoAPI(w persistence.Worker) api.Worker {
return apiWorker
}
func workerTagDBtoAPI(wc persistence.WorkerTag) api.WorkerTag {
func workerTagDBtoAPI(wc *persistence.WorkerTag) *api.WorkerTag {
if wc == nil {
return nil
}
uuid := wc.UUID // Take a copy for safety.
apiTag := api.WorkerTag{
@ -476,5 +504,5 @@ func workerTagDBtoAPI(wc persistence.WorkerTag) api.WorkerTag {
if len(wc.Description) > 0 {
apiTag.Description = &wc.Description
}
return apiTag
return &apiTag
}

View File

@ -33,7 +33,7 @@ func TestFetchWorkers(t *testing.T) {
echo := mf.prepareMockedRequest(nil)
err := mf.flamenco.FetchWorkers(echo)
assert.NoError(t, err)
require.NoError(t, err)
// Check the response
workers := api.WorkerList{
@ -74,7 +74,7 @@ func TestFetchWorker(t *testing.T) {
Return(nil, fmt.Errorf("wrapped: %w", persistence.ErrWorkerNotFound))
echo := mf.prepareMockedRequest(nil)
err := mf.flamenco.FetchWorker(echo, workerUUID)
assert.NoError(t, err)
require.NoError(t, err)
assertResponseAPIError(t, echo, http.StatusNotFound, fmt.Sprintf("worker %q not found", workerUUID))
// Test database error fetching worker.
@ -82,7 +82,7 @@ func TestFetchWorker(t *testing.T) {
Return(nil, errors.New("some unknown error"))
echo = mf.prepareMockedRequest(nil)
err = mf.flamenco.FetchWorker(echo, workerUUID)
assert.NoError(t, err)
require.NoError(t, err)
assertResponseAPIError(t, echo, http.StatusInternalServerError, "error fetching worker: some unknown error")
// Test with worker that does NOT have a status change requested, and DOES have an assigned task.
@ -97,7 +97,7 @@ func TestFetchWorker(t *testing.T) {
echo = mf.prepareMockedRequest(nil)
err = mf.flamenco.FetchWorker(echo, workerUUID)
assert.NoError(t, err)
require.NoError(t, err)
assertResponseJSON(t, echo, http.StatusOK, api.Worker{
WorkerSummary: api.WorkerSummary{
Id: workerUUID,
@ -126,7 +126,7 @@ func TestFetchWorker(t *testing.T) {
echo = mf.prepareMockedRequest(nil)
err = mf.flamenco.FetchWorker(echo, worker.UUID)
assert.NoError(t, err)
require.NoError(t, err)
assertResponseJSON(t, echo, http.StatusOK, api.Worker{
WorkerSummary: api.WorkerSummary{
Id: workerUUID,
@ -155,7 +155,7 @@ func TestDeleteWorker(t *testing.T) {
Return(nil, fmt.Errorf("wrapped: %w", persistence.ErrWorkerNotFound))
echo := mf.prepareMockedRequest(nil)
err := mf.flamenco.DeleteWorker(echo, workerUUID)
assert.NoError(t, err)
require.NoError(t, err)
assertResponseAPIError(t, echo, http.StatusNotFound, fmt.Sprintf("worker %q not found", workerUUID))
// Test with existing worker.
@ -176,7 +176,7 @@ func TestDeleteWorker(t *testing.T) {
echo = mf.prepareMockedRequest(nil)
err = mf.flamenco.DeleteWorker(echo, workerUUID)
assert.NoError(t, err)
require.NoError(t, err)
assertResponseNoContent(t, echo)
}
@ -214,7 +214,7 @@ func TestRequestWorkerStatusChange(t *testing.T) {
IsLazy: true,
})
err := mf.flamenco.RequestWorkerStatusChange(echo, workerUUID)
assert.NoError(t, err)
require.NoError(t, err)
assertResponseNoContent(t, echo)
}
@ -258,7 +258,7 @@ func TestRequestWorkerStatusChangeRevert(t *testing.T) {
IsLazy: true,
})
err := mf.flamenco.RequestWorkerStatusChange(echo, workerUUID)
assert.NoError(t, err)
require.NoError(t, err)
assertResponseNoContent(t, echo)
}

View File

@ -8,6 +8,7 @@ import (
"github.com/golang/mock/gomock"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"projects.blender.org/studio/flamenco/internal/manager/config"
"projects.blender.org/studio/flamenco/internal/manager/persistence"
@ -77,7 +78,7 @@ func TestTaskUpdate(t *testing.T) {
err := mf.flamenco.TaskUpdate(echoCtx, taskID)
// Check the saved task.
assert.NoError(t, err)
require.NoError(t, err)
assert.Equal(t, mockTask.UUID, statusChangedtask.UUID)
assert.Equal(t, mockTask.UUID, actUpdatedTask.UUID)
assert.Equal(t, mockTask.UUID, touchedTask.UUID)
@ -148,7 +149,7 @@ func TestTaskUpdateFailed(t *testing.T) {
echoCtx := mf.prepareMockedJSONRequest(taskUpdate)
requestWorkerStore(echoCtx, &worker)
err := mf.flamenco.TaskUpdate(echoCtx, taskID)
assert.NoError(t, err)
require.NoError(t, err)
assertResponseNoContent(t, echoCtx)
}
@ -164,7 +165,7 @@ func TestTaskUpdateFailed(t *testing.T) {
echoCtx := mf.prepareMockedJSONRequest(taskUpdate)
requestWorkerStore(echoCtx, &worker)
err := mf.flamenco.TaskUpdate(echoCtx, taskID)
assert.NoError(t, err)
require.NoError(t, err)
assertResponseNoContent(t, echoCtx)
}
}
@ -248,7 +249,7 @@ func TestBlockingAfterFailure(t *testing.T) {
echoCtx := mf.prepareMockedJSONRequest(taskUpdate)
requestWorkerStore(echoCtx, &worker)
err := mf.flamenco.TaskUpdate(echoCtx, taskID)
assert.NoError(t, err)
require.NoError(t, err)
assertResponseNoContent(t, echoCtx)
}
@ -279,7 +280,7 @@ func TestBlockingAfterFailure(t *testing.T) {
echoCtx := mf.prepareMockedJSONRequest(taskUpdate)
requestWorkerStore(echoCtx, &worker)
err := mf.flamenco.TaskUpdate(echoCtx, taskID)
assert.NoError(t, err)
require.NoError(t, err)
assertResponseNoContent(t, echoCtx)
}
@ -314,7 +315,7 @@ func TestBlockingAfterFailure(t *testing.T) {
echoCtx := mf.prepareMockedJSONRequest(taskUpdate)
requestWorkerStore(echoCtx, &worker)
err := mf.flamenco.TaskUpdate(echoCtx, taskID)
assert.NoError(t, err)
require.NoError(t, err)
assertResponseNoContent(t, echoCtx)
}
}
@ -381,6 +382,6 @@ func TestJobFailureAfterWorkerTaskFailure(t *testing.T) {
echoCtx := mf.prepareMockedJSONRequest(taskUpdate)
requestWorkerStore(echoCtx, &worker)
err := mf.flamenco.TaskUpdate(echoCtx, taskID)
assert.NoError(t, err)
require.NoError(t, err)
assertResponseNoContent(t, echoCtx)
}

View File

@ -12,6 +12,7 @@ import (
"github.com/golang/mock/gomock"
"github.com/labstack/echo/v4"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"projects.blender.org/studio/flamenco/internal/manager/config"
"projects.blender.org/studio/flamenco/internal/manager/last_rendered"
@ -61,7 +62,7 @@ func TestTaskScheduleHappy(t *testing.T) {
mf.broadcaster.EXPECT().BroadcastWorkerUpdate(gomock.Any())
err := mf.flamenco.ScheduleTask(echo)
assert.NoError(t, err)
require.NoError(t, err)
// Check the response
assignedTask := api.AssignedTask{
@ -98,7 +99,7 @@ func TestTaskScheduleNoTaskAvailable(t *testing.T) {
mf.persistence.EXPECT().WorkerSeen(bgCtx, &worker)
err := mf.flamenco.ScheduleTask(echo)
assert.NoError(t, err)
require.NoError(t, err)
assertResponseNoContent(t, echo)
}
@ -119,7 +120,7 @@ func TestTaskScheduleNonActiveStatus(t *testing.T) {
mf.persistence.EXPECT().WorkerSeen(bgCtx, &worker)
err := mf.flamenco.ScheduleTask(echoCtx)
assert.NoError(t, err)
require.NoError(t, err)
resp := getRecordedResponse(echoCtx)
assert.Equal(t, http.StatusConflict, resp.StatusCode)
@ -142,7 +143,7 @@ func TestTaskScheduleOtherStatusRequested(t *testing.T) {
mf.persistence.EXPECT().WorkerSeen(bgCtx, &worker)
err := mf.flamenco.ScheduleTask(echoCtx)
assert.NoError(t, err)
require.NoError(t, err)
expectBody := api.WorkerStateChange{StatusRequested: api.WorkerStatusAsleep}
assertResponseJSON(t, echoCtx, http.StatusLocked, expectBody)
@ -169,7 +170,7 @@ func TestTaskScheduleOtherStatusRequestedAndBadState(t *testing.T) {
mf.persistence.EXPECT().WorkerSeen(bgCtx, &worker)
err := mf.flamenco.ScheduleTask(echoCtx)
assert.NoError(t, err)
require.NoError(t, err)
expectBody := api.WorkerStateChange{StatusRequested: api.WorkerStatusAwake}
assertResponseJSON(t, echoCtx, http.StatusLocked, expectBody)
@ -206,7 +207,7 @@ func TestWorkerSignOn(t *testing.T) {
})
requestWorkerStore(echo, &worker)
err := mf.flamenco.SignOn(echo)
assert.NoError(t, err)
require.NoError(t, err)
assertResponseJSON(t, echo, http.StatusOK, api.WorkerStateChange{
StatusRequested: api.WorkerStatusAsleep,
@ -253,7 +254,7 @@ func TestWorkerSignoffTaskRequeue(t *testing.T) {
})
err := mf.flamenco.SignOff(echo)
assert.NoError(t, err)
require.NoError(t, err)
resp := getRecordedResponse(echo)
assert.Equal(t, http.StatusNoContent, resp.StatusCode)
@ -292,7 +293,7 @@ func TestWorkerRememberPreviousStatus(t *testing.T) {
echo := mf.prepareMockedRequest(nil)
requestWorkerStore(echo, &worker)
err := mf.flamenco.SignOff(echo)
assert.NoError(t, err)
require.NoError(t, err)
assertResponseNoContent(t, echo)
assert.Equal(t, api.WorkerStatusAwake, worker.StatusRequested)
@ -329,7 +330,7 @@ func TestWorkerDontRememberPreviousStatus(t *testing.T) {
echo := mf.prepareMockedRequest(nil)
requestWorkerStore(echo, &worker)
err := mf.flamenco.SignOff(echo)
assert.NoError(t, err)
require.NoError(t, err)
assertResponseNoContent(t, echo)
}
@ -347,10 +348,9 @@ func TestWorkerState(t *testing.T) {
echo := mf.prepareMockedRequest(nil)
requestWorkerStore(echo, &worker)
err := mf.flamenco.WorkerState(echo)
if assert.NoError(t, err) {
require.NoError(t, err)
assertResponseNoContent(t, echo)
}
}
// State change requested.
{
@ -361,12 +361,11 @@ func TestWorkerState(t *testing.T) {
requestWorkerStore(echo, &worker)
err := mf.flamenco.WorkerState(echo)
if assert.NoError(t, err) {
require.NoError(t, err)
assertResponseJSON(t, echo, http.StatusOK, api.WorkerStateChange{
StatusRequested: requestStatus,
})
}
}
}
func TestWorkerStateChanged(t *testing.T) {
@ -402,7 +401,7 @@ func TestWorkerStateChanged(t *testing.T) {
})
requestWorkerStore(echo, &worker)
err := mf.flamenco.WorkerStateChanged(echo)
assert.NoError(t, err)
require.NoError(t, err)
assertResponseNoContent(t, echo)
}
@ -445,7 +444,7 @@ func TestWorkerStateChangedAfterChangeRequest(t *testing.T) {
})
requestWorkerStore(echo, &worker)
err := mf.flamenco.WorkerStateChanged(echo)
assert.NoError(t, err)
require.NoError(t, err)
assertResponseNoContent(t, echo)
}
@ -475,7 +474,7 @@ func TestWorkerStateChangedAfterChangeRequest(t *testing.T) {
})
requestWorkerStore(echo, &worker)
err := mf.flamenco.WorkerStateChanged(echo)
assert.NoError(t, err)
require.NoError(t, err)
assertResponseNoContent(t, echo)
}
}
@ -514,7 +513,7 @@ func TestMayWorkerRun(t *testing.T) {
{
echo := prepareRequest()
err := mf.flamenco.MayWorkerRun(echo, task.UUID)
assert.NoError(t, err)
require.NoError(t, err)
assertResponseJSON(t, echo, http.StatusOK, api.MayKeepRunning{
MayKeepRunning: false,
Reason: "task not assigned to this worker",
@ -529,7 +528,7 @@ func TestMayWorkerRun(t *testing.T) {
echo := prepareRequest()
task.WorkerID = &worker.ID
err := mf.flamenco.MayWorkerRun(echo, task.UUID)
assert.NoError(t, err)
require.NoError(t, err)
assertResponseJSON(t, echo, http.StatusOK, api.MayKeepRunning{
MayKeepRunning: true,
})
@ -541,7 +540,7 @@ func TestMayWorkerRun(t *testing.T) {
task.WorkerID = &worker.ID
task.Status = api.TaskStatusCanceled
err := mf.flamenco.MayWorkerRun(echo, task.UUID)
assert.NoError(t, err)
require.NoError(t, err)
assertResponseJSON(t, echo, http.StatusOK, api.MayKeepRunning{
MayKeepRunning: false,
Reason: "task is in non-runnable status \"canceled\"",
@ -555,7 +554,7 @@ func TestMayWorkerRun(t *testing.T) {
task.WorkerID = &worker.ID
task.Status = api.TaskStatusActive
err := mf.flamenco.MayWorkerRun(echo, task.UUID)
assert.NoError(t, err)
require.NoError(t, err)
assertResponseJSON(t, echo, http.StatusOK, api.MayKeepRunning{
MayKeepRunning: false,
Reason: "worker status change requested",
@ -573,7 +572,7 @@ func TestMayWorkerRun(t *testing.T) {
task.WorkerID = &worker.ID
task.Status = api.TaskStatusActive
err := mf.flamenco.MayWorkerRun(echo, task.UUID)
assert.NoError(t, err)
require.NoError(t, err)
assertResponseJSON(t, echo, http.StatusOK, api.MayKeepRunning{
MayKeepRunning: true,
})
@ -618,7 +617,7 @@ func TestTaskOutputProduced(t *testing.T) {
echo := prepareRequest(nil)
err := mf.flamenco.TaskOutputProduced(echo, task.UUID)
assert.NoError(t, err)
require.NoError(t, err)
assertResponseAPIError(t, echo, http.StatusLengthRequired, "Content-Length header required")
}
@ -633,7 +632,7 @@ func TestTaskOutputProduced(t *testing.T) {
echo := prepareRequest(bytes.NewReader(bodyBytes))
err := mf.flamenco.TaskOutputProduced(echo, task.UUID)
assert.NoError(t, err)
require.NoError(t, err)
assertResponseAPIError(t, echo, http.StatusRequestEntityTooLarge,
"image too large; should be max %v bytes", last_rendered.MaxImageSizeBytes)
}
@ -648,7 +647,7 @@ func TestTaskOutputProduced(t *testing.T) {
mf.lastRender.EXPECT().QueueImage(gomock.Any()).Return(last_rendered.ErrMimeTypeUnsupported)
err := mf.flamenco.TaskOutputProduced(echo, task.UUID)
assert.NoError(t, err)
require.NoError(t, err)
assertResponseAPIError(t, echo, http.StatusUnsupportedMediaType, `unsupported mime type "image/openexr"`)
}
@ -661,7 +660,7 @@ func TestTaskOutputProduced(t *testing.T) {
mf.lastRender.EXPECT().QueueImage(gomock.Any()).Return(last_rendered.ErrQueueFull)
err := mf.flamenco.TaskOutputProduced(echo, task.UUID)
assert.NoError(t, err)
require.NoError(t, err)
assertResponseAPIError(t, echo, http.StatusTooManyRequests, "image processing queue is full")
}
@ -687,7 +686,7 @@ func TestTaskOutputProduced(t *testing.T) {
})
err := mf.flamenco.TaskOutputProduced(echo, task.UUID)
assert.NoError(t, err)
require.NoError(t, err)
assertResponseNoBody(t, echo, http.StatusAccepted)
if assert.NotNil(t, actualPayload) {

View File

@ -517,6 +517,23 @@ func (c *Conf) GetTwoWayVariables(audience VariableAudience, platform VariablePl
return twoWayVars
}
// GetOneWayVariables returns the regular (one-way) variable values for this (audience,
// platform) combination. If no variables are found, just returns an empty map.
// If a value is defined for both the "all" platform and specifically the given
// platform, the specific platform definition wins.
func (c *Conf) GetOneWayVariables(audience VariableAudience, platform VariablePlatform) map[string]string {
varsForPlatform := c.getVariables(audience, platform)
// Only keep the one-way variables.
oneWayVars := map[string]string{}
for varname, value := range varsForPlatform {
if !c.isTwoWay(varname) {
oneWayVars[varname] = value
}
}
return oneWayVars
}
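A quick package-internal sketch of the split between the two getters; the variable names are made up, and only "project" is two-way:

c := DefaultConfig(func(c *Conf) {
	c.Variables["project"] = Variable{
		IsTwoWay: true,
		Values:   []VariableValue{{Value: "/projects/charge", Platform: VariablePlatformLinux}},
	}
	c.Variables["renderer"] = Variable{
		Values: []VariableValue{{Value: "cycles", Platform: VariablePlatformLinux}},
	}
})

oneWay := c.GetOneWayVariables(VariableAudienceWorkers, VariablePlatformLinux)
twoWay := c.GetTwoWayVariables(VariableAudienceWorkers, VariablePlatformLinux)
// oneWay contains "renderer" (plus any default one-way variables) but not
// "project"; twoWay contains only "project".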
// ResolveVariables returns the variables for this (audience, platform) combination.
// If no variables are found, just returns an empty map. If a value is defined
// for both the "all" platform and specifically the given platform, the specific

View File

@ -4,6 +4,7 @@ import (
"runtime"
"time"
"projects.blender.org/studio/flamenco/internal/manager/eventbus"
shaman_config "projects.blender.org/studio/flamenco/pkg/shaman/config"
)
@ -18,9 +19,8 @@ var defaultConfig = Conf{
ManagerName: "Flamenco",
Listen: ":8080",
// ListenHTTPS: ":8433",
DatabaseDSN: "flamenco-manager.sqlite",
DBIntegrityCheck: 1 * time.Hour,
DBIntegrityCheck: 10 * time.Minute,
SSDPDiscovery: true,
LocalManagerStoragePath: "./flamenco-manager-storage",
SharedStoragePath: "", // Empty string means "first run", and should trigger the config setup assistant.
@ -38,25 +38,15 @@ var defaultConfig = Conf{
TaskTimeout: 10 * time.Minute,
WorkerTimeout: 1 * time.Minute,
// // Days are assumed to be 24 hours long. This is not exactly accurate, but should
// // be accurate enough for this type of cleanup.
// TaskCleanupMaxAge: 14 * 24 * time.Hour,
BlocklistThreshold: 3,
TaskFailAfterSoftFailCount: 3,
// WorkerCleanupStatus: []string{string(api.WorkerStatusOffline)},
// TestTasks: TestTasks{
// BlenderRender: BlenderRenderConfig{
// JobStorage: "{job_storage}/test-jobs",
// RenderOutput: "{render}/test-renders",
// },
// },
// JWT: jwtauth.Config{
// DownloadKeysInterval: 1 * time.Hour,
// },
MQTT: MQTTConfig{
Client: eventbus.MQTTClientConfig{
ClientID: eventbus.MQTTDefaultClientID,
TopicPrefix: eventbus.MQTTDefaultTopicPrefix,
},
},
},
Variables: map[string]Variable{

View File

@ -39,17 +39,8 @@ func (c *Conf) NewVariableToValueConverter(audience VariableAudience, platform V
// NewVariableExpander returns a new VariableExpander for the given audience & platform.
func (c *Conf) NewVariableExpander(audience VariableAudience, platform VariablePlatform) *VariableExpander {
// Get the variables for the given audience & platform.
varsForPlatform := c.getVariables(audience, platform)
if len(varsForPlatform) == 0 {
log.Warn().
Str("audience", string(audience)).
Str("platform", string(platform)).
Msg("no variables defined for this platform given this audience")
}
return &VariableExpander{
oneWayVars: varsForPlatform,
oneWayVars: c.GetOneWayVariables(audience, platform),
managerTwoWayVars: c.GetTwoWayVariables(audience, c.currentGOOS),
targetTwoWayVars: c.GetTwoWayVariables(audience, platform),
targetPlatform: platform,
@ -89,15 +80,16 @@ func isValueMatch(valueToMatch, variableValue string) bool {
return true
}
// If the variable value has a backslash, assume it is a Windows path.
// If either value has backslashes, assume it is a Windows path.
// Convert it to slash notation just to see if that would provide a
// match.
if strings.ContainsRune(variableValue, '\\') {
slashedValue := crosspath.ToSlash(variableValue)
return strings.HasPrefix(valueToMatch, slashedValue)
variableValue = crosspath.ToSlash(variableValue)
}
return false
if strings.ContainsRune(valueToMatch, '\\') {
valueToMatch = crosspath.ToSlash(valueToMatch)
}
return strings.HasPrefix(valueToMatch, variableValue)
}
// Replace converts "{variable name}" to the value that belongs to the audience and platform.
@ -110,6 +102,21 @@ func (ve *VariableExpander) Expand(valueToExpand string) string {
expanded = strings.Replace(expanded, placeholder, varvalue, -1)
}
// Go through the two-way variables for the target platform.
isPathValue := false
for varname, varvalue := range ve.targetTwoWayVars {
placeholder := fmt.Sprintf("{%s}", varname)
if !strings.Contains(expanded, placeholder) {
continue
}
expanded = strings.Replace(expanded, placeholder, varvalue, -1)
// Since two-way variables are meant for path replacement, we know this
// should be a path.
isPathValue = true
}
// Go through the two-way variables, to make sure that the result of
// expanding variables gets the two-way variables applied as well. This is
// necessary to make implicitly-defined variables, which are only defined for
@ -117,7 +124,6 @@ func (ve *VariableExpander) Expand(valueToExpand string) string {
//
// Practically, this replaces "value for the Manager platform" with "value
// for the target platform".
isPathValue := false
for varname, managerValue := range ve.managerTwoWayVars {
targetValue, ok := ve.targetTwoWayVars[varname]
if !ok {
@ -137,6 +143,11 @@ func (ve *VariableExpander) Expand(valueToExpand string) string {
expanded = crosspath.ToPlatform(expanded, string(ve.targetPlatform))
}
log.Trace().
Str("valueToExpand", valueToExpand).
Str("expanded", expanded).
Bool("isPathValue", isPathValue).
Msg("expanded variable")
return expanded
}

View File

@ -6,7 +6,7 @@ import (
"github.com/stretchr/testify/assert"
)
func TestReplaceTwowayVariables(t *testing.T) {
func TestReplaceTwowayVariablesMixedSlashes(t *testing.T) {
c := DefaultConfig(func(c *Conf) {
c.Variables["shared"] = Variable{
IsTwoWay: true,
@ -17,10 +17,36 @@ func TestReplaceTwowayVariables(t *testing.T) {
}
})
replacer := c.NewVariableToValueConverter(VariableAudienceUsers, VariablePlatformWindows)
replacerWin := c.NewVariableToValueConverter(VariableAudienceWorkers, VariablePlatformWindows)
replacerLnx := c.NewVariableToValueConverter(VariableAudienceWorkers, VariablePlatformLinux)
// This is the real reason for this test: forward slashes in the path should
// still be matched to the backslashes in the variable value.
assert.Equal(t, `{shared}\shot\file.blend`, replacer.Replace(`Y:\shared\flamenco\shot\file.blend`))
assert.Equal(t, `{shared}/shot/file.blend`, replacer.Replace(`Y:/shared/flamenco/shot/file.blend`))
assert.Equal(t, `{shared}\shot\file.blend`, replacerWin.Replace(`Y:\shared\flamenco\shot\file.blend`))
assert.Equal(t, `{shared}/shot/file.blend`, replacerWin.Replace(`Y:/shared/flamenco/shot/file.blend`))
assert.Equal(t, `{shared}\shot\file.blend`, replacerLnx.Replace(`/shared\flamenco\shot\file.blend`))
assert.Equal(t, `{shared}/shot/file.blend`, replacerLnx.Replace(`/shared/flamenco/shot/file.blend`))
}
func TestExpandTwowayVariablesMixedSlashes(t *testing.T) {
c := DefaultConfig(func(c *Conf) {
c.Variables["shared"] = Variable{
IsTwoWay: true,
Values: []VariableValue{
{Value: "/shared/flamenco", Platform: VariablePlatformLinux},
{Value: `Y:\shared\flamenco`, Platform: VariablePlatformWindows},
},
}
})
expanderWin := c.NewVariableExpander(VariableAudienceWorkers, VariablePlatformWindows)
expanderLnx := c.NewVariableExpander(VariableAudienceWorkers, VariablePlatformLinux)
// Slashes should always be normalised for the target platform, on the entire path, not just the replaced part.
assert.Equal(t, `Y:\shared\flamenco\shot\file.blend`, expanderWin.Expand(`{shared}\shot\file.blend`))
assert.Equal(t, `Y:\shared\flamenco\shot\file.blend`, expanderWin.Expand(`{shared}/shot/file.blend`))
assert.Equal(t, `/shared/flamenco/shot/file.blend`, expanderLnx.Expand(`{shared}\shot\file.blend`))
assert.Equal(t, `/shared/flamenco/shot/file.blend`, expanderLnx.Expand(`{shared}/shot/file.blend`))
}

View File

@ -10,17 +10,25 @@ type (
EventTopic string
)
// Listener is the interface for internal components that want to respond to events.
type Listener interface {
OnEvent(topic EventTopic, payload interface{})
}
// Forwarder is the interface for components that forward events to external systems.
type Forwarder interface {
Broadcast(topic EventTopic, payload interface{})
}
type Broker struct {
listeners []Listener
forwarders []Forwarder
mutex sync.Mutex
}
func NewBroker() *Broker {
return &Broker{
listeners: []Listener{},
forwarders: []Forwarder{},
mutex: sync.Mutex{},
}
@ -32,10 +40,20 @@ func (b *Broker) AddForwarder(forwarder Forwarder) {
b.forwarders = append(b.forwarders, forwarder)
}
func (b *Broker) AddListener(listener Listener) {
b.mutex.Lock()
defer b.mutex.Unlock()
b.listeners = append(b.listeners, listener)
}
func (b *Broker) broadcast(topic EventTopic, payload interface{}) {
b.mutex.Lock()
defer b.mutex.Unlock()
for _, listener := range b.listeners {
listener.OnEvent(topic, payload)
}
for _, forwarder := range b.forwarders {
forwarder.Broadcast(topic, payload)
}
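With the Listener interface in place, any internal component can receive events by implementing OnEvent and registering itself with the broker. A minimal sketch, assuming only the Broker, Listener, and EventTopic types shown above (the logListener type is made up for illustration):

package example

import (
	"github.com/rs/zerolog/log"

	"projects.blender.org/studio/flamenco/internal/manager/eventbus"
)

// logListener is a hypothetical listener that just logs every event it sees.
type logListener struct{}

func (l *logListener) OnEvent(topic eventbus.EventTopic, payload interface{}) {
	log.Debug().
		Str("topic", string(topic)).
		Interface("payload", payload).
		Msg("event received")
}

func wireUp() *eventbus.Broker {
	broker := eventbus.NewBroker()
	broker.AddListener(&logListener{})
	// Forwarders to external systems (MQTT, SocketIO) register the same way:
	// broker.AddForwarder(someForwarder)
	return broker
}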

View File

@ -0,0 +1,17 @@
package eventbus
// SPDX-License-Identifier: GPL-3.0-or-later
import (
"github.com/rs/zerolog/log"
"projects.blender.org/studio/flamenco/pkg/api"
)
func NewFarmStatusEvent(farmstatus api.FarmStatusReport) api.EventFarmStatus {
return api.EventFarmStatus(farmstatus)
}
func (b *Broker) BroadcastFarmStatusEvent(event api.EventFarmStatus) {
log.Debug().Interface("event", event).Msg("eventbus: broadcasting FarmStatus event")
b.broadcast(TopicFarmStatus, event)
}

View File

@ -13,10 +13,14 @@ import (
"github.com/eclipse/paho.golang/paho"
"github.com/rs/zerolog"
"github.com/rs/zerolog/log"
"projects.blender.org/studio/flamenco/pkg/api"
)
const (
defaultClientID = "flamenco"
MQTTDefaultTopicPrefix = "flamenco"
MQTTDefaultClientID = "flamenco"
keepAlive = 30 // seconds
connectRetryDelay = 10 * time.Second
@ -61,7 +65,7 @@ func NewMQTTForwarder(config MQTTClientConfig) *MQTTForwarder {
return nil
}
if config.ClientID == "" {
config.ClientID = defaultClientID
config.ClientID = MQTTDefaultClientID
}
brokerURL, err := url.Parse(config.BrokerURL)
@ -150,6 +154,11 @@ func (m *MQTTForwarder) queueRunner(queueRunnerCtx context.Context) {
}
func (m *MQTTForwarder) Broadcast(topic EventTopic, payload interface{}) {
if _, ok := payload.(api.EventTaskLogUpdate); ok {
// Task log updates aren't sent through MQTT, as that can generate a lot of traffic.
return
}
fullTopic := m.topicPrefix + string(topic)
asJSON, err := json.Marshal(payload)
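For reference, registering the MQTT forwarder on the broker could look roughly like the sketch below. It only shows construction and registration; any connect/run step of the forwarder is deliberately left out, and the broker address is a placeholder:

package example

import "projects.blender.org/studio/flamenco/internal/manager/eventbus"

func setupEventBus() *eventbus.Broker {
	broker := eventbus.NewBroker()

	forwarder := eventbus.NewMQTTForwarder(eventbus.MQTTClientConfig{
		BrokerURL:   "mqtt://localhost:1883", // placeholder address
		ClientID:    eventbus.MQTTDefaultClientID,
		TopicPrefix: eventbus.MQTTDefaultTopicPrefix,
	})
	if forwarder != nil { // the constructor can return nil, see the hunk above
		broker.AddForwarder(forwarder)
	}
	return broker
}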

View File

@ -14,6 +14,7 @@ import (
"github.com/rs/zerolog/log"
"projects.blender.org/studio/flamenco/internal/uuid"
"projects.blender.org/studio/flamenco/pkg/api"
"projects.blender.org/studio/flamenco/pkg/website"
)
type SocketIOEventType string
@ -23,6 +24,8 @@ const (
)
var socketIOEventTypes = map[string]string{
reflect.TypeOf(api.EventLifeCycle{}).Name(): "/lifecycle",
reflect.TypeOf(api.EventFarmStatus{}).Name(): "/status",
reflect.TypeOf(api.EventJobUpdate{}).Name(): "/jobs",
reflect.TypeOf(api.EventTaskUpdate{}).Name(): "/task",
reflect.TypeOf(api.EventLastRenderedUpdate{}).Name(): "/last-rendered",
@ -59,7 +62,17 @@ func (s *SocketIOForwarder) Broadcast(topic EventTopic, payload interface{}) {
// SocketIO has a concept of 'event types'. MQTT doesn't have this, and thus the Flamenco event
// system doesn't rely on it. We use the payload type name as event type.
payloadType := reflect.TypeOf(payload).Name()
eventType := socketIOEventTypes[payloadType]
eventType, ok := socketIOEventTypes[payloadType]
if !ok {
log.Error().
Str("topic", string(topic)).
Str("payloadType", payloadType).
Interface("event", payload).
Msgf("socketIO: payload type does not have an event type, please copy-paste this message into a bug report at %s", website.BugReportURL)
return
}
log.Debug().
Str("topic", string(topic)).
Str("eventType", eventType).
@ -80,6 +93,10 @@ func (s *SocketIOForwarder) registerSIOEventHandlers() {
_ = sio.On(gosocketio.OnConnection, func(c *gosocketio.Channel) {
logger := sioLogger(c)
logger.Debug().Msg("socketIO: connected")
// All SocketIO connections get these events, regardless of their subscription.
_ = c.Join(string(TopicLifeCycle))
_ = c.Join(string(TopicFarmStatus))
})
// socket disconnection

View File

@ -6,7 +6,9 @@ import "fmt"
const (
// Topics on which events are published.
// NOTE: when adding here, also add to socketIOEventTypes in socketio.go.
TopicLifeCycle EventTopic = "/lifecycle" // sends api.EventLifeCycle
TopicFarmStatus EventTopic = "/status" // sends api.EventFarmStatus
TopicJobUpdate EventTopic = "/jobs" // sends api.EventJobUpdate
TopicLastRenderedImage EventTopic = "/last-rendered" // sends api.EventLastRenderedUpdate
TopicTaskUpdate EventTopic = "/task" // sends api.EventTaskUpdate

View File

@ -0,0 +1,233 @@
// Package farmstatus provides a status indicator for the entire Flamenco farm.
package farmstatus
import (
"context"
"errors"
"slices"
"sync"
"time"
"github.com/rs/zerolog/log"
"projects.blender.org/studio/flamenco/internal/manager/eventbus"
"projects.blender.org/studio/flamenco/pkg/api"
"projects.blender.org/studio/flamenco/pkg/website"
)
const (
// pollWait determines how often the persistence layer is queried to get the
// counts & statuses of workers and jobs.
//
// Note that this is the time between polls, i.e. from one poll operation
// finishing to the next one starting.
pollWait = 30 * time.Second
)
// Service keeps track of the overall farm status.
type Service struct {
persist PersistenceService
eventbus EventBus
mutex sync.Mutex
lastReport api.FarmStatusReport
forcePoll chan struct{} // Send anything here to force a poll, if none is running yet.
}
// NewService returns a 'farm status' service. Run its Run() function in a
// goroutine to make it actually do something.
func NewService(persist PersistenceService, eventbus EventBus) *Service {
service := Service{
persist: persist,
eventbus: eventbus,
mutex: sync.Mutex{},
forcePoll: make(chan struct{}, 1),
lastReport: api.FarmStatusReport{
Status: api.FarmStatusStarting,
},
}
eventbus.AddListener(&service)
return &service
}
// Run the farm status polling loop.
func (s *Service) Run(ctx context.Context) {
log.Debug().Msg("farm status: polling service running")
defer log.Debug().Msg("farm status: polling service stopped")
// At startup the first poll should happen quickly.
waitTime := 1 * time.Second
for {
select {
case <-ctx.Done():
return
case <-time.After(waitTime):
s.poll(ctx)
case <-s.forcePoll:
s.poll(ctx)
}
// After the first poll we can go to a slower pace, as from then on the
// event bus is the main source of poll triggers.
waitTime = pollWait
}
}
func (s *Service) OnEvent(topic eventbus.EventTopic, payload interface{}) {
forcePoll := false
eventSubject := ""
switch event := payload.(type) {
case api.EventJobUpdate:
forcePoll = event.PreviousStatus != nil && *event.PreviousStatus != event.Status
eventSubject = "job"
case api.EventWorkerUpdate:
forcePoll = event.PreviousStatus != nil && *event.PreviousStatus != event.Status
eventSubject = "worker"
}
if !forcePoll {
return
}
log.Debug().
Str("event", string(topic)).
Msgf("farm status: investigating after %s status change", eventSubject)
// Polling queries the database, and thus can have a non-trivial duration.
// Better to run in the Run() goroutine.
select {
case s.forcePoll <- struct{}{}:
default:
// If sending to the channel fails, there is already a struct{}{} in
// there, and thus a poll will be triggered ASAP anyway.
}
}
// Report returns the last-known farm status report.
//
// It is updated every few seconds, from the Run() function.
func (s *Service) Report() api.FarmStatusReport {
s.mutex.Lock()
defer s.mutex.Unlock()
return s.lastReport
}
// updateStatusReport updates the last status report in a thread-safe way.
// It returns whether the report changed.
func (s *Service) updateStatusReport(report api.FarmStatusReport) bool {
s.mutex.Lock()
defer s.mutex.Unlock()
reportChanged := s.lastReport != report
s.lastReport = report
return reportChanged
}
func (s *Service) poll(ctx context.Context) {
report := s.checkFarmStatus(ctx)
if report == nil {
// Already logged, just keep the last known log around for querying.
return
}
reportChanged := s.updateStatusReport(*report)
if reportChanged {
event := eventbus.NewFarmStatusEvent(s.lastReport)
s.eventbus.BroadcastFarmStatusEvent(event)
}
}
// checkFarmStatus checks the farm status by querying the persistence layer.
// This function does not return an error, but instead logs them as warnings and returns nil.
func (s *Service) checkFarmStatus(ctx context.Context) *api.FarmStatusReport {
log.Trace().Msg("farm status: checking the farm status")
startTime := time.Now()
defer func() {
duration := time.Since(startTime)
log.Debug().Stringer("duration", duration).Msg("farm status: checked the farm status")
}()
workerStatuses, err := s.persist.SummarizeWorkerStatuses(ctx)
if err != nil {
logDBError(err, "farm status: could not summarize worker statuses")
return nil
}
// Check the worker statuses first. When there are no usable workers the farm
// is inoperative, and there is little use in checking jobs. At least for now;
// maybe later the reported status should indicate a more pressing matter (as
// in, inoperative AND a job is queued).
// Check: inoperative
if len(workerStatuses) == 0 || allIn(workerStatuses, api.WorkerStatusOffline, api.WorkerStatusError) {
return &api.FarmStatusReport{
Status: api.FarmStatusInoperative,
}
}
jobStatuses, err := s.persist.SummarizeJobStatuses(ctx)
if err != nil {
logDBError(err, "farm status: could not summarize job statuses")
return nil
}
anyJobActive := jobStatuses[api.JobStatusActive] > 0
anyJobQueued := jobStatuses[api.JobStatusQueued] > 0
isWorkAvailable := anyJobActive || anyJobQueued
anyWorkerAwake := workerStatuses[api.WorkerStatusAwake] > 0
anyWorkerAsleep := workerStatuses[api.WorkerStatusAsleep] > 0
allWorkersAsleep := !anyWorkerAwake && anyWorkerAsleep
report := api.FarmStatusReport{}
switch {
case anyJobActive && anyWorkerAwake:
// - "active" # Actively working on jobs.
report.Status = api.FarmStatusActive
case isWorkAvailable:
// - "waiting" # Work to be done, but there is no worker awake.
report.Status = api.FarmStatusWaiting
case !isWorkAvailable && allWorkersAsleep:
// - "asleep" # Farm is idle, and all workers are asleep.
report.Status = api.FarmStatusAsleep
case !isWorkAvailable:
// - "idle" # Farm could be active, but has no work to do.
report.Status = api.FarmStatusIdle
default:
log.Warn().
Interface("workerStatuses", workerStatuses).
Interface("jobStatuses", jobStatuses).
Msgf("farm status: unexpected configuration of worker and job statuses, please report this at %s", website.BugReportURL)
report.Status = api.FarmStatusUnknown
}
return &report
}
func logDBError(err error, message string) {
switch {
case errors.Is(err, context.DeadlineExceeded):
log.Warn().Msg(message + " (it took too long)")
case errors.Is(err, context.Canceled):
log.Debug().Msg(message + " (Flamenco is shutting down)")
default:
log.Warn().AnErr("cause", err).Msg(message)
}
}
func allIn[T comparable](statuses map[T]int, shouldBeIn ...T) bool {
for status, count := range statuses {
if count == 0 {
continue
}
if !slices.Contains(shouldBeIn, status) {
return false
}
}
return true
}
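Putting the pieces together: the service gets a persistence layer and the event bus, its Run() loop goes into a goroutine, and Report() can be queried at any time. A minimal wiring sketch under those assumptions (startFarmStatus is an illustrative helper, not part of the actual Manager startup code):

package farmstatus

import (
	"context"

	"github.com/rs/zerolog/log"
)

// startFarmStatus shows how the service is wired up and queried.
func startFarmStatus(ctx context.Context, persist PersistenceService, broker EventBus) *Service {
	service := NewService(persist, broker) // also registers the service as an event listener

	// The polling loop runs until the context is cancelled.
	go service.Run(ctx)

	// The last-known report can be queried at any moment; right after startup
	// it is api.FarmStatusStarting until the first poll completes.
	log.Info().Interface("report", service.Report()).Msg("initial farm status")

	return service
}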

View File

@ -0,0 +1,241 @@
// Package farmstatus provides a status indicator for the entire Flamenco farm.
package farmstatus
import (
"context"
"testing"
"github.com/golang/mock/gomock"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"projects.blender.org/studio/flamenco/internal/manager/farmstatus/mocks"
"projects.blender.org/studio/flamenco/internal/manager/persistence"
"projects.blender.org/studio/flamenco/pkg/api"
)
type Fixtures struct {
service *Service
persist *mocks.MockPersistenceService
eventbus *mocks.MockEventBus
ctx context.Context
}
func TestFarmStatusStarting(t *testing.T) {
f := fixtures(t)
report := f.service.Report()
assert.Equal(t, api.FarmStatusStarting, report.Status)
}
func TestFarmStatusLoop(t *testing.T) {
f := fixtures(t)
// Mock an "active" status.
f.mockWorkerStatuses(persistence.WorkerStatusCount{
api.WorkerStatusOffline: 2,
api.WorkerStatusAsleep: 1,
api.WorkerStatusError: 1,
api.WorkerStatusAwake: 3,
})
f.mockJobStatuses(persistence.JobStatusCount{
api.JobStatusActive: 1,
})
// Before polling, the status should still be 'starting'.
report := f.service.Report()
assert.Equal(t, api.FarmStatusStarting, report.Status)
// After a single poll, the report should have been updated.
f.eventbus.EXPECT().BroadcastFarmStatusEvent(api.EventFarmStatus{Status: api.FarmStatusActive})
f.service.poll(f.ctx)
report = f.service.Report()
assert.Equal(t, api.FarmStatusActive, report.Status)
}
func TestCheckFarmStatusInoperative(t *testing.T) {
f := fixtures(t)
// "inoperative": no workers.
f.mockWorkerStatuses(persistence.WorkerStatusCount{})
report := f.service.checkFarmStatus(f.ctx)
require.NotNil(t, report)
assert.Equal(t, api.FarmStatusInoperative, report.Status)
// "inoperative": all workers offline.
f.mockWorkerStatuses(persistence.WorkerStatusCount{
api.WorkerStatusOffline: 3,
})
report = f.service.checkFarmStatus(f.ctx)
require.NotNil(t, report)
assert.Equal(t, api.FarmStatusInoperative, report.Status)
// "inoperative": some workers offline, some in error,
f.mockWorkerStatuses(persistence.WorkerStatusCount{
api.WorkerStatusOffline: 2,
api.WorkerStatusError: 1,
})
report = f.service.checkFarmStatus(f.ctx)
require.NotNil(t, report)
assert.Equal(t, api.FarmStatusInoperative, report.Status)
}
func TestCheckFarmStatusActive(t *testing.T) {
f := fixtures(t)
// "active" # Actively working on jobs.
f.mockWorkerStatuses(persistence.WorkerStatusCount{
api.WorkerStatusOffline: 2,
api.WorkerStatusAsleep: 1,
api.WorkerStatusError: 1,
api.WorkerStatusAwake: 3,
})
f.mockJobStatuses(persistence.JobStatusCount{
api.JobStatusActive: 1,
})
report := f.service.checkFarmStatus(f.ctx)
require.NotNil(t, report)
assert.Equal(t, api.FarmStatusActive, report.Status)
}
func TestCheckFarmStatusWaiting(t *testing.T) {
f := fixtures(t)
// "waiting": Active job, and only sleeping workers.
f.mockWorkerStatuses(persistence.WorkerStatusCount{
api.WorkerStatusAsleep: 1,
})
f.mockJobStatuses(persistence.JobStatusCount{
api.JobStatusActive: 1,
})
report := f.service.checkFarmStatus(f.ctx)
require.NotNil(t, report)
assert.Equal(t, api.FarmStatusWaiting, report.Status)
// "waiting": Queued job, and awake worker. It could pick up the job any
// second now, but it could also have been blocklisted already.
f.mockWorkerStatuses(persistence.WorkerStatusCount{
api.WorkerStatusAsleep: 1,
api.WorkerStatusAwake: 1,
})
f.mockJobStatuses(persistence.JobStatusCount{
api.JobStatusQueued: 1,
})
report = f.service.checkFarmStatus(f.ctx)
require.NotNil(t, report)
assert.Equal(t, api.FarmStatusWaiting, report.Status)
}
func TestCheckFarmStatusIdle(t *testing.T) {
f := fixtures(t)
// "idle" # Farm could be active, but has no work to do.
f.mockWorkerStatuses(persistence.WorkerStatusCount{
api.WorkerStatusOffline: 2,
api.WorkerStatusAsleep: 1,
api.WorkerStatusAwake: 1,
})
f.mockJobStatuses(persistence.JobStatusCount{
api.JobStatusCompleted: 1,
api.JobStatusCancelRequested: 1,
})
report := f.service.checkFarmStatus(f.ctx)
require.NotNil(t, report)
assert.Equal(t, api.FarmStatusIdle, report.Status)
}
func TestCheckFarmStatusAsleep(t *testing.T) {
f := fixtures(t)
// "asleep": No worker is awake, some are asleep, no work to do.
f.mockWorkerStatuses(persistence.WorkerStatusCount{
api.WorkerStatusOffline: 2,
api.WorkerStatusAsleep: 2,
})
f.mockJobStatuses(persistence.JobStatusCount{
api.JobStatusCanceled: 10,
api.JobStatusCompleted: 4,
api.JobStatusFailed: 2,
})
report := f.service.checkFarmStatus(f.ctx)
require.NotNil(t, report)
assert.Equal(t, api.FarmStatusAsleep, report.Status)
}
func TestFarmStatusEvent(t *testing.T) {
f := fixtures(t)
// "inoperative": no workers.
f.mockWorkerStatuses(persistence.WorkerStatusCount{})
f.eventbus.EXPECT().BroadcastFarmStatusEvent(api.EventFarmStatus{
Status: api.FarmStatusInoperative,
})
f.service.poll(f.ctx)
// Re-polling should not trigger any event, as the status doesn't change.
f.mockWorkerStatuses(persistence.WorkerStatusCount{})
f.service.poll(f.ctx)
// "active": Actively working on jobs.
f.mockWorkerStatuses(persistence.WorkerStatusCount{api.WorkerStatusAwake: 3})
f.mockJobStatuses(persistence.JobStatusCount{api.JobStatusActive: 1})
f.eventbus.EXPECT().BroadcastFarmStatusEvent(api.EventFarmStatus{
Status: api.FarmStatusActive,
})
f.service.poll(f.ctx)
}
func Test_allIn(t *testing.T) {
type args struct {
statuses map[api.WorkerStatus]int
shouldBeIn []api.WorkerStatus
}
tests := []struct {
name string
args args
want bool
}{
{"none", args{map[api.WorkerStatus]int{}, []api.WorkerStatus{api.WorkerStatusAsleep}}, true},
{"match-only", args{
map[api.WorkerStatus]int{api.WorkerStatusAsleep: 5},
[]api.WorkerStatus{api.WorkerStatusAsleep},
}, true},
{"match-some", args{
map[api.WorkerStatus]int{api.WorkerStatusAsleep: 5, api.WorkerStatusOffline: 2},
[]api.WorkerStatus{api.WorkerStatusAsleep},
}, false},
{"match-all", args{
map[api.WorkerStatus]int{api.WorkerStatusAsleep: 5, api.WorkerStatusOffline: 2},
[]api.WorkerStatus{api.WorkerStatusAsleep, api.WorkerStatusOffline},
}, true},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
if got := allIn(tt.args.statuses, tt.args.shouldBeIn...); got != tt.want {
t.Errorf("allIn() = %v, want %v", got, tt.want)
}
})
}
}
func fixtures(t *testing.T) *Fixtures {
mockCtrl := gomock.NewController(t)
f := Fixtures{
persist: mocks.NewMockPersistenceService(mockCtrl),
eventbus: mocks.NewMockEventBus(mockCtrl),
ctx: context.Background(),
}
// Calling NewService() immediately registers the service as a listener with the event bus.
f.eventbus.EXPECT().AddListener(gomock.Any())
f.service = NewService(f.persist, f.eventbus)
return &f
}
func (f *Fixtures) mockWorkerStatuses(workerStatuses persistence.WorkerStatusCount) {
f.persist.EXPECT().SummarizeWorkerStatuses(f.ctx).Return(workerStatuses, nil)
}
func (f *Fixtures) mockJobStatuses(jobStatuses persistence.JobStatusCount) {
f.persist.EXPECT().SummarizeJobStatuses(f.ctx).Return(jobStatuses, nil)
}

View File

@ -0,0 +1,26 @@
package farmstatus
import (
"context"
"projects.blender.org/studio/flamenco/internal/manager/eventbus"
"projects.blender.org/studio/flamenco/internal/manager/persistence"
"projects.blender.org/studio/flamenco/pkg/api"
)
// Generate mock implementations of these interfaces.
//go:generate go run github.com/golang/mock/mockgen -destination mocks/interfaces_mock.gen.go -package mocks projects.blender.org/studio/flamenco/internal/manager/farmstatus PersistenceService,EventBus
type PersistenceService interface {
SummarizeJobStatuses(ctx context.Context) (persistence.JobStatusCount, error)
SummarizeWorkerStatuses(ctx context.Context) (persistence.WorkerStatusCount, error)
}
var _ PersistenceService = (*persistence.DB)(nil)
type EventBus interface {
AddListener(listener eventbus.Listener)
BroadcastFarmStatusEvent(event api.EventFarmStatus)
}
var _ EventBus = (*eventbus.Broker)(nil)
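The two `var _ ... = ...` lines are compile-time assertions: they make the build fail if the concrete type ever stops satisfying the interface, without allocating anything at runtime. The same idiom in isolation, with made-up names:

package main

import "fmt"

type Greeter interface {
	Greet() string
}

type englishGreeter struct{}

func (englishGreeter) Greet() string { return "hello" }

// Compile-time check: if englishGreeter ever loses the Greet method,
// this line breaks the build instead of a distant caller failing later.
var _ Greeter = englishGreeter{}

func main() {
	fmt.Println(englishGreeter{}.Greet())
}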

View File

@ -0,0 +1,115 @@
// Code generated by MockGen. DO NOT EDIT.
// Source: projects.blender.org/studio/flamenco/internal/manager/farmstatus (interfaces: PersistenceService,EventBus)
// Package mocks is a generated GoMock package.
package mocks
import (
context "context"
reflect "reflect"
gomock "github.com/golang/mock/gomock"
eventbus "projects.blender.org/studio/flamenco/internal/manager/eventbus"
persistence "projects.blender.org/studio/flamenco/internal/manager/persistence"
api "projects.blender.org/studio/flamenco/pkg/api"
)
// MockPersistenceService is a mock of PersistenceService interface.
type MockPersistenceService struct {
ctrl *gomock.Controller
recorder *MockPersistenceServiceMockRecorder
}
// MockPersistenceServiceMockRecorder is the mock recorder for MockPersistenceService.
type MockPersistenceServiceMockRecorder struct {
mock *MockPersistenceService
}
// NewMockPersistenceService creates a new mock instance.
func NewMockPersistenceService(ctrl *gomock.Controller) *MockPersistenceService {
mock := &MockPersistenceService{ctrl: ctrl}
mock.recorder = &MockPersistenceServiceMockRecorder{mock}
return mock
}
// EXPECT returns an object that allows the caller to indicate expected use.
func (m *MockPersistenceService) EXPECT() *MockPersistenceServiceMockRecorder {
return m.recorder
}
// SummarizeJobStatuses mocks base method.
func (m *MockPersistenceService) SummarizeJobStatuses(arg0 context.Context) (persistence.JobStatusCount, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "SummarizeJobStatuses", arg0)
ret0, _ := ret[0].(persistence.JobStatusCount)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// SummarizeJobStatuses indicates an expected call of SummarizeJobStatuses.
func (mr *MockPersistenceServiceMockRecorder) SummarizeJobStatuses(arg0 interface{}) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "SummarizeJobStatuses", reflect.TypeOf((*MockPersistenceService)(nil).SummarizeJobStatuses), arg0)
}
// SummarizeWorkerStatuses mocks base method.
func (m *MockPersistenceService) SummarizeWorkerStatuses(arg0 context.Context) (persistence.WorkerStatusCount, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "SummarizeWorkerStatuses", arg0)
ret0, _ := ret[0].(persistence.WorkerStatusCount)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// SummarizeWorkerStatuses indicates an expected call of SummarizeWorkerStatuses.
func (mr *MockPersistenceServiceMockRecorder) SummarizeWorkerStatuses(arg0 interface{}) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "SummarizeWorkerStatuses", reflect.TypeOf((*MockPersistenceService)(nil).SummarizeWorkerStatuses), arg0)
}
// MockEventBus is a mock of EventBus interface.
type MockEventBus struct {
ctrl *gomock.Controller
recorder *MockEventBusMockRecorder
}
// MockEventBusMockRecorder is the mock recorder for MockEventBus.
type MockEventBusMockRecorder struct {
mock *MockEventBus
}
// NewMockEventBus creates a new mock instance.
func NewMockEventBus(ctrl *gomock.Controller) *MockEventBus {
mock := &MockEventBus{ctrl: ctrl}
mock.recorder = &MockEventBusMockRecorder{mock}
return mock
}
// EXPECT returns an object that allows the caller to indicate expected use.
func (m *MockEventBus) EXPECT() *MockEventBusMockRecorder {
return m.recorder
}
// AddListener mocks base method.
func (m *MockEventBus) AddListener(arg0 eventbus.Listener) {
m.ctrl.T.Helper()
m.ctrl.Call(m, "AddListener", arg0)
}
// AddListener indicates an expected call of AddListener.
func (mr *MockEventBusMockRecorder) AddListener(arg0 interface{}) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "AddListener", reflect.TypeOf((*MockEventBus)(nil).AddListener), arg0)
}
// BroadcastFarmStatusEvent mocks base method.
func (m *MockEventBus) BroadcastFarmStatusEvent(arg0 api.EventFarmStatus) {
m.ctrl.T.Helper()
m.ctrl.Call(m, "BroadcastFarmStatusEvent", arg0)
}
// BroadcastFarmStatusEvent indicates an expected call of BroadcastFarmStatusEvent.
func (mr *MockEventBusMockRecorder) BroadcastFarmStatusEvent(arg0 interface{}) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "BroadcastFarmStatusEvent", reflect.TypeOf((*MockEventBus)(nil).BroadcastFarmStatusEvent), arg0)
}

View File

@ -58,7 +58,7 @@ func exampleSubmittedJob() api.SubmittedJob {
func mockedClock(t *testing.T) clock.Clock {
c := clock.NewMock()
now, err := time.ParseInLocation("2006-01-02T15:04:05", "2006-01-02T15:04:05", time.Local)
assert.NoError(t, err)
require.NoError(t, err)
c.Set(now)
return c
}
@ -67,7 +67,7 @@ func TestSimpleBlenderRenderHappy(t *testing.T) {
c := mockedClock(t)
s, err := Load(c)
assert.NoError(t, err)
require.NoError(t, err)
// Compiling a job should be really fast.
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Millisecond)
@ -139,6 +139,41 @@ func TestSimpleBlenderRenderHappy(t *testing.T) {
assert.Equal(t, expectDeps, tVideo.Dependencies)
}
func TestSimpleBlenderRenderWithScene(t *testing.T) {
c := mockedClock(t)
s, err := Load(c)
require.NoError(t, err)
// Compiling a job should be really fast.
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Millisecond)
defer cancel()
sj := exampleSubmittedJob()
sj.Settings.AdditionalProperties["scene"] = "Test Scene"
aj, err := s.Compile(ctx, sj)
require.NoError(t, err)
require.NotNil(t, aj)
t0 := aj.Tasks[0]
expectCliArgs := []interface{}{ // They are strings, but Goja doesn't know that and will produce an []interface{}.
"--scene", "Test Scene",
"--render-output", "/render/sprites/farm_output/promo/square_ellie/square_ellie.lighting_light_breakdown2/######",
"--render-format", "PNG",
"--render-frame", "1..3",
}
assert.Equal(t, "render-1-3", t0.Name)
assert.Equal(t, 1, len(t0.Commands))
assert.Equal(t, "blender-render", t0.Commands[0].Name)
assert.EqualValues(t, AuthoredCommandParameters{
"exe": "{blender}",
"exeArgs": "{blenderArgs}",
"blendfile": "/render/sf/jobs/scene123.blend",
"args": expectCliArgs,
"argsBefore": make([]interface{}, 0),
}, t0.Commands[0].Parameters)
}
func TestJobWithoutTag(t *testing.T) {
c := mockedClock(t)
@ -172,7 +207,7 @@ func TestSimpleBlenderRenderWindowsPaths(t *testing.T) {
c := mockedClock(t)
s, err := Load(c)
assert.NoError(t, err)
require.NoError(t, err)
// Compiling a job should be really fast.
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Millisecond)
@ -307,10 +342,9 @@ func TestEtag(t *testing.T) {
{ // Test without etag.
aj, err := s.Compile(ctx, sj)
if assert.NoError(t, err, "job without etag should always be accepted") {
require.NoError(t, err, "job without etag should always be accepted")
assert.NotNil(t, aj)
}
}
{ // Test with bad etag.
sj.TypeEtag = ptr("this is not the right etag")
@ -321,10 +355,9 @@ func TestEtag(t *testing.T) {
{ // Test with correct etag.
sj.TypeEtag = ptr(expectEtag)
aj, err := s.Compile(ctx, sj)
if assert.NoError(t, err, "job with correct etag should be accepted") {
require.NoError(t, err, "job with correct etag should be accepted")
assert.NotNil(t, aj)
}
}
}
func TestComplexFrameRange(t *testing.T) {

View File

@ -10,6 +10,7 @@ import (
"time"
"github.com/dop251/goja"
"github.com/google/shlex"
"github.com/rs/zerolog/log"
)
@ -33,6 +34,19 @@ func jsFormatTimestampLocal(timestamp time.Time) string {
return timestamp.Local().Format("2006-01-02_150405")
}
// jsShellSplit splits a string into its parts, using CLI/shell semantics.
func jsShellSplit(vm *goja.Runtime, someCLIArgs string) []string {
split, err := shlex.Split(someCLIArgs)
if err != nil {
// Generate a JS exception by panicking with a Goja Value.
exception := vm.ToValue(err)
panic(exception)
}
return split
}
type ErrInvalidRange struct {
Range string // The frame range that was invalid.
Message string // The error message

View File

@ -5,12 +5,31 @@ package job_compilers
import (
"testing"
"github.com/dop251/goja"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestShellSplitHappy(t *testing.T) {
expect := []string{"--python-expr", "print(1 + 1)"}
actual := jsShellSplit(nil, "--python-expr 'print(1 + 1)'")
assert.Equal(t, expect, actual)
}
func TestShellSplitFailure(t *testing.T) {
vm := goja.New()
testFunc := func() {
jsShellSplit(vm, "--python-expr invalid_quoting(1 + 1)'")
}
// Testing that a goja.Value is used for the panic is a bit tricky, so just
// test that the function panics.
assert.Panics(t, testFunc)
}
func TestFrameChunkerHappyBlenderStyle(t *testing.T) {
chunks, err := jsFrameChunker("1..10,20..25,40,3..8", 4)
assert.NoError(t, err)
require.NoError(t, err)
assert.Equal(t, []string{"1-4", "5-8", "9,10,20,21", "22-25", "40"}, chunks)
}
@ -21,24 +40,24 @@ func TestFrameChunkerHappySmallInput(t *testing.T) {
// Just one frame.
chunks, err := jsFrameChunker("47", 4)
assert.NoError(t, err)
require.NoError(t, err)
assert.Equal(t, []string{"47"}, chunks)
// Just one range of exactly one chunk.
chunks, err = jsFrameChunker("1-3", 3)
assert.NoError(t, err)
require.NoError(t, err)
assert.Equal(t, []string{"1-3"}, chunks)
}
func TestFrameChunkerHappyRegularStyle(t *testing.T) {
chunks, err := jsFrameChunker("1-10,20-25,40", 4)
assert.NoError(t, err)
require.NoError(t, err)
assert.Equal(t, []string{"1-4", "5-8", "9,10,20,21", "22-25", "40"}, chunks)
}
func TestFrameChunkerHappyExtraWhitespace(t *testing.T) {
chunks, err := jsFrameChunker(" 1 .. 10,\t20..25\n,40 ", 4)
assert.NoError(t, err)
require.NoError(t, err)
assert.Equal(t, []string{"1-4", "5-8", "9,10,20,21", "22-25", "40"}, chunks)
}
@ -50,7 +69,7 @@ func TestFrameChunkerUnhappy(t *testing.T) {
func TestFrameRangeExplode(t *testing.T) {
frames, err := frameRangeExplode("1..10,20..25,40")
assert.NoError(t, err)
require.NoError(t, err)
assert.Equal(t, []int{
1, 2, 3, 4, 5, 6, 7, 8, 9, 10,
20, 21, 22, 23, 24, 25, 40,

View File

@ -2,6 +2,7 @@
const JOB_TYPE = {
label: "Simple Blender Render",
description: "Render a sequence of frames, and create a preview video file",
settings: [
// Settings for artists to determine:
{ key: "frames", type: "string", required: true, eval: "f'{C.scene.frame_start}-{C.scene.frame_end}'",

View File

@ -140,6 +140,9 @@ func newGojaVM(registry *require.Registry) *goja.Runtime {
mustSet("alert", jsAlert)
mustSet("frameChunker", jsFrameChunker)
mustSet("formatTimestampLocal", jsFormatTimestampLocal)
mustSet("shellSplit", func(cliArgs string) []string {
return jsShellSplit(vm, cliArgs)
})
// Pre-import some useful modules.
registry.Enable(vm)
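To see the shellSplit binding in action outside of Flamenco, a small goja program can register a Go function under the same name and call it from JavaScript. This sketch only illustrates the mechanism; it is not how Flamenco itself loads or runs its job compiler scripts:

package main

import (
	"fmt"

	"github.com/dop251/goja"
	"github.com/google/shlex"
)

func main() {
	vm := goja.New()

	// Register a Go function under the name the job compiler scripts use.
	err := vm.Set("shellSplit", func(cliArgs string) []string {
		split, err := shlex.Split(cliArgs)
		if err != nil {
			panic(vm.ToValue(err)) // surfaces as a JavaScript exception
		}
		return split
	})
	if err != nil {
		panic(err)
	}

	value, err := vm.RunString(`shellSplit("--python-expr 'print(1 + 1)'")`)
	if err != nil {
		panic(err)
	}
	fmt.Println(value.Export()) // the two split arguments: --python-expr and print(1 + 1)
}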

View File

@ -2,6 +2,7 @@
const JOB_TYPE = {
label: "Simple Blender Render",
description: "Render a sequence of frames, and create a preview video file",
settings: [
// Settings for artists to determine:
{ key: "frames", type: "string", required: true,
@ -31,6 +32,8 @@ const JOB_TYPE = {
description: "File extension used when rendering images" },
{ key: "has_previews", type: "bool", required: false, eval: "C.scene.render.image_settings.use_preview", visible: "hidden",
description: "Whether Blender will render preview images."},
{ key: "scene", type: "string", required: true, eval: "C.scene.name", visible: "web",
description: "Name of the scene to render."},
]
};
@ -99,6 +102,12 @@ function authorRenderTasks(settings, renderDir, renderOutput) {
print("authorRenderTasks(", renderDir, renderOutput, ")");
let renderTasks = [];
let chunks = frameChunker(settings.frames, settings.chunk_size);
let baseArgs = [];
if (settings.scene) {
baseArgs = baseArgs.concat(["--scene", settings.scene]);
}
for (let chunk of chunks) {
const task = author.Task(`render-${chunk}`, "blender");
const command = author.Command("blender-render", {
@ -106,11 +115,11 @@ function authorRenderTasks(settings, renderDir, renderOutput) {
exeArgs: "{blenderArgs}",
argsBefore: [],
blendfile: settings.blendfile,
args: [
args: baseArgs.concat([
"--render-output", path.join(renderDir, path.basename(renderOutput)),
"--render-format", settings.format,
"--render-frame", chunk.replaceAll("-", ".."), // Convert to Blender frame range notation.
]
])
});
task.addCommand(command);
renderTasks.push(task);

View File

@ -0,0 +1,334 @@
const JOB_TYPE = {
label: "Single Image Render",
description: "Distributed rendering of a single image.",
settings: [
// Settings for artists to determine:
{
key: "tile_size_x",
type: "int32",
default: 64,
description: "Tile size in pixels for the X axis"
},
{
key: "tile_size_y",
type: "int32",
default: 64,
description: "Tile size in pixels for the Y axis"
},
{
key: "frame", type: "int32", required: true,
eval: "C.scene.frame_current",
description: "Frame to render. Examples: '47', '1'"
},
// render_output_root + add_path_components determine the value of render_output_path.
{
key: "render_output_root",
type: "string",
subtype: "dir_path",
required: true,
visible: "submission",
description: "Base directory of where render output is stored. Will have some job-specific parts appended to it"
},
{
key: "add_path_components",
type: "int32",
required: true,
default: 0,
propargs: {min: 0, max: 32},
visible: "submission",
description: "Number of path components of the current blend file to use in the render output path"
},
{
key: "render_output_path", type: "string", subtype: "file_path", editable: false,
eval: "str(Path(abspath(settings.render_output_root), last_n_dir_parts(settings.add_path_components), jobname, '{timestamp}', 'tiles'))",
description: "Final file path of where render output will be saved"
},
// Automatically evaluated settings:
{
key: "blendfile",
type: "string",
required: true,
description: "Path of the Blend file to render",
visible: "web"
},
{
key: "format",
type: "string",
required: true,
eval: "C.scene.render.image_settings.file_format",
visible: "web"
},
{
key: "image_file_extension",
type: "string",
required: true,
eval: "C.scene.render.file_extension",
visible: "hidden",
description: "File extension used when rendering images"
},
{
key: "resolution_x",
type: "int32",
required: true,
eval: "C.scene.render.resolution_x",
visible: "hidden",
description: "Resolution X"
},
{
key: "resolution_y",
type: "int32",
required: true,
eval: "C.scene.render.resolution_y",
visible: "hidden",
description: "Resolution Y"
},
{
key: "resolution_scale",
type: "int32",
required: true,
eval: "C.scene.render.resolution_percentage",
visible: "hidden",
description: "Resolution scale"
}
]
};
function compileJob(job) {
print("Single Image Render job submitted");
print("job: ", job);
const settings = job.settings;
const renderOutput = renderOutputPath(job);
if (settings.resolution_scale !== 100) {
throw "Flamenco currently does not support rendering with a resolution scale other than 100%";
}
// Make sure that when the job is investigated later, it shows the
// actually-used render output:
settings.render_output_path = renderOutput;
const renderDir = path.dirname(renderOutput);
const renderTasks = authorRenderTasks(settings, renderDir, renderOutput);
const mergeTask = authorMergeTask(settings, renderDir);
for (const rt of renderTasks) {
job.addTask(rt);
}
if (mergeTask) {
// If there is a merge task, all other tasks have to be done first.
for (const rt of renderTasks) {
mergeTask.addDependency(rt);
}
job.addTask(mergeTask);
}
}
// Do field replacement on the render output path.
function renderOutputPath(job) {
let path = job.settings.render_output_path;
if (!path) {
throw "no render_output_path setting!";
}
return path.replace(/{([^}]+)}/g, (match, group0) => {
switch (group0) {
case "timestamp":
return formatTimestampLocal(job.created);
default:
return match;
}
});
}
// Calculate the borders for the tiles.
// Overscan is not taken into account here.
function calcBorders(tileSizeX, tileSizeY, width, height) {
let borders = [];
for (let y = 0; y < height; y += tileSizeY) {
for (let x = 0; x < width; x += tileSizeX) {
borders.push([x, y, Math.min(x + tileSizeX, width), Math.min(y + tileSizeY, height)]);
}
}
print("borders: ", borders);
return borders;
}
function authorRenderTasks(settings, renderDir, renderOutput) {
print("authorRenderTasks(", renderDir, renderOutput, ")");
let renderTasks = [];
let borders = calcBorders(settings.tile_size_x, settings.tile_size_y, settings.resolution_x, settings.resolution_y);
for (let border of borders) {
const task = author.Task(`render-${border[0]}-${border[1]}`, "blender");
// Overscan is calculated in this manner to avoid rendering outside the image resolution
let pythonExpr = `import bpy
scene = bpy.context.scene
render = scene.render
render.image_settings.file_format = 'OPEN_EXR_MULTILAYER'
render.use_compositing = False
render.use_stamp = False
overscan = 16
render.border_min_x = max(${border[0]} - overscan, 0) / ${settings.resolution_x}
render.border_min_y = max(${border[1]} - overscan, 0) / ${settings.resolution_y}
render.border_max_x = min(${border[2]} + overscan, ${settings.resolution_x}) / ${settings.resolution_x}
render.border_max_y = min(${border[3]} + overscan, ${settings.resolution_y}) / ${settings.resolution_y}
render.use_border = True
render.use_crop_to_border = True
bpy.ops.render.render(write_still=True)`
const command = author.Command("blender-render", {
exe: "{blender}",
exeArgs: "{blenderArgs}",
argsBefore: [],
blendfile: settings.blendfile,
args: [
"--render-output", path.join(renderDir, path.basename(renderOutput), border[0] + "-" + border[1] + "-" + border[2] + "-" + border[3]),
"--render-format", settings.format,
"--python-expr", pythonExpr
]
});
task.addCommand(command);
renderTasks.push(task);
}
return renderTasks;
}
function authorMergeTask(settings, renderDir, renderOutput) {
print("authorMergeTask(", renderDir, ")");
const task = author.Task("merge", "blender");
// Burning metadata into the image is done by the compositor for the entire merged image
// The overall logic of the merge is as follows:
// 1. Find out the Render Layers node and to which socket it is connected
// 2. Load image files from the tiles directory.
// Their correct position is determined by their filename.
// 3. Create a node tree that scales, translates and adds the tiles together.
// A simple version of the node tree is linked here:
// https://devtalk.blender.org/uploads/default/original/3X/f/0/f047f221c70955b32e4b455e53453c5df716079e.jpeg
// 4. The final image is then fed into the socket the Render Layers node was connected to.
// This allows the compositing to work as if the image was rendered in one go.
let pythonExpr = `import bpy
render = bpy.context.scene.render
render.resolution_x = ${settings.resolution_x}
render.resolution_y = ${settings.resolution_y}
bpy.context.scene.use_nodes = True
render.use_compositing = True
render.use_stamp = True
node_tree = bpy.context.scene.node_tree
overscan = 16
render_layers_node = None
for node in node_tree.nodes:
if node.type == 'R_LAYERS':
feed_in_input = node.outputs[0]
render_layers_node = node
break
for link in node_tree.links:
if feed_in_input is not None and link.from_socket == feed_in_input:
feed_in_output = link.to_socket
break
from pathlib import Path
root = Path("${path.join(renderDir, path.basename(renderOutput))}/tiles")
image_files = [f for f in root.iterdir() if f.is_file()]
separate_nodes = []
first_crop_node = None
translate_nodes = []
min_width = min([int(f.stem.split('-')[2]) - int(f.stem.split('-')[0]) for f in image_files])
min_height = min([int(f.stem.split('-')[3]) - int(f.stem.split('-')[1]) for f in image_files])
for i, image_file in enumerate(image_files):
image_node = node_tree.nodes.new('CompositorNodeImage')
image_node.image = bpy.data.images.load(str(root / image_file.name))
crop_node = node_tree.nodes.new('CompositorNodeCrop')
crop_node.use_crop_size = True
left, top, right, bottom = image_file.stem.split('-')
actual_width = int(right) - int(left)
actual_height = int(bottom) - int(top)
if left == '0':
crop_node.min_x = 0
crop_node.max_x = actual_width
else:
crop_node.min_x = overscan
crop_node.max_x = actual_width + overscan
if top == '0':
crop_node.max_y = 0
crop_node.min_y = actual_height
else:
crop_node.max_y = overscan
crop_node.min_y = actual_height + overscan
if i == 0:
first_crop_node = crop_node
translate_node = node_tree.nodes.new('CompositorNodeTranslate')
# translate_node.use_relative = True
translate_node.inputs[1].default_value = float(left) + (actual_width - ${settings.resolution_x}) / 2
translate_node.inputs[2].default_value = float(top) + (actual_height - ${settings.resolution_y}) / 2
translate_nodes.append(translate_node)
separate_node = node_tree.nodes.new('CompositorNodeSeparateColor')
separate_nodes.append(separate_node)
node_tree.links.new(image_node.outputs[0], crop_node.inputs[0])
node_tree.links.new(crop_node.outputs[0], translate_node.inputs[0])
node_tree.links.new(translate_node.outputs[0], separate_node.inputs[0])
scale_node = node_tree.nodes.new('CompositorNodeScale')
scale_node.space = 'RELATIVE'
scale_node.inputs[1].default_value = ${settings.resolution_x} / min_width
scale_node.inputs[2].default_value = ${settings.resolution_y} / min_height
node_tree.links.new(first_crop_node.outputs[0], scale_node.inputs[0])
mix_node = node_tree.nodes.new('CompositorNodeMixRGB')
mix_node.blend_type = 'MIX'
mix_node.inputs[0].default_value = 0.0
mix_node.inputs[1].default_value = (0, 0, 0, 1)
node_tree.links.new(scale_node.outputs[0], mix_node.inputs[2])
mix_adds = [node_tree.nodes.new('CompositorNodeMixRGB') for _ in range(len(separate_nodes))]
math_adds = [node_tree.nodes.new('CompositorNodeMath') for _ in range(len(separate_nodes))]
for i, mix_add in enumerate(mix_adds):
mix_add.blend_type = 'ADD'
if i == 0:
node_tree.links.new(mix_node.outputs[0], mix_add.inputs[1])
else:
node_tree.links.new(mix_adds[i - 1].outputs[0], mix_add.inputs[1])
node_tree.links.new(translate_nodes[i].outputs[0], mix_add.inputs[2])
for i, math_add in enumerate(math_adds):
math_add.operation = 'ADD'
if i == 0:
node_tree.links.new(mix_node.outputs[0], math_add.inputs[0])
else:
node_tree.links.new(math_adds[i - 1].outputs[0], math_add.inputs[0])
node_tree.links.new(separate_nodes[i - 1].outputs[3], math_add.inputs[1])
set_alpha_node = node_tree.nodes.new('CompositorNodeSetAlpha')
set_alpha_node.mode = 'REPLACE_ALPHA'
node_tree.links.new(mix_adds[-1].outputs[0], set_alpha_node.inputs[0])
node_tree.links.new(math_adds[-1].outputs[0], set_alpha_node.inputs[1])
if feed_in_input is not None:
node_tree.links.new(set_alpha_node.outputs[0], feed_in_output)
else:
raise Exception('No Render Layers Node found. Currently only supported with a Render Layers Node in the Compositor.')
node_tree.nodes.remove(render_layers_node)
bpy.ops.render.render(write_still=True)`
const command = author.Command("blender-render", {
exe: "{blender}",
exeArgs: "{blenderArgs}",
argsBefore: [],
blendfile: settings.blendfile,
args: [
"--render-output", path.join(renderDir, path.basename(renderOutput), "merged"),
"--render-format", settings.format,
"--python-expr", pythonExpr
]
});
task.addCommand(command);
return task;
}
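The tile borders produced by calcBorders() are plain clamped rectangles in [min x, min y, max x, max y] order. The same arithmetic in Go, with a small worked example (an illustrative stand-alone translation, not shared code):

package main

import "fmt"

// calcBorders mirrors the JavaScript helper: split a width×height image into
// tiles of at most tileSizeX×tileSizeY pixels, clamped at the image edges.
func calcBorders(tileSizeX, tileSizeY, width, height int) [][4]int {
	var borders [][4]int
	for y := 0; y < height; y += tileSizeY {
		for x := 0; x < width; x += tileSizeX {
			borders = append(borders, [4]int{
				x, y,
				min(x+tileSizeX, width),
				min(y+tileSizeY, height),
			})
		}
	}
	return borders
}

func min(a, b int) int {
	if a < b {
		return a
	}
	return b
}

func main() {
	// A 100×80 image with 64×64 tiles yields four tiles; the right and bottom
	// ones are clamped to the image resolution.
	fmt.Println(calcBorders(64, 64, 100, 80))
	// Output: [[0 0 64 64] [64 0 100 64] [0 64 64 80] [64 64 100 80]]
}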

View File

@ -8,12 +8,13 @@ import (
"github.com/rs/zerolog"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestLoadScriptsFrom_skip_nonjs(t *testing.T) {
thisDirFS := os.DirFS(".")
compilers, err := loadScriptsFrom(thisDirFS)
assert.NoError(t, err, "input without JS files should not cause errors")
require.NoError(t, err, "input without JS files should not cause errors")
assert.Empty(t, compilers)
}
@ -21,7 +22,7 @@ func TestLoadScriptsFrom_on_disk_js(t *testing.T) {
scriptsFS := os.DirFS("scripts-for-unittest")
compilers, err := loadScriptsFrom(scriptsFS)
assert.NoError(t, err)
require.NoError(t, err)
expectKeys := map[string]bool{
"echo-and-sleep": true,
"simple-blender-render": true,
@ -34,10 +35,11 @@ func TestLoadScriptsFrom_embedded(t *testing.T) {
initEmbeddedFS()
compilers, err := loadScriptsFrom(embeddedScriptsFS)
assert.NoError(t, err)
require.NoError(t, err)
expectKeys := map[string]bool{
"echo-sleep-test": true,
"simple-blender-render": true,
"single-image-render": true,
}
assert.Equal(t, expectKeys, keys(compilers))
}
@ -48,7 +50,7 @@ func BenchmarkLoadScripts_fromEmbedded(b *testing.B) {
for i := 0; i < b.N; i++ {
compilers, err := loadScriptsFrom(embeddedScriptsFS)
assert.NoError(b, err)
require.NoError(b, err)
assert.NotEmpty(b, compilers)
}
}
@ -59,7 +61,7 @@ func BenchmarkLoadScripts_fromDisk(b *testing.B) {
onDiskFS := os.DirFS("scripts-for-unittest")
for i := 0; i < b.N; i++ {
compilers, err := loadScriptsFrom(onDiskFS)
assert.NoError(b, err)
require.NoError(b, err)
assert.NotEmpty(b, compilers)
}
}

View File

@ -18,6 +18,7 @@ import (
type PersistenceService interface {
FetchJob(ctx context.Context, jobUUID string) (*persistence.Job, error)
FetchJobShamanCheckoutID(ctx context.Context, jobUUID string) (string, error)
RequestJobDeletion(ctx context.Context, j *persistence.Job) error
RequestJobMassDeletion(ctx context.Context, lastUpdatedMax time.Time) ([]string, error)

View File

@ -150,19 +150,28 @@ func (s *Service) Run(ctx context.Context) {
log.Debug().Msg("job deleter: running")
defer log.Debug().Msg("job deleter: shutting down")
waitTime := jobDeletionCheckInterval
for {
select {
case <-ctx.Done():
return
case jobUUID := <-s.queue:
s.deleteJob(ctx, jobUUID)
case <-time.After(jobDeletionCheckInterval):
if len(s.queue) == 0 {
waitTime = 100 * time.Millisecond
}
case <-time.After(waitTime):
// Inspect the database to see if there was anything marked for deletion
// without getting into our queue. This can happen when lots of jobs are
// queued in quick succession, as then the queue channel gets full.
if len(s.queue) == 0 {
s.queuePendingDeletions(ctx)
}
// The next iteration should just wait for the default duration.
waitTime = jobDeletionCheckInterval
}
}
}
@ -196,7 +205,9 @@ queueLoop:
func (s *Service) deleteJob(ctx context.Context, jobUUID string) error {
logger := log.With().Str("job", jobUUID).Logger()
startTime := time.Now()
logger.Debug().Msg("job deleter: starting job deletion")
err := s.deleteShamanCheckout(ctx, logger, jobUUID)
if err != nil {
return err
@ -224,11 +235,10 @@ func (s *Service) deleteJob(ctx context.Context, jobUUID string) error {
}
s.changeBroadcaster.BroadcastJobUpdate(jobUpdate)
logger.Info().Msg("job deleter: job removal complete")
// Request a consistency check on the database. In the past there have been
// some issues after deleting a job.
s.persist.RequestIntegrityCheck()
duration := time.Since(startTime)
logger.Info().
Stringer("duration", duration).
Msg("job deleter: job removal complete")
return nil
}
@ -258,12 +268,10 @@ func (s *Service) deleteShamanCheckout(ctx context.Context, logger zerolog.Logge
}
// To erase the Shaman checkout we need more info than just its UUID.
dbJob, err := s.persist.FetchJob(ctx, jobUUID)
checkoutID, err := s.persist.FetchJobShamanCheckoutID(ctx, jobUUID)
if err != nil {
return fmt.Errorf("unable to fetch job from database: %w", err)
}
checkoutID := dbJob.Storage.ShamanCheckoutID
if checkoutID == "" {
logger.Info().Msg("job deleter: job was not created with Shaman (or before Flamenco v3.2), skipping job file deletion")
return nil
@ -272,10 +280,10 @@ func (s *Service) deleteShamanCheckout(ctx context.Context, logger zerolog.Logge
err = s.shaman.EraseCheckout(checkoutID)
switch {
case errors.Is(err, shaman.ErrDoesNotExist):
logger.Info().Msg("job deleter: Shaman checkout directory does not exist, ignoring")
logger.Debug().Msg("job deleter: Shaman checkout directory does not exist, ignoring")
return nil
case err != nil:
logger.Info().Err(err).Msg("job deleter: Shaman checkout directory could not be erased")
logger.Warn().Err(err).Msg("job deleter: Shaman checkout directory could not be erased")
return err
}
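The reworked Run() loop is an instance of a common pattern: a bounded channel as the fast path, with a periodic database sweep as the safety net for items that could not be queued while the channel was full. Stripped down to its core (names, durations, and the print statements are illustrative only):

package main

import (
	"context"
	"fmt"
	"time"
)

type deleter struct {
	queue chan string
}

// enqueue tries the fast path; when the channel is full the item is left for
// the periodic sweep to pick up from persistent storage.
func (d *deleter) enqueue(jobUUID string) {
	select {
	case d.queue <- jobUUID:
	default:
		fmt.Println("queue full, deferring to the next sweep:", jobUUID)
	}
}

func (d *deleter) run(ctx context.Context) {
	waitTime := 10 * time.Second
	for {
		select {
		case <-ctx.Done():
			return
		case jobUUID := <-d.queue:
			fmt.Println("deleting:", jobUUID)
			if len(d.queue) == 0 {
				// Queue drained; sweep soon in case enqueueing overflowed earlier.
				waitTime = 100 * time.Millisecond
			}
		case <-time.After(waitTime):
			if len(d.queue) == 0 {
				fmt.Println("sweeping persistent storage for pending deletions")
			}
			waitTime = 10 * time.Second
		}
	}
}

func main() {
	d := &deleter{queue: make(chan string, 2)}
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()

	for _, id := range []string{"job-a", "job-b", "job-c"} {
		d.enqueue(id) // "job-c" overflows the 2-slot queue
	}
	d.run(ctx)
}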

View File

@ -9,6 +9,7 @@ import (
"github.com/golang/mock/gomock"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"projects.blender.org/studio/flamenco/internal/manager/job_deleter/mocks"
"projects.blender.org/studio/flamenco/internal/manager/persistence"
"projects.blender.org/studio/flamenco/pkg/shaman"
@ -32,16 +33,16 @@ func TestQueueJobDeletion(t *testing.T) {
job1 := &persistence.Job{UUID: "2f7d910f-08a6-4b0f-8ecb-b3946939ed1b"}
mocks.persist.EXPECT().RequestJobDeletion(mocks.ctx, job1)
assert.NoError(t, s.QueueJobDeletion(mocks.ctx, job1))
require.NoError(t, s.QueueJobDeletion(mocks.ctx, job1))
// Call twice more to overflow the queue.
job2 := &persistence.Job{UUID: "e8fbe41c-ed24-46df-ba63-8d4f5524071b"}
mocks.persist.EXPECT().RequestJobDeletion(mocks.ctx, job2)
assert.NoError(t, s.QueueJobDeletion(mocks.ctx, job2))
require.NoError(t, s.QueueJobDeletion(mocks.ctx, job2))
job3 := &persistence.Job{UUID: "deeab6ba-02cd-42c0-b7bc-2367a2f04c7d"}
mocks.persist.EXPECT().RequestJobDeletion(mocks.ctx, job3)
assert.NoError(t, s.QueueJobDeletion(mocks.ctx, job3))
require.NoError(t, s.QueueJobDeletion(mocks.ctx, job3))
if assert.Len(t, s.queue, 2, "the first two job UUID should be queued") {
assert.Equal(t, job1.UUID, <-s.queue)
@ -109,9 +110,8 @@ func TestDeleteJobWithoutShaman(t *testing.T) {
// Mock that everything went OK.
mocks.storage.EXPECT().RemoveJobStorage(mocks.ctx, jobUUID)
mocks.persist.EXPECT().DeleteJob(mocks.ctx, jobUUID)
mocks.persist.EXPECT().RequestIntegrityCheck()
mocks.broadcaster.EXPECT().BroadcastJobUpdate(gomock.Any())
assert.NoError(t, s.deleteJob(mocks.ctx, jobUUID))
require.NoError(t, s.deleteJob(mocks.ctx, jobUUID))
}
func TestDeleteJobWithShaman(t *testing.T) {
@ -127,14 +127,7 @@ func TestDeleteJobWithShaman(t *testing.T) {
AnyTimes()
shamanCheckoutID := "010_0431_lighting"
dbJob := persistence.Job{
UUID: jobUUID,
Name: "сцена/shot/010_0431_lighting",
Storage: persistence.JobStorageInfo{
ShamanCheckoutID: shamanCheckoutID,
},
}
mocks.persist.EXPECT().FetchJob(mocks.ctx, jobUUID).Return(&dbJob, nil).AnyTimes()
mocks.persist.EXPECT().FetchJobShamanCheckoutID(mocks.ctx, jobUUID).Return(shamanCheckoutID, nil).AnyTimes()
// Mock that Shaman deletion failed. The rest of the deletion should be
// blocked by this.
@ -161,9 +154,8 @@ func TestDeleteJobWithShaman(t *testing.T) {
mocks.shaman.EXPECT().EraseCheckout(shamanCheckoutID)
mocks.storage.EXPECT().RemoveJobStorage(mocks.ctx, jobUUID)
mocks.persist.EXPECT().DeleteJob(mocks.ctx, jobUUID)
mocks.persist.EXPECT().RequestIntegrityCheck()
mocks.broadcaster.EXPECT().BroadcastJobUpdate(gomock.Any())
assert.NoError(t, s.deleteJob(mocks.ctx, jobUUID))
require.NoError(t, s.deleteJob(mocks.ctx, jobUUID))
}
func jobDeleterTestFixtures(t *testing.T) (*Service, func(), *JobDeleterMocks) {

View File

@ -66,6 +66,21 @@ func (mr *MockPersistenceServiceMockRecorder) FetchJob(arg0, arg1 interface{}) *
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "FetchJob", reflect.TypeOf((*MockPersistenceService)(nil).FetchJob), arg0, arg1)
}
// FetchJobShamanCheckoutID mocks base method.
func (m *MockPersistenceService) FetchJobShamanCheckoutID(arg0 context.Context, arg1 string) (string, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "FetchJobShamanCheckoutID", arg0, arg1)
ret0, _ := ret[0].(string)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// FetchJobShamanCheckoutID indicates an expected call of FetchJobShamanCheckoutID.
func (mr *MockPersistenceServiceMockRecorder) FetchJobShamanCheckoutID(arg0, arg1 interface{}) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "FetchJobShamanCheckoutID", reflect.TypeOf((*MockPersistenceService)(nil).FetchJobShamanCheckoutID), arg0, arg1)
}
// FetchJobsDeletionRequested mocks base method.
func (m *MockPersistenceService) FetchJobsDeletionRequested(arg0 context.Context) ([]string, error) {
m.ctrl.T.Helper()

View File

@ -10,6 +10,7 @@ import (
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"projects.blender.org/studio/flamenco/internal/manager/local_storage"
)
@ -38,9 +39,9 @@ func TestQueueImage(t *testing.T) {
defer storage.MustErase()
lrp := New(storage)
assert.NoError(t, lrp.QueueImage(payload))
assert.NoError(t, lrp.QueueImage(payload))
assert.NoError(t, lrp.QueueImage(payload))
require.NoError(t, lrp.QueueImage(payload))
require.NoError(t, lrp.QueueImage(payload))
require.NoError(t, lrp.QueueImage(payload))
assert.ErrorIs(t, lrp.QueueImage(payload), ErrQueueFull)
}
@ -48,9 +49,7 @@ func TestProcessImage(t *testing.T) {
// Load the test image. Note that this intentionally has an approximate 21:9
// ratio, whereas the thumbnail specs define a 16:9 ratio.
imgBytes, err := os.ReadFile("last_rendered_test.jpg")
if !assert.NoError(t, err) {
t.FailNow()
}
require.NoError(t, err)
jobID := "e078438b-c9f5-43e6-9e86-52f8be91dd12"
payload := Payload{
@ -87,15 +86,11 @@ func TestProcessImage(t *testing.T) {
assertImageSize := func(spec Thumbspec) {
path := filepath.Join(jobdir, spec.Filename)
file, err := os.Open(path)
if !assert.NoError(t, err, "thumbnail %s should be openable", spec.Filename) {
return
}
require.NoError(t, err, "thumbnail %s should be openable", spec.Filename)
defer file.Close()
img, format, err := image.Decode(file)
if !assert.NoErrorf(t, err, "thumbnail %s should be decodable", spec.Filename) {
return
}
require.NoErrorf(t, err, "thumbnail %s should be decodable", spec.Filename)
assert.Equalf(t, "jpeg", format, "thumbnail %s not written in the expected format", spec.Filename)
assert.LessOrEqualf(t, img.Bounds().Dx(), spec.MaxWidth, "thumbnail %s has wrong width", spec.Filename)

Some files were not shown because too many files have changed in this diff.