Fix: Tag Interface Delete Button #104256
.gitattributes (vendored, 29 lines changed)
@@ -3,3 +3,32 @@
/addon/flamenco/manager_README.md linguist-generated=true
/web/app/src/manager-api/** linguist-generated=true
**/*.gen.go linguist-generated=true

# Set the default newline behavior, in case people don't have core.autocrlf set.
* text=auto

*.cjs text
*.css text
*.csv text
*.go text
*.html text
*.ini text
*.js text
*.json text
*.map text
*.md text
*.py text
*.sh text
*.svg text
*.toml text
*.txt text
*.vue text
*.webapp text
*.webmanifest text
*.xml text
*.yaml text
/go.mod text
/go.sum text
/LICENSE text
/Makefile text
/VERSION text
.gitignore (vendored, 2 lines changed)
@@ -13,6 +13,7 @@
/flamenco-worker_race
/shaman-checkout-id-setter
/stresser
/job-creator
/addon-packer
flamenco-manager.yaml
flamenco-worker.yaml
@@ -47,3 +48,4 @@ web/project-website/resources/_gen/
# IDE related stuff
*.DS_Store
.vscode/settings.json
.vscode/launch.json
.vscode/launch.json (vendored, 24 lines changed)
@@ -1,24 +0,0 @@
{
// Use IntelliSense to learn about possible attributes.
// Hover to view descriptions of existing attributes.
// For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
"version": "0.2.0",
"configurations": [
{
"name": "Manager",
"type": "go",
"request": "launch",
"mode": "auto",
"program": "${workspaceFolder}/cmd/flamenco-manager",
"cwd": "${workspaceFolder}"
},
{
"name": "Worker",
"type": "go",
"request": "launch",
"mode": "auto",
"program": "${workspaceFolder}/cmd/flamenco-worker",
"cwd": "${workspaceFolder}"
}
]
}
@@ -8,11 +8,15 @@ bugs in actually-released versions.

- Improve speed of queueing up >100 simultaneous job deletions.
- Improve logging of job deletion.
- Add Worker Cluster support. Workers can be members of any number of clusters. Workers will only work on jobs that are assigned to that cluster. Jobs that do not have a cluster will be available to all workers, regardless of their cluster assignment. As a result, clusterless workers will only work on clusterless jobs.
- Add Worker Tag support. Workers can be members of any number of tags. Workers will only work on jobs that are assigned to that tag. Jobs that do not have a tag will be available to all workers, regardless of their tag assignment. As a result, tagless workers will only work on tagless jobs.
- Fix limitation where a job could have no more than 1000 tasks ([#104201](https://projects.blender.org/studio/flamenco/issues/104201))
- Add support for finding the top-level 'project' directory. When submitting files to Flamenco, the add-on will try to retain the directory structure of your Blender project as precisely as possible. This new feature allows the add-on to find the top-level directory of your project by finding a `.blender_project`, `.git`, or `.subversion` directory. This can be configured in the add-on preferences.
- Worker status is remembered when they sign off, so that workers when they come back online do so to the same state ([#99549](https://projects.blender.org/studio/flamenco/issues/99549)).
- Nicer version display for non-release builds. Instead of `3.3-alpha0-v3.2-76-gdd34d538`, show `3.3-alpha0 (v3.2-76-gdd34d538)`.
- Job settings: add a description for the `eval` field. This is shown in the tooltip of the 'set to automatic value' button, to make it clear what that button will do.
- Job settings: make it possible for a setting to be "linked" to its automatic value. For job settings that have this new feature enabled, they will not be editable by default, and the setting will just use its `eval` expression to determine the value. This can be toggled by the user in Blender's submission interface, to still allow manual edits of the value when needed.
- Database integrity tests. These are always run at startup of Flamenco Manager, and by default run periodically every hour. This can be configured by adding/changing the `database_check_period: 1h` setting in `flamenco-manager.yaml`. Setting it to `0` will disable the periodic check. When a database consistency error is found, Flamenco Manager will immediately shut down.


## 3.2 - released 2023-02-21

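The changelog entry above about finding the top-level 'project' directory describes a marker-based lookup. This is only a minimal sketch of that idea, not the add-on's actual implementation; the function name is invented, while the marker names come from the entry itself:

```python
from pathlib import Path
from typing import Iterable, Optional

# Hypothetical helper, for illustration only: walk up from the .blend file's
# directory until one of the project markers mentioned in the changelog is found.
def find_project_root(
    blend_path: Path,
    markers: Iterable[str] = (".blender_project", ".git", ".subversion"),
) -> Optional[Path]:
    for directory in [blend_path.parent, *blend_path.parent.parents]:
        if any((directory / marker).exists() for marker in markers):
            return directory
    return None  # No marker found; the caller falls back to its default behavior.
```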
Makefile (4 lines changed)
@@ -69,6 +69,10 @@ flamenco-worker:
stresser:
go build -v ${BUILD_FLAGS} ${PKG}/cmd/stresser

.PHONY: job-creator
job-creator:
go build -v ${BUILD_FLAGS} ${PKG}/cmd/job-creator

addon-packer: cmd/addon-packer/addon-packer.go
go build -v ${BUILD_FLAGS} ${PKG}/cmd/addon-packer

@@ -26,7 +26,7 @@ if __is_first_load:
comms,
preferences,
projects,
worker_clusters,
worker_tags,
)
else:
import importlib
@@ -37,7 +37,7 @@ else:
comms = importlib.reload(comms)
preferences = importlib.reload(preferences)
projects = importlib.reload(projects)
worker_clusters = importlib.reload(worker_clusters)
worker_tags = importlib.reload(worker_tags)

import bpy

@@ -155,7 +155,7 @@ def register() -> None:
)

preferences.register()
worker_clusters.register()
worker_tags.register()
operators.register()
gui.register()
job_types.register()
@@ -173,5 +173,5 @@ def unregister() -> None:
job_types.unregister()
gui.unregister()
operators.unregister()
worker_clusters.unregister()
worker_tags.unregister()
preferences.unregister()
@@ -43,10 +43,10 @@ class FLAMENCO_PT_job_submission(bpy.types.Panel):
col.prop(context.scene, "flamenco_job_name", text="Job Name")
col.prop(context.scene, "flamenco_job_priority", text="Priority")

# Worker cluster:
# Worker tag:
row = col.row(align=True)
row.prop(context.scene, "flamenco_worker_cluster", text="Cluster")
row.operator("flamenco.fetch_worker_clusters", text="", icon="FILE_REFRESH")
row.prop(context.scene, "flamenco_worker_tag", text="Tag")
row.operator("flamenco.fetch_worker_tags", text="", icon="FILE_REFRESH")

layout.separator()

@@ -95,8 +95,12 @@ class FLAMENCO_PT_job_submission(bpy.types.Panel):
return

row = layout.row(align=True)

if setting.get("editable", True):
self.draw_setting_editable(row, propgroup, setting)
if job_types.show_eval_on_submit_button(setting):
self.draw_setting_autoeval(row, propgroup, setting)
else:
self.draw_setting_editable(row, propgroup, setting)
else:
self.draw_setting_readonly(context, row, propgroup, setting)

@@ -122,6 +126,7 @@ class FLAMENCO_PT_job_submission(bpy.types.Panel):
props = layout.operator("flamenco.eval_setting", text="", icon="SCRIPTPLUGINS")
props.setting_key = setting.key
props.setting_eval = setting_eval
props.eval_description = job_types.eval_description(setting)

def draw_setting_readonly(
self,
@@ -132,6 +137,38 @@ class FLAMENCO_PT_job_submission(bpy.types.Panel):
) -> None:
layout.prop(propgroup, setting.key)

def draw_setting_autoeval(
self,
layout: bpy.types.UILayout,
propgroup: JobTypePropertyGroup,
setting: _AvailableJobSetting,
) -> None:
autoeval_enabled = job_types.setting_should_autoeval(propgroup, setting)
if autoeval_enabled:
# Mypy doesn't know the bl_rna attribute exists.
label = propgroup.bl_rna.properties[setting.key].name # type: ignore

split = layout.split(factor=0.4, align=True)
split.alignment = "RIGHT"
split.label(text=label)

row = split.row(align=True)
row.label(text=getattr(setting.eval_info, "description") or "")
row.prop(
propgroup,
job_types.setting_autoeval_propname(setting),
text="",
icon="LINKED",
)
else:
self.draw_setting_editable(layout, propgroup, setting)
layout.prop(
propgroup,
job_types.setting_autoeval_propname(setting),
text="",
icon="UNLINKED",
)

def draw_flamenco_status(
self, context: bpy.types.Context, layout: bpy.types.UILayout
) -> None:
@@ -54,9 +54,9 @@ def job_for_scene(scene: bpy.types.Scene) -> Optional[_SubmittedJob]:
type_etag=propgroup.job_type.etag,
)

worker_cluster: str = getattr(scene, "flamenco_worker_cluster", "")
if worker_cluster and worker_cluster != "-":
job.worker_cluster = worker_cluster
worker_tag: str = getattr(scene, "flamenco_worker_tag", "")
if worker_tag and worker_tag != "-":
job.worker_tag = worker_tag

return job

@@ -68,6 +68,45 @@ def setting_is_visible(setting: _AvailableJobSetting) -> bool:
return str(visibility) in {"visible", "submission"}


def setting_should_autoeval(
propgroup: job_types_propgroup.JobTypePropertyGroup,
setting: _AvailableJobSetting,
) -> bool:
if not setting_is_visible(setting):
# Invisible settings are there purely to be auto-evaluated.
return True

propname = setting_autoeval_propname(setting)
return getattr(propgroup, propname, False)


def show_eval_on_submit_button(setting: _AvailableJobSetting) -> bool:
"""Return whether this setting should show the 'eval on submit' toggle button."""

eval_info = setting.get("eval_info", None)
if not eval_info:
return False

show_button: bool = eval_info.get("show_link_button", False)
return show_button


def eval_description(setting: _AvailableJobSetting) -> str:
"""Return the 'eval description' of this setting, or an empty string if not found."""

eval_info = setting.get("eval_info", None)
if not eval_info:
return ""

description: str = eval_info.get("description", "")
return description


def setting_autoeval_propname(setting: _AvailableJobSetting) -> str:
"""Return the property name of the 'auto-eval' state for this setting."""
return f"autoeval_{setting.key}"


def _store_available_job_types(available_job_types: _AvailableJobTypes) -> None:
global _available_job_types
global _job_type_enum_items
@@ -131,8 +131,7 @@ class JobTypePropertyGroup:
setting value. Otherwise the default is used.
"""
for setting in self.job_type.settings:
if job_types.setting_is_visible(setting):
# Skip those settings that will be visible in the GUI.
if not job_types.setting_should_autoeval(self, setting):
continue

setting_eval = setting.get("eval", "")
@@ -253,10 +252,16 @@ def generate(job_type: _AvailableJobType) -> type[JobTypePropertyGroup]:
)
pg_type.__annotations__ = {}

# Add RNA properties for the settings.
for setting in job_type.settings:
prop = _create_property(job_type, setting)
pg_type.__annotations__[setting.key] = prop

if job_types.show_eval_on_submit_button(setting):
# Add RNA property for the 'auto-eval' toggle.
propname, prop = _create_autoeval_property(setting)
pg_type.__annotations__[propname] = prop

assert issubclass(pg_type, JobTypePropertyGroup), "did not expect type %r" % type(
pg_type
)
@@ -304,6 +309,29 @@ def _create_property(job_type: _AvailableJobType, setting: _AvailableJobSetting)
return prop


def _create_autoeval_property(
setting: _AvailableJobSetting,
) -> tuple[str, Any]:
from flamenco.manager.model.available_job_setting import AvailableJobSetting

assert isinstance(setting, AvailableJobSetting)

setting_name = _job_setting_key_to_label(setting.key)
prop_descr = (
"Automatically determine the value for %r when the job gets submitted"
% setting_name
)

prop = bpy.props.BoolProperty(
name="Use Automatic Value",
description=prop_descr,
default=True,
)

prop_name = job_types.setting_autoeval_propname(setting)
return prop_name, prop


def _find_prop_type(
job_type: _AvailableJobType, setting: _AvailableJobSetting
) -> tuple[Any, dict[str, Any]]:
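For illustration, this is roughly the RNA property that `_create_autoeval_property()` above produces for a hypothetical setting whose key is `frames`; the keyword arguments are the ones visible in the hunk, and the property name follows `job_types.setting_autoeval_propname()`. It is a sketch, not additional code from the patch:

```python
import bpy

# Sketch: the 'auto-eval' toggle that generate() registers next to a setting
# with key "frames". In the real code the description is built from the
# setting's label via _job_setting_key_to_label().
autoeval_frames = bpy.props.BoolProperty(
    name="Use Automatic Value",
    description="Automatically determine the value for 'Frames' when the job gets submitted",
    default=True,
)
```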
addon/flamenco/manager/api/worker_mgt_api.py (generated, 880 lines changed; diff suppressed because it is too large)
@@ -13,6 +13,7 @@ Name | Type | Description | Notes
**description** | **bool, date, datetime, dict, float, int, list, str, none_type** | The description/tooltip shown in the user interface. | [optional]
**default** | **bool, date, datetime, dict, float, int, list, str, none_type** | The default value shown to the user when determining this setting. | [optional]
**eval** | **str** | Python expression to be evaluated in order to determine the default value for this setting. | [optional]
**eval_info** | [**AvailableJobSettingEvalInfo**](AvailableJobSettingEvalInfo.md) | | [optional]
**visible** | [**AvailableJobSettingVisibility**](AvailableJobSettingVisibility.md) | | [optional]
**required** | **bool** | Whether to immediately reject a job definition, of this type, without this particular setting. | [optional] if omitted the server will use the default value of False
**editable** | **bool** | Whether to allow editing this setting after the job has been submitted. Would imply deleting all existing tasks for this job, and recompiling it. | [optional] if omitted the server will use the default value of False
addon/flamenco/manager/docs/AvailableJobSettingEvalInfo.md (generated, new file, 14 lines)
@@ -0,0 +1,14 @@
# AvailableJobSettingEvalInfo

Meta-data for the 'eval' expression.

## Properties
Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**show_link_button** | **bool** | Enables the 'eval on submit' toggle button behavior for this setting. A toggle button will be shown in Blender's submission interface. When toggled on, the `eval` expression will determine the setting's value. Manually editing the setting is then no longer possible, and instead of an input field, the 'description' string is shown. An example use is the to-be-rendered frame range, which by default automatically follows the scene range, but can be overridden manually when desired. | defaults to False
**description** | **str** | Description of what the 'eval' expression is doing. It is also used as placeholder text to show when the manual input field is hidden (because eval-on-submit has been toggled on by the user). | defaults to ""
**any string name** | **bool, date, datetime, dict, float, int, list, str, none_type** | any string name can be used but the value must be the correct type | [optional]

[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)

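To make the two properties above concrete, here is a small usage sketch of the generated model added by this change. Only the class name and keyword names come from the generated code; the values are invented examples, and the constructor call follows the pattern the other generated docs in this patch use for their models:

```python
from flamenco.manager.model.available_job_setting_eval_info import AvailableJobSettingEvalInfo
from pprint import pprint

# Both keyword arguments are optional; they default to False and "" respectively.
eval_info = AvailableJobSettingEvalInfo(
    show_link_button=True,
    description="Scene frame range",
)
pprint(eval_info)
```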
addon/flamenco/manager/docs/Job.md (generated, 2 lines changed)
@@ -17,7 +17,7 @@ Name | Type | Description | Notes
**settings** | [**JobSettings**](JobSettings.md) | | [optional]
**metadata** | [**JobMetadata**](JobMetadata.md) | | [optional]
**storage** | [**JobStorageInfo**](JobStorageInfo.md) | | [optional]
**worker_cluster** | **str** | Worker Cluster that should execute this job. When a cluster ID is given, only Workers in that cluster will be scheduled to work on it. If empty or ommitted, all workers can work on this job. | [optional]
**worker_tag** | **str** | Worker tag that should execute this job. When a tag ID is given, only Workers in that tag will be scheduled to work on it. If empty or ommitted, all workers can work on this job. | [optional]
**delete_requested_at** | **datetime** | If job deletion was requested, this is the timestamp at which that request was stored on Flamenco Manager. | [optional]
**any string name** | **bool, date, datetime, dict, float, int, list, str, none_type** | any string name can be used but the value must be the correct type | [optional]

addon/flamenco/manager/docs/JobsApi.md (generated, 4 lines changed)
@@ -1225,7 +1225,7 @@ with flamenco.manager.ApiClient() as api_client:
storage=JobStorageInfo(
shaman_checkout_id="shaman_checkout_id_example",
),
worker_cluster="worker_cluster_example",
worker_tag="worker_tag_example",
) # SubmittedJob | Job to submit

# example passing only required values which don't have defaults set
@@ -1307,7 +1307,7 @@ with flamenco.manager.ApiClient() as api_client:
storage=JobStorageInfo(
shaman_checkout_id="shaman_checkout_id_example",
),
worker_cluster="worker_cluster_example",
worker_tag="worker_tag_example",
) # SubmittedJob | Job to check

# example passing only required values which don't have defaults set
addon/flamenco/manager/docs/SubmittedJob.md (generated, 2 lines changed)
@@ -13,7 +13,7 @@ Name | Type | Description | Notes
**settings** | [**JobSettings**](JobSettings.md) | | [optional]
**metadata** | [**JobMetadata**](JobMetadata.md) | | [optional]
**storage** | [**JobStorageInfo**](JobStorageInfo.md) | | [optional]
**worker_cluster** | **str** | Worker Cluster that should execute this job. When a cluster ID is given, only Workers in that cluster will be scheduled to work on it. If empty or ommitted, all workers can work on this job. | [optional]
**worker_tag** | **str** | Worker tag that should execute this job. When a tag ID is given, only Workers in that tag will be scheduled to work on it. If empty or ommitted, all workers can work on this job. | [optional]
**any string name** | **bool, date, datetime, dict, float, int, list, str, none_type** | any string name can be used but the value must be the correct type | [optional]

[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
addon/flamenco/manager/docs/Worker.md (generated, 2 lines changed)
@@ -15,7 +15,7 @@ Name | Type | Description | Notes
**status_change** | [**WorkerStatusChangeRequest**](WorkerStatusChangeRequest.md) | | [optional]
**last_seen** | **datetime** | Last time this worker was seen by the Manager. | [optional]
**task** | [**WorkerTask**](WorkerTask.md) | | [optional]
**clusters** | [**[WorkerCluster]**](WorkerCluster.md) | Clusters of which this Worker is a member. | [optional]
**tags** | [**[WorkerTag]**](WorkerTag.md) | Tags of which this Worker is a member. | [optional]
**any string name** | **bool, date, datetime, dict, float, int, list, str, none_type** | any string name can be used but the value must be the correct type | [optional]

[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
addon/flamenco/manager/docs/WorkerAllOf.md (generated, 2 lines changed)
@@ -8,7 +8,7 @@ Name | Type | Description | Notes
**platform** | **str** | Operating system of the Worker |
**supported_task_types** | **[str]** | |
**task** | [**WorkerTask**](WorkerTask.md) | | [optional]
**clusters** | [**[WorkerCluster]**](WorkerCluster.md) | Clusters of which this Worker is a member. | [optional]
**tags** | [**[WorkerTag]**](WorkerTag.md) | Tags of which this Worker is a member. | [optional]
**any string name** | **bool, date, datetime, dict, float, int, list, str, none_type** | any string name can be used but the value must be the correct type | [optional]

[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
addon/flamenco/manager/docs/WorkerMgtApi.md (generated, 480 lines changed)
@ -4,24 +4,24 @@ All URIs are relative to *http://localhost*
|
||||
|
||||
Method | HTTP request | Description
|
||||
------------- | ------------- | -------------
|
||||
[**create_worker_cluster**](WorkerMgtApi.md#create_worker_cluster) | **POST** /api/v3/worker-mgt/clusters | Create a new worker cluster.
|
||||
[**create_worker_tag**](WorkerMgtApi.md#create_worker_tag) | **POST** /api/v3/worker-mgt/tags | Create a new worker tag.
|
||||
[**delete_worker**](WorkerMgtApi.md#delete_worker) | **DELETE** /api/v3/worker-mgt/workers/{worker_id} | Remove the given worker. It is recommended to only call this function when the worker is in `offline` state. If the worker is still running, stop it first. Any task still assigned to the worker will be requeued.
|
||||
[**delete_worker_cluster**](WorkerMgtApi.md#delete_worker_cluster) | **DELETE** /api/v3/worker-mgt/cluster/{cluster_id} | Remove this worker cluster. This unassigns all workers from the cluster and removes it.
|
||||
[**delete_worker_tag**](WorkerMgtApi.md#delete_worker_tag) | **DELETE** /api/v3/worker-mgt/tag/{tag_id} | Remove this worker tag. This unassigns all workers from the tag and removes it.
|
||||
[**fetch_worker**](WorkerMgtApi.md#fetch_worker) | **GET** /api/v3/worker-mgt/workers/{worker_id} | Fetch info about the worker.
|
||||
[**fetch_worker_cluster**](WorkerMgtApi.md#fetch_worker_cluster) | **GET** /api/v3/worker-mgt/cluster/{cluster_id} | Get a single worker cluster.
|
||||
[**fetch_worker_clusters**](WorkerMgtApi.md#fetch_worker_clusters) | **GET** /api/v3/worker-mgt/clusters | Get list of worker clusters.
|
||||
[**fetch_worker_sleep_schedule**](WorkerMgtApi.md#fetch_worker_sleep_schedule) | **GET** /api/v3/worker-mgt/workers/{worker_id}/sleep-schedule |
|
||||
[**fetch_worker_tag**](WorkerMgtApi.md#fetch_worker_tag) | **GET** /api/v3/worker-mgt/tag/{tag_id} | Get a single worker tag.
|
||||
[**fetch_worker_tags**](WorkerMgtApi.md#fetch_worker_tags) | **GET** /api/v3/worker-mgt/tags | Get list of worker tags.
|
||||
[**fetch_workers**](WorkerMgtApi.md#fetch_workers) | **GET** /api/v3/worker-mgt/workers | Get list of workers.
|
||||
[**request_worker_status_change**](WorkerMgtApi.md#request_worker_status_change) | **POST** /api/v3/worker-mgt/workers/{worker_id}/setstatus |
|
||||
[**set_worker_clusters**](WorkerMgtApi.md#set_worker_clusters) | **POST** /api/v3/worker-mgt/workers/{worker_id}/setclusters |
|
||||
[**set_worker_sleep_schedule**](WorkerMgtApi.md#set_worker_sleep_schedule) | **POST** /api/v3/worker-mgt/workers/{worker_id}/sleep-schedule |
|
||||
[**update_worker_cluster**](WorkerMgtApi.md#update_worker_cluster) | **PUT** /api/v3/worker-mgt/cluster/{cluster_id} | Update an existing worker cluster.
|
||||
[**set_worker_tags**](WorkerMgtApi.md#set_worker_tags) | **POST** /api/v3/worker-mgt/workers/{worker_id}/settags |
|
||||
[**update_worker_tag**](WorkerMgtApi.md#update_worker_tag) | **PUT** /api/v3/worker-mgt/tag/{tag_id} | Update an existing worker tag.
|
||||
|
||||
|
||||
# **create_worker_cluster**
|
||||
> WorkerCluster create_worker_cluster(worker_cluster)
|
||||
# **create_worker_tag**
|
||||
> WorkerTag create_worker_tag(worker_tag)
|
||||
|
||||
Create a new worker cluster.
|
||||
Create a new worker tag.
|
||||
|
||||
### Example
|
||||
|
||||
@ -31,7 +31,7 @@ import time
|
||||
import flamenco.manager
|
||||
from flamenco.manager.api import worker_mgt_api
|
||||
from flamenco.manager.model.error import Error
|
||||
from flamenco.manager.model.worker_cluster import WorkerCluster
|
||||
from flamenco.manager.model.worker_tag import WorkerTag
|
||||
from pprint import pprint
|
||||
# Defining the host is optional and defaults to http://localhost
|
||||
# See configuration.py for a list of all supported configuration parameters.
|
||||
@ -44,19 +44,19 @@ configuration = flamenco.manager.Configuration(
|
||||
with flamenco.manager.ApiClient() as api_client:
|
||||
# Create an instance of the API class
|
||||
api_instance = worker_mgt_api.WorkerMgtApi(api_client)
|
||||
worker_cluster = WorkerCluster(
|
||||
worker_tag = WorkerTag(
|
||||
id="id_example",
|
||||
name="name_example",
|
||||
description="description_example",
|
||||
) # WorkerCluster | The worker cluster.
|
||||
) # WorkerTag | The worker tag.
|
||||
|
||||
# example passing only required values which don't have defaults set
|
||||
try:
|
||||
# Create a new worker cluster.
|
||||
api_response = api_instance.create_worker_cluster(worker_cluster)
|
||||
# Create a new worker tag.
|
||||
api_response = api_instance.create_worker_tag(worker_tag)
|
||||
pprint(api_response)
|
||||
except flamenco.manager.ApiException as e:
|
||||
print("Exception when calling WorkerMgtApi->create_worker_cluster: %s\n" % e)
|
||||
print("Exception when calling WorkerMgtApi->create_worker_tag: %s\n" % e)
|
||||
```
|
||||
|
||||
|
||||
@ -64,11 +64,11 @@ with flamenco.manager.ApiClient() as api_client:
|
||||
|
||||
Name | Type | Description | Notes
|
||||
------------- | ------------- | ------------- | -------------
|
||||
**worker_cluster** | [**WorkerCluster**](WorkerCluster.md)| The worker cluster. |
|
||||
**worker_tag** | [**WorkerTag**](WorkerTag.md)| The worker tag. |
|
||||
|
||||
### Return type
|
||||
|
||||
[**WorkerCluster**](WorkerCluster.md)
|
||||
[**WorkerTag**](WorkerTag.md)
|
||||
|
||||
### Authorization
|
||||
|
||||
@ -84,7 +84,7 @@ No authorization required
|
||||
|
||||
| Status code | Description | Response headers |
|
||||
|-------------|-------------|------------------|
|
||||
**200** | The cluster was created. The created cluster is returned, so that the caller can know its UUID. | - |
|
||||
**200** | The tag was created. The created tag is returned, so that the caller can know its UUID. | - |
|
||||
**0** | Error message | - |
|
||||
|
||||
[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)
|
||||
@ -154,10 +154,10 @@ No authorization required
|
||||
|
||||
[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)
|
||||
|
||||
# **delete_worker_cluster**
|
||||
> delete_worker_cluster(cluster_id)
|
||||
# **delete_worker_tag**
|
||||
> delete_worker_tag(tag_id)
|
||||
|
||||
Remove this worker cluster. This unassigns all workers from the cluster and removes it.
|
||||
Remove this worker tag. This unassigns all workers from the tag and removes it.
|
||||
|
||||
### Example
|
||||
|
||||
@ -179,14 +179,14 @@ configuration = flamenco.manager.Configuration(
|
||||
with flamenco.manager.ApiClient() as api_client:
|
||||
# Create an instance of the API class
|
||||
api_instance = worker_mgt_api.WorkerMgtApi(api_client)
|
||||
cluster_id = "cluster_id_example" # str |
|
||||
tag_id = "tag_id_example" # str |
|
||||
|
||||
# example passing only required values which don't have defaults set
|
||||
try:
|
||||
# Remove this worker cluster. This unassigns all workers from the cluster and removes it.
|
||||
api_instance.delete_worker_cluster(cluster_id)
|
||||
# Remove this worker tag. This unassigns all workers from the tag and removes it.
|
||||
api_instance.delete_worker_tag(tag_id)
|
||||
except flamenco.manager.ApiException as e:
|
||||
print("Exception when calling WorkerMgtApi->delete_worker_cluster: %s\n" % e)
|
||||
print("Exception when calling WorkerMgtApi->delete_worker_tag: %s\n" % e)
|
||||
```
|
||||
|
||||
|
||||
@ -194,7 +194,7 @@ with flamenco.manager.ApiClient() as api_client:
|
||||
|
||||
Name | Type | Description | Notes
|
||||
------------- | ------------- | ------------- | -------------
|
||||
**cluster_id** | **str**| |
|
||||
**tag_id** | **str**| |
|
||||
|
||||
### Return type
|
||||
|
||||
@ -214,7 +214,7 @@ No authorization required
|
||||
|
||||
| Status code | Description | Response headers |
|
||||
|-------------|-------------|------------------|
|
||||
**204** | The cluster has been removed. | - |
|
||||
**204** | The tag has been removed. | - |
|
||||
**0** | Unexpected error. | - |
|
||||
|
||||
[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)
|
||||
@ -284,132 +284,6 @@ No authorization required
|
||||
|
||||
[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)
|
||||
|
||||
# **fetch_worker_cluster**
|
||||
> WorkerCluster fetch_worker_cluster(cluster_id)
|
||||
|
||||
Get a single worker cluster.
|
||||
|
||||
### Example
|
||||
|
||||
|
||||
```python
|
||||
import time
|
||||
import flamenco.manager
|
||||
from flamenco.manager.api import worker_mgt_api
|
||||
from flamenco.manager.model.worker_cluster import WorkerCluster
|
||||
from pprint import pprint
|
||||
# Defining the host is optional and defaults to http://localhost
|
||||
# See configuration.py for a list of all supported configuration parameters.
|
||||
configuration = flamenco.manager.Configuration(
|
||||
host = "http://localhost"
|
||||
)
|
||||
|
||||
|
||||
# Enter a context with an instance of the API client
|
||||
with flamenco.manager.ApiClient() as api_client:
|
||||
# Create an instance of the API class
|
||||
api_instance = worker_mgt_api.WorkerMgtApi(api_client)
|
||||
cluster_id = "cluster_id_example" # str |
|
||||
|
||||
# example passing only required values which don't have defaults set
|
||||
try:
|
||||
# Get a single worker cluster.
|
||||
api_response = api_instance.fetch_worker_cluster(cluster_id)
|
||||
pprint(api_response)
|
||||
except flamenco.manager.ApiException as e:
|
||||
print("Exception when calling WorkerMgtApi->fetch_worker_cluster: %s\n" % e)
|
||||
```
|
||||
|
||||
|
||||
### Parameters
|
||||
|
||||
Name | Type | Description | Notes
|
||||
------------- | ------------- | ------------- | -------------
|
||||
**cluster_id** | **str**| |
|
||||
|
||||
### Return type
|
||||
|
||||
[**WorkerCluster**](WorkerCluster.md)
|
||||
|
||||
### Authorization
|
||||
|
||||
No authorization required
|
||||
|
||||
### HTTP request headers
|
||||
|
||||
- **Content-Type**: Not defined
|
||||
- **Accept**: application/json
|
||||
|
||||
|
||||
### HTTP response details
|
||||
|
||||
| Status code | Description | Response headers |
|
||||
|-------------|-------------|------------------|
|
||||
**200** | The worker cluster. | - |
|
||||
|
||||
[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)
|
||||
|
||||
# **fetch_worker_clusters**
|
||||
> WorkerClusterList fetch_worker_clusters()
|
||||
|
||||
Get list of worker clusters.
|
||||
|
||||
### Example
|
||||
|
||||
|
||||
```python
|
||||
import time
|
||||
import flamenco.manager
|
||||
from flamenco.manager.api import worker_mgt_api
|
||||
from flamenco.manager.model.worker_cluster_list import WorkerClusterList
|
||||
from pprint import pprint
|
||||
# Defining the host is optional and defaults to http://localhost
|
||||
# See configuration.py for a list of all supported configuration parameters.
|
||||
configuration = flamenco.manager.Configuration(
|
||||
host = "http://localhost"
|
||||
)
|
||||
|
||||
|
||||
# Enter a context with an instance of the API client
|
||||
with flamenco.manager.ApiClient() as api_client:
|
||||
# Create an instance of the API class
|
||||
api_instance = worker_mgt_api.WorkerMgtApi(api_client)
|
||||
|
||||
# example, this endpoint has no required or optional parameters
|
||||
try:
|
||||
# Get list of worker clusters.
|
||||
api_response = api_instance.fetch_worker_clusters()
|
||||
pprint(api_response)
|
||||
except flamenco.manager.ApiException as e:
|
||||
print("Exception when calling WorkerMgtApi->fetch_worker_clusters: %s\n" % e)
|
||||
```
|
||||
|
||||
|
||||
### Parameters
|
||||
This endpoint does not need any parameter.
|
||||
|
||||
### Return type
|
||||
|
||||
[**WorkerClusterList**](WorkerClusterList.md)
|
||||
|
||||
### Authorization
|
||||
|
||||
No authorization required
|
||||
|
||||
### HTTP request headers
|
||||
|
||||
- **Content-Type**: Not defined
|
||||
- **Accept**: application/json
|
||||
|
||||
|
||||
### HTTP response details
|
||||
|
||||
| Status code | Description | Response headers |
|
||||
|-------------|-------------|------------------|
|
||||
**200** | Worker clusters. | - |
|
||||
|
||||
[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)
|
||||
|
||||
# **fetch_worker_sleep_schedule**
|
||||
> WorkerSleepSchedule fetch_worker_sleep_schedule(worker_id)
|
||||
|
||||
@ -477,6 +351,132 @@ No authorization required
|
||||
|
||||
[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)
|
||||
|
||||
# **fetch_worker_tag**
|
||||
> WorkerTag fetch_worker_tag(tag_id)
|
||||
|
||||
Get a single worker tag.
|
||||
|
||||
### Example
|
||||
|
||||
|
||||
```python
|
||||
import time
|
||||
import flamenco.manager
|
||||
from flamenco.manager.api import worker_mgt_api
|
||||
from flamenco.manager.model.worker_tag import WorkerTag
|
||||
from pprint import pprint
|
||||
# Defining the host is optional and defaults to http://localhost
|
||||
# See configuration.py for a list of all supported configuration parameters.
|
||||
configuration = flamenco.manager.Configuration(
|
||||
host = "http://localhost"
|
||||
)
|
||||
|
||||
|
||||
# Enter a context with an instance of the API client
|
||||
with flamenco.manager.ApiClient() as api_client:
|
||||
# Create an instance of the API class
|
||||
api_instance = worker_mgt_api.WorkerMgtApi(api_client)
|
||||
tag_id = "tag_id_example" # str |
|
||||
|
||||
# example passing only required values which don't have defaults set
|
||||
try:
|
||||
# Get a single worker tag.
|
||||
api_response = api_instance.fetch_worker_tag(tag_id)
|
||||
pprint(api_response)
|
||||
except flamenco.manager.ApiException as e:
|
||||
print("Exception when calling WorkerMgtApi->fetch_worker_tag: %s\n" % e)
|
||||
```
|
||||
|
||||
|
||||
### Parameters
|
||||
|
||||
Name | Type | Description | Notes
|
||||
------------- | ------------- | ------------- | -------------
|
||||
**tag_id** | **str**| |
|
||||
|
||||
### Return type
|
||||
|
||||
[**WorkerTag**](WorkerTag.md)
|
||||
|
||||
### Authorization
|
||||
|
||||
No authorization required
|
||||
|
||||
### HTTP request headers
|
||||
|
||||
- **Content-Type**: Not defined
|
||||
- **Accept**: application/json
|
||||
|
||||
|
||||
### HTTP response details
|
||||
|
||||
| Status code | Description | Response headers |
|
||||
|-------------|-------------|------------------|
|
||||
**200** | The worker tag. | - |
|
||||
|
||||
[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)
|
||||
|
||||
# **fetch_worker_tags**
|
||||
> WorkerTagList fetch_worker_tags()
|
||||
|
||||
Get list of worker tags.
|
||||
|
||||
### Example
|
||||
|
||||
|
||||
```python
|
||||
import time
|
||||
import flamenco.manager
|
||||
from flamenco.manager.api import worker_mgt_api
|
||||
from flamenco.manager.model.worker_tag_list import WorkerTagList
|
||||
from pprint import pprint
|
||||
# Defining the host is optional and defaults to http://localhost
|
||||
# See configuration.py for a list of all supported configuration parameters.
|
||||
configuration = flamenco.manager.Configuration(
|
||||
host = "http://localhost"
|
||||
)
|
||||
|
||||
|
||||
# Enter a context with an instance of the API client
|
||||
with flamenco.manager.ApiClient() as api_client:
|
||||
# Create an instance of the API class
|
||||
api_instance = worker_mgt_api.WorkerMgtApi(api_client)
|
||||
|
||||
# example, this endpoint has no required or optional parameters
|
||||
try:
|
||||
# Get list of worker tags.
|
||||
api_response = api_instance.fetch_worker_tags()
|
||||
pprint(api_response)
|
||||
except flamenco.manager.ApiException as e:
|
||||
print("Exception when calling WorkerMgtApi->fetch_worker_tags: %s\n" % e)
|
||||
```
|
||||
|
||||
|
||||
### Parameters
|
||||
This endpoint does not need any parameter.
|
||||
|
||||
### Return type
|
||||
|
||||
[**WorkerTagList**](WorkerTagList.md)
|
||||
|
||||
### Authorization
|
||||
|
||||
No authorization required
|
||||
|
||||
### HTTP request headers
|
||||
|
||||
- **Content-Type**: Not defined
|
||||
- **Accept**: application/json
|
||||
|
||||
|
||||
### HTTP response details
|
||||
|
||||
| Status code | Description | Response headers |
|
||||
|-------------|-------------|------------------|
|
||||
**200** | Worker tags. | - |
|
||||
|
||||
[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)
|
||||
|
||||
# **fetch_workers**
|
||||
> WorkerList fetch_workers()
|
||||
|
||||
@ -599,77 +599,6 @@ No authorization required
|
||||
- **Accept**: application/json
|
||||
|
||||
|
||||
### HTTP response details
|
||||
|
||||
| Status code | Description | Response headers |
|
||||
|-------------|-------------|------------------|
|
||||
**204** | Status change was accepted. | - |
|
||||
**0** | Unexpected error. | - |
|
||||
|
||||
[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)
|
||||
|
||||
# **set_worker_clusters**
|
||||
> set_worker_clusters(worker_id, worker_cluster_change_request)
|
||||
|
||||
|
||||
|
||||
### Example
|
||||
|
||||
|
||||
```python
|
||||
import time
|
||||
import flamenco.manager
|
||||
from flamenco.manager.api import worker_mgt_api
|
||||
from flamenco.manager.model.error import Error
|
||||
from flamenco.manager.model.worker_cluster_change_request import WorkerClusterChangeRequest
|
||||
from pprint import pprint
|
||||
# Defining the host is optional and defaults to http://localhost
|
||||
# See configuration.py for a list of all supported configuration parameters.
|
||||
configuration = flamenco.manager.Configuration(
|
||||
host = "http://localhost"
|
||||
)
|
||||
|
||||
|
||||
# Enter a context with an instance of the API client
|
||||
with flamenco.manager.ApiClient() as api_client:
|
||||
# Create an instance of the API class
|
||||
api_instance = worker_mgt_api.WorkerMgtApi(api_client)
|
||||
worker_id = "worker_id_example" # str |
|
||||
worker_cluster_change_request = WorkerClusterChangeRequest(
|
||||
cluster_ids=[
|
||||
"cluster_ids_example",
|
||||
],
|
||||
) # WorkerClusterChangeRequest | The list of cluster IDs this worker should be a member of.
|
||||
|
||||
# example passing only required values which don't have defaults set
|
||||
try:
|
||||
api_instance.set_worker_clusters(worker_id, worker_cluster_change_request)
|
||||
except flamenco.manager.ApiException as e:
|
||||
print("Exception when calling WorkerMgtApi->set_worker_clusters: %s\n" % e)
|
||||
```
|
||||
|
||||
|
||||
### Parameters
|
||||
|
||||
Name | Type | Description | Notes
|
||||
------------- | ------------- | ------------- | -------------
|
||||
**worker_id** | **str**| |
|
||||
**worker_cluster_change_request** | [**WorkerClusterChangeRequest**](WorkerClusterChangeRequest.md)| The list of cluster IDs this worker should be a member of. |
|
||||
|
||||
### Return type
|
||||
|
||||
void (empty response body)
|
||||
|
||||
### Authorization
|
||||
|
||||
No authorization required
|
||||
|
||||
### HTTP request headers
|
||||
|
||||
- **Content-Type**: application/json
|
||||
- **Accept**: application/json
|
||||
|
||||
|
||||
### HTTP response details
|
||||
|
||||
| Status code | Description | Response headers |
|
||||
@ -751,10 +680,10 @@ No authorization required
|
||||
|
||||
[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)
|
||||
|
||||
# **update_worker_cluster**
|
||||
> update_worker_cluster(cluster_id, worker_cluster)
|
||||
# **set_worker_tags**
|
||||
> set_worker_tags(worker_id, worker_tag_change_request)
|
||||
|
||||
|
||||
Update an existing worker cluster.
|
||||
|
||||
### Example
|
||||
|
||||
@ -764,7 +693,7 @@ import time
|
||||
import flamenco.manager
|
||||
from flamenco.manager.api import worker_mgt_api
|
||||
from flamenco.manager.model.error import Error
|
||||
from flamenco.manager.model.worker_cluster import WorkerCluster
|
||||
from flamenco.manager.model.worker_tag_change_request import WorkerTagChangeRequest
|
||||
from pprint import pprint
|
||||
# Defining the host is optional and defaults to http://localhost
|
||||
# See configuration.py for a list of all supported configuration parameters.
|
||||
@ -777,19 +706,18 @@ configuration = flamenco.manager.Configuration(
|
||||
with flamenco.manager.ApiClient() as api_client:
|
||||
# Create an instance of the API class
|
||||
api_instance = worker_mgt_api.WorkerMgtApi(api_client)
|
||||
cluster_id = "cluster_id_example" # str |
|
||||
worker_cluster = WorkerCluster(
|
||||
id="id_example",
|
||||
name="name_example",
|
||||
description="description_example",
|
||||
) # WorkerCluster | The updated worker cluster.
|
||||
worker_id = "worker_id_example" # str |
|
||||
worker_tag_change_request = WorkerTagChangeRequest(
|
||||
tag_ids=[
|
||||
"tag_ids_example",
|
||||
],
|
||||
) # WorkerTagChangeRequest | The list of worker tag IDs this worker should be a member of.
|
||||
|
||||
# example passing only required values which don't have defaults set
|
||||
try:
|
||||
# Update an existing worker cluster.
|
||||
api_instance.update_worker_cluster(cluster_id, worker_cluster)
|
||||
api_instance.set_worker_tags(worker_id, worker_tag_change_request)
|
||||
except flamenco.manager.ApiException as e:
|
||||
print("Exception when calling WorkerMgtApi->update_worker_cluster: %s\n" % e)
|
||||
print("Exception when calling WorkerMgtApi->set_worker_tags: %s\n" % e)
|
||||
```
|
||||
|
||||
|
||||
@ -797,8 +725,8 @@ with flamenco.manager.ApiClient() as api_client:
|
||||
|
||||
Name | Type | Description | Notes
|
||||
------------- | ------------- | ------------- | -------------
|
||||
**cluster_id** | **str**| |
|
||||
**worker_cluster** | [**WorkerCluster**](WorkerCluster.md)| The updated worker cluster. |
|
||||
**worker_id** | **str**| |
|
||||
**worker_tag_change_request** | [**WorkerTagChangeRequest**](WorkerTagChangeRequest.md)| The list of worker tag IDs this worker should be a member of. |
|
||||
|
||||
### Return type
|
||||
|
||||
@ -818,7 +746,79 @@ No authorization required
|
||||
|
||||
| Status code | Description | Response headers |
|
||||
|-------------|-------------|------------------|
|
||||
**204** | The cluster update has been stored. | - |
|
||||
**204** | Status change was accepted. | - |
|
||||
**0** | Unexpected error. | - |
|
||||
|
||||
[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)
|
||||
|
||||
# **update_worker_tag**
|
||||
> update_worker_tag(tag_id, worker_tag)
|
||||
|
||||
Update an existing worker tag.
|
||||
|
||||
### Example
|
||||
|
||||
|
||||
```python
|
||||
import time
|
||||
import flamenco.manager
|
||||
from flamenco.manager.api import worker_mgt_api
|
||||
from flamenco.manager.model.error import Error
|
||||
from flamenco.manager.model.worker_tag import WorkerTag
|
||||
from pprint import pprint
|
||||
# Defining the host is optional and defaults to http://localhost
|
||||
# See configuration.py for a list of all supported configuration parameters.
|
||||
configuration = flamenco.manager.Configuration(
|
||||
host = "http://localhost"
|
||||
)
|
||||
|
||||
|
||||
# Enter a context with an instance of the API client
|
||||
with flamenco.manager.ApiClient() as api_client:
|
||||
# Create an instance of the API class
|
||||
api_instance = worker_mgt_api.WorkerMgtApi(api_client)
|
||||
tag_id = "tag_id_example" # str |
|
||||
worker_tag = WorkerTag(
|
||||
id="id_example",
|
||||
name="name_example",
|
||||
description="description_example",
|
||||
) # WorkerTag | The updated worker tag.
|
||||
|
||||
# example passing only required values which don't have defaults set
|
||||
try:
|
||||
# Update an existing worker tag.
|
||||
api_instance.update_worker_tag(tag_id, worker_tag)
|
||||
except flamenco.manager.ApiException as e:
|
||||
print("Exception when calling WorkerMgtApi->update_worker_tag: %s\n" % e)
|
||||
```
|
||||
|
||||
|
||||
### Parameters
|
||||
|
||||
Name | Type | Description | Notes
|
||||
------------- | ------------- | ------------- | -------------
|
||||
**tag_id** | **str**| |
|
||||
**worker_tag** | [**WorkerTag**](WorkerTag.md)| The updated worker tag. |
|
||||
|
||||
### Return type
|
||||
|
||||
void (empty response body)
|
||||
|
||||
### Authorization
|
||||
|
||||
No authorization required
|
||||
|
||||
### HTTP request headers
|
||||
|
||||
- **Content-Type**: application/json
|
||||
- **Accept**: application/json
|
||||
|
||||
|
||||
### HTTP response details
|
||||
|
||||
| Status code | Description | Response headers |
|
||||
|-------------|-------------|------------------|
|
||||
**204** | The tag update has been stored. | - |
|
||||
**0** | Error message | - |
|
||||
|
||||
[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)
|
||||
|
@@ -1,12 +1,12 @@
# WorkerCluster
# WorkerTag

Cluster of workers. A job can optionally specify which cluster it should be limited to. Workers can be part of multiple clusters simultaneously.
Tag of workers. A job can optionally specify which tag it should be limited to. Workers can be part of multiple tags simultaneously.

## Properties
Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**name** | **str** | |
**id** | **str** | UUID of the cluster. Can be ommitted when creating a new cluster, in which case a random UUID will be assigned. | [optional]
**id** | **str** | UUID of the tag. Can be ommitted when creating a new tag, in which case a random UUID will be assigned. | [optional]
**description** | **str** | | [optional]
**any string name** | **bool, date, datetime, dict, float, int, list, str, none_type** | any string name can be used but the value must be the correct type | [optional]
@@ -1,11 +1,11 @@
# WorkerClusterChangeRequest
# WorkerTagChangeRequest

Request to change which clusters this Worker is assigned to.
Request to change which tags this Worker is assigned to.

## Properties
Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**cluster_ids** | **[str]** | |
**tag_ids** | **[str]** | |
**any string name** | **bool, date, datetime, dict, float, int, list, str, none_type** | any string name can be used but the value must be the correct type | [optional]

[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
@@ -1,10 +1,10 @@
# WorkerClusterList
# WorkerTagList


## Properties
Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**clusters** | [**[WorkerCluster]**](WorkerCluster.md) | | [optional]
**tags** | [**[WorkerTag]**](WorkerTag.md) | | [optional]
**any string name** | **bool, date, datetime, dict, float, int, list, str, none_type** | any string name can be used but the value must be the correct type | [optional]

[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
@@ -30,9 +30,11 @@ from flamenco.manager.exceptions import ApiAttributeError


def lazy_import():
from flamenco.manager.model.available_job_setting_eval_info import AvailableJobSettingEvalInfo
from flamenco.manager.model.available_job_setting_subtype import AvailableJobSettingSubtype
from flamenco.manager.model.available_job_setting_type import AvailableJobSettingType
from flamenco.manager.model.available_job_setting_visibility import AvailableJobSettingVisibility
globals()['AvailableJobSettingEvalInfo'] = AvailableJobSettingEvalInfo
globals()['AvailableJobSettingSubtype'] = AvailableJobSettingSubtype
globals()['AvailableJobSettingType'] = AvailableJobSettingType
globals()['AvailableJobSettingVisibility'] = AvailableJobSettingVisibility
@@ -99,6 +101,7 @@ class AvailableJobSetting(ModelNormal):
'description': (bool, date, datetime, dict, float, int, list, str, none_type,), # noqa: E501
'default': (bool, date, datetime, dict, float, int, list, str, none_type,), # noqa: E501
'eval': (str,), # noqa: E501
'eval_info': (AvailableJobSettingEvalInfo,), # noqa: E501
'visible': (AvailableJobSettingVisibility,), # noqa: E501
'required': (bool,), # noqa: E501
'editable': (bool,), # noqa: E501
@@ -118,6 +121,7 @@ class AvailableJobSetting(ModelNormal):
'description': 'description', # noqa: E501
'default': 'default', # noqa: E501
'eval': 'eval', # noqa: E501
'eval_info': 'evalInfo', # noqa: E501
'visible': 'visible', # noqa: E501
'required': 'required', # noqa: E501
'editable': 'editable', # noqa: E501
@@ -174,6 +178,7 @@ class AvailableJobSetting(ModelNormal):
description (bool, date, datetime, dict, float, int, list, str, none_type): The description/tooltip shown in the user interface.. [optional] # noqa: E501
default (bool, date, datetime, dict, float, int, list, str, none_type): The default value shown to the user when determining this setting.. [optional] # noqa: E501
eval (str): Python expression to be evaluated in order to determine the default value for this setting.. [optional] # noqa: E501
eval_info (AvailableJobSettingEvalInfo): [optional] # noqa: E501
visible (AvailableJobSettingVisibility): [optional] # noqa: E501
required (bool): Whether to immediately reject a job definition, of this type, without this particular setting. . [optional] if omitted the server will use the default value of False # noqa: E501
editable (bool): Whether to allow editing this setting after the job has been submitted. Would imply deleting all existing tasks for this job, and recompiling it. . [optional] if omitted the server will use the default value of False # noqa: E501
@@ -270,6 +275,7 @@ class AvailableJobSetting(ModelNormal):
description (bool, date, datetime, dict, float, int, list, str, none_type): The description/tooltip shown in the user interface.. [optional] # noqa: E501
default (bool, date, datetime, dict, float, int, list, str, none_type): The default value shown to the user when determining this setting.. [optional] # noqa: E501
eval (str): Python expression to be evaluated in order to determine the default value for this setting.. [optional] # noqa: E501
eval_info (AvailableJobSettingEvalInfo): [optional] # noqa: E501
visible (AvailableJobSettingVisibility): [optional] # noqa: E501
required (bool): Whether to immediately reject a job definition, of this type, without this particular setting. . [optional] if omitted the server will use the default value of False # noqa: E501
editable (bool): Whether to allow editing this setting after the job has been submitted. Would imply deleting all existing tasks for this job, and recompiling it. . [optional] if omitted the server will use the default value of False # noqa: E501
addon/flamenco/manager/model/available_job_setting_eval_info.py (generated, new file, 271 lines)
@ -0,0 +1,271 @@
|
||||
"""
|
||||
Flamenco manager
|
||||
|
||||
Render Farm manager API # noqa: E501
|
||||
|
||||
The version of the OpenAPI document: 1.0.0
|
||||
Generated by: https://openapi-generator.tech
|
||||
"""
|
||||
|
||||
|
||||
import re # noqa: F401
|
||||
import sys # noqa: F401
|
||||
|
||||
from flamenco.manager.model_utils import ( # noqa: F401
|
||||
ApiTypeError,
|
||||
ModelComposed,
|
||||
ModelNormal,
|
||||
ModelSimple,
|
||||
cached_property,
|
||||
change_keys_js_to_python,
|
||||
convert_js_args_to_python_args,
|
||||
date,
|
||||
datetime,
|
||||
file_type,
|
||||
none_type,
|
||||
validate_get_composed_info,
|
||||
OpenApiModel
|
||||
)
|
||||
from flamenco.manager.exceptions import ApiAttributeError
|
||||
|
||||
|
||||
|
||||
class AvailableJobSettingEvalInfo(ModelNormal):
|
||||
"""NOTE: This class is auto generated by OpenAPI Generator.
|
||||
Ref: https://openapi-generator.tech
|
||||
|
||||
Do not edit the class manually.
|
||||
|
||||
Attributes:
|
||||
allowed_values (dict): The key is the tuple path to the attribute
|
||||
and the for var_name this is (var_name,). The value is a dict
|
||||
with a capitalized key describing the allowed value and an allowed
|
||||
value. These dicts store the allowed enum values.
|
||||
attribute_map (dict): The key is attribute name
|
||||
and the value is json key in definition.
|
||||
discriminator_value_class_map (dict): A dict to go from the discriminator
|
||||
variable value to the discriminator class name.
|
||||
validations (dict): The key is the tuple path to the attribute
|
||||
and the for var_name this is (var_name,). The value is a dict
|
||||
that stores validations for max_length, min_length, max_items,
|
||||
min_items, exclusive_maximum, inclusive_maximum, exclusive_minimum,
|
||||
inclusive_minimum, and regex.
|
||||
additional_properties_type (tuple): A tuple of classes accepted
|
||||
as additional properties values.
|
||||
"""
|
||||
|
||||
allowed_values = {
|
||||
}
|
||||
|
||||
validations = {
|
||||
}
|
||||
|
||||
@cached_property
|
||||
def additional_properties_type():
|
||||
"""
|
||||
This must be a method because a model may have properties that are
|
||||
of type self, this must run after the class is loaded
|
||||
"""
|
||||
return (bool, date, datetime, dict, float, int, list, str, none_type,) # noqa: E501
|
||||
|
||||
_nullable = False
|
||||
|
||||
@cached_property
|
||||
def openapi_types():
|
||||
"""
|
||||
This must be a method because a model may have properties that are
|
||||
of type self, this must run after the class is loaded
|
||||
|
||||
Returns
|
||||
openapi_types (dict): The key is attribute name
|
||||
and the value is attribute type.
|
||||
"""
|
||||
return {
|
||||
'show_link_button': (bool,), # noqa: E501
|
||||
'description': (str,), # noqa: E501
|
||||
}
|
||||
|
||||
@cached_property
|
||||
def discriminator():
|
||||
return None
|
||||
|
||||
|
||||
attribute_map = {
|
||||
'show_link_button': 'showLinkButton', # noqa: E501
|
||||
'description': 'description', # noqa: E501
|
||||
}
|
||||
|
||||
read_only_vars = {
|
||||
}
|
||||
|
||||
_composed_schemas = {}
|
||||
|
||||
@classmethod
|
||||
@convert_js_args_to_python_args
|
||||
def _from_openapi_data(cls, *args, **kwargs): # noqa: E501
|
||||
"""AvailableJobSettingEvalInfo - a model defined in OpenAPI
|
||||
|
||||
Args:
|
||||
|
||||
Keyword Args:
|
||||
show_link_button (bool): Enables the 'eval on submit' toggle button behavior for this setting. A toggle button will be shown in Blender's submission interface. When toggled on, the `eval` expression will determine the setting's value. Manually editing the setting is then no longer possible, and instead of an input field, the 'description' string is shown. An example use is the to-be-rendered frame range, which by default automatically follows the scene range, but can be overridden manually when desired. . defaults to False # noqa: E501
|
||||
description (str): Description of what the 'eval' expression is doing. It is also used as placeholder text to show when the manual input field is hidden (because eval-on-submit has been toggled on by the user). . defaults to "" # noqa: E501
|
||||
_check_type (bool): if True, values for parameters in openapi_types
|
||||
will be type checked and a TypeError will be
|
||||
raised if the wrong type is input.
|
||||
Defaults to True
|
||||
_path_to_item (tuple/list): This is a list of keys or values to
|
||||
drill down to the model in received_data
|
||||
when deserializing a response
|
||||
_spec_property_naming (bool): True if the variable names in the input data
|
||||
are serialized names, as specified in the OpenAPI document.
|
||||
False if the variable names in the input data
|
||||
are pythonic names, e.g. snake case (default)
|
||||
_configuration (Configuration): the instance to use when
|
||||
deserializing a file_type parameter.
|
||||
If passed, type conversion is attempted
|
||||
If omitted no type conversion is done.
|
||||
_visited_composed_classes (tuple): This stores a tuple of
|
||||
classes that we have traveled through so that
|
||||
if we see that class again we will not use its
|
||||
discriminator again.
|
||||
When traveling through a discriminator, the
|
||||
composed schema that is
|
||||
is traveled through is added to this set.
|
||||
For example if Animal has a discriminator
|
||||
petType and we pass in "Dog", and the class Dog
|
||||
allOf includes Animal, we move through Animal
|
||||
once using the discriminator, and pick Dog.
|
||||
Then in Dog, we will make an instance of the
|
||||
Animal class but this time we won't travel
|
||||
through its discriminator because we passed in
|
||||
_visited_composed_classes = (Animal,)
|
||||
"""
|
||||
|
||||
show_link_button = kwargs.get('show_link_button', False)
|
||||
description = kwargs.get('description', "")
|
||||
_check_type = kwargs.pop('_check_type', True)
|
||||
_spec_property_naming = kwargs.pop('_spec_property_naming', False)
|
||||
_path_to_item = kwargs.pop('_path_to_item', ())
|
||||
_configuration = kwargs.pop('_configuration', None)
|
||||
_visited_composed_classes = kwargs.pop('_visited_composed_classes', ())
|
||||
|
||||
self = super(OpenApiModel, cls).__new__(cls)
|
||||
|
||||
if args:
|
||||
raise ApiTypeError(
|
||||
"Invalid positional arguments=%s passed to %s. Remove those invalid positional arguments." % (
|
||||
args,
|
||||
self.__class__.__name__,
|
||||
),
|
||||
path_to_item=_path_to_item,
|
||||
valid_classes=(self.__class__,),
|
||||
)
|
||||
|
||||
self._data_store = {}
|
||||
self._check_type = _check_type
|
||||
self._spec_property_naming = _spec_property_naming
|
||||
self._path_to_item = _path_to_item
|
||||
self._configuration = _configuration
|
||||
self._visited_composed_classes = _visited_composed_classes + (self.__class__,)
|
||||
|
||||
self.show_link_button = show_link_button
|
||||
self.description = description
|
||||
for var_name, var_value in kwargs.items():
|
||||
if var_name not in self.attribute_map and \
|
||||
self._configuration is not None and \
|
||||
self._configuration.discard_unknown_keys and \
|
||||
self.additional_properties_type is None:
|
||||
# discard variable.
|
||||
continue
|
||||
setattr(self, var_name, var_value)
|
||||
return self
|
||||
|
||||
required_properties = set([
|
||||
'_data_store',
|
||||
'_check_type',
|
||||
'_spec_property_naming',
|
||||
'_path_to_item',
|
||||
'_configuration',
|
||||
'_visited_composed_classes',
|
||||
])
|
||||
|
||||
@convert_js_args_to_python_args
|
||||
def __init__(self, *args, **kwargs): # noqa: E501
|
||||
"""AvailableJobSettingEvalInfo - a model defined in OpenAPI
|
||||
|
||||
Args:
|
||||
|
||||
Keyword Args:
|
||||
show_link_button (bool): Enables the 'eval on submit' toggle button behavior for this setting. A toggle button will be shown in Blender's submission interface. When toggled on, the `eval` expression will determine the setting's value. Manually editing the setting is then no longer possible, and instead of an input field, the 'description' string is shown. An example use is the to-be-rendered frame range, which by default automatically follows the scene range, but can be overridden manually when desired. . defaults to False # noqa: E501
|
||||
description (str): Description of what the 'eval' expression is doing. It is also used as placeholder text to show when the manual input field is hidden (because eval-on-submit has been toggled on by the user). . defaults to "" # noqa: E501
|
||||
_check_type (bool): if True, values for parameters in openapi_types
|
||||
will be type checked and a TypeError will be
|
||||
raised if the wrong type is input.
|
||||
Defaults to True
|
||||
_path_to_item (tuple/list): This is a list of keys or values to
|
||||
drill down to the model in received_data
|
||||
when deserializing a response
|
||||
_spec_property_naming (bool): True if the variable names in the input data
|
||||
are serialized names, as specified in the OpenAPI document.
|
||||
False if the variable names in the input data
|
||||
are pythonic names, e.g. snake case (default)
|
||||
_configuration (Configuration): the instance to use when
|
||||
deserializing a file_type parameter.
|
||||
If passed, type conversion is attempted
|
||||
If omitted no type conversion is done.
|
||||
_visited_composed_classes (tuple): This stores a tuple of
|
||||
classes that we have traveled through so that
|
||||
if we see that class again we will not use its
|
||||
discriminator again.
|
||||
When traveling through a discriminator, the
|
||||
composed schema that is
|
||||
is traveled through is added to this set.
|
||||
For example if Animal has a discriminator
|
||||
petType and we pass in "Dog", and the class Dog
|
||||
allOf includes Animal, we move through Animal
|
||||
once using the discriminator, and pick Dog.
|
||||
Then in Dog, we will make an instance of the
|
||||
Animal class but this time we won't travel
|
||||
through its discriminator because we passed in
|
||||
_visited_composed_classes = (Animal,)
|
||||
"""
|
||||
|
||||
show_link_button = kwargs.get('show_link_button', False)
|
||||
description = kwargs.get('description', "")
|
||||
_check_type = kwargs.pop('_check_type', True)
|
||||
_spec_property_naming = kwargs.pop('_spec_property_naming', False)
|
||||
_path_to_item = kwargs.pop('_path_to_item', ())
|
||||
_configuration = kwargs.pop('_configuration', None)
|
||||
_visited_composed_classes = kwargs.pop('_visited_composed_classes', ())
|
||||
|
||||
if args:
|
||||
raise ApiTypeError(
|
||||
"Invalid positional arguments=%s passed to %s. Remove those invalid positional arguments." % (
|
||||
args,
|
||||
self.__class__.__name__,
|
||||
),
|
||||
path_to_item=_path_to_item,
|
||||
valid_classes=(self.__class__,),
|
||||
)
|
||||
|
||||
self._data_store = {}
|
||||
self._check_type = _check_type
|
||||
self._spec_property_naming = _spec_property_naming
|
||||
self._path_to_item = _path_to_item
|
||||
self._configuration = _configuration
|
||||
self._visited_composed_classes = _visited_composed_classes + (self.__class__,)
|
||||
|
||||
self.show_link_button = show_link_button
|
||||
self.description = description
|
||||
for var_name, var_value in kwargs.items():
|
||||
if var_name not in self.attribute_map and \
|
||||
self._configuration is not None and \
|
||||
self._configuration.discard_unknown_keys and \
|
||||
self.additional_properties_type is None:
|
||||
# discard variable.
|
||||
continue
|
||||
setattr(self, var_name, var_value)
|
||||
if var_name in self.read_only_vars:
|
||||
raise ApiAttributeError(f"`{var_name}` is a read-only attribute. Use `from_openapi_data` to instantiate "
|
||||
f"class with read only attributes.")
|
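For orientation, the new AvailableJobSettingEvalInfo model above is used like any other generated model: it is built from keyword arguments and exposes them as attributes. A minimal sketch (the values are made-up placeholders, not taken from this changeset):

from flamenco.manager.model.available_job_setting_eval_info import AvailableJobSettingEvalInfo

# Describe eval-on-submit behaviour for a job setting: show the toggle button
# in Blender's submission UI and explain what the expression computes.
eval_info = AvailableJobSettingEvalInfo(
    show_link_button=True,
    description="Scene frame range",
)
print(eval_info.show_link_button, eval_info.description)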
8 addon/flamenco/manager/model/job.py generated
@ -110,7 +110,7 @@ class Job(ModelComposed):
'settings': (JobSettings,), # noqa: E501
'metadata': (JobMetadata,), # noqa: E501
'storage': (JobStorageInfo,), # noqa: E501
'worker_cluster': (str,), # noqa: E501
'worker_tag': (str,), # noqa: E501
'delete_requested_at': (datetime,), # noqa: E501
}

@ -133,7 +133,7 @@ class Job(ModelComposed):
'settings': 'settings', # noqa: E501
'metadata': 'metadata', # noqa: E501
'storage': 'storage', # noqa: E501
'worker_cluster': 'worker_cluster', # noqa: E501
'worker_tag': 'worker_tag', # noqa: E501
'delete_requested_at': 'delete_requested_at', # noqa: E501
}

@ -189,7 +189,7 @@ class Job(ModelComposed):
settings (JobSettings): [optional] # noqa: E501
metadata (JobMetadata): [optional] # noqa: E501
storage (JobStorageInfo): [optional] # noqa: E501
worker_cluster (str): Worker Cluster that should execute this job. When a cluster ID is given, only Workers in that cluster will be scheduled to work on it. If empty or ommitted, all workers can work on this job. . [optional] # noqa: E501
worker_tag (str): Worker tag that should execute this job. When a tag ID is given, only Workers in that tag will be scheduled to work on it. If empty or ommitted, all workers can work on this job. . [optional] # noqa: E501
delete_requested_at (datetime): If job deletion was requested, this is the timestamp at which that request was stored on Flamenco Manager. . [optional] # noqa: E501
"""

@ -304,7 +304,7 @@ class Job(ModelComposed):
settings (JobSettings): [optional] # noqa: E501
metadata (JobMetadata): [optional] # noqa: E501
storage (JobStorageInfo): [optional] # noqa: E501
worker_cluster (str): Worker Cluster that should execute this job. When a cluster ID is given, only Workers in that cluster will be scheduled to work on it. If empty or ommitted, all workers can work on this job. . [optional] # noqa: E501
worker_tag (str): Worker tag that should execute this job. When a tag ID is given, only Workers in that tag will be scheduled to work on it. If empty or ommitted, all workers can work on this job. . [optional] # noqa: E501
delete_requested_at (datetime): If job deletion was requested, this is the timestamp at which that request was stored on Flamenco Manager. . [optional] # noqa: E501
"""

8 addon/flamenco/manager/model/submitted_job.py generated
@ -99,7 +99,7 @@ class SubmittedJob(ModelNormal):
'settings': (JobSettings,), # noqa: E501
'metadata': (JobMetadata,), # noqa: E501
'storage': (JobStorageInfo,), # noqa: E501
'worker_cluster': (str,), # noqa: E501
'worker_tag': (str,), # noqa: E501
}

@cached_property
@ -116,7 +116,7 @@ class SubmittedJob(ModelNormal):
'settings': 'settings', # noqa: E501
'metadata': 'metadata', # noqa: E501
'storage': 'storage', # noqa: E501
'worker_cluster': 'worker_cluster', # noqa: E501
'worker_tag': 'worker_tag', # noqa: E501
}

read_only_vars = {
@ -170,7 +170,7 @@ class SubmittedJob(ModelNormal):
settings (JobSettings): [optional] # noqa: E501
metadata (JobMetadata): [optional] # noqa: E501
storage (JobStorageInfo): [optional] # noqa: E501
worker_cluster (str): Worker Cluster that should execute this job. When a cluster ID is given, only Workers in that cluster will be scheduled to work on it. If empty or ommitted, all workers can work on this job. . [optional] # noqa: E501
worker_tag (str): Worker tag that should execute this job. When a tag ID is given, only Workers in that tag will be scheduled to work on it. If empty or ommitted, all workers can work on this job. . [optional] # noqa: E501
"""

priority = kwargs.get('priority', 50)
@ -267,7 +267,7 @@ class SubmittedJob(ModelNormal):
settings (JobSettings): [optional] # noqa: E501
metadata (JobMetadata): [optional] # noqa: E501
storage (JobStorageInfo): [optional] # noqa: E501
worker_cluster (str): Worker Cluster that should execute this job. When a cluster ID is given, only Workers in that cluster will be scheduled to work on it. If empty or ommitted, all workers can work on this job. . [optional] # noqa: E501
worker_tag (str): Worker tag that should execute this job. When a tag ID is given, only Workers in that tag will be scheduled to work on it. If empty or ommitted, all workers can work on this job. . [optional] # noqa: E501
"""

priority = kwargs.get('priority', 50)
12 addon/flamenco/manager/model/worker.py generated
@ -31,16 +31,16 @@ from flamenco.manager.exceptions import ApiAttributeError

def lazy_import():
from flamenco.manager.model.worker_all_of import WorkerAllOf
from flamenco.manager.model.worker_cluster import WorkerCluster
from flamenco.manager.model.worker_status import WorkerStatus
from flamenco.manager.model.worker_status_change_request import WorkerStatusChangeRequest
from flamenco.manager.model.worker_summary import WorkerSummary
from flamenco.manager.model.worker_tag import WorkerTag
from flamenco.manager.model.worker_task import WorkerTask
globals()['WorkerAllOf'] = WorkerAllOf
globals()['WorkerCluster'] = WorkerCluster
globals()['WorkerStatus'] = WorkerStatus
globals()['WorkerStatusChangeRequest'] = WorkerStatusChangeRequest
globals()['WorkerSummary'] = WorkerSummary
globals()['WorkerTag'] = WorkerTag
globals()['WorkerTask'] = WorkerTask


@ -107,7 +107,7 @@ class Worker(ModelComposed):
'status_change': (WorkerStatusChangeRequest,), # noqa: E501
'last_seen': (datetime,), # noqa: E501
'task': (WorkerTask,), # noqa: E501
'clusters': ([WorkerCluster],), # noqa: E501
'tags': ([WorkerTag],), # noqa: E501
}

@cached_property
@ -126,7 +126,7 @@ class Worker(ModelComposed):
'status_change': 'status_change', # noqa: E501
'last_seen': 'last_seen', # noqa: E501
'task': 'task', # noqa: E501
'clusters': 'clusters', # noqa: E501
'tags': 'tags', # noqa: E501
}

read_only_vars = {
@ -178,7 +178,7 @@ class Worker(ModelComposed):
status_change (WorkerStatusChangeRequest): [optional] # noqa: E501
last_seen (datetime): Last time this worker was seen by the Manager.. [optional] # noqa: E501
task (WorkerTask): [optional] # noqa: E501
clusters ([WorkerCluster]): Clusters of which this Worker is a member.. [optional] # noqa: E501
tags ([WorkerTag]): Tags of which this Worker is a member.. [optional] # noqa: E501
"""

_check_type = kwargs.pop('_check_type', True)
@ -288,7 +288,7 @@ class Worker(ModelComposed):
status_change (WorkerStatusChangeRequest): [optional] # noqa: E501
last_seen (datetime): Last time this worker was seen by the Manager.. [optional] # noqa: E501
task (WorkerTask): [optional] # noqa: E501
clusters ([WorkerCluster]): Clusters of which this Worker is a member.. [optional] # noqa: E501
tags ([WorkerTag]): Tags of which this Worker is a member.. [optional] # noqa: E501
"""

_check_type = kwargs.pop('_check_type', True)
12 addon/flamenco/manager/model/worker_all_of.py generated
@ -30,9 +30,9 @@ from flamenco.manager.exceptions import ApiAttributeError


def lazy_import():
from flamenco.manager.model.worker_cluster import WorkerCluster
from flamenco.manager.model.worker_tag import WorkerTag
from flamenco.manager.model.worker_task import WorkerTask
globals()['WorkerCluster'] = WorkerCluster
globals()['WorkerTag'] = WorkerTag
globals()['WorkerTask'] = WorkerTask


@ -93,7 +93,7 @@ class WorkerAllOf(ModelNormal):
'platform': (str,), # noqa: E501
'supported_task_types': ([str],), # noqa: E501
'task': (WorkerTask,), # noqa: E501
'clusters': ([WorkerCluster],), # noqa: E501
'tags': ([WorkerTag],), # noqa: E501
}

@cached_property
@ -106,7 +106,7 @@ class WorkerAllOf(ModelNormal):
'platform': 'platform', # noqa: E501
'supported_task_types': 'supported_task_types', # noqa: E501
'task': 'task', # noqa: E501
'clusters': 'clusters', # noqa: E501
'tags': 'tags', # noqa: E501
}

read_only_vars = {
@ -156,7 +156,7 @@ class WorkerAllOf(ModelNormal):
through its discriminator because we passed in
_visited_composed_classes = (Animal,)
task (WorkerTask): [optional] # noqa: E501
clusters ([WorkerCluster]): Clusters of which this Worker is a member.. [optional] # noqa: E501
tags ([WorkerTag]): Tags of which this Worker is a member.. [optional] # noqa: E501
"""

_check_type = kwargs.pop('_check_type', True)
@ -247,7 +247,7 @@ class WorkerAllOf(ModelNormal):
through its discriminator because we passed in
_visited_composed_classes = (Animal,)
task (WorkerTask): [optional] # noqa: E501
clusters ([WorkerCluster]): Clusters of which this Worker is a member.. [optional] # noqa: E501
tags ([WorkerTag]): Tags of which this Worker is a member.. [optional] # noqa: E501
"""

_check_type = kwargs.pop('_check_type', True)
@ -30,7 +30,7 @@ from flamenco.manager.exceptions import ApiAttributeError


class WorkerCluster(ModelNormal):
class WorkerTag(ModelNormal):
"""NOTE: This class is auto generated by OpenAPI Generator.
Ref: https://openapi-generator.tech

@ -105,7 +105,7 @@ class WorkerCluster(ModelNormal):
@classmethod
@convert_js_args_to_python_args
def _from_openapi_data(cls, name, *args, **kwargs): # noqa: E501
"""WorkerCluster - a model defined in OpenAPI
"""WorkerTag - a model defined in OpenAPI

Args:
name (str):
@ -141,7 +141,7 @@ class WorkerCluster(ModelNormal):
Animal class but this time we won't travel
through its discriminator because we passed in
_visited_composed_classes = (Animal,)
id (str): UUID of the cluster. Can be ommitted when creating a new cluster, in which case a random UUID will be assigned. . [optional] # noqa: E501
id (str): UUID of the tag. Can be ommitted when creating a new tag, in which case a random UUID will be assigned. . [optional] # noqa: E501
description (str): [optional] # noqa: E501
"""

@ -192,7 +192,7 @@ class WorkerCluster(ModelNormal):

@convert_js_args_to_python_args
def __init__(self, name, *args, **kwargs): # noqa: E501
"""WorkerCluster - a model defined in OpenAPI
"""WorkerTag - a model defined in OpenAPI

Args:
name (str):
@ -228,7 +228,7 @@ class WorkerCluster(ModelNormal):
Animal class but this time we won't travel
through its discriminator because we passed in
_visited_composed_classes = (Animal,)
id (str): UUID of the cluster. Can be ommitted when creating a new cluster, in which case a random UUID will be assigned. . [optional] # noqa: E501
id (str): UUID of the tag. Can be ommitted when creating a new tag, in which case a random UUID will be assigned. . [optional] # noqa: E501
description (str): [optional] # noqa: E501
"""

@ -30,7 +30,7 @@ from flamenco.manager.exceptions import ApiAttributeError


class WorkerClusterChangeRequest(ModelNormal):
class WorkerTagChangeRequest(ModelNormal):
"""NOTE: This class is auto generated by OpenAPI Generator.
Ref: https://openapi-generator.tech

@ -81,7 +81,7 @@ class WorkerClusterChangeRequest(ModelNormal):
and the value is attribute type.
"""
return {
'cluster_ids': ([str],), # noqa: E501
'tag_ids': ([str],), # noqa: E501
}

@cached_property
@ -90,7 +90,7 @@ class WorkerClusterChangeRequest(ModelNormal):


attribute_map = {
'cluster_ids': 'cluster_ids', # noqa: E501
'tag_ids': 'tag_ids', # noqa: E501
}

read_only_vars = {
@ -100,11 +100,11 @@ class WorkerClusterChangeRequest(ModelNormal):

@classmethod
@convert_js_args_to_python_args
def _from_openapi_data(cls, cluster_ids, *args, **kwargs): # noqa: E501
"""WorkerClusterChangeRequest - a model defined in OpenAPI
def _from_openapi_data(cls, tag_ids, *args, **kwargs): # noqa: E501
"""WorkerTagChangeRequest - a model defined in OpenAPI

Args:
cluster_ids ([str]):
tag_ids ([str]):

Keyword Args:
_check_type (bool): if True, values for parameters in openapi_types
@ -164,7 +164,7 @@ class WorkerClusterChangeRequest(ModelNormal):
self._configuration = _configuration
self._visited_composed_classes = _visited_composed_classes + (self.__class__,)

self.cluster_ids = cluster_ids
self.tag_ids = tag_ids
for var_name, var_value in kwargs.items():
if var_name not in self.attribute_map and \
self._configuration is not None and \
@ -185,11 +185,11 @@ class WorkerClusterChangeRequest(ModelNormal):
])

@convert_js_args_to_python_args
def __init__(self, cluster_ids, *args, **kwargs): # noqa: E501
"""WorkerClusterChangeRequest - a model defined in OpenAPI
def __init__(self, tag_ids, *args, **kwargs): # noqa: E501
"""WorkerTagChangeRequest - a model defined in OpenAPI

Args:
cluster_ids ([str]):
tag_ids ([str]):

Keyword Args:
_check_type (bool): if True, values for parameters in openapi_types
@ -247,7 +247,7 @@ class WorkerClusterChangeRequest(ModelNormal):
self._configuration = _configuration
self._visited_composed_classes = _visited_composed_classes + (self.__class__,)

self.cluster_ids = cluster_ids
self.tag_ids = tag_ids
for var_name, var_value in kwargs.items():
if var_name not in self.attribute_map and \
self._configuration is not None and \
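The renamed WorkerTagChangeRequest model is what the worker-management API expects when (re)assigning a worker's tags. A rough sketch of how a script could use it with the generated Python client (the worker UUID is a placeholder, and the keyword name for the request body follows the generator's usual convention, so treat it as an assumption):

from flamenco.manager.api import worker_mgt_api
from flamenco.manager.model.worker_tag_change_request import WorkerTagChangeRequest

def assign_tags(api_client, worker_uuid: str, tag_uuids: list[str]) -> None:
    # POST /api/v3/worker-mgt/workers/{worker_id}/settags with the new tag IDs.
    api = worker_mgt_api.WorkerMgtApi(api_client)
    change = WorkerTagChangeRequest(tag_ids=tag_uuids)
    # Keyword name assumed from the generator's naming convention.
    api.set_worker_tags(worker_uuid, worker_tag_change_request=change)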
@ -30,11 +30,11 @@ from flamenco.manager.exceptions import ApiAttributeError


def lazy_import():
from flamenco.manager.model.worker_cluster import WorkerCluster
globals()['WorkerCluster'] = WorkerCluster
from flamenco.manager.model.worker_tag import WorkerTag
globals()['WorkerTag'] = WorkerTag


class WorkerClusterList(ModelNormal):
class WorkerTagList(ModelNormal):
"""NOTE: This class is auto generated by OpenAPI Generator.
Ref: https://openapi-generator.tech

@ -87,7 +87,7 @@ class WorkerClusterList(ModelNormal):
"""
lazy_import()
return {
'clusters': ([WorkerCluster],), # noqa: E501
'tags': ([WorkerTag],), # noqa: E501
}

@cached_property
@ -96,7 +96,7 @@ class WorkerClusterList(ModelNormal):


attribute_map = {
'clusters': 'clusters', # noqa: E501
'tags': 'tags', # noqa: E501
}

read_only_vars = {
@ -107,7 +107,7 @@ class WorkerClusterList(ModelNormal):
@classmethod
@convert_js_args_to_python_args
def _from_openapi_data(cls, *args, **kwargs): # noqa: E501
"""WorkerClusterList - a model defined in OpenAPI
"""WorkerTagList - a model defined in OpenAPI

Keyword Args:
_check_type (bool): if True, values for parameters in openapi_types
@ -140,7 +140,7 @@ class WorkerClusterList(ModelNormal):
Animal class but this time we won't travel
through its discriminator because we passed in
_visited_composed_classes = (Animal,)
clusters ([WorkerCluster]): [optional] # noqa: E501
tags ([WorkerTag]): [optional] # noqa: E501
"""

_check_type = kwargs.pop('_check_type', True)
@ -189,7 +189,7 @@ class WorkerClusterList(ModelNormal):

@convert_js_args_to_python_args
def __init__(self, *args, **kwargs): # noqa: E501
"""WorkerClusterList - a model defined in OpenAPI
"""WorkerTagList - a model defined in OpenAPI

Keyword Args:
_check_type (bool): if True, values for parameters in openapi_types
@ -222,7 +222,7 @@ class WorkerClusterList(ModelNormal):
Animal class but this time we won't travel
through its discriminator because we passed in
_visited_composed_classes = (Animal,)
clusters ([WorkerCluster]): [optional] # noqa: E501
tags ([WorkerTag]): [optional] # noqa: E501
"""

_check_type = kwargs.pop('_check_type', True)
7 addon/flamenco/manager/models/__init__.py generated
@ -11,6 +11,7 @@

from flamenco.manager.model.assigned_task import AssignedTask
from flamenco.manager.model.available_job_setting import AvailableJobSetting
from flamenco.manager.model.available_job_setting_eval_info import AvailableJobSettingEvalInfo
from flamenco.manager.model.available_job_setting_subtype import AvailableJobSettingSubtype
from flamenco.manager.model.available_job_setting_type import AvailableJobSettingType
from flamenco.manager.model.available_job_setting_visibility import AvailableJobSettingVisibility
@ -74,9 +75,6 @@ from flamenco.manager.model.task_update import TaskUpdate
from flamenco.manager.model.task_worker import TaskWorker
from flamenco.manager.model.worker import Worker
from flamenco.manager.model.worker_all_of import WorkerAllOf
from flamenco.manager.model.worker_cluster import WorkerCluster
from flamenco.manager.model.worker_cluster_change_request import WorkerClusterChangeRequest
from flamenco.manager.model.worker_cluster_list import WorkerClusterList
from flamenco.manager.model.worker_list import WorkerList
from flamenco.manager.model.worker_registration import WorkerRegistration
from flamenco.manager.model.worker_sign_on import WorkerSignOn
@ -86,5 +84,8 @@ from flamenco.manager.model.worker_state_changed import WorkerStateChanged
from flamenco.manager.model.worker_status import WorkerStatus
from flamenco.manager.model.worker_status_change_request import WorkerStatusChangeRequest
from flamenco.manager.model.worker_summary import WorkerSummary
from flamenco.manager.model.worker_tag import WorkerTag
from flamenco.manager.model.worker_tag_change_request import WorkerTagChangeRequest
from flamenco.manager.model.worker_tag_list import WorkerTagList
from flamenco.manager.model.worker_task import WorkerTask
from flamenco.manager.model.worker_task_all_of import WorkerTaskAllOf
19 addon/flamenco/manager_README.md generated
@ -116,24 +116,25 @@ Class | Method | HTTP request | Description
*WorkerApi* | [**task_update**](flamenco/manager/docs/WorkerApi.md#task_update) | **POST** /api/v3/worker/task/{task_id} | Update the task, typically to indicate progress, completion, or failure.
*WorkerApi* | [**worker_state**](flamenco/manager/docs/WorkerApi.md#worker_state) | **GET** /api/v3/worker/state |
*WorkerApi* | [**worker_state_changed**](flamenco/manager/docs/WorkerApi.md#worker_state_changed) | **POST** /api/v3/worker/state-changed | Worker changed state. This could be as acknowledgement of a Manager-requested state change, or in response to worker-local signals.
*WorkerMgtApi* | [**create_worker_cluster**](flamenco/manager/docs/WorkerMgtApi.md#create_worker_cluster) | **POST** /api/v3/worker-mgt/clusters | Create a new worker cluster.
*WorkerMgtApi* | [**create_worker_tag**](flamenco/manager/docs/WorkerMgtApi.md#create_worker_tag) | **POST** /api/v3/worker-mgt/tags | Create a new worker tag.
*WorkerMgtApi* | [**delete_worker**](flamenco/manager/docs/WorkerMgtApi.md#delete_worker) | **DELETE** /api/v3/worker-mgt/workers/{worker_id} | Remove the given worker. It is recommended to only call this function when the worker is in `offline` state. If the worker is still running, stop it first. Any task still assigned to the worker will be requeued.
*WorkerMgtApi* | [**delete_worker_cluster**](flamenco/manager/docs/WorkerMgtApi.md#delete_worker_cluster) | **DELETE** /api/v3/worker-mgt/cluster/{cluster_id} | Remove this worker cluster. This unassigns all workers from the cluster and removes it.
*WorkerMgtApi* | [**delete_worker_tag**](flamenco/manager/docs/WorkerMgtApi.md#delete_worker_tag) | **DELETE** /api/v3/worker-mgt/tag/{tag_id} | Remove this worker tag. This unassigns all workers from the tag and removes it.
*WorkerMgtApi* | [**fetch_worker**](flamenco/manager/docs/WorkerMgtApi.md#fetch_worker) | **GET** /api/v3/worker-mgt/workers/{worker_id} | Fetch info about the worker.
*WorkerMgtApi* | [**fetch_worker_cluster**](flamenco/manager/docs/WorkerMgtApi.md#fetch_worker_cluster) | **GET** /api/v3/worker-mgt/cluster/{cluster_id} | Get a single worker cluster.
*WorkerMgtApi* | [**fetch_worker_clusters**](flamenco/manager/docs/WorkerMgtApi.md#fetch_worker_clusters) | **GET** /api/v3/worker-mgt/clusters | Get list of worker clusters.
*WorkerMgtApi* | [**fetch_worker_sleep_schedule**](flamenco/manager/docs/WorkerMgtApi.md#fetch_worker_sleep_schedule) | **GET** /api/v3/worker-mgt/workers/{worker_id}/sleep-schedule |
*WorkerMgtApi* | [**fetch_worker_tag**](flamenco/manager/docs/WorkerMgtApi.md#fetch_worker_tag) | **GET** /api/v3/worker-mgt/tag/{tag_id} | Get a single worker tag.
*WorkerMgtApi* | [**fetch_worker_tags**](flamenco/manager/docs/WorkerMgtApi.md#fetch_worker_tags) | **GET** /api/v3/worker-mgt/tags | Get list of worker tags.
*WorkerMgtApi* | [**fetch_workers**](flamenco/manager/docs/WorkerMgtApi.md#fetch_workers) | **GET** /api/v3/worker-mgt/workers | Get list of workers.
*WorkerMgtApi* | [**request_worker_status_change**](flamenco/manager/docs/WorkerMgtApi.md#request_worker_status_change) | **POST** /api/v3/worker-mgt/workers/{worker_id}/setstatus |
*WorkerMgtApi* | [**set_worker_clusters**](flamenco/manager/docs/WorkerMgtApi.md#set_worker_clusters) | **POST** /api/v3/worker-mgt/workers/{worker_id}/setclusters |
*WorkerMgtApi* | [**set_worker_sleep_schedule**](flamenco/manager/docs/WorkerMgtApi.md#set_worker_sleep_schedule) | **POST** /api/v3/worker-mgt/workers/{worker_id}/sleep-schedule |
*WorkerMgtApi* | [**update_worker_cluster**](flamenco/manager/docs/WorkerMgtApi.md#update_worker_cluster) | **PUT** /api/v3/worker-mgt/cluster/{cluster_id} | Update an existing worker cluster.
*WorkerMgtApi* | [**set_worker_tags**](flamenco/manager/docs/WorkerMgtApi.md#set_worker_tags) | **POST** /api/v3/worker-mgt/workers/{worker_id}/settags |
*WorkerMgtApi* | [**update_worker_tag**](flamenco/manager/docs/WorkerMgtApi.md#update_worker_tag) | **PUT** /api/v3/worker-mgt/tag/{tag_id} | Update an existing worker tag.


## Documentation For Models

- [AssignedTask](flamenco/manager/docs/AssignedTask.md)
- [AvailableJobSetting](flamenco/manager/docs/AvailableJobSetting.md)
- [AvailableJobSettingEvalInfo](flamenco/manager/docs/AvailableJobSettingEvalInfo.md)
- [AvailableJobSettingSubtype](flamenco/manager/docs/AvailableJobSettingSubtype.md)
- [AvailableJobSettingType](flamenco/manager/docs/AvailableJobSettingType.md)
- [AvailableJobSettingVisibility](flamenco/manager/docs/AvailableJobSettingVisibility.md)
@ -197,9 +198,6 @@ Class | Method | HTTP request | Description
- [TaskWorker](flamenco/manager/docs/TaskWorker.md)
- [Worker](flamenco/manager/docs/Worker.md)
- [WorkerAllOf](flamenco/manager/docs/WorkerAllOf.md)
- [WorkerCluster](flamenco/manager/docs/WorkerCluster.md)
- [WorkerClusterChangeRequest](flamenco/manager/docs/WorkerClusterChangeRequest.md)
- [WorkerClusterList](flamenco/manager/docs/WorkerClusterList.md)
- [WorkerList](flamenco/manager/docs/WorkerList.md)
- [WorkerRegistration](flamenco/manager/docs/WorkerRegistration.md)
- [WorkerSignOn](flamenco/manager/docs/WorkerSignOn.md)
@ -209,6 +207,9 @@ Class | Method | HTTP request | Description
- [WorkerStatus](flamenco/manager/docs/WorkerStatus.md)
- [WorkerStatusChangeRequest](flamenco/manager/docs/WorkerStatusChangeRequest.md)
- [WorkerSummary](flamenco/manager/docs/WorkerSummary.md)
- [WorkerTag](flamenco/manager/docs/WorkerTag.md)
- [WorkerTagChangeRequest](flamenco/manager/docs/WorkerTagChangeRequest.md)
- [WorkerTagList](flamenco/manager/docs/WorkerTagList.md)
- [WorkerTask](flamenco/manager/docs/WorkerTask.md)
- [WorkerTaskAllOf](flamenco/manager/docs/WorkerTaskAllOf.md)

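Since this changeset is about the tag delete button, the relevant round trip through the generated client documented above is listing the tags and deleting one by UUID. A hedged sketch (the Manager URL and the Configuration/ApiClient setup are assumptions based on the standard generated client, not part of this diff):

from flamenco.manager import ApiClient, Configuration
from flamenco.manager.api import worker_mgt_api

configuration = Configuration(host="http://localhost:8080")  # assumed Manager URL
with ApiClient(configuration) as api_client:
    api = worker_mgt_api.WorkerMgtApi(api_client)

    # GET /api/v3/worker-mgt/tags, then DELETE /api/v3/worker-mgt/tag/{tag_id}.
    tag_list = api.fetch_worker_tags()
    for tag in tag_list.tags:
        print(tag.id, tag.name)
    if tag_list.tags:
        api.delete_worker_tag(tag_list.tags[0].id)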
@ -10,7 +10,7 @@ from urllib3.exceptions import HTTPError, MaxRetryError

import bpy

from . import job_types, job_submission, preferences, worker_clusters
from . import job_types, job_submission, preferences, worker_tags
from .job_types_propgroup import JobTypePropertyGroup
from .bat.submodules import bpathlib

@ -83,10 +83,10 @@ class FLAMENCO_OT_fetch_job_types(FlamencoOpMixin, bpy.types.Operator):
return {"FINISHED"}


class FLAMENCO_OT_fetch_worker_clusters(FlamencoOpMixin, bpy.types.Operator):
bl_idname = "flamenco.fetch_worker_clusters"
bl_label = "Fetch Worker Clusters"
bl_description = "Query Flamenco Manager to obtain the available worker clusters"
class FLAMENCO_OT_fetch_worker_tags(FlamencoOpMixin, bpy.types.Operator):
bl_idname = "flamenco.fetch_worker_tags"
bl_label = "Fetch Worker Tags"
bl_description = "Query Flamenco Manager to obtain the available worker tags"

def execute(self, context: bpy.types.Context) -> set[str]:
api_client = self.get_api_client(context)
@ -94,10 +94,10 @@ class FLAMENCO_OT_fetch_worker_clusters(FlamencoOpMixin, bpy.types.Operator):
from flamenco.manager import ApiException

scene = context.scene
old_cluster = getattr(scene, "flamenco_worker_cluster", "")
old_tag = getattr(scene, "flamenco_worker_tag", "")

try:
worker_clusters.refresh(context, api_client)
worker_tags.refresh(context, api_client)
except ApiException as ex:
self.report({"ERROR"}, "Error getting job types: %s" % ex)
return {"CANCELLED"}
@ -107,9 +107,9 @@ class FLAMENCO_OT_fetch_worker_clusters(FlamencoOpMixin, bpy.types.Operator):
self.report({"ERROR"}, "Unable to reach Manager")
return {"CANCELLED"}

if old_cluster:
# TODO: handle cases where the old cluster no longer exists.
scene.flamenco_worker_cluster = old_cluster
if old_tag:
# TODO: handle cases where the old tag no longer exists.
scene.flamenco_worker_tag = old_tag

return {"FINISHED"}

@ -143,6 +143,14 @@ class FLAMENCO_OT_eval_setting(FlamencoOpMixin, bpy.types.Operator):
setting_key: bpy.props.StringProperty(name="Setting Key") # type: ignore
setting_eval: bpy.props.StringProperty(name="Python Expression") # type: ignore

eval_description: bpy.props.StringProperty(name="Description", options={"HIDDEN"})

@classmethod
def description(cls, context, properties):
if not properties.eval_description:
return "" # Causes bl_description to be shown.
return f"Set value to: {properties.eval_description}"

def execute(self, context: bpy.types.Context) -> set[str]:
job = job_submission.job_for_scene(context.scene)
if job is None:
@ -669,7 +677,7 @@ class FLAMENCO3_OT_explore_file_path(bpy.types.Operator):

classes = (
FLAMENCO_OT_fetch_job_types,
FLAMENCO_OT_fetch_worker_clusters,
FLAMENCO_OT_fetch_worker_tags,
FLAMENCO_OT_ping_manager,
FLAMENCO_OT_eval_setting,
FLAMENCO_OT_submit_job,
@ -43,7 +43,7 @@ _project_finder_enum_items = [
]


class WorkerCluster(bpy.types.PropertyGroup):
class WorkerTag(bpy.types.PropertyGroup):
id: bpy.props.StringProperty(name="id") # type: ignore
name: bpy.props.StringProperty(name="Name") # type: ignore
description: bpy.props.StringProperty(name="Description") # type: ignore
@ -93,10 +93,10 @@ class FlamencoPreferences(bpy.types.AddonPreferences):
get=lambda prefs: prefs.job_storage,
)

worker_clusters: bpy.props.CollectionProperty( # type: ignore
type=WorkerCluster,
name="Worker Clusters",
description="Cache for the worker clusters available on the configured Manager",
worker_tags: bpy.props.CollectionProperty( # type: ignore
type=WorkerTag,
name="Worker Tags",
description="Cache for the worker tags available on the configured Manager",
options={"HIDDEN"},
)

@ -169,7 +169,7 @@ def manager_url(context: bpy.types.Context) -> str:


classes = (
WorkerCluster,
WorkerTag,
FlamencoPreferences,
)
_register, _unregister = bpy.utils.register_classes_factory(classes)
@ -16,25 +16,25 @@ _enum_items: list[Union[tuple[str, str, str], tuple[str, str, str, int, int]]] =


def refresh(context: bpy.types.Context, api_client: _ApiClient) -> None:
"""Fetch the available worker clusters from the Manager."""
"""Fetch the available worker tags from the Manager."""
from flamenco.manager import ApiClient
from flamenco.manager.api import worker_mgt_api
from flamenco.manager.model.worker_cluster_list import WorkerClusterList
from flamenco.manager.model.worker_tag_list import WorkerTagList

assert isinstance(api_client, ApiClient)

api = worker_mgt_api.WorkerMgtApi(api_client)
response: WorkerClusterList = api.fetch_worker_clusters()
response: WorkerTagList = api.fetch_worker_tags()

# Store on the preferences, so a cached version persists until the next refresh.
prefs = preferences.get(context)
prefs.worker_clusters.clear()
prefs.worker_tags.clear()

for cluster in response.clusters:
rna_cluster = prefs.worker_clusters.add()
rna_cluster.id = cluster.id
rna_cluster.name = cluster.name
rna_cluster.description = getattr(cluster, "description", "")
for tag in response.tags:
rna_tag = prefs.worker_tags.add()
rna_tag.id = tag.id
rna_tag.name = tag.name
rna_tag.description = getattr(tag, "description", "")

# Preferences have changed, so make sure that Blender saves them (assuming
# auto-save here).
@ -46,25 +46,25 @@ def _get_enum_items(self, context):
prefs = preferences.get(context)

_enum_items = [
("-", "All", "No specific cluster assigned, any worker can handle this job"),
("-", "All", "No specific tag assigned, any worker can handle this job"),
]
_enum_items.extend(
(cluster.id, cluster.name, cluster.description)
for cluster in prefs.worker_clusters
(tag.id, tag.name, tag.description)
for tag in prefs.worker_tags
)
return _enum_items


def register() -> None:
bpy.types.Scene.flamenco_worker_cluster = bpy.props.EnumProperty(
name="Worker Cluster",
bpy.types.Scene.flamenco_worker_tag = bpy.props.EnumProperty(
name="Worker Tag",
items=_get_enum_items,
description="The set of Workers that can handle tasks of this job",
)


def unregister() -> None:
to_del = ((bpy.types.Scene, "flamenco_worker_cluster"),)
to_del = ((bpy.types.Scene, "flamenco_worker_tag"),)
for ob, attr in to_del:
try:
delattr(ob, attr)
@ -139,12 +139,6 @@ func runFlamencoManager() bool {
persist := openDB(*configService)
defer persist.Close()

// Disabled for now. `VACUUM` locks the database, which means that other
// queries can fail with a "database is locked (5) (SQLITE_BUSY)" error. This
// situation should be handled gracefully before reinstating the vacuum loop.
//
// go persist.PeriodicMaintenanceLoop(mainCtx)

timeService := clock.New()
compiler, err := job_compilers.Load(timeService)
if err != nil {
@ -196,6 +190,16 @@ func runFlamencoManager() bool {
lastRender.Run(mainCtx)
}()

// Run a periodic integrity check on the database.
// When that check fails, the entire application should shut down.
wg.Add(1)
go func() {
defer wg.Done()
persist.PeriodicIntegrityCheck(mainCtx,
configService.Get().DBIntegrityCheck,
mainCtxCancel)
}()

// Start the web server.
wg.Add(1)
go func() {
@ -31,16 +31,19 @@ func findBlender() {
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel()

helpMsg := "Flamenco Manager will have to supply the full path to Blender when tasks are sent " +
"to this Worker. For more info see https://flamenco.blender.org/usage/variables/blender/"

result, err := find_blender.Find(ctx)
switch {
case errors.Is(err, fs.ErrNotExist), errors.Is(err, exec.ErrNotFound):
log.Warn().Msg("Blender could not be found, Flamenco Manager will have to supply a full path")
log.Warn().Msg("Blender could not be found. " + helpMsg)
case err != nil:
log.Warn().AnErr("cause", err).Msg("there was an issue finding Blender on this system, Flamenco Manager will have to supply a full path")
log.Warn().AnErr("cause", err).Msg("There was an error finding Blender on this system. " + helpMsg)
default:
log.Info().
Str("path", result.FoundLocation).
Str("version", result.BlenderVersion).
Msg("Blender found on this system, it will be used unless Flamenco Manager specifies a path to a different Blender")
Msg("Blender found on this system, it will be used unless the Flamenco Manager configuration specifies a different path.")
}
}
@ -52,7 +52,7 @@ var cliArgs struct {

func main() {
parseCliArgs()
if cliArgs.version {
fmt.Println(appinfo.ApplicationVersion)
fmt.Println(appinfo.ExtendedVersion())
return
}

261 cmd/job-creator/main.go Normal file
@ -0,0 +1,261 @@
package main

// SPDX-License-Identifier: GPL-3.0-or-later

import (
    "context"
    "errors"
    "flag"
    "io/fs"
    "os"
    "os/signal"
    "runtime"
    "syscall"
    "time"

    "github.com/benbjohnson/clock"
    "github.com/mattn/go-colorable"
    "github.com/rs/zerolog"
    "github.com/rs/zerolog/log"

    "git.blender.org/flamenco/internal/appinfo"
    "git.blender.org/flamenco/internal/manager/config"
    "git.blender.org/flamenco/internal/manager/job_compilers"
    "git.blender.org/flamenco/internal/manager/persistence"
    "git.blender.org/flamenco/pkg/api"
)

var cliArgs struct {
    version bool
    jobUUID string
}

func main() {
    output := zerolog.ConsoleWriter{Out: colorable.NewColorableStdout(), TimeFormat: time.RFC3339}
    log.Logger = log.Output(output)
    log.Info().
        Str("version", appinfo.ApplicationVersion).
        Str("git", appinfo.ApplicationGitHash).
        Str("releaseCycle", appinfo.ReleaseCycle).
        Str("os", runtime.GOOS).
        Str("arch", runtime.GOARCH).
        Msgf("starting %v job compiler", appinfo.ApplicationName)

    parseCliArgs()
    if cliArgs.version {
        return
    }

    if cliArgs.jobUUID == "" {
        log.Fatal().Msg("give me a job UUID to regenerate tasks for")
    }

    // Load configuration.
    configService := config.NewService()
    err := configService.Load()
    if err != nil && !errors.Is(err, fs.ErrNotExist) {
        log.Error().Err(err).Msg("loading configuration")
    }

    isFirstRun, err := configService.IsFirstRun()
    switch {
    case err != nil:
        log.Fatal().Err(err).Msg("unable to determine whether this is the first run of Flamenco or not")
    case isFirstRun:
        log.Info().Msg("This seems to be your first run of Flamenco, this tool won't work.")
        return
    }

    // Construct the services.
    persist := openDB(*configService)
    defer persist.Close()

    timeService := clock.New()
    compiler, err := job_compilers.Load(timeService)
    if err != nil {
        log.Fatal().Err(err).Msg("error loading job compilers")
    }

    // The main context determines the lifetime of the application. All
    // long-running goroutines need to keep an eye on this, and stop their work
    // once it closes.
    mainCtx, mainCtxCancel := context.WithCancel(context.Background())
    defer mainCtxCancel()

    installSignalHandler(mainCtxCancel)

    recompile(mainCtx, cliArgs.jobUUID, persist, compiler)
}

// recompile regenerates the job's tasks.
func recompile(ctx context.Context, jobUUID string, db *persistence.DB, compiler *job_compilers.Service) {
    dbJob, err := db.FetchJob(ctx, jobUUID)
    if err != nil {
        log.Fatal().Err(err).Msg("could not get job from database")
    }
    logger := log.With().Str("job", jobUUID).Logger()
    logger.Info().Msg("found job")

    dbTasks, err := db.FetchTasksOfJob(ctx, dbJob)
    if err != nil {
        log.Fatal().Err(err).Msg("could not query database for tasks")
    }
    if len(dbTasks) > 0 {
        // This tool has only been tested with jobs that have had their tasks completely lost.
        log.Fatal().
            Int("numTasks", len(dbTasks)).
            Msg("this job still has tasks, this is not a situation this tool should be used in")
    }

    // Recompile the job.
    fakeSubmittedJob := constructSubmittedJob(dbJob)
    authoredJob, err := compiler.Compile(ctx, fakeSubmittedJob)
    if err != nil {
        logger.Fatal().Err(err).Msg("could not recompile job")
    }
    sanityCheck(logger, dbJob, authoredJob)

    // Store the recompiled tasks.
    if err := db.StoreAuthoredJobTaks(ctx, dbJob, authoredJob); err != nil {
        logger.Fatal().Err(err).Msg("error storing recompiled tasks")
    }
    logger.Info().Msg("new tasks have been stored")

    updateTaskStatuses(ctx, logger, db, dbJob)

    logger.Info().Msg("job recompilation seems to have worked out")
}

func constructSubmittedJob(dbJob *persistence.Job) api.SubmittedJob {
    fakeSubmittedJob := api.SubmittedJob{
        Name:              dbJob.Name,
        Priority:          dbJob.Priority,
        SubmitterPlatform: "reconstrutor", // The platform shouldn't matter, as all paths have already been replaced.
        Type:              dbJob.JobType,
        TypeEtag:          nil,

        Settings: &api.JobSettings{AdditionalProperties: make(map[string]interface{})},
        Metadata: &api.JobMetadata{AdditionalProperties: make(map[string]string)},
    }

    for key, value := range dbJob.Settings {
        fakeSubmittedJob.Settings.AdditionalProperties[key] = value
    }
    for key, value := range dbJob.Metadata {
        fakeSubmittedJob.Metadata.AdditionalProperties[key] = value
    }
    if dbJob.WorkerTag != nil {
        fakeSubmittedJob.WorkerTag = &dbJob.WorkerTag.UUID
    } else if dbJob.WorkerTagID != nil {
        panic("WorkerTagID is set, but WorkerTag is not")
    }

    return fakeSubmittedJob
}

// Check that the authored job is consistent with the original job.
func sanityCheck(logger zerolog.Logger, expect *persistence.Job, actual *job_compilers.AuthoredJob) {
    if actual.Name != expect.Name {
        logger.Fatal().
            Str("expected", expect.Name).
            Str("actual", actual.Name).
            Msg("recompilation did not produce expected name")
    }
    if actual.JobType != expect.JobType {
        logger.Fatal().
            Str("expected", expect.JobType).
            Str("actual", actual.JobType).
            Msg("recompilation did not produce expected job type")
    }
}

func updateTaskStatuses(ctx context.Context, logger zerolog.Logger, db *persistence.DB, dbJob *persistence.Job) {
    logger = logger.With().Str("jobStatus", string(dbJob.Status)).Logger()

    // Update the task statuses based on the job status. This is NOT using the
    // state machine, as these tasks are not actually going from one state to the
    // other. They are just being updated in the database.
    taskStatusMap := map[api.JobStatus]api.TaskStatus{
        api.JobStatusActive:            api.TaskStatusQueued,
        api.JobStatusCancelRequested:   api.TaskStatusCanceled,
        api.JobStatusCanceled:          api.TaskStatusCanceled,
        api.JobStatusCompleted:         api.TaskStatusCompleted,
        api.JobStatusFailed:            api.TaskStatusCanceled,
        api.JobStatusPaused:            api.TaskStatusPaused,
        api.JobStatusQueued:            api.TaskStatusQueued,
        api.JobStatusRequeueing:        api.TaskStatusQueued,
        api.JobStatusUnderConstruction: api.TaskStatusQueued,
    }
    newTaskStatus, ok := taskStatusMap[dbJob.Status]
    if !ok {
        logger.Warn().Msg("unknown job status, not touching task statuses")
        return
    }

    logger = logger.With().Str("taskStatus", string(newTaskStatus)).Logger()

    err := db.UpdateJobsTaskStatuses(ctx, dbJob, newTaskStatus, "reset task status after job reconstruction")
    if err != nil {
        logger.Fatal().Msg("could not update task statuses")
    }

    logger.Info().Msg("task statuses have been updated based on the job status")
}

func parseCliArgs() {
    var quiet, debug, trace bool

    flag.BoolVar(&cliArgs.version, "version", false, "Shows the application version, then exits.")
    flag.BoolVar(&quiet, "quiet", false, "Only log warning-level and worse.")
    flag.BoolVar(&debug, "debug", false, "Enable debug-level logging.")
    flag.BoolVar(&trace, "trace", false, "Enable trace-level logging.")
    flag.StringVar(&cliArgs.jobUUID, "job", "", "Job UUID to regenerate")

    flag.Parse()

    var logLevel zerolog.Level
    switch {
    case trace:
        logLevel = zerolog.TraceLevel
    case debug:
        logLevel = zerolog.DebugLevel
    case quiet:
        logLevel = zerolog.WarnLevel
    default:
        logLevel = zerolog.InfoLevel
    }
    zerolog.SetGlobalLevel(logLevel)
}

// openDB opens the database or dies.
func openDB(configService config.Service) *persistence.DB {
    dsn := configService.Get().DatabaseDSN
    if dsn == "" {
        log.Fatal().Msg("configure the database in flamenco-manager.yaml")
    }

    dbCtx, dbCtxCancel := context.WithTimeout(context.Background(), 5*time.Second)
    defer dbCtxCancel()
    persist, err := persistence.OpenDB(dbCtx, dsn)
    if err != nil {
        log.Fatal().
            Err(err).
            Str("dsn", dsn).
            Msg("error opening database")
    }

    return persist
}

// installSignalHandler spawns a goroutine that handles incoming POSIX signals.
func installSignalHandler(cancelFunc context.CancelFunc) {
    signals := make(chan os.Signal, 1)
    signal.Notify(signals, os.Interrupt)
    signal.Notify(signals, syscall.SIGTERM)
    go func() {
        for signum := range signals {
            log.Info().Str("signal", signum.String()).Msg("signal received, shutting down")
            cancelFunc()
        }
    }()
}
@ -4,6 +4,8 @@ MY_DIR="$(dirname "$(readlink -e "$0")")"
ADDON_ZIP="$MY_DIR/web/static/flamenco3-addon.zip"
WORKER_TARGET=/shared/software/flamenco3-worker/flamenco-worker

TIMESTAMP=$(date +'%Y-%m-%d-%H%M%S')

set -e

function prompt() {
@ -19,6 +21,7 @@ make

prompt "Deploying Manager"
ssh -o ClearAllForwardings=yes flamenco.farm.blender -t sudo systemctl stop flamenco3-manager
ssh -o ClearAllForwardings=yes flamenco.farm.blender -t cp /home/flamenco3/flamenco-manager.sqlite /home/flamenco3/flamenco-manager.sqlite-bak-$TIMESTAMP
scp flamenco-manager flamenco.farm.blender:/home/flamenco3/
ssh -o ClearAllForwardings=yes flamenco.farm.blender -t sudo systemctl start flamenco3-manager

@ -65,13 +65,13 @@ type PersistenceService interface {
RemoveFromJobBlocklist(ctx context.Context, jobUUID, workerUUID, taskType string) error
ClearJobBlocklist(ctx context.Context, job *persistence.Job) error

// Worker cluster management.
WorkerSetClusters(ctx context.Context, worker *persistence.Worker, clusterUUIDs []string) error
CreateWorkerCluster(ctx context.Context, cluster *persistence.WorkerCluster) error
FetchWorkerCluster(ctx context.Context, uuid string) (*persistence.WorkerCluster, error)
FetchWorkerClusters(ctx context.Context) ([]*persistence.WorkerCluster, error)
DeleteWorkerCluster(ctx context.Context, uuid string) error
SaveWorkerCluster(ctx context.Context, cluster *persistence.WorkerCluster) error
// Worker tag management.
WorkerSetTags(ctx context.Context, worker *persistence.Worker, tagUUIDs []string) error
CreateWorkerTag(ctx context.Context, tag *persistence.WorkerTag) error
FetchWorkerTag(ctx context.Context, uuid string) (*persistence.WorkerTag, error)
FetchWorkerTags(ctx context.Context) ([]*persistence.WorkerTag, error)
DeleteWorkerTag(ctx context.Context, uuid string) error
SaveWorkerTag(ctx context.Context, tag *persistence.WorkerTag) error

// WorkersLeftToRun returns a set of worker UUIDs that can run tasks of the given type on the given job.
WorkersLeftToRun(ctx context.Context, job *persistence.Job, taskType string) (map[string]bool, error)
@ -618,8 +618,8 @@ func jobDBtoAPI(dbJob *persistence.Job) api.Job {
if dbJob.DeleteRequestedAt.Valid {
apiJob.DeleteRequestedAt = &dbJob.DeleteRequestedAt.Time
}
if dbJob.WorkerCluster != nil {
apiJob.WorkerCluster = &dbJob.WorkerCluster.UUID
if dbJob.WorkerTag != nil {
apiJob.WorkerTag = &dbJob.WorkerTag.UUID
}

return apiJob
@ -320,19 +320,19 @@ func TestSubmitJobWithShamanCheckoutID(t *testing.T) {
|
||||
assert.NoError(t, err)
|
||||
}
|
||||
|
||||
func TestSubmitJobWithWorkerCluster(t *testing.T) {
|
||||
func TestSubmitJobWithWorkerTag(t *testing.T) {
|
||||
mockCtrl := gomock.NewController(t)
|
||||
defer mockCtrl.Finish()
|
||||
|
||||
mf := newMockedFlamenco(mockCtrl)
|
||||
worker := testWorker()
|
||||
|
||||
workerClusterUUID := "04435762-9dc8-4f13-80b7-643a6fa5b6fd"
|
||||
cluster := persistence.WorkerCluster{
|
||||
workerTagUUID := "04435762-9dc8-4f13-80b7-643a6fa5b6fd"
|
||||
tag := persistence.WorkerTag{
|
||||
Model: persistence.Model{ID: 47},
|
||||
UUID: workerClusterUUID,
|
||||
Name: "first cluster",
|
||||
Description: "my first cluster",
|
||||
UUID: workerTagUUID,
|
||||
Name: "first tag",
|
||||
Description: "my first tag",
|
||||
}
|
||||
|
||||
submittedJob := api.SubmittedJob{
|
||||
@ -340,7 +340,7 @@ func TestSubmitJobWithWorkerCluster(t *testing.T) {
|
||||
Type: "test",
|
||||
Priority: 50,
|
||||
SubmitterPlatform: worker.Platform,
|
||||
WorkerCluster: &workerClusterUUID,
|
||||
WorkerTag: &workerTagUUID,
|
||||
}
|
||||
|
||||
mf.expectConvertTwoWayVariables(t,
|
||||
@ -351,8 +351,8 @@ func TestSubmitJobWithWorkerCluster(t *testing.T) {
|
||||
|
||||
// Expect the job compiler to be called.
|
||||
authoredJob := job_compilers.AuthoredJob{
|
||||
JobID: "afc47568-bd9d-4368-8016-e91d945db36d",
|
||||
WorkerClusterUUID: workerClusterUUID,
|
||||
JobID: "afc47568-bd9d-4368-8016-e91d945db36d",
|
||||
WorkerTagUUID: workerTagUUID,
|
||||
|
||||
Name: submittedJob.Name,
|
||||
JobType: submittedJob.Type,
|
||||
@ -382,8 +382,8 @@ func TestSubmitJobWithWorkerCluster(t *testing.T) {
|
||||
Settings: persistence.StringInterfaceMap{},
|
||||
Metadata: persistence.StringStringMap{},
|
||||
|
||||
WorkerClusterID: &cluster.ID,
|
||||
WorkerCluster: &cluster,
|
||||
WorkerTagID: &tag.ID,
|
||||
WorkerTag: &tag,
|
||||
}
|
||||
mf.persistence.EXPECT().FetchJob(gomock.Any(), queuedJob.JobID).Return(&dbJob, nil)
|
||||
|
||||
|
92 internal/manager/api_impl/mocks/api_impl_mock.gen.go (generated)
@ -141,18 +141,18 @@ func (mr *MockPersistenceServiceMockRecorder) CreateWorker(arg0, arg1 interface{
|
||||
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "CreateWorker", reflect.TypeOf((*MockPersistenceService)(nil).CreateWorker), arg0, arg1)
|
||||
}
|
||||
|
||||
// CreateWorkerCluster mocks base method.
|
||||
func (m *MockPersistenceService) CreateWorkerCluster(arg0 context.Context, arg1 *persistence.WorkerCluster) error {
|
||||
// CreateWorkerTag mocks base method.
|
||||
func (m *MockPersistenceService) CreateWorkerTag(arg0 context.Context, arg1 *persistence.WorkerTag) error {
|
||||
m.ctrl.T.Helper()
|
||||
ret := m.ctrl.Call(m, "CreateWorkerCluster", arg0, arg1)
|
||||
ret := m.ctrl.Call(m, "CreateWorkerTag", arg0, arg1)
|
||||
ret0, _ := ret[0].(error)
|
||||
return ret0
|
||||
}
|
||||
|
||||
// CreateWorkerCluster indicates an expected call of CreateWorkerCluster.
|
||||
func (mr *MockPersistenceServiceMockRecorder) CreateWorkerCluster(arg0, arg1 interface{}) *gomock.Call {
|
||||
// CreateWorkerTag indicates an expected call of CreateWorkerTag.
|
||||
func (mr *MockPersistenceServiceMockRecorder) CreateWorkerTag(arg0, arg1 interface{}) *gomock.Call {
|
||||
mr.mock.ctrl.T.Helper()
|
||||
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "CreateWorkerCluster", reflect.TypeOf((*MockPersistenceService)(nil).CreateWorkerCluster), arg0, arg1)
|
||||
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "CreateWorkerTag", reflect.TypeOf((*MockPersistenceService)(nil).CreateWorkerTag), arg0, arg1)
|
||||
}
|
||||
|
||||
// DeleteWorker mocks base method.
|
||||
@ -169,18 +169,18 @@ func (mr *MockPersistenceServiceMockRecorder) DeleteWorker(arg0, arg1 interface{
|
||||
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "DeleteWorker", reflect.TypeOf((*MockPersistenceService)(nil).DeleteWorker), arg0, arg1)
|
||||
}
|
||||
|
||||
// DeleteWorkerCluster mocks base method.
|
||||
func (m *MockPersistenceService) DeleteWorkerCluster(arg0 context.Context, arg1 string) error {
|
||||
// DeleteWorkerTag mocks base method.
|
||||
func (m *MockPersistenceService) DeleteWorkerTag(arg0 context.Context, arg1 string) error {
|
||||
m.ctrl.T.Helper()
|
||||
ret := m.ctrl.Call(m, "DeleteWorkerCluster", arg0, arg1)
|
||||
ret := m.ctrl.Call(m, "DeleteWorkerTag", arg0, arg1)
|
||||
ret0, _ := ret[0].(error)
|
||||
return ret0
|
||||
}
|
||||
|
||||
// DeleteWorkerCluster indicates an expected call of DeleteWorkerCluster.
|
||||
func (mr *MockPersistenceServiceMockRecorder) DeleteWorkerCluster(arg0, arg1 interface{}) *gomock.Call {
|
||||
// DeleteWorkerTag indicates an expected call of DeleteWorkerTag.
|
||||
func (mr *MockPersistenceServiceMockRecorder) DeleteWorkerTag(arg0, arg1 interface{}) *gomock.Call {
|
||||
mr.mock.ctrl.T.Helper()
|
||||
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "DeleteWorkerCluster", reflect.TypeOf((*MockPersistenceService)(nil).DeleteWorkerCluster), arg0, arg1)
|
||||
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "DeleteWorkerTag", reflect.TypeOf((*MockPersistenceService)(nil).DeleteWorkerTag), arg0, arg1)
|
||||
}
|
||||
|
||||
// FetchJob mocks base method.
|
||||
@ -258,34 +258,34 @@ func (mr *MockPersistenceServiceMockRecorder) FetchWorker(arg0, arg1 interface{}
|
||||
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "FetchWorker", reflect.TypeOf((*MockPersistenceService)(nil).FetchWorker), arg0, arg1)
|
||||
}
|
||||
|
||||
// FetchWorkerCluster mocks base method.
|
||||
func (m *MockPersistenceService) FetchWorkerCluster(arg0 context.Context, arg1 string) (*persistence.WorkerCluster, error) {
|
||||
// FetchWorkerTag mocks base method.
|
||||
func (m *MockPersistenceService) FetchWorkerTag(arg0 context.Context, arg1 string) (*persistence.WorkerTag, error) {
|
||||
m.ctrl.T.Helper()
|
||||
ret := m.ctrl.Call(m, "FetchWorkerCluster", arg0, arg1)
|
||||
ret0, _ := ret[0].(*persistence.WorkerCluster)
|
||||
ret := m.ctrl.Call(m, "FetchWorkerTag", arg0, arg1)
|
||||
ret0, _ := ret[0].(*persistence.WorkerTag)
|
||||
ret1, _ := ret[1].(error)
|
||||
return ret0, ret1
|
||||
}
|
||||
|
||||
// FetchWorkerCluster indicates an expected call of FetchWorkerCluster.
|
||||
func (mr *MockPersistenceServiceMockRecorder) FetchWorkerCluster(arg0, arg1 interface{}) *gomock.Call {
|
||||
// FetchWorkerTag indicates an expected call of FetchWorkerTag.
|
||||
func (mr *MockPersistenceServiceMockRecorder) FetchWorkerTag(arg0, arg1 interface{}) *gomock.Call {
|
||||
mr.mock.ctrl.T.Helper()
|
||||
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "FetchWorkerCluster", reflect.TypeOf((*MockPersistenceService)(nil).FetchWorkerCluster), arg0, arg1)
|
||||
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "FetchWorkerTag", reflect.TypeOf((*MockPersistenceService)(nil).FetchWorkerTag), arg0, arg1)
|
||||
}
|
||||
|
||||
// FetchWorkerClusters mocks base method.
|
||||
func (m *MockPersistenceService) FetchWorkerClusters(arg0 context.Context) ([]*persistence.WorkerCluster, error) {
|
||||
// FetchWorkerTags mocks base method.
|
||||
func (m *MockPersistenceService) FetchWorkerTags(arg0 context.Context) ([]*persistence.WorkerTag, error) {
|
||||
m.ctrl.T.Helper()
|
||||
ret := m.ctrl.Call(m, "FetchWorkerClusters", arg0)
|
||||
ret0, _ := ret[0].([]*persistence.WorkerCluster)
|
||||
ret := m.ctrl.Call(m, "FetchWorkerTags", arg0)
|
||||
ret0, _ := ret[0].([]*persistence.WorkerTag)
|
||||
ret1, _ := ret[1].(error)
|
||||
return ret0, ret1
|
||||
}
|
||||
|
||||
// FetchWorkerClusters indicates an expected call of FetchWorkerClusters.
|
||||
func (mr *MockPersistenceServiceMockRecorder) FetchWorkerClusters(arg0 interface{}) *gomock.Call {
|
||||
// FetchWorkerTags indicates an expected call of FetchWorkerTags.
|
||||
func (mr *MockPersistenceServiceMockRecorder) FetchWorkerTags(arg0 interface{}) *gomock.Call {
|
||||
mr.mock.ctrl.T.Helper()
|
||||
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "FetchWorkerClusters", reflect.TypeOf((*MockPersistenceService)(nil).FetchWorkerClusters), arg0)
|
||||
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "FetchWorkerTags", reflect.TypeOf((*MockPersistenceService)(nil).FetchWorkerTags), arg0)
|
||||
}
|
||||
|
||||
// FetchWorkerTask mocks base method.
|
||||
@ -433,20 +433,6 @@ func (mr *MockPersistenceServiceMockRecorder) SaveWorker(arg0, arg1 interface{})
|
||||
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "SaveWorker", reflect.TypeOf((*MockPersistenceService)(nil).SaveWorker), arg0, arg1)
|
||||
}
|
||||
|
||||
// SaveWorkerCluster mocks base method.
|
||||
func (m *MockPersistenceService) SaveWorkerCluster(arg0 context.Context, arg1 *persistence.WorkerCluster) error {
|
||||
m.ctrl.T.Helper()
|
||||
ret := m.ctrl.Call(m, "SaveWorkerCluster", arg0, arg1)
|
||||
ret0, _ := ret[0].(error)
|
||||
return ret0
|
||||
}
|
||||
|
||||
// SaveWorkerCluster indicates an expected call of SaveWorkerCluster.
|
||||
func (mr *MockPersistenceServiceMockRecorder) SaveWorkerCluster(arg0, arg1 interface{}) *gomock.Call {
|
||||
mr.mock.ctrl.T.Helper()
|
||||
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "SaveWorkerCluster", reflect.TypeOf((*MockPersistenceService)(nil).SaveWorkerCluster), arg0, arg1)
|
||||
}
|
||||
|
||||
// SaveWorkerStatus mocks base method.
|
||||
func (m *MockPersistenceService) SaveWorkerStatus(arg0 context.Context, arg1 *persistence.Worker) error {
|
||||
m.ctrl.T.Helper()
|
||||
@ -461,6 +447,20 @@ func (mr *MockPersistenceServiceMockRecorder) SaveWorkerStatus(arg0, arg1 interf
|
||||
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "SaveWorkerStatus", reflect.TypeOf((*MockPersistenceService)(nil).SaveWorkerStatus), arg0, arg1)
|
||||
}
|
||||
|
||||
// SaveWorkerTag mocks base method.
|
||||
func (m *MockPersistenceService) SaveWorkerTag(arg0 context.Context, arg1 *persistence.WorkerTag) error {
|
||||
m.ctrl.T.Helper()
|
||||
ret := m.ctrl.Call(m, "SaveWorkerTag", arg0, arg1)
|
||||
ret0, _ := ret[0].(error)
|
||||
return ret0
|
||||
}
|
||||
|
||||
// SaveWorkerTag indicates an expected call of SaveWorkerTag.
|
||||
func (mr *MockPersistenceServiceMockRecorder) SaveWorkerTag(arg0, arg1 interface{}) *gomock.Call {
|
||||
mr.mock.ctrl.T.Helper()
|
||||
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "SaveWorkerTag", reflect.TypeOf((*MockPersistenceService)(nil).SaveWorkerTag), arg0, arg1)
|
||||
}
|
||||
|
||||
// ScheduleTask mocks base method.
|
||||
func (m *MockPersistenceService) ScheduleTask(arg0 context.Context, arg1 *persistence.Worker) (*persistence.Task, error) {
|
||||
m.ctrl.T.Helper()
|
||||
@ -532,18 +532,18 @@ func (mr *MockPersistenceServiceMockRecorder) WorkerSeen(arg0, arg1 interface{})
|
||||
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "WorkerSeen", reflect.TypeOf((*MockPersistenceService)(nil).WorkerSeen), arg0, arg1)
|
||||
}
|
||||
|
||||
// WorkerSetClusters mocks base method.
|
||||
func (m *MockPersistenceService) WorkerSetClusters(arg0 context.Context, arg1 *persistence.Worker, arg2 []string) error {
|
||||
// WorkerSetTags mocks base method.
|
||||
func (m *MockPersistenceService) WorkerSetTags(arg0 context.Context, arg1 *persistence.Worker, arg2 []string) error {
|
||||
m.ctrl.T.Helper()
|
||||
ret := m.ctrl.Call(m, "WorkerSetClusters", arg0, arg1, arg2)
|
||||
ret := m.ctrl.Call(m, "WorkerSetTags", arg0, arg1, arg2)
|
||||
ret0, _ := ret[0].(error)
|
||||
return ret0
|
||||
}
|
||||
|
||||
// WorkerSetClusters indicates an expected call of WorkerSetClusters.
|
||||
func (mr *MockPersistenceServiceMockRecorder) WorkerSetClusters(arg0, arg1, arg2 interface{}) *gomock.Call {
|
||||
// WorkerSetTags indicates an expected call of WorkerSetTags.
|
||||
func (mr *MockPersistenceServiceMockRecorder) WorkerSetTags(arg0, arg1, arg2 interface{}) *gomock.Call {
|
||||
mr.mock.ctrl.T.Helper()
|
||||
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "WorkerSetClusters", reflect.TypeOf((*MockPersistenceService)(nil).WorkerSetClusters), arg0, arg1, arg2)
|
||||
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "WorkerSetTags", reflect.TypeOf((*MockPersistenceService)(nil).WorkerSetTags), arg0, arg1, arg2)
|
||||
}
|
||||
|
||||
// WorkersLeftToRun mocks base method.
|
||||
|
@ -182,7 +182,7 @@ func (f *Flamenco) RequestWorkerStatusChange(e echo.Context, workerUUID string)
|
||||
return e.NoContent(http.StatusNoContent)
|
||||
}
|
||||
|
||||
func (f *Flamenco) SetWorkerClusters(e echo.Context, workerUUID string) error {
|
||||
func (f *Flamenco) SetWorkerTags(e echo.Context, workerUUID string) error {
|
||||
ctx := e.Request().Context()
|
||||
logger := requestLogger(e)
|
||||
logger = logger.With().Str("worker", workerUUID).Logger()
|
||||
@ -192,7 +192,7 @@ func (f *Flamenco) SetWorkerClusters(e echo.Context, workerUUID string) error {
|
||||
}
|
||||
|
||||
// Decode the request body.
|
||||
var change api.WorkerClusterChangeRequest
|
||||
var change api.WorkerTagChangeRequest
|
||||
if err := e.Bind(&change); err != nil {
|
||||
logger.Warn().Err(err).Msg("bad request received")
|
||||
return sendAPIError(e, http.StatusBadRequest, "invalid format")
|
||||
@ -210,13 +210,13 @@ func (f *Flamenco) SetWorkerClusters(e echo.Context, workerUUID string) error {
|
||||
}
|
||||
|
||||
logger = logger.With().
|
||||
Strs("clusters", change.ClusterIds).
|
||||
Strs("tags", change.TagIds).
|
||||
Logger()
|
||||
logger.Info().Msg("worker cluster change requested")
|
||||
logger.Info().Msg("worker tag change requested")
|
||||
|
||||
// Store the new cluster assignment.
|
||||
if err := f.persist.WorkerSetClusters(ctx, dbWorker, change.ClusterIds); err != nil {
|
||||
logger.Error().Err(err).Msg("saving worker after cluster change request")
|
||||
// Store the new tag assignment.
|
||||
if err := f.persist.WorkerSetTags(ctx, dbWorker, change.TagIds); err != nil {
|
||||
logger.Error().Err(err).Msg("saving worker after tag change request")
|
||||
return sendAPIError(e, http.StatusInternalServerError, "error saving worker: %v", err)
|
||||
}
|
||||
|
||||
@@ -227,155 +227,155 @@ func (f *Flamenco) SetWorkerClusters(e echo.Context, workerUUID string) error {
	return e.NoContent(http.StatusNoContent)
}

func (f *Flamenco) DeleteWorkerCluster(e echo.Context, clusterUUID string) error {
func (f *Flamenco) DeleteWorkerTag(e echo.Context, tagUUID string) error {
	ctx := e.Request().Context()
	logger := requestLogger(e)
	logger = logger.With().Str("cluster", clusterUUID).Logger()
	logger = logger.With().Str("tag", tagUUID).Logger()

	if !uuid.IsValid(clusterUUID) {
	if !uuid.IsValid(tagUUID) {
		return sendAPIError(e, http.StatusBadRequest, "not a valid UUID")
	}

	err := f.persist.DeleteWorkerCluster(ctx, clusterUUID)
	err := f.persist.DeleteWorkerTag(ctx, tagUUID)
	switch {
	case errors.Is(err, persistence.ErrWorkerClusterNotFound):
		logger.Debug().Msg("non-existent worker cluster requested")
		return sendAPIError(e, http.StatusNotFound, "worker cluster %q not found", clusterUUID)
	case errors.Is(err, persistence.ErrWorkerTagNotFound):
		logger.Debug().Msg("non-existent worker tag requested")
		return sendAPIError(e, http.StatusNotFound, "worker tag %q not found", tagUUID)
	case err != nil:
		logger.Error().Err(err).Msg("deleting worker cluster")
		return sendAPIError(e, http.StatusInternalServerError, "error deleting worker cluster: %v", err)
		logger.Error().Err(err).Msg("deleting worker tag")
		return sendAPIError(e, http.StatusInternalServerError, "error deleting worker tag: %v", err)
	}

	// TODO: SocketIO broadcast of cluster deletion.
	// TODO: SocketIO broadcast of tag deletion.

	logger.Info().Msg("worker cluster deleted")
	logger.Info().Msg("worker tag deleted")
	return e.NoContent(http.StatusNoContent)
}
||||
func (f *Flamenco) FetchWorkerCluster(e echo.Context, clusterUUID string) error {
|
||||
func (f *Flamenco) FetchWorkerTag(e echo.Context, tagUUID string) error {
|
||||
ctx := e.Request().Context()
|
||||
logger := requestLogger(e)
|
||||
logger = logger.With().Str("cluster", clusterUUID).Logger()
|
||||
logger = logger.With().Str("tag", tagUUID).Logger()
|
||||
|
||||
if !uuid.IsValid(clusterUUID) {
|
||||
if !uuid.IsValid(tagUUID) {
|
||||
return sendAPIError(e, http.StatusBadRequest, "not a valid UUID")
|
||||
}
|
||||
|
||||
cluster, err := f.persist.FetchWorkerCluster(ctx, clusterUUID)
|
||||
tag, err := f.persist.FetchWorkerTag(ctx, tagUUID)
|
||||
switch {
|
||||
case errors.Is(err, persistence.ErrWorkerClusterNotFound):
|
||||
logger.Debug().Msg("non-existent worker cluster requested")
|
||||
return sendAPIError(e, http.StatusNotFound, "worker cluster %q not found", clusterUUID)
|
||||
case errors.Is(err, persistence.ErrWorkerTagNotFound):
|
||||
logger.Debug().Msg("non-existent worker tag requested")
|
||||
return sendAPIError(e, http.StatusNotFound, "worker tag %q not found", tagUUID)
|
||||
case err != nil:
|
||||
logger.Error().Err(err).Msg("fetching worker cluster")
|
||||
return sendAPIError(e, http.StatusInternalServerError, "error fetching worker cluster: %v", err)
|
||||
logger.Error().Err(err).Msg("fetching worker tag")
|
||||
return sendAPIError(e, http.StatusInternalServerError, "error fetching worker tag: %v", err)
|
||||
}
|
||||
|
||||
return e.JSON(http.StatusOK, workerClusterDBtoAPI(*cluster))
|
||||
return e.JSON(http.StatusOK, workerTagDBtoAPI(*tag))
|
||||
}
|
||||
|
||||
func (f *Flamenco) UpdateWorkerCluster(e echo.Context, clusterUUID string) error {
|
||||
func (f *Flamenco) UpdateWorkerTag(e echo.Context, tagUUID string) error {
|
||||
ctx := e.Request().Context()
|
||||
logger := requestLogger(e)
|
||||
logger = logger.With().Str("cluster", clusterUUID).Logger()
|
||||
logger = logger.With().Str("tag", tagUUID).Logger()
|
||||
|
||||
if !uuid.IsValid(clusterUUID) {
|
||||
if !uuid.IsValid(tagUUID) {
|
||||
return sendAPIError(e, http.StatusBadRequest, "not a valid UUID")
|
||||
}
|
||||
|
||||
// Decode the request body.
|
||||
var update api.UpdateWorkerClusterJSONBody
|
||||
var update api.UpdateWorkerTagJSONBody
|
||||
if err := e.Bind(&update); err != nil {
|
||||
logger.Warn().Err(err).Msg("bad request received")
|
||||
return sendAPIError(e, http.StatusBadRequest, "invalid format")
|
||||
}
|
||||
|
||||
dbCluster, err := f.persist.FetchWorkerCluster(ctx, clusterUUID)
|
||||
dbTag, err := f.persist.FetchWorkerTag(ctx, tagUUID)
|
||||
switch {
|
||||
case errors.Is(err, persistence.ErrWorkerClusterNotFound):
|
||||
logger.Debug().Msg("non-existent worker cluster requested")
|
||||
return sendAPIError(e, http.StatusNotFound, "worker cluster %q not found", clusterUUID)
|
||||
case errors.Is(err, persistence.ErrWorkerTagNotFound):
|
||||
logger.Debug().Msg("non-existent worker tag requested")
|
||||
return sendAPIError(e, http.StatusNotFound, "worker tag %q not found", tagUUID)
|
||||
case err != nil:
|
||||
logger.Error().Err(err).Msg("fetching worker cluster")
|
||||
return sendAPIError(e, http.StatusInternalServerError, "error fetching worker cluster: %v", err)
|
||||
logger.Error().Err(err).Msg("fetching worker tag")
|
||||
return sendAPIError(e, http.StatusInternalServerError, "error fetching worker tag: %v", err)
|
||||
}
|
||||
|
||||
// Update the cluster.
|
||||
dbCluster.Name = update.Name
|
||||
// Update the tag.
|
||||
dbTag.Name = update.Name
|
||||
if update.Description == nil {
|
||||
dbCluster.Description = ""
|
||||
dbTag.Description = ""
|
||||
} else {
|
||||
dbCluster.Description = *update.Description
|
||||
dbTag.Description = *update.Description
|
||||
}
|
||||
|
||||
if err := f.persist.SaveWorkerCluster(ctx, dbCluster); err != nil {
|
||||
logger.Error().Err(err).Msg("saving worker cluster")
|
||||
return sendAPIError(e, http.StatusInternalServerError, "error saving worker cluster")
|
||||
if err := f.persist.SaveWorkerTag(ctx, dbTag); err != nil {
|
||||
logger.Error().Err(err).Msg("saving worker tag")
|
||||
return sendAPIError(e, http.StatusInternalServerError, "error saving worker tag")
|
||||
}
|
||||
|
||||
// TODO: SocketIO broadcast of cluster update.
|
||||
// TODO: SocketIO broadcast of tag update.
|
||||
|
||||
return e.NoContent(http.StatusNoContent)
|
||||
}
|
||||
|
||||
func (f *Flamenco) FetchWorkerClusters(e echo.Context) error {
|
||||
func (f *Flamenco) FetchWorkerTags(e echo.Context) error {
|
||||
ctx := e.Request().Context()
|
||||
logger := requestLogger(e)
|
||||
|
||||
dbClusters, err := f.persist.FetchWorkerClusters(ctx)
|
||||
dbTags, err := f.persist.FetchWorkerTags(ctx)
|
||||
if err != nil {
|
||||
logger.Error().Err(err).Msg("fetching worker clusters")
|
||||
return sendAPIError(e, http.StatusInternalServerError, "error saving worker cluster")
|
||||
logger.Error().Err(err).Msg("fetching worker tags")
|
||||
return sendAPIError(e, http.StatusInternalServerError, "error saving worker tag")
|
||||
}
|
||||
|
||||
apiClusters := []api.WorkerCluster{}
|
||||
for _, dbCluster := range dbClusters {
|
||||
apiCluster := workerClusterDBtoAPI(*dbCluster)
|
||||
apiClusters = append(apiClusters, apiCluster)
|
||||
apiTags := []api.WorkerTag{}
|
||||
for _, dbTag := range dbTags {
|
||||
apiTag := workerTagDBtoAPI(*dbTag)
|
||||
apiTags = append(apiTags, apiTag)
|
||||
}
|
||||
|
||||
clusterList := api.WorkerClusterList{
|
||||
Clusters: &apiClusters,
|
||||
tagList := api.WorkerTagList{
|
||||
Tags: &apiTags,
|
||||
}
|
||||
return e.JSON(http.StatusOK, &clusterList)
|
||||
return e.JSON(http.StatusOK, &tagList)
|
||||
}
|
||||
|
||||
func (f *Flamenco) CreateWorkerCluster(e echo.Context) error {
|
||||
func (f *Flamenco) CreateWorkerTag(e echo.Context) error {
|
||||
ctx := e.Request().Context()
|
||||
logger := requestLogger(e)
|
||||
|
||||
// Decode the request body.
|
||||
var apiCluster api.CreateWorkerClusterJSONBody
|
||||
if err := e.Bind(&apiCluster); err != nil {
|
||||
var apiTag api.CreateWorkerTagJSONBody
|
||||
if err := e.Bind(&apiTag); err != nil {
|
||||
logger.Warn().Err(err).Msg("bad request received")
|
||||
return sendAPIError(e, http.StatusBadRequest, "invalid format")
|
||||
}
|
||||
|
||||
// Convert to persistence layer model.
|
||||
var clusterUUID string
|
||||
if apiCluster.Id != nil && *apiCluster.Id != "" {
|
||||
clusterUUID = *apiCluster.Id
|
||||
var tagUUID string
|
||||
if apiTag.Id != nil && *apiTag.Id != "" {
|
||||
tagUUID = *apiTag.Id
|
||||
} else {
|
||||
clusterUUID = uuid.New()
|
||||
tagUUID = uuid.New()
|
||||
}
|
||||
|
||||
dbCluster := persistence.WorkerCluster{
|
||||
UUID: clusterUUID,
|
||||
Name: apiCluster.Name,
|
||||
dbTag := persistence.WorkerTag{
|
||||
UUID: tagUUID,
|
||||
Name: apiTag.Name,
|
||||
}
|
||||
if apiCluster.Description != nil {
|
||||
dbCluster.Description = *apiCluster.Description
|
||||
if apiTag.Description != nil {
|
||||
dbTag.Description = *apiTag.Description
|
||||
}
|
||||
|
||||
// Store in the database.
|
||||
if err := f.persist.CreateWorkerCluster(ctx, &dbCluster); err != nil {
|
||||
logger.Error().Err(err).Msg("creating worker cluster")
|
||||
return sendAPIError(e, http.StatusInternalServerError, "error creating worker cluster")
|
||||
if err := f.persist.CreateWorkerTag(ctx, &dbTag); err != nil {
|
||||
logger.Error().Err(err).Msg("creating worker tag")
|
||||
return sendAPIError(e, http.StatusInternalServerError, "error creating worker tag")
|
||||
}
|
||||
|
||||
// TODO: SocketIO broadcast of cluster creation.
|
||||
// TODO: SocketIO broadcast of tag creation.
|
||||
|
||||
return e.JSON(http.StatusOK, workerClusterDBtoAPI(dbCluster))
|
||||
return e.JSON(http.StatusOK, workerTagDBtoAPI(dbTag))
|
||||
}
|
||||
|
||||
func workerSummary(w persistence.Worker) api.WorkerSummary {
|
||||
@ -407,26 +407,26 @@ func workerDBtoAPI(w persistence.Worker) api.Worker {
|
||||
SupportedTaskTypes: w.TaskTypes(),
|
||||
}
|
||||
|
||||
if len(w.Clusters) > 0 {
|
||||
clusters := []api.WorkerCluster{}
|
||||
for i := range w.Clusters {
|
||||
clusters = append(clusters, workerClusterDBtoAPI(*w.Clusters[i]))
|
||||
if len(w.Tags) > 0 {
|
||||
tags := []api.WorkerTag{}
|
||||
for i := range w.Tags {
|
||||
tags = append(tags, workerTagDBtoAPI(*w.Tags[i]))
|
||||
}
|
||||
apiWorker.Clusters = &clusters
|
||||
apiWorker.Tags = &tags
|
||||
}
|
||||
|
||||
return apiWorker
|
||||
}
|
||||
|
||||
func workerClusterDBtoAPI(wc persistence.WorkerCluster) api.WorkerCluster {
|
||||
func workerTagDBtoAPI(wc persistence.WorkerTag) api.WorkerTag {
|
||||
uuid := wc.UUID // Take a copy for safety.
|
||||
|
||||
apiCluster := api.WorkerCluster{
|
||||
apiTag := api.WorkerTag{
|
||||
Id: &uuid,
|
||||
Name: wc.Name,
|
||||
}
|
||||
if len(wc.Description) > 0 {
|
||||
apiCluster.Description = &wc.Description
|
||||
apiTag.Description = &wc.Description
|
||||
}
|
||||
return apiCluster
|
||||
return apiTag
|
||||
}
|
||||
|
@ -262,58 +262,58 @@ func TestRequestWorkerStatusChangeRevert(t *testing.T) {
|
||||
assertResponseNoContent(t, echo)
|
||||
}
|
||||
|
||||
func TestWorkerClusterCRUDHappyFlow(t *testing.T) {
|
||||
func TestWorkerTagCRUDHappyFlow(t *testing.T) {
|
||||
mockCtrl := gomock.NewController(t)
|
||||
defer mockCtrl.Finish()
|
||||
|
||||
mf := newMockedFlamenco(mockCtrl)
|
||||
|
||||
// Create a cluster.
|
||||
// Create a tag.
|
||||
UUID := "18d9234e-5135-458f-a1ba-a350c3d4e837"
|
||||
apiCluster := api.WorkerCluster{
|
||||
apiTag := api.WorkerTag{
|
||||
Id: &UUID,
|
||||
Name: "ʻO nā manu ʻino",
|
||||
Description: ptr("Ke aloha"),
|
||||
}
|
||||
expectDBCluster := persistence.WorkerCluster{
|
||||
expectDBTag := persistence.WorkerTag{
|
||||
UUID: UUID,
|
||||
Name: apiCluster.Name,
|
||||
Description: *apiCluster.Description,
|
||||
Name: apiTag.Name,
|
||||
Description: *apiTag.Description,
|
||||
}
|
||||
mf.persistence.EXPECT().CreateWorkerCluster(gomock.Any(), &expectDBCluster)
|
||||
// TODO: expect SocketIO broadcast of the cluster creation.
|
||||
echo := mf.prepareMockedJSONRequest(apiCluster)
|
||||
require.NoError(t, mf.flamenco.CreateWorkerCluster(echo))
|
||||
assertResponseJSON(t, echo, http.StatusOK, &apiCluster)
|
||||
mf.persistence.EXPECT().CreateWorkerTag(gomock.Any(), &expectDBTag)
|
||||
// TODO: expect SocketIO broadcast of the tag creation.
|
||||
echo := mf.prepareMockedJSONRequest(apiTag)
|
||||
require.NoError(t, mf.flamenco.CreateWorkerTag(echo))
|
||||
assertResponseJSON(t, echo, http.StatusOK, &apiTag)
|
||||
|
||||
// Fetch the cluster
|
||||
mf.persistence.EXPECT().FetchWorkerCluster(gomock.Any(), UUID).Return(&expectDBCluster, nil)
|
||||
// Fetch the tag
|
||||
mf.persistence.EXPECT().FetchWorkerTag(gomock.Any(), UUID).Return(&expectDBTag, nil)
|
||||
echo = mf.prepareMockedRequest(nil)
|
||||
require.NoError(t, mf.flamenco.FetchWorkerCluster(echo, UUID))
|
||||
assertResponseJSON(t, echo, http.StatusOK, &apiCluster)
|
||||
require.NoError(t, mf.flamenco.FetchWorkerTag(echo, UUID))
|
||||
assertResponseJSON(t, echo, http.StatusOK, &apiTag)
|
||||
|
||||
// Update & save.
|
||||
newUUID := "60442762-83d3-4fc3-bf75-6ab5799cdbaa"
|
||||
newAPICluster := api.WorkerCluster{
|
||||
newAPITag := api.WorkerTag{
|
||||
Id: &newUUID, // Intentionally change the UUID. This should just be ignored.
|
||||
Name: "updated name",
|
||||
}
|
||||
expectNewDBCluster := persistence.WorkerCluster{
|
||||
expectNewDBTag := persistence.WorkerTag{
|
||||
UUID: UUID,
|
||||
Name: newAPICluster.Name,
|
||||
Name: newAPITag.Name,
|
||||
Description: "",
|
||||
}
|
||||
// TODO: expect SocketIO broadcast of the cluster update.
|
||||
mf.persistence.EXPECT().FetchWorkerCluster(gomock.Any(), UUID).Return(&expectDBCluster, nil)
|
||||
mf.persistence.EXPECT().SaveWorkerCluster(gomock.Any(), &expectNewDBCluster)
|
||||
echo = mf.prepareMockedJSONRequest(newAPICluster)
|
||||
require.NoError(t, mf.flamenco.UpdateWorkerCluster(echo, UUID))
|
||||
// TODO: expect SocketIO broadcast of the tag update.
|
||||
mf.persistence.EXPECT().FetchWorkerTag(gomock.Any(), UUID).Return(&expectDBTag, nil)
|
||||
mf.persistence.EXPECT().SaveWorkerTag(gomock.Any(), &expectNewDBTag)
|
||||
echo = mf.prepareMockedJSONRequest(newAPITag)
|
||||
require.NoError(t, mf.flamenco.UpdateWorkerTag(echo, UUID))
|
||||
assertResponseNoContent(t, echo)
|
||||
|
||||
// Delete.
|
||||
mf.persistence.EXPECT().DeleteWorkerCluster(gomock.Any(), UUID)
|
||||
// TODO: expect SocketIO broadcast of the cluster deletion.
|
||||
echo = mf.prepareMockedJSONRequest(newAPICluster)
|
||||
require.NoError(t, mf.flamenco.DeleteWorkerCluster(echo, UUID))
|
||||
mf.persistence.EXPECT().DeleteWorkerTag(gomock.Any(), UUID)
|
||||
// TODO: expect SocketIO broadcast of the tag deletion.
|
||||
echo = mf.prepareMockedJSONRequest(newAPITag)
|
||||
require.NoError(t, mf.flamenco.DeleteWorkerTag(echo, UUID))
|
||||
assertResponseNoContent(t, echo)
|
||||
}
|
||||
|
@ -65,11 +65,11 @@ func (f *Flamenco) SetWorkerSleepSchedule(e echo.Context, workerUUID string) err
|
||||
DaysOfWeek: schedule.DaysOfWeek,
|
||||
}
|
||||
if err := dbSchedule.StartTime.Scan(schedule.StartTime); err != nil {
|
||||
logger.Warn().Err(err).Msg("bad request received, cannot parse schedule start time")
|
||||
logger.Warn().Interface("schedule", schedule).Err(err).Msg("bad request received, cannot parse schedule start time")
|
||||
return sendAPIError(e, http.StatusBadRequest, "invalid format for schedule start time")
|
||||
}
|
||||
if err := dbSchedule.EndTime.Scan(schedule.EndTime); err != nil {
|
||||
logger.Warn().Err(err).Msg("bad request received, cannot parse schedule end time")
|
||||
logger.Warn().Interface("schedule", schedule).Err(err).Msg("bad request received, cannot parse schedule end time")
|
||||
return sendAPIError(e, http.StatusBadRequest, "invalid format for schedule end time")
|
||||
}
|
||||
|
||||
@ -84,6 +84,5 @@ func (f *Flamenco) SetWorkerSleepSchedule(e echo.Context, workerUUID string) err
|
||||
return sendAPIError(e, http.StatusInternalServerError, "error fetching sleep schedule: %v", err)
|
||||
}
|
||||
|
||||
logger.Info().Interface("schedule", schedule).Msg("worker sleep schedule updated")
|
||||
return e.NoContent(http.StatusNoContent)
|
||||
}
|
||||
|
@@ -73,8 +73,11 @@ type Base struct {
	Meta ConfMeta `yaml:"_meta"`

	ManagerName string `yaml:"manager_name"`
	DatabaseDSN string `yaml:"database"`
	Listen      string `yaml:"listen"`

	DatabaseDSN      string        `yaml:"database"`
	DBIntegrityCheck time.Duration `yaml:"database_check_period"`

	Listen string `yaml:"listen"`

	SSDPDiscovery bool `yaml:"autodiscoverable"`

@@ -20,6 +20,7 @@ var defaultConfig = Conf{
	Listen: ":8080",
	// ListenHTTPS: ":8433",
	DatabaseDSN:             "flamenco-manager.sqlite",
	DBIntegrityCheck:        1 * time.Hour,
	SSDPDiscovery:           true,
	LocalManagerStoragePath: "./flamenco-manager-storage",
	SharedStoragePath:       "", // Empty string means "first run", and should trigger the config setup assistant.
@ -20,8 +20,8 @@ type Author struct {
|
||||
}
|
||||
|
||||
type AuthoredJob struct {
|
||||
JobID string
|
||||
WorkerClusterUUID string
|
||||
JobID string
|
||||
WorkerTagUUID string
|
||||
|
||||
Name string
|
||||
JobType string
|
||||
|
@ -127,8 +127,8 @@ func (s *Service) Compile(ctx context.Context, sj api.SubmittedJob) (*AuthoredJo
|
||||
aj.Storage.ShamanCheckoutID = *sj.Storage.ShamanCheckoutId
|
||||
}
|
||||
|
||||
if sj.WorkerCluster != nil {
|
||||
aj.WorkerClusterUUID = *sj.WorkerCluster
|
||||
if sj.WorkerTag != nil {
|
||||
aj.WorkerTagUUID = *sj.WorkerTag
|
||||
}
|
||||
|
||||
compiler, err := vm.getCompileJob()
|
||||
|
@ -45,12 +45,12 @@ func exampleSubmittedJob() api.SubmittedJob {
|
||||
"user.name": "Sybren Stüvel",
|
||||
}}
|
||||
sj := api.SubmittedJob{
|
||||
Name: "3Д рендеринг",
|
||||
Priority: 50,
|
||||
Type: "simple-blender-render",
|
||||
Settings: &settings,
|
||||
Metadata: &metadata,
|
||||
WorkerCluster: ptr("acce9983-e663-4210-b3cc-f7bfa629cb21"),
|
||||
Name: "3Д рендеринг",
|
||||
Priority: 50,
|
||||
Type: "simple-blender-render",
|
||||
Settings: &settings,
|
||||
Metadata: &metadata,
|
||||
WorkerTag: ptr("acce9983-e663-4210-b3cc-f7bfa629cb21"),
|
||||
}
|
||||
return sj
|
||||
}
|
||||
@ -80,7 +80,7 @@ func TestSimpleBlenderRenderHappy(t *testing.T) {
|
||||
|
||||
// Properties should be copied as-is.
|
||||
assert.Equal(t, sj.Name, aj.Name)
|
||||
assert.Equal(t, *sj.WorkerCluster, aj.WorkerClusterUUID)
|
||||
assert.Equal(t, *sj.WorkerTag, aj.WorkerTagUUID)
|
||||
assert.Equal(t, sj.Type, aj.JobType)
|
||||
assert.Equal(t, sj.Priority, aj.Priority)
|
||||
assert.EqualValues(t, sj.Settings.AdditionalProperties, aj.Settings)
|
||||
@ -139,7 +139,7 @@ func TestSimpleBlenderRenderHappy(t *testing.T) {
|
||||
assert.Equal(t, expectDeps, tVideo.Dependencies)
|
||||
}
|
||||
|
||||
func TestJobWithoutCluster(t *testing.T) {
|
||||
func TestJobWithoutTag(t *testing.T) {
|
||||
c := mockedClock(t)
|
||||
|
||||
s, err := Load(c)
|
||||
@ -151,20 +151,20 @@ func TestJobWithoutCluster(t *testing.T) {
|
||||
|
||||
sj := exampleSubmittedJob()
|
||||
|
||||
// Try with nil WorkerCluster.
|
||||
// Try with nil WorkerTag.
|
||||
{
|
||||
sj.WorkerCluster = nil
|
||||
sj.WorkerTag = nil
|
||||
aj, err := s.Compile(ctx, sj)
|
||||
require.NoError(t, err)
|
||||
assert.Zero(t, aj.WorkerClusterUUID)
|
||||
assert.Zero(t, aj.WorkerTagUUID)
|
||||
}
|
||||
|
||||
// Try with empty WorkerCluster.
|
||||
// Try with empty WorkerTag.
|
||||
{
|
||||
sj.WorkerCluster = ptr("")
|
||||
sj.WorkerTag = ptr("")
|
||||
aj, err := s.Compile(ctx, sj)
|
||||
require.NoError(t, err)
|
||||
assert.Zero(t, aj.WorkerClusterUUID)
|
||||
assert.Zero(t, aj.WorkerTagUUID)
|
||||
}
|
||||
}
|
||||
|
||||
|
@@ -4,7 +4,12 @@ const JOB_TYPE = {
	label: "Simple Blender Render",
	settings: [
		// Settings for artists to determine:
		{ key: "frames", type: "string", required: true, eval: "f'{C.scene.frame_start}-{C.scene.frame_end}'",
		{ key: "frames", type: "string", required: true,
			eval: "f'{C.scene.frame_start}-{C.scene.frame_end}'",
			evalInfo: {
				showLinkButton: true,
				description: "Scene frame range",
			},
			description: "Frame range to render. Examples: '47', '1-30', '3, 5-10, 47-327'" },
		{ key: "chunk_size", type: "int32", default: 1, description: "Number of frames to render in one Blender render task",
			visible: "submission" },
@@ -15,8 +15,6 @@ import (
	"github.com/glebarez/sqlite"
)

const checkPeriod = 1 * time.Hour

// DB provides the database interface.
type DB struct {
	gormDB *gorm.DB
@@ -117,17 +115,9 @@ func openDBWithConfig(dsn string, config *gorm.Config) (*DB, error) {
	sqlDB.SetMaxIdleConns(1) // Max num of connections in the idle connection pool.
	sqlDB.SetMaxOpenConns(1) // Max num of open connections to the database.

	// Enable foreign key checks.
	log.Trace().Msg("enabling SQLite foreign key checks")
	if tx := gormDB.Exec("PRAGMA foreign_keys = 1"); tx.Error != nil {
		return nil, fmt.Errorf("enabling foreign keys: %w", tx.Error)
	}
	var fkEnabled int
	if tx := gormDB.Raw("PRAGMA foreign_keys").Scan(&fkEnabled); tx.Error != nil {
		return nil, fmt.Errorf("checking whether the database has foreign key checks enabled: %w", tx.Error)
	}
	if fkEnabled == 0 {
		log.Error().Msg("SQLite database does not want to enable foreign keys, this may cause data loss")
	// Always enable foreign key checks, to make SQLite behave like a real database.
	if err := db.pragmaForeignKeys(true); err != nil {
		return nil, err
	}

	// Write-ahead-log journal may improve writing speed.
@@ -167,3 +157,36 @@ func (db *DB) Close() error {
	}
	return sqldb.Close()
}

func (db *DB) pragmaForeignKeys(enabled bool) error {
	var (
		value int
		noun  string
	)
	switch enabled {
	case false:
		value = 0
		noun = "disabl"
	case true:
		value = 1
		noun = "enabl"
	}

	log.Trace().Msgf("%sing SQLite foreign key checks", noun)

	// SQLite doesn't seem to like SQL parameters for `PRAGMA`, so `PRAGMA foreign_keys = ?` doesn't work.
	sql := fmt.Sprintf("PRAGMA foreign_keys = %d", value)

	if tx := db.gormDB.Exec(sql); tx.Error != nil {
		return fmt.Errorf("%sing foreign keys: %w", noun, tx.Error)
	}
	var fkEnabled int
	if tx := db.gormDB.Raw("PRAGMA foreign_keys").Scan(&fkEnabled); tx.Error != nil {
		return fmt.Errorf("checking whether the database has foreign key checks %sed: %w", noun, tx.Error)
	}
	if fkEnabled != value {
		return fmt.Errorf("SQLite database does not want to %se foreign keys, this may cause data loss", noun)
	}

	return nil
}
@@ -4,9 +4,37 @@ package persistence

import (
	"fmt"

	"github.com/rs/zerolog/log"
)

func (db *DB) migrate() error {
	log.Debug().Msg("auto-migrating database")

	// There is an issue with the GORM auto-migration, in that it doesn't always
	// disable foreign key constraints when it should. Due to limitations of
	// SQLite, not all 'alter table' commands you'd want to use are available. As
	// a workaround, these steps are performed:
	//
	// 1. create a new table with the desired schema,
	// 2. copy the data over,
	// 3. drop the old table,
	// 4. rename the new table to the old name.
	//
	// Step #3 will wreak havoc with the database when foreign key constraint
	// checks are active.

	if err := db.pragmaForeignKeys(false); err != nil {
		return fmt.Errorf("disabling foreign key checks before auto-migration: %w", err)
	}
	defer func() {
		err := db.pragmaForeignKeys(true)
		if err != nil {
			// There is no way that Flamenco Manager should be runnign with foreign key checks disabled.
			log.Fatal().Err(err).Msg("re-enabling foreign key checks after auto-migration failed")
		}
	}()

	err := db.gormDB.AutoMigrate(
		&Job{},
		&JobBlock{},
@@ -16,7 +44,7 @@ func (db *DB) migrate() error {
		&Task{},
		&TaskFailure{},
		&Worker{},
		&WorkerCluster{},
		&WorkerTag{},
	)
	if err != nil {
		return fmt.Errorf("failed to automigrate database: %v", err)
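To make the table-recreation workaround described in the comment above concrete, here is a purely illustrative sketch — not part of this diff, with a hypothetical `workers` table — of the four steps GORM's SQLite migrator roughly corresponds to; step 3 is the `DROP TABLE` that would trip foreign key checks if they were still enabled:

// Illustrative only: hypothetical schema, not Flamenco's actual migration SQL.
// Assumes `import "gorm.io/gorm"`.
func recreateTableExample(tx *gorm.DB) error {
	steps := []string{
		`CREATE TABLE workers_new (id INTEGER PRIMARY KEY, name TEXT NOT NULL)`, // 1. new table with the desired schema
		`INSERT INTO workers_new (id, name) SELECT id, name FROM workers`,       // 2. copy the data over
		`DROP TABLE workers`,                                                    // 3. drop the old table (breaks FK checks if enabled)
		`ALTER TABLE workers_new RENAME TO workers`,                             // 4. rename the new table to the old name
	}
	for _, sql := range steps {
		if err := tx.Exec(sql).Error; err != nil {
			return err
		}
	}
	return nil
}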
@ -9,10 +9,10 @@ import (
|
||||
)
|
||||
|
||||
var (
|
||||
ErrJobNotFound = PersistenceError{Message: "job not found", Err: gorm.ErrRecordNotFound}
|
||||
ErrTaskNotFound = PersistenceError{Message: "task not found", Err: gorm.ErrRecordNotFound}
|
||||
ErrWorkerNotFound = PersistenceError{Message: "worker not found", Err: gorm.ErrRecordNotFound}
|
||||
ErrWorkerClusterNotFound = PersistenceError{Message: "worker cluster not found", Err: gorm.ErrRecordNotFound}
|
||||
ErrJobNotFound = PersistenceError{Message: "job not found", Err: gorm.ErrRecordNotFound}
|
||||
ErrTaskNotFound = PersistenceError{Message: "task not found", Err: gorm.ErrRecordNotFound}
|
||||
ErrWorkerNotFound = PersistenceError{Message: "worker not found", Err: gorm.ErrRecordNotFound}
|
||||
ErrWorkerTagNotFound = PersistenceError{Message: "worker tag not found", Err: gorm.ErrRecordNotFound}
|
||||
)
|
||||
|
||||
type PersistenceError struct {
|
||||
@ -40,8 +40,8 @@ func workerError(errorToWrap error, message string, msgArgs ...interface{}) erro
|
||||
return wrapError(translateGormWorkerError(errorToWrap), message, msgArgs...)
|
||||
}
|
||||
|
||||
func workerClusterError(errorToWrap error, message string, msgArgs ...interface{}) error {
|
||||
return wrapError(translateGormWorkerClusterError(errorToWrap), message, msgArgs...)
|
||||
func workerTagError(errorToWrap error, message string, msgArgs ...interface{}) error {
|
||||
return wrapError(translateGormWorkerTagError(errorToWrap), message, msgArgs...)
|
||||
}
|
||||
|
||||
func wrapError(errorToWrap error, message string, format ...interface{}) error {
|
||||
@ -86,11 +86,11 @@ func translateGormWorkerError(gormError error) error {
|
||||
return gormError
|
||||
}
|
||||
|
||||
// translateGormWorkerClusterError translates a Gorm error to a persistence layer error.
|
||||
// translateGormWorkerTagError translates a Gorm error to a persistence layer error.
|
||||
// This helps to keep Gorm as "implementation detail" of the persistence layer.
|
||||
func translateGormWorkerClusterError(gormError error) error {
|
||||
func translateGormWorkerTagError(gormError error) error {
|
||||
if errors.Is(gormError, gorm.ErrRecordNotFound) {
|
||||
return ErrWorkerClusterNotFound
|
||||
return ErrWorkerTagNotFound
|
||||
}
|
||||
return gormError
|
||||
}
|
||||
|
@ -12,7 +12,9 @@ import (
|
||||
|
||||
var ErrIntegrity = errors.New("database integrity check failed")
|
||||
|
||||
const integrityCheckTimeout = 2 * time.Second
|
||||
const (
|
||||
integrityCheckTimeout = 2 * time.Second
|
||||
)
|
||||
|
||||
type PragmaIntegrityCheckResult struct {
|
||||
Description string `gorm:"column:integrity_check"`
|
||||
@ -25,6 +27,38 @@ type PragmaForeignKeyCheckResult struct {
|
||||
FKID int `gorm:"column:fkid"`
|
||||
}
|
||||
|
// PeriodicIntegrityCheck periodically checks the database integrity.
// This function only returns when the context is done.
func (db *DB) PeriodicIntegrityCheck(
	ctx context.Context,
	period time.Duration,
	onErrorCallback func(),
) {
	if period == 0 {
		log.Info().Msg("database: periodic integrity check disabled")
		return
	}

	log.Info().
		Stringer("period", period).
		Msg("database: periodic integrity check starting")
	defer log.Debug().Msg("database: periodic integrity check stopping")

	for {
		select {
		case <-ctx.Done():
			return
		case <-time.After(period):
		}

		ok := db.performIntegrityCheck(ctx)
		if !ok {
			log.Error().Msg("database: periodic integrity check failed")
			onErrorCallback()
		}
	}
}
||||
|
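For illustration, a minimal sketch of how a caller might start this check in the background; the goroutine, the `persist`/`config` variables, and the `shutdown` callback are assumptions rather than code from this diff, with the configured `database_check_period` passed as `period`:

// Illustrative only; how the Manager actually wires this up is not shown here.
go persist.PeriodicIntegrityCheck(ctx, config.DBIntegrityCheck, func() {
	// Called when an integrity or foreign-key check fails.
	shutdown()
})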
||||
// performIntegrityCheck uses a few 'pragma' SQL statements to do some integrity checking.
|
||||
// Returns true on OK, false if there was an issue. Issues are always logged.
|
||||
func (db *DB) performIntegrityCheck(ctx context.Context) (ok bool) {
|
||||
@ -50,26 +84,26 @@ func (db *DB) pragmaIntegrityCheck(ctx context.Context) (ok bool) {
|
||||
Raw("PRAGMA integrity_check").
|
||||
Scan(&issues)
|
||||
if tx.Error != nil {
|
||||
log.Error().Err(tx.Error).Msg("database error checking integrity")
|
||||
log.Error().Err(tx.Error).Msg("database: error checking integrity")
|
||||
return false
|
||||
}
|
||||
|
||||
switch len(issues) {
|
||||
case 0:
|
||||
log.Warn().Msg("database integrity check returned nothing, expected explicit 'ok'; treating as an implicit 'ok'")
|
||||
log.Warn().Msg("database: integrity check returned nothing, expected explicit 'ok'; treating as an implicit 'ok'")
|
||||
return true
|
||||
case 1:
|
||||
if issues[0].Description == "ok" {
|
||||
log.Debug().Msg("database integrity check ok")
|
||||
log.Debug().Msg("database: integrity check ok")
|
||||
return true
|
||||
}
|
||||
}
|
||||
|
||||
log.Error().Int("num_issues", len(issues)).Msg("database integrity check failed")
|
||||
log.Error().Int("num_issues", len(issues)).Msg("database: integrity check failed")
|
||||
for _, issue := range issues {
|
||||
log.Error().
|
||||
Str("description", issue.Description).
|
||||
Msg("database integrity check failure")
|
||||
Msg("database: integrity check failure")
|
||||
}
|
||||
|
||||
return false
|
||||
@ -91,23 +125,23 @@ func (db *DB) pragmaForeignKeyCheck(ctx context.Context) (ok bool) {
|
||||
Raw("PRAGMA foreign_key_check").
|
||||
Scan(&issues)
|
||||
if tx.Error != nil {
|
||||
log.Error().Err(tx.Error).Msg("database error checking foreign keys")
|
||||
log.Error().Err(tx.Error).Msg("database: error checking foreign keys")
|
||||
return false
|
||||
}
|
||||
|
||||
if len(issues) == 0 {
|
||||
log.Debug().Msg("database foreign key check ok")
|
||||
log.Debug().Msg("database: foreign key check ok")
|
||||
return true
|
||||
}
|
||||
|
||||
log.Error().Int("num_issues", len(issues)).Msg("database foreign key check failed")
|
||||
log.Error().Int("num_issues", len(issues)).Msg("database: foreign key check failed")
|
||||
for _, issue := range issues {
|
||||
log.Error().
|
||||
Str("table", issue.Table).
|
||||
Int("rowid", issue.RowID).
|
||||
Str("parent", issue.Parent).
|
||||
Int("fkid", issue.FKID).
|
||||
Msg("database foreign key relation missing")
|
||||
Msg("database: foreign key relation missing")
|
||||
}
|
||||
|
||||
return false
|
||||
|
@ -36,8 +36,8 @@ type Job struct {
|
||||
|
||||
Storage JobStorageInfo `gorm:"embedded;embeddedPrefix:storage_"`
|
||||
|
||||
WorkerClusterID *uint
|
||||
WorkerCluster *WorkerCluster `gorm:"foreignkey:WorkerClusterID;references:ID;constraint:OnDelete:SET NULL"`
|
||||
WorkerTagID *uint
|
||||
WorkerTag *WorkerTag `gorm:"foreignkey:WorkerTagID;references:ID;constraint:OnDelete:SET NULL"`
|
||||
}
|
||||
|
||||
type StringInterfaceMap map[string]interface{}
|
||||
@ -148,92 +148,113 @@ func (db *DB) StoreAuthoredJob(ctx context.Context, authoredJob job_compilers.Au
|
||||
},
|
||||
}
|
||||
|
||||
// Find and assign the worker cluster.
|
||||
if authoredJob.WorkerClusterUUID != "" {
|
||||
dbCluster, err := fetchWorkerCluster(tx, authoredJob.WorkerClusterUUID)
|
||||
// Find and assign the worker tag.
|
||||
if authoredJob.WorkerTagUUID != "" {
|
||||
dbTag, err := fetchWorkerTag(tx, authoredJob.WorkerTagUUID)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
dbJob.WorkerClusterID = &dbCluster.ID
|
||||
dbJob.WorkerCluster = dbCluster
|
||||
dbJob.WorkerTagID = &dbTag.ID
|
||||
dbJob.WorkerTag = dbTag
|
||||
}
|
||||
|
||||
if err := tx.Create(&dbJob).Error; err != nil {
|
||||
return jobError(err, "storing job")
|
||||
}
|
||||
|
||||
uuidToTask := make(map[string]*Task)
|
||||
for _, authoredTask := range authoredJob.Tasks {
|
||||
var commands []Command
|
||||
for _, authoredCommand := range authoredTask.Commands {
|
||||
commands = append(commands, Command{
|
||||
Name: authoredCommand.Name,
|
||||
Parameters: StringInterfaceMap(authoredCommand.Parameters),
|
||||
})
|
||||
}
|
||||
|
||||
dbTask := Task{
|
||||
Name: authoredTask.Name,
|
||||
Type: authoredTask.Type,
|
||||
UUID: authoredTask.UUID,
|
||||
Job: &dbJob,
|
||||
Priority: authoredTask.Priority,
|
||||
Status: api.TaskStatusQueued,
|
||||
Commands: commands,
|
||||
// dependencies are stored below.
|
||||
}
|
||||
if err := tx.Create(&dbTask).Error; err != nil {
|
||||
return taskError(err, "storing task: %v", err)
|
||||
}
|
||||
|
||||
uuidToTask[authoredTask.UUID] = &dbTask
|
||||
}
|
||||
|
||||
// Store the dependencies between tasks.
|
||||
for _, authoredTask := range authoredJob.Tasks {
|
||||
if len(authoredTask.Dependencies) == 0 {
|
||||
continue
|
||||
}
|
||||
|
||||
dbTask, ok := uuidToTask[authoredTask.UUID]
|
||||
if !ok {
|
||||
return taskError(nil, "unable to find task %q in the database, even though it was just authored", authoredTask.UUID)
|
||||
}
|
||||
|
||||
deps := make([]*Task, len(authoredTask.Dependencies))
|
||||
for i, t := range authoredTask.Dependencies {
|
||||
depTask, ok := uuidToTask[t.UUID]
|
||||
if !ok {
|
||||
return taskError(nil, "finding task with UUID %q; a task depends on a task that is not part of this job", t.UUID)
|
||||
}
|
||||
deps[i] = depTask
|
||||
}
|
||||
dependenciesbatchsize := 1000
|
||||
for j := 0; j < len(deps); j += dependenciesbatchsize {
|
||||
end := j + dependenciesbatchsize
|
||||
if end > len(deps) {
|
||||
end = len(deps)
|
||||
}
|
||||
currentDeps := deps[j:end]
|
||||
dbTask.Dependencies = currentDeps
|
||||
tx.Model(&dbTask).Where("UUID = ?", dbTask.UUID)
|
||||
subQuery := tx.Model(dbTask).Updates(Task{Dependencies: currentDeps})
|
||||
if subQuery.Error != nil {
|
||||
return taskError(subQuery.Error, "error with storing dependencies of task %q issue exists in dependencies %d to %d", authoredTask.UUID, j, end)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
return db.storeAuthoredJobTaks(ctx, tx, &dbJob, &authoredJob)
|
||||
})
|
||||
}
|
||||
|
||||
// StoreAuthoredJobTaks is a low-level function that is only used for recreating an existing job's tasks.
|
||||
// It stores `authoredJob`'s tasks, but attaches them to the already-persisted `job`.
|
||||
func (db *DB) StoreAuthoredJobTaks(
|
||||
ctx context.Context,
|
||||
job *Job,
|
||||
authoredJob *job_compilers.AuthoredJob,
|
||||
) error {
|
||||
tx := db.gormDB.WithContext(ctx)
|
||||
return db.storeAuthoredJobTaks(ctx, tx, job, authoredJob)
|
||||
}
|
||||
|
||||
func (db *DB) storeAuthoredJobTaks(
|
||||
ctx context.Context,
|
||||
tx *gorm.DB,
|
||||
dbJob *Job,
|
||||
authoredJob *job_compilers.AuthoredJob,
|
||||
) error {
|
||||
|
||||
uuidToTask := make(map[string]*Task)
|
||||
for _, authoredTask := range authoredJob.Tasks {
|
||||
var commands []Command
|
||||
for _, authoredCommand := range authoredTask.Commands {
|
||||
commands = append(commands, Command{
|
||||
Name: authoredCommand.Name,
|
||||
Parameters: StringInterfaceMap(authoredCommand.Parameters),
|
||||
})
|
||||
}
|
||||
|
||||
dbTask := Task{
|
||||
Name: authoredTask.Name,
|
||||
Type: authoredTask.Type,
|
||||
UUID: authoredTask.UUID,
|
||||
Job: dbJob,
|
||||
Priority: authoredTask.Priority,
|
||||
Status: api.TaskStatusQueued,
|
||||
Commands: commands,
|
||||
// dependencies are stored below.
|
||||
}
|
||||
if err := tx.Create(&dbTask).Error; err != nil {
|
||||
return taskError(err, "storing task: %v", err)
|
||||
}
|
||||
|
||||
uuidToTask[authoredTask.UUID] = &dbTask
|
||||
}
|
||||
|
||||
// Store the dependencies between tasks.
|
||||
for _, authoredTask := range authoredJob.Tasks {
|
||||
if len(authoredTask.Dependencies) == 0 {
|
||||
continue
|
||||
}
|
||||
|
||||
dbTask, ok := uuidToTask[authoredTask.UUID]
|
||||
if !ok {
|
||||
return taskError(nil, "unable to find task %q in the database, even though it was just authored", authoredTask.UUID)
|
||||
}
|
||||
|
||||
deps := make([]*Task, len(authoredTask.Dependencies))
|
||||
for i, t := range authoredTask.Dependencies {
|
||||
depTask, ok := uuidToTask[t.UUID]
|
||||
if !ok {
|
||||
return taskError(nil, "finding task with UUID %q; a task depends on a task that is not part of this job", t.UUID)
|
||||
}
|
||||
deps[i] = depTask
|
||||
}
|
||||
dependenciesbatchsize := 1000
|
||||
for j := 0; j < len(deps); j += dependenciesbatchsize {
|
||||
end := j + dependenciesbatchsize
|
||||
if end > len(deps) {
|
||||
end = len(deps)
|
||||
}
|
||||
currentDeps := deps[j:end]
|
||||
dbTask.Dependencies = currentDeps
|
||||
tx.Model(&dbTask).Where("UUID = ?", dbTask.UUID)
|
||||
subQuery := tx.Model(dbTask).Updates(Task{Dependencies: currentDeps})
|
||||
if subQuery.Error != nil {
|
||||
return taskError(subQuery.Error, "error with storing dependencies of task %q issue exists in dependencies %d to %d", authoredTask.UUID, j, end)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// FetchJob fetches a single job, without fetching its tasks.
|
||||
func (db *DB) FetchJob(ctx context.Context, jobUUID string) (*Job, error) {
|
||||
dbJob := Job{}
|
||||
findResult := db.gormDB.WithContext(ctx).
|
||||
Limit(1).
|
||||
Preload("WorkerCluster").
|
||||
Preload("WorkerTag").
|
||||
Find(&dbJob, "uuid = ?", jobUUID)
|
||||
if findResult.Error != nil {
|
||||
return nil, jobError(findResult.Error, "fetching job")
|
||||
|
@ -108,16 +108,16 @@ func (db *DB) WorkersLeftToRun(ctx context.Context, job *Job, taskType string) (
|
||||
Select("uuid").
|
||||
Where("id not in (?)", blockedWorkers)
|
||||
|
||||
if job.WorkerClusterID == nil {
|
||||
if job.WorkerTagID == nil {
|
||||
// Count all workers, so no extra restrictions are necessary.
|
||||
} else {
|
||||
// Only count workers in the job's cluster.
|
||||
jobCluster := db.gormDB.
|
||||
Table("worker_cluster_membership").
|
||||
// Only count workers in the job's tag.
|
||||
jobTag := db.gormDB.
|
||||
Table("worker_tag_membership").
|
||||
Select("worker_id").
|
||||
Where("worker_cluster_id = ?", *job.WorkerClusterID)
|
||||
Where("worker_tag_id = ?", *job.WorkerTagID)
|
||||
query = query.
|
||||
Where("id in (?)", jobCluster)
|
||||
Where("id in (?)", jobTag)
|
||||
}
|
||||
|
||||
// Find the workers NOT blocked.
|
||||
|
@ -126,14 +126,14 @@ func TestWorkersLeftToRun(t *testing.T) {
|
||||
	worker1 := createWorker(ctx, t, db)
	worker2 := createWorkerFrom(ctx, t, db, *worker1)

	// Create one worker cluster. It will not be used by this job, but one of the
	// Create one worker tag. It will not be used by this job, but one of the
	// workers will be assigned to it. It can get this job's tasks, though.
	// Because the job is clusterless, it can be run by all.
	cluster1 := WorkerCluster{UUID: "11157623-4b14-4801-bee2-271dddab6309", Name: "Cluster 1"}
	require.NoError(t, db.CreateWorkerCluster(ctx, &cluster1))
	// Because the job is tagless, it can be run by all.
	tag1 := WorkerTag{UUID: "11157623-4b14-4801-bee2-271dddab6309", Name: "Tag 1"}
	require.NoError(t, db.CreateWorkerTag(ctx, &tag1))

	workerC1 := createWorker(ctx, t, db, func(w *Worker) {
		w.UUID = "c1c1c1c1-0000-1111-2222-333333333333"
		w.Clusters = []*WorkerCluster{&cluster1}
		w.Tags = []*WorkerTag{&tag1}
	})

	uuidMap := func(workers ...*Worker) map[string]bool {
@@ -172,43 +172,43 @@ func TestWorkersLeftToRun(t *testing.T) {
	}
}

func TestWorkersLeftToRunWithClusters(t *testing.T) {
func TestWorkersLeftToRunWithTags(t *testing.T) {
	ctx, cancel, db := persistenceTestFixtures(t, schedulerTestTimeout)
	defer cancel()

	// Create clusters.
	cluster1 := WorkerCluster{UUID: "11157623-4b14-4801-bee2-271dddab6309", Name: "Cluster 1"}
	cluster2 := WorkerCluster{UUID: "22257623-4b14-4801-bee2-271dddab6309", Name: "Cluster 2"}
	cluster3 := WorkerCluster{UUID: "33357623-4b14-4801-bee2-271dddab6309", Name: "Cluster 3"}
	require.NoError(t, db.CreateWorkerCluster(ctx, &cluster1))
	require.NoError(t, db.CreateWorkerCluster(ctx, &cluster2))
	require.NoError(t, db.CreateWorkerCluster(ctx, &cluster3))
	// Create tags.
	tag1 := WorkerTag{UUID: "11157623-4b14-4801-bee2-271dddab6309", Name: "Tag 1"}
	tag2 := WorkerTag{UUID: "22257623-4b14-4801-bee2-271dddab6309", Name: "Tag 2"}
	tag3 := WorkerTag{UUID: "33357623-4b14-4801-bee2-271dddab6309", Name: "Tag 3"}
	require.NoError(t, db.CreateWorkerTag(ctx, &tag1))
	require.NoError(t, db.CreateWorkerTag(ctx, &tag2))
	require.NoError(t, db.CreateWorkerTag(ctx, &tag3))

	// Create a job in cluster1.
	// Create a job in tag1.
	authoredJob := createTestAuthoredJobWithTasks()
	authoredJob.WorkerClusterUUID = cluster1.UUID
	authoredJob.WorkerTagUUID = tag1.UUID
	job := persistAuthoredJob(t, ctx, db, authoredJob)

	// Clusters 1 + 3
	// Tags 1 + 3
	workerC13 := createWorker(ctx, t, db, func(w *Worker) {
		w.UUID = "c13c1313-0000-1111-2222-333333333333"
		w.Clusters = []*WorkerCluster{&cluster1, &cluster3}
		w.Tags = []*WorkerTag{&tag1, &tag3}
	})
	// Cluster 1
	// Tag 1
	workerC1 := createWorker(ctx, t, db, func(w *Worker) {
		w.UUID = "c1c1c1c1-0000-1111-2222-333333333333"
		w.Clusters = []*WorkerCluster{&cluster1}
		w.Tags = []*WorkerTag{&tag1}
	})
	// Cluster 2 worker, this one should never appear.
	// Tag 2 worker, this one should never appear.
	createWorker(ctx, t, db, func(w *Worker) {
		w.UUID = "c2c2c2c2-0000-1111-2222-333333333333"
		w.Clusters = []*WorkerCluster{&cluster2}
		w.Tags = []*WorkerTag{&tag2}
	})
	// No clusters, so should be able to run only clusterless jobs. Which is none
	// No tags, so should be able to run only tagless jobs. Which is none
	// in this test.
	createWorker(ctx, t, db, func(w *Worker) {
		w.UUID = "00000000-0000-1111-2222-333333333333"
		w.Clusters = nil
		w.Tags = nil
	})

	uuidMap := func(workers ...*Worker) map[string]bool {
@@ -219,7 +219,7 @@ func TestWorkersLeftToRunWithClusters(t *testing.T) {
		return theMap
	}

	// All Cluster 1 workers, no blocklist.
	// All Tag 1 workers, no blocklist.
	left, err := db.WorkersLeftToRun(ctx, job, "blender")
	require.NoError(t, err)
	assert.Equal(t, uuidMap(workerC13, workerC1), left)
@@ -230,7 +230,7 @@ func TestWorkersLeftToRunWithClusters(t *testing.T) {
	require.NoError(t, err)
	assert.Equal(t, uuidMap(workerC13), left)

	// All clustered workers blocked.
	// All tagged workers blocked.
	_ = db.AddWorkerToJobBlocklist(ctx, job, workerC13, "blender")
	left, err = db.WorkersLeftToRun(ctx, job, "blender")
	assert.NoError(t, err)
@@ -64,7 +64,7 @@ func (db *DB) QueryJobs(ctx context.Context, apiQ api.JobsQuery) ([]*Job, error)
		}
	}

	q.Preload("Cluster")
	q.Preload("Tag")

	result := []*Job{}
	tx := q.Scan(&result)
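The Preload call above makes GORM eager-load each job's worker tag association, so callers of QueryJobs get the tag without issuing a second query per job. A minimal sketch of what that enables at a call site; the Job fields used here (Name, Tag) are assumptions for illustration and not part of this diff:

// Sketch only: query jobs with their Tag association preloaded.
jobs, err := db.QueryJobs(ctx, api.JobsQuery{})
if err != nil {
	return err
}
for _, job := range jobs {
	if job.Tag != nil {
		// job.Tag was populated by q.Preload("Tag") in a single extra query.
		fmt.Printf("job %q is limited to tag %q\n", job.Name, job.Tag.Name)
	}
}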
@@ -757,7 +757,7 @@ func createWorker(ctx context.Context, t *testing.T, db *DB, updaters ...func(*W
		Software:           "3.0",
		Status:             api.WorkerStatusAwake,
		SupportedTaskTypes: "blender,ffmpeg,file-management",
		Clusters:           nil,
		Tags:               nil,
	}

	for _, updater := range updaters {
@@ -26,7 +26,7 @@ func (db *DB) ScheduleTask(ctx context.Context, w *Worker) (*Task, error) {
	logger := log.With().Str("worker", w.UUID).Logger()
	logger.Trace().Msg("finding task for worker")

	hasWorkerClusters, err := db.HasWorkerClusters(ctx)
	hasWorkerTags, err := db.HasWorkerTags(ctx)
	if err != nil {
		return nil, err
	}
@@ -37,7 +37,7 @@ func (db *DB) ScheduleTask(ctx context.Context, w *Worker) (*Task, error) {
	var task *Task
	txErr := db.gormDB.WithContext(ctx).Transaction(func(tx *gorm.DB) error {
		var err error
		task, err = findTaskForWorker(tx, w, hasWorkerClusters)
		task, err = findTaskForWorker(tx, w, hasWorkerTags)
		if err != nil {
			if isDatabaseBusyError(err) {
				logger.Trace().Err(err).Msg("database busy while finding task for worker")
@@ -84,7 +84,7 @@ func (db *DB) ScheduleTask(ctx context.Context, w *Worker) (*Task, error) {
	return task, nil
}

func findTaskForWorker(tx *gorm.DB, w *Worker, checkWorkerClusters bool) (*Task, error) {
func findTaskForWorker(tx *gorm.DB, w *Worker, checkWorkerTags bool) (*Task, error) {
	task := Task{}

	// If a task is already active & assigned to this worker, return just that.
@@ -129,21 +129,21 @@ func findTaskForWorker(tx *gorm.DB, w *Worker, checkWorkerClusters bool) (*Task,
		Where("TF.worker_id is NULL").                         // Not failed before
		Where("tasks.type not in (?)", blockedTaskTypesQuery)  // Non-blocklisted

	if checkWorkerClusters {
		// The system has one or more clusters, so limit the available jobs to those
		// that have no cluster, or overlap with the Worker's clusters.
		if len(w.Clusters) == 0 {
			// Clusterless workers only get clusterless jobs.
	if checkWorkerTags {
		// The system has one or more tags, so limit the available jobs to those
		// that have no tag, or overlap with the Worker's tags.
		if len(w.Tags) == 0 {
			// Tagless workers only get tagless jobs.
			findTaskQuery = findTaskQuery.
				Where("jobs.worker_cluster_id is NULL")
				Where("jobs.worker_tag_id is NULL")
		} else {
			// Clustered workers get clusterless jobs AND jobs of their own clusters.
			clusterIDs := []uint{}
			for _, cluster := range w.Clusters {
				clusterIDs = append(clusterIDs, cluster.ID)
			// Tagged workers get tagless jobs AND jobs of their own tags.
			tagIDs := []uint{}
			for _, tag := range w.Tags {
				tagIDs = append(tagIDs, tag.ID)
			}
			findTaskQuery = findTaskQuery.
				Where("jobs.worker_cluster_id is NULL or worker_cluster_id in ?", clusterIDs)
				Where("jobs.worker_tag_id is NULL or worker_tag_id in ?", tagIDs)
		}
	}

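The two branches above boil down to a single restriction on the task query: a tagless worker only sees tagless jobs, while a tagged worker sees tagless jobs plus jobs of its own tags. Pulled out as a standalone helper for readability; restrictToWorkerTags is a hypothetical name, not a function in this patch, and the GORM query is the same one shown above:

// Sketch: the tag restriction applied by findTaskForWorker, restated as a helper.
func restrictToWorkerTags(findTaskQuery *gorm.DB, w *Worker) *gorm.DB {
	if len(w.Tags) == 0 {
		// A tagless worker may only pick up tagless jobs.
		return findTaskQuery.Where("jobs.worker_tag_id is NULL")
	}

	// A tagged worker may pick up tagless jobs and jobs of its own tags.
	tagIDs := []uint{}
	for _, tag := range w.Tags {
		tagIDs = append(tagIDs, tag.ID)
	}
	return findTaskQuery.Where("jobs.worker_tag_id is NULL or worker_tag_id in ?", tagIDs)
}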
@@ -291,87 +291,87 @@ func TestPreviouslyFailed(t *testing.T) {
	assert.Equal(t, att2.Name, task.Name, "the second task should have been chosen")
}

func TestWorkerClusterJobWithCluster(t *testing.T) {
func TestWorkerTagJobWithTag(t *testing.T) {
	ctx, cancel, db := persistenceTestFixtures(t, schedulerTestTimeout)
	defer cancel()

	// Create worker clusters:
	cluster1 := WorkerCluster{UUID: "f0157623-4b14-4801-bee2-271dddab6309", Name: "Cluster 1"}
	cluster2 := WorkerCluster{UUID: "2f71dba1-cf92-4752-8386-f5926affabd5", Name: "Cluster 2"}
	require.NoError(t, db.CreateWorkerCluster(ctx, &cluster1))
	require.NoError(t, db.CreateWorkerCluster(ctx, &cluster2))
	// Create worker tags:
	tag1 := WorkerTag{UUID: "f0157623-4b14-4801-bee2-271dddab6309", Name: "Tag 1"}
	tag2 := WorkerTag{UUID: "2f71dba1-cf92-4752-8386-f5926affabd5", Name: "Tag 2"}
	require.NoError(t, db.CreateWorkerTag(ctx, &tag1))
	require.NoError(t, db.CreateWorkerTag(ctx, &tag2))

	// Create a worker in cluster1:
	// Create a worker in tag1:
	workerC := linuxWorker(t, db, func(w *Worker) {
		w.Clusters = []*WorkerCluster{&cluster1}
		w.Tags = []*WorkerTag{&tag1}
	})

	// Create a worker without cluster:
	// Create a worker without tag:
	workerNC := linuxWorker(t, db, func(w *Worker) {
		w.UUID = "c53f8f68-4149-4790-991c-ba73a326551e"
		w.Clusters = nil
		w.Tags = nil
	})

	{ // Test job with different cluster:
	{ // Test job with different tag:
		authTask := authorTestTask("the task", "blender")
		job := authorTestJob("499cf0f8-e83d-4cb1-837a-df94789d07db", "simple-blender-render", authTask)
		job.WorkerClusterUUID = cluster2.UUID
		job.WorkerTagUUID = tag2.UUID
		constructTestJob(ctx, t, db, job)

		task, err := db.ScheduleTask(ctx, &workerC)
		require.NoError(t, err)
		assert.Nil(t, task, "job with different cluster should not be scheduled")
		assert.Nil(t, task, "job with different tag should not be scheduled")
	}

	{ // Test job with matching cluster:
	{ // Test job with matching tag:
		authTask := authorTestTask("the task", "blender")
		job := authorTestJob("5d4c2321-0bb7-4c13-a9dd-32a2c0cd156e", "simple-blender-render", authTask)
		job.WorkerClusterUUID = cluster1.UUID
		job.WorkerTagUUID = tag1.UUID
		constructTestJob(ctx, t, db, job)

		task, err := db.ScheduleTask(ctx, &workerC)
		require.NoError(t, err)
		require.NotNil(t, task, "job with matching cluster should be scheduled")
		require.NotNil(t, task, "job with matching tag should be scheduled")
		assert.Equal(t, authTask.UUID, task.UUID)

		task, err = db.ScheduleTask(ctx, &workerNC)
		require.NoError(t, err)
		assert.Nil(t, task, "job with cluster should not be scheduled for worker without cluster")
		assert.Nil(t, task, "job with tag should not be scheduled for worker without tag")
	}
}

func TestWorkerClusterJobWithoutCluster(t *testing.T) {
func TestWorkerTagJobWithoutTag(t *testing.T) {
	ctx, cancel, db := persistenceTestFixtures(t, schedulerTestTimeout)
	defer cancel()

	// Create worker cluster:
	cluster1 := WorkerCluster{UUID: "f0157623-4b14-4801-bee2-271dddab6309", Name: "Cluster 1"}
	require.NoError(t, db.CreateWorkerCluster(ctx, &cluster1))
	// Create worker tag:
	tag1 := WorkerTag{UUID: "f0157623-4b14-4801-bee2-271dddab6309", Name: "Tag 1"}
	require.NoError(t, db.CreateWorkerTag(ctx, &tag1))

	// Create a worker in cluster1:
	// Create a worker in tag1:
	workerC := linuxWorker(t, db, func(w *Worker) {
		w.Clusters = []*WorkerCluster{&cluster1}
		w.Tags = []*WorkerTag{&tag1}
	})

	// Create a worker without cluster:
	// Create a worker without tag:
	workerNC := linuxWorker(t, db, func(w *Worker) {
		w.UUID = "c53f8f68-4149-4790-991c-ba73a326551e"
		w.Clusters = nil
		w.Tags = nil
	})

	// Test cluster-less job:
	// Test tag-less job:
	authTask := authorTestTask("the task", "blender")
	job := authorTestJob("b6a1d859-122f-4791-8b78-b943329a9989", "simple-blender-render", authTask)
	constructTestJob(ctx, t, db, job)

	task, err := db.ScheduleTask(ctx, &workerC)
	require.NoError(t, err)
	require.NotNil(t, task, "job without cluster should always be scheduled to worker in some cluster")
	require.NotNil(t, task, "job without tag should always be scheduled to worker in some tag")
	assert.Equal(t, authTask.UUID, task.UUID)

	task, err = db.ScheduleTask(ctx, &workerNC)
	require.NoError(t, err)
	require.NotNil(t, task, "job without cluster should always be scheduled to worker without cluster")
	require.NotNil(t, task, "job without tag should always be scheduled to worker without tag")
	assert.Equal(t, authTask.UUID, task.UUID)
}

@@ -96,8 +96,8 @@ type WorkerTestFixture struct {
	ctx  context.Context
	done func()

	worker  *Worker
	cluster *WorkerCluster
	worker *Worker
	tag    *WorkerTag
}

func workerTestFixtures(t *testing.T, testContextTimeout time.Duration) WorkerTestFixture {
@@ -113,21 +113,21 @@ func workerTestFixtures(t *testing.T, testContextTimeout time.Duration) WorkerTe
		SupportedTaskTypes: "blender,ffmpeg,file-management",
	}

	wc := WorkerCluster{
	wc := WorkerTag{
		UUID:        uuid.New(),
		Name:        "arbejdsklynge",
		Description: "Worker cluster in Danish",
		Description: "Worker tag in Danish",
	}

	require.NoError(t, db.CreateWorker(ctx, &w))
	require.NoError(t, db.CreateWorkerCluster(ctx, &wc))
	require.NoError(t, db.CreateWorkerTag(ctx, &wc))

	return WorkerTestFixture{
		db:   db,
		ctx:  ctx,
		done: cancel,

		worker:  &w,
		cluster: &wc,
		worker: &w,
		tag:    &wc,
	}
}
@ -1,112 +0,0 @@
|
||||
package persistence
|
||||
|
||||
// SPDX-License-Identifier: GPL-3.0-or-later
|
||||
|
||||
import (
|
||||
"context"
|
||||
"fmt"
|
||||
|
||||
"gorm.io/gorm"
|
||||
)
|
||||
|
||||
type WorkerCluster struct {
|
||||
Model
|
||||
|
||||
UUID string `gorm:"type:char(36);default:'';unique;index"`
|
||||
Name string `gorm:"type:varchar(64);default:'';unique"`
|
||||
Description string `gorm:"type:varchar(255);default:''"`
|
||||
|
||||
Workers []*Worker `gorm:"many2many:worker_cluster_membership;constraint:OnDelete:CASCADE"`
|
||||
}
|
||||
|
||||
func (db *DB) CreateWorkerCluster(ctx context.Context, wc *WorkerCluster) error {
|
||||
if err := db.gormDB.WithContext(ctx).Create(wc).Error; err != nil {
|
||||
return fmt.Errorf("creating new worker cluster: %w", err)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// HasWorkerClusters returns whether there are any clusters defined at all.
|
||||
func (db *DB) HasWorkerClusters(ctx context.Context) (bool, error) {
|
||||
var count int64
|
||||
tx := db.gormDB.WithContext(ctx).
|
||||
Model(&WorkerCluster{}).
|
||||
Count(&count)
|
||||
if err := tx.Error; err != nil {
|
||||
return false, workerClusterError(err, "counting worker clusters")
|
||||
}
|
||||
return count > 0, nil
|
||||
}
|
||||
|
||||
func (db *DB) FetchWorkerCluster(ctx context.Context, uuid string) (*WorkerCluster, error) {
|
||||
tx := db.gormDB.WithContext(ctx)
|
||||
return fetchWorkerCluster(tx, uuid)
|
||||
}
|
||||
|
||||
// fetchWorkerCluster fetches the worker cluster using the given database instance.
|
||||
func fetchWorkerCluster(gormDB *gorm.DB, uuid string) (*WorkerCluster, error) {
|
||||
w := WorkerCluster{}
|
||||
tx := gormDB.First(&w, "uuid = ?", uuid)
|
||||
if tx.Error != nil {
|
||||
return nil, workerClusterError(tx.Error, "fetching worker cluster")
|
||||
}
|
||||
return &w, nil
|
||||
}
|
||||
|
||||
func (db *DB) SaveWorkerCluster(ctx context.Context, cluster *WorkerCluster) error {
|
||||
if err := db.gormDB.WithContext(ctx).Save(cluster).Error; err != nil {
|
||||
return workerClusterError(err, "saving worker cluster")
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// DeleteWorkerCluster deletes the given cluster, after unassigning all workers from it.
|
||||
func (db *DB) DeleteWorkerCluster(ctx context.Context, uuid string) error {
|
||||
tx := db.gormDB.WithContext(ctx).
|
||||
Where("uuid = ?", uuid).
|
||||
Delete(&WorkerCluster{})
|
||||
if tx.Error != nil {
|
||||
return workerClusterError(tx.Error, "deleting worker cluster")
|
||||
}
|
||||
if tx.RowsAffected == 0 {
|
||||
return ErrWorkerClusterNotFound
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (db *DB) FetchWorkerClusters(ctx context.Context) ([]*WorkerCluster, error) {
|
||||
clusters := make([]*WorkerCluster, 0)
|
||||
tx := db.gormDB.WithContext(ctx).Model(&WorkerCluster{}).Scan(&clusters)
|
||||
if tx.Error != nil {
|
||||
return nil, workerClusterError(tx.Error, "fetching all worker clusters")
|
||||
}
|
||||
return clusters, nil
|
||||
}
|
||||
|
||||
func (db *DB) fetchWorkerClustersWithUUID(ctx context.Context, clusterUUIDs []string) ([]*WorkerCluster, error) {
|
||||
clusters := make([]*WorkerCluster, 0)
|
||||
tx := db.gormDB.WithContext(ctx).
|
||||
Model(&WorkerCluster{}).
|
||||
Where("uuid in ?", clusterUUIDs).
|
||||
Scan(&clusters)
|
||||
if tx.Error != nil {
|
||||
return nil, workerClusterError(tx.Error, "fetching all worker clusters")
|
||||
}
|
||||
return clusters, nil
|
||||
}
|
||||
|
||||
func (db *DB) WorkerSetClusters(ctx context.Context, worker *Worker, clusterUUIDs []string) error {
|
||||
clusters, err := db.fetchWorkerClustersWithUUID(ctx, clusterUUIDs)
|
||||
if err != nil {
|
||||
return workerClusterError(err, "fetching worker clusters")
|
||||
}
|
||||
|
||||
err = db.gormDB.WithContext(ctx).
|
||||
Model(worker).
|
||||
Association("Clusters").
|
||||
Replace(clusters)
|
||||
if err != nil {
|
||||
return workerClusterError(err, "updating worker clusters")
|
||||
}
|
||||
return nil
|
||||
}
|
@ -1,165 +0,0 @@
|
||||
package persistence
|
||||
|
||||
// SPDX-License-Identifier: GPL-3.0-or-later
|
||||
|
||||
import (
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"git.blender.org/flamenco/internal/uuid"
|
||||
"github.com/stretchr/testify/assert"
|
||||
"github.com/stretchr/testify/require"
|
||||
)
|
||||
|
||||
func TestCreateFetchCluster(t *testing.T) {
|
||||
f := workerTestFixtures(t, 1*time.Second)
|
||||
defer f.done()
|
||||
|
||||
// Test fetching non-existent cluster
|
||||
fetchedCluster, err := f.db.FetchWorkerCluster(f.ctx, "7ee21bc8-ff1a-42d2-a6b6-cc4b529b189f")
|
||||
assert.ErrorIs(t, err, ErrWorkerClusterNotFound)
|
||||
assert.Nil(t, fetchedCluster)
|
||||
|
||||
// New cluster creation is already done in the workerTestFixtures() call.
|
||||
assert.NotNil(t, f.cluster)
|
||||
|
||||
fetchedCluster, err = f.db.FetchWorkerCluster(f.ctx, f.cluster.UUID)
|
||||
require.NoError(t, err)
|
||||
assert.NotNil(t, fetchedCluster)
|
||||
|
||||
// Test contents of fetched cluster.
|
||||
assert.Equal(t, f.cluster.UUID, fetchedCluster.UUID)
|
||||
assert.Equal(t, f.cluster.Name, fetchedCluster.Name)
|
||||
assert.Equal(t, f.cluster.Description, fetchedCluster.Description)
|
||||
assert.Zero(t, fetchedCluster.Workers)
|
||||
}
|
||||
|
||||
func TestFetchDeleteClusters(t *testing.T) {
|
||||
f := workerTestFixtures(t, 1*time.Second)
|
||||
defer f.done()
|
||||
|
||||
// Single cluster was created by fixture.
|
||||
has, err := f.db.HasWorkerClusters(f.ctx)
|
||||
require.NoError(t, err)
|
||||
assert.True(t, has, "expecting HasWorkerClusters to return true")
|
||||
|
||||
secondCluster := WorkerCluster{
|
||||
UUID: uuid.New(),
|
||||
Name: "arbeiderscluster",
|
||||
Description: "Worker cluster in Dutch",
|
||||
}
|
||||
|
||||
require.NoError(t, f.db.CreateWorkerCluster(f.ctx, &secondCluster))
|
||||
|
||||
allClusters, err := f.db.FetchWorkerClusters(f.ctx)
|
||||
require.NoError(t, err)
|
||||
|
||||
require.Len(t, allClusters, 2)
|
||||
var allClusterIDs [2]string
|
||||
for idx := range allClusters {
|
||||
allClusterIDs[idx] = allClusters[idx].UUID
|
||||
}
|
||||
assert.Contains(t, allClusterIDs, f.cluster.UUID)
|
||||
assert.Contains(t, allClusterIDs, secondCluster.UUID)
|
||||
|
||||
has, err = f.db.HasWorkerClusters(f.ctx)
|
||||
require.NoError(t, err)
|
||||
assert.True(t, has, "expecting HasWorkerClusters to return true")
|
||||
|
||||
// Test deleting the 2nd cluster.
|
||||
require.NoError(t, f.db.DeleteWorkerCluster(f.ctx, secondCluster.UUID))
|
||||
|
||||
allClusters, err = f.db.FetchWorkerClusters(f.ctx)
|
||||
require.NoError(t, err)
|
||||
require.Len(t, allClusters, 1)
|
||||
assert.Equal(t, f.cluster.UUID, allClusters[0].UUID)
|
||||
|
||||
// Test deleting the 1st cluster.
|
||||
require.NoError(t, f.db.DeleteWorkerCluster(f.ctx, f.cluster.UUID))
|
||||
has, err = f.db.HasWorkerClusters(f.ctx)
|
||||
require.NoError(t, err)
|
||||
assert.False(t, has, "expecting HasWorkerClusters to return false")
|
||||
}
|
||||
|
||||
func TestAssignUnassignWorkerClusters(t *testing.T) {
|
||||
f := workerTestFixtures(t, 1*time.Second)
|
||||
defer f.done()
|
||||
|
||||
assertClusters := func(msgLabel string, clusterUUIDs ...string) {
|
||||
w, err := f.db.FetchWorker(f.ctx, f.worker.UUID)
|
||||
require.NoError(t, err)
|
||||
|
||||
// Catch doubly-reported clusters, as the maps below would hide those cases.
|
||||
assert.Len(t, w.Clusters, len(clusterUUIDs), msgLabel)
|
||||
|
||||
expectClusters := make(map[string]bool)
|
||||
for _, cid := range clusterUUIDs {
|
||||
expectClusters[cid] = true
|
||||
}
|
||||
|
||||
actualClusters := make(map[string]bool)
|
||||
for _, c := range w.Clusters {
|
||||
actualClusters[c.UUID] = true
|
||||
}
|
||||
|
||||
assert.Equal(t, expectClusters, actualClusters, msgLabel)
|
||||
}
|
||||
|
||||
secondCluster := WorkerCluster{
|
||||
UUID: uuid.New(),
|
||||
Name: "arbeiderscluster",
|
||||
Description: "Worker cluster in Dutch",
|
||||
}
|
||||
|
||||
require.NoError(t, f.db.CreateWorkerCluster(f.ctx, &secondCluster))
|
||||
|
||||
// By default the Worker should not be part of a cluster.
|
||||
assertClusters("default cluster assignment")
|
||||
|
||||
require.NoError(t, f.db.WorkerSetClusters(f.ctx, f.worker, []string{f.cluster.UUID}))
|
||||
assertClusters("setting one cluster", f.cluster.UUID)
|
||||
|
||||
// Double assignments should also just work.
|
||||
require.NoError(t, f.db.WorkerSetClusters(f.ctx, f.worker, []string{f.cluster.UUID, f.cluster.UUID}))
|
||||
assertClusters("setting twice the same cluster", f.cluster.UUID)
|
||||
|
||||
// Multiple cluster memberships.
|
||||
require.NoError(t, f.db.WorkerSetClusters(f.ctx, f.worker, []string{f.cluster.UUID, secondCluster.UUID}))
|
||||
assertClusters("setting two different clusters", f.cluster.UUID, secondCluster.UUID)
|
||||
|
||||
// Remove memberships.
|
||||
require.NoError(t, f.db.WorkerSetClusters(f.ctx, f.worker, []string{secondCluster.UUID}))
|
||||
assertClusters("unassigning from first cluster", secondCluster.UUID)
|
||||
require.NoError(t, f.db.WorkerSetClusters(f.ctx, f.worker, []string{}))
|
||||
assertClusters("unassigning from second cluster")
|
||||
}
|
||||
|
||||
func TestSaveWorkerCluster(t *testing.T) {
|
||||
f := workerTestFixtures(t, 1*time.Second)
|
||||
defer f.done()
|
||||
|
||||
f.cluster.Name = "übercluster"
|
||||
f.cluster.Description = "ʻO kēlā hui ma laila"
|
||||
require.NoError(t, f.db.SaveWorkerCluster(f.ctx, f.cluster))
|
||||
|
||||
fetched, err := f.db.FetchWorkerCluster(f.ctx, f.cluster.UUID)
|
||||
require.NoError(t, err)
|
||||
assert.Equal(t, f.cluster.Name, fetched.Name)
|
||||
assert.Equal(t, f.cluster.Description, fetched.Description)
|
||||
}
|
||||
|
||||
func TestDeleteWorkerClusterWithWorkersAssigned(t *testing.T) {
|
||||
f := workerTestFixtures(t, 1*time.Second)
|
||||
defer f.done()
|
||||
|
||||
// Assign the worker.
|
||||
require.NoError(t, f.db.WorkerSetClusters(f.ctx, f.worker, []string{f.cluster.UUID}))
|
||||
|
||||
// Delete the cluster.
|
||||
require.NoError(t, f.db.DeleteWorkerCluster(f.ctx, f.cluster.UUID))
|
||||
|
||||
// Check the Worker has been unassigned from the cluster.
|
||||
w, err := f.db.FetchWorker(f.ctx, f.worker.UUID)
|
||||
require.NoError(t, err)
|
||||
assert.Empty(t, w.Clusters)
|
||||
}
|
112	internal/manager/persistence/worker_tag.go	Normal file
@@ -0,0 +1,112 @@
|
||||
package persistence
|
||||
|
||||
// SPDX-License-Identifier: GPL-3.0-or-later
|
||||
|
||||
import (
|
||||
"context"
|
||||
"fmt"
|
||||
|
||||
"gorm.io/gorm"
|
||||
)
|
||||
|
||||
type WorkerTag struct {
|
||||
Model
|
||||
|
||||
UUID string `gorm:"type:char(36);default:'';unique;index"`
|
||||
Name string `gorm:"type:varchar(64);default:'';unique"`
|
||||
Description string `gorm:"type:varchar(255);default:''"`
|
||||
|
||||
Workers []*Worker `gorm:"many2many:worker_tag_membership;constraint:OnDelete:CASCADE"`
|
||||
}
|
||||
|
||||
func (db *DB) CreateWorkerTag(ctx context.Context, wc *WorkerTag) error {
|
||||
if err := db.gormDB.WithContext(ctx).Create(wc).Error; err != nil {
|
||||
return fmt.Errorf("creating new worker tag: %w", err)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// HasWorkerTags returns whether there are any tags defined at all.
|
||||
func (db *DB) HasWorkerTags(ctx context.Context) (bool, error) {
|
||||
var count int64
|
||||
tx := db.gormDB.WithContext(ctx).
|
||||
Model(&WorkerTag{}).
|
||||
Count(&count)
|
||||
if err := tx.Error; err != nil {
|
||||
return false, workerTagError(err, "counting worker tags")
|
||||
}
|
||||
return count > 0, nil
|
||||
}
|
||||
|
||||
func (db *DB) FetchWorkerTag(ctx context.Context, uuid string) (*WorkerTag, error) {
|
||||
tx := db.gormDB.WithContext(ctx)
|
||||
return fetchWorkerTag(tx, uuid)
|
||||
}
|
||||
|
||||
// fetchWorkerTag fetches the worker tag using the given database instance.
|
||||
func fetchWorkerTag(gormDB *gorm.DB, uuid string) (*WorkerTag, error) {
|
||||
w := WorkerTag{}
|
||||
tx := gormDB.First(&w, "uuid = ?", uuid)
|
||||
if tx.Error != nil {
|
||||
return nil, workerTagError(tx.Error, "fetching worker tag")
|
||||
}
|
||||
return &w, nil
|
||||
}
|
||||
|
||||
func (db *DB) SaveWorkerTag(ctx context.Context, tag *WorkerTag) error {
|
||||
if err := db.gormDB.WithContext(ctx).Save(tag).Error; err != nil {
|
||||
return workerTagError(err, "saving worker tag")
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// DeleteWorkerTag deletes the given tag, after unassigning all workers from it.
|
||||
func (db *DB) DeleteWorkerTag(ctx context.Context, uuid string) error {
|
||||
tx := db.gormDB.WithContext(ctx).
|
||||
Where("uuid = ?", uuid).
|
||||
Delete(&WorkerTag{})
|
||||
if tx.Error != nil {
|
||||
return workerTagError(tx.Error, "deleting worker tag")
|
||||
}
|
||||
if tx.RowsAffected == 0 {
|
||||
return ErrWorkerTagNotFound
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (db *DB) FetchWorkerTags(ctx context.Context) ([]*WorkerTag, error) {
|
||||
tags := make([]*WorkerTag, 0)
|
||||
tx := db.gormDB.WithContext(ctx).Model(&WorkerTag{}).Scan(&tags)
|
||||
if tx.Error != nil {
|
||||
return nil, workerTagError(tx.Error, "fetching all worker tags")
|
||||
}
|
||||
return tags, nil
|
||||
}
|
||||
|
||||
func (db *DB) fetchWorkerTagsWithUUID(ctx context.Context, tagUUIDs []string) ([]*WorkerTag, error) {
|
||||
tags := make([]*WorkerTag, 0)
|
||||
tx := db.gormDB.WithContext(ctx).
|
||||
Model(&WorkerTag{}).
|
||||
Where("uuid in ?", tagUUIDs).
|
||||
Scan(&tags)
|
||||
if tx.Error != nil {
|
||||
return nil, workerTagError(tx.Error, "fetching all worker tags")
|
||||
}
|
||||
return tags, nil
|
||||
}
|
||||
|
||||
func (db *DB) WorkerSetTags(ctx context.Context, worker *Worker, tagUUIDs []string) error {
|
||||
tags, err := db.fetchWorkerTagsWithUUID(ctx, tagUUIDs)
|
||||
if err != nil {
|
||||
return workerTagError(err, "fetching worker tags")
|
||||
}
|
||||
|
||||
err = db.gormDB.WithContext(ctx).
|
||||
Model(worker).
|
||||
Association("Tags").
|
||||
Replace(tags)
|
||||
if err != nil {
|
||||
return workerTagError(err, "updating worker tags")
|
||||
}
|
||||
return nil
|
||||
}
|
165	internal/manager/persistence/worker_tag_test.go	Normal file
@@ -0,0 +1,165 @@
|
||||
package persistence
|
||||
|
||||
// SPDX-License-Identifier: GPL-3.0-or-later
|
||||
|
||||
import (
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"git.blender.org/flamenco/internal/uuid"
|
||||
"github.com/stretchr/testify/assert"
|
||||
"github.com/stretchr/testify/require"
|
||||
)
|
||||
|
||||
func TestCreateFetchTag(t *testing.T) {
|
||||
f := workerTestFixtures(t, 1*time.Second)
|
||||
defer f.done()
|
||||
|
||||
// Test fetching non-existent tag
|
||||
fetchedTag, err := f.db.FetchWorkerTag(f.ctx, "7ee21bc8-ff1a-42d2-a6b6-cc4b529b189f")
|
||||
assert.ErrorIs(t, err, ErrWorkerTagNotFound)
|
||||
assert.Nil(t, fetchedTag)
|
||||
|
||||
// New tag creation is already done in the workerTestFixtures() call.
|
||||
assert.NotNil(t, f.tag)
|
||||
|
||||
fetchedTag, err = f.db.FetchWorkerTag(f.ctx, f.tag.UUID)
|
||||
require.NoError(t, err)
|
||||
assert.NotNil(t, fetchedTag)
|
||||
|
||||
// Test contents of fetched tag.
|
||||
assert.Equal(t, f.tag.UUID, fetchedTag.UUID)
|
||||
assert.Equal(t, f.tag.Name, fetchedTag.Name)
|
||||
assert.Equal(t, f.tag.Description, fetchedTag.Description)
|
||||
assert.Zero(t, fetchedTag.Workers)
|
||||
}
|
||||
|
||||
func TestFetchDeleteTags(t *testing.T) {
|
||||
f := workerTestFixtures(t, 1*time.Second)
|
||||
defer f.done()
|
||||
|
||||
// Single tag was created by fixture.
|
||||
has, err := f.db.HasWorkerTags(f.ctx)
|
||||
require.NoError(t, err)
|
||||
assert.True(t, has, "expecting HasWorkerTags to return true")
|
||||
|
||||
secondTag := WorkerTag{
|
||||
UUID: uuid.New(),
|
||||
Name: "arbeiderstag",
|
||||
Description: "Worker tag in Dutch",
|
||||
}
|
||||
|
||||
require.NoError(t, f.db.CreateWorkerTag(f.ctx, &secondTag))
|
||||
|
||||
allTags, err := f.db.FetchWorkerTags(f.ctx)
|
||||
require.NoError(t, err)
|
||||
|
||||
require.Len(t, allTags, 2)
|
||||
var allTagIDs [2]string
|
||||
for idx := range allTags {
|
||||
allTagIDs[idx] = allTags[idx].UUID
|
||||
}
|
||||
assert.Contains(t, allTagIDs, f.tag.UUID)
|
||||
assert.Contains(t, allTagIDs, secondTag.UUID)
|
||||
|
||||
has, err = f.db.HasWorkerTags(f.ctx)
|
||||
require.NoError(t, err)
|
||||
assert.True(t, has, "expecting HasWorkerTags to return true")
|
||||
|
||||
// Test deleting the 2nd tag.
|
||||
require.NoError(t, f.db.DeleteWorkerTag(f.ctx, secondTag.UUID))
|
||||
|
||||
allTags, err = f.db.FetchWorkerTags(f.ctx)
|
||||
require.NoError(t, err)
|
||||
require.Len(t, allTags, 1)
|
||||
assert.Equal(t, f.tag.UUID, allTags[0].UUID)
|
||||
|
||||
// Test deleting the 1st tag.
|
||||
require.NoError(t, f.db.DeleteWorkerTag(f.ctx, f.tag.UUID))
|
||||
has, err = f.db.HasWorkerTags(f.ctx)
|
||||
require.NoError(t, err)
|
||||
assert.False(t, has, "expecting HasWorkerTags to return false")
|
||||
}
|
||||
|
||||
func TestAssignUnassignWorkerTags(t *testing.T) {
|
||||
f := workerTestFixtures(t, 1*time.Second)
|
||||
defer f.done()
|
||||
|
||||
assertTags := func(msgLabel string, tagUUIDs ...string) {
|
||||
w, err := f.db.FetchWorker(f.ctx, f.worker.UUID)
|
||||
require.NoError(t, err)
|
||||
|
||||
// Catch doubly-reported tags, as the maps below would hide those cases.
|
||||
assert.Len(t, w.Tags, len(tagUUIDs), msgLabel)
|
||||
|
||||
expectTags := make(map[string]bool)
|
||||
for _, cid := range tagUUIDs {
|
||||
expectTags[cid] = true
|
||||
}
|
||||
|
||||
actualTags := make(map[string]bool)
|
||||
for _, c := range w.Tags {
|
||||
actualTags[c.UUID] = true
|
||||
}
|
||||
|
||||
assert.Equal(t, expectTags, actualTags, msgLabel)
|
||||
}
|
||||
|
||||
secondTag := WorkerTag{
|
||||
UUID: uuid.New(),
|
||||
Name: "arbeiderstag",
|
||||
Description: "Worker tag in Dutch",
|
||||
}
|
||||
|
||||
require.NoError(t, f.db.CreateWorkerTag(f.ctx, &secondTag))
|
||||
|
||||
// By default the Worker should not be part of a tag.
|
||||
assertTags("default tag assignment")
|
||||
|
||||
require.NoError(t, f.db.WorkerSetTags(f.ctx, f.worker, []string{f.tag.UUID}))
|
||||
assertTags("setting one tag", f.tag.UUID)
|
||||
|
||||
// Double assignments should also just work.
|
||||
require.NoError(t, f.db.WorkerSetTags(f.ctx, f.worker, []string{f.tag.UUID, f.tag.UUID}))
|
||||
assertTags("setting twice the same tag", f.tag.UUID)
|
||||
|
||||
// Multiple tag memberships.
|
||||
require.NoError(t, f.db.WorkerSetTags(f.ctx, f.worker, []string{f.tag.UUID, secondTag.UUID}))
|
||||
assertTags("setting two different tags", f.tag.UUID, secondTag.UUID)
|
||||
|
||||
// Remove memberships.
|
||||
require.NoError(t, f.db.WorkerSetTags(f.ctx, f.worker, []string{secondTag.UUID}))
|
||||
assertTags("unassigning from first tag", secondTag.UUID)
|
||||
require.NoError(t, f.db.WorkerSetTags(f.ctx, f.worker, []string{}))
|
||||
assertTags("unassigning from second tag")
|
||||
}
|
||||
|
||||
func TestSaveWorkerTag(t *testing.T) {
|
||||
f := workerTestFixtures(t, 1*time.Second)
|
||||
defer f.done()
|
||||
|
||||
f.tag.Name = "übertag"
|
||||
f.tag.Description = "ʻO kēlā hui ma laila"
|
||||
require.NoError(t, f.db.SaveWorkerTag(f.ctx, f.tag))
|
||||
|
||||
fetched, err := f.db.FetchWorkerTag(f.ctx, f.tag.UUID)
|
||||
require.NoError(t, err)
|
||||
assert.Equal(t, f.tag.Name, fetched.Name)
|
||||
assert.Equal(t, f.tag.Description, fetched.Description)
|
||||
}
|
||||
|
||||
func TestDeleteWorkerTagWithWorkersAssigned(t *testing.T) {
|
||||
f := workerTestFixtures(t, 1*time.Second)
|
||||
defer f.done()
|
||||
|
||||
// Assign the worker.
|
||||
require.NoError(t, f.db.WorkerSetTags(f.ctx, f.worker, []string{f.tag.UUID}))
|
||||
|
||||
// Delete the tag.
|
||||
require.NoError(t, f.db.DeleteWorkerTag(f.ctx, f.tag.UUID))
|
||||
|
||||
// Check the Worker has been unassigned from the tag.
|
||||
w, err := f.db.FetchWorker(f.ctx, f.worker.UUID)
|
||||
require.NoError(t, err)
|
||||
assert.Empty(t, w.Tags)
|
||||
}
|
@@ -27,11 +27,11 @@ type Worker struct {
	LastSeenAt time.Time `gorm:"index"` // Should contain UTC timestamps.

	StatusRequested   api.WorkerStatus `gorm:"type:varchar(16);default:''"`
	LazyStatusRequest bool             `gorm:"type:smallint;default:0"`
	LazyStatusRequest bool             `gorm:"type:smallint;default:false"`

	SupportedTaskTypes string `gorm:"type:varchar(255);default:''"` // comma-separated list of task types.

	Clusters []*WorkerCluster `gorm:"many2many:worker_cluster_membership;constraint:OnDelete:CASCADE"`
	Tags []*WorkerTag `gorm:"many2many:worker_tag_membership;constraint:OnDelete:CASCADE"`
}

func (w *Worker) Identifier() string {
@@ -73,7 +73,7 @@ func (db *DB) CreateWorker(ctx context.Context, w *Worker) error {
func (db *DB) FetchWorker(ctx context.Context, uuid string) (*Worker, error) {
	w := Worker{}
	tx := db.gormDB.WithContext(ctx).
		Preload("Clusters").
		Preload("Tags").
		First(&w, "uuid = ?", uuid)
	if tx.Error != nil {
		return nil, workerError(tx.Error, "fetching worker")

@@ -319,18 +319,18 @@ func TestDeleteWorker(t *testing.T) {
	}
}

func TestDeleteWorkerWithClusterAssigned(t *testing.T) {
func TestDeleteWorkerWithTagAssigned(t *testing.T) {
	f := workerTestFixtures(t, 1*time.Second)
	defer f.done()

	// Assign the worker.
	require.NoError(t, f.db.WorkerSetClusters(f.ctx, f.worker, []string{f.cluster.UUID}))
	require.NoError(t, f.db.WorkerSetTags(f.ctx, f.worker, []string{f.tag.UUID}))

	// Delete the Worker.
	require.NoError(t, f.db.DeleteWorker(f.ctx, f.worker.UUID))

	// Check the Worker has been unassigned from the cluster.
	cluster, err := f.db.FetchWorkerCluster(f.ctx, f.cluster.UUID)
	// Check the Worker has been unassigned from the tag.
	tag, err := f.db.FetchWorkerTag(f.ctx, f.tag.UUID)
	require.NoError(t, err)
	assert.Empty(t, cluster.Workers)
	assert.Empty(t, tag.Workers)
}
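Because Worker.Tags is a many2many association through the worker_tag_membership join table, the Preload("Tags") in FetchWorker is what makes a worker's tag list available to callers. A minimal sketch of reading it back after an assignment; ctx, db and workerUUID are assumed to be in scope, and this is illustration only, not code from the patch:

// Sketch: fetch a worker and log its tag membership.
w, err := db.FetchWorker(ctx, workerUUID)
if err != nil {
	return err
}
tagNames := make([]string, 0, len(w.Tags))
for _, tag := range w.Tags {
	// w.Tags was populated via Preload("Tags") from worker_tag_membership.
	tagNames = append(tagNames, tag.Name)
}
log.Info().Str("worker", w.Identifier()).Strs("tags", tagNames).Msg("worker tag membership")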
@@ -9,6 +9,7 @@ import (
	"time"

	"github.com/benbjohnson/clock"
	"github.com/rs/zerolog"
	"github.com/rs/zerolog/log"

	"git.blender.org/flamenco/internal/manager/persistence"
@@ -80,6 +81,11 @@ func (ss *SleepScheduler) SetSchedule(ctx context.Context, workerUUID string, sc
		return fmt.Errorf("persisting sleep schedule of worker %s: %w", workerUUID, err)
	}

	logger := addLoggerFields(zerolog.Ctx(ctx), schedule)
	logger.Info().
		Str("worker", schedule.Worker.Identifier()).
		Msg("sleep scheduler: new schedule for worker")

	return ss.ApplySleepSchedule(ctx, schedule)
}

@@ -239,3 +245,19 @@ func (ss *SleepScheduler) mayUpdateWorker(worker *persistence.Worker) bool {
	shouldSkip := skipWorkersInStatus[worker.Status]
	return !shouldSkip
}

func addLoggerFields(logger *zerolog.Logger, schedule *persistence.SleepSchedule) zerolog.Logger {
	logCtx := logger.With()

	if schedule.Worker != nil {
		logCtx = logCtx.Str("worker", schedule.Worker.Identifier())
	}

	logCtx = logCtx.
		Bool("isActive", schedule.IsActive).
		Str("daysOfWeek", schedule.DaysOfWeek).
		Stringer("startTime", schedule.StartTime).
		Stringer("endTime", schedule.EndTime)

	return logCtx.Logger()
}
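addLoggerFields builds a child logger that carries the schedule fields, so every later log line includes them without repeating the Str/Bool calls; the Stringer fields only compile because StartTime and EndTime satisfy fmt.Stringer. A tiny usage sketch mirroring the SetSchedule call site above (ctx and schedule assumed in scope):

// Sketch: attach schedule fields once, then reuse the enriched logger.
logger := addLoggerFields(zerolog.Ctx(ctx), schedule)
logger.Info().Msg("sleep scheduler: new schedule for worker")
logger.Debug().Msg("sleep scheduler: applying schedule")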
@@ -32,7 +32,7 @@ func NewWorkerUpdate(worker *persistence.Worker) api.SocketIOWorkerUpdate {
		workerUpdate.LastSeen = &worker.LastSeenAt
	}

	// TODO: add cluster IDs.
	// TODO: add tag IDs.

	return workerUpdate
}
@@ -90,7 +90,7 @@ func NewCommandExecutor(cli CommandLineRunner, listener CommandListener, timeSer

		// file-management
		"move-directory": ce.cmdMoveDirectory,
		"copy-file":      ce.cmdCopyFile,
	}

	return ce

@@ -215,7 +215,6 @@ func fileCopy(src, dest string) (error, string) {
	return nil, msg
}

func fileExists(filename string) bool {
	_, err := os.Stat(filename)
	return !errors.Is(err, fs.ErrNotExist)

@@ -345,7 +345,6 @@ func TestCmdCopyFileDestinationExists(t *testing.T) {
	assert.Error(t, f.run())
}

func TestCmdCopyFileSourceIsDir(t *testing.T) {
	f := newCmdCopyFileFixture(t)
	defer f.finish(t)
@@ -372,7 +371,6 @@ func TestCmdCopyFileSourceIsDir(t *testing.T) {
	assert.Error(t, f.run())
}

func newCmdCopyFileFixture(t *testing.T) cmdCopyFileFixture {
	mockCtrl := gomock.NewController(t)
	ce, mocks := testCommandExecutor(t, mockCtrl)
@@ -410,11 +408,11 @@ func (f cmdCopyFileFixture) finish(t *testing.T) {
	f.mockCtrl.Finish()
}

func (f cmdCopyFileFixture) run() error {
	cmd := api.Command{
		Name: "copy-file",
		Parameters: map[string]interface{}{
			"src":  f.absolute_src_path,
			"dest": f.absolute_dest_path,
		},
	}
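The "copy-file" entry above registers cmdCopyFile in the executor's command-name to handler map, so running a command is a single map lookup followed by a call. A self-contained, hedged sketch of that dispatch pattern; the Command type, handler signature and registry shape here are illustrative stand-ins, not the real internals of CommandExecutor:

package main

import (
	"context"
	"fmt"
)

// Command mirrors the shape of api.Command for this sketch only.
type Command struct {
	Name       string
	Parameters map[string]interface{}
}

type commandHandler func(ctx context.Context, cmd Command) error

// dispatch looks up the handler registered for the command name,
// in the spirit of the name-to-handler map built in NewCommandExecutor.
func dispatch(registry map[string]commandHandler, ctx context.Context, cmd Command) error {
	handler, ok := registry[cmd.Name]
	if !ok {
		return fmt.Errorf("unknown command: %q", cmd.Name)
	}
	return handler(ctx, cmd)
}

func main() {
	registry := map[string]commandHandler{
		"copy-file": func(ctx context.Context, cmd Command) error {
			fmt.Println("copying", cmd.Parameters["src"], "to", cmd.Parameters["dest"])
			return nil
		},
	}
	cmd := Command{Name: "copy-file", Parameters: map[string]interface{}{"src": "a.txt", "dest": "b.txt"}}
	if err := dispatch(registry, context.Background(), cmd); err != nil {
		fmt.Println(err)
	}
}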
230	internal/worker/mocks/client.gen.go	generated
@@ -116,44 +116,44 @@ func (mr *MockFlamencoClientMockRecorder) CheckSharedStoragePathWithResponse(arg
|
||||
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "CheckSharedStoragePathWithResponse", reflect.TypeOf((*MockFlamencoClient)(nil).CheckSharedStoragePathWithResponse), varargs...)
|
||||
}
|
||||
|
||||
// CreateWorkerClusterWithBodyWithResponse mocks base method.
|
||||
func (m *MockFlamencoClient) CreateWorkerClusterWithBodyWithResponse(arg0 context.Context, arg1 string, arg2 io.Reader, arg3 ...api.RequestEditorFn) (*api.CreateWorkerClusterResponse, error) {
|
||||
// CreateWorkerTagWithBodyWithResponse mocks base method.
|
||||
func (m *MockFlamencoClient) CreateWorkerTagWithBodyWithResponse(arg0 context.Context, arg1 string, arg2 io.Reader, arg3 ...api.RequestEditorFn) (*api.CreateWorkerTagResponse, error) {
|
||||
m.ctrl.T.Helper()
|
||||
varargs := []interface{}{arg0, arg1, arg2}
|
||||
for _, a := range arg3 {
|
||||
varargs = append(varargs, a)
|
||||
}
|
||||
ret := m.ctrl.Call(m, "CreateWorkerClusterWithBodyWithResponse", varargs...)
|
||||
ret0, _ := ret[0].(*api.CreateWorkerClusterResponse)
|
||||
ret := m.ctrl.Call(m, "CreateWorkerTagWithBodyWithResponse", varargs...)
|
||||
ret0, _ := ret[0].(*api.CreateWorkerTagResponse)
|
||||
ret1, _ := ret[1].(error)
|
||||
return ret0, ret1
|
||||
}
|
||||
|
||||
// CreateWorkerClusterWithBodyWithResponse indicates an expected call of CreateWorkerClusterWithBodyWithResponse.
|
||||
func (mr *MockFlamencoClientMockRecorder) CreateWorkerClusterWithBodyWithResponse(arg0, arg1, arg2 interface{}, arg3 ...interface{}) *gomock.Call {
|
||||
// CreateWorkerTagWithBodyWithResponse indicates an expected call of CreateWorkerTagWithBodyWithResponse.
|
||||
func (mr *MockFlamencoClientMockRecorder) CreateWorkerTagWithBodyWithResponse(arg0, arg1, arg2 interface{}, arg3 ...interface{}) *gomock.Call {
|
||||
mr.mock.ctrl.T.Helper()
|
||||
varargs := append([]interface{}{arg0, arg1, arg2}, arg3...)
|
||||
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "CreateWorkerClusterWithBodyWithResponse", reflect.TypeOf((*MockFlamencoClient)(nil).CreateWorkerClusterWithBodyWithResponse), varargs...)
|
||||
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "CreateWorkerTagWithBodyWithResponse", reflect.TypeOf((*MockFlamencoClient)(nil).CreateWorkerTagWithBodyWithResponse), varargs...)
|
||||
}
|
||||
|
||||
// CreateWorkerClusterWithResponse mocks base method.
|
||||
func (m *MockFlamencoClient) CreateWorkerClusterWithResponse(arg0 context.Context, arg1 api.CreateWorkerClusterJSONRequestBody, arg2 ...api.RequestEditorFn) (*api.CreateWorkerClusterResponse, error) {
|
||||
// CreateWorkerTagWithResponse mocks base method.
|
||||
func (m *MockFlamencoClient) CreateWorkerTagWithResponse(arg0 context.Context, arg1 api.CreateWorkerTagJSONRequestBody, arg2 ...api.RequestEditorFn) (*api.CreateWorkerTagResponse, error) {
|
||||
m.ctrl.T.Helper()
|
||||
varargs := []interface{}{arg0, arg1}
|
||||
for _, a := range arg2 {
|
||||
varargs = append(varargs, a)
|
||||
}
|
||||
ret := m.ctrl.Call(m, "CreateWorkerClusterWithResponse", varargs...)
|
||||
ret0, _ := ret[0].(*api.CreateWorkerClusterResponse)
|
||||
ret := m.ctrl.Call(m, "CreateWorkerTagWithResponse", varargs...)
|
||||
ret0, _ := ret[0].(*api.CreateWorkerTagResponse)
|
||||
ret1, _ := ret[1].(error)
|
||||
return ret0, ret1
|
||||
}
|
||||
|
||||
// CreateWorkerClusterWithResponse indicates an expected call of CreateWorkerClusterWithResponse.
|
||||
func (mr *MockFlamencoClientMockRecorder) CreateWorkerClusterWithResponse(arg0, arg1 interface{}, arg2 ...interface{}) *gomock.Call {
|
||||
// CreateWorkerTagWithResponse indicates an expected call of CreateWorkerTagWithResponse.
|
||||
func (mr *MockFlamencoClientMockRecorder) CreateWorkerTagWithResponse(arg0, arg1 interface{}, arg2 ...interface{}) *gomock.Call {
|
||||
mr.mock.ctrl.T.Helper()
|
||||
varargs := append([]interface{}{arg0, arg1}, arg2...)
|
||||
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "CreateWorkerClusterWithResponse", reflect.TypeOf((*MockFlamencoClient)(nil).CreateWorkerClusterWithResponse), varargs...)
|
||||
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "CreateWorkerTagWithResponse", reflect.TypeOf((*MockFlamencoClient)(nil).CreateWorkerTagWithResponse), varargs...)
|
||||
}
|
||||
|
||||
// DeleteJobWhatWouldItDoWithResponse mocks base method.
|
||||
@ -196,24 +196,24 @@ func (mr *MockFlamencoClientMockRecorder) DeleteJobWithResponse(arg0, arg1 inter
|
||||
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "DeleteJobWithResponse", reflect.TypeOf((*MockFlamencoClient)(nil).DeleteJobWithResponse), varargs...)
|
||||
}
|
||||
|
||||
// DeleteWorkerClusterWithResponse mocks base method.
|
||||
func (m *MockFlamencoClient) DeleteWorkerClusterWithResponse(arg0 context.Context, arg1 string, arg2 ...api.RequestEditorFn) (*api.DeleteWorkerClusterResponse, error) {
|
||||
// DeleteWorkerTagWithResponse mocks base method.
|
||||
func (m *MockFlamencoClient) DeleteWorkerTagWithResponse(arg0 context.Context, arg1 string, arg2 ...api.RequestEditorFn) (*api.DeleteWorkerTagResponse, error) {
|
||||
m.ctrl.T.Helper()
|
||||
varargs := []interface{}{arg0, arg1}
|
||||
for _, a := range arg2 {
|
||||
varargs = append(varargs, a)
|
||||
}
|
||||
ret := m.ctrl.Call(m, "DeleteWorkerClusterWithResponse", varargs...)
|
||||
ret0, _ := ret[0].(*api.DeleteWorkerClusterResponse)
|
||||
ret := m.ctrl.Call(m, "DeleteWorkerTagWithResponse", varargs...)
|
||||
ret0, _ := ret[0].(*api.DeleteWorkerTagResponse)
|
||||
ret1, _ := ret[1].(error)
|
||||
return ret0, ret1
|
||||
}
|
||||
|
||||
// DeleteWorkerClusterWithResponse indicates an expected call of DeleteWorkerClusterWithResponse.
|
||||
func (mr *MockFlamencoClientMockRecorder) DeleteWorkerClusterWithResponse(arg0, arg1 interface{}, arg2 ...interface{}) *gomock.Call {
|
||||
// DeleteWorkerTagWithResponse indicates an expected call of DeleteWorkerTagWithResponse.
|
||||
func (mr *MockFlamencoClientMockRecorder) DeleteWorkerTagWithResponse(arg0, arg1 interface{}, arg2 ...interface{}) *gomock.Call {
|
||||
mr.mock.ctrl.T.Helper()
|
||||
varargs := append([]interface{}{arg0, arg1}, arg2...)
|
||||
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "DeleteWorkerClusterWithResponse", reflect.TypeOf((*MockFlamencoClient)(nil).DeleteWorkerClusterWithResponse), varargs...)
|
||||
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "DeleteWorkerTagWithResponse", reflect.TypeOf((*MockFlamencoClient)(nil).DeleteWorkerTagWithResponse), varargs...)
|
||||
}
|
||||
|
||||
// DeleteWorkerWithResponse mocks base method.
|
||||
@ -396,46 +396,6 @@ func (mr *MockFlamencoClientMockRecorder) FetchTaskWithResponse(arg0, arg1 inter
|
||||
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "FetchTaskWithResponse", reflect.TypeOf((*MockFlamencoClient)(nil).FetchTaskWithResponse), varargs...)
|
||||
}
|
||||
|
||||
// FetchWorkerClusterWithResponse mocks base method.
|
||||
func (m *MockFlamencoClient) FetchWorkerClusterWithResponse(arg0 context.Context, arg1 string, arg2 ...api.RequestEditorFn) (*api.FetchWorkerClusterResponse, error) {
|
||||
m.ctrl.T.Helper()
|
||||
varargs := []interface{}{arg0, arg1}
|
||||
for _, a := range arg2 {
|
||||
varargs = append(varargs, a)
|
||||
}
|
||||
ret := m.ctrl.Call(m, "FetchWorkerClusterWithResponse", varargs...)
|
||||
ret0, _ := ret[0].(*api.FetchWorkerClusterResponse)
|
||||
ret1, _ := ret[1].(error)
|
||||
return ret0, ret1
|
||||
}
|
||||
|
||||
// FetchWorkerClusterWithResponse indicates an expected call of FetchWorkerClusterWithResponse.
|
||||
func (mr *MockFlamencoClientMockRecorder) FetchWorkerClusterWithResponse(arg0, arg1 interface{}, arg2 ...interface{}) *gomock.Call {
|
||||
mr.mock.ctrl.T.Helper()
|
||||
varargs := append([]interface{}{arg0, arg1}, arg2...)
|
||||
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "FetchWorkerClusterWithResponse", reflect.TypeOf((*MockFlamencoClient)(nil).FetchWorkerClusterWithResponse), varargs...)
|
||||
}
|
||||
|
||||
// FetchWorkerClustersWithResponse mocks base method.
|
||||
func (m *MockFlamencoClient) FetchWorkerClustersWithResponse(arg0 context.Context, arg1 ...api.RequestEditorFn) (*api.FetchWorkerClustersResponse, error) {
|
||||
m.ctrl.T.Helper()
|
||||
varargs := []interface{}{arg0}
|
||||
for _, a := range arg1 {
|
||||
varargs = append(varargs, a)
|
||||
}
|
||||
ret := m.ctrl.Call(m, "FetchWorkerClustersWithResponse", varargs...)
|
||||
ret0, _ := ret[0].(*api.FetchWorkerClustersResponse)
|
||||
ret1, _ := ret[1].(error)
|
||||
return ret0, ret1
|
||||
}
|
||||
|
||||
// FetchWorkerClustersWithResponse indicates an expected call of FetchWorkerClustersWithResponse.
|
||||
func (mr *MockFlamencoClientMockRecorder) FetchWorkerClustersWithResponse(arg0 interface{}, arg1 ...interface{}) *gomock.Call {
|
||||
mr.mock.ctrl.T.Helper()
|
||||
varargs := append([]interface{}{arg0}, arg1...)
|
||||
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "FetchWorkerClustersWithResponse", reflect.TypeOf((*MockFlamencoClient)(nil).FetchWorkerClustersWithResponse), varargs...)
|
||||
}
|
||||
|
||||
// FetchWorkerSleepScheduleWithResponse mocks base method.
|
||||
func (m *MockFlamencoClient) FetchWorkerSleepScheduleWithResponse(arg0 context.Context, arg1 string, arg2 ...api.RequestEditorFn) (*api.FetchWorkerSleepScheduleResponse, error) {
|
||||
m.ctrl.T.Helper()
|
||||
@ -456,6 +416,46 @@ func (mr *MockFlamencoClientMockRecorder) FetchWorkerSleepScheduleWithResponse(a
|
||||
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "FetchWorkerSleepScheduleWithResponse", reflect.TypeOf((*MockFlamencoClient)(nil).FetchWorkerSleepScheduleWithResponse), varargs...)
|
||||
}
|
||||
|
||||
// FetchWorkerTagWithResponse mocks base method.
|
||||
func (m *MockFlamencoClient) FetchWorkerTagWithResponse(arg0 context.Context, arg1 string, arg2 ...api.RequestEditorFn) (*api.FetchWorkerTagResponse, error) {
|
||||
m.ctrl.T.Helper()
|
||||
varargs := []interface{}{arg0, arg1}
|
||||
for _, a := range arg2 {
|
||||
varargs = append(varargs, a)
|
||||
}
|
||||
ret := m.ctrl.Call(m, "FetchWorkerTagWithResponse", varargs...)
|
||||
ret0, _ := ret[0].(*api.FetchWorkerTagResponse)
|
||||
ret1, _ := ret[1].(error)
|
||||
return ret0, ret1
|
||||
}
|
||||
|
||||
// FetchWorkerTagWithResponse indicates an expected call of FetchWorkerTagWithResponse.
|
||||
func (mr *MockFlamencoClientMockRecorder) FetchWorkerTagWithResponse(arg0, arg1 interface{}, arg2 ...interface{}) *gomock.Call {
|
||||
mr.mock.ctrl.T.Helper()
|
||||
varargs := append([]interface{}{arg0, arg1}, arg2...)
|
||||
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "FetchWorkerTagWithResponse", reflect.TypeOf((*MockFlamencoClient)(nil).FetchWorkerTagWithResponse), varargs...)
|
||||
}
|
||||
|
||||
// FetchWorkerTagsWithResponse mocks base method.
|
||||
func (m *MockFlamencoClient) FetchWorkerTagsWithResponse(arg0 context.Context, arg1 ...api.RequestEditorFn) (*api.FetchWorkerTagsResponse, error) {
|
||||
m.ctrl.T.Helper()
|
||||
varargs := []interface{}{arg0}
|
||||
for _, a := range arg1 {
|
||||
varargs = append(varargs, a)
|
||||
}
|
||||
ret := m.ctrl.Call(m, "FetchWorkerTagsWithResponse", varargs...)
|
||||
ret0, _ := ret[0].(*api.FetchWorkerTagsResponse)
|
||||
ret1, _ := ret[1].(error)
|
||||
return ret0, ret1
|
||||
}
|
||||
|
||||
// FetchWorkerTagsWithResponse indicates an expected call of FetchWorkerTagsWithResponse.
|
||||
func (mr *MockFlamencoClientMockRecorder) FetchWorkerTagsWithResponse(arg0 interface{}, arg1 ...interface{}) *gomock.Call {
|
||||
mr.mock.ctrl.T.Helper()
|
||||
varargs := append([]interface{}{arg0}, arg1...)
|
||||
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "FetchWorkerTagsWithResponse", reflect.TypeOf((*MockFlamencoClient)(nil).FetchWorkerTagsWithResponse), varargs...)
|
||||
}
|
||||
|
||||
// FetchWorkerWithResponse mocks base method.
|
||||
func (m *MockFlamencoClient) FetchWorkerWithResponse(arg0 context.Context, arg1 string, arg2 ...api.RequestEditorFn) (*api.FetchWorkerResponse, error) {
|
||||
m.ctrl.T.Helper()
|
||||
@ -1016,46 +1016,6 @@ func (mr *MockFlamencoClientMockRecorder) SetTaskStatusWithResponse(arg0, arg1,
|
||||
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "SetTaskStatusWithResponse", reflect.TypeOf((*MockFlamencoClient)(nil).SetTaskStatusWithResponse), varargs...)
|
||||
}
|
||||
|
||||
// SetWorkerClustersWithBodyWithResponse mocks base method.
|
||||
func (m *MockFlamencoClient) SetWorkerClustersWithBodyWithResponse(arg0 context.Context, arg1, arg2 string, arg3 io.Reader, arg4 ...api.RequestEditorFn) (*api.SetWorkerClustersResponse, error) {
|
||||
m.ctrl.T.Helper()
|
||||
varargs := []interface{}{arg0, arg1, arg2, arg3}
|
||||
for _, a := range arg4 {
|
||||
varargs = append(varargs, a)
|
||||
}
|
||||
ret := m.ctrl.Call(m, "SetWorkerClustersWithBodyWithResponse", varargs...)
|
||||
ret0, _ := ret[0].(*api.SetWorkerClustersResponse)
|
||||
ret1, _ := ret[1].(error)
|
||||
return ret0, ret1
|
||||
}
|
||||
|
||||
// SetWorkerClustersWithBodyWithResponse indicates an expected call of SetWorkerClustersWithBodyWithResponse.
|
||||
func (mr *MockFlamencoClientMockRecorder) SetWorkerClustersWithBodyWithResponse(arg0, arg1, arg2, arg3 interface{}, arg4 ...interface{}) *gomock.Call {
|
||||
mr.mock.ctrl.T.Helper()
|
||||
varargs := append([]interface{}{arg0, arg1, arg2, arg3}, arg4...)
|
||||
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "SetWorkerClustersWithBodyWithResponse", reflect.TypeOf((*MockFlamencoClient)(nil).SetWorkerClustersWithBodyWithResponse), varargs...)
|
||||
}
|
||||
|
||||
// SetWorkerClustersWithResponse mocks base method.
|
||||
func (m *MockFlamencoClient) SetWorkerClustersWithResponse(arg0 context.Context, arg1 string, arg2 api.SetWorkerClustersJSONRequestBody, arg3 ...api.RequestEditorFn) (*api.SetWorkerClustersResponse, error) {
|
||||
m.ctrl.T.Helper()
|
||||
varargs := []interface{}{arg0, arg1, arg2}
|
||||
for _, a := range arg3 {
|
||||
varargs = append(varargs, a)
|
||||
}
|
||||
ret := m.ctrl.Call(m, "SetWorkerClustersWithResponse", varargs...)
|
||||
ret0, _ := ret[0].(*api.SetWorkerClustersResponse)
|
||||
ret1, _ := ret[1].(error)
|
||||
return ret0, ret1
|
||||
}
|
||||
|
||||
// SetWorkerClustersWithResponse indicates an expected call of SetWorkerClustersWithResponse.
|
||||
func (mr *MockFlamencoClientMockRecorder) SetWorkerClustersWithResponse(arg0, arg1, arg2 interface{}, arg3 ...interface{}) *gomock.Call {
|
||||
mr.mock.ctrl.T.Helper()
|
||||
varargs := append([]interface{}{arg0, arg1, arg2}, arg3...)
|
||||
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "SetWorkerClustersWithResponse", reflect.TypeOf((*MockFlamencoClient)(nil).SetWorkerClustersWithResponse), varargs...)
|
||||
}
|
||||
|
||||
// SetWorkerSleepScheduleWithBodyWithResponse mocks base method.
|
||||
func (m *MockFlamencoClient) SetWorkerSleepScheduleWithBodyWithResponse(arg0 context.Context, arg1, arg2 string, arg3 io.Reader, arg4 ...api.RequestEditorFn) (*api.SetWorkerSleepScheduleResponse, error) {
|
||||
m.ctrl.T.Helper()
|
||||
@ -1096,6 +1056,46 @@ func (mr *MockFlamencoClientMockRecorder) SetWorkerSleepScheduleWithResponse(arg
|
||||
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "SetWorkerSleepScheduleWithResponse", reflect.TypeOf((*MockFlamencoClient)(nil).SetWorkerSleepScheduleWithResponse), varargs...)
|
||||
}
|
||||
|
||||
// SetWorkerTagsWithBodyWithResponse mocks base method.
|
||||
func (m *MockFlamencoClient) SetWorkerTagsWithBodyWithResponse(arg0 context.Context, arg1, arg2 string, arg3 io.Reader, arg4 ...api.RequestEditorFn) (*api.SetWorkerTagsResponse, error) {
|
||||
m.ctrl.T.Helper()
|
||||
varargs := []interface{}{arg0, arg1, arg2, arg3}
|
||||
for _, a := range arg4 {
|
||||
varargs = append(varargs, a)
|
||||
}
|
||||
ret := m.ctrl.Call(m, "SetWorkerTagsWithBodyWithResponse", varargs...)
|
||||
ret0, _ := ret[0].(*api.SetWorkerTagsResponse)
|
||||
ret1, _ := ret[1].(error)
|
||||
return ret0, ret1
|
||||
}
|
||||
|
||||
// SetWorkerTagsWithBodyWithResponse indicates an expected call of SetWorkerTagsWithBodyWithResponse.
|
||||
func (mr *MockFlamencoClientMockRecorder) SetWorkerTagsWithBodyWithResponse(arg0, arg1, arg2, arg3 interface{}, arg4 ...interface{}) *gomock.Call {
|
||||
mr.mock.ctrl.T.Helper()
|
||||
varargs := append([]interface{}{arg0, arg1, arg2, arg3}, arg4...)
|
||||
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "SetWorkerTagsWithBodyWithResponse", reflect.TypeOf((*MockFlamencoClient)(nil).SetWorkerTagsWithBodyWithResponse), varargs...)
|
||||
}
|
||||
|
||||
// SetWorkerTagsWithResponse mocks base method.
|
||||
func (m *MockFlamencoClient) SetWorkerTagsWithResponse(arg0 context.Context, arg1 string, arg2 api.SetWorkerTagsJSONRequestBody, arg3 ...api.RequestEditorFn) (*api.SetWorkerTagsResponse, error) {
|
||||
m.ctrl.T.Helper()
varargs := []interface{}{arg0, arg1, arg2}
for _, a := range arg3 {
varargs = append(varargs, a)
}
ret := m.ctrl.Call(m, "SetWorkerTagsWithResponse", varargs...)
ret0, _ := ret[0].(*api.SetWorkerTagsResponse)
ret1, _ := ret[1].(error)
return ret0, ret1
}

// SetWorkerTagsWithResponse indicates an expected call of SetWorkerTagsWithResponse.
func (mr *MockFlamencoClientMockRecorder) SetWorkerTagsWithResponse(arg0, arg1, arg2 interface{}, arg3 ...interface{}) *gomock.Call {
mr.mock.ctrl.T.Helper()
varargs := append([]interface{}{arg0, arg1, arg2}, arg3...)
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "SetWorkerTagsWithResponse", reflect.TypeOf((*MockFlamencoClient)(nil).SetWorkerTagsWithResponse), varargs...)
}

// ShamanCheckoutRequirementsWithBodyWithResponse mocks base method.
func (m *MockFlamencoClient) ShamanCheckoutRequirementsWithBodyWithResponse(arg0 context.Context, arg1 string, arg2 io.Reader, arg3 ...api.RequestEditorFn) (*api.ShamanCheckoutRequirementsResponse, error) {
m.ctrl.T.Helper()
@ -1416,44 +1416,44 @@ func (mr *MockFlamencoClientMockRecorder) TaskUpdateWithResponse(arg0, arg1, arg
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "TaskUpdateWithResponse", reflect.TypeOf((*MockFlamencoClient)(nil).TaskUpdateWithResponse), varargs...)
}

// UpdateWorkerClusterWithBodyWithResponse mocks base method.
func (m *MockFlamencoClient) UpdateWorkerClusterWithBodyWithResponse(arg0 context.Context, arg1, arg2 string, arg3 io.Reader, arg4 ...api.RequestEditorFn) (*api.UpdateWorkerClusterResponse, error) {
// UpdateWorkerTagWithBodyWithResponse mocks base method.
func (m *MockFlamencoClient) UpdateWorkerTagWithBodyWithResponse(arg0 context.Context, arg1, arg2 string, arg3 io.Reader, arg4 ...api.RequestEditorFn) (*api.UpdateWorkerTagResponse, error) {
m.ctrl.T.Helper()
varargs := []interface{}{arg0, arg1, arg2, arg3}
for _, a := range arg4 {
varargs = append(varargs, a)
}
ret := m.ctrl.Call(m, "UpdateWorkerClusterWithBodyWithResponse", varargs...)
ret0, _ := ret[0].(*api.UpdateWorkerClusterResponse)
ret := m.ctrl.Call(m, "UpdateWorkerTagWithBodyWithResponse", varargs...)
ret0, _ := ret[0].(*api.UpdateWorkerTagResponse)
ret1, _ := ret[1].(error)
return ret0, ret1
}

// UpdateWorkerClusterWithBodyWithResponse indicates an expected call of UpdateWorkerClusterWithBodyWithResponse.
func (mr *MockFlamencoClientMockRecorder) UpdateWorkerClusterWithBodyWithResponse(arg0, arg1, arg2, arg3 interface{}, arg4 ...interface{}) *gomock.Call {
// UpdateWorkerTagWithBodyWithResponse indicates an expected call of UpdateWorkerTagWithBodyWithResponse.
func (mr *MockFlamencoClientMockRecorder) UpdateWorkerTagWithBodyWithResponse(arg0, arg1, arg2, arg3 interface{}, arg4 ...interface{}) *gomock.Call {
mr.mock.ctrl.T.Helper()
varargs := append([]interface{}{arg0, arg1, arg2, arg3}, arg4...)
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "UpdateWorkerClusterWithBodyWithResponse", reflect.TypeOf((*MockFlamencoClient)(nil).UpdateWorkerClusterWithBodyWithResponse), varargs...)
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "UpdateWorkerTagWithBodyWithResponse", reflect.TypeOf((*MockFlamencoClient)(nil).UpdateWorkerTagWithBodyWithResponse), varargs...)
}

// UpdateWorkerClusterWithResponse mocks base method.
func (m *MockFlamencoClient) UpdateWorkerClusterWithResponse(arg0 context.Context, arg1 string, arg2 api.UpdateWorkerClusterJSONRequestBody, arg3 ...api.RequestEditorFn) (*api.UpdateWorkerClusterResponse, error) {
// UpdateWorkerTagWithResponse mocks base method.
func (m *MockFlamencoClient) UpdateWorkerTagWithResponse(arg0 context.Context, arg1 string, arg2 api.UpdateWorkerTagJSONRequestBody, arg3 ...api.RequestEditorFn) (*api.UpdateWorkerTagResponse, error) {
m.ctrl.T.Helper()
varargs := []interface{}{arg0, arg1, arg2}
for _, a := range arg3 {
varargs = append(varargs, a)
}
ret := m.ctrl.Call(m, "UpdateWorkerClusterWithResponse", varargs...)
ret0, _ := ret[0].(*api.UpdateWorkerClusterResponse)
ret := m.ctrl.Call(m, "UpdateWorkerTagWithResponse", varargs...)
ret0, _ := ret[0].(*api.UpdateWorkerTagResponse)
ret1, _ := ret[1].(error)
return ret0, ret1
}

// UpdateWorkerClusterWithResponse indicates an expected call of UpdateWorkerClusterWithResponse.
func (mr *MockFlamencoClientMockRecorder) UpdateWorkerClusterWithResponse(arg0, arg1, arg2 interface{}, arg3 ...interface{}) *gomock.Call {
// UpdateWorkerTagWithResponse indicates an expected call of UpdateWorkerTagWithResponse.
func (mr *MockFlamencoClientMockRecorder) UpdateWorkerTagWithResponse(arg0, arg1, arg2 interface{}, arg3 ...interface{}) *gomock.Call {
mr.mock.ctrl.T.Helper()
varargs := append([]interface{}{arg0, arg1, arg2}, arg3...)
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "UpdateWorkerClusterWithResponse", reflect.TypeOf((*MockFlamencoClient)(nil).UpdateWorkerClusterWithResponse), varargs...)
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "UpdateWorkerTagWithResponse", reflect.TypeOf((*MockFlamencoClient)(nil).UpdateWorkerTagWithResponse), varargs...)
}

// WorkerStateChangedWithBodyWithResponse mocks base method.
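The renamed mock methods above are what unit tests program expectations against. A minimal sketch of such a test, assuming the usual gomock-generated NewMockFlamencoClient/EXPECT() accessors and hypothetical import paths (neither is visible in this diff):

```go
package flamenco_test

import (
	"context"
	"testing"

	"github.com/golang/mock/gomock" // classic gomock module, assumed

	// Both import paths below are assumptions; adjust them to the real module layout.
	"projects.blender.org/studio/flamenco/pkg/api"
	"projects.blender.org/studio/flamenco/pkg/api/mocks"
)

func TestAssignTagsToWorker(t *testing.T) {
	ctrl := gomock.NewController(t)
	defer ctrl.Finish()

	// NewMockFlamencoClient and EXPECT() are the accessors gomock normally
	// generates next to MockFlamencoClient; they fall outside the lines shown
	// in this diff and are assumed here.
	client := mocks.NewMockFlamencoClient(ctrl)

	workerUUID := "00000000-0000-0000-0000-000000000000" // placeholder

	// Expect exactly one SetWorkerTagsWithResponse call for this worker,
	// with any request body, and let it report success.
	client.EXPECT().
		SetWorkerTagsWithResponse(gomock.Any(), workerUUID, gomock.Any()).
		Return(&api.SetWorkerTagsResponse{}, nil)

	// The code under test would normally receive `client`; calling the mock
	// directly here just satisfies the recorded expectation.
	_, err := client.SetWorkerTagsWithResponse(
		context.Background(), workerUUID, api.SetWorkerTagsJSONRequestBody{})
	if err != nil {
		t.Fatal(err)
	}
}
```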
@ -534,10 +534,10 @@ paths:
schema:
$ref: "#/components/schemas/Error"

/api/v3/worker-mgt/workers/{worker_id}/setclusters:
summary: Update the cluster membership of this Worker.
/api/v3/worker-mgt/workers/{worker_id}/settags:
summary: Update the tag membership of this Worker.
post:
operationId: setWorkerClusters
operationId: setWorkerTags
tags: [worker-mgt]
parameters:
- name: worker_id
@ -545,12 +545,12 @@ paths:
required: true
schema: { type: string, format: uuid }
requestBody:
description: The list of cluster IDs this worker should be a member of.
description: The list of worker tag IDs this worker should be a member of.
required: true
content:
application/json:
schema:
$ref: "#/components/schemas/WorkerClusterChangeRequest"
$ref: "#/components/schemas/WorkerTagChangeRequest"
responses:
"204":
description: Status change was accepted.
@ -611,84 +611,84 @@ paths:
schema:
$ref: "#/components/schemas/Error"

/api/v3/worker-mgt/clusters:
summary: Manage worker clusters.
/api/v3/worker-mgt/tags:
summary: Manage worker tags.
get:
operationId: fetchWorkerClusters
summary: Get list of worker clusters.
operationId: fetchWorkerTags
summary: Get list of worker tags.
tags: [worker-mgt]
responses:
"200":
description: Worker clusters.
description: Worker tags.
content:
application/json:
schema: { $ref: "#/components/schemas/WorkerClusterList" }
schema: { $ref: "#/components/schemas/WorkerTagList" }
post:
operationId: createWorkerCluster
summary: Create a new worker cluster.
operationId: createWorkerTag
summary: Create a new worker tag.
tags: [worker-mgt]
requestBody:
description: The worker cluster.
description: The worker tag.
required: true
content:
application/json:
schema:
$ref: "#/components/schemas/WorkerCluster"
$ref: "#/components/schemas/WorkerTag"
responses:
"200":
description: The cluster was created. The created cluster is returned, so that the caller can know its UUID.
description: The tag was created. The created tag is returned, so that the caller can know its UUID.
content:
application/json:
schema: { $ref: "#/components/schemas/WorkerCluster" }
schema: { $ref: "#/components/schemas/WorkerTag" }
default:
description: Error message
content:
application/json:
schema: { $ref: "#/components/schemas/Error" }

/api/v3/worker-mgt/cluster/{cluster_id}:
summary: Get, update, or delete a worker cluster.
/api/v3/worker-mgt/tag/{tag_id}:
summary: Get, update, or delete a worker tag.
parameters:
- name: cluster_id
- name: tag_id
in: path
required: true
schema: { type: string, format: uuid }
get:
operationId: fetchWorkerCluster
summary: Get a single worker cluster.
operationId: fetchWorkerTag
summary: Get a single worker tag.
tags: [worker-mgt]
responses:
"200":
description: The worker cluster.
description: The worker tag.
content:
application/json:
schema: { $ref: "#/components/schemas/WorkerCluster" }
schema: { $ref: "#/components/schemas/WorkerTag" }
put:
operationId: updateWorkerCluster
summary: Update an existing worker cluster.
operationId: updateWorkerTag
summary: Update an existing worker tag.
tags: [worker-mgt]
requestBody:
description: The updated worker cluster.
description: The updated worker tag.
required: true
content:
application/json:
schema:
$ref: "#/components/schemas/WorkerCluster"
$ref: "#/components/schemas/WorkerTag"
responses:
"204":
description: The cluster update has been stored.
description: The tag update has been stored.
default:
description: Error message
content:
application/json:
schema: { $ref: "#/components/schemas/Error" }
delete:
operationId: deleteWorkerCluster
summary: Remove this worker cluster. This unassigns all workers from the cluster and removes it.
operationId: deleteWorkerTag
summary: Remove this worker tag. This unassigns all workers from the tag and removes it.
tags: [worker-mgt]
responses:
"204":
description: The cluster has been removed.
description: The tag has been removed.
default:
description: Unexpected error.
content:
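The tag management paths above are plain JSON over HTTP, so they can be poked at without the generated client. A minimal sketch using only the standard library; the Manager address is an assumption, while the paths and the {"tags": [...]} response shape follow the WorkerTagList/WorkerTag schemas defined further down. Note that it deletes the first tag it finds, so only run something like this against a throwaway setup.

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

// managerURL is an assumption; point it at your own Flamenco Manager.
const managerURL = "http://localhost:8080"

type workerTag struct {
	ID          string `json:"id"`
	Name        string `json:"name"`
	Description string `json:"description"`
}

func main() {
	// GET /api/v3/worker-mgt/tags returns a WorkerTagList: {"tags": [...]}.
	resp, err := http.Get(managerURL + "/api/v3/worker-mgt/tags")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	var list struct {
		Tags []workerTag `json:"tags"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&list); err != nil {
		log.Fatal(err)
	}
	for _, tag := range list.Tags {
		fmt.Printf("%s  %s: %s\n", tag.ID, tag.Name, tag.Description)
	}

	// DELETE /api/v3/worker-mgt/tag/{tag_id} answers 204 when the tag is removed.
	if len(list.Tags) == 0 {
		return
	}
	req, err := http.NewRequest(http.MethodDelete,
		managerURL+"/api/v3/worker-mgt/tag/"+list.Tags[0].ID, nil)
	if err != nil {
		log.Fatal(err)
	}
	delResp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	delResp.Body.Close()
	fmt.Println("delete status:", delResp.StatusCode) // expect 204
}
```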
@ -1683,6 +1683,32 @@ components:
"eval":
type: string
description: Python expression to be evaluated in order to determine the default value for this setting.
"evalInfo":
type: object
description: Meta-data for the 'eval' expression.
properties:
"showLinkButton":
type: boolean
default: false
description: >
Enables the 'eval on submit' toggle button behavior for this setting.
A toggle button will be shown in Blender's submission interface.
When toggled on, the `eval` expression will determine the setting's
value. Manually editing the setting is then no longer possible, and
instead of an input field, the 'description' string is shown.

An example use is the to-be-rendered frame range, which by default
automatically follows the scene range, but can be overridden
manually when desired.
"description":
type: string
default: ""
description: >
Description of what the 'eval' expression is doing.
It is also used as placeholder text to show when the manual
input field is hidden (because eval-on-submit has been toggled
on by the user).
required: [showLinkButton, description]
"visible":
$ref: "#/components/schemas/AvailableJobSettingVisibility"
"required":
@ -1758,12 +1784,12 @@ components:
test/debug scripts easier, as they can use a static document on all
platforms.
"storage": { $ref: "#/components/schemas/JobStorageInfo" }
"worker_cluster":
"worker_tag":
type: string
format: uuid
description: >
Worker Cluster that should execute this job. When a cluster ID is
given, only Workers in that cluster will be scheduled to work on it.
Worker tag that should execute this job. When a tag ID is
given, only Workers in that tag will be scheduled to work on it.
If empty or omitted, all workers can work on this job.
required: [name, type, priority, submitter_platform]
example:
@ -2364,10 +2390,10 @@ components:
type: array
items: { type: string }
"task": { $ref: "#/components/schemas/WorkerTask" }
"clusters":
"tags":
type: array
items: { $ref: "#/components/schemas/WorkerCluster" }
description: Clusters of which this Worker is a member.
items: { $ref: "#/components/schemas/WorkerTag" }
description: Tags of which this Worker is a member.
required:
- id
- name
@ -2421,17 +2447,17 @@ components:
start_time: "09:00"
end_time: "18:00"

WorkerCluster:
WorkerTag:
type: object
description: >
Cluster of workers. A job can optionally specify which cluster it should
be limited to. Workers can be part of multiple clusters simultaneously.
Tag of workers. A job can optionally specify which tag it should
be limited to. Workers can be part of multiple tags simultaneously.
properties:
"id":
type: string
format: uuid
description: >
UUID of the cluster. Can be omitted when creating a new cluster, in
UUID of the tag. Can be omitted when creating a new tag, in
which case a random UUID will be assigned.
"name":
type: string
@ -2442,25 +2468,25 @@ components:
name: GPU-EEVEE
description: All workers that can do GPU rendering with EEVEE.

WorkerClusterList:
WorkerTagList:
type: object
properties:
"clusters":
"tags":
type: array
items: { $ref: "#/components/schemas/WorkerCluster" }
items: { $ref: "#/components/schemas/WorkerTag" }

WorkerClusterChangeRequest:
WorkerTagChangeRequest:
type: object
description: Request to change which clusters this Worker is assigned to.
description: Request to change which tags this Worker is assigned to.
properties:
"cluster_ids":
"tag_ids":
type: array
items:
type: string
format: uuid
required: [cluster_ids]
required: [tag_ids]
example:
"cluster_ids": ["4312d68c-ea6d-4566-9bf6-e9f09be48ceb"]
"tag_ids": ["4312d68c-ea6d-4566-9bf6-e9f09be48ceb"]

securitySchemes:
worker_auth:
500
pkg/api/openapi_client.gen.go
generated
@ -212,24 +212,24 @@ type ClientInterface interface {
// GetVersion request
GetVersion(ctx context.Context, reqEditors ...RequestEditorFn) (*http.Response, error)

// DeleteWorkerCluster request
DeleteWorkerCluster(ctx context.Context, clusterId string, reqEditors ...RequestEditorFn) (*http.Response, error)
// DeleteWorkerTag request
DeleteWorkerTag(ctx context.Context, tagId string, reqEditors ...RequestEditorFn) (*http.Response, error)

// FetchWorkerCluster request
FetchWorkerCluster(ctx context.Context, clusterId string, reqEditors ...RequestEditorFn) (*http.Response, error)
// FetchWorkerTag request
FetchWorkerTag(ctx context.Context, tagId string, reqEditors ...RequestEditorFn) (*http.Response, error)

// UpdateWorkerCluster request with any body
UpdateWorkerClusterWithBody(ctx context.Context, clusterId string, contentType string, body io.Reader, reqEditors ...RequestEditorFn) (*http.Response, error)
// UpdateWorkerTag request with any body
UpdateWorkerTagWithBody(ctx context.Context, tagId string, contentType string, body io.Reader, reqEditors ...RequestEditorFn) (*http.Response, error)

UpdateWorkerCluster(ctx context.Context, clusterId string, body UpdateWorkerClusterJSONRequestBody, reqEditors ...RequestEditorFn) (*http.Response, error)
UpdateWorkerTag(ctx context.Context, tagId string, body UpdateWorkerTagJSONRequestBody, reqEditors ...RequestEditorFn) (*http.Response, error)

// FetchWorkerClusters request
FetchWorkerClusters(ctx context.Context, reqEditors ...RequestEditorFn) (*http.Response, error)
// FetchWorkerTags request
FetchWorkerTags(ctx context.Context, reqEditors ...RequestEditorFn) (*http.Response, error)

// CreateWorkerCluster request with any body
CreateWorkerClusterWithBody(ctx context.Context, contentType string, body io.Reader, reqEditors ...RequestEditorFn) (*http.Response, error)
// CreateWorkerTag request with any body
CreateWorkerTagWithBody(ctx context.Context, contentType string, body io.Reader, reqEditors ...RequestEditorFn) (*http.Response, error)

CreateWorkerCluster(ctx context.Context, body CreateWorkerClusterJSONRequestBody, reqEditors ...RequestEditorFn) (*http.Response, error)
CreateWorkerTag(ctx context.Context, body CreateWorkerTagJSONRequestBody, reqEditors ...RequestEditorFn) (*http.Response, error)

// FetchWorkers request
FetchWorkers(ctx context.Context, reqEditors ...RequestEditorFn) (*http.Response, error)
@ -240,16 +240,16 @@ type ClientInterface interface {
// FetchWorker request
FetchWorker(ctx context.Context, workerId string, reqEditors ...RequestEditorFn) (*http.Response, error)

// SetWorkerClusters request with any body
SetWorkerClustersWithBody(ctx context.Context, workerId string, contentType string, body io.Reader, reqEditors ...RequestEditorFn) (*http.Response, error)

SetWorkerClusters(ctx context.Context, workerId string, body SetWorkerClustersJSONRequestBody, reqEditors ...RequestEditorFn) (*http.Response, error)

// RequestWorkerStatusChange request with any body
RequestWorkerStatusChangeWithBody(ctx context.Context, workerId string, contentType string, body io.Reader, reqEditors ...RequestEditorFn) (*http.Response, error)

RequestWorkerStatusChange(ctx context.Context, workerId string, body RequestWorkerStatusChangeJSONRequestBody, reqEditors ...RequestEditorFn) (*http.Response, error)

// SetWorkerTags request with any body
SetWorkerTagsWithBody(ctx context.Context, workerId string, contentType string, body io.Reader, reqEditors ...RequestEditorFn) (*http.Response, error)

SetWorkerTags(ctx context.Context, workerId string, body SetWorkerTagsJSONRequestBody, reqEditors ...RequestEditorFn) (*http.Response, error)

// FetchWorkerSleepSchedule request
FetchWorkerSleepSchedule(ctx context.Context, workerId string, reqEditors ...RequestEditorFn) (*http.Response, error)
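SetWorkerTagsWithBody accepts any reader plus a content type, which makes it possible to post the tag_ids document from the spec without depending on the generated request-body struct. A hedged sketch; the helper name and the import path of the generated api package are assumptions, everything else follows the interface above:

```go
package workertags

import (
	"bytes"
	"context"
	"encoding/json"
	"fmt"
	"net/http"

	// Assumed import path for the generated package shown in this diff.
	"projects.blender.org/studio/flamenco/pkg/api"
)

// AssignWorkerToTags posts {"tag_ids": [...]} to
// /api/v3/worker-mgt/workers/{worker_id}/settags through the generated
// ClientInterface. Constructing the client itself is out of scope here.
func AssignWorkerToTags(ctx context.Context, client api.ClientInterface, workerID string, tagIDs []string) error {
	payload, err := json.Marshal(map[string][]string{"tag_ids": tagIDs})
	if err != nil {
		return err
	}

	resp, err := client.SetWorkerTagsWithBody(ctx, workerID, "application/json", bytes.NewReader(payload))
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	// The spec declares "204: Status change was accepted" as the success reply.
	if resp.StatusCode != http.StatusNoContent {
		return fmt.Errorf("setting tags of worker %s: unexpected status %q", workerID, resp.Status)
	}
	return nil
}
```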
@ -822,8 +822,8 @@ func (c *Client) GetVersion(ctx context.Context, reqEditors ...RequestEditorFn)
return c.Client.Do(req)
}

func (c *Client) DeleteWorkerCluster(ctx context.Context, clusterId string, reqEditors ...RequestEditorFn) (*http.Response, error) {
req, err := NewDeleteWorkerClusterRequest(c.Server, clusterId)
func (c *Client) DeleteWorkerTag(ctx context.Context, tagId string, reqEditors ...RequestEditorFn) (*http.Response, error) {
req, err := NewDeleteWorkerTagRequest(c.Server, tagId)
if err != nil {
return nil, err
}
@ -834,8 +834,8 @@ func (c *Client) DeleteWorkerCluster(ctx context.Context, clusterId string, reqE
return c.Client.Do(req)
}

func (c *Client) FetchWorkerCluster(ctx context.Context, clusterId string, reqEditors ...RequestEditorFn) (*http.Response, error) {
req, err := NewFetchWorkerClusterRequest(c.Server, clusterId)
func (c *Client) FetchWorkerTag(ctx context.Context, tagId string, reqEditors ...RequestEditorFn) (*http.Response, error) {
req, err := NewFetchWorkerTagRequest(c.Server, tagId)
if err != nil {
return nil, err
}
@ -846,8 +846,8 @@ func (c *Client) FetchWorkerCluster(ctx context.Context, clusterId string, reqEd
return c.Client.Do(req)
}

func (c *Client) UpdateWorkerClusterWithBody(ctx context.Context, clusterId string, contentType string, body io.Reader, reqEditors ...RequestEditorFn) (*http.Response, error) {
req, err := NewUpdateWorkerClusterRequestWithBody(c.Server, clusterId, contentType, body)
func (c *Client) UpdateWorkerTagWithBody(ctx context.Context, tagId string, contentType string, body io.Reader, reqEditors ...RequestEditorFn) (*http.Response, error) {
req, err := NewUpdateWorkerTagRequestWithBody(c.Server, tagId, contentType, body)
if err != nil {
return nil, err
}
@ -858,8 +858,8 @@ func (c *Client) UpdateWorkerClusterWithBody(ctx context.Context, clusterId stri
return c.Client.Do(req)
}

func (c *Client) UpdateWorkerCluster(ctx context.Context, clusterId string, body UpdateWorkerClusterJSONRequestBody, reqEditors ...RequestEditorFn) (*http.Response, error) {
req, err := NewUpdateWorkerClusterRequest(c.Server, clusterId, body)
func (c *Client) UpdateWorkerTag(ctx context.Context, tagId string, body UpdateWorkerTagJSONRequestBody, reqEditors ...RequestEditorFn) (*http.Response, error) {
req, err := NewUpdateWorkerTagRequest(c.Server, tagId, body)
if err != nil {
return nil, err
}
@ -870,8 +870,8 @@ func (c *Client) UpdateWorkerCluster(ctx context.Context, clusterId string, body
return c.Client.Do(req)
}

func (c *Client) FetchWorkerClusters(ctx context.Context, reqEditors ...RequestEditorFn) (*http.Response, error) {
req, err := NewFetchWorkerClustersRequest(c.Server)
func (c *Client) FetchWorkerTags(ctx context.Context, reqEditors ...RequestEditorFn) (*http.Response, error) {
req, err := NewFetchWorkerTagsRequest(c.Server)
if err != nil {
return nil, err
}
@ -882,8 +882,8 @@ func (c *Client) FetchWorkerClusters(ctx context.Context, reqEditors ...RequestE
return c.Client.Do(req)
}

func (c *Client) CreateWorkerClusterWithBody(ctx context.Context, contentType string, body io.Reader, reqEditors ...RequestEditorFn) (*http.Response, error) {
req, err := NewCreateWorkerClusterRequestWithBody(c.Server, contentType, body)
func (c *Client) CreateWorkerTagWithBody(ctx context.Context, contentType string, body io.Reader, reqEditors ...RequestEditorFn) (*http.Response, error) {
req, err := NewCreateWorkerTagRequestWithBody(c.Server, contentType, body)
if err != nil {
return nil, err
}
@ -894,8 +894,8 @@ func (c *Client) CreateWorkerClusterWithBody(ctx context.Context, contentType st
return c.Client.Do(req)
}

func (c *Client) CreateWorkerCluster(ctx context.Context, body CreateWorkerClusterJSONRequestBody, reqEditors ...RequestEditorFn) (*http.Response, error) {
req, err := NewCreateWorkerClusterRequest(c.Server, body)
func (c *Client) CreateWorkerTag(ctx context.Context, body CreateWorkerTagJSONRequestBody, reqEditors ...RequestEditorFn) (*http.Response, error) {
req, err := NewCreateWorkerTagRequest(c.Server, body)
if err != nil {
return nil, err
}
@ -942,30 +942,6 @@ func (c *Client) FetchWorker(ctx context.Context, workerId string, reqEditors ..
return c.Client.Do(req)
}

func (c *Client) SetWorkerClustersWithBody(ctx context.Context, workerId string, contentType string, body io.Reader, reqEditors ...RequestEditorFn) (*http.Response, error) {
req, err := NewSetWorkerClustersRequestWithBody(c.Server, workerId, contentType, body)
if err != nil {
return nil, err
}
req = req.WithContext(ctx)
if err := c.applyEditors(ctx, req, reqEditors); err != nil {
return nil, err
}
return c.Client.Do(req)
}

func (c *Client) SetWorkerClusters(ctx context.Context, workerId string, body SetWorkerClustersJSONRequestBody, reqEditors ...RequestEditorFn) (*http.Response, error) {
req, err := NewSetWorkerClustersRequest(c.Server, workerId, body)
if err != nil {
return nil, err
}
req = req.WithContext(ctx)
if err := c.applyEditors(ctx, req, reqEditors); err != nil {
return nil, err
}
return c.Client.Do(req)
}

func (c *Client) RequestWorkerStatusChangeWithBody(ctx context.Context, workerId string, contentType string, body io.Reader, reqEditors ...RequestEditorFn) (*http.Response, error) {
req, err := NewRequestWorkerStatusChangeRequestWithBody(c.Server, workerId, contentType, body)
if err != nil {
@ -990,6 +966,30 @@ func (c *Client) RequestWorkerStatusChange(ctx context.Context, workerId string,
return c.Client.Do(req)
}

func (c *Client) SetWorkerTagsWithBody(ctx context.Context, workerId string, contentType string, body io.Reader, reqEditors ...RequestEditorFn) (*http.Response, error) {
req, err := NewSetWorkerTagsRequestWithBody(c.Server, workerId, contentType, body)
if err != nil {
return nil, err
}
req = req.WithContext(ctx)
if err := c.applyEditors(ctx, req, reqEditors); err != nil {
return nil, err
}
return c.Client.Do(req)
}

func (c *Client) SetWorkerTags(ctx context.Context, workerId string, body SetWorkerTagsJSONRequestBody, reqEditors ...RequestEditorFn) (*http.Response, error) {
req, err := NewSetWorkerTagsRequest(c.Server, workerId, body)
if err != nil {
return nil, err
}
req = req.WithContext(ctx)
if err := c.applyEditors(ctx, req, reqEditors); err != nil {
return nil, err
}
return c.Client.Do(req)
}

func (c *Client) FetchWorkerSleepSchedule(ctx context.Context, workerId string, reqEditors ...RequestEditorFn) (*http.Response, error) {
req, err := NewFetchWorkerSleepScheduleRequest(c.Server, workerId)
if err != nil {
@ -2380,13 +2380,13 @@ func NewGetVersionRequest(server string) (*http.Request, error) {
return req, nil
}

// NewDeleteWorkerClusterRequest generates requests for DeleteWorkerCluster
func NewDeleteWorkerClusterRequest(server string, clusterId string) (*http.Request, error) {
// NewDeleteWorkerTagRequest generates requests for DeleteWorkerTag
func NewDeleteWorkerTagRequest(server string, tagId string) (*http.Request, error) {
var err error

var pathParam0 string

pathParam0, err = runtime.StyleParamWithLocation("simple", false, "cluster_id", runtime.ParamLocationPath, clusterId)
pathParam0, err = runtime.StyleParamWithLocation("simple", false, "tag_id", runtime.ParamLocationPath, tagId)
if err != nil {
return nil, err
}
@ -2396,7 +2396,7 @@ func NewDeleteWorkerClusterRequest(server string, clusterId string) (*http.Reque
return nil, err
}

operationPath := fmt.Sprintf("/api/v3/worker-mgt/cluster/%s", pathParam0)
operationPath := fmt.Sprintf("/api/v3/worker-mgt/tag/%s", pathParam0)
if operationPath[0] == '/' {
operationPath = "." + operationPath
}
@ -2414,13 +2414,13 @@ func NewDeleteWorkerClusterRequest(server string, clusterId string) (*http.Reque
return req, nil
}

// NewFetchWorkerClusterRequest generates requests for FetchWorkerCluster
func NewFetchWorkerClusterRequest(server string, clusterId string) (*http.Request, error) {
// NewFetchWorkerTagRequest generates requests for FetchWorkerTag
func NewFetchWorkerTagRequest(server string, tagId string) (*http.Request, error) {
var err error

var pathParam0 string

pathParam0, err = runtime.StyleParamWithLocation("simple", false, "cluster_id", runtime.ParamLocationPath, clusterId)
pathParam0, err = runtime.StyleParamWithLocation("simple", false, "tag_id", runtime.ParamLocationPath, tagId)
if err != nil {
return nil, err
}
@ -2430,7 +2430,7 @@ func NewFetchWorkerClusterRequest(server string, clusterId string) (*http.Reques
return nil, err
}

operationPath := fmt.Sprintf("/api/v3/worker-mgt/cluster/%s", pathParam0)
operationPath := fmt.Sprintf("/api/v3/worker-mgt/tag/%s", pathParam0)
if operationPath[0] == '/' {
operationPath = "." + operationPath
}
@ -2448,24 +2448,24 @@ func NewFetchWorkerClusterRequest(server string, clusterId string) (*http.Reques
return req, nil
}

// NewUpdateWorkerClusterRequest calls the generic UpdateWorkerCluster builder with application/json body
func NewUpdateWorkerClusterRequest(server string, clusterId string, body UpdateWorkerClusterJSONRequestBody) (*http.Request, error) {
// NewUpdateWorkerTagRequest calls the generic UpdateWorkerTag builder with application/json body
func NewUpdateWorkerTagRequest(server string, tagId string, body UpdateWorkerTagJSONRequestBody) (*http.Request, error) {
var bodyReader io.Reader
buf, err := json.Marshal(body)
if err != nil {
return nil, err
}
bodyReader = bytes.NewReader(buf)
return NewUpdateWorkerClusterRequestWithBody(server, clusterId, "application/json", bodyReader)
return NewUpdateWorkerTagRequestWithBody(server, tagId, "application/json", bodyReader)
}

// NewUpdateWorkerClusterRequestWithBody generates requests for UpdateWorkerCluster with any type of body
func NewUpdateWorkerClusterRequestWithBody(server string, clusterId string, contentType string, body io.Reader) (*http.Request, error) {
// NewUpdateWorkerTagRequestWithBody generates requests for UpdateWorkerTag with any type of body
func NewUpdateWorkerTagRequestWithBody(server string, tagId string, contentType string, body io.Reader) (*http.Request, error) {
var err error

var pathParam0 string

pathParam0, err = runtime.StyleParamWithLocation("simple", false, "cluster_id", runtime.ParamLocationPath, clusterId)
pathParam0, err = runtime.StyleParamWithLocation("simple", false, "tag_id", runtime.ParamLocationPath, tagId)
if err != nil {
return nil, err
}
@ -2475,7 +2475,7 @@ func NewUpdateWorkerClusterRequestWithBody(server string, clusterId string, cont
return nil, err
}

operationPath := fmt.Sprintf("/api/v3/worker-mgt/cluster/%s", pathParam0)
operationPath := fmt.Sprintf("/api/v3/worker-mgt/tag/%s", pathParam0)
if operationPath[0] == '/' {
operationPath = "." + operationPath
}
@ -2495,8 +2495,8 @@ func NewUpdateWorkerClusterRequestWithBody(server string, clusterId string, cont
return req, nil
}

// NewFetchWorkerClustersRequest generates requests for FetchWorkerClusters
func NewFetchWorkerClustersRequest(server string) (*http.Request, error) {
// NewFetchWorkerTagsRequest generates requests for FetchWorkerTags
func NewFetchWorkerTagsRequest(server string) (*http.Request, error) {
var err error

serverURL, err := url.Parse(server)
@ -2504,7 +2504,7 @@ func NewFetchWorkerClustersRequest(server string) (*http.Request, error) {
return nil, err
}

operationPath := fmt.Sprintf("/api/v3/worker-mgt/clusters")
operationPath := fmt.Sprintf("/api/v3/worker-mgt/tags")
if operationPath[0] == '/' {
operationPath = "." + operationPath
}
@ -2522,19 +2522,19 @@ func NewFetchWorkerClustersRequest(server string) (*http.Request, error) {
return req, nil
}

// NewCreateWorkerClusterRequest calls the generic CreateWorkerCluster builder with application/json body
func NewCreateWorkerClusterRequest(server string, body CreateWorkerClusterJSONRequestBody) (*http.Request, error) {
// NewCreateWorkerTagRequest calls the generic CreateWorkerTag builder with application/json body
func NewCreateWorkerTagRequest(server string, body CreateWorkerTagJSONRequestBody) (*http.Request, error) {
var bodyReader io.Reader
buf, err := json.Marshal(body)
if err != nil {
return nil, err
}
bodyReader = bytes.NewReader(buf)
return NewCreateWorkerClusterRequestWithBody(server, "application/json", bodyReader)
return NewCreateWorkerTagRequestWithBody(server, "application/json", bodyReader)
}

// NewCreateWorkerClusterRequestWithBody generates requests for CreateWorkerCluster with any type of body
func NewCreateWorkerClusterRequestWithBody(server string, contentType string, body io.Reader) (*http.Request, error) {
// NewCreateWorkerTagRequestWithBody generates requests for CreateWorkerTag with any type of body
func NewCreateWorkerTagRequestWithBody(server string, contentType string, body io.Reader) (*http.Request, error) {
var err error

serverURL, err := url.Parse(server)
@ -2542,7 +2542,7 @@ func NewCreateWorkerClusterRequestWithBody(server string, contentType string, bo
return nil, err
}

operationPath := fmt.Sprintf("/api/v3/worker-mgt/clusters")
operationPath := fmt.Sprintf("/api/v3/worker-mgt/tags")
if operationPath[0] == '/' {
operationPath = "." + operationPath
}
@ -2657,53 +2657,6 @@ func NewFetchWorkerRequest(server string, workerId string) (*http.Request, error
return req, nil
}

// NewSetWorkerClustersRequest calls the generic SetWorkerClusters builder with application/json body
func NewSetWorkerClustersRequest(server string, workerId string, body SetWorkerClustersJSONRequestBody) (*http.Request, error) {
var bodyReader io.Reader
buf, err := json.Marshal(body)
if err != nil {
return nil, err
}
bodyReader = bytes.NewReader(buf)
return NewSetWorkerClustersRequestWithBody(server, workerId, "application/json", bodyReader)
}

// NewSetWorkerClustersRequestWithBody generates requests for SetWorkerClusters with any type of body
func NewSetWorkerClustersRequestWithBody(server string, workerId string, contentType string, body io.Reader) (*http.Request, error) {
var err error

var pathParam0 string

pathParam0, err = runtime.StyleParamWithLocation("simple", false, "worker_id", runtime.ParamLocationPath, workerId)
if err != nil {
return nil, err
}

serverURL, err := url.Parse(server)
if err != nil {
return nil, err
}

operationPath := fmt.Sprintf("/api/v3/worker-mgt/workers/%s/setclusters", pathParam0)
if operationPath[0] == '/' {
operationPath = "." + operationPath
}

queryURL, err := serverURL.Parse(operationPath)
if err != nil {
return nil, err
}

req, err := http.NewRequest("POST", queryURL.String(), body)
if err != nil {
return nil, err
}

req.Header.Add("Content-Type", contentType)

return req, nil
}

// NewRequestWorkerStatusChangeRequest calls the generic RequestWorkerStatusChange builder with application/json body
func NewRequestWorkerStatusChangeRequest(server string, workerId string, body RequestWorkerStatusChangeJSONRequestBody) (*http.Request, error) {
var bodyReader io.Reader
@ -2751,6 +2704,53 @@ func NewRequestWorkerStatusChangeRequestWithBody(server string, workerId string,
return req, nil
}

// NewSetWorkerTagsRequest calls the generic SetWorkerTags builder with application/json body
func NewSetWorkerTagsRequest(server string, workerId string, body SetWorkerTagsJSONRequestBody) (*http.Request, error) {
var bodyReader io.Reader
buf, err := json.Marshal(body)
if err != nil {
return nil, err
}
bodyReader = bytes.NewReader(buf)
return NewSetWorkerTagsRequestWithBody(server, workerId, "application/json", bodyReader)
}

// NewSetWorkerTagsRequestWithBody generates requests for SetWorkerTags with any type of body
func NewSetWorkerTagsRequestWithBody(server string, workerId string, contentType string, body io.Reader) (*http.Request, error) {
var err error

var pathParam0 string

pathParam0, err = runtime.StyleParamWithLocation("simple", false, "worker_id", runtime.ParamLocationPath, workerId)
if err != nil {
return nil, err
}

serverURL, err := url.Parse(server)
if err != nil {
return nil, err
}

operationPath := fmt.Sprintf("/api/v3/worker-mgt/workers/%s/settags", pathParam0)
if operationPath[0] == '/' {
operationPath = "." + operationPath
}

queryURL, err := serverURL.Parse(operationPath)
if err != nil {
return nil, err
}

req, err := http.NewRequest("POST", queryURL.String(), body)
if err != nil {
return nil, err
}

req.Header.Add("Content-Type", contentType)

return req, nil
}

// NewFetchWorkerSleepScheduleRequest generates requests for FetchWorkerSleepSchedule
func NewFetchWorkerSleepScheduleRequest(server string, workerId string) (*http.Request, error) {
var err error
@ -3313,24 +3313,24 @@ type ClientWithResponsesInterface interface {
// GetVersion request
GetVersionWithResponse(ctx context.Context, reqEditors ...RequestEditorFn) (*GetVersionResponse, error)

// DeleteWorkerCluster request
DeleteWorkerClusterWithResponse(ctx context.Context, clusterId string, reqEditors ...RequestEditorFn) (*DeleteWorkerClusterResponse, error)
// DeleteWorkerTag request
DeleteWorkerTagWithResponse(ctx context.Context, tagId string, reqEditors ...RequestEditorFn) (*DeleteWorkerTagResponse, error)

// FetchWorkerCluster request
FetchWorkerClusterWithResponse(ctx context.Context, clusterId string, reqEditors ...RequestEditorFn) (*FetchWorkerClusterResponse, error)
// FetchWorkerTag request
FetchWorkerTagWithResponse(ctx context.Context, tagId string, reqEditors ...RequestEditorFn) (*FetchWorkerTagResponse, error)

// UpdateWorkerCluster request with any body
UpdateWorkerClusterWithBodyWithResponse(ctx context.Context, clusterId string, contentType string, body io.Reader, reqEditors ...RequestEditorFn) (*UpdateWorkerClusterResponse, error)
// UpdateWorkerTag request with any body
UpdateWorkerTagWithBodyWithResponse(ctx context.Context, tagId string, contentType string, body io.Reader, reqEditors ...RequestEditorFn) (*UpdateWorkerTagResponse, error)

UpdateWorkerClusterWithResponse(ctx context.Context, clusterId string, body UpdateWorkerClusterJSONRequestBody, reqEditors ...RequestEditorFn) (*UpdateWorkerClusterResponse, error)
UpdateWorkerTagWithResponse(ctx context.Context, tagId string, body UpdateWorkerTagJSONRequestBody, reqEditors ...RequestEditorFn) (*UpdateWorkerTagResponse, error)

// FetchWorkerClusters request
FetchWorkerClustersWithResponse(ctx context.Context, reqEditors ...RequestEditorFn) (*FetchWorkerClustersResponse, error)
// FetchWorkerTags request
FetchWorkerTagsWithResponse(ctx context.Context, reqEditors ...RequestEditorFn) (*FetchWorkerTagsResponse, error)

// CreateWorkerCluster request with any body
CreateWorkerClusterWithBodyWithResponse(ctx context.Context, contentType string, body io.Reader, reqEditors ...RequestEditorFn) (*CreateWorkerClusterResponse, error)
// CreateWorkerTag request with any body
CreateWorkerTagWithBodyWithResponse(ctx context.Context, contentType string, body io.Reader, reqEditors ...RequestEditorFn) (*CreateWorkerTagResponse, error)

CreateWorkerClusterWithResponse(ctx context.Context, body CreateWorkerClusterJSONRequestBody, reqEditors ...RequestEditorFn) (*CreateWorkerClusterResponse, error)
CreateWorkerTagWithResponse(ctx context.Context, body CreateWorkerTagJSONRequestBody, reqEditors ...RequestEditorFn) (*CreateWorkerTagResponse, error)

// FetchWorkers request
FetchWorkersWithResponse(ctx context.Context, reqEditors ...RequestEditorFn) (*FetchWorkersResponse, error)
@ -3341,16 +3341,16 @@ type ClientWithResponsesInterface interface {
// FetchWorker request
FetchWorkerWithResponse(ctx context.Context, workerId string, reqEditors ...RequestEditorFn) (*FetchWorkerResponse, error)

// SetWorkerClusters request with any body
SetWorkerClustersWithBodyWithResponse(ctx context.Context, workerId string, contentType string, body io.Reader, reqEditors ...RequestEditorFn) (*SetWorkerClustersResponse, error)

SetWorkerClustersWithResponse(ctx context.Context, workerId string, body SetWorkerClustersJSONRequestBody, reqEditors ...RequestEditorFn) (*SetWorkerClustersResponse, error)

// RequestWorkerStatusChange request with any body
RequestWorkerStatusChangeWithBodyWithResponse(ctx context.Context, workerId string, contentType string, body io.Reader, reqEditors ...RequestEditorFn) (*RequestWorkerStatusChangeResponse, error)

RequestWorkerStatusChangeWithResponse(ctx context.Context, workerId string, body RequestWorkerStatusChangeJSONRequestBody, reqEditors ...RequestEditorFn) (*RequestWorkerStatusChangeResponse, error)

// SetWorkerTags request with any body
SetWorkerTagsWithBodyWithResponse(ctx context.Context, workerId string, contentType string, body io.Reader, reqEditors ...RequestEditorFn) (*SetWorkerTagsResponse, error)

SetWorkerTagsWithResponse(ctx context.Context, workerId string, body SetWorkerTagsJSONRequestBody, reqEditors ...RequestEditorFn) (*SetWorkerTagsResponse, error)

// FetchWorkerSleepSchedule request
FetchWorkerSleepScheduleWithResponse(ctx context.Context, workerId string, reqEditors ...RequestEditorFn) (*FetchWorkerSleepScheduleResponse, error)
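The *WithResponse variants listed above parse the reply into typed fields such as JSON200, so callers only need to inspect the status code. A small hedged sketch built on this interface; only the import path and the hypothetical helper package are assumptions:

```go
package workertags

import (
	"context"
	"fmt"
	"net/http"

	// Assumed import path for the generated package in this diff.
	"projects.blender.org/studio/flamenco/pkg/api"
)

// ListTags fetches all worker tags through the *WithResponse wrapper, which
// has already unmarshalled a 200 reply into JSON200 (*api.WorkerTagList).
func ListTags(ctx context.Context, client api.ClientWithResponsesInterface) (*api.WorkerTagList, error) {
	resp, err := client.FetchWorkerTagsWithResponse(ctx)
	if err != nil {
		return nil, err
	}
	if resp.StatusCode() != http.StatusOK || resp.JSON200 == nil {
		return nil, fmt.Errorf("fetching worker tags: %s", resp.Status())
	}
	return resp.JSON200, nil
}
```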
@ -4118,14 +4118,14 @@ func (r GetVersionResponse) StatusCode() int {
return 0
}

type DeleteWorkerClusterResponse struct {
type DeleteWorkerTagResponse struct {
Body []byte
HTTPResponse *http.Response
JSONDefault *Error
}

// Status returns HTTPResponse.Status
func (r DeleteWorkerClusterResponse) Status() string {
func (r DeleteWorkerTagResponse) Status() string {
if r.HTTPResponse != nil {
return r.HTTPResponse.Status
}
@ -4133,21 +4133,21 @@ func (r DeleteWorkerClusterResponse) Status() string {
}

// StatusCode returns HTTPResponse.StatusCode
func (r DeleteWorkerClusterResponse) StatusCode() int {
func (r DeleteWorkerTagResponse) StatusCode() int {
if r.HTTPResponse != nil {
return r.HTTPResponse.StatusCode
}
return 0
}

type FetchWorkerClusterResponse struct {
type FetchWorkerTagResponse struct {
Body []byte
HTTPResponse *http.Response
JSON200 *WorkerCluster
JSON200 *WorkerTag
}

// Status returns HTTPResponse.Status
func (r FetchWorkerClusterResponse) Status() string {
func (r FetchWorkerTagResponse) Status() string {
if r.HTTPResponse != nil {
return r.HTTPResponse.Status
}
@ -4155,21 +4155,21 @@ func (r FetchWorkerClusterResponse) Status() string {
}

// StatusCode returns HTTPResponse.StatusCode
func (r FetchWorkerClusterResponse) StatusCode() int {
func (r FetchWorkerTagResponse) StatusCode() int {
if r.HTTPResponse != nil {
return r.HTTPResponse.StatusCode
}
return 0
}

type UpdateWorkerClusterResponse struct {
type UpdateWorkerTagResponse struct {
Body []byte
HTTPResponse *http.Response
JSONDefault *Error
}

// Status returns HTTPResponse.Status
func (r UpdateWorkerClusterResponse) Status() string {
func (r UpdateWorkerTagResponse) Status() string {
if r.HTTPResponse != nil {
return r.HTTPResponse.Status
}
@ -4177,21 +4177,21 @@ func (r UpdateWorkerClusterResponse) Status() string {
}

// StatusCode returns HTTPResponse.StatusCode
func (r UpdateWorkerClusterResponse) StatusCode() int {
func (r UpdateWorkerTagResponse) StatusCode() int {
if r.HTTPResponse != nil {
return r.HTTPResponse.StatusCode
}
return 0
}

type FetchWorkerClustersResponse struct {
type FetchWorkerTagsResponse struct {
Body []byte
HTTPResponse *http.Response
JSON200 *WorkerClusterList
JSON200 *WorkerTagList
}

// Status returns HTTPResponse.Status
func (r FetchWorkerClustersResponse) Status() string {
func (r FetchWorkerTagsResponse) Status() string {
if r.HTTPResponse != nil {
return r.HTTPResponse.Status
}
@ -4199,22 +4199,22 @@ func (r FetchWorkerClustersResponse) Status() string {
}

// StatusCode returns HTTPResponse.StatusCode
func (r FetchWorkerClustersResponse) StatusCode() int {
func (r FetchWorkerTagsResponse) StatusCode() int {
if r.HTTPResponse != nil {
return r.HTTPResponse.StatusCode
}
return 0
}

type CreateWorkerClusterResponse struct {
type CreateWorkerTagResponse struct {
Body []byte
HTTPResponse *http.Response
JSON200 *WorkerCluster
JSON200 *WorkerTag
JSONDefault *Error
}

// Status returns HTTPResponse.Status
func (r CreateWorkerClusterResponse) Status() string {
func (r CreateWorkerTagResponse) Status() string {
if r.HTTPResponse != nil {
return r.HTTPResponse.Status
}
@ -4222,7 +4222,7 @@ func (r CreateWorkerClusterResponse) Status() string {
}

// StatusCode returns HTTPResponse.StatusCode
func (r CreateWorkerClusterResponse) StatusCode() int {
func (r CreateWorkerTagResponse) StatusCode() int {
if r.HTTPResponse != nil {
return r.HTTPResponse.StatusCode
}
@ -4295,28 +4295,6 @@ func (r FetchWorkerResponse) StatusCode() int {
return 0
}

type SetWorkerClustersResponse struct {
Body []byte
HTTPResponse *http.Response
JSONDefault *Error
}

// Status returns HTTPResponse.Status
func (r SetWorkerClustersResponse) Status() string {
if r.HTTPResponse != nil {
return r.HTTPResponse.Status
}
return http.StatusText(0)
}

// StatusCode returns HTTPResponse.StatusCode
func (r SetWorkerClustersResponse) StatusCode() int {
if r.HTTPResponse != nil {
return r.HTTPResponse.StatusCode
}
return 0
}

type RequestWorkerStatusChangeResponse struct {
Body []byte
HTTPResponse *http.Response
@ -4339,6 +4317,28 @@ func (r RequestWorkerStatusChangeResponse) StatusCode() int {
return 0
}

type SetWorkerTagsResponse struct {
Body []byte
HTTPResponse *http.Response
JSONDefault *Error
}

// Status returns HTTPResponse.Status
func (r SetWorkerTagsResponse) Status() string {
if r.HTTPResponse != nil {
return r.HTTPResponse.Status
}
return http.StatusText(0)
}

// StatusCode returns HTTPResponse.StatusCode
func (r SetWorkerTagsResponse) StatusCode() int {
if r.HTTPResponse != nil {
return r.HTTPResponse.StatusCode
}
return 0
}

type FetchWorkerSleepScheduleResponse struct {
Body []byte
HTTPResponse *http.Response
@ -4976,65 +4976,65 @@ func (c *ClientWithResponses) GetVersionWithResponse(ctx context.Context, reqEdi
return ParseGetVersionResponse(rsp)
}

// DeleteWorkerClusterWithResponse request returning *DeleteWorkerClusterResponse
func (c *ClientWithResponses) DeleteWorkerClusterWithResponse(ctx context.Context, clusterId string, reqEditors ...RequestEditorFn) (*DeleteWorkerClusterResponse, error) {
rsp, err := c.DeleteWorkerCluster(ctx, clusterId, reqEditors...)
// DeleteWorkerTagWithResponse request returning *DeleteWorkerTagResponse
func (c *ClientWithResponses) DeleteWorkerTagWithResponse(ctx context.Context, tagId string, reqEditors ...RequestEditorFn) (*DeleteWorkerTagResponse, error) {
rsp, err := c.DeleteWorkerTag(ctx, tagId, reqEditors...)
if err != nil {
return nil, err
}
return ParseDeleteWorkerClusterResponse(rsp)
return ParseDeleteWorkerTagResponse(rsp)
}

// FetchWorkerClusterWithResponse request returning *FetchWorkerClusterResponse
func (c *ClientWithResponses) FetchWorkerClusterWithResponse(ctx context.Context, clusterId string, reqEditors ...RequestEditorFn) (*FetchWorkerClusterResponse, error) {
rsp, err := c.FetchWorkerCluster(ctx, clusterId, reqEditors...)
// FetchWorkerTagWithResponse request returning *FetchWorkerTagResponse
func (c *ClientWithResponses) FetchWorkerTagWithResponse(ctx context.Context, tagId string, reqEditors ...RequestEditorFn) (*FetchWorkerTagResponse, error) {
rsp, err := c.FetchWorkerTag(ctx, tagId, reqEditors...)
if err != nil {
return nil, err
}
return ParseFetchWorkerClusterResponse(rsp)
return ParseFetchWorkerTagResponse(rsp)
}

// UpdateWorkerClusterWithBodyWithResponse request with arbitrary body returning *UpdateWorkerClusterResponse
func (c *ClientWithResponses) UpdateWorkerClusterWithBodyWithResponse(ctx context.Context, clusterId string, contentType string, body io.Reader, reqEditors ...RequestEditorFn) (*UpdateWorkerClusterResponse, error) {
rsp, err := c.UpdateWorkerClusterWithBody(ctx, clusterId, contentType, body, reqEditors...)
// UpdateWorkerTagWithBodyWithResponse request with arbitrary body returning *UpdateWorkerTagResponse
func (c *ClientWithResponses) UpdateWorkerTagWithBodyWithResponse(ctx context.Context, tagId string, contentType string, body io.Reader, reqEditors ...RequestEditorFn) (*UpdateWorkerTagResponse, error) {
rsp, err := c.UpdateWorkerTagWithBody(ctx, tagId, contentType, body, reqEditors...)
if err != nil {
return nil, err
}
return ParseUpdateWorkerClusterResponse(rsp)
return ParseUpdateWorkerTagResponse(rsp)
}

func (c *ClientWithResponses) UpdateWorkerClusterWithResponse(ctx context.Context, clusterId string, body UpdateWorkerClusterJSONRequestBody, reqEditors ...RequestEditorFn) (*UpdateWorkerClusterResponse, error) {
rsp, err := c.UpdateWorkerCluster(ctx, clusterId, body, reqEditors...)
func (c *ClientWithResponses) UpdateWorkerTagWithResponse(ctx context.Context, tagId string, body UpdateWorkerTagJSONRequestBody, reqEditors ...RequestEditorFn) (*UpdateWorkerTagResponse, error) {
rsp, err := c.UpdateWorkerTag(ctx, tagId, body, reqEditors...)
if err != nil {
return nil, err
}
return ParseUpdateWorkerClusterResponse(rsp)
return ParseUpdateWorkerTagResponse(rsp)
}

// FetchWorkerClustersWithResponse request returning *FetchWorkerClustersResponse
func (c *ClientWithResponses) FetchWorkerClustersWithResponse(ctx context.Context, reqEditors ...RequestEditorFn) (*FetchWorkerClustersResponse, error) {
rsp, err := c.FetchWorkerClusters(ctx, reqEditors...)
// FetchWorkerTagsWithResponse request returning *FetchWorkerTagsResponse
func (c *ClientWithResponses) FetchWorkerTagsWithResponse(ctx context.Context, reqEditors ...RequestEditorFn) (*FetchWorkerTagsResponse, error) {
rsp, err := c.FetchWorkerTags(ctx, reqEditors...)
if err != nil {
return nil, err
}
return ParseFetchWorkerClustersResponse(rsp)
return ParseFetchWorkerTagsResponse(rsp)
}

// CreateWorkerClusterWithBodyWithResponse request with arbitrary body returning *CreateWorkerClusterResponse
func (c *ClientWithResponses) CreateWorkerClusterWithBodyWithResponse(ctx context.Context, contentType string, body io.Reader, reqEditors ...RequestEditorFn) (*CreateWorkerClusterResponse, error) {
rsp, err := c.CreateWorkerClusterWithBody(ctx, contentType, body, reqEditors...)
// CreateWorkerTagWithBodyWithResponse request with arbitrary body returning *CreateWorkerTagResponse
func (c *ClientWithResponses) CreateWorkerTagWithBodyWithResponse(ctx context.Context, contentType string, body io.Reader, reqEditors ...RequestEditorFn) (*CreateWorkerTagResponse, error) {
rsp, err := c.CreateWorkerTagWithBody(ctx, contentType, body, reqEditors...)
if err != nil {
return nil, err
}
return ParseCreateWorkerClusterResponse(rsp)
return ParseCreateWorkerTagResponse(rsp)
}

func (c *ClientWithResponses) CreateWorkerClusterWithResponse(ctx context.Context, body CreateWorkerClusterJSONRequestBody, reqEditors ...RequestEditorFn) (*CreateWorkerClusterResponse, error) {
rsp, err := c.CreateWorkerCluster(ctx, body, reqEditors...)
func (c *ClientWithResponses) CreateWorkerTagWithResponse(ctx context.Context, body CreateWorkerTagJSONRequestBody, reqEditors ...RequestEditorFn) (*CreateWorkerTagResponse, error) {
rsp, err := c.CreateWorkerTag(ctx, body, reqEditors...)
if err != nil {
return nil, err
}
return ParseCreateWorkerClusterResponse(rsp)
return ParseCreateWorkerTagResponse(rsp)
}

// FetchWorkersWithResponse request returning *FetchWorkersResponse
@ -5064,23 +5064,6 @@ func (c *ClientWithResponses) FetchWorkerWithResponse(ctx context.Context, worke
return ParseFetchWorkerResponse(rsp)
}

// SetWorkerClustersWithBodyWithResponse request with arbitrary body returning *SetWorkerClustersResponse
func (c *ClientWithResponses) SetWorkerClustersWithBodyWithResponse(ctx context.Context, workerId string, contentType string, body io.Reader, reqEditors ...RequestEditorFn) (*SetWorkerClustersResponse, error) {
rsp, err := c.SetWorkerClustersWithBody(ctx, workerId, contentType, body, reqEditors...)
if err != nil {
return nil, err
}
return ParseSetWorkerClustersResponse(rsp)
}

func (c *ClientWithResponses) SetWorkerClustersWithResponse(ctx context.Context, workerId string, body SetWorkerClustersJSONRequestBody, reqEditors ...RequestEditorFn) (*SetWorkerClustersResponse, error) {
rsp, err := c.SetWorkerClusters(ctx, workerId, body, reqEditors...)
if err != nil {
return nil, err
}
return ParseSetWorkerClustersResponse(rsp)
}

// RequestWorkerStatusChangeWithBodyWithResponse request with arbitrary body returning *RequestWorkerStatusChangeResponse
func (c *ClientWithResponses) RequestWorkerStatusChangeWithBodyWithResponse(ctx context.Context, workerId string, contentType string, body io.Reader, reqEditors ...RequestEditorFn) (*RequestWorkerStatusChangeResponse, error) {
rsp, err := c.RequestWorkerStatusChangeWithBody(ctx, workerId, contentType, body, reqEditors...)
@ -5098,6 +5081,23 @@ func (c *ClientWithResponses) RequestWorkerStatusChangeWithResponse(ctx context.
return ParseRequestWorkerStatusChangeResponse(rsp)
}

// SetWorkerTagsWithBodyWithResponse request with arbitrary body returning *SetWorkerTagsResponse
func (c *ClientWithResponses) SetWorkerTagsWithBodyWithResponse(ctx context.Context, workerId string, contentType string, body io.Reader, reqEditors ...RequestEditorFn) (*SetWorkerTagsResponse, error) {
rsp, err := c.SetWorkerTagsWithBody(ctx, workerId, contentType, body, reqEditors...)
if err != nil {
return nil, err
}
return ParseSetWorkerTagsResponse(rsp)
}

func (c *ClientWithResponses) SetWorkerTagsWithResponse(ctx context.Context, workerId string, body SetWorkerTagsJSONRequestBody, reqEditors ...RequestEditorFn) (*SetWorkerTagsResponse, error) {
rsp, err := c.SetWorkerTags(ctx, workerId, body, reqEditors...)
if err != nil {
return nil, err
}
return ParseSetWorkerTagsResponse(rsp)
}

// FetchWorkerSleepScheduleWithResponse request returning *FetchWorkerSleepScheduleResponse
func (c *ClientWithResponses) FetchWorkerSleepScheduleWithResponse(ctx context.Context, workerId string, reqEditors ...RequestEditorFn) (*FetchWorkerSleepScheduleResponse, error) {
rsp, err := c.FetchWorkerSleepSchedule(ctx, workerId, reqEditors...)
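DeleteWorkerTagWithResponse returns a DeleteWorkerTagResponse whose JSONDefault field carries the Manager's default Error document on non-2xx replies, while a 204 means the tag (and all worker assignments to it) is gone. A hedged sketch of the corresponding error handling, matching the fix this pull request targets; only the import path and the hypothetical helper package are assumptions:

```go
package workertags

import (
	"context"
	"fmt"
	"net/http"

	// Assumed import path for the generated package in this diff.
	"projects.blender.org/studio/flamenco/pkg/api"
)

// DeleteTag removes a worker tag. It treats 204 as success and reports
// anything else as an error, branching on whether the Manager attached its
// default Error document (JSONDefault).
func DeleteTag(ctx context.Context, client api.ClientWithResponsesInterface, tagUUID string) error {
	resp, err := client.DeleteWorkerTagWithResponse(ctx, tagUUID)
	if err != nil {
		return err
	}
	switch {
	case resp.StatusCode() == http.StatusNoContent:
		return nil
	case resp.JSONDefault != nil:
		return fmt.Errorf("deleting worker tag %s: Manager replied %s", tagUUID, resp.Status())
	default:
		return fmt.Errorf("deleting worker tag %s: unexpected reply %s", tagUUID, resp.Status())
	}
}
```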
@ -6190,15 +6190,15 @@ func ParseGetVersionResponse(rsp *http.Response) (*GetVersionResponse, error) {
return response, nil
}

// ParseDeleteWorkerClusterResponse parses an HTTP response from a DeleteWorkerClusterWithResponse call
func ParseDeleteWorkerClusterResponse(rsp *http.Response) (*DeleteWorkerClusterResponse, error) {
// ParseDeleteWorkerTagResponse parses an HTTP response from a DeleteWorkerTagWithResponse call
func ParseDeleteWorkerTagResponse(rsp *http.Response) (*DeleteWorkerTagResponse, error) {
bodyBytes, err := ioutil.ReadAll(rsp.Body)
defer func() { _ = rsp.Body.Close() }()
if err != nil {
return nil, err
}

response := &DeleteWorkerClusterResponse{
response := &DeleteWorkerTagResponse{
Body: bodyBytes,
HTTPResponse: rsp,
}
@ -6216,22 +6216,22 @@ func ParseDeleteWorkerClusterResponse(rsp *http.Response) (*DeleteWorkerClusterR
return response, nil
}

// ParseFetchWorkerClusterResponse parses an HTTP response from a FetchWorkerClusterWithResponse call
func ParseFetchWorkerClusterResponse(rsp *http.Response) (*FetchWorkerClusterResponse, error) {
// ParseFetchWorkerTagResponse parses an HTTP response from a FetchWorkerTagWithResponse call
func ParseFetchWorkerTagResponse(rsp *http.Response) (*FetchWorkerTagResponse, error) {
bodyBytes, err := ioutil.ReadAll(rsp.Body)
defer func() { _ = rsp.Body.Close() }()
if err != nil {
return nil, err
}

response := &FetchWorkerClusterResponse{
response := &FetchWorkerTagResponse{
Body: bodyBytes,
HTTPResponse: rsp,
}

switch {
case strings.Contains(rsp.Header.Get("Content-Type"), "json") && rsp.StatusCode == 200:
var dest WorkerCluster
var dest WorkerTag
if err := json.Unmarshal(bodyBytes, &dest); err != nil {
return nil, err
}
@ -6242,15 +6242,15 @@ func ParseFetchWorkerClusterResponse(rsp *http.Response) (*FetchWorkerClusterRes
return response, nil
}

// ParseUpdateWorkerClusterResponse parses an HTTP response from a UpdateWorkerClusterWithResponse call
func ParseUpdateWorkerClusterResponse(rsp *http.Response) (*UpdateWorkerClusterResponse, error) {
// ParseUpdateWorkerTagResponse parses an HTTP response from a UpdateWorkerTagWithResponse call
func ParseUpdateWorkerTagResponse(rsp *http.Response) (*UpdateWorkerTagResponse, error) {
bodyBytes, err := ioutil.ReadAll(rsp.Body)
defer func() { _ = rsp.Body.Close() }()
if err != nil {
return nil, err
}

response := &UpdateWorkerClusterResponse{
response := &UpdateWorkerTagResponse{
Body: bodyBytes,
HTTPResponse: rsp,
}
@ -6268,22 +6268,22 @@ func ParseUpdateWorkerClusterResponse(rsp *http.Response) (*UpdateWorkerClusterR
return response, nil
}

// ParseFetchWorkerClustersResponse parses an HTTP response from a FetchWorkerClustersWithResponse call
func ParseFetchWorkerClustersResponse(rsp *http.Response) (*FetchWorkerClustersResponse, error) {
// ParseFetchWorkerTagsResponse parses an HTTP response from a FetchWorkerTagsWithResponse call
func ParseFetchWorkerTagsResponse(rsp *http.Response) (*FetchWorkerTagsResponse, error) {
bodyBytes, err := ioutil.ReadAll(rsp.Body)
defer func() { _ = rsp.Body.Close() }()
if err != nil {
return nil, err
}

response := &FetchWorkerClustersResponse{
response := &FetchWorkerTagsResponse{
Body: bodyBytes,
HTTPResponse: rsp,
}

switch {
case strings.Contains(rsp.Header.Get("Content-Type"), "json") && rsp.StatusCode == 200:
var dest WorkerClusterList
var dest WorkerTagList
if err := json.Unmarshal(bodyBytes, &dest); err != nil {
return nil, err
}
@ -6294,22 +6294,22 @@ func ParseFetchWorkerClustersResponse(rsp *http.Response) (*FetchWorkerClustersR
return response, nil
}

// ParseCreateWorkerClusterResponse parses an HTTP response from a CreateWorkerClusterWithResponse call
func ParseCreateWorkerClusterResponse(rsp *http.Response) (*CreateWorkerClusterResponse, error) {
// ParseCreateWorkerTagResponse parses an HTTP response from a CreateWorkerTagWithResponse call
func ParseCreateWorkerTagResponse(rsp *http.Response) (*CreateWorkerTagResponse, error) {
bodyBytes, err := ioutil.ReadAll(rsp.Body)
defer func() { _ = rsp.Body.Close() }()
if err != nil {
return nil, err
}

response := &CreateWorkerClusterResponse{
response := &CreateWorkerTagResponse{
Body: bodyBytes,
HTTPResponse: rsp,
}

switch {
case strings.Contains(rsp.Header.Get("Content-Type"), "json") && rsp.StatusCode == 200:
var dest WorkerCluster
var dest WorkerTag
if err := json.Unmarshal(bodyBytes, &dest); err != nil {
return nil, err
}
@ -6405,15 +6405,15 @@ func ParseFetchWorkerResponse(rsp *http.Response) (*FetchWorkerResponse, error)
return response, nil
}

// ParseSetWorkerClustersResponse parses an HTTP response from a SetWorkerClustersWithResponse call
func ParseSetWorkerClustersResponse(rsp *http.Response) (*SetWorkerClustersResponse, error) {
// ParseRequestWorkerStatusChangeResponse parses an HTTP response from a RequestWorkerStatusChangeWithResponse call
func ParseRequestWorkerStatusChangeResponse(rsp *http.Response) (*RequestWorkerStatusChangeResponse, error) {
|
||||
bodyBytes, err := ioutil.ReadAll(rsp.Body)
|
||||
defer func() { _ = rsp.Body.Close() }()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
response := &SetWorkerClustersResponse{
|
||||
response := &RequestWorkerStatusChangeResponse{
|
||||
Body: bodyBytes,
|
||||
HTTPResponse: rsp,
|
||||
}
|
||||
@ -6431,15 +6431,15 @@ func ParseSetWorkerClustersResponse(rsp *http.Response) (*SetWorkerClustersRespo
|
||||
return response, nil
|
||||
}
|
||||
|
||||
// ParseRequestWorkerStatusChangeResponse parses an HTTP response from a RequestWorkerStatusChangeWithResponse call
|
||||
func ParseRequestWorkerStatusChangeResponse(rsp *http.Response) (*RequestWorkerStatusChangeResponse, error) {
|
||||
// ParseSetWorkerTagsResponse parses an HTTP response from a SetWorkerTagsWithResponse call
|
||||
func ParseSetWorkerTagsResponse(rsp *http.Response) (*SetWorkerTagsResponse, error) {
|
||||
bodyBytes, err := ioutil.ReadAll(rsp.Body)
|
||||
defer func() { _ = rsp.Body.Close() }()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
response := &RequestWorkerStatusChangeResponse{
|
||||
response := &SetWorkerTagsResponse{
|
||||
Body: bodyBytes,
|
||||
HTTPResponse: rsp,
|
||||
}
|
||||
|
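For review context only, a minimal sketch of how a caller could exercise the renamed client methods above. It assumes the usual oapi-codegen `NewClientWithResponses` constructor from this same generated file, that `SetWorkerTagsJSONRequestBody` mirrors `WorkerTagChangeRequest` (a list of tag UUIDs in `TagIds`), and placeholder Manager URL and UUIDs; it is not part of this diff.

```go
package main

import (
	"context"
	"fmt"
	"log"

	// Import path assumed from this repository's module layout.
	"projects.blender.org/studio/flamenco/pkg/api"
)

func main() {
	ctx := context.Background()

	// Assumed: the generated constructor NewClientWithResponses exists alongside these methods.
	client, err := api.NewClientWithResponses("http://localhost:8080")
	if err != nil {
		log.Fatal(err)
	}

	workerID := "00000000-0000-0000-0000-000000000000" // placeholder UUID
	tagID := "11111111-1111-1111-1111-111111111111"    // placeholder UUID

	// Assumed: SetWorkerTagsJSONRequestBody aliases WorkerTagChangeRequest.
	body := api.SetWorkerTagsJSONRequestBody{TagIds: []string{tagID}}

	// Assign the worker to a single tag; an empty TagIds slice would clear its tag memberships.
	resp, err := client.SetWorkerTagsWithResponse(ctx, workerID, body)
	if err != nil {
		log.Fatal(err)
	}
	// StatusCode() is the accessor oapi-codegen generates on *Response types.
	fmt.Println("SetWorkerTags:", resp.StatusCode())
}
```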
134
pkg/api/openapi_server.gen.go
generated
@ -110,21 +110,21 @@ type ServerInterface interface {
// Get the Flamenco version of this Manager
// (GET /api/v3/version)
GetVersion(ctx echo.Context) error
// Remove this worker cluster. This unassigns all workers from the cluster and removes it.
// (DELETE /api/v3/worker-mgt/cluster/{cluster_id})
DeleteWorkerCluster(ctx echo.Context, clusterId string) error
// Get a single worker cluster.
// (GET /api/v3/worker-mgt/cluster/{cluster_id})
FetchWorkerCluster(ctx echo.Context, clusterId string) error
// Update an existing worker cluster.
// (PUT /api/v3/worker-mgt/cluster/{cluster_id})
UpdateWorkerCluster(ctx echo.Context, clusterId string) error
// Get list of worker clusters.
// (GET /api/v3/worker-mgt/clusters)
FetchWorkerClusters(ctx echo.Context) error
// Create a new worker cluster.
// (POST /api/v3/worker-mgt/clusters)
CreateWorkerCluster(ctx echo.Context) error
// Remove this worker tag. This unassigns all workers from the tag and removes it.
// (DELETE /api/v3/worker-mgt/tag/{tag_id})
DeleteWorkerTag(ctx echo.Context, tagId string) error
// Get a single worker tag.
// (GET /api/v3/worker-mgt/tag/{tag_id})
FetchWorkerTag(ctx echo.Context, tagId string) error
// Update an existing worker tag.
// (PUT /api/v3/worker-mgt/tag/{tag_id})
UpdateWorkerTag(ctx echo.Context, tagId string) error
// Get list of worker tags.
// (GET /api/v3/worker-mgt/tags)
FetchWorkerTags(ctx echo.Context) error
// Create a new worker tag.
// (POST /api/v3/worker-mgt/tags)
CreateWorkerTag(ctx echo.Context) error
// Get list of workers.
// (GET /api/v3/worker-mgt/workers)
FetchWorkers(ctx echo.Context) error
@ -135,12 +135,12 @@ type ServerInterface interface {
// (GET /api/v3/worker-mgt/workers/{worker_id})
FetchWorker(ctx echo.Context, workerId string) error

// (POST /api/v3/worker-mgt/workers/{worker_id}/setclusters)
SetWorkerClusters(ctx echo.Context, workerId string) error

// (POST /api/v3/worker-mgt/workers/{worker_id}/setstatus)
RequestWorkerStatusChange(ctx echo.Context, workerId string) error

// (POST /api/v3/worker-mgt/workers/{worker_id}/settags)
SetWorkerTags(ctx echo.Context, workerId string) error

// (GET /api/v3/worker-mgt/workers/{worker_id}/sleep-schedule)
FetchWorkerSleepSchedule(ctx echo.Context, workerId string) error

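To make the renamed interface methods above concrete, here is a hedged sketch of one possible handler shape for the new `settags` operation. `MyFlamenco` and the persistence step are hypothetical stand-ins; only `api.ServerInterface`, `api.WorkerTagChangeRequest`, and the echo context come from this diff.

```go
package example

import (
	"net/http"

	"github.com/labstack/echo/v4"

	"projects.blender.org/studio/flamenco/pkg/api" // assumed import path
)

// MyFlamenco is a hypothetical type standing in for the Manager's real
// implementation of api.ServerInterface.
type MyFlamenco struct{}

// SetWorkerTags sketches a handler for
// POST /api/v3/worker-mgt/workers/{worker_id}/settags.
func (f *MyFlamenco) SetWorkerTags(ctx echo.Context, workerId string) error {
	var change api.WorkerTagChangeRequest
	if err := ctx.Bind(&change); err != nil {
		return echo.NewHTTPError(http.StatusBadRequest, "invalid request body")
	}
	// In a real implementation, change.TagIds would be persisted for workerId here.
	return ctx.NoContent(http.StatusNoContent)
}
```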
@ -661,69 +661,69 @@ func (w *ServerInterfaceWrapper) GetVersion(ctx echo.Context) error {
return err
}

// DeleteWorkerCluster converts echo context to params.
func (w *ServerInterfaceWrapper) DeleteWorkerCluster(ctx echo.Context) error {
// DeleteWorkerTag converts echo context to params.
func (w *ServerInterfaceWrapper) DeleteWorkerTag(ctx echo.Context) error {
var err error
// ------------- Path parameter "cluster_id" -------------
var clusterId string
// ------------- Path parameter "tag_id" -------------
var tagId string

err = runtime.BindStyledParameterWithLocation("simple", false, "cluster_id", runtime.ParamLocationPath, ctx.Param("cluster_id"), &clusterId)
err = runtime.BindStyledParameterWithLocation("simple", false, "tag_id", runtime.ParamLocationPath, ctx.Param("tag_id"), &tagId)
if err != nil {
return echo.NewHTTPError(http.StatusBadRequest, fmt.Sprintf("Invalid format for parameter cluster_id: %s", err))
return echo.NewHTTPError(http.StatusBadRequest, fmt.Sprintf("Invalid format for parameter tag_id: %s", err))
}

// Invoke the callback with all the unmarshalled arguments
err = w.Handler.DeleteWorkerCluster(ctx, clusterId)
err = w.Handler.DeleteWorkerTag(ctx, tagId)
return err
}

// FetchWorkerCluster converts echo context to params.
func (w *ServerInterfaceWrapper) FetchWorkerCluster(ctx echo.Context) error {
// FetchWorkerTag converts echo context to params.
func (w *ServerInterfaceWrapper) FetchWorkerTag(ctx echo.Context) error {
var err error
// ------------- Path parameter "cluster_id" -------------
var clusterId string
// ------------- Path parameter "tag_id" -------------
var tagId string

err = runtime.BindStyledParameterWithLocation("simple", false, "cluster_id", runtime.ParamLocationPath, ctx.Param("cluster_id"), &clusterId)
err = runtime.BindStyledParameterWithLocation("simple", false, "tag_id", runtime.ParamLocationPath, ctx.Param("tag_id"), &tagId)
if err != nil {
return echo.NewHTTPError(http.StatusBadRequest, fmt.Sprintf("Invalid format for parameter cluster_id: %s", err))
return echo.NewHTTPError(http.StatusBadRequest, fmt.Sprintf("Invalid format for parameter tag_id: %s", err))
}

// Invoke the callback with all the unmarshalled arguments
err = w.Handler.FetchWorkerCluster(ctx, clusterId)
err = w.Handler.FetchWorkerTag(ctx, tagId)
return err
}

// UpdateWorkerCluster converts echo context to params.
func (w *ServerInterfaceWrapper) UpdateWorkerCluster(ctx echo.Context) error {
// UpdateWorkerTag converts echo context to params.
func (w *ServerInterfaceWrapper) UpdateWorkerTag(ctx echo.Context) error {
var err error
// ------------- Path parameter "cluster_id" -------------
var clusterId string
// ------------- Path parameter "tag_id" -------------
var tagId string

err = runtime.BindStyledParameterWithLocation("simple", false, "cluster_id", runtime.ParamLocationPath, ctx.Param("cluster_id"), &clusterId)
err = runtime.BindStyledParameterWithLocation("simple", false, "tag_id", runtime.ParamLocationPath, ctx.Param("tag_id"), &tagId)
if err != nil {
return echo.NewHTTPError(http.StatusBadRequest, fmt.Sprintf("Invalid format for parameter cluster_id: %s", err))
return echo.NewHTTPError(http.StatusBadRequest, fmt.Sprintf("Invalid format for parameter tag_id: %s", err))
}

// Invoke the callback with all the unmarshalled arguments
err = w.Handler.UpdateWorkerCluster(ctx, clusterId)
err = w.Handler.UpdateWorkerTag(ctx, tagId)
return err
}

// FetchWorkerClusters converts echo context to params.
func (w *ServerInterfaceWrapper) FetchWorkerClusters(ctx echo.Context) error {
// FetchWorkerTags converts echo context to params.
func (w *ServerInterfaceWrapper) FetchWorkerTags(ctx echo.Context) error {
var err error

// Invoke the callback with all the unmarshalled arguments
err = w.Handler.FetchWorkerClusters(ctx)
err = w.Handler.FetchWorkerTags(ctx)
return err
}

// CreateWorkerCluster converts echo context to params.
func (w *ServerInterfaceWrapper) CreateWorkerCluster(ctx echo.Context) error {
// CreateWorkerTag converts echo context to params.
func (w *ServerInterfaceWrapper) CreateWorkerTag(ctx echo.Context) error {
var err error

// Invoke the callback with all the unmarshalled arguments
err = w.Handler.CreateWorkerCluster(ctx)
err = w.Handler.CreateWorkerTag(ctx)
return err
}

@ -768,22 +768,6 @@ func (w *ServerInterfaceWrapper) FetchWorker(ctx echo.Context) error {
return err
}

// SetWorkerClusters converts echo context to params.
func (w *ServerInterfaceWrapper) SetWorkerClusters(ctx echo.Context) error {
var err error
// ------------- Path parameter "worker_id" -------------
var workerId string

err = runtime.BindStyledParameterWithLocation("simple", false, "worker_id", runtime.ParamLocationPath, ctx.Param("worker_id"), &workerId)
if err != nil {
return echo.NewHTTPError(http.StatusBadRequest, fmt.Sprintf("Invalid format for parameter worker_id: %s", err))
}

// Invoke the callback with all the unmarshalled arguments
err = w.Handler.SetWorkerClusters(ctx, workerId)
return err
}

// RequestWorkerStatusChange converts echo context to params.
func (w *ServerInterfaceWrapper) RequestWorkerStatusChange(ctx echo.Context) error {
var err error
@ -800,6 +784,22 @@ func (w *ServerInterfaceWrapper) RequestWorkerStatusChange(ctx echo.Context) err
return err
}

// SetWorkerTags converts echo context to params.
func (w *ServerInterfaceWrapper) SetWorkerTags(ctx echo.Context) error {
var err error
// ------------- Path parameter "worker_id" -------------
var workerId string

err = runtime.BindStyledParameterWithLocation("simple", false, "worker_id", runtime.ParamLocationPath, ctx.Param("worker_id"), &workerId)
if err != nil {
return echo.NewHTTPError(http.StatusBadRequest, fmt.Sprintf("Invalid format for parameter worker_id: %s", err))
}

// Invoke the callback with all the unmarshalled arguments
err = w.Handler.SetWorkerTags(ctx, workerId)
return err
}

// FetchWorkerSleepSchedule converts echo context to params.
func (w *ServerInterfaceWrapper) FetchWorkerSleepSchedule(ctx echo.Context) error {
var err error
@ -1010,16 +1010,16 @@ func RegisterHandlersWithBaseURL(router EchoRouter, si ServerInterface, baseURL
router.GET(baseURL+"/api/v3/tasks/:task_id/logtail", wrapper.FetchTaskLogTail)
router.POST(baseURL+"/api/v3/tasks/:task_id/setstatus", wrapper.SetTaskStatus)
router.GET(baseURL+"/api/v3/version", wrapper.GetVersion)
router.DELETE(baseURL+"/api/v3/worker-mgt/cluster/:cluster_id", wrapper.DeleteWorkerCluster)
router.GET(baseURL+"/api/v3/worker-mgt/cluster/:cluster_id", wrapper.FetchWorkerCluster)
router.PUT(baseURL+"/api/v3/worker-mgt/cluster/:cluster_id", wrapper.UpdateWorkerCluster)
router.GET(baseURL+"/api/v3/worker-mgt/clusters", wrapper.FetchWorkerClusters)
router.POST(baseURL+"/api/v3/worker-mgt/clusters", wrapper.CreateWorkerCluster)
router.DELETE(baseURL+"/api/v3/worker-mgt/tag/:tag_id", wrapper.DeleteWorkerTag)
router.GET(baseURL+"/api/v3/worker-mgt/tag/:tag_id", wrapper.FetchWorkerTag)
router.PUT(baseURL+"/api/v3/worker-mgt/tag/:tag_id", wrapper.UpdateWorkerTag)
router.GET(baseURL+"/api/v3/worker-mgt/tags", wrapper.FetchWorkerTags)
router.POST(baseURL+"/api/v3/worker-mgt/tags", wrapper.CreateWorkerTag)
router.GET(baseURL+"/api/v3/worker-mgt/workers", wrapper.FetchWorkers)
router.DELETE(baseURL+"/api/v3/worker-mgt/workers/:worker_id", wrapper.DeleteWorker)
router.GET(baseURL+"/api/v3/worker-mgt/workers/:worker_id", wrapper.FetchWorker)
router.POST(baseURL+"/api/v3/worker-mgt/workers/:worker_id/setclusters", wrapper.SetWorkerClusters)
router.POST(baseURL+"/api/v3/worker-mgt/workers/:worker_id/setstatus", wrapper.RequestWorkerStatusChange)
router.POST(baseURL+"/api/v3/worker-mgt/workers/:worker_id/settags", wrapper.SetWorkerTags)
router.GET(baseURL+"/api/v3/worker-mgt/workers/:worker_id/sleep-schedule", wrapper.FetchWorkerSleepSchedule)
router.POST(baseURL+"/api/v3/worker-mgt/workers/:worker_id/sleep-schedule", wrapper.SetWorkerSleepSchedule)
router.POST(baseURL+"/api/v3/worker/register-worker", wrapper.RegisterWorker)
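The wrapper methods and the updated route table above are mounted onto an Echo router through the generated `RegisterHandlersWithBaseURL` helper. A hedged sketch follows; the echo import path and the `si` implementation are assumptions, not part of this diff.

```go
package example

import (
	"github.com/labstack/echo/v4"

	"projects.blender.org/studio/flamenco/pkg/api" // assumed import path
)

// mountWorkerMgtRoutes registers every generated route, including the renamed
// /api/v3/worker-mgt/tag/:tag_id and /workers/:worker_id/settags endpoints.
// `si` is whatever type implements api.ServerInterface (DeleteWorkerTag,
// FetchWorkerTag, UpdateWorkerTag, SetWorkerTags, ...).
func mountWorkerMgtRoutes(e *echo.Echo, si api.ServerInterface) {
	// An empty base URL mounts the paths exactly as declared in the spec.
	api.RegisterHandlersWithBaseURL(e, si, "")
}
```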
429
pkg/api/openapi_spec.gen.go
generated
@ -18,218 +18,223 @@ import (
// Base64 encoded, gzipped, json marshaled Swagger object
var swaggerSpec = []string{
// [base64-encoded, gzipped OpenAPI spec payload (removed and regenerated versions) not reproduced here]
}

// GetSwagger returns the content of the embedded swagger specification file
86
pkg/api/openapi_types.gen.go
generated
@ -174,6 +174,16 @@ type AvailableJobSetting struct {
    // Python expression to be evaluated in order to determine the default value for this setting.
    Eval *string `json:"eval,omitempty"`

    // Meta-data for the 'eval' expression.
    EvalInfo *struct {
        // Description of what the 'eval' expression is doing. It is also used as placeholder text to show when the manual input field is hidden (because eval-on-submit has been toggled on by the user).
        Description string `json:"description"`

        // Enables the 'eval on submit' toggle button behavior for this setting. A toggle button will be shown in Blender's submission interface. When toggled on, the `eval` expression will determine the setting's value. Manually editing the setting is then no longer possible, and instead of an input field, the 'description' string is shown.
        // An example use is the to-be-rendered frame range, which by default automatically follows the scene range, but can be overridden manually when desired.
        ShowLinkButton bool `json:"showLinkButton"`
    } `json:"evalInfo,omitempty"`

    // Identifier for the setting, must be unique within the job type.
    Key string `json:"key"`
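For context, a minimal sketch of what a job-type setting could look like once it uses the new `eval`/`evalInfo` fields. The field names follow the schema above; the setting key, the Python expression, and the description text are invented for illustration.

```javascript
// Hypothetical setting definition; only eval/evalInfo/description/showLinkButton
// come from the schema above, the concrete values are made up.
const frameRangeSetting = {
  key: "frames",
  type: "string",
  // Python expression evaluated in Blender to compute the default value:
  eval: "f'{C.scene.frame_start}-{C.scene.frame_end}'",
  evalInfo: {
    // Placeholder text shown while manual input is hidden:
    description: "Scene frame range",
    // Shows the 'eval on submit' toggle button in Blender's submission UI:
    showLinkButton: true,
  },
};
```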
@ -636,8 +646,8 @@ type SubmittedJob struct {
    // If this field is ommitted, the check is bypassed.
    TypeEtag *string `json:"type_etag,omitempty"`

    // Worker Cluster that should execute this job. When a cluster ID is given, only Workers in that cluster will be scheduled to work on it. If empty or ommitted, all workers can work on this job.
    WorkerCluster *string `json:"worker_cluster,omitempty"`
    // Worker tag that should execute this job. When a tag ID is given, only Workers in that tag will be scheduled to work on it. If empty or ommitted, all workers can work on this job.
    WorkerTag *string `json:"worker_tag,omitempty"`
}

// The task as it exists in the Manager database, i.e. before variable replacement.
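A hedged sketch of a submission payload using the renamed field. Everything other than `worker_tag` follows the existing SubmittedJob schema; all values, including the UUID, are placeholders.

```javascript
// Hypothetical job submission limited to Workers in a single tag.
const submittedJob = {
  name: "example scene render",
  type: "simple-blender-render",
  priority: 50,
  submitter_platform: "linux",
  settings: { frames: "1-100" },
  // Only Workers that are members of this tag may pick up the job;
  // omit the field to make the job available to all Workers.
  worker_tag: "18447b9e-86b8-4af9-a0c9-5b133b2c8b8e",
};
```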
@ -719,9 +729,6 @@ type Worker struct {
    // Embedded struct due to allOf(#/components/schemas/WorkerSummary)
    WorkerSummary `yaml:",inline"`
    // Embedded fields due to inline allOf schema
    // Clusters of which this Worker is a member.
    Clusters *[]WorkerCluster `json:"clusters,omitempty"`

    // IP address of the Worker
    IpAddress string `json:"ip_address"`

@ -729,29 +736,13 @@
    Platform string `json:"platform"`
    SupportedTaskTypes []string `json:"supported_task_types"`

    // Tags of which this Worker is a member.
    Tags *[]WorkerTag `json:"tags,omitempty"`

    // Task assigned to a Worker.
    Task *WorkerTask `json:"task,omitempty"`
}

// Cluster of workers. A job can optionally specify which cluster it should be limited to. Workers can be part of multiple clusters simultaneously.
type WorkerCluster struct {
    Description *string `json:"description,omitempty"`

    // UUID of the cluster. Can be ommitted when creating a new cluster, in which case a random UUID will be assigned.
    Id *string `json:"id,omitempty"`
    Name string `json:"name"`
}

// Request to change which clusters this Worker is assigned to.
type WorkerClusterChangeRequest struct {
    ClusterIds []string `json:"cluster_ids"`
}

// WorkerClusterList defines model for WorkerClusterList.
type WorkerClusterList struct {
    Clusters *[]WorkerCluster `json:"clusters,omitempty"`
}

// List of workers.
type WorkerList struct {
    Workers []WorkerSummary `json:"workers"`
@ -818,6 +809,25 @@ type WorkerSummary struct {
    Version string `json:"version"`
}

// Tag of workers. A job can optionally specify which tag it should be limited to. Workers can be part of multiple tags simultaneously.
type WorkerTag struct {
    Description *string `json:"description,omitempty"`

    // UUID of the tag. Can be ommitted when creating a new tag, in which case a random UUID will be assigned.
    Id *string `json:"id,omitempty"`
    Name string `json:"name"`
}

// Request to change which tags this Worker is assigned to.
type WorkerTagChangeRequest struct {
    TagIds []string `json:"tag_ids"`
}

// WorkerTagList defines model for WorkerTagList.
type WorkerTagList struct {
    Tags *[]WorkerTag `json:"tags,omitempty"`
}

// WorkerTask defines model for WorkerTask.
type WorkerTask struct {
    // Embedded struct due to allOf(#/components/schemas/TaskSummary)
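For reference, the JSON shapes that the three renamed tag types above serialize to, sketched as plain objects with placeholder values:

```javascript
// WorkerTag: `id` may be omitted on creation; the Manager then assigns a random UUID.
const workerTag = {
  id: "2f5d18f8-093e-42bd-a27d-2af4ff69bb2e",
  name: "GPU nodes",
  description: "Workers with a CUDA-capable GPU",
};

// WorkerTagChangeRequest: sent when (re)assigning a Worker to tags.
const tagChangeRequest = { tag_ids: [workerTag.id] };

// WorkerTagList: returned when listing all tags.
const workerTagList = { tags: [workerTag] };
```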
@ -871,18 +881,18 @@ type ShamanFileStoreParams struct {
|
||||
// SetTaskStatusJSONBody defines parameters for SetTaskStatus.
|
||||
type SetTaskStatusJSONBody TaskStatusChange
|
||||
|
||||
// UpdateWorkerClusterJSONBody defines parameters for UpdateWorkerCluster.
|
||||
type UpdateWorkerClusterJSONBody WorkerCluster
|
||||
// UpdateWorkerTagJSONBody defines parameters for UpdateWorkerTag.
|
||||
type UpdateWorkerTagJSONBody WorkerTag
|
||||
|
||||
// CreateWorkerClusterJSONBody defines parameters for CreateWorkerCluster.
|
||||
type CreateWorkerClusterJSONBody WorkerCluster
|
||||
|
||||
// SetWorkerClustersJSONBody defines parameters for SetWorkerClusters.
|
||||
type SetWorkerClustersJSONBody WorkerClusterChangeRequest
|
||||
// CreateWorkerTagJSONBody defines parameters for CreateWorkerTag.
|
||||
type CreateWorkerTagJSONBody WorkerTag
|
||||
|
||||
// RequestWorkerStatusChangeJSONBody defines parameters for RequestWorkerStatusChange.
|
||||
type RequestWorkerStatusChangeJSONBody WorkerStatusChangeRequest
|
||||
|
||||
// SetWorkerTagsJSONBody defines parameters for SetWorkerTags.
|
||||
type SetWorkerTagsJSONBody WorkerTagChangeRequest
|
||||
|
||||
// SetWorkerSleepScheduleJSONBody defines parameters for SetWorkerSleepSchedule.
|
||||
type SetWorkerSleepScheduleJSONBody WorkerSleepSchedule
|
||||
|
||||
@ -934,18 +944,18 @@ type ShamanCheckoutRequirementsJSONRequestBody ShamanCheckoutRequirementsJSONBod
|
||||
// SetTaskStatusJSONRequestBody defines body for SetTaskStatus for application/json ContentType.
|
||||
type SetTaskStatusJSONRequestBody SetTaskStatusJSONBody
|
||||
|
||||
// UpdateWorkerClusterJSONRequestBody defines body for UpdateWorkerCluster for application/json ContentType.
|
||||
type UpdateWorkerClusterJSONRequestBody UpdateWorkerClusterJSONBody
|
||||
// UpdateWorkerTagJSONRequestBody defines body for UpdateWorkerTag for application/json ContentType.
|
||||
type UpdateWorkerTagJSONRequestBody UpdateWorkerTagJSONBody
|
||||
|
||||
// CreateWorkerClusterJSONRequestBody defines body for CreateWorkerCluster for application/json ContentType.
|
||||
type CreateWorkerClusterJSONRequestBody CreateWorkerClusterJSONBody
|
||||
|
||||
// SetWorkerClustersJSONRequestBody defines body for SetWorkerClusters for application/json ContentType.
|
||||
type SetWorkerClustersJSONRequestBody SetWorkerClustersJSONBody
|
||||
// CreateWorkerTagJSONRequestBody defines body for CreateWorkerTag for application/json ContentType.
|
||||
type CreateWorkerTagJSONRequestBody CreateWorkerTagJSONBody
|
||||
|
||||
// RequestWorkerStatusChangeJSONRequestBody defines body for RequestWorkerStatusChange for application/json ContentType.
|
||||
type RequestWorkerStatusChangeJSONRequestBody RequestWorkerStatusChangeJSONBody
|
||||
|
||||
// SetWorkerTagsJSONRequestBody defines body for SetWorkerTags for application/json ContentType.
|
||||
type SetWorkerTagsJSONRequestBody SetWorkerTagsJSONBody
|
||||
|
||||
// SetWorkerSleepScheduleJSONRequestBody defines body for SetWorkerSleepSchedule for application/json ContentType.
|
||||
type SetWorkerSleepScheduleJSONRequestBody SetWorkerSleepScheduleJSONBody
|
||||
|
||||
|
@ -32,12 +32,17 @@
|
||||
<dt class="field-name" title="ID">ID</dt>
|
||||
<dd><span @click="copyElementText" class="click-to-copy">{{ jobData.id }}</span></dd>
|
||||
|
||||
<template v-if="workerCluster">
|
||||
<!-- TODO: fetch cluster name and show that instead, and allow editing of the cluster. -->
|
||||
<dt class="field-name" title="Worker Cluster">Cluster</dt>
|
||||
<dd :title="workerCluster.description"><span @click="copyElementData" class="click-to-copy"
|
||||
:data-clipboard="workerCluster.id">{{
|
||||
workerCluster.name }}</span></dd>
|
||||
<template v-if="workerTag">
|
||||
<!-- TODO: fetch tag name and show that instead, and allow editing of the tag. -->
|
||||
<dt class="field-name" title="Worker Tag">Tag</dt>
|
||||
<dd :title="workerTag.description">
|
||||
<span
|
||||
@click="copyElementData"
|
||||
class="click-to-copy"
|
||||
:data-clipboard="workerTag.id"
|
||||
>{{ workerTag.name }}</span
|
||||
>
|
||||
</dd>
|
||||
</template>
|
||||
|
||||
<dt class="field-name" title="Name">Name</dt>
|
||||
@ -128,11 +133,10 @@ export default {
|
||||
this._refreshJobSettings(this.jobData);
|
||||
}
|
||||
|
||||
this.workers.refreshClusters()
|
||||
.catch((error) => {
|
||||
const errorMsg = JSON.stringify(error); // TODO: handle API errors better.
|
||||
this.notifs.add(`Error: ${errorMsg}`);
|
||||
});
|
||||
this.workers.refreshTags().catch((error) => {
|
||||
const errorMsg = JSON.stringify(error); // TODO: handle API errors better.
|
||||
this.notifs.add(`Error: ${errorMsg}`);
|
||||
});
|
||||
},
|
||||
computed: {
|
||||
hasJobData() {
|
||||
@ -156,9 +160,9 @@ export default {
|
||||
}
|
||||
return this.jobData.settings;
|
||||
},
|
||||
workerCluster() {
|
||||
if (!this.jobData.worker_cluster) return undefined;
|
||||
return this.workers.clustersByID[this.jobData.worker_cluster];
|
||||
workerTag() {
|
||||
if (!this.jobData.worker_tag) return undefined;
|
||||
return this.workers.tagsByID[this.jobData.worker_tag];
|
||||
},
|
||||
},
|
||||
watch: {
|
||||
|
@ -34,21 +34,24 @@
|
||||
</dd>
|
||||
</dl>
|
||||
|
||||
<section class="worker-clusters" v-if="workers.clusters && workers.clusters.length">
|
||||
<h3 class="sub-title">Clusters</h3>
|
||||
<section class="worker-tags" v-if="workers.tags && workers.tags.length">
|
||||
<h3 class="sub-title">Tags</h3>
|
||||
<ul>
|
||||
<li v-for="cluster in workers.clusters">
|
||||
<switch-checkbox :isChecked="thisWorkerClusters[cluster.id]" :label="cluster.name" :title="cluster.description"
|
||||
@switch-toggle="toggleWorkerCluster(cluster.id)">
|
||||
<li v-for="tag in workers.tags">
|
||||
<switch-checkbox
|
||||
:isChecked="thisWorkerTags[tag.id]"
|
||||
:label="tag.name"
|
||||
:title="tag.description"
|
||||
@switch-toggle="toggleWorkerTag(tag.id)"
|
||||
>
|
||||
</switch-checkbox>
|
||||
</li>
|
||||
</ul>
|
||||
<p class="hint" v-if="hasClustersAssigned">
|
||||
This worker will only pick up jobs assigned to one of its clusters, and clusterless jobs.
|
||||
</p>
|
||||
<p class="hint" v-else>
|
||||
This worker will only pick up clusterless jobs.
|
||||
<p class="hint" v-if="hasTagsAssigned">
|
||||
This worker will only pick up jobs assigned to one of its tags, and
|
||||
tagless jobs.
|
||||
</p>
|
||||
<p class="hint" v-else>This worker will only pick up tagless jobs.</p>
|
||||
</section>
|
||||
|
||||
<section class="sleep-schedule" :class="{ 'is-schedule-active': workerSleepSchedule.is_active }">
|
||||
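The hint text above describes the matching rule between Workers and jobs. A sketch of that rule in plain JavaScript, purely illustrative; the real check lives in the Manager's task scheduler, not in this component:

```javascript
// Illustrative only: mirrors the behaviour described in the hint text.
function workerMayPickUpJob(worker, job) {
  if (!job.worker_tag) return true; // Tagless jobs are available to every Worker.
  if (!worker.tags || worker.tags.length === 0) return false; // Tagless Workers only run tagless jobs.
  return worker.tags.some((tag) => tag.id === job.worker_tag);
}
```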
@ -140,7 +143,7 @@ import { useNotifs } from '@/stores/notifications'
|
||||
import { useWorkers } from '@/stores/workers'
|
||||
|
||||
import * as datetime from "@/datetime";
|
||||
import { WorkerMgtApi, WorkerSleepSchedule, WorkerClusterChangeRequest } from '@/manager-api';
|
||||
import { WorkerMgtApi, WorkerSleepSchedule, WorkerTagChangeRequest } from "@/manager-api";
|
||||
import { getAPIClient } from "@/api-client";
|
||||
import { workerStatus } from "../../statusindicator";
|
||||
import LinkWorkerTask from '@/components/LinkWorkerTask.vue';
|
||||
@ -165,18 +168,17 @@ export default {
|
||||
notifs: useNotifs(),
|
||||
copyElementText: copyElementText,
|
||||
workers: useWorkers(),
|
||||
thisWorkerClusters: {}, // Mapping from UUID to 'isAssigned' boolean.
|
||||
thisWorkerTags: {}, // Mapping from UUID to 'isAssigned' boolean.
|
||||
};
|
||||
},
|
||||
mounted() {
|
||||
// Allow testing from the JS console:
|
||||
window.workerDetailsVue = this;
|
||||
|
||||
this.workers.refreshClusters()
|
||||
.catch((error) => {
|
||||
const errorMsg = JSON.stringify(error); // TODO: handle API errors better.
|
||||
this.notifs.add(`Error: ${errorMsg}`);
|
||||
});
|
||||
this.workers.refreshTags().catch((error) => {
|
||||
const errorMsg = JSON.stringify(error); // TODO: handle API errors better.
|
||||
this.notifs.add(`Error: ${errorMsg}`);
|
||||
});
|
||||
},
|
||||
watch: {
|
||||
workerData(newData, oldData) {
|
||||
@ -191,7 +193,7 @@ export default {
|
||||
this.fetchWorkerSleepSchedule();
|
||||
}
|
||||
|
||||
this.updateThisWorkerClusters(newData);
|
||||
this.updateThisWorkerTags(newData);
|
||||
},
|
||||
},
|
||||
computed: {
|
||||
@ -210,10 +212,10 @@ export default {
|
||||
workerSleepScheduleStatusLabel() {
|
||||
return this.workerSleepSchedule.is_active ? 'Enabled' : 'Disabled';
|
||||
},
|
||||
hasClustersAssigned() {
|
||||
const clusterIDs = this.getAssignedClusterIDs();
|
||||
return clusterIDs && clusterIDs.length > 0;
|
||||
}
|
||||
hasTagsAssigned() {
|
||||
const tagIDs = this.getAssignedTagIDs();
|
||||
return tagIDs && tagIDs.length > 0;
|
||||
},
|
||||
},
|
||||
methods: {
|
||||
fetchWorkerSleepSchedule() {
|
||||
@ -262,46 +264,48 @@ export default {
|
||||
}
|
||||
this.api.deleteWorker(this.workerData.id);
|
||||
},
|
||||
updateThisWorkerClusters(newWorkerData) {
|
||||
if (!newWorkerData || !newWorkerData.clusters) {
|
||||
this.thisWorkerClusters = {};
|
||||
updateThisWorkerTags(newWorkerData) {
|
||||
if (!newWorkerData || !newWorkerData.tags) {
|
||||
this.thisWorkerTags = {};
|
||||
return;
|
||||
}
|
||||
|
||||
const assignedClusters = newWorkerData.clusters.reduce(
|
||||
(accu, cluster) => { accu[cluster.id] = true; return accu; },
|
||||
{});
|
||||
this.thisWorkerClusters = assignedClusters;
|
||||
const assignedTags = newWorkerData.tags.reduce((accu, tag) => {
|
||||
accu[tag.id] = true;
|
||||
return accu;
|
||||
}, {});
|
||||
this.thisWorkerTags = assignedTags;
|
||||
},
|
||||
toggleWorkerCluster(clusterID) {
|
||||
console.log("Toggled", clusterID);
|
||||
this.thisWorkerClusters[clusterID] = !this.thisWorkerClusters[clusterID];
|
||||
console.log("New assignment:", plain(this.thisWorkerClusters))
|
||||
toggleWorkerTag(tagID) {
|
||||
console.log("Toggled", tagID);
|
||||
this.thisWorkerTags[tagID] = !this.thisWorkerTags[tagID];
|
||||
console.log("New assignment:", plain(this.thisWorkerTags));
|
||||
|
||||
// Construct cluster change request.
|
||||
const clusterIDs = this.getAssignedClusterIDs();
|
||||
const changeRequest = new WorkerClusterChangeRequest(clusterIDs);
|
||||
// Construct tag change request.
|
||||
const tagIDs = this.getAssignedTagIDs();
|
||||
const changeRequest = new WorkerTagChangeRequest(tagIDs);
|
||||
|
||||
// Send to the Manager.
|
||||
this.api.setWorkerClusters(this.workerData.id, changeRequest)
|
||||
this.api
|
||||
.setWorkerTags(this.workerData.id, changeRequest)
|
||||
.then(() => {
|
||||
this.notifs.add('Cluster assignment updated');
|
||||
this.notifs.add("Tag assignment updated");
|
||||
})
|
||||
.catch((error) => {
|
||||
const errorMsg = JSON.stringify(error); // TODO: handle API errors better.
|
||||
this.notifs.add(`Error: ${errorMsg}`);
|
||||
});
|
||||
},
|
||||
getAssignedClusterIDs() {
|
||||
const clusterIDs = [];
|
||||
for (let clusterID in this.thisWorkerClusters) {
|
||||
getAssignedTagIDs() {
|
||||
const tagIDs = [];
|
||||
for (let tagID in this.thisWorkerTags) {
|
||||
// Values can exist and be set to 'false'.
|
||||
const isAssigned = this.thisWorkerClusters[clusterID];
|
||||
if (isAssigned) clusterIDs.push(clusterID);
|
||||
const isAssigned = this.thisWorkerTags[tagID];
|
||||
if (isAssigned) tagIDs.push(tagID);
|
||||
}
|
||||
return clusterIDs;
|
||||
}
|
||||
}
|
||||
return tagIDs;
|
||||
},
|
||||
},
|
||||
};
|
||||
</script>
|
||||
|
||||
@ -377,11 +381,11 @@ export default {
|
||||
white-space: nowrap;
|
||||
}
|
||||
|
||||
.worker-clusters ul {
|
||||
.worker-tags ul {
|
||||
list-style: none;
|
||||
}
|
||||
|
||||
.worker-clusters ul li {
|
||||
.worker-tags ul li {
|
||||
margin-bottom: 0.25rem;
|
||||
}
|
||||
</style>
|
||||
|
49
web/app/src/manager-api/index.js
generated
@ -15,6 +15,7 @@
|
||||
import ApiClient from './ApiClient';
|
||||
import AssignedTask from './model/AssignedTask';
|
||||
import AvailableJobSetting from './model/AvailableJobSetting';
|
||||
import AvailableJobSettingEvalInfo from './model/AvailableJobSettingEvalInfo';
|
||||
import AvailableJobSettingSubtype from './model/AvailableJobSettingSubtype';
|
||||
import AvailableJobSettingType from './model/AvailableJobSettingType';
|
||||
import AvailableJobSettingVisibility from './model/AvailableJobSettingVisibility';
|
||||
@ -73,9 +74,6 @@ import TaskUpdate from './model/TaskUpdate';
|
||||
import TaskWorker from './model/TaskWorker';
|
||||
import Worker from './model/Worker';
|
||||
import WorkerAllOf from './model/WorkerAllOf';
|
||||
import WorkerCluster from './model/WorkerCluster';
|
||||
import WorkerClusterChangeRequest from './model/WorkerClusterChangeRequest';
|
||||
import WorkerClusterList from './model/WorkerClusterList';
|
||||
import WorkerList from './model/WorkerList';
|
||||
import WorkerRegistration from './model/WorkerRegistration';
|
||||
import WorkerSignOn from './model/WorkerSignOn';
|
||||
@ -85,6 +83,9 @@ import WorkerStateChanged from './model/WorkerStateChanged';
|
||||
import WorkerStatus from './model/WorkerStatus';
|
||||
import WorkerStatusChangeRequest from './model/WorkerStatusChangeRequest';
|
||||
import WorkerSummary from './model/WorkerSummary';
|
||||
import WorkerTag from './model/WorkerTag';
|
||||
import WorkerTagChangeRequest from './model/WorkerTagChangeRequest';
|
||||
import WorkerTagList from './model/WorkerTagList';
|
||||
import WorkerTask from './model/WorkerTask';
|
||||
import WorkerTaskAllOf from './model/WorkerTaskAllOf';
|
||||
import JobsApi from './manager/JobsApi';
|
||||
@ -144,6 +145,12 @@ export {
|
||||
*/
|
||||
AvailableJobSetting,
|
||||
|
||||
/**
|
||||
* The AvailableJobSettingEvalInfo model constructor.
|
||||
* @property {module:model/AvailableJobSettingEvalInfo}
|
||||
*/
|
||||
AvailableJobSettingEvalInfo,
|
||||
|
||||
/**
|
||||
* The AvailableJobSettingSubtype model constructor.
|
||||
* @property {module:model/AvailableJobSettingSubtype}
|
||||
@ -492,24 +499,6 @@ export {
|
||||
*/
|
||||
WorkerAllOf,
|
||||
|
||||
/**
|
||||
* The WorkerCluster model constructor.
|
||||
* @property {module:model/WorkerCluster}
|
||||
*/
|
||||
WorkerCluster,
|
||||
|
||||
/**
|
||||
* The WorkerClusterChangeRequest model constructor.
|
||||
* @property {module:model/WorkerClusterChangeRequest}
|
||||
*/
|
||||
WorkerClusterChangeRequest,
|
||||
|
||||
/**
|
||||
* The WorkerClusterList model constructor.
|
||||
* @property {module:model/WorkerClusterList}
|
||||
*/
|
||||
WorkerClusterList,
|
||||
|
||||
/**
|
||||
* The WorkerList model constructor.
|
||||
* @property {module:model/WorkerList}
|
||||
@ -564,6 +553,24 @@ export {
|
||||
*/
|
||||
WorkerSummary,
|
||||
|
||||
/**
|
||||
* The WorkerTag model constructor.
|
||||
* @property {module:model/WorkerTag}
|
||||
*/
|
||||
WorkerTag,
|
||||
|
||||
/**
|
||||
* The WorkerTagChangeRequest model constructor.
|
||||
* @property {module:model/WorkerTagChangeRequest}
|
||||
*/
|
||||
WorkerTagChangeRequest,
|
||||
|
||||
/**
|
||||
* The WorkerTagList model constructor.
|
||||
* @property {module:model/WorkerTagList}
|
||||
*/
|
||||
WorkerTagList,
|
||||
|
||||
/**
|
||||
* The WorkerTask model constructor.
|
||||
* @property {module:model/WorkerTask}
|
||||
|
366
web/app/src/manager-api/manager/WorkerMgtApi.js
generated
@ -15,12 +15,12 @@
|
||||
import ApiClient from "../ApiClient";
|
||||
import Error from '../model/Error';
|
||||
import Worker from '../model/Worker';
|
||||
import WorkerCluster from '../model/WorkerCluster';
|
||||
import WorkerClusterChangeRequest from '../model/WorkerClusterChangeRequest';
|
||||
import WorkerClusterList from '../model/WorkerClusterList';
|
||||
import WorkerList from '../model/WorkerList';
|
||||
import WorkerSleepSchedule from '../model/WorkerSleepSchedule';
|
||||
import WorkerStatusChangeRequest from '../model/WorkerStatusChangeRequest';
|
||||
import WorkerTag from '../model/WorkerTag';
|
||||
import WorkerTagChangeRequest from '../model/WorkerTagChangeRequest';
|
||||
import WorkerTagList from '../model/WorkerTagList';
|
||||
|
||||
/**
|
||||
* WorkerMgt service.
|
||||
@ -43,15 +43,15 @@ export default class WorkerMgtApi {
|
||||
|
||||
|
||||
/**
|
||||
* Create a new worker cluster.
|
||||
* @param {module:model/WorkerCluster} workerCluster The worker cluster.
|
||||
* @return {Promise} a {@link https://www.promisejs.org/|Promise}, with an object containing data of type {@link module:model/WorkerCluster} and HTTP response
|
||||
* Create a new worker tag.
|
||||
* @param {module:model/WorkerTag} workerTag The worker tag.
|
||||
* @return {Promise} a {@link https://www.promisejs.org/|Promise}, with an object containing data of type {@link module:model/WorkerTag} and HTTP response
|
||||
*/
|
||||
createWorkerClusterWithHttpInfo(workerCluster) {
|
||||
let postBody = workerCluster;
|
||||
// verify the required parameter 'workerCluster' is set
|
||||
if (workerCluster === undefined || workerCluster === null) {
|
||||
throw new Error("Missing the required parameter 'workerCluster' when calling createWorkerCluster");
|
||||
createWorkerTagWithHttpInfo(workerTag) {
|
||||
let postBody = workerTag;
|
||||
// verify the required parameter 'workerTag' is set
|
||||
if (workerTag === undefined || workerTag === null) {
|
||||
throw new Error("Missing the required parameter 'workerTag' when calling createWorkerTag");
|
||||
}
|
||||
|
||||
let pathParams = {
|
||||
@ -66,21 +66,21 @@ export default class WorkerMgtApi {
|
||||
let authNames = [];
|
||||
let contentTypes = ['application/json'];
|
||||
let accepts = ['application/json'];
|
||||
let returnType = WorkerCluster;
|
||||
let returnType = WorkerTag;
|
||||
return this.apiClient.callApi(
|
||||
'/api/v3/worker-mgt/clusters', 'POST',
|
||||
'/api/v3/worker-mgt/tags', 'POST',
|
||||
pathParams, queryParams, headerParams, formParams, postBody,
|
||||
authNames, contentTypes, accepts, returnType, null
|
||||
);
|
||||
}
|
||||
|
||||
/**
|
||||
* Create a new worker cluster.
|
||||
* @param {module:model/WorkerCluster} workerCluster The worker cluster.
|
||||
* @return {Promise} a {@link https://www.promisejs.org/|Promise}, with data of type {@link module:model/WorkerCluster}
|
||||
* Create a new worker tag.
|
||||
* @param {module:model/WorkerTag} workerTag The worker tag.
|
||||
* @return {Promise} a {@link https://www.promisejs.org/|Promise}, with data of type {@link module:model/WorkerTag}
|
||||
*/
|
||||
createWorkerCluster(workerCluster) {
|
||||
return this.createWorkerClusterWithHttpInfo(workerCluster)
|
||||
createWorkerTag(workerTag) {
|
||||
return this.createWorkerTagWithHttpInfo(workerTag)
|
||||
.then(function(response_and_data) {
|
||||
return response_and_data.data;
|
||||
});
|
||||
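A minimal usage sketch for the regenerated `createWorkerTag()` above. The imports match those used elsewhere in the web app; the tag name and description are made up.

```javascript
import { getAPIClient } from "@/api-client";
import { WorkerMgtApi, WorkerTag } from "@/manager-api";

const api = new WorkerMgtApi(getAPIClient());

// Create a tag; the Manager assigns the UUID because none is given.
const tag = new WorkerTag("GPU nodes");
tag.description = "Workers with a CUDA-capable GPU";
api
  .createWorkerTag(tag)
  .then((createdTag) => console.log("created tag:", createdTag.id))
  .catch((error) => console.log("error creating tag:", error));
```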
@ -134,19 +134,19 @@ export default class WorkerMgtApi {
|
||||
|
||||
|
||||
/**
|
||||
* Remove this worker cluster. This unassigns all workers from the cluster and removes it.
|
||||
* @param {String} clusterId
|
||||
* Remove this worker tag. This unassigns all workers from the tag and removes it.
|
||||
* @param {String} tagId
|
||||
* @return {Promise} a {@link https://www.promisejs.org/|Promise}, with an object containing HTTP response
|
||||
*/
|
||||
deleteWorkerClusterWithHttpInfo(clusterId) {
|
||||
deleteWorkerTagWithHttpInfo(tagId) {
|
||||
let postBody = null;
|
||||
// verify the required parameter 'clusterId' is set
|
||||
if (clusterId === undefined || clusterId === null) {
|
||||
throw new Error("Missing the required parameter 'clusterId' when calling deleteWorkerCluster");
|
||||
// verify the required parameter 'tagId' is set
|
||||
if (tagId === undefined || tagId === null) {
|
||||
throw new Error("Missing the required parameter 'tagId' when calling deleteWorkerTag");
|
||||
}
|
||||
|
||||
let pathParams = {
|
||||
'cluster_id': clusterId
|
||||
'tag_id': tagId
|
||||
};
|
||||
let queryParams = {
|
||||
};
|
||||
@ -160,19 +160,19 @@ export default class WorkerMgtApi {
|
||||
let accepts = ['application/json'];
|
||||
let returnType = null;
|
||||
return this.apiClient.callApi(
|
||||
'/api/v3/worker-mgt/cluster/{cluster_id}', 'DELETE',
|
||||
'/api/v3/worker-mgt/tag/{tag_id}', 'DELETE',
|
||||
pathParams, queryParams, headerParams, formParams, postBody,
|
||||
authNames, contentTypes, accepts, returnType, null
|
||||
);
|
||||
}
|
||||
|
||||
/**
|
||||
* Remove this worker cluster. This unassigns all workers from the cluster and removes it.
|
||||
* @param {String} clusterId
|
||||
* Remove this worker tag. This unassigns all workers from the tag and removes it.
|
||||
* @param {String} tagId
|
||||
* @return {Promise} a {@link https://www.promisejs.org/|Promise}
|
||||
*/
|
||||
deleteWorkerCluster(clusterId) {
|
||||
return this.deleteWorkerClusterWithHttpInfo(clusterId)
|
||||
deleteWorkerTag(tagId) {
|
||||
return this.deleteWorkerTagWithHttpInfo(tagId)
|
||||
.then(function(response_and_data) {
|
||||
return response_and_data.data;
|
||||
});
|
||||
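`deleteWorkerTag()` is the call behind the delete button this PR fixes. A hedged sketch of a handler wired to it; `api` and `notifs` are assumed to be set up as in the components above, and the notification wording is illustrative.

```javascript
// Assumes `api` is a WorkerMgtApi instance and `notifs` comes from useNotifs().
function deleteTag(tagId) {
  api
    .deleteWorkerTag(tagId)
    .then(() => notifs.add("Tag deleted"))
    .catch((error) => {
      const errorMsg = JSON.stringify(error); // TODO: handle API errors better.
      notifs.add(`Error: ${errorMsg}`);
    });
}
```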
@ -225,91 +225,6 @@ export default class WorkerMgtApi {
|
||||
}
|
||||
|
||||
|
||||
/**
|
||||
* Get a single worker cluster.
|
||||
* @param {String} clusterId
|
||||
* @return {Promise} a {@link https://www.promisejs.org/|Promise}, with an object containing data of type {@link module:model/WorkerCluster} and HTTP response
|
||||
*/
|
||||
fetchWorkerClusterWithHttpInfo(clusterId) {
|
||||
let postBody = null;
|
||||
// verify the required parameter 'clusterId' is set
|
||||
if (clusterId === undefined || clusterId === null) {
|
||||
throw new Error("Missing the required parameter 'clusterId' when calling fetchWorkerCluster");
|
||||
}
|
||||
|
||||
let pathParams = {
|
||||
'cluster_id': clusterId
|
||||
};
|
||||
let queryParams = {
|
||||
};
|
||||
let headerParams = {
|
||||
};
|
||||
let formParams = {
|
||||
};
|
||||
|
||||
let authNames = [];
|
||||
let contentTypes = [];
|
||||
let accepts = ['application/json'];
|
||||
let returnType = WorkerCluster;
|
||||
return this.apiClient.callApi(
|
||||
'/api/v3/worker-mgt/cluster/{cluster_id}', 'GET',
|
||||
pathParams, queryParams, headerParams, formParams, postBody,
|
||||
authNames, contentTypes, accepts, returnType, null
|
||||
);
|
||||
}
|
||||
|
||||
/**
|
||||
* Get a single worker cluster.
|
||||
* @param {String} clusterId
|
||||
* @return {Promise} a {@link https://www.promisejs.org/|Promise}, with data of type {@link module:model/WorkerCluster}
|
||||
*/
|
||||
fetchWorkerCluster(clusterId) {
|
||||
return this.fetchWorkerClusterWithHttpInfo(clusterId)
|
||||
.then(function(response_and_data) {
|
||||
return response_and_data.data;
|
||||
});
|
||||
}
|
||||
|
||||
|
||||
/**
|
||||
* Get list of worker clusters.
|
||||
* @return {Promise} a {@link https://www.promisejs.org/|Promise}, with an object containing data of type {@link module:model/WorkerClusterList} and HTTP response
|
||||
*/
|
||||
fetchWorkerClustersWithHttpInfo() {
|
||||
let postBody = null;
|
||||
|
||||
let pathParams = {
|
||||
};
|
||||
let queryParams = {
|
||||
};
|
||||
let headerParams = {
|
||||
};
|
||||
let formParams = {
|
||||
};
|
||||
|
||||
let authNames = [];
|
||||
let contentTypes = [];
|
||||
let accepts = ['application/json'];
|
||||
let returnType = WorkerClusterList;
|
||||
return this.apiClient.callApi(
|
||||
'/api/v3/worker-mgt/clusters', 'GET',
|
||||
pathParams, queryParams, headerParams, formParams, postBody,
|
||||
authNames, contentTypes, accepts, returnType, null
|
||||
);
|
||||
}
|
||||
|
||||
/**
|
||||
* Get list of worker clusters.
|
||||
* @return {Promise} a {@link https://www.promisejs.org/|Promise}, with data of type {@link module:model/WorkerClusterList}
|
||||
*/
|
||||
fetchWorkerClusters() {
|
||||
return this.fetchWorkerClustersWithHttpInfo()
|
||||
.then(function(response_and_data) {
|
||||
return response_and_data.data;
|
||||
});
|
||||
}
|
||||
|
||||
|
||||
/**
|
||||
* @param {String} workerId
|
||||
* @return {Promise} a {@link https://www.promisejs.org/|Promise}, with an object containing data of type {@link module:model/WorkerSleepSchedule} and HTTP response
|
||||
@ -354,6 +269,91 @@ export default class WorkerMgtApi {
|
||||
}
|
||||
|
||||
|
||||
/**
|
||||
* Get a single worker tag.
|
||||
* @param {String} tagId
|
||||
* @return {Promise} a {@link https://www.promisejs.org/|Promise}, with an object containing data of type {@link module:model/WorkerTag} and HTTP response
|
||||
*/
|
||||
fetchWorkerTagWithHttpInfo(tagId) {
|
||||
let postBody = null;
|
||||
// verify the required parameter 'tagId' is set
|
||||
if (tagId === undefined || tagId === null) {
|
||||
throw new Error("Missing the required parameter 'tagId' when calling fetchWorkerTag");
|
||||
}
|
||||
|
||||
let pathParams = {
|
||||
'tag_id': tagId
|
||||
};
|
||||
let queryParams = {
|
||||
};
|
||||
let headerParams = {
|
||||
};
|
||||
let formParams = {
|
||||
};
|
||||
|
||||
let authNames = [];
|
||||
let contentTypes = [];
|
||||
let accepts = ['application/json'];
|
||||
let returnType = WorkerTag;
|
||||
return this.apiClient.callApi(
|
||||
'/api/v3/worker-mgt/tag/{tag_id}', 'GET',
|
||||
pathParams, queryParams, headerParams, formParams, postBody,
|
||||
authNames, contentTypes, accepts, returnType, null
|
||||
);
|
||||
}
|
||||
|
||||
/**
|
||||
* Get a single worker tag.
|
||||
* @param {String} tagId
|
||||
* @return {Promise} a {@link https://www.promisejs.org/|Promise}, with data of type {@link module:model/WorkerTag}
|
||||
*/
|
||||
fetchWorkerTag(tagId) {
|
||||
return this.fetchWorkerTagWithHttpInfo(tagId)
|
||||
.then(function(response_and_data) {
|
||||
return response_and_data.data;
|
||||
});
|
||||
}
|
||||
|
||||
|
||||
/**
|
||||
* Get list of worker tags.
|
||||
* @return {Promise} a {@link https://www.promisejs.org/|Promise}, with an object containing data of type {@link module:model/WorkerTagList} and HTTP response
|
||||
*/
|
||||
fetchWorkerTagsWithHttpInfo() {
|
||||
let postBody = null;
|
||||
|
||||
let pathParams = {
|
||||
};
|
||||
let queryParams = {
|
||||
};
|
||||
let headerParams = {
|
||||
};
|
||||
let formParams = {
|
||||
};
|
||||
|
||||
let authNames = [];
|
||||
let contentTypes = [];
|
||||
let accepts = ['application/json'];
|
||||
let returnType = WorkerTagList;
|
||||
return this.apiClient.callApi(
|
||||
'/api/v3/worker-mgt/tags', 'GET',
|
||||
pathParams, queryParams, headerParams, formParams, postBody,
|
||||
authNames, contentTypes, accepts, returnType, null
|
||||
);
|
||||
}
|
||||
|
||||
/**
|
||||
* Get list of worker tags.
|
||||
* @return {Promise} a {@link https://www.promisejs.org/|Promise}, with data of type {@link module:model/WorkerTagList}
|
||||
*/
|
||||
fetchWorkerTags() {
|
||||
return this.fetchWorkerTagsWithHttpInfo()
|
||||
.then(function(response_and_data) {
|
||||
return response_and_data.data;
|
||||
});
|
||||
}
|
||||
|
||||
|
||||
/**
|
||||
* Get list of workers.
|
||||
* @return {Promise} a {@link https://www.promisejs.org/|Promise}, with an object containing data of type {@link module:model/WorkerList} and HTTP response
|
||||
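A usage sketch for the two tag-fetching methods added above; the UUID is a placeholder.

```javascript
const api = new WorkerMgtApi(getAPIClient());

// List every tag known to the Manager.
api.fetchWorkerTags().then((tagList) => {
  for (const tag of tagList.tags || []) {
    console.log(tag.id, tag.name, tag.description);
  }
});

// Fetch a single tag when only its UUID is known.
api.fetchWorkerTag("2f5d18f8-093e-42bd-a27d-2af4ff69bb2e").then((tag) => {
  console.log("fetched:", tag.name);
});
```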
@ -443,56 +443,6 @@ export default class WorkerMgtApi {
|
||||
}
|
||||
|
||||
|
||||
/**
|
||||
* @param {String} workerId
|
||||
* @param {module:model/WorkerClusterChangeRequest} workerClusterChangeRequest The list of cluster IDs this worker should be a member of.
|
||||
* @return {Promise} a {@link https://www.promisejs.org/|Promise}, with an object containing HTTP response
|
||||
*/
|
||||
setWorkerClustersWithHttpInfo(workerId, workerClusterChangeRequest) {
|
||||
let postBody = workerClusterChangeRequest;
|
||||
// verify the required parameter 'workerId' is set
|
||||
if (workerId === undefined || workerId === null) {
|
||||
throw new Error("Missing the required parameter 'workerId' when calling setWorkerClusters");
|
||||
}
|
||||
// verify the required parameter 'workerClusterChangeRequest' is set
|
||||
if (workerClusterChangeRequest === undefined || workerClusterChangeRequest === null) {
|
||||
throw new Error("Missing the required parameter 'workerClusterChangeRequest' when calling setWorkerClusters");
|
||||
}
|
||||
|
||||
let pathParams = {
|
||||
'worker_id': workerId
|
||||
};
|
||||
let queryParams = {
|
||||
};
|
||||
let headerParams = {
|
||||
};
|
||||
let formParams = {
|
||||
};
|
||||
|
||||
let authNames = [];
|
||||
let contentTypes = ['application/json'];
|
||||
let accepts = ['application/json'];
|
||||
let returnType = null;
|
||||
return this.apiClient.callApi(
|
||||
'/api/v3/worker-mgt/workers/{worker_id}/setclusters', 'POST',
|
||||
pathParams, queryParams, headerParams, formParams, postBody,
|
||||
authNames, contentTypes, accepts, returnType, null
|
||||
);
|
||||
}
|
||||
|
||||
/**
|
||||
* @param {String} workerId
|
||||
* @param {module:model/WorkerClusterChangeRequest} workerClusterChangeRequest The list of cluster IDs this worker should be a member of.
|
||||
* @return {Promise} a {@link https://www.promisejs.org/|Promise}
|
||||
*/
|
||||
setWorkerClusters(workerId, workerClusterChangeRequest) {
|
||||
return this.setWorkerClustersWithHttpInfo(workerId, workerClusterChangeRequest)
|
||||
.then(function(response_and_data) {
|
||||
return response_and_data.data;
|
||||
});
|
||||
}
|
||||
|
||||
|
||||
/**
|
||||
* @param {String} workerId
|
||||
* @param {module:model/WorkerSleepSchedule} workerSleepSchedule The new sleep schedule.
|
||||
@ -544,24 +494,23 @@ export default class WorkerMgtApi {
|
||||
|
||||
|
||||
/**
|
||||
* Update an existing worker cluster.
|
||||
* @param {String} clusterId
|
||||
* @param {module:model/WorkerCluster} workerCluster The updated worker cluster.
|
||||
* @param {String} workerId
|
||||
* @param {module:model/WorkerTagChangeRequest} workerTagChangeRequest The list of worker tag IDs this worker should be a member of.
|
||||
* @return {Promise} a {@link https://www.promisejs.org/|Promise}, with an object containing HTTP response
|
||||
*/
|
||||
updateWorkerClusterWithHttpInfo(clusterId, workerCluster) {
|
||||
let postBody = workerCluster;
|
||||
// verify the required parameter 'clusterId' is set
|
||||
if (clusterId === undefined || clusterId === null) {
|
||||
throw new Error("Missing the required parameter 'clusterId' when calling updateWorkerCluster");
|
||||
setWorkerTagsWithHttpInfo(workerId, workerTagChangeRequest) {
|
||||
let postBody = workerTagChangeRequest;
|
||||
// verify the required parameter 'workerId' is set
|
||||
if (workerId === undefined || workerId === null) {
|
||||
throw new Error("Missing the required parameter 'workerId' when calling setWorkerTags");
|
||||
}
|
||||
// verify the required parameter 'workerCluster' is set
|
||||
if (workerCluster === undefined || workerCluster === null) {
|
||||
throw new Error("Missing the required parameter 'workerCluster' when calling updateWorkerCluster");
|
||||
// verify the required parameter 'workerTagChangeRequest' is set
|
||||
if (workerTagChangeRequest === undefined || workerTagChangeRequest === null) {
|
||||
throw new Error("Missing the required parameter 'workerTagChangeRequest' when calling setWorkerTags");
|
||||
}
|
||||
|
||||
let pathParams = {
|
||||
'cluster_id': clusterId
|
||||
'worker_id': workerId
|
||||
};
|
||||
let queryParams = {
|
||||
};
|
||||
@ -575,20 +524,71 @@ export default class WorkerMgtApi {
|
||||
let accepts = ['application/json'];
|
||||
let returnType = null;
|
||||
return this.apiClient.callApi(
|
||||
'/api/v3/worker-mgt/cluster/{cluster_id}', 'PUT',
|
||||
'/api/v3/worker-mgt/workers/{worker_id}/settags', 'POST',
|
||||
pathParams, queryParams, headerParams, formParams, postBody,
|
||||
authNames, contentTypes, accepts, returnType, null
|
||||
);
|
||||
}
|
||||
|
||||
/**
|
||||
* Update an existing worker cluster.
|
||||
* @param {String} clusterId
|
||||
* @param {module:model/WorkerCluster} workerCluster The updated worker cluster.
|
||||
* @param {String} workerId
|
||||
* @param {module:model/WorkerTagChangeRequest} workerTagChangeRequest The list of worker tag IDs this worker should be a member of.
|
||||
* @return {Promise} a {@link https://www.promisejs.org/|Promise}
|
||||
*/
|
||||
updateWorkerCluster(clusterId, workerCluster) {
|
||||
return this.updateWorkerClusterWithHttpInfo(clusterId, workerCluster)
|
||||
setWorkerTags(workerId, workerTagChangeRequest) {
|
||||
return this.setWorkerTagsWithHttpInfo(workerId, workerTagChangeRequest)
|
||||
.then(function(response_and_data) {
|
||||
return response_and_data.data;
|
||||
});
|
||||
}
|
||||
|
||||
|
||||
/**
|
||||
* Update an existing worker tag.
|
||||
* @param {String} tagId
|
||||
* @param {module:model/WorkerTag} workerTag The updated worker tag.
|
||||
* @return {Promise} a {@link https://www.promisejs.org/|Promise}, with an object containing HTTP response
|
||||
*/
|
||||
updateWorkerTagWithHttpInfo(tagId, workerTag) {
|
||||
let postBody = workerTag;
|
||||
// verify the required parameter 'tagId' is set
|
||||
if (tagId === undefined || tagId === null) {
|
||||
throw new Error("Missing the required parameter 'tagId' when calling updateWorkerTag");
|
||||
}
|
||||
// verify the required parameter 'workerTag' is set
|
||||
if (workerTag === undefined || workerTag === null) {
|
||||
throw new Error("Missing the required parameter 'workerTag' when calling updateWorkerTag");
|
||||
}
|
||||
|
||||
let pathParams = {
|
||||
'tag_id': tagId
|
||||
};
|
||||
let queryParams = {
|
||||
};
|
||||
let headerParams = {
|
||||
};
|
||||
let formParams = {
|
||||
};
|
||||
|
||||
let authNames = [];
|
||||
let contentTypes = ['application/json'];
|
||||
let accepts = ['application/json'];
|
||||
let returnType = null;
|
||||
return this.apiClient.callApi(
|
||||
'/api/v3/worker-mgt/tag/{tag_id}', 'PUT',
|
||||
pathParams, queryParams, headerParams, formParams, postBody,
|
||||
authNames, contentTypes, accepts, returnType, null
|
||||
);
|
||||
}
|
||||
|
||||
/**
|
||||
* Update an existing worker tag.
|
||||
* @param {String} tagId
|
||||
* @param {module:model/WorkerTag} workerTag The updated worker tag.
|
||||
* @return {Promise} a {@link https://www.promisejs.org/|Promise}
|
||||
*/
|
||||
updateWorkerTag(tagId, workerTag) {
|
||||
return this.updateWorkerTagWithHttpInfo(tagId, workerTag)
|
||||
.then(function(response_and_data) {
|
||||
return response_and_data.data;
|
||||
});
|
||||
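To round out the tag CRUD surface, a sketch that renames an existing tag by fetching it and sending it back through `updateWorkerTag()`; the tag ID and new name are placeholders.

```javascript
const api = new WorkerMgtApi(getAPIClient());
const tagId = "2f5d18f8-093e-42bd-a27d-2af4ff69bb2e"; // placeholder UUID

api
  .fetchWorkerTag(tagId)
  .then((tag) => {
    tag.name = "Render nodes"; // edit in place, keeping id and description
    return api.updateWorkerTag(tagId, tag);
  })
  .then(() => console.log("tag updated"))
  .catch((error) => console.log("error updating tag:", error));
```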
|
@ -12,6 +12,7 @@
|
||||
*/
|
||||
|
||||
import ApiClient from '../ApiClient';
|
||||
import AvailableJobSettingEvalInfo from './AvailableJobSettingEvalInfo';
|
||||
import AvailableJobSettingSubtype from './AvailableJobSettingSubtype';
|
||||
import AvailableJobSettingType from './AvailableJobSettingType';
|
||||
import AvailableJobSettingVisibility from './AvailableJobSettingVisibility';
|
||||
@ -79,6 +80,9 @@ class AvailableJobSetting {
|
||||
if (data.hasOwnProperty('eval')) {
|
||||
obj['eval'] = ApiClient.convertToType(data['eval'], 'String');
|
||||
}
|
||||
if (data.hasOwnProperty('evalInfo')) {
|
||||
obj['evalInfo'] = AvailableJobSettingEvalInfo.constructFromObject(data['evalInfo']);
|
||||
}
|
||||
if (data.hasOwnProperty('visible')) {
|
||||
obj['visible'] = AvailableJobSettingVisibility.constructFromObject(data['visible']);
|
||||
}
|
||||
@ -141,6 +145,11 @@ AvailableJobSetting.prototype['default'] = undefined;
|
||||
*/
|
||||
AvailableJobSetting.prototype['eval'] = undefined;
|
||||
|
||||
/**
|
||||
* @member {module:model/AvailableJobSettingEvalInfo} evalInfo
|
||||
*/
|
||||
AvailableJobSetting.prototype['evalInfo'] = undefined;
|
||||
|
||||
/**
|
||||
* @member {module:model/AvailableJobSettingVisibility} visible
|
||||
*/
|
||||
|
88
web/app/src/manager-api/model/AvailableJobSettingEvalInfo.js
generated
Normal file
@ -0,0 +1,88 @@
|
||||
/**
|
||||
* Flamenco manager
|
||||
* Render Farm manager API
|
||||
*
|
||||
* The version of the OpenAPI document: 1.0.0
|
||||
*
|
||||
*
|
||||
* NOTE: This class is auto generated by OpenAPI Generator (https://openapi-generator.tech).
|
||||
* https://openapi-generator.tech
|
||||
* Do not edit the class manually.
|
||||
*
|
||||
*/
|
||||
|
||||
import ApiClient from '../ApiClient';
|
||||
|
||||
/**
|
||||
* The AvailableJobSettingEvalInfo model module.
|
||||
* @module model/AvailableJobSettingEvalInfo
|
||||
* @version 0.0.0
|
||||
*/
|
||||
class AvailableJobSettingEvalInfo {
|
||||
/**
|
||||
* Constructs a new <code>AvailableJobSettingEvalInfo</code>.
|
||||
* Meta-data for the 'eval' expression.
|
||||
* @alias module:model/AvailableJobSettingEvalInfo
|
||||
* @param showLinkButton {Boolean} Enables the 'eval on submit' toggle button behavior for this setting. A toggle button will be shown in Blender's submission interface. When toggled on, the `eval` expression will determine the setting's value. Manually editing the setting is then no longer possible, and instead of an input field, the 'description' string is shown. An example use is the to-be-rendered frame range, which by default automatically follows the scene range, but can be overridden manually when desired.
|
||||
* @param description {String} Description of what the 'eval' expression is doing. It is also used as placeholder text to show when the manual input field is hidden (because eval-on-submit has been toggled on by the user).
|
||||
*/
|
||||
constructor(showLinkButton, description) {
|
||||
|
||||
AvailableJobSettingEvalInfo.initialize(this, showLinkButton, description);
|
||||
}
|
||||
|
||||
/**
|
||||
* Initializes the fields of this object.
|
||||
* This method is used by the constructors of any subclasses, in order to implement multiple inheritance (mix-ins).
|
||||
* Only for internal use.
|
||||
*/
|
||||
static initialize(obj, showLinkButton, description) {
|
||||
obj['showLinkButton'] = showLinkButton || false;
|
||||
obj['description'] = description || '';
|
||||
}
|
||||
|
||||
/**
|
||||
* Constructs a <code>AvailableJobSettingEvalInfo</code> from a plain JavaScript object, optionally creating a new instance.
|
||||
* Copies all relevant properties from <code>data</code> to <code>obj</code> if supplied or a new instance if not.
|
||||
* @param {Object} data The plain JavaScript object bearing properties of interest.
|
||||
* @param {module:model/AvailableJobSettingEvalInfo} obj Optional instance to populate.
|
||||
* @return {module:model/AvailableJobSettingEvalInfo} The populated <code>AvailableJobSettingEvalInfo</code> instance.
|
||||
*/
|
||||
static constructFromObject(data, obj) {
|
||||
if (data) {
|
||||
obj = obj || new AvailableJobSettingEvalInfo();
|
||||
|
||||
if (data.hasOwnProperty('showLinkButton')) {
|
||||
obj['showLinkButton'] = ApiClient.convertToType(data['showLinkButton'], 'Boolean');
|
||||
}
|
||||
if (data.hasOwnProperty('description')) {
|
||||
obj['description'] = ApiClient.convertToType(data['description'], 'String');
|
||||
}
|
||||
}
|
||||
return obj;
|
||||
}
|
||||
|
||||
|
||||
}
|
||||
|
||||
/**
|
||||
* Enables the 'eval on submit' toggle button behavior for this setting. A toggle button will be shown in Blender's submission interface. When toggled on, the `eval` expression will determine the setting's value. Manually editing the setting is then no longer possible, and instead of an input field, the 'description' string is shown. An example use is the to-be-rendered frame range, which by default automatically follows the scene range, but can be overridden manually when desired.
|
||||
* @member {Boolean} showLinkButton
|
||||
* @default false
|
||||
*/
|
||||
AvailableJobSettingEvalInfo.prototype['showLinkButton'] = false;
|
||||
|
||||
/**
|
||||
* Description of what the 'eval' expression is doing. It is also used as placeholder text to show when the manual input field is hidden (because eval-on-submit has been toggled on by the user).
|
||||
* @member {String} description
|
||||
* @default ''
|
||||
*/
|
||||
AvailableJobSettingEvalInfo.prototype['description'] = '';
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
export default AvailableJobSettingEvalInfo;
|
||||
|
16
web/app/src/manager-api/model/Job.js
generated
@ -97,8 +97,8 @@ class Job {
|
||||
if (data.hasOwnProperty('storage')) {
|
||||
obj['storage'] = JobStorageInfo.constructFromObject(data['storage']);
|
||||
}
|
||||
if (data.hasOwnProperty('worker_cluster')) {
|
||||
obj['worker_cluster'] = ApiClient.convertToType(data['worker_cluster'], 'String');
|
||||
if (data.hasOwnProperty('worker_tag')) {
|
||||
obj['worker_tag'] = ApiClient.convertToType(data['worker_tag'], 'String');
|
||||
}
|
||||
if (data.hasOwnProperty('id')) {
|
||||
obj['id'] = ApiClient.convertToType(data['id'], 'String');
|
||||
@ -170,10 +170,10 @@ Job.prototype['submitter_platform'] = undefined;
|
||||
Job.prototype['storage'] = undefined;
|
||||
|
||||
/**
|
||||
* Worker Cluster that should execute this job. When a cluster ID is given, only Workers in that cluster will be scheduled to work on it. If empty or ommitted, all workers can work on this job.
|
||||
* @member {String} worker_cluster
|
||||
* Worker tag that should execute this job. When a tag ID is given, only Workers in that tag will be scheduled to work on it. If empty or ommitted, all workers can work on this job.
|
||||
* @member {String} worker_tag
|
||||
*/
|
||||
Job.prototype['worker_cluster'] = undefined;
|
||||
Job.prototype['worker_tag'] = undefined;
|
||||
|
||||
/**
|
||||
* UUID of the Job
|
||||
@ -249,10 +249,10 @@ SubmittedJob.prototype['submitter_platform'] = undefined;
|
||||
*/
|
||||
SubmittedJob.prototype['storage'] = undefined;
|
||||
/**
|
||||
* Worker Cluster that should execute this job. When a cluster ID is given, only Workers in that cluster will be scheduled to work on it. If empty or ommitted, all workers can work on this job.
|
||||
* @member {String} worker_cluster
|
||||
* Worker tag that should execute this job. When a tag ID is given, only Workers in that tag will be scheduled to work on it. If empty or ommitted, all workers can work on this job.
|
||||
* @member {String} worker_tag
|
||||
*/
|
||||
SubmittedJob.prototype['worker_cluster'] = undefined;
|
||||
SubmittedJob.prototype['worker_tag'] = undefined;
|
||||
// Implement JobAllOf interface:
|
||||
/**
|
||||
* UUID of the Job
|
||||
|
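A sketch of resolving a job's `worker_tag` to a readable name, mirroring what the job-details component does through the workers store. `fetchedTagList` is assumed to come from `WorkerMgtApi.fetchWorkerTags()`.

```javascript
// Build the lookup once, then resolve per job.
const tagsByID = {};
for (const tag of fetchedTagList.tags || []) tagsByID[tag.id] = tag;

function workerTagName(job) {
  if (!job.worker_tag) return undefined; // tagless job
  const tag = tagsByID[job.worker_tag];
  return tag ? tag.name : job.worker_tag; // fall back to the raw UUID
}
```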
10
web/app/src/manager-api/model/SubmittedJob.js
generated
@ -81,8 +81,8 @@ class SubmittedJob {
|
||||
if (data.hasOwnProperty('storage')) {
|
||||
obj['storage'] = JobStorageInfo.constructFromObject(data['storage']);
|
||||
}
|
||||
if (data.hasOwnProperty('worker_cluster')) {
|
||||
obj['worker_cluster'] = ApiClient.convertToType(data['worker_cluster'], 'String');
|
||||
if (data.hasOwnProperty('worker_tag')) {
|
||||
obj['worker_tag'] = ApiClient.convertToType(data['worker_tag'], 'String');
|
||||
}
|
||||
}
|
||||
return obj;
|
||||
@ -136,10 +136,10 @@ SubmittedJob.prototype['submitter_platform'] = undefined;
|
||||
SubmittedJob.prototype['storage'] = undefined;
|
||||
|
||||
/**
|
||||
* Worker Cluster that should execute this job. When a cluster ID is given, only Workers in that cluster will be scheduled to work on it. If empty or ommitted, all workers can work on this job.
|
||||
* @member {String} worker_cluster
|
||||
* Worker tag that should execute this job. When a tag ID is given, only Workers in that tag will be scheduled to work on it. If empty or ommitted, all workers can work on this job.
|
||||
* @member {String} worker_tag
|
||||
*/
|
||||
SubmittedJob.prototype['worker_cluster'] = undefined;
|
||||
SubmittedJob.prototype['worker_tag'] = undefined;
|
||||
|
||||
|
||||
|
||||
|
18
web/app/src/manager-api/model/Worker.js
generated
@ -13,10 +13,10 @@
|
||||
|
||||
import ApiClient from '../ApiClient';
|
||||
import WorkerAllOf from './WorkerAllOf';
|
||||
import WorkerCluster from './WorkerCluster';
|
||||
import WorkerStatus from './WorkerStatus';
|
||||
import WorkerStatusChangeRequest from './WorkerStatusChangeRequest';
|
||||
import WorkerSummary from './WorkerSummary';
|
||||
import WorkerTag from './WorkerTag';
|
||||
import WorkerTask from './WorkerTask';
|
||||
|
||||
/**
|
||||
@ -102,8 +102,8 @@ class Worker {
|
||||
if (data.hasOwnProperty('task')) {
|
||||
obj['task'] = WorkerTask.constructFromObject(data['task']);
|
||||
}
|
||||
if (data.hasOwnProperty('clusters')) {
|
||||
obj['clusters'] = ApiClient.convertToType(data['clusters'], [WorkerCluster]);
|
||||
if (data.hasOwnProperty('tags')) {
|
||||
obj['tags'] = ApiClient.convertToType(data['tags'], [WorkerTag]);
|
||||
}
|
||||
}
|
||||
return obj;
|
||||
@ -167,10 +167,10 @@ Worker.prototype['supported_task_types'] = undefined;
|
||||
Worker.prototype['task'] = undefined;
|
||||
|
||||
/**
|
||||
* Clusters of which this Worker is a member.
|
||||
* @member {Array.<module:model/WorkerCluster>} clusters
|
||||
* Tags of which this Worker is a member.
|
||||
* @member {Array.<module:model/WorkerTag>} tags
|
||||
*/
|
||||
Worker.prototype['clusters'] = undefined;
|
||||
Worker.prototype['tags'] = undefined;
|
||||
|
||||
|
||||
// Implement WorkerSummary interface:
|
||||
@ -220,10 +220,10 @@ WorkerAllOf.prototype['supported_task_types'] = undefined;
|
||||
*/
|
||||
WorkerAllOf.prototype['task'] = undefined;
|
||||
/**
|
||||
* Clusters of which this Worker is a member.
|
||||
* @member {Array.<module:model/WorkerCluster>} clusters
|
||||
* Tags of which this Worker is a member.
|
||||
* @member {Array.<module:model/WorkerTag>} tags
|
||||
*/
|
||||
WorkerAllOf.prototype['clusters'] = undefined;
|
||||
WorkerAllOf.prototype['tags'] = undefined;
|
||||
|
||||
|
||||
|
||||
|
12
web/app/src/manager-api/model/WorkerAllOf.js
generated
@ -12,7 +12,7 @@
|
||||
*/
|
||||
|
||||
import ApiClient from '../ApiClient';
|
||||
import WorkerCluster from './WorkerCluster';
|
||||
import WorkerTag from './WorkerTag';
|
||||
import WorkerTask from './WorkerTask';
|
||||
|
||||
/**
|
||||
@ -67,8 +67,8 @@ class WorkerAllOf {
|
||||
if (data.hasOwnProperty('task')) {
|
||||
obj['task'] = WorkerTask.constructFromObject(data['task']);
|
||||
}
|
||||
if (data.hasOwnProperty('clusters')) {
|
||||
obj['clusters'] = ApiClient.convertToType(data['clusters'], [WorkerCluster]);
|
||||
if (data.hasOwnProperty('tags')) {
|
||||
obj['tags'] = ApiClient.convertToType(data['tags'], [WorkerTag]);
|
||||
}
|
||||
}
|
||||
return obj;
|
||||
@ -100,10 +100,10 @@ WorkerAllOf.prototype['supported_task_types'] = undefined;
|
||||
WorkerAllOf.prototype['task'] = undefined;
|
||||
|
||||
/**
|
||||
* Clusters of which this Worker is a member.
|
||||
* @member {Array.<module:model/WorkerCluster>} clusters
|
||||
* Tags of which this Worker is a member.
|
||||
* @member {Array.<module:model/WorkerTag>} tags
|
||||
*/
|
||||
WorkerAllOf.prototype['clusters'] = undefined;
|
||||
WorkerAllOf.prototype['tags'] = undefined;
|
||||
|
||||
|
||||
|
||||
|
@ -1,74 +0,0 @@
|
||||
/**
|
||||
* Flamenco manager
|
||||
* Render Farm manager API
|
||||
*
|
||||
* The version of the OpenAPI document: 1.0.0
|
||||
*
|
||||
*
|
||||
* NOTE: This class is auto generated by OpenAPI Generator (https://openapi-generator.tech).
|
||||
* https://openapi-generator.tech
|
||||
* Do not edit the class manually.
|
||||
*
|
||||
*/
|
||||
|
||||
import ApiClient from '../ApiClient';
|
||||
|
||||
/**
|
||||
* The WorkerClusterChangeRequest model module.
|
||||
* @module model/WorkerClusterChangeRequest
|
||||
* @version 0.0.0
|
||||
*/
|
||||
class WorkerClusterChangeRequest {
|
||||
/**
|
||||
* Constructs a new <code>WorkerClusterChangeRequest</code>.
|
||||
* Request to change which clusters this Worker is assigned to.
|
||||
* @alias module:model/WorkerClusterChangeRequest
|
||||
* @param clusterIds {Array.<String>}
|
||||
*/
|
||||
constructor(clusterIds) {
|
||||
|
||||
WorkerClusterChangeRequest.initialize(this, clusterIds);
|
||||
}
|
||||
|
||||
/**
|
||||
* Initializes the fields of this object.
|
||||
* This method is used by the constructors of any subclasses, in order to implement multiple inheritance (mix-ins).
|
||||
* Only for internal use.
|
||||
*/
|
||||
static initialize(obj, clusterIds) {
|
||||
obj['cluster_ids'] = clusterIds;
|
||||
}
|
||||
|
||||
/**
|
||||
* Constructs a <code>WorkerClusterChangeRequest</code> from a plain JavaScript object, optionally creating a new instance.
|
||||
* Copies all relevant properties from <code>data</code> to <code>obj</code> if supplied or a new instance if not.
|
||||
* @param {Object} data The plain JavaScript object bearing properties of interest.
|
||||
* @param {module:model/WorkerClusterChangeRequest} obj Optional instance to populate.
|
||||
* @return {module:model/WorkerClusterChangeRequest} The populated <code>WorkerClusterChangeRequest</code> instance.
|
||||
*/
|
||||
static constructFromObject(data, obj) {
|
||||
if (data) {
|
||||
obj = obj || new WorkerClusterChangeRequest();
|
||||
|
||||
if (data.hasOwnProperty('cluster_ids')) {
|
||||
obj['cluster_ids'] = ApiClient.convertToType(data['cluster_ids'], ['String']);
|
||||
}
|
||||
}
|
||||
return obj;
|
||||
}
|
||||
|
||||
|
||||
}
|
||||
|
||||
/**
|
||||
* @member {Array.<String>} cluster_ids
|
||||
*/
|
||||
WorkerClusterChangeRequest.prototype['cluster_ids'] = undefined;
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
export default WorkerClusterChangeRequest;
|
||||
|
@ -14,20 +14,20 @@
|
||||
import ApiClient from '../ApiClient';
|
||||
|
||||
/**
|
||||
* The WorkerCluster model module.
|
||||
* @module model/WorkerCluster
|
||||
* The WorkerTag model module.
|
||||
* @module model/WorkerTag
|
||||
* @version 0.0.0
|
||||
*/
|
||||
class WorkerCluster {
|
||||
class WorkerTag {
|
||||
/**
|
||||
* Constructs a new <code>WorkerCluster</code>.
|
||||
* Cluster of workers. A job can optionally specify which cluster it should be limited to. Workers can be part of multiple clusters simultaneously.
|
||||
* @alias module:model/WorkerCluster
|
||||
* Constructs a new <code>WorkerTag</code>.
|
||||
* Tag of workers. A job can optionally specify which tag it should be limited to. Workers can be part of multiple tags simultaneously.
|
||||
* @alias module:model/WorkerTag
|
||||
* @param name {String}
|
||||
*/
|
||||
constructor(name) {
|
||||
|
||||
WorkerCluster.initialize(this, name);
|
||||
WorkerTag.initialize(this, name);
|
||||
}
|
||||
|
||||
/**
|
||||
@ -40,15 +40,15 @@ class WorkerCluster {
|
||||
}
|
||||
|
||||
/**
|
||||
* Constructs a <code>WorkerCluster</code> from a plain JavaScript object, optionally creating a new instance.
|
||||
* Constructs a <code>WorkerTag</code> from a plain JavaScript object, optionally creating a new instance.
|
||||
* Copies all relevant properties from <code>data</code> to <code>obj</code> if supplied or a new instance if not.
|
||||
* @param {Object} data The plain JavaScript object bearing properties of interest.
|
||||
* @param {module:model/WorkerCluster} obj Optional instance to populate.
|
||||
* @return {module:model/WorkerCluster} The populated <code>WorkerCluster</code> instance.
|
||||
* @param {module:model/WorkerTag} obj Optional instance to populate.
|
||||
* @return {module:model/WorkerTag} The populated <code>WorkerTag</code> instance.
|
||||
*/
|
||||
static constructFromObject(data, obj) {
|
||||
if (data) {
|
||||
obj = obj || new WorkerCluster();
|
||||
obj = obj || new WorkerTag();
|
||||
|
||||
if (data.hasOwnProperty('id')) {
|
||||
obj['id'] = ApiClient.convertToType(data['id'], 'String');
|
||||
@ -67,25 +67,25 @@ class WorkerCluster {
|
||||
}
|
||||
|
||||
/**
|
||||
* UUID of the cluster. Can be ommitted when creating a new cluster, in which case a random UUID will be assigned.
|
||||
* UUID of the tag. Can be ommitted when creating a new tag, in which case a random UUID will be assigned.
|
||||
* @member {String} id
|
||||
*/
|
||||
WorkerCluster.prototype['id'] = undefined;
|
||||
WorkerTag.prototype['id'] = undefined;
|
||||
|
||||
/**
|
||||
* @member {String} name
|
||||
*/
|
||||
WorkerCluster.prototype['name'] = undefined;
|
||||
WorkerTag.prototype['name'] = undefined;
|
||||
|
||||
/**
|
||||
* @member {String} description
|
||||
*/
|
||||
WorkerCluster.prototype['description'] = undefined;
|
||||
WorkerTag.prototype['description'] = undefined;
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
export default WorkerCluster;
|
||||
export default WorkerTag;
|
||||
|
74
web/app/src/manager-api/model/WorkerTagChangeRequest.js
generated
Normal file
@ -0,0 +1,74 @@
/**
 * Flamenco manager
 * Render Farm manager API
 *
 * The version of the OpenAPI document: 1.0.0
 *
 *
 * NOTE: This class is auto generated by OpenAPI Generator (https://openapi-generator.tech).
 * https://openapi-generator.tech
 * Do not edit the class manually.
 *
 */

import ApiClient from '../ApiClient';

/**
 * The WorkerTagChangeRequest model module.
 * @module model/WorkerTagChangeRequest
 * @version 0.0.0
 */
class WorkerTagChangeRequest {
    /**
     * Constructs a new <code>WorkerTagChangeRequest</code>.
     * Request to change which tags this Worker is assigned to.
     * @alias module:model/WorkerTagChangeRequest
     * @param tagIds {Array.<String>}
     */
    constructor(tagIds) {

        WorkerTagChangeRequest.initialize(this, tagIds);
    }

    /**
     * Initializes the fields of this object.
     * This method is used by the constructors of any subclasses, in order to implement multiple inheritance (mix-ins).
     * Only for internal use.
     */
    static initialize(obj, tagIds) {
        obj['tag_ids'] = tagIds;
    }

    /**
     * Constructs a <code>WorkerTagChangeRequest</code> from a plain JavaScript object, optionally creating a new instance.
     * Copies all relevant properties from <code>data</code> to <code>obj</code> if supplied or a new instance if not.
     * @param {Object} data The plain JavaScript object bearing properties of interest.
     * @param {module:model/WorkerTagChangeRequest} obj Optional instance to populate.
     * @return {module:model/WorkerTagChangeRequest} The populated <code>WorkerTagChangeRequest</code> instance.
     */
    static constructFromObject(data, obj) {
        if (data) {
            obj = obj || new WorkerTagChangeRequest();

            if (data.hasOwnProperty('tag_ids')) {
                obj['tag_ids'] = ApiClient.convertToType(data['tag_ids'], ['String']);
            }
        }
        return obj;
    }


}

/**
 * @member {Array.<String>} tag_ids
 */
WorkerTagChangeRequest.prototype['tag_ids'] = undefined;


export default WorkerTagChangeRequest;
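As a quick illustration of the new request model, the sketch below fills in `tag_ids` the way the constructor and `initialize()` above do. The import path and the tag UUIDs are placeholders.

```js
// Sketch: building a request that assigns a Worker to two tags.
import WorkerTagChangeRequest from '@/manager-api/model/WorkerTagChangeRequest';

const changeRequest = new WorkerTagChangeRequest([
  'd32d4ee2-3b79-4e3e-9f3a-9a8a5b2c1d0e', // placeholder tag UUIDs
  '8a3f1c52-77aa-4ef1-b8a2-2f8e2c9f4b11',
]);

// The constructor routes through initialize(), so the wire-format property
// is already populated:
console.log(changeRequest['tag_ids'].length); // 2
```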
@ -12,21 +12,21 @@
 */

import ApiClient from '../ApiClient';
import WorkerCluster from './WorkerCluster';
import WorkerTag from './WorkerTag';

/**
 * The WorkerClusterList model module.
 * @module model/WorkerClusterList
 * The WorkerTagList model module.
 * @module model/WorkerTagList
 * @version 0.0.0
 */
class WorkerClusterList {
class WorkerTagList {
    /**
     * Constructs a new <code>WorkerClusterList</code>.
     * @alias module:model/WorkerClusterList
     * Constructs a new <code>WorkerTagList</code>.
     * @alias module:model/WorkerTagList
     */
    constructor() {

        WorkerClusterList.initialize(this);
        WorkerTagList.initialize(this);
    }

    /**
@ -38,18 +38,18 @@ class WorkerClusterList {
    }

    /**
     * Constructs a <code>WorkerClusterList</code> from a plain JavaScript object, optionally creating a new instance.
     * Constructs a <code>WorkerTagList</code> from a plain JavaScript object, optionally creating a new instance.
     * Copies all relevant properties from <code>data</code> to <code>obj</code> if supplied or a new instance if not.
     * @param {Object} data The plain JavaScript object bearing properties of interest.
     * @param {module:model/WorkerClusterList} obj Optional instance to populate.
     * @return {module:model/WorkerClusterList} The populated <code>WorkerClusterList</code> instance.
     * @param {module:model/WorkerTagList} obj Optional instance to populate.
     * @return {module:model/WorkerTagList} The populated <code>WorkerTagList</code> instance.
     */
    static constructFromObject(data, obj) {
        if (data) {
            obj = obj || new WorkerClusterList();
            obj = obj || new WorkerTagList();

            if (data.hasOwnProperty('clusters')) {
                obj['clusters'] = ApiClient.convertToType(data['clusters'], [WorkerCluster]);
            if (data.hasOwnProperty('tags')) {
                obj['tags'] = ApiClient.convertToType(data['tags'], [WorkerTag]);
            }
        }
        return obj;
@ -59,14 +59,14 @@ class WorkerClusterList {
}

/**
 * @member {Array.<module:model/WorkerCluster>} clusters
 * @member {Array.<module:model/WorkerTag>} tags
 */
WorkerClusterList.prototype['clusters'] = undefined;
WorkerTagList.prototype['tags'] = undefined;


export default WorkerClusterList;
export default WorkerTagList;
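The list model mainly exists to unwrap the Manager's `{ tags: [...] }` response. A small sketch, assuming the generated `ApiClient.convertToType()` recursively builds `WorkerTag` instances as usual; the import path and data are placeholders.

```js
// Sketch: a plain response object becomes a WorkerTagList whose entries
// are WorkerTag model instances.
import WorkerTagList from '@/manager-api/model/WorkerTagList';

const response = {
  tags: [
    { id: 'f0a1b2c3-d4e5-4f60-8192-a3b4c5d6e7f8', name: 'default', description: '' }, // placeholder
  ],
};

const tagList = WorkerTagList.constructFromObject(response);
console.log(tagList.tags.length);  // 1
console.log(tagList.tags[0].name); // "default"
```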
@ -15,16 +15,16 @@ export const useWorkers = defineStore('workers', {
     */
    activeWorkerID: "",

    /** @type {API.WorkerCluster[]} */
    clusters: [],
    /** @type {API.WorkerTag[]} */
    tags: [],

    /* Mapping from cluster UUID to API.WorkerCluster. */
    clustersByID: {},
    /* Mapping from tag UUID to API.WorkerTag. */
    tagsByID: {},
  }),
  actions: {
    setActiveWorkerID(workerID) {
      this.$patch({
        activeWorker: {id: workerID, settings: {}, metadata: {}},
        activeWorker: { id: workerID, settings: {}, metadata: {} },
        activeWorkerID: workerID,
      });
    },
@ -47,22 +47,21 @@ export const useWorkers = defineStore('workers', {
      });
    },
    /**
     * Fetch the available worker clusters from the Manager.
     * Fetch the available worker tags from the Manager.
     *
     * @returns a promise.
     */
    refreshClusters() {
    refreshTags() {
      const api = new WorkerMgtApi(getAPIClient());
      return api.fetchWorkerClusters()
        .then((resp) => {
          this.clusters = resp.clusters;
      return api.fetchWorkerTags().then((resp) => {
        this.tags = resp.tags;

        let clustersByID = {};
        for (let cluster of this.clusters) {
          clustersByID[cluster.id] = cluster;
        }
        this.clustersByID = clustersByID;
      })
        let tagsByID = {};
        for (let tag of this.tags) {
          tagsByID[tag.id] = tag;
        }
        this.tagsByID = tagsByID;
      });
    },
  },
})
});
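A hedged sketch of how a component could consume the renamed store action: `useWorkers`, `refreshTags`, and `tagsByID` come from the diff above, while the store import path and the UUID are assumptions for illustration.

```js
// Sketch: refresh the tag list from the Manager, then look a tag up by UUID.
import { useWorkers } from '@/stores/workers';

const workers = useWorkers();

workers.refreshTags().then(() => {
  const tagID = '04f4c9b8-9b0a-4e5e-8b62-8f4f3a2a7d10'; // placeholder UUID
  const tag = workers.tagsByID[tagID];
  if (tag) {
    console.log(`Tag ${tag.name}: ${tag.description}`);
  } else {
    console.log('Tag not known to the Manager.');
  }
});
```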
@ -16,7 +16,35 @@ SocketIO messages have an *event name* and *room name*.
- **Manager** typically sends to all clients in a specific *room*. Which client
  has joined which room is determined by the Manager as well. By default every
  client joins the "job updates" and "chat" rooms. This is done in the
  `OnConnection` handler defined in `registerSIOEventHandlers()`.
  `OnConnection` handler defined in `registerSIOEventHandlers()`. Clients can
  send messages to the Manager to change which rooms they are in.
- Received messages (regardless of by whom) are handled based only on their
  *event name*. The *room name* only determines *which* client receives those
  *event name*. The *room name* only determines *which client* receives those
  messages.

## Technical Details

The following files & directories are relevant to the SocketIO broadcasting
system on the Manager/backend side:

`internal/manager/webupdates`
: package for the SocketIO broadcasting system

`internal/manager/webupdates/sio_rooms.go`
: contains the list of predefined SocketIO *rooms* and *event types*. Note that
  there are more rooms than listed in that file; there are dynamic room names
  like `job-fa48930a-105c-4125-a7f7-0aa1651dcd57` that cannot be listed there as
  constants.

`internal/manager/webupdates/job_updates.go`
: sending job-related updates.

`internal/manager/webupdates/worker_updates.go`
: sending worker-related updates.

`pkg/api/flamenco-openapi.yaml`
: the OpenAPI specification also includes the structures sent over SocketIO.
  Search for `SocketIOJobUpdate`; the rest is defined in its vicinity.

For a relatively simple example of a job update broadcast, see
`func (f *Flamenco) SetJobPriority(...)` in `internal/manager/api_impl/jobs.go`.
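To make the event-name/room split concrete, here is a minimal client-side sketch using the standard socket.io-client API. The URL, the `/jobs` event name, and the payload fields are placeholders; the authoritative names are the constants in `sio_rooms.go` and the `SocketIOJobUpdate` schema in `flamenco-openapi.yaml`.

```js
// Sketch: a web client reacting to job updates broadcast by the Manager.
// Which updates arrive depends on the rooms the Manager placed this client
// in; how they are handled depends only on the event name.
import { io } from 'socket.io-client';

const socket = io('http://localhost:8080', { transports: ['websocket'] }); // placeholder URL

socket.on('/jobs', (jobUpdate) => { // '/jobs' is a placeholder event name
  console.log('job update:', jobUpdate.id, jobUpdate.status);
});
```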