[Blender_Kitsu] Remove Unused Collections during Update Output Collection #35

Closed
Nick Alberelli wants to merge 0 commits from fix/update_output_collection into main

When changing the target branch, be careful to rebase the branch in your fork to match. See documentation.
870 changed files with 50051 additions and 65750 deletions

8
.gitattributes vendored
View File

@ -1,6 +1,2 @@
*.mp4 filter=lfs diff=lfs merge=lfs -text *.png filter=lfs diff=lfs merge=lfs -text
*.png filter=lfs diff=lfs merge=lfs -text *.mp4 filter=lfs diff=lfs merge=lfs -text
*.jpg filter=lfs diff=lfs merge=lfs -text
*.whl filter=lfs diff=lfs merge=lfs -text
*.gif filter=lfs diff=lfs merge=lfs -text
*.webp filter=lfs diff=lfs merge=lfs -text

View File

@ -1,29 +0,0 @@
name: Bug
about: File a bug report
labels:
- "Type/Report"
- "Status/Needs Triage"
- "Priority/Normal"
body:
- type: markdown
attributes:
value: |
Thank you for wanting to submit a report and help us improve our tools.
### Please provide a .blend file with bug reports, and use the latest version of the tools.
- type: textarea
id: body
attributes:
label: "Description"
hide_label: true
value: |
**System Information**
Operating system: (eg. Windows 11)
Blender Version: (eg. 4.2.3 LTS)
Add-on Name & Version: (eg. EasyWeight v1.0.4)
**Short description of error**
**Exact steps for others to reproduce the error**
Based on default cube or simplified attached .blend file.

View File

@ -1,22 +0,0 @@
name: Character Rig Issue
about: File a report about one of the characters in our library
labels:
- "Type/Report"
- "Status/Needs Triage"
- "Priority/Normal"
body:
- type: markdown
attributes:
value: |
### Please note that not all characters may be perfectly compatible with the latest version of Blender.
- type: textarea
id: body
attributes:
label: "Description"
hide_label: true
value: |
@Mets
Character:
Blender Version:
**Description of the issue**

View File

@ -1,35 +0,0 @@
name: CloudRig Bug
about: File a bug report about CloudRig
labels:
- "Type/Report"
- "Status/Needs Triage"
- "Priority/Normal"
body:
- type: markdown
attributes:
value: |
Thank you for wanting to submit a report and help me improve CloudRig.
### Please provide a .blend file with bug reports.
If your issue is about a metarig that fails to generate, please use the "Report Bug" button in the interface:
Select your metarig and go to Properties->Object Data->CloudRig->Generation Log, and click on "Report Bug".
### Please make sure you use the CloudRig version that corresponds to your Blender version.
- type: textarea
id: body
attributes:
label: "Description"
hide_label: true
value: |
**System Information**
Operating system:
Blender Version:
CloudRig Version:
**Short description of error**
**Exact steps for others to reproduce the error**
Based on the default human metarig or a simplified attached .blend file.
@Mets

2
.gitignore vendored
View File

@ -51,7 +51,7 @@ target/
.python-version .python-version
# dotenv # dotenv
*.env .env
# virtualenv # virtualenv
.venv .venv

30
AUTHORS
View File

@ -1,30 +0,0 @@
# This is the list of Blender Studio Tools authors for copyright purposes.
#
# This does not necessarily list everyone who has contributed code.
# To see the full list of contributors, see the revision history in source
# control.
# Names should be added to this file with this pattern:
#
# For individuals:
# Name <email address>
#
# For organizations:
# Organization <fnmatch pattern>
#
# BEGIN individuals section.
Andy Goralczyk <andy@blender.org>
Demeter Dazdik <demeter@blender.org>
Francesco Siddi <francesco@blender.org>
Nick Alberelli <nick@blender.org>
Paul Golter <paul@blender.org>
Sebastian Parborg <sebastian@blender.org>
Simon Thommes <simon@blender.org>
# Please DO NOT APPEND here. See comments at the top of the file.
# END individuals section.
# BEGIN organizations section.
Blender Foundation <*@blender.org>
# Please DO NOT APPEND here. See comments at the top of the file.
# END organizations section.

340
LICENSE
View File

@ -1,340 +0,0 @@
GNU GENERAL PUBLIC LICENSE
Version 2, June 1991
Copyright (C) 1989, 1991 Free Software Foundation, Inc.
59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The licenses for most software are designed to take away your
freedom to share and change it. By contrast, the GNU General Public
License is intended to guarantee your freedom to share and change free
software--to make sure the software is free for all its users. This
General Public License applies to most of the Free Software
Foundation's software and to any other program whose authors commit to
using it. (Some other Free Software Foundation software is covered by
the GNU Library General Public License instead.) You can apply it to
your programs, too.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
this service if you wish), that you receive source code or can get it
if you want it, that you can change the software or use pieces of it
in new free programs; and that you know you can do these things.
To protect your rights, we need to make restrictions that forbid
anyone to deny you these rights or to ask you to surrender the rights.
These restrictions translate to certain responsibilities for you if you
distribute copies of the software, or if you modify it.
For example, if you distribute copies of such a program, whether
gratis or for a fee, you must give the recipients all the rights that
you have. You must make sure that they, too, receive or can get the
source code. And you must show them these terms so they know their
rights.
We protect your rights with two steps: (1) copyright the software, and
(2) offer you this license which gives you legal permission to copy,
distribute and/or modify the software.
Also, for each author's protection and ours, we want to make certain
that everyone understands that there is no warranty for this free
software. If the software is modified by someone else and passed on, we
want its recipients to know that what they have is not the original, so
that any problems introduced by others will not reflect on the original
authors' reputations.
Finally, any free program is threatened constantly by software
patents. We wish to avoid the danger that redistributors of a free
program will individually obtain patent licenses, in effect making the
program proprietary. To prevent this, we have made it clear that any
patent must be licensed for everyone's free use or not licensed at all.
The precise terms and conditions for copying, distribution and
modification follow.
GNU GENERAL PUBLIC LICENSE
TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
0. This License applies to any program or other work which contains
a notice placed by the copyright holder saying it may be distributed
under the terms of this General Public License. The "Program", below,
refers to any such program or work, and a "work based on the Program"
means either the Program or any derivative work under copyright law:
that is to say, a work containing the Program or a portion of it,
either verbatim or with modifications and/or translated into another
language. (Hereinafter, translation is included without limitation in
the term "modification".) Each licensee is addressed as "you".
Activities other than copying, distribution and modification are not
covered by this License; they are outside its scope. The act of
running the Program is not restricted, and the output from the Program
is covered only if its contents constitute a work based on the
Program (independent of having been made by running the Program).
Whether that is true depends on what the Program does.
1. You may copy and distribute verbatim copies of the Program's
source code as you receive it, in any medium, provided that you
conspicuously and appropriately publish on each copy an appropriate
copyright notice and disclaimer of warranty; keep intact all the
notices that refer to this License and to the absence of any warranty;
and give any other recipients of the Program a copy of this License
along with the Program.
You may charge a fee for the physical act of transferring a copy, and
you may at your option offer warranty protection in exchange for a fee.
2. You may modify your copy or copies of the Program or any portion
of it, thus forming a work based on the Program, and copy and
distribute such modifications or work under the terms of Section 1
above, provided that you also meet all of these conditions:
a) You must cause the modified files to carry prominent notices
stating that you changed the files and the date of any change.
b) You must cause any work that you distribute or publish, that in
whole or in part contains or is derived from the Program or any
part thereof, to be licensed as a whole at no charge to all third
parties under the terms of this License.
c) If the modified program normally reads commands interactively
when run, you must cause it, when started running for such
interactive use in the most ordinary way, to print or display an
announcement including an appropriate copyright notice and a
notice that there is no warranty (or else, saying that you provide
a warranty) and that users may redistribute the program under
these conditions, and telling the user how to view a copy of this
License. (Exception: if the Program itself is interactive but
does not normally print such an announcement, your work based on
the Program is not required to print an announcement.)
These requirements apply to the modified work as a whole. If
identifiable sections of that work are not derived from the Program,
and can be reasonably considered independent and separate works in
themselves, then this License, and its terms, do not apply to those
sections when you distribute them as separate works. But when you
distribute the same sections as part of a whole which is a work based
on the Program, the distribution of the whole must be on the terms of
this License, whose permissions for other licensees extend to the
entire whole, and thus to each and every part regardless of who wrote it.
Thus, it is not the intent of this section to claim rights or contest
your rights to work written entirely by you; rather, the intent is to
exercise the right to control the distribution of derivative or
collective works based on the Program.
In addition, mere aggregation of another work not based on the Program
with the Program (or with a work based on the Program) on a volume of
a storage or distribution medium does not bring the other work under
the scope of this License.
3. You may copy and distribute the Program (or a work based on it,
under Section 2) in object code or executable form under the terms of
Sections 1 and 2 above provided that you also do one of the following:
a) Accompany it with the complete corresponding machine-readable
source code, which must be distributed under the terms of Sections
1 and 2 above on a medium customarily used for software interchange; or,
b) Accompany it with a written offer, valid for at least three
years, to give any third party, for a charge no more than your
cost of physically performing source distribution, a complete
machine-readable copy of the corresponding source code, to be
distributed under the terms of Sections 1 and 2 above on a medium
customarily used for software interchange; or,
c) Accompany it with the information you received as to the offer
to distribute corresponding source code. (This alternative is
allowed only for noncommercial distribution and only if you
received the program in object code or executable form with such
an offer, in accord with Subsection b above.)
The source code for a work means the preferred form of the work for
making modifications to it. For an executable work, complete source
code means all the source code for all modules it contains, plus any
associated interface definition files, plus the scripts used to
control compilation and installation of the executable. However, as a
special exception, the source code distributed need not include
anything that is normally distributed (in either source or binary
form) with the major components (compiler, kernel, and so on) of the
operating system on which the executable runs, unless that component
itself accompanies the executable.
If distribution of executable or object code is made by offering
access to copy from a designated place, then offering equivalent
access to copy the source code from the same place counts as
distribution of the source code, even though third parties are not
compelled to copy the source along with the object code.
4. You may not copy, modify, sublicense, or distribute the Program
except as expressly provided under this License. Any attempt
otherwise to copy, modify, sublicense or distribute the Program is
void, and will automatically terminate your rights under this License.
However, parties who have received copies, or rights, from you under
this License will not have their licenses terminated so long as such
parties remain in full compliance.
5. You are not required to accept this License, since you have not
signed it. However, nothing else grants you permission to modify or
distribute the Program or its derivative works. These actions are
prohibited by law if you do not accept this License. Therefore, by
modifying or distributing the Program (or any work based on the
Program), you indicate your acceptance of this License to do so, and
all its terms and conditions for copying, distributing or modifying
the Program or works based on it.
6. Each time you redistribute the Program (or any work based on the
Program), the recipient automatically receives a license from the
original licensor to copy, distribute or modify the Program subject to
these terms and conditions. You may not impose any further
restrictions on the recipients' exercise of the rights granted herein.
You are not responsible for enforcing compliance by third parties to
this License.
7. If, as a consequence of a court judgment or allegation of patent
infringement or for any other reason (not limited to patent issues),
conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot
distribute so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you
may not distribute the Program at all. For example, if a patent
license would not permit royalty-free redistribution of the Program by
all those who receive copies directly or indirectly through you, then
the only way you could satisfy both it and this License would be to
refrain entirely from distribution of the Program.
If any portion of this section is held invalid or unenforceable under
any particular circumstance, the balance of the section is intended to
apply and the section as a whole is intended to apply in other
circumstances.
It is not the purpose of this section to induce you to infringe any
patents or other property right claims or to contest validity of any
such claims; this section has the sole purpose of protecting the
integrity of the free software distribution system, which is
implemented by public license practices. Many people have made
generous contributions to the wide range of software distributed
through that system in reliance on consistent application of that
system; it is up to the author/donor to decide if he or she is willing
to distribute software through any other system and a licensee cannot
impose that choice.
This section is intended to make thoroughly clear what is believed to
be a consequence of the rest of this License.
8. If the distribution and/or use of the Program is restricted in
certain countries either by patents or by copyrighted interfaces, the
original copyright holder who places the Program under this License
may add an explicit geographical distribution limitation excluding
those countries, so that distribution is permitted only in or among
countries not thus excluded. In such case, this License incorporates
the limitation as if written in the body of this License.
9. The Free Software Foundation may publish revised and/or new versions
of the General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the Program
specifies a version number of this License which applies to it and "any
later version", you have the option of following the terms and conditions
either of that version or of any later version published by the Free
Software Foundation. If the Program does not specify a version number of
this License, you may choose any version ever published by the Free Software
Foundation.
10. If you wish to incorporate parts of the Program into other free
programs whose distribution conditions are different, write to the author
to ask for permission. For software which is copyrighted by the Free
Software Foundation, write to the Free Software Foundation; we sometimes
make exceptions for this. Our decision will be guided by the two goals
of preserving the free status of all derivatives of our free software and
of promoting the sharing and reuse of software generally.
NO WARRANTY
11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY
FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN
OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES
PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED
OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS
TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE
PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING,
REPAIR OR CORRECTION.
12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR
REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES,
INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING
OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED
TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY
YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER
PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE
POSSIBILITY OF SUCH DAMAGES.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
convey the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
Also add information on how to contact you by electronic and paper mail.
If the program is interactive, make it output a short notice like this
when it starts in an interactive mode:
Gnomovision version 69, Copyright (C) year name of author
Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, the commands you use may
be called something other than `show w' and `show c'; they could even be
mouse-clicks or menu items--whatever suits your program.
You should also get your employer (if you work as a programmer) or your
school, if any, to sign a "copyright disclaimer" for the program, if
necessary. Here is a sample; alter the names:
Yoyodyne, Inc., hereby disclaims all copyright interest in the program
`Gnomovision' (which makes passes at compilers) written by James Hacker.
<signature of Ty Coon>, 1 April 1989
Ty Coon, President of Vice
This General Public License does not permit incorporating your program into
proprietary programs. If your program is a subroutine library, you may
consider it more useful to permit linking proprietary applications with the
library. If this is what you want to do, use the GNU Library General
Public License instead of this License.

View File

@ -1,19 +1,3 @@
# Blender Studio Pipeline # Blender Studio Pipeline
The complete collection of documents, add-ons, scripts and tools that make up the Blender Studio pipeline. Learn more at [studio.blender.org](https://studio.blender.org/pipeline/). The complete collection of documents, add-ons, scripts and tools that make up the Blender Studio pipeline. Learn more at [studio.blender.org](https://studio.blender.org/pipeline-and-tools/).
## Development Setup
Before checking out this repo, ensure that you have `git-lfs` installed and enabled (use `git lfs install` to verify this).
To learn more see https://git-lfs.com/
#### Developer Tip
If you are working with multiple remotes for this repository (e.g. a fork and upstream) and you are receiving git lfs errors such as `smudge filter lfs failed`, you can try enabling autodetect in the repo's local git config with the following command.
```bash
git config lfs.remote.autodetect true
```
##### Requirements
git lfs version: `3.3.0+`

View File

@ -1,6 +1,21 @@
# SPDX-FileCopyrightText: 2021 Blender Studio Tools Authors
# #
# SPDX-License-Identifier: GPL-2.0-or-later # This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
#
# ***** END GPL LICENCE BLOCK *****
#
# (c) 2021, Blender Foundation - Paul Golter
from media_viewer import ( from media_viewer import (
vars, vars,

View File

@ -1,6 +1,22 @@
# SPDX-FileCopyrightText: 2021 Blender Studio Tools Authors # ***** BEGIN GPL LICENSE BLOCK *****
# #
# SPDX-License-Identifier: GPL-2.0-or-later # This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
#
# ***** END GPL LICENCE BLOCK *****
#
# (c) 2021, Blender Foundation
import sys import sys
import re import re

View File

@ -1,7 +1,3 @@
# SPDX-FileCopyrightText: 2021 Blender Studio Tools Authors
#
# SPDX-License-Identifier: GPL-2.0-or-later
from __future__ import annotations from __future__ import annotations
from typing import ( from typing import (
List, List,
@ -20,6 +16,7 @@ from copy import copy
import bpy import bpy
import gpu import gpu
import bgl
import blf import blf
from gpu_extras.batch import batch_for_shader from gpu_extras.batch import batch_for_shader
from bpy.app.handlers import persistent from bpy.app.handlers import persistent
@ -97,7 +94,7 @@ def draw_toggle(region_name: str):
bot_right = (top_left[0] + width, top_left[1] - height) bot_right = (top_left[0] + width, top_left[1] - height)
coordinates = [top_left, top_right, bot_left, bot_right] coordinates = [top_left, top_right, bot_left, bot_right]
shader = gpu.shader.from_builtin("UNIFORM_COLOR") shader = gpu.shader.from_builtin("2D_UNIFORM_COLOR")
batch = batch_for_shader( batch = batch_for_shader(
shader, shader,
"TRIS", "TRIS",
@ -117,7 +114,7 @@ def draw_text(region_name: str):
y = region.height + offset_y y = region.height + offset_y
x = 0 + offset_x x = 0 + offset_x
font_id = 0 font_id = 0
gpu.state.blend_set('ALPHA') bgl.glEnable(bgl.GL_BLEND)
blf.position(font_id, x, y, 0) blf.position(font_id, x, y, 0)
blf.size(font_id, 12, 72) blf.size(font_id, 12, 72)
blf.color(font_id, 1, 1, 1, 0.9) blf.color(font_id, 1, 1, 1, 0.9)
@ -378,7 +375,7 @@ class ButtonDrawer:
def __init__( def __init__(
self, self,
): ):
self._shader = gpu.shader.from_builtin("UNIFORM_COLOR") self._shader = gpu.shader.from_builtin("2D_UNIFORM_COLOR")
self.draw_arrow = True self.draw_arrow = True
self.draw_rect = False self.draw_rect = False
self._arrow_direction = "UP" self._arrow_direction = "UP"
@ -401,8 +398,8 @@ class ButtonDrawer:
def draw_button( def draw_button(
self, button: Button, region: bpy.types.Region, color: Float4 self, button: Button, region: bpy.types.Region, color: Float4
) -> None: ) -> None:
gpu.state.blend_set('ALPHA') bgl.glEnable(bgl.GL_BLEND)
gpu.state.line_width_set(0) bgl.glLineWidth(0)
coords = button.get_region_coords(region) coords = button.get_region_coords(region)
# Draw rectangle. # Draw rectangle.
@ -444,7 +441,7 @@ class ButtonDrawer:
return return
# Create line batch and draw it. # Create line batch and draw it.
gpu.state.line_width_set(3) bgl.glLineWidth(3)
self._shader.bind() self._shader.bind()
self._shader.uniform_float("color", color) self._shader.uniform_float("color", color)
# print(f"Drawing points: {line_pos}") # print(f"Drawing points: {line_pos}")

View File

@ -1,6 +1,22 @@
# SPDX-FileCopyrightText: 2021 Blender Studio Tools Authors # ***** BEGIN GPL LICENSE BLOCK *****
# #
# SPDX-License-Identifier: GPL-2.0-or-later # This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
#
# ***** END GPL LICENCE BLOCK *****
#
# (c) 2021, Blender Foundation - Paul Golter
import bpy import bpy
from typing import Any, Union, Dict, List, Tuple from typing import Any, Union, Dict, List, Tuple

View File

@ -1,12 +1,29 @@
# SPDX-FileCopyrightText: 2021 Blender Studio Tools Authors # ***** BEGIN GPL LICENSE BLOCK *****
# #
# SPDX-License-Identifier: GPL-2.0-or-later # This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
#
# ***** END GPL LICENCE BLOCK *****
#
# (c) 2021, Blender Foundation - Paul Golter
from typing import List, Dict, Tuple, Union, Any, Optional, Set from typing import List, Dict, Tuple, Union, Any, Optional, Set
from pathlib import Path from pathlib import Path
import bpy import bpy
import gpu import gpu
import bgl
import gpu_extras.presets import gpu_extras.presets
from mathutils import Matrix from mathutils import Matrix
@ -201,7 +218,6 @@ class MV_OT_render_review_img_editor(bpy.types.Operator):
layer_index=layer_idx, layer_index=layer_idx,
pass_index=pass_idx, pass_index=pass_idx,
) )
image_gpu_tex = gpu.texture.from_image(image)
# Create a Buffer on GPU that will be used to first render the image into, # Create a Buffer on GPU that will be used to first render the image into,
# then the annotation. # then the annotation.
@ -216,7 +232,8 @@ class MV_OT_render_review_img_editor(bpy.types.Operator):
with frame_buffer.bind(): with frame_buffer.bind():
# Debugging: Flood image with color. # Debugging: Flood image with color.
#frame_buffer.clear(color=(0.0, 0.0, 0.0, 1.0), depth=1.0, stencil=0) # bgl.glClearColor(0, 1, 0, 1)
# bgl.glClear(bgl.GL_COLOR_BUFFER_BIT)
with gpu.matrix.push_pop(): with gpu.matrix.push_pop():
# Our drawing is not in the right place, we need to use # Our drawing is not in the right place, we need to use
@ -235,16 +252,21 @@ class MV_OT_render_review_img_editor(bpy.types.Operator):
gpu.matrix.load_projection_matrix(mat) gpu.matrix.load_projection_matrix(mat)
# Draw the texture. # Draw the texture.
gpu_extras.presets.draw_texture_2d(image_gpu_tex, (0, 0), 1, 1) gpu_extras.presets.draw_texture_2d(image.bindcode, (0, 0), 1, 1)
# Draw grease pencil over it. # Draw grease pencil over it.
gpu_opsdata.draw_callback(GP_DRAWER, frame=frame) gpu_opsdata.draw_callback(GP_DRAWER, frame=frame)
buffer = gpu_texture.read() # Create the buffer with dimensions: r, g, b, a (width * height * 4)
# Make sure that we use bgl.GL_FLOAT as this solves the colorspace issue
# that the saved image would be in linear space. (?)
buffer = bgl.Buffer(bgl.GL_FLOAT, width * height * 4)
bgl.glReadBuffer(bgl.GL_BACK)
bgl.glReadPixels(0, 0, width, height, bgl.GL_RGBA, bgl.GL_FLOAT, buffer)
# new_image.scale(width, height) does not seem to do a difference? # new_image.scale(width, height) does not seem to do a difference?
# Set new_image.pixels to the composited buffer. # Set new_image.pixels to the composited buffer.
new_image.pixels = [f_val for row in buffer.to_list() for color in row for f_val in color] new_image.pixels = [v for v in buffer]
return new_image return new_image
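Editor's note: in this file the changed lines pair a `gpu.texture.from_image()` / `GPUTexture.read()` readback with the older `bgl.Buffer` + `glReadPixels()` approach. A minimal sketch of the texture-read path (assuming Blender 3.x or newer with a working GPU context; `src_name`, `dst_name` and the matching image sizes are assumptions for illustration only):

```python
# Illustrative sketch, not part of this diff: copy an image's pixels through
# a GPUTexture read, mirroring the gpu-based readback shown above.
import bpy
import gpu

def copy_pixels_via_gpu(src_name: str, dst_name: str) -> None:
    src_image = bpy.data.images[src_name]   # assumed to exist
    dst_image = bpy.data.images[dst_name]   # assumed same size, RGBA

    gpu_tex = gpu.texture.from_image(src_image)  # upload to a GPUTexture
    buffer = gpu_tex.read()                      # rows of RGBA float pixels

    # Flatten rows -> pixels -> channels, as in the new code above.
    dst_image.pixels = [
        f_val for row in buffer.to_list() for color in row for f_val in color
    ]
```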

View File

@ -1,11 +1,29 @@
# SPDX-FileCopyrightText: 2021 Blender Studio Tools Authors # ***** BEGIN GPL LICENSE BLOCK *****
# #
# SPDX-License-Identifier: GPL-2.0-or-later # This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
#
# ***** END GPL LICENCE BLOCK *****
#
# (c) 2021, Blender Foundation - Paul Golter
import math
from typing import List, Dict, Tuple, Union, Any, Optional, Set from typing import List, Dict, Tuple, Union, Any, Optional, Set
import bpy import bpy
import gpu import gpu
import bgl
from gpu_extras.batch import batch_for_shader from gpu_extras.batch import batch_for_shader
from . import ops from . import ops
@ -18,7 +36,32 @@ Float2 = Tuple[float, float]
Float3 = Tuple[float, float, float] Float3 = Tuple[float, float, float]
Float4 = Tuple[float, float, float, float] Float4 = Tuple[float, float, float, float]
line_shader = gpu.shader.from_builtin('UNIFORM_COLOR') # Glsl.
gpu_vertex_shader = """
uniform mat4 ModelViewProjectionMatrix;
layout (location = 0) in vec2 pos;
layout (location = 1) in vec4 color;
out vec4 lineColor; // output to the fragment shader
void main()
{
gl_Position = ModelViewProjectionMatrix * vec4(pos.x, pos.y, 0.0, 1.0);
lineColor = color;
}
"""
gpu_fragment_shader = """
out vec4 fragColor;
in vec4 lineColor;
void main()
{
fragColor = lineColor;
}
"""
def get_gpframe_coords( def get_gpframe_coords(
gpframe: bpy.types.GPencilFrame, do_3_dimensions=False gpframe: bpy.types.GPencilFrame, do_3_dimensions=False
@ -82,6 +125,20 @@ def lin2srgb(lin: float) -> float:
class GPDrawerCustomShader: class GPDrawerCustomShader:
def __init__(self):
self._format = gpu.types.GPUVertFormat()
# To find out what attributes are available look here:
# ./source/blender/gpu/GPU_shader.h
self._pos_id = self._format.attr_add(
id="pos", comp_type="F32", len=2, fetch_mode="FLOAT"
)
self._color_id = self._format.attr_add(
id="color", comp_type="F32", len=4, fetch_mode="FLOAT"
)
self.shader = gpu.types.GPUShader(gpu_vertex_shader, gpu_fragment_shader)
def draw(self, gpframe: bpy.types.GPencilFrame, line_widht: int, color: Float4): def draw(self, gpframe: bpy.types.GPencilFrame, line_widht: int, color: Float4):
coords = get_gpframe_coords(gpframe) coords = get_gpframe_coords(gpframe)
@ -89,19 +146,26 @@ class GPDrawerCustomShader:
if not coords: if not coords:
return return
gpu.state.blend_set('ALPHA') # TODO: replace with gpu.state.line_width_set(width)
gpu.state.line_width_set(line_widht) bgl.glEnable(bgl.GL_BLEND)
line_shader.uniform_float("color", color) bgl.glLineWidth(line_widht)
batch = batch_for_shader(
line_shader, 'LINES', colors = [color for c in coords]
{"pos": coords}
) vbo = gpu.types.GPUVertBuf(len=len(coords), format=self._format)
batch.draw(line_shader) vbo.attr_fill(id=self._pos_id, data=coords)
vbo.attr_fill(id=self._color_id, data=colors)
batch = gpu.types.GPUBatch(type="LINES", buf=vbo)
batch.program_set(self.shader)
batch.draw()
class GPDrawerBuiltInShader: class GPDrawerBuiltInShader:
def __init__(self): def __init__(self):
self.shader = gpu.shader.from_builtin("UNIFORM_COLOR") # 2D_UNIFORM_COLOR is not implemented, documentation is deprecated
# use 3D_UNIFORM_COLOR instead.
self.shader = gpu.shader.from_builtin("3D_UNIFORM_COLOR")
def draw(self, gpframe: bpy.types.GPencilLayer, line_width: int, color: Float4): def draw(self, gpframe: bpy.types.GPencilLayer, line_width: int, color: Float4):
@ -116,8 +180,8 @@ class GPDrawerBuiltInShader:
# Question: Can I change the line width using attributes? # Question: Can I change the line width using attributes?
# Or do I have to do it this way? # Or do I have to do it this way?
# TODO: replace with gpu.state.line_width_set(width) # TODO: replace with gpu.state.line_width_set(width)
gpu.state.blend_set('ALPHA') bgl.glEnable(bgl.GL_BLEND)
gpu.state.line_width_set(line_width) bgl.glLineWidth(line_width)
# print(f"Drawing coords: {coords}") # print(f"Drawing coords: {coords}")
batch = batch_for_shader(self.shader, "LINES", {"pos": coords}) batch = batch_for_shader(self.shader, "LINES", {"pos": coords})

View File

@ -1,6 +1,22 @@
# SPDX-FileCopyrightText: 2021 Blender Studio Tools Authors # ***** BEGIN GPL LICENSE BLOCK *****
# #
# SPDX-License-Identifier: GPL-2.0-or-later # This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
#
# ***** END GPL LICENCE BLOCK *****
#
# (c) 2021, Blender Foundation - Paul Golter
import logging import logging

View File

@ -1,6 +1,22 @@
# SPDX-FileCopyrightText: 2021 Blender Studio Tools Authors # ***** BEGIN GPL LICENSE BLOCK *****
# #
# SPDX-License-Identifier: GPL-2.0-or-later # This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
#
# ***** END GPL LICENCE BLOCK *****
#
# (c) 2021, Blender Foundation - Paul Golter
import subprocess import subprocess
from pathlib import Path from pathlib import Path
@ -375,7 +391,7 @@ class MV_OT_toggle_timeline(bpy.types.Operator):
if area_timeline: if area_timeline:
# Timeline needs to be closed. # Timeline needs to be closed.
timeline_state = TimelineState(area=area_timeline) timeline_state = TimelineState(area=area_timeline)
opsdata.close_area(context, area_timeline) opsdata.close_area(area_timeline)
logger.info("Hide timeline") logger.info("Hide timeline")
elif area_media: elif area_media:
@ -437,7 +453,7 @@ class MV_OT_toggle_filebrowser(bpy.types.Operator):
# If sqe and timeline visible but not filebrowser # If sqe and timeline visible but not filebrowser
# we need to first close timeline and then open it after to # we need to first close timeline and then open it after to
# get correct layout. # get correct layout.
opsdata.close_area(context, area_time) opsdata.close_area(area_time)
# We need to do some custom context assembly here # We need to do some custom context assembly here
# because the bpy.ops.screen.area_close() sets context.screen to NULL. # because the bpy.ops.screen.area_close() sets context.screen to NULL.
@ -452,8 +468,7 @@ class MV_OT_toggle_filebrowser(bpy.types.Operator):
) )
# Screen must be re-drawn, otherwise space.params is None. # Screen must be re-drawn, otherwise space.params is None.
with context.temp_override(**ctx): bpy.ops.wm.redraw_timer(ctx, type="DRAW_WIN_SWAP", iterations=1)
bpy.ops.wm.redraw_timer(type="DRAW_WIN_SWAP", iterations=1)
# Restore previous filebrowser state. # Restore previous filebrowser state.
filebrowser_state.apply_to_area(area_fb) filebrowser_state.apply_to_area(area_fb)
@ -498,7 +513,7 @@ class MV_OT_toggle_filebrowser(bpy.types.Operator):
# Save filebrowser state. # Save filebrowser state.
filebrowser_state = FileBrowserState(area=area_fb) filebrowser_state = FileBrowserState(area=area_fb)
opsdata.close_area(context, area_fb) opsdata.close_area(area_fb)
logger.info("Hide filebrowser") logger.info("Hide filebrowser")
return {"FINISHED"} return {"FINISHED"}
@ -609,9 +624,7 @@ class MV_OT_set_media_area_type(bpy.types.Operator):
# Set annotate tool as active. # Set annotate tool as active.
if area_media.type in ["SEQUENCE_EDITOR", "IMAGE_EDITOR"]: if area_media.type in ["SEQUENCE_EDITOR", "IMAGE_EDITOR"]:
ctx = opsdata.get_context_for_area(area_media) bpy.ops.wm.tool_set_by_id({"area": area_media}, name="builtin.annotate")
with context.temp_override(**ctx):
bpy.ops.wm.tool_set_by_id(name="builtin.annotate")
logger.info(f"Changed active media area to: {area_media.type}") logger.info(f"Changed active media area to: {area_media.type}")
@ -636,8 +649,7 @@ class MV_OT_screen_full_area(bpy.types.Operator):
# active_media_area_obj = area_media # active_media_area_obj = area_media
ctx = opsdata.get_context_for_area(area_media) ctx = opsdata.get_context_for_area(area_media)
with context.temp_override(**ctx): bpy.ops.screen.screen_full_area(ctx, use_hide_panels=True)
bpy.ops.screen.screen_full_area(use_hide_panels=True)
is_fullscreen = not is_fullscreen is_fullscreen = not is_fullscreen
# Select previous filepath if in FILE_BROWSER area. # Select previous filepath if in FILE_BROWSER area.
@ -687,8 +699,7 @@ class MV_OT_jump_folder_up(bpy.types.Operator):
return {"CANCELLED"} return {"CANCELLED"}
ctx = opsdata.get_context_for_area(area_fb) ctx = opsdata.get_context_for_area(area_fb)
with context.temp_override(**ctx): bpy.ops.file.parent(ctx)
bpy.ops.file.parent()
return {"FINISHED"} return {"FINISHED"}
@ -750,8 +761,7 @@ class MV_OT_walk_bookmarks(bpy.types.Operator):
# Run Cleanup. # Run Cleanup.
ctx = opsdata.get_context_for_area(area_fb) ctx = opsdata.get_context_for_area(area_fb)
with context.temp_override(**ctx): bpy.ops.file.bookmark_cleanup(ctx)
bpy.ops.file.bookmark_cleanup()
# !!!!! # !!!!!
# The following section is the most stupid code in the universe. # The following section is the most stupid code in the universe.
@ -903,8 +913,7 @@ class MV_OT_animation_play(bpy.types.Operator):
ctx = opsdata.get_context_for_area(area_media) ctx = opsdata.get_context_for_area(area_media)
with context.temp_override(**ctx): bpy.ops.screen.animation_play(ctx)
bpy.ops.screen.animation_play()
return {"FINISHED"} return {"FINISHED"}
@ -934,8 +943,7 @@ class MV_OT_next_media_file(bpy.types.Operator):
# If not fullscreen, just call select_wall op # If not fullscreen, just call select_wall op
area_fb = opsdata.find_area(context, "FILE_BROWSER") area_fb = opsdata.find_area(context, "FILE_BROWSER")
ctx = opsdata.get_context_for_area(area_fb) ctx = opsdata.get_context_for_area(area_fb)
with context.temp_override(**ctx): bpy.ops.file.select_walk(ctx, "INVOKE_DEFAULT", direction=self.direction)
bpy.ops.file.select_walk("INVOKE_DEFAULT", direction=self.direction)
return {"FINISHED"} return {"FINISHED"}
# Get all files and folders and sort them alphabetically. # Get all files and folders and sort them alphabetically.
@ -1032,8 +1040,7 @@ class MV_OT_set_fb_display_type(bpy.types.Operator):
# Redraw if needed to update params. # Redraw if needed to update params.
if not area_fb.spaces.active.params: if not area_fb.spaces.active.params:
with context.temp_override(**ctx): bpy.ops.wm.redraw_timer(ctx, type="DRAW_WIN_SWAP", iterations=1)
bpy.ops.wm.redraw_timer(type="DRAW_WIN_SWAP", iterations=1)
# Set display type. # Set display type.
area_fb.spaces.active.params.display_type = self.display_type area_fb.spaces.active.params.display_type = self.display_type
@ -1116,13 +1123,15 @@ class MV_OT_pan_media_view(bpy.types.Operator):
if area_media.type == "IMAGE_EDITOR": if area_media.type == "IMAGE_EDITOR":
ctx = opsdata.get_context_for_area(area_media) ctx = opsdata.get_context_for_area(area_media)
with context.temp_override(**ctx): bpy.ops.image.view_pan(
bpy.ops.image.view_pan("EXEC_DEFAULT", offset=(self.deltax, self.deltay)) ctx, "EXEC_DEFAULT", offset=(self.deltax, self.deltay)
)
elif area_media.type == "SEQUENCE_EDITOR": elif area_media.type == "SEQUENCE_EDITOR":
ctx = opsdata.get_context_for_area(area_media, region_type="PREVIEW") ctx = opsdata.get_context_for_area(area_media, region_type="PREVIEW")
with context.temp_override(**ctx): bpy.ops.view2d.pan(
bpy.ops.view2d.pan("EXEC_DEFAULT", deltax=self.deltax, deltay=self.deltay) ctx, "EXEC_DEFAULT", deltax=self.deltax, deltay=self.deltay
)
# Redraw Area. # Redraw Area.
area_media.tag_redraw() area_media.tag_redraw()
@ -1153,21 +1162,19 @@ class MV_OT_zoom_media_view(bpy.types.Operator):
if area_media.type == "IMAGE_EDITOR": if area_media.type == "IMAGE_EDITOR":
ctx = opsdata.get_context_for_area(area_media) ctx = opsdata.get_context_for_area(area_media)
with context.temp_override(**ctx):
if self.direction == "IN": if self.direction == "IN":
bpy.ops.image.view_zoom_in("EXEC_DEFAULT", location=(0.5, 0.5)) bpy.ops.image.view_zoom_in(ctx, "EXEC_DEFAULT", location=(0.5, 0.5))
elif self.direction == "OUT": elif self.direction == "OUT":
bpy.ops.image.view_zoom_out("EXEC_DEFAULT", location=(0.5, 0.5)) bpy.ops.image.view_zoom_out(ctx, "EXEC_DEFAULT", location=(0.5, 0.5))
elif area_media.type == "SEQUENCE_EDITOR": elif area_media.type == "SEQUENCE_EDITOR":
ctx = opsdata.get_context_for_area(area_media, region_type="PREVIEW") ctx = opsdata.get_context_for_area(area_media, region_type="PREVIEW")
with context.temp_override(**ctx):
if self.direction == "IN": if self.direction == "IN":
bpy.ops.view2d.zoom_in("EXEC_DEFAULT") bpy.ops.view2d.zoom_in(ctx, "EXEC_DEFAULT")
elif self.direction == "OUT": elif self.direction == "OUT":
bpy.ops.view2d.zoom_out("EXEC_DEFAULT") bpy.ops.view2d.zoom_out(ctx, "EXEC_DEFAULT")
# Redraw Area. # Redraw Area.
area_media.tag_redraw() area_media.tag_redraw()
@ -1225,7 +1232,7 @@ class MV_OT_delete_active_gpencil_frame(bpy.types.Operator):
return {"CANCELLED"} return {"CANCELLED"}
# Get active layer and remove active frame. # Get active layer and remove active frame.
active_layer = gp_obj.layers[gp_obj.layers.active_index] active_layer = gp_obj.layers.active
if active_layer.active_frame: if active_layer.active_frame:
active_layer.frames.remove(active_layer.active_frame) active_layer.frames.remove(active_layer.active_frame)
@ -1249,9 +1256,8 @@ class MV_OT_delete_all_gpencil_frames(bpy.types.Operator):
return {"CANCELLED"} return {"CANCELLED"}
# Delete all frames of active layer. # Delete all frames of active layer.
active_layer = gp_obj.layers[gp_obj.layers.active_index] for i in reversed(range(len(gp_obj.layers.active.frames))):
for i in reversed(range(len(active_layer.frames))): gp_obj.layers.active.frames.remove(gp_obj.layers.active.frames[i])
active_layer.frames.remove(active_layer.frames[i])
return {"FINISHED"} return {"FINISHED"}
@ -1601,8 +1607,7 @@ class MV_OT_flip_media_view(bpy.types.Operator):
elif active_media_area == "IMAGE_EDITOR": elif active_media_area == "IMAGE_EDITOR":
ctx = opsdata.get_context_for_area(area) ctx = opsdata.get_context_for_area(area)
with context.temp_override(**ctx): bpy.ops.image.flip(ctx, use_flip_x=True)
bpy.ops.image.flip(use_flip_x=True)
else: else:
return {"CANCELLED"} return {"CANCELLED"}

View File

@ -1,6 +1,22 @@
# SPDX-FileCopyrightText: 2021 Blender Studio Tools Authors # ***** BEGIN GPL LICENSE BLOCK *****
# #
# SPDX-License-Identifier: GPL-2.0-or-later # This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
#
# ***** END GPL LICENCE BLOCK *****
#
# (c) 2021, Blender Foundation - Paul Golter
import re import re
import json import json
@ -93,11 +109,6 @@ def find_area(context: bpy.types.Context, area_name: str) -> Optional[bpy.types.
else: else:
screen = context.screen screen = context.screen
# On startup (load-post handler) there's no window/screen set in context.
if not screen:
win = context.window_manager.windows[0]
screen = win.screen
for area in screen.areas: for area in screen.areas:
if area.type == area_name: if area.type == area_name:
return area return area
@ -111,8 +122,7 @@ def fit_timeline_view(context: bpy.types.Context, area: bpy.types.Area = None) -
return return
ctx = get_context_for_area(area) ctx = get_context_for_area(area)
with context.temp_override(**ctx): bpy.ops.action.view_all(ctx)
bpy.ops.action.view_all()
def fit_image_editor_view( def fit_image_editor_view(
@ -124,8 +134,7 @@ def fit_image_editor_view(
return return
ctx = get_context_for_area(area) ctx = get_context_for_area(area)
with context.temp_override(**ctx): bpy.ops.image.view_all(ctx, fit_view=True)
bpy.ops.image.view_all(fit_view=True)
def fit_sqe_preview(context: bpy.types.Context, area: bpy.types.Area = None) -> None: def fit_sqe_preview(context: bpy.types.Context, area: bpy.types.Area = None) -> None:
@ -135,8 +144,7 @@ def fit_sqe_preview(context: bpy.types.Context, area: bpy.types.Area = None) ->
return return
ctx = get_context_for_area(area) ctx = get_context_for_area(area)
with context.temp_override(**ctx): bpy.ops.sequencer.view_all_preview(ctx)
bpy.ops.sequencer.view_all_preview()
def fit_view(context: bpy.types.Context, area: bpy.types.Area) -> None: def fit_view(context: bpy.types.Context, area: bpy.types.Area) -> None:
@ -182,8 +190,7 @@ def split_area(
ctx = get_context_for_area(area_split) ctx = get_context_for_area(area_split)
start_areas = screen.areas[:] start_areas = screen.areas[:]
with context.temp_override(**ctx): bpy.ops.screen.area_split(ctx, direction=direction, factor=factor)
bpy.ops.screen.area_split(direction=direction, factor=factor)
for area in screen.areas: for area in screen.areas:
if area not in start_areas: if area not in start_areas:
@ -191,16 +198,15 @@ def split_area(
return area return area
def close_area(context: bpy.types.Context, area: bpy.types.Area) -> None: def close_area(area: bpy.types.Area) -> None:
ctx = get_context_for_area(area) ctx = get_context_for_area(area)
with context.temp_override(**ctx): bpy.ops.screen.area_close(ctx)
bpy.ops.screen.area_close()
def setup_filebrowser_area(filebrowser_area: bpy.types.Area) -> None: def setup_filebrowser_area(filebrowser_area: bpy.types.Area) -> None:
params = filebrowser_area.spaces.active.params params = filebrowser_area.spaces.active.params
params.display_type = "THUMBNAIL" params.display_type = "THUMBNAIL"
params.display_size_discrete = "NORMAL" params.display_size = "NORMAL"
params.use_filter = True params.use_filter = True
params.use_filter_image = True params.use_filter_image = True
params.use_filter_folder = True params.use_filter_folder = True

View File

@ -1,6 +1,23 @@
# SPDX-FileCopyrightText: 2021 Blender Studio Tools Authors # ***** BEGIN GPL LICENSE BLOCK *****
# #
# SPDX-License-Identifier: GPL-2.0-or-later # This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
#
# ***** END GPL LICENCE BLOCK *****
#
# (c) 2021, Blender Foundation - Paul Golter
from pathlib import Path from pathlib import Path
from typing import Tuple, Any, List, Union, Dict, Optional from typing import Tuple, Any, List, Union, Dict, Optional

View File

@ -1,6 +1,22 @@
# SPDX-FileCopyrightText: 2021 Blender Studio Tools Authors # ***** BEGIN GPL LICENSE BLOCK *****
# #
# SPDX-License-Identifier: GPL-2.0-or-later # This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
#
# ***** END GPL LICENCE BLOCK *****
#
# (c) 2021, Blender Foundation - Paul Golter
import re import re
from typing import List, Dict, Set from typing import List, Dict, Set

View File

@ -1,6 +1,22 @@
# SPDX-FileCopyrightText: 2021 Blender Studio Tools Authors # ***** BEGIN GPL LICENSE BLOCK *****
# #
# SPDX-License-Identifier: GPL-2.0-or-later # This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
#
# ***** END GPL LICENCE BLOCK *****
#
# (c) 2021, Blender Foundation - Paul Golter
from __future__ import annotations from __future__ import annotations
import bpy import bpy
@ -72,7 +88,7 @@ class SpaceData:
class FbParams: class FbParams:
directory: str = "" directory: str = ""
display_type: str = "THUMBNAIL" display_type: str = "THUMBNAIL"
display_size_discrete: str = "NORMAL" display_size: str = "NORMAL"
use_filter: bool = True use_filter: bool = True
use_filter_image: bool = True use_filter_image: bool = True
use_filter_folder: bool = True use_filter_folder: bool = True

View File

@ -1,7 +1,3 @@
# SPDX-FileCopyrightText: 2021 Blender Studio Tools Authors
#
# SPDX-License-Identifier: GPL-2.0-or-later
from typing import Set, Union, Optional, List, Dict, Any from typing import Set, Union, Optional, List, Dict, Any
import bpy import bpy
@ -178,10 +174,6 @@ class MV_PT_review_settings(bpy.types.Panel):
context.preferences.view, context.preferences.view,
"show_playback_fps", "show_playback_fps",
) )
layout.row().prop(
context.preferences.view,
"ui_scale",
)
def MV_TOPBAR_settings(self: Any, context: bpy.types.Context) -> None: def MV_TOPBAR_settings(self: Any, context: bpy.types.Context) -> None:

View File

@ -1,6 +1,22 @@
# SPDX-FileCopyrightText: 2021 Blender Studio Tools Authors # ***** BEGIN GPL LICENSE BLOCK *****
# #
# SPDX-License-Identifier: GPL-2.0-or-later # This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
#
# ***** END GPL LICENCE BLOCK *****
#
# (c) 2021, Blender Foundation - Paul Golter
from pathlib import Path from pathlib import Path
import bpy import bpy

Binary file not shown (before: 131 B, after: 479 KiB).

Binary file not shown (before: 129 B, after: 6.0 KiB).

Binary file not shown (before: 131 B, after: 150 KiB).

Binary file not shown (before: 130 B, after: 34 KiB).

View File

@ -1,2 +0,0 @@
# Env settings for docs publishing
DESTINATION=user@domain:/path/to/destination/

View File

@ -1,338 +0,0 @@
<script setup>
/* Import Vue components for search VPNavBarSearch default theme */
/* TODO: check if import paths are correct */
import VPNavBarAppearance from '../../node_modules/vitepress/dist/client/theme-default/components/VPNavBarAppearance.vue'
import VPNavBarExtra from '../../node_modules/vitepress/dist/client/theme-default/components/VPNavBarExtra.vue'
import VPNavBarHamburger from '../../node_modules/vitepress/dist/client/theme-default/components/VPNavBarHamburger.vue'
import VPNavBarMenu from '../../node_modules/vitepress/dist/client/theme-default/components/VPNavBarMenu.vue'
import VPNavBarSearch from '../../node_modules/vitepress/dist/client/theme-default/components/VPNavBarSearch.vue'
import VPNavBarSocialLinks from '../../node_modules/vitepress/dist/client/theme-default/components/VPNavBarSocialLinks.vue'
import VPNavBarTitle from '../../node_modules/vitepress/dist/client/theme-default/components/VPNavBarTitle.vue'
import VPNavBarTranslations from '../../node_modules/vitepress/dist/client/theme-default/components/VPNavBarTranslations.vue'
</script>
<template>
<div class="nav-global">
<div class="nav-global-container">
<nav>
<a href="https://studio.blender.org" class="nav-global-logo">
<svg fill-rule="nonzero" viewBox="0 0 200 162.05">
<path d="M61.1 104.56c.05 2.6.88 7.66 2.12 11.61a61.27 61.27 0 0 0 13.24 22.92 68.39 68.39 0 0 0 23.17 16.64 74.46 74.46 0 0 0 30.42 6.32 74.52 74.52 0 0 0 30.4-6.42 68.87 68.87 0 0 0 23.15-16.7 61.79 61.79 0 0 0 13.23-22.97 58.06 58.06 0 0 0 2.07-25.55 59.18 59.18 0 0 0-8.44-23.1 64.45 64.45 0 0 0-15.4-16.98h.02L112.76 2.46l-.16-.12c-4.09-3.14-10.96-3.13-15.46.02-4.55 3.18-5.07 8.44-1.02 11.75l-.02.02 26 21.14-79.23.08h-.1c-6.55.01-12.85 4.3-14.1 9.74-1.27 5.53 3.17 10.11 9.98 10.14v.02l40.15-.07-71.66 55-.27.2c-6.76 5.18-8.94 13.78-4.69 19.23 4.32 5.54 13.51 5.55 20.34.03l39.1-32s-.56 4.32-.52 6.91zm100.49 14.47c-8.06 8.2-19.34 12.86-31.54 12.89-12.23.02-23.5-4.6-31.57-12.79-3.93-4-6.83-8.59-8.61-13.48a35.57 35.57 0 0 1 2.34-29.25 39.1 39.1 0 0 1 9.58-11.4 44.68 44.68 0 0 1 28.24-9.85 44.59 44.59 0 0 1 28.24 9.77 38.94 38.94 0 0 1 9.58 11.36 35.58 35.58 0 0 1 4.33 14.18 35.1 35.1 0 0 1-1.98 15.05 37.7 37.7 0 0 1-8.61 13.52zm-57.6-27.91a23.55 23.55 0 0 1 8.55-16.68 28.45 28.45 0 0 1 18.39-6.57 28.5 28.5 0 0 1 18.38 6.57 23.57 23.57 0 0 1 8.55 16.67c.37 6.83-2.37 13.19-7.2 17.9a28.18 28.18 0 0 1-19.73 7.79c-7.83 0-14.84-3-19.75-7.8a23.13 23.13 0 0 1-7.19-17.88z"></path>
</svg><strong>Blender Studio</strong>
</a>
<button class="nav-global-logo js-nav-global-dropdown-toggle" data-dropdown-id="nav-global-nav-links">
<svg fill-rule="nonzero" viewBox="0 0 850.2 162.05">
<path d="M61.1 104.56c.05 2.6.88 7.66 2.12 11.61a61.27 61.27 0 0 0 13.24 22.92 68.39 68.39 0 0 0 23.17 16.64 74.46 74.46 0 0 0 30.42 6.32 74.52 74.52 0 0 0 30.4-6.42 68.87 68.87 0 0 0 23.15-16.7 61.79 61.79 0 0 0 13.23-22.97 58.06 58.06 0 0 0 2.07-25.55 59.18 59.18 0 0 0-8.44-23.1 64.45 64.45 0 0 0-15.4-16.98h.02L112.76 2.46l-.16-.12c-4.09-3.14-10.96-3.13-15.46.02-4.55 3.18-5.07 8.44-1.02 11.75l-.02.02 26 21.14-79.23.08h-.1c-6.55.01-12.85 4.3-14.1 9.74-1.27 5.53 3.17 10.11 9.98 10.14v.02l40.15-.07-71.66 55-.27.2c-6.76 5.18-8.94 13.78-4.69 19.23 4.32 5.54 13.51 5.55 20.34.03l39.1-32s-.56 4.32-.52 6.91zm100.49 14.47c-8.06 8.2-19.34 12.86-31.54 12.89-12.23.02-23.5-4.6-31.57-12.79-3.93-4-6.83-8.59-8.61-13.48a35.57 35.57 0 0 1 2.34-29.25 39.1 39.1 0 0 1 9.58-11.4 44.68 44.68 0 0 1 28.24-9.85 44.59 44.59 0 0 1 28.24 9.77 38.94 38.94 0 0 1 9.58 11.36 35.58 35.58 0 0 1 4.33 14.18 35.1 35.1 0 0 1-1.98 15.05 37.7 37.7 0 0 1-8.61 13.52zm-57.6-27.91a23.55 23.55 0 0 1 8.55-16.68 28.45 28.45 0 0 1 18.39-6.57 28.5 28.5 0 0 1 18.38 6.57 23.57 23.57 0 0 1 8.55 16.67c.37 6.83-2.37 13.19-7.2 17.9a28.18 28.18 0 0 1-19.73 7.79c-7.83 0-14.84-3-19.75-7.8a23.13 23.13 0 0 1-7.19-17.88z"></path>
<path d="M829.17 133.76h-15.9V64.39h15.13l.77 13.59zM850.07 79q-1.47-.25-3.14-.38-1.6-.13-3.2-.13-5.26 0-8.8 1.92-3.45 1.86-5.25 5.39-1.8 3.46-2.11 8.2l-3.66.07q0-8.78 2.31-15.77 2.3-6.99 6.92-11.1 4.62-4.1 11.54-4.1 1.35 0 3.02.26 1.66.26 2.5.58zm-76.55 56.04q-10.32 0-17.82-4.42-7.5-4.5-11.55-12.06-4.03-7.63-4.03-17.05v-2.63q0-10.84 4.1-18.85 4.1-8.08 11.22-12.5 7.18-4.43 16.22-4.43 10 0 16.6 4.36 6.6 4.3 9.88 12 3.27 7.62 3.27 17.69V104h-53.67V92.53h37.96v-1.22q-.13-4.04-1.54-7.56-1.4-3.53-4.49-5.7-3.01-2.19-8.07-2.19-5.07 0-8.53 2.89-3.46 2.82-5.26 8.01-1.8 5.13-1.8 12.12v2.63q0 5.9 2.19 10.58 2.18 4.68 6.34 7.43 4.17 2.7 9.94 2.7 5.58 0 9.87-2.18 4.36-2.18 7.5-6.29l8.34 8.34q-3.27 4.93-9.87 8.97-6.54 3.98-16.8 3.98zm-88.67 25.39h-15.9V64.39h14.68l1.22 13.33zm45-60.72q0 10.13-3.13 18.15-3.08 7.95-9.1 12.56-5.97 4.62-14.63 4.62-8.72 0-14.49-4.23-5.7-4.3-8.9-11.8-3.21-7.5-4.43-17.12v-4.55q1.22-10.26 4.42-17.89 3.2-7.7 8.91-11.99 5.71-4.36 14.3-4.36 8.79 0 14.81 4.43 6.03 4.42 9.1 12.37 3.15 7.95 3.15 18.47zm-15.9-1.34q0-6.22-1.6-11.29-1.53-5.06-5-8.01-3.4-2.95-8.9-2.95-3.98 0-6.87 1.35-2.88 1.28-4.87 3.65-1.92 2.37-3.01 5.64-1.1 3.2-1.54 7.05v11.6q.77 4.62 2.56 8.47 1.8 3.85 5.13 6.16 3.4 2.24 8.72 2.24 5.51 0 8.91-3.08 3.4-3.14 4.94-8.2 1.54-5.13 1.54-11.29zm-122.51.06q0-10.13 3.84-18.08 3.85-7.95 11.03-12.57 7.25-4.68 17.38-4.68 10.25 0 17.5 4.68 7.24 4.62 11.03 12.57 3.84 7.95 3.84 18.08v1.35q0 10.06-3.84 18.08-3.79 7.95-11.03 12.56-7.18 4.62-17.38 4.62-10.19 0-17.43-4.62-7.25-4.61-11.1-12.56-3.84-8.02-3.84-18.08zm15.9 1.35q0 6.15 1.73 11.28 1.73 5.07 5.32 8.14 3.65 3.08 9.42 3.08 5.71 0 9.3-3.08 3.65-3.07 5.32-8.14 1.73-5.13 1.73-11.28v-1.35q0-6.09-1.73-11.22-1.67-5.13-5.32-8.2-3.65-3.15-9.42-3.15-5.71 0-9.36 3.14-3.6 3.08-5.33 8.21-1.66 5.13-1.66 11.22zm-29.69 33.98h-15.9V35.28h15.9zm-56.67 1.28q-10.33 0-17.83-4.42-7.5-4.5-11.54-12.06-4.04-7.63-4.04-17.05v-2.63q0-10.84 4.1-18.85 4.11-8.08 11.23-12.5 7.18-4.43 16.22-4.43 10 0 16.6 4.36 6.6 4.3 9.88 12 3.27 7.62 3.27 17.69V104H495.2V92.53h37.96v-1.22q-.13-4.04-1.54-7.56-1.41-3.53-4.49-5.7-3.01-2.19-8.08-2.19-5.06 0-8.52 2.89-3.47 2.82-5.26 8.01-1.8 5.13-1.8 12.12v2.63q0 5.9 2.18 10.58t6.35 7.43q4.17 2.7 9.94 2.7 5.58 0 9.87-2.18 4.36-2.18 7.5-6.29l8.34 8.34q-3.27 4.93-9.88 8.97-6.54 3.98-16.8 3.98zM464.3 64.39h16.48l-23.98 69.37h-10.26l1.03-12.57zm-12.25 57 .77 12.37h-10.19l-24.17-69.37H435zm-65.88 13.65q-10.32 0-17.82-4.42-7.5-4.5-11.54-12.06-4.04-7.63-4.04-17.05v-2.63q0-10.84 4.1-18.85 4.1-8.08 11.22-12.5 7.18-4.43 16.22-4.43 10 0 16.6 4.36 6.61 4.3 9.88 12 3.27 7.62 3.27 17.69V104H360.4V92.53h37.95v-1.22q-.12-4.04-1.53-7.56-1.42-3.53-4.5-5.7-3-2.19-8.07-2.19t-8.53 2.89q-3.46 2.82-5.26 8.01-1.8 5.13-1.8 12.12v2.63q0 5.9 2.19 10.58 2.18 4.68 6.35 7.43 4.16 2.7 9.93 2.7 5.58 0 9.88-2.18 4.36-2.18 7.5-6.29l8.33 8.34q-3.27 4.93-9.87 8.97-6.54 3.98-16.8 3.98zm-107.64-1.28.12-13.27h19.75q8.6 0 14.36-3.72 5.77-3.72 8.72-10.65 2.95-6.98 2.95-16.67v-4.87q0-9.94-2.95-16.8-2.88-6.86-8.59-10.45-5.7-3.59-13.91-3.59h-20.84V40.41h20.84q12.5 0 21.93 5.51 9.48 5.52 14.8 15.45 5.33 9.94 5.33 23.34v4.74q0 13.47-5.32 23.4-5.33 9.94-14.94 15.46-9.56 5.45-22.38 5.45zm9.23 0h-16.54V40.4h16.54z"></path>
</svg>
<svg class="nav-global-icon nav-global-icon-dropdown-toggle" height="100px" width="100px" viewBox="0 0 1000 1000">
<path d="m 206.53824,376.41174 a 42,42 0 0 1 71,-29 l 221,220 220,-220 a 42,42 0 1 1 59,59 l -250,250 a 42,42 0 0 1 -59,0 l -250,-250 a 42,42 0 0 1 -12,-30 z"></path>
</svg>
</button>
<ul class="js-nav-global-dropdown nav-global-nav-links nav-global-dropdown" id="nav-global-nav-links">
<!-- TODO: use constant studioURL for links -->
<li>
<a class="js-nav-global-link" href="https://studio.blender.org/films">Films</a>
</li>
<li>
<a class="js-nav-global-link" href="https://studio.blender.org/projects">Projects</a>
</li>
<li>
<a class="js-nav-global-link" href="https://studio.blender.org/training">Training</a>
</li>
<li>
<a class="js-nav-global-link" href="https://studio.blender.org/characters">Characters</a>
</li>
<li>
<a class="js-nav-global-link nav-global-link-active" href="/tools/">Tools</a>
</li>
<li>
<a class="js-nav-global-link" href="https://studio.blender.org/blog">Blog</a>
</li>
</ul>
<ul class="nav-global-links-right">
<li>
<VPNavBarSearch class="search" />
</li>
<li>
<div class="nav-global-apps-dropdown-container">
<button
class="js-nav-global-dropdown-toggle"
data-dropdown-id="nav-global-apps-menu"
>
<svg class="nav-global-icon" height="100px" width="100px" viewBox="0 0 1000 1000">
<path d="m 150.5,899 a 50,50 0 0 1 -49,-50 V 749 a 50,50 0 0 1 49,-50 h 100 a 50,50 0 0 1 50,50 v 100 a 50,50 0 0 1 -50,50 z m 299,0 a 50,50 0 0 1 -50,-50 V 749 a 50,50 0 0 1 50,-50 h 100 a 50,50 0 0 1 50,50 v 100 a 50,50 0 0 1 -50,50 z m 299,0 a 50,50 0 0 1 -50,-50 V 749 a 50,50 0 0 1 50,-50 h 100 a 50,50 0 0 1 50,50 v 100 a 50,50 0 0 1 -50,50 z m -598,-299 a 50,50 0 0 1 -49,-50 V 450 a 50,50 0 0 1 49,-50 h 100 a 50,50 0 0 1 50,50 v 100 a 50,50 0 0 1 -50,50 z m 299,0 a 50,50 0 0 1 -50,-50 V 450 a 50,50 0 0 1 50,-50 h 100 a 50,50 0 0 1 50,50 v 100 a 50,50 0 0 1 -50,50 z m 299,0 a 50,50 0 0 1 -50,-50 V 450 a 50,50 0 0 1 50,-50 h 100 a 50,50 0 0 1 50,50 v 100 a 50,50 0 0 1 -50,50 z m -598,-299 a 50,50 0 0 1 -49,-50 V 151 a 50,50 0 0 1 49,-50 h 100 a 50,50 0 0 1 50,50 v 100 a 50,50 0 0 1 -50,50 z m 299,0 a 50,50 0 0 1 -50,-50 V 151 a 50,50 0 0 1 50,-50 h 100 a 50,50 0 0 1 50,50 v 100 a 50,50 0 0 1 -50,50 z m 299,0 a 50,50 0 0 1 -50,-50 V 151 a 50,50 0 0 1 50,-50 h 100 a 50,50 0 0 1 50,50 v 100 a 50,50 0 0 1 -50,50 z"></path>
</svg>
</button>
<div class="js-nav-global-dropdown nav-global-apps-menu nav-global-dropdown js-dropdown-menu" id="nav-global-apps-menu">
<a href="https://www.blender.org/?utm_medium=nav-global" target="_blank">
<h3>BLENDER.ORG</h3>
</a>
<ul>
<li>
<a href="https://www.blender.org/download/?utm_medium=nav-global" target="_blank">
<figure>
<svg class="nav-global-icon" height="100px" width="100px" viewBox="0 0 1000 1000">
<path d="m 49.15424,599.52895 a 50.360431,50.360431 0 0 0 -49.16137168,50.36043 v 200.24266 c 0,81.53594 68.34629768,149.88226 149.88223168,149.88226 h 700.2498 c 81.53593,0 149.8822,-68.34632 149.8822,-149.88226 V 649.88938 a 50.360431,50.360431 0 1 0 -100.72083,0 v 200.24266 c 0,27.57834 -21.58304,49.16138 -49.16137,49.16138 H 149.8751 c -27.57833,0 -49.16137,-21.58304 -49.16137,-49.16138 V 649.88938 A 50.360431,50.360431 0 0 0 49.15424,599.52895 Z M 249.3969,350.12491 a 50.360431,50.360431 0 0 0 -34.77267,85.13311 l 250.60309,249.40404 a 50.360431,50.360431 0 0 0 70.74442,0 L 785.37577,435.25802 A 50.360431,50.360431 0 1 0 714.63136,364.51361 L 500,579.14497 285.36864,364.51361 A 50.360431,50.360431 0 0 0 249.3969,350.12491 Z M 498.80094,0 A 50.360431,50.360431 0 0 0 449.63957,50.360432 V 649.88938 a 50.360431,50.360431 0 1 0 100.72086,0 V 50.360432 A 50.360431,50.360431 0 0 0 498.80094,0 Z" style="stroke-width:1.19906"></path>
</svg>
</figure>
<div>
<h4>Download</h4>
<p>Get the latest Blender, older versions, or experimental builds.</p>
</div>
</a>
</li>
<li>
<a href="https://www.blender.org/download/releases/?utm_medium=nav-global" target="_blank">
<div>
<h4>What&apos;s New</h4>
<p>Stay up-to-date with the new features in the latest Blender releases.</p>
</div>
</a>
</li>
</ul>
<a href="https://studio.blender.org/?utm_medium=nav-global" target="_blank">
<h3>LEARNING & RESOURCES</h3>
</a>
<ul>
<li>
<a href="https://studio.blender.org/?utm_medium=nav-global" target="_blank">
<figure>
<svg class="nav-global-icon" height="100px" width="100px" viewBox="0 0 1000 1000">
<path d="m 146.70939,1.6802353 c -78.362959,0 -143.678322,64.2057377 -143.6783209,143.6570547 -7.2533835,268.45385 0,463.93349 0,709.63356 0,79.45132 65.3153619,143.65705 143.6783209,143.65705 266.17757,0.51388 460.32009,0 709.61228,0 79.45134,0 143.67832,-64.20573 143.67832,-143.65705 0.37471,-118.45983 0,-235.03162 0,-353.72203 0.017,-0.72264 0.017,-1.4456 0,-2.16825 0.43351,-118.60776 0,-235.80643 0,-353.74328 0,-79.451317 -64.22698,-143.6570547 -143.67832,-143.6570547 -241.21275,-1.18614431 -498.91438,-0.041532 -709.61228,0 z m 0,90.3436617 h 82.71228 V 228.07083 H 93.374735 v -82.73354 c 0,-30.47448 22.860165,-53.313393 53.334655,-53.313393 z m 173.05594,0 h 363.5004 c -5.81542,127.740813 0,236.658243 0,362.416273 h -363.5004 c 0.39671,-121.62159 0,-241.06277 0,-362.416273 z m 453.84406,0 h 82.71228 c 30.4745,0 53.33466,22.838913 53.33466,53.313393 v 82.73354 H 773.60939 Z M 93.374735,318.39324 H 229.42167 V 454.44017 H 93.374735 Z m 680.234655,0 H 909.65633 V 454.44017 H 773.60939 Z M 93.374735,545.86796 H 229.42167 V 681.91489 H 93.374735 Z m 226.390595,0 h 363.5004 c -5.81534,127.74773 0,236.67164 0,362.43753 h -363.5004 c 0.3967,-121.62867 0,-241.07685 0,-362.43753 z m 453.84406,0 H 909.65633 V 681.91489 H 773.60939 Z M 93.374735,772.25856 H 229.42167 v 136.04693 h -82.71228 c -30.47449,0 -53.334655,-22.86016 -53.334655,-53.33464 z m 680.234655,0 h 136.04694 v 82.71229 c 0,30.47448 -22.86016,53.33464 -53.33466,53.33464 h -82.71228 z" style="stroke-width:1.08838"></path>
</svg>
</figure>
<div>
<h4>Blender Studio</h4>
<p>Access production assets and knowledge from the open movies.</p>
</div>
</a>
</li>
<li>
<a href="https://docs.blender.org/manual/en/latest/?utm_medium=nav-global" target="_blank">
<div>
<h4>Manual</h4>
<p>Documentation on the usage and features in Blender.</p>
</div>
</a>
</li>
</ul>
<a href="https://projects.blender.org/?utm_medium=nav-global" target="_blank">
<h3>DEVELOPMENT</h3>
</a>
<ul>
<li>
<a href="https://code.blender.org/?utm_medium=nav-global" target="_blank">
<figure>
<svg class="nav-global-icon" height="100px" width="100px" viewBox="0 0 1000 1000">
<path d="m 683.36434,818.19976 a 45.841084,45.841084 0 0 1 -33.83509,-13.09745 45.841084,45.841084 0 0 1 0,-64.39581 L 890.74067,499.49508 649.52925,259.37512 a 45.841084,45.841084 0 0 1 0,-64.39582 45.841084,45.841084 0 0 1 64.39581,0 l 272.8636,272.8636 a 45.841084,45.841084 0 0 1 0,64.39581 l -272.8636,272.8636 a 45.841084,45.841084 0 0 1 -30.56072,13.09745 z m -363.45431,0 A 45.841084,45.841084 0 0 1 286.07494,805.10231 L 13.211339,532.23871 a 45.841084,45.841084 0 0 1 0,-64.39581 L 286.07494,194.9793 a 45.841084,45.841084 0 0 1 64.39581,0 45.841084,45.841084 0 0 1 0,64.39582 L 109.25933,499.49508 350.47075,740.7065 a 45.841084,45.841084 0 0 1 0,64.39581 45.841084,45.841084 0 0 1 -30.56072,13.09745 z" style="stroke-width:1.09145"></path>
</svg>
</figure>
<div>
<h4>Developers Blog</h4>
<p>Latest development updates, by Blender developers.</p>
</div>
</a>
</li>
<li>
<a href="https://wiki.blender.org/?utm_medium=nav-global" target="_blank">
<div>
<h4>Documentation</h4>
<p>Guidelines, release notes and development docs.</p>
</div>
</a>
</li>
</ul>
<ul>
<li>
<a href="https://opendata.blender.org/?utm_medium=nav-global" target="_blank">
<figure>
<svg class="nav-global-icon" height="100px" width="100px" viewBox="0 0 1000 1000">
<path d="M 499.99424,0 A 55.30474,55.30474 0 0 0 444.6895,55.30474 V 944.69526 A 55.30474,55.30474 0 0 0 499.99424,1000 55.30474,55.30474 0 0 0 555.29898,944.69526 V 55.30474 A 55.30474,55.30474 0 0 0 499.99424,0 Z m 332.95711,332.95711 a 55.30474,55.30474 0 0 0 -55.30474,56.43341 V 944.69526 A 55.30474,55.30474 0 0 0 832.95135,1000 55.30474,55.30474 0 0 0 888.25609,944.69526 V 389.39052 A 55.30474,55.30474 0 0 0 832.95135,332.95711 Z M 167.03713,555.30474 a 55.30474,55.30474 0 0 0 -55.30474,55.30474 V 944.69526 A 55.30474,55.30474 0 0 0 167.03713,1000 55.30474,55.30474 0 0 0 222.34187,944.69526 V 610.60948 a 55.30474,55.30474 0 0 0 -55.30474,-55.30474 z" style="stroke-width:1.12867"></path>
</svg>
</figure>
<div>
<h4>Benchmark</h4>
<p>A platform to collect and share results of the Blender Benchmark.</p>
</div>
</a>
</li>
<li>
<a href="https://conference.blender.org/?utm_medium=nav-global" target="_blank">
<div>
<h4>Blender Conference</h4>
<p>The yearly event that brings the community together.</p>
</div>
</a>
</li>
</ul>
<div class="nav-global-apps-menu-section-donate">
<a href="https://fund.blender.org/?utm_medium=nav-global" target="_blank">
<h3>DONATE</h3>
</a>
<ul>
<li>
<a href="https://fund.blender.org/?utm_medium=nav-global" target="_blank">
<figure>
<svg class="nav-global-icon" height="100px" width="100px" viewBox="0 0 1000 1000">
<path d="M 273.67169,58.416076 C 201.59785,62.59427 135.79129,94.975269 86.697523,145.11359 37.603742,194.20736 4.1781939,260.01391 0,332.08775 -4.1781926,403.11704 22.980065,480.41362 86.697523,545.17562 l 45.960127,45.96013 339.47823,338.43367 a 43.871033,43.871033 0 0 0 61.62835,0 L 872.1979,591.13575 918.15804,545.17562 c 109.67766,-110.72213 109.67766,-290.38445 0,-400.06203 -110.72213,-110.722127 -290.38445,-110.722127 -400.06204,0 l -15.66822,14.62368 -15.66822,-14.62368 C 423.04211,80.351592 345.74553,53.193334 273.67169,58.416076 Z m 5.22274,86.697514 c 48.04922,-3.13365 98.18754,12.53458 146.23677,60.5838 l 47.00468,47.00468 a 43.871033,43.871033 0 0 0 61.62835,0 l 45.96013,-47.00468 c 76.25204,-76.25203 199.50874,-76.25203 276.80532,0 77.29658,77.29658 77.29658,200.5533 0,277.84988 L 810.56956,529.50739 502.42778,837.64917 194.286,529.50739 148.32588,483.54727 C 100.27665,434.45349 84.608431,384.31516 86.697523,336.26594 c 3.133646,-47.00467 26.113717,-95.0539 61.628357,-130.56855 35.51464,-35.51464 82.51932,-58.49471 130.56855,-60.5838 z" style="stroke-width:1.04455"></path>
</svg>
</figure>
<div>
<h4>Development Fund</h4>
<p>Support core development with a monthly contribution.</p>
</div>
</a>
</li>
<li>
<a href="https://www.blender.org/about/donations/?utm_medium=nav-global" target="_blank">
<div>
<h4>One-time Donations</h4>
<p>Perform a single donation with more payment options available.</p>
</div>
</a>
</li>
</ul>
</div>
</div>
</div>
</li>
</ul>
</nav>
</div>
</div>
</template>
<script>
export default {
data() {
return {
isDropdownVisible: false,
dropdownToggles: null,
btnActiveClass: 'nav-global-btn-active',
isVisibleClass: 'is-visible'
};
},
methods: {
initDropdownToggles() {
this.dropdownToggles = window.document.getElementsByClassName("js-nav-global-dropdown-toggle");
},
toggleDropdown(dropdownId) {
const el = window.document.getElementById(dropdownId);
if (el) {
if (el.classList.contains(this.isVisibleClass)) {
this.hideAllDropdowns();
} else {
this.showDropdown(el);
}
}
},
hideAllDropdowns() {
const dropdownMenus = window.document.getElementsByClassName("js-nav-global-dropdown");
if (dropdownMenus) {
for (let i = 0; i < dropdownMenus.length; i++) {
dropdownMenus[i].classList.remove(this.isVisibleClass);
}
}
this.removeActiveStyling();
},
showDropdown(el) {
this.hideAllDropdowns();
el.classList.add(this.isVisibleClass);
this.addActiveStyling();
},
removeActiveStyling() {
for (let i = 0; i < this.dropdownToggles.length; i++) {
this.dropdownToggles[i].classList.remove(this.btnActiveClass);
}
},
addActiveStyling() {
for (let i = 0; i < this.dropdownToggles.length; i++) {
this.dropdownToggles[i].classList.add(this.btnActiveClass);
}
}
},
mounted() {
this.initDropdownToggles();
for (let i = 0; i < this.dropdownToggles.length; i++) {
this.dropdownToggles[i].addEventListener("click", (e) => {
e.stopPropagation();
const dropdownId = this.dropdownToggles[i].getAttribute('data-dropdown-id');
const el = window.document.getElementById(dropdownId);
if (el) {
if (el.classList.contains(this.isVisibleClass)) {
this.hideAllDropdowns();
} else {
this.showDropdown(el);
}
}
});
}
document.body.addEventListener("click", (e) => {
if (!e.target.classList.contains("js-nav-global-dropdown")) {
this.hideAllDropdowns();
}
});
window.addEventListener('keydown', (event) => {
if (event.key === 'Escape') {
this.hideAllDropdowns();
}
});
// Create function navGlobalLinkOpen to force link open with JavaScript
function navGlobalLinkOpen() {
var navGlobalLink = document.querySelectorAll('.js-nav-global-link');
navGlobalLink.forEach(function(item) {
item.addEventListener('click', function(e) {
e.preventDefault();
var attrHref = this.getAttribute('href');
// Save current link to history object dynamically
history.pushState(null, null, window.location.href);
// Redirect to link attrHref dynamically
window.location.replace(attrHref);
});
});
}
// Init function navGlobalLinkOpen
navGlobalLinkOpen();
// Cleanup VPNav Vitepress attributes to make links work
var VPNav = document.querySelector('.VPNav');
// Iterate backwards, because removeAttribute() mutates the live attributes collection
for (var i = VPNav.attributes.length - 1; i >= 0; i--) {
var attr = VPNav.attributes[i];
if (attr.name.startsWith('data-v')) {
// Cleanup all attributes starting with 'data-v'
VPNav.removeAttribute(attr.name);
}
}
// Make header position fixed if page has sidebar
// TODO: change to Vue route change
setInterval(function() {
var VPContent = document.querySelector('.VPContent');
if (VPContent.classList.contains('has-sidebar')) {
VPNav.classList.add('VPNav-fixed');
} else {
VPNav.classList.remove('VPNav-fixed');
}
}, 200);
}
};
</script>

View File

@ -1,6 +1,3 @@
// Imports for overriding internal components
import { fileURLToPath, URL } from 'node:url'
import { defineConfig } from 'vitepress' import { defineConfig } from 'vitepress'
import { html5Media } from 'markdown-it-html5-media' import { html5Media } from 'markdown-it-html5-media'
@ -8,12 +5,12 @@ const studioURL = 'https://studio.blender.org'
// https://vitepress.dev/reference/site-config // https://vitepress.dev/reference/site-config
export default defineConfig({ export default defineConfig({
base: '/tools/', base: '/pipeline-and-tools/',
title: "Blender Studio", title: "Blender Studio",
description: "Documentation for the Blender Studio pipeline and tools.", description: "Documentation for the Blender Studio pipeline and tools.",
lastUpdated: true, lastUpdated: true,
cleanUrls: true, cleanUrls: true,
srcExclude: ['**/README',], srcExclude: ['**/README.md',],
head: [ head: [
[ [
'script', 'script',
@ -26,11 +23,8 @@ export default defineConfig({
], ],
themeConfig: { themeConfig: {
logo: { logo: {
/*
Logo is injected from Vue component NavBarGlobal
light: '/blender-studio-logo-black.svg', light: '/blender-studio-logo-black.svg',
dark: '/blender-studio-logo-white.svg' dark: '/blender-studio-logo-white.svg'
*/
}, },
siteTitle: false, siteTitle: false,
footer: { footer: {
@ -44,234 +38,67 @@ export default defineConfig({
}, },
// https://vitepress.dev/reference/default-theme-config // https://vitepress.dev/reference/default-theme-config
nav: [ nav: [
/* { text: 'Open Projects', link: `${studioURL}/films`, target: '_self' },
Nav is injected from Vue component NavBarGlobal { text: 'Training', link: `${studioURL}/training`, target: '_self' },
{ text: 'Films', link: `${studioURL}/films` }, { text: 'Blog', link: `${studioURL}/blog`, target: '_self' },
{ text: 'Training', link: `${studioURL}/training` }, { text: 'Pipeline and Tools', link: '/' },
{ text: 'Blog', link: `${studioURL}/blog` }, { text: 'Characters', link: `${studioURL}/characters`, target: '_self' }
{ text: 'Pipeline', link: '/' },
{ text: 'Characters', link: `${studioURL}/characters`, }
*/
], ],
sidebar: [ sidebar: [
{ {
text: 'Blender Studio Tools', text: 'Pipeline Overview',
items: [ items: [
{ text: 'Introduction', link: '/overview/introduction'}, { text: 'Introduction', link: '/pipeline-overview/introduction'},
{ text: 'Design Principles', link: '/overview/design-principles'}, { text: 'Infrastructure', link: '/pipeline-overview/infrastructure'},
{ text: 'Rigging', link: '/pipeline-overview/rigging'},
]
},
{
text: 'Artist Guide',
collapsed: false,
items: [
{text: 'Folder Structure', link: '/td-guide/project_folder_structure'},
{text: 'Project Blender', link: '/artist-guide/project_tools/project-blender' },
{
text: 'Project Tools',
collapsed: true,
items: [
{ text: 'Project Overview', link: '/artist-guide/project_tools/project-overview' },
{
text: 'Project Usage',
collapsed: true,
items: [
{text: 'Introduction', link: '/artist-guide/project_tools/project-usage'},
{text: 'Prepare Edit', link: '/artist-guide/project_tools/usage-sync-edit'},
{text: 'Building Shots', link: '/artist-guide/project_tools/usage-build-shot'},
{text: 'Playblast Shot', link: '/artist-guide/project_tools/usage-playblast'},
{text: 'Update Shot', link: '/artist-guide/project_tools/usage-update-shot'},
{text: 'Flamenco Render', link: '/artist-guide/project_tools/usage-render-flamenco'},
{text: 'Render Review', link: '/artist-guide/project_tools/usage-render-review'},
{text: 'Final Render', link: '/artist-guide/project_tools/usage-final-render'},
],
},
],
},
{ text: 'Debugging', link: '/artist-guide/debugging' },
{ text: 'Kitsu', link: '/artist-guide/kitsu' },
{
text: 'Pre-Production',
collapsed: true,
items: [
{ text: 'Storyboard', link: '/artist-guide/pre-production/storyboard'},
{ text: 'Editorial', link: '/artist-guide/pre-production/editorial'},
{ text: 'Previz', link: '/artist-guide/pre-production/previz'},
{ text: 'Research and Development', link: '/artist-guide/pre-production/research-and-development'},
{ text: 'Concept and Design', link: '/artist-guide/pre-production/concept-and-design'},
]
},
{
text: 'Asset Creation',
collapsed: true,
items: [
{ text: 'Modeling and Sculpting', link: '/artist-guide/asset-creation/modeling'},
{ text: 'Shading', link: '/artist-guide/asset-creation/shading'},
{ text: 'Rigging', link: '/artist-guide/asset-creation/rigging'},
{ text: 'Animation Testing', link: '/artist-guide/asset-creation/animation-testing'},
{ text: '2D Assets', link: '/artist-guide/asset-creation/2d-assets'},
]
},
{
text: 'Shot Production',
collapsed: true,
items: [
{ text: 'Shot Assembly', link: '/artist-guide/shot-production/shot-assembly'},
{ text: 'Layout', link: '/artist-guide/shot-production/layout'},
{ text: 'Animation', link: '/artist-guide/shot-production/animation'},
{ text: 'Lighting', link: '/artist-guide/shot-production/lighting'},
{ text: 'Effects', link: '/artist-guide/shot-production/effects'},
{ text: 'Rendering', link: '/artist-guide/shot-production/rendering'},
{ text: 'Coloring', link: '/artist-guide/shot-production/coloring'},
]
},
],
},
{
text: 'IT and TD Guide',
collapsed: true,
items: [
{ text: 'Infrastructure', link: '/td-guide/infrastructure'},
{text: 'Introduction', link: '/td-guide/project-tools-setup'},
{text: 'Repository', link: '/td-guide/repository'},
{text: 'Python', link: '/td-guide/python'},
{text: 'Folder Structure', link: '/td-guide/project_folder_structure'},
{
text: 'Shared',
collapsed: true,
items: [
{text: 'Syncthing Setup', link: '/td-guide/syncthing-setup'},
{text: 'Populating Shared', link: '/td-guide/populating_shared'},
],
},
{
text: 'SVN',
collapsed: true,
items: [
{text: 'SVN Setup', link: '/td-guide/svn-setup'},
{text: 'Populating SVN', link: '/td-guide/populating_svn'},
],
},
{text: 'Kitsu', link: '/td-guide/kitsu_server'},
{
text: 'Blender',
collapsed: true,
items: [
{text: 'Blender Setup', link: '/td-guide/blender_setup'},
{text: 'Add-Ons Setup', link: '/td-guide/addon_setup'},
{text: 'Add-Ons Preferences', link: '/td-guide/addon_preferences'},
],
},
{text: 'Flamenco', link: '/td-guide/flamenco_setup'},
]
},
{
text: 'Add-ons',
link:'/addons/overview',
collapsed: true,
items: [
{ text: 'Anim Cupboard', link: '/addons/anim_cupboard'},
{ text: 'Asset Pipeline', link: '/addons/asset_pipeline'},
{ text: 'Blender Kitsu', link: '/addons/blender_kitsu'},
{ text: 'Blender SVN', link: '/addons/blender_svn'},
{ text: 'Blender Gizmos', link: '/addons/bone_gizmos'},
{ text: 'Brushstroke Tools', link: '/addons/brushstroke_tools'},
{ text: 'Cache Manager', link: '/addons/cache_manager'},
{
text: 'CloudRig',
collapsed: true,
items: [
{text: 'Introduction', link: '/addons/cloudrig/introduction'},
{text: 'Component Types', link: '/addons/cloudrig/cloudrig-types'},
{text: 'Generator Parameters', link: '/addons/cloudrig/generator-parameters'},
{text: 'Properties UI', link: '/addons/cloudrig/properties-ui'},
{text: 'Organizing Bones', link: '/addons/cloudrig/organizing-bones'},
{text: 'Actions', link: '/addons/cloudrig/actions'},
{text: 'Troubleshooting', link: '/addons/cloudrig/troubleshooting'},
{text: 'Constraint Relinking', link: '/addons/cloudrig/constraint-relinking'},
{text: 'Workflow Boosters', link: '/addons/cloudrig/workflow-enhancements'},
{text: 'Contribute', link: '/addons/cloudrig/code'},
],
},
{ text: 'Contact Sheet', link: '/addons/contactsheet'},
{ text: 'Easy Weight', link: '/addons/easy_weight'},
{ text: 'Geonode Shapekeys', link: '/addons/geonode_shapekeys'},
{ text: 'Grease Converter', link: '/addons/grease_converter'},
{ text: 'Lattice Magic', link: '/addons/lattice_magic'},
{ text: 'Lighting Overrider', link: '/addons/lighting_overrider'},
{ text: 'Pose Shape Keys', link: '/addons/pose_shape_keys'},
] ]
}, },
{ {
text: 'Naming Conventions', text: 'Naming Conventions',
collapsed: true,
items: [ items: [
{ text: 'Introduction', link: '/naming-conventions/introduction'}, { text: 'Introduction', link: '/naming-conventions/introduction'},
{ text: 'File Types', link: '/naming-conventions/file-types'}, { text: 'File Types', link: '/naming-conventions/file-types'},
{ text: 'In-file Naming', link: '/naming-conventions/datablock-names'}, { text: 'In-file Prefixes', link: '/naming-conventions/in-file-prefixes'},
{ text: 'Examples', link: '/naming-conventions/examples'}, { text: 'Examples', link: '/naming-conventions/examples'},
{ text: 'Shared Folder Structure', link: '/naming-conventions/shared-folder-structure'},
{ text: 'SVN Folder Structure', link: '/naming-conventions/svn-folder-structure'},
] ]
}, },
{ {
text: 'Archive', text: 'User Guide',
collapsed: true,
items: [
{text: 'Pipeline Proposal', link: '/archive/pipeline-proposal-2019/introduction'},
{text: 'Attract Updates', link: '/archive/pipeline-proposal-2019/attract-improvements'},
{text: 'Task Companion Add-on', link: '/archive/pipeline-proposal-2019/task-companion-add-on'},
{text: 'Shot Caching', link: '/archive/pipeline-proposal-2019/shot-caching/introduction', items: [
{text: 'Add-on', link: '/archive/pipeline-proposal-2019/shot-caching/add-on', items: [
{text: 'User Stories', link: '/archive/pipeline-proposal-2019/shot-caching/user-stories'},
{text: 'Structural Ideas', link: '/archive/pipeline-proposal-2019/shot-caching/structural-ideas'}
]},
{text: 'Issues', link: '/archive/pipeline-proposal-2019/shot-caching/issues'},
]},
{text: 'Asset Publishing', link: '/archive/pipeline-proposal-2019/asset-publishing/introduction'},
{text: 'Character Pipeline Assistant', link: '/archive/pipeline-proposal-2019/asset-publishing/character-pipeline-assistant'},
],
},
{
text: 'Gentoo',
collapsed: true,
items: [
{
text: 'TD',
collapsed: false, collapsed: false,
items: [ items: [
{ text: 'Overview', link: '/gentoo/td/overview'}, {text: 'Project Setup', link: '/user-guide/project-setup'},
{ text: 'Installation', link: '/gentoo/td/installation'},
{ text: 'Maintenance', link: '/gentoo/td/maintaince'},
{ text: 'Render Farm', link: '/gentoo/td/render_farm'},
{ text: 'Troubleshooting', link: '/gentoo/td/troubleshooting' },
],
},
{ {
text: 'User', text: 'Workstation',
collapsed: false,
items: [ items: [
{ text: 'Introduction', link: '/gentoo/user/introduction' }, { text: 'Introduction', link: '/user-guide/workstations/introduction'},
{ text: 'Installing Software', link: '/gentoo/user/installing-software' }, { text: 'Installing Software', link: '/user-guide/workstations/installing-software'},
{ text: 'Running Blender', link: '/gentoo/user/running-blender' }, { text: 'Running Blender', link: '/user-guide/workstations/running-blender'},
{ text: 'SVN', link: '/gentoo/user/svn' }, { text: 'Troubleshooting', link: '/user-guide/workstations/troubleshooting'},
] ]
}, },
], {text: 'SVN', link: '/user-guide/svn'},
{text: 'Debugging', link: '/user-guide/debugging'}
]
}, },
{
text: 'TD Guide',
collapsed: false,
items: [
{text: 'Project Setup', link: '/td-guide/project-setup'},
{
text: 'Workstation',
items: [
{ text: 'Overview', link: '/td-guide/workstations/overview'},
{ text: 'Installation', link: '/td-guide/workstations/installation'},
{ text: 'Maintenance', link: '/td-guide/workstations/maintaince'},
]
},
]
}
], ],
}, },
markdown: { markdown: {
@ -279,18 +106,6 @@ export default defineConfig({
// Enable the markdown-it-html5-media plugin // Enable the markdown-it-html5-media plugin
md.use(html5Media) md.use(html5Media)
} }
},
// Override internal component 'VPNavBar'
vite: {
resolve: {
alias: [
{
find: /^.*\/VPNavBar\.vue$/,
replacement: fileURLToPath(
new URL('./components/NavBarGlobal.vue', import.meta.url)
)
}
]
}
} }
}) })

View File

@ -5,58 +5,4 @@
--vp-c-brand-dark: #008ae0; --vp-c-brand-dark: #008ae0;
--vp-c-brand-darker: #007ac6; --vp-c-brand-darker: #007ac6;
--vp-local-search-highlight-bg: var(--vp-c-brand-lighter); --vp-local-search-highlight-bg: var(--vp-c-brand-lighter);
--vp-font-family-base: 'Heebo', ui-sans-serif,
system-ui, -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto,
'Helvetica Neue', Helvetica, Arial, 'Noto Sans', sans-serif,
'Apple Color Emoji', 'Segoe UI Emoji', 'Segoe UI Symbol', 'Noto Color Emoji';
--vp-font-family-mono: ui-monospace, SFMono-Regular, 'SF Mono', Menlo, Monaco,
Consolas, 'Liberation Mono', 'Courier New', monospace;
} }
.VPNav {
/* Fix Vitepress VPNav arbitrary style */
pointer-events: auto !important;
position: relative !important;
width: 100%;
}
.VPNav-fixed {
position: fixed !important;
z-index: 30;
}
@media (max-width: 768px) {
.VPNav-fixed {
position: relative !important;
}
}
.VPNavBar {
background-color: var(--vp-c-bg);
}
.VPNavBar.has-sidebar {
background-color: transparent;
}
.VPHero {
background-image: url("/media/images/hero_banner.webp");
background-size: cover;
margin-bottom: 2em;
background-position: center;
}
.VPHero .name .clip,
.VPHero .tagline,
.VPHero .text {
color: white;
-webkit-text-fill-color: white;
}
/* Web Assets overrides. */
.nav-global .nav-global-logo svg {
display: inline;
}

View File

@ -1,6 +1,4 @@
import DefaultTheme from 'vitepress/theme-without-fonts' import DefaultTheme from 'vitepress/theme'
// Import 'navigation-global.css' from JavaScript is needed as '<style>' tags are not supported in Vitepress client component templates
import './navigation-global.css'
import './custom.css' import './custom.css'
export default DefaultTheme export default DefaultTheme

View File

@ -1,472 +0,0 @@
/* Variables. */
.nav-global {
-webkit-font-smoothing: antialiased;
--nav-global-color-bg: hsl(213, 10%, 14%);
--nav-global-color-text: hsl(213, 5%, 64%);
--nav-global-color-text-secondary: hsl(213, 5%, 44%);
--nav-global-color-text-highlight: hsl(213, 5%, 84%);
--nav-global-color-text-hover: white;
--nav-global-color-text-active: white;
--nav-global-color-primary: hsl(204, 98%, 54%);
--nav-global-color-primary-bg: hsla(204, 100%, 46%, .1);
--nav-global-color-button-bg-hover: hsl(213, 10%, 24%);
--nav-global-color-button-text: var(--nav-global-color-text);
--nav-global-color-menu-bg: var(--nav-global-color-bg);
--nav-global-color-menu-border: hsl(213, 10%, 18%);
--nav-global-color-menu-zindex: 1040;
--nav-global-box-shadow-menu: 0px 5px 15px -2px rgba(0, 0, 0, 0.33), 0px 5px 15px -5px rgba(0, 0, 0, 0.33);
--nav-global-box-shadow-menu-item: 0px 1px 4px 0px rgba(0, 0, 0, 0.05), 0px 15px 20px -1px rgba(0, 0, 0, 0.025);
--nav-global-navbar-height: var(--navbar-primary-height, 56px);
--nav-global-spacer: 15px;
--nav-global-spacer-sm: 10px;
--nav-global-spacer-xs: 5px;
--nav-global-border-radius: 6px;
--nav-global-border-radius-lg: 10px;
--nav-global-button-height: 35px;
--nav-global-link-padding-x: var(--nav-global-spacer);
--nav-global-link-padding-y: var(--nav-global-spacer-sm);
--nav-global-font-size: 14px;
--nav-global-transition-speed: 150ms;
}
/* Reset. */
.nav-global :not(svg|*),
.nav-global *::before,
.nav-global *::after {
-webkit-box-sizing: border-box;
all: unset;
display: revert;
box-sizing: border-box;
}
.nav-global [default-styles] {
all: revert;
}
.nav-global * {
-webkit-text-size-adjust: 100%;
font-family: 'Heebo', -apple-system, BlinkMacSystemFont, "Segoe UI", Helvetica, Arial, sans-serif, "Apple Color Emoji", "Segoe UI Emoji", "Segoe UI Symbol";
font-variation-settings: 'wght' 400;
font-weight: normal;
}
.nav-global {
background-color: var(--nav-global-color-bg);
color: var(--nav-global-color-text);
display: flex;
position: relative;
z-index: var(--zindex-fixed);
}
.nav-global h3,
.nav-global h4,
.nav-global strong {
font-variation-settings: 'wght' 500;
}
.nav-global figure,
.nav-global section {
display: block;
}
.nav-global svg:not(:root) {
overflow: hidden;
vertical-align: middle;
}
.nav-global .nav-global-container {
flex: 1;
margin: 0 auto;
}
/* Navigation. */
.nav-global nav {
align-items: center;
display: flex;
line-height: var(--nav-global-font-size);
font-size: var(--nav-global-font-size);
min-height: var(--nav-global-navbar-height);
margin: 0 auto;
padding: 0 var(--nav-global-spacer);
position: relative;
}
/* Links. */
.nav-global a:not(.dropdown-item) {
color: inherit;
cursor: pointer;
text-decoration: none;
transition: background-color var(--nav-global-transition-speed) ease-out, color var(--nav-global-transition-speed) ease-out;
}
.nav-global-nav-links a:not(.dropdown-item) {
overflow: hidden;
text-overflow: ellipsis;
white-space: nowrap;
}
.nav-global a:not(.dropdown-item):hover {
color: var(--nav-global-color-text-hover);
}
/* Navigation items. */
.nav-global nav>ul {
flex-wrap: wrap;
list-style: none;
margin: 0;
padding: 0;
}
.nav-global nav>ul,
.nav-global nav>ul>li,
.nav-global nav>ul>li>a,
.nav-global-apps-dropdown-container {
align-items: center;
display: inline-flex;
height: 100%;
}
.nav-global-apps-dropdown-container {
position: relative;
}
.nav-global nav>ul>li>a {
padding: var(--nav-global-link-padding-y) var(--nav-global-link-padding-x);
}
.nav-global nav>a.is-active,
.nav-global nav>a.is-active svg,
.nav-global nav>ul>li>a.is-active,
.nav-global .nav-global-link-active,
.nav-global .nav-global-link-active svg {
color: var(--nav-global-color-text-active) !important;
fill: var(--nav-global-color-text-active);
font-variation-settings: 'wght' 500;
}
.nav-global .nav-global-links-right {
margin-left: auto;
}
/* Logo. */
.nav-global a.nav-global-logo {
margin-right: var(--nav-global-spacer);
position: relative;
top: 2px;
}
.nav-global a.nav-global-logo strong {
margin-inline: var(--nav-global-spacer-sm);
font-size: 18px;
}
.nav-global .nav-global-logo svg,
.nav-global .nav-global-logo img {
height: 21px;
pointer-events: none;
}
.nav-global a.nav-global-logo svg {
position: relative;
top: -4px;
}
.nav-global svg {
fill: var(--nav-global-color-text);
transition: fill var(--nav-global-transition-speed) ease-out;
}
.nav-global .nav-global-logo:hover svg {
fill: white;
}
/* Apps button. */
.nav-global button,
.nav-global .nav-global-btn {
-webkit-appearance: button;
align-items: center;
background-color: transparent;
border-radius: var(--nav-global-border-radius);
border: 0;
color: var(--nav-global-color-button-text);
cursor: pointer;
display: inline-flex;
font: inherit;
height: var(--nav-global-button-height);
margin: 0;
outline: 0;
overflow: visible;
padding: var(--nav-global-spacer-xs) var(--nav-global-spacer);
text-transform: none;
transition: background-color var(--nav-global-transition-speed) ease-out, color var(--nav-global-transition-speed) ease-out, transform var(--nav-global-transition-speed) ease-out;
white-space: nowrap;
}
.nav-global button span,
.nav-global .nav-global-btn span {
white-space: nowrap;
}
.nav-global button:hover,
.nav-global .nav-global-btn:hover {
background-color: var(--nav-global-color-button-bg-hover);
color: var(--nav-global-color-text-hover);
cursor: pointer;
}
.nav-global button.nav-global-btn-active,
.nav-global .nav-global-btn.nav-global-btn-active {
background-color: var(--nav-global-color-primary-bg);
color: var(--nav-global-color-primary);
}
.nav-global button.nav-global-btn-active svg,
.nav-global .nav-global-btn.nav-global-btn-active svg {
fill: var(--nav-global-color-primary);
}
.nav-global .nav-global-icon {
height: 20px;
pointer-events: none;
width: 20px;
}
.nav-global-icon-dropdown-toggle {
margin-left: var(--nav-global-spacer-xs);
}
.nav-global .dropdown-toggle.active {
color: var(--nav-global-color-text-active)
}
.nav-global button:hover svg,
.nav-global .nav-global-btn:hover svg {
fill: white;
}
/* Apps dropdown menu. */
.nav-global .nav-global-apps-menu {
background-color: var(--nav-global-color-menu-bg);
border-radius: var(--nav-global-border-radius-lg);
border: thin solid var(--nav-global-color-menu-border);
box-shadow: var(--nav-global-box-shadow-menu);
display: none;
padding: var(--nav-global-spacer-sm);
position: absolute;
right: 0;
top: calc(100% + 15px);
visibility: hidden;
width: 640px;
z-index: var(--nav-global-color-menu-zindex);
}
.nav-global .nav-global-dropdown.is-visible {
display: block;
visibility: visible;
}
/* Tiny triangle in the corner. */
.nav-global .nav-global-apps-menu::before {
background-color: var(--nav-global-color-menu-bg);
border-radius: 3px;
border: 2px var(--nav-global-color-menu-bg) solid;
content: '';
display: block;
height: .85rem;
position: absolute;
right: .85rem;
top: -0.25rem;
transform: rotate(45deg);
width: 1rem;
z-index: -1;
}
.nav-global .nav-global-apps-menu ul {
border-bottom: 2px solid rgba(255, 255, 255, .05);
display: grid;
gap: var(--nav-global-spacer-sm);
grid-template-columns: repeat(2, 1fr);
list-style: none;
margin: 0 0 var(--nav-global-spacer-xs) 0;
padding: var(--nav-global-spacer-xs) 0 var(--nav-global-spacer-sm) 0;
}
.nav-global .nav-global-apps-menu ul>li>a {
border-radius: var(--nav-global-border-radius-lg);
display: flex;
flex: 1;
height: 100%;
}
.nav-global .nav-global-apps-menu ul>li>a:hover {
background-color: rgba(255, 255, 255, .05);
color: var(--nav-global-color-text-active);
box-shadow: var(--nav-global-box-shadow-menu-item);
}
.nav-global .nav-global-apps-menu ul>li>a:hover h4,
.nav-global .nav-global-apps-menu ul>li>a:hover svg {
color: var(--nav-global-color-primary);
fill: var(--nav-global-color-primary);
}
.nav-global .nav-global-apps-menu h3 {
color: white;
display: inline-block;
font-size: 13px;
line-height: 18px;
margin: 0;
opacity: .3;
padding-left: var(--nav-global-spacer);
}
.nav-global .nav-global-apps-menu h4 {
color: var(--nav-global-color-text-highlight);
font-size: 17px;
line-height: 18px;
margin: var(--nav-global-spacer-xs) 0 0;
padding: var(--nav-global-spacer-sm) var(--nav-global-spacer) 0;
transition: color var(--nav-global-transition-speed) ease-out;
}
.nav-global .nav-global-apps-menu p {
font-size: 15px;
line-height: 20px;
margin: 0;
opacity: .8;
padding: var(--nav-global-spacer-xs) var(--nav-global-spacer) var(--nav-global-spacer-sm);
}
.nav-global .nav-global-apps-menu figure {
margin: var(--nav-global-spacer) 0 0 var(--nav-global-spacer);
}
.nav-global .nav-global-apps-menu ul>li>a svg {
position: relative;
top: -2px;
}
/* Donate section of the menu. */
.nav-global .nav-global-apps-menu-section-donate ul {
border: none;
margin-bottom: 0;
padding-bottom: 0;
}
.nav-global .nav-global-apps-menu-section-donate a svg {
fill: hsl(352, 90%, 62%) !important;
transition: transform var(--nav-global-transition-speed) ease-out;
}
.nav-global .nav-global-apps-menu-section-donate ul>li:first-child>a {
background-color: hsla(352deg, 90%, 42%, .2);
}
.nav-global .nav-global-apps-menu-section-donate ul>li:first-child>a:hover {
background-color: hsla(352deg, 90%, 42%, .5);
}
.nav-global .nav-global-apps-menu-section-donate ul>li:first-child>a:hover svg {
fill: hsl(352, 90%, 72%) !important;
transform: scale(1.2);
}
.nav-global .nav-global-apps-menu-section-donate ul>li:first-child>a:hover h4 {
color: white;
}
/* Mobile. */
.nav-global button.nav-global-logo {
display: none;
visibility: hidden;
}
@media (max-width: 767px) {
.nav-global-apps-dropdown-container,
.nav-global a.nav-global-logo {
display: none;
}
.nav-global button.nav-global-logo {
display: block;
visibility: visible;
}
.nav-global .nav-global-nav-links {
align-items: flex-start;
background-color: var(--nav-global-color-menu-bg);
border-radius: var(--nav-global-border-radius-lg);
display: none;
flex-direction: column;
left: 1rem;
padding: 0 var(--nav-global-spacer-sm);
position: absolute;
top: calc(100% + .5rem);
visibility: visible;
width: 10rem;
z-index: var(--nav-global-color-menu-zindex);
}
.nav-global .nav-global-nav-links.is-visible {
display: flex;
}
.nav-global .nav-global-nav-links::before {
background-color: var(--nav-global-color-menu-bg);
border-radius: 3px;
border: 2px var(--nav-global-color-menu-bg) solid;
content: '';
display: block;
height: 0.8rem;
position: absolute;
left: 1.5rem;
top: -0.133rem;
transform: rotate(45deg);
width: 1rem;
z-index: -1;
}
.nav-global nav>ul {
height: initial;
}
.nav-global .nav-global-nav-links>li {
border-bottom: 2px solid rgba(255, 255, 255, .05);
width: 100%;
}
.nav-global .nav-global-nav-links>li:last-child {
border: none;
}
.nav-global .nav-global-nav-links>li>a {
padding-inline: 0;
width: 100%;
}
}
/* Site-specific tweaks. */
/* Make sure to start every line with ".nav-global"
* so changes affect the developer navbar only. */
/* Limit navbar width on large screens. */
@media (min-width: 1200px) {
.nav-global .nav-global-container {
max-width: 1170px;
}
}

View File

@ -1,20 +1,6 @@
# Blender Studio Tools Docs # Blender Studio Pipeline Docs
## Installation
To set up the environment for working on the Blender Studio Tools Docs:
* Make sure you have NodeJS installed * Make sure you have NodeJS installed
* `cd blender-studio-pipeline/docs/`
* `npm install` * `npm install`
## Review Changes
Generate a local preview of the site to review your changes; the preview updates automatically whenever changes to the site's content are saved.
* `npm run docs:dev` * `npm run docs:dev`
* Refer to the [vitepress docs](https://vitepress.dev/guide/getting-started#up-and-running) for how to edit the documents * Refer to the [vitepress docs](https://vitepress.dev/) for how to edit the documents
## Test Changes
To make sure the site will build for production without errors:
* `npm run docs:build`
* Refer to the [vitepress docs](https://vitepress.dev/guide/deploy#build-and-test-locally) for more info

View File

@ -1 +0,0 @@
<!--@include: ../../scripts-blender/addons/anim_cupboard/README.md-->

View File

@ -1 +0,0 @@
<!--@include: ../../scripts-blender/addons/asset_pipeline/README.md-->

View File

@ -1 +0,0 @@
<!--@include: ../../scripts-blender/addons/blender_kitsu/README.md-->

View File

@ -1 +0,0 @@
<!--@include: ../../scripts-blender/addons/blender_svn/README.md-->

View File

@ -1 +0,0 @@
<!--@include: ../../scripts-blender/addons/bone_gizmos/README.md-->

View File

@ -1 +0,0 @@
<!--@include: ../../scripts-blender/addons/brushstroke_tools/README.md-->

View File

@ -1 +0,0 @@
<!--@include: ../../scripts-blender/addons/cache_manager/README.md-->

View File

@ -1,74 +0,0 @@
# Actions
<details>
<summary> This page goes over the workflow of rigging a character's face with CloudRig and Action Constraints. To understand how this works, first you must be familiar with the <a href="https://docs.blender.org/manual/en/latest/animation/constraints/relationship/action.html#action-constraint">Action constraint</a>. </summary>
TL;DR: Action constraints allow you to move bones into a predetermined pose using a control bone. For example, you can pose your character smiling, save that pose into an Action, then set up constraints so that you can move a controller that will blend the bones into that smiling pose.
</details>
## The Problem
If you've ever rigged a character with Action constraints though, you've probably run into the tedium of having to set up those constraints. Each bone affected by an action needs to have an Action constraint set up on it. Even with the Copy Attributes addon that lets you copy constraints, iterating on the setup is tedious. When trying to make a symmetrical setup, it's even worse, since you have to do a copy operation for each side, and control bones along the center of the rig need to have both constraints with half influence. It is a painful workflow.
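For a sense of how much boilerplate this is, here is a rough sketch of what a single hand-made Action constraint set-up looks like in Python. All object, bone, and Action names are placeholders, and the channel and frame values simply mirror the blink example below; this is only an illustration, not CloudRig code.

```python
import bpy

rig = bpy.data.objects["RIG-character"]       # placeholder: the generated rig
pbone = rig.pose.bones["Eyelid_Upper.L"]      # placeholder: one bone keyed in the Action

# Every single affected bone needs its own constraint like this.
con = pbone.constraints.new('ACTION')
con.target = rig
con.subtarget = "Blink.L"                     # control bone that triggers the pose
con.transform_channel = 'LOCATION_Z'          # which channel of the control is read
con.target_space = 'LOCAL'
con.action = bpy.data.actions["Blink"]        # placeholder: the keyframed pose
con.frame_start = 10                          # rest pose key
con.frame_end = 20                            # blink pose key
con.min = 0.0                                 # control value mapped to frame_start
con.max = 0.05                                # control value mapped to frame_end
```

Multiply that by every affected bone, and again for the other side of the face, and the tedium adds up quickly.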
## The Solution
Action Slots! You can find this when selecting a CloudRig metarig and then going to **Properties->Object Data->CloudRig->Actions**:
<img src="/media/addons/cloudrig/cloudrig_actions.png" width=450>
You can add a slot, select an Action, and input all the information about when this action will activate. This is very similar to setting up an individual Action constraint.
These are the steps one might take to set up a blinking and eye opening action:
- On the metarig, create bones named "Blink.L" and "Blink.R" and assign the [Bone Copy](cloudrig-types#bone-copy) component type. Assign colors, widgets, etc as you wish.
- Generate the rig.
- On the generated rig, pose and keyframe your blink pose on frame 20.
- Then, select all the bones you keyed, reset their transforms, and insert a key on frame 10.
- Optionally, you can pose and key an eye open pose on frame 0.
- I highly recommend selecting all your keys, pressing T, and setting the key interpolation type to Linear.
- Include both left and right eyes in the poses.
- On the Metarig, add an Action Slot.
- Select the action you've just created. (Give it a descriptive name!)
- Select "Blink.L" as the control bone.
- Set the Transform Channel to which axis of movement you want to activate the blink.
- Symmetry is enabled by default. The UI should indicate that it found "Blink.R" as the opposite side control.
- Input the activation parameters. The frame range should be 1-20, and you can leave the transform min/max default for now
<img src="/media/addons/cloudrig/blink_action.png" width=450>
Regenerate the rig. When you move the Blink control, the action should activate and the character should blink with one eye for each control.
<video src="/media/addons/cloudrig/sintel_blink.mp4" title="" controls></video>
## Iterating
As previously explained, this works by creating Action constraints on each affected bone, which transform those bones into the keyed pose when the control bone is moved. But what if you want to edit the pose that you keyed? You can simply assign the Action to the generated rig in an Action Editor Dope Sheet and scrub the timeline as usual. However, if you do that and then also move the control, the pose will be applied twice: once by the active Action selected on the timeline, and once by the Action constraints. If you find this annoying, you can disable all the constraints of a given Action using the Enable/Disable Constraints button that CloudRig adds to the Action Editor's header:
<img src="/media/addons/cloudrig/disable_action_constraints.png" width=1200>
## Symmetry
This option becomes available if the bone name can be identified as belonging to the left or right side. In parentheses, you will see the name that the opposite-side bone is expected to have. If a bone with that name doesn't exist, the Symmetry option will be grayed out.
When enabled, each bone that is affected by the action will also be checked for a bone name that can be identified as belonging to the left or right side. Left-side bones will be controlled by the left-side control, and right-side bones by the right side control.
In the above example, we have a "Blink" action. This contains the keyframes for BOTH the left and right eyes of your character. Then, when we select either "Blink.L" or "Blink.R" bones as our control bone (it doesn't matter which, as long as they both exist), the Symmetry option should appear, and you can enable it. Now, the left-side bones will be controlled by "Blink.L" and the right-side bones by "Blink.R".
Not all bones that were keyframed in an action have to be identifiable as belonging to the left or right side. This is expected to only happen for bones which are in the center of the character. In this case, two Action constraints will be created on that bone; One for the left and one for the right side. And they will both have an influence of 0.5. You can imagine setting up actions for raising the eyebrows. You will have left and right eyebrows, but both of them will affect the center of the forehead by 50% each.
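As a purely illustrative sketch of that idea (not CloudRig's actual implementation), the constraint assignment roughly follows this pattern. The rig name, bone list, and control names are placeholders.

```python
import bpy

rig = bpy.data.objects["RIG-character"]                        # placeholder: the generated rig
affected_bone_names = ["Eyelid_Upper.L", "Eyelid_Upper.R", "Nose_Bridge"]  # placeholder bone list

def name_side(name):
    # Hypothetical helper: classify a bone by its name suffix.
    if name.endswith(".L"):
        return 'LEFT'
    if name.endswith(".R"):
        return 'RIGHT'
    return 'CENTER'

for name in affected_bone_names:
    pbone = rig.pose.bones[name]
    controls = {
        'LEFT':   [("Blink.L", 1.0)],
        'RIGHT':  [("Blink.R", 1.0)],
        'CENTER': [("Blink.L", 0.5), ("Blink.R", 0.5)],        # both controls, half influence each
    }[name_side(name)]
    for control, influence in controls:
        con = pbone.constraints.new('ACTION')
        con.target = rig
        con.subtarget = control
        con.influence = influence
        # Frame range, transform channel etc. would be filled in as in the earlier example.
```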
## Corrective Actions
When rigging a face, chances are your actions will only work nicely on their own at first.
For example, you might have a LipsWide action and a LipsUp action, but when both of them are activated, the result is probably a disaster. That's where a Corrective Action comes in.
The idea is exactly the same as a Corrective Shape Key: A corrective action activates when exactly two other actions are also activating. This lets your shapes combine correctly.
To set up a corrective action:
0. Make sure you already have two actions fully set up and working. Let's stick with the "LipsUp" and "LipsWide" example.
1. Create a new action. Naming it something like Lips_Up+Wide would make sense.
2. On the generated rig, pose your controls so that both LipsWide and LipsUp are being activated.
3. Pose and keyframe the necessary corrections in the Lips_Up+Wide action. As with a normal action set-up, you need the center frame of your frame range to be keyframed to the default pose.
4. Go back to your Metarig, add a new Action Slot, select Lips_Up+Wide, then enable the Corrective checkbox.
5. Select the two trigger actions in the selection boxes that appear.
6. The eye icon next to the selection box lets you double-check the trigger action's set-up. If you notice something wrong there and want to fix it, you can use the jump button next to the eye to jump to that action slot.
7. In the Action Slot list, actions that either correct or are corrected by the currently active one will be marked with a link icon.
8. Now just set your Frame Start/End values and then hit Generate.
Note: It doesn't matter where corrective actions are placed in the list. To achieve correct transform mixing, they will be created above the first trigger in the constraint stack.
## Shape Keys
If any object parented to the generated rig has a shape key named the same as one of the Actions in CloudRig's Action Slots, that shape key will be driven. For symmetrical actions, you can put .L/.R at the end of the shape key's name.
So, if you have a symmetrical blink action set-up like in the first example, and your action was called "Blink", you could create shape keys named "Blink.L" and "Blink.R" on the head mesh. As long as the head mesh is parented to the generated rig, when you re-generate, the shape keys will gain drivers so that they are activated by your blink controls the same way your Action Constraints are.
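The resulting driver is roughly of this shape. This is a minimal sketch assuming a head mesh named "GEO-head" with a "Blink.L" shape key and a generated rig named "RIG-character"; all names, the channel, and the value range are placeholders, and the expression CloudRig actually generates may differ.

```python
import bpy

rig = bpy.data.objects["RIG-character"]                   # placeholder: the generated rig
head = bpy.data.objects["GEO-head"]                       # placeholder: mesh parented to the rig
key_block = head.data.shape_keys.key_blocks["Blink.L"]    # shape key named after the Action

fcurve = key_block.driver_add("value")
driver = fcurve.driver
driver.type = 'SCRIPTED'
driver.expression = "blink / 0.05"                        # placeholder: map control range to 0..1

var = driver.variables.new()
var.name = "blink"
var.type = 'TRANSFORMS'
target = var.targets[0]
target.id = rig
target.bone_target = "Blink.L"                            # the control bone
target.transform_type = 'LOC_Z'                           # same channel the Action Slot uses
target.transform_space = 'LOCAL_SPACE'
```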

View File

@ -1,461 +0,0 @@
# Component Types
## Overview
<img src="/media/addons/cloudrig/component_hierarchy.png">
These are CloudRig's component types. Most component types are built on top of others, meaning they inherit each other's functionalities. The image above and the table of contents below shows this inheritance hierarchy.
- [Shared Parameters](#shared-parameters)
- [Chain: Toon](#chain-toon)
- [Chain: Face Grid](#chain-face-grid)
- [Chain: Eyelid](#chain-eyelid)
- [Chain: FK](#chain-fk)
- [Chain: Physics](#chain-physics)
- [Feather](#feather)
- [Spine: IK/FK](#spine-ik-fk)
- [Spine: Squashy](#spine-squashy)
- [Shoulder](#shoulder)
- [Chain: IK](#chain-ik)
- [Chain: Finger](#chain-finger)
- [Limb: Generic](#limb-generic)
- [Limb: Biped Leg](#limb-biped-leg)
- [Curve: With Hooks](#curve-with-hooks)
- [Curve: Spline IK](#curve-spline-ik)
- [Lattice](#lattice)
- [Aim](#aim)
- [Bone Copy](#bone-copy)
- [Chain Intersection](#chain-intersection)
- [Bone Tweak](#bone-tweak)
## Assigning Components
You can assign a component to a bone in the metarig. For chain components, the connected children will be part of the same component, as long as they aren't assigned a component of their own. You can assign components to bones in two places in the UI:
- Properties -> Armature -> CloudRig -> Rig Components (Hit the + button to assign a component to the active bone.)
- Properties -> Bone -> CloudRig Component -> Component Type (Only appears when 'CloudRig' is enabled on the armature.)
<img src="/media/addons/cloudrig/assigning_components.png" width=800>
## Component Samples
Each component type comes with a sample so you can get something up and running quickly and start playing around with it.
You can add these in the 3D View via Add (Shift+A)->Armature->CloudRig Samples:
<img src="/media/addons/cloudrig/add_sample.png" width=500>
## Shared Parameters
All CloudRig component types share some basic functionality, like letting you choose a parent for the component's root, and even specify a parent switching set-up for it.
<details>
<summary> Details & Parameters </summary>
<img src="/media/addons/cloudrig/shared_parameters.png" width=600>
- #### Advanced Mode
This is technically a user preference, but it relates to component parameters. Some component parameters are rarely needed, or only useful for fine-tuning. These options stay hidden until Advanced Mode is enabled, to ease the experience for new users.
- #### [Constraint Relinking](constraint-relinking)
On any component, you can add constraints to the metarig bones. On generation, these constraints will be moved to the generated bones that make the most sense for a given component type. This is just to allow you to add some constraints when needed, without using Bone Tweak components.
For example, you can add Copy Rotation constraints to the metarig bones of an FK Chain component. That constraint can target the generated rig's bones, even though it's a different armature object. During generation, that constraint will be moved to the corresponding FK control.
- #### Root Parent
If specified, parent the root of this component to the chosen bone. You're choosing from the generated rig's bones here.
If the chosen bone is a bendy bone, additional options appear:
- Use an Armature constraint instead of normal parenting: This constraint takes bendy bone curvature into account, but it also means the parenting transforms will affect the bone's local matrix. If you want to use the bone's local transformations to drive something, you essentially won't be able to.
- Create parent helper bone: This fixes the local matrix issue by creating a parent helper bone for the aforementioned Armature constraint.
- #### Parent Switching
This option lets you create a parent switcher by entering the UI names of each parent on the left side, and the corresponding parent bone on the right side. The bone names are chosen from the generated rig.
The chosen bones will be the available parents for this component's root bone, and a selector will be added to the rig UI.
Different component types may implement parent switching differently. The specific behaviour is explained underneath the checkbox when it is enabled.
- #### Custom Property Storage
This setting will appear for components that need to create custom properties. Custom Properties are used for things like IK/FK switching.
- "Default": A bone named "Properties" will be created to store custom properties.
- "Custom": If you want to store the custom properties on an arbitrary bone, this option lets you select one. The selected bone has to be higher in the metarig hierarchy than this component, else you'll get a warning.
- "Generated": Component types implement their own behaviours for creating a custom property storage bone in a place that makes most sense for that component type. For example, the Biped Leg component will put the properties bone behind the foot control.
- #### Bone Sets
Components organize their bones using parameters called Bone Sets. These live under the Bone Organization sub-panel, which is only visible when Advanced Mode is enabled. Bone Sets are further explained on the [Organizing Bones](organizing-bones#organizing-bones) page.
</details>
## Chain: Toon
The most basic bone chain, consisting of independent controls connected by stretchy bendy bones. Can be useful for long, soft props, like a scarf on the floor, or soft circular things like a car tire.
Scaling the stretch controls uniformly gives the connected bendy bones more volume. Scaling them only on their local Y axis affects only the curvature of the chain.
<details>
<summary> Parameters </summary>
<img src="/media/addons/cloudrig/cloud_chain_toon.gif" width=500>
<img src="/media/addons/cloudrig/cloud_chain_veejay.gif" width=500>
- #### Stretch Segments
Number of sub-controls for each bone in the meta chain.
- #### Tip Control
Whether there should be a control at the tip of this chain.
- #### B-Bone Density
B-Bone segments will be distributed equally along the chain. As long as this value is >0, each bone will have at least 2 b-bone segments. A high density will not have a severe impact on performance.
- #### Sharp Sections
Bendy bones will not affect the curvature of their neighbours, unless their shared stretch control is scaled up on its local Y axis.
- #### Smooth Spline
Bendy bones will have a wider effect on the curvature of their neighbours, to easily get smoother curves. Works best when Deform Segments is 1, but that is not a requirement. Works fine with Sharp Sections, but it will only take effect once a stretch control is scaled up along its local Y axis.
- #### Preserve Volume
When enabled, deform bones will become fatter when squashed, and slimmer when stretched.
- #### Create Shape Key Helpers
Create helper bones that can be used to read the rotational difference between deform bones. Useful for driving corrective shape keys. These helpers will be prefixed "SKH" for "Shape Key Helper".
- #### Create Deform Controls
Create controls that allow you to translate and scale deform bones by disconnecting them from their neighbours.
</details>
## Chain: Face Grid
Extends the Toon Chain with the ability to create intersection controls in locations where multiple chains intersect. Can be used to create a grid of bendy bone chains. Can be useful for faces, but I personally no longer recommend this workflow. As cool as it looks, it's difficult and unintuitive to control small areas, and difficult to set up.
<details>
<summary> Parameters </summary>
<img src="/media/addons/cloudrig/cloud_face_grid.gif" width=500>
- #### Merge Controls
Create controls for points where multiple Face Grid chains intersect. If a [Chain Intersection](#chain-intersection) component is found at that intersection, that will be used instead of generating one from scratch.
</details>
## Chain: Eyelid
This component should be parented to an Aim component, presumed to be an eyeball. The rotation of that Aim component (eyeball) will affect this eyelid component. The strength of this effect can be adjusted by animators in the rig UI under a Face Settings panel.
This can give a decent fleshy eyelid set-up very quickly, but for a main character, I advise instead to create Action Set-Ups to connect the eyeball's up-down and left-right rotations to hand-crafted eyelid poses. This will allow you to hand craft the way the eyelids react to the eyes in great detail.
## Chain: FK
Extends the functionality of the Toon Chain. In addition to stretch controls, this also creates FK controls, which are parented to each other in a hierarchy. Useful for fingers, tails, hair, appendages, and a vast array of other things.
<details>
<summary> Parameters </summary>
- #### Create Root
Create a root control for this rig component. This is required for the Hinge Toggle.
- #### Hinge
Set up a hinge toggle. This will add an option to the rig UI. When FK Hinge is enabled, the FK chain doesn't inherit rotation from its parent.
- #### Position Along Bone
How far (0-1) the FK control should be along the length of the bones. A value of 0.5 can make it easier to create smooth curves.
- #### Inherit Scale
Sets the scale inheritance type for FK controls.
- #### Rotation Mode
Rotation Mode for the FK controls.
- #### Duplicate First FK
Create an extra parent control for the first FK control.
- #### Display FK in Center
Display the FK controls in the center of the bone rather than at the head of the bone. Only affects display, no functional difference. Purely up to preference.
<img src="/media/addons/cloudrig/test_animation.gif" width=500>
- #### Test Animation
This panel will only show when the ["Generate Action" Generator Parameter](generator-parameters) is enabled.
When this option is enabled, this component will add keyframes into the generated action which can be used to test character deformations.
- #### Rotation Range
The negative and positive rotation amount in degrees to use for the aforementioned test animation.
- #### Rotation Axes
Which axes you want tested in the test animation. For example for fingers, you probably only need one axis.
</details>
## Chain: Physics
Extends the functionality of the FK Chain component with a physics setup that utilizes Blender's built-in cloth simulation (for better or worse). The FK controls are constrained to a cloth mesh, and can't be posed. However, optional Physics controls can be created to deform the cloth mesh. The simulation is applied on top of this deformation. This can be useful for achieving a video-gamey physics sim for things like a character's ponytail or any other appendage.
<details>
<summary> Parameters </summary>
<img src="/media/addons/cloudrig/physics_chain.gif" width=500>
- #### Cloth Object
The cloth object that the FK chain will be constrained to with Damped Track constraints. The object should have vertex groups named "PSX-FK control_name". You can leave this unspecified at first and a simple object will be generated for you, which you can later modify.
- #### Force Re-Generate
If you intend to modify the cloth mesh, make sure to disable this option since otherwise re-generating the rig will also re-generate the cloth mesh. Enabling this is useful however when you are still iterating on the shape of the bone chain, in which case you want to re-generate the mesh every time.
- #### Pin Falloff
Type of the vertex weight falloff curve for the chain of vertices making up the cloth mesh.
- #### Pin Falloff Offset
Stretch factor for the pin falloff curve. Increasing this will make the cloth more stiff.
- #### Create Physics Controls
When enabled, this will create a PSX control chain which lets you control the cloth simulation. This will only work on pinned vertices - vertices with a pin weight of 0 will only be affected by the cloth simulation, while a weight of 1 means being fully affected by the armature.
</details>
## Feather
Some small tweaks to the FK Chain component to work a bit better for an individual feather of a bird. Requires a single bone.
This component type comes with no additional parameters.
## Spine: IK/FK
Builds on the FK Chain component with an additional option for creating an IK-like set-up for a spine.
<details>
<summary> Parameters </summary>
- #### Create IK Setup
Create an IK-like setup inspired by [BlenRig](https://gitlab.com/jpbouza/BlenRig). This will also add an IK/FK and IK Stretch setting to the rig UI.
- #### Duplicate Controls
Make duplicates of the main spine controls.
- #### World-Align Controls
Align the torso and hips controls fully with the world.
</details>
## Spine: Squashy
Also builds on the FK Chain component, but instead of letting the spine be led by the hip movements, this set-up allows the torso to be squashed, and the animator can control the amount of volume preservation. Useful for more cartoony stuff.
<details>
<summary> Parameters </summary>
- #### Duplicate Controls
Make duplicates of the main spine controls.
- #### World-Align Controls
Align the torso and hips controls fully with the world.
</details>
## Shoulder
A very simple extension of the FK Chain component, essentially just changes the bone shape.
<details>
<summary> Parameters </summary>
- #### Up Axis
Rotate the bone shape to align with this axis of the bone.
</details>
## Chain: IK
Extends the FK Chain component with IK functionality. The default direction of the pole target is determined based on the curvature of the bone chain. This requires at least 2 bones.
This rig adds IK/FK switching and snapping and IK Stretch settings to the rig UI.
<details>
<summary> Parameters </summary>
- #### Create IK Pole
Whether the IK constraint should use a pole target control, and whether such bone should even be created.
- #### IK At Tail
Put the IK control at the tail of the last bone, rather than at its head.
- #### World-Aligned IK Master
Align the IK master control with the nearest world axis. Not recommended for arms when your resting pose is an A-pose.
- #### Flatten Bone Chain
Although not a parameter, this button will flatten your chain along a plane with as few changes as possible, to ensure predictable IK behaviour.
</details>
## Chain: Finger
Modifies the IK Chain component with some behaviours specific to fingers. The fingers should bend along their local +X axis.
The IK settings of finger rigs are organized into a sub-sub-panel in the rig UI, because there are usually a lot of fingers, resulting in a lot of UI sliders.
<details>
<summary> Parameters </summary>
- #### Create IK Switch Control
Instead of using a UI slider for FK/IK switching, create a control in the viewport for the switching.
</details>
## Limb: Generic
Extends the IK Chain component with cartoony rubber-hose functionality. This requires a chain of exactly 3 bones.
<details>
<summary> Parameters </summary>
<img src="/media/addons/cloudrig/rubber_hose.gif" width=500>
- #### Duplicate IK Master
Create an extra child control for the IK master.
- #### Limit Elbow Axes
Lock the Y and Z rotation of the elbow, constraining the rig to realistic rotation axes. More importantly, this is necessary for precise IK->FK snapping.
- #### Rubber Hose
This option is only available when Smooth Chain is enabled and Deform Segments is greater than 1.
When this option is enabled, a slider is added to a rig UI which lets you have an automatic cartoony rubber hose limb effect.
- #### With Control
Instead of a UI slider, create a control bone that can be scaled to control the strength of the automatic rubber hose effect.
- #### Type
There are two ways to achieve the rubber hose deformation. One results in lengthening the limbs, while the other results in shortening them. It's a question of which style you prefer.
</details>
## Limb: Biped Leg
Extends the functionality of the Generic Limb component with footroll. This requires a chain of exactly 4 bones.
<details>
<summary> Parameters </summary>
<img src="/media/addons/cloudrig/cloud_leg.gif" width=500>
- #### Foot Roll
Whether to create a foot roll setup.
- #### Heel Pivot
If you are using foot roll, you can specify a bone whose location will be used as the pivot point for when the foot is rolled backwards.
</details>
## Curve: With Hooks
Create hook controls for an existing Curve object. Multiple splines within a single curve object are supported. Each spline will have its own root control.
<details>
<summary> Parameters </summary>
<img src="/media/addons/cloudrig/curve.gif" width=500>
- #### Curve
The target curve object to be hooked up to bone controls. Must be chosen!
- #### Custom Name
String to use to name the bones. If not specified, use the base bone's name.
- #### Inherit Scale
Scale inheritance setting of the curve hook and spline root controls.
- #### X Axis Symmetry
Controls will be named with .L/.R suffixes based on their X position. The curve object is expected to be symmetrical around its own local X=0 point; otherwise, results may be unpredictable.
- #### Controls for Handles
For every curve point hook control, create two children that separately control the handles of that curve point.
- #### Rotatable Handles
Set up the handle controls in a way where they can be rotated. Note that they will still allow translation, but if you translate them, rotating them afterwards will be unpredictable.
- #### Separate Radius Control
Instead of using the hook control's size to control the curve point's radius, create a separate child control to do so.
</details>
## Curve: Spline IK
Extends the functionality of the Curve With Hooks component, but instead of adding bones to control an existing curve object, it creates a new curve object along a chain of bones and sets up a Spline IK constraint. The curve is always re-generated along with the rig, so the curve parameter is grayed out, and you cannot specify a custom curve. Instead, if you want to change the shape or distribution of the curve, simply make those changes in the bone chain, since that is what the curve is generated from.
<details>
<summary> Parameters </summary>
<img src="/media/addons/cloudrig/spline_ik.gif" width=500>
- #### Curve Handle Length
A multiplier on curve handle length. 1.0 means the curve handle is long enough to reach the neighbour curve point.
- #### Deform Setup
How this component should behave with Armature modifiers:
- None: Don't create deformation bones. Then this component cannot be used with Armature modifiers.
- Preserve: Preserve the Deform checkbox of the bones as set in the metarig.
- Create: Create DEF- bones that are a separate chain with the Deform checkbox enabled.
- #### Subdivide Bones
When Deform Setup is set to Create, this value defines how many deforming bones to generate along each original bone in the metarig. More bones results in a smoother curvature. However, the Spline IK constraint only supports a chain of up to 255 bones.
- #### Match Controls to Bones
When enabled, control bones will be created at the locations of the meta chain's bones. When disabled, control bones will be distributed an equal distance from each other along the chain.
- #### Number of Hooks
When the above setting is disabled, this specifies how many controls should be placed along the chain.
</details>
## Lattice
Creates a lattice with a root and a hook control. The hook control deforms the inside of the lattice using a spherical vertex group that gets generated. You can manually add Lattice modifiers to objects that you want to be deformed by the created lattice. This is a very performant and very flexible way to slightly nudge or bulge things. Every rig should have a few of these lattice set-ups scattered around, particularly around clothing and faces. You never know when it might come in handy, but it often does.
<details>
<summary> Parameters </summary>
<img src="/media/addons/cloudrig/lattice.gif" width=500>
- #### Lattice
The lattice object that will be generated. If empty, one will be created.
- #### Regenerate
Whether the lattice should be re-generated from scratch or not. Disable this if you want to customize the lattice, otherwise any changes besides the object's name will be lost when you re-generate the rig.
</details>
## Aim
This rig creates an aim control for a single bone. Can be useful for cameras, eyes, or anything that needs to aim at a target.
<details>
<summary> Parameters </summary>
<img src="/media/addons/cloudrig/aim.gif" width=500>
- #### Aim Group
Aim rigs belonging to the same Aim Group will have a shared master control generated for them.
- #### Target Distance
Distance of the aim target from the base bone. This value is not in Blender units, but is a value relative to the scale of the rig.
- #### Flatten X
Discard the X component of the eye vector when placing the target control. Useful for eyes that have significant default rotation. This can result in the eye becoming cross-eyed in the default pose, but it prevents the eye targets from crossing each other or being too far from each other.
- #### Create Deform
Create a deform bone for this rig. May not be always needed, for example if you just want to object-parent something to the aim rig, like a camera.
- #### Create Root
Create a root bone for this rig.
- #### Create Sub-Control
Create a secondary control and deform bone attached to the aim control. Useful for fake eye highlights.
</details>
## Bone Copy
This component type lets you copy a connected chain of bones over to the generated rig. Often used just to copy a single bone. Useful for face controls or any other arbitrary control you want to add.
Constraints will be [relinked](constraint-relinking) to the copied bone.
<details>
<summary> Parameters </summary>
<img src="/media/addons/cloudrig/cloud_bone_parameters.png" width=500>
- #### Create Custom Pivot
Create a parent control whose local translation is not propagated to the main control, but its rotation and scale are.
- #### Create Deform Bone
Create a second bone with the DEF- prefix and the Deform property enabled, so you can use it as a deform bone.
- #### Ensure Free Transformation
If this bone has any constraints, move them to a parent bone prefixed with "CON", unless the constraint name starts with "KEEP".
- #### Custom Properties: UI Sub-panel
Choose which sub-panel the custom properties should be displayed in. If empty, the properties won't appear in the rig UI.
- #### Custom Properties: UI Label
Choose which label the custom properties should be displayed under. If empty, the properties will display at the top of the subpanel.
</details>
## Chain Intersection
This component is an extension of Bone Copy with a special interaction with Face Grid components.
<details>
<summary> Details </summary>
When a Chain Intersection is placed at the same location as one or more Face Grid bones, the Chain Intersection will be used as the intersection control, rather than automatically creating that intersection control.
This can be useful because the automatically generated intersection controls have unwieldy bone names, and their orientation may also need to be customized.
<img src="/media/addons/cloudrig/cloud_chain_anchor.gif" width=500>
This component has no additional parameters.
</details>
## Bone Tweak
This component type lets you tweak aspects of a single bone that is expected to exist in the generated rig.
<details>
<summary> Parameters </summary>
<img src="/media/addons/cloudrig/cloud_tweak_parameters.png" width=500>
- #### Additive Constraints
If true, the constraints on this bone will be added on top of the target bone's already existing constraints, and then [relinked](constraint-relinking). Otherwise, the original constraints will be overwritten.
- #### Tweak Parameters
The bone's properties are split into these categories:
- Transforms
- Locks
- Rotation Mode
- Bone Shape
- Color Palette
- Collections
- Custom Properties
- IK Settings
- B-Bone Settings
Each of these can be chosen to be copied over to the target bone or not. For example if you just want to add some extra constraints to a bone, you probably don't want to overwrite its transforms, bone shape, etc, so you would leave all of those unticked, and they will remain untouched.
</details>

View File

@ -1,184 +0,0 @@
# Contribute
This project has grown large enough that external contributors would be fairly welcome. Just [get in contact with Demeter](https://blender.chat/direct/mets) before you start coding.
### Adding a new component type or parameter
If you have an idea for a new component type or parameter, that is welcome. Here are some notes:
- Open an Issue to discuss the design first.
- Consider whether the functionality should just be a parameter on an existing component type or an entirely new type on its own. Usually both are possible, but when one functionality requires multiple parameters that don't make sense on any of the current component types, that's when it should be split out into a new component type.
- No redundant or double functionality. If the new functionality is very similar to that of another component type, let's try to find a way to make them share the relevant code. Maybe that means putting in extra work to also make this functionality work with a bunch of other component types, and a new parameter can be added all the way up in cloud_base.
- Provide clear and convincing explanation of why this functionality is useful, ideally showing a character where the functionality was used where no other solution would've worked as well.
- If I can press it, it should do something. If it doesn't do anything, don't let me press it. (see forced_params dictionary in some component classes)
To get started, check out cloud_template.py. This is what I start from when implementing a new component type; that is to say, it's the most basic skeleton code of a CloudRig component type. Note that it inherits a lot of shared functionality from Component_Base.
- Implement all the parameters and code. There is a regular __init__() and then the generator will call `create_bone_infos()` as your entry point.
- Add a component sample in MetaRigs.blend, named according to the convention you'll find in there.
- Add documentation in the wiki's Cloudrig-Types page, again sticking to the conventions established there.
### Conventions
Suggestions for more conventions are welcome, if you find things that could be more consistent. But what's already written down here is unlikely to change. I myself don't always manage to stick to these conventions, but I should. If you'd like to understand a part of the code but can't, feel free to let me know.
- Ideally every file should have fully type annotated functions with clear and verbose docstrings.
- PEP 585 type annotations should be used, with native types like `list`. Avoid the older PEP 484 style that used classes like `typing.List`. (A small illustrative sketch of these conventions follows this list.)
- Avoid all short variable names.
- Careful when naming a variable `bone`. It's fine, but it should be `ebone` for EditBone, `pbone` for PoseBone, `bone_info` for BoneInfo instances, and `bone_name` if it's just a string.
- All component types start with `cloud_`
- Always use the Troubleshooting module and the `add_log()` function to warn riggers about potential issues, big or small.
- Functions should be defined top to bottom in roughly the order they run.
- Functions that override an inherited one should specify in the docstring what module they are overriding the function from.
- Always be conscious of whether calls like `bpy.data.objects.get()` should receive a `(string, library)` tuple or not.
- Code is formatted with Black.
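As a rough illustration of the annotation and naming conventions above (this is a hypothetical helper written for this page, not actual CloudRig code; the real add-on handles name mirroring via its CloudNameManager):

```python
def mirror_bone_names(bone_names: list[str]) -> list[str]:
    """Return the .L/.R-flipped counterpart of each bone name.

    Illustrates PEP 585 annotations (`list[str]`) and the `bone_name`
    naming convention for plain string variables.
    """
    flipped_names: list[str] = []
    for bone_name in bone_names:
        if bone_name.endswith(".L"):
            flipped_names.append(bone_name[:-2] + ".R")
        elif bone_name.endswith(".R"):
            flipped_names.append(bone_name[:-2] + ".L")
        else:
            flipped_names.append(bone_name)
    return flipped_names
```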
## Modules
Here are descriptions of each python module (file) in CloudRig.
<details>
<summary> generation </summary>
- #### cloud_generator.py
This module holds the generation operator, which is an important code entry point. From there, you can walk through the entire generation process.
- #### actions_component.py
The [Actions](actions) generator feature is implemented here. UI is implemented in ui/actions_ui.py.
- #### naming.py
Houses CloudNameManager, which is instantiated by the generator and referenced from all rigs via self.naming, and provides string operators useful in creating and mirroring bone names.
- #### test_animation.py
The "Generate Test Action" feature is implemented here. This is drawn in the Generation tab of a metarig, and it works with FK Chain components to save you time in creating an animation where you rotate all the joints to test deformations.
- #### troubleshooting.py
All troubleshooting features:
- The drawing, storage and functionality of the Generation Log UI seen on metarigs.
- The CloudLogManager class which is instantiated by the generator as self.logger. Components have wrapper functions to auto-fill some parameters, those being `self.add_log()` and `self.raise_generation_error()`. These functions add entries to the log storage.
- All Quick Fix operators that help quickly troubleshoot various problems.
- Bug and stack trace reporting functions (opening the Issues page on this repo and pre-filling it with useful information)
- #### cloudrig.py
This is the file that gets loaded with all generated rigs. This script is not procedurally generated. Instead, a nested dictionary is written to a custom property during generation, called 'ui_data'. This is mostly created in `utils/ui.py/add_ui_data()`, and then used by cloudrig.py to draw all the UI elements.
These UI elements are in the sidebar under the CloudRig panel, and contain settings like custom properties, IK/FK switching, parent switching, snapping and baking.
</details>
<details>
<summary> metarigs </summary>
The `__init__.py` here implements the metarigs and component samples UI lists that appear in the Object->Add->Armature UI. Metarigs and Samples are technically the same thing, and both are loaded from MetaRigs.blend.
</details>
<details>
<summary> operators </summary>
Operators to help with authoring metarigs and speed up workflow.
- **better_bone_extrude**: Binds to the E key, overwriting Blender's default bone extrude operator. Extruding a bone named "Bone1" will result in a bone named "Bone2" rather than "Bone1.001".
- **bone_selection_pie_ops**: Operators for the bone selection pie menu, bound to Alt+D in armature pose/edit/weight paint modes.
- **bone_selection_pie_ui**: UI elements for said pie menu.
- **copy_mirror_components**: Operators for copying and mirroring metarig component parameters. Found in the CloudRig header menu in the 3D View.
- **edit_widget**: An operator bound to Ctrl+Alt+E to toggle edit mode on a bone's widget.
- **flatten_chain**: Flatten a bone chain along a plane, useful for straightening limbs for good IK behaviour. Drawn in the IK Chain component's UI.
- **pie_bone_parenting**: Pie menu bound to the P key for bone parenting, even in pose mode.
- **pie_bone_specials**: Pie menu bound to the X key for deletion and symmetry in armature pose/edit modes.
- **symmetrize**: The improved symmetrize functionality found in the above pie menu.
- **toggle_action_constraints**: Useful in Action-based rigging workflow, button is drawn in the Action editor header.
- **toggle_metarig**: Toggle between metarig and generated rig (visibility, object mode, bone collections, bone selection). Default shortcut: Shift+T.
</details>
<details>
<summary> rig_component_features </summary>
- #### widgets/__init__.py
Like metarigs, most widgets are appended from a Widgets.blend file. This is used
- #### bone_gizmos.py
Bone Gizmos is an experimental/abandoned addon of mine, and this module allows components to interface with this addon.
- #### animation.py
Functions used by [cloud_fk_chain](cloudrig-types#cloud_fk_chain) and the [Generate Test Animation](generator-parameters) feature.
- #### bone_set.py
CloudRig's bone organization system that takes care of creating sets of parameters to customize the collection and color assignment of bones. All BoneInfo instances created during generation should be created with my_bone_set.new(), to ensure that every bone can be organized by the rigger.
- #### bone.py
Abstraction layer for bones, constraints and drivers, which are used all over CloudRig. These avoid a lot of headaches that come with interacting with real Blender data directly (in exchange for other, smaller headaches!).
Existing bones are loaded into BoneInfo instances in `load_metarig_bone_infos()`, which are then turned back into real bones in `write_edit_data()` and `write_pose_data()`.
- #### mechanism.py
Houses the CloudMechanismMixin mix-in class which is inherited by all component types and provides generic utilities to manipulate bones, constraints and drivers.
- #### custom_props.py
Implements the shared parameters of all component types relating to storing and displaying custom properties in the rig UI.
- #### object.py
Houses CloudObjectUtilitiesMixin which is inherited by all component types and provides generic utilities to control actual Blender objects, such as making things visible, assigning things to collections, transform locks, etc.
- #### parent_switching.py
UI for the [Parent Switching shared parameters](cloudrig-types#shared-parameters). This just means creating certain UI data, drivers and constraints, which cloudrig.py will use for displaying parent switching sliders and operators. Those operators are implemented in cloudrig.py.
- #### ui.py
Houses CloudUIMixin which is inherited by all component types and provides utilities for drawing the UI of parameters as well as storing UI data. The `add_ui_data()` function is used to store data in the rig's `ui_data` custom property, which will be later read by cloudrig.py in `draw_rig_settings()` to draw the rig's UI.
</details>
<details>
<summary> rig_components </summary>
All the [component types](cloudrig-types) in the feature set.
Also has cloud_template which is the base I use when starting a new component type.
All component types inherit from cloud_base.py/Component_Base.
Entry points are of course `__init__()` and `create_bone_infos()`.
</details>
<details>
<summary> ui </summary>
- **actions_ui**: UI for the Action system.
- **cloudrig_dropdown_menu**: The "CloudRig" editor header menu in armature pose/edit mode.
- **cloudrig_main_panel**: The "CloudRig" panel and "Generation" sub-panel in the Properties editor on armatures.
- **rig_component_list**: The "Component List" sub-panel in the Properties->Armature editor.
- **rig_component_subpanels**: The parameter sub-panels in the Properties->Bone editor.
- **rig_component_ui**: The parameter main panel in the Properties->Bone editor.
</details>
<details>
<summary> utils </summary>
- **curve.py**: Utility functions used by curve-based components, particularly to help with curve symmetry.
- **lattice.py**: Some utilities used by cloud_lattice, taken from my Lattice Magic addon.
- **maths.py**: Any pure math, even if it is only used in one place, goes here. That means this module should never import anything from any other part of CloudRig.
- **misc.py**: Code that hasn't been organized yet. Ideally this module shouldn't exist, since it's not clear what is in it.
- **post_gen.py**: Code that could be useful to run from post-generation scripts. Not actually used anywhere in the add-on.
</details>
<details>
<summary> Repo root </summary>
- **__init__.py**
Where the add-on registers itself into Blender's RNA system. I implement a pattern where each sub-folder's __init__.py imports its contents and puts them in a "modules" list. The listed modules are traversed recursively here; any registerable classes they store in a "registry" list get registered, and their register() and unregister() functions are called as appropriate. (A rough sketch of this pattern follows this section.)
- **manual.py**
Makes sure right clicking on CloudRig properties and then clicking on Open Manual goes to the relevant page on this wiki.
- **versioning.py**
Metarig versioning.
All metarigs store a version number, and this module adds an app handler that runs whenever a new blend file is loaded, to check for metarigs whose version is lower than the current one. If it finds any, it will automatically do its best to upgrade the metarig's [component types and parameters](cloudrig-types) to the latest correct names and values.
For example, the cloud_copy and cloud_tweak bone types used to be a single component type with an enum to switch between the two behaviours. When that split was implemented, the old enum value remained accessible, and is used to assign the correct new component type accordingly.
</details>
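A rough sketch of the recursive registration pattern described under `__init__.py` above. The attribute names ("modules", "registry") come from the text; everything else is illustrative, not the add-on's actual code:

```python
import bpy

def register_recursive(module):
    """Recursively register a module's classes and call its register() hook."""
    # Register any classes the module exposes in its "registry" list.
    for cls in getattr(module, "registry", []):
        bpy.utils.register_class(cls)
    # Call the module's own register() function, if it defines one.
    register_func = getattr(module, "register", None)
    if callable(register_func):
        register_func()
    # Recurse into sub-modules listed in the module's "modules" list.
    for submodule in getattr(module, "modules", []):
        register_recursive(submodule)
```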

View File

@ -1,14 +0,0 @@
# Constraint Relinking
When working with CloudRig, you generally don't have to define individual constraints yourself - after all, the whole point of [rig components](cloudrig-types) is to generate the bones and constraints for you.
But unique characters have unique needs, where you may want to **add additional constraints** on specific bones that were generated.
So, if you want to tweak a generated rig by adding a constraint, you have a few options:
- Create a [Bone Tweak](cloudrig-types#bone-tweak) component with the name of the bone you want to add the constraint to, and add the constraint. This can make your metarig messy with the addition of many bones.
- Implement adding your constraint in the [post-generation script](generator-parameters#post-generation-script). This is the most flexible solution, but these scripts can also get bloated and messy, and they are a bit secretive. Plus, not everybody knows how to code.
- Your third option is to use Constraint Relinking.
You can do this when you want to add constraints to the primary controls of a rig component, such as the `STR-` controls of a [Toon Chain](cloudrig-types#chain-toon) component, or the `FK-` controls of an [FK Chain](cloudrig-types#chain-fk) component.
All you have to do is add a constraint on the corresponding metarig bone. You can let the constraint target the generated rig directly, or, if you want to avoid that, use a naming convention where you put the target bone's name in the constraint name, separated by an `@` character. See the video below.
<video src="/media/addons/cloudrig/constraint_relinking.mp4" controls></video>
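For example, a minimal sketch of the `@` naming convention (the bone and constraint names here are made up for illustration):

```python
import bpy

metarig = bpy.data.objects["MetaRig"]        # your metarig object (assumed name)
pbone = metarig.pose.bones["UpperArm.L"]     # a metarig bone belonging to some component

con = pbone.constraints.new('COPY_ROTATION')
# Instead of pointing the constraint at the generated rig directly, the part of
# the name after "@" tells CloudRig which generated bone to target once the
# constraint has been relinked.
con.name = "Copy Rotation@FK-Forearm.L"
con.influence = 0.5
```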

View File

@ -1,22 +0,0 @@
# Generator Parameters
These parameters are found under Properties->Armature->CloudRig->Generation, and are the high level options used for generating a rig from a metarig.
<img src="/media/addons/cloudrig/generator_parameters.png" width=450>
### Target Rig
The armature object used as the generation target. If empty, a new one will be created, and assigned here. You may rename the object.
### Widget Collection
The collection where rig widgets will be stored. This collection shouldn't contain anything else, since this is also used for detecting duplicate or unused widgets. If not specified, it will be created. You may rename the collection. If the refresh icon is enabled, widgets will be force-reloaded each time you regenerate.
### Ensure Root
Name of the root bone. All rigs must have a root bone, so if there is no bone with this name, it will be created. This bone should have the Bone Copy component assigned. Bones without parents will be automatically parented to this bone. Just like any other Bone Copy component, you may rename or customize this bone however you want. However, note that when you rename the bone, you will need to re-select it in this selection box.
### Properties Bone
Name of the default properties bone to create, when necessary. For example, for a limb rig with IK/FK sliders, those sliders are properties, which need to be stored somewhere. This setting specifies a bone name to create as fallback for that storage.
### Post-Generation Script
You may specify a text datablock stored in this .blend file, to be executed as a python script as one of the last steps of rig generation. You can use this to make hyper-specific tweaks to your rigs, or loop over all the bones and change some settings. You may leave it empty.
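For example, a post-generation script might look something like this sketch (the rig object name and the specific tweaks are made up; adjust them to your own rig):

```python
import bpy

rig = bpy.data.objects["RIG-Character"]  # the generated rig object (assumed name)

# Example tweak: force all FK controls to Euler rotation and lock their scale.
for pbone in rig.pose.bones:
    if pbone.name.startswith("FK-"):
        pbone.rotation_mode = 'XYZ'
        pbone.lock_scale = (True, True, True)
```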
### Generate Action
This option is relevant when your rig contains FK Chain components. If this option is enabled, a test Action will be generated with keyframes defined by the generated rig's hierarchy and the parameters of each FK Chain component. The purpose of this Action is to help you in a common weight painting workflow, where the character is rotated at each of their joints to test deformations. If you want to make manual tweaks to this action, make sure to disable this option, so that your pose tweaks don't get overwritten when you regenerate the rig.

View File

@ -1,57 +0,0 @@
# CloudRig
Welcome to CloudRig!
This page aims to give you an overview of CloudRig's features, with links for further diving into each topic.
## Accessing these pages from Blender
You can right click on most CloudRig UI elements and click on Online Manual to open the relevant page in a browser.
## General workflow
The rig generation workflow with CloudRig revolves around inputting parameters on a simple **Metarig**, which is then used to generate the **Control Rig**, which is what you will actually use to control your character. The Control Rig can be re-generated based on new Metarig parameters or proportions as many times as needed, until you get exactly what you want.
- Create a skeleton outlining the proportions of your character. This is your Metarig.
- Assign a Rig Component to specific bones that define the features that you want in your rig, such as a spine, arms, and legs.
- Customize the components using their parameters to change the available features and have the level of complexity you need.
- Generate the rig.
The simplicity of this workflow allows for fast iteration. If you want to add or remove a feature from the rig, there's no need for copy-pasting and renaming of hundreds of bones, or worse, manually making changes to dozens of bones, constraints and drivers just to make a slight change to how the rig behaves. You just tick or untick a checkbox and hit Regenerate.
So how is it done?
## Getting Started
Spawn the Cloud Basic Human preset metarig via **Add->Armature->CloudRig Metarigs->Cloud Human**.
Now try selecting the "UpperArm.L" bone in pose mode. Then go to **Properties->Bone->CloudRig Component**.
<img src="/media/addons/cloudrig/cloudrig_component.png" width=400>
As you can see here, this bone is assigned the "Limb: Generic" Component Type, and you can see all of its parameters in this panel. This Component Type and these parameters determine the behaviour of the control rig that will be generated when you press the Generate CloudRig button.
CloudRig aims to cover a wide and ever expanding variety of needs with its component types. This is facilitated by the fact that it is being used in production at the Blender Studio. Whatever weird character (or prop!) those crazy guys come up with, CloudRig has to be able to generate a rig for it.
The [CloudRig Types](cloudrig-types) page covers all the available component types included in CloudRig.
## Starting From Scratch
If you want to start from a fresh armature, all you need to do is enable CloudRig under **Properties->Data->CloudRig**. Here you will also find the [Generation Parameters](generator-parameters), which are a few high level pieces of data used for the generation process. The Rig Components sub-panel shows you a hierarchical list of bones which have components assigned to them. You'll also find some other panels, which are mentioned below.
## Actions & Face Rigging
If you want to rig faces with CloudRig, you will probably want to use a combination of component types such as [Bone Copy](cloudrig-types#bone-copy), [Chain: Eyelid](cloudrig-types#chain-eyelid) and [Aim](cloudrig-types#aim). But the real magic will happen in the [Action system](actions). You can get an example of this by playing with the included Sintel metarig.
## Generation Log
There are many ways to make mistakes while using CloudRig, also while rigging in general. CloudRig will NEVER automatically fix your mistakes, but it will try to detect them and give you suggestions or even one-button solutions to fix them.
After generating your rig, you will find a list of potential issues here, in the Generation Log panel. Some of these issues will have a tailor-fitted button to fix the issue. This is all handled by the [Troubleshooting](troubleshooting) module.
## Rig UI
Once generated, select your generated rig, and press the N key to bring up the Sidebar. You should see a CloudRig tab, which contains the rig UI. This is where the animators will be able to find rig settings and a collection selector.
<img src="/media/addons/cloudrig/rig_ui.png" width=400>
CloudRig provides a way to add arbitrary custom properties to this UI as well, in case you want to allow animators to customize a character's outfits, materials, etc, with a nice and clean UI. Check out the [Properties UI](properties-ui) page to see how that works.
## Organizing Bones
If you don't like the collections that CloudRig assigns the generated bones to by default, you can customize them in the [Bone Organization](organizing-bones#organizing-bones-1) parameters, which are only visible when [Advanced Mode](cloudrig-types#advanced-mode) is enabled.

View File

@ -1,64 +0,0 @@
# Organizing Bones
In Blender, bones can be organized using [Bone Collections](https://docs.blender.org/manual/en/latest/animation/armatures/bones/bone_collections.html), or [Bone Selection Sets](https://docs.blender.org/manual/en/dev/addons/animation/bone_selection_sets.html), if you use that addon.
## Bone Collections
If you generate the Cloud Basic Human metarig and select the generated rig, you should be able to find this panel under **Sidebar(N panel)->CloudRig->Bone Collections**:
<img src="/media/addons/cloudrig/sidebar_collections.png" width=500>
You can also summon this collection list menu using Shift+M on a CloudRig armature.
## Organizing Bones
If you want to customize which generated bones get placed in which bone collections, you can do this using Bone Sets.
Let's say we have a strand of hair rigged with the FK Chain component type, and we want the hair FK bones to go on a Hair collection that we created, not the "FK Controls" collection that it uses by default.
Organizing bones is considered an advanced feature, so enable **Advanced Mode**.
At the bottom of the parameters, you'll find the Bone Organization sub-panel:
<img src="/media/addons/cloudrig/bone_sets.png" width=500>
And as you can see, all we have to do is assign our Hair collection as one of the collections that the FK Controls of this component should be assigned to. You can assign them to as many collections as you wish.
#### Bone Colors
You can also choose a color preset to assign. This preset will be converted to a custom color, meaning your personal color presets will propagate to whoever uses your rig. This ensures the rig looks the same way, no matter who's using it.
Additionally, you can change Blender's default color presets to CloudRig's recommended ones in the preferences, using this button:
<img src="/media/addons/cloudrig/bone_color_preset.png" width=800>
## Custom Collections
If you want to create collections even more granularly, you can simply create them on the generated rig, and assign whatever bones you want. You just need to let CloudRig know that you want to preserve these collections when re-generating the rig. And to do that, you first need to enable the collection authoring UI:
<img src="/media/addons/cloudrig/collections_extras.png" width=400>
Then you can create your collections, assign bones, and the important part: Mark the collection as preserved, using the shield icon. These collections will be fully preserved when you regenerate the rig.
<img src="/media/addons/cloudrig/pasted_sel_sets.png" width=400>
## Selection Sets
Instead of having to use multiple systems to organize your bones, CloudRig implements all the features of SelectionSets.
To start, you can easily convert your selection sets to collections:
1. Enable collection authoring UI **on the generated rig**, as mentioned above.
2. Copy Selection Sets to clipboard as you normally would:
<img src="/media/addons/cloudrig/copy_sel_sets.png" width=400>
3. Paste Selection Sets as Collections **on the generated rig** via CloudRig:
<img src="/media/addons/cloudrig/paste_sel_sets.png" width=600>
4. Your selection sets are pasted. The filled circle indicates that they were marked for Quick Access, and the shield indicates that they will be preserved when the rig is re-generated.
<img src="/media/addons/cloudrig/pasted_sel_sets.png" width=400>
## Quick Select
Collections marked with the circle will be included in the Quick Select menu, which is bound to **Shift Alt W** by default:
<img src="/media/addons/cloudrig/collections_quick_select.png" width=300>
## Bone Display Size
You might often find that the bone shapes are too big or too small for some parts of your character. This can result in an eyesore, or worse, important controls only being visible with Bone X-Ray. For this reason, all CloudRig components' bone shapes will scale according to the BBone scale of the bone in the metarig. You can adjust the BBone scale of bones using Ctrl+Alt+S, but only if your armature's bone display type is set to BBone. This doesn't affect any behaviour of the rig; it's purely a visual aid.
*Here I increase the BBone scale, then re-generate the rig, to make sure the FK controls are bigger than the mesh.*
<img src="/media/addons/cloudrig/bbone_scale.gif" width=600>

View File

@ -1,201 +0,0 @@
# Properties UI
CloudRig lets you build a custom rig UI, containing built-in properties, custom properties, and even operators. This page aims to guide you through that process even if you're new to Blender as a whole.
## What are Custom Properties?
In Blender, you can define [Custom Properties](https://docs.blender.org/manual/en/latest/files/custom_properties.html) on objects or bones. You can specify their name, min/max/default values, tooltip, whether they're a floating point number, a whole number, a boolean toggle, a color, and so on.
You can then use [Drivers](https://docs.blender.org/manual/en/latest/animation/drivers/index.html) to connect these properties to your character and/or rig, to allow animators to intuitively configure things.
##### Use case examples:
- An integer property to swap between different outfits, by driving the visibility of objects and modifiers.
- A boolean toggle to switch between two different rig behaviours, by driving the influence of a set of bone constraints and bone properties.
- A color property to change the eye color of a character, by driving an RGB node in a material.
##### Some general tips:
- Be sure to properly set the default value of each property.
- You can reset any property in Blender to its default by mouse hovering it and hitting Backspace.
- You can enter a Description for properties, which will be shown when they are mouse hovered.
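If you prefer to set such a property up from Python rather than through the UI, a rough sketch might look like this (the bone and property names are placeholders for this example):

```python
import bpy

pbone = bpy.context.object.pose.bones["Properties"]  # whichever bone should own the property

# Create an integer custom property with a default, a range, and a tooltip.
pbone["Outfit"] = 1
ui_data = pbone.id_properties_ui("Outfit")
ui_data.update(default=1, min=0, max=2, description="Swap between outfits")

# Allow the property to be animated and overridden when the rig is linked.
pbone.property_overridable_library_set('["Outfit"]', True)
```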
## What can CloudRig do?
If you find Blender's built-in way of displaying custom properties a bit ugly, limited, and disorganized, that's when CloudRig can offer a bit of help. You can see an example of this on the Cloud Human Metarig.
<img src="/media/addons/cloudrig/props_ui_example.png" width=800>
With CloudRig's UI editor you can:
- Organize properties into collapsible panels.
- Display built-in properties alongside custom properties.
- Put properties under text labels.
- Put more than one property in a single row.
- Easily change the order in which panels, labels, and rows are displayed.
- Have hierarchical properties, where a property is only displayed under another property when it has certain values. Useful for complex clothing options.
- Have one Python Operator displayed next to each property. For example, the generation process creates IK/FK snapping buttons next to their relevant properties.
- Easily work with Blender's linking system, because the Library Overridable setting of displayed properties will be enabled automatically.
- Display a custom name for each value of an Integer or Boolean property.
- Display custom icons for the True/False states of a Boolean property.
## UI Editing Workflow
##### Step 1: Enabling UI Edit Mode
Let's say you want to add a property to your rig's Settings panel. You can find CloudRig's Properties UI Editor by selecting any armature with the CloudRig setting enabled (ie. a metarig), and navigating to: 3D View -> Sidebar (N-panel) -> CloudRig -> Settings -> **UI Edit Mode**.
<img src="/media/addons/cloudrig/props_ui_edit_mode.png" width=800>
Enabling this mode reveals (among other things) the "**Add Property to UI**" button. This pops up the following panel, where you can fill in all the info about what/where/how you want to add to the UI. These are all explained in the next section.
---
##### Step 2: Adding (or editing) a UI Element
Whether you're adding a new UI element or editing an existing one, you will see this same pop-up:
<img src="/media/addons/cloudrig/props_ui_add_prop_simple.png" width=800>
- **Bone icon**: Toggle whether you want to use a convenient bone selector, or type in the data path of a property owner.
- **Property Bone**: Select the bone that should contain the property that you want to add to the UI.
- **Property Owner**: Type in a data path to anything, eg. `pose.bones["Spine"].constraints["Stretch To"]` will point at the Stretch To constraint on the Spine bone.
- **Property Name**: Name of the property on the selected property owner.
- To continue the above constraint example, you could type `influence` in this field, to simply add the constraint's influence slider to the UI.
- If the chosen property owner has Custom Properties, a drop-down selector will be shown of existing ones.
- **Plus icon**: Instead of using the drop-down selector, type in anything, allowing you to create a new property.
- **List icon**: Instead of selecting a single property, add ALL custom properties of the selected owner.
- If the property exists, a preview of how it will look is shown.
***The remaining settings are optional***:
- **Hierarchy icon**: Toggle whether you want to add a UI element to a sub-panel, or a child element to an existing element.
- **Subpanel**: Name of the sub-panel this UI element should be added to. Can be empty, and then it will be placed outside of any sub-panels.
- **Parent Element**: A drop-down selector of all current UI elements. Only visible when the Hierarchy icon is enabled.
- **Parent Value**: What value the parent element's property must be, in order for this property to be visible. You can type in a single number, or comma-separated numbers, like `1, 2, 3`.
- **Label**: If specified, the property will be displayed under this text label. Handy for categorizing things within a panel.
- **Row ID**: When two UI elements share a panel, a label, and a Row ID, they will be displayed next to each other. Handy for left/right properties, or for grouping bone collections.
- **Display Name**: For when you change your mind about the name of a property, but that property has already been used in animations. Changing the name of the property would break those animations, but you can always change the display name without any consequence.
- **Value Names**: Only for Integer and Boolean properties, you can enter a comma-separated list of strings here, eg. `-, Default, Fancy`. This will make it so that "Default" is displayed when the value is 1, and "Fancy" will be displayed when the value is 2. You still need to enter a string for the value 0, even if your property will not use it, which is why I started with a `-,` in this example.
- **True/False Icon**: Only for Boolean properties, you can choose a custom icon for each state.
- **Operator**: You can choose a single operator to be displayed next to this property.
- Selecting an operator will display all available options for that operator, and you can specify them all.
- **Operator Icon**: You can also choose an icon to use for this operator.
That's a lot of options, but most of them are optional, you never see them all at once, and you can edit them any time, so don't feel overwhelmed.
In the example image above, I've pressed the + icon, which let me type in anything, and since the "Shoes" property didn't exist yet on the selected bone, it got created with a value of 1.0.
---
#### Step 3: Configuring a Custom Property
Let's use the gear cog icon to bring up Blender's [built-in custom property editing operator](https://docs.blender.org/manual/en/latest/files/custom_properties.html#editing-properties), and change it to an integer, set the default, min, max, and a tooltip.
<img src="/media/addons/cloudrig/props_ui_edit_prop.png" width=800>
Also note that the **Library Overridable** flag is already enabled. This happened automatically in the previous step. This allows the property to be used even when the rig is linked to another .blend file.
---
#### Step 4: Managing UI Elements.
Along with the gear cog, we have a few other icons next to each property while UI Edit Mode is enabled.
<img src="/media/addons/cloudrig/props_ui_element_ops.png" width=400>
- **Double-Arrow**: You can re-order elements within an area by clicking on this, then moving your mouse up and down, then left click to confirm or right click to cancel. Simple as that! The button won't be visible if there's a single element in the area.
- **Plus**: Add child properties, which will only appear when this property has certain values. Useful for complex outfits.
- **Gear Cog**: Blender's built-in custom property editing operator, as explained [above](#step-3-configuring-a-custom-property).
- **Pencil**: This lets you edit the UI data of a UI element, as described [below](#step-5-editing-a-ui-element).
- **X**: This lets you remove the property from the UI. If you hold Shift while clicking this, it will also remove the underlying property itself.
---
##### Step 5: Editing a UI Element
The pencil icon next to a property lets you edit the property's CloudRig UI settings:
<img src="/media/addons/cloudrig/props_ui_edit_value_names.png" width=800>
As you can see, this looks identical to [adding a UI Element](#step-2-adding-or-editing-a-ui-element).
In this case, I just wanted to edit the Value Names of the property. Now it will show the specified words for each value:
- 0: "Barefoot"
- 1: "Default Shoes"
- 2: "Sandals"
You can see a preview of this near the top of the pop-up panel, but you will only see it in the real UI once you confirm by clicking OK.
#### Some things to note:
- Enum Properties are not possible to create as Custom Properties, which is why I use an Integer instead.
- The "Row ID" and "Display Name" fields were filled in automatically when we added the property to the UI.
## Drivers
Of course at the end of the day, these properties don't do anything on their own. They need to be hooked up to things using Drivers. You can do this by right-clicking on properties and using the "Copy as New Driver", "Paste Driver", and "Edit Driver" options. You can learn more from the [Drivers](https://docs.blender.org/manual/en/latest/animation/drivers/usage.html) page of the Blender Manual.
## Example Use Case: Bone Collections
Besides outfit swapping, you can also use this system to make a grid UI of Bone Collections, like so:
<img src="/media/addons/cloudrig/props_ui_bone_collections.png" width=600>
Which is done like this:
<img src="/media/addons/cloudrig/props_ui_bone_collections_edit.png" width=600>
## Example Use Case: Custom Operator
You can implement your own Python operator in a text datablock, then display it next to a property.
CloudRig's generation process uses this to add the IK/FK snapping & baking operators, among others.
This example implements a preset button for hair colors:
```python
import bpy
from bpy.props import StringProperty, EnumProperty

# Hard-coded hair color presets (RGB values).
presets = {
    'Blonde' : [0.696779, 0.565850, 0.183357],
    'Dark'   : [0.065083, 0.015941, 0.004878],
    'Green'  : [0.007476, 0.196397, 0.007357]
}

class MyPresetOperator(bpy.types.Operator):
    """Set some properties according to a hard-coded preset"""
    bl_idname = "object.my_preset_operator"
    bl_label = "Apply Preset"

    # Name of the bone and of the custom property to write the preset into.
    prop_bone: StringProperty()
    prop_name: StringProperty()

    preset_color: EnumProperty(
        name="Preset",
        items=[
            ('Blonde', 'Blonde', 'Blonde'),
            ('Dark', 'Dark', 'Dark'),
            ('Green', 'Green', 'Green'),
        ])

    @classmethod
    def poll(cls, context):
        return context.active_object is not None

    def invoke(self, context, _event):
        # Show a pop-up dialog so the user can pick a preset before executing.
        return context.window_manager.invoke_props_dialog(self)

    def draw(self, context):
        self.layout.prop(self, 'preset_color')

    def execute(self, context):
        # Write the chosen preset color into the custom property on the given bone.
        prop_bone = context.active_object.pose.bones[self.prop_bone]
        prop_bone[self.prop_name] = presets[self.preset_color]
        return {'FINISHED'}

def register():
    bpy.utils.register_class(MyPresetOperator)

def unregister():
    bpy.utils.unregister_class(MyPresetOperator)

if __name__ == "__main__":
    register()
```
Then add a color custom property to the UI, and configure this operator next to it, like so:
<img src="/media/addons/cloudrig/props_ui_operator_example.png" width=600>
And the resulting UI looks like this:
<img src="/media/addons/cloudrig/props_ui_operator_result.png" width=600>

View File

@ -1,20 +0,0 @@
# Troubleshooting
CloudRig implements a Generation Log, which shows a list of potential issues detected during the rig generation process. Each time you re-generate the rig, this list is re-populated. Ideally, you want to keep this list completely empty.
<img src="/media/addons/cloudrig/generation_log.png" width=600>
If rig generation fails, you will also find the error message here, along with a Bug Report button.
Many issues will have a button that lets you fix them as quickly as possible.
Feel free to submit suggestions for issues that this system currently doesn't detect.
### Naming
Certain bone naming conventions are reserved for CloudRig's generation process. The following bone naming habits in a metarig **should be avoided**:
- Identifying left/right sides with anything other than a ".L/.R" suffix
- There should be no other dot-separated suffix besides this.
- Any .00x ending on any bone name should be avoided.
- Avoid prefixes that are used by CloudRig: "DEF-", "ORG-", "STR-". These are only allowed in the metarig in the case of a Bone Tweak component.
- For all chain rig types, incrementing bone names should always correspond to parent/child relationships. For example, "Bone2" must be the child of "Bone1". Bones named this way should not be siblings or otherwise unrelated to each other.
Such names may result in strange, tricky-to-identify generation errors.

View File

@ -1,75 +0,0 @@
# Workflow Enhancements
CloudRig includes several quality of life features, each with a default hotkey. If these interfere with your workflow, you can easily rebind or disable them in the preferences.
<img src="/media/addons/cloudrig/hotkeys_ui.png">
## MetaRig Swapping / Generation
- **Shift+T** swaps between a metarig and its generated rig, syncing bone collections, visibility, and selection.
- **Shift+T** on a mesh object enters pose mode on its deforming armature, if any.
- **Ctrl+Alt+R** regenerates the active metarig/rig. If there is only one metarig in the scene, it doesn't need to be active.
## Better Duplicate & Extrude
- **E** (Extrude) and **Shift+D** (Duplicate) increment bone names:
- Duplicating `Bone1` creates `Bone2`, not `Bone1.001`.
- Hold **Shift** while confirming to keep the original numbering.
- Handles occupied names: if `Bone2` exists, it creates `Bone3`.
- Supports symmetry: increments names on the opposite side.
- **Shift+D** also copies drivers on bone and constraint properties.
<video src="/media/addons/cloudrig/better_duplicate_extrude.mp4" controls></video>
## Bone Selection Pie (Alt+D)
Select bones related to the active bone. Available in Pose, Weight Paint, and Edit modes.
<img src="/media/addons/cloudrig/pie_bone_find.png">
- **Up/Down**: Select a bone with a higher/lower number in its name, e.g., from `Hair1.L` to `Hair2.L`.
- **Left/Right**: Select the parent bone or a child bone. Multiple children are shown in a drop-down menu.
- **Top Left/Right**: Select bones that target or are targeted by this bone via constraints.
- **Bottom Left**: Select the start and end handles of Bendy Bones.
- **Bottom Right**: Open a pop-up menu to search for a bone by name.
## Bone Specials Pie (X)
Bone deletion and symmetry.
<img src="/media/addons/cloudrig/pie_bone_specials.png">
- **Toggle Armature X-Mirror**: Toggle symmetrical armature editing.
- **Toggle Pose X-Mirror**: Toggle symmetrical posing.
- **Delete**: Deletes selected bones and their drivers. Works in Pose Mode. Indicates X-Mirror status to prevent accidental deletions.
- **(Enhanced) Symmetrize**: Works in Pose Mode. Symmetrizes Actions of Action Constraints. Attempts to symmetrize drivers.
## Bone Parenting Pie (P)
Quickly parent and un-parent bones without having to enter Edit Mode.
<img src="/media/addons/cloudrig/pie_bone_parenting.png">
- **Clear Parent**: Clear the parent of selected bones.
- **Selected to Active**: Parent all selected bones to the active one.
- **Disconnect**: Disconnect a bone from its parent without un-parenting, allowing free translation.
- **Parent & Connect**: Parent selected bones to the active one, and connect them to the parent.
- **Active to All Selected**: Parent the active bone to all other selected bones equally using an [Armature Constraint](https://docs.blender.org/manual/en/latest/animation/constraints/relationship/armature.html).
<video src="/media/addons/cloudrig/parent_active_to_all_selected.mp4" controls></video>
- **Parent Object to All Selected**: Parent selected objects outside of the active armature equally among all selected bones using Armature Constraints.
<video src="/media/addons/cloudrig/parent_object_to_selected_bones.mp4" controls></video>
## Edit Custom Shapes Pie (Ctrl+Alt+E)
A comprehensive toolset to manage bone custom shapes.
<img src="/media/addons/cloudrig/pie_edit_widget.png">
- **Edit Transforms**: Quick access to custom shape transform properties.
- **Unassign Custom Shape**: Remove the custom shape from selected bones.
- **Assign Selected Object**: Set the selected mesh object as the custom shape of selected bones.
- **Reload Custom Shapes**: Reload widgets from the Widgets.blend file, discarding any modifications to them.
- **Edit Custom Shapes**: Enter mesh edit mode on the selected bones' widgets. Press Ctrl+Alt+E again to return to pose mode.
- **Select Custom Shape**: Assign a widget from a library to the selected bones. Local objects named "WGT-" will also be listed.
- **Duplicate & Edit Custom Shapes**: Duplicate selected bones' widgets before editing them. Handy when you want to edit only one usage of a widget, not all of them.
- **Copy to Selected**: Copy the custom shape and transforms from the active bone to the selected bones.
## Bone Collections pop-up (Shift+M)
A pop-up menu to access bone collections without leaving the 3D View.
Available with the rig, even if a user doesn't have CloudRig installed.
<img src="/media/addons/cloudrig/bone_collections_popup.png">
## Quick Select (Shift+Alt+W)
Pops up a list of collections that were [marked](organizing-bones#selection-sets) to be included in this list. Clicking on one of them selects the bones within. Shift+Click extends the selection. Ctrl+Click symmetrizes the selection. Alt+Click deselects the collection's bones.
Available with the rig, even if a user doesn't have CloudRig installed.

View File

@ -1 +0,0 @@
<!--@include: ../../scripts-blender/addons/contactsheet/README.md-->

View File

@ -1,77 +0,0 @@
# Easy Weight
Easy Weight is an addon focused on quality of life improvements for weight painting in Blender.
---
### Installation
In Blender 4.2, Easy Weight can be added through the official extensions repository, so you can simply search for it in Blender.
For older versions, find installation instructions [here](https://studio.blender.org/pipeline/addons/overview).
## Features
Easy Weight allows you to control some scene-level tool settings at the user preference level. You can find these in the add-on's preferences.
![EasyWeight Preferences](../media/addons/easy_weight/prefs.png)
- **Easy Weight Paint Mode**: You don't have to reveal or select the armature to enter weight paint mode. The add-on will take care of it for you.
- **Always Auto-Clean**: A new feature in the add-on that cleans zero weights after each brush stroke and automatically ensures that opposite-side vertex groups exist when using a Mirror Modifier.
- **Always Show Zero Weights**: Forces [Blender's Show Zero Weights](https://docs.blender.org/manual/en/latest/editors/3dview/display/overlays.html#bpy-types-toolsettings-vertex-group-user) overlay option to "Active".
- **Always Auto Normalize**: Forces [Blender's Auto-Normalize](https://docs.blender.org/manual/en/latest/sculpt_paint/weight_paint/tool_settings/options.html#bpy-types-toolsettings-use-auto-normalize) setting to be always on.
- **Always Multi-Paint**: Forces [Blender's Multi-Paint](https://docs.blender.org/manual/en/latest/sculpt_paint/weight_paint/tool_settings/options.html#bpy-types-toolsettings-use-multipaint) setting to be always on.
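For context, the three "Always ..." options above boil down to forcing scene-level tool settings that you could also set manually from Python. A minimal sketch of the equivalent calls (not the add-on's actual implementation):
```python
import bpy

tool_settings = bpy.context.scene.tool_settings

# Show Zero Weights -> "Active"
tool_settings.vertex_group_user = 'ACTIVE'

# Auto Normalize and Multi-Paint, as found in the weight paint tool options.
tool_settings.use_auto_normalize = True
tool_settings.use_multipaint = True
```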
### Weight Paint Pie (W)
On the **W** key you'll find a handy nested pie menu:
![EasyWeight Pie](../media/addons/easy_weight/pie.png)
- These three options will affect all weight paint brushes in the scene.
- [Accumulate](https://docs.blender.org/manual/en/latest/sculpt_paint/brush/brush_settings.html#advanced)
- [Falloff Shape](https://docs.blender.org/manual/en/latest/sculpt_paint/brush/falloff.html)
- [Paint Through Mesh](https://docs.blender.org/manual/en/latest/sculpt_paint/brush/brush_settings.html#advanced)
- Operators:
- **Focus Deforming Bones**: Reveal and isolate all deforming bones contributing to the active mesh.
- [Normalize Deform Groups](https://docs.blender.org/manual/en/latest/sculpt_paint/weight_paint/editing.html#bpy-ops-object-vertex-group-normalize-all): Ensure influence of deforming groups adds up to exactly 1.0 on each vertex.
- [Smooth Vertex Weights](https://docs.blender.org/manual/en/latest/sculpt_paint/weight_paint/editing.html#smooth): Smooth weights of all deforming bones on selected vertices.
- [Transfer All Groups to Selected Objects](https://docs.blender.org/manual/en/latest/sculpt_paint/weight_paint/editing.html#transfer-weights): Useful for transferring weights to clothes.
- **Clear Empty Deform Groups**: Remove vertex groups associated with deforming bones, which don't have any weights at all.
- **Clear Unused Groups**: Remove vertex groups which are not associated with a deforming bone, and not used by any shape key, modifier, or constraint.
- **Symmetrize Weights of Selected**: Symmetrize the weights of selected bones, either left->right or right->left depending on the selection. Will handle center vertex groups as well. Implemented using a KDTree, which means it will work even if the mesh topology isn't perfectly symmetrical (see the sketch after this list).
- [Assign Automatic From Bones](https://docs.blender.org/manual/en/latest/sculpt_paint/weight_paint/editing.html#bpy-ops-paint-weight-from-bones)
- Overlay Settings:
- Bones: Toggles drawing of bones altogether.
- Wireframe: Toggles mesh wireframe overlay.
- [Weight Contours](https://docs.blender.org/manual/en/latest/editors/3dview/display/overlays.html#weight-paint-overlays): Additional visualization to see how smooth your weight gradients really are.
- [Armature Display Type](https://docs.blender.org/manual/en/latest/animation/armatures/properties/display.html#bpy-types-armature-display-type): How bones without custom widgets should be displayed.
- In Front (X-Ray): Whether bones should be drawn even when behind meshes.
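To illustrate the KDTree idea behind **Symmetrize Weights of Selected**, here is a minimal sketch, not the add-on's actual code. It assumes a hypothetical `Arm.L`/`Arm.R` vertex group pair on the active mesh and only copies weights from left to right:
```python
import bpy
from mathutils import Vector
from mathutils.kdtree import KDTree

obj = bpy.context.active_object
mesh = obj.data

# Build a KDTree of all vertex positions for fast nearest-point lookups.
tree = KDTree(len(mesh.vertices))
for i, vert in enumerate(mesh.vertices):
    tree.insert(vert.co, i)
tree.balance()

src = obj.vertex_groups["Arm.L"]
dst = obj.vertex_groups.get("Arm.R") or obj.vertex_groups.new(name="Arm.R")

for vert in mesh.vertices:
    # Find the vertex closest to this vertex's X-mirrored position.
    _co, mirror_index, _dist = tree.find(Vector((-vert.co.x, vert.co.y, vert.co.z)))
    try:
        weight = src.weight(mirror_index)
    except RuntimeError:
        # The mirrored vertex isn't in the source group; skip it.
        continue
    dst.add([vert.index], weight, 'REPLACE')
```
Because the lookup only needs the geometrically nearest vertex to the mirrored position, this approach keeps working when the topology isn't perfectly symmetrical.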
## Hunting Rogue Weights
![Weight Islands](../media/addons/easy_weight/weight_islands.png)
The Weight Islands panel lets you hunt down unintended rogue weights on a mesh. The workflow goes something like this:
- After pressing Calculate Weight Islands and waiting a few seconds, you will see a list of all vertex groups which consist of more than a single island.
- Clicking the magnifying glass icon will focus the smallest island in the group, so you can decide what to do with it.
- If the island is rogue weights, you can subtract them and go back to the previous step. If not, you can press the checkmark icon next to the magnifying glass, and the vertex group will be hidden from the list.
- Continue with this process until all entries are gone from the list.
- In the end, you can be 100% sure that you have no rogue weights anywhere on your mesh. You may click the X button to remove the islands' data, since you no longer need it.
## Vertex Group Operators
![Vertex Group Menu](../media/addons/easy_weight/vg_context_menu.png)
The Vertex Groups context menu is re-organized with more icons and better labels, as well as some additional operators:
- **Delete Empty Deform Groups**: Delete deforming groups that don't have any weights.
- **Delete Unused Non-Deform Groups**: Delete non-deforming groups that aren't used anywhere, even if they do have weights.
- **Delete Unselected Deform Groups**: Delete all deforming groups that don't correspond to a selected pose bone. Only in Weight Paint mode.
- **Focus Deforming Bones**: Reveal and select all bones deforming this mesh. Only in Weight Paint mode.
- **Symmetrize Vertex Groups**: Symmetrizes vertex groups from left to right side, creating missing groups as needed.
If you have any more suggestions, feel free to open an Issue with a feature request.
## Force Apply Mirror Modifier
In Blender, you cannot apply a mirror modifier to meshes that have shape keys.
![Force Apply Mirror](../media/addons/easy_weight/force_apply_mirror.png)
This operator tries to do it anyway, by duplicating your mesh, flipping it on the X axis, and merging it into the original. It will also flip vertex groups, shape keys, shape key masks, and even (attempt to flip) shape key drivers, assuming everything is named with .L/.R suffixes.

View File

@ -1 +0,0 @@
<!--@include: ../../scripts-blender/addons/geonode_shapekeys/README.md-->

View File

@ -1 +0,0 @@
<!--@include: ../../scripts-blender/addons/grease_converter/README.md-->

View File

@ -1 +0,0 @@
<!--@include: ../../scripts-blender/addons/lattice_magic/README.md-->

View File

@ -1 +0,0 @@
<!--@include: ../../scripts-blender/addons/lighting_overrider/README.md-->

View File

@ -1 +0,0 @@
<!--@include: ../../scripts-blender/README.md-->

View File

@ -1,41 +0,0 @@
# Pose Shape Keys
This add-on enables a workflow where you can continue iterating on your vertex weights and bone constraints after you've already created your shape keys, without having to re-sculpt those shape keys. To put it another way, you can think of shape keys as a final shape rather than as deltas on some deformation.
The only limitation is that there is some precision loss when using bendy bone deformations.
It also lets you manage multiple copies of a shape key together. Each copy can have a different vertex group mask, or be applied mirrored around the X axis.
You can find a video tutorial and more detailed explanation of how it works [here](https://studio.blender.org/blog/rig-with-shape-keys-like-never-before/).
## Basic Workflow:
- Create a pose whose deformation you want to correct. A pose is defined as an Action and a frame number.
- Create a Pose Key on the deformed mesh. Assign the action and the frame number.
- Press "Store Evaluated Mesh". This will create a copy of your mesh with all deformations applied.
- Sculpt this mesh into the desired shape.
- Go back to the deformed mesh, and assign one or more Shape Keys to the Pose Key.
- Press "Set Pose" to ensure that the rig is in the pose you created and specified earlier.
- Press "Overwrite Shape Keys".
- When you activate your shape key, your deformed mesh should now look identical to your sculpted shape.
- If you have more than one shape key, the same data will be pushed into each.
The purpose of this is that each copy of the shape key can have a different mask assigned to it.
This can streamline symmetrical workflows, since you can push to a left and a right-side shape key in a single click.
# Example use cases:
### 1. Sculpted facial expressions applied directly on top of a bone deformation based rig:
- A character artist can sculpt facial expressions to great quality and detail
- You pose the rig to be as close to this sculpted shape as possible, and create a rig control that blends into this pose using Action Constraints.
- Using the add-on, create corrective shape keys that blend your posed mesh into the shape of the sculpt.
- Hook up those corrective shape keys to the rig via drivers
- You now have the precise result of the sculpted facial expression, while retaining the freedom of bone-based controls that can move, scale and rotate!
### 2. Author finger correctives 24-at-a-time:
- Create a fist pose where all finger bones (4x2x3=24) are bent by around 90 degrees.
- Create a Pose Key and a storage object, and sculpt the desired deformation result.
- On the rigged mesh, create the 24 shape keys within the PoseKey; One for each section of each finger.
- Assign vertex groups to them that mask the affected areas.
- Normalize the vertex masks.
- Now you can push the sculpted fist shape into all 24 shape keys at the same time.
- Create drivers so each shape key is driven by the corresponding finger bone (see the sketch after this list).
- You can now tweak and iterate on the sculpted shape, and update all 24 shape keys with the click of a single button.
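As a rough sketch of the driver-creation step above, the 24 drivers could be created in a loop from Python. All names here (the `Body` mesh, the `RIG-char` rig, and the `Fist_<bone name>` shape key naming convention) are hypothetical assumptions for illustration, not something the add-on sets up for you:
```python
import bpy

body = bpy.data.objects["Body"]     # hypothetical rigged mesh with the 24 shape keys
rig = bpy.data.objects["RIG-char"]  # hypothetical rig object

for key in body.data.shape_keys.key_blocks:
    # Only touch shape keys named after the convention "Fist_<finger bone name>".
    if not key.name.startswith("Fist_"):
        continue
    bone_name = key.name[len("Fist_"):]

    fcurve = key.driver_add("value")
    driver = fcurve.driver
    driver.type = 'SCRIPTED'

    # Read the local X rotation of the matching finger bone.
    var = driver.variables.new()
    var.name = "rot"
    var.type = 'TRANSFORMS'
    target = var.targets[0]
    target.id = rig
    target.bone_target = bone_name
    target.transform_type = 'ROT_X'
    target.transform_space = 'LOCAL_SPACE'

    # Map a roughly 90 degree bend to a shape key value of 1.0.
    driver.expression = "rot / radians(90)"
```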

View File

@ -1 +0,0 @@
<!--@include: ../../scripts-blender/addons/render_review/README.md-->

View File

@ -1,157 +0,0 @@
# Character Pipeline Assistant
::: warning Legacy Documentation
This is a legacy document originally written by Simon in 2020.
:::
![Character Pipeline](/media/archive/pipeline-proposal-2019/char_pipeline_01.png)
Jan 2021
# Proposal for updated version for 'Sprite Fright' (August 2020)
With the Sprite Fright project starting to get into a phase where the characters are being designed and tested for shading, rigging and animation, a solid character pipeline will become necessary soon.
So we (Demeter and Simon) re-evaluated the character update tool made for the Settlers project, looking at how applicable it is and how it can be adapted to be more flexible and stable.
We came up with the following proposal:
## Idea
The idea is the same as was implemented for Settlers, but allows an additional step of adjusting the mesh. For this, an additional publishing step is introduced to export the model in a way that makes it easily mergeable with the rig (merging objects, applying mirror modifiers, etc.).
## File Structure
Character folder including:
- char.modeling.blend (working file)
- char.geometry.blend (source file) → published from char.modeling.blend
- char.rigging.blend (working file/ source file) → sourcing char.geometry.blend
- char.shading.blend (working file/ source file) → sourcing char.geometry.blend
- char.blend (master file) → published from char.rigging.blend and char.shading.blend
![Character Pipeline](/media/archive/pipeline-proposal-2019/char_pipeline_02.png)
(Green Blocks are working files, grey files are not worked in, but only procedurally updated)
## Data Flow
### Geometry
The model of the character is appended from char.modeling.blend as a collection into char.geometry.blend, where all adjustments to the mesh that have to be made for rigging are reproducibly automated.
This automation has to be manually revised depending on the rig and can be specified as a script in the char.geometry.blend file.
The rigging and shading files take the geometry directly from the published model version in char.geometry.blend.
The master file takes...
...from the geometry file (option A):
- mesh data / multires data / displacement maps
...from the rigging file:
- mesh data / multires data / displacement maps (option B)
- armature
- modifier stack
- weight paint layers
- shape keys
- drivers
- physics properties
- support objects
...from the shading file:
- material slots
- shaders
- UV layers
- vertex color layers
However, the master file can also source the data from itself, so as to only use published states of shading/rigging, without drawing from work-in-progress states of the respective files.
That way every department has an individual control over publishing their updates.
## Scripts
For the automation of updating the different versions of a character in the respective files, a couple of scripts are necessary, which can largely be based on the first version implemented for 'Settlers'.
The necessary scripts are:
- publish geometry + support an adjustable script to specify necessary changes (merging objects, applying modifiers, etc.)
- import geometry and merge with rigging file
- import geometry and merge with shading file
- import and merge all into master file
To be able to easily update the individual scripts across multiple characters, we are proposing to create an add-on that includes all the functionality and can be maintained and pulled via git.
## Example workflow:
- Julien makes a change in the modeling file that is not significant to the vertex order of the mesh
- Demeter and Simon can work in their respective files simultaneously with no restrictions
- Julien publishes his changes to the geometry by pressing the update button in the geometry file
- To import the changes into their working files, Demeter and Simon press the update buttons in their respective files
- Once a significant change has been made that should be passed to the animation department, Julien, Demeter, Simon (or whoever) updates the master file and commits it to the SVN
## Benefits
- Flexible, simultaneous work to a certain degree
- High level of automation
- Publishing steps for individual version control per department
## Potential Issues
This pipeline relies heavily on the fact that object names and vertex order don't change. But the additional publishing step for the geometry allows manual intervention when something goes wrong, and optionally the Data Transfer modifier can be used to prevent data from breaking due to vertex order changes.
However, to avoid issues like that, this pipeline should only be picked up after major geometry changes like adding or removing vertices in the base mesh.
This pipeline is assuming that there are no driver dependencies on the rig in the shader. However, if that should be the case (as it was for 'Settlers'), it can be adjusted to retain that dependency.
## Alternatives
A potential alternative could be using library overrides to link between files instead of appending and only using one single publishing step in the master file. However, this is not feasible with the current, early state of library overrides, as a high level of functionality would be required.
---
# First version from Settlers (April 2020)
This proof of concept for publishing was created by Simon Thommes during the settlers project. It merges rigging and shading files into a final asset blend file.
![Character Pipeline](/media/archive/pipeline-proposal-2019/3-character_pipeline_tool.mp4)
### Target **Workflow**
Separate working files for modelling, rigging and shading that are automatically merged into a single character.blend asset file. This allows for working in parallel and introduces an additional publishing step for each module.
## **Current state (Settlers)**
Video breakdown of functionality for version 1 (Not the latest iteration):
[https://cloud.blender.org/p/settlers/5ea02055cc64ecf31415351c](https://cloud.blender.org/p/settlers/5ea02055cc64ecf31415351c)
## Features
- merging .rig.blend and .shading.blend files
- updating either [rig + geometry] or [shading] separately
- merging the following data by object name
- From the .rig file:
- Mesh, Armature, Weight Paint, Modifiers, Constraints, etc.
- From the .shading file:
- Materials by slot
- UVs by layer name and vertex order
- VCols by layer name and vertex order
- updating shading drivers pointing to rig objects
*(all drivers within nodegroups using the naming convention DR-'object_name' get repointed to the object 'object_name')*
## Shortcomings
- system is fragile regarding name changes
- data-blocks worked on by multiple modules are not easily mergeable
- python scripts are accumulated with .### notation
## Potential features
- Outsourcing the mesh data to the .modeling file and transferring rigging data by vertex order
- Overwriting modifier settings relevant for shading from the .shading file (e.g. mirrored UVs)

View File

@ -1,57 +0,0 @@
# Asset Publishing
::: warning Legacy Documentation
This is a legacy document originally written by Andy in 2019 as part of the Spring production retrospective.
:::
**In a nutshell, a publish is the outcome of a process or task.** We introduce publishing into our workflow for the following reasons:
- Formalizing what the output of a given task is.
- Separate working versions from used assets to keep files lighter and to remove confusing content that might be relevant to the working conditions in the file, but not for the next step in the production chain.
- File consistency checks to keep files cleaner
There are different possible scenarios to implement this in our work:
- Simplest way: Copy to new location with check and file cleanup for keeping file sizes to a minimum.
- For libraries: Push individual asset to larger collection file.
- More complicated way: Merge files which are associated with the same asset together into the final linkable asset.
![Pipeline](/media/archive/pipeline-proposal-2019/pipeline_proposal.png)
There is no direct link to files that are being worked on.
# Possible Applications
Most studios use publishing as a form of version control. Since we currently use SVN for that (which does not scale well for larger productions), it's not immediately necessary to create a full-fledged publishing system.
To serve as a testing ground, we can limit this to the following tasks and find out where it can be useful in the future. This can also be limited to hero assets that have a more complicated creation process.
## Characters
Task result: link-able asset file with collections.
![Pipeline](/media/archive/pipeline-proposal-2019/publish_char.png)
## Pose library
Task result: Action datablock that can be referenced from animation files.
![Pipeline](/media/archive/pipeline-proposal-2019/publish_poselib.png)
## Props, envs and sets
Task result: link-able asset file with collections.
![Pipeline](/media/archive/pipeline-proposal-2019/publish_asset.png)
# File cleanup
The goal is to remove clutter from files, reduce file size and loading time, and prevent missing textures and links to files which are not in the project.
- remove orphan data from file.
- remove lights and world.
- remove animation data not associated with drivers and pose libs.
- detect paths outside the project tree (and not /render). Links to /shared are cleaned as well.
- remove collections that are used as helpers, do not deform and are not rendered.

View File

@ -1,146 +0,0 @@
# Attract improvements
::: warning Legacy Documentation
This is a legacy document originally written by Andy in 2019 as part of the Spring production retrospective.
:::
# General goals of Attract
- Define and manage shots and film assets
- Create the relationships between shots, assets and tasks
- Task status tracking and communication both with internal and external entities
- Scheduling of tasks
- Assist with communication of external and internal team members (Review, feedback)
# During project pre-production and set-up
Attract is a communication tool. It holds and displays information relevant to production, its core elements and tasks people have to accomplish.
The following section tries to break down the usage of Attract in production, roughly in chronological order.
![Pre Production Timeline](/media/archive/pipeline-proposal-2019/pre_production_timeline.png)
### Project definition
At the top in the hierarchy lies the project that needs to be managed. It contains relevant information which trickles down to each individual component such as:
- Frames per second
- Resolution
- Start frame (preroll) of each shot file.
- Frame Handles of each shot file.
- Viewport render presets
- Render presets
- Other Pipeline definitions
- file paths relative to project root
- absolute file paths
- naming conventions
- more
![Project Structure](/media/archive/pipeline-proposal-2019/project_structure.png)
*A project contains these items. Shots are associated with sequences. Assets are specific to each project. Tasks can be completely separate or be associated with a shot (layout, animation, lighting, etc.) or an asset (modeling, shading, rigging, etc.)*
### Editorial
While the movie is in its early stages, we already set up the **edit.blend** file. This is the starting point for Attract to know which sequences and shots are in the project. The edit defines the order of shots and their length. This already works in the Blender Cloud add-on.
**We know:**
- Shot numbers
- Length of each shot
- Order of shots
- Sequences associated with shots.
### Breakdown
Once the **edit** is live, more items in the project can be defined in the web interface of Attract:
- What assets do we need
- Tasks for each asset
- People associated with the tasks
- Time-frame for tasks
- Shot details
- Which shot is associated with which assets
- Tasks for each shot
- Time estimation of each shot task
### Asset Creation
Once we have a list of assets, the asset itself can become a 'real' file within our production repository. Also the asset definition on Attract can communicate more details:
- Which files (collections, objects, materials) are associated with an asset.
- Variations, revisions and views
### Shot file building
Attract could provide a simple interface for creating shot files in an automated fashion. This file can be derived from predetermined templates (e.g. blend files) which can differ based on the sequence and task type.
![Shot Builder](/media/archive/pipeline-proposal-2019/shot_builder_flow.png)
### Sequence template
Typically, movie sequences differ in their environment and in which characters are in them. On Spring we tried to split up the film sequences roughly based on locations. In practice, sequence templates don't define how the file is set up, but **what** can be in it.
- They define a set of assets that are needed in each shot of the sequence
- If needed there can be more than one template for a sequence.
- Two sequences can also use the same template (e.g. in *Spring,* shots from *06-stampede* started out from the same assets as *07-rescue*)
- Shots can take sequence definitions as a preset but are not tied to them. In the above mock-up, they would just define which checks are set when the template is loaded, but users can add or remove assets.
### Task template
Files can be configured differently depending on a task. The task template defines **how** it is set up exactly. Some examples:
**Animation**
- Flatter collection hierarchy for more (and faster) control over what is visible. Each character has their own collection to control visibility. Props are split into their own collections. Rigs have their own collections as well.
- Sequence editor is set up with the latest layout render of the shot.
- Scene is set to use eevee as render engine for viewport renders on the farm.
- Lighting setup for playblasting
**Lighting**
- More generalized collection layout (characters, props, set elements, lighting, cache, etc)
- Cycles as render engine with general render presets already set as a starting point.
- Master lighting from sequence is already linked in.
# During production cycles
![Shot Builder](/media/archive/pipeline-proposal-2019/production_timeline_2.png)
During production Attract becomes the main landing page for relevant information. The artists have to see at a glance:
- Which tasks are assigned to them
- What is their status?
- Overview of the deadlines
- What other tasks depend on their work
- What are crucial notes and requested changes
- Easy access to review process
Below are some initial ideas for organizing the interface better to help with people's day-to-day work. We have to do more detailed research on what artists need from Attract to benefit their workflow. A big criterion is that filtering and searching of tasks should be possible with a range of customization.
## User dashboard
The landing page for the user. Displays a list of the user's current tasks and notifications. It should be possible to filter at least by type, due date and status.
![Attract Dashboard](/media/archive/pipeline-proposal-2019/attract_dashboard.png)
## Shot list
This is currently the main view of Attract: A list of all the shots in a film and their associated tasks. In the future they should be further grouped by sequence, and the grouping should be collapsible. Once shots get more tasks, the overview becomes very cluttered in the current layout. Hence, details of tasks and discussions should be moved to their own page.
![Attract Shot List](/media/archive/pipeline-proposal-2019/attract_shot_list.png)
## Asset list
List of all the assets and their associated tasks, grouped by asset type.
![Attract Shot List](/media/archive/pipeline-proposal-2019/attract_asset_list.png)
## Task page
Detailed info on the task. Discussion and task related dependencies (what tasks does a shot depend on for finalizing).
![Attract Shot List](/media/archive/pipeline-proposal-2019/attract_task_page.png)
## Contact sheet
A very simple page that shows a current rendered thumbnail for each individual shot in front of a dark grey background. This helps the lighting team to improve the continuity of their work.
![Attract Shot List](/media/archive/pipeline-proposal-2019/attract_contact_sheet.png)

View File

@ -1,89 +0,0 @@
# Pipeline proposal 2019
::: warning Legacy Documentation
This is a legacy document originally written by Andy in 2019 as part of the Spring production retrospective.
:::
This is a proposal of what we can do in the near future to make film production in our studio smoother. It's clear that with our limited resources we cannot implement absolutely every method. Hence, we are first going to focus on optimizing our main production cycles.
Everything noted here should be possible with the current version (with minor improvements and fixes). To handle the complexities of a production, some development needs to happen in the override system.
The graphic below simplifies the production timeline significantly for the sake of clearer communication. We can separate production into two parts: **Asset creation** (chars, props, libs and sets) and **Shot production cycles** (layout, shot building, animation, lighting, etc).
![Production Timeline](/media/archive/pipeline-proposal-2019/production_timeline.png)
Layout and story-boarding are currently left out as a target of this proposal; developing a robust layout workflow requires additional work. We currently assume that layout is done per-sequence like in *Spring* and delivered as a series of edited shots or grease-pencil files.
This document aims to improve our efficiency during shot cycles, while also addressing issues with set-up during pre-production and asset creation.
# The main issues
Here are the main problems we faced in the past in the most generalized way possible.
## Too much manual work
Artists should focus on the quality and the artistic goals of their everyday work, not the technical aspects. Manual naming and set-up of files have in the past left a lot of room for human errors that trickle down the process. Since our staff is limited, this manual work has been responsible for numerous bottlenecks. Manual labour currently includes (but isn't limited to):
- Shot creation
- Naming of assets, scenes, shots, takes
- Play-blasting of animation to multiple locations for approval and passing-along
- Setting names for directories
- Keeping track of assets statuses
- Browsing assets and linking
- Passing animation data along to lighting
- Keeping layout cameras synchronized with sets and shots
- Synchronizing constraints and relationships between objects in animation, simulation and lighting.
## Unclear communication and approval process
There has not been a single place to track the planning of production-related tasks. Attract is meant to fill that role, but still has limited functionality that needs to be expanded. For projects with around 8 people on-site it is already vital to have a communication tool to keep track of each person's jobs, review work and push it further to the next department. It gets more tricky once people are off-site, in different time zones and cannot communicate in real-time.
- pushing tasks to the next stage of the process depends on direct communication
- extra overhead with external team members
- review is not streamlined for quick access
- order of production steps is not clearly defined and optimized (see Spring anim post mortem)
- There should be a clear responsibility per task. (see Coffee Run 2K issue)
## Work files and results are mixed together
For the past 10 years, Open Movie productions have heavily relied on linking rigged characters and assets directly into shots. Updates in the process always trickle down instantly. This makes changes more immediate, but can also break files very often. A change in an asset can make partial re-renders of shots impossible. If the rigger breaks the rig with a commit, the animator cannot work and is interrupted until the file is either reverted or fixed. The question is whether the benefits of direct linking outweigh the delays caused by broken files.
## Scaling up
As a result of the points mentioned above, it is very difficult to scale up our current production methods to accommodate productions larger than 50 shots. This proposal aims to let us handle multiple productions with more than 100 shots each. We should aim to make it robust enough to handle a feature film production.
# Proposed solutions
Here are some of the possible improvements. Jump to the sub-pages to read more.
## Improved production tracking
Communication is the most important task during a production. By improving our task tracking tool *Attract* we can make sure that there is a basic tool set for people to know what they have to do, how much time they have for it, and what the context of their work is.
Sharing our production tools is the main business model of the Blender Cloud, so it is very important that we give this goal a higher priority.
[Attract improvements](attract-improvements)
## Automation
We have to make clearer steps towards automating tasks that require a higher degree of precision. Also repetitive jobs like creating files for people to edit (e.g. anim prep) or cleaning up scenes can easily be done by scripts and add-ons.
We should also strive to automate communication tasks like reviewing rendered media; an animator should not have to worry about where to save a play-blast (or about creating the location if necessary). They only have to initiate it, and the system takes care of storing it and notifying the reviewers.
[Task Companion Add-on](task-companion-add-on)
## Caching animation
In the past we have exclusively relied on Blender's linking system to bring animated characters into a file where they can be lit and rendered. Linking of data-blocks in and between scene files is one of the strengths of Blender and should still be used to its full potential in the future. However, when everything is linked, everything can change and break at once. At some point the links have to be broken to produce repeatable operations.
Using either Alembic or USD to write animation data to caches means we can keep shading and grooming linked, but we do not rely on the rig or constraints set up by the animator. Once animation in a shot has been approved, it can be cached to be reliably repeatable during rendering.
[Shot caching](shot-caching/introduction)
## Publishing
It's a common concept in the industry and means more than just committing your changes to SVN. We should still keep our files version controlled, but introduce an additional step at the end to 'publish' the changes made.
Published assets exist next to version controlled working files. This makes it easier to **pinpoint outputs of tasks**. The artist clearly defines when their work is pushed forward in the process.
[Asset Publishing](asset-publishing/introduction)

View File

@ -1,14 +0,0 @@
# Proof of concept Add-on
::: warning Legacy Documentation
This is a legacy document originally written by Andy in 2019 as part of the Spring production retrospective.
:::
This is a proof of concept for a more streamlined caching workflow. It also addresses some shortcomings of Blender's current cache system, namely the lack of property animation.
![Shot Builder](/media/archive/pipeline-proposal-2019/cache_tool.mp4)
This work is largely based on the research below:
* [Caching Workflow User Stories](user-stories.md)
* [Structural Ideas](structural-ideas.md)

View File

@ -1,43 +0,0 @@
# Shot caching
::: warning Legacy Documentation
This is a legacy document originally written by Andy in 2019 as part of the Spring production retrospective.
:::
Currently Alembic caching is limited to deformed meshes and object positions. Hair is not fully implemented, as the resulting hair curves cannot be handled by Blender's particle system. That means it's not possible to simulate or use the more advanced shading parameters of particle hair geometry in Cycles.
This is a proposal of how to handle caching of animated characters, while at the same time enabling us to simulate the hair in the conventional way.
![Shot Builder](/media/archive/pipeline-proposal-2019/shot_caching.png)
### Animation file
1. Animator chooses which characters or props to cache
2. Subsurf is disabled on all objects in the asset
3. For each asset (char, prop), one cache file is created per shot
4. In the caching options, hair curves should be disabled.
5. The renderfarm generates a play-blast of the shot together with cache and puts each into the right directory.
### Simulation file
1. Sim-file links in hair emitter meshes that need to be simulated
2. Hair emitters get stripped of all deforming modifiers
3. Add mesh sequence cache modifier on hair emitter mesh in place of the previous deforming modifiers and load the animation cache into it.
4. Subsurf modifier is retained, has to be at the same resolution as render level for sim bake (!)
5. Do hair dynamics bake, perform to disk cache (to /render), results in a directory full of bphys files.
### Lighting file
1. Link all assets that were originally cached in anim file as full collection hierarchies
2. Make all objects local (only objects, material updates should thus be still reflected in the file). (Or use Overrides once possible).
3. Strip objects of all deforming modifiers.
4. Add mesh sequence cache on all objects before subsurf, hair particles should still be intact
5. On hair systems that were simulated, check 'hair dynamics' and load the bphys cache from render
6. This should result in the render meshes following the abc cache and the particle systems following the bphys cache.
More notes and research on how we can do caching currently:
[Proof of concept Add-on](https://www.notion.so/Proof-of-concept-Add-on-7f7b3686ab234c6f9daa54ff1db07c48)
[Issues and show-stoppers in Blender](https://www.notion.so/Issues-and-show-stoppers-in-Blender-de90bbaed4cc4580a4462cfd2f7f6c4c)

View File

@ -1,15 +0,0 @@
# Issues and show-stoppers in Blender
::: warning Legacy Documentation
This is a legacy document originally written by Andy in 2019 as part of the Spring production retrospective.
:::
- Materials need to exist (appended or linked) in blend file before alembic import so names can be matched (could also be an automated post process)
- *Sybren's idea: have some way to automatically (re)map materials based on name, and/or auto-create dummy materials otherwise. Still TBD.*
- Hair curves support of intercept and UV maps not clear (or non-existent)
- Particle hair currently lags one frame behind playback (and render)
- Blender's hair simulation system does not support sim from an Alembic cache
- We have to find a workaround for simulating hair, alembic curves (result of anim export) cannot be simulated
- No support for caching animated values (camera cache, driven material values)
- *Design task in [T69046](https://developer.blender.org/T69046)*

View File

@ -1,59 +0,0 @@
# Structural Ideas
::: warning Legacy Documentation
This is a legacy document originally written by Andy in 2019 as part of the Spring production retrospective.
:::
![Anim](/media/archive/pipeline-proposal-2019/cosmos_shot_caching_notes.png)
**Anim.cache.view**: exactly what the animator sees during animation. Good for faster playblasts (e.g. when there are groups of chars in a shot) and verifying correct bake (debugging).
**Anim.cache.sim**: Only objects that are needed to create interaction during sim (collision, flow, ...)
**Anim.cache.lighting**: Similar to render but with less dense geo and less hair (faster viewport feedback during lighting)
**Anim.cache.render**: geometry used in final rendering. Hair particles at final density
- Could be possible to put this data all in one cache file, and only read certain parts of it. Or even create collections in import to manage their visibility.
- Alternatively there could only be .view and .render caches.
- Lighting file then needs a way to simplify .render cache for faster viewport performance.
- Writing performance is important. We might not want to generate the entire high res cache when only sim interaction objects are needed for example.
### Ideas on structuring character assets
This time they may need to be structured in a certain way to allow for caching different parts of the asset.
Collection layout could be:
![Anim](/media/archive/pipeline-proposal-2019/image3.png)
### Anim file creation notes (unrelated to caching)
- How to handle collection visibility in anim file?
- In the past animators found the collection setup confusing; they need simplicity and fewer items in subcollections.
- Char e.g. should only consist of 2 elements: geo and rig. Easier for animator to handle
- Is it possible to create automatic overrides without having to instance the collection
- Select collection instance -> Make Library Override is currently the only way
- Caches should be linked to SVN revision or at least we should be able to identify where it came from in the history
- The animator is not allowed to move collections within the main asset hierarchy which is linked into the blend file. Otherwise the override fails to update in the future
- We have to be careful how overrides are made. If it happens without purging orphan data in the blend file, it can happen that objects get different names (numbering with .002, etc). In that case, alembic import will most likely fail to match names. (e.g. if an existing cache is re-exported but with different object naming)
### Lighting File and rendering notes
- How to put linked materials back on ABC import?
- works: link in materials, import alembic
- Hair material is left out
- Cycles visibility needs to be restored in imported objects
- For example, Victor's corneas were set to only render in camera. This avoids shadow casting on the eyeball. On import, Victor's eyes were black because visibility wasn't set.
- Particles are currently still lagging one frame behind
- Are ABC curves rendered as cycles hair?
- What about intercept, parent mesh UV, etc
- Render resolution is used by default, needs automation before export to cache different resolutions
- Vertex group and face sets should be on in export to make materials work properly
- Camera cache needs animation of properties
- How are material drivers (rig influences shading) handled?
- Hair curves do not get emitted from render resolution mesh if export caps subdivision at 0. This causes issues with dense, short fur for example. The origin of the particle has to be on the high res mesh
- Animation cache can exclude particle hair
- Blender's hair system does not work with cached hair

View File

@ -1,84 +0,0 @@
# Caching workflow user stories
::: warning Legacy Documentation
This is a legacy document originally written by Andy in 2019 as part of the Spring production retrospective.
:::
This is a list of potential ways of caching an animation to Alembic and then reading it back from another file.
## Without hair cache
### Anim file
1. Animator chooses which characters or props to cache
2. Subsurf is disabled on all objects in the asset
3. For each asset (char, prop), one cache file is created per shot
4. Farm generates playblast of the shot together with cache and puts each into the right directory
![Anim](/media/archive/pipeline-proposal-2019/01_02_01_D2.anim_contact.mp4)
### Lighting File
1. Lighting file is set up and is aware what assets are associated with its shot
2. Materials from assets get linked in from original asset files so they get applied correctly on objects
3. Caches get imported into lighting file
4. Restore collection hierarchy from asset files
5. Restore object cycles visibility from asset files
6. Restore Subsurf modifiers
7. Lighting artist can choose whether they want to see low res preview or final render resolution assets
![Lighting](/media/archive/pipeline-proposal-2019/01_02_01_D2.lighting1.mp4)
### Sim File
1. Simulation file loads sim interaction cache (for example to let hair collide with head. Or smoke collide with character)
2. Sim artist defines influence of cache objects in scene
3. Export sim cache to shot folder location
![Sim](/media/archive/pipeline-proposal-2019/01_02_01_D2.sim1.mp4)
### Lighting File
1. Import sim cache into lighting file.
*Note that this example does not include hair simulation. This cannot be done with alembic currently. In order to enable hair sim, emitters need to be cached, then the particle data has to be transferred back. After that, Blender can simulate the hair to its own internal cache system. The resulting sim can be linked to lighting as a collection.*
---
## Alternate solution for using simulated hair in the lighting stage
### Sim File
- Link character collection from the animation file
- make hair particle emitters local (on object level) and enable hair dynamics
- add force fields and effectors and do the simulation
- bake the sim into blender's hair sim cache
---
## Very Adventurous Cache Method That Might Work ™
This is the short version without putting objects into collections. All the steps can be automated for more convenience. Needs further research to see if particles deform correctly with subsurf cache objects.
### Anim file
1. Cache is written from anim file as usual (select objects to cache, export ABC) all on subsurf 0
### Sim File
1. sim file links in hair emitter meshes that need to be simulated
2. hair emitters get stripped of all deforming modifiers
3. subsurf modifier is retained, has to be at the same resolution as render level for sim bake (!)
4. Do hair dynamics bake, perform to disk cache (to render), results in a directory full of bphys files.
### Lighting File
1. lighting file links all objects that were originally cached in anim file
2. make all objects local (only objects, material updates should thus be still reflected in the file)
3. strip objects of all deforming modifiers
4. add mesh sequence cache on all objects before subsurf, hair particles should still be intact
5. on hair systems that were simulated, check 'hair dynamics' and load the bphys cache from render
6. this should result in the render meshes following the abc cache and the particle systems following the bphys cache.
![Sim](/media/archive/pipeline-proposal-2019/01_02_01_D2.lighting.adventurous_cache.mp4)

View File

@ -1,76 +0,0 @@
# Task companion add-on
::: warning Legacy Documentation
This is a legacy document originally written by Andy in 2019 as part of the Spring production retrospective.
:::
We need an add-on that adapts to the current task an artist has to work on. On *Spring* and *Agent* this used to be called *Shot Tool*; it was only used for building shot files and for automation during lighting.
For the near future we should have a more general **tool that assists artists during their tasks.** The tool needs to be aware of what the purpose of the currently opened file is, and which task it is associated with. For this purpose it has to connect to Attract like the Blender Cloud add-on (it could also be included in it).
**Task awareness:**
- Asset editing, animation and lighting require slightly different working methods
- Is the current file a library asset file or a shot file
- File paths associated with the task
- Naming and collection hierarchy conventions
**Shot awareness:**
- Which shot is this file associated with
- Which assets are currently used/needed in this shot
- Which assets need to be cached (and where they go to)
- What is the output of the specific shot task (cache, collection, etc) and where does it go.
# Task context
This is how the helper add-on can change based on the context.
## Asset editing
- Conform objects to naming conventions
- Create collections to set up asset variations/LODs
- Push latest render to Attract asset page for feedback.
- Publishing of the asset.
- File integrity check
## General shot context
- Fetch frame start and end updates from Attract if they have changed
- check correct file naming and output paths
- Quick access to assets that need to be linked in manually.
- Provide interface for manual switching of assets (variations/LODs)
- File cleanup
## Animation
- Creating a playblast for review.
→ Renders the animation with Eevee on the farm, puts the output into the shot footage directory, and posts it on the task page in Attract for review.
- Shot publish
→ Cache animation to Alembic. Create Eevee render and put into shot footage directory.
## Simulation
- Set output directories based on type of simulation and cache
## Lighting
- loading animation caches.
- updating to newer publish of caches
- Setting render defaults
- Push latest render to contact sheet in Attract
## Editorial
- Current Blender Cloud add-on functionality (managing and sending shot definitions to Attract)
- Load shots as new clips from shared directory
- Replace shots/versions/iterations
## Shot review
- Load animation to be reviewed from Attract
- Send rendered clip (with annotations) as attachment to the relevant task page on Attract

View File

@ -1,5 +0,0 @@
# 2D Assets: Texturing, Matte Paintings, Brushes...
::: warning
6 October 2023 - The content of this page is currently being edited/updated.
:::

View File

@ -1,49 +0,0 @@
# Animation Testing
## General requirements
To start with the animation testing we need **the initial version of the character rigs**. For that, the character design needs to be approved, with sculpted facial expressions that will later be applied to the pose library, and some hero poses for the body that will guide the animation posing, marking some dos and don'ts.
## Goal of the task
The goal of the animation testing is to:
- Test the rig weight painting and deformations, having feedback sessions with rigging and modeling until it moves correctly
- Develop an animation style with the help of the sculpts and hero poses
- Create clear guidelines for both art style and animation style
The steps to follow are:
* Test the character rigs
* Feedback with rigging and modeling
* Create selection sets
* Create animation design guides and pose libraries
* Animation tests
## Test the character rigs
This is what we call **stress testing** the character: we start by doing simple transformations on the controllers to see if they behave as we expect, checking whether there are any flaws in the weight painting or in the behavior of the controllers, and whether more controllers are needed on the rig.
### Feedback with rigging/modeling
If we find any problems with the rig, we give feedback to rigging. They will decide whether they can fix it or whether some modeling tweaks in the geometry are needed; this feedback process can be repeated as many times as needed.
### Create selection sets
This step is done by the animators but handed over to the rigger, who adds the selection sets to the character blend file so they are available by default every time the rig is linked.
The process is to create a selection of the controllers needed for a certain part, for example the face of the character, which makes it easy to select them at any time.
For parts like the arms/legs, we only make selection sets for one side of the body and flip the selection when we want the opposite side; this keeps the selection sets pop-up menu cleaner.
## Create animation design guide and pose libraries
We create the animation design guide based on the sculpts and hero poses that modeling provides; it is essentially a list of dos & don'ts in terms of animation. This gives all the animators a common reference point, making poses and expressions more consistent, makes it easier for new animators to jump on board and adjust to the animation style, and shortens the time needed for feedback and revisions, resulting in faster animation output.
Next to that, creating an animation pose library makes the whole process even faster because everybody shares the same approved poses, which makes it easier to keep the character on model.
Animation pose libraries are created at least for the face and hands, as these are the most complex parts to pose.
## Animation test
We will do this **to find the animation style and personality of the characters**.
We usually start with some kind of walk cycle, because it is the easiest way to find personality in motion, and later move on to more complex animation tests, body mechanics and, if needed, lip sync, until we find the right personality and animation style (level of cartoon/realism, animating on 1s or 2s, etc.).

View File

@ -1,152 +0,0 @@
# Modeling
::: warning Work in Progress
October 17th 2023 - The content of this page is currently being edited/updated.
:::
## **General**
* The **approved concept design** is required at this point. But overlap in these stages is possible.
* The important difference for this page is the focus on **delivering a production ready model** based on an approved design.
## Organic
There needs to be a handover of design material:
* Concept Design. Typically sculpted ([T-Pose](https://studio.blender.org/films/sprite-fright/39362c69939a56/?asset=4319) & optional [Hero Pose](https://studio.blender.org/training/stylized-character-workflow/6-pose-test/))
* [Deformation Tests](https://studio.blender.org/films/sprite-fright/39362c69939a56/?asset=4356) (Like [expressions](https://studio.blender.org/films/sprite-fright/character_lineup/?asset=4340) & [poses](https://studio.blender.org/films/sprite-fright/sprite/?asset=4243))
* [Style guides & limitations](https://studio.blender.org/films/sprite-fright/3993b3b741b636/?asset=4910)
**The outcome usually is a final topology**. More can be added if required:
* Basic UV maps
* Animation Shape Keys
* Baked textures from sculpted details
* Pivot Point Markers (via empty objects)
* Helper Shape Keys (Closed eyes, open mouth)
* Hero Pose Shape Keys
* Cleanup (Applied Transforms, manifold check, naming, dummy materials)
### Topology
**The goal is to create an optimized topology** for the next asset creation tasks (shading, texturing, rigging & animation). \
**A full UV unwrap** is also added at this stage and the modeling is done with it in mind.
Typically we do a retopology since the shape and design was already developed via sculpting and loose modeling. \
For more, read the [full explanation of our workflow and theory](https://studio.blender.org/blog/live-retopology-at-bcon22/).
We also have a [deeper look into the workflow for stylized characters \
](https://studio.blender.org/training/stylized-character-workflow/chapter/5d384edea5b8f5c2c32c8507/)as well as [realistic characters](https://studio.blender.org/training/realistic-human-research/chapter/retopology-layering/). \
This information can be extrapolated to other types of assets.
* [Retopology in Blender](https://studio.blender.org/blog/live-retopology-at-bcon22/)
* [Retopology & Layers from Charge](https://studio.blender.org/training/realistic-human-research/chapter/retopology-layering/)
* [Retopology Course based on Spring/Coffee Run](https://studio.blender.org/training/stylized-character-workflow/chapter/5d384edea5b8f5c2c32c8507/)
* [Retopology Cheat Sheets](https://studio.blender.org/training/stylized-character-workflow/chapter/5e5fea8470bde75aac156718/)
### UV Unwrapping
**The topology heavily influences where UV seams can be placed.** Especially for clothing this is vital to keep in mind.
The focus is mostly on **minimized stretching and texel density**. \
UDIMs are great for this to order uv islands by needed detail level. \
Areas that are closer to the camera are given higher texture resolution, \
while covered or hidden parts could be heavily reduced in resolution or even stripped of textures.
**Pattern aligned UV maps** are useful. \
They align texture patterns better on surfaces such as clothing. \
Add them as secondary UV maps, as they can have more visible stretching.
* [UV Mapping Live Streams for Snow](https://www.youtube.com/watch?v=_LI28r-Nk5g&list=PLav47HAVZMjl5VQRoVPd0481fsJNNQi9J&index=9&ab_channel=BlenderStudio)
### Sculpted Details & Animation Shapes
Existing sculpted surface details are ideally reprojected onto the subdivided retopology and refined for production. \
For this the **multiresolution modifier** is used and the final topology should be made of **relatively evenly distributed quads for the best results**.
For the use of sculpted animation shapes and detailed sculpted layers we use Shape Keys extensively as well as the \
[Sculpt Layers](https://blenderartists.org/t/sculpt-layers-addon/1288145) addon when needed. \
Also see the use of [Corrective Shape Keys](https://hackmd.io/VZ4wN5VmQBS5w9MLFolD3A) for already rigged characters.
* [Layered Sculpting from Charge](https://studio.blender.org/training/realistic-human-research/chapter/shapes-and-baking/)
* [Manual Clothing Dynamics from Charge](https://studio.blender.org/training/realistic-human-research/chapter/clothing-shapes-rotation/)
* [Baking Sculpted Details from Charge](https://studio.blender.org/training/realistic-human-research/chapter/baking-and-exporting/)
#### Helper Shape Keys
For handover to the next steps it's good to add some shape keys.
For example shapes to **open/close the eyes and mouth**. If multiple objects are affected, the shape keys can be driven by a single property. \
These shape keys are extremely helpful for texturing and rigging.
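As a minimal sketch of that idea (object, property and shape key names are hypothetical, using Blender's Python API), the same shape key on several objects can be driven from one custom property:

```python
import bpy

# Hypothetical names: one control object carrying a custom property,
# and two meshes that each have a "mouth_open" shape key.
ctrl = bpy.data.objects["CTRL-face"]
ctrl["mouth_open"] = 0.0  # single property that drives everything

for obj_name in ("GEO-head", "GEO-teeth"):
    key_block = bpy.data.objects[obj_name].data.shape_keys.key_blocks["mouth_open"]
    fcurve = key_block.driver_add("value")
    driver = fcurve.driver
    driver.type = 'AVERAGE'  # single variable, passed through unchanged
    var = driver.variables.new()
    var.type = 'SINGLE_PROP'
    var.targets[0].id = ctrl
    var.targets[0].data_path = '["mouth_open"]'
```

Changing the `mouth_open` property on the control object then updates all driven objects at once.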
For previz purposes a **full hero pose** is also useful. This way the texturing, shading and grooming could be rendered in an appealing pose instead of overly relying on the default T-Pose.
## Static/Hard-Surface
### Reference
For simple “static” assets it helps to use webshops or manufacturer websites to view the specific dimensions of an object. These often come with many reference images and videos. The best way to gather reference is by physically holding the object and being able to measure it.
Creating hard-surface assets often comes with mechanical or moving parts, making the modeling more elaborate. With a specific design or concept, it helps to create a collage of images to work more accurately when parts move and slide in a believable manner.
These may include videos, photos, separate dimensions, technical diagrams, and blueprints (multiple angles) to help understand the asset better. When using blueprints or diagrams make sure the vertical and horizontal lines are perfectly straight before modeling.
### Setup
Start with a cube set to only show “Wire” display, match the rough dimensions to maintain a consistent scale, and create a visual guide with the help of imported reference images or dimensions. Linking existing assets or characters into the scene is another useful way to match scale and proportions. For example, the hands of a character that will use the prop.
### Blocking
Start with simple shapes and topology to block out a basic version of the asset. Use modifiers to increase speed and flexibility, which is usually preferred at this stage to allow changes later.
Common modifiers used are:
* If the asset is symmetrical use the Mirror modifier
* Using the Solidify modifier to add thickness to a planar surface
* The Bevel modifier adds a chamfer to the edges. It is common in conjunction with the solidify modifier
* Array modifier is useful to repeat multiple instances of the same object in multiple directions or based on another object to repeat the object radially
* Edge Split modifier can be used to create non-destructive part lines using Mark Sharp in edit mode, in conjunction with the Solidify and Bevel modifiers
* Booleans to quickly shape an object without managing topology too much
* Curve modifier has a lot of uses combined with the array modifier to deform the mesh
Testing movement is achievable in different ways without the use of an armature. The way the asset is split into different collections can be important with some of these methods:
* Using Empty objects where the various (instanced) collections are parented to. Using instances of the collections is more flexible to avoid unwanted changes in topology (see the sketch below)
* Using shapekeys to animate more simplified movement
* Using the 3D cursor with basic object parenting to manipulate simplified set-ups
Creating a simplified animated version is a good method to improve the model and design. This new information feeds back into modeling and changes can be made easily and viewed again with the instanced collections.
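A minimal sketch of the collection-instance approach (collection and object names are hypothetical, Blender Python API):

```python
import bpy

# Hypothetical collection holding one moving part of the prop.
parts = bpy.data.collections["wheelbarrow_wheel"]

# Instance the collection on an empty so the source topology stays untouched.
inst = bpy.data.objects.new("wheel_instance", None)
inst.instance_type = 'COLLECTION'
inst.instance_collection = parts
bpy.context.scene.collection.objects.link(inst)

# Parent the instance to a second empty that acts as the test-animation pivot.
pivot = bpy.data.objects.new("wheel_pivot", None)
bpy.context.scene.collection.objects.link(pivot)
inst.parent = pivot
```

Animating the pivot empty then moves the whole instanced part, while the original collection can keep receiving modeling changes.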
### Refinement
At this point it is wise to strategically apply modifiers to remain flexible enough before fully committing. Retopology might be necessary to improve the surface curvature and clean-up any previous booleans.
Add smaller details like screws, cables, insets and more to increase believability. Add a Subdivision Surface modifier, creasing and manual bevels for smoothing and control over edge sharpness. Inspecting the model from different angles helps determine whether bigger shapes aren't lost and helps avoid errors.
### Sets
::: warning Work in Progress
October 17th 2023 - The content of this page is currently being edited/updated.
:::
## Delivery
Any asset needs refinements and additions to make sure they are ready to hand over:
* **Check for doubles** or vertices that are too close and merge them (Use Merge by Distance)
* **Consistent direction of normals** (Face Orientation overlay and Recalculate Normals)
* **Use smooth shading on objects**
* **Apply the object scale** and if needed also the location & rotation
* **Apply most modifiers**. Any that are best to have static in the model.
* **Double check UV Maps**. Any cleanup modeling could affect them negatively.
* **Logical naming and sorting** into collections
* **Add placeholder materials** with basic viewport colors. Very helpful for handover to initial shading and animation
* **Delete unused attributes** and any data that isn't needed for the handover
* **Add Empty objects to mark pivot points** of deformations. These are helpful helper objects especially if the rotation points on joints are not obvious

View File

@ -1,38 +0,0 @@
---
outline: deep
---
# Rigging
::: info
Original doc by Demeter, copied over from studio.blender.org. The original doc should be retired once this goes live.
:::
Our rigging workflow for characters as well as complex props, is based on procedurally generating control rigs, then manually weight painting and authoring corrective shape keys. We've built some custom tools to make these processes more efficient, foolproof and as iterative as possible. We try to tailor each rig to the needs of the animators on any given production.
## Generating Control Rigs
When a character model is fresh out of modeling/retopo, generating the control rig is the first step in rigging it. The control rig is generated using CloudRig, our extension to Blender's Rigify add-on. Features in CloudRig are tweaked and added according to the needs of each production.
* [CloudRig Repo/Download](https://gitlab.com/blender/CloudRig)
* [CloudRig Video Documentation](https://studio.blender.org/training/blender-studio-rigging-tools/)
The facial rigging is usually done with Rigify's [Action set-up system](https://studio.blender.org/training/blender-studio-rigging-tools/actions/).
## Weight Painting
Weight painting character meshes to the generated rig has so far happened simply manually, as there hasn't been a need to mass-produce characters with high quality deformation. Still, the **Easy Weight** add-on helps to make this manual weight painting workflow efficient with some custom UI, and less error-prone with a rogue weight checking system. There is also a short tutorial series about my weight painting workflow, which also includes a section about this add-on specifically.
* [Download Easy Weights](https://studio.blender.org/pipeline/addons/easy_weight)
* [Weight Painting Course](https://studio.blender.org/training/weight-painting/)
## Corrective Shape Keys
After creating the control rig and weighting the mesh to its deform bones, we want to create shape keys to improve the quality of the deformations, as a final level of polish.
Normally, the control rig and the weight painting has to be finalized before corrective shape keys can be authored, but our workflow with the **Pose Shape Keys** add-on allows us to make changes to the rig and the weights while preserving the resulting corrective shapes. This lets us work in a less restricted way when it comes to iteration.
* [Download Pose Shape Keys](https://studio.blender.org/pipeline/addons/pose_shape_keys)
* [Pose Shape Keys Tutorial Video](https://studio.blender.org/training/blender-studio-rigging-tools/pose-shape-keys/)
## Examples
* [Snow Character Rigging Live Streams (20 hours)](https://www.youtube.com/watch?v=SB3qIbwvq8Y&list=PLav47HAVZMjnA3P7yQvneyQPiVxZ6erFS)
* [Blender Conference 2022 Live Character Rigging (50 minutes)](https://conference.blender.org/2022/presentations/1723/)
* [Example Character Rigs](https://studio.blender.org/characters/) (Everything starting with Lunte)

View File

@ -1,85 +0,0 @@
# Shading
::: warning
6 October 2023 - The content of this page is currently being edited/updated.
:::
## Requirements
* Reference (Concepts, Material)
* Base Models for tests/ Final Topology for task
* Topology requirements
* Required LOD
## Initial Choices
### Rendering Style Choice (PBR vs NPR)
* General shading model
### Engine Choice (Cycles vs Eevee)
* Eevee shading settings
* Cycles shader displacement
* Hybrid workflow
### Workflow Choice (Procedural vs Data)
* Ups and Downs
* Texture specs and color management
* Semi-procedural
### Coordinate space (UV vs 3D)
* Deformation
* Procedurals
## Workflow
### Data Layers
Besides texture maps, data that the shader can use can be written directly onto the geometry. This data thus depends highly on the topology and needs to be checked under topological changes.
#### UV Maps
There are [certain requirements for UV maps](https://studio.blender.org/pipeline/pipeline-overview/asset-creation/modeling#uv-unwrapping) that are used for image texture painting and baking. At the same time UV maps can be used as a way to align generic patterns (procedural or not) to the surface in a way that takes advantage of the topology.
#### Color Attributes
Since the geometry is typically lower resolution than the texture maps, color attributes are better suited for colors that are soft and gradual, follow the topology, or are meant as a preview for quick editing.
It can also be useful to bake a preview of the color from the shader into a color attribute instead of a texture for viewport display.
#### Attributes
Beyond these specialized attribute types, any generic type of data on the respective domains can be used for shading. With Blender's node tools this makes it possible to build a name-based attribute system for the production that the tools can hook into conveniently during the editing process.
Developing these tools should be part of the pre-production phase.
### Texture Baking
* HiRes to LowRes
* Procedural to texture
* Pattern baking
### Shading Data Generation
#### Geometry Data
Besides using the stored data from baked or painted maps and attributes there are several types of implicit data to be used in the shader, such as for example the curve intercept or the pointiness of the mesh. This type of data is not stored on the geometry but implicitly derived and can change depending on outside parameters like the deformation.
#### Geometry Nodes
On top of the available shading information it can be incredibly useful to generate additional data for shading on the fly using geometry nodes. Instead of writing information as static data it can be generated depending on the context. For example to generate a mask based on the proximity of a mesh to another. This data can be calculated with Geometry Nodes and stored as an attribute that can be used in the shader.
([Example](https://youtu.be/1nvzwhbL-k0?si=BY-X-1Xe9D4FyySj&t=458))
### Camera Based Effects
### Node Group Structure (modularity)
#### Main shader
A lot of the time it is useful to build the shader network in such a modular way that large parts can be shared between assets of the entire production. Breaking down the base of all shaders into just a few main shaders makes it very easy to make adjustments on a global scale.
#### Utility node groups
Project specific, as well as generic node-setups that are reused throughout the production should be isolated into utility nodegroups and shared. That way, adding functionality, making tweaks or improving the behavior propagates wherever they are used.
---
All these nodegroups should sit in their respective library files and get linked into the asset files where they can be integrated into a local shading network.
Iterating over these library node-groups comes with the responsibility to make sure that there are no regressions. New nodegroup inputs should be set up in a way that the default value means no visible change. Otherwise this can have consequences for the assets that use these libraries.
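As a small sketch of linking such shared groups (the file path and group names are hypothetical), the utility node groups can be pulled from their library file with Blender's Python API:

```python
import bpy

library_path = "//../lib/nodes/utility_nodegroups.blend"  # hypothetical library file
wanted = {"NG_fresnel_mix", "NG_dirt_mask"}                # hypothetical group names

# Link (not append) the node groups so they stay editable only in the library file.
with bpy.data.libraries.load(library_path, link=True) as (data_from, data_to):
    data_to.node_groups = [name for name in data_from.node_groups if name in wanted]

# The linked groups are now available to drop into local shading networks.
for group in data_to.node_groups:
    print("Linked node group:", group.name)
```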
### External Control (Custom Properties)

View File

@ -1,233 +0,0 @@
# Kitsu
We have been using [Kitsu](https://www.cg-wire.com/) as task and asset manager for our productions. It runs at `kitsu.blender.studio`.
The following doc describes how we use it at the Blender Studio, which may of course differ from other studios.
## Getting started
General documentation: https://kitsu.cg-wire.com/
## Connection
Everyone has to get a personal profile. Once you are registered as a user:
- `login`: your email address
- `password`: default (for the 1st connection only)
Each user is assigned a type of profile that gives [specific accesses and rights](https://kitsu.cg-wire.com/permissions/#permissions).
- *Artist*: has access only to the projects theyre assigned to, can only update/react to tasks they are assigned to
- *Supervisor*: has access only to the projects theyre assigned to, can update/react only to the Departments they are assigned to.
- *Production Manager*: has all access to all the projects and can create new ones
- *Studio Manager*: has all access to every project and Kitsu parameters
- *Client*: sees only what is defined for them (to be set in the general settings) and can only interact in a limited way. Depending on the settings, their comments can be subject to approval before being published.
- *Vendor*: partner studio, they have similar permissions to the artists but they see only the tasks they are assigned to.
### Creating a new profile
The People page lists every account that exists, active or not. You can see the names, emails, roles and Departments.
![Creating Account](/media/user-guide/kitsu/kitsu_people_v001.png)
On the top right you'll find the "Add User" button. From there, fill in the form:
![Creating Account](/media/user-guide/kitsu/kitsu_people-new1_v001.png)
The role chosen will be key as it gives different types of accesses:
![Creating Account](/media/user-guide/kitsu/kitsu_people-new3_v001.png)
Departments will give info to the other people managing projects (very useful when you don't know everyone and have to go through the list), as well as restrict Artists' and Supervisors' access to only their projects and their departments' tasks:
![Creating Account](/media/user-guide/kitsu/kitsu_people-new2_v001.png)
Then create the user, send an invitation or give the 'How to' to your new team member!
## Main principles
Kitsu provides a **platform for users to manage and collaborate on projects**. It is organized around projects, which are created and managed by *Production Managers* and *Studio Managers*. Users are assigned to projects and given access to specific tasks and assets.
Kitsu includes features such as statuses, asset types, assets, shots, playlists, notifications, and newsfeed. Users can follow updates on a project through these features. Kitsu also includes an overview info section, which provides a summary of the project.
### Main page
![Kitsu Main Page](/media/user-guide/kitsu/kitsu_mainpage_v001.png)
### Statuses
Here is the list of statuses we have created at the studio, and their function:
![Kitsu Statuses](/media/user-guide/kitsu/kitsu_statuses_v002.png)
#### Usual cycle of approval
`NR``TODO``WIP` (artist starts working) → `WFA` (artist shows director/supervisor)
`DONE` (task is approved)
`GO` (first step is approved, go on to finish the task. E.g. blocking then splining)
→ artist puts it in `WFA` again, for another cycle
`RTK` (feedback to be applied)
→ artist puts it in `WFA` again, for another cycle
### Asset Types
We created different Types of Assets to organize them on the page (they show in categories) and apply different Tasks to each type.
Example: **Characters** will need `Concept`, `Modeling`, `Sculpting`, `Rigging`, `Shading` and `Anim Test` but **FX** will need `Concept`, `Shading` and `Anim Test`, or **Props** will only need `Concept`, `Modeling`, `Shading` and `Rigging`.
![Asset Types](/media/user-guide/kitsu/kitsu_assets-types_v001.png)
*(This image is an old version - to be updated)*
### Tasks
Kitsu is organized around Tasks, which we create as we need them. They will form the columns on the Asset and Shot pages.
Each task is linked to a **Department**; people can also be linked to one or several Departments. This makes it possible to group things and adapt the interface to only show one specific department, if that's useful. In the images below, those departments are symbolized by the colored dot on the left of each task.
Those tasks each have a color and can be selected and put in a different order on each project; see *Settings*.
For Assets:
![Tasks Assets](/media/user-guide/kitsu/kitsu_tasks-assets_v001.png)
For Shots:
![Tasks Shots](/media/user-guide/kitsu/kitsu_tasks-shots_v001.png)
Apart from the Assets and Shots pages, we have **access to each Task of a project** if we click on the name at the top of a column: it opens a new page with more info (retake count,... ) and filter options.
This is especially useful when we want to put in estimates (for the time to spend on a task), deadlines, or work on the Schedule Tab.
![Tasks Page](/media/user-guide/kitsu/kitsu_tasks-page_v001.png)
Example of the schedule used in animation on *Pet Projects*, for shots not approved yet:
![Schedule](/media/user-guide/kitsu/kitsu_schedule-animation_v001.png)
### Search bars
![Search Bar](/media/user-guide/kitsu/kitsu_search-bar_v001.png)
On almost every page, you will find a search bar. This is dynamic (no need to press Enter, unless you want to *save* the search parameters) and works on this principle:
- **`task`=`status`**
- `task`= **`-`** `status` (shows every task that does *not* have that status)
- `assettype` e.g.: *to be written*
By default, all Statuses and Tasks are included, but each project can select those it needs.
#### Importing data with .csv files
On each Assets page, Shots page or Task Page you can import a `.csv` file. This allows for rapid ingestion and creation of Assets or Shots.
![CSV import/export](/media/user-guide/kitsu/kitsu_csv-buttons_v001.png)
You can then download all this data again, which gives you a `.csv` file with all the info on the page (including assignments and current statuses).
→ this is especially useful if we want to do statistics or follow quotas
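As a small sketch of the statistics use case (the file name and column names are assumptions about a typical export, not a guaranteed Kitsu schema), a downloaded `.csv` can be summarized with a few lines of Python:

```python
import csv
from collections import Counter

# Hypothetical export file and column name; adjust to the actual CSV header.
with open("shots_export.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

# Count how many shots sit in each status for the Animation task column.
statuses = Counter(row.get("Animation", "unknown") for row in rows)
for status, count in statuses.most_common():
    print(f"{status}: {count} shots")
```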
#### Status Automations
Trigger automatic changes to make the workflow more efficient. The automations have to be added on each project, depending on our needs.
![Status Automations](/media/user-guide/kitsu/kitsu_status-automation_v001.png)
### Assets
*to be written*
### Shots
*to be written*
### Playlists
There are 2 ways to create playlists:
#### Playlist page
![Playlist Page](/media/user-guide/kitsu/kitsu_playlist-page_v001.png)
![Playlist Page, info](/media/user-guide/kitsu/kitsu_playlist-page-create_v001.png)
![Playlist Page, modifications](/media/user-guide/kitsu/kitsu_playlist-page-view_v001.png)
#### Playlists on the fly
Directly on the Assets or Shots page, you can select tasks:
![Playlists on the Fly, with Assets](/media/user-guide/kitsu/kitsu_playlist-fly-create_v001.png)
![Playlists on the Fly, saving](/media/user-guide/kitsu/kitsu_playlist-fly-save_v001.png)
![Playlists on the Fly, info](/media/user-guide/kitsu/kitsu_playlist-fly-save2_v001.png)
### Following the updates
#### Notifications
The bell on the top right.
This shows notifications for your particular user, when you've been tagged in a comment or for Tasks you are assigned to.
#### News Feed page
Very useful to check what people have been up to on a project.
For example, look for everything that has been approved (`DONE`) or put on `RTK`, or only `RIG` tasks.
Or look for only one person in particular.
![Newsfeed Page](/media/user-guide/kitsu/kitsu_newsfeed_v001.png)
### Overview info
The Asset Type Stats and Sequence Stats pages can be downloaded as `.csv` files too.
Here with the pie chart view:
![Overview 1](/media/user-guide/kitsu/kitsu_overview-assets-pies_v001.png)
Here with the numbers showing:
![Overview 2](/media/user-guide/kitsu/kitsu_overview-assets-numbers_v001.png)
## Setting up a new project
### Main info
Studio Managers and Production Managers can access the "Create a new production" page through the main page or the My productions page.
There, you'll find a lot of information to fill in:
![Create a project](/media/user-guide/kitsu/kitsu_settings-newproject_v001.png)
All categories need to be filled with at least one item. The most important is "2 - Parameters", as the `Type of project` sets up a different array of technical parameters; a series will have multiple episodes, and a feature will have more robust data management than a short film.
### More settings
#### Parameters
You can add more info, a brief and even update the picture:
![Project Parameters](/media/user-guide/kitsu/kitsu_settings-parameters_v001.png)
#### Adding people
Once your project is created, you need to **assign people to it** so they can see it and interact on the different pages. For that, go to the Team page.
This is especially true for *Supervisors* and *Artists* who cannot see productions they are not assigned to.
![Team Page](/media/user-guide/kitsu/kitsu_team_v001.png)
#### Tasks, Statuses and Asset Types
As mentioned before, by default all tasks and statuses are available in a project. If you wish to define stricter parameters, you can do so in the settings.
For each tab, the system is the same: you choose what to add from a drop-down list.
Tasks can be ordered, per project:
![Project Tasks](/media/user-guide/kitsu/kitsu_settings-tasks_v001.png)
In the example below, we chose to use all the Statuses existing in our Kitsu:
![Project Statuses](/media/user-guide/kitsu/kitsu_settings-statuses_v001.png)
For Asset Types:
![Project Asset Types](/media/user-guide/kitsu/kitsu_settings-assets_v001.png)
#### Automations
As for other parameters, Automations can be added from a list of those existing in the global Kitsu:
![Project Asset Types](/media/user-guide/kitsu/kitsu_settings-status-automation_v001.png)
### Creating tasks
*to be written*
### Importing data
*to be written*

View File

@ -1,6 +0,0 @@
# Concept and Design
::: warning
6 October 2023 - The content of this page is currently being edited/updated.
:::

View File

@ -1,89 +0,0 @@
# Editorial
::: warning Work in Progress
October 20th 2023 - The content of this page is currently being edited/updated.
:::
At Blender Studio we use the Blender VSE to create and maintain a story edit. A couple of reference links for now:
* [Charge Previsualization](https://studio.blender.org/blog/charge-previsualization/)
* [Story Pencil](https://www.youtube.com/watch?v=b25kfE6qd_c) - Currently not used at Blender Studio, but worth checking when working with storyboard artists.
## Requirements, any of the following:
* Film script
* Thumbnails/sketches
* Storyboard drawings
* Previs images
* Previs videos
* Concept art
* Temp music
* Temp sfx
* Temp vocals
## Goal of the task
During the early stages of the project, the goal is to take any available materials into the VSE and create a rough animatic version of the film
- based on the most current version of the script
- or based on the most recent feedback from the director.
Over the course of the project, sections of the rough animatic **get replaced** with
- alternative storyboard drawings,
- layout shots,
- animated playblasts
- and final renders.
So at any given moment, parts of the edit are in different stages until the entire thing finally crosses the finish line with only final renders.
## Resolution
Set the file to the correct resolution; our most commonly used one is 2048x858. For some films it's helpful to also include a letterbox border, to add subtitles or notes from the director. In those cases we usually simply add a bit to the height to make room for it.
As an example, in Sprite Fright the final resolution was 2048x858, but in the beginning we were working with 2048x1146 in the edit and adding two black color strips (144 px above and below) to keep the film area at 2048x858. Then a text strip was added above to indicate which scene we were in, and a text strip below for the subtitles.
## Export settings
We usually export an **.mp4 file**, with the video codec **H.264** and audio codec AAC.
## Versioning
The edit file should get “-v001” at the end of its name (e.g. `short_film-edit-v001.blend`).
The version number is bumped up whenever there is a new export of the film, so that the export has the same version number as the work file (e.g. `short_film-edit-v001.mp4`). This might usually happen 1-4 times a week, depending on the situation.
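A tiny sketch of that convention (a hypothetical helper, not part of the pipeline scripts) for bumping the version suffix:

```python
import re

def bump_version(filename: str) -> str:
    """short_film-edit-v001.blend -> short_film-edit-v002.blend"""
    def _bump(match: re.Match) -> str:
        return f"-v{int(match.group(1)) + 1:03d}"
    return re.sub(r"-v(\d{3})", _bump, filename, count=1)

print(bump_version("short_film-edit-v001.blend"))  # short_film-edit-v002.blend
```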
## Legacy cleaning
As the project goes forward and gets more refined, it becomes increasingly more important to **keep the edit organized and tidy**.
A lingering alt version of a scene might be allowed to exist hidden inside the edit for 2-3 versions but it needs to be eventually deleted and the edit cleaned up. Otherwise it becomes too bogged down in legacy issues. You will always have the older versions to go back to, if an old scene needs to be revived.
## Organizing external files
All external files for the edit are usually put inside a well organized “editorial” folder, or in an adjacent folder.
### Organizing strips
In the beginning there is no hard and fast rule. Whatever works best for the editor to create a rough animatic.
As the project progresses and the complexity level of the edit gets raised, it becomes more important to create some rules of thumb.
- Allow 4-12 channels for the storyboards/previs/shots (or however many are needed),
- 3-5 for the temp music and ambiance,
- 2-3 channels for dialogue (depending on how many characters are in the film)
- and 10-20 channels for temp sfx.
The temp sfx can get very messy and unwieldy so be very selective in the beginning how many to use and for what purpose.
### Organizing temp vocals / scratch dialogue
When the film is dialogue heavy, it can be a nightmare to keep track of all the different versions of takes for each and every character.
For Sprite Fright I made a new workspace for each character, with its own scene data pinned to it. There I could have a long edit of the various recordings of that character, using markers to help organize them. When I went over a recording, I would cut each take and move the ones I liked up one channel and the ones I didn't like down one channel. Once I had the takes I needed, I would copy the strips, go back to the “main edit” workspace, paste them there and integrate them into the edit.
### Adding meta strips
At some point during the production process, the animatic will become solid enough that scenes and shots will get **officially named**. The edit is not necessarily locked, but at this point changes to the story will be kept to a minimum.
In a single channel, a color strip will be created for each shot (to represent the position and timing of each shot). The **Kitsu addon** can then use those color strips to create meta strips out of them, all in a single channel. Each meta strip represents a shot and is connected to Kitsu, so that once any edit changes are made the information can be sent to Kitsu.
### Organizing exported shots
Once a shot is exported via the Kitsu addon, it needs to be **manually inserted** into the edit (this is for quality control and troubleshooting) and placed below the corresponding meta strip. From there on out the Kitsu addon can be used to update the shot to the latest version, if a new version has been exported.

View File

@ -1,5 +0,0 @@
# Previsualization
::: warning Work in Progress
October 20th 2023 - The content of this page is currently being edited/updated.
:::

View File

@ -1,5 +0,0 @@
# Research and Development
::: warning
6 October 2023 - The content of this page is currently being edited/updated.
:::

View File

@ -1,35 +0,0 @@
# Storyboard
## General requirements
* The script of the film
* Sequence Briefing from the director
* Concept art and character designs
## Goal of the task
Storyboarding is the first step into visualizing the vision of the director.
This is the time to
- try different angles,
- beats,
- acting choices
to determine the storytelling in visual format.
The goal is to create a version of the movie that eliminates as many alternative possibilities as possible and is solid enough to be transferred to the next stage: Previz.
## Storyboarding using Story Pencil
For the workflow of storyboarding we use the Grease Pencil tool combined with the Story Pencil addon.
The advantage of this workflow is the **possibility to combine 2D drawings within a 3D environment**, which allows us to use the 3D environment as a base to create spatial awareness from the beginning that can be used in modeling/previz as a point of reference.
The Story Pencil addon allows us to **use the VSE editor** in Blender and swap easily between shots and edit, or alter different shots.
## Workflow
To start, we have been using rough sketches to simply **convey the idea** and **progression** of each shot. A first pass will be shown to the director and any feedback will be addressed.
Once a version is approved by the director, we'll continue to **refine** the drawings and add more definition to the shot by using a **2 color shading method**.
As an initial edit pass with Story Pencil, once we finalize a sequence, the drawings will be exported to individual images for the editorial department. They can use these images to refine the timing and adjust composition to the drawings if needed.

View File

@ -1,185 +0,0 @@
# Project Blender
Project Tools will store a version of Blender within the `shared` directory. This version of Blender is internal to that project. This allows multiple Blender installations to exist on your system, each with its own preferences tailored specifically to that project. The main advantage of running/managing Blender via the Project Tools scripts is that they synchronize the Blender version and shared Add-Ons across all users contributing to the project. Project Tools also allows you to run a custom build of Blender with the Add-Ons and preferences set for your project.
<!---
TODO Note from Julien:
An important info atm is that the `datafiles` folder is NOT being used from the Project Blender. This folder is directly referenced from the primary Blender preferences (on Linux at `/home/<user>/.config/blender/<version>/datafiles/`)
So if there are any World HDRIs and Matcaps that you'd like to use, these will be available in both Blender versions.
--->
## Blender Setup
The next step is to deploy the required software onto each of the studio's workstations.
### Using our scripts to download the latest Blender LTS or daily build version
```bash
# Linux/Mac
cd ~/data/your_project_name/svn/tools
./update_blender.py
```
```bash
# Windows
cd %HOMEPATH%\data\your_project_name\svn\tools
python update_blender.py
```
This will download the latest blender to `data/your_project_name/local/blender`
::: info Choosing Branch to Install
You can specify a [daily build](https://builder.blender.org/download/daily/) branch to fetch by editing the `BLENDER_BRANCH` variable in the script file.
:::
### Manually deploying Blender versions of your choosing
You can download and put any Blender release into the `your_project_name/shared/artifacts/blender` folder with its corresponding shasum file.
NOTE: If you do this, it is strongly advised not to run the `update_blender.py` script as it will overwrite your files.
There are a few things to keep in mind though:
1. It has to be the `.zip` release for Windows, `.tar.gz` for Linux, and `.dmg` for Mac.
2. Each file has to have a shasum file. You can generate this yourself easily on Linux (see the Python sketch after this list) with:
`sha256sum file.tar.gz > file.tar.gz.sha256`
3. The file names for the Blender archives have to follow this naming scheme:
Linux:
`blender-linux.x86_64.tar.xz`
Mac:
`blender-darwin.arm64.dmg` or `blender-darwin.x86_64.dmg`
Windows:
`blender-windows.arm64.zip` or `blender-windows.amd64.zip`
Note that the file names don't have to match the examples above exactly, as long as their corresponding shasum file is picked up by the following file globbing schema:
`"blender*" + operating_system + "." + architecture + "*.sha256"`
4. There can be no ambiguity on which archive the `run_blender.py` script should use. So for example you can not have `blender-windows.arm64.zip` and `blender2-windows.arm64.zip` in the `your_project_name/shared/artifacts/blender` folder at the same time.
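On systems without `sha256sum`, or as a cross-platform alternative, a small Python sketch (the archive path is hypothetical) can write the expected `.sha256` file:

```python
import hashlib
from pathlib import Path

# Hypothetical archive inside the shared artifacts folder.
archive = Path("shared/artifacts/blender/blender-linux.x86_64.tar.xz")

# Hash the file in chunks to avoid loading the whole archive into memory.
digest = hashlib.sha256()
with archive.open("rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        digest.update(chunk)

# Same "<hash>  <filename>" layout that sha256sum produces.
sha_path = archive.parent / (archive.name + ".sha256")
sha_path.write_text(f"{digest.hexdigest()}  {archive.name}\n")
```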
## Create Shortcut
Once your project has been set up using the "Project Tools" scripts, Blender should be available inside your operating system's native application launcher. The run Blender script will take the correct Blender version for your operating system from `your_project_name/shared/artifacts/blender` and extract it to the local directory, along with any add-ons in the `your_project_name/shared/artifacts/addons` folder. Your Blender preferences are stored on a per-project basis in `{directory-path}`
### Create Linux Shortcut
```bash
cd ~/data/your_project_name/svn/tools
./install_desktop_file.sh
```
::: info Available on Gentoo
To learn more about running the Blender if you are on a Gentoo system please see the [Gentoo guide](/gentoo/user/running-blender.md), including how to run a [debug build](/gentoo/user/running-blender.md#debug-build).
:::
#### Launch with Custom Build on Linux
You must run the Create Linux Shortcut step before running a custom build. This will launch Blender using your custom binary, but with the Add-Ons and preferences of your project.
1. Navigate to your custom Blender binary
2. Right Click the binary
3. Select `Open with > Blender your_project_name`
<!---
TODO Replace Image with Project-Tools version
![Image of Blender Icon in KDE Taskbar/Start Menu](/media/artist-guide/launch_blender.mp4)
--->
### Create Windows Shortcut
1. Open the directory `%HOMEPATH%\data\your_project_name\svn\tools`
2. Create a shortcut to `launch_blender_win.bat` on your desktop
### Create Mac Shortcut
1. Open the directory `~/data/your_project_name/svn/tools`
2. In finder, select the `launch_blender_mac.command` and press `ctrl+shift+command+t` to add it to the dock.
## Launch via Terminal
To launch Blender from the terminal, open the tools directory within your project folder, and from the terminal use the run Blender script.
```bash
# Linux/Mac
cd ~/data/your_project_name/svn/tools
./run_blender.py
```
```bash
# Windows
cd %HOMEPATH%\data\your_project_name\svn\tools
python run_blender.py
```
::: warning Command Line Arguments
Note: Command Line Arguments also known as Flags are not supported by the `run_blender.py` script.
:::
## Update Blender
This script will fetch the latest Blender download from https://builder.blender.org/download/. The Blender downloads for Linux, Mac, and Windows will be placed into the `your_project_name/shared/artifacts/blender` folder. It keeps up to 10 previously downloaded versions as backup. Blender doesn't update automatically: at least one user in the project must manually initiate an update, and all users will then receive it because Blender is stored within the `shared` directory.
::: info Blender Studio Users
Internally at the Blender Studio only, the Blender inside your project is automatically updated overnight; no manual update is required.
:::
```bash
# Linux/Mac
cd ~/data/your_project_name/svn/tools
./update_blender.py
```
```bash
# Windows
cd %HOMEPATH%\data\your_project_name\svn\tools
python update_blender.py
```
## Rollback Blender
Use `rollback_blender.py` to switch the "current" version hosted in `your_project_name/shared/artifacts/blender` to one of the older downloads; rolling back affects all users of your project. This is intended to be used to roll back to an older version in case of bugs in newer downloaded versions.
```bash
# Linux/Mac
cd ~/data/your_project_name/svn/tools
./rollback_blender.py
```
```bash
# Windows
cd %HOMEPATH%\data\your_project_name\svn\tools
python rollback_blender.py
```
### Run a previous version of Blender Locally
In some cases users may want to run a previous version of Blender on their machine without affecting other users.
```bash
# Linux/Mac
cd ~/data/your_project_name/svn/tools
./run_blender_previous.py
```
```bash
# Windows
cd %HOMEPATH%\data\your_project_name\svn\tools
python rollback_blender_local.py
```
## Update Blender Studio Add-Ons
All Add-Ons in the Blender Studio Pipeline repository can be quickly downloaded using the `update_addons.py` script.
```bash
# Linux/Mac
cd ~/data/your_project_name/svn/tools
./update_addons.py
```
```bash
# Windows
cd %HOMEPATH%\data\your_project_name\svn\tools
python update_addons.py
```
*To learn more see [Add-On Setup page](/td-guide/addon_setup.md)*
::: info Gentoo Users
Flamenco is installed and updated by the package manager of your Gentoo workstation. To learn more see [Update Local Add-Ons](/gentoo/td/maintaince#update-local-add-ons) in the Gentoo section.
:::

View File

@ -1,29 +0,0 @@
# Project Overview
## Introduction
"Project Tools" is a collection of scripts included in the [Blender Studio Pipeline](https://projects.blender.org/studio/blender-studio-pipeline) repository, developed to assist you in running and managing one or more projects. These scripts automate and standardize many common operations involved in setting up a production pipeline.
It all starts with the directory layout, which should be deployed by a Technical Director following the [Project Tools Setup Guide](/td-guide/project-tools-setup.md). This standard directory layout defines where things like .blend files and playblasts are stored. It also enables Project Tools to ensure all users are running the same Blender and have a similar experience, with minimal setup required by the individual users.
Join the discussion at the [Blender Studio Pipeline Channel](https://blender.chat/channel/blender-studio-pipeline) on Blender Chat, the central hub for discussion about the Blender Studio Pipeline.
## Directory Layout
Typically projects are stored at the path `/data/your_project_name`, where `data` is at the root of the filesystem. This is for consistency between computers at the Blender Studio. External studios can use a different root folder for their projects, for example a user's home folder.
The project folder contains all data related to a project, including .blend files, playblasts and the Blender that is used on the project for all operating systems; even preferences are stored within the project.
* `local` This is where the local copy of Blender and the add-ons will be installed. This directory is populated by the `run_blender.py` script with the Blender & Add-Ons from `shared`.
* `shared` This is the folder that should be shared over the network; it contains renders, playblasts and other items that don't require version control. (Shared via Syncthing, NFS shares, Samba, Dropbox, etc.)
* `svn` This is the version controlled folder where the .blend production files will live.
```bash
.
└── my_project/
├── local # The local copy of Blender and the add-ons will be installed.
├── shared # Shared over the network (Syncthing, NFS, Dropbox, etc)
└── svn # Contains the `.blend` production files. (SVN, GIT-LFS, etc)
```
To learn the layout of the above directories, see the [`shared`](/naming-conventions/shared-folder-structure.md) and [`svn`](/naming-conventions/svn-folder-structure.md) directory overviews.

View File

@ -1,7 +0,0 @@
# Project Usage
Once your project is set up, there are several things users can do with the pipeline, including creating new shots on Kitsu directly from Blender, automatically building shots based on Kitsu data, and updating the frame range of existing shots within the project.

View File

@ -1,48 +0,0 @@
## Creating your first Asset
The next step is to create an asset and store that information into the Kitsu Server.
1. Launch Blender via [Project Blender](/artist-guide/project_tools/project-blender.md) Guide
2. Under `Edit>Preferences>Add-Ons` ensure `Asset Pipeline` is enabled
3. Follow the [asset pipeline guide](https://studio.blender.org/pipeline/addons/asset_pipeline) to create a new asset collection, and ensure these assets are marked as an [Asset in Blender](https://docs.blender.org/manual/en/latest/files/asset_libraries/introduction.html#creating-an-asset) (see the sketch after this list).
4. Save the above asset within the directory `your_project_name/svn/pro/assets/char` (or similar depending on type)
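A minimal sketch of step 3 (the collection name is hypothetical), marking a collection as an Asset with Blender's Python API:

```python
import bpy

# Hypothetical collection name for a character asset.
coll = bpy.data.collections["CH-rex"]

# Tag the collection as an Asset so it shows up in the Asset Browser,
# then save the file inside svn/pro/assets/char/ (or similar).
coll.asset_mark()
coll.asset_generate_preview()
bpy.ops.wm.save_mainfile()
```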
## Kitsu Casting
Casting is the process of associating a Kitsu Asset Entity with a given shot; this is how the Shot Builder knows which Assets to link into a given shot.
1. Please follow the [Kitsu Breakdown](https://kitsu.cg-wire.com/getting-started-production/) guide to Cast your assets to shots.
## Load Asset Data into Kitsu
To match Asset files to the casting breakdown on the Kitsu server, we need to tag the Asset with a filepath and collection. This can be done via the Blender Kitsu Add-On. This data is used to match a Kitsu Asset with a Collection within a .blend file, which will be linked into all shots that have that Asset cast in them.
1. Open the file for a given Asset.
2. Under the Kitsu>Context Panel, check the following settings.
- **Type** is set to Asset.
- **Asset Type** is set to the correct Asset Type (Collection, Prop, etc)
- **Asset** Field is set to the matching entry on the Kitsu server for the current file.
3. Under the Kitsu>Context>Set Asset sub-panel...
- **Collection** is set to the Asset's parent collection.
- Run the **Set Kitsu Asset** operator to send the current filepath and selected collection to the Kitsu Server.
![Set Kitsu Asset](/media/pipeline-overview/shot-production/kitsu_set_asset.jpg)
If you are using the Asset Pipeline, you will be prompted to confirm using the latest Publish file as the Asset target.
![Publish Asset Pipeline with Set Kitsu Asset](/media/pipeline-overview/shot-production/kitsu_asset_with_asset_pipeline.jpg)
You should now see the filepath and collection under the Asset's Metadata on Kitsu.
![Kitsu Asset Metadata](/media/pipeline-overview/shot-production/kitsu_asset_metadata.jpg)
## Building your First Shot
Before building your first shot, you will need to customize your production's Shot Builder hooks. The Shot Builder hook file should be stored inside your production's `assets/scripts/shot-builder` directory, based on the [example](https://projects.blender.org/studio/blender-studio-pipeline/src/branch/main/scripts-blender/addons/blender_kitsu/shot_builder/hook_examples) included in the Add-On. This file can be created automatically in the correct directory using an operator in the **Blender Kitsu** Add-On preferences. Hooks are used to extend the functionality of the Shot Builder and can be customized on a per-project basis; a generic sketch of the hook idea follows the list below.
1. Open `Edit>Preferences>Add-Ons`
2. Search for the **Blender Kitsu** Add-On
3. In the **Blender Kitsu** Add-On preferences find the Shot Builder section
4. Run the Operator `Save Shot Builder Hook File`
5. Edit the file `your_project/svn/pro/assets/scripts/shot-builder/hooks.py` to customize your hooks.
6. Open a new Blender File, select `File>New>Shot File`
7. Select the desired Sequence/Shot from Kitsu and select OK to start Building
8. New file will be saved to `your_project_name/svn/pro/shots/{sequence}/{shot}/{shot}.blend`
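For orientation only, the snippet below illustrates the general hook pattern (a registry of functions called at named points during shot building). It is not the actual `blender_kitsu` hook API; refer to the generated `hooks.py` and the bundled examples for the real decorator names and signatures:

```python
# Generic hook-registry pattern, illustrative only (not the blender_kitsu API).
HOOKS = {}

def hook(stage):
    """Register a function to run at a named stage of shot building."""
    def register(func):
        HOOKS.setdefault(stage, []).append(func)
        return func
    return register

@hook("after_build")
def set_render_defaults(shot, settings):
    # A production-specific tweak applied to every freshly built shot file.
    settings["resolution"] = (2048, 858)
    print(f"Configured render defaults for {shot}")

def run_hooks(stage, **kwargs):
    for func in HOOKS.get(stage, []):
        func(**kwargs)

# The shot building process would call, for example:
run_hooks("after_build", shot="010_0010", settings={})
```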

View File

@ -1,2 +0,0 @@
# Building Shots
<!--@include: usage-build-shot-core.md-->

View File

@ -1,19 +0,0 @@
# Final Render
Once the approved image sequences have been loaded into the main edit you are ready to create a final render of your film.
1. Open your Edit .blend file
2. Render Video as PNG Sequence
1. Under `Properties>Output` Set the output directory to `your_project_name/shared/editorial/deliver/frames/`
2. Set the File Format to `PNG`
3. Select `Render>Render Animation`
3. Render Audio
1. Select `Render>Render Audio`
2. In the Side Panel select Container `.wav`
3. Set the output directory to `your_project_name/shared/editorial/deliver/audio/`
4. Run Deliver script
1. Copy the `delivery.py` from `your_project_name/blender-studio-pipeline/film-delivery/` to the directory `your_project_name/shared/editorial/deliver/`
2. Enter the delivery directory: `cd /your_project_name/shared/editorial/deliver/`
3. Encode audio with `./deliver.py --encode_audio audio/{name_of_audio}.wav`
4. Encode video with `./deliver.py --encode_video frames/`
5. Finally run `./delivery.py --mux`
5. Final Render will be found in the `mux` directory

View File

@ -1,22 +0,0 @@
# Playblast Shot
## Playblast your First Shot
Once your first shot is animated you are ready to render a playblast of this shot, which will be later imported into your edit .blend file.
1. Launch Blender via [Launching Software] Guide and Open a Shot
2. In the Kitsu Sidepanel, under context, use the refresh icon to reload your file's context if it is not set already.
3. In the Kitsu Sidepanel under Playblast tools you are now ready to create a new playblast version. Select `Create Playblast`, which will render a preview of your shot and add it as a comment on your Kitsu Task for this shot:
1. Status: select the status to set your Kitsu Task to
2. Comment: add any notes you would like to include in your comment
3. Use Current Viewport Shading: Enable this to render with the settings from your current viewport
4. Thumbnail Frame: Which frame in your current file should be the thumbnail for your preview file
## Loading First Playblast into the Edit
Each new task type (Anim, Layout, etc.) needs to be added to the edit manually the first time; after that, the shot can be updated from it.
Returning to your edit .blend file, we can now load the playblast from the animation file into the edit.
1. Open your edit .blend file inside the directory `/your_project_name/svn/edit`
2. Select the Metadata Strip associated with your shot
3. From the Sequencer Side Panel select `Import 1 Shot Playblast`
4. Select the Task Type you would like to load the playblast from and an empty channel
5. Your new playblast will be imported with the same timing as the corresponding metadata strip

View File

@ -1,12 +0,0 @@
# Render Shot with Flamenco
<!--- TODO improve description --->
Once your shots are all ready to go, you can now render a final EXR from each of your shot files.
1. Open a shot file
2. In the properties panel navigate to Output, and set your file format to OpenEXR with Previews Enabled
3. In the properties panel navigate under Flamenco
1. Select `Fetch Job Types`
2. From the Dropdown select `Simple Blender Render`
3. Set Render Output Directory to `your_project_name/render/`
4. Set Add Path Components to `3`
5. Finally Select `Submit to Flamenco`

View File

@ -1,19 +0,0 @@
# Render Review
## Review and Approve Renders
Once your shot(s) have been rendered by Flamenco, you are ready to review your renders using the Render Review Add-On. This Add-On allows you to review different versions of your Render, assuming you rendered with Flamenco for multiple versions, and select the version you would like to use as your approved version.
1. Ensure Blender Kitsu Add-On is enabled and Logged In
2. From `File>New>Render Review`
3. From the dialogue box select a Sequence Name you would like to Review
4. Select the video strip for the render you are reviewing
5. Select `Push to Edit & Approve Render`
- Push to Edit will take the mp4 of your flamenco render and add it to the `your_project_name/shared/editorial/shots` directory
- Approve Render will copy the Image Sequence of your flamenco render to the `your_project_name/shared/editorial/frames` directory
## Import Approved Renders into VSE
Renders approved by the render review Add-On can be automatically imported into your edit using the same function used to update the playblast of shots in the edit.
1. Open your Edit .blend file
2. Select the video strip representing the shot that has an approved render. In the Kitsu Sidebar under General Tools select, `^` to load the next playblast from that shot automatically, which is an **mp4 preview** of your final render
3. Select the metadata strips representing all the shots you have approved renders for. Use the `Import Image Sequence` operator to import the final image sequences for each shot as EXR or JPG and load them into a new channel in the VSE

View File

@ -1,13 +0,0 @@
# Prepare Edit
## Sync your Edit with Kitsu Server
Most productions begin with a previz or storyboard step, showing the overall direction and plan for the production. By inputting this as video strip(s) into a VSE file we can automatically create the corresponding shots on the Kitsu Server directly from the VSE.
1. If not already ensure your project settings are setup [Blender Kitsu Add-On Preferences](https://studio.blender.org/pipeline/addons/blender_kitsu#how-to-get-started)
2. At the directory `your_project_name/svn/edit` create a new "Video Editing" File.
3. Populate your new edit file with your previz video strips
4. With your first strip selected, in the Blender Kitsu side panel of the VSE select "Create Metadata Strip from Active Shot” to create a new Metadata Strip.
5. Next select “Init Active Shot”, and enter the Shot and Sequence names you would like to submit to Kitsu.
6. Finally select the “Submit New Shot Button” to submit this new Shot to the Kitsu Server.
Repeat steps 4-6 for each shot in the sequence. Multiple Metadata Strips can be made out of a single previz strip if required, adjust the shots timing by simply trimming the Metadata Strip in the timeline. Below is an example of a previz sequence with Metadata Strips.

View File

@ -1,12 +0,0 @@
# Update a Shot's Frame Range
During production, in some cases the frame range of a shot will change, either adjusting the shot's length or its position in the edit. Once adjusted, we can update the shot .blend file's frame range so new playblasts will match the updated frame range. Once a new playblast is available, the shot can be updated automatically in the VSE via the Blender Kitsu Add-On.
1. Open your edit .blend file inside the directory `your_project_name/svn/edit`
2. Select a shot and its Metadata Strip, and adjust the timing of both strips so they remain in sync.
3. Select your Metadata Strip, in the Kitsu Sidebar of the VSE Under Push select `Metadata 1 Shot` to push your shot's new frame range to the Kitsu Server
4. Open your shot .blend file inside the directory
`your_project_name/svn/pro/shots/{sequence_name}/{shot_name}/`
5. Inside the Kitsu Sidebar, under Playblast tools, if your frame range on Kitsu has changed you will see a red `Pull Frame Range` button. Select it to update the file's Frame Range
6. Adjust the shot's animation to accommodate the new frame range, then under Playblast use the `+` button to create a new version, then select `Create Playblast` to render a new playblast
7. Open your edit .blend file, and select the movie strip for the shot you would like to update.
8. In the Kitsu Sidebar, under General Tools, select `^` to load the next playblast from that shot automatically
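In essence, `Pull Frame Range` reads the shot's frame range from Kitsu and applies it to the scene. A rough sketch of that idea with gazu and bpy (the `data` keys and all names are assumptions; the add-on stores the authoritative values for your project):

```python
import bpy
import gazu

# Assumes you are already authenticated against your Kitsu server.
project = gazu.project.get_project_by_name("your_project_name")
sequence = gazu.shot.get_sequence_by_name(project, "010")
shot = gazu.shot.get_shot_by_name(sequence, "0010")

# Assumption: the frame range is stored in the shot's metadata.
frame_in = int(shot["data"]["frame_in"])
frame_out = int(shot["data"]["frame_out"])

scene = bpy.context.scene
scene.frame_start = frame_in
scene.frame_end = frame_out
```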

View File

@ -1,69 +0,0 @@
# Animation
## General requirements
In order to start the animation process, the animators require **animation-ready rigs**. This means the rigs that are needed to animate a shot are **well tested and refined** by the rigging department.
In case the rigs are only partially ready (e.g. when the character's facial rig is still in progress), this needs to be communicated with the director and coordinator in order to decide if a shot is ready to be animated. A shot might heavily rely on body mechanics where the facial performance is minimal. In this case, the animator starts by animating the body rig. Once the rig receives the updated facial rig, the animator can finish the shot by animating the face as well.
Another requirement for the animation process is the **shotfile created by the shotbuilder**. This file includes the characters, props and the set. The shotbuilder has also generated the correct frame range and (if needed) an audio file.
## Goal of the task
The goal of the animation process is to create an animation performance according to the director's briefing. The characters and/or props need to move in a **believable** way and are expected to be **cohesive** with the style that has been set in pre-production.
These steps are taken into account for successfully completing a task in animation:
* Briefing
* Scene setup
* Blocking pass
* Asset updates
* Polishing pass
* Shot delivery
## Briefing
Before the animator can start on their assigned shot, they need to get a detailed briefing from the director about the **performance** and **intent** of the shot/sequence.
The animator has a chance to ask questions and discuss acting beats with the director if needed. Once it is clear what needs to be done in animation, the animator can open the shotfile and prepare for animation.
## Scene setup
When opening the shotfile for the first time, it is important to **check** if
- the shotbuilder has built the shot correctly
- all the required assets are available in the scene
- the framerate and camera are set up correctly
Once this is the case, the animator starts with a **technical planning** pass for the shot. For example, when a character is in a car, it needs to be parent-constrained accordingly. This setup is necessary in order to continue the animation process and eliminate as many technical issues as possible along the way.
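For a setup like the character-in-a-car example, this technical planning often boils down to a simple constraint. A minimal bpy sketch (object and bone names are placeholders; in practice the constraint usually lives on a bone of the character rig):

```python
import bpy

character = bpy.data.objects["RIG-character"]  # placeholder names
car = bpy.data.objects["RIG-car"]

# Parent the character to the car so it follows the car's motion.
con = character.constraints.new(type='CHILD_OF')
con.target = car
con.subtarget = "seat"  # placeholder bone on the car rig, if any
```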
## Blocking pass
A first pass of the animated shot will contain a **rough version** of the animation performance in order to get feedback quickly from the director.
This way, any feedback can be addressed early on, and the director has the opportunity to change the performance if necessary.
## Asset updates
Assets are continuously updated during production so it is important to **keep rigs and props up to date** to avoid issues with out-of-sync assets later on.
Once a shot is finished, you make sure all assets are up to date so the lighting department won't run into unexpected issues when opening the shot in a lighting file.
## Polishing pass
Once a blocking pass is approved by the director, it is the task of the animator to refine the animation to a final state.
This means they will add the necessary inbetweens and animate the details on the character that haven't been refined in the blocking pass, such as earrings, tails, fingers, etc.
The polishing pass also ensures the characters/props have **proper contact** with the environment and no intersections are visible on screen.
## Shot delivery
Once a shot is finished, we run a bunch of checks to make sure the lighting department can take over without unexpected issues. This includes checking:
* Framerate is set correctly
* The latest version is rendered to Kitsu
* All assets are up to date and linked into the output collection
* Animation works with motion blur (start and end frame, parent switches)
Once all of these factors are set correctly, the shot is ready to commit for lighting to take over.
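Parts of this checklist can be scripted. A small, hypothetical sanity check for the framerate and frame range, run from the shot file (the expected values are placeholders; the real checks also involve Kitsu and asset versions):

```python
import bpy

scene = bpy.context.scene

# Placeholder expectations for this production/shot.
expected_fps = 24
expected_range = (101, 164)

assert scene.render.fps == expected_fps, (
    f"Framerate is {scene.render.fps}, expected {expected_fps}")
assert (scene.frame_start, scene.frame_end) == expected_range, (
    f"Frame range is {scene.frame_start}-{scene.frame_end}, expected {expected_range}")
```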

View File

@ -1,5 +0,0 @@
# Coloring
::: warning Work in Progress
October 6th 2023 - The content of this page is currently being edited/updated.
:::

View File

@ -1,44 +0,0 @@
# Effects
## Requirements
* Animation is usually final at this point, to ensure there is no mismatch in contact points, timing, etc. This is important as a lot of the FX are custom-built for the exact file and animation, so any further changes would require them to be redone.
* There should be at least a basic pass on lighting so that most FX can be placed in the right context with the visuals. This depends on the specific production and type of FX, though. Preliminary FX can be useful before that; usually we just go with the layout version, or what the animators mock up.
* A separate file for each type of FX in the shot, set up in the same way as the lighting file, with linked animation data and a local collection for the FX
## Workflows
### Set FX
Sometimes FX don't need to be made uniquely for a shot but can just be created once, in a way that they can be used in all shots that use the asset they belong to (e.g. Charge - Flags outside hut).
### Physics Simulations
The use of physics simulations on a shot level is relatively straightforward. Small simulations can be cached directly into the FX file, but usually the cache should live outside of it, in a place that is accessible to everyone working on the shot.
### Procedural FX
#### Geometry Nodes/Modifiers
A lot of the time, instead of doing an elaborate physics simulation, it is simpler and gives more control to fake something with a procedural setup (e.g. Charge - Paint can explosion). These can be very custom solutions on a shot basis or a general setup that is reusable over multiple shots (e.g. Sprite Fright melting).
#### Parametric FX assets
Whenever some FX need to be reused over and over in a generic way, with only simple adjustments/animation per shot, we use dedicated FX assets that usually have a rig with just a few bones for transforms and constraints. Usually those also have parameters to control a Geometry Nodes setup (e.g. [Charge - Bullet Impacts](https://studio.blender.org/films/charge/3b0f29b4825fa2/?asset=6191))
#### Shaders
Some FX need to hook into the shader of an existing asset, like a character (e.g. Charge - Bruised and sweaty face). Usually we make this a part of the asset itself and then expose the controlling parameter via custom properties on the object level, so it can be driven from outside, e.g. by the rig.
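One common way to wire this up is an Attribute node in the asset's shader that reads an object-level custom property, which the rig can then drive. A minimal sketch (the object, material and property names are placeholders, and the material is assumed to use nodes):

```python
import bpy

obj = bpy.data.objects["CH-character"]          # placeholder object
mat = bpy.data.materials["CH-character_skin"]   # placeholder material (uses nodes)

# Object-level custom property that the rig can drive.
obj["sweat_amount"] = 0.0

# Attribute node inside the shader that reads that property.
attr = mat.node_tree.nodes.new("ShaderNodeAttribute")
attr.attribute_type = 'OBJECT'
attr.attribute_name = "sweat_amount"
# attr.outputs["Fac"] can now be linked into e.g. a mix factor in the shader.
```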
### Frame by frame
#### Grease Pencil
Some FX are better drawn directly, so Grease Pencil is a great way to do that.
#### 'Keymesh' Style
The same applies to 3D meshes. In some cases it can be easier or better to sculpt a mesh frame by frame for a few frames (e.g. Sprite Fright - Bird Poop Wipe). Until Blender has native functionality, the currently most reliable way to do that is to create a separate object per frame and animate its scale to be 1 on the required frame and 0 on any other.
Of course, this technique does not allow for 'proper' motion blur.
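A tiny sketch of that scale trick with bpy, assuming one sculpted object per frame already exists (object names and frame numbers are placeholders):

```python
import bpy

# One sculpted object per frame, e.g. wipe.001 .. wipe.005 (placeholders).
objects = [bpy.data.objects[f"wipe.{i:03d}"] for i in range(1, 6)]
start_frame = 101

for i, obj in enumerate(objects):
    frame = start_frame + i
    # Scale 0 on the neighbouring frames, scale 1 only on its own frame.
    obj.scale = (0.0, 0.0, 0.0)
    obj.keyframe_insert(data_path="scale", frame=frame - 1)
    obj.keyframe_insert(data_path="scale", frame=frame + 1)
    obj.scale = (1.0, 1.0, 1.0)
    obj.keyframe_insert(data_path="scale", frame=frame)
```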
## Cache Files
* It's very important that any cached data is accessible to the render farm and anyone working on the shot. We work with a centralized cache directory on our server that everyone at the studio, including our render farm machines, has mounted at `/render/`.
* Since this directory is outside of our project's file structure, these paths must be absolute to work reliably (see the sketch after this list).
* There are different file formats viable for caching data to disk. Usually it is best to stick to Blender's native caching formats when possible, to make absolutely sure no data is lost.
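For example, a fluid simulation in the FX file would point its cache at an absolute path on that shared mount. A minimal sketch (object, modifier and path names are placeholders):

```python
import bpy

domain = bpy.data.objects["fx_smoke_domain"]  # placeholder object
fluid = domain.modifiers["Fluid"]             # placeholder modifier name

# Absolute path on the centralized mount, outside the project's file structure.
fluid.domain_settings.cache_directory = "/render/your_project_name/cache/010_0010/smoke"
```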
## Integration into Lighting File
* The FX collection(s) need to be linked back into the lighting file and integrated properly into the view layer setup and comp. **It is important that the collection is directly linked into the lighting scene, not instanced. This allows proper visibility control.**
* For FX that affect the lighting, either by casting additional light (e.g. sparks) or obscuring it, the lighting needs to be adjusted accordingly.
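In case this linking needs to be scripted, it only takes a few lines of bpy. A rough sketch (file path and collection name are placeholders); linking the collection directly into the scene collection, rather than instancing it on an empty, is what allows per-view-layer visibility control:

```python
import bpy

fx_blend = "/path/to/shot/010_0010.fx.blend"   # placeholder path
fx_collection_name = "010_0010.fx_explosion"   # placeholder collection name

# Link (not append) the FX collection from the FX file ...
with bpy.data.libraries.load(fx_blend, link=True) as (data_from, data_to):
    data_to.collections = [fx_collection_name]

# ... and attach it directly to the lighting scene, not instanced on an empty,
# so its visibility can be managed per view layer.
bpy.context.scene.collection.children.link(data_to.collections[0])
```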

View File

@ -1,5 +0,0 @@
# Layout
::: warning
October 6th 2023 - The content of this page is currently being edited/updated.
:::

Some files were not shown because too many files have changed in this diff