Initial gpu and vulkan tech doc. #2

Merged
Jeroen Bakker merged 2 commits from :gpu-vulkan into main 2023-05-09 16:25:57 +02:00
15 changed files with 736 additions and 2 deletions

View File

@ -0,0 +1,403 @@
# GPU Module
The GPU module is an abstraction layer between Blender and an Operating System
Graphics Library layer (GL). These GLs are abstracted away in GPUBackends. There
is a GLBackend that provides support for OpenGL 3.3 on Windows, macOS and Linux.
There is also a Metal backend for Apple devices. A Vulkan backend is currently
in development.
The GPU module can be used to draw geometry or perform computational tasks on a
GPU. This overview is targeted at developers who want a quick start on how to use
the GPU module to draw or compute. Basic knowledge of a GL (OpenGL
core profile 3.3 or similar) is required as similar concepts are used.
## Drawing pipeline
This section gives an overview of the drawing pipeline of the GPU module.
``` mermaid
classDiagram
direction LR
class GPUBatch
class GPUShader {
-GLSL vertex_code
-GLSL fragment_code
}
class GPUFramebuffer
class GPUVertBuf
class GPUIndexBuffer
class GPUPrimType {
<<Enumeration>>
GPU_PRIM_POINTS,
GPU_PRIM_LINES,
GPU_PRIM_TRIS,
...
}
class GPUShaderInterface
class GPUTexture
GPUBatch o--> GPUIndexBuffer
GPUBatch o--> GPUVertBuf
GPUBatch *--> GPUPrimType
GPUBatch o..> GPUShader: draws using
GPUShader o..> GPUFramebuffer: onto
GPUShader *--> GPUShaderInterface
GPUFramebuffer o--> GPUTexture
```
## Textures
Textures are used to hold pixel data. Textures can be 1, 2 or 3 dimensional,
cube maps, or arrays of 2D textures/cube maps. The internal storage of a texture
(how the pixels are stored in memory on the GPU) can be set when creating a
texture.
``` cpp title="Create a texture"
/* Create an empty texture with HD resolution where pixels are stored as half floats. */
GPUTexture *texture = GPU_texture_create_2d("MyTexture", 1920, 1080, 1, 0, GPU_RGBA16F, NULL);
```
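After creation, the texture can be filled with pixel data and freed when no longer needed. A minimal sketch, assuming `pixels` points to 1920×1080 RGBA float data:
``` cpp title="Update and free a texture"
/* Upload pixel data provided as floats. `pixels` is a hypothetical host buffer. */
GPU_texture_update(texture, GPU_DATA_FLOAT, pixels);
/* Release the texture when it is no longer needed. */
GPU_texture_free(texture);
```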
## Frame buffer
A frame buffer is a group of textures you can render onto. These textures are
arranged in a fixed set of slots. The first slot is reserved for a depth/stencil
buffer. The other slots can be filled with regular textures, cube maps or layered
textures.
`GPU_framebuffer_ensure_config` is used to create/update the configuration of a
framebuffer.
``` cpp title="Create a framebuffer"
GPUFramebuffer *fb = NULL;
GPU_framebuffer_ensure_config(&fb, {
GPU_ATTACHMENT_NONE, // Slot reserved for depth/stencil buffer.
GPU_ATTACHMENT_TEXTURE(texture),
})
```
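Before drawing, the framebuffer has to be activated. A minimal sketch of binding and clearing the framebuffer created above:
``` cpp title="Bind and clear a framebuffer"
/* Activate the framebuffer so subsequent draw calls render into it. */
GPU_framebuffer_bind(fb);
/* Clear the color attachment(s) and the depth buffer. */
const float clear_color[4] = {0.0f, 0.0f, 0.0f, 1.0f};
GPU_framebuffer_clear_color_depth(fb, clear_color, 1.0f);
```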
## Shader Create Info
The GPU module supports multiple GL backends. The challenge with multiple GL
backends is that GLSL differs between those platforms. GLSL written for OpenGL
cannot be compiled as-is for Vulkan, as the dialects differ. The biggest differences
between the GLSL dialects are how they define and locate resources.
This section of the documentation covers how to create GLSL that can
safely be cross compiled to the different backends.
### GPUShaderCreateInfo
#### Defining a new compile unit
When creating a new compile unit to contain GPUShaderCreateInfo definitions,
it needs to be added to `SHADER_CREATE_INFOS` in `gpu/CMakeLists.txt`. This
automatically registers the definitions in a registry.
Each of these compile units should include `gpu_shader_create_info.h`.
#### Interface Info
Interfaces define data that is passed between shader stages (e.g. from the vertex to the fragment stage).
Attributes can be `flat`, `smooth` or `no_perspective`, describing the interpolation
mode between the stages.
``` cpp title="Example interface info"
GPU_SHADER_INTERFACE_INFO(text_iface, "")
.flat(Type::VEC4, "color_flat")
.no_perspective(Type::VEC2, "texCoord_interp")
.flat(Type::INT, "glyph_offset")
.flat(Type::IVEC2, "glyph_dim")
.flat(Type::INT, "interp_size")
```
#### Shader Info
Shader Info describes:
- Where to find the required data (vertex_in, push constants).
- The textures/samplers to use (sampler).
- Where to store the final data (fragment_out).
- The data format passed between shader stages (vertex_out).
- The source code of each stage (vertex_source, fragment_source).
Shader infos can reuse other infos to reduce code duplication (`additional_info` loads the
data from the given shader info into the new shader info).
The active GPU backend adapts the GLSL source to generate those parts of the code.
``` cpp title="Example Shader Info"
GPU_SHADER_CREATE_INFO(gpu_shader_text)
// vertex_in defines vertex buffer inputs. They will be passed to the vertex stage.
.vertex_in(0, Type::VEC4, "pos")
.vertex_in(1, Type::VEC4, "col")
.vertex_in(2, Type::IVEC2, "glyph_size")
.vertex_in(3, Type::INT, "offset")
// vertex_out defines the structure of the output of the vertex stage.
// This is the input of the fragment stage or geometry stage.
.vertex_out(text_iface)
// definition of the output of the fragment stage.
.fragment_out(0, Type::VEC4, "fragColor")
// Flat uniforms aren't supported anymore and should be added as push constants.
// Note that push constants are limited to 128 bytes. Use uniform buffers
// when more space is required.
// Internal matrices are automatically bound to push constants when they exist.
// Matrices inside a uniform buffer are the responsibility of the developer.
.push_constant(0, Type::MAT4, "ModelViewProjectionMatrix")
// Define a sampler location.
.sampler(0, ImageType::FLOAT_2D, "glyph", Frequency::PASS)
// Specify the vertex and fragment shader source.
// Dependencies can be included automatically by using `#pragma BLENDER_REQUIRE`.
.vertex_source("gpu_shader_text_vert.glsl")
.fragment_source("gpu_shader_text_frag.glsl")
// Add all info of the GPUShaderCreateInfo with the given name.
// Provides a limited form of inheritance.
.additional_info("gpu_srgb_to_framebuffer_space")
// Mark the create info as compilable.
// By default a create info is not compilable.
// Compilable shaders are compiled when using the shader builder.
.do_static_compilation(true);
```
##### Shader Source Order
Shader source order does not follow the order of the method calls made to the
create info. Instead it follows this fixed order:
- Standard Defines: GPU module defines (`GPU_SHADER`, `GPU_VERTEX_SHADER`, OS, GPU vendor, and extensions) and MSL glue.
- Create Info Defines: `.define`.
- Typedef Sources: `.typedef_source`.
- Resources Declarations: `.sampler`, `.image`, `.uniform_buf` and `.storage_buf`.
- Layout Declarations: `.geometry_layout`, `.local_group_size`.
- Interface Declarations: `.vertex_in`, `.vertex_out`, `.fragment_out`.
- Main Dependencies: All files inside `#pragma BLENDER_REQUIRE` directives.
- Main: Shader stage source file `.vertex_source`, `.fragment_source`, `.geometry_source` or `.compute_source`.
- NodeTree Dependencies: All files needed by the nodetree functions. Only for shaders from Blender Materials.
- NodeTree: Definition of the nodetree functions (ex: `nodetree_surface()`). Only for shaders from Blender Materials.
#### Buffer Structures
Previously, structs that were used on both CPU and GPU had to be written twice: once using
the CPU data types and once using the GPU data types. Developers were
responsible for keeping those structures consistent.
Shared structs can be defined in 'shader_shared' header files, for example
`GPU_shader_shared.h`. These headers can be included in C and C++ compile units.
``` cpp
/* In GPU_shader_shared.h */
struct MyStruct {
float4x4 modelMatrix;
float4 colors[3];
bool1 do_fill;
float dim_factor;
float thickness;
float _pad;
};
BLI_STATIC_ASSERT_ALIGN(MyStruct, 16)
```
The developer is still responsible for laying out the struct (alignment and padding) so
it can be used on the GPU.
> NOTE: See the rules about correctly packing members in the GLSL style guide
> (Shared Shader Files section).
``` cpp
GPU_SHADER_CREATE_INFO(my_shader)
.typedef_source("GPU_shader_shared.h")
.uniform_buf(0, "MyStruct", "my_struct");
```
This creates a uniform block binding at location 0 with content of type
`MyStruct`, named `my_struct`. The struct members can then be accessed using
`my_struct` (ex: `my_struct.thickness`).
Uniform and storage buffers can also be declared as arrays of structs like this:
``` cpp
GPU_SHADER_CREATE_INFO(my_shader)
.typedef_source("GPU_shader_shared.h")
.storage_buf(0, "MyStruct", "my_struct[]");
```
A shader create info can contain multiple `typedef_source`s. They are included
only once, before any resource declaration (see the shader source order above).
#### Geometry shaders
Due to specific requirements of certain GPU backends, the input and output parameters of this
stage should always use a named structure.
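A hedged sketch of what this could look like (the interface, shader and file names here are made up for illustration): giving the interface an instance name allows it to be used for both the vertex stage output and the geometry stage output.
``` cpp title="Example geometry stage interface (illustrative)"
/* Hypothetical interface with the instance name "interp". */
GPU_SHADER_INTERFACE_INFO(my_geom_iface, "interp")
    .smooth(Type::VEC4, "color");

GPU_SHADER_CREATE_INFO(my_geom_shader)
    .vertex_out(my_geom_iface)
    .geometry_layout(PrimitiveIn::TRIANGLES, PrimitiveOut::TRIANGLE_STRIP, 3)
    .geometry_out(my_geom_iface)
    .geometry_source("my_geom_shader_geom.glsl");
```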
#### Compute shaders
For compute shaders the workgroup size must be defined. This can be done by calling
the `local_group_size` function. This function accepts 1-3 parameters to define the
local workgroup size for the x, y and z dimensions.
``` cpp
GPU_SHADER_CREATE_INFO(draw_hair_refine_compute)
/* ... */
/* Define a local group size where x=1 and y=1; z isn't defined. Missing parameters fall back
 * to the platform default value. For OpenGL 4.3 this is also 1. */
.local_group_size(1, 1)
.compute_source("common_hair_refine_comp.glsl")
/* ... */
```
### C & C++ Code sharing
Code that needs to be shared between the CPU and GPU implementation can also be put into
'shader_shared' header files. However, only a subset of C and C++ syntax is allowed
for cross compilation to work:
- No arrays except as input parameters.
- No parameter references (`&`) and likewise no `out` and `inout` qualifiers.
- No pointers or references.
- No iterators.
- No namespaces.
- No templates.
- Use the float suffix by default for float literals to avoid double promotion in C++.
- Functions working on floats (ex: `round()`, `exp()`, `pow()` ...) might have different
implementations on CPU and GPU.
> NOTE: See bug report #103026 for more detail.
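For illustration, a hypothetical function that follows the rules above (plain value parameters, float suffixes, only functions that exist in both C++ and GLSL) and could live in such a shared header:
``` cpp
/* Hypothetical shared function: compiles both as C++ and as GLSL. */
float safe_divide(float a, float b)
{
  return (b != 0.0f) ? a / b : 0.0f;
}
```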
You can also declare `enum`s inside these files. They will be correctly translated by
our translation layer for GLSL compilation. However, some rules apply:
- Always use the `u` suffix for enum values. GLSL does not support implicit casts and enums are treated as `uint` under the hood.
- Define all enum values. This is to simplify our custom pre-processor code.
- (C++ only) Always use `uint32_t` as underlying type (`enum eMyEnum : uint32_t`).
- (C only) Do **NOT** use enum types inside UBO/SSBO structs; use `uint` instead (because the `enum` size is implementation dependent in C).
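A hypothetical enum following these rules could look like this:
``` cpp
/* Hypothetical example: `u` suffixes, all values defined and a fixed
 * `uint32_t` underlying type (C++ only). */
enum eMyDrawFlag : uint32_t {
  MY_DRAW_FLAG_NONE = 0u,
  MY_DRAW_FLAG_WIREFRAME = 1u,
  MY_DRAW_FLAG_SELECTED = 2u,
};
```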
See [Shader builder](shader_builder.md) for a validation tool for shaders.
## Shader
A GPUShader is a program that runs on the GPU. The program can have several stages
depending on its usage. When rendering geometry it should at least have a vertex and a fragment stage; it can have an optional geometry stage. Using geometry stages is not recommended, as Apple (Metal) doesn't support them.
``` cpp title="Create shader"
GPUShader *sh_depth = GPU_shader_create_from_info_name("my_shader");
```
This looks up the shader create info with the name `my_shader`, loads and compiles
the vertex and fragment stages, and links the stages into a program that can be used on
the GPU. It also generates a GPUShaderInterface that handles lookups of input parameters
(attributes, uniforms, uniform buffers, textures and shader storage buffer objects).
## Geometry
Geometry is defined by a `GPUPrimType`, one index buffer (IBO) and one or more vertex
buffers (VBOs). The GPUPrimType defines how the index buffer should be interpreted.
Indices inside the index buffer define the order in which elements are read from the vertex
buffer(s). Vertex buffers are a table where each row contains the data of an element.
When multiple vertex buffers are used they are considered to be different columns of
the same table. This matches how GL backends organize geometry on GPUs.
Index buffers can be created by using a `GPUIndexBufBuilder`.
``` cpp title="Create Index Buffer"
GPUIndexBufBuilder ibuf;
/* Construct a builder to create an index buffer containing 6 triangles,
 * drawing from a vertex buffer that contains 12 elements. */
GPU_indexbuf_init(&ibuf, GPU_PRIM_TRIS, 6, 12);
GPU_indexbuf_add_tri_verts(&ibuf, 0, 1, 2);
GPU_indexbuf_add_tri_verts(&ibuf, 2, 1, 3);
GPU_indexbuf_add_tri_verts(&ibuf, 4, 5, 6);
GPU_indexbuf_add_tri_verts(&ibuf, 6, 5, 7);
GPU_indexbuf_add_tri_verts(&ibuf, 8, 9, 10);
GPU_indexbuf_add_tri_verts(&ibuf, 10, 9, 11);
GPUIndexBuf *ibo = GPU_indexbuf_build(&ibuf);
```
Vertex buffers contain the data; the attributes inside vertex buffers should match the attributes of the shader.
Before a buffer can be created, the format of the buffer should be defined.
``` cpp title="Create Vertex Format"
static GPUVertFormat format = {0};
GPU_vertformat_clear(&format);
uint pos = GPU_vertformat_attr_add(&format, "pos", GPU_COMP_F32, 2, GPU_FETCH_FLOAT);
```
``` cpp title="Create Vertex Buffer"
GPUVertBuf *vbo = GPU_vertbuf_create_with_format(&format);
GPU_vertbuf_data_alloc(vbo, 12);
```
``` cpp title="Fill vertex buffer with data"
/* `positions` is assumed to be an array of 12 2D float positions. */
for (int i = 0; i < 12; i++) {
  GPU_vertbuf_attr_set(vbo, pos, i, positions[i]);
}
```
## Batch
Use GPUBatches to draw geometry. A GPUBatch combines the geometry with a shader
and its parameters and has functions to perform a draw call. To perform a draw
call the following steps should be taken (a sketch follows below).
1. Construct the geometry.
2. Construct a GPUBatch with the geometry.
3. Attach a GPUShader to the GPUBatch with the `GPU_batch_set_shader` function or attach a built-in shader using the `GPU_batch_program*` functions.
4. Set the parameters of the GPUShader using the `GPU_batch_uniform*`/`GPU_batch_texture_bind` functions.
5. Call one of the `GPU_batch_draw*` functions.
This draws the geometry onto the active frame buffer using the shader and the loaded parameters.
> NOTE: GPUTextures can be used as render target or as input of a shader, but not inside the same drawing call.
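A minimal sketch of these steps, assuming `vbo` and `ibo` were built as in the previous section and that `my_shader` declares a `color` push constant:
``` cpp title="Draw a batch (illustrative)"
GPUBatch *batch = GPU_batch_create(GPU_PRIM_TRIS, vbo, ibo);
GPUShader *shader = GPU_shader_create_from_info_name("my_shader");
GPU_batch_set_shader(batch, shader);
/* "color" is a hypothetical parameter of the shader. */
GPU_batch_uniform_4f(batch, "color", 1.0f, 0.5f, 0.0f, 1.0f);
GPU_batch_draw(batch);
GPU_batch_discard(batch);
```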
## Immediate mode and built in shaders
To ease development for drawing panels/UI buttons the GPU module provides an
immediate mode. This is a wrapper on top of what is explained above, but in a
more legacy-OpenGL fashion.
Blender provides built-in shaders, which are widely used to draw the user interface.
A shader can be activated by calling `immBindBuiltinProgram`.
``` cpp
immBindBuiltinProgram(GPU_SHADER_2D_UNIFORM_COLOR);
```
This shader program needs a vertex buffer with a `pos` attribute; the color can be set as a uniform.
``` cpp
GPUVertFormat *format = immVertexFormat();
uint pos = GPU_vertformat_attr_add(format, "pos", GPU_COMP_F32, 2, GPU_FETCH_FLOAT);
```
``` cpp
/* Set the color uniform of the shader. */
immUniformColor4f(0.0f, 0.5f, 0.0f, 1.0f);
```
Fill the vertex buffer with the starting and ending position of the line to draw.
``` cpp
/* Construct an immediate buffer for a line with 2 vertices (the start and end point of the line to draw). */
immBegin(GPU_PRIM_LINES, 2);
immVertex2f(pos, 0.0, 100.0);
immVertex2f(pos, 100.0, 0.0);
immEnd();
```
By calling `immEnd` the data is drawn on the GPU.
> NOTE: Use GPUBatches directly in cases where performance matters. Immediate mode buffers aren't cached, which can lead to poor performance.
## Tools
- Validate shaders when compiling Blender using the [Shader builder](shader_builder.md).
- Debug the GPU using [Renderdoc](renderdoc.md).

View File

@ -0,0 +1,21 @@
# Renderdoc
Renderdoc is a widely used open source GPU debugger. Blender has several options
that make debugging with Renderdoc easier.
## Frame capturing
When Blender is compiled with `WITH_RENDERDOC=On` you can start and stop a Renderdoc
frame capture from within Blender's source code.
1. Add `GPU_debug_capture_begin`/`GPU_debug_capture_end` around the code you want to capture
   (see the sketch below).
2. Start Renderdoc and launch Blender from within Renderdoc, using the `--debug-gpu-renderdoc`
   command line parameter.
3. Every time the `GPU_debug_capture_begin/end` pair is reached, a frame capture is recorded
   automatically.
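A sketch of step 1 (the exact signatures may differ between Blender versions):
``` cpp
/* Everything submitted between begin and end ends up in the frame capture. */
GPU_debug_capture_begin();
/* ... draw or dispatch commands to inspect ... */
GPU_debug_capture_end();
```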
## Command grouping
With the command line parameter `--debug-gpu` Blender adds meta-data to the buffers and
commands. This makes it easier to navigate complex frame captures. Command grouping
is automatically enabled when running with the `--debug-gpu-renderdoc` option.

View File

@ -0,0 +1,12 @@
# ShaderBuilder
Using the CMake option `WITH_GPU_BUILDTIME_SHADER_BUILDER=On` will precompile each shader to
make sure that the syntax is correct and that all the generated code compiles and
links with the main shader code.
Only shaders that are part of `SHADER_CREATE_INFOS` and have `.do_static_compilation(true)`
set will be compiled. Enabling this option reduces round-trips when developing
shaders, as the shaders are validated, compiled and linked at build time
on the platform in use.
Shader builder checks against all GPU backends that can run on your system.

View File

@ -0,0 +1,6 @@
# Buffers
## References
- `source/blender/gpu/vulkan/vk_buffer.cc` generic Vulkan buffer.
- `source/blender/gpu/vulkan/vk_index_buffer.cc`, `source/blender/gpu/vulkan/vk_vertex_buffer.cc`, `source/blender/gpu/vulkan/vk_pixel_buffer.cc`, `source/blender/gpu/vulkan/vk_storage_buffer.cc`, `source/blender/gpu/vulkan/vk_uniform_buffer.cc` implementations of the different buffer types.

View File

@ -0,0 +1,12 @@
# Command buffer
## Resource tracking
Submission id:
- Descriptor sets and push constants use the current submission id to determine whether the resources can be recycled.
## Planned changes
- Just-in-time encoding to add resource synchronization between individual commands that are simultaneously in flight.

View File

@ -0,0 +1,131 @@
# Vulkan backend
The `gpu` module has a generic API that can be used to communicate with
different backends like OpenGL, Metal or Vulkan. This section describes
how the Vulkan backend is structured and gives some background on
specific choices made.
## Vulkan in a nutshell
> NOTE: This is not Blender specific and doesn't cover all aspects of Vulkan.
> It is meant as an introduction to how Vulkan is structured, for people
> who have some familiarity with OpenGL.
>
> The links in this section navigate to Blender-specific explanations.
Compared to OpenGL's index, vertex, uniform & storage buffers, Vulkan has only a
single [buffer](buffers.md) type. How the buffer can be used is determined by a
usage bitflag. In short, a buffer is a chunk of memory available on the GPU.
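For example, in plain Vulkan (not Blender code) creating a buffer that can act as a vertex buffer and as a copy destination is expressed through the usage flags:
``` cpp
/* Illustrative Vulkan-only sketch: the usage bitflag defines what the buffer
 * may be used for, instead of having distinct buffer object types. */
VkBufferCreateInfo create_info = {};
create_info.sType = VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO;
create_info.size = 1024;
create_info.usage = VK_BUFFER_USAGE_VERTEX_BUFFER_BIT | VK_BUFFER_USAGE_TRANSFER_DST_BIT;
create_info.sharingMode = VK_SHARING_MODE_EXCLUSIVE;
```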
There are also [Images](images.md). The memory that an image uses on the GPU cannot be
accessed directly from the host. To change the data of an image, an intermediate buffer
is needed. The reason is that images can be stored in a more optimal (performance-wise) layout
in GPU memory, but how they are stored is device/vendor specific. The intermediate buffer hides
the differences between implementations.
To run code on the GPU [pipelines](pipelines.md) and [shaders](shaders.md) are needed.
> NOTE: the term shader doesn't map directly between the definition that
> Vulkan uses and the definition that Blender uses.
>
> The term shader in Blender maps to an OpenGL program, which is the combination
> of all different shader stages that are needed in a pipeline.
>
> In Vulkan a shader (module) is compiled GLSL code that can be used as a stage inside
> a pipeline.
>
> From now on we will use shader stage to refer to a single stage and shader to refer
> to the set of shader stages that work together inside a pipeline.
There are 2 main types of pipelines: one for compute tasks and one for graphical tasks. There
are other pipelines as well, but we ignore them for now as Blender doesn't use them (yet).
A pipeline contains everything that needs to happen on the GPU, logic-wise, during a
single draw or dispatch command. Dispatch commands are used to invoke compute tasks, draw
commands to invoke graphical tasks.
A pipeline has multiple shader stages. A graphics pipeline typically
has a vertex and fragment stage. For each stage a shader module can be assigned.
Although similar to OpenGL, the main difference is that any state change on the GPU
requires a different pipeline. If you need different blending when storing the final
pixel in the framebuffer, you will need another pipeline.
The buffers and images that are needed by a pipeline are organized in descriptor sets. A
descriptor set doesn't contain the buffers and images, it only references existing buffers
and images.
In Vulkan a pipeline can have a small number of descriptor sets (typically up to 4).
They are organized based on how likely the references are to be updated. When a reference
changes, a new descriptor set needs to be created and uploaded, as the previous one may
still be in use by another command.
> NOTE: It is possible to swap out a descriptor set for another one, for example when a specific
> combination of resources is often reused. In that case you might not want to recreate a
> descriptor set and upload it to the GPU.
A [push constant](push_constants.md) is a small buffer (typically 128 or 256 bytes and
device specific) that can be sent with an individual command. Push constants are
typically used for variables in shader stages that are likely to change for each
command. Push constants are faster than using a uniform buffer.
Multiple commands are added to a [command buffer](command_buffer.md) and submitted in one go
to the device command queue. When resources are used by multiple commands inside the command
buffer, synchronization needs to happen. It could be that one command writes to a buffer,
and another command reads from it. Vulkan has different ways to influence this synchronization.
> NOTE: It is often said that when you understand the synchronization, you will understand how
> and why Vulkan is structured the way it is.
Synchronization can happen between devices, queues and command buffers (semaphores and
fences), between GPU and CPU (fences), and between commands and resource usages inside
the same command buffer/queue (command barriers/memory barriers).
> NOTE: We won't go into detail about how they work, as we want to keep this section introductory.
> There are many great Vulkan resources that explain in detail why and how these can
> be used.
>
> In Blender most synchronization is hidden from developers/users; only when
> developing inside the GPU backend do these concepts need to be understood in more depth.
### Random topics
- [VKFrameBuffer](vk_frame_buffer.md) Blender uses top left as the origin of frame buffers, Vulkan uses bottom left.
- [VKVertexBuffer](vk_vertex_buffer.md) Vulkan only supports data conversions that have a
  benefit when using them, e.g. saving memory bandwidth vs processing.
## Naming convention
The Vulkan backend has some additional naming conventions, in order to clarify whether a variable
holds a Vulkan native structure or a GPU module structure. The reason is that Vulkan
uses the prefix `Vk` for its structures and Blender uses `VK` for its own. To make
the code more readable the following naming convention is used:
- Any parameter, attribute or variable that contains a Vulkan native data type must be prefixed
  with `vk_`. Parameters, attributes and variables that contain a GPU module struct must not
  start with `vk_`.
``` cpp title="Naming example"
// Allowed:
VKBuffer buffer;
VkBuffer vk_buffer;
// Not allowed:
VKBuffer vk_buffer;
VkBuffer buffer;
```
## Development tools
### Build options
Several build options are available for development.
- `WITH_VULKAN_BACKEND`: Turn this option on to compile Blender with the Vulkan backend.
- `WITH_VULKAN_GUARDEDALLOC`: Guard driver allocated memory with guardedalloc.
### Validation layers
- Validation Layers
## References
- The vulkan backend is located in `source/blender/gpu/vulkan`.
- Platform specific parts are located in `intern/ghost`. Mainly in `GHOST_ContextVK.cc`.

View File

@ -0,0 +1,9 @@
# VKPipeline
A pipeline combines everything that needs to happen on the GPU during a single draw or dispatch call.
## Graphics pipeline
## Compute pipeline

View File

@ -0,0 +1,61 @@
# Push constants
Push constants are a way to quickly provide a small amount of uniform data to shaders.
They should be much quicker than UBOs, but a huge limitation is the size of the data: the spec
only requires 128 bytes to be available for a push constant range. There are platforms
that provide more (Mesa + RDNA provides 256 bytes).
In Blender some shaders require more data than is available as push constants. As shaders
can also be part of an add-on, we don't have full control over the data size.
> NOTE: As of February 2023 there are 50 shaders in Blender that are between 128 and 256 bytes.
> Most of them are related to point cloud drawing. There are also 4 shaders larger than
> 256 bytes. They are used for widget drawing and Eevee motion blur.
>
> - Widget drawing must be migrated to use uniform buffers.
> - The Eevee motion blur shaders are part of Eevee-legacy and will be replaced
>   with Eevee-next, which doesn't have this issue.
## Storage types
Blender should be able to work even when shaders use more push constants than fit
inside the limits of the physical device. Therefore we provide 2 storage types for
storing/communicating push constants with the shader.
- `StorageType::PUSH_CONSTANTS`: selected when the push constants fit inside the limits
  of the physical device.
- `StorageType::UNIFORM_BUFFER`: selected when the push constants don't fit inside the
  limits of the physical device.
Uniform buffers can handle up to 64 KB of data, but require more memory to store the
same data as they use the std140 memory layout. See below for more information about the
different memory layouts.
The selection of which storage type will be used is made in the shader interface
(`VKShaderInterface`).
``` mermaid
classDiagram
class StorageType{
<<Enumeration>>
NONE
PUSH_CONSTANTS
UNIFORM_BUFFER
}
```
## Memory layout
The push constant memory layout is std430; the uniform buffer layout is std140.
`vk_memory_layout.hh/cc` provides some utilities to modify memory based on the needed memory
layout. A small overview of the differences between std140 and std430:
- In std140 each element inside an array (e.g. `float[]`) is aligned to 16 bytes; in std430
  elements are aligned based on the alignment of the element type, in this case `float`,
  which is aligned to 4 bytes.
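As an illustration (not actual Blender code), the same declaration occupies a different amount of memory depending on the layout:
``` cpp
/* Illustrative sketch: a float[4] member in both layouts.
 *
 * std430 (push constants): element stride 4 bytes  -> 16 bytes for the array.
 * std140 (uniform buffer): element stride 16 bytes -> 64 bytes for the array.
 *
 * The utilities in vk_memory_layout.hh/cc repack the data accordingly before
 * it is uploaded to the GPU. */
struct MyPushConstants {
  float factors[4];
};
```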

View File

@ -0,0 +1,33 @@
# VKFrameBuffer
## Viewport Orientation
Blender uses top left as the origin of the framebuffer. Vulkan uses the bottom left.
When drawing to the on-screen framebuffer each draw command is flipped. This is done
by providing a negative viewport.
```mermaid
classDiagram
class VKFrameBuffer {
-bool flip_viewport_
+VkViewport vk_viewport_get() const
}
```
Framebuffers have an attribute to indicate that all draw/blit operations to this frame
buffer should be flipped.
Draw commands are automatically flipped, as the `VkViewport` created for the graphics
pipeline is flipped. This is done in `VKFrameBuffer::vk_viewport_get()`.
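For reference, a flipped viewport in plain Vulkan looks roughly like this (illustrative, not the exact Blender code):
```cpp
/* A negative height with the y origin moved to the bottom edge makes Vulkan
 * flip the Y axis, matching Blender's top-left framebuffer origin. */
VkViewport viewport = {};
viewport.x = 0.0f;
viewport.y = static_cast<float>(height);
viewport.width = static_cast<float>(width);
viewport.height = -static_cast<float>(height);
viewport.minDepth = 0.0f;
viewport.maxDepth = 1.0f;
```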
When transferring data from framebuffer A to framebuffer B, flipping only needs
to happen when the `flip_viewport_` attributes differ. In that case the `dstOffsets` of the
`VkImageBlit` regions used by `vkCmdBlitImage` are flipped. This is done in `VKFrameBuffer::blit_to`.
## References
* `source/blender/gpu/vulkan/vk_framebuffer.hh`
* `source/blender/gpu/vulkan/vk_framebuffer.cc`

View File

@ -0,0 +1,36 @@
# Vertex Buffer
## Data conversion
Blender can use `GPU_COMP_I32`/`GPU_COMP_U32` together with `GPU_FETCH_INT_TO_FLOAT`
to bind integer data to a float attribute. Vulkan doesn't support this because it adds
no benefit on the GPU.
> NOTE: `GPU_COMP_U8/I8/U16/I16` with `GPU_FETCH_INT_TO_FLOAT` are natively supported
> as they trade a bit of work for reduced memory bandwidth.
Although we should remove these cases from the code-base, we still need to add the data
conversion as add-ons might use them.
``` cpp title="vk_data_conversion.hh"
bool conversion_needed(const GPUVertFormat &vertex_format);
void convert_in_place(void *data, const GPUVertFormat &vertex_format, const uint vertex_len);
```
Based on a `GPUVertFormat` it can be checked whether there are attributes
that require conversion on the host side.
`convert_in_place` only changes the buffer (`data`); the `vertex_format` is still the
original.
When binding the vertex buffers to the shader, the `VkFormat` of those
attributes is also changed.
``` cpp title="vk_common.hh"
VkFormat to_vk_format(const GPUVertCompType type,
const uint32_t size,
const GPUVertFetchMode fetch_mode);
```
`GPU_COMP_I32`/`GPU_COMP_U32` with `GPU_FETCH_INT_TO_FLOAT` would return a
`VK_FORMAT_*_SFLOAT`, as the data should already have been converted to floats by
calling `convert_in_place`.
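A hedged sketch of how these utilities fit together when uploading vertex data (simplified, not the exact implementation):
``` cpp title="Illustrative usage"
/* Convert the host data in place when the format contains attributes that the
 * device cannot fetch directly; the matching VkFormat is then chosen per attribute. */
if (conversion_needed(vertex_format)) {
  convert_in_place(data, vertex_format, vertex_len);
}
```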

View File

@ -0,0 +1,10 @@
# Eevee & Viewport
## GPU module
- [GPU Module](gpu/index.md)
- [Vulkan Backend](gpu/vulkan/index.md)
## Draw Manager

View File

@ -107,8 +107,8 @@ nav:
  - 'rendering/index.md'
  # - 'Nodes & Physics':
  # - 'nodes_and_physics/index.md'
- # - 'Eevee & Viewport':
- # - 'eevee_and_viewport/index.md'
+ - 'Eevee & Viewport':
+ - 'eevee_and_viewport/index.md'
  # - 'Sculpt & Paint':
  # - 'sculpt_and_paint/index.md'
  # - 'Modeling':