Compare commits

...

58 Commits

Author SHA1 Message Date
62499cff23 Fix storing the mask texture in the right brush. 2022-03-21 15:37:27 +01:00
94d7db2cc0 Use brush texture as color texture. 2022-03-21 14:27:51 +01:00
b05d10dde7 Connecting color texture. 2022-03-21 13:35:23 +01:00
1f974d50e7 Split sculpt brush texture eval. 2022-03-21 12:34:39 +01:00
15d8262ec4 Connect brush strength and blend mode. 2022-03-21 12:08:38 +01:00
479b0362cc Combine innerloop data. 2022-03-21 10:43:44 +01:00
ad72b34607 Comments. 2022-03-18 13:59:35 +01:00
b4a9cd89d9 Only store loop indices when extracting. 2022-03-18 12:54:22 +01:00
48094bf864 Some tweaks to the encoding. 2022-03-16 17:24:19 +01:00
8b38096f08 Clear flags after flush. 2022-03-16 14:26:39 +01:00
78935ed2ba Initial support for tiled textures. 2022-03-16 14:10:42 +01:00
5fec40e84b Add support for byte4 buffers. 2022-03-16 11:32:56 +01:00
c59c11d420 Use template callback function. 2022-03-16 10:43:32 +01:00
b42b72298a Remove unused code to simplify the inner loop. 2022-03-16 10:37:28 +01:00
caed948253 Use a templated painting kernel. 2022-03-16 10:31:07 +01:00
145edbe4e0 Rename members + adding operators. 2022-03-16 08:14:54 +01:00
018d9676cb Face set automasking support. 2022-03-15 12:05:43 +01:00
ab1bf1e80e Increase performance of pixel encoding. 2022-03-15 10:16:07 +01:00
aaa5167d99 Remove painting artifacts at border of nodes. 2022-03-15 09:45:55 +01:00
0df700f5bc New extraction method. 2022-03-15 09:11:26 +01:00
d101a25b2f Tagging triangles for painting gives additional performance. 2022-03-14 15:30:01 +01:00
64e535cab5 Use pixel encoding to reduce memory needs. 2022-03-14 14:54:06 +01:00
fc42aca848 Use struct of structs. 2022-03-14 07:54:55 +01:00
f69c0f1e37 Fix compilation error. 2022-03-11 14:53:11 +01:00
414e527cd1 Add support for brush textures. 2022-03-11 11:55:05 +01:00
795b55b417 Use struct of structs. 2022-03-11 11:21:47 +01:00
50d265e231 Rasterization in compute shader. 2022-03-11 10:22:43 +01:00
d0d53daeda Use compute shader for extraction. 2022-03-09 17:24:59 +01:00
74b19336af Split up pixel extraction in multiple files. 2022-03-09 13:18:53 +01:00
209e6a547c Small changes 2022-03-09 11:30:01 +01:00
bf73f07356 changes logging. 2022-03-08 15:58:36 +01:00
893a698759 Improved performance. Still takes minutes to build acceleration structures. 2022-03-08 15:51:33 +01:00
0cd7177420 Use different rasterization. 2022-03-08 15:17:33 +01:00
62294c5b37 Fixes. 2022-03-07 15:55:14 +01:00
4cbeae12fd Texture painting second experiment. 2022-03-07 15:23:30 +01:00
b522eb8207 Add compile unit for second prototype. 2022-03-07 09:36:08 +01:00
5a62ad032c Per fragment brush strength. 2022-03-04 16:55:20 +01:00
b164806606 texture painting. 2022-03-04 14:14:14 +01:00
69f765dcda WIP Initial texture painting. 2022-03-04 12:04:43 +01:00
a92355821e Merge branch 'master' into temp-image-buffer-rasterizer 2022-03-04 08:30:58 +01:00
a41c2a5137 Merge branch 'master' into temp-image-buffer-rasterizer 2022-03-02 16:03:01 +01:00
a23b442991 Added empty line at end of file. 2022-03-02 15:09:41 +01:00
0f784485bd Added blending modes. 2022-03-02 15:08:04 +01:00
1ddde2d83c Split rasterizer target to support other drawing targets. 2022-03-02 13:54:43 +01:00
6afa238ced Reverted default constructor. 2022-03-02 10:17:06 +01:00
82c0dbe793 Fix incorrect buffer operations. 2022-03-02 10:14:13 +01:00
b5176e90bf Improved rasterizing quality. 2022-03-01 15:54:54 +01:00
6d1eaf2d87 Check for quality. 2022-02-21 13:42:21 +01:00
15182f0941 Use a constant fragment adder per triangle. 2022-02-21 09:52:58 +01:00
65b806d5f3 Fix glitches around mid vertex. 2022-02-21 08:09:23 +01:00
2778b0690e Added back check when 2 verts are on the same rasterline. 2022-02-18 17:03:48 +01:00
89cecb088f Improve quality by center pixel clamping. 2022-02-18 16:43:38 +01:00
80144db42d Add check if vertex and fragment shader can be linked. 2022-02-18 12:57:51 +01:00
7a5cde3074 Added documentation. 2022-02-18 12:36:07 +01:00
4e721dcd07 Renamed uv->coords, small fixes. 2022-02-18 10:07:03 +01:00
29a3d61df3 Fixed issue with sorting vertices. 2022-02-16 16:45:55 +01:00
c51692a568 WIP - Testing other winding order. 2022-02-16 14:24:37 +01:00
8839d30989 Image buffer rasterizer using CPP templates.
For the 3d texture brush project we need a fast CPU-based rasterizer.
This is an initial implementation of a rasterizer that is easy to extend
and optimize.

The idea is to implement a rasterizer on top of the ImBuf structure. The
implementation uses CPP templates, so each usage is optimized by the
compiler individually.

A user of the rasterizer can define a vertex shader, a fragment shader,
and the inputs and interface between them, similar to existing concepts
in OpenGL (a usage sketch follows the commit list below).

The rasterizer only supports triangles.

[Future extensions]

Currently the rasterlines are buffered, and when the buffer is full it
is flushed. This is a tradeoff between local memory and branch
prediction. We expect that adding triangles is typically done in a loop
by the caller, but in certain cases we could buffer the input triangles
and take over this responsibility for additional performance.

Configurable clamping. When rasterizing, clamping is done to a corner
of an image pixel. Ideally clamping should consider pixel centers or use
pixel coverage to decide how to clamp during rasterization.

Currently only float4 is supported as a fragment output type; float,
byte and int textures aren't supported.

Rasterline discard function, for cases where rasterlines don't need to
be drawn based on vertex data. A use case could be an influence factor
that is 0 for the whole triangle.

The current implementation is single-threaded. Using multiple threads,
each with their own rasterizer, could lead to render artifacts. We could
provide a scheduler that collects work in buckets based on the
rasterline y.

[Todos]

* Only one winding direction is supported. Should be able to support
  any winding direction.
* Use coord as the name for the fragment position. The current name, UV,
  is too tied to a specific use case.
* Add more test cases.

Differential Revision: https://developer.blender.org/D14126
2022-02-16 12:04:57 +01:00
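
Before the file diffs, a minimal usage sketch of the templated rasterizer described in the commit message above. The class and method names (AbstractVertexShader, AbstractFragmentShader, Rasterizer, AlphaBlendMode, activate_drawing_target, draw_triangle) are taken from the IMB_rasterizer.hh usage visible in the texture-paint diffs below; the shader bodies, namespace and the example triangle are illustrative only, not the project's actual brush logic.

/* Sketch only: assumes the IMB_rasterizer.hh API used in the diffs below
 * (AbstractVertexShader, AbstractFragmentShader, Rasterizer, AlphaBlendMode). */
#include "BLI_math_vec_types.hh"
#include "IMB_imbuf_types.h"
#include "IMB_rasterizer.hh"

namespace blender::rasterizer_example {
using namespace imbuf::rasterizer;

struct VertexInput {
  float3 pos; /* Object-space position, interpolated to the fragment stage. */
  float2 uv;  /* UV coordinate, mapped to image space by the vertex shader. */
};

class ExampleVertexShader : public AbstractVertexShader<VertexInput, float3> {
 public:
  float2 image_size;
  void vertex(const VertexInputType &input, VertexOutputType *r_output) override
  {
    /* The rasterizer walks pixels in image space; the position is passed through. */
    r_output->coord = input.uv * image_size;
    r_output->data = input.pos;
  }
};

class ExampleFragmentShader : public AbstractFragmentShader<float3, float4> {
 public:
  float4 color;
  void fragment(const FragmentInputType & /*input*/, FragmentOutputType *r_output) override
  {
    /* A real brush would evaluate falloff and textures from the input position here. */
    *r_output = color;
  }
};

static void draw_example(ImBuf *image_buffer)
{
  Rasterizer<ExampleVertexShader, ExampleFragmentShader, AlphaBlendMode> rasterizer;
  rasterizer.activate_drawing_target(image_buffer);
  rasterizer.vertex_shader().image_size = float2(image_buffer->x, image_buffer->y);
  rasterizer.fragment_shader().color = float4(1.0f, 0.0f, 0.0f, 1.0f);
  /* Hypothetical triangle; the sculpt code feeds mesh positions and UVs here. */
  VertexInput v1{float3(0.0f, 0.0f, 0.0f), float2(0.1f, 0.1f)};
  VertexInput v2{float3(1.0f, 0.0f, 0.0f), float2(0.9f, 0.1f)};
  VertexInput v3{float3(0.0f, 1.0f, 0.0f), float2(0.5f, 0.9f)};
  rasterizer.draw_triangle(v1, v2, v3);
  rasterizer.deactivate_drawing_target();
}

}  // namespace blender::rasterizer_example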
40 changed files with 3934 additions and 38 deletions

View File

@@ -62,7 +62,7 @@ class VIEW3D_HT_tool_header(Header):
layout.popover("VIEW3D_PT_tools_brush_settings_advanced", text="Brush")
if tool_mode != 'PAINT_WEIGHT':
layout.popover("VIEW3D_PT_tools_brush_texture")
if tool_mode == 'PAINT_TEXTURE':
if tool_mode in ['PAINT_TEXTURE', 'SCULPT']:
layout.popover("VIEW3D_PT_tools_mask_texture")
layout.popover("VIEW3D_PT_tools_brush_stroke")
layout.popover("VIEW3D_PT_tools_brush_falloff")

View File

@@ -633,8 +633,8 @@ class VIEW3D_PT_tools_brush_texture(Panel, View3DPaintPanel):
# TODO, move to space_view3d.py
class VIEW3D_PT_tools_mask_texture(Panel, View3DPaintPanel, TextureMaskPanel):
bl_category = "Tool"
bl_context = ".imagepaint" # dot on purpose (access from topbar)
#bl_category = "Tool"
bl_context = ".paint_common" # dot on purpose (access from topbar)
bl_parent_id = "VIEW3D_PT_tools_brush_settings"
bl_label = "Texture Mask"
bl_options = {'DEFAULT_CLOSED'}
@@ -642,12 +642,17 @@ class VIEW3D_PT_tools_mask_texture(Panel, View3DPaintPanel, TextureMaskPanel):
@classmethod
def poll(cls, context):
settings = cls.paint_settings(context)
return (settings and settings.brush and context.image_paint_object)
if settings is None:
return False
if context.image_paint_object:
return settings.brush
if context.sculpt_object:
return (settings.brush.sculpt_tool == 'TEXTURE_PAINT')
def draw(self, context):
layout = self.layout
brush = context.tool_settings.image_paint.brush
brush = context.tool_settings.sculpt.brush
col = layout.column()
mask_tex_slot = brush.mask_texture_slot

View File

@@ -29,6 +29,7 @@ struct EnumPropertyItem;
struct GHash;
struct GridPaintMask;
struct ImagePool;
struct ImBuf;
struct ListBase;
struct MLoop;
struct MLoopTri;
@@ -620,6 +621,11 @@ typedef struct SculptSession {
struct MDeformVert *dvert_prev;
} wpaint;
struct {
struct Image *image;
struct ImageUser *image_user;
} texture_paint;
/* TODO: identify sculpt-only fields */
// struct { ... } sculpt;
} mode;

View File

@@ -46,6 +46,12 @@ typedef struct {
float (*color)[4];
} PBVHColorBufferNode;
typedef void (*PBVHNodeTexturePaintDataFreeFunc)(void *ptr);
typedef struct PBVHTexturePaintingNode {
void *data;
PBVHNodeTexturePaintDataFreeFunc free_func;
} PBVHTexturePaintingNode;
typedef enum {
PBVH_Leaf = 1 << 0,
@@ -409,6 +415,26 @@ typedef struct PBVHVertexIter {
bool visible;
} PBVHVertexIter;
#ifdef __cplusplus
BLI_INLINE struct BMVert *PBVH_cast_bmvert(void *src)
{
return static_cast<BMVert *>(src);
}
BLI_INLINE float *PBVH_cast_float_ptr(void *src)
{
return static_cast<float *>(src);
}
#else
BLI_INLINE struct BMVert *PBVH_cast_bmvert(void *src)
{
return src;
}
BLI_INLINE float *PBVH_cast_float_ptr(void *src)
{
return src;
}
#endif
void pbvh_vertex_iter_init(PBVH *pbvh, PBVHNode *node, PBVHVertexIter *vi, int mode);
#define BKE_pbvh_vertex_iter_begin(pbvh, node, vi, mode) \
@@ -467,11 +493,11 @@ void pbvh_vertex_iter_init(PBVH *pbvh, PBVHNode *node, PBVHVertexIter *vi, int m
} \
else { \
if (!BLI_gsetIterator_done(&vi.bm_unique_verts)) { \
vi.bm_vert = BLI_gsetIterator_getKey(&vi.bm_unique_verts); \
vi.bm_vert = PBVH_cast_bmvert(BLI_gsetIterator_getKey(&vi.bm_unique_verts)); \
BLI_gsetIterator_step(&vi.bm_unique_verts); \
} \
else { \
vi.bm_vert = BLI_gsetIterator_getKey(&vi.bm_other_verts); \
vi.bm_vert = PBVH_cast_bmvert(BLI_gsetIterator_getKey(&vi.bm_other_verts)); \
BLI_gsetIterator_step(&vi.bm_other_verts); \
} \
vi.visible = !BM_elem_flag_test_bool(vi.bm_vert, BM_ELEM_HIDDEN); \
@@ -481,7 +507,8 @@ void pbvh_vertex_iter_init(PBVH *pbvh, PBVHNode *node, PBVHVertexIter *vi, int m
vi.co = vi.bm_vert->co; \
vi.fno = vi.bm_vert->no; \
vi.index = BM_elem_index_get(vi.bm_vert); \
vi.mask = BM_ELEM_CD_GET_VOID_P(vi.bm_vert, vi.cd_vert_mask_offset); \
vi.mask = PBVH_cast_float_ptr( \
BM_ELEM_CD_GET_VOID_P(vi.bm_vert, vi.cd_vert_mask_offset)); \
}
#define BKE_pbvh_vertex_iter_end \
@@ -527,6 +554,12 @@ const float (*BKE_pbvh_get_vert_normals(const PBVH *pbvh))[3];
PBVHColorBufferNode *BKE_pbvh_node_color_buffer_get(PBVHNode *node);
void BKE_pbvh_node_color_buffer_free(PBVH *pbvh);
/* Texture painting. */
void *BKE_pbvh_node_texture_paint_data_get(const PBVHNode *node);
void BKE_pbvh_node_texture_paint_data_set(PBVHNode *node,
void *texture_paint_data,
PBVHNodeTexturePaintDataFreeFunc free_func);
#ifdef __cplusplus
}
#endif

View File

@@ -679,6 +679,7 @@ void BKE_pbvh_free(PBVH *pbvh)
if (node->bm_other_verts) {
BLI_gset_free(node->bm_other_verts, NULL);
}
BKE_pbvh_node_texture_paint_data_set(node, NULL, NULL);
}
}
@@ -3072,3 +3073,32 @@ void BKE_pbvh_respect_hide_set(PBVH *pbvh, bool respect_hide)
{
pbvh->respect_hide = respect_hide;
}
/* -------------------------------------------------------------------- */
/** \name Texture painting operations
* \{ */
void *BKE_pbvh_node_texture_paint_data_get(const PBVHNode *node)
{
BLI_assert(node->flag & PBVH_Leaf);
return node->texture_painting.data;
}
void BKE_pbvh_node_texture_paint_data_set(PBVHNode *node,
void *texture_paint_data,
PBVHNodeTexturePaintDataFreeFunc free_func)
{
BLI_assert(node->flag & PBVH_Leaf);
if (node->texture_painting.data != NULL) {
node->texture_painting.free_func(node->texture_painting.data);
node->texture_painting.data = NULL;
node->texture_painting.free_func = NULL;
}
if (texture_paint_data != NULL) {
BLI_assert(free_func);
node->texture_painting.data = texture_paint_data;
node->texture_painting.free_func = free_func;
}
}
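
As context for the new BKE_pbvh_node_texture_paint_data_get/set API above: it is only valid on leaf nodes, the setter frees any previously stored payload through its stored free_func before installing the new pointer, and passing NULL/NULL (as BKE_pbvh_free now does) simply clears it. Below is a hypothetical usage sketch mirroring the NodeData/free_func pattern used by the sculpt code later in this diff; the ExamplePaintData struct and helper names are illustrative.

/* Sketch: attach an illustrative per-node payload to a PBVH leaf node.
 * Assumes BKE_pbvh.h and MEM_guardedalloc.h are included. */
struct ExamplePaintData {
  int painted_pixels = 0;
};

static void example_paint_data_free(void *ptr)
{
  MEM_delete(static_cast<ExamplePaintData *>(ptr));
}

static ExamplePaintData *example_paint_data_ensure(PBVHNode *node)
{
  ExamplePaintData *data = static_cast<ExamplePaintData *>(
      BKE_pbvh_node_texture_paint_data_get(node));
  if (data == nullptr) {
    data = MEM_new<ExamplePaintData>(__func__);
    /* Stores the pointer together with its destructor; any previous payload
     * would have been freed through its own free_func. */
    BKE_pbvh_node_texture_paint_data_set(node, data, example_paint_data_free);
  }
  return data;
}

/* Clearing a node (as done in BKE_pbvh_free):
 * BKE_pbvh_node_texture_paint_data_set(node, NULL, NULL); */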

View File

@@ -93,6 +93,7 @@ struct PBVHNode {
/* Used to store the brush color during a stroke and composite it over the original color */
PBVHColorBufferNode color_buffer;
PBVHTexturePaintingNode texture_painting;
};
typedef enum {

View File

@@ -563,6 +563,8 @@ template<typename T, int Size> struct vec_base : public vec_struct_base<T, Size>
}
};
using ushort2 = vec_base<uint16_t, 2>;
using int2 = vec_base<int32_t, 2>;
using int3 = vec_base<int32_t, 3>;
using int4 = vec_base<int32_t, 4>;

View File

@@ -283,6 +283,8 @@ extern "C" {
(_VA_ELEM15(v, a, b, c, d, e, f, g, h, i, j, k, l, m, n) || _VA_ELEM2(v, o))
#define _VA_ELEM17(v, a, b, c, d, e, f, g, h, i, j, k, l, m, n, o, p) \
(_VA_ELEM16(v, a, b, c, d, e, f, g, h, i, j, k, l, m, n, o) || _VA_ELEM2(v, p))
#define _VA_ELEM18(v, a, b, c, d, e, f, g, h, i, j, k, l, m, n, o, p, q) \
(_VA_ELEM17(v, a, b, c, d, e, f, g, h, i, j, k, l, m, n, o, p) || _VA_ELEM2(v, q))
/* clang-format on */
/* reusable ELEM macro */

View File

@@ -65,6 +65,11 @@ set(SRC
sculpt_paint_color.c
sculpt_pose.c
sculpt_smooth.c
#sculpt_texture_paint_b.cc
sculpt_texture_paint_d.cc
#sculpt_texture_paint_pixel_extraction_b.cc
#sculpt_texture_paint_pixel_extraction_c.cc
sculpt_texture_paint_pixel_extraction_d.cc
sculpt_transform.c
sculpt_undo.c
sculpt_uv.c

View File

@@ -2235,6 +2235,7 @@ static float brush_strength(const Sculpt *sd,
case SCULPT_TOOL_SLIDE_RELAX:
return alpha * pressure * overlap * feather * 2.0f;
case SCULPT_TOOL_PAINT:
case SCULPT_TOOL_TEXTURE_PAINT:
final_pressure = pressure * pressure;
return final_pressure * overlap * feather;
case SCULPT_TOOL_SMEAR:
@@ -2324,33 +2325,27 @@ static float brush_strength(const Sculpt *sd,
}
}
float SCULPT_brush_strength_factor(SculptSession *ss,
const Brush *br,
const float brush_point[3],
const float len,
const float vno[3],
const float fno[3],
const float mask,
const int vertex_index,
const int thread_id)
float SCULPT_brush_texture_eval(SculptSession *ss,
const Brush *brush,
const MTex *mtex,
const float brush_point[3],
const int thread_id,
float r_rgba[4])
{
StrokeCache *cache = ss->cache;
const Scene *scene = cache->vc->scene;
const MTex *mtex = &br->mtex;
float avg = 1.0f;
float rgba[4];
float point[3];
sub_v3_v3v3(point, brush_point, cache->plane_offset);
if (!mtex->tex) {
avg = 1.0f;
return 1.0f;
}
else if (mtex->brush_map_mode == MTEX_MAP_MODE_3D) {
/* Get strength by feeding the vertex location directly into a texture. */
avg = BKE_brush_sample_tex_3d(scene, br, point, rgba, 0, ss->tex_pool);
return BKE_brush_sample_tex_3d(scene, brush, point, r_rgba, 0, ss->tex_pool);
}
else if (ss->texcache) {
float avg;
float symm_point[3], point_2d[2];
/* Quiet warnings. */
float x = 0.0f, y = 0.0f;
@@ -2377,21 +2372,41 @@ float SCULPT_brush_strength_factor(SculptSession *ss,
x = symm_point[0];
y = symm_point[1];
x *= br->mtex.size[0];
y *= br->mtex.size[1];
x *= mtex->size[0];
y *= mtex->size[1];
x += br->mtex.ofs[0];
y += br->mtex.ofs[1];
x += mtex->ofs[0];
y += mtex->ofs[1];
avg = paint_get_tex_pixel(&br->mtex, x, y, ss->tex_pool, thread_id);
avg = paint_get_tex_pixel(mtex, x, y, ss->tex_pool, thread_id);
avg += br->texture_sample_bias;
avg += brush->texture_sample_bias;
return avg;
}
else {
const float point_3d[3] = {point_2d[0], point_2d[1], 0.0f};
avg = BKE_brush_sample_tex_3d(scene, br, point_3d, rgba, 0, ss->tex_pool);
return BKE_brush_sample_tex_3d(scene, brush, point_3d, r_rgba, 0, ss->tex_pool);
}
}
return 1.0f;
}
float SCULPT_brush_strength_factor_custom_automask(SculptSession *ss,
const Brush *br,
const MTex *mtex,
const float brush_point[3],
const float len,
const float vno[3],
const float fno[3],
const float mask,
const float automask_factor,
const int thread_id)
{
StrokeCache *cache = ss->cache;
float avg = 1.0f;
float rgba[4];
avg = SCULPT_brush_texture_eval(ss, br, mtex, brush_point, thread_id, rgba);
/* Hardness. */
float final_len = len;
@@ -2416,11 +2431,27 @@ float SCULPT_brush_strength_factor(SculptSession *ss,
avg *= 1.0f - mask;
/* Auto-masking. */
avg *= SCULPT_automasking_factor_get(cache->automasking, ss, vertex_index);
avg *= automask_factor;
return avg;
}
float SCULPT_brush_strength_factor(SculptSession *ss,
const Brush *br,
const float brush_point[3],
const float len,
const float vno[3],
const float fno[3],
const float mask,
const int vertex_index,
const int thread_id)
{
const float automask_factor = SCULPT_automasking_factor_get(
ss->cache->automasking, ss, vertex_index);
return SCULPT_brush_strength_factor_custom_automask(
ss, br, &br->mtex, brush_point, len, vno, fno, mask, automask_factor, thread_id);
}
bool SCULPT_search_sphere_cb(PBVHNode *node, void *data_v)
{
SculptSearchSphereData *data = data_v;
@@ -3171,6 +3202,10 @@ static void do_brush_action(Sculpt *sd, Object *ob, Brush *brush, UnifiedPaintSe
return;
}
if (brush->sculpt_tool == SCULPT_TOOL_TEXTURE_PAINT && type != PBVH_FACES) {
return;
}
/* Build a list of all nodes that are potentially within the brush's area of influence */
if (SCULPT_tool_needs_all_pbvh_nodes(brush)) {
@@ -3216,6 +3251,11 @@ static void do_brush_action(Sculpt *sd, Object *ob, Brush *brush, UnifiedPaintSe
}
}
if (brush->sculpt_tool == SCULPT_TOOL_TEXTURE_PAINT) {
/* TODO: should perhaps move to a higher level; doing this per step is not needed. */
SCULPT_init_texture_paint(ob);
}
/* For anchored brushes with spherical falloff, we start off with zero radius, thus we have no
* PBVH nodes on the first brush step. */
if (totnode ||
@@ -3387,6 +3427,9 @@ static void do_brush_action(Sculpt *sd, Object *ob, Brush *brush, UnifiedPaintSe
case SCULPT_TOOL_SMEAR:
SCULPT_do_smear_brush(sd, ob, nodes, totnode);
break;
case SCULPT_TOOL_TEXTURE_PAINT:
SCULPT_do_texture_paint_brush(sd, ob, nodes, totnode);
break;
}
if (!ELEM(brush->sculpt_tool, SCULPT_TOOL_SMOOTH, SCULPT_TOOL_MASK) &&
@@ -3935,6 +3978,8 @@ static const char *sculpt_tool_name(Sculpt *sd)
return "Paint Brush";
case SCULPT_TOOL_SMEAR:
return "Smear Brush";
case SCULPT_TOOL_TEXTURE_PAINT:
return "Texture Paint Brush";
}
return "Sculpting";
@@ -4589,7 +4634,8 @@ static bool sculpt_needs_connectivity_info(const Sculpt *sd,
(brush->sculpt_tool == SCULPT_TOOL_SLIDE_RELAX) ||
(brush->sculpt_tool == SCULPT_TOOL_CLOTH) || (brush->sculpt_tool == SCULPT_TOOL_SMEAR) ||
(brush->sculpt_tool == SCULPT_TOOL_DRAW_FACE_SETS) ||
(brush->sculpt_tool == SCULPT_TOOL_DISPLACEMENT_SMEAR));
(brush->sculpt_tool == SCULPT_TOOL_DISPLACEMENT_SMEAR) ||
(brush->sculpt_tool == SCULPT_TOOL_TEXTURE_PAINT));
}
void SCULPT_stroke_modifiers_check(const bContext *C, Object *ob, const Brush *brush)
@@ -5035,6 +5081,14 @@ void SCULPT_flush_update_step(bContext *C, SculptUpdateType update_flags)
multires_mark_as_modified(depsgraph, ob, MULTIRES_COORDS_MODIFIED);
}
if (update_flags & SCULPT_UPDATE_TEXTURE) {
/* When using the texture paint brush only the texture changes. The geometry and shading should
* not be touched. */
SCULPT_flush_texture_paint(ob);
ED_region_tag_redraw(region);
return;
}
DEG_id_tag_update(&ob->id, ID_RECALC_SHADING);
/* Only current viewport matters, slower update for all viewports will
@@ -5250,6 +5304,9 @@ static void sculpt_stroke_update_step(bContext *C,
else if (ELEM(brush->sculpt_tool, SCULPT_TOOL_PAINT, SCULPT_TOOL_SMEAR)) {
SCULPT_flush_update_step(C, SCULPT_UPDATE_COLOR);
}
else if (brush->sculpt_tool == SCULPT_TOOL_TEXTURE_PAINT) {
SCULPT_flush_update_step(C, SCULPT_UPDATE_TEXTURE);
}
else {
SCULPT_flush_update_step(C, SCULPT_UPDATE_COORDS);
}

View File

@@ -20,6 +20,10 @@
#include "BLI_gsqueue.h"
#include "BLI_threads.h"
#ifdef __cplusplus
extern "C" {
#endif
struct AutomaskingCache;
struct KeyBlock;
struct Object;
@@ -39,6 +43,7 @@ typedef enum SculptUpdateType {
SCULPT_UPDATE_MASK = 1 << 1,
SCULPT_UPDATE_VISIBILITY = 1 << 2,
SCULPT_UPDATE_COLOR = 1 << 3,
SCULPT_UPDATE_TEXTURE = 1 << 4,
} SculptUpdateType;
typedef struct SculptCursorGeometryInfo {
@@ -1119,6 +1124,12 @@ const float *SCULPT_brush_frontface_normal_from_falloff_shape(SculptSession *ss,
/**
* Return a multiplier for brush strength on a particular vertex.
*/
float SCULPT_brush_texture_eval(SculptSession *ss,
const Brush *brush,
const MTex *mtex,
const float brush_point[3],
const int thread_id,
float r_rgba[4]);
float SCULPT_brush_strength_factor(struct SculptSession *ss,
const struct Brush *br,
const float point[3],
@@ -1128,6 +1139,16 @@ float SCULPT_brush_strength_factor(struct SculptSession *ss,
float mask,
int vertex_index,
int thread_id);
float SCULPT_brush_strength_factor_custom_automask(struct SculptSession *ss,
const struct Brush *br,
const MTex *mtex,
const float point[3],
float len,
const float vno[3],
const float fno[3],
float mask,
float automask_factor,
int thread_id);
/**
* Tilts a normal by the x and y tilt values using the view axis.
@@ -1608,6 +1629,10 @@ void SCULPT_do_draw_face_sets_brush(Sculpt *sd, Object *ob, PBVHNode **nodes, in
/* Paint Brush. */
void SCULPT_do_paint_brush(Sculpt *sd, Object *ob, PBVHNode **nodes, int totnode);
void SCULPT_do_texture_paint_brush(Sculpt *sd, Object *ob, PBVHNode **nodes, int totnode);
void SCULPT_init_texture_paint(Object *ob);
void SCULPT_flush_texture_paint(Object *ob);
void SCULPT_extract_pixels(Object *ob, PBVHNode **nodes, int totnode);
/* Smear Brush. */
void SCULPT_do_smear_brush(Sculpt *sd, Object *ob, PBVHNode **nodes, int totnode);
@@ -1719,3 +1744,7 @@ void SCULPT_bmesh_topology_rake(
void SCULPT_OT_brush_stroke(struct wmOperatorType *ot);
/* end sculpt_ops.c */
#ifdef __cplusplus
}
#endif

View File

@@ -0,0 +1,211 @@
/* SPDX-License-Identifier: GPL-2.0-or-later
* Copyright 2022 Blender Foundation. All rights reserved. */
/** \file
* \ingroup edsculpt
*/
#include "DNA_material_types.h"
#include "DNA_mesh_types.h"
#include "DNA_meshdata_types.h"
#include "DNA_scene_types.h"
#include "DNA_windowmanager_types.h"
#include "BKE_brush.h"
#include "BKE_context.h"
#include "BKE_customdata.h"
#include "BKE_image.h"
#include "BKE_material.h"
#include "BKE_mesh.h"
#include "BKE_mesh_mapping.h"
#include "BKE_pbvh.h"
#include "PIL_time_utildefines.h"
#include "BLI_task.h"
#include "BLI_vector.hh"
#include "IMB_rasterizer.hh"
#include "WM_types.h"
#include "bmesh.h"
#include "ED_uvedit.h"
#include "sculpt_intern.h"
namespace blender::ed::sculpt_paint::texture_paint {
using namespace imbuf::rasterizer;
struct VertexInput {
float3 pos;
float2 uv;
VertexInput(float3 pos, float2 uv) : pos(pos), uv(uv)
{
}
};
class VertexShader : public AbstractVertexShader<VertexInput, float3> {
public:
float2 image_size;
void vertex(const VertexInputType &input, VertexOutputType *r_output) override
{
r_output->coord = input.uv * image_size;
r_output->data = input.pos;
}
};
class FragmentShader : public AbstractFragmentShader<float3, float4> {
public:
float4 color;
const Brush *brush = nullptr;
SculptBrushTest test;
SculptBrushTestFn sculpt_brush_test_sq_fn;
void fragment(const FragmentInputType &input, FragmentOutputType *r_output) override
{
copy_v4_v4(*r_output, color);
float strength = sculpt_brush_test_sq_fn(&test, input) ?
BKE_brush_curve_strength(brush, sqrtf(test.dist), test.radius) :
0.0f;
(*r_output)[3] *= strength;
}
};
using RasterizerType = Rasterizer<VertexShader, FragmentShader, AlphaBlendMode>;
struct TexturePaintingUserData {
Object *ob;
Brush *brush;
PBVHNode **nodes;
Vector<rctf> region_to_update;
};
static void do_task_cb_ex(void *__restrict userdata,
const int n,
const TaskParallelTLS *__restrict UNUSED(tls))
{
TexturePaintingUserData *data = static_cast<TexturePaintingUserData *>(userdata);
Object *ob = data->ob;
SculptSession *ss = ob->sculpt;
const Brush *brush = data->brush;
ImBuf *drawing_target = ss->mode.texture_paint.drawing_target;
RasterizerType rasterizer;
Mesh *mesh = static_cast<Mesh *>(ob->data);
MLoopUV *ldata_uv = static_cast<MLoopUV *>(CustomData_get_layer(&mesh->ldata, CD_MLOOPUV));
if (ldata_uv == nullptr) {
return;
}
rasterizer.activate_drawing_target(drawing_target);
rasterizer.vertex_shader().image_size = float2(drawing_target->x, drawing_target->y);
srgb_to_linearrgb_v3_v3(rasterizer.fragment_shader().color, brush->rgb);
FragmentShader &fragment_shader = rasterizer.fragment_shader();
fragment_shader.color[3] = 1.0f;
fragment_shader.brush = brush;
fragment_shader.sculpt_brush_test_sq_fn = SCULPT_brush_test_init_with_falloff_shape(
ss, &fragment_shader.test, brush->falloff_shape);
PBVHVertexIter vd;
MVert *mvert = SCULPT_mesh_deformed_mverts_get(ss);
rctf &region_to_update = data->region_to_update[n];
BLI_rctf_init_minmax(&region_to_update);
BKE_pbvh_vertex_iter_begin (ss->pbvh, data->nodes[n], vd, PBVH_ITER_UNIQUE) {
MeshElemMap *vert_map = &ss->pmap[vd.index];
for (int j = 0; j < ss->pmap[vd.index].count; j++) {
const MPoly *p = &ss->mpoly[vert_map->indices[j]];
if (p->totloop < 3) {
continue;
}
float poly_center[3];
const MLoop *loopstart = &ss->mloop[p->loopstart];
BKE_mesh_calc_poly_center(p, &ss->mloop[p->loopstart], mvert, poly_center);
if (!fragment_shader.sculpt_brush_test_sq_fn(&fragment_shader.test, poly_center)) {
continue;
}
for (int triangle = 0; triangle < p->totloop - 2; triangle++) {
const int v1_index = loopstart[0].v;
const int v2_index = loopstart[triangle + 1].v;
const int v3_index = loopstart[triangle + 2].v;
const int v1_loop_index = p->loopstart;
const int v2_loop_index = p->loopstart + triangle + 1;
const int v3_loop_index = p->loopstart + triangle + 2;
VertexInput v1(mvert[v1_index].co, ldata_uv[v1_loop_index].uv);
VertexInput v2(mvert[v2_index].co, ldata_uv[v2_loop_index].uv);
VertexInput v3(mvert[v3_index].co, ldata_uv[v3_loop_index].uv);
rasterizer.draw_triangle(v1, v2, v3);
BLI_rctf_do_minmax_v(&region_to_update, v1.uv);
BLI_rctf_do_minmax_v(&region_to_update, v2.uv);
BLI_rctf_do_minmax_v(&region_to_update, v3.uv);
}
}
}
BKE_pbvh_vertex_iter_end;
rasterizer.deactivate_drawing_target();
}
extern "C" {
void SCULPT_do_texture_paint_brush(Sculpt *sd, Object *ob, PBVHNode **nodes, int totnode)
{
SculptSession *ss = ob->sculpt;
Brush *brush = BKE_paint_brush(&sd->paint);
void *lock;
Image *image;
ImageUser *image_user;
ED_object_get_active_image(ob, 1, &image, &image_user, nullptr, nullptr);
if (image == nullptr) {
return;
}
ImBuf *image_buffer = BKE_image_acquire_ibuf(image, image_user, &lock);
if (image_buffer == nullptr) {
return;
}
ss->mode.texture_paint.drawing_target = image_buffer;
TexturePaintingUserData data = {nullptr};
data.ob = ob;
data.brush = brush;
data.nodes = nodes;
data.region_to_update.resize(totnode);
TaskParallelSettings settings;
BKE_pbvh_parallel_range_settings(&settings, true, totnode);
TIMEIT_START(texture_painting);
BLI_task_parallel_range(0, totnode, &data, do_task_cb_ex, &settings);
TIMEIT_END(texture_painting);
for (int i = 0; i < totnode; i++) {
rcti region_to_update;
region_to_update.xmin = data.region_to_update[i].xmin * image_buffer->x;
region_to_update.xmax = data.region_to_update[i].xmax * image_buffer->x;
region_to_update.ymin = data.region_to_update[i].ymin * image_buffer->y;
region_to_update.ymax = data.region_to_update[i].ymax * image_buffer->y;
/* TODO: Tiled images. */
BKE_image_partial_update_mark_region(
image, static_cast<ImageTile *>(image->tiles.first), image_buffer, &region_to_update);
}
BKE_image_release_ibuf(image, image_buffer, lock);
ss->mode.texture_paint.drawing_target = nullptr;
}
void SCULPT_flush_texture_paint(Object *UNUSED(ob))
{
}
}
} // namespace blender::ed::sculpt_paint::texture_paint

View File

@@ -0,0 +1,199 @@
/* SPDX-License-Identifier: GPL-2.0-or-later
* Copyright 2022 Blender Foundation. All rights reserved. */
/** \file
* \ingroup edsculpt
*/
#include "DNA_material_types.h"
#include "DNA_mesh_types.h"
#include "DNA_meshdata_types.h"
#include "DNA_scene_types.h"
#include "DNA_windowmanager_types.h"
#include "BKE_brush.h"
#include "BKE_context.h"
#include "BKE_customdata.h"
#include "BKE_image.h"
#include "BKE_material.h"
#include "BKE_mesh.h"
#include "BKE_mesh_mapping.h"
#include "BKE_pbvh.h"
#include "PIL_time_utildefines.h"
#include "BLI_math_color_blend.h"
#include "BLI_task.h"
#include "BLI_vector.hh"
#include "IMB_rasterizer.hh"
#include "WM_types.h"
#include "bmesh.h"
#include "ED_uvedit.h"
#include "sculpt_intern.h"
#include "sculpt_texture_paint_intern.hh"
namespace blender::ed::sculpt_paint::texture_paint {
namespace painting {
static void do_task_cb_ex(void *__restrict userdata,
const int n,
const TaskParallelTLS *__restrict tls)
{
TexturePaintingUserData *data = static_cast<TexturePaintingUserData *>(userdata);
Object *ob = data->ob;
SculptSession *ss = ob->sculpt;
ImBuf *drawing_target = ss->mode.texture_paint.drawing_target;
const Brush *brush = data->brush;
PBVHNode *node = data->nodes[n];
NodeData *node_data = static_cast<NodeData *>(BKE_pbvh_node_texture_paint_data_get(node));
BLI_assert(node_data != nullptr);
const int thread_id = BLI_task_parallel_thread_id(tls);
SculptBrushTest test;
SculptBrushTestFn sculpt_brush_test_sq_fn = SCULPT_brush_test_init_with_falloff_shape(
ss, &test, brush->falloff_shape);
float3 brush_srgb(brush->rgb[0], brush->rgb[1], brush->rgb[2]);
float4 brush_linear;
srgb_to_linearrgb_v3_v3(brush_linear, brush_srgb);
brush_linear[3] = 1.0f;
MVert *mvert = SCULPT_mesh_deformed_mverts_get(ss);
const float brush_strength = ss->cache->bstrength;
for (int i = 0; i < node_data->pixels.size(); i++) {
const float3 &local_pos = node_data->pixels.local_position(i, mvert);
if (!sculpt_brush_test_sq_fn(&test, local_pos)) {
continue;
}
float4 &color = node_data->pixels.color(i);
const int2 &image_coord = node_data->pixels.image_coord(i);
/* Although the pixel is currently loaded each time, I expect additional performance
* improvements when moving the flushing higher up the call stack. */
if (!node_data->pixels.is_dirty(i)) {
int pixel_index = image_coord.y * drawing_target->x + image_coord.x;
copy_v4_v4(color, &drawing_target->rect_float[pixel_index * 4]);
}
// const float falloff_strength = BKE_brush_curve_strength(brush, sqrtf(test.dist),
// test.radius);
const float3 normal(0.0f, 0.0f, 0.0f);
const float3 face_normal(0.0f, 0.0f, 0.0f);
const float mask = 0.0f;
const float falloff_strength = SCULPT_brush_strength_factor(
ss, brush, local_pos, sqrtf(test.dist), normal, face_normal, mask, 0, thread_id);
blend_color_interpolate_float(color, color, brush_linear, falloff_strength * brush_strength);
node_data->pixels.mark_dirty(i);
BLI_rcti_do_minmax_v(&node_data->dirty_region, image_coord);
node_data->flags.dirty = true;
}
}
} // namespace painting
struct ImageData {
void *lock = nullptr;
Image *image = nullptr;
ImageUser *image_user = nullptr;
ImBuf *image_buffer = nullptr;
~ImageData()
{
BKE_image_release_ibuf(image, image_buffer, lock);
}
static bool init_active_image(Object *ob, ImageData *r_image_data)
{
ED_object_get_active_image(
ob, 1, &r_image_data->image, &r_image_data->image_user, nullptr, nullptr);
if (r_image_data->image == nullptr) {
return false;
}
r_image_data->image_buffer = BKE_image_acquire_ibuf(
r_image_data->image, r_image_data->image_user, &r_image_data->lock);
if (r_image_data->image_buffer == nullptr) {
return false;
}
return true;
}
};
extern "C" {
void SCULPT_do_texture_paint_brush(Sculpt *sd, Object *ob, PBVHNode **nodes, int totnode)
{
SculptSession *ss = ob->sculpt;
Brush *brush = BKE_paint_brush(&sd->paint);
ImageData image_data;
if (!ImageData::init_active_image(ob, &image_data)) {
return;
}
ss->mode.texture_paint.drawing_target = image_data.image_buffer;
TexturePaintingUserData data = {nullptr};
data.ob = ob;
data.brush = brush;
data.nodes = nodes;
TaskParallelSettings settings;
BKE_pbvh_parallel_range_settings(&settings, true, totnode);
TIMEIT_START(texture_painting);
BLI_task_parallel_range(0, totnode, &data, painting::do_task_cb_ex, &settings);
TIMEIT_END(texture_painting);
ss->mode.texture_paint.drawing_target = nullptr;
}
void SCULPT_init_texture_paint(Object *ob)
{
SculptSession *ss = ob->sculpt;
ImageData image_data;
if (!ImageData::init_active_image(ob, &image_data)) {
return;
}
ss->mode.texture_paint.drawing_target = image_data.image_buffer;
PBVHNode **nodes;
int totnode;
BKE_pbvh_search_gather(ss->pbvh, NULL, NULL, &nodes, &totnode);
SCULPT_extract_pixels(ob, nodes, totnode);
MEM_freeN(nodes);
ss->mode.texture_paint.drawing_target = nullptr;
}
void SCULPT_flush_texture_paint(Object *ob)
{
ImageData image_data;
if (!ImageData::init_active_image(ob, &image_data)) {
return;
}
SculptSession *ss = ob->sculpt;
PBVHNode **nodes;
int totnode;
BKE_pbvh_search_gather(ss->pbvh, NULL, NULL, &nodes, &totnode);
for (int n = 0; n < totnode; n++) {
PBVHNode *node = nodes[n];
NodeData *data = static_cast<NodeData *>(BKE_pbvh_node_texture_paint_data_get(node));
if (data == nullptr) {
continue;
}
if (data->flags.dirty) {
data->flush(*image_data.image_buffer);
data->mark_region(*image_data.image, *image_data.image_buffer);
}
}
MEM_freeN(nodes);
}
}
} // namespace blender::ed::sculpt_paint::texture_paint

View File

@@ -0,0 +1,497 @@
/* SPDX-License-Identifier: GPL-2.0-or-later
* Copyright 2022 Blender Foundation. All rights reserved. */
/** \file
* \ingroup edsculpt
*/
#include "DNA_material_types.h"
#include "DNA_mesh_types.h"
#include "DNA_meshdata_types.h"
#include "DNA_scene_types.h"
#include "DNA_windowmanager_types.h"
#include "BKE_brush.h"
#include "BKE_context.h"
#include "BKE_customdata.h"
#include "BKE_image.h"
#include "BKE_material.h"
#include "BKE_mesh.h"
#include "BKE_mesh_mapping.h"
#include "BKE_pbvh.h"
#include "PIL_time_utildefines.h"
#include "BLI_math_color_blend.h"
#include "BLI_math_vec_types.hh"
#include "BLI_string_ref.hh"
#include "BLI_task.h"
#include "BLI_vector.hh"
#include "IMB_colormanagement.h"
#include "IMB_imbuf.h"
#include "IMB_imbuf_types.h"
#include "IMB_imbuf_wrappers.hh"
#include "WM_types.h"
#include "bmesh.h"
#include "ED_uvedit.h"
#include "sculpt_intern.h"
#include "sculpt_texture_paint_intern.hh"
namespace blender::ed::sculpt_paint::texture_paint {
namespace painting {
/** Reading and writing to image buffer with 4 float channels. */
class ImageBufferFloat4 {
private:
int pixel_offset;
public:
void set_image_position(ImBuf *image_buffer, ushort2 image_pixel_position)
{
pixel_offset = int(image_pixel_position.y) * image_buffer->x + int(image_pixel_position.x);
}
void goto_next_pixel()
{
pixel_offset += 1;
}
float4 read_pixel(ImBuf *image_buffer) const
{
return &image_buffer->rect_float[pixel_offset * 4];
}
void store_pixel(ImBuf *image_buffer, const float4 pixel_data) const
{
copy_v4_v4(&image_buffer->rect_float[pixel_offset * 4], pixel_data);
}
const char *get_colorspace_name(ImBuf *image_buffer)
{
return IMB_colormanagement_get_float_colorspace(image_buffer);
}
};
/** Reading and writing to image buffer with 4 byte channels. */
class ImageBufferByte4 {
private:
int pixel_offset;
public:
void set_image_position(ImBuf *image_buffer, ushort2 image_pixel_position)
{
pixel_offset = int(image_pixel_position.y) * image_buffer->x + int(image_pixel_position.x);
}
void goto_next_pixel()
{
pixel_offset += 1;
}
float4 read_pixel(ImBuf *image_buffer) const
{
float4 result;
rgba_uchar_to_float(result,
static_cast<const uchar *>(
static_cast<const void *>(&(image_buffer->rect[pixel_offset]))));
return result;
}
void store_pixel(ImBuf *image_buffer, const float4 pixel_data) const
{
rgba_float_to_uchar(
static_cast<uchar *>(static_cast<void *>(&image_buffer->rect[pixel_offset])), pixel_data);
}
const char *get_colorspace_name(ImBuf *image_buffer)
{
return IMB_colormanagement_get_rect_colorspace(image_buffer);
}
};
template<typename ImagePixelAccessor> class PaintingKernel {
ImagePixelAccessor image_accessor;
SculptSession *ss;
const Brush *brush;
MTex mtex;
const int thread_id;
const MVert *mvert;
float4 brush_color;
float brush_strength;
SculptBrushTestFn brush_test_fn;
SculptBrushTest test;
/* Pointer to the last used image buffer to detect when buffers are switched. */
void *last_used_image_buffer_ptr = nullptr;
const char *last_used_color_space = nullptr;
public:
explicit PaintingKernel(SculptSession *ss,
const Brush *brush,
const int thread_id,
const MVert *mvert)
: ss(ss), brush(brush), thread_id(thread_id), mvert(mvert)
{
init_brush_strength();
init_brush_test();
mtex = brush->mtex;
}
bool paint(const Triangles &triangles, const PixelsPackage &encoded_pixels, ImBuf *image_buffer)
{
if (image_buffer != last_used_image_buffer_ptr) {
last_used_image_buffer_ptr = image_buffer;
init_brush_color(image_buffer);
}
image_accessor.set_image_position(image_buffer, encoded_pixels.start_image_coordinate);
const TrianglePaintInput triangle = triangles.get_paint_input(encoded_pixels.triangle_index);
Pixel pixel = get_start_pixel(triangle, encoded_pixels);
const Pixel add_pixel = get_delta_pixel(triangle, encoded_pixels, pixel);
bool pixels_painted = false;
for (int x = 0; x < encoded_pixels.num_pixels; x++) {
if (!brush_test_fn(&test, pixel.pos)) {
pixel += add_pixel;
image_accessor.goto_next_pixel();
continue;
}
if (mtex.tex) {
SCULPT_brush_texture_eval(ss, brush, &mtex, pixel.pos, thread_id, brush_color);
}
float4 color = image_accessor.read_pixel(image_buffer);
const float3 normal(0.0f, 0.0f, 0.0f);
const float3 face_normal(0.0f, 0.0f, 0.0f);
const float mask = 0.0f;
const float falloff_strength = SCULPT_brush_strength_factor_custom_automask(
ss,
brush,
&brush->mask_mtex,
pixel.pos,
sqrtf(test.dist),
normal,
face_normal,
mask,
triangle.automasking_factor,
thread_id);
float4 paint_color = brush_color * falloff_strength * brush_strength;
float4 buffer_color;
blend_color_mix_float(buffer_color, color, paint_color);
buffer_color *= brush->alpha;
IMB_blend_color_float(color, color, buffer_color, static_cast<IMB_BlendMode>(brush->blend));
image_accessor.store_pixel(image_buffer, color);
pixels_painted = true;
image_accessor.goto_next_pixel();
pixel += add_pixel;
}
return pixels_painted;
}
private:
void init_brush_color(ImBuf *image_buffer)
{
/* TODO: use StringRefNull. */
const char *to_colorspace = image_accessor.get_colorspace_name(image_buffer);
if (last_used_color_space == to_colorspace) {
return;
}
copy_v4_fl4(brush_color, brush->rgb[0], brush->rgb[1], brush->rgb[2], 1.0);
/* TODO: unsure. brush color is stored in float sRGB. */
const char *from_colorspace = IMB_colormanagement_role_colorspace_name_get(
COLOR_ROLE_COLOR_PICKING);
ColormanageProcessor *cm_processor = IMB_colormanagement_colorspace_processor_new(
from_colorspace, to_colorspace);
IMB_colormanagement_processor_apply_v4(cm_processor, brush_color);
IMB_colormanagement_processor_free(cm_processor);
last_used_color_space = to_colorspace;
}
void init_brush_strength()
{
brush_strength = ss->cache->bstrength;
}
void init_brush_test()
{
brush_test_fn = SCULPT_brush_test_init_with_falloff_shape(ss, &test, brush->falloff_shape);
}
/** Extract the starting pixel from the given encoded_pixels belonging to the triangle. */
Pixel get_start_pixel(const TrianglePaintInput &triangle,
const PixelsPackage &encoded_pixels) const
{
return init_pixel(triangle, encoded_pixels.start_barycentric_coord.decode());
}
/**
* Extract the delta pixel that will be used to advance a Pixel instance to the next pixel. */
Pixel get_delta_pixel(const TrianglePaintInput &triangle,
const PixelsPackage &encoded_pixels,
const Pixel &start_pixel) const
{
Pixel result = init_pixel(triangle,
encoded_pixels.start_barycentric_coord.decode() +
triangle.add_barycentric_coord_x);
return result - start_pixel;
}
Pixel init_pixel(const TrianglePaintInput &triangle, const float3 weights) const
{
const int3 &vert_indices = triangle.vert_indices;
Pixel result;
interp_v3_v3v3v3(result.pos,
mvert[vert_indices[0]].co,
mvert[vert_indices[1]].co,
mvert[vert_indices[2]].co,
weights);
return result;
}
};
static void do_vertex_brush_test(void *__restrict userdata,
const int n,
const TaskParallelTLS *__restrict UNUSED(tls))
{
TexturePaintingUserData *data = static_cast<TexturePaintingUserData *>(userdata);
const Object *ob = data->ob;
SculptSession *ss = ob->sculpt;
const Brush *brush = data->brush;
SculptBrushTest test;
SculptBrushTestFn sculpt_brush_test_sq_fn = SCULPT_brush_test_init_with_falloff_shape(
ss, &test, brush->falloff_shape);
PBVHNode *node = data->nodes[n];
PBVHVertexIter vd;
BKE_pbvh_vertex_iter_begin (ss->pbvh, node, vd, PBVH_ITER_UNIQUE) {
if (sculpt_brush_test_sq_fn(&test, vd.co)) {
data->vertex_brush_tests[vd.index] = true;
}
data->automask_factors[vd.index] = SCULPT_automasking_factor_get(
ss->cache->automasking, ss, vd.index);
}
BKE_pbvh_vertex_iter_end;
}
static void do_task_cb_ex(void *__restrict userdata,
const int n,
const TaskParallelTLS *__restrict tls)
{
TexturePaintingUserData *data = static_cast<TexturePaintingUserData *>(userdata);
Object *ob = data->ob;
SculptSession *ss = ob->sculpt;
const Brush *brush = data->brush;
PBVHNode *node = data->nodes[n];
NodeData *node_data = static_cast<NodeData *>(BKE_pbvh_node_texture_paint_data_get(node));
BLI_assert(node_data != nullptr);
/* Propagate the vertex brush test to the triangles. This should be extended to also handle
* brushes that only overlap edges and faces. */
std::vector<bool> triangle_brush_test_results(node_data->triangles.size());
int last_poly_index = -1;
Triangles &triangles = node_data->triangles;
for (int triangle_index = 0; triangle_index < triangles.size(); triangle_index++) {
TrianglePaintInput &triangle = triangles.get_paint_input(triangle_index);
int3 &vert_indices = triangle.vert_indices;
for (int i = 0; i < 3; i++) {
triangle_brush_test_results[triangle_index] = triangle_brush_test_results[triangle_index] ||
data->vertex_brush_tests[vert_indices[i]];
}
const int poly_index = triangles.get_poly_index(triangle_index);
if (last_poly_index != poly_index) {
last_poly_index = poly_index;
float automasking_factor = 1.0f;
for (int t_index = triangle_index;
t_index < triangles.size() && triangles.get_poly_index(t_index) == poly_index;
t_index++) {
for (int i = 0; i < 3; i++) {
automasking_factor = min_ff(automasking_factor, data->automask_factors[vert_indices[i]]);
}
}
for (int t_index = triangle_index;
t_index < triangles.size() && triangles.get_poly_index(t_index) == poly_index;
t_index++) {
triangles.set_automasking_factor(t_index, automasking_factor);
}
}
}
const int thread_id = BLI_task_parallel_thread_id(tls);
MVert *mvert = SCULPT_mesh_deformed_mverts_get(ss);
PaintingKernel<ImageBufferFloat4> kernel_float4(ss, brush, thread_id, mvert);
PaintingKernel<ImageBufferByte4> kernel_byte4(ss, brush, thread_id, mvert);
Image *image = ss->mode.texture_paint.image;
ImageUser image_user = *ss->mode.texture_paint.image_user;
/* TODO: should we lock? */
void *image_lock;
LISTBASE_FOREACH (ImageTile *, tile, &image->tiles) {
imbuf::ImageTileWrapper image_tile(tile);
image_user.tile = image_tile.get_tile_number();
TileData *tile_data = node_data->find_tile_data(image_tile);
if (tile_data == nullptr) {
/* This node doesn't paint on this tile. */
continue;
}
ImBuf *image_buffer = BKE_image_acquire_ibuf(image, &image_user, &image_lock);
if (image_buffer == nullptr) {
continue;
}
for (const PixelsPackage &encoded_pixels : tile_data->encoded_pixels) {
if (!triangle_brush_test_results[encoded_pixels.triangle_index]) {
continue;
}
bool pixels_painted = false;
if (image_buffer->rect_float != nullptr) {
pixels_painted = kernel_float4.paint(triangles, encoded_pixels, image_buffer);
}
else {
pixels_painted = kernel_byte4.paint(triangles, encoded_pixels, image_buffer);
}
if (pixels_painted) {
int2 start_image_coord(encoded_pixels.start_image_coordinate.x,
encoded_pixels.start_image_coordinate.y);
BLI_rcti_do_minmax_v(&tile_data->dirty_region, start_image_coord);
BLI_rcti_do_minmax_v(&tile_data->dirty_region,
start_image_coord + int2(encoded_pixels.num_pixels + 1, 0));
node_data->flags.dirty = true;
}
}
BKE_image_release_ibuf(image, image_buffer, image_lock);
node_data->flags.dirty |= tile_data->flags.dirty;
}
}
} // namespace painting
struct ImageData {
void *lock = nullptr;
Image *image = nullptr;
ImageUser *image_user = nullptr;
~ImageData()
{
}
static bool init_active_image(Object *ob, ImageData *r_image_data)
{
ED_object_get_active_image(
ob, 1, &r_image_data->image, &r_image_data->image_user, nullptr, nullptr);
if (r_image_data->image == nullptr) {
return false;
}
return true;
}
};
extern "C" {
void SCULPT_do_texture_paint_brush(Sculpt *sd, Object *ob, PBVHNode **nodes, int totnode)
{
SculptSession *ss = ob->sculpt;
Brush *brush = BKE_paint_brush(&sd->paint);
ImageData image_data;
if (!ImageData::init_active_image(ob, &image_data)) {
return;
}
ss->mode.texture_paint.image = image_data.image;
ss->mode.texture_paint.image_user = image_data.image_user;
Mesh *mesh = (Mesh *)ob->data;
TexturePaintingUserData data = {nullptr};
data.ob = ob;
data.brush = brush;
data.nodes = nodes;
data.vertex_brush_tests = std::vector<bool>(mesh->totvert);
data.automask_factors = Vector<float>(mesh->totvert);
TaskParallelSettings settings;
BKE_pbvh_parallel_range_settings(&settings, true, totnode);
TIMEIT_START(texture_painting);
BLI_task_parallel_range(0, totnode, &data, painting::do_vertex_brush_test, &settings);
BLI_task_parallel_range(0, totnode, &data, painting::do_task_cb_ex, &settings);
TIMEIT_END(texture_painting);
ss->mode.texture_paint.image = nullptr;
ss->mode.texture_paint.image_user = nullptr;
}
void SCULPT_init_texture_paint(Object *ob)
{
SculptSession *ss = ob->sculpt;
ImageData image_data;
if (!ImageData::init_active_image(ob, &image_data)) {
return;
}
ss->mode.texture_paint.image = image_data.image;
ss->mode.texture_paint.image_user = image_data.image_user;
PBVHNode **nodes;
int totnode;
BKE_pbvh_search_gather(ss->pbvh, NULL, NULL, &nodes, &totnode);
SCULPT_extract_pixels(ob, nodes, totnode);
MEM_freeN(nodes);
ss->mode.texture_paint.image = nullptr;
ss->mode.texture_paint.image_user = nullptr;
}
void SCULPT_flush_texture_paint(Object *ob)
{
ImageData image_data;
if (!ImageData::init_active_image(ob, &image_data)) {
return;
}
SculptSession *ss = ob->sculpt;
PBVHNode **nodes;
int totnode;
BKE_pbvh_search_gather(ss->pbvh, NULL, NULL, &nodes, &totnode);
for (int n = 0; n < totnode; n++) {
PBVHNode *node = nodes[n];
NodeData *data = static_cast<NodeData *>(BKE_pbvh_node_texture_paint_data_get(node));
if (data == nullptr) {
continue;
}
if (data->flags.dirty) {
Image *image = image_data.image;
ImageUser image_user = *image_data.image_user;
void *image_lock;
LISTBASE_FOREACH (ImageTile *, tile, &image_data.image->tiles) {
imbuf::ImageTileWrapper image_tile(tile);
image_user.tile = image_tile.get_tile_number();
ImBuf *image_buffer = BKE_image_acquire_ibuf(image, &image_user, &image_lock);
if (image_buffer == nullptr) {
continue;
}
data->mark_region(*image_data.image, image_tile, *image_buffer);
BKE_image_release_ibuf(image, image_buffer, image_lock);
}
data->flags.dirty = false;
}
}
MEM_freeN(nodes);
}
}
} // namespace blender::ed::sculpt_paint::texture_paint

View File

@@ -0,0 +1,312 @@
#include "IMB_imbuf_wrappers.hh"
namespace blender::ed::sculpt_paint::texture_paint {
struct Polygon {
};
/* Stores a barycentric coordinate in a float2. */
struct EncodedBarycentricCoord {
float2 encoded;
EncodedBarycentricCoord &operator=(const float3 decoded)
{
encoded = float2(decoded.x, decoded.y);
return *this;
}
float3 decode() const
{
return float3(encoded.x, encoded.y, 1.0 - encoded.x - encoded.y);
}
};
/**
* Loop indices. Only stores 2 indices; the third one is always `loop_indices[1] + 1`.
* The second could be delta encoded with the first loop index.
*/
struct EncodedLoopIndices {
int2 encoded;
EncodedLoopIndices(const int3 decoded) : encoded(decoded.x, decoded.y)
{
}
int3 decode() const
{
return int3(encoded.x, encoded.y, encoded.y + 1);
}
};
struct Triangle {
int3 loop_indices;
int3 vert_indices;
int poly_index;
float3 add_barycentric_coord_x;
float automasking_factor;
};
struct TrianglePaintInput {
int3 vert_indices;
float3 add_barycentric_coord_x;
float automasking_factor;
TrianglePaintInput(const Triangle &triangle)
: vert_indices(triangle.vert_indices),
add_barycentric_coord_x(triangle.add_barycentric_coord_x),
automasking_factor(triangle.automasking_factor)
{
}
};
struct Triangles {
Vector<TrianglePaintInput> paint_input;
Vector<EncodedLoopIndices> loop_indices;
Vector<int> poly_indices;
public:
void append(const Triangle &triangle)
{
paint_input.append(TrianglePaintInput(triangle));
loop_indices.append(triangle.loop_indices);
poly_indices.append(triangle.poly_index);
}
int3 get_loop_indices(const int index) const
{
return loop_indices[index].decode();
}
int get_poly_index(const int index)
{
return poly_indices[index];
}
TrianglePaintInput &get_paint_input(const int index)
{
return paint_input[index];
}
const TrianglePaintInput &get_paint_input(const int index) const
{
return paint_input[index];
}
void set_automasking_factor(const int index, const float automasking_factor)
{
get_paint_input(index).automasking_factor = automasking_factor;
}
void cleanup_after_init()
{
loop_indices.clear();
}
uint64_t size() const
{
return paint_input.size();
}
uint64_t mem_size() const
{
return loop_indices.size() * sizeof(EncodedLoopIndices) +
paint_input.size() * sizeof(TrianglePaintInput) + poly_indices.size() * sizeof(int);
}
};
/**
* Encode multiple sequential pixels to reduce memory footprint.
*
* Memory footprint can be reduced more.
* v Only store 2 barycentric coordinates. The third one can be extracted
* from the 2 known ones.
* - start_image_coordinate can be delta encoded with the previous package.
* The initial coordinate could be stored at the triangle level or in the PBVH.
* v num_pixels could be delta encoded or at least be a short.
* X triangle index could be delta encoded.
* - encode everything using variable bits per structure.
* first byte would indicate the number of bytes used per element.
* - only store triangle index when it changes.
*/
struct PixelsPackage {
/** Barycentric coordinate of the first encoded pixel. */
EncodedBarycentricCoord start_barycentric_coord;
/** Image coordinate starting of the first encoded pixel. */
ushort2 start_image_coordinate;
/** Number of sequential pixels encoded in this package. */
ushort num_pixels;
/** Reference to the pbvh triangle index. */
ushort triangle_index;
};
struct Pixel {
/** object local position of the pixel on the surface. */
float3 pos;
Pixel &operator+=(const Pixel &other)
{
pos += other.pos;
return *this;
}
Pixel operator-(const Pixel &other) const
{
Pixel result;
result.pos = pos - other.pos;
return result;
}
};
struct PixelData {
int2 pixel_pos;
float3 local_pos;
float3 weights;
int3 vertices;
float4 content;
};
struct Pixels {
Vector<int2> image_coordinates;
Vector<float3> local_positions;
Vector<int3> vertices;
Vector<float3> weights;
Vector<float4> colors;
std::vector<bool> dirty;
uint64_t size() const
{
return image_coordinates.size();
}
bool is_dirty(uint64_t index) const
{
return dirty[index];
}
const int2 &image_coord(uint64_t index) const
{
return image_coordinates[index];
}
const float3 &local_position(uint64_t index) const
{
return local_positions[index];
}
const float3 local_position(uint64_t index, MVert *mvert) const
{
const int3 &verts = vertices[index];
const float3 &weight = weights[index];
const float3 &pos1 = mvert[verts.x].co;
const float3 &pos2 = mvert[verts.y].co;
const float3 &pos3 = mvert[verts.z].co;
float3 local_pos;
interp_v3_v3v3v3(local_pos, pos1, pos2, pos3, weight);
return local_pos;
}
const float4 &color(uint64_t index) const
{
return colors[index];
}
float4 &color(uint64_t index)
{
return colors[index];
}
void clear_dirty()
{
dirty = std::vector<bool>(size(), false);
}
void mark_dirty(uint64_t index)
{
dirty[index] = true;
}
void append(PixelData &pixel)
{
image_coordinates.append(pixel.pixel_pos);
local_positions.append(pixel.local_pos);
colors.append(pixel.content);
weights.append(pixel.weights);
vertices.append(pixel.vertices);
dirty.push_back(false);
}
};
struct TileData {
short tile_number;
struct {
bool dirty : 1;
} flags;
/* Dirty region of the tile in image space. */
rcti dirty_region;
Vector<PixelsPackage> encoded_pixels;
TileData()
{
flags.dirty = false;
BLI_rcti_init_minmax(&dirty_region);
}
void mark_region(Image &image, const imbuf::ImageTileWrapper &image_tile, ImBuf &image_buffer)
{
print_rcti_id(&dirty_region);
BKE_image_partial_update_mark_region(
&image, image_tile.image_tile, &image_buffer, &dirty_region);
BLI_rcti_init_minmax(&dirty_region);
flags.dirty = false;
}
};
struct NodeData {
struct {
bool dirty : 1;
} flags;
rctf uv_region;
Vector<TileData> tiles;
Triangles triangles;
NodeData()
{
flags.dirty = false;
}
void init_pixels_rasterization(Object *ob, PBVHNode *node, ImBuf *image_buffer);
TileData *find_tile_data(const imbuf::ImageTileWrapper &image_tile)
{
for (TileData &tile : tiles) {
if (tile.tile_number == image_tile.get_tile_number()) {
return &tile;
}
}
return nullptr;
}
void mark_region(Image &image, const imbuf::ImageTileWrapper &image_tile, ImBuf &image_buffer)
{
TileData *tile = find_tile_data(image_tile);
if (tile) {
tile->mark_region(image, image_tile, image_buffer);
}
}
static void free_func(void *instance)
{
NodeData *node_data = static_cast<NodeData *>(instance);
MEM_delete(node_data);
}
};
struct TexturePaintingUserData {
Object *ob;
Brush *brush;
PBVHNode **nodes;
std::vector<bool> vertex_brush_tests;
Vector<float> automask_factors;
};
} // namespace blender::ed::sculpt_paint::texture_paint
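
To make the pixel encoding in this header concrete: a PixelsPackage stores only the first pixel of a horizontal run (its barycentric and image coordinates) plus a run length, and the per-triangle add_barycentric_coord_x delta reconstructs every following pixel. The sketch below, assuming the types declared above are available, expands one package back into object-space positions; it mirrors what PaintingKernel does via get_start_pixel/get_delta_pixel in the painting diff earlier. The helper name is hypothetical.

namespace blender::ed::sculpt_paint::texture_paint {

/* Hypothetical helper: expand one PixelsPackage into object-space positions.
 * Assumes the declarations from the header above and BLI_math_vector.h. */
static void decode_package_positions(const Triangles &triangles,
                                     const PixelsPackage &package,
                                     const MVert *mvert,
                                     Vector<float3> &r_positions)
{
  const TrianglePaintInput &triangle = triangles.get_paint_input(package.triangle_index);
  const int3 &verts = triangle.vert_indices;
  /* Barycentric weights of the first pixel in the run. */
  float3 weights = package.start_barycentric_coord.decode();
  for (int i = 0; i < package.num_pixels; i++) {
    float3 pos;
    /* Interpolate the triangle corners with the current weights. */
    interp_v3_v3v3v3(pos, mvert[verts.x].co, mvert[verts.y].co, mvert[verts.z].co, weights);
    r_positions.append(pos);
    /* One pixel to the right in image space is a constant step in barycentric space. */
    weights += triangle.add_barycentric_coord_x;
  }
}

}  // namespace blender::ed::sculpt_paint::texture_paint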

View File

@@ -0,0 +1,233 @@
#include "DNA_material_types.h"
#include "DNA_mesh_types.h"
#include "DNA_meshdata_types.h"
#include "DNA_scene_types.h"
#include "DNA_windowmanager_types.h"
#include "BKE_brush.h"
#include "BKE_context.h"
#include "BKE_customdata.h"
#include "BKE_image.h"
#include "BKE_material.h"
#include "BKE_mesh.h"
#include "BKE_mesh_mapping.h"
#include "BKE_pbvh.h"
#include "PIL_time_utildefines.h"
#include "BLI_task.h"
#include "BLI_vector.hh"
#include "IMB_rasterizer.hh"
#include "WM_types.h"
#include "bmesh.h"
#include "ED_uvedit.h"
#include "sculpt_intern.h"
#include "sculpt_texture_paint_intern.hh"
namespace blender::ed::sculpt_paint::texture_paint {
namespace rasterization {
using namespace imbuf::rasterizer;
struct VertexInput {
float3 pos;
float2 uv;
VertexInput(float3 pos, float2 uv) : pos(pos), uv(uv)
{
}
};
class VertexShader : public AbstractVertexShader<VertexInput, float3> {
public:
float2 image_size;
void vertex(const VertexInputType &input, VertexOutputType *r_output) override
{
r_output->coord = input.uv * image_size;
r_output->data = input.pos;
}
};
struct FragmentOutput {
float3 local_pos;
};
class FragmentShader : public AbstractFragmentShader<float3, FragmentOutput> {
public:
ImBuf *image_buffer;
public:
void fragment(const FragmentInputType &input, FragmentOutputType *r_output) override
{
r_output->local_pos = input;
}
};
struct NodeDataPair {
ImBuf *image_buffer;
NodeData *node_data;
struct {
/* Rasterizer doesn't support glCoord yet, so for now we just store them in a runtime section.
*/
int2 last_known_pixel_pos;
} runtime;
};
class AddPixel : public AbstractBlendMode<FragmentOutput, NodeDataPair> {
public:
void blend(NodeDataPair *dest, const FragmentOutput &source) const override
{
PixelData new_pixel;
new_pixel.local_pos = source.local_pos;
new_pixel.pixel_pos = dest->runtime.last_known_pixel_pos;
const int pixel_offset = new_pixel.pixel_pos[1] * dest->image_buffer->x +
new_pixel.pixel_pos[0];
new_pixel.content = float4(dest->image_buffer->rect_float[pixel_offset * 4]);
new_pixel.flags.dirty = false;
dest->node_data->pixels.append(new_pixel);
dest->runtime.last_known_pixel_pos[0] += 1;
}
};
class NodeDataDrawingTarget : public AbstractDrawingTarget<NodeDataPair, NodeDataPair> {
private:
NodeDataPair *active_ = nullptr;
public:
uint64_t get_width() const
{
return active_->image_buffer->x;
}
uint64_t get_height() const
{
return active_->image_buffer->y;
};
NodeDataPair *get_pixel_ptr(uint64_t x, uint64_t y)
{
active_->runtime.last_known_pixel_pos = int2(x, y);
return active_;
};
int64_t get_pixel_stride() const
{
return 0;
};
bool has_active_target() const
{
return active_ != nullptr;
}
void activate(NodeDataPair *instance)
{
active_ = instance;
};
void deactivate()
{
active_ = nullptr;
}
};
using RasterizerType = Rasterizer<VertexShader, FragmentShader, AddPixel, NodeDataDrawingTarget>;
static void init_rasterization_task_cb_ex(void *__restrict userdata,
const int n,
const TaskParallelTLS *__restrict UNUSED(tls))
{
TexturePaintingUserData *data = static_cast<TexturePaintingUserData *>(userdata);
Object *ob = data->ob;
SculptSession *ss = ob->sculpt;
PBVHNode *node = data->nodes[n];
NodeData *node_data = static_cast<NodeData *>(BKE_pbvh_node_texture_paint_data_get(node));
// TODO: reinit when texturing on different image?
if (node_data != nullptr) {
return;
}
TIMEIT_START(init_texture_paint_for_node);
node_data = MEM_new<NodeData>(__func__);
node_data->init_pixels_rasterization(ob, node, ss->mode.texture_paint.drawing_target);
BKE_pbvh_node_texture_paint_data_set(node, node_data, NodeData::free_func);
TIMEIT_END(init_texture_paint_for_node);
}
static void init_using_rasterization(Object *ob, int totnode, PBVHNode **nodes)
{
TIMEIT_START(init_using_rasterization);
TexturePaintingUserData data = {nullptr};
data.ob = ob;
data.nodes = nodes;
TaskParallelSettings settings;
BKE_pbvh_parallel_range_settings(&settings, true, totnode);
BLI_task_parallel_range(0, totnode, &data, init_rasterization_task_cb_ex, &settings);
TIMEIT_END(init_using_rasterization);
}
} // namespace rasterization
void NodeData::init_pixels_rasterization(Object *ob, PBVHNode *node, ImBuf *image_buffer)
{
using namespace rasterization;
Mesh *mesh = static_cast<Mesh *>(ob->data);
MLoopUV *ldata_uv = static_cast<MLoopUV *>(CustomData_get_layer(&mesh->ldata, CD_MLOOPUV));
if (ldata_uv == nullptr) {
return;
}
RasterizerType rasterizer;
NodeDataPair node_data_pair;
rasterizer.vertex_shader().image_size = float2(image_buffer->x, image_buffer->y);
rasterizer.fragment_shader().image_buffer = image_buffer;
node_data_pair.node_data = this;
node_data_pair.image_buffer = image_buffer;
rasterizer.activate_drawing_target(&node_data_pair);
SculptSession *ss = ob->sculpt;
MVert *mvert = SCULPT_mesh_deformed_mverts_get(ss);
PBVHVertexIter vd;
BKE_pbvh_vertex_iter_begin (ss->pbvh, node, vd, PBVH_ITER_UNIQUE) {
MeshElemMap *vert_map = &ss->pmap[vd.index];
for (int j = 0; j < ss->pmap[vd.index].count; j++) {
const MPoly *p = &ss->mpoly[vert_map->indices[j]];
if (p->totloop < 3) {
continue;
}
const MLoop *loopstart = &ss->mloop[p->loopstart];
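/* Triangulate the polygon as a fan anchored at its first loop: e.g. a quad (totloop == 4)
 * yields the triangles (0, 1, 2) and (0, 2, 3). */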
for (int triangle = 0; triangle < p->totloop - 2; triangle++) {
const int v1_index = loopstart[0].v;
const int v2_index = loopstart[triangle + 1].v;
const int v3_index = loopstart[triangle + 2].v;
const int v1_loop_index = p->loopstart;
const int v2_loop_index = p->loopstart + triangle + 1;
const int v3_loop_index = p->loopstart + triangle + 2;
VertexInput v1(mvert[v1_index].co, ldata_uv[v1_loop_index].uv);
VertexInput v2(mvert[v2_index].co, ldata_uv[v2_loop_index].uv);
VertexInput v3(mvert[v3_index].co, ldata_uv[v3_loop_index].uv);
rasterizer.draw_triangle(v1, v2, v3);
}
}
}
BKE_pbvh_vertex_iter_end;
rasterizer.deactivate_drawing_target();
}
} // namespace blender::ed::sculpt_paint::texture_paint
extern "C" {
void SCULPT_extract_pixels(Object *ob, PBVHNode **nodes, int totnode)
{
TIMEIT_START(extract_pixels);
blender::ed::sculpt_paint::texture_paint::rasterization::init_using_rasterization(
ob, totnode, nodes);
TIMEIT_END(extract_pixels);
}
}

View File

@@ -0,0 +1,202 @@
#include "DNA_material_types.h"
#include "DNA_mesh_types.h"
#include "DNA_meshdata_types.h"
#include "DNA_scene_types.h"
#include "DNA_windowmanager_types.h"
#include "BKE_brush.h"
#include "BKE_context.h"
#include "BKE_customdata.h"
#include "BKE_image.h"
#include "BKE_material.h"
#include "BKE_mesh.h"
#include "BKE_mesh_mapping.h"
#include "BKE_pbvh.h"
#include "PIL_time_utildefines.h"
#include "BLI_task.h"
#include "BLI_vector.hh"
#include "IMB_rasterizer.hh"
#include "WM_types.h"
#include "bmesh.h"
#include "ED_uvedit.h"
#include "sculpt_intern.h"
#include "sculpt_texture_paint_intern.hh"
namespace blender::ed::sculpt_paint::texture_paint::barycentric_extraction {
struct BucketEntry {
PBVHNode *node;
const MPoly *poly;
rctf uv_bounds;
};
struct Bucket {
static const int Size = 16;
Vector<BucketEntry> entries;
rctf bounds;
};
static bool init_using_intersection(SculptSession *ss,
Bucket &bucket,
ImBuf *image_buffer,
MVert *mvert,
MLoopUV *ldata_uv,
float2 uv,
int2 xy)
{
const int pixel_offset = xy[1] * image_buffer->x + xy[0];
for (BucketEntry &entry : bucket.entries) {
if (!BLI_rctf_isect_pt_v(&entry.uv_bounds, uv)) {
continue;
}
const MPoly *p = entry.poly;
const MLoop *loopstart = &ss->mloop[p->loopstart];
for (int triangle = 0; triangle < p->totloop - 2; triangle++) {
const int v1_loop_index = p->loopstart;
const int v2_loop_index = p->loopstart + triangle + 1;
const int v3_loop_index = p->loopstart + triangle + 2;
const float2 v1_uv = ldata_uv[v1_loop_index].uv;
const float2 v2_uv = ldata_uv[v2_loop_index].uv;
const float2 v3_uv = ldata_uv[v3_loop_index].uv;
float3 weights;
barycentric_weights_v2(v1_uv, v2_uv, v3_uv, uv, weights);
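/* The UV lies inside the triangle only when all three barycentric weights are within [0, 1]. */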
if (weights[0] < 0.0 || weights[0] > 1.0 || weights[1] < 0.0 || weights[1] > 1.0 ||
weights[2] < 0.0 || weights[2] > 1.0) {
continue;
}
const int v1_index = loopstart[0].v;
const int v2_index = loopstart[triangle + 1].v;
const int v3_index = loopstart[triangle + 2].v;
const float3 v1_pos = mvert[v1_index].co;
const float3 v2_pos = mvert[v2_index].co;
const float3 v3_pos = mvert[v3_index].co;
float3 local_pos;
interp_v3_v3v3v3(local_pos, v1_pos, v2_pos, v3_pos, weights);
PixelData new_pixel;
new_pixel.local_pos = local_pos;
new_pixel.pixel_pos = xy;
new_pixel.vertices = int3(v1_index, v2_index, v3_index);
new_pixel.weights = weights;
new_pixel.content = float4(&image_buffer->rect_float[pixel_offset * 4]);
PBVHNode *node = entry.node;
NodeData *node_data = static_cast<NodeData *>(BKE_pbvh_node_texture_paint_data_get(node));
node_data->pixels.append(new_pixel);
return true;
}
}
return false;
}
static void init_using_intersection(Object *ob, int totnode, PBVHNode **nodes)
{
Vector<PBVHNode *> nodes_to_initialize;
for (int n = 0; n < totnode; n++) {
PBVHNode *node = nodes[n];
NodeData *node_data = static_cast<NodeData *>(BKE_pbvh_node_texture_paint_data_get(node));
if (node_data != nullptr) {
continue;
}
node_data = MEM_new<NodeData>(__func__);
BKE_pbvh_node_texture_paint_data_set(node, node_data, NodeData::free_func);
BLI_rctf_init_minmax(&node_data->uv_region);
nodes_to_initialize.append(node);
}
if (nodes_to_initialize.size() == 0) {
return;
}
TIMEIT_START(extract_pixels);
Mesh *mesh = static_cast<Mesh *>(ob->data);
MLoopUV *ldata_uv = static_cast<MLoopUV *>(CustomData_get_layer(&mesh->ldata, CD_MLOOPUV));
if (ldata_uv == nullptr) {
return;
}
SculptSession *ss = ob->sculpt;
MVert *mvert = SCULPT_mesh_deformed_mverts_get(ss);
ImBuf *image_buffer = ss->mode.texture_paint.drawing_target;
int pixels_added = 0;
Bucket bucket;
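/* Sweep the image in tiles of Bucket::Size x Bucket::Size pixels. For each tile, collect the
 * polygons of the nodes to initialize whose UV bounds intersect the tile, then test only those
 * polygons against the pixels inside the tile. */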
for (int y_bucket = 0; y_bucket < image_buffer->y; y_bucket += Bucket::Size) {
printf("%d: %d pixels added.\n", y_bucket, pixels_added);
for (int x_bucket = 0; x_bucket < image_buffer->x; x_bucket += Bucket::Size) {
bucket.entries.clear();
BLI_rctf_init(&bucket.bounds,
float(x_bucket) / image_buffer->x,
float(x_bucket + Bucket::Size) / image_buffer->x,
float(y_bucket) / image_buffer->y,
float(y_bucket + Bucket::Size) / image_buffer->y);
// print_rctf_id(&bucket.bounds);
for (int n = 0; n < nodes_to_initialize.size(); n++) {
PBVHNode *node = nodes_to_initialize[n];
PBVHVertexIter vd;
NodeData *node_data = static_cast<NodeData *>(BKE_pbvh_node_texture_paint_data_get(node));
if (BLI_rctf_is_valid(&node_data->uv_region)) {
if (!BLI_rctf_isect(&bucket.bounds, &node_data->uv_region, nullptr)) {
continue;
}
}
BKE_pbvh_vertex_iter_begin (ss->pbvh, node, vd, PBVH_ITER_UNIQUE) {
MeshElemMap *vert_map = &ss->pmap[vd.index];
for (int j = 0; j < ss->pmap[vd.index].count; j++) {
const MPoly *p = &ss->mpoly[vert_map->indices[j]];
if (p->totloop < 3) {
continue;
}
rctf poly_bound;
BLI_rctf_init_minmax(&poly_bound);
for (int l = 0; l < p->totloop; l++) {
const int v_loop_index = p->loopstart + l;
const float2 v_uv = ldata_uv[v_loop_index].uv;
BLI_rctf_do_minmax_v(&poly_bound, v_uv);
BLI_rctf_do_minmax_v(&node_data->uv_region, v_uv);
}
if (BLI_rctf_isect(&bucket.bounds, &poly_bound, nullptr)) {
BucketEntry entry;
entry.node = node;
entry.poly = p;
entry.uv_bounds = poly_bound;
bucket.entries.append(entry);
}
}
}
BKE_pbvh_vertex_iter_end;
}
// printf("Loaded %ld entries in bucket\n", bucket.entries.size());
if (bucket.entries.size() == 0) {
continue;
}
for (int y = y_bucket; y < image_buffer->y && y < y_bucket + Bucket::Size; y++) {
for (int x = x_bucket; x < image_buffer->x && x < x_bucket + Bucket::Size; x++) {
float2 uv(float(x) / image_buffer->x, float(y) / image_buffer->y);
if (init_using_intersection(ss, bucket, image_buffer, mvert, ldata_uv, uv, int2(x, y))) {
pixels_added++;
}
}
}
}
}
TIMEIT_END(extract_pixels);
}
} // namespace blender::ed::sculpt_paint::texture_paint::barycentric_extraction
extern "C" {
void SCULPT_extract_pixels(Object *ob, PBVHNode **nodes, int totnode)
{
blender::ed::sculpt_paint::texture_paint::barycentric_extraction::init_using_intersection(
ob, totnode, nodes);
}
}

View File

@@ -0,0 +1,210 @@
#include "DNA_material_types.h"
#include "DNA_mesh_types.h"
#include "DNA_meshdata_types.h"
#include "DNA_scene_types.h"
#include "DNA_windowmanager_types.h"
#include "BKE_brush.h"
#include "BKE_context.h"
#include "BKE_customdata.h"
#include "BKE_image.h"
#include "BKE_material.h"
#include "BKE_mesh.h"
#include "BKE_mesh_mapping.h"
#include "BKE_pbvh.h"
#include "PIL_time_utildefines.h"
#include "BLI_task.h"
#include "BLI_vector.hh"
#include "IMB_rasterizer.hh"
#include "WM_types.h"
#include "bmesh.h"
#include "ED_uvedit.h"
#include "GPU_compute.h"
#include "GPU_shader.h"
#include "GPU_shader_shared.h"
#include "GPU_texture.h"
#include "GPU_uniform_buffer.h"
#include "GPU_vertex_buffer.h"
#include "GPU_vertex_format.h"
#include "sculpt_intern.h"
#include "sculpt_texture_paint_intern.hh"
namespace blender::ed::sculpt_paint::texture_paint::barycentric_extraction {
static void init_using_intersection(Object *ob, int totnode, PBVHNode **nodes)
{
Vector<PBVHNode *> nodes_to_initialize;
for (int n = 0; n < totnode; n++) {
PBVHNode *node = nodes[n];
NodeData *node_data = static_cast<NodeData *>(BKE_pbvh_node_texture_paint_data_get(node));
if (node_data != nullptr) {
continue;
}
node_data = MEM_new<NodeData>(__func__);
BKE_pbvh_node_texture_paint_data_set(node, node_data, NodeData::free_func);
nodes_to_initialize.append(node);
}
if (nodes_to_initialize.size() == 0) {
return;
}
printf("%ld nodes to initialize\n", nodes_to_initialize.size());
Mesh *mesh = static_cast<Mesh *>(ob->data);
MLoopUV *ldata_uv = static_cast<MLoopUV *>(CustomData_get_layer(&mesh->ldata, CD_MLOOPUV));
if (ldata_uv == nullptr) {
return;
}
SculptSession *ss = ob->sculpt;
static GPUVertFormat format = {0};
GPU_vertformat_attr_add(&format, "uv1", GPU_COMP_F32, 2, GPU_FETCH_FLOAT);
GPU_vertformat_attr_add(&format, "uv2", GPU_COMP_F32, 2, GPU_FETCH_FLOAT);
GPU_vertformat_attr_add(&format, "uv3", GPU_COMP_F32, 2, GPU_FETCH_FLOAT);
GPU_vertformat_attr_add(&format, "node_index", GPU_COMP_I32, 1, GPU_FETCH_INT);
GPU_vertformat_attr_add(&format, "poly_index", GPU_COMP_I32, 1, GPU_FETCH_INT);
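/* The vertex format mirrors TexturePaintPolygon in GPU_shader_shared.h (three float2 UVs plus
 * two int indices). The buffer is only used as storage that is bound as an SSBO below, not for
 * regular vertex drawing. */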
GPUShader *shader = GPU_shader_get_builtin_shader(GPU_SHADER_SCULPT_PIXEL_EXTRACTION);
GPU_shader_bind(shader);
const int polygons_loc = GPU_shader_get_ssbo(shader, "polygons");
BLI_assert(polygons_loc != -1);
ImBuf *image_buffer = ss->mode.texture_paint.drawing_target;
GPUTexture *pixels_tex = GPU_texture_create_2d(
"gpu_shader_compute_2d", image_buffer->x, image_buffer->y, 1, GPU_RGBA32I, nullptr);
GPU_texture_image_bind(pixels_tex, GPU_shader_get_texture_binding(shader, "pixels"));
int pixels_added = 0;
int4 clear_pixel(-1);
GPU_texture_clear(pixels_tex, GPU_DATA_INT, clear_pixel);
for (int n = 0; n < nodes_to_initialize.size(); n++) {
printf("node [%d/%ld]\n", n + 1, nodes_to_initialize.size());
Vector<TexturePaintPolygon> polygons;
PBVHNode *node = nodes_to_initialize[n];
PBVHVertexIter vd;
BKE_pbvh_vertex_iter_begin (ss->pbvh, node, vd, PBVH_ITER_UNIQUE) {
MeshElemMap *vert_map = &ss->pmap[vd.index];
for (int j = 0; j < ss->pmap[vd.index].count; j++) {
const int poly_index = vert_map->indices[j];
const MPoly *p = &ss->mpoly[poly_index];
if (p->totloop < 3) {
continue;
}
for (int l = 0; l < p->totloop - 2; l++) {
const int v1_loop_index = p->loopstart;
const int v2_loop_index = p->loopstart + l + 1;
const int v3_loop_index = p->loopstart + l + 2;
TexturePaintPolygon polygon;
polygon.uv[0] = ldata_uv[v1_loop_index].uv;
polygon.uv[1] = ldata_uv[v2_loop_index].uv;
polygon.uv[2] = ldata_uv[v3_loop_index].uv;
polygon.pbvh_node_index = n;
polygon.poly_index = poly_index;
polygons.append(polygon);
}
}
}
BKE_pbvh_vertex_iter_end;
printf("%ld polygons loaded\n", polygons.size());
GPUVertBuf *vbo = GPU_vertbuf_create_with_format(&format);
GPU_vertbuf_data_alloc(vbo, polygons.size());
for (int i = 0; i < polygons.size(); i++) {
GPU_vertbuf_vert_set(vbo, i, &polygons[i]);
}
GPU_vertbuf_bind_as_ssbo(vbo, polygons_loc);
int2 calc_size(image_buffer->x, image_buffer->y);
const int batch_size = 10000;
for (int batch = 0; batch * batch_size < polygons.size(); batch++) {
printf("batch %d\n", batch);
GPU_shader_uniform_1i(shader, "from_polygon", batch * batch_size);
GPU_shader_uniform_1i(
shader, "to_polygon", min_ii(batch * batch_size + batch_size, polygons.size()));
GPU_compute_dispatch(shader, calc_size.x, calc_size.y, 1);
}
GPU_memory_barrier(GPU_BARRIER_TEXTURE_FETCH);
TexturePaintPixel *pixels = static_cast<TexturePaintPixel *>(
GPU_texture_read(pixels_tex, GPU_DATA_INT, 0));
GPU_vertbuf_discard(vbo);
MVert *mvert = SCULPT_mesh_deformed_mverts_get(ss);
for (int y = 0; y < calc_size.y; y++) {
for (int x = 0; x < calc_size.x; x++) {
int pixel_offset = y * image_buffer->x + x;
float2 uv(float(x) / image_buffer->x, float(y) / image_buffer->y);
TexturePaintPixel *pixel = &pixels[pixel_offset];
// printf("%d %d: %d %d\n", x, y, pixel->poly_index, pixel->pbvh_node_index);
if (pixel->poly_index == -1 || pixel->pbvh_node_index != n) {
/* No intersection detected.*/
continue;
}
BLI_assert(pixel->pbvh_node_index < nodes_to_initialize.size());
PBVHNode *node = nodes_to_initialize[pixel->pbvh_node_index];
NodeData *node_data = static_cast<NodeData *>(BKE_pbvh_node_texture_paint_data_get(node));
const MPoly *p = &ss->mpoly[pixel->poly_index];
const MLoop *loopstart = &ss->mloop[p->loopstart];
bool intersection_validated = false;
for (int l = 0; l < p->totloop - 2; l++) {
const int v1_loop_index = p->loopstart;
const int v2_loop_index = p->loopstart + l + 1;
const int v3_loop_index = p->loopstart + l + 2;
const float2 v1_uv = ldata_uv[v1_loop_index].uv;
const float2 v2_uv = ldata_uv[v2_loop_index].uv;
const float2 v3_uv = ldata_uv[v3_loop_index].uv;
float3 weights;
barycentric_weights_v2(v1_uv, v2_uv, v3_uv, uv, weights);
if (weights[0] < 0.0 || weights[0] > 1.0 || weights[1] < 0.0 || weights[1] > 1.0 ||
weights[2] < 0.0 || weights[2] > 1.0) {
continue;
}
const int v1_index = loopstart[0].v;
const int v2_index = loopstart[l + 1].v;
const int v3_index = loopstart[l + 2].v;
const float3 v1_pos = mvert[v1_index].co;
const float3 v2_pos = mvert[v2_index].co;
const float3 v3_pos = mvert[v3_index].co;
float3 local_pos;
interp_v3_v3v3v3(local_pos, v1_pos, v2_pos, v3_pos, weights);
PixelData new_pixel;
new_pixel.local_pos = local_pos;
new_pixel.pixel_pos = int2(x, y);
new_pixel.content = float4(&image_buffer->rect_float[pixel_offset * 4]);
node_data->pixels.append(new_pixel);
pixels_added += 1;
intersection_validated = true;
break;
}
BLI_assert(intersection_validated);
}
}
printf("%d pixels added\n", pixels_added);
MEM_freeN(pixels);
}
GPU_shader_unbind();
GPU_texture_free(pixels_tex);
}
} // namespace blender::ed::sculpt_paint::texture_paint::barycentric_extraction
extern "C" {
void SCULPT_extract_pixels(Object *ob, PBVHNode **nodes, int totnode)
{
TIMEIT_START(extract_pixels);
blender::ed::sculpt_paint::texture_paint::barycentric_extraction::init_using_intersection(
ob, totnode, nodes);
TIMEIT_END(extract_pixels);
}
}

View File

@@ -0,0 +1,293 @@
#include "DNA_material_types.h"
#include "DNA_mesh_types.h"
#include "DNA_meshdata_types.h"
#include "DNA_scene_types.h"
#include "DNA_windowmanager_types.h"
#include "BKE_brush.h"
#include "BKE_context.h"
#include "BKE_customdata.h"
#include "BKE_image.h"
#include "BKE_material.h"
#include "BKE_mesh.h"
#include "BKE_mesh_mapping.h"
#include "BKE_pbvh.h"
#include "PIL_time_utildefines.h"
#include "BLI_task.h"
#include "BLI_vector.hh"
#include "IMB_rasterizer.hh"
#include "WM_types.h"
#include "bmesh.h"
#include "ED_uvedit.h"
#include "GPU_compute.h"
#include "GPU_shader.h"
#include "GPU_shader_shared.h"
#include "GPU_texture.h"
#include "GPU_uniform_buffer.h"
#include "GPU_vertex_buffer.h"
#include "GPU_vertex_format.h"
#include "sculpt_intern.h"
#include "sculpt_texture_paint_intern.hh"
namespace blender::ed::sculpt_paint::texture_paint::packed_pixels {
/* Express co in terms of movement along 2 edges of the triangle. */
static float3 barycentric_weights(const float2 v1,
const float2 v2,
const float2 v3,
const float2 co)
{
float3 weights;
barycentric_weights_v2(v1, v2, v3, co, weights);
return weights;
}
static bool is_inside_triangle(const float3 barycentric_weights)
{
return barycentric_inside_triangle_v2(barycentric_weights);
}
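/* Marks the polygon as visited and returns whether it had already been visited before this call. */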
static bool has_been_visited(std::vector<bool> &visited_polygons, const int poly_index)
{
bool visited = visited_polygons[poly_index];
visited_polygons[poly_index] = true;
return visited;
}
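/* Scan every image row inside the triangle's bounding box. Each row that touches the triangle
 * produces one PixelsPackage: the image coordinate and barycentric coordinate of the first
 * covered pixel plus the number of covered pixels. The per-pixel barycentric delta
 * (add_barycentric_coord_x) is constant over the triangle and is derived from the longest run
 * found. Decoding sketch (not part of this patch): a painting kernel can reconstruct the
 * per-pixel barycentric coordinate of a package by starting at start_barycentric_coord.decode()
 * and adding add_barycentric_coord_x once per pixel in the run. */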
static void extract_barycentric_pixels(TileData &tile_data,
const ImBuf *image_buffer,
TrianglePaintInput &triangle,
const int triangle_index,
const float2 uvs[3],
const int minx,
const int miny,
const int maxx,
const int maxy)
{
int best_num_pixels = 0;
for (int y = miny; y < maxy; y++) {
bool start_detected = false;
float3 barycentric;
PixelsPackage package;
package.triangle_index = triangle_index;
package.num_pixels = 0;
int x;
for (x = minx; x < maxx; x++) {
float2 uv(float(x) / image_buffer->x, float(y) / image_buffer->y);
barycentric = barycentric_weights(uvs[0], uvs[1], uvs[2], uv);
const bool is_inside = is_inside_triangle(barycentric);
if (!start_detected && is_inside) {
start_detected = true;
package.start_image_coordinate = ushort2(x, y);
package.start_barycentric_coord = barycentric;
}
else if (start_detected && !is_inside) {
break;
}
}
if (!start_detected) {
continue;
}
package.num_pixels = x - package.start_image_coordinate.x;
if (package.num_pixels > best_num_pixels) {
triangle.add_barycentric_coord_x = (barycentric - package.start_barycentric_coord.decode()) /
package.num_pixels;
best_num_pixels = package.num_pixels;
}
tile_data.encoded_pixels.append(package);
}
}
static void init_triangles(SculptSession *ss,
PBVHNode *node,
NodeData *node_data,
std::vector<bool> &visited_polygons)
{
PBVHVertexIter vd;
BKE_pbvh_vertex_iter_begin (ss->pbvh, node, vd, PBVH_ITER_UNIQUE) {
MeshElemMap *vert_map = &ss->pmap[vd.index];
for (int j = 0; j < ss->pmap[vd.index].count; j++) {
const int poly_index = vert_map->indices[j];
if (has_been_visited(visited_polygons, poly_index)) {
continue;
}
const MPoly *p = &ss->mpoly[poly_index];
const MLoop *loopstart = &ss->mloop[p->loopstart];
for (int l = 0; l < p->totloop - 2; l++) {
Triangle triangle;
triangle.loop_indices = int3(p->loopstart, p->loopstart + l + 1, p->loopstart + l + 2);
triangle.vert_indices = int3(loopstart[0].v, loopstart[l + 1].v, loopstart[l + 2].v);
triangle.poly_index = poly_index;
node_data->triangles.append(triangle);
}
}
}
BKE_pbvh_vertex_iter_end;
}
struct EncodePixelsUserData {
Image *image;
ImageUser *image_user;
Vector<PBVHNode *> *nodes;
MLoopUV *ldata_uv;
};
static void do_encode_pixels(void *__restrict userdata,
const int n,
const TaskParallelTLS *__restrict UNUSED(tls))
{
EncodePixelsUserData *data = static_cast<EncodePixelsUserData *>(userdata);
Image *image = data->image;
ImageUser image_user = *data->image_user;
PBVHNode *node = (*data->nodes)[n];
NodeData *node_data = static_cast<NodeData *>(BKE_pbvh_node_texture_paint_data_get(node));
LISTBASE_FOREACH (ImageTile *, tile, &data->image->tiles) {
imbuf::ImageTileWrapper image_tile(tile);
image_user.tile = image_tile.get_tile_number();
ImBuf *image_buffer = BKE_image_acquire_ibuf(image, &image_user, nullptr);
if (image_buffer == nullptr) {
continue;
}
float2 tile_offset = float2(image_tile.get_tile_offset());
TileData tile_data;
Triangles &triangles = node_data->triangles;
for (int triangle_index = 0; triangle_index < triangles.size(); triangle_index++) {
int3 loop_indices = triangles.get_loop_indices(triangle_index);
float2 uvs[3] = {
float2(data->ldata_uv[loop_indices[0]].uv) - tile_offset,
float2(data->ldata_uv[loop_indices[1]].uv) - tile_offset,
float2(data->ldata_uv[loop_indices[2]].uv) - tile_offset,
};
const float minv = clamp_f(min_fff(uvs[0].y, uvs[1].y, uvs[2].y), 0.0f, 1.0f);
const int miny = floor(minv * image_buffer->y);
const float maxv = clamp_f(max_fff(uvs[0].y, uvs[1].y, uvs[2].y), 0.0f, 1.0f);
const int maxy = min_ii(ceil(maxv * image_buffer->y), image_buffer->y);
const float minu = clamp_f(min_fff(uvs[0].x, uvs[1].x, uvs[2].x), 0.0f, 1.0f);
const int minx = floor(minu * image_buffer->x);
const float maxu = clamp_f(max_fff(uvs[0].x, uvs[1].x, uvs[2].x), 0.0f, 1.0f);
const int maxx = min_ii(ceil(maxu * image_buffer->x), image_buffer->x);
TrianglePaintInput &triangle = triangles.get_paint_input(triangle_index);
extract_barycentric_pixels(
tile_data, image_buffer, triangle, triangle_index, uvs, minx, miny, maxx, maxy);
}
BKE_image_release_ibuf(image, image_buffer, nullptr);
if (tile_data.encoded_pixels.is_empty()) {
continue;
}
tile_data.tile_number = image_tile.get_tile_number();
node_data->tiles.append(tile_data);
}
node_data->triangles.cleanup_after_init();
}
static void init(const Object *ob, int totnode, PBVHNode **nodes)
{
Vector<PBVHNode *> nodes_to_initialize;
for (int n = 0; n < totnode; n++) {
PBVHNode *node = nodes[n];
NodeData *node_data = static_cast<NodeData *>(BKE_pbvh_node_texture_paint_data_get(node));
if (node_data != nullptr) {
continue;
}
node_data = MEM_new<NodeData>(__func__);
BKE_pbvh_node_texture_paint_data_set(node, node_data, NodeData::free_func);
nodes_to_initialize.append(node);
}
if (nodes_to_initialize.size() == 0) {
return;
}
printf("%lld nodes to initialize\n", nodes_to_initialize.size());
Mesh *mesh = static_cast<Mesh *>(ob->data);
MLoopUV *ldata_uv = static_cast<MLoopUV *>(CustomData_get_layer(&mesh->ldata, CD_MLOOPUV));
if (ldata_uv == nullptr) {
return;
}
SculptSession *ss = ob->sculpt;
std::vector<bool> visited_polygons(mesh->totpoly);
for (int n = 0; n < nodes_to_initialize.size(); n++) {
PBVHNode *node = nodes_to_initialize[n];
NodeData *node_data = static_cast<NodeData *>(BKE_pbvh_node_texture_paint_data_get(node));
init_triangles(ss, node, node_data, visited_polygons);
}
EncodePixelsUserData user_data;
user_data.image = ss->mode.texture_paint.image;
user_data.image_user = ss->mode.texture_paint.image_user;
user_data.ldata_uv = ldata_uv;
user_data.nodes = &nodes_to_initialize;
TaskParallelSettings settings;
BKE_pbvh_parallel_range_settings(&settings, true, nodes_to_initialize.size());
BLI_task_parallel_range(0, nodes_to_initialize.size(), &user_data, do_encode_pixels, &settings);
{
int64_t compressed_data_len = 0;
int64_t num_pixels = 0;
for (int n = 0; n < totnode; n++) {
PBVHNode *node = nodes[n];
NodeData *node_data = static_cast<NodeData *>(BKE_pbvh_node_texture_paint_data_get(node));
compressed_data_len += node_data->triangles.mem_size();
for (const TileData &tile_data : node_data->tiles) {
compressed_data_len += tile_data.encoded_pixels.size() * sizeof(PixelsPackage);
for (const PixelsPackage &encoded_pixels : tile_data.encoded_pixels) {
num_pixels += encoded_pixels.num_pixels;
}
}
}
printf("Encoded %lld pixels in %lld bytes (%f bytes per pixel)\n",
num_pixels,
compressed_data_len,
float(compressed_data_len) / num_pixels);
}
//#define DO_WATERTIGHT_CHECK
#ifdef DO_WATERTIGHT_CHECK
for (int n = 0; n < nodes_to_initialize.size(); n++) {
PBVHNode *node = nodes[n];
NodeData *node_data = static_cast<NodeData *>(BKE_pbvh_node_texture_paint_data_get(node));
for (PixelsPackage &encoded_pixels : node_data->encoded_pixels) {
int pixel_offset = encoded_pixels.start_image_coordinate.y * image_buffer->x +
encoded_pixels.start_image_coordinate.x;
for (int x = 0; x < encoded_pixels.num_pixels; x++) {
copy_v4_fl(&image_buffer->rect_float[pixel_offset * 4], 1.0);
pixel_offset += 1;
}
}
}
#endif
}
} // namespace blender::ed::sculpt_paint::texture_paint::packed_pixels
extern "C" {
void SCULPT_extract_pixels(Object *ob, PBVHNode **nodes, int totnode)
{
using namespace blender::ed::sculpt_paint::texture_paint::packed_pixels;
TIMEIT_START(extract_pixels);
init(ob, totnode, nodes);
TIMEIT_END(extract_pixels);
}
}

View File

@@ -354,6 +354,8 @@ set(GLSL_SRC
shaders/gpu_shader_gpencil_stroke_frag.glsl
shaders/gpu_shader_gpencil_stroke_geom.glsl
shaders/gpu_shader_sculpt_pixel_extraction_comp.glsl
shaders/gpu_shader_cfg_world_clip_lib.glsl
shaders/gpu_shader_colorspace_lib.glsl
@@ -433,6 +435,7 @@ set(SRC_SHADER_CREATE_INFOS
shaders/infos/gpu_shader_simple_lighting_info.hh
shaders/infos/gpu_shader_text_info.hh
shaders/infos/gpu_srgb_to_framebuffer_space_info.hh
shaders/infos/gpu_shader_sculpt_pixel_extraction_info.hh
)
set(SHADER_CREATE_INFOS_CONTENT "")

View File

@@ -341,6 +341,7 @@ typedef enum eGPUBuiltinShader {
GPU_SHADER_INSTANCE_VARIYING_COLOR_VARIYING_SIZE, /* Uniformly scaled */
/* grease pencil drawing */
GPU_SHADER_GPENCIL_STROKE,
GPU_SHADER_SCULPT_PIXEL_EXTRACTION,
/* specialized for widget drawing */
GPU_SHADER_2D_AREA_BORDERS,
GPU_SHADER_2D_WIDGET_BASE,

View File

@@ -66,3 +66,17 @@ struct MultiRectCallData {
float4 calls_data[MAX_CALLS * 3];
};
BLI_STATIC_ASSERT_ALIGN(struct MultiRectCallData, 16)
struct TexturePaintPolygon {
float2 uv[3];
int pbvh_node_index;
int poly_index;
};
BLI_STATIC_ASSERT_ALIGN(struct TexturePaintPolygon, 16)
struct TexturePaintPixel {
int pbvh_node_index;
int poly_index;
int2 _pad;
};
BLI_STATIC_ASSERT_ALIGN(struct TexturePaintPixel, 16)

View File

@@ -332,6 +332,9 @@ static const GPUShaderStages builtin_shader_stages[GPU_SHADER_BUILTIN_LEN] = {
[GPU_SHADER_GPENCIL_STROKE] = {.name = "GPU_SHADER_GPENCIL_STROKE",
.create_info = "gpu_shader_gpencil_stroke"},
[GPU_SHADER_SCULPT_PIXEL_EXTRACTION] = {.name = "GPU_SHADER_SCULPT_PIXEL_EXTRACTION",
.create_info = "gpu_shader_sculpt_pixel_extraction"}
};
GPUShader *GPU_shader_get_builtin_shader_with_config(eGPUBuiltinShader shader,

View File

@@ -379,7 +379,7 @@ BuiltinBits gpu_shader_dependency_get_builtins(const StringRefNull shader_source
return shader::BuiltinBits::NONE;
}
if (g_sources->contains(shader_source_name) == false) {
std::cout << "Error: Could not find \"" << shader_source_name
std::cerr << "Error: Could not find \"" << shader_source_name
<< "\" in the list of registered source.\n";
BLI_assert(0);
return shader::BuiltinBits::NONE;

View File

@@ -433,8 +433,10 @@ inline bool validate_data_format(eGPUTextureFormat tex_format, eGPUDataFormat da
case GPU_RG16UI:
case GPU_R32UI:
return data_format == GPU_DATA_UINT;
case GPU_RG16I:
case GPU_R16I:
case GPU_RG16I:
case GPU_RG32I:
case GPU_RGBA32I:
return data_format == GPU_DATA_INT;
case GPU_R8:
case GPU_RG8:

View File

@@ -0,0 +1,44 @@
float cross_tri_v2(vec2 v1, vec2 v2, vec2 v3)
{
return (v1.x - v2.x) * (v2.y - v3.y) + (v1.y - v2.y) * (v3.x - v2.x);
}
void main()
{
ivec2 pixel_coord = ivec2(gl_GlobalInvocationID.xy);
ivec2 image_size = imageSize(pixels);
vec2 pixel_uv = vec2(pixel_coord) / vec2(image_size);
if (imageLoad(pixels, pixel_coord).x != -1) {
return;
};
for (int poly_index = from_polygon; poly_index < to_polygon; poly_index++) {
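/* Each polygon occupies two vec4s in the SSBO: (uv0.xy, uv1.xy) and (uv2.xy, node_index,
 * poly_index), with the two indices packed as float bits. */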
vec2 uv0 = polygons[poly_index * 2].xy;
vec2 uv1 = polygons[poly_index * 2].zw;
vec2 uv2 = polygons[poly_index * 2 + 1].xy;
int pbvh_node_index = floatBitsToInt(polygons[poly_index * 2 + 1].z);
int pbvh_poly_index = floatBitsToInt(polygons[poly_index * 2 + 1].w);
vec3 weights = vec3(cross_tri_v2(uv1, uv2, pixel_uv),
cross_tri_v2(uv2, uv0, pixel_uv),
cross_tri_v2(uv0, uv1, pixel_uv));
float tot_weight = weights.x + weights.y + weights.z;
if (tot_weight == 0.0) {
continue;
}
weights /= tot_weight;
if (any(lessThan(weights, vec3(0.0))) || any(greaterThan(weights, vec3(1.0)))) {
/* No barycentric intersection detected with current polygon, continue with next.*/
continue;
}
/* Found solution for this pixel. */
ivec4 pixel = ivec4(pbvh_node_index, pbvh_poly_index, 0, 255);
imageStore(pixels, pixel_coord, pixel);
break;
}
}

View File

@@ -0,0 +1,11 @@
#include "gpu_shader_create_info.hh"
GPU_SHADER_CREATE_INFO(gpu_shader_sculpt_pixel_extraction)
.local_group_size(1, 1)
.storage_buf(0, Qualifier::READ, "vec4", "polygons[]")
.image(1, GPU_RGBA32I, Qualifier::READ_WRITE, ImageType::INT_2D, "pixels")
.push_constant(Type::INT, "from_polygon")
.push_constant(Type::INT, "to_polygon")
.compute_source("gpu_shader_sculpt_pixel_extraction_comp.glsl")
.typedef_source("GPU_shader_shared.h")
.do_static_compilation(true);

View File

@@ -190,3 +190,17 @@ set_source_files_properties(
)
blender_add_lib(bf_imbuf "${SRC}" "${INC}" "${INC_SYS}" "${LIB}")
if(WITH_GTESTS)
set(TEST_SRC
intern/rasterizer_test.cc
)
set(TEST_INC
)
set(TEST_LIB
)
include(GTestTesting)
blender_add_test_lib(bf_imbuf_tests "${TEST_SRC}" "${INC};${TEST_INC}" "${INC_SYS}" "${LIB};${TEST_LIB}")
endif()

View File

@@ -0,0 +1,46 @@
/* SPDX-License-Identifier: GPL-2.0-or-later
* Copyright 2022 Blender Foundation. */
/** \file
* \ingroup imbuf
*/
/* TODO: should be moved to BKE_image_wrappers. */
/* TODO: should be reused by image engine. */
#pragma once
#include "DNA_image_types.h"
#include "BLI_math_vec_types.hh"
namespace blender::imbuf {
struct ImageTileWrapper {
ImageTile *image_tile;
ImageTileWrapper(ImageTile *image_tile) : image_tile(image_tile)
{
}
int get_tile_number() const
{
return image_tile->tile_number;
}
int2 get_tile_offset() const
{
return int2(get_tile_x_offset(), get_tile_y_offset());
}
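/* UDIM tiles are numbered 1001, 1002, ... in rows of 10, so tile 1001 maps to offset (0, 0)
 * and tile 1012 maps to offset (1, 1). */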
int get_tile_x_offset() const
{
int tile_number = get_tile_number();
return (tile_number - 1001) % 10;
}
int get_tile_y_offset() const
{
int tile_number = get_tile_number();
return (tile_number - 1001) / 10;
}
};
} // namespace blender::imbuf

View File

@@ -0,0 +1,711 @@
/* SPDX-License-Identifier: GPL-2.0-or-later
* Copyright 2022 Blender Foundation. All rights reserved. */
/** \file
* \ingroup imbuf
*
* Rasterizer to render triangles onto an image buffer.
*
* The implementation is template based and follows a (very limited) OpenGL pipeline.
*
* ## Basic usage
*
* In order to use it you have to define the data structure for a single vertex.
*
* \code{.cc}
* struct VertexInput {
* float2 uv;
* };
* \endcode
*
A vertex shader is required to transform the vertices to actual coordinates in the image buffer.
The vertex shader will store vertex-specific data in a VertexOutInterface.
*
* \code{.cc}
* class MyVertexShader : public AbstractVertexShader<VertexInput, float> {
* public:
* float4x4 mat;
* void vertex(const VertexInputType &input, VertexOutputType *r_output) override
* {
* float2 coord = float2(mat * float3(input.uv[0], input.uv[1], 0.0));
* r_output->coord = coord * image_size;
* r_output->data = 1.0;
* }
* };
* \endcode
*
* A fragment shader is required to actually generate the pixel that will be stored in the buffer.
*
* \code{.cc}
* class FragmentShader : public AbstractFragmentShader<float, float4> {
* public:
* void fragment(const FragmentInputType &input, FragmentOutputType *r_output) override
* {
* *r_output = float4(input, input, input, 1.0);
* }
* };
* \endcode
*
* Create a rasterizer with the vertex and fragment shader and start drawing.
* It is required to call flush to make sure that all triangles are drawn to the image buffer.
*
* \code{.cc}
Rasterizer<MyVertexShader, MyFragmentShader> rasterizer;
rasterizer.activate_drawing_target(&image_buffer);
rasterizer.vertex_shader().mat = float4x4::identity();
rasterizer.draw_triangle(VertexInput{float2(0.0, 1.0)},
VertexInput{float2(1.0, 1.0)},
VertexInput{float2(1.0, 0.0)});
* rasterizer.flush();
* \endcode
*/
#pragma once
#include "BLI_math.h"
#include "BLI_math_vec_types.hh"
#include "BLI_vector.hh"
#include "IMB_imbuf.h"
#include "IMB_imbuf_types.h"
#include "intern/rasterizer_blending.hh"
#include "intern/rasterizer_clamping.hh"
#include "intern/rasterizer_stats.hh"
#include "intern/rasterizer_target.hh"
#include <optional>
// #define DEBUG_PRINT
namespace blender::imbuf::rasterizer {
/** The default number of rasterlines to buffer before flushing to the image buffer. */
constexpr int64_t DefaultRasterlinesBufferSize = 4096;
/**
* Interface data of the vertex stage.
*/
template<
/**
* Data type per vertex generated by the vertex shader and transferred to the fragment shader.
*
* The data type should implement the +=, +, =, -, / and * operator.
*/
typename Inner>
class VertexOutInterface {
public:
using InnerType = Inner;
using Self = VertexOutInterface<InnerType>;
/** Coordinate of a vertex inside the image buffer. (0..image_buffer.x, 0..image_buffer.y). */
float2 coord;
InnerType data;
Self &operator+=(const Self &other)
{
coord += other.coord;
data += other.data;
return *this;
}
Self &operator=(const Self &other)
{
coord = other.coord;
data = other.data;
return *this;
}
Self operator-(const Self &other) const
{
Self result;
result.coord = coord - other.coord;
result.data = data - other.data;
return result;
}
Self operator+(const Self &other) const
{
Self result;
result.coord = coord + other.coord;
result.data = data + other.data;
return result;
}
Self operator/(const float divider) const
{
Self result;
result.coord = coord / divider;
result.data = data / divider;
return result;
}
Self operator*(const float multiplier) const
{
Self result;
result.coord = coord * multiplier;
result.data = data * multiplier;
return result;
}
};
/**
* Vertex shader
*/
template<typename VertexInput, typename VertexOutput> class AbstractVertexShader {
public:
using VertexInputType = VertexInput;
using VertexOutputType = VertexOutInterface<VertexOutput>;
virtual void vertex(const VertexInputType &input, VertexOutputType *r_output) = 0;
};
/**
* Fragment shader will render a single fragment onto the ImageBuffer.
* FragmentInput - The input data from the vertex stage.
* FragmentOutput points to the memory location to write to in the image buffer.
*/
template<typename FragmentInput, typename FragmentOutput> class AbstractFragmentShader {
public:
using FragmentInputType = FragmentInput;
using FragmentOutputType = FragmentOutput;
virtual void fragment(const FragmentInputType &input, FragmentOutputType *r_output) = 0;
};
/**
* RasterLine - data to render a single rasterline of a triangle.
*/
template<typename FragmentInput> class Rasterline {
public:
/** Row where this rasterline will be rendered. */
uint32_t y;
/** Starting X coordinate of the rasterline. */
uint32_t start_x;
/** Ending X coordinate of the rasterline. */
uint32_t end_x;
/** Input data for the fragment shader on (start_x, y). */
FragmentInput start_data;
/** Delta to add to the start_input to create the data for the next fragment. */
FragmentInput fragment_add;
Rasterline(uint32_t y,
uint32_t start_x,
uint32_t end_x,
FragmentInput start_data,
FragmentInput fragment_add)
: y(y), start_x(start_x), end_x(end_x), start_data(start_data), fragment_add(fragment_add)
{
}
};
template<typename Rasterline, int64_t BufferSize> class Rasterlines {
public:
Vector<Rasterline, BufferSize> buffer;
explicit Rasterlines()
{
buffer.reserve(BufferSize);
}
virtual ~Rasterlines() = default;
void append(const Rasterline &value)
{
buffer.append(value);
BLI_assert(buffer.capacity() == BufferSize);
}
bool is_empty() const
{
return buffer.is_empty();
}
bool has_items() const
{
return buffer.has_items();
}
bool is_full() const
{
return buffer.size() == BufferSize;
}
void clear()
{
buffer.clear();
BLI_assert(buffer.size() == 0);
BLI_assert(buffer.capacity() == BufferSize);
}
};
template<typename VertexShader,
typename FragmentShader,
/**
* A blend mode integrates the result of the fragment shader with the drawing target.
*/
typename BlendMode = CopyBlendMode,
typename DrawingTarget = ImageBufferDrawingTarget,
/**
* To improve branching performance the rasterlines are buffered and flushed when this
threshold is reached.
*/
int64_t RasterlinesSize = DefaultRasterlinesBufferSize,
/**
* Statistics collector. Should be a subclass of AbstractStats or implement the same
* interface.
*
* Used in test cases to check which decisions were made.
*/
typename Statistics = NullStats>
class Rasterizer {
public:
using InterfaceInnerType = typename VertexShader::VertexOutputType::InnerType;
using RasterlineType = Rasterline<InterfaceInnerType>;
using VertexInputType = typename VertexShader::VertexInputType;
using VertexOutputType = typename VertexShader::VertexOutputType;
using FragmentInputType = typename FragmentShader::FragmentInputType;
using FragmentOutputType = typename FragmentShader::FragmentOutputType;
using TargetBufferType = typename DrawingTarget::BufferType;
using PixelType = typename DrawingTarget::PixelType;
/** Check if the vertex shader and the fragment shader can be linked together. */
static_assert(std::is_same_v<InterfaceInnerType, FragmentInputType>);
/** Check if the output of the fragment shader can be used as source of the Blend Mode. */
static_assert(std::is_same_v<FragmentOutputType, typename BlendMode::SourceType>);
private:
VertexShader vertex_shader_;
FragmentShader fragment_shader_;
Rasterlines<RasterlineType, RasterlinesSize> rasterlines_;
const CenterPixelClampingMethod clamping_method;
const BlendMode blending_mode_;
DrawingTarget drawing_target_;
public:
Statistics stats;
explicit Rasterizer()
{
}
/** Activate the given image buffer as the active drawing target. */
void activate_drawing_target(TargetBufferType *new_drawing_target)
{
deactivate_drawing_target();
drawing_target_.activate(new_drawing_target);
}
/**
* Deactivate active drawing target.
*
* Will flush any rasterlines before deactivating.
*/
void deactivate_drawing_target()
{
if (has_active_drawing_target()) {
flush();
}
drawing_target_.deactivate();
BLI_assert(!has_active_drawing_target());
}
bool has_active_drawing_target() const
{
return drawing_target_.has_active_target();
}
virtual ~Rasterizer() = default;
VertexShader &vertex_shader()
{
return vertex_shader_;
}
FragmentShader &fragment_shader()
{
return fragment_shader_;
}
void draw_triangle(const VertexInputType &p1,
const VertexInputType &p2,
const VertexInputType &p3)
{
BLI_assert_msg(has_active_drawing_target(),
"Drawing requires an active drawing target. Use `activate_drawing_target` to "
"activate a drawing target.");
stats.increase_triangles();
std::array<VertexOutputType, 3> vertex_out;
vertex_shader_.vertex(p1, &vertex_out[0]);
vertex_shader_.vertex(p2, &vertex_out[1]);
vertex_shader_.vertex(p3, &vertex_out[2]);
/* Early check: when all coordinates lie on the same side outside the buffer, it is impossible to
 * render into the buffer. */
const VertexOutputType &p1_out = vertex_out[0];
const VertexOutputType &p2_out = vertex_out[1];
const VertexOutputType &p3_out = vertex_out[2];
const bool triangle_not_visible = (p1_out.coord[0] < 0.0 && p2_out.coord[0] < 0.0 &&
p3_out.coord[0] < 0.0) ||
(p1_out.coord[1] < 0.0 && p2_out.coord[1] < 0.0 &&
p3_out.coord[1] < 0.0) ||
(p1_out.coord[0] >= drawing_target_.get_width() &&
p2_out.coord[0] >= drawing_target_.get_width() &&
p3_out.coord[0] >= drawing_target_.get_width()) ||
(p1_out.coord[1] >= drawing_target_.get_height() &&
p2_out.coord[1] >= drawing_target_.get_height() &&
p3_out.coord[1] >= drawing_target_.get_height());
if (triangle_not_visible) {
stats.increase_discarded_triangles();
return;
}
rasterize_triangle(vertex_out);
}
/**
* Flush any not drawn rasterlines onto the active drawing target.
*/
void flush()
{
if (rasterlines_.is_empty()) {
return;
}
stats.increase_flushes();
for (const RasterlineType &rasterline : rasterlines_.buffer) {
render_rasterline(rasterline);
}
rasterlines_.clear();
}
private:
void rasterize_triangle(std::array<VertexOutputType, 3> &vertex_out)
{
#ifdef DEBUG_PRINT
printf("%s 1: (%.4f,%.4f) 2: (%.4f,%.4f) 3: (%.4f %.4f)\n",
__func__,
vertex_out[0].coord[0],
vertex_out[0].coord[1],
vertex_out[1].coord[0],
vertex_out[1].coord[1],
vertex_out[2].coord[0],
vertex_out[2].coord[1]);
#endif
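/* Scanline approach: sort the vertices by v-coordinate, then walk a left and a right edge from
 * the minimum vertex towards the maximum vertex, emitting one rasterline per scanline. At the
 * mid vertex the finished edge is replaced by the mid-to-max edge. */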
std::array<VertexOutputType *, 3> sorted_vertices = order_triangle_vertices(vertex_out);
const int min_rasterline_y = clamping_method.scanline_for(sorted_vertices[0]->coord[1]);
const int mid_rasterline_y = clamping_method.scanline_for(sorted_vertices[1]->coord[1]);
const int max_rasterline_y = clamping_method.scanline_for(sorted_vertices[2]->coord[1]) - 1;
/* left and right branch. */
VertexOutputType left = *sorted_vertices[0];
VertexOutputType right = *sorted_vertices[0];
VertexOutputType *left_target;
VertexOutputType *right_target;
if (sorted_vertices[1]->coord[0] < sorted_vertices[2]->coord[0]) {
left_target = sorted_vertices[1];
right_target = sorted_vertices[2];
}
else {
left_target = sorted_vertices[2];
right_target = sorted_vertices[1];
}
VertexOutputType left_add = calc_branch_delta(left, *left_target);
VertexOutputType right_add = calc_branch_delta(right, *right_target);
/* Change winding order to match the steepness of the edges. */
if (right_add.coord[0] < left_add.coord[0]) {
std::swap(left_add, right_add);
std::swap(left_target, right_target);
}
/* Calculate the adder for each x pixel. This is constant for the whole triangle. It is
* calculated at the midline to reduce edge cases. */
const InterfaceInnerType fragment_add = calc_fragment_delta(
sorted_vertices, left, right, left_add, right_add, left_target);
/* Perform a substep to make sure that the data of left and right match the data on the anchor
* point (center of the pixel). */
update_branches_to_min_anchor_line(*sorted_vertices[0], left, right, left_add, right_add);
/* Add rasterlines from min_rasterline_y to mid_rasterline_y. */
rasterize_loop(
min_rasterline_y, mid_rasterline_y, left, right, left_add, right_add, fragment_add);
/* Special case when mid vertex is on the same rasterline as the min vertex.
* In this case we need to split the right/left branches. Comparing the x coordinate to find
* the branch that should hold the mid vertex.
*/
if (min_rasterline_y == mid_rasterline_y) {
update_branch_for_flat_bottom(*sorted_vertices[0], *sorted_vertices[1], left, right);
}
update_branches_at_mid_anchor_line(
*sorted_vertices[1], *sorted_vertices[2], left, right, left_add, right_add);
/* Add rasterlines from mid_rasterline_y to max_rasterline_y. */
rasterize_loop(
mid_rasterline_y, max_rasterline_y, left, right, left_add, right_add, fragment_add);
}
/**
* Rasterize multiple sequential lines.
*
* Create and buffer rasterlines between #from_y and #to_y.
* The #left and #right branches are incremented for each rasterline.
*/
void rasterize_loop(int32_t from_y,
int32_t to_y,
VertexOutputType &left,
VertexOutputType &right,
const VertexOutputType &left_add,
const VertexOutputType &right_add,
const InterfaceInnerType &fragment_add)
{
for (int y = from_y; y < to_y; y++) {
if (y >= 0 && y < drawing_target_.get_height()) {
std::optional<RasterlineType> rasterline = clamped_rasterline(
y, left.coord[0], right.coord[0], left.data, fragment_add);
if (rasterline) {
append(*rasterline);
}
}
left += left_add;
right += right_add;
}
}
/**
* Update the left or right branch for when the mid vertex is on the same rasterline as the min
* vertex.
*/
void update_branch_for_flat_bottom(const VertexOutputType &min_vertex,
const VertexOutputType &mid_vertex,
VertexOutputType &r_left,
VertexOutputType &r_right) const
{
if (min_vertex.coord[0] > mid_vertex.coord[0]) {
r_left = mid_vertex;
}
else {
r_right = mid_vertex;
}
}
void update_branches_to_min_anchor_line(const VertexOutputType &min_vertex,
VertexOutputType &r_left,
VertexOutputType &r_right,
const VertexOutputType &left_add,
const VertexOutputType &right_add)
{
const float distance_to_minline_anchor_point = clamping_method.distance_to_scanline_anchor(
min_vertex.coord[1]);
r_left += left_add * distance_to_minline_anchor_point;
r_right += right_add * distance_to_minline_anchor_point;
}
void update_branches_at_mid_anchor_line(const VertexOutputType &mid_vertex,
const VertexOutputType &max_vertex,
VertexOutputType &r_left,
VertexOutputType &r_right,
VertexOutputType &r_left_add,
VertexOutputType &r_right_add)
{
/* When both distances are equal the left and right branches are the same. */
const float distance_to_midline_anchor_point = clamping_method.distance_to_scanline_anchor(
mid_vertex.coord[1]);
/* Use the x coordinate to identify which branch should be modified. */
const float distance_to_left = abs(r_left.coord[0] - mid_vertex.coord[0]);
const float distance_to_right = abs(r_right.coord[0] - mid_vertex.coord[0]);
if (distance_to_left < distance_to_right) {
r_left = mid_vertex;
r_left_add = calc_branch_delta(r_left, max_vertex);
r_left += r_left_add * distance_to_midline_anchor_point;
}
else {
r_right = mid_vertex;
r_right_add = calc_branch_delta(r_right, max_vertex);
r_right += r_right_add * distance_to_midline_anchor_point;
}
}
/**
* Calculate the delta adder between two sequential fragments in the x-direction.
*
* The fragment adder is constant and can be calculated once and reused for each rasterline of
* the same triangle. However, the calculation requires a distance that might not be known at
* the first scanline that is added. Therefore this method uses the mid scanline, where the
* x-distance is largest.
*
* \returns the adder that can be added the previous fragment data.
*/
InterfaceInnerType calc_fragment_delta(const std::array<VertexOutputType *, 3> &sorted_vertices,
const VertexOutputType &left,
const VertexOutputType &right,
const VertexOutputType &left_add,
const VertexOutputType &right_add,
const VertexOutputType *left_target)
{
const float distance_min_to_mid = sorted_vertices[1]->coord[1] - sorted_vertices[0]->coord[1];
if (distance_min_to_mid == 0.0f) {
VertexOutputType *mid_left = (sorted_vertices[1]->coord[0] < sorted_vertices[0]->coord[0]) ?
sorted_vertices[1] :
sorted_vertices[0];
VertexOutputType *mid_right = (sorted_vertices[1]->coord[0] < sorted_vertices[0]->coord[0]) ?
sorted_vertices[0] :
sorted_vertices[1];
return (mid_right->data - mid_left->data) / (mid_right->coord[0] - mid_left->coord[0]);
}
if (left_target == sorted_vertices[1]) {
VertexOutputType mid_right = right + right_add * distance_min_to_mid;
return (mid_right.data - sorted_vertices[1]->data) /
max_ff(mid_right.coord[0] - sorted_vertices[1]->coord[0], 1.0);
}
VertexOutputType mid_left = left + left_add * distance_min_to_mid;
return (sorted_vertices[1]->data - mid_left.data) /
max_ff(sorted_vertices[1]->coord[0] - mid_left.coord[0], 1.0);
}
/**
* Calculate the delta adder between two rasterlines for the given edge.
*/
VertexOutputType calc_branch_delta(const VertexOutputType &from, const VertexOutputType &to)
{
const float num_rasterlines = max_ff(to.coord[1] - from.coord[1], 1.0f);
VertexOutputType result = (to - from) / num_rasterlines;
return result;
}
std::array<VertexOutputType *, 3> order_triangle_vertices(
std::array<VertexOutputType, 3> &vertex_out)
{
std::array<VertexOutputType *, 3> sorted;
/* Find min v-coordinate and store at index 0. */
sorted[0] = &vertex_out[0];
for (int i = 1; i < 3; i++) {
if (vertex_out[i].coord[1] < sorted[0]->coord[1]) {
sorted[0] = &vertex_out[i];
}
}
/* Find max v-coordinate and store at index 2. */
sorted[2] = &vertex_out[0];
for (int i = 1; i < 3; i++) {
if (vertex_out[i].coord[1] > sorted[2]->coord[1]) {
sorted[2] = &vertex_out[i];
}
}
/* Exit when all 3 have the same v coordinate. Use the original input order. */
if (sorted[0]->coord[1] == sorted[2]->coord[1]) {
for (int i = 0; i < 3; i++) {
sorted[i] = &vertex_out[i];
}
BLI_assert(sorted[0] != sorted[1] && sorted[0] != sorted[2] && sorted[1] != sorted[2]);
return sorted;
}
/* Find mid v-coordinate and store at index 1. */
sorted[1] = &vertex_out[0];
for (int i = 0; i < 3; i++) {
if (sorted[0] != &vertex_out[i] && sorted[2] != &vertex_out[i]) {
sorted[1] = &vertex_out[i];
break;
}
}
BLI_assert(sorted[0] != sorted[1] && sorted[0] != sorted[2] && sorted[1] != sorted[2]);
BLI_assert(sorted[0]->coord[1] <= sorted[1]->coord[1]);
BLI_assert(sorted[0]->coord[1] < sorted[2]->coord[1]);
BLI_assert(sorted[1]->coord[1] <= sorted[2]->coord[1]);
return sorted;
}
std::optional<RasterlineType> clamped_rasterline(int32_t y,
float start_x,
float end_x,
InterfaceInnerType start_data,
const InterfaceInnerType fragment_add)
{
BLI_assert(y >= 0 && y < drawing_target_.get_height());
stats.increase_rasterlines();
if (start_x >= end_x) {
stats.increase_discarded_rasterlines();
return std::nullopt;
}
if (end_x < 0) {
stats.increase_discarded_rasterlines();
return std::nullopt;
}
if (start_x >= drawing_target_.get_width()) {
stats.increase_discarded_rasterlines();
return std::nullopt;
}
/* Whether the created rasterline was clamped; clamped rasterlines are added to the statistics. */
bool is_clamped = false;
/* Clamp the start_x to the first visible column anchor. */
int32_t start_xi = clamping_method.column_for(start_x);
float delta_to_anchor = clamping_method.distance_to_column_anchor(start_x);
if (start_xi < 0) {
delta_to_anchor += -start_xi;
start_xi = 0;
is_clamped = true;
}
start_data += fragment_add * delta_to_anchor;
uint32_t end_xi = clamping_method.column_for(end_x);
if (end_xi > drawing_target_.get_width()) {
end_xi = drawing_target_.get_width();
is_clamped = true;
}
if (is_clamped) {
stats.increase_clamped_rasterlines();
}
#ifdef DEBUG_PRINT
printf("%s y(%d) x(%d-%u)\n", __func__, y, start_xi, end_xi);
#endif
return RasterlineType(y, (uint32_t)start_xi, end_xi, start_data, fragment_add);
}
void render_rasterline(const RasterlineType &rasterline)
{
FragmentInputType data = rasterline.start_data;
PixelType *pixel_ptr = drawing_target_.get_pixel_ptr(rasterline.start_x, rasterline.y);
for (uint32_t x = rasterline.start_x; x < rasterline.end_x; x++) {
FragmentOutputType fragment_out;
fragment_shader_.fragment(data, &fragment_out);
blending_mode_.blend(pixel_ptr, fragment_out);
data += rasterline.fragment_add;
pixel_ptr += drawing_target_.get_pixel_stride();
}
stats.increase_drawn_fragments(rasterline.end_x - rasterline.start_x);
}
void append(const RasterlineType &rasterline)
{
rasterlines_.append(rasterline);
if (rasterlines_.is_full()) {
flush();
}
}
};
} // namespace blender::imbuf::rasterizer

View File

@@ -0,0 +1,41 @@
/* SPDX-License-Identifier: GPL-2.0-or-later
* Copyright 2022 Blender Foundation. All rights reserved. */
#pragma once
#include "BLI_math.h"
#include "BLI_math_vec_types.hh"
#include "BLI_sys_types.h"
namespace blender::imbuf::rasterizer {
/** How to integrate the result of a fragment shader into its drawing target. */
template<typename Source, typename Destination> class AbstractBlendMode {
public:
using SourceType = Source;
using DestinationType = Destination;
virtual void blend(Destination *dest, const Source &source) const = 0;
};
/**
* Copy the result of the fragment shader into float[4] without any modifications.
*/
class CopyBlendMode : public AbstractBlendMode<float4, float> {
public:
void blend(float *dest, const float4 &source) const override
{
copy_v4_v4(dest, source);
}
};
class AlphaBlendMode : public AbstractBlendMode<float4, float> {
public:
void blend(float *dest, const float4 &source) const override
{
interp_v3_v3v3(dest, dest, source, source[3]);
dest[3] = 1.0;
}
};
} // namespace blender::imbuf::rasterizer

View File

@@ -0,0 +1,82 @@
/* SPDX-License-Identifier: GPL-2.0-or-later
* Copyright 2022 Blender Foundation. All rights reserved. */
#pragma once
/** \file
* \ingroup imbuf
*
* Pixel clamping determines how the edges of geometry are clamped to pixels.
*/
#include "BLI_sys_types.h"
namespace blender::imbuf::rasterizer {
class AbstractPixelClampingMethod {
public:
virtual float distance_to_scanline_anchor(float y) const = 0;
virtual float distance_to_column_anchor(float y) const = 0;
virtual int scanline_for(float y) const = 0;
virtual int column_for(float x) const = 0;
};
class CenterPixelClampingMethod : public AbstractPixelClampingMethod {
public:
float distance_to_scanline_anchor(float y) const override
{
return distance_to_anchor(y);
}
float distance_to_column_anchor(float x) const override
{
return distance_to_anchor(x);
}
int scanline_for(float y) const override
{
return this->round(y);
}
int column_for(float x) const override
{
return this->round(x);
}
private:
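/* Distance from `value` to the nearest anchor at or above it, where anchors sit at pixel
 * centers (x.5). For example, 3.2 is 0.3 below the anchor 3.5 and 3.7 is 0.8 below the
 * anchor 4.5. */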
float distance_to_anchor(float value) const
{
float fract = to_fract(value);
float result;
if (fract <= 0.5f) {
result = 0.5f - fract;
}
else {
result = 1.5f - fract;
}
BLI_assert(result >= 0.0f);
BLI_assert(result < 1.0f);
return result;
}
int round(float value) const
{
/* Cannot use std::round as it rounds away from 0. */
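/* For example 2.5 rounds to 2 here, while std::round(2.5) gives 3. */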
float fract = to_fract(value);
int result;
if (fract > 0.5f) {
result = ceilf(value);
}
else {
result = floorf(value);
}
return result;
}
float to_fract(float value) const
{
return value - floor(value);
}
};
} // namespace blender::imbuf::rasterizer

View File

@@ -0,0 +1,89 @@
/* SPDX-License-Identifier: GPL-2.0-or-later
* Copyright 2022 Blender Foundation. All rights reserved. */
#pragma once
#include "BLI_sys_types.h"
namespace blender::imbuf::rasterizer {
class AbstractStats {
public:
virtual void increase_triangles() = 0;
virtual void increase_discarded_triangles() = 0;
virtual void increase_flushes() = 0;
virtual void increase_rasterlines() = 0;
virtual void increase_clamped_rasterlines() = 0;
virtual void increase_discarded_rasterlines() = 0;
virtual void increase_drawn_fragments(uint64_t fragments_drawn) = 0;
};
class Stats : public AbstractStats {
public:
int64_t triangles = 0;
int64_t discarded_triangles = 0;
int64_t flushes = 0;
int64_t rasterlines = 0;
int64_t clamped_rasterlines = 0;
int64_t discarded_rasterlines = 0;
int64_t drawn_fragments = 0;
void increase_triangles() override
{
triangles += 1;
}
void increase_discarded_triangles() override
{
discarded_triangles += 1;
}
void increase_flushes() override
{
flushes += 1;
}
void increase_rasterlines() override
{
rasterlines += 1;
}
void increase_clamped_rasterlines() override
{
clamped_rasterlines += 1;
}
void increase_discarded_rasterlines() override
{
discarded_rasterlines += 1;
}
void increase_drawn_fragments(uint64_t fragments_drawn) override
{
drawn_fragments += fragments_drawn;
}
void reset()
{
triangles = 0;
discarded_triangles = 0;
flushes = 0;
rasterlines = 0;
clamped_rasterlines = 0;
discarded_rasterlines = 0;
drawn_fragments = 0;
}
};
class NullStats : public AbstractStats {
public:
void increase_triangles() override{};
void increase_discarded_triangles() override{};
void increase_flushes() override{};
void increase_rasterlines() override{};
void increase_clamped_rasterlines() override{};
void increase_discarded_rasterlines() override{};
void increase_drawn_fragments(uint64_t UNUSED(fragments_drawn)) override
{
}
};
} // namespace blender::imbuf::rasterizer

View File

@@ -0,0 +1,75 @@
/* SPDX-License-Identifier: GPL-2.0-or-later
* Copyright 2022 Blender Foundation. All rights reserved. */
#pragma once
/** \file
* \ingroup imbuf
*
* Rasterizer drawing target.
*/
#include "BLI_sys_types.h"
namespace blender::imbuf::rasterizer {
/**
* An abstract implementation of a drawing target. Makes it possible to switch to render
* targets other than ImBuf types.
*/
template<typename Buffer, typename Pixel = float> class AbstractDrawingTarget {
public:
using BufferType = Buffer;
using PixelType = Pixel;
virtual uint64_t get_width() const = 0;
virtual uint64_t get_height() const = 0;
virtual PixelType *get_pixel_ptr(uint64_t x, uint64_t y) = 0;
virtual int64_t get_pixel_stride() const = 0;
virtual bool has_active_target() const = 0;
virtual void activate(BufferType *instance) = 0;
virtual void deactivate() = 0;
};
class ImageBufferDrawingTarget : public AbstractDrawingTarget<ImBuf, float> {
private:
ImBuf *image_buffer_ = nullptr;
public:
bool has_active_target() const override
{
return image_buffer_ != nullptr;
}
void activate(ImBuf *image_buffer) override
{
image_buffer_ = image_buffer;
}
void deactivate() override
{
image_buffer_ = nullptr;
}
uint64_t get_width() const override
{
return image_buffer_->x;
};
uint64_t get_height() const override
{
return image_buffer_->y;
}
float *get_pixel_ptr(uint64_t x, uint64_t y) override
{
BLI_assert(has_active_target());
uint64_t pixel_index = y * image_buffer_->x + x;
return &image_buffer_->rect_float[pixel_index * 4];
}
int64_t get_pixel_stride() const override
{
return 4;
}
};
} // namespace blender::imbuf::rasterizer

View File

@@ -0,0 +1,429 @@
/* SPDX-License-Identifier: Apache-2.0 */
#include "testing/testing.h"
#include "BLI_float4x4.hh"
#include "BLI_path_util.h"
#include "IMB_rasterizer.hh"
namespace blender::imbuf::rasterizer::tests {
const uint32_t IMBUF_SIZE = 256;
struct VertexInput {
float2 uv;
float value;
VertexInput(float2 uv, float value) : uv(uv), value(value)
{
}
};
class VertexShader : public AbstractVertexShader<VertexInput, float4> {
public:
float2 image_size;
float4x4 vp_mat;
void vertex(const VertexInputType &input, VertexOutputType *r_output) override
{
float2 coord = float2(vp_mat * float3(input.uv[0], input.uv[1], 0.0));
r_output->coord = coord * image_size;
r_output->data = float4(input.value, input.value, input.value, 1.0);
}
};
class FragmentShader : public AbstractFragmentShader<float4, float4> {
public:
void fragment(const FragmentInputType &input, FragmentOutputType *r_output) override
{
*r_output = input;
}
};
using RasterizerType = Rasterizer<VertexShader,
FragmentShader,
CopyBlendMode,
ImageBufferDrawingTarget,
DefaultRasterlinesBufferSize,
Stats>;
/* Draw 2 triangles that fill the entire image buffer and check that each pixel is touched. */
TEST(imbuf_rasterizer, draw_triangle_edge_alignment_quality)
{
ImBuf image_buffer;
IMB_initImBuf(&image_buffer, IMBUF_SIZE, IMBUF_SIZE, 32, IB_rectfloat);
RasterizerType rasterizer;
rasterizer.activate_drawing_target(&image_buffer);
VertexShader &vertex_shader = rasterizer.vertex_shader();
vertex_shader.image_size = float2(image_buffer.x, image_buffer.y);
float clear_color[4] = {0.0f, 0.0f, 0.0f, 0.0f};
float3 location(0.5, 0.5, 0.0);
float3 rotation(0.0, 0.0, 0.0);
float3 scale(1.0, 1.0, 1.0);
for (int i = 0; i < 1000; i++) {
rasterizer.stats.reset();
IMB_rectfill(&image_buffer, clear_color);
rotation[2] = (i / 1000.0) * M_PI * 2;
vertex_shader.vp_mat = float4x4::from_loc_eul_scale(location, rotation, scale);
rasterizer.draw_triangle(VertexInput(float2(-1.0, -1.0), 0.2),
VertexInput(float2(-1.0, 1.0), 0.5),
VertexInput(float2(1.0, -1.0), 1.0));
rasterizer.draw_triangle(VertexInput(float2(1.0, 1.0), 0.2),
VertexInput(float2(-1.0, 1.0), 0.5),
VertexInput(float2(1.0, -1.0), 1.0));
rasterizer.flush();
/* Check if each pixel has been drawn exactly once. */
EXPECT_EQ(rasterizer.stats.drawn_fragments, IMBUF_SIZE * IMBUF_SIZE) << i;
#ifdef DEBUG_SAVE
char file_name[FILE_MAX];
BLI_path_sequence_encode(file_name, "/tmp/test_", ".png", 4, i);
IMB_saveiff(&image_buffer, file_name, IB_rectfloat);
imb_freerectImBuf(&image_buffer);
#endif
}
imb_freerectImbuf_all(&image_buffer);
}
/**
* This test case renders 3 images that should have the same pixel coverage, but each uses a
* different triangle edge.
*
* The results should be identical.
*/
TEST(imbuf_rasterizer, edge_pixel_clamping)
{
float clear_color[4] = {0.0f, 0.0f, 0.0f, 0.0f};
ImBuf image_buffer_a;
ImBuf image_buffer_b;
ImBuf image_buffer_c;
int fragments_drawn_a;
int fragments_drawn_b;
int fragments_drawn_c;
RasterizerType rasterizer;
{
IMB_initImBuf(&image_buffer_a, IMBUF_SIZE, IMBUF_SIZE, 32, IB_rectfloat);
rasterizer.stats.reset();
rasterizer.activate_drawing_target(&image_buffer_a);
VertexShader &vertex_shader = rasterizer.vertex_shader();
vertex_shader.image_size = float2(image_buffer_a.x, image_buffer_a.y);
vertex_shader.vp_mat = float4x4::identity();
IMB_rectfill(&image_buffer_a, clear_color);
rasterizer.draw_triangle(VertexInput(float2(0.2, -0.2), 1.0),
VertexInput(float2(1.2, 1.2), 1.0),
VertexInput(float2(1.5, -0.3), 1.0));
rasterizer.flush();
fragments_drawn_a = rasterizer.stats.drawn_fragments;
}
{
IMB_initImBuf(&image_buffer_b, IMBUF_SIZE, IMBUF_SIZE, 32, IB_rectfloat);
rasterizer.stats.reset();
rasterizer.activate_drawing_target(&image_buffer_b);
VertexShader &vertex_shader = rasterizer.vertex_shader();
vertex_shader.image_size = float2(image_buffer_b.x, image_buffer_b.y);
vertex_shader.vp_mat = float4x4::identity();
IMB_rectfill(&image_buffer_b, clear_color);
rasterizer.draw_triangle(VertexInput(float2(0.2, -0.2), 1.0),
VertexInput(float2(1.2, 1.2), 1.0),
VertexInput(float2(1.5, -0.3), 1.0));
rasterizer.flush();
fragments_drawn_b = rasterizer.stats.drawn_fragments;
}
{
IMB_initImBuf(&image_buffer_c, IMBUF_SIZE, IMBUF_SIZE, 32, IB_rectfloat);
rasterizer.stats.reset();
rasterizer.activate_drawing_target(&image_buffer_c);
VertexShader &vertex_shader = rasterizer.vertex_shader();
vertex_shader.image_size = float2(image_buffer_c.x, image_buffer_c.y);
vertex_shader.vp_mat = float4x4::identity();
IMB_rectfill(&image_buffer_c, clear_color);
rasterizer.draw_triangle(VertexInput(float2(0.2, -0.2), 1.0),
VertexInput(float2(1.2, 1.2), 1.0),
VertexInput(float2(10.0, 1.3), 1.0));
rasterizer.flush();
fragments_drawn_c = rasterizer.stats.drawn_fragments;
}
EXPECT_EQ(fragments_drawn_a, fragments_drawn_b);
EXPECT_EQ(memcmp(image_buffer_a.rect_float,
image_buffer_b.rect_float,
sizeof(float) * 4 * IMBUF_SIZE * IMBUF_SIZE),
0);
EXPECT_EQ(fragments_drawn_a, fragments_drawn_c);
EXPECT_EQ(memcmp(image_buffer_a.rect_float,
image_buffer_c.rect_float,
sizeof(float) * 4 * IMBUF_SIZE * IMBUF_SIZE),
0);
EXPECT_EQ(fragments_drawn_b, fragments_drawn_c);
EXPECT_EQ(memcmp(image_buffer_b.rect_float,
image_buffer_c.rect_float,
sizeof(float) * 4 * IMBUF_SIZE * IMBUF_SIZE),
0);
imb_freerectImbuf_all(&image_buffer_a);
imb_freerectImbuf_all(&image_buffer_b);
imb_freerectImbuf_all(&image_buffer_c);
}
/** Use one rasterizer and switch between multiple drawing targets. */
TEST(imbuf_rasterizer, switch_drawing_target)
{
float clear_color[4] = {0.0f, 0.0f, 0.0f, 0.0f};
ImBuf image_buffer_a;
ImBuf image_buffer_b;
ImBuf image_buffer_c;
RasterizerType rasterizer;
IMB_initImBuf(&image_buffer_a, IMBUF_SIZE, IMBUF_SIZE, 32, IB_rectfloat);
IMB_rectfill(&image_buffer_a, clear_color);
VertexShader &vertex_shader = rasterizer.vertex_shader();
vertex_shader.image_size = float2(image_buffer_a.x, image_buffer_a.y);
vertex_shader.vp_mat = float4x4::identity();
rasterizer.activate_drawing_target(&image_buffer_a);
rasterizer.draw_triangle(VertexInput(float2(0.2, -0.2), 1.0),
VertexInput(float2(1.2, 1.2), 1.0),
VertexInput(float2(1.5, -0.3), 1.0));
IMB_initImBuf(&image_buffer_b, IMBUF_SIZE, IMBUF_SIZE, 32, IB_rectfloat);
IMB_rectfill(&image_buffer_b, clear_color);
rasterizer.activate_drawing_target(&image_buffer_b);
rasterizer.draw_triangle(VertexInput(float2(0.2, -0.2), 1.0),
VertexInput(float2(1.2, 1.2), 1.0),
VertexInput(float2(1.5, -0.3), 1.0));
IMB_initImBuf(&image_buffer_c, IMBUF_SIZE, IMBUF_SIZE, 32, IB_rectfloat);
IMB_rectfill(&image_buffer_c, clear_color);
rasterizer.activate_drawing_target(&image_buffer_c);
rasterizer.draw_triangle(VertexInput(float2(0.2, -0.2), 1.0),
VertexInput(float2(1.2, 1.2), 1.0),
VertexInput(float2(10.0, 1.3), 1.0));
rasterizer.flush();
EXPECT_EQ(memcmp(image_buffer_a.rect_float,
image_buffer_b.rect_float,
sizeof(float) * 4 * IMBUF_SIZE * IMBUF_SIZE),
0);
EXPECT_EQ(memcmp(image_buffer_a.rect_float,
image_buffer_c.rect_float,
sizeof(float) * 4 * IMBUF_SIZE * IMBUF_SIZE),
0);
EXPECT_EQ(memcmp(image_buffer_b.rect_float,
image_buffer_c.rect_float,
sizeof(float) * 4 * IMBUF_SIZE * IMBUF_SIZE),
0);
imb_freerectImbuf_all(&image_buffer_a);
imb_freerectImbuf_all(&image_buffer_b);
imb_freerectImbuf_all(&image_buffer_c);
}
TEST(imbuf_rasterizer, center_pixel_clamper_scanline_for)
{
CenterPixelClampingMethod clamper;
EXPECT_EQ(clamper.scanline_for(-2.0f), -2);
EXPECT_EQ(clamper.scanline_for(-1.9f), -2);
EXPECT_EQ(clamper.scanline_for(-1.8f), -2);
EXPECT_EQ(clamper.scanline_for(-1.7f), -2);
EXPECT_EQ(clamper.scanline_for(-1.6f), -2);
EXPECT_EQ(clamper.scanline_for(-1.5f), -2);
EXPECT_EQ(clamper.scanline_for(-1.4f), -1);
EXPECT_EQ(clamper.scanline_for(-1.3f), -1);
EXPECT_EQ(clamper.scanline_for(-1.2f), -1);
EXPECT_EQ(clamper.scanline_for(-1.1f), -1);
EXPECT_EQ(clamper.scanline_for(-1.0f), -1);
EXPECT_EQ(clamper.scanline_for(-0.9f), -1);
EXPECT_EQ(clamper.scanline_for(-0.8f), -1);
EXPECT_EQ(clamper.scanline_for(-0.7f), -1);
EXPECT_EQ(clamper.scanline_for(-0.6f), -1);
EXPECT_EQ(clamper.scanline_for(-0.5f), -1);
EXPECT_EQ(clamper.scanline_for(-0.4f), 0);
EXPECT_EQ(clamper.scanline_for(-0.3f), 0);
EXPECT_EQ(clamper.scanline_for(-0.2f), 0);
EXPECT_EQ(clamper.scanline_for(-0.1f), 0);
EXPECT_EQ(clamper.scanline_for(0.0f), 0);
EXPECT_EQ(clamper.scanline_for(0.1f), 0);
EXPECT_EQ(clamper.scanline_for(0.2f), 0);
EXPECT_EQ(clamper.scanline_for(0.3f), 0);
EXPECT_EQ(clamper.scanline_for(0.4f), 0);
EXPECT_EQ(clamper.scanline_for(0.5f), 0);
EXPECT_EQ(clamper.scanline_for(0.6f), 1);
EXPECT_EQ(clamper.scanline_for(0.7f), 1);
EXPECT_EQ(clamper.scanline_for(0.8f), 1);
EXPECT_EQ(clamper.scanline_for(0.9f), 1);
EXPECT_EQ(clamper.scanline_for(1.0f), 1);
EXPECT_EQ(clamper.scanline_for(1.1f), 1);
EXPECT_EQ(clamper.scanline_for(1.2f), 1);
EXPECT_EQ(clamper.scanline_for(1.3f), 1);
EXPECT_EQ(clamper.scanline_for(1.4f), 1);
EXPECT_EQ(clamper.scanline_for(1.5f), 1);
EXPECT_EQ(clamper.scanline_for(1.6f), 2);
EXPECT_EQ(clamper.scanline_for(1.7f), 2);
EXPECT_EQ(clamper.scanline_for(1.8f), 2);
EXPECT_EQ(clamper.scanline_for(1.9f), 2);
EXPECT_EQ(clamper.scanline_for(2.0f), 2);
}
TEST(imbuf_rasterizer, center_pixel_clamper_column_for)
{
CenterPixelClampingMethod clamper;
EXPECT_EQ(clamper.column_for(-2.0f), -2);
EXPECT_EQ(clamper.column_for(-1.9f), -2);
EXPECT_EQ(clamper.column_for(-1.8f), -2);
EXPECT_EQ(clamper.column_for(-1.7f), -2);
EXPECT_EQ(clamper.column_for(-1.6f), -2);
EXPECT_EQ(clamper.column_for(-1.5f), -2);
EXPECT_EQ(clamper.column_for(-1.4f), -1);
EXPECT_EQ(clamper.column_for(-1.3f), -1);
EXPECT_EQ(clamper.column_for(-1.2f), -1);
EXPECT_EQ(clamper.column_for(-1.1f), -1);
EXPECT_EQ(clamper.column_for(-1.0f), -1);
EXPECT_EQ(clamper.column_for(-0.9f), -1);
EXPECT_EQ(clamper.column_for(-0.8f), -1);
EXPECT_EQ(clamper.column_for(-0.7f), -1);
EXPECT_EQ(clamper.column_for(-0.6f), -1);
EXPECT_EQ(clamper.column_for(-0.5f), -1);
EXPECT_EQ(clamper.column_for(-0.4f), 0);
EXPECT_EQ(clamper.column_for(-0.3f), 0);
EXPECT_EQ(clamper.column_for(-0.2f), 0);
EXPECT_EQ(clamper.column_for(-0.1f), 0);
EXPECT_EQ(clamper.column_for(0.0f), 0);
EXPECT_EQ(clamper.column_for(0.1f), 0);
EXPECT_EQ(clamper.column_for(0.2f), 0);
EXPECT_EQ(clamper.column_for(0.3f), 0);
EXPECT_EQ(clamper.column_for(0.4f), 0);
EXPECT_EQ(clamper.column_for(0.5f), 0);
EXPECT_EQ(clamper.column_for(0.6f), 1);
EXPECT_EQ(clamper.column_for(0.7f), 1);
EXPECT_EQ(clamper.column_for(0.8f), 1);
EXPECT_EQ(clamper.column_for(0.9f), 1);
EXPECT_EQ(clamper.column_for(1.0f), 1);
EXPECT_EQ(clamper.column_for(1.1f), 1);
EXPECT_EQ(clamper.column_for(1.2f), 1);
EXPECT_EQ(clamper.column_for(1.3f), 1);
EXPECT_EQ(clamper.column_for(1.4f), 1);
EXPECT_EQ(clamper.column_for(1.5f), 1);
EXPECT_EQ(clamper.column_for(1.6f), 2);
EXPECT_EQ(clamper.column_for(1.7f), 2);
EXPECT_EQ(clamper.column_for(1.8f), 2);
EXPECT_EQ(clamper.column_for(1.9f), 2);
EXPECT_EQ(clamper.column_for(2.0f), 2);
}
TEST(imbuf_rasterizer, center_pixel_clamper_distance_to_scanline_anchorpoint)
{
CenterPixelClampingMethod clamper;
EXPECT_FLOAT_EQ(clamper.distance_to_scanline_anchor(-2.0f), 0.5f);
EXPECT_FLOAT_EQ(clamper.distance_to_scanline_anchor(-1.9f), 0.4f);
EXPECT_FLOAT_EQ(clamper.distance_to_scanline_anchor(-1.8f), 0.3f);
EXPECT_FLOAT_EQ(clamper.distance_to_scanline_anchor(-1.7f), 0.2f);
EXPECT_FLOAT_EQ(clamper.distance_to_scanline_anchor(-1.6f), 0.1f);
EXPECT_FLOAT_EQ(clamper.distance_to_scanline_anchor(-1.5f), 0.0f);
EXPECT_FLOAT_EQ(clamper.distance_to_scanline_anchor(-1.4f), 0.9f);
EXPECT_FLOAT_EQ(clamper.distance_to_scanline_anchor(-1.3f), 0.8f);
EXPECT_FLOAT_EQ(clamper.distance_to_scanline_anchor(-1.2f), 0.7f);
EXPECT_FLOAT_EQ(clamper.distance_to_scanline_anchor(-1.1f), 0.6f);
EXPECT_FLOAT_EQ(clamper.distance_to_scanline_anchor(-1.0f), 0.5f);
EXPECT_FLOAT_EQ(clamper.distance_to_scanline_anchor(-0.9f), 0.4f);
EXPECT_FLOAT_EQ(clamper.distance_to_scanline_anchor(-0.8f), 0.3f);
EXPECT_FLOAT_EQ(clamper.distance_to_scanline_anchor(-0.7f), 0.2f);
EXPECT_FLOAT_EQ(clamper.distance_to_scanline_anchor(-0.6f), 0.1f);
EXPECT_FLOAT_EQ(clamper.distance_to_scanline_anchor(-0.5f), 0.0f);
EXPECT_FLOAT_EQ(clamper.distance_to_scanline_anchor(-0.4f), 0.9f);
EXPECT_FLOAT_EQ(clamper.distance_to_scanline_anchor(-0.3f), 0.8f);
EXPECT_FLOAT_EQ(clamper.distance_to_scanline_anchor(-0.2f), 0.7f);
EXPECT_FLOAT_EQ(clamper.distance_to_scanline_anchor(-0.1f), 0.6f);
EXPECT_FLOAT_EQ(clamper.distance_to_scanline_anchor(0.0f), 0.5f);
EXPECT_FLOAT_EQ(clamper.distance_to_scanline_anchor(0.1f), 0.4f);
EXPECT_FLOAT_EQ(clamper.distance_to_scanline_anchor(0.2f), 0.3f);
EXPECT_FLOAT_EQ(clamper.distance_to_scanline_anchor(0.3f), 0.2f);
EXPECT_FLOAT_EQ(clamper.distance_to_scanline_anchor(0.4f), 0.1f);
EXPECT_FLOAT_EQ(clamper.distance_to_scanline_anchor(0.5f), 0.0f);
EXPECT_FLOAT_EQ(clamper.distance_to_scanline_anchor(0.6f), 0.9f);
EXPECT_FLOAT_EQ(clamper.distance_to_scanline_anchor(0.7f), 0.8f);
EXPECT_FLOAT_EQ(clamper.distance_to_scanline_anchor(0.8f), 0.7f);
EXPECT_FLOAT_EQ(clamper.distance_to_scanline_anchor(0.9f), 0.6f);
EXPECT_FLOAT_EQ(clamper.distance_to_scanline_anchor(1.0f), 0.5f);
EXPECT_FLOAT_EQ(clamper.distance_to_scanline_anchor(1.1f), 0.4f);
EXPECT_FLOAT_EQ(clamper.distance_to_scanline_anchor(1.2f), 0.3f);
EXPECT_FLOAT_EQ(clamper.distance_to_scanline_anchor(1.3f), 0.2f);
EXPECT_FLOAT_EQ(clamper.distance_to_scanline_anchor(1.4f), 0.1f);
EXPECT_FLOAT_EQ(clamper.distance_to_scanline_anchor(1.5f), 0.0f);
EXPECT_FLOAT_EQ(clamper.distance_to_scanline_anchor(1.6f), 0.9f);
EXPECT_FLOAT_EQ(clamper.distance_to_scanline_anchor(1.7f), 0.8f);
EXPECT_FLOAT_EQ(clamper.distance_to_scanline_anchor(1.8f), 0.7f);
EXPECT_FLOAT_EQ(clamper.distance_to_scanline_anchor(1.9f), 0.6f);
EXPECT_FLOAT_EQ(clamper.distance_to_scanline_anchor(2.0f), 0.5f);
}
TEST(imbuf_rasterizer, center_pixel_clamper_distance_to_column_anchorpoint)
{
CenterPixelClampingMethod clamper;
EXPECT_FLOAT_EQ(clamper.distance_to_column_anchor(-2.0f), 0.5f);
EXPECT_FLOAT_EQ(clamper.distance_to_column_anchor(-1.9f), 0.4f);
EXPECT_FLOAT_EQ(clamper.distance_to_column_anchor(-1.8f), 0.3f);
EXPECT_FLOAT_EQ(clamper.distance_to_column_anchor(-1.7f), 0.2f);
EXPECT_FLOAT_EQ(clamper.distance_to_column_anchor(-1.6f), 0.1f);
EXPECT_FLOAT_EQ(clamper.distance_to_column_anchor(-1.5f), 0.0f);
EXPECT_FLOAT_EQ(clamper.distance_to_column_anchor(-1.4f), 0.9f);
EXPECT_FLOAT_EQ(clamper.distance_to_column_anchor(-1.3f), 0.8f);
EXPECT_FLOAT_EQ(clamper.distance_to_column_anchor(-1.2f), 0.7f);
EXPECT_FLOAT_EQ(clamper.distance_to_column_anchor(-1.1f), 0.6f);
EXPECT_FLOAT_EQ(clamper.distance_to_column_anchor(-1.0f), 0.5f);
EXPECT_FLOAT_EQ(clamper.distance_to_column_anchor(-0.9f), 0.4f);
EXPECT_FLOAT_EQ(clamper.distance_to_column_anchor(-0.8f), 0.3f);
EXPECT_FLOAT_EQ(clamper.distance_to_column_anchor(-0.7f), 0.2f);
EXPECT_FLOAT_EQ(clamper.distance_to_column_anchor(-0.6f), 0.1f);
EXPECT_FLOAT_EQ(clamper.distance_to_column_anchor(-0.5f), 0.0f);
EXPECT_FLOAT_EQ(clamper.distance_to_column_anchor(-0.4f), 0.9f);
EXPECT_FLOAT_EQ(clamper.distance_to_column_anchor(-0.3f), 0.8f);
EXPECT_FLOAT_EQ(clamper.distance_to_column_anchor(-0.2f), 0.7f);
EXPECT_FLOAT_EQ(clamper.distance_to_column_anchor(-0.1f), 0.6f);
EXPECT_FLOAT_EQ(clamper.distance_to_column_anchor(0.0f), 0.5f);
EXPECT_FLOAT_EQ(clamper.distance_to_column_anchor(0.1f), 0.4f);
EXPECT_FLOAT_EQ(clamper.distance_to_column_anchor(0.2f), 0.3f);
EXPECT_FLOAT_EQ(clamper.distance_to_column_anchor(0.3f), 0.2f);
EXPECT_FLOAT_EQ(clamper.distance_to_column_anchor(0.4f), 0.1f);
EXPECT_FLOAT_EQ(clamper.distance_to_column_anchor(0.5f), 0.0f);
EXPECT_FLOAT_EQ(clamper.distance_to_column_anchor(0.6f), 0.9f);
EXPECT_FLOAT_EQ(clamper.distance_to_column_anchor(0.7f), 0.8f);
EXPECT_FLOAT_EQ(clamper.distance_to_column_anchor(0.8f), 0.7f);
EXPECT_FLOAT_EQ(clamper.distance_to_column_anchor(0.9f), 0.6f);
EXPECT_FLOAT_EQ(clamper.distance_to_column_anchor(1.0f), 0.5f);
EXPECT_FLOAT_EQ(clamper.distance_to_column_anchor(1.1f), 0.4f);
EXPECT_FLOAT_EQ(clamper.distance_to_column_anchor(1.2f), 0.3f);
EXPECT_FLOAT_EQ(clamper.distance_to_column_anchor(1.3f), 0.2f);
EXPECT_FLOAT_EQ(clamper.distance_to_column_anchor(1.4f), 0.1f);
EXPECT_FLOAT_EQ(clamper.distance_to_column_anchor(1.5f), 0.0f);
EXPECT_FLOAT_EQ(clamper.distance_to_column_anchor(1.6f), 0.9f);
EXPECT_FLOAT_EQ(clamper.distance_to_column_anchor(1.7f), 0.8f);
EXPECT_FLOAT_EQ(clamper.distance_to_column_anchor(1.8f), 0.7f);
EXPECT_FLOAT_EQ(clamper.distance_to_column_anchor(1.9f), 0.6f);
EXPECT_FLOAT_EQ(clamper.distance_to_column_anchor(2.0f), 0.5f);
}
} // namespace blender::imbuf::rasterizer::tests
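
All four clamping tests above are consistent with one rounding rule: a coordinate maps to the scanline or column whose center is nearest, with exact .5 ties resolved downward, and the anchor distance is measured from half a pixel below the coordinate up to that index. The standalone sketch below reproduces those expectations; it is only an illustration, not the actual CenterPixelClampingMethod from IMB_rasterizer.hh, which may be implemented differently.

/* Standalone sketch of a clamping rule matching the expectations above (hypothetical type). */
#include <cmath>
#include <cstdint>

struct CenterPixelClampSketch {
  int64_t scanline_for(float y) const
  {
    /* E.g. 0.5 -> 0, 0.6 -> 1, -1.5 -> -2. */
    return int64_t(ceilf(y - 0.5f));
  }
  int64_t column_for(float x) const
  {
    return int64_t(ceilf(x - 0.5f));
  }
  float distance_to_scanline_anchor(float y) const
  {
    /* E.g. 0.0 -> 0.5, 0.5 -> 0.0, 0.6 -> 0.9. */
    return float(scanline_for(y)) - (y - 0.5f);
  }
  float distance_to_column_anchor(float x) const
  {
    return float(column_for(x)) - (x - 0.5f);
  }
};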


@@ -446,6 +446,7 @@ typedef enum eBrushSculptTool {
SCULPT_TOOL_BOUNDARY = 30,
SCULPT_TOOL_DISPLACEMENT_ERASER = 31,
SCULPT_TOOL_DISPLACEMENT_SMEAR = 32,
SCULPT_TOOL_TEXTURE_PAINT = 33,
} eBrushSculptTool;
/* Brush.uv_sculpt_tool */
@@ -499,6 +500,7 @@ typedef enum eBrushCurvesSculptTool {
SCULPT_TOOL_DRAW_FACE_SETS, \
SCULPT_TOOL_PAINT, \
SCULPT_TOOL_SMEAR, \
SCULPT_TOOL_TEXTURE_PAINT, \
\
/* These brushes could handle dynamic topology, \
* but user feedback indicates it's better not to */ \


@@ -124,6 +124,8 @@ const EnumPropertyItem rna_enum_brush_sculpt_tool_items[] = {
{SCULPT_TOOL_PAINT, "PAINT", ICON_BRUSH_SCULPT_DRAW, "Paint", ""},
{SCULPT_TOOL_SMEAR, "SMEAR", ICON_BRUSH_SCULPT_DRAW, "Smear", ""},
{SCULPT_TOOL_DRAW_FACE_SETS, "DRAW_FACE_SETS", ICON_BRUSH_MASK, "Draw Face Sets", ""},
{SCULPT_TOOL_TEXTURE_PAINT, "TEXTURE_PAINT", ICON_BRUSH_MASK, "Draw on Images", ""},
{0, NULL, 0, NULL, NULL},
};
/* clang-format on */
@@ -465,7 +467,7 @@ static bool rna_BrushCapabilitiesSculpt_has_sculpt_plane_get(PointerRNA *ptr)
static bool rna_BrushCapabilitiesSculpt_has_color_get(PointerRNA *ptr)
{
Brush *br = (Brush *)ptr->data;
return ELEM(br->sculpt_tool, SCULPT_TOOL_PAINT);
return ELEM(br->sculpt_tool, SCULPT_TOOL_PAINT, SCULPT_TOOL_TEXTURE_PAINT);
}
static bool rna_BrushCapabilitiesSculpt_has_secondary_color_get(PointerRNA *ptr)