
Compare commits


36 Commits

Author SHA1 Message Date
d8f46a4452 EEVEE: Motion Blur: Use CFRA instead of depsgraph ctime
This fixes time remapping issues.
2020-06-12 15:17:15 +02:00
a737ca11e1 EEVEE: Motion Blur: Fix issues with animated objects visibility 2020-06-12 14:21:17 +02:00
92520f1791 Merge branch 'master' into eevee-motionblur-object 2020-06-12 14:07:13 +02:00
6a1a894df1 EEVEE: Motion Blur: Support Duplis and assume they are always moving 2020-06-12 14:04:17 +02:00
c98f92f998 EEVEE: Motion Blur: Auto detect animation and deformation on objects
This removes the overhead of deformation and normal motion blur
for all static objects.
2020-06-11 22:27:29 +02:00
4f9d1a5765 EEVEE: Motion Blur: Use less VRAM and improve larger blur
Larger blurs now use a more stable approach: we repeat the expand process
instead of making the tiles bigger.
2020-06-11 20:37:51 +02:00
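The repeated expand process bounds how far a tile's dominant velocity has to travel. A minimal sketch of how many expansion passes a given maximum blur radius needs, assuming the 32-pixel tile size this branch introduces (`EEVEE_VELOCITY_TILE_SIZE`); the helper name is illustrative, but the expression mirrors the one used later in `EEVEE_motion_blur_cache_init`:

```c
#include <assert.h>

#define EEVEE_VELOCITY_TILE_SIZE 32

/* Each expand pass spreads a tile's dominant velocity to its direct
 * neighbors, so a blur of `max_blur` pixels needs enough passes to
 * reach ceil(max_blur / TILE_SIZE) tiles away. */
static int velocity_expand_steps(int max_blur)
{
  int clamped = (max_blur > 1) ? max_blur - 1 : 0;
  return 1 + clamped / EEVEE_VELOCITY_TILE_SIZE;
}
```

Keeping the tile size fixed and repeating the expansion keeps VRAM usage constant regardless of the blur radius.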
33359237fe EEVEE: Motion Blur: Expose max blur and depth scale 2020-06-11 16:37:11 +02:00
8ac06377a5 EEVEE: Motion Blur: Fix camera near/far values uniforms 2020-06-11 15:05:21 +02:00
a0c947ed4c EEVEE: Motion Blur: Replace Wang Hash Noise function by blue noise 2020-06-11 14:27:20 +02:00
5fab2ce500 EEVEE: Motion Blur: Randomize velocity tile boundaries to avoid artifacts
This follows the paper's implementation and gives nice results on complex
motion.
2020-06-10 19:43:05 +02:00
e7744caf64 EEVEE: Motion Blur: Use Tile max dominant velocity reduction pre-passes
This makes the blurring post-process quicker.
2020-06-10 19:41:58 +02:00
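The tile-max pre-pass reduces the full-resolution velocity buffer to one dominant velocity per tile before blurring. A 1D sketch of the reduction, assuming "dominant" means largest magnitude (the hypothetical `tile_dominant_velocity` below is illustrative, not the shader code):

```c
#include <assert.h>
#include <math.h>

/* Keep the velocity with the largest magnitude in the tile; the sign
 * is preserved so the blur direction survives the reduction. */
static float tile_dominant_velocity(const float *vel, int n)
{
  float best = 0.0f;
  for (int i = 0; i < n; i++) {
    if (fabsf(vel[i]) > fabsf(best)) {
      best = vel[i];
    }
  }
  return best;
}
```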
00f8cfd4ac Merge branch 'master' into eevee-motionblur-object 2020-06-10 13:37:36 +02:00
75008dc4b9 EEVEE: Motion Blur: Use closest interpolation for sampling
This avoids halo artifacts.
2020-06-09 13:35:08 +02:00
8ded0dd4d5 EEVEE: Motion Blur: Split both direction accumulation 2020-06-08 23:38:26 +02:00
e5062a775e EEVEE: Motion Blur: Do not request motion blur data in viewport 2020-06-08 19:21:25 +02:00
13aa113d7d EEVEE: Motion Blur: Simplify shader code 2020-06-08 19:20:56 +02:00
d8876e78a1 Merge branch 'master' into eevee-motionblur-object 2020-06-08 13:35:07 +02:00
e8592a0412 Merge branch 'master' into eevee-motionblur-object
# Conflicts:
#	source/blender/draw/engines/eevee/eevee_data.c
#	source/blender/draw/engines/eevee/eevee_effects.c
#	source/blender/draw/engines/eevee/eevee_motion_blur.c
#	source/blender/gpu/GPU_vertex_format.h
#	source/blender/makesdna/DNA_userdef_types.h
#	source/blender/makesrna/intern/rna_userdef.c
2020-06-04 14:24:10 +02:00
693b88f152 EEVEE: Motion Blur: Fix crash when motion blur is not enabled 2020-04-14 15:33:07 +02:00
3651ebb1db EEVEE: Motion Blur: Improve post processing
This improves the post-process effect by implementing a more accurate
sampling technique.

Details about the technique can be found here
https://casual-effects.com/research/McGuire2012Blur/McGuire12Blur.pdf
and here
http://www.iryoku.com/next-generation-post-processing-in-call-of-duty-advanced-warfare
2020-04-11 23:35:07 +02:00
7aef17d361 EEVEE: Motion Blur: Fix shutter delta 2020-04-09 03:59:13 +02:00
417f0d720b EEVEE: Motion Blur: Make motion blur closer to Cycles by evaluating 3 steps
For now we always center the delta around the frame time. We store 2 motion
steps, one before and one after the current frame. However, this also means
storing 2 motion vectors for each pixel, doubling the VRAM usage of the
motion vector buffer.

This patch also cleans up some unneeded complexity. We use the motion vectors
as is and don't apply a multiplier.
2020-04-07 01:02:53 +02:00
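The three evaluation times follow from centering the shutter interval on the current frame, matching the `start_time`/`end_time` computation that appears in `eevee_render_to_image` in the diffs. A sketch (the helper name is hypothetical):

```c
#include <assert.h>

/* Shutter interval centered on the current frame: one motion step
 * before (MB_PREV) and one after (MB_NEXT) the current time (MB_CURR). */
static void motion_blur_times(float frame, float shutter, float r_times[3])
{
  r_times[0] = frame - shutter * 0.5f; /* MB_PREV */
  r_times[1] = frame;                  /* MB_CURR */
  r_times[2] = frame + shutter * 0.5f; /* MB_NEXT */
}
```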
ed58e3656a EEVEE: Motion Blur: Use orig_object->data instead of orig_data as key
This fixes issues with some modifier setups.
2020-04-07 00:58:35 +02:00
7b9a6f823b EEVEE: Implement deformation motion blur
This adds deformation motion blur by saving the vertex buffers of the
previous frame.

We modify the surface batch to pass the vert position as new attributes.
2020-04-06 17:55:49 +02:00
2f2fb4453f DRW: Batch Cache: Expose position vertex buffer to engine
This is in order to compute position deltas for motion blur. The engine
is responsible for handling the data but does not own it, so a copy must
be performed if the data needs to be kept across frames.
2020-04-06 17:32:27 +02:00
f9f6042bcf GPUVertexFormat: Add Rename function
This is needed for motion blur.
2020-04-06 15:43:43 +02:00
16fd236e14 GPUVertBuf: Add duplication function 2020-04-06 15:06:40 +02:00
8cd42410c5 EEVEE: Motion Blur: Rework object motion vector rendering
Object matrices are now stored in a GHash per object, similar to Cycles.
Object surfaces are no longer split per material.

This approach is not supported in the viewport.

It is much cleaner and will be able to easily support deforming
motion blur.

It also supports instances.
2020-04-04 00:45:55 +02:00
0213d9f865 EEVEE: Object Motion Blur: Initial Implementation
This adds object motion blur vectors for EEVEE as well as better noise reduction for it.

For TAA reprojection we just compute the motion vector on the fly based on camera motion and the depth buffer.
This makes it possible to store a separate motion vector used only for the blurring, which is not needed for TAA
history fetching.

The changes are quite simple. We just do an extra pass to write the motion vectors for opaque objects and
use them for the motion blur sampling.

This does not improve the post-process motion blur itself.

Viewport support is kind of a hack, relying on cached states of previously drawn objects, and is enabled
through the experimental features panel in the user preferences.

Differential Revision: https://developer.blender.org/D7297
2020-04-01 01:52:09 +02:00
bd6abacc04 EEVEE: Motion Blur: Fix camera motion blur in render 2020-04-01 01:31:13 +02:00
c42e68c484 EEVEE: Motion Blur: Fix rendering and center sample on current frame 2020-04-01 00:52:32 +02:00
a83ad13c49 EEVEE: Motion Blur: Make Temporal accumulation clear up the noise 2020-04-01 00:52:32 +02:00
7ebb1f2ff3 EEVEE: Motion Blur: Fix missing motion vectors after first TAA sample 2020-04-01 00:52:32 +02:00
5de40f4838 EEVEE: Motion Blur: Fix TAA reprojection
We compute the motion vector on the fly based on camera motion.
2020-04-01 00:52:32 +02:00
52f8ba66cb EEVEE: Motion Blur: Fix motion blur in render 2020-04-01 00:50:52 +02:00
536142e12f EEVEE: Object Motion Blur: Initial Implementation 2020-04-01 00:50:52 +02:00
34 changed files with 1313 additions and 237 deletions

View File

@@ -173,8 +173,9 @@ class RENDER_PT_eevee_motion_blur(RenderButtonsPanel, Panel):
layout.active = props.use_motion_blur
col = layout.column()
col.prop(props, "motion_blur_samples")
col.prop(props, "motion_blur_shutter")
col.prop(props, "motion_blur_depth_scale")
col.prop(props, "motion_blur_max")
class RENDER_PT_eevee_depth_of_field(RenderButtonsPanel, Panel):

View File

@@ -5076,5 +5076,13 @@ void blo_do_versions_280(FileData *fd, Library *UNUSED(lib), Main *bmain)
*/
{
/* Keep this block, even when empty. */
/* EEVEE Motion blur new parameters. */
if (!DNA_struct_elem_find(fd->filesdna, "SceneEEVEE", "float", "motion_blur_depth_scale")) {
LISTBASE_FOREACH (Scene *, scene, &bmain->scenes) {
scene->eevee.motion_blur_depth_scale = 100.0f;
scene->eevee.motion_blur_max = 32;
}
}
}
}

View File

@@ -214,6 +214,7 @@ data_to_c_simple(engines/eevee/shaders/effect_downsample_frag.glsl SRC)
data_to_c_simple(engines/eevee/shaders/effect_downsample_cube_frag.glsl SRC)
data_to_c_simple(engines/eevee/shaders/effect_gtao_frag.glsl SRC)
data_to_c_simple(engines/eevee/shaders/effect_velocity_resolve_frag.glsl SRC)
data_to_c_simple(engines/eevee/shaders/effect_velocity_tile_frag.glsl SRC)
data_to_c_simple(engines/eevee/shaders/effect_minmaxz_frag.glsl SRC)
data_to_c_simple(engines/eevee/shaders/effect_mist_frag.glsl SRC)
data_to_c_simple(engines/eevee/shaders/effect_motion_blur_frag.glsl SRC)
@@ -224,6 +225,8 @@ data_to_c_simple(engines/eevee/shaders/effect_temporal_aa.glsl SRC)
data_to_c_simple(engines/eevee/shaders/lightprobe_planar_downsample_frag.glsl SRC)
data_to_c_simple(engines/eevee/shaders/lightprobe_planar_downsample_geom.glsl SRC)
data_to_c_simple(engines/eevee/shaders/lightprobe_planar_downsample_vert.glsl SRC)
data_to_c_simple(engines/eevee/shaders/object_motion_frag.glsl SRC)
data_to_c_simple(engines/eevee/shaders/object_motion_vert.glsl SRC)
data_to_c_simple(engines/eevee/shaders/prepass_frag.glsl SRC)
data_to_c_simple(engines/eevee/shaders/prepass_vert.glsl SRC)
data_to_c_simple(engines/eevee/shaders/shadow_accum_frag.glsl SRC)

View File

@@ -24,11 +24,135 @@
#include "DRW_render.h"
#include "BLI_ghash.h"
#include "BLI_memblock.h"
#include "BKE_duplilist.h"
#include "DEG_depsgraph_query.h"
#include "GPU_vertex_buffer.h"
#include "eevee_lightcache.h"
#include "eevee_private.h"
/* Motion Blur data. */
static void eevee_motion_blur_mesh_data_free(void *val)
{
EEVEE_GeometryMotionData *geom_mb = (EEVEE_GeometryMotionData *)val;
for (int i = 0; i < ARRAY_SIZE(geom_mb->vbo); i++) {
GPU_VERTBUF_DISCARD_SAFE(geom_mb->vbo[i]);
}
MEM_freeN(val);
}
static uint eevee_object_key_hash(const void *key)
{
EEVEE_ObjectKey *ob_key = (EEVEE_ObjectKey *)key;
uint hash = BLI_ghashutil_ptrhash(ob_key->ob);
hash = BLI_ghashutil_combine_hash(hash, BLI_ghashutil_ptrhash(ob_key->parent));
for (int i = 0; i < 16; i++) {
if (ob_key->id[i] != 0) {
hash = BLI_ghashutil_combine_hash(hash, BLI_ghashutil_inthash(ob_key->id[i]));
}
else {
break;
}
}
return hash;
}
/* Return false if equal. */
static bool eevee_object_key_cmp(const void *a, const void *b)
{
EEVEE_ObjectKey *key_a = (EEVEE_ObjectKey *)a;
EEVEE_ObjectKey *key_b = (EEVEE_ObjectKey *)b;
if (key_a->ob != key_b->ob) {
return true;
}
if (key_a->parent != key_b->parent) {
return true;
}
if (memcmp(key_a->id, key_b->id, sizeof(key_a->id)) != 0) {
return true;
}
return false;
}
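Note the inverted convention above: like the other `BLI_ghashutil` comparators, the function returns false when the keys are equal. A minimal stand-alone sketch of the same convention, using a simplified, hypothetical key struct rather than the real `EEVEE_ObjectKey`:

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Simplified stand-in for EEVEE_ObjectKey: object pointer, dupli
 * parent pointer and a fixed-size persistent id. */
typedef struct DemoKey {
  const void *ob;
  const void *parent;
  int id[4];
} DemoKey;

/* Returns false if equal, as BLI_ghash comparators expect. */
static bool demo_key_cmp(const DemoKey *a, const DemoKey *b)
{
  if (a->ob != b->ob || a->parent != b->parent) {
    return true;
  }
  return memcmp(a->id, b->id, sizeof(a->id)) != 0;
}
```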
void EEVEE_motion_blur_data_init(EEVEE_MotionBlurData *mb)
{
if (mb->object == NULL) {
mb->object = BLI_ghash_new(eevee_object_key_hash, eevee_object_key_cmp, "EEVEE Object Motion");
}
if (mb->geom == NULL) {
mb->geom = BLI_ghash_new(BLI_ghashutil_ptrhash, BLI_ghashutil_ptrcmp, "EEVEE Mesh Motion");
}
}
void EEVEE_motion_blur_data_free(EEVEE_MotionBlurData *mb)
{
if (mb->object) {
BLI_ghash_free(mb->object, MEM_freeN, MEM_freeN);
mb->object = NULL;
}
if (mb->geom) {
BLI_ghash_free(mb->geom, NULL, eevee_motion_blur_mesh_data_free);
mb->geom = NULL;
}
}
EEVEE_ObjectMotionData *EEVEE_motion_blur_object_data_get(EEVEE_MotionBlurData *mb, Object *ob)
{
if (mb->object == NULL) {
return NULL;
}
EEVEE_ObjectKey key, *key_p;
key.ob = ob;
DupliObject *dup = DRW_object_get_dupli(ob);
if (dup) {
key.parent = DRW_object_get_dupli_parent(ob);
memcpy(key.id, dup->persistent_id, sizeof(key.id));
}
else {
key.parent = key.ob;
memset(key.id, 0, sizeof(key.id));
}
EEVEE_ObjectMotionData *ob_step = BLI_ghash_lookup(mb->object, &key);
if (ob_step == NULL) {
key_p = MEM_mallocN(sizeof(*key_p), __func__);
memcpy(key_p, &key, sizeof(*key_p));
ob_step = MEM_callocN(sizeof(EEVEE_ObjectMotionData), __func__);
BLI_ghash_insert(mb->object, key_p, ob_step);
}
return ob_step;
}
EEVEE_GeometryMotionData *EEVEE_motion_blur_geometry_data_get(EEVEE_MotionBlurData *mb, Object *ob)
{
if (mb->geom == NULL) {
return NULL;
}
/* Use original data as key to ensure matching across updates. */
Object *ob_orig = DEG_get_original_object(ob);
EEVEE_GeometryMotionData *geom_step = BLI_ghash_lookup(mb->geom, ob_orig->data);
if (geom_step == NULL) {
geom_step = MEM_callocN(sizeof(EEVEE_GeometryMotionData), __func__);
BLI_ghash_insert(mb->geom, ob_orig->data, geom_step);
}
return geom_step;
}
/* View Layer data. */
void EEVEE_view_layer_data_free(void *storage)
{
EEVEE_ViewLayerData *sldata = (EEVEE_ViewLayerData *)storage;

View File

@@ -225,10 +225,13 @@ void EEVEE_effects_init(EEVEE_ViewLayerData *sldata,
*/
if ((effects->enabled_effects & EFFECT_VELOCITY_BUFFER) != 0) {
effects->velocity_tx = DRW_texture_pool_query_2d(
size_fs[0], size_fs[1], GPU_RG16, &draw_engine_eevee_type);
size_fs[0], size_fs[1], GPU_RGBA16, &draw_engine_eevee_type);
/* TODO: output object velocity during the main pass. */
// GPU_framebuffer_texture_attach(fbl->main_fb, effects->velocity_tx, 1, 0);
GPU_framebuffer_ensure_config(&fbl->velocity_fb,
{
GPU_ATTACHMENT_TEXTURE(dtxl->depth),
GPU_ATTACHMENT_TEXTURE(effects->velocity_tx),
});
GPU_framebuffer_ensure_config(
&fbl->velocity_resolve_fb,
@@ -328,14 +331,18 @@ void EEVEE_effects_cache_init(EEVEE_ViewLayerData *sldata, EEVEE_Data *vedata)
}
if ((effects->enabled_effects & EFFECT_VELOCITY_BUFFER) != 0) {
EEVEE_MotionBlurData *mb_data = &effects->motion_blur;
/* This pass computes camera motion vectors for the non-moving objects. */
DRW_PASS_CREATE(psl->velocity_resolve, DRW_STATE_WRITE_COLOR);
grp = DRW_shgroup_create(EEVEE_shaders_velocity_resolve_sh_get(), psl->velocity_resolve);
DRW_shgroup_uniform_texture_ref(grp, "depthBuffer", &e_data.depth_src);
DRW_shgroup_uniform_block(grp, "common_block", sldata->common_ubo);
DRW_shgroup_uniform_block(grp, "renderpass_block", sldata->renderpass_ubo.combined);
DRW_shgroup_uniform_mat4(grp, "currPersinv", effects->velocity_curr_persinv);
DRW_shgroup_uniform_mat4(grp, "pastPersmat", effects->velocity_past_persmat);
DRW_shgroup_uniform_mat4(grp, "prevViewProjMatrix", mb_data->camera[MB_PREV].persmat);
DRW_shgroup_uniform_mat4(grp, "currViewProjMatrixInv", mb_data->camera[MB_CURR].persinv);
DRW_shgroup_uniform_mat4(grp, "nextViewProjMatrix", mb_data->camera[MB_NEXT].persmat);
DRW_shgroup_call(grp, quad, NULL);
}
}
@@ -501,17 +508,19 @@ static void EEVEE_velocity_resolve(EEVEE_Data *vedata)
EEVEE_FramebufferList *fbl = vedata->fbl;
EEVEE_StorageList *stl = vedata->stl;
EEVEE_EffectsInfo *effects = stl->effects;
struct DRWView *view = effects->taa_view;
if ((effects->enabled_effects & EFFECT_VELOCITY_BUFFER) != 0) {
DefaultTextureList *dtxl = DRW_viewport_texture_list_get();
e_data.depth_src = dtxl->depth;
DRW_view_persmat_get(view, effects->velocity_curr_persinv, true);
GPU_framebuffer_bind(fbl->velocity_resolve_fb);
DRW_draw_pass(psl->velocity_resolve);
if (psl->velocity_object) {
GPU_framebuffer_bind(fbl->velocity_fb);
DRW_draw_pass(psl->velocity_object);
}
}
DRW_view_persmat_get(view, effects->velocity_past_persmat, false);
}
void EEVEE_draw_effects(EEVEE_ViewLayerData *sldata, EEVEE_Data *vedata)
@@ -529,6 +538,7 @@ void EEVEE_draw_effects(EEVEE_ViewLayerData *sldata, EEVEE_Data *vedata)
effects->target_buffer = fbl->effect_color_fb; /* next target to render to */
/* Post process stack (order matters) */
EEVEE_velocity_resolve(vedata);
EEVEE_motion_blur_draw(vedata);
EEVEE_depth_of_field_draw(vedata);
@@ -537,7 +547,6 @@ void EEVEE_draw_effects(EEVEE_ViewLayerData *sldata, EEVEE_Data *vedata)
* Velocity resolve use a hack to exclude lookdev
* spheres from creating shimmering re-projection vectors. */
EEVEE_lookdev_draw(vedata);
EEVEE_velocity_resolve(vedata);
EEVEE_temporal_sampling_draw(vedata);
EEVEE_bloom_draw(vedata);

View File

@@ -421,18 +421,76 @@ static void eevee_render_to_image(void *vedata,
struct RenderLayer *render_layer,
const rcti *rect)
{
EEVEE_Data *ved = (EEVEE_Data *)vedata;
const DRWContextState *draw_ctx = DRW_context_state_get();
if (EEVEE_render_do_motion_blur(draw_ctx->depsgraph)) {
Scene *scene = DEG_get_evaluated_scene(draw_ctx->depsgraph);
float shutter = scene->eevee.motion_blur_shutter * 0.5f;
float time = CFRA;
/* Centered on frame for now. */
float start_time = time - shutter;
float end_time = time + shutter;
{
EEVEE_motion_blur_step_set(ved, MB_PREV);
RE_engine_frame_set(engine, floorf(start_time), fractf(start_time));
if (!EEVEE_render_init(vedata, engine, draw_ctx->depsgraph)) {
return;
}
if (RE_engine_test_break(engine)) {
return;
}
DRW_render_object_iter(vedata, engine, draw_ctx->depsgraph, EEVEE_render_cache);
EEVEE_motion_blur_cache_finish(vedata);
/* Reset passlist. This is safe as they are stored into managed memory chunks. */
memset(ved->psl, 0, sizeof(*ved->psl));
/* Fix memleak */
BLI_ghash_free(ved->stl->g_data->material_hash, NULL, NULL);
ved->stl->g_data->material_hash = NULL;
}
{
EEVEE_motion_blur_step_set(ved, MB_NEXT);
RE_engine_frame_set(engine, floorf(end_time), fractf(end_time));
EEVEE_render_init(vedata, engine, draw_ctx->depsgraph);
DRW_render_object_iter(vedata, engine, draw_ctx->depsgraph, EEVEE_render_cache);
EEVEE_motion_blur_cache_finish(vedata);
/* Reset passlist. This is safe as they are stored into managed memory chunks. */
memset(ved->psl, 0, sizeof(*ved->psl));
/* Fix memleak */
BLI_ghash_free(ved->stl->g_data->material_hash, NULL, NULL);
ved->stl->g_data->material_hash = NULL;
}
/* Current frame. */
EEVEE_motion_blur_step_set(ved, MB_CURR);
RE_engine_frame_set(engine, time, 0.0f);
}
if (!EEVEE_render_init(vedata, engine, draw_ctx->depsgraph)) {
return;
}
if (RE_engine_test_break(engine)) {
return;
}
DRW_render_object_iter(vedata, engine, draw_ctx->depsgraph, EEVEE_render_cache);
EEVEE_motion_blur_cache_finish(vedata);
/* Actually do the rendering. */
EEVEE_render_draw(vedata, engine, render_layer, rect);
EEVEE_volumes_free_smoke_textures();
EEVEE_motion_blur_data_free(&ved->stl->effects->motion_blur);
}
static void eevee_engine_free(void)

View File

@@ -901,6 +901,9 @@ void EEVEE_materials_cache_populate(EEVEE_Data *vedata,
}
}
}
/* Motion Blur Vectors. */
EEVEE_motion_blur_cache_populate(sldata, vedata, ob);
}
/* Volumetrics */

View File

@@ -24,12 +24,16 @@
#include "DRW_render.h"
#include "BLI_rand.h"
#include "BKE_animsys.h"
#include "BKE_camera.h"
#include "BKE_object.h"
#include "BKE_screen.h"
#include "DNA_anim_types.h"
#include "DNA_camera_types.h"
#include "DNA_mesh_types.h"
#include "DNA_screen_types.h"
#include "ED_screen.h"
@@ -37,172 +41,408 @@
#include "DEG_depsgraph.h"
#include "DEG_depsgraph_query.h"
#include "GPU_batch.h"
#include "GPU_texture.h"
#include "eevee_private.h"
static struct {
/* Motion Blur */
struct GPUShader *motion_blur_sh;
struct GPUShader *motion_blur_object_sh;
struct GPUShader *velocity_tiles_sh;
struct GPUShader *velocity_tiles_expand_sh;
} e_data = {NULL}; /* Engine data */
extern char datatoc_effect_velocity_tile_frag_glsl[];
extern char datatoc_effect_motion_blur_frag_glsl[];
extern char datatoc_object_motion_frag_glsl[];
extern char datatoc_object_motion_vert_glsl[];
extern char datatoc_common_view_lib_glsl[];
static void eevee_motion_blur_camera_get_matrix_at_time(Scene *scene,
ARegion *region,
RegionView3D *rv3d,
View3D *v3d,
Object *camera,
float time,
float r_mat[4][4])
{
float obmat[4][4];
/* HACK */
Object cam_cpy = *camera;
Camera camdata_cpy = *(Camera *)(camera->data);
cam_cpy.data = &camdata_cpy;
/* Reset original pointers, so direct evaluation does not attempt to flush
* animation back to the original object: otherwise a viewport with motion
* blur enabled will always lose non-keyed changes. */
cam_cpy.id.orig_id = NULL;
camdata_cpy.id.orig_id = NULL;
const DRWContextState *draw_ctx = DRW_context_state_get();
/* Past matrix */
/* FIXME: This is a temporary solution that does not take care of parent animations. */
/* Recalc Anim manually */
BKE_animsys_evaluate_animdata(&camdata_cpy.id, camdata_cpy.adt, time, ADT_RECALC_ALL, false);
BKE_object_where_is_calc_time(draw_ctx->depsgraph, scene, &cam_cpy, time);
/* Compute winmat */
CameraParams params;
BKE_camera_params_init(&params);
if (v3d != NULL) {
BKE_camera_params_from_view3d(&params, draw_ctx->depsgraph, v3d, rv3d);
BKE_camera_params_compute_viewplane(&params, region->winx, region->winy, 1.0f, 1.0f);
}
else {
BKE_camera_params_from_object(&params, &cam_cpy);
BKE_camera_params_compute_viewplane(
&params, scene->r.xsch, scene->r.ysch, scene->r.xasp, scene->r.yasp);
}
BKE_camera_params_compute_matrix(&params);
/* FIXME Should be done per view (MULTIVIEW) */
normalize_m4_m4(obmat, cam_cpy.obmat);
invert_m4(obmat);
mul_m4_m4m4(r_mat, params.winmat, obmat);
}
#define EEVEE_VELOCITY_TILE_SIZE 32
static void eevee_create_shader_motion_blur(void)
{
e_data.motion_blur_sh = DRW_shader_create_fullscreen(datatoc_effect_motion_blur_frag_glsl, NULL);
e_data.motion_blur_sh = DRW_shader_create_fullscreen(
datatoc_effect_motion_blur_frag_glsl,
"#define EEVEE_VELOCITY_TILE_SIZE " STRINGIFY(EEVEE_VELOCITY_TILE_SIZE) "\n");
e_data.motion_blur_object_sh = DRW_shader_create_with_lib(datatoc_object_motion_vert_glsl,
NULL,
datatoc_object_motion_frag_glsl,
datatoc_common_view_lib_glsl,
NULL);
e_data.velocity_tiles_sh = DRW_shader_create_fullscreen(
datatoc_effect_velocity_tile_frag_glsl,
"#define TILE_GATHER\n"
"#define EEVEE_VELOCITY_TILE_SIZE " STRINGIFY(EEVEE_VELOCITY_TILE_SIZE) "\n");
e_data.velocity_tiles_expand_sh = DRW_shader_create_fullscreen(
datatoc_effect_velocity_tile_frag_glsl,
"#define TILE_EXPANSION\n"
"#define EEVEE_VELOCITY_TILE_SIZE " STRINGIFY(EEVEE_VELOCITY_TILE_SIZE) "\n");
}
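The `STRINGIFY(EEVEE_VELOCITY_TILE_SIZE)` calls above rely on two-level macro expansion so that the macro's value, not its name, ends up in the GLSL define string. A sketch of the idiom assumed here (Blender provides its own version in `BLI_utildefines.h`; this is the standard pattern, not a copy):

```c
#include <assert.h>
#include <string.h>

/* Two-level expansion: the inner macro stringifies its argument only
 * after the outer macro has substituted the value in. */
#define STRINGIFY_(x) #x
#define STRINGIFY(x) STRINGIFY_(x)

#define EEVEE_VELOCITY_TILE_SIZE 32

/* Adjacent string literals concatenate into one define line. */
static const char defines[] =
    "#define EEVEE_VELOCITY_TILE_SIZE " STRINGIFY(EEVEE_VELOCITY_TILE_SIZE) "\n";
```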
#if 0
static void eevee_motion_blur_past_persmat_get(const CameraParams *past_params,
const CameraParams *current_params,
const RegionView3D *rv3d,
const ARegion *region,
const float (*world_to_view)[4],
float (*r_world_to_ndc)[4])
{
CameraParams params = *past_params;
params.offsetx = current_params->offsetx;
params.offsety = current_params->offsety;
params.zoom = current_params->zoom;
float zoom = BKE_screen_view3d_zoom_to_fac(rv3d->camzoom);
params.shiftx *= zoom;
params.shifty *= zoom;
BKE_camera_params_compute_viewplane(&params, region->winx, region->winy, 1.0f, 1.0f);
BKE_camera_params_compute_matrix(&params);
mul_m4_m4m4(r_world_to_ndc, params.winmat, world_to_view);
}
#endif
int EEVEE_motion_blur_init(EEVEE_ViewLayerData *UNUSED(sldata), EEVEE_Data *vedata, Object *camera)
{
EEVEE_StorageList *stl = vedata->stl;
EEVEE_FramebufferList *fbl = vedata->fbl;
EEVEE_EffectsInfo *effects = stl->effects;
const DRWContextState *draw_ctx = DRW_context_state_get();
const Scene *scene_eval = DEG_get_evaluated_scene(draw_ctx->depsgraph);
Scene *scene = draw_ctx->scene;
View3D *v3d = draw_ctx->v3d;
RegionView3D *rv3d = draw_ctx->rv3d;
ARegion *region = draw_ctx->region;
if (scene_eval->eevee.flag & SCE_EEVEE_MOTION_BLUR_ENABLED) {
/* Update Motion Blur Matrices */
if (camera && (camera->type == OB_CAMERA) && (camera->data != NULL)) {
float persmat[4][4];
float ctime = DEG_get_ctime(draw_ctx->depsgraph);
float delta = scene_eval->eevee.motion_blur_shutter;
Object *ob_camera_eval = DEG_get_evaluated_object(draw_ctx->depsgraph, camera);
/* Viewport Matrix */
/* Note: This does not have TAA jitter applied. */
DRW_view_persmat_get(NULL, persmat, false);
bool view_is_valid = (stl->g_data->view_updated == false);
if (draw_ctx->evil_C != NULL) {
struct wmWindowManager *wm = CTX_wm_manager(draw_ctx->evil_C);
view_is_valid = view_is_valid && (ED_screen_animation_no_scrub(wm) == NULL);
}
/* The view is jittered by the oglrenderer. So avoid testing in this case. */
if (!DRW_state_is_image_render()) {
view_is_valid = view_is_valid && compare_m4m4(persmat, effects->prev_drw_persmat, FLT_MIN);
/* WATCH: assume TAA init code runs last. */
if (scene_eval->eevee.taa_samples == 1) {
/* Only if TAA is disabled. If not, TAA will update prev_drw_persmat itself. */
copy_m4_m4(effects->prev_drw_persmat, persmat);
}
}
effects->motion_blur_mat_cached = view_is_valid && !DRW_state_is_image_render();
/* Current matrix */
if (effects->motion_blur_mat_cached == false) {
eevee_motion_blur_camera_get_matrix_at_time(
scene, region, rv3d, v3d, ob_camera_eval, ctime, effects->current_world_to_ndc);
}
/* Only continue if camera is not being keyed */
if (DRW_state_is_image_render() ||
compare_m4m4(persmat, effects->current_world_to_ndc, 0.0001f)) {
/* Past matrix */
if (effects->motion_blur_mat_cached == false) {
eevee_motion_blur_camera_get_matrix_at_time(
scene, region, rv3d, v3d, ob_camera_eval, ctime - delta, effects->past_world_to_ndc);
#if 0 /* for future high quality blur */
/* Future matrix */
eevee_motion_blur_camera_get_matrix_at_time(
scene, region, rv3d, v3d, ob_camera_eval, ctime + delta, effects->future_world_to_ndc);
#endif
invert_m4_m4(effects->current_ndc_to_world, effects->current_world_to_ndc);
}
effects->motion_blur_mat_cached = true;
effects->motion_blur_samples = scene_eval->eevee.motion_blur_samples;
if (!e_data.motion_blur_sh) {
eevee_create_shader_motion_blur();
}
return EFFECT_MOTION_BLUR | EFFECT_POST_BUFFER;
}
}
/* Viewport not supported for now. */
if (!DRW_state_is_scene_render()) {
return 0;
}
if (scene->eevee.flag & SCE_EEVEE_MOTION_BLUR_ENABLED) {
if (!e_data.motion_blur_sh) {
eevee_create_shader_motion_blur();
}
if (DRW_state_is_scene_render()) {
int mb_step = effects->motion_blur_step;
DRW_view_viewmat_get(NULL, effects->motion_blur.camera[mb_step].viewmat, false);
DRW_view_persmat_get(NULL, effects->motion_blur.camera[mb_step].persmat, false);
DRW_view_persmat_get(NULL, effects->motion_blur.camera[mb_step].persinv, true);
}
if (camera != NULL) {
Camera *cam = camera->data;
effects->motion_blur_near_far[0] = cam->clip_start;
effects->motion_blur_near_far[1] = cam->clip_end;
}
else {
/* Not supported yet. */
BLI_assert(0);
}
#if 0 /* For when we can do viewport motion blur. */
/* Update Motion Blur Matrices */
if (camera && (camera->type == OB_CAMERA) && (camera->data != NULL)) {
if (effects->current_time != ctime) {
copy_m4_m4(effects->past_world_to_ndc, effects->current_world_to_ndc);
copy_m4_m4(effects->past_world_to_view, effects->current_world_to_view);
effects->past_time = effects->current_time;
effects->past_cam_params = effects->current_cam_params;
}
DRW_view_viewmat_get(NULL, effects->current_world_to_view, false);
DRW_view_persmat_get(NULL, effects->current_world_to_ndc, false);
DRW_view_persmat_get(NULL, effects->current_ndc_to_world, true);
if (draw_ctx->v3d) {
CameraParams params;
/* Save object params for next frame. */
BKE_camera_params_init(&effects->current_cam_params);
BKE_camera_params_from_object(&effects->current_cam_params, camera);
/* Compute v3d params to apply on last frame object params. */
BKE_camera_params_init(&params);
BKE_camera_params_from_view3d(&params, draw_ctx->depsgraph, draw_ctx->v3d, draw_ctx->rv3d);
eevee_motion_blur_past_persmat_get(&effects->past_cam_params,
&params,
draw_ctx->rv3d,
draw_ctx->region,
effects->past_world_to_view,
effects->past_world_to_ndc);
}
effects->current_time = ctime;
if (effects->cam_params_init == false) {
/* Disable motion blur if not initialized. */
copy_m4_m4(effects->past_world_to_ndc, effects->current_world_to_ndc);
copy_m4_m4(effects->past_world_to_view, effects->current_world_to_view);
effects->past_time = effects->current_time;
effects->past_cam_params = effects->current_cam_params;
effects->cam_params_init = true;
}
}
else {
/* Make no camera motion blur by using the same matrix for previous and current transform. */
DRW_view_persmat_get(NULL, effects->past_world_to_ndc, false);
DRW_view_persmat_get(NULL, effects->current_world_to_ndc, false);
DRW_view_persmat_get(NULL, effects->current_ndc_to_world, true);
effects->past_time = effects->current_time = ctime;
effects->cam_params_init = false;
}
#endif
effects->motion_blur_max = max_ii(0, scene->eevee.motion_blur_max);
const float *fs_size = DRW_viewport_size_get();
int tx_size[2] = {1 + ((int)fs_size[0] / EEVEE_VELOCITY_TILE_SIZE),
1 + ((int)fs_size[1] / EEVEE_VELOCITY_TILE_SIZE)};
effects->velocity_tiles_x_tx = DRW_texture_pool_query_2d(
tx_size[0], fs_size[1], GPU_RGBA16, &draw_engine_eevee_type);
GPU_framebuffer_ensure_config(&fbl->velocity_tiles_fb[0],
{
GPU_ATTACHMENT_NONE,
GPU_ATTACHMENT_TEXTURE(effects->velocity_tiles_x_tx),
});
effects->velocity_tiles_tx = DRW_texture_pool_query_2d(
tx_size[0], tx_size[1], GPU_RGBA16, &draw_engine_eevee_type);
GPU_framebuffer_ensure_config(&fbl->velocity_tiles_fb[1],
{
GPU_ATTACHMENT_NONE,
GPU_ATTACHMENT_TEXTURE(effects->velocity_tiles_tx),
});
return EFFECT_MOTION_BLUR | EFFECT_POST_BUFFER | EFFECT_VELOCITY_BUFFER;
}
return 0;
}
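The tile buffer allocation above sizes one texel per 32-pixel tile, plus one extra to cover a trailing partial tile. A sketch of that sizing, mirroring the `1 + ((int)fs_size[0] / EEVEE_VELOCITY_TILE_SIZE)` expression (the helper name is hypothetical):

```c
#include <assert.h>

#define EEVEE_VELOCITY_TILE_SIZE 32

/* One texel per tile; the +1 covers a trailing partial tile (and
 * slightly over-allocates when the size is an exact multiple). */
static int velocity_tile_count(int pixels)
{
  return 1 + pixels / EEVEE_VELOCITY_TILE_SIZE;
}
```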
void EEVEE_motion_blur_step_set(EEVEE_Data *vedata, int step)
{
BLI_assert(step < 3);
/* Meh, code duplication. Could be avoided if render init would not contain cache init. */
if (vedata->stl->effects == NULL) {
vedata->stl->effects = MEM_callocN(sizeof(*vedata->stl->effects), __func__);
}
vedata->stl->effects->motion_blur_step = step;
}
void EEVEE_motion_blur_cache_init(EEVEE_ViewLayerData *UNUSED(sldata), EEVEE_Data *vedata)
{
EEVEE_PassList *psl = vedata->psl;
EEVEE_StorageList *stl = vedata->stl;
EEVEE_EffectsInfo *effects = stl->effects;
EEVEE_MotionBlurData *mb_data = &effects->motion_blur;
DefaultTextureList *dtxl = DRW_viewport_texture_list_get();
struct GPUBatch *quad = DRW_cache_fullscreen_quad_get();
const DRWContextState *draw_ctx = DRW_context_state_get();
Scene *scene = draw_ctx->scene;
if ((effects->enabled_effects & EFFECT_MOTION_BLUR) != 0) {
DRW_PASS_CREATE(psl->motion_blur, DRW_STATE_WRITE_COLOR);
const float *fs_size = DRW_viewport_size_get();
int tx_size[2] = {GPU_texture_width(effects->velocity_tiles_tx),
GPU_texture_height(effects->velocity_tiles_tx)};
DRWShadingGroup *grp;
{
DRW_PASS_CREATE(psl->velocity_tiles_x, DRW_STATE_WRITE_COLOR);
DRW_PASS_CREATE(psl->velocity_tiles, DRW_STATE_WRITE_COLOR);
DRWShadingGroup *grp = DRW_shgroup_create(e_data.motion_blur_sh, psl->motion_blur);
DRW_shgroup_uniform_int(grp, "samples", &effects->motion_blur_samples, 1);
DRW_shgroup_uniform_mat4(grp, "currInvViewProjMatrix", effects->current_ndc_to_world);
DRW_shgroup_uniform_mat4(grp, "pastViewProjMatrix", effects->past_world_to_ndc);
DRW_shgroup_uniform_texture_ref(grp, "colorBuffer", &effects->source_buffer);
DRW_shgroup_uniform_texture_ref(grp, "depthBuffer", &dtxl->depth);
DRW_shgroup_call(grp, quad, NULL);
/* Create max velocity tiles in 2 passes. One for X and one for Y */
GPUShader *sh = e_data.velocity_tiles_sh;
grp = DRW_shgroup_create(sh, psl->velocity_tiles_x);
DRW_shgroup_uniform_texture(grp, "velocityBuffer", effects->velocity_tx);
DRW_shgroup_uniform_ivec2_copy(grp, "velocityBufferSize", (int[2]){fs_size[0], fs_size[1]});
DRW_shgroup_uniform_vec2(grp, "viewportSize", DRW_viewport_size_get(), 1);
DRW_shgroup_uniform_vec2(grp, "viewportSizeInv", DRW_viewport_invert_size_get(), 1);
DRW_shgroup_uniform_ivec2_copy(grp, "gatherStep", (int[2]){1, 0});
DRW_shgroup_call_procedural_triangles(grp, NULL, 1);
grp = DRW_shgroup_create(sh, psl->velocity_tiles);
DRW_shgroup_uniform_texture(grp, "velocityBuffer", effects->velocity_tiles_x_tx);
DRW_shgroup_uniform_ivec2_copy(grp, "velocityBufferSize", (int[2]){tx_size[0], fs_size[1]});
DRW_shgroup_uniform_ivec2_copy(grp, "gatherStep", (int[2]){0, 1});
DRW_shgroup_call_procedural_triangles(grp, NULL, 1);
/* Expand max tiles by keeping the max tile in each tile neighborhood. */
DRW_PASS_CREATE(psl->velocity_tiles_expand[0], DRW_STATE_WRITE_COLOR);
DRW_PASS_CREATE(psl->velocity_tiles_expand[1], DRW_STATE_WRITE_COLOR);
for (int i = 0; i < 2; i++) {
GPUTexture *tile_tx = (i == 0) ? effects->velocity_tiles_tx : effects->velocity_tiles_x_tx;
GPUShader *sh_expand = e_data.velocity_tiles_expand_sh;
grp = DRW_shgroup_create(sh_expand, psl->velocity_tiles_expand[i]);
DRW_shgroup_uniform_ivec2_copy(grp, "velocityBufferSize", tx_size);
DRW_shgroup_uniform_texture(grp, "velocityBuffer", tile_tx);
DRW_shgroup_uniform_vec2(grp, "viewportSize", DRW_viewport_size_get(), 1);
DRW_shgroup_uniform_vec2(grp, "viewportSizeInv", DRW_viewport_invert_size_get(), 1);
DRW_shgroup_call_procedural_triangles(grp, NULL, 1);
}
}
{
DRW_PASS_CREATE(psl->motion_blur, DRW_STATE_WRITE_COLOR);
eGPUSamplerState state = 0;
int expand_steps = 1 + (max_ii(0, effects->motion_blur_max - 1) / EEVEE_VELOCITY_TILE_SIZE);
GPUTexture *tile_tx = (expand_steps & 1) ? effects->velocity_tiles_x_tx :
effects->velocity_tiles_tx;
grp = DRW_shgroup_create(e_data.motion_blur_sh, psl->motion_blur);
DRW_shgroup_uniform_texture(grp, "utilTex", EEVEE_materials_get_util_tex());
DRW_shgroup_uniform_texture_ref_ex(grp, "colorBuffer", &effects->source_buffer, state);
DRW_shgroup_uniform_texture_ref_ex(grp, "depthBuffer", &dtxl->depth, state);
DRW_shgroup_uniform_texture_ref_ex(grp, "velocityBuffer", &effects->velocity_tx, state);
DRW_shgroup_uniform_texture(grp, "tileMaxBuffer", tile_tx);
DRW_shgroup_uniform_float_copy(grp, "depthScale", scene->eevee.motion_blur_depth_scale);
DRW_shgroup_uniform_vec2(grp, "nearFar", effects->motion_blur_near_far, 1);
DRW_shgroup_uniform_bool_copy(grp, "isPerspective", DRW_view_is_persp_get(NULL));
DRW_shgroup_uniform_vec2(grp, "viewportSize", DRW_viewport_size_get(), 1);
DRW_shgroup_uniform_vec2(grp, "viewportSizeInv", DRW_viewport_invert_size_get(), 1);
DRW_shgroup_uniform_ivec2_copy(grp, "tileBufferSize", tx_size);
DRW_shgroup_call_procedural_triangles(grp, NULL, 1);
}
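The number of tile-expansion passes above is derived from the maximum blur distance divided by the tile size. A standalone sketch of that arithmetic (the tile size of 32 is an assumption for illustration; `velocity_tile_expand_steps` and `max_ii` here are local stand-ins):

```c
#include <assert.h>

/* Assumed tile size for illustration. */
#define EEVEE_VELOCITY_TILE_SIZE 32

static int max_ii(int a, int b)
{
  return (a > b) ? a : b;
}

/* One gather pass plus one extra expansion pass per full tile of blur
 * radius beyond the first. */
static int velocity_tile_expand_steps(int motion_blur_max)
{
  return 1 + (max_ii(0, motion_blur_max - 1) / EEVEE_VELOCITY_TILE_SIZE);
}
```

With this, a 32 px max blur needs a single expansion pass while 33 px needs two; the parity of the count (`expand_steps & 1`) selects which ping-pong texture holds the final tiles.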
{
DRW_PASS_CREATE(psl->velocity_object, DRW_STATE_WRITE_COLOR | DRW_STATE_DEPTH_EQUAL);
grp = DRW_shgroup_create(e_data.motion_blur_object_sh, psl->velocity_object);
DRW_shgroup_uniform_mat4(grp, "prevViewProjMatrix", mb_data->camera[MB_PREV].persmat);
DRW_shgroup_uniform_mat4(grp, "currViewProjMatrix", mb_data->camera[MB_CURR].persmat);
DRW_shgroup_uniform_mat4(grp, "nextViewProjMatrix", mb_data->camera[MB_NEXT].persmat);
}
EEVEE_motion_blur_data_init(mb_data);
}
else {
psl->motion_blur = NULL;
psl->velocity_object = NULL;
}
}
void EEVEE_motion_blur_cache_populate(EEVEE_ViewLayerData *UNUSED(sldata),
EEVEE_Data *vedata,
Object *ob)
{
EEVEE_PassList *psl = vedata->psl;
EEVEE_StorageList *stl = vedata->stl;
EEVEE_EffectsInfo *effects = stl->effects;
DRWShadingGroup *grp = NULL;
if (!DRW_state_is_scene_render() || psl->velocity_object == NULL) {
return;
}
const bool is_dupli = (ob->base_flag & BASE_FROM_DUPLI) != 0;
/* For now we assume dupli objects are moving. */
const bool object_moves = is_dupli || BKE_object_moves_in_time(ob, true);
const bool is_deform = BKE_object_is_deform_modified(DRW_context_state_get()->scene, ob);
if (!(object_moves || is_deform)) {
return;
}
EEVEE_ObjectMotionData *mb_data = EEVEE_motion_blur_object_data_get(&effects->motion_blur, ob);
if (mb_data) {
int mb_step = effects->motion_blur_step;
/* Store transform */
copy_m4_m4(mb_data->obmat[mb_step], ob->obmat);
EEVEE_GeometryMotionData *mb_geom = EEVEE_motion_blur_geometry_data_get(&effects->motion_blur,
ob);
if (mb_step == MB_CURR) {
GPUBatch *batch = DRW_cache_object_surface_get(ob);
if (batch == NULL) {
return;
}
/* Fill missing matrices if the object was hidden in previous or next frame. */
if (is_zero_m4(mb_data->obmat[MB_PREV])) {
copy_m4_m4(mb_data->obmat[MB_PREV], mb_data->obmat[MB_CURR]);
}
if (is_zero_m4(mb_data->obmat[MB_NEXT])) {
copy_m4_m4(mb_data->obmat[MB_NEXT], mb_data->obmat[MB_CURR]);
}
grp = DRW_shgroup_create(e_data.motion_blur_object_sh, psl->velocity_object);
DRW_shgroup_uniform_mat4(grp, "prevModelMatrix", mb_data->obmat[MB_PREV]);
DRW_shgroup_uniform_mat4(grp, "currModelMatrix", mb_data->obmat[MB_CURR]);
DRW_shgroup_uniform_mat4(grp, "nextModelMatrix", mb_data->obmat[MB_NEXT]);
DRW_shgroup_uniform_bool(grp, "useDeform", &mb_geom->use_deform, 1);
DRW_shgroup_call(grp, batch, ob);
if (mb_geom->use_deform) {
/* Keep to modify later (after init). */
mb_geom->batch = batch;
}
}
else if (is_deform) {
/* Store vertex position buffer. */
mb_geom->vbo[mb_step] = DRW_cache_object_pos_vertbuf_get(ob);
mb_geom->use_deform = (mb_geom->vbo[mb_step] != NULL);
}
else {
mb_geom->vbo[mb_step] = NULL;
mb_geom->use_deform = false;
}
}
}
void EEVEE_motion_blur_cache_finish(EEVEE_Data *vedata)
{
EEVEE_StorageList *stl = vedata->stl;
EEVEE_EffectsInfo *effects = stl->effects;
GHashIterator ghi;
if ((effects->enabled_effects & EFFECT_MOTION_BLUR) == 0) {
return;
}
for (BLI_ghashIterator_init(&ghi, effects->motion_blur.geom);
BLI_ghashIterator_done(&ghi) == false;
BLI_ghashIterator_step(&ghi)) {
EEVEE_GeometryMotionData *mb_geom = BLI_ghashIterator_getValue(&ghi);
int mb_step = effects->motion_blur_step;
if (!mb_geom->use_deform) {
continue;
}
if (mb_step == MB_CURR) {
/* Modify batch to have data from adjacent frames. */
GPUBatch *batch = mb_geom->batch;
for (int i = 0; i < MB_CURR; i++) {
GPUVertBuf *vbo = mb_geom->vbo[i];
if (vbo && batch) {
if (vbo->vertex_len != batch->verts[0]->vertex_len) {
/* Vertex count mismatch, disable deform motion blur. */
mb_geom->use_deform = false;
GPU_VERTBUF_DISCARD_SAFE(mb_geom->vbo[MB_PREV]);
GPU_VERTBUF_DISCARD_SAFE(mb_geom->vbo[MB_NEXT]);
break;
}
/* Modify the batch to include the previous position. */
GPU_batch_vertbuf_add_ex(batch, vbo, true);
/* TODO(fclem) keep the vbo around for next (sub)frames. */
/* Only do once. */
mb_geom->vbo[i] = NULL;
}
}
}
else {
GPUVertBuf *vbo = mb_geom->vbo[mb_step];
/* If this assert fails, it means that different EEVEE_GeometryMotionDatas
* have been used for each motion blur step. */
BLI_assert(vbo);
if (vbo) {
/* Use the vbo to perform the copy on the GPU. */
GPU_vertbuf_use(vbo);
/* Perform a copy to avoid losing it after RE_engine_frame_set(). */
mb_geom->vbo[mb_step] = vbo = GPU_vertbuf_duplicate(vbo);
/* Find and replace "pos" attrib name. */
int attrib_id = GPU_vertformat_attr_id_get(&vbo->format, "pos");
GPU_vertformat_attr_rename(&vbo->format, attrib_id, (mb_step == MB_PREV) ? "prv" : "nxt");
}
}
}
}
@@ -216,6 +456,36 @@ void EEVEE_motion_blur_draw(EEVEE_Data *vedata)
/* Motion Blur */
if ((effects->enabled_effects & EFFECT_MOTION_BLUR) != 0) {
int sample = DRW_state_is_image_render() ? effects->taa_render_sample :
effects->taa_current_sample;
double r;
BLI_halton_1d(2, 0.0, sample - 1, &r);
effects->motion_blur_sample_offset = r;
/* Create velocity max tiles in 2 passes. One for each dimension. */
GPU_framebuffer_bind(fbl->velocity_tiles_fb[0]);
DRW_draw_pass(psl->velocity_tiles_x);
GPU_framebuffer_bind(fbl->velocity_tiles_fb[1]);
DRW_draw_pass(psl->velocity_tiles);
/* Expand the tiles by reading the neighborhood. Do as many passes as required. */
int buf = 0;
for (int i = effects->motion_blur_max; i > 0; i -= EEVEE_VELOCITY_TILE_SIZE) {
GPU_framebuffer_bind(fbl->velocity_tiles_fb[buf]);
/* Change the viewport to avoid invoking more pixel shaders than necessary, since in one of the
* buffers the texture is much bigger in height. This avoids creating another texture and
* reduces VRAM usage. */
int w = GPU_texture_width(effects->velocity_tiles_tx);
int h = GPU_texture_height(effects->velocity_tiles_tx);
GPU_framebuffer_viewport_set(fbl->velocity_tiles_fb[buf], 0, 0, w, h);
DRW_draw_pass(psl->velocity_tiles_expand[buf]);
buf = buf ? 0 : 1;
}
GPU_framebuffer_bind(effects->target_buffer);
DRW_draw_pass(psl->motion_blur);
SWAP_BUFFERS();
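`BLI_halton_1d` above draws a low-discrepancy sample offset per render sample. A simplified stand-in for its core (the base-b radical inverse, i.e. the van der Corput sequence; the real API also takes an offset term, which this sketch omits):

```c
#include <assert.h>

/* Simplified base-b radical inverse (van der Corput), the core of a 1D
 * Halton sequence. */
static double radical_inverse(int base, int i)
{
  double r = 0.0;
  double f = 1.0;
  while (i > 0) {
    f /= (double)base;
    r += f * (double)(i % base);
    i /= base;
  }
  return r;
}
```

In base 2 the sequence for i = 1, 2, 3 is 0.5, 0.25, 0.75 — successive samples keep splitting the largest remaining gap, which is why it makes a good per-sample blur offset.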
@@ -225,4 +495,7 @@ void EEVEE_motion_blur_draw(EEVEE_Data *vedata)
void EEVEE_motion_blur_free(void)
{
DRW_SHADER_FREE_SAFE(e_data.motion_blur_sh);
DRW_SHADER_FREE_SAFE(e_data.motion_blur_object_sh);
DRW_SHADER_FREE_SAFE(e_data.velocity_tiles_sh);
DRW_SHADER_FREE_SAFE(e_data.velocity_tiles_expand_sh);
}


@@ -29,6 +29,8 @@
#include "DNA_lightprobe_types.h"
#include "BKE_camera.h"
struct EEVEE_ShadowCasterBuffer;
struct GPUFrameBuffer;
struct Object;
@@ -256,7 +258,11 @@ typedef struct EEVEE_PassList {
struct DRWPass *sss_translucency_ps;
struct DRWPass *color_downsample_ps;
struct DRWPass *color_downsample_cube_ps;
struct DRWPass *velocity_object;
struct DRWPass *velocity_resolve;
struct DRWPass *velocity_tiles_x;
struct DRWPass *velocity_tiles;
struct DRWPass *velocity_tiles_expand[2];
struct DRWPass *taa_resolve;
struct DRWPass *alpha_checker;
@@ -327,6 +333,8 @@ typedef struct EEVEE_FramebufferList {
struct GPUFrameBuffer *renderpass_fb;
struct GPUFrameBuffer *ao_accum_fb;
struct GPUFrameBuffer *velocity_resolve_fb;
struct GPUFrameBuffer *velocity_fb;
struct GPUFrameBuffer *velocity_tiles_fb[2];
struct GPUFrameBuffer *update_noise_fb;
@@ -556,6 +564,41 @@ enum {
PROBE_UPDATE_ALL = 0xFFFFFF,
};
/* ************** MOTION BLUR ************ */
#define MB_PREV 0
#define MB_NEXT 1
#define MB_CURR 2
typedef struct EEVEE_MotionBlurData {
struct GHash *object;
struct GHash *geom;
struct {
float viewmat[4][4];
float persmat[4][4];
float persinv[4][4];
} camera[3];
} EEVEE_MotionBlurData;
typedef struct EEVEE_ObjectKey {
/** Object or source object for duplis */
struct Object *ob;
/** Parent object for duplis */
struct Object *parent;
/** Dupli objects recursive unique identifier */
int id[16]; /* 2*MAX_DUPLI_RECUR */
} EEVEE_ObjectKey;
typedef struct EEVEE_ObjectMotionData {
float obmat[3][4][4];
} EEVEE_ObjectMotionData;
typedef struct EEVEE_GeometryMotionData {
struct GPUBatch *batch; /* Batch for time = t. */
struct GPUVertBuf *vbo[2]; /* Vbo for time = t +/- step. */
int use_deform; /* To disable deform mb if vertcount mismatch. */
} EEVEE_GeometryMotionData;
/* ************ EFFECTS DATA ************* */
typedef enum EEVEE_EffectsFlag {
@@ -609,7 +652,7 @@ typedef struct EEVEE_EffectsInfo {
float taa_alpha;
bool prev_drw_support;
bool prev_is_navigating;
float prev_drw_persmat[4][4];
float prev_drw_persmat[4][4]; /* Used for checking view validity and reprojection. */
struct DRWView *taa_view;
/* Ambient Occlusion */
int ao_depth_layer;
@@ -617,15 +660,25 @@ typedef struct EEVEE_EffectsInfo {
struct GPUTexture *gtao_horizons; /* Textures from pool */
struct GPUTexture *gtao_horizons_debug;
/* Motion Blur */
float current_world_to_ndc[4][4];
float current_ndc_to_world[4][4];
float current_world_to_ndc[4][4];
float current_world_to_view[4][4];
float past_world_to_ndc[4][4];
int motion_blur_samples;
bool motion_blur_mat_cached;
float past_world_to_view[4][4];
CameraParams past_cam_params;
CameraParams current_cam_params;
float motion_blur_sample_offset;
char motion_blur_step; /* Which step we are evaluating. */
int motion_blur_max; /* Maximum distance in pixels a motion-blurred pixel can cover. */
float motion_blur_near_far[2]; /* Camera near/far clip distances (positive). */
bool cam_params_init;
/* TODO(fclem) Only used in render mode for now.
* This is because we are missing a per scene persistent place to hold this. */
struct EEVEE_MotionBlurData motion_blur;
/* Velocity Pass */
float velocity_curr_persinv[4][4];
float velocity_past_persmat[4][4];
struct GPUTexture *velocity_tx; /* Texture from pool */
struct GPUTexture *velocity_tiles_x_tx;
struct GPUTexture *velocity_tiles_tx;
/* Depth Of Field */
float dof_near_far[2];
float dof_params[2];
@@ -883,12 +936,17 @@ typedef struct EEVEE_PrivateData {
} EEVEE_PrivateData; /* Transient data */
/* eevee_data.c */
void EEVEE_motion_blur_data_init(EEVEE_MotionBlurData *mb);
void EEVEE_motion_blur_data_free(EEVEE_MotionBlurData *mb);
void EEVEE_view_layer_data_free(void *sldata);
EEVEE_ViewLayerData *EEVEE_view_layer_data_get(void);
EEVEE_ViewLayerData *EEVEE_view_layer_data_ensure_ex(struct ViewLayer *view_layer);
EEVEE_ViewLayerData *EEVEE_view_layer_data_ensure(void);
EEVEE_ObjectEngineData *EEVEE_object_data_get(Object *ob);
EEVEE_ObjectEngineData *EEVEE_object_data_ensure(Object *ob);
EEVEE_ObjectMotionData *EEVEE_motion_blur_object_data_get(EEVEE_MotionBlurData *mb, Object *ob);
EEVEE_GeometryMotionData *EEVEE_motion_blur_geometry_data_get(EEVEE_MotionBlurData *mb,
Object *ob);
EEVEE_LightProbeEngineData *EEVEE_lightprobe_data_get(Object *ob);
EEVEE_LightProbeEngineData *EEVEE_lightprobe_data_ensure(Object *ob);
EEVEE_LightEngineData *EEVEE_light_data_get(Object *ob);
@@ -1113,7 +1171,10 @@ void EEVEE_subsurface_free(void);
/* eevee_motion_blur.c */
int EEVEE_motion_blur_init(EEVEE_ViewLayerData *sldata, EEVEE_Data *vedata, Object *camera);
void EEVEE_motion_blur_step_set(EEVEE_Data *vedata, int step);
void EEVEE_motion_blur_cache_init(EEVEE_ViewLayerData *sldata, EEVEE_Data *vedata);
void EEVEE_motion_blur_cache_populate(EEVEE_ViewLayerData *sldata, EEVEE_Data *vedata, Object *ob);
void EEVEE_motion_blur_cache_finish(EEVEE_Data *vedata);
void EEVEE_motion_blur_draw(EEVEE_Data *vedata);
void EEVEE_motion_blur_free(void);
@@ -1194,6 +1255,7 @@ void EEVEE_render_draw(EEVEE_Data *vedata,
void EEVEE_render_update_passes(struct RenderEngine *engine,
struct Scene *scene,
struct ViewLayer *view_layer);
bool EEVEE_render_do_motion_blur(const struct Depsgraph *depsgraph);
/** eevee_lookdev.c */
void EEVEE_lookdev_cache_init(EEVEE_Data *vedata,


@@ -46,6 +46,12 @@
#include "eevee_private.h"
bool EEVEE_render_do_motion_blur(const struct Depsgraph *depsgraph)
{
Scene *scene = DEG_get_evaluated_scene(depsgraph);
return (scene->eevee.flag & SCE_EEVEE_MOTION_BLUR_ENABLED) != 0;
}
/* Return true if init properly. */
bool EEVEE_render_init(EEVEE_Data *ved, RenderEngine *engine, struct Depsgraph *depsgraph)
{
@@ -144,6 +150,7 @@ bool EEVEE_render_init(EEVEE_Data *ved, RenderEngine *engine, struct Depsgraph *
DRWView *view = DRW_view_create(viewmat, winmat, NULL, NULL, NULL);
DRW_view_camtexco_set(view, camtexcofac);
DRW_view_reset();
DRW_view_default_set(view);
DRW_view_set_active(view);


@@ -241,7 +241,6 @@ int EEVEE_temporal_sampling_init(EEVEE_ViewLayerData *UNUSED(sldata), EEVEE_Data
DRW_view_persmat_get(NULL, persmat, false);
view_is_valid = view_is_valid && compare_m4m4(persmat, effects->prev_drw_persmat, FLT_MIN);
copy_m4_m4(effects->prev_drw_persmat, persmat);
/* Prevent ghosting from probe data. */
view_is_valid = view_is_valid && (effects->prev_drw_support == DRW_state_draw_support()) &&
@@ -283,7 +282,7 @@ void EEVEE_temporal_sampling_cache_init(EEVEE_ViewLayerData *sldata, EEVEE_Data
EEVEE_TextureList *txl = vedata->txl;
EEVEE_EffectsInfo *effects = stl->effects;
if ((effects->enabled_effects & (EFFECT_TAA | EFFECT_TAA_REPROJECT)) != 0) {
if (effects->enabled_effects & EFFECT_TAA) {
struct GPUShader *sh = EEVEE_shaders_taa_resolve_sh_get(effects->enabled_effects);
DRW_PASS_CREATE(psl->taa_resolve, DRW_STATE_WRITE_COLOR);
@@ -295,8 +294,9 @@ void EEVEE_temporal_sampling_cache_init(EEVEE_ViewLayerData *sldata, EEVEE_Data
DRW_shgroup_uniform_block(grp, "renderpass_block", sldata->renderpass_ubo.combined);
if (effects->enabled_effects & EFFECT_TAA_REPROJECT) {
// DefaultTextureList *dtxl = DRW_viewport_texture_list_get();
DRW_shgroup_uniform_texture_ref(grp, "velocityBuffer", &effects->velocity_tx);
DefaultTextureList *dtxl = DRW_viewport_texture_list_get();
DRW_shgroup_uniform_texture_ref(grp, "depthBuffer", &dtxl->depth);
DRW_shgroup_uniform_mat4(grp, "prevViewProjectionMatrix", effects->prev_drw_persmat);
}
else {
DRW_shgroup_uniform_float(grp, "alpha", &effects->taa_alpha, 1);
@@ -364,5 +364,7 @@ void EEVEE_temporal_sampling_draw(EEVEE_Data *vedata)
DRW_viewport_request_redraw();
}
}
DRW_view_persmat_get(NULL, effects->prev_drw_persmat, false);
}
}


@@ -1,64 +1,235 @@
/*
* Based on:
* A Fast and Stable Feature-Aware Motion Blur Filter
* by Jean-Philippe Guertin, Morgan McGuire, Derek Nowrouzezahrai
*
* With modification from the presentation:
* Next Generation Post Processing in Call of Duty Advanced Warfare
* by Jorge Jimenez
*/
uniform sampler2D colorBuffer;
uniform sampler2D depthBuffer;
uniform sampler2D velocityBuffer;
uniform sampler2D tileMaxBuffer;
/* current frame */
uniform mat4 currInvViewProjMatrix;
#define KERNEL 8
/* past frame */
uniform mat4 pastViewProjMatrix;
/* TODO(fclem) deduplicate this code. */
uniform sampler2DArray utilTex;
#define LUT_SIZE 64
#define texelfetch_noise_tex(coord) texelFetch(utilTex, ivec3(ivec2(coord) % LUT_SIZE, 2.0), 0)
uniform float depthScale;
uniform ivec2 tileBufferSize;
uniform vec2 viewportSize;
uniform vec2 viewportSizeInv;
uniform bool isPerspective;
uniform vec2 nearFar; /* Near & far view depth values. */
#define linear_depth(z) \
((isPerspective) ? (nearFar.x * nearFar.y) / (z * (nearFar.x - nearFar.y) + nearFar.y) : \
z * (nearFar.y - nearFar.x) + nearFar.x) /* Only true for camera view! */
in vec4 uvcoordsvar;
out vec4 FragColor;
out vec4 fragColor;
#define MAX_SAMPLE 64
#define saturate(a) clamp(a, 0.0, 1.0)
uniform int samples;
float wang_hash_noise(uint s)
vec2 spread_compare(float center_motion_length, float sample_motion_length, float offset_length)
{
uint seed = (uint(gl_FragCoord.x) * 1664525u + uint(gl_FragCoord.y)) + s;
return saturate(vec2(center_motion_length, sample_motion_length) - offset_length + 1.0);
}
seed = (seed ^ 61u) ^ (seed >> 16u);
seed *= 9u;
seed = seed ^ (seed >> 4u);
seed *= 0x27d4eb2du;
seed = seed ^ (seed >> 15u);
vec2 depth_compare(float center_depth, float sample_depth)
{
return saturate(0.5 + vec2(depthScale, -depthScale) * (sample_depth - center_depth));
}
float value = float(seed);
value *= 1.0 / 4294967296.0;
return fract(value);
/* Kill contribution if not going the same direction. */
float dir_compare(vec2 offset, vec2 sample_motion, float sample_motion_length)
{
if (sample_motion_length < 0.5) {
return 1.0;
}
return (dot(offset, sample_motion) > 0.0) ? 1.0 : 0.0;
}
/* Return background (x) and foreground (y) weights. */
vec2 sample_weights(float center_depth,
float sample_depth,
float center_motion_length,
float sample_motion_length,
float offset_length)
{
/* Classify foreground/background. */
vec2 depth_weight = depth_compare(center_depth, sample_depth);
/* Weight if sample is overlapping or under the center pixel. */
vec2 spread_weight = spread_compare(center_motion_length, sample_motion_length, offset_length);
return depth_weight * spread_weight;
}
vec4 decode_velocity(vec4 velocity)
{
velocity = velocity * 2.0 - 1.0;
/* Needed to match Cycles. Can't find why... (fclem) */
velocity *= 0.5;
/* Transpose to pixelspace. */
velocity *= viewportSize.xyxy;
return velocity;
}
vec4 sample_velocity(vec2 uv)
{
vec4 data = texture(velocityBuffer, uv);
return decode_velocity(data);
}
vec2 sample_velocity(vec2 uv, const bool next)
{
vec4 data = sample_velocity(uv);
data.xy = (next ? data.zw : data.xy);
return data.xy;
}
void gather_sample(vec2 screen_uv,
float center_depth,
float center_motion_len,
vec2 offset,
float offset_len,
const bool next,
inout vec4 accum,
inout vec4 accum_bg,
inout vec3 w_accum)
{
vec2 sample_uv = screen_uv - offset * viewportSizeInv;
vec2 sample_motion = sample_velocity(sample_uv, next);
float sample_motion_len = length(sample_motion);
float sample_depth = linear_depth(texture(depthBuffer, sample_uv).r);
vec4 col = textureLod(colorBuffer, sample_uv, 0.0);
vec3 weights;
weights.xy = sample_weights(
center_depth, sample_depth, center_motion_len, sample_motion_len, offset_len);
weights.z = dir_compare(offset, sample_motion, sample_motion_len);
weights.xy *= weights.z;
accum += col * weights.y;
accum_bg += col * weights.x;
w_accum += weights;
}
void gather_blur(vec2 screen_uv,
vec2 center_motion,
float center_depth,
vec2 max_motion,
float ofs,
const bool next,
inout vec4 accum,
inout vec4 accum_bg,
inout vec3 w_accum)
{
float center_motion_len = length(center_motion);
float max_motion_len = length(max_motion);
/* Tile boundary randomization can fetch a tile with less motion than this pixel.
 * Fix this by overriding max_motion. */
if (max_motion_len < center_motion_len) {
max_motion_len = center_motion_len;
max_motion = center_motion;
}
if (max_motion_len < 0.5) {
return;
}
int i;
float t, inc = 1.0 / float(KERNEL);
for (i = 0, t = ofs * inc; i < KERNEL; i++, t += inc) {
gather_sample(screen_uv,
center_depth,
center_motion_len,
max_motion * t,
max_motion_len * t,
next,
accum,
accum_bg,
w_accum);
}
if (center_motion_len < 0.5) {
return;
}
for (i = 0, t = ofs * inc; i < KERNEL; i++, t += inc) {
/* Also sample in the center motion direction.
 * Allows recovering motion where foreground and
 * background motions conflict. */
gather_sample(screen_uv,
center_depth,
center_motion_len,
center_motion * t,
center_motion_len * t,
next,
accum,
accum_bg,
w_accum);
}
}
void main()
{
vec3 ndc_pos;
ndc_pos.xy = uvcoordsvar.xy;
ndc_pos.z = texture(depthBuffer, uvcoordsvar.xy).x;
vec2 uv = uvcoordsvar.xy;
float inv_samples = 1.0 / float(samples);
float noise = 2.0 * wang_hash_noise(0u) * inv_samples;
/* Data of the center pixel of the gather (target). */
float center_depth = linear_depth(texture(depthBuffer, uv).r);
vec4 center_motion = sample_velocity(uv);
vec4 center_color = textureLod(colorBuffer, uv, 0.0);
/* Normalized Device Coordinates are in [-1, +1]. */
ndc_pos = ndc_pos * 2.0 - 1.0;
vec2 rand = texelfetch_noise_tex(gl_FragCoord.xy).xy;
vec4 p = currInvViewProjMatrix * vec4(ndc_pos, 1.0);
vec3 world_pos = p.xyz / p.w; /* Perspective divide */
/* Randomize the tile boundary to avoid ugly discontinuities. Randomize 1/4th of the tile.
 * Note this randomizes only in one direction, but in practice it's enough. */
rand.x = rand.x * 2.0 - 1.0;
ivec2 tile = ivec2(gl_FragCoord.xy + rand.x * float(EEVEE_VELOCITY_TILE_SIZE) * 0.25) /
EEVEE_VELOCITY_TILE_SIZE;
tile = clamp(tile, ivec2(0), tileBufferSize - 1);
vec4 max_motion = decode_velocity(texelFetch(tileMaxBuffer, tile, 0));
/* Now find where this pixel position was
 * inside the past camera viewport. */
vec4 old_ndc = pastViewProjMatrix * vec4(world_pos, 1.0);
old_ndc.xyz /= old_ndc.w; /* Perspective divide */
/* First (center) sample: time = T */
/* x: Background, y: Foreground, z: dir. */
vec3 w_accum = vec3(0.0, 0.0, 1.0);
vec4 accum_bg = vec4(0.0);
vec4 accum = vec4(0.0);
/* First linear gather. time = [T - delta, T] */
gather_blur(
uv, center_motion.xy, center_depth, max_motion.xy, rand.y, false, accum, accum_bg, w_accum);
/* Second linear gather. time = [T, T + delta] */
gather_blur(
uv, center_motion.zw, center_depth, max_motion.zw, rand.y, true, accum, accum_bg, w_accum);
vec2 motion = (ndc_pos.xy - old_ndc.xy) * 0.25; /* 0.25 fit cycles ref */
#if 1
/* Avoid division by 0.0. */
float w = 1.0 / (50.0 * float(KERNEL) * 4.0);
accum_bg += center_color * w;
w_accum.x += w;
/* Note: In Jimenez's presentation, they used the center sample.
 * We use the background color as it contains more information for
 * foreground elements that do not have enough weight.
 * Yields better blur in complex motion. */
center_color = accum_bg / w_accum.x;
#endif
/* Merge background. */
accum += accum_bg;
w_accum.y += w_accum.x;
/* Balance accumulation for failed samples.
 * We replace the missing foreground by the background. */
float blend_fac = saturate(1.0 - w_accum.y / w_accum.z);
fragColor = (accum / w_accum.z) + center_color * blend_fac;
float inc = 2.0 * inv_samples;
float i = -1.0 + noise;
FragColor = vec4(0.0);
for (int j = 0; j < samples && j < MAX_SAMPLE; j++) {
FragColor += textureLod(colorBuffer, uvcoordsvar.xy + motion * i, 0.0) * inv_samples;
i += inc;
}
#if 0 /* For debugging. */
fragColor.rgb = fragColor.ggg;
fragColor.rg += max_motion.xy;
#endif
}


@@ -1,6 +1,6 @@
uniform sampler2D colorHistoryBuffer;
uniform sampler2D velocityBuffer;
uniform mat4 prevViewProjectionMatrix;
out vec4 FragColor;
@@ -38,16 +38,19 @@ vec3 clip_to_aabb(vec3 color, vec3 minimum, vec3 maximum, vec3 average)
*/
void main()
{
ivec2 texel = ivec2(gl_FragCoord.xy);
vec2 motion = texelFetch(velocityBuffer, texel, 0).rg;
/* Decode from unsigned normalized 16bit texture. */
motion = motion * 2.0 - 1.0;
/* Compute pixel position in previous frame. */
vec2 screen_res = vec2(textureSize(colorBuffer, 0).xy);
vec2 uv = gl_FragCoord.xy / screen_res;
vec2 uv_history = uv - motion;
ivec2 texel = ivec2(gl_FragCoord.xy);
/* Compute pixel position in previous frame. */
float depth = textureLod(depthBuffer, uv, 0.0).r;
vec3 pos = get_world_space_from_depth(uv, depth);
vec2 uv_history = project_point(prevViewProjectionMatrix, pos).xy * 0.5 + 0.5;
/* HACK: Reject lookdev spheres from TAA reprojection. */
if (depth == 0.0) {
uv_history = uv;
}
ivec2 texel_history = ivec2(uv_history * screen_res);
vec4 color_history = textureLod(colorHistoryBuffer, uv_history, 0.0);


@@ -1,8 +1,9 @@
uniform mat4 currPersinv;
uniform mat4 pastPersmat;
uniform mat4 prevViewProjMatrix;
uniform mat4 currViewProjMatrixInv;
uniform mat4 nextViewProjMatrix;
out vec2 outData;
out vec4 outData;
void main()
{
@@ -12,13 +13,12 @@ void main()
float depth = texelFetch(depthBuffer, texel, 0).r;
vec3 world_position = project_point(currPersinv, vec3(uv, depth) * 2.0 - 1.0);
vec2 uv_history = project_point(pastPersmat, world_position).xy * 0.5 + 0.5;
vec3 world_position = project_point(currViewProjMatrixInv, vec3(uv, depth) * 2.0 - 1.0);
vec2 uv_prev = project_point(prevViewProjMatrix, world_position).xy * 0.5 + 0.5;
vec2 uv_next = project_point(nextViewProjMatrix, world_position).xy * 0.5 + 0.5;
outData = uv - uv_history;
/* HACK: Reject lookdev spheres from TAA reprojection. */
outData = (depth > 0.0) ? outData : vec2(0.0);
outData.xy = uv_prev - uv;
outData.zw = uv_next - uv;
/* Encode to unsigned normalized 16bit texture. */
outData = outData * 0.5 + 0.5;


@@ -0,0 +1,151 @@
/**
* Shaders that down-sample the velocity buffer.
*
* Based on:
* A Fast and Stable Feature-Aware Motion Blur Filter
* by Jean-Philippe Guertin, Morgan McGuire, Derek Nowrouzezahrai
*
* Adapted from G3D Innovation Engine implementation.
*/
uniform sampler2D velocityBuffer;
uniform vec2 viewportSize;
uniform vec2 viewportSizeInv;
uniform ivec2 velocityBufferSize;
out vec4 tileMaxVelocity;
vec4 sample_velocity(ivec2 texel)
{
texel = clamp(texel, ivec2(0), velocityBufferSize - 1);
vec4 data = texelFetch(velocityBuffer, texel, 0);
/* Decode data. */
return (data * 2.0 - 1.0) * viewportSize.xyxy;
}
vec4 encode_velocity(vec4 velocity)
{
return velocity * viewportSizeInv.xyxy * 0.5 + 0.5;
}
#ifdef TILE_GATHER
uniform ivec2 gatherStep;
void main()
{
vec4 max_motion = vec4(0.0);
float max_motion_len_sqr_prev = 0.0;
float max_motion_len_sqr_next = 0.0;
ivec2 texel = ivec2(gl_FragCoord.xy);
texel = texel * gatherStep.yx + texel * EEVEE_VELOCITY_TILE_SIZE * gatherStep;
for (int i = 0; i < EEVEE_VELOCITY_TILE_SIZE; ++i) {
vec4 motion = sample_velocity(texel + i * gatherStep);
float motion_len_sqr_prev = dot(motion.xy, motion.xy);
float motion_len_sqr_next = dot(motion.zw, motion.zw);
if (motion_len_sqr_prev > max_motion_len_sqr_prev) {
max_motion_len_sqr_prev = motion_len_sqr_prev;
max_motion.xy = motion.xy;
}
if (motion_len_sqr_next > max_motion_len_sqr_next) {
max_motion_len_sqr_next = motion_len_sqr_next;
max_motion.zw = motion.zw;
}
}
tileMaxVelocity = encode_velocity(max_motion);
}
#else /* TILE_EXPANSION */
bool neighbor_affect_this_tile(ivec2 offset, vec2 velocity)
{
/* Manhattan distance to the tiles, which is used for
* differentiating corners versus middle blocks */
float displacement = float(abs(offset.x) + abs(offset.y));
/**
* Relative sign on each axis of the offset compared
* to the velocity for that tile. In order for a tile
* to affect the center tile, it must have a
* neighborhood velocity in which x and y both have
* identical or both have opposite signs relative to
* offset. If the offset coordinate is zero then
* velocity is irrelevant.
**/
vec2 point = sign(offset * velocity);
float dist = (point.x + point.y);
/**
* Here's an example of the logic for this code.
* In this diagram, the upper-left tile has offset = (-1, -1).
* V1 is velocity = (1, -2). point in this case = (-1, 1), and therefore dist = 0,
* so the upper-left tile does not affect the center.
*
* Now, look at another case. V2 = (-1, -2). point = (1, 1), so dist = 2 and the tile
* does affect the center.
*
* V2(-1,-2) V1(1, -2)
* \ /
* \ /
* \/___ ____ ____
* (-1, -1)| | | |
* |____|____|____|
* | | | |
* |____|____|____|
* | | | |
* |____|____|____|
**/
return (abs(dist) == displacement);
}
/**
* Only gather neighborhood velocity into tiles that could be affected by it.
* In the general case, only six of the eight neighbors contribute:
*
* This tile can't possibly be affected by the center one
* |
* v
* ____ ____ ____
* | | ///|/// |
* |____|////|//__|
* | |////|/ |
* |___/|////|____|
* | //|////| | <--- This tile can't possibly be affected by the center one
* |_///|///_|____|
**/
void main()
{
vec4 max_motion = vec4(0.0);
float max_motion_len_sqr_prev = -1.0;
float max_motion_len_sqr_next = -1.0;
ivec2 tile = ivec2(gl_FragCoord.xy);
ivec2 offset = ivec2(0);
for (offset.y = -1; offset.y <= 1; ++offset.y) {
for (offset.x = -1; offset.x <= 1; ++offset.x) {
vec4 motion = sample_velocity(tile + offset);
float motion_len_sqr_prev = dot(motion.xy, motion.xy);
float motion_len_sqr_next = dot(motion.zw, motion.zw);
if (motion_len_sqr_prev > max_motion_len_sqr_prev) {
if (neighbor_affect_this_tile(offset, motion.xy)) {
max_motion_len_sqr_prev = motion_len_sqr_prev;
max_motion.xy = motion.xy;
}
}
if (motion_len_sqr_next > max_motion_len_sqr_next) {
if (neighbor_affect_this_tile(offset, motion.zw)) {
max_motion_len_sqr_next = motion_len_sqr_next;
max_motion.zw = motion.zw;
}
}
}
}
tileMaxVelocity = encode_velocity(max_motion);
}
#endif
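The corner-rejection predicate in the expansion branch above can be checked on the CPU against the worked example from its comment. A C translation (helper names are hypothetical):

```c
#include <assert.h>
#include <math.h>
#include <stdbool.h>

static double sign_d(double x)
{
  return (double)((x > 0.0) - (x < 0.0));
}

/* C translation of neighbor_affect_this_tile(): a neighbor at `offset`
 * only affects the center tile if, on every non-zero offset axis, its
 * velocity sign matches (or exactly opposes) the offset sign. */
static bool neighbor_affects_tile(int ox, int oy, double vx, double vy)
{
  double displacement = fabs((double)ox) + fabs((double)oy);
  double dist = sign_d((double)ox * vx) + sign_d((double)oy * vy);
  return fabs(dist) == displacement;
}
```

This reproduces the diagrammed example: with offset (-1, -1), velocity V1 = (1, -2) is rejected while V2 = (-1, -2) affects the center tile.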


@@ -0,0 +1,27 @@
uniform mat4 prevViewProjMatrix;
uniform mat4 currViewProjMatrix;
uniform mat4 nextViewProjMatrix;
in vec3 prevWorldPos;
in vec3 currWorldPos;
in vec3 nextWorldPos;
out vec4 outData;
void main()
{
vec4 prev_wpos = prevViewProjMatrix * vec4(prevWorldPos, 1.0);
vec4 curr_wpos = currViewProjMatrix * vec4(currWorldPos, 1.0);
vec4 next_wpos = nextViewProjMatrix * vec4(nextWorldPos, 1.0);
vec2 prev_uv = (prev_wpos.xy / prev_wpos.w);
vec2 curr_uv = (curr_wpos.xy / curr_wpos.w);
vec2 next_uv = (next_wpos.xy / next_wpos.w);
outData.xy = prev_uv - curr_uv;
outData.zw = next_uv - curr_uv;
/* Encode to unsigned normalized 16bit texture. */
outData = outData * 0.5 + 0.5;
}


@@ -0,0 +1,23 @@
uniform mat4 currModelMatrix;
uniform mat4 prevModelMatrix;
uniform mat4 nextModelMatrix;
uniform bool useDeform;
in vec3 pos;
in vec3 prv; /* Previous frame position. */
in vec3 nxt; /* Next frame position. */
out vec3 currWorldPos;
out vec3 prevWorldPos;
out vec3 nextWorldPos;
void main()
{
prevWorldPos = (prevModelMatrix * vec4(useDeform ? prv : pos, 1.0)).xyz;
currWorldPos = (currModelMatrix * vec4(pos, 1.0)).xyz;
nextWorldPos = (nextModelMatrix * vec4(useDeform ? nxt : pos, 1.0)).xyz;
/* Use jittered projmatrix to be able to match exact sample depth (depth equal test).
* Note that currModelMatrix needs to also be equal to ModelMatrix for the samples to match. */
gl_Position = ViewProjectionMatrix * vec4(currWorldPos, 1.0);
}


@@ -462,6 +462,10 @@ void DRW_shgroup_uniform_texture_ex(DRWShadingGroup *shgroup,
const char *name,
const struct GPUTexture *tex,
eGPUSamplerState sampler_state);
void DRW_shgroup_uniform_texture_ref_ex(DRWShadingGroup *shgroup,
const char *name,
GPUTexture **tex,
eGPUSamplerState sampler_state);
void DRW_shgroup_uniform_texture(DRWShadingGroup *shgroup,
const char *name,
const struct GPUTexture *tex);
@@ -569,7 +573,7 @@ void DRW_view_update_sub(DRWView *view, const float viewmat[4][4], const float w
const DRWView *DRW_view_default_get(void);
void DRW_view_default_set(DRWView *view);
void DRW_view_reset(void);
void DRW_view_set_active(DRWView *view);
void DRW_view_clip_planes_set(DRWView *view, float (*planes)[4], int plane_len);


@@ -888,6 +888,32 @@ GPUBatch *DRW_cache_object_surface_get(Object *ob)
}
}
/* Returns the vertbuf used by shaded surface batch. */
GPUVertBuf *DRW_cache_object_pos_vertbuf_get(Object *ob)
{
Mesh *me = BKE_object_get_evaluated_mesh(ob);
short type = (me != NULL) ? OB_MESH : ob->type;
switch (type) {
case OB_MESH:
return DRW_mesh_batch_cache_pos_vertbuf_get((me != NULL) ? me : ob->data);
case OB_CURVE:
case OB_SURF:
case OB_FONT:
return DRW_curve_batch_cache_pos_vertbuf_get(ob->data);
case OB_MBALL:
return DRW_mball_batch_cache_pos_vertbuf_get(ob);
case OB_HAIR:
return NULL;
case OB_POINTCLOUD:
return NULL;
case OB_VOLUME:
return NULL;
default:
return NULL;
}
}
int DRW_cache_object_material_count_get(struct Object *ob)
{
Mesh *me = BKE_object_get_evaluated_mesh(ob);


@@ -63,6 +63,8 @@ struct GPUBatch **DRW_cache_object_surface_material_get(struct Object *ob,
struct GPUBatch *DRW_cache_object_face_wireframe_get(struct Object *ob);
int DRW_cache_object_material_count_get(struct Object *ob);
struct GPUVertBuf *DRW_cache_object_pos_vertbuf_get(struct Object *ob);
/* Empties */
struct GPUBatch *DRW_cache_plain_axes_get(void);
struct GPUBatch *DRW_cache_single_arrow_get(void);


@@ -199,6 +199,11 @@ struct GPUBatch *DRW_mesh_batch_cache_get_edituv_facedots(struct Mesh *me);
struct GPUBatch *DRW_mesh_batch_cache_get_uv_edges(struct Mesh *me);
struct GPUBatch *DRW_mesh_batch_cache_get_edit_mesh_analysis(struct Mesh *me);
/* For direct data access. */
struct GPUVertBuf *DRW_mesh_batch_cache_pos_vertbuf_get(struct Mesh *me);
struct GPUVertBuf *DRW_curve_batch_cache_pos_vertbuf_get(struct Curve *cu);
struct GPUVertBuf *DRW_mball_batch_cache_pos_vertbuf_get(struct Object *ob);
int DRW_mesh_material_count_get(struct Mesh *me);
/* Edit mesh bitflags (is this the right place?) */

View File

@@ -898,6 +898,16 @@ GPUBatch **DRW_curve_batch_cache_get_surface_shaded(struct Curve *cu,
return cache->surf_per_mat;
}
GPUVertBuf *DRW_curve_batch_cache_pos_vertbuf_get(struct Curve *cu)
{
CurveBatchCache *cache = curve_batch_cache_get(cu);
/* Request surface to trigger the vbo filling. Otherwise it may do nothing. */
DRW_batch_request(&cache->batch.surfaces);
DRW_vbo_request(NULL, &cache->ordered.loop_pos_nor);
return cache->ordered.loop_pos_nor;
}
GPUBatch *DRW_curve_batch_cache_get_wireframes_face(Curve *cu)
{
CurveBatchCache *cache = curve_batch_cache_get(cu);

View File

@@ -806,6 +806,23 @@ int DRW_mesh_material_count_get(Mesh *me)
/** \} */
/* ---------------------------------------------------------------------- */
/** \name Edit Mode API
* \{ */
GPUVertBuf *DRW_mesh_batch_cache_pos_vertbuf_get(Mesh *me)
{
MeshBatchCache *cache = mesh_batch_cache_get(me);
/* Request surface to trigger the vbo filling. Otherwise it may do nothing. */
mesh_batch_cache_add_request(cache, MBC_SURFACE);
DRW_batch_request(&cache->batch.surface);
DRW_vbo_request(NULL, &cache->final.vbo.pos_nor);
return cache->final.vbo.pos_nor;
}
/** \} */
/* ---------------------------------------------------------------------- */
/** \name Edit Mode API
* \{ */

View File

@@ -274,6 +274,18 @@ struct GPUBatch *DRW_metaball_batch_cache_get_edge_detection(struct Object *ob,
return cache->edge_detection;
}
struct GPUVertBuf *DRW_mball_batch_cache_pos_vertbuf_get(Object *ob)
{
if (!BKE_mball_is_basis(ob)) {
return NULL;
}
MetaBall *mb = ob->data;
MetaBallBatchCache *cache = metaball_batch_cache_get(mb);
return mball_batch_cache_get_pos_and_normals(ob, cache);
}
int DRW_metaball_material_count_get(MetaBall *mb)
{
return max_ii(1, mb->totcol);

View File

@@ -90,17 +90,19 @@ BLI_INLINE void DRW_vbo_request(GPUBatch *batch, GPUVertBuf **vbo)
if (*vbo == NULL) {
*vbo = MEM_callocN(sizeof(GPUVertBuf), "GPUVertBuf");
}
-/* HACK set first vbo if not init. */
-if (batch->verts[0] == NULL) {
-GPU_batch_vao_cache_clear(batch);
-batch->verts[0] = *vbo;
-}
-else {
-/* HACK: bypass assert */
-int vbo_vert_len = (*vbo)->vertex_len;
-(*vbo)->vertex_len = batch->verts[0]->vertex_len;
-GPU_batch_vertbuf_add(batch, *vbo);
-(*vbo)->vertex_len = vbo_vert_len;
-}
+if (batch != NULL) {
+/* HACK set first vbo if not init. */
+if (batch->verts[0] == NULL) {
+GPU_batch_vao_cache_clear(batch);
+batch->verts[0] = *vbo;
+}
+else {
+/* HACK: bypass assert */
+int vbo_vert_len = (*vbo)->vertex_len;
+(*vbo)->vertex_len = batch->verts[0]->vertex_len;
+GPU_batch_vertbuf_add(batch, *vbo);
+(*vbo)->vertex_len = vbo_vert_len;
+}
+}
}

View File

@@ -1629,13 +1629,6 @@ bool DRW_render_check_grease_pencil(Depsgraph *depsgraph)
return false;
}
-static void drw_view_reset(void)
-{
-DST.view_default = NULL;
-DST.view_active = NULL;
-DST.view_previous = NULL;
-}
static void DRW_render_gpencil_to_image(RenderEngine *engine,
struct RenderLayer *render_layer,
const rcti *rect)
@@ -1713,7 +1706,7 @@ void DRW_render_gpencil(struct RenderEngine *engine, struct Depsgraph *depsgraph
for (RenderView *render_view = render_result->views.first; render_view != NULL;
render_view = render_view->next) {
RE_SetActiveRenderView(render, render_view->name);
-drw_view_reset();
+DRW_view_reset();
DST.buffer_finish_called = false;
DRW_render_gpencil_to_image(engine, render_layer, &render_rect);
}
@@ -1821,7 +1814,7 @@ void DRW_render_to_image(RenderEngine *engine, struct Depsgraph *depsgraph)
for (RenderView *render_view = render_result->views.first; render_view != NULL;
render_view = render_view->next) {
RE_SetActiveRenderView(render, render_view->name);
-drw_view_reset();
+DRW_view_reset();
engine_type->draw_engine->render_to_image(data, engine, render_layer, &render_rect);
DST.buffer_finish_called = false;
}

View File

@@ -262,11 +262,19 @@ void DRW_shgroup_uniform_texture(DRWShadingGroup *shgroup, const char *name, con
DRW_shgroup_uniform_texture_ex(shgroup, name, tex, GPU_SAMPLER_MAX);
}
-void DRW_shgroup_uniform_texture_ref(DRWShadingGroup *shgroup, const char *name, GPUTexture **tex)
+void DRW_shgroup_uniform_texture_ref_ex(DRWShadingGroup *shgroup,
+const char *name,
+GPUTexture **tex,
+eGPUSamplerState sampler_state)
{
BLI_assert(tex != NULL);
int loc = GPU_shader_get_texture_binding(shgroup->shader, name);
-drw_shgroup_uniform_create_ex(shgroup, loc, DRW_UNIFORM_TEXTURE_REF, tex, GPU_SAMPLER_MAX, 0, 1);
+drw_shgroup_uniform_create_ex(shgroup, loc, DRW_UNIFORM_TEXTURE_REF, tex, sampler_state, 0, 1);
}
+void DRW_shgroup_uniform_texture_ref(DRWShadingGroup *shgroup, const char *name, GPUTexture **tex)
+{
+DRW_shgroup_uniform_texture_ref_ex(shgroup, name, tex, GPU_SAMPLER_MAX);
+}
void DRW_shgroup_uniform_block(DRWShadingGroup *shgroup,
@@ -1786,6 +1794,14 @@ const DRWView *DRW_view_default_get(void)
return DST.view_default;
}
/* WARNING: Only use in render AND only if you are going to set view_default again. */
void DRW_view_reset(void)
{
DST.view_default = NULL;
DST.view_active = NULL;
DST.view_previous = NULL;
}
/* MUST only be called once per render and only in render mode. Sets default view. */
void DRW_view_default_set(DRWView *view)
{

View File

@@ -80,6 +80,8 @@ void GPU_vertbuf_init_with_format_ex(GPUVertBuf *, const GPUVertFormat *, GPUUsa
#define GPU_vertbuf_init_with_format(verts, format) \
GPU_vertbuf_init_with_format_ex(verts, format, GPU_USAGE_STATIC)
GPUVertBuf *GPU_vertbuf_duplicate(GPUVertBuf *verts);
uint GPU_vertbuf_size_get(const GPUVertBuf *);
void GPU_vertbuf_data_alloc(GPUVertBuf *, uint v_len);
void GPU_vertbuf_data_resize(GPUVertBuf *, uint v_len);

View File

@@ -124,6 +124,10 @@ BLI_INLINE const char *GPU_vertformat_attr_name_get(const GPUVertFormat *format,
return format->names + attr->names[n_idx];
}
/* WARNING: Can only rename using a string with the same character count.
 * WARNING: This removes all other aliases of this attribute. */
void GPU_vertformat_attr_rename(GPUVertFormat *format, int attr, const char *new_name);
void GPU_vertformat_safe_attr_name(const char *attr_name, char *r_safe_name, uint max_len);
/* format conversion */

View File

@@ -85,6 +85,35 @@ void GPU_vertbuf_init_with_format_ex(GPUVertBuf *verts,
}
}
GPUVertBuf *GPU_vertbuf_duplicate(GPUVertBuf *verts)
{
GPUVertBuf *verts_dst = GPU_vertbuf_create(GPU_USAGE_STATIC);
/* Full copy. */
*verts_dst = *verts;
GPU_vertformat_copy(&verts_dst->format, &verts->format);
if (verts->vbo_id) {
uint buffer_sz = GPU_vertbuf_size_get(verts);
verts_dst->vbo_id = GPU_buf_alloc();
glBindBuffer(GL_COPY_READ_BUFFER, verts->vbo_id);
glBindBuffer(GL_COPY_WRITE_BUFFER, verts_dst->vbo_id);
glBufferData(GL_COPY_WRITE_BUFFER, buffer_sz, NULL, convert_usage_type_to_gl(verts->usage));
glCopyBufferSubData(GL_COPY_READ_BUFFER, GL_COPY_WRITE_BUFFER, 0, 0, buffer_sz);
#if VRAM_USAGE
vbo_memory_usage += GPU_vertbuf_size_get(verts);
#endif
}
if (verts->data) {
verts_dst->data = MEM_dupallocN(verts->data);
}
return verts_dst;
}
/** Same as discard but does not free. */
void GPU_vertbuf_clear(GPUVertBuf *verts)
{

View File

@@ -262,6 +262,20 @@ int GPU_vertformat_attr_id_get(const GPUVertFormat *format, const char *name)
return -1;
}
void GPU_vertformat_attr_rename(GPUVertFormat *format, int attr_id, const char *new_name)
{
BLI_assert(attr_id > -1 && attr_id < format->attr_len);
GPUVertAttr *attr = &format->attrs[attr_id];
char *attr_name = (char *)GPU_vertformat_attr_name_get(format, attr, 0);
BLI_assert(strlen(attr_name) == strlen(new_name));
int i = 0;
while (attr_name[i] != '\0') {
attr_name[i] = new_name[i];
i++;
}
attr->name_len = 1;
}
/* Encode 8 original bytes into 11 safe bytes. */
static void safe_bytes(char out[11], const char data[8])
{

View File

@@ -220,6 +220,8 @@
\
.motion_blur_samples = 8, \
.motion_blur_shutter = 0.5f, \
.motion_blur_depth_scale = 100.0f, \
.motion_blur_max = 32, \
\
.shadow_cube_size = 512, \
.shadow_cascade_size = 1024, \

View File

@@ -1628,8 +1628,10 @@ typedef struct SceneEEVEE {
float bloom_radius;
float bloom_clamp;
-int motion_blur_samples;
+int motion_blur_samples DNA_DEPRECATED;
+int motion_blur_max;
float motion_blur_shutter;
+float motion_blur_depth_scale;
int shadow_method DNA_DEPRECATED;
int shadow_cube_size;

View File

@@ -7150,12 +7150,6 @@ static void rna_def_scene_eevee(BlenderRNA *brna)
RNA_def_property_override_flag(prop, PROPOVERRIDE_OVERRIDABLE_LIBRARY);
RNA_def_property_update(prop, NC_SCENE | ND_RENDER_OPTIONS, NULL);
-prop = RNA_def_property(srna, "motion_blur_samples", PROP_INT, PROP_UNSIGNED);
-RNA_def_property_ui_text(prop, "Samples", "Number of samples to take with motion blur");
-RNA_def_property_range(prop, 1, 64);
-RNA_def_property_override_flag(prop, PROPOVERRIDE_OVERRIDABLE_LIBRARY);
-RNA_def_property_update(prop, NC_SCENE | ND_RENDER_OPTIONS, NULL);
prop = RNA_def_property(srna, "motion_blur_shutter", PROP_FLOAT, PROP_FACTOR);
RNA_def_property_ui_text(prop, "Shutter", "Time taken in frames between shutter open and close");
RNA_def_property_range(prop, 0.0f, FLT_MAX);
@@ -7163,6 +7157,23 @@ static void rna_def_scene_eevee(BlenderRNA *brna)
RNA_def_property_override_flag(prop, PROPOVERRIDE_OVERRIDABLE_LIBRARY);
RNA_def_property_update(prop, NC_SCENE | ND_RENDER_OPTIONS, NULL);
prop = RNA_def_property(srna, "motion_blur_depth_scale", PROP_FLOAT, PROP_NONE);
RNA_def_property_ui_text(prop,
"Background Separation",
"Lower values will reduce background"
" bleeding onto foreground elements");
RNA_def_property_range(prop, 0.0f, FLT_MAX);
RNA_def_property_ui_range(prop, 0.01f, 1000.0f, 1, 2);
RNA_def_property_override_flag(prop, PROPOVERRIDE_OVERRIDABLE_LIBRARY);
RNA_def_property_update(prop, NC_SCENE | ND_RENDER_OPTIONS, NULL);
prop = RNA_def_property(srna, "motion_blur_max", PROP_INT, PROP_PIXEL);
RNA_def_property_ui_text(prop, "Max Blur", "Maximum blur distance a pixel can spread over");
RNA_def_property_range(prop, 1, 2048);
RNA_def_property_ui_range(prop, 1, 512, 1, -1);
RNA_def_property_override_flag(prop, PROPOVERRIDE_OVERRIDABLE_LIBRARY);
RNA_def_property_update(prop, NC_SCENE | ND_RENDER_OPTIONS, NULL);
/* Shadows */
prop = RNA_def_property(srna, "shadow_cube_size", PROP_ENUM, PROP_NONE);
RNA_def_property_enum_items(prop, eevee_shadow_size_items);