Commit Graph

131 Commits

Author SHA1 Message Date
Jason Fielder
4527dd1ce4 Metal: MTLMemoryManager implementation includes functions which manage allocation of MTLBuffer resources.
The memory manager includes both a GPUContext-local manager, which allocates per-context resources such as circular scratch buffers for temporary data (uniform updates and resource staging), and a GPUContext-global memory manager, which features a pooled memory allocator for efficient re-use of resources, reducing the CPU overhead of frequent memory allocations.

These memory managers act as a simple interface for other Metal backend modules and coordinate buffer lifetimes, ensuring that GPU-resident resources are correctly tracked and freed when no longer in use.
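
A minimal editorial sketch (not code from this commit) of the pooled-allocator idea described above: freed buffers are parked in per-size free lists so repeat allocations of common sizes avoid touching the GPU allocator. All names are hypothetical.

```
#include <cstdint>
#include <map>
#include <vector>

struct PooledBuffer {
  uint64_t size = 0;
  void *gpu_handle = nullptr; /* Would wrap an MTLBuffer in a real backend. */
};

class BufferPool {
 public:
  PooledBuffer *allocate(uint64_t size)
  {
    /* Reuse the smallest free buffer that is large enough, if any. */
    auto it = free_lists_.lower_bound(size);
    if (it != free_lists_.end() && !it->second.empty()) {
      PooledBuffer *buf = it->second.back();
      it->second.pop_back();
      return buf;
    }
    /* Otherwise create a fresh buffer (actual GPU allocation elided). */
    return new PooledBuffer{size, nullptr};
  }

  void release(PooledBuffer *buf)
  {
    /* Return the buffer to the pool instead of freeing it, to reduce
     * the CPU overhead of frequent allocations. */
    free_lists_[buf->size].push_back(buf);
  }

 private:
  std::map<uint64_t, std::vector<PooledBuffer *>> free_lists_;
};
```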

Note: This also contains dependent DIFF changes from D15027, though these will be removed once D15027 lands.

Authored by Apple: Michael Parkin-White

Ref T96261

Reviewed By: fclem

Maniphest Tasks: T96261

Differential Revision: https://developer.blender.org/D15277
2022-07-01 10:31:57 +02:00
5b87862ddc Curves: Further split of curves draw code from particles
Extends the changes started in f31c3f8114 to completely separate
much of the DRW curves code from the particle hair drawing. In the short
term this increases duplication, but the idea is to simplify development
by making it easier to do larger changes to the new code, and the new
system will replace the particle hair at some point.

After this, only the shaders themselves are shared.

Differential Revision: https://developer.blender.org/D14699
2022-04-22 10:44:01 -05:00
f31c3f8114 Curves: Split curve EEVEE/workbench functions from particle hair
The GPU evaluation for curves will have to change significantly from the
current particle hair drawing code, due to its more general use cases
and support for more curve types. To simplify that process and avoid
introducing regressions for the rendering of hair particle systems,
this commit splits drawing functions for the curves object and
particle hair.

The changes are just inlining of functions and copying code
where necessary.

Differential Revision: https://developer.blender.org/D14576
2022-04-13 22:07:31 -05:00
c434782e3a File headers: SPDX License migration
Use a shorter/simpler license convention, which stops the header from
taking up so much space.

Follow the SPDX license specification: https://spdx.org/licenses

- C/C++/objc/objc++
- Python
- Shell Scripts
- CMake, GNUmakefile

While most of the source tree has been included:

- `./extern/` was left out.
- `./intern/cycles` & `./intern/atomic` are also excluded because they
  use different header conventions.

doc/license/SPDX-license-identifiers.txt has been added to list all
used SPDX identifiers.

See P2788 for the script that automated these edits.

Reviewed By: brecht, mont29, sergey

Ref D14069
2022-02-11 09:14:36 +11:00
fe1816f67f Curves: Rename "Hair" types, variables, and functions to "Curves"
Based on discussions from T95355 and T94193, the plan is to use
the name "Curves" to describe the data-block container for multiple
curves. Eventually this will replace the existing "Curve" data-block.
However, it will be a while before the curve data-block can be replaced
so in order to distinguish the two curve types in the UI, "Hair Curves"
will be used, but eventually changed back to "Curves".

This patch renames "hair-related" files, functions, types, and variable
names to this convention. A deep rename is preferred to keep code
consistent and to avoid any "hair" terminology from leaking, since the
new data-block is meant for all curve types, not just hair use cases.

The downside of this naming is that the difference between "Curve"
and "Curves" has become important. That was considered during
design discussions and deemed acceptable, especially given the
non-permanent nature of the somewhat common conflict.

Some points of interest:
- All DNA compatibility is lost, just like rBf59767ff9729.
- I renamed `ID_HA` to `ID_CV` so there is no complete mismatch.
- `hair_curves` is used where necessary to distinguish from the
  existing "curves" plural.
- I didn't rename any of the cycles/rendering code function names,
  since that is also used by the old hair particle system.

Differential Revision: https://developer.blender.org/D14007
2022-02-07 11:56:48 -06:00
012e41fc8b Cleanup: use our own conventions for tags in comments 2022-01-31 10:49:59 +11:00
e89d42ddff Cleanup: move public doc-strings into headers for 'draw'
Ref T92709
2021-12-08 20:30:05 +11:00
0f1a200a67 Fix T92682: EEVEE motion blur crash with curve objects
After rBb9febb54a492, the evaluated mesh from a curve is now presented
to render engines as a separate mesh object, but some code still assumed
that a curve object itself could have an evaluated mesh. However, this is
still true for surface objects and metaballs, which don't
use geometry sets yet.

Differential Revision: https://developer.blender.org/D13272
2021-11-19 11:36:29 -05:00
8f12457c25 Fix T90295: inconsistent render pass order between Cycles and Eevee 2021-07-29 17:59:03 +02:00
9b89de2571 Cleanup: consistent use of tags: NOTE/TODO/FIXME/XXX
Also use doxy style function reference `#` prefix chars when
referencing identifiers.
2021-07-04 00:43:40 +10:00
8041b1dd1c Cleanup: EEVEE: Remove unused mipmapping on main color buffer 2021-03-13 23:49:31 +01:00
75fc6e3b2b Cleanup: EEVEE: Remove the horizon search layered shader
This shader is of no use now that we have the full-resolution hiz buffer.
2021-03-13 22:51:23 +01:00
67c8d97db3 Cleanup: spelling 2021-02-14 20:58:04 +11:00
000a340afa EEVEE: Depth of field: New implementation
This is a complete refactor over the old system. The goal was to increase quality
first and then have something more flexible and optimised.

|{F9603145} | {F9603142}|{F9603147}|

This fixes issues we had with the old system which were:
- Too much overdraw (low performance).
- Not enough precision in render targets (ugly color banding/drifting).
- Poor resolution near in-focus regions.
- Wrong support of orthographic views.
- Missing alpha support in viewport.
- Missing bokeh shape inversion on foreground field.
- Issues on some GPUs. (see T72489) (But I'm sure this one will have other issues as well heh...)
- Fix T81092

I chose Unreal's Diaphragm DOF as a reference / goal implementation.
It is well described in the presentation "A Life of a Bokeh" by Guillaume Abadie.
You can check about it here https://epicgames.ent.box.com/s/s86j70iamxvsuu6j35pilypficznec04

Along side the main implementation we provide a way to increase the quality by jittering the
camera position for each sample (the ones specified under the Sampling tab).

The jittering divides the actual post-processing DoF radius so that it fills in the undersampling.
The user can still add more overblur to get a noiseless image, at the cost of bokeh shape sharpness.

Effect of overblur (left without, right with):
| {F9603122} | {F9603123}|

The actual implementation differs a bit:
- Foreground gather implementation uses the same "ring binning" accumulator as background
  but uses a custom occlusion method. This gives the problem of inflating the foreground elements
  when they are over background or in-focus regions.
  This was a hard decision, but it was preferable to the other method, which was giving poor
  opacity masks for foreground and had other more noticeable issues. Do note it is possible
  to improve this part in the future if a better alternative is found.
- Use occlusion texture for foreground. Presentation says it wasn't really needed for them.
- The TAA stabilisation pass is replaced by a simple neighborhood clamping at the reduce copy
  stage for simplicity (see the sketch after this list).
- We don't do a brute-force in-focus separate gather pass. Instead we just do the brute force
  pass during resolve. Using the separate pass could be a future optimization if needed but
  might give less precise results.
- We don't use compute shaders at all so shader branching might not be optimal. But performance
  is still way better than our previous implementation.
- We mainly rely on density change to fix all undersampling issues even for foreground (which,
  strangely, is something the reference implementation is not doing).
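
The neighborhood clamping mentioned above is a standard TAA-style trick; here is a small editorial sketch of the idea (not the shader from this commit), clamping the history color to the min/max of the current 3x3 neighborhood:

```
#include <algorithm>
#include <array>

struct Color {
  float r, g, b;
};

/* Clamp the history sample to the min/max of the current frame's 3x3
 * neighborhood, rejecting stale history without a full TAA pass. */
static Color neighborhood_clamp(const std::array<Color, 9> &neighborhood, Color history)
{
  Color lo = neighborhood[0], hi = neighborhood[0];
  for (const Color &c : neighborhood) {
    lo = {std::min(lo.r, c.r), std::min(lo.g, c.g), std::min(lo.b, c.b)};
    hi = {std::max(hi.r, c.r), std::max(hi.g, c.g), std::max(hi.b, c.b)};
  }
  return {std::clamp(history.r, lo.r, hi.r),
          std::clamp(history.g, lo.g, hi.g),
          std::clamp(history.b, lo.b, hi.b)};
}
```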

Remaining issues (not considered blocking for me):
- Slight defocus stability: Due to slight defocus bruteforce gather using the bare scene color,
  highlights are dilated and make convergence quite slow or impossible when using jittered DOF
  (or gives )
- ~~Slight defocus inflating: There seems to be a 1px inflation discontinuity of the slight focus
  convolution compared to the half resolution. This is not really noticeable if using jittered
  camera.~~ Fixed
- Foreground occlusion approximation is a bit glitchy and gives incorrect results if a defocused
  foreground element overlaps a farther foreground element. Note that this is easily
  mitigated using the jittered camera position.
|{F9603114}|{F9603115}|{F9603116}|
- Foreground is inflating, not revealing background. However this avoids some other bugs too
  as discussed previously. Also mitigated with jittered camera position.
|{F9603130}|{F9603129}|
- Sensor vertical fit is still broken (does not match cycles).
- Scattered bokeh shapes can be a bit strange at polygon vertices. This is due to the distance field
  stored in the Bokeh LUT which is not rounded at the edges. This is barely noticeable if the
  shape does not rotate.
- ~~Sampling pattern of the jittered camera position is suboptimal. Could try something like hammersley
  or poisson disc distribution.~~ Used a hexaweb sampling pattern, which is not random but has better
  stability and overall coverage.
- Very large bokeh (> 300 px) can exhibit undersampling artifacts in the gather pass and quite a bit of
  bleeding. But at this size it is preferable to use jittered camera position.

Codewise the changes are pretty much self-contained and each pass is well documented.
However the whole pipeline is quite complex to understand from a bird's-eye view.

Notes:
- There is the possibility of using arbitrary bokeh texture with this implementation.
  However the implementation is a bit involved.
- Gathering max sample count is hardcoded to avoid dealing with shader variations. The actual
  max sample count is already quite high but samples are not evenly distributed due to the
  ring binning method.
- While this implementation does not need 32-bit/channel textures to render correctly, it does use
  many other textures, so actual VRAM usage is higher than the previous method for the viewport but
  lower for renders. Textures are reused to avoid many allocations.
- Bokeh LUT computation is fast and done for each redraw because it can be animated. Also the
  texture can be shared with other viewports with different camera settings.
2021-02-12 22:35:52 +01:00
17e1e2bfd8 Cleanup: correct spelling in comments 2021-02-05 16:23:34 +11:00
Jeroen Bakker
1f41bdc6f3 Eevee Cryptomatte: Store hashes in render result meta data
Stores cryptomatte hashes as metadata in the render result. Compositors could
use this to look up names instead of hashes.

Differential Revision: https://developer.blender.org/D9553
2021-01-05 15:03:05 +01:00
8f3a401975 Eevee: Add Volume Transmittance to Color Render Passes.
In Cycles the volume transmittance is already composited into the color
passes. In Eevee the volume transmittance pass was separate and needed
to be composited in the compositor. This patch adds the volume
transmittance directly into the following render passes:

 * Diffuse Color
 * Specular Color
 * Emission
 * Environment

This patch includes the removal of the volume transmittance render pass.
It also renames the volume render passes to match Cycles. The settings
themselves aren't unified.

Maniphest Tasks: T81134
2020-12-14 09:27:58 +01:00
Jeroen Bakker
76a0b322e4 EEVEE Cryptomatte
Cryptomatte is a standard to efficiently create mattes for compositing. The
renderer outputs the required render passes, which can then be used in the
compositor to create masks for specified objects. Unlike the Material and Object
Index passes, the objects to isolate are selected in compositing, and mattes
will be anti-aliased.

Cryptomatte was already available in Cycles; this patch adds it to the EEVEE
render engine. Original specification can be found at
https://raw.githubusercontent.com/Psyop/Cryptomatte/master/specification/IDmattes_poster.pdf

**Accurate mode**

Following Cycles, there are two accuracy modes. The difference between the two
modes is the number of render samples they take into account to create the
render passes. When accurate mode is off, the number of levels is used. When
accurate mode is on, the number of render samples is used.

**Deviation from standard**

The Cryptomatte specification is based on a path-tracing approach where samples and
coverage are calculated at the same time. In EEVEE a sample is an exact match on
top of a prepared depth buffer, so coverage is at that moment always 1. By sampling
multiple times, the number of surface hits determines the actual per-pixel surface
coverage for a matte.

**Implementation Overview**

When drawing to the cryptomatte GPU buffer, the depth of the fragment is matched
against the active depth buffer. The hashes of each cryptomatte layer are written to
the GPU buffer. The exact layout depends on the active cryptomatte layers. The
GPU buffer is downloaded and integrated into an accumulation buffer (stored in
CPU RAM).

The accumulation buffer stores the hashes + weights for a number of levels and
layers per pixel. When a hash already exists its weight is increased. When
the hash doesn't exist yet it is added to the buffer.

After all the samples have been calculated the accumulation buffer is processed.
During this phase the total pixel weights of each layer are mapped into the
range between 0 and 1. The hashes are also sorted (highest weight first).
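
An editorial sketch (not Blender's actual data layout) of the per-pixel accumulation just described: matching hashes get their weight increased, unknown hashes are appended, and finalization normalizes the weights and sorts them in decreasing order.

```
#include <algorithm>
#include <cstdint>
#include <vector>

struct CryptoSample {
  uint32_t hash;
  float weight;
};

struct CryptoPixel {
  std::vector<CryptoSample> levels;

  void accumulate(uint32_t hash, float weight)
  {
    for (CryptoSample &s : levels) {
      if (s.hash == hash) {
        s.weight += weight; /* Known hash: just increase its weight. */
        return;
      }
    }
    levels.push_back({hash, weight}); /* New hash: add it to the buffer. */
  }

  void finalize(int total_samples)
  {
    for (CryptoSample &s : levels) {
      /* Coverage = surface hits / total samples, mapped into [0, 1]. */
      s.weight /= float(total_samples);
    }
    std::sort(levels.begin(), levels.end(),
              [](const CryptoSample &a, const CryptoSample &b) { return a.weight > b.weight; });
  }
};
```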

The Blender kernel now has a `BKE_cryptomatte` header that gives access to common
cryptomatte functions. This will in the future be used by the API.

* Alpha blended materials aren't supported. Alpha blended material support in
  render passes needs research into how to implement it in a maintainable way for any
  render pass.

This is a list of tasks that need to be done for the same release that this
patch lands in (Blender 2.92):

* T82571 Add render tests.
* T82572 Documentation.
* T82573 Store hashes + Object names in the render result header.
* T82574 Use threading to increase performance in accumulation and post
  processing.
* T82575 Merge the cycles and EEVEE settings as they are identical.
* T82576 Add RNA to extract the cryptomatte hashes to use in python scripts.

Reviewed By: Clément Foucault

Maniphest Tasks: T81058

Differential Revision: https://developer.blender.org/D9165
2020-12-04 08:46:34 +01:00
Jeroen Bakker
2bae11d5c0 EEVEE: Arbitrary Output Variables
This patch adds support for AOVs in EEVEE. AOV Outputs can be defined in the
render pass tab and used in shader materials. Both Object and World based
shaders are supported. The AOV can be previewed in the viewport using the
renderpass selector in the shading popover.

AOV names that conflict with other AOVs are automatically corrected. AOV
conflicts with render passes get a warning icon. The reason behind this is that
changing render engines/passes can change the conflict, but you might not notice
it. Changing this automatically would also make the materials incorrect, so best
to leave this to the user.

**Implementation**

The patch copies the AOV structures of Cycles into Blender. The goal is
that Cycles will use Blender's AOV definitions. In the Blender kernel
(`layer.c`) the logic of these structures is implemented.

The GLSL shader of any GPUMaterial can hold multiple outputs (the main output
and the AOV outputs); based on the renderPassUBO the right output is selected.
This selection uses a hash that encodes the AOV structure. The full AOV needs
to be encoded when actually drawing the material pass, as the AOV type changes
the behavior of the AOV, which isn't known yet when the GLSL is compiled.
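
As an editorial illustration of that hash-based selection (the hash function and UBO layout here are invented, not Blender's), the encoded AOV hash could fold the name and type together and be compared against the hash currently requested by the render pass:

```
#include <cstdint>
#include <string>

/* Hypothetical hash: FNV-1a over the AOV name, with the AOV type
 * (value vs. color) folded into the lowest bit. */
static uint32_t aov_hash(const std::string &name, bool is_color)
{
  uint32_t h = 2166136261u;
  for (char c : name) {
    h = (h ^ uint8_t(c)) * 16777619u;
  }
  return (h & ~1u) | (is_color ? 1u : 0u);
}

/* The shader side (shown in C++ for illustration) compares the hash it
 * encodes against the one the render pass UBO currently requests. */
static bool should_write_aov(uint32_t encoded_aov_hash, uint32_t requested_hash)
{
  return encoded_aov_hash == requested_hash;
}
```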

**Future Developments**

* The AOV definitions in the render layer panel aren't shared with Cycles.
  Cycles should be migrated to use the same view-layer AOVs. During a previous
  attempt this failed because the AOV validation in Cycles and in Blender has
  implementation differences that made it crash when an AOV name was invalid.
  This could be fixed by extending the external render engine API.
* Add support to Cycles to render AOVs in the 3d viewport.
* Use a drop down list for selecting AOVs in the AOV Output node.
* Give user feedback when multiple AOV output nodes with the same AOV name
  exist in the same shader.
* Fix viewing single channel images in the image editor [T83314]
* Reduce viewport render time by only render needed draw passes. [T83316]

Reviewed By: Brecht van Lommel, Clément Foucault

Differential Revision: https://developer.blender.org/D7010
2020-12-04 08:14:07 +01:00
6b436b80a4 GPU: Rename gpu_extensions to gpu_capabilities
This makes more sense as this module has more to it than just
GL extensions.
2020-09-07 19:37:05 +02:00
7edd8a7738 GPUUniformBuf: Rename struct and change API a bit
This follows the GPU module naming of other buffers.
We pass name to distinguish each GPUUniformBuf in debug mode.
Also remove DRW_uniform_buffer interface.
2020-08-21 14:16:42 +02:00
68651534c2 Fix T77267: Render EEVEE AO pass when AO disabled.
In EEVEE the AO render pass influenced other render passes, so until now the
pass wasn't selectable when AO was disabled in the scene, in order to avoid
these render artifacts.

This patch allows rendering EEVEE AO pass without enabling it in the
scene. It does this by binding a fallback texture that is used by the
surface shaders.

Reviewed By: Clément Foucault

Differential Revision: https://developer.blender.org/D7956
2020-08-17 11:10:32 +02:00
240ac779d5 Merge branch 'blender-v2.90-release'
# Conflicts:
#	source/blender/draw/engines/eevee/eevee_motion_blur.c
2020-08-12 18:16:47 +02:00
2218b61e8e Fix T79637 Motion blur gives artifacts when changing the camera
DRW_render_set_time calls RE_engine_frame_set, which in turn calls
BKE_scene_camera_switch_update.

To work around this, we get the original camera object at render init and
get the evaluated version from it after each time change.
2020-08-12 18:06:36 +02:00
b134434224 Cleanup: declare arrays as arrays where possible 2020-08-07 22:37:39 +10:00
58909abc68 EEVEE: Render: Fix regression caused by previous Motion blur fix
Caused by rB4f59e4bddcb0c06e441adf68a5f252a4e5b4b260
2020-08-07 00:59:14 +02:00
4f59e4bddc Fix T78452 EEVEE: Motion Blur: Crash when using camera switching
This was caused by the ViewLayer being freed with all its
engine data.
2020-08-06 23:06:18 +02:00
70d7805fa9 Cleanup: pass const matrices
Also order return matrices last.
2020-08-02 18:02:20 +10:00
8084b7e6e2 Cleanup: GPU: Replace all glReadPixels by GPU equivalent 2020-07-16 18:01:44 +02:00
439b40e601 EEVEE: Motion Blur: Add accumulation motion blur for better precision
This revisits the render pipeline to support time slicing for better motion
blur.

We support accumulation with or without the Post-process motion blur.

If using the post-process, we reuse the last step's next motion data to avoid
another scene re-evaluation.

This also adds support for hair motion blur which is handled in a similar
way as mesh motion blur.

The total number of samples is distributed evenly across all timesteps to
avoid sampling weighting issues. For this reason, the sample count is
(internally) rounded up to the next multiple of the step count.
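
A tiny sketch of that bookkeeping (names hypothetical, not the code from this patch):

```
/* Round the requested sample count up to the next multiple of the motion blur
 * step count so every time step receives the same number of samples. */
static int samples_per_step(int requested_samples, int step_count)
{
  int rounded = ((requested_samples + step_count - 1) / step_count) * step_count;
  return rounded / step_count;
}
```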

Only FX Motion Blur: {F8632258}

FX Motion Blur + 4 time steps: {F8632260}

FX Motion Blur + 32 time steps: {F8632261}

Reviewed By: jbakker

Differential Revision: https://developer.blender.org/D8079
2020-06-23 14:04:41 +02:00
f84414d6e1 EEVEE: Object Motion Blur: Initial Implementation
This adds object motion blur vectors for EEVEE as well as better noise
reduction for it.

For TAA reprojection we just compute the motion vector on the fly based on
camera motion and the depth buffer. This makes it possible to store another motion
vector only for the blurring, which is not useful for TAA history fetching.
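
A rough editorial sketch of that on-the-fly reprojection (using GLM purely for illustration; names and conventions are not Blender's):

```
#include <glm/glm.hpp>

/* Compute a screen-space motion vector from the depth buffer and the camera
 * matrices only, i.e. without storing a per-pixel motion vector for TAA. */
static glm::vec2 camera_motion_vector(glm::vec2 uv,
                                      float depth, /* Depth buffer value in [0, 1]. */
                                      const glm::mat4 &curr_view_proj,
                                      const glm::mat4 &prev_view_proj)
{
  /* Reconstruct the NDC position of the current pixel. */
  glm::vec4 ndc(uv * 2.0f - 1.0f, depth * 2.0f - 1.0f, 1.0f);
  /* Unproject to world space, then reproject with last frame's matrices. */
  glm::vec4 world = glm::inverse(curr_view_proj) * ndc;
  world /= world.w;
  glm::vec4 prev_clip = prev_view_proj * world;
  glm::vec2 prev_uv = (glm::vec2(prev_clip) / prev_clip.w) * 0.5f + 0.5f;
  return uv - prev_uv;
}
```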

Motion Data is saved per object & per geometry if using deformation blur.
We support deformation motion blur by saving the previous VBOs and modifying the
actual GPUBatch for the geometry to include these VBOs.

We store Previous and Next frame motion in the same motion vector buffer
(RG for prev and BA for next). This makes non-linear motion blur (like
rotating objects) less prone to outward/inward blur.

We also improve the motion blur post-process to expand outside the objects'
borders. We use a tile-based approach, and the max size of the blur is set via
a new render setting.

We use a background reconstruction method that needs another setting
(Background Separation).

Sampling is done using a fixed 8 dithered samples per direction. The final
render samples will clear the noise like other stochastic effects.

One caveat is that hair particles are not yet supported. Support will
come in another patch.

Reviewed By: jbakker

Differential Revision: https://developer.blender.org/D7297
2020-06-19 17:05:49 +02:00
b18c2a3c41 EEVEE: Refactor of eevee_material.c
These are the modifications:

- With the DRW modifications we reduce the number of passes we need to populate.
- Rename passes for consistent naming.
- Reduce complexity in code compilation.
- Clean up how renderpass accumulation passes are set up, using pass instances.
- Make sculpt mode compatible with shadows.
- Make hair passes compatible with SSS.
- Error shader and lookdev materials now use standalone materials.
- Support default shaders (world and material) using a default nodetree internally.
- Change BLEND_CLIP to be emulated by the GPU nodetree, making fewer shader variations.
- Use BLI_memblock for cache memory allocation.
- Renderpasses are handled by switching a UBO ref bind.

One major hack in this patch is the use of modified pointers as GHash keys.
This relies on the assumption that the keys will never overlap, because the
number of options per key will never be bigger than the pointed-to struct.
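
An editorial sketch of that trick (not Blender's actual code): option bits are added to the struct pointer to derive a distinct key per option combination, which only stays collision-free while the offsets remain smaller than the struct itself.

```
#include <cstdint>

/* Derive a per-option hash key from a material pointer by offsetting it with
 * the option bits. Keys of different materials cannot collide as long as the
 * offsets stay below sizeof(*material). */
static void *material_option_key(void *material, uint32_t option_bits)
{
  return reinterpret_cast<void *>(reinterpret_cast<uintptr_t>(material) + option_bits);
}
```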

The use of one single nodetree to support the default material is also a bit hacky
since it won't support concurrent usage of this nodetree.
(see EEVEE_shader_default_surface_nodetree)

Another change is that objects with shader errors now appear solid magenta instead
of shaded magenta. This is only for code-reuse purposes but could be changed
if really needed.

Reviewed By: jbakker

Differential Revision: https://developer.blender.org/D7642
2020-06-02 16:58:07 +02:00
ff62481f65 Fix T69060: File Output Node does not work with Time Remapping
The problem is that the RenderEngines will change the RenderData cfra when
rendering (when time remapping is used -- at least workbench/eevee/
gpencil do a combination of BKE_scene_frame_get() plus
RE_GetCameraWindow() which alters the RenderData cfra).

Later on in the pipeline, the Compositor will use this RenderData cfra
to determine the output file name for the FileOutput node. (In contrast
to this, the 'regular' Output will use the Scene's RenderData -- not the
Render's -- cfra [which hasn't been altered])

It is not entirely clear why RE_GetCameraWindow was setting the cfra on
the Render, but it appears to be legacy OGL rendering related and is not
needed anymore.
Removing this will keep the cfra as needed for the Compositor FileOutput
node.
2020-03-27 10:10:54 +01:00
2d1cce8331 Cleanup: make format after SortedIncludes change 2020-03-19 09:33:58 +01:00
fd53b72871 Objects: Eevee and workbench rendering of new Volume, Hair, PointCloud
Only the volume drawing part is really finished and exposed to the user. Hair
plugs into the existing hair rendering code and is fairly straightforward. The
pointcloud drawing is a hack using overlays rather than Eevee and workbench.

The most tricky part for volume rendering is the case where each volume grid
has a different transform, which requires an additional matrix in the shader
and non-trivial logic in Eevee volume drawing. In the common case where all the
transforms match we don't use the additional per-grid matrix in the shader.

Ref T73201, T68981

Differential Revision: https://developer.blender.org/D6955
2020-03-18 11:23:05 +01:00
e0f41d32c9 Code Cleanup: UNDEF not existing define 2020-02-28 14:31:46 +01:00
57b7833d1e Cleanup: split off hair cache function for reusability
For when we support sources of hair other than particle systems.
2020-02-27 15:11:44 +01:00
c26f470cfe EEVEE: Fix memleak when G.is_break is set from another thread 2020-02-22 17:08:44 +01:00
Jeroen Bakker
be2bc97eba EEVEE: Render Passes
This patch adds new render passes to EEVEE. These passes include:

* Emission
* Diffuse Light
* Diffuse Color
* Glossy Light
* Glossy Color
* Environment
* Volume Scattering
* Volume Transmission
* Bloom
* Shadow

With these passes it will be possible to use EEVEE effectively for
compositing. During development we kept a close eye on how to get results
similar to Cycles render passes; there are some differences that
are related to how EEVEE works. For EEVEE we combined the passes to
`Diffuse` and `Specular`. There are no transmittance or sss passes anymore.
Cycles will be changed accordingly.

Cycles volume transmittance is added to multiple surface color passes. For
EEVEE we left the volume transmittance as a separate pass.

Known Limitations

* All materials that use alpha blending will not be rendered in the render
  passes. Other transparency modes are supported.
* More GPU memory is required to store the render passes. When rendering
  an HD image with all render passes enabled, up to 570MB of extra GPU memory
  is required.

Implementation Details

An overview of the render passes has been described in
https://wiki.blender.org/wiki/Source/Render/EEVEE/RenderPasses

Future Developments

* In this implementation the materials are re-rendered for Diffuse/Glossy
  and Emission passes. We could use multi target rendering to improve the
  render speed.
* Other passes can be added later
* Don't render material based passes when only requesting AO or Shadow.
* Add more passes to the system. These could include Cryptomatte, AOVs, Vector,
  ObjectID, MaterialID, UV.

Reviewed By: Clément Foucault

Differential Revision: https://developer.blender.org/D6331
2020-02-21 11:13:43 +01:00
8c5cb8359a EEVEE: Test maximum texture size before render.
This will catch any non-renderable size.
2020-01-30 15:07:23 +01:00
7c9d15fca8 Fix T71154: EEVEE Soft Shadows Viewport Rendering
EEVEE soft shadows were not rendered correctly during viewport rendering.
The reason for this is that during viewport rendering the shadow buffers were
only updated once and not per sample. As a result, all the samples
calculated the same shadow.

This fix moves the call to `EEVEE_shadows_update` from cache finished to
draw scene. This needs to happen before `EEVEE_lightprobes_refresh`.

Reviewed By: fclem

Differential Revision: https://developer.blender.org/D6538
2020-01-17 14:03:29 +01:00
320d8ab155 EEVEE: Viewport Renderpasses
This patch will allow the user to select the EEVEE render pass to be
shown in the viewport; by default the combined pass will be shown.

Limitations:

* Viewport rendering stores the result in a `RenderResult`. RenderResult
  is not aware of the type of data it holds. In many places where RenderResult
  is used it is assumed that it stores a combined pass and the display+view
  transform are applied.

  I will propose to fix this in a future patch. But that is still being
  designed and discussed.

Reviewed By: fclem

Differential Revision: https://developer.blender.org/D6319
2019-11-28 09:12:28 +01:00
9d7f65630b EEVEE: GLSL Renderpasses
Most of the render passes in EEVEE used post-processing on the CPU. For
final image rendering this is sufficient, but when we want to display the
data to the user we don't want to transfer it to the CPU, do the post
processing there, and then upload it back to the GPU to display the result.

This patch moves the renderpass postprocessing to a GLSL shader.

This is the first step before we enable the render passes in the viewport.

Reviewed By: fclem

Differential Revision: https://developer.blender.org/D6206
2019-11-27 15:50:54 +01:00
17b63db4e2 EEVEE: Renderlayer artifacts
When rendering the subsurface scattering lighting render layer with a high
sample count, render artifacts can appear. This patch removes these
render artifacts by using a more precise texture format when the sample count
is larger than 128. As with the new EEVEE shadows it is more common
to use a higher number of samples.

The reason why it was visible in the subsurface scattering is that every
sample could change the color. Adding different values will reduce
precision over the number of samples.

The subsurface color render layer also has this issue, but it is not noticeable
because the colors tend to be close to each other, so most of the time they only
shift the precision slightly and hold up better.

Reviewed By: fclem

Differential Revision: https://developer.blender.org/D6245
2019-11-27 15:40:07 +01:00
0b2d1badec Cleanup: use post increment/decrement
When the result isn't used, prefer post increment/decrement
(already used nearly everywhere in Blender).
2019-09-08 00:23:25 +10:00
d8aaf25c23 Eevee: Shadow map refactor
Reviewed By: brecht

Differential Revision: http://developer.blender.org/D5659
2019-09-05 17:37:50 +02:00
d5002f007e Eevee: Improve Transparent BSDF behavior
Alpha-blended transparency now uses dual-source blending, making it
fully compatible with the Cycles Transparent BSDF.

Multiply and additive blend modes can be achieved using some nodes and are
going to be removed.
2019-08-14 13:36:56 +02:00
760dbd1cbf Cleanup: misc spelling fixes
T68035 by @luzpaz
2019-08-01 14:02:41 +10:00
484794ce67 Eevee: Fix first sample being accumulated without SSR
We check if the previous iteration (sample) was using a valid double buffer.
If it wasn't, we request another iteration.

This fixes the issue for the viewport, viewport render, and image render.

Related to T65761 Eevee render inconsistency between 3D View, Viewport render, and F12 Render
2019-07-09 14:34:56 +02:00
a74041c96c Eevee/GPencil: Fix depth reading after render 2019-05-27 16:14:58 +02:00