Inputs now declare a domain priority value instead of declaring
themselves as domain inputs. This allows multiple inputs to assume the
domain inference role, and the best option is chosen based on the
priority and other factors.
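A minimal sketch of how that selection could look, with hypothetical names (`InputDescriptor`, `domain_priority`) that are not the actual API:

```cpp
#include <vector>

/* Hypothetical sketch: each input declares a domain priority instead of a
 * boolean "is domain" flag; the best non-single-value input wins. */
struct InputDescriptor {
  bool is_single_value; /* Single values cannot define a domain. */
  int domain_priority;  /* Lower value = stronger candidate. */
};

/* Return the input whose domain should be inferred, or null if none. */
const InputDescriptor *infer_domain_input(const std::vector<InputDescriptor> &inputs)
{
  const InputDescriptor *best = nullptr;
  for (const InputDescriptor &input : inputs) {
    if (input.is_single_value) {
      continue;
    }
    /* Lower priority value wins; ties keep the first candidate. */
    if (!best || input.domain_priority < best->domain_priority) {
      best = &input;
    }
  }
  return best;
}
```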
Lazily allocate results during execution and remove the allocation stage
completely. This avoids redundant allocations and copying when a
pass-through is possible. This is made possible through the use of a
global cross-evaluation texture pool that mitigates the late-allocation
overhead.
This patch refactors the design and architecture of the viewport
compositor.
There is now a clear separation between the stages of evaluation of the
compositor setup: Construction, Compilation, Allocation, and Evaluation.
This will allow more granular caching and fit the engine API better.
Instead of dynamically inserting operations in the operations stream,
operations now have "input processors" that prepare the inputs before
execution. These can be used to do implicit conversion, transform
realization, and any other preprocessing that the operation doesn't want
to worry about in its execution method.
Most of the functionality related to results was moved to that class,
and it now does reference counting manually instead of delegating that
to the texture pool.
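As a rough illustration of the manual reference counting, a sketch with assumed names (not the real class):

```cpp
/* Sketch: each operation that reads the result increments the count;
 * release() frees the texture once the last reader is done. */
class Result {
 public:
  void increment_reference_count()
  {
    reference_count_++;
  }

  void release()
  {
    reference_count_--;
    if (reference_count_ == 0) {
      free_texture(); /* Hand the GPU texture back to the pool. */
    }
  }

 private:
  void free_texture() { /* Backend specific, omitted in this sketch. */ }

  int reference_count_ = 0;
};
```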
Other minor refactoring to allow for the above.
This patch adds support for implicit conversion between types. This is
done by emitting a meta operation that performs the conversion
transparently before the node operation gets its input results.
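A hedged sketch of how such a meta operation could be emitted; the types and names here are assumptions for illustration:

```cpp
#include <vector>

enum class ResultType { Float, Vector, Color };

/* Hypothetical meta operation converting a result between types. */
struct ConversionOperation {
  ResultType from;
  ResultType to;
};

void emit_implicit_conversion(std::vector<ConversionOperation> &stream,
                              ResultType linked, ResultType expected)
{
  if (linked != expected) {
    /* The node operation then transparently reads the converted result. */
    stream.push_back({linked, expected});
  }
}
```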
This patch adds the initial interface for compositor operations and node
operations. The patch also adds a structure that encapsulates a result of
some operation.
This patch adds reference counting support to the texture pool,
separates the allocation implementation to a pure virtual method, and
implements a concrete class for the compositor engine based on the DRW
texture pool.
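A possible shape for such a pool, as a sketch only; the real interface and the DRW-backed subclass differ:

```cpp
#include <unordered_map>

struct Texture; /* Opaque GPU texture handle. */

/* Sketch: the pool counts users per texture and leaves the actual
 * allocation to a backend subclass. */
class TexturePool {
 public:
  virtual ~TexturePool() = default;

  Texture *acquire(int width, int height)
  {
    Texture *texture = allocate_texture(width, height);
    users_[texture]++;
    return texture;
  }

  void release(Texture *texture)
  {
    if (--users_[texture] == 0) {
      /* Last user released; the backend may now recycle the texture. */
      users_.erase(texture);
    }
  }

 protected:
  /* Pure virtual: implemented by the engine, e.g. on the DRW texture pool. */
  virtual Texture *allocate_texture(int width, int height) = 0;

 private:
  std::unordered_map<Texture *, int> users_;
};
```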
We want to avoid directly linking bf_nodes to bf_draw, so we should
start reorganizing implementation files. All components are moved to
bf_nodes, which is also where interfaces are defined to be implemented in
the compositor engine.
This lib allows any shader to use `print()`-like functions for
logging and debugging shaders.
Usage is described in the comment at the top of the file.
This patch proposes an initial interface for the compositor execute
function of nodes, as well as an engine implementation for the needed
context functionalities.
This patch adds an initial implementation of the node tree compiler for
the viewport compositor. The compiler computes an evaluation schedule
for the nodes that minimizes the use of intermediate buffers.
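One classic way to minimize live intermediates is a Sethi-Ullman-style ordering: evaluate the most buffer-hungry input first. The sketch below illustrates that idea only; it is not necessarily the algorithm this compiler uses:

```cpp
#include <algorithm>
#include <functional>
#include <vector>

struct Node {
  std::vector<Node *> inputs;
};

/* Peak number of buffers alive while evaluating this node, assuming the
 * most demanding input is evaluated first. */
int needed_buffers(Node *node)
{
  if (node->inputs.empty()) {
    return 1;
  }
  std::vector<int> needs;
  for (Node *input : node->inputs) {
    needs.push_back(needed_buffers(input));
  }
  /* Schedule the most demanding input first. */
  std::sort(needs.begin(), needs.end(), std::greater<int>());
  int peak = 0;
  for (size_t i = 0; i < needs.size(); i++) {
    /* The i earlier results are still alive while computing input i. */
    peak = std::max(peak, needs[i] + int(i));
  }
  return peak;
}
```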
Instead of using a manual list of dependencies, the new implementation
scans all shader files whose names begin with `gpu_shader_material_` and
extracts all function declarations.
This way we can deduce the internal dependencies between these files.
This new implementation is merged with the manual pragma dependency system
used by other shader files. This way it is compatible with the shader
logging system and does not require any string duplication during shader
building.
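A rough sketch of the declaration scanning; the regex is a simplification, not the actual parser:

```cpp
#include <regex>
#include <set>
#include <string>

/* Extract the names of the functions a shader file declares, so that
 * dependencies between `gpu_shader_material_*` files can be deduced by
 * matching declared names against names used in other files. */
std::set<std::string> declared_functions(const std::string &source)
{
  std::set<std::string> names;
  static const std::regex declaration(
      R"((?:void|float|vec[234]|mat[34])\s+(\w+)\s*\()");
  for (auto it = std::sregex_iterator(source.begin(), source.end(), declaration);
       it != std::sregex_iterator(); ++it) {
    names.insert((*it)[1].str());
  }
  return names;
}
```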
This introduces `DEBUG_DEPENDENCIES` (not a CMake flag but a local define)
which, when set to 1, will list all the original files included in this
shader while omitting the generated / non-original code.
This is the layout used by the amdgpu-pro GL implementation.
This also adds some sanitizing of the parse output because the bespoke
implementation emits bogus errors when it comes to compute shaders.
This displays the error source such that IDEs can identify it as a
path and let the programmer follow the direct link instead of
searching the string manually.
This only works for shaders compiled from unaltered sources, as
it uses the source `char*` as the key for the filename search.
For some reason, on some GL implementations (amdgpu) this particular
syntax shifts the error lines.
Remove the context lines by default as they are not useful anymore.
We now use ShaderCreateInfo as a way to setup the custom material
implementation.
This is more versatile and flexible while not requiring parsing of
snippets of code.
This patch changes the link conversion operation for the compositor
shader to be an RGB average to match the Blender compositor. Compositor
shaders are now marked with a COMPOSITOR_SHADER definition.
This patch adds a new dedicated UBO for compositor shaders. The
FrameNumber UBO member was moved to the new UBO. Additionally, the OCIO
luminance coefficients were added to the UBO for utilization by various
compositor operations.
This patch ports the Vector Curves node to the viewport compositor. A
variant of the existing vector_curves was added to work without mixing,
and the inputs were reduced to vec3 because vec4 is superfluous.
This patch ports the Math node to the viewport compositor. The shading
math shader was moved into a common directory to be used by both
materials and the compositor.
This patch ports the Color Ramp node to the viewport compositor. The
shading color ramp shader was moved into a common directory to be used
by both materials and the compositor.
This patch ports the RGB Curves node to the viewport compositor. The
curves code was mostly rewritten in a common directory to be used by
both the material and compositor nodes. The new code avoids code
duplication by moving common code into BKE curve mapping functions. It
also avoids ambiguous data embedding into gradient vectors and avoids
redundancies. Finally, a film-like implementation was added.
This patch ports the Mix RGB node to the viewport compositor. The
material Mix RGB node code was moved into a common directory to be
utilized by both the material and the compositor nodes. Additionally,
some of the operations were adapted to work with the compositor; in
particular, the linear and soft light operations now write the alpha to
the result. This has no effect on materials but is consistent with the
compositor.
This patch ports the Color Correction node to the viewport compositor.
The shader is a straightforward port of the compositor code. A function
to return the luminance coefficients from the color management
configuration was added to pass the coefficients to the shader.
This patch ports the Color Balance node to the viewport compositor. The
shader is a straightforward port of the compositor code. A few utilities
were added to ease implementation.
This patch ports the Bright And Contrast node to the viewport
compositor. The shader is a straightforward port of the compositor code.
The (un)premultiply_alpha functions were adjusted to retain the original
alpha for compatibility with the compositor. This has no effect on
materials because alpha is implicitly discarded.
The defrag shader makes sure the free heap is free of holes, making
allocation more straightforward.
Since we now only reference the pages using the tiles, we introduce
a debug shader that produces an image with page data in a visual way.
This replaces the debug 8 option.
This also fixes some bugs that were still present in the pipeline.
This separates the handling of directional lights (sun) into their
own loops. This will help reduce register pressure and remove some
pollution of the local light culling.
All sun lights are packed at the start of the light array.
We now scan the depth buffer after the prepass to tag the needed
shadow tiles.
This is much more precise than the bounding box tagging, which is now
reserved for transparent objects.
This also:
- fix pixel radius size.
- add a dedicated info buffer to avoid having one unused tile.
Until now, LOD selection was based on distance from the camera.
Now it is based on the receiver distance ratio. We compute the world
size of one view pixel along with the world size of one shadow texel.
By knowing one point distance to the light or to the view, we can
compute the pixel density ratio and deduce the corresponding LOD.
We use this to compute the min LOD during the visibility selection phase
and the "mean" LOD for usage tagging by BBoxes.
The tagging LOD is a crude approximation as it only uses the BBox
center.
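The ratio computation could look roughly like this; parameter names and the exact footprint terms are assumptions:

```cpp
#include <algorithm>
#include <cmath>

/* One view pixel covers `pixel_world_size` at the receiver and one LOD 0
 * shadow texel covers `texel_world_size`; each LOD doubles the texel
 * footprint, hence the log2 of the density ratio. */
int shadow_lod(float pixel_world_size, float texel_world_size, int lod_max)
{
  const float ratio = pixel_world_size / texel_world_size;
  const int lod = int(std::floor(std::log2(ratio)));
  return std::clamp(lod, 0, lod_max);
}
```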
This makes every shadow setup pass aware of the tilemap's LOD chain
for each cubemap face.
In the free phase, we mask any LOD page that is completely covered by
a higher LOD. This avoids committing memory twice or more per area.
In the allocation phase, we check for the last valid LOD and set it
in the LOD 0 metadata. We also store the actual page location in LOD 0
but do not mark it as allocated, as the LOD tile has ownership of the page.
This removes the light count limit for forward shaded objects. This
also provides a more efficient way of computing the culling directly on
the GPU. Moreover, this avoids doing multiple lighting passes for high
light counts in the deferred pipeline, improving performance.
This continues the effort to implement virtual shadow mapping.
This includes:
- Spot cone culling of tile.
- Tile vs. view frustum tagging.
- Shadowmap Page allocation / freeing.
- Rendering to the 4K buffer only the tiles that need it.
- Copying to shadow atlas.
This debug buffer is automatically bound if a shader is including
`common_debug_lib.glsl`. One buffer is created for each shading group
using such a shader.
The shader can then use the functions from that file to draw debug
lines. There is a hardcoded limit on the number of lines one buffer can
contain, so make sure to only output lines for a few threads at most.
Under the hood this uses a vertex buffer bound as an SSBO that contains
the number of verts and all the positions and colors, packed into one
vec4 each. We render by just drawing the whole buffer.
All unused vertices are initialized with NaN positions and will not be
drawn.
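For illustration, the CPU-visible layout could look like the sketch below; the exact packing and the line budget are assumptions:

```cpp
struct float4 {
  float x, y, z, w;
};

/* Assumed line budget, not the real hardcoded limit. */
constexpr int DEBUG_VERT_MAX = 4096;

/* Vertex buffer bound as an SSBO: a vertex count followed by fixed-capacity
 * vertex data; unused entries keep NaN positions so they are not drawn. */
struct DebugDrawBuffer {
  unsigned int vert_count;
  unsigned int _pad[3];             /* std430-style alignment. */
  float4 positions[DEBUG_VERT_MAX]; /* xyz position, w unused. */
  float4 colors[DEBUG_VERT_MAX];
};
```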
This is a total refactor of how shadows are handled.
We use Virtual shadow maps with different Level of details to
ensure a somewhat evenly distributed precision.
The shadow test is a really crude one that will be improved in
further commits.
There is a pool of 4096 tilemaps that are distributed between
shadowed lights. These tilemaps are 16x16 each and reference
shadow map pages that are allocated in an atlas. Pages are only
allocated if needed (i.e. visible for rendering an object).
Page management is done on the GPU using compute shaders to reduce
CPU work.
On CPU only one draw pass per updated tilemaps is issued.
This reduces the memory requirement of shadowmapping large scenes
with many lights.
Denoising makes use of more memory to store and reproject the previous
frame's result to reduce noise. This only works for the viewport.
There is a final bilateral filter for cleaning up noise even more.
Screen space raytracing is supported on alpha blended surfaces.
However, only opaque surfaces are visible to the rays. This means
alpha blended surfaces cannot reflect or refract themselves.
Denoising is not possible on alpha blended surfaces; many samples
are needed for noise-free results.
Since the cost of tracing can be very high, raytracing will only be
enabled on demand, on a per-material basis.
This simply reuses the reflection raytracing pipeline but with another
ray distribution. Only direct lighting, distant lighting and emissive
light are visible to diffuse rays.
The subsurface effect is not visible, but the transmittance effect is
visible to diffuse rays.
Indirect diffuse light is processed by the SSS filter.
The new pipeline is now cleaner and allows for deferred refraction.
The refractions are more accurate but are not denoised for now. More
research needs to be done in this area.
There is no feedback buffer for now, so reflections of metallic surfaces
will appear black.
The same restriction on refractive materials still holds true. They will
not appear in screen space tracing of other non refractive surfaces.
However, refractive surfaces (non-blended) can now reflect themselves
and the other surfaces with screen space reflections.
Half resolution tracing has not been brought back yet.
This is to automate the generation of reuse sample tables and maybe more
in the future. It is designed not to make compilation take much longer
than expected.
Same as SSS, this has been rewritten to support varying SSS radii.
Instead of relying on a shadowmap hack to improve the transmittance
artifact (previously called translucency), we exposed a min thickness
output that will reduce the maximum amount of light bleeding that can
happen at the shading point. This is far from perfect, but at least it
is tweakable.
The effect is now cheaper and the option to enable it is now gone.
It can always be artificially disabled by making the thickness bigger
than the sss radius.
The effect is always enabled for all SSS surfaces and will even be
applied on forward shaded objects (alpha blend mode).
This only adds the output but the output is not yet used.
This thickness output is meant to control the aspect of subsurface,
refraction, absorption and volume shaders.
The value expected is the mean thickness inside the object at the
shading point. The source can be a vertex color or a texture map baked
from a raytracer.
This new implementation follows the technique described in
"Efficient screen space subsurface scattering Siggraph 2018".
Compared to the old implementation it fixes a lot of issues at
the cost of it being slower. This fixes:
- Light leaking between different objects.
- Light leaking between different surfaces with different depths.
- SSS radii are now "texturable" per pixel. No limit on the number of SSS surfaces.
- Noise should be lower.
- Precomputation is only done once for all SSS surfaces which lowers the
per material storage and precomputation time.
Implementation is also simpler as it is only a one pass processing.
We differ from the reference presentation by not precomputing the
RGB weights per sample. We compute them on the fly instead, in order
to support varying SSS radii.
Notes:
- SSS IOR and SSS anisotropy are not supported.
- Object level light leak prevention might not work for a high number of
objects in the scene (> 1024). In this case light leaks might occur.
Adding or deleting (hiding) objects in the scene might change which
objects can leak.
This allows multiple instances of external render engines per viewport,
allowing them to be combined by the compositor.
Many things needed to be ported to the draw manager since it is the only
one that can know what is inside the `DRWRenderScene` and can iterate
over all running engines.
This was caused by the per view `draw_view` not being freed correctly.
Fixing this also caused issues because the `draw_view` would keep
ownership of the renderbuffer and would free it a second time.
Moving all renderbuffers ownership to `draw_view` for now.
This was caused by the StructArrayBuffer wrapper not being tagged as NonMovable.
The UBO was in fact being freed at creation time in debug builds, but the
pointer was kept as valid in the copied wrapper.
The higher level structure was changed to not use the copy constructor to avoid this.
This adds the new DRWRenderScene structure (and its sub structures) that
contains all the needed render passes for each scene present in the
compositor nodetree.
The scenes are rendered using a special option to avoid rendering overlays.
The render layer inputs to the GPUMaterial are now a separate structure and
a separate list of inputs handled by the compositor engine.
Rendering all scenes is the first thing done, to avoid much trouble with […]
There are still issues like continuous rendering of TAA because the same
DRWData is used for all scenes.
This is pretty basic but it gets the job done. This may change in
the future.
The `NodeType` (SHADING) for the `OperationKey` is not ideal.
Also, it seems the tagging coming from the nodetree tags everything
as a COPY_ON_WRITE update, which is slow. To investigate.
Fix a memory leak, a crash when resizing, and wrong texture coordinates
in camera mode.
Change the render layer sampler name to allow other textures to be bound.
This is a needed change for the viewport compositor. The compositor
needs to draw to `dtxl->color` to have correct overlay / background
composition.
The solution here is to have a separate buffer that keeps the first
sample we blend from. This increases VRAM usage but it is the most
elegant option.
This introduces a new compositor engine. It applies the compositor
nodetree onto the render result in the viewport using GLSL shaders.
For now only very few nodes are supported, and only the combined
pass is passed to the evaluation pass.
This reuses almost the same pipeline as `GPUMaterial`.
This integer divide by zero was evaluated to 0 on all platforms but Apple,
where it yields 1. The world lighting would then sample the one sample of
the first grid instead of its own sample.
This was caused by the blend mode being used even at full opacity.
This caused issues when the viewport was resized and the color of the
framebuffer became undefined, leading to undefined values in the blend
equation.
Another fix would be to clear the viewport color on resize inside the
GPUViewport.
This is a necessary step for EEVEE's new arch. This moves more data
to the draw manager. This makes it easier to have the render or draw
engines manage their own data.
This makes more sense and cleans up what the GPUViewport holds.
Also rewrites the texture pool manager in C++.
This also moves the DefaultFramebuffer/TextureList and the engine related
data to a new `DRWViewData` struct. This struct manages the per view
(as in stereo view) engine data.
There is a bit of cleanup in the way the draw manager is setup.
We now use a temporary DRWData instead of creating a dummy viewport.
Differential Revision: https://developer.blender.org/D11966
This changes the gbuffer layout to use more of the hardware for converting
data back and forth. Normals are encoded as two 16 bit components and
colors in the R11G11B10F format.
This was motivated by the need for better quality normals. The issue is
that this increases the GBuffer size considerably. In order to balance
this, we chose to merge the refraction and Diffuse/SSS data into the
same buffer. This means we need to stochastically choose one of these
layers (so noise appears). Given that Glass BSDFs are rarely mixed
with Diffuse BSDFs, we think this is a good tradeoff.
The functions need to be declared before main as prototypes.
The appended libs will use the resources (textures, UBOs) defined at
global scope.
This removes a bit of code duplication and some long macros.
Instead of appending using `BLENDER_REQUIRE`, shaders can now ask for
libs to be added after the shader's `main()` by using the
`BLENDER_REQUIRE_POST` pragma.
Use view space instead of world space to compute the pixel projection.
This fixes issues where the camera is far from the origin and float
precision would produce artifacts.
This ports the facing "flat" normal trick used by the gpencil engine
to EEVEE, as well as the thickness mode.
The object parameters are passed via the objectInfos UBO to avoid
much boilerplate code. However, if this UBO grows too much we might
have to split it.
The normal trick for planar surfaces is quite simple to port to the
vertex shader, even if it is less efficient.
However, to compute it we need the object bounds. This is passed as a
scale only, through the orco factors. This will need a bit of cleaning
at some point, with the boundbox computed at object level.
Nothing much different compared to the previous implementation.
The transparent BSDF and principled BSDF now detect when the material
is potentially transparent to select the best way to render it.
This makes it possible to have AA and correct blending of the
forward rendered spheres.
However, to avoid distorted spheres we need to not support LookDev
in panoramic projection mode.
Also remove support for LookDev when using a render border, for now.
This differs a bit from the old implementation.
- Instead of manually adjusting the viewport we correctly place the
sphere in the vertex shader.
- Rendering happens after TAA accumulation: This is because we now
support panoramic cameras and TAA would distort the spheres.
This exposes the capability of having no light and no probe (except the
world one) for specific views / code paths.
The caller just needs to pass 0 as extent to the `set_view()` function.
This is useful for lookdev.
This does not include reference sphere rendering.
The approach is a bit different than before.
Now we use a `bNodeTree` to control the rendering of lookdev. This
generates a `GPUMaterial` that is stored per `Instance`. This way
rendering lookdev is just updating the temp light cache using this
material as the world material, removing the use of a custom shader.
This introduces a small hack in order to bind the studiolight hdri after
the nodetree glsl parsing.
The background display however is still using a custom shader in order
to sample the world cubemap with different roughness.
The view space option of the studiolight is now faster, using a
transform before shading instead of rebaking the lightprobe constantly.
This should not have any particular impact on render time.
When evaluating surfaces, the deferred passes need to sample the
depth buffer, but they also test against the stencil buffer.
Moreover, the sampler needs to be a 2D sampler, which is not the case
for cubemaps and texture2Darrays.
To overcome this we simply copy the gbuffer depth to another
temp texture using framebuffer blitting.
Some things differ from the old implementation.
- Object visibility is filtered correctly without using a visibility
callback (which is to be removed).
The implementation is also more high level, using fewer low level tricks.
A dedicated LightProbeView is created for each lightprobe cubeface to
render using all pipelines (deferred and forward).
There are still a few things not working.
Only the world probe is supported for now.
The new implementation diverges from the original by randomly
selecting one lightprobe instead of sampling them all.
This speeds up rendering a bit.
This is a small convenience. This lets the render engine use this
default world if the scene has no world.
The world is black to keep the same behavior as before.
Shading groups are now created by the material_array_get functions
instead of passing a reference to be filled later. This avoids having
to wait until later to maybe create a sub shading group.
This also simplifies the handling of different geometry types.
This adds a new closure selection method.
- In a first pass, weights are accumulated per output type (diffuse,
reflection, refraction).
- A random threshold is then generated before evaluating the BSDF nodes
again.
- During the evaluation pass the random threshold is decremented until
it reaches 0. At this moment the current BSDF is sampled.
For this to work, I split the evaluation and the weighting into two
functions for all BSDFs. The `*_eval` nodes are generated as dangling
nodes from the graph and only serialized after the rest of the graph.
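The selection scheme itself can be illustrated with this sketch (function shape assumed):

```cpp
#include <vector>

/* Pass 1 accumulates the weights; the random threshold then gets
 * decremented during re-evaluation, and the closure that brings it
 * below zero is the one that gets sampled. */
int select_closure(const std::vector<float> &weights, float random01)
{
  float total = 0.0f;
  for (float w : weights) {
    total += w; /* First pass: accumulate weights per output type. */
  }
  float threshold = random01 * total;
  for (int i = 0; i < int(weights.size()); i++) {
    threshold -= weights[i]; /* Second pass: decrement until crossing 0. */
    if (threshold <= 0.0f) {
      return i; /* This BSDF is sampled. */
    }
  }
  return int(weights.size()) - 1; /* Numerical safety. */
}
```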
The recalc flag on the Material ID is unavailable to the render engine,
so this adds a simple way to detect material updates by detecting shader
creation or update.
This constructs a "mirror" nodetree that feeds the closure "shader"
nodes with their respective final weight.
The tree is mirrored using simple math nodes. This is quite messy but
this is the only way to proceed without introducing special nodes.
The other issue with this method is that the inputs are all uniforms,
even for unplugged sockets on temporary math nodes, which adds bloat to
the shader uniform buffer structure.
Only the part relevant to the weighting is duplicated. Other connections
with the shading tree are reused.
All shader nodes are updated to receive a `Weight` hidden parameter.
The original shader mixing tree is preserved to leave the choice of using
either way to weight the output.
For now this is only done for the output nodes. This will need to be
extended to Closure to RGBA sub-tree.
This is the first step towards the new evaluation scheme of EEVEE
closures.
This commit contains:
- Removal of the GPU_SOURCE_BUILTIN type, preferring globals instead. This
avoids a lot of boilerplate code since most of the old builtins are now
data that is always present (i.e. view matrices, normals).
- Rewriting of codegen in C++ using `std::stringstream`.
- Added a callback to let the engine decide what to do with the generated
code. This removes a lot of the need for defines caused by code order
dependency. The engine can insert the nodetree code in custom ways
to create advanced effects (i.e. add displacement or vertex lighting).
Engines now return the final shader strings.
- The closure node evaluation replacement is a placeholder for now.
This is a port of the old material grouping. It is a bit cleaner
as we use containers for each pass and other structures.
The nodetree is generated without major errors for simple materials,
but it is not yet used as closures are not output.
This adds the transparency and volume handling in the deferred
render pipeline.
Implementation is still unfinished.
To have better naming convention, I renamed object shader to surface.
This introduces a fat Gbuffer layout that groups closure data in groups
of similar BSDFs. The goal is to have at least one sample for each
group, to avoid too much code complexity and expectedly worse performance.
There is a lot of room for buffer reuse to reduce memory usage but it is
not considered a priority for now.
Add a smooth transition to avoid flickering of stochastic effects such
as soft shadows.
This uses a simple blend method to progressively reveal the render
after some low sample count, avoiding most of the flickering.
Parameters are hardcoded for now.
We use a new RNG to avoid correlation artifacts between Anti-Aliasing
and Shadow samples (see T68594).
The new sequence is a leaped Halton sequence. This makes it good with
a low number of samples and yields fewer correlation issues.
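For reference, a leaped Halton sequence strides through the standard radical inverse with a leap coprime to the base; a minimal sketch with assumed constants:

```cpp
/* Standard radical inverse in the given base. */
float radical_inverse(unsigned int i, unsigned int base)
{
  const float inv_base = 1.0f / float(base);
  float result = 0.0f;
  float factor = inv_base;
  while (i > 0) {
    result += float(i % base) * factor;
    i /= base;
    factor *= inv_base;
  }
  return result;
}

/* Leaped Halton: only every `leap`-th sample of the sequence is used.
 * The leap must be coprime with the base to keep good coverage. */
float leap_halton(unsigned int index, unsigned int base, unsigned int leap)
{
  return radical_inverse(index * leap, base);
}
```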
Another change is that we directly jitter the projection matrix instead
of rotating the view matrix. This improves convergence time and
avoids passing a second matrix to the shader.
However, this can lead to discontinuity artifacts at face borders.
We might want to revert to the old rotation method for this
reason, even if convergence is slower.
Now the shadows are linked to a `Light` object. The `Light` object is
linked to an `ObjectKey` to ensure persistence and deletion tracking.
The uniform data is packed so that there is one `ShadowPunctualData`
per light in a `LightBatch`. This means the only limit on the number
of `Shadow`s in a scene is the shadowmap itself.
Difference with previous implementation:
- Better texture space usage of cone and area light shadow.
- Shadows are packed in an atlas. Reducing requirements for future
features.
- Sampling is simpler because shadow matrix does everything.
This follows closely the implementation of 2.5D tiled light
culling described in the presentation:
"Improved Culling for Tiled and Clustered Rendering"
from Michal Drobot
http://advances.realtimerendering.com/s2017/2017_Sig_Improved_Culling_final.pdf
I chose the tile + Z binning approach for its high depth range support
and low CPU overhead & low memory consumption compared to cluster
based culling. The cons are that the culling is a bit less precise in
some aspects, but it is quite balanced.
The culling is done by the `Culling` object, which is templated to easily
be reused for light probe culling.
The Z-binning process is described starting from slide 20 in the
reference pdf.
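A minimal sketch of the Z binning half, assuming an exponential depth distribution and 16-bit min/max indices (both assumptions; see the pdf for the full scheme). At shading time, the bin's index range is combined with the per-tile mask, so a light must pass both tests:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>

/* Each Z bin stores the min/max index of the depth-sorted lights that
 * intersect it. */
struct ZBin {
  uint16_t min_index = 0xFFFF; /* First sorted light touching this bin. */
  uint16_t max_index = 0;      /* Last sorted light touching this bin. */
};

/* Map a view-space depth to a bin, distributing bins exponentially to
 * support high depth ranges. */
int bin_from_depth(float z, float z_near, float z_far, int bin_count)
{
  const float t = std::log(z / z_near) / std::log(z_far / z_near);
  return std::clamp(int(t * float(bin_count)), 0, bin_count - 1);
}
```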
I also implemented a debug pass to visualize false negatives (lights
culled when they shouldn't be) and light evaluation density.
This is useful to detect failure cases and hotspots. This could be exposed
as a developer-only render pass in the future.
Some optimizations of the reference implementation require extensions
not yet added to the GPU module and will be added later.
This has the basis of clustered light culling but does not yet do
it. The lights are only culled by frustum.
It's the same as if there were only one cell for the entire viewport.
This also wraps GPUFrameBuffer & GPUTexture inside eevee::Framebuffer
and eevee::Texture to improve management.
Another cleanup was to make all members of `Instance` public to
avoid much complexity in accessing the data across module
dependencies.
Also split the velocity view related data into `class Velocity` and
renamed the previous `Velocity` to `VelocityModule`.
Support an infinite light count by dividing rendering into chunks of
LIGHT_MAX lights. Forward passes are simply rendered again, and deferred
passes (not implemented yet) will just have multiple light evaluation
passes.
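The chunking can be sketched like this; the helper names and the LIGHT_MAX value are hypothetical:

```cpp
#include <algorithm>

/* Hypothetical stand-ins for the real binding/draw calls. */
void bind_light_range(int first, int count) { (void)first; (void)count; }
void draw_forward_pass() {}

constexpr int LIGHT_MAX = 128; /* Assumed chunk size. */

void render_forward(int total_lights)
{
  for (int first = 0; first < total_lights; first += LIGHT_MAX) {
    const int count = std::min(LIGHT_MAX, total_lights - first);
    bind_light_range(first, count);
    /* Additive blending accumulates the lighting of each chunk. */
    draw_forward_pass();
  }
}
```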
This is almost the same thing as the old implementation.
Differences:
- We clamp the motion vectors to their maximum when sampling the velocity buffer.
- Velocity rendering (and its data manager) is separated from motion blur. This allows
outputting the motion vector render pass and, in the future, using motion vectors to
reproject older frames.
- Vector render pass support (only if motion blur is disabled, just like Cycles).
- Velocity tiles are computed in one pass (simpler code, less CPU overhead, less
VRAM usage, maybe a bit slower but imperceptible (< 0.3 ms)).
- Two velocity passes are output, one for the motion blur fx (applied per shading view)
and one for the vector pass. This could be optimized further in the future.
- No current support for deformation & hair (to come).
Bonus addition, support for shutter curve.
Compared to the old implementation, the per time step sync function
is lighter and localized. Also it does not require a full engine
"reboot" in order to work.
Also modifies camera setup to be compatible with future camera motion
blur.
Pretty much identical to the previous implementation, with the exception
of a temporary noise function and some simplification of the CoC
computation. This also fixes issues with the ortho depth of field.
Most of the files were modified to comply with the new shader codestyle.
This also adds partial support for panoramic cameras (bokeh and
anamorphic are still buggy).
This cleans up a lot of confusion / complexity in the setup code.
The setup is closer to what Cycles does now.
Also duplicates some buggy behavior of Cycles for now, until this
is fixed.
This moves view resolution handling to the `Camera` class, which will
in the future clip and trim each view in panoramic projection.
There is a new `CameraView` that contains the `DRWView` and subview.
This way each `ShadingView` is associated with a unique `CameraView`.
`ShadingView` & `CameraView` are all allocated & defined at creation time,
but only the ones activated by `Camera` will be rendered.
This option makes accumulation happen in a pre-exposed logarithmic
color space. This reduces the importance of bright pixels in the pixel
filter, which results in less aliasing in these areas.
There are a few cases where one might want to disable this option to
match Cycles better.
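The encoding could look roughly like this sketch; the exact curve and pre-exposure handling are assumptions:

```cpp
#include <cmath>

/* Compress bright values before the filter sum so they weigh less,
 * then decode the average back to linear. */
float encode_log(float linear, float exposure)
{
  return std::log2(1.0f + linear * exposure);
}

float decode_log(float encoded, float exposure)
{
  return (std::exp2(encoded) - 1.0f) / exposure;
}
```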
Render mode is really close to what the viewport render does.
Film output is done by resolving the data to the next (double buffered)
framebuffer and reading it back.
This also includes a bit of cleanup in the naming of the init() and
sync() functions.
This commit adds the Film class that handles accumulation of color and
non-color data using arbitrary projection and filter size.
A weighted accumulation (sum) is done into a data buffer with an
additional weight buffer. The sum being per pixel, it allows input
textures that are not aligned with the output pixel grid.
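As a sketch of the accumulation and resolve steps (names assumed, projection and filtering omitted):

```cpp
/* Per-pixel storage: a weighted sum and the sum of weights. */
struct FilmPixel {
  float data = 0.0f;   /* Sum of weight * value. */
  float weight = 0.0f; /* Sum of weights. */
};

/* Splat one input sample into an output pixel it overlaps. */
void accumulate(FilmPixel &pixel, float value, float filter_weight)
{
  pixel.data += value * filter_weight;
  pixel.weight += filter_weight;
}

/* Resolve divides the two, making the result independent of sample count. */
float resolve(const FilmPixel &pixel)
{
  return pixel.weight > 0.0f ? pixel.data / pixel.weight : 0.0f;
}
```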
Panoramic projection works by rendering a cubemap (6 views) of the scene
at the camera position. The Film filter pass then gathers the pixels
using the correct panoramic projection, ensuring correct anti-aliasing.
For Non-color data (depth, normals) we only keep the closest value to
the target pixel center (simulating a filter size of 0).
Color data is accumulated in log space to improve the anti-aliasing of the output.
This is hardcoded for now.
Larger filters have poor performance but are very fast to converge.
Code-wise: this commit renames some modules to avoid possible confusion
and to have better meaning. Namespaces are used instead of prefixes.
Added a new eevee_shared.hh file to share structure and enum definitions
between GLSL and C++.
Same idea as the previous commit. This cleans up the interface and puts
all viewport related data inside the `DRWData` struct.
The draw manager is responsible for freeing it. That is the main point
of this all. In the future, we can have custom freeing method for each
engine.