Because:
- Less redundancy.
- Better suffixes.
Also a few modifications to GPU_texture_create_* to simplify the API:
- make the format explicit at texture creation.
- remove the component count, as it is already specified by the GPUTextureFormat.
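As a rough illustration only (the exact signatures here are assumptions, not quoted from the headers), the change moves from an implicit component count to an explicit format:
```
/* Hypothetical before: four components, format inferred. */
GPUTexture *tex = GPU_texture_create_2D(w, h, 4, pixels, err_out);

/* Hypothetical after: the GPUTextureFormat names the layout directly,
 * so a separate component count is redundant. */
GPUTexture *tex = GPU_texture_create_2D(w, h, GPU_RGBA8, pixels, err_out);
```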
Added Object Overlap Overlay
- Added R32UI support to GPU_framebuffer
- Added R32UI support to the draw manager
- The overlay mode has an object data pass that renders the 'needed' data to specific buffers so we can mix them together via deferred rendering. In the future this will also include UVs and other data.
- Overlap is implemented as an overlay so it can be used on top of the scene-lit Solid mode (which will be rendered by Eevee).
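As a rough illustration of such a buffer (names are hypothetical and the creation signature is an assumption, not quoted from the headers):
```
/* Hypothetical object data pass target: one object ID per pixel in an
 * R32UI texture, to be resolved later by the deferred overlay shader. */
GPUTexture *object_id_tx = GPU_texture_create_2D(size[0], size[1], GPU_R32UI, NULL, NULL);

/* Attach it next to the depth buffer of the overlay framebuffer. */
GPU_framebuffer_texture_attach(fb, object_id_tx, 0, 0);
```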
Reviewers: fclem, brecht
Reviewed By: fclem
Subscribers: sergey
Tags: #code_quest
Maniphest Tasks: T54726
Differential Revision: https://developer.blender.org/D3174
This refactor modernises the use of framebuffers.
It also touches a lot of files, so breaking down the changes we have:
- GPUTexture: Allow textures to be attached to more than one GPUFrameBuffer.
This allows creating and configuring more FBOs without the need to attach
and detach textures at drawing time.
- GPUFrameBuffer: The wrapper starts to mimic OpenGL a bit more closely.
This allows configuring the framebuffer inside a context other than the
one that will render it. We do the actual configuration when binding the
FBO. We also keep track of config validity and save the drawbuffers state
in the FBO. We remove the different bind/unbind functions, since these
make little sense now that we have separate contexts.
- DRWFrameBuffer: We replace the DRW_framebuffer functions with
GPU_framebuffer ones to avoid another layer of abstraction. We move the
DRW convenience functions to GPUFramebuffer instead and even add new ones.
The macro GPU_framebuffer_ensure_config is pretty much all you need to
create and configure a GPUFramebuffer (see the sketch after this list).
- DRWTexture: Due to the removal of DRWFrameBuffer, we needed to create
functions to create textures for those framebuffers. Pool textures now
use the default texture parameters for the requested texture type.
- DRWManager: Make sure no framebuffer object is bound when doing cache
filling.
- GPUViewport: Add new color_only_fb and depth_only_fb along with FB API
usage update. This lets draw engines render to a color-only or depth-only
target without the need to attach/detach textures.
- WM_window: Assert when a framebuffer is bound while changing context.
This balances the fact that we do not track the OpenGL context inside GPUFramebuffer.
- Eevee, Clay, Mode engines: Update to new API. This comes with a lot of
code simplification.
This also comes with some cleanups in some engine code.
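A minimal sketch of the macro in use (texture names are hypothetical; by convention the first attachment is the depth target, the following ones are color targets):
```
GPUFrameBuffer *fb = NULL;

/* Creates the FBO on first use and reconfigures it only if the
 * attachments changed; the actual GL setup is deferred to bind time. */
GPU_framebuffer_ensure_config(&fb, {
    GPU_ATTACHMENT_TEXTURE(depth_tx),
    GPU_ATTACHMENT_TEXTURE(color_tx),
});

GPU_framebuffer_bind(fb);
/* ... issue draw calls ... */
```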
This includes a few modifications:
- The biggest one is calling glActiveTexture before any call to
glBindTexture for rendering purposes (the uniform value depends on it);
see the sketch after this list. This also makes it easier to know what's
going on when rendering the UI, so if there are missing UI elements
because of this commit, look here first. This allows us to make "fewer
calls" to glActiveTexture (I did not measure the final count) and do
fewer checks inside GPU_texture.
- Remove use of GL_TEXTURE0 as a uniform value in a few places.
- Be more strict and use BLI_assert for bad usage of GPU_texture functions.
- Disable filtering for integer and stencil textures (not supported by
the OpenGL spec).
- Replace bools inside GPUTexture with a bitflag supporting more options
to identify texture types.
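The binding pattern from the first point above, as a minimal sketch (the unit, bindcode, and sampler_loc variables are hypothetical):
```
/* Select the texture unit first: glBindTexture affects the active unit,
 * and the sampler uniform takes the unit index, not the texture name. */
glActiveTexture(GL_TEXTURE0 + unit);
glBindTexture(GL_TEXTURE_2D, bindcode);
glUniform1i(sampler_loc, unit);
```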
Although the problem was exposed in 9457715d9a, it was already present in
the original code that was copied over. To have:
```
} else { /* EXPECTED_VALUE */
```
Without a BLI_assert(value == EXPECTED_VALUE); this is asking for trouble.
Yet another reason to favour switch statements with:
```
default:
BLI_assert(!"value not implemented or supported");
```
Instead of chained if/else if/else /* expected_value */.
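Put together, a minimal sketch of the preferred shape (the enum values are hypothetical):
```
switch (value) {
  case VALUE_A:
    /* ... handle the first case ... */
    break;
  case EXPECTED_VALUE:
    /* ... handle the case the if/else chain left implicit ... */
    break;
  default:
    /* Unlike a trailing `else`, this catches unexpected values loudly. */
    BLI_assert(!"value not implemented or supported");
    break;
}
```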
We only keep this as a way to get GPU_stubs to run, in case we want to do a
thorough cleanup of the codebase and want code using legacy calls to
fail to build.
- Use 11_11_10 buffers for HDR content.
- Eevee compositing shares one buffer if bloom and DOF are both activated.
- Fix slowdown when resizing the EEVEE viewport.
- Removed DRW_BUF_*** enums that were causing confusion.
Adds two static unsigned ints to Gawain and GPUTexture to roughly keep track of memory usage on the GPU.
Stats can be viewed at the bottom of the GPU stats with a debug value > 20.
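A minimal sketch of the counting scheme, with hypothetical helper names (the real counters live in the respective modules):
```
/* One module-level counter, bumped on allocation and release. */
static unsigned int memory_usage = 0;

static void gpu_texture_memory_add(unsigned int size)
{
  memory_usage += size;
}

static void gpu_texture_memory_remove(unsigned int size)
{
  memory_usage -= size;
}
```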
- enabled lights
- alpha test
- texture environment
- point sprites (always enabled in modern GL)
Moved is_clip_plane for better struct packing, no functional change there.
Part of T51164
Texturing is always enabled in GLSL. Simply use a sampler in the shader.
Replaced gpu_generate_mipmap with glGenerateMipmap, since the former just enabled/disabled the texture target and called the latter.
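In other words, callers now make the direct call (a sketch, with a hypothetical bindcode variable):
```
/* The removed wrapper only toggled the target around this call anyway. */
glBindTexture(GL_TEXTURE_2D, bindcode);
glGenerateMipmap(GL_TEXTURE_2D);
```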
Part of T51164
For now only the GGX convolution is computed. The GGX LUT used for the split sum approximation (UE4) is merged with the LTC mag LUT, which uses the same parameters (theta and roughness).
- test for 2D textures first since it's the most common case
- declare variables close to where they're used
- fix compiler warning for proxy (uninitialized use)
- safe return if n != 1, 2, 3 (should never happen)
- whitespace
Using texture arrays to store shadow maps, so fewer texture slots are used when shading. This means a large number of shadows can be supported.
Support projection shadow maps for sun lamps, like in the old BI/BGE.
Support cube shadow maps for point/spot/area lights. The benefit of using them for spot lights is that the spot angle does not change the shadow resolution (at the cost of more memory). The implementation of the cubemap sampling targets OpenGL 3.3 core: we rely on 2D texture arrays to store the cubemap faces and sample the right one manually (see the sketch below). A significant performance improvement could be made using cubemap arrays on supported hardware.
Shadows are only hardware filtered. Prefiltered shadows and settings are coming next.
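A hedged sketch of the manual face selection, written here in C; the face ordering follows the usual +X, -X, +Y, -Y, +Z, -Z convention, which may differ from the actual shader implementation:
```
#include <math.h>

/* Given a sampling direction, return the cubemap face index (the layer in
 * the 2D texture array) and the 2D uv within that face. */
static int cubemap_face_uv(const float dir[3], float r_uv[2])
{
  const float ax = fabsf(dir[0]), ay = fabsf(dir[1]), az = fabsf(dir[2]);
  int face;
  float u, v, ma;

  if (ax >= ay && ax >= az) {
    face = (dir[0] > 0.0f) ? 0 : 1;              /* +X / -X */
    ma = ax;
    u = (face == 0) ? -dir[2] : dir[2];
    v = -dir[1];
  }
  else if (ay >= az) {
    face = (dir[1] > 0.0f) ? 2 : 3;              /* +Y / -Y */
    ma = ay;
    u = dir[0];
    v = (face == 2) ? dir[2] : -dir[2];
  }
  else {
    face = (dir[2] > 0.0f) ? 4 : 5;              /* +Z / -Z */
    ma = az;
    u = (face == 4) ? dir[0] : -dir[0];
    v = -dir[1];
  }
  /* Remap from [-1, 1] on the major axis to [0, 1] texture space. */
  r_uv[0] = 0.5f * (u / ma) + 0.5f;
  r_uv[1] = 0.5f * (v / ma) + 0.5f;
  return face;
}
```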
Using Linearly Transformed Cosines to compute area lighting. This is far more accurate than other techniques, but also slower.
We use a rotating quad to mimic a sphere area light. For a better approximation, we use a rotating octagon.
Initial work by Clément Foucault with contributions from Dalai Felinto
(mainly per-collection engine settings logic, and depsgraph iterator placeholder).
This makes Blender require OpenGL 3.3, which means Intel graphics cards
and OSX will break. Disable CLAY_ENGINE in CMake in those cases.
This is a prototype render engine intended to help the design of real
render engines. It is mainly an engine with emphasis on matcaps and
ambient occlusion.
Implemented Features
--------------------
* Clay Render Engine, following the new API, to be used as reference for
future engines
* A more complete Matcap customization with more options
* Per-Collection render engine settings
* New Ground Truth AO - not enabled
Missing Features
----------------
* Finish object edit mode
- Fix shaders to use new matrix
- Fix artifacts when an edge goes off screen
- Fix depth issue
- Selection silhouette
- Mesh wires
- Use mesh normals (for higher quality matcap)
- Non-Mesh objects drawing
- Widget drawing
- Performance issues
* Finish mesh edit mode
- Derived-Mesh-less edit mode API (mesh_render.c)
* General edit mode
- Per-collection edit mode settings
* General engines
- Per-collection engine settings
(they are there, but they still need to be flushed by the depsgraph and
used by the drawing code)
- Remove NPOT check, as NPOT is supported by default with OpenGL 3.3
- All custom texture creation follows the same path now
- An explicit texture format is now required when creating a custom texture (non-RGBA8); see the sketch after this list
- Support for arrays of textures
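A minimal sketch of the last two points (the exact signatures are assumptions for illustration):
```
/* Explicit format: the caller names the layout instead of relying on a
 * default RGBA8 path. */
GPUTexture *color_tx = GPU_texture_create_2D(w, h, GPU_RGBA16F, pixels, err_out);

/* Texture array: one extra dimension for the layer count. */
GPUTexture *layers_tx = GPU_texture_create_2D_array(w, h, layer_count, GPU_RGBA8, NULL, err_out);
```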
Reviewers: dfelinto, merwin
Differential Revision: https://developer.blender.org/D2452
Removed some of my earlier glActiveTexture calls. After reviewing the
code I now trust that GL_TEXTURE0 is active by default. Fewer GL calls,
same results.
Fixed some misuse of glActiveTexture & glUniform1i, mostly my fault.
Caught by --debug-gpu on Windows. Don't know why this appeared to be
working previously!
Plus some easy cleanup nearby.
Non-power-of-two textures are always allowed. Keeping the disabled checks in the code in case we support OpenGL ES in the future. Even then it should be a compile-time check (sketched below), not a run-time one.
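Such a compile-time gate might look like this sketch (the WITH_GLES guard is hypothetical; is_power_of_2_i is the existing BLI helper):
```
#ifdef WITH_GLES
  /* Only relevant for a hypothetical GLES build; desktop GL 3.3 always
   * supports NPOT textures, so this check compiles away entirely. */
  if (!is_power_of_2_i(w) || !is_power_of_2_i(h)) {
    fprintf(stderr, "Unsupported non-power-of-two texture size: %dx%d\n", w, h);
    return NULL;
  }
#endif
```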
Errors are caught & reported by our GL debug callback. This gives us way more useful information than sporadic calls to glGetError.
I removed almost all use of glGetError, including our own GPU_ASSERT_NO_GL_ERRORS and GPU_CHECK_ERRORS_AROUND macros.
Still used in rna_Image_gl_load because it passes unvalidated input to OpenGL functions.
Still used in gpu_state_print_fl_ex as an exception handling hack -- will rewrite this soon.
The optimism embodied by this commit will not prevent OpenGL errors. We need to analyze what would cause GL to fail at certain points and proactively intercept these failures. Or guarantee they can't happen.
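For reference, the general registration shape of such a callback (a sketch of the standard KHR_debug API, not Blender's actual callback):
```
#include <stdio.h>
#include <GL/glew.h>  /* or whichever GL loader the build uses */

static void GLAPIENTRY debug_callback(GLenum source, GLenum type, GLuint id,
                                      GLenum severity, GLsizei length,
                                      const GLchar *message, const void *userParam)
{
  /* Report every GL error/warning as it happens, with a real message
   * instead of a bare glGetError code. */
  fprintf(stderr, "GL: %s\n", message);
}

void gpu_debug_init(void)  /* hypothetical setup function */
{
  glEnable(GL_DEBUG_OUTPUT);
  glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS);  /* break inside the offending call */
  glDebugMessageCallback(debug_callback, NULL);
}
```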