The shaders are: `GPU_SHADER_3D_FLAT_SELECT_ID` and `GPU_SHADER_3D_UNIFORM_SELECT_ID`.
This commit allows the mesh select IDs to be drawn to a 32UI-format texture.
This simplifies the shader, which previously rendered to the backbuffer and
had to do a uint-to-RGBA conversion.
Differential Revision: https://developer.blender.org/D4350
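For illustration, a minimal sketch of allocating such a texture with raw GL
calls (`width` and `height` are placeholders; the actual code goes through
Blender's GPU texture API):

```c
/* Sketch: a 32-bit unsigned-integer color attachment, so the select id
 * can be written directly by the fragment shader, with no uint -> RGBA
 * packing step. */
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R32UI, width, height, 0,
             GL_RED_INTEGER, GL_UNSIGNED_INT, NULL);
```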
Some old AMD drivers crash when a VBO with stride 1 is used a few times.
I have not found a real solution to this problem, so the workaround is to use
a VBO with stride 4 (which in theory is less efficient and uses more memory).
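A hedged sketch of the difference, in raw GL terms (`attrib_loc` is a
placeholder, and the buffer data itself must be padded to match the stride):

```c
/* Crashy on some old AMD drivers: one byte per vertex, stride 1. */
glVertexAttribIPointer(attrib_loc, 1, GL_UNSIGNED_BYTE, 1, NULL);

/* Workaround: pad each element to 4 bytes and use stride 4 instead. */
glVertexAttribIPointer(attrib_loc, 1, GL_UNSIGNED_BYTE, 4, NULL);
```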
BF-admins agree to remove header information that isn't useful,
to reduce noise.
- BEGIN/END license blocks
Developers should add non-license comments as separate comment blocks.
No need for separator text.
- Contributors
This is often invalid, outdated or misleading,
especially when splitting files.
It's more useful to use git-blame to find out who developed the code.
See P901 for script to perform these edits.
Some platforms do not support line widths > 1.0 and can even throw an
error. Better to check and at least display something rather than no lines
at all.
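A minimal sketch of such a check, assuming the requested width is simply
clamped to the range the driver reports (GL headers assumed):

```c
#include <math.h>

/* Clamp the requested width to the supported range so the driver
 * neither errors out nor silently drops the lines. */
static void safe_line_width(float width)
{
  float range[2];
  glGetFloatv(GL_ALIASED_LINE_WIDTH_RANGE, range);
  glLineWidth(fmaxf(range[0], fminf(width, range[1])));
}
```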
When calling glBlitFramebuffer on most (if not all) Macs that have a GPU
from the Radeon Pro series, the data is not properly copied and only a
subset of the pixels is copied correctly.
This only happens when blitting the depth buffer and the depth buffer
format is GL_DEPTH24_STENCIL8.
Changing the depth buffer format to GPU_DEPTH32F_STENCIL8 fixes the issue,
but only when blitting the depth component. The stencil component
still provokes issues when being copied.
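For context, a sketch of the kind of blit that misbehaves (`src_fbo`,
`dst_fbo`, `w`, `h` are placeholders):

```c
glBindFramebuffer(GL_READ_FRAMEBUFFER, src_fbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, dst_fbo);
/* With GL_DEPTH24_STENCIL8 attachments, only a subset of the pixels
 * arrives on the affected Radeon Pro GPUs. */
glBlitFramebuffer(0, 0, w, h, 0, 0, w, h,
                  GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT,
                  GL_NEAREST);
```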
This is caused by a driver bug that prevents us from rendering to (or even
binding) a texture mip level that is below GL_TEXTURE_MAX_LEVEL of the
target texture. This is fine in most drivers (and legal AFAIK) but not on
those Intel HDXXX GPUs on Windows.
As a fix we just set GL_TEXTURE_MAX_LEVEL lower (which is illegal because
it is undefined behaviour), but in practice it works fine and does not
trigger any warnings or errors.
This commit fixes most of the problems encountered on these GPUs (T56668).
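A sketch of the workaround described above, with `tex` and `mip` as
placeholders:

```c
/* Clamp GL_TEXTURE_MAX_LEVEL down to the mip being rendered before
 * attaching it. Technically undefined behaviour, but it works on the
 * affected Intel drivers in practice. */
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, mip);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, tex, mip);
```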
This happens on NVidia GPUs: using more textures than the maximum allowed
by GL will NOT trigger a linking error (maybe because of the bindless
texture implementation?).
So in this case we manually count the number of samplers per shader stage
and compare it against the GL limit. We discard the shader if the sampler
count is too high. This shows the user something is wrong with the shader.
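A sketch of the manual check for the fragment stage; `count_samplers()` is
a hypothetical helper that counts the sampler uniforms in a stage's source:

```c
GLint max_frag_samplers;
glGetIntegerv(GL_MAX_TEXTURE_IMAGE_UNITS, &max_frag_samplers);
if (count_samplers(frag_src) > max_frag_samplers) {
  /* Linking succeeded, but the shader would misbehave: reject it
   * explicitly so the user sees the error. */
  fprintf(stderr, "GPUShader: too many samplers in fragment stage\n");
  glDeleteProgram(program);
  program = 0;
}
```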
This bug (explained here: https://github.com/dfelinto/opengl-sandbox/blob/downsample/README.md) is breaking Eevee beyond the point where it's workable.
This patch works around the issue by making sure every FBO has mipmaps that are strictly greater than 16px. This breaks the bloom visuals a bit, but only for this setup.
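A minimal sketch of limiting the mip chain so the smallest level stays
strictly above 16px (`width` and `height` are placeholders):

```c
int size = (width > height) ? width : height;
int mip_count = 0;
/* Stop before the next mip level would be 16px or smaller. */
while ((size >> (mip_count + 1)) > 16) {
  mip_count++;
}
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, mip_count);
```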
We only keep this as a way to get GPU_stubs to run, in case we want to do a
thorough cleanup of the codebase and want code using legacy calls to
fail to build.
There is no point in keeping those around anymore. ES20 may need
special-casing when/if we dabble with it again. Meanwhile there is no point
in polluting the code with this.
(ghost still has a reference to the PROFILE, but that's reasonable)
In the move to OpenGL 3.3 core profile, we drop support for compatibility profile and older versions.
OpenSubdiv was the only user; I'll update OSD next.
These are always supported now:
- instancing as of GL 3.1
- geometry shaders as of GL 3.2
The change to rna_scene.c could use some cleanup, since we don't really need a runtime query function.
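For illustration, with a guaranteed 3.3 core context such runtime queries
collapse to constants; a sketch, with hypothetical function names modeled
on Blender's GPU_*_support() style:

```c
#include <stdbool.h>

/* No runtime capability query needed: GL 3.3 core guarantees both. */
bool GPU_instanced_drawing_support(void) { return true; }
bool GPU_geometry_shader_support(void)  { return true; }
```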