If allocating a proxy texture fails, don't give up; instead create a
smaller, nearest-filtered image to display in its place.
This can make viewing slower (it's an extra O(n^3) operation), but it
should help us render the tornado in the 3D viewport for Gooseberry and
still actually see something - despite the rendering taking longer.
I've added a debug print so we can know when this happens.
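For illustration, here's a minimal sketch of this kind of fallback, assuming a
3D smoke texture and GLEW; the function names and the halving strategy are
illustrative, not necessarily what the actual code does:

#include <stdio.h>
#include <GL/glew.h>

static int proxy_texture_3d_fits(int w, int h, int d)
{
    GLint width = 0;
    /* ask the driver whether this allocation would succeed, without allocating */
    glTexImage3D(GL_PROXY_TEXTURE_3D, 0, GL_RGBA8, w, h, d, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glGetTexLevelParameteriv(GL_PROXY_TEXTURE_3D, 0, GL_TEXTURE_WIDTH, &width);
    return width != 0;  /* width 0 means the proxy allocation failed */
}

void choose_texture_size(int *w, int *h, int *d)
{
    while (!proxy_texture_3d_fits(*w, *h, *d) && (*w > 1 || *h > 1 || *d > 1)) {
        /* halve every dimension and retry; the pixel data is later rescaled
         * to this size with nearest filtering before the real upload */
        if (*w > 1) *w /= 2;
        if (*h > 1) *h /= 2;
        if (*d > 1) *d /= 2;
        printf("proxy texture allocation failed, trying %dx%dx%d\n", *w, *h, *d);
    }
}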
That was really crappy indeed. Now we have a separate API
for low-level OpenGL programs, plus a nice interface for the GPU module;
it also removes some GL calls from the main code as a plus :)
The source for the programs has also moved to nice external .glsl files
(not sure which extension convention GPU assembly uses).
Aaaaah, the beauty of driver implementations of OpenGL!
Turns out the problem here is that drivers calculate dFdy differently
in some cases (probably because OpenGL counts y in reverse to how the
window system does, so drivers can get confused).
Fixed this for the ATI case based on the info we have so far; there's
also the Intel case, which will be handled separately (we're missing
info on Intel's renderer string etc).
Unfortunately we can't really fix this for the general case, so we'll
have to handle cases as they come in on our tracker, by adding silly
string comparisons in our GPU initialization module <sigh>.
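To illustrate the kind of workaround this means, here's a rough sketch of
prepending a sign-flip macro based on the vendor string; the strings, macro
name and exact condition are placeholders, not the real detection logic:

#include <string.h>
#include <GL/glew.h>

/* returned string is prepended to every shader, so shaders can use
 * dfdy_flip(dFdy(x)) and get a consistent result across drivers */
const char *gpu_shader_dfdy_prefix(void)
{
    const char *vendor = (const char *)glGetString(GL_VENDOR);

    if (vendor && strstr(vendor, "ATI")) {
        /* this driver returns dFdy() with the opposite sign when rendering
         * to an FBO, so compensate */
        return "#define dfdy_flip(x) (-(x))\n";
    }
    return "#define dfdy_flip(x) (x)\n";
}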
Count lines from the beginning of the whole shader source instead of
each string separately, since that helps with finding the error line on
most tested platforms.
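A small sketch of what that means in practice (illustrative helper, not the
actual code): when glShaderSource() gets several strings, number the lines of
the concatenated source so the line reported in the driver's error log can be
looked up directly.

#include <stdio.h>

void print_numbered_shader_source(int count, const char **sources)
{
    int line = 1;
    int at_line_start = 1;

    for (int i = 0; i < count; i++) {
        for (const char *c = sources[i]; *c; c++) {
            if (at_line_start) {
                printf("%4d  ", line++);
                at_line_start = 0;
            }
            putchar(*c);
            if (*c == '\n')
                at_line_start = 1;
        }
    }
}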
Quite a few things wrong here:
* Mac did not support EXT_draw_instanced, only ARB_draw_instanced (see
the sketch after this list)
* Instanced drawing did not work unless the data came from a vertex
buffer, which is the second time we've seen weird things with vertex
arrays on Mac
* There were a few stupid mistakes by me as well, such as binding to
uniform locations for the wrong shaders (it's a wonder it ever worked
:p)
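A rough sketch of the portable extension check (GLEW names; the actual
fallback path in the code may differ):

#include <stdio.h>
#include <GL/glew.h>

void draw_points_instanced(GLint first, GLsizei vert_count, GLsizei instances)
{
    /* Mac advertises ARB_draw_instanced but not EXT_draw_instanced,
     * so check both before deciding instancing is unsupported */
    if (GLEW_ARB_draw_instanced) {
        glDrawArraysInstancedARB(GL_POINTS, first, vert_count, instances);
    }
    else if (GLEW_EXT_draw_instanced) {
        glDrawArraysInstancedEXT(GL_POINTS, first, vert_count, instances);
    }
    else {
        printf("instanced drawing not supported here\n");
    }
}

And per the second point, the attribute data fed to these calls has to live in
a vertex buffer object on Mac; client-side vertex arrays plus instancing did
not work there.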
A new checkbox "High quality" is provided in camera settings to enable
this. This creates a depth of field that is much closer to the rendered
result and even supports aperture blades in the effect, but it's more
expensive too. There are optimizations to do here since the technique is
very fill rate heavy.
People, be careful, this -can- lock up your screen if depth of field
blurring is too extreme.
Technical details:
This uses geometry shaders + instancing and is an adaptation of
techniques gathered from
http://bartwronski.com/2014/04/07/bokeh-depth-of-field-going-insane-
http://advances.realtimerendering.com/s2011/SousaSchulzKazyan%20-%20in%20Real-Time%20Rendering%20Course).ppt
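For context, a rough sketch of the thin-lens circle-of-confusion term this
kind of effect is driven by (variable names and units are illustrative, not
the actual implementation):

#include <math.h>

/* diameter of the blur circle for a point at 'dist', with the camera focused
 * at 'focus_dist'; the aperture diameter is focal_length / fstop */
float circle_of_confusion(float focal_length, float fstop,
                          float focus_dist, float dist)
{
    float aperture = focal_length / fstop;
    return aperture * (focal_length / (focus_dist - focal_length)) *
           fabsf(dist - focus_dist) / dist;
}

Each pixel's CoC gets scaled to a radius in pixels and expanded into a sprite
of that size, which is why extreme blurs become so fill rate heavy.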
TODOs:
* Support dithering to minimize banding.
* Optimize fill rate in geometry shader.
Using bool when we're asking yes/no questions such as whether some GPU
feature is supported.
Consolidated these simple functions into gpu_extensions.c and grouped
them in the header.
Const-ified some args where the functions don't modify the pointed-to
data.
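Purely illustrative declarations (not the actual gpu_extensions.c API) just
to show the convention:

#include <stdbool.h>
#include <stddef.h>

/* yes/no capability query -> bool instead of int */
bool gpu_supports_some_feature(void);

/* pointed-to data is only read -> const */
void gpu_upload_data(const float *data, size_t len);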
Patch D706, with changes:
- WITH_GPU_DEBUG just creates a debug context (and enables the debug
messaging system functions) but leaves the checks we had intact. The old
patch added the debug functionality only if the flag was on, to save
some performance.
Rationale here is that we might not want to recompile blender just to
get the extra information, and having users start blender with a -d flag
to get it is also useful for bug reports. Those checks already existed,
and the most expensive ones are hidden behind a debug mode check, so
performance should not be that bad.
- Did some cleanup of existing functionality: when things go wrong on
blender's side, just print the error, don't check for GL errors first.
- Did not port changes needed for GLES to regular glew.h
- Got rid of duplicate or very similar new functionality.
Generally, the code mostly just moves things around/cleans up, and it
should work exactly as before apart from the debug context, so it's safe
to add even now.
It also provides a nice substitute function for the GLU error
descriptions.
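As a sketch, hooking up the message callback when a debug context is around
looks roughly like this (KHR_debug / ARB_debug_output; wrapper names are
illustrative):

#include <stdio.h>
#include <GL/glew.h>

static void GLAPIENTRY debug_callback(GLenum source, GLenum type, GLuint id,
                                      GLenum severity, GLsizei length,
                                      const GLchar *message,
                                      const void *userParam)
{
    (void)source; (void)type; (void)id;
    (void)severity; (void)length; (void)userParam;
    fprintf(stderr, "GL debug: %s\n", message);
}

void gpu_debug_init(void)
{
    if (GLEW_KHR_debug) {
        /* output is already on in a debug context; synchronous mode makes
         * the callback fire on the offending call's thread */
        glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS);
        glDebugMessageCallback((GLDEBUGPROC)debug_callback, NULL);
    }
    else if (GLEW_ARB_debug_output) {
        glDebugMessageCallbackARB((GLDEBUGPROCARB)debug_callback, NULL);
    }
}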
Basically, before drawing X-Rays, we now bind a second depth buffer.
After drawing X-Rays, we do an extra resolve pass where we overwrite the
non-X-Ray depth buffer in pixels where the X-Ray depth is not the
maximum value (maximum depth means a background pixel, since the depth
buffer is cleared before drawing the X-Ray objects).
This ensures both the scene and the X-Rays keep their depth values and
are ready for compositing. Some odd effects due to depth discontinuities
are to be expected, and X-Rays are a bit more expensive (extra buffer +
resolve pass), but at least X-Rays won't invalidate depth values
anymore. Whee!
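A minimal sketch of what the resolve pass can look like (illustrative GLSL
embedded as a C string, not the actual shader): sample the X-Ray depth and
write it into the scene depth buffer, skipping pixels the X-Ray pass never
touched.

static const char *xray_depth_resolve_frag =
    "uniform sampler2D xray_depth;\n"
    "uniform vec2 inv_size;\n"
    "void main()\n"
    "{\n"
    "    float d = texture2D(xray_depth, gl_FragCoord.xy * inv_size).r;\n"
    "    if (d == 1.0)\n"
    "        discard;  /* background pixel: keep the scene depth */\n"
    "    gl_FragDepth = d;\n"
    "}\n";

This would run as a full-screen pass with color writes disabled and
glDepthFunc(GL_ALWAYS), so only the depth buffer is touched.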
This commit introduces a few ready-made effects for the 3D viewport
and OpenGL rendering.
Included effects are Depth of Field, accessible from camera view, and
screen-space ambient occlusion. These effects can be turned on and
tweaked from the shading panel in the 3D viewport.
Off-screen rendering will use the settings of the current camera.
WIP documentation can be found here:
http://wiki.blender.org/index.php/User:Psy-Fi/Framebuffer_Post-processing
This is yet another issue with framebuffers. There are two problems
here: we need the framebuffer fully bound to check for completeness, and
when we bind a depth texture as a framebuffer attachment we need to
disable the read and draw buffers.
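In raw GL terms the depth-texture case boils down to something like this
sketch (EXT framebuffer object flavor; the helper name is illustrative):

#include <GL/glew.h>

int bind_depth_only_framebuffer(GLuint fbo, GLuint depth_tex)
{
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
    glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
                              GL_TEXTURE_2D, depth_tex, 0);

    /* no color attachment: disable the read and draw buffers */
    glDrawBuffer(GL_NONE);
    glReadBuffer(GL_NONE);

    /* completeness can only be judged once the FBO is fully set up and bound */
    return glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT) ==
           GL_FRAMEBUFFER_COMPLETE_EXT;
}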
Basically, we don't set a draw buffer until draw time comes. Also add
an explicit validation function to validate after all textures have been
attached (this could probably be done automatically at bind time too,
but that's left out for now).
* read buffers are set at texture binding time
* change naming when setting a texture as framebuffer
* add a function to set a slot of the framebuffer as the current
target instead of a texture.
* Binding a buffer reuses the dimensions of the texture at bind time
(the viewport can be set to an arbitrary range later)
* Removed the offscreen buffer width/height; use the generated texture
dimensions instead. Those were supposed to be checked to see if the
generated texture had the requested size, but they were never actually
updated to the texture dimensions (and it's redundant to store them
twice).
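The "draw buffer only at draw time" part, in raw GL terms, is roughly this
(illustrative helper, not the actual wrapper):

#include <GL/glew.h>

void framebuffer_bind_slot_for_drawing(GLuint fbo, int slot)
{
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
    /* pick which color attachment receives the following draw calls;
     * read buffers were already chosen when the textures were attached */
    glDrawBuffer(GL_COLOR_ATTACHMENT0_EXT + slot);
}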
This was a little difficult to track down; basically it was a missing
escape sequence that only manifested itself when the GPU did not support
bicubic filtering.
Extra:
* Fix memory leaks when an error occurs during shader compilation
* Display the full shader when a compilation error occurs. This makes it
easier to diagnose whether the problem is caused by a syntax or a
compatibility error.
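A sketch of the error path this is about (illustrative, assuming GLEW): fetch
the info log, print it together with the full source, and free everything even
when compilation fails.

#include <stdio.h>
#include <stdlib.h>
#include <GL/glew.h>

int shader_compile_or_report(GLuint shader, const char *full_source)
{
    GLint status = 0, log_len = 0;

    glCompileShader(shader);
    glGetShaderiv(shader, GL_COMPILE_STATUS, &status);
    if (status)
        return 1;

    glGetShaderiv(shader, GL_INFO_LOG_LENGTH, &log_len);
    if (log_len > 0) {
        char *log = malloc(log_len);
        glGetShaderInfoLog(shader, log_len, NULL, log);
        fprintf(stderr, "compile error:\n%s\nfull source:\n%s\n",
                log, full_source);
        free(log);  /* don't leak the log on the error path */
    }
    glDeleteShader(shader);
    return 0;
}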
This is actually a test to see if this can be enabled on ATI cards.
According to various sources, newer ATI cards supporting GLSL 3.0
support gl_ClipDistance in shaders, which is the forward-compatible
way to do custom clipping.
This fix will bind 6 additional varying variables on ATIs, which may
lead to some shaders not compiling due to running out of varyings, or
to performance degradation. Also, I do not have an ATI card handy to
test.
With that in mind, this commit may well be reverted later.
Clipping planes are usually 4 (6 is for cube clipping), but making
shaders depend on viewport state is really bad and would lead to
recompilation, so I took the worst case here to avoid that.
Hopefully the driver does some optimization there.
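For context, a minimal sketch of the gl_ClipDistance approach (illustrative
shader and helper, not the actual material code):

#include <GL/glew.h>

/* illustrative vertex shader: one clip distance per user clip plane */
static const char *clip_vertex_src =
    "#version 130\n"
    "uniform vec4 clip_planes[6];\n"
    "uniform int num_clip_planes;\n"
    "out float gl_ClipDistance[6];\n"
    "void main()\n"
    "{\n"
    "    vec4 pos = gl_ModelViewMatrix * gl_Vertex;\n"
    "    for (int i = 0; i < num_clip_planes; i++)\n"
    "        gl_ClipDistance[i] = dot(clip_planes[i], pos);\n"
    "    gl_Position = gl_ProjectionMatrix * pos;\n"
    "}\n";

void enable_clip_distances(int count)
{
    /* fragments with a negative gl_ClipDistance[i] get clipped, for each
     * enabled GL_CLIP_DISTANCE0 + i */
    int i;
    for (i = 0; i < count; i++)
        glEnable(GL_CLIP_DISTANCE0 + i);
}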
The issue here is most likely sampler uniforms and textures not being
updated properly when a zero binding is created. The solution for now is
to allow the zero binding, but when this happens use a sexy pink invalid
texture instead :p.
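A tiny sketch of the fallback texture (illustrative helper name):

#include <GL/glew.h>

GLuint create_invalid_texture(void)
{
    /* 1x1 pink texel: a missing/zero binding becomes obvious on screen
     * instead of sampling undefined data */
    const unsigned char pink[4] = {255, 0, 255, 255};
    GLuint tex = 0;

    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 1, 1, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pink);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    return tex;
}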
Disabling display lists for legacy ATI cards since they don't support display lists well.
Also removing an unused variable from the display list rasterizer.
With a second window open, the game would stop drawing in the first and
mess up the OpenGL state of the second.
Also fixes glPushAttrib/glPopAttrib getting out of sync in some cases.