In all but one call, the value 0 (aka GPU_NONE) was passed in. It is
clearer to default to GPU_NONE and have the one caller that sets a real
type do so explicitly.
Reviewed By: campbellbarton
Differential Revision: https://developer.blender.org/D1026
This is yet another framebuffer issue. There are two problems: we need
the framebuffer fully bound to check for completeness, and when we bind
a depth texture as a framebuffer attachment we need to disable color
reads/writes.
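A minimal sketch of both fixes in raw GL_EXT_framebuffer_object calls
(not the GPU module wrappers); it assumes a current GL context and an
already allocated depth texture:

    #include <GL/glew.h>

    /* Bind a depth texture as the only attachment of a framebuffer
     * and check completeness. */
    static int bind_depth_only_framebuffer(GLuint fb, GLuint depth_tex)
    {
        glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fb);
        glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
                                  GL_TEXTURE_2D, depth_tex, 0);

        /* no color attachment, so reading from and drawing to color
         * buffers must be disabled, otherwise the framebuffer is
         * reported incomplete */
        glDrawBuffer(GL_NONE);
        glReadBuffer(GL_NONE);

        /* completeness can only be checked once the framebuffer is
         * fully bound with its attachments in place */
        return glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT) ==
               GL_FRAMEBUFFER_COMPLETE_EXT;
    }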
Basically, this commit gets rid of most of the derived-mesh immediate-mode
drawing (cases such as subsurf excluded). Even when VBOs are turned off
in the user preferences, we still use vertex arrays, which are very
similar to VBOs except that the memory is client-side. Vertex arrays are
OpenGL 1.1, so compatibility is not an issue here.
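For illustration, a minimal client-side vertex array draw using only
OpenGL 1.1 calls (the helper name is made up):

    #include <GL/gl.h>

    /* Draw triangles from a client-side vertex array: the same
     * glDrawArrays path as a VBO, but `verts` is plain CPU memory. */
    static void draw_tris_vertex_array(const float *verts, int tri_count)
    {
        glEnableClientState(GL_VERTEX_ARRAY);
        glVertexPointer(3, GL_FLOAT, 0, verts);

        /* one call replaces a whole glBegin/glVertex3fv/glEnd loop */
        glDrawArrays(GL_TRIANGLES, 0, tri_count * 3);

        glDisableClientState(GL_VERTEX_ARRAY);
    }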
Reviewers: campbellbarton, sergey, jwilkins
Differential Revision: https://developer.blender.org/D919
This is added in the spirit of the general Cycles GLSL system, which is
still pretty much a work in progress.
This will only work with Cycles at the moment; generating GLSL for
Blender Internal is possible too, of course, but that will be done in a
separate commit.
This hasn't been tested with each and every node in Cycles, but
environment and regular textures with texture coordinates work.
There is a difference in how some coordinates are treated: Cycles uses
world space, while the GLSL code uses view space.
We might want to explore and improve this further in the future.
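A sketch of the kind of conversion this needs, written against the
column-major float[4][4] matrix convention; the function and matrix
names are illustrative:

    /* Transform a view-space coordinate back to world space with the
     * inverse view matrix, so GLSL-side values match what the Cycles
     * node expects. Matrices are column-major: m[col][row]. */
    static void view_to_world(float r_world[3],
                              const float inv_viewmat[4][4],
                              const float view_co[3])
    {
        int i;
        for (i = 0; i < 3; i++) {
            r_world[i] = inv_viewmat[0][i] * view_co[0] +
                         inv_viewmat[1][i] * view_co[1] +
                         inv_viewmat[2][i] * view_co[2] +
                         inv_viewmat[3][i];
        }
    }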
Basically, we don't set a draw buffer until draw time comes. Also add an
explicit validation function to run after all textures have been
attached (this could probably be done automatically at bind time too,
but that is left out for now). See the sketch after the list below:
* Read buffers are set at texture binding time.
* Change naming when setting a texture as a framebuffer target.
* Add a function to set a slot of the framebuffer as the current target,
instead of a texture.
* Binding a buffer reuses the dimensions of the texture at bind time
(the viewport can be used to set an arbitrary range later).
* Removed the offscreen buffer width/height; the generated texture
dimensions are used instead. Those fields were supposed to be checked to
see if the generated texture had the requested size, but they were never
actually updated to the texture dimensions (and storing them twice is
redundant).
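A rough sketch of how these pieces fit together; all names here are
hypothetical, not the actual GPU module API:

    #include <GL/glew.h>

    typedef struct SketchFrameBuffer {
        GLuint object;     /* GL framebuffer name */
        int slot;          /* current attachment slot */
        int width, height; /* taken from the texture bound at bind time */
    } SketchFrameBuffer;

    /* attach a texture to a slot; the read buffer is set at texture
     * binding time, and the texture dimensions are recorded */
    static void framebuffer_texture_attach(SketchFrameBuffer *fb, GLuint tex,
                                           int slot, int w, int h)
    {
        glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fb->object);
        glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT,
                                  GL_COLOR_ATTACHMENT0_EXT + slot,
                                  GL_TEXTURE_2D, tex, 0);
        glReadBuffer(GL_COLOR_ATTACHMENT0_EXT + slot);
        fb->width = w;
        fb->height = h;
    }

    /* make a framebuffer slot the current target; the viewport reuses
     * the texture dimensions recorded at bind time */
    static void framebuffer_slot_bind(SketchFrameBuffer *fb, int slot)
    {
        fb->slot = slot;
        glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fb->object);
        glViewport(0, 0, fb->width, fb->height);
    }

    /* deferred: the draw buffer is only set when draw time comes */
    static void framebuffer_draw_begin(SketchFrameBuffer *fb)
    {
        glDrawBuffer(GL_COLOR_ATTACHMENT0_EXT + fb->slot);
    }

    /* explicit validation, called after all textures are attached */
    static int framebuffer_validate(SketchFrameBuffer *fb)
    {
        glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fb->object);
        return glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT) ==
               GL_FRAMEBUFFER_COMPLETE_EXT;
    }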
The solution is to do the multiplication with the energy in the shader
after texture application.
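A toy illustration of why the order matters, assuming the old path
folded the energy into the lamp color before texturing, where the
fixed-function clamp to [0, 1] loses the extra energy (all values and
names made up):

    #include <stdio.h>

    static float clampf(float x)
    {
        return (x < 0.0f) ? 0.0f : (x > 1.0f) ? 1.0f : x;
    }

    int main(void)
    {
        const float lamp_col = 0.5f;  /* lamp color channel */
        const float energy = 4.0f;    /* lamp energy, can exceed 1 */
        const float tex = 0.25f;      /* texture sample */

        /* old order: energy premultiplied into the color; the
         * intermediate clamp discards everything above 1.0 */
        float before = clampf(lamp_col * energy) * tex;

        /* new order: texture applied first, energy multiplied
         * afterwards in the shader, so nothing is lost to clamping */
        float after = lamp_col * tex * energy;

        printf("premultiplied: %f, post-multiplied: %f\n", before, after);
        return 0;
    }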
We might be able to avoid setting dyncol completely, but this needs
better investigation. Some shader paths also look a bit redundant.
Also, texture mapping is not supported very well for lamps; that might
need investigation too.
Attempt to select the closest bones when possible.
Occlusion-query selection doesn't support this well, because we can't
really derive depth information from occlusion tests. It may be possible
to improve this somewhat in the future.
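For reference, this is all an ARB occlusion query gives back, a bare
sample count with no depth attached (the draw callback is hypothetical):

    #include <GL/glew.h>

    /* Run one occlusion query around a bone draw. The result is only
     * the number of samples that passed the depth test; there is no
     * per-sample depth, so overlapping bones cannot be ordered
     * front-to-back from this alone. */
    static GLuint bone_visible_samples(GLuint query, void (*draw_bone)(void))
    {
        GLuint samples = 0;

        glBeginQueryARB(GL_SAMPLES_PASSED_ARB, query);
        draw_bone();
        glEndQueryARB(GL_SAMPLES_PASSED_ARB);

        glGetQueryObjectuivARB(query, GL_QUERY_RESULT_ARB, &samples);
        return samples;
    }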
Bring back shading in texture painting.
This works now, but it uses three texture units instead of two. Most
GPUs with DirectX 8 level functionality (OpenGL 1.4 should cover that)
should have that many, but some old GPUs might not. In any case, I hope
we will move to an OpenGL 2.1 requirement soon anyway, where 4-8 texture
units are the norm.
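A runtime check along these lines would catch the old GPUs, using the
OpenGL 1.3 multitexture limit query (the function name is made up):

    #include <GL/glew.h>

    /* Query how many fixed-function texture units the driver exposes;
     * the texture paint shading path needs three. */
    static int texpaint_shading_supported(void)
    {
        GLint max_units = 1;
        glGetIntegerv(GL_MAX_TEXTURE_UNITS, &max_units);
        return max_units >= 3;
    }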