
/*
* ***** BEGIN GPL LICENSE BLOCK *****
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
* of the License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software Foundation,
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* The Original Code is Copyright (C) 2001-2002 by NaN Holding BV.
* All rights reserved.
*
* Contributor(s): Blender Foundation
*
* ***** END GPL LICENSE BLOCK *****
*/
/** \file blender/editors/screen/glutil.c
* \ingroup edscr
*/
#include <stdio.h>
#include <string.h>
#include "MEM_guardedalloc.h"
#include "DNA_userdef_types.h"
#include "DNA_vec_types.h"
#include "BLI_rect.h"
#include "BLI_utildefines.h"
#include "BLI_math.h"
#include "BLI_threads.h"
#include "BKE_blender.h"
#include "BKE_global.h"
#include "BKE_colortools.h"
#include "BKE_context.h"
#include "BIF_gl.h"
#include "BIF_glutil.h"
#include "GPU_extensions.h"
#include "IMB_colormanagement.h"
#include "IMB_imbuf_types.h"
#ifndef GL_CLAMP_TO_EDGE
#define GL_CLAMP_TO_EDGE 0x812F
#endif
/* ******************************************** */
/* defined in BIF_gl.h */
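/* 32x32 bit pattern: pixels alternate on/off and each row is shifted
 * by one, giving a 50% checker-style halftone screen. */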
const GLubyte stipple_halftone[128] = {
0xAA, 0xAA, 0xAA, 0xAA, 0x55, 0x55, 0x55, 0x55,
0xAA, 0xAA, 0xAA, 0xAA, 0x55, 0x55, 0x55, 0x55,
0xAA, 0xAA, 0xAA, 0xAA, 0x55, 0x55, 0x55, 0x55,
0xAA, 0xAA, 0xAA, 0xAA, 0x55, 0x55, 0x55, 0x55,
0xAA, 0xAA, 0xAA, 0xAA, 0x55, 0x55, 0x55, 0x55,
0xAA, 0xAA, 0xAA, 0xAA, 0x55, 0x55, 0x55, 0x55,
0xAA, 0xAA, 0xAA, 0xAA, 0x55, 0x55, 0x55, 0x55,
0xAA, 0xAA, 0xAA, 0xAA, 0x55, 0x55, 0x55, 0x55,
0xAA, 0xAA, 0xAA, 0xAA, 0x55, 0x55, 0x55, 0x55,
0xAA, 0xAA, 0xAA, 0xAA, 0x55, 0x55, 0x55, 0x55,
0xAA, 0xAA, 0xAA, 0xAA, 0x55, 0x55, 0x55, 0x55,
0xAA, 0xAA, 0xAA, 0xAA, 0x55, 0x55, 0x55, 0x55,
0xAA, 0xAA, 0xAA, 0xAA, 0x55, 0x55, 0x55, 0x55,
0xAA, 0xAA, 0xAA, 0xAA, 0x55, 0x55, 0x55, 0x55,
0xAA, 0xAA, 0xAA, 0xAA, 0x55, 0x55, 0x55, 0x55,
0xAA, 0xAA, 0xAA, 0xAA, 0x55, 0x55, 0x55, 0x55};
/* repeat this pattern
*
* X000X000
* 00000000
* 00X000X0
* 00000000 */
const GLubyte stipple_quarttone[128] = {
136, 136, 136, 136, 0, 0, 0, 0, 34, 34, 34, 34, 0, 0, 0, 0,
136, 136, 136, 136, 0, 0, 0, 0, 34, 34, 34, 34, 0, 0, 0, 0,
136, 136, 136, 136, 0, 0, 0, 0, 34, 34, 34, 34, 0, 0, 0, 0,
136, 136, 136, 136, 0, 0, 0, 0, 34, 34, 34, 34, 0, 0, 0, 0,
136, 136, 136, 136, 0, 0, 0, 0, 34, 34, 34, 34, 0, 0, 0, 0,
136, 136, 136, 136, 0, 0, 0, 0, 34, 34, 34, 34, 0, 0, 0, 0,
136, 136, 136, 136, 0, 0, 0, 0, 34, 34, 34, 34, 0, 0, 0, 0,
136, 136, 136, 136, 0, 0, 0, 0, 34, 34, 34, 34, 0, 0, 0, 0};
const GLubyte stipple_diag_stripes_pos[128] = {
0x00, 0xff, 0x00, 0xff, 0x01, 0xfe, 0x01, 0xfe,
0x03, 0xfc, 0x03, 0xfc, 0x07, 0xf8, 0x07, 0xf8,
0x0f, 0xf0, 0x0f, 0xf0, 0x1f, 0xe0, 0x1f, 0xe0,
0x3f, 0xc0, 0x3f, 0xc0, 0x7f, 0x80, 0x7f, 0x80,
0xff, 0x00, 0xff, 0x00, 0xfe, 0x01, 0xfe, 0x01,
0xfc, 0x03, 0xfc, 0x03, 0xf8, 0x07, 0xf8, 0x07,
0xf0, 0x0f, 0xf0, 0x0f, 0xe0, 0x1f, 0xe0, 0x1f,
0xc0, 0x3f, 0xc0, 0x3f, 0x80, 0x7f, 0x80, 0x7f,
0x00, 0xff, 0x00, 0xff, 0x01, 0xfe, 0x01, 0xfe,
0x03, 0xfc, 0x03, 0xfc, 0x07, 0xf8, 0x07, 0xf8,
0x0f, 0xf0, 0x0f, 0xf0, 0x1f, 0xe0, 0x1f, 0xe0,
0x3f, 0xc0, 0x3f, 0xc0, 0x7f, 0x80, 0x7f, 0x80,
0xff, 0x00, 0xff, 0x00, 0xfe, 0x01, 0xfe, 0x01,
0xfc, 0x03, 0xfc, 0x03, 0xf8, 0x07, 0xf8, 0x07,
0xf0, 0x0f, 0xf0, 0x0f, 0xe0, 0x1f, 0xe0, 0x1f,
0xc0, 0x3f, 0xc0, 0x3f, 0x80, 0x7f, 0x80, 0x7f};
const GLubyte stipple_diag_stripes_neg[128] = {
0xff, 0x00, 0xff, 0x00, 0xfe, 0x01, 0xfe, 0x01,
0xfc, 0x03, 0xfc, 0x03, 0xf8, 0x07, 0xf8, 0x07,
0xf0, 0x0f, 0xf0, 0x0f, 0xe0, 0x1f, 0xe0, 0x1f,
0xc0, 0x3f, 0xc0, 0x3f, 0x80, 0x7f, 0x80, 0x7f,
0x00, 0xff, 0x00, 0xff, 0x01, 0xfe, 0x01, 0xfe,
0x03, 0xfc, 0x03, 0xfc, 0x07, 0xf8, 0x07, 0xf8,
0x0f, 0xf0, 0x0f, 0xf0, 0x1f, 0xe0, 0x1f, 0xe0,
0x3f, 0xc0, 0x3f, 0xc0, 0x7f, 0x80, 0x7f, 0x80,
0xff, 0x00, 0xff, 0x00, 0xfe, 0x01, 0xfe, 0x01,
0xfc, 0x03, 0xfc, 0x03, 0xf8, 0x07, 0xf8, 0x07,
0xf0, 0x0f, 0xf0, 0x0f, 0xe0, 0x1f, 0xe0, 0x1f,
0xc0, 0x3f, 0xc0, 0x3f, 0x80, 0x7f, 0x80, 0x7f,
0x00, 0xff, 0x00, 0xff, 0x01, 0xfe, 0x01, 0xfe,
0x03, 0xfc, 0x03, 0xfc, 0x07, 0xf8, 0x07, 0xf8,
0x0f, 0xf0, 0x0f, 0xf0, 0x1f, 0xe0, 0x1f, 0xe0,
0x3f, 0xc0, 0x3f, 0xc0, 0x7f, 0x80, 0x7f, 0x80};
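/* Draw a black bezier curve between vec[0] and vec[3]; the two inner
 * handles vec[1] and vec[2] are computed here from the horizontal
 * distance between the endpoints, so the array is modified in place. */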
void fdrawbezier(float vec[4][3])
{
float dist;
float curve_res = 24, spline_step = 0.0f;
dist = 0.5f * ABS(vec[0][0] - vec[3][0]);
/* check direction later, for top sockets */
vec[1][0] = vec[0][0] + dist;
vec[1][1] = vec[0][1];
vec[2][0] = vec[3][0] - dist;
vec[2][1] = vec[3][1];
/* we can reuse the dist variable here to increment the GL curve eval amount */
dist = 1.0f / curve_res;
cpack(0x0);
glMap1f(GL_MAP1_VERTEX_3, 0.0, 1.0, 3, 4, vec[0]);
glBegin(GL_LINE_STRIP);
while (spline_step < 1.000001f) {
#if 0
if (do_shaded)
UI_ThemeColorBlend(th_col1, th_col2, spline_step);
#endif
glEvalCoord1f(spline_step);
spline_step += dist;
}
glEnd();
}
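/* Immediate-mode 2D helpers: the 'f' prefixed versions take float
 * coordinates, the 's' prefixed ones take shorts. */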
void fdrawline(float x1, float y1, float x2, float y2)
{
float v[2];
glBegin(GL_LINE_STRIP);
v[0] = x1; v[1] = y1;
glVertex2fv(v);
v[0] = x2; v[1] = y2;
glVertex2fv(v);
glEnd();
}
void fdrawbox(float x1, float y1, float x2, float y2)
{
float v[2];
glBegin(GL_LINE_STRIP);
v[0] = x1; v[1] = y1;
glVertex2fv(v);
v[0] = x1; v[1] = y2;
glVertex2fv(v);
v[0] = x2; v[1] = y2;
glVertex2fv(v);
v[0] = x2; v[1] = y1;
glVertex2fv(v);
v[0] = x1; v[1] = y1;
glVertex2fv(v);
glEnd();
}
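/* Fill the rectangle with a two-tone stippled checker pattern,
 * e.g. as a backdrop that makes transparency visible. */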
void fdrawcheckerboard(float x1, float y1, float x2, float y2)
{
unsigned char col1[4] = {40, 40, 40}, col2[4] = {50, 50, 50};
GLubyte checker_stipple[32 * 32 / 8] = {
255, 0, 255, 0, 255, 0, 255, 0, 255, 0, 255, 0, 255, 0, 255, 0,
255, 0, 255, 0, 255, 0, 255, 0, 255, 0, 255, 0, 255, 0, 255, 0,
0, 255, 0, 255, 0, 255, 0, 255, 0, 255, 0, 255, 0, 255, 0, 255,
0, 255, 0, 255, 0, 255, 0, 255, 0, 255, 0, 255, 0, 255, 0, 255,
255, 0, 255, 0, 255, 0, 255, 0, 255, 0, 255, 0, 255, 0, 255, 0,
255, 0, 255, 0, 255, 0, 255, 0, 255, 0, 255, 0, 255, 0, 255, 0,
0, 255, 0, 255, 0, 255, 0, 255, 0, 255, 0, 255, 0, 255, 0, 255,
0, 255, 0, 255, 0, 255, 0, 255, 0, 255, 0, 255, 0, 255, 0, 255};
glColor3ubv(col1);
glRectf(x1, y1, x2, y2);
glColor3ubv(col2);
glEnable(GL_POLYGON_STIPPLE);
glPolygonStipple(checker_stipple);
glRectf(x1, y1, x2, y2);
glDisable(GL_POLYGON_STIPPLE);
}
void sdrawline(short x1, short y1, short x2, short y2)
{
short v[2];
glBegin(GL_LINE_STRIP);
v[0] = x1; v[1] = y1;
glVertex2sv(v);
v[0] = x2; v[1] = y2;
glVertex2sv(v);
glEnd();
}
/*
* x1,y2
* | \
* | \
* | \
* x1,y1-- x2,y1
*/
static void sdrawtripoints(short x1, short y1, short x2, short y2)
{
short v[2];
v[0] = x1; v[1] = y1;
glVertex2sv(v);
v[0] = x1; v[1] = y2;
glVertex2sv(v);
v[0] = x2; v[1] = y1;
glVertex2sv(v);
}
void sdrawtri(short x1, short y1, short x2, short y2)
{
glBegin(GL_LINE_STRIP);
sdrawtripoints(x1, y1, x2, y2);
glEnd();
}
void sdrawtrifill(short x1, short y1, short x2, short y2)
{
glBegin(GL_TRIANGLES);
sdrawtripoints(x1, y1, x2, y2);
glEnd();
}
void sdrawbox(short x1, short y1, short x2, short y2)
{
short v[2];
glBegin(GL_LINE_STRIP);
v[0] = x1; v[1] = y1;
glVertex2sv(v);
v[0] = x1; v[1] = y2;
glVertex2sv(v);
v[0] = x2; v[1] = y2;
glVertex2sv(v);
v[0] = x2; v[1] = y1;
glVertex2sv(v);
v[0] = x1; v[1] = y1;
glVertex2sv(v);
glEnd();
}
/* ******************************************** */
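/* Set dashed line drawing: nr is the stipple repeat factor, 0 disables.
 * On hi-dpi displays (U.pixelsize > 1) a coarser 2-on/2-off pattern
 * (0xCCCC) is used instead of 1-on/1-off (0xAAAA) so dashes stay
 * readable. */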
void setlinestyle(int nr)
{
if (nr == 0) {
glDisable(GL_LINE_STIPPLE);
}
else {
glEnable(GL_LINE_STIPPLE);
if (U.pixelsize > 1.0f)
glLineStipple(nr, 0xCCCC);
else
glLineStipple(nr, 0xAAAA);
}
}
/* Invert line handling */
#define GL_TOGGLE(mode, onoff) (((onoff) ? glEnable : glDisable)(mode))
void set_inverted_drawing(int enable)
{
glLogicOp(enable ? GL_INVERT : GL_COPY);
GL_TOGGLE(GL_COLOR_LOGIC_OP, enable);
GL_TOGGLE(GL_DITHER, !enable);
}
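/* While inverted drawing is enabled, fragments XOR the framebuffer,
 * so drawing the same primitive twice erases it again -- useful for
 * the rubber-band style XOR helpers below, without a backing store. */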
void sdrawXORline(int x0, int y0, int x1, int y1)
{
if (x0 == x1 && y0 == y1) return;
set_inverted_drawing(1);
glBegin(GL_LINES);
glVertex2i(x0, y0);
glVertex2i(x1, y1);
glEnd();
set_inverted_drawing(0);
}
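/* XOR line with builtin memory for up to 4 lines (index nr 0..3):
 * drawing line nr first re-draws (erases) its previous position.
 * Call with nr == -1 to flush, erasing all stored lines. */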
void sdrawXORline4(int nr, int x0, int y0, int x1, int y1)
{
static short old[4][2][2];
static char flags[4] = {0, 0, 0, 0};
/* with builtin memory, max 4 lines */
set_inverted_drawing(1);
glBegin(GL_LINES);
if (nr == -1) { /* flush */
for (nr = 0; nr < 4; nr++) {
if (flags[nr]) {
glVertex2sv(old[nr][0]);
glVertex2sv(old[nr][1]);
flags[nr] = 0;
}
}
}
else {
if (nr >= 0 && nr < 4) {
if (flags[nr]) {
glVertex2sv(old[nr][0]);
glVertex2sv(old[nr][1]);
}
old[nr][0][0] = x0;
old[nr][0][1] = y0;
old[nr][1][0] = x1;
old[nr][1][1] = y1;
flags[nr] = 1;
}
glVertex2i(x0, y0);
glVertex2i(x1, y1);
}
glEnd();
set_inverted_drawing(0);
}
void fdrawXORellipse(float xofs, float yofs, float hw, float hh)
{
if (hw == 0) return;
set_inverted_drawing(1);
glPushMatrix();
glTranslatef(xofs, yofs, 0.0f);
glScalef(1.0f, hh / hw, 1.0f);
glutil_draw_lined_arc(0.0, M_PI * 2.0, hw, 20);
glPopMatrix();
set_inverted_drawing(0);
}
void fdrawXORcirc(float xofs, float yofs, float rad)
{
set_inverted_drawing(1);
glPushMatrix();
glTranslatef(xofs, yofs, 0.0);
glutil_draw_lined_arc(0.0, M_PI * 2.0, rad, 20);
glPopMatrix();
set_inverted_drawing(0);
}
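/* Arc helpers draw 'angle' radians starting at 'start', centered at
 * the origin of the current matrix; nsegments counts the vertices,
 * with both endpoints included. */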
void glutil_draw_filled_arc(float start, float angle, float radius, int nsegments)
{
int i;
glBegin(GL_TRIANGLE_FAN);
glVertex2f(0.0, 0.0);
for (i = 0; i < nsegments; i++) {
float t = (float) i / (nsegments - 1);
float cur = start + t * angle;
glVertex2f(cosf(cur) * radius, sinf(cur) * radius);
}
glEnd();
}
void glutil_draw_lined_arc(float start, float angle, float radius, int nsegments)
{
int i;
glBegin(GL_LINE_STRIP);
for (i = 0; i < nsegments; i++) {
float t = (float) i / (nsegments - 1);
float cur = start + t * angle;
glVertex2f(cosf(cur) * radius, sinf(cur) * radius);
}
glEnd();
}
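/* Thin wrappers around glGetIntegerv/glGetFloatv for single-valued state. */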
int glaGetOneInteger(int param)
{
GLint i;
glGetIntegerv(param, &i);
return i;
}
float glaGetOneFloat(int param)
{
GLfloat v;
glGetFloatv(param, &v);
return v;
}
void glaRasterPosSafe2f(float x, float y, float known_good_x, float known_good_y)
{
GLubyte dummy = 0;
/* As long as known good coordinates are correct
* this is guaranteed to generate an ok raster
* position (ignoring potential (real) overflow
* issues).
*/
glRasterPos2f(known_good_x, known_good_y);
/* Now shift the raster position to where we wanted
* it in the first place using the glBitmap trick.
*/
glBitmap(0, 0, 0, 0, x - known_good_x, y - known_good_y, &dummy);
}
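/* Lazily create a shared 256x256 RGBA work texture for the
 * glaDrawPixelsTex* functions, returning its name and size;
 * the previously bound texture is restored after creation. */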
static int get_cached_work_texture(int *w_r, int *h_r)
{
static GLint texid = -1;
static int tex_w = 256;
static int tex_h = 256;
if (texid == -1) {
GLint ltexid = glaGetOneInteger(GL_TEXTURE_2D);
unsigned char *tbuf;
glGenTextures(1, (GLuint *)&texid);
glBindTexture(GL_TEXTURE_2D, texid);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
tbuf = MEM_callocN(tex_w * tex_h * 4, "tbuf");
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, tex_w, tex_h, 0, GL_RGBA, GL_UNSIGNED_BYTE, tbuf);
MEM_freeN(tbuf);
glBindTexture(GL_TEXTURE_2D, ltexid);
}
*w_r = tex_w;
*h_r = tex_h;
return texid;
}
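/* Draw an image through the cached work texture: the image is uploaded
 * in tiles that fit the texture, each with a small overlapping "seamless"
 * border so GL_LINEAR filtering does not bleed across tile edges.
 * 'format' must be GL_RGBA, GL_RGB or GL_LUMINANCE; 'type' is GL_FLOAT
 * or GL_UNSIGNED_BYTE. scaleX/scaleY additionally scale the drawn result. */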
void glaDrawPixelsTexScaled(float x, float y, int img_w, int img_h, int format, int type, int zoomfilter, void *rect, float scaleX, float scaleY)
{
unsigned char *uc_rect = (unsigned char *) rect;
float *f_rect = (float *)rect;
float xzoom = glaGetOneFloat(GL_ZOOM_X), yzoom = glaGetOneFloat(GL_ZOOM_Y);
int ltexid = glaGetOneInteger(GL_TEXTURE_2D);
int lrowlength = glaGetOneInteger(GL_UNPACK_ROW_LENGTH);
int subpart_x, subpart_y, tex_w, tex_h;
int seamless, offset_x, offset_y, nsubparts_x, nsubparts_y;
int texid = get_cached_work_texture(&tex_w, &tex_h);
int components;
/* Specify the color outside this function, and tex will modulate it.
* This is useful for changing alpha without using glPixelTransferf()
*/
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
glPixelStorei(GL_UNPACK_ROW_LENGTH, img_w);
glBindTexture(GL_TEXTURE_2D, texid);
/* don't want nasty border artifacts */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, zoomfilter);
#ifdef __APPLE__
/* workaround for os x 10.5/10.6 driver bug: http://lists.apple.com/archives/Mac-opengl/2008/Jul/msg00117.html */
glPixelZoom(1.f, 1.f);
#endif
/* setup seamless: 2 = on, 0 = off */
seamless = ((tex_w < img_w || tex_h < img_h) && tex_w > 2 && tex_h > 2) ? 2 : 0;
offset_x = tex_w - seamless;
offset_y = tex_h - seamless;
nsubparts_x = (img_w + (offset_x - 1)) / (offset_x);
nsubparts_y = (img_h + (offset_y - 1)) / (offset_y);
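/* offset_x/y is the stride between tiles (texture size minus the seam);
 * nsubparts_x/y is the number of tiles needed, i.e. ceil(img / offset). */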
if (format == GL_RGBA)
components = 4;
else if (format == GL_RGB)
components = 3;
else if (format == GL_LUMINANCE)
components = 1;
else {
BLI_assert(!"Incompatible format passed to glaDrawPixelsTexScaled");
return;
}
if (type == GL_FLOAT) {
/* need to set internal format to higher range float */
/* NOTE: this could fail on some drivers, such as Mesa,
* but currently this code is only used by color
* management stuff which already checks whether
* it's possible to use GL_RGBA16F_ARB
*/
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F_ARB, tex_w, tex_h, 0, format, GL_FLOAT, NULL);
}
else {
/* switch to 8bit RGBA for byte buffer */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, tex_w, tex_h, 0, format, GL_UNSIGNED_BYTE, NULL);
}
for (subpart_y = 0; subpart_y < nsubparts_y; subpart_y++) {
for (subpart_x = 0; subpart_x < nsubparts_x; subpart_x++) {
int remainder_x = img_w - subpart_x * offset_x;
int remainder_y = img_h - subpart_y * offset_y;
int subpart_w = (remainder_x < tex_w) ? remainder_x : tex_w;
int subpart_h = (remainder_y < tex_h) ? remainder_y : tex_h;
int offset_left = (seamless && subpart_x != 0) ? 1 : 0;
int offset_bot = (seamless && subpart_y != 0) ? 1 : 0;
int offset_right = (seamless && remainder_x > tex_w) ? 1 : 0;
int offset_top = (seamless && remainder_y > tex_h) ? 1 : 0;
float rast_x = x + subpart_x * offset_x * xzoom;
float rast_y = y + subpart_y * offset_y * yzoom;
/* check if we already got these because we always get 2 more when doing seamless */
if (subpart_w <= seamless || subpart_h <= seamless)
continue;
if (type == GL_FLOAT) {
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, subpart_w, subpart_h, format, GL_FLOAT, &f_rect[subpart_y * offset_y * img_w * components + subpart_x * offset_x * components]);
/* add an extra border of pixels so linear looks ok at edges of full image. */
if (subpart_w < tex_w)
glTexSubImage2D(GL_TEXTURE_2D, 0, subpart_w, 0, 1, subpart_h, format, GL_FLOAT, &f_rect[subpart_y * offset_y * img_w * components + (subpart_x * offset_x + subpart_w - 1) * components]);
if (subpart_h < tex_h)
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, subpart_h, subpart_w, 1, format, GL_FLOAT, &f_rect[(subpart_y * offset_y + subpart_h - 1) * img_w * components + subpart_x * offset_x * components]);
if (subpart_w < tex_w && subpart_h < tex_h)
glTexSubImage2D(GL_TEXTURE_2D, 0, subpart_w, subpart_h, 1, 1, format, GL_FLOAT, &f_rect[(subpart_y * offset_y + subpart_h - 1) * img_w * components + (subpart_x * offset_x + subpart_w - 1) * components]);
}
else {
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, subpart_w, subpart_h, format, GL_UNSIGNED_BYTE, &uc_rect[subpart_y * offset_y * img_w * components + subpart_x * offset_x * components]);
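/* Fill the remaining right column, top row and corner texel by repeating
 * the subpart's last column/row, so linear filtering does not bleed in
 * garbage at the edges of the full image. */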
if (subpart_w < tex_w)
glTexSubImage2D(GL_TEXTURE_2D, 0, subpart_w, 0, 1, subpart_h, format, GL_UNSIGNED_BYTE, &uc_rect[subpart_y * offset_y * img_w * components + (subpart_x * offset_x + subpart_w - 1) * components]);
if (subpart_h < tex_h)
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, subpart_h, subpart_w, 1, format, GL_UNSIGNED_BYTE, &uc_rect[(subpart_y * offset_y + subpart_h - 1) * img_w * components + subpart_x * offset_x * components]);
if (subpart_w < tex_w && subpart_h < tex_h)
glTexSubImage2D(GL_TEXTURE_2D, 0, subpart_w, subpart_h, 1, 1, format, GL_UNSIGNED_BYTE, &uc_rect[(subpart_y * offset_y + subpart_h - 1) * img_w * components + (subpart_x * offset_x + subpart_w - 1) * components]);
}
glEnable(GL_TEXTURE_2D);
glBegin(GL_QUADS);
glTexCoord2f((float)(0 + offset_left) / tex_w, (float)(0 + offset_bot) / tex_h);
glVertex2f(rast_x + (float)offset_left * xzoom, rast_y + (float)offset_bot * yzoom);
glTexCoord2f((float)(subpart_w - offset_right) / tex_w, (float)(0 + offset_bot) / tex_h);
glVertex2f(rast_x + (float)(subpart_w - offset_right) * xzoom * scaleX, rast_y + (float)offset_bot * yzoom);
glTexCoord2f((float)(subpart_w - offset_right) / tex_w, (float)(subpart_h - offset_top) / tex_h);
glVertex2f(rast_x + (float)(subpart_w - offset_right) * xzoom * scaleX, rast_y + (float)(subpart_h - offset_top) * yzoom * scaleY);
glTexCoord2f((float)(0 + offset_left) / tex_w, (float)(subpart_h - offset_top) / tex_h);
glVertex2f(rast_x + (float)offset_left * xzoom, rast_y + (float)(subpart_h - offset_top) * yzoom * scaleY);
glEnd();
glDisable(GL_TEXTURE_2D);
}
}
glBindTexture(GL_TEXTURE_2D, ltexid);
glPixelStorei(GL_UNPACK_ROW_LENGTH, lrowlength);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
#ifdef __APPLE__
/* workaround for os x 10.5/10.6 driver bug (above) */
glPixelZoom(xzoom, yzoom);
#endif
}
void glaDrawPixelsTex(float x, float y, int img_w, int img_h, int format, int type, int zoomfilter, void *rect)
{
glaDrawPixelsTexScaled(x, y, img_w, img_h, format, type, zoomfilter, rect, 1.0f, 1.0f);
}
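/* Example usage (illustrative sketch; `img_w`, `img_h` and `rect_float`
 * stand in for the caller's data): draw a tightly packed RGBA float
 * buffer at the region origin, assuming a valid GL context and a 2D
 * projection set up with glaDefine2DArea:
 *
 *   glColor4f(1.0f, 1.0f, 1.0f, 1.0f);
 *   glaDrawPixelsTex(0.0f, 0.0f, img_w, img_h, GL_RGBA, GL_FLOAT,
 *                    GL_NEAREST, rect_float);
 */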
void glaDrawPixelsSafe(float x, float y, int img_w, int img_h, int row_w, int format, int type, void *rect)
{
float xzoom = glaGetOneFloat(GL_ZOOM_X);
float yzoom = glaGetOneFloat(GL_ZOOM_Y);
/* The pixel space coordinate of the intersection of
* the [zoomed] image with the origin.
*/
float ix = -x / xzoom;
float iy = -y / yzoom;
/* The maximum pixel amounts the image can be cropped
* at the lower left without exceeding the origin.
*/
int off_x = floor(max_ff(ix, 0.0f));
int off_y = floor(max_ff(iy, 0.0f));
/* The zoomed space coordinate of the raster position
* (starting at the lower left most unclipped pixel).
*/
float rast_x = x + off_x * xzoom;
float rast_y = y + off_y * yzoom;
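/* Worked example of the two steps above (illustrative): with x = -20.0f
 * and xzoom = 2.0f, the image/origin intersection is ix = 10.0f, so
 * off_x = 10 and rast_x = -20.0f + 10 * 2.0f = 0.0f, i.e. drawing starts
 * exactly at the left edge of the viewport. */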
GLfloat scissor[4];
int draw_w, draw_h;
/* Determine the smallest number of pixels we need to draw
* before the image would go off the upper right corner.
*
* It may seem this is just an optimization but some graphics
* cards (ATI) freak out if there is a large zoom factor and
* a large number of pixels off the screen (probably at some
* level the number of image pixels to draw is getting multiplied
* by the zoom and then clamped). Making sure we draw the
* fewest pixels possible keeps everyone mostly happy (still
* fails if we zoom in on one really huge pixel so that it
* covers the entire screen).
*/
glGetFloatv(GL_SCISSOR_BOX, scissor);
draw_w = min_ii(img_w - off_x, ceil((scissor[2] - rast_x) / xzoom));
draw_h = min_ii(img_h - off_y, ceil((scissor[3] - rast_y) / yzoom));
if (draw_w > 0 && draw_h > 0) {
int old_row_length = glaGetOneInteger(GL_UNPACK_ROW_LENGTH);
/* Don't use safe RasterPos (slower) if we can avoid it. */
if (rast_x >= 0 && rast_y >= 0) {
glRasterPos2f(rast_x, rast_y);
}
else {
glaRasterPosSafe2f(rast_x, rast_y, 0, 0);
}
glPixelStorei(GL_UNPACK_ROW_LENGTH, row_w);
if (format == GL_LUMINANCE || format == GL_RED) {
if (type == GL_FLOAT) {
float *f_rect = (float *)rect;
glDrawPixels(draw_w, draw_h, format, type, f_rect + (off_y * row_w + off_x));
}
else if (type == GL_INT || type == GL_UNSIGNED_INT) {
int *i_rect = (int *)rect;
glDrawPixels(draw_w, draw_h, format, type, i_rect + (off_y * row_w + off_x));
}
}
else { /* RGBA */
if (type == GL_FLOAT) {
float *f_rect = (float *)rect;
glDrawPixels(draw_w, draw_h, format, type, f_rect + (off_y * row_w + off_x) * 4);
}
else if (type == GL_UNSIGNED_BYTE) {
unsigned char *uc_rect = (unsigned char *) rect;
glDrawPixels(draw_w, draw_h, format, type, uc_rect + (off_y * row_w + off_x) * 4);
}
}
glPixelStorei(GL_UNPACK_ROW_LENGTH, old_row_length);
}
}
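/* Example usage (illustrative sketch; `rect` stands in for the caller's
 * buffer): draw a full RGBA byte buffer whose rows are tightly packed,
 * so row_w equals img_w:
 *
 *   glaDrawPixelsSafe(x, y, img_w, img_h, img_w, GL_RGBA,
 *                     GL_UNSIGNED_BYTE, rect);
 */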
/* Uses either glaDrawPixelsSafe or glaDrawPixelsTex, based on a user-defined maximum */
void glaDrawPixelsAuto(float x, float y, int img_w, int img_h, int format, int type, int zoomfilter, void *rect)
{
if (U.image_gpubuffer_limit) {
/* Megapixels, use float math to prevent overflow */
float img_size = ((float)img_w * (float)img_h) / (1024.0f * 1024.0f);
if (U.image_gpubuffer_limit > (int)img_size) {
glColor4f(1.0, 1.0, 1.0, 1.0);
glaDrawPixelsTex(x, y, img_w, img_h, format, type, zoomfilter, rect);
return;
}
}
glaDrawPixelsSafe(x, y, img_w, img_h, img_w, format, type, rect);
}
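/* Worked example of the limit check above (illustrative): a 2048 x 2048
 * image is (2048 * 2048) / (1024 * 1024) = 4 megapixels, so with
 * U.image_gpubuffer_limit set to 10 the texture path is taken, while a
 * limit of 0 disables the texture path entirely. */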
/* 2D Drawing Assistance */
void glaDefine2DArea(rcti *screen_rect)
{
const int sc_w = BLI_rcti_size_x(screen_rect) + 1;
const int sc_h = BLI_rcti_size_y(screen_rect) + 1;
glViewport(screen_rect->xmin, screen_rect->ymin, sc_w, sc_h);
glScissor(screen_rect->xmin, screen_rect->ymin, sc_w, sc_h);
/* The GLA_PIXEL_OFS magic number is to shift the matrix so that
* both raster and vertex integer coordinates fall at pixel
* centers properly. For a longer discussion see the OpenGL
* Programming Guide, Appendix H, Correctness Tips.
*/
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0.0, sc_w, 0.0, sc_h, -1, 1);
glTranslatef(GLA_PIXEL_OFS, GLA_PIXEL_OFS, 0.0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
}
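/* Example usage (illustrative sketch): map GL onto a 256 x 128 pixel
 * region so vertex and raster coordinates can be given directly in
 * pixels:
 *
 *   rcti rect;
 *   BLI_rcti_init(&rect, 0, 255, 0, 127);  // xmin, xmax, ymin, ymax
 *   glaDefine2DArea(&rect);
 */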
#if 0 /* UNUSED */
struct gla2DDrawInfo {
int orig_vp[4], orig_sc[4];
float orig_projmat[16], orig_viewmat[16];
rcti screen_rect;
rctf world_rect;
float wo_to_sc[2];
};
void gla2DGetMap(gla2DDrawInfo *di, rctf *rect)
{
*rect = di->world_rect;
}
void gla2DSetMap(gla2DDrawInfo *di, rctf *rect)
{
int sc_w, sc_h;
float wo_w, wo_h;
di->world_rect = *rect;
sc_w = BLI_rcti_size_x(&di->screen_rect);
sc_h = BLI_rcti_size_y(&di->screen_rect);
wo_w = BLI_rctf_size_x(&di->world_rect);
wo_h = BLI_rctf_size_y(&di->world_rect);
di->wo_to_sc[0] = sc_w / wo_w;
di->wo_to_sc[1] = sc_h / wo_h;
}
/** Save the current OpenGL state and initialize OpenGL for 2D
* rendering. glaEnd2DDraw should be called on the returned structure
* to free it and to return OpenGL to its previous state. The
* scissor rectangle is set to match the viewport.
*
* See glaDefine2DArea for an explanation of why this function uses integers.
*
* \param screen_rect The screen rectangle to be used for 2D drawing.
* \param world_rect The world rectangle that the 2D area represented
* by \a screen_rect is supposed to represent. If NULL it is assumed the
* world has a 1 to 1 mapping to the screen.
*/
gla2DDrawInfo *glaBegin2DDraw(rcti *screen_rect, rctf *world_rect)
{
gla2DDrawInfo *di = MEM_mallocN(sizeof(*di), "gla2DDrawInfo");
int sc_w, sc_h;
float wo_w, wo_h;
glGetIntegerv(GL_VIEWPORT, (GLint *)di->orig_vp);
glGetIntegerv(GL_SCISSOR_BOX, (GLint *)di->orig_sc);
glGetFloatv(GL_PROJECTION_MATRIX, (GLfloat *)di->orig_projmat);
glGetFloatv(GL_MODELVIEW_MATRIX, (GLfloat *)di->orig_viewmat);
di->screen_rect = *screen_rect;
if (world_rect) {
di->world_rect = *world_rect;
}
else {
di->world_rect.xmin = di->screen_rect.xmin;
di->world_rect.ymin = di->screen_rect.ymin;
di->world_rect.xmax = di->screen_rect.xmax;
di->world_rect.ymax = di->screen_rect.ymax;
}
sc_w = BLI_rcti_size_x(&di->screen_rect);
sc_h = BLI_rcti_size_y(&di->screen_rect);
wo_w = BLI_rctf_size_x(&di->world_rect);
wo_h = BLI_rctf_size_y(&di->world_rect);
di->wo_to_sc[0] = sc_w / wo_w;
di->wo_to_sc[1] = sc_h / wo_h;
glaDefine2DArea(&di->screen_rect);
return di;
}
/**
* Translate the (\a wo_x, \a wo_y) point from world coordinates into screen space.
*/
void gla2DDrawTranslatePt(gla2DDrawInfo *di, float wo_x, float wo_y, int *sc_x_r, int *sc_y_r)
{
*sc_x_r = (wo_x - di->world_rect.xmin) * di->wo_to_sc[0];
*sc_y_r = (wo_y - di->world_rect.ymin) * di->wo_to_sc[1];
}
/**
* Translate the \a world point from world coordinates into screen space.
*/
void gla2DDrawTranslatePtv(gla2DDrawInfo *di, float world[2], int screen_r[2])
{
screen_r[0] = (world[0] - di->world_rect.xmin) * di->wo_to_sc[0];
screen_r[1] = (world[1] - di->world_rect.ymin) * di->wo_to_sc[1];
}
/**
* Restores the previous OpenGL state and frees the auxiliary gla data.
*/
void glaEnd2DDraw(gla2DDrawInfo *di)
{
glViewport(di->orig_vp[0], di->orig_vp[1], di->orig_vp[2], di->orig_vp[3]);
glScissor(di->orig_vp[0], di->orig_vp[1], di->orig_vp[2], di->orig_vp[3]);
glMatrixMode(GL_PROJECTION);
glLoadMatrixf(di->orig_projmat);
glMatrixMode(GL_MODELVIEW);
glLoadMatrixf(di->orig_viewmat);
MEM_freeN(di);
}
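/* Example usage of this (unused) 2D helper API, kept as an illustrative
 * sketch:
 *
 *   gla2DDrawInfo *di = glaBegin2DDraw(&screen_rect, &world_rect);
 *   int sc_x, sc_y;
 *   gla2DDrawTranslatePt(di, wo_x, wo_y, &sc_x, &sc_y);
 *   ... draw in screen coordinates ...
 *   glaEnd2DDraw(di);
 */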
#endif
/* **************** GL_POINT hack ************************ */
static int curmode = 0;
static int pointhack = 0;
static GLubyte Squaredot[16] = {0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff};
void bglBegin(int mode)
{
curmode = mode;
if (mode == GL_POINTS) {
float value[4];
glGetFloatv(GL_POINT_SIZE_RANGE, value);
if (value[1] < 2.0f) {
glGetFloatv(GL_POINT_SIZE, value);
pointhack = floor(value[0] + 0.5f);
if (pointhack > 4) pointhack = 4;
}
else {
glBegin(mode);
}
}
}
#if 0 /* UNUSED */
int bglPointHack(void)
{
float value[4];
int pointhack_px;
glGetFloatv(GL_POINT_SIZE_RANGE, value);
if (value[1] < 2.0f) {
glGetFloatv(GL_POINT_SIZE, value);
pointhack_px = floorf(value[0] + 0.5f);
if (pointhack_px > 4) pointhack_px = 4;
return pointhack_px;
}
return 0;
}
#endif
void bglVertex3fv(const float vec[3])
{
switch (curmode) {
case GL_POINTS:
if (pointhack) {
glRasterPos3fv(vec);
glBitmap(pointhack, pointhack, (float)pointhack / 2.0f, (float)pointhack / 2.0f, 0.0, 0.0, Squaredot);
}
else {
glVertex3fv(vec);
}
break;
}
}
void bglVertex3f(float x, float y, float z)
{
switch (curmode) {
case GL_POINTS:
if (pointhack) {
glRasterPos3f(x, y, z);
glBitmap(pointhack, pointhack, (float)pointhack / 2.0f, (float)pointhack / 2.0f, 0.0, 0.0, Squaredot);
}
else {
glVertex3f(x, y, z);
}
break;
}
}
void bglVertex2fv(const float vec[2])
{
switch (curmode) {
case GL_POINTS:
if (pointhack) {
glRasterPos2fv(vec);
glBitmap(pointhack, pointhack, (float)pointhack / 2.0f, (float)pointhack / 2.0f, 0.0, 0.0, Squaredot);
}
else {
glVertex2fv(vec);
}
break;
}
}
void bglEnd(void)
{
if (pointhack) pointhack = 0;
else glEnd();
}
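/* Example usage (illustrative sketch; `co` stands in for a float[3]):
 * draw a point per vertex, letting the bitmap hack kick in on drivers
 * whose maximum point size is below 2.0:
 *
 *   glPointSize(3.0f);
 *   bglBegin(GL_POINTS);
 *   bglVertex3fv(co);  // one call per point
 *   bglEnd();
 */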
/* Uses current OpenGL state to get view matrices for gluProject/gluUnProject */
void bgl_get_mats(bglMats *mats)
{
const double badvalue = 1.0e-6;
glGetDoublev(GL_MODELVIEW_MATRIX, mats->modelview);
glGetDoublev(GL_PROJECTION_MATRIX, mats->projection);
glGetIntegerv(GL_VIEWPORT, (GLint *)mats->viewport);
/* Very strange code here - it seems that certain bad values in the
* modelview matrix can cause gluUnProject to give bad results. */
if (mats->modelview[0] < badvalue &&
mats->modelview[0] > -badvalue)
{
mats->modelview[0] = 0;
}
if (mats->modelview[5] < badvalue &&
mats->modelview[5] > -badvalue)
{
mats->modelview[5] = 0;
}
/* Set up viewport so that gluUnProject will give correct values */
mats->viewport[0] = 0;
mats->viewport[1] = 0;
}
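/* Example usage (illustrative sketch; win_x/win_y/win_z stand in for the
 * caller's window coordinates): capture the matrices and unproject back
 * to object space:
 *
 *   bglMats mats;
 *   double ob_x, ob_y, ob_z;
 *   bgl_get_mats(&mats);
 *   gluUnProject(win_x, win_y, win_z, mats.modelview, mats.projection,
 *                (GLint *)mats.viewport, &ob_x, &ob_y, &ob_z);
 */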
/* *************** glPolygonOffset hack ************* */
/* dist is only for ortho now... */
void bglPolygonOffset(float viewdist, float dist)
{
static float winmat[16], offset = 0.0;
if (dist != 0.0f) {
float offs;
// glEnable(GL_POLYGON_OFFSET_FILL);
// glPolygonOffset(-1.0, -1.0);
/* hack below is to mimic polygon offset */
glMatrixMode(GL_PROJECTION);
glGetFloatv(GL_PROJECTION_MATRIX, (float *)winmat);
/* dist is from camera to center point */
if (winmat[15] > 0.5f) offs = 0.00001f * dist * viewdist; // ortho tweaking
else offs = 0.0005f * dist; // should be clipping value or so...
winmat[14] -= offs;
offset += offs;
glLoadMatrixf(winmat);
glMatrixMode(GL_MODELVIEW);
}
else {
glMatrixMode(GL_PROJECTION);
winmat[14] += offset;
offset = 0.0;
glLoadMatrixf(winmat);
glMatrixMode(GL_MODELVIEW);
}
}
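/* Example usage (illustrative sketch; `viewdist` stands in for the
 * caller's view distance): pull wires slightly towards the viewer while
 * overdrawing shaded geometry, then restore the projection matrix:
 *
 *   bglPolygonOffset(viewdist, 1.0f);
 *   ... draw wireframe ...
 *   bglPolygonOffset(viewdist, 0.0f);  // dist == 0.0f restores winmat
 */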
#if 0 /* UNUSED */
void bglFlush(void)
{
glFlush();
#ifdef __APPLE__
// if (GPU_type_matches(GPU_DEVICE_INTEL, GPU_OS_MAC, GPU_DRIVER_OFFICIAL))
// XXX myswapbuffers(); //hack to get mac intel graphics to show frontbuffer
#endif
}
#endif
/* **** Color management helper functions for GLSL display/transform ***** */
/* Draw given image buffer on a screen using GLSL for display transform */
void glaDrawImBuf_glsl(ImBuf *ibuf, float x, float y, int zoomfilter,
ColorManagedViewSettings *view_settings,
ColorManagedDisplaySettings *display_settings)
{
bool force_fallback = false;
bool need_fallback = true;
/* Early out */
if (ibuf->rect == NULL && ibuf->rect_float == NULL)
return;
/* Dithering is not supported on GLSL yet */
force_fallback |= ibuf->dither != 0.0f;
/* Single channel images cannot be transformed using GLSL yet */
force_fallback |= ibuf->channels == 1;
/* This is actually lots of crap, but currently there's no clearer
 * way to bypass the partial buffer update crappiness while
 * rendering.
 *
 * The thing is -- render engines only update the byte and display
 * buffers for the active render result opened in the image editor.
 * This works fine for showing render progress without switching
 * render layers in the image editor, but it is completely useless
 * for GLSL display, where we need the original buffer so that we
 * can color manage it.
 *
 * So for the duration of rendering we stick to the slower CPU
 * display buffer update. GLSL could be used as soon as some
 * fixes (?) are done in render itself, so we'd always have an
 * image buffer with a relevant float buffer opened while
 * rendering.
 *
 * On the other hand, when using Cycles, stressing the GPU with
 * GLSL could backfire on performance.
 * - sergey -
 */
if (G.is_rendering) {
/* Try to detect whether we're drawing render result,
* other images could have both rect and rect_float
* but they'll be synchronized
*/
if (ibuf->rect_float && ibuf->rect &&
((ibuf->mall & IB_rectfloat) == 0))
{
force_fallback = true;
}
}
/* Try to draw buffer using GLSL display transform */
if (force_fallback == false) {
int ok;
if (ibuf->rect_float) {
if (ibuf->float_colorspace) {
ok = IMB_colormanagement_setup_glsl_draw_from_space(view_settings, display_settings,
ibuf->float_colorspace, TRUE);
}
else {
ok = IMB_colormanagement_setup_glsl_draw(view_settings, display_settings, TRUE);
}
}
else {
ok = IMB_colormanagement_setup_glsl_draw_from_space(view_settings, display_settings,
ibuf->rect_colorspace, FALSE);
}
if (ok) {
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
glColor4f(1.0, 1.0, 1.0, 1.0);
if (ibuf->rect_float) {
int format = 0;
if (ibuf->channels == 3)
format = GL_RGB;
else if (ibuf->channels == 4)
format = GL_RGBA;
else
BLI_assert(!"Incompatible number of channels for GLSL display");
if (format != 0) {
glaDrawPixelsTex(x, y, ibuf->x, ibuf->y, format, GL_FLOAT,
zoomfilter, ibuf->rect_float);
}
}
else if (ibuf->rect) {
/* ibuf->rect is always RGBA */
glaDrawPixelsTex(x, y, ibuf->x, ibuf->y, GL_RGBA, GL_UNSIGNED_BYTE,
zoomfilter, ibuf->rect);
}
IMB_colormanagement_finish_glsl_draw();
need_fallback = false;
}
}
/* In case GLSL failed or is not usable, fall back to glaDrawPixelsAuto */
if (need_fallback) {
unsigned char *display_buffer;
void *cache_handle;
display_buffer = IMB_display_buffer_acquire(ibuf, view_settings, display_settings, &cache_handle);
if (display_buffer)
glaDrawPixelsAuto(x, y, ibuf->x, ibuf->y, GL_RGBA, GL_UNSIGNED_BYTE,
zoomfilter, display_buffer);
IMB_display_buffer_release(cache_handle);
}
}
void glaDrawImBuf_glsl_ctx(const bContext *C, ImBuf *ibuf, float x, float y, int zoomfilter)
{
ColorManagedViewSettings *view_settings;
ColorManagedDisplaySettings *display_settings;
IMB_colormanagement_display_settings_from_ctx(C, &view_settings, &display_settings);
glaDrawImBuf_glsl(ibuf, x, y, zoomfilter, view_settings, display_settings);
}
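/* Example usage (illustrative sketch): draw an ImBuf from a region draw
 * callback that has a bContext, using the context's color management
 * settings:
 *
 *   glaDrawImBuf_glsl_ctx(C, ibuf, 0.0f, 0.0f, GL_NEAREST);
 */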
/* Transform buffer from role to scene linear space using GLSL OCIO conversion
*
* See IMB_colormanagement_setup_transform_from_role_glsl description for
* some more details
*
* NOTE: this only works for RGBA buffers!
*/
int glaBufferTransformFromRole_glsl(float *buffer, int width, int height, int role)
{
GPUOffScreen *ofs;
char err_out[256];
rcti display_rect;
ofs = GPU_offscreen_create(width, height, err_out);
if (!ofs)
return FALSE;
GPU_offscreen_bind(ofs);
if (!IMB_colormanagement_setup_transform_from_role_glsl(role, TRUE)) {
GPU_offscreen_unbind(ofs);
GPU_offscreen_free(ofs);
return FALSE;
}
BLI_rcti_init(&display_rect, 0, width, 0, height);
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glaDefine2DArea(&display_rect);
glaDrawPixelsTex(0, 0, width, height, GL_RGBA, GL_FLOAT,
GL_NEAREST, buffer);
glMatrixMode(GL_PROJECTION);
glPopMatrix();
glMatrixMode(GL_MODELVIEW);
glPopMatrix();
GPU_offscreen_read_pixels(ofs, GL_FLOAT, buffer);
IMB_colormanagement_finish_glsl_transform();
/* unbind */
GPU_offscreen_unbind(ofs);
GPU_offscreen_free(ofs);
return TRUE;
}
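/* Example usage (illustrative sketch; assumes a role constant such as
 * COLOR_ROLE_SCENE_LINEAR from IMB_colormanagement.h): transform a
 * width * height RGBA float buffer in place, with a CPU fallback when
 * offscreen rendering is unavailable:
 *
 *   if (!glaBufferTransformFromRole_glsl(buffer, width, height,
 *                                        COLOR_ROLE_SCENE_LINEAR))
 *   {
 *       ... fall back to a CPU color space transform ...
 *   }
 */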