
Compare commits

...

12 Commits

Author SHA1 Message Date
a8809e3ab5 Decklink: remove old versions headers 2016-06-10 13:44:19 +02:00
d20fe9a9bc VideoTexture: remove C++11 warning
Throwing in a destructor in C++11 would cause an immediate crash of Blender.
    Replaced the throw with a message on the console.
2016-06-10 13:31:48 +02:00
cb10fcaed4 Cleanup: strip ws 2016-06-10 20:25:06 +10:00
9b76dd1085 CMake: rename WITH_DECKLINK -> WITH_GAMEENGINE_DECKLINK 2016-06-10 20:07:44 +10:00
9853ecfd54 Cleanup: indentation & minor formatting 2016-06-10 19:59:52 +10:00
80009d1766 Minor fixes for RST docs 2016-06-10 19:27:21 +10:00
1b6067b567 Correct hasattr for code using item access
Also minor cleanup
2016-06-10 18:48:54 +10:00
26b11140ad BGE: DeckLink card support for video capture and streaming.
You can capture and stream video in the BGE using the DeckLink video
   cards from Black Magic Design. You need a card and Desktop Video software
   version 10.4 or above to use these features in the BGE.
   Many thanks to Nuno Estanquiero, who tested the patch extensively
   on a variety of DeckLink products; it wouldn't have been possible without
   his help.
   You can find a brief summary of the DeckLink features here: https://wiki.blender.org/index.php/Dev:Source/GameEngine/Decklink
   The full API details and samples are in the Python API documentation.

bge.texture.VideoDeckLink(format, capture=0):

   Use this object to capture a video stream. The format argument describes
   the video and pixel formats and the capture argument the card number.
   This object can be used as a source for bge.texture.Texture so that the frame
   is sent to the GPU, or by itself using the new refresh method to get the video
   frame in a buffer.
   The frames are usually not in RGB but in YUV format (8bit or 10bit); they
   require a shader to extract the RGB components on the GPU. Details and sample
   shaders are in the documentation.
   3D video capture is supported: the frames are double height with left and right
   eyes in top-bottom order. The 'eye' uniform (see setUniformEyef) can be used to
   sample the 3D frame when the BGE is also in stereo mode. This makes it possible to
   composite a 3D video stream with a 3D scene and render the result in stereo.
   On Windows, if you have an nVidia Quadro GPU, you can benefit from an additional
   performance boost by using 'GPUDirect': a method to send a video frame to the GPU
   without going through the OGL driver. The 'pinned memory' OGL extension is also
   supported (only on high-end AMD GPUs) with the same effect.
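
   A minimal capture sketch based on the description above; the 'pal /8BitYUV'
   format string, card number and material index are assumptions for illustration,
   and a YUV decoding shader (as described above) is still needed on the material:

   python:
     import bge
     from bge import texture

     def start_capture(cont):
         # Attach a DeckLink capture stream to the object's first material (sketch).
         obj = cont.owner
         tex = texture.Texture(obj, 0)
         # 'pal /8BitYUV' is an assumed display mode / pixel format; card 0 is assumed.
         tex.source = texture.VideoDeckLink("pal /8BitYUV", 0)
         tex.source.play()
         obj["video"] = tex

     def update(cont):
         # Call every frame to push the latest captured frame to the GPU.
         tex = cont.owner.get("video")
         if tex is not None:
             tex.refresh(True)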

bge.texture.DeckLink(cardIdx=0, format=""):

   Use this object to send video frames to a DeckLink card. Only the immediate mode
   is supported; the scheduled mode is not implemented.
   This object is similar to bge.texture.Texture: you need to attach an image source
   and call refresh() to compute and send the frame to the card.
   This object is best suited for video keying: a video stream (not captured) flows
   through the card and the frames you send to the card are displayed on top of it (the
   card does the compositing automatically based on the alpha channel).
   At the time of this commit, 3D video keying is supported in the BGE but not in the
   DeckLink card due to a color space issue.
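
   A minimal keying sketch based on the description above; the 'HD1080i50' display
   mode, the ImageViewport source and the logic setup are assumptions for illustration:

   python:
     import bge
     from bge import texture

     def start_output(cont):
         # Open a DeckLink output in immediate mode and key the viewport over the input feed (sketch).
         obj = cont.owner
         # Card 0 and 'HD1080i50' are assumptions; when keying, the format must match the input feed.
         dl = texture.DeckLink(0, "HD1080i50")
         dl.source = texture.ImageViewport()  # any bge.texture image source works
         dl.keying = True                     # composite with the video feed passing through the card
         dl.level = 255                       # keep the source alpha channel unmodified
         obj["decklink"] = dl

     def update(cont):
         # Send a fresh frame to the card every logic frame.
         dl = cont.owner.get("decklink")
         if dl is not None:
             dl.refresh(True)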
2016-06-10 10:09:26 +02:00
0120e30406 Cleanup: formatting, also add header guards 2016-06-10 09:00:59 +10:00
88de57a9e1 BL_Shader.setUniformEyef(name)
defines a uniform that reflects the eye being rendered in stereo mode:
    0.0 for the left eye, 0.5 for the right eye.
    In non-stereo mode, the value of the uniform is fixed to 0.0.
    The typical use of this uniform is in stereo mode to sample stereo textures
    containing the left and right eye images in a top-bottom order.

    python:
      shader = obj.meshes[0].materials[mat].getShader()
      shader.setUniformEyef("eye")

    shader:
      uniform float eye;
      uniform sampler2D tex;
      void main(void)
      {
         vec4 color;
         float ty, tx;
         tx = gl_TexCoord[0].x;
         ty = eye+gl_TexCoord[0].y*0.5;
         // ty will be between 0 and 0.5 for the left eye render
         // and 0.5 and 1.0 for the right eye render.
         color = texture(tex, vec2(tx, ty));
         ...
      }
2016-06-10 00:28:19 +02:00
901d33ed0c Atomic ops: Fix atomic_add_uint32 and atomic_sub_uint32 in Windows
The assembler version in Windows used to return the previous value
    of the variable while all the other versions return the new value.
    This is now fixed for consistency.
    Note: this bug had no effect on Blender because no part of the code
    uses the return value of these functions, but the future BGE DeckLink
    module makes use of it to implement a reference counter.
2016-06-10 00:00:33 +02:00
8089876804 BGE: Various render improvements.
bge.logic.setRender(flag) to enable/disable render.
    The render pass is enabled by default but it can be disabled with
    bge.logic.setRender(False).
    Once disabled, the render pass is skipped and a new logic frame starts
    immediately. Note that VSync no longer limits the fps when render is off
    but the 'Use Frame Rate' option in the Render Properties still does.
    To run as many frames as possible, untick the option.
    This function is useful when you don't need the default render, e.g.
    when doing offscreen render to a device other than the monitor.
    Note that without VSync, you must limit the frame rate by other means.
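
    A small sketch of the flag in use, assuming the 'Use Frame Rate' option has been
    unticked; the frame-counting property is just an illustration:

    python:
      import bge

      def headless_logic(cont):
          # Skip the on-screen render while doing offscreen work (sketch).
          frame = cont.owner.get("frame", 0)
          # Disable the default render pass; the next logic frame starts immediately.
          bge.logic.setRender(False)
          # Re-enable the render occasionally, e.g. every 100th frame, to keep a preview on screen.
          if frame % 100 == 0:
              bge.logic.setRender(True)
          cont.owner["frame"] = frame + 1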

fbo = bge.render.offScreenCreate(width, height[, samples=0][, target=bge.render.RAS_OFS_RENDER_BUFFER])
    Use this method to create an offscreen buffer of given size, with given MSAA
    samples and targeting either a render buffer (bge.render.RAS_OFS_RENDER_BUFFER)
    or a texture (bge.render.RAS_OFS_RENDER_TEXTURE). Use the former if you want to
    retrieve the frame buffer on the host and the latter if you want to pass the render
    to another context (textures are proper OGL objects, render buffers aren't).
    The object created by this function can only be used as a parameter of the
    bge.texture.ImageRender() constructor to send the render to the FBO rather
    than to the frame buffer. This is best suited when you want to create a render
    of specific size, or if you need an image with an alpha channel.
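
    A short sketch of creating an FBO and rendering into it; the 1920x1080 size, the
    sample count and the use of the active camera are assumptions for illustration:

    python:
      import bge

      def make_offscreen_render(cont):
          # Create an offscreen buffer and an ImageRender that targets it (sketch).
          scene = bge.logic.getCurrentScene()
          # Size and MSAA sample count are arbitrary for this example.
          fbo = bge.render.offScreenCreate(1920, 1080, 4, bge.render.RAS_OFS_RENDER_BUFFER)
          # Render the active camera's view into the FBO instead of the viewport.
          img = bge.texture.ImageRender(scene, scene.active_camera, fbo)
          cont.owner["offscreen"] = img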

bge.texture.<imagetype>.refresh(buffer=None, format="RGBA", ts=-1.0)
    Without arguments, the refresh method of the image objects is pretty much a no-op: it
    simply invalidates the image so that on the next texture refresh, the image will
    be recalculated.
    It is now possible to pass an optional buffer object to transfer the image (and
    recalculate it if it was invalid) to an external object. The object must implement
    the 'buffer protocol'. The image will be transferred as "RGBA" or "BGRA" pixels
    depending on the format argument (only those 2 formats are supported) and ts is an
    optional timestamp if the image depends on it (e.g. VideoFFmpeg playing a video file).
    With this function you no longer need to link the image object to a Texture
    object to use it: the image object is self-sufficient.
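
    A sketch of pulling pixels straight into a Python buffer, assuming an ImageViewport
    source; a bytearray of 4 bytes per pixel satisfies the buffer protocol:

    python:
      import bge

      def grab_viewport(cont):
          # Copy the current viewport into a bytearray without using a Texture object (sketch).
          img = bge.texture.ImageViewport()
          width, height = img.size
          pixels = bytearray(width * height * 4)  # 4 bytes per pixel for "RGBA"
          img.refresh(pixels, "RGBA")
          return pixels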

bge.texture.ImageRender(scene, camera, fbo=None)
    Render to buffer is possible by passing a FBO object (see offScreenCreate).

bge.texture.ImageRender.render()
    Allows asynchronous render: call this method to render the scene but without
    extracting the pixels yet. The function returns as soon as the render commands
    have been sent to the GPU. The render will proceed asynchronously on the GPU
    while the host can perform other tasks.
    To complete the render, you can either call refresh() directly or refresh the texture
    to which this object is the source. Asynchronous render is useful to achieve optimal
    performance: call render() on frame N and refresh() on frame N+1 to give as much
    time as possible to the GPU to render the frame while the game engine can perform other tasks.
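
    A sketch of the two-frame pattern described above; storing the ImageRender object
    and the pending flag on the owning game object is an assumption for illustration:

    python:
      import bge

      def async_render(cont):
          # Alternate render() and refresh() across frames for asynchronous offscreen render (sketch).
          obj = cont.owner
          img = obj["offscreen"]              # an ImageRender created earlier (e.g. with an FBO)
          if not obj.get("pending", False):
              # Frame N: queue the render commands and return immediately.
              obj["pending"] = img.render()
          else:
              # Frame N+1: wait for the GPU to finish and invalidate the image.
              img.refresh()
              obj["pending"] = False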

Support negative scale on camera.
    Camera scale was previously ignored in the BGE.
    It is now injected in the modelview matrix as a vertical or horizontal flip
    of the scene (respectively if scaleY<0 and scaleX<0).
    Note that the actual value of the scale is not used, only the sign.
    This makes it possible to flip the image produced by ImageRender() without any
    performance degradation: the flip is integrated in the render itself.

Optimized image transfer from ImageRender to buffer.
    Previously, images that were transferred to the host always went through
    buffers in VideoTexture. It is now possible to transfer ImageRender
    images to an external buffer without an intermediate copy (i.e. directly from OGL to the buffer)
    if the attributes of the ImageRender object are set as follows:
       flip=False, alpha=True, scale=False, depth=False, zbuff=False.
       (if you need to flip the image, use camera negative scale)
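
    A sketch of the direct OGL-to-buffer settings listed above; the use of the active
    camera and the buffer sizing are assumptions for illustration:

    python:
      import bge

      def fast_grab(cont):
          # Transfer an ImageRender frame directly to an external buffer (sketch).
          scene = bge.logic.getCurrentScene()
          img = bge.texture.ImageRender(scene, scene.active_camera)
          # Settings required for the direct OGL-to-buffer path.
          img.flip = False
          img.alpha = True
          img.scale = False
          img.depth = False
          img.zbuff = False
          # If the image needs flipping, give the camera a negative vertical scale instead
          # (only the sign of the scale matters).
          width, height = img.size
          pixels = bytearray(width * height * 4)
          img.refresh(pixels, "RGBA")
          return pixels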
2016-06-09 23:56:45 +02:00
72 changed files with 21279 additions and 181 deletions

View File

@@ -229,6 +229,7 @@ option(WITH_BULLET "Enable Bullet (Physics Engine)" ON)
option(WITH_SYSTEM_BULLET "Use the systems bullet library (currently unsupported due to missing features in upstream!)" )
mark_as_advanced(WITH_SYSTEM_BULLET)
option(WITH_GAMEENGINE "Enable Game Engine" ${_init_GAMEENGINE})
option(WITH_GAMEENGINE_DECKLINK "Support BlackMagicDesign DeckLink cards in the Game Engine" ON)
option(WITH_PLAYER "Build Player" OFF)
option(WITH_OPENCOLORIO "Enable OpenColorIO color management" ${_init_OPENCOLORIO})

View File

@@ -685,6 +685,14 @@ function(SETUP_BLENDER_SORTED_LIBS)
list_insert_after(BLENDER_SORTED_LIBS "ge_logic_ngnetwork" "extern_bullet")
endif()
if(WITH_GAMEENGINE_DECKLINK)
list(APPEND BLENDER_SORTED_LIBS bf_intern_decklink)
endif()
if(WIN32)
list(APPEND BLENDER_SORTED_LIBS bf_intern_gpudirect)
endif()
if(WITH_OPENSUBDIV)
list(APPEND BLENDER_SORTED_LIBS bf_intern_opensubdiv)
endif()

View File

@@ -0,0 +1,237 @@
"""
Video Capture with DeckLink
+++++++++++++++++++++++++++
Video frames captured with DeckLink cards have pixel formats that are generally not directly
usable by OpenGL; they must be processed by a shader. The three shaders presented here should
cover all common video capture cases.
This file reflects the current video transfer method implemented in the Decklink module:
whenever possible the video images are transferred as float texture because this is more
compatible with GPUs. Of course, only the pixel formats that have a corresponding GL format
can be transferred as float. Look for fg_shaders in this file for an exhaustive list.
Other pixel formats will be transferred as a 32-bit integer red-channel texture, but this
won't work on certain GPUs (Intel GMA); the corresponding shaders are not shown here.
However, it should not be necessary to use any of them as the list below covers all practical
cases of video capture with all types of Decklink product.
In other words, only use one of the pixel formats below and you will be fine. Note that depending
on the video stream, only certain pixel formats will be allowed (others will throw an exception).
For example, to capture a PAL video stream, you must use one of the YUV formats.
To find which pixel format is suitable for a particular video stream, use the 'Media Express'
utility that comes with the DeckLink software: if you see the video in the 'Log and Capture'
Window, you have selected the right pixel format and you can use the same in Blender.
Notes: * these shaders only decode the RGB channel and set the alpha channel to a fixed
value (look for color.a = ). It's up to you to add postprocessing to the color.
* these shaders are compatible with 2D and 3D video streams
"""
import bge
from bge import logic
from bge import texture as vt
# The default vertex shader, because we need one
#
VertexShader = """
#version 130
void main()
{
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
gl_TexCoord[0] = gl_MultiTexCoord0;
}
"""
# For use with RGB video stream: the pixel is directly usable
#
FragmentShader_R10l = """
#version 130
uniform sampler2D tex;
// stereo = 1.0 if 2D image, =0.5 if 3D (left eye below, right eye above)
uniform float stereo;
// eye = 0.0 for the left eye, 0.5 for the right eye
uniform float eye;
void main(void)
{
vec4 color;
float tx, ty;
tx = gl_TexCoord[0].x;
ty = eye+gl_TexCoord[0].y*stereo;
color = texture(tex, vec2(tx,ty));
color.a = 0.7;
gl_FragColor = color;
}
"""
# For use with YUV video stream
#
FragmentShader_2vuy = """
#version 130
uniform sampler2D tex;
// stereo = 1.0 if 2D image, =0.5 if 3D (left eye below, right eye above)
uniform float stereo;
// eye = 0.0 for the left eye, 0.5 for the right eye
uniform float eye;
void main(void)
{
vec4 color;
float tx, ty, width, Y, Cb, Cr;
int px;
tx = gl_TexCoord[0].x;
ty = eye+gl_TexCoord[0].y*stereo;
width = float(textureSize(tex, 0).x);
color = texture(tex, vec2(tx, ty));
px = int(floor(fract(tx*width)*2.0));
switch (px) {
case 0:
Y = color.g;
break;
case 1:
Y = color.a;
break;
}
Y = (Y - 0.0625) * 1.168949772;
Cb = (color.b - 0.0625) * 1.142857143 - 0.5;
Cr = (color.r - 0.0625) * 1.142857143 - 0.5;
color.r = Y + 1.5748 * Cr;
color.g = Y - 0.1873 * Cb - 0.4681 * Cr;
color.b = Y + 1.8556 * Cb;
color.a = 0.7;
gl_FragColor = color;
}
"""
# For use with high resolution YUV
#
FragmentShader_v210 = """
#version 130
uniform sampler2D tex;
// stereo = 1.0 if 2D image, =0.5 if 3D (left eye below, right eye above)
uniform float stereo;
// eye = 0.0 for the left eye, 0.5 for the right eye
uniform float eye;
void main(void)
{
vec4 color, color1, color2, color3;
int px;
float tx, ty, width, sx, dx, bx, Y, Cb, Cr;
tx = gl_TexCoord[0].x;
ty = eye+gl_TexCoord[0].y*stereo;
width = float(textureSize(tex, 0).x);
// to sample macro pixels (6 pixels in 4 words)
sx = tx*width*0.25+0.01;
// index of display pixel in the macro pixel 0..5
px = int(floor(fract(sx)*6.0));
// increment as we sample the macro pixel
dx = 1.0/width;
// base x coord of macro pixel
bx = (floor(sx)+0.01)*dx*4.0;
color = texture(tex, vec2(bx, ty));
color1 = texture(tex, vec2(bx+dx, ty));
color2 = texture(tex, vec2(bx+dx*2.0, ty));
color3 = texture(tex, vec2(bx+dx*3.0, ty));
switch (px) {
case 0:
case 1:
Cb = color.b;
Cr = color.r;
break;
case 2:
case 3:
Cb = color1.g;
Cr = color2.b;
break;
default:
Cb = color2.r;
Cr = color3.g;
break;
}
switch (px) {
case 0:
Y = color.g;
break;
case 1:
Y = color1.b;
break;
case 2:
Y = color1.r;
break;
case 3:
Y = color2.g;
break;
case 4:
Y = color3.b;
break;
default:
Y = color3.r;
break;
}
Y = (Y - 0.0625) * 1.168949772;
Cb = (Cb - 0.0625) * 1.142857143 - 0.5;
Cr = (Cr - 0.0625) * 1.142857143 - 0.5;
color.r = Y + 1.5748 * Cr;
color.g = Y - 0.1873 * Cb - 0.4681 * Cr;
color.b = Y + 1.8556 * Cb;
color.a = 0.7;
gl_FragColor = color;
}
"""
# The exhaustive list of pixel formats that are transferred as float texture
# Only use those for greater efficiency and compatibility.
#
fg_shaders = {
'2vuy' :FragmentShader_2vuy,
'8BitYUV' :FragmentShader_2vuy,
'v210' :FragmentShader_v210,
'10BitYUV' :FragmentShader_v210,
'8BitBGRA' :FragmentShader_R10l,
'BGRA' :FragmentShader_R10l,
'8BitARGB' :FragmentShader_R10l,
'10BitRGBXLE':FragmentShader_R10l,
'R10l' :FragmentShader_R10l
}
#
# Helper function to attach a pixel shader to the material that receives the video frame.
#
def config_video(obj, format, pixel, is3D=False, mat=0, card=0):
    if pixel not in fg_shaders:
        raise ValueError('Unsupported shader')
    shader = obj.meshes[0].materials[mat].getShader()
    if shader is not None and not shader.isValid():
        shader.setSource(VertexShader, fg_shaders[pixel], True)
        shader.setSampler('tex', 0)
        shader.setUniformEyef("eye")
        shader.setUniform1f("stereo", 0.5 if is3D else 1.0)
    tex = vt.Texture(obj, mat)
    tex.source = vt.VideoDeckLink(format + "/" + pixel + ("/3D" if is3D else ""), card)
    print("frame rate: ", tex.source.framerate)
    tex.source.play()
    obj["video"] = tex
#
# Attach this function to an object that has a material with texture
# and call it once to initialize the object
#
def init(cont):
    # config_video(cont.owner, 'HD720p5994', '8BitBGRA')
    # config_video(cont.owner, 'HD720p5994', '8BitYUV')
    # config_video(cont.owner, 'pal ', '10BitYUV')
    config_video(cont.owner, 'pal ', '8BitYUV')
#
# To be called on every frame
#
def play(cont):
    obj = cont.owner
    video = obj.get("video")
    if video is not None:
        video.refresh(True)

View File

@@ -378,6 +378,28 @@ General functions
Render next frame (if Python has control)
.. function:: setRender(render)
Sets the global flag that controls the render of the scene.
If True, the render is done after the logic frame.
If False, the render is skipped and another logic frame starts immediately.
.. note::
GPU VSync no longer limits the number of frames per second when render is off,
but the *Use Frame Rate* option still regulates the fps. To run as many frames
as possible, untick this option (Render Properties, System panel).
:arg render: the render flag
:type render: bool
.. function:: getRender()
Get the current value of the global render flag
:return: The flag value
:rtype: bool
**********************
Time related functions
**********************

View File

@@ -90,6 +90,48 @@ Constants
Right eye being used during stereoscopic rendering.
.. data:: RAS_OFS_RENDER_BUFFER
The pixel buffer for offscreen render is a RenderBuffer. Argument to :func:`offScreenCreate`
.. data:: RAS_OFS_RENDER_TEXTURE
The pixel buffer for offscreen render is a Texture. Argument to :func:`offScreenCreate`
*****
Types
*****
.. class:: RASOffScreen
An off-screen render buffer object.
Use :func:`offScreenCreate` to create it.
Currently it can only be used in the :class:`bge.texture.ImageRender`
constructor to render on a FBO rather than the default viewport.
.. attribute:: width
The width in pixels of the FBO
:type: integer
.. attribute:: height
The height in pixels of the FBO
:type: integer
.. attribute:: color
The underlying OpenGL bind code of the texture object that holds
the rendered image, 0 if the FBO is using RenderBuffer.
The choice between RenderBuffer and Texture is determined
by the target argument of :func:`offScreenCreate`.
:type: integer
*********
Functions
@@ -362,3 +404,22 @@ Functions
Get the current vsync value
:rtype: One of VSYNC_OFF, VSYNC_ON, VSYNC_ADAPTIVE
.. function:: offScreenCreate(width,height[,samples=0][,target=bge.render.RAS_OFS_RENDER_BUFFER])
Create an off-screen render buffer object.
:arg width: the width of the buffer in pixels
:type width: integer
:arg height: the height of the buffer in pixels
:type height: integer
:arg samples: the number of multisamples for anti-aliasing (MSAA), 0 to disable MSAA
:type samples: integer
:arg target: the pixel storage: :data:`RAS_OFS_RENDER_BUFFER` to render on RenderBuffers (the default),
:data:`RAS_OFS_RENDER_TEXTURE` to render on texture.
The latter is useful if you want to access the texture directly (see :attr:`RASOffScreen.color`).
Otherwise the default is preferable as it's more widely supported by GPUs and more efficient.
If the GPU does not support MSAA+Texture (e.g. Intel HD GPU), MSAA will be disabled.
:type target: integer
:rtype: :class:`RASOffScreen`

View File

@@ -49,8 +49,15 @@ When the texture object is deleted, the new texture is deleted and the old textu
.. literalinclude:: ../examples/bge.texture.1.py
:lines: 8-
.. include:: ../examples/bge.texture.2.py
:start-line: 1
:end-line: 6
.. literalinclude:: ../examples/bge.texture.2.py
:lines: 8-
*************
Video classes
*************
@@ -173,12 +180,20 @@ Video classes
:return: Whether the video was playing.
:rtype: bool
.. method:: refresh()
.. method:: refresh(buffer=None, format="RGBA", timestamp=-1.0)
Refresh video - get its status.
:value: see `FFmpeg Video and Image Status`_.
Refresh video - get its status and optionally copy the frame to an external buffer.
:arg buffer: An optional object that implements the buffer protocol.
If specified, the image is copied to the buffer, which must be big enough or an exception is thrown.
:type buffer: any buffer type
:arg format: An optional image format specifier for the image that will be copied to the buffer.
Only valid values are "RGBA" or "BGRA"
:type format: str
:arg timestamp: An optional timestamp (in seconds from the start of the movie)
of the frame to be copied to the buffer.
:type timestamp: float
:return: see `FFmpeg Video and Image Status`_.
:rtype: int
*************
@@ -244,12 +259,17 @@ Image classes
* :class:`FilterRGB24`
* :class:`FilterRGBA32`
.. method:: refresh()
.. method:: refresh(buffer=None, format="RGBA")
Refresh image, i.e. load it.
:value: see `FFmpeg Video and Image Status`_.
Refresh image, get its status and optionally copy the frame to an external buffer.
:arg buffer: An optional object that implements the buffer protocol.
If specified, the image is copied to the buffer, which must be big enough or an exception is thrown.
:type buffer: any buffer type
:arg format: An optional image format specifier for the image that will be copied to the buffer.
Only valid values are "RGBA" or "BGRA"
:type format: str
:return: see `FFmpeg Video and Image Status`_.
:rtype: int
.. method:: reload(newname=None)
@@ -411,9 +431,18 @@ Image classes
:type: :class:`~bgl.Buffer` or None
.. method:: refresh()
.. method:: refresh(buffer=None, format="RGBA")
Refresh image - invalidate its current content.
Refresh image - render and copy the image to an external buffer (optional)
then invalidate its current content.
:arg buffer: An optional object that implements the buffer protocol.
If specified, the image is rendered and copied to the buffer,
which must be big enough or an exception is thrown.
:type buffer: any buffer type
:arg format: An optional image format specifier for the image that will be copied to the buffer.
Only valid values are "RGBA" or "BGRA"
:type format: str
.. attribute:: scale
@@ -498,9 +527,17 @@ Image classes
:type: :class:`~bgl.Buffer` or None
.. method:: refresh()
.. method:: refresh(buffer=None, format="RGBA")
Refresh image - invalidate its current content.
Refresh image - calculate and copy the image to an external buffer (optional) then invalidate its current content.
:arg buffer: An optional object that implements the buffer protocol.
If specified, the image is calculated and copied to the buffer,
which must be big enough or an exception is thrown.
:type buffer: any buffer type
:arg format: An optional image format specifier for the image that will be copied to the buffer.
Only valid values are "RGBA" or "BGRA"
:type format: str
.. attribute:: scale
@@ -548,11 +585,15 @@ Image classes
.. class:: ImageRender(scene, camera)
Image source from render.
The render is done on a custom framebuffer object if fbo is specified,
otherwise on the default framebuffer.
:arg scene: Scene in which the image has to be taken.
:type scene: :class:`~bge.types.KX_Scene`
:arg camera: Camera from which the image has to be taken.
:type camera: :class:`~bge.types.KX_Camera`
:arg fbo: Off-screen render buffer object (optional)
:type fbo: :class:`~bge.render.RASOffScreen`
.. attribute:: alpha
@@ -596,13 +637,9 @@ Image classes
.. attribute:: image
Image data. (readonly)
:type: :class:`~bgl.Buffer` or None
.. method:: refresh()
Refresh image - invalidate its current content.
.. attribute:: scale
Fast scale of image (near neighbour).
@@ -640,6 +677,42 @@ Image classes
:type: bool
.. method:: render()
Render the scene but do not extract the pixels yet.
The function returns as soon as the render commands have been sent to the GPU.
The render will proceed asynchronously on the GPU while the host can perform other tasks.
To complete the render, you can either call :func:`refresh`
directly or refresh the texture of which this object is the source.
This method is useful to implement asynchronous render for optimal performance: call render()
on frame n and refresh() on frame n+1 to give as much time as possible to the GPU
to render the frame while the game engine can perform other tasks.
:return: True if the render was initiated, False if the render cannot be performed (e.g. the camera is active)
:rtype: bool
.. method:: refresh()
.. method:: refresh(buffer, format="RGBA")
Refresh image - render and optionally copy the image to an external buffer then invalidate its current content.
The render may have been started earlier with the :func:`render` method,
in which case this function simply waits for the render operations to complete.
When called without argument, the pixels are not extracted but the render is guaranteed
to be completed when the function returns.
This only makes sense with offscreen render on texture target (see :func:`~bge.render.offScreenCreate`).
:arg buffer: An object that implements the buffer protocol.
If specified, the image is copied to the buffer, which must be big enough or an exception is thrown.
The transfer to the buffer is optimal if no processing of the image is needed.
This is the case if ``flip=False, alpha=True, scale=False, whole=True, depth=False, zbuff=False``
and no filter is set.
:type buffer: any buffer type of sufficient size
:arg format: An optional image format specifier for the image that will be copied to the buffer.
Only valid values are "RGBA" or "BGRA"
:type format: str
:return: True if the render is complete, False if the render cannot be performed (e.g. the camera is active)
:rtype: bool
.. class:: ImageViewport
Image source from viewport.
@@ -689,9 +762,19 @@ Image classes
:type: sequence of two ints
.. method:: refresh()
.. method:: refresh(buffer=None, format="RGBA")
Refresh image - invalidate its current content.
Refresh image - copy the viewport to an external buffer (optional) then invalidate its current content.
:arg buffer: An optional object that implements the buffer protocol.
If specified, the image is copied to the buffer, which must be big enough or an exception is thrown.
The transfer to the buffer is optimal if no processing of the image is needed.
This is the case if ``flip=False, alpha=True, scale=False, whole=True, depth=False, zbuff=False``
and no filter is set.
:type buffer: any buffer type
:arg format: An optional image format specifier for the image that will be copied to the buffer.
Only valid values are "RGBA" or "BGRA"
:type format: str
.. attribute:: scale
@@ -730,7 +813,184 @@ Image classes
:type: bool
.. class:: VideoDeckLink(format, capture=0)
Image source from an external video stream captured with a DeckLink video card from
Black Magic Design.
Before this source can be used, a DeckLink hardware device must be installed; it can be a PCIe card
or a USB device, and the 'Desktop Video' software package (version 10.4 or above) must be installed
on the host as described in the DeckLink documentation.
If in addition you have a recent nVidia Quadro card, you can benefit from the 'GPUDirect' technology
to push the captured video frame very efficiently to the GPU. For this you need to install the
'DeckLink SDK' version 10.4 or above and copy the 'dvp.dll' runtime library to Blender's
installation directory or to any other place where Blender can load a DLL from.
:arg format: string describing the video format to be captured.
:type format: str
:arg capture: Card number from which the input video must be captured.
:type capture: int
The format argument must be written as ``<displayMode>/<pixelFormat>[/3D][:<cacheSize>]`` where ``<displayMode>``
describes the frame size and rate and ``<pixelFormat>`` the encoding of the pixels.
The optional ``/3D`` suffix is to be used if the video stream is stereo with a left and right eye feed.
The optional ``:<cacheSize>`` suffix determines the number of the video frames kept in cache, by default 8.
Some DeckLink cards won't work below a certain cache size.
The default value 8 should be sufficient for all cards.
You may try to reduce the cache size to reduce the memory footprint. For example, the 4K Extreme is known
to work with 3 frames only, the Extreme 2 needs 4 frames and the Intensity Shuttle needs 6 frames, etc.
Reducing the cache size may be useful when Decklink is used in conjunction with GPUDirect:
all frames must be locked in memory in that case and that puts a lot of pressure on memory.
If you reduce the cache size too much,
you'll get no error but no video feed either.
The valid ``<displayMode>`` values are copied from the ``BMDDisplayMode`` enum in the DeckLink API
without the 'bmdMode' prefix. In case a mode that is not in this list is added in a later version
of the SDK, it is also possible to specify the 4 letters of the internal code for that mode.
You will find the internal code in the ``DeckLinkAPIModes.h`` file that is part of the SDK.
Here is for reference the full list of supported display modes with their equivalent internal code:
Internal Codes
- NTSC 'ntsc'
- NTSC2398 'nt23'
- PAL 'pal '
- NTSCp 'ntsp'
- PALp 'palp'
HD 1080 Modes
- HD1080p2398 '23ps'
- HD1080p24 '24ps'
- HD1080p25 'Hp25'
- HD1080p2997 'Hp29'
- HD1080p30 'Hp30'
- HD1080i50 'Hi50'
- HD1080i5994 'Hi59'
- HD1080i6000 'Hi60'
- HD1080p50 'Hp50'
- HD1080p5994 'Hp59'
- HD1080p6000 'Hp60'
HD 720 Modes
- HD720p50 'hp50'
- HD720p5994 'hp59'
- HD720p60 'hp60'
2k Modes
- 2k2398 '2k23'
- 2k24 '2k24'
- 2k25 '2k25'
4k Modes
- 4K2160p2398 '4k23'
- 4K2160p24 '4k24'
- 4K2160p25 '4k25'
- 4K2160p2997 '4k29'
- 4K2160p30 '4k30'
- 4K2160p50 '4k50'
- 4K2160p5994 '4k59'
- 4K2160p60 '4k60'
Most of the names are self-explanatory. If necessary, refer to the DeckLink API documentation for more information.
Similarly, <pixelFormat> is copied from the BMDPixelFormat enum.
Here is for reference the full list of supported pixel formats and their equivalent internal codes:
Pixel Formats
- 8BitYUV '2vuy'
- 10BitYUV 'v210'
- 8BitARGB * no equivalent code *
- 8BitBGRA 'BGRA'
- 10BitRGB 'r210'
- 12BitRGB 'R12B'
- 12BitRGBLE 'R12L'
- 10BitRGBXLE 'R10l'
- 10BitRGBX 'R10b'
Refer to the DeckLink SDK documentation for a full description of these pixel formats.
It is important to understand them as the decoding of the pixels is NOT done in VideoTexture
for performance reasons. Instead a specific shader must be used to decode the pixels on the GPU.
Only the '8BitARGB', '8BitBGRA' and '10BitRGBXLE' pixel formats are mapped directly to OpenGL RGB float textures.
The '8BitYUV' and '10BitYUV' pixel formats are mapped to OpenGL RGB float textures but require a shader to decode.
The other pixel formats are sent as a ``GL_RED_INTEGER`` texture (i.e. a texture with only the
red channel coded as an unsigned 32 bit integer) and are not recommended for use.
Example: ``HD1080p24/10BitYUV/3D:4`` is equivalent to ``24ps/v210/3D:4``
and represents a full HD stereo feed at 24 frames per second with a 4-frame cache size.
Although video format auto detection is possible with certain DeckLink devices, the corresponding
API is NOT implemented in the BGE. Therefore it is important to specify the format string that
matches exactly the video feed. If the format is wrong, no frame will be captured.
It should be noted that the pixel format that you need to specify is not necessarily the actual
format in the video feed. For example, the 4K Extreme card delivers 8bit RGBs pixels in the
'10BitRGBXLE' format. Use the 'Media Express' application included in 'Desktop Video' to discover
which pixel format works for a particular video stream.
.. attribute:: status
Status of the capture: 1=ready to use, 2=capturing, 3=stopped
:type: int
.. attribute:: framerate
Capture frame rate as computed from the video format.
:type: float
.. attribute:: valid
Tells if the image attribute can be used to retrieve the image.
Always False in this implementation (the image is not available at the Python level)
:type: bool
.. attribute:: image
The image data. Always None in this implementation.
:type: :class:`~bgl.Buffer` or None
.. attribute:: size
The size of the frame in pixels.
Stereo frames have double the height of the video frame, i.e. 3D is delivered to the GPU
as a single image in top-bottom order, left eye on top.
:type: (int,int)
.. attribute:: scale
Not used in this object.
:type: bool
.. attribute:: flip
Not used in this object.
:type: bool
.. attribute:: filter
Not used in this object.
.. method:: play()
Kick-off the capture after creation of the object.
:return: True if the capture could be started, False otherwise.
:rtype: bool
.. method:: pause()
Temporarily stops the capture. Use play() to restart it.
:return: True if the capture could be paused, False otherwise.
:rtype: bool
.. method:: stop()
Stops the capture.
:return: True if the capture could be stopped, False otherwise.
:rtype: bool
***************
Texture classes
***************
@@ -782,6 +1042,7 @@ Texture classes
:type: one of...
* :class:`VideoFFmpeg`
* :class:`VideoDeckLink`
* :class:`ImageFFmpeg`
* :class:`ImageBuff`
* :class:`ImageMirror`
@@ -789,7 +1050,129 @@ Texture classes
* :class:`ImageRender`
* :class:`ImageViewport`
.. class:: DeckLink(cardIdx=0, format="")
Certain DeckLink devices can be used to play back video: the host sends video frames regularly
for immediate or scheduled playback. The video feed is output on HDMI or SDI interfaces.
This class supports the immediate playback mode: it has a source attribute that is assigned
one of the source objects in the bge.texture module. Refreshing the DeckLink object causes
the image source to be computed and sent to the DeckLink device for immediate transmission
on the output interfaces. Keying is supported: it allows compositing the frame with an
input video feed that transits through the DeckLink card.
:arg cardIdx: Number of the card to be used for output (0=first card).
It should be noted that DeckLink devices are usually half duplex:
they can either be used for capture or playback but not both at the same time.
:type cardIdx: int
:arg format: String representing the display mode of the output feed.
:type format: str
The default value of the format argument is reserved for auto detection but it is currently
not supported (it will generate a runtime error) and thus the video format must be explicitly
specified. If keying is the goal (see keying attributes), the format must match exactly the
input video feed, otherwise it can be any format supported by the device (there will be a
runtime error if not).
The format of the string is ``<displayMode>[/3D]``.
Refer to :class:`VideoDeckLink` to get the list of acceptable ``<displayMode>``.
The optional ``/3D`` suffix is used to create a stereo 3D feed.
In that case the 'right' attribute must also be set to specify the image source for the right eye.
Note: The pixel format is not specified here because it is always BGRA. The alpha channel is
used in keying to mix the source with the input video feed, otherwise it is not used.
If a conversion is needed to match the native video format, it is done inside the DeckLink driver
or device.
.. attribute:: source
This attribute must be set to one of the image sources. If the image size does not fit exactly
the frame size, the extend attribute determines what to do.
For best performance, the source image should match exactly the size of the output frame.
A further optimization is achieved if the image source object is ImageViewport or ImageRender
set for whole viewport, flip disabled and no filter: the GL frame buffer is copied directly
to the image buffer and directly from there to the DeckLink card (hence no buffer to buffer
copy inside VideoTexture).
:type: one of...
- :class:`VideoFFmpeg`
- :class:`VideoDeckLink`
- :class:`ImageFFmpeg`
- :class:`ImageBuff`
- :class:`ImageMirror`
- :class:`ImageMix`
- :class:`ImageRender`
- :class:`ImageViewport`
.. attribute:: right
If the video format is stereo 3D, this attribute should be set to an image source object
that will produce the right eye images. If the goal is to render the BGE scene in 3D,
it can be achieved with 2 cameras, one for each eye, used by 2 ImageRender with an offscreen
render buffer that is just the size of the video frame.
:type: one of...
- :class:`VideoFFmpeg`
- :class:`VideoDeckLink`
- :class:`ImageFFmpeg`
- :class:`ImageBuff`
- :class:`ImageMirror`
- :class:`ImageMix`
- :class:`ImageRender`
- :class:`ImageViewport`
.. attribute:: keying
Specify if keying is enabled. False (default): the output frame is sent unmodified on
the output interface (in that case no input video is required). True: the output frame
is mixed with the input video, using the alpha channel to blend the two images and the
combination is sent on the output interface.
:type: bool
.. attribute:: level
If keying is enabled, sets the keying level from 0 to 255. This value is a global alpha value
that multiplies the alpha channel of the image source. Use 255 (the default) to keep the alpha
channel unmodified, 0 to make the output frame totally transparent.
:type: int
.. attribute:: extend
Determines how the image source should be mapped if the size does not fit the video frame size.
* False (the default): map the image pixel by pixel.
If the image size is smaller than the frame size, extra space around the image is filled with
0-alpha black. If it is larger, the image is cropped to fit the frame size.
* True: the image is scaled by the nearest neighbor algorithm to fit the frame size.
The scaling is fast but poor quality. For best results, always adjust the image source to
match the size of the output video.
:type: bool
.. method:: close()
Close the DeckLink device and release all resources. After calling this method,
the object cannot be reactivated; it must be destroyed and a new DeckLink object
created from scratch to restart the output.
.. method:: refresh(refresh_source,ts)
This method must be called frequently to update the output frame in the DeckLink device.
:arg refresh_source: True if the source objects image buffer should be invalidated after being
used to compute the output frame. This triggers the recomputing of the
source image on next refresh, which is normally the desired effect.
False if the image source buffer should stay valid and reused on next refresh.
Note that the DeckLink device stores the output frame and replays it until a
new frame is sent from the host. Thus, it is not necessary to refresh the
DeckLink object if it is known that the image source has not changed.
:type refresh_source: bool
:arg ts: The timestamp value passed to the image source object to compute the image.
If unspecified, the BGE clock is used.
:type ts: float
**************
Filter classes
**************

View File

@@ -214,6 +214,16 @@ base class --- :class:`PyObjectPlus`
:arg iList: a list (2, 3 or 4 elements) of integer values
:type iList: list[integer]
.. method:: setUniformEyef(name)
Set a uniform with a float value that reflects the eye being rendered in stereo mode:
0.0 for the left eye, 0.5 for the right eye. In non-stereo mode, the value of the uniform
is fixed to 0.0. The typical use of this uniform is in stereo mode to sample stereo textures
containing the left and right eye images in a top-bottom order.
:arg name: the uniform name
:type name: string
.. method:: validate()
Validate the shader object.

View File

@@ -34,6 +34,10 @@ add_subdirectory(mikktspace)
add_subdirectory(glew-mx)
add_subdirectory(eigen)
if (WITH_GAMEENGINE_DECKLINK)
add_subdirectory(decklink)
endif()
if(WITH_AUDASPACE)
add_subdirectory(audaspace)
endif()
@@ -79,8 +83,10 @@ if(WITH_OPENSUBDIV)
endif()
# only windows needs utf16 converter
# gpudirect is a runtime interface to nVidia's DVP driver, only for windows
if(WIN32)
add_subdirectory(utfconv)
add_subdirectory(gpudirect)
endif()
if(WITH_OPENVDB)

View File

@@ -129,23 +129,24 @@ ATOMIC_INLINE uint32_t atomic_cas_uint32(uint32_t *v, uint32_t old, uint32_t _ne
#elif (defined(__i386__) || defined(__amd64__) || defined(__x86_64__))
ATOMIC_INLINE uint32_t atomic_add_uint32(uint32_t *p, uint32_t x)
{
uint32_t ret = x;
asm volatile (
"lock; xaddl %0, %1;"
: "+r" (x), "=m" (*p) /* Outputs. */
: "+r" (ret), "=m" (*p) /* Outputs. */
: "m" (*p) /* Inputs. */
);
return x;
return ret+x;
}
ATOMIC_INLINE uint32_t atomic_sub_uint32(uint32_t *p, uint32_t x)
{
x = (uint32_t)(-(int32_t)x);
ret = (uint32_t)(-(int32_t)x);
asm volatile (
"lock; xaddl %0, %1;"
: "+r" (x), "=m" (*p) /* Outputs. */
: "+r" (ret), "=m" (*p) /* Outputs. */
: "m" (*p) /* Inputs. */
);
return x;
return ret-x;
}
ATOMIC_INLINE uint32_t atomic_cas_uint32(uint32_t *v, uint32_t old, uint32_t _new)

View File

@@ -0,0 +1,58 @@
# ***** BEGIN GPL LICENSE BLOCK *****
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
#
# The Original Code is Copyright (C) 2015, Blender Foundation
# All rights reserved.
#
# The Original Code is: all of this file.
#
# Contributor(s): Blender Foundation.
#
# ***** END GPL LICENSE BLOCK *****
set(INC
)
set(INC_SYS
)
set(SRC
DeckLinkAPI.cpp
DeckLinkAPI.h
)
if (WIN32)
list(APPEND SRC
win/DeckLinkAPI_h.h
win/DeckLinkAPI_i.c
)
endif()
if (UNIX AND NOT APPLE)
list(APPEND SRC
linux/DeckLinkAPI.h
linux/DeckLinkAPIConfiguration.h
linux/DeckLinkAPIDeckControl.h
linux/DeckLinkAPIDiscovery.h
linux/DeckLinkAPIDispatch.cpp
linux/DeckLinkAPIModes.h
linux/DeckLinkAPIVersion.h
linux/DeckLinkAPITypes.h
linux/LinuxCOM.h
)
endif()
blender_add_lib(bf_intern_decklink "${SRC}" "${INC}" "${INC_SYS}")

View File

@@ -0,0 +1,50 @@
/*
* ***** BEGIN GPL LICENSE BLOCK *****
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
* of the License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software Foundation,
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* The Original Code is Copyright (C) 2015, Blender Foundation
* All rights reserved.
*
* The Original Code is: all of this file.
*
* Contributor(s): Blender Foundation.
*
* ***** END GPL LICENSE BLOCK *****
*/
/** \file decklink/DeckLinkAPI.cpp
* \ingroup decklink
*/
#include "DeckLinkAPI.h"
#ifdef WIN32
IDeckLinkIterator* BMD_CreateDeckLinkIterator(void)
{
HRESULT result;
IDeckLinkIterator* pDLIterator = NULL;
result = CoCreateInstance(CLSID_CDeckLinkIterator, NULL, CLSCTX_ALL, IID_IDeckLinkIterator, (void**)&pDLIterator);
if (FAILED(result))
return NULL;
return pDLIterator;
}
#else
IDeckLinkIterator* BMD_CreateDeckLinkIterator(void)
{
return CreateDeckLinkIteratorInstance();
}
#endif // WIN32

View File

@@ -0,0 +1,56 @@
/*
* ***** BEGIN GPL LICENSE BLOCK *****
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
* of the License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software Foundation,
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* The Original Code is Copyright (C) 2015, Blender Foundation
* All rights reserved.
*
* The Original Code is: all of this file.
*
* Contributor(s): Blender Foundation.
*
* ***** END GPL LICENSE BLOCK *****
*/
/** \file decklink/DeckLinkAPI.h
* \ingroup decklink
*/
#ifndef __DECKLINKAPI_H__
#define __DECKLINKAPI_H__
/* Include the OS specific DeckLink headers */
#ifdef WIN32
# include <windows.h>
# include <objbase.h>
# include <comutil.h>
# include "win/DeckLinkAPI_h.h"
typedef unsigned int dl_size_t;
#elif defined(__APPLE__)
# error "Decklink not supported in OSX"
#else
# include "linux/DeckLinkAPI.h"
/* Windows COM API uses BOOL, linux uses bool */
# define BOOL bool
typedef uint32_t dl_size_t;
#endif
/* OS independent function to get the device iterator */
IDeckLinkIterator* BMD_CreateDeckLinkIterator(void);
#endif /* __DECKLINKAPI_H__ */

View File

@@ -0,0 +1,767 @@
/* -LICENSE-START-
** Copyright (c) 2014 Blackmagic Design
**
** Permission is hereby granted, free of charge, to any person or organization
** obtaining a copy of the software and accompanying documentation covered by
** this license (the "Software") to use, reproduce, display, distribute,
** execute, and transmit the Software, and to prepare derivative works of the
** Software, and to permit third-parties to whom the Software is furnished to
** do so, all subject to the following:
**
** The copyright notices in the Software and this entire statement, including
** the above license grant, this restriction and the following disclaimer,
** must be included in all copies of the Software, in whole or in part, and
** all derivative works of the Software, unless such copies or derivative
** works are solely in the form of machine-executable object code generated by
** a source language processor.
**
** THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
** IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
** FITNESS FOR A PARTICULAR PURPOSE, TITLE AND NON-INFRINGEMENT. IN NO EVENT
** SHALL THE COPYRIGHT HOLDERS OR ANYONE DISTRIBUTING THE SOFTWARE BE LIABLE
** FOR ANY DAMAGES OR OTHER LIABILITY, WHETHER IN CONTRACT, TORT OR OTHERWISE,
** ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
** DEALINGS IN THE SOFTWARE.
** -LICENSE-END-
*/
#ifndef BMD_DECKLINKAPI_H
#define BMD_DECKLINKAPI_H
#ifndef BMD_CONST
#if defined(_MSC_VER)
#define BMD_CONST __declspec(selectany) static const
#else
#define BMD_CONST static const
#endif
#endif
/* DeckLink API */
#include <stdint.h>
#include "LinuxCOM.h"
#include "DeckLinkAPITypes.h"
#include "DeckLinkAPIModes.h"
#include "DeckLinkAPIDiscovery.h"
#include "DeckLinkAPIConfiguration.h"
#include "DeckLinkAPIDeckControl.h"
#define BLACKMAGIC_DECKLINK_API_MAGIC 1
// Type Declarations
// Interface ID Declarations
BMD_CONST REFIID IID_IDeckLinkVideoOutputCallback = /* 20AA5225-1958-47CB-820B-80A8D521A6EE */ {0x20,0xAA,0x52,0x25,0x19,0x58,0x47,0xCB,0x82,0x0B,0x80,0xA8,0xD5,0x21,0xA6,0xEE};
BMD_CONST REFIID IID_IDeckLinkInputCallback = /* DD04E5EC-7415-42AB-AE4A-E80C4DFC044A */ {0xDD,0x04,0xE5,0xEC,0x74,0x15,0x42,0xAB,0xAE,0x4A,0xE8,0x0C,0x4D,0xFC,0x04,0x4A};
BMD_CONST REFIID IID_IDeckLinkMemoryAllocator = /* B36EB6E7-9D29-4AA8-92EF-843B87A289E8 */ {0xB3,0x6E,0xB6,0xE7,0x9D,0x29,0x4A,0xA8,0x92,0xEF,0x84,0x3B,0x87,0xA2,0x89,0xE8};
BMD_CONST REFIID IID_IDeckLinkAudioOutputCallback = /* 403C681B-7F46-4A12-B993-2BB127084EE6 */ {0x40,0x3C,0x68,0x1B,0x7F,0x46,0x4A,0x12,0xB9,0x93,0x2B,0xB1,0x27,0x08,0x4E,0xE6};
BMD_CONST REFIID IID_IDeckLinkIterator = /* 50FB36CD-3063-4B73-BDBB-958087F2D8BA */ {0x50,0xFB,0x36,0xCD,0x30,0x63,0x4B,0x73,0xBD,0xBB,0x95,0x80,0x87,0xF2,0xD8,0xBA};
BMD_CONST REFIID IID_IDeckLinkAPIInformation = /* 7BEA3C68-730D-4322-AF34-8A7152B532A4 */ {0x7B,0xEA,0x3C,0x68,0x73,0x0D,0x43,0x22,0xAF,0x34,0x8A,0x71,0x52,0xB5,0x32,0xA4};
BMD_CONST REFIID IID_IDeckLinkOutput = /* CC5C8A6E-3F2F-4B3A-87EA-FD78AF300564 */ {0xCC,0x5C,0x8A,0x6E,0x3F,0x2F,0x4B,0x3A,0x87,0xEA,0xFD,0x78,0xAF,0x30,0x05,0x64};
BMD_CONST REFIID IID_IDeckLinkInput = /* AF22762B-DFAC-4846-AA79-FA8883560995 */ {0xAF,0x22,0x76,0x2B,0xDF,0xAC,0x48,0x46,0xAA,0x79,0xFA,0x88,0x83,0x56,0x09,0x95};
BMD_CONST REFIID IID_IDeckLinkVideoFrame = /* 3F716FE0-F023-4111-BE5D-EF4414C05B17 */ {0x3F,0x71,0x6F,0xE0,0xF0,0x23,0x41,0x11,0xBE,0x5D,0xEF,0x44,0x14,0xC0,0x5B,0x17};
BMD_CONST REFIID IID_IDeckLinkMutableVideoFrame = /* 69E2639F-40DA-4E19-B6F2-20ACE815C390 */ {0x69,0xE2,0x63,0x9F,0x40,0xDA,0x4E,0x19,0xB6,0xF2,0x20,0xAC,0xE8,0x15,0xC3,0x90};
BMD_CONST REFIID IID_IDeckLinkVideoFrame3DExtensions = /* DA0F7E4A-EDC7-48A8-9CDD-2DB51C729CD7 */ {0xDA,0x0F,0x7E,0x4A,0xED,0xC7,0x48,0xA8,0x9C,0xDD,0x2D,0xB5,0x1C,0x72,0x9C,0xD7};
BMD_CONST REFIID IID_IDeckLinkVideoInputFrame = /* 05CFE374-537C-4094-9A57-680525118F44 */ {0x05,0xCF,0xE3,0x74,0x53,0x7C,0x40,0x94,0x9A,0x57,0x68,0x05,0x25,0x11,0x8F,0x44};
BMD_CONST REFIID IID_IDeckLinkVideoFrameAncillary = /* 732E723C-D1A4-4E29-9E8E-4A88797A0004 */ {0x73,0x2E,0x72,0x3C,0xD1,0xA4,0x4E,0x29,0x9E,0x8E,0x4A,0x88,0x79,0x7A,0x00,0x04};
BMD_CONST REFIID IID_IDeckLinkAudioInputPacket = /* E43D5870-2894-11DE-8C30-0800200C9A66 */ {0xE4,0x3D,0x58,0x70,0x28,0x94,0x11,0xDE,0x8C,0x30,0x08,0x00,0x20,0x0C,0x9A,0x66};
BMD_CONST REFIID IID_IDeckLinkScreenPreviewCallback = /* B1D3F49A-85FE-4C5D-95C8-0B5D5DCCD438 */ {0xB1,0xD3,0xF4,0x9A,0x85,0xFE,0x4C,0x5D,0x95,0xC8,0x0B,0x5D,0x5D,0xCC,0xD4,0x38};
BMD_CONST REFIID IID_IDeckLinkGLScreenPreviewHelper = /* 504E2209-CAC7-4C1A-9FB4-C5BB6274D22F */ {0x50,0x4E,0x22,0x09,0xCA,0xC7,0x4C,0x1A,0x9F,0xB4,0xC5,0xBB,0x62,0x74,0xD2,0x2F};
BMD_CONST REFIID IID_IDeckLinkNotificationCallback = /* B002A1EC-070D-4288-8289-BD5D36E5FF0D */ {0xB0,0x02,0xA1,0xEC,0x07,0x0D,0x42,0x88,0x82,0x89,0xBD,0x5D,0x36,0xE5,0xFF,0x0D};
BMD_CONST REFIID IID_IDeckLinkNotification = /* 0A1FB207-E215-441B-9B19-6FA1575946C5 */ {0x0A,0x1F,0xB2,0x07,0xE2,0x15,0x44,0x1B,0x9B,0x19,0x6F,0xA1,0x57,0x59,0x46,0xC5};
BMD_CONST REFIID IID_IDeckLinkAttributes = /* ABC11843-D966-44CB-96E2-A1CB5D3135C4 */ {0xAB,0xC1,0x18,0x43,0xD9,0x66,0x44,0xCB,0x96,0xE2,0xA1,0xCB,0x5D,0x31,0x35,0xC4};
BMD_CONST REFIID IID_IDeckLinkKeyer = /* 89AFCAF5-65F8-421E-98F7-96FE5F5BFBA3 */ {0x89,0xAF,0xCA,0xF5,0x65,0xF8,0x42,0x1E,0x98,0xF7,0x96,0xFE,0x5F,0x5B,0xFB,0xA3};
BMD_CONST REFIID IID_IDeckLinkVideoConversion = /* 3BBCB8A2-DA2C-42D9-B5D8-88083644E99A */ {0x3B,0xBC,0xB8,0xA2,0xDA,0x2C,0x42,0xD9,0xB5,0xD8,0x88,0x08,0x36,0x44,0xE9,0x9A};
BMD_CONST REFIID IID_IDeckLinkDeviceNotificationCallback = /* 4997053B-0ADF-4CC8-AC70-7A50C4BE728F */ {0x49,0x97,0x05,0x3B,0x0A,0xDF,0x4C,0xC8,0xAC,0x70,0x7A,0x50,0xC4,0xBE,0x72,0x8F};
BMD_CONST REFIID IID_IDeckLinkDiscovery = /* CDBF631C-BC76-45FA-B44D-C55059BC6101 */ {0xCD,0xBF,0x63,0x1C,0xBC,0x76,0x45,0xFA,0xB4,0x4D,0xC5,0x50,0x59,0xBC,0x61,0x01};
/* Enum BMDVideoOutputFlags - Flags to control the output of ancillary data along with video. */
typedef uint32_t BMDVideoOutputFlags;
enum _BMDVideoOutputFlags {
bmdVideoOutputFlagDefault = 0,
bmdVideoOutputVANC = 1 << 0,
bmdVideoOutputVITC = 1 << 1,
bmdVideoOutputRP188 = 1 << 2,
bmdVideoOutputDualStream3D = 1 << 4
};
/* Enum BMDFrameFlags - Frame flags */
typedef uint32_t BMDFrameFlags;
enum _BMDFrameFlags {
bmdFrameFlagDefault = 0,
bmdFrameFlagFlipVertical = 1 << 0,
/* Flags that are applicable only to instances of IDeckLinkVideoInputFrame */
bmdFrameHasNoInputSource = 1 << 31
};
/* Enum BMDVideoInputFlags - Flags applicable to video input */
typedef uint32_t BMDVideoInputFlags;
enum _BMDVideoInputFlags {
bmdVideoInputFlagDefault = 0,
bmdVideoInputEnableFormatDetection = 1 << 0,
bmdVideoInputDualStream3D = 1 << 1
};
/* Enum BMDVideoInputFormatChangedEvents - Bitmask passed to the VideoInputFormatChanged notification to identify the properties of the input signal that have changed */
typedef uint32_t BMDVideoInputFormatChangedEvents;
enum _BMDVideoInputFormatChangedEvents {
bmdVideoInputDisplayModeChanged = 1 << 0,
bmdVideoInputFieldDominanceChanged = 1 << 1,
bmdVideoInputColorspaceChanged = 1 << 2
};
/* Enum BMDDetectedVideoInputFormatFlags - Flags passed to the VideoInputFormatChanged notification to describe the detected video input signal */
typedef uint32_t BMDDetectedVideoInputFormatFlags;
enum _BMDDetectedVideoInputFormatFlags {
bmdDetectedVideoInputYCbCr422 = 1 << 0,
bmdDetectedVideoInputRGB444 = 1 << 1,
bmdDetectedVideoInputDualStream3D = 1 << 2
};
/* Enum BMDDeckLinkCapturePassthroughMode - Enumerates whether the video output is electrically connected to the video input or if the clean switching mode is enabled */
typedef uint32_t BMDDeckLinkCapturePassthroughMode;
enum _BMDDeckLinkCapturePassthroughMode {
bmdDeckLinkCapturePassthroughModeDirect = /* 'pdir' */ 0x70646972,
bmdDeckLinkCapturePassthroughModeCleanSwitch = /* 'pcln' */ 0x70636C6E
};
/* Enum BMDOutputFrameCompletionResult - Frame Completion Callback */
typedef uint32_t BMDOutputFrameCompletionResult;
enum _BMDOutputFrameCompletionResult {
bmdOutputFrameCompleted,
bmdOutputFrameDisplayedLate,
bmdOutputFrameDropped,
bmdOutputFrameFlushed
};
/* Enum BMDReferenceStatus - GenLock input status */
typedef uint32_t BMDReferenceStatus;
enum _BMDReferenceStatus {
bmdReferenceNotSupportedByHardware = 1 << 0,
bmdReferenceLocked = 1 << 1
};
/* Enum BMDAudioSampleRate - Audio sample rates supported for output/input */
typedef uint32_t BMDAudioSampleRate;
enum _BMDAudioSampleRate {
bmdAudioSampleRate48kHz = 48000
};
/* Enum BMDAudioSampleType - Audio sample sizes supported for output/input */
typedef uint32_t BMDAudioSampleType;
enum _BMDAudioSampleType {
bmdAudioSampleType16bitInteger = 16,
bmdAudioSampleType32bitInteger = 32
};
/* Enum BMDAudioOutputStreamType - Audio output stream type */
typedef uint32_t BMDAudioOutputStreamType;
enum _BMDAudioOutputStreamType {
bmdAudioOutputStreamContinuous,
bmdAudioOutputStreamContinuousDontResample,
bmdAudioOutputStreamTimestamped
};
/* Enum BMDDisplayModeSupport - Output mode supported flags */
typedef uint32_t BMDDisplayModeSupport;
enum _BMDDisplayModeSupport {
bmdDisplayModeNotSupported = 0,
bmdDisplayModeSupported,
bmdDisplayModeSupportedWithConversion
};
/* Enum BMDTimecodeFormat - Timecode formats for frame metadata */
typedef uint32_t BMDTimecodeFormat;
enum _BMDTimecodeFormat {
bmdTimecodeRP188VITC1 = /* 'rpv1' */ 0x72707631, // RP188 timecode where DBB1 equals VITC1 (line 9)
bmdTimecodeRP188VITC2 = /* 'rp12' */ 0x72703132, // RP188 timecode where DBB1 equals VITC2 (line 9 for progressive or line 571 for interlaced/PsF)
bmdTimecodeRP188LTC = /* 'rplt' */ 0x72706C74, // RP188 timecode where DBB1 equals LTC (line 10)
bmdTimecodeRP188Any = /* 'rp18' */ 0x72703138, // For capture: return the first valid timecode in {VITC1, LTC ,VITC2} - For playback: set the timecode as VITC1
bmdTimecodeVITC = /* 'vitc' */ 0x76697463,
bmdTimecodeVITCField2 = /* 'vit2' */ 0x76697432,
bmdTimecodeSerial = /* 'seri' */ 0x73657269
};
/* Enum BMDAnalogVideoFlags - Analog video display flags */
typedef uint32_t BMDAnalogVideoFlags;
enum _BMDAnalogVideoFlags {
bmdAnalogVideoFlagCompositeSetup75 = 1 << 0,
bmdAnalogVideoFlagComponentBetacamLevels = 1 << 1
};
/* Enum BMDAudioOutputAnalogAESSwitch - Audio output Analog/AESEBU switch */
typedef uint32_t BMDAudioOutputAnalogAESSwitch;
enum _BMDAudioOutputAnalogAESSwitch {
bmdAudioOutputSwitchAESEBU = /* 'aes ' */ 0x61657320,
bmdAudioOutputSwitchAnalog = /* 'anlg' */ 0x616E6C67
};
/* Enum BMDVideoOutputConversionMode - Video/audio conversion mode */
typedef uint32_t BMDVideoOutputConversionMode;
enum _BMDVideoOutputConversionMode {
bmdNoVideoOutputConversion = /* 'none' */ 0x6E6F6E65,
bmdVideoOutputLetterboxDownconversion = /* 'ltbx' */ 0x6C746278,
bmdVideoOutputAnamorphicDownconversion = /* 'amph' */ 0x616D7068,
bmdVideoOutputHD720toHD1080Conversion = /* '720c' */ 0x37323063,
bmdVideoOutputHardwareLetterboxDownconversion = /* 'HWlb' */ 0x48576C62,
bmdVideoOutputHardwareAnamorphicDownconversion = /* 'HWam' */ 0x4857616D,
bmdVideoOutputHardwareCenterCutDownconversion = /* 'HWcc' */ 0x48576363,
bmdVideoOutputHardware720p1080pCrossconversion = /* 'xcap' */ 0x78636170,
bmdVideoOutputHardwareAnamorphic720pUpconversion = /* 'ua7p' */ 0x75613770,
bmdVideoOutputHardwareAnamorphic1080iUpconversion = /* 'ua1i' */ 0x75613169,
bmdVideoOutputHardwareAnamorphic149To720pUpconversion = /* 'u47p' */ 0x75343770,
bmdVideoOutputHardwareAnamorphic149To1080iUpconversion = /* 'u41i' */ 0x75343169,
bmdVideoOutputHardwarePillarbox720pUpconversion = /* 'up7p' */ 0x75703770,
bmdVideoOutputHardwarePillarbox1080iUpconversion = /* 'up1i' */ 0x75703169
};
/* Enum BMDVideoInputConversionMode - Video input conversion mode */
typedef uint32_t BMDVideoInputConversionMode;
enum _BMDVideoInputConversionMode {
bmdNoVideoInputConversion = /* 'none' */ 0x6E6F6E65,
bmdVideoInputLetterboxDownconversionFromHD1080 = /* '10lb' */ 0x31306C62,
bmdVideoInputAnamorphicDownconversionFromHD1080 = /* '10am' */ 0x3130616D,
bmdVideoInputLetterboxDownconversionFromHD720 = /* '72lb' */ 0x37326C62,
bmdVideoInputAnamorphicDownconversionFromHD720 = /* '72am' */ 0x3732616D,
bmdVideoInputLetterboxUpconversion = /* 'lbup' */ 0x6C627570,
bmdVideoInputAnamorphicUpconversion = /* 'amup' */ 0x616D7570
};
/* Enum BMDVideo3DPackingFormat - Video 3D packing format */
typedef uint32_t BMDVideo3DPackingFormat;
enum _BMDVideo3DPackingFormat {
bmdVideo3DPackingSidebySideHalf = /* 'sbsh' */ 0x73627368,
bmdVideo3DPackingLinebyLine = /* 'lbyl' */ 0x6C62796C,
bmdVideo3DPackingTopAndBottom = /* 'tabo' */ 0x7461626F,
bmdVideo3DPackingFramePacking = /* 'frpk' */ 0x6672706B,
bmdVideo3DPackingLeftOnly = /* 'left' */ 0x6C656674,
bmdVideo3DPackingRightOnly = /* 'righ' */ 0x72696768
};
/* Enum BMDIdleVideoOutputOperation - Video output operation when not playing video */
typedef uint32_t BMDIdleVideoOutputOperation;
enum _BMDIdleVideoOutputOperation {
bmdIdleVideoOutputBlack = /* 'blac' */ 0x626C6163,
bmdIdleVideoOutputLastFrame = /* 'lafa' */ 0x6C616661,
bmdIdleVideoOutputDesktop = /* 'desk' */ 0x6465736B
};
/* Enum BMDDeckLinkAttributeID - DeckLink Attribute ID */
typedef uint32_t BMDDeckLinkAttributeID;
enum _BMDDeckLinkAttributeID {
/* Flags */
BMDDeckLinkSupportsInternalKeying = /* 'keyi' */ 0x6B657969,
BMDDeckLinkSupportsExternalKeying = /* 'keye' */ 0x6B657965,
BMDDeckLinkSupportsHDKeying = /* 'keyh' */ 0x6B657968,
BMDDeckLinkSupportsInputFormatDetection = /* 'infd' */ 0x696E6664,
BMDDeckLinkHasReferenceInput = /* 'hrin' */ 0x6872696E,
BMDDeckLinkHasSerialPort = /* 'hspt' */ 0x68737074,
BMDDeckLinkHasAnalogVideoOutputGain = /* 'avog' */ 0x61766F67,
BMDDeckLinkCanOnlyAdjustOverallVideoOutputGain = /* 'ovog' */ 0x6F766F67,
BMDDeckLinkHasVideoInputAntiAliasingFilter = /* 'aafl' */ 0x6161666C,
BMDDeckLinkHasBypass = /* 'byps' */ 0x62797073,
BMDDeckLinkSupportsDesktopDisplay = /* 'extd' */ 0x65787464,
BMDDeckLinkSupportsClockTimingAdjustment = /* 'ctad' */ 0x63746164,
BMDDeckLinkSupportsFullDuplex = /* 'fdup' */ 0x66647570,
BMDDeckLinkSupportsFullFrameReferenceInputTimingOffset = /* 'frin' */ 0x6672696E,
BMDDeckLinkSupportsSMPTELevelAOutput = /* 'lvla' */ 0x6C766C61,
BMDDeckLinkSupportsDualLinkSDI = /* 'sdls' */ 0x73646C73,
BMDDeckLinkSupportsIdleOutput = /* 'idou' */ 0x69646F75,
/* Integers */
BMDDeckLinkMaximumAudioChannels = /* 'mach' */ 0x6D616368,
BMDDeckLinkMaximumAnalogAudioChannels = /* 'aach' */ 0x61616368,
BMDDeckLinkNumberOfSubDevices = /* 'nsbd' */ 0x6E736264,
BMDDeckLinkSubDeviceIndex = /* 'subi' */ 0x73756269,
BMDDeckLinkPersistentID = /* 'peid' */ 0x70656964,
BMDDeckLinkTopologicalID = /* 'toid' */ 0x746F6964,
BMDDeckLinkVideoOutputConnections = /* 'vocn' */ 0x766F636E,
BMDDeckLinkVideoInputConnections = /* 'vicn' */ 0x7669636E,
BMDDeckLinkAudioOutputConnections = /* 'aocn' */ 0x616F636E,
BMDDeckLinkAudioInputConnections = /* 'aicn' */ 0x6169636E,
BMDDeckLinkDeviceBusyState = /* 'dbst' */ 0x64627374,
BMDDeckLinkVideoIOSupport = /* 'vios' */ 0x76696F73, // Returns a BMDVideoIOSupport bit field
/* Floats */
BMDDeckLinkVideoInputGainMinimum = /* 'vigm' */ 0x7669676D,
BMDDeckLinkVideoInputGainMaximum = /* 'vigx' */ 0x76696778,
BMDDeckLinkVideoOutputGainMinimum = /* 'vogm' */ 0x766F676D,
BMDDeckLinkVideoOutputGainMaximum = /* 'vogx' */ 0x766F6778,
/* Strings */
BMDDeckLinkSerialPortDeviceName = /* 'slpn' */ 0x736C706E
};
/* Enum BMDDeckLinkAPIInformationID - DeckLinkAPI information ID */
typedef uint32_t BMDDeckLinkAPIInformationID;
enum _BMDDeckLinkAPIInformationID {
BMDDeckLinkAPIVersion = /* 'vers' */ 0x76657273
};
/* Enum BMDDeviceBusyState - Current device busy state */
typedef uint32_t BMDDeviceBusyState;
enum _BMDDeviceBusyState {
bmdDeviceCaptureBusy = 1 << 0,
bmdDevicePlaybackBusy = 1 << 1,
bmdDeviceSerialPortBusy = 1 << 2
};
/* Enum BMDVideoIOSupport - Device video input/output support */
typedef uint32_t BMDVideoIOSupport;
enum _BMDVideoIOSupport {
bmdDeviceSupportsCapture = 1 << 0,
bmdDeviceSupportsPlayback = 1 << 1
};
/* Enum BMD3DPreviewFormat - Linked Frame preview format */
typedef uint32_t BMD3DPreviewFormat;
enum _BMD3DPreviewFormat {
bmd3DPreviewFormatDefault = /* 'defa' */ 0x64656661,
bmd3DPreviewFormatLeftOnly = /* 'left' */ 0x6C656674,
bmd3DPreviewFormatRightOnly = /* 'righ' */ 0x72696768,
bmd3DPreviewFormatSideBySide = /* 'side' */ 0x73696465,
bmd3DPreviewFormatTopBottom = /* 'topb' */ 0x746F7062
};
/* Enum BMDNotifications - Events that can be subscribed through IDeckLinkNotification */
typedef uint32_t BMDNotifications;
enum _BMDNotifications {
bmdPreferencesChanged = /* 'pref' */ 0x70726566
};
#if defined(__cplusplus)
// Forward Declarations
class IDeckLinkVideoOutputCallback;
class IDeckLinkInputCallback;
class IDeckLinkMemoryAllocator;
class IDeckLinkAudioOutputCallback;
class IDeckLinkIterator;
class IDeckLinkAPIInformation;
class IDeckLinkOutput;
class IDeckLinkInput;
class IDeckLinkVideoFrame;
class IDeckLinkMutableVideoFrame;
class IDeckLinkVideoFrame3DExtensions;
class IDeckLinkVideoInputFrame;
class IDeckLinkVideoFrameAncillary;
class IDeckLinkAudioInputPacket;
class IDeckLinkScreenPreviewCallback;
class IDeckLinkGLScreenPreviewHelper;
class IDeckLinkNotificationCallback;
class IDeckLinkNotification;
class IDeckLinkAttributes;
class IDeckLinkKeyer;
class IDeckLinkVideoConversion;
class IDeckLinkDeviceNotificationCallback;
class IDeckLinkDiscovery;
/* Interface IDeckLinkVideoOutputCallback - Frame completion callback. */
class IDeckLinkVideoOutputCallback : public IUnknown
{
public:
virtual HRESULT ScheduledFrameCompleted (/* in */ IDeckLinkVideoFrame *completedFrame, /* in */ BMDOutputFrameCompletionResult result) = 0;
virtual HRESULT ScheduledPlaybackHasStopped (void) = 0;
protected:
virtual ~IDeckLinkVideoOutputCallback () {} // call Release method to drop reference count
};
/* Interface IDeckLinkInputCallback - Frame arrival callback. */
class IDeckLinkInputCallback : public IUnknown
{
public:
virtual HRESULT VideoInputFormatChanged (/* in */ BMDVideoInputFormatChangedEvents notificationEvents, /* in */ IDeckLinkDisplayMode *newDisplayMode, /* in */ BMDDetectedVideoInputFormatFlags detectedSignalFlags) = 0;
virtual HRESULT VideoInputFrameArrived (/* in */ IDeckLinkVideoInputFrame* videoFrame, /* in */ IDeckLinkAudioInputPacket* audioPacket) = 0;
protected:
virtual ~IDeckLinkInputCallback () {} // call Release method to drop reference count
};
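/* Usage sketch (illustrative, not part of the SDK header): a minimal
 * IDeckLinkInputCallback implementation. The class name and the handling shown are
 * assumptions; a real implementation must also provide the IUnknown methods
 * (QueryInterface/AddRef/Release) required by the platform COM support layer. */
class CaptureCallback : public IDeckLinkInputCallback
{
public:
	virtual HRESULT VideoInputFormatChanged (BMDVideoInputFormatChangedEvents notificationEvents, IDeckLinkDisplayMode *newDisplayMode, BMDDetectedVideoInputFormatFlags detectedSignalFlags)
	{
		// Typically: pause the streams, re-enable video input with the new mode, flush, restart.
		return S_OK;
	}
	virtual HRESULT VideoInputFrameArrived (IDeckLinkVideoInputFrame *videoFrame, IDeckLinkAudioInputPacket *audioPacket)
	{
		void *pixels = NULL;
		if (videoFrame && videoFrame->GetBytes(&pixels) == S_OK)
		{
			// pixels spans GetRowBytes() * GetHeight() bytes in the frame's pixel format.
		}
		return S_OK;
	}
};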
/* Interface IDeckLinkMemoryAllocator - Memory allocator for video frames. */
class IDeckLinkMemoryAllocator : public IUnknown
{
public:
virtual HRESULT AllocateBuffer (/* in */ uint32_t bufferSize, /* out */ void **allocatedBuffer) = 0;
virtual HRESULT ReleaseBuffer (/* in */ void *buffer) = 0;
virtual HRESULT Commit (void) = 0;
virtual HRESULT Decommit (void) = 0;
};
/* Interface IDeckLinkAudioOutputCallback - Optional callback to allow audio samples to be pulled as required. */
class IDeckLinkAudioOutputCallback : public IUnknown
{
public:
virtual HRESULT RenderAudioSamples (/* in */ bool preroll) = 0;
};
/* Interface IDeckLinkIterator - enumerates installed DeckLink hardware */
class IDeckLinkIterator : public IUnknown
{
public:
virtual HRESULT Next (/* out */ IDeckLink **deckLinkInstance) = 0;
};
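/* Usage sketch (illustrative, not part of the SDK header): enumerating devices.
 * CreateDeckLinkIteratorInstance() is declared at the end of this header and returns
 * NULL when the DeckLink drivers are absent; GetModelName() is declared on IDeckLink
 * (DeckLinkAPIDiscovery.h, further below). Requires <stdio.h>; string ownership
 * rules are platform-specific, see the SDK documentation. */
static void ListDeckLinkDevices (void)
{
	IDeckLinkIterator *iterator = CreateDeckLinkIteratorInstance();
	if (iterator == NULL)
		return;                                   // API not installed
	IDeckLink *deckLink = NULL;
	while (iterator->Next(&deckLink) == S_OK)
	{
		const char *modelName = NULL;
		if (deckLink->GetModelName(&modelName) == S_OK)
			printf("Found DeckLink device: %s\n", modelName);
		deckLink->Release();                      // drop the reference returned by Next()
	}
	iterator->Release();
}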
/* Interface IDeckLinkAPIInformation - DeckLinkAPI attribute interface */
class IDeckLinkAPIInformation : public IUnknown
{
public:
virtual HRESULT GetFlag (/* in */ BMDDeckLinkAPIInformationID cfgID, /* out */ bool *value) = 0;
virtual HRESULT GetInt (/* in */ BMDDeckLinkAPIInformationID cfgID, /* out */ int64_t *value) = 0;
virtual HRESULT GetFloat (/* in */ BMDDeckLinkAPIInformationID cfgID, /* out */ double *value) = 0;
virtual HRESULT GetString (/* in */ BMDDeckLinkAPIInformationID cfgID, /* out */ const char **value) = 0;
protected:
virtual ~IDeckLinkAPIInformation () {} // call Release method to drop reference count
};
/* Interface IDeckLinkOutput - Created by QueryInterface from IDeckLink. */
class IDeckLinkOutput : public IUnknown
{
public:
virtual HRESULT DoesSupportVideoMode (/* in */ BMDDisplayMode displayMode, /* in */ BMDPixelFormat pixelFormat, /* in */ BMDVideoOutputFlags flags, /* out */ BMDDisplayModeSupport *result, /* out */ IDeckLinkDisplayMode **resultDisplayMode) = 0;
virtual HRESULT GetDisplayModeIterator (/* out */ IDeckLinkDisplayModeIterator **iterator) = 0;
virtual HRESULT SetScreenPreviewCallback (/* in */ IDeckLinkScreenPreviewCallback *previewCallback) = 0;
/* Video Output */
virtual HRESULT EnableVideoOutput (/* in */ BMDDisplayMode displayMode, /* in */ BMDVideoOutputFlags flags) = 0;
virtual HRESULT DisableVideoOutput (void) = 0;
virtual HRESULT SetVideoOutputFrameMemoryAllocator (/* in */ IDeckLinkMemoryAllocator *theAllocator) = 0;
virtual HRESULT CreateVideoFrame (/* in */ int32_t width, /* in */ int32_t height, /* in */ int32_t rowBytes, /* in */ BMDPixelFormat pixelFormat, /* in */ BMDFrameFlags flags, /* out */ IDeckLinkMutableVideoFrame **outFrame) = 0;
virtual HRESULT CreateAncillaryData (/* in */ BMDPixelFormat pixelFormat, /* out */ IDeckLinkVideoFrameAncillary **outBuffer) = 0;
virtual HRESULT DisplayVideoFrameSync (/* in */ IDeckLinkVideoFrame *theFrame) = 0;
virtual HRESULT ScheduleVideoFrame (/* in */ IDeckLinkVideoFrame *theFrame, /* in */ BMDTimeValue displayTime, /* in */ BMDTimeValue displayDuration, /* in */ BMDTimeScale timeScale) = 0;
virtual HRESULT SetScheduledFrameCompletionCallback (/* in */ IDeckLinkVideoOutputCallback *theCallback) = 0;
virtual HRESULT GetBufferedVideoFrameCount (/* out */ uint32_t *bufferedFrameCount) = 0;
/* Audio Output */
virtual HRESULT EnableAudioOutput (/* in */ BMDAudioSampleRate sampleRate, /* in */ BMDAudioSampleType sampleType, /* in */ uint32_t channelCount, /* in */ BMDAudioOutputStreamType streamType) = 0;
virtual HRESULT DisableAudioOutput (void) = 0;
virtual HRESULT WriteAudioSamplesSync (/* in */ void *buffer, /* in */ uint32_t sampleFrameCount, /* out */ uint32_t *sampleFramesWritten) = 0;
virtual HRESULT BeginAudioPreroll (void) = 0;
virtual HRESULT EndAudioPreroll (void) = 0;
virtual HRESULT ScheduleAudioSamples (/* in */ void *buffer, /* in */ uint32_t sampleFrameCount, /* in */ BMDTimeValue streamTime, /* in */ BMDTimeScale timeScale, /* out */ uint32_t *sampleFramesWritten) = 0;
virtual HRESULT GetBufferedAudioSampleFrameCount (/* out */ uint32_t *bufferedSampleFrameCount) = 0;
virtual HRESULT FlushBufferedAudioSamples (void) = 0;
virtual HRESULT SetAudioCallback (/* in */ IDeckLinkAudioOutputCallback *theCallback) = 0;
/* Output Control */
virtual HRESULT StartScheduledPlayback (/* in */ BMDTimeValue playbackStartTime, /* in */ BMDTimeScale timeScale, /* in */ double playbackSpeed) = 0;
virtual HRESULT StopScheduledPlayback (/* in */ BMDTimeValue stopPlaybackAtTime, /* out */ BMDTimeValue *actualStopTime, /* in */ BMDTimeScale timeScale) = 0;
virtual HRESULT IsScheduledPlaybackRunning (/* out */ bool *active) = 0;
virtual HRESULT GetScheduledStreamTime (/* in */ BMDTimeScale desiredTimeScale, /* out */ BMDTimeValue *streamTime, /* out */ double *playbackSpeed) = 0;
virtual HRESULT GetReferenceStatus (/* out */ BMDReferenceStatus *referenceStatus) = 0;
/* Hardware Timing */
virtual HRESULT GetHardwareReferenceClock (/* in */ BMDTimeScale desiredTimeScale, /* out */ BMDTimeValue *hardwareTime, /* out */ BMDTimeValue *timeInFrame, /* out */ BMDTimeValue *ticksPerFrame) = 0;
virtual HRESULT GetFrameCompletionReferenceTimestamp (/* in */ IDeckLinkVideoFrame *theFrame, /* in */ BMDTimeScale desiredTimeScale, /* out */ BMDTimeValue *frameCompletionTimestamp) = 0;
protected:
virtual ~IDeckLinkOutput () {} // call Release method to drop reference count
};
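/* Usage sketch (illustrative, not part of the SDK header): immediate-mode output of a
 * single frame. The mode (bmdModeHD1080p25), pixel format (bmdFormat8BitYUV) and the
 * default flag constants are example choices; the flag enums are declared earlier in
 * this header. */
static bool DisplayOneFrameSync (IDeckLinkOutput *output)
{
	BMDDisplayModeSupport support = bmdDisplayModeNotSupported;
	IDeckLinkDisplayMode *mode = NULL;
	if (output->DoesSupportVideoMode(bmdModeHD1080p25, bmdFormat8BitYUV, bmdVideoOutputFlagDefault,
	                                 &support, &mode) != S_OK ||
	    support == bmdDisplayModeNotSupported || mode == NULL)
		return false;
	int32_t width = (int32_t)mode->GetWidth();
	int32_t height = (int32_t)mode->GetHeight();
	mode->Release();
	if (output->EnableVideoOutput(bmdModeHD1080p25, bmdVideoOutputFlagDefault) != S_OK)
		return false;
	IDeckLinkMutableVideoFrame *frame = NULL;
	// '2vuy' packs two bytes per pixel, hence rowBytes = width * 2.
	if (output->CreateVideoFrame(width, height, width * 2, bmdFormat8BitYUV,
	                             bmdFrameFlagDefault, &frame) == S_OK)
	{
		output->DisplayVideoFrameSync(frame);     // immediate, non-scheduled output
		frame->Release();
	}
	output->DisableVideoOutput();
	return true;
}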
/* Interface IDeckLinkInput - Created by QueryInterface from IDeckLink. */
class IDeckLinkInput : public IUnknown
{
public:
virtual HRESULT DoesSupportVideoMode (/* in */ BMDDisplayMode displayMode, /* in */ BMDPixelFormat pixelFormat, /* in */ BMDVideoInputFlags flags, /* out */ BMDDisplayModeSupport *result, /* out */ IDeckLinkDisplayMode **resultDisplayMode) = 0;
virtual HRESULT GetDisplayModeIterator (/* out */ IDeckLinkDisplayModeIterator **iterator) = 0;
virtual HRESULT SetScreenPreviewCallback (/* in */ IDeckLinkScreenPreviewCallback *previewCallback) = 0;
/* Video Input */
virtual HRESULT EnableVideoInput (/* in */ BMDDisplayMode displayMode, /* in */ BMDPixelFormat pixelFormat, /* in */ BMDVideoInputFlags flags) = 0;
virtual HRESULT DisableVideoInput (void) = 0;
virtual HRESULT GetAvailableVideoFrameCount (/* out */ uint32_t *availableFrameCount) = 0;
virtual HRESULT SetVideoInputFrameMemoryAllocator (/* in */ IDeckLinkMemoryAllocator *theAllocator) = 0;
/* Audio Input */
virtual HRESULT EnableAudioInput (/* in */ BMDAudioSampleRate sampleRate, /* in */ BMDAudioSampleType sampleType, /* in */ uint32_t channelCount) = 0;
virtual HRESULT DisableAudioInput (void) = 0;
virtual HRESULT GetAvailableAudioSampleFrameCount (/* out */ uint32_t *availableSampleFrameCount) = 0;
/* Input Control */
virtual HRESULT StartStreams (void) = 0;
virtual HRESULT StopStreams (void) = 0;
virtual HRESULT PauseStreams (void) = 0;
virtual HRESULT FlushStreams (void) = 0;
virtual HRESULT SetCallback (/* in */ IDeckLinkInputCallback *theCallback) = 0;
/* Hardware Timing */
virtual HRESULT GetHardwareReferenceClock (/* in */ BMDTimeScale desiredTimeScale, /* out */ BMDTimeValue *hardwareTime, /* out */ BMDTimeValue *timeInFrame, /* out */ BMDTimeValue *ticksPerFrame) = 0;
protected:
virtual ~IDeckLinkInput () {} // call Release method to drop reference count
};
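/* Usage sketch (illustrative, not part of the SDK header): starting a capture with a
 * callback such as the one sketched above. The mode, formats and channel count are
 * example choices; bmdVideoInputFlagDefault is declared earlier in this header. */
static bool StartCapture (IDeckLinkInput *input, IDeckLinkInputCallback *callback)
{
	if (input->EnableVideoInput(bmdModeHD1080i50, bmdFormat8BitYUV, bmdVideoInputFlagDefault) != S_OK)
		return false;
	input->EnableAudioInput(bmdAudioSampleRate48kHz, bmdAudioSampleType16bitInteger, 2);
	input->SetCallback(callback);
	// Frames arrive on the callback thread until StopStreams() is called.
	return input->StartStreams() == S_OK;
}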
/* Interface IDeckLinkVideoFrame - Interface to encapsulate a video frame; can be caller-implemented. */
class IDeckLinkVideoFrame : public IUnknown
{
public:
virtual long GetWidth (void) = 0;
virtual long GetHeight (void) = 0;
virtual long GetRowBytes (void) = 0;
virtual BMDPixelFormat GetPixelFormat (void) = 0;
virtual BMDFrameFlags GetFlags (void) = 0;
virtual HRESULT GetBytes (/* out */ void **buffer) = 0;
virtual HRESULT GetTimecode (/* in */ BMDTimecodeFormat format, /* out */ IDeckLinkTimecode **timecode) = 0;
virtual HRESULT GetAncillaryData (/* out */ IDeckLinkVideoFrameAncillary **ancillary) = 0;
protected:
virtual ~IDeckLinkVideoFrame () {} // call Release method to drop reference count
};
/* Interface IDeckLinkMutableVideoFrame - Created by IDeckLinkOutput::CreateVideoFrame. */
class IDeckLinkMutableVideoFrame : public IDeckLinkVideoFrame
{
public:
virtual HRESULT SetFlags (/* in */ BMDFrameFlags newFlags) = 0;
virtual HRESULT SetTimecode (/* in */ BMDTimecodeFormat format, /* in */ IDeckLinkTimecode *timecode) = 0;
virtual HRESULT SetTimecodeFromComponents (/* in */ BMDTimecodeFormat format, /* in */ uint8_t hours, /* in */ uint8_t minutes, /* in */ uint8_t seconds, /* in */ uint8_t frames, /* in */ BMDTimecodeFlags flags) = 0;
virtual HRESULT SetAncillaryData (/* in */ IDeckLinkVideoFrameAncillary *ancillary) = 0;
virtual HRESULT SetTimecodeUserBits (/* in */ BMDTimecodeFormat format, /* in */ BMDTimecodeUserBits userBits) = 0;
protected:
virtual ~IDeckLinkMutableVideoFrame () {} // call Release method to drop reference count
};
/* Interface IDeckLinkVideoFrame3DExtensions - Optional interface implemented on IDeckLinkVideoFrame to support 3D frames */
class IDeckLinkVideoFrame3DExtensions : public IUnknown
{
public:
virtual BMDVideo3DPackingFormat Get3DPackingFormat (void) = 0;
virtual HRESULT GetFrameForRightEye (/* out */ IDeckLinkVideoFrame* *rightEyeFrame) = 0;
protected:
virtual ~IDeckLinkVideoFrame3DExtensions () {} // call Release method to drop reference count
};
/* Interface IDeckLinkVideoInputFrame - Provided by the IDeckLinkVideoInput frame arrival callback. */
class IDeckLinkVideoInputFrame : public IDeckLinkVideoFrame
{
public:
virtual HRESULT GetStreamTime (/* out */ BMDTimeValue *frameTime, /* out */ BMDTimeValue *frameDuration, /* in */ BMDTimeScale timeScale) = 0;
virtual HRESULT GetHardwareReferenceTimestamp (/* in */ BMDTimeScale timeScale, /* out */ BMDTimeValue *frameTime, /* out */ BMDTimeValue *frameDuration) = 0;
protected:
virtual ~IDeckLinkVideoInputFrame () {} // call Release method to drop reference count
};
/* Interface IDeckLinkVideoFrameAncillary - Obtained through QueryInterface() on an IDeckLinkVideoFrame object. */
class IDeckLinkVideoFrameAncillary : public IUnknown
{
public:
virtual HRESULT GetBufferForVerticalBlankingLine (/* in */ uint32_t lineNumber, /* out */ void **buffer) = 0;
virtual BMDPixelFormat GetPixelFormat (void) = 0;
virtual BMDDisplayMode GetDisplayMode (void) = 0;
protected:
virtual ~IDeckLinkVideoFrameAncillary () {} // call Release method to drop reference count
};
/* Interface IDeckLinkAudioInputPacket - Provided by the IDeckLinkInput callback. */
class IDeckLinkAudioInputPacket : public IUnknown
{
public:
virtual long GetSampleFrameCount (void) = 0;
virtual HRESULT GetBytes (/* out */ void **buffer) = 0;
virtual HRESULT GetPacketTime (/* out */ BMDTimeValue *packetTime, /* in */ BMDTimeScale timeScale) = 0;
protected:
virtual ~IDeckLinkAudioInputPacket () {} // call Release method to drop reference count
};
/* Interface IDeckLinkScreenPreviewCallback - Screen preview callback */
class IDeckLinkScreenPreviewCallback : public IUnknown
{
public:
virtual HRESULT DrawFrame (/* in */ IDeckLinkVideoFrame *theFrame) = 0;
protected:
virtual ~IDeckLinkScreenPreviewCallback () {} // call Release method to drop reference count
};
/* Interface IDeckLinkGLScreenPreviewHelper - Created with CoCreateInstance(). */
class IDeckLinkGLScreenPreviewHelper : public IUnknown
{
public:
/* Methods must be called with OpenGL context set */
virtual HRESULT InitializeGL (void) = 0;
virtual HRESULT PaintGL (void) = 0;
virtual HRESULT SetFrame (/* in */ IDeckLinkVideoFrame *theFrame) = 0;
virtual HRESULT Set3DPreviewFormat (/* in */ BMD3DPreviewFormat previewFormat) = 0;
protected:
virtual ~IDeckLinkGLScreenPreviewHelper () {} // call Release method to drop reference count
};
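/* Usage sketch (illustrative, not part of the SDK header): typical preview flow.
 * All three calls below require a current OpenGL context; the helper is created with
 * CreateOpenGLScreenPreviewHelper(), declared at the end of this header. */
static IDeckLinkGLScreenPreviewHelper *CreatePreview (void)
{
	IDeckLinkGLScreenPreviewHelper *helper = CreateOpenGLScreenPreviewHelper();
	if (helper)
		helper->InitializeGL();
	return helper;
}
// Per captured frame (e.g. from an input callback): helper->SetFrame(videoFrame);
// In the application's draw loop:                   helper->PaintGL();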
/* Interface IDeckLinkNotificationCallback - DeckLink Notification Callback Interface */
class IDeckLinkNotificationCallback : public IUnknown
{
public:
virtual HRESULT Notify (/* in */ BMDNotifications topic, /* in */ uint64_t param1, /* in */ uint64_t param2) = 0;
};
/* Interface IDeckLinkNotification - DeckLink Notification interface */
class IDeckLinkNotification : public IUnknown
{
public:
virtual HRESULT Subscribe (/* in */ BMDNotifications topic, /* in */ IDeckLinkNotificationCallback *theCallback) = 0;
virtual HRESULT Unsubscribe (/* in */ BMDNotifications topic, /* in */ IDeckLinkNotificationCallback *theCallback) = 0;
};
/* Interface IDeckLinkAttributes - DeckLink Attribute interface */
class IDeckLinkAttributes : public IUnknown
{
public:
virtual HRESULT GetFlag (/* in */ BMDDeckLinkAttributeID cfgID, /* out */ bool *value) = 0;
virtual HRESULT GetInt (/* in */ BMDDeckLinkAttributeID cfgID, /* out */ int64_t *value) = 0;
virtual HRESULT GetFloat (/* in */ BMDDeckLinkAttributeID cfgID, /* out */ double *value) = 0;
virtual HRESULT GetString (/* in */ BMDDeckLinkAttributeID cfgID, /* out */ const char **value) = 0;
protected:
virtual ~IDeckLinkAttributes () {} // call Release method to drop reference count
};
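/* Usage sketch (illustrative, not part of the SDK header): querying a capability flag.
 * The attributes interface is obtained with QueryInterface() on an IDeckLink using
 * IID_IDeckLinkAttributes; QueryInterface() and the IID come from the platform COM
 * support and the declarations earlier in this header. */
static bool SupportsInternalKeying (IDeckLink *deckLink)
{
	IDeckLinkAttributes *attributes = NULL;
	bool supported = false;
	if (deckLink->QueryInterface(IID_IDeckLinkAttributes, (void **)&attributes) == S_OK)
	{
		attributes->GetFlag(BMDDeckLinkSupportsInternalKeying, &supported);
		attributes->Release();
	}
	return supported;
}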
/* Interface IDeckLinkKeyer - DeckLink Keyer interface */
class IDeckLinkKeyer : public IUnknown
{
public:
virtual HRESULT Enable (/* in */ bool isExternal) = 0;
virtual HRESULT SetLevel (/* in */ uint8_t level) = 0;
virtual HRESULT RampUp (/* in */ uint32_t numberOfFrames) = 0;
virtual HRESULT RampDown (/* in */ uint32_t numberOfFrames) = 0;
virtual HRESULT Disable (void) = 0;
protected:
virtual ~IDeckLinkKeyer () {} // call Release method to drop reference count
};
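/* Usage sketch (illustrative, not part of the SDK header): enabling internal keying at
 * full opacity. The keyer is obtained with QueryInterface() on an IDeckLink using
 * IID_IDeckLinkKeyer; output frames then need an alpha channel (e.g. bmdFormat8BitBGRA). */
static bool EnableInternalKeyer (IDeckLink *deckLink)
{
	IDeckLinkKeyer *keyer = NULL;
	if (deckLink->QueryInterface(IID_IDeckLinkKeyer, (void **)&keyer) != S_OK)
		return false;
	bool ok = (keyer->Enable(false) == S_OK);     // false = internal keying
	if (ok)
		keyer->SetLevel(255);                     // 255 = foreground fully opaque
	keyer->Release();
	return ok;
}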
/* Interface IDeckLinkVideoConversion - Created with CoCreateInstance(). */
class IDeckLinkVideoConversion : public IUnknown
{
public:
virtual HRESULT ConvertFrame (/* in */ IDeckLinkVideoFrame* srcFrame, /* in */ IDeckLinkVideoFrame* dstFrame) = 0;
protected:
virtual ~IDeckLinkVideoConversion () {} // call Release method to drop reference count
};
/* Interface IDeckLinkDeviceNotificationCallback - DeckLink device arrival/removal notification callbacks */
class IDeckLinkDeviceNotificationCallback : public IUnknown
{
public:
virtual HRESULT DeckLinkDeviceArrived (/* in */ IDeckLink* deckLinkDevice) = 0;
virtual HRESULT DeckLinkDeviceRemoved (/* in */ IDeckLink* deckLinkDevice) = 0;
protected:
virtual ~IDeckLinkDeviceNotificationCallback () {} // call Release method to drop reference count
};
/* Interface IDeckLinkDiscovery - DeckLink device discovery */
class IDeckLinkDiscovery : public IUnknown
{
public:
virtual HRESULT InstallDeviceNotifications (/* in */ IDeckLinkDeviceNotificationCallback* deviceNotificationCallback) = 0;
virtual HRESULT UninstallDeviceNotifications (void) = 0;
protected:
virtual ~IDeckLinkDiscovery () {} // call Release method to drop reference count
};
/* Functions */
extern "C" {
IDeckLinkIterator* CreateDeckLinkIteratorInstance (void);
IDeckLinkDiscovery* CreateDeckLinkDiscoveryInstance (void);
IDeckLinkAPIInformation* CreateDeckLinkAPIInformationInstance (void);
IDeckLinkGLScreenPreviewHelper* CreateOpenGLScreenPreviewHelper (void);
IDeckLinkVideoConversion* CreateVideoConversionInstance (void);
bool IsDeckLinkAPIPresent (void);
}
#endif // defined(__cplusplus)
#endif /* defined(BMD_DECKLINKAPI_H) */

@@ -0,0 +1,192 @@
/* -LICENSE-START-
** Copyright (c) 2014 Blackmagic Design
**
** Permission is hereby granted, free of charge, to any person or organization
** obtaining a copy of the software and accompanying documentation covered by
** this license (the "Software") to use, reproduce, display, distribute,
** execute, and transmit the Software, and to prepare derivative works of the
** Software, and to permit third-parties to whom the Software is furnished to
** do so, all subject to the following:
**
** The copyright notices in the Software and this entire statement, including
** the above license grant, this restriction and the following disclaimer,
** must be included in all copies of the Software, in whole or in part, and
** all derivative works of the Software, unless such copies or derivative
** works are solely in the form of machine-executable object code generated by
** a source language processor.
**
** THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
** IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
** FITNESS FOR A PARTICULAR PURPOSE, TITLE AND NON-INFRINGEMENT. IN NO EVENT
** SHALL THE COPYRIGHT HOLDERS OR ANYONE DISTRIBUTING THE SOFTWARE BE LIABLE
** FOR ANY DAMAGES OR OTHER LIABILITY, WHETHER IN CONTRACT, TORT OR OTHERWISE,
** ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
** DEALINGS IN THE SOFTWARE.
** -LICENSE-END-
*/
#ifndef BMD_DECKLINKAPICONFIGURATION_H
#define BMD_DECKLINKAPICONFIGURATION_H
#ifndef BMD_CONST
#if defined(_MSC_VER)
#define BMD_CONST __declspec(selectany) static const
#else
#define BMD_CONST static const
#endif
#endif
// Type Declarations
// Interface ID Declarations
BMD_CONST REFIID IID_IDeckLinkConfiguration = /* 1E69FCF6-4203-4936-8076-2A9F4CFD50CB */ {0x1E,0x69,0xFC,0xF6,0x42,0x03,0x49,0x36,0x80,0x76,0x2A,0x9F,0x4C,0xFD,0x50,0xCB};
/* Enum BMDDeckLinkConfigurationID - DeckLink Configuration ID */
typedef uint32_t BMDDeckLinkConfigurationID;
enum _BMDDeckLinkConfigurationID {
/* Serial port Flags */
bmdDeckLinkConfigSwapSerialRxTx = /* 'ssrt' */ 0x73737274,
/* Video Input/Output Flags */
bmdDeckLinkConfigUse1080pNotPsF = /* 'fpro' */ 0x6670726F,
/* Video Input/Output Integers */
bmdDeckLinkConfigHDMI3DPackingFormat = /* '3dpf' */ 0x33647066,
bmdDeckLinkConfigBypass = /* 'byps' */ 0x62797073,
bmdDeckLinkConfigClockTimingAdjustment = /* 'ctad' */ 0x63746164,
/* Audio Input/Output Flags */
bmdDeckLinkConfigAnalogAudioConsumerLevels = /* 'aacl' */ 0x6161636C,
/* Video output flags */
bmdDeckLinkConfigFieldFlickerRemoval = /* 'fdfr' */ 0x66646672,
bmdDeckLinkConfigHD1080p24ToHD1080i5994Conversion = /* 'to59' */ 0x746F3539,
bmdDeckLinkConfig444SDIVideoOutput = /* '444o' */ 0x3434346F,
bmdDeckLinkConfigSingleLinkVideoOutput = /* 'sglo' */ 0x73676C6F,
bmdDeckLinkConfigBlackVideoOutputDuringCapture = /* 'bvoc' */ 0x62766F63,
bmdDeckLinkConfigLowLatencyVideoOutput = /* 'llvo' */ 0x6C6C766F,
bmdDeckLinkConfigDownConversionOnAllAnalogOutput = /* 'caao' */ 0x6361616F,
bmdDeckLinkConfigSMPTELevelAOutput = /* 'smta' */ 0x736D7461,
/* Video Output Integers */
bmdDeckLinkConfigVideoOutputConnection = /* 'vocn' */ 0x766F636E,
bmdDeckLinkConfigVideoOutputConversionMode = /* 'vocm' */ 0x766F636D,
bmdDeckLinkConfigAnalogVideoOutputFlags = /* 'avof' */ 0x61766F66,
bmdDeckLinkConfigReferenceInputTimingOffset = /* 'glot' */ 0x676C6F74,
bmdDeckLinkConfigVideoOutputIdleOperation = /* 'voio' */ 0x766F696F,
bmdDeckLinkConfigDefaultVideoOutputMode = /* 'dvom' */ 0x64766F6D,
bmdDeckLinkConfigDefaultVideoOutputModeFlags = /* 'dvof' */ 0x64766F66,
/* Video Output Floats */
bmdDeckLinkConfigVideoOutputComponentLumaGain = /* 'oclg' */ 0x6F636C67,
bmdDeckLinkConfigVideoOutputComponentChromaBlueGain = /* 'occb' */ 0x6F636362,
bmdDeckLinkConfigVideoOutputComponentChromaRedGain = /* 'occr' */ 0x6F636372,
bmdDeckLinkConfigVideoOutputCompositeLumaGain = /* 'oilg' */ 0x6F696C67,
bmdDeckLinkConfigVideoOutputCompositeChromaGain = /* 'oicg' */ 0x6F696367,
bmdDeckLinkConfigVideoOutputSVideoLumaGain = /* 'oslg' */ 0x6F736C67,
bmdDeckLinkConfigVideoOutputSVideoChromaGain = /* 'oscg' */ 0x6F736367,
/* Video Input Flags */
bmdDeckLinkConfigVideoInputScanning = /* 'visc' */ 0x76697363, // Applicable to H264 Pro Recorder only
bmdDeckLinkConfigUseDedicatedLTCInput = /* 'dltc' */ 0x646C7463, // Use timecode from LTC input instead of SDI stream
/* Video Input Integers */
bmdDeckLinkConfigVideoInputConnection = /* 'vicn' */ 0x7669636E,
bmdDeckLinkConfigAnalogVideoInputFlags = /* 'avif' */ 0x61766966,
bmdDeckLinkConfigVideoInputConversionMode = /* 'vicm' */ 0x7669636D,
bmdDeckLinkConfig32PulldownSequenceInitialTimecodeFrame = /* 'pdif' */ 0x70646966,
bmdDeckLinkConfigVANCSourceLine1Mapping = /* 'vsl1' */ 0x76736C31,
bmdDeckLinkConfigVANCSourceLine2Mapping = /* 'vsl2' */ 0x76736C32,
bmdDeckLinkConfigVANCSourceLine3Mapping = /* 'vsl3' */ 0x76736C33,
bmdDeckLinkConfigCapturePassThroughMode = /* 'cptm' */ 0x6370746D,
/* Video Input Floats */
bmdDeckLinkConfigVideoInputComponentLumaGain = /* 'iclg' */ 0x69636C67,
bmdDeckLinkConfigVideoInputComponentChromaBlueGain = /* 'iccb' */ 0x69636362,
bmdDeckLinkConfigVideoInputComponentChromaRedGain = /* 'iccr' */ 0x69636372,
bmdDeckLinkConfigVideoInputCompositeLumaGain = /* 'iilg' */ 0x69696C67,
bmdDeckLinkConfigVideoInputCompositeChromaGain = /* 'iicg' */ 0x69696367,
bmdDeckLinkConfigVideoInputSVideoLumaGain = /* 'islg' */ 0x69736C67,
bmdDeckLinkConfigVideoInputSVideoChromaGain = /* 'iscg' */ 0x69736367,
/* Audio Input Integers */
bmdDeckLinkConfigAudioInputConnection = /* 'aicn' */ 0x6169636E,
/* Audio Input Floats */
bmdDeckLinkConfigAnalogAudioInputScaleChannel1 = /* 'ais1' */ 0x61697331,
bmdDeckLinkConfigAnalogAudioInputScaleChannel2 = /* 'ais2' */ 0x61697332,
bmdDeckLinkConfigAnalogAudioInputScaleChannel3 = /* 'ais3' */ 0x61697333,
bmdDeckLinkConfigAnalogAudioInputScaleChannel4 = /* 'ais4' */ 0x61697334,
bmdDeckLinkConfigDigitalAudioInputScale = /* 'dais' */ 0x64616973,
/* Audio Output Integers */
bmdDeckLinkConfigAudioOutputAESAnalogSwitch = /* 'aoaa' */ 0x616F6161,
/* Audio Output Floats */
bmdDeckLinkConfigAnalogAudioOutputScaleChannel1 = /* 'aos1' */ 0x616F7331,
bmdDeckLinkConfigAnalogAudioOutputScaleChannel2 = /* 'aos2' */ 0x616F7332,
bmdDeckLinkConfigAnalogAudioOutputScaleChannel3 = /* 'aos3' */ 0x616F7333,
bmdDeckLinkConfigAnalogAudioOutputScaleChannel4 = /* 'aos4' */ 0x616F7334,
bmdDeckLinkConfigDigitalAudioOutputScale = /* 'daos' */ 0x64616F73,
/* Device Information Strings */
bmdDeckLinkConfigDeviceInformationLabel = /* 'dila' */ 0x64696C61,
bmdDeckLinkConfigDeviceInformationSerialNumber = /* 'disn' */ 0x6469736E,
bmdDeckLinkConfigDeviceInformationCompany = /* 'dico' */ 0x6469636F,
bmdDeckLinkConfigDeviceInformationPhone = /* 'diph' */ 0x64697068,
bmdDeckLinkConfigDeviceInformationEmail = /* 'diem' */ 0x6469656D,
bmdDeckLinkConfigDeviceInformationDate = /* 'dida' */ 0x64696461
};
// Forward Declarations
class IDeckLinkConfiguration;
/* Interface IDeckLinkConfiguration - DeckLink Configuration interface */
class IDeckLinkConfiguration : public IUnknown
{
public:
virtual HRESULT SetFlag (/* in */ BMDDeckLinkConfigurationID cfgID, /* in */ bool value) = 0;
virtual HRESULT GetFlag (/* in */ BMDDeckLinkConfigurationID cfgID, /* out */ bool *value) = 0;
virtual HRESULT SetInt (/* in */ BMDDeckLinkConfigurationID cfgID, /* in */ int64_t value) = 0;
virtual HRESULT GetInt (/* in */ BMDDeckLinkConfigurationID cfgID, /* out */ int64_t *value) = 0;
virtual HRESULT SetFloat (/* in */ BMDDeckLinkConfigurationID cfgID, /* in */ double value) = 0;
virtual HRESULT GetFloat (/* in */ BMDDeckLinkConfigurationID cfgID, /* out */ double *value) = 0;
virtual HRESULT SetString (/* in */ BMDDeckLinkConfigurationID cfgID, /* in */ const char *value) = 0;
virtual HRESULT GetString (/* in */ BMDDeckLinkConfigurationID cfgID, /* out */ const char **value) = 0;
virtual HRESULT WriteConfigurationToPreferences (void) = 0;
protected:
virtual ~IDeckLinkConfiguration () {} // call Release method to drop reference count
};
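/* Usage sketch (illustrative, not part of the SDK header): setting a configuration flag.
 * The interface is obtained with QueryInterface() on an IDeckLink using
 * IID_IDeckLinkConfiguration declared above; changes only persist across processes if
 * WriteConfigurationToPreferences() is called. */
static bool UseLowLatencyOutput (IDeckLinkConfiguration *config)
{
	return config->SetFlag(bmdDeckLinkConfigLowLatencyVideoOutput, true) == S_OK;
}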
/* Functions */
extern "C" {
}
#endif /* defined(BMD_DECKLINKAPICONFIGURATION_H) */

@@ -0,0 +1,215 @@
/* -LICENSE-START-
** Copyright (c) 2014 Blackmagic Design
**
** Permission is hereby granted, free of charge, to any person or organization
** obtaining a copy of the software and accompanying documentation covered by
** this license (the "Software") to use, reproduce, display, distribute,
** execute, and transmit the Software, and to prepare derivative works of the
** Software, and to permit third-parties to whom the Software is furnished to
** do so, all subject to the following:
**
** The copyright notices in the Software and this entire statement, including
** the above license grant, this restriction and the following disclaimer,
** must be included in all copies of the Software, in whole or in part, and
** all derivative works of the Software, unless such copies or derivative
** works are solely in the form of machine-executable object code generated by
** a source language processor.
**
** THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
** IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
** FITNESS FOR A PARTICULAR PURPOSE, TITLE AND NON-INFRINGEMENT. IN NO EVENT
** SHALL THE COPYRIGHT HOLDERS OR ANYONE DISTRIBUTING THE SOFTWARE BE LIABLE
** FOR ANY DAMAGES OR OTHER LIABILITY, WHETHER IN CONTRACT, TORT OR OTHERWISE,
** ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
** DEALINGS IN THE SOFTWARE.
** -LICENSE-END-
*/
#ifndef BMD_DECKLINKAPIDECKCONTROL_H
#define BMD_DECKLINKAPIDECKCONTROL_H
#ifndef BMD_CONST
#if defined(_MSC_VER)
#define BMD_CONST __declspec(selectany) static const
#else
#define BMD_CONST static const
#endif
#endif
// Type Declarations
// Interface ID Declarations
BMD_CONST REFIID IID_IDeckLinkDeckControlStatusCallback = /* 53436FFB-B434-4906-BADC-AE3060FFE8EF */ {0x53,0x43,0x6F,0xFB,0xB4,0x34,0x49,0x06,0xBA,0xDC,0xAE,0x30,0x60,0xFF,0xE8,0xEF};
BMD_CONST REFIID IID_IDeckLinkDeckControl = /* 8E1C3ACE-19C7-4E00-8B92-D80431D958BE */ {0x8E,0x1C,0x3A,0xCE,0x19,0xC7,0x4E,0x00,0x8B,0x92,0xD8,0x04,0x31,0xD9,0x58,0xBE};
/* Enum BMDDeckControlMode - DeckControl mode */
typedef uint32_t BMDDeckControlMode;
enum _BMDDeckControlMode {
bmdDeckControlNotOpened = /* 'ntop' */ 0x6E746F70,
bmdDeckControlVTRControlMode = /* 'vtrc' */ 0x76747263,
bmdDeckControlExportMode = /* 'expm' */ 0x6578706D,
bmdDeckControlCaptureMode = /* 'capm' */ 0x6361706D
};
/* Enum BMDDeckControlEvent - DeckControl event */
typedef uint32_t BMDDeckControlEvent;
enum _BMDDeckControlEvent {
bmdDeckControlAbortedEvent = /* 'abte' */ 0x61627465, // This event is triggered when a capture or edit-to-tape operation is aborted.
/* Export-To-Tape events */
bmdDeckControlPrepareForExportEvent = /* 'pfee' */ 0x70666565, // This event is triggered a few frames before reaching the in-point. IDeckLinkInput::StartScheduledPlayback() should be called at this point.
bmdDeckControlExportCompleteEvent = /* 'exce' */ 0x65786365, // This event is triggered a few frames after reaching the out-point. At this point, it is safe to stop playback.
/* Capture events */
bmdDeckControlPrepareForCaptureEvent = /* 'pfce' */ 0x70666365, // This event is triggered a few frames before reaching the in-point. The serial timecode attached to IDeckLinkVideoInputFrames is now valid.
bmdDeckControlCaptureCompleteEvent = /* 'ccev' */ 0x63636576 // This event is triggered a few frames after reaching the out-point.
};
/* Enum BMDDeckControlVTRControlState - VTR Control state */
typedef uint32_t BMDDeckControlVTRControlState;
enum _BMDDeckControlVTRControlState {
bmdDeckControlNotInVTRControlMode = /* 'nvcm' */ 0x6E76636D,
bmdDeckControlVTRControlPlaying = /* 'vtrp' */ 0x76747270,
bmdDeckControlVTRControlRecording = /* 'vtrr' */ 0x76747272,
bmdDeckControlVTRControlStill = /* 'vtra' */ 0x76747261,
bmdDeckControlVTRControlShuttleForward = /* 'vtsf' */ 0x76747366,
bmdDeckControlVTRControlShuttleReverse = /* 'vtsr' */ 0x76747372,
bmdDeckControlVTRControlJogForward = /* 'vtjf' */ 0x76746A66,
bmdDeckControlVTRControlJogReverse = /* 'vtjr' */ 0x76746A72,
bmdDeckControlVTRControlStopped = /* 'vtro' */ 0x7674726F
};
/* Enum BMDDeckControlStatusFlags - Deck Control status flags */
typedef uint32_t BMDDeckControlStatusFlags;
enum _BMDDeckControlStatusFlags {
bmdDeckControlStatusDeckConnected = 1 << 0,
bmdDeckControlStatusRemoteMode = 1 << 1,
bmdDeckControlStatusRecordInhibited = 1 << 2,
bmdDeckControlStatusCassetteOut = 1 << 3
};
/* Enum BMDDeckControlExportModeOpsFlags - Export mode flags */
typedef uint32_t BMDDeckControlExportModeOpsFlags;
enum _BMDDeckControlExportModeOpsFlags {
bmdDeckControlExportModeInsertVideo = 1 << 0,
bmdDeckControlExportModeInsertAudio1 = 1 << 1,
bmdDeckControlExportModeInsertAudio2 = 1 << 2,
bmdDeckControlExportModeInsertAudio3 = 1 << 3,
bmdDeckControlExportModeInsertAudio4 = 1 << 4,
bmdDeckControlExportModeInsertAudio5 = 1 << 5,
bmdDeckControlExportModeInsertAudio6 = 1 << 6,
bmdDeckControlExportModeInsertAudio7 = 1 << 7,
bmdDeckControlExportModeInsertAudio8 = 1 << 8,
bmdDeckControlExportModeInsertAudio9 = 1 << 9,
bmdDeckControlExportModeInsertAudio10 = 1 << 10,
bmdDeckControlExportModeInsertAudio11 = 1 << 11,
bmdDeckControlExportModeInsertAudio12 = 1 << 12,
bmdDeckControlExportModeInsertTimeCode = 1 << 13,
bmdDeckControlExportModeInsertAssemble = 1 << 14,
bmdDeckControlExportModeInsertPreview = 1 << 15,
bmdDeckControlUseManualExport = 1 << 16
};
/* Enum BMDDeckControlError - Deck Control error */
typedef uint32_t BMDDeckControlError;
enum _BMDDeckControlError {
bmdDeckControlNoError = /* 'noer' */ 0x6E6F6572,
bmdDeckControlModeError = /* 'moer' */ 0x6D6F6572,
bmdDeckControlMissedInPointError = /* 'mier' */ 0x6D696572,
bmdDeckControlDeckTimeoutError = /* 'dter' */ 0x64746572,
bmdDeckControlCommandFailedError = /* 'cfer' */ 0x63666572,
bmdDeckControlDeviceAlreadyOpenedError = /* 'dalo' */ 0x64616C6F,
bmdDeckControlFailedToOpenDeviceError = /* 'fder' */ 0x66646572,
bmdDeckControlInLocalModeError = /* 'lmer' */ 0x6C6D6572,
bmdDeckControlEndOfTapeError = /* 'eter' */ 0x65746572,
bmdDeckControlUserAbortError = /* 'uaer' */ 0x75616572,
bmdDeckControlNoTapeInDeckError = /* 'nter' */ 0x6E746572,
bmdDeckControlNoVideoFromCardError = /* 'nvfc' */ 0x6E766663,
bmdDeckControlNoCommunicationError = /* 'ncom' */ 0x6E636F6D,
bmdDeckControlBufferTooSmallError = /* 'btsm' */ 0x6274736D,
bmdDeckControlBadChecksumError = /* 'chks' */ 0x63686B73,
bmdDeckControlUnknownError = /* 'uner' */ 0x756E6572
};
// Forward Declarations
class IDeckLinkDeckControlStatusCallback;
class IDeckLinkDeckControl;
/* Interface IDeckLinkDeckControlStatusCallback - Deck control state change callback. */
class IDeckLinkDeckControlStatusCallback : public IUnknown
{
public:
virtual HRESULT TimecodeUpdate (/* in */ BMDTimecodeBCD currentTimecode) = 0;
virtual HRESULT VTRControlStateChanged (/* in */ BMDDeckControlVTRControlState newState, /* in */ BMDDeckControlError error) = 0;
virtual HRESULT DeckControlEventReceived (/* in */ BMDDeckControlEvent event, /* in */ BMDDeckControlError error) = 0;
virtual HRESULT DeckControlStatusChanged (/* in */ BMDDeckControlStatusFlags flags, /* in */ uint32_t mask) = 0;
protected:
virtual ~IDeckLinkDeckControlStatusCallback () {} // call Release method to drop reference count
};
/* Interface IDeckLinkDeckControl - Deck Control main interface */
class IDeckLinkDeckControl : public IUnknown
{
public:
virtual HRESULT Open (/* in */ BMDTimeScale timeScale, /* in */ BMDTimeValue timeValue, /* in */ bool timecodeIsDropFrame, /* out */ BMDDeckControlError *error) = 0;
virtual HRESULT Close (/* in */ bool standbyOn) = 0;
virtual HRESULT GetCurrentState (/* out */ BMDDeckControlMode *mode, /* out */ BMDDeckControlVTRControlState *vtrControlState, /* out */ BMDDeckControlStatusFlags *flags) = 0;
virtual HRESULT SetStandby (/* in */ bool standbyOn) = 0;
virtual HRESULT SendCommand (/* in */ uint8_t *inBuffer, /* in */ uint32_t inBufferSize, /* out */ uint8_t *outBuffer, /* out */ uint32_t *outDataSize, /* in */ uint32_t outBufferSize, /* out */ BMDDeckControlError *error) = 0;
virtual HRESULT Play (/* out */ BMDDeckControlError *error) = 0;
virtual HRESULT Stop (/* out */ BMDDeckControlError *error) = 0;
virtual HRESULT TogglePlayStop (/* out */ BMDDeckControlError *error) = 0;
virtual HRESULT Eject (/* out */ BMDDeckControlError *error) = 0;
virtual HRESULT GoToTimecode (/* in */ BMDTimecodeBCD timecode, /* out */ BMDDeckControlError *error) = 0;
virtual HRESULT FastForward (/* in */ bool viewTape, /* out */ BMDDeckControlError *error) = 0;
virtual HRESULT Rewind (/* in */ bool viewTape, /* out */ BMDDeckControlError *error) = 0;
virtual HRESULT StepForward (/* out */ BMDDeckControlError *error) = 0;
virtual HRESULT StepBack (/* out */ BMDDeckControlError *error) = 0;
virtual HRESULT Jog (/* in */ double rate, /* out */ BMDDeckControlError *error) = 0;
virtual HRESULT Shuttle (/* in */ double rate, /* out */ BMDDeckControlError *error) = 0;
virtual HRESULT GetTimecodeString (/* out */ const char **currentTimeCode, /* out */ BMDDeckControlError *error) = 0;
virtual HRESULT GetTimecode (/* out */ IDeckLinkTimecode **currentTimecode, /* out */ BMDDeckControlError *error) = 0;
virtual HRESULT GetTimecodeBCD (/* out */ BMDTimecodeBCD *currentTimecode, /* out */ BMDDeckControlError *error) = 0;
virtual HRESULT SetPreroll (/* in */ uint32_t prerollSeconds) = 0;
virtual HRESULT GetPreroll (/* out */ uint32_t *prerollSeconds) = 0;
virtual HRESULT SetExportOffset (/* in */ int32_t exportOffsetFields) = 0;
virtual HRESULT GetExportOffset (/* out */ int32_t *exportOffsetFields) = 0;
virtual HRESULT GetManualExportOffset (/* out */ int32_t *deckManualExportOffsetFields) = 0;
virtual HRESULT SetCaptureOffset (/* in */ int32_t captureOffsetFields) = 0;
virtual HRESULT GetCaptureOffset (/* out */ int32_t *captureOffsetFields) = 0;
virtual HRESULT StartExport (/* in */ BMDTimecodeBCD inTimecode, /* in */ BMDTimecodeBCD outTimecode, /* in */ BMDDeckControlExportModeOpsFlags exportModeOps, /* out */ BMDDeckControlError *error) = 0;
virtual HRESULT StartCapture (/* in */ bool useVITC, /* in */ BMDTimecodeBCD inTimecode, /* in */ BMDTimecodeBCD outTimecode, /* out */ BMDDeckControlError *error) = 0;
virtual HRESULT GetDeviceID (/* out */ uint16_t *deviceId, /* out */ BMDDeckControlError *error) = 0;
virtual HRESULT Abort (void) = 0;
virtual HRESULT CrashRecordStart (/* out */ BMDDeckControlError *error) = 0;
virtual HRESULT CrashRecordStop (/* out */ BMDDeckControlError *error) = 0;
virtual HRESULT SetCallback (/* in */ IDeckLinkDeckControlStatusCallback *callback) = 0;
protected:
virtual ~IDeckLinkDeckControl () {} // call Release method to drop reference count
};
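/* Usage sketch (illustrative, not part of the SDK header): opening deck control and
 * starting playback. The timeScale/timeValue pair describes the frame rate and would
 * normally come from IDeckLinkDisplayMode::GetFrameRate(); the 25 fps values below
 * (25000/1000) are an example only. */
static bool PlayDeck (IDeckLinkDeckControl *deckControl)
{
	BMDDeckControlError error = bmdDeckControlNoError;
	if (deckControl->Open(25000, 1000, false, &error) != S_OK)
		return false;
	return deckControl->Play(&error) == S_OK;
}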
/* Functions */
extern "C" {
}
#endif /* defined(BMD_DECKLINKAPIDECKCONTROL_H) */

@@ -0,0 +1,71 @@
/* -LICENSE-START-
** Copyright (c) 2014 Blackmagic Design
**
** Permission is hereby granted, free of charge, to any person or organization
** obtaining a copy of the software and accompanying documentation covered by
** this license (the "Software") to use, reproduce, display, distribute,
** execute, and transmit the Software, and to prepare derivative works of the
** Software, and to permit third-parties to whom the Software is furnished to
** do so, all subject to the following:
**
** The copyright notices in the Software and this entire statement, including
** the above license grant, this restriction and the following disclaimer,
** must be included in all copies of the Software, in whole or in part, and
** all derivative works of the Software, unless such copies or derivative
** works are solely in the form of machine-executable object code generated by
** a source language processor.
**
** THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
** IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
** FITNESS FOR A PARTICULAR PURPOSE, TITLE AND NON-INFRINGEMENT. IN NO EVENT
** SHALL THE COPYRIGHT HOLDERS OR ANYONE DISTRIBUTING THE SOFTWARE BE LIABLE
** FOR ANY DAMAGES OR OTHER LIABILITY, WHETHER IN CONTRACT, TORT OR OTHERWISE,
** ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
** DEALINGS IN THE SOFTWARE.
** -LICENSE-END-
*/
#ifndef BMD_DECKLINKAPIDISCOVERY_H
#define BMD_DECKLINKAPIDISCOVERY_H
#ifndef BMD_CONST
#if defined(_MSC_VER)
#define BMD_CONST __declspec(selectany) static const
#else
#define BMD_CONST static const
#endif
#endif
// Type Declarations
// Interface ID Declarations
BMD_CONST REFIID IID_IDeckLink = /* C418FBDD-0587-48ED-8FE5-640F0A14AF91 */ {0xC4,0x18,0xFB,0xDD,0x05,0x87,0x48,0xED,0x8F,0xE5,0x64,0x0F,0x0A,0x14,0xAF,0x91};
// Forward Declarations
class IDeckLink;
/* Interface IDeckLink - represents a DeckLink device */
class IDeckLink : public IUnknown
{
public:
virtual HRESULT GetModelName (/* out */ const char **modelName) = 0;
virtual HRESULT GetDisplayName (/* out */ const char **displayName) = 0;
protected:
virtual ~IDeckLink () {} // call Release method to drop reference count
};
/* Functions */
extern "C" {
}
#endif /* defined(BMD_DECKLINKAPIDISCOVERY_H) */

@@ -0,0 +1,148 @@
/* -LICENSE-START-
** Copyright (c) 2009 Blackmagic Design
**
** Permission is hereby granted, free of charge, to any person or organization
** obtaining a copy of the software and accompanying documentation covered by
** this license (the "Software") to use, reproduce, display, distribute,
** execute, and transmit the Software, and to prepare derivative works of the
** Software, and to permit third-parties to whom the Software is furnished to
** do so, all subject to the following:
**
** The copyright notices in the Software and this entire statement, including
** the above license grant, this restriction and the following disclaimer,
** must be included in all copies of the Software, in whole or in part, and
** all derivative works of the Software, unless such copies or derivative
** works are solely in the form of machine-executable object code generated by
** a source language processor.
**
** THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
** IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
** FITNESS FOR A PARTICULAR PURPOSE, TITLE AND NON-INFRINGEMENT. IN NO EVENT
** SHALL THE COPYRIGHT HOLDERS OR ANYONE DISTRIBUTING THE SOFTWARE BE LIABLE
** FOR ANY DAMAGES OR OTHER LIABILITY, WHETHER IN CONTRACT, TORT OR OTHERWISE,
** ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
** DEALINGS IN THE SOFTWARE.
** -LICENSE-END-
**/
#include <stdlib.h>
#include <stdio.h>
#include <pthread.h>
#include <dlfcn.h>
#include <ctype.h>
#include "DeckLinkAPI.h"
#define kDeckLinkAPI_Name "libDeckLinkAPI.so"
#define KDeckLinkPreviewAPI_Name "libDeckLinkPreviewAPI.so"
typedef IDeckLinkIterator* (*CreateIteratorFunc)(void);
typedef IDeckLinkAPIInformation* (*CreateAPIInformationFunc)(void);
typedef IDeckLinkGLScreenPreviewHelper* (*CreateOpenGLScreenPreviewHelperFunc)(void);
typedef IDeckLinkVideoConversion* (*CreateVideoConversionInstanceFunc)(void);
typedef IDeckLinkDiscovery* (*CreateDeckLinkDiscoveryInstanceFunc)(void);
static pthread_once_t gDeckLinkOnceControl = PTHREAD_ONCE_INIT;
static pthread_once_t gPreviewOnceControl = PTHREAD_ONCE_INIT;
static bool gLoadedDeckLinkAPI = false;
static CreateIteratorFunc gCreateIteratorFunc = NULL;
static CreateAPIInformationFunc gCreateAPIInformationFunc = NULL;
static CreateOpenGLScreenPreviewHelperFunc gCreateOpenGLPreviewFunc = NULL;
static CreateVideoConversionInstanceFunc gCreateVideoConversionFunc = NULL;
static CreateDeckLinkDiscoveryInstanceFunc gCreateDeckLinkDiscoveryFunc = NULL;
static void InitDeckLinkAPI (void)
{
void *libraryHandle;
libraryHandle = dlopen(kDeckLinkAPI_Name, RTLD_NOW|RTLD_GLOBAL);
if (!libraryHandle)
{
fprintf(stderr, "%s\n", dlerror());
return;
}
gLoadedDeckLinkAPI = true;
gCreateIteratorFunc = (CreateIteratorFunc)dlsym(libraryHandle, "CreateDeckLinkIteratorInstance_0002");
if (!gCreateIteratorFunc)
fprintf(stderr, "%s\n", dlerror());
gCreateAPIInformationFunc = (CreateAPIInformationFunc)dlsym(libraryHandle, "CreateDeckLinkAPIInformationInstance_0001");
if (!gCreateAPIInformationFunc)
fprintf(stderr, "%s\n", dlerror());
gCreateVideoConversionFunc = (CreateVideoConversionInstanceFunc)dlsym(libraryHandle, "CreateVideoConversionInstance_0001");
if (!gCreateVideoConversionFunc)
fprintf(stderr, "%s\n", dlerror());
gCreateDeckLinkDiscoveryFunc = (CreateDeckLinkDiscoveryInstanceFunc)dlsym(libraryHandle, "CreateDeckLinkDiscoveryInstance_0001");
if (!gCreateDeckLinkDiscoveryFunc)
fprintf(stderr, "%s\n", dlerror());
}
static void InitDeckLinkPreviewAPI (void)
{
void *libraryHandle;
libraryHandle = dlopen(KDeckLinkPreviewAPI_Name, RTLD_NOW|RTLD_GLOBAL);
if (!libraryHandle)
{
fprintf(stderr, "%s\n", dlerror());
return;
}
gCreateOpenGLPreviewFunc = (CreateOpenGLScreenPreviewHelperFunc)dlsym(libraryHandle, "CreateOpenGLScreenPreviewHelper_0001");
if (!gCreateOpenGLPreviewFunc)
fprintf(stderr, "%s\n", dlerror());
}
bool IsDeckLinkAPIPresent (void)
{
// If the DeckLink API dynamic library was successfully loaded, return this knowledge to the caller
return gLoadedDeckLinkAPI;
}
IDeckLinkIterator* CreateDeckLinkIteratorInstance (void)
{
pthread_once(&gDeckLinkOnceControl, InitDeckLinkAPI);
if (gCreateIteratorFunc == NULL)
return NULL;
return gCreateIteratorFunc();
}
IDeckLinkAPIInformation* CreateDeckLinkAPIInformationInstance (void)
{
pthread_once(&gDeckLinkOnceControl, InitDeckLinkAPI);
if (gCreateAPIInformationFunc == NULL)
return NULL;
return gCreateAPIInformationFunc();
}
IDeckLinkGLScreenPreviewHelper* CreateOpenGLScreenPreviewHelper (void)
{
pthread_once(&gDeckLinkOnceControl, InitDeckLinkAPI);
pthread_once(&gPreviewOnceControl, InitDeckLinkPreviewAPI);
if (gCreateOpenGLPreviewFunc == NULL)
return NULL;
return gCreateOpenGLPreviewFunc();
}
IDeckLinkVideoConversion* CreateVideoConversionInstance (void)
{
pthread_once(&gDeckLinkOnceControl, InitDeckLinkAPI);
if (gCreateVideoConversionFunc == NULL)
return NULL;
return gCreateVideoConversionFunc();
}
IDeckLinkDiscovery* CreateDeckLinkDiscoveryInstance (void)
{
pthread_once(&gDeckLinkOnceControl, InitDeckLinkAPI);
if (gCreateDeckLinkDiscoveryFunc == NULL)
return NULL;
return gCreateDeckLinkDiscoveryFunc();
}
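/* Note (illustrative, not part of this dispatch file): the library is loaded lazily via
 * pthread_once() from the factory functions above, so IsDeckLinkAPIPresent() only
 * reports true after one of them has triggered InitDeckLinkAPI(). The simplest presence
 * check is therefore a NULL test on a factory result:
 *
 *     IDeckLinkIterator *iterator = CreateDeckLinkIteratorInstance();
 *     if (iterator == NULL)
 *         ;  // libDeckLinkAPI.so missing or failed to load
 *     else
 *         iterator->Release();
 */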

@@ -0,0 +1,191 @@
/* -LICENSE-START-
** Copyright (c) 2014 Blackmagic Design
**
** Permission is hereby granted, free of charge, to any person or organization
** obtaining a copy of the software and accompanying documentation covered by
** this license (the "Software") to use, reproduce, display, distribute,
** execute, and transmit the Software, and to prepare derivative works of the
** Software, and to permit third-parties to whom the Software is furnished to
** do so, all subject to the following:
**
** The copyright notices in the Software and this entire statement, including
** the above license grant, this restriction and the following disclaimer,
** must be included in all copies of the Software, in whole or in part, and
** all derivative works of the Software, unless such copies or derivative
** works are solely in the form of machine-executable object code generated by
** a source language processor.
**
** THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
** IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
** FITNESS FOR A PARTICULAR PURPOSE, TITLE AND NON-INFRINGEMENT. IN NO EVENT
** SHALL THE COPYRIGHT HOLDERS OR ANYONE DISTRIBUTING THE SOFTWARE BE LIABLE
** FOR ANY DAMAGES OR OTHER LIABILITY, WHETHER IN CONTRACT, TORT OR OTHERWISE,
** ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
** DEALINGS IN THE SOFTWARE.
** -LICENSE-END-
*/
#ifndef BMD_DECKLINKAPIMODES_H
#define BMD_DECKLINKAPIMODES_H
#ifndef BMD_CONST
#if defined(_MSC_VER)
#define BMD_CONST __declspec(selectany) static const
#else
#define BMD_CONST static const
#endif
#endif
// Type Declarations
// Interface ID Declarations
BMD_CONST REFIID IID_IDeckLinkDisplayModeIterator = /* 9C88499F-F601-4021-B80B-032E4EB41C35 */ {0x9C,0x88,0x49,0x9F,0xF6,0x01,0x40,0x21,0xB8,0x0B,0x03,0x2E,0x4E,0xB4,0x1C,0x35};
BMD_CONST REFIID IID_IDeckLinkDisplayMode = /* 3EB2C1AB-0A3D-4523-A3AD-F40D7FB14E78 */ {0x3E,0xB2,0xC1,0xAB,0x0A,0x3D,0x45,0x23,0xA3,0xAD,0xF4,0x0D,0x7F,0xB1,0x4E,0x78};
/* Enum BMDDisplayMode - Video display modes */
typedef uint32_t BMDDisplayMode;
enum _BMDDisplayMode {
/* SD Modes */
bmdModeNTSC = /* 'ntsc' */ 0x6E747363,
bmdModeNTSC2398 = /* 'nt23' */ 0x6E743233, // 3:2 pulldown
bmdModePAL = /* 'pal ' */ 0x70616C20,
bmdModeNTSCp = /* 'ntsp' */ 0x6E747370,
bmdModePALp = /* 'palp' */ 0x70616C70,
/* HD 1080 Modes */
bmdModeHD1080p2398 = /* '23ps' */ 0x32337073,
bmdModeHD1080p24 = /* '24ps' */ 0x32347073,
bmdModeHD1080p25 = /* 'Hp25' */ 0x48703235,
bmdModeHD1080p2997 = /* 'Hp29' */ 0x48703239,
bmdModeHD1080p30 = /* 'Hp30' */ 0x48703330,
bmdModeHD1080i50 = /* 'Hi50' */ 0x48693530,
bmdModeHD1080i5994 = /* 'Hi59' */ 0x48693539,
bmdModeHD1080i6000 = /* 'Hi60' */ 0x48693630, // N.B. This _really_ is 60.00 Hz.
bmdModeHD1080p50 = /* 'Hp50' */ 0x48703530,
bmdModeHD1080p5994 = /* 'Hp59' */ 0x48703539,
bmdModeHD1080p6000 = /* 'Hp60' */ 0x48703630, // N.B. This _really_ is 60.00 Hz.
/* HD 720 Modes */
bmdModeHD720p50 = /* 'hp50' */ 0x68703530,
bmdModeHD720p5994 = /* 'hp59' */ 0x68703539,
bmdModeHD720p60 = /* 'hp60' */ 0x68703630,
/* 2k Modes */
bmdMode2k2398 = /* '2k23' */ 0x326B3233,
bmdMode2k24 = /* '2k24' */ 0x326B3234,
bmdMode2k25 = /* '2k25' */ 0x326B3235,
/* DCI Modes (output only) */
bmdMode2kDCI2398 = /* '2d23' */ 0x32643233,
bmdMode2kDCI24 = /* '2d24' */ 0x32643234,
bmdMode2kDCI25 = /* '2d25' */ 0x32643235,
/* 4k Modes */
bmdMode4K2160p2398 = /* '4k23' */ 0x346B3233,
bmdMode4K2160p24 = /* '4k24' */ 0x346B3234,
bmdMode4K2160p25 = /* '4k25' */ 0x346B3235,
bmdMode4K2160p2997 = /* '4k29' */ 0x346B3239,
bmdMode4K2160p30 = /* '4k30' */ 0x346B3330,
bmdMode4K2160p50 = /* '4k50' */ 0x346B3530,
bmdMode4K2160p5994 = /* '4k59' */ 0x346B3539,
bmdMode4K2160p60 = /* '4k60' */ 0x346B3630,
/* DCI Modes (output only) */
bmdMode4kDCI2398 = /* '4d23' */ 0x34643233,
bmdMode4kDCI24 = /* '4d24' */ 0x34643234,
bmdMode4kDCI25 = /* '4d25' */ 0x34643235,
/* Special Modes */
bmdModeUnknown = /* 'iunk' */ 0x69756E6B
};
/* Enum BMDFieldDominance - Video field dominance */
typedef uint32_t BMDFieldDominance;
enum _BMDFieldDominance {
bmdUnknownFieldDominance = 0,
bmdLowerFieldFirst = /* 'lowr' */ 0x6C6F7772,
bmdUpperFieldFirst = /* 'uppr' */ 0x75707072,
bmdProgressiveFrame = /* 'prog' */ 0x70726F67,
bmdProgressiveSegmentedFrame = /* 'psf ' */ 0x70736620
};
/* Enum BMDPixelFormat - Video pixel formats supported for output/input */
typedef uint32_t BMDPixelFormat;
enum _BMDPixelFormat {
bmdFormat8BitYUV = /* '2vuy' */ 0x32767579,
bmdFormat10BitYUV = /* 'v210' */ 0x76323130,
bmdFormat8BitARGB = 32,
bmdFormat8BitBGRA = /* 'BGRA' */ 0x42475241,
bmdFormat10BitRGB = /* 'r210' */ 0x72323130, // Big-endian RGB 10-bit per component with SMPTE video levels (64-960). Packed as 2:10:10:10
bmdFormat12BitRGB = /* 'R12B' */ 0x52313242, // Big-endian RGB 12-bit per component with full range (0-4095). Packed as 12-bit per component
bmdFormat12BitRGBLE = /* 'R12L' */ 0x5231324C, // Little-endian RGB 12-bit per component with full range (0-4095). Packed as 12-bit per component
bmdFormat10BitRGBXLE = /* 'R10l' */ 0x5231306C, // Little-endian 10-bit RGB with SMPTE video levels (64-940)
bmdFormat10BitRGBX = /* 'R10b' */ 0x52313062 // Big-endian 10-bit RGB with SMPTE video levels (64-940)
};
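/* Note (illustrative, not part of the SDK header): row pitch depends on the pixel
 * format. '2vuy' (bmdFormat8BitYUV) packs two bytes per pixel, while 'v210'
 * (bmdFormat10BitYUV) packs 6 pixels into 16 bytes and pads rows to 48-pixel groups,
 * so a common sizing when allocating frames is:
 *
 *     int32_t rowBytes8BitYUV  = width * 2;
 *     int32_t rowBytes10BitYUV = ((width + 47) / 48) * 128;
 */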
/* Enum BMDDisplayModeFlags - Flags to describe the characteristics of an IDeckLinkDisplayMode. */
typedef uint32_t BMDDisplayModeFlags;
enum _BMDDisplayModeFlags {
bmdDisplayModeSupports3D = 1 << 0,
bmdDisplayModeColorspaceRec601 = 1 << 1,
bmdDisplayModeColorspaceRec709 = 1 << 2
};
// Forward Declarations
class IDeckLinkDisplayModeIterator;
class IDeckLinkDisplayMode;
/* Interface IDeckLinkDisplayModeIterator - enumerates over supported input/output display modes. */
class IDeckLinkDisplayModeIterator : public IUnknown
{
public:
virtual HRESULT Next (/* out */ IDeckLinkDisplayMode **deckLinkDisplayMode) = 0;
protected:
virtual ~IDeckLinkDisplayModeIterator () {} // call Release method to drop reference count
};
/* Interface IDeckLinkDisplayMode - represents a display mode */
class IDeckLinkDisplayMode : public IUnknown
{
public:
virtual HRESULT GetName (/* out */ const char **name) = 0;
virtual BMDDisplayMode GetDisplayMode (void) = 0;
virtual long GetWidth (void) = 0;
virtual long GetHeight (void) = 0;
virtual HRESULT GetFrameRate (/* out */ BMDTimeValue *frameDuration, /* out */ BMDTimeScale *timeScale) = 0;
virtual BMDFieldDominance GetFieldDominance (void) = 0;
virtual BMDDisplayModeFlags GetFlags (void) = 0;
protected:
virtual ~IDeckLinkDisplayMode () {} // call Release method to drop reference count
};
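/* Usage sketch (illustrative, not part of the SDK header): listing the display modes a
 * device offers for output. GetDisplayModeIterator() is declared on IDeckLinkOutput and
 * IDeckLinkInput in DeckLinkAPI.h; requires <stdio.h>. */
static void PrintOutputModes (IDeckLinkOutput *output)
{
	IDeckLinkDisplayModeIterator *iterator = NULL;
	if (output->GetDisplayModeIterator(&iterator) != S_OK)
		return;
	IDeckLinkDisplayMode *mode = NULL;
	while (iterator->Next(&mode) == S_OK)
	{
		const char *name = NULL;
		BMDTimeValue frameDuration = 0;
		BMDTimeScale timeScale = 0;
		mode->GetName(&name);
		mode->GetFrameRate(&frameDuration, &timeScale);
		printf("%s: %ldx%ld @ %g fps\n", name ? name : "?",
		       mode->GetWidth(), mode->GetHeight(),
		       frameDuration ? (double)timeScale / (double)frameDuration : 0.0);
		mode->Release();
	}
	iterator->Release();
}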
/* Functions */
extern "C" {
}
#endif /* defined(BMD_DECKLINKAPIMODES_H) */

@@ -0,0 +1,110 @@
/* -LICENSE-START-
** Copyright (c) 2014 Blackmagic Design
**
** Permission is hereby granted, free of charge, to any person or organization
** obtaining a copy of the software and accompanying documentation covered by
** this license (the "Software") to use, reproduce, display, distribute,
** execute, and transmit the Software, and to prepare derivative works of the
** Software, and to permit third-parties to whom the Software is furnished to
** do so, all subject to the following:
**
** The copyright notices in the Software and this entire statement, including
** the above license grant, this restriction and the following disclaimer,
** must be included in all copies of the Software, in whole or in part, and
** all derivative works of the Software, unless such copies or derivative
** works are solely in the form of machine-executable object code generated by
** a source language processor.
**
** THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
** IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
** FITNESS FOR A PARTICULAR PURPOSE, TITLE AND NON-INFRINGEMENT. IN NO EVENT
** SHALL THE COPYRIGHT HOLDERS OR ANYONE DISTRIBUTING THE SOFTWARE BE LIABLE
** FOR ANY DAMAGES OR OTHER LIABILITY, WHETHER IN CONTRACT, TORT OR OTHERWISE,
** ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
** DEALINGS IN THE SOFTWARE.
** -LICENSE-END-
*/
#ifndef BMD_DECKLINKAPITYPES_H
#define BMD_DECKLINKAPITYPES_H
#ifndef BMD_CONST
#if defined(_MSC_VER)
#define BMD_CONST __declspec(selectany) static const
#else
#define BMD_CONST static const
#endif
#endif
// Type Declarations
typedef int64_t BMDTimeValue;
typedef int64_t BMDTimeScale;
typedef uint32_t BMDTimecodeBCD;
typedef uint32_t BMDTimecodeUserBits;
// Interface ID Declarations
BMD_CONST REFIID IID_IDeckLinkTimecode = /* BC6CFBD3-8317-4325-AC1C-1216391E9340 */ {0xBC,0x6C,0xFB,0xD3,0x83,0x17,0x43,0x25,0xAC,0x1C,0x12,0x16,0x39,0x1E,0x93,0x40};
/* Enum BMDTimecodeFlags - Timecode flags */
typedef uint32_t BMDTimecodeFlags;
enum _BMDTimecodeFlags {
bmdTimecodeFlagDefault = 0,
bmdTimecodeIsDropFrame = 1 << 0,
bmdTimecodeFieldMark = 1 << 1
};
/* Enum BMDVideoConnection - Video connection types */
typedef uint32_t BMDVideoConnection;
enum _BMDVideoConnection {
bmdVideoConnectionSDI = 1 << 0,
bmdVideoConnectionHDMI = 1 << 1,
bmdVideoConnectionOpticalSDI = 1 << 2,
bmdVideoConnectionComponent = 1 << 3,
bmdVideoConnectionComposite = 1 << 4,
bmdVideoConnectionSVideo = 1 << 5
};
/* Enum BMDAudioConnection - Audio connection types */
typedef uint32_t BMDAudioConnection;
enum _BMDAudioConnection {
bmdAudioConnectionEmbedded = 1 << 0,
bmdAudioConnectionAESEBU = 1 << 1,
bmdAudioConnectionAnalog = 1 << 2,
bmdAudioConnectionAnalogXLR = 1 << 3,
bmdAudioConnectionAnalogRCA = 1 << 4
};
// Forward Declarations
class IDeckLinkTimecode;
/* Interface IDeckLinkTimecode - Used for video frame timecode representation. */
class IDeckLinkTimecode : public IUnknown
{
public:
virtual BMDTimecodeBCD GetBCD (void) = 0;
virtual HRESULT GetComponents (/* out */ uint8_t *hours, /* out */ uint8_t *minutes, /* out */ uint8_t *seconds, /* out */ uint8_t *frames) = 0;
virtual HRESULT GetString (/* out */ const char **timecode) = 0;
virtual BMDTimecodeFlags GetFlags (void) = 0;
virtual HRESULT GetTimecodeUserBits (/* out */ BMDTimecodeUserBits *userBits) = 0;
protected:
virtual ~IDeckLinkTimecode () {} // call Release method to drop reference count
};
/* Functions */
extern "C" {
}
#endif /* defined(BMD_DECKLINKAPITYPES_H) */
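As a small usage sketch of the timecode interface above (hedged: the timecode pointer would come from a captured frame via the main API header, not from anything in this file):
uint8_t hours, minutes, seconds, frames;
if (timecode->GetComponents(&hours, &minutes, &seconds, &frames) == S_OK) {
	/* drop-frame timecodes are conventionally printed with ';' before the frame count */
	bool dropFrame = (timecode->GetFlags() & bmdTimecodeIsDropFrame) != 0;
	printf("%02u:%02u:%02u%c%02u\n", hours, minutes, seconds, dropFrame ? ';' : ':', frames);
}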


@@ -0,0 +1,37 @@
/* -LICENSE-START-
** Copyright (c) 2014 Blackmagic Design
**
** Permission is hereby granted, free of charge, to any person or organization
** obtaining a copy of the software and accompanying documentation covered by
** this license (the "Software") to use, reproduce, display, distribute,
** execute, and transmit the Software, and to prepare derivative works of the
** Software, and to permit third-parties to whom the Software is furnished to
** do so, all subject to the following:
**
** The copyright notices in the Software and this entire statement, including
** the above license grant, this restriction and the following disclaimer,
** must be included in all copies of the Software, in whole or in part, and
** all derivative works of the Software, unless such copies or derivative
** works are solely in the form of machine-executable object code generated by
** a source language processor.
**
** THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
** IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
** FITNESS FOR A PARTICULAR PURPOSE, TITLE AND NON-INFRINGEMENT. IN NO EVENT
** SHALL THE COPYRIGHT HOLDERS OR ANYONE DISTRIBUTING THE SOFTWARE BE LIABLE
** FOR ANY DAMAGES OR OTHER LIABILITY, WHETHER IN CONTRACT, TORT OR OTHERWISE,
** ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
** DEALINGS IN THE SOFTWARE.
** -LICENSE-END-
*/
/* DeckLinkAPIVersion.h */
#ifndef __DeckLink_API_Version_h__
#define __DeckLink_API_Version_h__
#define BLACKMAGIC_DECKLINK_API_VERSION 0x0a040000
#define BLACKMAGIC_DECKLINK_API_VERSION_STRING "10.4"
#endif // __DeckLink_API_Version_h__


@@ -0,0 +1,100 @@
/* -LICENSE-START-
** Copyright (c) 2009 Blackmagic Design
**
** Permission is hereby granted, free of charge, to any person or organization
** obtaining a copy of the software and accompanying documentation covered by
** this license (the "Software") to use, reproduce, display, distribute,
** execute, and transmit the Software, and to prepare derivative works of the
** Software, and to permit third-parties to whom the Software is furnished to
** do so, all subject to the following:
**
** The copyright notices in the Software and this entire statement, including
** the above license grant, this restriction and the following disclaimer,
** must be included in all copies of the Software, in whole or in part, and
** all derivative works of the Software, unless such copies or derivative
** works are solely in the form of machine-executable object code generated by
** a source language processor.
**
** THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
** IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
** FITNESS FOR A PARTICULAR PURPOSE, TITLE AND NON-INFRINGEMENT. IN NO EVENT
** SHALL THE COPYRIGHT HOLDERS OR ANYONE DISTRIBUTING THE SOFTWARE BE LIABLE
** FOR ANY DAMAGES OR OTHER LIABILITY, WHETHER IN CONTRACT, TORT OR OTHERWISE,
** ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
** DEALINGS IN THE SOFTWARE.
** -LICENSE-END-
*/
#ifndef __LINUX_COM_H_
#define __LINUX_COM_H_
struct REFIID
{
unsigned char byte0;
unsigned char byte1;
unsigned char byte2;
unsigned char byte3;
unsigned char byte4;
unsigned char byte5;
unsigned char byte6;
unsigned char byte7;
unsigned char byte8;
unsigned char byte9;
unsigned char byte10;
unsigned char byte11;
unsigned char byte12;
unsigned char byte13;
unsigned char byte14;
unsigned char byte15;
};
typedef REFIID CFUUIDBytes;
#define CFUUIDGetUUIDBytes(x) x
#define _HRESULT_DEFINED
typedef int HRESULT;
typedef unsigned long ULONG;
typedef void *LPVOID;
#define SUCCEEDED(Status) ((HRESULT)(Status) >= 0)
#define FAILED(Status) ((HRESULT)(Status)<0)
#define IS_ERROR(Status) ((unsigned long)(Status) >> 31 == SEVERITY_ERROR)
#define HRESULT_CODE(hr) ((hr) & 0xFFFF)
#define HRESULT_FACILITY(hr) (((hr) >> 16) & 0x1fff)
#define HRESULT_SEVERITY(hr) (((hr) >> 31) & 0x1)
#define SEVERITY_SUCCESS 0
#define SEVERITY_ERROR 1
#define MAKE_HRESULT(sev,fac,code) ((HRESULT) (((unsigned long)(sev)<<31) | ((unsigned long)(fac)<<16) | ((unsigned long)(code))) )
#define S_OK ((HRESULT)0x00000000L)
#define S_FALSE ((HRESULT)0x00000001L)
#define E_UNEXPECTED ((HRESULT)0x8000FFFFL)
#define E_NOTIMPL ((HRESULT)0x80000001L)
#define E_OUTOFMEMORY ((HRESULT)0x80000002L)
#define E_INVALIDARG ((HRESULT)0x80000003L)
#define E_NOINTERFACE ((HRESULT)0x80000004L)
#define E_POINTER ((HRESULT)0x80000005L)
#define E_HANDLE ((HRESULT)0x80000006L)
#define E_ABORT ((HRESULT)0x80000007L)
#define E_FAIL ((HRESULT)0x80000008L)
#define E_ACCESSDENIED ((HRESULT)0x80000009L)
#define STDMETHODCALLTYPE
#define IID_IUnknown (REFIID){0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xC0,0x00,0x00,0x00,0x00,0x00,0x00,0x46}
#define IUnknownUUID IID_IUnknown
#ifdef __cplusplus
class IUnknown
{
public:
virtual HRESULT STDMETHODCALLTYPE QueryInterface(REFIID iid, LPVOID *ppv) = 0;
virtual ULONG STDMETHODCALLTYPE AddRef(void) = 0;
virtual ULONG STDMETHODCALLTYPE Release(void) = 0;
};
#endif
#endif
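These shims keep the SDK's COM-style error handling usable on Linux; a minimal sketch of the intended checks (the called method is hypothetical, the macros are the ones defined above):
HRESULT hr = someDeckLinkMethod();  /* any DeckLink call returning an HRESULT */
if (FAILED(hr)) {
	fprintf(stderr, "DeckLink call failed: facility %d, code 0x%04x\n",
	        HRESULT_FACILITY(hr), HRESULT_CODE(hr));
}
else if (hr == S_FALSE) {
	/* succeeded, but with a secondary status (e.g. nothing left to enumerate) */
}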

File diff suppressed because it is too large


@@ -0,0 +1,343 @@
/* this ALWAYS GENERATED file contains the IIDs and CLSIDs */
/* link this file in with the server and any clients */
/* File created by MIDL compiler version 8.00.0603 */
/* at Mon Apr 13 20:57:05 2015
*/
/* Compiler settings for ..\..\include\DeckLinkAPI.idl:
Oicf, W1, Zp8, env=Win64 (32b run), target_arch=AMD64 8.00.0603
protocol : dce , ms_ext, c_ext, robust
error checks: allocation ref bounds_check enum stub_data
VC __declspec() decoration level:
__declspec(uuid()), __declspec(selectany), __declspec(novtable)
DECLSPEC_UUID(), MIDL_INTERFACE()
*/
/* @@MIDL_FILE_HEADING( ) */
#pragma warning( disable: 4049 ) /* more than 64k source lines */
#ifdef __cplusplus
extern "C"{
#endif
#include <rpc.h>
#include <rpcndr.h>
#ifdef _MIDL_USE_GUIDDEF_
#ifndef INITGUID
#define INITGUID
#include <guiddef.h>
#undef INITGUID
#else
#include <guiddef.h>
#endif
#define MIDL_DEFINE_GUID(type,name,l,w1,w2,b1,b2,b3,b4,b5,b6,b7,b8) \
DEFINE_GUID(name,l,w1,w2,b1,b2,b3,b4,b5,b6,b7,b8)
#else // !_MIDL_USE_GUIDDEF_
#ifndef __IID_DEFINED__
#define __IID_DEFINED__
typedef struct _IID
{
unsigned long x;
unsigned short s1;
unsigned short s2;
unsigned char c[8];
} IID;
#endif // __IID_DEFINED__
#ifndef CLSID_DEFINED
#define CLSID_DEFINED
typedef IID CLSID;
#endif // CLSID_DEFINED
#define MIDL_DEFINE_GUID(type,name,l,w1,w2,b1,b2,b3,b4,b5,b6,b7,b8) \
const type name = {l,w1,w2,{b1,b2,b3,b4,b5,b6,b7,b8}}
#endif // !_MIDL_USE_GUIDDEF_
MIDL_DEFINE_GUID(IID, LIBID_DeckLinkAPI,0xD864517A,0xEDD5,0x466D,0x86,0x7D,0xC8,0x19,0xF1,0xC0,0x52,0xBB);
MIDL_DEFINE_GUID(IID, IID_IDeckLinkTimecode,0xBC6CFBD3,0x8317,0x4325,0xAC,0x1C,0x12,0x16,0x39,0x1E,0x93,0x40);
MIDL_DEFINE_GUID(IID, IID_IDeckLinkDisplayModeIterator,0x9C88499F,0xF601,0x4021,0xB8,0x0B,0x03,0x2E,0x4E,0xB4,0x1C,0x35);
MIDL_DEFINE_GUID(IID, IID_IDeckLinkDisplayMode,0x3EB2C1AB,0x0A3D,0x4523,0xA3,0xAD,0xF4,0x0D,0x7F,0xB1,0x4E,0x78);
MIDL_DEFINE_GUID(IID, IID_IDeckLink,0xC418FBDD,0x0587,0x48ED,0x8F,0xE5,0x64,0x0F,0x0A,0x14,0xAF,0x91);
MIDL_DEFINE_GUID(IID, IID_IDeckLinkConfiguration,0x1E69FCF6,0x4203,0x4936,0x80,0x76,0x2A,0x9F,0x4C,0xFD,0x50,0xCB);
MIDL_DEFINE_GUID(IID, IID_IDeckLinkDeckControlStatusCallback,0x53436FFB,0xB434,0x4906,0xBA,0xDC,0xAE,0x30,0x60,0xFF,0xE8,0xEF);
MIDL_DEFINE_GUID(IID, IID_IDeckLinkDeckControl,0x8E1C3ACE,0x19C7,0x4E00,0x8B,0x92,0xD8,0x04,0x31,0xD9,0x58,0xBE);
MIDL_DEFINE_GUID(IID, IID_IBMDStreamingDeviceNotificationCallback,0xF9531D64,0x3305,0x4B29,0xA3,0x87,0x7F,0x74,0xBB,0x0D,0x0E,0x84);
MIDL_DEFINE_GUID(IID, IID_IBMDStreamingH264InputCallback,0x823C475F,0x55AE,0x46F9,0x89,0x0C,0x53,0x7C,0xC5,0xCE,0xDC,0xCA);
MIDL_DEFINE_GUID(IID, IID_IBMDStreamingDiscovery,0x2C837444,0xF989,0x4D87,0x90,0x1A,0x47,0xC8,0xA3,0x6D,0x09,0x6D);
MIDL_DEFINE_GUID(IID, IID_IBMDStreamingVideoEncodingMode,0x1AB8035B,0xCD13,0x458D,0xB6,0xDF,0x5E,0x8F,0x7C,0x21,0x41,0xD9);
MIDL_DEFINE_GUID(IID, IID_IBMDStreamingMutableVideoEncodingMode,0x19BF7D90,0x1E0A,0x400D,0xB2,0xC6,0xFF,0xC4,0xE7,0x8A,0xD4,0x9D);
MIDL_DEFINE_GUID(IID, IID_IBMDStreamingVideoEncodingModePresetIterator,0x7AC731A3,0xC950,0x4AD0,0x80,0x4A,0x83,0x77,0xAA,0x51,0xC6,0xC4);
MIDL_DEFINE_GUID(IID, IID_IBMDStreamingDeviceInput,0x24B6B6EC,0x1727,0x44BB,0x98,0x18,0x34,0xFF,0x08,0x6A,0xCF,0x98);
MIDL_DEFINE_GUID(IID, IID_IBMDStreamingH264NALPacket,0xE260E955,0x14BE,0x4395,0x97,0x75,0x9F,0x02,0xCC,0x0A,0x9D,0x89);
MIDL_DEFINE_GUID(IID, IID_IBMDStreamingAudioPacket,0xD9EB5902,0x1AD2,0x43F4,0x9E,0x2C,0x3C,0xFA,0x50,0xB5,0xEE,0x19);
MIDL_DEFINE_GUID(IID, IID_IBMDStreamingMPEG2TSPacket,0x91810D1C,0x4FB3,0x4AAA,0xAE,0x56,0xFA,0x30,0x1D,0x3D,0xFA,0x4C);
MIDL_DEFINE_GUID(IID, IID_IBMDStreamingH264NALParser,0x5867F18C,0x5BFA,0x4CCC,0xB2,0xA7,0x9D,0xFD,0x14,0x04,0x17,0xD2);
MIDL_DEFINE_GUID(CLSID, CLSID_CBMDStreamingDiscovery,0x0CAA31F6,0x8A26,0x40B0,0x86,0xA4,0xBF,0x58,0xDC,0xCA,0x71,0x0C);
MIDL_DEFINE_GUID(CLSID, CLSID_CBMDStreamingH264NALParser,0x7753EFBD,0x951C,0x407C,0x97,0xA5,0x23,0xC7,0x37,0xB7,0x3B,0x52);
MIDL_DEFINE_GUID(IID, IID_IDeckLinkVideoOutputCallback,0x20AA5225,0x1958,0x47CB,0x82,0x0B,0x80,0xA8,0xD5,0x21,0xA6,0xEE);
MIDL_DEFINE_GUID(IID, IID_IDeckLinkInputCallback,0xDD04E5EC,0x7415,0x42AB,0xAE,0x4A,0xE8,0x0C,0x4D,0xFC,0x04,0x4A);
MIDL_DEFINE_GUID(IID, IID_IDeckLinkMemoryAllocator,0xB36EB6E7,0x9D29,0x4AA8,0x92,0xEF,0x84,0x3B,0x87,0xA2,0x89,0xE8);
MIDL_DEFINE_GUID(IID, IID_IDeckLinkAudioOutputCallback,0x403C681B,0x7F46,0x4A12,0xB9,0x93,0x2B,0xB1,0x27,0x08,0x4E,0xE6);
MIDL_DEFINE_GUID(IID, IID_IDeckLinkIterator,0x50FB36CD,0x3063,0x4B73,0xBD,0xBB,0x95,0x80,0x87,0xF2,0xD8,0xBA);
MIDL_DEFINE_GUID(IID, IID_IDeckLinkAPIInformation,0x7BEA3C68,0x730D,0x4322,0xAF,0x34,0x8A,0x71,0x52,0xB5,0x32,0xA4);
MIDL_DEFINE_GUID(IID, IID_IDeckLinkOutput,0xCC5C8A6E,0x3F2F,0x4B3A,0x87,0xEA,0xFD,0x78,0xAF,0x30,0x05,0x64);
MIDL_DEFINE_GUID(IID, IID_IDeckLinkInput,0xAF22762B,0xDFAC,0x4846,0xAA,0x79,0xFA,0x88,0x83,0x56,0x09,0x95);
MIDL_DEFINE_GUID(IID, IID_IDeckLinkVideoFrame,0x3F716FE0,0xF023,0x4111,0xBE,0x5D,0xEF,0x44,0x14,0xC0,0x5B,0x17);
MIDL_DEFINE_GUID(IID, IID_IDeckLinkMutableVideoFrame,0x69E2639F,0x40DA,0x4E19,0xB6,0xF2,0x20,0xAC,0xE8,0x15,0xC3,0x90);
MIDL_DEFINE_GUID(IID, IID_IDeckLinkVideoFrame3DExtensions,0xDA0F7E4A,0xEDC7,0x48A8,0x9C,0xDD,0x2D,0xB5,0x1C,0x72,0x9C,0xD7);
MIDL_DEFINE_GUID(IID, IID_IDeckLinkVideoInputFrame,0x05CFE374,0x537C,0x4094,0x9A,0x57,0x68,0x05,0x25,0x11,0x8F,0x44);
MIDL_DEFINE_GUID(IID, IID_IDeckLinkVideoFrameAncillary,0x732E723C,0xD1A4,0x4E29,0x9E,0x8E,0x4A,0x88,0x79,0x7A,0x00,0x04);
MIDL_DEFINE_GUID(IID, IID_IDeckLinkAudioInputPacket,0xE43D5870,0x2894,0x11DE,0x8C,0x30,0x08,0x00,0x20,0x0C,0x9A,0x66);
MIDL_DEFINE_GUID(IID, IID_IDeckLinkScreenPreviewCallback,0xB1D3F49A,0x85FE,0x4C5D,0x95,0xC8,0x0B,0x5D,0x5D,0xCC,0xD4,0x38);
MIDL_DEFINE_GUID(IID, IID_IDeckLinkGLScreenPreviewHelper,0x504E2209,0xCAC7,0x4C1A,0x9F,0xB4,0xC5,0xBB,0x62,0x74,0xD2,0x2F);
MIDL_DEFINE_GUID(IID, IID_IDeckLinkDX9ScreenPreviewHelper,0x2094B522,0xD1A1,0x40C0,0x9A,0xC7,0x1C,0x01,0x22,0x18,0xEF,0x02);
MIDL_DEFINE_GUID(IID, IID_IDeckLinkNotificationCallback,0xb002a1ec,0x070d,0x4288,0x82,0x89,0xbd,0x5d,0x36,0xe5,0xff,0x0d);
MIDL_DEFINE_GUID(IID, IID_IDeckLinkNotification,0x0a1fb207,0xe215,0x441b,0x9b,0x19,0x6f,0xa1,0x57,0x59,0x46,0xc5);
MIDL_DEFINE_GUID(IID, IID_IDeckLinkAttributes,0xABC11843,0xD966,0x44CB,0x96,0xE2,0xA1,0xCB,0x5D,0x31,0x35,0xC4);
MIDL_DEFINE_GUID(IID, IID_IDeckLinkKeyer,0x89AFCAF5,0x65F8,0x421E,0x98,0xF7,0x96,0xFE,0x5F,0x5B,0xFB,0xA3);
MIDL_DEFINE_GUID(IID, IID_IDeckLinkVideoConversion,0x3BBCB8A2,0xDA2C,0x42D9,0xB5,0xD8,0x88,0x08,0x36,0x44,0xE9,0x9A);
MIDL_DEFINE_GUID(IID, IID_IDeckLinkDeviceNotificationCallback,0x4997053B,0x0ADF,0x4CC8,0xAC,0x70,0x7A,0x50,0xC4,0xBE,0x72,0x8F);
MIDL_DEFINE_GUID(IID, IID_IDeckLinkDiscovery,0xCDBF631C,0xBC76,0x45FA,0xB4,0x4D,0xC5,0x50,0x59,0xBC,0x61,0x01);
MIDL_DEFINE_GUID(CLSID, CLSID_CDeckLinkIterator,0x1F2E109A,0x8F4F,0x49E4,0x92,0x03,0x13,0x55,0x95,0xCB,0x6F,0xA5);
MIDL_DEFINE_GUID(CLSID, CLSID_CDeckLinkAPIInformation,0x263CA19F,0xED09,0x482E,0x9F,0x9D,0x84,0x00,0x57,0x83,0xA2,0x37);
MIDL_DEFINE_GUID(CLSID, CLSID_CDeckLinkGLScreenPreviewHelper,0xF63E77C7,0xB655,0x4A4A,0x9A,0xD0,0x3C,0xA8,0x5D,0x39,0x43,0x43);
MIDL_DEFINE_GUID(CLSID, CLSID_CDeckLinkDX9ScreenPreviewHelper,0xCC010023,0xE01D,0x4525,0x9D,0x59,0x80,0xC8,0xAB,0x3D,0xC7,0xA0);
MIDL_DEFINE_GUID(CLSID, CLSID_CDeckLinkVideoConversion,0x7DBBBB11,0x5B7B,0x467D,0xAE,0xA4,0xCE,0xA4,0x68,0xFD,0x36,0x8C);
MIDL_DEFINE_GUID(CLSID, CLSID_CDeckLinkDiscovery,0x1073A05C,0xD885,0x47E9,0xB3,0xC6,0x12,0x9B,0x3F,0x9F,0x64,0x8B);
MIDL_DEFINE_GUID(IID, IID_IDeckLinkConfiguration_v10_2,0xC679A35B,0x610C,0x4D09,0xB7,0x48,0x1D,0x04,0x78,0x10,0x0F,0xC0);
MIDL_DEFINE_GUID(IID, IID_IDeckLinkOutput_v9_9,0xA3EF0963,0x0862,0x44ED,0x92,0xA9,0xEE,0x89,0xAB,0xF4,0x31,0xC7);
MIDL_DEFINE_GUID(IID, IID_IDeckLinkInput_v9_2,0x6D40EF78,0x28B9,0x4E21,0x99,0x0D,0x95,0xBB,0x77,0x50,0xA0,0x4F);
MIDL_DEFINE_GUID(IID, IID_IDeckLinkDeckControlStatusCallback_v8_1,0xE5F693C1,0x4283,0x4716,0xB1,0x8F,0xC1,0x43,0x15,0x21,0x95,0x5B);
MIDL_DEFINE_GUID(IID, IID_IDeckLinkDeckControl_v8_1,0x522A9E39,0x0F3C,0x4742,0x94,0xEE,0xD8,0x0D,0xE3,0x35,0xDA,0x1D);
MIDL_DEFINE_GUID(IID, IID_IDeckLink_v8_0,0x62BFF75D,0x6569,0x4E55,0x8D,0x4D,0x66,0xAA,0x03,0x82,0x9A,0xBC);
MIDL_DEFINE_GUID(IID, IID_IDeckLinkIterator_v8_0,0x74E936FC,0xCC28,0x4A67,0x81,0xA0,0x1E,0x94,0xE5,0x2D,0x4E,0x69);
MIDL_DEFINE_GUID(CLSID, CLSID_CDeckLinkIterator_v8_0,0xD9EDA3B3,0x2887,0x41FA,0xB7,0x24,0x01,0x7C,0xF1,0xEB,0x1D,0x37);
MIDL_DEFINE_GUID(IID, IID_IDeckLinkDeckControl_v7_9,0xA4D81043,0x0619,0x42B7,0x8E,0xD6,0x60,0x2D,0x29,0x04,0x1D,0xF7);
MIDL_DEFINE_GUID(IID, IID_IDeckLinkDisplayModeIterator_v7_6,0x455D741F,0x1779,0x4800,0x86,0xF5,0x0B,0x5D,0x13,0xD7,0x97,0x51);
MIDL_DEFINE_GUID(IID, IID_IDeckLinkDisplayMode_v7_6,0x87451E84,0x2B7E,0x439E,0xA6,0x29,0x43,0x93,0xEA,0x4A,0x85,0x50);
MIDL_DEFINE_GUID(IID, IID_IDeckLinkOutput_v7_6,0x29228142,0xEB8C,0x4141,0xA6,0x21,0xF7,0x40,0x26,0x45,0x09,0x55);
MIDL_DEFINE_GUID(IID, IID_IDeckLinkInput_v7_6,0x300C135A,0x9F43,0x48E2,0x99,0x06,0x6D,0x79,0x11,0xD9,0x3C,0xF1);
MIDL_DEFINE_GUID(IID, IID_IDeckLinkTimecode_v7_6,0xEFB9BCA6,0xA521,0x44F7,0xBD,0x69,0x23,0x32,0xF2,0x4D,0x9E,0xE6);
MIDL_DEFINE_GUID(IID, IID_IDeckLinkVideoFrame_v7_6,0xA8D8238E,0x6B18,0x4196,0x99,0xE1,0x5A,0xF7,0x17,0xB8,0x3D,0x32);
MIDL_DEFINE_GUID(IID, IID_IDeckLinkMutableVideoFrame_v7_6,0x46FCEE00,0xB4E6,0x43D0,0x91,0xC0,0x02,0x3A,0x7F,0xCE,0xB3,0x4F);
MIDL_DEFINE_GUID(IID, IID_IDeckLinkVideoInputFrame_v7_6,0x9A74FA41,0xAE9F,0x47AC,0x8C,0xF4,0x01,0xF4,0x2D,0xD5,0x99,0x65);
MIDL_DEFINE_GUID(IID, IID_IDeckLinkScreenPreviewCallback_v7_6,0x373F499D,0x4B4D,0x4518,0xAD,0x22,0x63,0x54,0xE5,0xA5,0x82,0x5E);
MIDL_DEFINE_GUID(IID, IID_IDeckLinkGLScreenPreviewHelper_v7_6,0xBA575CD9,0xA15E,0x497B,0xB2,0xC2,0xF9,0xAF,0xE7,0xBE,0x4E,0xBA);
MIDL_DEFINE_GUID(IID, IID_IDeckLinkVideoConversion_v7_6,0x3EB504C9,0xF97D,0x40FE,0xA1,0x58,0xD4,0x07,0xD4,0x8C,0xB5,0x3B);
MIDL_DEFINE_GUID(IID, IID_IDeckLinkConfiguration_v7_6,0xB8EAD569,0xB764,0x47F0,0xA7,0x3F,0xAE,0x40,0xDF,0x6C,0xBF,0x10);
MIDL_DEFINE_GUID(IID, IID_IDeckLinkVideoOutputCallback_v7_6,0xE763A626,0x4A3C,0x49D1,0xBF,0x13,0xE7,0xAD,0x36,0x92,0xAE,0x52);
MIDL_DEFINE_GUID(IID, IID_IDeckLinkInputCallback_v7_6,0x31D28EE7,0x88B6,0x4CB1,0x89,0x7A,0xCD,0xBF,0x79,0xA2,0x64,0x14);
MIDL_DEFINE_GUID(CLSID, CLSID_CDeckLinkGLScreenPreviewHelper_v7_6,0xD398CEE7,0x4434,0x4CA3,0x9B,0xA6,0x5A,0xE3,0x45,0x56,0xB9,0x05);
MIDL_DEFINE_GUID(CLSID, CLSID_CDeckLinkVideoConversion_v7_6,0xFFA84F77,0x73BE,0x4FB7,0xB0,0x3E,0xB5,0xE4,0x4B,0x9F,0x75,0x9B);
MIDL_DEFINE_GUID(IID, IID_IDeckLinkInputCallback_v7_3,0xFD6F311D,0x4D00,0x444B,0x9E,0xD4,0x1F,0x25,0xB5,0x73,0x0A,0xD0);
MIDL_DEFINE_GUID(IID, IID_IDeckLinkOutput_v7_3,0x271C65E3,0xC323,0x4344,0xA3,0x0F,0xD9,0x08,0xBC,0xB2,0x0A,0xA3);
MIDL_DEFINE_GUID(IID, IID_IDeckLinkInput_v7_3,0x4973F012,0x9925,0x458C,0x87,0x1C,0x18,0x77,0x4C,0xDB,0xBE,0xCB);
MIDL_DEFINE_GUID(IID, IID_IDeckLinkVideoInputFrame_v7_3,0xCF317790,0x2894,0x11DE,0x8C,0x30,0x08,0x00,0x20,0x0C,0x9A,0x66);
MIDL_DEFINE_GUID(IID, IID_IDeckLinkDisplayModeIterator_v7_1,0xB28131B6,0x59AC,0x4857,0xB5,0xAC,0xCD,0x75,0xD5,0x88,0x3E,0x2F);
MIDL_DEFINE_GUID(IID, IID_IDeckLinkDisplayMode_v7_1,0xAF0CD6D5,0x8376,0x435E,0x84,0x33,0x54,0xF9,0xDD,0x53,0x0A,0xC3);
MIDL_DEFINE_GUID(IID, IID_IDeckLinkVideoFrame_v7_1,0x333F3A10,0x8C2D,0x43CF,0xB7,0x9D,0x46,0x56,0x0F,0xEE,0xA1,0xCE);
MIDL_DEFINE_GUID(IID, IID_IDeckLinkVideoInputFrame_v7_1,0xC8B41D95,0x8848,0x40EE,0x9B,0x37,0x6E,0x34,0x17,0xFB,0x11,0x4B);
MIDL_DEFINE_GUID(IID, IID_IDeckLinkAudioInputPacket_v7_1,0xC86DE4F6,0xA29F,0x42E3,0xAB,0x3A,0x13,0x63,0xE2,0x9F,0x07,0x88);
MIDL_DEFINE_GUID(IID, IID_IDeckLinkVideoOutputCallback_v7_1,0xEBD01AFA,0xE4B0,0x49C6,0xA0,0x1D,0xED,0xB9,0xD1,0xB5,0x5F,0xD9);
MIDL_DEFINE_GUID(IID, IID_IDeckLinkInputCallback_v7_1,0x7F94F328,0x5ED4,0x4E9F,0x97,0x29,0x76,0xA8,0x6B,0xDC,0x99,0xCC);
MIDL_DEFINE_GUID(IID, IID_IDeckLinkOutput_v7_1,0xAE5B3E9B,0x4E1E,0x4535,0xB6,0xE8,0x48,0x0F,0xF5,0x2F,0x6C,0xE5);
MIDL_DEFINE_GUID(IID, IID_IDeckLinkInput_v7_1,0x2B54EDEF,0x5B32,0x429F,0xBA,0x11,0xBB,0x99,0x05,0x96,0xEA,0xCD);
#undef MIDL_DEFINE_GUID
#ifdef __cplusplus
}
#endif


@@ -0,0 +1,41 @@
# ***** BEGIN GPL LICENSE BLOCK *****
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
#
# The Original Code is Copyright (C) 2015, Blender Foundation
# All rights reserved.
#
# The Original Code is: all of this file.
#
# Contributor(s): Blender Foundation.
#
# ***** END GPL LICENSE BLOCK *****
set(INC
.
# XXX, bad level include!
../../source/blender/blenlib
)
set(INC_SYS
${GLEW_INCLUDE_PATH}
)
set(SRC
dvpapi.cpp
dvpapi.h
)
blender_add_lib(bf_intern_gpudirect "${SRC}" "${INC}" "${INC_SYS}")

intern/gpudirect/dvpapi.cpp Normal file

@@ -0,0 +1,147 @@
/*
* ***** BEGIN GPL LICENSE BLOCK *****
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
* of the License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software Foundation,
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* The Original Code is Copyright (C) 2015, Blender Foundation
* All rights reserved.
*
* The Original Code is: all of this file.
*
* Contributor(s): Blender Foundation.
*
* ***** END GPL LICENSE BLOCK *****
*/
/** \file gpudirect/dvpapi.c
* \ingroup gpudirect
*/
#ifdef WIN32
#include <stdlib.h>
#include "dvpapi.h"
extern "C" {
#include "BLI_dynlib.h"
}
#define KDVPAPI_Name "dvp.dll"
typedef DVPStatus (DVPAPIENTRY * PFNDVPINITGLCONTEXT) (uint32_t flags);
typedef DVPStatus (DVPAPIENTRY * PFNDVPCLOSEGLCONTEXT) (void);
typedef DVPStatus (DVPAPIENTRY * PFNDVPGETLIBRARYVERSION)(uint32_t *major, uint32_t *minor);
static uint32_t __dvpMajorVersion = 0;
static uint32_t __dvpMinorVersion = 0;
static PFNDVPGETLIBRARYVERSION __dvpGetLibrayVersion = NULL;
static PFNDVPINITGLCONTEXT __dvpInitGLContext = NULL;
static PFNDVPCLOSEGLCONTEXT __dvpCloseGLContext = NULL;
PFNDVPBEGIN __dvpBegin = NULL;
PFNDVPEND __dvpEnd = NULL;
PFNDVPCREATEBUFFER __dvpCreateBuffer = NULL;
PFNDVPDESTROYBUFFER __dvpDestroyBuffer = NULL;
PFNDVPFREEBUFFER __dvpFreeBuffer = NULL;
PFNDVPMEMCPYLINED __dvpMemcpyLined = NULL;
PFNDVPMEMCPY __dvpMemcpy = NULL;
PFNDVPIMPORTSYNCOBJECT __dvpImportSyncObject = NULL;
PFNDVPFREESYNCOBJECT __dvpFreeSyncObject = NULL;
PFNDVPMAPBUFFERENDAPI __dvpMapBufferEndAPI = NULL;
PFNDVPMAPBUFFERWAITDVP __dvpMapBufferWaitDVP = NULL;
PFNDVPMAPBUFFERENDDVP __dvpMapBufferEndDVP = NULL;
PFNDVPMAPBUFFERWAITAPI __dvpMapBufferWaitAPI = NULL;
PFNDVPBINDTOGLCTX __dvpBindToGLCtx = NULL;
PFNDVPGETREQUIREDCONSTANTSGLCTX __dvpGetRequiredConstantsGLCtx = NULL;
PFNDVPCREATEGPUTEXTUREGL __dvpCreateGPUTextureGL = NULL;
PFNDVPUNBINDFROMGLCTX __dvpUnbindFromGLCtx = NULL;
static DynamicLibrary *__dvpLibrary = NULL;
DVPStatus dvpGetLibrayVersion(uint32_t *major, uint32_t *minor)
{
if (!__dvpLibrary)
return DVP_STATUS_ERROR;
*major = __dvpMajorVersion;
*minor = __dvpMinorVersion;
return DVP_STATUS_OK;
}
DVPStatus dvpInitGLContext(uint32_t flags)
{
DVPStatus status;
if (!__dvpLibrary) {
__dvpLibrary = BLI_dynlib_open(KDVPAPI_Name);
if (!__dvpLibrary) {
return DVP_STATUS_ERROR;
}
// "?dvpInitGLContext@@YA?AW4DVPStatus@@I@Z";
__dvpInitGLContext = (PFNDVPINITGLCONTEXT)BLI_dynlib_find_symbol(__dvpLibrary, "?dvpInitGLContext@@YA?AW4DVPStatus@@I@Z");
__dvpCloseGLContext = (PFNDVPCLOSEGLCONTEXT)BLI_dynlib_find_symbol(__dvpLibrary, "?dvpCloseGLContext@@YA?AW4DVPStatus@@XZ");
__dvpGetLibrayVersion = (PFNDVPGETLIBRARYVERSION)BLI_dynlib_find_symbol(__dvpLibrary, "?dvpGetLibrayVersion@@YA?AW4DVPStatus@@PEAI0@Z");
__dvpBegin = (PFNDVPBEGIN)BLI_dynlib_find_symbol(__dvpLibrary, "?dvpBegin@@YA?AW4DVPStatus@@XZ");
__dvpEnd = (PFNDVPEND)BLI_dynlib_find_symbol(__dvpLibrary, "?dvpEnd@@YA?AW4DVPStatus@@XZ");
__dvpCreateBuffer = (PFNDVPCREATEBUFFER)BLI_dynlib_find_symbol(__dvpLibrary, "?dvpCreateBuffer@@YA?AW4DVPStatus@@PEAUDVPSysmemBufferDescRec@@PEA_K@Z");
__dvpDestroyBuffer = (PFNDVPDESTROYBUFFER)BLI_dynlib_find_symbol(__dvpLibrary, "?dvpDestroyBuffer@@YA?AW4DVPStatus@@_K@Z");
__dvpFreeBuffer = (PFNDVPFREEBUFFER)BLI_dynlib_find_symbol(__dvpLibrary, "?dvpFreeBuffer@@YA?AW4DVPStatus@@_K@Z");
__dvpMemcpyLined = (PFNDVPMEMCPYLINED)BLI_dynlib_find_symbol(__dvpLibrary, "?dvpMemcpyLined@@YA?AW4DVPStatus@@_K0I000III@Z");
__dvpMemcpy = (PFNDVPMEMCPY)BLI_dynlib_find_symbol(__dvpLibrary, "?dvpMemcpy2D@@YA?AW4DVPStatus@@_K0I000IIIII@Z");
__dvpImportSyncObject = (PFNDVPIMPORTSYNCOBJECT)BLI_dynlib_find_symbol(__dvpLibrary, "?dvpImportSyncObject@@YA?AW4DVPStatus@@PEAUDVPSyncObjectDescRec@@PEA_K@Z");
__dvpFreeSyncObject = (PFNDVPFREESYNCOBJECT)BLI_dynlib_find_symbol(__dvpLibrary, "?dvpFreeSyncObject@@YA?AW4DVPStatus@@_K@Z");
__dvpMapBufferEndAPI = (PFNDVPMAPBUFFERENDAPI)BLI_dynlib_find_symbol(__dvpLibrary, "?dvpMapBufferEndAPI@@YA?AW4DVPStatus@@_K@Z");
__dvpMapBufferWaitDVP = (PFNDVPMAPBUFFERWAITDVP)BLI_dynlib_find_symbol(__dvpLibrary, "?dvpMapBufferWaitDVP@@YA?AW4DVPStatus@@_K@Z");
__dvpMapBufferEndDVP = (PFNDVPMAPBUFFERENDDVP)BLI_dynlib_find_symbol(__dvpLibrary, "?dvpMapBufferEndDVP@@YA?AW4DVPStatus@@_K@Z");
__dvpMapBufferWaitAPI = (PFNDVPMAPBUFFERWAITAPI)BLI_dynlib_find_symbol(__dvpLibrary, "?dvpMapBufferWaitAPI@@YA?AW4DVPStatus@@_K@Z");
__dvpBindToGLCtx = (PFNDVPBINDTOGLCTX)BLI_dynlib_find_symbol(__dvpLibrary, "?dvpBindToGLCtx@@YA?AW4DVPStatus@@_K@Z");
__dvpGetRequiredConstantsGLCtx = (PFNDVPGETREQUIREDCONSTANTSGLCTX)BLI_dynlib_find_symbol(__dvpLibrary, "?dvpGetRequiredConstantsGLCtx@@YA?AW4DVPStatus@@PEAI00000@Z");
__dvpCreateGPUTextureGL = (PFNDVPCREATEGPUTEXTUREGL)BLI_dynlib_find_symbol(__dvpLibrary, "?dvpCreateGPUTextureGL@@YA?AW4DVPStatus@@IPEA_K@Z");
__dvpUnbindFromGLCtx = (PFNDVPUNBINDFROMGLCTX)BLI_dynlib_find_symbol(__dvpLibrary, "?dvpUnbindFromGLCtx@@YA?AW4DVPStatus@@_K@Z");
if (!__dvpInitGLContext ||
!__dvpCloseGLContext ||
!__dvpGetLibrayVersion ||
!__dvpBegin ||
!__dvpEnd ||
!__dvpCreateBuffer ||
!__dvpDestroyBuffer ||
!__dvpFreeBuffer ||
!__dvpMemcpyLined ||
!__dvpMemcpy ||
!__dvpImportSyncObject ||
!__dvpFreeSyncObject ||
!__dvpMapBufferEndAPI ||
!__dvpMapBufferWaitDVP ||
!__dvpMapBufferEndDVP ||
!__dvpMapBufferWaitAPI ||
!__dvpBindToGLCtx ||
!__dvpGetRequiredConstantsGLCtx ||
!__dvpCreateGPUTextureGL ||
!__dvpUnbindFromGLCtx)
{
return DVP_STATUS_ERROR;
}
// check that the library version is what we want
if ((status = __dvpGetLibrayVersion(&__dvpMajorVersion, &__dvpMinorVersion)) != DVP_STATUS_OK)
return status;
if (__dvpMajorVersion != DVP_MAJOR_VERSION || __dvpMinorVersion < DVP_MINOR_VERSION)
return DVP_STATUS_ERROR;
}
return (!__dvpInitGLContext) ? DVP_STATUS_ERROR : __dvpInitGLContext(flags);
}
DVPStatus dvpCloseGLContext(void)
{
return (!__dvpCloseGLContext) ? DVP_STATUS_ERROR : __dvpCloseGLContext();
}
#endif // WIN32
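For context, the calling sequence this loader is meant to support looks roughly like this (a sketch: it assumes a Windows build, an NVIDIA GPUDirect-capable driver with dvp.dll present, and the OpenGL context current at the time of the calls):
#ifdef WIN32
uint32_t major, minor;
if (dvpInitGLContext(DVP_DEVICE_FLAGS_SHARE_APP_CONTEXT) == DVP_STATUS_OK) {
	dvpGetLibrayVersion(&major, &minor);  /* spelling follows the exported symbol */
	/* ... create buffers and copy frames with the dvp* entry points (see dvpapi.h) ... */
	dvpCloseGLContext();
}
#endif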

intern/gpudirect/dvpapi.h Normal file

@@ -0,0 +1,667 @@
/*
* ***** BEGIN GPL LICENSE BLOCK *****
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
* of the License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software Foundation,
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* The Original Code is Copyright (C) 2015, Blender Foundation
* All rights reserved.
*
* The Original Code is: all of this file.
*
* Contributor(s): Blender Foundation.
*
* ***** END GPL LICENSE BLOCK *****
*/
/** \file gpudirect/dvpapi.h
* \ingroup gpudirect
*/
#ifndef __DVPAPI_H__
#define __DVPAPI_H__
#ifdef WIN32
#include <stdlib.h>
#include <stdint.h>
#include "GL/glew.h"
#if defined(__GNUC__) && __GNUC__>=4
# define DVPAPI extern __attribute__ ((visibility("default")))
#elif defined(__SUNPRO_C) || defined(__SUNPRO_CC)
# define DVPAPI extern __global
#else
# define DVPAPI extern
#endif
#define DVPAPIENTRY
#define DVP_MAJOR_VERSION 1
#define DVP_MINOR_VERSION 63
typedef uint64_t DVPBufferHandle;
typedef uint64_t DVPSyncObjectHandle;
typedef enum {
DVP_STATUS_OK = 0,
DVP_STATUS_INVALID_PARAMETER = 1,
DVP_STATUS_UNSUPPORTED = 2,
DVP_STATUS_END_ENUMERATION = 3,
DVP_STATUS_INVALID_DEVICE = 4,
DVP_STATUS_OUT_OF_MEMORY = 5,
DVP_STATUS_INVALID_OPERATION = 6,
DVP_STATUS_TIMEOUT = 7,
DVP_STATUS_INVALID_CONTEXT = 8,
DVP_STATUS_INVALID_RESOURCE_TYPE = 9,
DVP_STATUS_INVALID_FORMAT_OR_TYPE = 10,
DVP_STATUS_DEVICE_UNINITIALIZED = 11,
DVP_STATUS_UNSIGNALED = 12,
DVP_STATUS_SYNC_ERROR = 13,
DVP_STATUS_SYNC_STILL_BOUND = 14,
DVP_STATUS_ERROR = -1,
} DVPStatus;
// Pixel component formats stored in the system memory buffer
// analogous to those defined in the OpenGL API, except for
// DVP_BUFFER and the DVP_CUDA_* types. DVP_BUFFER provides
// an unspecified format type to allow for general interpretation
// of the bytes at a later stage (in GPU shader). Note that not
// all paths will achieve optimal speeds due to lack of HW support
// for the transformation. The CUDA types are to be used when
// copying to/from a system memory buffer from-to a CUDA array, as the
// CUDA array implies a memory layout that matches the array.
typedef enum {
DVP_BUFFER, // Buffer treated as a raw buffer
// and copied directly into GPU buffer
// without any interpretation of the
// stored bytes.
DVP_DEPTH_COMPONENT,
DVP_RGBA,
DVP_BGRA,
DVP_RED,
DVP_GREEN,
DVP_BLUE,
DVP_ALPHA,
DVP_RGB,
DVP_BGR,
DVP_LUMINANCE,
DVP_LUMINANCE_ALPHA,
DVP_CUDA_1_CHANNEL,
DVP_CUDA_2_CHANNELS,
DVP_CUDA_4_CHANNELS,
DVP_RGBA_INTEGER,
DVP_BGRA_INTEGER,
DVP_RED_INTEGER,
DVP_GREEN_INTEGER,
DVP_BLUE_INTEGER,
DVP_ALPHA_INTEGER,
DVP_RGB_INTEGER,
DVP_BGR_INTEGER,
DVP_LUMINANCE_INTEGER,
DVP_LUMINANCE_ALPHA_INTEGER,
} DVPBufferFormats;
// Possible pixel component storage types for system memory buffers
typedef enum {
DVP_UNSIGNED_BYTE,
DVP_BYTE,
DVP_UNSIGNED_SHORT,
DVP_SHORT,
DVP_UNSIGNED_INT,
DVP_INT,
DVP_FLOAT,
DVP_HALF_FLOAT,
DVP_UNSIGNED_BYTE_3_3_2,
DVP_UNSIGNED_BYTE_2_3_3_REV,
DVP_UNSIGNED_SHORT_5_6_5,
DVP_UNSIGNED_SHORT_5_6_5_REV,
DVP_UNSIGNED_SHORT_4_4_4_4,
DVP_UNSIGNED_SHORT_4_4_4_4_REV,
DVP_UNSIGNED_SHORT_5_5_5_1,
DVP_UNSIGNED_SHORT_1_5_5_5_REV,
DVP_UNSIGNED_INT_8_8_8_8,
DVP_UNSIGNED_INT_8_8_8_8_REV,
DVP_UNSIGNED_INT_10_10_10_2,
DVP_UNSIGNED_INT_2_10_10_10_REV,
} DVPBufferTypes;
// System memory descriptor describing the size and storage formats
// of the buffer
typedef struct DVPSysmemBufferDescRec {
uint32_t width; // Buffer Width
uint32_t height; // Buffer Height
uint32_t stride; // Stride
uint32_t size; // Specifies the surface size if
// format == DVP_BUFFER
DVPBufferFormats format; // see enum above
DVPBufferTypes type; // see enum above
void *bufAddr; // Buffer memory address
} DVPSysmemBufferDesc;
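To make the descriptor concrete, a minimal sketch (not part of the header itself) that wraps an already-allocated 8-bit BGRA frame; the address and stride are assumed to satisfy the alignments returned by dvpGetRequiredConstantsGLCtx further down:
DVPSysmemBufferDesc desc;
DVPBufferHandle sysMemBuffer;
desc.width   = 1920;
desc.height  = 1080;
desc.stride  = 1920 * 4;           // bytes per line for 8-bit BGRA
desc.size    = 0;                  // only meaningful when format == DVP_BUFFER
desc.format  = DVP_BGRA;
desc.type    = DVP_UNSIGNED_BYTE;
desc.bufAddr = frameMemory;        // hypothetical, suitably aligned allocation
dvpCreateBuffer(&desc, &sysMemBuffer);   // declared below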
// Flags specified at sync object creation:
// ----------------------------------------
// Tells the implementation to use events wherever
// possible instead of software spin loops. Note if HW
// wait operations are supported by the implementation
// then events will not be used in the dvpMemcpy*
// functions. In such a case, events may still be used
// in dvpSyncObjClientWait* functions.
#define DVP_SYNC_OBJECT_FLAGS_USE_EVENTS 0x00000001
typedef struct DVPSyncObjectDescRec {
uint32_t *sem; // Location to write semaphore value
uint32_t flags; // See above DVP_SYNC_OBJECT_FLAGS_* bits
DVPStatus (*externalClientWaitFunc) (DVPSyncObjectHandle sync,
uint32_t value,
bool GEQ, // If true then the function should wait for the sync value to be
// greater than or equal to the value parameter. Otherwise just a
// straightforward equality comparison should be performed.
uint64_t timeout);
// If non-null, externalClientWaitFunc allows the DVP library
// to call the application to wait for a sync object to be
// released. This allows the application to create events,
// which can be triggered on device interrupts instead of
// using spin loops inside the DVP library. Upon succeeding
// the function must return DVP_STATUS_OK, non-zero for failure
// and DVP_STATUS_TIMEOUT on timeout. The externalClientWaitFunc should
// not alter the current GL or CUDA context state
} DVPSyncObjectDesc;
// Time used when event timeouts should be ignored
#define DVP_TIMEOUT_IGNORED 0xFFFFFFFFFFFFFFFFull
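A matching sketch for the sync object descriptor above (again not part of the header); the semaphore memory is a hypothetical allocation respecting semaphoreAddrAlignment and semaphoreAllocSize from dvpGetRequiredConstantsGLCtx, and the wait callback is left NULL so the library falls back to its internal waiting:
DVPSyncObjectDesc syncDesc;
DVPSyncObjectHandle syncHandle;
syncDesc.sem   = (uint32_t *)semaphoreMemory;       // aligned allocation (assumed)
syncDesc.flags = DVP_SYNC_OBJECT_FLAGS_USE_EVENTS;  // prefer events over spin loops
syncDesc.externalClientWaitFunc = NULL;             // no application-side wait callback
dvpImportSyncObject(&syncDesc, &syncHandle);        // declared below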
typedef DVPStatus (DVPAPIENTRY * PFNDVPBEGIN) (void);
typedef DVPStatus (DVPAPIENTRY * PFNDVPEND) (void);
typedef DVPStatus (DVPAPIENTRY * PFNDVPCREATEBUFFER)(DVPSysmemBufferDesc *desc, DVPBufferHandle *hBuf);
typedef DVPStatus (DVPAPIENTRY * PFNDVPDESTROYBUFFER)(DVPBufferHandle hBuf);
typedef DVPStatus (DVPAPIENTRY * PFNDVPFREEBUFFER)(DVPBufferHandle gpuBufferHandle);
typedef DVPStatus (DVPAPIENTRY * PFNDVPMEMCPYLINED)(DVPBufferHandle srcBuffer,
DVPSyncObjectHandle srcSync,
uint32_t srcAcquireValue,
uint64_t timeout,
DVPBufferHandle dstBuffer,
DVPSyncObjectHandle dstSync,
uint32_t dstReleaseValue,
uint32_t startingLine,
uint32_t numberOfLines);
typedef DVPStatus (DVPAPIENTRY * PFNDVPMEMCPY)(DVPBufferHandle srcBuffer,
DVPSyncObjectHandle srcSync,
uint32_t srcAcquireValue,
uint64_t timeout,
DVPBufferHandle dstBuffer,
DVPSyncObjectHandle dstSync,
uint32_t dstReleaseValue,
uint32_t srcOffset,
uint32_t dstOffset,
uint32_t count);
typedef DVPStatus (DVPAPIENTRY * PFNDVPIMPORTSYNCOBJECT)(DVPSyncObjectDesc *desc,
DVPSyncObjectHandle *syncObject);
typedef DVPStatus (DVPAPIENTRY * PFNDVPFREESYNCOBJECT)(DVPSyncObjectHandle syncObject);
typedef DVPStatus (DVPAPIENTRY * PFNDVPGETREQUIREDCONSTANTSGLCTX)(uint32_t *bufferAddrAlignment,
uint32_t *bufferGPUStrideAlignment,
uint32_t *semaphoreAddrAlignment,
uint32_t *semaphoreAllocSize,
uint32_t *semaphorePayloadOffset,
uint32_t *semaphorePayloadSize);
typedef DVPStatus (DVPAPIENTRY * PFNDVPBINDTOGLCTX)(DVPBufferHandle hBuf);
typedef DVPStatus (DVPAPIENTRY * PFNDVPUNBINDFROMGLCTX)(DVPBufferHandle hBuf);
typedef DVPStatus (DVPAPIENTRY * PFNDVPMAPBUFFERENDAPI)(DVPBufferHandle gpuBufferHandle);
typedef DVPStatus (DVPAPIENTRY * PFNDVPMAPBUFFERWAITDVP)(DVPBufferHandle gpuBufferHandle);
typedef DVPStatus (DVPAPIENTRY * PFNDVPMAPBUFFERENDDVP)(DVPBufferHandle gpuBufferHandle);
typedef DVPStatus (DVPAPIENTRY * PFNDVPMAPBUFFERWAITAPI)(DVPBufferHandle gpuBufferHandle);
typedef DVPStatus (DVPAPIENTRY * PFNDVPCREATEGPUTEXTUREGL)(GLuint texID,
DVPBufferHandle *bufferHandle);
// Flags supplied to the dvpInit* functions:
//
// DVP_DEVICE_FLAGS_SHARE_APP_CONTEXT is only supported for OpenGL
// contexts and is the only supported flag for CUDA. It allows for
// certain cases to be optimized by sharing the context
// of the application for the DVP operations. This removes the
// need to do certain synchronizations. See issue 5 for parallel
// issues. When used, the app's GL context must be current for all calls
// to the DVP library.
#define DVP_DEVICE_FLAGS_SHARE_APP_CONTEXT 0x000000001
//------------------------------------------------------------------------
// Function: dvpInitGLContext
//
// To be called before any DVP resources are allocated.
// This call allows for specification of flags that may
// change the way DVP operations are performed. See above
// for the list of flags.
//
// The OpenGL context must be current at time of call.
//
// Parameters: flags[IN] - Buffer description structure
//
// Returns: DVP_STATUS_OK
// DVP_STATUS_INVALID_PARAMETER
// DVP_STATUS_ERROR
//------------------------------------------------------------------------
extern DVPStatus dvpInitGLContext(uint32_t flags);
//------------------------------------------------------------------------
// Function: dvpCloseGLContext
//
// Function to be called when app closes to allow freeing
// of any DVP library allocated resources.
//
// The OpenGL context must be current at time of call.
//
// Parameters: none
//
// Returns: DVP_STATUS_OK
// DVP_STATUS_INVALID_PARAMETER
// DVP_STATUS_ERROR
//------------------------------------------------------------------------
extern DVPStatus dvpCloseGLContext();
//------------------------------------------------------------------------
// Function: dvpGetLibrayVersion
//
// Description: Returns the current version of the library
//
// Parameters: major[OUT] - returned major version
// minor[OUT] - returned minor version
//
// Returns: DVP_STATUS_OK
//------------------------------------------------------------------------
extern DVPStatus dvpGetLibrayVersion(uint32_t *major, uint32_t *minor);
//------------------------------------------------------------------------
// Function: dvpBegin
//
// Description: dvpBegin must be called before any combination of DVP
// function calls dvpMemCpy*, dvpMapBufferWaitDVP,
// dvpSyncObjClientWait*, and dvpMapBufferEndDVP. After
// the last of these functions has been called, dvpEnd
// must be called. This allows for more efficient batched
// DVP operations.
//
// Parameters: none
//
// Returns: DVP_STATUS_OK
// DVP_STATUS_ERROR
//------------------------------------------------------------------------
#define dvpBegin DVPAPI_GET_FUN(__dvpBegin)
//------------------------------------------------------------------------
// Function: dvpEnd
//
// Description: dvpEnd signals the end of a batch of DVP function calls
// that began with dvpBegin
//
// Parameters: none
//
// Returns: DVP_STATUS_OK
// DVP_STATUS_ERROR
//------------------------------------------------------------------------
#define dvpEnd DVPAPI_GET_FUN(__dvpEnd)
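In practice the bracket looks like this; the copy and map calls that go inside it are documented below, and a fuller per-frame sequence is sketched after dvpMapBufferWaitAPI:
dvpBegin();                       // start a batch of DVP operations
// ... dvpMapBufferWaitDVP / dvpMemcpyLined / dvpMapBufferEndDVP ...
dvpEnd();                         // close the batch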
//------------------------------------------------------------------------
// Function: dvpCreateBuffer
//
// Description: Create a DVP buffer using system memory, wrapping a user
// passed pointer. The pointer must be aligned
// to values returned by dvpGetRequiredAlignments*
//
// Parameters: desc[IN] - Buffer description structure
// hBuf[OUT] - DVP Buffer handle
//
// Returns: DVP_STATUS_OK
// DVP_STATUS_INVALID_PARAMETER
// DVP_STATUS_ERROR
//------------------------------------------------------------------------
#define dvpCreateBuffer DVPAPI_GET_FUN(__dvpCreateBuffer)
//------------------------------------------------------------------------
// Function: dvpDestroyBuffer
//
// Description: Destroy a previously created DVP buffer.
//
// Parameters: hBuf[IN] - DVP Buffer handle
//
// Returns: DVP_STATUS_OK
// DVP_STATUS_INVALID_PARAMETER
// DVP_STATUS_ERROR
//------------------------------------------------------------------------
#define dvpDestroyBuffer DVPAPI_GET_FUN(__dvpDestroyBuffer)
//------------------------------------------------------------------------
// Function: dvpFreeBuffer
//
// Description: dvpFreeBuffer frees the DVP buffer reference
//
// Parameters: gpuBufferHandle[IN] - DVP Buffer handle
//
// Returns: DVP_STATUS_OK
// DVP_STATUS_INVALID_PARAMETER
// DVP_STATUS_ERROR
//------------------------------------------------------------------------
#define dvpFreeBuffer DVPAPI_GET_FUN(__dvpFreeBuffer)
//------------------------------------------------------------------------
// Function: dvpMemcpyLined
//
// Description: dvpMemcpyLined provides buffer copies between a
// DVP sysmem buffer and a graphics API texture (as opposed to
// a buffer type). Other buffer types (such
// as graphics API buffers) return DVP_STATUS_INVALID_PARAMETER.
//
// In addition, see "dvpMemcpy* general comments" above.
//
// Parameters: srcBuffer[IN] - src buffer handle
// srcSync[IN] - sync to acquire on before transfer
// srcAcquireValue[IN] - value to acquire on before transfer
// timeout[IN] - time out value in nanoseconds.
// dstBuffer[IN] - dst buffer handle
// dstSync[IN] - sync to release on transfer completion
// dstReleaseValue[IN] - value to release on completion
// startingLine[IN] - starting line of buffer
// numberOfLines[IN] - number of lines to copy
//
// Returns: DVP_STATUS_OK
// DVP_STATUS_INVALID_PARAMETER
// DVP_STATUS_ERROR
//
// GL state affected: The following GL state may be altered by this
// function (not relevant if no GL source or destination
// is used):
// -GL_PACK_SKIP_ROWS, GL_PACK_SKIP_PIXELS,
// GL_PACK_ROW_LENGTH
// -The buffer bound to GL_PIXEL_PACK_BUFFER
// -The current bound framebuffer (GL_FRAMEBUFFER_EXT)
// -GL_UNPACK_SKIP_ROWS, GL_UNPACK_SKIP_PIXELS,
// GL_UNPACK_ROW_LENGTH
// -The buffer bound to GL_PIXEL_UNPACK_BUFFER
// -The texture bound to GL_TEXTURE_2D
//------------------------------------------------------------------------
#define dvpMemcpyLined DVPAPI_GET_FUN(__dvpMemcpyLined)
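A hedged call sketch matching the parameter list above: copy a whole frame of 'height' lines from a sysmem buffer into a GL texture buffer, acquiring on the source sync object and releasing on the destination one (handles and values are assumptions carried over from the earlier sketches):
dvpMemcpyLined(sysMemBuffer, sysMemSync, acquireValue, DVP_TIMEOUT_IGNORED,
               textureBufferHandle, textureSync, releaseValue,
               0,        // startingLine
               height);  // numberOfLines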
//------------------------------------------------------------------------
// Function: dvpMemcpy
//
// Description: dvpMemcpy provides buffer copies between a
// DVP sysmem buffer and a graphics API pure buffer (as
// opposed to a texture type). Other buffer types (such
// as graphics API textures) return
// DVP_STATUS_INVALID_PARAMETER.
//
// The start address of the srcBuffer is given by srcOffset
// and the dstBuffer start address is given by dstOffset.
//
// In addition, see "dvpMemcpy* general comments" above.
//
// Parameters: srcBuffer[IN] - src buffer handle
// srcSync[IN] - sync to acquire on before transfer
// srcAcquireValue[IN] - value to acquire on before transfer
// timeout[IN] - time out value in nanoseconds.
// dstBuffer[IN] - dst buffer handle
// dstSync[IN] - sync to release on completion
// dstReleaseValue[IN] - value to release on completion
// uint32_t srcOffset[IN] - byte offset of srcBuffer
// uint32_t dstOffset[IN] - byte offset of dstBuffer
// uint32_t count[IN] - number of bytes to copy
//
// Returns: DVP_STATUS_OK
// DVP_STATUS_INVALID_PARAMETER
// DVP_STATUS_ERROR
//
// GL state affected: The following GL state may be altered by this
// function (not relevant if no GL source or destination
// is used):
// - The buffer bound to GL_COPY_WRITE_BUFFER
// - The buffer bound to GL_COPY_READ_BUFFER
//
//------------------------------------------------------------------------
#define dvpMemcpy DVPAPI_GET_FUN(__dvpMemcpy)
//------------------------------------------------------------------------
// Function: dvpImportSyncObject
//
// Description: dvpImportSyncObject creates a DVPSyncObject from the
// DVPSyncObjectDesc. Note that a sync object is not
// supported for copy operations targeting different APIs.
// This means, for example, it is illegal to call dvpMemCpy*
// for source or target GL texture with sync object A and
// then later use that same sync object in dvpMemCpy*
// operation for a source or target CUDA buffer. The same
// semaphore memory can still be used for two different sync
// objects.
//
// Parameters: desc[IN] - data describing the sync object
// syncObject[OUT] - handle to sync object
//
// Returns: DVP_STATUS_OK
// DVP_STATUS_INVALID_PARAMETER
// DVP_STATUS_ERROR
//------------------------------------------------------------------------
#define dvpImportSyncObject DVPAPI_GET_FUN(__dvpImportSyncObject)
//------------------------------------------------------------------------
// Function: dvpFreeSyncObject
//
// Description: dvpFreeSyncObject waits for any outstanding releases on
// this sync object before freeing the resources allocated for
// the specified sync object. The application must make sure
// any outstanding acquire operations have already been
// completed.
//
// If OpenGL is being used and the app's GL context is being
// shared (via the DVP_DEVICE_FLAGS_SHARE_APP_CONTEXT flag),
// then dvpFreeSyncObject needs to be called while each context,
// on which the sync object was used, is current. If
// DVP_DEVICE_FLAGS_SHARE_APP_CONTEXT is used and there are out
// standing contexts from which this sync object must be free'd
// then dvpFreeSyncObject will return DVP_STATUS_SYNC_STILL_BOUND.
//
// Parameters: syncObject[IN] - handle to sync object to be free'd
//
// Returns: DVP_STATUS_OK
// DVP_STATUS_INVALID_PARAMETER
// DVP_STATUS_ERROR
// DVP_STATUS_SYNC_STILL_BOUND
//------------------------------------------------------------------------
#define dvpFreeSyncObject DVPAPI_GET_FUN(__dvpFreeSyncObject)
//------------------------------------------------------------------------
// Function: dvpMapBufferEndAPI
//
// Description: Tells DVP to setup a signal for this buffer in the
// callers API context or device. The signal follows all
// previous API operations up to this point and, thus,
// allows subsequent DVP calls to know when this buffer
// is ready for use within the DVP library. This function
// would be followed by a call to dvpMapBufferWaitDVP to
// synchronize rendering in the API stream and the DVP
// stream.
//
// If OpenGL or CUDA is used, the OpenGL/CUDA context
// must be current at time of call.
//
// The use of dvpMapBufferEndAPI is NOT recommended for
// CUDA synchronisation, as it is more optimal to use an
// application CUDA stream in conjunction with
// dvpMapBufferEndCUDAStream. This allows the driver to
// do optimisations, such as parallelise the copy operations
// and compute.
//
// This must be called outside the dvpBegin/dvpEnd pair. In
// addition, this call is not thread safe and must be called
// from or fenced against the rendering thread associated with
// the context or device.
//
// Parameters: gpuBufferHandle[IN] - buffer to track
//
// Returns: DVP_STATUS_OK
// DVP_STATUS_INVALID_PARAMETER
// DVP_STATUS_ERROR
// DVP_STATUS_UNSIGNALED - returned if the API is
// unable to place a signal in the API context queue
//------------------------------------------------------------------------
#define dvpMapBufferEndAPI DVPAPI_GET_FUN(__dvpMapBufferEndAPI)
//------------------------------------------------------------------------
// Function: dvpMapBufferWaitDVP
//
// Description: Tells DVP to make the DVP stream wait for a previous
// signal triggered by a dvpMapBufferEndAPI call.
//
// This must be called inside the dvpBegin/dvpEnd pair.
//
// Parameters: gpuBufferHandle[IN] - buffer to track
//
// Returns: DVP_STATUS_OK
// DVP_STATUS_INVALID_PARAMETER
// DVP_STATUS_ERROR
//------------------------------------------------------------------------
#define dvpMapBufferWaitDVP DVPAPI_GET_FUN(__dvpMapBufferWaitDVP)
//------------------------------------------------------------------------
// Function: dvpMapBufferEndDVP
//
// Description: Tells DVP to setup a signal for this buffer after
// DVP operations are complete. The signal allows
// the API to know when this buffer is
// ready for use within a API stream. This function would
// be followed by a call to dvpMapBufferWaitAPI to
// synchronize copies in the DVP stream and the API
// rendering stream.
//
// This must be called inside the dvpBegin/dvpEnd pair.
//
// Parameters: gpuBufferHandle[IN] - buffer to track
//
// Returns: DVP_STATUS_OK
// DVP_STATUS_INVALID_PARAMETER
// DVP_STATUS_ERROR
//------------------------------------------------------------------------
#define dvpMapBufferEndDVP DVPAPI_GET_FUN(__dvpMapBufferEndDVP)
//------------------------------------------------------------------------
// Function: dvpMapBufferWaitAPI
//
// Description: Tells DVP to make the current API context or device to
// wait for a previous signal triggered by a
// dvpMapBufferEndDVP call.
//
// The use of dvpMapBufferWaitCUDAStream is NOT recommended for
// CUDA synchronisation, as it is more optimal to use an
// application CUDA stream in conjunction with
// dvpMapBufferEndCUDAStream. This allows the driver to
// do optimisations, such as parallelise the copy operations
// and compute.
//
// If OpenGL or CUDA is used, the OpenGL/CUDA context
// must be current at time of call.
//
// This must be called outside the dvpBegin/dvpEnd pair. In
// addition, this call is not thread safe and must be called
// from or fenced against the rendering thread associated with
// the context or device.
//
// Parameters: gpuBufferHandle[IN] - buffer to track
//
// Returns: DVP_STATUS_OK
// DVP_STATUS_INVALID_PARAMETER
// DVP_STATUS_ERROR
//------------------------------------------------------------------------
#define dvpMapBufferWaitAPI DVPAPI_GET_FUN(__dvpMapBufferWaitAPI)
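Putting the four map/wait calls together, the per-frame ordering described above is roughly as follows (a sketch; error handling omitted, handles as in the earlier sketches):
dvpMapBufferEndAPI(textureBufferHandle);   // outside dvpBegin/dvpEnd: fence the GL work so far
dvpBegin();
dvpMapBufferWaitDVP(textureBufferHandle);  // DVP waits on that fence
dvpMemcpyLined(sysMemBuffer, sysMemSync, acquireValue, DVP_TIMEOUT_IGNORED,
               textureBufferHandle, textureSync, releaseValue, 0, height);
dvpMapBufferEndDVP(textureBufferHandle);   // DVP signals that its work on the buffer is done
dvpEnd();
dvpMapBufferWaitAPI(textureBufferHandle);  // back in GL: wait before sampling the texture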
//------------------------------------------------------------------------
// If the multiple GL contexts used in the application access the same
// sysmem buffers, then application must create those GL contexts with
// display list shared.
//------------------------------------------------------------------------
#define dvpBindToGLCtx DVPAPI_GET_FUN(__dvpBindToGLCtx)
#define dvpGetRequiredConstantsGLCtx DVPAPI_GET_FUN(__dvpGetRequiredConstantsGLCtx)
#define dvpCreateGPUTextureGL DVPAPI_GET_FUN(__dvpCreateGPUTextureGL)
#define dvpUnbindFromGLCtx DVPAPI_GET_FUN(__dvpUnbindFromGLCtx)
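The GL-specific entry points above expose an existing OpenGL texture to DVP; a short sketch (the texture id and context handling are assumptions, not prescribed by the header):
GLuint texId;                                        // an existing GL texture (assumed)
DVPBufferHandle textureBufferHandle;
dvpCreateGPUTextureGL(texId, &textureBufferHandle);  // DVP handle wrapping the GL texture
dvpBindToGLCtx(textureBufferHandle);                 // make it usable from the current GL context
// ... dvpMemcpyLined into textureBufferHandle ...
dvpUnbindFromGLCtx(textureBufferHandle);             // release it from this context when done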
DVPAPI PFNDVPBEGIN __dvpBegin;
DVPAPI PFNDVPEND __dvpEnd;
DVPAPI PFNDVPCREATEBUFFER __dvpCreateBuffer;
DVPAPI PFNDVPDESTROYBUFFER __dvpDestroyBuffer;
DVPAPI PFNDVPFREEBUFFER __dvpFreeBuffer;
DVPAPI PFNDVPMEMCPYLINED __dvpMemcpyLined;
DVPAPI PFNDVPMEMCPY __dvpMemcpy;
DVPAPI PFNDVPIMPORTSYNCOBJECT __dvpImportSyncObject;
DVPAPI PFNDVPFREESYNCOBJECT __dvpFreeSyncObject;
DVPAPI PFNDVPMAPBUFFERENDAPI __dvpMapBufferEndAPI;
DVPAPI PFNDVPMAPBUFFERWAITDVP __dvpMapBufferWaitDVP;
DVPAPI PFNDVPMAPBUFFERENDDVP __dvpMapBufferEndDVP;
DVPAPI PFNDVPMAPBUFFERWAITAPI __dvpMapBufferWaitAPI;
//------------------------------------------------------------------------
// If the multiple GL contexts used in the application access the same
// sysmem buffers, then application must create those GL contexts with
// display list shared.
//------------------------------------------------------------------------
DVPAPI PFNDVPBINDTOGLCTX __dvpBindToGLCtx;
DVPAPI PFNDVPGETREQUIREDCONSTANTSGLCTX __dvpGetRequiredConstantsGLCtx;
DVPAPI PFNDVPCREATEGPUTEXTUREGL __dvpCreateGPUTextureGL;
DVPAPI PFNDVPUNBINDFROMGLCTX __dvpUnbindFromGLCtx;
#define DVPAPI_GET_FUN(x) x
#endif // WIN32
#endif // __DVPAPI_H__


@@ -143,6 +143,16 @@ public:
m_el[3][0] *= x; m_el[3][1] *= y; m_el[3][2] *= z; m_el[3][3] *= w;
}
/**
* Scale the rows of this matrix with x, y, z, w respectively.
*/
void tscale(MT_Scalar x, MT_Scalar y, MT_Scalar z, MT_Scalar w) {
m_el[0][0] *= x; m_el[1][0] *= y; m_el[2][0] *= z; m_el[3][0] *= w;
m_el[0][1] *= x; m_el[1][1] *= y; m_el[2][1] *= z; m_el[3][1] *= w;
m_el[0][2] *= x; m_el[1][2] *= y; m_el[2][2] *= z; m_el[3][2] *= w;
m_el[0][3] *= x; m_el[1][3] *= y; m_el[2][3] *= z; m_el[3][3] *= w;
}
/**
* Return a column-scaled version of this matrix.
*/


@@ -215,6 +215,14 @@ endif()
list(APPEND BLENDER_SORTED_LIBS bf_intern_locale)
endif()
if(WITH_GAMEENGINE_DECKLINK)
list(APPEND BLENDER_SORTED_LIBS bf_intern_decklink)
endif()
if(WIN32)
list(APPEND BLENDER_SORTED_LIBS bf_intern_gpudirect)
endif()
if(WITH_OPENSUBDIV)
list(APPEND BLENDER_SORTED_LIBS bf_intern_opensubdiv)
endif()


@@ -341,6 +341,7 @@ extern "C" void StartKetsjiShell(struct bContext *C, struct ARegion *ar, rcti *c
ketsjiengine->SetUseFixedTime(usefixed);
ketsjiengine->SetTimingDisplay(frameRate, profile, properties);
ketsjiengine->SetRestrictAnimationFPS(restrictAnimFPS);
ketsjiengine->SetRender(true);
KX_KetsjiEngine::SetExitKey(ConvertKeyCode(startscene->gm.exitkey));
//set the global settings (carried over if restart/load new files)


@@ -678,6 +678,7 @@ bool GPG_Application::initEngine(GHOST_IWindow* window, const int stereoMode)
//set the global settings (carried over if restart/load new files)
m_ketsjiengine->SetGlobalSettings(m_globalSettings);
m_ketsjiengine->SetRender(true);
m_engineInitialized = true;
}


@@ -32,6 +32,7 @@
#include "MT_Matrix4x4.h"
#include "MT_Matrix3x3.h"
#include "KX_PyMath.h"
#include "KX_PythonInit.h"
#include "MEM_guardedalloc.h"
#include "RAS_MeshObject.h"
@@ -67,15 +68,16 @@ BL_Uniform::~BL_Uniform()
#endif
}
void BL_Uniform::Apply(class BL_Shader *shader)
bool BL_Uniform::Apply(class BL_Shader *shader)
{
#ifdef SORT_UNIFORMS
RAS_IRasterizer *ras;
MT_assert(mType > UNI_NONE && mType < UNI_MAX && mData);
if (!mDirty) {
return;
}
if (!mDirty)
return false;
mDirty = false;
switch (mType) {
case UNI_FLOAT:
{
@@ -83,6 +85,15 @@ void BL_Uniform::Apply(class BL_Shader *shader)
glUniform1fARB(mLoc, (GLfloat)*f);
break;
}
case UNI_FLOAT_EYE:
{
float *f = (float*)mData;
ras = KX_GetActiveEngine()->GetRasterizer();
*f = (ras->GetEye() == RAS_IRasterizer::RAS_STEREO_LEFTEYE) ? 0.0f : 0.5f;
glUniform1fARB(mLoc, (GLfloat)*f);
mDirty = (ras->Stereo()) ? true : false;
break;
}
case UNI_INT:
{
int *f = (int *)mData;
@@ -138,7 +149,7 @@ void BL_Uniform::Apply(class BL_Shader *shader)
break;
}
}
mDirty = false;
return mDirty;
#endif
}
@@ -274,11 +285,10 @@ void BL_Shader::ApplyShader()
return;
}
for (unsigned int i = 0; i < mUniforms.size(); i++) {
mUniforms[i]->Apply(this);
}
mDirty = false;
for (unsigned int i=0; i<mUniforms.size(); i++) {
mDirty |= mUniforms[i]->Apply(this);
}
#endif
}
@@ -314,64 +324,70 @@ bool BL_Shader::LinkProgram()
return false;
}
// -- vertex shader ------------------
tmpVert = glCreateShaderObjectARB(GL_VERTEX_SHADER_ARB);
glShaderSourceARB(tmpVert, 1, (const char **)&vertProg, 0);
glCompileShaderARB(tmpVert);
glGetObjectParameterivARB(tmpVert, GL_OBJECT_INFO_LOG_LENGTH_ARB, (GLint *)&vertlen);
if (vertProg[0] != 0) {
// -- vertex shader ------------------
tmpVert = glCreateShaderObjectARB(GL_VERTEX_SHADER_ARB);
glShaderSourceARB(tmpVert, 1, (const char**)&vertProg, 0);
glCompileShaderARB(tmpVert);
glGetObjectParameterivARB(tmpVert, GL_OBJECT_INFO_LOG_LENGTH_ARB, (GLint*)&vertlen);
// print info if any
if (vertlen > 0 && vertlen < MAX_LOG_LEN) {
logInf = (char *)MEM_mallocN(vertlen, "vert-log");
glGetInfoLogARB(tmpVert, vertlen, (GLsizei *)&char_len, logInf);
// print info if any
if (vertlen > 0 && vertlen < MAX_LOG_LEN) {
logInf = (char*)MEM_mallocN(vertlen, "vert-log");
glGetInfoLogARB(tmpVert, vertlen, (GLsizei*)&char_len, logInf);
if (char_len > 0) {
spit("---- Vertex Shader Error ----");
spit(logInf);
}
MEM_freeN(logInf);
logInf = 0;
}
// check for compile errors
glGetObjectParameterivARB(tmpVert, GL_OBJECT_COMPILE_STATUS_ARB, (GLint*)&vertstatus);
if (!vertstatus) {
spit("---- Vertex shader failed to compile ----");
goto programError;
}
}
if (char_len > 0) {
spit("---- Vertex Shader Error ----");
spit(logInf);
if (fragProg[0] != 0) {
// -- fragment shader ----------------
tmpFrag = glCreateShaderObjectARB(GL_FRAGMENT_SHADER_ARB);
glShaderSourceARB(tmpFrag, 1, (const char**)&fragProg, 0);
glCompileShaderARB(tmpFrag);
glGetObjectParameterivARB(tmpFrag, GL_OBJECT_INFO_LOG_LENGTH_ARB, (GLint*)&fraglen);
if (fraglen > 0 && fraglen < MAX_LOG_LEN) {
logInf = (char*)MEM_mallocN(fraglen, "frag-log");
glGetInfoLogARB(tmpFrag, fraglen, (GLsizei*)&char_len, logInf);
if (char_len > 0) {
spit("---- Fragment Shader Error ----");
spit(logInf);
}
MEM_freeN(logInf);
logInf = 0;
}
MEM_freeN(logInf);
logInf = 0;
}
// check for compile errors
glGetObjectParameterivARB(tmpVert, GL_OBJECT_COMPILE_STATUS_ARB, (GLint *)&vertstatus);
if (!vertstatus) {
spit("---- Vertex shader failed to compile ----");
goto programError;
}
// -- fragment shader ----------------
tmpFrag = glCreateShaderObjectARB(GL_FRAGMENT_SHADER_ARB);
glShaderSourceARB(tmpFrag, 1, (const char **)&fragProg, 0);
glCompileShaderARB(tmpFrag);
glGetObjectParameterivARB(tmpFrag, GL_OBJECT_INFO_LOG_LENGTH_ARB, (GLint *)&fraglen);
if (fraglen > 0 && fraglen < MAX_LOG_LEN) {
logInf = (char *)MEM_mallocN(fraglen, "frag-log");
glGetInfoLogARB(tmpFrag, fraglen, (GLsizei *)&char_len, logInf);
if (char_len > 0) {
spit("---- Fragment Shader Error ----");
spit(logInf);
glGetObjectParameterivARB(tmpFrag, GL_OBJECT_COMPILE_STATUS_ARB, (GLint*)&fragstatus);
if (!fragstatus) {
spit("---- Fragment shader failed to compile ----");
goto programError;
}
MEM_freeN(logInf);
logInf = 0;
}
glGetObjectParameterivARB(tmpFrag, GL_OBJECT_COMPILE_STATUS_ARB, (GLint *)&fragstatus);
if (!fragstatus) {
spit("---- Fragment shader failed to compile ----");
if (!tmpFrag && !tmpVert) {
spit("---- No shader given ----");
goto programError;
}
// -- program ------------------------
// set compiled vert/frag shader & link
tmpProg = glCreateProgramObjectARB();
glAttachObjectARB(tmpProg, tmpVert);
glAttachObjectARB(tmpProg, tmpFrag);
if (tmpVert) {
glAttachObjectARB(tmpProg, tmpVert);
}
if (tmpFrag) {
glAttachObjectARB(tmpProg, tmpFrag);
}
glLinkProgramARB(tmpProg);
glGetObjectParameterivARB(tmpProg, GL_OBJECT_INFO_LOG_LENGTH_ARB, (GLint *)&proglen);
glGetObjectParameterivARB(tmpProg, GL_OBJECT_LINK_STATUS_ARB, (GLint *)&progstatus);
@@ -396,8 +412,12 @@ bool BL_Shader::LinkProgram()
// set
mShader = tmpProg;
glDeleteObjectARB(tmpVert);
glDeleteObjectARB(tmpFrag);
if (tmpVert) {
glDeleteObjectARB(tmpVert);
}
if (tmpFrag) {
glDeleteObjectARB(tmpFrag);
}
mOk = 1;
mError = 0;
return true;
@@ -748,6 +768,7 @@ PyMethodDef BL_Shader::Methods[] = {
KX_PYMETHODTABLE(BL_Shader, validate),
// access functions
KX_PYMETHODTABLE(BL_Shader, isValid),
KX_PYMETHODTABLE(BL_Shader, setUniformEyef),
KX_PYMETHODTABLE(BL_Shader, setUniform1f),
KX_PYMETHODTABLE(BL_Shader, setUniform2f),
KX_PYMETHODTABLE(BL_Shader, setUniform3f),
@@ -1019,6 +1040,27 @@ KX_PYMETHODDEF_DOC(BL_Shader, setUniform4f, "setUniform4f(name, fx,fy,fz, fw) ")
return NULL;
}
KX_PYMETHODDEF_DOC(BL_Shader, setUniformEyef, "setUniformEyef(name)")
{
if (mError) {
Py_RETURN_NONE;
}
const char *uniform;
float value = 0.0f;
if (PyArg_ParseTuple(args, "s:setUniformEyef", &uniform)) {
int loc = GetUniformLocation(uniform);
if (loc != -1) {
#ifdef SORT_UNIFORMS
SetUniformfv(loc, BL_Uniform::UNI_FLOAT_EYE, &value, sizeof(float));
#else
SetUniform(loc, (int)value);
#endif
}
Py_RETURN_NONE;
}
return NULL;
}
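For orientation, a hedged usage sketch of the new binding from a controller script. The getShader() path, the uniform name "eye" and the GLSL sources are illustrative only; they show how the 0.0/0.5 eye offset can be used to sample a top-bottom 3D frame:
import bge
# illustrative GLSL: offset the V coordinate by the eye uniform (top-bottom 3D frame)
VERT = "void main() { gl_Position = ftransform(); gl_TexCoord[0] = gl_MultiTexCoord0; }"
FRAG = ("uniform sampler2D tex; uniform float eye;"
        "void main() { gl_FragColor = texture2D(tex, vec2(gl_TexCoord[0].x, gl_TexCoord[0].y * 0.5 + eye)); }")
def setup_eye_shader(cont):
    obj = cont.owner
    for mesh in obj.meshes:
        for mat in mesh.materials:
            shader = mat.getShader()
            if shader is not None and not shader.isValid():
                shader.setSource(VERT, FRAG, True)
                shader.setSampler("tex", 0)
                # 0.0 for the left eye, 0.5 for the right eye when the BGE renders in stereo
                shader.setUniformEyef("eye")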
KX_PYMETHODDEF_DOC(BL_Shader, setUniform1i, "setUniform1i(name, ix)")
{
if (mError) {

View File

@@ -64,10 +64,11 @@ public:
UNI_FLOAT4,
UNI_MAT3,
UNI_MAT4,
UNI_FLOAT_EYE,
UNI_MAX
};
void Apply(class BL_Shader *shader);
bool Apply(class BL_Shader *shader);
void SetData(int location, int type, bool transpose = false);
int GetLocation() { return mLoc; }
void *getData() { return mData; }
@@ -226,6 +227,7 @@ public:
KX_PYMETHOD_DOC(BL_Shader, setUniform3i);
KX_PYMETHOD_DOC(BL_Shader, setUniform2i);
KX_PYMETHOD_DOC(BL_Shader, setUniform1i);
KX_PYMETHOD_DOC(BL_Shader, setUniformEyef);
KX_PYMETHOD_DOC(BL_Shader, setUniformfv);
KX_PYMETHOD_DOC(BL_Shader, setUniformiv);
KX_PYMETHOD_DOC(BL_Shader, setUniformMatrix4);

View File

@@ -1600,7 +1600,7 @@ void KX_Dome::RotateCamera(KX_Camera* cam, int i)
MT_Transform camtrans(cam->GetWorldToCamera());
MT_Matrix4x4 viewmat(camtrans);
m_rasterizer->SetViewMatrix(viewmat, cam->NodeGetWorldOrientation(), cam->NodeGetWorldPosition(), cam->GetCameraData()->m_perspective);
m_rasterizer->SetViewMatrix(viewmat, cam->NodeGetWorldOrientation(), cam->NodeGetWorldPosition(), cam->NodeGetLocalScaling(), cam->GetCameraData()->m_perspective);
cam->SetModelviewMatrix(viewmat);
// restore the original orientation
@@ -2035,7 +2035,7 @@ void KX_Dome::RenderDomeFrame(KX_Scene* scene, KX_Camera* cam, int i)
MT_Transform camtrans(cam->GetWorldToCamera());
MT_Matrix4x4 viewmat(camtrans);
m_rasterizer->SetViewMatrix(viewmat, cam->NodeGetWorldOrientation(), cam->NodeGetWorldPosition(), 1.0f);
m_rasterizer->SetViewMatrix(viewmat, cam->NodeGetWorldOrientation(), cam->NodeGetWorldPosition(), cam->NodeGetLocalScaling(), 1.0f);
cam->SetModelviewMatrix(viewmat);
// restore the original orientation

View File

@@ -108,7 +108,7 @@ double KX_KetsjiEngine::m_suspendeddelta = 0.0;
double KX_KetsjiEngine::m_average_framerate = 0.0;
bool KX_KetsjiEngine::m_restrict_anim_fps = false;
short KX_KetsjiEngine::m_exitkey = 130; // ESC Key
bool KX_KetsjiEngine::m_doRender = true;
/**
* Constructor of the Ketsji Engine
@@ -173,6 +173,7 @@ KX_KetsjiEngine::KX_KetsjiEngine(KX_ISystem* system)
m_overrideFrameColorR(0.0f),
m_overrideFrameColorG(0.0f),
m_overrideFrameColorB(0.0f),
m_overrideFrameColorA(0.0f),
m_usedome(false)
{
@@ -381,7 +382,7 @@ void KX_KetsjiEngine::RenderDome()
m_overrideFrameColorR,
m_overrideFrameColorG,
m_overrideFrameColorB,
1.0
m_overrideFrameColorA
);
}
else
@@ -749,6 +750,9 @@ bool KX_KetsjiEngine::NextFrame()
scene->setSuspendedTime(m_clockTime);
m_logger->StartLog(tc_services, m_kxsystem->GetTimeInSeconds(), true);
// invalidates the shadow buffer from previous render/ImageRender because the scene has changed
scene->SetShadowDone(false);
}
// update system devices
@@ -771,7 +775,7 @@ bool KX_KetsjiEngine::NextFrame()
// Start logging time spent outside main loop
m_logger->StartLog(tc_outside, m_kxsystem->GetTimeInSeconds(), true);
return doRender;
return doRender && m_doRender;
}
@@ -805,7 +809,7 @@ void KX_KetsjiEngine::Render()
m_overrideFrameColorR,
m_overrideFrameColorG,
m_overrideFrameColorB,
1.0
m_overrideFrameColorA
);
}
else
@@ -1133,6 +1137,8 @@ void KX_KetsjiEngine::RenderShadowBuffers(KX_Scene *scene)
cam->Release();
}
}
/* remember that we have a valid shadow buffer for that scene */
scene->SetShadowDone(true);
}
// update graphics
@@ -1252,7 +1258,7 @@ void KX_KetsjiEngine::RenderFrame(KX_Scene* scene, KX_Camera* cam)
MT_Transform camtrans(cam->GetWorldToCamera());
MT_Matrix4x4 viewmat(camtrans);
m_rasterizer->SetViewMatrix(viewmat, cam->NodeGetWorldOrientation(), cam->NodeGetWorldPosition(), cam->GetCameraData()->m_perspective);
m_rasterizer->SetViewMatrix(viewmat, cam->NodeGetWorldOrientation(), cam->NodeGetWorldPosition(), cam->NodeGetLocalScaling(), cam->GetCameraData()->m_perspective);
cam->SetModelviewMatrix(viewmat);
// The following actually reschedules all vertices to be
@@ -1925,6 +1931,16 @@ short KX_KetsjiEngine::GetExitKey()
return m_exitkey;
}
void KX_KetsjiEngine::SetRender(bool render)
{
m_doRender = render;
}
bool KX_KetsjiEngine::GetRender()
{
return m_doRender;
}
void KX_KetsjiEngine::SetShowFramerate(bool frameRate)
{
m_show_framerate = frameRate;
@@ -2023,19 +2039,21 @@ bool KX_KetsjiEngine::GetUseOverrideFrameColor(void) const
}
void KX_KetsjiEngine::SetOverrideFrameColor(float r, float g, float b)
void KX_KetsjiEngine::SetOverrideFrameColor(float r, float g, float b, float a)
{
m_overrideFrameColorR = r;
m_overrideFrameColorG = g;
m_overrideFrameColorB = b;
m_overrideFrameColorA = a;
}
void KX_KetsjiEngine::GetOverrideFrameColor(float& r, float& g, float& b) const
void KX_KetsjiEngine::GetOverrideFrameColor(float& r, float& g, float& b, float& a) const
{
r = m_overrideFrameColorR;
g = m_overrideFrameColorG;
b = m_overrideFrameColorB;
a = m_overrideFrameColorA;
}

View File

@@ -129,6 +129,8 @@ private:
static short m_exitkey; /* Key used to exit the BGE */
static bool m_doRender; /* whether or not the scene should be rendered after the logic frame */
int m_exitcode;
STR_String m_exitstring;
@@ -199,6 +201,8 @@ private:
float m_overrideFrameColorG;
/** Blue component of framing bar color. */
float m_overrideFrameColorB;
/** Alpha component of framing bar color. */
float m_overrideFrameColorA;
/** Settings that don't go away with the Game Actuator */
GlobalSettings m_globalsettings;
@@ -209,7 +213,6 @@ private:
void RenderFrame(KX_Scene* scene, KX_Camera* cam);
void PostRenderScene(KX_Scene* scene);
void RenderDebugProperties();
void RenderShadowBuffers(KX_Scene *scene);
public:
KX_KetsjiEngine(class KX_ISystem* system);
@@ -249,6 +252,7 @@ public:
///returns true if an update happened to indicate -> Render
bool NextFrame();
void Render();
void RenderShadowBuffers(KX_Scene *scene);
void StartEngine(bool clearIpo);
void StopEngine();
@@ -400,6 +404,16 @@ public:
static short GetExitKey();
/**
* Activates or deactivates rendering of the scene after the logic frame
* \param render true (render) or false (do not render)
*/
static void SetRender(bool render);
/**
* Get the current render flag value
*/
static bool GetRender();
/**
* Sets the display of the frame rate on or off.
*/
@@ -485,7 +499,7 @@ public:
* \param g Green component of the override color.
* \param b Blue component of the override color.
*/
void SetOverrideFrameColor(float r, float g, float b);
void SetOverrideFrameColor(float r, float g, float b, float a);
/**
* Returns the color used for framing bar color instead of the one in the Blender file's scenes.
@@ -493,7 +507,7 @@ public:
* \param g Green component of the override color.
* \param b Blue component of the override color.
*/
void GetOverrideFrameColor(float& r, float& g, float& b) const;
void GetOverrideFrameColor(float& r, float& g, float& b, float& a) const;
KX_Scene* CreateScene(const STR_String& scenename);
KX_Scene* CreateScene(Scene *scene, bool libloading=false);

View File

@@ -104,6 +104,7 @@ extern "C" {
#include "BL_ArmatureObject.h"
#include "RAS_IRasterizer.h"
#include "RAS_ICanvas.h"
#include "RAS_IOffScreen.h"
#include "RAS_BucketManager.h"
#include "RAS_2DFilterManager.h"
#include "MT_Vector3.h"
@@ -469,6 +470,21 @@ static PyObject *gPyGetExitKey(PyObject *)
return PyLong_FromLong(KX_KetsjiEngine::GetExitKey());
}
static PyObject *gPySetRender(PyObject *, PyObject *args)
{
int render;
if (!PyArg_ParseTuple(args, "i:setRender", &render))
return NULL;
KX_KetsjiEngine::SetRender(render);
Py_RETURN_NONE;
}
static PyObject *gPyGetRender(PyObject *)
{
return PyBool_FromLong(KX_KetsjiEngine::GetRender());
}
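A short, hedged sketch of these two bindings from game logic, assuming they are exposed through the bge.logic module like the rest of the game_methods table:
import bge
# disable rendering while logic and physics keep running (e.g. for an off-screen pass)
bge.logic.setRender(False)
if not bge.logic.getRender():
    print("rendering is currently disabled")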
static PyObject *gPySetMaxLogicFrame(PyObject *, PyObject *args)
{
int frame;
@@ -909,6 +925,8 @@ static struct PyMethodDef game_methods[] = {
{"setAnimRecordFrame", (PyCFunction) gPySetAnimRecordFrame, METH_VARARGS, (const char *)"Sets the current frame number used for animation recording"},
{"getExitKey", (PyCFunction) gPyGetExitKey, METH_NOARGS, (const char *)"Gets the key used to exit the game engine"},
{"setExitKey", (PyCFunction) gPySetExitKey, METH_VARARGS, (const char *)"Sets the key used to exit the game engine"},
{"setRender", (PyCFunction) gPySetRender, METH_VARARGS, (const char *)"Set the global render flag"},
{"getRender", (PyCFunction) gPyGetRender, METH_NOARGS, (const char *)"get the global render flag value"},
{"getUseExternalClock", (PyCFunction) gPyGetUseExternalClock, METH_NOARGS, (const char *)"Get if we use the time provided by an external clock"},
{"setUseExternalClock", (PyCFunction) gPySetUseExternalClock, METH_VARARGS, (const char *)"Set if we use the time provided by an external clock"},
{"getClockTime", (PyCFunction) gPyGetClockTime, METH_NOARGS, (const char *)"Get the last BGE render time. "
@@ -1457,6 +1475,158 @@ static PyObject *gPyGetDisplayDimensions(PyObject *)
return result;
}
/* Python wrapper around RAS_IOffScreen
* Should eventually get its own file
*/
static void PyRASOffScreen__tp_dealloc(PyRASOffScreen *self)
{
if (self->ofs)
delete self->ofs;
Py_TYPE(self)->tp_free((PyObject *)self);
}
PyDoc_STRVAR(py_RASOffScreen_doc,
"RASOffscreen(width, height) -> new GPU Offscreen object"
"initialized to hold a framebuffer object of ``width`` x ``height``.\n"
""
);
PyDoc_STRVAR(RASOffScreen_width_doc, "Offscreen buffer width.\n\n:type: integer");
static PyObject *RASOffScreen_width_get(PyRASOffScreen *self, void *UNUSED(type))
{
return PyLong_FromLong(self->ofs->GetWidth());
}
PyDoc_STRVAR(RASOffScreen_height_doc, "Offscreen buffer height.\n\n:type: GLsizei");
static PyObject *RASOffScreen_height_get(PyRASOffScreen *self, void *UNUSED(type))
{
return PyLong_FromLong(self->ofs->GetHeight());
}
PyDoc_STRVAR(RASOffScreen_color_doc, "Offscreen buffer texture object (if target is RAS_OFS_RENDER_TEXTURE).\n\n:type: GLuint");
static PyObject *RASOffScreen_color_get(PyRASOffScreen *self, void *UNUSED(type))
{
return PyLong_FromLong(self->ofs->GetColor());
}
static PyGetSetDef RASOffScreen_getseters[] = {
{(char *)"width", (getter)RASOffScreen_width_get, (setter)NULL, RASOffScreen_width_doc, NULL},
{(char *)"height", (getter)RASOffScreen_height_get, (setter)NULL, RASOffScreen_height_doc, NULL},
{(char *)"color", (getter)RASOffScreen_color_get, (setter)NULL, RASOffScreen_color_doc, NULL},
{NULL, NULL, NULL, NULL, NULL} /* Sentinel */
};
static int PyRASOffScreen__tp_init(PyRASOffScreen *self, PyObject *args, PyObject *kwargs)
{
int width, height, samples, target;
const char *keywords[] = {"width", "height", "samples", "target", NULL};
samples = 0;
target = RAS_IOffScreen::RAS_OFS_RENDER_BUFFER;
if (!PyArg_ParseTupleAndKeywords(args, kwargs, "ii|ii:RASOffscreen", (char **)keywords, &width, &height, &samples, &target)) {
return -1;
}
if (width <= 0) {
PyErr_SetString(PyExc_ValueError, "negative 'width' given");
return -1;
}
if (height <= 0) {
PyErr_SetString(PyExc_ValueError, "negative 'height' given");
return -1;
}
if (samples < 0) {
PyErr_SetString(PyExc_ValueError, "negative 'samples' given");
return -1;
}
if (target != RAS_IOffScreen::RAS_OFS_RENDER_BUFFER && target != RAS_IOffScreen::RAS_OFS_RENDER_TEXTURE)
{
PyErr_SetString(PyExc_ValueError, "invalid 'target' given, can only be RAS_OFS_RENDER_BUFFER or RAS_OFS_RENDER_TEXTURE");
return -1;
}
if (!gp_Rasterizer)
{
PyErr_SetString(PyExc_SystemError, "no rasterizer");
return -1;
}
self->ofs = gp_Rasterizer->CreateOffScreen(width, height, samples, target);
if (!self->ofs) {
PyErr_SetString(PyExc_SystemError, "creation failed");
return -1;
}
return 0;
}
PyTypeObject PyRASOffScreen_Type = {
PyVarObject_HEAD_INIT(NULL, 0)
"RASOffScreen", /* tp_name */
sizeof(PyRASOffScreen), /* tp_basicsize */
0, /* tp_itemsize */
/* methods */
(destructor)PyRASOffScreen__tp_dealloc, /* tp_dealloc */
NULL, /* tp_print */
NULL, /* tp_getattr */
NULL, /* tp_setattr */
NULL, /* tp_compare */
NULL, /* tp_repr */
NULL, /* tp_as_number */
NULL, /* tp_as_sequence */
NULL, /* tp_as_mapping */
NULL, /* tp_hash */
NULL, /* tp_call */
NULL, /* tp_str */
NULL, /* tp_getattro */
NULL, /* tp_setattro */
NULL, /* tp_as_buffer */
Py_TPFLAGS_DEFAULT, /* tp_flags */
py_RASOffScreen_doc, /* Documentation string */
NULL, /* tp_traverse */
NULL, /* tp_clear */
NULL, /* tp_richcompare */
0, /* tp_weaklistoffset */
NULL, /* tp_iter */
NULL, /* tp_iternext */
NULL, /* tp_methods */
NULL, /* tp_members */
RASOffScreen_getseters, /* tp_getset */
NULL, /* tp_base */
NULL, /* tp_dict */
NULL, /* tp_descr_get */
NULL, /* tp_descr_set */
0, /* tp_dictoffset */
(initproc)PyRASOffScreen__tp_init, /* tp_init */
(allocfunc)PyType_GenericAlloc, /* tp_alloc */
(newfunc)PyType_GenericNew, /* tp_new */
(freefunc)0, /* tp_free */
NULL, /* tp_is_gc */
NULL, /* tp_bases */
NULL, /* tp_mro */
NULL, /* tp_cache */
NULL, /* tp_subclasses */
NULL, /* tp_weaklist */
(destructor) NULL /* tp_del */
};
static PyObject *gPyOffScreenCreate(PyObject *UNUSED(self), PyObject *args)
{
int width;
int height;
int samples;
int target;
samples = 0;
if (!PyArg_ParseTuple(args, "ii|ii:offScreenCreate", &width, &height, &samples, &target))
return NULL;
return PyObject_CallObject((PyObject *) &PyRASOffScreen_Type, args);
}
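A hedged usage sketch of the new binding, assuming it is exposed through bge.render together with the RAS_OFS_* constants registered later in this module:
import bge
# width, height, samples, target; returns an object with width/height/color attributes
ofs = bge.render.offScreenCreate(1280, 720, 4, bge.render.RAS_OFS_RENDER_TEXTURE)
print(ofs.width, ofs.height)
print(ofs.color)  # GL texture name when the target is RAS_OFS_RENDER_TEXTURE, 0 otherwise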
PyDoc_STRVAR(Rasterizer_module_documentation,
"This is the Python API for the game engine of Rasterizer"
);
@@ -1511,6 +1681,7 @@ static struct PyMethodDef rasterizer_methods[] = {
{"showProperties",(PyCFunction) gPyShowProperties, METH_VARARGS, "show or hide the debug properties"},
{"autoDebugList",(PyCFunction) gPyAutoDebugList, METH_VARARGS, "enable or disable auto adding debug properties to the debug list"},
{"clearDebugList",(PyCFunction) gPyClearDebugList, METH_NOARGS, "clears the debug property list"},
{"offScreenCreate", (PyCFunction) gPyOffScreenCreate, METH_VARARGS, "create an offscreen buffer object, arguments are width and height in pixels"},
{ NULL, (PyCFunction) NULL, 0, NULL }
};
@@ -2330,6 +2501,8 @@ PyMODINIT_FUNC initRasterizerPythonBinding()
PyObject *m;
PyObject *d;
PyType_Ready(&PyRASOffScreen_Type);
m = PyModule_Create(&Rasterizer_module_def);
PyDict_SetItemString(PySys_GetObject("modules"), Rasterizer_module_def.m_name, m);
@@ -2357,6 +2530,11 @@ PyMODINIT_FUNC initRasterizerPythonBinding()
KX_MACRO_addTypesToDict(d, LEFT_EYE, RAS_IRasterizer::RAS_STEREO_LEFTEYE);
KX_MACRO_addTypesToDict(d, RIGHT_EYE, RAS_IRasterizer::RAS_STEREO_RIGHTEYE);
/* offscreen render */
KX_MACRO_addTypesToDict(d, RAS_OFS_RENDER_BUFFER, RAS_IOffScreen::RAS_OFS_RENDER_BUFFER);
KX_MACRO_addTypesToDict(d, RAS_OFS_RENDER_TEXTURE, RAS_IOffScreen::RAS_OFS_RENDER_TEXTURE);
// XXXX Add constants here
// Check for errors

View File

@@ -172,6 +172,7 @@ KX_Scene::KX_Scene(class SCA_IInputDevice* keyboarddevice,
m_activity_culling = false;
m_suspend = false;
m_isclearingZbuffer = true;
m_isShadowDone = false;
m_tempObjectList = new CListValue();
m_objectlist = new CListValue();
m_parentlist = new CListValue();

View File

@@ -171,6 +171,11 @@ protected:
*/
bool m_isclearingZbuffer;
/**
* Whether the shadow buffers have already been rendered for the current frame
*/
bool m_isShadowDone;
/**
* The name of the scene
*/
@@ -572,6 +577,8 @@ public:
bool IsSuspended();
bool IsClearingZBuffer();
void EnableZBufferClearing(bool isclearingZbuffer);
bool IsShadowDone() { return m_isShadowDone; }
void SetShadowDone(bool b) { m_isShadowDone = b; }
// use of DBVT tree for camera culling
void SetDbvtCulling(bool b) { m_dbvt_culling = b; }
bool GetDbvtCulling() { return m_dbvt_culling; }

View File

@@ -65,6 +65,8 @@ set(SRC
RAS_IPolygonMaterial.h
RAS_IRasterizer.h
RAS_ILightObject.h
RAS_IOffScreen.h
RAS_ISync.h
RAS_MaterialBucket.h
RAS_MeshObject.h
RAS_ObjectColor.h

View File

@@ -0,0 +1,84 @@
/*
* ***** BEGIN GPL LICENSE BLOCK *****
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
* of the License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software Foundation,
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* The Original Code is Copyright (C) 2015, Blender Foundation
* All rights reserved.
*
* The Original Code is: all of this file.
*
* Contributor(s): Blender Foundation.
*
* ***** END GPL LICENSE BLOCK *****
*/
/** \file RAS_IOffScreen.h
* \ingroup bgerast
*/
#ifndef __RAS_OFFSCREEN_H__
#define __RAS_OFFSCREEN_H__
#include "EXP_Python.h"
class RAS_ICanvas;
class MT_Transform;
struct Image;
class RAS_IOffScreen
{
public:
enum RAS_OFS_BIND_MODE {
RAS_OFS_BIND_RENDER = 0,
RAS_OFS_BIND_READ,
};
enum RAS_OFS_RENDER_TARGET {
RAS_OFS_RENDER_BUFFER = 0, // use render buffer as render target
RAS_OFS_RENDER_TEXTURE, // use texture as render target
};
int m_width;
int m_height;
int m_samples;
int m_color; // if used, holds the texture object, 0 if not used
virtual ~RAS_IOffScreen() {}
virtual bool Create(int width, int height, int samples, RAS_OFS_RENDER_TARGET target) = 0;
virtual void Destroy() = 0;
virtual void Bind(RAS_OFS_BIND_MODE mode) = 0;
virtual void Blit() = 0;
virtual void Unbind() = 0;
virtual void MipMap() = 0;
virtual int GetWidth() { return m_width; }
virtual int GetHeight() { return m_height; }
virtual int GetSamples() { return m_samples; }
virtual int GetColor() { return m_color; }
};
#ifdef WITH_PYTHON
typedef struct {
PyObject_HEAD
RAS_IOffScreen *ofs;
} PyRASOffScreen;
extern PyTypeObject PyRASOffScreen_Type;
#endif
#endif /* __RAS_OFFSCREEN_H__ */

View File

@@ -55,6 +55,8 @@ class RAS_IPolyMaterial;
class RAS_MeshSlot;
class RAS_ILightObject;
class SCA_IScene;
class RAS_IOffScreen;
class RAS_ISync;
typedef vector<unsigned short> KX_IndexArray;
typedef vector<RAS_TexVert> KX_VertexArray;
@@ -257,6 +259,18 @@ public:
virtual void SetFocalLength(const float focallength) = 0;
virtual float GetFocalLength() = 0;
/**
* Create an offscreen render buffer that can be used as target for render.
* For the time being, it is only used in VideoTexture for custom render.
*/
virtual RAS_IOffScreen *CreateOffScreen(int width, int height, int samples, int target) = 0;
/**
* Create a sync object
* For use with offscreen render
*/
virtual RAS_ISync *CreateSync(int type) = 0;
/**
* SwapBuffers swaps the back buffer with the front buffer.
*/
@@ -287,7 +301,7 @@ public:
* Sets the modelview matrix.
*/
virtual void SetViewMatrix(const MT_Matrix4x4 &mat, const MT_Matrix3x3 &ori,
const MT_Point3 &pos, bool perspective) = 0;
const MT_Point3 &pos, const MT_Vector3 &scale, bool perspective) = 0;
/**
*/

View File

@@ -0,0 +1,48 @@
/*
* ***** BEGIN GPL LICENSE BLOCK *****
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
* of the License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software Foundation,
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* The Original Code is Copyright (C) 2015, Blender Foundation
* All rights reserved.
*
* The Original Code is: all of this file.
*
* Contributor(s): Blender Foundation.
*
* ***** END GPL LICENSE BLOCK *****
*/
/** \file RAS_ISync.h
* \ingroup bgerast
*/
#ifndef __RAS_ISYNC_H__
#define __RAS_ISYNC_H__
class RAS_ISync
{
public:
enum RAS_SYNC_TYPE {
RAS_SYNC_TYPE_FENCE = 0,
};
virtual ~RAS_ISync() {}
virtual bool Create(RAS_SYNC_TYPE type) = 0;
virtual void Destroy() = 0;
virtual void Wait() = 0;
};
#endif /* __RAS_ISYNC_H__ */

View File

@@ -51,6 +51,8 @@ set(INC_SYS
set(SRC
RAS_ListRasterizer.cpp
RAS_OpenGLLight.cpp
RAS_OpenGLOffScreen.cpp
RAS_OpenGLSync.cpp
RAS_OpenGLRasterizer.cpp
RAS_StorageVA.cpp
RAS_StorageVBO.cpp
@@ -58,6 +60,8 @@ set(SRC
RAS_IStorage.h
RAS_ListRasterizer.h
RAS_OpenGLLight.h
RAS_OpenGLOffScreen.h
RAS_OpenGLSync.h
RAS_OpenGLRasterizer.h
RAS_StorageVA.h
RAS_StorageVBO.h

View File

@@ -242,7 +242,7 @@ void RAS_OpenGLLight::BindShadowBuffer(RAS_ICanvas *canvas, KX_Camera *cam, MT_T
RAS_IRasterizer::StereoMode stereomode = m_rasterizer->GetStereoMode();
m_rasterizer->SetStereoMode(RAS_IRasterizer::RAS_STEREO_NOSTEREO);
m_rasterizer->SetProjectionMatrix(projectionmat);
m_rasterizer->SetViewMatrix(modelviewmat, cam->NodeGetWorldOrientation(), cam->NodeGetWorldPosition(), cam->GetCameraData()->m_perspective);
m_rasterizer->SetViewMatrix(modelviewmat, cam->NodeGetWorldOrientation(), cam->NodeGetWorldPosition(), cam->NodeGetLocalScaling(), cam->GetCameraData()->m_perspective);
m_rasterizer->SetStereoMode(stereomode);
}

View File

@@ -0,0 +1,347 @@
/*
* ***** BEGIN GPL LICENSE BLOCK *****
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
* of the License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software Foundation,
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* The Original Code is Copyright (C) 2015, Blender Foundation
* All rights reserved.
*
* The Original Code is: all of this file.
*
* Contributor(s): Blender Foundation.
*
* ***** END GPL LICENSE BLOCK *****
*/
#include "glew-mx.h"
#include <stdio.h>
#include "RAS_OpenGLOffScreen.h"
#include "RAS_ICanvas.h"
RAS_OpenGLOffScreen::RAS_OpenGLOffScreen(RAS_ICanvas *canvas)
:m_canvas(canvas), m_depthrb(0), m_colorrb(0), m_depthtx(0), m_colortx(0),
m_fbo(0), m_blitfbo(0), m_blitrbo(0), m_blittex(0), m_target(RAS_OFS_RENDER_BUFFER), m_bound(false)
{
m_width = 0;
m_height = 0;
m_samples = 0;
m_color = 0;
}
RAS_OpenGLOffScreen::~RAS_OpenGLOffScreen()
{
Destroy();
}
bool RAS_OpenGLOffScreen::Create(int width, int height, int samples, RAS_OFS_RENDER_TARGET target)
{
GLenum status;
GLuint glo[2], fbo;
GLint max_samples;
GLenum textarget;
if (m_fbo) {
printf("RAS_OpenGLOffScreen::Create(): buffer exists already, destroy first\n");
return false;
}
if (target != RAS_IOffScreen::RAS_OFS_RENDER_BUFFER &&
target != RAS_IOffScreen::RAS_OFS_RENDER_TEXTURE)
{
printf("RAS_OpenGLOffScreen::Create(): invalid offscren target\n");
return false;
}
if (!GLEW_EXT_framebuffer_object) {
printf("RAS_OpenGLOffScreen::Create(): frame buffer not supported\n");
return false;
}
if (samples) {
if (!GLEW_EXT_framebuffer_multisample ||
!GLEW_EXT_framebuffer_blit)
{
samples = 0;
}
}
if (samples && target == RAS_OFS_RENDER_TEXTURE) {
// we need this in addition if we use multisample textures
if (!GLEW_ARB_texture_multisample ||
!GLEW_EXT_framebuffer_multisample_blit_scaled)
{
samples = 0;
}
}
if (samples) {
max_samples = 0;
glGetIntegerv(GL_MAX_SAMPLES_EXT , &max_samples);
if (samples > max_samples)
samples = max_samples;
}
m_target = target;
fbo = 0;
glGenFramebuffersEXT(1, &fbo);
if (fbo == 0) {
printf("RAS_OpenGLOffScreen::Create(): frame buffer creation failed: %d\n", (int)glGetError());
return false;
}
m_fbo = fbo;
glo[0] = glo[1] = 0;
if (target == RAS_OFS_RENDER_TEXTURE) {
glGenTextures(2, glo);
if (glo[0] == 0 || glo[1] == 0) {
printf("RAS_OpenGLOffScreen::Create(): texture creation failed: %d\n", (int)glGetError());
goto L_ERROR;
}
m_depthtx = glo[0];
m_color = m_colortx = glo[1];
if (samples) {
textarget = GL_TEXTURE_2D_MULTISAMPLE;
glBindTexture(GL_TEXTURE_2D_MULTISAMPLE, m_depthtx);
glTexImage2DMultisample(GL_TEXTURE_2D_MULTISAMPLE, samples, GL_DEPTH_COMPONENT, width, height, true);
glBindTexture(GL_TEXTURE_2D_MULTISAMPLE, m_colortx);
glTexImage2DMultisample(GL_TEXTURE_2D_MULTISAMPLE, samples, GL_RGBA8, width, height, true);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glBindTexture(GL_TEXTURE_2D_MULTISAMPLE, 0);
}
else {
textarget = GL_TEXTURE_2D;
glBindTexture(GL_TEXTURE_2D, m_depthtx);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, width, height, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, NULL);
glBindTexture(GL_TEXTURE_2D, m_colortx);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glBindTexture(GL_TEXTURE_2D, 0);
}
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, m_fbo);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, textarget, m_depthtx, 0);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, textarget, m_colortx, 0);
}
else {
glGenRenderbuffersEXT(2, glo);
if (glo[0] == 0 || glo[1] == 0) {
printf("RAS_OpenGLOffScreen::Create(): render buffer creation failed: %d\n", (int)glGetError());
goto L_ERROR;
}
m_depthrb = glo[0];
m_colorrb = glo[1];
glBindRenderbufferEXT(GL_RENDERBUFFER, m_depthrb);
glRenderbufferStorageMultisampleEXT(GL_RENDERBUFFER, samples, GL_DEPTH_COMPONENT, width, height);
glBindRenderbufferEXT(GL_RENDERBUFFER, m_colorrb);
glRenderbufferStorageMultisampleEXT(GL_RENDERBUFFER, samples, GL_RGBA8, width, height);
glBindRenderbufferEXT(GL_RENDERBUFFER, 0);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, m_fbo);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_RENDERBUFFER, m_depthrb);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_RENDERBUFFER, m_colorrb);
}
status = glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
if (status != GL_FRAMEBUFFER_COMPLETE_EXT) {
printf("RAS_OpenGLOffScreen::Create(): frame buffer incomplete: %d\n", (int)status);
goto L_ERROR;
}
m_width = width;
m_height = height;
if (samples > 0) {
GLuint blit_tex;
GLuint blit_fbo;
// create a secondary FBO to blit to before the pixels can be read
/* write into new single-sample buffer */
glGenFramebuffersEXT(1, &blit_fbo);
if (!blit_fbo) {
printf("RAS_OpenGLOffScreen::Create(): failed creating a FBO for multi-sample offscreen buffer\n");
goto L_ERROR;
}
m_blitfbo = blit_fbo;
blit_tex = 0;
if (target == RAS_OFS_RENDER_TEXTURE) {
glGenTextures(1, &blit_tex);
if (!blit_tex) {
printf("RAS_OpenGLOffScreen::Create(): failed creating a texture for multi-sample offscreen buffer\n");
goto L_ERROR;
}
// m_color is the texture where the final render goes, the blit texture in this case
m_color = m_blittex = blit_tex;
glBindTexture(GL_TEXTURE_2D, m_blittex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glBindTexture(GL_TEXTURE_2D, 0);
glBindFramebufferEXT(GL_DRAW_FRAMEBUFFER_EXT, m_blitfbo);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, m_blittex, 0);
}
else {
/* create render buffer for new 'fbo_blit' */
glGenRenderbuffersEXT(1, &blit_tex);
if (!blit_tex) {
printf("RAS_OpenGLOffScreen::Create(): failed creating a render buffer for multi-sample offscreen buffer\n");
goto L_ERROR;
}
m_blitrbo = blit_tex;
glBindRenderbufferEXT(GL_RENDERBUFFER, m_blitrbo);
glRenderbufferStorageMultisampleEXT(GL_RENDERBUFFER, 0, GL_RGBA8, width, height);
glBindRenderbufferEXT(GL_RENDERBUFFER, 0);
glBindFramebufferEXT(GL_DRAW_FRAMEBUFFER_EXT, m_blitfbo);
glFramebufferRenderbufferEXT(GL_DRAW_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_RENDERBUFFER, m_blitrbo);
}
status = glCheckFramebufferStatus(GL_DRAW_FRAMEBUFFER_EXT);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER_EXT, 0);
if (status != GL_FRAMEBUFFER_COMPLETE) {
printf("RAS_OpenGLOffScreen::Create(): frame buffer for multi-sample offscreen buffer incomplete: %d\n", (int)status);
goto L_ERROR;
}
// remember that multisample is enabled
m_samples = 1;
}
return true;
L_ERROR:
Destroy();
return false;
}
void RAS_OpenGLOffScreen::Destroy()
{
GLuint globj;
Unbind();
if (m_fbo) {
globj = m_fbo;
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, m_fbo);
if (m_target == RAS_OFS_RENDER_TEXTURE) {
GLenum textarget = (m_samples) ? GL_TEXTURE_2D_MULTISAMPLE : GL_TEXTURE_2D;
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, textarget, 0, 0);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, textarget, 0, 0);
}
else {
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_RENDERBUFFER_EXT, 0);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_RENDERBUFFER_EXT, 0);
}
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
glDeleteFramebuffersEXT(1, &globj);
m_fbo = 0;
}
if (m_depthrb) {
globj = m_depthrb;
glDeleteRenderbuffers(1, &globj);
m_depthrb = 0;
}
if (m_colorrb) {
globj = m_colorrb;
glDeleteRenderbuffers(1, &globj);
m_colorrb = 0;
}
if (m_depthtx) {
globj = m_depthtx;
glDeleteTextures(1, &globj);
m_depthtx = 0;
}
if (m_colortx) {
globj = m_colortx;
glDeleteTextures(1, &globj);
m_colortx = 0;
}
if (m_blitfbo) {
globj = m_blitfbo;
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, m_blitfbo);
if (m_target == RAS_OFS_RENDER_TEXTURE) {
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, 0, 0);
}
else {
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_RENDERBUFFER_EXT, 0);
}
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
glDeleteFramebuffersEXT(1, &globj);
m_blitfbo = 0;
}
if (m_blitrbo) {
globj = m_blitrbo;
glDeleteRenderbuffers(1, &globj);
m_blitrbo = 0;
}
if (m_blittex) {
globj = m_blittex;
glDeleteTextures(1, &globj);
m_blittex = 0;
}
m_width = 0;
m_height = 0;
m_samples = 0;
m_color = 0;
m_target = RAS_OFS_RENDER_BUFFER;
}
void RAS_OpenGLOffScreen::Bind(RAS_OFS_BIND_MODE mode)
{
if (m_fbo) {
if (mode == RAS_OFS_BIND_RENDER) {
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, m_fbo);
glReadBuffer(GL_COLOR_ATTACHMENT0_EXT);
glDrawBuffer(GL_COLOR_ATTACHMENT0_EXT);
glViewport(0, 0, m_width, m_height);
glDisable(GL_SCISSOR_TEST);
}
else if (!m_blitfbo) {
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, m_fbo);
glReadBuffer(GL_COLOR_ATTACHMENT0_EXT);
}
else {
glBindFramebufferEXT(GL_READ_FRAMEBUFFER_EXT, m_blitfbo);
glReadBuffer(GL_COLOR_ATTACHMENT0_EXT);
}
m_bound = true;
}
}
void RAS_OpenGLOffScreen::Unbind()
{
if (!m_bound)
return;
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
glEnable(GL_SCISSOR_TEST);
glReadBuffer(GL_BACK);
glDrawBuffer(GL_BACK);
m_bound = false;
}
void RAS_OpenGLOffScreen::MipMap()
{
if (m_color) {
glBindTexture(GL_TEXTURE_2D, m_color);
glGenerateMipmap(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, 0);
}
}
void RAS_OpenGLOffScreen::Blit()
{
if (m_bound && m_blitfbo) {
// set the draw target to the secondary FBO, the read target is still the multisample FBO
glBindFramebufferEXT(GL_DRAW_FRAMEBUFFER, m_blitfbo);
// sample the primary
glBlitFramebufferEXT(0, 0, m_width, m_height, 0, 0, m_width, m_height, GL_COLOR_BUFFER_BIT, GL_NEAREST);
// make sure the next glReadPixels will read from the secondary buffer
glBindFramebufferEXT(GL_READ_FRAMEBUFFER, m_blitfbo);
}
}

View File

@@ -0,0 +1,65 @@
/*
* ***** BEGIN GPL LICENSE BLOCK *****
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
* of the License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software Foundation,
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* The Original Code is Copyright (C) 2015, Blender Foundation
* All rights reserved.
*
* The Original Code is: all of this file.
*
* Contributor(s): Blender Foundation.
*
* ***** END GPL LICENSE BLOCK *****
*/
#ifndef __RAS_OPENGLOFFSCREEN__
#define __RAS_OPENGLOFFSCREEN__
#include "RAS_IOffScreen.h"
#include "GPU_extensions.h"
class RAS_ICanvas;
class RAS_OpenGLOffScreen : public RAS_IOffScreen
{
RAS_ICanvas *m_canvas;
// these are GL objects
unsigned int m_depthrb;
unsigned int m_colorrb;
unsigned int m_depthtx;
unsigned int m_colortx;
unsigned int m_fbo;
unsigned int m_blitfbo;
unsigned int m_blitrbo;
unsigned int m_blittex;
RAS_OFS_RENDER_TARGET m_target;
bool m_bound;
public:
RAS_OpenGLOffScreen(RAS_ICanvas *canvas);
~RAS_OpenGLOffScreen();
bool Create(int width, int height, int samples, RAS_OFS_RENDER_TARGET target);
void Destroy();
void Bind(RAS_OFS_BIND_MODE mode);
void Blit();
void Unbind();
void MipMap();
};
#endif /* __RAS_OPENGLOFFSCREEN__ */

View File

@@ -46,6 +46,8 @@
#include "MT_CmMatrix4x4.h"
#include "RAS_OpenGLLight.h"
#include "RAS_OpenGLOffScreen.h"
#include "RAS_OpenGLSync.h"
#include "RAS_StorageVA.h"
#include "RAS_StorageVBO.h"
@@ -92,6 +94,7 @@ RAS_OpenGLRasterizer::RAS_OpenGLRasterizer(RAS_ICanvas* canvas, RAS_STORAGE_TYPE
m_time(0.0f),
m_campos(0.0f, 0.0f, 0.0f),
m_camortho(false),
m_camnegscale(false),
m_stereomode(RAS_STEREO_NOSTEREO),
m_curreye(RAS_STEREO_LEFTEYE),
m_eyeseparation(0.0f),
@@ -207,7 +210,7 @@ void RAS_OpenGLRasterizer::SetBackColor(float color[3])
m_redback = color[0];
m_greenback = color[1];
m_blueback = color[2];
m_alphaback = 1.0f;
m_alphaback = 0.0f;
}
void RAS_OpenGLRasterizer::SetFog(short type, float start, float dist, float intensity, float color[3])
@@ -600,6 +603,31 @@ float RAS_OpenGLRasterizer::GetFocalLength()
return m_focallength;
}
RAS_IOffScreen *RAS_OpenGLRasterizer::CreateOffScreen(int width, int height, int samples, int target)
{
RAS_IOffScreen *ofs;
ofs = new RAS_OpenGLOffScreen(m_2DCanvas);
if (!ofs->Create(width, height, samples, (RAS_IOffScreen::RAS_OFS_RENDER_TARGET)target)) {
delete ofs;
return NULL;
}
return ofs;
}
RAS_ISync *RAS_OpenGLRasterizer::CreateSync(int type)
{
RAS_ISync *sync;
sync = new RAS_OpenGLSync();
if (!sync->Create((RAS_ISync::RAS_SYNC_TYPE)type)) {
delete sync;
return NULL;
}
return sync;
}
void RAS_OpenGLRasterizer::SwapBuffers()
{
@@ -924,6 +952,7 @@ MT_Matrix4x4 RAS_OpenGLRasterizer::GetOrthoMatrix(
void RAS_OpenGLRasterizer::SetViewMatrix(const MT_Matrix4x4 &mat,
const MT_Matrix3x3 & camOrientMat3x3,
const MT_Point3 & pos,
const MT_Vector3 &scale,
bool perspective)
{
m_viewmatrix = mat;
@@ -966,6 +995,12 @@ void RAS_OpenGLRasterizer::SetViewMatrix(const MT_Matrix4x4 &mat,
}
}
bool negX = (scale[0] < 0.0f);
bool negY = (scale[1] < 0.0f);
bool negZ = (scale[2] < 0.0f);
if (negX || negY || negZ) {
m_viewmatrix.tscale((negX)?-1.0f:1.0f, (negY)?-1.0f:1.0f, (negZ)?-1.0f:1.0f, 1.0);
}
m_viewinvmatrix = m_viewmatrix;
m_viewinvmatrix.invert();
@@ -976,6 +1011,7 @@ void RAS_OpenGLRasterizer::SetViewMatrix(const MT_Matrix4x4 &mat,
glMatrixMode(GL_MODELVIEW);
glLoadMatrixf(glviewmat);
m_campos = pos;
m_camnegscale = negX ^ negY ^ negZ;
}
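The m_camnegscale assignment records the parity of mirrored camera axes: an odd number of negative scale components flips triangle winding, which is why SetFrontFace below inverts the CCW flag. A tiny illustrative sketch of the parity rule (the scale values are hypothetical):
# parity of negative scale axes decides whether winding flips
scale = (-1.0, 1.0, 1.0)                  # a mirrored camera, illustrative values
neg = [component < 0.0 for component in scale]
flip_winding = neg[0] ^ neg[1] ^ neg[2]   # True -> front faces appear clockwise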
@@ -1108,6 +1144,9 @@ void RAS_OpenGLRasterizer::SetAlphaBlend(int alphablend)
void RAS_OpenGLRasterizer::SetFrontFace(bool ccw)
{
if (m_camnegscale)
ccw = !ccw;
if (m_last_frontface == ccw)
return;

View File

@@ -96,6 +96,7 @@ class RAS_OpenGLRasterizer : public RAS_IRasterizer
MT_Matrix4x4 m_viewinvmatrix;
MT_Point3 m_campos;
bool m_camortho;
bool m_camnegscale;
StereoMode m_stereomode;
StereoEye m_curreye;
@@ -180,7 +181,8 @@ public:
virtual float GetEyeSeparation();
virtual void SetFocalLength(const float focallength);
virtual float GetFocalLength();
virtual RAS_IOffScreen *CreateOffScreen(int width, int height, int samples, int target);
virtual RAS_ISync *CreateSync(int type);
virtual void SwapBuffers();
virtual void IndexPrimitives(class RAS_MeshSlot &ms);
@@ -189,7 +191,12 @@ public:
virtual void SetProjectionMatrix(MT_CmMatrix4x4 &mat);
virtual void SetProjectionMatrix(const MT_Matrix4x4 &mat);
virtual void SetViewMatrix(const MT_Matrix4x4 &mat, const MT_Matrix3x3 &ori, const MT_Point3 &pos, bool perspective);
virtual void SetViewMatrix(
const MT_Matrix4x4 &mat,
const MT_Matrix3x3 &ori,
const MT_Point3 &pos,
const MT_Vector3 &scale,
bool perspective);
virtual const MT_Point3& GetCameraPosition();
virtual bool GetCameraOrtho();

View File

@@ -0,0 +1,82 @@
/*
* ***** BEGIN GPL LICENSE BLOCK *****
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
* of the License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software Foundation,
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* The Original Code is Copyright (C) 2015, Blender Foundation
* All rights reserved.
*
* The Original Code is: all of this file.
*
* Contributor(s): Blender Foundation.
*
* ***** END GPL LICENSE BLOCK *****
*/
#include "glew-mx.h"
#include <stdio.h>
#include "RAS_OpenGLSync.h"
RAS_OpenGLSync::RAS_OpenGLSync()
:m_sync(NULL)
{
}
RAS_OpenGLSync::~RAS_OpenGLSync()
{
Destroy();
}
bool RAS_OpenGLSync::Create(RAS_SYNC_TYPE type)
{
if (m_sync) {
printf("RAS_OpenGLSync::Create(): sync already exists, destroy first\n");
return false;
}
if (type != RAS_SYNC_TYPE_FENCE) {
printf("RAS_OpenGLSync::Create(): only RAS_SYNC_TYPE_FENCE are currently supported\n");
return false;
}
if (!GLEW_ARB_sync) {
printf("RAS_OpenGLSync::Create(): ARB_sync extension is needed to create sync object\n");
return false;
}
m_sync = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
if (!m_sync) {
printf("RAS_OpenGLSync::Create(): glFenceSync() failed");
return false;
}
return true;
}
void RAS_OpenGLSync::Destroy()
{
if (m_sync) {
glDeleteSync(m_sync);
m_sync = NULL;
}
}
void RAS_OpenGLSync::Wait()
{
if (m_sync) {
// this is needed to ensure that the sync is in the GPU
glFlush();
// block until the operations have completed
glWaitSync(m_sync, 0, GL_TIMEOUT_IGNORED);
}
}

View File

@@ -0,0 +1,50 @@
/*
* ***** BEGIN GPL LICENSE BLOCK *****
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
* of the License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software Foundation,
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* The Original Code is Copyright (C) 2015, Blender Foundation
* All rights reserved.
*
* The Original Code is: all of this file.
*
* Contributor(s): Blender Foundation.
*
* ***** END GPL LICENSE BLOCK *****
*/
#ifndef __RAS_OPENGLSYNC__
#define __RAS_OPENGLSYNC__
#include "RAS_ISync.h"
struct __GLsync;
class RAS_OpenGLSync : public RAS_ISync
{
private:
struct __GLsync *m_sync;
public:
RAS_OpenGLSync();
~RAS_OpenGLSync();
virtual bool Create(RAS_SYNC_TYPE type);
virtual void Destroy();
virtual void Wait();
};
#endif /* __RAS_OPENGLSYNC__ */

View File

@@ -45,6 +45,9 @@ set(INC
../../../intern/glew-mx
../../../intern/guardedalloc
../../../intern/string
../../../intern/decklink
../../../intern/gpudirect
../../../intern/atomic
)
set(INC_SYS
@@ -68,8 +71,10 @@ set(SRC
ImageViewport.cpp
PyTypeList.cpp
Texture.cpp
DeckLink.cpp
VideoBase.cpp
VideoFFmpeg.cpp
VideoDeckLink.cpp
blendVideoTex.cpp
BlendType.h
@@ -87,8 +92,10 @@ set(SRC
ImageViewport.h
PyTypeList.h
Texture.h
DeckLink.h
VideoBase.h
VideoFFmpeg.h
VideoDeckLink.h
)
if(WITH_CODEC_FFMPEG)
@@ -100,7 +107,13 @@ if(WITH_CODEC_FFMPEG)
remove_strict_flags_file(
VideoFFmpeg.cpp
VideoDeckLink
DeckLink
)
endif()
if(WITH_GAMEENGINE_DECKLINK)
add_definitions(-DWITH_GAMEENGINE_DECKLINK)
endif()
blender_add_lib(ge_videotex "${SRC}" "${INC}" "${INC_SYS}")

View File

@@ -36,7 +36,8 @@
#define NULL 0
#endif
#ifndef HRESULT
#ifndef _HRESULT_DEFINED
#define _HRESULT_DEFINED
#define HRESULT long
#endif

View File

@@ -0,0 +1,813 @@
/*
* ***** BEGIN GPL LICENSE BLOCK *****
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
* of the License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software Foundation,
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* The Original Code is Copyright (C) 2015, Blender Foundation
* All rights reserved.
*
* The Original Code is: all of this file.
*
* Contributor(s): Blender Foundation.
*
* ***** END GPL LICENSE BLOCK *****
*/
/** \file gameengine/VideoTexture/Texture.cpp
* \ingroup bgevideotex
*/
#ifdef WITH_GAMEENGINE_DECKLINK
// implementation
// FFmpeg defines its own version of stdint.h on Windows.
// Decklink needs FFmpeg, so it uses its version of stdint.h
// this is necessary for INT64_C macro
#ifndef __STDC_CONSTANT_MACROS
#define __STDC_CONSTANT_MACROS
#endif
// this is necessary for UINTPTR_MAX (used by atomic-ops)
#ifndef __STDC_LIMIT_MACROS
#define __STDC_LIMIT_MACROS
#endif
#include "atomic_ops.h"
#include "EXP_PyObjectPlus.h"
#include "KX_KetsjiEngine.h"
#include "KX_PythonInit.h"
#include "DeckLink.h"
#include <memory.h>
// macro for exception handling and logging
#define CATCH_EXCP catch (Exception & exp) \
{ exp.report(); return NULL; }
static struct
{
const char *name;
BMDDisplayMode mode;
} sModeStringTab[] = {
{ "NTSC", bmdModeNTSC },
{ "NTSC2398", bmdModeNTSC2398 },
{ "PAL", bmdModePAL },
{ "NTSCp", bmdModeNTSCp },
{ "PALp", bmdModePALp },
/* HD 1080 Modes */
{ "HD1080p2398", bmdModeHD1080p2398 },
{ "HD1080p24", bmdModeHD1080p24 },
{ "HD1080p25", bmdModeHD1080p25 },
{ "HD1080p2997", bmdModeHD1080p2997 },
{ "HD1080p30", bmdModeHD1080p30 },
{ "HD1080i50", bmdModeHD1080i50 },
{ "HD1080i5994", bmdModeHD1080i5994 },
{ "HD1080i6000", bmdModeHD1080i6000 },
{ "HD1080p50", bmdModeHD1080p50 },
{ "HD1080p5994", bmdModeHD1080p5994 },
{ "HD1080p6000", bmdModeHD1080p6000 },
/* HD 720 Modes */
{ "HD720p50", bmdModeHD720p50 },
{ "HD720p5994", bmdModeHD720p5994 },
{ "HD720p60", bmdModeHD720p60 },
/* 2k Modes */
{ "2k2398", bmdMode2k2398 },
{ "2k24", bmdMode2k24 },
{ "2k25", bmdMode2k25 },
/* DCI Modes (output only) */
{ "2kDCI2398", bmdMode2kDCI2398 },
{ "2kDCI24", bmdMode2kDCI24 },
{ "2kDCI25", bmdMode2kDCI25 },
/* 4k Modes */
{ "4K2160p2398", bmdMode4K2160p2398 },
{ "4K2160p24", bmdMode4K2160p24 },
{ "4K2160p25", bmdMode4K2160p25 },
{ "4K2160p2997", bmdMode4K2160p2997 },
{ "4K2160p30", bmdMode4K2160p30 },
{ "4K2160p50", bmdMode4K2160p50 },
{ "4K2160p5994", bmdMode4K2160p5994 },
{ "4K2160p60", bmdMode4K2160p60 },
// sentinel
{ NULL }
};
static struct
{
const char *name;
BMDPixelFormat format;
} sFormatStringTab[] = {
{ "8BitYUV", bmdFormat8BitYUV },
{ "10BitYUV", bmdFormat10BitYUV },
{ "8BitARGB", bmdFormat8BitARGB },
{ "8BitBGRA", bmdFormat8BitBGRA },
{ "10BitRGB", bmdFormat10BitRGB },
{ "12BitRGB", bmdFormat12BitRGB },
{ "12BitRGBLE", bmdFormat12BitRGBLE },
{ "10BitRGBXLE", bmdFormat10BitRGBXLE },
{ "10BitRGBX", bmdFormat10BitRGBX },
// sentinel
{ NULL }
};
ExceptionID DeckLinkBadDisplayMode, DeckLinkBadPixelFormat;
ExpDesc DeckLinkBadDisplayModeDesc(DeckLinkBadDisplayMode, "Invalid or unsupported display mode");
ExpDesc DeckLinkBadPixelFormatDesc(DeckLinkBadPixelFormat, "Invalid or unsupported pixel format");
HRESULT decklink_ReadDisplayMode(const char *format, size_t len, BMDDisplayMode *displayMode)
{
int i;
if (len == 0)
len = strlen(format);
for (i = 0; sModeStringTab[i].name != NULL; i++) {
if (strlen(sModeStringTab[i].name) == len &&
!strncmp(sModeStringTab[i].name, format, len))
{
*displayMode = sModeStringTab[i].mode;
return S_OK;
}
}
if (len != 4)
THRWEXCP(DeckLinkBadDisplayMode, S_OK);
// assume the user entered the mode value directly as a 4-character string
*displayMode = (BMDDisplayMode)((((uint32_t)format[0]) << 24) + (((uint32_t)format[1]) << 16) + (((uint32_t)format[2]) << 8) + ((uint32_t)format[3]));
return S_OK;
}
HRESULT decklink_ReadPixelFormat(const char *format, size_t len, BMDPixelFormat *pixelFormat)
{
int i;
if (!len)
len = strlen(format);
for (i = 0; sFormatStringTab[i].name != NULL; i++) {
if (strlen(sFormatStringTab[i].name) == len &&
!strncmp(sFormatStringTab[i].name, format, len))
{
*pixelFormat = sFormatStringTab[i].format;
return S_OK;
}
}
if (len != 4)
THRWEXCP(DeckLinkBadPixelFormat, S_OK);
// assume the user entered the pixel format value directly as a 4-character string
*pixelFormat = (BMDPixelFormat)((((uint32_t)format[0]) << 24) + (((uint32_t)format[1]) << 16) + (((uint32_t)format[2]) << 8) + ((uint32_t)format[3]));
return S_OK;
}
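To illustrate the fallback at the end of both helpers: any name of exactly four characters that is not found in the tables is packed verbatim into a FourCC value. A hypothetical Python sketch of that packing (fourcc() is not part of the module, it only mirrors the shift logic above):
def fourcc(name):
    # pack a 4-character string the same way decklink_ReadDisplayMode does
    assert len(name) == 4
    return (ord(name[0]) << 24) | (ord(name[1]) << 16) | (ord(name[2]) << 8) | ord(name[3])
# "HD1080p24" or "8BitBGRA" resolve through the tables above;
# an unlisted 4-character name such as "abcd" would be passed through as fourcc("abcd")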
class DeckLink3DFrameWrapper : public IDeckLinkVideoFrame, IDeckLinkVideoFrame3DExtensions
{
public:
// IUnknown
virtual HRESULT STDMETHODCALLTYPE QueryInterface(REFIID iid, LPVOID *ppv)
{
if (!memcmp(&iid, &IID_IDeckLinkVideoFrame3DExtensions, sizeof(iid))) {
if (mpRightEye) {
*ppv = (IDeckLinkVideoFrame3DExtensions*)this;
return S_OK;
}
}
return E_NOTIMPL;
}
virtual ULONG STDMETHODCALLTYPE AddRef(void) { return 1U; }
virtual ULONG STDMETHODCALLTYPE Release(void) { return 1U; }
// IDeckLinkVideoFrame
virtual long STDMETHODCALLTYPE GetWidth(void) { return mpLeftEye->GetWidth(); }
virtual long STDMETHODCALLTYPE GetHeight(void) { return mpLeftEye->GetHeight(); }
virtual long STDMETHODCALLTYPE GetRowBytes(void) { return mpLeftEye->GetRowBytes(); }
virtual BMDPixelFormat STDMETHODCALLTYPE GetPixelFormat(void) { return mpLeftEye->GetPixelFormat(); }
virtual BMDFrameFlags STDMETHODCALLTYPE GetFlags(void) { return mpLeftEye->GetFlags(); }
virtual HRESULT STDMETHODCALLTYPE GetBytes(void **buffer) { return mpLeftEye->GetBytes(buffer); }
virtual HRESULT STDMETHODCALLTYPE GetTimecode(BMDTimecodeFormat format,IDeckLinkTimecode **timecode)
{ return mpLeftEye->GetTimecode(format, timecode); }
virtual HRESULT STDMETHODCALLTYPE GetAncillaryData(IDeckLinkVideoFrameAncillary **ancillary)
{ return mpLeftEye->GetAncillaryData(ancillary); }
// IDeckLinkVideoFrame3DExtensions
virtual BMDVideo3DPackingFormat STDMETHODCALLTYPE Get3DPackingFormat(void)
{
return bmdVideo3DPackingLeftOnly;
}
virtual HRESULT STDMETHODCALLTYPE GetFrameForRightEye(
/* [out] */ IDeckLinkVideoFrame **rightEyeFrame)
{
mpRightEye->AddRef();
*rightEyeFrame = mpRightEye;
return S_OK;
}
// Constructor
DeckLink3DFrameWrapper(IDeckLinkVideoFrame *leftEye, IDeckLinkVideoFrame *rightEye)
{
mpLeftEye = leftEye;
mpRightEye = rightEye;
}
// no need for a destructor, it's just a wrapper
private:
IDeckLinkVideoFrame *mpLeftEye;
IDeckLinkVideoFrame *mpRightEye;
};
static void decklink_Reset(DeckLink *self)
{
self->m_lastClock = 0.0;
self->mDLOutput = NULL;
self->mUse3D = false;
self->mDisplayMode = bmdModeUnknown;
self->mKeyingSupported = false;
self->mHDKeyingSupported = false;
self->mSize[0] = 0;
self->mSize[1] = 0;
self->mFrameSize = 0;
self->mLeftFrame = NULL;
self->mRightFrame = NULL;
self->mKeyer = NULL;
self->mUseKeying = false;
self->mKeyingLevel = 255;
self->mUseExtend = false;
}
#ifdef __BIG_ENDIAN__
#define CONV_PIXEL(i) ((((i)>>16)&0xFF00)+(((i)&0xFF00)<<16)+((i)&0xFF00FF))
#else
#define CONV_PIXEL(i) ((((i)&0xFF)<<16)+(((i)>>16)&0xFF)+((i)&0xFF00FF00))
#endif
// adapt the pixel format and picture size from VideoTexture (RGBA) to DeckLink (BGRA)
static void decklink_ConvImage(uint32_t *dest, const short *destSize, const uint32_t *source, const short *srcSize, bool extend)
{
short w, h, x, y;
const uint32_t *s;
uint32_t *d, p;
bool sameSize = (destSize[0] == srcSize[0] && destSize[1] == srcSize[1]);
if (sameSize || !extend) {
// here we convert pixel by pixel
w = (destSize[0] < srcSize[0]) ? destSize[0] : srcSize[0];
h = (destSize[1] < srcSize[1]) ? destSize[1] : srcSize[1];
for (y = 0; y < h; ++y) {
s = source + y*srcSize[0];
d = dest + y*destSize[0];
for (x = 0; x < w; ++x, ++s, ++d) {
*d = CONV_PIXEL(*s);
}
}
}
else {
// here we scale
// interpolation accumulator
int accHeight = srcSize[1] >> 1;
d = dest;
s = source;
// process image rows
for (y = 0; y < srcSize[1]; ++y) {
// increase height accum
accHeight += destSize[1];
// if pixel row has to be drawn
if (accHeight >= srcSize[1]) {
// decrease accum
accHeight -= srcSize[1];
// width accum
int accWidth = srcSize[0] >> 1;
// process row
for (x = 0; x < srcSize[0]; ++x, ++s) {
// increase width accum
accWidth += destSize[0];
// convert pixel
p = CONV_PIXEL(*s);
// if pixel has to be drawn one or more times
while (accWidth >= srcSize[0]) {
// decrease accum
accWidth -= srcSize[0];
*d++ = p;
}
}
// if there should be more identical lines
while (accHeight >= srcSize[1]) {
accHeight -= srcSize[1];
// copy previous line
memcpy(d, d - destSize[0], 4 * destSize[0]);
d += destSize[0];
}
}
else {
// if we skip a source line
s += srcSize[0];
}
}
}
}
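The scaling branch above is an integer-accumulator (Bresenham-style) nearest-neighbour resample. A small Python sketch of the same row/column logic, assuming pixels are plain integers in a flat list:
def resample(src, src_w, src_h, dst_w, dst_h):
    # mirrors the accumulator scheme of decklink_ConvImage (nearest neighbour, no filtering)
    dst_rows = []
    acc_h = src_h // 2
    for y in range(src_h):
        acc_h += dst_h
        if acc_h < src_h:
            continue                      # this source row is skipped (downscale)
        acc_h -= src_h
        row = []
        acc_w = src_w // 2
        for x in range(src_w):
            acc_w += dst_w
            pixel = src[y * src_w + x]
            while acc_w >= src_w:         # emit the pixel zero or more times
                acc_w -= src_w
                row.append(pixel)
        dst_rows.append(row)
        while acc_h >= src_h:             # duplicate the row when upscaling vertically
            acc_h -= src_h
            dst_rows.append(list(row))
    return dst_rows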
// DeckLink object allocation
static PyObject *DeckLink_new(PyTypeObject *type, PyObject *args, PyObject *kwds)
{
// allocate object
DeckLink * self = reinterpret_cast<DeckLink*>(type->tp_alloc(type, 0));
// initialize object structure
decklink_Reset(self);
// m_leftEye is a python object, it's handled by python
self->m_leftEye = NULL;
self->m_rightEye = NULL;
// return allocated object
return reinterpret_cast<PyObject*>(self);
}
// forward declaration
PyObject *DeckLink_close(DeckLink *self);
int DeckLink_setSource(DeckLink *self, PyObject *value, void *closure);
// DeckLink object deallocation
static void DeckLink_dealloc(DeckLink *self)
{
// release renderer
Py_XDECREF(self->m_leftEye);
// close decklink
PyObject *ret = DeckLink_close(self);
Py_DECREF(ret);
// release object
Py_TYPE((PyObject *)self)->tp_free((PyObject *)self);
}
ExceptionID AutoDetectionNotAvail, DeckLinkOpenCard, DeckLinkBadFormat, DeckLinkInternalError;
ExpDesc AutoDetectionNotAvailDesc(AutoDetectionNotAvail, "Auto detection not yet available");
ExpDesc DeckLinkOpenCardDesc(DeckLinkOpenCard, "Cannot open card for output");
ExpDesc DeckLinkBadFormatDesc(DeckLinkBadFormat, "Invalid or unsupported output format, use <mode>[/3D]");
ExpDesc DeckLinkInternalErrorDesc(DeckLinkInternalError, "DeckLink API internal error, please report");
// DeckLink object initialization
static int DeckLink_init(DeckLink *self, PyObject *args, PyObject *kwds)
{
IDeckLinkIterator* pIterator;
IDeckLinkAttributes* pAttributes;
IDeckLinkDisplayModeIterator* pDisplayModeIterator;
IDeckLinkDisplayMode* pDisplayMode;
IDeckLink* pDL;
char* p3D;
BOOL flag;
size_t len;
int i;
uint32_t displayFlags;
BMDVideoOutputFlags outputFlags;
BMDDisplayModeSupport support;
uint32_t* bytes;
// card index
short cardIdx = 0;
// output mode string: "<mode>[/3D]"
char *format = NULL;
static const char *kwlist[] = {"cardIdx", "format", NULL};
// get parameters
if (!PyArg_ParseTupleAndKeywords(args, kwds, "|hs",
const_cast<char**>(kwlist), &cardIdx, &format))
return -1;
try {
if (format == NULL) {
THRWEXCP(AutoDetectionNotAvail, S_OK);
}
if ((p3D = strchr(format, '/')) != NULL && strcmp(p3D, "/3D"))
THRWEXCP(DeckLinkBadFormat, S_OK);
self->mUse3D = (p3D) ? true : false;
// read the mode
len = (p3D) ? (size_t)(p3D - format) : strlen(format);
// throws if bad mode
decklink_ReadDisplayMode(format, len, &self->mDisplayMode);
pIterator = BMD_CreateDeckLinkIterator();
pDL = NULL;
if (pIterator) {
i = 0;
while (pIterator->Next(&pDL) == S_OK) {
if (i == cardIdx) {
break;
}
i++;
pDL->Release();
pDL = NULL;
}
pIterator->Release();
}
if (!pDL) {
THRWEXCP(DeckLinkOpenCard, S_OK);
}
// detect the capabilities
if (pDL->QueryInterface(IID_IDeckLinkAttributes, (void**)&pAttributes) == S_OK) {
if (pAttributes->GetFlag(BMDDeckLinkSupportsInternalKeying, &flag) == S_OK && flag) {
self->mKeyingSupported = true;
if (pAttributes->GetFlag(BMDDeckLinkSupportsHDKeying, &flag) == S_OK && flag) {
self->mHDKeyingSupported = true;
}
}
pAttributes->Release();
}
if (pDL->QueryInterface(IID_IDeckLinkOutput, (void**)&self->mDLOutput) != S_OK) {
self->mDLOutput = NULL;
}
if (self->mKeyingSupported) {
pDL->QueryInterface(IID_IDeckLinkKeyer, (void **)&self->mKeyer);
}
// we don't need the device anymore, release to avoid leaking
pDL->Release();
if (!self->mDLOutput)
THRWEXCP(DeckLinkOpenCard, S_OK);
if (self->mDLOutput->GetDisplayModeIterator(&pDisplayModeIterator) != S_OK)
THRWEXCP(DeckLinkInternalError, S_OK);
displayFlags = (self->mUse3D) ? bmdDisplayModeSupports3D : 0;
outputFlags = (self->mUse3D) ? bmdVideoOutputDualStream3D : bmdVideoOutputFlagDefault;
pDisplayMode = NULL;
i = 0;
while (pDisplayModeIterator->Next(&pDisplayMode) == S_OK) {
if (pDisplayMode->GetDisplayMode() == self->mDisplayMode
&& (pDisplayMode->GetFlags() & displayFlags) == displayFlags) {
if (self->mDLOutput->DoesSupportVideoMode(self->mDisplayMode, bmdFormat8BitBGRA, outputFlags, &support, NULL) != S_OK ||
support == bmdDisplayModeNotSupported)
{
printf("Warning: DeckLink card %d reports no BGRA support, proceed anyway\n", cardIdx);
}
break;
}
pDisplayMode->Release();
pDisplayMode = NULL;
i++;
}
pDisplayModeIterator->Release();
if (!pDisplayMode)
THRWEXCP(DeckLinkBadFormat, S_OK);
self->mSize[0] = pDisplayMode->GetWidth();
self->mSize[1] = pDisplayMode->GetHeight();
self->mFrameSize = 4*self->mSize[0]*self->mSize[1];
pDisplayMode->Release();
if (self->mDLOutput->EnableVideoOutput(self->mDisplayMode, outputFlags) != S_OK)
// this shouldn't fail
THRWEXCP(DeckLinkOpenCard, S_OK);
if (self->mDLOutput->CreateVideoFrame(self->mSize[0], self->mSize[1], self->mSize[0] * 4, bmdFormat8BitBGRA, bmdFrameFlagFlipVertical, &self->mLeftFrame) != S_OK)
THRWEXCP(DeckLinkInternalError, S_OK);
// clear alpha channel in the frame buffer
self->mLeftFrame->GetBytes((void **)&bytes);
memset(bytes, 0, self->mFrameSize);
if (self->mUse3D) {
if (self->mDLOutput->CreateVideoFrame(self->mSize[0], self->mSize[1], self->mSize[0] * 4, bmdFormat8BitBGRA, bmdFrameFlagFlipVertical, &self->mRightFrame) != S_OK)
THRWEXCP(DeckLinkInternalError, S_OK);
// clear alpha channel in the frame buffer
self->mRightFrame->GetBytes((void **)&bytes);
memset(bytes, 0, self->mFrameSize);
}
}
catch (Exception & exp)
{
printf("DeckLink: exception when opening card %d: %s\n", cardIdx, exp.what());
exp.report();
// on init failure, Python will deallocate the object
return -1;
}
// initialization succeeded
return 0;
}
// close the DeckLink output and release its resources
PyObject *DeckLink_close(DeckLink * self)
{
if (self->mLeftFrame)
self->mLeftFrame->Release();
if (self->mRightFrame)
self->mRightFrame->Release();
if (self->mKeyer)
self->mKeyer->Release();
if (self->mDLOutput)
self->mDLOutput->Release();
decklink_Reset(self);
Py_RETURN_NONE;
}
// refresh decklink key frame
static PyObject *DeckLink_refresh(DeckLink *self, PyObject *args)
{
// get parameter - refresh source
PyObject *param;
double ts = -1.0;
if (!PyArg_ParseTuple(args, "O|d:refresh", &param, &ts) || !PyBool_Check(param)) {
// report error
PyErr_SetString(PyExc_TypeError, "The value must be a bool");
return NULL;
}
// Optimization: a key frame only needs to be sent to the DeckLink card once per
// render frame; skip the work if the engine clock has not advanced since the
// previous call.
KX_KetsjiEngine* engine = KX_GetActiveEngine();
if (engine->GetClockTime() != self->m_lastClock)
{
self->m_lastClock = engine->GetClockTime();
// set source refresh
bool refreshSource = (param == Py_True);
uint32_t *leftEye = NULL;
uint32_t *rightEye = NULL;
// try to process key frame from source
try {
// check if optimization is possible
if (self->m_leftEye != NULL) {
ImageBase *leftImage = self->m_leftEye->m_image;
short * srcSize = leftImage->getSize();
self->mLeftFrame->GetBytes((void **)&leftEye);
if (srcSize[0] == self->mSize[0] && srcSize[1] == self->mSize[1])
{
// buffer has same size, can load directly
if (!leftImage->loadImage(leftEye, self->mFrameSize, GL_BGRA, ts))
leftEye = NULL;
}
else {
// scaling is required, go the hard way
unsigned int *src = leftImage->getImage(0, ts);
if (src != NULL)
decklink_ConvImage(leftEye, self->mSize, src, srcSize, self->mUseExtend);
else
leftEye = NULL;
}
}
if (leftEye) {
if (self->mUse3D && self->m_rightEye != NULL) {
ImageBase *rightImage = self->m_rightEye->m_image;
short * srcSize = rightImage->getSize();
self->mRightFrame->GetBytes((void **)&rightEye);
if (srcSize[0] == self->mSize[0] && srcSize[1] == self->mSize[1])
{
// buffer has same size, can load directly
rightImage->loadImage(rightEye, self->mFrameSize, GL_BGRA, ts);
}
else {
// scaling is required, go the hard way
unsigned int *src = rightImage->getImage(0, ts);
if (src != NULL)
decklink_ConvImage(rightEye, self->mSize, src, srcSize, self->mUseExtend);
}
}
if (self->mUse3D) {
DeckLink3DFrameWrapper frame3D(
(IDeckLinkVideoFrame*)self->mLeftFrame,
(IDeckLinkVideoFrame*)self->mRightFrame);
self->mDLOutput->DisplayVideoFrameSync(&frame3D);
}
else {
self->mDLOutput->DisplayVideoFrameSync((IDeckLinkVideoFrame*)self->mLeftFrame);
}
}
// refresh texture source, if required
if (refreshSource) {
if (self->m_leftEye)
self->m_leftEye->m_image->refresh();
if (self->m_rightEye)
self->m_rightEye->m_image->refresh();
}
}
CATCH_EXCP;
}
Py_RETURN_NONE;
}
// get source object
static PyObject *DeckLink_getSource(DeckLink *self, PyObject *value, void *closure)
{
// if source exists
if (self->m_leftEye != NULL) {
Py_INCREF(self->m_leftEye);
return reinterpret_cast<PyObject*>(self->m_leftEye);
}
// otherwise return None
Py_RETURN_NONE;
}
// set source object
int DeckLink_setSource(DeckLink *self, PyObject *value, void *closure)
{
// check new value
if (value == NULL || !pyImageTypes.in(Py_TYPE(value))) {
// report value error
PyErr_SetString(PyExc_TypeError, "Invalid type of value");
return -1;
}
// increase ref count for new value
Py_INCREF(value);
// release previous
Py_XDECREF(self->m_leftEye);
// set new value
self->m_leftEye = reinterpret_cast<PyImage*>(value);
// return success
return 0;
}
// get source object
static PyObject *DeckLink_getRight(DeckLink *self, PyObject *value, void *closure)
{
// if source exists
if (self->m_rightEye != NULL)
{
Py_INCREF(self->m_rightEye);
return reinterpret_cast<PyObject*>(self->m_rightEye);
}
// otherwise return None
Py_RETURN_NONE;
}
// set source object
static int DeckLink_setRight(DeckLink *self, PyObject *value, void *closure)
{
// check new value
if (value == NULL || !pyImageTypes.in(Py_TYPE(value)))
{
// report value error
PyErr_SetString(PyExc_TypeError, "Invalid type of value");
return -1;
}
// increase ref count for new value
Py_INCREF(value);
// release previous
Py_XDECREF(self->m_rightEye);
// set new value
self->m_rightEye = reinterpret_cast<PyImage*>(value);
// return success
return 0;
}
static PyObject *DeckLink_getKeying(DeckLink *self, PyObject *value, void *closure)
{
if (self->mUseKeying) Py_RETURN_TRUE;
else Py_RETURN_FALSE;
}
static int DeckLink_setKeying(DeckLink *self, PyObject *value, void *closure)
{
if (value == NULL || !PyBool_Check(value))
{
PyErr_SetString(PyExc_TypeError, "The value must be a bool");
return -1;
}
if (self->mKeyer != NULL)
{
if (value == Py_True)
{
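// IDeckLinkKeyer::Enable(false) selects internal keying: the card composites the key frame over its input signal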
if (self->mKeyer->Enable(false) != S_OK)
{
PyErr_SetString(PyExc_RuntimeError, "Error enabling keyer");
return -1;
}
self->mUseKeying = true;
self->mKeyer->SetLevel(self->mKeyingLevel);
}
else
{
self->mKeyer->Disable();
self->mUseKeying = false;
}
}
// success
return 0;
}
static PyObject *DeckLink_getLevel(DeckLink *self, PyObject *value, void *closure)
{
return Py_BuildValue("h", self->mKeyingLevel);
}
static int DeckLink_setLevel(DeckLink *self, PyObject *value, void *closure)
{
long level;
if (value == NULL || !PyLong_Check(value)) {
PyErr_SetString(PyExc_TypeError, "The value must be an integer from 0 to 255");
return -1;
}
level = PyLong_AsLong(value);
if (level > 255)
level = 255;
else if (level < 0)
level = 0;
self->mKeyingLevel = (uint8_t)level;
if (self->mUseKeying) {
if (self->mKeyer->SetLevel(self->mKeyingLevel) != S_OK) {
PyErr_SetString(PyExc_RuntimeError, "Error changing keyer level");
return -1;
}
}
// success
return 0;
}
static PyObject *DeckLink_getExtend(DeckLink *self, PyObject *value, void *closure)
{
if (self->mUseExtend) Py_RETURN_TRUE;
else Py_RETURN_FALSE;
}
static int DeckLink_setExtend(DeckLink *self, PyObject *value, void *closure)
{
if (value == NULL || !PyBool_Check(value))
{
PyErr_SetString(PyExc_TypeError, "The value must be a bool");
return -1;
}
self->mUseExtend = (value == Py_True);
return 0;
}
// class DeckLink methods
static PyMethodDef decklinkMethods[] =
{
{ "close", (PyCFunction)DeckLink_close, METH_NOARGS, "Close dynamic decklink and restore original"},
{ "refresh", (PyCFunction)DeckLink_refresh, METH_VARARGS, "Refresh decklink from source"},
{NULL} /* Sentinel */
};
// class DeckLink attributes
static PyGetSetDef decklinkGetSets[] =
{
{ (char*)"source", (getter)DeckLink_getSource, (setter)DeckLink_setSource, (char*)"source of decklink (left eye)", NULL},
{ (char*)"right", (getter)DeckLink_getRight, (setter)DeckLink_setRight, (char*)"source of decklink (right eye)", NULL },
{ (char*)"keying", (getter)DeckLink_getKeying, (setter)DeckLink_setKeying, (char*)"whether keying is enabled (frame is alpha-composited with passthrough output)", NULL },
{ (char*)"level", (getter)DeckLink_getLevel, (setter)DeckLink_setLevel, (char*)"change the level of keying (overall alpha level of key frame, 0 to 255)", NULL },
{ (char*)"extend", (getter)DeckLink_getExtend, (setter)DeckLink_setExtend, (char*)"whether image should stretched to fit frame", NULL },
{ NULL }
};
// class DeckLink declaration
PyTypeObject DeckLinkType =
{
PyVarObject_HEAD_INIT(NULL, 0)
"VideoTexture.DeckLink", /*tp_name*/
sizeof(DeckLink), /*tp_basicsize*/
0, /*tp_itemsize*/
(destructor)DeckLink_dealloc,/*tp_dealloc*/
0, /*tp_print*/
0, /*tp_getattr*/
0, /*tp_setattr*/
0, /*tp_compare*/
0, /*tp_repr*/
0, /*tp_as_number*/
0, /*tp_as_sequence*/
0, /*tp_as_mapping*/
0, /*tp_hash */
0, /*tp_call*/
0, /*tp_str*/
0, /*tp_getattro*/
0, /*tp_setattro*/
&imageBufferProcs, /*tp_as_buffer*/
Py_TPFLAGS_DEFAULT, /*tp_flags*/
"DeckLink objects", /* tp_doc */
0, /* tp_traverse */
0, /* tp_clear */
0, /* tp_richcompare */
0, /* tp_weaklistoffset */
0, /* tp_iter */
0, /* tp_iternext */
decklinkMethods, /* tp_methods */
0, /* tp_members */
decklinkGetSets, /* tp_getset */
0, /* tp_base */
0, /* tp_dict */
0, /* tp_descr_get */
0, /* tp_descr_set */
0, /* tp_dictoffset */
(initproc)DeckLink_init, /* tp_init */
0, /* tp_alloc */
DeckLink_new, /* tp_new */
};
#endif /* WITH_GAMEENGINE_DECKLINK */
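For reference, a minimal sketch of how this output object can be driven from a Python logic controller; the display mode string and the choice of key source are illustrative assumptions, not part of this patch:

    import bge
    from bge import logic, texture

    cont = logic.getCurrentController()
    obj = cont.owner
    if not hasattr(logic, "deck"):
        # assumed display mode string, see decklink_ReadDisplayMode() for accepted values
        logic.deck = texture.DeckLink(cardIdx=0, format="HD1080i50")
        logic.deck.source = texture.ImageViewport()  # any VideoTexture image source
        logic.deck.keying = True    # composite the key frame over the card's passthrough signal
        logic.deck.level = 255      # overall key opacity, 0 to 255
        logic.deck.extend = True    # stretch the source image to the output frame size
    # send one key frame per render frame, then refresh the source
    logic.deck.refresh(True)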

View File

@@ -0,0 +1,86 @@
/*
* ***** BEGIN GPL LICENSE BLOCK *****
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
* of the License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software Foundation,
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* The Original Code is Copyright (C) 2015, Blender Foundation
* All rights reserved.
*
* The Original Code is: all of this file.
*
* Contributor(s): Blender Foundation.
*
* ***** END GPL LICENSE BLOCK *****
*/
/** \file VideoTexture/DeckLink.h
* \ingroup bgevideotex
*/
#ifndef __DECKLINK_H__
#define __DECKLINK_H__
#ifdef WITH_GAMEENGINE_DECKLINK
#include "EXP_PyObjectPlus.h"
#include <structmember.h>
#include "DNA_image_types.h"
#include "DeckLinkAPI.h"
#include "ImageBase.h"
#include "BlendType.h"
#include "Exception.h"
// type DeckLink declaration
struct DeckLink
{
PyObject_HEAD
// last refresh
double m_lastClock;
// decklink card to which we output
IDeckLinkOutput * mDLOutput;
IDeckLinkKeyer * mKeyer;
IDeckLinkMutableVideoFrame *mLeftFrame;
IDeckLinkMutableVideoFrame *mRightFrame;
bool mUse3D;
bool mUseKeying;
bool mUseExtend;
bool mKeyingSupported;
bool mHDKeyingSupported;
uint8_t mKeyingLevel;
BMDDisplayMode mDisplayMode;
short mSize[2];
uint32_t mFrameSize;
// image source
PyImage * m_leftEye;
PyImage * m_rightEye;
};
// DeckLink type description
extern PyTypeObject DeckLinkType;
// helper function
HRESULT decklink_ReadDisplayMode(const char *format, size_t len, BMDDisplayMode *displayMode);
HRESULT decklink_ReadPixelFormat(const char *format, size_t len, BMDPixelFormat *displayMode);
#endif /* WITH_GAMEENGINE_DECKLINK */
#endif /* __DECKLINK_H__ */

View File

@@ -213,6 +213,7 @@ void registerAllExceptions(void)
ImageSizesNotMatchDesc.registerDesc();
ImageHasExportsDesc.registerDesc();
InvalidColorChannelDesc.registerDesc();
InvalidImageModeDesc.registerDesc();
SceneInvalidDesc.registerDesc();
CameraInvalidDesc.registerDesc();
ObserverInvalidDesc.registerDesc();
@@ -223,4 +224,18 @@ void registerAllExceptions(void)
MirrorTooSmallDesc.registerDesc();
SourceVideoEmptyDesc.registerDesc();
SourceVideoCreationDesc.registerDesc();
OffScreenInvalidDesc.registerDesc();
#ifdef WITH_GAMEENGINE_DECKLINK
AutoDetectionNotAvailDesc.registerDesc();
DeckLinkBadDisplayModeDesc.registerDesc();
DeckLinkBadPixelFormatDesc.registerDesc();
DeckLinkOpenCardDesc.registerDesc();
DeckLinkBadFormatDesc.registerDesc();
DeckLinkInternalErrorDesc.registerDesc();
SourceVideoOnlyCaptureDesc.registerDesc();
VideoDeckLinkBadFormatDesc.registerDesc();
VideoDeckLinkOpenCardDesc.registerDesc();
VideoDeckLinkDvpInternalErrorDesc.registerDesc();
VideoDeckLinkPinMemoryErrorDesc.registerDesc();
#endif
}

View File

@@ -46,7 +46,7 @@
throw Exception (err, macroHRslt, __FILE__, __LINE__); \
}
#define THRWEXCP(err,hRslt) throw Exception (err, hRslt, __FILE__, __LINE__);
#define THRWEXCP(err,hRslt) throw Exception (err, hRslt, __FILE__, __LINE__)
#if defined WIN32
@@ -209,9 +209,11 @@ extern ExpDesc MaterialNotAvailDesc;
extern ExpDesc ImageSizesNotMatchDesc;
extern ExpDesc ImageHasExportsDesc;
extern ExpDesc InvalidColorChannelDesc;
extern ExpDesc InvalidImageModeDesc;
extern ExpDesc SceneInvalidDesc;
extern ExpDesc CameraInvalidDesc;
extern ExpDesc ObserverInvalidDesc;
extern ExpDesc OffScreenInvalidDesc;
extern ExpDesc MirrorInvalidDesc;
extern ExpDesc MirrorSizeInvalidDesc;
extern ExpDesc MirrorNormalInvalidDesc;
@@ -219,7 +221,19 @@ extern ExpDesc MirrorHorizontalDesc;
extern ExpDesc MirrorTooSmallDesc;
extern ExpDesc SourceVideoEmptyDesc;
extern ExpDesc SourceVideoCreationDesc;
extern ExpDesc DeckLinkBadDisplayModeDesc;
extern ExpDesc DeckLinkBadPixelFormatDesc;
extern ExpDesc AutoDetectionNotAvailDesc;
extern ExpDesc DeckLinkOpenCardDesc;
extern ExpDesc DeckLinkBadFormatDesc;
extern ExpDesc DeckLinkInternalErrorDesc;
extern ExpDesc SourceVideoOnlyCaptureDesc;
extern ExpDesc VideoDeckLinkBadFormatDesc;
extern ExpDesc VideoDeckLinkOpenCardDesc;
extern ExpDesc VideoDeckLinkDvpInternalErrorDesc;
extern ExpDesc VideoDeckLinkPinMemoryErrorDesc;
extern ExceptionID InvalidImageMode;
void registerAllExceptions(void);
#endif

View File

@@ -44,6 +44,13 @@
#define VT_A(v) ((unsigned char*)&v)[3]
#define VT_RGBA(v,r,g,b,a) VT_R(v)=(unsigned char)r, VT_G(v)=(unsigned char)g, VT_B(v)=(unsigned char)b, VT_A(v)=(unsigned char)a
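// VT_SWAPBR: swap the B and R bytes of a packed 32-bit pixel (converts RGBA <-> BGRA in memory order)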
#ifdef __BIG_ENDIAN__
# define VT_SWAPBR(i) ((((i) >> 16) & 0xFF00) + (((i) & 0xFF00) << 16) + ((i) & 0xFF00FF))
#else
# define VT_SWAPBR(i) ((((i) & 0xFF) << 16) + (((i) >> 16) & 0xFF) + ((i) & 0xFF00FF00))
#endif
// forward declaration
class FilterBase;

View File

@@ -81,6 +81,30 @@ protected:
}
};
/// class for BGRA32 conversion
class FilterBGRA32 : public FilterBase
{
public:
/// constructor
FilterBGRA32 (void) {}
/// destructor
virtual ~FilterBGRA32 (void) {}
/// get source pixel size
virtual unsigned int getPixelSize (void) { return 4; }
protected:
/// filter pixel, source byte buffer
virtual unsigned int filter(
unsigned char *src, short x, short y,
short * size, unsigned int pixSize, unsigned int val)
{
VT_RGBA(val,src[2],src[1],src[0],src[3]);
return val;
}
};
/// class for BGR24 conversion
class FilterBGR24 : public FilterBase
{

View File

@@ -32,7 +32,6 @@
extern "C" {
#include "bgl.h"
}
#include "glew-mx.h"
#include <vector>
#include <string.h>
@@ -50,6 +49,14 @@ extern "C" {
// ImageBase class implementation
ExceptionID ImageHasExports;
ExceptionID InvalidColorChannel;
ExceptionID InvalidImageMode;
ExpDesc ImageHasExportsDesc(ImageHasExports, "Image has exported buffers, cannot resize");
ExpDesc InvalidColorChannelDesc(InvalidColorChannel, "Invalid or too many color channels specified. At most 4 values within R, G, B, A, 0, 1");
ExpDesc InvalidImageModeDesc(InvalidImageMode, "Invalid image mode, only RGBA and BGRA are supported");
// constructor
ImageBase::ImageBase (bool staticSrc) : m_image(NULL), m_imgSize(0),
m_avail(false), m_scale(false), m_scaleChange(false), m_flip(false),
@@ -111,6 +118,28 @@ unsigned int * ImageBase::getImage (unsigned int texId, double ts)
return m_avail ? m_image : NULL;
}
bool ImageBase::loadImage(unsigned int *buffer, unsigned int size, unsigned int format, double ts)
{
unsigned int *d, *s, v, len;
if (getImage(0, ts) != NULL && size >= getBuffSize()) {
switch (format) {
case GL_RGBA:
memcpy(buffer, m_image, getBuffSize());
break;
case GL_BGRA:
len = (unsigned int)m_size[0] * m_size[1];
for (s=m_image, d=buffer; len; len--) {
v = *s++;
*d++ = VT_SWAPBR(v);
}
break;
default:
THRWEXCP(InvalidImageMode,S_OK);
}
return true;
}
return false;
}
// refresh image source
void ImageBase::refresh (void)
@@ -179,11 +208,18 @@ void ImageBase::setFilter (PyFilter * filt)
m_pyfilter = filt;
}
ExceptionID ImageHasExports;
ExceptionID InvalidColorChannel;
void ImageBase::swapImageBR()
{
unsigned int size, v, *s;
ExpDesc ImageHasExportsDesc(ImageHasExports, "Image has exported buffers, cannot resize");
ExpDesc InvalidColorChannelDesc(InvalidColorChannel, "Invalid or too many color channels specified. At most 4 values within R, G, B, A, 0, 1");
if (m_avail) {
size = 1 * m_size[0] * m_size[1];
for (s=m_image; size; size--) {
v = *s;
*s++ = VT_SWAPBR(v);
}
}
}
// initialize image data
void ImageBase::init (short width, short height)
@@ -500,10 +536,57 @@ PyObject *Image_getSize (PyImage *self, void *closure)
}
// refresh image
PyObject *Image_refresh (PyImage *self)
PyObject *Image_refresh (PyImage *self, PyObject *args)
{
Py_buffer buffer;
bool done = true;
char *mode = NULL;
double ts = -1.0;
unsigned int format;
memset(&buffer, 0, sizeof(buffer));
if (PyArg_ParseTuple(args, "|s*sd:refresh", &buffer, &mode, &ts)) {
if (buffer.buf) {
// a target buffer is provided, verify its format
if (buffer.readonly) {
PyErr_SetString(PyExc_TypeError, "Buffers passed in argument must be writable");
}
else if (!PyBuffer_IsContiguous(&buffer, 'C')) {
PyErr_SetString(PyExc_TypeError, "Buffers passed in argument must be contiguous in memory");
}
else if (((intptr_t)buffer.buf & 3) != 0) {
PyErr_SetString(PyExc_TypeError, "Buffers passed in argument must be aligned to 4 bytes boundary");
}
else {
// ready to get the image into our buffer
try {
if (mode == NULL || !strcmp(mode, "RGBA"))
format = GL_RGBA;
else if (!strcmp(mode, "BGRA"))
format = GL_BGRA;
else
THRWEXCP(InvalidImageMode,S_OK);
done = self->m_image->loadImage((unsigned int *)buffer.buf, buffer.len, format, ts);
}
catch (Exception & exp) {
exp.report();
}
}
PyBuffer_Release(&buffer);
if (PyErr_Occurred()) {
return NULL;
}
}
}
else {
return NULL;
}
self->m_image->refresh();
Py_RETURN_NONE;
if (done)
Py_RETURN_TRUE;
Py_RETURN_FALSE;
}
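With this change, refresh() can also copy the pixels straight into a user-provided buffer instead of only invalidating the image; a hedged sketch, assuming a viewport source (any image source with a matching buffer size behaves the same way):

    from bge import texture

    img = texture.ImageViewport()
    w, h = img.size
    buf = bytearray(w * h * 4)   # writable, contiguous, 4-byte aligned
    # fill buf with the current image in BGRA order, then invalidate the source;
    # returns True when the buffer could be filled, False otherwise
    ok = img.refresh(buf, "BGRA")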
// get scale

View File

@@ -40,6 +40,7 @@
#include "FilterBase.h"
#include "glew-mx.h"
// forward declarations
struct PyImage;
@@ -104,6 +105,13 @@ public:
/// calculate size(nearest power of 2)
static short calcSize(short size);
/// calculate image from sources and send it to a target buffer instead of a texture
/// format is GL_RGBA or GL_BGRA
virtual bool loadImage(unsigned int *buffer, unsigned int size, unsigned int format, double ts);
/// swap the B and R channel in-place in the image buffer
void swapImageBR();
/// number of buffer pointing to m_image, public because not handled by this class
int m_exports;
@@ -348,7 +356,7 @@ PyObject *Image_getImage(PyImage *self, char *mode);
// get image size
PyObject *Image_getSize(PyImage *self, void *closure);
// refresh image - invalidate current content
PyObject *Image_refresh(PyImage *self);
PyObject *Image_refresh(PyImage *self, PyObject *args);
// get scale
PyObject *Image_getScale(PyImage *self, void *closure);

View File

@@ -156,7 +156,7 @@ static PyMethodDef imageMixMethods[] = {
{"getWeight", (PyCFunction)getWeight, METH_VARARGS, "get image source weight"},
{"setWeight", (PyCFunction)setWeight, METH_VARARGS, "set image source weight"},
// methods from ImageBase class
{"refresh", (PyCFunction)Image_refresh, METH_NOARGS, "Refresh image - invalidate its current content"},
{"refresh", (PyCFunction)Image_refresh, METH_VARARGS, "Refresh image - invalidate its current content"},
{NULL}
};
// attributes structure

View File

@@ -43,6 +43,8 @@
#include "RAS_CameraData.h"
#include "RAS_MeshObject.h"
#include "RAS_Polygon.h"
#include "RAS_IOffScreen.h"
#include "RAS_ISync.h"
#include "BLI_math.h"
#include "ImageRender.h"
@@ -51,11 +53,12 @@
#include "Exception.h"
#include "Texture.h"
ExceptionID SceneInvalid, CameraInvalid, ObserverInvalid;
ExceptionID SceneInvalid, CameraInvalid, ObserverInvalid, OffScreenInvalid;
ExceptionID MirrorInvalid, MirrorSizeInvalid, MirrorNormalInvalid, MirrorHorizontal, MirrorTooSmall;
ExpDesc SceneInvalidDesc(SceneInvalid, "Scene object is invalid");
ExpDesc CameraInvalidDesc(CameraInvalid, "Camera object is invalid");
ExpDesc ObserverInvalidDesc(ObserverInvalid, "Observer object is invalid");
ExpDesc OffScreenInvalidDesc(OffScreenInvalid, "Offscreen object is invalid");
ExpDesc MirrorInvalidDesc(MirrorInvalid, "Mirror object is invalid");
ExpDesc MirrorSizeInvalidDesc(MirrorSizeInvalid, "Mirror has no vertex or no size");
ExpDesc MirrorNormalInvalidDesc(MirrorNormalInvalid, "Cannot determine mirror plane");
@@ -63,12 +66,15 @@ ExpDesc MirrorHorizontalDesc(MirrorHorizontal, "Mirror is horizontal in local sp
ExpDesc MirrorTooSmallDesc(MirrorTooSmall, "Mirror is too small");
// constructor
ImageRender::ImageRender (KX_Scene *scene, KX_Camera * camera) :
ImageViewport(),
ImageRender::ImageRender (KX_Scene *scene, KX_Camera * camera, PyRASOffScreen * offscreen) :
ImageViewport(offscreen),
m_render(true),
m_done(false),
m_scene(scene),
m_camera(camera),
m_owncamera(false),
m_offscreen(offscreen),
m_sync(NULL),
m_observer(NULL),
m_mirror(NULL),
m_clip(100.f),
@@ -81,6 +87,10 @@ ImageRender::ImageRender (KX_Scene *scene, KX_Camera * camera) :
m_engine = KX_GetActiveEngine();
m_rasterizer = m_engine->GetRasterizer();
m_canvas = m_engine->GetCanvas();
// keep a reference to the offscreen buffer
if (m_offscreen) {
Py_INCREF(m_offscreen);
}
}
// destructor
@@ -88,6 +98,9 @@ ImageRender::~ImageRender (void)
{
if (m_owncamera)
m_camera->Release();
if (m_sync)
delete m_sync;
Py_XDECREF(m_offscreen);
}
// get background color
@@ -121,30 +134,41 @@ void ImageRender::setBackgroundFromScene (KX_Scene *scene)
// capture image from viewport
void ImageRender::calcImage (unsigned int texId, double ts)
void ImageRender::calcViewport (unsigned int texId, double ts, unsigned int format)
{
if (m_rasterizer->GetDrawingMode() != RAS_IRasterizer::KX_TEXTURED || // no need for texture
m_camera->GetViewport() || // camera must be inactive
m_camera == m_scene->GetActiveCamera())
{
// no need to compute texture in non texture rendering
m_avail = false;
return;
}
// render the scene from the camera
Render();
// get image from viewport
ImageViewport::calcImage(texId, ts);
// restore OpenGL state
m_canvas->EndFrame();
if (!m_done) {
if (!Render()) {
return;
}
}
else if (m_offscreen) {
m_offscreen->ofs->Bind(RAS_IOffScreen::RAS_OFS_BIND_READ);
}
// wait until all render operations are completed
WaitSync();
// get image from viewport (or FBO)
ImageViewport::calcViewport(texId, ts, format);
if (m_offscreen) {
m_offscreen->ofs->Unbind();
}
}
void ImageRender::Render()
bool ImageRender::Render()
{
RAS_FrameFrustum frustum;
if (!m_render)
return;
if (!m_render ||
m_rasterizer->GetDrawingMode() != RAS_IRasterizer::KX_TEXTURED || // no need for texture
m_camera->GetViewport() || // camera must be inactive
m_camera == m_scene->GetActiveCamera())
{
// no need to compute texture in non texture rendering
return false;
}
if (!m_scene->IsShadowDone())
m_engine->RenderShadowBuffers(m_scene);
if (m_mirror)
{
@@ -164,7 +188,7 @@ void ImageRender::Render()
MT_Scalar observerDistance = mirrorPlaneDTerm - observerWorldPos.dot(mirrorWorldZ);
// if distance < 0.01 => observer is on wrong side of mirror, don't render
if (observerDistance < 0.01)
return;
return false;
// set camera world position = observerPos + normal * 2 * distance
MT_Point3 cameraWorldPos = observerWorldPos + (MT_Scalar(2.0)*observerDistance)*mirrorWorldZ;
m_camera->GetSGNode()->SetLocalPosition(cameraWorldPos);
@@ -215,7 +239,15 @@ void ImageRender::Render()
RAS_Rect area = m_canvas->GetWindowArea();
// The screen area that ImageViewport will copy is also the rendering zone
m_canvas->SetViewPort(m_position[0], m_position[1], m_position[0]+m_capSize[0]-1, m_position[1]+m_capSize[1]-1);
if (m_offscreen) {
// bind the fbo and set the viewport to full size
m_offscreen->ofs->Bind(RAS_IOffScreen::RAS_OFS_BIND_RENDER);
// this is needed to avoid a crash in the canvas viewport check
m_canvas->UpdateViewPort(0, 0, m_offscreen->ofs->GetWidth(), m_offscreen->ofs->GetHeight());
}
else {
m_canvas->SetViewPort(m_position[0], m_position[1], m_position[0]+m_capSize[0]-1, m_position[1]+m_capSize[1]-1);
}
m_canvas->ClearColor(m_background[0], m_background[1], m_background[2], m_background[3]);
m_canvas->ClearBuffer(RAS_ICanvas::COLOR_BUFFER|RAS_ICanvas::DEPTH_BUFFER);
m_rasterizer->BeginFrame(m_engine->GetClockTime());
@@ -292,17 +324,18 @@ void ImageRender::Render()
MT_Transform camtrans(m_camera->GetWorldToCamera());
MT_Matrix4x4 viewmat(camtrans);
m_rasterizer->SetViewMatrix(viewmat, m_camera->NodeGetWorldOrientation(), m_camera->NodeGetWorldPosition(), m_camera->GetCameraData()->m_perspective);
m_rasterizer->SetViewMatrix(viewmat, m_camera->NodeGetWorldOrientation(), m_camera->NodeGetWorldPosition(), m_camera->NodeGetLocalScaling(), m_camera->GetCameraData()->m_perspective);
m_camera->SetModelviewMatrix(viewmat);
// restore the stereo mode now that the matrix is computed
m_rasterizer->SetStereoMode(stereomode);
if (stereomode == RAS_IRasterizer::RAS_STEREO_QUADBUFFERED) {
// In QUAD buffer stereo mode, the GE render pass ends with the right eye on the right buffer
// but we need to draw on the left buffer to capture the render
// TODO: implement an explicit function in rasterizer to restore the left buffer.
m_rasterizer->SetEye(RAS_IRasterizer::RAS_STEREO_LEFTEYE);
}
if (m_rasterizer->Stereo()) {
// stereo mode change render settings that disturb this render, cancel them all
// we don't need to restore them as they are set before each frame render.
glDrawBuffer(GL_BACK_LEFT);
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glDisable(GL_POLYGON_STIPPLE);
}
m_scene->CalculateVisibleMeshes(m_rasterizer,m_camera);
@@ -314,8 +347,48 @@ void ImageRender::Render()
// restore the canvas area now that the render is completed
m_canvas->GetWindowArea() = area;
m_canvas->EndFrame();
// In case multisample is active, blit the FBO
if (m_offscreen)
m_offscreen->ofs->Blit();
// end of all render operations, let's create a sync object just in case
if (m_sync) {
// a sync from a previous render, should not happen
delete m_sync;
m_sync = NULL;
}
m_sync = m_rasterizer->CreateSync(RAS_ISync::RAS_SYNC_TYPE_FENCE);
// remember that we have done render
m_done = true;
// the image is not available at this stage
m_avail = false;
return true;
}
void ImageRender::Unbind()
{
if (m_offscreen)
{
m_offscreen->ofs->Unbind();
}
}
void ImageRender::WaitSync()
{
if (m_sync) {
m_sync->Wait();
// done with it, delete it
delete m_sync;
m_sync = NULL;
}
if (m_offscreen) {
// this is needed to finalize the image if the target is a texture
m_offscreen->ofs->MipMap();
}
// all render operations are complete; invalidate the render for next time
m_done = false;
}
// cast Image pointer to ImageRender
inline ImageRender * getImageRender (PyImage *self)
@@ -337,11 +410,13 @@ static int ImageRender_init(PyObject *pySelf, PyObject *args, PyObject *kwds)
PyObject *scene;
// camera object
PyObject *camera;
// offscreen buffer object
PyRASOffScreen *offscreen = NULL;
// parameter keywords
static const char *kwlist[] = {"sceneObj", "cameraObj", NULL};
static const char *kwlist[] = {"sceneObj", "cameraObj", "ofsObj", NULL};
// get parameters
if (!PyArg_ParseTupleAndKeywords(args, kwds, "OO",
const_cast<char**>(kwlist), &scene, &camera))
if (!PyArg_ParseTupleAndKeywords(args, kwds, "OO|O",
const_cast<char**>(kwlist), &scene, &camera, &offscreen))
return -1;
try
{
@@ -357,11 +432,16 @@ static int ImageRender_init(PyObject *pySelf, PyObject *args, PyObject *kwds)
// throw exception if camera is not available
if (cameraPtr == NULL) THRWEXCP(CameraInvalid, S_OK);
if (offscreen) {
if (Py_TYPE(offscreen) != &PyRASOffScreen_Type) {
THRWEXCP(OffScreenInvalid, S_OK);
}
}
// get pointer to image structure
PyImage *self = reinterpret_cast<PyImage*>(pySelf);
// create source object
if (self->m_image != NULL) delete self->m_image;
self->m_image = new ImageRender(scenePtr, cameraPtr);
self->m_image = new ImageRender(scenePtr, cameraPtr, offscreen);
}
catch (Exception & exp)
{
@@ -372,6 +452,55 @@ static int ImageRender_init(PyObject *pySelf, PyObject *args, PyObject *kwds)
return 0;
}
static PyObject *ImageRender_refresh(PyImage *self, PyObject *args)
{
ImageRender *imageRender = getImageRender(self);
if (!imageRender) {
PyErr_SetString(PyExc_TypeError, "Incomplete ImageRender() object");
return NULL;
}
if (PyArg_ParseTuple(args, "")) {
// refresh called with no argument:
// for other image objects this simply invalidates the image buffer,
// for ImageRender it triggers a render and waits for its completion.
// Note that this only makes sense when rendering offscreen to a texture.
if (!imageRender->isDone()) {
if (!imageRender->Render()) {
Py_RETURN_FALSE;
}
// as we are not trying to read the pixels, just unbind
imageRender->Unbind();
}
// wait until all render operations are completed
// this will also finalize the texture
imageRender->WaitSync();
Py_RETURN_TRUE;
}
else {
// fallback on standard processing
PyErr_Clear();
return Image_refresh(self, args);
}
}
// render the scene without reading back the pixels
static PyObject *ImageRender_render(PyImage *self)
{
ImageRender *imageRender = getImageRender(self);
if (!imageRender) {
PyErr_SetString(PyExc_TypeError, "Incomplete ImageRender() object");
return NULL;
}
if (!imageRender->Render()) {
Py_RETURN_FALSE;
}
// we are not reading the pixels now, unbind
imageRender->Unbind();
Py_RETURN_TRUE;
}
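The render()/refresh() pair lets a script start the scene render early in the frame and only block on the GPU fence later; a hedged sketch, assuming an offscreen buffer factory such as bge.render.offScreenCreate() introduced alongside this patch (name, signature and object names are assumptions):

    from bge import logic, render, texture

    scene = logic.getCurrentScene()
    cam = scene.objects["ObserverCam"]           # hypothetical camera name
    ofs = render.offScreenCreate(1920, 1080)     # assumed helper returning a PyRASOffScreen
    img = texture.ImageRender(scene, cam, ofs)

    img.render()    # kick off the render, unbind the FBO and return immediately
    # ... run other game logic here ...
    img.refresh()   # wait for the fence, finalize the FBO texture, invalidate for next frame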
// get background color
static PyObject *getBackground (PyImage *self, void *closure)
@@ -410,7 +539,8 @@ static int setBackground(PyImage *self, PyObject *value, void *closure)
// methods structure
static PyMethodDef imageRenderMethods[] =
{ // methods from ImageBase class
{"refresh", (PyCFunction)Image_refresh, METH_NOARGS, "Refresh image - invalidate its current content"},
{"refresh", (PyCFunction)ImageRender_refresh, METH_VARARGS, "Refresh image - invalidate its current content after optionally transferring its content to a target buffer"},
{"render", (PyCFunction)ImageRender_render, METH_NOARGS, "Render scene - run before refresh() to performs asynchronous render"},
{NULL}
};
// attributes structure
@@ -601,7 +731,9 @@ static PyGetSetDef imageMirrorGetSets[] =
ImageRender::ImageRender (KX_Scene *scene, KX_GameObject *observer, KX_GameObject *mirror, RAS_IPolyMaterial *mat) :
ImageViewport(),
m_render(false),
m_done(false),
m_scene(scene),
m_offscreen(NULL),
m_observer(observer),
m_mirror(mirror),
m_clip(100.f)

View File

@@ -39,6 +39,8 @@
#include "DNA_screen_types.h"
#include "RAS_ICanvas.h"
#include "RAS_IRasterizer.h"
#include "RAS_IOffScreen.h"
#include "RAS_ISync.h"
#include "ImageViewport.h"
@@ -48,7 +50,7 @@ class ImageRender : public ImageViewport
{
public:
/// constructor
ImageRender(KX_Scene *scene, KX_Camera *camera);
ImageRender(KX_Scene *scene, KX_Camera *camera, PyRASOffScreen *offscreen);
ImageRender(KX_Scene *scene, KX_GameObject *observer, KX_GameObject *mirror, RAS_IPolyMaterial * mat);
/// destructor
@@ -63,16 +65,30 @@ public:
float getClip (void) { return m_clip; }
/// set whole buffer use
void setClip (float clip) { m_clip = clip; }
/// render status
bool isDone() { return m_done; }
/// render frame (public so that it is accessible from python)
bool Render();
/// in case fbo is used, method to unbind
void Unbind();
/// wait for render to complete
void WaitSync();
protected:
/// true if ready to render
bool m_render;
/// is render done already?
bool m_done;
/// rendered scene
KX_Scene * m_scene;
/// camera for render
KX_Camera * m_camera;
/// do we own the camera?
bool m_owncamera;
/// if offscreen render
PyRASOffScreen *m_offscreen;
/// object to synchronize render even if no buffer transfer
RAS_ISync *m_sync;
/// for mirror operation
KX_GameObject * m_observer;
KX_GameObject * m_mirror;
@@ -91,15 +107,15 @@ protected:
KX_KetsjiEngine* m_engine;
/// background color
float m_background[4];
float m_background[4];
/// render 3d scene to image
virtual void calcImage (unsigned int texId, double ts);
virtual void calcImage (unsigned int texId, double ts) { calcViewport(texId, ts, GL_RGBA); }
/// render 3d scene to image
virtual void calcViewport (unsigned int texId, double ts, unsigned int format);
void Render();
void SetupRenderFrame(KX_Scene *scene, KX_Camera* cam);
void RenderFrame(KX_Scene* scene, KX_Camera* cam);
void setBackgroundFromScene(KX_Scene *scene);
void SetWorldSettings(KX_WorldInfo* wi);
};

View File

@@ -45,14 +45,22 @@
// constructor
ImageViewport::ImageViewport (void) : m_alpha(false), m_texInit(false)
ImageViewport::ImageViewport (PyRASOffScreen *offscreen) : m_alpha(false), m_texInit(false)
{
// get viewport rectangle
RAS_Rect rect = KX_GetActiveEngine()->GetCanvas()->GetWindowArea();
m_viewport[0] = rect.GetLeft();
m_viewport[1] = rect.GetBottom();
m_viewport[2] = rect.GetWidth();
m_viewport[3] = rect.GetHeight();
if (offscreen) {
m_viewport[0] = 0;
m_viewport[1] = 0;
m_viewport[2] = offscreen->ofs->GetWidth();
m_viewport[3] = offscreen->ofs->GetHeight();
}
else {
RAS_Rect rect = KX_GetActiveEngine()->GetCanvas()->GetWindowArea();
m_viewport[0] = rect.GetLeft();
m_viewport[1] = rect.GetBottom();
m_viewport[2] = rect.GetWidth();
m_viewport[3] = rect.GetHeight();
}
//glGetIntegerv(GL_VIEWPORT, m_viewport);
// create buffer for viewport image
@@ -60,7 +68,7 @@ ImageViewport::ImageViewport (void) : m_alpha(false), m_texInit(false)
// float (1 float = 4 bytes per pixel)
m_viewportImage = new BYTE [4 * getViewportSize()[0] * getViewportSize()[1]];
// set attributes
setWhole(false);
setWhole((offscreen) ? true : false);
}
// destructor
@@ -126,25 +134,26 @@ void ImageViewport::setPosition (GLint pos[2])
// capture image from viewport
void ImageViewport::calcImage (unsigned int texId, double ts)
void ImageViewport::calcViewport (unsigned int texId, double ts, unsigned int format)
{
// if scale was changed
if (m_scaleChange)
// reset image
init(m_capSize[0], m_capSize[1]);
// if texture wasn't initialized
if (!m_texInit) {
if (!m_texInit && texId != 0) {
// initialize it
loadTexture(texId, m_image, m_size);
m_texInit = true;
}
// if texture can be directly created
if (texId != 0 && m_pyfilter == NULL && m_capSize[0] == calcSize(m_capSize[0])
&& m_capSize[1] == calcSize(m_capSize[1]) && !m_flip && !m_zbuff && !m_depth)
if (texId != 0 && m_pyfilter == NULL && m_size[0] == m_capSize[0] &&
m_size[1] == m_capSize[1] && !m_flip && !m_zbuff && !m_depth)
{
// just copy current viewport to texture
glBindTexture(GL_TEXTURE_2D, texId);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, m_upLeft[0], m_upLeft[1], (GLsizei)m_capSize[0], (GLsizei)m_capSize[1]);
glBindTexture(GL_TEXTURE_2D, 0);
// image is not available
m_avail = false;
}
@@ -176,11 +185,33 @@ void ImageViewport::calcImage (unsigned int texId, double ts)
// get frame buffer data
if (m_alpha) {
glReadPixels(m_upLeft[0], m_upLeft[1], (GLsizei)m_capSize[0], (GLsizei)m_capSize[1], GL_RGBA,
GL_UNSIGNED_BYTE, m_viewportImage);
// filter loaded data
FilterRGBA32 filt;
filterImage(filt, m_viewportImage, m_capSize);
// as we are reading the pixels in the native format, we can read directly into the image buffer
// provided no processing (scaling, flipping, filtering) is needed on the image
if (m_size[0] == m_capSize[0] &&
m_size[1] == m_capSize[1] &&
!m_flip &&
!m_pyfilter)
{
glReadPixels(m_upLeft[0], m_upLeft[1], (GLsizei)m_capSize[0], (GLsizei)m_capSize[1], format,
GL_UNSIGNED_BYTE, m_image);
m_avail = true;
}
else if (!m_pyfilter) {
glReadPixels(m_upLeft[0], m_upLeft[1], (GLsizei)m_capSize[0], (GLsizei)m_capSize[1], format,
GL_UNSIGNED_BYTE, m_viewportImage);
FilterRGBA32 filt;
filterImage(filt, m_viewportImage, m_capSize);
}
else {
glReadPixels(m_upLeft[0], m_upLeft[1], (GLsizei)m_capSize[0], (GLsizei)m_capSize[1], GL_RGBA,
GL_UNSIGNED_BYTE, m_viewportImage);
FilterRGBA32 filt;
filterImage(filt, m_viewportImage, m_capSize);
if (format == GL_BGRA) {
// in place byte swapping
swapImageBR();
}
}
}
else {
glReadPixels(m_upLeft[0], m_upLeft[1], (GLsizei)m_capSize[0], (GLsizei)m_capSize[1], GL_RGB,
@@ -188,12 +219,46 @@ void ImageViewport::calcImage (unsigned int texId, double ts)
// filter loaded data
FilterRGB24 filt;
filterImage(filt, m_viewportImage, m_capSize);
if (format == GL_BGRA) {
// in place byte swapping
swapImageBR();
}
}
}
}
}
}
bool ImageViewport::loadImage(unsigned int *buffer, unsigned int size, unsigned int format, double ts)
{
unsigned int *tmp_image;
bool ret;
// if scale was changed
if (m_scaleChange) {
// reset image
init(m_capSize[0], m_capSize[1]);
}
// size must be identical
if (size < getBuffSize())
return false;
if (m_avail) {
// just copy
return ImageBase::loadImage(buffer, size, format, ts);
}
else {
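// temporarily point m_image at the caller's buffer so that calcViewport() reads the pixels directly into it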
tmp_image = m_image;
m_image = buffer;
calcViewport(0, ts, format);
ret = m_avail;
m_image = tmp_image;
// since the image was not loaded to our buffer, it's not valid
m_avail = false;
}
return ret;
}
// cast Image pointer to ImageViewport
@@ -336,7 +401,7 @@ int ImageViewport_setCaptureSize(PyImage *self, PyObject *value, void *closure)
// methods structure
static PyMethodDef imageViewportMethods[] =
{ // methods from ImageBase class
{"refresh", (PyCFunction)Image_refresh, METH_NOARGS, "Refresh image - invalidate its current content"},
{"refresh", (PyCFunction)Image_refresh, METH_VARARGS, "Refresh image - invalidate its current content"},
{NULL}
};
// attributes structure

View File

@@ -35,6 +35,7 @@
#include "Common.h"
#include "ImageBase.h"
#include "RAS_IOffScreen.h"
/// class for viewport access
@@ -42,7 +43,7 @@ class ImageViewport : public ImageBase
{
public:
/// constructor
ImageViewport (void);
ImageViewport (PyRASOffScreen *offscreen=NULL);
/// destructor
virtual ~ImageViewport (void);
@@ -67,6 +68,9 @@ public:
/// set position in viewport
void setPosition (GLint pos[2] = NULL);
/// capture image from viewport to user buffer
virtual bool loadImage(unsigned int *buffer, unsigned int size, unsigned int format, double ts);
protected:
/// frame buffer rectangle
GLint m_viewport[4];
@@ -89,7 +93,10 @@ protected:
bool m_texInit;
/// capture image from viewport
virtual void calcImage (unsigned int texId, double ts);
virtual void calcImage (unsigned int texId, double ts) { calcViewport(texId, ts, GL_RGBA); }
/// capture image from viewport
virtual void calcViewport (unsigned int texId, double ts, unsigned int format);
/// get viewport size
GLint * getViewportSize (void) { return m_viewport + 2; }

View File

@@ -393,9 +393,10 @@ static PyObject *Texture_refresh(Texture *self, PyObject *args)
}
// load texture for rendering
loadTexture(self->m_actTex, texture, size, self->m_mipmap);
// refresh texture source, if required
if (refreshSource) self->m_source->m_image->refresh();
}
// refresh texture source, if required
if (refreshSource) {
self->m_source->m_image->refresh();
}
}
}

View File

@@ -137,8 +137,53 @@ PyObject *Video_getStatus(PyImage *self, void *closure)
}
// refresh video
PyObject *Video_refresh(PyImage *self)
PyObject *Video_refresh(PyImage *self, PyObject *args)
{
Py_buffer buffer;
char *mode = NULL;
unsigned int format;
double ts = -1.0;
memset(&buffer, 0, sizeof(buffer));
if (PyArg_ParseTuple(args, "|s*sd:refresh", &buffer, &mode, &ts)) {
if (buffer.buf) {
// a target buffer is provided, verify its format
if (buffer.readonly) {
PyErr_SetString(PyExc_TypeError, "Buffers passed in argument must be writable");
}
else if (!PyBuffer_IsContiguous(&buffer, 'C')) {
PyErr_SetString(PyExc_TypeError, "Buffers passed in argument must be contiguous in memory");
}
else if (((intptr_t)buffer.buf & 3) != 0) {
PyErr_SetString(PyExc_TypeError, "Buffers passed in argument must be aligned to 4 bytes boundary");
}
else {
// ready to get the image into our buffer
try {
if (mode == NULL || !strcmp(mode, "RGBA"))
format = GL_RGBA;
else if (!strcmp(mode, "BGRA"))
format = GL_BGRA;
else
THRWEXCP(InvalidImageMode,S_OK);
if (!self->m_image->loadImage((unsigned int *)buffer.buf, buffer.len, format, ts)) {
PyErr_SetString(PyExc_TypeError, "Could not load the buffer, perhaps size is not compatible");
}
}
catch (Exception & exp) {
exp.report();
}
}
PyBuffer_Release(&buffer);
if (PyErr_Occurred())
return NULL;
}
}
else
{
return NULL;
}
getVideo(self)->refresh();
return Video_getStatus(self, NULL);
}

View File

@@ -190,7 +190,7 @@ void Video_open(VideoBase *self, char *file, short captureID);
PyObject *Video_play(PyImage *self);
PyObject *Video_pause(PyImage *self);
PyObject *Video_stop(PyImage *self);
PyObject *Video_refresh(PyImage *self);
PyObject *Video_refresh(PyImage *self, PyObject *args);
PyObject *Video_getStatus(PyImage *self, void *closure);
PyObject *Video_getRange(PyImage *self, void *closure);
int Video_setRange(PyImage *self, PyObject *value, void *closure);

File diff suppressed because it is too large

View File

@@ -0,0 +1,256 @@
/*
* ***** BEGIN GPL LICENSE BLOCK *****
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
* of the License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software Foundation,
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* The Original Code is Copyright (C) 2015, Blender Foundation
* All rights reserved.
*
* The Original Code is: all of this file.
*
* Contributor(s): Blender Foundation.
*
* ***** END GPL LICENSE BLOCK *****
*/
/** \file VideoDeckLink.h
* \ingroup bgevideotex
*/
#ifndef __VIDEODECKLINK_H__
#define __VIDEODECKLINK_H__
#ifdef WITH_GAMEENGINE_DECKLINK
/* this needs to be parsed with __cplusplus defined before being included through DeckLink_compat.h */
#if defined(__FreeBSD__)
# include <inttypes.h>
#endif
#include <map>
#include <set>
extern "C" {
#include <pthread.h>
#include "DNA_listBase.h"
#include "BLI_threads.h"
#include "BLI_blenlib.h"
}
#include "GL/glew.h"
#ifdef WIN32
#include "dvpapi.h"
#endif
#include "DeckLinkAPI.h"
#include "VideoBase.h"
class PinnedMemoryAllocator;
struct TextureDesc
{
uint32_t width;
uint32_t height;
uint32_t stride;
uint32_t size;
GLenum internalFormat;
GLenum format;
GLenum type;
TextureDesc()
{
width = 0;
height = 0;
stride = 0;
size = 0;
internalFormat = 0;
format = 0;
type = 0;
}
};
class CaptureDelegate;
// type VideoDeckLink declaration
class VideoDeckLink : public VideoBase
{
friend class CaptureDelegate;
public:
/// constructor
VideoDeckLink (HRESULT * hRslt);
/// destructor
virtual ~VideoDeckLink ();
/// open video/image file
virtual void openFile(char *file);
/// open video capture device
virtual void openCam(char *driver, short camIdx);
/// release video source
virtual bool release (void);
/// override base refresh to handle fixed image
virtual void refresh(void);
/// play video
virtual bool play (void);
/// pause video
virtual bool pause (void);
/// stop video
virtual bool stop (void);
/// set play range
virtual void setRange (double start, double stop);
/// set frame rate
virtual void setFrameRate (float rate);
protected:
// format and codec information
/// image calculation
virtual void calcImage (unsigned int texId, double ts);
private:
void VideoFrameArrived(IDeckLinkVideoInputFrame* inputFrame);
void LockCache()
{
pthread_mutex_lock(&mCacheMutex);
}
void UnlockCache()
{
pthread_mutex_unlock(&mCacheMutex);
}
IDeckLinkInput* mDLInput;
BMDDisplayMode mDisplayMode;
BMDPixelFormat mPixelFormat;
bool mUse3D;
uint32_t mFrameWidth;
uint32_t mFrameHeight;
TextureDesc mTextureDesc;
PinnedMemoryAllocator* mpAllocator;
CaptureDelegate* mpCaptureDelegate;
// cache frame in transit between the callback thread and the main BGE thread
// keep only one frame in cache because we just want to keep up with real time
pthread_mutex_t mCacheMutex;
IDeckLinkVideoInputFrame* mpCacheFrame;
bool mClosing;
};
inline VideoDeckLink *getDeckLink(PyImage *self)
{
return static_cast<VideoDeckLink*>(self->m_image);
}
////////////////////////////////////////////
// TextureTransfer : Abstract class to perform a transfer to GPU memory using fast transfer if available
////////////////////////////////////////////
class TextureTransfer
{
public:
TextureTransfer() {}
virtual ~TextureTransfer() { }
virtual void PerformTransfer() = 0;
protected:
static bool _PinBuffer(void *address, uint32_t size);
static void _UnpinBuffer(void* address, uint32_t size);
};
////////////////////////////////////////////
// PinnedMemoryAllocator
////////////////////////////////////////////
// PinnedMemoryAllocator implements the IDeckLinkMemoryAllocator interface and can be used instead of the
// built-in frame allocator, by setting with SetVideoInputFrameMemoryAllocator() or SetVideoOutputFrameMemoryAllocator().
//
// A custom frame memory allocator is used here to ensure each frame memory address
// is aligned on a 4kB boundary, as required by the OpenGL pinned memory extension.
// If the pinned memory extension is not available, this allocator is still used:
// it caches frame allocations for efficiency.
//
// The frame cache delays the releasing of buffers until the cache fills up, thereby avoiding an
// allocate plus pin operation for every frame, followed by an unpin and deallocate on every frame.
class PinnedMemoryAllocator : public IDeckLinkMemoryAllocator
{
public:
PinnedMemoryAllocator(unsigned cacheSize, size_t memSize);
virtual ~PinnedMemoryAllocator();
void TransferBuffer(void* address, TextureDesc* texDesc, GLuint texId);
// IUnknown methods
virtual HRESULT STDMETHODCALLTYPE QueryInterface(REFIID iid, LPVOID *ppv);
virtual ULONG STDMETHODCALLTYPE AddRef(void);
virtual ULONG STDMETHODCALLTYPE Release(void);
// IDeckLinkMemoryAllocator methods
virtual HRESULT STDMETHODCALLTYPE AllocateBuffer(dl_size_t bufferSize, void* *allocatedBuffer);
virtual HRESULT STDMETHODCALLTYPE ReleaseBuffer(void* buffer);
virtual HRESULT STDMETHODCALLTYPE Commit();
virtual HRESULT STDMETHODCALLTYPE Decommit();
private:
static bool mGPUDirectInitialized;
static bool mHasDvp;
static bool mHasAMDPinnedMemory;
static size_t mReservedProcessMemory;
static bool ReserveMemory(size_t size);
void Lock()
{
pthread_mutex_lock(&mMutex);
}
void Unlock()
{
pthread_mutex_unlock(&mMutex);
}
HRESULT _ReleaseBuffer(void* buffer);
uint32_t mRefCount;
// protect the cache and the allocated map,
// not the pinnedBuffer map as it is only used from main thread
pthread_mutex_t mMutex;
std::map<void*, uint32_t> mAllocatedSize;
std::vector<void*> mBufferCache;
std::map<void *, TextureTransfer*> mPinnedBuffer;
#ifdef WIN32
DVPBufferHandle mDvpCaptureTextureHandle;
#endif
// target texture in GPU
GLuint mTexId;
uint32_t mBufferCacheSize;
};
////////////////////////////////////////////
// Capture Delegate Class
////////////////////////////////////////////
class CaptureDelegate : public IDeckLinkInputCallback
{
VideoDeckLink* mpOwner;
public:
CaptureDelegate(VideoDeckLink* pOwner);
// IUnknown needs only a dummy implementation
virtual HRESULT STDMETHODCALLTYPE QueryInterface(REFIID iid, LPVOID *ppv) { return E_NOINTERFACE; }
virtual ULONG STDMETHODCALLTYPE AddRef() { return 1; }
virtual ULONG STDMETHODCALLTYPE Release() { return 1; }
virtual HRESULT STDMETHODCALLTYPE VideoInputFrameArrived(IDeckLinkVideoInputFrame *videoFrame, IDeckLinkAudioInputPacket *audioPacket);
virtual HRESULT STDMETHODCALLTYPE VideoInputFormatChanged(BMDVideoInputFormatChangedEvents notificationEvents, IDeckLinkDisplayMode *newDisplayMode, BMDDetectedVideoInputFormatFlags detectedSignalFlags);
};
#endif /* WITH_GAMEENGINE_DECKLINK */
#endif /* __VIDEODECKLINK_H__ */
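On the capture side, the typical use is to feed the captured frames to a texture; a hedged sketch where the "displayMode/pixelFormat" string syntax and the '2vuy' 8-bit YUV value are assumptions based on decklink_ReadDisplayMode()/decklink_ReadPixelFormat(), and YUV frames still require a conversion shader on the material:

    from bge import logic, texture

    cont = logic.getCurrentController()
    obj = cont.owner
    if not hasattr(logic, "video"):
        # assumed format string: display mode, then pixel format
        logic.video = texture.VideoDeckLink("HD1080p30/2vuy", 0)
        logic.tex = texture.Texture(obj, 0, 0)
        logic.tex.source = logic.video
        logic.video.play()
    # upload the latest captured frame to the GPU each logic frame
    logic.tex.refresh(True)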

View File

@@ -1203,7 +1203,7 @@ static PyMethodDef videoMethods[] =
{"play", (PyCFunction)Video_play, METH_NOARGS, "Play (restart) video"},
{"pause", (PyCFunction)Video_pause, METH_NOARGS, "pause video"},
{"stop", (PyCFunction)Video_stop, METH_NOARGS, "stop video (play will replay it from start)"},
{"refresh", (PyCFunction)Video_refresh, METH_NOARGS, "Refresh video - get its status"},
{"refresh", (PyCFunction)Video_refresh, METH_VARARGS, "Refresh video - get its status"},
{NULL}
};
// attributes structure
@@ -1326,7 +1326,7 @@ static PyObject *Image_reload(PyImage *self, PyObject *args)
// methods structure
static PyMethodDef imageMethods[] =
{ // methods from VideoBase class
{"refresh", (PyCFunction)Video_refresh, METH_NOARGS, "Refresh image, i.e. load it"},
{"refresh", (PyCFunction)Video_refresh, METH_VARARGS, "Refresh image, i.e. load it"},
{"reload", (PyCFunction)Image_reload, METH_VARARGS, "Reload image, i.e. reopen it"},
{NULL}
};

View File

@@ -128,6 +128,10 @@ static PyMethodDef moduleMethods[] =
extern PyTypeObject VideoFFmpegType;
extern PyTypeObject ImageFFmpegType;
#endif
#ifdef WITH_GAMEENGINE_DECKLINK
extern PyTypeObject VideoDeckLinkType;
extern PyTypeObject DeckLinkType;
#endif
extern PyTypeObject FilterBlueScreenType;
extern PyTypeObject FilterGrayType;
extern PyTypeObject FilterColorType;
@@ -144,6 +148,9 @@ static void registerAllTypes(void)
#ifdef WITH_FFMPEG
pyImageTypes.add(&VideoFFmpegType, "VideoFFmpeg");
pyImageTypes.add(&ImageFFmpegType, "ImageFFmpeg");
#endif
#ifdef WITH_GAMEENGINE_DECKLINK
pyImageTypes.add(&VideoDeckLinkType, "VideoDeckLink");
#endif
pyImageTypes.add(&ImageBuffType, "ImageBuff");
pyImageTypes.add(&ImageMixType, "ImageMix");
@@ -194,6 +201,10 @@ PyMODINIT_FUNC initVideoTexturePythonBinding(void)
return NULL;
if (PyType_Ready(&TextureType) < 0)
return NULL;
#ifdef WITH_GAMEENGINE_DECKLINK
if (PyType_Ready(&DeckLinkType) < 0)
return NULL;
#endif
m = PyModule_Create(&VideoTexture_module_def);
PyDict_SetItemString(PySys_GetObject("modules"), VideoTexture_module_def.m_name, m);
@@ -207,6 +218,10 @@ PyMODINIT_FUNC initVideoTexturePythonBinding(void)
Py_INCREF(&TextureType);
PyModule_AddObject(m, "Texture", (PyObject *)&TextureType);
#ifdef WITH_GAMEENGINE_DECKLINK
Py_INCREF(&DeckLinkType);
PyModule_AddObject(m, "DeckLink", (PyObject *)&DeckLinkType);
#endif
PyModule_AddIntConstant(m, "SOURCE_ERROR", SourceError);
PyModule_AddIntConstant(m, "SOURCE_EMPTY", SourceEmpty);
PyModule_AddIntConstant(m, "SOURCE_READY", SourceReady);