WebGL API.
Foreword
GPAC supports the WebGL 1.0 Core API. For more documentation, please check https://www.khronos.org/registry/webgl/specs/latest/1.0
The WebGL API cannot currently be loaded when using SVG or VRML scripts. It is only available for JSFilter, and shall be loaded as a JS module with the name "webgl":
import * as webgl from 'webgl'
...
or
...
The API implements most of WebGL 1.0 context calls. What is not supported:
- premultiplied alpha (gl.UNPACK_PREMULTIPLY_ALPHA_WEBGL and related pixel store flags)
- WebGL extensions (gl.getExtension)
WebGL Context
The WebGL API in GPAC does not use any canvas element, since it is designed to run outside of a DOM. The WebGL context shall therefore be created using a constructor call.
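For illustration, a minimal offscreen context creation could look as follows (a sketch only: the named import form, the use of the initialize callback and the 1280x720 size are assumptions; the constructor can also take a WebGLContextAttributes object, as described below):
import {WebGLContext} from 'webgl'

let gl = null;

filter.initialize = function() {
//create an offscreen 1280x720 WebGL context - the size here is arbitrary
gl = new WebGLContext(1280, 720);
}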
A filter is responsible for deciding when to issue GL calls: this can be in a process function or in a task callback (see JSFilter::post_task). There is no such thing as requestAnimationFrame in GPAC. Consequently, the owning filter is responsible for:
- activating and deactivating the context in order to make the associated GL context active and bind / unbind the underlying framebuffer
- resizing the framebuffer when needed
- Warning
- it is unsafe to assume that your filter owns the OpenGL context; other filters may be operating on the same context. This means that the context state (viewport, clearColor, etc.) shall be restored when reactivating the context.
- Note
- WebGL filters always run on the main thread to avoid concurrent usage of the OpenGL context, but this might change in the future.
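In practice, a drawing task brackets its GL calls with context activation and restores whatever state it relies on. A minimal sketch (width and height are assumed to be tracked by the filter):
filter.process = function() {
//make the GL context current and bind the underlying framebuffer
gl.activate(true);
//restore the state we rely on, since another filter may have changed it
gl.viewport(0, 0, width, height);
gl.clearColor(0.0, 0.0, 0.0, 1.0);
gl.clear(gl.COLOR_BUFFER_BIT);
//... issue draw calls here ...
gl.flush();
//release the context
gl.activate(false);
return GF_OK;
}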
The WebGL API in GPAC works by default on offscreen framebuffer objects:
- the color attachment is always a texture object, RGBA 32 bits or RGB 24 bits (see WebGLContextAttributes)
- the depth attachment is a renderbuffer by default and cannot be exported; this behavior can be changed by setting the "depth" attribute of the WebGLContextAttributes object to "texture" before creation, thereby creating a depth texture attachment; the format is 24-bit integer precision (desktop) or 16-bit integer precision (iOS, Android).
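For example, requesting a depth texture attachment instead of the default renderbuffer could look like this (a sketch, assuming the attributes are passed as an object to the constructor):
//request a texture-based depth attachment so it can later be exported
let gl = new WebGLContext(1280, 720, {depth: "texture"});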
The underlying framebuffer color texture attachment and, if enabled, the depth texture attachment can be dispatched as a GPAC packet using FilterPid::new_packet; this allows forwarding framebuffer data to other filters without copying the framebuffer content to system memory.
When forwarding a framebuffer, it is recommended not to draw anything nor to activate the GL context until all references to the packet holding the framebuffer are consumed. A callback function is used for that, see the example below.
- Note
- you can always use glReadPixels to read back the framebuffer and send packets using the usual FilterPacket tools.
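A rough sketch of such a read-back (assuming an output PID opid configured for 'rgba' with matching width and height, and that FilterPid new_packet accepts an ArrayBuffer copy):
//read the framebuffer back to system memory
let pixels = new Uint8Array(width * height * 4);
gl.readPixels(0, 0, width, height, gl.RGBA, gl.UNSIGNED_BYTE, pixels);
//send it as a regular packet holding a copy of the data
let opck = opid.new_packet(pixels.buffer);
opck.send();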
The WebGL API in GPAC can also be configured to run on the primary framebuffer; this is achieved by setting the "primary" attribute of the WebGLContextAttributes object to true at creation. In this case:
- the depth buffer cannot be delivered as a texture
- video output SHALL be created before WebGL context creation (typically by loading the video output filter before the JS filter)
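A sketch of such a creation (the video output filter is assumed to be already loaded):
//render directly to the primary framebuffer of the video output
let gl = new WebGLContext(1280, 720, {primary: true});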
Texturing
WebGL offers two ways of creating textures:
- regular texImage2D using ArrayBuffers created from the script. This is obviously supported in GPAC.
- texImage2D from TexImageSource
TexImageSource in WebGL can be ImageBitmap, ImageData, HTMLImageElement, HTMLCanvasElement, HTMLVideoElement or OffscreenCanvas.
Since GPAC doesn't run WebGL in a DOM/browser, these objects are not available for creating textures. Instead, the following objects can be used:
Using EVG textures
EVG Texture can be used to quickly load JPG or PNG images:
import * as evg from 'evg'

let texture = gl.createTexture();
let tx = new evg.Texture('source.jpg');
gl.bindTexture(gl.TEXTURE_2D, texture);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, tx);
EVG Texture combined with EVG Canvas can be used to draw text and 2D shapes:
let canvas = new evg.Canvas(200, 200, 'rgba');
//draw 2D shapes / text on the canvas here (see the EVG JS API)
let texture = gl.createTexture();
let tx = new evg.Texture(canvas);
gl.bindTexture(gl.TEXTURE_2D, texture);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, tx);
For more info on drawing with EVG, see EVG JS API
Using named textures
Dealing with pixel formats in OpenGL/WebGL/GLSL can be quite heavy:
- some pixel formats come in various component orderings, and only a subset is natively supported (e.g. RGBA is OK, but BGRA is not)
- some pixel formats are not natively supported by OpenGL/WebGL (typically, most/all flavors of YUV)
- some pixel formats are planar and require more than one texture to draw them, which is quite heavy to setup
- some video decoders might output directly as a set of one or more OpenGL textures on the GPU (NVDec, iOS VideoToolbox, Android MediaCodec)
In order to simplify your code and deal efficiently with most formats, the WebGL API in GPAC introduces the concept of named textures.
A named texture is a texture created with a name:
let tx = gl.createTexture('myVidTex');
The texture data is then associated using upload():
let pck = input_pid.get_packet();
tx.upload(pck);
tx.upload(some_evg_texture);
Regular bindTexture and texImage2D can also be used if you don't like changing your code too much:
let pck = input_pid.get_packet();
gl.bindTexture(gl.TEXTURE_2D, tx);
gl.texImage2D(target, level, internalformat, format, type, pck);
gl.bindTexture(gl.TEXTURE_2D, tx);
gl.texImage2D(target, level, internalformat, format, type, some_evg_texture);
The magic comes in when creating your shaders: any call to texture2D on a sampler2D using the same name as the NamedTexture is rewritten before compilation and replaced with GLSL code handling the pixel format conversion for you!
precision mediump float;
varying vec2 vTextureCoord;
uniform sampler2D myVidTex; //same name as the NamedTexture, will be rewritten before compilation
uniform sampler2D imageSampler; //regular WebGL texture
void main() {
vec2 tx = vTextureCoord;
vec4 vid = texture2D(myVidTex, tx);
vec4 img = texture2D(imageSampler, tx);
vid.a = img.a;
gl_FragColor = vid;
}
The resulting fragment shader may contain one or more sampler2D and a few additional uniforms, but they are managed for you by GPAC!
The named texture is then used as usual:
//bind the named texture on texture unit 0 (it may internally use several texture units)
gl.activeTexture(gl.TEXTURE0);
gl.bindTexture(gl.TEXTURE_2D, tx);
gl.uniform1i(myVidTexUniformLocation, 0);
//bind the regular texture on the next available unit
gl.activeTexture(gl.TEXTURE0 + tx.nb_textures);
gl.bindTexture(gl.TEXTURE_2D, imageTexture);
gl.uniform1i(imageSamplerUniformLocation, tx.nb_textures);
In the above code, note the usage of tx.nb_textures: this returns the number of texture units used by the named texture, which is needed to properly set up multitexturing.
- Warning
- A consequence of this is that you cannot reuse a fragment shader for both a NamedTexture and a regular WebGLTexture; this will simply not work.
- Using explicit location assignment in your shader on a named texture sampler2D is NOT supported:
layout(location = N) uniform sampler2D myVidTex; //NOT SUPPORTED
The core concept for dealing with NamedTexture is that the fragment shader sources must be set AFTER the texture has been set up (upload / texImage2D). Doing it before will result in an unmodified fragment shader and missing uniforms.
To summarize, NamedTexture allows you to use existing GLSL fragment shader sources with any pixel format for your source, provided that:
- you tag the texture with the name of the sampler2D you want to replace
- you upload data to your texture before creating the program using it
The NamedTexture does not track any pixel format or image width changes, mostly because the program needs recompiling anyway. This means that whenever the pixel format or source image width changes for a NamedTexture, you must:
- reset the NamedTexture by calling reconfigure()
- destroy your GLSL program,
- upload the new data to your NamedTexture
- resetup your fragment shader source and program
- Note
- The width must be checked because, for packed YUV formats, it is exposed as a uniform. You could also modify this uniform manually, see the "Inside named textures" section. The full sequence is sketched below.
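The above sequence, as a rough sketch (create_program stands for whatever helper compiles and links your shaders, and new_pck for the first packet carrying the new format):
//the source pixel format (or width, for packed YUV) has changed
tx.reconfigure();
gl.deleteProgram(program);
program = null;
//upload data in the new format first ...
tx.upload(new_pck);
//... then rebuild the program from the same (unmodified) shader sources
program = create_program(gl, vertexShaderSource, fragmentShaderSource);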
Example of using source FilterPacket with NamedTexture
//accept and produce video streams
filter.set_cap({id: "StreamType", value: "Video", inout: true});
//gl is a WebGLContext created beforehand, as described in the WebGL Context section
let tx = gl.createTexture('MyVid');
let program = null;
let width=0;
let height=0;
let pix_fmt = '';
let ipid = null;
let opid = null;
const vertexShaderSource = `
...
`;
const fragmentShaderSource = `
precision mediump float;
varying vec2 vTextureCoord;
uniform sampler2D MyVid;
void main() {
vec2 tx = vTextureCoord;
tx.y = 1.0 - tx.y;
gl_FragColor = texture2D(MyVid, tx);
}
`;
filter.configure_pid = function(pid) {
if (!opid) {
opid = this.new_pid();
}
ipid = pid;
opid.copy_props(pid);
opid.set_prop('PixelFormat', 'rgba');
opid.set_prop('Stride', null);
opid.set_prop('StrideUV', null);
let n_width = pid.get_prop('Width');
let n_height = pid.get_prop('Height');
let pf = pid.get_prop('PixelFormat');
if ((n_width != width) || (n_height != height)) {
width = n_width;
height = n_height;
}
if (pf != pix_fmt) {
pix_fmt = pf;
program = null;
tx.reconfigure();
}
}
filter.process = function()
{
//do not draw anything until the previous framebuffer packet has been consumed
if (filter.frame_pending) return GF_OK;
let ipck = ipid.get_packet();
if (!ipck) return GF_OK; //no input packet available yet
gl.activate(true);
//bind and upload the input packet to the named texture
gl.bindTexture(gl.TEXTURE_2D, tx);
gl.texImage2D(gl.TEXTURE_2D, 0, 0, 0, 0, ipck);
//setupProgram is a helper (not shown) assumed to return the compiled program and its uniform locations
if (!program) program = setupProgram(gl, vertexShaderSource, fragmentShaderSource);
gl.useProgram(program.program);
gl.activeTexture(gl.TEXTURE0);
gl.bindTexture(gl.TEXTURE_2D, tx);
gl.uniform1i(program.uniformLocations.txVid, 0);
//... vertex attribute setup and draw calls go here ...
gl.flush();
gl.activate(false);
//send the GL framebuffer as a packet; the callback fires once all references are consumed
let opck = opid.new_packet(gl, () => { filter.frame_pending = false; });
this.frame_pending = true;
opck.copy_props(ipck);
ipid.drop_packet();
opck.send();
}
Inside named textures
NamedTexture allows supporting all pixel formats currently used in GPAC without any conversion before GPU upload. Namely:
- YUV 420, 422 and 444 planar 8 bits (and 10 bits on desktop versions)
- YUYV, YVYU, UYVY, VYUY 422 8 bits
- NV12 and NV21 8 bits (and 10 bits on desktop versions)
- RGBA, ARGB, BGRA, ABGR, RGBX, XRGB, BGRX, XBGR
- AlphaGrey and GreyAlpha
- Greyscale
- RGB 444, RGB 555, RGB 565
If you want to have fun, the underlying uniforms are defined in the fragment shader as follows, with $NAME$ being replaced by the name of the NamedTexture (a short example follows the list):
- uniform sampler2D _gf_$NAME$_1: RGB (all variants), packed YUV (all variants) or Y plane, always defined
- uniform sampler2D _gf_$NAME$_2: U or UV plane, if any, undefined otherwise
- uniform sampler2D _gf_$NAME$_3: V plane, if any, undefined otherwise
- uniform float _gf_$NAME$_width: image width for packed YUV, undefined otherwise
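For example, the width uniform of a named texture called MyVid could be updated by hand (illustration only; prog is the compiled WebGLProgram and new_width the new image width):
let loc = gl.getUniformLocation(prog, '_gf_MyVid_width');
if (loc) gl.uniform1f(loc, new_width);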
The texture formats are as follows:
- RGB 444, RGB 555, RGB 565 are uploaded as alpha grey images
- nv12 and nv21 are uploaded as a greyscale image for Y and an alpha grey image for UV
- all planar formats are uploaded as one greyscale image per plane
- all 10 bit support is done using 16 bit textures, GL_UNSIGNED_SHORT format and GL_RED_SCALE/GL_ALPHA_SCALE
- Note
- Currently 10 bit support is disabled on iOS and Android since GL_RED_SCALE/GL_ALPHA_SCALE are not supported in GLES2
The YUV to RGB conversion values are currently hardcoded; we will expose them as uniforms soon. YUV+alpha is not yet implemented.
Matrices
The 'evg' module comes with a Matrix object to avoid external dependencies for matrix manipulation.
◆ VideoColorConfig

interface VideoColorConfig

Video color space configuration for named textures.

Data Fields:
- attribute boolean fullrange: full range video flag
- attribute DOMString matrix: GPAC name for CICP MatrixCoefficients, or integer value