libgpac
Documentation of the core library of GPAC

WebGL API.


Data Structures

interface  WebGLContext
 
interface  NamedTexture
 
interface  VideoColorConfig
 
interface  FilterPid
 

Detailed Description

Foreword

GPAC supports the WebGL 1.0 Core API. For more documentation, please check https://www.khronos.org/registry/webgl/specs/latest/1.0

The WebGL API cannot currently be loaded when using SVG or VRML scripts. It is only available for JSFilter, and shall be loaded as a JS module with the name "webgl":

import * as webgl from 'webgl'
...

or

import {WebGLContext} from 'webgl'
...

The API implements most of the WebGL 1.0 context calls. What is not supported:

WebGL Context

The WebGL API in GPAC does not use any canvas element, since it is designed to run outside of a DOM. The WebGL context shall therefore be created using a constructor call.
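A minimal creation sketch (the size is illustrative):

//create an offscreen 1280x720 WebGL context
let gl = new WebGLContext(1280, 720);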

A filter is responsible for deciding when to issue GL calls: this can be in a process function or in a task callback (see JSFilter::post_task). There is no such thing as requestAnimationFrame in GPAC. Consequently, the owning filter is responsible for:

Warning
it is unsafe to assume that your filter owns the OpenGL context; there may be other filters operating on the context. This means that context state (viewport, clearColor, etc.) shall be restored when reactivating the context.
Note
WebGL filters always run on the main thread to avoid concurrent usage of the OpenGL context, but this might change in the future.
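As noted above, GL calls can also be issued from a task callback rather than from the process function. A minimal sketch, assuming the JSFilter::post_task reposting convention (the return value controlling rescheduling is an assumption, check the JSFilter documentation):

filter.post_task( () => {
    //request the OpenGL context to be the current one
    gl.activate(true);
    //restore any context state you rely on, since other filters may have changed it
    gl.viewport(0, 0, 1280, 720);
    gl.clearColor(0.0, 0.0, 0.0, 1.0);
    //... issue draw calls ...
    gl.activate(false);
    return 20; //assumed: delay in milliseconds before the task runs again, false would cancel the task
}, 20);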

The WebGL API in GPAC works by default on offscreen framebuffer objects:

The underlying framebuffer color texture attachment and, if enabled, depth texture attachment can be dispatched as a GPAC packet using FilterPid::new_packet; this allows forwarding framebuffer data to other filters without having to copy the framebuffer content to system memory.

When forwarding a framebuffer, it is recommended not to draw anything or activate the GL context until all references to the packet holding the framebuffer are consumed. A callback function is used for that, see the example below.

Note
you can always use glReadPixels to read back the framebuffer and send packets using the usual FilterPacket tools.
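A minimal readback sketch (assuming an output PID opid configured for 'rgba' with matching width and height, and that FilterPid::new_packet accepts the resulting ArrayBuffer):

//read the framebuffer back to system memory
let pixels = new Uint8Array(width * height * 4);
gl.readPixels(0, 0, width, height, gl.RGBA, gl.UNSIGNED_BYTE, pixels);
//send it as a regular packet
let opck = opid.new_packet(pixels.buffer);
opck.send();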

The WebGL API in GPAC can also be configured to run on the primary framebuffer; this is achieved by adding a "primary" attribute set to true to the WebGLContextAttributes object at creation. In this case:
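For example, context creation becomes:

//draw directly to the primary framebuffer instead of an offscreen FBO
let gl = new WebGLContext(1280, 720, {primary: true});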

Texturing

WebGL offers two ways of creating textures:

TexImageSource in WebGL can be ImageBitmap, ImageData, HTMLImageElement, HTMLCanvasElement, HTMLVideoElement or OffscreenCanvas.

Since GPAC doesn't run WebGL in a DOM/browser, these objects are not available for creating textures. Instead, the following objects can be used:

Using EVG textures

EVG Texture can be used to quickly load JPG or PNG images:

let texture = gl.createTexture();
let tx = new evg.Texture('source.jpg');
gl.bindTexture(gl.TEXTURE_2D, texture);
gl.texImage2D(target, level, internalformat, format, type, tx);
//at this point the data is uploaded on GPU, the EVG texture is no longer needed and can be GC'ed

EVG Texture combined with EVG Canvas can be used to draw text and 2D shapes:

let canvas = new evg.Canvas(200, 200, 'rgba');
/* draw stuff on canvas
...
*/
let texture = gl.createTexture();
let tx = new evg.Texture(canvas);
gl.bindTexture(gl.TEXTURE_2D, texture);
gl.texImage2D(target, level, internalformat, format, type, tx);
//at this point the data is uploaded on GPU, the EVG texture and canvas are no longer needed and can be GC'ed

For more info on drawing with EVG, see EVG JS API

Using named textures

Dealing with pixel formats in OpenGL/WebGL/GLSL can be quite heavy:

In order to simplify your code and deal efficiently with most formats, the WebGL API in GPAC introduces the concept of named textures.

A named texture is a texture created with a name:

let tx = gl.createTexture('myVidTex');

The texture data is then associated using upload():

//source data is in system memory or already in OpenGL textures
let pck = input_pid.get_packet();
tx.upload(pck);
//or
//source data is only in system memory
tx.upload(some_evg_texture);

Regular bindTexture and texImage2D can also be used if you don't like changing your code too much:

let pck = input_pid.get_packet();
gl.bindTexture(gl.TEXTURE_2D, tx);
//source data is in system memory or already in OpenGL textures
gl.texImage2D(target, level, internalformat, format, type, pck);
//or
gl.bindTexture(gl.TEXTURE_2D, tx);
//source data is only in system memory
gl.texImage2D(target, level, internalformat, format, type, some_evg_texture);

The magic comes in when creating your shaders: any call to texture2D on a sampler2D using the same name as the NamedTexture is rewritten before compilation and replaced with GLSL code handling the pixel format conversion for you!

varying vec2 vTextureCoord;
uniform sampler2D myVidTex; //this will get replaced before compilation
uniform sampler2D imageSampler; //this will NOT get replaced
void main(void) {
    vec2 tx = vTextureCoord;
    vec4 vid = texture2D(myVidTex, tx); //this will get replaced before compilation
    vec4 img = texture2D(imageSampler, tx); //this will NOT get replaced
    vid.a = img.a;
    gl_FragColor = vid;
}

The resulting fragment shader may contain one or more sampler2D and a few additional uniforms, but they are managed for you by GPAC!

The named texture is then used as usual:

gl.activeTexture(gl.TEXTURE0);
gl.bindTexture(gl.TEXTURE_2D, tx);
//this one is ignored for named textures (the UniformLocation object exists but is deactivated) but you can keep your code as usual
gl.uniform1i(myVidTexUniformLocation, 0);
gl.activeTexture(gl.TEXTURE0 + tx.nb_textures);
gl.bindTexture(gl.TEXTURE_2D, imageTexture);
gl.uniform1i(imageSamplerUniformLocation, tx.nb_textures);

In the above code, note the usage of tx.nb_textures: this allows fetching the underlying number of texture units used by the named texture, and properly setting up multitexturing.

Warning
A consequence of this is that you cannot reuse a fragment shader for both a NamedTexture and a regular WebGLTexture; this will simply not work.
Using explicit location assignment in your shader on a named texture sampler2D is NOT supported:
layout(location = N)

The core concept for dealing with NamedTexture is that the fragment shader sources must be set AFTER the texture has been set up (upload / texImage2D). Doing it before will result in an unmodified fragment shader and missing uniforms.
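A minimal ordering sketch (fragShader and fragmentShaderSource are illustrative names):

//1 - set up the named texture first, so that its source pixel format is known
tx.upload(ipck);
//2 - only then provide and compile the fragment shader source, which gets rewritten for this format
gl.shaderSource(fragShader, fragmentShaderSource);
gl.compileShader(fragShader);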

To summarize, NamedTexture allows you to use existing GLSL fragment shader sources with any pixel format for your source, provided that:

The NamedTexture does not track any pixel format or image width changes, mostly because the program needs recompiling anyway. This means that whenever the pixel format or source image width changes for a NamedTexture, you must:

Note
The width must be checked, since for packed YUV it is needed and exposed as a uniform. You could also modify this uniform manually; see the "Inside named textures" section.

Example of using source FilterPacket with NamedTexture

import {WebGLContext, Matrix} from 'webgl'
//let our filter accept and produce only raw video
filter.set_cap({id: "StreamType", value: "Video", inout: true} );
filter.set_cap({id: "CodecID", value: "raw", inout: true} );
//setup webGL
let gl = new WebGLContext(1280, 720);
//setup named texture
let tx = gl.createTexture('MyVid');
let programInfo = null;
let width=0;
let height=0;
let pix_fmt = '';
let ipid = null;
let opid = null;
const vertexShaderSource = `
...
`;
const fragmentShaderSource = `
varying vec2 vTextureCoord;
uniform sampler2D MyVid; //same as our named texture
void main(void) {
    vec2 tx = vTextureCoord;
    //vertical flip
    tx.y = 1.0 - tx.y;
    gl_FragColor = texture2D(MyVid, tx);
}
`;
filter.configure_pid = function(pid) {
    if (!opid) {
        opid = this.new_pid();
    }
    ipid = pid;
    //copy all props from input pid
    opid.copy_props(pid);
    //default pixel format for WebGL context framebuffer is RGBA
    opid.set_prop('PixelFormat', 'rgba');
    //drop these properties
    opid.set_prop('Stride', null);
    opid.set_prop('StrideUV', null);
    //check if pixel format, width or height have changed by checking props
    let n_width = pid.get_prop('Width');
    let n_height = pid.get_prop('Height');
    let pf = pid.get_prop('PixelFormat');
    if ((n_width != width) || (n_height != height)) {
        width = n_width;
        height = n_height;
        //you may want to resize your canvas here
    }
    if (pf != pix_fmt) {
        pix_fmt = pf;
        //dereference the program (wait for GC to kill it) or delete it using gl.deleteProgram
        programInfo = null;
        //notify the texture it needs reconfiguring
        tx.reconfigure();
    }
}
filter.process = function()
{
    //previous frame is still being used by output(s), do not modify (although you technically can ...)!
    if (filter.frame_pending) return GF_OK;
    //get source packet
    let ipck = ipid.get_packet();
    if (!ipck) return GF_OK;
    //request the OpenGL context to be the current one
    gl.activate(true);
    //upload texture - these calls are the same as tx.upload(ipck);
    gl.bindTexture(gl.TEXTURE_2D, tx);
    gl.texImage2D(gl.TEXTURE_2D, 0, 0, 0, 0, ipck);
    //program not created, do it now that we know the texture format (setupProgram is a user helper compiling and linking the shaders)
    if (!programInfo) programInfo = setupProgram(gl, vertexShaderSource, fragmentShaderSource);
    /*draw scene
    setup viewport, matrices, uniforms, etc.
    ...
    */
    //set video texture
    gl.activeTexture(gl.TEXTURE0);
    gl.bindTexture(gl.TEXTURE_2D, tx);
    //this one is ignored for gpac named textures, just kept to make sure we don't break usual webGL programming
    gl.uniform1i(programInfo.uniformLocations.txVid, 0);
    /*
    ...
    drawElements / drawArray ...
    end draw scene
    */
    //make sure all OpenGL calls are done before sending the packet
    gl.flush();
    //indicate we are done with the OpenGL context
    gl.activate(false);
    //create packet from webgl framebuffer, with a callback to get notified when the framebuffer is no longer in use by other filters
    let opck = opid.new_packet(gl, () => { filter.frame_pending=false; } );
    //remember we wait for the notif
    this.frame_pending = true;
    //copy all properties of the source packet
    opck.copy_props(ipck);
    //note that we drop the source after the draw in this example: since the source data could be OpenGL textures, we don't want to discard them until we are done
    ipid.drop_packet();
    //send packet!
    opck.send();
}

Inside named textures

NamedTexture allows supporting all pixel formats currently used in GPAC without any conversion before GPU upload. Namely:

If you want to have fun, the underlying uniforms are defined in the fragment shader, with $NAME$ being replaced by the name of the NamedTexture:

The texture formats are as follows:

Note
Currently, 10-bit support is disabled on iOS and Android since GL_RED_SCALE/GL_ALPHA_SCALE are not supported in GLES2.

The YUV to RGB conversion values are currently hardcoded; we will expose them as uniforms soon. YUV+alpha is not yet implemented.

Matrices

The 'evg' module comes with a Matrix object to avoid external dependencies for matrix manipulation.
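A minimal sketch, assuming the Matrix perspective and translate methods and its m array property (check the EVG JS API for the exact names):

import {Matrix} from 'evg'
//build projection and modelview matrices for the uniforms used in the draw calls
let proj = new Matrix();
proj.perspective(Math.PI/4, 1280/720, 0.1, 100.0);
let modelview = new Matrix();
modelview.translate(0, 0, -4);
gl.uniformMatrix4fv(projMatrixUniformLocation, false, proj.m);
gl.uniformMatrix4fv(modelviewMatrixUniformLocation, false, modelview.m);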


Data Structure Documentation

◆ VideoColorConfig

interface VideoColorConfig

Video color space config for named textures

Data Fields
attribute boolean fullrange

fullrange video flag

attribute DOMString matrix

GPAC name for CICP MatrixCoefficients, or integer value