


The rendering engine processes the image in 6 steps. Their drawing order is as follows:

| Serial number | Rendering target | config.json property name |
|---|---|---|
| 1 | Camera image | `camera` |
| 2 | Effect background | `background` |
| 3 | Background separation | `backgroundSeparation` |
| 4 | Effects based on face recognition data | `frx` |
| 5 | Effect foreground | `foreground` |
| 6 | Final processing | `final` |

Do not configure the background effect if you need the background separation feature. There is no need to enable both of these layers simultaneously.
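For reference, a minimal sketch of a `backgroundSeparation` layer used on its own (the `type` and `media` options are described later in this document; the file name and blend mode here are illustrative):

```json
"backgroundSeparation": {
    "type": "MEDIA",
    "media": {
        "type": "PICTURE",
        "file": "bg.png",
        "blend": "MULTIPLY"
    }
}
```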

The 3D effect is drawn on the layer with face recognition data. The Filter Editor models are imported into this layer.


You can apply different blending modes to combine layers.

File Structure#

The following properties are configured by config.json, along with the rendering layers:

| Property | Description |
|---|---|
| `name` | Effect name. Should correspond to the effect folder name. |
| `display_name` | User-friendly effect name (optional). |
| `version` | Configuration version. Defaults to 1 if omitted. |
| `camera` | Effect applied to the camera texture. |
| `background` | Effect applied to the background layer. |
| `backgroundSeparation` | Background effect applied around the user's silhouette. |
| `frx` | 3D effect. |
| `foreground` | Foreground effect. |
| `rotation_vector` | Enables device rotation for the effect. |
| `final` | Final processing applied to the rendered frame. |
| `sounds` | Sound files for the effect. |
| `voiceChanger` | Audio effect applied after the recording. |
| `recognizer` | Recognizer features to enable. |

File structure example#

```json
{
    "name": "example_1",
    "display_name": "file structure example",
    "version": 1.0,
    "sounds": [ ... ],
    "voiceChanger": { ... },
    "recognizer": [ ... ],
    "camera": { ... },
    "background": { ... },
    "frx": { ... },
    "foreground": { ... },
    "final": { ... },
    "recordDuration": 10
}
```


The camera, background, backgroundSeparation, frx and foreground layers share the same `media` option. With it you can set an image, video or GIF texture to render on the layer, and choose the layer's blending mode, opacity, etc. The FRX layer doesn't render the media itself, but it can supply a video texture to the glfx_VIDEO uniform sampler, which you can apply to materials. These layers have 2 options: `type` and `media`.

| Option | Required | Values | Description |
|---|---|---|---|
| `type` | REQUIRED | `3D_ANIM`, `MEDIA` | Describes the effect type applied to the layer. |
| `media` | OPTIONAL | | The media option, described in detail later. |

FRX and camera layers example#

```json
"frx": {
    "type": "3D_ANIM",          // Applied effect type
    "3d_anim": {
        "type": "vfx"
    },
    "media": { ... }
}

// or
"frx": {
    "type": "MEDIA",            // Applied effect type
    "media": {
        "type": "VIDEO",
        "file": "video.mp4",    // or "animation.gif"
        "videoSpeed": 1.0,      // optional
        "opacity": 1.0,         // optional
        "blend": "ALPHA",       // optional
        "mode": "RGBA"
    }
}

"camera": {
    "type": "MEDIA",
    "media": {
        "type": "PICTURE",
        "file": "bg.png"
    }
}
```
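The glfx_VIDEO uniform sampler mentioned above can then be read in a material's fragment shader. A minimal sketch, assuming a GLSL ES material shader; only the `glfx_VIDEO` sampler name comes from this document, the UV varying name `var_uv` is an assumption:

```glsl
precision mediump float;

varying vec2 var_uv;             // assumed UV varying supplied by the engine
uniform sampler2D glfx_VIDEO;    // video texture provided by the FRX layer "media" option

void main() {
    // Sample the layer's video texture and output it on the material
    gl_FragColor = texture2D(glfx_VIDEO, var_uv);
}
```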


The "final" layer describes effects applied to the final rendered frame. It provides the following options:

| Option | Required | Values | Description |
|---|---|---|---|
| `type` | REQUIRED | `FILTER` | Additional shader processing. |
| `filter` | REQUIRED | | Object that describes the filter. |

The "filter" option provides the following options:

| Option | Required | Description |
|---|---|---|
| `fragmentCode` | REQUIRED | Fragment shader file used to process the final image. |
| `textures` | OPTIONAL | Textures for the fragment shader. |

The "textures" option is a list of objects with 2 required properties and 2 optional ones:

| Property | Required | Description |
|---|---|---|
| `file` | REQUIRED | Texture file attached to the shader provided by the `fragmentCode` option. |
| `sampler` | REQUIRED | The name of the texture sampler declared in the shader code which is transmitted via `fragmentCode`. |
| `magFilter` | OPTIONAL | Texture magnification filter, e.g. `LINEAR` (see the example below). |
| `minFilter` | OPTIONAL | Texture minification filter, e.g. `LINEAR` (see the example below). |

Layer "final" example#

```json
"final": {
    "type": "FILTER",                    // Applied effect type
    "filter": {
        "fragmentCode": "function.fsh",
        "textures": [{                   // Textures for use
            "file": "texture.png",       // Texture file name
            "sampler": "lookupTexture",  // Fragment-shader-friendly texture name
            "magFilter": "LINEAR",       // (Optional)
            "minFilter": "LINEAR"        // (Optional)
        }]
    }
}
```


The "media" option describes texture data added to a layer. `media` can only be added as an option of the camera, background, backgroundSeparation, frx and foreground layers.

| Option | Required | Values | Description |
|---|---|---|---|
| `type` | REQUIRED | `PICTURE`, `VIDEO` | Describes the texture type (GIF files should have the `VIDEO` type). |
| `file` | REQUIRED | `"image.png"`, `"animation.gif"`, `"video.mp4"` | Texture file name. |
| `mode` | OPTIONAL | `RGB` | Default: image with RGB channels only. |
| | | `RGBA` | Applied commonly to video textures. The texture should be twice as wide as an RGB image: the RGB data is on the left half of the image and the alpha channel on the right. |
| `blend` | OPTIONAL | `normal`, `alpha`, `multiply`, `darken`, `screen`, `lighten`, `hardlight`, `hard_light`, `overlay`, `pinlight`, `pin_light`, `add`, `colordodge`, `color_dodge`, `softlight`, `soft_light` | Blending mode. |
| `opacity` | OPTIONAL | [0; 1] | Layer opacity. |
| `videoSpeed` | OPTIONAL | any float | Video file frame rate multiplier (applied only to type `VIDEO`). |
| `link` | OPTIONAL | | Applicable to the background, foreground and camera layers. Binds a video/picture UV point, with coordinates in the range (0..1), to a model vertex by its index. |

The "link" option provides the following options:

| Option | Required | Values | Description |
|---|---|---|---|
| `index` | OPTIONAL | any int | Index of the vertex. |
| `scaleX` | OPTIONAL | any float | UV scale on the X axis. |
| `scaleY` | OPTIONAL | any float | UV scale on the Y axis. |
| `u` | OPTIONAL | [0; 1] | u coordinate. |
| `v` | OPTIONAL | [0; 1] | v coordinate. |

"media" option example#

```json
"media": {
    "type": "VIDEO",
    "file": "video.mp4",    // VIDEO and PICTURE type option
    "videoSpeed": 1.0,      // (Optional) Video file frame rate multiplier
    "opacity": 1.0,         // (Optional) Layer opacity in range (0.0 - 1.0).
                            // The image alpha is multiplied by this opacity value; the result is used for blending
    "blend": "NORMAL",      // (Optional) Defaults to "NORMAL". Same as Photoshop blend types
    "mode": "RGBA",
    "link": {               // (Optional) Applicable to the background, foreground and camera layers.
                            // Binds a video/picture UV point with coordinates in range (0..1) to a model vertex by its index
        "index": 436,
        "scaleX": 0.5,
        "scaleY": 0.5,
        "u": 0.5,
        "v": 0.5
    }
}
```

```json
"media": {
    "type": "PICTURE",
    "file": "bg.png"
}
```


The "sounds" option is a list of audio file data objects. These objects should contain the following properties:

| Property | Required | Values | Description |
|---|---|---|---|
| `trackName` | REQUIRED | | Audio file name. |
| `type` | OPTIONAL | `background` | Played without audio filters (e.g. the voice changer). |
| `volume` | OPTIONAL | [0; 1] | Sound volume. |

"sounds" option example#

```json
"sounds": [
    {
        "trackName": "1.mp3",   // Audio file name
        "type": "background",   // (Optional) Played without audio filters (e.g. the voice changer)
        "volume": 1.0           // (Optional) Defaults to 1.0
    },
    {
        "trackName": "2.mp3"
    }
]
```

Voice changer#

The "voiceChanger" option applies an audio filter to the audio track recorded with the video. It has the following options:

| Option | Required | Description |
|---|---|---|
| `audioFilters` | REQUIRED | List of audio filter objects. |

Audio filter objects should have the following properties:

| Property | Required | Description |
|---|---|---|
| `pitch` | REQUIRED | Pitch audio filter. Has the property `pitchShift`, which sets the pitch shift value in the range [-2400; 2400]. |

"voiceChanger" option example#

```json
"voiceChanger": {
    "audioFilters": [               // Audio filters array
        {
            "pitch": {              // Pitch audio filter
                "pitchShift": 1500  // Pitch shift value. Range: [-2400 .. 2400]
            }
        }
    ]
}
```


The option "recognizer" is a list of features and neural networks enabled for this effect.

"recognizer" option example#

```json
"recognizer": [
    "frx_disabled",         // Useful for effects without face recognition features
    "background",
    "skin_segmentation",
    "lips_segmentation",
    "eyes_segmentation",
    "body_segmentation",
    "occlusion",
    "glasses",              // uses occlusion
    "hair",
    "lips_shine",
    "face_acne",
    "eye_bags",
    "hair_strand",
    "hand_gestures"
]
```

Rotation vector#

The "rotation_vector" option turns on/off updating, after device rotation, the gravity vector set in config.js via Api.dynGravity().

"rotation_vector" option example#

```json
"rotation_vector": true
```

Common config.json file example#

```json
{
  "name": "example_1",
  "display_name": "file structure example",
  "version": 1.0,
  "camera": {
    "type": "MEDIA",
    "media": {
      "type": "PICTURE",
      "file": "bg.png"
    }
  },
  "background": {
    "type": "MEDIA",
    "fadeMask": "mask.png",
    "media": {
      "type": "PICTURE",
      "file": "bg2.png",
      "blend": "MULTIPLY"
    }
  },
  "frx": {
    "type": "3D_ANIM",
    "3d_anim": {
      "type": "vfx"
    }
  },
  "foreground": {
    "type": "MEDIA",
    "media": {
      "type": "VIDEO",
      "file": "video.mp4"
    }
  },
  "final": {
    "type": "FILTER",
    "filter": {
      "fragmentCode": "function.fsh",
      "textures": [{
        "file": "texture.png",
        "sampler": "lookupTexture",
        "magFilter": "LINEAR",
        "minFilter": "LINEAR"
      }]
    }
  },
  "recognizer": [
    "background"
  ],
  "recordDuration": 10,
  "rotation_vector": true
}
```