Enabling recognizer features for recoloring

Recognizer features let you implement real-time recoloring of lips, hair, eyes, and skin, and segment the background or the full body so you can replace it in face filters. To enable a recognizer feature, add it to the "recognizer" field in the effect's config.json file. This field must be an array of recognizer feature names.

Example:

"recognizer" : ["background", "lips_segmentation"]

Recognizer features with recoloring support:

  • "background"
  • "skin_segmentation"
  • "lips_segmentation"
  • "eyes_segmentation"
  • "body_segmentation"
  • "occlusion"
  • "hair"
  • "lips_shine"

See all recognizer features.
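Putting this together, a minimal config.json fragment enabling two features could look like the following (all other config.json fields are omitted here; the exact set depends on your effect):

```json
{
    "recognizer": ["background", "lips_segmentation"]
}
```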

Access to recognizer feature results

The results of recognition are available in your shaders. There are two types of recognition results:

  • transformation
  • mask

A transformation is commonly a transposed 2x4 matrix that maps from global effect space into feature space. Read more about transformations.
A mask is a 2D binary image containing the recognition results.

To access a transformation, declare a uniform block named glfx_BASIS_DATA in your shaders.

This block commonly looks like this:

layout(std140) uniform glfx_BASIS_DATA
{
    vec4 unused;
    vec4 glfx_SCREEN;
    vec4 glfx_BG_MASK_T[2];
    vec4 glfx_HAIR_MASK_T[2];
    vec4 glfx_LIPS_MASK_T[2];
    vec4 glfx_L_EYE_MASK_T[2];
    vec4 glfx_R_EYE_MASK_T[2];
    vec4 glfx_SKIN_MASK_T[2];
    vec4 glfx_OCCLUSION_MASK_T[2];
};

To access a recognition mask, declare a sampler2D uniform variable with a specific name (the variable name for each feature is specified in the following sections).

Example:

uniform sampler2D glfx_BACKGROUND, glfx_LIPS_MASK;
// ...
void main()
{
    // sample the declared textures here
}

Process of implementing a recognizer feature

Applying a recognizer feature consists of blending a color with the background image on a surface and cropping this surface with a binary mask.

First, in the vertex shader, transform the surface so that it is bound to the expected position. To achieve this, use the transformation delivered to the shaders when the feature is enabled. This matrix maps from effect space into the local feature space, so to position your surface correctly, multiply the surface vertices by the inverse of the feature transformation matrix.

Example:

void main()
{
    mat3 lips_m = inverse( mat3(
        glfx_LIPS_MASK_T[0].xyz,
        glfx_LIPS_MASK_T[1].xyz,
        vec3(0.,0.,1.) ) );
    gl_Position = vec4( (vec3(attrib_v,1.)*lips_m).xy, 0., 1. );
}
note

How the feature's transformation matrix should be used depends on the surface you work with. If you take a surface from a demo effect, first check how the transformation is applied in that effect's shader.

Next, in the fragment shader, declare the glfx_BACKGROUND and glfx_LIPS_MASK uniform variables. Then blend the color you want to apply with the background color, and crop the result with the mask.

Example:

void main()
{
    vec3 tint = vec3(1., 0., 0.); // your color
    vec3 bg_color = texture(glfx_BACKGROUND, var_uv).rgb; // get bg color
    float mask = texture(glfx_LIPS_MASK, var_uv).r; // get binary mask
    vec3 color = mix(bg_color, tint, mask); // crop
    frag_color = vec4(color, 1.); // draw
}

Next, you need to spawn the feature surface in config.js.
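As a sketch only (the exact config.js API depends on your SDK version, so treat the calls and arguments below as assumptions to verify against your SDK's documentation), spawning the feature surface typically happens in the effect's init callback:

```javascript
// config.js — hypothetical sketch; the mesh index (0) and coordinates are placeholders
function Effect() {
    this.init = function() {
        // spawn the feature surface mesh at the origin
        Api.meshfxMsg("spawn", 0, [0, 0, 0]);
    };

    this.restart = function() {
        Api.meshfxReset();
        this.init();
    };
}

var effect = new Effect();
```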

Download example effect