
EffectPlayer Overview


The EffectPlayer module is the main interface to all low-level functionality of Banuba SDK. As described on the Introduction page, EffectPlayer is responsible for consuming camera frames, running various recognition pipelines on them, sending the recognized data to the effect playback engine, and drawing the final image on the provided OpenGL surface.

note

For most use cases of Banuba SDK, it's better to use the platform-specific APIs and language bindings described in the Platforms section. The EffectPlayer API should be used only for non-trivial cases not covered by the platform APIs.

Overview

To work with EffectPlayer, the application should create an appropriate execution environment. The two most essential components of this environment are the camera and the render surface. The camera pushes image frames into EffectPlayer. The render surface provides the OpenGL context and the target framebuffer where the final image is rendered.

The camera, or any other image source, should periodically push frames into EffectPlayer. This is usually done in a dedicated thread called the Camera thread.
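
As an illustration, the Camera thread's loop might look like the sketch below. Only push_frame is an EffectPlayer method named on this page; the EffectPlayerPtr handle, the camera wrapper, and the frame-conversion helper are hypothetical.

    // Camera thread: continuously feed frames into EffectPlayer.
    // `EffectPlayerPtr`, `Camera`, `wait_for_next_frame`, and
    // `to_full_image_data` are hypothetical; only push_frame comes
    // from this document.
    void camera_thread_loop(EffectPlayerPtr player, Camera& camera)
    {
        while (camera.is_running()) {
            auto frame = camera.wait_for_next_frame(); // blocks until a frame arrives
            player->push_frame(to_full_image_data(frame));
        }
    }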

EffectPlayer periodically runs a pipeline of recognition technologies to retrieve data from an input image. The component containing this pipeline is called the Recognizer, and it runs its work cycle in a dedicated thread encapsulated in EffectPlayer. Before recognition, input images from the camera are placed into the InputBuffer, where they wait for the next recognition cycle. During processing, information such as face positions and segmentation masks from neural networks is attached to the image. At the end of the recognition process, this data composes the final recognition result, which is stored in a separate data structure called FrameData. The final FrameData is placed into the OutputBuffer, from where it is sent to an Effect and drawn to the render surface.

During the drawing cycle, FrameData is popped from the OutputBuffer and consumed by the Effect engine. At this stage, the Effect engine executes all of the effect's logic, e.g. sends the recognized data to the script engine or to the renderer. Drawing cycles are usually initiated from an app or platform module. The platform module creates a dedicated Render thread to manage this task and ensures that the OpenGL context and target framebuffer exist during the drawing calls.
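
Correspondingly, a Render thread cycle could be sketched as follows; draw is the EffectPlayer method referenced later on this page, while the surface wrapper and its methods are hypothetical:

    // Render thread: drive EffectPlayer's drawing cycle on the GL surface.
    // `GlSurface` and its methods are hypothetical helpers.
    void render_thread_loop(EffectPlayerPtr player, GlSurface& surface)
    {
        surface.make_current();          // bind the OpenGL context to this thread
        while (surface.is_alive()) {
            player->draw();              // consume FrameData, run effect logic, render
            surface.swap_buffers();      // present the target framebuffer
        }
    }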

EffectManager

EffectManager encapsulates all logic for working with effects. To get an instance of this class, call the effect_manager method of EffectPlayer. Some methods were moved here from EffectPlayer: for example, instead of load_effect in EffectPlayer, you can now choose between synchronous and asynchronous implementations (load and load_async, respectively). You no longer need to worry about the effect status: once the effect is loaded, it is applied immediately.
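
For example, a minimal sketch; effect_manager, load, and load_async are the methods named above, while the effect path and exact signatures are placeholders:

    // Get the manager from an existing EffectPlayer instance.
    auto manager = player->effect_manager();

    // Synchronous load: blocks until the effect is ready, then applies it.
    auto effect = manager->load("effects/TestEffect");

    // Asynchronous load: returns a reference immediately; the effect is
    // applied as soon as loading finishes.
    auto effect2 = manager->load_async("effects/TestEffect");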

Effect

This class represents an effect in Banuba SDK. The load and load_async methods of EffectManager return a reference to the Effect object. You can call call_js_method on a specific effect.
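
For instance (the JS method name and its argument are placeholders defined by a particular effect, not by the SDK):

    // Call a script function exposed by the loaded effect.
    // "onDataUpdate" and its JSON argument are effect-specific placeholders.
    auto effect = manager->load("effects/TestEffect");
    effect->call_js_method("onDataUpdate", "{ \"intensity\": 0.5 }");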

Threading model

As described in the Overview section, EffectPlayer expects its methods to be called from three threads: the UI thread, the render thread, or the camera thread. See the EffectPlayer reference for details about each method.

note

It's important to call each method of EffectPlayer only from its dedicated thread. Calling a method from the wrong thread can cause unexpected app crashes and hangs.

Use cases

EffectPlayer supports different operational modes:

  • offline high-quality processing of individual images or video frames
  • online processing and visualization of real-time camera input

Realtime camera preview

The Camera and Render threads work continuously, and the user sees the processed image on the screen in real time. This is the most commonly used operational mode.

To enable this mode, run the Camera and Render threads continuously, as sketched below.
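
A rough sketch, reusing camera_thread_loop and render_thread_loop from the Overview section; thread lifetime and error handling are simplified:

    #include <thread>

    // Realtime preview: feed frames and draw until shutdown.
    std::thread camera_thread([&] { camera_thread_loop(player, camera); });
    std::thread render_thread([&] { render_thread_loop(player, surface); });

    // ... preview runs; stopping the camera/surface ends the loops ...
    camera_thread.join();
    render_thread.join();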

Live editing

The user can apply effects to a static photo uploaded from the gallery and see dynamic changes in real time.
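
One plausible shape for this mode, under the assumption that the photo is pushed through the same push_frame path as a camera frame (this page doesn't name a dedicated API for it):

    // Live editing sketch: push the decoded photo once, then keep drawing
    // so effect changes appear immediately. Whether a single push suffices
    // is an assumption; `photo`, `session_active`, and `to_full_image_data`
    // are hypothetical.
    player->push_frame(to_full_image_data(photo));
    while (session_active()) {
        player->draw();            // re-renders the photo with the current effect
        surface.swap_buffers();
    }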

Video Processing

The video is processed synchronously, frame by frame, with the results returned as a series of images in system memory.
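
This page doesn't list the video-processing calls themselves, so the following only illustrates the shape of the loop; process_video_frame is a hypothetical stand-in for the SDK's actual synchronous per-frame method, and the decoder and Image type are placeholders:

    // Offline video processing sketch: decode, process, collect.
    // `decoder`, `Image`, and `process_video_frame` are placeholders.
    std::vector<Image> processed;
    while (auto frame = decoder.decode_next_frame()) {   // empty at end of video
        processed.push_back(player->process_video_frame(*frame));
    }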

note

Realtime processing methods (draw, push_frame, etc.) can't be used during video processing. Use the methods described above instead.

Single-image processing

A single photo is processed synchronously, with the result returned as an image in system memory.
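
A minimal sketch; process_image is the method named in the note below, while the image loading/saving helpers and the exact signature are assumptions:

    // Single-image processing sketch: slow but accurate, result lands in
    // system memory. `load_image` / `save_image` are hypothetical helpers.
    auto input = load_image("photo.jpg");
    auto result = player->process_image(input);
    save_image(result, "photo_processed.jpg");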

note

The state of face tracking isn't saved between process_image calls. The process_image method is slow but accurate and not intended for realtime usage.

