Introduction to Unity Face AR plugin
The Unity Face AR plugin is a native library compiled for the following platforms:
This library exports a pure C interface, represented as C# bindings in the
BanubaSDKBridge.cs script. BanubaSDKBridge provides the following common methods to initialize the face recognition engine.
Face AR plugin static environment initialization:
These methods should be called once each, at the start and at the end of the game, respectively.
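As a sketch of the intended lifecycle (the actual entry points are referenced above in BanubaSDKBridge.cs; `engine_init` and `engine_release` below are hypothetical stand-ins), initialization and release should be paired exactly once per application run:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-ins for the static-environment init/release
   bindings; the real entry points live in BanubaSDKBridge.cs. */
static bool g_env_initialized = false;

bool engine_init(void)
{
    if (g_env_initialized)
        return false;          /* guard against a second initialization */
    g_env_initialized = true;  /* real code would call into the native library here */
    return true;
}

bool engine_release(void)
{
    if (!g_env_initialized)
        return false;          /* release without a matching init is an error */
    g_env_initialized = false;
    return true;
}
```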
Once the environment is initialized, you can create the native recognizer object with the following methods:
You can use the already implemented wrapper from
Recognizer.cs for releasing the memory.
The recognizer object's init method needs a path to its resources. They are placed in the
Assets/StreamingAssets folder; importantly, Unity does not compress resources placed there.
The full path to
Assets/StreamingAssets is platform-dependent. Unity provides it as
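In a Unity script, the platform-specific base path of this folder is available through the standard `Application.streamingAssetsPath` API; the resource path is then the base path joined with a subfolder name. A minimal sketch of the joining step (the folder names are illustrative only):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Join a platform-provided base path (e.g. the value Unity reports for
   the StreamingAssets folder) with a resource subfolder name.
   Returns a pointer to a static buffer for brevity. */
const char *join_path(const char *base, const char *sub)
{
    static char buf[512];
    snprintf(buf, sizeof buf, "%s/%s", base, sub);
    return buf;
}
```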
We recommend using only one instance of the recognizer object to decrease memory consumption.
Once the recognizer object is created successfully, you can change options and enabled features using the following methods:
All these methods are properly documented in
To process an image, you need to create the native frame representation object using the following methods:
You can use the already implemented wrapper from
FrameData.cs for releasing the memory.
Once the frame data instance is created, you need to set an image on it using
bnb_frame_data_set_bpc8_img for BPC8 (RGB and similar) images and
bnb_frame_data_set_yuv_img for YUV images.
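The two setters expect different memory layouts: an interleaved 8-bit-per-channel (BPC8) RGB buffer holds 3 bytes per pixel, while a planar YUV 4:2:0 buffer holds 1.5 bytes per pixel (a full-resolution Y plane plus quarter-resolution U and V planes). A sketch of the buffer-size arithmetic, independent of the actual setter signatures:

```c
#include <assert.h>
#include <stddef.h>

/* Bytes needed for an interleaved RGB image, 8 bits per channel. */
size_t bpc8_rgb_size(size_t width, size_t height)
{
    return width * height * 3;
}

/* Bytes needed for planar YUV 4:2:0: a full-size Y plane plus
   U and V planes at half resolution in each dimension. */
size_t yuv420_size(size_t width, size_t height)
{
    return width * height + 2 * ((width / 2) * (height / 2));
}
```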
After that, you can process the frame data with a prepared recognizer instance by calling
When processing is completed, the frame data object will be filled up with data according to the enabled recognizer features.
recognizer_feature_frx_id is enabled by default, so you can check whether a face was detected in the current frame by extracting the face data structure from the frame data object by calling
For backward-compatibility reasons, the frame data always contains one face at index 0. To determine whether a face was actually detected, additionally check the face rectangle flag, e.g.
face.rectangle.hasFaceRectangle > 0.
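A minimal sketch of this check, using a struct that mirrors the fields named above (the struct layout here is illustrative, not the real binding):

```c
#include <assert.h>

/* Illustrative mirror of the flag described above; the real struct
   layout comes from the BanubaSDKBridge bindings. */
typedef struct {
    int hasFaceRectangle;  /* > 0 when a face was actually detected */
} face_rectangle_t;

typedef struct {
    face_rectangle_t rectangle;
} face_t;

int face_detected(const face_t *face)
{
    /* face 0 always exists for backward compatibility,
       so this flag is the real detection signal */
    return face->rectangle.hasFaceRectangle > 0;
}
```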
When the face is detected, the face data object contains the
vertices of the face mesh, and the face
camera_position contains the projection and model-view matrices, along with the affine coefficients extracted from them.
- Landmarks of the face mesh, packed into a
float array of (x, y) coords one-by-one
- Vertices of the face mesh, packed into a
float array of (x, y, z) coords one-by-one
- UV coords of the face mesh, packed into a
float array of (x, y) coords one-by-one
- Indices of face mesh vertices, packed into
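The packed layouts above can be unpacked with simple index arithmetic: vertex i of an (x, y, z) array occupies positions 3*i, 3*i+1, and 3*i+2, and likewise for (x, y) pairs. A sketch:

```c
#include <assert.h>
#include <stddef.h>

/* Component c (0 = x, 1 = y, 2 = z) of vertex i in a flat float
   array packed as (x, y, z) triples one-by-one. */
float vertex_component(const float *verts, size_t i, size_t c)
{
    return verts[3 * i + c];
}

/* Same idea for arrays packed as (x, y) pairs,
   e.g. landmarks or UV coords. */
float pair_component(const float *pairs, size_t i, size_t c)
{
    return pairs[2 * i + c];
}
```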
A face mesh usage example is placed in
Action Units (or blendshapes) can be enabled by setting the recognizer feature
Once done, you can extract action units from the
FrameData object for the detected face using the method
Action units are packed into a
float array. The indices of the elements in this array correspond to the enum
An Action Units usage example is placed in
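Looking up a single action unit then reduces to indexing the float array by the enum value. A sketch under that assumption (the enum members below are made-up placeholders; the real member names come from the enum referenced above):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical action-unit enum; the real member names come from the
   SDK bindings. Each member's value is its index into the float array. */
typedef enum {
    AU_BROW_RAISE = 0,  /* placeholder name */
    AU_JAW_OPEN   = 1,  /* placeholder name */
    AU_COUNT
} action_unit_t;

float action_unit_value(const float *units, action_unit_t au)
{
    return units[(size_t)au];  /* array index equals enum value */
}
```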