
Banuba SDK C++ API overview

Banuba SDK provides a C++ interface, but it is fairly complex and may look inconvenient, so a wrapper called "Offscreen Effect Player" (OEP) exists to simplify its usage. This wrapper takes responsibility for setting up the rendering context required by the Effect Player and provides a convenient interface for converting rendering results to different image formats.

This document briefly describes each interface, focusing mostly on OEP. It covers only live video stream processing.

info

See more relevant information about how OEP works on mobile platforms on the corresponding pages:

All the code demonstrated in this document is for reference only; it contains just the parts necessary to show the API usage, so it should not be treated as production-safe (e.g. null pointer checks are omitted for simplicity).

C++ native interface

All headers with all declarations are shipped with each SDK archive regardless of the target platform. The location of the headers depends on the platform: usually it is the "include" folder, but on Apple platforms it is the "PrivateHeaders" folder inside the frameworks.

SDK initialization

Before using the Banuba SDK, it must be initialized. The utility_manager class serves this purpose: a single static method should be called, and it must be called only once.

This initialization method accepts a list of resource paths and a client token. Resource paths are the paths where the SDK resources (scripts, shaders, models, etc.) are located. Effects should also be located in these paths.

NOTE: Banuba SDK consists of code, represented by frameworks or libraries, and resources, represented by Neural Network (NN) resources, scripts, shaders, etc., and Effects. For instance, on Android and iOS, NN resources are integrated into the Banuba SDK module, and the path to those resources is known to the Banuba SDK Resource Manager by default. But if you want to separate resources from the Banuba SDK code, it is necessary to provide that path to utility_manager during initialization. Since Effects are always delivered as a separate resource of the Banuba SDK, the path to the Effect resources (the folder with effects) should always be provided in the list of paths.

#include <bnb/recognizer/interfaces/utility_manager.hpp>

/* ... */
// somewhere in main() or other initialization function
std::vector<std::string> paths {
    "/path/to/resources/folder",
    "/path/to/effects/folder"
};
bnb::interfaces::utility_manager::initialize(paths, token);

Also, there is a corresponding cleanup/release method, which should be called before terminating the program.

#include <bnb/recognizer/interfaces/utility_manager.hpp>

/* ... */
// one of the last lines of main() or somewhere in cleanup function
bnb::interfaces::utility_manager::release();

The abovementioned utility_manager has several other useful static methods; one of them that you may be interested in is set_log_record_callback(). See the corresponding comments in utility_manager.hpp for details.

After initialization, an effect_player object must be created. This is the main object for interaction with the Banuba SDK.

#include <bnb/effect_player/interfaces/effect_player.hpp>

// somewhere in initialization code
std::shared_ptr<bnb::interfaces::effect_player> effect_player =
    bnb::interfaces::effect_player::create(
        bnb::interfaces::effect_player_configuration(
            1280,
            720,
            bnb::interfaces::nn_mode::automatically,
            bnb::interfaces::face_search_mode::good,
            false,
            false
        )
    );

See the effect_player_configuration declaration for a description of the fields.

The next step is to initialize the rendering surface for the effect player.

The following call may or may not be present, and its argument varies depending on the platform; see our platform-specific examples for details. In general, when OpenGL is used as the rendering backend, this call is not required; it is mentioned here just for completeness. This particular example is for the Metal API used on macOS.

if (effect_player->get_current_render_backend_type() == interfaces::render_backend_type::metal) {
    effect_player->effect_manager()->set_render_surface(
        reinterpret_cast<int64_t>(window->get_metal_layer_handler()));
}

In order to finalize render surface initialization the following methods must be called:

effect_player->surface_created(w, h);
effect_player->surface_changed(w, h);

Please note that these methods must be called from the render thread (i.e. where the graphics context is active). surface_created() should be called only once; surface_changed() should be called each time the rendering surface size changes (and for the first time right after surface_created(), otherwise the initialization will not complete).
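For example, a resize handler might look like the following sketch (a hypothetical helper; it assumes the effect_player instance is accessible and that the handler runs on the render thread):

// hypothetical resize handler: must run on the render thread,
// i.e. where the graphics context is active
void on_render_surface_resized(int width, int height)
{
    effect_player->surface_changed(width, height);
}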

Now initialization is complete and the SDK is ready to load any effect. It draws input frames as is when no effect is loaded (actually, a special empty effect is activated).

Working with the SDK

As it was mentioned above, effect_player is the main object that provides SDK functionality.

Loading an effect

Banuba SDK provides 2 methods for effect loading: synchronous and asynchronous. As the names suggest, the synchronous method blocks until the effect is completely loaded (which may take noticeable time), while the asynchronous one does not. Another significant restriction of the synchronous method is that it must be called from the rendering thread, so the application UI will be blocked for the duration of effect loading, which doesn't fit all cases. The asynchronous method has no such restrictions, so it is recommended; the synchronous method is provided just for completeness.

Both of these methods are implemented in the effect_manager class; the required object itself can be retrieved from effect_player:

// synchronous method
effect_player->effect_manager()->load(effect);
// asynchronous method
effect_player->effect_manager()->load_async(effect);

Both methods accept just a single argument: the effect folder name relative to one of the resource directories passed during SDK initialization.
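For example, assuming your resource folders contain an "effects/Afro" subfolder (the same effect name is used in the OEP examples later in this document):

// load the effect located at "<effects folder>/effects/Afro"
effect_player->effect_manager()->load_async("effects/Afro");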

Please note one technical detail: the effect will not be activated until you start pushing frames into the SDK.

SDK input

As input, Banuba SDK supports several pixel formats:

  • RGBA (including variations like RGB, ARGB, etc.; see the pixel_format declaration for the full list of possible options)
  • YUV NV12
  • YUV i420

For YUV formats, both Full and Video ranges are supported. Supported color encoding standards are BT.601 and BT.709.

The frames passed to the SDK should be represented as a full_image_t object. Unfortunately, there are no factory functions for it in the public interface; it should be constructed directly from bpc8_image_t or yuv_image_t objects, which in turn represent specific input formats. See the bnb/types/full_image.hpp header file for possible constructors and other details.

Later, the created full_image_t can be passed as effect_player input. effect_player has several methods to accept input, but the simplest of them is push_frame(), which accepts only full_image_t. See the effect_player declaration in bnb/effect_player/interfaces/effect_player.hpp for the complete list of possible input methods (just search for "push").

The following example demonstrates how to create full_image_t from YUV i420 planes and pass it to SDK:

using namespace bnb;
effect_player->push_frame(
    full_image_t(
        yuv_image_t(
            color_plane(y_plane_ptr),
            color_plane(u_plane_ptr),
            color_plane(v_plane_ptr),
            image_format(
                width, height,
                camera_orientation::deg_0,
                false, // mirroring
                0      // face orientation
            ),
            yuv_format_t{
                color_range::video,
                color_std::bt709,
                yuv_format::yuv_i420
            }
        )
    )
);

Please note that strides are not supported; it is assumed that the stride is always equal to the corresponding width for all planes.
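If your frames come with padded rows (stride greater than width), they need to be repacked into tightly packed planes before being pushed. A minimal sketch (plain C++, no SDK-specific calls):

#include <cstdint>
#include <cstring>
#include <vector>

// copy a strided plane into a tightly packed buffer (stride == width)
std::vector<uint8_t> repack_plane(const uint8_t* src, int width, int height, int src_stride)
{
    std::vector<uint8_t> packed(static_cast<size_t>(width) * height);
    for (int row = 0; row < height; ++row) {
        std::memcpy(packed.data() + static_cast<size_t>(row) * width,
                    src + static_cast<size_t>(row) * src_stride,
                    width);
    }
    return packed;
}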

SDK output

In the native C++ API, there is no such thing as output. The Banuba SDK just renders the image to a surface (prepared during the initialization steps); reading the desired image from the surface and converting it to the appropriate format is up to you. The image is rendered in RGBA and there is no option to change that.

As the SDK does not set up anything related to rendering (it is the user's responsibility to do so), calling the drawing function is also the user's responsibility. The function must be called from the rendering thread (i.e. where the graphics context is available).

#include <chrono>
#include <thread>

while (effect_player->draw() < 0) {
    std::this_thread::yield();
    std::this_thread::sleep_for(std::chrono::milliseconds(10));
}

This function returns -1 when no new processed frame is ready, so it should be called in a loop in order to draw new frames as they arrive.
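Once draw() has succeeded, the rendered RGBA image can be read back from the rendering surface. For example, with an OpenGL backend (a sketch, assuming an active GL context, the rendering surface bound as the current framebuffer, and a surface of size width x height; the GL header or loader depends on your platform):

#include <cstdint>
#include <vector>
// plus your platform's OpenGL header or a loader such as glad

// read the current framebuffer back into CPU memory as tightly packed RGBA
std::vector<uint8_t> rgba(static_cast<size_t>(width) * height * 4);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, rgba.data());
// note: OpenGL returns rows bottom-up, so flip the image vertically if needed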

Communicating with effect

Banuba SDK provides the Effect API, which allows customizing an effect on the fly (changing colors, textures, etc., or even loading new features into it, like background replacement). The list of available methods/options varies depending on the effect itself. Any method provided by the effect can be called using call_js_method() or even eval_js(); see bnb/effect_player/interfaces/effect.hpp for details.

call_js_method() accepts two parameters: the name of the method to call and its arguments. There is only one parameter for all of the called method's arguments; its format depends on the method itself, so see the documentation of the specific method (the documentation shows examples only in the languages used for mobile platforms, but the syntax for C++ is the same).

effect_player->effect_manager()->current()->call_js_method(method, param);
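For example (the method name and argument string below are hypothetical; the real names and argument format depend on the effect you have loaded):

// hypothetical effect method and argument; consult your effect's documentation
auto effect = effect_player->effect_manager()->current();
effect->call_js_method("setSomeParam", "0.5");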

With eval_js() you can execute arbitrary JS code and get the result of its execution; see its declaration for details.

Offscreen Effect Player

Offscreen Effect Player (or just OEP) is a wrapper around the C++ interfaces, made to hide the complexities described above and simplify usage, especially in video streaming applications where the Banuba SDK should be part of a larger video processing pipeline.

OEP is implemented as a separate submodule which consists of a set of interfaces and provides some (but not all) default implementations for them. The OEP sources are in a public repository on GitHub. See its README for details about the repository structure and the purpose of each interface; it has a pretty good description. This repository should be added as a submodule to your project.

As the OEP module doesn't provide implementations for all of its interfaces, some of them must be implemented on the application side. See the corresponding demo apps (for example OEP-desktop) for possible (and ready-to-use) implementations.

OEP initialization

Unlike the native C++ interface, OEP provides a slightly different way of SDK initialization. Instead of creating a single object, you create several of them, but most are required only to initialize each other.

#include <interfaces/offscreen_effect_player.hpp>

// implementations on app side
#include "render_context.hpp"
#include "effect_player.hpp"

/* ... */
// somewhere in main() or other initialization function

// create render_context instance
// (app/platform specific, must be implemented on app side)
auto rc = bnb::oep::interfaces::render_context::create();

// create offscreen_render_target instance
// (default implementation is provided, but can be reimplemented)
auto ort = bnb::oep::interfaces::offscreen_render_target::create(rc);

// create effect_player implementation instance
// (app specific, must be implemented on app side, but implementations from example are fine)
std::vector<std::string> resources_paths {
    "/path/to/resources/folder",
    "/path/to/effects/folder" // optional
};
auto ep = bnb::oep::interfaces::effect_player::create(resources_paths, token);

// create offscreen_effect_player instance
// (implementation is in OEP module, this is the main object to work with)
auto oep = bnb::oep::interfaces::offscreen_effect_player::create(ep, ort, oep_width, oep_height);

// ... any other application-specific logic ...

Please note that there are no static method calls to destroy/cleanup, unlike in the native C++ interface. Everything required is hidden behind the provided interfaces, so there is no need to worry about it.

Another important thing to note is that you might need to call the surface_changed() method at least once after initialization. A good approach is to call it on the window resize event (or a similar event) that is guaranteed to fire on application startup, but after effect player initialization.
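A minimal sketch (assuming surface_changed() is exposed by the offscreen_effect_player instance created above as oep):

// e.g. in your window/drawable resize handler
oep->surface_changed(new_width, new_height);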

Working with OEP

As mentioned above, offscreen_effect_player is the main object (referred to as the oep variable in this document) that provides access to the SDK functionality. OEP doesn't expose everything available through the effect_player interface, only the most common capabilities. But if something missing is required, it can be accessed through the application-specific effect_player implementation, because it has access to the effect_player Banuba SDK C++ interface.

Loading an effect with OEP

Unlike the native C++ interface, loading the desired effect with OEP doesn't require accessing other objects. OEP provides only one function for effect loading, and it behaves asynchronously.

oep->load_effect("effects/Afro");

The notes from the corresponding section above also apply here, e.g. the effect will not be activated until you start pushing frames into the SDK.

OEP input

The list of supported pixel formats is the same (at least for now) as for the "plain" effect_player mentioned above, so please refer to the corresponding section of this document for details.

It is worth noting that OEP accepts different input objects compared to the "plain" effect_player, but these objects are much easier to create than the full_image_t required by effect_player: a factory function with a straightforward interface is provided. The created objects should be passed to the process_image_async() function.

#include <interfaces/pixel_buffer.hpp>
#include <interfaces/offscreen_effect_player.hpp>

using ns = bnb::oep::interfaces::pixel_buffer;
/* ... fill the planes data ... */
std::vector<ns::plane_data> planes{y_plane, u_plane, v_plane};
pixel_buffer_sptr image = ns::create(planes, bnb::oep::interfaces::image_format::i420_bt709_full, width, height);

oep->process_image_async(image, bnb::oep::interfaces::rotation::deg0, [](image_processing_result_sptr sptr) {}, bnb::oep::interfaces::rotation::deg180);

Refer to the interfaces/pixel_buffer.hpp header for details about available constants and types.

OEP output

Compared to the native SDK interfaces, OEP provides impressive options for output. First of all, it is worth mentioning that it is possible to get the processed image either as a buffer in the desired format or as a texture, and you do not need to worry about rendering at all: everything is covered by the interfaces. Moreover, OEP provides an option to rotate the output image.

There is one important thing to understand: in the case of buffer output, irrespective of the requested format, a GPU-to-CPU memory synchronization will happen, and in most cases this is the most time-consuming operation.

The list of supported output formats is at least the same as for input, i.e.:

  • RGBA (including variations)
  • YUV i420
  • YUV NV12

Both full and video ranges in both the BT.601 and BT.709 standards are supported.

The information above is true only for the default implementations provided; the actual output format support may vary depending on the offscreen_render_target implementation.

The example of getting output as a texture:

auto proc_callback = [](image_processing_result_sptr result) {
    if (result != nullptr) {
        auto render_callback = [](std::optional<rendered_texture_t> texture_id) {
            if (texture_id.has_value()) {
                auto gl_texture = static_cast<GLuint>(reinterpret_cast<int64_t>(*texture_id));
                // do anything with this texture, e.g. render it
            }
        };
        result->get_texture(render_callback);
    }
};

oep->process_image_async(image, bnb::oep::interfaces::rotation::deg0, proc_callback, bnb::oep::interfaces::rotation::deg180);

The example of getting output as a buffer looks similar:

auto proc_callback = [](image_processing_result_sptr result) {
    if (result) {
        auto image_callback = [](pixel_buffer_sptr out_img) {
            if (out_img) {
                // do anything with planes
                // out_img->get_base_sptr_of_plane(0)
                // out_img->get_base_sptr_of_plane(1)
            }
        };
        result->get_image(bnb::oep::interfaces::image_format::nv12_bt709_full, image_callback);
    }
};

oep->process_image_async(image, bnb::oep::interfaces::rotation::deg0, proc_callback, std::nullopt);

Communicating with effect in OEP

As with the native C++ interface, call_js_method() is the way to manipulate an effect. The only difference is that no intermediate objects need to be retrieved to call it; it is part of the offscreen_effect_player interface. It also accepts the same two parameters: the method to call and its arguments as a single parameter.

oep->call_js_method(method, param);

eval_js() is not available in OEP.