# Banuba Face AR SDK Documentation

> The docs provide full information about the Banuba Face AR SDK, including integration and customization guides.

## far-sdk

- [Banuba Face AR SDK](/index.md): Discover the Banuba Face AR SDK and the possibilities it offers for building AR apps

### search

- [Search the documentation](/search.md)

### support

- [Contact Support](/support.md)

### api_docs

- [API Documentation](/api_docs.md): API documentation for iOS (Swift)

### effects

#### guides

##### feature_params

- [How to change feature parameters using the scripting API](/effects/guides/feature_params.md): Since v1.6.1, Banuba SDK provides an opportunity to change some feature parameters using the scripting engine

##### hand_ar_hand_gestures

- [How to use the Hand gestures feature](/effects/guides/hand_ar_hand_gestures.md): Learn how to utilize hand gesture tracking with the Banuba API

#### makeup_deprecated

##### face_beauty

- [Face Beauty API](/effects/makeup_deprecated/face_beauty.md): Banuba provides the Face Beauty API, designed to help you integrate augmented reality beauty try-on features into iOS and Android apps

##### makeup

- [Virtual Makeup API](/effects/makeup_deprecated/makeup.md): Banuba provides the Virtual Makeup API, designed to help developers integrate augmented reality beauty try-on features into their iOS and Android apps

##### makeup_usage

- [How to combine the Makeup API and Beauty API features](/effects/makeup_deprecated/makeup_usage.md): Learn how to combine the Makeup API and Beauty API features and use them in your application

#### overview

- [Effect structure](/effects/overview.md): You may also create your own effects with Banuba Studio

#### prefabs

##### face

- [On Face Prefabs](/effects/prefabs/face.md): GLTF

##### hands

- [On Hands Prefabs](/effects/prefabs/hands.md): Nails

##### makeup

- [Makeup Prefabs](/effects/prefabs/makeup.md): Basic concepts

##### overview

- [Prefabs Overview](/effects/prefabs/overview.md): A prefab is a high-level object that represents a set of render and SDK features
##### sounds

- [Sounds Prefabs](/effects/prefabs/sounds.md): Sounds

##### sprites

- [Sprites Prefabs](/effects/prefabs/sprites.md): Sprites

##### top_level

- [Top Level Prefabs](/effects/prefabs/top_level.md): Background

#### virtual_background

- [Virtual Background API](/effects/virtual_background.md): Banuba provides the Virtual Background API, designed to help developers integrate augmented reality background separation into their apps

### support_page

- [Support](/support_page.md): Dev Portal

### tutorials

#### capabilities

##### 3rd_licenses

- [Third-party library list](/tutorials/capabilities/3rd_licenses.md): A list of third-party libraries used within the Banuba SDK

##### demo_face_filters

- [Demo Face Filters](/tutorials/capabilities/demo_face_filters.md): A list of demo face filters and the technologies they represent

##### glossary

- [FaceAR Glossary](/tutorials/capabilities/glossary.md): AR Technologies

##### sdk_features

- [SDK Features](/tutorials/capabilities/sdk_features.md): A list of Banuba Face AR SDK features and supported platforms

##### system_requirements

- [System Requirements](/tutorials/capabilities/system_requirements.md): Supported Platforms

##### technical_specification

- [Technical Specification](/tutorials/capabilities/technical_specification.md): Technical specification and minimal requirements of the Banuba Face AR SDK features

##### token_management

- [Token Management](/tutorials/capabilities/token_management.md): Answers to frequently asked questions about the token management process

#### changelog

- [Changelog](/tutorials/changelog.md): [1.18.0] - 2025-03-09

#### development

##### api_overview

- [API Overview](/tutorials/development/api_overview.md): API Overview

###### android

- [android](/tutorials/development/api_overview/android.md): image

###### desktop

- [desktop](/tutorials/development/api_overview/desktop.md): image

###### ios

- [ios](/tutorials/development/api_overview/ios.md): image

###### web

- [web](/tutorials/development/api_overview/web.md): image

##### basic_integration

- [Getting Started](/tutorials/development/basic_integration.md): Getting Started guide for the Banuba SDK

###### android

- [android](/tutorials/development/basic_integration/android.md): Installation

###### desktop

- [desktop](/tutorials/development/basic_integration/desktop.md): The steps below apply to desktop integration (Windows and/or macOS) with C++
###### flutter

- [flutter](/tutorials/development/basic_integration/flutter.md): Banuba SDK for Flutter

###### ios

- [ios](/tutorials/development/basic_integration/ios.md): Installation

###### react_native

- [react_native](/tutorials/development/basic_integration/react_native.md): Banuba SDK for React Native

###### web

- [web](/tutorials/development/basic_integration/web.md): Requirements

##### guides

###### ar_cloud

- [AR Cloud Guide](/tutorials/development/guides/ar_cloud.md): A guide on how to use AR Cloud in the SDK

###### landmarks

- [Face Landmarks Guide](/tutorials/development/guides/landmarks.md): A guide on how to get face landmarks

###### migration

- [Migration Guides](/tutorials/development/guides/migration.md): To version 1.17.0

###### optimization

- [Optimization Guides](/tutorials/development/guides/optimization.md): Optimizing WebAR SDK bundle size

###### watermark

- [Watermark Guide](/tutorials/development/guides/watermark.md): How to apply a watermark to a video

##### installation

- [Adding Banuba SDK to your project](/tutorials/development/installation.md): A getting started guide on how to add the Banuba SDK to a project

###### android

- [android](/tutorials/development/installation/android.md): Packages

###### desktop

- [desktop](/tutorials/development/installation/desktop.md): The Banuba SDK for desktop platforms (i.e. Windows and macOS) is distributed via

###### ios

- [ios](/tutorials/development/installation/ios.md): CocoaPods packages

###### web

- [web](/tutorials/development/installation/web.md): NPM Package

##### known_issues

- [Known Issues](/tutorials/development/known_issues.md): Visit our FAQ or contact our support

###### web

- [web](/tutorials/development/known_issues/web.md): MediaStreamCapture stream freezes when a browser tab becomes inactive in Safari

##### llms

- [Vibe Coding](/tutorials/development/llms.md): If you use AI agents, feel free to use our LLM-ready documentation

##### samples

- [Examples of using Banuba SDK](/tutorials/development/samples.md): A getting started guide for the Banuba SDK

###### android

- [android](/tutorials/development/samples/android.md): Requirements

###### desktop

- [desktop](/tutorials/development/samples/desktop.md): The examples below are written in C++ and run on both Windows and macOS
###### flutter

- [flutter](/tutorials/development/samples/flutter.md): Minimal sample

###### ios

- [ios](/tutorials/development/samples/ios.md): iOS samples (Swift)

###### macos

- [macos](/tutorials/development/samples/macos.md): macOS sample (Swift)

###### react_native

- [react_native](/tutorials/development/samples/react_native.md): Minimal sample

###### web

- [web](/tutorials/development/samples/web.md): Quickstart

##### videocall

- [Using video calls with the Banuba SDK](/tutorials/development/videocall.md): A guide on how to integrate video calling in a project with the Banuba SDK

###### android

* 0 - none,
* 1 - like,
* 2 - ok,
* 3 - palm,
* 4 - rock,
* 5 - victory/peace.

You may extend this trigger function behaviour by adding your custom logic to `config.js` in the `test_gestures` effect folder, e.g. play a guitar solo audio every time someone shows the "rock":

```
function setGesture(json) {
    var gestureInfo = JSON.parse(json);
    switch (gestureInfo.idx) {
        case 4:
            Api.playSound("rockme.ogg", false, 1);
            break;
        default:
            break;
    }
}
```

The only limit is your creativity!

4. Now you can run your application to test hand gesture tracking and your custom logic.
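If you plan to react to several gestures, a lookup table keeps `setGesture` flat instead of growing the `switch`. This is a minimal sketch under the same `config.js` conventions as the sample above; the sound file names are placeholders and would have to exist in the effect folder:

```
// Map gesture indices (see the list above) to sound files.
// The file names here are hypothetical — add your own assets.
var gestureSounds = {
    1: "like.ogg",    // like
    4: "rockme.ogg",  // rock
    5: "peace.ogg"    // victory/peace
};

function setGesture(json) {
    var gestureInfo = JSON.parse(json);
    var sound = gestureSounds[gestureInfo.idx];
    if (sound) {
        // Api.playSound(file, looped, volume), as used in the sample above
        Api.playSound(sound, false, 1);
    }
}
```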
---

# Face Beauty API

danger

**This feature is deprecated**. [We recommend using Prefabs](/far-sdk/effects/prefabs/overview.md)

Banuba provides the Beauty API, designed to help you integrate face modification and touch-up functionality into your app. The beautification features fit a variety of use cases, e.g. live streaming, video chat and video conferencing apps, selfie editors, and portrait retouching software. The goal is to make users feel comfortable on camera.

[Download example](pathname:///generated/effects/Makeup.zip)

tip

[Read more about how to use and combine](/far-sdk/effects/makeup_deprecated/makeup_usage.md) Makeup API and Face Beauty API features.

note

Please [contact us](/far-sdk/support/.md) if you wish to use Makeup API features in iOS version 14.x.

The Beauty module lets you enhance the face with the following built-in features:

## Teeth Whitening

Allows for a beautiful smile.

* `Teeth.whitening(n)` - changes the whitening texture intensity, where **n** is any value from 0 to 1 (decimals included).

- config.js
- Java
- Swift
- JavaScript

```
/* Feel free to add your custom code below */

// set teeth whitening strength
Teeth.whitening(1)
```

```
// Effect mCurrentEffect = ...

// set teeth whitening strength
mCurrentEffect.evalJs("Teeth.whitening(1)", null);
```

```
// var currentEffect: BNBEffect = ...

// set teeth whitening strength
currentEffect?.evalJs("Teeth.whitening(1)", resultCallback: nil)
```

```
// set teeth whitening strength
await effect.evalJs("Teeth.whitening(1)")
```

**Preview**

![Right image compare](/far-sdk/assets/images/original-8823b66ca7c9e5a54656f6ee9b77c29f.jpg)![Left image compare](/far-sdk/assets/images/teeth-ad89bfabc20c76750c21b2348693680b.jpg)

## Face morphing

Slims down the cheeks and nose to make the face look more delicate.

* `FaceMorph.eyebrows({spacing: value, height: value, bend: value})` - sets the eyebrow morph params:
  * `spacing` - adjusts the space between the eyebrows \[-1;1]
  * `height` - raises/lowers the eyebrows \[-1;1]
  * `bend` - adjusts the bend of the eyebrows \[-1;1]
* `FaceMorph.eyes({rounding: value, enlargement: value, height: value, spacing: value, squint: value, lower_eyelid_pos: value, lower_eyelid_size: value, down: value, eyelid_upper: value, eyelid_lower: value})` - sets the eye morph params:
  * `rounding` - adjusts the roundness of the eyes \[0;1]
  * `enlargement` - enlarges the eyes \[0;1]
  * `height` - raises/lowers the eyes \[-1;1]
  * `spacing` - adjusts the space between the eyes \[-1;1]
  * `squint` - makes the person squint by adjusting the eyelids \[-1;1]
  * `lower_eyelid_pos` - raises/lowers the lower eyelid \[-1;1]
  * `lower_eyelid_size` - enlarges/shrinks the lower eyelid \[-1;1]
  * `down` - eyes down \[0;1]
  * `eyelid_upper` - upper eyelid \[0;1]
  * `eyelid_lower` - lower eyelid \[0;1]
* `FaceMorph.nose({width: value, length: value, tip_width: value, down_up: value, sellion: value})` - sets the nose morph params:
  * `width` - adjusts the nose width \[-1;1]
  * `length` - adjusts the nose length \[-1;1]
  * `tip_width` - adjusts the nose tip width \[0;1]
  * `down_up` - nose down/up \[0;1]
  * `sellion` - nose sellion \[0;1]
* `FaceMorph.lips({size: value, height: value, thickness: value, mouth_size: value, smile: value, shape: value, sharp: value})` - sets the lips morph params:
  * `size` - adjusts the width and vertical size of the lips \[-1;1]
  * `height` - raises/lowers the lips \[-1;1]
  * `thickness` - adjusts the thickness of the lips \[-1;1]
  * `mouth_size` - adjusts the size of the mouth \[-1;1]
  * `smile` - makes the person smile \[0;1]
  * `shape` - adjusts the shape of the lips \[-1;1]
  * `sharp` - lips sharp \[0;1]
* `FaceMorph.face({narrowing: value, v_shape: value, cheekbones_narrowing: value, cheeks_narrowing: value, jaw_narrowing: value, chin_shortening: value, chin_narrowing: value, sunken_cheeks: value, cheeks_jaw_narrowing: value, jaw_wide_thin: value, chin: value, forehead: value})` - sets the face morph params:
  * `narrowing` - narrows the face \[0;1]
  * `v_shape` - shrinks the chin and narrows the cheeks \[0;1]
  * `cheekbones_narrowing` - narrows the cheekbones \[-1;1]
  * `cheeks_narrowing` - narrows the cheeks \[0;1]
  * `jaw_narrowing` - narrows the jaw \[0;1]
  * `chin_shortening` - decreases the length of the chin \[0;1]
  * `chin_narrowing` - narrows the chin \[0;1]
  * `sunken_cheeks` - sinks the cheeks and emphasizes the cheekbones \[0;1]
  * `cheeks_jaw_narrowing` - narrows the cheeks and the jaw \[0;1]
  * `jaw_wide_thin` - jaw wide/thin \[0;1]
  * `chin` - face chin \[0;1]
  * `forehead` - forehead \[0;1]
* `FaceMorph.clear()` - resets all morphs

- config.js
- Java
- Swift
- JavaScript

```
// Set FaceMorph effects
FaceMorph.eyebrows({spacing: 0.6, height: 0.3, bend: 1.0})
FaceMorph.eyes({rounding: 0.6, enlargement: 0.3, height: 0.3, spacing: 0.3, squint: 0.3, lower_eyelid_pos: 0.3, lower_eyelid_size: 0.3})
FaceMorph.face({narrowing: 0.6, v_shape: 0.3, cheekbones_narrowing: 0.3, cheeks_narrowing: 0.3, jaw_narrowing: 0.7, chin_shortening: 0.3, chin_narrowing: 0.3, sunken_cheeks: 0.1, cheeks_jaw_narrowing: 0.2})
FaceMorph.nose({width: 0.3, length: 0.2, tip_width: 0.1})
FaceMorph.lips({size: 0.4, height: 1.0, thickness: 0.1, mouth_size: 0.2, smile: 0.8, shape: 0.4})

// Reset all the FaceMorph effects
FaceMorph.clear()
```
```
// Effect mCurrentEffect = ...

// Set FaceMorph effects
mCurrentEffect.evalJs("FaceMorph.eyebrows({spacing: 0.6, height: 0.3, bend: 1.0})", null);
mCurrentEffect.evalJs("FaceMorph.eyes({rounding: 0.6, enlargement: 0.3, height: 0.3, spacing: 0.3, squint: 0.3, lower_eyelid_pos: 0.3, lower_eyelid_size: 0.3})", null);
mCurrentEffect.evalJs("FaceMorph.face({narrowing: 0.6, v_shape: 0.3, cheekbones_narrowing: 0.3, cheeks_narrowing: 0.3, jaw_narrowing: 0.7, chin_shortening: 0.3, chin_narrowing: 0.3, sunken_cheeks: 0.1, cheeks_jaw_narrowing: 0.2})", null);
mCurrentEffect.evalJs("FaceMorph.nose({width: 0.3, length: 0.2, tip_width: 0.1})", null);
mCurrentEffect.evalJs("FaceMorph.lips({size: 0.4, height: 1.0, thickness: 0.1, mouth_size: 0.2, smile: 0.8, shape: 0.4})", null);

// Reset FaceMorph effects
mCurrentEffect.evalJs("FaceMorph.clear()", null);
```

```
// var currentEffect: BNBEffect = ...

// Set FaceMorph effects
currentEffect?.evalJs("FaceMorph.eyebrows({spacing: 0.6, height: 0.3, bend: 1.0})", resultCallback: nil)
currentEffect?.evalJs("FaceMorph.eyes({rounding: 0.6, enlargement: 0.3, height: 0.3, spacing: 0.3, squint: 0.3, lower_eyelid_pos: 0.3, lower_eyelid_size: 0.3})", resultCallback: nil)
currentEffect?.evalJs("FaceMorph.face({narrowing: 0.6, v_shape: 0.3, cheekbones_narrowing: 0.3, cheeks_narrowing: 0.3, jaw_narrowing: 0.7, chin_shortening: 0.3, chin_narrowing: 0.3, sunken_cheeks: 0.1, cheeks_jaw_narrowing: 0.2})", resultCallback: nil)
currentEffect?.evalJs("FaceMorph.nose({width: 0.3, length: 0.2, tip_width: 0.1})", resultCallback: nil)
currentEffect?.evalJs("FaceMorph.lips({size: 0.4, height: 1.0, thickness: 0.1, mouth_size: 0.2, smile: 0.8, shape: 0.4})", resultCallback: nil)

// Reset FaceMorph effects
currentEffect?.evalJs("FaceMorph.clear()", resultCallback: nil)
```

```
// Set FaceMorph effects
await effect.evalJs("FaceMorph.eyebrows({spacing: 0.6, height: 0.3, bend: 1.0})")
await effect.evalJs("FaceMorph.eyes({rounding: 0.6, enlargement: 0.3, height: 0.3, spacing: 0.3, squint: 0.3, lower_eyelid_pos: 0.3, lower_eyelid_size: 0.3})")
await effect.evalJs("FaceMorph.face({narrowing: 0.6, v_shape: 0.3, cheekbones_narrowing: 0.3, cheeks_narrowing: 0.3, jaw_narrowing: 0.7, chin_shortening: 0.3, chin_narrowing: 0.3, sunken_cheeks: 0.1, cheeks_jaw_narrowing: 0.2})")
await effect.evalJs("FaceMorph.nose({width: 0.3, length: 0.2, tip_width: 0.1})")
await effect.evalJs("FaceMorph.lips({size: 0.4, height: 1.0, thickness: 0.1, mouth_size: 0.2, smile: 0.8, shape: 0.4})")

// Reset FaceMorph effects
await effect.evalJs("FaceMorph.clear()")
```

**Preview**

![Right image compare](/far-sdk/assets/images/original2-f74f044be90519e96c2b58b2db7ddce2.jpg)![Left image compare](/far-sdk/assets/images/morph-082d96aed26822a507fd45581b8b49f4.jpg)
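Because morph values are plain numbers, they map naturally onto app UI controls. Below is a minimal `config.js` sketch that folds several morph parameters into one user-facing "strength" slider; it assumes (not shown in the docs above) that your app forwards the slider value through the same `evalJs` bridge, e.g. `effect.evalJs("setMorphStrength(0.5)")`:

```
// Hypothetical helper: scale a few FaceMorph parameters from a
// single 0..1 slider value passed in from the application.
function setMorphStrength(strength) {
    FaceMorph.face({narrowing: 0.6 * strength, cheeks_narrowing: 0.4 * strength})
    FaceMorph.nose({width: -0.3 * strength})
    FaceMorph.lips({smile: 0.2 * strength})
}
```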
## Photo filters (LUTs)

Applies color filters to the entire image.

* `Filter.set("lut_texture.png")` - sets the LUT filter texture,
* `Filter.strength(n)` - sets the LUT filter strength, where **n** is any value from 0 to 1 (decimals included); larger values, such as 2 or 3, may also be passed,
* `Filter.clear()` - clears the filter.

- config.js
- Java
- Swift
- JavaScript

```
// Set lut filter characteristics
Filter.set("lut_texture.png")
Filter.strength(1)

// Clear filter
Filter.clear()
```

```
// Effect mCurrentEffect = ...

// Set lut filter characteristics
mCurrentEffect.evalJs("Filter.set('lut_texture.png')", null);
mCurrentEffect.evalJs("Filter.strength(1)", null);

// Clear filter
mCurrentEffect.evalJs("Filter.clear()", null);
```

```
// var currentEffect: BNBEffect = ...

// Set lut filter characteristics
currentEffect?.evalJs("Filter.set('lut_texture.png')", resultCallback: nil)
currentEffect?.evalJs("Filter.strength(1)", resultCallback: nil)

// Clear filter
currentEffect?.evalJs("Filter.clear()", resultCallback: nil)
```

```
// Set lut filter characteristics
await effect.evalJs("Filter.set('lut_texture.png')")
await effect.evalJs("Filter.strength(1)")

// Clear filter
await effect.evalJs("Filter.clear()")
```

**Preview**

![Right image compare](/far-sdk/assets/images/original-8823b66ca7c9e5a54656f6ee9b77c29f.jpg)![Left image compare](/far-sdk/assets/images/Filter-db9e7fe33e0c0309c76a9abad0226df7.jpg)

## Skin

### Skin smoothing (Skin softening)

Makes the skin look younger by smoothing wrinkles.

* `Skin.softening(n)` - sets the softening intensity, where **n** is any value from 0 to 1 (decimals included).

- config.js
- Java
- Swift
- JavaScript

```
/* Feel free to add your custom code below */
Skin.softening(1)
```

```
// Effect mCurrentEffect = ...
mCurrentEffect.evalJs("Skin.softening(1)", null);
```

```
// var currentEffect: BNBEffect = ...
currentEffect?.evalJs("Skin.softening(1)", resultCallback: nil)
```

```
await effect.evalJs("Skin.softening(1)")
```

**Preview**

![Right image compare](/far-sdk/assets/images/original-8823b66ca7c9e5a54656f6ee9b77c29f.jpg)![Left image compare](/far-sdk/assets/images/SkinSoftening-e7800133c7d654b8372366266bdbf026.jpg)

### Skin color

Changes the face and neck skin color.

* `Skin.color("R G B A")` - sets the skin color in R G B A format (separated with spaces). Each value should be in a range from 0 to 1 (decimals included),
* `Skin.clear()` - clears the skin color and softening.

info

Requires the [Skin segmentation](/far-sdk/tutorials/capabilities/sdk_features.md#face-ar-sdk-neural-network-features) Neural Network.

- config.js
- Java
- Swift
- JavaScript

```
// Set skin color
Skin.color("0.8 0.6 0.1 0.4")

// Reset skin color
Skin.clear()
```

```
// Effect mCurrentEffect = ...

// Set skin color
mCurrentEffect.evalJs("Skin.color('0.8 0.6 0.1 0.4')", null);

// Reset skin color
mCurrentEffect.evalJs("Skin.clear()", null);
```

```
// var currentEffect: BNBEffect = ...

// Set skin color
currentEffect?.evalJs("Skin.color('0.8 0.6 0.1 0.4')", resultCallback: nil)

// Reset skin color
currentEffect?.evalJs("Skin.clear()", resultCallback: nil)
```

```
// Set skin color
await effect.evalJs("Skin.color('0.8 0.6 0.1 0.4')")

// Reset skin color
await effect.evalJs("Skin.clear()")
```

**Preview**

![Right image compare](/far-sdk/assets/images/original-8823b66ca7c9e5a54656f6ee9b77c29f.jpg)![Left image compare](/far-sdk/assets/images/SkinColor-34d47f813d6ab2280eaf6e3793f8d14d.jpg)

## Background separation

info

Requires the [Background separation](/far-sdk/tutorials/capabilities/sdk_features.md#face-ar-sdk-neural-network-features) Neural Network.

tip

If you only need the background separation effect, see the [Virtual Background API](/far-sdk/effects/virtual_background.md).
### Background texture

Sets a file as the background texture.

* `Background.texture("bg_image.png")` - sets an image file as the background texture.
  * Supported formats: `.jpeg`, `.jpg`, `.png`, `.ktx`, `.gif`.
* `Background.texture("bg_video.mp4")` - sets a video file as the background texture. Visit the [technical specification](/far-sdk/tutorials/capabilities/technical_specification.md#video-formats-support) for supported video formats.

- config.js
- Java
- Swift
- JavaScript

```
/* Feel free to add your custom code below */
Background.texture("bg_colors_tile.png")
```

```
// Effect mCurrentEffect = ...
mCurrentEffect.evalJs("Background.texture('bg_colors_tile.png')", null);
```

```
// var currentEffect: BNBEffect = ...
currentEffect?.evalJs("Background.texture('bg_colors_tile.png')", resultCallback: nil)
```

```
await effect.evalJs("Background.texture('bg_colors_tile.png')")
```

**Preview**

![Right image compare](/far-sdk/assets/images/original_wide-2f360425a0e0a779832de176b75c4354.jpg)![Left image compare](/far-sdk/assets/images/BackgroundTexture-fe6299595a550a13eec111677e4e2539.jpg)

### Background transparency

Sets the background transparency.

* `Background.transparency(n)` - sets the transparency value from 0 to 1 (decimals included).

- config.js
- Java
- Swift
- JavaScript

```
/* Feel free to add your custom code below */
Background.transparency(0.5)
```

```
// Effect mCurrentEffect = ...
mCurrentEffect.evalJs("Background.transparency(0.5)", null);
```

```
// var currentEffect: BNBEffect = ...
currentEffect?.evalJs("Background.transparency(0.5)", resultCallback: nil)
```

```
await effect.evalJs("Background.transparency(0.5)")
```

**Preview**

![Right image compare](/far-sdk/assets/images/BackgroundTexture-fe6299595a550a13eec111677e4e2539.jpg)![Left image compare](/far-sdk/assets/images/BackgroundTransparent-2ee7764a986084bc8020b1342daef38c.jpg)

### Background rotation

Rotates the background texture clockwise, in degrees.

* `Background.rotation(deg)` - sets the rotation value from 0 to 360 degrees. The value should be divisible by 90.

- config.js
- Java
- Swift
- JavaScript

```
/* Feel free to add your custom code below */
Background.rotation(90)
```

```
// Effect mCurrentEffect = ...
mCurrentEffect.evalJs("Background.rotation(90)", null);
```

```
// var currentEffect: BNBEffect = ...
currentEffect?.evalJs("Background.rotation(90)", resultCallback: nil)
```

```
await effect.evalJs("Background.rotation(90)")
```

### Background scale

Scales the background texture.

* `Background.scale(n)` - multiplies the background texture size by the given value.

- config.js
- Java
- Swift
- JavaScript

```
/* Feel free to add your custom code below */
Background.scale(2)
```

```
// Effect mCurrentEffect = ...
mCurrentEffect.evalJs("Background.scale(2)", null);
```

```
// var currentEffect: BNBEffect = ...
currentEffect?.evalJs("Background.scale(2)", resultCallback: nil)
```

```
await effect.evalJs("Background.scale(2)")
```

### Background contentMode

Sets the background texture content mode.

* `Background.contentMode("mode")` - sets the mode type; possible values: `fill`, `fit`, `scale_to_fill`.
- config.js
- Java
- Swift
- JavaScript

```
/* Feel free to add your custom code below */
Background.contentMode("fill")
```

```
// Effect mCurrentEffect = ...
mCurrentEffect.evalJs("Background.contentMode('fill')", null);
```

```
// var currentEffect: BNBEffect = ...
currentEffect?.evalJs("Background.contentMode('fill')", resultCallback: nil)
```

```
await effect.evalJs("Background.contentMode('fill')")
```

### Background blur

Blurs the background behind the user.

* `Background.blur(n)` - sets the background blur radius in the \[0, 1] range.

- config.js
- Java
- Swift
- JavaScript

```
/* Feel free to add your custom code below */
Background.blur(0.2)
```

```
// Effect mCurrentEffect = ...
mCurrentEffect.evalJs("Background.blur(0.2)", null);
```

```
// var currentEffect: BNBEffect = ...
currentEffect?.evalJs("Background.blur(0.2)", resultCallback: nil)
```

```
await effect.evalJs("Background.blur(0.2)")
```

**Preview**

![Right image compare](/far-sdk/assets/images/BackgroundTexture-fe6299595a550a13eec111677e4e2539.jpg)![Left image compare](/far-sdk/assets/images/BackgroundBlur-19465ffc736d1651b18c01ea8d99b1ef.jpg)

### Background clear

Removes the background color and texture, and resets any settings applied.

- config.js
- Java
- Swift
- JavaScript

```
/* Feel free to add your custom code below */
Background.clear()
```

```
// Effect mCurrentEffect = ...
mCurrentEffect.evalJs("Background.clear()", null);
```

```
// var currentEffect: BNBEffect = ...
currentEffect?.evalJs("Background.clear()", resultCallback: nil)
```

```
await effect.evalJs("Background.clear()")
```
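The `Background` calls compose, so a virtual-background preset usually sets the texture, the content mode, and the blur together. A minimal `config.js` sketch using only the calls documented above; `office.png` is a placeholder asset you would ship in the effect folder:

```
/* Sketch: a branded virtual-background preset.
   "office.png" is hypothetical — add your own image. */
Background.texture("office.png")
Background.contentMode("fill")  // cover the whole frame
Background.blur(0.15)           // soften the backdrop slightly

// Later, to return to the real camera background:
// Background.clear()
```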
mCurrentEffect.evalJs("Hair.color('0.19 0.06 0.25', '0.09 0.25 0.38')", null); ``` ``` // var currentEffect: BNBEffect = ... currentEffect?.evalJs("Hair.color('0.19 0.06 0.25', '0.09 0.25 0.38')", resultCallback: nil) ``` ``` await effect.evalJs("Hair.color('0.19 0.06 0.25', '0.09 0.25 0.38')") ``` **Preview** ![Right image compare](/far-sdk/assets/images/original-8823b66ca7c9e5a54656f6ee9b77c29f.jpg)![Left image compare](/far-sdk/assets/images/HairGradient-b253876cdcdbb46c8ae69f129ebcdbb5.jpg) Drag ### Hair strands painting[​](#hair-strands-painting "Direct link to Hair strands painting") Dyes hair strands with 1 to 5 colors. * `Hair.strands("R G B A", "R G B A", "R G B A", ...)` - set hair strands in R G B A format (separated with space) with 5 color maximum. info Requires [Hair strands painting](/far-sdk/tutorials/capabilities/sdk_features.md#face-ar-sdk-neural-network-features) Add-On. * config.js * Java * Swift * JavaScript ``` /* Feel free to add your custom code below */ Hair.strands("0.80 0.40 0.40 1.0", "0.83 0.40 0.40 1.0", "0.85 0.75 0.75 1.0", "0.87 0.60 0.60 1.0", "0.99 0.65 0.65 1.0") ``` ``` // Effect mCurrentEffect = ... mCurrentEffect.evalJs("Hair.strands('0.80 0.40 0.40 1.0', '0.83 0.40 0.40 1.0', '0.85 0.75 0.75 1.0', '0.87 0.60 0.60 1.0', '0.99 0.65 0.65 1.0')", null); ``` ``` // var currentEffect: BNBEffect = ... currentEffect?.evalJs("Hair.strands('0.80 0.40 0.40 1.0', '0.83 0.40 0.40 1.0', '0.85 0.75 0.75 1.0', '0.87 0.60 0.60 1.0', '0.99 0.65 0.65 1.0')", resultCallback: nil); ``` ``` await effect.evalJs("Hair.strands('0.80 0.40 0.40 1.0', '0.83 0.40 0.40 1.0', '0.85 0.75 0.75 1.0', '0.87 0.60 0.60 1.0', '0.99 0.65 0.65 1.0')") ``` **Preview** ![Right image compare](/far-sdk/assets/images/original-8823b66ca7c9e5a54656f6ee9b77c29f.jpg)![Left image compare](/far-sdk/assets/images/HairStrand-cd7d7af6e94d76429c69883f0ad7a201.jpg) Drag ## Eyes beautification[​](#eyes-beautification "Direct link to Eyes beautification") info Requires [Eye segmentation](/far-sdk/tutorials/capabilities/sdk_features.md#face-ar-sdk-neural-network-features) Neural Network. ### Eyes coloring[​](#eyes-coloring "Direct link to Eyes coloring") Changes the color of the iris as in virtual lens try on. * `Eyes.color("R G B A")` - set eyes color in R G B A format (separated with space). Each value should be in a rage from 0 to 1 (including decimal), * `Eyes.clear()` - clears eyes color including flare and whitening. - config.js - Java - Swift - JavaScript ``` // Set eyes color Eyes.color("0 0.2 0.8 0.64") // Reset eyes color Eyes.clear() ``` ``` // Effect mCurrentEffect = ... // Set eyes color mCurrentEffect.evalJs("Eyes.color('0 0.2 0.8 0.64')", null); // Reset eyes color mCurrentEffect.evalJs("Eyes.clear()", null); ``` ``` // var currentEffect: BNBEffect = ... // Set eyes color currentEffect?.evalJs("Eyes.color('0 0.2 0.8 0.64')", resultCallback: nil) // Reset eyes color currentEffect?.evalJs("Eyes.clear()", resultCallback: nil) ``` ``` // Set eyes color await effect.evalJs("Eyes.color('0 0.2 0.8 0.64')") // Reset eyes color await effect.evalJs("Eyes.clear()") ``` **Preview** ![Right image compare](/far-sdk/assets/images/original-8823b66ca7c9e5a54656f6ee9b77c29f.jpg)![Left image compare](/far-sdk/assets/images/eyescolor-fc2dcb06828e09ff46183aadd16ce026.jpg) Drag ### Eye flare[​](#eye-flare "Direct link to Eye flare") Makes eyes more expressive adding flare. * `Eyes.flare(n)` - sets the eyes flare strength from 0 to 1. 
- config.js
- Java
- Swift
- JavaScript

```
/* Feel free to add your custom code below */
Eyes.flare(1)
```

```
// Effect mCurrentEffect = ...
mCurrentEffect.evalJs("Eyes.flare(1)", null);
```

```
// var currentEffect: BNBEffect = ...
currentEffect?.evalJs("Eyes.flare(1)", resultCallback: nil)
```

```
await effect.evalJs("Eyes.flare(1)")
```

**Preview**

![Right image compare](/far-sdk/assets/images/original-8823b66ca7c9e5a54656f6ee9b77c29f.jpg)![Left image compare](/far-sdk/assets/images/EyesFlare-aaa9dfc2204ee98f6874e0556099d0c8.jpg)

### Eyes whitening

Makes the look more expressive by whitening the eyes.

* `Eyes.whitening(n)` - sets the sclera whitening strength from 0 to 1.

- config.js
- Java
- Swift
- JavaScript

```
/* Feel free to add your custom code below */
Eyes.whitening(1)
```

```
// Effect mCurrentEffect = ...
mCurrentEffect.evalJs("Eyes.whitening(1)", null);
```

```
// var currentEffect: BNBEffect = ...
currentEffect?.evalJs("Eyes.whitening(1)", resultCallback: nil)
```

```
await effect.evalJs("Eyes.whitening(1)")
```

**Preview**

![Right image compare](/far-sdk/assets/images/original-8823b66ca7c9e5a54656f6ee9b77c29f.jpg)![Left image compare](/far-sdk/assets/images/EyesWhitening-53c06893890dff12356d60e292ee7b23.jpg)

## Eye bags removal

Removes eye bags. Works in offline mode only (for photo or video processing).

info

Requires the [Eye bags removal](/far-sdk/tutorials/capabilities/sdk_features.md#face-ar-sdk-neural-network-features) Add-On.

* `EyeBagsRemoval.enable()` - enables eye bags removal,
* `EyeBagsRemoval.disable()` - disables eye bags removal.

- config.js
- Java
- Swift

```
/* Feel free to add your custom code below */

// Enable eye bags removal
EyeBagsRemoval.enable()

// Disable eye bags removal
EyeBagsRemoval.disable()
```

```
// Effect mCurrentEffect = ...

// Enable eye bags removal
mCurrentEffect.evalJs("EyeBagsRemoval.enable()", null);

// Disable eye bags removal
mCurrentEffect.evalJs("EyeBagsRemoval.disable()", null);
```

```
// var currentEffect: BNBEffect = ...

// Enable eye bags removal
currentEffect?.evalJs("EyeBagsRemoval.enable()", resultCallback: nil)

// Disable eye bags removal
currentEffect?.evalJs("EyeBagsRemoval.disable()", resultCallback: nil)
```

**Preview**

![Right image compare](/far-sdk/assets/images/original-8823b66ca7c9e5a54656f6ee9b77c29f.jpg)![Left image compare](/far-sdk/assets/images/EyeBagsRemoval-d7c8bc7aee12c5f8af08a777ff5ac0c6.jpg)
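Note that there is no single documented call that resets every Beauty API feature at once; each module above has its own `clear()` or a zero value. A `config.js` sketch that resets them all, using only the calls documented on this page (`clearAllBeauty` itself is a hypothetical helper name):

```
// Reset every Beauty API module documented above.
function clearAllBeauty() {
    Teeth.whitening(0)        // no dedicated clear is documented; 0 disables it
    FaceMorph.clear()
    Filter.clear()
    Skin.clear()              // clears both skin color and softening
    Background.clear()
    Hair.clear()              // clears color, gradient, and strands
    Eyes.clear()              // clears color, flare, and whitening
    EyeBagsRemoval.disable()
}
```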
---

# Virtual Makeup API

danger

**This feature is deprecated**. [We recommend using Prefabs](/far-sdk/effects/prefabs/overview.md)

Banuba provides the Makeup API, designed to help you integrate augmented reality beauty try-on features into your app. The AR makeup features fit e-commerce try-on apps, makeovers, selfie editors, and portrait retouching software. It aims to overlay realistic makeup onto the face to showcase a product or let users change their appearance.

[Download example](pathname:///generated/effects/Makeup.zip)

tip

[Read more about how to use and combine](/far-sdk/effects/makeup_deprecated/makeup_usage.md) Makeup API and Face Beauty API features.

The Makeup module lets you enhance the face with the following built-in features:

## Face Makeup

Sets a texture as composite makeup (i.e. all-in-one: eyelashes, shadows, eyeliner, etc.).

* `Makeup.set("makeup_texture.png")` - sets an image file as the makeup texture. The file should be placed in the effect's folder.
  * Supported formats: `.jpeg`, `.jpg`, `.png`, `.ktx`, `.gif`.

- config.js
- Java
- Swift
- JavaScript

```
/* Feel free to add your custom code below */
Makeup.set("example_makeup.png")
```

```
// Effect mCurrentEffect = ...
mCurrentEffect.evalJs("Makeup.set('example_makeup.png')", null);
```

```
// var currentEffect: BNBEffect = ...
currentEffect?.evalJs("Makeup.set('example_makeup.png')", resultCallback: nil)
```

```
await effect.evalJs("Makeup.set('example_makeup.png')")
```

**Preview**

![Right image compare](/far-sdk/assets/images/original-8823b66ca7c9e5a54656f6ee9b77c29f.jpg)![Left image compare](/far-sdk/assets/images/Makeup-56cc767d4fda7de0c6c77d44b779ac40.jpg)

### Highlighting

Sets the highlighter color.

* `Makeup.highlighter("R G B A")` - sets the highlighter color in R G B A format (separated with spaces). Each value should be in a range from 0 to 1 (decimals included).

- config.js
- Java
- Swift
- JavaScript

```
/* Feel free to add your custom code below */
Makeup.highlighter("0.75 0.74 0.74 0.4")
```

```
// Effect mCurrentEffect = ...
mCurrentEffect.evalJs("Makeup.highlighter('0.75 0.74 0.74 0.4')", null);
```

```
// var currentEffect: BNBEffect = ...
currentEffect?.evalJs("Makeup.highlighter('0.75 0.74 0.74 0.4')", resultCallback: nil)
```

```
await effect.evalJs("Makeup.highlighter('0.75 0.74 0.74 0.4')")
```

**Preview**

![Right image compare](/far-sdk/assets/images/original-8823b66ca7c9e5a54656f6ee9b77c29f.jpg)![Left image compare](/far-sdk/assets/images/highlighter-a09bcfd376fe0db096ec2278c36bac0f.jpg)

**Set a custom highlighter texture**

* `Makeup.highlighter("highlighter.png")` - sets an image file as the highlighter texture. The file should be placed in the effect's folder.
  * Supported formats: `.jpeg`, `.jpg`, `.png`, `.ktx`, `.gif`.

- config.js
- Java
- Swift
- JavaScript

```
/* Feel free to add your custom code below */
Makeup.highlighter("highlighter.png")
```

```
// Effect mCurrentEffect = ...
mCurrentEffect.evalJs("Makeup.highlighter('highlighter.png')", null);
```

```
// var currentEffect: BNBEffect = ...
currentEffect?.evalJs("Makeup.highlighter('highlighter.png')", resultCallback: nil)
```

```
await effect.evalJs("Makeup.highlighter('highlighter.png')")
```

### Contouring

Sets the contour color.

* `Makeup.contour("R G B A")` - sets the contour color in R G B A format (separated with spaces). Each value should be in a range from 0 to 1 (decimals included).

- config.js
- Java
- Swift
- JavaScript

```
/* Feel free to add your custom code below */
Makeup.contour("0.3 0.1 0.1 0.6")
```

```
// Effect mCurrentEffect = ...
mCurrentEffect.evalJs("Makeup.contour('0.3 0.1 0.1 0.6')", null);
```

```
// var currentEffect: BNBEffect = ...
currentEffect?.evalJs("Makeup.contour('0.3 0.1 0.1 0.6')", resultCallback: nil)
```

```
await effect.evalJs("Makeup.contour('0.3 0.1 0.1 0.6')")
```

**Preview**

![Right image compare](/far-sdk/assets/images/original-8823b66ca7c9e5a54656f6ee9b77c29f.jpg)![Left image compare](/far-sdk/assets/images/contour-59178fb2d6a2d71b883e26139a6f65aa.jpg)

**Set a custom contour texture**

* `Makeup.contour("contour.png")` - sets an image file as the contour texture. The file should be placed in the effect's folder.
  * Supported formats: `.jpeg`, `.jpg`, `.png`, `.ktx`, `.gif`.
- config.js
- Java
- Swift
- JavaScript

```
/* Feel free to add your custom code below */
Makeup.contour("contour.png")
```

```
// Effect mCurrentEffect = ...
mCurrentEffect.evalJs("Makeup.contour('contour.png')", null);
```

```
// var currentEffect: BNBEffect = ...
currentEffect?.evalJs("Makeup.contour('contour.png')", resultCallback: nil)
```

```
await effect.evalJs("Makeup.contour('contour.png')")
```

### Foundation

Foundation is a combination of two [Beauty API](/far-sdk/effects/makeup_deprecated/face_beauty.md) features: [`Skin.color()`](/far-sdk/effects/makeup_deprecated/face_beauty.md#skin-color) and [`Skin.softening()`](/far-sdk/effects/makeup_deprecated/face_beauty.md#skin-smoothing-skin-softening).

info

Requires the [Skin segmentation](/far-sdk/tutorials/capabilities/sdk_features.md#face-ar-sdk-neural-network-features) Neural Network.

- config.js
- Java
- Swift
- JavaScript

```
/* Feel free to add your custom code below */

// set skin color
Skin.color("0.73 0.39 0.08 0.3")

// set softening strength (skin smoothing)
Skin.softening(1)
```

```
// Effect mCurrentEffect = ...

// set skin color
mCurrentEffect.evalJs("Skin.color('0.73 0.39 0.08 0.3')", null);

// set softening strength (skin smoothing)
mCurrentEffect.evalJs("Skin.softening(1)", null);
```

```
// var currentEffect: BNBEffect = ...

// set skin color
currentEffect?.evalJs("Skin.color('0.73 0.39 0.08 0.3')", resultCallback: nil)

// set softening strength (skin smoothing)
currentEffect?.evalJs("Skin.softening(1)", resultCallback: nil)
```

```
// set skin color
await effect.evalJs("Skin.color('0.73 0.39 0.08 0.3')")

// set softening strength (skin smoothing)
await effect.evalJs("Skin.softening(1)")
```

**Preview**

![Right image compare](/far-sdk/assets/images/original-8823b66ca7c9e5a54656f6ee9b77c29f.jpg)![Left image compare](/far-sdk/assets/images/foundation-f79656a2dea0a0cb33ae9954459fc02c.jpg)

### Blush

Sets the blush color.

* `Makeup.blushes("R G B A")` - sets the blush color in R G B A format (separated with spaces). Each value should be in a range from 0 to 1 (decimals included).

- config.js
- Java
- Swift
- JavaScript

```
/* Feel free to add your custom code below */
Makeup.blushes("0.7 0.1 0.2 0.5")
```

```
// Effect mCurrentEffect = ...
mCurrentEffect.evalJs("Makeup.blushes('0.7 0.1 0.2 0.5')", null);
```

```
// var currentEffect: BNBEffect = ...
currentEffect?.evalJs("Makeup.blushes('0.7 0.1 0.2 0.5')", resultCallback: nil)
```

```
await effect.evalJs("Makeup.blushes('0.7 0.1 0.2 0.5')")
```

**Preview**

![Right image compare](/far-sdk/assets/images/original-8823b66ca7c9e5a54656f6ee9b77c29f.jpg)![Left image compare](/far-sdk/assets/images/blush-0e2e26991bbfbf30ee310457cbc0fa3a.jpg)

**Set a custom blush texture**

* `Makeup.blushes("blushes.png")` - sets an image file as the blush texture. The file should be placed in the effect's folder.
  * Supported formats: `.jpeg`, `.jpg`, `.png`, `.ktx`, `.gif`.

- config.js
- Java
- Swift
- JavaScript

```
/* Feel free to add your custom code below */
Makeup.blushes("blush.png")
```

```
// Effect mCurrentEffect = ...
mCurrentEffect.evalJs("Makeup.blushes('blush.png')", null);
```

```
// var currentEffect: BNBEffect = ...
currentEffect?.evalJs("Makeup.blushes('blush.png')", resultCallback: nil)
```

```
await effect.evalJs("Makeup.blushes('blush.png')")
```

### Softlight

Highlights the face like a directional flashlight.
* `Softlight.strength(n)` - changes the softlight intensity, where **n** is any value from 0 to 1 (decimals included); values > 1 may also be passed.
* `Softlight.clear()` - resets the softlight; equivalent to `Softlight.strength(0)`.

- config.js
- Java
- Swift
- JavaScript

```
// Set softlight strength
Softlight.strength(1)

// Reset softlight
Softlight.clear()
```

```
// Effect mCurrentEffect = ...

// Set softlight strength
mCurrentEffect.evalJs("Softlight.strength(1)", null);

// Reset softlight
mCurrentEffect.evalJs("Softlight.clear()", null);
```

```
// var currentEffect: BNBEffect = ...

// Set softlight strength
currentEffect?.evalJs("Softlight.strength(1)", resultCallback: nil)

// Reset softlight
currentEffect?.evalJs("Softlight.clear()", resultCallback: nil)
```

```
// Set softlight strength
await effect.evalJs("Softlight.strength(1)")

// Reset softlight
await effect.evalJs("Softlight.clear()")
```

**Preview**

![Right image compare](/far-sdk/assets/images/original-8823b66ca7c9e5a54656f6ee9b77c29f.jpg)![Left image compare](/far-sdk/assets/images/Softlight-cbb459a901f91b9511924ae35b82dec7.jpg)

## Brows makeup

Sets the brows color.

* `Brows.color("R G B A")` - sets the brows color in R G B A format (separated with spaces). Each value should be in a range from 0 to 1 (decimals included).
* `Brows.clear()` - clears the brows color.

- config.js
- Java
- Swift
- JavaScript

```
// Set brows color
Brows.color("0.172 0.125 0.105 0.732")

// Reset brows color
Brows.clear()
```

```
// Effect mCurrentEffect = ...

// Set brows color
mCurrentEffect.evalJs("Brows.color('0.172 0.125 0.105 0.732')", null);

// Reset brows color
mCurrentEffect.evalJs("Brows.clear()", null);
```

```
// var currentEffect: BNBEffect = ...

// Set brows color
currentEffect?.evalJs("Brows.color('0.172 0.125 0.105 0.732')", resultCallback: nil)

// Reset brows color
currentEffect?.evalJs("Brows.clear()", resultCallback: nil)
```

```
// Set brows color
await effect.evalJs("Brows.color('0.172 0.125 0.105 0.732')")

// Reset brows color
await effect.evalJs("Brows.clear()")
```

**Preview**

![Right image compare](/far-sdk/assets/images/original-8823b66ca7c9e5a54656f6ee9b77c29f.jpg)![Left image compare](/far-sdk/assets/images/brows-2703f25e2ff6933315668262ab5f9bd0.jpg)

## Eye makeup

### Eyeliner

Sets the eyeliner color.

* `Makeup.eyeliner("R G B A")` - sets the eyeliner color in R G B A format (separated with spaces). Each value should be in a range from 0 to 1 (decimals included).

- config.js
- Java
- Swift
- JavaScript

```
/* Feel free to add your custom code below */
Makeup.eyeliner("0 0 0")
```

```
// Effect mCurrentEffect = ...
mCurrentEffect.evalJs("Makeup.eyeliner('0 0 0')", null);
```

```
// var currentEffect: BNBEffect = ...
currentEffect?.evalJs("Makeup.eyeliner('0 0 0')", resultCallback: nil)
```

```
await effect.evalJs("Makeup.eyeliner('0 0 0')")
```

**Preview**

![Right image compare](/far-sdk/assets/images/original-8823b66ca7c9e5a54656f6ee9b77c29f.jpg)![Left image compare](/far-sdk/assets/images/eyeliner-92fa652bb6068a9432490887ad80f046.jpg)

**Set a custom eyeliner texture**

* `Makeup.eyeliner("eyeliner.png")` - sets an image file as the eyeliner texture. The file should be placed in the effect's folder.
  * Supported formats: `.jpeg`, `.jpg`, `.png`, `.ktx`, `.gif`.
- config.js
- Java
- Swift
- JavaScript

```
/* Feel free to add your custom code below */
Makeup.eyeliner("eyeliner.png")
```

```
// Effect mCurrentEffect = ...
mCurrentEffect.evalJs("Makeup.eyeliner('eyeliner.png')", null);
```

```
// var currentEffect: BNBEffect = ...
currentEffect?.evalJs("Makeup.eyeliner('eyeliner.png')", resultCallback: nil)
```

```
await effect.evalJs("Makeup.eyeliner('eyeliner.png')")
```

### Eyeshadow

Sets the eyeshadow color.

* `Makeup.eyeshadow("R G B A")` - sets the eyeshadow color in R G B A format (separated with spaces). Each value should be in a range from 0 to 1 (decimals included).

- config.js
- Java
- Swift
- JavaScript

```
/* Feel free to add your custom code below */
Makeup.eyeshadow("0.6 0.5 1 0.6")
```

```
// Effect mCurrentEffect = ...
mCurrentEffect.evalJs("Makeup.eyeshadow('0.6 0.5 1 0.6')", null);
```

```
// var currentEffect: BNBEffect = ...
currentEffect?.evalJs("Makeup.eyeshadow('0.6 0.5 1 0.6')", resultCallback: nil)
```

```
await effect.evalJs("Makeup.eyeshadow('0.6 0.5 1 0.6')")
```

**Preview**

![Right image compare](/far-sdk/assets/images/original-8823b66ca7c9e5a54656f6ee9b77c29f.jpg)![Left image compare](/far-sdk/assets/images/eyeshadow-2079f8ae8253e3db7c06a5d04b057109.jpg)

**Set a custom eyeshadow texture**

* `Makeup.eyeshadow("eyeshadow.png")` - sets an image file as the eyeshadow texture. The file should be placed in the effect's folder.
  * Supported formats: `.jpeg`, `.jpg`, `.png`, `.ktx`, `.gif`.

- config.js
- Java
- Swift
- JavaScript

```
/* Feel free to add your custom code below */
Makeup.eyeshadow("eyeshadow.png")
```

```
// Effect mCurrentEffect = ...
mCurrentEffect.evalJs("Makeup.eyeshadow('eyeshadow.png')", null);
```

```
// var currentEffect: BNBEffect = ...
currentEffect?.evalJs("Makeup.eyeshadow('eyeshadow.png')", resultCallback: nil)
```

```
await effect.evalJs("Makeup.eyeshadow('eyeshadow.png')")
```

### Eyelashes

Sets the eyelashes color.

* `Makeup.lashes("R G B A")` - sets the eyelashes color in R G B A format (separated with spaces). Each value should be in a range from 0 to 1 (decimals included).

- config.js
- Java
- Swift
- JavaScript

```
/* Feel free to add your custom code below */
Makeup.lashes("0 0 0")
```

```
// Effect mCurrentEffect = ...
mCurrentEffect.evalJs("Makeup.lashes('0 0 0')", null);
```

```
// var currentEffect: BNBEffect = ...
currentEffect?.evalJs("Makeup.lashes('0 0 0')", resultCallback: nil)
```

```
await effect.evalJs("Makeup.lashes('0 0 0')")
```

**Preview**

![Right image compare](/far-sdk/assets/images/original-8823b66ca7c9e5a54656f6ee9b77c29f.jpg)![Left image compare](/far-sdk/assets/images/eyelashes-5ebbf4d5d38bba778b494ff5d793a159.jpg)

**Set a custom eyelashes texture**

* `Makeup.lashes("eyelashes.png")` - sets an image file as the eyelashes texture. The file should be placed in the effect's folder.
  * Supported formats: `.jpeg`, `.jpg`, `.png`, `.ktx`, `.gif`.

- config.js
- Java
- Swift
- JavaScript

```
/* Feel free to add your custom code below */
Makeup.lashes("eyelashes.png")
```

```
// Effect mCurrentEffect = ...
mCurrentEffect.evalJs("Makeup.lashes('eyelashes.png')", null);
```

```
// var currentEffect: BNBEffect = ...
currentEffect?.evalJs("Makeup.lashes('eyelashes.png')", resultCallback: nil)
```

```
await effect.evalJs("Makeup.lashes('eyelashes.png')")
```
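The eye-makeup calls above also combine with the Beauty API's `Eyes` methods documented earlier. A minimal `config.js` sketch of a complete eye look, using only calls from these two pages; the color values are illustrative:

```
// Sketch: a complete eye look built from documented calls.
Makeup.eyeliner("0 0 0")           // black liner
Makeup.lashes("0 0 0")             // black lashes
Makeup.eyeshadow("0.6 0.5 1 0.6")  // violet shadow
Eyes.whitening(0.5)                // brighten the sclera (Beauty API)
Eyes.flare(0.7)                    // add a highlight (Beauty API)
```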
### Makeup.clear

A global method for Makeup. Clears all Makeup features that have been set.

- config.js
- Java
- Swift
- JavaScript

```
Makeup.clear()
```

```
mCurrentEffect.evalJs("Makeup.clear()", null);
```

```
currentEffect?.evalJs("Makeup.clear()", resultCallback: nil)
```

```
await effect.evalJs("Makeup.clear()")
```
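Since `Makeup.clear()` wipes everything set through `Makeup`, a clear-then-apply pattern is a natural way to switch between presets. A `config.js` sketch with hypothetical look names; the color values mirror the examples above:

```
// Hypothetical preset switcher: each look is a function, and
// Makeup.clear() removes the previous look before the next applies.
var looks = {
    day: function () {
        Makeup.blushes("0.7 0.1 0.2 0.3")
        Makeup.eyeliner("0 0 0")
    },
    evening: function () {
        Makeup.eyeshadow("0.6 0.5 1 0.6")
        Makeup.highlighter("0.75 0.74 0.74 0.4")
    }
};

function applyLook(name) {
    Makeup.clear()  // reset whatever was set before
    var look = looks[name];
    if (look) look();
}

applyLook("evening")
```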
## Lipstick

### Matt

Sets the matte lips color.

* `Lips.matt("R G B A")` - sets the matte lips color in R G B A format (separated with spaces). Each value should be in a range from 0 to 1 (decimals included).
* `Lips.clear()` - clears the lips color.

- config.js
- Java
- Swift
- JavaScript

```
// Set lips color
Lips.matt("0.85 0.43 0.5 0.8")

// Reset lips color
Lips.clear()
```

```
// Effect mCurrentEffect = ...

// Set lips color
mCurrentEffect.evalJs("Lips.matt('0.85 0.43 0.5 0.8')", null);

// Reset lips color
mCurrentEffect.evalJs("Lips.clear()", null);
```

```
// var currentEffect: BNBEffect = ...

// Set lips color
currentEffect?.evalJs("Lips.matt('0.85 0.43 0.5 0.8')", resultCallback: nil)

// Reset lips color
currentEffect?.evalJs("Lips.clear()", resultCallback: nil)
```

```
// Set lips color
await effect.evalJs("Lips.matt('0.85 0.43 0.5 0.8')")

// Reset lips color
await effect.evalJs("Lips.clear()")
```

**Preview**

![Right image compare](/far-sdk/assets/images/original-8823b66ca7c9e5a54656f6ee9b77c29f.jpg)![Left image compare](/far-sdk/assets/images/lips_matt-01491f0b700df825a8a4fd2170a82261.jpg)

### Shiny

Sets the shiny lips color.

* `Lips.shiny("R G B A")` - sets the shiny lips color in R G B A format (separated with spaces). Each value should be in a range from 0 to 1 (decimals included).
* `Lips.clear()` - clears the lips color.

- config.js
- Java
- Swift
- JavaScript

```
// Set lips color
Lips.shiny("1 0 0.49 1")

// Reset lips color
Lips.clear()
```

```
// Effect mCurrentEffect = ...

// Set lips color
mCurrentEffect.evalJs("Lips.shiny('1 0 0.49 1')", null);

// Reset lips color
mCurrentEffect.evalJs("Lips.clear()", null);
```

```
// var currentEffect: BNBEffect = ...

// Set lips color
currentEffect?.evalJs("Lips.shiny('1 0 0.49 1')", resultCallback: nil)

// Reset lips color
currentEffect?.evalJs("Lips.clear()", resultCallback: nil)
```

```
// Set lips color
await effect.evalJs("Lips.shiny('1 0 0.49 1')")

// Reset lips color
await effect.evalJs("Lips.clear()")
```

**Preview**

![Right image compare](/far-sdk/assets/images/original-8823b66ca7c9e5a54656f6ee9b77c29f.jpg)![Left image compare](/far-sdk/assets/images/lips_shiny-815cce33a3d095ae1ebbb034b6bfd7f9.jpg)

### Glitter

Sets the lips glitter color.

* `Lips.glitter("R G B A")` - sets the lips glitter color in R G B A format (separated with spaces). Each value should be in a range from 0 to 1 (decimals included).
* `Lips.clear()` - clears the lips color.

- config.js
- Java
- Swift
- JavaScript

```
// Set lips color
Lips.glitter("0.552 0 0 1")

// Reset lips color
Lips.clear()
```

```
// Effect mCurrentEffect = ...

// Set lips color
mCurrentEffect.evalJs("Lips.glitter('0.552 0 0 1')", null);

// Reset lips color
mCurrentEffect.evalJs("Lips.clear()", null);
```

```
// var currentEffect: BNBEffect = ...

// Set lips color
currentEffect?.evalJs("Lips.glitter('0.552 0 0 1')", resultCallback: nil)

// Reset lips color
currentEffect?.evalJs("Lips.clear()", resultCallback: nil)
```

```
// Set lips color
await effect.evalJs("Lips.glitter('0.552 0 0 1')")

// Reset lips color
await effect.evalJs("Lips.clear()")
```

**Preview**

![Right image compare](/far-sdk/assets/images/original-8823b66ca7c9e5a54656f6ee9b77c29f.jpg)![Left image compare](/far-sdk/assets/images/lips_glitter-3a5f273825b856d74fa6567e3a99b915.jpg)

### Extended lips options

You can set extended lips parameters that are usually pre-defined in the Matte, Shiny, or Glitter lipsticks.

**Common options**

* `Lips.color("R G B A")` - sets the lips color in R G B A format (separated with spaces). Each value should be in a range from 0 to 1 (decimals included).
* `Lips.brightness(n)` - changes the lips brightness intensity, where **n** is any value from 0 to 2 (decimals included); 0 stands for minimal brightness (black), 1 for standard brightness. Values > 2 may also be passed.

**Shine options**

* `Lips.saturation(n)` - changes the shine saturation intensity, where **n** is any value from 0 to 1 (decimals included).
* `Lips.shineIntensity(n)` - changes the shine intensity, where **n** is any value from 0 to 2 (decimals included). Values > 2 may also be passed.
* `Lips.shineBleeding(n)` - changes the shine bleeding strength, where **n** is any value from 0 to 1 (decimals included). Values > 1 may also be passed.
* `Lips.shineScale(n)` - changes the shine scale, where **n** is any value from 0 to 1 (decimals included); 0 stands for minimal scale (shine disabled), 1 for standard scale. Values > 1 may also be passed.

**Glitter options**

* `Lips.glitterGrain(n)` - changes the glitter grain strength, where **n** is any value from 0 to 2 (decimals included). Values > 2 may also be passed.
* `Lips.glitterIntensity(n)` - changes the glitter intensity, where **n** is any value from 0 to 2 (decimals included). Values > 2 may also be passed.
* `Lips.glitterBleeding(n)` - changes the glitter bleeding strength, where **n** is any value from 0 to 2 (decimals included). Values > 2 may also be passed.

- config.js
- Java
- Swift
- JavaScript

```
/* Feel free to add your custom code below */
Lips.color("1 0 0 1")
Lips.brightness(1)
Lips.saturation(1)
Lips.shineIntensity(2)
Lips.shineBleeding(1)
Lips.shineScale(1)
Lips.glitterGrain(1)
Lips.glitterIntensity(1)
Lips.glitterBleeding(1)
```

```
// Effect mCurrentEffect = ...
mCurrentEffect.evalJs("Lips.color('1 0 0 1')", null);
mCurrentEffect.evalJs("Lips.brightness(1)", null);
mCurrentEffect.evalJs("Lips.saturation(1)", null);
mCurrentEffect.evalJs("Lips.shineIntensity(2)", null);
mCurrentEffect.evalJs("Lips.shineBleeding(1)", null);
mCurrentEffect.evalJs("Lips.shineScale(1)", null);
mCurrentEffect.evalJs("Lips.glitterGrain(1)", null);
mCurrentEffect.evalJs("Lips.glitterIntensity(1)", null);
mCurrentEffect.evalJs("Lips.glitterBleeding(1)", null);
```
currentEffect?.evalJs("Lips.color('1 0 0 1')", resultCallback: nil) currentEffect?.evalJs("Lips.brightness(1)", resultCallback: nil) currentEffect?.evalJs("Lips.saturation(1)", resultCallback: nil) currentEffect?.evalJs("Lips.shineIntensity(2)", resultCallback: nil) currentEffect?.evalJs("Lips.shineBleeding(1)", resultCallback: nil) currentEffect?.evalJs("Lips.shineScale(1)", resultCallback: nil) currentEffect?.evalJs("Lips.glitterGrain(1)", resultCallback: nil) currentEffect?.evalJs("Lips.glitterIntensity(1)", resultCallback: nil) currentEffect?.evalJs("Lips.glitterBleeding(1)", resultCallback: nil) ``` ``` await effect.evalJs("Lips.color('1 0 0 1')") await effect.evalJs("Lips.brightness(1)") await effect.evalJs("Lips.saturation(1)") await effect.evalJs("Lips.shineIntensity(2)") await effect.evalJs("Lips.shineBleeding(1)") await effect.evalJs("Lips.shineScale(1)") await effect.evalJs("Lips.glitterGrain(1)") await effect.evalJs("Lips.glitterIntensity(1)") await effect.evalJs("Lips.glitterBleeding(1)") ``` ## Lips liner[​](#lips-liner "Direct link to Lips liner") Set lips liner color. * `LipsLiner.color("R G B A")` - set lips liner color in R G B A format (separated with space). Each value should be in a range from 0 to 1 (including decimal). * `LipsLiner.clear()`- clears lips liner color. - config.js - Java - Swift - JavaScript ``` // Set lips color LipsLiner.color("1.0 0.7 0.8 0.8") // Reset lips color LipsLiner.clear() ``` ``` // Effect mCurrentEffect = ... // Set lips color mCurrentEffect.evalJs("LipsLiner.color('1.0 0.7 0.8 0.8')", null); // Reset lips color mCurrentEffect.evalJs("LipsLiner.clear()", null); ``` ``` // var currentEffect: BNBEffect = ... // Set lips color currentEffect?.evalJs("LipsLiner.color('1.0 0.7 0.8 0.8')", resultCallback: nil) // Reset lips color currentEffect?.evalJs("LipsLiner.clear()", resultCallback: nil) ``` ``` // Set lips color await effect.evalJs("LipsLiner.color('1.0 0.7 0.8 0.8')") // Reset lips color await effect.evalJs("LipsLiner.clear()") ``` **Preview** ![Right image compare](/far-sdk/assets/images/original-8823b66ca7c9e5a54656f6ee9b77c29f.jpg)![Left image compare](/far-sdk/assets/images/LipsLiner-bd67dbd8f788ec675fc8d6d0185170b0.jpg) Drag --- # How to combine the Makeup API and Beauty API features danger **This feature is deprecated**. [We recommend you to use Prefabs](/far-sdk/effects/prefabs/overview.md) As you can notice, both the **Makeup API** and [**Beauty API**](/far-sdk/effects/makeup_deprecated/face_beauty.md) are using the same Face Filter. As a result, it is possible to combine their features in your application. You can consume the effect API in several ways. ## Via effect's *config.js*[​](#via-effects-configjs "Direct link to via-effects-configjs") Add to the bottom of the `Makeup/config.js` file: ``` /* Feel free to add your custom code below */ Lips.matt("0.85 0.23 0.2 0.8") ``` ## From application[​](#from-application "Direct link to From application") In your app code use `evalJs` from Banuba SDK: * Java * Swift * JavaScript ``` // Effect mCurrentEffect = ... // set lips matt color mCurrentEffect.evalJs("Lips.matt('0.85 0.23 0.2 0.8')", null); ``` ``` // var currentEffect: BNBEffect = ... 
```
// var currentEffect: BNBEffect = ...

// set lips matt color
currentEffect?.evalJs("Lips.matt('0.85 0.23 0.2 0.8')", resultCallback: nil)
```

```
// set lips matt color
await effect.evalJs("Lips.matt('0.85 0.23 0.2 0.8')")
```

## Combine features[​](#combine-features "Direct link to Combine features")

To combine the Makeup API features, call the desired methods in your app or in `config.js` as shown above.

**Example:**

* config.js
* Java
* Swift
* JavaScript

```
/* Feel free to add your custom code below */

Lips.matt("0.85 0.23 0.2 0.8")
Makeup.eyeshadow("0.6 0.5 1 0.6")
Makeup.contour("0.3 0.1 0.1 0.2")
```

```
// Effect mCurrentEffect = ...

mCurrentEffect.evalJs("Lips.matt('0.85 0.23 0.2 0.8')", null);
mCurrentEffect.evalJs("Makeup.eyeshadow('0.6 0.5 1 0.6')", null);
mCurrentEffect.evalJs("Makeup.contour('0.3 0.1 0.1 0.2')", null);
```

```
// var currentEffect: BNBEffect = ...

currentEffect?.evalJs("Lips.matt('0.85 0.23 0.2 0.8')", resultCallback: nil)
currentEffect?.evalJs("Makeup.eyeshadow('0.6 0.5 1 0.6')", resultCallback: nil)
currentEffect?.evalJs("Makeup.contour('0.3 0.1 0.1 0.2')", resultCallback: nil)
```

```
await effect.evalJs("Lips.matt('0.85 0.23 0.2 0.8')")
await effect.evalJs("Makeup.eyeshadow('0.6 0.5 1 0.6')")
await effect.evalJs("Makeup.contour('0.3 0.1 0.1 0.2')")
```

**Preview**

![Right image compare](/far-sdk/assets/images/original-8823b66ca7c9e5a54656f6ee9b77c29f.jpg)![Left image compare](/far-sdk/assets/images/multiple_features-9024a99b4558c5965d099d96f0b3fdb4.jpg)

---

tip

You may also create your own effects with [**Banuba Studio**](https://studio.banuba.com/) or buy some from the [**Banuba Asset Store**](https://assetstore.banuba.net/).

# Effect structure

Every Banuba effect is a folder containing files with special meaning. This is what a typical effect looks like:

```
effect_folder/
|-- config.json
|-- config.js
|-- images/
|   |
|   ...
|-- shaders/
|   |
|   ...
|-- videos/
|   |
|   ...
...
```

Only `config.json` is mandatory (see below), so to get started, create a folder with one blank file of that name. The other subfolders have no special meaning and exist only to group related files, which are usually referenced from `config.json` or `config.js`.

## config.json[​](#configjson "Direct link to config.json")

This is the basic file that describes the effect itself. It has the following format:

```
{
  "scene": "effect name",
  "version": "2.0.0",
  "camera": {}
}
```

Where

* `scene` - the name of your effect.
* `version` - the version of this configuration file. Always set it to `"2.0.0"`. The previous version was designed for complex legacy effects.
* `camera` - tells the SDK that the camera feed will be rendered on the screen.

Each effect can be complemented by other features ("prefabs" in our terminology).

tip

**Learn more about prefabs** [here](/far-sdk/effects/prefabs/overview.md)

A basic complete example looks like this (it renders black eyeliner on a face):

```
{
  "scene": "Retouch example",
  "version": "2.0.0",
  "camera": {},
  "faces": [
    {
      "makeup_eyeliner": {
        "color": "0.0 0.0 0.0",
        "finish": "matte_liquid",
        "coverage": "high"
      }
    }
  ]
}
```

Another good example to start with:

[Download simple 3D model sample](pathname:///generated/effects/simple_hat_v2.zip)

## config.js[​](#configjs "Direct link to config.js")

This file may contain business logic written in JavaScript. Refer to the other documentation pages in this section. The most important scripting-related feature is probably [Virtual background](/far-sdk/effects/virtual_background.md).
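For illustration, here is a minimal sketch of such business logic. It uses only the `Lips` scripting calls documented above; the helper function and its name are hypothetical, not a prescribed template:

```
/* Feel free to add your custom code below */

// Apply a matte lip tint as soon as the effect loads
Lips.matt("0.85 0.23 0.2 0.8")

// Hypothetical helper: the app could switch looks at runtime by calling
// effect.evalJs("setLook('glitter')") or effect.evalJs("setLook('matte')")
function setLook(name) {
    Lips.clear()
    if (name === "glitter") {
        Lips.glitter("0.552 0 0 1")
    } else {
        Lips.matt("0.85 0.23 0.2 0.8")
    }
}
```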
---

# On Face Prefabs

## GLTF[​](#gltf "Direct link to GLTF")

```
"faces": [
  {
    "gltf": {
      "@mesh": "path/to/gltf/model",
      "rotation": "0 0 0",
      "scale": "1.0 1.0 1.0",
      "translation": "0.0 0.0 0.0",
      "animation": {
        "name": "Animation 1",
        "mode": "loop",
        "seek_position": 100
      },
      "@use_physics": false,
      "gravity": "0.0 -1000.0 0.0",
      "bones": {
        "bone_1": 1.0,
        "bone_2": 0.0,
        "bone_3": 1.0,
        "bone_4": 0.0,
        "bone_5": 1.0
      },
      "colliders": [
        { "center": "0. 0. 0.", "radius": 100.0 },
        { "center": "10. 110. 420.", "radius": 650.0 },
        { "center": "14. 300. 156.", "radius": 10.0 }
      ],
      "constraints": [
        { "from": "bone_1", "to": "bone_2", "distance": 10.0 },
        { "from": "bone_2", "to": "bone_3", "distance": 50.0 },
        { "from": "bone_3", "to": "bone_4", "distance": 30.0 },
        { "from": "bone_4", "to": "bone_5", "distance": 60.0 }
      ],
      "damping": 0.99
    }
    // ...
  }
  // ...
]
```

Place a 3D model in the GLTF format on the face.

| Parameter | Description | Optional | Default Value |
| :--- | :--- | :---: | :---: |
| `@mesh` | Path to a GLTF model. Note the leading `@` in the parameter, it is mandatory. Supported formats: `.glb`, `.gltf`. Note that all included files, such as the GLTF model, shaders, textures, and sounds (if there are any), must be located in one folder. | No | - |
| `rotation` | Rotation angles (in degrees) over the *X*, *Y*, *Z* axes. Note the default value. | Yes | `"-90 0 0"` |
| `scale` | Scale along the *X*, *Y*, *Z* axes. | Yes | `"1 1 1"` |
| `translation` | Translate the model along the *X*, *Y*, *Z* axes (in millimetres). | Yes | `"0 0 0"` |
| `animation` | Play an animation from the GLTF file. **All keys are optional** (in most cases an empty object is enough to play the default animation). *Parameters:* `name` - select an animation from the file by name; `seek_position` - where to start the animation relative to the start position (in ms); `mode` - (`"off"` \| `"loop"` \| `"once"` \| `"once_reversed"` \| `"fixed"`) - how to play the animation selected by `"name"`. | Yes | `{}` |
| `@use_physics` | Load the GLTF with physics simulation. Note the leading `@` in the parameter, it is mandatory. | Yes | `false` |
| `gravity` | Sets the gravity vector along the *X*, *Y*, *Z* axes. | Yes | `"0 0 0"` |
| `bones` | Sets the bones' inverse mass: an object where each key is a bone name and the value is its inverse mass. | Yes | `[]` |
| `colliders` | Add sphere colliders for physical bones. An array of objects, where each object is a separate collider. *Parameters:* `center` - *X*, *Y*, *Z* coordinates of the center of the sphere; `radius` - float radius of the sphere. | Yes | `[]` |
| `constraints` | Add a constraint between two bones. An array of objects, where each object is one constraint. *Parameters:* `from` - "from" bone name; `to` - "to" bone name. | Yes | `[]` |
| `damping` | Damping parameter for the physics simulation; good values are usually in the \[0.9, 1.0] range. | Yes | `0.99` |

note

To create a 3D model in GLTF format for use in the effect, it is recommended to use our [head geometry](/far-sdk/assets/files/head-a7e3b48b08e13d5c85ae83246dbe64cb.glb) as a template. If the model is created without our head geometry, the scale must be set to `0.1` and the rotation set to `-90` degrees over the `X` axis in the prefab.

## Video Texture[​](#video-texture "Direct link to Video Texture")

```
"faces": [
  {
    "video_texture": {
      "@mesh": "path/to/gltf/model",
      "use_separate_alpha": true,
      "video": "path/to/video/texture",
      "alpha": "path/to/video/texture/alpha",
      "rotation": "0 0 0",
      "scale": "1.0 1.0 1.0",
      "translation": "0.0 0.0 0.0"
    }
    // ...
  }
  // ...
]
```

| Parameter | Description | Optional | Default Value |
| :--- | :--- | :---: | :---: |
| `@mesh` | Path to a GLTF model. Note the leading `@` in the parameter, it is mandatory. Supported formats: `.glb`, `.gltf`. A plane quad is used by default. | Yes | - |
| `use_separate_alpha` | If the provided video is combined (the color video on the left and its alpha on the right), this field must be set to `false`. If there are two separate videos (color and alpha), it must be set to `true`. | No | - |
| `video` | Path to a video texture file. Note that if `use_separate_alpha` is set to `false`, this video must be split in half: the left half is the color video and the right half is its alpha. Information about supported formats can be found here: [video formats](https://docs.banuba.com/far-sdk/tutorials/capabilities/technical_specification#video-formats-support). | No | - |
| `alpha` | Path to a video texture alpha file. Note that if `use_separate_alpha` is set to `true`, this video is used as the alpha for the video. Information about supported formats can be found here: [video formats](https://docs.banuba.com/far-sdk/tutorials/capabilities/technical_specification#video-formats-support). | Yes | - |
| `rotation` | Rotation angles (in degrees) over the *X*, *Y*, *Z* axes. Note the default value. | Yes | `"-90 0 0"` |
| `scale` | Scale along the *X*, *Y*, *Z* axes. | Yes | `"1 1 1"` |
| `translation` | Translate the model along the *X*, *Y*, *Z* axes (in millimetres). | Yes | `"0 0 0"` |
## Earrings[​](#earrings "Direct link to Earrings")

```
"faces": [
  {
    "earrings": {
      "@mesh_left": "path/to/left/gltf/model",
      "@mesh_right": "path/to/right/gltf/model",
      "@use_physics": true,
      "left": {
        "scale": "1 1 1",
        "rotation": "0 0 0",
        "translation": "0 0 0",
        "animation": { "name": "static", "mode": "fixed" },
        "gravity": "0.0 -1800.0 0.0",
        "damping": 0.99,
        "bones": {
          "Bone_L_1": 0.0,
          "Bone_L_2": 1.0,
          "Bone_L_3": 1.0,
          "Bone_L_4": 1.0,
          "Bone_L_5": 1.0
        }
      },
      "right": {
        "scale": "1 1 1",
        "rotation": "0 0 0",
        "translation": "0 0 0",
        "animation": { "name": "static", "mode": "fixed" },
        "gravity": "0.0 -1800.0 0.0",
        "damping": 0.99,
        "bones": {
          "Bone_R_1": 0.0,
          "Bone_R_2": 1.0,
          "Bone_R_3": 1.0,
          "Bone_R_4": 1.0,
          "Bone_R_5": 1.0
        }
      },
      "bones_in_mv_space": false
    }
    // ...
  }
  // ...
]
```

Place two earring 3D models in the GLTF format, one on each ear.

| Parameter | Description | Optional | Default Value |
| :--- | :--- | :---: | :---: |
| `@mesh_left` | Path to the left earring GLTF model. Note the leading `@` in the parameter, it is mandatory. Supported formats: `.glb`, `.gltf`. Note that all included files, such as the GLTF model, shaders, textures, and sounds (if there are any), must be located in one folder. | No | - |
| `@mesh_right` | Path to the right earring GLTF model. Note the leading `@` in the parameter, it is mandatory. Supported formats: `.glb`, `.gltf`. Note that all included files, such as the GLTF model, shaders, textures, and sounds (if there are any), must be located in one folder. | No | - |
| `@use_physics` | Load the GLTF models with physics simulation. Note the leading `@` in the parameter, it is mandatory. | Yes | `true` |
| `left` | Container of params for the left earring. | Yes | `undefined` |
| `right` | Container of params for the right earring. | Yes | `undefined` |
| `scale` | Sets the scale along the *X*, *Y*, *Z* axes. | Yes | `"1 1 1"` |
| `rotation` | Rotation angles (in degrees) over the *X*, *Y*, *Z* axes. | Yes | `"0 0 0"` |
| `translation` | Translate the model along the *X*, *Y*, *Z* axes (in millimetres). | Yes | `"0 0 0"` |
| `animation` | Play an animation from the GLTF files. **All keys are optional** (in most cases an empty object is enough to play the default animation). *Parameters:* `name` - select an animation from the file by name; `seek_position` - where to start the animation relative to the start position (in ms); `mode` - (`"off"` \| `"loop"` \| `"once"` \| `"once_reversed"` \| `"fixed"`) - how to play the animation selected by `"name"`. | Yes | `{}` |
| `gravity` | Sets the gravity vector along the *X*, *Y*, *Z* axes. | Yes | `"0 0 0"` |
| `bones` | Sets the bones' inverse mass: an object where each key is a bone name and the value is its inverse mass. | Yes | `[]` |
| `damping` | Damping parameter for the physics simulation; good values are usually in the \[0.9, 1.0] range. | Yes | `0.99` |

## Action Units[​](#action-units "Direct link to Action Units")

```
"faces": [
  {
    "action_units": {},
    // ...
  }
  // ...
]
```

Enable action units from a GLTF model.

## Eyes Whitening[​](#eyes-whitening "Direct link to Eyes Whitening")

**Usage**

```
"faces": [
  {
    "eyes_whitening": {
      "strength": 1.0
    }
    // ...
  }
  // ...
]
```

Makes the look more expressive by whitening the eyes.

| Parameter | Description | Optional | Default Value |
| :--- | :--- | :---: | :---: |
| `strength` | Eyes whitening strength. Float number in the range `[0.0, 1.0]`. | No | - |

**Preview**

![Right image compare](/far-sdk/assets/images/original-8823b66ca7c9e5a54656f6ee9b77c29f.jpg)![Left image compare](/far-sdk/assets/images/EyesWhitening-53c06893890dff12356d60e292ee7b23.jpg)

## Eyes Flare[​](#eyes-flare "Direct link to Eyes Flare")

**Usage**

```
"faces": [
  {
    "eyes_flare": {
      "strength": 1.0
    }
    // ...
  }
  // ...
]
```

Apply flare to the eyes.

| Parameter | Description | Optional | Default Value |
| :--- | :--- | :---: | :---: |
| `strength` | Flare brightness. Float number in the range `[0.0, 1.0]`. | No | - |

**Preview**

![Right image compare](/far-sdk/assets/images/original-8823b66ca7c9e5a54656f6ee9b77c29f.jpg)![Left image compare](/far-sdk/assets/images/EyesFlare-aaa9dfc2204ee98f6874e0556099d0c8.jpg)

## Teeth Whitening[​](#teeth-whitening "Direct link to Teeth Whitening")

**Usage**

```
"faces": [
  {
    "teeth_whitening": {
      "strength": 1.0
    }
    // ...
  }
  // ...
]
```

Apply whitening to the teeth.

| Parameter | Description | Optional | Default Value |
| :--- | :--- | :---: | :---: |
| `strength` | Teeth whitening strength. Float number in the range `[0.0, 1.0]`. | No | - |

**Preview**

![Right image compare](/far-sdk/assets/images/original-8823b66ca7c9e5a54656f6ee9b77c29f.jpg)![Left image compare](/far-sdk/assets/images/teeth-ad89bfabc20c76750c21b2348693680b.jpg)

## Softlight[​](#softlight "Direct link to Softlight")

**Usage**

```
"faces": [
  {
    "softlight": {
      "strength": 1.0,
      "texture": "path/to/file"
    }
    // ...
  }
  // ...
]
```

Apply softlight to the face.

| Parameter | Description | Optional | Default Value |
| :--- | :--- | :---: | :---: |
| `strength` | Softlight strength. Float number in the range `[0.0, 1.0]`. | No | - |
| `texture` | Path to a custom softlight texture. | Yes | - |
**Preview**

![Right image compare](/far-sdk/assets/images/original-8823b66ca7c9e5a54656f6ee9b77c29f.jpg)![Left image compare](/far-sdk/assets/images/SkinSoftening-e7800133c7d654b8372366266bdbf026.jpg)

## Morphing[​](#morphing "Direct link to Morphing")

Morph (i.e. deform) certain parts of the face.

**Usage**

```
"faces": [
  {
    "morphing": {
      "eyebrows": {
        "spacing": 0.6,
        "height": 0.1,
        "bend": 1.0
      },
      "eyes": {
        "rounding": 0.6,
        "enlargement": 0.3,
        "height": 0,
        "spacing": 0.3,
        "squint": 0.3,
        "lower_eyelid_pos": 0,
        "lower_eyelid_size": 0,
        "down": 0,
        "eyelid_upper": 0,
        "eyelid_lower": 0
      },
      "face": {
        "narrowing": 0,
        "v_shape": 0,
        "cheekbones_narrowing": 0,
        "cheeks_narrowing": 0,
        "jaw_narrowing": 0,
        "chin_shortening": 0.3,
        "chin_narrowing": 0,
        "sunken_cheeks": 0.0,
        "cheeks_jaw_narrowing": 0,
        "jaw_wide_thin": 0,
        "chin": 0,
        "forehead": 0.3
      },
      "nose": {
        "width": 0.3,
        "length": 0.2,
        "tip_width": 0.1,
        "down_up": 0.1,
        "sellion": 0.2
      },
      "lips": {
        "size": 0.4,
        "height": 1.0,
        "thickness": 0.1,
        "mouth_size": 0.2,
        "smile": 0.8,
        "shape": 0.4,
        "sharp": 0.6
      }
    }
    // ...
  }
  // ...
]
```

All settings are optional.

### Eyebrows[​](#eyebrows "Direct link to Eyebrows")

| Parameter | Description | Optional | Default Value |
| :--- | :--- | :---: | :---: |
| `spacing` | Adjusting the space between the eyebrows *\[-1; 1]*. | Yes | `0.0` |
| `height` | Raising/lowering the eyebrows *\[-1; 1]*. | Yes | `0.0` |
| `bend` | Adjusting the bend of the eyebrows *\[-1; 1]*. | Yes | `0.0` |

### Eyes[​](#eyes "Direct link to Eyes")

| Parameter | Description | Optional | Default Value |
| :--- | :--- | :---: | :---: |
| `rounding` | Adjusting the roundness of the eyes *\[0; 1]*. | Yes | `0.0` |
| `enlargement` | Enlarging the eyes *\[0; 1]*. | Yes | `0.0` |
| `height` | Raising/lowering the eyes *\[-1; 1]*. | Yes | `0.0` |
| `spacing` | Adjusting the space between the eyes *\[-1; 1]*. | Yes | `0.0` |
| `squint` | Making the person squint by adjusting the eyelids *\[-1; 1]*. | Yes | `0.0` |
| `lower_eyelid_pos` | Raising/lowering the lower eyelid *\[-1; 1]*. | Yes | `0.0` |
| `lower_eyelid_size` | Enlarging/shrinking the lower eyelid *\[-1; 1]*. | Yes | `0.0` |
| `down` | Eyes down *\[0; 1]*. | Yes | `0.0` |
| `eyelid_upper` | Upper eyelid *\[0; 1]*. | Yes | `0.0` |
| `eyelid_lower` | Lower eyelid *\[0; 1]*. | Yes | `0.0` |
### Face[​](#face "Direct link to Face")

| Parameter | Description | Optional | Default Value |
| :--- | :--- | :---: | :---: |
| `narrowing` | Narrowing the face *\[0; 1]*. | Yes | `0.0` |
| `v_shape` | Shrinking the chin and narrowing the cheeks *\[0; 1]*. | Yes | `0.0` |
| `cheekbones_narrowing` | Narrowing the cheekbones *\[-1; 1]*. | Yes | `0.0` |
| `cheeks_narrowing` | Narrowing the cheeks *\[0; 1]*. | Yes | `0.0` |
| `jaw_narrowing` | Narrowing the jaw *\[0; 1]*. | Yes | `0.0` |
| `chin_shortening` | Decreasing the length of the chin *\[0; 1]*. | Yes | `0.0` |
| `chin_narrowing` | Narrowing the chin *\[0; 1]*. | Yes | `0.0` |
| `sunken_cheeks` | Sinking the cheeks and emphasizing the cheekbones *\[0; 1]*. | Yes | `0.0` |
| `cheeks_and_jaw_narrowing` | Narrowing the cheeks and the jaw *\[0; 1]*. | Yes | `0.0` |
| `jaw_wide_thin` | Jaw wide/thin *\[0; 1]*. | Yes | `0.0` |
| `chin` | Face chin *\[0; 1]*. | Yes | `0.0` |
| `forehead` | Forehead *\[0; 1]*. | Yes | `0.0` |

### Nose[​](#nose "Direct link to Nose")

| Parameter | Description | Optional | Default Value |
| :--- | :--- | :---: | :---: |
| `width` | Adjusting the nose width *\[-1; 1]*. | Yes | `0.0` |
| `length` | Adjusting the nose length *\[-1; 1]*. | Yes | `0.0` |
| `tip_width` | Adjusting the nose tip width *\[0; 1]*. | Yes | `0.0` |
| `down_up` | Nose down/up *\[0; 1]*. | Yes | `0.0` |
| `sellion` | Nose sellion *\[0; 1]*. | Yes | `0.0` |

### Lips[​](#lips "Direct link to Lips")

| Parameter | Description | Optional | Default Value |
| :--- | :--- | :---: | :---: |
| `size` | Adjusting the width and vertical size of the lips *\[-1; 1]*. | Yes | `0.0` |
| `height` | Raising/lowering the lips *\[-1; 1]*. | Yes | `0.0` |
| `thickness` | Adjusting the thickness of the lips *\[-1; 1]*. | Yes | `0.0` |
| `mouth_size` | Adjusting the size of the mouth *\[-1; 1]*. | Yes | `0.0` |
| `smile` | Making a person smile *\[0; 1]*. | Yes | `0.0` |
| `shape` | Adjusting the shape of the lips *\[-1; 1]*. | Yes | `0.0` |
| `sharp` | Lips sharpness *\[0; 1]*. | Yes | `0.0` |

**Preview**

![Right image compare](/far-sdk/assets/images/original2-f74f044be90519e96c2b58b2db7ddce2.jpg)![Left image compare](/far-sdk/assets/images/morphing-92a63912987e8ddc904b6bf20cbea396.jpg)

## Eyes[​](#eyes-1 "Direct link to Eyes")

Eyes recoloring.

**Usage**

```
"faces": [
  {
    "eyes": {
      "eyes": "0 0.2 0.8 0.64",
      "corneosclera": "1 1 1 1",
      "pupil": "0 0 0 1"
    }
    // ...
  }
  // ...
]
```

| Parameter | Description | Optional | Default Value |
| :--- | :--- | :---: | :---: |
| `eyes` | Eyes (i.e. iris) color. See the note about the color format above. | Yes | `"0 0 0"` |
| `corneosclera` | Corneosclera ("sclera" in everyday language) color. | Yes | `"0 0 0"` |
| `pupil` | Pupil color. See the note about the color format above. | Yes | `"0 0 0"` |

**Preview**

![Right image compare](/far-sdk/assets/images/original-8823b66ca7c9e5a54656f6ee9b77c29f.jpg)![Left image compare](/far-sdk/assets/images/eyes-c3613cadd6bb4ebe4eccbf7084c669f5.jpg)

## Hair[​](#hair "Direct link to Hair")

Hair recoloring. Usually one parameter is used to set the hair color:

```
"faces": [
  {
    "hair": {
      "color": [
        "0.19 0.06 0.25 1.0"
      ]
    }
    // ...
  }
  // ...
]
```

But hair recoloring supports up to 5 color parameters to create a vertical color gradient. Here is an example with 2 color parameters:

```
"faces": [
  {
    "hair": {
      "color": [
        "0.19 0.06 0.25 1.0",
        "0.09 0.25 0.38 1.0"
      ]
    }
    // ...
  }
  // ...
]
```

| Parameter | Description | Optional | Default Value |
| :--- | :--- | :---: | :---: |
| `color` | Applies a solid color if one element is supplied, or gradient recoloring if an array of colors is supplied (as in the sample above). See the note about the color format above. | Yes | `"0 0 0 0"` |

**Preview**

![Right image compare](/far-sdk/assets/images/original-8823b66ca7c9e5a54656f6ee9b77c29f.jpg)![Left image compare](/far-sdk/assets/images/hair_colors-e5dc1a08bb938c142a4519d44206d47f.jpg)

## Hair Strands[​](#hair-strands "Direct link to Hair Strands")

Hair strands recoloring. Supports from 1 to 5 color parameters to recolor different strands.

```
"faces": [
  {
    "hair_strands": {
      "color": [
        "0.80 0.40 0.40 1.0",
        "0.83 0.40 0.40 1.0",
        "0.85 0.75 0.75 1.0",
        "0.87 0.60 0.60 1.0",
        "0.99 0.65 0.65 1.0"
      ]
    }
    // ...
  }
  // ...
]
```
| Parameter | Description | Optional | Default Value |
| :--- | :--- | :---: | :---: |
| `color` | Applies colors to hair strands. Supports up to 5 different colors in an array. You can also provide one element. | Yes | `"0 0 0 0"` |

**Preview**

![Right image compare](/far-sdk/assets/images/original-8823b66ca7c9e5a54656f6ee9b77c29f.jpg)![Left image compare](/far-sdk/assets/images/hair_strands-0a1cc04eb7f5482cef64a0567d00716f.jpg)

---

# On Hands prefabs

### Nails[​](#nails "Direct link to Nails")

**Usage**

You can choose a solid color and a gloss to recolor nails. Additionally, you may apply textures over the selected color.

```
{
  "nails": {
    "color": "#FFFF49",
    "gloss": 40,
    "textures": [
      "tex1.png",
      "tex2.png",
      "tex3.png",
      "tex4.png",
      "tex5.png"
    ]
  }
}
```

| Parameter | Description | Optional | Default Value |
| :--- | :--- | :---: | :---: |
| `color` | Sets the nails color. | No | - |
| `gloss` | Gloss of the nails color in the range *\[0, 60]* (recommended). | Yes | `40` |
| `textures` | 5-element array of texture filenames, one per nail. They will be applied over the selected color. | Yes | `undefined` |

**Preview**

![Right image compare](/far-sdk/assets/images/Right_before-4fb8bc03b86de05223f45d7c0bebc10f.png)![Left image compare](/far-sdk/assets/images/Right_after-0742345b61bc4398c8c0ff3e5c83f3d3.png)

note

The nails coloring feature requires the [nails segmentation neural network](/far-sdk/tutorials/capabilities/sdk_features.md). Please make sure your license plan includes it by contacting your sales manager, or fill in the website form to request it.

---

# Makeup Prefabs

### Basic concepts[​](#basic-concepts "Direct link to Basic concepts")

Each `makeup` prefab represents a specific region of the face and has a similar style of settings:

```
// ...
{
  "color": "0.95 0.70 0.54",
  "finish": "natural",
  "coverage": "mid"
}
// ...
```

`color` - Solid color of the region in RGB string format.

`finish` - Type of the finish surface (e.g. `natural`, `matte`). See the available options for the specific region below.

`coverage` - Coverage intensity. Can be `low`, `mid`, `high`, or a float number in the range `[0.0, 1.0]`.

### Makeup Base[​](#makeup-base "Direct link to Makeup Base")

```
"faces": [
  {
    "makeup_base": {
      "mode": "quality",
      "smooth": "0 1",
      "smokey": 0
    }
    // ...
  }
  // ...
]
```

| Parameter | Description | Optional | Default Value |
| :--- | :--- | :---: | :---: |
| `mode` | Can be `speed` or `quality` (to use heavier algorithms). | Yes | `"speed"` |
| `smooth` | Smooths the face skin. Contains 2 values of the smoothing strength for the corresponding regions: `whole face` and `under eyes`. | Yes | `"0 1"` |
| `smokey` | Smokey eyes index. 0 disables smokey eyes; 1, 2, 3 or 4 selects a different set of smokey eyes textures for the first two eyeshadow layers. | Yes | `0` |

### Eyebags[​](#eyebags "Direct link to Eyebags")

**Usage**

```
"faces": [
  {
    "makeup_eyebags": {
      "alpha": 0.8
    },
    // ...
  }
  // ...
]
```

`alpha` *(optional)* - Transparency for the under-eye circles in the range from `0` to `1`. Default value: `0.8`.

**Preview**

![Right image compare](/far-sdk/assets/images/original-8823b66ca7c9e5a54656f6ee9b77c29f.jpg)![Left image compare](/far-sdk/assets/images/eyebags-9cbd40d2add99adcaa5df79b82b4b3cf.jpg)

### Foundation[​](#foundation "Direct link to Foundation")

**Usage**

```
"faces": [
  {
    "makeup_foundation": {
      "color": "0.95 0.70 0.54",
      "finish": "natural",
      "coverage": "mid"
    },
    // ...
  }
  // ...
]
```

`finish` - One of: `natural`, `matte`, `radiance`.

**Preview**

![Right image compare](/far-sdk/assets/images/original-8823b66ca7c9e5a54656f6ee9b77c29f.jpg)![Left image compare](/far-sdk/assets/images/foundation-722812ce8d57e2adc8fd9492c47689f1.jpg)

### Concealer[​](#concealer "Direct link to Concealer")

**Usage**

```
"faces": [
  {
    "makeup_concealer": {
      "color": "0.94 0.73 0.66",
      "finish": "natural",
      "coverage": "mid"
    },
    // ...
  }
  // ...
]
```

`finish` - One of: `natural`, `matte`.

**Preview**

![Right image compare](/far-sdk/assets/images/original-8823b66ca7c9e5a54656f6ee9b77c29f.jpg)![Left image compare](/far-sdk/assets/images/concealer-9f1a17f7cb038d705dce5b41732d744c.jpg)

### Contour[​](#contour "Direct link to Contour")

**Usage**

```
"faces": [
  {
    "makeup_contour": {
      "color": "1 0 0",
      "finish": "normal",
      "coverage": "mid"
    },
    // ...
  }
  // ...
]
```

`finish` - One of: `normal`.

**Preview**

![Right image compare](/far-sdk/assets/images/original-8823b66ca7c9e5a54656f6ee9b77c29f.jpg)![Left image compare](/far-sdk/assets/images/contour-5841a00c527aea7d50ed6210be0c5705.jpg)

### Highlighter[​](#highlighter "Direct link to Highlighter")

**Usage**

```
"faces": [
  {
    "makeup_highlighter": {
      "color": "0.9 0.80 0.83",
      "finish": "shimmer",
      "coverage": "mid"
    },
    // ...
  }
  // ...
]
```

`finish` - One of: `shimmer`.

**Preview**

![Right image compare](/far-sdk/assets/images/original-8823b66ca7c9e5a54656f6ee9b77c29f.jpg)![Left image compare](/far-sdk/assets/images/highlighter-bd43a4e867e535aad0d65353762c63a8.jpg)

### Blush[​](#blush "Direct link to Blush")

**Usage**

```
"faces": [
  {
    "makeup_blush": {
      "color": "0.88 0.65 0.75",
      "finish": "shimmer",
      "coverage": "mid"
    },
    // ...
  }
  // ...
]
```

`finish` - One of: `shimmer`, `matte`, `cream_shine`.

**Preview**

![Right image compare](/far-sdk/assets/images/original-8823b66ca7c9e5a54656f6ee9b77c29f.jpg)![Left image compare](/far-sdk/assets/images/blush-af461e4c729818d407a7233758243c03.jpg)

### Lipstick[​](#lipstick "Direct link to Lipstick")

**Usage**

```
"faces": [
  {
    "makeup_lipstick": {
      "color": "0.88 0.47 0.61",
      "finish": "shimmer",
      "coverage": "high"
    },
    // ...
  }
  // ...
]
```
`finish` - One of: `shimmer`, `matte_dry`, `matte_cream`, `matte_powder`, `matte_liquid`, `cream_shine`, `glossy_cream_plumping`, `balm`, `balm_light`, `glossy_cream_shimmer`, `cream_vividcolors`, `matte_velvet`, `cream_shine_glitter`, `matte_velvet_sparkling`, `matte_sheer_lightcolors`, `matte_cream_vividcolors`, `metallic_cream`, `matte_velvet_sparkling_lightcolors`, `matte_light`, `metallic_shine`, `metallic_dry_lightcolors`, `satin`, `shine`, `cream`, `clear`, `cream_darkcolors`, `clear_shimmer`, `metallic_sheer`.

**Preview**

![Right image compare](/far-sdk/assets/images/original2-f74f044be90519e96c2b58b2db7ddce2.jpg)![Left image compare](/far-sdk/assets/images/lipstick-4b6bc4d5377a5fb262ad948e7488b9bf.jpg)

### Lipsliner[​](#lipsliner "Direct link to Lipsliner")

**Usage**

```
"faces": [
  {
    "makeup_lipsliner": {
      "color": "0.99 0.0 0.0",
      "finish": "shimmer",
      "coverage": "high",
      "liner": "3 5"
    },
    // ...
  }
  // ...
]
```

`finish` - One of: `shimmer`.

`liner` *(optional)* - Liner options containing 2 values: `liner width` and `softness`. Default value: `3 5`.

**Preview**

![Right image compare](/far-sdk/assets/images/original2-f74f044be90519e96c2b58b2db7ddce2.jpg)![Left image compare](/far-sdk/assets/images/lipsliner-c446fc27d392e9716288209b30a47e58.jpg)

### Eyeshadow[​](#eyeshadow "Direct link to Eyeshadow")

**Usage**

```
"faces": [
  {
    "makeup_eyeshadow": [
      {
        "color": "0.21 0.42 0.32",
        "finish": "matte",
        "coverage": "high"
      },
      {
        "color": "0.3 0.58 0.47",
        "finish": "shimmer",
        "coverage": "high"
      },
      {
        "color": "1.00 0.91 0.27",
        "finish": "metallic",
        "coverage": "high"
      }
    ],
    // ...
  }
  // ...
]
```

Can be an object with one eyeshadow definition, or an array of eyeshadows (**max length 3**) applied one after another.

`finish` - One of: `shimmer`, `matte`, `matte_powder`, `glitter_metallic`, `glitter`, `metallic`, `glitter_sheer`, `cream`.

**Preview**

![Right image compare](/far-sdk/assets/images/original-8823b66ca7c9e5a54656f6ee9b77c29f.jpg)![Left image compare](/far-sdk/assets/images/eyeshadow-ebee6f8b0da993382f550ad5872534a8.jpg)

### Eyelashes[​](#eyelashes "Direct link to Eyelashes")

**Usage**

```
"faces": [
  {
    "makeup_eyelashes": {
      "color": "0 0 0",
      "finish": "volume",
      "coverage": "high"
    },
    // ...
  }
  // ...
]
```

`finish` - One of: `volume`, `lengthening`, `lengthandvolume`, `natural`, `natural_bottom`.

**Preview**

![Right image compare](/far-sdk/assets/images/original-8823b66ca7c9e5a54656f6ee9b77c29f.jpg)![Left image compare](/far-sdk/assets/images/eyelashes-d10119917263609bb8a19279278ebd76.jpg)

### Eyeliner[​](#eyeliner "Direct link to Eyeliner")

**Usage**

```
"faces": [
  {
    "makeup_eyeliner": {
      "color": "0.0 0.0 0.0",
      "finish": "matte_liquid",
      "coverage": "high"
    },
    // ...
  }
  // ...
]
```

`finish` - One of: `shimmer`, `cream`, `matte_liquid`, `matte_cream`, `matte_dark`, `metallic`, `glitter`, `cream_lightcolors`.

**Preview**

![Right image compare](/far-sdk/assets/images/original-8823b66ca7c9e5a54656f6ee9b77c29f.jpg)![Left image compare](/far-sdk/assets/images/eyeliner-4d7a9a99580d76059b9d2fbcac49b5b9.jpg)

### Eyebrows[​](#eyebrows "Direct link to Eyebrows")

**Usage**

```
"faces": [
  {
    "makeup_eyebrows": {
      "color": "0.39 0.26 0.27",
      "finish": "matte",
      "coverage": "mid"
    },
    // ...
  }
  // ...
]
```

`finish` - One of: `matte`, `wet`, `clear`.
**Preview**

![Right image compare](/far-sdk/assets/images/original-8823b66ca7c9e5a54656f6ee9b77c29f.jpg)![Left image compare](/far-sdk/assets/images/eyebrows-80609d099b6ebe704df37dc3d7203c39.jpg)

### Lipsshine[​](#lipsshine "Direct link to Lipsshine")

**Usage**

```
"faces": [
  {
    "makeup_lipsshine": {
      "color": "1.0 0.0 0.0",
      "finish": "glitter",
      "coverage": "mid"
    },
    // ...
  }
  // ...
]
```

`finish` - One of: `shine`, `glitter`.

**Preview**

![Right image compare](/far-sdk/assets/images/original2-f74f044be90519e96c2b58b2db7ddce2.jpg)![Left image compare](/far-sdk/assets/images/lipshine-71bf9d4d51904c26b6347475215ee75b.jpg)

### Lips Gloss[​](#lips-gloss "Direct link to Lips Gloss")

**Usage**

```
"faces": [
  {
    "makeup_lipsgloss": {
      "threshold": 0.96,
      "contour": 0.45,
      "weakness": 0.5,
      "multiplier": 1.5,
      "alpha": 0.8
    },
    // ...
  }
  // ...
]
```

| Parameter | Description | Optional | Default Value |
| :--- | :--- | :---: | :---: |
| `threshold` | Gloss threshold. The higher the value, the smaller the highlight. | Yes | `0.95` |
| `contour` | Highlight contour softness. | Yes | `0.5` |
| `weakness` | Highlight mask weakness. The higher the value, the smaller the highlight. | Yes | `0.5` |
| `multiplier` | Highlight mask multiplier. | Yes | `1.5` |
| `alpha` | Highlight mask visibility. | Yes | `0.7` |

**Preview**

![Right image compare](/far-sdk/assets/images/lipsgloss1-3552ded6972501314cbd7ec57bf8a502.jpg)![Left image compare](/far-sdk/assets/images/lipsgloss2-81fa5f99c282bfe870937e83c7435bd6.jpg)

### Lips Chameleon[​](#lips-chameleon "Direct link to Lips Chameleon")

**Usage**

```
"faces": [
  {
    "makeup_lipschameleon": {
      "colors": [
        {
          "color": "#3342b0",
          "finish": "metallic_cream",
          "coverage": "high"
        },
        {
          "color": "#a028b0",
          "finish": "metallic_cream",
          "coverage": "high"
        }
      ],
      "threshold": 0.92,
      "contour": 0.3,
      "weakness": 0.25
    }
    // ...
  }
  // ...
]
```

| Parameter | Description | Optional | Default Value |
| :--- | :--- | :---: | :---: |
| `colors` | Array of lipstick parameters. Must contain 2 elements with color/finish/coverage fields. | No | - |
| `color` | Solid color of the region in RGB string format. | No | - |
| `finish` | Type of the finish surface (`shimmer`, `matte_dry`, `matte_cream`, `matte_powder`, `matte_liquid`, `cream_shine`, `glossy_cream_plumping`, `balm`, `balm_light`, `glossy_cream_shimmer`, `cream_vividcolors`, `matte_velvet`, `cream_shine_glitter`, `matte_velvet_sparkling`, `matte_sheer_lightcolors`, `matte_cream_vividcolors`, `metallic_cream`, `matte_velvet_sparkling_lightcolors`, `matte_light`, `metallic_shine`, `metallic_dry_lightcolors`, `satin`, `shine`, `cream`, `clear`, `cream_darkcolors`, `clear_shimmer`, `metallic_sheer`). | No | - |
| `coverage` | Lipstick coverage intensity. | No | - |
| `threshold` | Chameleon mask threshold. The higher the value, the smaller the mask. | Yes | `0.92` |
| `contour` | Chameleon mask contour softness. | Yes | `0.3` |
| `weakness` | Chameleon mask weakness. The higher the value, the smaller the mask. | Yes | `0.25` |

**Preview**

![Right image compare](/far-sdk/assets/images/original2-f74f044be90519e96c2b58b2db7ddce2.jpg)![Left image compare](/far-sdk/assets/images/lipschameleon-ba7ea6e56fe0a7e7ecf62493feec9d29.jpg)

### Eyelids Gloss[​](#eyelids-gloss "Direct link to Eyelids Gloss")

**Usage**

```
"faces": [
  {
    "makeup_eyelidsgloss": {
      "threshold": 0.95,
      "contour": 0.6,
      "weakness": 0.6,
      "multiplier": 1.5,
      "alpha": 1
    }
    // ...
  }
  // ...
]
```

| Parameter | Description | Optional | Default Value |
| :--- | :--- | :---: | :---: |
| `threshold` | Gloss threshold. The higher the value, the smaller the highlight. | Yes | `0.98` |
| `contour` | Highlight contour softness. | Yes | `0.5` |
| `weakness` | Highlight mask weakness. The higher the value, the smaller the highlight. | Yes | `0.5` |
| `multiplier` | Highlight mask multiplier. | Yes | `1.5` |
| `alpha` | Highlight mask visibility. | Yes | `0.6` |

**Preview**

![Right image compare](/far-sdk/assets/images/eyelidsgloss1-88a2d06dcc4e82d1fe9212e9a54c3b0c.jpg)![Left image compare](/far-sdk/assets/images/eyelidsgloss2-d0fbc506fb3f1e7024ca6b3e34572a05.jpg)

### Eyelids Chameleon[​](#eyelids-chameleon "Direct link to Eyelids Chameleon")

**Usage**

```
"faces": [
  {
    "makeup_eyelids_chameleon": {
      "colors": {
        "bottom": [
          { "color": "#3342b0", "finish": "shimmer", "coverage": "high" },
          { "color": "#3342b0", "finish": "shimmer", "coverage": "high" },
          { "color": "#3342b0", "finish": "shimmer", "coverage": "high" }
        ],
        "upper": [
          { "color": "#a028b0", "finish": "shimmer", "coverage": "high" },
          { "color": "#a028b0", "finish": "shimmer", "coverage": "high" },
          { "color": "#a028b0", "finish": "shimmer", "coverage": "high" }
        ]
      },
      "threshold": 0.92,
      "contour": 0.3,
      "weakness": 0.25
    }
    // ...
  }
  // ...
]
```

| Parameter | Description | Optional | Default Value |
| :--- | :--- | :---: | :---: |
| `colors` | Container of the bottom and upper colors. | No | - |
| `bottom` | Array of bottom colors. May contain up to 3 elements with color/finish/coverage data. | No | - |
| `upper` | Array of upper colors. May contain up to 3 elements with color/finish/coverage data. | No | - |
| `color` | Solid color of the region in RGB string format. | No | - |
| `finish` | Type of the finish surface (`shimmer`, `matte`, `matte_powder`, `glitter_metallic`, `glitter`, `metallic`, `glitter_sheer`, `cream`). | No | - |
| `coverage` | Coverage intensity. | No | - |
| `threshold` | Chameleon mask threshold. The higher the value, the smaller the mask. | Yes | `0.92` |
| `contour` | Chameleon mask contour softness. | Yes | `0.3` |
| `weakness` | Chameleon mask weakness. The higher the value, the smaller the mask. | Yes | `0.25` |

**Preview**

![Right image compare](/far-sdk/assets/images/original2-f74f044be90519e96c2b58b2db7ddce2.jpg)![Left image compare](/far-sdk/assets/images/eyelidschameleon-2a37c1d6e58bfad1fb2fd1470591e1e4.jpg)

### Lips Glitter[​](#lips-glitter "Direct link to Lips Glitter")

**Usage**

```
"faces": [
  {
    "makeup_lips_glitter": {
      "color": "0.7 0.3 0.7",
      "alpha": 0.75
    }
    // ...
  }
  // ...
]
```

| Parameter | Description | Optional | Default Value |
| :--- | :--- | :---: | :---: |
| `color` | Glitter color in RGB format. | Yes | `"0 0 0"` |
| `alpha` | Glitter effect visibility. | Yes | `0.0` |

**Preview**

![Right image compare](/far-sdk/assets/images/lips_glitter_right-220262837fd3b237c9a5e7b8fd06eec0.jpg)![Left image compare](/far-sdk/assets/images/lips_glitter_left-15bb0ecbc4cbc280253f24d1ff3581a7.jpg)

### Eyelids Glitter[​](#eyelids-glitter "Direct link to Eyelids Glitter")

**Usage**

```
"faces": [
  {
    "makeup_eyelids_glitter": {
      "color": "1.0 0.0 0.5",
      "alpha": 0.75
    }
    // ...
  }
  // ...
]
```

| Parameter | Description | Optional | Default Value |
| :--- | :--- | :---: | :---: |
| `color` | Glitter color in RGB format. | Yes | `"0 0 0"` |
| `alpha` | Glitter effect visibility. | Yes | `0.0` |
**Preview**

![Right image compare](/far-sdk/assets/images/eyelids_glitter_right-dbeb8a7e5fcff9d94c5c3f8ee7a757ca.jpg)![Left image compare](/far-sdk/assets/images/eyelids_glitter_left-2c0828a3e020f38a87a827dfcc5fc458.jpg)

## All Features[​](#all-features "Direct link to All Features")

Let's combine all the features together:

**Example**

```
{
  "faces": [
    {
      "makeup_base": { "smooth": "0 1", "mode": "quality" },
      "makeup_eyebags": { "alpha": 0.8 },
      "makeup_foundation": { "color": "0.95 0.70 0.54", "finish": "natural", "coverage": "mid" },
      "makeup_concealer": { "color": "0.94 0.73 0.66", "finish": "natural", "coverage": 1 },
      "makeup_contour": { "color": "1 0 0", "finish": "normal", "coverage": "mid" },
      "makeup_highlighter": { "color": "0.9 0.80 0.83", "finish": "shimmer", "coverage": "mid" },
      "makeup_blush": { "color": "0.88 0.65 0.75", "finish": "shimmer", "coverage": "mid" },
      "makeup_lipstick": { "color": "0.88 0.47 0.61", "finish": "shimmer", "coverage": "high" },
      "makeup_lipsliner": { "color": "0.99 0.0 0.0", "finish": "shimmer", "coverage": "high", "liner": "3 5" },
      "makeup_eyeshadow": [
        { "color": "0.21 0.42 0.32", "finish": "matte", "coverage": "high" },
        { "color": "0.3 0.58 0.47", "finish": "shimmer", "coverage": "high" },
        { "color": "1.00 0.91 0.27", "finish": "metallic", "coverage": "high" }
      ],
      "makeup_eyeliner": { "color": "0.0 0.0 0.0", "finish": "matte_liquid", "coverage": "high" },
      "makeup_eyebrows": { "color": "0.39 0.26 0.27", "finish": "matte", "coverage": "mid" },
      "makeup_lipsshine": { "color": "1.0 0.0 0.0", "finish": "glitter", "coverage": "mid" },
      "makeup_eyelashes": { "color": "1 0 0", "finish": "volume", "coverage": "high" },
      "makeup_lipsgloss": { "threshold": 0.96, "contour": 0.45, "weakness": 0.5, "multiplier": 1.5, "alpha": 0.8 },
      "makeup_eyelidsgloss": { "threshold": 0.95, "contour": 0.6, "weakness": 0.6, "multiplier": 1.5, "alpha": 1 }
    }
  ],
  "scene": "test_makeup",
  "version": "2.0.0"
}
```

**Preview**

![Right image compare](/far-sdk/assets/images/original2-f74f044be90519e96c2b58b2db7ddce2.jpg)![Left image compare](/far-sdk/assets/images/all-fa62ce77e0126c7c957ad612dfb34859.jpg)

---

# Prefabs Overview

A prefab is a high-level object that represents a set of render and SDK features. Prefabs are divided into several types:

* [On Face](/far-sdk/effects/prefabs/face.md) - prefabs that depend on a face.
* [Makeup](/far-sdk/effects/prefabs/makeup.md) - prefabs that represent makeup features. They also depend on a face.
* [Top Level](/far-sdk/effects/prefabs/top_level.md) - prefabs that affect the whole screen and are not attached to any face.
* [On Hands](/far-sdk/effects/prefabs/hands.md) - effects applied to hands.
* [Sprites](/far-sdk/effects/prefabs/sprites.md#sprites) - prefabs that represent simple 2D sprites.
* [Sounds](/far-sdk/effects/prefabs/sounds.md) - prefabs that represent audio.

The general scheme of using prefabs:

```
{
  "scene": "effect name",
  "version": "2.0.0",
  "camera": {},

  // Top level prefabs
  "background": {
    // ...
  },
  "foreground": {
    // ...
  },
  "lut": {
    // ...
  },
  "lights": {
    // ...
  },
  "msaa": {
    // ...
  },

  // On Face
  "faces": [
    { // first face
      "face_prefab1": {
        // ...
      },
      "makeup_prefab1": {
        // ...
      }
      // ...
    },
    { // second face
    }
    // ..
  ],

  // Sounds
  "sounds": [
    {
      "sound_prefab1": {
        // ...
      },
      "sound_prefab2": {
        // ...
      }
      // ...
    }
  ],

  // Sprites
  "sprites": [
    {
      "sprite_prefab1": {
        // ...
      },
      "sprite_prefab2": {
        // ...
      }
      // ...
    }
    // ...
  ]
}
```

Where

* `scene` - the name of your effect.
* `version` - the version of this configuration file. Always set it to `"2.0.0"`. The previous version was designed for complex legacy effects.
* `camera` - tells the SDK that the camera feed will be rendered on the screen.
* `faces` - array of JSON objects describing the features you wish to place on each face. For each face, define a JSON object where the keys are the names of prefabs and the values are the parameters for each prefab.
* `top_level_prefab` - one of the top level prefabs.
* `sprites` - array of JSON objects describing sprite features.
* `sounds` - array of JSON objects describing sound features.

tip

You can create an effect with any set of prefabs.

tip

You can change the effect at runtime by calling the `reload_config()`/`reloadConfig()` method:

* C++
* Java
* Swift
* JavaScript

```
constexpr auto new_config = R"(
{
    "camera" : {},
    "background" : {
        // ...
    }
}
)";
effect_player->effect_manager()->reload_config(new_config);
```

```
import com.banuba.sdk.player.Player

// ...

val newConfig = """
{
    "camera" : {},
    "background" : {
        // ...
    }
}
"""
player.effectPlayer.effectManager().reloadConfig(newConfig)
```

```
import BanubaEffectPlayer

// ...

let newConfig = """
{
    "camera": {},
    "background": {
        // ...
    }
}
"""
player.effectPlayer?.effectManager().reloadConfig(newConfig)
```

```
const new_config = `{
    "camera": {},
    "background": {
        // ...
    }
}`
player._effectManager.reloadConfig(new_config)
```

note

Colors are represented as a 3- or 4-component string, where each component is a value in the range *\[0, 1]* or *\[0, 255]*, separated by spaces. E.g. `"1 0 0 1"` is red. HTML-like hex strings are also accepted, e.g. `"#00FF00"` is green; alpha `FF` is assumed if missing.

---

# Sounds Prefabs

## Sounds[​](#sounds "Direct link to Sounds")

This array contains audio effects bound to the effect. You can enable several audio tracks at once.

### Audio[​](#audio "Direct link to Audio")

A simple background audio effect.

**Usage**

```
"sounds": [
  {
    "audio": {
      "filename": "sound.ogg",
      "loop": true,
      "volume": 0.7
    }
    // ...
  }
  // ...
]
```

| Parameter | Description | Optional | Default Value |
| :--- | :--- | :---: | :---: |
| `filename` | Path to the audio track. Tracks in `.ogg` and `.wav` formats are supported. | No | - |
| `loop` | Loop the audio track. | Yes | `false` |
| `volume` | This audio track's volume. Value from `[0, 1]`. | Yes | `1` |

note

To use a prefab with a sound effect on Web, it is **mandatory** to set `player.setVolume(1);` in the code.

---

# Sprites Prefabs

## sprites[​](#sprites "Direct link to sprites")

This array contains sprite-based prefabs, such as the hint.

### Hint[​](#hint "Direct link to Hint")

A text message on the screen.

**Usage**

```
"sprites": [
  {
    "hint": {
      "text": "Simple Text!",
      "font": "path/to/font/file",
      "color": "1 0 0 1",
      "size": "30 10",
      "translation": "0 0",
      "rotation": "0 0 0"
    }
    // ...
  }
  // ...
]
```

| Parameter | Description | Optional | Default Value |
| :--- | :--- | :---: | :---: |
| `text` | The message to be displayed. | No | - |
| `font` | Path to the font file. Supported formats: `.otf`, `.otc`, `.ttf`, `.ttc`. | Yes | `default font` |
| `color` | Color of the text. Multiple hints support only one color, and it must be set manually for each hint. | Yes | `1 1 1 1` |
| `size` | Size of the text area \[X, Y] (in percent of the screen size). | Yes | `100 100` |
| `translation` | Translate the center of the hint relative to the center of the screen along the *X*, *Y* axes (in percent of the screen size). | Yes | `0 0` |
| `rotation` | Rotation angles (in degrees), **only** over the *Z* axis. | Yes | `0 0 0` |

**Preview**

![Right image compare](/far-sdk/assets/images/original_wide-2f360425a0e0a779832de176b75c4354.jpg)![Left image compare](/far-sdk/assets/images/hint-e7610a1cad3f0facb2590c99cda3058f.jpg)

---

# Top Level Prefabs

### Background[​](#background "Direct link to Background")

**Usage**

You can use a texture:

```
{
  "background": {
    "texture": "capy.jpeg",
    "rotation": 0,
    "scale": 1,
    "content_mode": "scale_to_fill",
    "blend_mode": "default",
    "clear_color": "1 0 0 1",
    "use_filter": true
  }
}
```

Or blur the real background:

```
{
  "background": {
    "blur": 0.5
  }
}
```

Or transparency:

```
{
  "background": {
    "transparency": 0.5
  }
}
```

Enable virtual background.

| Parameter | Description | Optional | Default Value |
| :--- | :--- | :---: | :---: |
| `transparency` | Sets the background transparency from 0 to 1. | Yes | `0.0` |
| `texture` | Sets the file (image or video) as the background texture. For better performance, use these formats: `.jpeg`, `.jpg`, `.png`, `.mp4` (H.264 and AAC only). Formats such as `.heic`, `.webp`, `.webm` can also be used, but performance may vary depending on the device. More info can be found here: [image formats](https://docs.banuba.com/far-sdk/tutorials/capabilities/technical_specification#image-formats-support) and [video formats](https://docs.banuba.com/far-sdk/tutorials/capabilities/technical_specification#video-formats-support). | Yes | `null` |
| `rotation` | Rotates the background texture clockwise (in degrees). | Yes | `0.0` |
| `scale` | Scales the background texture. | Yes | `1.0` |
| `content_mode` | Fits the background texture inside the frame. Possible values: `scale_to_fill`, `fill`, `fit`. | Yes | `"scale_to_fill"` |
| `blend_mode` | Sets the texture blending mode. Possible values: `default`, `screen`, `split_alpha`, `multiply`. `default` is traditional alpha blending; with `split_alpha`, the alpha channel is on the right side of the input texture. | Yes | `"default"` |
---

# Top Level Prefabs

### Background[​](#background "Direct link to Background")

**Usage**

You can choose to use a texture:

```
{
    "background": {
        "texture": "capy.jpeg",
        "rotation": 0,
        "scale": 1,
        "content_mode": "scale_to_fill",
        "blend_mode": "default",
        "clear_color": "1 0 0 1",
        "use_filter": true
    }
}
```

Or blur the real background:

```
{
    "background": {
        "blur": 0.5
    }
}
```

Or transparency:

```
{
    "background": {
        "transparency": 0.5
    }
}
```

Each of these enables the virtual background.

| Parameter | Description | Optional | Default Value |
| :-------------------- | :---------- | :------: | :---------------: |
| `transparency` | Sets the background transparency, from 0 to 1. | ✓ | `0.0` |
| `texture` | Sets a file (image or video) as the background texture. For best performance, use these formats: `.jpeg`, `.jpg`, `.png`, `.mp4` (H.264 and AAC only). Formats such as `.heic`, `.webp`, `.webm` can also be used, but performance may vary depending on the device. More info can be found here: [image formats](https://docs.banuba.com/far-sdk/tutorials/capabilities/technical_specification#image-formats-support) and [video formats](https://docs.banuba.com/far-sdk/tutorials/capabilities/technical_specification#video-formats-support) | ✓ | `null` |
| `rotation` | Rotates the background texture clockwise, in degrees. | ✓ | `0.0` |
| `scale` | Scales the background texture. | ✓ | `1.0` |
| `content_mode` | Fits the background texture inside the frame. Possible values are: `scale_to_fill`, `fill`, `fit`. | ✓ | `"scale_to_fill"` |
| `blend_mode` | Sets the texture blend mode. Possible values are: `default`, `screen`, `split_alpha`, `multiply`. `default` -- traditional alpha blending; `split_alpha` -- the alpha channel is on the right side of the input texture. | ✓ | `"default"` |
| `blur` | Sets the background blur radius. Radius in the \[0, 1] range. | ✓ | `0.0` |
| `clear_color` | Specifies the color of the area not covered by the background texture (e.g. with `content_mode` `fit`). Black by default. The transparency (last) component is a conventional value only and currently ignored (use `1` for it). | ✓ | `"0 0 0 0"` |
| `camera_video_scale` | \[Experimental] Sets the scale factor for the camera image relative to the center of the image. | ✓ | `"1 1"` |
| `camera_video_origin` | \[Experimental] Sets the offset for the camera image relative to the center of the image. | ✓ | `"0 0"` |
| `use_filter` | Enables/disables the filter for smoothing segmentation mask contours. | ✓ | `true` |

**Preview**

![Right image compare](/far-sdk/assets/images/original_wide-2f360425a0e0a779832de176b75c4354.jpg)![Left image compare](/far-sdk/assets/images/background-470533cb99448493c845be22788ccd68.jpg)

### foreground[​](#foreground "Direct link to foreground")

**Usage**

Apply a texture or a video to the whole screen.

```
{
    "foreground": {
        "filename": "path/to/texture/file",
        "@blend": "multiply",
        "rotation": 0
    }
}
```

| Parameter | Description | Optional | Default Value |
| :--------- | :---------- | :------: | :-----------: |
| `filename` | Path to a texture or a video file. The texture must have a reasonable alpha channel or, as in the case of video, it will hide the underlying drawings. Information about supported formats can be found here: [image formats](https://docs.banuba.com/far-sdk/tutorials/capabilities/technical_specification#image-formats-support) and [video formats](https://docs.banuba.com/far-sdk/tutorials/capabilities/technical_specification#video-formats-support) | *+* | *+* |
| `@blend` | Applies the corresponding blending mode. Note the leading `@` in the parameter name; it is mandatory. Possible values are: `off`, `alpha`, `premul_alpha`, `alpha_rgba`, `screen`, `add`, `multiply`, `min`, `max`. | ✓ | `"alpha"` |
| `rotation` | Rotates the foreground clockwise, in degrees. | ✓ | `0.0` |
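For instance, a frame-wide multiply tint could be configured like this. A minimal sketch for Web; `textures/vignette.png` is a hypothetical file inside the effect:

```
// Minimal sketch (Web): multiply-blend a texture over the whole screen.
// Note the mandatory leading "@" in the "@blend" parameter name.
const foregroundConfig = `{
  "camera": {},
  "foreground": {
    "filename": "textures/vignette.png",
    "@blend": "multiply"
  }
}`
player._effectManager.reloadConfig(foregroundConfig)
```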
``` { "lut": { "filename": "path/to/lut/file", "strength": 0.9 } } ``` | Parameter | Description | Optional | Default Value | | :--------- | :---------------------------------- | :------------------------------------------------------------------: | :-----------: | | `filename` | Path to the LUT file (usually PNG). | *+* | *+* | | `strength` | LUT strength. Value in \[0, 1]. | ![\`${props.title} icon\`](/far-sdk/img/icons/check.svg "Supported") | `1.0` | **Preview** ![Right image compare](/far-sdk/assets/images/original_wide-2f360425a0e0a779832de176b75c4354.jpg)![Left image compare](/far-sdk/assets/images/lut-d90746e32baa84b6f5b8027228b9f563.jpg) Drag ### lights[​](#lights "Direct link to lights") **Usage** Add up to 4 directional light sources to GLTF models in addition to IBL textures. **NOTE** that GLTF model is mandatory. ``` { "lights": { "radiance": [ "10 0 0 0", "0 0 10 0" ], "direction": [ "-1 0 0", "1 0 0" ] } } ``` | Parameter | Description | Optional | Default Value | | :---------- | :------------------------------------------------------------------------------------------------------------------------------------------ | :------: | :-----------: | | `radiance` | Array (or a single value for 1 light) of light sources radiances in *R*, *G*, *B* components and light wrap around factor in 4th component. | *+* | *+* | | `direction` | Array (or a single value for 1 light) of light sources directions in *X*, *Y*, *Z*. | *+* | *+* | ### msaa[​](#msaa "Direct link to msaa") **Usage** Apply multisample anti-aliasing to effect. MSAA makes the picture look cleaner by smoothing out jagged edges. **NOTE** that GLTF model is mandatory. ``` { "msaa": { "@samples_count": 4 } } ``` | Parameter | Description | Optional | Default Value | | :--------------- | :----------------------------------------------------------------------------- | :------: | :-----------: | | `@samples_count` | MSAA strength. Value must be 1, 2 or 4. If strength is 1 - msaa is turned off. | *+* | *+* | ### Bokeh[​](#bokeh "Direct link to Bokeh") **Usage** Enable bokeh effect for background. ``` { "bokeh": { "samples": 16 } } ``` | Parameter | Description | Optional | Default Value | | :-------- | :------------------------------------------------------------------- | :------------------------------------------------------------------: | :-----------: | | `samples` | Samples amount in the range \[8, 24]. Higher value - stronger bokeh. | ![\`${props.title} icon\`](/far-sdk/img/icons/check.svg "Supported") | `16` | **Preview** ![Right image compare](/far-sdk/assets/images/bokeh1-d7e98efa92c812542789f664a113638d.jpg)![Left image compare](/far-sdk/assets/images/bokeh2-7a57a4a3469764e1337ab4dc0f704eba.jpg) Drag --- # Virtual Background API Banuba provides the Virtual Background API designed to help you integrate augmented reality background separation into your app. It aims to change your background to hide everything behind you. [](pathname:///generated/effects/Background_doc.zip) [Download example](pathname:///generated/effects/Background_doc.zip) ## How to add a background to an effect[​](#how-to-add-a-background-to-an-effect "Direct link to How to add a background to an effect") Assume you have an effect, say [Afro](pathname:///generated/effects/Afro.zip), and want to add a background image to it, say [beach.png](pathname:///img/effect/combine_effect_vbg/beach.png) . To accomplish this, connect the built-in Background module to the effect via `evalJs`. 
---

# Virtual Background API

Banuba provides the Virtual Background API designed to help you integrate augmented reality background separation into your app. It replaces your background, hiding everything behind you.

[Download example](pathname:///generated/effects/Background_doc.zip)

## How to add a background to an effect[​](#how-to-add-a-background-to-an-effect "Direct link to How to add a background to an effect")

Assume you have an effect, say [Afro](pathname:///generated/effects/Afro.zip), and want to add a background image to it, say [beach.png](pathname:///img/effect/combine_effect_vbg/beach.png). To accomplish this, connect the built-in Background module to the effect via `evalJs`. Now, you can set the image as the background texture:

* Java
* Swift

```
// Effect mCurrentEffect = ...

// Connect the built-in background module (once per effect)
mCurrentEffect.evalJs("Background = require('bnb_js/background')", null);

// Then, set the background texture
mCurrentEffect.evalJs("Background.texture('/absolute/path/to/beach.png')", null);
```

```
// var currentEffect: BNBEffect = ...

// Connect the built-in background module (once per effect)
currentEffect?.evalJs("Background = require('bnb_js/background')", resultCallback: nil);

// Then, set the background texture
currentEffect?.evalJs("Background.texture('/absolute/path/to/beach.png')", resultCallback: nil);
```

### Combine VBG with a WebAR effect (AR 3D Mask)[​](#combine-vbg-with-a-webar-effect-ar-3d-mask "Direct link to Combine VBG with a WebAR effect (AR 3D Mask)")

On the Web platform, the file system is represented by the effect itself. This means you should put the desired image inside the effect folder before the `evalJs` call. One way to accomplish this is to put the image directly into the effect archive:

1. Unpack the effect archive
2. Put the image into the effect folder, say as `images/beach.png`
3. Compress the effect folder and use the new archive instead of the original one

This way you should be able to set the image using the relative path:

```
// const player = await Player.create(...)
// const effect = new Effect(...)
// await player.applyEffect(effect)

// Connect the built-in background module (once per effect)
await effect.evalJs("Background = require('bnb_js/background')")

// Then, set the background texture
await effect.evalJs("Background.texture('images/beach.png')")
```

Another way is to upload an image to the effect's file system on demand. You can leverage the `Effect.writeFile()` API to accomplish this:

```
// const player = await Player.create(...)
// const effect = new Effect(...)
// await player.applyEffect(effect)

// Load the image file however you like, e.g. from a remote server
const image = await fetch("/path/to/beach.png").then(r => r.arrayBuffer())
await effect.writeFile("images/beach.png", image)

// Connect the built-in background module (once per effect)
await effect.evalJs("Background = require('bnb_js/background')")

// Then, set the background texture
await effect.evalJs("Background.texture('images/beach.png')")
```
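The same approach works for user-provided files. A sketch under the assumption that the page contains an `<input type="file">` element:

```
// Minimal sketch (Web): write a user-selected image into the effect's
// file system, then use it as the background.
const input = document.querySelector('input[type="file"]')
input.addEventListener('change', async () => {
  const image = await input.files[0].arrayBuffer()
  await effect.writeFile('images/user_bg.png', image)
  await effect.evalJs("Background = require('bnb_js/background')")
  await effect.evalJs("Background.texture('images/user_bg.png')")
})
```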
currentEffect?.evalJs("Background.texture('image.png')", resultCallback: nil) ``` ``` await effect.evalJs("Background.texture('image.png')") ``` **Preview** ![Right image compare](/far-sdk/assets/images/original_wide-2f360425a0e0a779832de176b75c4354.jpg)![Left image compare](/far-sdk/assets/images/BackgroundTexture-fe6299595a550a13eec111677e4e2539.jpg) Drag ### Background texture content mode[​](#background-texture-content-mode "Direct link to Background texture content mode") Scales the background content mode. * `Background.contentMode(mode)` - sets a content mode of background texture. * Available mods: `scale_to_fill`, `fill`, `fit`. - config.js - Java - Swift - JavaScript ``` /* Feel free to add your custom code below */ Background.texture('image.png') Background.contentMode('fit') ``` ``` // Effect mCurrentEffect = ... mCurrentEffect.evalJs("Background.texture('image.png')", null); mCurrentEffect.evalJs("Background.contentMode('fit')", null); ``` ``` // var currentEffect: BNBEffect = ... currentEffect?.evalJs("Background.texture('image.png')", resultCallback: nil) currentEffect?.evalJs("Background.contentMode('fit')", resultCallback: nil) ``` ``` await effect.evalJs("Background.texture('image.png')") await effect.evalJs("Background.contentMode('fit')") ``` ![Right image compare](/far-sdk/assets/images/original_wide-2f360425a0e0a779832de176b75c4354.jpg)![Left image compare](/far-sdk/assets/images/BackgroundContentModeFit-b2b07f464dccc49e433763ee6d1a99b7.jpg) Drag ### Background texture rotation[​](#background-texture-rotation "Direct link to Background texture rotation") Rotates the background texture clockwise in degrees. * `Background.rotation(angle)` - sets background image rotation angle. Angle should be provided in degrees. - config.js - Java - Swift - JavaScript ``` /* Feel free to add your custom code below */ Background.texture('image.png') Background.rotation(90) ``` ``` // Effect mCurrentEffect = ... mCurrentEffect.evalJs("Background.texture('image.png')", null); mCurrentEffect.evalJs("Background.rotation(90)", null); ``` ``` // var currentEffect: BNBEffect = ... currentEffect?.evalJs("Background.texture('image.png')", resultCallback: nil) currentEffect?.evalJs("Background.rotation(90)", resultCallback: nil) ``` ``` await effect.evalJs("Background.texture('image.png')") await effect.evalJs("Background.rotation(90)") ``` **Preview** ![Right image compare](/far-sdk/assets/images/original_wide-2f360425a0e0a779832de176b75c4354.jpg)![Left image compare](/far-sdk/assets/images/BackgroundRotation-bab2be0102d207bd6ae9427b96e95ece.jpg) Drag ### Background texture scale[​](#background-texture-scale "Direct link to Background texture scale") Scales the background texture. * `Background.scale(factor)` - sets the scale factor of background texture. - config.js - Java - Swift - JavaScript ``` /* Feel free to add your custom code below */ Background.texture('image.png') Background.scale(2) ``` ``` // Effect mCurrentEffect = ... mCurrentEffect.evalJs("Background.texture('image.png')", null); mCurrentEffect.evalJs("Background.scale(2)", null); ``` ``` // var currentEffect: BNBEffect = ... 
currentEffect?.evalJs("Background.texture('image.png')", resultCallback: nil) currentEffect?.evalJs("Background.scale(2)", resultCallback: nil) ``` ``` await effect.evalJs("Background.texture('image.png')") await effect.evalJs("Background.scale(2)") ``` **Preview** ![Right image compare](/far-sdk/assets/images/original_wide-2f360425a0e0a779832de176b75c4354.jpg)![Left image compare](/far-sdk/assets/images/BackgroundScale-07b26044692d8592b470084f1a12587f.jpg) Drag ## Background blur[​](#background-blur "Direct link to Background blur") Sets the background blur radius. * `Background.blur(radius)` - set the background blur radius in range from 0 to 1. - config.js - Java - Swift - JavaScript ``` /* Feel free to add your custom code below */ Background.blur(0.6) ``` ``` // Effect mCurrentEffect = ... mCurrentEffect.evalJs("Background.blur(0.6)", null); ``` ``` // var currentEffect: BNBEffect = ... currentEffect?.evalJs("Background.blur(0.6)", resultCallback: nil) ``` ``` await effect.evalJs("Background.blur(0.6)") ``` **Preview** ![Right image compare](/far-sdk/assets/images/BackgroundTexture-fe6299595a550a13eec111677e4e2539.jpg)![Left image compare](/far-sdk/assets/images/BackgroundBlur-72d90fc5c1e1f874b4f53f8ddcf212eb.jpg) Drag ## Background transparency[​](#background-transparency "Direct link to Background transparency") Sets background transparency value. * `Background.transparency(value)` - set background transparency value in range from 0 to 1. 0 - transparent background disabled , 1 - fully transparent background enabled - config.js - Java - Swift - JavaScript ``` /* Feel free to add your custom code below */ Background.transparency(1) ``` ``` // Effect mCurrentEffect = ... mCurrentEffect.evalJs("Background.transparency(1)", null); ``` ``` // var currentEffect: BNBEffect = ... currentEffect?.evalJs("Background.transparency(1)", resultCallback: nil) ``` ``` await effect.evalJs("Background.transparency(1)") ``` **Preview** ![Right image compare](/far-sdk/assets/images/original_wide-2f360425a0e0a779832de176b75c4354.jpg)![Left image compare](/far-sdk/assets/images/BackgroundTransparent-1648360dee4dd0da0f4aa673866738e8.png) Drag --- # Support * [Dev Portal](https://community.banuba.com/) * [FAQ page](https://www.banuba.com/faq/). * [Contact our support](/far-sdk/support/.md). --- # Third parties library list ## 3-clause BSD License[​](#3-clause-bsd-license "Direct link to 3-clause BSD License") > Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: > > 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. > 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. > 3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS ""AS IS"" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
---

# Support

* [Dev Portal](https://community.banuba.com/)
* [FAQ page](https://www.banuba.com/faq/)
* [Contact our support](/far-sdk/support/.md)

---

# Third parties library list

## 3-clause BSD License[​](#3-clause-bsd-license "Direct link to 3-clause BSD License")

> Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
>
> 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
> 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
> 3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.
>
> THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

| Name | Version | Platform | Link | Copyright info |
| --- | --- | --- | --- | --- |
| CLI11 | 1.9.1 | Desktop | | CLI11 1.8 Copyright (c) 2017-2019 University of Cincinnati, developed by Henry Schreiner under NSF AWARD 1414736. All rights reserved. |
| ios-cmake | 3.0.2 | iOS | | Copyright (c) 2011-2014, Andrew Fischer; (c) 2017, Alexander Widerberg. All rights reserved. |
| tinyexr | 0.9.5 | Desktop | | Syoyo Fujita |
| pybind11 | 2.6.2 | Python | | Copyright (c) 2016 Wenzel Jakob. All rights reserved. |
| Win camera from chromium | N/A | Windows | | Copyright 2015 The Chromium Authors. All rights reserved. |

## Apache License 2.0[​](#apache-license-20 "Direct link to Apache License 2.0")

> Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at: Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

| Name | Version | Platform | Link | Copyright info |
| --- | --- | --- | --- | --- |
| djinni | 1.3.0 | All | | Copyright (c) 2014 - 2019 Kannan Goundan, Tony Grue, Derek He, Steven Kabbes, Jacob Potter, Iulia Tamas, Andrew Twyman |
| tensorflow lite | 2.19.0 | All | | Copyright (c) Google Inc., Yuan Tang |
| shaderc | 2025.2 | iOS, OSX | | Copyright (c) Google Inc. |
| SPIRV-Cross | 1.4.313 | iOS, OSX | | Copyright (c) 2014-2020 The Khronos Group Inc. |
| draco | 1.5.6 | All | | Copyright (c) Google Inc. and other contributors |
| opencv | 4.11.0 | All | | Copyright (c) 2025, OpenCV team |

## Boost Software License 1.0[​](#boost-software-license-10 "Direct link to Boost Software License 1.0")

> Permission is hereby granted, free of charge, to any person or organization obtaining a copy of the software and accompanying documentation covered by this license (the "Software") to use, reproduce, display, distribute, execute, and transmit the Software, and to prepare derivative works of the Software, and to permit third-parties to whom the Software is furnished to do so, all subject to the following: The copyright notices in the Software and this entire statement, including the above license grant, this restriction and the following disclaimer, must be included in all copies of the Software, in whole or in part, and all derivative works of the Software, unless such copies or derivative works are solely in the form of machine-executable object code generated by a source language processor. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, TITLE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR ANYONE DISTRIBUTING THE SOFTWARE BE LIABLE FOR ANY DAMAGES OR OTHER LIABILITY, WHETHER IN CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

| Name | Version | Platform | Link | Copyright info |
| --- | --- | --- | --- | --- |
| catch2 | 2.5.0 | All | | Copyright © catch2 Authors |

## BSD 2-Clause Simplified[​](#bsd-2-clause-simplified "Direct link to BSD 2-Clause Simplified")

> Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
>
> * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
> * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
>
> THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

| Name | Version | Platform | Link | Copyright info |
| --- | --- | --- | --- | --- |
| cpuinfo | N/A | All | | Copyright (c) 2017-2018 Facebook Inc.; Copyright (C) 2012-2017 Georgia Institute of Technology; Copyright (C) 2010-2012 Marat Dukhan. All rights reserved. |
## GNU Lesser General Public Licence version 2.1 or later[​](#gnu-lesser-general-public-licence-version-21-or-later "Direct link to GNU Lesser General Public Licence version 2.1 or later")

> This library is free software and is governed by GNU Lesser General Public License, version 2.1, available at .

| Name | Version | Platform | Link | Copyright info |
| --- | --- | --- | --- | --- |
| ffmpeg | 7.1.3 | Windows | | Copyright (c) ffmpeg Authors |
| openal soft | 1.18 | Windows | | Copyright (C) 1991 Free Software Foundation, Inc. |

## ISC License[​](#isc-license "Direct link to ISC License")

> Permission to use, copy, modify, and/or distribute this software for any purpose with or without fee is hereby granted, provided that the above copyright notice and this permission notice appear in all copies. THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

| Name | Version | Platform | Link | Copyright info |
| --- | --- | --- | --- | --- |
| detex | 0.1.2alpha2 | All | | Copyright (c) 2015 Harm Hanemaaijer |
| Sodium | 1.0.18 | All | | Copyright (c) Sodium authors |

## MIT License[​](#mit-license "Direct link to MIT License")

> Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

| Name | Version | Platform | Link | Copyright info |
| --- | --- | --- | --- | --- |
| asyncplusplus | 1.2 | All | | Copyright (c) 2015 Amanieu d'Antras |
| imgui | 1.91.9 | Desktop | | Copyright (c) 2014-2025 Omar Cornut |
| jnipp | N/A | Android | | Copyright (c) 2016 Mitchell Dowd |
| json | 3.10.4 | All | | Copyright (c) 2013-2019 Niels Lohmann |
| url-cpp | N/A | All | | Copyright (c) 2016-2017 SEOmoz, Inc. |
| tinygltf | 2.6.3 | All | | Copyright (c) 2017 Syoyo Fujita, Aurélien Chatelain and many contributors |
| NumCpp | 2.14.1 | All | | Copyright (C) 2018-2023 David Pilger |
| quickjs | 2020-11-10 | Android, Windows, Web | | Copyright (c) 2017-2020 Fabrice Bellard and Charlie Gordon |

## Public domain[​](#public-domain "Direct link to Public domain")

> This is free and unencumbered software released into the public domain. Anyone is free to copy, modify, publish, use, compile, sell, or distribute this software, either in source code form or as a compiled binary, for any purpose, commercial or non-commercial, and by any means. The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

| Name | Version | Platform | Link | Copyright info |
| --- | --- | --- | --- | --- |
| drwav | 0.8.5 | All | | Copyright (c) the Dr-wav Authors |
| stb | 2.33 | All | | Copyright (C) stb authors |
| HdrHistogram\_c | 0.11.2 | All | | Copyright (c) Gil Tene, Michael Barker, Matt Warren |

## The Happy Bunny and MIT Licences[​](#the-happy-bunny-and-mit-licences "Direct link to The Happy Bunny and MIT Licences")

> The Happy Bunny Licence: Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. Restrictions: By making use of the Software for military purposes, you choose to make a Bunny unhappy. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. MIT Licence: Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
| Name | Version | Platform | Link | Copyright info |
| --- | --- | --- | --- | --- |
| glm | 0.9.9.8 | All | | Copyright (c) 2005 - 2014 G-Truc Creation |

## zlib License[​](#zlib-license "Direct link to zlib License")

> This software is provided 'as-is', without any express or implied warranty. In no event will the authors be held liable for any damages arising from the use of this software. Permission is granted to anyone to use this software for any purpose, including commercial applications, and to alter it and redistribute it freely, subject to the following restrictions:
>
> 1. The origin of this software must not be misrepresented; you must not claim that you wrote the original software. If you use this software in a product, an acknowledgment in the product documentation would be appreciated but is not required.
> 2. Altered source versions must be plainly marked as such, and must not be misrepresented as being the original software.
> 3. This notice may not be removed or altered from any source distribution.

| Name | Version | Platform | Link | Copyright info |
| --- | --- | --- | --- | --- |
| glfw | 3.4 | All | | Copyright (c) 2002-2006 Marcus Geelnard; Copyright (c) 2006-2016 Camilla Löwy |
| zlib | 1.2.11 | All | | Copyright (C) 1995-2017 Jean-loup Gailly and Mark Adler |
| mikktspace | 1.0 | All | | Copyright (C) 2011 by Morten S. Mikkelsen |

## Mozilla Public License, Version 2.0[​](#mozilla-public-license-version-20 "Direct link to Mozilla Public License, Version 2.0")

> Permissions of this weak copyleft license are conditioned on making available source code of licensed files and modifications of those files under the same license (or in certain cases, one of the GNU licenses). Copyright and license notices must be preserved. Contributors provide an express grant of patent rights. However, a larger work using the licensed work may be distributed under different terms and without source code for files added in the larger work. This Source Code Form is subject to the terms of the Mozilla Public License, v. 2.0. If a copy of the MPL was not distributed with this file, You can obtain one at .

| Name | Version | Platform | Link | Copyright info |
| --- | --- | --- | --- | --- |
| Eigen | 3.4.0 | All | | Copyright (C) 2008 Gael Guennebaud; Copyright (C) 2008 Benoit Jacob |

## The LibYuv Project Authors[​](#the-libyuv-project-authors "Direct link to The LibYuv Project Authors")

> Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. Neither the name of TransGaming Inc., Google Inc., 3DLabs Inc. Ltd., nor the names of their contributors may be used to endorse or promote products derived from this software without specific prior written permission.
> THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

| Name | Version | Platform | Link | Copyright info |
| --- | --- | --- | --- | --- |
| LibYuv | 1789 | All | | Copyright 2011 The LibYuv Project Authors. All rights reserved. |

**Note:** The following libraries are also used in FaceAR SDK: OpenAL on iOS (Apple) and OpenSL on Android (Google).

---

# Demo Face Filters

tip You may also create your own effects with [**Banuba Studio**](https://studio.banuba.com/) or buy some from the [**Banuba Asset Store**](https://assetstore.banuba.net/).

List of Face AR SDK **Demo Face Filters** and the [technologies](/far-sdk/tutorials/capabilities/sdk_features.md) represented by them. Face AR SDK release archives are supplied with a minimal number of built-in face filters (effects); all other demo effects can be downloaded from this page.

| Filters | Technologies represented | Required packages | | --- | --- | --- | | [![0003\_cu\_Spider1\_v3\_b1 icon](/far-sdk/generated/effects/icons/0003_cu_Spider1_v3_b1.png)download](/far-sdk/generated/effects/0003_cu_Spider1_v3_b1.zip) | Animation, Triggers | face\_tracker | | [![ActionunitsGrout icon](/far-sdk/generated/effects/icons/ActionunitsRabbit.png)download](/far-sdk/generated/effects/ActionunitsRabbit.zip) | Action units with blendshapes in the AR 3D Mask | face\_tracker | | [![Afro icon](/far-sdk/generated/effects/icons/afro.png)download](/far-sdk/generated/effects/Afro.zip) | AR 3D Mask | face\_tracker | | [![BG\_Metro icon](/far-sdk/generated/effects/icons/BG_Metro.png)download](/far-sdk/generated/effects/BG_Metro.zip) | Simple effect that allows setting an image as the Background
Face filter performance can be lower on mid-end and low-end devices due to neural network usage. | background | | [![blur\_bg icon](/far-sdk/generated/effects/icons/blur_bg.png)download](/far-sdk/generated/effects/blur_bg.zip) | Background segmentation neural network, Blur
Face filter performance can be lower on mid-end and low-end devices due to neural network usage. | background | | [![bokeh icon](/far-sdk/generated/effects/icons/bokeh.png)download](/far-sdk/generated/effects/test_bokeh.zip) | Background segmentation neural network, Bokeh effect | face\_tracker, background | | [![BulldogHarlamov icon](/far-sdk/generated/effects/icons/BulldogHarlamov.png)download](/far-sdk/generated/effects/BulldogHarlamov.zip) | Transfer of facial expressions to the 3D model, AR 3D Mask | face\_tracker | | [![BurningMan2018 icon](/far-sdk/generated/effects/icons/HawaiiHairFlower.png)download](/far-sdk/generated/effects/HawaiiHairFlower.zip) | Retouch, AR 3D Mask | face\_tracker | | [![CartoonOctopus icon](/far-sdk/generated/effects/icons/CartoonOctopus.png)download](/far-sdk/generated/effects/CartoonOctopus.zip) | Animation, Triggers | face\_tracker | | [![CubemapEverest icon](/far-sdk/generated/effects/icons/CubemapMoon.png)download](/far-sdk/generated/effects/CubemapMoon.zip) | Background segmentation neural network, 3D environment cubemap texture, AR 3D Mask
Face filter performance can be lower on mid-end and low-end devices due to neural network usage. | face\_tracker, background | | [![DebugFRX icon](/far-sdk/generated/effects/icons/DebugWireframe.png)download](/far-sdk/generated/effects/DebugFRX.zip) | Display face recognition wireframe with landmarks | face\_tracker | | [![Glasses icon](/far-sdk/generated/effects/icons/glasses_RayBan4165_Dark.png)download](/far-sdk/generated/effects/glasses_RayBan4165_Dark.zip) | Virtual glasses try-on | face\_tracker | | [![Makeup icon](/far-sdk/generated/effects/icons/Makeup.png)download](/far-sdk/generated/effects/Makeup.zip) | Virtual Makeup API effect. See more in Effect API section.
Face filter performance can be lower on mid-end and low-end devices due to neural network usage. | See comment below the table\* | | [![MonsterFactory icon](/far-sdk/generated/effects/icons/1001Nights_bloom.png)download](/far-sdk/generated/effects/1001Nights_bloom.zip) | AR 3D Mask, Physics | face\_tracker | | [![nn\_api icon](/far-sdk/generated/effects/icons/nn_api.png)download](/far-sdk/generated/effects/nn_api.zip) | Effect with an API for testing multiple neural networks working at the same time: Lips, Hair, Background, and Skin
Face filter performance can be lower on mid-end and low-end devices due to neural network usage. | face\_tracker, lips, skin, background, hair | | [![PineappleGlasses icon](/far-sdk/generated/effects/icons/Pineappleglasses.png)download](/far-sdk/generated/effects/PineappleGlasses.zip) | Background segmentation neural network, Animated texture, AR 3D Mask
Face filter performance can be lower on mid-end and low-end devices due to neural network usage. | face\_tracker, background | | [![PoliceMan icon](/far-sdk/generated/effects/icons/PoliceMan.png)download](/far-sdk/generated/effects/PoliceMan.zip) | Morphing, AR 3D Mask | face\_tracker | | [![Rorschach icon](/far-sdk/generated/effects/icons/Rorschach.png)download](/far-sdk/generated/effects/Rorschach.zip) | Background segmentation neural network, Animated texture, AR 3D Mask
Face filter performance can be lower on mid-end and low-end devices due to neural network usage. | face\_tracker, background | | [![scene\_4\_faces icon](/far-sdk/generated/effects/icons/scene_4_faces.png)download](/far-sdk/generated/effects/scene_4_faces.zip) | Multi-face morphing example | face\_tracker | | [![HairPinkGirl icon](/far-sdk/generated/effects/icons/HairPinkGirl.png)download](/far-sdk/generated/effects/HairPinkGirl.zip) | Animated texture, AR 3D Mask | face\_tracker | | [![test\_BG icon](/far-sdk/generated/effects/icons/test_BG.png)download](/far-sdk/generated/effects/test_BG.zip) | Background segmentation neural network
Face filter performance can be lower on mid-end and low-end devices due to neural network usage. | background | | [![test\_Body icon](/far-sdk/generated/effects/icons/test_Body.png)download](/far-sdk/generated/effects/test_Body.zip) | Full body segmentation neural network
Face filter performance can be lower on mid-end and low-end devices due to neural network usage. | body | | [![test\_Eye\_lenses icon](/far-sdk/generated/effects/icons/test_Eye_lenses.png)download](/far-sdk/generated/effects/test_Eye_lenses.zip) | Virtual eye lenses example
Face filter performance can be lower on mid-end and low-end devices due to neural network usage. | face\_tracker, eyes | | [![test\_Eyelashes icon](/far-sdk/generated/effects/icons/test_Eyelashes.png)download](/far-sdk/generated/effects/test_Eyelashes.zip) | Effect for debugging eyelash tracking | face\_tracker | | [![test\_Eyes icon](/far-sdk/generated/effects/icons/test_Eyes.png)download](/far-sdk/generated/effects/test_Eyes.zip) | Eyes segmentation neural network
Face filter performance can be lower on mid-end and low-end devices due to neural network usage. | face\_tracker, eyes | | [![test\_gestures icon](/far-sdk/generated/effects/icons/test_gestures.png)download](/far-sdk/generated/effects/test_gestures.zip) | Hand gestures detection example | hands | | [![test\_Glasses icon](/far-sdk/generated/effects/icons/test_Glasses.png)download](/far-sdk/generated/effects/test_Glasses.zip) | Glasses detection (neural network approach), AR 3D Mask | face\_tracker, background | | [![test\_Hair icon](/far-sdk/generated/effects/icons/test_Hair.png)download](/far-sdk/generated/effects/test_Hair.zip) | Hair segmentation neural network
Face filter performance can be lower on mid-end and low-end devices due to neural network usage. | hair | | [![test\_Hair\_bound icon](/far-sdk/generated/effects/icons/test_Hair_bound.png)download](/far-sdk/generated/effects/test_Hair_bound.zip) | Hair recoloring in multiple shades, Hair segmentation neural network
Face filter performance can be lower on mid-end and low-end devices due to neural network usage. | face\_tracker, hair | | [![test\_Hair\_strand icon](/far-sdk/generated/effects/icons/test_Hair_strand.png)download](/far-sdk/generated/effects/test_Hair_strand.zip) | Hair strands recoloring
Face filter performance can be lower on mid-end and low-end devices due to neural network usage. | face\_tracker, hair | | [![test\_HandSkelet icon](/far-sdk/generated/effects/icons/test_HandSkelet.png)download](/far-sdk/generated/effects/test_HandSkelet.zip) | Displays hand skeleton model | hands | | [![test\_heart\_rate icon](/far-sdk/generated/effects/icons/test_heart_rate.png)download](/far-sdk/generated/effects/test_heart_rate.zip) | Heart rate measurement (Pulse) technology example | face\_tracker | | [![test\_image\_process\_cartoon icon](/far-sdk/generated/effects/icons/test_image_process_cartoon.png)download](/far-sdk/generated/effects/test_image_process_cartoon.zip) | Shader-based image filter | face\_tracker | | [![test\_Lips icon](/far-sdk/generated/effects/icons/test_Lips.png)download](/far-sdk/generated/effects/test_Lips.zip) | Lips segmentation neural network
Face filter performance can be lower on mid-end and low-end devices due to neural network usage. | face\_tracker, lips | | [![test\_Lips\_glitter icon](/far-sdk/generated/effects/icons/test_Lips_glitter.png)download](/far-sdk/generated/effects/test_Lips_glitter.zip) | Glitter lipstick effect, lips segmentation neural network
Face filter performance can be lower on mid-end and low-end devices due to neural network usage. | face\_tracker, lips | | [![test\_Lips\_shine icon](/far-sdk/generated/effects/icons/test_Lips_shine.png)download](/far-sdk/generated/effects/test_Lips_shine.zip) | Shiny lipstick effect, lips segmentation neural network
Face filter performance can be lower on mid-end and low-end devices due to neural network usage. | face\_tracker, lips | | [![test\_Nails icon](/far-sdk/generated/effects/icons/test_Nails.png)download](/far-sdk/generated/effects/test_Nails.zip) | Virtual nails try-on effect
Face filter performance can be lower on mid-end and low-end devices due to neural network usage. | hands | | [![test\_Ring icon](/far-sdk/generated/effects/icons/test_Ring.png)download](/far-sdk/generated/effects/test_Ring.zip) | AR Ring demo effect
Face filter performance can be lower on mid-end and low-end devices due to neural network usage. | hands | | [![test\_Ruler icon](/far-sdk/generated/effects/icons/test_Ruler.png)download](/far-sdk/generated/effects/test_Ruler.zip) | Face-to-phone distance measurement | face\_tracker | | [![test\_Skin icon](/far-sdk/generated/effects/icons/test_Skin.png)download](/far-sdk/generated/effects/test_Skin.zip) | Skin segmentation neural network
Face filter performance can be lower on mid-end and low-end devices due to neural network usage. | skin |

#### Legend[​](#legend "Direct link to Legend")

* `Online / Offline` - Online face filters can be applied both for real-time work and photos. Offline face filters are designed to work only with photos (offline mode).
* Click on the filter name to download the effect.
* The **Required packages** column lists [the packages](/far-sdk/tutorials/development/installation.md) you must include within the app. For iOS, package names follow this simple rule: *face\_tracker -> BNBFaceTracker*, etc.

\* *Makeup* effect dependencies are configured at runtime according to the features enabled. In case of an error, the log will show which package is missing.

---

# FaceAR Glossary

## AR Technologies[​](#ar-technologies "Direct link to AR Technologies")

### FRX (Face tracking)[​](#frx-face-tracking "Direct link to FRX (Face tracking)")

The technology to detect and track the presence of a human face in a digital video frame to enable FaceAR camera experiences.

### Face match[​](#face-match "Direct link to Face match")

Comparing a face in the image/video to one reference image.

### Face attributes[​](#face-attributes "Direct link to Face attributes")

Detection of multiple facial parameters: hair color, eye color, skin tone, gender, hair & facial hair style, nose & lips shape.

### Face shape[​](#face-shape "Direct link to Face shape")

Face shape detection that can be used for things like personalized recommendations.

### Background separation[​](#background-separation "Direct link to Background separation")

Neural network that separates a user in the foreground from the background in a sequence of video frames to remove the background or replace it with another image. As a standalone feature, background segmentation is used in video conferencing, live streaming, and video communication apps. It can also be used as part of a face filter together with facial animation in entertainment apps.

### Bokeh effect[​](#bokeh-effect "Direct link to Bokeh effect")

The algorithm that separates a person in the foreground and blurs the background, as in photography.

### Skin segmentation[​](#skin-segmentation "Direct link to Skin segmentation")

Neural network trained to recognize and segment the skin area for recoloring tasks, e.g. making the skin tone lighter or darker.

### Skin smoothing[​](#skin-smoothing "Direct link to Skin smoothing")

Neural network trained to segment and blur the skin on the face to create a retouch effect similar to professional photography.

### Neck segmentation and smoothing[​](#neck-segmentation-and-smoothing "Direct link to Neck segmentation and smoothing")

Segmentation of the neck for applying effects like skin smoothing or trying on appropriate items (e.g. scarves).

### Hair Segmentation[​](#hair-segmentation "Direct link to Hair Segmentation")

Neural network to detect and segment the image into hair, face and background to allow real-time hair modification, such as changing the hair color.

### Eye segmentation[​](#eye-segmentation "Direct link to Eye segmentation")

Neural network to detect and segment the eye into iris, pupil and eyeball for eye recoloring and virtual contact lens try-on.

### Eye lenses[​](#eye-lenses "Direct link to Eye lenses")

Applying virtual contact lenses and changing the lens color on physical glasses.

### Pupillary distance[​](#pupillary-distance "Direct link to Pupillary distance")

Measuring the distance between the centers of the person's pupils, a useful parameter for eyewear try-on.
### Brows segmentation[​](#brows-segmentation "Direct link to Brows segmentation")

Segmentation of eyebrows for applying effects like facial feature editing.

### Lips segmentation[​](#lips-segmentation "Direct link to Lips segmentation")

Neural network to detect and segment the lips on the user's face for a virtual lipstick try-on effect.

### Full body segmentation[​](#full-body-segmentation "Direct link to Full body segmentation")

Neural network trained to recognize the human body in full length and separate it from the background in images and videos.

### Acne removal (tap)[​](#acne-removal-tap "Direct link to Acne removal (tap)")

[Acne removal](/far-sdk/effects/guides/feature_params.md#acne-removal) is an algorithm to remove acne in photos with single taps on the selected area.

### Acne removal (auto)[​](#acne-removal-auto "Direct link to Acne removal (auto)")

The algorithm to automatically remove acne in photos.

### Eye bag removal[​](#eye-bag-removal "Direct link to Eye bag removal")

The algorithm segments and lightens the area under the eyes in photos for beautification purposes.

### Hair strands painting[​](#hair-strands-painting "Direct link to Hair strands painting")

The algorithms to change the hair color with several colors applied simultaneously, e.g. for strand highlights or coloring.

### Teeth tone[​](#teeth-tone "Direct link to Teeth tone")

Detection of the person's teeth shade. Can be used for dental bleaching simulation.

## Features[​](#features "Direct link to Features")

### 2+ faces detection (Multi-face)[​](#2-faces-detection-multi-face "Direct link to 2+ faces detection (Multi-face)")

Algorithm allowing AR 3D Masks to be applied to several people simultaneously for more engaging group AR experiences. For a quality user experience, we generally don’t recommend supporting more than 3 faces on mobile devices due to limited computing capabilities.

### Pulse (Heart rate)[​](#pulse-heart-rate "Direct link to Pulse (Heart rate)")

The algorithm analyses fine patterns of the facial areas and their color variations over time to detect the pulse frequency in real time.

### Ruler (Distance to phone)[​](#ruler-distance-to-phone "Direct link to Ruler (Distance to phone)")

The algorithm analyses the face area size to estimate the distance from the user's face to the camera in real time.

### Text Texture (on AR 3D Mask)[​](#text-texture-on-ar-3d-mask "Direct link to Text Texture (on AR 3D Mask)")

Allows writing text as a texture on any 2D or 3D model in the effect.

### AR 3D Mask on a picture from Camera Roll[​](#ar-3d-mask-on-a-picture-from-camera-roll "Direct link to AR 3D Mask on a picture from Camera Roll")

AR 3D Mask application to pre-recorded images the user uploads from the Camera Roll.

### AR 3D Mask on video from Camera Roll[​](#ar-3d-mask-on-video-from-camera-roll "Direct link to AR 3D Mask on video from Camera Roll")

AR 3D Mask application to pre-recorded videos the user uploads from the Camera Roll.

### Post-processing effects[​](#post-processing-effects "Direct link to Post-processing effects")

Graphical camera effects and animations applied to pre-recorded videos.

### Continuous photo editing[​](#continuous-photo-editing "Direct link to Continuous photo editing")

The AR effect is processed on the image in real time, e.g. a beautification slider to control the face modification, or a "Before/After" slider.

### Touches[​](#touches "Direct link to Touches")

Small AR scenarios enabled through user touches on the screen. AR objects or camera effects can change color and behaviour.
Applied in FaceAR games or interactive face filters to increase engagement.

### Trigger[​](#trigger "Direct link to Trigger")

Small AR scenarios enabled through user facial expressions. The user can interact with effects or call them by opening the mouth, smiling, raising eyebrows, or frowning. Applied in FaceAR games or interactive face filters to increase engagement.

### SFX[​](#sfx "Direct link to SFX")

Sound effects support in FaceAR experiences, e.g. adding music to filters.

### Glasses detection[​](#glasses-detection "Direct link to Glasses detection")

The algorithm detects whether the user wears glasses and removes them in a virtual AR 3D Mask. Applied in glasses try-on for convenient frame choice, and in face filters so that the AR glasses do not overlay the real ones.

### Glasses frame color[​](#glasses-frame-color "Direct link to Glasses frame color")

Detection of the color of the glasses the person is wearing.

## Graphical technologies[​](#graphical-technologies "Direct link to Graphical technologies")

### Face beautification[​](#face-beautification "Direct link to Face beautification")

The AR filter based on face tracking which automatically retouches the appearance, applying skin smoothing, morphing, eye makeup, teeth whitening, eye flare, and a LUT effect.

### Morphing[​](#morphing "Direct link to Morphing")

Changing the size and proportions of the face, e.g. slimming down the cheeks or nose, or modifying them for fun AR 3D Masks.

### Skinned mesh animation[​](#skinned-mesh-animation "Direct link to Skinned mesh animation")

AR models are not static: they move, animate, and transform.

### Physically-based rendering[​](#physically-based-rendering "Direct link to Physically-based rendering")

AR models behave like real objects in the flow of real-world light and physics, e.g. they support gravity or reflect light as the camera rotates and the user tilts the device.

### LUT post-processing[​](#lut-post-processing "Direct link to LUT post-processing")

Real-time or offline color correction of pre-recorded images, e.g. Instagram-like filters.

### Texture sequences[​](#texture-sequences "Direct link to Texture sequences")

The digital representation of the surface of an AR object, providing a sophisticated and life-like object representation.

### Video textures[​](#video-textures "Direct link to Video textures")

Infuse a static image with dynamic qualities and explicit action to achieve an enhanced look and feel of the FaceAR video experience.

### Action units[​](#action-units "Direct link to Action units")

The fundamental actions of individual muscles or groups of muscles of the face that enable the AR 3D Mask to follow user facial expressions, e.g. in emojis, avatars, or full-face AR 3D Masks.

### Lips shine, gloss, chameleon[​](#lips-shine-gloss-chameleon "Direct link to Lips shine, gloss, chameleon")

Effects simulating lipstick/lip gloss in several texture types: shine, gloss, and chameleon respectively.

### Eyelids gloss, chameleon[​](#eyelids-gloss-chameleon "Direct link to Eyelids gloss, chameleon")

Effects simulating eyeshadow in several texture types: shine, gloss, and chameleon.
### Light correction[​](#light-correction "Direct link to Light correction") Automaatic correction of lighting to make the image/video feed look more aesthetically pleasing --- # SDK Features ## Face AR SDK[​](#face-ar-sdk "Direct link to Face AR SDK") | | ![Icon](/far-sdk/img/icons/ios.svg "iOS") | ![Icon](/far-sdk/img/icons/android.svg "Android") | ![Icon](/far-sdk/img/icons/apple.svg "MacOS") | ![Icon](/far-sdk/img/icons/windows.svg "Windows") | ![Icon](/far-sdk/img/icons/html5.svg "Web") | | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | :------------------------------------------------------------------: | :------------------------------------------------------------------: | :------------------------------------------------------------------: | :------------------------------------------------------------------: | :------------------------------------------------------------------: | | [Single Face Tracking](/far-sdk/tutorials/capabilities/glossary.md#frx-face-tracking)CPUGPU[![video icon](/far-sdk/img/icons/video.svg)](https://www.youtube.com/watch?v=Of7-xNDYknY) | ![\`${props.title} icon\`](/far-sdk/img/icons/check.svg "Supported") | ![\`${props.title} icon\`](/far-sdk/img/icons/check.svg "Supported") | ![\`${props.title} icon\`](/far-sdk/img/icons/check.svg "Supported") | ![\`${props.title} icon\`](/far-sdk/img/icons/check.svg "Supported") | ![\`${props.title} icon\`](/far-sdk/img/icons/check.svg "Supported") | | [Multi-Face Tracking](/far-sdk/tutorials/capabilities/glossary.md#frx-face-tracking)CPUGPU[![video icon](/far-sdk/img/icons/video.svg)](https://www.youtube.com/watch?v=IE4fC4gSWnA)[![video icon](/far-sdk/img/icons/video.svg)](https://www.youtube.com/watch?v=dJ7NBMzlAt8) | ![\`${props.title} icon\`](/far-sdk/img/icons/check.svg "Supported") | ![\`${props.title} icon\`](/far-sdk/img/icons/check.svg "Supported") | ![\`${props.title} icon\`](/far-sdk/img/icons/check.svg "Supported") | ![\`${props.title} icon\`](/far-sdk/img/icons/check.svg "Supported") | ![\`${props.title} icon\`](/far-sdk/img/icons/check.svg "Supported") | | [Makeup](/far-sdk/tutorials/capabilities/glossary.md#face-beautification)CPUGPU[![video icon](/far-sdk/img/icons/video.svg)](https://www.youtube.com/watch?v=ZGA-_oq9E2Q) | ![\`${props.title} icon\`](/far-sdk/img/icons/check.svg "Supported") | ![\`${props.title} icon\`](/far-sdk/img/icons/check.svg "Supported") | ![\`${props.title} icon\`](/far-sdk/img/icons/check.svg "Supported") | ![\`${props.title} icon\`](/far-sdk/img/icons/check.svg "Supported") | ![\`${props.title} icon\`](/far-sdk/img/icons/check.svg "Supported") | | [Mask on picture from Camera Roll (pre-recorded picture)](/far-sdk/tutorials/capabilities/glossary.md#ar-3d-mask-on-a-picture-from-camera-roll)CPUGPU[![video icon](/far-sdk/img/icons/video.svg)](https://www.youtube.com/watch?v=8XBdbmp8nSo) | ![\`${props.title} icon\`](/far-sdk/img/icons/check.svg "Supported") | ![\`${props.title} icon\`](/far-sdk/img/icons/check.svg "Supported") | ![\`${props.title} icon\`](/far-sdk/img/icons/check.svg "Supported") | ![\`${props.title} icon\`](/far-sdk/img/icons/check.svg "Supported") | ![\`${props.title} icon\`](/far-sdk/img/icons/check.svg "Supported") | | [Mask on video from Camera Roll (pre-recorded 
## Face AR SDK Neural Network Features[​](#face-ar-sdk-neural-network-features "Direct link to Face AR SDK Neural Network Features")

| | ![Icon](/far-sdk/img/icons/ios.svg "iOS") | ![Icon](/far-sdk/img/icons/android.svg "Android") | ![Icon](/far-sdk/img/icons/apple.svg "MacOS") | ![Icon](/far-sdk/img/icons/windows.svg "Windows") | ![Icon](/far-sdk/img/icons/html5.svg "Web") |
| --- | :---: | :---: | :---: | :---: | :---: |
| [Background separation](/far-sdk/tutorials/capabilities/glossary.md#background-separation) GPU [video](https://www.youtube.com/watch?v=aAlsELbPTX0) | ✅ | ✅ | ✅ | ✅ | ✅ |
| [Full body segmentation](/far-sdk/tutorials/capabilities/glossary.md#full-body-segmentation) GPU [video](https://www.youtube.com/watch?v=RWq5UyMAqT0) | ✅ | ✅ | ✅ | ✅ | ✅ |
| [Skin segmentation](/far-sdk/tutorials/capabilities/glossary.md#skin-segmentation) GPU [video](https://www.youtube.com/watch?v=CeLGmY9w2Kg) | ✅ | ✅ | ✅ | ✅ | ✅ |
| [Hair segmentation](/far-sdk/tutorials/capabilities/glossary.md#hair-segmentation) GPU [video](https://www.youtube.com/watch?v=WvGeyA3FYS4) | ✅ | ✅ | ✅ | ✅ | ✅ |
| [Eye segmentation (3 layers)](/far-sdk/tutorials/capabilities/glossary.md#eye-segmentation) GPU [video](https://www.youtube.com/watch?v=UZVM-sHbWfY) | ✅ | ✅ | ✅ | ✅ | ✅ |
| [Lips segmentation](/far-sdk/tutorials/capabilities/glossary.md#lips-segmentation) GPU [video](https://www.youtube.com/watch?v=K2BSZotVM7U) | ✅ | ✅ | ✅ | ✅ | ✅ |
| [Acne Removal (manual)](/far-sdk/tutorials/capabilities/glossary.md#acne-removal-tap) GPU [video](https://www.youtube.com/watch?v=gqcaDDVheNU) | ✅ | ✗ | ✅ | ✗ | ✗ |
| [Acne Removal (auto)](/far-sdk/tutorials/capabilities/glossary.md#acne-removal-auto) CPU, GPU [video](https://www.youtube.com/watch?v=jkUDBhnhYpg) | ✅ | ✗ | ✅ | ✗ | ✗ |
| [Hair strands painting](/far-sdk/tutorials/capabilities/glossary.md#hair-strands-painting) CPU, GPU [video](https://www.youtube.com/watch?v=MmZdVQSqa58) | ✅ | ✗ | ✅ | ✗ | ✗ |
| [Eye bags removal](/far-sdk/tutorials/capabilities/glossary.md#eye-bag-removal) CPU, GPU [video](https://www.youtube.com/watch?v=Gupcf1ExJYM) | ✅ | ✗ | ✅ | ✗ | ✗ |
| [Brows segmentation](/far-sdk/tutorials/capabilities/glossary.md#brows-segmentation) GPU | ✅ | ✅ | ✅ | ✅ | ✅ |
| [Glasses detection](/far-sdk/tutorials/capabilities/glossary.md#glasses-detection) | ✅ | ✅ | ✅ | ✅ | ✅ |
| [Neck segmentation and smoothing](/far-sdk/tutorials/capabilities/glossary.md#neck-segmentation-and-smoothing) CPU, GPU | ✅ | ✅ | ✅ | ✅ | ✅ |
## Rendering Engine[​](#rendering-engine "Direct link to Rendering Engine")

| | ![Icon](/far-sdk/img/icons/ios.svg "iOS") | ![Icon](/far-sdk/img/icons/android.svg "Android") | ![Icon](/far-sdk/img/icons/apple.svg "MacOS") | ![Icon](/far-sdk/img/icons/windows.svg "Windows") | ![Icon](/far-sdk/img/icons/html5.svg "Web") |
| --- | :---: | :---: | :---: | :---: | :---: |
| 3d modeling & animation GPU | ✅ | ✅ | ✅ | ✅ | ✅ |
| Image-based lighting GPU | ✅ | ✅ | ✅ | ✅ | ✅ |
| [Physically based rendering](/far-sdk/tutorials/capabilities/glossary.md#physically-based-rendering) GPU [video](https://www.youtube.com/watch?v=J6XTFaJL7wc) | ✅ | ✅ | ✅ | ✅ | ✅ |
| [Skinned Mesh Animations](/far-sdk/tutorials/capabilities/glossary.md#skinned-mesh-animation) GPU [video](https://www.youtube.com/watch?v=j5jnWkwHIVM) | ✅ | ✅ | ✅ | ✅ | ✅ |
| [Face Morphing](/far-sdk/tutorials/capabilities/glossary.md#morphing) GPU [video](https://www.youtube.com/watch?v=sw8sU2zcD_8) | ✅ | ✅ | ✅ | ✅ | ✅ |
| High dynamic range imaging (HDRI) GPU | ✅ | ✅ | ✅ | ✅ | ✅ |
| [Video textures](/far-sdk/tutorials/capabilities/glossary.md#video-textures) GPU [video](https://www.youtube.com/watch?v=20EVsXlCGss) | ✅ | ✅ | ✅ | ✅ | ✅ |
| Multisample anti-aliasing GPU | ✅ | ✅ | ✅ | ✅ | ✅ |
| Sprite animation GPU | ✅ | ✅ | ✅ | ✅ | ✅ |
| [Lookup Tables (LUT)](/far-sdk/tutorials/capabilities/glossary.md#lut-post-processing) GPU [video](https://www.youtube.com/watch?v=vMznyB7eyCI) | ✅ | ✅ | ✅ | ✅ | ✅ |
| [Texture sequences](/far-sdk/tutorials/capabilities/glossary.md#texture-sequences) GPU [video](https://www.youtube.com/watch?v=62GXnNyLypg) | ✅ | ✅ | ✅ | ✅ | ✅ |
| [Lips shine, gloss, chameleon](/far-sdk/tutorials/capabilities/glossary.md#lips-shine-gloss-chameleon) GPU | ✅ | ✅ | ✅ | ✅ | ✅ |
| [Eyelids gloss, chameleon](/far-sdk/tutorials/capabilities/glossary.md#eyelids-gloss-chameleon) GPU | ✅ | ✅ | ✅ | ✅ | ✅ |
| [Light correction](/far-sdk/tutorials/capabilities/glossary.md#light-correction) CPU, GPU | ✅ | ✅ | ✅ | ✅ | ✅ |
## Other Features[​](#other-features "Direct link to Other Features")

| | ![Icon](/far-sdk/img/icons/ios.svg "iOS") | ![Icon](/far-sdk/img/icons/android.svg "Android") | ![Icon](/far-sdk/img/icons/apple.svg "MacOS") | ![Icon](/far-sdk/img/icons/windows.svg "Windows") | ![Icon](/far-sdk/img/icons/html5.svg "Web") |
| --- | :---: | :---: | :---: | :---: | :---: |
| [Face Filter Text Overlays (Text Texture)](/far-sdk/tutorials/capabilities/glossary.md#text-texture-on-ar-3d-mask) CPU [video](https://www.youtube.com/watch?v=SGIRxCcrvMo) | ✅ | ✅ | ✅ | ✅ | ✅ |
| [Interactive Triggers](/far-sdk/tutorials/capabilities/glossary.md#trigger) CPU [video](https://www.youtube.com/watch?v=7Y9nMgDzpbI) | ✅ | ✅ | ✅ | ✅ | ✅ |
| [Interactive Touch](/far-sdk/tutorials/capabilities/glossary.md#touches) CPU [video](https://www.youtube.com/watch?v=6vRAuWvmlFw) | ✅ | ✅ | ✅ | ✅ | ✅ |
| [Face-To-Phone Distance (Ruler)](/far-sdk/tutorials/capabilities/glossary.md#ruler-distance-to-phone) CPU [video](https://www.youtube.com/watch?v=4HO38U4C6HI) | ✅ | ✅ | ✅ | ✅ | ✅ |
| [Heart Rate detector](/far-sdk/tutorials/capabilities/glossary.md#pulse-heart-rate) CPU [video](https://www.youtube.com/watch?v=FpFU8YhIXHI) | ✅ | ✅ | ✅ | ✅ | ✅ |
| Avatars CPU, GPU [video](https://www.youtube.com/watch?v=oY69-kG4jZ0) | ✅ | ✅ | ✅ | ✅ | ✅ |
| [Eye lenses](/far-sdk/tutorials/capabilities/glossary.md#eye-lenses) CPU | ✅ | ✅ | ✅ | ✅ | ✅ |
| [Teeth tone](/far-sdk/tutorials/capabilities/glossary.md#teeth-tone) CPU | ✅ | ✅ | ✅ | ✅ | ✅ |
| [Glasses frame color](/far-sdk/tutorials/capabilities/glossary.md#glasses-frame-color) CPU, GPU | ✅ | ✅ | ✅ | ✅ | ✅ |
## Hand AR SDK[​](#hand-ar-sdk "Direct link to Hand AR SDK")

Comes as part of the Face AR bundle.

| | ![Icon](/far-sdk/img/icons/ios.svg "iOS") | ![Icon](/far-sdk/img/icons/android.svg "Android") | ![Icon](/far-sdk/img/icons/apple.svg "MacOS") | ![Icon](/far-sdk/img/icons/windows.svg "Windows") | ![Icon](/far-sdk/img/icons/html5.svg "Web") |
| --- | :---: | :---: | :---: | :---: | :---: |
| Hand AR: Nails CPU, GPU [video](https://drive.google.com/file/d/1ch-dXgcN-arxZVYzCkbiIsctme1tsCde/preview) | ✅ | ✅ | ✅ | ✅ | ✅ |
| Hand AR: Hand Gestures CPU [video](https://drive.google.com/file/d/1ch-dXgcN-arxZVYzCkbiIsctme1tsCde/preview) | ✅ | ✅ | ✅ | ✅ | ✅ |
| Hand AR: Hand Skeleton CPU [video](https://drive.google.com/file/d/1ch-dXgcN-arxZVYzCkbiIsctme1tsCde/preview) | ✅ | ✅ | ✅ | ✅ | ✅ |
| Hand AR: Rings CPU [video](https://drive.google.com/file/d/1ch-dXgcN-arxZVYzCkbiIsctme1tsCde/preview) | ✅ | ✅ | ✅ | ✅ | ✅ |
| Hand AR: Watches CPU [video](https://drive.google.com/file/d/1ch-dXgcN-arxZVYzCkbiIsctme1tsCde/preview) | ✅ | ✅ | ✅ | ✅ | ✅ |

#### Legend[​](#legend "Direct link to Legend")

* ✅ - this technology fully supports the platform and has proper test coverage.
* ✗ - this technology doesn't support the platform.
* CPU, GPU - the badge shows the processing unit used by this technology.
* [video](#legend) - a link to a demonstration video is available for this technology.
* [Info](#legend) - a link to an article describing the work of this technology.

---

# System Requirements

## Supported Platforms[​](#supported-platforms "Direct link to Supported Platforms")

### Mobile

![Icon](/far-sdk/img/icons/ios.svg "iOS") ![Icon](/far-sdk/img/icons/android.svg "Android") ![Icon](/far-sdk/img/icons/html5.svg "Web")

### Requirements

* **Any platform:** OpenGL ES 3.0 and higher
* **Android:** supported from Android 8.0 (API level 26)
* **iOS:** supported from iOS 13.0

### Desktop

![Icon](/far-sdk/img/icons/apple.svg "MacOS") ![Icon](/far-sdk/img/icons/windows.svg "Windows") ![Icon](/far-sdk/img/icons/html5.svg "Web")

### Requirements

PC:

* Supports OpenGL 4.3 and up (4.1 for MacOS)
* Windows 8.1 and up
* MacOS 10.13 and up

***

## Supported Browsers[​](#supported-browsers "Direct link to Supported Browsers")

### Mobile

![Icon](/far-sdk/img/icons/chrome.svg "Chrome") ![Icon](/far-sdk/img/icons/firefox.svg "Firefox") ![Icon](/far-sdk/img/icons/safari.svg "Safari")

### Desktop

![Icon](/far-sdk/img/icons/chrome.svg "Chrome") ![Icon](/far-sdk/img/icons/firefox.svg "Firefox") ![Icon](/far-sdk/img/icons/safari.svg "Safari")

### Requirements[​](#requirements "Direct link to Requirements")

* Banuba Web SDK supports any browser with **WebGL 2.0 and higher**. All supported browsers are listed [here](https://caniuse.com/webgl2).
***

## Supported languages[​](#supported-languages "Direct link to Supported languages")

| Languages | Platforms |
| --------- | --------- |
| ![Icon](/far-sdk/img/icons/objc.svg) Objective-C, ![Icon](/far-sdk/img/icons/swift.svg) Swift | MacOS |
| ![Icon](/far-sdk/img/icons/kotlin.svg) Kotlin, ![Icon](/far-sdk/img/icons/java.svg) Java | Android |
| ![Icon](/far-sdk/img/icons/cpp.svg) C++ | Desktop |
| ![Icon](/far-sdk/img/icons/csharp.svg) C# | Unity |
| ![Icon](/far-sdk/img/icons/javascript.svg) JavaScript\* | Web |

\* Types for TypeScript are also provided.

---

# Technical Specification

This page provides technical metrics of the Face AR SDK feature performance. The values below are for your reference, as they were achieved under fixed lab conditions. Many factors can influence performance, including the state of the specific device, other apps running in the background, Wi-Fi being enabled, etc. We encourage you to test each feature within your environment. Please visit the [SDK Features](/far-sdk/tutorials/capabilities/sdk_features.md) page for more information on feature availability on different platforms.

note

* **FPS** — Frames per second of the face detection algorithm on a given device.
* **Angles** — The maximum angle at which the technology was able to work during the measurement.
* **Distance** — The maximum distance at which the technology was able to work during the measurement.
* **Real-time (online)** — Technology performance in real time.
* **Photo (offline)** — The processing time needed to take a photo or process it from the gallery.

## SDK Features[​](#sdk-features "Direct link to SDK Features")

### Single-face Tracking[​](#single-face-tracking "Direct link to Single-face Tracking")

**Android**

| | Android Low | Android High |
| -------------- | ----------- | ------------ |
| FPS | 25 | 30 |
| Angles, degree | 80 | 80 |
| Distance, cm | 170 | 180 |

**iOS**

| | iOS Mid | iOS High |
| -------------- | ------- | -------- |
| FPS | 30 | 30 |
| Angles, degree | 80 | 80 |
| Distance, cm | 230 | 230 |

### Multi-face Tracking[​](#multi-face-tracking "Direct link to Multi-face Tracking")

**Android**

| | Android Low | Android High |
| ------------ | ----------- | ------------ |
| Max Faces | 5 | 5 |
| 4 Faces, FPS | 23 | 28 |
| 5 Faces, FPS | 22 | 27 |

**iOS**

| | iOS Mid | iOS High |
| ------------ | ------- | -------- |
| Max Faces | 5 | 5 |
| 4 Faces, FPS | 30 | 30 |
| 5 Faces, FPS | 30 | 30 |

Max Faces\* — the maximum number of faces that the SDK can track with acceptable quality and performance on most mobile devices. The actual peak number is limited only by the physical capabilities of the device and its screen proportions.

## Effect performance[​](#effect-performance "Direct link to Effect performance")

Banuba SDK allows for a variety of Face AR effects. Some of them only require face tracking and can be represented as a single AR 3D Mask with textures and materials. Other effects are implemented with separately trained neural networks. Below, you may find information on the real-time performance of Face AR effects which only require face tracking, i.e. face filters, avatars with action units, beautification, and the makeup filter (without lipstick).

**Android**

| | Android Low | Android High |
| --- | ----------- | ------------ |
| FPS | 25 | 29 |

**iOS**

| | iOS Mid | iOS High |
| --- | ------- | -------- |
| FPS | 30 | 30 |

## Beautification[​](#beautification "Direct link to Beautification")

### Beautification filter[​](#beautification-filter "Direct link to Beautification filter")

The basic face beautification filter includes skin smoothing, morphing, teeth whitening, eye flare and LUT. It only requires face tracking, so please refer to the [Effect performance](#effect-performance) section.
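The FPS numbers in this document were measured under fixed lab conditions. When re-testing in your own environment, a plain `requestAnimationFrame` counter is a quick way to approximate the effective frame rate of a web integration. This sketch measures your page's render loop rather than the face tracking algorithm itself, so treat it as an upper-bound estimate:

```typescript
// Rough FPS meter: counts rendered frames over one-second windows and
// reports the average. Uses only standard browser APIs.
function startFpsMeter(report: (fps: number) => void): () => void {
  let frames = 0;
  let windowStart = performance.now();
  let handle = 0;

  const tick = (now: number) => {
    frames += 1;
    if (now - windowStart >= 1000) {
      report((frames * 1000) / (now - windowStart));
      frames = 0;
      windowStart = now;
    }
    handle = requestAnimationFrame(tick);
  };
  handle = requestAnimationFrame(tick);

  // The returned function stops the meter.
  return () => cancelAnimationFrame(handle);
}

// Usage: log the frame rate once per second while an effect is running.
const stopMeter = startFpsMeter((fps) => console.log(`~${fps.toFixed(1)} FPS`));
```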
## Makeup[​](#makeup "Direct link to Makeup")

The Makeup filter allows for a realistic try-on of foundation, eyeshadow, eyeliner, highlighter, contour, and blusher. It only requires face tracking, so please refer to the [Effect performance](#effect-performance) section. The lipstick try-on requires the lips segmentation neural network, with a separate algorithm for the Lips Shine effect.

### Lips coloring[​](#lips-coloring "Direct link to Lips coloring")

**Android**

| | Android Low | Android High |
| -------------- | ----------- | ------------ |
| Real-time, FPS | 25 | 30 |
| Photo, sec | 1 | < 1 |

**iOS**

| | iOS Mid | iOS High |
| -------------- | ------- | -------- |
| Real-time, FPS | 30 | 30 |
| Photo, sec | < 1 | < 1 |

**Web**

| | Real-time, FPS |
| ------ | -------------- |
| Chrome | 30 |
| Safari | 26 |

### Lips Shine (Glossy lipstick)[​](#lips-shine-glossy-lipstick "Direct link to Lips Shine (Glossy lipstick)")

**Android**

| | Android Low | Android High |
| -------------- | ----------- | ------------ |
| Real-time, FPS | 25 | 30 |
| Photo, sec | 1 | < 1 |

**iOS**

| | iOS Mid | iOS High |
| -------------- | ------- | -------- |
| Real-time, FPS | 30 | 30 |
| Photo, sec | < 1 | < 1 |

**Web**

| | Real-time, FPS |
| ------ | -------------- |
| Chrome | 30 |
| Safari | 25 |

## Background separation[​](#background-separation "Direct link to Background separation")

**Android**

| | Android Low | Android High |
| -------------- | ----------- | ------------ |
| Real-time, FPS | 25 | 30 |
| Photo, sec | 1 | < 1 |

**iOS**

| | iOS Mid | iOS High |
| -------------- | ------- | -------- |
| Real-time, FPS | 30 | 30 |
| Photo, sec | < 1 | < 1 |

### Distance[​](#distance "Direct link to Distance")

| Device | Distance, cm |
| ------------ | --------------------------------- |
| iOS High | Portrait 280 cm, Landscape 360 cm |
| iOS Low | Portrait 280 cm, Landscape 360 cm |
| Android High | Portrait 310 cm, Landscape 370 cm |
| Mac Mid | 330 cm |
### Image formats support[​](#image-formats-support "Direct link to Image formats support")

Currently, the following image formats are supported as a background texture: `.jpeg`, `.jpg`, `.png`, `.ktx`, `.gif`.

### Video formats support[​](#video-formats-support "Direct link to Video formats support")

Used as a part of an animated background.

| Video format | MacOS\* | iOS | Android\*\* | Windows |
| ------------ | ------- | --- | ----------- | ------- |
| .mp4 | ✅ | ✅ | ✅ | ✅ |
| .avi | ✅ | ❌ | ❌ | ✅ |
| .flv | ✅ | ❌ | ❌ | ✅ |
| .mkv | ✅ | ❌ | ✅ | ✅ |
| .mov | ✅ | ✅ | ❌ | ✅ |
| .mts | ✅ | ❌ | ❌ | ✅ |
| .webm | ✅ | ❌ | ✅ | ✅ |
| .wmv | ✅ | ❌ | ❌ | ✅ |

\* MacOS is rather sensitive not only to containers (i.e. file extensions) but to the video codecs themselves. In case of problems, check the application log for the corresponding error messages and test carefully before release.

\*\* See more information about supported video formats on Android in the official Android developers guide.

## Hair segmentation[​](#hair-segmentation "Direct link to Hair segmentation")

### Hair Recoloring[​](#hair-recoloring "Direct link to Hair Recoloring")

**Android**

| | Android Low | Android High |
| -------------- | ----------- | ------------ |
| Real-time, FPS | 25 | 30 |
| Photo, sec | 1 | < 1 |

**iOS**

| | iOS Mid | iOS High |
| -------------- | ------- | -------- |
| Real-time, FPS | 30 | 30 |
| Photo, sec | < 1 | < 1 |

## Skin segmentation[​](#skin-segmentation "Direct link to Skin segmentation")

**Android**

| | Android Low | Android High |
| -------------- | ----------- | ------------ |
| Real-time, FPS | 25 | 30 |
| Photo, sec | < 1 | < 1 |

**iOS**

| | iOS Mid | iOS High |
| -------------- | ------- | -------- |
| Real-time, FPS | 30 | 30 |
| Photo, sec | < 1 | < 1 |

## Eyes recoloring[​](#eyes-recoloring "Direct link to Eyes recoloring")

**Android**

| | Android Low | Android High |
| -------------- | ----------- | ------------ |
| Real-time, FPS | 25 | 30 |
| Photo, sec | < 1 | < 1 |

**iOS**

| | iOS Mid | iOS High |
| -------------- | ------- | -------- |
| Real-time, FPS | 30 | 30 |
| Photo, sec | < 1 | < 1 |

## Hand gestures[​](#hand-gestures "Direct link to Hand gestures")

**Basic information**

* Supported gestures:
  * Palm ✋
  * Victory ✌️
  * Rock 🤘
  * Like 👍
  * Ok 👌
* Maximum distance — 2.5 m

**iOS**

| Device | Realtime FPS |
| ------ | ------------ |
| Mid | 30 |
| High | 30 |

**Android**

| Device | Realtime FPS |
| ------ | ------------ |
| Low | 30 |
| High | 30 |
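To give an idea of how the gesture set above is typically consumed, the sketch below dispatches on a detected gesture name. The `GestureSource` interface and `onGestureDetected` callback are hypothetical placeholders, not the actual Banuba API; see the Hand gestures guide for the real per-platform entry points.

```typescript
// The gesture set supported by the SDK, as listed above.
type Gesture = "Palm" | "Victory" | "Rock" | "Like" | "Ok";

// Hypothetical subscription interface standing in for the platform API.
interface GestureSource {
  onGestureDetected(handler: (gesture: Gesture) => void): void;
}

// Reacting to gestures, e.g. to drive an interactive filter.
function attachGestureReactions(source: GestureSource): void {
  source.onGestureDetected((gesture) => {
    switch (gesture) {
      case "Like":
        console.log("Thumbs up detected: start the confetti effect");
        break;
      case "Victory":
        console.log("Victory detected: take a snapshot");
        break;
      default:
        console.log(`Detected gesture: ${gesture}`);
    }
  });
}
```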
---

# Token Management

To use our SDK in your project, you need to have an SDK token. The FAQ below explains our token management process and guides you on how to store and update tokens after their expiration. Please read all the information carefully.

## What is an SDK token?[​](#what-is-an-sdk-token "Direct link to What is an SDK token?")

An SDK token is an automatically generated set of characters in .txt format unique to each client. It activates the licensed SDK functionality in the client app. There are two types of tokens:

* A demo token is provided to start the SDK trial. It's *valid for 14 days*, our standard trial period. The token activates all SDK features for you to assess the SDK performance in your project.
* A commercial token is provided after you make a payment. It's *valid throughout the prepaid period*. The token activates the SDK features defined by your software licence.

note

Once the token expires, the Banuba watermark and screen blur will appear automatically in your app.

Therefore, please:

* Don't use demo tokens in live apps.
* Keep your SDK license payments current and renew the commercial token on time.

## Why do we use tokens?[​](#why-do-we-use-tokens "Direct link to Why do we use tokens?")

The token system helps us manage billing and protects our software from fraud and inappropriate usage that violates the SDK licensing terms.

## How do I get one?[​](#how-do-i-get-one "Direct link to How do I get one?")

* To get the demo token and start the SDK trial, please contact your sales manager or [send us a request](https://www.banuba.com/facear-sdk/face-filters#form) via the website form.
* The commercial token is generated and sent out by your account manager who guides your project within our company.

## How does it work?[​](#how-does-it-work "Direct link to How does it work?")

**Token storage**

We recommend storing your token on the server, as this drastically speeds up token renewal.

caution

If you store the token in the app, you will have to upload a new version of the app to the App Store or Play Market after you renew the token.

**Expiry and renewal**

Tokens are valid only throughout the predefined period of time. For one month after the expiration date, you can still use the SDK in your app, but it will display the Banuba watermark. After that, the SDK will stop working entirely, though the app will otherwise work normally. Extending the existing token's life is impossible; you will have to receive a new one. See the explanation of token expiration below:

| Token expired? | What happens with Face AR SDK |
| --------------- | -------------------------------- |
| No | Works as expected |
| Yes, <= 1 month | Works but watermark is displayed |
| Yes, > 1 month | Functionality won't work |

To restore access, you need to request a new token and renew it in your app.

* **Demo tokens** may be renewed per the client's request in case the client hasn't had enough time to evaluate the SDK.
* **Commercial tokens** are renewed by an account manager only after the client's pre-payment for an agreed period.

**Other questions left?**

Please read all the information carefully and feel free to [contact us](/far-sdk/support/.md) if you have more questions.
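To illustrate the server-side storage recommendation above, here is a minimal sketch of fetching the current token from your own backend at startup. The endpoint URL, the plain-text response format, and the `initializeBanubaSdk` function are assumptions for illustration; pass the token to the actual SDK entry point of your platform as shown in the Getting Started guide.

```typescript
// Hypothetical backend endpoint that serves the current SDK token.
// Keeping the token server-side lets you renew it without publishing
// a new app version.
const TOKEN_ENDPOINT = "https://api.example.com/banuba-token";

async function fetchSdkToken(): Promise<string> {
  const response = await fetch(TOKEN_ENDPOINT);
  if (!response.ok) {
    throw new Error(`Token request failed with status ${response.status}`);
  }
  // Assumes the endpoint returns the token as plain text.
  return (await response.text()).trim();
}

// Hypothetical stand-in for the platform-specific SDK initialization.
declare function initializeBanubaSdk(token: string): void;

async function startSdk(): Promise<void> {
  const token = await fetchSdkToken();
  initializeBanubaSdk(token); // replace with the real entry point
}
```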
--- # Changelog ## \[1.18.0] - 2025-03-09[​](#1180---2025-03-09 "Direct link to \[1.18.0] - 2025-03-09") **Changed** * Upgraded face tracking algorithm * Improved glasses lens segmentation * Improved glasses frame color detection * Improved skin tone detection * Improved 3d objects positioning on the face **Fixed** * Other improvements and performance enhancements ## \[1.17.7] - 2025-12-19[​](#1177---2025-12-19 "Direct link to \[1.17.7] - 2025-12-19") **Added** * Glitter effect for makeup * Glasses detection algorithm * Glasses lens segmentation algorithm * Glasses frame color detection algorithm **Changed** * Virtual Background segmentation improvements * Face shape detection algorithm improvements ## \[1.17.6] - 2025-09-29[​](#1176---2025-09-29 "Direct link to \[1.17.6] - 2025-09-29") **Added** * Face shape detection **Changed** * Enhancements for various platforms and effects ## \[1.17.5] - 2025-08-29[​](#1175---2025-08-29 "Direct link to \[1.17.5] - 2025-08-29") **Added** * Gloss and chameleon color effects for eyeshadows * Numerous enhancements for various platforms and effects **Changed** * Improved virtual background segmentation ## \[1.17.4] - 2025-07-18[​](#1174---2025-07-18 "Direct link to \[1.17.4] - 2025-07-18") **Added** * Gloss and chameleon color effects for makeup * Numerous enhancements for various platforms and effects **Changed** * Improved Gender Recognition model ## \[1.17.3] - 2025-06-06[​](#1173---2025-06-06 "Direct link to \[1.17.3] - 2025-06-06") **Added** * Lips Glare effect for Windows, OSX and Emscripten **Fixed** * Bug fixes and improvements for various platforms ## \[1.17.2] - 2025-05-23[​](#1172---2025-05-23 "Direct link to \[1.17.2] - 2025-05-23") **Added** * "Weatherman" mode * Support for video textures in effects **Changed** * Improved background segmentation * Improved hair segmentation **Fixed** * Bug fixes and improvements for various platforms ## \[1.17.1] - 2025-04-16[​](#1171---2025-04-16 "Direct link to \[1.17.1] - 2025-04-16") **Added** * Experimental feature "Weatherman" **Changed** * Resource linking ## \[1.17.0] - 2025-03-20[​](#1170---2025-03-20 "Direct link to \[1.17.0] - 2025-03-20") **Added** * Light source detection & correction **Changed** * Major performance optimization * Optimized performance of teeth whitening algorithm * Updated face tracking * Improved stability of face description model **Fixed** * Bug fixes and improvements for various platforms ## \[1.16.4] - 2025-01-28[​](#1164---2025-01-28 "Direct link to \[1.16.4] - 2025-01-28") **Added** * Hair segmentation mask stabilization **Fixed** * Bug fixes and improvements for various platforms ## \[1.16.3] - 2024-12-17[​](#1163---2024-12-17 "Direct link to \[1.16.3] - 2024-12-17") **Fixed** * Performance meter crash ## \[1.16.2] - 2024-12-12[​](#1162---2024-12-12 "Direct link to \[1.16.2] - 2024-12-12") **Added** * Light conversion effect **Changed** * Numerous documentation improvements * Switch to FFMPEG 4.10.2 for Windows **Fixed** * Functioning of Biometric Match feature * Scripting memory leak for iOS and macOS ## \[1.16.1] - 2024-11-29[​](#1161---2024-11-29 "Direct link to \[1.16.1] - 2024-11-29") **Changed** * Improved VITA Shade tone detection and teeth whitening **Fixed** * Bug fixes for various platforms ## \[1.16.0] - 2024-10-17[​](#1160---2024-10-17 "Direct link to \[1.16.0] - 2024-10-17") **Added** * Light Source Detection **Changed** * Improved face morphings * Improved hair segmentation **Fixed** * Acne removal * Music playback on certain effects ## \[1.15.1] - 
2024-08-07[​](#1151---2024-08-07 "Direct link to \[1.15.1] - 2024-08-07") **Added** * Load Banuba Native Code using Relinker Library (Android) * Zoom and torch to player API * Teeth tone to generator **Fixed** * Permissions on start * Fixed long loading time for makeup * Fixed effects freezing in the new version of Chrome * Crash when frame size is not a multiple of 16 * Face Morphing fix ## \[1.14.0] - 2024-06-07[​](#1140---2024-06-07 "Direct link to \[1.14.0] - 2024-06-07") **Added** * New teeth whitening algorithm * Gender detection * ASTC format support * Frown Action Units **Changed** * Face attributes (Personalized 3D avatar) **Fixed** * Some fixes for prefabs * Texture formats on Metal ## \[1.13.2] - 2024-05-27[​](#1132---2024-05-27 "Direct link to \[1.13.2] - 2024-05-27") **Fixed** * KTX dynamic upload ## \[1.13.1] - 2024-05-30[​](#1131---2024-05-30 "Direct link to \[1.13.1] - 2024-05-30") **Fixed** * Android strip symbols ## \[1.13.0] - 2024-04-30[​](#1130---2024-04-30 "Direct link to \[1.13.0] - 2024-04-30") **Added** * Biometric match * Pupillary distance * Nails and eyelenses prefabs * GLTF 2.0 Support **Changed** * Eyelashes texture ## \[1.12.1] - 2024-04-10[​](#1121---2024-04-10 "Direct link to \[1.12.1] - 2024-04-10") **Fixed** * Web: * Frame leak, when source changed * Analytics crash * Video textures in effects (Win) * Unity: * Background & hair segmentation * Lips shine scale ## \[1.12.0] - 2024-04-02[​](#1120---2024-04-02 "Direct link to \[1.12.0] - 2024-04-02") **Added** * Personalized 3D avatar (iOS) - beta version * Nails segmentation (Android) * Improved creating effects process * FP16 flag XNNPACK for TFLite * Privacy manifest and signing (iOS) * Asynchronous pixels reading * Studio lightning effect * useFutureFilter option (Web) **Fixed** * Actions Units improvements * Face tracking performance improvements (Android + Web Safari) * Video track loading (Web) ## \[1.11.1] - 2024-02-23[​](#1111---2024-02-23 "Direct link to \[1.11.1] - 2024-02-23") **Added** * Fit mode for Unity **Fixed** * Win: GLTF models for AMD GPUs * Unity: * Segmentation clamp issues * Materials in effects * Hand skeleton transformation and rendering ## \[1.11.0] - 2024-02-09[​](#1110---2024-02-09 "Direct link to \[1.11.0] - 2024-02-09") **Added** * Improved background segmentation in landscape * New skin segmentation * Android: * VideoInput to Player API * TextureOutput to Player API * IRenderStatusCallback to Player API * IFrameRotationProviderCallback to Player API * WebAR: mediastream processor * iOS: * Player.onRender callback **Changed** * UV scale **Fixed** * FRX scattering * iOS: * breaking segmentation on the lower parts of the frame, camera orientation track * Android: * Calculation of strides for FrameOutput in the Player API ## \[1.10.1] - 2024-01-10[​](#1101---2024-01-10 "Direct link to \[1.10.1] - 2024-01-10") **Added** * Face-skin segmentation for Unity **Fixed** * Flashing morphing unity * Support for simulators ## \[1.10.0] - 2023-12-20[​](#1100---2023-12-20 "Direct link to \[1.10.0] - 2023-12-20") **Added** * Eye’s dark circles removal * New face morphs in the Makeup API * Teeth segmentation * ActionUnits antijitter * Ability to download iOS packages via SPM * `SeverityLevel.NONE` to disable logs **Changed** * Lip segmentation * Removed GLTF autoscale **Fixed** * iOS: Photo input now tracks device orientation (Player API) * Android: correctly resume audio from a paused or stopped state * Webar: * OpenCV SIMD instructions for Safari 16.4 * Empty files unzip * useFutureInterpolate 
option * Crash during module loading in an unsupported browser * Webcamera performance improved * Red AR 3D Mask bad colours ## \[1.9.3] - 2023-12-06[​](#193---2023-12-06 "Direct link to \[1.9.3] - 2023-12-06") **Changed** * Remove auto-scale from GLTF loading ## \[1.9.2] - 2023-11-29[​](#192---2023-11-29 "Direct link to \[1.9.2] - 2023-11-29") **Added** * Face skin segmentation * Android & iOS: apply a watermark to recorded video (Player API) * Full HD video recording iOS ## \[1.9.1] - 2023-11-8[​](#191---2023-11-8 "Direct link to \[1.9.1] - 2023-11-8") **Fixed** * Action Units eyes blinking * Android: support for relative paths for video textures * WebAR: * Player FPS restriction * Perf measure * Rendering delay (Firefox) * NPM package missing modules * Agora Filter Extension v2.0.0 * Unity: * Prefabs ordering (export layers) * Makeup prefab * Camera UV after resize * Morphing prefab * UI rework ## \[1.9.0] - 2023-10-19[​](#190---2023-10-19 "Direct link to \[1.9.0] - 2023-10-19") **Added** * Rendering quality improvements for head wearable and arm wearable products * Earlobes detection * Skin smoothing and morphing improvements in Unity * Transmissive materials in GLTF files * New Effect Player API for Web, iOS & Android. Information about migration can be found [here](/far-sdk/tutorials/development/guides/migration.md) **Changed** * Turned off future filter on eyes * Render improvements for caps & rings * Makeup softlight texture **Fixed** * iOS: Online mode for acne removal, some effects, call callback on the main thread, Video recording * WebAR: Process crash, broken textures in effects WebAR ## \[1.8.1] - 2023-09-01[​](#181---2023-09-01 "Direct link to \[1.8.1] - 2023-09-01") **Fixed** * Bug with special symbols in paths * Background transparency * WebGL context lost * Some fixes for OpenGL * Incorrect display of textures in Firefox ## \[1.8.0] - 2023-08-17[​](#180---2023-08-17 "Direct link to \[1.8.0] - 2023-08-17") **Added** * Acne removal for Android * Adjustable acne correction size through effect settings **Changed** * New lip segmentation * Support for Visual Studio 2022 instead of Visual Studio 2019 * WebAR: localhost logic * OEP callback status * GLTF serialization * Default enabled morphing * Skin smoothing **Fixed** * Android Demo App Crash * Added vector binding * TFlite model cache * Lips dithering * View scale * Background segmentation for M1 * Flipped MV in script * Makeup for Unity * Nose morphing * Accelerated physics on processed photos and videos * Makeup eyes colouring * Unity packages ## \[1.7.1] - 2023-06-15[​](#171---2023-06-15 "Direct link to \[1.7.1] - 2023-06-15") **Added** * Web telemetry * Portrait background segmentation for desktop and web * Manual acne removal for iOS **Changed** * Faster loading for segmentation neural networks * Enable OpenCL on Android 12 **Fixed** * Unity packages * Video track * Camera texture * Effect reset * Eyes morphings * Memory leak * JNI initialization * Camera switching * Bad JS cast ## \[1.7.0] - 2023-05-12[​](#170---2023-05-12 "Direct link to \[1.7.0] - 2023-05-12") **Added** * Turbo FRX (default is ON in web) * Motion detector for all segmentation neural networks * Lips dithering * New makeup morphings * Reduced detectors * Background neural networks with Neural Engine support (iOS only) * Parallel FRX for multiface **Changed** * Sorting of effects for iOS * New EffectPlayer API for iOS * Desktop delivery * FRX update (FRX8) * Render Optimization * Reduced CPU consumption for the hand detector * Resize for metal * Eyebrow correctors **Fixed** * UI for rings * Segmentation postprocessing * TFlite initialization * Action Units eyebrows * Hair AR 3D Mask size * Shaders instancing and names for the new Makeup API * iOS: sound in recorded video, gif playback, video from gallery rotation * Android: Crash, Assets load, Mips generation * WebAR: Memory leak and some bug fixes for Safari 16.4 * Win: Background issues * Unity: Makeup * OEP: Enable audio for Android, Video as a background ## \[1.6.1] - 2023-03-20[​](#161---2023-03-20 "Direct link to \[1.6.1] - 2023-03-20") **Added** * UI for ring fitting **Changed** * Replace int8 with fp16 in neural networks for Android **Fixed** * Background video play * Neuro\_beauty effect * Photo processing for Android & iOS * Sound in the recorded video * ANR during initialization * Android: Methods for Video Editor; Missed video textures in effects * iOS: Effect events * MacOS: Incorrect processing output * WebAR: missing cleanup; memory leak; playback of the effect's video textures * OEP: Crash after choosing .gif; background video textures * Unity: Incorrect camera output ## \[1.6.0] - 2023-01-23[​](#160---2023-01-23 "Direct link to \[1.6.0] - 2023-01-23") **Added** * Faster face tracking for Safari * Add an API to check compatibility between the browser and SDK * New rings for each finger * Background mode for effects (iOS) * Two types of denoising (Web only) * Play control functions (pause/play) for Android * Python build for M1 **Changed** * SSD Update * Webcam video enhancer opt-in * Agora example update (for Android) * Reduced size of skin\_segm\_tflite * Eyes neural network update **Fixed** * bnb\_UVMORPH texture * Broken face texture when using brow correctors * Missed background in gltf\_avatar effect * iOS: MSAA crashed iOS, Crash on Makeup * Android: Flipped-up effect test\_touch\_gestures * MacOS: Video processing on MacOS * WebAR: Token-caused crash of WebAR * Win: Missed UI in viewer, spamming error messages during processing * Unity: Unity Android crash * OEP: Background video mode in OEP ## \[1.5.3] - 2022-11-17[​](#153---2022-11-17 "Direct link to \[1.5.3] - 2022-11-17") **Added** * Unity * Hand tracking * Hand skeleton * Eyebrow segmentation * Makeup **Fixed** * Accurate video seek on Android * Action Units update * Crash on multiface * Video decoding on the Web ## \[1.5.1] - 2022-10-10[​](#151---2022-10-10 "Direct link to \[1.5.1] - 2022-10-10") **Added** * Variable frame rate video support (15, 30, 60 FPS etc.)
**Changed** * A new blur background algorithm with more accurate borders * Ugly action unit MOUTH\_STRETCH has been excluded from the pipeline * New background segmentation models in landscape for Desktop platforms **Fixed** * Blur background radius settings * Flyout when no lips are found * Android * Crash during JavaScript execution * Video textures for MediaTek+PowerVR devices ## \[1.5.0] - 2022-08-24[​](#150---2022-08-24 "Direct link to \[1.5.0] - 2022-08-24") **Added** * OEP: Added support for BT601 and BT709, full and video ranges * Enabled neural network cache on Android * AR Avatars: technology updates, support for GLTF models added * Makeup API: Added return values for lips, hair, and teeth * Android: Ability to record original video without effects in the Demo app * Unity: More segmentation neural networks * Upgrade TFlite to version 2.9 * New face detector * C API: eval\_js method added * Face tracking for medical AR 3D Masks (does not ship with the regular Face AR SDK, contact your sales manager for more details) * Virtual Background: `Background.getBackgroundVideo()` method added * Ability to pause background video right after it's loaded **Changed** * WebAR is now delivered as an NPM package. * iOS Demo: Now video capture stops after pushing videoButton * Upgraded lips segmentation neural network for all platforms except Apple * Effects resources are now loaded asynchronously * iOS, MacOS: New background landscape model * Android Demo App: Improved UI Layout in landscape mode * Eyebrow technology: improved performance and stability * Smile and mouth opening trigger improvements * Improved stability of face tracking and detection * Unity * Plugin and Demo scene refactoring * Morphing refactoring **Fixed** * C API built on Windows * Crash on Viewer closing * Various serialization issues with effects (broken physics, etc.)
* Crash on some devices with error: Resource deadlock would occur * Background video unexpected line * Deserialization of empty textures * Crash or image freeze when a face goes out of the screen border * OEP: * Memory leak in FPS Draw * Memory leak when loading effect synchronously * Released external surface before creating new (leak fix) * * WebAR: * Upside down screenshot on Safari 14 * Added better error messages for Outputs * Runtime error when using iframe.srcdoc * Inability to reuse MediaStream * Player.destroy() memory leak * Makeup API: * Makeup.lashes affects Eyelashes.color * Unity: * Various effects issues * Various startup errors * Demo scene UI fixes * Broken face in landscape * Beautification scene * Issue with multiple faces ## \[1.4.4] - 2022-07-26[​](#144---2022-07-26 "Direct link to \[1.4.4] - 2022-07-26") **Added** * Option to disable future frame filtration in the recognizer **Changed** * Disable the future frame feature for OEP ## \[1.4.3] - 2022-07-11[​](#143---2022-07-11 "Direct link to \[1.4.3] - 2022-07-11") **Added** * `Background.getBackgroundVideo()` method added to the virtual background feature * Android OEP Demo: Add virtual background samples **Changed** * Add play and loop parameters to the background texture **Fixed** * OEP: Frame is flashing when switching from blur to video * The app does not start on lower-end devices ## \[1.4.2] - 2022-07-04[​](#142---2022-07-04 "Direct link to \[1.4.2] - 2022-07-04") **Added** * The path to the application cache (internal storage) is available via `resource_manager` * Enable neural network model cache on Android * Enable the software MSAA on the Adreno 6xx series * iOS: support of the OpenGL backend for the OEP Demo app * Android, iOS OEP Demo functionality improvements **Changed** * Upgrade Tflite to version 2.8 (iOS, Android) * Virtual background: scaling and rotation are disabled when background is not set * Virtual background: rotation changed from ccw to cw * Virtual background: [video formats](/far-sdk/tutorials/capabilities/technical_specification.md#video-formats-support) support updates * Android: use xnnpack runner while gpu runner is is loading **Fixed** * Unexpected line on background video when using OEP * Background video frame is flashing when switching the background * SDK Version 1.3.X Crash on iOS system 12.X * Build made with modules crashed when the license  expired (missing watermark file) * OEP: iOS 1.3.1 blur and transparency don't work * OEP: BG video orientation (recorded front, back cameras) is flipped, stretched, or squeezed * OEP: Background flipped upside down If portrait orientation lock is off * MacOS: hair and BG recognition * Xiaomi Mi8 Pro crash * Exception: 'transformation matrix is singular' * Windows: camera crashes when creating a new camera * Windows: deadlock ## \[1.4.1] - 2022-05-16[​](#141---2022-05-16 "Direct link to \[1.4.1] - 2022-05-16") **Fixed** * The makeup API didn't work on iOS versions below 14 * WebAR failed to start on Chrome 100 and up (Windows platform only) * Video processing was taking longer * Unity: Incorrect face texture display in some cases ## \[1.4.0] - 2022-04-20[​](#140---2022-04-20 "Direct link to \[1.4.0] - 2022-04-20") **Added** * Face AR SDK for [iOS](/far-sdk/tutorials/development/installation.md) and [Android](/far-sdk/tutorials/development/installation.md) distribution as pods or maven packages * iOS, Android: All Face AR SDK [examples on Github](/far-sdk/tutorials/development/samples.md) are switched to pods, and Maven packages * 
* Component-based face tracking for all supported platforms (only in online mode) * Unity: brand-new demo scene with an effects carousel * Android: restored external texture support in the `effect_player` * Hand AR: Textured nails * M1 simulator support has been restored * Makeup API: Eyebrows makeup * WebAR: ability to load an Effect via Request * WebAR: ability to play a gif as a background texture * WebAR: Effect.preload progress listener * WebAR: Check out our new [tutorials section](/far-sdk/tutorials/development/basic_integration.md) with extensive WebAR insights and best practices * Rendering Engine: GLTF support **Changed** * iOS: Upgraded lips segmentation neural network * B\&W lip colouring support without additional parameters * Makeup API: reduced number of uniforms * WebAR: Improved WebAR archive UX * Rendering Engine: Reduced LUT memory size * OEP: extended supported image formats **Fixed** * Black screen flash on effect loading * Android: memory leak in the OEP * Android: Pixel 3a crashes after applying certain face filters * Android: deadlock during video\_frame draw * Android: Xiaomi Mi 8 Pro crashes * iOS: iPhone 13 is recognised as mid-range hardware * Invisible entities for some face filters * Effects API: `Api.drawingAreaWidth()` & `Api.drawingAreaHeight()` return 0 * Windows: Impossible to apply an effect with a path containing extended Unicode * Windows, MacOS: Banuba Viewer saves video without sound * MSAA issues in face filters * WebAR: Fixed processing of RGB images * WebAR: Fixed MediaStreamCapture crash on Safari 14 * Makeup API: Fixed serialization of empty textures * Hand AR: hand skeleton false detections * OEP: crash if the input buffer format changes * OEP: green frame if the texture is not ready * OEP: fixed rapid camera switching ## \[1.3.1] - 2022-03-02 **Changed** * The OCV-based camera on Windows can now select a camera by index **Fixed** * OEP crash or black background on the first choice of background blur * OEP Metal crash * `noFaceActions` & `faceActions` functions were called incorrectly in config.js * Viewer v1.3.0 crashes on Macbook * WebAR: MediaStreamCapture produces black frames * WebAR: Makeup effect crashes the web page on iPhone 8, iOS 15 * WebAR: Fixed the result of `ImageCapture.takePhoto` * WebAR: Effect crashes on iOS Safari * WebAR: Effect animations lag on iOS Safari 14 * Android alooper crash * Photo loading black screen: buffer is not large enough for dimensions * Incorrect face texture display when using lip correctors * Hand skeleton could be recognized on unrelated objects ## \[0.38.6] - 2022-02-16 **Added** * Offscreen app performance improvement **Changed** * Drop any frame\_data fields on effect loading **Fixed** * SDK operation on Android 6 * Recording audio from effect\_player to file * Crash when making a photo or video from the Demo App * Avoid copying the frame in the Banuba SDK Demo * OEP Android FATAL EXCEPTION: CameraThread * Distorted image after turning on blur when the row stride is not equal to the width ## \[1.3.0] - 2022-01-27 Version 1.3.0 also includes all changes from SDK v0.x releases up to v0.38.5. Please refer to the changelog below.
**Added** * Face tracking anti-jitter based on an optical flow algorithm * Ability to draw face mesh landmarks (and a test effect) * Face mesh lip correctors (optional) * Face mesh eyebrow correctors (optional) * Native Metal API support (iOS, MacOS) * Metal multi-instance * Arm64 support with a Metal backend for simulators and MacOS * Makeup API: Lips morphing * WebAR: Tflite libs for emscripten (3.0.1) * WebAR: Throw an error when rendering an unexpected DOM element * unloadEffect method * OEP: Metal YUV converter * [SDK Manager](/far-sdk/generated/javadoc/banuba_sdk/com/banuba/sdk/manager/package-summary.html) added to the API documentation * AR Rings technology **Changed** * WebAR: The 'TFLite delegate creation failed' error now shows as a warning * WebAR: Optimized CPU and GPU usage * WebAR: Reduced RAM usage and camera pixel retrieval time * WebAR: Added a warning if effect.evalJs is called before the effect is applied * Rebuilt OpenCV for Android with only the required set of features * Switched Hand Gestures and recognition to Tflite for iOS and Mac * The Android Beauty example switched to the Makeup API * Face mesh correctors are controlled directly from the effects that need them (including the Makeup API) * The Android Demo app can now be built with Java 11+ and Gradle 7+ * Correct alpha channel blending (used in background transparency) * OEP: Improved performance of the YUV converter for Android * Makeup API: Improved the effect activation time * Removed copying of U and V textures for i420 **Fixed** * Makeup API: Correctly resolve loading resources from different modules * Makeup API: Hair blending incorrect brightness * Makeup API: Incorrect blur behaviour compared to the 0.x version * WebAR: Electron: License error 0xff00f * WebAR: User-friendly error messages for a misconfigured `locateFile` * WebAR: Delay in loading animation on Safari * WebAR: Makeup effect crashes iOS Safari * WebAR: Fixed a GPU memory leak * Various crashes in the Offscreen Effect Player (OEP) * SDK v1.1.0 crashed on effect unload in the Android quickstart app * Incorrect behaviour of Api.playVideoRange * Windows: Effects cannot be enabled when extended Unicode is used in the app's location name * Windows: The OEP-desktop-c-api example build failed to launch * The Android Demo app crashed on switching effects and closing the activity * iOS: `sdkManager.output?.takeSnapshot` method not working in SDK 1.x * iOS: Distorted videos/snapshots when using a non-standard RenderSize ## \[0.38.5] - 2021-12-14 **Added** * EffectPlayer sound playback recording **Changed** * Android: Updated lips segmentation neural network **Fixed** * Incorrect shiny lip application * OEP: fix YUV aliasing ## \[1.2.1] - 2021-12-01 **Fixed** * iOS: x86\_64 simulator support * Makeup API: Incorrect display of blurred background * Makeup API: Incorrect virtual background ratio in landscape mode * Virtual background: incorrect alpha blending when using transparent images ## \[1.2.0] - 2021-11-24 Version 1.2.0 also includes all changes from SDK v0.x releases up to v0.38.4. Please refer to the changelog below.
**Added** * Hand gestures and a Hand skeleton model for all platforms * Light Streaks effect support in Scene (1.x versions) * A warning if the versions of the neural networks and the Face AR SDK are different * Face skin segmentation neural network for Tflite (all platforms except iOS and MacOS) * Makeup API: Eyelashes 3D support * Android: Automatically select RGB or YUV camera mode (better performance on some low and mid-range devices) * WebAR: Ability to enable heavy hair neural networks * Error message when trying to load an old effect (from 0.x versions) * Ruler (distance to face) effect * Introduce `evalJs` for calling effect methods from the application * iOS: Support for hair segmentation in landscape mode * Ability to [combine Face AR effects](/far-sdk/effects/virtual_background.md) with a virtual background in runtime * `evalJs` support for OEP **Changed** * New Tflite Lips neural network (all platforms except iOS and MacOS) * WebAR: Optimized image processing * Removed unnecessary libraries and dependencies * iOS, MacOS: Switched face tracking neural networks to Tflite * Android: Updated hair segmentation neural network * The new body segmentation (v2) neural network is enabled by default **Fixed** * WebAR: Fixed Emscripten auto GC * WebAR: Added the ability to release WASM memory * WebAR: Fixed photo editing * WebAR: Hand segmentation and gesture support * WebAR: Fixed Next.js compatibility * Removed an unneeded iteration for hair recolor * The second call of BNBUtilityManager.initialize or BanubaSdkManager.initialize caused a crash in release builds * Eye segmentation neural network: incorrect display * Makeup API: Fixed the hair coloring algorithm * Various OEP fixes * Imgui display on Windows * Effects fix for Safari 15 ## \[1.1.1] - 2021-10-19 **Added** * Makeup API: beauty morphings **Changed** * Makeup API: Blur algorithm **Fixed** * callJsMethod fails when passing parameters ## \[0.38.4] - 2021-11-02 **Fixed** * `full_image_from_yuv_i420_img` is too slow ## \[0.38.3] - 2021-10-07 **Added** * `face_search_mode` in the EffectPlayer API * Windows: Sign and add a description to the EffectPlayer dlls * i420 yuv pixel format support **Changed** * Improved face tracking performance * New tflite hair segmentation neural network for Android and Windows **Fixed** * Morphing behaviour at the screen edges ## \[0.38.2] - 2021-09-16 **Added** * iOS 15 support ## \[1.1.0] - 2021-09-30 Version 1.1.0 also includes all changes from SDK v0.x releases up to v0.38. **Added** * Makeup API support * Action Units: Multi-face support * Full Body Segmentation v2 neural network (for all platforms) * Windows: Added description to dlls * Windows: Sign Banuba SDK dlls * Face triggers support (mouth open, smile, etc.)
* iOS: Effect info UI in the SDK Demo App * iOS 15 support * Api.isMirroring() * WebAR: Distance to phone support **Changed** * Android: Updated lips segmentation neural network * Skin segmentation neural network support on all platforms (including the Web) **Fixed** * nn\_api Lips aliasing * WebAR: iOS 13 does not show a video stream ## \[0.38.1] - 2021-08-17 **Added** * C API documentation + deadlock fixes (Windows) ## \[0.38.0] - 2021-08-16 **Added** * Windows: MSMF camera usage * iOS, Android: Hand gesture tracking **Changed** * Android: Use the tflite GPU info library to rank device classes * iOS: Use the same background segmentation neural networks for all devices * iOS: Removed unneeded UI from the demo app * Renamed license utils symbols **Fixed** * tflite\_runner assorted fixes * iOS: White eyelashes when taking photos * iOS: Time range issues in the video player * MacOS: Crash on M1 * CubemapEverest test effect autorotation * WebAR: broken SIMD support * WebAR: Fixed creation of MediaStreamCapture * Android: Multitouch crash in the demo app * Unity: minimum version of the Android SDK * Unity: Android build on Windows * Win32: Background NN operation * Incorrect lips shine behaviour ## \[0.37.1] - 2021-07-27 **Added** * Android x86\_64 preliminary support **Changed** * Android: Common gradle for the SDK Demo projects * Updated background segmentation neural network models for Web and Desktop platforms **Fixed** * iOS: missing image from the camera when using ARKit on iPhone 12 * Android: Crash in the C API * Android: Missing photos in the gallery when using the Demo app on some Android devices ## \[0.37.0] - 2021-07-09 **Added** * New Eyes segmentation neural network with separate detection of eye parts: pupil, sclera, and iris (all platforms) * Token updates for the Eye bags and Acne features (**new token required**) * iOS: Lips corrector * MacOS: reworked implementation of the MacOS framework * WebAR: API to set the number of faces to track * Unity: Action Units interface * Ability to show camera frames during effect initialization * M1 support (including simulators) * Accepting YUV i420 * Lip morphing effect * Ability to set Neck smoothing from JS (in an effect) * Effect Player C API **Changed** * Updated Win tflite x64 and x86 from 2.3 to 2.4.1 * Eyes corrector is included in release archives by default * Eyes corrector enabled for Win and Web * Preload all Android NN classes instead of creating them on request * Improved performance of the Lips Shine effect * Makeup API updates: * The SetInitialRotation method added to the bg-image and bg-video classes * Transparent BG fix * Consistent method naming * Fixed usage of skin segmentation with background features * Android: Removed unneeded rotateBg calls and all related code * iOS: Include bitcode in minimal builds * iOS: Updated background segmentation NNs for high-end and low-end devices * Updated Face tracking neural network (all platforms) **Fixed** * Makeup Transfer exception * WebAR: Fixed inactive tab video throttling * Standalone: removed legacy resource copy * Effects with video textures crashed on some devices with MediaTek and PowerVR * OEP long loading during app initialization ## \[1.0.0] - 2021-06-22 **Added** * WebAR: Video texture support
* WebAR: API to set the number of faces to track * WebAR: Lips effects support on iOS (Safari) * Xcode 12.5 support **Changed** * [Example apps](/far-sdk/tutorials/development/samples.md) optimised for SDK v1.x **Fixed** * Android: Screen is flashing when switching effects * Android: GPU-specific deadlock issues * iOS: Demo app crash on iOS < 13.5 * WebAR: Fixed texture alpha-blending * Windows: An effect with video failed to load * Windows: standalone build failure on the x86 platform * Do not crash the app when an assert has failed * JS engine fixes * Lips shine effect * Various effects fixes ## \[0.36.1] - 2021-05-24 **Changed** * WebAR: FrameData is available in the WebAR SDK * Distance to phone improvements **Fixed** * Unity: Incorrect trigger behaviour * Unity: iOS camera initialization ## \[0.36.0] - 2021-05-03 **Added** * WebAR: Human-readable exception messages * WebAR: Updated background segmentation * WebAR: Optional SIMD * WebAR: Made the WebAR SDK SSR-compatible * WebAR: Crop, resize, horizontalFlip support * Desktop: Updated background segmentation * Android: Offscreen Effect Player (OEP) example * iOS: Offscreen Effect Player (OEP) example * Unity: Face morphing support * GIF texture support * Lip segmentation support for Web and Desktop * Makeup API: Exposed extra APIs * Hand AR API: Nails segmentation * Windows: SDK dlls come signed **Changed** * Invalidate the texture cache in case of a file change * WebAR: Sped up frame retrieval * WebAR: Improved memory usage * WebAR: Throw an error if the effect has zero length * Mac: build the SDK as a macOS framework * iOS: Remove the ARKit dependency if ARKit face search is disabled **Fixed** * Error logs when loading an empty effect * Android: Region cropping when applying zoom * WebAR: Prevent playback stops during unsuccessful effect application * WebAR: Inactive tab throttling * Makeup API: "black square" on lips with an alpha channel * Unity: UI scaling for the Beautification scene * Crash on effect switching * Standalone demo app signing ## \[1.0.0-beta] - 2021-04-23 **Added** * New render engine aka *Scene* * WebGL 1.0 support (for Safari) * Metal support ## \[0.35.0] - 2021-02-26 **Added** * Support for non-ASCII symbols in paths * New Background segmentation neural networks (Desktop) * New Background segmentation neural networks (Web) * Hair segmentation support on the Windows platform * WebAR: `Effect.preload` and `Player.applyEffect` will now throw an exception if the effect's underlying source is not a .zip archive * Initial support for the Apple M1 **Changed** * ARKit disabled by default (iOS) * Strip unnecessary symbols on macOS * Use only the minimum required subset of OpenCV on macOS * Use TFLite 2.4.1 without the Metal delegate on macOS * API to set an animated background in effects dynamically **Fixed** * WebAR: Firefox video processing issue * Android: Orientation fixes * Android: Sound issues and minor improvements * Unity: iOS plugin size ## \[0.34.1] - 2021-01-26 **Added** * Face Ruler for the Android platform * Unity: New Action Units effect with background segmentation **Changed** * Distribute EffectPlayer for iOS as an xcframework **Fixed** * Build with disabled face tracking ## \[0.34.0] - 2021-01-19
\[0.34.0] - 2021-01-19") **Added** * WebAR: Background support in landscape * The BG support field in effect\_info * Android: Java 8+ API desugaring support * Viewer extra options for processing **Changed** * tflite\_runner: different delegates support each feature * Android: Add static TensorFlow lite version * Enable RGB cameras on devices with Snapdragon 625 * Processed image location in Banuba Viewer * Skin smoothing NN update (iOS) **Fixed** * WebAR: ES6 to ES5 transpilation issue * WebAR: loading of non existing effects * WebAR: several performance issues * Hair segmentation: TFLite input copy error * Prior fixes to work with frx\_meta logic * Incorrect effects display on Android 10 * Android: Effect size after rotation ## \[0.33.1] - 2020-12-10[​](#0331---2020-12-10 "Direct link to \[0.33.1] - 2020-12-10") **Added** * Add listener as soon as test\_Ruler or FaceRuler effect is activated **Changed** * Enable a face recognition neural network for mid-end Android devices **Fixed** * `setEffectSize` fix for Android * Lips shine AR 3D Mask incorrect work with back camera ## \[0.33.0] - 2020-11-30[​](#0330---2020-11-30 "Direct link to \[0.33.0] - 2020-11-30") **Added** * Display FPS stats in the Desktop Viewer App * Beauty scene for Unity plugin * Android: Ability to Override Detected Resolution * Lips recoloring with a glitter effect (also supported in Banuba Viewer) * Distance to face (ruler feature) **Changed** * Updated tflite for Windows to 2.3 * Both tflite runners were created on first request * Text texture is enabled by default **Fixed** * iOS: incorrect BG work on photo in landscape * Unity: fix aspect on mobile devices * Delayed camera start * Repacking errors * Front camera flip * 'Face not found' message after loading photo from gallery * Added handling of IllegalArgumentException to prevent crashes dependent on surface configuration ## \[0.32.1] - 2020-11-05[​](#0321---2020-11-05 "Direct link to \[0.32.1] - 2020-11-05") **Added** * Effect activation listener * Unity: Separate render target for the beauty scene for the LUTs **Changed** * Decreases CPU load on MacOS **Fixed** * Crash during effect preload * Mesh trembling with fast face tracking * Unity: Aspect of Background segmentation * Crash during fast effect switching ## \[0.32.0] - 2020-10-20[​](#0320---2020-10-20 "Direct link to \[0.32.0] - 2020-10-20") **Added** * New background model * New WebAR API * WebAR Quickstart Demo App * WebAR beauty demo app * Native OSX camera implementation * Web and Desktop getting started added * Possibility to customise the capture session preset (iOS) * Desktop app examples for Windows and Mac * Demo effects without face recognition * Xcode 12 supports **Changed** * Eye corrector v2.0: improved stability and performance * Makeup API improvements * Use setBackgroundTexture with an absolute path * Face tracking stability and performance optimization * Update offline face tracking (Android) **Fixed** * Crash with bitcode in the JavaScript core * Separate AR 3D Mask for neck smoothing feature * Crash with effect reset (Android) * Missing logs in Viewer Standalone * Android video player loop * Memory leak on desktop when using animated textures * Quickstart example app fixes * Unity background fix * ARKit face detection failure ## \[0.31.0] - 2020-08-27[​](#0310---2020-08-27 "Direct link to \[0.31.0] - 2020-08-27") **Added** * SDK features control and repacking with client token and client configuration * Minimal SDK archive * SDK build for MacOS * Makeup transfer feature * Photo online 
* Ability to enable effects and neural networks without a face recognizer * Eyebrow segmentation NN * Neck smoothing neural network * Set camera FPS mode on Android (fixed/adaptive) * Represent SDK frames as OpenGL textures (WebRTC for Android) * New beautification API **Changed** * Updated the Background Segmentation neural network for standalone builds * Face recognizer works on the full frame * Landmark smoothing filter * Updated eye corrector * Updated face recognition neural network * Unity scene works in full screen mode * OpenCV updated to 4.3.0 * Revised the hardware class ranges for Android devices and the max resolution for each class **Fixed** * Unity plane does not update rect * Video player fix * Heart rate measurement with neural network face search * Segmentation neural networks work with ARKit * Unity WebGL build * Unity failure on the Windows platform * Crashes on effect unload * Correctly handle MRT rendering into the background camera texture * Black screen on devices with ARKit * Background segmentation on Windows x86 * WebAR SDK blocks the backspace key * Banuba SDK works on devices with iOS 14 ## \[0.30.2] - 2020-07-15 **Fixed** * Second AR 3D Mask freezes on the screen in scene effects ## \[0.30.1] - 2020-07-14 **Added** * Eyes correction feature **Changed** * Hide Boost symbols **Fixed** * Asynchrony of sound and video after file import * iOS: The app freezes after going to the background in Editing mode * Android Beauty: Screen is flashing after launch * Crash on Editing Image * App crashed in editing mode on iPhone XS Max * EffectPlayer is not launched for the first time * Second face is missed when using the front camera (with ARKit) ## \[0.30.0] - 2020-06-11 **Added** * WebAR support for the Unity platform * Background segmentation for the Unity platform * Max Faces support in the client token * Videocall example for iOS * *Minimal* configuration of the Banuba SDK * EffectPlayer EffectManager **Changed** * The EP version has changed to 5.6 * Enable bitcode by default (iOS) **Fixed** * Compilation error on Ubuntu * Body segmentation neural network rotation * Viewer Standalone build * MSVC x64 Eigen crash * The app won't throw an exception when neural network resources are missing * Optimized face beautification * Fix audio session (iOS) * Background segmentation failures (iOS) * Creepy smile fixes * Portrait match fixes * Skin smoothing fixes ## \[0.29.1] - 2020-05-19 **Fixed** * Crash on Android with neural face recognition ## \[0.29.0] - 2020-04-30 **Added** * Creepy smile neural network (iOS) * Manual audio session in BNBEffectPlayer * Skin smoothing neural network (iOS) * New face recognition and tracking algorithm for offline (Android) **Changed** * Banuba Viewer colour picker reacts to background and lip neural networks * Make the WebAR SDK an ES6 module * Updated the llvm backend for WebAR * WebAR improvements **Fixed** * Lip segmentation on the Android HQ photo * Memset buffer overflow when using Action Units * Jaw mesh stretching fix in the face tracking algorithm ## \[0.28.3] - 2020-04-29 **Added** * Enabled bitcode in the iOS release **Fixed** * Portrait match technology ## \[0.28.2] - 2020-04-22
\[0.28.2] - 2020-04-22") **Added** * Portrait match technology ## \[0.28.1] - 2020-04-09[​](#0281---2020-04-09 "Direct link to \[0.28.1] - 2020-04-09") **Added** * Update Effect Player for video calls, support callkit audio session specifics **Changed** * Switch to a fast face recognition algorithm for weak iOS devices **Fixed** * Celebrity match technology fixes * Crash on Banuba Viewer close ## \[0.28.0] - 2020-03-23[​](#0280---2020-03-23 "Direct link to \[0.28.0] - 2020-03-23") **Added** * New face recognition and tracking algorithm for realtime (iOS) **Changed** * Full Body segmentation can be applied again (iOS) * Banuba Viewer UI changes * Adapt Action Units to use the new face recognition algorithm (iOS) * Adapt triggers to use Action Units (iOS) **Fixed** * Physics behaviour for effects on devices with ARKit (iOS) * Effects render on low-level Android devices ## \[0.27.2] - 2020-03-12[​](#0272---2020-03-12 "Direct link to \[0.27.2] - 2020-03-12") **Fixed** * Unity openCV error * Small recognizer fixes for the iOS platform ## \[0.27.1] - 2020-02-27[​](#0271---2020-02-27 "Direct link to \[0.27.1] - 2020-02-27") **Added** * ARKit multiface support **Changed** * Android strong device list updated **Fixed** * Multiface effects render * Multiface issues * Crash when processing a photo with two faces * Effects render with ARKit on iPhone X ## \[0.27.0] - 2020-02-19[​](#0270---2020-02-19 "Direct link to \[0.27.0] - 2020-02-19") **Added** * Greatly improved face detection in offline mode (iOS, for photos) * Objective-C full support * More examples for iOS and Android * Improved beauty effect * Lip shine effect improved * Ability to choose a camera from the command line on Desktops **Changed** * Persistent OpenGL context on Android (don't recreate it after the app goes in background) * Safely ignore GL errors on Android * The beauty effect is enabled by default **Fixed** * Lips shine effect * Neural network behaviour after the face was lost ## \[0.26.0] - 2020-01-17[​](#0260---2020-01-17 "Direct link to \[0.26.0] - 2020-01-17") **Added** * Advanced lip recoloring * Action Units from ARKit * x86 support for Windows * Lazy textures load **Changed** * Improve Unity sample effects * The Bokeh effect improved * Enable beauty by default in sample effects * Improve acne and bag removal performance * Hair stand blending performance improvement **Fixed** * Threads leak on Android * WebGL FPS stabilization * Memory issue on Android * Memory issue on iPhone6+ * Crash during rendering on Adreno 610 ## \[0.25.2] - 2020-01-09[​](#0252---2020-01-09 "Direct link to \[0.25.2] - 2020-01-09") **Fixed** * Camera open error on Android ## \[0.25.1] - 2019-12-31[​](#0251---2019-12-31 "Direct link to \[0.25.1] - 2019-12-31") **Fixed** * Bundle version in the xCode project ## \[0.25.0] - 2019-12-23[​](#0250---2019-12-23 "Direct link to \[0.25.0] - 2019-12-23") **Added** * Eye bug removal * Neural network based acne removal * Use `ARKit` for face tracking when available * `dvcam` post-process effect * Eyes state trigger and ruler features in recognizer API * API to change sound volume from Java Script * Option to add effects from an external folder in the sample application (Android) * Improve API (SDK for browsers) **Changed** * Don't reload effect if there was an error in JavaScript. 
**Fixed** * Decreased memory pressure while creating multiple `BanubaSdkManager` instances (Android) * Crash on effects with 3 or more faces * Improved camera FPS on selected low-end Android devices ## \[0.24.1] - 2019-11-06 **Changed** * Updated documentation with examples of the new UI **Fixed** * Video recording on Android * Crash after exiting the application * Memory leak on Android * Crashes when interacting with the Android Demo app ## \[0.24.0] - 2019-11-01 **Added** * Glasses detection * Improved stability (reduced jittering) of face tracking * Extended `Recognizer` API * Recognition results in Python bindings **Changed** * Migration to AndroidX * New redesigned UI for the Banuba SDK Demo App (Android and iOS) **Fixed** * Video texture decoding on Android 10 * Crash while going to the background on iOS * Audio recording speed on Android * Lag during neural network initialization * Various camera fixes for Android ## \[0.23.0] - 2019-10-02 **Added** * A neural network based approach to detecting faces. Quality, detection angles, and speed of face detection were improved * Neural network support for Windows and Web * Unity plugin **Changed** * Sync audio and video during recording on Android * Fast background segmentation on iPhone 6 and lower * Correct neural network behaviour during device rotations **Fixed** * Video texture support (Android 10) * Crashes on Adreno chipsets * Stability fixes ## \[0.22.0] - 2019-08-28 **Added** * Lip colouring API in the `Beauty` effect * Option to switch off face recognition in `config.json` * Option to set the preferred frame rate on iOS **Changed** * Lips segmentation neural network updated (Android) **Fixed** * Rendering on Adreno GPUs * Video texture decoding issue on Android * Android crash on the app coming from the background * Crash on iPhone 5 after video capture ## \[0.21.0] - 2019-08-05 **Added** * Hair and lips recoloring in the "Beauty" effect * `EffectPlayer` threading model documentation * SDK feature documentation * API to check if the device is compatible with the Neural Networks player **Changed** * `BanubaSdkManager` can be instantiated more than once (see "Migration Guides").
* Use the background AR 3D Mask transform for the background separation layer from `config.json` **Fixed** * Sample app signing (iOS) * Rendering bugs after the effect switch * Correct screenshot size on Android * "End touch" event (iOS) ## \[0.20.2] - 2019-07-24 **Fixed** * Background separation layer from config.json (Android) ## \[0.20.1] - 2019-07-17 **Fixed** * Photo processing with MSAA enabled on Android ## \[0.20.0] - 2019-07-12 **Added** * Sample ASMR effects * Render passes * Rendered frame forwarding as byte arrays (Android) * Ability to debug JS * Reset effect cache API call **Changed** * Watermark gravity (Android) * Sample background separation effect blending improved * Background separation feature respects gyroscope data * Ignore the gyroscope during photo and video processing * Beauty effect improvements ## \[0.19.1] - 2019-06-24 **Added** * Colour post-processing effect * Face rect API from the face recognition result **Changed** * Beauty effect improvements **Fixed** * Post-processing effect (when applied to a framebuffer) * Image glitches and crashes in photo editing mode (Android) ## \[0.19.0] - 2019-06-17 **Added** * Bokeh effect example * Icons for sample effects * New documentation; programming guides have been added * Rendering view transformation API * Post-process library * Beauty app example **Changed** * Removed `Beauty` effect API parameters: * eyes\_sharping\_str * blur\_bg\_enable * blur\_lod * remove\_bag\_intensity * eyes\_luts * Renamed `Beauty` effect API parameters: * makeup\_tex -> eyebrows\_tex * makeup\_alpha -> eyebrows\_alpha * eyebrows\_tex -> lashes\_tex * eyebrows\_alpha -> lashes\_alpha * Gravity in effects now respects device orientation * Removed life-cycle methods from `BanubaSdkManager` on Android * External texture is disabled by default (Android) * Action Units sample effect updated * Use only one camera session for all tasks (Android) * Improved photo processing speed * Sound Changer is supplied as a separate plugin * Swift 5 support **Fixed** * Gracefully handle exceptions on OS X * Missing and frozen video textures on iOS and Android * OpenGL crashes on Android * Neural network overload on Android * Rendering bugs for the Web version * Stretched camera preview (iOS) ## \[0.18.1] - 2019-05-29 **Added** * SVG watermark support (Android) * Proguard rules for the banuba\_sdk module (Android) **Changed** * 'banuba\_sdk' is now supplied in compiled form * `BNBFullImageData` can be created from an RGB `CVPixelBuffer` with padding * Suspend frame processing while taking a low-res photo (Android) * Beauty effect: return default parameters after animation * Restart the camera preview session on HR photo (Android) **Fixed** * Video texture freeze (Android) * Crash during render size change ## \[0.18.0] - 2019-05-24 **Added** * Processing a bitmap and applying the selected effect to it (Android) * Image editing mode in platform modules (Android) * Acne removal technology in photo processing * Watermarks on video (Android) * Considering device orientation when taking photos * Ability to set up several listeners in EffectPlayer
* Support for Bitmap in the FullImageData constructor (Android) * Universal framework for devices and simulators (iOS) **Changed** * Added assertions in the EffectPlayer life cycle (for video processing) * The hair segmentation neural network is updated **Fixed** * Losing face orientation after an Android activity restart * Physics on multiface effects is working correctly * The app crashed on 32-bit Android devices with enabled neural networks * ActionUnits improvements ## \[0.17.1] - 2019-04-24 **Fixed** * Exposure settings (iOS) * Continuous photo rendering with updated parameters ## \[0.17.0] - 2019-04-18 **Added** * Continuous photo rendering with updated parameters * Conversion-free RGB input support * Image file processing example (iOS) * Support for landscape frame input * API to check Android hardware performance **Changed** * Documentation improved * Swift 4.2 support in the example app * Updated lip segmentation neural network * Improved rendering quality on high-end Android devices **Fixed** * Removed duplicate functionality in the Android samples * Correct video orientation (iOS) ## \[0.16.0] - 2019-04-03 **Added** * Neural networks for Android: lips, skin, hair, eyes, and iris segmentation; background separation * Neural network rendering for Android * Full body segmentation neural network for iOS * Release binaries for Windows * New post-process effects: acid whip, cathode, and rave * x86\_64 build variant for iOS simulators * Face detection in any orientation (Android) **Changed** * Camera FPS increased on Huawei devices * Video is now paused when the app is in the background * Process screenshot or HQ camera option for Android * Performance on low-end Android devices **Fixed** * Video texture playback * Audio resume after background (Android) * Launch time on first run (Android) ## \[0.15.0] - 2019-03-14 **Added** * Action Units and Blend Shapes * Take high resolution photos with effects, camera switches, and video recording (Android) * Post-processing stage with simple effects * A new neural network for eye segmentation (iOS) * Multi-touch **Changed** * Method to process photos from a file re-added to the API * Hide eyes if there is no face in the test\_Eyes effect * Binary size reduced for iOS **Fixed** * Photo in Landscape mode on iOS * Animation position on photos * Acne removal performance on photos * Sounds after background (Android) ## \[0.14.3] - 2019-02-21 **Fixed** * Android crash with external texture * Rendering area size for iOS * Java documentation ## \[0.14.2] - 2019-02-09 **Fixed** * Version number in the iOS framework ## \[0.14.1] - 2019-02-01 **Added** * iPad support for the Demo app * Analytics serialization * New lifecycle: the effect is paused before going to the background * Video processing (desktop only) **Changed** * Photo processing optimization * Android Demo Activity GC optimization ## \[0.14.0] - 2019-01-15 **Added** * Haptic feedback. **Changed** * Client APIs are automatically generated for both Java and Obj-C. * Documentation reflecting Java and Obj-C classes.
**Fixed** * Draw state after VAO modification made by external code (Android). * Sound session restoration on iOS. * FPS degradation on video textures (Android). ## \[0.13.3] - 2019-01-15 **Fixed** * Audio session configuration (iOS) ## \[0.13.2] - 2019-01-14 **Changed** * Restore the old RFX classifier ## \[0.13.1] - 2019-01-11 **Added** * Watermarks on video (iOS only) **Changed** * Beauty soft light texture without eye shadows **Fixed** * Stretched picture during video preview on iOS * JS calls with arguments * Crash fixes ## \[0.13.0] - 2018-12-27 **Added** * Ability to modify the user's voice (voice changer); iOS only * A smaller face recognition classifier * Eye segmentation textures * Neural network for face detection (optional, disabled by default) * Acne removal * Lip segmentation **Changed** * Improved hair segmentation on Android * BanubaSdkManager improvements on Android **Fixed** * FPS calculation * Overdraw on Android * Black screen on Mali devices ## \[0.12.6] - 2018-12-20 **Fixed** * Reverted unnecessary cropping of the video pixel buffer ## \[0.12.5] - 2018-12-19 **Fixed** * Video recording for a custom size of the input frame ## \[0.12.4] - 2018-12-18 **Added** * Functions for retrieving effect and screen sizes from JS **Fixed** * Rendering artefacts near the eyelid in a beauty effect (z-fighting) * Camera initial mode fix, custom aspect ratio support * Adjusted the exposure settings configuration method ## \[0.12.3] - 2018-12-14 **Fixed** * Beautification issues at high resolution * Coordinate conversion in touch events ## \[0.12.2] - 2018-12-12 **Fixed** * Video recording (copy + flip on the BanubaSDK side), memory management improvements ## \[0.12.1] - 2018-12-11 **Changed** * Turn off the frame\_brightness feature by default. * Enable the gyroscope on demand.
* Image processing improved **Fixed** * Exposure point settings (iOS) **Added** * Effect events for Android * Touch events for Android ## \[0.12.0] - 2018-12-04 **Changed** * EffectPlayer life cycle methods updated * Strict checks of the surface lifecycle * Face detection algorithm has been reverted to a more stable implementation * Resource finding path changed to a subfolder (bnb) * Naming banuba.core -> banuba.sdk (iOS) * VideoRecording via TextureCache (iOS) **Added** * Search locations in the ResourceManager error message * Ability to set the log level and subscribe to the SDK's log callback from client code * Methods for getting CPU info on Android * Experimental neural network support on Android (background separation, hair segmentation) (special build required) * Bin record interface in BanubaCore * Improved error reporting while loading an effect * Beautification effect added to resources (special build required) * Method to process a single image with custom input and output formats (ability to take high-quality photos) * Experimental skin segmentation NN added to iOS * Experimental eye segmentation NN added to iOS * Ability to flip a rendered image along the Y axis * Touch events on iOS **Fixed** * pushFrameYVU420 method * Crash in effect\_context::update (race condition) * Return a draw error when effect loading failed * The Bokeh effect works on Android * Slow wireframe in DebugRenderer * iOS crashes in shader compilation * Unpack alignment for textures with `width * components` not a multiple of 4 * Image processing with an external camera texture * BG copy MRT on ANGLE WebGL (Web) * Depth test\&write state after morph with hair compacting * The minimum and maximum possible coordinates (in face recognition) * The exposure point ## \[0.11.2] - 2018-11-20 **Fixed** * Drawing artefacts with some effects ## \[0.11.1] - 2018-11-15 **Changed** * Updated face recognition algorithm. **Added** * Ability to link with a simulator on iOS. **Fixed** * Release number for frameworks * Runtime crashes with aligned new on iOS 10 * Correctly stop the Effect Player in onDestroy and initialize it in onCreate ## \[0.11.0] - 2018-11-08 **Added** * Sound volume control in effects. * Callback to receive events from effects (mainly for analytics). * Support for the Bokeh effect. **Changed** * A new face model classifier. * Rendering performance optimization. **Fixed** * Various crashes ## \[0.10.2] - 2018-10-08 **Fixed** * Issues with effects display (black background instead of the camera texture). * Dynamic shadow lags by one frame. ## \[0.10.1] - 2018-10-05 **Fixed** * Black AR 3D Mask effects in face recognition have been fixed on some Android devices. ## \[0.10.0] - 2018-10-05 **Added** * Support for new pixel formats in effect\_player::process\_frame: RGBA, BGRA, ARGB, RGB, and BGR. Not supported in BanubaCore yet. * Binding EffectPlayer and Recognizer for Python. **Fixed** * Launch on iOS 10. * Issue with the audio engine lifecycle. * Render: The issue with the effect's shadows has been fixed.
* Render: The issue with the depth buffer on the Xiaomi Redmi 4a has been fixed. **Changed** * Render optimization: * Excess loading of 1x1 textures for background and hair AR 3D Masks was removed when these features were not used. * Colour correction for easysnap (background loading of lut textures speeds up beauty effect loading and slightly speeds up lut layer rendering). * Broken effects fixes. ## \[0.9.1] - 2018-10-02 **Fixed** * The dynamic shadows drawing issue has been fixed (for Banuba 3.0). **Changed** * The EffectPlayer for Backend major version was raised to 5.0. * Render performance optimization. * The Beauty Effect for both platforms should be taken from 2b959fa12a966956c6f158ded762b634eac988de or later (update the Android effect). ## \[0.9.0] - 2018-09-28 **Fixed** * A few crashes in the face recognition engine were fixed. * The issue with the AR 3D Mask not respecting head volume after switching effects has been fixed. **Changed** * Render performance optimization. ## \[0.8.6] - 2018-09-24 **Added** * Android version assembled with NDK 18. **Changed** * Face recognition: improved performance, anti-tremble, smoothing, and so on. ## \[0.8.5] - 2018-09-21 **Fixed** * Fixed initialization crash. ## \[0.8.4] - 2018-09-21 **Added** * Debug render antialiasing. **Changed** * Beautification effect performance has increased. * Face recognition performance has increased. ## \[0.8.3] - 2018-09-21 **Changed** * The binary file size was reduced for iOS (17.7 MB against 19.7 MB). ## \[0.8.2] - 2018-09-19 **Fixed** * Issues with single frame processing were fixed. **Changed** * Performance has improved. ## \[0.8.1] - 2018-09-14 **Added** * Minor render optimizations (excess glGetInteger calls were removed). **Changed** * The low-light feature has been reverted because it had issues. ## \[0.8.0] - 2018-09-12 **Added** * A new audio player. * Callback on low light detection. **Changed** * Improvements in the face recognition library (performance in multiface mode has been increased, recognition angles have been increased, and the predictability of recognition work time has been improved with detection distributed across frames by a self scheduler). * Improvements in render performance (the number of parameters passed into shader interpolation was decreased; pixel shader patch in glfx). * More accurate drawing of the camera image (NEAREST filtration). ## \[0.7.2] - 2018-09-11 **Fixed** * Issues with `effect_player_wrap-ios.framework` were fixed. ## \[0.7.1] - 2018-09-07 **Fixed** * Fixed a crash in the v0.7.0 release. ## \[0.7.0] - 2018-09-05 **Added** * Photo mode frame processing (high resolution frame processing). * Consistency mode for the camera external texture (Android). * Ability to get the version of EffectPlayer from the backend.
* Ability to get the number of rendered frames and pass the number in push\_frame. * Face recognition finds faces at a large angle (up to 30 degrees). * Ability to set texture parameters through suffixes in their names. * The framework version is transferred to the Manifest after building; AAR assembly is completely automated (Android). **Fixed** * Fixed an issue with context loss on Android. * Effects with occlusion are fixed. * Small bug fixes. **Changed** * The strong reference to the Delegate has been removed on iOS. ## \[0.6.2] - 2018-08-31 **Fixed** * Fixed a deadlock when drawing the regular camera texture. ## \[0.6.1] - 2018-08-29 **Added** * Consistent external texture for Android. * Zeroing the face counter on the onStop event. ## \[0.6.0] - 2018-08-21 **Fixed** * Fixed and significantly improved inconsistency modes (which were shipped earlier in an unversioned release). * Image strides from the camera were fixed for Android. * Fixed an issue with long camera initialization. **Changed** * The default iOS mode was changed to inconsistency-without-face (shipped earlier in an unversioned release). * Updates in gender recognition (it now works fast and runs once every 3 seconds). ## \[0.5.2] - 2018-07-31 **Added** * Possibility to enable inconsistency mode on iOS; it is possible to skip frame processing to render the image. * Possibility to receive device orientation in a script (it is possible to disable background separation according to orientation). * Possibility to create a few recognizer instances (basically for DiffCat; not yet present in the EP API). * Possibility to transmit the camera matrix into the script (needed for morphing creation according to the distance from the camera). * Recognizer coverage with performance tests has started. **Fixed** * A potential crash with keeping color\_plane was fixed. * Fixed wrong\_fb\_after\_morph. ## \[0.5.1] - 2018-07-26 **Fixed** * The 'Beauty settings don't apply' issue has been fixed. **Changed** * Unnecessary Android resources were removed. ## \[0.5.0] - 2018-07-24 **Added** * Consistency/inconsistency mode switching (Android). * Blur background. * Performance collection using systrace (Android). * 32-bit support (but slow at the moment). * Possibility to transfer the number of a frame that has come from the camera. * Possibility to disable background separation and other recognizer features from scripts. * Switching between external textures and drawing from ImageReader (Android). **Fixed** * Huawei issues (Android). * Colour conversion bug fix (colour correction). **Changed** * Optimized morphing.

---

# API Overview

* iOS
* Android
* Web
* Desktop

![image](/far-sdk/assets/images/ios_player_api_overview-329f7e9e59ebec11fef743ed4309d096.svg)

**Banuba SDK** for **iOS** can be divided into three main entities: `Input`, `Player` and `Output`. The many input options, multiplied by the many output options, cover a wide range of use cases.
## Input

Processes frames from one of the producers:

* `Camera` - a real-time `CameraDevice` feed
* `Photo` - an image from the gallery or a photo taken with the `CameraDevice`
* `Stream` - a frame sequence from a *WebRTC* stream or any other provider
* Custom - your own implementation of the `Input` protocol

## Player

Lets you `use` different data inputs, such as a `Camera` feed or a `Photo`, apply an **effect** on top of it, and `use` several outputs, such as a `View` and a `Video` file, simultaneously. The `Effect` component makes up an essential part of the SDK usage. The **effect** is represented as a folder with scripts and resources and can be loaded with the `load` method.

Supports the following rendering modes:

* `loop` *(default)* - render in the display-linked loop with the defined FPS
* `manual` - render manually by calling the `render` method

## Output

Presents a rendered frame onto one of the available surfaces:

* `View` - on-screen presentation
* `Frame`, `PixelBuffer`, `PixelBufferYUV` - in-memory presentation
* `Video` - video file presentation
* Custom - your own implementation of the `Output` protocol

## CameraDevice

Accesses the device's camera to generate a feed of frames in real time or to take high-quality photos. By default, it tracks the UI orientation to properly manage frame rotation.

## RenderTarget

Manages a `CALayer` object with a `Metal` context and provides offscreen rendering for the `Player` and presentation for the `Output`.

## Use cases

Common use cases and relevant samples are described in [this repository](https://github.com/Banuba/banuba-sdk-ios-samples).
**Tip:** See more use cases in [Samples](/far-sdk/tutorials/development/samples.md).

![image](/far-sdk/assets/images/android_player_api_overview-69f91128c9d7c1f1e3e7286f7f919f36.svg)

**Player API** is a set of classes and methods that help facilitate and speed up the integration of the Banuba SDK into applications. The **Player API** concept distinguishes three main entities: **Input**, **Player**, and **Output**.

Basic **Player API** packages:

* `com.banuba.sdk.input` — all the classes responsible for the input data, the Input entity.
* `com.banuba.sdk.player` — the rendering thread and the `Player`, the Player entity.
* `com.banuba.sdk.output` — all the classes responsible for the output data, the Output entity.
* `com.banuba.sdk.frame` — the pixel buffer used for both input and output.

## Input

**Input** receives frames from a camera, image, or user input and provides them to the `Player`. The `Player` can only work with one Input at a time.

* `CameraInput` — this class provides frames from the front or rear camera in real time.
* `PhotoInput` — this class provides frames as photos taken by the camera or loaded from a file.
* `StreamInput` — this class provides frames from user-supplied data.
* `VideoInput` — this class provides frames from a video file.

## Player

The **Player** class requests frames from **Input**, then processes these frames and passes the results to one or more **Outputs**. By default, frame processing runs automatically. Optionally, you can enable on-demand processing, as shown in the sketch below.
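A minimal sketch of switching between the two processing modes; it reuses only the `setRenderMode` and `render` calls that appear in the `VideoInput` example further below, and assumes `player` has already been created and wired to an input and an output:

```
// Switch to manual mode: the player processes a frame only when asked to.
player.setRenderMode(Player.RenderMode.MANUAL)

// Synchronously process and render exactly one frame on demand.
player.render()

// Return to the default mode, where frames are processed automatically.
player.setRenderMode(Player.RenderMode.LOOP)
```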
## Output

**Output** receives the result of the work from the **Player** and renders it to a surface or a texture, writes it into a file, or provides the user with frames in a supported format.

* `FrameOutput` — provides the user with data in the form of a buffer of pixels.
* `SurfaceOutput` — this class renders frames to the `SurfaceView`.
* `TextureOutput` — this class renders frames to the `TextureView`.
* `VideoOutput` — this class writes frames to a video file.

## Input and Output of user data

The **Input** and the **Output** can operate on a pixel buffer.

* `FramePixelBuffer` — provides access to an array of pixels as a byte buffer. In the `StreamInput` class, it is used as the input data buffer, and in the `FrameOutput` class, it is used as the output data buffer.
* `FramePixelBufferFormat` — the pixel buffer format can be one of: `BPC8_RGBA`, `I420_BT601_FULL`, `I420_BT601_VIDEO`, `I420_BT709_FULL`, or `I420_BT709_VIDEO`.

## Use cases

### CameraDevice

The camera device is associated with the device's physical camera. All camera settings are made using this class.

```
// Variable declaration somewhere inside the activity
private val cameraDevice by lazy(LazyThreadSafetyMode.NONE) {
    CameraDevice(requireNotNull(this.applicationContext), this@MainActivity)
}

...

// Somewhere in the initialization code
/* You can change the camera settings at any time as follows. */
cameraDevice.configurator
    .setLens(CameraDeviceConfigurator.LensSelector.BACK) /* Set the back camera as input */
    .setVideoCaptureSize(CameraDeviceConfigurator.SD_CAPTURE_SIZE) /* Video capture size 640x480 */
    .setImageCaptureSize(CameraDeviceConfigurator.HD_CAPTURE_SIZE) /* Image capture size 1280x720 */
    .commit() /* You must call this method to apply the new settings. */
/* But if you are happy with the camera's default settings, you can safely skip the manual configuration. */

/* Start the camera; the player then starts taking frames. */
/* You must obtain the camera permission before calling cameraDevice.start(). */
cameraDevice.start()

...

// Somewhere in the interruption code
/* After this method, the camera stops capturing frames and transmitting them to the player */
cameraDevice.stop()
```

### CameraInput

Allows receiving and processing frames from the `CameraDevice`. The `Player` processes only the most recently received frame; all other frames are discarded. Frames are processed in online mode.

```
// Somewhere in the initialization code
/* There is no need to create a variable for this class, since it is only used to
   transfer frames from the camera to the player. The cameraDevice must be created
   beforehand, though. */
player.use(CameraInput(cameraDevice), ...)
```

### PhotoInput

Allows you to process photos from the `CameraDevice`, from the Android [`Bitmap`](https://developer.android.com/reference/android/graphics/Bitmap), from the Android [`Image`](https://developer.android.com/reference/android/media/Image), or from the [`FramePixelBuffer`](#framepixelbuffer), with or without a given orientation and mirroring. The photo is processed in offline mode.

```
// Somewhere in the image processing code
val photoInput = PhotoInput()
player.use(photoInput, ...)

/* cameraDevice must be created and started before taking a photo */
photoInput.take(cameraDevice, object : CameraDevice.IErrorOccurred {
    override fun onError(exception: Exception) {
        /* Did an error occur? For now we just ignore it */
    }
})
```

### StreamInput

Pushes a stream of user data to the `Player`. The data can come from anywhere, for example, over the network. Frames are processed in online mode. A sketch of producing the timestamps follows the example.

```
// Variable declaration
private val streamInput by lazy(LazyThreadSafetyMode.NONE) { StreamInput() }

...

// Somewhere in the initialization code
player.use(streamInput, ...)

...

// Somewhere in the code when receiving the next frame
/* You can see an example of creating a FramePixelBuffer below in the FramePixelBuffer section */
val frame = FramePixelBuffer(...)
/* myFrameTimestampInNanoseconds - passing a timestamp is not mandatory; you can always
   pass 0. But if you use VideoOutput, recording into the video file is based on the time
   you put here. Be careful with the timestamp: it must be passed in nanoseconds. */
streamInput.push(frame, myFrameTimestampInNanoseconds)
```
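Since the timestamps passed to `streamInput.push` drive `VideoOutput` timing, they should come from a monotonic clock. A minimal sketch of one way to produce them; the `myStreamStartNs` value and the helper function are illustrative, not part of the SDK:

```
// Hypothetical snippet, assumed to live in the class that owns `streamInput`.
// Timestamps must be in nanoseconds and should grow monotonically;
// System.nanoTime() is a monotonic clock, unaffected by wall-clock changes.
private val myStreamStartNs = System.nanoTime()

private fun pushIncomingFrame(frame: FramePixelBuffer) {
    // Timestamp relative to the start of the stream, in nanoseconds.
    streamInput.push(frame, System.nanoTime() - myStreamStartNs)
}
```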
// Somewhere in the initialization code
/* Asynchronous video file processing */
videoInput.processVideoFile(File("path_to/my_video.mp4"), object : VideoInput.IVideoFrameStatus {
    override fun onStart() {
        /* Start of video extraction */
        /* We switch the player to manual mode so that we can process the video file frame by frame */
        player.setRenderMode(Player.RenderMode.MANUAL)
    }

    override fun onFrame() {
        /* A video frame was extracted and pushed to the player */
        /* We call synchronous rendering so that the player has time to process the frame */
        player.render()
    }

    override fun onError(throwable: Throwable) {
        /* An error occurred; for simplicity we just ignore it here */
    }

    override fun onFinish() {
        /* Processing of the video file has completed. This function is always called if
           onStart() was called, even if errors occurred and nothing was read from the
           video file. We also return the player to the previous rendering mode */
        player.setRenderMode(Player.RenderMode.LOOP)
    }
})
...
// Somewhere in the interruption code
/* If processing the current video file is no longer needed, call this method */
videoInput.stopProcessing()
```

### FramePixelBufferFormat

The pixel buffer format can be one of:

* `BPC8_RGBA` - 4 bytes per pixel, analogous to the Android type [`Bitmap.Config.ARGB_8888`](https://developer.android.com/reference/android/graphics/Bitmap.Config).
* `I420_BT601_FULL` - yuv i420 image encoded with the bt601 standard, full range.
* `I420_BT601_VIDEO` - yuv i420 image encoded with the bt601 standard, video range.
* `I420_BT709_FULL` - yuv i420 image encoded with the bt709 standard, full range.
* `I420_BT709_VIDEO` - yuv i420 image encoded with the bt709 standard, video range.

### FramePixelBuffer

A wrapper for pixel images that can store images of different formats and provides convenient access to all image parameters.

Example of creating an RGBA FramePixelBuffer:

```
/* The format BPC8_RGBA has a single plane of densely packed pixels */
val myWidth = 400 /* width of the image */
val myHeight = 300 /* height of the image */
val myPixelStride = 4 /* 4 because of the BPC8_RGBA format, and the pixels are tightly packed */
val myRowStride = myWidth * myPixelStride /* stride is bytes per row of pixels */
val myArrayOfPixels: ByteBuffer = ... /* array of pixels with RGBA data, 400x300 pixels */
val frame = FramePixelBuffer(
    myArrayOfPixels,
    intArrayOf(0 /* 0 because the pixels start from the zero byte of the buffer */),
    intArrayOf(myRowStride),
    intArrayOf(myPixelStride),
    myWidth,
    myHeight,
    FramePixelBufferFormat.BPC8_RGBA
)
```

Example of creating a YUV FramePixelBuffer:

```
/* You can read more about YUV formats here:
   https://learn.microsoft.com/en-us/windows/win32/medfound/recommended-8-bit-yuv-formats-for-video-rendering */
/* Any I420_*** format has three planes of densely packed pixels */
val myWidth = 400 /* width of the image */
val myHeight = 300 /* height of the image */
val myArrayOfPixels: ByteBuffer = ... /* array of pixels with i420 data, 400x300 pixels */
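/* i420 layout: a full-resolution Y plane followed by U and V planes that are
   each half the width and half the height of the image. */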
val myYPlanePixelStride = 1 /* 1 because of the i420 format, and the pixels are tightly packed */
val myUPlanePixelStride = 1 /* 1 because of the i420 format, and the pixels are tightly packed */
val myVPlanePixelStride = 1 /* 1 because of the i420 format, and the pixels are tightly packed */
val myYPlaneRowStride = myWidth /* the Y plane is full width */
val myUPlaneRowStride = myWidth / 2 /* the U plane is half width */
val myVPlaneRowStride = myWidth / 2 /* the V plane is half width */
val myOffsetToYPlane = 0 /* the Y plane starts from the beginning of the buffer myArrayOfPixels */
val myOffsetToUPlane = myOffsetToYPlane + myYPlaneRowStride * myHeight /* the U plane follows the full-size Y plane */
val myOffsetToVPlane = myOffsetToUPlane + myUPlaneRowStride * (myHeight / 2) /* the V plane follows the half-height U plane */
val frame = FramePixelBuffer(
    myArrayOfPixels,
    intArrayOf(myOffsetToYPlane, myOffsetToUPlane, myOffsetToVPlane),
    intArrayOf(myYPlaneRowStride, myUPlaneRowStride, myVPlaneRowStride),
    intArrayOf(myYPlanePixelStride, myUPlanePixelStride, myVPlanePixelStride),
    myWidth,
    myHeight,
    FramePixelBufferFormat.I420_BT601_FULL
)
```

### Player

The main class with which you manage the **Banuba SDK**, **effects**, and the entire rendering process. Rendering in this class is done in a separate thread, the **render thread**.

```
// Variable declaration
private val player by lazy(LazyThreadSafetyMode.NONE) { Player() }
...
// Somewhere in the initialization code
/* The Banuba SDK must be initialized before the player starts working;
   otherwise there will be a crash indicating the error */
BanubaSdkManager.initialize(this, <#MY BANUBA CLIENT TOKEN#>)
/* cameraDevice and textureOutput must be created and declared before use */
/* You can specify more than one output. The number of outputs is not limited, but every
   additional one affects performance. For an output to a surface you won't notice the
   difference, but for rendering into a video file the performance directly depends on
   the capabilities of the phone. You can use multiple outputs as follows:
   player.use(CameraInput(cameraDevice), arrayOf(myOutput1, myOutput2, ..., myOutputN)) */
player.use(CameraInput(cameraDevice), textureOutput)
/* Loading any effect */
player.loadAsync("PineappleGlasses")
/* And running the player */
player.play()
...
// Somewhere in the destruction code
/* After you are done using the player, you must free all resources by calling close() */
player.close()
```

### PlayerTouchListener

This class passes user touches to the **Banuba SDK**. This is required by some **effects** and is one of the ways to interact with them.

```
// Somewhere in the initialization code
/* The player must be created and declared before use. The mySurfaceView is a UI element
   and must also exist in the layout */
mySurfaceView.setOnTouchListener(PlayerTouchListener(this.applicationContext, player))
```

### FrameOutput

Allows you to receive the processing result frames as an array of pixels in the desired format and orientation.

```
// Variable declaration
private val frameOutput by lazy(LazyThreadSafetyMode.NONE) {
    FrameOutput(object : FrameOutput.IFramePixelBufferProvider {
        override fun onFrame(output: IOutput, framePixelBuffer: FramePixelBuffer?) {
            /* This is your code for working with the framePixelBuffer */
        }
    })
}
...
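/* The onFrame() callback above receives each processed frame as a
   FramePixelBuffer in the format and orientation configured below via
   setFormat() and setOrientation(). */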
// Somewhere in the initialization code
frameOutput.setFormat(FramePixelBufferFormat.I420_BT601_FULL)
frameOutput.setOrientation(Orientation.UP, false)
player.use(..., frameOutput)
...
// Somewhere in the destruction code
/* After finishing using the frameOutput, you must free all resources by calling the close() method */
frameOutput.close()
```

### SurfaceOutput

Allows you to display the processing result on a [SurfaceView](https://developer.android.com/reference/android/view/SurfaceView).

```
// Variable declaration
/* The mySurfaceView is a UI element and must exist in the layout */
private val surfaceOutput by lazy(LazyThreadSafetyMode.NONE) { SurfaceOutput(mySurfaceView.holder) }
...
// Somewhere in the initialization code
player.use(..., surfaceOutput)
...
// Somewhere in the destruction code
/* After finishing using the surfaceOutput, you must free all resources by calling the close() method */
surfaceOutput.close()
```

### TextureOutput

Allows you to display the processing result on a [TextureView](https://developer.android.com/reference/android/view/TextureView).

```
// Variable declaration
/* The myTextureView is a UI element and must exist in the layout */
private val textureOutput by lazy(LazyThreadSafetyMode.NONE) { TextureOutput(myTextureView) }
...
// Somewhere in the initialization code
player.use(..., textureOutput)
...
// Somewhere in the destruction code
/* After finishing using the textureOutput, you must free all resources by calling the close() method */
textureOutput.close()
```

### VideoOutput

This class allows you to record the processing result to a video file.

```
// Variable declaration
private val videoOutput by lazy(LazyThreadSafetyMode.NONE) { VideoOutput() }
...
// Somewhere in the initialization code
player.use(..., videoOutput)
/* Before you start recording a video, you must obtain permission to write to the storage. */
videoOutput.startRecording(File("path_to/my_output_video.mp4"))
...
// Somewhere in the interruption code
/* Stop the video recording when it is no longer needed */
videoOutput.stopRecording()
...
// Somewhere in the destruction code
/* After you are done using the videoOutput, you must free all resources by calling the close() method */
videoOutput.close()
```

![image](/far-sdk/assets/images/web_overview-747754fce3a1ba126bdff47046ba649d.svg)

`BanubaSDK.js` exports different APIs for **Web AR** development, such as the *Player*, *Effect*, and several types of *Input* and *Output*. A generic workflow looks like:

> *Input* -> *Player* + *Effect* -> *Output*

### Player

The *Player* allows you to consume different data inputs like a webcam or an image file, to apply an effect on top of it, and to produce an output like rendering to a DOM node or an image file.

### Effect

The *Effect* allows you to load an effect or a face filter as a remote or local archive.
### Input

The *Input* can be one of the following:

* Webcam
* Image as Blob or URL
* Video as Blob or URL
* 3rd-party MediaStream like an HTMLVideoElement stream or a WebRTC stream

### Output

The *Output* can be one of the following:

* HTML Element
* Image as Blob
* Video as Blob
* MediaStream that can be used by 3rd parties, like a WebRTC peer connection

The many input options multiplied by the many output options cover lots of use cases, such as:

* Photo booth app with real-time webcam video processing and photo capturing
* Photo and video file post-processing app
* P2P video call app with a face filter applied

And many more.

## How it looks in JavaScript

Real-time webcam video processing and DOM rendering:

```
import { Webcam, Player, Module, Effect, Dom } from "https://cdn.jsdelivr.net/npm/@banuba/webar/dist/BanubaSDK.browser.esm.js"

const player = await Player.create({ clientToken: "xxx-xxx-xxx" })
await player.addModule(new Module("https://cdn.jsdelivr.net/npm/@banuba/webar/dist/modules/face_tracker.zip"))
await player.use(new Webcam())
player.applyEffect(new Effect("Glasses.zip"))
Dom.render(player, "#webar-app")
```

And with screenshot capturing:

```
import { Webcam, Player, Effect, Module, Dom, ImageCapture } from "https://cdn.jsdelivr.net/npm/@banuba/webar/dist/BanubaSDK.browser.esm.js"

const player = await Player.create({ clientToken: "xxx-xxx-xxx" })
await player.addModule(new Module("https://cdn.jsdelivr.net/npm/@banuba/webar/dist/modules/face_tracker.zip"))
await player.use(new Webcam())
player.applyEffect(new Effect("Glasses.zip"))
Dom.render(player, "#webar-app")

const capture = new ImageCapture(player)
const photo = await capture.takePhoto()
```

![image](/far-sdk/assets/images/desktop_player_api_overview-45d2577b869d02247fa0b0dd15d29a70.svg)

**Banuba SDK** for **desktop** can be divided into three main entities: `input`, `player` and `output`. The many input options multiplied by the many output options cover many use cases.

## Basic Player API interfaces:

* `bnb::player_api::interfaces::input` - receives frames and transfers them to the `player`.
* `bnb::player_api::interfaces::output` - presents frames on a surface or reads them into memory.
* `bnb::player_api::interfaces::player` - frame processing and rendering.
* `bnb::player_api::interfaces::render_target` - the rendering context.
* `bnb::player_api::interfaces::render_delegate` - the connection between the application rendering and the `player` rendering.

## Input

Processes frames from one of the producers:

* `bnb::player_api::live_input` - a live stream with the ability to skip frames.
* `bnb::player_api::photo_input` - a photo or image.
* `bnb::player_api::stream_input` - a stream without frame skipping.
* Custom - your own implementation of the `bnb::player_api::interfaces::input` interface.

## Output

Presents a rendered frame onto one of the available surfaces:

* `bnb::player_api::opengl_frame_output` - an array of pixels; should be used with `opengl_render_target`.
* `bnb::player_api::metal_frame_output` - an array of pixels; should be used with `metal_render_target`.
* `bnb::player_api::texture_output` - a **GPU** texture; the texture type depends on the used `render_target`.
* `bnb::player_api::window_output` - a window surface; must use the same **GAPI** as the used `render_target`.
* Custom - your own implementation of the `bnb::player_api::interfaces::output` interface.

## Player

* `bnb::player_api::player` - processes frames and applies effects.

## Render target

* `bnb::player_api::opengl_render_target` - the **OpenGL** implementation of the `render_target`.
* `bnb::player_api::metal_render_target` - the **Metal** implementation of the `render_target`.

## Render delegate

This is always a custom implementation. To implement the interface, you need to inherit from `bnb::player_api::interfaces::render_delegate` and override three methods:

* `activate()` - activates the rendering context, if necessary.
* `started()` - frame rendering has started. After this, `finished(...)` will be called.
* `finished(int64_t frame_number)` - frame rendering has ended. The frame number is any non-negative number. If `-1` is passed, the frame rendering has failed.

## Input and output of user data

The **input** and the **output** can operate on a pixel buffer.

* `bnb::full_image_t` - provides access to an array of pixels as a byte buffer.
* `bnb::pixel_buffer_format` - the pixel buffer format can be one of: `bpc8_rgb`, `bpc8_rgba`, `bpc8_bgr`, `bpc8_bgra`, `bpc8_argb`, `i420`, `nv12`.

## Use cases

### live_input

Used to receive and subsequently process frames in real time. If a new frame is received while the player has not yet processed the previous frame, the new frame is skipped. Suitable for receiving frames from a camera.

```
#include ...

// Creating an instance
auto input = bnb::player_api::live_input::create();
...
// Adding an input to the player
player->use(input);
...
// Somewhere in the loop
auto frame = bnb::full_image_t::create_bpc8(...);
auto frame_time = get_my_current_timestamp_us();
// Pushing a frame asynchronously into the input
input->push(frame, frame_time);
```

### photo_input

Used for loading and then processing photos.

```
#include ...

// Creating an instance
auto input = bnb::player_api::photo_input::create();
...
// Adding an input to the player
player->use(input);
...
auto filepath = std::string("/path/to/the/photo.png");
// Wherever you need to load a photo
input->push(filepath);
```

### stream_input

Used to receive and subsequently process a stream of frames. Does not skip frames even if the previous frame has not been processed yet. Suitable for processing video streams.

```
#include ...

// Creating an instance
auto input = bnb::player_api::stream_input::create();
...
// Adding an input to the player
player->use(input);
...
// Somewhere in the loop
auto frame = bnb::full_image_t::create_bpc8(...);
auto frame_time = get_my_video_timestamp_us();
// Pushing a frame synchronously into the input
input->push(frame, frame_time);
```

### Custom input

A custom **input** is needed if the standard implementations do not contain the required logic.
```
#include ...
#include ...

class my_custom_input : public bnb::player_api::interfaces::input
{
public:
    my_custom_input()
    {
        auto config = bnb::interfaces::processor_configuration::create();
        m_frame_processor = bnb::interfaces::frame_processor::create_video_processor(config);
    }

    void my_custom_push_method(...)
    {
        m_timestamp_us = get_timestamp();
        auto fi = bnb::full_image_t::create_bpc8(...); // create an RGB image
        auto fd = bnb::interfaces::frame_data::create();
        fd->add_full_img(fi);
        fd->add_timestamp_us(m_timestamp_us);
        m_frame_processor->push(fd);
    }

    frame_processor_sptr get_frame_processor() const noexcept override
    {
        return m_frame_processor;
    }

    uint64_t get_frame_time_us() const noexcept override
    {
        return m_timestamp_us;
    }

private:
    bnb::player_api::frame_processor_sptr m_frame_processor;
    uint64_t m_timestamp_us;
};
```

### opengl_frame_output

Allows you to receive the processing result frames as an array of pixels in the desired format and orientation.

Important

`opengl_frame_output` only works in conjunction with `opengl_render_target`.

```
#include ...

// Creating an instance
auto output = bnb::player_api::opengl_frame_output::create([](const bnb::full_image_t& image) {
    // Work here with the resulting `image`
}, bnb::pixel_buffer_format::bpc8_rgba);
output->set_orientation(bnb::orientation::left, true);
...
// Adding an output to the player
player->use(output);
```

### metal_frame_output

Allows you to receive the processing result frames as an array of pixels in the desired format and orientation.

Important

`metal_frame_output` only works in conjunction with `metal_render_target` and is available only on `Apple` platforms.

```
#include ...

// Creating an instance
auto output = bnb::player_api::metal_frame_output::create([](const bnb::full_image_t& image) {
    // Work here with the resulting `image`
}, bnb::pixel_buffer_format::bpc8_rgba);
output->set_orientation(bnb::orientation::left, true);
...
// Adding an output to the player
player->use(output);
```

### texture_output

Allows you to get the processed frame as a **GPU** texture.

```
#include ...

// Creating an instance
auto output = bnb::player_api::texture_output::create([](const bnb::player_api::texture_t texture) {
    // Work here with the resulting `texture`
});
...
// Adding an output to the player
player->use(output);
```

### window_output

Renders frames onto the window surface, at the specified location and in the specified orientation.

```
#include ...

// Creating an instance with opengl_render_target
auto output = bnb::player_api::window_output::create(nullptr);
...
// Or creating an instance with metal_render_target
CAMetalLayer* metal_layer = get_my_metal_layer(); // When using Metal rendering, you need to pass the render layer.
auto output = bnb::player_api::window_output::create(static_cast<void*>(metal_layer));
...
// Set up orientation and mirroring
output->set_orientation(bnb::orientation::left, true);
...
// Adding an output to the player
player->use(output);
...
// Set the rendering size and position.
output->set_frame_layout(position_left, position_top, render_width, render_height);
```

### Custom output

A custom **output** is needed if the standard implementations do not contain the required logic.

```
#include ...

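// A minimal sketch of a custom output: attach()/detach() are called when the
// output is attached to / detached from the player, and present() draws the
// processed frame using the player's render target.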
class my_custom_output : public bnb::player_api::interfaces::output
{
public:
    void attach() override
    {
        /* the output is attached to the player */
    }

    void detach() override
    {
        /* the output is detached from the player */
    }

    void present(const bnb::player_api::render_target_sptr& render_target) override
    {
        // custom code
        ...
        render_target->present(position_left, position_top, render_width, render_height, my_orientation_matrix_4x4);
    }
};
```

### full_image_t

Three functions are available to create images in different formats:

* `bnb::full_image_t::create_bpc8` - for the image formats `bpc8_rgb`, `bpc8_rgba`, `bpc8_bgr`, `bpc8_bgra`, `bpc8_argb`.
* `bnb::full_image_t::create_nv12` - for nv12 yuv images with the `bt601` or `bt709` color standard and the `full` or `video` color range.
* `bnb::full_image_t::create_i420` - for i420 yuv images with the `bt601` or `bt709` color standard and the `full` or `video` color range.

```
#include ...

// Creating an RGB image
auto rgb_image = bnb::full_image_t::create_bpc8(
    rgb_image_data,                      // uint8_t* raw pointer to the image data
    width * 3,                           // rgb image stride
    width,                               // width of the image
    height,                              // height of the image
    bnb::pixel_buffer_format::bpc8_rgb,  // image format
    bnb::orientation::up,                // image orientation
    false,                               // image mirroring
    [](uint8_t* data) { /* delete the image data */ } // deleter of the image data
);
```

Retrieving image data:

```
#include ...

// Getting RGB image data
auto rgb_image = bnb::full_image_t::create_bpc8(...);
uint8_t* rgb_image_bytes = rgb_image.get_base_ptr_of_plane(0);
int32_t width = rgb_image.get_width_of_plane(0);
int32_t height = rgb_image.get_height_of_plane(0);
int32_t stride = rgb_image.get_bytes_per_row_of_plane(0);
```

### player

The **player** requests frames from the input, then processes those frames using the Banuba SDK and passes the results to one or more outputs. By default, frame processing in the Banuba SDK works automatically. If you wish, you can enable on-demand processing.

```
#include ...

auto fps = 30;
auto render_target = /* render target */;
auto renderer = /* my custom renderer */;
// Creating an instance
auto player = bnb::player_api::player::create(fps, render_target, renderer);
...
// Configuring the player and loading the effect
player->use(/* some input */)
    .use(/* some output */)
    .use(/* some output */);
player->load_async(/* path to the effect */);
```

### opengl_render_target

The render target allows the player to render the processed frames onto outputs using OpenGL.

Important

`opengl_render_target` only works with OpenGL-based outputs.

```
#include ...

// Creating an instance
auto render_target = bnb::player_api::opengl_render_target::create();
...
// Attach the render target to the player
auto player = bnb::player_api::player::create(/* fps value */, render_target, /* some renderer */);
```

### metal_render_target

The render target allows the player to render the processed frames onto outputs using Metal.

Important

`metal_render_target` only works with Metal-based outputs.

```
#include ...

// Creating an instance
auto render_target = bnb::player_api::metal_render_target::create();
...
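// Reminder: pair this render target only with Metal-based outputs
// (metal_frame_output, or window_output created with a CAMetalLayer).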
// Attach the render target to the player
auto player = bnb::player_api::player::create(/* fps value */, render_target, /* some renderer */);
```

### Custom render delegate

Necessary for connecting the **player** and the application in the context of rendering.

```
class my_custom_renderer : public bnb::player_api::interfaces::render_delegate
{
public:
    my_custom_renderer() { }

    void activate() override { /* make the context current */ }
    void started() override { /* frame rendering started */ }
    void finished(int64_t frame_number) override { /* frame rendering finished */ }
};
```

Still have any questions about FaceAR SDK? Visit our [FAQ](https://www.banuba.com/faq/) or [contact our support](/far-sdk/support/.md).

---

# android

![image](/far-sdk/assets/images/android_player_api_overview-69f91128c9d7c1f1e3e7286f7f919f36.svg)

The Android **Player API** (packages, the Input, Player, and Output entities, and the per-class use cases) is described in full in the API Overview above. As a quick reminder, a minimal camera-to-screen setup assembled strictly from the snippets shown there looks like the sketch below (the `startFaceAr`/`stopFaceAr` helper names are illustrative):
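```
// Minimal sketch: camera in, SurfaceView out. Permission checks and error
// handling are omitted for brevity; <#MY BANUBA CLIENT TOKEN#> is a placeholder.
private val cameraDevice by lazy(LazyThreadSafetyMode.NONE) {
    CameraDevice(requireNotNull(this.applicationContext), this@MainActivity)
}
private val player by lazy(LazyThreadSafetyMode.NONE) { Player() }
private val surfaceOutput by lazy(LazyThreadSafetyMode.NONE) { SurfaceOutput(mySurfaceView.holder) }

fun startFaceAr() {
    /* The SDK must be initialized before the player starts working */
    BanubaSdkManager.initialize(this, <#MY BANUBA CLIENT TOKEN#>)
    player.use(CameraInput(cameraDevice), surfaceOutput)
    player.loadAsync("PineappleGlasses")
    cameraDevice.start() /* the camera permission must already be granted */
    player.play()
}

fun stopFaceAr() {
    cameraDevice.stop()
    player.close()
    surfaceOutput.close()
}
```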
---

# desktop

![image](/far-sdk/assets/images/desktop_player_api_overview-45d2577b869d02247fa0b0dd15d29a70.svg)

The desktop **Player API** (inputs, outputs, the render targets, and the render delegate) is described in full, with use cases, in the API Overview above.
---

# ios

![image](/far-sdk/assets/images/ios_player_api_overview-329f7e9e59ebec11fef743ed4309d096.svg)

**Banuba SDK** for **iOS** can be divided into three main entities: `Input`, `Player` and `Output`. The many input options multiplied by the many output options cover many use cases.

## Input

Processes frames from one of the producers:

* `Camera` - a real-time `CameraDevice` feed
* `Photo` - an image from the gallery or a photo taken with the `CameraDevice`
* `Stream` - a frame sequence from a *WebRTC* stream or any other provider
* Custom - your own implementation of the `Input` protocol

## Player

Allows you to `use` different data inputs like the `Camera` feed or a `Photo`, to apply an **effect** on top of it, and to `use` several outputs like a `View` and a `Video` file simultaneously.

The `Effect` component makes up an essential part of the SDK usage. The **effect** is represented as a folder with scripts and resources and can be loaded with the `load` method.

Supports the following rendering modes:

* `loop` *(default)* - render in the display-linked loop with the defined FPS
* `manual` - render manually by calling the `render` method

## Output

Presents a rendered frame onto one of the available surfaces:

* `View` - on-screen presentation
* `Frame`, `PixelBuffer`, `PixelBufferYUV` - in-memory presentation
* `Video` - video file presentation
* Custom - your own implementation of the `Output` protocol

## CameraDevice

Accesses the device's camera to generate a feed of frames in real time or to take high-quality photos.
By default, it tracks the UI orientation to properly manage frame rotation.

## RenderTarget

Manages a `CALayer` object with a `Metal` context and provides offscreen rendering for the `Player` and presentation for the `Output`.

## Use cases

Common use cases and relevant samples are described in [this repository](https://github.com/Banuba/banuba-sdk-ios-samples).
tip See more use cases in [Samples](/far-sdk/tutorials/development/samples.md).

---

# web

![image](/far-sdk/assets/images/web_overview-747754fce3a1ba126bdff47046ba649d.svg)

The Web AR **Player API** (the `BanubaSDK.js` *Player*, *Effect*, inputs, outputs, and the JavaScript examples) is described in full in the API Overview above.

---

# Getting Started

## Get the client token

To start working with the **Banuba SDK** in your project, you need a client token. To receive one, please fill in the [form on banuba.com](https://www.banuba.com/face-filters-sdk), or [contact our support](/far-sdk/support/.md).

* iOS
* Android
* Web
* Flutter
* ReactNative
* Desktop

## Installation

1. Add the [BanubaSdk SPM packages](/far-sdk/tutorials/development/installation.md?ios-packages=spm#spm-packages) to your project

info Details about the **SPM** and **CocoaPods** packages are in [Installation](/far-sdk/tutorials/development/installation.md).

## Integration

1. Set up `banubaClientToken`

common/common/AppDelegate.swift

```
loading...
```

2. Initialize `BanubaSdkManager`

common/common/AppDelegate.swift

```
loading...
```

3. Create a `Player` and `load` the effect

camera/camera/ViewController.swift

```
loading...
```

4. Run the application! 🎉 🚀 💅

tip See more use cases and code samples in the [GitHub repo](https://github.com/Banuba/banuba-sdk-ios-samples).

## Installation

To get started, add the **Banuba SDK** packages to your project. Add the custom maven repo to your `build.gradle.kts`:

camera/settings.gradle.kts

```
loading...
```

And add the dependency on the **Banuba SDK** package to your `build.gradle.kts`:

common/build.gradle.kts

```
loading...
```

info Details about the packages are in [Installation](/far-sdk/tutorials/development/installation.md).

## Integration

camera/src/main/java/com/banuba/sdk/example/camera/MainActivity.kt

```
loading...
```

## Requirements

* [Nodejs](https://nodejs.org/en/) installed
* A browser with [WebGL 2.0](https://caniuse.com/#feat=webgl2) or higher

## Integration

1. Set up the client token

BanubaClientToken.js

```
loading...
```

2. Import the required types from the [@banuba/webar](https://www.npmjs.com/package/@banuba/webar) NPM package

index.html

```
loading...
```

info See the details about the NPM package in [Installation](/far-sdk/tutorials/development/installation.md).

3. Initialize the `Player` and apply the `Effect`

index.html

```
loading...
```

index.html

```
loading...
```

index.html

```
loading...
```

4. Run a local web server from a terminal inside the folder

```
npx live-server
```

5. Open [localhost:8080](http://localhost:8080) and start clicking 🎉 🚀 💅

tip Follow the instructions in the demo app [README.md](https://github.com/Banuba/quickstart-web/blob/master/README.md) to get more info.

note The demo app leverages the [jsDelivr](https://jsdelivr.com/) CDN for ease of getting started; for a real-life application, please use the [@banuba/webar](https://www.npmjs.com/package/@banuba/webar) npm package.

[Banuba SDK](https://pub.dev/packages/banuba_sdk) for [Flutter](https://docs.flutter.dev/) is available for iOS and Android and provides the following functionality:

* Load and interact with any Effect (including the `Makeup` effect).
* Still image processing.
* Interaction with the camera (open/close, take photo, flashlight control, facing).
* Screen recording.
* [Videocall](/far-sdk/tutorials/development/videocall.md) powered by [Agora](https://docs.agora.io/).

If this is not enough, you should go with a native integration.

## Integration guide

* Android
* iOS

1. Add the `banuba_sdk` plugin:

```
flutter pub add banuba_sdk
```

2. Add code from [the basic sample](https://github.com/Banuba/banuba-sdk-flutter/blob/master/example/lib/main.dart) into your app. Don't forget to `initialize` `BanubaSdkManager` with the Client Token:

example/lib/main.dart

```
loading...
```

3. Add the `effects` folder to your project.
Link it with your app: add the following code into the app's `build.gradle`.

example/android/app/build.gradle

```
loading...
```

1. Add the `banuba_sdk` plugin:

```
flutter pub add banuba_sdk
```

2. Link to the **Banuba SDK** podspecs in `ios/Podfile`:

```
source 'https://github.com/sdk-banuba/banuba-sdk-podspecs.git'
```

3. Add code from [the basic sample](https://github.com/Banuba/banuba-sdk-flutter/blob/master/example/lib/main.dart) into your app. Don't forget to `initialize` `BanubaSdkManager` with the Client Token:

example/lib/main.dart

```
loading...
```

4. Add the `effects` folder to your project. Link it with your app: add the
folder to the `Runner` **Xcode** project (`File` -> `Add Files to 'Runner'...`).

[Banuba SDK](https://www.npmjs.com/package/@banuba/react-native) for [React Native](https://reactnative.dev/) is available for iOS and Android and provides the following functionality:

* Load and interact with any Effect (including the `Makeup` effect).
* Interaction with the camera (open/close).
* Screen recording (screenshots and video).
* [Videocall](/far-sdk/tutorials/development/videocall.md) powered by [Agora](https://docs.agora.io/).

If this is not enough, you should go with a native integration.

## Integration guide

* Android
* iOS

1. Add the `@banuba/react-native` dependency

```
yarn add @banuba/react-native
```

2. Add our **Maven repository**

example/android/build.gradle

```
loading...
```

3. Add the [`effects` folder](https://github.com/Banuba/banuba-sdk-react-native/tree/master/example/effects) and add a task to copy the effects into the app

example/android/app/build.gradle

```
loading...
```

4. Copy code from [this file](https://github.com/Banuba/banuba-sdk-react-native/blob/master/example/src/App.tsx) into your app. Don't forget to initialize the SDK with the Client Token

example/src/App.tsx

```
loading...
```

1. Add the `@banuba/react-native` dependency

```
yarn add @banuba/react-native
```

2. Add our podspecs repo to your `Podfile`

example/ios/Podfile

```
loading...
```

3. Add the [`effects` folder](https://github.com/Banuba/banuba-sdk-react-native/tree/master/example/effects) and link it to the Xcode project: `(File -> Add Files to ...)`

4. Copy code from [this file](https://github.com/Banuba/banuba-sdk-react-native/blob/master/example/src/App.tsx) into your app. Don't forget to initialize the SDK with the Client Token

example/src/App.tsx

```
loading...
```

The steps below apply to desktop integration (**Windows** and/or **macOS**) with **C++**.

1. Download the **Banuba SDK** binaries [from GitHub](https://github.com/Banuba/FaceAR-SDK-desktop-releases).

2. Integrate the libraries downloaded in the previous step into your build system. If you use **CMake**, consider our [quickstart-desktop-cpp](https://github.com/Banuba/quickstart-desktop-cpp) sample.

for Windows Besides the **Banuba SDK** itself, you will require third-party libraries from the `bin` folder.

3. Create a rendering context, or copy the [ready-to-use helper](https://github.com/Banuba/quickstart-desktop-cpp/tree/master/helpers/src) sources based on [GLFW](https://www.glfw.org/) into your project

4. Set up `BNB_CLIENT_TOKEN`

helpers/src/BanubaClientToken.hpp

```
loading...
```

5. Initialize **Banuba SDK** with the **Client Token** and the path to the resources from the archive with the binaries. Create the `Player`, `Camera`, `Input` and `Output` and load the **effect**.

realtime-camera-preview/main.cpp

```
loading...
```

6. Run the application! 🎉 🚀 💅

info The **effects** are also resources; you may initialize **Banuba SDK** with several resource paths (one for effects and one for SDK assets).

for macOS Resources for **macOS** are inside `BanubaEffectPlayer.xcframework`.

Still have questions about FaceAR SDK? Visit our [FAQ](https://www.banuba.com/faq/) or [contact our support](/far-sdk/support/.md).
---

# AR Cloud Guide

AR Cloud is a client-server solution that helps you save space in your application: AR filters are stored on a server instead of being bundled with the SDK. When the user selects a filter for the first time, it is downloaded from the server and then cached in the phone's memory.

* iOS
* Android
* Flutter

[Example of using AR cloud](https://github.com/Banuba/arcloud-ios-swift)

The **Banuba AR Cloud SDK** delivery solution includes the `BanubaARCloudSDK.xcframework` and `BanubaUtilities.xcframework` libraries, which should be placed into the [Frameworks](https://github.com/Banuba/arcloud-ios-swift/tree/master/Frameworks) directory or added as [SPM](https://www.swift.org/package-manager/) dependencies to your project. You can find these libraries here: [BanubaARCloudSDK.xcframework](https://github.com/Banuba/BanubaARCloudSDK-IOS), [BanubaUtilities.xcframework](https://github.com/Banuba/BanubaUtilities-iOS)

## Follow these steps to integrate AR Cloud:

1. Set up `banubaArCloudURL`

arcloud-ios-swift/BanubaClientToken.swift

```
loading...
```

2. Initialize **Banuba AR Cloud SDK**

arcloud-ios-swift/ARCloud/ARCloudManager.swift

```
loading...
```

3. Fetch the available effects list using the `getAREffects` method

arcloud-ios-swift/ARCloud/ARCloudManager.swift

```
loading...
```

4. Download the effect using the `downloadArEffect` method

arcloud-ios-swift/ARCloud/ARCloudManager.swift

```
loading...
```

[Example of using AR cloud](https://github.com/Banuba/arcloud-android-kotlin/)

## Follow these steps to configure AR Cloud:

## Installation of the ArCloud library

To start using **Banuba SDK** with **ArCloud** from GitHub Packages, add a custom maven repo to your `build.gradle`:

build.gradle

```
loading...
```

Add the `ar-cloud` dependency to your `build.gradle`:

effect\_player\_arcloud\_example/build.gradle

```
loading...
```

### Initialization of the ArCloudKoinModule module in Koin

**Koin** is a framework that helps you build any kind of Kotlin and Kotlin Multiplatform application, from Android mobile and Multiplatform apps to backend Ktor server applications. You can read more about **Koin** [here](https://insert-koin.io/). In this example, we use **Koin** for dependency injection.

effect\_player\_arcloud\_example/src/main/java/com/banuba/sdk/example/effect\_player\_arcloud\_example/Application.kt

```
loading...
```

note `ArCloudKoinModule` is the AR Cloud module; it should be initialized and placed before `MainKoinModule`. `MainKoinModule` is the **Koin** module that should be implemented in your application. It is required for configuring the AR Cloud dependencies.

### Configuring AR Cloud dependencies in the DI layer

[MainKoinModule.kt](https://github.com/Banuba/arcloud-android-kotlin/blob/master/effect_player_arcloud_example/src/main/java/com/banuba/sdk/example/effect_player_arcloud_example/arcloud/MainKoinModule.kt)

effect\_player\_arcloud\_example/src/main/java/com/banuba/sdk/example/effect\_player\_arcloud\_example/Application.kt

```
loading...
```

These are additional important classes:

* [ArCloudMasksActivity](https://github.com/Banuba/arcloud-android-kotlin/blob/master/effect_player_arcloud_example/src/main/java/com/banuba/sdk/example/effect_player_arcloud_example/arcloud/ArCloudMasksActivity.kt) - the main UI module that configures the `Player` with the dependent UI components.
* [EffectWrapper](https://github.com/Banuba/arcloud-android-kotlin/blob/master/effect_player_arcloud_example/src/main/java/com/banuba/sdk/example/effect_player_arcloud_example/arcloud/EffectWrapper.kt) - a data class that wraps an effect taken from the **AR cloud**.
* [EffectsAdapter](https://github.com/Banuba/arcloud-android-kotlin/blob/master/effect_player_arcloud_example/src/main/java/com/banuba/sdk/example/effect_player_arcloud_example/arcloud/EffectsAdapter.kt) - an adapter for the RecyclerView component used to populate the effects data.
* [EffectsViewModel](https://github.com/Banuba/arcloud-android-kotlin/blob/master/effect_player_arcloud_example/src/main/java/com/banuba/sdk/example/effect_player_arcloud_example/arcloud/EffectsViewModel.kt) - the [ViewModel](https://developer.android.com/topic/libraries/architecture/viewmodel) component responsible for loading and providing the effect data.

You can use all the classes mentioned above as examples or implement your own solution.
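For orientation, here is a minimal sketch of registering both modules in an `Application` subclass. It assumes `ArCloudKoinModule` and `MainKoinModule` each expose a standard Koin `module` property; the exact shape may differ, so treat the `Application.kt` linked above as the canonical code.

```
import android.app.Application
import org.koin.android.ext.koin.androidContext
import org.koin.core.context.startKoin

class SampleApp : Application() {
    override fun onCreate() {
        super.onCreate()
        startKoin {
            androidContext(this@SampleApp)
            modules(
                ArCloudKoinModule().module, // assumption: the AR Cloud module must come first
                MainKoinModule().module     // assumption: your module configuring AR Cloud dependencies
            )
        }
    }
}
```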
Usage of this feature is described on the [ARCloud plugin page on Flutter Pub](https://pub.dev/packages/banuba_arcloud). Real **examples** can be found in [quickstart-flutter-plugin](https://github.com/Banuba/quickstart-flutter-plugin/blob/master/lib/page_arcloud.dart) and in the source code of the **Banuba** [arcloud-flutter](https://github.com/Banuba/arcloud-flutter/tree/master/example) plugin.

Still have questions about FaceAR SDK? Visit our [FAQ](https://www.banuba.com/faq/) or [contact our support](/far-sdk/support/.md).

---

# Face Landmarks Guide

Landmarks are anchor points that show the relative position and shape of the main elements of the face. **Banuba SDK** provides the coordinates of the landmarks. To get them, follow these steps.

* iOS
* Android
* Web

1. Import the `BanubaEffectPlayer` framework into your project.

```
import BanubaEffectPlayer
```

2. Add a `BNBFrameDataListener` to your ViewController

```
player.effectPlayer?.add(self as BNBFrameDataListener)
```

note Remove the `BNBFrameDataListener` when your ViewController is deinitialized

```
player.effectPlayer?.remove(self as BNBFrameDataListener)
```

3. Inherit the `BNBFrameDataListener` interface and add the protocol stubs

```
extension ViewController: BNBFrameDataListener {
    func onFrameDataProcessed(_ frameData: BNBFrameData?) {
    }
}
```

4. You can get the coordinates of the landmarks from the `BNBFrameData`. The `getLandmarks` function returns an array of `NSNumber` with a size of `2 * (number of landmarks)`: the first value corresponds to the *X* coordinate of the first landmark, the second value to its *Y* coordinate, and so on. Initially these coordinates are in the FRX (Camera) coordinate system, but you can transform them to the Screen system. You must set the variables `screenWidth` and `screenHeight` to the width and height of your screen beforehand. After the transformation, you get an array of points in Screen coordinates corresponding to the landmarks; in this example it is the `landmarksPoints` array.

```
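// The handler below maps raw FRX landmark coordinates into screen space
// via the FRX -> common -> screen transformation chain described above.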
extension ViewController: BNBFrameDataListener {
    func onFrameDataProcessed(_ frameData: BNBFrameData?) {
        guard let fD = frameData else { return }
        let recognitionResult = fD.getFrxRecognitionResult()
        let faces = recognitionResult?.getFaces()
        let landmarksCoordinates = faces?[0].getLandmarks()
        guard let landmarks = landmarksCoordinates else { return }
        if landmarks.count != 0 {
            // get the transformation from FRX (Camera) coordinates to Screen coordinates
            let screenRect = BNBPixelRect(x: 0, y: 0, w: screenWidth, h: screenHeight)
            guard let frxResultTransformation = recognitionResult?.getTransform(),
                  let commonToFrxResult = BNBTransformation.makeData(frxResultTransformation.basisTransform),
                  let commonRect = commonToFrxResult.inverseJ()?.transform(frxResultTransformation.fullRoi),
                  let commonToScreen = BNBTransformation.makeRects(commonRect, targetRect: screenRect, rot: BNBRotation.deg0, flipX: false, flipY: false),
                  let frxResultToCommon = commonToFrxResult.inverseJ(),
                  let frxResultToScreen = frxResultToCommon.chainRight(commonToScreen)
            else { return }
            // create points from the transformed coordinates
            var landmarksPoints: [CGPoint] = []
            for i in 0 ..< (landmarks.count / 2) {
                let xCoord = Float(truncating: landmarks[i * 2])
                let yCoord = Float(truncating: landmarks[i * 2 + 1])
                let pointBeforeTransformation = BNBPoint2d(x: xCoord, y: yCoord)
                let pointAfterTransformation = frxResultToScreen.transformPoint(pointBeforeTransformation)
                landmarksPoints.append(CGPoint(x: CGFloat(pointAfterTransformation.x), y: CGFloat(pointAfterTransformation.y)))
            }
        }
    }
}
```

1. Import the following classes into your project:

```
import com.banuba.sdk.effect_player.FrameDataListener
import com.banuba.sdk.player.Player
import com.banuba.sdk.types.FrxRecognitionResult
import com.banuba.sdk.types.TransformableEvent
import com.banuba.sdk.types.Transformation
import com.banuba.sdk.types.FrameData
import com.banuba.sdk.types.FaceData
import com.banuba.sdk.types.PixelRect
import com.banuba.sdk.types.Point2d
import com.banuba.sdk.types.Rotation
```

2. Add the `FrameDataListener` to your `ViewController`:

```
player.effectPlayer.addFrameDataListener(this)
```

And don't forget to remove the `FrameDataListener` when your `ViewController` is destroyed:

```
player.effectPlayer.removeFrameDataListener(this)
```

3. Inherit the `FrameDataListener` interface and override the `onFrameDataProcessed(...)` function:

```
class ViewController : FrameDataListener {
    override fun onFrameDataProcessed(frameData: FrameData?) {
        ...
    }
}
```

4. From the `FrameData` you can get the coordinates of the landmarks. The `getLandmarks()` function returns an `ArrayList` with a size of 2 * (number of landmarks): the first value corresponds to the X coordinate of the first landmark, the second value to its Y coordinate, and so on. Initially these coordinates are in the FRX (Camera) space, but you can transform them into the screen space. You must set the variables `screenWidth` and `screenHeight` to the width and height of your screen beforehand. After the transformation, you get an array of points in screen coordinates corresponding to the landmarks; in this example, it is the `landmarksPoints` array.

```
class ViewController : FrameDataListener {
    val screenWidth = 720   /* input screen width */
    val screenHeight = 1280 /* input screen height */

    /* note: landmarksPoints is updated each frame in asynchronous mode.
     * To access it from another thread, add synchronization.
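     * It holds the screen-space points corresponding to the FRX landmarks.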
     */
    val landmarksPoints: ArrayList<Point2d> = ArrayList() /* output landmarks points */

    override fun onFrameDataProcessed(frameData: FrameData?) {
        frameData ?: return
        val recognitionResult = frameData.frxRecognitionResult ?: return
        val faces = if (!recognitionResult.faces.isEmpty()) recognitionResult.faces else return
        /* note: the example only uses the first face, but the SDK can recognize
         * and provide data for several faces.
         */
        val landmarks = if (!faces[0].landmarks.isEmpty()) faces[0].landmarks else return

        /* Create the transformation for the landmarks */
        val screenRect = PixelRect(0, 0, screenWidth, screenHeight)
        val frxResultTransformation = recognitionResult.transform
        val commonToFrxResult = Transformation.makeData(frxResultTransformation.basisTransform)!!
        val commonRect = commonToFrxResult.inverseJ()!!.transformRect(frxResultTransformation.fullRoi)
        val commonToScreen = Transformation.makeRects(commonRect, screenRect, Rotation.DEG_0, false, false)
        val frxResultToCommon = commonToFrxResult.inverseJ()!!
        val frxResultToScreen = frxResultToCommon.chainRight(commonToScreen)!!

        /* Create points from the transformed coordinates */
        val countPoints = landmarks.size / 2
        for (i in 0 until countPoints) {
            val xCoord = landmarks[i * 2]
            val yCoord = landmarks[i * 2 + 1]
            val pointBeforeTransformation = Point2d(xCoord, yCoord)
            val pointAfterTransformation = frxResultToScreen.transformPoint(pointBeforeTransformation)
            landmarksPoints.add(pointAfterTransformation)
        }
    }
}
```

It is possible to retrieve face landmarks from the face recognition performed by the SDK:

```
player.addEventListener(Player.FRAME_DATA_EVENT, ({ detail: frameData }) => {
  const hasFace = frameData.get("frxRecognitionResult.faces.0.hasFace")
  if (!hasFace) return console.log("Face not found")

  const landmarks = frameData.get("frxRecognitionResult.faces.0.landmarks")
  console.log("Landmarks:", landmarks)
})
```

The landmarks array stores 68 points as flattened pairs in the form `[x1, y1, x2, y2, ..., x68, y68]`.

warning Note that an effect with [Face Recognition](/far-sdk/tutorials/capabilities/glossary.md#frx-face-tracking) (e.g. [DebugWireframe](/far-sdk/generated/effects/DebugWireframe.zip)) must be applied to the player. See the [FRAME\_DATA\_EVENT](/far-sdk/generated/typedoc/classes/Player.html#FRAME_DATA_EVENT) and [FrameData](/far-sdk/generated/typedoc/classes/FrameData.html#get) docs for more details and examples.

Now you can use the face landmarks in your app. For example, you can display them as in the picture below.

![image](/far-sdk/assets/images/landmarks_68-938993fb47df6b72e725c2acf71386eb.png)

Still have questions about FaceAR SDK? Visit our [FAQ](https://www.banuba.com/faq/) or [contact our support](/far-sdk/support/.md).

---

# Migration Guides

## To version 1.17.0

`RenderBackendType` (`render_backend_type`) was moved to the `types` package, so:

### Android

Change `com.banuba.sdk.scene.RenderBackendType` to `com.banuba.sdk.types.RenderBackendType`.

### C++

Change `#include <bnb/scene/render_backend_type.hpp>` to `#include <bnb/types/render_backend_type.hpp>`.

## To version 1.9.0

BanubaSDK introduces the `Player` API for iOS and Android, which implements the most popular use cases and is highly customizable. We continue to support the old API, but starting from this version it is marked as deprecated. The main changes are described below.
### iOS & Android[​](#ios--android "Direct link to iOS & Android") * The class `BanubaSdkManager` deprecated now. Static methods for initialization/deinitialization of the Banuba SDK with the client token and resources path still work. But we are suggest to switch to `BNBUtilityManager` for the SDK initialization instead. * The class `Player` introduced as a replacement for the `BanubaSdkManager`. Now it is the only way to process frames from the `Input`, manage effect playback and present them to the `Output`. * The protocol `Input` with basic use cases implementations: `Camera`, `Photo`, `Stream`; provides frames to the `Player` for further processing and rendering. * The protocol `Output` implements endpoint of the presentation surface, which may be `View`, `PixelBuffer`, `Video`, or any other surface, implemented by own. You can have several outputs in use at a time! * All the three main protocols can be connected between each other through the `player.use(input, outputs)` method call. More details about new `Player` API you can find in our [github examples](/far-sdk/tutorials/development/samples.md). --- # Optimization Guides * Web ## Optimizing WebAR SDK bundle size[​](#optimizing-webar-sdk-bundle-size "Direct link to Optimizing WebAR SDK bundle size") **Banuba WebAR SDK** is [tree-shakable](https://developer.mozilla.org/en-US/docs/Glossary/Tree_shaking), so import only the modules your application relies on: ``` // the named import saves extra KBs import { Webcam, Player, Effect, Dom } from "@banuba/webar" // ... ``` ## Optimizing WebAR SDK assets size[​](#optimizing-webar-sdk-assets-size "Direct link to Optimizing WebAR SDK assets size") `BanubaSDK.wasm` and `BanubaSDK.simd.wasm` are the heavy ones. But they have a good compressability due to the internal format of the files. | Asset | Original | Gzip | Brotli | | ------------------- | -------- | ----- | ------ | | BanubaSDK.wasm | 12Mb | 3.5Mb | 2.5Mb | | BanubaSDK.simd.wasm | 13Mb | 3.8Mb | 2.7Mb | Most hosting environments like [Netlify](https://www.netlify.com/) automatically precompress the files, but sometimes you may have to compress them yourself for better downloading times. You can run the command in the assets folder to get them compressed: ``` npx gzipper compress --brotli . ``` See [gzipper docs](https://www.npmjs.com/package/gzipper#compressc-1) for details. ## Speed up WebAR SDK on modern browsers[​](#speed-up-webar-sdk-on-modern-browsers "Direct link to Speed up WebAR SDK on modern browsers") Banuba WebAR SDK ships with the `BanubaSDK.simd.wasm` file - the SIMD version of the `BanubaSDK.wasm`. Without digging into the details of what SIMD is, the SIMD-enabled file can make processing performance up to several times faster. Taking into consideration that SIMD has [a good support across modern browsers](https://webassembly.org/roadmap/), you should definitely give it a try. SIMD support detection is built into the WebAR SDK. It means the SDK will try to load `BanubaSDK.simd.wasm` if the current browser supports SIMD and will load `BanubaSDK.wasm` otherwise. 
Don't forget to point BanubaSDK to the SIMD file location if you are using `locateFile`:

```
const player = await Player.create({
  clientToken: "xxx-xxx-xxx",
  // point BanubaSDK where to find these vital files
  locateFile: {
    "BanubaSDK.data": "/path/to/BanubaSDK.data",
    "BanubaSDK.wasm": "/path/to/BanubaSDK.wasm",
    "BanubaSDK.simd.wasm": "/path/to/BanubaSDK.simd.wasm",
  },
})
```

See [Player.create()](/far-sdk/generated/typedoc/classes/Player.html#create) and [locateFile](/far-sdk/generated/typedoc/types/SDKOptions.html) for details.

## Reducing CPU/GPU usage on HiDPI devices

On HiDPI devices Banuba WebAR SDK scales the output frames by the [device pixel ratio](https://developer.mozilla.org/en-US/docs/Web/API/Window/devicePixelRatio). This approach allows the SDK to render face AR 3D masks in high quality and keep all the mask details. Despite the better rendering quality, it utilizes more CPU and GPU resources, since the number of pixels to process grows with the square of the pixel ratio. You can opt out of the default behavior and reduce CPU/GPU utilization by overriding the `devicePixelRatio` used by the SDK:

```
const player = await Player.create({
  clientToken: "xxx-xxx-xxx",
  devicePixelRatio: 1,
})
```

See the [Player.create()](/far-sdk/generated/typedoc/classes/Player.html#create) method docs for more details.

The CPU and GPU usage can be reduced even more by processing smaller frames, e.g. a 640×480 webcam frame size can be used instead of the default 1280×720:

```
await player.use(new Webcam({ width: 640, height: 480 }))
```

Check out the [Video cropping](/far-sdk/tutorials/development/samples.md#video-cropping) sample for more details.

## Preloading Effects

Sometimes you may experience a lag between the [player.applyEffect()](/far-sdk/generated/typedoc/classes/Player.html#applyEffect) call and the visual change, caused by the long download of the effect archive. To speed things up, you can preload the Effect and apply it later on demand:

```
const preloaded = await Effect.preload("SomeBigEffect.zip")

// ...

player.applyEffect(preloaded)
```

You can also scale this approach and add a local cache of preloaded effects, or preload an effect on some user interaction, like hovering over a button.

Still have questions about FaceAR SDK? Visit our [FAQ](https://www.banuba.com/faq/) or [contact our support](/far-sdk/support/.md).

---

# Watermark Guide

A watermark is a small image that is superimposed on top of the entire video.

* iOS
* Android

If you want to apply a watermark to a recorded video, you should use the `video.watermark` property.

```
guard let watermark = UIImage(named: "YOUR_WATERMARK_IMAGE") else { return }

let offset = CGPoint(x: 20.0, y: 20.0)
let watermarkInfo = WatermarkInfo(image: watermark,
                                  corner: .bottomLeft,
                                  offset: offset,
                                  targetNormalizedWidth: 0.2)

let video = Video(cameraDevice: cameraDevice)
player.use(input: camera, outputs: [playerView.uiView, video])
video.watermark = watermarkInfo
video.record(...)
```

If you want to apply a watermark to a video, you should create a `WatermarkInfo` structure:

```
val watermark: Drawable = ContextCompat.getDrawable(this, R.drawable.my_watermark_res)!!
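// The providers below are queried by the SDK to size and place the watermark:
// here it spans half of the viewport width and is anchored at the bottom.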
val width = 101
val height = 24
val aspectRatio = width.toFloat() / height.toFloat()

val sizeProvider = { viewportSize: Size ->
    val targetWidth = (viewportSize.width * 0.5f).toInt()
    val targetHeight = (targetWidth / aspectRatio).toInt()
    Size(targetWidth, targetHeight)
}

val watermarkGravity = Gravity.BOTTOM // or Gravity.RIGHT
val positionProvider = GravityPositionProviderAdapter(sizeProvider, watermarkGravity)
val myWatermarkInfo = WatermarkInfo(watermark, sizeProvider, positionProvider, width, height, true)
```

Apply the watermark in the `VideoOutput.start` method:

```
videoOutput.start(..., myWatermarkInfo)
```

Still have questions about FaceAR SDK? Visit our [FAQ](https://www.banuba.com/faq/) or [contact our support](/far-sdk/support/.md).

---

# Adding Banuba SDK to your project

* iOS
* Android
* Web
* Desktop

- CocoaPods
- Swift Package Manager

## CocoaPods packages

To start using **Banuba SDK** with [**CocoaPods**](https://guides.cocoapods.org/using/using-cocoapods.html), add the **custom repository** and the desired **packages** to your `Podfile`:

Podfile

```
loading...
```

Then install the pods:

```
pod install --repo-update
```

See the [banuba-sdk-podspecs](https://github.com/sdk-banuba/banuba-sdk-podspecs) repo for the list of all available packages and versions.

The complete example, configured for **CocoaPods** usage, can be found in our [Objective-C](https://github.com/Banuba/quickstart-ios-objc) quickstart example.

## SPM packages

**Banuba SDK** provides [Swift Package Manager](https://www.swift.org/package-manager/) packages in the custom repositories. Add the **Banuba SDK** packages to your Xcode project:

1. **File** > **Add Package Dependencies...**
2. Search for the package, for example
3. Press the **Add Package** button. After verifying the package, press **Add Package** again.

See the [sdk-banuba](https://github.com/sdk-banuba?tab=repositories&q=swift+package&type=&language=&sort=) repositories for the list of all available packages and versions.

The complete example, configured for **SPM** usage, can be found in our [Beauty-iOS](https://github.com/Banuba/banuba-sdk-ios-samples) quickstart example.

## How to choose required packages

warning Only use packages with the same version! Packages with different versions (even minor ones) may conflict or work incorrectly with each other.

tip If a **feature** or **effect** works incorrectly, check the application **logs** to figure out which package is missing.

Add packages depending on the specific **features** or **effects** your app will use. It is your responsibility to include everything required for the desired behaviour.
See the detailed [packages description](#list-of-all-available-packages) in the table below. An example of the packages set for [Face Tracking](/far-sdk/tutorials/capabilities/glossary.md#frx-face-tracking) and [Background Separation](/far-sdk/tutorials/capabilities/glossary.md#background-separation):

* BNBSdkApi
* BNBFaceTracker
* BNBBackground

## List of all available packages

| Package name | Description |
| --- | --- |
| BNBSdkApi | the platform-specific API, like `Player`, `Input`, `Output`, etc. |
| BNBSdkCore | provides the functionality of the native `EffectPlayer`. |
| BNBEffectPlayer | contains the necessary shaders used by the `sdk_core` package and provides the following features: math utilities, texture utilities, morphing, beautification, etc. |
| BNBScripting | includes the basic functionality used by the effect API, see [Effects](/far-sdk/effects/overview.md). |
| BNBFaceTracker | consists of neural network models used to track a face and its features: lips, eyes, etc. Include it whenever you deal with tracking. See more about [Face Tracking](/far-sdk/tutorials/capabilities/glossary.md#frx-face-tracking). |
| BNBFaceAttributes | consists of neural network models used to extract face attributes: skin color, gender, face shape, etc. |
| BNBFaceMatch | provides facilities to measure how similar the faces in two photos are. |
| BNBMakeup | provides prefabs for [Makeup](/far-sdk/effects/prefabs/makeup.md). |
| BNBLips | provides neural network models for lips segmentation. See more about [Lips Segmentation](/far-sdk/tutorials/capabilities/glossary.md#lips-segmentation). |
| BNBHair | provides neural network models for hair segmentation. See more about [Hair Segmentation](/far-sdk/tutorials/capabilities/glossary.md#hair-segmentation). |
| BNBHands | provides neural network models for hand, nail, and finger segmentation. |
| BNBEyes | provides neural network models for eyes segmentation. See more about [Eye Segmentation](/far-sdk/tutorials/capabilities/glossary.md#eye-segmentation). |
| BNBSkin | provides neural network models for skin segmentation. See more about [Skin Segmentation](/far-sdk/tutorials/capabilities/glossary.md#skin-segmentation). |
| BNBBackground | provides neural network models for background separation. See more about [Background Separation](/far-sdk/tutorials/capabilities/glossary.md#background-separation). |
| BNBBody | provides a neural network model to recognize the human body in full and separate it from the background in images and videos. |
| BNBAcneEyebagsRemoval | provides neural network models for acne removal and eye bag removal. |
| BNBNeck | provides neural network models for neck segmentation. |
| BNBResources | includes the resources of all the packages. **Use it when you don't care about the size or you need all the features!** |
| BNBPoseEstimation | private |
| BanubaSdk | depends on all the packages required for the operation of all available features. **Use it when you don't care about the size or you need all the features!** |

## Packages

**Maven**

To start using **Banuba SDK**, add a custom maven repo to your `build.gradle.kts`:

build.gradle.kts

```
loading...
```

Refer to our [Maven repository](https://nexus.banuba.net/#browse/browse:maven-releases) to find the list of all available packages and their versions.

## How to choose required packages

warning Only use packages with the same version! Packages with different versions (even minor ones) may conflict or work incorrectly with each other.

tip If a **feature** or **effect** works incorrectly, check the application **logs** to figure out which package is missing.

Add packages depending on the specific **features** or **effects** your app will use. It is your responsibility to include everything required for the desired behaviour.
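For illustration, a `dependencies` block might look like the sketch below. The group/artifact split is an assumption based on the package names in the table, and the version is a placeholder; the canonical snippet is embedded next, and the Maven listing above has the real coordinates and versions.

```
// build.gradle.kts — illustrative sketch only; verify coordinates and the
// current version against the Maven listing above.
dependencies {
    val banubaSdkVersion = "1.x.x" // use one version for ALL Banuba packages
    implementation("com.banuba.sdk:sdk_api:$banubaSdkVersion")
    implementation("com.banuba.sdk:sdk_core:$banubaSdkVersion")
    implementation("com.banuba.sdk:effect_player:$banubaSdkVersion")
    // add the feature packages your effects need, e.g. face tracking:
    implementation("com.banuba.sdk:face_tracker:$banubaSdkVersion")
}
```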
See the detailed [packages description](#list-of-all-available-packages) in the table below.

build.gradle.kts

```
loading...
```

## List of all available packages

| Package name | Description |
| --- | --- |
| com.banuba.sdk.sdk\_api | the platform-specific APIs, like `Player`, `Input`, `Output`, etc. |
| com.banuba.sdk.sdk\_core | provides the functionality of the native `EffectPlayer`. |
| com.banuba.sdk.effect\_player | contains the necessary shaders used by the `sdk_core` package and provides the following features: math utilities, texture utilities, morphing, beautification, etc. |
| com.banuba.sdk.scripting | includes the basic functionality used by the effect API, see [Effects](/far-sdk/effects/overview.md). |
| com.banuba.sdk.face\_tracker | consists of neural network models used to track a face and its features: lips, eyes, etc. Include it whenever you deal with tracking. See more about [FRX](/far-sdk/tutorials/capabilities/glossary.md#frx-face-tracking). |
| com.banuba.sdk.face\_attributes | consists of neural network models used to extract face attributes: skin color, gender, face shape, etc. |
| com.banuba.sdk.face\_match | provides facilities to measure how similar the faces in two photos are. |
| com.banuba.sdk.makeup | provides prefabs for [Makeup](/far-sdk/effects/prefabs/makeup.md). |
| com.banuba.sdk.lips | provides neural network models for lips segmentation. See more about [Lips Segmentation](/far-sdk/tutorials/capabilities/glossary.md#lips-segmentation). |
| com.banuba.sdk.hair | provides neural network models for hair segmentation. See more about [Hair Segmentation](/far-sdk/tutorials/capabilities/glossary.md#hair-segmentation). |
| com.banuba.sdk.hands | provides neural network models for hand, nail, and finger segmentation. |
| com.banuba.sdk.eyes | provides neural network models for eyes segmentation. See more about [Eye Segmentation](/far-sdk/tutorials/capabilities/glossary.md#eye-segmentation). |
| com.banuba.sdk.skin | provides neural network models for skin segmentation. See more about [Skin Segmentation](/far-sdk/tutorials/capabilities/glossary.md#skin-segmentation). |
| com.banuba.sdk.background | provides neural network models for background separation. See more about [Background Separation](/far-sdk/tutorials/capabilities/glossary.md#background-separation). |
| com.banuba.sdk.body | provides a neural network model to recognize the human body in full and separate it from the background in images and videos. |
| com.banuba.sdk.acne\_eyebags\_removal | provides neural network models for acne removal and eye bag removal. |
| com.banuba.sdk.neck | provides neural network models for neck segmentation. |
| com.banuba.sdk.banuba\_sdk\_resources | includes the resources of all the packages. **Use it when you don't care about the size or you need all the features!** |
| com.banuba.sdk.pose\_estimation | private |
| com.banuba.sdk.banuba\_sdk | depends on all the packages required for the operation of all available features. **Use it when you don't care about the size or you need all the features!** |

## NPM Package

**[Banuba WebAR](https://www.npmjs.com/package/@banuba/webar)** is delivered as an NPM package, which includes the executables (`.js`, `.wasm`, `.simd.wasm`) and resource modules (`modules/*.zip`).

```
npm i @banuba/webar
```

## Resources Modules

| Module name | Description |
| --- | --- |
| face\_tracker.zip | consists of neural network models used to track a face and its features: lips, eyes, etc. Include it whenever you deal with tracking. See more about [FRX](/far-sdk/tutorials/capabilities/glossary.md#frx-face-tracking). |
| face\_attributes.zip | consists of neural network models used to extract face attributes: skin color, gender, face shape, etc. |
| face\_match.zip | provides facilities to measure how similar the faces in two photos are. |
| makeup.zip | provides prefabs for [Makeup](/far-sdk/effects/prefabs/makeup.md). |
| lips.zip | provides neural network models for lips segmentation. See more about [Lips Segmentation](/far-sdk/tutorials/capabilities/glossary.md#lips-segmentation). |
| hair.zip | provides neural network models for hair segmentation. See more about [Hair Segmentation](/far-sdk/tutorials/capabilities/glossary.md#hair-segmentation). |
| hands.zip | provides neural network models for hand, nail, and finger segmentation. |
| eyes.zip | provides neural network models for eyes segmentation. See more about [Eye Segmentation](/far-sdk/tutorials/capabilities/glossary.md#eye-segmentation). |
| skin.zip | provides neural network models for skin segmentation. See more about [Skin Segmentation](/far-sdk/tutorials/capabilities/glossary.md#skin-segmentation). |
| background.zip | provides neural network models for background separation. See more about [Background Separation](/far-sdk/tutorials/capabilities/glossary.md#background-separation). |
| body.zip | provides a neural network model to recognize the human body in full and separate it from the background in images and videos. |
| acne\_eyebags\_removal.zip | provides neural network models for acne removal and eye bag removal. |
| neck.zip | provides neural network models for neck segmentation. |
| pose\_estimation.zip | private |

## Bundlers

**[Banuba WebAR](https://www.npmjs.com/package/@banuba/webar)** depends on the `BanubaSDK.data` and `BanubaSDK.wasm` (or `BanubaSDK.simd.wasm`, if you are targeting [SIMD](/far-sdk/tutorials/development/guides/optimization.md#speed-up-webar-sdk-on-modern-browsers)) files. By default the SDK expects these files to be accessible from the application root, i.e. via the `/BanubaSDK.data`, `/BanubaSDK.wasm` and `/BanubaSDK.simd.wasm` links. Keep this in mind when working with application bundlers like [Vite](https://vitejs.dev/), [Rollup](https://rollupjs.org/guide/en/) or [Webpack](https://webpack.js.org/).

Generally speaking, you should be able to put the `BanubaSDK.data`, `BanubaSDK.wasm` and `BanubaSDK.simd.wasm` files into the application assets folder (usually `public/`) and have the SDK load these files properly.
If you want to place the files somewhere else, the [locateFile](/far-sdk/generated/typedoc/types/SDKOptions.html) property of the [Player.create()](/far-sdk/generated/typedoc/classes/Player.html#create) method should help you set up the SDK properly.

* Vite
* Rollup
* Webpack

### Vite

```
import { Player, Module /* ... */ } from "@banuba/webar"
// vite uses a special ?url syntax to import files as URLs
import data from "@banuba/webar/BanubaSDK.data?url"
import wasm from "@banuba/webar/BanubaSDK.wasm?url"
import simd from "@banuba/webar/BanubaSDK.simd.wasm?url"
import FaceTracker from "@banuba/webar/face_tracker.zip?url"
import Background from "@banuba/webar/background.zip?url"

// ...

const player = await Player.create({
  clientToken: "xxx-xxx-xxx",
  // point BanubaSDK where to find these vital files
  locateFile: {
    "BanubaSDK.data": data,
    "BanubaSDK.wasm": wasm,
    "BanubaSDK.simd.wasm": simd,
  },
})
await player.addModule(new Module(FaceTracker), new Module(Background))

// ...
```

tip See the Vite [Explicit URL imports](https://vitejs.dev/guide/assets.html#explicit-url-imports) docs for details.

### Rollup

```
import { Player, Module /* ... */ } from "@banuba/webar"
// you need to set up @rollup/plugin-url for the import syntax to work
import data from "@banuba/webar/BanubaSDK.data"
import wasm from "@banuba/webar/BanubaSDK.wasm"
import simd from "@banuba/webar/BanubaSDK.simd.wasm"
import FaceTracker from "@banuba/webar/face_tracker.zip"
import Background from "@banuba/webar/background.zip"

// ...

const player = await Player.create({
  clientToken: "xxx-xxx-xxx",
  // point BanubaSDK where to find these vital files
  locateFile: {
    "BanubaSDK.data": data,
    "BanubaSDK.wasm": wasm,
    "BanubaSDK.simd.wasm": simd,
  },
})
await player.addModule(new Module(FaceTracker), new Module(Background))

// ...
```

tip See the [@rollup/plugin-url](https://www.npmjs.com/package/@rollup/plugin-url#include) docs for details.

### Webpack

Depending on the version of **Webpack** used, you may have to add the following rule to the `module.rules` section of your `webpack.config.js`:

```
module.exports = {
  module: {
    rules: [
      // ...
      {
        test: /\.wasm$/,
        type: 'javascript/auto',
        loader: 'file-loader',
      },
      // ...
    ],
  },
}
```

Now the import of `.wasm` files as URLs should work properly:

```
import { Player, Module /* ... */ } from "@banuba/webar"
import data from "@banuba/webar/BanubaSDK.data"
import wasm from "@banuba/webar/BanubaSDK.wasm"
import simd from "@banuba/webar/BanubaSDK.simd.wasm"
import FaceTracker from "@banuba/webar/face_tracker.zip"
import Background from "@banuba/webar/background.zip"

// ...

const player = await Player.create({
  clientToken: "xxx-xxx-xxx",
  // point BanubaSDK where to find these vital files
  locateFile: {
    "BanubaSDK.data": data,
    "BanubaSDK.wasm": wasm,
    "BanubaSDK.simd.wasm": simd,
  },
})
await player.addModule(new Module(FaceTracker), new Module(Background))

// ...
```

info See the related [Webpack issue](https://github.com/webpack/webpack/issues/7352) for details.

**Banuba SDK** for desktop platforms (i.e. **Windows** and **macOS**) is distributed via [GitHub Releases](https://github.com/Banuba/FaceAR-SDK-desktop-releases/releases). Release archives for **Windows** are packed as `.zip` (`bnb_sdk.zip`), and for **macOS** as `.tar.gz` (`bnb_sdk.tar.gz`). The archives for **Windows** and **macOS** contain an identical C++ API; the **macOS** archive also contains an **Objective-C** API identical to **iOS**.
As usual, the **Objective-C** API is designed to be callable from **Swift**.

Still have questions about FaceAR SDK? Visit our [FAQ](https://www.banuba.com/faq/) or [contact our support](/far-sdk/support/.md).
See detailed [packages description](#list-of-all-available-packages) in the table below. build.gradle.kts ``` loading... ``` ![Icon](/far-sdk/img/icons/github.svg "GitHub") ## List of all available packages[​](#list-of-all-available-packages "Direct link to List of all available packages") | Package name | Description | | ------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | com.banuba.sdk.sdk\_api | platform-specific APIs, like `Player`, `Input`, `Output`, etc. | | com.banuba.sdk.sdk\_core | provides the functionality of the native `EffectPlayer`. | | com.banuba.sdk.effect\_player | contains the necessary shaders, used by `sdk_core` package and provides the following features: math utilities, texture utilities, morphing, beautification, etc. | | com.banuba.sdk.scripting | includes the basic functionality used by the effect api, see [Effects](/far-sdk/effects/overview.md). | | com.banuba.sdk.face\_tracker | consists of neural network models used to track a face and its features: lips, eyes, etc. Include it whenever you deal with tracking. See more about [FRX](/far-sdk/tutorials/capabilities/glossary.md#frx-face-tracking). | | com.banuba.sdk.face\_attributes | package consists of neural network models used to extract face attributes: skin color, gender, face shape etc. | | com.banuba.sdk.face\_match | package provides facilities to measure how faces are similar on two photos | | com.banuba.sdk.makeup | provides prefabs for [Makeup](/far-sdk/effects/prefabs/makeup.md). | | com.banuba.sdk.lips | provides neural network models for lips segmentation. See more about [Lips Segmentation](/far-sdk/tutorials/capabilities/glossary.md#lips-segmentation). | | com.banuba.sdk.hair | provides neural network models for hair segmentation. See more about [Hair Segmentation](/far-sdk/tutorials/capabilities/glossary.md#hair-segmentation). | | com.banuba.sdk.hands | provides neural network models for hand, nail, and finger segmentation. | | com.banuba.sdk.eyes | provides neural network models for eyes segmentation. See more about [Eye Segmentation](/far-sdk/tutorials/capabilities/glossary.md#eye-segmentation). | | com.banuba.sdk.skin | provides neural network models for skin segmentation. See more about [Skin Segmentation](/far-sdk/tutorials/capabilities/glossary.md#skin-segmentation). | | com.banuba.sdk.background | provides neural network models for background separation. See more about [Background Separation](/far-sdk/tutorials/capabilities/glossary.md#background-separation). | | com.banuba.sdk.body | provides neural network model to recognize the human body in full and separate it from the background in images and videos. | | com.banuba.sdk.acne\_eyebags\_removal | provides neural network models for acne removal and eye bag removal. | | com.banuba.sdk.neck | provides neural network models for neck segmentation. | | com.banuba.sdk.banuba\_sdk\_resources | includes all the resources of the all packages. **Use it when you don't care about the size or you need all the features!** | | com.banuba.sdk.pose\_estimation | private | | com.banuba.sdk.banuba\_sdk | depends on the all the packages for the operation of all available features. **Use it when you don't care about the size or you need all the features!** | --- # desktop **Banuba SDK** for desktop platforms (i.e. 
---

# desktop

**Banuba SDK** for desktop platforms (i.e. **Windows** and **macOS**) is distributed via [GitHub Releases](https://github.com/Banuba/FaceAR-SDK-desktop-releases/releases). Release archives for **Windows** are packed as `.zip` (`bnd_sdk.zip`) and for **macOS** as `.tar.gz` (`bnb_sdk.tar.gz`). The **Windows** and **macOS** archives contain an identical C++ API; the **macOS** archive also contains an **Objective-C** API identical to the **iOS** one. As usual, the **Objective-C** API is designed to be callable from **Swift**.

---

# ios

* CocoaPods
* Swift Package Manager

## CocoaPods packages

To start using **Banuba SDK** with [**CocoaPods**](https://guides.cocoapods.org/using/using-cocoapods.html), add a **custom repository** and the desired **packages** to your `Podfile`:

Podfile

```
loading...
```

Then install the pods:

```
pod install --repo-update
```

See the [banuba-sdk-podspecs](https://github.com/sdk-banuba/banuba-sdk-podspecs) repo for the list of all available packages and versions.

A complete example configured for **CocoaPods** usage can be found in our [Objective-C](https://github.com/Banuba/quickstart-ios-objc) quickstart example.

## SPM packages

**Banuba SDK** provides [Swift Package Manager](https://www.swift.org/package-manager/) packages in custom repositories. Add **Banuba SDK** packages to your Xcode project:

1. **File** > **Add Package Dependencies...**
2. Search for the package you need.
3. Press the **Add Package** button. After verifying the package, press **Add Package** again.

See the [sdk-banuba](https://github.com/sdk-banuba?tab=repositories&q=swift+package&type=&language=&sort=) repositories for the list of all available packages and versions.

A complete example configured for **SPM** usage can be found in our [Beauty-iOS](https://github.com/Banuba/banuba-sdk-ios-samples) quickstart example.

## How to choose required packages

warning

Use only packages with the same version! Packages with different versions (even minor ones) may conflict or work incorrectly with each other.

tip

If a **feature** or **effect** works incorrectly, check the application **logs** to figure out which package is missing.

Add packages depending on the specific **features** or **effects** your app will use. It is your responsibility to include everything required for the desired behaviour.
An example of the package set for [Face Tracking](/far-sdk/tutorials/capabilities/glossary.md#frx-face-tracking) and [Background Separation](/far-sdk/tutorials/capabilities/glossary.md#background-separation):

* BNBSdkApi
* BNBFaceTracker
* BNBBackground

See the detailed [packages description](#list-of-all-available-packages) in the table below.

## List of all available packages

| Package name | Description |
| --- | --- |
| BNBSdkApi | Platform-specific APIs, like `Player`, `Input`, `Output`, etc. |
| BNBSdkCore | Provides the functionality of the native `EffectPlayer`. |
| BNBEffectPlayer | Contains the necessary shaders used by the `sdk_core` package and provides the following features: math utilities, texture utilities, morphing, beautification, etc. |
| BNBScripting | Includes the basic functionality used by the effect API, see [Effects](/far-sdk/effects/overview.md). |
| BNBFaceTracker | Consists of neural network models used to track a face and its features: lips, eyes, etc. Include it whenever you deal with tracking. See more about [Face Tracking](/far-sdk/tutorials/capabilities/glossary.md#frx-face-tracking). |
| BNBFaceAttributes | Consists of neural network models used to extract face attributes: skin color, gender, face shape, etc. |
| BNBFaceMatch | Provides facilities to measure how similar the faces in two photos are. |
| BNBMakeup | Provides prefabs for [Makeup](/far-sdk/effects/prefabs/makeup.md). |
| BNBLips | Provides neural network models for lips segmentation. See more about [Lips Segmentation](/far-sdk/tutorials/capabilities/glossary.md#lips-segmentation). |
| BNBHair | Provides neural network models for hair segmentation. See more about [Hair Segmentation](/far-sdk/tutorials/capabilities/glossary.md#hair-segmentation). |
| BNBHands | Provides neural network models for hand, nail, and finger segmentation. |
| BNBEyes | Provides neural network models for eyes segmentation. See more about [Eye Segmentation](/far-sdk/tutorials/capabilities/glossary.md#eye-segmentation). |
| BNBSkin | Provides neural network models for skin segmentation. See more about [Skin Segmentation](/far-sdk/tutorials/capabilities/glossary.md#skin-segmentation). |
| BNBBackground | Provides neural network models for background separation. See more about [Background Separation](/far-sdk/tutorials/capabilities/glossary.md#background-separation). |
| BNBBody | Provides a neural network model to recognize the human body in full and separate it from the background in images and videos. |
| BNBAcneEyebagsRemoval | Provides neural network models for acne removal and eye bag removal. |
| BNBNeck | Provides neural network models for neck segmentation. |
| BNBResources | Includes the resources of all packages. **Use it when you don't care about the size or you need all the features!** |
| BNBPoseEstimation | Private. |
| BanubaSdk | Depends on all the packages for the operation of all available features. **Use it when you don't care about the size or you need all the features!** |
---

# web

## NPM Package

**[Banuba WebAR](https://www.npmjs.com/package/@banuba/webar)** is delivered as an NPM package, which includes executables (`.js`, `.wasm`, `.simd.wasm`) and resource modules (`modules/*.zip`).

```
npm i @banuba/webar
```

## Resources Modules

| Module name | Description |
| --- | --- |
| face_tracker.zip | Consists of neural network models used to track a face and its features: lips, eyes, etc. Include it whenever you deal with tracking. See more about [FRX](/far-sdk/tutorials/capabilities/glossary.md#frx-face-tracking). |
| face_attributes.zip | Consists of neural network models used to extract face attributes: skin color, gender, face shape, etc. |
| face_match.zip | Provides facilities to measure how similar the faces in two photos are. |
| makeup.zip | Provides prefabs for [Makeup](/far-sdk/effects/prefabs/makeup.md). |
| lips.zip | Provides neural network models for lips segmentation. See more about [Lips Segmentation](/far-sdk/tutorials/capabilities/glossary.md#lips-segmentation). |
| hair.zip | Provides neural network models for hair segmentation. See more about [Hair Segmentation](/far-sdk/tutorials/capabilities/glossary.md#hair-segmentation). |
| hands.zip | Provides neural network models for hand, nail, and finger segmentation. |
| eyes.zip | Provides neural network models for eyes segmentation. See more about [Eye Segmentation](/far-sdk/tutorials/capabilities/glossary.md#eye-segmentation). |
| skin.zip | Provides neural network models for skin segmentation. See more about [Skin Segmentation](/far-sdk/tutorials/capabilities/glossary.md#skin-segmentation). |
| background.zip | Provides neural network models for background separation. See more about [Background Separation](/far-sdk/tutorials/capabilities/glossary.md#background-separation). |
| body.zip | Provides a neural network model to recognize the human body in full and separate it from the background in images and videos. |
| acne_eyebags_removal.zip | Provides neural network models for acne removal and eye bag removal. |
| neck.zip | Provides neural network models for neck segmentation. |
| pose_estimation.zip | Private. |

## Bundlers

**[Banuba WebAR](https://www.npmjs.com/package/@banuba/webar)** depends on the `BanubaSDK.data` and `BanubaSDK.wasm` (or `BanubaSDK.simd.wasm`, if you are targeting [SIMD](/far-sdk/tutorials/development/guides/optimization.md#speed-up-webar-sdk-on-modern-browsers)) files. By default, the SDK expects these files to be accessible from the application root, i.e. via the `/BanubaSDK.data`, `/BanubaSDK.wasm`, and `/BanubaSDK.simd.wasm` URLs. Take this into consideration when working with application bundlers like [Vite](https://vitejs.dev/), [Rollup](https://rollupjs.org/guide/en/), or [Webpack](https://webpack.js.org/).

Generally speaking, you should be able to put the `BanubaSDK.data`, `BanubaSDK.wasm`, and `BanubaSDK.simd.wasm` files into the application assets folder (usually `public/`) and have the SDK load these files properly.
If you want to place the files somewhere else, the [locateFile](/far-sdk/generated/typedoc/types/SDKOptions.html) property of the [Player.create()](/far-sdk/generated/typedoc/classes/Player.html#create) method lets you tell the SDK where to find them.

* Vite
* Rollup
* Webpack

### Vite

```
import { Player, Module /* ... */ } from "@banuba/webar"

// vite uses special ?url syntax to import files as URLs
import data from "@banuba/webar/BanubaSDK.data?url"
import wasm from "@banuba/webar/BanubaSDK.wasm?url"
import simd from "@banuba/webar/BanubaSDK.simd.wasm?url"
import FaceTracker from "@banuba/webar/face_tracker.zip?url"
import Background from "@banuba/webar/background.zip?url"

// ...
const player = await Player.create({
  clientToken: "xxx-xxx-xxx",
  // point BanubaSDK where to find these vital files
  locateFile: {
    "BanubaSDK.data": data,
    "BanubaSDK.wasm": wasm,
    "BanubaSDK.simd.wasm": simd,
  },
})
await player.addModule(new Module(FaceTracker), new Module(Background))
// ...
```

tip

See the Vite [Explicit URL imports](https://vitejs.dev/guide/assets.html#explicit-url-imports) docs for details.

### Rollup

```
import { Player, Module /* ... */ } from "@banuba/webar"

// you need to set up @rollup/plugin-url for this import syntax to work
import data from "@banuba/webar/BanubaSDK.data"
import wasm from "@banuba/webar/BanubaSDK.wasm"
import simd from "@banuba/webar/BanubaSDK.simd.wasm"
import FaceTracker from "@banuba/webar/face_tracker.zip"
import Background from "@banuba/webar/background.zip"

// ...
const player = await Player.create({
  clientToken: "xxx-xxx-xxx",
  // point BanubaSDK where to find these vital files
  locateFile: {
    "BanubaSDK.data": data,
    "BanubaSDK.wasm": wasm,
    "BanubaSDK.simd.wasm": simd,
  },
})
await player.addModule(new Module(FaceTracker), new Module(Background))
// ...
```

tip

See the [@rollup/plugin-url](https://www.npmjs.com/package/@rollup/plugin-url#include) docs for details.

### Webpack

Depending on the version of **Webpack** used, you may have to add the following rule to the `module.rules` section of your `webpack.config.js`:

```
module.exports = {
  module: {
    rules: [
      // ...
      {
        test: /\.wasm$/,
        type: "javascript/auto",
        loader: "file-loader",
      },
      // ...
    ],
  },
}
```

Now imports of `.wasm` files as URLs should work properly:

```
import { Player, Module /* ... */ } from "@banuba/webar"
import data from "@banuba/webar/BanubaSDK.data"
import wasm from "@banuba/webar/BanubaSDK.wasm"
import simd from "@banuba/webar/BanubaSDK.simd.wasm"
import FaceTracker from "@banuba/webar/face_tracker.zip"
import Background from "@banuba/webar/background.zip"

// ...
const player = await Player.create({
  clientToken: "xxx-xxx-xxx",
  // point BanubaSDK where to find these vital files
  locateFile: {
    "BanubaSDK.data": data,
    "BanubaSDK.wasm": wasm,
    "BanubaSDK.simd.wasm": simd,
  },
})
await player.addModule(new Module(FaceTracker), new Module(Background))
// ...
```

info

See the related [Webpack issue](https://github.com/webpack/webpack/issues/7352) for details.
---

# Known Issues

* Web

## MediaStreamCapture stream freezes when a browser tab becomes inactive in Safari

Starting from the 15.3 release, Safari pauses [MediaStream](https://developer.mozilla.org/en-US/docs/Web/API/MediaStream)s obtained from [canvas.captureStream()](https://developer.mozilla.org/en-US/docs/Web/API/HTMLCanvasElement/captureStream), which is used internally by WebAR's [MediaStreamCapture](/far-sdk/generated/typedoc/classes/MediaStreamCapture.html), when the browser tab [is not visible](https://developer.mozilla.org/en-US/docs/Web/API/Page_Visibility_API#properties_added_to_the_document_interface). As a result, the video of a WebRTC call participant using the WebAR SDK freezes when that participant minimizes or hides the browser or the tab running the app.

Unfortunately, there is currently no known workaround for this issue.

## Page running WebAR SDK consumes too much memory

The most likely reason is a memory leak caused by a dangling [Player](/far-sdk/generated/typedoc/classes/Player.html#constructor) instance which cannot be automatically collected by the browser's GC. Consider the code:

```
let webcam

document.querySelector("#start").onclick = async () => {
  const player = await Player.create({ clientToken: "xxx-xxx-xxx" })
  await player.use((webcam = new Webcam()))
  player.play()
  Dom.render(player, "#webar")
}

document.querySelector("#stop").onclick = () => {
  webcam.stop()
  Dom.unmount("#webar")
}
```

A click on `#start` followed by a click on `#stop` leads to a memory leak, since the `player` object is still held in the browser memory. To fix it, call [Player.destroy()](/far-sdk/generated/typedoc/classes/Player.html#destroy) once the `player` object is not needed anymore. The following fixed code will not cause a memory leak:

```
let webcam, player

document.querySelector("#start").onclick = async () => {
  player = await Player.create({ clientToken: "xxx-xxx-xxx" })
  await player.use((webcam = new Webcam()))
  player.play()
  Dom.render(player, "#webar")
}

document.querySelector("#stop").onclick = () => {
  webcam.stop()
  Dom.unmount("#webar")
  // destroy the player object to prevent accidental memory leaks
  player.destroy()
}
```

note

If the app has such "start - stop - repeat" logic, you may also consider caching the `player` object instead of constantly re-creating it:

```
let player, webcam

document.querySelector("#start").onclick = async () => {
  // reuse the player instance instead of re-creating it
  if (!player) player = await Player.create({ clientToken: "xxx-xxx-xxx" })
  await player.use((webcam = new Webcam()))
  player.play()
  Dom.render(player, "#webar")
}

document.querySelector("#stop").onclick = () => {
  webcam.stop()
  Dom.unmount("#webar")
  // no need to destroy the player since it will be reused on the next "#start" click
}
```

## Page running WebAR SDK crashes

One of the most widespread reasons is a memory leak which drains all of the device's RAM. Please check out the [Page running WebAR SDK consumes too much memory](#page-running-webar-sdk-consumes-too-much-memory) section.
If that's not your case, please [contact support](/far-sdk/support/.md) and submit the issue.

## Effect animations are delayed in Safari

This is [a known Safari bug](https://bugs.webkit.org/show_bug.cgi?id=232076), and for your convenience we provide you with [the ready-to-go fix](/far-sdk/js/range-requests.sw.js).

Assume you have an HTML page from the [Basic Integration](/far-sdk/tutorials/development/basic_integration.md) section. Download the [range-requests.sw.js](/far-sdk/js/range-requests.sw.js) file and put it next to the page running WebAR. To fix the Safari playback issue, prepend `navigator.serviceWorker.register("./range-requests.sw.js")` to the page's script so that the Player's video requests are proxied through the service worker. A minimal sketch of such a page (the token and effect file are placeholders):

```
<!DOCTYPE html>
<html>
  <head>
    <title>Banuba SDK Web AR</title>
  </head>
  <body>
    <div id="webar"></div>
    <script type="module">
      import { Player, Dom, Webcam, Effect } from "https://cdn.jsdelivr.net/npm/@banuba/webar/dist/BanubaSDK.browser.esm.js"

      // register the service worker that proxies video range requests (the Safari fix)
      navigator.serviceWorker.register("./range-requests.sw.js")

      // sketch only: token and effect file are placeholders
      const player = await Player.create({ clientToken: "xxx-xxx-xxx" })
      await player.use(new Webcam())
      player.applyEffect(new Effect("SomeAnimatedEffect.zip"))
      player.play()
      Dom.render(player, "#webar")
    </script>
  </body>
</html>
```
You can verify the fix with the help of the [Rorschach](/far-sdk/generated/effects/Rorschach.zip) animated effect.

note

You may want to conditionally include the fix for Safari but not for the other browsers. Check the [quickstart-web](https://github.com/Banuba/quickstart-web) demo app for a possible implementation.

note

If your app already has a ServiceWorker, simply import the [range-requests.sw.js](/far-sdk/js/range-requests.sw.js) into it:

```
importScripts("range-requests.sw.js")
```

Still have questions about FaceAR SDK? Visit our [FAQ](https://www.banuba.com/faq/) or [contact our support](/far-sdk/support/.md).
---

# Vibe Coding

If you use AI agents, feel free to use our LLM-ready documentation. Feed the [llms.txt](../../llms.txt) and [llms-full.txt](../../llms-full.txt) files to the AI that you use (e.g., OpenAI, Claude, or Gemini) and speed up vibe coding. Once uploaded, you can query the model for technical details, code examples, and integration best practices, effectively creating a customized support assistant that guides you through the implementation process and drastically reduces integration time.

* [llms.txt](../../llms.txt)
* [llms-full.txt](../../llms-full.txt)

[llms.txt](../../llms.txt) is for internet indexers. [llms-full.txt](../../llms-full.txt) contains all the information for easier integration. This is how they differ from the regular docs:

* Plain text, no HTML, CSS or JavaScript
* Markdown formatting
* Structured in a way to be easily digested by LLMs

---

# Examples of using Banuba SDK

To start working with **Banuba SDK** examples, you need a client token. To receive it, please fill in our [form on banuba.com](https://www.banuba.com/face-filters-sdk) or contact us.

## Our GitHub

Visit our [GitHub page](https://github.com/Banuba) to see all available examples.

* iOS
* Android
* Web
* Flutter
* ReactNative
* macOS
* Desktop

## iOS samples (Swift)

This repository contains basic samples showing how to use the [Player API](/far-sdk/tutorials/development/api_overview.md).

## Agora plugin example (Swift)

This example shows how to use **Banuba SDK** as an **Agora** plugin for a video call between two devices.

## Opentok example (Swift)

This example demonstrates the use of **Banuba SDK** in conjunction with the **Opentok SDK**.

## WebRTC example (Objective-C)

This example demonstrates how to use **Banuba SDK** in conjunction with **WebRTC**.

## Beauty example (Swift)

This example demonstrates how to correctly use the **Makeup** effect.

info

This example uses **SPM**.

## [ZEGOCLOUD](https://www.zegocloud.com) example (Swift)

This example demonstrates integration with the ZEGOCLOUD video calling platform.

## ARCloud example (Swift)

This example shows how to dynamically load effects from the network.

## Quickstart example (Objective-C)

This example shows how to use **Banuba SDK** in an **Objective-C** application.
## Requirements

* Latest stable Android Studio
* Latest Gradle plugin
* Latest NDK

## Banuba SDK examples (Kotlin)

This example shows how to use **PlayerAPI** for various tasks. It uses the `Player`, `CameraInput`, `SurfaceOutput`, and `VideoOutput` classes.

## Agora plugin example (Kotlin)

An example of using **Banuba SDK** as an **Agora** plugin for a video call between two devices.

## Opentok example (Java)

This example demonstrates the use of **Banuba SDK** in conjunction with the **Opentok SDK**.

## WebRTC example (Kotlin)

This example demonstrates how to use **Banuba SDK** in conjunction with **WebRTC**. In it, a **WebRTC** camera is used, and after processing, the frame is drawn onto the surface using **WebRTC**. This example is based on **PlayerAPI**.

## [ZEGOCLOUD](https://www.zegocloud.com) example (Java)

This example demonstrates integration with the ZEGOCLOUD video calling platform.

## Beauty example (Java)

This example demonstrates how to correctly use the **Makeup** effect and how to call **MakeupAPI** scripts. This example is based on **PlayerAPI**.

## Beauty example (Kotlin)

This example demonstrates how to correctly use the **Makeup** effect and how to call **MakeupAPI** scripts.

## ARCloud example (Kotlin)

This example shows how to dynamically load effects from the network. This example is based on **PlayerAPI**.

## Quickstart

**[Quickstart Web](https://github.com/Banuba/quickstart-web)**

## Beauty

**[Beauty demo app](https://github.com/Banuba/beauty-web)**

## Vue

**[Vue demo app](https://github.com/Banuba/quickstart-web-vue)**

## Angular

**[Angular demo app](https://github.com/Banuba/quickstart-web-angular)**

## React

**[React demo app](https://github.com/Banuba/quickstart-web-react)**

```
import React, { useEffect } from "react"
import data from "@banuba/webar/BanubaSDK.data"
import wasm from "@banuba/webar/BanubaSDK.wasm"
import simd from "@banuba/webar/BanubaSDK.simd.wasm"
import FaceTracker from "@banuba/webar/face_tracker.zip"
import Glasses from "/path/to/Glasses.zip"
import { Webcam, Player, Module, Effect, Dom } from "@banuba/webar"

export default function WebARComponent() {
  // componentDidMount
  useEffect(() => {
    let webcam
    Player.create({
      clientToken: "xxx-xxx-xxx",
      locateFile: {
        "BanubaSDK.data": data,
        "BanubaSDK.wasm": wasm,
        "BanubaSDK.simd.wasm": simd,
      },
    }).then(async (player) => {
      await player.addModule(new Module(FaceTracker))
      await player.use((webcam = new Webcam()))
      player.applyEffect(new Effect(Glasses))
      Dom.render(player, "#webar")
    })

    // componentWillUnmount
    return () => {
      webcam?.stop()
      Dom.unmount("#webar")
    }
  })

  return <div id="webar" />
}
```

tip

See [Bundlers](/far-sdk/tutorials/development/installation.md#bundlers) for notes about specific bundlers and `locateFile` usage.

## Agora

**[Banuba Agora Extension](https://github.com/Banuba/agora-plugin-filters-web)**

**[Video call demo app](https://github.com/Banuba/videocall-web)**

## ZEGOCLOUD

**[ZEGOCLOUD sample](https://github.com/Banuba/banuba_sdk_zegocloud_sdk_web)**

## OpenTok (TokBox)

**[TokBox demo app](https://github.com/Banuba/videocall-tokbox-web)**

## Customizing video source

You can easily modify the built-in [Webcam](/far-sdk/generated/typedoc/classes/Webcam.html#constructor) module video by passing [parameters](https://developer.mozilla.org/en-US/docs/Web/API/MediaTrackConstraints#properties_of_video_tracks) to it.

### Switching from front to back camera

For example, you can use the back camera of the device by passing the [facingMode](https://developer.mozilla.org/en-US/docs/Web/API/MediaTrackConstraints/facingMode) parameter:

```
// ...
// The default facingMode value is "user", which means the front camera.
// The "environment" value here means the back camera.
await player.use(new Webcam({ facingMode: "environment" }))
// ...
```

### Rendering WebAR video in full screen on mobile

Simply add CSS along the following lines to force the WebAR video to fill the viewport (a minimal sketch; adjust the selector to your markup):

```
/* sketch: target the element WebAR renders into */
#webar canvas,
#webar video {
  width: 100vw;
  height: 100vh;
  object-fit: cover;
}
```

If you decide to specify exact `width` and `height` values for the [Webcam](/far-sdk/generated/typedoc/classes/Webcam.html#constructor), pay attention that on several mobile devices/operating systems the webcam video width and height may be flipped. It's a known platform-specific [webcam bug](https://stackoverflow.com/questions/62538271/getusermedia-selfie-full-screen-on-mobile/62598616#62598616). To work around it, swap the `width` and `height` values:

```
const desiredWidth = 360
const desiredHeight = 540

await player.use(new Webcam({
  width: desiredHeight,
  height: desiredWidth,
}))
```

Also, you may want to check out the [Video cropping](#video-cropping) section for more advanced scenarios.

### External MediaStream

If the built-in [Webcam](/far-sdk/generated/typedoc/classes/Webcam.html#constructor) cannot fit your needs, you can use a custom [MediaStream](/far-sdk/generated/typedoc/classes/MediaStream.html) with [Player](/far-sdk/generated/typedoc/classes/Player.html#constructor):

```
import { MediaStream /* ... */ } from "@banuba/webar"

// ...
/* process video from the camera */
const camera = await navigator.mediaDevices.getUserMedia({ audio: true, video: true })
await player.use(new MediaStream(camera))

/* or even from another canvas */
const canvas = $("canvas").captureStream()
await player.use(new MediaStream(canvas))
// ...
```

See the [MediaStream](/far-sdk/generated/typedoc/classes/MediaStream.html) docs for more details.

## Capturing processed video

You can easily capture the processed video, take screenshots or video recordings, or pass the captured video to a WebRTC connection.

### Screenshot
```
import { ImageCapture /* ... */ } from "@banuba/webar"

// ...
const capture = new ImageCapture(player)
const photo = await capture.takePhoto()
// ...
```

See the [ImageCapture.takePhoto()](/far-sdk/generated/typedoc/classes/ImageCapture.html#takePhoto) docs for more details.

### Video

```
import { VideoRecorder /* ... */ } from "@banuba/webar"

// ...
const recorder = new VideoRecorder(player)
recorder.start()
await new Promise((r) => setTimeout(r, 5000)) // wait for 5 sec
const video = await recorder.stop()
// ...
```

See the [VideoRecorder](/far-sdk/generated/typedoc/classes/VideoRecorder.html) docs for more details.

### MediaStream

```
import { MediaStreamCapture /* ... */ } from "@banuba/webar"

// ...
// the capture is an instance of window.MediaStream
const capture = new MediaStreamCapture(player)

// so it can be used as a video source
$("video").srcObject = capture

// or can be added to a WebRTC peer connection
const connection = new RTCPeerConnection()
connection.addTrack(capture.getVideoTrack())
// ...
```

See the [MediaStreamCapture](/far-sdk/generated/typedoc/classes/MediaStreamCapture.html) docs for more details.

## Video cropping

You can adjust video frame dimensions via the [Webcam](/far-sdk/generated/typedoc/classes/Webcam.html#constructor) constructor parameters:

```
const wcam = new Webcam({ width: 320, height: 240 })
```

But this approach is platform-dependent and varies between browsers; e.g. some browsers may be unable to produce frames of the requested dimensions and can yield frames of close but different dimensions instead (e.g. 352x288 instead of the requested 320x240). To work around these platform-specific limitations, you can leverage the built-in SDK crop modifier:

```
const desiredWidth = 320
const desiredHeight = 240

function crop(renderWidth, renderHeight) {
  const dx = (renderWidth - desiredWidth) / 2
  const dy = (renderHeight - desiredHeight) / 2
  return [dx, dy, desiredWidth, desiredHeight]
}

await player.use(webcam, { crop })
```

This way you can get the desired frame size regardless of the platform used. See the [Player.use()](/far-sdk/generated/typedoc/classes/Player.html#use) and [InputOptions](/far-sdk/generated/typedoc/types/InputOptions.html) docs for more details.

## Postprocessing

It's possible to post-process the video processed by the WebAR SDK. You can grab the idea from the following code snippet:

```
import { MediaStreamCapture /* ... */ } from "@banuba/webar"

// ...
const capture = document.createElement("video")
capture.autoplay = true
capture.srcObject = new MediaStreamCapture(player)

const canvas = document.getElementById("postprocessed")
const ctx = canvas.getContext("2d")
const fontSize = 48 * window.devicePixelRatio

function postprocess() {
  canvas.width = capture.videoWidth
  canvas.height = capture.videoHeight
  ctx.drawImage(capture, 0, 0)
  ctx.font = `${fontSize}px serif`
  ctx.fillStyle = "red"
  ctx.fillText("A Watermark", 0.5 * fontSize, 1.25 * fontSize)
}

;(function loop() {
  postprocess()
  requestAnimationFrame(loop)
})()
```

See [Capturing processed video](#capturing-processed-video) > [MediaStream](#mediastream) for details.

## Minimal sample

A one-file example, a good starting point.

## Quickstart Flutter sample

This project demonstrates how to apply effects and makeup, and how to process photos.
## Video call sample

This example demonstrates the use of **Banuba SDK** in conjunction with the **AgoraRTC SDK** for a video call on Flutter. This example is forked from the [Flutter plugin of Agora](https://github.com/AgoraIO-Extensions/Agora-Flutter-SDK). Banuba SDK is integrated in the `JoinChannelVideo` subsample. See [videocall](/far-sdk/tutorials/development/videocall.md).

## ARCloud sample

Read more about [ARCloud](/far-sdk/tutorials/development/guides/ar_cloud.md).

## Minimal sample

A one-file example, a good starting point. This is part of the React Native module for Banuba.

## Videocall example

This example demonstrates the use of **Banuba SDK** in conjunction with the **AgoraRTC SDK** for a video call on React Native. This example is forked from the [Agora module for React Native](https://github.com/AgoraIO-Extensions/react-native-agora). Banuba SDK is integrated in the `JoinChannelVideo` subsample. See [videocall](/far-sdk/tutorials/development/videocall.md).

## macOS sample (Swift)

This repository contains basic samples showing how to use Banuba SDK on macOS.

The examples below are written in C++ and will run both on Windows and macOS.

## Quickstart Desktop (C++)

The starting point for desktop integration in C++. Demonstrates:

1. [on-screen rendering with a realtime camera](https://github.com/Banuba/quickstart-desktop-cpp/blob/master/realtime-camera-preview/main.cpp)
2. [photo processing](https://github.com/Banuba/quickstart-desktop-cpp/blob/master/single-image-processing/main.cpp)
3. [video stream processing](https://github.com/Banuba/quickstart-desktop-cpp/blob/master/videostream-processing/main.cpp)
---

# Using video calls with the Banuba SDK

In our example, the **AgoraRTC SDK** is used for video streaming, but the integration can be based on any video streaming library.

important

You should have client tokens for both the **AgoraRTC SDK** and the **Banuba SDK**.
To receive a **Banuba SDK** token, fill in the [form on banuba.com](https://www.banuba.com/facear-sdk/face-filters#form) or contact us.
To generate **AgoraRTC SDK** tokens, visit the [Agora website](https://www.agora.io/).

* iOS
* Android
* Web
* Flutter
* ReactNative

[Example of using video calls with the Banuba SDK](https://github.com/Banuba/banuba-sdk-ios-samples/tree/master/videocall)

## Installation

1. Add the `banuba-sdk-podspecs` repo along with the `AgoraRtcEngine_iOS` and `BanubaSdk` packages into your `Podfile`. Alternatively, you may use our [SPM modules](/far-sdk/tutorials/development/installation.md#spm-packages).

info

See the details about the **Banuba SDK** packages in [Installation](/far-sdk/tutorials/development/installation.md).

## Integration

1. Set up client tokens

   videocall/videocall/ViewModel.swift

   ```
   loading...
   ```

2. Initialize `BanubaSdkManager`

   common/common/AppDelegate.swift

   ```
   loading...
   ```

3. Initialize `AgoraRtcEngineKit`, set up video/audio encoders, and join the channel

   videocall/videocall/ViewModel.swift

   ```
   loading...
   ```

4. Set up `Player`, load the effect, and start `Camera` frame forwarding

   videocall/videocall/ViewModel.swift

   ```
   loading...
   ```

5. Run the application! 🎉 🚀 💅

[Example of using video calls with the Banuba SDK](https://github.com/Banuba/banuba-sdk-android-samples/tree/master/videocall)

For a video call, you need to receive frames as an array of pixels, frame by frame, in RGBA format. This can be done using `FrameOutput`. Just create a variable called `frameOutput` and add a callback that will receive an array of pixels, `framePixelBuffer`:

videocall/src/main/java/com/banuba/sdk/example/videocall/MainActivity.kt

```
loading...
```

Now, when initializing the player, pass the created variable: `player.use(myInput, frameOutput)`.

## Follow these steps to configure Videocall

important

To get started, add the **Banuba SDK** [integration code](/far-sdk/tutorials/development/basic_integration.md#integration) to your project.

Add the **AgoraRTC SDK** dependency to your `build.gradle.kts`:

videocall/build.gradle.kts

```
loading...
```

This example is based on the [**Player API**](/far-sdk/tutorials/development/basic_integration.md#integration). First, create the **Banuba SDK** core, `player`. Create a `surfaceOutput` that will draw the processed image from the **Banuba SDK**, and a `frameOutput` that will produce the processed image and transfer it to **Agora** as an array of pixels. Also create a camera, `cameraDevice`, and manage it yourself. **Agora** has its own camera module too, but in this example the **Agora** camera is not used, so `setExternalVideoSource(...)` is called to disable it:

videocall/src/main/java/com/banuba/sdk/example/videocall/MainActivity.kt

```
loading...
```

Create the **Agora** core with which the video call will be made, `agoraRtc`, specifying where **Agora** will draw the received frames:

videocall/src/main/java/com/banuba/sdk/example/videocall/MainActivity.kt

```
loading...
```

Then initialize everything and set up the video call in the `onCreate(...)` method.

## How it works

Frames from the **Banuba** camera are processed in the `player`, and the result is passed to the `onFrame(...)` handler. In the handler, frames are passed to the **Agora** module via `agoraRtc.pushExternalVideoFrame(...)`. **Agora** then transmits the frame to the server; this is how the video call works.

## Fully working code

MainActivity.kt

```
loading...
```
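To recap the flow described above, here is a compact Kotlin sketch. It is illustrative only: the `FrameOutput` callback shape and the pixel-buffer field names are assumptions, and the `AgoraVideoFrame` usage is simplified — the `MainActivity.kt` sample above contains the working code.

```
// Sketch only: the Banuba callback shape and buffer fields are illustrative
// assumptions; see the full MainActivity.kt sample for the real implementation.
// import io.agora.rtc.video.AgoraVideoFrame // Agora RTC Android SDK
// import com.banuba.sdk.output.FrameOutput  // assumed import path

val frameOutput = FrameOutput { _, framePixelBuffer ->
    framePixelBuffer?.let { pb ->
        // copy the processed RGBA pixels out of the buffer
        val bytes = ByteArray(pb.buffer.remaining()).also { pb.buffer.get(it) }
        // wrap them into an Agora frame...
        val frame = AgoraVideoFrame().apply {
            format = AgoraVideoFrame.FORMAT_RGBA // assumed RGBA format constant
            stride = pb.bytesPerRow / 4          // pixels per row
            height = pb.height
            buf = bytes
            timeStamp = System.currentTimeMillis()
        }
        // ...and hand it over to Agora, which transmits it to the server
        agoraRtc.pushExternalVideoFrame(frame)
    }
}

// the player processes camera frames and feeds them to the callback above
player.use(cameraInput, frameOutput)
```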
5. Connect the `Player` to `AgoraRTC`

   index.html

   ```
   loading...
   ```

6. Run the application! 🎉 🚀 💅

tip See the [AgoraWebSDK NG API docs](https://agoraio-community.github.io/AgoraWebSDK-NG/api/en/interfaces/iagorartc.html#createcustomvideotrack) for details.

tip See the [Banuba Video call demo app](https://github.com/Banuba/videocall-web) for more code examples.

### OpenTok (TokBox)

```
import "https://cdn.jsdelivr.net/npm/@opentok/client"
import {
  MediaStream,
  Player,
  Module,
  Effect,
  MediaStreamCapture,
} from "https://cdn.jsdelivr.net/npm/@banuba/webar/dist/BanubaSDK.browser.esm.js"

// ...

const camera = await navigator.mediaDevices.getUserMedia({ audio: true, video: true })

const player = await Player.create({ clientToken: "xxx-xxx-xxx" })
await player.addModule(new Module("https://cdn.jsdelivr.net/npm/@banuba/webar/dist/modules/background.zip"))

const webar = new MediaStreamCapture(player)
await player.use(new MediaStream(camera))
player.applyEffect(new Effect("BackgroundBlur.zip"))
player.play()

// original audio
const audio = camera.getAudioTracks()[0]
// webar processed video
const video = webar.getVideoTracks()[0]

const session = OT.initSession("OT API KEY", "OT SESSION ID")
session.connect("OT SESSION TOKEN", async () => {
  const publisher = await OT.initPublisher(
    "publisher",
    {
      insertMode: "append",
      audioSource: audio,
      videoSource: video,
      width: "100%",
      height: "100%",
    },
    () => {},
  )
  session.publish(publisher, () => {})
})

// ...
```

tip See the [TokBox Video API docs](https://tokbox.com/developer/sdks/js/reference/OT.html#initPublisher) for details.

tip See the [Banuba Video call (TokBox) demo app](https://github.com/Banuba/videocall-tokbox-web) for more code examples.

### WebRTC

This example builds on the [Fireship WebRTC demo](https://github.com/fireship-io/webrtc-firebase-demo/blob/main/main.js):

```
import {
  MediaStream as BanubaMediaStream,
  Player,
  Module,
  Effect,
  MediaStreamCapture,
} from "https://cdn.jsdelivr.net/npm/@banuba/webar/dist/BanubaSDK.browser.esm.js"

// ...

webcamButton.onclick = async () => {
  localStream = await navigator.mediaDevices.getUserMedia({ video: true, audio: true })
  remoteStream = new MediaStream()

  const player = await Player.create({ clientToken: "xxx-xxx-xxx" })
  await player.addModule(new Module("https://cdn.jsdelivr.net/npm/@banuba/webar/dist/modules/background.zip"))

  const webar = new MediaStreamCapture(player)
  await player.use(new BanubaMediaStream(localStream))
  player.applyEffect(new Effect("BackgroundBlur.zip"))
  player.play()

  // original audio
  const audio = localStream.getAudioTracks()[0]
  // webar processed video
  const video = webar.getVideoTracks()[0]
  localStream = new MediaStream([audio, video])

  // Push tracks from local stream to peer connection
  localStream.getTracks().forEach((track) => {
    pc.addTrack(track, localStream)
  })

  // ...
}
```

Due to **Flutter** limitations, every videocall solution requires a **native Flutter plugin**. We have developed one for integration with [Agora](https://docs.agora.io). It is expected that you will develop your own [Flutter plugin](https://docs.flutter.dev/packages-and-plugins/developing-packages) if **Agora** isn't suitable for you. We have created an [Agora Extension](https://docs.agora.io/en/video-calling/develop/use-an-extension?platform=flutter) which is accessible from **Flutter**.
[The sample](https://github.com/Banuba/banuba-agora-flutter-sdk) described below is a fork of the [Agora Flutter plugin](https://github.com/AgoraIO-Extensions/Agora-Flutter-SDK). Follow the instructions in `README.md` to run it. These are the general steps to integrate the sample code into your app.

**Android**

1. Add the **Client Token** and extension property key constants

   example/lib/examples/basic/join_channel_video/join_channel_video.dart

   ```
   loading...
   ```

2. Add common methods to interact with the **Banuba extension**

   example/lib/examples/basic/join_channel_video/join_channel_video.dart

   ```
   loading...
   ```

3. Initialize **Banuba** and load an **effect**

   example/lib/examples/basic/join_channel_video/join_channel_video.dart

   ```
   loading...
   ```

4. Copy the effects into the [`assets/effects`](https://github.com/Banuba/banuba-agora-flutter-sdk/tree/main/example/assets/effects) folder

5. Add a reference to the **Banuba Maven repo**

   example/android/build.gradle

   ```
   loading...
   ```

6. Add the **Banuba dependencies** and prepare a task to copy the effects into the app

   example/android/app/build.gradle

   ```
   loading...
   ```

**iOS**

1. Add the **Client Token** and extension property key constants

   example/lib/examples/basic/join_channel_video/join_channel_video.dart

   ```
   loading...
   ```

2. Add common methods to interact with the **Banuba extension**

   example/lib/examples/basic/join_channel_video/join_channel_video.dart

   ```
   loading...
   ```

3. Initialize **Banuba** and load an **effect**

   example/lib/examples/basic/join_channel_video/join_channel_video.dart

   ```
   loading...
   ```

4. Copy the effects into the [`assets/effects`](https://github.com/Banuba/banuba-agora-flutter-sdk/tree/main/example/assets/effects) folder

5. Add the Banuba dependencies to the `Podfile`

   example/ios/Podfile

   ```
   loading...
   ```

6. Add the `effects` folder added earlier into your project. Link it with your app: add the folder to the `Runner` **Xcode** project (`File` -> `Add Files to 'Runner'...`).

Due to **React Native** limitations, every videocall solution requires a **React Native native module**. We developed one for integration with [Agora](https://docs.agora.io). It is expected that you will develop your own [React Native native module](https://reactnative.dev/docs/native-modules-intro) if **Agora** isn't suitable for you. We have created an [Agora Extension](https://docs.agora.io/en/video-calling/develop/use-an-extension?platform=react-native) which is accessible from **React Native**.

[The sample](https://github.com/Banuba/banuba-react-native-agora) described below is a fork of [react-native-agora](https://github.com/AgoraIO-Extensions/react-native-agora). Follow the instructions in `README.md` to run it. These are the general steps to integrate the sample code into your app.

**Android**

1. Add the **Client Token** and extension property key constants

   example/src/examples/basic/JoinChannelVideo/JoinChannelVideo.tsx

   ```
   loading...
   ```

2. Add the common methods to interact with the **Banuba extension**

   example/src/examples/basic/JoinChannelVideo/JoinChannelVideo.tsx

   ```
   loading...
   ```
3. Initialize **Banuba** and load an **effect**

   example/src/examples/basic/JoinChannelVideo/JoinChannelVideo.tsx

   ```
   loading...
   ```

   warning `initBanuba()` must be called right after `engine.initialize(...)`; calling it later will cause an error during extension loading.

4. Copy the effects into the [`effects`](https://github.com/Banuba/banuba-react-native-agora/tree/main/example/effects) folder

5. Add a reference to the **Banuba Maven repo**

   example/android/build.gradle

   ```
   loading...
   ```

6. Add the **Banuba dependencies** and prepare a task to copy the effects into the app

   example/android/app/build.gradle

   ```
   loading...
   ```

**iOS**

1. Add the **Client Token** and extension property key constants

   example/src/examples/basic/JoinChannelVideo/JoinChannelVideo.tsx

   ```
   loading...
   ```

2. Add the common methods to interact with the **Banuba extension**

   example/src/examples/basic/JoinChannelVideo/JoinChannelVideo.tsx

   ```
   loading...
   ```

3. Initialize **Banuba** and load an **effect**

   example/src/examples/basic/JoinChannelVideo/JoinChannelVideo.tsx

   ```
   loading...
   ```

   warning `initBanuba()` must be called immediately after `engine.initialize(...)`; calling it later will cause an error during extension loading.

4. Copy the effects into the [`effects`](https://github.com/Banuba/banuba-react-native-agora/tree/main/example/effects) folder

5. Add the **Banuba dependencies** to the `Podfile`

   example/ios/Podfile

   ```
   loading...
   ```

6. Add the `effects` folder added earlier into your project. Link it with your app: add the folder to the **Xcode** project (`File` -> `Add Files to ''...`).

---

# Banuba Face AR SDK for Unity

Unity Face AR SDK allows developers to create cross-platform face tracking apps with custom AR effects in Unity3D. Banuba Face AR SDK for Unity is a native library compiled for the following platforms:

* Windows
* macOS
* iOS
* Android

## Requirements

* The latest Unity Hub with the latest **Unity Editor LTS** installed. Minimum Unity Editor version: **2019.3.13f1**
* Unity platform plugins installed for each required platform: Android, iOS, or Desktop
* **Windows**: before launching Unity, check that the following tools are present in Microsoft VS2019: ![Image](/far-sdk/assets/images/tools-52640adedca1f7d9dc839b2ab8ebeaeb.png)

## Usage

### Token

You can test all the features of the SDK for free before committing to a license. To get started, [send us a message](https://www.banuba.com/facear-sdk/face-filters#form).
We will get back to you with the trial token. You can store the token within the app. Feel free to [contact us](https://docs.banuba.com/far-sdk/support) if you have any questions.

### How To Import To Your Project

1. Download the latest assets package, [BanubaSDK-import.unitypackage](https://github.com/Banuba/quickstart-unity/releases), and import it using the Unity Package Manager.
2. Put your Client Token into `Assets/Resources/BanubaClientToken.txt`
3. Find the [Demo Scene](/far-sdk/tutorials/unity/demo_scene.md), open it, and click Run! 🎉 🚀 💅

---

# Unity Demo Scene

Banuba SDK provides a **Demo Scene** for our Unity SDK. The Demo scene contains several AR effects. To launch the Demo scene on a device, follow the steps below:

1. In the project files tree, find and open **BanubaSDKDemo.unity**, which is located under *Assets* -> *BanubaFaceAR* -> *Demo*
2. To add effects, find the `EffectsManager` object in the Hierarchy and populate the `Effects` list in the Inspector with the desired effect prefabs, which you can find inside the folders under *Assets* -> *BanubaFaceAR* -> *Effects*. At launch, the list will already be filled with all available effects. ![image](/far-sdk/assets/images/demo_1-ca9fb65dd771d6a37a982d28d743b31b.png)
3. Select the **LoaderScene** and **BanubaSDKDemo** scenes in **File** -> **Build Settings** and make sure their indexes are 0 and 1, respectively. Then launch it on the needed platform as usual. That's how a properly configured scene should look. ![image](/far-sdk/assets/images/demo_2-5c138f26f171ec2ec2116b4bb145d819.png)

---

# Face AR SDK for Unity Overview

![image](/far-sdk/assets/images/unity_overview-90d94c3d663f22905d904de0c25da6dc.svg)

Unity Face AR SDK provides ready-to-use assets (prefabs, scripts, materials, scenes, etc.) for fast integration based on the BanubaSDKBridge API.

### BanubaSDKManager

*script:* `Assets/BanubaFaceAR/BaseAssets/Scripts/BanubaSDKManager.cs`

Public Properties:

* Max Face Count - sets the maximum number of faces to search for

BanubaSDKManager.cs main functionality:

* Initializes the Banuba SDK with a [token](/far-sdk/tutorials/unity/basic_integration.md#usage)

Assets/BanubaFaceAR/BaseAssets/Scripts/BanubaSDKManager.cs

```
private void Awake()
{
    //...
    // Load Token
    var tokenResourceFile = Resources.Load<TextAsset>("BanubaClientToken");
    var tokenLine = tokenResourceFile.text.Trim();

    // Banuba Face AR SDK static environment initialization
    var error = IntPtr.Zero;
    BanubaSDKBridge.bnb_recognizer_env_init(tokenLine, out error);
    Utils.CheckError(error);

    // The recognizer object init method needs a path to its resources. They are placed in the
    // Assets/StreamingAssets folder, and Unity does not compress resources placed there, which is important.
    // The full path to Assets/StreamingAssets is platform dependent. Unity provides it as the
    // Application.streamingAssetsPath property.
    // We recommend using only one instance of the recognizer object to decrease memory consumption.
#if (UNITY_ANDROID || UNITY_WEBGL) && !UNITY_EDITOR
    var resourcesPath = Application.persistentDataPath;
#else
    var resourcesPath = Application.streamingAssetsPath;
#endif
    Recognizer = new Recognizer(resourcesPath + "/BanubaFaceAR/");

    // set the maximum number of faces to search for
    BanubaSDKBridge.bnb_recognizer_set_max_faces(Recognizer, _maxFaceCount, out error);
    Utils.CheckError(error);
    //...
}
```

* Processes the input image and notifies subscribers when there is a recognition result, via `public event Action<FrameData> onRecognitionResult;`

Assets/BanubaFaceAR/BaseAssets/Scripts/BanubaSDKManager.cs

```
public static bool processCameraImage(BanubaSDKBridge.bnb_bpc8_image_t cameraImage)
{
    if (instance == null) {
        return false;
    }
    var error = IntPtr.Zero;

    var frameData = BanubaSDKBridge.bnb_frame_data_init(out error);
    Utils.CheckError(error);
    BanubaSDKBridge.bnb_frame_data_set_bpc8_img(frameData, ref cameraImage, out error);
    Utils.CheckError(error);
    BanubaSDKBridge.bnb_recognizer_push_frame_data(instance.Recognizer, frameData, out error);
    Utils.CheckError(error);

    var outFrameData = new FrameData();
    bool process = BanubaSDKBridge.bnb_recognizer_pop_frame_data(instance.Recognizer, outFrameData, out error);
    Utils.CheckError(error);
    if (process) {
        instance.onRecognitionResult?.Invoke(outFrameData);
    }
    return process;
}
```

All other feature-based instances must subscribe to `onRecognitionResult`

Feature Based Class Example

```
private void Awake()
{
    BanubaSDKManager.instance.onRecognitionResult += OnRecognitionResult;
}

private void OnDestroy()
{
    BanubaSDKManager.instance.onRecognitionResult -= OnRecognitionResult;
}

private void OnRecognitionResult(FrameData frameData)
{
    // Do Something with frameData
}
```

note Currently, BanubaSDKManager.cs provides synchronous image processing only, but the BanubaSDK API also provides async processing.

### Camera Device

*script:* `Assets/BanubaFaceAR/BaseAssets/Scripts/CameraDevice.cs`

The camera device class is based on [WebCamTexture](https://docs.unity3d.com/ScriptReference/WebCamTexture.html). It takes an image from the camera device and pushes it to the BanubaSDKManager.

Push Camera Frame Example

```
// Create Camera Image
var cameraImage = new BanubaSDKBridge.bnb_bpc8_image_t {
    format = new BanubaSDKBridge.bnb_image_format_t()
};

// Create Color32 pixel array
var data = new Color32[texSize];
//...
// Fill the data with WebCamTexture.GetPixels32(data) or any other method,
// depending on what you want to process
//...

// Marshaling
GCHandle pinnedData = GCHandle.Alloc(data, GCHandleType.Pinned);
cameraImage.format.orientation = AngleToOrientation(0); // image orientation
cameraImage.format.require_mirroring = 1;               // selfie mode (horizontal flip)
cameraImage.format.face_orientation = 0;
cameraImage.format.width = (uint) width;
cameraImage.format.height = (uint) height;
cameraImage.data = pinnedData.AddrOfPinnedObject();     // retrieve the IntPtr
cameraImage.pixel_format = BanubaSDKBridge.bnb_pixel_format_t.BNB_RGBA; // image format

// Process image with BanubaSDKManager
BanubaSDKManager.processCameraImage(cameraImage);

// Free pinned data
pinnedData.Free();
```
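The example above leaves the pixel-filling step elided. Below is a minimal, self-contained sketch of the same flow as a component, assuming the plugin's `BanubaSDKBridge` and `BanubaSDKManager` types; the orientation handling is simplified (CameraDevice.cs derives it from the camera angle via `AngleToOrientation`):

```
using System.Runtime.InteropServices;
using UnityEngine;

public class WebCamFrameSource : MonoBehaviour
{
    private WebCamTexture _camera;
    private Color32[] _pixels;

    private void Start()
    {
        _camera = new WebCamTexture();
        _camera.Play();
    }

    private void Update()
    {
        if (!_camera.didUpdateThisFrame)
            return;

        // (Re)allocate the pixel buffer and grab the current camera frame.
        if (_pixels == null || _pixels.Length != _camera.width * _camera.height)
            _pixels = new Color32[_camera.width * _camera.height];
        _camera.GetPixels32(_pixels);

        var cameraImage = new BanubaSDKBridge.bnb_bpc8_image_t {
            format = new BanubaSDKBridge.bnb_image_format_t()
        };
        // Pin the managed array so the native side can read it.
        var pinnedData = GCHandle.Alloc(_pixels, GCHandleType.Pinned);
        cameraImage.format.orientation = 0;       // simplified; see AngleToOrientation in CameraDevice.cs
        cameraImage.format.require_mirroring = 1; // selfie mode (horizontal flip)
        cameraImage.format.face_orientation = 0;
        cameraImage.format.width = (uint) _camera.width;
        cameraImage.format.height = (uint) _camera.height;
        cameraImage.data = pinnedData.AddrOfPinnedObject();
        cameraImage.pixel_format = BanubaSDKBridge.bnb_pixel_format_t.BNB_RGBA;

        BanubaSDKManager.processCameraImage(cameraImage);
        pinnedData.Free();
    }
}
```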
It also provides the `onCameraTexture` event, which allows subscribers to retrieve data from the original camera image, as `Plane Controller` does.

### Camera

Components:

* *script:* `Assets/BanubaFaceAR/BaseAssets/Scripts/CameraController.cs`

Retrieves the needed camera settings based on the recognition result.

### Camera Plane

Components:

* *script:* `Assets/BanubaFaceAR/BaseAssets/Scripts/PlaneController.cs`

A Canvas UI element that renders the camera image with the right transformation. A child of the `Surface Canvas`.

## Face AR Effect

### Faces Controller

*script:* `Assets/BanubaFaceAR/BaseAssets/Scripts/FacesController.cs`

A ready-to-use prefab for rendering the Face Mesh.

Properties:

* Enable UV Draw - instantiates a UV Draw component (the difference between static and dynamic face meshes)
* Enable Static Pos - instantiates a Static Pos component (a face mesh without facial expressions)

Instantiates GameObjects with the [FaceController](/far-sdk/tutorials/unity/overview.md#face-controller) component, depending on the faces detected in the camera image. By default it contains a Face0 object as a child; if more than one face is detected, it copies Face0 and increments its index.

note The maximum number of faces to search for is set in the properties of the [BanubaSDKManager](/far-sdk/tutorials/unity/overview.md#banubasdkmanager)

### Face Controller

*script:* `Assets/BanubaFaceAR/BaseAssets/Scripts/FaceController.cs`

Applies the transformations retrieved from the Frame Data.

### Face Mesh Controller

*script:* `Assets/BanubaFaceAR/BaseAssets/Scripts/FaceMeshController.cs`

A basic component for rendering the Face Mesh.

### Face AR example

![Image](/far-sdk/assets/images/minimal_example-0b3c3601b09d49182318ac66d76b4554.png)

## Morphing

### Morphing Feature

Components:

* *script:* `Assets/BanubaFaceAR/FeatureMorphing/Scripts/MorphingFeature.cs`

A ready-to-use prefab for applying a face morphing filter.

Public Properties:

* Morph Shape - an `IMorphDraw` component field. Defines the type of the morphing filter. Available shapes:
  * [UV Morphing](/far-sdk/tutorials/unity/overview.md#uv-morphing)
  * [Custom Morphing](/far-sdk/tutorials/unity/overview.md#custom-morphing)
* [Faces Controller](/far-sdk/tutorials/unity/overview.md#faces-controller)
* [Effect Reference](/far-sdk/tutorials/unity/overview.md#morphing-result-camera)

### Morphing Draw Camera

Components:

* *script:* `Assets/BanubaFaceAR/BaseAssets/Scripts/Blur.cs`
* *script:* `Assets/BanubaFaceAR/BaseAssets/Scripts/CameraDevice.cs`
* *script:* `Assets/BanubaFaceAR/BaseAssets/Scripts/RenderToTexture.cs`

This camera renders a blurred difference between the Face Mesh vertices and the Morphing Shape vertices to a [RenderTexture](https://docs.unity3d.com/ScriptReference/RenderTexture.html) with the `RenderToTexture.cs` component.

#### UV Morphing

This shape morphs the face mesh with a user-provided mesh. You can find a basic example of a UV morphing shape here: `platform/unity/BanubaSdk/Assets/BanubaFaceAR/FeatureMorphing/effect/MorphTest.prefab`

**Components:**

* [MeshRenderer](https://docs.unity3d.com/ScriptReference/MeshRenderer.html) with the mesh of the morph shape
* *script:* `MorphDraw.cs`. Updates material properties.
* *material:* `MorphDraw.mat`.
important UV Morph draw shapes require `Enable UV Draw` and `Enable Static Pos` in the [Faces Controller](/far-sdk/tutorials/unity/overview.md#faces-controller)

![morph](/far-sdk/assets/images/morph_draw-d234572f5a8cb8950f656becd28e1c08.png)

#### Custom Morphing

A ready-to-use Morph Shape with 28 face parts that can be changed at runtime.

**Components:**

* [MeshRenderer](https://docs.unity3d.com/ScriptReference/MeshRenderer.html) with the mesh of the morph shape
* *script:* `CustomMorphDraw.cs`. Updates material properties.
* *material:* `CustomMorphDraw.mat`.

![custom morph](/far-sdk/assets/images/custom_morph_draw-32b063e5003825105e9319cc0bb85ffe.png)

### Morphing Result Camera

**Components:**

* *script:* `Assets/BanubaFaceAR/FeatureMorphing/Scripts/MorphingPostEffect.cs`
* *script:* `Assets/BanubaFaceAR/BaseAssets/Scripts/CameraController.cs`

It takes the morphing result from the [Morphing Draw Camera](/far-sdk/tutorials/unity/overview.md#morphing-draw-camera) and applies the morphing in the [OnRenderImage](https://docs.unity3d.com/ScriptReference/MonoBehaviour.OnRenderImage.html) event function.

### Morphing Example

You can find the basic example here:

![Image](/far-sdk/assets/images/morphing_example-ea67a7628f92f43a5314ac0115f3c882.png)

## Segmentation

### Segmentation Feature

Components:

* *script:* `Assets/BanubaFaceAR/FeatureSegmentation/Scripts/SegmentationFeature.cs`

Properties:

* [Type](/far-sdk/tutorials/unity/overview.md#segmentation-types) - the type of the segmentation feature (see the runtime sketch at the end of this section)
* Use Segmentation Shader - if enabled, uses the default segmentation shader located at `Assets/BanubaFaceAR/FeatureSegmentation/Shaders/Segmentation.shader`
* Plane - the [Camera Plane reference](/far-sdk/tutorials/unity/overview.md#camera-plane)
* *unityUI:* [Raw Image](https://docs.unity3d.com/2018.2/Documentation/ScriptReference/UI.RawImage.html)

Location: `Assets/BanubaFaceAR/FeatureSegmentation/Prefabs/SegmentationFeature.prefab`

A ready-to-use prefab with all the segmentation features provided by the **Banuba SDK**.

### Segmentation Types:

```
public enum bnb_segm_type_t
{
    BNB_BACKGROUND = 0,
    BNB_BODY,
    BNB_FACE,
    BNB_HAIR,
    BNB_NECK,
    BNB_SKIN,
    BNB_LIPS,
    BNB_BROW_LEFT,
    BNB_BROW_RIGHT,
    BNB_EYE_PUPIL_LEFT,
    BNB_EYE_PUPIL_RIGHT,
    BNB_EYE_SCLERA_LEFT,
    BNB_EYE_SCLERA_RIGHT,
    BNB_EYE_IRIS_LEFT,
    BNB_EYE_IRIS_RIGHT,
    BNB_FACE_SKIN
}
```

### Segmentation Example:

**Background:**

![Image](/far-sdk/assets/images/segmentation_example-ff253946249a7179eff32aa4acea9c65.png)

**Face Skin:**

![Image](/far-sdk/assets/images/segmentation_face_example-7a3cfca57e4ac3ad05e62602b84fa0ca.png)
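If you prefer to drive the segmentation type from code rather than the Inspector, a minimal sketch follows. The `Type` member mirrors the Inspector property listed above; its exact spelling in `SegmentationFeature.cs` may differ, so treat it as an assumption:

```
using UnityEngine;

// Hypothetical sketch: switch the active segmentation type at runtime.
public class SegmentationSwitcher : MonoBehaviour
{
    [SerializeField] private SegmentationFeature _feature; // assign in the Inspector

    // Call from a UI button, for example, to switch to hair segmentation.
    public void ShowHairSegmentation()
    {
        _feature.Type = bnb_segm_type_t.BNB_HAIR; // assumed property name, see above
    }
}
```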
## MakeUp

### How to add Makeup to your Effect

1. Find `Makeup.prefab` in the Project window, drag it, and attach it to your effect in the Hierarchy window. ![image](/far-sdk/assets/images/unity_makeup_1_1-980e894d2e5407a7a691c64dafe87efc.png)
2. Assign a reference to your effect's PlaneController component for each segmentation AR 3D mask (*SmoothCamera* is not segmentation) under the **Canvas** object, by dragging the **CameraPlane** object from the Hierarchy onto the **Plane** field of each Segmentation Feature component. You can do it one by one, or select all of them in the Hierarchy and assign the reference to all of them at once, as shown below. ![image](/far-sdk/assets/images/unity_makeup_1_2-78be8d8cb28d821972e588d065a0fe2a.png)
3. Add the required references for the FacesController. If you don't have a FacesController in your current effect, you can find it at `Assets/BanubaFaceAR/BaseAssets/Prefabs/FacesController.prefab` and drag it into the effect root. ![image](/far-sdk/assets/images/unity_makeup_1_4-f9867e2d5b30569edea54b1ae037d608.jpg)
4. Assign the effect's **ResultCamera** to the **RenderCamera** field on the **Canvas** object. ![image](/far-sdk/assets/images/unity_makeup_1_3-097af1f60f71b3ea90aa6a0eb6d757d8.png)
5. Change the FaceMesh material to `Assets/BanubaFaceAR/Makeup/Materials/EyeFaceMakeup.mat`: ![image](/far-sdk/assets/images/unity_makeup_1_5-7d368aca622eaf451dd2432877ca3994.jpg)
6. Makeup is ready!

### How to use

After Makeup is added to your object, it's time to tweak some options. Each separate makeup feature is represented as a component on the Makeup object that provides different options to adjust. You can do it both from code and from the Inspector. ![image](/far-sdk/assets/images/unity_makeup_2_1-ee9f09c41ccbf00391a76e320faac492.png)

*Tweak options from the Inspector to see makeup in Editor Playmode*

note The Lips makeup component has 3 option presets for a particular lips look. To apply them, find and click the button in the upper-right corner of the Lips Makeup component window and choose the preset you want from the end of the list. ![image](/far-sdk/assets/images/unity_makeup_2_2-ad72af3b5336225ba6c8b1313bfa9c08.png)

To access these options from code at runtime, get a reference to the MakeupAPI component, which stores references to all the other makeup components.

```
MakeupAPI makeupAPI = makeupGameObject.GetComponent<MakeupAPI>();
makeupAPI.Lips.color = new Color(0.8f, 0.1f, 0, 0.6f);
makeupAPI.Lips.brightness = 0.9f;
makeupAPI.Skin.softeningStrength = 1;
```

*Adjusting Makeup from code*

If you want to completely disable or enable a particular makeup feature, just change the state of the MonoBehaviour's `enabled` property.

```
makeupAPI.Skin.enabled = false; // now Skin makeup is completely disabled
```

### Makeup example scene

The Banuba Unity plugin provides an example scene with the Makeup effect. Everything is already set up. Find and open the `MakeupExample.unity` scene under *Assets* -> *BanubaFaceAR* -> *Makeup*. Click the Play button and see how it works!

## Hand skeleton

The Hand skeleton feature allows you to detect and render a 2D hand skeleton.

1. Enable the Hand skeleton feature with the **BanubaSDKBridge.bnb\_recognizer\_insert\_feature** function.

   ```
   // var recognizer = BanubaSDKManager.instance.Recognizer
   // ...
   var featuresId = BanubaSDKBridge.bnb_recognizer_get_features_id();
   BanubaSDKBridge.bnb_recognizer_insert_feature(recognizer, featuresId.hand_skeleton, out var error);
   ```

2. Get the detected hand with the **BanubaSDKBridge.bnb\_frame\_data\_get\_hand** function.

   ```
   // frameData = BanubaSDKBridge.bnb_recognizer_process_frame_data(..., frameData, ...)
   // ...
   var error = IntPtr.Zero;
   var hand = BanubaSDKBridge.bnb_frame_data_get_hand(
       frameData,
       Screen.width,
       Screen.height,
       BanubaSDKBridge.bnb_rect_fit_mode_t.bnb_fit_height,
       out error);
   Utils.CheckError(error);
   ```
3. `bnb_hand_data_t` contains the landmarks for the hand skeleton, a transformation for the landmarks, and the currently detected gesture (for gesture detection, see the Hand Gestures section below). Apply these landmarks as the vertices of your mesh, as in the sketch after this step.

   ```
   [StructLayout(LayoutKind.Sequential)]
   public struct bnb_hand_data_t
   {
       public bnb_hand_gesture_t gesture;
       public int vertices_count;
       public IntPtr vertices;
       [MarshalAs(UnmanagedType.ByValArray, SizeConst = 16)]
       public float[] transform;
   };
   ```
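A sketch of step 3, assuming the landmark buffer holds (x, y) float pairs and `transform` is a row-major 4x4 matrix; both layout assumptions should be verified against HandSkeleton.cs:

```
using System.Runtime.InteropServices;
using UnityEngine;

public static class HandMeshHelper
{
    public static void Apply(bnb_hand_data_t hand, Mesh mesh, Transform target)
    {
        if (hand.vertices_count <= 0 || hand.vertices == System.IntPtr.Zero)
            return;

        // Copy the unmanaged landmark buffer into managed memory.
        var raw = new float[hand.vertices_count * 2];
        Marshal.Copy(hand.vertices, raw, 0, raw.Length);

        // Use the 2D landmarks as mesh vertices (z = 0).
        var vertices = new Vector3[hand.vertices_count];
        for (int i = 0; i < hand.vertices_count; ++i)
            vertices[i] = new Vector3(raw[i * 2], raw[i * 2 + 1], 0f);
        mesh.vertices = vertices;
        mesh.RecalculateBounds();

        // Apply the 4x4 landmark transform to the target object.
        var m = new Matrix4x4();
        for (int i = 0; i < 16; ++i)
            m[i / 4, i % 4] = hand.transform[i];
        target.localPosition = m.GetColumn(3);
        target.localRotation = m.rotation;
    }
}
```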
### How to add and use the Hand gestures feature

With the Unity Face AR Hand Gestures feature, you can get and use hand gesture triggers in your app. At the moment, our algorithms are able to recognize 5 hand gestures:

* Like 👍
* Ok 👌
* Palm ✋
* Rock 🤘
* Victory/Peace ✌️

1. Enable the Hand gestures feature with the **BanubaSDKBridge.bnb\_recognizer\_insert\_feature** function.

   ```
   // var recognizer = BanubaSDKManager.instance.Recognizer
   // ...
   var featuresId = BanubaSDKBridge.bnb_recognizer_get_features_id();
   BanubaSDKBridge.bnb_recognizer_insert_feature(recognizer, featuresId.hand_gestures, out var error);
   ```

   **NOTE:** **featuresId.hand\_gestures** always enables **featuresId.hand\_skeleton**.

2. Get the gesture detected in this frame with the **BanubaSDKBridge.bnb\_frame\_data\_get\_gesture** function, as in the sketch below.

   ```
   // frameData = BanubaSDKBridge.bnb_recognizer_process_frame_data(..., frameData, ...)
   // ...
   var gesture = BanubaSDKBridge.bnb_frame_data_get_gesture(frameData, out var error);
   ```
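A small sketch combining this call with the recognition event from the Overview: it watches for gesture changes and logs them. `BanubaSDKManager`, `FrameData`, and `Utils` are the plugin types used earlier on this page:

```
using UnityEngine;

public class GestureWatcher : MonoBehaviour
{
    private bnb_hand_gesture_t _last;

    private void Awake()
    {
        BanubaSDKManager.instance.onRecognitionResult += OnRecognitionResult;
    }

    private void OnDestroy()
    {
        BanubaSDKManager.instance.onRecognitionResult -= OnRecognitionResult;
    }

    private void OnRecognitionResult(FrameData frameData)
    {
        var gesture = BanubaSDKBridge.bnb_frame_data_get_gesture(frameData, out var error);
        Utils.CheckError(error);

        if (gesture == _last)
            return;
        _last = gesture;
        Debug.Log($"Detected gesture: {gesture}"); // trigger your own reaction here
    }
}
```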
### Example prefab and script

You can find the **HandSkeleton.prefab** example prefab under *Assets* -> *BanubaFaceAR* -> *Hands* -> *Prefabs*. It has an attached HandSkeleton.cs component that contains all the logic needed to enable and use the hand skeleton and gestures.

1. Drop the prefab into the scene. ![image](/far-sdk/assets/images/hand_skelet_unity1-29e93577d533286dc01b73c26e1fb9a1.jpg)
2. Add a render camera reference to the **HandSkeleton.prefab**. ![image](/far-sdk/assets/images/hand_skelet_unity2-8bc208f3d0f8814d15e6c7933d266524.jpg)
3. Hit Play to see how it works!

---

# Using video calls with the Banuba SDK

In our example, the **AgoraRTC SDK** is used for video streaming, but the integration can be based on any video streaming library. You can read more about Agora [here](https://www.agora.io/en/unity/).

[Example of using video calling with the Banuba SDK](https://github.com/Banuba/videocall-unity)

## How To Run

1. Get the client token for the Banuba SDK. Please fill in our form on the banuba.com website, or contact us.
2. Open the project.
3. Download and import the [Agora SDK package from the Unity Asset Store](https://assetstore.unity.com/packages/tools/video/agora-video-sdk-for-unity-134502) with the Unity Package Manager. If it is not available, contact [Agora Support](https://www.agora.io/en/customer-support/).
4. Download and import the [BanubaSDK-import.unitypackage](https://github.com/Banuba/quickstart-unity/releases).
5. Visit [agora.io](https://www.agora.io) to sign up and get a token, as well as an app and channel ID.
6. Find `Assets/Resources/BanubaClientToken.txt` and paste your client token there.
7. Open the scene `VideoCallDemo/demo/MainScene.scene`.
8. Find the VideoCanvas object in the scene and set the AppID, Token, and your channel name in the properties of the DemoVideoCall script. ![image](/far-sdk/assets/images/videocall_example-048b1ac9667bd95430cb05f16581596d.png)
9. Run the project in the Editor.

## How It Works

1. Initialize the `AgoraSDK` in the `Start` method with the methods below:

   Assets/VideoCallDemo/demo/DemoVideoCall.cs

   ```
   loading...
   ```

2. Initialize the **BanubaSDK**. The MainScene.scene contains a [BanubaSDKManager](/far-sdk/tutorials/unity/overview.md#banubasdkmanager) reference from `BanubaSDK-import.unitypackage`
3. Render the camera and any AR effect to a RenderTexture with `BNB.RenderToTexture.cs` ![image](/far-sdk/assets/images/videocall_example_2-4bb607e6eccf828c6fb7ad34571d1d05.png)
4. Send the video frames in the `Update` method (see the sketch after this list)

   Assets/VideoCallDemo/demo/DemoVideoCall.cs

   ```
   loading...
   ```
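A minimal sketch of the Update-side capture in step 4, assuming `_source` is the RenderTexture that `BNB.RenderToTexture.cs` draws into. The actual Agora push call is left as a comment because its exact signature depends on the Agora package version; see `Assets/VideoCallDemo/demo/DemoVideoCall.cs` for the working code:

```
using UnityEngine;

public class FrameSender : MonoBehaviour
{
    [SerializeField] private RenderTexture _source; // the texture BNB.RenderToTexture.cs renders into
    private Texture2D _readback;

    private void Update()
    {
        if (_source == null)
            return;
        if (_readback == null)
            _readback = new Texture2D(_source.width, _source.height, TextureFormat.RGBA32, false);

        // Copy the processed frame from GPU memory back to the CPU.
        var previous = RenderTexture.active;
        RenderTexture.active = _source;
        _readback.ReadPixels(new Rect(0, 0, _source.width, _source.height), 0, 0);
        _readback.Apply();
        RenderTexture.active = previous;

        byte[] rgba = _readback.GetRawTextureData();
        // Hand `rgba` to Agora via its external video frame API here
        // (see Assets/VideoCallDemo/demo/DemoVideoCall.cs).
    }
}
```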
---

# Banuba Face AR SDK

## Introduction

Welcome to the **Banuba Face AR SDK**. This document will help you get started with our SDK and guide you on how to create your project with **Face AR** features. The Banuba SDK allows developers to include AR features in their applications. Our documentation gives you a complete guide on how to use the features described.

How to read this documentation:

1. [**View the samples.**](/far-sdk/tutorials/development/samples.md) The SDK is delivered with examples of feature applications for each platform. They cover a variety of real-life use cases and give a comprehensive overview of how to run and use the SDK.
2. [**Set up your project**](/far-sdk/tutorials/development/basic_integration.md) using our examples.
3. [**Review the features**](/far-sdk/tutorials/capabilities/sdk_features.md) across the supported platforms.
4. Try to create your own effect with [**Banuba Studio**](https://studio.banuba.com/) or buy some from the [**Banuba Asset Store**](https://assetstore.banuba.net/).

## Architecture

The image below shows the components of the **Banuba Face AR SDK**.

![image](/far-sdk/assets/images/introduction_0-e6e520e3eafab1de9d6776fe0c97fc71.svg)

### EffectPlayer

EffectPlayer is a low-level library for playing effects. Its code doesn't depend on any platform-specific APIs or compilers. EffectPlayer is written in **C++** but has bindings to all supported platform-specific languages and runtimes. You need to use **Java** or **Kotlin** on Android, and **Objective-C** or **Swift** on iOS and macOS, to work with this API. Additionally, a **C++** API is available on Windows and macOS.

EffectPlayer features:

* Consumes the camera frames and frame-drawing requests, serving them asynchronously with multithreading where the platform supports it.
* Runs all recognition operations on the input frames.
* Runs and manages all platform-specific modules encapsulated in C++: for example, audio playback, video playback, the accelerometer, the scripting engine, etc.
* Implements all the logic of loading and playing interactive effects (loading from disk, rendering, scripting of effect logic, etc.).

### Platform modules

The functionality of the platform modules depends on the specific platform. Generally, it includes:

* camera features (permission management, configuration, lifecycle implementation)
* the effect's rendering context setup
* video recording
* high-resolution photo taking
* preparing EffectPlayer's resources at the app's launch

The source code of the platform modules is included in the SDK's distribution archive. You can modify the code and adapt the SDK to your use case if its default functionality is not enough.

## More info

* Visit our [getting started](/far-sdk/tutorials/development/basic_integration.md) page for more information about SDK integration and examples of Demo apps.
* Have questions about the Face AR SDK? Visit the [FAQ page](https://www.banuba.com/faq/).
* Can't find an answer? [Contact our support](/far-sdk/support/.md).

---