
Banuba WebAR SDK: Common use-cases

Customizing video source

You can easily configure the built-in Webcam module by passing parameters to its constructor.

Switching from front to back camera

For example, you can use the device's back camera by passing the facingMode parameter:

// ...

// The default facingMode value is "user" which means front camera
// The "environment" value here means back camera
await player.use(new Webcam({ facingMode: "environment" }))

// ...
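If you need to switch cameras at runtime, one approach is to call player.use() again with a new Webcam instance. A minimal sketch, assuming an existing player and the Webcam import from the snippets above (the toggleFacingMode helper is hypothetical, not part of the SDK):

```javascript
// Hypothetical helper: flips between the two standard facingMode values.
function toggleFacingMode(current) {
  return current === "user" ? "environment" : "user";
}

// Usage sketch (browser context, `player` and `Webcam` as above):
//   facing = toggleFacingMode(facing)
//   await player.use(new Webcam({ facingMode: facing }))
console.log(toggleFacingMode("user")); // "environment"
```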

Rendering WebAR video in full screen on mobile

Simply add the following CSS to force the WebAR video to fill the viewport:

<style>
  /* The simplest way to force Banuba WebAR canvas to fill viewport */
  #webar > canvas {
    width: 100vw;
    height: 100vh;
    object-fit: cover;
  }
</style>

<script type="module">
  import { Webcam, Player, Dom } from "https://cdn.jsdelivr.net/npm/@banuba/webar/dist/BanubaSDK.browser.esm.js"

  Player.create({ clientToken: "xxx-xxx-xxx" })
    .then(async (player) => {
      await player.use(new Webcam())
      Dom.render(player, "#webar")
    })
</script>

If you decide to specify an exact width and height for the Webcam, note that on some mobile devices and operating systems the webcam video width and height may be flipped. This is a known platform-specific webcam bug.

To work around it, swap the width and height values:

const desiredWidth = 360
const desiredHeight = 540

await player.use(new Webcam({
  width: desiredHeight,
  height: desiredWidth,
}))

Also, you may want to check out the Video cropping section for more advanced scenarios.

External MediaStream

If the built-in Webcam does not fit your needs, you can use a custom MediaStream with the Player:

import { MediaStream /* ... */ } from "@banuba/webar"

// ...

/* process video from the camera */
const camera = await navigator.mediaDevices.getUserMedia({ audio: true, video: true })
await player.use(new MediaStream(camera))

/* or even from another canvas */
const canvas = $("canvas").captureStream()
await player.use(new MediaStream(canvas))

// ...

See Banuba WebAR SDK MediaStream docs for more details.
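For instance, a prerecorded video file can be fed to the Player through the standard HTMLMediaElement.captureStream(). A sketch, assuming player and the SDK's MediaStream class are in scope as above (useVideoFile is a hypothetical helper, not an SDK API):

```javascript
// Hedged sketch: wrap a <video> element's stream for the Player.
// `MediaStream` here is the Banuba class imported above, not window.MediaStream.
async function useVideoFile(player, MediaStream, src) {
  const video = document.createElement("video");
  video.src = src;
  video.muted = true; // required for programmatic playback in most browsers
  video.loop = true;
  await video.play();
  // HTMLMediaElement.captureStream() yields a window.MediaStream
  await player.use(new MediaStream(video.captureStream()));
  return video;
}
```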

Capturing processed video

You can easily capture the processed video: take screenshots, record videos, or pass the captured video to a WebRTC connection.

Screenshot

import { ImageCapture /* ... */ } from "@banuba/webar"

// ...

const capture = new ImageCapture(player)
const photo = await capture.takePhoto()

// ...

See Banuba WebAR SDK ImageCapture.takePhoto() docs for more details.
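To let the user save the screenshot, the photo can be offered as a download, assuming takePhoto() resolves to a Blob as in the standard ImageCapture API (downloadPhoto is a hypothetical helper; photo comes from the snippet above):

```javascript
// Hedged sketch: trigger a browser download for a captured photo Blob.
function downloadPhoto(photo, fileName = "photo.png") {
  const url = URL.createObjectURL(photo);
  const link = document.createElement("a");
  link.href = url;
  link.download = fileName;
  link.click();
  URL.revokeObjectURL(url); // free the object URL once the download starts
}
```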

Video

import { VideoRecorder /* ... */ } from "@banuba/webar"

// ...

const recorder = new VideoRecorder(player)
recorder.start()
await new Promise((r) => setTimeout(r, 5000)) // wait for 5 sec
const video = await recorder.stop()

// ...

See Banuba WebAR SDK VideoRecorder docs for more details.
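The recording can be previewed in the page the same way, assuming recorder.stop() resolves to a Blob (playRecording is a hypothetical helper; video comes from the snippet above):

```javascript
// Hedged sketch: play a recorded video Blob in a new <video> element.
function playRecording(videoBlob) {
  const el = document.createElement("video");
  el.controls = true;
  el.src = URL.createObjectURL(videoBlob);
  document.body.appendChild(el);
  return el;
}
```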

MediaStream

import { MediaStreamCapture /* ... */ } from "@banuba/webar"

// ...

// the capture is an instance of window.MediaStream
const capture = new MediaStreamCapture(player)

// so it can be used as a video source
$("video").srcObject = capture

// or can be added to a WebRTC peer connection
const connection = new RTCPeerConnection()
connection.addTrack(capture.getVideoTrack())

// ...

See Banuba WebAR SDK MediaStreamCapture docs for more details.

Video cropping

You can adjust video frame dimensions via Webcam constructor parameters:

const webcam = new Webcam({ width: 320, height: 240 })

But this approach is platform-dependent and varies between browsers: some browsers may be unable to produce frames of the requested dimensions and can yield frames of close but different dimensions instead (e.g. 352x288 instead of the requested 320x240).
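You can check what the browser actually delivered with the standard MediaStreamTrack.getSettings(). A sketch using only standard web APIs, not the Banuba SDK (actualVideoSize is a hypothetical helper):

```javascript
// Hedged sketch: query the dimensions the browser actually produced
// for a given constraint request.
async function actualVideoSize(width, height) {
  const stream = await navigator.mediaDevices.getUserMedia({
    video: { width, height },
  });
  const settings = stream.getVideoTracks()[0].getSettings();
  stream.getTracks().forEach((track) => track.stop()); // release the camera
  return { width: settings.width, height: settings.height };
}

// Usage: const { width, height } = await actualVideoSize(320, 240)
// On some platforms this may report e.g. 352x288 instead.
```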

To work around these platform-specific limitations, you can leverage the built-in SDK crop and resize modifiers:

const desiredWidth = 320
const desiredHeight = 240

function resize(frameWidth, frameHeight) {
  const wRatio = desiredWidth / frameWidth
  const hRatio = desiredHeight / frameHeight
  const ratio = Math.max(wRatio, hRatio)

  const resizedWidth = ratio * frameWidth
  const resizedHeight = ratio * frameHeight

  return [resizedWidth, resizedHeight]
}

function crop(renderWidth, renderHeight) {
  const dx = (renderWidth - desiredWidth) / 2
  const dy = (renderHeight - desiredHeight) / 2

  return [dx, dy, desiredWidth, desiredHeight]
}

await player.use(webcam, { resize, crop })

This way you can get the desired frame size regardless of the platform.
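To see the numbers concretely, here is a worked example of the resize/crop math above, using the substitute resolution 352x288 mentioned earlier for a requested 320x240:

```javascript
const desiredWidth = 320
const desiredHeight = 240

function resize(frameWidth, frameHeight) {
  // Scale so the frame covers the desired area in both dimensions
  const ratio = Math.max(desiredWidth / frameWidth, desiredHeight / frameHeight)
  return [ratio * frameWidth, ratio * frameHeight]
}

function crop(renderWidth, renderHeight) {
  // Cut the centered desiredWidth x desiredHeight region out of the resized frame
  const dx = (renderWidth - desiredWidth) / 2
  const dy = (renderHeight - desiredHeight) / 2
  return [dx, dy, desiredWidth, desiredHeight]
}

// A 352x288 frame is scaled by max(320/352, 240/288) = 10/11
const [rw, rh] = resize(352, 288)
console.log(rw.toFixed(1), rh.toFixed(1)) // "320.0 261.8"

// ...then ~11px are trimmed from the top and bottom to reach 320x240
const [dx, dy] = crop(rw, rh)
console.log(dy.toFixed(1)) // "10.9"
```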

See Player.use() and FramingOptions docs for more details.

Postprocessing

It's possible to postprocess the video that has already been processed by the WebAR SDK.

The following code snippet illustrates the idea:

import { MediaStreamCapture /* ... */ } from "@banuba/webar"

// ...

const capture = document.createElement("video")
capture.autoplay = true
capture.srcObject = new MediaStreamCapture(player)

const canvas = document.getElementById("postprocessed")
const ctx = canvas.getContext("2d")
const fontSize = 48 * window.devicePixelRatio

function postprocess() {
  canvas.width = capture.videoWidth
  canvas.height = capture.videoHeight

  ctx.drawImage(capture, 0, 0)

  ctx.font = `${fontSize}px serif`
  ctx.fillStyle = "red"
  ctx.fillText("A Watermark", 0.5 * fontSize, 1.25 * fontSize)
}

;(function loop() {
  postprocess()
  requestAnimationFrame(loop)
})()

See Capturing processed video > MediaStream for details.
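If you also need to record or stream the watermarked result, the postprocessed canvas can itself be captured with the standard HTMLCanvasElement.captureStream(). A sketch, assuming the canvas from the snippet above (capturePostprocessed is a hypothetical helper):

```javascript
// Hedged sketch: expose the postprocessed canvas as a new MediaStream,
// e.g. for a MediaRecorder or an RTCPeerConnection.
function capturePostprocessed(canvas, fps = 30) {
  return canvas.captureStream(fps);
}
```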