---
id: frame-processors
title: Frame Processors
sidebar_label: Frame Processors
---
import useBaseUrl from '@docusaurus/useBaseUrl';
<div>
<svg xmlns="http://www.w3.org/2000/svg" width="283" height="535" style={{ float: 'right' }}>
<image href={useBaseUrl("img/frame-processors.gif")} x="18" y="33" width="247" height="469" />
<image href={useBaseUrl("img/frame.png")} width="283" height="535" />
</svg>
</div>
### What are frame processors?
Frame Processors are functions written in JavaScript (or TypeScript) that can be used to **process the frames the camera "sees"**.
Inside those functions you can call **Frame Processor Plugins**, which are high performance native functions specifically designed for certain use-cases.
For example, you might want to create a QR code scanner **without writing any native code**, while still **achieving native performance**:
```jsx
function App() {
  const frameProcessor = useFrameProcessor((frame) => {
    'worklet'
    const qrCodes = scanQRCodes(frame)
    console.log(`Detected QR Codes: ${qrCodes}`)
  }, [])

  return (
    <Camera
      {...cameraProps}
      frameProcessor={frameProcessor}
    />
  )
}
```
Frame Processors are by no means limited to QR code detection. Other examples include:
* **AI** for **facial recognition**
* **AI** for **object detection**
* Using **TensorFlow**, **MLKit Vision** or other libraries
* Creating **realtime video-chats** using **WebRTC** to directly send the camera frames over the network
* Creating scanners for QR codes, Barcodes or even custom codes such as Snapchat's SnapCodes or Apple's AppClips
* Creating **Snapchat-like filters**, e.g. drawing a dog-mask filter over the user's face
* Creating **color filters** with depth-detection
:::note
Frame Processors require [**react-native-reanimated**](https://github.com/software-mansion/react-native-reanimated) 2.2.0 or higher.
:::
### Interacting with Frame Processors
Since Frame Processors run in Worklets, you can also easily read from, and assign to [**Shared Values**](https://docs.swmansion.com/react-native-reanimated/docs/shared-values):
```tsx {6}
// represents the position of the cat on the screen 🐈
const catBounds = useSharedValue({ top: 0, left: 0, right: 0, bottom: 0 })

const frameProcessor = useFrameProcessor((frame) => {
  'worklet'
  catBounds.value = scanFrameForCat(frame)
}, [catBounds])

const boxOverlayStyle = useAnimatedStyle(() => ({
  position: 'absolute',
  borderWidth: 1,
  borderColor: 'red',
  ...catBounds.value
}), [catBounds])

return (
  <View>
    <Camera {...cameraProps} frameProcessor={frameProcessor} />
    {/* draws a red rectangle on the screen which surrounds the cat */}
    <Reanimated.View style={boxOverlayStyle} />
  </View>
)
```
And you can also call back to the React-JS thread using [`runOnJS`](https://docs.swmansion.com/react-native-reanimated/docs/api/runOnJS):
```tsx {9}
const onQRCodeDetected = useCallback((qrCode: string) => {
  navigation.push("ProductPage", { productId: qrCode })
}, [])

const frameProcessor = useFrameProcessor((frame) => {
  'worklet'
  const qrCodes = scanQRCodes(frame)
  if (qrCodes.length > 0) {
    runOnJS(onQRCodeDetected)(qrCodes[0])
  }
}, [onQRCodeDetected])
```
### Technical
* **Frame Processors** are JS functions that will be **workletized** using [react-native-reanimated](https://github.com/software-mansion/react-native-reanimated). They are created on a **custom camera thread** using a separate JavaScript Runtime (_"VisionCamera JS-Runtime"_) and are **invoked synchronously** (using JSI) without ever going over the bridge.
* **Frame Processor Plugins** are native functions (written in Objective-C, Swift, C++, Java or Kotlin) that are injected into the VisionCamera JS-Runtime. They can be **synchronously called** from your JS Frame Processors (using JSI) without ever going over the bridge.
> Learn how to [**create Frame Processor Plugins**](frame-processors-plugins-overview)
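To give a rough idea of what calling a plugin looks like from the JS side, here is a minimal sketch of a plugin's JavaScript wrapper, modeled on the QR code plugin used throughout this guide (the global `__scanQRCodes` is the example name registered in the Babel config below; the wrapper a real plugin ships may differ):
```ts
import type { Frame } from 'react-native-vision-camera'

/**
 * Scans QR codes in the given Frame.
 * `__scanQRCodes` is the native Frame Processor Plugin which was injected
 * into the VisionCamera JS-Runtime, so it can be called synchronously via JSI.
 */
export function scanQRCodes(frame: Frame): string[] {
  'worklet'
  // @ts-expect-error __scanQRCodes is only available inside the VisionCamera JS-Runtime
  return __scanQRCodes(frame)
}
```
Because the wrapper is itself marked as a worklet, you can call `scanQRCodes(frame)` directly inside your Frame Processors, exactly like in the examples above.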
### Using Frame Processor Plugins
Frame Processor Plugins are distributed through npm. To install the [**vision-camera-qrcode-scanner**](https://github.com/mrousavy/vision-camera-qrcode-scanner) plugin, run:
```bash
npm i vision-camera-qrcode-scanner
cd ios && pod install
```
Then add the plugin's global function name to the Reanimated plugin's `globals` list in your `babel.config.js`. For the QR code scanner, that name is `__scanQRCodes`:
```js {6}
module.exports = {
  plugins: [
    [
      'react-native-reanimated/plugin',
      {
        globals: ['__scanQRCodes'],
      },
    ],
  ],
}
```
:::note
You have to restart the Metro bundler for changes in the `babel.config.js` file to take effect.
:::
That's it! 🎉 Now you can use it:
```ts
import { scanQRCodes } from 'vision-camera-qrcode-scanner'

const frameProcessor = useFrameProcessor((frame: Frame) => {
  'worklet'
  const codes = scanQRCodes(frame)
  // ...
}, [])
```
Check out [**Frame Processor community plugins**](/docs/guides/frame-processor-plugin-list) to discover plugins!
### Benchmarks
I have used [MLKit Vision Image Labeling](https://firebase.google.com/docs/ml-kit/ios/label-images) to label 4k Camera frames in realtime.
* Fully natively (written in pure Objective-C, no React interaction at all), I have measured an average of **68ms** per call.
* As a Frame Processor Plugin (written in Objective-C, called through a JS Frame Processor function), I have measured an average of **69ms** per call, meaning **the Frame Processor API only takes ~1ms longer than a fully native implementation**, making it **the fastest way to run any sort of Frame Processing in React Native**.
> All measurements are recorded on an iPhone 11 Pro, benchmarked total execution time of the [`captureOutput`](https://developer.apple.com/documentation/avfoundation/avcapturevideodataoutputsamplebufferdelegate/1385775-captureoutput) function by using [`CFAbsoluteTimeGetCurrent`](https://developer.apple.com/documentation/corefoundation/1543542-cfabsolutetimegetcurrent). Running smaller images (lower than 4k resolution) is much quicker and many algorithms can even run at 60 FPS.
### Performance
Frame Processors will be **synchronously** called for each frame the Camera sees and have to finish executing before the next frame arrives, otherwise the next frame(s) will be dropped. For a frame rate of **30 FPS**, you have about **33ms** to finish processing frames. Use [`frameProcessorFps`](../api/interfaces/cameraprops.cameraprops-1#frameprocessorfps) to throttle the frame processor's FPS. For a QR Code Scanner, **5 FPS** might suffice.
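For example, here is a sketch of throttling the QR code scanner from above to 5 FPS (`cameraProps` again stands in for your other Camera props):
```tsx
return (
  <Camera
    {...cameraProps}
    frameProcessor={frameProcessor}
    // run the Frame Processor at most 5 times per second
    frameProcessorFps={5}
  />
)
```
The Camera keeps streaming at its full frame rate; only the Frame Processor is invoked less often.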
### ESLint react-hooks plugin
If you are using the [react-hooks ESLint plugin](https://www.npmjs.com/package/eslint-plugin-react-hooks), make sure to add `useFrameProcessor` to `additionalHooks` inside your ESLint config. (See ["advanced configuration"](https://www.npmjs.com/package/eslint-plugin-react-hooks#advanced-configuration))
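A sketch of what that looks like in an `.eslintrc.js`, assuming you use the `exhaustive-deps` rule:
```js
module.exports = {
  rules: {
    'react-hooks/exhaustive-deps': ['warn', {
      // also validate the dependency list passed to useFrameProcessor
      additionalHooks: '(useFrameProcessor)',
    }],
  },
}
```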
### Disabling Frame Processors
The Frame Processor API spawns a secondary JavaScript Runtime which consumes a small amount of extra CPU and RAM. If you're not using Frame Processors at all, you can disable them by setting the `VISION_CAMERA_DISABLE_FRAME_PROCESSORS` flag. Inside your `project.pbxproj`, find the `GCC_PREPROCESSOR_DEFINITIONS` parameter and add the flag:
```txt {3}
GCC_PREPROCESSOR_DEFINITIONS = (
  "DEBUG=1",
  "VISION_CAMERA_DISABLE_FRAME_PROCESSORS=1",
  "$(inherited)",
);
```
Make sure to add this flag to both your Debug and your Release configuration.
<br />
#### 🚀 Next section: [Zooming with Reanimated](/docs/guides/animated) (or [creating a Frame Processor Plugin](/docs/guides/frame-processors-plugins-overview))