docs: New V3 docs for new API (#1842)
* docs: New V3 docs for new API
* fix: Prefer Wide-Angle unless explicitly opted-out
* docs: Update DEVICES
* Finish Devices docs
* Switch links
* Revert "Switch links" (this reverts commit 06f196ae0e67787cbd5768e125be6d0a3cb5bbc9)
* docs: New LIFECYCLE
* docs: New CAPTURING docs
* Update Worklets links
* docs: Update TROUBLESHOOTING and ZOOMING
* fix: Update `getAvailableCameraDevices()` usages
* docs: Update FORMATS
* Update Errors.kt
* docs: Fix broken links
* docs: Update references to old hooks
* docs: Create Frame Processor Tips
* docs: Auto-dark mode
* fix: Fix FPS filter
* feat: Add `'max'` flag to format filter
* fix: Use loop
* fix: Fix bug in `getCameraFormat`
* fix: Find best aspect ratio as well
* fix: Switch between formats on FPS change
* Update FRAME_PROCESSOR_PLUGIN_LIST.mdx
* Add FPS graph explanation
* feat: Support HDR filter
* docs: Add HDR docs
* docs: Add Video Stabilization
* docs: Update Skia docs
* Skia links
* Add Skia labels
* Update SKIA_FRAME_PROCESSORS.mdx
* docs: Add Performance
* Update some wording
* Update headers / and zoom
* Add examples for devices
* fix highlights
* fix: Expose `Frame`
* docs: Update FP docs
* Update links
* Update FRAME_PROCESSOR_CREATE_PLUGIN_IOS.mdx
Parent: 9dd91a4001
Commit: 2d66d5893c
@@ -50,10 +50,9 @@ You're looking at the V3 version of VisionCamera, which features a full rewrite
 ```tsx
 function App() {
-  const devices = useCameraDevices('wide-angle-camera')
-  const device = devices.back
+  const device = useCameraDevice('back')

-  if (device == null) return <LoadingView />
+  if (device == null) return <NoCameraErrorView />
   return (
     <Camera
       style={StyleSheet.absoluteFill}
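For context, a complete minimal version of this Getting Started snippet after the change might look like the following sketch. `NoCameraErrorView` is a placeholder component (not a VisionCamera export), and `isActive` is assumed from the surrounding docs:

```tsx
import React from 'react'
import { StyleSheet, Text, View } from 'react-native'
import { Camera, useCameraDevice } from 'react-native-vision-camera'

// Hypothetical placeholder shown when no camera device is available
function NoCameraErrorView() {
  return (
    <View style={StyleSheet.absoluteFill}>
      <Text>No camera device found!</Text>
    </View>
  )
}

function App() {
  const device = useCameraDevice('back')

  if (device == null) return <NoCameraErrorView />
  return (
    <Camera
      style={StyleSheet.absoluteFill}
      device={device}
      isActive={true}
    />
  )
}
```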
@@ -13,9 +13,9 @@ import useBaseUrl from '@docusaurus/useBaseUrl';
 </svg>
 </div>

-## Camera Actions
+## Camera Functions

-The Camera provides certain actions using member functions which are available by using a [ref object](https://reactjs.org/docs/refs-and-the-dom.html):
+The Camera provides certain functions which are available through a [ref object](https://reactjs.org/docs/refs-and-the-dom.html):

 ```tsx
 function App() {
@@ -31,17 +31,17 @@ function App() {
 }
 ```

-The most important actions are:
+To use these functions, you need to wait until the [`onInitialized`](/docs/api/interfaces/CameraProps#oninitialized) event has been fired.

-* [Taking Photos](#taking-photos)
-* [Recording Videos](#recording-videos)
-
-## Taking Photos
+### Taking Photos

 To take a photo you first have to enable photo capture:

 ```tsx
-<Camera {...props} photo={true} />
+<Camera
+  {...props}
+  photo={true}
+/>
 ```

 Then, simply use the Camera's [`takePhoto(...)`](/docs/api/classes/Camera#takephoto) function:
@@ -52,11 +52,11 @@ const photo = await camera.current.takePhoto({
 })
 ```

-You can customize capture options such as [automatic red-eye reduction](/docs/api/interfaces/TakePhotoOptions#enableautoredeyereduction), [automatic image stabilization](/docs/api/interfaces/TakePhotoOptions#enableautostabilization), [combining images from constituent physical camera devices](/docs/api/interfaces/TakePhotoOptions#enablevirtualdevicefusion) to create a single high quality fused image, [enable flash](/docs/api/interfaces/TakePhotoOptions#flash), [prioritize speed over quality](/docs/api/interfaces/TakePhotoOptions#qualityprioritization) and more using the `options` parameter. (See [`TakePhotoOptions`](/docs/api/interfaces/TakePhotoOptions))
+You can customize capture options such as [automatic red-eye reduction](/docs/api/interfaces/TakePhotoOptions#enableautoredeyereduction), [automatic image stabilization](/docs/api/interfaces/TakePhotoOptions#enableautostabilization), [flash](/docs/api/interfaces/TakePhotoOptions#flash), [speed/quality prioritization](/docs/api/interfaces/TakePhotoOptions#qualityprioritization), [the shutter sound](/docs/api/interfaces/TakePhotoOptions#enableshuttersound) and more using the [`TakePhotoOptions`](/docs/api/interfaces/TakePhotoOptions) parameter.

-This function returns a [`PhotoFile`](/docs/api/interfaces/PhotoFile) which contains a [`path`](/docs/api/interfaces/PhotoFile#path) property you can display in your App using an `<Image>` or `<FastImage>`.
+This function returns a [`PhotoFile`](/docs/api/interfaces/PhotoFile) which is stored in a temporary directory and can either be displayed using `<Image>` or `<FastImage>`, uploaded to a backend, or saved to the Camera Roll using [react-native-cameraroll](https://github.com/react-native-cameraroll/react-native-cameraroll).
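Putting these pieces together, a minimal sketch of taking a photo and displaying it (the `camera` ref setup is the standard React ref pattern elided from the diff context):

```tsx
import React, { useRef, useState } from 'react'
import { Button, Image, StyleSheet } from 'react-native'
import { Camera, useCameraDevice } from 'react-native-vision-camera'

function PhotoExample() {
  const camera = useRef<Camera>(null)
  const device = useCameraDevice('back')
  const [photoPath, setPhotoPath] = useState<string>()

  const onTakePhoto = async () => {
    // takePhoto resolves with a PhotoFile stored in a temporary directory
    const photo = await camera.current?.takePhoto({ flash: 'on' })
    if (photo != null) setPhotoPath(photo.path)
  }

  if (device == null) return null
  return (
    <>
      <Camera ref={camera} style={StyleSheet.absoluteFill} device={device} isActive={true} photo={true} />
      <Button title="Take Photo" onPress={onTakePhoto} />
      {photoPath != null && <Image source={{ uri: `file://${photoPath}` }} style={styles.preview} />}
    </>
  )
}

const styles = StyleSheet.create({ preview: { width: 100, height: 100 } })
```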

-## Recording Videos
+### Recording Videos

 To start a video recording you first have to enable video capture:

@@ -78,6 +78,8 @@ camera.current.startRecording({
 })
 ```

+You can customize capture options such as the [video codec](/docs/api/interfaces/RecordVideoOptions#videoCodec), [file type](/docs/api/interfaces/RecordVideoOptions#fileType), [flash](/docs/api/interfaces/RecordVideoOptions#flash) and more using the [`RecordVideoOptions`](/docs/api/interfaces/RecordVideoOptions) parameter.
+
 For any error that occurred _while recording the video_, the `onRecordingError` callback will be invoked with a [`CaptureError`](/docs/api/classes/CameraCaptureError) and the recording is therefore cancelled.

 To stop the video recording, you can call [`stopRecording(...)`](/docs/api/classes/Camera#stoprecording):
@@ -86,7 +88,7 @@ To stop the video recording, you can call [`stopRecording(...)`](/docs/api/classes/Camera#stoprecording):
 await camera.current.stopRecording()
 ```

-Once a recording has been stopped, the `onRecordingFinished` callback passed to the `startRecording` function will be invoked with a [`VideoFile`](/docs/api/interfaces/VideoFile) which you can then use to display in a [`<Video>`](https://github.com/react-native-video/react-native-video) component.
+Once a recording has been stopped, the `onRecordingFinished` callback passed to the `startRecording(..)` function will be invoked with a [`VideoFile`](/docs/api/interfaces/VideoFile) which you can then display in a [`<Video>`](https://github.com/react-native-video/react-native-video) component, upload to a backend, or save to the Camera Roll using [react-native-cameraroll](https://github.com/react-native-cameraroll/react-native-cameraroll).

 To pause/resume the recordings, you can use `pauseRecording()` and `resumeRecording()`:

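Putting the recording pieces together, a minimal sketch using the callbacks documented above:

```ts
// Sketch: start a recording with result/error callbacks, stop it later
camera.current.startRecording({
  flash: 'on',
  onRecordingFinished: (video) => console.log(`Saved video to ${video.path}`),
  onRecordingError: (error) => console.error(error),
})

// ...later, e.g. when the user presses a "stop" button:
await camera.current.stopRecording()
```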
@@ -4,6 +4,8 @@ title: Camera Devices
 sidebar_label: Camera Devices
 ---

+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
 import useBaseUrl from '@docusaurus/useBaseUrl';

 <div>
@@ -13,94 +15,221 @@ import useBaseUrl from '@docusaurus/useBaseUrl';
 </svg>
 </div>

-### What are camera devices?
+## What are Camera Devices?

-Camera devices are the physical (or "virtual") devices that can be used to record videos or capture photos.
+Camera Devices are the physical (or "virtual") devices that can be used to record videos or capture photos.

-* **Physical**: A physical camera device is a **camera lens on your phone**. Different physical camera devices have different specifications, such as different capture formats, field of views, frame rates, focal lengths, and more. Some phones have multiple physical camera devices.
+* **Physical**: A physical Camera Device is a **camera lens on your phone**. Different physical Camera Devices have different specifications, such as different capture formats, resolutions, zoom levels, and more. Some phones have multiple physical Camera Devices.

-> Examples: _"Backside Wide-Angle Camera"_, _"Frontside Wide-Angle Camera (FaceTime HD)"_, _"Ultra-Wide-Angle back camera"_.
+> Examples: _"Backside Wide-Angle Camera"_, _"Frontside Wide-Angle Camera (FaceTime HD)"_, _"Ultra-Wide-Angle back camera"_
-* **Virtual**: A virtual camera device is a **combination of one or more physical camera devices**, and provides features such as _virtual-device-switchover_ while zooming (see video on the right) or _combined photo delivery_ from all physical cameras to produce higher quality images.
+* **Virtual**: A virtual camera device is a **combination of one or more physical camera devices**, and provides features such as _virtual-device-switchover_ while zooming or _combined photo delivery_ from all physical cameras to produce higher quality images.

 > Examples: _"Triple-Camera"_, _"Dual-Wide-Angle Camera"_

-### Get available camera devices
+## Select the default Camera

-To get a list of all available camera devices, use [the `getAvailableCameraDevices` function](/docs/api/classes/Camera#getavailablecameradevices):
+If you simply want to use the default [`CameraDevice`](/docs/api/interfaces/CameraDevice), you can just use whatever is available:

+<Tabs
+  groupId="component-style"
+  defaultValue="hooks"
+  values={[
+    {label: 'Hooks API', value: 'hooks'},
+    {label: 'Imperative API', value: 'imperative'}
+  ]}>
+<TabItem value="hooks">

 ```ts
-const devices = await Camera.getAvailableCameraDevices()
+const device = useCameraDevice('back')
 ```

+</TabItem>
+<TabItem value="imperative">
+
+```ts
+const devices = Camera.getAvailableCameraDevices()
+const device = devices.find((d) => d.position === 'back')
+```
+
+</TabItem>
+</Tabs>
+
+And VisionCamera will automatically find the best matching [`CameraDevice`](/docs/api/interfaces/CameraDevice) for you!
+
+**🚀 Continue with: [Camera Lifecycle](lifecycle)**
+
-Each camera device provides properties describing the features of this device. For example, a camera device provides the `hasFlash` property which is `true` if the device supports activating the flash when taking photos or recording videos.
-
-The most important properties are:
-
-* `devices`: A list of physical device types this camera device consists of. For a **single physical camera device**, this property is always an array of one element. **For virtual multi-cameras** this property contains all the physical camera devices that are combined to create this virtual multi-camera device
-* `position`: The position of the camera device relative to the phone (`front`, `back`)
-* `hasFlash`: Whether this camera device supports using the flash to take photos or record videos
-* `hasTorch`: Whether this camera device supports enabling/disabling the torch at any time ([`Camera.torch` prop](/docs/api/interfaces/CameraProps#torch))
-* `isMultiCam`: Determines whether the camera device is a virtual multi-camera device which contains multiple combined physical camera devices.
-* `minZoom`: The minimum available zoom factor. This value is often `1`. When you pass `zoom={0}` to the Camera, the `minZoom` factor will be applied.
-* `neutralZoom`: The zoom factor where the camera is "neutral". For any wide-angle cameras this property might be the same as `minZoom`, whereas for ultra-wide-angle cameras ("fish-eye") this might be a value higher than `minZoom` (e.g. `2`). It is recommended that you always start at `neutralZoom` and let the user manually zoom out to `minZoom` on demand.
-* `maxZoom`: The maximum available zoom factor. When you pass `zoom={1}` to the Camera, the `maxZoom` factor will be applied.
-* `formats`: A list of all available formats (See [Camera Formats](formats))
-* `supportsFocus`: Determines whether this camera device supports focusing (See [Focusing](focusing))
-
-:::note
-See the [`CameraDevice` type](/docs/api/interfaces/CameraDevice) for full API reference
-:::
-
-For debugging purposes you can use the `id` or `name` properties to log and compare devices. You can also use the `devices` properties to determine the physical camera devices this camera device consists of, for example:
-
-* For a single Wide-Angle camera, this would be `["wide-angle-camera"]`
-* For a Triple-Camera, this would be `["wide-angle-camera", "ultra-wide-angle-camera", "telephoto-camera"]`
-
-Always choose a camera device that is best fitted for your use-case; so you might filter out any cameras that do not support flash, have low zoom values, are not on the back side of the phone, do not contain a format with high resolution or fps, and more.
-
-:::caution
-Make sure to be careful when filtering out unneeded camera devices, since not every phone supports all camera device types. Some phones don't even have front-cameras. You always want to have a camera device, even when it's not the one that has the best features.
-:::
-
-### The `useCameraDevices` hook
-
-VisionCamera provides a hook to make camera device selection a lot easier. You can specify a device type to only find devices with the given type:
-
-```tsx
-function App() {
-  const devices = useCameraDevices('wide-angle-camera')
-  const device = devices.back
-
-  if (device == null) return <LoadingView />
-  return (
-    <Camera
-      style={StyleSheet.absoluteFill}
-      device={device}
-    />
-  )
-}
-```
-
-Or just return the "best matching camera device". This function prefers camera devices with more physical cameras, and always ranks "wide-angle" physical camera devices first.
-
-> Example: `triple-camera` > `dual-wide-camera` > `dual-camera` > `wide-angle-camera` > `ultra-wide-angle-camera` > `telephoto-camera` > ...
-
-```tsx
-function App() {
-  const devices = useCameraDevices()
-  const device = devices.back
-
-  if (device == null) return <LoadingView />
-  return (
-    <Camera
-      style={StyleSheet.absoluteFill}
-      device={device}
-    />
-  )
-}
-```
+## Custom Device Selection
+
+For advanced use-cases, you might want to select a different [`CameraDevice`](/docs/api/interfaces/CameraDevice) for your app.
+
+A [`CameraDevice`](/docs/api/interfaces/CameraDevice) consists of the following specifications:
+
+- [`id`](/docs/api/interfaces/CameraDevice#id): A unique ID used to identify this Camera Device
+- [`position`](/docs/api/interfaces/CameraDevice#position): The position of this Camera Device relative to the phone
+  - `back`: The Camera Device is located on the back of the phone
+  - `front`: The Camera Device is located on the front of the phone
+  - `external`: The Camera Device is an external device. These devices can be either:
+    - USB Camera Devices (if they support the [USB Video Class (UVC) Specification](https://en.wikipedia.org/wiki/List_of_USB_video_class_devices))
+    - [Continuity Camera Devices](https://support.apple.com/en-us/HT213244) (e.g. your iPhone's or Mac's Camera connected through WiFi/Continuity)
+    - Bluetooth/WiFi Camera Devices (if they are supported in the platform-native Camera APIs)
+- [`physicalDevices`](/docs/api/interfaces/CameraDevice#physicalDevices): The physical Camera Devices (lenses) this Camera Device consists of. This can either be one of these values ("physical" device) or any combination of these values ("virtual" device):
+  - `ultra-wide-angle-camera`: The "fish-eye" camera for 0.5x zoom
+  - `wide-angle-camera`: The "default" camera for 1x zoom
+  - `telephoto-camera`: A zoomed-in camera for 3x zoom
+- [`sensorOrientation`](/docs/api/interfaces/CameraDevice#sensorOrientation): The orientation of the Camera sensor/lens relative to the phone. Cameras are usually in `landscapeLeft` orientation, meaning they are rotated by 90°. This includes their resolutions, so a 4k format might be 3840x2160, not 2160x3840
+- [`minZoom`](/docs/api/interfaces/CameraDevice#minZoom): The minimum possible zoom factor for this Camera Device. If this is a multi-cam, this is the point where the device with the widest field of view is used (e.g. ultra-wide)
+- [`maxZoom`](/docs/api/interfaces/CameraDevice#maxZoom): The maximum possible zoom factor for this Camera Device. If this is a multi-cam, this is the point where the device with the narrowest field of view is used (e.g. telephoto)
+- [`neutralZoom`](/docs/api/interfaces/CameraDevice#neutralZoom): A value between `minZoom` and `maxZoom` where the "default" Camera Device is used (e.g. wide-angle). When using multi-cams, make sure to start off at this zoom level, so the user can optionally zoom out to the ultra-wide-angle Camera instead of already starting zoomed out
+- [`formats`](/docs/api/interfaces/CameraDevice#formats): The list of [`CameraDeviceFormat`s](/docs/api/interfaces/CameraDeviceFormat) (see ["Camera Formats"](/docs/guides/formats)) this Camera Device supports. A format specifies:
+  - Video Resolution (see ["Formats: Video Resolution"](/docs/guides/formats#video-resolution))
+  - Photo Resolution (see ["Formats: Photo Resolution"](/docs/guides/formats#photo-resolution))
+  - FPS (see ["Formats: FPS"](/docs/guides/formats#fps))
+  - Video Stabilization Mode (see ["Formats: Video Stabilization Mode"](/docs/guides/formats#videoStabilization))
+  - Pixel Format (see ["Formats: Pixel Format"](/docs/guides/formats#pixelFormat))

+### Examples on an iPhone
+
+Here's a list of some Camera Devices an iPhone 13 Pro has:
+
+- Back Wide Angle Camera (`['wide-angle-camera']`)
+- Back Ultra-Wide Angle Camera (`['ultra-wide-angle-camera']`)
+- Back Telephoto Camera (`['telephoto-camera']`)
+- Back Dual Camera (Wide + Telephoto)
+- Back Dual-Wide Camera (Ultra-Wide + Wide)
+- Back Triple Camera (Ultra-Wide + Wide + Telephoto)
+- Back LiDAR Camera (Wide + LiDAR-Depth)
+- Front Wide Angle (`['wide-angle-camera']`)
+- Front True-Depth (Wide + Depth)
+
+### Selecting Multi-Cams
+
+Multi-Cams are virtual Camera Devices that consist of more than one physical Camera Device. For example:
+
+- ultra-wide + wide + telephoto = "Triple-Camera"
+- ultra-wide + wide = "Dual-Wide-Camera"
+- wide + telephoto = "Dual-Camera"
+
+Benefits of Multi-Cams:
+
+- Multi-Cams can smoothly switch between the physical Camera Devices (lenses) while zooming.
+- Multi-Cams can capture Frames from all physical Camera Devices at the same time and fuse them together to create higher-quality Photos.
+
+Downsides of Multi-Cams:
+
+- The Camera takes longer to initialize and uses more resources
+
+To use the "Triple-Camera" in your app, you can just search for a device that contains all three physical Camera Devices:

+<Tabs
+  groupId="component-style"
+  defaultValue="hooks"
+  values={[
+    {label: 'Hooks API', value: 'hooks'},
+    {label: 'Imperative API', value: 'imperative'}
+  ]}>
+<TabItem value="hooks">
+
+```ts
+const device = useCameraDevice('back', {
+  physicalDevices: [
+    'ultra-wide-angle-camera',
+    'wide-angle-camera',
+    'telephoto-camera'
+  ]
+})
+```
+
+</TabItem>
+<TabItem value="imperative">
+
+```ts
+const devices = Camera.getAvailableCameraDevices()
+const device = getCameraDevice(devices, 'back', {
+  physicalDevices: [
+    'ultra-wide-angle-camera',
+    'wide-angle-camera',
+    'telephoto-camera'
+  ]
+})
+```
+
+</TabItem>
+</Tabs>
+
+This will try to find a [`CameraDevice`](/docs/api/interfaces/CameraDevice) that consists of all three physical Camera Devices, or the next best match (e.g. "Dual-Camera", or just a single wide-angle-camera) if not found. With the "Triple-Camera", we can now zoom out to a wider field of view:
+
+<div align="center">
+  <img src="/img/multi-camera.gif" width="55%" />
+</div>

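Following the `neutralZoom` guidance from the spec list above, a sketch of starting the zoom at the "default" lens so users can still zoom out to the ultra-wide camera (the `zoom` prop is from `CameraProps`):

```tsx
import React, { useState } from 'react'
import { StyleSheet } from 'react-native'
import { Camera, useCameraDevice } from 'react-native-vision-camera'

function ZoomableCamera() {
  const device = useCameraDevice('back', {
    physicalDevices: ['ultra-wide-angle-camera', 'wide-angle-camera', 'telephoto-camera']
  })
  // neutralZoom maps to the "default" wide-angle lens (1x); wire setZoom to a pinch gesture
  const [zoom, setZoom] = useState(device?.neutralZoom ?? 1)

  if (device == null) return null
  return (
    <Camera
      style={StyleSheet.absoluteFill}
      device={device}
      zoom={zoom}
      isActive={true}
    />
  )
}
```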
+If you want to do the filtering/sorting fully yourself, you can also just get all devices, then implement your own filter:
+
+<Tabs
+  groupId="component-style"
+  defaultValue="hooks"
+  values={[
+    {label: 'Hooks API', value: 'hooks'},
+    {label: 'Imperative API', value: 'imperative'}
+  ]}>
+<TabItem value="hooks">
+
+```ts
+const devices = useCameraDevices()
+const device = useMemo(() => findBestDevice(devices), [devices])
+```
+
+</TabItem>
+<TabItem value="imperative">
+
+```ts
+const devices = Camera.getAvailableCameraDevices()
+const device = findBestDevice(devices)
+```
+
+</TabItem>
+</Tabs>

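`findBestDevice` is left to you by the docs; one possible sketch, preferring back-facing devices with the most physical lenses (mirroring the multi-cam ranking described above):

```ts
import type { CameraDevice } from 'react-native-vision-camera'

// Hypothetical filter: prefer back-facing devices with the most physical lenses
function findBestDevice(devices: CameraDevice[]): CameraDevice | undefined {
  return devices
    .filter((d) => d.position === 'back')
    .sort((a, b) => b.physicalDevices.length - a.physicalDevices.length)[0]
}
```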
+### Selecting external Cameras
+
+VisionCamera supports using `external` Camera Devices, such as:
+
+- USB Camera Devices (if they support the [USB Video Class (UVC) Specification](https://en.wikipedia.org/wiki/List_of_USB_video_class_devices))
+- [Continuity Camera Devices](https://support.apple.com/en-us/HT213244) (e.g. your iPhone's or Mac's Camera connected through WiFi/Continuity)
+- Bluetooth/WiFi Camera Devices (if they are supported in the platform-native Camera APIs)
+
+Since `external` Camera Devices can be plugged in/out at any point, you need to make sure to listen for changes in the Camera Devices list when using `external` Cameras:
+
+<Tabs
+  groupId="component-style"
+  defaultValue="hooks"
+  values={[
+    {label: 'Hooks API', value: 'hooks'},
+    {label: 'Imperative API', value: 'imperative'}
+  ]}>
+<TabItem value="hooks">
+
+The hooks (`useCameraDevice(..)` and `useCameraDevices()`) already automatically listen for Camera Device changes!
+
+```ts
+const usbCamera = useCameraDevice('external')
+```
+
+</TabItem>
+<TabItem value="imperative">
+
+Add a listener by using the [`addCameraDevicesChangedListener(..)`](/docs/api/classes/Camera#addcameradeviceschangedlistener) API:
+
+```ts
+const listener = Camera.addCameraDevicesChangedListener((devices) => {
+  console.log(`Devices changed: ${devices}`)
+  this.usbCamera = devices.find((d) => d.position === "external")
+})
+// ...
+listener.remove()
+```
+
+</TabItem>
+</Tabs>

 <br />

 #### 🚀 Next section: [Camera Lifecycle](lifecycle)
@@ -12,7 +12,7 @@ import useBaseUrl from '@docusaurus/useBaseUrl';

 ## Why?

-Since the Camera library is quite big, there is a lot that can "go wrong". The VisionCamera library provides thoroughly typed errors to help you quickly identify the cause and fix the problem.
+Since the Camera library is quite big, there is a lot that can "go wrong". VisionCamera provides thoroughly typed errors to help you quickly identify the cause and fix the problem.

 ```ts
 switch (error.code) {
@@ -10,9 +10,9 @@ To focus the camera to a specific point, simply use the Camera's [`focus(...)`](/docs/api/classes/Camera#focus) function:
 await camera.current.focus({ x: tapEvent.x, y: tapEvent.y })
 ```

-The focus function expects a [`Point`](/docs/api/interfaces/Point) parameter which represents the location relative to the Camera view where you want to focus the Camera to (in _points_). If you use [react-native-gesture-handler](https://docs.swmansion.com/react-native-gesture-handler/), this will consist of the [`x`](https://docs.swmansion.com/react-native-gesture-handler/docs/api/gesture-handlers/tap-gh#x) and [`y`](https://docs.swmansion.com/react-native-gesture-handler/docs/api/gesture-handlers/tap-gh#y) properties of the tap event payload.
+The focus function expects a [`Point`](/docs/api/interfaces/Point) parameter which represents the location relative to the Camera view where you want to focus the Camera to (in _points_). If you use [react-native-gesture-handler](https://docs.swmansion.com/react-native-gesture-handler/), this will consist of the [`x`](https://docs.swmansion.com/react-native-gesture-handler/docs/next/gesture-handlers/api/tap-gh/#x) and [`y`](https://docs.swmansion.com/react-native-gesture-handler/docs/next/gesture-handlers/api/tap-gh/#y) properties of the tap event payload.

-So for example, `{ x: 0, y: 0 }` will focus to the upper left corner, while `{ x: CAM_WIDTH, y: CAM_HEIGHT }` will focus to the bottom right corner.
+So for example, `{ x: 0, y: 0 }` will focus to the upper left corner, while `{ x: VIEW_WIDTH, y: VIEW_HEIGHT }` will focus to the bottom right corner.

 Focusing adjusts auto-focus (AF) and auto-exposure (AE).
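A sketch of tap-to-focus wiring with react-native-gesture-handler's Gesture API (assuming RNGH v2; `runOnJS(true)` keeps the callback on the JS thread so the camera ref can be used):

```tsx
import React, { useRef } from 'react'
import { StyleSheet } from 'react-native'
import { Gesture, GestureDetector } from 'react-native-gesture-handler'
import { Camera, useCameraDevice } from 'react-native-vision-camera'

function FocusableCamera() {
  const camera = useRef<Camera>(null)
  const device = useCameraDevice('back')

  const tapGesture = Gesture.Tap()
    .runOnJS(true)
    .onEnd((event) => {
      // event.x/event.y are relative to the Camera view, in points
      camera.current?.focus({ x: event.x, y: event.y })
    })

  if (device == null) return null
  return (
    <GestureDetector gesture={tapGesture}>
      <Camera ref={camera} style={StyleSheet.absoluteFill} device={device} isActive={true} />
    </GestureDetector>
  )
}
```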
@@ -22,4 +22,4 @@ Focusing adjusts auto-focus (AF) and auto-exposure (AE).

 <br />

-#### 🚀 Next section: [Camera Errors](errors)
+#### 🚀 Next section: [HDR](hdr)
@@ -4,123 +4,178 @@ title: Camera Formats
 sidebar_label: Camera Formats
 ---

+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
 import useBaseUrl from '@docusaurus/useBaseUrl';

 <div>
   <img align="right" width="283" src={useBaseUrl("img/example.png")} />
 </div>

-### What are camera formats?
+## What are camera formats?

-Each camera device (see [Camera Devices](devices)) provides a number of capture formats that have different specifications. There are formats specifically designed for high-resolution photo capture, which have very high photo output quality but in return only support frame-rates of up to 30 FPS. On the other side, there might be formats that are designed for slow-motion video capture which have frame-rates up to 240 FPS.
+Each camera device (see ["Camera Devices"](devices)) provides a number of formats that have different specifications.
+There are formats specifically designed for high-resolution photo capture (but lower FPS), or formats that are designed for slow-motion video capture which have frame-rates of up to 240 FPS (but lower resolution).

-### What if I don't want to choose a format?
+## What if I don't want to choose a format?

-If you don't want to specify the best format for your camera device, you don't have to. The Camera _automatically chooses the best matching format_ for the current camera device. This is why the Camera's `format` property is _optional_.
+If you don't want to specify a Camera Format, you don't have to. The Camera automatically chooses the best matching format for the current camera device. This is why the Camera's `format` property is _optional_.

-### What you need to know about cameras
+**🚀 Continue with: [Taking Photos/Recording Videos](./capturing)**
+
+## Choosing custom formats

 To understand a bit more about camera formats, you first need to understand a few "general camera basics":

-* Each camera device is built differently, e.g. _Telephoto devices_ often don't provide frame-rates as high as _Wide-Angle devices_.
-* Formats are designed for specific use-cases, so formats with high resolution photo output don't support frame-rates as high as formats with lower resolution.
-* Different formats provide different field-of-views (FOV), maximum zoom factors, color spaces (iOS only), resolutions, frame rate ranges, and systems to assist with capture (auto-focus systems, video stabilization systems, ...)
+* Each camera device is built differently, e.g. front-facing Cameras often don't have resolutions as high as the Cameras on the back (see ["Camera Devices"](devices)).
+* Formats are designed for specific use-cases; here are some examples for formats on a Camera Device:
+  * 4k Photos, 4k Videos, 30 FPS (high quality)
+  * 4k Photos, 1080p Videos, 60 FPS (high FPS)
+  * 4k Photos, 1080p Videos, 240 FPS (ultra high FPS/slow motion)
+  * 720p Photos, 720p Videos, 30 FPS (smaller buffers, e.g. faster face detection)
+* Each app has different requirements, so the format filtering is up to you.

-### Get started
-
-Each application has different needs, so the format filtering is up to you.
+To get all available formats, simply use the `CameraDevice`'s [`formats` property](/docs/api/interfaces/CameraDevice#formats). These are a [`CameraDeviceFormat`'s](/docs/api/interfaces/CameraDeviceFormat) props:
+
+- [`photoHeight`](/docs/api/interfaces/CameraDeviceFormat#photoHeight)/[`photoWidth`](/docs/api/interfaces/CameraDeviceFormat#photoWidth): The resolution that will be used for capturing photos. Choose a format with your desired resolution.
+- [`videoHeight`](/docs/api/interfaces/CameraDeviceFormat#videoHeight)/[`videoWidth`](/docs/api/interfaces/CameraDeviceFormat#videoWidth): The resolution that will be used for recording videos. Choose a format with your desired resolution.
+- [`minFps`](/docs/api/interfaces/CameraDeviceFormat#minFps)/[`maxFps`](/docs/api/interfaces/CameraDeviceFormat#maxFps): A range of possible values for the `fps` property. For example, if your format has `minFps: 1` and `maxFps: 60`, you can either use `fps={30}`, `fps={60}` or any other value in between for recording videos.
+- [`videoStabilizationModes`](/docs/api/interfaces/CameraDeviceFormat#videoStabilizationModes): All supported Video Stabilization Modes, digital and optical. If this specific format contains your desired [`VideoStabilizationMode`](/docs/api/#videostabilizationmode), you can pass it to your `<Camera>` via the [`videoStabilizationMode` property](/docs/api/interfaces/CameraProps#videoStabilizationMode).
+- [`pixelFormats`](/docs/api/interfaces/CameraDeviceFormat#pixelFormats): All supported Pixel Formats. If this specific format contains your desired [`PixelFormat`](/docs/api/#PixelFormat), you can pass it to your `<Camera>` via the [`pixelFormat` property](/docs/api/interfaces/CameraProps#pixelFormat).
+- [`supportsVideoHDR`](/docs/api/interfaces/CameraDeviceFormat#supportsVideoHDR): Whether this specific format supports true 10-bit HDR for video capture. If this is `true`, you can enable `hdr` on your `<Camera>`.
+- [`supportsPhotoHDR`](/docs/api/interfaces/CameraDeviceFormat#supportsPhotoHDR): Whether this specific format supports HDR for photo capture. It will use multiple captures to fuse over-exposed and under-exposed Images together to form one HDR photo. If this is `true`, you can enable `hdr` on your `<Camera>`.
+- [`supportsDepthCapture`](/docs/api/interfaces/CameraDeviceFormat#supportsDepthCapture): Whether this specific format supports depth data capture. For devices like the TrueDepth/LiDAR cameras, this will always be true.
+- ...and more. See the [`CameraDeviceFormat` type](/docs/api/interfaces/CameraDeviceFormat) for all supported properties.

-To get all available formats, simply use the `CameraDevice`'s `.formats` property. See how to get a camera device in the [Camera Devices guide](devices).
-
-:::note
-You can also manually get all camera devices and decide which device to use based on the available `formats`.
-:::
-
-This example shows how you would pick the format with the _highest frame rate_:
-
-```tsx
-function App() {
-  const devices = useCameraDevices('wide-angle-camera')
-  const device = devices.back
-
-  const format = useMemo(() => {
-    return device?.formats.reduce((prev, curr) => {
-      if (prev == null) return curr
-      if (curr.maxFps > prev.maxFps) return curr
-      else return prev
-    }, undefined)
-  }, [device?.formats])
-
-  if (device == null) return <LoadingView />
-  return (
-    <Camera
-      style={StyleSheet.absoluteFill}
-      device={device}
-      format={format}
-    />
-  )
-}
-```
-
-Note that you don't want to simply pick the highest frame rate, as those formats often have incredibly low resolutions. You want to find a balance between high frame rate and high resolution, so instead you might want to use the `.sort` function.
-
-### Sort
-
-To sort your formats, create a custom comparator function which will be used as the `.sort` function's argument. The custom comparator then compares formats, preferring ones with higher frame rate AND higher resolution.
-
-Implement this however you want, I personally use a "point-based system":
-
-```ts
-export const sortFormatsByResolution = (left: CameraDeviceFormat, right: CameraDeviceFormat): number => {
-  // in this case, points aren't "normalized" (e.g. higher resolution = 1 point, lower resolution = -1 points)
-  let leftPoints = left.photoHeight * left.photoWidth
-  let rightPoints = right.photoHeight * right.photoWidth
-
-  // we also care about video dimensions, not only photo.
-  leftPoints += left.videoWidth * left.videoHeight
-  rightPoints += right.videoWidth * right.videoHeight
-
-  // you can also add points for FPS, etc
-
-  return rightPoints - leftPoints
-}
-
-// and then call it:
-const formats = useMemo(() => device?.formats.sort(sortFormatsByResolution), [device?.formats])
-```
-
-:::caution
-Be careful that you don't `filter` out a lot of formats since you might end up having no format to use at all. (_Remember; not all devices support e.g. 240 FPS._) Always carefully sort instead of filter, and pick the best available format - that way you are guaranteed to have a format available, even if your desired specifications aren't fully met.
-:::
-
-### Props
+You can either find a matching format manually by looping through your `CameraDevice`'s [`formats` property](/docs/api/interfaces/CameraDevice#formats), or by using the helper functions from VisionCamera:
+
+<Tabs
+  groupId="component-style"
+  defaultValue="hooks"
+  values={[
+    {label: 'Hooks API', value: 'hooks'},
+    {label: 'Imperative API', value: 'imperative'}
+  ]}>
+<TabItem value="hooks">
+
+```ts
+const device = ...
+const format = useCameraFormat(device, [
+  { videoResolution: { width: 3840, height: 2160 } },
+  { fps: 60 }
+])
+```
+
+</TabItem>
+<TabItem value="imperative">
+
+```ts
+const device = ...
+const format = getCameraFormat(device, [
+  { videoResolution: { width: 3840, height: 2160 } },
+  { fps: 60 }
+])
+```
+
+</TabItem>
+</Tabs>

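For the manual approach mentioned above, a sketch that loops over `formats` yourself, picking the highest photo resolution that still supports 60 FPS (the weighting is up to you):

```ts
import type { CameraDeviceFormat } from 'react-native-vision-camera'

// Sketch: manually pick a format instead of using the helper functions
let best: CameraDeviceFormat | undefined
for (const format of device?.formats ?? []) {
  if (format.maxFps < 60) continue // require at least 60 FPS
  const pixels = format.photoWidth * format.photoHeight
  if (best == null || pixels > best.photoWidth * best.photoHeight) {
    best = format
  }
}
```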
+The **filter is ordered by priority (descending)**, so if there is no format that supports both 4k and 60 FPS, the function will prefer 4k@30FPS formats over 1080p@60FPS formats, because 4k is a more important requirement than 60 FPS.
+
+If you want to record slow-motion videos, you want a format with a really high FPS setting, for example:
+
+<Tabs
+  groupId="component-style"
+  defaultValue="hooks"
+  values={[
+    {label: 'Hooks API', value: 'hooks'},
+    {label: 'Imperative API', value: 'imperative'}
+  ]}>
+<TabItem value="hooks">
+
+```ts
+const device = ...
+const format = useCameraFormat(device, [
+  { fps: 240 }
+])
+```
+
+</TabItem>
+<TabItem value="imperative">
+
+```ts
+const device = ...
+const format = getCameraFormat(device, [
+  { fps: 240 }
+])
+```
+
+</TabItem>
+</Tabs>
+
+If there is no format that has exactly 240 FPS, the closest thing to it will be used.
+
+You can also use the `'max'` flag to just use the maximum available resolution:
+
+<Tabs
+  groupId="component-style"
+  defaultValue="hooks"
+  values={[
+    {label: 'Hooks API', value: 'hooks'},
+    {label: 'Imperative API', value: 'imperative'}
+  ]}>
+<TabItem value="hooks">
+
+```ts
+const device = ...
+const format = useCameraFormat(device, [
+  { videoResolution: 'max' },
+  { photoResolution: 'max' }
+])
+```
+
+</TabItem>
+<TabItem value="imperative">
+
+```ts
+const device = ...
+const format = getCameraFormat(device, [
+  { videoResolution: 'max' },
+  { photoResolution: 'max' }
+])
+```
+
+</TabItem>
+</Tabs>

+## Camera Props

 The `Camera` View provides a few props that depend on the specified `format`. For example, you can only set the `fps` prop to a value that is supported by the current `format`. So if you have a format that supports 240 FPS, you can set the `fps` to `240`:

 ```tsx
 function App() {
   // ...
+  const format = ...
+  const fps = format.maxFps >= 240 ? 240 : format.maxFps
+
   return (
     <Camera
       style={StyleSheet.absoluteFill}
       device={device}
       format={format}
-      fps={240}
+      fps={fps}
     />
   )
 }
 ```

-:::note
-You should always verify that the format supports the desired FPS, and fall back to `undefined` (or a value that is supported, like `30`) if it doesn't.
-:::
-
 Other props that depend on the `format`:

 * `fps`: Specifies the frame rate to use
 * `hdr`: Enables HDR photo or video capture and preview
 * `lowLightBoost`: Enables a night-mode/low-light-boost for photo or video capture and preview
-* `videoStabilizationMode`: Specifies the video stabilization mode to use for this camera device
+* `videoStabilizationMode`: Specifies the video stabilization mode to use for the video pipeline
+* `pixelFormat`: Specifies the pixel format to use for the video pipeline

 <br />
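As a sketch, the format-dependent props listed above can be guarded by what the chosen `format` actually supports (prop and field names as documented in this commit):

```tsx
<Camera
  style={StyleSheet.absoluteFill}
  device={device}
  format={format}
  fps={fps}
  hdr={format.supportsVideoHDR}
  videoStabilizationMode={format.videoStabilizationModes.includes('cinematic') ? 'cinematic' : undefined}
  pixelFormat={format.pixelFormats.includes('rgb') ? 'rgb' : undefined}
/>
```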
@@ -15,19 +15,19 @@ import useBaseUrl from '@docusaurus/useBaseUrl';
 </svg>
 </div>

-### What are frame processors?
+## What are frame processors?

-Frame processors are functions that are written in JavaScript (or TypeScript) which can be used to **process frames the camera "sees"**.
+Frame processors are functions that are written in JavaScript (or TypeScript) which can be used to process frames the camera "sees".
 Inside those functions you can call **Frame Processor Plugins**, which are high performance native functions specifically designed for certain use-cases.

-For example, you might want to create a [Hotdog/Not Hotdog detector app](https://apps.apple.com/us/app/not-hotdog/id1212457521) **without writing any native code**, while still **achieving native performance**:
+For example, you might want to create an object detector app without writing any native code, while still achieving native performance:

 ```jsx
 function App() {
   const frameProcessor = useFrameProcessor((frame) => {
     'worklet'
-    const isHotdog = detectIsHotdog(frame)
-    console.log(isHotdog ? "Hotdog!" : "Not Hotdog.")
+    const objects = detectObjects(frame)
+    console.log(`Detected ${objects.length} objects.`)
   }, [])

   return (
@@ -39,74 +39,101 @@ function App() {
 }
 ```

-Frame processors are by far not limited to Hotdog detection, other examples include:
+Frame processors are by far not limited to object detection, other examples include:

-* **AI** for **facial recognition**
-* **AI** for **object detection**
+* **ML** for **facial recognition**
 * Using **Tensorflow**, **MLKit Vision**, **Apple Vision** or other libraries
 * Creating **realtime video-chats** using **WebRTC** to directly send the camera frames over the network
 * Creating scanners for **QR codes**, **Barcodes** or even custom codes such as **Snapchat's SnapCodes** or **Apple's AppClips**
 * Creating **snapchat-like filters**, e.g. draw a dog-mask filter over the user's face
 * Creating **color filters** with depth-detection
-* Drawing boxes, text, overlays, or colors on the screen in realtime
-* Rendering filters and shaders such as Blur, inverted colors, beauty filter, or more on the screen
+* **Drawing** boxes, text, overlays, or colors on the screen in realtime
+* Rendering **filters** and shaders such as Blur, inverted colors, beauty filter, or more on the screen

-Because they are written in JS, Frame Processors are **simple**, **powerful**, **extensible** and **easy to create** while still running at **native performance**. (Frame Processors can run up to **1000 times a second**!) Also, you can use **fast-refresh** to quickly see changes while developing or publish [over-the-air updates](https://github.com/microsoft/react-native-code-push) to tweak the Hotdog detector's sensitivity in live apps without pushing a native update.
+Because they are written in JS, Frame Processors are simple, powerful, extensible and easy to create while still running at native performance. (Frame Processors can run up to 1000 times a second!) Also, you can use fast-refresh to quickly see changes while developing or publish [over-the-air updates](https://github.com/microsoft/react-native-code-push) to tweak the object detector's sensitivity in live apps without pushing a native update.

 :::note
-Frame Processors require [**react-native-worklets-core**](https://github.com/margelo/react-native-worklets-core) 1.0.0 or higher.
+Frame Processors require [**react-native-worklets-core**](https://github.com/margelo/react-native-worklets-core) 0.2.0 or higher.
 :::

-### Interacting with Frame Processors
+## The `Frame`

-Since Frame Processors run in [**Worklets**](https://docs.swmansion.com/react-native-reanimated/docs/fundamentals/worklets), you can directly use JS values such as React state:
+A Frame Processor is called for every Camera frame, and exposes information about the frame in the [`Frame`](/docs/api/interfaces/Frame) parameter.
+The [`Frame`](/docs/api/interfaces/Frame) parameter wraps the native GPU-based frame buffer in a C++ HostObject (a ~1.5MB buffer at 4k), and allows you to access information such as its resolution or pixel format directly from JS:
+
+```ts
+const frameProcessor = useFrameProcessor((frame) => {
+  'worklet'
+  console.log(`Frame: ${frame.width}x${frame.height} (${frame.pixelFormat})`)
+}, [])
+```
+
+Additionally, you can also directly access the Frame's pixel data using [`toArrayBuffer()`](/docs/api/interfaces/Frame#toarraybuffer):
+
+```ts
+const frameProcessor = useFrameProcessor((frame) => {
+  'worklet'
+  if (frame.pixelFormat === 'rgb') {
+    const buffer = frame.toArrayBuffer()
+    const data = new Uint8Array(buffer)
+    console.log(`Pixel at 0,0: RGB(${data[0]}, ${data[1]}, ${data[2]})`)
+  }
+}, [])
+```
+
+It is however recommended to use native **Frame Processor Plugins** for heavy processing, as those are much faster than JavaScript and can sometimes operate on the GPU buffer directly.
+You can simply pass a `Frame` to a native Frame Processor Plugin directly.

+## Interacting with Frame Processors
+
+### Access JS values
+
+Since Frame Processors run in [**Worklets**](https://github.com/margelo/react-native-worklets-core/blob/main/docs/WORKLETS.md), you can directly use JS values such as React state, which are copied read-only into the Frame Processor:

 ```tsx
-// can be controlled with a Slider
-const [sensitivity, setSensitivity] = useState(0.4)
+// User can look for specific objects
+const targetObject = 'banana'

 const frameProcessor = useFrameProcessor((frame) => {
   'worklet'
-  const isHotdog = detectIsHotdog(frame, sensitivity)
-  console.log(isHotdog ? "Hotdog!" : "Not Hotdog.")
-}, [sensitivity])
+  const objects = detectObjects(frame)
+  const bananas = objects.filter((o) => o.type === targetObject)
+  console.log(`Detected ${bananas.length} bananas!`)
+}, [targetObject])
 ```

-You can also easily read from, and assign to [**Shared Values**](https://docs.swmansion.com/react-native-reanimated/docs/fundamentals/shared-values) to create smooth, 60 FPS animations.
-
-In this example, we detect a cat in the frame - if a cat was found, we assign the `catBounds` Shared Value to the coordinates of the cat (relative to the screen) which we can then use in a `useAnimatedStyle` hook to position the red rectangle surrounding the cat. This updates in realtime on the UI Thread, and can also be smoothly animated with `withSpring` or `withTiming`.
-
-```tsx {7}
-// represents position of the cat on the screen 🐈
-const catBounds = useSharedValue({ top: 0, left: 0, right: 0, bottom: 0 })
-
-// continuously sets 'catBounds' to the cat's coordinates on screen
-const frameProcessor = useFrameProcessor((frame) => {
-  'worklet'
-  catBounds.value = scanFrameForCat(frame)
-}, [catBounds])
-
-// uses 'catBounds' to position the red rectangle on screen.
-// smoothly updates on UI thread whenever 'catBounds' is changed
-const boxOverlayStyle = useAnimatedStyle(() => ({
-  position: 'absolute',
-  borderWidth: 1,
-  borderColor: 'red',
-  ...catBounds.value
-}), [catBounds])
-
-return (
-  <View>
-    <Camera {...cameraProps} frameProcessor={frameProcessor} />
-    {/* draws a red rectangle on the screen which surrounds the cat */}
-    <Reanimated.View style={boxOverlayStyle} />
-  </View>
-)
-```
+### Shared Values
+
+You can also easily read from, and assign to, [**Shared Values**](https://github.com/margelo/react-native-worklets-core/blob/main/docs/WORKLETS.md#shared-values), which can be written to from inside a Frame Processor and read from any other context (either React JS, Skia, or Reanimated):
+
+```tsx
+const bananas = useSharedValue([])
+
+// Detect Bananas in Frame Processor
+const frameProcessor = useFrameProcessor((frame) => {
+  'worklet'
+  const objects = detectObjects(frame)
+  bananas.value = objects.filter((o) => o.type === 'banana')
+}, [bananas])
+
+// Draw bananas in a Skia Canvas
+const onDraw = useDrawCallback((canvas) => {
+  for (const banana of bananas.value) {
+    const rect = Skia.XYWHRect(banana.x, banana.y, banana.width, banana.height)
+    const paint = Skia.Paint()
+    paint.setColor(Skia.Color('red'))
+    canvas.drawRect(rect, paint)
+  }
+})
+```

+### Call functions
+
 And you can also call back to the React-JS thread by using `createRunInJsFn(...)`:

-```tsx {1}
+```tsx
 const onQRCodeDetected = Worklets.createRunInJsFn((qrCode: string) => {
   navigation.push("ProductPage", { productId: qrCode })
 })
@@ -120,14 +147,20 @@ const frameProcessor = useFrameProcessor((frame) => {
 }, [onQRCodeDetected])
 ```

+## Threading
+
+By default, Frame Processors run synchronously with the Camera pipeline. Anything that takes longer than one Frame interval might block the Camera from streaming new Frames.
+For example, if your Camera is running at **30 FPS**, your Frame Processor has **33ms** to finish executing before the next Frame is dropped. At **60 FPS**, you only have **16ms**.
+
 ### Running asynchronously

-Since Frame Processors run synchronously with the Camera Pipeline, anything taking longer than one Frame interval might block the Camera from streaming new Frames. To avoid this, you can use `runAsync` to run code asynchronously on a different Thread:
+For longer-running processing, you can use [`runAsync(..)`](/docs/api/#runasync) to run code asynchronously on a different Thread:

 ```ts
 const frameProcessor = useFrameProcessor((frame) => {
   'worklet'
   console.log("I'm running synchronously at 60 FPS!")

   runAsync(() => {
     'worklet'
     console.log("I'm running asynchronously, possibly at a lower FPS rate!")
@@ -137,12 +170,13 @@ const frameProcessor = useFrameProcessor((frame) => {

 ### Running at a throttled FPS rate

-Some Frame Processor Plugins don't need to run on every Frame, for example a Frame Processor that detects the brightness in a Frame only needs to run twice per second:
+Some Frame Processor Plugins don't need to run on every Frame, for example a Frame Processor that detects the brightness in a Frame only needs to run twice per second. You can achieve this by using [`runAtTargetFps(..)`](/docs/api/#runAtTargetFps):

 ```ts
 const frameProcessor = useFrameProcessor((frame) => {
   'worklet'
   console.log("I'm running synchronously at 60 FPS!")

   runAtTargetFps(2, () => {
     'worklet'
     console.log("I'm running synchronously at 2 FPS!")
@ -150,9 +184,22 @@ const frameProcessor = useFrameProcessor((frame) => {
|
|||||||
}, [])
|
}, [])
|
||||||
```
|
```
|
||||||
|
|
||||||
### Using Frame Processor Plugins
## Native Frame Processor Plugins

Frame Processor Plugins are distributed through npm. To install the [**vision-camera-image-labeler**](https://github.com/mrousavy/vision-camera-image-labeler) plugin, run:
Since JavaScript is slower than native languages, it is recommended to use native Frame Processor Plugins for heavy processing.
Such native plugins benefit from faster languages (Objective-C/Swift, Java/Kotlin, or C++), and can make use of CPU vector- or GPU-acceleration.

### Creating native Frame Processor Plugins

VisionCamera provides an easy-to-use API for creating native Frame Processor Plugins, which are used to either wrap existing algorithms (example: ["MLKit Face Detection"](https://developers.google.com/ml-kit/vision/face-detection)), or build your own custom algorithms.
Its binding point is a simple callback function that gets called with the native frame type (`CMSampleBuffer` or `Image`), which you can use for any kind of processing.
The native plugin can accept parameters (e.g. for configuration) and return any kind of value as a result, which are bridged through JSI.

See: ["Creating Frame Processor Plugins"](/docs/guides/frame-processors-plugins-overview).
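
As a rough sketch of the JS side of such a plugin (the `detectFaces` plugin name and its `performanceMode` parameter are hypothetical; the exact lookup API is described in the guide linked above):

```ts
import { VisionCameraProxy, Frame } from 'react-native-vision-camera'

// look the native plugin up once, at module load time
const plugin = VisionCameraProxy.getFrameProcessorPlugin('detectFaces')

// hypothetical wrapper: calls into the native plugin synchronously via JSI
export function detectFaces(frame: Frame): object[] {
  'worklet'
  if (plugin == null) throw new Error('Failed to load Frame Processor Plugin "detectFaces"!')
  // parameters (e.g. configuration) are bridged to native through JSI,
  // and the plugin's return value is bridged back the same way
  return plugin.call(frame, { performanceMode: 'fast' }) as unknown as object[]
}
```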

### Using Community Plugins

Community Frame Processor Plugins are distributed through npm. To install the [**vision-camera-image-labeler**](https://github.com/mrousavy/vision-camera-image-labeler) plugin, run:

```bash
npm i vision-camera-image-labeler
@ -169,53 +216,26 @@ const frameProcessor = useFrameProcessor((frame) => {
}, [])
```

Check out [**Frame Processor community plugins**](/docs/guides/frame-processor-plugin-list) to discover plugins, or [**start creating a plugin yourself**](/docs/guides/frame-processors-plugins-overview)!
Check out [Frame Processor community plugins](/docs/guides/frame-processor-plugin-list) to discover available community plugins.

### Selecting a Format for a Frame Processor
## Selecting a Format for a Frame Processor

When running frame processors, it is often important to choose an appropriate [format](/docs/guides/formats). Here are some general tips to consider (a short sketch follows this list):

* If you are running heavy AI/ML calculations in your frame processor, make sure to [select a format](/docs/guides/formats) that has a lower resolution to optimize its performance.
* If you are running heavy AI/ML calculations in your frame processor, make sure to [select a format](/docs/guides/formats) that has a lower resolution to optimize its performance. You can also resize the Frame on-demand.
* Sometimes a frame processor plugin only works with specific [pixel formats](/docs/api/interfaces/CameraDeviceFormat#pixelformat). Some plugins (like MLKit) don't work with `x420`.
* Sometimes a frame processor plugin only works with specific [pixel formats](/docs/api/interfaces/CameraDeviceFormat#pixelformats). Some plugins (like Tensorflow Lite Models) don't work with `yuv`, so use a [`pixelFormat`](/docs/api/interfaces/CameraProps#pixelFormat) of `rgb` instead.
* Some Frame Processor plugins don't work with HDR formats. In this case you need to disable [`hdr`](/docs/api/interfaces/CameraProps#hdr).
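
For example, a minimal sketch combining these tips (the exact resolution and the plugin's format needs are assumptions, check your plugin's docs):

```tsx
// prefer a smaller video resolution so per-frame ML work stays cheap
const format = useCameraFormat(device, [
  { videoResolution: { width: 1280, height: 720 } },
])

// this hypothetical plugin requires RGB buffers, so opt out of the default format
return <Camera {...props} format={format} pixelFormat="rgb" frameProcessor={frameProcessor} />
```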

### Benchmarks
## Benchmarks

Frame Processors are _really_ fast. I have used [MLKit Vision Image Labeling](https://firebase.google.com/docs/ml-kit/ios/label-images) to label 4k Camera frames in realtime, and measured the following results:

* Fully natively (written in pure Objective-C, no React interaction at all), I have measured an average of **68ms** per call.
* As a Frame Processor Plugin (written in Objective-C, called through a JS Frame Processor function), I have measured an average of **69ms** per call.

This means that **the Frame Processor API only takes ~1ms longer than a fully native implementation**, making it **the fastest and easiest way to run any sort of Frame Processing in React Native**.
This means that the Frame Processor API only takes ~1ms longer than a fully native implementation, making it **the fastest and easiest way to run any sort of Frame Processing in React Native**.

> All measurements are recorded on an iPhone 11 Pro, benchmarked total execution time of the [`captureOutput`](https://developer.apple.com/documentation/avfoundation/avcapturevideodataoutputsamplebufferdelegate/1385775-captureoutput) function by using [`CFAbsoluteTimeGetCurrent`](https://developer.apple.com/documentation/corefoundation/1543542-cfabsolutetimegetcurrent). Running smaller images (lower than 4k resolution) is much quicker and many algorithms can even run at 60 FPS.
## Disabling Frame Processors

### Avoiding Frame-drops

Frame Processors will be **synchronously** called for each frame the Camera sees and have to finish executing before the next frame arrives, otherwise the next frame(s) will be dropped. For a frame rate of **30 FPS**, you have about **33ms** to finish processing frames. For a QR Code Scanner, **5 FPS** (200ms) might suffice, while a object tracking AI might run at the same frame rate as the Camera itself (e.g. **60 FPS** (16ms)).

### ESLint react-hooks plugin

If you are using the [react-hooks ESLint plugin](https://www.npmjs.com/package/eslint-plugin-react-hooks), make sure to add `useFrameProcessor` to `additionalHooks` inside your ESLint config. (See ["advanced configuration"](https://www.npmjs.com/package/eslint-plugin-react-hooks#advanced-configuration))

### Technical

#### Frame Processors

**Frame Processors** are JS functions that will be **workletized** using [react-native-worklets-core](https://github.com/margelo/react-native-worklets-core). They are created on a **parallel camera thread** using a separate JavaScript Runtime (_"VisionCamera JS-Runtime"_) and are **invoked synchronously** (using JSI) without ever going over the bridge. In a **Frame Processor** you can write normal JS code, call back to the React-JS Thread (e.g. `setState`), use [Shared Values](https://docs.swmansion.com/react-native-reanimated/docs/fundamentals/shared-values/) and call **Frame Processor Plugins**.

#### Frame Processor Plugins

**Frame Processor Plugins** are native functions (written in Objective-C, Swift, C++, Java or Kotlin) that are injected into the VisionCamera JS-Runtime. They can be **synchronously called** from your JS Frame Processors (using JSI) without ever going over the bridge. Because VisionCamera provides an easy-to-use plugin API, you can easily create a **Frame Processor Plugin** yourself. Some examples include [Barcode Scanning](https://developers.google.com/ml-kit/vision/barcode-scanning), [Face Detection](https://developers.google.com/ml-kit/vision/face-detection), [Image Labeling](https://developers.google.com/ml-kit/vision/image-labeling), [Text Recognition](https://developers.google.com/ml-kit/vision/text-recognition) and more.

> Learn how to [**create Frame Processor Plugins**](frame-processors-plugins-overview), or check out the [**example Frame Processor Plugin for iOS**](https://github.com/mrousavy/react-native-vision-camera/blob/main/package/example/ios/Frame%20Processor%20Plugins/Example%20Plugin%20(Swift)/ExamplePluginSwift.swift) or [**Android**](https://github.com/mrousavy/react-native-vision-camera/blob/main/package/example/android/app/src/main/java/com/mrousavy/camera/example/ExampleFrameProcessorPlugin.java).

#### The `Frame` object

The Frame Processor gets called with a `Frame` object, which is a **JSI HostObject**. It holds a reference to the native (C++) Frame Image Buffer (~10 MB in size) and exposes properties such as `width`, `height`, `bytesPerRow` and more to JavaScript so you can synchronously access them. The `Frame` object can be passed around in JS, as well as returned from- and passed to a native **Frame Processor Plugin**.

> See [**this tweet**](https://twitter.com/mrousavy/status/1412300883149393921) for more information.

### Disabling Frame Processors

The Frame Processor API spawns a secondary JavaScript Runtime which consumes a small amount of extra CPU and RAM. Additionally, compile time increases since Frame Processors are written in native C++. If you're not using Frame Processors at all, you can disable them:

@ -228,7 +248,7 @@ The Frame Processor API spawns a secondary JavaScript Runtime which consumes a s
]}>
<TabItem value="rn">

#### Android
### Android

Inside your `gradle.properties` file, add the `disableFrameProcessors` flag:

@ -238,7 +258,7 @@ VisionCamera_disableFrameProcessors=true

Then, clean and rebuild your project.

#### iOS
### iOS

Inside your `Podfile`, add the `VCDisableFrameProcessors` flag:

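The snippet itself is cut off in this hunk; assuming it mirrors the Gradle flag above and the `$VCDisableSkia` Podfile flag shown further down in this diff, it would look like:

```ruby
# hypothetical - mirrors the VisionCamera_disableFrameProcessors Gradle flag
$VCDisableFrameProcessors = true
```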
@ -12,7 +12,7 @@ import TabItem from '@theme/TabItem';

Frame Processor Plugins are **native functions** which can be directly called from a JS Frame Processor. (See ["Frame Processors"](frame-processors))

They **receive a frame from the Camera** as an input and can return any kind of output. For example, a `detectFaces` function returns an array of detected faces in the frame:
They receive a frame from the Camera as an input and can return any kind of output. For example, a `detectFaces` function returns an array of detected faces in the frame:

```tsx {4-5}
function App() {
@ -28,7 +28,7 @@ function App() {
}
```

To achieve **maximum performance**, the `detectFaces` function is written in a native language (e.g. Objective-C), but it will be directly called from the VisionCamera Frame Processor JavaScript-Runtime.
For maximum performance, the `detectFaces` function is written in a native language (e.g. Objective-C), but it will be directly called from the VisionCamera Frame Processor JavaScript-Runtime through JSI.

### Types

@ -1,7 +1,7 @@
---
id: frame-processors-skia
id: skia-frame-processors
title: Skia Frame Processors
title: Drawing to a Frame (Skia)
sidebar_label: Skia Frame Processors
sidebar_label: Drawing to a Frame (Skia)
---

import Tabs from '@theme/Tabs';
@ -10,110 +10,62 @@ import useBaseUrl from '@docusaurus/useBaseUrl';

<div>
<svg xmlns="http://www.w3.org/2000/svg" width="283" height="535" style={{ float: 'right' }}>
<image href={useBaseUrl("img/frame-processors.gif")} x="18" y="33" width="247" height="469" />
<image href={useBaseUrl("img/demo_drawing.mp4")} x="18" y="33" width="247" height="469" />
<image href={useBaseUrl("img/frame.png")} width="283" height="535" />
</svg>
</div>

### What are Skia Frame Processors?
## Skia Frame Processors

Skia Frame Processors are [Frame Processors](frame-processors) that allow you to draw onto the Frame using [react-native-skia](https://github.com/Shopify/react-native-skia).
It is technically pretty difficult to draw onto a Camera Frame and have it rendered into the resulting photo or video in realtime, but I have built a working proof of concept of that using Metal/OpenGL and [Skia](https://skia.org) straight in VisionCamera.

Skia Frame Processors were introduced in VisionCamera V3 RC.0, but were removed again after VisionCamera V3 RC.9 due to the significantly increased complexity of the video pipeline in the codebase.
This allows you to draw onto a Frame using [react-native-skia](https://shopify.github.io/react-native-skia/)'s easy to use JavaScript APIs:

```
```ts
yarn add react-native-vision-camera@rc.9
```

They worked perfectly fine for those RCs with some minor inconsistencies (landscape orientation didn't work on Android), which proves the concept. If you want to learn more about Skia Frame Processors, we at [Margelo](https://margelo.io) can build a custom solution for your company to implement drawable Frame Processors (e.g. filters, blurring, masks, colors, etc). See [PR #1740](https://github.com/mrousavy/react-native-vision-camera/pull/1740) for more details.

### Documentation

For example, you might want to draw a rectangle around a user's face **without writing any native code**, while still **achieving native performance**:

```jsx
function App() {
const frameProcessor = useSkiaFrameProcessor((frame) => {
  'worklet'
  const faces = detectFaces(frame)
  faces.forEach((face) => {
    frame.drawRect(face.rectangle, redPaint)
  })
}, [])

return (
  // Detect objects using GPU-accelerated ML API
<Camera
  const bananas = detectBananas()
{...cameraProps}
frameProcessor={frameProcessor}
  // Draw banana outline using GPU-accelerated Skia drawing API
/>
  for (const banana of bananas) {
)
    const rect = Skia.XYWHRect(banana.x,
                               banana.y,
                               banana.width,
                               banana.height)
    const paint = Skia.Paint()
    paint.setColor(Skia.Color('red'))
    frame.drawRect(rect, paint)
  }
}
}, [])
```

With Skia, you can also implement realtime filters, blurring, shaders, and much more. For example, this is how you invert the colors in a Frame:
..or even apply color-filters in realtime:

```jsx
```ts
const INVERTED_COLORS_SHADER = `
const invertColorsFilter = Skia.RuntimeEffect.Make(`
  uniform shader image;

  half4 main(vec2 pos) {
    vec4 color = image.eval(pos);
    return vec4(1.0 - color.rgb, 1.0);
    return vec4((1.0 - color).rgb, 1.0);
  }
`;
`)
const paint = Skia.Paint(invertColorsFilter)

function App() {
  const imageFilter = Skia.ImageFilter.MakeRuntimeShader(/* INVERTED_COLORS_SHADER */)
  const paint = Skia.Paint()
  paint.setImageFilter(imageFilter)

const frameProcessor = useSkiaFrameProcessor((frame) => {
  'worklet'

  // Draw frame using Skia Shader to invert colors
  frame.render(paint)
}, [])

return (
  <Camera
    {...cameraProps}
    frameProcessor={frameProcessor}
  />
)
}
```

### Rendered outputs
## VisionCamera Skia Integration

The rendered results of the Skia Frame Processor are rendered to an offscreen context and will be displayed in the Camera Preview, recorded to a video file (`startRecording()`) and captured in a photo (`takePhoto()`). In other words, you draw into the Frame, not just ontop of it.
While the Skia Integration was originally planned for V3, I decided to remove it again because VisionCamera got insanely complex with that code integrated (custom Metal/OpenGL rendering pipeline with intermediate steps and tons of build setup, and fully custom Skia Preview View) and I wanted to keep VisionCamera lean so other users aren't affected by build issues caused by Skia or those GPU APIs. See [PR #1740](https://github.com/mrousavy/react-native-vision-camera/pull/1740) for more information.

### Performance

VisionCamera sets up an additional Skia rendering context which requires a few resources.

On iOS, Metal is used for GPU Acceleration. On Android, OpenGL is used for GPU Acceleration.
C++/JSI is used for highly efficient communication between JS and Skia.

### Disabling Skia Frame Processors

Skia Frame Processors ship with additional C++ files which might slightly increase the app's build time. If you're not using Skia Frame Processors at all, you can disable them:

#### Android

Inside your `gradle.properties` file, add the `disableSkia` flag:

```groovy
VisionCamera_disableSkia=true
```

Then, clean and rebuild your project.

#### iOS

Inside your `Podfile`, add the `VCDisableSkia` flag:

```ruby
$VCDisableSkia = true
```

At my app development agency, [Margelo](https://margelo.io), we have worked a lot with 2D and 3D graphics and Camera realtime processing (see the Snapchat-style mask filter on our website for example - that is running in VisionCamera/React Native!). If you need this feature, [reach out to us](https://margelo.io#contact) and we'll build a customized/tailored solution for your company! :)

<br />

69 docs/docs/guides/FRAME_PROCESSORS_TIPS.mdx Normal file
@ -0,0 +1,69 @@
---
id: frame-processors-tips
title: Frame Processors Tips
sidebar_label: Frame Processors Tips
---

import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
import useBaseUrl from '@docusaurus/useBaseUrl';

## Avoiding Frame-drops

Frame Processors will be **synchronously** called for each frame the Camera sees and have to finish executing before the next frame arrives, otherwise the next frame(s) will be dropped. For a frame rate of **30 FPS**, you have about **33ms** to finish processing frames. At **60 FPS**, you only have **16ms**.
If your Frame Processor has not finished executing when the next frame arrives, the next frame will be dropped.

Some general tips (a short sketch of the Shared Values tip follows this list):

- Use `runAsync(..)` if you don't need your Frame Processor to run synchronously
- Use `runAtTargetFps(..)` if you don't need your Frame Processor to run on every frame
- Use Shared Values (`useSharedValue(..)`) instead of React State (`useState(..)`) when sharing data
- Prefer native Frame Processor Plugins instead of pure JavaScript based plugins
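
For example, a minimal sketch of the Shared Values tip (assuming react-native-worklets-core's `useSharedValue` and a hypothetical `detectFaces` plugin):

```tsx
import { useSharedValue } from 'react-native-worklets-core'

function App() {
  // Shared Values can be written from the Frame Processor thread
  // without triggering a React re-render on every frame
  const faceCount = useSharedValue(0)

  const frameProcessor = useFrameProcessor((frame) => {
    'worklet'
    faceCount.value = detectFaces(frame).length // detectFaces is hypothetical
  }, [faceCount])

  return <Camera {...props} frameProcessor={frameProcessor} />
}
```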

### FPS Graph

Use the FPS Graph to profile your Frame Processor's performance over time:

```tsx
<Camera {...props} enableFpsGraph={true} />
```

## Fast Frame Processor Plugins

If you use native Frame Processor Plugins, make sure they are optimized for realtime Camera use-cases. Some general tips (a buffer-access sketch follows this list):

- Prefer plugins that use the **[PixelFormat](/docs/api#pixelformat) `yuv` instead of `rgb`**, as `yuv` is more efficient in both memory usage and processing efficiency
- Prefer plugins that can work with the **native Frame types** (`CMSampleBuffer` and `Image`/`HardwareBuffer`) instead of passing the byte array (`frame.toArrayBuffer()`), as the latter involves a GPU -> CPU copy
- If you need to use the byte array (`frame.toArrayBuffer()`), prefer plugins that work with **`uint8` instead of `float`** types, as `uint8` is much more efficient
- Prefer plugins that support **GPU acceleration**. For Tensorflow, this might be the CoreML or Metal GPU delegates
- For operations such as resizing, **prefer GPU or CPU vector acceleration** (e.g. Accelerate/vImage) instead of just array loops
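
As a sketch of what the byte-array route looks like from JS (the `uint8` view is the cheap option mentioned above):

```ts
const frameProcessor = useFrameProcessor((frame) => {
  'worklet'
  // toArrayBuffer() copies the GPU frame buffer to the CPU - do this sparingly
  const buffer = frame.toArrayBuffer()
  const pixels = new Uint8Array(buffer)
  console.log(`First pixel byte: ${pixels[0]}`)
}, [])
```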

## ESLint react-hooks plugin

If you are using the [react-hooks ESLint plugin](https://www.npmjs.com/package/eslint-plugin-react-hooks), make sure to add `useFrameProcessor` to `additionalHooks` inside your ESLint config so dependencies are detected properly. (See ["advanced configuration"](https://www.npmjs.com/package/eslint-plugin-react-hooks#advanced-configuration))
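
A minimal sketch of that config (assuming a `.eslintrc.js`-style setup; adapt the shape to your own config format):

```js
// .eslintrc.js
module.exports = {
  plugins: ['react-hooks'],
  rules: {
    // additionalHooks takes a regex of extra hooks to check dependencies for
    'react-hooks/exhaustive-deps': [
      'error',
      { additionalHooks: '(useFrameProcessor)' },
    ],
  },
}
```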

## Technical

### Frame Processors

Frame Processors are JS functions that will be _workletized_ using [react-native-worklets-core](https://github.com/margelo/react-native-worklets-core). They are created on a parallel camera thread using a separate JavaScript Runtime (_"VisionCamera JS-Runtime"_) and are invoked synchronously (using JSI) without ever going over the bridge. In a Frame Processor you can write normal JS code, call back to the React-JS Thread (e.g. `setState`), use [Shared Values](https://github.com/margelo/react-native-worklets-core/blob/main/docs/WORKLETS.md#shared-values) and call Frame Processor Plugins.

### Frame Processor Plugins

Frame Processor Plugins are native functions (written in Objective-C, Swift, C++, Java or Kotlin) that are injected into the VisionCamera JS-Runtime. They can be synchronously called from your JS Frame Processors (using JSI) without ever going over the bridge. Because VisionCamera provides an easy-to-use plugin API, you can easily create a Frame Processor Plugin yourself. Some examples include [Barcode Scanning](https://developers.google.com/ml-kit/vision/barcode-scanning), [Face Detection](https://developers.google.com/ml-kit/vision/face-detection), [Image Labeling](https://developers.google.com/ml-kit/vision/image-labeling), [Text Recognition](https://developers.google.com/ml-kit/vision/text-recognition) and more.

> Learn how to [**create Frame Processor Plugins**](frame-processors-plugins-overview), or check out the [**example Frame Processor Plugin for iOS**](https://github.com/mrousavy/react-native-vision-camera/blob/main/package/example/ios/Frame%20Processor%20Plugins/Example%20Plugin%20(Swift)/ExamplePluginSwift.swift) or [**Android**](https://github.com/mrousavy/react-native-vision-camera/blob/main/package/example/android/app/src/main/java/com/mrousavy/camera/example/ExampleFrameProcessorPlugin.java).

### The `Frame` object

The Frame Processor gets called with a `Frame` object, which is a JSI HostObject. It holds a reference to the native (C++) Frame's GPU Buffer (~10 MB in size) and exposes properties such as `width`, `height`, `bytesPerRow` and more to JavaScript so you can synchronously access them. You can access the Frame data in JavaScript using `frame.toArrayBuffer()`, which copies over the GPU buffer to the CPU.
The `Frame` object can be passed around in JS, as well as returned from- and passed to a native Frame Processor Plugin.
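
A tiny sketch of reading those synchronous properties:

```ts
const frameProcessor = useFrameProcessor((frame) => {
  'worklet'
  // width/height/bytesPerRow are read synchronously from the HostObject
  console.log(`${frame.width}x${frame.height}, ${frame.bytesPerRow} bytes per row`)
}, [])
```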

With 4k Frames, roughly 1.5 GB of Frame data flows through your Frame Processor per second.

> See [**this tweet**](https://twitter.com/mrousavy/status/1412300883149393921) for more information.

<br />

#### 🚀 Next section: [Zooming](/docs/guides/zooming) (or [creating a Frame Processor Plugin](/docs/guides/frame-processors-plugins-overview))
@ -75,7 +75,7 @@ The Frame Processor Plugin will be exposed to JS through the `VisionCameraProxy`
4. **Implement your Frame Processing.** See the [Example Plugin (Java)](https://github.com/mrousavy/react-native-vision-camera/blob/main/package/example/android/app/src/main/java/com/mrousavy/camera/example/ExampleFrameProcessorPlugin.java) for reference.
5. Create a new Java file which registers the Frame Processor Plugin in a React Package, for the Face Detector plugin this file will be called `FaceDetectorFrameProcessorPluginPackage.java`:

```java {12}
```java {13}
import com.facebook.react.ReactPackage;
import com.facebook.react.bridge.NativeModule;
import com.facebook.react.bridge.ReactApplicationContext;
@ -140,7 +140,7 @@ The Frame Processor Plugin will be exposed to JS through the `VisionCameraProxy`
4. **Implement your Frame Processing.** See the [Example Plugin (Java)](https://github.com/mrousavy/react-native-vision-camera/blob/main/package/example/android/app/src/main/java/com/mrousavy/camera/example/ExampleFrameProcessorPlugin.java) for reference.
5. Create a new Kotlin file which registers the Frame Processor Plugin in a React Package, for the Face Detector plugin this file will be called `FaceDetectorFrameProcessorPluginPackage.kt`:

```kotlin {9}
```kotlin {9-11}
import com.facebook.react.ReactPackage
import com.facebook.react.bridge.NativeModule
import com.facebook.react.bridge.ReactApplicationContext
@ -118,7 +118,7 @@ public class FaceDetectorFrameProcessorPlugin: FrameProcessorPlugin {

6. In your `AppDelegate.m`, add the following code to `application:didFinishLaunchingWithOptions:`:

```objc {5}
```objc {5-8}
- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions
{
...
@ -19,6 +19,7 @@ cd ios && pod install

## Plugin List

* [mrousavy/**react-native-fast-tflite**](https://github.com/mrousavy/react-native-fast-tflite): A plugin to run any Tensorflow Lite model inside React Native, written in C++ with GPU acceleration.
* [mrousavy/**vision-camera-image-labeler**](https://github.com/mrousavy/vision-camera-image-labeler): A plugin to label images using MLKit Vision Image Labeler.
* [mrousavy/**vision-camera-resize-plugin**](https://github.com/mrousavy/vision-camera-resize-plugin): A plugin for fast frame resizing to optimize execution speed of expensive AI algorithms.
* [rodgomesc/**vision-camera-face-detector**](https://github.com/rodgomesc/vision-camera-face-detector): A plugin to detect faces using MLKit Vision Face Detector.
83 docs/docs/guides/HDR.mdx Normal file
@ -0,0 +1,83 @@
---
id: hdr
title: HDR
sidebar_label: HDR
---

import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
import useBaseUrl from '@docusaurus/useBaseUrl';

## What is HDR?

HDR ("High Dynamic Range") is a capture mode that captures colors in a much wider range, allowing for much better details and brighter colors.

<div align="center">
  <img src="/img/sdr_comparison.png" width="51%" />
  <img src="/img/hdr_comparison.png" width="51%" />
</div>

### Photo HDR

Photo HDR is accomplished by running three captures instead of one: an underexposed photo, a normal photo, and an overexposed photo. Then, these images are fused together to create darker darks and brighter brights.

<div align="center">
  <img src="/img/hdr_photo_demo.webp" width="65%" />
</div>

### Video HDR

Video HDR is accomplished by using a 10-bit HDR pixel format with custom configuration on the hardware sensor that allows for capturing wider color ranges.

* On iOS, this uses the `kCVPixelFormatType_420YpCbCr10BiPlanarVideoRange` Pixel Format.
* On Android, this uses the Dynamic Range Format `HLG10`, `HDR10` or `DOLBY_VISION_10B_HDR_OEM`.

If true 10-bit Video HDR is not available, the OS will sometimes fall back to EDR ("Extended Dynamic Range"), which, similar to how Photo HDR works, just doubles the video frame rate to capture one longer-exposed frame and one shorter-exposed frame.

## Using HDR

To enable HDR capture, you need to select a format (see ["Camera Formats"](formats)) that supports HDR capture:

<Tabs
groupId="component-style"
defaultValue="hooks"
values={[
{label: 'Hooks API', value: 'hooks'},
{label: 'Imperative API', value: 'imperative'}
]}>
<TabItem value="hooks">

```ts
const format = useCameraFormat(device, [
  { photoHDR: true },
  { videoHDR: true },
])
```

</TabItem>
<TabItem value="imperative">

```ts
const format = getCameraFormat(device, [
  { photoHDR: true },
  { videoHDR: true },
])
```

</TabItem>
</Tabs>

Then, pass the `format` to the Camera and enable the `hdr` prop if it is supported:

```tsx
const format = ...
const supportsHdr = format.supportsPhotoHDR && format.supportsVideoHDR

return <Camera {...props} format={format} hdr={supportsHdr} />
```

Now, all captures (`takePhoto(..)` and `startRecording(..)`) will be configured to use HDR.

<br />

#### 🚀 Next section: [Video Stabilization](stabilization)
@ -10,45 +10,28 @@ import useBaseUrl from '@docusaurus/useBaseUrl';
<img align="right" width="283" src={useBaseUrl("img/example.png")} />
</div>

### The `isActive` prop
## The `isActive` prop

The Camera's `isActive` property can be used to _pause_ the session (`isActive={false}`) while still keeping the session "warm". This is more desirable than completely unmounting the camera, since _resuming_ the session (`isActive={true}`) will be **much faster** than re-mounting the camera view.

For example, you want to **pause the camera** when the user **navigates to another page** or **minimizes the app** since otherwise the camera continues to run in the background without the user seeing it, causing **significant battery drain**. Also, on iOS a green dot indicates the user that the camera is still active, possibly causing the user to raise privacy concerns. (🔗 See ["About the orange and green indicators in your iPhone status bar"](https://support.apple.com/en-us/HT211876))
For example, you want to **pause the camera** when the user **navigates to another page** or **minimizes the app** since otherwise the camera continues to run in the background without the user seeing it, causing **significant battery drain**. Also, on iOS a green dot indicates to the user that the camera is still active, possibly causing the user to raise privacy concerns. (See ["About the orange and green indicators in your iPhone status bar"](https://support.apple.com/en-us/HT211876))

This example demonstrates how you could pause the camera stream once the app goes into background using a [custom `useIsAppForeground` hook](https://github.com/mrousavy/react-native-vision-camera/blob/main/package/example/src/hooks/useIsForeground.ts):
For example, you might want to pause the Camera when the user minimizes the app ([`useAppState()`](https://github.com/react-native-community/hooks?tab=readme-ov-file#useappstate)) or navigates to a new screen ([`useIsFocused()`](https://reactnavigation.org/docs/use-is-focused/)):

```tsx
function App() {
  const devices = useCameraDevices()
  const device = devices.back
  const isAppForeground = useIsAppForeground()

  if (device == null) return <LoadingView />
  return (
    <Camera
      style={StyleSheet.absoluteFill}
      device={device}
      isActive={isAppForeground}
    />
  )
}
```

#### Usage with `react-navigation`

To automatically pause the Camera when the user navigates to a different page, use the [`useIsFocused`](https://reactnavigation.org/docs/use-is-focused/) function:

```tsx {4}
function App() {
  // ...

  const isFocused = useIsFocused()
  const appState = useAppState()
  const isActive = isFocused && appState === "active"

  return <Camera {...props} isActive={isFocused} />
  return <Camera {...props} isActive={isActive} />
}
```

## Interruptions

VisionCamera gracefully handles Camera interruptions such as incoming calls, phone overheating, a different app opening the Camera, etc., and will automatically resume the Camera once it becomes available again.

<br />

#### 🚀 Next section: [Camera Formats](formats)
@ -10,8 +10,9 @@ import useBaseUrl from '@docusaurus/useBaseUrl';
<img align="right" width="283" src={useBaseUrl("img/11_back.png")} />
</div>

The provided library doesn't work on simulators. These steps allow you to mock the library
## Mocking VisionCamera
and use it for developing or testing. Based on
These steps allow you to mock VisionCamera and use it for developing or testing. Based on
[Detox Mock Guide](https://github.com/wix/Detox/blob/master/docs/Guide.Mocking.md).

### Configure the Metro bundler
@ -65,7 +66,7 @@ import RNFS, { writeFile } from 'react-native-fs';
console.log('[DETOX] Using mocked react-native-vision-camera');

export class VisionCamera extends React.PureComponent {
  static async getAvailableCameraDevices() {
  static getAvailableCameraDevices() {
    return (
      [
        {
137 docs/docs/guides/PERFORMANCE.mdx Normal file
@ -0,0 +1,137 @@
---
id: performance
title: Performance
sidebar_label: Performance
---

import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
import useBaseUrl from '@docusaurus/useBaseUrl';

## Performance of VisionCamera

VisionCamera is highly optimized to be **as fast as a native Camera app**, and is sometimes even faster than that.
I am using highly efficient native GPU buffer formats (such as YUV 4:2:0, or lossless compressed YUV 4:2:0), running the video pipelines in parallel, using C++ for the Frame Processors implementation, and other tricks to make sure VisionCamera is as efficient as possible.

## Making it faster

There are a few things you can do to make your Camera faster which require a core understanding of how Cameras work under the hood:

### Simpler Camera Device

Selecting a "simpler" Camera Device (i.e. a Camera Device with _fewer physical cameras_) allows the Camera to initialize faster as it does not have to start multiple devices at once.
You can prefer a simple wide-angle Camera (`['wide-angle-camera']`) over a triple camera (`['ultra-wide-angle-camera', 'wide-angle-camera', 'telephoto-camera']`) to significantly speed up initialization time.

<Tabs
groupId="component-style"
defaultValue="hooks"
values={[
{label: 'Hooks API', value: 'hooks'},
{label: 'Imperative API', value: 'imperative'}
]}>
<TabItem value="hooks">

```ts
const fasterDevice = useCameraDevice('back', {
  physicalDevices: ['wide-angle-camera']
})
const slowerDevice = useCameraDevice('back', {
  physicalDevices: ['ultra-wide-angle-camera', 'wide-angle-camera', 'telephoto-camera']
})
```

</TabItem>
<TabItem value="imperative">

```ts
const devices = Camera.getAvailableCameraDevices()
const fasterDevice = getCameraDevice(devices, 'back', {
  physicalDevices: ['wide-angle-camera']
})
const slowerDevice = getCameraDevice(devices, 'back', {
  physicalDevices: ['ultra-wide-angle-camera', 'wide-angle-camera', 'telephoto-camera']
})
```

</TabItem>
</Tabs>

See ["Camera Devices"](devices) for more information.

Note: By default (when not passing the options object), a simpler device is already chosen.

### No HDR

HDR uses 10-bit formats and/or additional processing steps that come with additional computation overhead. Disable HDR (don't pass `hdr` to the `<Camera>`) for higher efficiency.

### Buffer Compression

Enable Buffer Compression ([`enableBufferCompression`](/docs/api/interfaces/CameraProps#enablebuffercompression)) to use lossless-compressed buffers for the Camera's video pipeline. These buffers can use less memory and are more efficient.

Note: When not using a `frameProcessor`, buffer compression is automatically enabled.
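
A minimal sketch of enabling it on a video-enabled Camera:

```tsx
// lossless-compressed video buffers use less memory in the video pipeline
return <Camera {...props} video={true} enableBufferCompression={true} />
```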

### Video Stabilization

Video Stabilization requires additional overhead to start the algorithm, so disabling [`videoStabilizationMode`](/docs/api/interfaces/CameraProps#videoStabilizationMode) can significantly speed up the Camera initialization time.

### Pixel Format

By default, the `native` [`PixelFormat`](/docs/api#PixelFormat) is used, which is much more efficient than `rgb`.
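
Only override it when a plugin actually needs another format, e.g. (sketch):

```tsx
// 'rgb' costs an extra conversion step - keep the default 'native' unless a plugin requires it
return <Camera {...props} pixelFormat="rgb" />
```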

### Disable unneeded pipelines

Only enable [`photo`](/docs/api/interfaces/CameraProps#photo) and [`video`](/docs/api/interfaces/CameraProps#video) if needed.
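
For example, a photo-only screen could be a sketch like:

```tsx
// the video pipeline stays off entirely, which speeds up session startup
return <Camera {...props} photo={true} video={false} />
```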

### Fast Photos

If you need to take photos as fast as possible, use a [`qualityPrioritization`](/docs/api/interfaces/TakePhotoOptions#qualityprioritization) of `'speed'` to speed up the photo pipeline:

```ts
camera.current.takePhoto({
  qualityPrioritization: 'speed'
})
```

### Appropriate Format resolution

Choose formats efficiently. If your backend can only handle 1080p videos, don't select a 4k format if you have to downsize it later anyway - instead use 1080p for the Camera directly:

<Tabs
groupId="component-style"
defaultValue="hooks"
values={[
{label: 'Hooks API', value: 'hooks'},
{label: 'Imperative API', value: 'imperative'}
]}>
<TabItem value="hooks">

```ts
const format = useCameraFormat(device, [
  { videoResolution: { width: 1920, height: 1080 } }
])
```

</TabItem>
<TabItem value="imperative">

```ts
const format = getCameraFormat(device, [
  { videoResolution: { width: 1920, height: 1080 } }
])
```

</TabItem>
</Tabs>

### Appropriate Format FPS

Same as with format resolutions, also record at the frame rate you expect. Setting your frame rate higher can use more memory and heat up the battery.
If your backend can only handle 30 FPS, there is no need to record at 60 FPS, instead set the Camera's [`fps`](/docs/api/interfaces/CameraProps#fps) to 30:

```jsx
return <Camera {...props} fps={30} />
```

<br />

#### 🚀 Next section: [Camera Errors](errors)
@ -44,11 +44,11 @@ expo install react-native-vision-camera

VisionCamera requires **iOS 12 or higher**, and **Android-SDK version 26 or higher**. See [Troubleshooting](/docs/guides/troubleshooting) if you're having installation issues.

> **(Optional)** If you want to use [**Frame Processors**](/docs/guides/frame-processors), you need to install [**react-native-worklets-core**](https://github.com/margelo/react-native-worklets-core) 1.0.0 or higher.
> **(Optional)** If you want to use [**Frame Processors**](/docs/guides/frame-processors), you need to install [**react-native-worklets-core**](https://github.com/margelo/react-native-worklets-core) 0.2.0 or higher.

## Updating manifests

To use a Camera or Microphone you must first specify that your app requires camera and microphone permissions.
To use the Camera or Microphone you must first specify that your app requires camera and microphone permissions.

<Tabs
groupId="environment"
|
|||||||
|
|
||||||
## Getting/Requesting Permissions
|
## Getting/Requesting Permissions
|
||||||
|
|
||||||
VisionCamera also provides functions to easily get and request Microphone and Camera permissions.
|
Next, ask the user for Camera or Microphone permissions using the Permissions API:
|
||||||
|
|
||||||
|
|
||||||
|
<Tabs
|
||||||
|
groupId="component-style"
|
||||||
|
defaultValue="hooks"
|
||||||
|
values={[
|
||||||
|
{label: 'Hooks API', value: 'hooks'},
|
||||||
|
{label: 'Imperative API', value: 'imperative'}
|
||||||
|
]}>
|
||||||
|
<TabItem value="hooks">
|
||||||
|
|
||||||
|
Simply use the `useCameraPermission` or `useMicrophonePermission` hook:
|
||||||
|
|
||||||
|
```ts
|
||||||
|
const { hasPermission, requestPermission } = useCameraPermission()
|
||||||
|
```
|
||||||
|
|
||||||
|
And if you want to use the microphone:
|
||||||
|
|
||||||
|
```ts
|
||||||
|
const { hasPermission, requestPermission } = useMicrophonePermission()
|
||||||
|
```
|
||||||
|
|
||||||
|
There could be three states to this:
|
||||||
|
|
||||||
|
1. First time opening the app, `hasPermission` is false. Call `requestPermission()` now.
|
||||||
|
2. User already granted permission, `hasPermission` is true. Continue with [**using the `<Camera>` view**](#use-the-camera-view).
|
||||||
|
3. User explicitly denied permission, `hasPermission` is false and `requestPermission()` will return false. Tell the user that he needs to grant Camera Permission, potentially by using the [`Linking` API](https://reactnative.dev/docs/linking#opensettings) to open the App Settings.
|
||||||
|

</TabItem>
<TabItem value="imperative">

### Getting Permissions

@ -141,7 +173,7 @@ A permission status can have the following values:
* `granted`: Your app is authorized to use said permission. Continue with [**using the `<Camera>` view**](#use-the-camera-view).
* `not-determined`: Your app has not yet requested permission from the user. [Continue by calling the **request** functions.](#requesting-permissions)
* `denied`: Your app has already requested permissions from the user, but was explicitly denied. You cannot use the **request** functions again, but you can use the [`Linking` API](https://reactnative.dev/docs/linking#opensettings) to redirect the user to the Settings App where they can manually grant the permission.
* `restricted`: (iOS only) Your app cannot use the Camera or Microphone because that functionality has been restricted, possibly due to active restrictions such as parental controls being in place.
* `restricted`: Your app cannot use the Camera or Microphone because that functionality has been restricted, possibly due to active restrictions such as parental controls being in place.

### Requesting Permissions

@ -160,18 +192,20 @@ The permission request status can have the following values:

* `granted`: Your app is authorized to use said permission. Continue with [**using the `<Camera>` view**](#use-the-camera-view).
* `denied`: The user explicitly denied the permission request alert. You cannot use the **request** functions again, but you can use the [`Linking` API](https://reactnative.dev/docs/linking#opensettings) to redirect the user to the Settings App where they can manually grant the permission.
* `restricted`: (iOS only) Your app cannot use the Camera or Microphone because that functionality has been restricted, possibly due to active restrictions such as parental controls being in place.
* `restricted`: Your app cannot use the Camera or Microphone because that functionality has been restricted, possibly due to active restrictions such as parental controls being in place.

</TabItem>
</Tabs>

## Use the `<Camera>` view

If your app has permission to use the Camera and Microphone, simply use the [`useCameraDevices(...)`](/docs/api#usecameradevices) hook to get a Camera device (see [Camera Devices](/docs/guides/devices)) and mount the `<Camera>` view:
If your app has permission to use the Camera and Microphone, simply use the [`useCameraDevice(...)`](/docs/api#usecameradevice) hook to get a Camera device (see [Camera Devices](/docs/guides/devices)) and mount the `<Camera>` view:

```tsx
function App() {
  const devices = useCameraDevices()
  const device = useCameraDevice('back')
  const device = devices.back

  if (device == null) return <LoadingView />
  if (device == null) return <NoCameraDeviceError />
  return (
    <Camera
      style={StyleSheet.absoluteFill}
@ -1,67 +0,0 @@
---
id: skia-frame-processors
title: Drawing onto Frame Processors
sidebar_label: Drawing onto Frame Processors
---

import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
import useBaseUrl from '@docusaurus/useBaseUrl';

<div>
<svg xmlns="http://www.w3.org/2000/svg" width="283" height="535" style={{ float: 'right' }}>
<image href={useBaseUrl("img/frame-processors.gif")} x="18" y="33" width="247" height="469" />
<image href={useBaseUrl("img/frame.png")} width="283" height="535" />
</svg>
</div>

### Drawing onto Frame Processors

It is technically pretty difficult to draw onto a Camera Frame and have it rendered into the resulting photo or video in realtime, but I have built a working proof of concept of that using Metal/OpenGL and Skia straight in VisionCamera.

This allows you to draw onto a Frame using simple JavaScript APIs:

```ts
const frameProcessor = useSkiaFrameProcessor((frame) => {
  'worklet'

  // Detect faces using GPU-accelerated ML API
  const faces = detectFaces()

  // Draw faces using GPU-accelerated Skia drawing API
  for (const face of faces) {
    const rect = Skia.XYWHRect(face.x, face.y, face.width, face.height)
    const paint = Skia.Paint()
    paint.setColor(Skia.Color('red'))
    frame.drawRect(rect, paint)
  }
}, [])
```

..or even apply color-filters in realtime:

```ts
const invertColorsFilter = Skia.RuntimeEffect.Make(`
  uniform shader image;
  half4 main(vec2 pos) {
    vec4 color = image.eval(pos);
    return vec4((1.0 - color).rgb, 1.0);
  }
`)
const paint = Skia.Paint(invertColorsFilter)

const frameProcessor = useSkiaFrameProcessor((frame) => {
  'worklet'

  // Draw frame using Skia Shader to invert colors
  frame.render(paint)
}, [])
```

While this API was originally planned for V3, I decided to remove it again because VisionCamera got insanely complex with that code integrated (custom Metal/OpenGL rendering pipeline with intermediate steps and tons of build setup) and I wanted to keep VisionCamera lean so other users aren't affected by build issues caused by Skia or those GPU APIs. See [this PR for more information](https://github.com/mrousavy/react-native-vision-camera/pull/1740).

In my agency, [Margelo](https://margelo.io), we have worked a lot with 2D and 3D graphics and Camera realtime processing (see the Snapchat-style mask filter on our website for example - that is running in VisionCamera/React Native!), if you need this feature [reach out to us](https://margelo.io#contact) and we'll build a customized/tailored solution for your company! :)

<br />

#### 🚀 Next section: [Zooming](/docs/guides/zooming) (or [creating a Frame Processor Plugin](/docs/guides/frame-processors-plugins-overview))
79 docs/docs/guides/STABILIZATION.mdx Normal file
@ -0,0 +1,79 @@
---
id: stabilization
title: Video Stabilization
sidebar_label: Video Stabilization
---

import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
import useBaseUrl from '@docusaurus/useBaseUrl';

## What is Video Stabilization?

Video Stabilization is an algorithm that stabilizes the recorded video to smoothen any jitters, shaking, or abrupt movements from the user's camera movement.

<div align="center">
  <img src="/img/action_mode_demo.gif" width="65%" />
</div>

There are multiple different approaches to Video Stabilization, either software- or hardware-based. Video Stabilization always crops the image to a smaller view area so it has room to shift the image around, so expect a "zoomed-in" effect. Also, since Video Stabilization is a complex algorithm, enabling it will increase the time the Camera takes to initialize.

### Software Based Video Stabilization

A software-based Video Stabilization mode uses CPU- or GPU-based algorithms that keep track of the camera movement over time by using multiple past frames to compare the change in pixels.

### Hardware Based Video Stabilization

Hardware-based Video Stabilization algorithms work with the gyroscope sensor on the device and an actual moving part in the Camera lens to immediately cancel out any abrupt movements like jitters or shaking of the Camera.

## Using Video Stabilization

To use Video Stabilization, you need to select a format (see ["Camera Formats"](formats)) that supports the given Video Stabilization mode:

<Tabs
  groupId="component-style"
  defaultValue="hooks"
  values={[
    {label: 'Hooks API', value: 'hooks'},
    {label: 'Imperative API', value: 'imperative'}
  ]}>
<TabItem value="hooks">

```ts
const format = useCameraFormat(device, [
  { videoStabilizationMode: 'cinematic-extended' }
])
```

</TabItem>
<TabItem value="imperative">

```ts
const format = getCameraFormat(device, [
  { videoStabilizationMode: 'cinematic-extended' }
])
```

</TabItem>
</Tabs>

Then, pass the `format` to the Camera and set the `videoStabilizationMode` prop if it is supported:

```tsx
const format = ...
const supportsVideoStabilization = format.videoStabilizationModes.includes('cinematic-extended')

return (
  <Camera
    {...props}
    format={format}
    videoStabilizationMode={supportsVideoStabilization ? 'cinematic-extended' : 'off'}
  />
)
```

Now, the video pipeline will stabilize frames over time.
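Since the format you end up with might not support `'cinematic-extended'` at all, it can help to fall back to the best mode the format actually offers. A minimal sketch of such a fallback - the `pickStabilizationMode` helper is hypothetical, not part of the VisionCamera API, and assumes both types are exported from the package root:

```ts
import type { CameraDeviceFormat, VideoStabilizationMode } from 'react-native-vision-camera'

// Hypothetical helper: pick the first mode (in preference order) that the format supports.
function pickStabilizationMode(format: CameraDeviceFormat): VideoStabilizationMode {
  const preferred: VideoStabilizationMode[] = ['cinematic-extended', 'cinematic', 'standard']
  return preferred.find((m) => format.videoStabilizationModes.includes(m)) ?? 'off'
}
```

You would then pass `videoStabilizationMode={pickStabilizationMode(format)}` instead of hard-coding a single mode.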

<br />

#### 🚀 Next section: [Performance](performance)
@@ -4,15 +4,26 @@ title: Troubleshooting
 sidebar_label: Troubleshooting
 ---
 
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
 import useBaseUrl from '@docusaurus/useBaseUrl';
 
 <div>
   <img align="right" width="283" src={useBaseUrl("img/11_back.png")} />
 </div>
 
-Before opening an issue, make sure you try the following:
+## Steps to try
 
-## iOS
+If you're experiencing build issues or runtime issues in VisionCamera, make sure you try the following before opening an issue:
+
+<Tabs
+  groupId="platform"
+  defaultValue="ios"
+  values={[
+    {label: 'iOS', value: 'ios'},
+    {label: 'Android', value: 'android'}
+  ]}>
+<TabItem value="ios">
 
 ### Build Issues
 
@@ -52,10 +63,12 @@ Before opening an issue, make sure you try the following:
    ```
 5. Investigate the camera devices this phone has and make sure you're using a valid one. Look for properties such as `pixelFormats`, `id`, and `hardwareLevel`.
    ```tsx
-   Camera.getAvailableCameraDevices().then((d) => console.log(JSON.stringify(d, null, 2)))
+   const devices = Camera.getAvailableCameraDevices()
+   console.log(JSON.stringify(devices, null, 2))
    ```
 
-## Android
+</TabItem>
+<TabItem value="android">
 
 ### Build Issues
 
@@ -88,7 +101,14 @@ Before opening an issue, make sure you try the following:
 
 ### Runtime Issues
 
-1. Check the logs in Android Studio/Logcat to find out more. In Android Studio, go to **View** > **Tool Windows** > **Logcat** (<kbd>⌘</kbd>+<kbd>6</kbd>) or run `adb logcat` in Terminal.
+1. Check the logs in Android Studio/Logcat to find out more. In Android Studio, go to **View** > **Tool Windows** > **Logcat** (<kbd>⌘</kbd>+<kbd>6</kbd>) or run `adb logcat` in Terminal. Android Logcat logs look like this:
+   ```logcat
+   09:03:46 I ReactNativeJS: Running "App" with {"rootTag":11}
+   09:03:47 I VisionCamera: Loading native C++ library...
+   09:03:47 I VisionCamera: Installing JSI bindings...
+   09:03:47 I VisionCamera: Finished installing JSI bindings!
+   ...
+   ```
 2. If a camera device is not being returned by [`Camera.getAvailableCameraDevices()`](/docs/api/classes/Camera#getavailablecameradevices), make sure it is a Camera2 compatible device. See [this section in the Android docs](https://developer.android.com/reference/android/hardware/camera2/CameraDevice#reprocessing) for more information.
 3. If your Frame Processor is not running, make sure you check the native Android Studio/Logcat logs. There is useful information about the Frame Processor Runtime that will tell you if something goes wrong.
 4. If your Frame Processor is not running, make sure you are not using a remote JS debugger such as Google Chrome, since those don't work with JSI.
@@ -98,9 +118,13 @@ Before opening an issue, make sure you try the following:
    ```
 6. Investigate the camera devices this phone has and make sure you're using a valid one. Look for properties such as `pixelFormats`, `id`, and `hardwareLevel`.
    ```tsx
-   Camera.getAvailableCameraDevices().then((d) => console.log(JSON.stringify(d, null, 2)))
+   const devices = Camera.getAvailableCameraDevices()
+   console.log(JSON.stringify(devices, null, 2))
    ```
 
+</TabItem>
+</Tabs>
+
 ## Issues
 
 If nothing has helped so far, try browsing the [GitHub issues](https://github.com/mrousavy/react-native-vision-camera/issues?q=is%3Aissue). If your issue doesn't exist, [create a new one](https://github.com/mrousavy/react-native-vision-camera/issues/new/choose). Make sure to fill out the template and include as many details as possible. Also try to reproduce the issue in the [example app](https://github.com/mrousavy/react-native-vision-camera/blob/main/package/example).
@@ -13,15 +13,42 @@ import useBaseUrl from '@docusaurus/useBaseUrl';
   </svg>
 </div>
 
-The `<Camera>` component already provides a natively implemented zoom gesture which you can enable with the [`enableZoomGesture`](/docs/api/interfaces/CameraProps#enablezoomgesture) prop. This does not require any additional work, but if you want to setup a custom gesture, such as the one in Snapchat or Instagram where you move up your finger while recording, continue reading.
+## Native Zoom Gesture
 
-### Animation libraries
+The `<Camera>` component already provides a natively implemented zoom gesture which you can enable with the [`enableZoomGesture`](/docs/api/interfaces/CameraProps#enablezoomgesture) prop. If you don't need any additional logic in your zoom gesture, you can skip to the next section.
 
-While you can use any animation library to animate the `zoom` property (or use no animation library at all) it is recommended to use [react-native-reanimated](https://github.com/software-mansion/react-native-reanimated) (v2) to achieve best performance. Head over to their [Installation guide](https://docs.swmansion.com/react-native-reanimated/docs/installation) to install Reanimated if you haven't already.
+**🚀 Next section: [Focusing](focusing)**
 
-### Implementation
+If you want to set up a custom gesture, such as the one in Snapchat or Instagram where you move up your finger while recording, first understand how zoom is expressed.
 
-#### Overview
+## Min, Max and Neutral Zoom
+
+A Camera device has different minimum, maximum and neutral zoom values. Those values are expressed through the `CameraDevice`'s [`minZoom`](/docs/api/interfaces/CameraDevice#minzoom), [`maxZoom`](/docs/api/interfaces/CameraDevice#maxzoom) and [`neutralZoom`](/docs/api/interfaces/CameraDevice#neutralzoom) props, and are represented as a "scale factor". So if the `maxZoom` property of a device is `2`, the view can be enlarged to twice its size, i.e. the viewport halves.
+
+* The `minZoom` value is always `1`.
+* The `maxZoom` value can have very high values (such as `128`), but often you want to clamp this value to something realistic like `16`.
+* The `neutralZoom` value is often `1`, but can be larger than `1` for devices with "fish-eye" (ultra-wide-angle) cameras. In those cases, the user expects to be at whatever zoom value `neutralZoom` is (e.g. `2`) per default, and when they try to zoom out even more, they go to `minZoom` (`1`), which switches over to the "fish-eye" (ultra-wide-angle) camera as seen in this GIF:
+
+<div align="center">
+  <img src="/img/multi-camera.gif" width="55%" />
+</div>
+
+The Camera's `zoom` property expects values to be in the same "factor" scale as the `minZoom`, `neutralZoom` and `maxZoom` values - so if you pass `zoom={device.minZoom}` the Camera is at the minimum available zoom, whereas if you pass `zoom={device.maxZoom}` it is zoomed in to the maximum possible value. It is recommended that you start at `device.neutralZoom` and let the user manually zoom out to the fish-eye camera on demand (if available).
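To make that concrete, here is a minimal sketch (inside a component, assuming `device` comes from the `useCameraDevice` hook as elsewhere in these docs) that starts at the neutral zoom and clamps user input to a realistic range:

```tsx
const device = useCameraDevice('back')

// Clamp the maximum zoom to something realistic (device.maxZoom can be 128).
const minZoom = device?.minZoom ?? 1
const maxZoom = Math.min(device?.maxZoom ?? 1, 16)

// Start at neutralZoom (e.g. 2 on devices with an ultra-wide "fish-eye" camera) ...
const [zoom, setZoom] = useState(device?.neutralZoom ?? 1)

// ... and clamp any user-driven zoom change into [minZoom, maxZoom]:
const onZoomChange = (z: number) => setZoom(Math.min(Math.max(z, minZoom), maxZoom))
```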
+
+## Logarithmic scale
+
+A Camera's `zoom` property is represented in a **logarithmic scale**. That means increasing from `1` to `2` will appear to be a much larger offset than increasing from `127` to `128`. If you want to implement a zoom gesture (`<PinchGestureHandler>`, `<PanGestureHandler>`), try to flatten the `zoom` property to a **linear scale** by raising it **exponentially** (`zoom.value ** 2`).
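For example, a linear gesture progress value can be mapped onto the logarithmic zoom range with Reanimated's `interpolate` (which the example app in this PR already imports), squaring the input to flatten the response. A sketch - `gestureProgress` is an assumed 0..1 value coming from your gesture handler, and `maxZoom` the clamped maximum from above:

```ts
import { interpolate, Extrapolate } from 'react-native-reanimated'

// Square the linear 0..1 gesture progress so equal finger movement
// produces a perceptually equal zoom change across the whole range.
const zoomFactor = interpolate(
  gestureProgress ** 2,
  [0, 1],
  [device.minZoom, maxZoom],
  Extrapolate.CLAMP
)
```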
+
+## Pinch-to-zoom
+
+The above example only demonstrates how to animate the `zoom` property. To actually implement pinch-to-zoom or pan-to-zoom, take a look at the [VisionCamera example app](https://github.com/mrousavy/react-native-vision-camera/tree/main/package/example); the pinch-to-zoom gesture can be found [here](https://github.com/mrousavy/react-native-vision-camera/blob/main/package/example/src/views/CaptureButton.tsx#L189-L208), and the pan-to-zoom gesture can be found [here](https://github.com/mrousavy/react-native-vision-camera/blob/d8551792e97eaa6fa768f54059ffce054bf748d9/example/src/views/CaptureButton.tsx#L185-L205). They implement a real-world use-case, where the maximum zoom value is clamped to a realistic value, and the zoom responds very gracefully by using a logarithmic scale.
+
+## Implementation (Reanimated)
+
+While you can use any animation library to animate the `zoom` property (or use no animation library at all), it is recommended to use [react-native-reanimated](https://github.com/software-mansion/react-native-reanimated) to achieve the best performance. Head over to their [Installation guide](https://docs.swmansion.com/react-native-reanimated/docs/installation) to install Reanimated if you haven't already.
+
+### Overview
 
 1. Make the Camera View animatable using `createAnimatedComponent`
 2. Make the Camera's `zoom` property animatable using `addWhitelistedNativeProps`
@@ -29,7 +56,7 @@ While you can use any animation library to animate the `zoom` property (or use n
 4. Use [`useAnimatedProps`](https://docs.swmansion.com/react-native-reanimated/docs/api/useAnimatedProps) to map the zoom SharedValue to the zoom property.
 5. We apply the animated props to the `ReanimatedCamera` component's `animatedProps` property.
 
-#### Code
+### Code
 
 The following example implements a button which smoothly zooms to a random value using [react-native-reanimated](https://github.com/software-mansion/react-native-reanimated):
 
@@ -38,28 +65,28 @@ import Reanimated, {
   useAnimatedProps,
   useSharedValue,
   withSpring,
+  addWhitelistedNativeProps
 } from "react-native-reanimated"
 
 const ReanimatedCamera = Reanimated.createAnimatedComponent(Camera)
-Reanimated.addWhitelistedNativeProps({
+addWhitelistedNativeProps({
   zoom: true,
 })
 
 export function App() {
-  const devices = useCameraDevices()
-  const device = devices.back
+  const device = useCameraDevice('back')
   const zoom = useSharedValue(0)
 
   const onRandomZoomPress = useCallback(() => {
     zoom.value = withSpring(Math.random())
-  }, [])
+  }, [zoom])
 
   const animatedProps = useAnimatedProps<Partial<CameraProps>>(
     () => ({ zoom: zoom.value }),
     [zoom]
   )
 
-  if (device == null) return <LoadingView />
+  if (device == null) return <NoCameraDeviceError />
   return (
     <>
       <ReanimatedCamera
@@ -78,28 +105,6 @@ export function App() {
 }
 ```
 
-### Min, Max and Neutral Zoom
-
-A Camera device has different minimum, maximum and neutral zoom values. Those values are expressed through the `CameraDevice`'s [`minZoom`](/docs/api/interfaces/CameraDevice#minzoom), [`maxZoom`](/docs/api/interfaces/CameraDevice#maxzoom) and [`neutralZoom`](/docs/api/interfaces/CameraDevice#neutralzoom) props, and are represented in "scale". So if the `maxZoom` property of a device is `2`, that means the view can be enlarged by twice it's zoom, aka the viewport halves.
-
-* The `minZoom` value is always `1`.
-* The `maxZoom` value can have very high values (such as `128`), but often you want to clamp this value to something realistic like `16`.
-* The `neutralZoom` value is often `1`, but can be larger than `1` for devices with "fish-eye" (ultra-wide-angle) cameras. In those cases, the user expects to be at whatever zoom value `neutralZoom` is (e.g. `2`) per default, and if he tries to zoom out even more, he goes to `minZoom` (`1`), which switches over to the "fish-eye" (ultra-wide-angle) camera as seen in this GIF:
-
-<div align="center">
-  <img src="/img/multi-camera.gif" width="45%" />
-</div>
-
-The Camera's `zoom` property expects values to be in the same "factor" scale as the `minZoom`, `neutralZoom` and `maxZoom` values - so if you pass `zoom={device.minZoom}` it is at the minimum available zoom, where as if you pass `zoom={device.maxZoom}` the maximum zoom value possible is zoomed in. It is recommended that you start at `device.neutralZoom` and let the user manually zoom out to the fish-eye camera on demand (if available).
-
-### Logarithmic scale
-
-A Camera's `zoom` property is represented in a **logarithmic scale**. That means, increasing from `1` to `2` will appear to be a much larger offset than increasing from `127` to `128`. If you want to implement a zoom gesture (`<PinchGestureHandler>`, `<PanGestureHandler>`), try to flatten the `zoom` property to a **linear scale** by raising it **exponentially**. (`zoom.value ** 2`)
-
-### Pinch-to-zoom
-
-The above example only demonstrates how to animate the `zoom` property. To actually implement pinch-to-zoom or pan-to-zoom, take a look at the [VisionCamera example app](https://github.com/mrousavy/react-native-vision-camera/tree/main/package/example), the pinch-to-zoom gesture can be found [here](https://github.com/mrousavy/react-native-vision-camera/blob/main/package/example/src/views/CaptureButton.tsx#L189-L208), and the pan-to-zoom gesture can be found [here](https://github.com/mrousavy/react-native-vision-camera/blob/d8551792e97eaa6fa768f54059ffce054bf748d9/example/src/views/CaptureButton.tsx#L185-L205). They implement a real world use-case, where the maximum zoom value is clamped to a realistic value, and the zoom responds very gracefully by using a logarithmic scale.
-
 <br />
 
 #### 🚀 Next section: [Focusing](focusing)
@@ -14,6 +14,9 @@ module.exports = {
       apiKey: '64bc77eda92b7efcb7003b56815f1113',
       indexName: 'react-native-vision-camera',
     },
+    colorMode: {
+      respectPrefersColorScheme: true
+    },
     prism: {
       additionalLanguages: ['swift', 'java', 'kotlin'],
     },
@@ -11,6 +11,7 @@ module.exports = {
       label: 'Realtime Frame Processing',
       items: [
         'guides/frame-processors',
+        'guides/frame-processors-tips',
         'guides/frame-processor-plugin-list',
         'guides/skia-frame-processors',
         {
@@ -27,6 +28,9 @@ module.exports = {
       },
       'guides/zooming',
       'guides/focusing',
+      'guides/hdr',
+      'guides/stabilization',
+      'guides/performance',
       'guides/errors',
       'guides/mocking',
       'guides/troubleshooting',
New binary files (not shown):
- docs/static/img/action_mode_demo.gif (3.0 MiB)
- docs/static/img/demo_drawing.mp4
- docs/static/img/hdr_comparison.png (622 KiB)
- docs/static/img/hdr_photo_demo.webp (107 KiB)
- docs/static/img/sdr_comparison.png (576 KiB)
@@ -43,7 +43,11 @@ class InvalidTypeScriptUnionError(unionName: String, unionValue: String) :
   CameraError("parameter", "invalid-parameter", "The given value for $unionName could not be parsed! (Received: $unionValue)")
 
 class NoCameraDeviceError :
-  CameraError("device", "no-device", "No device was set! Use `getAvailableCameraDevices()` to select a suitable Camera device.")
+  CameraError(
+    "device",
+    "no-device",
+    "No device was set! Use `useCameraDevice(..)` or `Camera.getAvailableCameraDevices()` to select a suitable Camera device."
+  )
 class PixelFormatNotSupportedError(format: String) :
   CameraError("device", "pixel-format-not-supported", "The pixelFormat $format is not supported on the given Camera Device!")
 class PixelFormatNotSupportedInVideoPipelineError(format: String) :
@@ -1,10 +1,10 @@
 import * as React from 'react';
-import { useRef, useState, useCallback } from 'react';
+import { useRef, useState, useCallback, useMemo } from 'react';
 import { StyleSheet, Text, View } from 'react-native';
 import { PinchGestureHandler, PinchGestureHandlerGestureEvent, TapGestureHandler } from 'react-native-gesture-handler';
 import { CameraRuntimeError, PhotoFile, useCameraDevice, useCameraFormat, useFrameProcessor, VideoFile } from 'react-native-vision-camera';
 import { Camera } from 'react-native-vision-camera';
-import { CONTENT_SPACING, MAX_ZOOM_FACTOR, SAFE_AREA_PADDING } from './Constants';
+import { CONTENT_SPACING, MAX_ZOOM_FACTOR, SAFE_AREA_PADDING, SCREEN_HEIGHT, SCREEN_WIDTH } from './Constants';
 import Reanimated, { Extrapolate, interpolate, useAnimatedGestureHandler, useAnimatedProps, useSharedValue } from 'react-native-reanimated';
 import { useEffect } from 'react';
 import { useIsForeground } from './hooks/useIsForeground';
@@ -49,19 +49,23 @@ export function CameraPage({ navigation }: Props): React.ReactElement {
     physicalDevices: ['ultra-wide-angle-camera', 'wide-angle-camera', 'telephoto-camera'],
   });
 
+  const [targetFps, setTargetFps] = useState(60);
+
+  const screenAspectRatio = SCREEN_HEIGHT / SCREEN_WIDTH;
   const format = useCameraFormat(device, [
-    { fps: 60 }, //
+    { fps: targetFps },
+    { videoAspectRatio: screenAspectRatio },
+    { videoResolution: 'max' },
+    { photoAspectRatio: screenAspectRatio },
+    { photoResolution: 'max' },
   ]);
 
-  //#region Memos
-  const [targetFps, setTargetFps] = useState(30);
   const fps = Math.min(format?.maxFps ?? 1, targetFps);
 
   const supportsFlash = device?.hasFlash ?? false;
   const supportsHdr = format?.supportsPhotoHDR;
-  const supports60Fps = (format?.maxFps ?? 0) >= 60;
+  const supports60Fps = useMemo(() => device?.formats.some((f) => f.maxFps >= 60), [device?.formats]);
   const canToggleNightMode = device?.supportsLowLightBoost ?? false;
-  //#endregion
 
   //#region Animated Zoom
   // This just maps the zoom factor to a percentage value.
@@ -90,9 +90,9 @@ enum DeviceError: String {
     case .configureError:
       return "Failed to lock the device for configuration."
     case .noDevice:
-      return "No device was set! Use `getAvailableCameraDevices()` to select a suitable Camera device."
+      return "No device was set! Use `useCameraDevice(..)` or `Camera.getAvailableCameraDevices()` to select a suitable Camera device."
     case .invalid:
-      return "The given Camera device was invalid. Use `getAvailableCameraDevices()` to select a suitable Camera device."
+      return "The given Camera device was invalid. Use `useCameraDevice(..)` or `Camera.getAvailableCameraDevices()` to select a suitable Camera device."
     case .flashUnavailable:
       return "The Camera Device does not have a flash unit! Make sure you select a device where `hasFlash`/`hasTorch` is true!"
     case .lowLightBoostNotSupported:
@@ -133,7 +133,7 @@ enum FormatError {
   var message: String {
     switch self {
     case .invalidFormat:
-      return "The given format was invalid. Did you check if the current device supports the given format by using `getAvailableCameraDevices(...)`?"
+      return "The given format was invalid. Did you check if the current device supports the given format in `device.formats`?"
     case let .invalidFps(fps):
      return "The given format cannot run at \(fps) FPS! Make sure your FPS is lower than `format.maxFps` but higher than `format.minFps`."
    case .invalidHdr:
@@ -19,7 +19,6 @@ RCT_EXTERN_METHOD(getMicrophonePermissionStatus : (RCTPromiseResolveBlock)resolv
 RCT_EXTERN_METHOD(requestCameraPermission : (RCTPromiseResolveBlock)resolve reject : (RCTPromiseRejectBlock)reject);
 RCT_EXTERN_METHOD(requestMicrophonePermission : (RCTPromiseResolveBlock)resolve reject : (RCTPromiseRejectBlock)reject);
 
-RCT_EXTERN__BLOCKING_SYNCHRONOUS_METHOD(getAvailableCameraDevices);
 RCT_EXTERN__BLOCKING_SYNCHRONOUS_METHOD(installFrameProcessorBindings);
 
 // Camera View Properties
@@ -37,18 +37,17 @@ type RefType = React.Component<NativeCameraViewProps> & Readonly<NativeMethods>;
  *
  * Read the [VisionCamera documentation](https://react-native-vision-camera.com/) for more information.
  *
- * The `<Camera>` component's most important (and therefore _required_) properties are:
+ * The `<Camera>` component's most important properties are:
  *
- * * {@linkcode CameraProps.device | device}: Specifies the {@linkcode CameraDevice} to use. Get a {@linkcode CameraDevice} by using the {@linkcode useCameraDevice | useCameraDevice()} hook, or manually by using the {@linkcode CameraDevices.getAvailableCameraDevices CameraDevices.getAvailableCameraDevices()} function.
+ * * {@linkcode CameraProps.device | device}: Specifies the {@linkcode CameraDevice} to use. Get a {@linkcode CameraDevice} by using the {@linkcode useCameraDevice | useCameraDevice(..)} hook, or manually by using the {@linkcode CameraDevices.getAvailableCameraDevices CameraDevices.getAvailableCameraDevices()} function.
 * * {@linkcode CameraProps.isActive | isActive}: A boolean value that specifies whether the Camera should actively stream video frames or not. This can be compared to a Video component, where `isActive` specifies whether the video is paused or not. If you fully unmount the `<Camera>` component instead of using `isActive={false}`, the Camera will take a bit longer to start again.
 *
 * @example
 * ```tsx
 * function App() {
- *   const devices = useCameraDevices('wide-angle-camera')
- *   const device = devices.back
+ *   const device = useCameraDevice('back')
 *
- *   if (device == null) return <LoadingView />
+ *   if (device == null) return <NoCameraErrorView />
 *   return (
 *     <Camera
 *       style={StyleSheet.absoluteFill}
@@ -256,7 +255,7 @@ export class Camera extends React.PureComponent<CameraProps> {
  /**
   * Get a list of all available camera devices on the current phone.
   *
-  * If you use Hooks, use the `useCameraDevices()` hook instead.
+  * If you use Hooks, use the `useCameraDevices(..)` hook instead.
   *
   * * For Camera Devices attached to the phone, it is safe to assume that this will never change.
   * * For external Camera Devices (USB cameras, Mac continuity cameras, etc.) the available Camera Devices could change over time when the external Camera device gets plugged in or plugged out, so use {@link addCameraDevicesChangedListener | addCameraDevicesChangedListener(...)} to listen for such changes.
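As a sketch of the external-device flow described in that docblock - assuming `addCameraDevicesChangedListener` is exposed as a static method on `Camera` and returns a removable subscription, which the docblock suggests but this diff does not show:

```ts
import { Camera } from 'react-native-vision-camera'

// Re-query devices whenever an external camera (USB/Continuity) is plugged in or out.
const subscription = Camera.addCameraDevicesChangedListener((devices) => {
  console.log(`Camera devices changed: ${devices.map((d) => d.id).join(', ')}`)
})

// Later, e.g. when your screen unmounts:
subscription.remove()
```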
@@ -4,9 +4,9 @@ import type { PixelFormat } from './PixelFormat';
 /**
  * Represents the camera device position.
  *
- * * `"back"`: Indicates that the device is physically located on the back of the system hardware
- * * `"front"`: Indicates that the device is physically located on the front of the system hardware
- * * `"external"`: The camera device is an external camera, and has no fixed facing relative to the device's screen.
+ * * `"back"`: Indicates that the device is physically located on the back of the phone
+ * * `"front"`: Indicates that the device is physically located on the front of the phone
+ * * `"external"`: The camera device is an external camera, and has no fixed facing relative to the phone. (e.g. USB or Continuity Cameras)
  */
 export type CameraPosition = 'front' | 'back' | 'external';
 
@@ -46,7 +46,14 @@ export type AutoFocusSystem = 'contrast-detection' | 'phase-detection' | 'none';
 export type VideoStabilizationMode = 'off' | 'standard' | 'cinematic' | 'cinematic-extended' | 'auto';
 
 /**
- * A Camera Device's video format. Do not create instances of this type yourself, only use {@linkcode Camera.getAvailableCameraDevices | Camera.getAvailableCameraDevices()}.
+ * A Camera Device's stream-configuration format.
+ *
+ * A format specifies:
+ * - Video Resolution (`videoWidth`/`videoHeight`)
+ * - Photo Resolution (`photoWidth`/`photoHeight`)
+ * - Possible FPS ranges (`fps`)
+ * - Video Stabilization Modes (`videoStabilizationModes`)
+ * - Pixel Formats (`pixelFormats`)
  */
 export interface CameraDeviceFormat {
   /**
@@ -134,7 +141,13 @@ export interface CameraDevice {
   */
  physicalDevices: PhysicalCameraDeviceType[];
  /**
-  * Specifies the physical position of this camera. (back or front)
+  * Specifies the physical position of this camera.
+  * - `back`: The Camera Device is located on the back of the phone. These devices can be used for capturing what's in front of the user.
+  * - `front`: The Camera Device is located on the front of the phone. These devices can be used for selfies or FaceTime.
+  * - `external`: The Camera Device is an external device. These devices can be either:
+  *   - USB Camera Devices (if they support the [USB Video Class (UVC) Specification](https://en.wikipedia.org/wiki/List_of_USB_video_class_devices))
+  *   - [Continuity Camera Devices](https://support.apple.com/en-us/HT213244) (e.g. your iPhone's or Mac's Camera connected through WiFi/Continuity)
+  *   - Bluetooth/WiFi Camera Devices (if they are supported in the platform-native Camera APIs; Camera2 and AVFoundation)
   */
  position: CameraPosition;
  /**
@@ -22,9 +22,9 @@ export interface CameraProps extends ViewProps {
  *
  * @example
  * ```tsx
- * const devices = useCameraDevices('wide-angle-camera')
- * const device = devices.back
+ * const device = useCameraDevice('back')
  *
+ * if (device == null) return <NoCameraErrorView />
  * return (
  *   <Camera
  *     device={device}
@@ -122,13 +122,13 @@ export interface CameraProps extends ViewProps {
  /**
   * Specify the frames per second this camera should use. Make sure the given `format` includes a frame rate range with the given `fps`.
   *
-  * Requires `format` to be set.
+  * Requires a `format` to be set that supports the given `fps`.
   */
  fps?: number;
  /**
   * Enables or disables HDR on this camera device. Make sure the given `format` supports HDR mode.
   *
-  * Requires `format` to be set.
+  * Requires a `format` to be set that supports `photoHDR`/`videoHDR`.
   */
  hdr?: boolean;
  /**
@@ -155,7 +155,7 @@ export interface CameraProps extends ViewProps {
  /**
   * Enables or disables low-light boost on this camera device. Make sure the given `format` supports low-light boost.
   *
-  * Requires `format` to be set.
+  * Requires a `format` to be set that supports `lowLightBoost`.
   */
  lowLightBoost?: boolean;
  /**
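To illustrate the `fps`/`format` contract above, a minimal sketch that requests a 60 FPS format and clamps the `fps` prop to what the chosen format can actually deliver (both `useCameraFormat` and `maxFps` appear elsewhere in this PR):

```tsx
const format = useCameraFormat(device, [{ fps: 60 }])
// If no 60 FPS format exists, fall back to the format's real maximum.
const fps = Math.min(format?.maxFps ?? 30, 60)

return <Camera {...props} device={device} format={format} fps={fps} />
```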
@@ -2,11 +2,21 @@ import type { Orientation } from './Orientation';
 import { PixelFormat } from './PixelFormat';
 
 /**
- * A single frame, as seen by the camera.
+ * A single frame, as seen by the camera. This is backed by a C++ HostObject wrapping the native GPU buffer.
+ * At a 4k resolution, a Frame can be 1.5MB in size.
+ *
+ * @example
+ * ```ts
+ * const frameProcessor = useFrameProcessor((frame) => {
+ *   'worklet'
+ *   console.log(`Frame: ${frame.width}x${frame.height} (${frame.pixelFormat})`)
+ * }, [])
+ * ```
 */
 export interface Frame {
   /**
-   * Whether the underlying buffer is still valid or not. The buffer will be released after the frame processor returns, or `close()` is called.
+   * Whether the underlying buffer is still valid or not.
+   * A Frame is valid as long as your Frame Processor (or a `runAsync(..)` operation) is still running.
   */
  isValid: boolean;
  /**
@@ -37,7 +47,7 @@ export interface Frame {
   * Represents the orientation of the Frame.
   *
   * Some ML Models are trained for specific orientations, so they need to be taken into
-  * consideration when running a frame processor. See also: `isMirrored`
+  * consideration when running a frame processor. See also: {@linkcode isMirrored}
   */
  orientation: Orientation;
  /**
@@ -47,8 +57,21 @@ export interface Frame {
 
  /**
   * Get the underlying data of the Frame as a uint8 array buffer.
+  * The format of the buffer depends on the Frame's {@linkcode pixelFormat}.
   *
   * Note that Frames are allocated on the GPU, so calling `toArrayBuffer()` will copy from the GPU to the CPU.
+  *
+  * @example
+  * ```ts
+  * const frameProcessor = useFrameProcessor((frame) => {
+  *   'worklet'
+  *
+  *   if (frame.pixelFormat === 'rgb') {
+  *     const data = frame.toArrayBuffer()
+  *     console.log(`Pixel at 0,0: RGB(${data[0]}, ${data[1]}, ${data[2]})`)
+  *   }
+  * }, [])
+  * ```
   */
  toArrayBuffer(): Uint8Array;
  /**
@@ -61,17 +84,20 @@ export interface Frame {
  toString(): string;
 }
 
+/** @internal */
 export interface FrameInternal extends Frame {
  /**
   * Increment the Frame Buffer ref-count by one.
   *
   * This is a private API, do not use this.
+  * @internal
   */
  incrementRefCount(): void;
  /**
   * Decrement the Frame Buffer ref-count by one.
   *
   * This is a private API, do not use this.
+  * @internal
   */
  decrementRefCount(): void;
 }
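Since a Frame is only valid while the Frame Processor (or a `runAsync(..)` body) is running, long-running work should be wrapped in `runAsync` instead of holding onto the Frame manually. A short sketch, assuming the V3 `runAsync` worklet helper is exported from the package root:

```ts
import { useFrameProcessor, runAsync } from 'react-native-vision-camera'

const frameProcessor = useFrameProcessor((frame) => {
  'worklet'
  runAsync(frame, () => {
    'worklet'
    // The Frame stays valid inside this async worklet even after the
    // main Frame Processor has returned.
    console.log(`Asynchronously processing a ${frame.width}x${frame.height} frame...`)
  })
}, [])
```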
@@ -25,7 +25,7 @@ export interface TakePhotoOptions {
   */
  enableAutoRedEyeReduction?: boolean;
  /**
-  * Indicates whether still image stabilization will be employed when capturing the photo
+  * Indicates whether still image stabilization will be enabled when capturing the photo
   *
   * @default false
   */
@@ -24,39 +24,57 @@ export interface DeviceFilter {
 }
 
 /**
- * Get the best matching Camera device that satisfies your requirements using a sorting filter.
- * @param devices All available Camera Devices this function will use for filtering. To get devices, use `Camera.getAvailableCameraDevices()`.
- * @param filter The filter you want to use. The device that matches your filter the closest will be returned.
- * @returns The device that matches your filter the closest.
+ * Get the Camera device that best satisfies your requirements using a sorting filter.
+ * @param devices All available Camera Devices this function will use for filtering. To get devices, use `Camera.getAvailableCameraDevices()`.
+ * @param position The position of the Camera device relative to the phone.
+ * @param filter The filter you want to use. The Camera device that matches your filter the closest will be returned.
+ * @returns The Camera device that matches your filter the closest; throws if no Camera device exists at the given {@linkcode position}.
+ * @example
+ * ```ts
+ * const devices = Camera.getAvailableCameraDevices()
+ * const device = getCameraDevice(devices, 'back', {
+ *   physicalDevices: ['wide-angle-camera']
+ * })
+ * ```
 */
 export function getCameraDevice(devices: CameraDevice[], position: CameraPosition, filter: DeviceFilter = {}): CameraDevice {
+  const explicitlyWantsNonWideAngle = filter.physicalDevices != null && !filter.physicalDevices.includes('wide-angle-camera');
+
   const filtered = devices.filter((d) => d.position === position);
-  const sortedDevices = filtered.sort((left, right) => {
+
+  let bestDevice = filtered[0];
+  if (bestDevice == null) throw new CameraRuntimeError('device/invalid-device', 'No Camera Device could be found!');
+
+  // Compare each device (only devices at the requested position) against the
+  // current best device using a point scoring system
+  for (const device of filtered) {
     let leftPoints = 0;
     let rightPoints = 0;
 
     // prefer higher hardware-level
-    if (left.hardwareLevel === 'full') leftPoints += 4;
-    if (right.hardwareLevel === 'full') rightPoints += 4;
+    if (bestDevice.hardwareLevel === 'full') leftPoints += 4;
+    if (device.hardwareLevel === 'full') rightPoints += 4;
+
+    if (!explicitlyWantsNonWideAngle) {
+      // prefer wide-angle-camera as a default
+      if (bestDevice.physicalDevices.includes('wide-angle-camera')) leftPoints += 1;
+      if (device.physicalDevices.includes('wide-angle-camera')) rightPoints += 1;
+    }
 
     // compare devices. two possible scenarios:
     // 1. user wants all cameras ([ultra-wide, wide, tele]) to zoom. prefer those devices that have all 3 cameras.
     // 2. user wants only one ([wide]) for faster performance. prefer those devices that only have one camera, if they have more, we rank them lower.
     if (filter.physicalDevices != null) {
-      for (const device of left.physicalDevices) {
-        if (filter.physicalDevices.includes(device)) leftPoints += 1;
+      for (const d of bestDevice.physicalDevices) {
+        if (filter.physicalDevices.includes(d)) leftPoints += 1;
         else leftPoints -= 1;
       }
-      for (const device of right.physicalDevices) {
-        if (filter.physicalDevices.includes(device)) rightPoints += 1;
+      for (const d of device.physicalDevices) {
+        if (filter.physicalDevices.includes(d)) rightPoints += 1;
         else rightPoints -= 1;
       }
     }
 
-    return leftPoints - rightPoints;
-  });
+    if (rightPoints > leftPoints) bestDevice = device;
+  }
 
-  const device = sortedDevices[0];
-  if (device == null) throw new CameraRuntimeError('device/invalid-device', 'No Camera Device could be found!');
-  return device;
+  return bestDevice;
 }
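The hooks equivalent of the `getCameraDevice` example above, as used elsewhere in this PR (e.g. in `CameraPage.tsx`):

```ts
const device = useCameraDevice('back', {
  physicalDevices: ['ultra-wide-angle-camera', 'wide-angle-camera', 'telephoto-camera'],
})
```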
@@ -12,12 +12,12 @@ export interface FormatFilter {
   * The target resolution of the video (and frame processor) output pipeline.
   * If no format supports the given resolution, the format closest to this value will be used.
   */
-  videoResolution?: Size;
+  videoResolution?: Size | 'max';
  /**
   * The target resolution of the photo output pipeline.
   * If no format supports the given resolution, the format closest to this value will be used.
   */
-  photoResolution?: Size;
+  photoResolution?: Size | 'max';
  /**
   * The target aspect ratio of the video (and preview) output, expressed as a factor: `width / height`.
   *
@@ -58,6 +58,14 @@ export interface FormatFilter {
   * If no format supports the target pixel format, the best other matching format will be used.
   */
  pixelFormat?: PixelFormat;
+  /**
+   * Whether you want to find a format that supports Photo HDR.
+   */
+  photoHDR?: boolean;
+  /**
+   * Whether you want to find a format that supports Video HDR.
+   */
+  videoHDR?: boolean;
 }
 
 type FilterWithPriority<T> = {
|
|||||||
|
|
||||||
/**
|
/**
|
||||||
* Get the best matching Camera format for the given device that satisfies your requirements using a sorting filter. By default, formats are sorted by highest to lowest resolution.
|
* Get the best matching Camera format for the given device that satisfies your requirements using a sorting filter. By default, formats are sorted by highest to lowest resolution.
|
||||||
|
*
|
||||||
|
* The {@linkcode filters | filters} are ranked by priority, from highest to lowest.
|
||||||
|
* This means the first item you pass will have a higher priority than the second, and so on.
|
||||||
|
*
|
||||||
* @param device The Camera Device you're currently using
|
* @param device The Camera Device you're currently using
|
||||||
* @param filters The filter you want to use. The format that matches your filter the closest will be returned. The filter is ranked by priority, descending.
|
* @param filter The filter you want to use. The format that matches your filter the closest will be returned
|
||||||
* @returns The format that matches your filter the closest.
|
* @returns The format that matches your filter the closest.
|
||||||
|
*
|
||||||
|
* @example
|
||||||
|
* ```ts
|
||||||
|
* const format = getCameraFormat(device, [
|
||||||
|
* { videoResolution: { width: 3048, height: 2160 } },
|
||||||
|
* { fps: 60 }
|
||||||
|
* ])
|
||||||
|
* ```
|
||||||
*/
|
*/
|
||||||
export function getCameraFormat(device: CameraDevice, filters: FormatFilter[]): CameraDeviceFormat {
|
export function getCameraFormat(device: CameraDevice, filters: FormatFilter[]): CameraDeviceFormat {
|
||||||
// Combine filters into a single filter map for constant-time lookup
|
// Combine filters into a single filter map for constant-time lookup
|
||||||
const filter = filtersToFilterMap(filters);
|
const filter = filtersToFilterMap(filters);
|
||||||
|
|
||||||
// Sort list because we will pick first element
|
let bestFormat = device.formats[0];
|
||||||
// TODO: Use reduce instead of sort?
|
if (bestFormat == null)
|
||||||
const copy = [...device.formats];
|
throw new CameraRuntimeError('device/invalid-device', `The given Camera Device (${device.id}) does not have any formats!`);
|
||||||
const sortedFormats = copy.sort((left, right) => {
|
|
||||||
|
// Compare each format using a point scoring system
|
||||||
|
for (const format of device.formats) {
|
||||||
let leftPoints = 0;
|
let leftPoints = 0;
|
||||||
let rightPoints = 0;
|
let rightPoints = 0;
|
||||||
|
|
||||||
const leftVideoResolution = left.videoWidth * left.videoHeight;
|
const leftVideoResolution = bestFormat.videoWidth * bestFormat.videoHeight;
|
||||||
const rightVideoResolution = right.videoWidth * right.videoHeight;
|
const rightVideoResolution = format.videoWidth * format.videoHeight;
|
||||||
if (filter.videoResolution != null) {
|
if (filter.videoResolution != null) {
|
||||||
|
if (filter.videoResolution.target === 'max') {
|
||||||
|
// We just want the maximum resolution
|
||||||
|
if (leftVideoResolution > rightVideoResolution) leftPoints += filter.videoResolution.priority;
|
||||||
|
if (rightVideoResolution > leftVideoResolution) rightPoints += filter.videoResolution.priority;
|
||||||
|
} else {
|
||||||
// Find video resolution closest to the filter (ignoring orientation)
|
// Find video resolution closest to the filter (ignoring orientation)
|
||||||
const targetResolution = filter.videoResolution.target.width * filter.videoResolution.target.height;
|
const targetResolution = filter.videoResolution.target.width * filter.videoResolution.target.height;
|
||||||
const leftDiff = Math.abs(leftVideoResolution - targetResolution);
|
const leftDiff = Math.abs(leftVideoResolution - targetResolution);
|
||||||
const rightDiff = Math.abs(rightVideoResolution - targetResolution);
|
const rightDiff = Math.abs(rightVideoResolution - targetResolution);
|
||||||
if (leftDiff < rightDiff) leftPoints += filter.videoResolution.priority;
|
if (leftDiff < rightDiff) leftPoints += filter.videoResolution.priority;
|
||||||
else if (rightDiff < leftDiff) rightPoints += filter.videoResolution.priority;
|
if (rightDiff < leftDiff) rightPoints += filter.videoResolution.priority;
|
||||||
} else {
|
}
|
||||||
// No filter is set, so just prefer higher resolutions
|
|
||||||
if (leftVideoResolution > rightVideoResolution) leftPoints++;
|
|
||||||
else if (rightVideoResolution > leftVideoResolution) rightPoints++;
|
|
||||||
}
|
}
|
||||||
|
|
||||||
const leftPhotoResolution = left.photoWidth * left.photoHeight;
|
const leftPhotoResolution = bestFormat.photoWidth * bestFormat.photoHeight;
|
||||||
const rightPhotoResolution = right.photoWidth * right.photoHeight;
|
const rightPhotoResolution = format.photoWidth * format.photoHeight;
|
||||||
if (filter.photoResolution != null) {
|
if (filter.photoResolution != null) {
|
||||||
|
if (filter.photoResolution.target === 'max') {
|
||||||
|
// We just want the maximum resolution
|
||||||
|
if (leftPhotoResolution > rightPhotoResolution) leftPoints += filter.photoResolution.priority;
|
||||||
|
if (rightPhotoResolution > leftPhotoResolution) rightPoints += filter.photoResolution.priority;
|
||||||
|
} else {
|
||||||
// Find closest photo resolution to the filter (ignoring orientation)
|
// Find closest photo resolution to the filter (ignoring orientation)
|
||||||
const targetResolution = filter.photoResolution.target.width * filter.photoResolution.target.height;
|
const targetResolution = filter.photoResolution.target.width * filter.photoResolution.target.height;
|
||||||
const leftDiff = Math.abs(leftPhotoResolution - targetResolution);
|
const leftDiff = Math.abs(leftPhotoResolution - targetResolution);
|
||||||
const rightDiff = Math.abs(rightPhotoResolution - targetResolution);
|
const rightDiff = Math.abs(rightPhotoResolution - targetResolution);
|
||||||
if (leftDiff < rightDiff) leftPoints += filter.photoResolution.priority;
|
if (leftDiff < rightDiff) leftPoints += filter.photoResolution.priority;
|
||||||
else if (rightDiff < leftDiff) rightPoints += filter.photoResolution.priority;
|
if (rightDiff < leftDiff) rightPoints += filter.photoResolution.priority;
|
||||||
} else {
|
}
|
||||||
// No filter is set, so just prefer higher resolutions
|
|
||||||
if (leftPhotoResolution > rightPhotoResolution) leftPoints++;
|
|
||||||
else if (rightPhotoResolution > leftPhotoResolution) rightPoints++;
|
|
||||||
}
|
}
|
||||||
|
|
||||||
// Find closest aspect ratio (video)
|
// Find closest aspect ratio (video)
|
||||||
if (filter.videoAspectRatio != null) {
|
if (filter.videoAspectRatio != null) {
|
||||||
const leftAspect = left.videoWidth / right.videoHeight;
|
const leftAspect = bestFormat.videoWidth / bestFormat.videoHeight;
|
||||||
const rightAspect = right.videoWidth / right.videoHeight;
|
const rightAspect = format.videoWidth / format.videoHeight;
|
||||||
const leftDiff = Math.abs(leftAspect - filter.videoAspectRatio.target);
|
const leftDiff = Math.abs(leftAspect - filter.videoAspectRatio.target);
|
||||||
const rightDiff = Math.abs(rightAspect - filter.videoAspectRatio.target);
|
const rightDiff = Math.abs(rightAspect - filter.videoAspectRatio.target);
|
||||||
if (leftDiff < rightDiff) leftPoints += filter.videoAspectRatio.priority;
|
if (leftDiff < rightDiff) leftPoints += filter.videoAspectRatio.priority;
|
||||||
else if (rightDiff < leftDiff) rightPoints += filter.videoAspectRatio.priority;
|
if (rightDiff < leftDiff) rightPoints += filter.videoAspectRatio.priority;
|
||||||
}
|
}
|
||||||
|
|
||||||
// Find closest aspect ratio (photo)
|
// Find closest aspect ratio (photo)
|
||||||
if (filter.photoAspectRatio != null) {
|
if (filter.photoAspectRatio != null) {
|
||||||
const leftAspect = left.photoWidth / right.photoHeight;
|
const leftAspect = bestFormat.photoWidth / bestFormat.photoHeight;
|
||||||
const rightAspect = right.photoWidth / right.photoHeight;
|
const rightAspect = format.photoWidth / format.photoHeight;
|
||||||
const leftDiff = Math.abs(leftAspect - filter.photoAspectRatio.target);
|
const leftDiff = Math.abs(leftAspect - filter.photoAspectRatio.target);
|
||||||
const rightDiff = Math.abs(rightAspect - filter.photoAspectRatio.target);
|
const rightDiff = Math.abs(rightAspect - filter.photoAspectRatio.target);
|
||||||
if (leftDiff < rightDiff) leftPoints += filter.photoAspectRatio.priority;
|
if (leftDiff < rightDiff) leftPoints += filter.photoAspectRatio.priority;
|
||||||
else if (rightDiff < leftDiff) rightPoints += filter.photoAspectRatio.priority;
|
if (rightDiff < leftDiff) rightPoints += filter.photoAspectRatio.priority;
|
||||||
}
|
}
|
||||||
|
|
||||||
// Find closest max FPS
|
// Find closest max FPS
|
||||||
if (filter.fps != null) {
|
if (filter.fps != null) {
|
||||||
const leftDiff = Math.abs(left.maxFps - filter.fps.target);
|
if (bestFormat.maxFps >= filter.fps.target) leftPoints += filter.fps.priority;
|
||||||
const rightDiff = Math.abs(right.maxFps - filter.fps.target);
|
if (format.maxFps >= filter.fps.target) rightPoints += filter.fps.priority;
|
||||||
if (leftDiff < rightDiff) leftPoints += filter.fps.priority;
|
|
||||||
else if (rightDiff < leftDiff) rightPoints += filter.fps.priority;
|
|
||||||
}
|
}
|
||||||
|
|
||||||
// Find video stabilization mode
|
// Find video stabilization mode
|
||||||
if (filter.videoStabilizationMode != null) {
|
if (filter.videoStabilizationMode != null) {
|
||||||
if (left.videoStabilizationModes.includes(filter.videoStabilizationMode.target)) leftPoints++;
|
if (bestFormat.videoStabilizationModes.includes(filter.videoStabilizationMode.target)) leftPoints++;
|
||||||
if (right.videoStabilizationModes.includes(filter.videoStabilizationMode.target)) rightPoints++;
|
if (format.videoStabilizationModes.includes(filter.videoStabilizationMode.target)) rightPoints++;
|
||||||
}
|
}
|
||||||
|
|
||||||
// Find pixel format
|
// Find pixel format
|
||||||
if (filter.pixelFormat != null) {
|
if (filter.pixelFormat != null) {
|
||||||
if (left.pixelFormats.includes(filter.pixelFormat.target)) leftPoints++;
|
if (bestFormat.pixelFormats.includes(filter.pixelFormat.target)) leftPoints++;
|
||||||
if (right.pixelFormats.includes(filter.pixelFormat.target)) rightPoints++;
|
if (format.pixelFormats.includes(filter.pixelFormat.target)) rightPoints++;
|
||||||
}
|
}
|
||||||
|
|
||||||
return rightPoints - leftPoints;
|
// Find Photo HDR formats
|
||||||
});
|
if (filter.photoHDR != null) {
|
||||||
|
if (bestFormat.supportsPhotoHDR === filter.photoHDR.target) leftPoints++;
|
||||||
const format = sortedFormats[0];
|
if (format.supportsPhotoHDR === filter.photoHDR.target) rightPoints++;
|
||||||
if (format == null)
|
}
|
||||||
throw new CameraRuntimeError('device/invalid-device', `The given Camera Device (${device.id}) does not have any formats!`);
|
|
||||||
return format;
|
// Find Video HDR formats
|
||||||
|
if (filter.videoHDR != null) {
|
||||||
|
if (bestFormat.supportsVideoHDR === filter.videoHDR.target) leftPoints++;
|
||||||
|
if (format.supportsVideoHDR === filter.videoHDR.target) rightPoints++;
|
||||||
|
}
|
||||||
|
|
||||||
|
if (rightPoints > leftPoints) bestFormat = format;
|
||||||
|
}
|
||||||
|
|
||||||
|
return bestFormat;
|
||||||
}
|
}
|
||||||
|
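Taken together, this hunk replaces the old `Array.sort` comparator (which weighed `left` against `right` and returned `rightPoints - leftPoints`) with a single pass that scores each candidate `format` against the running `bestFormat`. Below is a minimal, self-contained sketch of that strategy; the `CandidateFormat` and `Filter` shapes here are simplified stand-ins for illustration, not the actual VisionCamera types:

```ts
// Simplified stand-in types (hypothetical, for illustration only)
interface CandidateFormat {
  photoWidth: number
  photoHeight: number
  maxFps: number
}
interface FilterEntry<T> {
  target: T
  priority: number
}
interface Filter {
  photoResolution?: FilterEntry<{ width: number; height: number } | 'max'>
  fps?: FilterEntry<number>
}

function pickBestFormat(formats: CandidateFormat[], filter: Filter): CandidateFormat | undefined {
  let bestFormat = formats[0]
  if (bestFormat == null) return undefined

  for (const format of formats.slice(1)) {
    let leftPoints = 0 // points for the current best format
    let rightPoints = 0 // points for the challenger

    if (filter.photoResolution != null) {
      const { target, priority } = filter.photoResolution
      const leftResolution = bestFormat.photoWidth * bestFormat.photoHeight
      const rightResolution = format.photoWidth * format.photoHeight
      if (target === 'max') {
        // 'max' flag: simply prefer the higher resolution
        if (leftResolution > rightResolution) leftPoints += priority
        if (rightResolution > leftResolution) rightPoints += priority
      } else {
        // Otherwise prefer the resolution closest to the target
        const targetResolution = target.width * target.height
        if (Math.abs(leftResolution - targetResolution) < Math.abs(rightResolution - targetResolution)) leftPoints += priority
        if (Math.abs(rightResolution - targetResolution) < Math.abs(leftResolution - targetResolution)) rightPoints += priority
      }
    }

    if (filter.fps != null) {
      // A format earns the priority points whenever it can reach the requested FPS
      if (bestFormat.maxFps >= filter.fps.target) leftPoints += filter.fps.priority
      if (format.maxFps >= filter.fps.target) rightPoints += filter.fps.priority
    }

    // The challenger replaces the best only if it strictly wins
    if (rightPoints > leftPoints) bestFormat = format
  }
  return bestFormat
}
```

Note also the visible behavior change in the FPS rule: instead of rewarding the format whose `maxFps` is closest to the target, a format now scores whenever it meets or exceeds the requested FPS, which matches the "Switch between formats on FPS change" fix in this commit.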
@@ -7,11 +7,10 @@ import { useCameraDevices } from './useCameraDevices';
  * Get the best matching Camera device that best satisfies your requirements using a sorting filter.
  * @param position The position of the Camera device relative to the phone.
  * @param filter The filter you want to use. The Camera device that matches your filter the closest will be returned
- * @returns The Camera device that matches your filter the closest.
+ * @returns The Camera device that matches your filter the closest, or `undefined` if no such Camera Device exists on the given {@linkcode position}.
  * @example
  * ```ts
- * const [position, setPosition] = useState<CameraPosition>('back')
- * const device = useCameraDevice(position, {
+ * const device = useCameraDevice('back', {
  *   physicalDevices: ['wide-angle-camera']
  * })
  * ```
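Since the hook may now return `undefined`, callers should be prepared for devices without a matching camera. One hypothetical pattern (the `useAnyCameraDevice` helper below is illustrative app code, not part of the library) is to try the opposite position before giving up:

```ts
import { useCameraDevice } from 'react-native-vision-camera'

// Hypothetical helper: prefer the back wide-angle camera, fall back to
// the front camera, and only return undefined if neither exists.
function useAnyCameraDevice() {
  const back = useCameraDevice('back', { physicalDevices: ['wide-angle-camera'] })
  const front = useCameraDevice('front')
  return back ?? front
}
```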
@@ -2,7 +2,7 @@ export * from './Camera';
 export * from './CameraDevice';
 export * from './CameraError';
 export * from './CameraProps';
-export { Frame } from './Frame';
+export * from './Frame';
 export * from './FrameProcessorPlugins';
 export * from './Orientation';
 export * from './PhotoFile';
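With the wildcard re-export, everything declared in `./Frame` (notably the `Frame` type itself) becomes part of the package's public API, matching the "fix: Expose `Frame`" change in this commit. A small sketch of what that enables, assuming `width` and `height` are properties of `Frame`:

```ts
import type { Frame } from 'react-native-vision-camera'

// Typing a helper against the now publicly exported Frame type.
function isLandscape(frame: Frame): boolean {
  'worklet'
  return frame.width > frame.height
}
```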