---
id: capturing
title: Taking Photos/Recording Videos
sidebar_label: Taking Photos/Recording Videos
---
import useBaseUrl from '@docusaurus/useBaseUrl';
<div>
<svg xmlns="http://www.w3.org/2000/svg" width="283" height="535" style={{ float: 'right' }}>
<image href={useBaseUrl("img/demo_capture.gif")} x="18" y="33" width="247" height="469" />
<image href={useBaseUrl("img/frame.png")} width="283" height="535" />
</svg>
</div>
## Camera Actions
The Camera provides certain actions as member functions, which you can access through a [ref object](https://reactjs.org/docs/refs-and-the-dom.html):
```tsx
function App() {
  const camera = useRef<Camera>(null)
  // ...

  return (
    <Camera
      ref={camera}
      {...cameraProps}
    />
  )
}
```
The most important actions are:
* [Taking Photos](#taking-photos)
  * [Taking Snapshots](#taking-snapshots)
* [Recording Videos](#recording-videos)
* [Focussing](#focussing)
## Taking Photos
To take a photo you first have to enable photo capture:
```tsx
<Camera {...props} photo={true} />
```
Then, simply use the Camera's [`takePhoto(...)`](../api/classes/camera.camera-1#takephoto) function:
```ts
const photo = await camera.current.takePhoto({
  flash: 'on'
})
```
You can customize capture options such as [automatic red-eye reduction](../api/interfaces/photofile.takephotooptions#enableautoredeyereduction), [automatic image stabilization](../api/interfaces/photofile.takephotooptions#enableautostabilization), [combining images from constituent physical camera devices](../api/interfaces/photofile.takephotooptions#enablevirtualdevicefusion) to create a single high-quality fused image, [enabling flash](../api/interfaces/photofile.takephotooptions#flash), [prioritizing speed over quality](../api/interfaces/photofile.takephotooptions#qualityprioritization) and more using the `options` parameter. (See [`TakePhotoOptions`](../api/interfaces/photofile.takephotooptions))
This function returns a [`PhotoFile`](../api/interfaces/photofile.photofile-1) which contains a [`path`](../api/interfaces/photofile.photofile-1#path) property you can display in your App using an `<Image>` or `<FastImage>`.
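For example, a minimal sketch that captures a photo on a button press and renders the result. The `onTakePhotoPressed` handler, the `photoPath` state and the `file://` prefix are assumptions for illustration, not part of the library's API:

```tsx
const [photoPath, setPhotoPath] = useState<string>()

const onTakePhotoPressed = async () => {
  // camera.current may be null before the Camera view has mounted
  const photo = await camera.current?.takePhoto({
    flash: 'on',
    qualityPrioritization: 'speed'
  })
  if (photo != null) setPhotoPath(photo.path)
}

// ...
{photoPath != null && (
  // depending on your image component, the local path may need a 'file://' prefix
  <Image source={{ uri: `file://${photoPath}` }} style={StyleSheet.absoluteFill} />
)}
```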
### Taking Snapshots
Compared to iOS, Cameras on Android tend to be slower in image capture. If you care about speed, you can use the Camera's [`takeSnapshot(...)`](../api/classes/camera.camera-1#takesnapshot) function (Android only) which simply takes a snapshot of the Camera View instead of actually taking a photo through the Camera lens.
```ts
const snapshot = await camera.current.takeSnapshot({
  quality: 85,
  skipMetadata: true
})
```
:::note
While taking snapshots is faster than taking photos, the resulting image is of significantly lower quality. You can combine both functions: present a snapshot to the user first, then deliver the actual high-res photo afterwards.
:::
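A rough sketch of that combination, assuming a `setPreviewPath` state setter that drives whatever preview `<Image>` you render:

```ts
// 1. show a fast, lower-quality snapshot as an instant preview
const snapshot = await camera.current.takeSnapshot({
  quality: 85,
  skipMetadata: true
})
setPreviewPath(snapshot.path)

// 2. then replace it with the actual high-resolution photo
const photo = await camera.current.takePhoto({ flash: 'off' })
setPreviewPath(photo.path)
```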
:::note
The `takeSnapshot` function also works with `photo={false}`. For this reason, devices that do not support photo and video capture at the same time (see ["The `supportsPhotoAndVideoCapture` prop"](/docs/guides/devices/#the-supportsphotoandvideocapture-prop)) can use `video={true}` and fall back to snapshot capture for photos. (See ["Taking Snapshots"](/docs/guides/capturing#taking-snapshots))
:::
## Recording Videos
To start a video recording you first have to enable video capture:
```tsx
<Camera
  {...props}
  video={true}
  audio={true} // <-- optional
/>
```
Then, simply use the Camera's [`startRecording(...)`](../api/classes/camera.camera-1#startrecording) function:
```ts
camera.current.startRecording({
  flash: 'on',
  onRecordingFinished: (video) => console.log(video),
  onRecordingError: (error) => console.error(error),
})
```
If an error occurs _while recording the video_, the `onRecordingError` callback will be invoked with a [`CaptureError`](../api/classes/cameraerror.cameracaptureerror) and the recording is cancelled.
To stop the video recording, you can call [`stopRecording(...)`](../api/classes/camera.camera-1#stoprecording):
```ts
await camera.current.stopRecording()
```
Once a recording has been stopped, the `onRecordingFinished` callback passed to the `startRecording` function will be invoked with a [`VideoFile`](../api/interfaces/videofile.videofile-1) which you can then use to display in a [`<Video>`](https://github.com/react-native-video/react-native-video) component.
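For example, a rough sketch that stores the resulting path and plays it back with react-native-video. The `videoPath` state and the button handlers are assumptions for illustration:

```tsx
const [videoPath, setVideoPath] = useState<string>()

const onStartRecordingPressed = () => {
  camera.current?.startRecording({
    onRecordingFinished: (video) => setVideoPath(video.path),
    onRecordingError: (error) => console.error(error),
  })
}
const onStopRecordingPressed = async () => {
  await camera.current?.stopRecording()
}

// ...
{videoPath != null && (
  // <Video> comes from react-native-video; depending on platform,
  // the local path may need a 'file://' prefix
  <Video source={{ uri: videoPath }} style={StyleSheet.absoluteFill} />
)}
```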
## Focussing
To focus the Camera on a specific point, simply use the Camera's [`focus(...)`](../api/classes/camera.camera-1#focus) function:
```ts
await camera.current.focus({ x: tapEvent.x, y: tapEvent.y })
```
The focus function expects a [`Point`](../api/interfaces/point.point-1) parameter which represents the location, relative to the Camera view, that you want to focus on. If you use react-native-gesture-handler, this will consist of the [`x`](https://docs.swmansion.com/react-native-gesture-handler/docs/api/gesture-handlers/tap-gh#x) and [`y`](https://docs.swmansion.com/react-native-gesture-handler/docs/api/gesture-handlers/tap-gh#y) properties of the tap event payload.
For example, `{ x: 0, y: 0 }` will focus on the top-left corner, while `{ x: CAM_WIDTH, y: CAM_HEIGHT }` will focus on the bottom-right corner.
Focussing adjusts auto-focus (AF) and auto-exposure (AE).
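A possible tap-to-focus wiring with react-native-gesture-handler, sketched here as an assumption rather than the library's prescribed setup:

```tsx
import { TapGestureHandler, TapGestureHandlerStateChangeEvent, State } from 'react-native-gesture-handler'

const onTapToFocus = async ({ nativeEvent }: TapGestureHandlerStateChangeEvent) => {
  // only react once the tap has actually been recognized
  if (nativeEvent.state !== State.ACTIVE) return
  // x/y are relative to the wrapped view, which is what focus() expects
  await camera.current?.focus({ x: nativeEvent.x, y: nativeEvent.y })
}

return (
  <TapGestureHandler onHandlerStateChange={onTapToFocus}>
    <Camera ref={camera} {...cameraProps} />
  </TapGestureHandler>
)
```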
<br />
#### 🚀 Next section: [Frame Processors](frame-processors)