chore: Upgrade a whole lotta dependencies (#436)
* chore: Upgrade a lot of dependencies for `./`
* chore: Upgrade a lot of dependencies for `./example`
* chore: Upgrade a lot of dependencies for `./docs`
* Use new `EventEmitter` syntax (`.remove()`)
* Update Podfile.lock
* docs: Use watch mode
* docs: Replace all relative links with absolute
* Fix all links
* Update docusaurus.config.js
* Upgrade docusaurus-plugin-typedoc to fix docs build
* Update yarn.lock
* Upgrade typescript to 4.4.3
* Fix error unknown
* Update package.json
* Upgrade typedoc
* Upgrade a few more deps
* Fix deprecated sidebar syntax
* Update Gemfile.lock
@@ -45,7 +45,7 @@ To take a photo you first have to enable photo capture:
<Camera {...props} photo={true} />
```

-Then, simply use the Camera's [`takePhoto(...)`](../api/classes/camera.camera-1#takephoto) function:
+Then, simply use the Camera's [`takePhoto(...)`](/docs/api/classes/Camera#takephoto) function:

```ts
const photo = await camera.current.takePhoto({
@@ -53,13 +53,13 @@ const photo = await camera.current.takePhoto({
})
```

-You can customize capture options such as [automatic red-eye reduction](../api/interfaces/photofile.takephotooptions#enableautoredeyereduction), [automatic image stabilization](../api/interfaces/photofile.takephotooptions#enableautostabilization), [combining images from constituent physical camera devices](../api/interfaces/photofile.takephotooptions#enablevirtualdevicefusion) to create a single high quality fused image, [enable flash](../api/interfaces/photofile.takephotooptions#flash), [prioritize speed over quality](../api/interfaces/photofile.takephotooptions#qualityprioritization) and more using the `options` parameter. (See [`TakePhotoOptions`](../api/interfaces/photofile.takephotooptions))
+You can customize capture options such as [automatic red-eye reduction](/docs/api/interfaces/TakePhotoOptions#enableautoredeyereduction), [automatic image stabilization](/docs/api/interfaces/TakePhotoOptions#enableautostabilization), [combining images from constituent physical camera devices](/docs/api/interfaces/TakePhotoOptions#enablevirtualdevicefusion) to create a single high quality fused image, [enable flash](/docs/api/interfaces/TakePhotoOptions#flash), [prioritize speed over quality](/docs/api/interfaces/TakePhotoOptions#qualityprioritization) and more using the `options` parameter. (See [`TakePhotoOptions`](/docs/api/interfaces/TakePhotoOptions))

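For illustration, a hedged sketch of what such an options object could look like, using only option names that appear in the links above (`camera` is assumed to be a React ref to the mounted `<Camera>`; exact values and defaults may differ between versions):

```ts
// Sketch only – option names taken from the TakePhotoOptions links above.
const photo = await camera.current.takePhoto({
  flash: 'on',                     // fire the flash
  enableAutoRedEyeReduction: true, // automatic red-eye reduction
  enableAutoStabilization: true,   // automatic image stabilization
  qualityPrioritization: 'speed',  // prioritize speed over quality
})
```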
-This function returns a [`PhotoFile`](../api/interfaces/photofile.photofile-1) which contains a [`path`](../api/interfaces/photofile.photofile-1#path) property you can display in your App using an `<Image>` or `<FastImage>`.
+This function returns a [`PhotoFile`](/docs/api/interfaces/PhotoFile) which contains a [`path`](/docs/api/interfaces/PhotoFile#path) property you can display in your App using an `<Image>` or `<FastImage>`.

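As a rough sketch of the display step (the `file://` prefix is an assumption and may depend on platform):

```tsx
// Sketch: rendering the captured PhotoFile with React Native's <Image>.
<Image
  source={{ uri: `file://${photo.path}` }}
  style={{ width: 200, height: 200 }}
/>
```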
### Taking Snapshots

-Compared to iOS, Cameras on Android tend to be slower in image capture. If you care about speed, you can use the Camera's [`takeSnapshot(...)`](../api/classes/camera.camera-1#takesnapshot) function (Android only) which simply takes a snapshot of the Camera View instead of actually taking a photo through the Camera lens.
+Compared to iOS, Cameras on Android tend to be slower in image capture. If you care about speed, you can use the Camera's [`takeSnapshot(...)`](/docs/api/classes/Camera#takesnapshot) function (Android only) which simply takes a snapshot of the Camera View instead of actually taking a photo through the Camera lens.

```ts
const snapshot = await camera.current.takeSnapshot({
@@ -88,7 +88,7 @@ To start a video recording you first have to enable video capture:
/>
```

-Then, simply use the Camera's [`startRecording(...)`](../api/classes/camera.camera-1#startrecording) function:
+Then, simply use the Camera's [`startRecording(...)`](/docs/api/classes/Camera#startrecording) function:

```ts
camera.current.startRecording({
@@ -98,15 +98,15 @@ camera.current.startRecording({
})
```

-For any error that occurred _while recording the video_, the `onRecordingError` callback will be invoked with a [`CaptureError`](../api/classes/cameraerror.cameracaptureerror) and the recording is therefore cancelled.
+For any error that occurred _while recording the video_, the `onRecordingError` callback will be invoked with a [`CaptureError`](/docs/api/classes/CameraCaptureError) and the recording is therefore cancelled.

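For context, a minimal sketch of how the two callbacks named above might be wired up (anything besides the callback names is an assumption):

```ts
// Sketch: the callback names come from the surrounding text, the rest is illustrative.
camera.current.startRecording({
  onRecordingFinished: (video) => console.log('Recording finished:', video),
  onRecordingError: (error) => console.error('Recording failed:', error),
})
```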
-To stop the video recording, you can call [`stopRecording(...)`](../api/classes/camera.camera-1#stoprecording):
+To stop the video recording, you can call [`stopRecording(...)`](/docs/api/classes/Camera#stoprecording):

```ts
await camera.current.stopRecording()
```

-Once a recording has been stopped, the `onRecordingFinished` callback passed to the `startRecording` function will be invoked with a [`VideoFile`](../api/interfaces/videofile.videofile-1) which you can then use to display in a [`<Video>`](https://github.com/react-native-video/react-native-video) component.
+Once a recording has been stopped, the `onRecordingFinished` callback passed to the `startRecording` function will be invoked with a [`VideoFile`](/docs/api/interfaces/VideoFile) which you can then use to display in a [`<Video>`](https://github.com/react-native-video/react-native-video) component.

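As an illustrative follow-up, a sketch of handing the result to react-native-video; the `path` property and the `file://` prefix are assumptions mirroring the `PhotoFile` description above:

```tsx
// Sketch: playing back the recorded file with react-native-video.
<Video source={{ uri: `file://${video.path}` }} style={{ flex: 1 }} />
```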
<br />
@@ -27,7 +27,7 @@ Camera devices are the physical (or "virtual") devices that can be used to recor

### Get available camera devices

-To get a list of all available camera devices, use [the `getAvailableCameraDevices` function](/docs/api/classes/camera.camera-1#getavailablecameradevices):
+To get a list of all available camera devices, use [the `getAvailableCameraDevices` function](/docs/api/classes/Camera#getavailablecameradevices):

```ts
const devices = await Camera.getAvailableCameraDevices()
@@ -40,7 +40,7 @@ The most important properties are:
* `devices`: A list of physical device types this camera device consists of. For a **single physical camera device**, this property is always an array of one element. **For virtual multi-cameras** this property contains all the physical camera devices that are combined to create this virtual multi-camera device
* `position`: The position of the camera device relative to the phone (`front`, `back`)
* `hasFlash`: Whether this camera device supports using the flash to take photos or record videos
-* `hasTorch`: Whether this camera device supports enabling/disabling the torch at any time ([`Camera.torch` prop](/docs/api/interfaces/cameraprops.cameraprops-1#torch))
+* `hasTorch`: Whether this camera device supports enabling/disabling the torch at any time ([`Camera.torch` prop](/docs/api/interfaces/CameraProps#torch))
* `isMultiCam`: Determines whether the camera device is a virtual multi-camera device which contains multiple combined physical camera devices.
* `minZoom`: The minimum available zoom factor. This value is often `1`. When you pass `zoom={0}` to the Camera, the `minZoom` factor will be applied.
* `neutralZoom`: The zoom factor where the camera is "neutral". For any wide-angle cameras this property might be the same as `minZoom`, whereas for ultra-wide-angle cameras ("fish-eye") this might be a value higher than `minZoom` (e.g. `2`). It is recommended that you always start at `neutralZoom` and let the user manually zoom out to `minZoom` on demand.
@@ -50,7 +50,7 @@ The most important properties are:
* `supportsFocus`: Determines whether this camera device supports focusing (See [Focusing](focusing))

:::note
-See the [`CameraDevice` type](../api/interfaces/cameradevice.cameradevice-1) for full API reference
+See the [`CameraDevice` type](/docs/api/interfaces/CameraDevice) for full API reference
:::

For debugging purposes you can use the `id` or `name` properties to log and compare devices. You can also use the `devices` property to determine the physical camera devices this camera device consists of, for example:

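The example itself lies outside this hunk; a plausible sketch based on the properties listed above might be:

```ts
// Sketch: logging each device's name, position and its physical device types,
// e.g. "Back Camera (back): wide-angle-camera, telephoto-camera".
const devices = await Camera.getAvailableCameraDevices()
for (const device of devices) {
  console.log(`${device.name} (${device.position}): ${device.devices.join(', ')}`)
}
```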
@@ -115,7 +115,7 @@ function App() {

### The `supportsParallelVideoProcessing` prop

-Camera devices provide the [`supportsParallelVideoProcessing` property](/docs/api/interfaces/cameradevice.cameradevice-1#supportsparallelvideoprocessing) which determines whether the device supports using Video Recordings (`video={true}`) and Frame Processors (`frameProcessor={...}`) at the same time.
+Camera devices provide the [`supportsParallelVideoProcessing` property](/docs/api/interfaces/CameraDevice#supportsparallelvideoprocessing) which determines whether the device supports using Video Recordings (`video={true}`) and Frame Processors (`frameProcessor={...}`) at the same time.

If this property is `false`, you can either enable `video`, or add a `frameProcessor`, but not both.

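A hedged sketch of how one might honour that limitation in JSX (`device` and `frameProcessor` are assumed to exist in the surrounding component):

```tsx
// Sketch: only attach the frame processor when the device can run it alongside video.
<Camera
  style={StyleSheet.absoluteFill}
  device={device}
  isActive={true}
  video={true}
  frameProcessor={device.supportsParallelVideoProcessing ? frameProcessor : undefined}
/>
```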
@@ -4,20 +4,20 @@ title: Focusing
sidebar_label: Focusing
---

-To focus the camera to a specific point, simply use the Camera's [`focus(...)`](../api/classes/camera.camera-1#focus) function:
+To focus the camera to a specific point, simply use the Camera's [`focus(...)`](/docs/api/classes/Camera#focus) function:

```ts
await camera.current.focus({ x: tapEvent.x, y: tapEvent.y })
```

-The focus function expects a [`Point`](../api/interfaces/point.point-1) parameter which represents the location relative to the Camera view where you want to focus the Camera to (in _points_). If you use [react-native-gesture-handler](https://docs.swmansion.com/react-native-gesture-handler/), this will consist of the [`x`](https://docs.swmansion.com/react-native-gesture-handler/docs/api/gesture-handlers/tap-gh#x) and [`y`](https://docs.swmansion.com/react-native-gesture-handler/docs/api/gesture-handlers/tap-gh#y) properties of the tap event payload.
+The focus function expects a [`Point`](/docs/api/interfaces/Point) parameter which represents the location relative to the Camera view where you want to focus the Camera to (in _points_). If you use [react-native-gesture-handler](https://docs.swmansion.com/react-native-gesture-handler/), this will consist of the [`x`](https://docs.swmansion.com/react-native-gesture-handler/docs/api/gesture-handlers/tap-gh#x) and [`y`](https://docs.swmansion.com/react-native-gesture-handler/docs/api/gesture-handlers/tap-gh#y) properties of the tap event payload.

So for example, `{ x: 0, y: 0 }` will focus to the upper left corner, while `{ x: CAM_WIDTH, y: CAM_HEIGHT }` will focus to the bottom right corner.

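For illustration, a sketch of wiring a tap gesture to `focus(...)` with react-native-gesture-handler's `TapGestureHandler`; the exact event shape and surrounding props are assumptions based on the linked `x`/`y` docs:

```tsx
// Sketch: focus wherever the user taps.
// import { TapGestureHandler, State } from 'react-native-gesture-handler'
<TapGestureHandler
  onHandlerStateChange={({ nativeEvent }) => {
    if (nativeEvent.state === State.END) {
      camera.current?.focus({ x: nativeEvent.x, y: nativeEvent.y })
    }
  }}>
  <Camera ref={camera} {...props} />
</TapGestureHandler>
```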
Focusing adjusts auto-focus (AF) and auto-exposure (AE).

:::note
-`focus(...)` will fail if the selected Camera device does not support focusing (see [`CameraDevice.supportsFocus`](/docs/api/interfaces/cameradevice.cameradevice-1#supportsfocus))
+`focus(...)` will fail if the selected Camera device does not support focusing (see [`CameraDevice.supportsFocus`](/docs/api/interfaces/CameraDevice#supportsfocus))
:::

<br />
@@ -173,7 +173,7 @@ This means that **the Frame Processor API only takes ~1ms longer than a fully na

### Avoiding Frame-drops

-Frame Processors will be **synchronously** called for each frame the Camera sees and have to finish executing before the next frame arrives, otherwise the next frame(s) will be dropped. For a frame rate of **30 FPS**, you have about **33ms** to finish processing frames. Use [`frameProcessorFps`](../api/interfaces/cameraprops.cameraprops-1#frameprocessorfps) to throttle the frame processor's FPS. For a QR Code Scanner, **5 FPS** (200ms) might suffice, while an object tracking AI might run at the same frame rate as the Camera itself (e.g. **60 FPS** (16ms)).
+Frame Processors will be **synchronously** called for each frame the Camera sees and have to finish executing before the next frame arrives, otherwise the next frame(s) will be dropped. For a frame rate of **30 FPS**, you have about **33ms** to finish processing frames. Use [`frameProcessorFps`](/docs/api/interfaces/CameraProps#frameprocessorfps) to throttle the frame processor's FPS. For a QR Code Scanner, **5 FPS** (200ms) might suffice, while an object tracking AI might run at the same frame rate as the Camera itself (e.g. **60 FPS** (16ms)).

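For example, a throttled QR-code-scanning setup might look roughly like this (`frameProcessor` is assumed to be created elsewhere in the component, e.g. via `useFrameProcessor`):

```tsx
// Sketch: run the frame processor at most 5 times per second.
<Camera
  {...props}
  frameProcessor={frameProcessor}
  frameProcessorFps={5}
/>
```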
### ESLint react-hooks plugin
@@ -164,7 +164,7 @@ The permission request status can have the following values:

## Use the `<Camera>` view

-If your app has permission to use the Camera and Microphone, simply use the [`useCameraDevices(...)`](/docs/api/modules/hooks_usecameradevices) hook to get a Camera device (see [Camera Devices](/docs/guides/devices)) and mount the `<Camera>` view:
+If your app has permission to use the Camera and Microphone, simply use the [`useCameraDevices(...)`](/docs/api#usecameradevices) hook to get a Camera device (see [Camera Devices](/docs/guides/devices)) and mount the `<Camera>` view:

```tsx
function App() {
@@ -65,7 +65,7 @@ Before opening an issue, make sure you try the following:
distributionUrl=https\://services.gradle.org/distributions/gradle-6.5-all.zip
```
5. If you're having runtime issues, check the logs in Android Studio/Logcat to find out more. In Android Studio, go to **View** > **Tool Windows** > **Logcat** (<kbd>⌘</kbd>+<kbd>6</kbd>) or run `adb logcat` in Terminal.
-6. If a camera device is not being returned by [`Camera.getAvailableCameraDevices()`](/docs/api/classes/camera.camera-1#getavailablecameradevices), make sure it is a Camera2 compatible device. See [this section in the Android docs](https://developer.android.com/reference/android/hardware/camera2/CameraDevice#reprocessing) for more information.
+6. If a camera device is not being returned by [`Camera.getAvailableCameraDevices()`](/docs/api/classes/Camera#getavailablecameradevices), make sure it is a Camera2 compatible device. See [this section in the Android docs](https://developer.android.com/reference/android/hardware/camera2/CameraDevice#reprocessing) for more information.
7. If your Frame Processor is not running, make sure you check the native Android Studio/Logcat logs to find out why. Also make sure you are not using a remote JS debugger such as Google Chrome, since those don't work with JSI.

## Issues
@@ -13,7 +13,7 @@ import useBaseUrl from '@docusaurus/useBaseUrl';
</svg>
</div>

-The `<Camera>` component already provides a natively implemented zoom gesture which you can enable with the [`enableZoomGesture`](/docs/api/interfaces/cameraprops.cameraprops-1#enablezoomgesture) prop. This does not require any additional work, but if you want to set up a custom gesture, such as the one in Snapchat or Instagram where you move up your finger while recording, continue reading.
+The `<Camera>` component already provides a natively implemented zoom gesture which you can enable with the [`enableZoomGesture`](/docs/api/interfaces/CameraProps#enablezoomgesture) prop. This does not require any additional work, but if you want to set up a custom gesture, such as the one in Snapchat or Instagram where you move up your finger while recording, continue reading.

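The built-in gesture is roughly a one-liner:

```tsx
// Sketch: native pinch-to-zoom, no extra gesture code needed.
<Camera {...props} enableZoomGesture={true} />
```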
### Animation libraries
@@ -80,7 +80,7 @@ export function App() {

### Min, Max and Neutral Zoom

-A Camera device has different minimum, maximum and neutral zoom values. Those values are expressed through the `CameraDevice`'s [`minZoom`](/docs/api/interfaces/cameradevice.cameradevice-1#minzoom), [`maxZoom`](/docs/api/interfaces/cameradevice.cameradevice-1#maxzoom) and [`neutralZoom`](/docs/api/interfaces/cameradevice.cameradevice-1#neutralzoom) props, and are represented in "scale". So if the `maxZoom` property of a device is `2`, that means the view can be enlarged by twice its zoom, aka the viewport halves.
+A Camera device has different minimum, maximum and neutral zoom values. Those values are expressed through the `CameraDevice`'s [`minZoom`](/docs/api/interfaces/CameraDevice#minzoom), [`maxZoom`](/docs/api/interfaces/CameraDevice#maxzoom) and [`neutralZoom`](/docs/api/interfaces/CameraDevice#neutralzoom) props, and are represented in "scale". So if the `maxZoom` property of a device is `2`, that means the view can be enlarged by twice its zoom, aka the viewport halves.

* The `minZoom` value is always `1`.
* The `maxZoom` value can have very high values (such as `128`), but often you want to clamp this value to something realistic like `16`.

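A short sketch of how those values might be combined in practice (`device` is assumed to be the selected `CameraDevice`):

```ts
// Sketch: clamp the device's zoom range to something practical, as suggested above.
const minZoom = device.minZoom                 // always 1
const maxZoom = Math.min(device.maxZoom, 16)   // clamp extreme values (e.g. 128) to 16
const startZoom = device.neutralZoom           // recommended starting point
```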
@@ -101,9 +101,7 @@ module.exports = {
{
docs: {
sidebarPath: require.resolve('./sidebars.js'),
-// Please change this to your repo.
-editUrl:
-'https://github.com/mrousavy/react-native-vision-camera/edit/main/docs/',
+editUrl: 'https://github.com/mrousavy/react-native-vision-camera/edit/main/docs/',
},
theme: {
customCss: require.resolve('./src/css/custom.css'),
@@ -119,14 +117,13 @@ module.exports = {
entryPoints: ['../src'],
exclude: "../src/index.ts",
tsconfig: '../tsconfig.json',
watch: process.env.TYPEDOC_WATCH,
excludePrivate: true,
excludeProtected: true,
excludeExternals: true,
excludeInternal: true,
readme: "none",
sidebar: {
sidebarFile: 'typedoc-sidebar.js',
fullNames: false,
indexLabel: 'Overview'
}
},
@@ -4,7 +4,7 @@
"private": true,
"scripts": {
"docusaurus": "docusaurus",
-"start": "docusaurus start",
+"start": "TYPEDOC_WATCH=true docusaurus start",
"build": "docusaurus build",
"swizzle": "docusaurus swizzle",
"deploy": "docusaurus deploy",
@@ -12,8 +12,8 @@
"clear": "docusaurus clear"
},
"dependencies": {
-"@docusaurus/core": "^2.0.0-beta.2",
-"@docusaurus/preset-classic": "^2.0.0-beta.4",
+"@docusaurus/core": "^2.0.0-beta.6",
+"@docusaurus/preset-classic": "^2.0.0-beta.6",
"@mdx-js/react": "^1.6.22",
"clsx": "^1.1.1",
"react": "^17.0.2",
@@ -32,9 +32,9 @@
]
},
"devDependencies": {
-"docusaurus-plugin-typedoc": "^0.14.2",
-"typedoc": "^0.20.36",
-"typedoc-plugin-markdown": "^3.9.0",
-"typescript": "4.2.4"
+"docusaurus-plugin-typedoc": "^0.16.3",
+"typedoc": "^0.22.4",
+"typedoc-plugin-markdown": "^3.11.0",
+"typescript": "^4.4.3"
}
}
@@ -23,6 +23,11 @@ module.exports = {
'guides/errors',
'guides/troubleshooting',
],
-API: require('./typedoc-sidebar.js'),
+API: [
+{
+type: 'autogenerated',
+dirName: 'api',
+}
+],
},
};
docs/yarn.lock: 3232 changed lines (diff suppressed because it is too large)