feat: Full Android rewrite (CameraX -> Camera2) (#1674)

* Nuke CameraX

* fix: Run View Finder on UI Thread

* Open Camera, set up Threads

* fix init

* Mirror if needed

* Try PreviewView

* Use max resolution

* Add `hardwareLevel` property

* Check if output type is supported

* Replace `frameRateRanges` with `minFps` and `maxFps`

* Remove `isHighestPhotoQualitySupported`

* Remove `colorSpace`

The native platforms will use the best / most accurate colorSpace by default anyway.

* HDR

* Check from format

* fix

* Remove `supportsParallelVideoProcessing`

* Correctly return video/photo sizes on Android now. Finally

* Log all Device props

* Log if optimized usecase is used

* Cleanup

* Configure Camera Input only once

* Revert "Configure Camera Input only once"

This reverts commit 0fd6c03f54c7566cb5592053720c4a8743aba92e.

* Extract Camera configuration

* Try to reconfigure all

* Hook based

* Properly set up `CameraSession`

* Delete unused

* fix: Fix recreate when outputs change

* Update NativePreviewView.kt

* Use callback for closing

* Catch CameraAccessException

* Finally got it stable

* Remove isMirrored

* Implement `takePhoto()`

* Add ExifInterface library

* Run findViewById on UI Thread

* Add Photo Output Surface to takePhoto

* Fix Video Stabilization Modes

* Optimize Imports

* More logs

* Update CameraSession.kt

* Close Image

* Use separate Executor in CameraQueue

* Delete hooks

* Use same Thread again

* If opened, call error

* Update CameraSession.kt

* Log HW level

* fix: Don't enable Stream Use Case if it's not 100% supported

* Move some stuff

* Cleanup PhotoOutputSynchronizer

* Try just open in suspend fun

* Some synchronization fixes

* fix logs

* Update CameraDevice+createCaptureSession.kt

* Update CameraDevice+createCaptureSession.kt

* fixes

* fix: Use Snapshot Template for speed capture prio

* Use PREVIEW template for repeating request

* Use `TEMPLATE_RECORD` if video use-case is attached

* Use `isRunning` flag

* Recreate session every time on active/inactive

* Lazily get values in capture session

* Stability

* Rebuild session if outputs change

* Set `didOutputsChange` back to false

* Capture first in lock

* Try

* kinda fix it? idk

* fix: Keep Outputs

* Refactor into single method

* Update CameraView.kt

* Use Enums for type safety

* Implement Orientation (I think)

* Move RefCount management to Java (Frame)

* Don't crash when dropping a Frame

* Prefer Devices with higher max resolution

* Prefer multi-cams

* Use FastImage for Media Page

* Return orientation in takePhoto()

* Load orientation from EXIF Data

* Add `isMirrored` props and documentation for PhotoFile

* fix: Return `not-determined` on Android

* Update CameraViewModule.kt

* chore: Upgrade packages

* fix: Fix Metro Config

* Cleanup config

* Properly mirror Images on save

* Prepare MediaRecorder

* Start/Stop MediaRecorder

* Remove `takeSnapshot()`

It no longer works on Android and never worked on iOS. Users can use `useFrameProcessor` to take a snapshot instead.

* Use `MediaCodec`

* Move to `VideoRecording` class

* Cleanup Snapshot

* Create `SkiaPreviewView` hybrid class

* Create OpenGL context

* Create `SkiaPreviewView`

* Fix texture creation missing context

* Draw red frame

* Somehow get it working

* Add Skia CMake setup

* Start looping

* Init OpenGL

* Refactor into `SkiaRenderer`

* Cleanup PreviewSize

* Set up

* Only re-render UI if there is a new Frame

* Preview

* Fix init

* Try rendering Preview

* Update SkiaPreviewView.kt

* Log version

* Try using Skia (fail)

* Drawwwww!!!!!!!!!! 🎉

* Use Preview Size

* Clear first

* Refactor into SkiaRenderer

* Add `previewType: "none"` on iOS

* Simplify a lot

* Draw Camera? For some reason? I have no idea anymore

* Fix OpenGL errors

* Got it kinda working again?

* Actually draw Frame woah

* Clean up code

* Cleanup

* Update on main

* Synchronize render calls

* holy shit

* Update SkiaRenderer.cpp

* Update SkiaRenderer.cpp

* Refactor

* Update SkiaRenderer.cpp

* Check for `NO_INPUT_TEXTURE`

* Post & Wait

* Set input size

* Add Video back again

* Allow session without preview

* Convert JPEG to byte[]

* feat: Use `ImageReader` and use YUV Image Buffers in Skia Context (#1689)

* Try to pass YUV Buffers as Pixmaps

* Create pixmap!

* Clean up

* Render to preview

* Only render if we have an output surface

* Update SkiaRenderer.cpp

* Fix Y+U+V sampling code

* Cleanup

* Fix Semaphore 0

* Use 4:2:0 YUV again idk

* Update SkiaRenderer.h

* Set minSdk to 26

* Set surface

* Revert "Set minSdk to 26"

This reverts commit c4085b7c16c628532e5c2d68cf7ed11c751d0b48.

* Set previewType

* feat: Video Recording with Camera2 (#1691)

* Rename

* Update CameraSession.kt

* Use `SurfaceHolder` instead of `SurfaceView` for output

* Update CameraOutputs.kt

* Update CameraSession.kt

* fix: Fix crash when Preview is null

* Check if snapshot capture is supported

* Update RecordingSession.kt

* S

* Use `MediaRecorder`

* Make audio optional

* Add Torch

* Output duration

* Update RecordingSession.kt

* Start RecordingSession

* logs

* More log

* Base for preparing pass-through Recording

* Use `ImageWriter` to append Images to the Recording Surface

* Stream PRIVATE GPU_SAMPLED_IMAGE Images

* Add flags

* Close session on stop

* Allow customizing `videoCodec` and `fileType`

* Enable Torch

* Fix Torch Mode

* Fix comparing outputs with hashCode

* Update CameraSession.kt

* Correctly pass along Frame Processor

* fix: Use AUDIO_BIT_RATE of 16 * 44.1 kHz

* Use CAMCORDER instead of MIC microphone

* Use 1 channel

* fix: Use `Orientation`

* Add `native` PixelFormat

* Update iOS to latest Skia integration

* feat: Add `pixelFormat` property to Camera

* Catch error in configureSession

* Fix JPEG format

* Clean up best match finder

* Update CameraDeviceDetails.kt

* Clamp sizes by maximum CamcorderProfile size

* Remove `getAvailableVideoCodecs`

* chore: release 3.0.0-rc.5

* Use maximum video size of RECORD as default

* Update CameraDeviceDetails.kt

* Add a todo

* Add JSON device to issue report

* Prefer `full` devices and flash

* Lock to 30 FPS on Samsung

* Implement Zoom

* Refactor

* Format -> PixelFormat

* fix: Feat `pixelFormat` -> `pixelFormats`

* Update TROUBLESHOOTING.mdx

* Format

* fix: Implement `zoom` for Photo Capture

* fix: Don't run if `isActive` is `false`

* fix: Call `examplePlugin(frame)`

* fix: Fix Flash

* fix: Use `react-native-worklets-core`!

* fix: Fix import
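
Taken together, the commits above reshape the JS-facing API surface: formats expose `minFps`/`maxFps` and `pixelFormats` instead of `frameRateRanges` and `colorSpace`, devices gain `hardwareLevel`, `takePhoto()` reports `orientation` and `isMirrored`, and recordings accept `videoCodec`/`fileType`. A rough TypeScript sketch of how this might fit together (option values such as `'full'`, `'yuv'`, `'h264'` and `'mp4'` are assumptions read off the commit titles, not verified typings):

```ts
import { Camera } from 'react-native-vision-camera'

async function sketchNewApi(camera: Camera): Promise<void> {
  const devices = await Camera.getAvailableCameraDevices()
  // `hardwareLevel` is new; prefer `full` devices, as the commits suggest.
  const device = devices.find((d) => d.hardwareLevel === 'full') ?? devices[0]

  // Formats now carry `minFps`/`maxFps` and a `pixelFormats` list.
  const format = device.formats.find((f) => f.maxFps >= 30 && f.pixelFormats.includes('yuv'))
  console.log(`Device ${device.id}: up to ${format?.maxFps ?? '?'} FPS`)

  // `takePhoto()` now reports the photo's `orientation` (from EXIF) and `isMirrored`.
  const photo = await camera.takePhoto()
  console.log(photo.orientation, photo.isMirrored)

  // Recordings can customize `videoCodec` and `fileType`; the duration is reported on finish.
  camera.startRecording({
    videoCodec: 'h264',
    fileType: 'mp4',
    onRecordingFinished: (video) => console.log(video.path, video.duration),
    onRecordingError: (error) => console.error(error),
  })
}
```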
Author: Marc Rousavy
Date: 2023-08-21 12:50:14 +02:00
Committed by: GitHub
Parent: 61fd4e0474
Commit: 37a3548a81
141 changed files with 3991 additions and 2251 deletions

@@ -34,7 +34,6 @@ function App() {
The most important actions are:
* [Taking Photos](#taking-photos)
- [Taking Snapshots](#taking-snapshots)
* [Recording Videos](#recording-videos)
## Taking Photos
@@ -57,25 +56,6 @@ You can customize capture options such as [automatic red-eye reduction](/docs/ap
This function returns a [`PhotoFile`](/docs/api/interfaces/PhotoFile) which contains a [`path`](/docs/api/interfaces/PhotoFile#path) property you can display in your App using an `<Image>` or `<FastImage>`.
### Taking Snapshots
Compared to iOS, Cameras on Android tend to be slower in image capture. If you care about speed, you can use the Camera's [`takeSnapshot(...)`](/docs/api/classes/Camera#takesnapshot) function (Android only) which simply takes a snapshot of the Camera View instead of actually taking a photo through the Camera lens.
```ts
const snapshot = await camera.current.takeSnapshot({
quality: 85,
skipMetadata: true
})
```
:::note
While taking snapshots is faster than taking photos, the resulting image has way lower quality. You can combine both functions to create a snapshot to present to the user at first, then deliver the actual high-res photo afterwards.
:::
:::note
The `takeSnapshot` function also works with `photo={false}`. For this reason VisionCamera will automatically fall-back to snapshot capture if you are trying to use more use-cases than the Camera natively supports. (see ["The `supportsParallelVideoProcessing` prop"](/docs/guides/devices#the-supportsparallelvideoprocessing-prop))
:::
## Recording Videos
To start a video recording you first have to enable video capture:
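
With snapshots removed, `takePhoto()` is the single capture path on both platforms. A minimal sketch of using it and rendering the result (the `flash` option follows the documented `TakePhotoOptions`; the `file://` prefix is an assumption about how the returned path is consumed):

```tsx
import { useRef } from 'react'
import { Camera } from 'react-native-vision-camera'

function usePhotoCapture() {
  const camera = useRef<Camera>(null)

  const capturePhoto = async () => {
    // takePhoto() resolves to a PhotoFile; its `path` can be shown in an <Image> or <FastImage>.
    const photo = await camera.current?.takePhoto({ flash: 'off' })
    return photo != null ? `file://${photo.path}` : undefined
  }

  return { camera, capturePhoto }
}
```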

@@ -46,7 +46,6 @@ The most important properties are:
* `neutralZoom`: The zoom factor where the camera is "neutral". For any wide-angle cameras this property might be the same as `minZoom`, where as for ultra-wide-angle cameras ("fish-eye") this might be a value higher than `minZoom` (e.g. `2`). It is recommended that you always start at `neutralZoom` and let the user manually zoom out to `minZoom` on demand.
* `maxZoom`: The maximum available zoom factor. When you pass `zoom={1}` to the Camera, the `maxZoom` factor will be applied.
* `formats`: A list of all available formats (See [Camera Formats](formats))
* `supportsParallelVideoProcessing`: Determines whether this camera devices supports using Video Recordings and Frame Processors at the same time. (See [`supportsParallelVideoProcessing`](#the-supportsparallelvideoprocessing-prop))
* `supportsFocus`: Determines whether this camera device supports focusing (See [Focusing](focusing))
:::note
@@ -113,27 +112,6 @@ function App() {
}
```
### The `supportsParallelVideoProcessing` prop
Camera devices provide the [`supportsParallelVideoProcessing` property](/docs/api/interfaces/CameraDevice#supportsparallelvideoprocessing) which determines whether the device supports using Video Recordings (`video={true}`) and Frame Processors (`frameProcessor={...}`) at the same time.
If this property is `false`, you can either enable `video`, or add a `frameProcessor`, but not both.
* On iOS this value is always `true`.
* On newer Android devices this value is always `true`.
* On older Android devices this value is `false` if the Camera's hardware level is `LEGACY` or `LIMITED`, `true` otherwise. (See [`INFO_SUPPORTED_HARDWARE_LEVEL`](https://developer.android.com/reference/android/hardware/camera2/CameraCharacteristics#INFO_SUPPORTED_HARDWARE_LEVEL) or [the tables at "Regular capture"](https://developer.android.com/reference/android/hardware/camera2/CameraDevice#regular-capture))
#### Examples
* An app that only supports **taking photos** (e.g. a vintage Polaroid Camera app) works on every Camera device because the `supportsParallelVideoProcessing` only affects _video processing_.
* An app that supports **taking photos** and **videos** (e.g. a Camera app) works on every Camera device because only a single _video processing_ feature is used (`video`).
* An app that only uses **Frame Processors** (e.g. the "Hotdog/Not Hotdog detector" app) (no taking photos or videos) works on every Camera device because it only uses a single _video processing_ feature (`frameProcessor`).
* An app that uses **Frame Processors** and supports **taking photos** and **videos** (e.g. Snapchat, Instagram) only works on Camera devices where `supportsParallelVideoProcessing` is `true`. (iPhones and newer Android Phones)
:::note
Actually the limitation also affects the `photo` feature, but VisionCamera will automatically fall-back to **Snapshot capture** if you are trying to use multiple features (`photo` + `video` + `frameProcessor`) and they are not natively supported. (See ["Taking Snapshots"](/docs/guides/capturing#taking-snapshots))
:::
<br />
#### 🚀 Next section: [Camera Lifecycle](lifecycle)
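
As an illustration of the zoom bounds described above, a sketch that starts from `neutralZoom` and clamps user input into `[minZoom, maxZoom]` (the clamping helper is hypothetical; the props are the documented ones):

```tsx
import React from 'react'
import { Camera, useCameraDevices } from 'react-native-vision-camera'

function ZoomableCamera({ zoomFactor }: { zoomFactor: number }) {
  const devices = useCameraDevices('wide-angle-camera')
  const device = devices.back
  if (device == null) return null

  // Start at neutralZoom, then clamp whatever the user requests into the valid range.
  const zoom = Math.min(Math.max(zoomFactor * device.neutralZoom, device.minZoom), device.maxZoom)
  return <Camera device={device} isActive={true} zoom={zoom} style={{ flex: 1 }} />
}
```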

@@ -39,13 +39,6 @@ You can also manually get all camera devices and decide which device to use base
This example shows how you would pick the format with the _highest frame rate_:
```tsx
function getMaxFps(format: CameraDeviceFormat): number {
return format.frameRateRanges.reduce((prev, curr) => {
if (curr.maxFrameRate > prev) return curr.maxFrameRate
else return prev
}, 0)
}
function App() {
const devices = useCameraDevices('wide-angle-camera')
const device = devices.back
@@ -53,7 +46,7 @@ function App() {
const format = useMemo(() => {
return device?.formats.reduce((prev, curr) => {
if (prev == null) return curr
if (getMaxFps(curr) > getMaxFps(prev)) return curr
if (curr.maxFps > prev.maxFps) return curr
else return prev
}, undefined)
}, [device?.formats])
@@ -127,7 +120,6 @@ Other props that depend on the `format`:
* `fps`: Specifies the frame rate to use
* `hdr`: Enables HDR photo or video capture and preview
* `lowLightBoost`: Enables a night-mode/low-light-boost for photo or video capture and preview
* `colorSpace`: Uses the specified color-space for photo or video capture and preview (iOS only since Android only uses `YUV`)
* `videoStabilizationMode`: Specifies the video stabilization mode to use for this camera device
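
Once a format is chosen, those props are passed next to it on the `<Camera>` component; a hedged sketch (it assumes the chosen format actually supports 30 FPS, HDR and `standard` stabilization):

```tsx
import React from 'react'
import { Camera, CameraDevice, CameraDeviceFormat } from 'react-native-vision-camera'

function FormatCamera({ device, format }: { device: CameraDevice; format: CameraDeviceFormat }) {
  return (
    <Camera
      device={device}
      format={format}
      fps={30}                          // must lie within format.minFps..format.maxFps
      hdr={true}                        // only if the format supports HDR
      videoStabilizationMode="standard" // pick one of format.videoStabilizationModes
      isActive={true}
      style={{ flex: 1 }}
    />
  )
}
```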

@@ -54,7 +54,7 @@ Frame processors are by far not limited to Hotdog detection, other examples incl
Because they are written in JS, Frame Processors are **simple**, **powerful**, **extensible** and **easy to create** while still running at **native performance**. (Frame Processors can run up to **1000 times a second**!) Also, you can use **fast-refresh** to quickly see changes while developing or publish [over-the-air updates](https://github.com/microsoft/react-native-code-push) to tweak the Hotdog detector's sensitivity in live apps without pushing a native update.
:::note
Frame Processors require [**react-native-worklets**](https://github.com/chrfalch/react-native-worklets) 1.0.0 or higher.
Frame Processors require [**react-native-worklets-core**](https://github.com/chrfalch/react-native-worklets-core) 1.0.0 or higher.
:::
### Interacting with Frame Processors
@@ -201,7 +201,7 @@ If you are using the [react-hooks ESLint plugin](https://www.npmjs.com/package/e
#### Frame Processors
**Frame Processors** are JS functions that will be **workletized** using [react-native-worklets](https://github.com/chrfalch/react-native-worklets). They are created on a **parallel camera thread** using a separate JavaScript Runtime (_"VisionCamera JS-Runtime"_) and are **invoked synchronously** (using JSI) without ever going over the bridge. In a **Frame Processor** you can write normal JS code, call back to the React-JS Thread (e.g. `setState`), use [Shared Values](https://docs.swmansion.com/react-native-reanimated/docs/fundamentals/shared-values/) and call **Frame Processor Plugins**.
**Frame Processors** are JS functions that will be **workletized** using [react-native-worklets-core](https://github.com/chrfalch/react-native-worklets-core). They are created on a **parallel camera thread** using a separate JavaScript Runtime (_"VisionCamera JS-Runtime"_) and are **invoked synchronously** (using JSI) without ever going over the bridge. In a **Frame Processor** you can write normal JS code, call back to the React-JS Thread (e.g. `setState`), use [Shared Values](https://docs.swmansion.com/react-native-reanimated/docs/fundamentals/shared-values/) and call **Frame Processor Plugins**.
> See [**the example Frame Processor**](https://github.com/mrousavy/react-native-vision-camera/blob/cf68a4c6476d085ec48fc424a53a96962e0c33f9/example/src/CameraPage.tsx#L199-L203)
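
A minimal Frame Processor under the renamed dependency could look like this sketch (it assumes the standard `useFrameProcessor` + `'worklet'` pattern from these docs):

```tsx
import React from 'react'
import { Camera, useCameraDevices, useFrameProcessor } from 'react-native-vision-camera'

function ScannerCamera() {
  const devices = useCameraDevices()
  const device = devices.back

  const frameProcessor = useFrameProcessor((frame) => {
    'worklet'
    // Runs on the parallel camera thread via JSI, without the bridge.
    console.log(`New frame: ${frame.width}x${frame.height}`)
  }, [])

  if (device == null) return null
  return <Camera device={device} isActive={true} frameProcessor={frameProcessor} style={{ flex: 1 }} />
}
```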

@@ -39,69 +39,69 @@ module.exports = {
### Create proxy for original and mocked modules
1. Create a new folder `vision-camera` anywhere in your project.
2. Inside that folder, create `vision-camera.js` and `vision-camera.e2e.js`.
3. Inside `vision-camera.js`, export the original react native modules you need to mock, and
inside `vision-camera.e2e.js` export the mocked modules.
1. Create a new folder `vision-camera` anywhere in your project.
2. Inside that folder, create `vision-camera.js` and `vision-camera.e2e.js`.
3. Inside `vision-camera.js`, export the original react native modules you need to mock, and
inside `vision-camera.e2e.js` export the mocked modules.
In this example, several functions of the modules `Camera` and `sortDevices` are mocked.
Define your mocks following the [original definitions](https://github.com/mrousavy/react-native-vision-camera/tree/main/src).
In this example, several functions of the modules `Camera` and `sortDevices` are mocked.
Define your mocks following the [original definitions](https://github.com/mrousavy/react-native-vision-camera/tree/main/src).
```js
// vision-camera.js
```js
// vision-camera.js
import { Camera, sortDevices } from 'react-native-vision-camera';
import { Camera, sortDevices } from 'react-native-vision-camera';
export const VisionCamera = Camera;
export const visionCameraSortDevices = sortDevices;
```
export const VisionCamera = Camera;
export const visionCameraSortDevices = sortDevices;
```
```js
// vision-camera.e2e.js
```js
// vision-camera.e2e.js
import React from 'react';
import RNFS, { writeFile } from 'react-native-fs';
import React from 'react';
import RNFS, { writeFile } from 'react-native-fs';
console.log('[DETOX] Using mocked react-native-vision-camera');
console.log('[DETOX] Using mocked react-native-vision-camera');
export class VisionCamera extends React.PureComponent {
static async getAvailableCameraDevices() {
return (
[
{
position: 'back',
},
]
);
}
export class VisionCamera extends React.PureComponent {
static async getAvailableCameraDevices() {
return (
[
{
position: 'back',
},
]
);
}
static async getCameraPermissionStatus() {
return 'authorized';
}
static async getCameraPermissionStatus() {
return 'granted';
}
static async requestCameraPermission() {
return 'authorized';
}
static async requestCameraPermission() {
return 'granted';
}
async takePhoto() {
const writePath = `${RNFS.DocumentDirectoryPath}/simulated_camera_photo.png`;
async takePhoto() {
const writePath = `${RNFS.DocumentDirectoryPath}/simulated_camera_photo.png`;
const imageDataBase64 = 'some_large_base_64_encoded_simulated_camera_photo';
await writeFile(writePath, imageDataBase64, 'base64');
const imageDataBase64 = 'some_large_base_64_encoded_simulated_camera_photo';
await writeFile(writePath, imageDataBase64, 'base64');
return { path: writePath };
}
return { path: writePath };
}
render() {
return null;
}
}
render() {
return null;
}
}
export const visionCameraSortDevices = (_left, _right) => 1;
```
export const visionCameraSortDevices = (_left, _right) => 1;
```
These mocked modules allows us to get authorized camera permissions, get one back camera
available and take a fake photo, while the component doesn't render when instantiated.
These mocked modules allows us to get granted camera permissions, get one back camera
available and take a fake photo, while the component doesn't render when instantiated.
### Use proxy module
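
Application code then imports the proxy rather than the library, so the bundler can substitute `vision-camera.e2e.js` in Detox builds; a small sketch of the consuming side (the `listBackDevices` helper is hypothetical):

```ts
// Import from the proxy folder, never from react-native-vision-camera directly:
import { VisionCamera, visionCameraSortDevices } from './vision-camera/vision-camera'

async function listBackDevices() {
  const devices = await VisionCamera.getAvailableCameraDevices()
  return devices.sort(visionCameraSortDevices)
}
```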

@@ -44,7 +44,7 @@ expo install react-native-vision-camera
VisionCamera requires **iOS 11 or higher**, and **Android-SDK version 21 or higher**. See [Troubleshooting](/docs/guides/troubleshooting) if you're having installation issues.
> **(Optional)** If you want to use [**Frame Processors**](/docs/guides/frame-processors), you need to install [**react-native-worklets**](https://github.com/chrfalch/react-native-worklets) 1.0.0 or higher.
> **(Optional)** If you want to use [**Frame Processors**](/docs/guides/frame-processors), you need to install [**react-native-worklets-core**](https://github.com/chrfalch/react-native-worklets-core) 1.0.0 or higher.
## Updating manifests
@@ -138,7 +138,7 @@ const microphonePermission = await Camera.getMicrophonePermissionStatus()
A permission status can have the following values:
* `authorized`: Your app is authorized to use said permission. Continue with [**using the `<Camera>` view**](#use-the-camera-view).
* `granted`: Your app is authorized to use said permission. Continue with [**using the `<Camera>` view**](#use-the-camera-view).
* `not-determined`: Your app has not yet requested permission from the user. [Continue by calling the **request** functions.](#requesting-permissions)
* `denied`: Your app has already requested permissions from the user, but was explicitly denied. You cannot use the **request** functions again, but you can use the [`Linking` API](https://reactnative.dev/docs/linking#opensettings) to redirect the user to the Settings App where he can manually grant the permission.
* `restricted`: (iOS only) Your app cannot use the Camera or Microphone because that functionality has been restricted, possibly due to active restrictions such as parental controls being in place.
@@ -158,7 +158,7 @@ const newMicrophonePermission = await Camera.requestMicrophonePermission()
The permission request status can have the following values:
* `authorized`: Your app is authorized to use said permission. Continue with [**using the `<Camera>` view**](#use-the-camera-view).
* `granted`: Your app is authorized to use said permission. Continue with [**using the `<Camera>` view**](#use-the-camera-view).
* `denied`: The user explicitly denied the permission request alert. You cannot use the **request** functions again, but you can use the [`Linking` API](https://reactnative.dev/docs/linking#opensettings) to redirect the user to the Settings App where he can manually grant the permission.
* `restricted`: (iOS only) Your app cannot use the Camera or Microphone because that functionality has been restricted, possibly due to active restrictions such as parental controls being in place.
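
Putting the two status flows together, a typical permission guard might look like this sketch (it combines the documented calls above with React Native's `Linking.openSettings()`):

```ts
import { Linking } from 'react-native'
import { Camera } from 'react-native-vision-camera'

async function ensureCameraPermission(): Promise<boolean> {
  const status = await Camera.getCameraPermissionStatus()
  if (status === 'granted') return true

  if (status === 'not-determined') {
    // Ask the user; resolves to 'granted' or 'denied'.
    return (await Camera.requestCameraPermission()) === 'granted'
  }

  // 'denied' or 'restricted': we can't re-prompt, so send the user to Settings.
  await Linking.openSettings()
  return false
}
```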

@@ -1,10 +0,0 @@
# TODO
This is an internal TODO list which I am using to keep track of some of the features that are still missing.
* [ ] Mirror images from selfie cameras (iOS Done, Android WIP)
* [ ] Allow camera switching (front <-> back) while recording and stich videos together
* [ ] Make `startRecording()` async. Due to NativeModules limitations, we can only have either one callback or one promise in a native function. For `startRecording()` we need both, since you probably also want to catch any errors that occured during a `startRecording()` call (or wait until the recording has actually started, since this can also take some time)
* [ ] Return a `jsi::Value` reference for images (`UIImage`/`Bitmap`) on `takePhoto()` and `takeSnapshot()`. This way, we skip the entire file writing and reading, making image capture _a lot_ faster.
* [ ] Implement frame processors. The idea here is that the user passes a small JS function (worklet) to the `Camera::frameProcessor` prop which will then get called on every frame the camera previews. (I'd say we cap it to 30 times per second, even if the camera fps is higher) This can then be used to scan QR codes, detect faces, detect depth, render something ontop of the camera such as color filters, QR code boundaries or even dog filters, possibly even use AR - all from a single, small, and highly flexible JS function!
* [ ] Create a custom MPEG4 encoder to allow for more customizability in `recordVideo()` (`bitRate`, `priority`, `minQuantizationParameter`, `allowFrameReordering`, `expectedFrameRate`, `realTime`, `minimizeMemoryUsage`)

@@ -14,45 +14,63 @@ Before opening an issue, make sure you try the following:
## iOS
1. Try cleaning and rebuilding **everything**:
### Build Issues
1. Try building through Xcode instead of the commandline. The error panel should give you more information about any build errors.
2. Try cleaning and rebuilding **everything**:
```sh
rm -rf package-lock.json && rm -rf yarn.lock && rm -rf node_modules
rm -rf ios/Podfile.lock && rm -rf ios/Pods
npm i # or "yarn"
cd ios && pod repo update && pod update && pod install
```
2. Check your minimum iOS version. VisionCamera requires a minimum iOS version of **12.4**.
3. Check your minimum iOS version. VisionCamera requires a minimum iOS version of **12.4**.
1. Open your `Podfile`
2. Make sure `platform :ios` is set to `12.4` or higher
3. Make sure `iOS Deployment Target` is set to `12.4` or higher (`IPHONEOS_DEPLOYMENT_TARGET` in `project.pbxproj`)
3. Check your Swift version. VisionCamera requires a minimum Swift version of **5.2**.
4. Check your Swift version. VisionCamera requires a minimum Swift version of **5.2**.
1. Open `project.pbxproj` in a Text Editor
2. If the `LIBRARY_SEARCH_PATH` value is set, make sure there is no explicit reference to Swift-5.0. If there is, remove it. See [this StackOverflow answer](https://stackoverflow.com/a/66281846/1123156).
3. If the `SWIFT_VERSION` value is set, make sure it is set to `5.2` or higher.
4. Make sure you have created a Swift bridging header in your project.
5. Make sure you have created a Swift bridging header in your project.
1. Open your project (`.xcworkspace`) in Xcode
2. Press **File** > **New** > **File** (<kbd>⌘</kbd>+<kbd>N</kbd>)
3. Select **Swift File** and press **Next**
4. Choose whatever name you want, e.g. `File.swift` and press **Create**
5. Press **Create Bridging Header** when promted.
5. If you're having build issues, try:
1. Building without Skia. Set `$VCDisableSkia = true` in the top of your Podfile, and try rebuilding.
2. Building without Frame Processors. Set `$VCDisableFrameProcessors = true` in the top of your Podfile, and try rebuilding.
6. If you're having runtime issues, check the logs in Xcode to find out more. In Xcode, go to **View** > **Debug Area** > **Activate Console** (<kbd>⇧</kbd>+<kbd>⌘</kbd>+<kbd>C</kbd>).
6. Try building without Skia. Set `$VCDisableSkia = true` in the top of your Podfile, and try rebuilding.
7. Try building without Frame Processors. Set `$VCDisableFrameProcessors = true` in the top of your Podfile, and try rebuilding.
### Runtime Issues
1. Check the logs in Xcode to find out more. In Xcode, go to **View** > **Debug Area** > **Activate Console** (<kbd>⇧</kbd>+<kbd>⌘</kbd>+<kbd>C</kbd>).
* For errors without messages, there's often an error code attached. Look up the error code on [osstatus.com](https://www.osstatus.com) to get more information about a specific error.
7. If your Frame Processor is not running, make sure you check the native Xcode logs to find out why. Also make sure you are not using a remote JS debugger such as Google Chrome, since those don't work with JSI.
2. If your Frame Processor is not running, make sure you check the native Xcode logs. There is useful information about the Frame Processor Runtime that will tell you if something goes wrong.
3. If your Frame Processor is not running, make sure you are not using a remote JS debugger such as Google Chrome, since those don't work with JSI.
4. If you are experiencing black-screens, try removing all properties such as `fps`, `hdr` or `format` on the `<Camera>` component except for the required ones:
```tsx
<Camera device={device} isActive={true} style={{ width: 500, height: 500 }} />
```
5. Investigate the camera devices this phone has and make sure you're using a valid one. Look for properties such as `pixelFormats`, `id`, and `hardwareLevel`.
```tsx
Camera.getAvailableCameraDevices().then((d) => console.log(JSON.stringify(d, null, 2)))
```
## Android
1. Try cleaning and rebuilding **everything**:
### Build Issues
1. Try building through Android Studio instead of the commandline. The error panel should give you more information about any build errors.
2. Scroll up in the build output to make sure you're not missing any errors. Remember: "Build failed" is not an error message. Scroll further up.
3. Try cleaning and rebuilding **everything**:
```sh
./android/gradlew clean
rm -rf package-lock.json && rm -rf yarn.lock && rm -rf node_modules
npm i # or "yarn"
rm -rf android/.gradle android/.idea android/app/build android/build
rm -rf package-lock.json yarn.lock node_modules
yarn # or `npm i`
```
2. Since the Android implementation uses the not-yet fully stable **CameraX** API, make sure you've browsed the [CameraX issue tracker](https://issuetracker.google.com/issues?q=componentid:618491%20status:open) to find out if your issue is a limitation by the **CameraX** library even I cannot get around.
3. Make sure you have installed the [Android NDK](https://developer.android.com/ndk).
4. Make sure your minimum SDK version is **21 or higher**, and target SDK version is **33 or higher**. See [the example's `build.gradle`](https://github.com/mrousavy/react-native-vision-camera/blob/main/example/android/build.gradle#L5-L10) for reference.
4. Make sure you have installed the [Android NDK](https://developer.android.com/ndk).
5. Make sure your minimum SDK version is **21 or higher**, and target SDK version is **33 or higher**. See [the example's `build.gradle`](https://github.com/mrousavy/react-native-vision-camera/blob/main/example/android/build.gradle#L5-L10) for reference.
1. Open your `build.gradle`
2. Set `buildToolsVersion` to `33.0.0` or higher
3. Set `compileSdkVersion` to `33` or higher
@@ -63,16 +81,27 @@ Before opening an issue, make sure you try the following:
```
classpath("com.android.tools.build:gradle:7.3.1")
```
4. Make sure your Gradle Wrapper version is `7.5.1` or higher. In `gradle-wrapper.properties`, set:
6. Make sure your Gradle Wrapper version is `7.5.1` or higher. In `gradle-wrapper.properties`, set:
```
distributionUrl=https\://services.gradle.org/distributions/gradle-7.5.1-all.zip
```
5. If you're having build issues, try:
1. Building without Skia. Set `disableSkia = true` in your `gradle.properties`, and try rebuilding.
2. Building without Frame Processors. Set `disableFrameProcessors = true` in your `gradle.properties`, and try rebuilding.
6. If you're having runtime issues, check the logs in Android Studio/Logcat to find out more. In Android Studio, go to **View** > **Tool Windows** > **Logcat** (<kbd>⌘</kbd>+<kbd>6</kbd>) or run `adb logcat` in Terminal.
7. If a camera device is not being returned by [`Camera.getAvailableCameraDevices()`](/docs/api/classes/Camera#getavailablecameradevices), make sure it is a Camera2 compatible device. See [this section in the Android docs](https://developer.android.com/reference/android/hardware/camera2/CameraDevice#reprocessing) for more information.
8. If your Frame Processor is not running, make sure you check the native Android Studio/Logcat logs to find out why. Also make sure you are not using a remote JS debugger such as Google Chrome, since those don't work with JSI.
7. Try building without Skia. Set `disableSkia = true` in your `gradle.properties`, and try rebuilding.
8. Try building without Frame Processors. Set `disableFrameProcessors = true` in your `gradle.properties`, and try rebuilding.
### Runtime Issues
1. Check the logs in Android Studio/Logcat to find out more. In Android Studio, go to **View** > **Tool Windows** > **Logcat** (<kbd>⌘</kbd>+<kbd>6</kbd>) or run `adb logcat` in Terminal.
2. If a camera device is not being returned by [`Camera.getAvailableCameraDevices()`](/docs/api/classes/Camera#getavailablecameradevices), make sure it is a Camera2 compatible device. See [this section in the Android docs](https://developer.android.com/reference/android/hardware/camera2/CameraDevice#reprocessing) for more information.
3. If your Frame Processor is not running, make sure you check the native Android Studio/Logcat logs. There is useful information about the Frame Processor Runtime that will tell you if something goes wrong.
4. If your Frame Processor is not running, make sure you are not using a remote JS debugger such as Google Chrome, since those don't work with JSI.
5. If you are experiencing black-screens, try removing all properties such as `fps`, `hdr` or `format` on the `<Camera>` component except for the required ones:
```tsx
<Camera device={device} isActive={true} style={{ width: 500, height: 500 }} />
```
6. Investigate the camera devices this phone has and make sure you're using a valid one. Look for properties such as `pixelFormats`, `id`, and `hardwareLevel`.
```tsx
Camera.getAvailableCameraDevices().then((d) => console.log(JSON.stringify(d, null, 2)))
```
## Issues