feat: ✨ V3 ✨ (#1466)
See https://github.com/mrousavy/react-native-vision-camera/issues/1376

## Breaking Changes

* Frame Processors are now **synchronous**. Previously they ran on a separate thread. If you want to run something on a separate thread now, use `runAsync` inside a Frame Processor.
* Frame Processor Plugins are no longer registered on the global object with a `__` prefix; they are stored directly in the `FrameProcessorPlugins` object exported by react-native-vision-camera (e.g. replace `__scanQRCodes(frame)` with `FrameProcessorPlugins.scanQRCodes(frame)`).
* `frameProcessorFps` no longer exists. Use `runAtTargetFps` inside a Frame Processor to throttle some calls.
* `onFrameProcessorPerformanceSuggestionAvailable` no longer exists. Use the FPS display (`enableFpsGraph={true}`) to see how your Frame Processor performs over time. This is more in line with how React Native works (Dev Tools / Perf Monitor).
* VisionCamera V3 will not work on RN 0.70 or below; you need RN 0.71. The build script is now much simpler and smaller, making builds faster and far less error-prone, and backwards compatibility would be too complex to maintain here.
* Reanimated is no longer used as the Worklet runtime. VisionCamera now uses [react-native-worklets-core](https://github.com/margelo/react-native-worklets-core).

## Progress

You can test the latest V3 release by creating a new RN project with RN 0.71 and installing VisionCamera + RNWorklets:

```sh
yarn add react-native-vision-camera@3.0.0-rc.5
yarn add react-native-worklets-core
yarn add @shopify/react-native-skia
```

Things to test:

* TensorFlow Lite plugin to load any `.tflite` model! ✨ (see [this PR for more info](https://github.com/mrousavy/react-native-vision-camera/pull/1633); it will become a separate library soon)
* Drawing onto a Frame using Skia! 🎉
* Using `frame.toArrayBuffer()` to get the Frame's byte content in JS
* The new Android build script. This should drastically speed up build times! 💨
* The new Worklet library. This replaces Reanimated Worklets and should be faster and more stable :)
* The new synchronous Frame Processors. These should be faster :)
* `runAtTargetFps` and `runAsync` in Frame Processors
* Using HostObjects or HostFunctions (like models from PyTorch) inside a Frame Processor. This will probably require a few native bindings on PyTorch's end to make the integration work (cc @raedle)

Overall, V3 is close to completion. I have a few things to do in the coming days, so I'm not sure how much work I can put into this. **If anyone wants to support the development of V3, I'd appreciate donations / sponsors: https://github.com/sponsors/mrousavy** ❤️ :)

## Related issues / features

- resolves https://github.com/mrousavy/react-native-vision-camera/issues/1376
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/281
- resolves https://github.com/mrousavy/react-native-vision-camera/issues/211
- resolves https://github.com/mrousavy/react-native-vision-camera/issues/130
- resolves https://github.com/mrousavy/react-native-vision-camera/issues/117
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/76
- resolves https://github.com/mrousavy/react-native-vision-camera/issues/75
- resolves https://github.com/mrousavy/react-native-vision-camera/issues/562
- resolves https://github.com/mrousavy/react-native-vision-camera/issues/565
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/570
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/287
- resolves https://github.com/mrousavy/react-native-vision-camera/issues/311
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/315
- resolves https://github.com/mrousavy/react-native-vision-camera/issues/323
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/340
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/354
- resolves https://github.com/mrousavy/react-native-vision-camera/issues/420
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/434
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/452
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/496
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/497
- resolves https://github.com/mrousavy/react-native-vision-camera/issues/499
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/516
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/527
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/542
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/548
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/561
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/740
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/770

...and then pretty much every Android issue lol

- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1675 (**maybe**, please test @PrernaBudhraja)
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1671

Maybe also (not tested):

- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1698
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1687
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1685
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1681
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1650
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1646
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1635
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1631
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1621
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1615
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1612
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1605
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1599
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1585
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1581
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1569
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1568
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1565
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1561
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1558
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1554
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1551
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1547
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1543
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1538
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1536
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1534
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1528
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1520
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1498
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1489
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1477
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1474
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1463
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1462
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1449
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1443
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1437
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1431
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1429
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1427
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1423
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1416
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1407
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1403
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1402
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1398
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1396
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1395
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1379
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1377
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1374
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1373
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1365
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1356
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1353
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1352
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1351
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1343
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1340
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1334
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1330
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1322
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1296
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1283
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1260
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1253
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1251
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1245
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1238
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1227
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1226
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1225
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1222
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1211
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1208
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1193
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1191
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1184
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1164
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1143
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1128
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1122
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1120
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1110
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1097
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1081
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1080
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1064
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1053
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1047
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1044
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1032
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1026
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1023
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1015
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/1012
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/997
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/960
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/959
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/954
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/946
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/945
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/922
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/908
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/907
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/868
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/855
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/834
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/793
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/779
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/746
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/740
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/727
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/671
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/613
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/595
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/588
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/570
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/569
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/542
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/516
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/515
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/434
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/354
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/323
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/315
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/281
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/211
- fixes https://github.com/mrousavy/react-native-vision-camera/issues/76
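Since `frameProcessorFps` is gone, throttling now happens inside the frame processor itself via `runAtTargetFps`. The following is a minimal, hypothetical sketch of the throttling idea only — `makeFpsThrottle` and the fake clock are illustrative names, not VisionCamera's actual implementation:

```typescript
// Sketch of runAtTargetFps-style throttling: run a callback synchronously,
// but skip it if less than (1000 / targetFps) ms passed since the last run.
function makeFpsThrottle<T>(targetFps: number, nowMs: () => number) {
  const minIntervalMs = 1000 / targetFps;
  let lastRunMs = -Infinity;
  return (callback: () => T): T | undefined => {
    const now = nowMs();
    if (now - lastRunMs >= minIntervalMs) {
      lastRunMs = now;
      return callback(); // runs synchronously, like inside a Frame Processor
    }
    return undefined; // skipped: too soon since the last run
  };
}

// Simulate a 30 FPS camera feed throttled down to a 5 FPS target:
let fakeClock = 0;
const runAtTargetFps = makeFpsThrottle<number>(5, () => fakeClock);
let ran = 0;
for (let frame = 0; frame < 30; frame++) {
  fakeClock = frame * (1000 / 30); // one frame every ~33 ms
  runAtTargetFps(() => ++ran);
}
console.log(ran); // about 5 of the 30 frames actually ran the callback
```

The real `runAtTargetFps(fps, callback)` is called inside a frame processor worklet; the point is only that throttling is now a per-call decision in JS rather than a camera-level `frameProcessorFps` prop.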
---

Commit `02a0371c65`

### `.github/ISSUE_TEMPLATE/BUG_REPORT.yml` (27 changes)
```diff
@@ -5,9 +5,9 @@ labels: [🐛 bug]
 body:
   - type: textarea
     attributes:
-      label: What were you trying to do?
-      description: Explain what you are trying to do.
-      placeholder: I wanted to take a picture.
+      label: What's happening?
+      description: Explain what you are trying to do and what happened instead. Be as precise as possible, I can't help you if I don't understand your issue.
+      placeholder: I wanted to take a picture, but the method failed with this error "[capture/photo-not-enabled] Failed to take photo, photo is not enabled!"
     validations:
       required: true
   - type: textarea
@@ -15,18 +15,16 @@ body:
       label: Reproduceable Code
       description: Share a small reproduceable code snippet here (or the entire file if necessary). This will be automatically formatted into code, so no need for backticks.
       render: tsx
   - type: textarea
     attributes:
       label: What happened instead?
       description: Explain what happened instead of the desired outcome. Did something crash?
       placeholder: The app crashes with an `InvalidPhotoCodec` error.
     validations:
       required: true
   - type: textarea
     attributes:
       label: Relevant log output
       description: Please copy and paste any relevant log output (Xcode Logs/Android Studio Logcat). This will be automatically formatted into code, so no need for backticks.
       render: shell
   - type: textarea
     attributes:
       label: Camera Device
       description: Please paste the JSON Camera `device` that was used here. (`console.log(JSON.stringify(device, null, 2))`) This will be automatically formatted into code, so no need for backticks.
       render: shell
   - type: input
     attributes:
       label: Device
@@ -38,15 +36,22 @@ body:
     attributes:
       label: VisionCamera Version
       description: Which version of react-native-vision-camera are you using?
-      placeholder: ex. 2.0.1-beta.1
+      placeholder: ex. 3.1.6
     validations:
       required: true
+  - type: checkboxes
+    attributes:
+      label: Can you reproduce this issue in the VisionCamera Example app?
+      description: Run the example app (`package/example/`) and see if the issue is reproduceable here.
+      options:
+        - label: I can reproduce the issue in the VisionCamera Example app.
   - type: checkboxes
     attributes:
       label: Additional information
       description: Please check all the boxes that apply
       options:
         - label: I am using Expo
         - label: I have enabled Frame Processors (react-native-worklets-core)
         - label: I have read the [Troubleshooting Guide](https://react-native-vision-camera.com/docs/guides/troubleshooting)
           required: true
         - label: I agree to follow this project's [Code of Conduct](https://github.com/mrousavy/react-native-vision-camera/blob/main/CODE_OF_CONDUCT.md)
```
### `.github/ISSUE_TEMPLATE/BUILD_ERROR.yml` (64 changes, new file)
```diff
@@ -0,0 +1,64 @@
+name: 🔧 Build Error
+description: File a build error bug report
+title: "🔧 "
+labels: [🔧 build error]
+body:
+  - type: textarea
+    attributes:
+      label: How were you trying to build the app?
+      description: Explain how you tried to build the app, through Xcode, `yarn ios`, a CI, or other. Be as precise as possible, I can't help you if I don't understand your issue.
+      placeholder: I tried to build my app with react-native-vision-camera using the `yarn ios` command, and it failed.
+    validations:
+      required: true
+  - type: textarea
+    attributes:
+      label: Full build logs
+      description: Share the full build logs that appear in the console. Make sure you don't just paste the last few lines here, but rather everything from start to end.
+      render: tsx
+  - type: textarea
+    attributes:
+      label: Project dependencies
+      description: Share all of your project's dependencies including their versions from `package.json`. This is useful if there are any other conflicting libraries.
+      render: tsx
+    validations:
+      required: true
+  - type: dropdown
+    attributes:
+      label: Target platforms
+      description: Select the platforms where the build error occurs.
+      multiple: true
+      options:
+        - iOS
+        - Android
+    validations:
+      required: true
+  - type: dropdown
+    attributes:
+      label: Operating system
+      description: Select your operating system that you are trying to build on.
+      multiple: true
+      options:
+        - MacOS
+        - Windows
+        - Linux
+    validations:
+      required: true
+  - type: checkboxes
+    attributes:
+      label: Can you build the VisionCamera Example app?
+      description: Try to build the example app (`package/example/`) and see if the issue is reproduceable here.
+      options:
+        - label: I can build the VisionCamera Example app.
+  - type: checkboxes
+    attributes:
+      label: Additional information
+      description: Please check all the boxes that apply
+      options:
+        - label: I am using Expo
+        - label: I have enabled Frame Processors (react-native-worklets-core)
+        - label: I have read the [Troubleshooting Guide](https://react-native-vision-camera.com/docs/guides/troubleshooting)
+          required: true
+        - label: I agree to follow this project's [Code of Conduct](https://github.com/mrousavy/react-native-vision-camera/blob/main/CODE_OF_CONDUCT.md)
+          required: true
+        - label: I searched for [similar issues in this repository](https://github.com/mrousavy/react-native-vision-camera/issues) and found none.
+          required: true
```
### `.github/workflows/build-android.yml` (57 changes)
```diff
@@ -6,6 +6,7 @@ on:
       - main
     paths:
       - '.github/workflows/build-android.yml'
       - 'cpp/**'
       - 'android/**'
       - 'example/android/**'
       - 'yarn.lock'
@@ -13,22 +14,26 @@ on:
   pull_request:
     paths:
       - '.github/workflows/build-android.yml'
       - 'cpp/**'
       - 'android/**'
       - 'example/android/**'
       - 'yarn.lock'
       - 'example/yarn.lock'

 jobs:
-  build_example:
+  build:
     name: Build Android Example App
     runs-on: ubuntu-latest
+    defaults:
+      run:
+        working-directory: ./package
     steps:
       - uses: actions/checkout@v2

-      - name: Setup JDK 1.8
+      - name: Setup JDK 11
         uses: actions/setup-java@v1
         with:
-          java-version: 1.8
+          java-version: 11

       - name: Get yarn cache directory path
         id: yarn-cache-dir-path
@@ -55,7 +60,49 @@ jobs:
           key: ${{ runner.os }}-gradle-${{ hashFiles('**/*.gradle*', '**/gradle-wrapper.properties') }}
           restore-keys: |
             ${{ runner.os }}-gradle-
-      - name: Run Gradle Build for android/
-        run: cd android && ./gradlew assembleDebug --build-cache && cd ..
       - name: Run Gradle Build for example/android/
         run: cd example/android && ./gradlew assembleDebug --build-cache && cd ../..

+  build-no-frame-processors:
+    name: Build Android Example App (without Frame Processors)
+    runs-on: ubuntu-latest
+    defaults:
+      run:
+        working-directory: ./package
+    steps:
+      - uses: actions/checkout@v2
+
+      - name: Setup JDK 11
+        uses: actions/setup-java@v1
+        with:
+          java-version: 11
+
+      - name: Get yarn cache directory path
+        id: yarn-cache-dir-path
+        run: echo "::set-output name=dir::$(yarn cache dir)"
+      - name: Restore node_modules from cache
+        uses: actions/cache@v2
+        id: yarn-cache
+        with:
+          path: ${{ steps.yarn-cache-dir-path.outputs.dir }}
+          key: ${{ runner.os }}-yarn-${{ hashFiles('**/yarn.lock') }}
+          restore-keys: |
+            ${{ runner.os }}-yarn-
+      - name: Install node_modules
+        run: yarn install --frozen-lockfile
+      - name: Install node_modules for example/
+        run: yarn install --frozen-lockfile --cwd example
+      - name: Remove react-native-worklets-core
+        run: yarn remove react-native-worklets-core --cwd example
+
+      - name: Restore Gradle cache
+        uses: actions/cache@v2
+        with:
+          path: |
+            ~/.gradle/caches
+            ~/.gradle/wrapper
+          key: ${{ runner.os }}-gradle-${{ hashFiles('**/*.gradle*', '**/gradle-wrapper.properties') }}
+          restore-keys: |
+            ${{ runner.os }}-gradle-
+      - name: Run Gradle Build for example/android/
+        run: cd example/android && ./gradlew assembleDebug --build-cache && cd ../..
```
### `.github/workflows/build-ios.yml` (70 changes)
```diff
@@ -6,12 +6,14 @@ on:
       - main
     paths:
       - '.github/workflows/build-ios.yml'
       - 'cpp/**'
       - 'ios/**'
       - '*.podspec'
       - 'example/ios/**'
   pull_request:
     paths:
       - '.github/workflows/build-ios.yml'
       - 'cpp/**'
       - 'ios/**'
       - '*.podspec'
       - 'example/ios/**'
@@ -22,7 +24,7 @@ jobs:
     runs-on: macOS-latest
     defaults:
       run:
-        working-directory: example/ios
+        working-directory: package/example/ios
     steps:
       - uses: actions/checkout@v2

@@ -47,9 +49,8 @@ jobs:
       - name: Setup Ruby (bundle)
         uses: ruby/setup-ruby@v1
         with:
-          ruby-version: 2.6
+          ruby-version: 2.6.10
           bundler-cache: true
-          working-directory: example/ios

       - name: Restore Pods cache
         uses: actions/cache@v2
@@ -62,7 +63,68 @@ jobs:
           restore-keys: |
             ${{ runner.os }}-pods-
       - name: Install Pods
-        run: bundle exec pod check || bundle exec pod install
+        run: pod install
       - name: Install xcpretty
         run: gem install xcpretty
       - name: Build App
         run: "set -o pipefail && xcodebuild \
           CC=clang CPLUSPLUS=clang++ LD=clang LDPLUSPLUS=clang++ \
           -derivedDataPath build -UseModernBuildSystem=YES \
           -workspace VisionCameraExample.xcworkspace \
           -scheme VisionCameraExample \
           -sdk iphonesimulator \
           -configuration Debug \
           -destination 'platform=iOS Simulator,name=iPhone 11 Pro' \
           build \
           CODE_SIGNING_ALLOWED=NO | xcpretty"

+  build-no-frame-processors:
+    name: Build iOS Example App without Frame Processors
+    runs-on: macOS-latest
+    defaults:
+      run:
+        working-directory: package/example/ios
+    steps:
+      - uses: actions/checkout@v2
+
+      - name: Get yarn cache directory path
+        id: yarn-cache-dir-path
+        run: echo "::set-output name=dir::$(yarn cache dir)"
+      - name: Restore node_modules from cache
+        uses: actions/cache@v2
+        id: yarn-cache
+        with:
+          path: ${{ steps.yarn-cache-dir-path.outputs.dir }}
+          key: ${{ runner.os }}-yarn-${{ hashFiles('**/yarn.lock') }}
+          restore-keys: |
+            ${{ runner.os }}-yarn-
+      - name: Install node_modules for example/
+        run: yarn install --frozen-lockfile --cwd ..
+      - name: Remove react-native-worklets-core
+        run: yarn remove react-native-worklets-core --cwd ..
+
+      - name: Restore buildcache
+        uses: mikehardy/buildcache-action@v1
+        continue-on-error: true
+
+      - name: Setup Ruby (bundle)
+        uses: ruby/setup-ruby@v1
+        with:
+          ruby-version: 2.6.10
+          bundler-cache: true
+
+      - name: Restore Pods cache
+        uses: actions/cache@v2
+        with:
+          path: |
+            example/ios/Pods
+            ~/Library/Caches/CocoaPods
+            ~/.cocoapods
+          key: ${{ runner.os }}-pods-${{ hashFiles('**/Podfile.lock') }}
+          restore-keys: |
+            ${{ runner.os }}-pods-
+      - name: Install Pods
+        run: pod install
+      - name: Install xcpretty
+        run: gem install xcpretty
+      - name: Build App
```
### `.github/workflows/notice-yarn-changes.yml` (19 changes, deleted)
```diff
@@ -1,19 +0,0 @@
-name: Notice yarn.lock changes
-on: [pull_request]
-
-jobs:
-  check:
-    runs-on: ubuntu-latest
-    permissions:
-      pull-requests: write
-    steps:
-      - name: Checkout
-        uses: actions/checkout@v2
-      - name: Notice yarn.lock changes
-        uses: Simek/yarn-lock-changes@main
-        with:
-          token: ${{ secrets.GITHUB_TOKEN }}
-          collapsibleThreshold: '25'
-          failOnDowngrade: 'false'
-          path: 'yarn.lock'
-          updateComment: 'true'
```
### `.github/workflows/validate-android.yml` (6 changes)
```diff
@@ -20,13 +20,13 @@ jobs:
     runs-on: ubuntu-latest
     defaults:
       run:
-        working-directory: ./android
+        working-directory: ./package/android
     steps:
       - uses: actions/checkout@v2
-      - name: Setup JDK 1.8
+      - name: Setup JDK 11
         uses: actions/setup-java@v1
         with:
-          java-version: 1.8
+          java-version: 11

       - name: Get yarn cache directory path
         id: yarn-cache-dir-path
```
### `.github/workflows/validate-cpp.yml` (36 changes)
```diff
@@ -6,30 +6,32 @@ on:
       - main
     paths:
       - '.github/workflows/validate-cpp.yml'
-      - 'cpp/**'
-      - 'android/src/main/cpp/**'
+      - 'package/cpp/**'
+      - 'package/android/src/main/cpp/**'
+      - 'package/ios/**'
   pull_request:
     paths:
       - '.github/workflows/validate-cpp.yml'
-      - 'cpp/**'
-      - 'android/src/main/cpp/**'
+      - 'package/cpp/**'
+      - 'package/android/src/main/cpp/**'
+      - 'package/ios/**'

 jobs:
   lint:
-    name: cpplint
+    name: Check clang-format
     runs-on: ubuntu-latest
+    strategy:
+      matrix:
+        path:
+          - 'package/cpp'
+          - 'package/android/src/main/cpp'
+          - 'package/ios'
     steps:
       - uses: actions/checkout@v2
-      - uses: reviewdog/action-cpplint@master
+      - name: Run clang-format style check
+        uses: mrousavy/clang-format-action@v1
         with:
-          github_token: ${{ secrets.github_token }}
-          reporter: github-pr-review
-          flags: --linelength=230 --exclude "android/src/main/cpp/reanimated-headers"
-          targets: --recursive cpp android/src/main/cpp
-          filter: "-legal/copyright\
-            ,-readability/todo\
-            ,-build/namespaces\
-            ,-whitespace/comments\
-            ,-build/include_order\
-            ,-build/c++11\
-            "
+          clang-format-version: '16'
+          check-path: ${{ matrix.path }}
+          clang-format-style-path: package/cpp/.clang-format
```
### `.github/workflows/validate-ios.yml` (5 changes)
```diff
@@ -15,6 +15,9 @@ on:
 jobs:
   SwiftLint:
     runs-on: ubuntu-latest
+    defaults:
+      run:
+        working-directory: ./package
     steps:
       - uses: actions/checkout@v2
       - name: Run SwiftLint GitHub Action (--strict)
@@ -27,7 +30,7 @@ jobs:
     runs-on: macOS-latest
     defaults:
       run:
-        working-directory: ./ios
+        working-directory: ./package/ios
     steps:
       - uses: actions/checkout@v2
```
### `.github/workflows/validate-js.yml` (42 changes)
```diff
@@ -6,32 +6,35 @@ on:
       - main
     paths:
       - '.github/workflows/validate-js.yml'
-      - 'src/**'
-      - '*.json'
-      - '*.js'
-      - '*.lock'
-      - 'example/src/**'
-      - 'example/*.json'
-      - 'example/*.js'
-      - 'example/*.lock'
-      - 'example/*.tsx'
+      - 'package/src/**'
+      - 'package/*.json'
+      - 'package/*.js'
+      - 'package/*.lock'
+      - 'package/example/src/**'
+      - 'package/example/*.json'
+      - 'package/example/*.js'
+      - 'package/example/*.lock'
+      - 'package/example/*.tsx'
   pull_request:
     paths:
       - '.github/workflows/validate-js.yml'
-      - 'src/**'
-      - '*.json'
-      - '*.js'
-      - '*.lock'
-      - 'example/src/**'
-      - 'example/*.json'
-      - 'example/*.js'
-      - 'example/*.lock'
-      - 'example/*.tsx'
+      - 'package/src/**'
+      - 'package/*.json'
+      - 'package/*.js'
+      - 'package/*.lock'
+      - 'package/example/src/**'
+      - 'package/example/*.json'
+      - 'package/example/*.js'
+      - 'package/example/*.lock'
+      - 'package/example/*.tsx'

 jobs:
   compile:
     name: Compile JS (tsc)
     runs-on: ubuntu-latest
+    defaults:
+      run:
+        working-directory: ./package
     steps:
       - uses: actions/checkout@v2

@@ -70,6 +73,9 @@ jobs:
   lint:
     name: Lint JS (eslint, prettier)
     runs-on: ubuntu-latest
+    defaults:
+      run:
+        working-directory: ./package
     steps:
       - uses: actions/checkout@v2
```
.gitignore
@@ -1,70 +1,6 @@
# OSX
#
.DS_Store

# XDE
.expo/

# VSCode
jsconfig.json

# Xcode
#
build/
*.pbxuser
!default.pbxuser
*.mode1v3
!default.mode1v3
*.mode2v3
!default.mode2v3
*.perspectivev3
!default.perspectivev3
xcuserdata
*.xccheckout
*.moved-aside
DerivedData
*.hmap
*.ipa
*.xcuserstate
project.xcworkspace

# Android/IJ
#
.idea
.gradle
local.properties
android.iml
*.hprof

# Cocoapods
#
example/ios/Pods

# node.js
#
node_modules/
npm-debug.log
yarn-debug.log
yarn-error.log

# BUCK
buck-out/
\.buckd/
android/app/libs
android/keystores/debug.keystore

# Expo
.expo/*

# generated by bob
lib/

# we only use yarn
package-lock.json

# TypeDoc/Docusaurus stuff
docs/docs/api

# External native build folder generated in Android Studio 2.2 and later
.externalNativeBuild
.cxx/
# no yarn/npm in the root repo!
./package-lock.json
./yarn.lock
**/node_modules/
.vscode/settings.json
@@ -1,3 +0,0 @@
{
  "typescript.tsdk": "node_modules/typescript/lib"
}
@@ -2,7 +2,8 @@

## Guidelines

1. Don't be rude.
1. Don't be an asshole.
2. Don't waste anyone's time.

## Get started

@@ -10,6 +11,7 @@
2. Install dependencies
   ```
   cd react-native-vision-camera
   cd package
   yarn bootstrap
   ```

@@ -39,7 +41,11 @@ Read the READMEs in [`android/`](android/README.md) and [`ios/`](ios/README.md)
1. Open the `example/android/` folder with Android Studio
2. Start the metro bundler in the `example/` directory using `yarn start`
3. Select your device in the devices drop-down
4. Hit run
4. Once your device is connected, make sure it can find the metro bundler's port:
   ```
   adb reverse tcp:8081 tcp:8081
   ```
6. Hit run

> Run `yarn check-android` to validate codestyle
README.md
@@ -1,56 +1,46 @@
<a href="https://margelo.io">
  <img src="./docs/static/img/banner.svg" width="100%" />
  <picture>
    <source media="(prefers-color-scheme: dark)" srcset="./docs/static/img/banner-dark.png" />
    <source media="(prefers-color-scheme: light)" srcset="./docs/static/img/banner-light.png" />
    <img alt="VisionCamera" src="./docs/static/img/banner-light.png" />
  </picture>
</a>

<h1 align="center">Vision Camera</h1>

<div align="center">
  <img src="docs/static/img/11.png" width="50%">
  <br />
  <br />
  <blockquote><b>📸 The Camera library that sees the vision.</b></blockquote>
  <pre align="center">npm i <a href="https://www.npmjs.com/package/react-native-vision-camera">react-native-vision-camera</a><br/>npx pod-install </pre>
  <a align="center" href='https://ko-fi.com/F1F8CLXG' target='_blank'>
    <img height='36' style='border:0px;height:36px;' src='https://az743702.vo.msecnd.net/cdn/kofi2.png?v=0' border='0' alt='Buy Me a Coffee at ko-fi.com' />
  </a>
  <br/>
  <a align="center" href="https://github.com/mrousavy?tab=followers">
    <img src="https://img.shields.io/github/followers/mrousavy?label=Follow%20%40mrousavy&style=social" />
  </a>
  <br />
  <a align="center" href="https://twitter.com/mrousavy">
    <img src="https://img.shields.io/twitter/follow/mrousavy?label=Follow%20%40mrousavy&style=social" />
  </a>
</div>

<br/>
<br/>
<br />

<div>
  <img align="right" width="35%" src="docs/static/img/example.png">
</div>

### ‼️‼️‼️‼️‼️ ✨ VisionCamera V3 ‼️‼️‼️‼️‼️
### Features

**See [this discussion](https://github.com/mrousavy/react-native-vision-camera/issues/1376) for the latest upcoming version of VisionCamera**
VisionCamera is a powerful and fast Camera component for React Native. It features:

* Photo and Video capture
* Customizable devices and multi-cameras ("fish-eye" zoom)
* Customizable resolutions and aspect-ratios (4k/8k images)
* Customizable FPS (30..240 FPS)
* [Frame Processors](https://react-native-vision-camera.com/docs/guides/frame-processors) (JS worklets to run QR-Code scanning, facial recognition, AI object detection, realtime video chats, ...)
* Smooth zooming (Reanimated)
* Fast pause and resume
* HDR & Night modes
* Custom C++/GPU accelerated video pipeline (OpenGL)

Install VisionCamera from npm:

```sh
yarn add react-native-vision-camera
cd ios && pod install
```

..and get started by [setting up permissions](https://react-native-vision-camera.com/docs/guides)!

### Documentation

* [Guides](https://react-native-vision-camera.com/docs/guides)
* [API](https://react-native-vision-camera.com/docs/api)
* [Example](./example/)

### Features

* Photo, Video and Snapshot capture
* Customizable devices and multi-cameras (smoothly zoom out to "fish-eye" camera)
* Customizable FPS
* [Frame Processors](https://react-native-vision-camera.com/docs/guides/frame-processors) (JS worklets to run QR-Code scanning, facial recognition, AI object detection, realtime video chats, ...)
* Smooth zooming (Reanimated)
* Fast pause and resume
* HDR & Night modes

> See the [example](./example/) app
* [Example](./package/example/)
* [Frame Processor Plugins](https://react-native-vision-camera.com/docs/guides/frame-processor-plugin-list)

### Example

@@ -70,6 +60,8 @@ function App() {
}
```

> See the [example](./package/example/) app

### Adopting at scale

<a href="https://github.com/sponsors/mrousavy">
@@ -80,6 +72,9 @@ VisionCamera is provided _as is_, I work on it in my free time.

If you're integrating VisionCamera in a production app, consider [funding this project](https://github.com/sponsors/mrousavy) and <a href="mailto:me@mrousavy.com?subject=Adopting VisionCamera at scale">contact me</a> to receive premium enterprise support, help with issues, prioritize bugfixes, request features, help at integrating VisionCamera and/or Frame Processors, and more.

<br />
### Socials

#### 🚀 Get started by [setting up permissions](https://react-native-vision-camera.com/docs/guides/)!
* 🐦 [**Follow me on Twitter**](https://twitter.com/mrousavy) for updates
* 📝 [**Check out my blog**](https://mrousavy.com/blog) for examples and experiments
* 💖 [**Sponsor me on GitHub**](https://github.com/sponsors/mrousavy) to support my work
* 🍪 [**Buy me a Ko-Fi**](https://ko-fi.com/mrousavy) to support my work
@@ -1,67 +0,0 @@
require "json"

package = JSON.parse(File.read(File.join(__dir__, "package.json")))

reactVersion = '0.0.0'
begin
  reactVersion = JSON.parse(File.read(File.join(__dir__, "..", "react-native", "package.json")))["version"]
rescue
  reactVersion = '0.66.0'
end
rnVersion = reactVersion.split('.')[1]

folly_flags = '-DFOLLY_NO_CONFIG -DFOLLY_MOBILE=1 -DFOLLY_USE_LIBCPP=1 -DRNVERSION=' + rnVersion
folly_compiler_flags = folly_flags + ' ' + '-Wno-comma -Wno-shorten-64-to-32'
folly_version = '2021.04.26.00'
boost_compiler_flags = '-Wno-documentation'

Pod::Spec.new do |s|
  s.name = "VisionCamera"
  s.version = package["version"]
  s.summary = package["description"]
  s.homepage = package["homepage"]
  s.license = package["license"]
  s.authors = package["author"]

  s.platforms = { :ios => "11.0" }
  s.source = { :git => "https://github.com/mrousavy/react-native-vision-camera.git", :tag => "#{s.version}" }

  s.pod_target_xcconfig = {
    "USE_HEADERMAP" => "YES",
    "HEADER_SEARCH_PATHS" => "\"$(PODS_TARGET_SRCROOT)/ReactCommon\" \"$(PODS_TARGET_SRCROOT)\" \"$(PODS_ROOT)/RCT-Folly\" \"$(PODS_ROOT)/boost\" \"$(PODS_ROOT)/boost-for-react-native\" \"$(PODS_ROOT)/DoubleConversion\" \"$(PODS_ROOT)/Headers/Private/React-Core\" "
  }
  s.compiler_flags = folly_compiler_flags + ' ' + boost_compiler_flags
  s.xcconfig = {
    "CLANG_CXX_LANGUAGE_STANDARD" => "c++17",
    "HEADER_SEARCH_PATHS" => "\"$(PODS_ROOT)/boost\" \"$(PODS_ROOT)/boost-for-react-native\" \"$(PODS_ROOT)/glog\" \"$(PODS_ROOT)/RCT-Folly\" \"${PODS_ROOT}/Headers/Public/React-hermes\" \"${PODS_ROOT}/Headers/Public/hermes-engine\"",
    "OTHER_CFLAGS" => "$(inherited)" + " " + folly_flags
  }

  s.requires_arc = true

  # All source files that should be publicly visible
  # Note how this does not include headers, since those can nameclash.
  s.source_files = [
    "ios/**/*.{m,mm,swift}",
    "ios/CameraBridge.h",
    "ios/Frame Processor/Frame.h",
    "ios/Frame Processor/FrameProcessorCallback.h",
    "ios/Frame Processor/FrameProcessorRuntimeManager.h",
    "ios/Frame Processor/FrameProcessorPluginRegistry.h",
    "ios/Frame Processor/FrameProcessorPlugin.h",
    "ios/React Utils/RCTBridge+runOnJS.h",
    "ios/React Utils/JSConsoleHelper.h",
    "cpp/**/*.{cpp}",
  ]
  # Any private headers that are not globally unique should be mentioned here.
  # Otherwise there will be a nameclash, since CocoaPods flattens out any header directories
  # See https://github.com/firebase/firebase-ios-sdk/issues/4035 for more details.
  s.preserve_paths = [
    "cpp/**/*.h",
    "ios/**/*.h"
  ]

  s.dependency "React-callinvoker"
  s.dependency "React"
  s.dependency "React-Core"
end
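The podspec above reads the installed React Native version from a neighboring `package.json`, falls back to a default when the file can't be read (the `begin`/`rescue` block), and then extracts the minor version with `split('.')[1]`. A minimal Python sketch of the same read-with-fallback pattern; the function name is hypothetical and the `0.66.0` default mirrors the podspec:

```python
import json


def read_react_native_minor(package_json_path, fallback="0.66.0"):
    """Read "version" from a package.json, keeping a fallback on any error.

    Returns the minor version component as a string, mirroring the
    podspec's reactVersion.split('.')[1].
    """
    version = fallback
    try:
        with open(package_json_path) as f:
            version = json.load(f)["version"]
    except (OSError, KeyError, ValueError):
        pass  # keep the fallback, like the podspec's `rescue` branch
    return version.split(".")[1]
```

Pinning flags like `-DRNVERSION` to the minor version this way lets the native build branch on the React Native release without a full semver parser.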
@@ -1,260 +0,0 @@
project(VisionCamera)
cmake_minimum_required(VERSION 3.4.1)

set (CMAKE_VERBOSE_MAKEFILE ON)
set (CMAKE_CXX_STANDARD 14)

if(${REACT_NATIVE_VERSION} GREATER_EQUAL 71)
  include("${NODE_MODULES_DIR}/react-native/ReactAndroid/cmake-utils/folly-flags.cmake")
  add_compile_options(${folly_FLAGS})
else()
  set (CMAKE_CXX_FLAGS "-DFOLLY_NO_CONFIG=1 -DFOLLY_HAVE_CLOCK_GETTIME=1 -DFOLLY_HAVE_MEMRCHR=1 -DFOLLY_USE_LIBCPP=1 -DFOLLY_MOBILE=1 -DON_ANDROID -DONANDROID -DFOR_HERMES=${FOR_HERMES}")
endif()

set (PACKAGE_NAME "VisionCamera")
set (BUILD_DIR ${CMAKE_SOURCE_DIR}/build)
if(${REACT_NATIVE_VERSION} GREATER_EQUAL 71)
  # Consume shared libraries and headers from prefabs
  find_package(fbjni REQUIRED CONFIG)
  find_package(ReactAndroid REQUIRED CONFIG)
else()
  set (RN_SO_DIR ${NODE_MODULES_DIR}/react-native/ReactAndroid/src/main/jni/first-party/react/jni)
endif()
# VisionCamera shared

if(${REACT_NATIVE_VERSION} LESS 66)
  set (
    INCLUDE_JSI_CPP
    "${NODE_MODULES_DIR}/react-native/ReactCommon/jsi/jsi/jsi.cpp"
  )
  set (
    INCLUDE_JSIDYNAMIC_CPP
    "${NODE_MODULES_DIR}/react-native/ReactCommon/jsi/jsi/JSIDynamic.cpp"
  )
endif()

add_library(
  ${PACKAGE_NAME}
  SHARED
  src/main/cpp/VisionCamera.cpp
  src/main/cpp/JSIJNIConversion.cpp
  src/main/cpp/FrameHostObject.cpp
  src/main/cpp/FrameProcessorRuntimeManager.cpp
  src/main/cpp/CameraView.cpp
  src/main/cpp/VisionCameraScheduler.cpp
  src/main/cpp/java-bindings/JFrameProcessorPlugin.cpp
  src/main/cpp/java-bindings/JImageProxy.cpp
  src/main/cpp/java-bindings/JHashMap.cpp
)

# includes
if(${REACT_NATIVE_VERSION} GREATER_EQUAL 71)
  target_include_directories(
    ${PACKAGE_NAME}
    PRIVATE
    "${NODE_MODULES_DIR}/react-native/ReactAndroid/src/main/jni/react/turbomodule"
    "${NODE_MODULES_DIR}/react-native/ReactCommon"
    "${NODE_MODULES_DIR}/react-native/ReactCommon/callinvoker"
    "${NODE_MODULES_DIR}/react-native/ReactCommon/jsi"
    "${NODE_MODULES_DIR}/react-native/ReactCommon/react/renderer/graphics/platform/cxx"
    "${NODE_MODULES_DIR}/react-native/ReactCommon/runtimeexecutor"
    "${NODE_MODULES_DIR}/react-native/ReactCommon/yoga"
    # --- Reanimated ---
    # New
    "${NODE_MODULES_DIR}/react-native-reanimated/Common/cpp/AnimatedSensor"
    "${NODE_MODULES_DIR}/react-native-reanimated/Common/cpp/Tools"
    "${NODE_MODULES_DIR}/react-native-reanimated/Common/cpp/SpecTools"
    "${NODE_MODULES_DIR}/react-native-reanimated/Common/cpp/SharedItems"
    "${NODE_MODULES_DIR}/react-native-reanimated/Common/cpp/Registries"
    "${NODE_MODULES_DIR}/react-native-reanimated/Common/cpp/LayoutAnimations"
    "${NODE_MODULES_DIR}/react-native-reanimated/Common/cpp/hidden_headers"
    "src/main/cpp"
  )
else()
  file (GLOB LIBFBJNI_INCLUDE_DIR "${BUILD_DIR}/fbjni-*-headers.jar/")

  target_include_directories(
    ${PACKAGE_NAME}
    PRIVATE
    # --- fbjni ---
    "${LIBFBJNI_INCLUDE_DIR}"
    # --- Third Party (required by RN) ---
    "${BUILD_DIR}/third-party-ndk/boost"
    "${BUILD_DIR}/third-party-ndk/double-conversion"
    "${BUILD_DIR}/third-party-ndk/folly"
    "${BUILD_DIR}/third-party-ndk/glog"
    # --- React Native ---
    "${NODE_MODULES_DIR}/react-native/React"
    "${NODE_MODULES_DIR}/react-native/React/Base"
    "${NODE_MODULES_DIR}/react-native/ReactAndroid/src/main/jni"
    "${NODE_MODULES_DIR}/react-native/ReactAndroid/src/main/java/com/facebook/react/turbomodule/core/jni"
    "${NODE_MODULES_DIR}/react-native/ReactCommon"
    "${NODE_MODULES_DIR}/react-native/ReactCommon/callinvoker"
    "${NODE_MODULES_DIR}/react-native/ReactCommon/jsi"
    "${NODE_MODULES_DIR}/hermes-engine/android/include/"
    ${INCLUDE_JSI_CPP} # only on older RN versions
    ${INCLUDE_JSIDYNAMIC_CPP} # only on older RN versions
    # --- Reanimated ---
    # Old
    "${NODE_MODULES_DIR}/react-native-reanimated/Common/cpp/headers/AnimatedSensor"
    "${NODE_MODULES_DIR}/react-native-reanimated/Common/cpp/headers/Tools"
    "${NODE_MODULES_DIR}/react-native-reanimated/Common/cpp/headers/SpecTools"
    "${NODE_MODULES_DIR}/react-native-reanimated/Common/cpp/headers/SharedItems"
    "${NODE_MODULES_DIR}/react-native-reanimated/Common/cpp/headers/Registries"
    "${NODE_MODULES_DIR}/react-native-reanimated/Common/cpp/headers/LayoutAnimations"
    # New
    "${NODE_MODULES_DIR}/react-native-reanimated/Common/cpp/AnimatedSensor"
    "${NODE_MODULES_DIR}/react-native-reanimated/Common/cpp/Tools"
    "${NODE_MODULES_DIR}/react-native-reanimated/Common/cpp/SpecTools"
    "${NODE_MODULES_DIR}/react-native-reanimated/Common/cpp/SharedItems"
    "${NODE_MODULES_DIR}/react-native-reanimated/Common/cpp/Registries"
    "${NODE_MODULES_DIR}/react-native-reanimated/Common/cpp/LayoutAnimations"
    "${NODE_MODULES_DIR}/react-native-reanimated/Common/cpp/hidden_headers"
    "src/main/cpp"
  )
endif()

# find libraries

file (GLOB LIBRN_DIR "${BUILD_DIR}/react-native-0*/jni/${ANDROID_ABI}")

if(${FOR_HERMES})
  string(APPEND CMAKE_CXX_FLAGS " -DFOR_HERMES=1")

  if(${REACT_NATIVE_VERSION} GREATER_EQUAL 71)
    find_package(hermes-engine REQUIRED CONFIG)
  elseif(${REACT_NATIVE_VERSION} GREATER_EQUAL 69)
    # Bundled Hermes from module `com.facebook.react:hermes-engine` or project `:ReactAndroid:hermes-engine`
    target_include_directories(
      ${PACKAGE_NAME}
      PRIVATE
      "${JS_RUNTIME_DIR}/API"
      "${JS_RUNTIME_DIR}/public"
    )
  else()
    # From `hermes-engine` npm package
    target_include_directories(
      ${PACKAGE_NAME}
      PRIVATE
      "${JS_RUNTIME_DIR}/android/include"
    )
  endif()

  if(${REACT_NATIVE_VERSION} GREATER_EQUAL 71)
    target_link_libraries(
      ${PACKAGE_NAME}
      "hermes-engine::libhermes"
    )
  else()
    target_link_libraries(
      ${PACKAGE_NAME}
      "${BUILD_DIR}/third-party-ndk/hermes/jni/${ANDROID_ABI}/libhermes.so"
    )
  endif()
  file (GLOB LIBREANIMATED_DIR "${BUILD_DIR}/react-native-reanimated-*-hermes.aar/jni/${ANDROID_ABI}")
else()
  file (GLOB LIBJSC_DIR "${BUILD_DIR}/android-jsc*.aar/jni/${ANDROID_ABI}")

  if(${REACT_NATIVE_VERSION} GREATER_EQUAL 71)
    set(JS_ENGINE_LIB ReactAndroid::jscexecutor)
  else()
    # Use JSC
    find_library(
      JS_ENGINE_LIB
      jscexecutor
      PATHS ${LIBRN_DIR}
      NO_CMAKE_FIND_ROOT_PATH
    )
  endif()
  target_link_libraries(
    ${PACKAGE_NAME}
    ${JS_ENGINE_LIB}
  )

  # Use Reanimated JSC
  file (GLOB LIBREANIMATED_DIR "${BUILD_DIR}/react-native-reanimated-*-jsc.aar/jni/${ANDROID_ABI}")
endif()

if(${REACT_NATIVE_VERSION} LESS 71)
  find_library(
    FBJNI_LIB
    fbjni
    PATHS ${LIBRN_DIR}
    NO_CMAKE_FIND_ROOT_PATH
  )
endif()

if(${REACT_NATIVE_VERSION} LESS 69)
  find_library(
    FOLLY_LIB
    folly_json
    PATHS ${LIBRN_DIR}
    NO_CMAKE_FIND_ROOT_PATH
  )
elseif(${REACT_NATIVE_VERSION} LESS 71)
  find_library(
    FOLLY_LIB
    folly_runtime
    PATHS ${LIBRN_DIR}
    NO_CMAKE_FIND_ROOT_PATH
  )
endif()

if(${REACT_NATIVE_VERSION} LESS 71)
  find_library(
    REACT_NATIVE_JNI_LIB
    reactnativejni
    PATHS ${LIBRN_DIR}
    NO_CMAKE_FIND_ROOT_PATH
  )
endif()

if(${REACT_NATIVE_VERSION} GREATER_EQUAL 71)
  target_link_libraries(
    ${PACKAGE_NAME}
    ReactAndroid::folly_runtime
    ReactAndroid::glog
    ReactAndroid::jsi
    ReactAndroid::reactnativejni
    fbjni::fbjni
  )
elseif(${REACT_NATIVE_VERSION} LESS 66)
  # JSI lib didn't exist on RN 0.65 and before. Simply omit it.
  set (JSI_LIB "")
else()
  # RN 0.66 distributes libjsi.so, can be used instead of compiling jsi.cpp manually.
  find_library(
    JSI_LIB
    jsi
    PATHS ${LIBRN_DIR}
    NO_CMAKE_FIND_ROOT_PATH
  )
endif()

find_library(
  REANIMATED_LIB
  reanimated
  PATHS ${LIBREANIMATED_DIR}
  NO_CMAKE_FIND_ROOT_PATH
)

find_library(
  LOG_LIB
  log
)

# linking
message(WARNING "VisionCamera linking: FOR_HERMES=${FOR_HERMES}")
target_link_libraries(
  ${PACKAGE_NAME}
  ${LOG_LIB}
  ${JSI_LIB}
  ${REANIMATED_LIB}
  ${REACT_NATIVE_JNI_LIB}
  ${FBJNI_LIB}
  ${FOLLY_LIB}
  android
)
@@ -1,638 +0,0 @@
import groovy.json.JsonSlurper
import org.apache.tools.ant.filters.ReplaceTokens
import java.nio.file.Paths

static def findNodeModules(baseDir) {
  def basePath = baseDir.toPath().normalize()
  // Node's module resolution algorithm searches up to the root directory,
  // after which the base path will be null
  while (basePath) {
    def nodeModulesPath = Paths.get(basePath.toString(), "node_modules")
    def reactNativePath = Paths.get(nodeModulesPath.toString(), "react-native")
    if (nodeModulesPath.toFile().exists() && reactNativePath.toFile().exists()) {
      return nodeModulesPath.toString()
    }
    basePath = basePath.getParent()
  }
  throw new GradleException("VisionCamera: Failed to find node_modules/ path!")
}
static def findNodeModulePath(baseDir, packageName) {
  def basePath = baseDir.toPath().normalize()
  // Node's module resolution algorithm searches up to the root directory,
  // after which the base path will be null
  while (basePath) {
    def candidatePath = Paths.get(basePath.toString(), "node_modules", packageName)
    if (candidatePath.toFile().exists()) {
      return candidatePath.toString()
    }
    basePath = basePath.getParent()
  }
  return null
}
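The two Gradle helpers above walk from a base directory up toward the filesystem root, checking each level for a `node_modules/` folder, the same upward search Node's module resolution performs. A minimal Python sketch of that walk, assuming the same "must also contain `react-native/`" check; the function name is illustrative:

```python
import os


def find_node_modules(base_dir):
    """Walk from base_dir up to the filesystem root, returning the first
    node_modules/ directory that also contains react-native/, or None."""
    path = os.path.abspath(base_dir)
    while True:
        candidate = os.path.join(path, "node_modules")
        if os.path.isdir(candidate) and os.path.isdir(os.path.join(candidate, "react-native")):
            return candidate
        parent = os.path.dirname(path)
        if parent == path:  # reached the filesystem root; nothing found
            return None
        path = parent
```

The Gradle version throws a `GradleException` instead of returning null when nothing is found, so a misconfigured project fails fast at configuration time.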

def isNewArchitectureEnabled() {
  // To opt-in for the New Architecture, you can either:
  // - Set `newArchEnabled` to true inside the `gradle.properties` file
  // - Invoke gradle with `-newArchEnabled=true`
  // - Set an environment variable `ORG_GRADLE_PROJECT_newArchEnabled=true`
  return project.hasProperty("newArchEnabled") && project.newArchEnabled == "true"
}

def nodeModules = findNodeModules(projectDir)
logger.warn("VisionCamera: node_modules/ found at: ${nodeModules}")
def reactNative = new File("$nodeModules/react-native")
def CMAKE_NODE_MODULES_DIR = project.getProjectDir().getParentFile().getParent()
def reactProperties = new Properties()
file("$nodeModules/react-native/ReactAndroid/gradle.properties").withInputStream { reactProperties.load(it) }
def REACT_NATIVE_FULL_VERSION = reactProperties.getProperty("VERSION_NAME")
def REACT_NATIVE_VERSION = reactProperties.getProperty("VERSION_NAME").split("\\.")[1].toInteger()

def FOR_HERMES = System.getenv("FOR_HERMES") == "True"
rootProject.getSubprojects().forEach({project ->
  if (project.plugins.hasPlugin("com.android.application")) {
    FOR_HERMES = REACT_NATIVE_VERSION >= 71 && project.hermesEnabled || project.ext.react.enableHermes
  }
})
def jsRuntimeDir = {
  if (FOR_HERMES) {
    if (REACT_NATIVE_VERSION >= 69) {
      return Paths.get(CMAKE_NODE_MODULES_DIR, "react-native", "sdks", "hermes")
    } else {
      return Paths.get(CMAKE_NODE_MODULES_DIR, "hermes-engine")
    }
  } else {
    return Paths.get(CMAKE_NODE_MODULES_DIR, "react-native", "ReactCommon", "jsi")
  }
}.call()
logger.warn("VisionCamera: Building with ${FOR_HERMES ? "Hermes" : "JSC"}...")

buildscript {
  // Buildscript is evaluated before everything else so we can't use getExtOrDefault
  def kotlin_version = rootProject.ext.has('kotlinVersion') ? rootProject.ext.get('kotlinVersion') : project.properties['VisionCamera_kotlinVersion']

  repositories {
    google()
    mavenCentral()
    maven {
      url "https://plugins.gradle.org/m2/"
    }
  }

  dependencies {
    classpath 'com.android.tools.build:gradle:4.2.2'
    classpath 'de.undercouch:gradle-download-task:4.1.2'
    // noinspection DifferentKotlinGradleVersion
    classpath "org.jetbrains.kotlin:kotlin-gradle-plugin:$kotlin_version"
    classpath "org.jetbrains.kotlin:kotlin-android-extensions:$kotlin_version"
  }
}

apply plugin: 'com.android.library'
apply plugin: 'de.undercouch.download'
apply plugin: 'kotlin-android'
apply plugin: 'kotlin-android-extensions'

def getExtOrDefault(name) {
  return rootProject.ext.has(name) ? rootProject.ext.get(name) : project.properties['VisionCamera_' + name]
}

def getExtOrIntegerDefault(name) {
  return rootProject.ext.has(name) ? rootProject.ext.get(name) : (project.properties['VisionCamera_' + name]).toInteger()
}

def resolveBuildType() {
  def buildType = "debug"
  tasks.all({ task ->
    if (task.name == "buildCMakeRelease") {
      buildType = "release"
    }
  })
  return buildType
}

// plugin.js file only exists since REA v2.
def hasReanimated2 = file("${nodeModules}/react-native-reanimated/plugin.js").exists()
def disableFrameProcessors = rootProject.ext.has("disableFrameProcessors") ? rootProject.ext.get("disableFrameProcessors").asBoolean() : false
def ENABLE_FRAME_PROCESSORS = hasReanimated2 && !disableFrameProcessors

if (ENABLE_FRAME_PROCESSORS) {
  logger.warn("VisionCamera: Frame Processors are enabled! Building C++ part...")
} else {
  if (disableFrameProcessors) {
    logger.warn("VisionCamera: Frame Processors are disabled because the user explicitly disabled it ('disableFrameProcessors=${disableFrameProcessors}'). C++ part will not be built.")
  } else if (!hasReanimated2) {
    logger.warn("VisionCamera: Frame Processors are disabled because REA v2 does not exist. C++ part will not be built.")
  }
}

android {
  compileSdkVersion getExtOrIntegerDefault('compileSdkVersion')
  buildToolsVersion getExtOrDefault('buildToolsVersion')
  ndkVersion getExtOrDefault('ndkVersion')

  if (REACT_NATIVE_VERSION >= 71) {
    buildFeatures {
      prefab true
    }
  }

  defaultConfig {
    minSdkVersion 21
    targetSdkVersion getExtOrIntegerDefault('targetSdkVersion')

    if (ENABLE_FRAME_PROCESSORS) {
      externalNativeBuild {
        cmake {
          cppFlags "-fexceptions", "-frtti", "-std=c++1y", "-DONANDROID"
          abiFilters 'x86', 'x86_64', 'armeabi-v7a', 'arm64-v8a'
          arguments '-DANDROID_STL=c++_shared',
            "-DREACT_NATIVE_VERSION=${REACT_NATIVE_VERSION}",
            "-DNODE_MODULES_DIR=${nodeModules}",
            "-DFOR_HERMES=${FOR_HERMES}",
            "-DJS_RUNTIME_DIR=${jsRuntimeDir}"
        }
      }
    }
  }

  dexOptions {
    javaMaxHeapSize "4g"
  }

  if (ENABLE_FRAME_PROCESSORS) {
    externalNativeBuild {
      cmake {
        path "CMakeLists.txt"
      }
    }
  }

  packagingOptions {
    // Exclude all Libraries that are already present in the user's app (through React Native or by him installing REA)
    excludes = ["**/libc++_shared.so",
                "**/libfbjni.so",
                "**/libjsi.so",
                "**/libreactnativejni.so",
                "**/libfolly_json.so",
                "**/libreanimated.so",
                "**/libjscexecutor.so",
                "**/libhermes.so",
                "**/libfolly_runtime.so",
                "**/libglog.so",
    ]
    // META-INF is duplicate by CameraX.
    exclude "META-INF/**"
  }

  buildTypes {
    release {
      minifyEnabled false
    }
  }

  lintOptions {
    disable 'GradleCompatible'
  }
  compileOptions {
    sourceCompatibility JavaVersion.VERSION_1_8
    targetCompatibility JavaVersion.VERSION_1_8
  }

  configurations {
    extractHeaders
    extractJNI
  }
}

repositories {
  mavenCentral()
  google()

  def found = false
  def defaultDir = null
  def androidSourcesName = 'React Native sources'

  if (rootProject.ext.has('reactNativeAndroidRoot')) {
    defaultDir = rootProject.ext.get('reactNativeAndroidRoot')
  } else {
    defaultDir = new File(
      projectDir,
      '/../../../node_modules/react-native/android'
    )
  }

  if (defaultDir.exists()) {
    maven {
      url defaultDir.toString()
      name androidSourcesName
    }

    logger.info(":${project.name}:reactNativeAndroidRoot ${defaultDir.canonicalPath}")
    found = true
  } else {
    def parentDir = rootProject.projectDir

    1.upto(5, {
      if (found) return true
      parentDir = parentDir.parentFile

      def androidSourcesDir = new File(
        parentDir,
        'node_modules/react-native'
      )

      def androidPrebuiltBinaryDir = new File(
        parentDir,
        'node_modules/react-native/android'
      )

      if (androidPrebuiltBinaryDir.exists()) {
        maven {
          url androidPrebuiltBinaryDir.toString()
          name androidSourcesName
        }

        logger.info(":${project.name}:reactNativeAndroidRoot ${androidPrebuiltBinaryDir.canonicalPath}")
        found = true
      } else if (androidSourcesDir.exists()) {
        maven {
          url androidSourcesDir.toString()
          name androidSourcesName
        }

        logger.info(":${project.name}:reactNativeAndroidRoot ${androidSourcesDir.canonicalPath}")
        found = true
      }
    })
  }

  if (!found) {
    throw new GradleException(
      "${project.name}: unable to locate React Native android sources. " +
      "Ensure you have you installed React Native as a dependency in your project and try again."
    )
  }
}

def kotlin_version = getExtOrDefault('kotlinVersion')

dependencies {
  if (REACT_NATIVE_VERSION >= 71) {
    implementation "com.facebook.react:react-android:"
    implementation "com.facebook.react:hermes-android:"
  } else {
    // noinspection GradleDynamicVersion
    implementation 'com.facebook.react:react-native:+'
  }

  if (ENABLE_FRAME_PROCESSORS) {
    implementation project(':react-native-reanimated')

    if (REACT_NATIVE_VERSION < 71) {
      //noinspection GradleDynamicVersion
      extractHeaders("com.facebook.fbjni:fbjni:0.4.0:headers")
      //noinspection GradleDynamicVersion
      extractJNI("com.facebook.fbjni:fbjni:0.4.0")

      def rnAarMatcher = "**/react-native/**/*${resolveBuildType()}.aar"
      if (REACT_NATIVE_VERSION < 69) {
        rnAarMatcher = "**/**/*.aar"
      }

      def rnAAR = fileTree("$reactNative/android").matching({ it.include rnAarMatcher }).singleFile
      def jscAAR = fileTree("${nodeModules}/jsc-android/dist/org/webkit/android-jsc").matching({ it.include "**/**/*.aar" }).singleFile
      extractJNI(files(rnAAR, jscAAR))
    }

    def jsEngine = FOR_HERMES ? "hermes" : "jsc"
    def reaAAR = "${nodeModules}/react-native-reanimated/android/react-native-reanimated-${REACT_NATIVE_VERSION}-${jsEngine}.aar"
    extractJNI(files(reaAAR))
  }

  implementation "org.jetbrains.kotlin:kotlin-stdlib:$kotlin_version"
  implementation "org.jetbrains.kotlinx:kotlinx-coroutines-guava:1.5.2"
  implementation "org.jetbrains.kotlinx:kotlinx-coroutines-android:1.5.2"

  implementation "androidx.camera:camera-core:1.1.0"
  implementation "androidx.camera:camera-camera2:1.1.0"
  implementation "androidx.camera:camera-lifecycle:1.1.0"
  implementation "androidx.camera:camera-video:1.1.0"

  implementation "androidx.camera:camera-view:1.1.0"
  implementation "androidx.camera:camera-extensions:1.1.0"

  implementation "androidx.exifinterface:exifinterface:1.3.3"
}


if (ENABLE_FRAME_PROCESSORS) {
  // third-party-ndk deps headers
  // mostly a copy of https://github.com/software-mansion/react-native-reanimated/blob/master/android/build.gradle#L115
  def customDownloadsDir = System.getenv("REACT_NATIVE_DOWNLOADS_DIR")
  def downloadsDir = customDownloadsDir ? new File(customDownloadsDir) : new File("$buildDir/downloads")
  def thirdPartyNdkDir = new File("$buildDir/third-party-ndk")
  def thirdPartyVersionsFile = new File("${nodeModules}/react-native/ReactAndroid/gradle.properties")
  def thirdPartyVersions = new Properties()
  thirdPartyVersions.load(new FileInputStream(thirdPartyVersionsFile))

  def BOOST_VERSION = thirdPartyVersions["BOOST_VERSION"]
  def boost_file = new File(downloadsDir, "boost_${BOOST_VERSION}.tar.gz")
  def DOUBLE_CONVERSION_VERSION = thirdPartyVersions["DOUBLE_CONVERSION_VERSION"]
  def double_conversion_file = new File(downloadsDir, "double-conversion-${DOUBLE_CONVERSION_VERSION}.tar.gz")
  def FOLLY_VERSION = thirdPartyVersions["FOLLY_VERSION"]
  def folly_file = new File(downloadsDir, "folly-${FOLLY_VERSION}.tar.gz")
  def GLOG_VERSION = thirdPartyVersions["GLOG_VERSION"]
  def glog_file = new File(downloadsDir, "glog-${GLOG_VERSION}.tar.gz")

  task createNativeDepsDirectories {
    doLast {
      downloadsDir.mkdirs()
      thirdPartyNdkDir.mkdirs()
    }
  }

  task downloadBoost(dependsOn: createNativeDepsDirectories, type: Download) {
    def transformedVersion = BOOST_VERSION.replace("_", ".")
    def srcUrl = "https://boostorg.jfrog.io/artifactory/main/release/${transformedVersion}/source/boost_${BOOST_VERSION}.tar.gz"
    if (REACT_NATIVE_VERSION < 69) {
      srcUrl = "https://github.com/react-native-community/boost-for-react-native/releases/download/v${transformedVersion}-0/boost_${BOOST_VERSION}.tar.gz"
    }
    src(srcUrl)
    onlyIfNewer(true)
    overwrite(false)
    dest(boost_file)
  }

  task prepareBoost(dependsOn: downloadBoost, type: Copy) {
    from(tarTree(resources.gzip(downloadBoost.dest)))
    from("src/main/jni/third-party/boost/Android.mk")
    include("Android.mk", "boost_${BOOST_VERSION}/boost/**/*.hpp", "boost/boost/**/*.hpp")
    includeEmptyDirs = false
    into("$thirdPartyNdkDir") // /boost_X_XX_X
    doLast {
      file("$thirdPartyNdkDir/boost_${BOOST_VERSION}").renameTo("$thirdPartyNdkDir/boost")
    }
  }
task downloadDoubleConversion(dependsOn: createNativeDepsDirectories, type: Download) {
|
||||
src("https://github.com/google/double-conversion/archive/v${DOUBLE_CONVERSION_VERSION}.tar.gz")
|
||||
onlyIfNewer(true)
|
||||
overwrite(false)
|
||||
dest(double_conversion_file)
|
||||
}
|
||||
|
||||
task prepareDoubleConversion(dependsOn: downloadDoubleConversion, type: Copy) {
|
||||
from(tarTree(downloadDoubleConversion.dest))
|
||||
from("src/main/jni/third-party/double-conversion/Android.mk")
|
||||
include("double-conversion-${DOUBLE_CONVERSION_VERSION}/src/**/*", "Android.mk")
|
||||
filesMatching("*/src/**/*", { fname -> fname.path = "double-conversion/${fname.name}" })
|
||||
includeEmptyDirs = false
|
||||
into("$thirdPartyNdkDir/double-conversion")
|
||||
}
|
||||
|
||||
task downloadFolly(dependsOn: createNativeDepsDirectories, type: Download) {
|
||||
src("https://github.com/facebook/folly/archive/v${FOLLY_VERSION}.tar.gz")
|
||||
onlyIfNewer(true)
|
||||
overwrite(false)
|
||||
dest(folly_file)
|
||||
}
|
||||
|
||||
task prepareFolly(dependsOn: downloadFolly, type: Copy) {
|
||||
from(tarTree(downloadFolly.dest))
|
||||
from("src/main/jni/third-party/folly/Android.mk")
|
||||
include("folly-${FOLLY_VERSION}/folly/**/*", "Android.mk")
|
||||
eachFile { fname -> fname.path = (fname.path - "folly-${FOLLY_VERSION}/") }
|
||||
includeEmptyDirs = false
|
||||
into("$thirdPartyNdkDir/folly")
|
||||
}
|
||||
|
||||
task downloadGlog(dependsOn: createNativeDepsDirectories, type: Download) {
|
||||
src("https://github.com/google/glog/archive/v${GLOG_VERSION}.tar.gz")
|
||||
onlyIfNewer(true)
|
||||
overwrite(false)
|
||||
dest(glog_file)
|
||||
}
|
||||
|
||||
task prepareGlog(dependsOn: downloadGlog, type: Copy) {
|
||||
from(tarTree(downloadGlog.dest))
|
||||
from("src/main/jni/third-party/glog/")
|
||||
include("glog-${GLOG_VERSION}/src/**/*", "Android.mk", "config.h")
|
||||
includeEmptyDirs = false
|
||||
filesMatching("**/*.h.in") {
|
||||
filter(ReplaceTokens, tokens: [
|
||||
ac_cv_have_unistd_h : "1",
|
||||
ac_cv_have_stdint_h : "1",
|
||||
ac_cv_have_systypes_h : "1",
|
||||
ac_cv_have_inttypes_h : "1",
|
||||
ac_cv_have_libgflags : "0",
|
||||
ac_google_start_namespace : "namespace google {",
|
||||
ac_cv_have_uint16_t : "1",
|
||||
ac_cv_have_u_int16_t : "1",
|
||||
ac_cv_have___uint16 : "0",
|
||||
ac_google_end_namespace : "}",
|
||||
ac_cv_have___builtin_expect : "1",
|
||||
ac_google_namespace : "google",
|
||||
ac_cv___attribute___noinline : "__attribute__ ((noinline))",
|
||||
ac_cv___attribute___noreturn : "__attribute__ ((noreturn))",
|
||||
ac_cv___attribute___printf_4_5: "__attribute__((__format__ (__printf__, 4, 5)))"
|
||||
])
|
||||
it.path = (it.name - ".in")
|
||||
}
|
||||
into("$thirdPartyNdkDir/glog")
|
||||
|
||||
doLast {
|
||||
copy {
|
||||
from(fileTree(dir: "$thirdPartyNdkDir/glog", includes: ["stl_logging.h", "logging.h", "raw_logging.h", "vlog_is_on.h", "**/src/glog/log_severity.h"]).files)
|
||||
includeEmptyDirs = false
|
||||
into("$thirdPartyNdkDir/glog/exported/glog")
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
task prepareThirdPartyNdkHeaders {
|
||||
if (!boost_file.exists()) {
|
||||
dependsOn(prepareBoost)
|
||||
}
|
||||
if (!double_conversion_file.exists()) {
|
||||
dependsOn(prepareDoubleConversion)
|
||||
}
|
||||
if (!folly_file.exists()) {
|
||||
dependsOn(prepareFolly)
|
||||
}
|
||||
if (!glog_file.exists()) {
|
||||
dependsOn(prepareGlog)
|
||||
}
|
||||
}
|
||||
|
||||
prepareThirdPartyNdkHeaders.mustRunAfter createNativeDepsDirectories
|
||||
|
||||
/*
|
||||
COPY-PASTE from react-native-reanimated.
|
||||
Vision Camera includes "hermes/hermes.h" header file in `NativeProxy.cpp`.
|
||||
Previously, we used header files from `hermes-engine` package in `node_modules`.
|
||||
Starting from React Native 0.69, Hermes is no longer distributed as package on NPM.
|
||||
On the new architecture, Hermes is downloaded from GitHub and then compiled from sources.
|
||||
However, on the old architecture, we need to download Hermes header files on our own
|
||||
as well as unzip Hermes AAR in order to obtain `libhermes.so` shared library.
|
||||
For more details, see https://reactnative.dev/architecture/bundled-hermes
|
||||
or https://github.com/reactwg/react-native-new-architecture/discussions/4
|
||||
*/
|
||||
if (REACT_NATIVE_VERSION >= 69 && !isNewArchitectureEnabled()) {
|
||||
// copied from `react-native/ReactAndroid/hermes-engine/build.gradle`
|
||||
|
||||
def customDownloadDir = System.getenv("REACT_NATIVE_DOWNLOADS_DIR")
|
||||
def downloadDir = customDownloadDir ? new File(customDownloadDir) : new File(reactNative, "sdks/download")
|
||||
|
||||
// By default we are going to download and unzip hermes inside the /sdks/hermes folder
|
||||
// but you can provide an override for where the hermes source code is located.
|
||||
def hermesDir = System.getenv("REACT_NATIVE_OVERRIDE_HERMES_DIR") ?: new File(reactNative, "sdks/hermes")
|
||||
|
||||
def hermesVersion = "main"
|
||||
def hermesVersionFile = new File(reactNative, "sdks/.hermesversion")
|
||||
if (hermesVersionFile.exists()) {
|
||||
hermesVersion = hermesVersionFile.text
|
||||
}
|
||||
|
||||
task downloadHermes(type: Download) {
|
||||
src("https://github.com/facebook/hermes/tarball/${hermesVersion}")
|
||||
onlyIfNewer(true)
|
||||
overwrite(false)
|
||||
dest(new File(downloadDir, "hermes.tar.gz"))
|
||||
}
|
||||
|
||||
task unzipHermes(dependsOn: downloadHermes, type: Copy) {
|
||||
from(tarTree(downloadHermes.dest)) {
|
||||
eachFile { file ->
|
||||
// We flatten the unzip as the tarball contains a `facebook-hermes-<SHA>`
|
||||
// folder at the top level.
|
||||
if (file.relativePath.segments.size() > 1) {
|
||||
file.relativePath = new org.gradle.api.file.RelativePath(!file.isDirectory(), file.relativePath.segments.drop(1))
|
||||
}
|
||||
}
|
||||
}
|
||||
into(hermesDir)
|
||||
}
|
||||
}
|
||||
|
||||
task prepareHermes() {
|
||||
if (REACT_NATIVE_VERSION >= 69) {
|
||||
if (!isNewArchitectureEnabled()) {
|
||||
dependsOn(unzipHermes)
|
||||
}
|
||||
|
||||
doLast {
|
||||
def hermesAAR = file("$reactNative/android/com/facebook/react/hermes-engine/$REACT_NATIVE_FULL_VERSION/hermes-engine-$REACT_NATIVE_FULL_VERSION-${resolveBuildType()}.aar") // e.g. hermes-engine-0.70.0-rc.1-debug.aar
|
||||
if (!hermesAAR.exists()) {
|
||||
throw new GradleScriptException("Could not find hermes-engine AAR", null)
|
||||
}
|
||||
|
||||
def soFiles = zipTree(hermesAAR).matching({ it.include "**/*.so" })
|
||||
|
||||
copy {
|
||||
from soFiles
|
||||
from "$reactNative/ReactAndroid/src/main/jni/first-party/hermes/Android.mk"
|
||||
into "$thirdPartyNdkDir/hermes"
|
||||
}
|
||||
}
|
||||
} else {
|
||||
doLast {
|
||||
def hermesPackagePath = findNodeModulePath(projectDir, "hermes-engine")
|
||||
if (!hermesPackagePath) {
|
||||
throw new GradleScriptException("Could not find the hermes-engine npm package", null)
|
||||
}
|
||||
|
||||
def hermesAAR = file("$hermesPackagePath/android/hermes-debug.aar")
|
||||
if (!hermesAAR.exists()) {
|
||||
throw new GradleScriptException("The hermes-engine npm package is missing \"android/hermes-debug.aar\"", null)
|
||||
}
|
||||
|
||||
def soFiles = zipTree(hermesAAR).matching({ it.include "**/*.so" })
|
||||
|
||||
copy {
|
||||
from soFiles
|
||||
from "$reactNative/ReactAndroid/src/main/jni/first-party/hermes/Android.mk"
|
||||
into "$thirdPartyNdkDir/hermes"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
prepareHermes.mustRunAfter prepareThirdPartyNdkHeaders
|
||||
|
||||
task prepareJSC {
|
||||
doLast {
|
||||
def jscPackagePath = file("${nodeModules}/jsc-android")
|
||||
if (!jscPackagePath.exists()) {
|
||||
throw new GradleScriptException("Could not find the jsc-android npm package", null)
|
||||
}
|
||||
|
||||
def jscDist = file("$jscPackagePath/dist")
|
||||
if (!jscDist.exists()) {
|
||||
throw new GradleScriptException("The jsc-android npm package is missing its \"dist\" directory", null)
|
||||
}
|
||||
|
||||
def jscAAR = fileTree(jscDist).matching({ it.include "**/android-jsc/**/*.aar" }).singleFile
|
||||
def soFiles = zipTree(jscAAR).matching({ it.include "**/*.so" })
|
||||
|
||||
def headerFiles = fileTree(jscDist).matching({ it.include "**/include/*.h" })
|
||||
|
||||
copy {
|
||||
from(soFiles)
|
||||
from(headerFiles)
|
||||
from("$reactNative/ReactAndroid/src/main/jni/third-party/jsc/Android.mk")
|
||||
|
||||
filesMatching("**/*.h", { it.path = "JavaScriptCore/${it.name}" })
|
||||
|
||||
includeEmptyDirs(false)
|
||||
into("$thirdPartyNdkDir/jsc")
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
prepareJSC.mustRunAfter prepareHermes
|
||||
|
||||
task extractAARHeaders {
|
||||
doLast {
|
||||
configurations.extractHeaders.files.each {
|
||||
def file = it.absoluteFile
|
||||
copy {
|
||||
from zipTree(file)
|
||||
into "$buildDir/$file.name"
|
||||
include "**/*.h"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
extractAARHeaders.mustRunAfter prepareJSC
|
||||
|
||||
task extractJNIFiles {
|
||||
doLast {
|
||||
configurations.extractJNI.files.each {
|
||||
def file = it.absoluteFile
|
||||
|
||||
copy {
|
||||
from zipTree(file)
|
||||
into "$buildDir/$file.name"
|
||||
include "jni/**/*"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
extractJNIFiles.mustRunAfter extractAARHeaders
|
||||
|
||||
// pre-native build pipeline
|
||||
|
||||
tasks.whenTaskAdded { task ->
|
||||
if (!task.name.contains('Clean') && (task.name.contains('externalNative') || task.name.contains('CMake'))) {
|
||||
task.dependsOn(extractJNIFiles)
|
||||
if (REACT_NATIVE_VERSION < 71) {
|
||||
task.dependsOn(extractAARHeaders)
|
||||
task.dependsOn(prepareThirdPartyNdkHeaders)
|
||||
task.dependsOn(prepareJSC)
|
||||
task.dependsOn(prepareHermes)
|
||||
}
|
||||
}
|
||||
}
|
||||
}
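The third-party tasks above read dependency versions out of React Native's `ReactAndroid/gradle.properties` and compose download URLs from them (e.g. `downloadFolly`). As a minimal sketch of that version-resolution idea outside Gradle, here is a hypothetical `parseProperties` helper in TypeScript; the version numbers in the sample input are illustrative, not the real ones from React Native:

```typescript
// Sketch of the version-resolution step: parse a Java-style .properties
// file and build an archive URL from one of its keys.
function parseProperties(text: string): Record<string, string> {
  const props: Record<string, string> = {};
  for (const line of text.split("\n")) {
    const trimmed = line.trim();
    if (!trimmed || trimmed.startsWith("#")) continue; // skip blanks and comments
    const eq = trimmed.indexOf("=");
    if (eq < 0) continue; // not a key=value line
    props[trimmed.slice(0, eq).trim()] = trimmed.slice(eq + 1).trim();
  }
  return props;
}

// Sample input mirroring the keys the Gradle script reads (values are made up):
const file = `
# third-party versions
BOOST_VERSION=1_76_0
FOLLY_VERSION=2021.06.28.00
GLOG_VERSION=0.3.5
`;
const versions = parseProperties(file);
const follyUrl = `https://github.com/facebook/folly/archive/v${versions["FOLLY_VERSION"]}.tar.gz`;
console.log(follyUrl);
```

This mirrors what `thirdPartyVersions.load(...)` plus the `src(...)` string interpolation do in the build script, just in a form that is easy to test in isolation.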
android/gradlew (vendored, 183 lines)
@ -1,183 +0,0 @@
#!/usr/bin/env sh

#
# Copyright 2015 the original author or authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#      https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

##############################################################################
##
##  Gradle start up script for UN*X
##
##############################################################################

# Attempt to set APP_HOME
# Resolve links: $0 may be a link
PRG="$0"
# Need this for relative symlinks.
while [ -h "$PRG" ] ; do
    ls=`ls -ld "$PRG"`
    link=`expr "$ls" : '.*-> \(.*\)$'`
    if expr "$link" : '/.*' > /dev/null; then
        PRG="$link"
    else
        PRG=`dirname "$PRG"`"/$link"
    fi
done
SAVED="`pwd`"
cd "`dirname \"$PRG\"`/" >/dev/null
APP_HOME="`pwd -P`"
cd "$SAVED" >/dev/null

APP_NAME="Gradle"
APP_BASE_NAME=`basename "$0"`

# Add default JVM options here. You can also use JAVA_OPTS and GRADLE_OPTS to pass JVM options to this script.
DEFAULT_JVM_OPTS='"-Xmx64m" "-Xms64m"'

# Use the maximum available, or set MAX_FD != -1 to use that value.
MAX_FD="maximum"

warn () {
    echo "$*"
}

die () {
    echo
    echo "$*"
    echo
    exit 1
}

# OS specific support (must be 'true' or 'false').
cygwin=false
msys=false
darwin=false
nonstop=false
case "`uname`" in
  CYGWIN* )
    cygwin=true
    ;;
  Darwin* )
    darwin=true
    ;;
  MINGW* )
    msys=true
    ;;
  NONSTOP* )
    nonstop=true
    ;;
esac

CLASSPATH=$APP_HOME/gradle/wrapper/gradle-wrapper.jar

# Determine the Java command to use to start the JVM.
if [ -n "$JAVA_HOME" ] ; then
    if [ -x "$JAVA_HOME/jre/sh/java" ] ; then
        # IBM's JDK on AIX uses strange locations for the executables
        JAVACMD="$JAVA_HOME/jre/sh/java"
    else
        JAVACMD="$JAVA_HOME/bin/java"
    fi
    if [ ! -x "$JAVACMD" ] ; then
        die "ERROR: JAVA_HOME is set to an invalid directory: $JAVA_HOME

Please set the JAVA_HOME variable in your environment to match the
location of your Java installation."
    fi
else
    JAVACMD="java"
    which java >/dev/null 2>&1 || die "ERROR: JAVA_HOME is not set and no 'java' command could be found in your PATH.

Please set the JAVA_HOME variable in your environment to match the
location of your Java installation."
fi

# Increase the maximum file descriptors if we can.
if [ "$cygwin" = "false" -a "$darwin" = "false" -a "$nonstop" = "false" ] ; then
    MAX_FD_LIMIT=`ulimit -H -n`
    if [ $? -eq 0 ] ; then
        if [ "$MAX_FD" = "maximum" -o "$MAX_FD" = "max" ] ; then
            MAX_FD="$MAX_FD_LIMIT"
        fi
        ulimit -n $MAX_FD
        if [ $? -ne 0 ] ; then
            warn "Could not set maximum file descriptor limit: $MAX_FD"
        fi
    else
        warn "Could not query maximum file descriptor limit: $MAX_FD_LIMIT"
    fi
fi

# For Darwin, add options to specify how the application appears in the dock
if $darwin; then
    GRADLE_OPTS="$GRADLE_OPTS \"-Xdock:name=$APP_NAME\" \"-Xdock:icon=$APP_HOME/media/gradle.icns\""
fi

# For Cygwin or MSYS, switch paths to Windows format before running java
if [ "$cygwin" = "true" -o "$msys" = "true" ] ; then
    APP_HOME=`cygpath --path --mixed "$APP_HOME"`
    CLASSPATH=`cygpath --path --mixed "$CLASSPATH"`
    JAVACMD=`cygpath --unix "$JAVACMD"`

    # We build the pattern for arguments to be converted via cygpath
    ROOTDIRSRAW=`find -L / -maxdepth 1 -mindepth 1 -type d 2>/dev/null`
    SEP=""
    for dir in $ROOTDIRSRAW ; do
        ROOTDIRS="$ROOTDIRS$SEP$dir"
        SEP="|"
    done
    OURCYGPATTERN="(^($ROOTDIRS))"
    # Add a user-defined pattern to the cygpath arguments
    if [ "$GRADLE_CYGPATTERN" != "" ] ; then
        OURCYGPATTERN="$OURCYGPATTERN|($GRADLE_CYGPATTERN)"
    fi
    # Now convert the arguments - kludge to limit ourselves to /bin/sh
    i=0
    for arg in "$@" ; do
        CHECK=`echo "$arg"|egrep -c "$OURCYGPATTERN" -`
        CHECK2=`echo "$arg"|egrep -c "^-"`  ### Determine if an option

        if [ $CHECK -ne 0 ] && [ $CHECK2 -eq 0 ] ; then  ### Added a condition
            eval `echo args$i`=`cygpath --path --ignore --mixed "$arg"`
        else
            eval `echo args$i`="\"$arg\""
        fi
        i=`expr $i + 1`
    done
    case $i in
        0) set -- ;;
        1) set -- "$args0" ;;
        2) set -- "$args0" "$args1" ;;
        3) set -- "$args0" "$args1" "$args2" ;;
        4) set -- "$args0" "$args1" "$args2" "$args3" ;;
        5) set -- "$args0" "$args1" "$args2" "$args3" "$args4" ;;
        6) set -- "$args0" "$args1" "$args2" "$args3" "$args4" "$args5" ;;
        7) set -- "$args0" "$args1" "$args2" "$args3" "$args4" "$args5" "$args6" ;;
        8) set -- "$args0" "$args1" "$args2" "$args3" "$args4" "$args5" "$args6" "$args7" ;;
        9) set -- "$args0" "$args1" "$args2" "$args3" "$args4" "$args5" "$args6" "$args7" "$args8" ;;
    esac
fi

# Escape application args
save () {
    for i do printf %s\\n "$i" | sed "s/'/'\\\\''/g;1s/^/'/;\$s/\$/' \\\\/" ; done
    echo " "
}
APP_ARGS=`save "$@"`

# Collect all arguments for the java command, following the shell quoting and substitution rules
eval set -- $DEFAULT_JVM_OPTS $JAVA_OPTS $GRADLE_OPTS "\"-Dorg.gradle.appname=$APP_BASE_NAME\"" -classpath "\"$CLASSPATH\"" org.gradle.wrapper.GradleWrapperMain "$APP_ARGS"

exec "$JAVACMD" "$@"
@ -1,6 +0,0 @@
rootProject.name = 'VisionCamera'

include ':react-native-reanimated'
project(':react-native-reanimated').projectDir = new File(rootProject.projectDir, '../node_modules/react-native-reanimated/android/')

include ':VisionCamera'
@ -1,58 +0,0 @@
//
// Created by Marc Rousavy on 14.06.21.
//

#include "CameraView.h"

#include <jni.h>
#include <fbjni/fbjni.h>
#include <jsi/jsi.h>

#include <memory>
#include <string>
#include <regex>

namespace vision {

using namespace facebook;
using namespace jni;

using TSelf = local_ref<CameraView::jhybriddata>;

TSelf CameraView::initHybrid(alias_ref<HybridClass::jhybridobject> jThis) {
  return makeCxxInstance(jThis);
}

void CameraView::registerNatives() {
  registerHybrid({
    makeNativeMethod("initHybrid", CameraView::initHybrid),
    makeNativeMethod("frameProcessorCallback", CameraView::frameProcessorCallback),
  });
}

void CameraView::frameProcessorCallback(const alias_ref<JImageProxy::javaobject>& frame) {
  if (frameProcessor_ == nullptr) {
    __android_log_write(ANDROID_LOG_WARN, TAG, "Called Frame Processor callback, but `frameProcessor` is null!");
    return;
  }

  try {
    frameProcessor_(frame);
  } catch (const jsi::JSError& error) {
    // TODO: jsi::JSErrors cannot be caught on Hermes. They crash the entire app.
    auto stack = std::regex_replace(error.getStack(), std::regex("\n"), "\n ");
    __android_log_print(ANDROID_LOG_ERROR, TAG, "Frame Processor threw an error! %s\nIn: %s", error.getMessage().c_str(), stack.c_str());
  } catch (const std::exception& exception) {
    __android_log_print(ANDROID_LOG_ERROR, TAG, "Frame Processor threw a C++ error! %s", exception.what());
  }
}

void CameraView::setFrameProcessor(const TFrameProcessor&& frameProcessor) {
  frameProcessor_ = frameProcessor;
}

void vision::CameraView::unsetFrameProcessor() {
  frameProcessor_ = nullptr;
}

} // namespace vision
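In V3, Frame Processors run synchronously; the deleted `frameProcessorCallback` above shows the old dispatch shape: skip with a warning if no processor is set, otherwise invoke it and log any thrown error with an indented stack trace. A hedged TypeScript analogue of that control flow (the `FrameLike` shape and `log` array are invented for illustration, this is not VisionCamera's API):

```typescript
// Illustrative analogue of the native frameProcessorCallback dispatch logic:
// warn when no processor is registered, otherwise call it and capture errors.
type FrameLike = { width: number; height: number };
type FrameProcessor = (frame: FrameLike) => void;

let frameProcessor: FrameProcessor | null = null;
const log: string[] = [];

function frameProcessorCallback(frame: FrameLike): void {
  if (frameProcessor === null) {
    log.push("WARN: Called Frame Processor callback, but `frameProcessor` is null!");
    return;
  }
  try {
    frameProcessor(frame);
  } catch (e) {
    const err = e as Error;
    // indent every stack line, analogous to the std::regex_replace call above
    const stack = (err.stack ?? "").replace(/\n/g, "\n ");
    log.push(`ERROR: Frame Processor threw an error! ${err.message}\nIn: ${stack}`);
  }
}

frameProcessorCallback({ width: 1920, height: 1080 }); // no processor set: warns
frameProcessor = () => { throw new Error("oops"); };
frameProcessorCallback({ width: 1920, height: 1080 }); // processor throws: logged
console.log(log.length);
```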
@ -1,43 +0,0 @@
//
// Created by Marc Rousavy on 14.06.21.
//

#pragma once

#include <jni.h>
#include <fbjni/fbjni.h>

#include <memory>

#include "java-bindings/JImageProxy.h"

namespace vision {

using namespace facebook;
using TFrameProcessor = std::function<void(jni::alias_ref<JImageProxy::javaobject>)>;

class CameraView : public jni::HybridClass<CameraView> {
 public:
  static auto constexpr kJavaDescriptor = "Lcom/mrousavy/camera/CameraView;";
  static auto constexpr TAG = "VisionCamera";
  static jni::local_ref<jhybriddata> initHybrid(jni::alias_ref<jhybridobject> jThis);
  static void registerNatives();

  // TODO: Use template<> to avoid heap allocation for std::function<>
  void setFrameProcessor(const TFrameProcessor&& frameProcessor);
  void unsetFrameProcessor();

 private:
  friend HybridBase;
  jni::global_ref<CameraView::javaobject> javaPart_;
  TFrameProcessor frameProcessor_;

  void frameProcessorCallback(const jni::alias_ref<JImageProxy::javaobject>& frame);

  explicit CameraView(jni::alias_ref<CameraView::jhybridobject> jThis) :
    javaPart_(jni::make_global(jThis)),
    frameProcessor_(nullptr)
  {}
};

} // namespace vision
@ -1,100 +0,0 @@
//
// Created by Marc on 19/06/2021.
//

#include "FrameHostObject.h"
#include <android/log.h>
#include <fbjni/fbjni.h>
#include <jni.h>
#include <vector>
#include <string>

namespace vision {

using namespace facebook;

FrameHostObject::FrameHostObject(jni::alias_ref<JImageProxy::javaobject> image): frame(make_global(image)) { }

FrameHostObject::~FrameHostObject() {
  // Hermes' Garbage Collector (Hades GC) calls destructors on a separate Thread
  // which might not be attached to JNI. Ensure that we use the JNI class loader when
  // deallocating the `frame` HybridClass, because otherwise JNI cannot call the Java
  // destroy() function.
  jni::ThreadScope::WithClassLoader([&] { frame.reset(); });
}

std::vector<jsi::PropNameID> FrameHostObject::getPropertyNames(jsi::Runtime& rt) {
  std::vector<jsi::PropNameID> result;
  result.push_back(jsi::PropNameID::forUtf8(rt, std::string("toString")));
  result.push_back(jsi::PropNameID::forUtf8(rt, std::string("isValid")));
  result.push_back(jsi::PropNameID::forUtf8(rt, std::string("width")));
  result.push_back(jsi::PropNameID::forUtf8(rt, std::string("height")));
  result.push_back(jsi::PropNameID::forUtf8(rt, std::string("bytesPerRow")));
  result.push_back(jsi::PropNameID::forUtf8(rt, std::string("planesCount")));
  result.push_back(jsi::PropNameID::forUtf8(rt, std::string("close")));
  return result;
}

jsi::Value FrameHostObject::get(jsi::Runtime& runtime, const jsi::PropNameID& propNameId) {
  auto name = propNameId.utf8(runtime);

  if (name == "toString") {
    auto toString = [this] (jsi::Runtime& runtime, const jsi::Value&, const jsi::Value*, size_t) -> jsi::Value {
      if (!this->frame) {
        return jsi::String::createFromUtf8(runtime, "[closed frame]");
      }
      auto width = this->frame->getWidth();
      auto height = this->frame->getHeight();
      auto str = std::to_string(width) + " x " + std::to_string(height) + " Frame";
      return jsi::String::createFromUtf8(runtime, str);
    };
    return jsi::Function::createFromHostFunction(runtime, jsi::PropNameID::forUtf8(runtime, "toString"), 0, toString);
  }
  if (name == "close") {
    auto close = [this] (jsi::Runtime& runtime, const jsi::Value&, const jsi::Value*, size_t) -> jsi::Value {
      if (!this->frame) {
        throw jsi::JSError(runtime, "Trying to close an already closed frame! Did you call frame.close() twice?");
      }
      this->close();
      return jsi::Value::undefined();
    };
    return jsi::Function::createFromHostFunction(runtime, jsi::PropNameID::forUtf8(runtime, "close"), 0, close);
  }

  if (name == "isValid") {
    return jsi::Value(this->frame && this->frame->getIsValid());
  }
  if (name == "width") {
    this->assertIsFrameStrong(runtime, name);
    return jsi::Value(this->frame->getWidth());
  }
  if (name == "height") {
    this->assertIsFrameStrong(runtime, name);
    return jsi::Value(this->frame->getHeight());
  }
  if (name == "bytesPerRow") {
    this->assertIsFrameStrong(runtime, name);
    return jsi::Value(this->frame->getBytesPerRow());
  }
  if (name == "planesCount") {
    this->assertIsFrameStrong(runtime, name);
    return jsi::Value(this->frame->getPlanesCount());
  }

  return jsi::Value::undefined();
}

void FrameHostObject::assertIsFrameStrong(jsi::Runtime& runtime, const std::string& accessedPropName) const {
  if (!this->frame) {
    auto message = "Cannot get `" + accessedPropName + "`, frame is already closed!";
    throw jsi::JSError(runtime, message.c_str());
  }
}

void FrameHostObject::close() {
  if (this->frame) {
    this->frame->close();
  }
}

} // namespace vision
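`FrameHostObject::get` above resolves every property lazily and guards closed frames with `assertIsFrameStrong`. The same access pattern can be sketched in TypeScript with a `Proxy`; the `NativeFrame` shape here is an assumption for illustration, not VisionCamera's actual Frame API:

```typescript
// Sketch of the FrameHostObject lazy-property pattern using a JS Proxy:
// `width`/`height` throw once the frame is closed, `isValid` never throws.
interface NativeFrame { width: number; height: number; valid: boolean; close(): void; }

function makeFrameHostObject(native: NativeFrame): any {
  let frame: NativeFrame | null = native;
  const assertStrong = (prop: string): void => {
    if (frame === null) throw new Error("Cannot get `" + prop + "`, frame is already closed!");
  };
  return new Proxy({}, {
    get(_target: object, prop: string | symbol): unknown {
      if (prop === "isValid") return frame !== null && frame.valid;
      if (prop === "toString") {
        return () => (frame ? frame.width + " x " + frame.height + " Frame" : "[closed frame]");
      }
      if (prop === "close") {
        return () => { if (frame) frame.close(); frame = null; };
      }
      // properties that require a live frame, like assertIsFrameStrong above
      if (prop === "width") { assertStrong(prop); return (frame as NativeFrame).width; }
      if (prop === "height") { assertStrong(prop); return (frame as NativeFrame).height; }
      return undefined;
    },
  });
}

const frame = makeFrameHostObject({
  width: 640,
  height: 480,
  valid: true,
  close() { this.valid = false; },
});
console.log(frame.toString()); // 640 x 480 Frame
frame.close();
console.log(frame.isValid); // false
```

The design point carried over from the C++ code: property reads that only make sense on a live frame fail loudly, while `isValid` and `toString` stay safe to call at any time.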
@ -1,39 +0,0 @@
//
// Created by Marc on 19/06/2021.
//

#pragma once

#include <jsi/jsi.h>
#include <jni.h>
#include <fbjni/fbjni.h>
#include <vector>
#include <string>

#include "java-bindings/JImageProxy.h"

namespace vision {

using namespace facebook;

class JSI_EXPORT FrameHostObject : public jsi::HostObject {
 public:
  explicit FrameHostObject(jni::alias_ref<JImageProxy::javaobject> image);
  ~FrameHostObject();

 public:
  jsi::Value get(jsi::Runtime &, const jsi::PropNameID &name) override;
  std::vector<jsi::PropNameID> getPropertyNames(jsi::Runtime &rt) override;

  void close();

 public:
  jni::global_ref<JImageProxy> frame;

 private:
  static auto constexpr TAG = "VisionCamera";

  void assertIsFrameStrong(jsi::Runtime& runtime, const std::string& accessedPropName) const; // NOLINT(runtime/references)
};

} // namespace vision
@ -1,278 +0,0 @@
//
// Created by Marc Rousavy on 11.06.21.
//

#include "FrameProcessorRuntimeManager.h"
#include <android/log.h>
#include <jni.h>
#include <utility>
#include <string>

#include "RuntimeDecorator.h"
#include "RuntimeManager.h"
#include "reanimated-headers/AndroidScheduler.h"
#include "reanimated-headers/AndroidErrorHandler.h"

#include "MakeJSIRuntime.h"
#include "CameraView.h"
#include "FrameHostObject.h"
#include "JSIJNIConversion.h"
#include "VisionCameraScheduler.h"
#include "java-bindings/JImageProxy.h"
#include "java-bindings/JFrameProcessorPlugin.h"

namespace vision {

// type aliases
using TSelf = local_ref<HybridClass<vision::FrameProcessorRuntimeManager>::jhybriddata>;
using TJSCallInvokerHolder = jni::alias_ref<facebook::react::CallInvokerHolder::javaobject>;
using TAndroidScheduler = jni::alias_ref<VisionCameraScheduler::javaobject>;

// JNI binding
void vision::FrameProcessorRuntimeManager::registerNatives() {
  registerHybrid({
    makeNativeMethod("initHybrid",
                     FrameProcessorRuntimeManager::initHybrid),
    makeNativeMethod("installJSIBindings",
                     FrameProcessorRuntimeManager::installJSIBindings),
    makeNativeMethod("initializeRuntime",
                     FrameProcessorRuntimeManager::initializeRuntime),
    makeNativeMethod("registerPlugin",
                     FrameProcessorRuntimeManager::registerPlugin),
  });
}

// JNI init
TSelf vision::FrameProcessorRuntimeManager::initHybrid(
    alias_ref<jhybridobject> jThis,
    jlong jsRuntimePointer,
    TJSCallInvokerHolder jsCallInvokerHolder,
    TAndroidScheduler androidScheduler) {
  __android_log_write(ANDROID_LOG_INFO, TAG,
                      "Initializing FrameProcessorRuntimeManager...");

  // cast from JNI hybrid objects to C++ instances
  auto runtime = reinterpret_cast<jsi::Runtime*>(jsRuntimePointer);
  auto jsCallInvoker = jsCallInvokerHolder->cthis()->getCallInvoker();
  auto scheduler = std::shared_ptr<VisionCameraScheduler>(androidScheduler->cthis());
  scheduler->setJSCallInvoker(jsCallInvoker);

  return makeCxxInstance(jThis, runtime, jsCallInvoker, scheduler);
}

void vision::FrameProcessorRuntimeManager::initializeRuntime() {
  __android_log_write(ANDROID_LOG_INFO, TAG,
                      "Initializing Vision JS-Runtime...");

  // create JSI runtime and decorate it
  auto runtime = makeJSIRuntime();
  reanimated::RuntimeDecorator::decorateRuntime(*runtime, "FRAME_PROCESSOR");
  runtime->global().setProperty(*runtime, "_FRAME_PROCESSOR",
                                jsi::Value(true));

  // create REA runtime manager
  auto errorHandler = std::make_shared<reanimated::AndroidErrorHandler>(scheduler_);
  _runtimeManager = std::make_unique<reanimated::RuntimeManager>(std::move(runtime),
                                                                 errorHandler,
                                                                 scheduler_);

  __android_log_write(ANDROID_LOG_INFO, TAG,
                      "Initialized Vision JS-Runtime!");
}

global_ref<CameraView::javaobject> FrameProcessorRuntimeManager::findCameraViewById(int viewId) {
  static const auto findCameraViewByIdMethod = javaPart_->getClass()->getMethod<CameraView(jint)>("findCameraViewById");
  auto weakCameraView = findCameraViewByIdMethod(javaPart_.get(), viewId);
  return make_global(weakCameraView);
}

void FrameProcessorRuntimeManager::logErrorToJS(const std::string& message) {
  if (!this->jsCallInvoker_) {
    return;
  }

  this->jsCallInvoker_->invokeAsync([this, message]() {
    if (this->runtime_ == nullptr) {
      return;
    }

    auto& runtime = *this->runtime_;
    auto consoleError = runtime
      .global()
      .getPropertyAsObject(runtime, "console")
      .getPropertyAsFunction(runtime, "error");
    consoleError.call(runtime, jsi::String::createFromUtf8(runtime, message));
  });
}

void FrameProcessorRuntimeManager::setFrameProcessor(jsi::Runtime& runtime,
                                                     int viewTag,
                                                     const jsi::Value& frameProcessor) {
  __android_log_write(ANDROID_LOG_INFO, TAG,
                      "Setting new Frame Processor...");

  if (!_runtimeManager || !_runtimeManager->runtime) {
    throw jsi::JSError(runtime,
                       "setFrameProcessor(..): VisionCamera's RuntimeManager is not yet initialized!");
  }

  // find camera view
  auto cameraView = findCameraViewById(viewTag);
  __android_log_write(ANDROID_LOG_INFO, TAG, "Found CameraView!");

  // convert jsi::Function to a ShareableValue (can be shared across runtimes)
  __android_log_write(ANDROID_LOG_INFO, TAG,
                      "Adapting Shareable value from function (conversion to worklet)...");
  auto worklet = reanimated::ShareableValue::adapt(runtime,
                                                   frameProcessor,
                                                   _runtimeManager.get());
  __android_log_write(ANDROID_LOG_INFO, TAG, "Successfully created worklet!");

  scheduler_->scheduleOnUI([=]() {
    // cast worklet to a jsi::Function for the new runtime
    auto& rt = *_runtimeManager->runtime;
    auto function = std::make_shared<jsi::Function>(worklet->getValue(rt).asObject(rt).asFunction(rt));

    // assign lambda to frame processor
    cameraView->cthis()->setFrameProcessor([this, &rt, function](jni::alias_ref<JImageProxy::javaobject> frame) {
      try {
        // create HostObject which holds the Frame (JImageProxy)
        auto hostObject = std::make_shared<FrameHostObject>(frame);
        function->callWithThis(rt, *function, jsi::Object::createFromHostObject(rt, hostObject));
      } catch (jsi::JSError& jsError) {
        auto message = "Frame Processor threw an error: " + jsError.getMessage();
        __android_log_write(ANDROID_LOG_ERROR, TAG, message.c_str());
        this->logErrorToJS(message);
      }
    });

    __android_log_write(ANDROID_LOG_INFO, TAG, "Frame Processor set!");
  });
}

void FrameProcessorRuntimeManager::unsetFrameProcessor(int viewTag) {
  __android_log_write(ANDROID_LOG_INFO, TAG, "Removing Frame Processor...");

  // find camera view
  auto cameraView = findCameraViewById(viewTag);

  // call Java method to unset frame processor
  cameraView->cthis()->unsetFrameProcessor();

  __android_log_write(ANDROID_LOG_INFO, TAG, "Frame Processor removed!");
}

// actual JSI installer
void FrameProcessorRuntimeManager::installJSIBindings() {
|
||||
__android_log_write(ANDROID_LOG_INFO, TAG, "Installing JSI bindings...");
|
||||
|
||||
if (runtime_ == nullptr) {
|
||||
__android_log_write(ANDROID_LOG_ERROR, TAG,
|
||||
"JS-Runtime was null, Frame Processor JSI bindings could not be installed!");
|
||||
return;
|
||||
}
|
||||
|
||||
auto& jsiRuntime = *runtime_;
|
||||
|
||||
auto setFrameProcessor = [this](jsi::Runtime &runtime,
|
||||
const jsi::Value &thisValue,
|
||||
const jsi::Value *arguments,
|
||||
size_t count) -> jsi::Value {
|
||||
__android_log_write(ANDROID_LOG_INFO, TAG,
|
||||
"Setting new Frame Processor...");
|
||||
|
||||
if (!arguments[0].isNumber()) {
|
||||
throw jsi::JSError(runtime,
|
||||
"Camera::setFrameProcessor: First argument ('viewTag') must be a number!");
|
||||
}
|
||||
if (!arguments[1].isObject()) {
|
||||
throw jsi::JSError(runtime,
|
||||
"Camera::setFrameProcessor: Second argument ('frameProcessor') must be a function!");
|
||||
}
|
||||
|
||||
double viewTag = arguments[0].asNumber();
|
||||
const jsi::Value& frameProcessor = arguments[1];
|
||||
this->setFrameProcessor(runtime, static_cast<int>(viewTag), frameProcessor);
|
||||
|
||||
return jsi::Value::undefined();
|
||||
};
|
||||
jsiRuntime.global().setProperty(jsiRuntime,
|
||||
"setFrameProcessor",
|
||||
jsi::Function::createFromHostFunction(
|
||||
jsiRuntime,
|
||||
jsi::PropNameID::forAscii(jsiRuntime,
|
||||
"setFrameProcessor"),
|
||||
2, // viewTag, frameProcessor
|
||||
setFrameProcessor));
|
||||
|
||||
|
||||
auto unsetFrameProcessor = [this](jsi::Runtime &runtime,
|
||||
const jsi::Value &thisValue,
|
||||
const jsi::Value *arguments,
|
||||
size_t count) -> jsi::Value {
|
||||
__android_log_write(ANDROID_LOG_INFO, TAG, "Removing Frame Processor...");
|
||||
if (!arguments[0].isNumber()) {
|
||||
throw jsi::JSError(runtime,
|
||||
"Camera::unsetFrameProcessor: First argument ('viewTag') must be a number!");
|
||||
}
|
||||
|
||||
auto viewTag = arguments[0].asNumber();
|
||||
this->unsetFrameProcessor(static_cast<int>(viewTag));
|
||||
|
||||
return jsi::Value::undefined();
|
||||
};
|
||||
jsiRuntime.global().setProperty(jsiRuntime,
|
||||
"unsetFrameProcessor",
|
||||
jsi::Function::createFromHostFunction(
|
||||
jsiRuntime,
|
||||
jsi::PropNameID::forAscii(jsiRuntime,
|
||||
"unsetFrameProcessor"),
|
||||
1, // viewTag
|
||||
unsetFrameProcessor));
|
||||
|
||||
__android_log_write(ANDROID_LOG_INFO, TAG, "Finished installing JSI bindings!");
|
||||
}
|
||||
|
||||
void FrameProcessorRuntimeManager::registerPlugin(alias_ref<JFrameProcessorPlugin::javaobject> plugin) {
|
||||
// _runtimeManager might never be null, but we can never be too sure.
|
||||
if (!_runtimeManager || !_runtimeManager->runtime) {
|
||||
throw std::runtime_error("Tried to register plugin before initializing JS runtime! Call `initializeRuntime()` first.");
|
||||
}
|
||||
|
||||
auto& runtime = *_runtimeManager->runtime;
|
||||
|
||||
// we need a strong reference on the plugin, make_global does that.
|
||||
auto pluginGlobal = make_global(plugin);
|
||||
// name is always prefixed with two underscores (__)
|
||||
auto name = "__" + pluginGlobal->getName();
|
||||
|
||||
__android_log_print(ANDROID_LOG_INFO, TAG, "Installing Frame Processor Plugin \"%s\"...", name.c_str());
|
||||
|
||||
auto callback = [pluginGlobal](jsi::Runtime& runtime,
|
||||
const jsi::Value& thisValue,
|
||||
const jsi::Value* arguments,
|
||||
size_t count) -> jsi::Value {
|
||||
// Unbox object and get typed HostObject
|
||||
auto boxedHostObject = arguments[0].asObject(runtime).asHostObject(runtime);
|
||||
auto frameHostObject = static_cast<FrameHostObject*>(boxedHostObject.get());
|
||||
|
||||
// parse params - we are offset by `1` because the frame is the first parameter.
|
||||
auto params = JArrayClass<jobject>::newArray(count - 1);
|
||||
for (size_t i = 1; i < count; i++) {
|
||||
params->setElement(i - 1, JSIJNIConversion::convertJSIValueToJNIObject(runtime, arguments[i]));
|
||||
}
|
||||
|
||||
// call implemented virtual method
|
||||
auto result = pluginGlobal->callback(frameHostObject->frame, params);
|
||||
|
||||
// convert result from JNI to JSI value
|
||||
return JSIJNIConversion::convertJNIObjectToJSIValue(runtime, result);
|
||||
};
|
||||
|
||||
runtime.global().setProperty(runtime, name.c_str(), jsi::Function::createFromHostFunction(runtime,
|
||||
jsi::PropNameID::forAscii(runtime, name),
|
||||
1, // frame
|
||||
callback));
|
||||
}
|
||||
|
||||
} // namespace vision
|
@@ -1,64 +0,0 @@
//
// Created by Marc Rousavy on 11.06.21.
//

#pragma once

#include <fbjni/fbjni.h>
#include <jsi/jsi.h>
#include <ReactCommon/CallInvokerHolder.h>
#include <memory>
#include <string>

#include "RuntimeManager.h"
#include "reanimated-headers/AndroidScheduler.h"

#include "CameraView.h"
#include "VisionCameraScheduler.h"
#include "java-bindings/JFrameProcessorPlugin.h"

namespace vision {

using namespace facebook;

class FrameProcessorRuntimeManager : public jni::HybridClass<FrameProcessorRuntimeManager> {
 public:
  static auto constexpr kJavaDescriptor = "Lcom/mrousavy/camera/frameprocessor/FrameProcessorRuntimeManager;";
  static auto constexpr TAG = "VisionCamera";
  static jni::local_ref<jhybriddata> initHybrid(jni::alias_ref<jhybridobject> jThis,
                                                jlong jsContext,
                                                jni::alias_ref<facebook::react::CallInvokerHolder::javaobject> jsCallInvokerHolder,
                                                jni::alias_ref<vision::VisionCameraScheduler::javaobject> androidScheduler);
  static void registerNatives();

  explicit FrameProcessorRuntimeManager(jni::alias_ref<FrameProcessorRuntimeManager::jhybridobject> jThis,
                                        jsi::Runtime* runtime,
                                        std::shared_ptr<facebook::react::CallInvoker> jsCallInvoker,
                                        std::shared_ptr<vision::VisionCameraScheduler> scheduler)
      : javaPart_(jni::make_global(jThis)),
        runtime_(runtime),
        jsCallInvoker_(jsCallInvoker),
        scheduler_(scheduler) {}

 private:
  friend HybridBase;
  jni::global_ref<FrameProcessorRuntimeManager::javaobject> javaPart_;
  jsi::Runtime* runtime_;
  std::shared_ptr<facebook::react::CallInvoker> jsCallInvoker_;
  std::shared_ptr<reanimated::RuntimeManager> _runtimeManager;
  std::shared_ptr<vision::VisionCameraScheduler> scheduler_;

  jni::global_ref<CameraView::javaobject> findCameraViewById(int viewId);
  void initializeRuntime();
  void installJSIBindings();
  void registerPlugin(alias_ref<JFrameProcessorPlugin::javaobject> plugin);
  void logErrorToJS(const std::string& message);

  void setFrameProcessor(jsi::Runtime& runtime, // NOLINT(runtime/references)
                         int viewTag,
                         const jsi::Value& frameProcessor);
  void unsetFrameProcessor(int viewTag);
};

} // namespace vision
@@ -1,199 +0,0 @@
//
// Created by Marc Rousavy on 22.06.21.
//

#include "JSIJNIConversion.h"

#include <jsi/jsi.h>
#include <jni.h>
#include <fbjni/fbjni.h>
#include <android/log.h>

#include <string>
#include <utility>
#include <memory>

#include <react/jni/NativeMap.h>
#include <react/jni/ReadableNativeMap.h>
#include <react/jni/WritableNativeMap.h>

#include <jsi/JSIDynamic.h>
#include <folly/dynamic.h>

#include "FrameHostObject.h"
#include "java-bindings/JImageProxy.h"
#include "java-bindings/JArrayList.h"
#include "java-bindings/JHashMap.h"

namespace vision {

using namespace facebook;

jobject JSIJNIConversion::convertJSIValueToJNIObject(jsi::Runtime& runtime, const jsi::Value& value) {
  if (value.isBool()) {
    // jsi::Bool
    auto boolean = jni::JBoolean::valueOf(value.getBool());
    return boolean.release();
  } else if (value.isNumber()) {
    // jsi::Number
    auto number = jni::JDouble::valueOf(value.getNumber());
    return number.release();
  } else if (value.isNull() || value.isUndefined()) {
    // jsi::undefined
    return nullptr;
  } else if (value.isString()) {
    // jsi::String
    auto string = jni::make_jstring(value.getString(runtime).utf8(runtime));
    return string.release();
  } else if (value.isObject()) {
    // jsi::Object
    auto object = value.asObject(runtime);

    if (object.isArray(runtime)) {
      // jsi::Array
      auto dynamic = jsi::dynamicFromValue(runtime, value);
      auto nativeArray = react::ReadableNativeArray::newObjectCxxArgs(std::move(dynamic));
      return nativeArray.release();
    } else if (object.isHostObject(runtime)) {
      // jsi::HostObject
      auto boxedHostObject = object.getHostObject(runtime);
      auto hostObject = dynamic_cast<FrameHostObject*>(boxedHostObject.get());
      if (hostObject != nullptr) {
        // return jni local_ref to the JImageProxy
        return hostObject->frame.get();
      } else {
        // it's a different kind of HostObject. We don't support it.
        throw std::runtime_error("Received an unknown HostObject! Cannot convert to a JNI value.");
      }
    } else if (object.isFunction(runtime)) {
      // jsi::Function
      // TODO: Convert Function to Callback
      throw std::runtime_error("Cannot convert a JS Function to a JNI value (yet)!");
    } else {
      // jsi::Object
      auto dynamic = jsi::dynamicFromValue(runtime, value);
      auto map = react::ReadableNativeMap::createWithContents(std::move(dynamic));
      return map.release();
    }
  } else {
    // unknown jsi type!
    auto stringRepresentation = value.toString(runtime).utf8(runtime);
    auto message = "Received unknown JSI value! (" + stringRepresentation + ") Cannot convert to a JNI value.";
    throw std::runtime_error(message);
  }
}

jsi::Value JSIJNIConversion::convertJNIObjectToJSIValue(jsi::Runtime& runtime, const jni::local_ref<jobject>& object) {
  if (object == nullptr) {
    // null
    return jsi::Value::undefined();
  } else if (object->isInstanceOf(jni::JBoolean::javaClassStatic())) {
    // Boolean
    static const auto getBooleanFunc = jni::findClassLocal("java/lang/Boolean")->getMethod<jboolean()>("booleanValue");
    auto boolean = getBooleanFunc(object.get());
    return jsi::Value(boolean == true);
  } else if (object->isInstanceOf(jni::JDouble::javaClassStatic())) {
    // Double
    static const auto getDoubleFunc = jni::findClassLocal("java/lang/Double")->getMethod<jdouble()>("doubleValue");
    auto d = getDoubleFunc(object.get());
    return jsi::Value(d);
  } else if (object->isInstanceOf(jni::JInteger::javaClassStatic())) {
    // Integer
    static const auto getIntegerFunc = jni::findClassLocal("java/lang/Integer")->getMethod<jint()>("intValue");
    auto i = getIntegerFunc(object.get());
    return jsi::Value(i);
  } else if (object->isInstanceOf(jni::JString::javaClassStatic())) {
    // String
    return jsi::String::createFromUtf8(runtime, object->toString());
  } else if (object->isInstanceOf(JArrayList<jobject>::javaClassStatic())) {
    // ArrayList<E>
    auto arrayList = static_ref_cast<JArrayList<jobject>>(object);
    auto size = arrayList->size();

    auto result = jsi::Array(runtime, size);
    size_t i = 0;
    for (const auto& item : *arrayList) {
      result.setValueAtIndex(runtime, i, convertJNIObjectToJSIValue(runtime, item));
      i++;
    }
    return result;
  } else if (object->isInstanceOf(react::ReadableArray::javaClassStatic())) {
    // ReadableArray
    static const auto toArrayListFunc = react::ReadableArray::javaClassLocal()->getMethod<JArrayList<jobject>()>("toArrayList");

    // call recursively, this time with an ArrayList<E>
    auto array = toArrayListFunc(object.get());
    return convertJNIObjectToJSIValue(runtime, array);
  } else if (object->isInstanceOf(JHashMap<jstring, jobject>::javaClassStatic())) {
    // HashMap<K, V>
    auto map = static_ref_cast<JHashMap<jstring, jobject>>(object);

    auto result = jsi::Object(runtime);
    for (const auto& entry : *map) {
      auto key = entry.first->toString();
      auto value = entry.second;
      auto jsiValue = convertJNIObjectToJSIValue(runtime, value);
      result.setProperty(runtime, key.c_str(), jsiValue);
    }
    return result;
  } else if (object->isInstanceOf(react::ReadableMap::javaClassStatic())) {
    // ReadableMap
    static const auto toHashMapFunc = react::ReadableMap::javaClassLocal()->getMethod<JHashMap<jstring, jobject>()>("toHashMap");

    // call recursively, this time with a HashMap<K, V>
    auto hashMap = toHashMapFunc(object.get());
    return convertJNIObjectToJSIValue(runtime, hashMap);
  } else if (object->isInstanceOf(JImageProxy::javaClassStatic())) {
    // ImageProxy
    auto frame = static_ref_cast<JImageProxy>(object);

    // box into HostObject
    auto hostObject = std::make_shared<FrameHostObject>(frame);
    return jsi::Object::createFromHostObject(runtime, hostObject);
  }

  auto type = object->getClass()->toString();
  auto message = "Received unknown JNI type \"" + type + "\"! Cannot convert to jsi::Value.";
  __android_log_write(ANDROID_LOG_ERROR, "VisionCamera", message.c_str());
  throw std::runtime_error(message);
}

} // namespace vision
@@ -1,23 +0,0 @@
//
// Created by Marc Rousavy on 22.06.21.
//

#pragma once

#include <jsi/jsi.h>
#include <jni.h>
#include <fbjni/fbjni.h>

namespace vision {

namespace JSIJNIConversion {

using namespace facebook;

jobject convertJSIValueToJNIObject(jsi::Runtime& runtime, const jsi::Value& value); // NOLINT(runtime/references)

jsi::Value convertJNIObjectToJSIValue(jsi::Runtime& runtime, const jni::local_ref<jobject>& object); // NOLINT(runtime/references)

} // namespace JSIJNIConversion

} // namespace vision
@@ -1,30 +0,0 @@
//
// Created by Marc Rousavy on 06.07.21.
//

#pragma once

#include <jsi/jsi.h>
#include <memory>

#if FOR_HERMES
// Hermes
#include <hermes/hermes.h>
#else
// JSC
#include <jsi/JSCRuntime.h>
#endif

namespace vision {

using namespace facebook;

static std::unique_ptr<jsi::Runtime> makeJSIRuntime() {
#if FOR_HERMES
  return facebook::hermes::makeHermesRuntime();
#else
  return facebook::jsc::makeJSCRuntime();
#endif
}

} // namespace vision
@@ -1,13 +0,0 @@
#include <jni.h>
#include <fbjni/fbjni.h>
#include "FrameProcessorRuntimeManager.h"
#include "CameraView.h"
#include "VisionCameraScheduler.h"

JNIEXPORT jint JNICALL JNI_OnLoad(JavaVM* vm, void*) {
  return facebook::jni::initialize(vm, [] {
    vision::FrameProcessorRuntimeManager::registerNatives();
    vision::CameraView::registerNatives();
    vision::VisionCameraScheduler::registerNatives();
  });
}
@@ -1,42 +0,0 @@
//
// Created by Marc Rousavy on 25.07.21.
//

#include "VisionCameraScheduler.h"
#include <fbjni/fbjni.h>

namespace vision {

using namespace facebook;
using TSelf = jni::local_ref<VisionCameraScheduler::jhybriddata>;

TSelf VisionCameraScheduler::initHybrid(jni::alias_ref<jhybridobject> jThis) {
  return makeCxxInstance(jThis);
}

void VisionCameraScheduler::scheduleOnUI(std::function<void()> job) {
  // 1. add job to queue
  uiJobs.push(job);
  scheduleTrigger();
}

void VisionCameraScheduler::scheduleTrigger() {
  // 2. schedule `triggerUI` to be called on the java thread
  static auto method = javaPart_->getClass()->getMethod<void()>("scheduleTrigger");
  method(javaPart_.get());
}

void VisionCameraScheduler::triggerUI() {
  // 3. call the job we enqueued in step 1.
  auto job = uiJobs.pop();
  job();
}

void VisionCameraScheduler::registerNatives() {
  registerHybrid({
    makeNativeMethod("initHybrid", VisionCameraScheduler::initHybrid),
    makeNativeMethod("triggerUI", VisionCameraScheduler::triggerUI),
  });
}

} // namespace vision
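The three numbered steps in the scheduler above (enqueue a job, post a trigger to the target thread, pop and run the job there) can be sketched without any JNI as a plain mutex-guarded job queue. `SketchScheduler` is a hypothetical illustration, not the library's API; the "target thread" is simulated by calling `triggerUI()` by hand where the real code has Java's `scheduleTrigger` post it:

```cpp
#include <functional>
#include <mutex>
#include <queue>

// Minimal sketch of the VisionCameraScheduler pattern:
//   scheduleOnUI()  -> push job, request a trigger on the target thread
//   triggerUI()     -> runs on the target thread, pops one job and executes it
class SketchScheduler {
 public:
  void scheduleOnUI(std::function<void()> job) {
    std::lock_guard<std::mutex> lock(mutex_);
    jobs_.push(std::move(job));
    // the real code now calls scheduleTrigger(), which posts triggerUI()
    // to the Java-side Frame Processor thread
  }

  void triggerUI() {
    std::function<void()> job;
    {
      std::lock_guard<std::mutex> lock(mutex_);
      if (jobs_.empty()) return;  // spurious trigger: nothing to run
      job = std::move(jobs_.front());
      jobs_.pop();
    }
    job();  // run outside the lock so the job may schedule more work
  }

 private:
  std::mutex mutex_;
  std::queue<std::function<void()>> jobs_;
};
```

Running the job outside the lock is the important detail: a job that calls `scheduleOnUI` again would otherwise deadlock on the same mutex.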
@@ -1,38 +0,0 @@
//
// Created by Marc Rousavy on 25.07.21.
//

#pragma once

#include "Scheduler.h"
#include <ReactCommon/CallInvokerHolder.h>
#include <jni.h>
#include <fbjni/fbjni.h>

namespace vision {

using namespace facebook;

class VisionCameraScheduler : public reanimated::Scheduler, public jni::HybridClass<VisionCameraScheduler> {
 public:
  static auto constexpr kJavaDescriptor = "Lcom/mrousavy/camera/frameprocessor/VisionCameraScheduler;";
  static jni::local_ref<jhybriddata> initHybrid(jni::alias_ref<jhybridobject> jThis);
  static void registerNatives();

  // schedules the given job to be run on the VisionCamera FP Thread at some future point in time
  void scheduleOnUI(std::function<void()> job) override;

 private:
  friend HybridBase;
  jni::global_ref<VisionCameraScheduler::javaobject> javaPart_;

  explicit VisionCameraScheduler(jni::alias_ref<VisionCameraScheduler::jhybridobject> jThis)
      : javaPart_(jni::make_global(jThis)) {}

  // Schedules a call to `triggerUI` on the VisionCamera FP Thread
  void scheduleTrigger();
  // Calls the latest job in the job queue
  void triggerUI() override;
};

} // namespace vision
@@ -1,21 +0,0 @@
//
// Created by Marc Rousavy on 24.06.21.
//

#pragma once

#include <jni.h>
#include <fbjni/fbjni.h>

namespace vision {

using namespace facebook;
using namespace jni;

// TODO: Remove when fbjni 0.2.3 releases.
template <typename E = jobject>
struct JArrayList : JavaClass<JArrayList<E>, JList<E>> {
  constexpr static auto kJavaDescriptor = "Ljava/util/ArrayList;";
};

} // namespace vision
@@ -1,30 +0,0 @@
//
// Created by Marc Rousavy on 29.09.21.
//

#include "JFrameProcessorPlugin.h"

#include <jni.h>
#include <fbjni/fbjni.h>

namespace vision {

using namespace facebook;
using namespace jni;

using TCallback = jobject(alias_ref<JImageProxy::javaobject>, alias_ref<JArrayClass<jobject>>);

local_ref<jobject> JFrameProcessorPlugin::callback(alias_ref<JImageProxy::javaobject> image,
                                                   alias_ref<JArrayClass<jobject>> params) const {
  auto callbackMethod = getClass()->getMethod<TCallback>("callback");

  auto result = callbackMethod(self(), image, params);
  return make_local(result);
}

std::string JFrameProcessorPlugin::getName() const {
  auto getNameMethod = getClass()->getMethod<jstring()>("getName");
  return getNameMethod(self())->toStdString();
}

} // namespace vision
@@ -1,20 +0,0 @@
//
// Created by Marc Rousavy on 25.06.21.
//

#include "JHashMap.h"

#include <jni.h>
#include <fbjni/fbjni.h>

namespace vision {

using namespace facebook;
using namespace jni;

template <typename K, typename V>
local_ref<JHashMap<K, V>> JHashMap<K, V>::create() {
  return JHashMap<K, V>::newInstance();
}

} // namespace vision
@@ -1,23 +0,0 @@
//
// Created by Marc Rousavy on 25.06.21.
//

#pragma once

#include <jni.h>
#include <fbjni/fbjni.h>

namespace vision {

using namespace facebook;
using namespace jni;

// TODO: Remove when fbjni 0.2.3 releases.
template <typename K = jobject, typename V = jobject>
struct JHashMap : JavaClass<JHashMap<K, V>, JMap<K, V>> {
  constexpr static auto kJavaDescriptor = "Ljava/util/HashMap;";

  static local_ref<JHashMap<K, V>> create();
};

} // namespace vision
@@ -1,53 +0,0 @@
//
// Created by Marc Rousavy on 22.06.21.
//

#include "JImageProxy.h"

#include <jni.h>
#include <fbjni/fbjni.h>

namespace vision {

using namespace facebook;
using namespace jni;

int JImageProxy::getWidth() const {
  static const auto getWidthMethod = getClass()->getMethod<jint()>("getWidth");
  return getWidthMethod(self());
}

int JImageProxy::getHeight() const {
  static const auto getHeightMethod = getClass()->getMethod<jint()>("getHeight");
  return getHeightMethod(self());
}

alias_ref<JClass> getUtilsClass() {
  static const auto ImageProxyUtilsClass = findClassStatic("com/mrousavy/camera/frameprocessor/ImageProxyUtils");
  return ImageProxyUtilsClass;
}

bool JImageProxy::getIsValid() const {
  auto utilsClass = getUtilsClass();
  static const auto isImageProxyValidMethod = utilsClass->getStaticMethod<jboolean(JImageProxy::javaobject)>("isImageProxyValid");
  return isImageProxyValidMethod(utilsClass, self());
}

int JImageProxy::getPlanesCount() const {
  auto utilsClass = getUtilsClass();
  static const auto getPlanesCountMethod = utilsClass->getStaticMethod<jint(JImageProxy::javaobject)>("getPlanesCount");
  return getPlanesCountMethod(utilsClass, self());
}

int JImageProxy::getBytesPerRow() const {
  auto utilsClass = getUtilsClass();
  static const auto getBytesPerRowMethod = utilsClass->getStaticMethod<jint(JImageProxy::javaobject)>("getBytesPerRow");
  return getBytesPerRowMethod(utilsClass, self());
}

void JImageProxy::close() {
  static const auto closeMethod = getClass()->getMethod<void()>("close");
  closeMethod(self());
}

} // namespace vision
@@ -1,27 +0,0 @@
//
// Created by Marc on 19/06/2021.
//

#pragma once

#include <jni.h>
#include <fbjni/fbjni.h>

namespace vision {

using namespace facebook;
using namespace jni;

struct JImageProxy : public JavaClass<JImageProxy> {
  static constexpr auto kJavaDescriptor = "Landroidx/camera/core/ImageProxy;";

 public:
  int getWidth() const;
  int getHeight() const;
  bool getIsValid() const;
  int getPlanesCount() const;
  int getBytesPerRow() const;
  void close();
};

} // namespace vision
@@ -1,30 +0,0 @@
// copied from https://github.com/software-mansion/react-native-reanimated/blob/master/android/src/main/cpp/headers/AndroidErrorHandler.h

#pragma once

#include "ErrorHandler.h"
#include "AndroidScheduler.h"
#include "Scheduler.h"
#include <jni.h>
#include <memory>
#include <fbjni/fbjni.h>
#include "Logger.h"

namespace reanimated {

class AndroidErrorHandler : public JavaClass<AndroidErrorHandler>, public ErrorHandler {
  std::shared_ptr<ErrorWrapper> error;
  std::shared_ptr<Scheduler> scheduler;
  void raiseSpec() override;

 public:
  static auto constexpr kJavaDescriptor = "Lcom/swmansion/reanimated/AndroidErrorHandler;";
  AndroidErrorHandler(std::shared_ptr<Scheduler> scheduler);
  std::shared_ptr<Scheduler> getScheduler() override;
  std::shared_ptr<ErrorWrapper> getError() override;
  void setError(std::string message) override;
  virtual ~AndroidErrorHandler() {}
};

} // namespace reanimated
@@ -1,37 +0,0 @@
// copied from https://github.com/software-mansion/react-native-reanimated/blob/master/android/src/main/cpp/headers/AndroidScheduler.h

#pragma once

#include <jni.h>
#include <fbjni/fbjni.h>
#include <jsi/jsi.h>
#include <react/jni/CxxModuleWrapper.h>
#include <react/jni/JMessageQueueThread.h>
#include "Scheduler.h"

namespace reanimated {

using namespace facebook;

class AndroidScheduler : public jni::HybridClass<AndroidScheduler> {
 public:
  static auto constexpr kJavaDescriptor = "Lcom/swmansion/reanimated/Scheduler;";
  static jni::local_ref<jhybriddata> initHybrid(jni::alias_ref<jhybridobject> jThis);
  static void registerNatives();

  std::shared_ptr<Scheduler> getScheduler() { return scheduler_; }

  void scheduleOnUI();

 private:
  friend HybridBase;

  void triggerUI();

  jni::global_ref<AndroidScheduler::javaobject> javaPart_;
  std::shared_ptr<Scheduler> scheduler_;

  explicit AndroidScheduler(jni::alias_ref<AndroidScheduler::jhybridobject> jThis);
};

} // namespace reanimated
@@ -1,29 +0,0 @@
package com.mrousavy.camera

import androidx.camera.core.FocusMeteringAction
import com.facebook.react.bridge.ReadableMap
import kotlinx.coroutines.guava.await
import kotlinx.coroutines.withContext
import java.util.concurrent.TimeUnit

suspend fun CameraView.focus(pointMap: ReadableMap) {
  val cameraControl = camera?.cameraControl ?: throw CameraNotReadyError()
  if (!pointMap.hasKey("x") || !pointMap.hasKey("y")) {
    throw InvalidTypeScriptUnionError("point", pointMap.toString())
  }

  val dpi = resources.displayMetrics.density
  val x = pointMap.getDouble("x") * dpi
  val y = pointMap.getDouble("y") * dpi

  // Getting the point from the previewView needs to be run on the UI thread
  val point = withContext(coroutineScope.coroutineContext) {
    previewView.meteringPointFactory.createPoint(x.toFloat(), y.toFloat())
  }

  val action = FocusMeteringAction.Builder(point, FocusMeteringAction.FLAG_AF or FocusMeteringAction.FLAG_AE)
    .setAutoCancelDuration(5, TimeUnit.SECONDS) // auto-reset after 5 seconds
    .build()

  cameraControl.startFocusAndMetering(action).await()
}
@@ -1,122 +0,0 @@
|
||||
package com.mrousavy.camera
|
||||
|
||||
import android.Manifest
|
||||
import android.annotation.SuppressLint
|
||||
import android.content.pm.PackageManager
|
||||
import androidx.camera.video.FileOutputOptions
|
||||
import androidx.camera.video.VideoRecordEvent
|
||||
import androidx.core.content.ContextCompat
|
||||
import androidx.core.util.Consumer
|
||||
import com.facebook.react.bridge.*
|
||||
import com.mrousavy.camera.utils.makeErrorMap
|
||||
import java.io.File
|
||||
import java.text.SimpleDateFormat
|
||||
import java.util.*
|
||||
|
||||
data class TemporaryFile(val path: String)
|
||||
|
||||
fun CameraView.startRecording(options: ReadableMap, onRecordCallback: Callback) {
  if (videoCapture == null) {
    if (video == true) {
      throw CameraNotReadyError()
    } else {
      throw VideoNotEnabledError()
    }
  }

  // check audio permission
  if (audio == true) {
    if (ContextCompat.checkSelfPermission(context, Manifest.permission.RECORD_AUDIO) != PackageManager.PERMISSION_GRANTED) {
      throw MicrophonePermissionError()
    }
  }

  if (options.hasKey("flash")) {
    val enableFlash = options.getString("flash") == "on"
    // overrides current torch mode value to enable flash while recording
    camera!!.cameraControl.enableTorch(enableFlash)
  }

  val id = SimpleDateFormat("yyyyMMdd_HHmmss", Locale.US).format(Date())
  val file = File.createTempFile("VisionCamera-${id}", ".mp4")
  val fileOptions = FileOutputOptions.Builder(file).build()

  val recorder = videoCapture!!.output
  var recording = recorder.prepareRecording(context, fileOptions)

  if (audio == true) {
    @SuppressLint("MissingPermission")
    recording = recording.withAudioEnabled()
  }

  activeVideoRecording = recording.start(ContextCompat.getMainExecutor(context), object : Consumer<VideoRecordEvent> {
    override fun accept(event: VideoRecordEvent?) {
      if (event is VideoRecordEvent.Finalize) {
        if (event.hasError()) {
          // error occurred!
          val error = when (event.error) {
            VideoRecordEvent.Finalize.ERROR_ENCODING_FAILED -> VideoEncoderError(event.cause)
            VideoRecordEvent.Finalize.ERROR_FILE_SIZE_LIMIT_REACHED -> FileSizeLimitReachedError(event.cause)
            VideoRecordEvent.Finalize.ERROR_INSUFFICIENT_STORAGE -> InsufficientStorageError(event.cause)
            VideoRecordEvent.Finalize.ERROR_INVALID_OUTPUT_OPTIONS -> InvalidVideoOutputOptionsError(event.cause)
            VideoRecordEvent.Finalize.ERROR_NO_VALID_DATA -> NoValidDataError(event.cause)
            VideoRecordEvent.Finalize.ERROR_RECORDER_ERROR -> RecorderError(event.cause)
            VideoRecordEvent.Finalize.ERROR_SOURCE_INACTIVE -> InactiveSourceError(event.cause)
            else -> UnknownCameraError(event.cause)
          }
          val map = makeErrorMap("${error.domain}/${error.id}", error.message, error)
          onRecordCallback(null, map)
        } else {
          // recording saved successfully!
          val map = Arguments.createMap()
          map.putString("path", event.outputResults.outputUri.toString())
          map.putDouble("duration", /* seconds */ event.recordingStats.recordedDurationNanos.toDouble() / 1000000.0 / 1000.0)
          map.putDouble("size", /* kB */ event.recordingStats.numBytesRecorded.toDouble() / 1000.0)
          onRecordCallback(map, null)
        }

        // reset the torch mode
        camera!!.cameraControl.enableTorch(torch == "on")
      }
    }
  })
}

@SuppressLint("RestrictedApi")
fun CameraView.pauseRecording() {
  if (videoCapture == null) {
    throw CameraNotReadyError()
  }
  if (activeVideoRecording == null) {
    throw NoRecordingInProgressError()
  }

  activeVideoRecording!!.pause()
}

@SuppressLint("RestrictedApi")
fun CameraView.resumeRecording() {
  if (videoCapture == null) {
    throw CameraNotReadyError()
  }
  if (activeVideoRecording == null) {
    throw NoRecordingInProgressError()
  }

  activeVideoRecording!!.resume()
}

@SuppressLint("RestrictedApi")
fun CameraView.stopRecording() {
  if (videoCapture == null) {
    throw CameraNotReadyError()
  }
  if (activeVideoRecording == null) {
    throw NoRecordingInProgressError()
  }

  activeVideoRecording!!.stop()

  // reset torch mode to original value
  camera!!.cameraControl.enableTorch(torch == "on")
}
@@ -1,114 +0,0 @@
package com.mrousavy.camera

import android.annotation.SuppressLint
import android.hardware.camera2.*
import android.util.Log
import androidx.camera.camera2.interop.Camera2CameraInfo
import androidx.camera.core.ImageCapture
import androidx.camera.core.ImageProxy
import androidx.exifinterface.media.ExifInterface
import com.facebook.react.bridge.Arguments
import com.facebook.react.bridge.ReadableMap
import com.facebook.react.bridge.WritableMap
import com.mrousavy.camera.utils.*
import kotlinx.coroutines.*
import java.io.File
import kotlin.system.measureTimeMillis

@SuppressLint("UnsafeOptInUsageError")
suspend fun CameraView.takePhoto(options: ReadableMap): WritableMap = coroutineScope {
  if (fallbackToSnapshot) {
    Log.i(CameraView.TAG, "takePhoto() called, but falling back to Snapshot because 1 use-case is already occupied.")
    return@coroutineScope takeSnapshot(options)
  }

  val startFunc = System.nanoTime()
  Log.i(CameraView.TAG, "takePhoto() called")
  if (imageCapture == null) {
    if (photo == true) {
      throw CameraNotReadyError()
    } else {
      throw PhotoNotEnabledError()
    }
  }

  if (options.hasKey("flash")) {
    val flashMode = options.getString("flash")
    imageCapture!!.flashMode = when (flashMode) {
      "on" -> ImageCapture.FLASH_MODE_ON
      "off" -> ImageCapture.FLASH_MODE_OFF
      "auto" -> ImageCapture.FLASH_MODE_AUTO
      else -> throw InvalidTypeScriptUnionError("flash", flashMode ?: "(null)")
    }
  }
  // All those options are not yet implemented - see https://github.com/mrousavy/react-native-vision-camera/issues/75
  if (options.hasKey("photoCodec")) {
    // TODO photoCodec
  }
  if (options.hasKey("qualityPrioritization")) {
    // TODO qualityPrioritization
  }
  if (options.hasKey("enableAutoRedEyeReduction")) {
    // TODO enableAutoRedEyeReduction
  }
  if (options.hasKey("enableDualCameraFusion")) {
    // TODO enableDualCameraFusion
  }
  if (options.hasKey("enableAutoStabilization")) {
    // TODO enableAutoStabilization
  }
  if (options.hasKey("enableAutoDistortionCorrection")) {
    // TODO enableAutoDistortionCorrection
  }
  val skipMetadata = if (options.hasKey("skipMetadata")) options.getBoolean("skipMetadata") else false

  val camera2Info = Camera2CameraInfo.from(camera!!.cameraInfo)
  val lensFacing = camera2Info.getCameraCharacteristic(CameraCharacteristics.LENS_FACING)

  val results = awaitAll(
    async(coroutineContext) {
      Log.d(CameraView.TAG, "Taking picture...")
      val startCapture = System.nanoTime()
      val pic = imageCapture!!.takePicture(takePhotoExecutor)
      val endCapture = System.nanoTime()
      Log.i(CameraView.TAG_PERF, "Finished image capture in ${(endCapture - startCapture) / 1_000_000}ms")
      pic
    },
    async(Dispatchers.IO) {
      Log.d(CameraView.TAG, "Creating temp file...")
      File.createTempFile("mrousavy", ".jpg", context.cacheDir).apply { deleteOnExit() }
    }
  )
  val photo = results.first { it is ImageProxy } as ImageProxy
  val file = results.first { it is File } as File

  val exif: ExifInterface?
  @Suppress("BlockingMethodInNonBlockingContext")
  withContext(Dispatchers.IO) {
    Log.d(CameraView.TAG, "Saving picture to ${file.absolutePath}...")
    val milliseconds = measureTimeMillis {
      val flipHorizontally = lensFacing == CameraCharacteristics.LENS_FACING_FRONT
      photo.save(file, flipHorizontally)
    }
    Log.i(CameraView.TAG_PERF, "Finished image saving in ${milliseconds}ms")
    // TODO: Read Exif from existing in-memory photo buffer instead of file?
    exif = if (skipMetadata) null else ExifInterface(file)
  }

  val map = Arguments.createMap()
  map.putString("path", file.absolutePath)
  map.putInt("width", photo.width)
  map.putInt("height", photo.height)
  map.putBoolean("isRawPhoto", photo.isRaw)

  val metadata = exif?.buildMetadataMap()
  map.putMap("metadata", metadata)

  photo.close()

  Log.d(CameraView.TAG, "Finished taking photo!")

  val endFunc = System.nanoTime()
  Log.i(CameraView.TAG_PERF, "Finished function execution in ${(endFunc - startFunc) / 1_000_000}ms")
  return@coroutineScope map
}
@@ -1,60 +0,0 @@
package com.mrousavy.camera

import android.graphics.Bitmap
import androidx.exifinterface.media.ExifInterface
import com.facebook.react.bridge.Arguments
import com.facebook.react.bridge.ReadableMap
import com.facebook.react.bridge.WritableMap
import com.mrousavy.camera.utils.buildMetadataMap
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.coroutineScope
import kotlinx.coroutines.withContext
import java.io.File
import java.io.FileOutputStream
import kotlinx.coroutines.guava.await

suspend fun CameraView.takeSnapshot(options: ReadableMap): WritableMap = coroutineScope {
  val camera = camera ?: throw com.mrousavy.camera.CameraNotReadyError()
  val enableFlash = options.getString("flash") == "on"

  try {
    if (enableFlash) {
      camera.cameraControl.enableTorch(true).await()
    }

    val bitmap = withContext(coroutineScope.coroutineContext) {
      previewView.bitmap ?: throw CameraNotReadyError()
    }

    val quality = if (options.hasKey("quality")) options.getInt("quality") else 100

    val file: File
    val exif: ExifInterface
    @Suppress("BlockingMethodInNonBlockingContext")
    withContext(Dispatchers.IO) {
      file = File.createTempFile("mrousavy", ".jpg", context.cacheDir).apply { deleteOnExit() }
      FileOutputStream(file).use { stream ->
        bitmap.compress(Bitmap.CompressFormat.JPEG, quality, stream)
      }
      exif = ExifInterface(file)
    }

    val map = Arguments.createMap()
    map.putString("path", file.absolutePath)
    map.putInt("width", bitmap.width)
    map.putInt("height", bitmap.height)
    map.putBoolean("isRawPhoto", false)

    val skipMetadata =
      if (options.hasKey("skipMetadata")) options.getBoolean("skipMetadata") else false
    val metadata = if (skipMetadata) null else exif.buildMetadataMap()
    map.putMap("metadata", metadata)

    return@coroutineScope map
  } finally {
    if (enableFlash) {
      // reset to `torch` property
      camera.cameraControl.enableTorch(this@takeSnapshot.torch == "on")
    }
  }
}
@@ -1,549 +0,0 @@
package com.mrousavy.camera

import android.Manifest
import android.annotation.SuppressLint
import android.content.Context
import android.content.pm.PackageManager
import android.content.res.Configuration
import android.hardware.camera2.*
import android.util.Log
import android.util.Range
import android.view.*
import android.view.View.OnTouchListener
import android.widget.FrameLayout
import androidx.camera.camera2.interop.Camera2Interop
import androidx.camera.core.*
import androidx.camera.core.impl.*
import androidx.camera.extensions.*
import androidx.camera.lifecycle.ProcessCameraProvider
import androidx.camera.video.*
import androidx.camera.video.VideoCapture
import androidx.camera.view.PreviewView
import androidx.core.content.ContextCompat
import androidx.lifecycle.*
import com.facebook.jni.HybridData
import com.facebook.proguard.annotations.DoNotStrip
import com.facebook.react.bridge.*
import com.facebook.react.uimanager.events.RCTEventEmitter
import com.mrousavy.camera.frameprocessor.FrameProcessorPerformanceDataCollector
import com.mrousavy.camera.frameprocessor.FrameProcessorRuntimeManager
import com.mrousavy.camera.utils.*
import kotlinx.coroutines.*
import kotlinx.coroutines.guava.await
import java.lang.IllegalArgumentException
import java.util.concurrent.ExecutorService
import java.util.concurrent.Executors
import kotlin.math.floor
import kotlin.math.max
import kotlin.math.min

//
// TODOs for the CameraView which are currently too hard to implement either because of CameraX' limitations, or my brain capacity.
//
// CameraView
// TODO: Actually use correct sizes for video and photo (currently it's both the video size)
// TODO: Configurable FPS higher than 30
// TODO: High-speed video recordings (export in CameraViewModule::getAvailableVideoDevices(), and set in CameraView::configurePreview()) (120FPS+)
// TODO: configureSession() enableDepthData
// TODO: configureSession() enableHighQualityPhotos
// TODO: configureSession() enablePortraitEffectsMatteDelivery
// TODO: configureSession() colorSpace

// CameraView+RecordVideo
// TODO: Better startRecording()/stopRecording() (promise + callback, wait for TurboModules/JSI)
// TODO: videoStabilizationMode
// TODO: Return Video size/duration

// CameraView+TakePhoto
// TODO: Mirror selfie images
// TODO: takePhoto() depth data
// TODO: takePhoto() raw capture
// TODO: takePhoto() photoCodec ("hevc" | "jpeg" | "raw")
// TODO: takePhoto() qualityPrioritization
// TODO: takePhoto() enableAutoRedEyeReduction
// TODO: takePhoto() enableAutoStabilization
// TODO: takePhoto() enableAutoDistortionCorrection
// TODO: takePhoto() return with jsi::Value Image reference for faster capture

@Suppress("KotlinJniMissingFunction") // I use fbjni, Android Studio is not smart enough to realize that.
@SuppressLint("ClickableViewAccessibility", "ViewConstructor")
class CameraView(context: Context, private val frameProcessorThread: ExecutorService) : FrameLayout(context), LifecycleOwner {
  companion object {
    const val TAG = "CameraView"
    const val TAG_PERF = "CameraView.performance"

    private val propsThatRequireSessionReconfiguration = arrayListOf("cameraId", "format", "fps", "hdr", "lowLightBoost", "photo", "video", "enableFrameProcessor")
    private val arrayListOfZoom = arrayListOf("zoom")
  }

  // react properties
  // props that require reconfiguring
  var cameraId: String? = null // this is actually not a react prop directly, but the result of setting device={}
  var enableDepthData = false
  var enableHighQualityPhotos: Boolean? = null
  var enablePortraitEffectsMatteDelivery = false
  // use-cases
  var photo: Boolean? = null
  var video: Boolean? = null
  var audio: Boolean? = null
  var enableFrameProcessor = false
  // props that require format reconfiguring
  var format: ReadableMap? = null
  var fps: Int? = null
  var hdr: Boolean? = null // nullable bool
  var colorSpace: String? = null
  var lowLightBoost: Boolean? = null // nullable bool
  // other props
  var isActive = false
  var torch = "off"
  var zoom: Float = 1f // in "factor"
  var orientation: String? = null
  var enableZoomGesture = false
    set(value) {
      field = value
      setOnTouchListener(if (value) touchEventListener else null)
    }
  var frameProcessorFps = 1.0
    set(value) {
      field = value
      actualFrameProcessorFps = if (value == -1.0) 30.0 else value
      lastFrameProcessorPerformanceEvaluation = System.currentTimeMillis()
      frameProcessorPerformanceDataCollector.clear()
    }

  // private properties
  private var isMounted = false
  private val reactContext: ReactContext
    get() = context as ReactContext

  @Suppress("JoinDeclarationAndAssignment")
  internal val previewView: PreviewView
  private val cameraExecutor = Executors.newSingleThreadExecutor()
  internal val takePhotoExecutor = Executors.newSingleThreadExecutor()
  internal val recordVideoExecutor = Executors.newSingleThreadExecutor()
  internal var coroutineScope = CoroutineScope(Dispatchers.Main)

  internal var camera: Camera? = null
  internal var imageCapture: ImageCapture? = null
  internal var videoCapture: VideoCapture<Recorder>? = null
  private var imageAnalysis: ImageAnalysis? = null
  private var preview: Preview? = null

  internal var activeVideoRecording: Recording? = null

  private var lastFrameProcessorCall = System.currentTimeMillis()

  private var extensionsManager: ExtensionsManager? = null

  private val scaleGestureListener: ScaleGestureDetector.SimpleOnScaleGestureListener
  private val scaleGestureDetector: ScaleGestureDetector
  private val touchEventListener: OnTouchListener

  private val lifecycleRegistry: LifecycleRegistry
  private var hostLifecycleState: Lifecycle.State

  private val inputRotation: Int
    get() {
      return context.displayRotation
    }
  private val outputRotation: Int
    get() {
      if (orientation != null) {
        // user is overriding output orientation
        return when (orientation!!) {
          "portrait" -> Surface.ROTATION_0
          "landscapeRight" -> Surface.ROTATION_90
          "portraitUpsideDown" -> Surface.ROTATION_180
          "landscapeLeft" -> Surface.ROTATION_270
          else -> throw InvalidTypeScriptUnionError("orientation", orientation!!)
        }
      } else {
        // use same as input rotation
        return inputRotation
      }
    }

  private var minZoom: Float = 1f
  private var maxZoom: Float = 1f

  private var actualFrameProcessorFps = 30.0
  private val frameProcessorPerformanceDataCollector = FrameProcessorPerformanceDataCollector()
  private var lastSuggestedFrameProcessorFps = 0.0
  private var lastFrameProcessorPerformanceEvaluation = System.currentTimeMillis()
  private val isReadyForNewEvaluation: Boolean
    get() {
      val lastPerformanceEvaluationElapsedTime = System.currentTimeMillis() - lastFrameProcessorPerformanceEvaluation
      return lastPerformanceEvaluationElapsedTime > 1000
    }

  @DoNotStrip
  private var mHybridData: HybridData? = null

  @Suppress("LiftReturnOrAssignment", "RedundantIf")
  internal val fallbackToSnapshot: Boolean
    @SuppressLint("UnsafeOptInUsageError")
    get() {
      if (video != true && !enableFrameProcessor) {
        // Both use-cases are disabled, so `photo` is the only use-case anyways. Don't need to fallback here.
        return false
      }
      cameraId?.let { cameraId ->
        val cameraManager = reactContext.getSystemService(Context.CAMERA_SERVICE) as? CameraManager
        cameraManager?.let {
          val characteristics = cameraManager.getCameraCharacteristics(cameraId)
          val hardwareLevel = characteristics.get(CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL)
          if (hardwareLevel == CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL_LEGACY) {
            // Camera only supports a single use-case at a time
            return true
          } else {
            if (video == true && enableFrameProcessor) {
              // Camera supports max. 2 use-cases, but both are occupied by `frameProcessor` and `video`
              return true
            } else {
              // Camera supports max. 2 use-cases and only one is occupied (either `frameProcessor` or `video`), so we can add `photo`
              return false
            }
          }
        }
      }
      return false
    }

  init {
    if (FrameProcessorRuntimeManager.enableFrameProcessors) {
      mHybridData = initHybrid()
    }

    previewView = PreviewView(context)
    previewView.layoutParams = LayoutParams(LayoutParams.MATCH_PARENT, LayoutParams.MATCH_PARENT)
    previewView.installHierarchyFitter() // If this is not called correctly, view finder will be black/blank
    addView(previewView)

    scaleGestureListener = object : ScaleGestureDetector.SimpleOnScaleGestureListener() {
      override fun onScale(detector: ScaleGestureDetector): Boolean {
        zoom = max(min((zoom * detector.scaleFactor), maxZoom), minZoom)
        update(arrayListOfZoom)
        return true
      }
    }
    scaleGestureDetector = ScaleGestureDetector(context, scaleGestureListener)
    touchEventListener = OnTouchListener { _, event -> return@OnTouchListener scaleGestureDetector.onTouchEvent(event) }

    hostLifecycleState = Lifecycle.State.INITIALIZED
    lifecycleRegistry = LifecycleRegistry(this)
    reactContext.addLifecycleEventListener(object : LifecycleEventListener {
      override fun onHostResume() {
        hostLifecycleState = Lifecycle.State.RESUMED
        updateLifecycleState()
        // workaround for https://issuetracker.google.com/issues/147354615, preview must be bound on resume
        update(propsThatRequireSessionReconfiguration)
      }
      override fun onHostPause() {
        hostLifecycleState = Lifecycle.State.CREATED
        updateLifecycleState()
      }
      override fun onHostDestroy() {
        hostLifecycleState = Lifecycle.State.DESTROYED
        updateLifecycleState()
        cameraExecutor.shutdown()
        takePhotoExecutor.shutdown()
        recordVideoExecutor.shutdown()
        reactContext.removeLifecycleEventListener(this)
      }
    })
  }

  override fun onConfigurationChanged(newConfig: Configuration?) {
    super.onConfigurationChanged(newConfig)
    updateOrientation()
  }

  @SuppressLint("RestrictedApi")
  private fun updateOrientation() {
    preview?.targetRotation = inputRotation
    imageCapture?.targetRotation = outputRotation
    videoCapture?.targetRotation = outputRotation
    imageAnalysis?.targetRotation = outputRotation
  }

  private external fun initHybrid(): HybridData
  private external fun frameProcessorCallback(frame: ImageProxy)

  override fun getLifecycle(): Lifecycle {
    return lifecycleRegistry
  }

  /**
   * Updates the custom Lifecycle to match the host activity's lifecycle, and if it's active we narrow it down to the [isActive] and [isAttachedToWindow] fields.
   */
  private fun updateLifecycleState() {
    val lifecycleBefore = lifecycleRegistry.currentState
    if (hostLifecycleState == Lifecycle.State.RESUMED) {
      // Host Lifecycle (Activity) is currently active (RESUMED), so we narrow it down to the view's lifecycle
      if (isActive && isAttachedToWindow) {
        lifecycleRegistry.currentState = Lifecycle.State.RESUMED
      } else {
        lifecycleRegistry.currentState = Lifecycle.State.CREATED
      }
    } else {
      // Host Lifecycle (Activity) is currently inactive (CREATED or DESTROYED), so that overrules our view's lifecycle
      lifecycleRegistry.currentState = hostLifecycleState
    }
    Log.d(TAG, "Lifecycle went from ${lifecycleBefore.name} -> ${lifecycleRegistry.currentState.name} (isActive: $isActive | isAttachedToWindow: $isAttachedToWindow)")
  }

  override fun onAttachedToWindow() {
    super.onAttachedToWindow()
    updateLifecycleState()
    if (!isMounted) {
      isMounted = true
      invokeOnViewReady()
    }
  }

  override fun onDetachedFromWindow() {
    super.onDetachedFromWindow()
    updateLifecycleState()
  }

  /**
   * Invalidate all React Props and reconfigure the device
   */
  fun update(changedProps: ArrayList<String>) = previewView.post {
    // TODO: Does this introduce too much overhead?
    // I need to .post on the previewView because it might've not been initialized yet
    // I need to use CoroutineScope.launch because of the suspend fun [configureSession]
    coroutineScope.launch {
      try {
        val shouldReconfigureSession = changedProps.containsAny(propsThatRequireSessionReconfiguration)
        val shouldReconfigureZoom = shouldReconfigureSession || changedProps.contains("zoom")
        val shouldReconfigureTorch = shouldReconfigureSession || changedProps.contains("torch")
        val shouldUpdateOrientation = shouldReconfigureSession || changedProps.contains("orientation")

        if (changedProps.contains("isActive")) {
          updateLifecycleState()
        }
        if (shouldReconfigureSession) {
          configureSession()
        }
        if (shouldReconfigureZoom) {
          val zoomClamped = max(min(zoom, maxZoom), minZoom)
          camera!!.cameraControl.setZoomRatio(zoomClamped)
        }
        if (shouldReconfigureTorch) {
          camera!!.cameraControl.enableTorch(torch == "on")
        }
        if (shouldUpdateOrientation) {
          updateOrientation()
        }
      } catch (e: Throwable) {
        Log.e(TAG, "update() threw: ${e.message}")
        invokeOnError(e)
      }
    }
  }

  /**
   * Configures the camera capture session. This should only be called when the camera device changes.
   */
  @SuppressLint("RestrictedApi")
  private suspend fun configureSession() {
    try {
      val startTime = System.currentTimeMillis()
      Log.i(TAG, "Configuring session...")
      if (ContextCompat.checkSelfPermission(context, Manifest.permission.CAMERA) != PackageManager.PERMISSION_GRANTED) {
        throw CameraPermissionError()
      }
      if (cameraId == null) {
        throw NoCameraDeviceError()
      }
      if (format != null)
        Log.i(TAG, "Configuring session with Camera ID $cameraId and custom format...")
      else
        Log.i(TAG, "Configuring session with Camera ID $cameraId and default format options...")

      // Used to bind the lifecycle of cameras to the lifecycle owner
      val cameraProvider = ProcessCameraProvider.getInstance(reactContext).await()

      var cameraSelector = CameraSelector.Builder().byID(cameraId!!).build()

      val tryEnableExtension: (suspend (extension: Int) -> Unit) = lambda@ { extension ->
        if (extensionsManager == null) {
          Log.i(TAG, "Initializing ExtensionsManager...")
          extensionsManager = ExtensionsManager.getInstanceAsync(context, cameraProvider).await()
        }
        if (extensionsManager!!.isExtensionAvailable(cameraSelector, extension)) {
          Log.i(TAG, "Enabling extension $extension...")
          cameraSelector = extensionsManager!!.getExtensionEnabledCameraSelector(cameraSelector, extension)
        } else {
          Log.e(TAG, "Extension $extension is not available for the given Camera!")
          throw when (extension) {
            ExtensionMode.HDR -> HdrNotContainedInFormatError()
            ExtensionMode.NIGHT -> LowLightBoostNotContainedInFormatError()
            else -> Error("Invalid extension supplied! Extension $extension is not available.")
          }
        }
      }

      val previewBuilder = Preview.Builder()
        .setTargetRotation(inputRotation)

      val imageCaptureBuilder = ImageCapture.Builder()
        .setTargetRotation(outputRotation)
        .setCaptureMode(ImageCapture.CAPTURE_MODE_MINIMIZE_LATENCY)

      val videoRecorderBuilder = Recorder.Builder()
        .setExecutor(cameraExecutor)

      val imageAnalysisBuilder = ImageAnalysis.Builder()
        .setTargetRotation(outputRotation)
        .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
        .setBackgroundExecutor(frameProcessorThread)

      if (format == null) {
        // let CameraX automatically find best resolution for the target aspect ratio
        Log.i(TAG, "No custom format has been set, CameraX will automatically determine best configuration...")
        val aspectRatio = aspectRatio(previewView.height, previewView.width) // flipped because it's in sensor orientation.
        previewBuilder.setTargetAspectRatio(aspectRatio)
        imageCaptureBuilder.setTargetAspectRatio(aspectRatio)
        // TODO: Aspect Ratio for Video Recorder?
        imageAnalysisBuilder.setTargetAspectRatio(aspectRatio)
      } else {
        // User has selected a custom format={}. Use that
        val format = DeviceFormat(format!!)
        Log.i(TAG, "Using custom format - photo: ${format.photoSize}, video: ${format.videoSize} @ $fps FPS")
        if (video == true) {
          previewBuilder.setTargetResolution(format.videoSize)
        } else {
          previewBuilder.setTargetResolution(format.photoSize)
        }
        imageCaptureBuilder.setTargetResolution(format.photoSize)
        imageAnalysisBuilder.setTargetResolution(format.photoSize)

        // TODO: Ability to select resolution exactly depending on format? Just like on iOS...
        when (min(format.videoSize.height, format.videoSize.width)) {
          in 0..480 -> videoRecorderBuilder.setQualitySelector(QualitySelector.from(Quality.SD))
          in 480..720 -> videoRecorderBuilder.setQualitySelector(QualitySelector.from(Quality.HD, FallbackStrategy.lowerQualityThan(Quality.HD)))
          in 720..1080 -> videoRecorderBuilder.setQualitySelector(QualitySelector.from(Quality.FHD, FallbackStrategy.lowerQualityThan(Quality.FHD)))
          in 1080..2160 -> videoRecorderBuilder.setQualitySelector(QualitySelector.from(Quality.UHD, FallbackStrategy.lowerQualityThan(Quality.UHD)))
          in 2160..4320 -> videoRecorderBuilder.setQualitySelector(QualitySelector.from(Quality.HIGHEST, FallbackStrategy.lowerQualityThan(Quality.HIGHEST)))
        }

        fps?.let { fps ->
          if (format.frameRateRanges.any { it.contains(fps) }) {
            // Camera supports the given FPS (frame rate range)
            // frame duration in nanoseconds; multiply before truncating so (1 / fps) doesn't round down to 0
            val frameDuration = (1_000_000_000.0 / fps.toDouble()).toLong()

            Log.i(TAG, "Setting AE_TARGET_FPS_RANGE to $fps-$fps, and SENSOR_FRAME_DURATION to $frameDuration")
            Camera2Interop.Extender(previewBuilder)
              .setCaptureRequestOption(CaptureRequest.CONTROL_AE_TARGET_FPS_RANGE, Range(fps, fps))
              .setCaptureRequestOption(CaptureRequest.SENSOR_FRAME_DURATION, frameDuration)
            // TODO: Frame Rate/FPS for Video Recorder?
          } else {
            throw FpsNotContainedInFormatError(fps)
          }
        }
        if (hdr == true) {
          tryEnableExtension(ExtensionMode.HDR)
        }
        if (lowLightBoost == true) {
          tryEnableExtension(ExtensionMode.NIGHT)
        }
      }

      // Unbind use cases before rebinding
      videoCapture = null
      imageCapture = null
      imageAnalysis = null
      cameraProvider.unbindAll()

      // Bind use cases to camera
      val useCases = ArrayList<UseCase>()
      if (video == true) {
        Log.i(TAG, "Adding VideoCapture use-case...")

        val videoRecorder = videoRecorderBuilder.build()
        videoCapture = VideoCapture.withOutput(videoRecorder)
        videoCapture!!.targetRotation = outputRotation
        useCases.add(videoCapture!!)
      }
      if (photo == true) {
        if (fallbackToSnapshot) {
          Log.i(TAG, "Tried to add photo use-case (`photo={true}`) but the Camera device only supports " +
            "a single use-case at a time. Falling back to Snapshot capture.")
        } else {
          Log.i(TAG, "Adding ImageCapture use-case...")
          imageCapture = imageCaptureBuilder.build()
          useCases.add(imageCapture!!)
        }
      }
      if (enableFrameProcessor) {
        Log.i(TAG, "Adding ImageAnalysis use-case...")
        imageAnalysis = imageAnalysisBuilder.build().apply {
          setAnalyzer(cameraExecutor) { image ->
            val now = System.currentTimeMillis()
            val intervalMs = (1.0 / actualFrameProcessorFps) * 1000.0
            if (now - lastFrameProcessorCall > intervalMs) {
              lastFrameProcessorCall = now

              val perfSample = frameProcessorPerformanceDataCollector.beginPerformanceSampleCollection()
              frameProcessorCallback(image)
              perfSample.endPerformanceSampleCollection()
            }
            image.close()

            if (isReadyForNewEvaluation) {
              // last evaluation was more than a second ago, evaluate again
              evaluateNewPerformanceSamples()
            }
          }
        }
        useCases.add(imageAnalysis!!)
      }

      preview = previewBuilder.build()
      Log.i(TAG, "Attaching ${useCases.size} use-cases...")
      camera = cameraProvider.bindToLifecycle(this, cameraSelector, preview, *useCases.toTypedArray())
      preview!!.setSurfaceProvider(previewView.surfaceProvider)

      minZoom = camera!!.cameraInfo.zoomState.value?.minZoomRatio ?: 1f
      maxZoom = camera!!.cameraInfo.zoomState.value?.maxZoomRatio ?: 1f

      val duration = System.currentTimeMillis() - startTime
      Log.i(TAG_PERF, "Session configured in $duration ms! Camera: ${camera!!}")
      invokeOnInitialized()
    } catch (exc: Throwable) {
      Log.e(TAG, "Failed to configure session: ${exc.message}")
      throw when (exc) {
        is CameraError -> exc
        is IllegalArgumentException -> {
          if (exc.message?.contains("too many use cases") == true) {
            ParallelVideoProcessingNotSupportedError(exc)
          } else {
            InvalidCameraDeviceError(exc)
          }
        }
        else -> UnknownCameraError(exc)
      }
    }
  }

private fun evaluateNewPerformanceSamples() {
|
||||
lastFrameProcessorPerformanceEvaluation = System.currentTimeMillis()
|
||||
val maxFrameProcessorFps = 30 // TODO: Get maxFrameProcessorFps from ImageAnalyser
|
||||
val averageFps = 1.0 / frameProcessorPerformanceDataCollector.averageExecutionTimeSeconds
|
||||
val suggestedFrameProcessorFps = floor(min(averageFps, maxFrameProcessorFps.toDouble()))
|
||||
|
||||
if (frameProcessorFps == -1.0) {
|
||||
// frameProcessorFps="auto"
|
||||
actualFrameProcessorFps = suggestedFrameProcessorFps
|
||||
} else {
|
||||
// frameProcessorFps={someCustomFpsValue}
|
||||
if (suggestedFrameProcessorFps != lastSuggestedFrameProcessorFps && suggestedFrameProcessorFps != frameProcessorFps) {
|
||||
invokeOnFrameProcessorPerformanceSuggestionAvailable(frameProcessorFps, suggestedFrameProcessorFps)
|
||||
lastSuggestedFrameProcessorFps = suggestedFrameProcessorFps
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
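The analyzer above drops frames by comparing the elapsed time since the last processed frame against an interval of `(1 / fps) * 1000` milliseconds. A minimal standalone sketch of that throttle logic (`FrameThrottle` is an illustrative name, not part of VisionCamera's API):

```java
// Minimal sketch of the frame-drop throttle used in the analyzer above.
// `FrameThrottle` is an illustrative name, not part of VisionCamera's API.
public class FrameThrottle {
    private final double intervalMs;
    private long lastCallMs = 0;

    public FrameThrottle(double targetFps) {
        // same formula as the analyzer: (1 / fps) * 1000 milliseconds between frames
        this.intervalMs = (1.0 / targetFps) * 1000.0;
    }

    /** Returns true if enough time has passed since the last accepted frame. */
    public boolean shouldProcess(long nowMs) {
        if (nowMs - lastCallMs > intervalMs) {
            lastCallMs = nowMs;
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        FrameThrottle throttle = new FrameThrottle(10.0); // at most ~1 frame per 100 ms
        System.out.println(throttle.shouldProcess(1000)); // true  (first frame)
        System.out.println(throttle.shouldProcess(1050)); // false (only 50 ms later)
        System.out.println(throttle.shouldProcess(1150)); // true  (150 ms after last accepted)
    }
}
```

Note that frames arriving too early are closed immediately rather than queued, so the Frame Processor never falls behind the camera.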
@@ -1,403 +0,0 @@
package com.mrousavy.camera

import android.Manifest
import android.content.Context
import android.content.pm.PackageManager
import android.hardware.camera2.CameraCharacteristics
import android.hardware.camera2.CameraManager
import android.os.Build
import android.util.Log
import android.util.Size
import androidx.camera.core.CameraSelector
import androidx.camera.extensions.ExtensionMode
import androidx.camera.extensions.ExtensionsManager
import androidx.camera.lifecycle.ProcessCameraProvider
import androidx.camera.video.QualitySelector
import androidx.core.content.ContextCompat
import com.facebook.react.bridge.*
import com.facebook.react.module.annotations.ReactModule
import com.facebook.react.modules.core.PermissionAwareActivity
import com.facebook.react.modules.core.PermissionListener
import com.facebook.react.uimanager.UIManagerHelper
import com.mrousavy.camera.frameprocessor.FrameProcessorRuntimeManager
import com.mrousavy.camera.parsers.*
import com.mrousavy.camera.utils.*
import kotlinx.coroutines.*
import kotlinx.coroutines.guava.await
import java.util.concurrent.ExecutorService
import java.util.concurrent.Executors

@ReactModule(name = CameraViewModule.TAG)
@Suppress("unused")
class CameraViewModule(reactContext: ReactApplicationContext) : ReactContextBaseJavaModule(reactContext) {
  companion object {
    const val TAG = "CameraView"
    var RequestCode = 10

    fun parsePermissionStatus(status: Int): String {
      return when (status) {
        PackageManager.PERMISSION_DENIED -> "denied"
        PackageManager.PERMISSION_GRANTED -> "authorized"
        else -> "not-determined"
      }
    }
  }

  var frameProcessorThread: ExecutorService = Executors.newSingleThreadExecutor()
  private val coroutineScope = CoroutineScope(Dispatchers.Default) // TODO: or Dispatchers.Main?
  private var frameProcessorManager: FrameProcessorRuntimeManager? = null

  private fun cleanup() {
    if (coroutineScope.isActive) {
      coroutineScope.cancel("CameraViewModule has been destroyed.")
    }
    frameProcessorManager = null
  }

  override fun initialize() {
    super.initialize()

    if (frameProcessorManager == null) {
      frameProcessorThread.execute {
        frameProcessorManager = FrameProcessorRuntimeManager(reactApplicationContext, frameProcessorThread)
      }
    }
  }

  override fun onCatalystInstanceDestroy() {
    super.onCatalystInstanceDestroy()
    cleanup()
  }

  override fun invalidate() {
    super.invalidate()
    cleanup()
  }

  override fun getName(): String {
    return TAG
  }

  private fun findCameraView(viewId: Int): CameraView {
    Log.d(TAG, "Finding view $viewId...")
    val view = if (reactApplicationContext != null) UIManagerHelper.getUIManager(reactApplicationContext, viewId)?.resolveView(viewId) as CameraView? else null
    Log.d(TAG, if (view != null) "Found view $viewId!" else "Couldn't find view $viewId!")
    return view ?: throw ViewNotFoundError(viewId)
  }

  @ReactMethod
  fun takePhoto(viewTag: Int, options: ReadableMap, promise: Promise) {
    coroutineScope.launch {
      withPromise(promise) {
        val view = findCameraView(viewTag)
        view.takePhoto(options)
      }
    }
  }

  @Suppress("unused")
  @ReactMethod
  fun takeSnapshot(viewTag: Int, options: ReadableMap, promise: Promise) {
    coroutineScope.launch {
      withPromise(promise) {
        val view = findCameraView(viewTag)
        view.takeSnapshot(options)
      }
    }
  }

  // TODO: startRecording() cannot be awaited, because I can't have a Promise and an onRecordedCallback in the same function. Hopefully TurboModules allows that
  @ReactMethod
  fun startRecording(viewTag: Int, options: ReadableMap, onRecordCallback: Callback) {
    coroutineScope.launch {
      val view = findCameraView(viewTag)
      try {
        view.startRecording(options, onRecordCallback)
      } catch (error: CameraError) {
        val map = makeErrorMap("${error.domain}/${error.id}", error.message, error)
        onRecordCallback(null, map)
      } catch (error: Throwable) {
        val map = makeErrorMap("capture/unknown", "An unknown error occurred while trying to start a video recording!", error)
        onRecordCallback(null, map)
      }
    }
  }

  @ReactMethod
  fun pauseRecording(viewTag: Int, promise: Promise) {
    withPromise(promise) {
      val view = findCameraView(viewTag)
      view.pauseRecording()
      return@withPromise null
    }
  }

  @ReactMethod
  fun resumeRecording(viewTag: Int, promise: Promise) {
    withPromise(promise) {
      val view = findCameraView(viewTag)
      view.resumeRecording()
      return@withPromise null
    }
  }

  @ReactMethod
  fun stopRecording(viewTag: Int, promise: Promise) {
    withPromise(promise) {
      val view = findCameraView(viewTag)
      view.stopRecording()
      return@withPromise null
    }
  }

  @ReactMethod
  fun focus(viewTag: Int, point: ReadableMap, promise: Promise) {
    coroutineScope.launch {
      withPromise(promise) {
        val view = findCameraView(viewTag)
        view.focus(point)
        return@withPromise null
      }
    }
  }

  // TODO: This uses the Camera2 API to list all characteristics of a camera device and therefore doesn't work with Camera1. Find a way to use CameraX for this
  // https://issuetracker.google.com/issues/179925896
  @ReactMethod
  fun getAvailableCameraDevices(promise: Promise) {
    val startTime = System.currentTimeMillis()
    coroutineScope.launch {
      withPromise(promise) {
        val cameraProvider = ProcessCameraProvider.getInstance(reactApplicationContext).await()
        val extensionsManager = ExtensionsManager.getInstanceAsync(reactApplicationContext, cameraProvider).await()

        val manager = reactApplicationContext.getSystemService(Context.CAMERA_SERVICE) as? CameraManager
          ?: throw CameraManagerUnavailableError()

        val cameraDevices: WritableArray = Arguments.createArray()

        manager.cameraIdList.filter { id -> id.toIntOrNull() != null }.forEach loop@{ id ->
          val cameraSelector = CameraSelector.Builder().byID(id).build()

          val characteristics = manager.getCameraCharacteristics(id)
          val hardwareLevel = characteristics.get(CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL)!!

          val capabilities = characteristics.get(CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES)!!
          val isMultiCam = Build.VERSION.SDK_INT >= Build.VERSION_CODES.P &&
            capabilities.contains(CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES_LOGICAL_MULTI_CAMERA)
          val deviceTypes = characteristics.getDeviceTypes()

          val cameraConfig = characteristics.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP)!!
          val lensFacing = characteristics.get(CameraCharacteristics.LENS_FACING)!!
          val hasFlash = characteristics.get(CameraCharacteristics.FLASH_INFO_AVAILABLE)!!
          val maxScalerZoom = characteristics.get(CameraCharacteristics.SCALER_AVAILABLE_MAX_DIGITAL_ZOOM)!!
          val supportsDepthCapture = Build.VERSION.SDK_INT >= Build.VERSION_CODES.M &&
            capabilities.contains(CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES_DEPTH_OUTPUT)
          val supportsRawCapture = capabilities.contains(CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES_RAW)
          val isoRange = characteristics.get(CameraCharacteristics.SENSOR_INFO_SENSITIVITY_RANGE)
          val digitalStabilizationModes = characteristics.get(CameraCharacteristics.CONTROL_AVAILABLE_VIDEO_STABILIZATION_MODES)
          val opticalStabilizationModes = characteristics.get(CameraCharacteristics.LENS_INFO_AVAILABLE_OPTICAL_STABILIZATION)
          val zoomRange = if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.R)
            characteristics.get(CameraCharacteristics.CONTROL_ZOOM_RATIO_RANGE)
          else null
          val name = if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.P)
            characteristics.get(CameraCharacteristics.INFO_VERSION)
          else null
          val fpsRanges = characteristics.get(CameraCharacteristics.CONTROL_AE_AVAILABLE_TARGET_FPS_RANGES)!!

          val supportsHdr = extensionsManager.isExtensionAvailable(cameraSelector, ExtensionMode.HDR)
          val supportsLowLightBoost = extensionsManager.isExtensionAvailable(cameraSelector, ExtensionMode.NIGHT)
          // see https://developer.android.com/reference/android/hardware/camera2/CameraDevice#regular-capture
          val supportsParallelVideoProcessing = hardwareLevel != CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL_LEGACY && hardwareLevel != CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL_LIMITED

          val fieldOfView = characteristics.getFieldOfView()

          val map = Arguments.createMap()
          map.putString("id", id)
          map.putArray("devices", deviceTypes)
          map.putString("position", parseLensFacing(lensFacing))
          map.putString("name", name ?: "${parseLensFacing(lensFacing)} ($id)")
          map.putBoolean("hasFlash", hasFlash)
          map.putBoolean("hasTorch", hasFlash)
          map.putBoolean("isMultiCam", isMultiCam)
          map.putBoolean("supportsParallelVideoProcessing", supportsParallelVideoProcessing)
          map.putBoolean("supportsRawCapture", supportsRawCapture)
          map.putBoolean("supportsDepthCapture", supportsDepthCapture)
          map.putBoolean("supportsLowLightBoost", supportsLowLightBoost)
          map.putBoolean("supportsFocus", true) // I believe every device here supports focusing
          if (zoomRange != null) {
            map.putDouble("minZoom", zoomRange.lower.toDouble())
            map.putDouble("maxZoom", zoomRange.upper.toDouble())
          } else {
            map.putDouble("minZoom", 1.0)
            map.putDouble("maxZoom", maxScalerZoom.toDouble())
          }
          map.putDouble("neutralZoom", 1.0)

          val supportedVideoResolutions: List<Size>
          val cameraInfos = cameraSelector.filter(cameraProvider.availableCameraInfos)
          if (cameraInfos.size > 0) {
            supportedVideoResolutions = QualitySelector
              .getSupportedQualities(cameraInfos[0])
              .map { QualitySelector.getResolution(cameraInfos[0], it)!! }
          } else {
            supportedVideoResolutions = emptyList()
          }

          // TODO: Optimize?
          val maxImageOutputSize = cameraConfig.outputFormats
            .flatMap { cameraConfig.getOutputSizes(it).toList() }
            .maxByOrNull { it.width * it.height }!!

          val formats = Arguments.createArray()

          cameraConfig.outputFormats.forEach { formatId ->
            val formatName = parseImageFormat(formatId)

            cameraConfig.getOutputSizes(formatId).forEach { size ->
              val isHighestPhotoQualitySupported = areUltimatelyEqual(size, maxImageOutputSize)

              // Get the number of seconds that each frame will take to process
              val secondsPerFrame = try {
                cameraConfig.getOutputMinFrameDuration(formatId, size) / 1_000_000_000.0
              } catch (error: Throwable) {
                Log.e(TAG, "Minimum Frame Duration for MediaRecorder Output cannot be calculated, format \"$formatName\" is not supported.")
                null
              }

              val frameRateRanges = Arguments.createArray()
              if (secondsPerFrame != null && secondsPerFrame > 0) {
                val fps = (1.0 / secondsPerFrame).toInt()
                val frameRateRange = Arguments.createMap()
                frameRateRange.putInt("minFrameRate", 1)
                frameRateRange.putInt("maxFrameRate", fps)
                frameRateRanges.pushMap(frameRateRange)
              }
              fpsRanges.forEach { range ->
                val frameRateRange = Arguments.createMap()
                frameRateRange.putInt("minFrameRate", range.lower)
                frameRateRange.putInt("maxFrameRate", range.upper)
                frameRateRanges.pushMap(frameRateRange)
              }

              val colorSpaces = Arguments.createArray()
              colorSpaces.pushString(formatName)

              val videoStabilizationModes = Arguments.createArray()
              videoStabilizationModes.pushString("off")
              if (digitalStabilizationModes != null) {
                if (digitalStabilizationModes.contains(CameraCharacteristics.CONTROL_VIDEO_STABILIZATION_MODE_ON)) {
                  videoStabilizationModes.pushString("auto")
                  videoStabilizationModes.pushString("standard")
                }
              }
              if (opticalStabilizationModes != null) {
                if (opticalStabilizationModes.contains(CameraCharacteristics.LENS_OPTICAL_STABILIZATION_MODE_ON)) {
                  videoStabilizationModes.pushString("cinematic")
                }
              }

              // TODO: Get the pixel format programmatically rather than assuming a default of 420v
              val pixelFormat = "420v"

              val format = Arguments.createMap()
              format.putDouble("photoHeight", size.height.toDouble())
              format.putDouble("photoWidth", size.width.toDouble())
              // since supportedVideoResolutions is sorted from highest resolution to lowest,
              // videoResolution will be the highest supported video resolution lower than or equal to photo resolution
              // TODO: Somehow integrate with CamcorderProfileProxy?
              val videoResolution = supportedVideoResolutions.find { it.width <= size.width && it.height <= size.height }
              format.putDouble("videoHeight", videoResolution?.height?.toDouble())
              format.putDouble("videoWidth", videoResolution?.width?.toDouble())
              format.putBoolean("isHighestPhotoQualitySupported", isHighestPhotoQualitySupported)
              format.putInt("maxISO", isoRange?.upper)
              format.putInt("minISO", isoRange?.lower)
              format.putDouble("fieldOfView", fieldOfView) // TODO: Revisit getAvailableCameraDevices (is fieldOfView accurate?)
              format.putDouble("maxZoom", (zoomRange?.upper ?: maxScalerZoom).toDouble())
              format.putArray("colorSpaces", colorSpaces)
              format.putBoolean("supportsVideoHDR", false) // TODO: supportsVideoHDR
              format.putBoolean("supportsPhotoHDR", supportsHdr)
              format.putArray("frameRateRanges", frameRateRanges)
              format.putString("autoFocusSystem", "none") // TODO: Revisit getAvailableCameraDevices (autoFocusSystem) (CameraCharacteristics.CONTROL_AF_AVAILABLE_MODES or CameraCharacteristics.LENS_INFO_FOCUS_DISTANCE_CALIBRATION)
              format.putArray("videoStabilizationModes", videoStabilizationModes)
              format.putString("pixelFormat", pixelFormat)
              formats.pushMap(format)
            }
          }

          map.putArray("formats", formats)
          cameraDevices.pushMap(map)
        }

        val difference = System.currentTimeMillis() - startTime
        Log.w(TAG, "CameraViewModule::getAvailableCameraDevices took: $difference ms")
        return@withPromise cameraDevices
      }
    }
  }

  @ReactMethod
  fun getCameraPermissionStatus(promise: Promise) {
    val status = ContextCompat.checkSelfPermission(reactApplicationContext, Manifest.permission.CAMERA)
    promise.resolve(parsePermissionStatus(status))
  }

  @ReactMethod
  fun getMicrophonePermissionStatus(promise: Promise) {
    val status = ContextCompat.checkSelfPermission(reactApplicationContext, Manifest.permission.RECORD_AUDIO)
    promise.resolve(parsePermissionStatus(status))
  }

  @ReactMethod
  fun requestCameraPermission(promise: Promise) {
    if (Build.VERSION.SDK_INT < Build.VERSION_CODES.M) {
      // API 22 and below always grant permissions on app install
      return promise.resolve("authorized")
    }

    val activity = reactApplicationContext.currentActivity
    if (activity is PermissionAwareActivity) {
      val currentRequestCode = RequestCode++
      val listener = PermissionListener { requestCode: Int, _: Array<String>, grantResults: IntArray ->
        if (requestCode == currentRequestCode) {
          val permissionStatus = if (grantResults.isNotEmpty()) grantResults[0] else PackageManager.PERMISSION_DENIED
          promise.resolve(parsePermissionStatus(permissionStatus))
          return@PermissionListener true
        }
        return@PermissionListener false
      }
      activity.requestPermissions(arrayOf(Manifest.permission.CAMERA), currentRequestCode, listener)
    } else {
      promise.reject("NO_ACTIVITY", "No PermissionAwareActivity was found! Make sure the app has launched before calling this function.")
    }
  }

  @ReactMethod
  fun requestMicrophonePermission(promise: Promise) {
    if (Build.VERSION.SDK_INT < Build.VERSION_CODES.M) {
      // API 22 and below always grant permissions on app install
      return promise.resolve("authorized")
    }

    val activity = reactApplicationContext.currentActivity
    if (activity is PermissionAwareActivity) {
      val currentRequestCode = RequestCode++
      val listener = PermissionListener { requestCode: Int, _: Array<String>, grantResults: IntArray ->
        if (requestCode == currentRequestCode) {
          val permissionStatus = if (grantResults.isNotEmpty()) grantResults[0] else PackageManager.PERMISSION_DENIED
          promise.resolve(parsePermissionStatus(permissionStatus))
          return@PermissionListener true
        }
        return@PermissionListener false
      }
      activity.requestPermissions(arrayOf(Manifest.permission.RECORD_AUDIO), currentRequestCode, listener)
    } else {
      promise.reject("NO_ACTIVITY", "No PermissionAwareActivity was found! Make sure the app has launched before calling this function.")
    }
  }
}
@@ -1,112 +0,0 @@
package com.mrousavy.camera

import android.graphics.ImageFormat

abstract class CameraError(
  /**
   * The domain of the error. Error domains are used to group errors.
   *
   * Example: "permission"
   */
  val domain: String,
  /**
   * The id of the error. Errors are uniquely identified under a given domain.
   *
   * Example: "microphone-permission-denied"
   */
  val id: String,
  /**
   * A detailed error description of "what went wrong".
   *
   * Example: "The microphone permission was denied!"
   */
  message: String,
  /**
   * A throwable that caused this error.
   */
  cause: Throwable? = null
) : Throwable("[$domain/$id] $message", cause)

val CameraError.code: String
  get() = "$domain/$id"

class MicrophonePermissionError : CameraError("permission", "microphone-permission-denied", "The Microphone permission was denied! If you want to record Video without sound, pass `audio={false}`.")
class CameraPermissionError : CameraError("permission", "camera-permission-denied", "The Camera permission was denied!")

class InvalidTypeScriptUnionError(unionName: String, unionValue: String) : CameraError("parameter", "invalid-parameter", "The given value for $unionName could not be parsed! (Received: $unionValue)")

class NoCameraDeviceError : CameraError("device", "no-device", "No device was set! Use `getAvailableCameraDevices()` to select a suitable Camera device.")
class InvalidCameraDeviceError(cause: Throwable) : CameraError("device", "invalid-device", "The given Camera device could not be found for use-case binding!", cause)
class ParallelVideoProcessingNotSupportedError(cause: Throwable) : CameraError("device", "parallel-video-processing-not-supported", "The given LEGACY Camera device does not support parallel " +
  "video processing (`video={true}` + `frameProcessor={...}`). Disable either `video` or `frameProcessor`. To find out if a device supports parallel video processing, check the `supportsParallelVideoProcessing` property on the CameraDevice. " +
  "See https://react-native-vision-camera.com/docs/guides/devices#the-supportsparallelvideoprocessing-prop for more information.", cause)

class FpsNotContainedInFormatError(fps: Int) : CameraError("format", "invalid-fps", "The given FPS is not valid for the currently selected format. Make sure you select a format whose `frameRateRanges` include $fps FPS!")
class HdrNotContainedInFormatError : CameraError(
  "format", "invalid-hdr",
  "The currently selected format does not support HDR capture! " +
    "Make sure you select a format which includes `supportsPhotoHDR`!"
)
class LowLightBoostNotContainedInFormatError : CameraError(
  "format", "invalid-low-light-boost",
  "The currently selected format does not support low-light boost (night mode)! " +
    "Make sure you select a format which includes `supportsLowLightBoost`."
)

class CameraNotReadyError : CameraError("session", "camera-not-ready", "The Camera is not ready yet! Wait for the onInitialized() callback!")

class VideoNotEnabledError : CameraError("capture", "video-not-enabled", "Video capture is disabled! Pass `video={true}` to enable video recordings.")
class PhotoNotEnabledError : CameraError("capture", "photo-not-enabled", "Photo capture is disabled! Pass `photo={true}` to enable photo capture.")

class InvalidFormatError(format: Int) : CameraError("capture", "invalid-photo-format", "The Photo has an invalid format! Expected ${ImageFormat.YUV_420_888}, actual: $format")

class VideoEncoderError(cause: Throwable?) : CameraError("capture", "encoder-error", "The recording failed while encoding.\n" +
  "This error may be generated when the video or audio codec encounters an error during encoding. " +
  "When this happens and the output file is generated, the output file is not properly constructed. " +
  "The application will need to clean up the output file, such as deleting the file.",
  cause)

class InvalidVideoOutputOptionsError(cause: Throwable?) : CameraError("capture", "invalid-video-options",
  "The recording failed due to invalid output options.\n" +
    "This error is generated when invalid output options have been used while preparing a recording.",
  cause)

class RecorderError(cause: Throwable?) : CameraError("capture", "recorder-error",
  "The recording failed because the Recorder is in an unrecoverable error state.\n" +
    "When this happens and the output file is generated, the output file is not properly constructed. " +
    "The application will need to clean up the output file, such as deleting the file. " +
    "Such an error will usually require creating a new Recorder object to start a new recording.",
  cause)

class NoValidDataError(cause: Throwable?) : CameraError("capture", "no-valid-data",
  "The recording failed because no valid data was produced to be recorded.\n" +
    "This error is generated when the essential data for a recording to be played correctly is missing, for example, " +
    "a recording must contain at least one key frame. The application will need to clean up the output file, such as deleting the file.",
  cause)

class InactiveSourceError(cause: Throwable?) : CameraError("capture", "inactive-source",
  "The recording failed because the source became inactive and stopped sending frames.\n" +
    "One such case is the camera being closed because its lifecycle stopped: the active recording will be finalized with this error, " +
    "and the output will be generated, containing the frames produced before the camera closed. " +
    "Attempting to start a new recording will be finalized immediately if the source remains inactive, and no output will be generated.",
  cause)

class InsufficientStorageError(cause: Throwable?) : CameraError("capture", "insufficient-storage",
  "The recording failed due to insufficient storage space.\n" +
    "There are two possible cases that will cause this error:\n" +
    "1. The storage is already full before the recording starts, so no output file will be generated.\n" +
    "2. The storage becomes full during recording, so the output file will be generated.",
  cause)

class FileSizeLimitReachedError(cause: Throwable?) : CameraError("capture", "file-size-limit-reached",
  "The recording failed due to the file size limitation.\n" +
    "The file size limitation refers to OutputOptions.getFileSizeLimit(). The output file will still be generated with this error.",
  cause)

class NoRecordingInProgressError : CameraError("capture", "no-recording-in-progress", "No active recording in progress!")

class CameraManagerUnavailableError : CameraError("system", "no-camera-manager", "The Camera manager instance was unavailable for the current Application!")
class ViewNotFoundError(viewId: Int) : CameraError("system", "view-not-found", "The given view (ID $viewId) was not found in the view manager.")

class UnknownCameraError(cause: Throwable?) : CameraError("unknown", "unknown", cause?.message ?: "An unknown camera error occurred.", cause)
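The `[domain/id] message` convention above is what JS ultimately receives as an error code (for example `"permission/camera-permission-denied"`). A standalone sketch of that convention, where `DemoError` is a hypothetical stand-in for `CameraError`, not the real class:

```java
// Standalone sketch of the error-code convention above: errors are grouped
// by domain, identified by id, and the message embeds both as "[domain/id]".
// `DemoError` is a hypothetical stand-in for CameraError, not the real class.
public class DemoError extends Exception {
    private final String domain;
    private final String id;

    public DemoError(String domain, String id, String message) {
        super("[" + domain + "/" + id + "] " + message);
        this.domain = domain;
        this.id = id;
    }

    /** The code JS receives, e.g. "permission/camera-permission-denied". */
    public String getCode() {
        return domain + "/" + id;
    }

    public static void main(String[] args) {
        DemoError error = new DemoError("permission", "camera-permission-denied", "The Camera permission was denied!");
        System.out.println(error.getCode());    // permission/camera-permission-denied
        System.out.println(error.getMessage()); // [permission/camera-permission-denied] The Camera permission was denied!
    }
}
```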
@@ -1,38 +0,0 @@
package com.mrousavy.camera.frameprocessor

data class PerformanceSampleCollection(val endPerformanceSampleCollection: () -> Unit)

// keep a maximum of `maxSampleSize` historical performance data samples cached.
private const val maxSampleSize = 15

class FrameProcessorPerformanceDataCollector {
  private var counter = 0
  private var performanceSamples: ArrayList<Double> = ArrayList()

  val averageExecutionTimeSeconds: Double
    get() = performanceSamples.average()

  fun beginPerformanceSampleCollection(): PerformanceSampleCollection {
    val begin = System.currentTimeMillis()

    return PerformanceSampleCollection {
      val end = System.currentTimeMillis()
      val seconds = (end - begin) / 1_000.0

      val index = counter % maxSampleSize

      if (performanceSamples.size > index) {
        performanceSamples[index] = seconds
      } else {
        performanceSamples.add(seconds)
      }

      counter++
    }
  }

  fun clear() {
    counter = 0
    performanceSamples.clear()
  }
}
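The `averageExecutionTimeSeconds` maintained by this collector feeds the FPS suggestion in `evaluateNewPerformanceSamples()`: the average seconds-per-frame is inverted into frames-per-second, then floored and capped at the analyzer's maximum. A standalone sketch of that calculation (`suggestFps` is a hypothetical helper, not part of VisionCamera's API):

```java
// Standalone sketch of the FPS-suggestion math: execution times (seconds per
// frame) are averaged, inverted into frames-per-second, and capped at the
// maximum FPS the ImageAnalyzer can deliver. `suggestFps` is a hypothetical
// helper, not part of VisionCamera's API.
public class FpsSuggestion {
    public static double suggestFps(double[] executionTimesSeconds, int maxFps) {
        double sum = 0;
        for (double t : executionTimesSeconds) sum += t;
        double averageSeconds = sum / executionTimesSeconds.length;
        double averageFps = 1.0 / averageSeconds;
        return Math.floor(Math.min(averageFps, maxFps));
    }

    public static void main(String[] args) {
        // ~80 ms per frame -> the processor can sustain ~12 FPS
        System.out.println(suggestFps(new double[] { 0.08, 0.08, 0.08 }, 30)); // 12.0
        // ~10 ms per frame -> capped at the 30 FPS maximum
        System.out.println(suggestFps(new double[] { 0.01, 0.01, 0.01 }, 30)); // 30.0
    }
}
```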
@@ -1,53 +0,0 @@
package com.mrousavy.camera.frameprocessor;

import androidx.annotation.Keep;
import androidx.annotation.NonNull;
import androidx.annotation.Nullable;
import androidx.camera.core.ImageProxy;
import com.facebook.proguard.annotations.DoNotStrip;

/**
 * Declares a Frame Processor Plugin.
 */
@DoNotStrip
@Keep
public abstract class FrameProcessorPlugin {
  private final @NonNull String mName;

  /**
   * The actual Frame Processor plugin callback. Called for every frame the ImageAnalyzer receives.
   * @param image The CameraX ImageProxy. Don't call .close() on this, as VisionCamera handles that.
   * @return You can return any primitive, map or array you want. See the
   * <a href="https://react-native-vision-camera.com/docs/guides/frame-processors-plugins-overview#types">Types</a>
   * table for a list of supported types.
   */
  @DoNotStrip
  @Keep
  public abstract @Nullable Object callback(@NonNull ImageProxy image, @NonNull Object[] params);

  /**
   * Initializes the native plugin part.
   * @param name Specifies the Frame Processor Plugin's name in the Runtime.
   *             The actual name in the JS Runtime will be prefixed with two underscores (`__`).
   */
  protected FrameProcessorPlugin(@NonNull String name) {
    mName = name;
  }

  /**
   * Get the user-defined name of the Frame Processor Plugin.
   */
  @DoNotStrip
  @Keep
  public @NonNull String getName() {
    return mName;
  }

  /**
   * Registers the given plugin in the Frame Processor Runtime.
   * @param plugin An instance of a plugin.
   */
  public static void register(@NonNull FrameProcessorPlugin plugin) {
    FrameProcessorRuntimeManager.Companion.getPlugins().add(plugin);
  }
}
@@ -1,79 +0,0 @@
package com.mrousavy.camera.frameprocessor

import android.util.Log
import androidx.annotation.Keep
import com.facebook.jni.HybridData
import com.facebook.proguard.annotations.DoNotStrip
import com.facebook.react.bridge.ReactApplicationContext
import com.facebook.react.turbomodule.core.CallInvokerHolderImpl
import com.facebook.react.uimanager.UIManagerHelper
import com.mrousavy.camera.CameraView
import com.mrousavy.camera.ViewNotFoundError
import java.lang.ref.WeakReference
import java.util.concurrent.ExecutorService

@Suppress("KotlinJniMissingFunction") // I use fbjni, Android Studio is not smart enough to realize that.
class FrameProcessorRuntimeManager(context: ReactApplicationContext, frameProcessorThread: ExecutorService) {
  companion object {
    const val TAG = "FrameProcessorRuntime"
    val Plugins: ArrayList<FrameProcessorPlugin> = ArrayList()
    var enableFrameProcessors = true

    init {
      try {
        System.loadLibrary("reanimated")
        System.loadLibrary("VisionCamera")
      } catch (e: UnsatisfiedLinkError) {
        Log.w(TAG, "Failed to load Reanimated/VisionCamera C++ library. Frame Processors are disabled!")
        enableFrameProcessors = false
      }
    }
  }

  @DoNotStrip
  private var mHybridData: HybridData? = null
  private var mContext: WeakReference<ReactApplicationContext>? = null
  private var mScheduler: VisionCameraScheduler? = null

  init {
    if (enableFrameProcessors) {
      val holder = context.catalystInstance.jsCallInvokerHolder as CallInvokerHolderImpl
      mScheduler = VisionCameraScheduler(frameProcessorThread)
      mContext = WeakReference(context)
      mHybridData = initHybrid(context.javaScriptContextHolder.get(), holder, mScheduler!!)
      initializeRuntime()

      Log.i(TAG, "Installing Frame Processor Plugins...")
      Plugins.forEach { plugin ->
        registerPlugin(plugin)
      }
      Log.i(TAG, "Successfully installed ${Plugins.count()} Frame Processor Plugins!")

      Log.i(TAG, "Installing JSI Bindings on JS Thread...")
      context.runOnJSQueueThread {
        installJSIBindings()
      }
    }
  }

  @Suppress("unused")
  @DoNotStrip
  @Keep
  fun findCameraViewById(viewId: Int): CameraView {
    Log.d(TAG, "Finding view $viewId...")
    val ctx = mContext?.get()
    val view = if (ctx != null) UIManagerHelper.getUIManager(ctx, viewId)?.resolveView(viewId) as CameraView? else null
    Log.d(TAG, if (view != null) "Found view $viewId!" else "Couldn't find view $viewId!")
    return view ?: throw ViewNotFoundError(viewId)
  }

  // private C++ funcs
  private external fun initHybrid(
    jsContext: Long,
    jsCallInvokerHolder: CallInvokerHolderImpl,
    scheduler: VisionCameraScheduler
  ): HybridData
  private external fun initializeRuntime()
  private external fun registerPlugin(plugin: FrameProcessorPlugin)
  private external fun installJSIBindings()
}
@@ -1,42 +0,0 @@
package com.mrousavy.camera.frameprocessor;

import android.annotation.SuppressLint;
import android.media.Image;

import androidx.annotation.Keep;
import androidx.camera.core.ImageProxy;
import com.facebook.proguard.annotations.DoNotStrip;

@SuppressWarnings("unused") // used through JNI
@DoNotStrip
@Keep
public class ImageProxyUtils {
  @SuppressLint("UnsafeOptInUsageError")
  @DoNotStrip
  @Keep
  public static boolean isImageProxyValid(ImageProxy imageProxy) {
    try {
      Image image = imageProxy.getImage();
      if (image == null) return false;
      // will throw an exception if the image is already closed
      imageProxy.getImage().getCropRect();
      // no exception thrown, image must still be valid.
      return true;
    } catch (Exception e) {
      // exception thrown, image has already been closed.
      return false;
    }
  }

  @DoNotStrip
  @Keep
  public static int getPlanesCount(ImageProxy imageProxy) {
    return imageProxy.getPlanes().length;
  }

  @DoNotStrip
  @Keep
  public static int getBytesPerRow(ImageProxy imageProxy) {
    return imageProxy.getPlanes()[0].getRowStride();
  }
}
@@ -1,48 +0,0 @@
package com.mrousavy.camera.parsers

import android.graphics.ImageFormat

/**
 * Parses ImageFormat/PixelFormat int to a string representation useable for the TypeScript types.
 */
fun parseImageFormat(imageFormat: Int): String {
  return when (imageFormat) {
    ImageFormat.YUV_420_888 -> "yuv"
    ImageFormat.YUV_422_888 -> "yuv"
    ImageFormat.YUV_444_888 -> "yuv"
    ImageFormat.JPEG -> "jpeg"
    ImageFormat.DEPTH_JPEG -> "jpeg-depth"
    ImageFormat.RAW_SENSOR -> "raw"
    ImageFormat.RAW_PRIVATE -> "raw"
    ImageFormat.HEIC -> "heic"
    ImageFormat.PRIVATE -> "private"
    ImageFormat.DEPTH16 -> "depth-16"
    else -> "unknown"
    /*
    ImageFormat.UNKNOWN -> "TODOFILL"
    ImageFormat.RGB_565 -> "TODOFILL"
    ImageFormat.YV12 -> "TODOFILL"
    ImageFormat.Y8 -> "TODOFILL"
    ImageFormat.NV16 -> "TODOFILL"
    ImageFormat.NV21 -> "TODOFILL"
    ImageFormat.YUY2 -> "TODOFILL"
    ImageFormat.FLEX_RGB_888 -> "TODOFILL"
    ImageFormat.FLEX_RGBA_8888 -> "TODOFILL"
    ImageFormat.RAW10 -> "TODOFILL"
    ImageFormat.RAW12 -> "TODOFILL"
    ImageFormat.DEPTH_POINT_CLOUD -> "TODOFILL"
    @Suppress("DUPLICATE_LABEL_IN_WHEN")
    PixelFormat.UNKNOWN -> "TODOFILL"
    PixelFormat.TRANSPARENT -> "TODOFILL"
    PixelFormat.TRANSLUCENT -> "TODOFILL"
    PixelFormat.RGBX_8888 -> "TODOFILL"
    PixelFormat.RGBA_F16 -> "TODOFILL"
    PixelFormat.RGBA_8888 -> "TODOFILL"
    PixelFormat.RGBA_1010102 -> "TODOFILL"
    PixelFormat.OPAQUE -> "TODOFILL"
    @Suppress("DUPLICATE_LABEL_IN_WHEN")
    PixelFormat.RGB_565 -> "TODOFILL"
    PixelFormat.RGB_888 -> "TODOFILL"
    */
  }
}
@@ -1,15 +0,0 @@
package com.mrousavy.camera.parsers

import android.hardware.camera2.CameraCharacteristics

/**
 * Parses Lens Facing int to a string representation useable for the TypeScript types.
 */
fun parseLensFacing(lensFacing: Int?): String? {
  return when (lensFacing) {
    CameraCharacteristics.LENS_FACING_BACK -> "back"
    CameraCharacteristics.LENS_FACING_FRONT -> "front"
    CameraCharacteristics.LENS_FACING_EXTERNAL -> "external"
    else -> null
  }
}
@@ -1,20 +0,0 @@
package com.mrousavy.camera.parsers

import android.util.Size
import android.util.SizeF
import kotlin.math.max
import kotlin.math.min

val Size.bigger: Int
  get() = max(this.width, this.height)
val Size.smaller: Int
  get() = min(this.width, this.height)

val SizeF.bigger: Float
  get() = max(this.width, this.height)
val SizeF.smaller: Float
  get() = min(this.width, this.height)

fun areUltimatelyEqual(size1: Size, size2: Size): Boolean {
  return size1.width * size1.height == size2.width * size2.height
}
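The `Size` extensions removed above are simple orientation-agnostic helpers. A minimal pure-Kotlin sketch of the same logic, with `Pair<width, height>` standing in for `android.util.Size` (the function names below are stand-ins, not part of the library):

```kotlin
import kotlin.math.max
import kotlin.math.min

// Longer and shorter side of a size, regardless of orientation.
fun bigger(size: Pair<Int, Int>): Int = max(size.first, size.second)
fun smaller(size: Pair<Int, Int>): Int = min(size.first, size.second)

// Two sizes are "ultimately equal" when they cover the same pixel area,
// e.g. 1920x1080 (landscape) and 1080x1920 (portrait).
fun areUltimatelyEqual(a: Pair<Int, Int>, b: Pair<Int, Int>): Boolean =
    a.first * a.second == b.first * b.second

fun main() {
    println(bigger(1920 to 1080))                           // 1920
    println(areUltimatelyEqual(1920 to 1080, 1080 to 1920)) // true
}
```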
@@ -1,28 +0,0 @@
package com.mrousavy.camera.utils

import androidx.camera.core.AspectRatio
import kotlin.math.abs
import kotlin.math.max
import kotlin.math.min

private const val RATIO_4_3_VALUE = 4.0 / 3.0
private const val RATIO_16_9_VALUE = 16.0 / 9.0

/**
 * [androidx.camera.core.ImageAnalysisConfig] requires enum value of
 * [androidx.camera.core.AspectRatio]. Currently it has values of 4:3 & 16:9.
 *
 * Detecting the most suitable ratio for dimensions provided in @params by counting absolute
 * of preview ratio to one of the provided values.
 *
 * @param width - preview width
 * @param height - preview height
 * @return suitable aspect ratio
 */
fun aspectRatio(width: Int, height: Int): Int {
  val previewRatio = max(width, height).toDouble() / min(width, height)
  if (abs(previewRatio - RATIO_4_3_VALUE) <= abs(previewRatio - RATIO_16_9_VALUE)) {
    return AspectRatio.RATIO_4_3
  }
  return AspectRatio.RATIO_16_9
}
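The `aspectRatio` helper above picks whichever CameraX ratio is numerically closest to the preview's own side ratio. A minimal standalone sketch of that nearest-ratio selection, with plain `Int` results standing in for the `AspectRatio` enum (the constants and `closestAspectRatio` name are assumptions for illustration):

```kotlin
import kotlin.math.abs
import kotlin.math.max
import kotlin.math.min

// Stand-ins for androidx.camera.core.AspectRatio.RATIO_4_3 / RATIO_16_9.
val RATIO_4_3 = 0
val RATIO_16_9 = 1

fun closestAspectRatio(width: Int, height: Int): Int {
    // Orientation-agnostic side ratio, always >= 1.
    val previewRatio = max(width, height).toDouble() / min(width, height)
    val ratio43 = 4.0 / 3.0
    val ratio169 = 16.0 / 9.0
    // Pick whichever target the preview ratio deviates from less.
    return if (abs(previewRatio - ratio43) <= abs(previewRatio - ratio169)) RATIO_4_3 else RATIO_16_9
}

fun main() {
    println(closestAspectRatio(640, 480))   // exactly 4:3
    println(closestAspectRatio(1920, 1080)) // exactly 16:9
}
```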
@@ -1,58 +0,0 @@
package com.mrousavy.camera.utils

import android.hardware.camera2.CameraCharacteristics
import android.util.Size
import com.facebook.react.bridge.Arguments
import com.facebook.react.bridge.ReadableArray
import com.mrousavy.camera.parsers.bigger
import kotlin.math.PI
import kotlin.math.atan

// 35mm is 135 film format, a standard in which focal lengths are usually measured
val Size35mm = Size(36, 24)

/**
 * Convert a given array of focal lengths to the corresponding TypeScript union type name.
 *
 * Possible values for single cameras:
 * * `"wide-angle-camera"`
 * * `"ultra-wide-angle-camera"`
 * * `"telephoto-camera"`
 *
 * Sources for the focal length categories:
 * * [Telephoto Lens (wikipedia)](https://en.wikipedia.org/wiki/Telephoto_lens)
 * * [Normal Lens (wikipedia)](https://en.wikipedia.org/wiki/Normal_lens)
 * * [Wide-Angle Lens (wikipedia)](https://en.wikipedia.org/wiki/Wide-angle_lens)
 * * [Ultra-Wide-Angle Lens (wikipedia)](https://en.wikipedia.org/wiki/Ultra_wide_angle_lens)
 */
fun CameraCharacteristics.getDeviceTypes(): ReadableArray {
  // TODO: Check if getDeviceType() works correctly, even for logical multi-cameras
  val focalLengths = this.get(CameraCharacteristics.LENS_INFO_AVAILABLE_FOCAL_LENGTHS)!!
  val sensorSize = this.get(CameraCharacteristics.SENSOR_INFO_PHYSICAL_SIZE)!!

  // To get valid focal length standards we have to upscale to the 35mm measurement (film standard)
  val cropFactor = Size35mm.bigger / sensorSize.bigger

  val deviceTypes = Arguments.createArray()

  val containsTelephoto = focalLengths.any { l -> (l * cropFactor) > 35 } // TODO: Telephoto lenses are > 85mm, but we don't have anything between that range..
  // val containsNormalLens = focalLengths.any { l -> (l * cropFactor) > 35 && (l * cropFactor) <= 55 }
  val containsWideAngle = focalLengths.any { l -> (l * cropFactor) >= 24 && (l * cropFactor) <= 35 }
  val containsUltraWideAngle = focalLengths.any { l -> (l * cropFactor) < 24 }

  if (containsTelephoto)
    deviceTypes.pushString("telephoto-camera")
  if (containsWideAngle)
    deviceTypes.pushString("wide-angle-camera")
  if (containsUltraWideAngle)
    deviceTypes.pushString("ultra-wide-angle-camera")

  return deviceTypes
}

fun CameraCharacteristics.getFieldOfView(): Double {
  val focalLengths = this.get(CameraCharacteristics.LENS_INFO_AVAILABLE_FOCAL_LENGTHS)!!
  val sensorSize = this.get(CameraCharacteristics.SENSOR_INFO_PHYSICAL_SIZE)!!

  return 2 * atan(sensorSize.bigger / (focalLengths[0] * 2)) * (180 / PI)
}
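The classification above first scales each physical focal length to its 35mm equivalent via the crop factor (36mm full-frame width divided by the sensor's longer side), then buckets it; the field-of-view formula is the standard `2·atan(sensor/2f)`. A pure-Kotlin sketch of both (function names and the example sensor width are illustrative assumptions, not library API):

```kotlin
import kotlin.math.PI
import kotlin.math.atan

// Bucket a 35mm-equivalent focal length the same way as above:
// < 24mm ultra-wide, 24-35mm wide, > 35mm telephoto.
fun lensTypeFor35mmEquivalent(equivalentFocalLengthMm: Double): String = when {
    equivalentFocalLengthMm < 24 -> "ultra-wide-angle-camera"
    equivalentFocalLengthMm <= 35 -> "wide-angle-camera"
    else -> "telephoto-camera"
}

// Field of view (degrees) along the sensor's longer side.
fun fieldOfView(sensorBiggerMm: Double, focalLengthMm: Double): Double =
    2 * atan(sensorBiggerMm / (focalLengthMm * 2)) * (180 / PI)

fun main() {
    val cropFactor = 36.0 / 6.3 // assumed small phone sensor, ~6.3mm on its long side
    println(lensTypeFor35mmEquivalent(4.3 * cropFactor)) // ~24.6mm equivalent -> wide
    println(fieldOfView(36.0, 26.0)) // full-frame 26mm lens
}
```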
@@ -1,25 +0,0 @@
package com.mrousavy.camera.utils

import android.annotation.SuppressLint
import androidx.camera.camera2.interop.Camera2CameraInfo
import androidx.camera.core.CameraSelector
import java.lang.IllegalArgumentException

/**
 * Create a new [CameraSelector] which selects the camera with the given [cameraId]
 */
@SuppressLint("UnsafeOptInUsageError")
fun CameraSelector.Builder.byID(cameraId: String): CameraSelector.Builder {
  return this.addCameraFilter { cameras ->
    cameras.filter { cameraInfoX ->
      try {
        val cameraInfo = Camera2CameraInfo.from(cameraInfoX)
        return@filter cameraInfo.cameraId == cameraId
      } catch (e: IllegalArgumentException) {
        // Occurs when the [cameraInfoX] is not castable to a Camera2 Info object.
        // We can ignore this error because the [getAvailableCameraDevices()] func only returns Camera2 devices.
        return@filter false
      }
    }
  }
}
@@ -1,36 +0,0 @@
package com.mrousavy.camera.utils

import android.content.Context
import android.os.Build
import android.view.Surface
import android.view.WindowManager
import com.facebook.react.bridge.ReactContext

val Context.displayRotation: Int
  get() {
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.R) {
      // Context.display
      this.display?.let { display ->
        return display.rotation
      }

      // ReactContext.currentActivity.display
      if (this is ReactContext) {
        currentActivity?.display?.let { display ->
          return display.rotation
        }
      }
    }

    // WindowManager.defaultDisplay
    val windowManager = getSystemService(Context.WINDOW_SERVICE) as? WindowManager
    if (windowManager != null) {
      @Suppress("DEPRECATION") // deprecated since SDK 30
      windowManager.defaultDisplay?.let { display ->
        return display.rotation
      }
    }

    // 0
    return Surface.ROTATION_0
  }
@@ -1,33 +0,0 @@
package com.mrousavy.camera.utils

import android.util.Range
import android.util.Size
import com.facebook.react.bridge.ReadableMap

class DeviceFormat(map: ReadableMap) {
  val frameRateRanges: List<Range<Int>>
  val photoSize: Size
  val videoSize: Size

  init {
    frameRateRanges = map.getArray("frameRateRanges")!!.toArrayList().map { range ->
      if (range is HashMap<*, *>)
        rangeFactory(range["minFrameRate"], range["maxFrameRate"])
      else
        throw IllegalArgumentException("DeviceFormat: frameRateRanges contained a Range that was not of type HashMap<*,*>! Actual Type: ${range?.javaClass?.name}")
    }
    photoSize = Size(map.getInt("photoWidth"), map.getInt("photoHeight"))
    videoSize = Size(map.getInt("videoWidth"), map.getInt("videoHeight"))
  }
}

fun rangeFactory(minFrameRate: Any?, maxFrameRate: Any?): Range<Int> {
  return when (minFrameRate) {
    is Int -> Range(minFrameRate, maxFrameRate as Int)
    is Double -> Range(minFrameRate.toInt(), (maxFrameRate as Double).toInt())
    else -> throw IllegalArgumentException(
      "DeviceFormat: frameRateRanges contained a Range that didn't have minFrameRate/maxFrameRate of types Int/Double! " +
        "Actual Type: ${minFrameRate?.javaClass?.name} & ${maxFrameRate?.javaClass?.name}"
    )
  }
}
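`rangeFactory` exists because numbers crossing the React Native bridge may arrive as either `Int` or `Double`, so both have to be accepted and normalized. A minimal pure-Kotlin sketch of that tolerant parsing, with `IntRange` standing in for `android.util.Range<Int>` (the `frameRateRange` name is a stand-in):

```kotlin
// Accepts Int or Double inputs (as the RN bridge may deliver either)
// and normalizes them to an IntRange; anything else is rejected loudly.
fun frameRateRange(minFrameRate: Any?, maxFrameRate: Any?): IntRange =
    when (minFrameRate) {
        is Int -> minFrameRate..(maxFrameRate as Int)
        is Double -> minFrameRate.toInt()..(maxFrameRate as Double).toInt()
        else -> throw IllegalArgumentException(
            "minFrameRate/maxFrameRate must be Int or Double! " +
                "Actual: ${minFrameRate?.javaClass?.name}"
        )
    }

fun main() {
    println(frameRateRange(15, 30))    // 15..30
    println(frameRateRange(7.5, 60.0)) // 7..60 (Doubles truncated to Int)
}
```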
@@ -1,62 +0,0 @@
package com.mrousavy.camera.utils

import androidx.exifinterface.media.ExifInterface
import com.facebook.react.bridge.Arguments
import com.facebook.react.bridge.WritableMap

fun ExifInterface.buildMetadataMap(): WritableMap {
  val metadataMap = Arguments.createMap()
  metadataMap.putInt("Orientation", this.getAttributeInt(ExifInterface.TAG_ORIENTATION, ExifInterface.ORIENTATION_NORMAL))

  val tiffMap = Arguments.createMap()
  tiffMap.putInt("ResolutionUnit", this.getAttributeInt(ExifInterface.TAG_RESOLUTION_UNIT, 0))
  tiffMap.putString("Software", this.getAttribute(ExifInterface.TAG_SOFTWARE))
  tiffMap.putString("Make", this.getAttribute(ExifInterface.TAG_MAKE))
  tiffMap.putString("DateTime", this.getAttribute(ExifInterface.TAG_DATETIME))
  tiffMap.putDouble("XResolution", this.getAttributeDouble(ExifInterface.TAG_X_RESOLUTION, 0.0))
  tiffMap.putString("Model", this.getAttribute(ExifInterface.TAG_MODEL))
  tiffMap.putDouble("YResolution", this.getAttributeDouble(ExifInterface.TAG_Y_RESOLUTION, 0.0))
  metadataMap.putMap("{TIFF}", tiffMap)

  val exifMap = Arguments.createMap()
  exifMap.putString("DateTimeOriginal", this.getAttribute(ExifInterface.TAG_DATETIME_ORIGINAL))
  exifMap.putDouble("ExposureTime", this.getAttributeDouble(ExifInterface.TAG_EXPOSURE_TIME, 0.0))
  exifMap.putDouble("FNumber", this.getAttributeDouble(ExifInterface.TAG_F_NUMBER, 0.0))
  val lensSpecificationArray = Arguments.createArray()
  this.getAttributeRange(ExifInterface.TAG_LENS_SPECIFICATION)?.forEach { lensSpecificationArray.pushInt(it.toInt()) }
  exifMap.putArray("LensSpecification", lensSpecificationArray)
  exifMap.putDouble("ExposureBiasValue", this.getAttributeDouble(ExifInterface.TAG_EXPOSURE_BIAS_VALUE, 0.0))
  exifMap.putInt("ColorSpace", this.getAttributeInt(ExifInterface.TAG_COLOR_SPACE, ExifInterface.COLOR_SPACE_S_RGB))
  exifMap.putInt("FocalLenIn35mmFilm", this.getAttributeInt(ExifInterface.TAG_FOCAL_LENGTH_IN_35MM_FILM, 0))
  exifMap.putDouble("BrightnessValue", this.getAttributeDouble(ExifInterface.TAG_BRIGHTNESS_VALUE, 0.0))
  exifMap.putInt("ExposureMode", this.getAttributeInt(ExifInterface.TAG_EXPOSURE_MODE, ExifInterface.EXPOSURE_MODE_AUTO.toInt()))
  exifMap.putString("LensModel", this.getAttribute(ExifInterface.TAG_LENS_MODEL))
  exifMap.putInt("SceneType", this.getAttributeInt(ExifInterface.TAG_SCENE_TYPE, ExifInterface.SCENE_TYPE_DIRECTLY_PHOTOGRAPHED.toInt()))
  exifMap.putInt("PixelXDimension", this.getAttributeInt(ExifInterface.TAG_PIXEL_X_DIMENSION, 0))
  exifMap.putDouble("ShutterSpeedValue", this.getAttributeDouble(ExifInterface.TAG_SHUTTER_SPEED_VALUE, 0.0))
  exifMap.putInt("SensingMethod", this.getAttributeInt(ExifInterface.TAG_SENSING_METHOD, ExifInterface.SENSOR_TYPE_NOT_DEFINED.toInt()))
  val subjectAreaArray = Arguments.createArray()
  this.getAttributeRange(ExifInterface.TAG_SUBJECT_AREA)?.forEach { subjectAreaArray.pushInt(it.toInt()) }
  exifMap.putArray("SubjectArea", subjectAreaArray)
  exifMap.putDouble("ApertureValue", this.getAttributeDouble(ExifInterface.TAG_APERTURE_VALUE, 0.0))
  exifMap.putString("SubsecTimeDigitized", this.getAttribute(ExifInterface.TAG_SUBSEC_TIME_DIGITIZED))
  exifMap.putDouble("FocalLength", this.getAttributeDouble(ExifInterface.TAG_FOCAL_LENGTH, 0.0))
  exifMap.putString("LensMake", this.getAttribute(ExifInterface.TAG_LENS_MAKE))
  exifMap.putString("SubsecTimeOriginal", this.getAttribute(ExifInterface.TAG_SUBSEC_TIME_ORIGINAL))
  exifMap.putString("OffsetTimeDigitized", this.getAttribute(ExifInterface.TAG_OFFSET_TIME_DIGITIZED))
  exifMap.putInt("PixelYDimension", this.getAttributeInt(ExifInterface.TAG_PIXEL_Y_DIMENSION, 0))
  val isoSpeedRatingsArray = Arguments.createArray()
  this.getAttributeRange(ExifInterface.TAG_PHOTOGRAPHIC_SENSITIVITY)?.forEach { isoSpeedRatingsArray.pushInt(it.toInt()) }
  exifMap.putArray("ISOSpeedRatings", isoSpeedRatingsArray)
  exifMap.putInt("WhiteBalance", this.getAttributeInt(ExifInterface.TAG_WHITE_BALANCE, 0))
  exifMap.putString("DateTimeDigitized", this.getAttribute(ExifInterface.TAG_DATETIME_DIGITIZED))
  exifMap.putString("OffsetTimeOriginal", this.getAttribute(ExifInterface.TAG_OFFSET_TIME_ORIGINAL))
  exifMap.putString("ExifVersion", this.getAttribute(ExifInterface.TAG_EXIF_VERSION))
  exifMap.putString("OffsetTime", this.getAttribute(ExifInterface.TAG_OFFSET_TIME))
  exifMap.putInt("Flash", this.getAttributeInt(ExifInterface.TAG_FLASH, ExifInterface.FLAG_FLASH_FIRED.toInt()))
  exifMap.putInt("ExposureProgram", this.getAttributeInt(ExifInterface.TAG_EXPOSURE_PROGRAM, ExifInterface.EXPOSURE_PROGRAM_NOT_DEFINED.toInt()))
  exifMap.putInt("MeteringMode", this.getAttributeInt(ExifInterface.TAG_METERING_MODE, ExifInterface.METERING_MODE_UNKNOWN.toInt()))
  metadataMap.putMap("{Exif}", exifMap)

  return metadataMap
}
@@ -1,41 +0,0 @@
package com.mrousavy.camera.utils

import androidx.camera.core.ImageCapture
import androidx.camera.core.ImageCaptureException
import androidx.camera.core.ImageProxy
import java.util.concurrent.Executor
import kotlin.coroutines.resume
import kotlin.coroutines.resumeWithException
import kotlin.coroutines.suspendCoroutine

suspend inline fun ImageCapture.takePicture(options: ImageCapture.OutputFileOptions, executor: Executor) = suspendCoroutine<ImageCapture.OutputFileResults> { cont ->
  this.takePicture(
    options, executor,
    object : ImageCapture.OnImageSavedCallback {
      override fun onImageSaved(outputFileResults: ImageCapture.OutputFileResults) {
        cont.resume(outputFileResults)
      }

      override fun onError(exception: ImageCaptureException) {
        cont.resumeWithException(exception)
      }
    }
  )
}

suspend inline fun ImageCapture.takePicture(executor: Executor) = suspendCoroutine<ImageProxy> { cont ->
  this.takePicture(
    executor,
    object : ImageCapture.OnImageCapturedCallback() {
      override fun onCaptureSuccess(image: ImageProxy) {
        super.onCaptureSuccess(image)
        cont.resume(image)
      }

      override fun onError(exception: ImageCaptureException) {
        super.onError(exception)
        cont.resumeWithException(exception)
      }
    }
  )
}
@@ -1,12 +0,0 @@
package com.mrousavy.camera.utils

import android.graphics.ImageFormat
import androidx.camera.core.ImageProxy

val ImageProxy.isRaw: Boolean
  get() {
    return when (format) {
      ImageFormat.RAW_SENSOR, ImageFormat.RAW10, ImageFormat.RAW12, ImageFormat.RAW_PRIVATE -> true
      else -> false
    }
  }
@@ -1,127 +0,0 @@
package com.mrousavy.camera.utils

import android.graphics.Bitmap
import android.graphics.BitmapFactory
import android.graphics.ImageFormat
import android.graphics.Matrix
import android.util.Log
import androidx.camera.core.ImageProxy
import androidx.exifinterface.media.ExifInterface
import com.mrousavy.camera.CameraView
import com.mrousavy.camera.InvalidFormatError
import java.io.ByteArrayOutputStream
import java.io.File
import java.io.FileOutputStream
import java.nio.ByteBuffer
import kotlin.system.measureTimeMillis

// TODO: Fix this flip() function (this outputs a black image)
fun flip(imageBytes: ByteArray, imageWidth: Int): ByteArray {
  // separate out the sub arrays
  var holder = ByteArray(imageBytes.size)
  var subArray = ByteArray(imageWidth)
  var subCount = 0
  for (i in imageBytes.indices) {
    subArray[subCount] = imageBytes[i]
    subCount++
    if (i % imageWidth == 0) {
      subArray.reverse()
      if (i == imageWidth) {
        holder = subArray
      } else {
        holder += subArray
      }
      subCount = 0
      subArray = ByteArray(imageWidth)
    }
  }
  subArray = ByteArray(imageWidth)
  System.arraycopy(imageBytes, imageBytes.size - imageWidth, subArray, 0, subArray.size)
  return holder + subArray
}

// TODO: This function is slow. Figure out a faster way to flip images, preferably via directly manipulating the byte[] Exif flags
fun flipImage(imageBytes: ByteArray): ByteArray {
  val bitmap = BitmapFactory.decodeByteArray(imageBytes, 0, imageBytes.size)
  val matrix = Matrix()

  val exif = ExifInterface(imageBytes.inputStream())
  val orientation = exif.getAttributeInt(ExifInterface.TAG_ORIENTATION, ExifInterface.ORIENTATION_UNDEFINED)

  when (orientation) {
    ExifInterface.ORIENTATION_ROTATE_180 -> {
      matrix.setRotate(180f)
      matrix.postScale(-1f, 1f)
    }
    ExifInterface.ORIENTATION_FLIP_VERTICAL -> {
      matrix.setRotate(180f)
    }
    ExifInterface.ORIENTATION_TRANSPOSE -> {
      matrix.setRotate(90f)
    }
    ExifInterface.ORIENTATION_ROTATE_90 -> {
      matrix.setRotate(90f)
      matrix.postScale(-1f, 1f)
    }
    ExifInterface.ORIENTATION_TRANSVERSE -> {
      matrix.setRotate(-90f)
    }
    ExifInterface.ORIENTATION_ROTATE_270 -> {
      matrix.setRotate(-90f)
      matrix.postScale(-1f, 1f)
    }
  }

  val newBitmap = Bitmap.createBitmap(bitmap, 0, 0, bitmap.width, bitmap.height, matrix, true)
  val stream = ByteArrayOutputStream()
  newBitmap.compress(Bitmap.CompressFormat.JPEG, 100, stream)
  return stream.toByteArray()
}

fun ImageProxy.save(file: File, flipHorizontally: Boolean) {
  when (format) {
    // TODO: ImageFormat.RAW_SENSOR
    // TODO: ImageFormat.DEPTH_JPEG
    ImageFormat.JPEG -> {
      val buffer = planes[0].buffer
      var bytes = ByteArray(buffer.remaining())

      // copy image from buffer to byte array
      buffer.get(bytes)

      if (flipHorizontally) {
        val milliseconds = measureTimeMillis {
          bytes = flipImage(bytes)
        }
        Log.i(CameraView.TAG_PERF, "Flipping Image took $milliseconds ms.")
      }

      val output = FileOutputStream(file)
      output.write(bytes)
      output.close()
    }
    ImageFormat.YUV_420_888 -> {
      // "prebuffer" simply contains the meta information about the following planes.
      val prebuffer = ByteBuffer.allocate(16)
      prebuffer.putInt(width)
        .putInt(height)
        .putInt(planes[1].pixelStride)
        .putInt(planes[1].rowStride)

      val output = FileOutputStream(file)
      output.write(prebuffer.array()) // write meta information to file
      // Now write the actual planes.
      var buffer: ByteBuffer
      var bytes: ByteArray

      for (i in 0..2) {
        buffer = planes[i].buffer
        bytes = ByteArray(buffer.remaining()) // makes byte array large enough to hold image
        buffer.get(bytes) // copies image from buffer to byte array
        output.write(bytes) // write the byte array to file
      }
      output.close()
    }
    else -> throw InvalidFormatError(format)
  }
}
@@ -1,17 +0,0 @@
package com.mrousavy.camera.utils

import android.util.Size
import android.view.Surface

/**
 * Rotate by a given Surface Rotation
 */
fun Size.rotated(surfaceRotation: Int): Size {
  return when (surfaceRotation) {
    Surface.ROTATION_0 -> Size(width, height)
    Surface.ROTATION_90 -> Size(height, width)
    Surface.ROTATION_180 -> Size(width, height)
    Surface.ROTATION_270 -> Size(height, width)
    else -> Size(width, height)
  }
}
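`Size.rotated` above just swaps the sides for 90°/270° rotations and keeps them for 0°/180°. A pure-Kotlin sketch of that mapping, with `Pair<width, height>` standing in for `android.util.Size` and degrees for the `Surface.ROTATION_*` constants (names are illustrative):

```kotlin
// Swap width/height for quarter-turn rotations; keep them otherwise
// (unknown rotations fall back to the unrotated size, like the original).
fun rotated(width: Int, height: Int, rotationDegrees: Int): Pair<Int, Int> =
    when (rotationDegrees) {
        90, 270 -> height to width
        else -> width to height
    }

fun main() {
    println(rotated(1920, 1080, 90))  // (1080, 1920)
    println(rotated(1920, 1080, 180)) // (1920, 1080)
}
```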
@@ -1,24 +0,0 @@
package com.mrousavy.camera.utils

import com.facebook.react.bridge.WritableArray

fun WritableArray.pushInt(value: Int?) {
  if (value == null)
    this.pushNull()
  else
    this.pushInt(value)
}

fun WritableArray.pushDouble(value: Double?) {
  if (value == null)
    this.pushNull()
  else
    this.pushDouble(value)
}

fun WritableArray.pushBoolean(value: Boolean?) {
  if (value == null)
    this.pushNull()
  else
    this.pushBoolean(value)
}
docs/docs/api/_category_.yml (new file, 1 line)
@@ -0,0 +1 @@
label: "API"
383
docs/docs/api/classes/Camera.md
Normal file
383
docs/docs/api/classes/Camera.md
Normal file
@ -0,0 +1,383 @@
|
||||
---
|
||||
id: "Camera"
|
||||
title: "Camera"
|
||||
sidebar_position: 0
|
||||
custom_edit_url: null
|
||||
---
|
||||
|
||||
### A powerful `<Camera>` component.
|
||||
|
||||
Read the [VisionCamera documentation](https://react-native-vision-camera.com/) for more information.
|
||||
|
||||
The `<Camera>` component's most important (and therefore _required_) properties are:
|
||||
|
||||
* [`device`](../interfaces/CameraProps.md#device): Specifies the [`CameraDevice`](../interfaces/CameraDevice.md) to use. Get a [`CameraDevice`](../interfaces/CameraDevice.md) by using the [`useCameraDevices()`](../#usecameradevices) hook, or manually by using the [`Camera.getAvailableCameraDevices()`](Camera.md#getavailablecameradevices) function.
|
||||
* [`isActive`](../interfaces/CameraProps.md#isactive): A boolean value that specifies whether the Camera should actively stream video frames or not. This can be compared to a Video component, where `isActive` specifies whether the video is paused or not. If you fully unmount the `<Camera>` component instead of using `isActive={false}`, the Camera will take a bit longer to start again.
|
||||
|
||||
**`Example`**
|
||||
|
||||
```tsx
|
||||
function App() {
|
||||
const devices = useCameraDevices('wide-angle-camera')
|
||||
const device = devices.back
|
||||
|
||||
if (device == null) return <LoadingView />
|
||||
return (
|
||||
<Camera
|
||||
style={StyleSheet.absoluteFill}
|
||||
device={device}
|
||||
isActive={true}
|
||||
/>
|
||||
)
|
||||
}
|
||||
```
|
||||
|
||||
**`Component`**
|
||||
|
||||
## Hierarchy
|
||||
|
||||
- `PureComponent`<[`CameraProps`](../interfaces/CameraProps.md)\>
|
||||
|
||||
↳ **`Camera`**
|
||||
|
||||
## Methods
|
||||
|
||||
### focus
|
||||
|
||||
▸ **focus**(`point`): `Promise`<`void`\>
|
||||
|
||||
Focus the camera to a specific point in the coordinate system.
|
||||
|
||||
**`Throws`**
|
||||
|
||||
[`CameraRuntimeError`](CameraRuntimeError.md) When any kind of error occured while focussing. Use the [`code`](CameraRuntimeError.md#code) property to get the actual error
|
||||
|
||||
**`Example`**
|
||||
|
||||
```ts
|
||||
await camera.current.focus({
|
||||
x: tapEvent.x,
|
||||
y: tapEvent.y
|
||||
})
|
||||
```
|
||||
|
||||
#### Parameters
|
||||
|
||||
| Name | Type | Description |
|
||||
| :------ | :------ | :------ |
|
||||
| `point` | [`Point`](../interfaces/Point.md) | The point to focus to. This should be relative to the Camera view's coordinate system, and expressed in Pixel on iOS and Points on Android. * `(0, 0)` means **top left**. * `(CameraView.width, CameraView.height)` means **bottom right**. Make sure the value doesn't exceed the CameraView's dimensions. |
|
||||
|
||||
#### Returns
|
||||
|
||||
`Promise`<`void`\>
|
||||
|
||||
#### Defined in
|
||||
|
||||
[Camera.tsx:250](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/Camera.tsx#L250)
|
||||
|
||||
___
|
||||
|
||||
### pauseRecording
|
||||
|
||||
▸ **pauseRecording**(): `Promise`<`void`\>
|
||||
|
||||
Pauses the current video recording.
|
||||
|
||||
**`Throws`**
|
||||
|
||||
[`CameraCaptureError`](CameraCaptureError.md) When any kind of error occured while pausing the video recording. Use the [`code`](CameraCaptureError.md#code) property to get the actual error
|
||||
|
||||
**`Example`**
|
||||
|
||||
```ts
|
||||
// Start
|
||||
await camera.current.startRecording()
|
||||
await timeout(1000)
|
||||
// Pause
|
||||
await camera.current.pauseRecording()
|
||||
await timeout(500)
|
||||
// Resume
|
||||
await camera.current.resumeRecording()
|
||||
await timeout(2000)
|
||||
// Stop
|
||||
const video = await camera.current.stopRecording()
|
||||
```
|
||||
|
||||
#### Returns
|
||||
|
||||
`Promise`<`void`\>
|
||||
|
||||
#### Defined in
|
||||
|
||||
[Camera.tsx:175](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/Camera.tsx#L175)
|
||||
|
||||
___
|
||||
|
||||
### resumeRecording
|
||||
|
||||
▸ **resumeRecording**(): `Promise`<`void`\>
|
||||
|
||||
Resumes a currently paused video recording.
|
||||
|
||||
**`Throws`**
|
||||
|
||||
[`CameraCaptureError`](CameraCaptureError.md) When any kind of error occured while resuming the video recording. Use the [`code`](CameraCaptureError.md#code) property to get the actual error
|
||||
|
||||
**`Example`**
|
||||
|
||||
```ts
|
||||
// Start
|
||||
await camera.current.startRecording()
|
||||
await timeout(1000)
|
||||
// Pause
|
||||
await camera.current.pauseRecording()
|
||||
await timeout(500)
|
||||
// Resume
|
||||
await camera.current.resumeRecording()
|
||||
await timeout(2000)
|
||||
// Stop
|
||||
const video = await camera.current.stopRecording()
|
||||
```
|
||||
|
||||
#### Returns
|
||||
|
||||
`Promise`<`void`\>
|
||||
|
||||
#### Defined in
|
||||
|
||||
[Camera.tsx:203](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/Camera.tsx#L203)

___

### startRecording

▸ **startRecording**(`options`): `void`

Start a new video recording.

Records in the following formats:
* **iOS**: QuickTime (`.mov`)
* **Android**: MPEG4 (`.mp4`)

**`Blocking`**

This function is synchronous/blocking.

**`Throws`**

[`CameraCaptureError`](CameraCaptureError.md) When any kind of error occurred while starting the video recording. Use the [`code`](CameraCaptureError.md#code) property to get the actual error.

**`Example`**

```ts
camera.current.startRecording({
  onRecordingFinished: (video) => console.log(video),
  onRecordingError: (error) => console.error(error),
})
setTimeout(() => {
  camera.current.stopRecording()
}, 5000)
```

#### Parameters

| Name | Type |
| :------ | :------ |
| `options` | [`RecordVideoOptions`](../interfaces/RecordVideoOptions.md) |

#### Returns

`void`

#### Defined in

[Camera.tsx:138](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/Camera.tsx#L138)

___

### stopRecording

▸ **stopRecording**(): `Promise`<`void`\>

Stop the current video recording. The recorded video is delivered to the `onRecordingFinished` callback passed to [`startRecording()`](#startrecording).

**`Throws`**

[`CameraCaptureError`](CameraCaptureError.md) When any kind of error occurred while stopping the video recording. Use the [`code`](CameraCaptureError.md#code) property to get the actual error.

**`Example`**

```ts
camera.current.startRecording({
  onRecordingFinished: (video) => console.log(video),
  onRecordingError: (error) => console.error(error),
})
setTimeout(async () => {
  await camera.current.stopRecording()
}, 5000)
```

#### Returns

`Promise`<`void`\>

#### Defined in

[Camera.tsx:224](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/Camera.tsx#L224)

___

### takePhoto

▸ **takePhoto**(`options?`): `Promise`<[`PhotoFile`](../interfaces/PhotoFile.md)\>

Take a single photo and write its contents to a temporary file.

**`Throws`**

[`CameraCaptureError`](CameraCaptureError.md) When any kind of error occurred while capturing the photo. Use the [`code`](CameraCaptureError.md#code) property to get the actual error.

**`Example`**

```ts
const photo = await camera.current.takePhoto({
  qualityPrioritization: 'quality',
  flash: 'on',
  enableAutoRedEyeReduction: true
})
```

#### Parameters

| Name | Type |
| :------ | :------ |
| `options?` | [`TakePhotoOptions`](../interfaces/TakePhotoOptions.md) |

#### Returns

`Promise`<[`PhotoFile`](../interfaces/PhotoFile.md)\>

#### Defined in

[Camera.tsx:108](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/Camera.tsx#L108)

___

### getAvailableCameraDevices

▸ `Static` **getAvailableCameraDevices**(): `Promise`<[`CameraDevice`](../interfaces/CameraDevice.md)[]\>

Get a list of all available camera devices on the current phone.

**`Throws`**

[`CameraRuntimeError`](CameraRuntimeError.md) When any kind of error occurred while getting all available camera devices. Use the [`code`](CameraRuntimeError.md#code) property to get the actual error.

**`Example`**

```ts
const devices = await Camera.getAvailableCameraDevices()
const filtered = devices.filter((d) => matchesMyExpectations(d))
const sorted = filtered.sort(sortDevicesByAmountOfCameras)
return {
  back: sorted.find((d) => d.position === "back"),
  front: sorted.find((d) => d.position === "front")
}
```

#### Returns

`Promise`<[`CameraDevice`](../interfaces/CameraDevice.md)[]\>

#### Defined in

[Camera.tsx:276](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/Camera.tsx#L276)

___

### getCameraPermissionStatus

▸ `Static` **getCameraPermissionStatus**(): `Promise`<[`CameraPermissionStatus`](../#camerapermissionstatus)\>

Gets the current Camera Permission Status. Check this before mounting the Camera to ensure
the user has permitted the app to use the camera.

To actually prompt the user for camera permission, use [`requestCameraPermission()`](Camera.md#requestcamerapermission).

**`Throws`**

[`CameraRuntimeError`](CameraRuntimeError.md) When any kind of error occurred while getting the current permission status. Use the [`code`](CameraRuntimeError.md#code) property to get the actual error.

#### Returns

`Promise`<[`CameraPermissionStatus`](../#camerapermissionstatus)\>

#### Defined in

[Camera.tsx:291](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/Camera.tsx#L291)
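
In practice the status check and the permission prompt are combined into one mount-time flow; a minimal sketch (the `useCameraPermission` hook name and React state wiring are illustrative, only the static `Camera` methods are from this API):

```ts
import { useEffect, useState } from 'react'
import { Camera } from 'react-native-vision-camera'

// Hypothetical hook: resolves to `true` once camera permission is granted.
function useCameraPermission(): boolean {
  const [hasPermission, setHasPermission] = useState(false)

  useEffect(() => {
    const check = async (): Promise<void> => {
      // Check the current status first to avoid prompting unnecessarily.
      let status = await Camera.getCameraPermissionStatus()
      if (status === 'not-determined') {
        // Shows the system alert; resolves with "granted" or "denied".
        status = await Camera.requestCameraPermission()
      }
      setHasPermission(status === 'granted')
    }
    check()
  }, [])

  return hasPermission
}
```

Only mount the `<Camera>` once the hook returns `true`; a `"denied"` or `"restricted"` status means the user has to change the permission in the system settings.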

___

### getMicrophonePermissionStatus

▸ `Static` **getMicrophonePermissionStatus**(): `Promise`<[`CameraPermissionStatus`](../#camerapermissionstatus)\>

Gets the current Microphone-Recording Permission Status. Check this before mounting the Camera to ensure
the user has permitted the app to use the microphone.

To actually prompt the user for microphone permission, use [`requestMicrophonePermission()`](Camera.md#requestmicrophonepermission).

**`Throws`**

[`CameraRuntimeError`](CameraRuntimeError.md) When any kind of error occurred while getting the current permission status. Use the [`code`](CameraRuntimeError.md#code) property to get the actual error.

#### Returns

`Promise`<[`CameraPermissionStatus`](../#camerapermissionstatus)\>

#### Defined in

[Camera.tsx:306](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/Camera.tsx#L306)

___

### requestCameraPermission

▸ `Static` **requestCameraPermission**(): `Promise`<[`CameraPermissionRequestResult`](../#camerapermissionrequestresult)\>

Shows a "request permission" alert to the user, and resolves with the new camera permission status.

If the user has previously blocked the app from using the camera, the alert will not be shown
and `"denied"` will be returned.

**`Throws`**

[`CameraRuntimeError`](CameraRuntimeError.md) When any kind of error occurred while requesting permission. Use the [`code`](CameraRuntimeError.md#code) property to get the actual error.

#### Returns

`Promise`<[`CameraPermissionRequestResult`](../#camerapermissionrequestresult)\>

#### Defined in

[Camera.tsx:321](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/Camera.tsx#L321)

___

### requestMicrophonePermission

▸ `Static` **requestMicrophonePermission**(): `Promise`<[`CameraPermissionRequestResult`](../#camerapermissionrequestresult)\>

Shows a "request permission" alert to the user, and resolves with the new microphone permission status.

If the user has previously blocked the app from using the microphone, the alert will not be shown
and `"denied"` will be returned.

**`Throws`**

[`CameraRuntimeError`](CameraRuntimeError.md) When any kind of error occurred while requesting permission. Use the [`code`](CameraRuntimeError.md#code) property to get the actual error.

#### Returns

`Promise`<[`CameraPermissionRequestResult`](../#camerapermissionrequestresult)\>

#### Defined in

[Camera.tsx:336](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/Camera.tsx#L336)

`docs/docs/api/classes/CameraCaptureError.md` (new file):

---
id: "CameraCaptureError"
title: "CameraCaptureError"
sidebar_position: 0
custom_edit_url: null
---

Represents any kind of error that occurred while trying to capture a video or photo.

See the ["Camera Errors" documentation](https://react-native-vision-camera.com/docs/guides/errors) for more information about Camera Errors.

## Hierarchy

- `CameraError`<[`CaptureError`](../#captureerror)\>

  ↳ **`CameraCaptureError`**

## Accessors

### cause

• `get` **cause**(): `undefined` \| `Error`

#### Returns

`undefined` \| `Error`

#### Inherited from

CameraError.cause

#### Defined in

[CameraError.ts:132](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/CameraError.ts#L132)

___

### code

• `get` **code**(): `TCode`

#### Returns

`TCode`

#### Inherited from

CameraError.code

#### Defined in

[CameraError.ts:126](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/CameraError.ts#L126)

___

### message

• `get` **message**(): `string`

#### Returns

`string`

#### Inherited from

CameraError.message

#### Defined in

[CameraError.ts:129](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/CameraError.ts#L129)

## Methods

### toString

▸ **toString**(): `string`

#### Returns

`string`

#### Inherited from

CameraError.toString

#### Defined in

[CameraError.ts:150](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/CameraError.ts#L150)
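
A sketch of handling a `CameraCaptureError` by its [`code`](#code) accessor (the specific codes handled here are illustrative picks from the [`CaptureError`](../#captureerror) union):

```ts
import { Camera, CameraCaptureError } from 'react-native-vision-camera'

async function takePhotoSafely(camera: Camera): Promise<void> {
  try {
    const photo = await camera.takePhoto()
    console.log(`Photo saved at ${photo.path}`)
  } catch (e) {
    if (e instanceof CameraCaptureError) {
      // `code` identifies the exact capture failure.
      switch (e.code) {
        case 'capture/file-io-error':
          console.error('Failed to write the photo to disk!')
          break
        case 'capture/aborted':
          console.warn('Capture was aborted.')
          break
        default:
          console.error(`Capture failed: ${e.message}`)
      }
    } else {
      // Not a capture error - rethrow unchanged.
      throw e
    }
  }
}
```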

`docs/docs/api/classes/CameraRuntimeError.md` (new file):

---
id: "CameraRuntimeError"
title: "CameraRuntimeError"
sidebar_position: 0
custom_edit_url: null
---

Represents any kind of error that occurred in the Camera View Module.

See the ["Camera Errors" documentation](https://react-native-vision-camera.com/docs/guides/errors) for more information about Camera Errors.

## Hierarchy

- `CameraError`<[`PermissionError`](../#permissionerror) \| [`ParameterError`](../#parametererror) \| [`DeviceError`](../#deviceerror) \| [`FormatError`](../#formaterror) \| [`SessionError`](../#sessionerror) \| [`SystemError`](../#systemerror) \| [`UnknownError`](../#unknownerror)\>

  ↳ **`CameraRuntimeError`**

## Accessors

### cause

• `get` **cause**(): `undefined` \| `Error`

#### Returns

`undefined` \| `Error`

#### Inherited from

CameraError.cause

#### Defined in

[CameraError.ts:132](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/CameraError.ts#L132)

___

### code

• `get` **code**(): `TCode`

#### Returns

`TCode`

#### Inherited from

CameraError.code

#### Defined in

[CameraError.ts:126](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/CameraError.ts#L126)

___

### message

• `get` **message**(): `string`

#### Returns

`string`

#### Inherited from

CameraError.message

#### Defined in

[CameraError.ts:129](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/CameraError.ts#L129)

## Methods

### toString

▸ **toString**(): `string`

#### Returns

`string`

#### Inherited from

CameraError.toString

#### Defined in

[CameraError.ts:150](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/CameraError.ts#L150)

`docs/docs/api/classes/_category_.yml` (new file):

label: "Classes"
position: 3

`docs/docs/api/index.md` (new file):

---
id: "index"
title: "VisionCamera"
sidebar_label: "Overview"
sidebar_position: 0.5
custom_edit_url: null
---

## Classes

- [Camera](classes/Camera.md)
- [CameraCaptureError](classes/CameraCaptureError.md)
- [CameraRuntimeError](classes/CameraRuntimeError.md)

## Interfaces

- [CameraDevice](interfaces/CameraDevice.md)
- [CameraDeviceFormat](interfaces/CameraDeviceFormat.md)
- [CameraProps](interfaces/CameraProps.md)
- [ErrorWithCause](interfaces/ErrorWithCause.md)
- [PhotoFile](interfaces/PhotoFile.md)
- [Point](interfaces/Point.md)
- [RecordVideoOptions](interfaces/RecordVideoOptions.md)
- [TakePhotoOptions](interfaces/TakePhotoOptions.md)
- [TemporaryFile](interfaces/TemporaryFile.md)
- [VideoFile](interfaces/VideoFile.md)

## Type Aliases

### AutoFocusSystem

Ƭ **AutoFocusSystem**: ``"contrast-detection"`` \| ``"phase-detection"`` \| ``"none"``

Indicates a format's autofocus system.

* `"none"`: Indicates that autofocus is not available
* `"contrast-detection"`: Indicates that autofocus is achieved by contrast detection. Contrast detection performs a focus scan to find the optimal position
* `"phase-detection"`: Indicates that autofocus is achieved by phase detection. Phase detection can achieve focus in many cases without a focus scan, and is typically less visually intrusive than contrast detection autofocus

#### Defined in

[CameraDevice.ts:53](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/CameraDevice.ts#L53)

___

### CameraDevices

Ƭ **CameraDevices**: { [key in CameraPosition]: CameraDevice \| undefined }

#### Defined in

[hooks/useCameraDevices.ts:7](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/hooks/useCameraDevices.ts#L7)

___

### CameraPermissionRequestResult

Ƭ **CameraPermissionRequestResult**: ``"granted"`` \| ``"denied"``

#### Defined in

[Camera.tsx:15](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/Camera.tsx#L15)

___

### CameraPermissionStatus

Ƭ **CameraPermissionStatus**: ``"granted"`` \| ``"not-determined"`` \| ``"denied"`` \| ``"restricted"``

#### Defined in

[Camera.tsx:14](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/Camera.tsx#L14)

___

### CameraPosition

Ƭ **CameraPosition**: ``"front"`` \| ``"back"`` \| ``"unspecified"`` \| ``"external"``

Represents the camera device position.

* `"back"`: Indicates that the device is physically located on the back of the system hardware
* `"front"`: Indicates that the device is physically located on the front of the system hardware

#### iOS only
* `"unspecified"`: Indicates that the device's position relative to the system hardware is unspecified

#### Android only
* `"external"`: The camera device is an external camera and has no fixed facing relative to the device's screen

#### Defined in

[CameraPosition.ts:13](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/CameraPosition.ts#L13)

___

### CaptureError

Ƭ **CaptureError**: ``"capture/invalid-photo-format"`` \| ``"capture/encoder-error"`` \| ``"capture/muxer-error"`` \| ``"capture/recording-in-progress"`` \| ``"capture/no-recording-in-progress"`` \| ``"capture/file-io-error"`` \| ``"capture/create-temp-file-error"`` \| ``"capture/invalid-video-options"`` \| ``"capture/create-recorder-error"`` \| ``"capture/recorder-error"`` \| ``"capture/no-valid-data"`` \| ``"capture/inactive-source"`` \| ``"capture/insufficient-storage"`` \| ``"capture/file-size-limit-reached"`` \| ``"capture/invalid-photo-codec"`` \| ``"capture/not-bound-error"`` \| ``"capture/capture-type-not-supported"`` \| ``"capture/video-not-enabled"`` \| ``"capture/photo-not-enabled"`` \| ``"capture/aborted"`` \| ``"capture/unknown"``

#### Defined in

[CameraError.ts:31](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/CameraError.ts#L31)

___

### DeviceError

Ƭ **DeviceError**: ``"device/configuration-error"`` \| ``"device/no-device"`` \| ``"device/invalid-device"`` \| ``"device/torch-unavailable"`` \| ``"device/microphone-unavailable"`` \| ``"device/pixel-format-not-supported"`` \| ``"device/low-light-boost-not-supported"`` \| ``"device/focus-not-supported"`` \| ``"device/camera-not-available-on-simulator"``

#### Defined in

[CameraError.ts:8](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/CameraError.ts#L8)

___

### FormatError

Ƭ **FormatError**: ``"format/invalid-fps"`` \| ``"format/invalid-hdr"`` \| ``"format/invalid-low-light-boost"`` \| ``"format/invalid-format"`` \| ``"format/invalid-color-space"``

#### Defined in

[CameraError.ts:18](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/CameraError.ts#L18)

___

### FrameProcessor

Ƭ **FrameProcessor**: `Object`

#### Type declaration

| Name | Type |
| :------ | :------ |
| `frameProcessor` | (`frame`: `Frame`) => `void` |
| `type` | ``"frame-processor"`` |

#### Defined in

[CameraProps.ts:7](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/CameraProps.ts#L7)

___

### LogicalCameraDeviceType

Ƭ **LogicalCameraDeviceType**: ``"dual-camera"`` \| ``"dual-wide-camera"`` \| ``"triple-camera"``

Identifiers for a logical camera (combinations of multiple physical cameras that create a single logical camera).

* `"dual-camera"`: A combination of wide-angle and telephoto cameras that creates a capture device.
* `"dual-wide-camera"`: A device that consists of two cameras of fixed focal length, one ultrawide angle and one wide angle.
* `"triple-camera"`: A device that consists of three cameras of fixed focal length, one ultrawide angle, one wide angle, and one telephoto.

#### Defined in

[CameraDevice.ts:21](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/CameraDevice.ts#L21)

___

### ParameterError

Ƭ **ParameterError**: ``"parameter/invalid-parameter"`` \| ``"parameter/unsupported-os"`` \| ``"parameter/unsupported-output"`` \| ``"parameter/unsupported-input"`` \| ``"parameter/invalid-combination"``

#### Defined in

[CameraError.ts:2](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/CameraError.ts#L2)

___

### PermissionError

Ƭ **PermissionError**: ``"permission/microphone-permission-denied"`` \| ``"permission/camera-permission-denied"``

#### Defined in

[CameraError.ts:1](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/CameraError.ts#L1)

___

### PhysicalCameraDeviceType

Ƭ **PhysicalCameraDeviceType**: ``"ultra-wide-angle-camera"`` \| ``"wide-angle-camera"`` \| ``"telephoto-camera"``

Identifiers for a physical camera (one that actually exists on the back/front of the device).

* `"ultra-wide-angle-camera"`: A built-in camera with a shorter focal length than that of a wide-angle camera (focal length below 24mm)
* `"wide-angle-camera"`: A built-in wide-angle camera (focal length between 24mm and 35mm)
* `"telephoto-camera"`: A built-in camera device with a longer focal length than a wide-angle camera (focal length above 85mm)

#### Defined in

[CameraDevice.ts:12](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/CameraDevice.ts#L12)

___

### SessionError

Ƭ **SessionError**: ``"session/camera-not-ready"`` \| ``"session/camera-cannot-be-opened"`` \| ``"session/camera-has-been-disconnected"`` \| ``"session/audio-session-setup-failed"`` \| ``"session/audio-in-use-by-other-app"`` \| ``"session/audio-session-failed-to-activate"``

#### Defined in

[CameraError.ts:24](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/CameraError.ts#L24)

___

### SystemError

Ƭ **SystemError**: ``"system/camera-module-not-found"`` \| ``"system/no-camera-manager"`` \| ``"system/frame-processors-unavailable"`` \| ``"system/view-not-found"``

#### Defined in

[CameraError.ts:53](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/CameraError.ts#L53)

___

### UnknownError

Ƭ **UnknownError**: ``"unknown/unknown"``

#### Defined in

[CameraError.ts:58](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/CameraError.ts#L58)

___

### VideoStabilizationMode

Ƭ **VideoStabilizationMode**: ``"off"`` \| ``"standard"`` \| ``"cinematic"`` \| ``"cinematic-extended"`` \| ``"auto"``

Indicates a format's supported video stabilization mode. Enabling video stabilization may introduce additional latency into the video capture pipeline.

* `"off"`: No video stabilization. Indicates that video should not be stabilized
* `"standard"`: Standard software-based video stabilization. Standard video stabilization reduces the field of view by about 10%.
* `"cinematic"`: Advanced software-based video stabilization. This applies more aggressive cropping or transformations than standard.
* `"cinematic-extended"`: Extended software- and hardware-based stabilization that aggressively crops and transforms the video to apply a smooth cinematic stabilization.
* `"auto"`: Indicates that the most appropriate video stabilization mode for the device and format should be chosen automatically

#### Defined in

[CameraDevice.ts:64](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/CameraDevice.ts#L64)

## Variables

### VisionCameraProxy

• `Const` **VisionCameraProxy**: `TVisionCameraProxy` = `proxy`

#### Defined in

[FrameProcessorPlugins.ts:95](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/FrameProcessorPlugins.ts#L95)

## Functions

### createFrameProcessor

▸ **createFrameProcessor**(`frameProcessor`, `type`): [`FrameProcessor`](#frameprocessor)

Create a new Frame Processor function which you can pass to the `<Camera>`.
(See ["Frame Processors"](https://mrousavy.github.io/react-native-vision-camera/docs/guides/frame-processors))

Make sure to add the `'worklet'` directive to the top of the Frame Processor function, otherwise it will not get compiled into a worklet.

Also make sure to memoize the returned object, so that the Camera doesn't reset the Frame Processor Context each time.

#### Parameters

| Name | Type |
| :------ | :------ |
| `frameProcessor` | (`frame`: `Frame`) => `void` |
| `type` | ``"frame-processor"`` |

#### Returns

[`FrameProcessor`](#frameprocessor)

#### Defined in

[hooks/useFrameProcessor.ts:13](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/hooks/useFrameProcessor.ts#L13)

___

### isErrorWithCause

▸ **isErrorWithCause**(`error`): error is ErrorWithCause

Checks if the given `error` is of type [`ErrorWithCause`](interfaces/ErrorWithCause.md).

#### Parameters

| Name | Type | Description |
| :------ | :------ | :------ |
| `error` | `unknown` | Any unknown object to validate |

#### Returns

error is ErrorWithCause

`true` if the given `error` is of type [`ErrorWithCause`](interfaces/ErrorWithCause.md)

#### Defined in

[CameraError.ts:176](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/CameraError.ts#L176)
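
A minimal usage sketch inside a `catch` block (assuming a `camera` ref as in the [`Camera`](classes/Camera.md) examples; the logging is illustrative):

```ts
import { isErrorWithCause } from 'react-native-vision-camera'

try {
  const photo = await camera.current.takePhoto()
} catch (e) {
  if (isErrorWithCause(e)) {
    // Narrowed to ErrorWithCause - the nested `cause` is now safe to inspect.
    console.error(`Capture failed: ${e.message}`, e.cause)
  } else {
    console.error('Capture failed with an unknown error', e)
  }
}
```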

___

### parsePhysicalDeviceTypes

▸ **parsePhysicalDeviceTypes**(`physicalDeviceTypes`): [`PhysicalCameraDeviceType`](#physicalcameradevicetype) \| [`LogicalCameraDeviceType`](#logicalcameradevicetype)

Parses an array of physical device types into a single [`PhysicalCameraDeviceType`](#physicalcameradevicetype) or [`LogicalCameraDeviceType`](#logicalcameradevicetype), depending on what matches.

**`Method`**

#### Parameters

| Name | Type |
| :------ | :------ |
| `physicalDeviceTypes` | [`PhysicalCameraDeviceType`](#physicalcameradevicetype)[] |

#### Returns

[`PhysicalCameraDeviceType`](#physicalcameradevicetype) \| [`LogicalCameraDeviceType`](#logicalcameradevicetype)

#### Defined in

[CameraDevice.ts:27](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/CameraDevice.ts#L27)

___

### runAsync

▸ **runAsync**(`frame`, `func`): `void`

Runs the given function asynchronously, while keeping a strong reference to the Frame.

For example, if you want to run a heavy face detection algorithm
while still drawing to the screen at 60 FPS, you can use `runAsync(...)`
to offload the face detection algorithm to a separate thread.

**`Example`**

```ts
const frameProcessor = useFrameProcessor((frame) => {
  'worklet'
  console.log('New Frame')
  runAsync(frame, () => {
    'worklet'
    const faces = detectFaces(frame)
    const face = faces[0]
    console.log(`Detected a new face: ${face}`)
  })
})
```

#### Parameters

| Name | Type | Description |
| :------ | :------ | :------ |
| `frame` | `Frame` | The current Frame of the Frame Processor. |
| `func` | () => `void` | The function to execute. |

#### Returns

`void`

#### Defined in

[FrameProcessorPlugins.ts:177](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/FrameProcessorPlugins.ts#L177)

___

### runAtTargetFps

▸ **runAtTargetFps**<`T`\>(`fps`, `func`): `T` \| `undefined`

Runs the given function at the given target FPS rate.

For example, if you want to run a heavy face detection algorithm
only once per second, you can use `runAtTargetFps(1, ...)` to
throttle it to 1 FPS.

**`Example`**

```ts
const frameProcessor = useFrameProcessor((frame) => {
  'worklet'
  console.log('New Frame')
  runAtTargetFps(5, () => {
    'worklet'
    const faces = detectFaces(frame)
    console.log(`Detected a new face: ${faces[0]}`)
  })
})
```

#### Type parameters

| Name |
| :------ |
| `T` |

#### Parameters

| Name | Type | Description |
| :------ | :------ | :------ |
| `fps` | `number` | The target FPS rate at which the given function should be executed |
| `func` | () => `T` | The function to execute. |

#### Returns

`T` \| `undefined`

The result of the function if it was executed, or `undefined` otherwise.

#### Defined in

[FrameProcessorPlugins.ts:136](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/FrameProcessorPlugins.ts#L136)

___

### sortDevices

▸ **sortDevices**(`left`, `right`): `number`

Compares two devices by the following criteria:
* `wide-angle-camera`s are ranked higher than others
* Devices with more physical cameras are ranked higher than ones with fewer. (e.g. "Triple Camera" > "Wide-Angle Camera")

> Note that this makes the `sort()` function descending, so the first element (`[0]`) is the "best" device.

**`Example`**

```ts
const devices = camera.devices.sort(sortDevices)
const bestDevice = devices[0]
```

**`Method`**

#### Parameters

| Name | Type |
| :------ | :------ |
| `left` | [`CameraDevice`](interfaces/CameraDevice.md) |
| `right` | [`CameraDevice`](interfaces/CameraDevice.md) |

#### Returns

`number`

#### Defined in

[utils/FormatFilter.ts:18](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/utils/FormatFilter.ts#L18)

___

### sortFormats

▸ **sortFormats**(`left`, `right`): `number`

Sort formats by resolution and aspect ratio difference (to the Screen size).

> Note that this makes the `sort()` function descending, so the first element (`[0]`) is the "best" format.

#### Parameters

| Name | Type |
| :------ | :------ |
| `left` | [`CameraDeviceFormat`](interfaces/CameraDeviceFormat.md) |
| `right` | [`CameraDeviceFormat`](interfaces/CameraDeviceFormat.md) |

#### Returns

`number`

#### Defined in

[utils/FormatFilter.ts:72](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/utils/FormatFilter.ts#L72)
___

### tryParseNativeCameraError

▸ **tryParseNativeCameraError**<`T`\>(`nativeError`): [`CameraCaptureError`](classes/CameraCaptureError.md) \| [`CameraRuntimeError`](classes/CameraRuntimeError.md) \| `T`

Tries to parse an error coming from native into a typed JS camera error.

**`Method`**

#### Type parameters

| Name |
| :------ |
| `T` |

#### Parameters

| Name | Type | Description |
| :------ | :------ | :------ |
| `nativeError` | `T` | The native error instance. This is a JSON object in the legacy native module architecture. |

#### Returns

[`CameraCaptureError`](classes/CameraCaptureError.md) \| [`CameraRuntimeError`](classes/CameraRuntimeError.md) \| `T`

A [`CameraRuntimeError`](classes/CameraRuntimeError.md) or [`CameraCaptureError`](classes/CameraCaptureError.md), or the `nativeError` itself if it's not parsable.

#### Defined in

[CameraError.ts:202](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/CameraError.ts#L202)
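The parse-or-passthrough behavior can be sketched as follows (a hypothetical, minimal parser for illustration; the library's actual implementation and error shapes differ):

```ts
// Hypothetical minimal native error shape, for illustration only
interface NativeErrorLike { code?: string; message?: string }

// Return a typed Error when the input is parsable, otherwise pass it through unchanged
function tryParse<T extends NativeErrorLike>(nativeError: T): Error | T {
  if (typeof nativeError.code === 'string' && typeof nativeError.message === 'string') {
    return new Error(`[${nativeError.code}] ${nativeError.message}`)
  }
  return nativeError // not parsable → caller receives the original value
}

const parsed = tryParse({ code: 'device/no-device', message: 'No device found' })
const passedThrough = tryParse({ message: 'No code present' })
```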
___

### useCameraDevices

▸ **useCameraDevices**(): [`CameraDevices`](#cameradevices)

Gets the best available [`CameraDevice`](interfaces/CameraDevice.md). Devices with more physical cameras are preferred.

**`Throws`**

[`CameraRuntimeError`](classes/CameraRuntimeError.md) if no device was found.

**`Example`**

```tsx
const devices = useCameraDevices()
const device = devices.back
// ...
return <Camera device={device} />
```

#### Returns

[`CameraDevices`](#cameradevices)

The best matching [`CameraDevice`](interfaces/CameraDevice.md).

#### Defined in

[hooks/useCameraDevices.ts:29](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/hooks/useCameraDevices.ts#L29)

▸ **useCameraDevices**(`deviceType`): [`CameraDevices`](#cameradevices)

Gets a [`CameraDevice`](interfaces/CameraDevice.md) for the requested device type.

**`Throws`**

[`CameraRuntimeError`](classes/CameraRuntimeError.md) if no device was found.

**`Example`**

```tsx
const devices = useCameraDevices('wide-angle-camera')
const device = devices.back
// ...
return <Camera device={device} />
```

#### Parameters

| Name | Type | Description |
| :------ | :------ | :------ |
| `deviceType` | [`PhysicalCameraDeviceType`](#physicalcameradevicetype) \| [`LogicalCameraDeviceType`](#logicalcameradevicetype) | Specifies a device type which will be used as a device filter. |

#### Returns

[`CameraDevices`](#cameradevices)

A [`CameraDevice`](interfaces/CameraDevice.md) for the requested device type.

#### Defined in

[hooks/useCameraDevices.ts:44](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/hooks/useCameraDevices.ts#L44)

___

### useCameraFormat

▸ **useCameraFormat**(`device?`): [`CameraDeviceFormat`](interfaces/CameraDeviceFormat.md) \| `undefined`

Returns the best format for the given camera device.

This function tries to choose a format with the highest possible photo-capture resolution and the best matching aspect ratio.

#### Parameters

| Name | Type | Description |
| :------ | :------ | :------ |
| `device?` | [`CameraDevice`](interfaces/CameraDevice.md) | The Camera Device |

#### Returns

[`CameraDeviceFormat`](interfaces/CameraDeviceFormat.md) \| `undefined`

The best matching format for the given camera device, or `undefined` if the camera device is `undefined`.

#### Defined in

[hooks/useCameraFormat.ts:14](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/hooks/useCameraFormat.ts#L14)

___

### useFrameProcessor

▸ **useFrameProcessor**(`frameProcessor`, `dependencies`): [`FrameProcessor`](#frameprocessor)

Returns a memoized Frame Processor function which you can pass to the `<Camera>`.
(See ["Frame Processors"](https://mrousavy.github.io/react-native-vision-camera/docs/guides/frame-processors))

Make sure to add the `'worklet'` directive to the top of the Frame Processor function, otherwise it will not get compiled into a worklet.

**`Example`**

```ts
const frameProcessor = useFrameProcessor((frame) => {
  'worklet'
  const qrCodes = scanQRCodes(frame)
  console.log(`QR Codes: ${qrCodes}`)
}, [])
```

#### Parameters

| Name | Type | Description |
| :------ | :------ | :------ |
| `frameProcessor` | (`frame`: `Frame`) => `void` | The Frame Processor |
| `dependencies` | `DependencyList` | The React dependencies which will be copied into the VisionCamera JS-Runtime. |

#### Returns

[`FrameProcessor`](#frameprocessor)

The memoized Frame Processor.

#### Defined in

[hooks/useFrameProcessor.ts:49](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/hooks/useFrameProcessor.ts#L49)
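Since Frame Processors are now synchronous, per-call throttling is done with `runAtTargetFps` inside the worklet. The throttling idea can be sketched in plain TypeScript as below (a hypothetical simplification with simulated timestamps, not the library's implementation):

```ts
// Run a callback at most `fps` times per second, based on elapsed time
function makeThrottle(fps: number) {
  const intervalMs = 1000 / fps
  let lastRun = -Infinity
  return (nowMs: number, callback: () => void): boolean => {
    if (nowMs - lastRun < intervalMs) return false // too soon, skip this frame
    lastRun = nowMs
    callback()
    return true
  }
}

let calls = 0
const runAt2Fps = makeThrottle(2) // at most once every 500 ms
runAt2Fps(0, () => calls++)       // runs
runAt2Fps(100, () => calls++)     // skipped (only 100 ms elapsed)
runAt2Fps(600, () => calls++)     // runs
// calls === 2
```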
---
id: "CameraDevice"
title: "CameraDevice"
sidebar_position: 0
custom_edit_url: null
---

Represents a camera device discovered by the [`Camera.getAvailableCameraDevices()`](../classes/Camera.md#getavailablecameradevices) function.

## Properties

### devices

• **devices**: [`PhysicalCameraDeviceType`](../#physicalcameradevicetype)[]

The physical devices this `CameraDevice` contains.

* If this camera device is a **logical camera** (a combination of multiple physical cameras), there are multiple cameras in this array.
* If this camera device is a **physical camera**, there is only a single element in this array.

You can check whether the camera is a logical multi-camera by using the `isMultiCam` property.

#### Defined in

[CameraDevice.ts:149](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/CameraDevice.ts#L149)

___

### formats

• **formats**: [`CameraDeviceFormat`](CameraDeviceFormat.md)[]

All available formats for this camera device. Use this to find the best format for your use case and set it as the Camera's [`format`](CameraProps.md#format) property.

See [the Camera Formats documentation](https://react-native-vision-camera.com/docs/guides/formats) for more information about Camera Formats.

#### Defined in

[CameraDevice.ts:203](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/CameraDevice.ts#L203)

___

### hardwareLevel

• **hardwareLevel**: ``"legacy"`` \| ``"limited"`` \| ``"full"``

The hardware level of the Camera.
- On Android, some older devices run at a `legacy` or `limited` level, which means they operate in a backwards-compatible mode.
- On iOS, all devices are `full`.

#### Defined in

[CameraDevice.ts:229](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/CameraDevice.ts#L229)
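An app might use `hardwareLevel` to decide its capture configuration, echoing the note on the `video` prop that `'legacy'`/`'limited'` devices may constrain simultaneous photo and video capture. A hedged sketch (hypothetical helper, not part of the library):

```ts
type HardwareLevel = 'legacy' | 'limited' | 'full'

// Only enable photo and video capture together on fully capable hardware
function canEnablePhotoAndVideo(level: HardwareLevel): boolean {
  return level === 'full'
}
```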
___

### hasFlash

• **hasFlash**: `boolean`

Specifies whether this camera supports enabling flash for photo capture.

#### Defined in

[CameraDevice.ts:161](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/CameraDevice.ts#L161)

___

### hasTorch

• **hasTorch**: `boolean`

Specifies whether this camera supports continuously enabling the flash to act like a torch (flash with video capture).

#### Defined in

[CameraDevice.ts:165](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/CameraDevice.ts#L165)

___

### id

• **id**: `string`

The native ID of the camera device instance.

#### Defined in

[CameraDevice.ts:140](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/CameraDevice.ts#L140)

___

### isMultiCam

• **isMultiCam**: `boolean`

A property indicating whether the device is a virtual multi-camera consisting of multiple combined physical cameras.

Examples:
* The Dual Camera, which supports seamlessly switching between a wide and a telephoto camera while zooming, and generating depth data from the disparities between the different points of view of the physical cameras.
* The TrueDepth Camera, which generates depth data from disparities between a YUV camera and an Infrared camera pointed in the same direction.

#### Defined in

[CameraDevice.ts:173](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/CameraDevice.ts#L173)

___

### maxZoom

• **maxZoom**: `number`

Maximum available zoom factor (e.g. `128`)

#### Defined in

[CameraDevice.ts:181](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/CameraDevice.ts#L181)

___

### minZoom

• **minZoom**: `number`

Minimum available zoom factor (e.g. `1`)

#### Defined in

[CameraDevice.ts:177](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/CameraDevice.ts#L177)

___

### name

• **name**: `string`

A friendly localized name describing the camera.

#### Defined in

[CameraDevice.ts:157](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/CameraDevice.ts#L157)

___

### neutralZoom

• **neutralZoom**: `number`

The zoom factor where the camera is "neutral".

* For single physical cameras this property is always `1.0`.
* For multi cameras this property is a value between `minZoom` and `maxZoom`, where the camera is in _wide-angle_ mode and hasn't switched to the _ultra-wide-angle_ ("fish-eye") or telephoto camera yet.

Use this value as the initial value for the zoom property if you implement custom zoom (e.g. a Reanimated shared value should initially be set to this value).

**`Example`**

```ts
const device = ...

const zoom = useSharedValue(device.neutralZoom) // <-- initial value so it doesn't start at ultra-wide
const cameraProps = useAnimatedProps(() => ({
  zoom: zoom.value
}))
```

#### Defined in

[CameraDevice.ts:197](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/CameraDevice.ts#L197)

___

### position

• **position**: [`CameraPosition`](../#cameraposition)

Specifies the physical position of this camera (back or front).

#### Defined in

[CameraDevice.ts:153](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/CameraDevice.ts#L153)

___

### sensorOrientation

• **sensorOrientation**: `Orientation`

Represents the sensor's orientation relative to the phone.
For most phones this will be landscape, as Camera sensors are usually rotated by 90 degrees (i.e. width and height are flipped).

#### Defined in

[CameraDevice.ts:234](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/CameraDevice.ts#L234)

___

### supportsDepthCapture

• **supportsDepthCapture**: `boolean`

Whether this camera supports taking photos with depth data.

**! Work in Progress !**

#### Defined in

[CameraDevice.ts:213](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/CameraDevice.ts#L213)

___

### supportsFocus

• **supportsFocus**: `boolean`

Specifies whether this device supports focusing ([`Camera.focus(...)`](../classes/Camera.md#focus))

#### Defined in

[CameraDevice.ts:223](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/CameraDevice.ts#L223)

___

### supportsLowLightBoost

• **supportsLowLightBoost**: `boolean`

Whether this camera device supports low-light boost.

#### Defined in

[CameraDevice.ts:207](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/CameraDevice.ts#L207)

___

### supportsRawCapture

• **supportsRawCapture**: `boolean`

Whether this camera supports taking photos in RAW format.

**! Work in Progress !**

#### Defined in

[CameraDevice.ts:219](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/CameraDevice.ts#L219)
---
id: "CameraDeviceFormat"
title: "CameraDeviceFormat"
sidebar_position: 0
custom_edit_url: null
---

A Camera Device's video format. Do not create instances of this type yourself; only use [`Camera.getAvailableCameraDevices()`](../classes/Camera.md#getavailablecameradevices).

## Properties

### autoFocusSystem

• **autoFocusSystem**: [`AutoFocusSystem`](../#autofocussystem)

Specifies this format's auto focus system.

#### Defined in

[CameraDevice.ts:121](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/CameraDevice.ts#L121)

___

### fieldOfView

• **fieldOfView**: `number`

The video field of view in degrees.

#### Defined in

[CameraDevice.ts:97](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/CameraDevice.ts#L97)

___

### maxFps

• **maxFps**: `number`

The maximum frame rate this format is able to run at. High-resolution formats often run at lower frame rates.

#### Defined in

[CameraDevice.ts:117](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/CameraDevice.ts#L117)

___

### maxISO

• **maxISO**: `number`

Maximum supported ISO value.

#### Defined in

[CameraDevice.ts:89](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/CameraDevice.ts#L89)

___

### maxZoom

• **maxZoom**: `number`

The maximum zoom factor (e.g. `128`)

#### Defined in

[CameraDevice.ts:101](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/CameraDevice.ts#L101)

___

### minFps

• **minFps**: `number`

The minimum frame rate this format needs to run at. High-resolution formats often run at lower frame rates.

#### Defined in

[CameraDevice.ts:113](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/CameraDevice.ts#L113)
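Together, `minFps` and `maxFps` describe the frame rate range a format supports. Selecting a format for a desired frame rate can be sketched as below (mock data; the field names follow this page, everything else is illustrative):

```ts
interface FpsRange { minFps: number; maxFps: number }

// A format supports a frame rate if it falls inside [minFps, maxFps]
function supportsFps(format: FpsRange, fps: number): boolean {
  return fps >= format.minFps && fps <= format.maxFps
}

const formats: FpsRange[] = [
  { minFps: 1, maxFps: 30 },
  { minFps: 1, maxFps: 60 },
]
const sixtyFpsCapable = formats.filter((f) => supportsFps(f, 60))
// only the 1..60 range qualifies
```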
___

### minISO

• **minISO**: `number`

Minimum supported ISO value.

#### Defined in

[CameraDevice.ts:93](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/CameraDevice.ts#L93)

___

### photoHeight

• **photoHeight**: `number`

The height of the highest resolution a still image (photo) can be produced in.

#### Defined in

[CameraDevice.ts:73](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/CameraDevice.ts#L73)

___

### photoWidth

• **photoWidth**: `number`

The width of the highest resolution a still image (photo) can be produced in.

#### Defined in

[CameraDevice.ts:77](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/CameraDevice.ts#L77)

___

### pixelFormats

• **pixelFormats**: `PixelFormat`[]

Specifies this format's supported pixel-formats.
In most cases, this is `['native', 'yuv']`.

#### Defined in

[CameraDevice.ts:130](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/CameraDevice.ts#L130)
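Choosing a pixel format from this list, preferring one format (e.g. `rgb` for an ML model) with a fallback to whatever is available, can be sketched like this (hypothetical helper, not part of the library):

```ts
type PixelFormat = 'native' | 'yuv' | 'rgb'

// Return the preferred format if the device format supports it, else the first supported one
function pickPixelFormat(available: PixelFormat[], preferred: PixelFormat): PixelFormat {
  return available.includes(preferred) ? preferred : available[0]
}

const fallback = pickPixelFormat(['native', 'yuv'], 'rgb')       // falls back to 'native'
const chosen = pickPixelFormat(['native', 'yuv', 'rgb'], 'rgb')  // picks 'rgb'
```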
___

### supportsPhotoHDR

• **supportsPhotoHDR**: `boolean`

Specifies whether this format supports HDR mode for photo capture.

#### Defined in

[CameraDevice.ts:109](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/CameraDevice.ts#L109)

___

### supportsVideoHDR

• **supportsVideoHDR**: `boolean`

Specifies whether this format supports HDR mode for video capture.

#### Defined in

[CameraDevice.ts:105](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/CameraDevice.ts#L105)

___

### videoHeight

• **videoHeight**: `number`

The video resolution's height.

#### Defined in

[CameraDevice.ts:81](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/CameraDevice.ts#L81)

___

### videoStabilizationModes

• **videoStabilizationModes**: [`VideoStabilizationMode`](../#videostabilizationmode)[]

All supported video stabilization modes.

#### Defined in

[CameraDevice.ts:125](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/CameraDevice.ts#L125)

___

### videoWidth

• **videoWidth**: `number`

The video resolution's width.

#### Defined in

[CameraDevice.ts:85](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/CameraDevice.ts#L85)
---
id: "CameraProps"
title: "CameraProps"
sidebar_position: 0
custom_edit_url: null
---

## Hierarchy

- `ViewProps`

  ↳ **`CameraProps`**

## Properties

### audio

• `Optional` **audio**: `boolean`

Enables **audio capture** for video recordings (see ["Recording Videos"](https://react-native-vision-camera.com/docs/guides/capturing/#recording-videos))

#### Defined in

[CameraProps.ts:61](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/CameraProps.ts#L61)

___

### device

• **device**: [`CameraDevice`](CameraDevice.md)

The Camera Device to use.

See the [Camera Devices](https://react-native-vision-camera.com/docs/guides/devices) section in the documentation for more information about Camera Devices.

**`Example`**

```tsx
const devices = useCameraDevices('wide-angle-camera')
const device = devices.back

return (
  <Camera
    device={device}
    isActive={true}
    style={StyleSheet.absoluteFill}
  />
)
```

#### Defined in

[CameraProps.ts:37](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/CameraProps.ts#L37)

___

### enableDepthData

• `Optional` **enableDepthData**: `boolean`

Also captures data from depth-perception sensors (e.g. disparity maps).

**`Default`**

false

#### Defined in

[CameraProps.ts:145](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/CameraProps.ts#L145)

___

### enableFpsGraph

• `Optional` **enableFpsGraph**: `boolean`

If `true`, shows a debug view that displays the FPS of the Camera session.
This is useful for debugging your Frame Processor's speed.

**`Default`**

false

#### Defined in

[CameraProps.ts:173](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/CameraProps.ts#L173)
___

### enableHighQualityPhotos

• `Optional` **enableHighQualityPhotos**: `boolean`

Indicates whether the Camera should prepare the photo pipeline to provide maximum quality photos.

This enables:
* High Resolution Capture ([`isHighResolutionCaptureEnabled`](https://developer.apple.com/documentation/avfoundation/avcapturephotooutput/1648721-ishighresolutioncaptureenabled))
* Virtual Device fusion for greater detail ([`isVirtualDeviceConstituentPhotoDeliveryEnabled`](https://developer.apple.com/documentation/avfoundation/avcapturephotooutput/3192189-isvirtualdeviceconstituentphotod))
* Dual Device fusion for greater detail ([`isDualCameraDualPhotoDeliveryEnabled`](https://developer.apple.com/documentation/avfoundation/avcapturephotosettings/2873917-isdualcameradualphotodeliveryena))
* Sets the maximum quality prioritization to `.quality` ([`maxPhotoQualityPrioritization`](https://developer.apple.com/documentation/avfoundation/avcapturephotooutput/3182995-maxphotoqualityprioritization))

**`Default`**

false

#### Defined in

[CameraProps.ts:166](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/CameraProps.ts#L166)

___

### enablePortraitEffectsMatteDelivery

• `Optional` **enablePortraitEffectsMatteDelivery**: `boolean`

A boolean specifying whether the photo render pipeline is prepared for portrait effects matte delivery.

When enabling this, you must also set `enableDepthData` to `true`.

**`Platform`**

iOS 12.0+

**`Default`**

false

#### Defined in

[CameraProps.ts:154](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/CameraProps.ts#L154)

___

### enableZoomGesture

• `Optional` **enableZoomGesture**: `boolean`

Enables or disables the native pinch-to-zoom gesture.

If you want to implement a custom zoom gesture, see [the Zooming with Reanimated documentation](https://react-native-vision-camera.com/docs/guides/animated).

**`Default`**

false

#### Defined in

[CameraProps.ts:106](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/CameraProps.ts#L106)

___

### format

• `Optional` **format**: [`CameraDeviceFormat`](CameraDeviceFormat.md)

Selects a given format. By default, the best matching format is chosen.

#### Defined in

[CameraProps.ts:113](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/CameraProps.ts#L113)

___

### fps

• `Optional` **fps**: `number`

Specifies the frames per second this camera should run at. Make sure the given `format` includes a frame rate range that contains the given `fps`.

Requires `format` to be set.

#### Defined in

[CameraProps.ts:119](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/CameraProps.ts#L119)
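Since `fps` must lie inside the chosen format's frame rate range, an app might clamp the requested value before passing it to `<Camera>`. A hedged sketch (hypothetical helper; field names follow the `CameraDeviceFormat` page):

```ts
interface FormatFps { minFps: number; maxFps: number }

// Clamp a requested frame rate into the format's supported range
function clampFps(format: FormatFps, requested: number): number {
  return Math.min(Math.max(requested, format.minFps), format.maxFps)
}

const fps = clampFps({ minFps: 1, maxFps: 30 }, 60) // 60 is out of range → 30
```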
___

### frameProcessor

• `Optional` **frameProcessor**: [`FrameProcessor`](../#frameprocessor)

A worklet which will be called for every frame the Camera "sees".

> See [the Frame Processors documentation](https://mrousavy.github.io/react-native-vision-camera/docs/guides/frame-processors) for more information

**`Example`**

```tsx
const frameProcessor = useFrameProcessor((frame) => {
  'worklet'
  const qrCodes = scanQRCodes(frame)
  console.log(`Detected QR Codes: ${qrCodes}`)
}, [])

return <Camera {...cameraProps} frameProcessor={frameProcessor} />
```

#### Defined in

[CameraProps.ts:204](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/CameraProps.ts#L204)

___

### hdr

• `Optional` **hdr**: `boolean`

Enables or disables HDR on this camera device. Make sure the given `format` supports HDR mode.

Requires `format` to be set.

#### Defined in

[CameraProps.ts:125](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/CameraProps.ts#L125)

___

### isActive

• **isActive**: `boolean`

Whether the Camera should actively stream video frames or not. See the [documentation about the `isActive` prop](https://react-native-vision-camera.com/docs/guides/lifecycle#the-isactive-prop) for more information.

This can be compared to a Video component, where `isActive` specifies whether the video is paused or not.

> Note: If you fully unmount the `<Camera>` component instead of using `isActive={false}`, the Camera will take a bit longer to start again. In return, it will use fewer resources, since the Camera is completely destroyed when unmounted.

#### Defined in

[CameraProps.ts:45](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/CameraProps.ts#L45)

___

### lowLightBoost

• `Optional` **lowLightBoost**: `boolean`

Enables or disables low-light boost on this camera device. Make sure the given `format` supports low-light boost.

Requires `format` to be set.

#### Defined in

[CameraProps.ts:131](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/CameraProps.ts#L131)

___

### onError

• `Optional` **onError**: (`error`: [`CameraRuntimeError`](../classes/CameraRuntimeError.md)) => `void`

#### Type declaration

▸ (`error`): `void`

Called when any kind of runtime error occurred.

##### Parameters

| Name | Type |
| :------ | :------ |
| `error` | [`CameraRuntimeError`](../classes/CameraRuntimeError.md) |

##### Returns

`void`

#### Defined in

[CameraProps.ts:183](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/CameraProps.ts#L183)
___

### onInitialized

• `Optional` **onInitialized**: () => `void`

#### Type declaration

▸ (): `void`

Called when the camera was successfully initialized.

##### Returns

`void`

#### Defined in

[CameraProps.ts:187](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/CameraProps.ts#L187)

___

### orientation

• `Optional` **orientation**: `Orientation`

Represents the orientation of all Camera Outputs (Photo, Video, and Frame Processor). If this value is not set, the device orientation is used.

#### Defined in

[CameraProps.ts:177](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/CameraProps.ts#L177)

___

### photo

• `Optional` **photo**: `boolean`

Enables **photo capture** with the `takePhoto` function (see ["Taking Photos"](https://react-native-vision-camera.com/docs/guides/capturing#taking-photos))

#### Defined in

[CameraProps.ts:51](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/CameraProps.ts#L51)

___

### pixelFormat

• `Optional` **pixelFormat**: ``"yuv"`` \| ``"rgb"`` \| ``"native"``

Specifies the pixel format for the video pipeline.

Frames from a [Frame Processor](https://mrousavy.github.io/react-native-vision-camera/docs/guides/frame-processors) will be streamed in the pixel format specified here.

While `native` and `yuv` are the most efficient formats, some ML models (such as MLKit barcode detection) require input Frames to be in RGB colorspace; otherwise they just output nonsense.

- `native`: The hardware-native GPU buffer format. This is the most efficient format. (`PRIVATE` on Android, sometimes YUV on iOS)
- `yuv`: The YUV (Y'CbCr 4:2:0 or NV21, 8-bit) format, either video- or full-range, depending on hardware capabilities. This is the second most efficient format.
- `rgb`: The RGB (RGB, RGBA or ABGRA, 8-bit) format. This is the least efficient format and requires explicit conversion.

**`Default`**

`native`

#### Defined in

[CameraProps.ts:75](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/CameraProps.ts#L75)

___

### torch

• `Optional` **torch**: ``"off"`` \| ``"on"``

Set the current torch mode.

Note: The torch is only available on `"back"` cameras, and isn't supported by every phone.

**`Default`**

"off"

#### Defined in

[CameraProps.ts:86](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/CameraProps.ts#L86)

___

### video

• `Optional` **video**: `boolean`

Enables **video capture** with the `startRecording` function (see ["Recording Videos"](https://react-native-vision-camera.com/docs/guides/capturing/#recording-videos))

Note: If both the `photo` and `video` properties are enabled at the same time and the device is running at a `hardwareLevel` of `'legacy'` or `'limited'`, VisionCamera _might_ use a lower resolution for video capture due to hardware constraints.

#### Defined in

[CameraProps.ts:57](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/CameraProps.ts#L57)

___

### videoStabilizationMode

• `Optional` **videoStabilizationMode**: [`VideoStabilizationMode`](../#videostabilizationmode)

Specifies the video stabilization mode to use.

Requires a `format` to be set that contains the given `videoStabilizationMode`.

#### Defined in

[CameraProps.ts:137](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/CameraProps.ts#L137)

___

### zoom

• `Optional` **zoom**: `number`

Specifies the zoom factor of the current camera, as a scale factor.

This value ranges from `minZoom` (e.g. `1`) to `maxZoom` (e.g. `128`). It is recommended to set this value
to the CameraDevice's `neutralZoom` by default, and let the user zoom out to the fish-eye (ultra-wide) camera
on demand (if available).

**Note:** Linearly increasing this value always appears logarithmic to the user.

**`Default`**

1.0

#### Defined in

[CameraProps.ts:98](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/CameraProps.ts#L98)
|
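Because linear increases of the `zoom` value appear logarithmic to the user, a common approach is to interpolate exponentially between `minZoom` and `maxZoom` when driving the prop from a slider. A minimal sketch — the `sliderToZoom` helper and its 0..1 slider convention are hypothetical, not part of the library:

```typescript
// Hypothetical helper (not part of react-native-vision-camera): maps a
// linear slider position (0..1) to a zoom factor so that zooming feels
// perceptually linear. `minZoom`/`maxZoom` would come from a CameraDevice.
function sliderToZoom(position: number, minZoom: number, maxZoom: number): number {
  // Exponential interpolation: equal slider steps multiply the zoom
  // factor by a constant ratio, which the eye perceives as even steps.
  return minZoom * Math.pow(maxZoom / minZoom, position)
}

sliderToZoom(0, 1, 128) // -> 1
sliderToZoom(1, 1, 128) // -> 128
```

The resulting value can then be passed as the `zoom` prop.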
docs/docs/api/interfaces/ErrorWithCause.md (new file, 98 lines)
@@ -0,0 +1,98 @@
---
id: "ErrorWithCause"
title: "ErrorWithCause"
sidebar_position: 0
custom_edit_url: null
---

Represents a JSON-style error cause. This contains native `NSError`/`Throwable` information, and can have recursive [`.cause`](ErrorWithCause.md#cause) properties until the ultimate cause has been found.

## Properties

### cause

• `Optional` **cause**: [`ErrorWithCause`](ErrorWithCause.md)

Optional additional cause for nested errors

* iOS: N/A
* Android: `Throwable.cause`

#### Defined in

[CameraError.ts:105](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/CameraError.ts#L105)

___

### code

• `Optional` **code**: `number`

The native error's code.

* iOS: `NSError.code`
* Android: N/A

#### Defined in

[CameraError.ts:70](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/CameraError.ts#L70)

___

### details

• `Optional` **details**: `Record`<`string`, `unknown`\>

Optional additional details

* iOS: `NSError.userInfo`
* Android: N/A

#### Defined in

[CameraError.ts:91](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/CameraError.ts#L91)

___

### domain

• `Optional` **domain**: `string`

The native error's domain.

* iOS: `NSError.domain`
* Android: N/A

#### Defined in

[CameraError.ts:77](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/CameraError.ts#L77)

___

### message

• **message**: `string`

The native error description

* iOS: `NSError.message`
* Android: `Throwable.message`

#### Defined in

[CameraError.ts:84](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/CameraError.ts#L84)

___

### stacktrace

• `Optional` **stacktrace**: `string`

Optional Java stacktrace

* iOS: N/A
* Android: `Throwable.stacktrace.toString()`

#### Defined in

[CameraError.ts:98](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/CameraError.ts#L98)
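Since `cause` is recursive, finding the ultimate cause means walking the chain until it ends. A minimal sketch — the `rootCause` helper is hypothetical, and the interface below merely mirrors the properties documented above:

```typescript
// Mirrors the documented ErrorWithCause shape (simplified stand-in).
interface ErrorWithCause {
  code?: number
  domain?: string
  message: string
  details?: Record<string, unknown>
  stacktrace?: string
  cause?: ErrorWithCause
}

// Hypothetical helper (not part of the library): walks the recursive
// `.cause` chain until the ultimate cause has been found.
function rootCause(error: ErrorWithCause): ErrorWithCause {
  let current = error
  while (current.cause != null) {
    current = current.cause
  }
  return current
}
```

For example, `rootCause({ message: 'capture failed', cause: { message: 'disk full' } })` yields the `'disk full'` cause.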
docs/docs/api/interfaces/Frame.md (new file, 118 lines)
@@ -0,0 +1,118 @@
---
id: "Frame"
title: "Frame"
sidebar_position: 0
custom_edit_url: null
---

A single frame, as seen by the camera.

## Properties

### bytesPerRow

• **bytesPerRow**: `number`

Returns the amount of bytes per row.

#### Defined in

[Frame.ts:20](https://github.com/mrousavy/react-native-vision-camera/blob/c2fb5bf1/src/Frame.ts#L20)

___

### height

• **height**: `number`

Returns the height of the frame, in pixels.

#### Defined in

[Frame.ts:16](https://github.com/mrousavy/react-native-vision-camera/blob/c2fb5bf1/src/Frame.ts#L16)

___

### isValid

• **isValid**: `boolean`

Whether the underlying buffer is still valid or not. The buffer will be released after the frame processor returns, or `close()` is called.

#### Defined in

[Frame.ts:8](https://github.com/mrousavy/react-native-vision-camera/blob/c2fb5bf1/src/Frame.ts#L8)

___

### planesCount

• **planesCount**: `number`

Returns the number of planes this frame contains.

#### Defined in

[Frame.ts:24](https://github.com/mrousavy/react-native-vision-camera/blob/c2fb5bf1/src/Frame.ts#L24)

___

### width

• **width**: `number`

Returns the width of the frame, in pixels.

#### Defined in

[Frame.ts:12](https://github.com/mrousavy/react-native-vision-camera/blob/c2fb5bf1/src/Frame.ts#L12)
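Because `bytesPerRow` already accounts for any row padding the native buffer may use, the byte size of a plane can be estimated from it and the frame height. A minimal sketch — the helper is hypothetical and only considers a single plane:

```typescript
// Hypothetical helper (not part of the library): estimates the byte size
// of one plane of a Frame. `bytesPerRow` includes row padding, so this is
// an upper bound on the visible pixel data of that plane.
function planeSize(frame: { bytesPerRow: number; height: number }): number {
  return frame.bytesPerRow * frame.height
}

// e.g. a 1920x1080 plane with no row padding:
planeSize({ bytesPerRow: 1920, height: 1080 }) // -> 2073600
```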
## Methods

### close

▸ **close**(): `void`

Closes and disposes the Frame.
Only close frames that you have created yourself, e.g. by copying the frame you receive in a frame processor.

**`Example`**

```ts
const frameProcessor = useFrameProcessor((frame) => {
  const smallerCopy = resize(frame, 480, 270)
  // run AI ...
  smallerCopy.close()
  // don't close `frame`!
})
```

#### Returns

`void`

#### Defined in

[Frame.ts:48](https://github.com/mrousavy/react-native-vision-camera/blob/c2fb5bf1/src/Frame.ts#L48)

___

### toString

▸ **toString**(): `string`

Returns a string representation of the frame.

**`Example`**

```ts
console.log(frame.toString()) // -> "3840 x 2160 Frame"
```

#### Returns

`string`

#### Defined in

[Frame.ts:33](https://github.com/mrousavy/react-native-vision-camera/blob/c2fb5bf1/src/Frame.ts#L33)
@@ -0,0 +1,26 @@
---
id: "FrameProcessorPerformanceSuggestion"
title: "FrameProcessorPerformanceSuggestion"
sidebar_position: 0
custom_edit_url: null
---

## Properties

### suggestedFrameProcessorFps

• **suggestedFrameProcessorFps**: `number`

#### Defined in

[CameraProps.ts:9](https://github.com/mrousavy/react-native-vision-camera/blob/c2fb5bf1/src/CameraProps.ts#L9)

___

### type

• **type**: ``"can-use-higher-fps"`` \| ``"should-use-lower-fps"``

#### Defined in

[CameraProps.ts:8](https://github.com/mrousavy/react-native-vision-camera/blob/c2fb5bf1/src/CameraProps.ts#L8)
docs/docs/api/interfaces/FrameRateRange.md (new file, 26 lines)
@@ -0,0 +1,26 @@
---
id: "FrameRateRange"
title: "FrameRateRange"
sidebar_position: 0
custom_edit_url: null
---

## Properties

### maxFrameRate

• **maxFrameRate**: `number`

#### Defined in

[CameraDevice.ts:104](https://github.com/mrousavy/react-native-vision-camera/blob/c2fb5bf1/src/CameraDevice.ts#L104)

___

### minFrameRate

• **minFrameRate**: `number`

#### Defined in

[CameraDevice.ts:103](https://github.com/mrousavy/react-native-vision-camera/blob/c2fb5bf1/src/CameraDevice.ts#L103)
docs/docs/api/interfaces/PhotoFile.md (new file, 178 lines)
@@ -0,0 +1,178 @@
---
id: "PhotoFile"
title: "PhotoFile"
sidebar_position: 0
custom_edit_url: null
---

Represents a Photo taken by the Camera written to the local filesystem.

See [`Camera.takePhoto()`](../classes/Camera.md#takephoto)

## Hierarchy

- [`TemporaryFile`](TemporaryFile.md)

  ↳ **`PhotoFile`**

## Properties

### height

• **height**: `number`

The height of the photo, in pixels.

#### Defined in

[PhotoFile.ts:62](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/PhotoFile.ts#L62)

___

### isMirrored

• **isMirrored**: `boolean`

Whether this photo is mirrored (selfies) or not.

#### Defined in

[PhotoFile.ts:76](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/PhotoFile.ts#L76)

___

### isRawPhoto

• **isRawPhoto**: `boolean`

Whether this photo is in RAW format or not.

#### Defined in

[PhotoFile.ts:66](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/PhotoFile.ts#L66)

___

### metadata

• `Optional` **metadata**: `Object`

Metadata information describing the captured image. (iOS only)

**`See`**

[AVCapturePhoto.metadata](https://developer.apple.com/documentation/avfoundation/avcapturephoto/2873982-metadata)

**`Platform`**

iOS

#### Type declaration

| Name | Type | Description |
| :------ | :------ | :------ |
| `DPIHeight` | `number` | **`Platform`** iOS |
| `DPIWidth` | `number` | **`Platform`** iOS |
| `Orientation` | `number` | Orientation of the EXIF Image. * 1 = 0 degrees: the correct orientation, no adjustment is required. * 2 = 0 degrees, mirrored: image has been flipped back-to-front. * 3 = 180 degrees: image is upside down. * 4 = 180 degrees, mirrored: image has been flipped back-to-front and is upside down. * 5 = 90 degrees: image has been flipped back-to-front and is on its side. * 6 = 90 degrees, mirrored: image is on its side. * 7 = 270 degrees: image has been flipped back-to-front and is on its far side. * 8 = 270 degrees, mirrored: image is on its far side. |
| `{Exif}` | { `ApertureValue`: `number` ; `BrightnessValue`: `number` ; `ColorSpace`: `number` ; `DateTimeDigitized`: `string` ; `DateTimeOriginal`: `string` ; `ExifVersion`: `string` ; `ExposureBiasValue`: `number` ; `ExposureMode`: `number` ; `ExposureProgram`: `number` ; `ExposureTime`: `number` ; `FNumber`: `number` ; `Flash`: `number` ; `FocalLenIn35mmFilm`: `number` ; `FocalLength`: `number` ; `ISOSpeedRatings`: `number`[] ; `LensMake`: `string` ; `LensModel`: `string` ; `LensSpecification`: `number`[] ; `MeteringMode`: `number` ; `OffsetTime`: `string` ; `OffsetTimeDigitized`: `string` ; `OffsetTimeOriginal`: `string` ; `PixelXDimension`: `number` ; `PixelYDimension`: `number` ; `SceneType`: `number` ; `SensingMethod`: `number` ; `ShutterSpeedValue`: `number` ; `SubjectArea`: `number`[] ; `SubsecTimeDigitized`: `string` ; `SubsecTimeOriginal`: `string` ; `WhiteBalance`: `number` } | - |
| `{Exif}.ApertureValue` | `number` | - |
| `{Exif}.BrightnessValue` | `number` | - |
| `{Exif}.ColorSpace` | `number` | - |
| `{Exif}.DateTimeDigitized` | `string` | - |
| `{Exif}.DateTimeOriginal` | `string` | - |
| `{Exif}.ExifVersion` | `string` | - |
| `{Exif}.ExposureBiasValue` | `number` | - |
| `{Exif}.ExposureMode` | `number` | - |
| `{Exif}.ExposureProgram` | `number` | - |
| `{Exif}.ExposureTime` | `number` | - |
| `{Exif}.FNumber` | `number` | - |
| `{Exif}.Flash` | `number` | - |
| `{Exif}.FocalLenIn35mmFilm` | `number` | - |
| `{Exif}.FocalLength` | `number` | - |
| `{Exif}.ISOSpeedRatings` | `number`[] | - |
| `{Exif}.LensMake` | `string` | - |
| `{Exif}.LensModel` | `string` | - |
| `{Exif}.LensSpecification` | `number`[] | - |
| `{Exif}.MeteringMode` | `number` | - |
| `{Exif}.OffsetTime` | `string` | - |
| `{Exif}.OffsetTimeDigitized` | `string` | - |
| `{Exif}.OffsetTimeOriginal` | `string` | - |
| `{Exif}.PixelXDimension` | `number` | - |
| `{Exif}.PixelYDimension` | `number` | - |
| `{Exif}.SceneType` | `number` | - |
| `{Exif}.SensingMethod` | `number` | - |
| `{Exif}.ShutterSpeedValue` | `number` | - |
| `{Exif}.SubjectArea` | `number`[] | - |
| `{Exif}.SubsecTimeDigitized` | `string` | - |
| `{Exif}.SubsecTimeOriginal` | `string` | - |
| `{Exif}.WhiteBalance` | `number` | - |
| `{MakerApple}?` | `Record`<`string`, `unknown`\> | Represents any data Apple cameras write to the metadata **`Platform`** iOS |
| `{TIFF}` | { `DateTime`: `string` ; `HostComputer?`: `string` ; `Make`: `string` ; `Model`: `string` ; `ResolutionUnit`: `number` ; `Software`: `string` ; `XResolution`: `number` ; `YResolution`: `number` } | - |
| `{TIFF}.DateTime` | `string` | - |
| `{TIFF}.HostComputer?` | `string` | **`Platform`** iOS |
| `{TIFF}.Make` | `string` | - |
| `{TIFF}.Model` | `string` | - |
| `{TIFF}.ResolutionUnit` | `number` | - |
| `{TIFF}.Software` | `string` | - |
| `{TIFF}.XResolution` | `number` | - |
| `{TIFF}.YResolution` | `number` | - |

#### Defined in

[PhotoFile.ts:85](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/PhotoFile.ts#L85)
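The numeric EXIF `Orientation` tag is easy to misread, so mapping it to a label can help when debugging captured photos. A minimal sketch — the helper and its labels are hypothetical, derived directly from the table above:

```typescript
// Hypothetical helper (not part of the library): maps the EXIF Orientation
// tag (1-8) from PhotoFile.metadata to the rotation/mirror label described
// in the docs above. Unknown values map to 'unknown'.
const EXIF_ORIENTATIONS: Record<number, string> = {
  1: '0 degrees',
  2: '0 degrees, mirrored',
  3: '180 degrees',
  4: '180 degrees, mirrored',
  5: '90 degrees',
  6: '90 degrees, mirrored',
  7: '270 degrees',
  8: '270 degrees, mirrored',
}

function describeExifOrientation(orientation: number): string {
  return EXIF_ORIENTATIONS[orientation] ?? 'unknown'
}

describeExifOrientation(3) // -> '180 degrees'
```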
___

### orientation

• **orientation**: `Orientation`

Display orientation of the photo, relative to the Camera's sensor orientation.

Note that Camera sensors are landscape, so e.g. "portrait" photos will have a value of "landscape-left", etc.

#### Defined in

[PhotoFile.ts:72](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/PhotoFile.ts#L72)

___

### path

• **path**: `string`

The path of the file.

* **Note:** If you want to consume this file (e.g. for displaying it in an `<Image>` component), you might have to add the `file://` prefix.

* **Note:** This file might get deleted once the app closes because it lives in the temp directory.

#### Inherited from

[TemporaryFile](TemporaryFile.md).[path](TemporaryFile.md#path)

#### Defined in

[TemporaryFile.ts:12](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/TemporaryFile.ts#L12)

___

### thumbnail

• `Optional` **thumbnail**: `Record`<`string`, `unknown`\>

#### Defined in

[PhotoFile.ts:77](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/PhotoFile.ts#L77)

___

### width

• **width**: `number`

The width of the photo, in pixels.

#### Defined in

[PhotoFile.ts:58](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/PhotoFile.ts#L58)
docs/docs/api/interfaces/Point.md (new file, 32 lines)
@@ -0,0 +1,32 @@
---
id: "Point"
title: "Point"
sidebar_position: 0
custom_edit_url: null
---

Represents a Point in a 2 dimensional coordinate system.

## Properties

### x

• **x**: `number`

The X coordinate of this Point. (double)

#### Defined in

[Point.ts:8](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/Point.ts#L8)

___

### y

• **y**: `number`

The Y coordinate of this Point. (double)

#### Defined in

[Point.ts:12](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/Point.ts#L12)
docs/docs/api/interfaces/RecordVideoOptions.md (new file, 96 lines)
@@ -0,0 +1,96 @@
---
id: "RecordVideoOptions"
title: "RecordVideoOptions"
sidebar_position: 0
custom_edit_url: null
---

## Properties

### fileType

• `Optional` **fileType**: ``"mov"`` \| ``"mp4"``

Specifies the output file type to record videos into.

#### Defined in

[VideoFile.ts:12](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/VideoFile.ts#L12)

___

### flash

• `Optional` **flash**: ``"off"`` \| ``"auto"`` \| ``"on"``

Set the video flash mode. Natively, this just enables the torch while recording.

#### Defined in

[VideoFile.ts:8](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/VideoFile.ts#L8)

___

### onRecordingError

• **onRecordingError**: (`error`: [`CameraCaptureError`](../classes/CameraCaptureError.md)) => `void`

#### Type declaration

▸ (`error`): `void`

Called when there was an unexpected runtime error while recording the video.

##### Parameters

| Name | Type |
| :------ | :------ |
| `error` | [`CameraCaptureError`](../classes/CameraCaptureError.md) |

##### Returns

`void`

#### Defined in

[VideoFile.ts:16](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/VideoFile.ts#L16)

___

### onRecordingFinished

• **onRecordingFinished**: (`video`: [`VideoFile`](VideoFile.md)) => `void`

#### Type declaration

▸ (`video`): `void`

Called when the recording has been successfully saved to file.

##### Parameters

| Name | Type |
| :------ | :------ |
| `video` | [`VideoFile`](VideoFile.md) |

##### Returns

`void`

#### Defined in

[VideoFile.ts:20](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/VideoFile.ts#L20)

___

### videoCodec

• `Optional` **videoCodec**: ``"h265"``

The Video Codec to record in.
- `h264`: Widely supported, but might be less efficient, especially with larger sizes or framerates.
- `h265`: The HEVC (High-Efficiency Video Codec) for more efficient video recordings.

#### Defined in

[VideoFile.ts:26](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/VideoFile.ts#L26)
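Putting the interface together, a `RecordVideoOptions`-shaped object bundles the capture settings with the two result callbacks. A minimal sketch — simplified stand-in types are used here instead of the library's `VideoFile`/`CameraCaptureError` so the example is self-contained:

```typescript
// Simplified stand-in for the library's VideoFile (path + duration only).
interface VideoFileLike {
  path: string
  duration: number
}

let lastPath: string | undefined

// A RecordVideoOptions-shaped object, as described above. In a real app
// this would be passed to camera.current.startRecording(options).
const options = {
  fileType: 'mp4' as const,
  flash: 'off' as const,
  onRecordingFinished: (video: VideoFileLike) => {
    lastPath = video.path
  },
  onRecordingError: (error: Error) => {
    console.error('Recording failed:', error.message)
  },
}

// Simulate the camera invoking the success callback:
options.onRecordingFinished({ path: '/tmp/video.mp4', duration: 5.2 })
```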
docs/docs/api/interfaces/TakePhotoOptions.md (new file, 111 lines)
@@ -0,0 +1,111 @@
---
id: "TakePhotoOptions"
title: "TakePhotoOptions"
sidebar_position: 0
custom_edit_url: null
---

## Properties

### enableAutoDistortionCorrection

• `Optional` **enableAutoDistortionCorrection**: `boolean`

Specifies whether the photo output should use content aware distortion correction on this photo request.
For example, the algorithm may not apply correction to faces in the center of a photo, but may apply it to faces near the photo's edges.

**`Platform`**

iOS

**`Default`**

false

#### Defined in

[PhotoFile.ts:40](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/PhotoFile.ts#L40)

___

### enableAutoRedEyeReduction

• `Optional` **enableAutoRedEyeReduction**: `boolean`

Specifies whether red-eye reduction should be applied automatically on flash captures.

**`Default`**

false

#### Defined in

[PhotoFile.ts:26](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/PhotoFile.ts#L26)

___

### enableAutoStabilization

• `Optional` **enableAutoStabilization**: `boolean`

Indicates whether still image stabilization will be employed when capturing the photo

**`Default`**

false

#### Defined in

[PhotoFile.ts:32](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/PhotoFile.ts#L32)

___

### enableShutterSound

• `Optional` **enableShutterSound**: `boolean`

Whether to play the default shutter "click" sound when taking a picture or not.

**`Default`**

true

#### Defined in

[PhotoFile.ts:46](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/PhotoFile.ts#L46)

___

### flash

• `Optional` **flash**: ``"off"`` \| ``"auto"`` \| ``"on"``

Whether the Flash should be enabled or disabled

**`Default`**

"auto"

#### Defined in

[PhotoFile.ts:20](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/PhotoFile.ts#L20)

___

### qualityPrioritization

• `Optional` **qualityPrioritization**: ``"quality"`` \| ``"balanced"`` \| ``"speed"``

Indicates how photo quality should be prioritized against speed.

* `"quality"` Indicates that photo quality is paramount, even at the expense of shot-to-shot time
* `"balanced"` Indicates that photo quality and speed of delivery are balanced in priority
* `"speed"` Indicates that speed of photo delivery is most important, even at the expense of quality

**`Default`**

"balanced"

#### Defined in

[PhotoFile.ts:14](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/PhotoFile.ts#L14)
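All of these options are optional, and each has a documented default. The defaults can be spelled out as a plain merge, which is a useful way to double-check what an empty options object resolves to. A minimal sketch — the `withDefaults` helper is hypothetical; the default values mirror the documentation above:

```typescript
// Mirrors the documented TakePhotoOptions shape (simplified stand-in).
interface TakePhotoOptions {
  flash?: 'off' | 'auto' | 'on'
  qualityPrioritization?: 'quality' | 'balanced' | 'speed'
  enableAutoRedEyeReduction?: boolean
  enableAutoStabilization?: boolean
  enableAutoDistortionCorrection?: boolean
  enableShutterSound?: boolean
}

// Hypothetical helper (not part of the library): fills in the documented
// defaults before passing options to takePhoto().
function withDefaults(options: TakePhotoOptions): Required<TakePhotoOptions> {
  return {
    flash: options.flash ?? 'auto',
    qualityPrioritization: options.qualityPrioritization ?? 'balanced',
    enableAutoRedEyeReduction: options.enableAutoRedEyeReduction ?? false,
    enableAutoStabilization: options.enableAutoStabilization ?? false,
    enableAutoDistortionCorrection: options.enableAutoDistortionCorrection ?? false,
    enableShutterSound: options.enableShutterSound ?? true,
  }
}

withDefaults({}).flash // -> 'auto'
```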
docs/docs/api/interfaces/TakeSnapshotOptions.md (new file, 62 lines)
@@ -0,0 +1,62 @@
---
id: "TakeSnapshotOptions"
title: "TakeSnapshotOptions"
sidebar_position: 0
custom_edit_url: null
---

## Properties

### flash

• `Optional` **flash**: ``"off"`` \| ``"on"``

Whether the Flash should be enabled or disabled

**`Default`**

"off"

#### Defined in

[Snapshot.ts:16](https://github.com/mrousavy/react-native-vision-camera/blob/c2fb5bf1/src/Snapshot.ts#L16)

___

### quality

• `Optional` **quality**: `number`

Specifies the quality of the JPEG. (0-100, where 100 means best quality (no compression))

It is recommended to set this to `90` or even `80`, since the user probably won't notice a difference between `90`/`80` and `100`.

**`Default`**

100

#### Defined in

[Snapshot.ts:9](https://github.com/mrousavy/react-native-vision-camera/blob/c2fb5bf1/src/Snapshot.ts#L9)

___

### skipMetadata

• `Optional` **skipMetadata**: `boolean`

When set to `true`, metadata reading and mapping will be skipped. ([`metadata`](PhotoFile.md#metadata) will be `null`)

This might result in a faster capture, as metadata reading and mapping requires File IO.

**`Default`**

false

**`Platform`**

Android

#### Defined in

[Snapshot.ts:27](https://github.com/mrousavy/react-native-vision-camera/blob/c2fb5bf1/src/Snapshot.ts#L27)
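Since `quality` is documented as a 0-100 value, user-supplied input (e.g. from a settings slider) should be constrained to that range before being passed along. A minimal sketch — the `clampQuality` helper is hypothetical, not part of the library:

```typescript
// Hypothetical helper (not part of the library): clamps and rounds a
// user-supplied JPEG quality into the documented 0-100 integer range.
function clampQuality(quality: number): number {
  return Math.min(100, Math.max(0, Math.round(quality)))
}

clampQuality(150)  // -> 100
clampQuality(84.6) // -> 85
```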
docs/docs/api/interfaces/TemporaryFile.md (new file, 32 lines)
@@ -0,0 +1,32 @@
---
id: "TemporaryFile"
title: "TemporaryFile"
sidebar_position: 0
custom_edit_url: null
---

Represents a temporary file in the local filesystem.

## Hierarchy

- **`TemporaryFile`**

  ↳ [`PhotoFile`](PhotoFile.md)

  ↳ [`VideoFile`](VideoFile.md)

## Properties

### path

• **path**: `string`

The path of the file.

* **Note:** If you want to consume this file (e.g. for displaying it in an `<Image>` component), you might have to add the `file://` prefix.

* **Note:** This file might get deleted once the app closes because it lives in the temp directory.

#### Defined in

[TemporaryFile.ts:12](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/TemporaryFile.ts#L12)
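Since consumers might have to add the `file://` prefix to `path`, a small normalizer avoids double-prefixing when the value is reused. A minimal sketch — the `toFileUri` helper is hypothetical, not part of the library:

```typescript
// Hypothetical helper (not part of the library): ensures a TemporaryFile
// path carries the `file://` prefix so it can be used as an <Image> source,
// without doubling the prefix if it is already present.
function toFileUri(path: string): string {
  return path.startsWith('file://') ? path : `file://${path}`
}

toFileUri('/data/tmp/photo.jpg') // -> 'file:///data/tmp/photo.jpg'
```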
docs/docs/api/interfaces/VideoFile.md (new file, 48 lines)
@@ -0,0 +1,48 @@
---
id: "VideoFile"
title: "VideoFile"
sidebar_position: 0
custom_edit_url: null
---

Represents a Video taken by the Camera written to the local filesystem.

Related: [`Camera.startRecording()`](../classes/Camera.md#startrecording), [`Camera.stopRecording()`](../classes/Camera.md#stoprecording)

## Hierarchy

- [`TemporaryFile`](TemporaryFile.md)

  ↳ **`VideoFile`**

## Properties

### duration

• **duration**: `number`

Represents the duration of the video, in seconds.

#### Defined in

[VideoFile.ts:38](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/VideoFile.ts#L38)

___

### path

• **path**: `string`

The path of the file.

* **Note:** If you want to consume this file (e.g. for displaying it in an `<Image>` component), you might have to add the `file://` prefix.

* **Note:** This file might get deleted once the app closes because it lives in the temp directory.

#### Inherited from

[TemporaryFile](TemporaryFile.md).[path](TemporaryFile.md#path)

#### Defined in

[TemporaryFile.ts:12](https://github.com/mrousavy/react-native-vision-camera/blob/c66550ed/package/src/TemporaryFile.ts#L12)
docs/docs/api/interfaces/_category_.yml (new file, 2 lines)
@@ -0,0 +1,2 @@
label: "Interfaces"
position: 4
@@ -34,7 +34,6 @@ function App() {
 The most important actions are:
 
 * [Taking Photos](#taking-photos)
-* [Taking Snapshots](#taking-snapshots)
 * [Recording Videos](#recording-videos)
 
 ## Taking Photos
@@ -57,25 +56,6 @@ You can customize capture options such as [automatic red-eye reduction](/docs/ap
 This function returns a [`PhotoFile`](/docs/api/interfaces/PhotoFile) which contains a [`path`](/docs/api/interfaces/PhotoFile#path) property you can display in your App using an `<Image>` or `<FastImage>`.
 
-### Taking Snapshots
-
-Compared to iOS, Cameras on Android tend to be slower in image capture. If you care about speed, you can use the Camera's [`takeSnapshot(...)`](/docs/api/classes/Camera#takesnapshot) function (Android only) which simply takes a snapshot of the Camera View instead of actually taking a photo through the Camera lens.
-
-```ts
-const snapshot = await camera.current.takeSnapshot({
-  quality: 85,
-  skipMetadata: true
-})
-```
-
-:::note
-While taking snapshots is faster than taking photos, the resulting image has way lower quality. You can combine both functions to create a snapshot to present to the user at first, then deliver the actual high-res photo afterwards.
-:::
-
-:::note
-The `takeSnapshot` function also works with `photo={false}`. For this reason VisionCamera will automatically fall-back to snapshot capture if you are trying to use more use-cases than the Camera natively supports. (see ["The `supportsParallelVideoProcessing` prop"](/docs/guides/devices#the-supportsparallelvideoprocessing-prop))
-:::
-
 ## Recording Videos
 
 To start a video recording you first have to enable video capture:
@@ -108,6 +88,14 @@ await camera.current.stopRecording()
 Once a recording has been stopped, the `onRecordingFinished` callback passed to the `startRecording` function will be invoked with a [`VideoFile`](/docs/api/interfaces/VideoFile) which you can then use to display in a [`<Video>`](https://github.com/react-native-video/react-native-video) component.
 
+To pause/resume the recordings, you can use `pauseRecording()` and `resumeRecording()`:
+
+```ts
+await camera.current.pauseRecording()
+...
+await camera.current.resumeRecording()
+```
+
 <br />
 
 #### 🚀 Next section: [Frame Processors](frame-processors)
@@ -46,7 +46,6 @@ The most important properties are:
 * `neutralZoom`: The zoom factor where the camera is "neutral". For any wide-angle cameras this property might be the same as `minZoom`, where as for ultra-wide-angle cameras ("fish-eye") this might be a value higher than `minZoom` (e.g. `2`). It is recommended that you always start at `neutralZoom` and let the user manually zoom out to `minZoom` on demand.
 * `maxZoom`: The maximum available zoom factor. When you pass `zoom={1}` to the Camera, the `maxZoom` factor will be applied.
 * `formats`: A list of all available formats (See [Camera Formats](formats))
-* `supportsParallelVideoProcessing`: Determines whether this camera devices supports using Video Recordings and Frame Processors at the same time. (See [`supportsParallelVideoProcessing`](#the-supportsparallelvideoprocessing-prop))
 * `supportsFocus`: Determines whether this camera device supports focusing (See [Focusing](focusing))
 
 :::note
@@ -113,27 +112,6 @@
 }
 ```
-
-### The `supportsParallelVideoProcessing` prop
-
-Camera devices provide the [`supportsParallelVideoProcessing` property](/docs/api/interfaces/CameraDevice#supportsparallelvideoprocessing) which determines whether the device supports using Video Recordings (`video={true}`) and Frame Processors (`frameProcessor={...}`) at the same time.
-
-If this property is `false`, you can either enable `video`, or add a `frameProcessor`, but not both.
-
-* On iOS this value is always `true`.
-* On newer Android devices this value is always `true`.
-* On older Android devices this value is `false` if the Camera's hardware level is `LEGACY` or `LIMITED`, `true` otherwise. (See [`INFO_SUPPORTED_HARDWARE_LEVEL`](https://developer.android.com/reference/android/hardware/camera2/CameraCharacteristics#INFO_SUPPORTED_HARDWARE_LEVEL) or [the tables at "Regular capture"](https://developer.android.com/reference/android/hardware/camera2/CameraDevice#regular-capture))
-
-#### Examples
-
-* An app that only supports **taking photos** (e.g. a vintage Polaroid Camera app) works on every Camera device because the `supportsParallelVideoProcessing` only affects _video processing_.
-* An app that supports **taking photos** and **videos** (e.g. a Camera app) works on every Camera device because only a single _video processing_ feature is used (`video`).
-* An app that only uses **Frame Processors** (e.g. the "Hotdog/Not Hotdog detector" app) (no taking photos or videos) works on every Camera device because it only uses a single _video processing_ feature (`frameProcessor`).
-* An app that uses **Frame Processors** and supports **taking photos** and **videos** (e.g. Snapchat, Instagram) only works on Camera devices where `supportsParallelVideoProcessing` is `true`. (iPhones and newer Android Phones)
-
-:::note
-Actually the limitation also affects the `photo` feature, but VisionCamera will automatically fall-back to **Snapshot capture** if you are trying to use multiple features (`photo` + `video` + `frameProcessor`) and they are not natively supported. (See ["Taking Snapshots"](/docs/guides/capturing#taking-snapshots))
-:::
 
 <br />
 
 #### 🚀 Next section: [Camera Lifecycle](lifecycle)
@@ -50,7 +50,7 @@ The `CameraError` type is a baseclass type for all other errors and provides the
* `cause.cause?`: The cause that caused this cause. (Recursive) (Optional)

:::note
See [the `CameraError.ts` file](https://github.com/mrousavy/react-native-vision-camera/blob/main/src/CameraError.ts) for a list of all possible error codes
See [the `CameraError.ts` file](https://github.com/mrousavy/react-native-vision-camera/blob/main/package/src/CameraError.ts) for a list of all possible error codes
:::

### Runtime Errors
@@ -18,34 +18,6 @@ Each camera device (see [Camera Devices](devices)) provides a number of capture

If you don't want to specify the best format for your camera device, you don't have to. The Camera _automatically chooses the best matching format_ for the current camera device. This is why the Camera's `format` property is _optional_.

If you don't want to do a lot of filtering, but still want to let the camera know what your intentions are, you can use the Camera's `preset` property.

For example, use the `'medium'` preset if you want to create a video-chat application that shouldn't excessively use mobile data:

```tsx
function App() {
  const devices = useCameraDevices()
  const device = devices.back

  if (device == null) return <LoadingView />
  return (
    <Camera
      style={StyleSheet.absoluteFill}
      device={device}
      preset="medium"
    />
  )
}
```

:::note
See the [CameraPreset.ts](https://github.com/mrousavy/react-native-vision-camera/blob/main/src/CameraPreset.ts) type for more information about presets
:::

:::warning
You cannot set `preset` and `format` at the same time; if `format` is set, `preset` must be `undefined` and vice versa!
:::

### What you need to know about cameras

To understand a bit more about camera formats, you first need to understand a few "general camera basics":
@@ -67,13 +39,6 @@ You can also manually get all camera devices and decide which device to use base
This example shows how you would pick the format with the _highest frame rate_:

```tsx
function getMaxFps(format: CameraDeviceFormat): number {
  return format.frameRateRanges.reduce((prev, curr) => {
    if (curr.maxFrameRate > prev) return curr.maxFrameRate
    else return prev
  }, 0)
}

function App() {
  const devices = useCameraDevices('wide-angle-camera')
  const device = devices.back
@@ -81,7 +46,7 @@ function App() {
  const format = useMemo(() => {
    return device?.formats.reduce((prev, curr) => {
      if (prev == null) return curr
      if (getMaxFps(curr) > getMaxFps(prev)) return curr
      if (curr.maxFps > prev.maxFps) return curr
      else return prev
    }, undefined)
  }, [device?.formats])
@@ -110,7 +75,7 @@ export const sortFormatsByResolution = (left: CameraDeviceFormat, right: CameraD
  // in this case, points aren't "normalized" (e.g. higher resolution = 1 point, lower resolution = -1 points)
  let leftPoints = left.photoHeight * left.photoWidth
  let rightPoints = right.photoHeight * right.photoWidth

  // we also care about video dimensions, not only photo.
  leftPoints += left.videoWidth * left.videoHeight
  rightPoints += right.videoWidth * right.videoHeight
@@ -155,7 +120,6 @@ Other props that depend on the `format`:
* `fps`: Specifies the frame rate to use
* `hdr`: Enables HDR photo or video capture and preview
* `lowLightBoost`: Enables a night-mode/low-light-boost for photo or video capture and preview
* `colorSpace`: Uses the specified color-space for photo or video capture and preview (iOS only, since Android only uses `YUV`)
* `videoStabilizationMode`: Specifies the video stabilization mode to use for this camera device
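Since `fps` must be supported by the chosen `format`, a small validation helper can catch mismatches early. This is a sketch, not a VisionCamera API; it assumes a `minFrameRate` field exists alongside the `maxFrameRate` used in the example above (check the `CameraDeviceFormat` type in your installed version):

```typescript
// Assumed shapes mirroring CameraDeviceFormat's frame rate ranges.
interface FrameRateRange {
  minFrameRate: number
  maxFrameRate: number
}

interface FormatLike {
  frameRateRanges: FrameRateRange[]
}

// Returns true if any of the format's frame rate ranges contains `fps`.
export function supportsFps(format: FormatLike, fps: number): boolean {
  return format.frameRateRanges.some(
    (range) => fps >= range.minFrameRate && fps <= range.maxFrameRate
  )
}
```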
@@ -48,17 +48,13 @@ Frame processors are by far not limited to Hotdog detection, other examples incl
* Creating scanners for **QR codes**, **Barcodes** or even custom codes such as **Snapchat's SnapCodes** or **Apple's AppClips**
* Creating **snapchat-like filters**, e.g. draw a dog-mask filter over the user's face
* Creating **color filters** with depth-detection
* Drawing boxes, text, overlays, or colors on the screen in realtime
* Rendering filters and shaders such as Blur, inverted colors, beauty filter, or more on the screen

Because they are written in JS, Frame Processors are **simple**, **powerful**, **extensible** and **easy to create** while still running at **native performance**. (Frame Processors can run up to **1000 times a second**!) Also, you can use **fast-refresh** to quickly see changes while developing or publish [over-the-air updates](https://github.com/microsoft/react-native-code-push) to tweak the Hotdog detector's sensitivity in live apps without pushing a native update.

:::note
Frame Processors require [**react-native-reanimated**](https://github.com/software-mansion/react-native-reanimated) 2.2.0 or higher. Also make sure to add

```js
import 'react-native-reanimated'
```

to the top of the file when using `useFrameProcessor`.
Frame Processors require [**react-native-worklets-core**](https://github.com/margelo/react-native-worklets-core) 1.0.0 or higher.
:::

### Interacting with Frame Processors
@@ -80,7 +76,7 @@ You can also easily read from, and assign to [**Shared Values**](https://docs.sw

In this example, we detect a cat in the frame - if a cat was found, we assign the `catBounds` Shared Value to the coordinates of the cat (relative to the screen) which we can then use in a `useAnimatedStyle` hook to position the red rectangle surrounding the cat. This updates in realtime on the UI Thread, and can also be smoothly animated with `withSpring` or `withTiming`.

```tsx {6}
```tsx {7}
// represents position of the cat on the screen 🐈
const catBounds = useSharedValue({ top: 0, left: 0, right: 0, bottom: 0 })
@@ -108,22 +104,52 @@ return (
)
```

And you can also call back to the React-JS thread by using [`runOnJS`](https://docs.swmansion.com/react-native-reanimated/docs/api/miscellaneous/runOnJS/):
And you can also call back to the React-JS thread by using `createRunInJsFn(...)`:

```tsx {9}
const onQRCodeDetected = useCallback((qrCode: string) => {
```tsx {1}
const onQRCodeDetected = Worklets.createRunInJsFn((qrCode: string) => {
  navigation.push("ProductPage", { productId: qrCode })
}, [])
})

const frameProcessor = useFrameProcessor((frame) => {
  'worklet'
  const qrCodes = scanQRCodes(frame)
  if (qrCodes.length > 0) {
    runOnJS(onQRCodeDetected)(qrCodes[0])
    onQRCodeDetected(qrCodes[0])
  }
}, [onQRCodeDetected])
```
### Running asynchronously

Since Frame Processors run synchronously with the Camera Pipeline, anything taking longer than one Frame interval might block the Camera from streaming new Frames. To avoid this, you can use `runAsync` to run code asynchronously on a different Thread:

```ts
const frameProcessor = useFrameProcessor((frame) => {
  'worklet'
  console.log("I'm running synchronously at 60 FPS!")
  runAsync(() => {
    'worklet'
    console.log("I'm running asynchronously, possibly at a lower FPS rate!")
  })
}, [])
```
### Running at a throttled FPS rate

Some Frame Processor Plugins don't need to run on every Frame, for example a Frame Processor that detects the brightness in a Frame only needs to run twice per second:

```ts
const frameProcessor = useFrameProcessor((frame) => {
  'worklet'
  console.log("I'm running synchronously at 60 FPS!")
  runAtTargetFps(2, () => {
    'worklet'
    console.log("I'm running synchronously at 2 FPS!")
  })
}, [])
```
### Using Frame Processor Plugins

Frame Processor Plugins are distributed through npm. To install the [**vision-camera-image-labeler**](https://github.com/mrousavy/vision-camera-image-labeler) plugin, run:
@@ -133,23 +159,6 @@ npm i vision-camera-image-labeler
cd ios && pod install
```

Then add it to your `babel.config.js`. For the Image Labeler, this will be `__labelImage`:

```js {6}
module.exports = {
  plugins: [
    [
      'react-native-reanimated/plugin',
      {
        globals: ['__labelImage'],
      },
    ],
```

:::note
You have to restart metro-bundler for changes in the `babel.config.js` file to take effect.
:::

That's it! 🎉 Now you can use it:

```ts
@@ -182,7 +191,7 @@ This means that **the Frame Processor API only takes ~1ms longer than a fully na

### Avoiding Frame-drops

Frame Processors will be **synchronously** called for each frame the Camera sees and have to finish executing before the next frame arrives, otherwise the next frame(s) will be dropped. For a frame rate of **30 FPS**, you have about **33ms** to finish processing frames. Use [`frameProcessorFps`](/docs/api/interfaces/CameraProps#frameprocessorfps) to throttle the frame processor's FPS. For a QR Code Scanner, **5 FPS** (200ms) might suffice, while an object tracking AI might run at the same frame rate as the Camera itself (e.g. **60 FPS** (16ms)).
Frame Processors will be **synchronously** called for each frame the Camera sees and have to finish executing before the next frame arrives, otherwise the next frame(s) will be dropped. For a frame rate of **30 FPS**, you have about **33ms** to finish processing frames. For a QR Code Scanner, **5 FPS** (200ms) might suffice, while an object tracking AI might run at the same frame rate as the Camera itself (e.g. **60 FPS** (16ms)).

### ESLint react-hooks plugin
@@ -192,15 +201,13 @@ If you are using the [react-hooks ESLint plugin](https://www.npmjs.com/package/e

#### Frame Processors

**Frame Processors** are JS functions that will be **workletized** using [react-native-reanimated](https://github.com/software-mansion/react-native-reanimated). They are created on a **parallel camera thread** using a separate JavaScript Runtime (_"VisionCamera JS-Runtime"_) and are **invoked synchronously** (using JSI) without ever going over the bridge. In a **Frame Processor** you can write normal JS code, call back to the React-JS Thread (e.g. `setState`), use [Shared Values](https://docs.swmansion.com/react-native-reanimated/docs/fundamentals/shared-values/) and call **Frame Processor Plugins**.

> See [**the example Frame Processor**](https://github.com/mrousavy/react-native-vision-camera/blob/cf68a4c6476d085ec48fc424a53a96962e0c33f9/example/src/CameraPage.tsx#L199-L203)
**Frame Processors** are JS functions that will be **workletized** using [react-native-worklets-core](https://github.com/margelo/react-native-worklets-core). They are created on a **parallel camera thread** using a separate JavaScript Runtime (_"VisionCamera JS-Runtime"_) and are **invoked synchronously** (using JSI) without ever going over the bridge. In a **Frame Processor** you can write normal JS code, call back to the React-JS Thread (e.g. `setState`), use [Shared Values](https://docs.swmansion.com/react-native-reanimated/docs/fundamentals/shared-values/) and call **Frame Processor Plugins**.

#### Frame Processor Plugins

**Frame Processor Plugins** are native functions (written in Objective-C, Swift, C++, Java or Kotlin) that are injected into the VisionCamera JS-Runtime. They can be **synchronously called** from your JS Frame Processors (using JSI) without ever going over the bridge. Because VisionCamera provides an easy-to-use plugin API, you can easily create a **Frame Processor Plugin** yourself. Some examples include [Barcode Scanning](https://developers.google.com/ml-kit/vision/barcode-scanning), [Face Detection](https://developers.google.com/ml-kit/vision/face-detection), [Image Labeling](https://developers.google.com/ml-kit/vision/image-labeling), [Text Recognition](https://developers.google.com/ml-kit/vision/text-recognition) and more.

> Learn how to [**create Frame Processor Plugins**](frame-processors-plugins-overview), or check out the [**example Frame Processor Plugin for iOS**](https://github.com/mrousavy/react-native-vision-camera/blob/main/example/ios/Frame%20Processor%20Plugins/Example%20Plugin%20(Swift)/ExamplePluginSwift.swift) or [**Android**](https://github.com/mrousavy/react-native-vision-camera/blob/main/example/android/app/src/main/java/com/mrousavy/camera/example/ExampleFrameProcessorPlugin.java).
> Learn how to [**create Frame Processor Plugins**](frame-processors-plugins-overview), or check out the [**example Frame Processor Plugin for iOS**](https://github.com/mrousavy/react-native-vision-camera/blob/main/package/example/ios/Frame%20Processor%20Plugins/Example%20Plugin%20(Swift)/ExamplePluginSwift.swift) or [**Android**](https://github.com/mrousavy/react-native-vision-camera/blob/main/package/example/android/app/src/main/java/com/mrousavy/camera/example/ExampleFrameProcessorPlugin.java).

#### The `Frame` object
@@ -225,26 +232,20 @@ The Frame Processor API spawns a secondary JavaScript Runtime which consumes a s

Inside your `gradle.properties` file, add the `disableFrameProcessors` flag:

```
disableFrameProcessors=true
```groovy
VisionCamera_disableFrameProcessors=true
```

Then, clean and rebuild your project.

#### iOS

Inside your `project.pbxproj`, find the `GCC_PREPROCESSOR_DEFINITIONS` group and add the flag:
Inside your `Podfile`, add the `VCDisableFrameProcessors` flag:

```txt {3}
GCC_PREPROCESSOR_DEFINITIONS = (
  "DEBUG=1",
  "VISION_CAMERA_DISABLE_FRAME_PROCESSORS=1",
  "$(inherited)",
);
```ruby
$VCDisableFrameProcessors = true
```

Make sure to add this to your Debug-, as well as your Release-configuration.

</TabItem>

<TabItem value="expo">
@@ -12,14 +12,14 @@ import TabItem from '@theme/TabItem';

Frame Processor Plugins are **native functions** which can be directly called from a JS Frame Processor. (See ["Frame Processors"](frame-processors))

They **receive a frame from the Camera** as an input and can return any kind of output. For example, a `scanQRCodes` function returns an array of detected QR code strings in the frame:
They **receive a frame from the Camera** as an input and can return any kind of output. For example, a `detectFaces` function returns an array of detected faces in the frame:

```tsx {4-5}
function App() {
  const frameProcessor = useFrameProcessor((frame) => {
    'worklet'
    const qrCodes = scanQRCodes(frame)
    console.log(`QR Codes in Frame: ${qrCodes}`)
    const faces = detectFaces(frame)
    console.log(`Faces in Frame: ${faces}`)
  }, [])

  return (
@@ -28,7 +28,7 @@ function App() {
}
```

To achieve **maximum performance**, the `scanQRCodes` function is written in a native language (e.g. Objective-C), but it will be directly called from the VisionCamera Frame Processor JavaScript-Runtime.
To achieve **maximum performance**, the `detectFaces` function is written in a native language (e.g. Objective-C), but it will be directly called from the VisionCamera Frame Processor JavaScript-Runtime.

### Types
@@ -39,11 +39,11 @@ Similar to a TurboModule, the Frame Processor Plugin Registry API automatically
| `number` | `NSNumber*` (double) | `Double` |
| `boolean` | `NSNumber*` (boolean) | `Boolean` |
| `string` | `NSString*` | `String` |
| `[]` | `NSArray*` | `ReadableNativeArray` |
| `{}` | `NSDictionary*` | `ReadableNativeMap` |
| `[]` | `NSArray*` | `List<Object>` |
| `{}` | `NSDictionary*` | `Map<String, Object>` |
| `undefined` / `null` | `nil` | `null` |
| `(any, any) => void` | [`RCTResponseSenderBlock`][4] | `(Object, Object) -> void` |
| [`Frame`][1] | [`Frame*`][2] | [`ImageProxy`][3] |
| [`Frame`][1] | [`Frame*`][2] | [`Frame`][3] |
### Return values

@@ -51,7 +51,7 @@ Return values will automatically be converted to JS values, assuming they are re

```java
@Override
public Object callback(ImageProxy image, Object[] params) {
public Object callback(Frame frame, Object[] params) {
  return "cat";
}
```
@@ -61,22 +61,22 @@ Returns a `string` in JS:

```js
export function detectObject(frame: Frame): string {
  'worklet'
  const result = __detectObject(frame)
  const result = FrameProcessorPlugins.detectObject(frame)
  console.log(result) // <-- "cat"
}
```

You can also manipulate the buffer and return it (or a copy of it) by returning a [`Frame`][2]/[`ImageProxy`][3] instance:
You can also manipulate the buffer and return it (or a copy of it) by returning a [`Frame`][2]/[`Frame`][3] instance:

```java
@Override
public Object callback(ImageProxy image, Object[] params) {
  ImageProxy resizedImage = new ImageProxy(/* ... */);
  return resizedImage;
public Object callback(Frame frame, Object[] params) {
  Frame resizedFrame = new Frame(/* ... */);
  return resizedFrame;
}
```

Which returns a [`Frame`](https://github.com/mrousavy/react-native-vision-camera/blob/main/src/Frame.ts) in JS:
Which returns a [`Frame`](https://github.com/mrousavy/react-native-vision-camera/blob/main/package/src/Frame.ts) in JS:

```js
const frameProcessor = useFrameProcessor((frame) => {
@@ -97,16 +97,7 @@ Frame Processors can also accept parameters, following the same type convention

```ts
const frameProcessor = useFrameProcessor((frame) => {
  'worklet'
  const codes = scanCodes(frame, ['qr', 'barcode'])
}, [])
```

Or with multiple ("variadic") parameters:

```ts
const frameProcessor = useFrameProcessor((frame) => {
  'worklet'
  const codes = scanCodes(frame, true, 'hello-world', 42)
  const codes = scanCodes(frame, { codes: ['qr', 'barcode'] })
}, [])
```
@@ -116,7 +107,7 @@ To let the user know that something went wrong you can use Exceptions:

```java
@Override
public Object callback(ImageProxy image, Object[] params) {
public Object callback(Frame frame, Object[] params) {
  if (params[0] instanceof String) {
    // ...
  } else {
@@ -140,7 +131,7 @@ const frameProcessor = useFrameProcessor((frame) => {

## What's possible?

You can run any native code you want in a Frame Processor Plugin. Just like in the native iOS and Android Camera APIs, you will receive a frame (`CMSampleBuffer` on iOS, `ImageProxy` on Android) which you can use however you want. In other words; **everything is possible**.
You can run any native code you want in a Frame Processor Plugin. Just like in the native iOS and Android Camera APIs, you will receive a frame ([`CMSampleBuffer`][5] on iOS, [`ImageProxy`][6] on Android) which you can use however you want. In other words; **everything is possible**.

## Implementations
@@ -152,13 +143,13 @@ For example, a realtime video chat application might use WebRTC to send the fram

```java
@Override
public Object callback(ImageProxy image, Object[] params) {
public Object callback(Frame frame, Object[] params) {
  String serverURL = (String)params[0];
  ImageProxy imageCopy = new ImageProxy(/* ... */);
  Frame frameCopy = new Frame(/* ... */);

  uploaderQueue.runAsync(() -> {
    WebRTC.uploadImage(imageCopy, serverURL);
    imageCopy.close();
    WebRTC.uploadImage(frameCopy, serverURL);
    frameCopy.close();
  });

  return null;
@@ -195,20 +186,15 @@ This way you can handle queueing up the frames yourself and asynchronously call

### Benchmarking Frame Processor Plugins

Your Frame Processor Plugins have to be fast. VisionCamera automatically detects slow Frame Processors and outputs relevant information in the native console (Xcode: **Debug Area**, Android Studio: **Logcat**):

<div align="center">
  <img src={useBaseUrl("img/slow-log.png")} width="80%" />
</div>
<div align="center">
  <img src={useBaseUrl("img/slow-log-2.png")} width="80%" />
</div>
Your Frame Processor Plugins have to be fast. Use the FPS Graph (`enableFpsGraph`) to see how fast your Camera is running; if it is not running at the target FPS, your Frame Processor is too slow.
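The underlying budget check is simple to reason about. As a sketch (not a VisionCamera API): at a target FPS, each Frame Processor invocation has `1000 / targetFps` milliseconds before it starts dropping frames:

```typescript
// Time budget per frame at a given target FPS, in milliseconds.
export function frameBudgetMs(targetFps: number): number {
  return 1000 / targetFps
}

// Given measured Frame Processor durations (ms), decide whether the average
// duration exceeds the per-frame budget, i.e. whether frames would be dropped.
export function isTooSlow(durationsMs: number[], targetFps: number): boolean {
  if (durationsMs.length === 0) return false
  const avg = durationsMs.reduce((sum, d) => sum + d, 0) / durationsMs.length
  return avg > frameBudgetMs(targetFps)
}
```

For example, at 30 FPS the budget is roughly 33ms per frame, matching the numbers in the "Avoiding Frame-drops" section above.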
<br />

#### 🚀 Create your first Frame Processor Plugin for [iOS](frame-processors-plugins-ios) or [Android](frame-processors-plugins-android)!

[1]: https://github.com/mrousavy/react-native-vision-camera/blob/main/src/Frame.ts
[2]: https://github.com/mrousavy/react-native-vision-camera/blob/main/ios/Frame%20Processor/Frame.h
[3]: https://developer.android.com/reference/androidx/camera/core/ImageProxy
[1]: https://github.com/mrousavy/react-native-vision-camera/blob/main/package/src/Frame.ts
[2]: https://github.com/mrousavy/react-native-vision-camera/blob/main/package/ios/Frame%20Processor/Frame.h
[3]: https://github.com/mrousavy/react-native-vision-camera/blob/main/package/android/src/main/java/com/mrousavy/camera/frameprocessor/Frame.java
[4]: https://github.com/facebook/react-native/blob/9a43eac7a32a6ba3164a048960101022a92fcd5a/React/Base/RCTBridgeModule.h#L20-L24
[5]: https://developer.apple.com/documentation/coremedia/cmsamplebuffer
[6]: https://developer.android.com/reference/androidx/camera/core/ImageProxy
docs/docs/guides/FRAME_PROCESSORS_SKIA.mdx (new file, 120 lines)
@@ -0,0 +1,120 @@
---
id: frame-processors-skia
title: Skia Frame Processors
sidebar_label: Skia Frame Processors
---

import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
import useBaseUrl from '@docusaurus/useBaseUrl';

<div>
  <svg xmlns="http://www.w3.org/2000/svg" width="283" height="535" style={{ float: 'right' }}>
    <image href={useBaseUrl("img/frame-processors.gif")} x="18" y="33" width="247" height="469" />
    <image href={useBaseUrl("img/frame.png")} width="283" height="535" />
  </svg>
</div>
### What are Skia Frame Processors?

Skia Frame Processors are [Frame Processors](frame-processors) that allow you to draw onto the Frame using [react-native-skia](https://github.com/Shopify/react-native-skia).

Skia Frame Processors were introduced in VisionCamera V3 RC.0, but were removed again after VisionCamera V3 RC.9 due to the significantly increased complexity of the video pipeline in the codebase.

```
yarn add react-native-vision-camera@rc.9
```

They worked perfectly fine for those RCs with some minor inconsistencies (landscape orientation didn't work on Android), which proves the concept. If you want to learn more about Skia Frame Processors, we at [Margelo](https://margelo.io) can build a custom solution for your company to implement drawable Frame Processors (e.g. filters, blurring, masks, colors, etc.). See [PR #1740](https://github.com/mrousavy/react-native-vision-camera/pull/1740) for more details.

### Documentation
For example, you might want to draw a rectangle around a user's face **without writing any native code**, while still **achieving native performance**:

```jsx
function App() {
  const frameProcessor = useSkiaFrameProcessor((frame) => {
    'worklet'
    const faces = detectFaces(frame)
    faces.forEach((face) => {
      frame.drawRect(face.rectangle, redPaint)
    })
  }, [])

  return (
    <Camera
      {...cameraProps}
      frameProcessor={frameProcessor}
    />
  )
}
```
With Skia, you can also implement realtime filters, blurring, shaders, and much more. For example, this is how you invert the colors in a Frame:

```jsx
const INVERTED_COLORS_SHADER = `
uniform shader image;

half4 main(vec2 pos) {
  vec4 color = image.eval(pos);
  return vec4(1.0 - color.rgb, 1.0);
}
`;

function App() {
  const imageFilter = Skia.ImageFilter.MakeRuntimeShader(/* INVERTED_COLORS_SHADER */)
  const paint = Skia.Paint()
  paint.setImageFilter(imageFilter)

  const frameProcessor = useSkiaFrameProcessor((frame) => {
    'worklet'
    frame.render(paint)
  }, [])

  return (
    <Camera
      {...cameraProps}
      frameProcessor={frameProcessor}
    />
  )
}
```
### Rendered outputs

The results of the Skia Frame Processor are rendered to an offscreen context and will be displayed in the Camera Preview, recorded to a video file (`startRecording()`) and captured in a photo (`takePhoto()`). In other words, you draw into the Frame, not just on top of it.

### Performance

VisionCamera sets up an additional Skia rendering context which requires a few resources.

On iOS, Metal is used for GPU Acceleration. On Android, OpenGL is used for GPU Acceleration.
C++/JSI is used for highly efficient communication between JS and Skia.
### Disabling Skia Frame Processors

Skia Frame Processors ship with additional C++ files which might slightly increase the app's build time. If you're not using Skia Frame Processors at all, you can disable them:

#### Android

Inside your `gradle.properties` file, add the `disableSkia` flag:

```groovy
VisionCamera_disableSkia=true
```

Then, clean and rebuild your project.

#### iOS

Inside your `Podfile`, add the `VCDisableSkia` flag:

```ruby
$VCDisableSkia = true
```

<br />

#### 🚀 Next section: [Zooming](/docs/guides/zooming) (or [creating a Frame Processor Plugin](/docs/guides/frame-processors-plugins-overview))
@@ -9,36 +9,19 @@ sidebar_label: Finish creating your Frame Processor Plugin

To make the Frame Processor Plugin available to the Frame Processor Worklet Runtime, create the following wrapper function in JS/TS:

```ts
import type { Frame } from 'react-native-vision-camera'
import { VisionCameraProxy, Frame } from 'react-native-vision-camera'

const plugin = VisionCameraProxy.getFrameProcessorPlugin('scanFaces')

/**
 * Scans QR codes.
 * Scans faces.
 */
export function scanQRCodes(frame: Frame): string[] {
export function scanFaces(frame: Frame): object {
  'worklet'
  return __scanQRCodes(frame)
  return plugin.call(frame)
}
```
Users will then have to add the Frame Processor Plugin's name to their `babel.config.js`.

For the QR Code Scanner, this will be `__scanQRCodes`:

```js {6}
module.exports = {
  plugins: [
    [
      'react-native-reanimated/plugin',
      {
        globals: ['__scanQRCodes'],
      },
    ],
```

:::note
You have to restart metro-bundler for changes in the `babel.config.js` file to take effect.
:::

## Test it!

Simply call the wrapper Worklet in your Frame Processor:
@@ -47,8 +30,8 @@ Simply call the wrapper Worklet in your Frame Processor:
function App() {
  const frameProcessor = useFrameProcessor((frame) => {
    'worklet'
    const qrCodes = scanQRCodes(frame)
    console.log(`QR Codes in Frame: ${qrCodes}`)
    const faces = scanFaces(frame)
    console.log(`Faces in Frame: ${faces}`)
  }, [])

  return (
@@ -64,11 +47,10 @@ If you want to distribute your Frame Processor Plugin, simply use npm.

1. Create a blank Native Module using [bob](https://github.com/callstack/react-native-builder-bob) or [create-react-native-module](https://github.com/brodybits/create-react-native-module)
2. Name it `vision-camera-plugin-xxxxx` where `xxxxx` is the name of your plugin
3. Remove the generated template code from the Example Native Module
4. Add VisionCamera to `peerDependencies`: `"react-native-vision-camera": ">= 2"`
4. Add VisionCamera to `peerDependencies`: `"react-native-vision-camera": ">= 3"`
5. Implement the Frame Processor Plugin in the iOS, Android and JS/TS Codebase using the guides above
6. Add installation instructions to the `README.md` to let users know they have to add your frame processor in the `babel.config.js` configuration.
7. Publish the plugin to npm. Users will only have to install the plugin using `npm i vision-camera-plugin-xxxxx` and add it to their `babel.config.js` file.
8. [Add the plugin to the **official VisionCamera plugin list**](https://github.com/mrousavy/react-native-vision-camera/edit/main/docs/docs/guides/FRAME_PROCESSOR_PLUGIN_LIST.mdx) for more visibility
6. Publish the plugin to npm. Users will only have to install the plugin using `npm i vision-camera-plugin-xxxxx` and add it to their `babel.config.js` file.
7. [Add the plugin to the **official VisionCamera plugin list**](https://github.com/mrousavy/react-native-vision-camera/edit/main/docs/docs/guides/FRAME_PROCESSOR_PLUGIN_LIST.mdx) for more visibility

<br />
@@ -10,7 +10,7 @@ import TabItem from '@theme/TabItem';
|
||||
## Creating a Frame Processor Plugin for Android
|
||||
|
||||
The Frame Processor Plugin API is built to be as extensible as possible, which allows you to create custom Frame Processor Plugins.
|
||||
In this guide we will create a custom QR Code Scanner Plugin which can be used from JS.
|
||||
In this guide we will create a custom Face Detector Plugin which can be used from JS.
|
||||
|
||||
Android Frame Processor Plugins can be written in either **Java**, **Kotlin** or **C++ (JNI)**.
|
||||
|
||||
@@ -23,7 +23,7 @@ npx vision-camera-plugin-builder android
|
||||
```
|
||||
|
||||
:::info
|
||||
The CLI will ask you for the path to project's Android Manifest file, name of the plugin (e.g. `QRCodeFrameProcessor`), name of the exposed method (e.g. `scanQRCodes`) and language you want to use for plugin development (Java or Kotlin).
|
||||
The CLI will ask you for the path to your project's Android Manifest file, the name of the plugin (e.g. `FaceDetectorFrameProcessorPlugin`), the name of the exposed method (e.g. `detectFaces`) and the language you want to use for plugin development (Java or Kotlin).
|
||||
For reference see the [CLI's docs](https://github.com/mateusz1913/vision-camera-plugin-builder#%EF%B8%8F-options).
|
||||
:::
|
||||
|
||||
@@ -35,7 +35,7 @@ For reference see the [CLI's docs](https://github.com/mateusz1913/vision-camera-
|
||||
@SuppressWarnings("UnnecessaryLocalVariable")
|
||||
List<ReactPackage> packages = new PackageList(this).getPackages();
|
||||
...
|
||||
packages.add(new QRCodeFrameProcessorPluginPackage()); // <- add
|
||||
packages.add(new FaceDetectorFrameProcessorPluginPackage()); // <- add
|
||||
return packages;
|
||||
}
|
||||
```
|
||||
@@ -51,33 +51,29 @@ For reference see the [CLI's docs](https://github.com/mateusz1913/vision-camera-
|
||||
<TabItem value="java">
|
||||
|
||||
1. Open your Project in Android Studio
|
||||
2. Create a Java source file, for the QR Code Plugin this will be called `QRCodeFrameProcessorPlugin.java`.
|
||||
2. Create a Java source file, for the Face Detector Plugin this will be called `FaceDetectorFrameProcessorPlugin.java`.
|
||||
3. Add the following code:
|
||||
|
||||
```java {8}
|
||||
import androidx.camera.core.ImageProxy;
|
||||
import com.mrousavy.camera.frameprocessor.Frame;
|
||||
import com.mrousavy.camera.frameprocessor.FrameProcessorPlugin;
import java.util.Map;
|
||||
|
||||
public class QRCodeFrameProcessorPlugin extends FrameProcessorPlugin {
|
||||
public class FaceDetectorFrameProcessorPlugin extends FrameProcessorPlugin {
|
||||
|
||||
@Override
|
||||
public Object callback(ImageProxy image, Object[] params) {
|
||||
public Object callback(Frame frame, Map<String, Object> arguments) {
|
||||
// code goes here
|
||||
return null;
|
||||
}
|
||||
|
||||
QRCodeFrameProcessorPlugin() {
|
||||
super("scanQRCodes");
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
:::note
|
||||
The JS function name will be equal to the name you pass to the `super(...)` call (with a `__` prefix). Make sure it is unique across other Frame Processor Plugins.
|
||||
The Frame Processor Plugin will be exposed to JS through the `VisionCameraProxy` object. In this case, it would be `VisionCameraProxy.getFrameProcessorPlugin("detectFaces")`.
|
||||
:::
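On the JS side, the returned plugin object is typically wrapped in a small helper function. The sketch below models only the lookup-and-call pattern — the `VisionCameraProxy`, `Frame` and plugin registry here are hypothetical in-memory stand-ins so the snippet runs standalone (the real objects are provided natively by VisionCamera):

```typescript
// Hypothetical stand-ins for VisionCamera's native objects, so this sketch runs standalone.
type Frame = { width: number; height: number };
type FrameProcessorPlugin = { call(frame: Frame, args?: Record<string, unknown>): unknown };

// Simulated native plugin registry (the real one lives in native code).
const registry = new Map<string, FrameProcessorPlugin>();
registry.set('detectFaces', {
  call: (frame) => [{ bounds: { x: 0, y: 0, width: frame.width, height: frame.height } }],
});

const VisionCameraProxy = {
  getFrameProcessorPlugin: (name: string): FrameProcessorPlugin | undefined => registry.get(name),
};

// The wrapper pattern: look the plugin up once, fail loudly if it's missing.
const plugin = VisionCameraProxy.getFrameProcessorPlugin('detectFaces');

function detectFaces(frame: Frame): unknown {
  if (plugin == null) throw new Error('Failed to load Frame Processor Plugin "detectFaces"!');
  return plugin.call(frame);
}

console.log(detectFaces({ width: 1920, height: 1080 }));
```

Looking the plugin up once (rather than on every frame) keeps the per-frame cost of the wrapper minimal.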
|
||||
|
||||
4. **Implement your Frame Processing.** See the [Example Plugin (Java)](https://github.com/mrousavy/react-native-vision-camera/blob/main/example/android/app/src/main/java/com/mrousavy/camera/example/ExampleFrameProcessorPlugin.java) for reference.
|
||||
5. Create a new Java file which registers the Frame Processor Plugin in a React Package, for the QR Code Scanner plugin this file will be called `QRCodeFrameProcessorPluginPackage.java`:
|
||||
4. **Implement your Frame Processing.** See the [Example Plugin (Java)](https://github.com/mrousavy/react-native-vision-camera/blob/main/package/example/android/app/src/main/java/com/mrousavy/camera/example/ExampleFrameProcessorPlugin.java) for reference.
|
||||
5. Create a new Java file which registers the Frame Processor Plugin in a React Package, for the Face Detector plugin this file will be called `FaceDetectorFrameProcessorPluginPackage.java`:
|
||||
|
||||
```java {12}
|
||||
import com.facebook.react.ReactPackage;
|
||||
@@ -85,13 +81,14 @@ import com.facebook.react.bridge.NativeModule;
|
||||
import com.facebook.react.bridge.ReactApplicationContext;
|
||||
import com.facebook.react.uimanager.ViewManager;
|
||||
import com.mrousavy.camera.frameprocessor.FrameProcessorPlugin;
|
||||
import com.mrousavy.camera.frameprocessor.FrameProcessorPluginRegistry;
|
||||
import androidx.annotation.NonNull;
|
||||
|
||||
public class QRCodeFrameProcessorPluginPackage implements ReactPackage {
|
||||
public class FaceDetectorFrameProcessorPluginPackage implements ReactPackage {
|
||||
@NonNull
|
||||
@Override
|
||||
public List<NativeModule> createNativeModules(@NonNull ReactApplicationContext reactContext) {
|
||||
FrameProcessorPlugin.register(new QRCodeFrameProcessorPlugin());
|
||||
FrameProcessorPluginRegistry.addFrameProcessorPlugin("detectFaces", options -> new FaceDetectorFrameProcessorPlugin());
|
||||
return Collections.emptyList();
|
||||
}
|
||||
|
||||
@@ -111,7 +108,7 @@ public class QRCodeFrameProcessorPluginPackage implements ReactPackage {
|
||||
@SuppressWarnings("UnnecessaryLocalVariable")
|
||||
List<ReactPackage> packages = new PackageList(this).getPackages();
|
||||
...
|
||||
packages.add(new QRCodeFrameProcessorPluginPackage()); // <- add
|
||||
packages.add(new FaceDetectorFrameProcessorPluginPackage()); // <- add
|
||||
return packages;
|
||||
}
|
||||
```
|
||||
@@ -120,16 +117,16 @@ public class QRCodeFrameProcessorPluginPackage implements ReactPackage {
|
||||
<TabItem value="kotlin">
|
||||
|
||||
1. Open your Project in Android Studio
|
||||
2. Create a Kotlin source file, for the QR Code Plugin this will be called `QRCodeFrameProcessorPlugin.kt`.
|
||||
2. Create a Kotlin source file, for the Face Detector Plugin this will be called `FaceDetectorFrameProcessorPlugin.kt`.
|
||||
3. Add the following code:
|
||||
|
||||
```kotlin {7}
|
||||
import androidx.camera.core.ImageProxy
|
||||
import com.mrousavy.camera.frameprocessor.Frame
|
||||
import com.mrousavy.camera.frameprocessor.FrameProcessorPlugin
|
||||
|
||||
class ExampleFrameProcessorPluginKotlin: FrameProcessorPlugin("scanQRCodes") {
|
||||
class FaceDetectorFrameProcessorPlugin: FrameProcessorPlugin() {
|
||||
|
||||
override fun callback(image: ImageProxy, params: Array<Any>): Any? {
|
||||
override fun callback(frame: Frame, arguments: Map<String, Any>): Any? {
|
||||
// code goes here
|
||||
return null
|
||||
}
|
||||
@@ -137,11 +134,11 @@ class ExampleFrameProcessorPluginKotlin: FrameProcessorPlugin("scanQRCodes") {
|
||||
```
|
||||
|
||||
:::note
|
||||
The JS function name will be equal to the name you pass to the `FrameProcessorPlugin(...)` call (with a `__` prefix). Make sure it is unique across other Frame Processor Plugins.
|
||||
The Frame Processor Plugin will be exposed to JS through the `VisionCameraProxy` object. In this case, it would be `VisionCameraProxy.getFrameProcessorPlugin("detectFaces")`.
|
||||
:::
|
||||
|
||||
4. **Implement your Frame Processing.** See the [Example Plugin (Java)](https://github.com/mrousavy/react-native-vision-camera/blob/main/example/android/app/src/main/java/com/mrousavy/camera/example/ExampleFrameProcessorPlugin.java) for reference.
|
||||
5. Create a new Kotlin file which registers the Frame Processor Plugin in a React Package, for the QR Code Scanner plugin this file will be called `QRCodeFrameProcessorPluginPackage.kt`:
|
||||
4. **Implement your Frame Processing.** See the [Example Plugin (Java)](https://github.com/mrousavy/react-native-vision-camera/blob/main/package/example/android/app/src/main/java/com/mrousavy/camera/example/ExampleFrameProcessorPlugin.java) for reference.
|
||||
5. Create a new Kotlin file which registers the Frame Processor Plugin in a React Package, for the Face Detector plugin this file will be called `FaceDetectorFrameProcessorPluginPackage.kt`:
|
||||
|
||||
```kotlin {9}
|
||||
import com.facebook.react.ReactPackage
|
||||
@@ -150,9 +147,11 @@ import com.facebook.react.bridge.ReactApplicationContext
|
||||
import com.facebook.react.uimanager.ViewManager
|
||||
import com.mrousavy.camera.frameprocessor.FrameProcessorPlugin
import com.mrousavy.camera.frameprocessor.FrameProcessorPluginRegistry
|
||||
|
||||
class QRCodeFrameProcessorPluginPackage : ReactPackage {
|
||||
class FaceDetectorFrameProcessorPluginPackage : ReactPackage {
|
||||
override fun createNativeModules(reactContext: ReactApplicationContext): List<NativeModule> {
|
||||
FrameProcessorPlugin.register(ExampleFrameProcessorPluginKotlin())
|
||||
FrameProcessorPluginRegistry.addFrameProcessorPlugin("detectFaces") { options ->
|
||||
FaceDetectorFrameProcessorPlugin()
|
||||
}
|
||||
return emptyList()
|
||||
}
|
||||
|
||||
@@ -170,7 +169,7 @@ class QRCodeFrameProcessorPluginPackage : ReactPackage {
|
||||
@SuppressWarnings("UnnecessaryLocalVariable")
|
||||
List<ReactPackage> packages = new PackageList(this).getPackages();
|
||||
...
|
||||
packages.add(new QRCodeFrameProcessorPluginPackage()); // <- add
|
||||
packages.add(new FaceDetectorFrameProcessorPluginPackage()); // <- add
|
||||
return packages;
|
||||
}
|
||||
```
|
||||
|
@@ -10,20 +10,20 @@ import TabItem from '@theme/TabItem';
|
||||
## Creating a Frame Processor Plugin for iOS
|
||||
|
||||
The Frame Processor Plugin API is built to be as extensible as possible, which allows you to create custom Frame Processor Plugins.
|
||||
In this guide we will create a custom QR Code Scanner Plugin which can be used from JS.
|
||||
In this guide we will create a custom Face Detector Plugin which can be used from JS.
|
||||
|
||||
iOS Frame Processor Plugins can be written in either **Objective-C** or **Swift**.
|
||||
|
||||
### Automatic setup
|
||||
|
||||
Run [Vision Camera Plugin Builder CLI](https://github.com/mateusz1913/vision-camera-plugin-builder),
|
||||
Run the [Vision Camera Plugin Builder CLI](https://github.com/mateusz1913/vision-camera-plugin-builder):
|
||||
|
||||
```sh
|
||||
npx vision-camera-plugin-builder ios
|
||||
```
|
||||
|
||||
:::info
|
||||
The CLI will ask you for the path to project's .xcodeproj file, name of the plugin (e.g. `QRCodeFrameProcessor`), name of the exposed method (e.g. `scanQRCodes`) and language you want to use for plugin development (Objective-C, Objective-C++ or Swift).
|
||||
The CLI will ask you for the path to your project's .xcodeproj file, the name of the plugin (e.g. `FaceDetectorFrameProcessorPlugin`), the name of the exposed method (e.g. `detectFaces`) and the language you want to use for plugin development (Objective-C, Objective-C++ or Swift).
|
||||
For reference see the [CLI's docs](https://github.com/mateusz1913/vision-camera-plugin-builder#%EF%B8%8F-options).
|
||||
:::
|
||||
|
||||
@@ -38,41 +38,52 @@ For reference see the [CLI's docs](https://github.com/mateusz1913/vision-camera-
|
||||
<TabItem value="objc">
|
||||
|
||||
1. Open your Project in Xcode
|
||||
2. Create an Objective-C source file, for the QR Code Plugin this will be called `QRCodeFrameProcessorPlugin.m`.
|
||||
2. Create an Objective-C source file, for the Face Detector Plugin this will be called `FaceDetectorFrameProcessorPlugin.m`.
|
||||
3. Add the following code:
|
||||
|
||||
```objc {12}
|
||||
```objc
|
||||
#import <VisionCamera/FrameProcessorPlugin.h>
|
||||
#import <VisionCamera/FrameProcessorPluginRegistry.h>
|
||||
#import <VisionCamera/Frame.h>
|
||||
|
||||
@interface QRCodeFrameProcessorPlugin : NSObject
|
||||
@interface FaceDetectorFrameProcessorPlugin : FrameProcessorPlugin
|
||||
@end
|
||||
|
||||
@implementation QRCodeFrameProcessorPlugin
|
||||
@implementation FaceDetectorFrameProcessorPlugin
|
||||
|
||||
static inline id scanQRCodes(Frame* frame, NSArray* args) {
|
||||
- (instancetype) initWithOptions:(NSDictionary*)options {
|
||||
self = [super init];
|
||||
return self;
|
||||
}
|
||||
|
||||
- (id)callback:(Frame*)frame withArguments:(NSDictionary*)arguments {
|
||||
CMSampleBufferRef buffer = frame.buffer;
|
||||
UIImageOrientation orientation = frame.orientation;
|
||||
// code goes here
|
||||
return @[];
|
||||
}
|
||||
|
||||
VISION_EXPORT_FRAME_PROCESSOR(scanQRCodes)
|
||||
+ (void) load {
|
||||
[FrameProcessorPluginRegistry addFrameProcessorPlugin:@"detectFaces"
|
||||
withInitializer:^FrameProcessorPlugin*(NSDictionary* options) {
|
||||
return [[FaceDetectorFrameProcessorPlugin alloc] initWithOptions:options];
|
||||
}];
|
||||
}
|
||||
|
||||
@end
|
||||
```
|
||||
|
||||
:::note
|
||||
The JS function name will be equal to the Objective-C function name you choose (with a `__` prefix). Make sure it is unique across other Frame Processor Plugins.
|
||||
The Frame Processor Plugin will be exposed to JS through the `VisionCameraProxy` object. In this case, it would be `VisionCameraProxy.getFrameProcessorPlugin("detectFaces")`.
|
||||
:::
|
||||
|
||||
4. **Implement your Frame Processing.** See the [Example Plugin (Objective-C)](https://github.com/mrousavy/react-native-vision-camera/blob/main/example/ios/Frame%20Processor%20Plugins/Example%20Plugin%20%28Objective%2DC%29) for reference.
|
||||
4. **Implement your Frame Processing.** See the [Example Plugin (Objective-C)](https://github.com/mrousavy/react-native-vision-camera/blob/main/package/example/ios/Frame%20Processor%20Plugins/Example%20Plugin%20%28Objective%2DC%29) for reference.
|
||||
|
||||
</TabItem>
|
||||
<TabItem value="swift">
|
||||
|
||||
1. Open your Project in Xcode
|
||||
2. Create a Swift file, for the QR Code Plugin this will be `QRCodeFrameProcessorPlugin.swift`. If Xcode asks you to create a Bridging Header, press **create**.
|
||||
2. Create a Swift file, for the Face Detector Plugin this will be `FaceDetectorFrameProcessorPlugin.swift`. If Xcode asks you to create a Bridging Header, press **create**.
|
||||
|
||||

|
||||
|
||||
@@ -83,27 +94,12 @@ The JS function name will be equal to the Objective-C function name you choose (
|
||||
#import <VisionCamera/Frame.h>
|
||||
```
|
||||
|
||||
4. Create an Objective-C source file with the same name as the Swift file, for the QR Code Plugin this will be `QRCodeFrameProcessorPlugin.m`. Add the following code:
|
||||
4. In the Swift file, add the following code:
|
||||
|
||||
```objc
|
||||
#import <VisionCamera/FrameProcessorPlugin.h>
|
||||
|
||||
@interface VISION_EXPORT_SWIFT_FRAME_PROCESSOR(scanQRCodes, QRCodeFrameProcessorPlugin)
|
||||
@end
|
||||
```
|
||||
|
||||
:::note
|
||||
The first parameter in the Macro specifies the JS function name. Make sure it is unique across other Frame Processors.
|
||||
:::
|
||||
|
||||
5. In the Swift file, add the following code:
|
||||
|
||||
```swift {8}
|
||||
@objc(QRCodeFrameProcessorPlugin)
|
||||
public class QRCodeFrameProcessorPlugin: NSObject, FrameProcessorPluginBase {
|
||||
|
||||
@objc
|
||||
public static func callback(_ frame: Frame!, withArgs _: [Any]!) -> Any! {
|
||||
```swift
|
||||
@objc(FaceDetectorFrameProcessorPlugin)
|
||||
public class FaceDetectorFrameProcessorPlugin: FrameProcessorPlugin {
|
||||
public override func callback(_ frame: Frame!, withArguments arguments: [String:Any]) -> Any {
|
||||
let buffer = frame.buffer
|
||||
let orientation = frame.orientation
|
||||
// code goes here
|
||||
@@ -112,7 +108,31 @@ public class QRCodeFrameProcessorPlugin: NSObject, FrameProcessorPluginBase {
|
||||
}
|
||||
```
|
||||
|
||||
6. **Implement your frame processing.** See [Example Plugin (Swift)](https://github.com/mrousavy/react-native-vision-camera/blob/main/example/ios/Frame%20Processor%20Plugins/Example%20Plugin%20%28Swift%29) for reference.
|
||||
5. In your `AppDelegate.m`, add the following imports:
|
||||
|
||||
```objc
|
||||
#import "YOUR_XCODE_PROJECT_NAME-Swift.h"
|
||||
#import <VisionCamera/FrameProcessorPlugin.h>
|
||||
#import <VisionCamera/FrameProcessorPluginRegistry.h>
|
||||
```
|
||||
|
||||
6. In your `AppDelegate.m`, add the following code to `application:didFinishLaunchingWithOptions:`:
|
||||
|
||||
```objc {5}
|
||||
- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions
|
||||
{
|
||||
...
|
||||
|
||||
[FrameProcessorPluginRegistry addFrameProcessorPlugin:@"detectFaces"
|
||||
withInitializer:^FrameProcessorPlugin*(NSDictionary* options) {
|
||||
return [[FaceDetectorFrameProcessorPlugin alloc] initWithOptions:options];
|
||||
}];
|
||||
|
||||
return [super application:application didFinishLaunchingWithOptions:launchOptions];
|
||||
}
|
||||
```
|
||||
|
||||
7. **Implement your frame processing.** See [Example Plugin (Swift)](https://github.com/mrousavy/react-native-vision-camera/blob/main/package/example/ios/Frame%20Processor%20Plugins/Example%20Plugin%20%28Swift%29) for reference.
|
||||
|
||||
|
||||
</TabItem>
|
||||
|
@@ -12,13 +12,10 @@ These are VisionCamera Frame Processor Plugins created by the community.
|
||||
|
||||
```
|
||||
npm i vision-camera-xxxxx
|
||||
cd ios && pod install
|
||||
```
|
||||
|
||||
2. Add the native function's name (the one with the `__` prefix) to your `babel.config.js`. (See their README for instructions)
|
||||
|
||||
:::note
|
||||
You have to restart metro-bundler for changes in the `babel.config.js` file to take effect.
|
||||
:::
|
||||
2. Rebuild your app and use the plugin
|
||||
|
||||
## Plugin List
|
||||
|
||||
|
@@ -16,7 +16,7 @@ The Camera's `isActive` property can be used to _pause_ the session (`isActive={
|
||||
|
||||
For example, you want to **pause the camera** when the user **navigates to another page** or **minimizes the app** since otherwise the camera continues to run in the background without the user seeing it, causing **significant battery drain**. Also, on iOS a green dot indicates to the user that the camera is still active, possibly causing the user to raise privacy concerns. (🔗 See ["About the orange and green indicators in your iPhone status bar"](https://support.apple.com/en-us/HT211876))
|
||||
|
||||
This example demonstrates how you could pause the camera stream once the app goes into background using a [custom `useIsAppForeground` hook](https://github.com/mrousavy/react-native-vision-camera/blob/main/example/src/hooks/useIsForeground.ts):
|
||||
This example demonstrates how you could pause the camera stream once the app goes into background using a [custom `useIsAppForeground` hook](https://github.com/mrousavy/react-native-vision-camera/blob/main/package/example/src/hooks/useIsForeground.ts):
|
||||
|
||||
```tsx
|
||||
function App() {
|
||||
|
@@ -39,69 +39,69 @@ module.exports = {
|
||||
|
||||
### Create proxy for original and mocked modules
|
||||
|
||||
1. Create a new folder `vision-camera` anywhere in your project.
|
||||
2. Inside that folder, create `vision-camera.js` and `vision-camera.e2e.js`.
|
||||
3. Inside `vision-camera.js`, export the original react native modules you need to mock, and
|
||||
inside `vision-camera.e2e.js` export the mocked modules.
|
||||
1. Create a new folder `vision-camera` anywhere in your project.
|
||||
2. Inside that folder, create `vision-camera.js` and `vision-camera.e2e.js`.
|
||||
3. Inside `vision-camera.js`, export the original react native modules you need to mock, and
|
||||
inside `vision-camera.e2e.js` export the mocked modules.
|
||||
|
||||
In this example, several functions of the modules `Camera` and `sortDevices` are mocked.
|
||||
Define your mocks following the [original definitions](https://github.com/mrousavy/react-native-vision-camera/tree/main/src).
|
||||
In this example, several functions of the `Camera` module and the `sortDevices` function are mocked.
|
||||
Define your mocks following the [original definitions](https://github.com/mrousavy/react-native-vision-camera/tree/main/package/src).
|
||||
|
||||
```js
|
||||
// vision-camera.js
|
||||
```js
|
||||
// vision-camera.js
|
||||
|
||||
import { Camera, sortDevices } from 'react-native-vision-camera';
|
||||
import { Camera, sortDevices } from 'react-native-vision-camera';
|
||||
|
||||
export const VisionCamera = Camera;
|
||||
export const visionCameraSortDevices = sortDevices;
|
||||
```
|
||||
export const VisionCamera = Camera;
|
||||
export const visionCameraSortDevices = sortDevices;
|
||||
```
|
||||
|
||||
```js
|
||||
// vision-camera.e2e.js
|
||||
```js
|
||||
// vision-camera.e2e.js
|
||||
|
||||
import React from 'react';
|
||||
import RNFS, { writeFile } from 'react-native-fs';
|
||||
import React from 'react';
|
||||
import RNFS, { writeFile } from 'react-native-fs';
|
||||
|
||||
console.log('[DETOX] Using mocked react-native-vision-camera');
|
||||
console.log('[DETOX] Using mocked react-native-vision-camera');
|
||||
|
||||
export class VisionCamera extends React.PureComponent {
|
||||
static async getAvailableCameraDevices() {
|
||||
return (
|
||||
[
|
||||
{
|
||||
position: 'back',
|
||||
},
|
||||
]
|
||||
);
|
||||
}
|
||||
export class VisionCamera extends React.PureComponent {
|
||||
static async getAvailableCameraDevices() {
|
||||
return (
|
||||
[
|
||||
{
|
||||
position: 'back',
|
||||
},
|
||||
]
|
||||
);
|
||||
}
|
||||
|
||||
static async getCameraPermissionStatus() {
|
||||
return 'authorized';
|
||||
}
|
||||
static async getCameraPermissionStatus() {
|
||||
return 'granted';
|
||||
}
|
||||
|
||||
static async requestCameraPermission() {
|
||||
return 'authorized';
|
||||
}
|
||||
static async requestCameraPermission() {
|
||||
return 'granted';
|
||||
}
|
||||
|
||||
async takePhoto() {
|
||||
const writePath = `${RNFS.DocumentDirectoryPath}/simulated_camera_photo.png`;
|
||||
async takePhoto() {
|
||||
const writePath = `${RNFS.DocumentDirectoryPath}/simulated_camera_photo.png`;
|
||||
|
||||
const imageDataBase64 = 'some_large_base_64_encoded_simulated_camera_photo';
|
||||
await writeFile(writePath, imageDataBase64, 'base64');
|
||||
const imageDataBase64 = 'some_large_base_64_encoded_simulated_camera_photo';
|
||||
await writeFile(writePath, imageDataBase64, 'base64');
|
||||
|
||||
return { path: writePath };
|
||||
}
|
||||
return { path: writePath };
|
||||
}
|
||||
|
||||
render() {
|
||||
return null;
|
||||
}
|
||||
}
|
||||
render() {
|
||||
return null;
|
||||
}
|
||||
}
|
||||
|
||||
export const visionCameraSortDevices = (_left, _right) => 1;
|
||||
```
|
||||
export const visionCameraSortDevices = (_left, _right) => 1;
|
||||
```
|
||||
|
||||
These mocked modules allows us to get authorized camera permissions, get one back camera
|
||||
available and take a fake photo, while the component doesn't render when instantiated.
|
||||
These mocked modules allow us to get granted camera permissions, get one back camera
|
||||
available and take a fake photo, while the component doesn't render when instantiated.
|
||||
|
||||
### Use proxy module
|
||||
|
||||
|
@@ -42,9 +42,9 @@ expo install react-native-vision-camera
|
||||
</TabItem>
|
||||
</Tabs>
|
||||
|
||||
VisionCamera requires **iOS 11 or higher**, and **Android-SDK version 21 or higher**. See [Troubleshooting](/docs/guides/troubleshooting) if you're having installation issues.
|
||||
VisionCamera requires **iOS 12 or higher**, and **Android-SDK version 26 or higher**. See [Troubleshooting](/docs/guides/troubleshooting) if you're having installation issues.
|
||||
|
||||
> **(Optional)** If you want to use [**Frame Processors**](/docs/guides/frame-processors), you need to install [**react-native-reanimated**](https://github.com/software-mansion/react-native-reanimated) 2.2.0 or higher.
|
||||
> **(Optional)** If you want to use [**Frame Processors**](/docs/guides/frame-processors), you need to install [**react-native-worklets-core**](https://github.com/margelo/react-native-worklets-core) 1.0.0 or higher.
|
||||
|
||||
## Updating manifests
|
||||
|
||||
@@ -138,7 +138,7 @@ const microphonePermission = await Camera.getMicrophonePermissionStatus()
|
||||
|
||||
A permission status can have the following values:
|
||||
|
||||
* `authorized`: Your app is authorized to use said permission. Continue with [**using the `<Camera>` view**](#use-the-camera-view).
|
||||
* `granted`: Your app is authorized to use said permission. Continue with [**using the `<Camera>` view**](#use-the-camera-view).
|
||||
* `not-determined`: Your app has not yet requested permission from the user. [Continue by calling the **request** functions.](#requesting-permissions)
|
||||
* `denied`: Your app has already requested permissions from the user, but was explicitly denied. You cannot use the **request** functions again, but you can use the [`Linking` API](https://reactnative.dev/docs/linking#opensettings) to redirect the user to the Settings App where they can manually grant the permission.
|
||||
* `restricted`: (iOS only) Your app cannot use the Camera or Microphone because that functionality has been restricted, possibly due to active restrictions such as parental controls being in place.
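The four status values above map naturally onto a small decision helper. This is a hedged sketch — the union type mirrors the list above, but the action names are illustrative and not part of the VisionCamera API:

```typescript
type CameraPermissionStatus = 'granted' | 'not-determined' | 'denied' | 'restricted';

// Illustrative only: decide the next UI action for each permission status.
function nextStep(status: CameraPermissionStatus): string {
  switch (status) {
    case 'granted':
      return 'show-camera'; // continue with the <Camera> view
    case 'not-determined':
      return 'request-permission'; // call Camera.requestCameraPermission()
    case 'denied':
      return 'open-settings'; // e.g. Linking.openSettings() so the user can grant it manually
    case 'restricted':
      return 'camera-unavailable'; // iOS only, e.g. parental controls
  }
}

console.log(nextStep('not-determined')); // → "request-permission"
```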
|
||||
@@ -158,7 +158,7 @@ const newMicrophonePermission = await Camera.requestMicrophonePermission()
|
||||
|
||||
The permission request status can have the following values:
|
||||
|
||||
* `authorized`: Your app is authorized to use said permission. Continue with [**using the `<Camera>` view**](#use-the-camera-view).
|
||||
* `granted`: Your app is authorized to use said permission. Continue with [**using the `<Camera>` view**](#use-the-camera-view).
|
||||
* `denied`: The user explicitly denied the permission request alert. You cannot use the **request** functions again, but you can use the [`Linking` API](https://reactnative.dev/docs/linking#opensettings) to redirect the user to the Settings App where they can manually grant the permission.
|
||||
* `restricted`: (iOS only) Your app cannot use the Camera or Microphone because that functionality has been restricted, possibly due to active restrictions such as parental controls being in place.
|
||||
|
||||
|
@@ -1,10 +0,0 @@
|
||||
# TODO
|
||||
|
||||
This is an internal TODO list which I am using to keep track of some of the features that are still missing.
|
||||
|
||||
* [ ] Mirror images from selfie cameras (iOS Done, Android WIP)
|
||||
* [ ] Allow camera switching (front <-> back) while recording and stitch videos together
|
||||
* [ ] Make `startRecording()` async. Due to NativeModules limitations, we can only have either one callback or one promise in a native function. For `startRecording()` we need both, since you probably also want to catch any errors that occurred during a `startRecording()` call (or wait until the recording has actually started, since this can also take some time)
|
||||
* [ ] Return a `jsi::Value` reference for images (`UIImage`/`Bitmap`) on `takePhoto()` and `takeSnapshot()`. This way, we skip the entire file writing and reading, making image capture _a lot_ faster.
|
||||
* [ ] Implement frame processors. The idea here is that the user passes a small JS function (reanimated worklet) to the `Camera::frameProcessor` prop which will then get called on every frame the camera previews. (I'd say we cap it to 30 times per second, even if the camera fps is higher) This can then be used to scan QR codes, detect faces, detect depth, render something on top of the camera such as color filters, QR code boundaries or even dog filters, possibly even use AR - all from a single, small, and highly flexible JS function!
|
||||
* [ ] Create a custom MPEG4 encoder to allow for more customizability in `recordVideo()` (`bitRate`, `priority`, `minQuantizationParameter`, `allowFrameReordering`, `expectedFrameRate`, `realTime`, `minimizeMemoryUsage`)
|
@@ -14,60 +14,93 @@ Before opening an issue, make sure you try the following:
|
||||
|
||||
## iOS
|
||||
|
||||
1. Try cleaning and rebuilding **everything**:
|
||||
### Build Issues
|
||||
|
||||
1. Try building through Xcode instead of the command line. The error panel should give you more information about any build errors.
|
||||
2. Try cleaning and rebuilding **everything**:
|
||||
```sh
|
||||
rm -rf package-lock.json && rm -rf yarn.lock && rm -rf node_modules
|
||||
rm -rf ios/Podfile.lock && rm -rf ios/Pods
|
||||
npm i # or "yarn"
|
||||
cd ios && pod repo update && pod update && pod install
|
||||
```
|
||||
2. Check your minimum iOS version. VisionCamera requires a minimum iOS version of **11.0**.
|
||||
3. Check your minimum iOS version. VisionCamera requires a minimum iOS version of **12.4**.
|
||||
1. Open your `Podfile`
|
||||
2. Make sure `platform :ios` is set to `11.0` or higher
|
||||
3. Make sure `iOS Deployment Target` is set to `11.0` or higher (`IPHONEOS_DEPLOYMENT_TARGET` in `project.pbxproj`)
|
||||
3. Check your Swift version. VisionCamera requires a minimum Swift version of **5.2**.
|
||||
2. Make sure `platform :ios` is set to `12.4` or higher
|
||||
3. Make sure `iOS Deployment Target` is set to `12.4` or higher (`IPHONEOS_DEPLOYMENT_TARGET` in `project.pbxproj`)
|
||||
4. Check your Swift version. VisionCamera requires a minimum Swift version of **5.2**.
|
||||
1. Open `project.pbxproj` in a Text Editor
|
||||
2. If the `LIBRARY_SEARCH_PATHS` value is set, make sure there is no explicit reference to Swift-5.0. If there is, remove it. See [this StackOverflow answer](https://stackoverflow.com/a/66281846/1123156).
|
||||
3. If the `SWIFT_VERSION` value is set, make sure it is set to `5.2` or higher.
|
||||
4. Make sure you have created a Swift bridging header in your project.
|
||||
5. Make sure you have created a Swift bridging header in your project.
|
||||
1. Open your project (`.xcworkspace`) in Xcode
|
||||
2. Press **File** > **New** > **File** (<kbd>⌘</kbd>+<kbd>N</kbd>)
|
||||
3. Select **Swift File** and press **Next**
|
||||
4. Choose whatever name you want, e.g. `File.swift` and press **Create**
|
||||
5. Press **Create Bridging Header** when prompted.
|
||||
5. If you're having runtime issues, check the logs in Xcode to find out more. In Xcode, go to **View** > **Debug Area** > **Activate Console** (<kbd>⇧</kbd>+<kbd>⌘</kbd>+<kbd>C</kbd>).
|
||||
6. Try building without Frame Processors. Set `$VCDisableFrameProcessors = true` in the top of your Podfile, and try rebuilding.
|
||||
|
||||
### Runtime Issues
|
||||
|
||||
1. Check the logs in Xcode to find out more. In Xcode, go to **View** > **Debug Area** > **Activate Console** (<kbd>⇧</kbd>+<kbd>⌘</kbd>+<kbd>C</kbd>).
|
||||
* For errors without messages, there's often an error code attached. Look up the error code on [osstatus.com](https://www.osstatus.com) to get more information about a specific error.
|
||||
6. If your Frame Processor is not running, make sure you check the native Xcode logs to find out why. Also make sure you are not using a remote JS debugger such as Google Chrome, since those don't work with JSI.
|
||||
2. If your Frame Processor is not running, make sure you check the native Xcode logs. There is useful information about the Frame Processor Runtime that will tell you if something goes wrong.
|
||||
3. If your Frame Processor is not running, make sure you are not using a remote JS debugger such as Google Chrome, since those don't work with JSI.
|
||||
4. If you are experiencing black screens, try removing all properties such as `fps`, `hdr` or `format` on the `<Camera>` component except for the required ones:
|
||||
```tsx
|
||||
<Camera device={device} isActive={true} style={{ width: 500, height: 500 }} />
|
||||
```
|
||||
5. Investigate the camera devices this phone has and make sure you're using a valid one. Look for properties such as `pixelFormats`, `id`, and `hardwareLevel`.
|
||||
```tsx
|
||||
Camera.getAvailableCameraDevices().then((d) => console.log(JSON.stringify(d, null, 2)))
|
||||
```
|
||||
|
||||
## Android
|
||||
|
||||
1. Try cleaning and rebuilding **everything**:
|
||||
### Build Issues
|
||||
|
||||
1. Try building through Android Studio instead of the command line. The error panel should give you more information about any build errors.
2. Scroll up in the build output to make sure you're not missing any errors. Remember: "Build failed" is not an error message. Scroll further up.
3. Try cleaning and rebuilding **everything**:
   ```sh
   rm -rf android/.gradle android/.idea android/app/build android/build
   rm -rf package-lock.json yarn.lock node_modules
   yarn # or `npm i`
   ```
4. Make sure you have installed the [Android NDK](https://developer.android.com/ndk).
5. Make sure your minimum SDK version is **26 or higher**, and target SDK version is **33 or higher**. See [the example's `build.gradle`](https://github.com/mrousavy/react-native-vision-camera/blob/main/package/example/android/build.gradle#L5-L10) for reference.
   1. Open your `build.gradle`
   2. Set `buildToolsVersion` to `33.0.0` or higher
   3. Set `compileSdkVersion` to `33` or higher
   4. Set `targetSdkVersion` to `33` or higher
   5. Set `minSdkVersion` to `26` or higher
   6. Set `ndkVersion` to `"23.1.7779620"` or higher
   7. Update the Gradle Build-Tools version to `7.3.1` or higher:
      ```
      classpath("com.android.tools.build:gradle:7.3.1")
      ```
6. Make sure your Gradle Wrapper version is `7.5.1` or higher. In `gradle-wrapper.properties`, set:
   ```
   distributionUrl=https\://services.gradle.org/distributions/gradle-7.5.1-all.zip
   ```
7. Try building without Frame Processors. Set `VisionCamera_disableFrameProcessors = true` in your `gradle.properties`, and try rebuilding.
### Runtime Issues
1. Check the logs in Android Studio/Logcat to find out more. In Android Studio, go to **View** > **Tool Windows** > **Logcat** (<kbd>⌘</kbd>+<kbd>6</kbd>) or run `adb logcat` in Terminal.
2. If a camera device is not being returned by [`Camera.getAvailableCameraDevices()`](/docs/api/classes/Camera#getavailablecameradevices), make sure it is a Camera2 compatible device. See [this section in the Android docs](https://developer.android.com/reference/android/hardware/camera2/CameraDevice#reprocessing) for more information.
3. If your Frame Processor is not running, check the native Android Studio/Logcat logs. They contain useful information about the Frame Processor Runtime that will tell you if something went wrong.
4. If your Frame Processor is not running, make sure you are not using a remote JS debugger such as Google Chrome, since those don't work with JSI.
5. If you are experiencing black-screens, try removing all properties such as `fps`, `hdr` or `format` on the `<Camera>` component except for the required ones:
   ```tsx
   <Camera device={device} isActive={true} style={{ width: 500, height: 500 }} />
   ```
6. Investigate the camera devices this phone has and make sure you're using a valid one. Look for properties such as `pixelFormats`, `id`, and `hardwareLevel`.
   ```tsx
   Camera.getAvailableCameraDevices().then((d) => console.log(JSON.stringify(d, null, 2)))
   ```
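Beyond logging the raw JSON, you can programmatically sanity-check the device list from step 6. A minimal sketch, assuming the device array from `Camera.getAvailableCameraDevices()` — the `CameraDeviceInfo` shape and `pickBackCamera` helper are made up for illustration and only mirror a few fields of VisionCamera's `CameraDevice`:

```typescript
// Trimmed-down device shape for illustration — the real CameraDevice
// type has many more fields than these.
interface CameraDeviceInfo {
  id: string;
  position: 'back' | 'front' | 'external';
  hardwareLevel: string; // e.g. 'legacy' | 'limited' | 'full'
  pixelFormats: string[];
}

// Prefer a back camera with 'full' hardware level; Camera2 'legacy'
// devices are the most limited and a common source of runtime issues.
function pickBackCamera(devices: CameraDeviceInfo[]): CameraDeviceInfo | undefined {
  const back = devices.filter((d) => d.position === 'back');
  return back.find((d) => d.hardwareLevel === 'full') ?? back[0];
}
```

Logging which device such a helper picks (and its `hardwareLevel`) before passing it to `<Camera>` makes it obvious whether a black screen comes from an unsuitable device rather than from your props.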
## Issues
If nothing has helped so far, try browsing the [GitHub issues](https://github.com/mrousavy/react-native-vision-camera/issues?q=is%3Aissue). If your issue doesn't exist, [create a new one](https://github.com/mrousavy/react-native-vision-camera/issues/new/choose). Make sure to fill out the template and include as many details as possible. Also try to reproduce the issue in the [example app](https://github.com/mrousavy/react-native-vision-camera/blob/main/package/example).