docs: Don't use bold links (#1860)

commit 2c5c7d63b1, parent 04fd597866
Marc Rousavy, 2023-09-26 14:42:22 +02:00, committed by GitHub
GPG Key ID: 4AEE18F83AFDEB23 (no known key found for this signature in database)
5 changed files with 21 additions and 21 deletions


@@ -53,7 +53,7 @@ Frame processors are by far not limited to object detection, other examples incl
Because they are written in JS, Frame Processors are simple, powerful, extensible and easy to create while still running at native performance. (Frame Processors can run up to 1000 times a second!) Also, you can use fast-refresh to quickly see changes while developing or publish [over-the-air updates](https://github.com/microsoft/react-native-code-push) to tweak the object detector's sensitivity in live apps without pushing a native update.
:::note
-Frame Processors require [**react-native-worklets-core**](https://github.com/margelo/react-native-worklets-core) 0.2.0 or higher.
+Frame Processors require [react-native-worklets-core](https://github.com/margelo/react-native-worklets-core) 0.2.0 or higher.
:::
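For context, a minimal Frame Processor looks like this, using VisionCamera's `useFrameProcessor` hook (a sketch; the logging body is purely illustrative):

```tsx
import { useFrameProcessor } from 'react-native-vision-camera'

// Inside a component: the callback is workletized and runs on the
// Frame Processor thread for every camera frame, then gets passed
// to <Camera frameProcessor={frameProcessor} />.
const frameProcessor = useFrameProcessor((frame) => {
  'worklet'
  console.log(`Frame: ${frame.width}x${frame.height}`)
}, [])
```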
## The `Frame`
@@ -87,7 +87,7 @@ You can simply pass a `Frame` to a native Frame Processor Plugin directly.
### Access JS values
-Since Frame Processors run in [**Worklets**](https://github.com/margelo/react-native-worklets-core/blob/main/docs/WORKLETS.md), you can directly use JS values such as React state which are readonly-copied into the Frame Processor:
+Since Frame Processors run in [Worklets](https://github.com/margelo/react-native-worklets-core/blob/main/docs/WORKLETS.md), you can directly use JS values such as React state which are readonly-copied into the Frame Processor:
```tsx
// User can look for specific objects
@@ -103,7 +103,7 @@ const frameProcessor = useFrameProcessor((frame) => {
### Shared Values
-You can also easily read from, and assign to [**Shared Values**](https://github.com/margelo/react-native-worklets-core/blob/main/docs/WORKLETS.md#shared-values), which can be written to from inside a Frame Processor and read from any other context (either React JS, Skia, or Reanimated):
+You can also easily read from, and assign to [Shared Values](https://github.com/margelo/react-native-worklets-core/blob/main/docs/WORKLETS.md#shared-values), which can be written to from inside a Frame Processor and read from any other context (either React JS, Skia, or Reanimated):
```tsx
const bananas = useSharedValue([])
@@ -199,7 +199,7 @@ See: ["Creating Frame Processor Plugins"](/docs/guides/frame-processors-plugins-
### Using Community Plugins
-Community Frame Processor Plugins are distributed through npm. To install the [**vision-camera-image-labeler**](https://github.com/mrousavy/vision-camera-image-labeler) plugin, run:
+Community Frame Processor Plugins are distributed through npm. To install the [vision-camera-image-labeler](https://github.com/mrousavy/vision-camera-image-labeler) plugin, run:
```bash
npm i vision-camera-image-labeler


@@ -52,7 +52,7 @@ Frame Processors are JS functions that will be _workletized_ using [react-native
Frame Processor Plugins are native functions (written in Objective-C, Swift, C++, Java or Kotlin) that are injected into the VisionCamera JS-Runtime. They can be synchronously called from your JS Frame Processors (using JSI) without ever going over the bridge. Because VisionCamera provides an easy-to-use plugin API, you can easily create a Frame Processor Plugin yourself. Some examples include [Barcode Scanning](https://developers.google.com/ml-kit/vision/barcode-scanning), [Face Detection](https://developers.google.com/ml-kit/vision/face-detection), [Image Labeling](https://developers.google.com/ml-kit/vision/image-labeling), [Text Recognition](https://developers.google.com/ml-kit/vision/text-recognition) and more.
-> Learn how to [**create Frame Processor Plugins**](frame-processors-plugins-overview), or check out the [**example Frame Processor Plugin for iOS**](https://github.com/mrousavy/react-native-vision-camera/blob/main/package/example/ios/Frame%20Processor%20Plugins/Example%20Plugin%20(Swift)/ExamplePluginSwift.swift) or [**Android**](https://github.com/mrousavy/react-native-vision-camera/blob/main/package/example/android/app/src/main/java/com/mrousavy/camera/example/ExampleFrameProcessorPlugin.java).
+> Learn how to [create Frame Processor Plugins](frame-processors-plugins-overview), or check out the [example Frame Processor Plugin for iOS](https://github.com/mrousavy/react-native-vision-camera/blob/main/package/example/ios/Frame%20Processor%20Plugins/Example%20Plugin%20(Swift)/ExamplePluginSwift.swift) or [Android](https://github.com/mrousavy/react-native-vision-camera/blob/main/package/example/android/app/src/main/java/com/mrousavy/camera/example/ExampleFrameProcessorPlugin.java).
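On the JS side, such a native plugin is usually exposed as a thin worklet wrapper. A hedged sketch based on VisionCamera 3's `VisionCameraProxy.initFrameProcessorPlugin` API (the plugin name `detectObjects` and its return type are hypothetical):

```tsx
import { VisionCameraProxy, type Frame } from 'react-native-vision-camera'

// Look the native plugin up once, at module load time.
const plugin = VisionCameraProxy.initFrameProcessorPlugin('detectObjects')

export function detectObjects(frame: Frame): string[] {
  'worklet'
  if (plugin == null) throw new Error('Failed to load Frame Processor Plugin "detectObjects"!')
  // Synchronous JSI call into the native plugin - no bridge involved.
  return plugin.call(frame) as string[]
}
```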
### The `Frame` object
@@ -61,7 +61,7 @@ The `Frame` object can be passed around in JS, as well as returned from- and pas
With 4k Frames, roughly 1.5 GB of Frame data flows through your Frame Processor per second.
-> See [**this tweet**](https://twitter.com/mrousavy/status/1412300883149393921) for more information.
+> See [this tweet](https://twitter.com/mrousavy/status/1412300883149393921) for more information.
<br />
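As a quick sanity check of that figure, here is the back-of-envelope arithmetic (assuming 4-byte RGBA pixels; real camera buffers are often YUV, so treat this as an order-of-magnitude sketch rather than VisionCamera's exact accounting):

```typescript
// Rough data-rate estimate for frames flowing through a Frame Processor.
function frameDataRate(width: number, height: number, bytesPerPixel: number, fps: number): number {
  return width * height * bytesPerPixel * fps // bytes per second
}

// 4k (3840x2160) at 4 bytes/pixel and 30 fps:
const gbPerSecond = frameDataRate(3840, 2160, 4, 30) / 1e9
console.log(gbPerSecond.toFixed(2)) // ~1 GB/s at 30 fps, ~2 GB/s at 60 fps
```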


@@ -50,7 +50,7 @@ If you want to distribute your Frame Processor Plugin, simply use npm.
4. Add VisionCamera to `peerDependencies`: `"react-native-vision-camera": ">= 3"`
5. Implement the Frame Processor Plugin in the iOS, Android and JS/TS Codebase using the guides above
6. Publish the plugin to npm. Users will only have to install the plugin using `npm i vision-camera-plugin-xxxxx` and add it to their `babel.config.js` file.
-7. [Add the plugin to the **official VisionCamera plugin list**](https://github.com/mrousavy/react-native-vision-camera/edit/main/docs/docs/guides/FRAME_PROCESSOR_PLUGIN_LIST.mdx) for more visibility
+7. [Add the plugin to the official VisionCamera plugin list](https://github.com/mrousavy/react-native-vision-camera/edit/main/docs/docs/guides/FRAME_PROCESSOR_PLUGIN_LIST.mdx) for more visibility
<br />
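The exact `babel.config.js` entry in step 6 depends on the VisionCamera version; with VisionCamera 3 and react-native-worklets-core the config usually only needs the worklets Babel plugin, which workletizes Frame Processors globally. A hedged sketch (the preset line assumes a standard React Native template):

```javascript
// babel.config.js - sketch for a VisionCamera 3 + worklets-core setup
module.exports = {
  presets: ['module:metro-react-native-babel-preset'],
  plugins: [
    // Workletizes Frame Processors; required by react-native-worklets-core
    ['react-native-worklets-core/plugin'],
  ],
}
```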


@@ -19,15 +19,15 @@ cd ios && pod install
## Plugin List
-* [mrousavy/**react-native-fast-tflite**](https://github.com/mrousavy/react-native-fast-tflite): A plugin to run any Tensorflow Lite model inside React Native, written in C++ with GPU acceleration.
+* [mrousavy/react-native-fast-tflite](https://github.com/mrousavy/react-native-fast-tflite): A plugin to run any Tensorflow Lite model inside React Native, written in C++ with GPU acceleration.
-* [mrousavy/**vision-camera-image-labeler**](https://github.com/mrousavy/vision-camera-image-labeler): A plugin to label images using MLKit Vision Image Labeler.
+* [mrousavy/vision-camera-image-labeler](https://github.com/mrousavy/vision-camera-image-labeler): A plugin to label images using MLKit Vision Image Labeler.
-* [mrousavy/**vision-camera-resize-plugin**](https://github.com/mrousavy/vision-camera-resize-plugin): A plugin for fast frame resizing to optimize execution speed of expensive AI algorithms.
+* [mrousavy/vision-camera-resize-plugin](https://github.com/mrousavy/vision-camera-resize-plugin): A plugin for fast frame resizing to optimize execution speed of expensive AI algorithms.
-* [rodgomesc/**vision-camera-face-detector**](https://github.com/rodgomesc/vision-camera-face-detector): A plugin to detect faces using MLKit Vision Face Detector.
+* [rodgomesc/vision-camera-face-detector](https://github.com/rodgomesc/vision-camera-face-detector): A plugin to detect faces using MLKit Vision Face Detector.
-* [rodgomesc/**vision-camera-qrcode-scanner**](https://github.com/rodgomesc/vision-camera-qrcode-scanner): A plugin to read barcodes using MLKit Vision QrCode Scanning
+* [rodgomesc/vision-camera-qrcode-scanner](https://github.com/rodgomesc/vision-camera-qrcode-scanner): A plugin to read barcodes using MLKit Vision QrCode Scanning.
-* [mrousavy/**Colorwaver**](https://github.com/mrousavy/Colorwaver): An app (+ plugin) to detect color palettes in the real world.
+* [mrousavy/Colorwaver](https://github.com/mrousavy/Colorwaver): An app (+ plugin) to detect color palettes in the real world.
-* [xulihang/**vision-camera-dynamsoft-barcode-reader**](https://github.com/xulihang/vision-camera-dynamsoft-barcode-reader): A plugin to read barcodes using Dynamsoft Barcode Reader.
+* [xulihang/vision-camera-dynamsoft-barcode-reader](https://github.com/xulihang/vision-camera-dynamsoft-barcode-reader): A plugin to read barcodes using Dynamsoft Barcode Reader.
-* [xulihang/**vision-camera-dynamsoft-label-recognizer**](https://github.com/xulihang/vision-camera-dynamsoft-label-recognizer): A plugin to recognize text on labels, MRZ passports, etc. using Dynamsoft Label Recognizer.
+* [xulihang/vision-camera-dynamsoft-label-recognizer](https://github.com/xulihang/vision-camera-dynamsoft-label-recognizer): A plugin to recognize text on labels, MRZ passports, etc. using Dynamsoft Label Recognizer.
-* [aarongrider/**vision-camera-ocr**](https://github.com/aarongrider/vision-camera-ocr): A plugin to detect text in real time using MLKit Text Detector (OCR).
+* [aarongrider/vision-camera-ocr](https://github.com/aarongrider/vision-camera-ocr): A plugin to detect text in real time using MLKit Text Detector (OCR).


@@ -15,7 +15,7 @@ import useBaseUrl from '@docusaurus/useBaseUrl'
## Installing the library
-Install [**react-native-vision-camera**](https://www.npmjs.com/package/react-native-vision-camera) through npm:
+Install [react-native-vision-camera](https://www.npmjs.com/package/react-native-vision-camera) through npm:
<Tabs
  groupId="environment"
@@ -44,7 +44,7 @@ expo install react-native-vision-camera
VisionCamera requires **iOS 12 or higher**, and **Android-SDK version 26 or higher**. See [Troubleshooting](/docs/guides/troubleshooting) if you're having installation issues.
-> **(Optional)** If you want to use [**Frame Processors**](/docs/guides/frame-processors), you need to install [**react-native-worklets-core**](https://github.com/margelo/react-native-worklets-core) 0.2.0 or higher.
+> **(Optional)** If you want to use [Frame Processors](/docs/guides/frame-processors), you need to install [react-native-worklets-core](https://github.com/margelo/react-native-worklets-core) 0.2.0 or higher.
## Updating manifests
@@ -152,7 +152,7 @@ const { hasPermission, requestPermission } = useMicrophonePermission()
There could be three states to this:
1. First time opening the app, `hasPermission` is false. Call `requestPermission()` now.
-2. User already granted permission, `hasPermission` is true. Continue with [**using the `<Camera>` view**](#use-the-camera-view).
+2. User already granted permission, `hasPermission` is true. Continue with [using the `<Camera>` view](#use-the-camera-view).
3. User explicitly denied permission, `hasPermission` is false and `requestPermission()` will return false. Tell the user that they need to grant Camera Permission, potentially by using the [`Linking` API](https://reactnative.dev/docs/linking#opensettings) to open the App Settings.
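The three states above can be modeled as a small decision helper (illustrative only; `nextAction` and its arguments are not part of the VisionCamera API):

```typescript
type PermissionAction = 'show-camera' | 'request' | 'open-settings'

// hasPermission: current value from the permission hook.
// requestDenied: whether a previous requestPermission() call returned false.
function nextAction(hasPermission: boolean, requestDenied: boolean): PermissionAction {
  if (hasPermission) return 'show-camera'   // state 2: already granted
  if (requestDenied) return 'open-settings' // state 3: explicitly denied, use Linking.openSettings()
  return 'request'                          // state 1: first launch, call requestPermission() now
}
```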
</TabItem>
@@ -170,7 +170,7 @@ const microphonePermission = await Camera.getMicrophonePermissionStatus()
A permission status can have the following values:
-* `granted`: Your app is authorized to use said permission. Continue with [**using the `<Camera>` view**](#use-the-camera-view).
+* `granted`: Your app is authorized to use said permission. Continue with [using the `<Camera>` view](#use-the-camera-view).
* `not-determined`: Your app has not yet requested permission from the user. [Continue by calling the **request** functions.](#requesting-permissions)
* `denied`: Your app has already requested permissions from the user, but was explicitly denied. You cannot use the **request** functions again, but you can use the [`Linking` API](https://reactnative.dev/docs/linking#opensettings) to redirect the user to the Settings App where they can manually grant the permission.
* `restricted`: Your app cannot use the Camera or Microphone because that functionality has been restricted, possibly due to active restrictions such as parental controls being in place.
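A sketch of dispatching on these values (the status strings match the list above; the returned action strings are illustrative, not an API):

```typescript
type CameraPermissionStatus = 'granted' | 'not-determined' | 'denied' | 'restricted'

function handleStatus(status: CameraPermissionStatus): string {
  switch (status) {
    case 'granted':
      return 'render <Camera>'
    case 'not-determined':
      return 'call Camera.requestCameraPermission()'
    case 'denied':
      return 'redirect to Settings via Linking.openSettings()'
    case 'restricted':
      return 'camera unavailable (e.g. parental controls)'
  }
}
```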
@@ -190,7 +190,7 @@ const newMicrophonePermission = await Camera.requestMicrophonePermission()
The permission request status can have the following values:
-* `granted`: Your app is authorized to use said permission. Continue with [**using the `<Camera>` view**](#use-the-camera-view).
+* `granted`: Your app is authorized to use said permission. Continue with [using the `<Camera>` view](#use-the-camera-view).
* `denied`: The user explicitly denied the permission request alert. You cannot use the **request** functions again, but you can use the [`Linking` API](https://reactnative.dev/docs/linking#opensettings) to redirect the user to the Settings App where they can manually grant the permission.
* `restricted`: Your app cannot use the Camera or Microphone because that functionality has been restricted, possibly due to active restrictions such as parental controls being in place.