From 2c5c7d63b1e379d98cc4f7d0673837b52c5b91d1 Mon Sep 17 00:00:00 2001
From: Marc Rousavy
Date: Tue, 26 Sep 2023 14:42:22 +0200
Subject: [PATCH] docs: Don't use bold links (#1860)

---
 docs/docs/guides/FRAME_PROCESSORS.mdx         |  8 ++++----
 docs/docs/guides/FRAME_PROCESSORS_TIPS.mdx    |  4 ++--
 .../guides/FRAME_PROCESSOR_CREATE_FINAL.mdx   |  2 +-
 .../guides/FRAME_PROCESSOR_PLUGIN_LIST.mdx    | 18 +++++++++---------
 docs/docs/guides/SETUP.mdx                    | 10 +++++-----
 5 files changed, 21 insertions(+), 21 deletions(-)

diff --git a/docs/docs/guides/FRAME_PROCESSORS.mdx b/docs/docs/guides/FRAME_PROCESSORS.mdx
index d57ce8e..087347b 100644
--- a/docs/docs/guides/FRAME_PROCESSORS.mdx
+++ b/docs/docs/guides/FRAME_PROCESSORS.mdx
@@ -53,7 +53,7 @@ Frame processors are by far not limited to object detection, other examples incl
 Because they are written in JS, Frame Processors are simple, powerful, extensible and easy to create while still running at native performance. (Frame Processors can run up to 1000 times a second!) Also, you can use fast-refresh to quickly see changes while developing or publish [over-the-air updates](https://github.com/microsoft/react-native-code-push) to tweak the object detector's sensitivity in live apps without pushing a native update.

 :::note
-Frame Processors require [**react-native-worklets-core**](https://github.com/margelo/react-native-worklets-core) 0.2.0 or higher.
+Frame Processors require [react-native-worklets-core](https://github.com/margelo/react-native-worklets-core) 0.2.0 or higher.
 :::

 ## The `Frame`
@@ -87,7 +87,7 @@ You can simply pass a `Frame` to a native Frame Processor Plugin directly.
 ### Access JS values

-Since Frame Processors run in [**Worklets**](https://github.com/margelo/react-native-worklets-core/blob/main/docs/WORKLETS.md), you can directly use JS values such as React state which are readonly-copied into the Frame Processor:
+Since Frame Processors run in [Worklets](https://github.com/margelo/react-native-worklets-core/blob/main/docs/WORKLETS.md), you can directly use JS values such as React state which are readonly-copied into the Frame Processor:

 ```tsx
 // User can look for specific objects
@@ -103,7 +103,7 @@ const frameProcessor = useFrameProcessor((frame) => {

 ### Shared Values

-You can also easily read from, and assign to [**Shared Values**](https://github.com/margelo/react-native-worklets-core/blob/main/docs/WORKLETS.md#shared-values), which can be written to from inside a Frame Processor and read from any other context (either React JS, Skia, or Reanimated):
+You can also easily read from, and assign to [Shared Values](https://github.com/margelo/react-native-worklets-core/blob/main/docs/WORKLETS.md#shared-values), which can be written to from inside a Frame Processor and read from any other context (either React JS, Skia, or Reanimated):

 ```tsx
 const bananas = useSharedValue([])
@@ -199,7 +199,7 @@ See: ["Creating Frame Processor Plugins"](/docs/guides/frame-processors-plugins-

 ### Using Community Plugins

-Community Frame Processor Plugins are distributed through npm. To install the [**vision-camera-image-labeler**](https://github.com/mrousavy/vision-camera-image-labeler) plugin, run:
+Community Frame Processor Plugins are distributed through npm. To install the [vision-camera-image-labeler](https://github.com/mrousavy/vision-camera-image-labeler) plugin, run:

 ```bash
 npm i vision-camera-image-labeler
diff --git a/docs/docs/guides/FRAME_PROCESSORS_TIPS.mdx b/docs/docs/guides/FRAME_PROCESSORS_TIPS.mdx
index 653d9be..b241756 100644
--- a/docs/docs/guides/FRAME_PROCESSORS_TIPS.mdx
+++ b/docs/docs/guides/FRAME_PROCESSORS_TIPS.mdx
@@ -52,7 +52,7 @@ Frame Processors are JS functions that will be _workletized_ using [react-native
 Frame Processor Plugins are native functions (written in Objective-C, Swift, C++, Java or Kotlin) that are injected into the VisionCamera JS-Runtime. They can be synchronously called from your JS Frame Processors (using JSI) without ever going over the bridge. Because VisionCamera provides an easy-to-use plugin API, you can easily create a Frame Processor Plugin yourself. Some examples include [Barcode Scanning](https://developers.google.com/ml-kit/vision/barcode-scanning), [Face Detection](https://developers.google.com/ml-kit/vision/face-detection), [Image Labeling](https://developers.google.com/ml-kit/vision/image-labeling), [Text Recognition](https://developers.google.com/ml-kit/vision/text-recognition) and more.

-> Learn how to [**create Frame Processor Plugins**](frame-processors-plugins-overview), or check out the [**example Frame Processor Plugin for iOS**](https://github.com/mrousavy/react-native-vision-camera/blob/main/package/example/ios/Frame%20Processor%20Plugins/Example%20Plugin%20(Swift)/ExamplePluginSwift.swift) or [**Android**](https://github.com/mrousavy/react-native-vision-camera/blob/main/package/example/android/app/src/main/java/com/mrousavy/camera/example/ExampleFrameProcessorPlugin.java).
+> Learn how to [create Frame Processor Plugins](frame-processors-plugins-overview), or check out the [example Frame Processor Plugin for iOS](https://github.com/mrousavy/react-native-vision-camera/blob/main/package/example/ios/Frame%20Processor%20Plugins/Example%20Plugin%20(Swift)/ExamplePluginSwift.swift) or [Android](https://github.com/mrousavy/react-native-vision-camera/blob/main/package/example/android/app/src/main/java/com/mrousavy/camera/example/ExampleFrameProcessorPlugin.java).

 ### The `Frame` object

@@ -61,7 +61,7 @@ The `Frame` object can be passed around in JS, as well as returned from- and pas
 With 4k Frames, roughly 1.5 GB of Frame data flow through your Frame Processor per second.

-> See [**this tweet**](https://twitter.com/mrousavy/status/1412300883149393921) for more information.
+> See [this tweet](https://twitter.com/mrousavy/status/1412300883149393921) for more information.
diff --git a/docs/docs/guides/FRAME_PROCESSOR_CREATE_FINAL.mdx b/docs/docs/guides/FRAME_PROCESSOR_CREATE_FINAL.mdx
index 1467ca7..1acbdeb 100644
--- a/docs/docs/guides/FRAME_PROCESSOR_CREATE_FINAL.mdx
+++ b/docs/docs/guides/FRAME_PROCESSOR_CREATE_FINAL.mdx
@@ -50,7 +50,7 @@ If you want to distribute your Frame Processor Plugin, simply use npm.
 4. Add VisionCamera to `peerDependencies`: `"react-native-vision-camera": ">= 3"`
 5. Implement the Frame Processor Plugin in the iOS, Android and JS/TS Codebase using the guides above
 6. Publish the plugin to npm. Users will only have to install the plugin using `npm i vision-camera-plugin-xxxxx` and add it to their `babel.config.js` file.
-7. [Add the plugin to the **official VisionCamera plugin list**](https://github.com/mrousavy/react-native-vision-camera/edit/main/docs/docs/guides/FRAME_PROCESSOR_PLUGIN_LIST.mdx) for more visibility
+7. [Add the plugin to the official VisionCamera plugin list](https://github.com/mrousavy/react-native-vision-camera/edit/main/docs/docs/guides/FRAME_PROCESSOR_PLUGIN_LIST.mdx) for more visibility
diff --git a/docs/docs/guides/FRAME_PROCESSOR_PLUGIN_LIST.mdx b/docs/docs/guides/FRAME_PROCESSOR_PLUGIN_LIST.mdx
index 9d64f14..9f7a6ac 100644
--- a/docs/docs/guides/FRAME_PROCESSOR_PLUGIN_LIST.mdx
+++ b/docs/docs/guides/FRAME_PROCESSOR_PLUGIN_LIST.mdx
@@ -19,15 +19,15 @@ cd ios && pod install

 ## Plugin List

-* [mrousavy/**react-native-fast-tflite**](https://github.com/mrousavy/react-native-fast-tflite): A plugin to run any Tensorflow Lite model inside React Native, written in C++ with GPU acceleration.
-* [mrousavy/**vision-camera-image-labeler**](https://github.com/mrousavy/vision-camera-image-labeler): A plugin to label images using MLKit Vision Image Labeler.
-* [mrousavy/**vision-camera-resize-plugin**](https://github.com/mrousavy/vision-camera-resize-plugin): A plugin for fast frame resizing to optimize execution speed of expensive AI algorithms.
-* [rodgomesc/**vision-camera-face-detector**](https://github.com/rodgomesc/vision-camera-face-detector): A plugin to detect faces using MLKit Vision Face Detector.
-* [rodgomesc/**vision-camera-qrcode-scanner**](https://github.com/rodgomesc/vision-camera-qrcode-scanner): A plugin to read barcodes using MLKit Vision QrCode Scanning
-* [mrousavy/**Colorwaver**](https://github.com/mrousavy/Colorwaver): An app (+ plugin) to detect color palettes in the real world.
-* [xulihang/**vision-camera-dynamsoft-barcode-reader**](https://github.com/xulihang/vision-camera-dynamsoft-barcode-reader): A plugin to read barcodes using Dynamsoft Barcode Reader.
-* [xulihang/**vision-camera-dynamsoft-label-recognizer**](https://github.com/xulihang/vision-camera-dynamsoft-label-recognizer): A plugin to recognize text on labels, MRZ passports, etc. using Dynamsoft Label Recognizer.
-* [aarongrider/**vision-camera-ocr**](https://github.com/aarongrider/vision-camera-ocr): A plugin to detect text in real time using MLKit Text Detector (OCR).
+* [mrousavy/react-native-fast-tflite](https://github.com/mrousavy/react-native-fast-tflite): A plugin to run any Tensorflow Lite model inside React Native, written in C++ with GPU acceleration.
+* [mrousavy/vision-camera-image-labeler](https://github.com/mrousavy/vision-camera-image-labeler): A plugin to label images using MLKit Vision Image Labeler.
+* [mrousavy/vision-camera-resize-plugin](https://github.com/mrousavy/vision-camera-resize-plugin): A plugin for fast frame resizing to optimize execution speed of expensive AI algorithms.
+* [rodgomesc/vision-camera-face-detector](https://github.com/rodgomesc/vision-camera-face-detector): A plugin to detect faces using MLKit Vision Face Detector.
+* [rodgomesc/vision-camera-qrcode-scanner](https://github.com/rodgomesc/vision-camera-qrcode-scanner): A plugin to read barcodes using MLKit Vision QrCode Scanning
+* [mrousavy/Colorwaver](https://github.com/mrousavy/Colorwaver): An app (+ plugin) to detect color palettes in the real world.
+* [xulihang/vision-camera-dynamsoft-barcode-reader](https://github.com/xulihang/vision-camera-dynamsoft-barcode-reader): A plugin to read barcodes using Dynamsoft Barcode Reader.
+* [xulihang/vision-camera-dynamsoft-label-recognizer](https://github.com/xulihang/vision-camera-dynamsoft-label-recognizer): A plugin to recognize text on labels, MRZ passports, etc. using Dynamsoft Label Recognizer.
+* [aarongrider/vision-camera-ocr](https://github.com/aarongrider/vision-camera-ocr): A plugin to detect text in real time using MLKit Text Detector (OCR).
diff --git a/docs/docs/guides/SETUP.mdx b/docs/docs/guides/SETUP.mdx
index be0c7fb..f5838f9 100644
--- a/docs/docs/guides/SETUP.mdx
+++ b/docs/docs/guides/SETUP.mdx
@@ -15,7 +15,7 @@ import useBaseUrl from '@docusaurus/useBaseUrl'

 ## Installing the library

-Install [**react-native-vision-camera**](https://www.npmjs.com/package/react-native-vision-camera) through npm:
+Install [react-native-vision-camera](https://www.npmjs.com/package/react-native-vision-camera) through npm:

-> **(Optional)** If you want to use [**Frame Processors**](/docs/guides/frame-processors), you need to install [**react-native-worklets-core**](https://github.com/margelo/react-native-worklets-core) 0.2.0 or higher.
+> **(Optional)** If you want to use [Frame Processors](/docs/guides/frame-processors), you need to install [react-native-worklets-core](https://github.com/margelo/react-native-worklets-core) 0.2.0 or higher.

 ## Updating manifests

@@ -152,7 +152,7 @@ const { hasPermission, requestPermission } = useMicrophonePermission()

 There could be three states to this:

 1. First time opening the app, `hasPermission` is false. Call `requestPermission()` now.
-2. User already granted permission, `hasPermission` is true. Continue with [**using the `<Camera>` view**](#use-the-camera-view).
+2. User already granted permission, `hasPermission` is true. Continue with [using the `<Camera>` view](#use-the-camera-view).
 3. User explicitly denied permission, `hasPermission` is false and `requestPermission()` will return false. Tell the user that he needs to grant Camera Permission, potentially by using the [`Linking` API](https://reactnative.dev/docs/linking#opensettings) to open the App Settings.

@@ -170,7 +170,7 @@ const microphonePermission = await Camera.getMicrophonePermissionStatus()

 A permission status can have the following values:

-* `granted`: Your app is authorized to use said permission. Continue with [**using the `<Camera>` view**](#use-the-camera-view).
+* `granted`: Your app is authorized to use said permission. Continue with [using the `<Camera>` view](#use-the-camera-view).
 * `not-determined`: Your app has not yet requested permission from the user. [Continue by calling the **request** functions.](#requesting-permissions)
 * `denied`: Your app has already requested permissions from the user, but was explicitly denied. You cannot use the **request** functions again, but you can use the [`Linking` API](https://reactnative.dev/docs/linking#opensettings) to redirect the user to the Settings App where he can manually grant the permission.
 * `restricted`: Your app cannot use the Camera or Microphone because that functionality has been restricted, possibly due to active restrictions such as parental controls being in place.
@@ -190,7 +190,7 @@ const newMicrophonePermission = await Camera.requestMicrophonePermission()

 The permission request status can have the following values:

-* `granted`: Your app is authorized to use said permission. Continue with [**using the `<Camera>` view**](#use-the-camera-view).
+* `granted`: Your app is authorized to use said permission. Continue with [using the `<Camera>` view](#use-the-camera-view).
 * `denied`: The user explicitly denied the permission request alert. You cannot use the **request** functions again, but you can use the [`Linking` API](https://reactnative.dev/docs/linking#opensettings) to redirect the user to the Settings App where he can manually grant the permission.
 * `restricted`: Your app cannot use the Camera or Microphone because that functionality has been restricted, possibly due to active restrictions such as parental controls being in place.