---
id: devices
title: Camera Devices
sidebar_label: Camera Devices
---

import useBaseUrl from '@docusaurus/useBaseUrl';

<div>
  <svg xmlns="http://www.w3.org/2000/svg" width="283" height="535" style={{ float: 'right' }}>
    <image href={useBaseUrl("img/demo.gif")} x="18" y="33" width="247" height="469" />
    <image href={useBaseUrl("img/frame.png")} width="283" height="535" />
  </svg>
</div>

### What are camera devices?

Camera devices are the physical (or "virtual") devices that can be used to record videos or capture photos.

* **Physical**: A physical camera device is a **camera lens on your phone**. Different physical camera devices have different specifications, such as different capture formats, fields of view, focal lengths, and more. Some phones have multiple physical camera devices.

> Examples: _"Backside Wide-Angle Camera"_, _"Frontside Wide-Angle Camera (FaceTime HD)"_, _"Ultra-Wide-Angle back camera"_.

* **Virtual**: A virtual camera device is a **combination of one or more physical camera devices**, and provides features such as _virtual-device-switchover_ while zooming or _combined photo delivery_ from all physical cameras to produce higher-quality images.

> Examples: _"Triple-Camera"_, _"Dual-Wide-Angle Camera"_

### Get available camera devices

To get a list of all available camera devices, use the `getAvailableCameraDevices` function:

```ts
const devices = await Camera.getAvailableCameraDevices()
```

:::note
See the [`CameraDevice` type](https://github.com/cuvent/react-native-vision-camera/blob/main/src/CameraDevice.ts) for more information about a Camera Device
:::

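For orientation, here is a hypothetical, trimmed-down sketch of what a device object looks like, using only the properties mentioned on this page (the linked `CameraDevice` type is the authoritative definition):

```ts
// Hypothetical, trimmed-down sketch of a camera device object.
// Only properties mentioned on this page are listed; see the linked
// CameraDevice type for the real, complete definition.
interface CameraDeviceSketch {
  devices: string[]            // physical device types, e.g. ["wide-angle-camera"]
  position: 'front' | 'back'   // simplified; the real type knows more positions
  hasFlash: boolean
  formats: unknown[]           // capture formats, see the "Camera Formats" page
  // ...and more
}
```
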
A camera device (`CameraDevice`) contains a list of physical device types this camera device consists of.

Example:

* For a single Wide-Angle camera, this would be `["wide-angle-camera"]`
* For a Triple-Camera, this would be `["wide-angle-camera", "ultra-wide-angle-camera", "telephoto-camera"]`

You can use the helper function `parsePhysicalDeviceTypes` to convert a list of physical devices to a single device descriptor type which can also describe virtual devices:

```ts
console.log(device.devices)
// --> ["wide-angle-camera", "ultra-wide-angle-camera", "telephoto-camera"]

const deviceType = parsePhysicalDeviceTypes(device.devices)
console.log(deviceType)
// --> "triple-camera"
```

The `CameraDevice` type also contains other useful information describing a camera device, such as its `position` ("front", "back", ...), `hasFlash`, its `formats` (see [Camera Formats](formats)), and more.

:::caution
Be careful when filtering out unneeded camera devices: not every phone supports all camera device types, and some phones don't even have a front camera. Always make sure you end up with _some_ camera device, even if it isn't the one with the best features.
:::

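For example, to prefer a back camera with a flash while never ending up empty-handed, a sketch using only the properties named above (the exact selection logic is up to your app):

```ts
const devices = await Camera.getAvailableCameraDevices()
// Prefer a back camera with a flash, then any back camera,
// then fall back to whatever device is available.
const device =
  devices.find((d) => d.position === 'back' && d.hasFlash) ??
  devices.find((d) => d.position === 'back') ??
  devices[0]
```
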
### `useCameraDevices` hook

The react-native-vision-camera library provides a hook to make camera device selection a lot easier.

You can specify a device type to only find devices with the given type:

```tsx
function App() {
  const devices = useCameraDevices('wide-angle-camera')
  const device = devices.back

  if (device == null) return <LoadingView />
  return (
    <Camera
      style={StyleSheet.absoluteFill}
      device={device}
    />
  )
}
```

Or omit the device type to get the best matching camera device. In that case, the hook prefers camera devices with more physical cameras and always ranks "wide-angle" physical camera devices first.

> Example: `triple-camera` > `dual-wide-camera` > `dual-camera` > `wide-angle-camera` > `ultra-wide-angle-camera` > `telephoto-camera` > ...

```tsx
function App() {
  const devices = useCameraDevices()
  const device = devices.back

  if (device == null) return <LoadingView />
  return (
    <Camera
      style={StyleSheet.absoluteFill}
      device={device}
    />
  )
}
```

### The `isActive` prop

The Camera's `isActive` property can be used to _pause_ the session (`isActive={false}`) while still keeping the session "warm". This is more desirable than completely unmounting the camera, since _resuming_ the session (`isActive={true}`) will be **much faster** than re-mounting the camera view.

For example, you want to **pause the camera** when the user **navigates to another page** or **minimizes the app**, since otherwise the camera continues to run in the background without the user seeing it, causing **significant battery drain**. Also, on iOS a green dot in the status bar indicates to the user that the camera is still active, which may raise privacy concerns. (🔗 See ["About the orange and green indicators in your iPhone status bar"](https://support.apple.com/en-us/HT211876))

This example demonstrates how you could pause the camera stream once the app goes into background, using a custom `useIsAppForeground` hook:

```tsx
function App() {
  const devices = useCameraDevices()
  const device = devices.back
  const isAppForeground = useIsAppForeground()

  if (device == null) return <LoadingView />
  return (
    <Camera
      style={StyleSheet.absoluteFill}
      device={device}
      isActive={isAppForeground}
    />
  )
}
```

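`useIsAppForeground` is not part of this library; a minimal sketch based on React Native's `AppState` API could look like this (assuming a React Native version where `AppState.addEventListener` returns a subscription; older versions use `AppState.removeEventListener` for cleanup):

```tsx
import { useState, useEffect } from 'react'
import { AppState, AppStateStatus } from 'react-native'

export function useIsAppForeground(): boolean {
  const [isForeground, setIsForeground] = useState(true)

  useEffect(() => {
    // 'active' means the app is running in the foreground
    const onChange = (state: AppStateStatus): void => {
      setIsForeground(state === 'active')
    }
    const subscription = AppState.addEventListener('change', onChange)
    return () => subscription.remove()
  }, [])

  return isForeground
}
```
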
:::info
If you don't care about fast resume times you can also fully unmount the `<Camera>` view instead, which will use less memory (RAM).
:::

<br />

#### 🚀 Next section: [Camera Formats](formats)