feat: Full Android rewrite (CameraX -> Camera2) (#1674)
* Nuke CameraX
* fix: Run View Finder on UI Thread
* Open Camera, set up Threads
* fix init
* Mirror if needed
* Try PreviewView
* Use max resolution
* Add `hardwareLevel` property
* Check if output type is supported
* Replace `frameRateRanges` with `minFps` and `maxFps`
* Remove `isHighestPhotoQualitySupported`
* Remove `colorSpace` The native platforms will use the best / most accurate colorSpace by default anyways.
* HDR
* Check from format
* fix
* Remove `supportsParallelVideoProcessing`
* Correctly return video/photo sizes on Android now. Finally
* Log all Device props
* Log if optimized usecase is used
* Cleanup
* Configure Camera Input only once
* Revert "Configure Camera Input only once" This reverts commit 0fd6c03f54c7566cb5592053720c4a8743aba92e.
* Extract Camera configuration
* Try to reconfigure all
* Hook based
* Properly set up `CameraSession`
* Delete unused
* fix: Fix recreate when outputs change
* Update NativePreviewView.kt
* Use callback for closing
* Catch CameraAccessException
* Finally got it stable
* Remove isMirrored
* Implement `takePhoto()`
* Add ExifInterface library
* Run findViewById on UI Thread
* Add Photo Output Surface to takePhoto
* Fix Video Stabilization Modes
* Optimize Imports
* More logs
* Update CameraSession.kt
* Close Image
* Use separate Executor in CameraQueue
* Delete hooks
* Use same Thread again
* If opened, call error
* Update CameraSession.kt
* Log HW level
* fix: Don't enable Stream Use Case if it's not 100% supported
* Move some stuff
* Cleanup PhotoOutputSynchronizer
* Try just open in suspend fun
* Some synchronization fixes
* fix logs
* Update CameraDevice+createCaptureSession.kt
* Update CameraDevice+createCaptureSession.kt
* fixes
* fix: Use Snapshot Template for speed capture prio
* Use PREVIEW template for repeating request
* Use `TEMPLATE_RECORD` if video use-case is attached
* Use `isRunning` flag
* Recreate session everytime on active/inactive
* Lazily get values in capture session
* Stability
* Rebuild session if outputs change
* Set `didOutputsChange` back to false
* Capture first in lock
* Try
* kinda fix it? idk
* fix: Keep Outputs
* Refactor into single method
* Update CameraView.kt
* Use Enums for type safety
* Implement Orientation (I think)
* Move RefCount management to Java (Frame)
* Don't crash when dropping a Frame
* Prefer Devices with higher max resolution
* Prefer multi-cams
* Use FastImage for Media Page
* Return orientation in takePhoto()
* Load orientation from EXIF Data
* Add `isMirrored` props and documentation for PhotoFile
* fix: Return `not-determined` on Android
* Update CameraViewModule.kt
* chore: Upgrade packages
* fix: Fix Metro Config
* Cleanup config
* Properly mirror Images on save
* Prepare MediaRecorder
* Start/Stop MediaRecorder
* Remove `takeSnapshot()` It no longer works on Android and never worked on iOS. Users could use useFrameProcessor to take a Snapshot
* Use `MediaCodec`
* Move to `VideoRecording` class
* Cleanup Snapshot
* Create `SkiaPreviewView` hybrid class
* Create OpenGL context
* Create `SkiaPreviewView`
* Fix texture creation missing context
* Draw red frame
* Somehow get it working
* Add Skia CMake setup
* Start looping
* Init OpenGL
* Refactor into `SkiaRenderer`
* Cleanup PreviewSize
* Set up
* Only re-render UI if there is a new Frame
* Preview
* Fix init
* Try rendering Preview
* Update SkiaPreviewView.kt
* Log version
* Try using Skia (fail)
* Drawwwww!!!!!!!!!! 🎉
* Use Preview Size
* Clear first
* Refactor into SkiaRenderer
* Add `previewType: "none"` on iOS
* Simplify a lot
* Draw Camera? For some reason? I have no idea anymore
* Fix OpenGL errors
* Got it kinda working again?
* Actually draw Frame woah
* Clean up code
* Cleanup
* Update on main
* Synchronize render calls
* holy shit
* Update SkiaRenderer.cpp
* Update SkiaRenderer.cpp
* Refactor
* Update SkiaRenderer.cpp
* Check for `NO_INPUT_TEXTURE`^
* Post & Wait
* Set input size
* Add Video back again
* Allow session without preview
* Convert JPEG to byte[]
* feat: Use `ImageReader` and use YUV Image Buffers in Skia Context (#1689)
* Try to pass YUV Buffers as Pixmaps
* Create pixmap!
* Clean up
* Render to preview
* Only render if we have an output surface
* Update SkiaRenderer.cpp
* Fix Y+U+V sampling code
* Cleanup
* Fix Semaphore 0
* Use 4:2:0 YUV again idk
* Update SkiaRenderer.h
* Set minSdk to 26
* Set surface
* Revert "Set minSdk to 26" This reverts commit c4085b7c16c628532e5c2d68cf7ed11c751d0b48.
* Set previewType
* feat: Video Recording with Camera2 (#1691)
* Rename
* Update CameraSession.kt
* Use `SurfaceHolder` instead of `SurfaceView` for output
* Update CameraOutputs.kt
* Update CameraSession.kt
* fix: Fix crash when Preview is null
* Check if snapshot capture is supported
* Update RecordingSession.kt
* S
* Use `MediaRecorder`
* Make audio optional
* Add Torch
* Output duration
* Update RecordingSession.kt
* Start RecordingSession
* logs
* More log
* Base for preparing pass-through Recording
* Use `ImageWriter` to append Images to the Recording Surface
* Stream PRIVATE GPU_SAMPLED_IMAGE Images
* Add flags
* Close session on stop
* Allow customizing `videoCodec` and `fileType`
* Enable Torch
* Fix Torch Mode
* Fix comparing outputs with hashCode
* Update CameraSession.kt
* Correctly pass along Frame Processor
* fix: Use AUDIO_BIT_RATE of 16 * 44,1Khz
* Use CAMCORDER instead of MIC microphone
* Use 1 channel
* fix: Use `Orientation`
* Add `native` PixelFormat
* Update iOS to latest Skia integration
* feat: Add `pixelFormat` property to Camera
* Catch error in configureSession
* Fix JPEG format
* Clean up best match finder
* Update CameraDeviceDetails.kt
* Clamp sizes by maximum CamcorderProfile size
* Remove `getAvailableVideoCodecs`
* chore: release 3.0.0-rc.5
* Use maximum video size of RECORD as default
* Update CameraDeviceDetails.kt
* Add a todo
* Add JSON device to issue report
* Prefer `full` devices and flash
* Lock to 30 FPS on Samsung
* Implement Zoom
* Refactor
* Format -> PixelFormat
* fix: Feat `pixelFormat` -> `pixelFormats`
* Update TROUBLESHOOTING.mdx
* Format
* fix: Implement `zoom` for Photo Capture
* fix: Don't run if `isActive` is `false`
* fix: Call `examplePlugin(frame)`
* fix: Fix Flash
* fix: Use `react-native-worklets-core`!
* fix: Fix import
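One of the central API changes listed above is replacing `frameRateRanges` with flat `minFps`/`maxFps` fields on each format. A minimal sketch of what that simplifies, using hypothetical mock objects (not real `CameraDeviceFormat` instances):

```typescript
// Mock format objects; the fields mirror the new CameraDeviceFormat shape.
interface FormatLike {
  videoWidth: number;
  videoHeight: number;
  minFps: number;
  maxFps: number;
}

// Previously callers had to flatten `frameRateRanges` to test whether a format
// can run at a target FPS; with `minFps`/`maxFps` it is a single comparison.
function supportsFps(format: FormatLike, fps: number): boolean {
  return format.minFps <= fps && fps <= format.maxFps;
}

const formats: FormatLike[] = [
  { videoWidth: 3840, videoHeight: 2160, minFps: 1, maxFps: 30 },
  { videoWidth: 1920, videoHeight: 1080, minFps: 1, maxFps: 60 },
];
const sixtyFpsFormats = formats.filter((f) => supportsFps(f, 60));
```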
@@ -1,5 +1,5 @@
 import React from 'react';
-import { requireNativeComponent, NativeSyntheticEvent, findNodeHandle, NativeMethods, Platform } from 'react-native';
+import { requireNativeComponent, NativeSyntheticEvent, findNodeHandle, NativeMethods } from 'react-native';
 import type { CameraDevice } from './CameraDevice';
 import type { ErrorWithCause } from './CameraError';
 import { CameraCaptureError, CameraRuntimeError, tryParseNativeCameraError, isErrorWithCause } from './CameraError';
||||
@@ -8,13 +8,12 @@ import { assertJSIAvailable } from './JSIHelper';
 import { CameraModule } from './NativeCameraModule';
 import type { PhotoFile, TakePhotoOptions } from './PhotoFile';
 import type { Point } from './Point';
-import type { TakeSnapshotOptions } from './Snapshot';
-import type { CameraVideoCodec, RecordVideoOptions, VideoFile, VideoFileType } from './VideoFile';
+import type { RecordVideoOptions, VideoFile } from './VideoFile';
 import { VisionCameraProxy } from './FrameProcessorPlugins';
 
 //#region Types
-export type CameraPermissionStatus = 'authorized' | 'not-determined' | 'denied' | 'restricted';
-export type CameraPermissionRequestResult = 'authorized' | 'denied';
+export type CameraPermissionStatus = 'granted' | 'not-determined' | 'denied' | 'restricted';
+export type CameraPermissionRequestResult = 'granted' | 'denied';
 
 interface OnErrorEvent {
   code: string;
||||
@@ -24,7 +23,7 @@ interface OnErrorEvent {
 type NativeCameraViewProps = Omit<CameraProps, 'device' | 'onInitialized' | 'onError' | 'frameProcessor'> & {
   cameraId: string;
   enableFrameProcessor: boolean;
-  previewType: 'native' | 'skia';
+  previewType: 'native' | 'skia' | 'none';
   onInitialized?: (event: NativeSyntheticEvent<void>) => void;
   onError?: (event: NativeSyntheticEvent<OnErrorEvent>) => void;
   onViewReady: () => void;
||||
@@ -116,33 +115,6 @@ export class Camera extends React.PureComponent<CameraProps> {
     }
   }
-
-  /**
-   * Take a snapshot of the current preview view.
-   *
-   * This can be used as an alternative to {@linkcode Camera.takePhoto | takePhoto()} if speed is more important than quality
-   *
-   * @throws {@linkcode CameraCaptureError} When any kind of error occured while taking a snapshot. Use the {@linkcode CameraCaptureError.code | code} property to get the actual error
-   *
-   * @platform Android
-   * @example
-   * ```ts
-   * const photo = await camera.current.takeSnapshot({
-   *   quality: 85,
-   *   skipMetadata: true
-   * })
-   * ```
-   */
-  public async takeSnapshot(options?: TakeSnapshotOptions): Promise<PhotoFile> {
-    if (Platform.OS !== 'android')
-      throw new CameraCaptureError('capture/capture-type-not-supported', `'takeSnapshot()' is not available on ${Platform.OS}!`);
-
-    try {
-      return await CameraModule.takeSnapshot(this.handle, options ?? {});
-    } catch (e) {
-      throw tryParseNativeCameraError(e);
-    }
-  }
 
   /**
    * Start a new video recording.
    *
|
||||
@@ -286,25 +258,6 @@ export class Camera extends React.PureComponent<CameraProps> {
   }
   //#endregion
-
-  /**
-   * Get a list of video codecs the current camera supports for a given file type. Returned values are ordered by efficiency (descending).
-   * @example
-   * ```ts
-   * const codecs = await camera.current.getAvailableVideoCodecs("mp4")
-   * ```
-   * @throws {@linkcode CameraRuntimeError} When any kind of error occured while getting available video codecs. Use the {@linkcode ParameterError.code | code} property to get the actual error
-   * @platform iOS
-   */
-  public async getAvailableVideoCodecs(fileType?: VideoFileType): Promise<CameraVideoCodec[]> {
-    if (Platform.OS !== 'ios') return []; // no video codecs supported on other platforms.
-
-    try {
-      return await CameraModule.getAvailableVideoCodecs(this.handle, fileType);
-    } catch (e) {
-      throw tryParseNativeCameraError(e);
-    }
-  }
 
   //#region Static Functions (NativeModule)
   /**
    * Install JSI Bindings for Frame Processors
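The hunks above also rename the permission values: `'authorized'` became `'granted'` in `CameraPermissionStatus` and `CameraPermissionRequestResult`. A sketch of a migration-friendly check (the helper name is hypothetical, not part of the library):

```typescript
// New union per this PR; the old value 'authorized' is kept only for migration.
type CameraPermissionStatus = 'granted' | 'not-determined' | 'denied' | 'restricted';

// Hypothetical helper for call sites that may still see persisted old values.
function hasCameraPermission(status: CameraPermissionStatus | 'authorized'): boolean {
  return status === 'granted' || status === 'authorized';
}
```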
@@ -42,43 +42,6 @@ export const parsePhysicalDeviceTypes = (
   throw new Error(`Invalid physical device type combination! ${physicalDeviceTypes.join(' + ')}`);
 };
-
-/**
- * Indicates a format's color space.
- *
- * #### The following colorspaces are available on iOS:
- * * `"srgb"`: The sGRB color space.
- * * `"p3-d65"`: The P3 D65 wide color space which uses Illuminant D65 as the white point
- * * `"hlg-bt2020"`: The BT2020 wide color space which uses Illuminant D65 as the white point and Hybrid Log-Gamma as the transfer function
- *
- * > See ["AVCaptureColorSpace"](https://developer.apple.com/documentation/avfoundation/avcapturecolorspace) for more information.
- *
- * #### The following colorspaces are available on Android:
- * * `"yuv"`: The Multi-plane Android YCbCr color space. (YUV 420_888, 422_888 or 444_888)
- * * `"jpeg"`: The compressed JPEG color space.
- * * `"jpeg-depth"`: The compressed JPEG color space including depth data.
- * * `"raw"`: The Camera's RAW sensor color space. (Single-channel Bayer-mosaic image, usually 16 bit)
- * * `"heic"`: The compressed HEIC color space.
- * * `"private"`: The Android private opaque image format. (The choices of the actual format and pixel data layout are entirely up to the device-specific and framework internal implementations, and may vary depending on use cases even for the same device. These buffers are not directly accessible to the application)
- * * `"depth-16"`: The Android dense depth image format (16 bit)
- * * `"unknown"`: Placeholder for an unknown image/pixel format. [Edit this file](https://github.com/mrousavy/react-native-vision-camera/edit/main/android/src/main/java/com/mrousavy/camera/parsers/ImageFormat+String.kt) to add a name for the unknown format.
- *
- * > See ["Android Color Formats"](https://jbit.net/Android_Colors/) for more information.
- */
-export type ColorSpace =
-  // ios
-  | 'hlg-bt2020'
-  | 'p3-d65'
-  | 'srgb'
-  // android
-  | 'yuv'
-  | 'jpeg'
-  | 'jpeg-depth'
-  | 'raw'
-  | 'heic'
-  | 'private'
-  | 'depth-16'
-  | 'unknown';
 
 /**
  * Indicates a format's autofocus system.
  *
@@ -89,21 +52,16 @@ export type ColorSpace =
 export type AutoFocusSystem = 'contrast-detection' | 'phase-detection' | 'none';
 
 /**
- * Indicates a format's supported video stabilization mode
+ * Indicates a format's supported video stabilization mode. Enabling video stabilization may introduce additional latency into the video capture pipeline.
  *
- * * `"off"`: Indicates that video should not be stabilized
- * * `"standard"`: Indicates that video should be stabilized using the standard video stabilization algorithm introduced with iOS 5.0. Standard video stabilization has a reduced field of view. Enabling video stabilization may introduce additional latency into the video capture pipeline
- * * `"cinematic"`: Indicates that video should be stabilized using the cinematic stabilization algorithm for more dramatic results. Cinematic video stabilization has a reduced field of view compared to standard video stabilization. Enabling cinematic video stabilization introduces much more latency into the video capture pipeline than standard video stabilization and consumes significantly more system memory. Use narrow or identical min and max frame durations in conjunction with this mode
- * * `"cinematic-extended"`: Indicates that the video should be stabilized using the extended cinematic stabilization algorithm. Enabling extended cinematic stabilization introduces longer latency into the video capture pipeline compared to the AVCaptureVideoStabilizationModeCinematic and consumes more memory, but yields improved stability. It is recommended to use identical or similar min and max frame durations in conjunction with this mode (iOS 13.0+)
+ * * `"off"`: No video stabilization. Indicates that video should not be stabilized
+ * * `"standard"`: Standard software-based video stabilization. Standard video stabilization reduces the field of view by about 10%.
+ * * `"cinematic"`: Advanced software-based video stabilization. This applies more aggressive cropping or transformations than standard.
+ * * `"cinematic-extended"`: Extended software- and hardware-based stabilization that aggressively crops and transforms the video to apply a smooth cinematic stabilization.
  * * `"auto"`: Indicates that the most appropriate video stabilization mode for the device and format should be chosen automatically
  */
 export type VideoStabilizationMode = 'off' | 'standard' | 'cinematic' | 'cinematic-extended' | 'auto';
 
-export interface FrameRateRange {
-  minFrameRate: number;
-  maxFrameRate: number;
-}
-
 /**
  * A Camera Device's video format. Do not create instances of this type yourself, only use {@linkcode Camera.getAvailableCameraDevices | Camera.getAvailableCameraDevices()}.
  */
@@ -124,12 +82,6 @@ export interface CameraDeviceFormat {
    * The video resolution's width
    */
   videoWidth: number;
-  /**
-   * A boolean value specifying whether this format supports the highest possible photo quality that can be delivered on the current platform.
-   *
-   * @platform iOS 13.0+
-   */
-  isHighestPhotoQualitySupported?: boolean;
   /**
    * Maximum supported ISO value
    */
@@ -146,12 +98,6 @@ export interface CameraDeviceFormat {
    * The maximum zoom factor (e.g. `128`)
    */
   maxZoom: number;
-  /**
-   * The available color spaces.
-   *
-   * Note: On Android, this will always be only `["yuv"]`
-   */
-  colorSpaces: ColorSpace[];
   /**
    * Specifies whether this format supports HDR mode for video capture
    */
@@ -161,9 +107,13 @@ export interface CameraDeviceFormat {
    */
   supportsPhotoHDR: boolean;
   /**
-   * All available frame rate ranges. You can query this to find the highest frame rate available
+   * The minimum frame rate this Format needs to run at. High resolution formats often run at lower frame rates.
    */
-  frameRateRanges: FrameRateRange[];
+  minFps: number;
+  /**
+   * The maximum frame rate this Format is able to run at. High resolution formats often run at lower frame rates.
+   */
+  maxFps: number;
   /**
    * Specifies this format's auto focus system.
    */
@@ -173,11 +123,10 @@ export interface CameraDeviceFormat {
    */
   videoStabilizationModes: VideoStabilizationMode[];
   /**
-   * Specifies this format's pixel format. The pixel format specifies how the individual pixels are interpreted as a visual image.
-   *
-   * The most common format is `420v`. Some formats (like `x420`) are not compatible with some frame processor plugins (e.g. MLKit)
+   * Specifies this format's supported pixel-formats.
+   * In most cases, this is `['native', 'yuv']`.
    */
-  pixelFormat: PixelFormat;
+  pixelFormats: PixelFormat[];
 }
 
 /**
@@ -251,16 +200,6 @@ export interface CameraDevice {
    * See [the Camera Formats documentation](https://react-native-vision-camera.com/docs/guides/formats) for more information about Camera Formats.
    */
   formats: CameraDeviceFormat[];
-  /**
-   * Whether this camera device supports using Video Recordings (`video={true}`) and Frame Processors (`frameProcessor={...}`) at the same time. See ["The `supportsParallelVideoProcessing` prop"](https://react-native-vision-camera.com/docs/guides/devices#the-supportsparallelvideoprocessing-prop) for more information.
-   *
-   * If this property is `false`, you can only enable `video` or add a `frameProcessor`, but not both.
-   *
-   * * On iOS this value is always `true`.
-   * * On newer Android devices this value is always `true`.
-   * * On older Android devices this value is `false` if the Camera's hardware level is `LEGACY` or `LIMITED`, `true` otherwise. (See [`INFO_SUPPORTED_HARDWARE_LEVEL`](https://developer.android.com/reference/android/hardware/camera2/CameraCharacteristics#INFO_SUPPORTED_HARDWARE_LEVEL) or [the tables at "Regular capture"](https://developer.android.com/reference/android/hardware/camera2/CameraDevice#regular-capture))
-   */
-  supportsParallelVideoProcessing: boolean;
   /**
    * Whether this camera device supports low light boost.
    */
@@ -281,4 +220,10 @@ export interface CameraDevice {
    * Specifies whether this device supports focusing ({@linkcode Camera.focus | Camera.focus(...)})
    */
   supportsFocus: boolean;
+  /**
+   * The hardware level of the Camera.
+   * - On Android, some older devices are running at a `legacy` or `limited` level which means they are running in a backwards compatible mode.
+   * - On iOS, all devices are `full`.
+   */
+  hardwareLevel: 'legacy' | 'limited' | 'full';
 }
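The new `hardwareLevel` property above effectively replaces the removed `supportsParallelVideoProcessing` flag. A rough equivalent of the old check, sketched against mock data (the helper is hypothetical, not a library API):

```typescript
// Per the doc comment above: legacy/limited devices run in a backwards
// compatible mode with fewer guaranteed stream combinations.
type HardwareLevel = 'legacy' | 'limited' | 'full';

// Hypothetical replacement for the old supportsParallelVideoProcessing check.
function canRunVideoAndFrameProcessor(hardwareLevel: HardwareLevel): boolean {
  return hardwareLevel === 'full';
}
```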
@@ -9,9 +9,9 @@ export type DeviceError =
   | 'device/configuration-error'
   | 'device/no-device'
   | 'device/invalid-device'
-  | 'device/parallel-video-processing-not-supported'
   | 'device/torch-unavailable'
   | 'device/microphone-unavailable'
+  | 'device/pixel-format-not-supported'
   | 'device/low-light-boost-not-supported'
   | 'device/focus-not-supported'
   | 'device/camera-not-available-on-simulator';
@@ -23,6 +23,8 @@ export type FormatError =
   | 'format/invalid-color-space';
 export type SessionError =
   | 'session/camera-not-ready'
+  | 'session/camera-cannot-be-opened'
+  | 'session/camera-has-been-disconnected'
   | 'session/audio-session-setup-failed'
   | 'session/audio-in-use-by-other-app'
   | 'session/audio-session-failed-to-activate';
@@ -1,5 +1,5 @@
 import type { ViewProps } from 'react-native';
-import type { CameraDevice, CameraDeviceFormat, ColorSpace, VideoStabilizationMode } from './CameraDevice';
+import type { CameraDevice, CameraDeviceFormat, VideoStabilizationMode } from './CameraDevice';
 import type { CameraRuntimeError } from './CameraError';
 import type { DrawableFrame, Frame } from './Frame';
 import type { Orientation } from './Orientation';
@@ -14,6 +14,11 @@ export type FrameProcessor =
       type: 'skia-frame-processor';
     };
 
+// TODO: Replace `enableHighQualityPhotos: boolean` in favor of `priorization: 'photo' | 'video'`
+// TODO: Use RCT_ENUM_PARSER for stuff like previewType, torch, videoStabilizationMode, and orientation
+// TODO: Use Photo HostObject for stuff like depthData, portraitEffects, etc.
+// TODO: Add RAW capture support
+
 export interface CameraProps extends ViewProps {
   /**
    * The Camera Device to use.
@@ -52,13 +57,27 @@ export interface CameraProps extends ViewProps {
   /**
    * Enables **video capture** with the `startRecording` function (see ["Recording Videos"](https://react-native-vision-camera.com/docs/guides/capturing/#recording-videos))
    *
-   * Note: If you want to use `video` and `frameProcessor` simultaneously, make sure [`supportsParallelVideoProcessing`](https://react-native-vision-camera.com/docs/guides/devices#the-supportsparallelvideoprocessing-prop) is `true`.
+   * Note: If both the `photo` and `video` properties are enabled at the same time and the device is running at a `hardwareLevel` of `'legacy'` or `'limited'`, VisionCamera _might_ use a lower resolution for video capture due to hardware constraints.
    */
   video?: boolean;
   /**
    * Enables **audio capture** for video recordings (see ["Recording Videos"](https://react-native-vision-camera.com/docs/guides/capturing/#recording-videos))
    */
   audio?: boolean;
+  /**
+   * Specifies the pixel format for the video pipeline.
+   *
+   * Frames from a [Frame Processor](https://mrousavy.github.io/react-native-vision-camera/docs/guides/frame-processors) will be streamed in the pixel format specified here.
+   *
+   * While `native` and `yuv` are the most efficient formats, some ML models (such as MLKit Barcode detection) require input Frames to be in RGB colorspace, otherwise they just output nonsense.
+   *
+   * - `native`: The hardware native GPU buffer format. This is the most efficient format. (`PRIVATE` on Android, sometimes YUV on iOS)
+   * - `yuv`: The YUV (Y'CbCr 4:2:0 or NV21, 8-bit) format, either video- or full-range, depending on hardware capabilities. This is the second most efficient format.
+   * - `rgb`: The RGB (RGB, RGBA or ABGRA, 8-bit) format. This is least efficient and requires explicit conversion.
+   *
+   * @default `native`
+   */
+  pixelFormat?: 'native' | 'yuv' | 'rgb';
   //#endregion
 
   //#region Common Props (torch, zoom)
@@ -116,16 +135,9 @@ export interface CameraProps extends ViewProps {
    */
   lowLightBoost?: boolean;
   /**
-   * Specifies the color space to use for this camera device. Make sure the given `format` contains the given `colorSpace`.
+   * Specifies the video stabilization mode to use.
    *
-   * Requires `format` to be set.
-   */
-  colorSpace?: ColorSpace;
-  /**
-   * Specifies the video stabilization mode to use for this camera device. Make sure the given `format` contains the given `videoStabilizationMode`.
-   *
-   * Requires `format` to be set.
-   * @platform iOS
+   * Requires a `format` to be set that contains the given `videoStabilizationMode`.
    */
   videoStabilizationMode?: VideoStabilizationMode;
   //#endregion
@@ -183,8 +195,6 @@ export interface CameraProps extends ViewProps {
    *
    * If {@linkcode previewType | previewType} is set to `"skia"`, you can draw content to the `Frame` using the react-native-skia API.
    *
-   * Note: If you want to use `video` and `frameProcessor` simultaneously, make sure [`supportsParallelVideoProcessing`](https://react-native-vision-camera.com/docs/guides/devices#the-supportsparallelvideoprocessing-prop) is `true`.
-   *
    * > See [the Frame Processors documentation](https://mrousavy.github.io/react-native-vision-camera/docs/guides/frame-processors) for more information
    *
    * @example
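The new `pixelFormat` prop documented above trades pipeline efficiency against frame-processor plugin compatibility. A small sketch of that decision (the helper is hypothetical, not part of the library):

```typescript
// Union per the new CameraProps.pixelFormat prop.
type PixelFormatProp = 'native' | 'yuv' | 'rgb';

// Hypothetical helper: pick the cheapest format the pipeline actually needs.
function choosePixelFormat(hasFrameProcessor: boolean, pluginNeedsRgb: boolean): PixelFormatProp {
  if (!hasFrameProcessor) return 'native'; // no CPU access needed, keep GPU-private buffers
  return pluginNeedsRgb ? 'rgb' : 'yuv'; // e.g. MLKit Barcode detection wants RGB
}
```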
@@ -1,5 +1,6 @@
 import type { SkCanvas, SkPaint } from '@shopify/react-native-skia';
 import type { Orientation } from './Orientation';
+import { PixelFormat } from './PixelFormat';
 
 /**
  * A single frame, as seen by the camera.
@@ -40,6 +41,10 @@ export interface Frame {
    * consideration when running a frame processor. See also: `isMirrored`
    */
   orientation: Orientation;
+  /**
+   * Represents the pixel-format of the Frame.
+   */
+  pixelFormat: PixelFormat;
 
   /**
    * Get the underlying data of the Frame as a uint8 array buffer.
@@ -1,7 +1,7 @@
 import type { Frame, FrameInternal } from './Frame';
 import type { FrameProcessor } from './CameraProps';
 import { Camera } from './Camera';
-import { Worklets } from 'react-native-worklets/src';
+import { Worklets } from 'react-native-worklets-core';
 import { CameraRuntimeError } from './CameraError';
 
 type BasicParameterType = string | number | boolean | undefined;
@@ -1 +1 @@
-export type Orientation = 'portrait' | 'portraitUpsideDown' | 'landscapeLeft' | 'landscapeRight';
+export type Orientation = 'portrait' | 'portrait-upside-down' | 'landscape-left' | 'landscape-right';
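The `Orientation` values above moved from camelCase to kebab-case. A sketch of a lookup table for migrating persisted values (the table itself is hypothetical, not shipped by the library):

```typescript
// Old camelCase value -> new kebab-case value, per the renamed Orientation union.
const orientationMigration: Record<string, string> = {
  portrait: 'portrait',
  portraitUpsideDown: 'portrait-upside-down',
  landscapeLeft: 'landscape-left',
  landscapeRight: 'landscape-right',
};
```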
@@ -1,5 +1,5 @@
+import { Orientation } from './Orientation';
 import type { TemporaryFile } from './TemporaryFile';
-import { CameraPhotoCodec } from './VideoFile';
 
 export interface TakePhotoOptions {
   /**
@@ -9,7 +9,6 @@ export interface TakePhotoOptions {
    * * `"balanced"` Indicates that photo quality and speed of delivery are balanced in priority
    * * `"speed"` Indicates that speed of photo delivery is most important, even at the expense of quality
    *
-   * @platform iOS 13.0+
    * @default "balanced"
    */
   qualityPrioritization?: 'quality' | 'balanced' | 'speed';
@@ -32,15 +31,13 @@ export interface TakePhotoOptions {
    */
   enableAutoStabilization?: boolean;
   /**
-   * Specifies whether the photo output should use content aware distortion correction on this photo request (at its discretion).
+   * Specifies whether the photo output should use content aware distortion correction on this photo request.
+   * For example, the algorithm may not apply correction to faces in the center of a photo, but may apply it to faces near the photo’s edges.
    *
    * @platform iOS
    * @default false
    */
   enableAutoDistortionCorrection?: boolean;
-  /**
-   * Specifies the photo codec to use for this capture. The provided photo codec has to be supported by the session.
-   */
-  photoCodec?: CameraPhotoCodec;
   /**
    * When set to `true`, metadata reading and mapping will be skipped. ({@linkcode PhotoFile.metadata} will be null)
    *
@@ -56,12 +53,31 @@ export interface TakePhotoOptions {
 /**
  * Represents a Photo taken by the Camera written to the local filesystem.
  *
- * Related: {@linkcode Camera.takePhoto | Camera.takePhoto()}, {@linkcode Camera.takeSnapshot | Camera.takeSnapshot()}
+ * See {@linkcode Camera.takePhoto | Camera.takePhoto()}
  */
 export interface PhotoFile extends TemporaryFile {
+  /**
+   * The width of the photo, in pixels.
+   */
   width: number;
+  /**
+   * The height of the photo, in pixels.
+   */
   height: number;
+  /**
+   * Whether this photo is in RAW format or not.
+   */
   isRawPhoto: boolean;
+  /**
+   * Display orientation of the photo, relative to the Camera's sensor orientation.
+   *
+   * Note that Camera sensors are landscape, so e.g. "portrait" photos will have a value of "landscape-left", etc.
+   */
+  orientation: Orientation;
+  /**
+   * Whether this photo is mirrored (selfies) or not.
+   */
+  isMirrored: boolean;
   thumbnail?: Record<string, unknown>;
   /**
    * Metadata information describing the captured image.
@@ -69,7 +85,19 @@ export interface PhotoFile extends TemporaryFile {
    * @see [AVCapturePhoto.metadata](https://developer.apple.com/documentation/avfoundation/avcapturephoto/2873982-metadata)
    * @see [AndroidX ExifInterface](https://developer.android.com/reference/androidx/exifinterface/media/ExifInterface)
    */
-  metadata: {
+  metadata?: {
+    /**
+     * Orientation of the EXIF Image.
+     *
+     * * 1 = 0 degrees: the correct orientation, no adjustment is required.
+     * * 2 = 0 degrees, mirrored: image has been flipped back-to-front.
+     * * 3 = 180 degrees: image is upside down.
+     * * 4 = 180 degrees, mirrored: image has been flipped back-to-front and is upside down.
+     * * 5 = 90 degrees: image has been flipped back-to-front and is on its side.
+     * * 6 = 90 degrees, mirrored: image is on its side.
+     * * 7 = 270 degrees: image has been flipped back-to-front and is on its far side.
+     * * 8 = 270 degrees, mirrored: image is on its far side.
+     */
+    Orientation: number;
     /**
      * @platform iOS
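The EXIF `Orientation` value table added above maps naturally to a rotation plus a mirror flag. A decoding sketch following the degree/mirrored labels from that doc comment (the function itself is hypothetical, not a library API):

```typescript
// Decode an EXIF Orientation tag (1-8) into rotation degrees and a mirrored
// flag, following the value table documented on PhotoFile.metadata.Orientation.
function exifOrientation(tag: number): { degrees: number; mirrored: boolean } {
  const table: Record<number, { degrees: number; mirrored: boolean }> = {
    1: { degrees: 0, mirrored: false },
    2: { degrees: 0, mirrored: true },
    3: { degrees: 180, mirrored: false },
    4: { degrees: 180, mirrored: true },
    5: { degrees: 90, mirrored: false },
    6: { degrees: 90, mirrored: true },
    7: { degrees: 270, mirrored: false },
    8: { degrees: 270, mirrored: true },
  };
  // Fall back to "no adjustment required" for out-of-range tags.
  return table[tag] ?? { degrees: 0, mirrored: false };
}
```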
@@ -1,7 +1,15 @@
 /**
  * Represents the pixel format of a `Frame`.
- * * `420v`: 420 YpCbCr 8 Bi-Planar Video Range
- * * `420f`: 420 YpCbCr 8 Bi-Planar Full Range
- * * `x420`: 420 YpCbCr 10 Bi-Planar Video Range
+ *
+ * If you intend to read Pixels from this Frame or use an ML model for processing, make sure that you are
+ * using the expected `PixelFormat`, otherwise the plugin might not be able to properly understand the Frame's content.
+ *
+ * Most ML models operate in either `yuv` (recommended) or `rgb`.
+ *
+ * - `yuv`: Frame is in YUV pixel-format (Y'CbCr 4:2:0 or NV21, 8-bit)
+ * - `rgb`: Frame is in RGB pixel-format (RGB or RGBA, 8-bit)
+ * - `dng`: Frame is in a depth-data pixel format (DNG)
+ * - `native`: Frame is in the Camera's native Hardware Buffer format (PRIVATE). This is the most efficient Format.
+ * - `unknown`: Frame has unknown/unsupported pixel-format.
  */
-export type PixelFormat = '420f' | '420v' | 'x420';
+export type PixelFormat = 'yuv' | 'rgb' | 'dng' | 'native' | 'unknown';
@@ -1,28 +0,0 @@
|
||||
export interface TakeSnapshotOptions {
|
||||
/**
|
||||
* Specifies the quality of the JPEG. (0-100, where 100 means best quality (no compression))
|
||||
*
|
||||
* It is recommended to set this to `90` or even `80`, since the user probably won't notice a difference between `90`/`80` and `100`.
|
||||
*
|
||||
* @default 100
|
||||
*/
|
||||
quality?: number;
|
||||
|
||||
/**
|
||||
* Whether the Flash should be enabled or disabled
|
||||
*
|
||||
* @default "off"
|
||||
*/
|
||||
flash?: 'on' | 'off';
|
||||
|
||||
/**
|
||||
* When set to `true`, metadata reading and mapping will be skipped. ({@linkcode PhotoFile.metadata} will be `null`)
|
||||
*
|
||||
* This might result in a faster capture, as metadata reading and mapping requires File IO.
|
||||
*
|
||||
* @default false
|
||||
*
|
||||
* @platform Android
|
||||
*/
|
||||
skipMetadata?: boolean;
|
||||
}
|
@@ -1,21 +1,15 @@
 import type { CameraCaptureError } from './CameraError';
 import type { TemporaryFile } from './TemporaryFile';

-export type VideoFileType = 'mov' | 'avci' | 'm4v' | 'mp4';
-
-export type CameraVideoCodec = 'h264' | 'hevc' | 'hevc-alpha';
-export type CameraPhotoCodec = 'jpeg' | 'pro-res-4444' | 'pro-res-422' | 'pro-res-422-hq' | 'pro-res-422-lt' | 'pro-res-422-proxy';
-
 export interface RecordVideoOptions {
   /**
    * Set the video flash mode. Natively, this just enables the torch while recording.
    */
   flash?: 'on' | 'off' | 'auto';
   /**
-   * Sets the file type to use for the Video Recording.
-   * @default "mov"
+   * Specifies the output file type to record videos into.
    */
-  fileType?: VideoFileType;
+  fileType?: 'mov' | 'mp4';
   /**
    * Called when there was an unexpected runtime error while recording the video.
    */
@@ -25,13 +19,11 @@ export interface RecordVideoOptions {
    */
   onRecordingFinished: (video: VideoFile) => void;
   /**
-   * Set the video codec to record in. Different video codecs affect video quality and video size.
-   * To get a list of all available video codecs use the `getAvailableVideoCodecs()` function.
-   *
-   * @default undefined
-   * @platform iOS
+   * The Video Codec to record in.
+   * - `h264`: Widely supported, but might be less efficient, especially with larger sizes or framerates.
+   * - `h265`: The HEVC (High-Efficient-Video-Codec) for higher efficient video recordings.
    */
-  videoCodec?: CameraVideoCodec;
+  videoCodec?: 'h264' | 'h265';
 }

 /**
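After this change, both `fileType` and `videoCodec` are inline string unions instead of exported type aliases. A sketch of calling the new shape; the interfaces below are simplified stand-ins mirroring the hunk above, with the error callback's type reduced to a plain `Error`:

```typescript
// Simplified stand-ins for the library types in the hunk above.
interface VideoFile { path: string; duration: number }

interface RecordVideoOptions {
  flash?: 'on' | 'off' | 'auto';
  fileType?: 'mov' | 'mp4';
  videoCodec?: 'h264' | 'h265';
  onRecordingError: (error: Error) => void;
  onRecordingFinished: (video: VideoFile) => void;
}

const options: RecordVideoOptions = {
  fileType: 'mp4',
  // h265 (HEVC) records more efficiently, h264 is more widely supported.
  videoCodec: 'h265',
  onRecordingError: (error) => console.error('Recording failed:', error),
  onRecordingFinished: (video) => console.log(`Saved ${video.path} (${video.duration}s)`),
};
```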
@@ -2,7 +2,7 @@ import { DependencyList, useMemo } from 'react';
 import type { DrawableFrame, Frame, FrameInternal } from '../Frame';
 import { FrameProcessor } from '../CameraProps';
 // Install RN Worklets by importing it
-import 'react-native-worklets/src';
+import 'react-native-worklets-core';

 export function createFrameProcessor(frameProcessor: FrameProcessor['frameProcessor'], type: FrameProcessor['type']): FrameProcessor {
   return {
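The factory shown above does little more than pair the worklet function with a type tag. A self-contained sketch under that assumption, with simplified stand-ins for the library's `Frame` and `FrameProcessor` types:

```typescript
// Simplified stand-ins for the types imported from '../Frame' / '../CameraProps'.
type Frame = { width: number; height: number };
interface FrameProcessor {
  frameProcessor: (frame: Frame) => void;
  type: 'frame-processor';
}

// Mirrors the factory in the hunk above: bundle the function with its type
// tag so callers know how to invoke it.
function createFrameProcessor(
  frameProcessor: FrameProcessor['frameProcessor'],
  type: FrameProcessor['type'],
): FrameProcessor {
  return { frameProcessor, type };
}

const fp = createFrameProcessor((frame) => {
  console.log(`Frame: ${frame.width}x${frame.height}`);
}, 'frame-processor');
fp.frameProcessor({ width: 1920, height: 1080 }); // prints "Frame: 1920x1080"
```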
@@ -8,7 +8,6 @@ export * from './FrameProcessorPlugins';
|
||||
export * from './CameraProps';
|
||||
export * from './PhotoFile';
|
||||
export * from './Point';
|
||||
export * from './Snapshot';
|
||||
export * from './TemporaryFile';
|
||||
export * from './VideoFile';
|
||||
|
||||
|
@@ -1,5 +1,5 @@
|
||||
import { Dimensions } from 'react-native';
|
||||
import type { CameraDevice, CameraDeviceFormat, FrameRateRange } from '../CameraDevice';
|
||||
import type { CameraDevice, CameraDeviceFormat } from '../CameraDevice';
|
||||
|
||||
/**
|
||||
* Compares two devices by the following criteria:
|
||||
@@ -24,6 +24,28 @@ export const sortDevices = (left: CameraDevice, right: CameraDevice): number =>
|
||||
if (leftHasWideAngle) leftPoints += 2;
|
||||
if (rightHasWideAngle) rightPoints += 2;
|
||||
|
||||
if (left.isMultiCam) leftPoints += 2;
|
||||
if (right.isMultiCam) rightPoints += 2;
|
||||
|
||||
if (left.hardwareLevel === 'full') leftPoints += 3;
|
||||
if (right.hardwareLevel === 'full') rightPoints += 3;
|
||||
if (left.hardwareLevel === 'limited') leftPoints += 1;
|
||||
if (right.hardwareLevel === 'limited') rightPoints += 1;
|
||||
|
||||
if (left.hasFlash) leftPoints += 1;
|
||||
if (right.hasFlash) rightPoints += 1;
|
||||
|
||||
const leftMaxResolution = left.formats.reduce(
|
||||
(prev, curr) => Math.max(prev, curr.videoHeight * curr.videoWidth + curr.photoHeight * curr.photoWidth),
|
||||
0,
|
||||
);
|
||||
const rightMaxResolution = right.formats.reduce(
|
||||
(prev, curr) => Math.max(prev, curr.videoHeight * curr.videoWidth + curr.photoHeight * curr.photoWidth),
|
||||
0,
|
||||
);
|
||||
if (leftMaxResolution > rightMaxResolution) leftPoints += 3;
|
||||
if (rightMaxResolution > leftMaxResolution) rightPoints += 3;
|
||||
|
||||
// telephoto cameras often have very poor quality.
|
||||
const leftHasTelephoto = left.devices.includes('telephoto-camera');
|
||||
const rightHasTelephoto = right.devices.includes('telephoto-camera');
|
||||
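The scoring added above reads well in isolation: each device accumulates points, and `sortDevices` returns the difference so that `Array.prototype.sort` puts higher-scoring devices first. A reduced sketch (resolution and telephoto scoring omitted for brevity; `CameraDevice` trimmed to the fields used here):

```typescript
// Trimmed stand-in for CameraDevice, keeping only the fields scored below.
type HardwareLevel = 'legacy' | 'limited' | 'full';
interface CameraDevice {
  devices: string[];
  isMultiCam: boolean;
  hardwareLevel: HardwareLevel;
  hasFlash: boolean;
}

// Point weights taken from the diff above: wide-angle +2, multi-cam +2,
// hardware level full +3 / limited +1, flash +1.
function devicePoints(d: CameraDevice): number {
  let points = 0;
  if (d.devices.includes('wide-angle-camera')) points += 2;
  if (d.isMultiCam) points += 2;
  if (d.hardwareLevel === 'full') points += 3;
  if (d.hardwareLevel === 'limited') points += 1;
  if (d.hasFlash) points += 1;
  return points;
}

// Returns rightPoints - leftPoints, so higher-scoring devices sort first.
const sortDevices = (left: CameraDevice, right: CameraDevice): number =>
  devicePoints(right) - devicePoints(left);
```

A FULL-level multi-cam device with flash and a wide-angle lens scores 8 and therefore sorts ahead of a LEGACY device, which scores 0.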
@@ -69,17 +91,3 @@ export const sortFormats = (left: CameraDeviceFormat, right: CameraDeviceFormat)

   return rightPoints - leftPoints;
 };
-
-/**
- * Returns `true` if the given Frame Rate Range (`range`) contains the given frame rate (`fps`)
- *
- * @param {FrameRateRange} range The range to check if the given `fps` are included in
- * @param {number} fps The FPS to check if the given `range` supports.
- * @example
- * ```ts
- * // get all formats that support 60 FPS
- * const formatsWithHighFps = useMemo(() => device.formats.filter((f) => f.frameRateRanges.some((r) => frameRateIncluded(r, 60))), [device.formats])
- * ```
- * @method
- */
-export const frameRateIncluded = (range: FrameRateRange, fps: number): boolean => fps >= range.minFrameRate && fps <= range.maxFrameRate;
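`frameRateIncluded` can go because, per the commit message, formats now expose flat `minFps`/`maxFps` fields instead of `frameRateRanges`, so the removed 60-FPS example collapses to a plain range check. A sketch with a stand-in `Format` type (field names assumed from the commit message):

```typescript
// Stand-in for the format type, keeping only the new flat FPS fields.
interface Format { minFps: number; maxFps: number }

// The replacement for frameRateIncluded: a direct range check per format.
const supportsFps = (f: Format, fps: number): boolean =>
  fps >= f.minFps && fps <= f.maxFps;

// e.g. get all formats that support 60 FPS:
const formats: Format[] = [
  { minFps: 1, maxFps: 30 },
  { minFps: 1, maxFps: 60 },
];
const highFps = formats.filter((f) => supportsFps(f, 60));
console.log(highFps.length); // prints "1"
```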