feat: Full Android rewrite (CameraX -> Camera2) (#1674)
* Nuke CameraX
* fix: Run View Finder on UI Thread
* Open Camera, set up Threads
* fix init
* Mirror if needed
* Try PreviewView
* Use max resolution
* Add `hardwareLevel` property
* Check if output type is supported
* Replace `frameRateRanges` with `minFps` and `maxFps`
* Remove `isHighestPhotoQualitySupported`
* Remove `colorSpace` The native platforms will use the best / most accurate colorSpace by default anyways.
* HDR
* Check from format
* fix
* Remove `supportsParallelVideoProcessing`
* Correctly return video/photo sizes on Android now. Finally
* Log all Device props
* Log if optimized usecase is used
* Cleanup
* Configure Camera Input only once
* Revert "Configure Camera Input only once" This reverts commit 0fd6c03f54c7566cb5592053720c4a8743aba92e.
* Extract Camera configuration
* Try to reconfigure all
* Hook based
* Properly set up `CameraSession`
* Delete unused
* fix: Fix recreate when outputs change
* Update NativePreviewView.kt
* Use callback for closing
* Catch CameraAccessException
* Finally got it stable
* Remove isMirrored
* Implement `takePhoto()`
* Add ExifInterface library
* Run findViewById on UI Thread
* Add Photo Output Surface to takePhoto
* Fix Video Stabilization Modes
* Optimize Imports
* More logs
* Update CameraSession.kt
* Close Image
* Use separate Executor in CameraQueue
* Delete hooks
* Use same Thread again
* If opened, call error
* Update CameraSession.kt
* Log HW level
* fix: Don't enable Stream Use Case if it's not 100% supported
* Move some stuff
* Cleanup PhotoOutputSynchronizer
* Try just open in suspend fun
* Some synchronization fixes
* fix logs
* Update CameraDevice+createCaptureSession.kt
* Update CameraDevice+createCaptureSession.kt
* fixes
* fix: Use Snapshot Template for speed capture prio
* Use PREVIEW template for repeating request
* Use `TEMPLATE_RECORD` if video use-case is attached
* Use `isRunning` flag
* Recreate session everytime on active/inactive
* Lazily get values in capture session
* Stability
* Rebuild session if outputs change
* Set `didOutputsChange` back to false
* Capture first in lock
* Try
* kinda fix it? idk
* fix: Keep Outputs
* Refactor into single method
* Update CameraView.kt
* Use Enums for type safety
* Implement Orientation (I think)
* Move RefCount management to Java (Frame)
* Don't crash when dropping a Frame
* Prefer Devices with higher max resolution
* Prefer multi-cams
* Use FastImage for Media Page
* Return orientation in takePhoto()
* Load orientation from EXIF Data
* Add `isMirrored` props and documentation for PhotoFile
* fix: Return `not-determined` on Android
* Update CameraViewModule.kt
* chore: Upgrade packages
* fix: Fix Metro Config
* Cleanup config
* Properly mirror Images on save
* Prepare MediaRecorder
* Start/Stop MediaRecorder
* Remove `takeSnapshot()` It no longer works on Android and never worked on iOS. Users could use useFrameProcessor to take a Snapshot
* Use `MediaCodec`
* Move to `VideoRecording` class
* Cleanup Snapshot
* Create `SkiaPreviewView` hybrid class
* Create OpenGL context
* Create `SkiaPreviewView`
* Fix texture creation missing context
* Draw red frame
* Somehow get it working
* Add Skia CMake setup
* Start looping
* Init OpenGL
* Refactor into `SkiaRenderer`
* Cleanup PreviewSize
* Set up
* Only re-render UI if there is a new Frame
* Preview
* Fix init
* Try rendering Preview
* Update SkiaPreviewView.kt
* Log version
* Try using Skia (fail)
* Drawwwww!!!!!!!!!! 🎉
* Use Preview Size
* Clear first
* Refactor into SkiaRenderer
* Add `previewType: "none"` on iOS
* Simplify a lot
* Draw Camera? For some reason? I have no idea anymore
* Fix OpenGL errors
* Got it kinda working again?
* Actually draw Frame woah
* Clean up code
* Cleanup
* Update on main
* Synchronize render calls
* holy shit
* Update SkiaRenderer.cpp
* Update SkiaRenderer.cpp
* Refactor
* Update SkiaRenderer.cpp
* Check for `NO_INPUT_TEXTURE`^
* Post & Wait
* Set input size
* Add Video back again
* Allow session without preview
* Convert JPEG to byte[]
* feat: Use `ImageReader` and use YUV Image Buffers in Skia Context (#1689)
* Try to pass YUV Buffers as Pixmaps
* Create pixmap!
* Clean up
* Render to preview
* Only render if we have an output surface
* Update SkiaRenderer.cpp
* Fix Y+U+V sampling code
* Cleanup
* Fix Semaphore 0
* Use 4:2:0 YUV again idk
* Update SkiaRenderer.h
* Set minSdk to 26
* Set surface
* Revert "Set minSdk to 26" This reverts commit c4085b7c16c628532e5c2d68cf7ed11c751d0b48.
* Set previewType
* feat: Video Recording with Camera2 (#1691)
* Rename
* Update CameraSession.kt
* Use `SurfaceHolder` instead of `SurfaceView` for output
* Update CameraOutputs.kt
* Update CameraSession.kt
* fix: Fix crash when Preview is null
* Check if snapshot capture is supported
* Update RecordingSession.kt
* S
* Use `MediaRecorder`
* Make audio optional
* Add Torch
* Output duration
* Update RecordingSession.kt
* Start RecordingSession
* logs
* More log
* Base for preparing pass-through Recording
* Use `ImageWriter` to append Images to the Recording Surface
* Stream PRIVATE GPU_SAMPLED_IMAGE Images
* Add flags
* Close session on stop
* Allow customizing `videoCodec` and `fileType`
* Enable Torch
* Fix Torch Mode
* Fix comparing outputs with hashCode
* Update CameraSession.kt
* Correctly pass along Frame Processor
* fix: Use AUDIO_BIT_RATE of 16 * 44,1Khz
* Use CAMCORDER instead of MIC microphone
* Use 1 channel
* fix: Use `Orientation`
* Add `native` PixelFormat
* Update iOS to latest Skia integration
* feat: Add `pixelFormat` property to Camera
* Catch error in configureSession
* Fix JPEG format
* Clean up best match finder
* Update CameraDeviceDetails.kt
* Clamp sizes by maximum CamcorderProfile size
* Remove `getAvailableVideoCodecs`
* chore: release 3.0.0-rc.5
* Use maximum video size of RECORD as default
* Update CameraDeviceDetails.kt
* Add a todo
* Add JSON device to issue report
* Prefer `full` devices and flash
* Lock to 30 FPS on Samsung
* Implement Zoom
* Refactor
* Format -> PixelFormat
* fix: Feat `pixelFormat` -> `pixelFormats`
* Update TROUBLESHOOTING.mdx
* Format
* fix: Implement `zoom` for Photo Capture
* fix: Don't run if `isActive` is `false`
* fix: Call `examplePlugin(frame)`
* fix: Fix Flash
* fix: Use `react-native-worklets-core`!
* fix: Fix import
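Taken together, these changes reshape the public `CameraDevice` / `CameraDeviceFormat` types: `frameRateRanges` is replaced by flat `minFps`/`maxFps` fields, `pixelFormat` becomes the `pixelFormats` array, `colorSpaces`, `isHighestPhotoQualitySupported` and `supportsParallelVideoProcessing` are removed, and devices gain a `hardwareLevel`. As a rough TypeScript sketch (not part of this commit) of how app code might consume the new shape, assuming `Camera.getAvailableCameraDevices()` is still async as in previous releases; the 60 FPS / `'yuv'` selection heuristics below are made up for the example:

// Sketch only: consuming the reshaped API against the fields shown in the diff below.
import { Camera } from 'react-native-vision-camera';
import type { CameraDevice, CameraDeviceFormat } from 'react-native-vision-camera';

async function pickFormatFor60Fps(): Promise<CameraDeviceFormat | undefined> {
  const devices: CameraDevice[] = await Camera.getAvailableCameraDevices();
  // `hardwareLevel` is new: prefer fully capable hardware over `legacy`/`limited` devices.
  const device = devices.find((d) => d.hardwareLevel === 'full') ?? devices[0];
  if (device == null) return undefined;

  // `frameRateRanges` is gone; each format now exposes flat `minFps`/`maxFps` numbers,
  // and the old `pixelFormat` field became the `pixelFormats` array (usually ['native', 'yuv']).
  return device.formats.find((f) => f.minFps <= 60 && 60 <= f.maxFps && f.pixelFormats.includes('yuv'));
}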
@@ -42,43 +42,6 @@ export const parsePhysicalDeviceTypes = (
   throw new Error(`Invalid physical device type combination! ${physicalDeviceTypes.join(' + ')}`);
 };
 
-/**
- * Indicates a format's color space.
- *
- * #### The following colorspaces are available on iOS:
- * * `"srgb"`: The sGRB color space.
- * * `"p3-d65"`: The P3 D65 wide color space which uses Illuminant D65 as the white point
- * * `"hlg-bt2020"`: The BT2020 wide color space which uses Illuminant D65 as the white point and Hybrid Log-Gamma as the transfer function
- *
- * > See ["AVCaptureColorSpace"](https://developer.apple.com/documentation/avfoundation/avcapturecolorspace) for more information.
- *
- * #### The following colorspaces are available on Android:
- * * `"yuv"`: The Multi-plane Android YCbCr color space. (YUV 420_888, 422_888 or 444_888)
- * * `"jpeg"`: The compressed JPEG color space.
- * * `"jpeg-depth"`: The compressed JPEG color space including depth data.
- * * `"raw"`: The Camera's RAW sensor color space. (Single-channel Bayer-mosaic image, usually 16 bit)
- * * `"heic"`: The compressed HEIC color space.
- * * `"private"`: The Android private opaque image format. (The choices of the actual format and pixel data layout are entirely up to the device-specific and framework internal implementations, and may vary depending on use cases even for the same device. These buffers are not directly accessible to the application)
- * * `"depth-16"`: The Android dense depth image format (16 bit)
- * * `"unknown"`: Placeholder for an unknown image/pixel format. [Edit this file](https://github.com/mrousavy/react-native-vision-camera/edit/main/android/src/main/java/com/mrousavy/camera/parsers/ImageFormat+String.kt) to add a name for the unknown format.
- *
- * > See ["Android Color Formats"](https://jbit.net/Android_Colors/) for more information.
- */
-export type ColorSpace =
-  // ios
-  | 'hlg-bt2020'
-  | 'p3-d65'
-  | 'srgb'
-  // android
-  | 'yuv'
-  | 'jpeg'
-  | 'jpeg-depth'
-  | 'raw'
-  | 'heic'
-  | 'private'
-  | 'depth-16'
-  | 'unknown';
-
 /**
  * Indicates a format's autofocus system.
  *
@@ -89,21 +52,16 @@ export type ColorSpace =
 export type AutoFocusSystem = 'contrast-detection' | 'phase-detection' | 'none';
 
 /**
- * Indicates a format's supported video stabilization mode
+ * Indicates a format's supported video stabilization mode. Enabling video stabilization may introduce additional latency into the video capture pipeline.
  *
- * * `"off"`: Indicates that video should not be stabilized
- * * `"standard"`: Indicates that video should be stabilized using the standard video stabilization algorithm introduced with iOS 5.0. Standard video stabilization has a reduced field of view. Enabling video stabilization may introduce additional latency into the video capture pipeline
- * * `"cinematic"`: Indicates that video should be stabilized using the cinematic stabilization algorithm for more dramatic results. Cinematic video stabilization has a reduced field of view compared to standard video stabilization. Enabling cinematic video stabilization introduces much more latency into the video capture pipeline than standard video stabilization and consumes significantly more system memory. Use narrow or identical min and max frame durations in conjunction with this mode
- * * `"cinematic-extended"`: Indicates that the video should be stabilized using the extended cinematic stabilization algorithm. Enabling extended cinematic stabilization introduces longer latency into the video capture pipeline compared to the AVCaptureVideoStabilizationModeCinematic and consumes more memory, but yields improved stability. It is recommended to use identical or similar min and max frame durations in conjunction with this mode (iOS 13.0+)
+ * * `"off"`: No video stabilization. Indicates that video should not be stabilized
+ * * `"standard"`: Standard software-based video stabilization. Standard video stabilization reduces the field of view by about 10%.
+ * * `"cinematic"`: Advanced software-based video stabilization. This applies more aggressive cropping or transformations than standard.
+ * * `"cinematic-extended"`: Extended software- and hardware-based stabilization that aggressively crops and transforms the video to apply a smooth cinematic stabilization.
  * * `"auto"`: Indicates that the most appropriate video stabilization mode for the device and format should be chosen automatically
  */
 export type VideoStabilizationMode = 'off' | 'standard' | 'cinematic' | 'cinematic-extended' | 'auto';
 
-export interface FrameRateRange {
-  minFrameRate: number;
-  maxFrameRate: number;
-}
-
 /**
  * A Camera Device's video format. Do not create instances of this type yourself, only use {@linkcode Camera.getAvailableCameraDevices | Camera.getAvailableCameraDevices()}.
  */
@@ -124,12 +82,6 @@ export interface CameraDeviceFormat {
    * The video resolution's width
    */
   videoWidth: number;
-  /**
-   * A boolean value specifying whether this format supports the highest possible photo quality that can be delivered on the current platform.
-   *
-   * @platform iOS 13.0+
-   */
-  isHighestPhotoQualitySupported?: boolean;
   /**
    * Maximum supported ISO value
    */
@@ -146,12 +98,6 @@ export interface CameraDeviceFormat {
    * The maximum zoom factor (e.g. `128`)
    */
   maxZoom: number;
-  /**
-   * The available color spaces.
-   *
-   * Note: On Android, this will always be only `["yuv"]`
-   */
-  colorSpaces: ColorSpace[];
   /**
    * Specifies whether this format supports HDR mode for video capture
    */
@@ -161,9 +107,13 @@ export interface CameraDeviceFormat {
    */
   supportsPhotoHDR: boolean;
   /**
-   * All available frame rate ranges. You can query this to find the highest frame rate available
+   * The minimum frame rate this Format needs to run at. High resolution formats often run at lower frame rates.
    */
-  frameRateRanges: FrameRateRange[];
+  minFps: number;
+  /**
+   * The maximum frame rate this Format is able to run at. High resolution formats often run at lower frame rates.
+   */
+  maxFps: number;
   /**
    * Specifies this format's auto focus system.
    */
@@ -173,11 +123,10 @@ export interface CameraDeviceFormat {
    */
   videoStabilizationModes: VideoStabilizationMode[];
   /**
-   * Specifies this format's pixel format. The pixel format specifies how the individual pixels are interpreted as a visual image.
-   *
-   * The most common format is `420v`. Some formats (like `x420`) are not compatible with some frame processor plugins (e.g. MLKit)
+   * Specifies this format's supported pixel-formats.
+   * In most cases, this is `['native', 'yuv']`.
    */
-  pixelFormat: PixelFormat;
+  pixelFormats: PixelFormat[];
 }
 
 /**
@@ -251,16 +200,6 @@ export interface CameraDevice {
    * See [the Camera Formats documentation](https://react-native-vision-camera.com/docs/guides/formats) for more information about Camera Formats.
    */
   formats: CameraDeviceFormat[];
-  /**
-   * Whether this camera device supports using Video Recordings (`video={true}`) and Frame Processors (`frameProcessor={...}`) at the same time. See ["The `supportsParallelVideoProcessing` prop"](https://react-native-vision-camera.com/docs/guides/devices#the-supportsparallelvideoprocessing-prop) for more information.
-   *
-   * If this property is `false`, you can only enable `video` or add a `frameProcessor`, but not both.
-   *
-   * * On iOS this value is always `true`.
-   * * On newer Android devices this value is always `true`.
-   * * On older Android devices this value is `false` if the Camera's hardware level is `LEGACY` or `LIMITED`, `true` otherwise. (See [`INFO_SUPPORTED_HARDWARE_LEVEL`](https://developer.android.com/reference/android/hardware/camera2/CameraCharacteristics#INFO_SUPPORTED_HARDWARE_LEVEL) or [the tables at "Regular capture"](https://developer.android.com/reference/android/hardware/camera2/CameraDevice#regular-capture))
-   */
-  supportsParallelVideoProcessing: boolean;
   /**
    * Whether this camera device supports low light boost.
    */
@@ -281,4 +220,10 @@ export interface CameraDevice {
    * Specifies whether this device supports focusing ({@linkcode Camera.focus | Camera.focus(...)})
    */
   supportsFocus: boolean;
+  /**
+   * The hardware level of the Camera.
+   * - On Android, some older devices are running at a `legacy` or `limited` level which means they are running in a backwards compatible mode.
+   * - On iOS, all devices are `full`.
+   */
+  hardwareLevel: 'legacy' | 'limited' | 'full';
 }
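The rewritten `VideoStabilizationMode` docs above keep the same string union while dropping the iOS-specific wording. A hedged sketch of how a caller might resolve a concrete mode from a format's `videoStabilizationModes` array; the preference order and the `pickStabilizationMode` helper are illustrative only (not part of this commit), and it assumes the types are re-exported from the package entry point:

// Illustrative helper: resolve a concrete stabilization mode from the format's
// `videoStabilizationModes` array documented in the diff above.
import type { CameraDeviceFormat, VideoStabilizationMode } from 'react-native-vision-camera';

// Preference order is an assumption made for this example; pick whatever suits your app.
const PREFERRED_ORDER: VideoStabilizationMode[] = ['cinematic-extended', 'cinematic', 'standard', 'off'];

export function pickStabilizationMode(format: CameraDeviceFormat): VideoStabilizationMode {
  // 'auto' is a request rather than a concrete algorithm, so resolve to a real mode here.
  return PREFERRED_ORDER.find((mode) => format.videoStabilizationModes.includes(mode)) ?? 'off';
}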