react-native-vision-camera/src/CameraDevice.ts
Marc Rousavy 37a3548a81
feat: Full Android rewrite (CameraX -> Camera2) (#1674)
* Nuke CameraX

* fix: Run View Finder on UI Thread

* Open Camera, set up Threads

* fix init

* Mirror if needed

* Try PreviewView

* Use max resolution

* Add `hardwareLevel` property

* Check if output type is supported

* Replace `frameRateRanges` with `minFps` and `maxFps`

* Remove `isHighestPhotoQualitySupported`

* Remove `colorSpace`

The native platforms will use the best / most accurate colorSpace by default anyways.

* HDR

* Check from format

* fix

* Remove `supportsParallelVideoProcessing`

* Correctly return video/photo sizes on Android now. Finally

* Log all Device props

* Log if optimized usecase is used

* Cleanup

* Configure Camera Input only once

* Revert "Configure Camera Input only once"

This reverts commit 0fd6c03f54c7566cb5592053720c4a8743aba92e.

* Extract Camera configuration

* Try to reconfigure all

* Hook based

* Properly set up `CameraSession`

* Delete unused

* fix: Fix recreate when outputs change

* Update NativePreviewView.kt

* Use callback for closing

* Catch CameraAccessException

* Finally got it stable

* Remove isMirrored

* Implement `takePhoto()`

* Add ExifInterface library

* Run findViewById on UI Thread

* Add Photo Output Surface to takePhoto

* Fix Video Stabilization Modes

* Optimize Imports

* More logs

* Update CameraSession.kt

* Close Image

* Use separate Executor in CameraQueue

* Delete hooks

* Use same Thread again

* If opened, call error

* Update CameraSession.kt

* Log HW level

* fix: Don't enable Stream Use Case if it's not 100% supported

* Move some stuff

* Cleanup PhotoOutputSynchronizer

* Try just open in suspend fun

* Some synchronization fixes

* fix logs

* Update CameraDevice+createCaptureSession.kt

* Update CameraDevice+createCaptureSession.kt

* fixes

* fix: Use Snapshot Template for speed capture prio

* Use PREVIEW template for repeating request

* Use `TEMPLATE_RECORD` if video use-case is attached

* Use `isRunning` flag

* Recreate session every time on active/inactive

* Lazily get values in capture session

* Stability

* Rebuild session if outputs change

* Set `didOutputsChange` back to false

* Capture first in lock

* Try

* kinda fix it? idk

* fix: Keep Outputs

* Refactor into single method

* Update CameraView.kt

* Use Enums for type safety

* Implement Orientation (I think)

* Move RefCount management to Java (Frame)

* Don't crash when dropping a Frame

* Prefer Devices with higher max resolution

* Prefer multi-cams

* Use FastImage for Media Page

* Return orientation in takePhoto()

* Load orientation from EXIF Data

* Add `isMirrored` props and documentation for PhotoFile

* fix: Return `not-determined` on Android

* Update CameraViewModule.kt

* chore: Upgrade packages

* fix: Fix Metro Config

* Cleanup config

* Properly mirror Images on save

* Prepare MediaRecorder

* Start/Stop MediaRecorder

* Remove `takeSnapshot()`

It no longer works on Android and never worked on iOS. Users can use `useFrameProcessor` to take a snapshot instead.

* Use `MediaCodec`

* Move to `VideoRecording` class

* Cleanup Snapshot

* Create `SkiaPreviewView` hybrid class

* Create OpenGL context

* Create `SkiaPreviewView`

* Fix texture creation missing context

* Draw red frame

* Somehow get it working

* Add Skia CMake setup

* Start looping

* Init OpenGL

* Refactor into `SkiaRenderer`

* Cleanup PreviewSize

* Set up

* Only re-render UI if there is a new Frame

* Preview

* Fix init

* Try rendering Preview

* Update SkiaPreviewView.kt

* Log version

* Try using Skia (fail)

* Drawwwww!!!!!!!!!! 🎉

* Use Preview Size

* Clear first

* Refactor into SkiaRenderer

* Add `previewType: "none"` on iOS

* Simplify a lot

* Draw Camera? For some reason? I have no idea anymore

* Fix OpenGL errors

* Got it kinda working again?

* Actually draw Frame woah

* Clean up code

* Cleanup

* Update on main

* Synchronize render calls

* holy shit

* Update SkiaRenderer.cpp

* Update SkiaRenderer.cpp

* Refactor

* Update SkiaRenderer.cpp

* Check for `NO_INPUT_TEXTURE`^

* Post & Wait

* Set input size

* Add Video back again

* Allow session without preview

* Convert JPEG to byte[]

* feat: Use `ImageReader` and use YUV Image Buffers in Skia Context (#1689)

* Try to pass YUV Buffers as Pixmaps

* Create pixmap!

* Clean up

* Render to preview

* Only render if we have an output surface

* Update SkiaRenderer.cpp

* Fix Y+U+V sampling code

* Cleanup

* Fix Semaphore 0

* Use 4:2:0 YUV again idk

* Update SkiaRenderer.h

* Set minSdk to 26

* Set surface

* Revert "Set minSdk to 26"

This reverts commit c4085b7c16c628532e5c2d68cf7ed11c751d0b48.

* Set previewType

* feat: Video Recording with Camera2 (#1691)

* Rename

* Update CameraSession.kt

* Use `SurfaceHolder` instead of `SurfaceView` for output

* Update CameraOutputs.kt

* Update CameraSession.kt

* fix: Fix crash when Preview is null

* Check if snapshot capture is supported

* Update RecordingSession.kt

* S

* Use `MediaRecorder`

* Make audio optional

* Add Torch

* Output duration

* Update RecordingSession.kt

* Start RecordingSession

* logs

* More log

* Base for preparing pass-through Recording

* Use `ImageWriter` to append Images to the Recording Surface

* Stream PRIVATE GPU_SAMPLED_IMAGE Images

* Add flags

* Close session on stop

* Allow customizing `videoCodec` and `fileType`

* Enable Torch

* Fix Torch Mode

* Fix comparing outputs with hashCode

* Update CameraSession.kt

* Correctly pass along Frame Processor

* fix: Use AUDIO_BIT_RATE of 16 * 44,1Khz

* Use CAMCORDER instead of MIC microphone

* Use 1 channel

* fix: Use `Orientation`

* Add `native` PixelFormat

* Update iOS to latest Skia integration

* feat: Add `pixelFormat` property to Camera

* Catch error in configureSession

* Fix JPEG format

* Clean up best match finder

* Update CameraDeviceDetails.kt

* Clamp sizes by maximum CamcorderProfile size

* Remove `getAvailableVideoCodecs`

* chore: release 3.0.0-rc.5

* Use maximum video size of RECORD as default

* Update CameraDeviceDetails.kt

* Add a todo

* Add JSON device to issue report

* Prefer `full` devices and flash

* Lock to 30 FPS on Samsung

* Implement Zoom

* Refactor

* Format -> PixelFormat

* fix: Feat `pixelFormat` -> `pixelFormats`

* Update TROUBLESHOOTING.mdx

* Format

* fix: Implement `zoom` for Photo Capture

* fix: Don't run if `isActive` is `false`

* fix: Call `examplePlugin(frame)`

* fix: Fix Flash

* fix: Use `react-native-worklets-core`!

* fix: Fix import
2023-08-21 12:50:14 +02:00


import type { CameraPosition } from './CameraPosition';
import type { PixelFormat } from './PixelFormat';
/**
* Identifiers for a physical camera (one that actually exists on the back/front of the device)
*
* * `"ultra-wide-angle-camera"`: A built-in camera with a shorter focal length than that of a wide-angle camera. (focal length below 24mm)
* * `"wide-angle-camera"`: A built-in wide-angle camera. (focal length between 24mm and 35mm)
* * `"telephoto-camera"`: A built-in camera device with a longer focal length than a wide-angle camera. (focal length above 85mm)
*/
export type PhysicalCameraDeviceType = 'ultra-wide-angle-camera' | 'wide-angle-camera' | 'telephoto-camera';
/**
* Identifiers for a logical camera (a combination of multiple physical cameras that creates a single logical camera).
*
* * `"dual-camera"`: A combination of wide-angle and telephoto cameras that creates a capture device.
* * `"dual-wide-camera"`: A device that consists of two cameras of fixed focal length, one ultrawide angle and one wide angle.
* * `"triple-camera"`: A device that consists of three cameras of fixed focal length, one ultrawide angle, one wide angle, and one telephoto.
*/
export type LogicalCameraDeviceType = 'dual-camera' | 'dual-wide-camera' | 'triple-camera';
/**
* Parses an array of physical device types into a single {@linkcode PhysicalCameraDeviceType} or {@linkcode LogicalCameraDeviceType}, depending on which combination matches.
* @method
*/
export const parsePhysicalDeviceTypes = (
physicalDeviceTypes: PhysicalCameraDeviceType[],
): PhysicalCameraDeviceType | LogicalCameraDeviceType => {
if (physicalDeviceTypes.length === 1) {
// @ts-expect-error for very obvious reasons
return physicalDeviceTypes[0];
}
const hasWide = physicalDeviceTypes.includes('wide-angle-camera');
const hasUltra = physicalDeviceTypes.includes('ultra-wide-angle-camera');
const hasTele = physicalDeviceTypes.includes('telephoto-camera');
if (hasTele && hasWide && hasUltra) return 'triple-camera';
if (hasWide && hasUltra) return 'dual-wide-camera';
if (hasWide && hasTele) return 'dual-camera';
throw new Error(`Invalid physical device type combination! ${physicalDeviceTypes.join(' + ')}`);
};
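// Usage sketch (illustrative only; the input arrays below are hypothetical and would normally
// come from a CameraDevice's `devices` property):
//
//   parsePhysicalDeviceTypes(['wide-angle-camera']); // 'wide-angle-camera'
//   parsePhysicalDeviceTypes(['wide-angle-camera', 'telephoto-camera']); // 'dual-camera'
//   parsePhysicalDeviceTypes(['ultra-wide-angle-camera', 'wide-angle-camera', 'telephoto-camera']); // 'triple-camera'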
/**
* Indicates a format's autofocus system.
*
* * `"none"`: Indicates that autofocus is not available
* * `"contrast-detection"`: Indicates that autofocus is achieved by contrast detection. Contrast detection performs a focus scan to find the optimal position
* * `"phase-detection"`: Indicates that autofocus is achieved by phase detection. Phase detection has the ability to achieve focus in many cases without a focus scan. Phase detection autofocus is typically less visually intrusive than contrast detection autofocus
*/
export type AutoFocusSystem = 'contrast-detection' | 'phase-detection' | 'none';
/**
* Indicates a format's supported video stabilization mode. Enabling video stabilization may introduce additional latency into the video capture pipeline.
*
* * `"off"`: No video stabilization. Indicates that video should not be stabilized
* * `"standard"`: Standard software-based video stabilization. Standard video stabilization reduces the field of view by about 10%.
* * `"cinematic"`: Advanced software-based video stabilization. This applies more aggressive cropping or transformations than standard.
* * `"cinematic-extended"`: Extended software- and hardware-based stabilization that aggressively crops and transforms the video to apply a smooth cinematic stabilization.
* * `"auto"`: Indicates that the most appropriate video stabilization mode for the device and format should be chosen automatically
*/
export type VideoStabilizationMode = 'off' | 'standard' | 'cinematic' | 'cinematic-extended' | 'auto';
/**
* A Camera Device's video format. Do not create instances of this type yourself, only use {@linkcode Camera.getAvailableCameraDevices | Camera.getAvailableCameraDevices()}.
*/
export interface CameraDeviceFormat {
/**
* The height of the highest resolution still image (photo) this format can produce
*/
photoHeight: number;
/**
* The width of the highest resolution still image (photo) this format can produce
*/
photoWidth: number;
/**
* The video resolution's height
*/
videoHeight: number;
/**
* The video resolution's width
*/
videoWidth: number;
/**
* Maximum supported ISO value
*/
maxISO: number;
/**
* Minimum supported ISO value
*/
minISO: number;
/**
* The video field of view in degrees
*/
fieldOfView: number;
/**
* The maximum zoom factor (e.g. `128`)
*/
maxZoom: number;
/**
* Specifies whether this format supports HDR mode for video capture
*/
supportsVideoHDR: boolean;
/**
* Specifies whether this format supports HDR mode for photo capture
*/
supportsPhotoHDR: boolean;
/**
* The minimum frame rate this format can run at. High resolution formats often run at lower frame rates.
*/
minFps: number;
/**
* The maximum frame rate this format is able to run at. High resolution formats often run at lower frame rates.
*/
maxFps: number;
/**
* Specifies this format's auto focus system.
*/
autoFocusSystem: AutoFocusSystem;
/**
* All supported video stabilization modes
*/
videoStabilizationModes: VideoStabilizationMode[];
/**
* Specifies this format's supported pixel-formats.
* In most cases, this is `['native', 'yuv']`.
*/
pixelFormats: PixelFormat[];
}
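// Example: choosing a format (illustrative sketch; `device` is assumed to be a `CameraDevice`
// obtained elsewhere, e.g. via `Camera.getAvailableCameraDevices()`):
//
//   const format = device.formats
//     .filter((f) => f.maxFps >= 60 && f.supportsVideoHDR)
//     .sort((a, b) => b.videoWidth * b.videoHeight - a.videoWidth * a.videoHeight)[0];
//   // highest-resolution format supporting 60 FPS HDR video, or `undefined` if none exists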
/**
* Represents a camera device discovered by the {@linkcode Camera.getAvailableCameraDevices | Camera.getAvailableCameraDevices()} function
*/
export interface CameraDevice {
/**
* The native ID of the camera device instance.
*/
id: string;
/**
* The physical devices this `CameraDevice` contains.
*
* * If this camera device is a **logical camera** (combination of multiple physical cameras), there are multiple cameras in this array.
* * If this camera device is a **physical camera**, there is only a single element in this array.
*
* You can check if the camera is a logical multi-camera by using the `isMultiCam` property.
*/
devices: PhysicalCameraDeviceType[];
/**
* Specifies the physical position of this camera. (back or front)
*/
position: CameraPosition;
/**
* A friendly localized name describing the camera.
*/
name: string;
/**
* Specifies whether this camera supports enabling flash for photo capture.
*/
hasFlash: boolean;
/**
* Specifies whether this camera supports continuously enabling the flash to act like a torch (flash with video capture).
*/
hasTorch: boolean;
/**
* A property indicating whether the device is a virtual multi-camera consisting of multiple combined physical cameras.
*
* Examples:
* * The Dual Camera, which supports seamlessly switching between a wide and telephoto camera while zooming and generating depth data from the disparities between the different points of view of the physical cameras.
* * The TrueDepth Camera, which generates depth data from disparities between a YUV camera and an Infrared camera pointed in the same direction.
*/
isMultiCam: boolean;
/**
* Minimum available zoom factor (e.g. `1`)
*/
minZoom: number;
/**
* Maximum available zoom factor (e.g. `128`)
*/
maxZoom: number;
/**
* The zoom factor where the camera is "neutral".
*
* * For single physical cameras, this property is always `1.0`.
* * For multi-cams, this property is a value between `minZoom` and `maxZoom`, where the camera is still in _wide-angle_ mode and hasn't switched to the _ultra-wide-angle_ ("fish-eye") or telephoto camera yet.
*
* Use this value as the initial value for the zoom property if you implement custom zoom. (e.g. a Reanimated shared value should initially be set to this value)
* @example
* const device = ...
*
* const zoom = useSharedValue(device.neutralZoom) // <-- initial value so it doesn't start at ultra-wide
* const cameraProps = useAnimatedProps(() => ({
* zoom: zoom.value
* }))
*/
neutralZoom: number;
/**
* All available formats for this camera device. Use this to find the best format for your use case and set it to the Camera's {@linkcode CameraProps.format | format} property.
*
* See [the Camera Formats documentation](https://react-native-vision-camera.com/docs/guides/formats) for more information about Camera Formats.
*/
formats: CameraDeviceFormat[];
/**
* Whether this camera device supports low light boost.
*/
supportsLowLightBoost: boolean;
/**
* Whether this camera supports taking photos with depth data.
*
* **! Work in Progress !**
*/
supportsDepthCapture: boolean;
/**
* Whether this camera supports taking photos in RAW format
*
* **! Work in Progress !**
*/
supportsRawCapture: boolean;
/**
* Specifies whether this device supports focusing ({@linkcode Camera.focus | Camera.focus(...)})
*/
supportsFocus: boolean;
/**
* The hardware level of the Camera.
* - On Android, some older devices run at a `legacy` or `limited` level, which means they operate in a backwards-compatible mode.
* - On iOS, all devices are `full`.
*/
hardwareLevel: 'legacy' | 'limited' | 'full';
}
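// Example: picking a device (illustrative sketch; assumes `Camera.getAvailableCameraDevices()`
// from this library's `Camera` module returns a list of `CameraDevice`s, awaited here since the
// call may be asynchronous depending on the library version):
//
//   const devices = await Camera.getAvailableCameraDevices();
//   const device = devices.find((d) => d.position === 'back' && d.hasFlash) ?? devices[0];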