//
// CameraView.swift
// mrousavy
//
// Created by Marc Rousavy on 09.11.20.
// Copyright © 2020 mrousavy. All rights reserved.
//

import AVFoundation
import Foundation
import UIKit

//
// TODOs for the CameraView which are currently too hard to implement either because of AVFoundation's limitations, or my brain capacity
//
// CameraView+RecordVideo
// TODO: Better startRecording()/stopRecording() (promise + callback, wait for TurboModules/JSI)

// CameraView+TakePhoto
// TODO: Photo HDR

private let propsThatRequireReconfiguration = ["cameraId",
                                               "enableDepthData",
                                               "enableHighQualityPhotos",
                                               "enablePortraitEffectsMatteDelivery",
                                               "photo",
                                               "video",
                                               "enableFrameProcessor",
                                               "pixelFormat",
                                               "previewType"]
private let propsThatRequireDeviceReconfiguration = ["fps",
                                                     "hdr",
                                                     "lowLightBoost"]

// MARK: - CameraView
public final class CameraView: UIView {
  // pragma MARK: React Properties

  // props that require reconfiguring
  @objc var cameraId: NSString?
  @objc var enableDepthData = false
  @objc var enableHighQualityPhotos: NSNumber? // nullable bool
  @objc var enablePortraitEffectsMatteDelivery = false

  // use cases
  @objc var photo: NSNumber? // nullable bool
  @objc var video: NSNumber? // nullable bool
  @objc var audio: NSNumber? // nullable bool
  @objc var enableFrameProcessor = false
  @objc var pixelFormat: NSString?

  // props that require format reconfiguring
  @objc var format: NSDictionary?
  @objc var fps: NSNumber?
  @objc var hdr: NSNumber? // nullable bool
  @objc var lowLightBoost: NSNumber? // nullable bool
  @objc var orientation: NSString?

  // other props
  @objc var isActive = false
  @objc var torch = "off"
  @objc var zoom: NSNumber = 1.0 // in "factor"
  @objc var enableFpsGraph = false
  @objc var videoStabilizationMode: NSString?
  @objc var previewType: NSString = "none"

  // events
  @objc var onInitialized: RCTDirectEventBlock?
  @objc var onError: RCTDirectEventBlock?
  @objc var onViewReady: RCTDirectEventBlock?

  // zoom
  @objc var enableZoomGesture = false {
    didSet {
      if enableZoomGesture {
        addPinchGestureRecognizer()
      } else {
        removePinchGestureRecognizer()
      }
    }
  }

  // pragma MARK: Internal Properties
  internal var isMounted = false
  internal var isReady = false

  // Capture Session
  internal let captureSession = AVCaptureSession()
  internal let audioCaptureSession = AVCaptureSession()

  // Inputs & Outputs
  internal var videoDeviceInput: AVCaptureDeviceInput?
  internal var audioDeviceInput: AVCaptureDeviceInput?
  internal var photoOutput: AVCapturePhotoOutput?
  internal var videoOutput: AVCaptureVideoDataOutput?
  internal var audioOutput: AVCaptureAudioDataOutput?

  // CameraView+RecordView (+ Frame Processor)
  internal var isRecording = false
  internal var recordingSession: RecordingSession?
  #if VISION_CAMERA_ENABLE_FRAME_PROCESSORS
    @objc public var frameProcessor: FrameProcessor?
  #endif
  #if VISION_CAMERA_ENABLE_SKIA
    internal var skiaRenderer: SkiaRenderer?
  #endif

  // CameraView+Zoom
  internal var pinchGestureRecognizer: UIPinchGestureRecognizer?
  internal var pinchScaleOffset: CGFloat = 1.0

  internal var previewView: PreviewView?
  #if DEBUG
    internal var fpsGraph: RCTFPSGraph?
  #endif

  /// Returns whether the AVCaptureSession is currently running (reflected by isActive)
  var isRunning: Bool {
    return captureSession.isRunning
  }
  // pragma MARK: Setup
  override public init(frame: CGRect) {
    super.init(frame: frame)

    NotificationCenter.default.addObserver(self,
                                           selector: #selector(sessionRuntimeError),
                                           name: .AVCaptureSessionRuntimeError,
                                           object: captureSession)
    NotificationCenter.default.addObserver(self,
                                           selector: #selector(sessionRuntimeError),
                                           name: .AVCaptureSessionRuntimeError,
                                           object: audioCaptureSession)
    NotificationCenter.default.addObserver(self,
                                           selector: #selector(audioSessionInterrupted),
                                           name: AVAudioSession.interruptionNotification,
                                           object: AVAudioSession.sharedInstance())
    NotificationCenter.default.addObserver(self,
                                           selector: #selector(onOrientationChanged),
                                           name: UIDevice.orientationDidChangeNotification,
                                           object: nil)

    setupPreviewView()
  }

  @available(*, unavailable)
  required init?(coder _: NSCoder) {
    fatalError("init(coder:) is not implemented.")
  }

  deinit {
    NotificationCenter.default.removeObserver(self,
                                              name: .AVCaptureSessionRuntimeError,
                                              object: captureSession)
    NotificationCenter.default.removeObserver(self,
                                              name: .AVCaptureSessionRuntimeError,
                                              object: audioCaptureSession)
    NotificationCenter.default.removeObserver(self,
                                              name: AVAudioSession.interruptionNotification,
                                              object: AVAudioSession.sharedInstance())
    NotificationCenter.default.removeObserver(self,
                                              name: UIDevice.orientationDidChangeNotification,
                                              object: nil)
  }
  override public func willMove(toSuperview newSuperview: UIView?) {
    super.willMove(toSuperview: newSuperview)

    if newSuperview != nil {
      if !isMounted {
        isMounted = true
        onViewReady?(nil)
      }
    }
  }

  override public func layoutSubviews() {
    super.layoutSubviews()
    if let previewView = previewView {
      previewView.frame = frame
      previewView.bounds = bounds
    }
  }
  // pragma MARK: Props updating
  override public final func didSetProps(_ changedProps: [String]!) {
    ReactLogger.log(level: .info, message: "Updating \(changedProps.count) prop(s)...")
    // Reconfiguration is hierarchical: a session reconfigure implies a format
    // reconfigure, which in turn implies a device reconfigure.
    let shouldReconfigure = changedProps.contains { propsThatRequireReconfiguration.contains($0) }
    let shouldReconfigureFormat = shouldReconfigure || changedProps.contains("format")
    let shouldReconfigureDevice = shouldReconfigureFormat || changedProps.contains { propsThatRequireDeviceReconfiguration.contains($0) }
    let shouldReconfigureAudioSession = changedProps.contains("audio")

    let willReconfigure = shouldReconfigure || shouldReconfigureFormat || shouldReconfigureDevice

    let shouldCheckActive = willReconfigure || changedProps.contains("isActive") || captureSession.isRunning != isActive
    let shouldUpdateTorch = willReconfigure || changedProps.contains("torch") || shouldCheckActive
    let shouldUpdateZoom = willReconfigure || changedProps.contains("zoom") || shouldCheckActive
    let shouldUpdateVideoStabilization = willReconfigure || changedProps.contains("videoStabilizationMode")
    let shouldUpdateOrientation = willReconfigure || changedProps.contains("orientation")

    if changedProps.contains("previewType") {
      DispatchQueue.main.async {
        self.setupPreviewView()
      }
    }
    if changedProps.contains("enableFpsGraph") {
      DispatchQueue.main.async {
        self.setupFpsGraph()
      }
    }

    if shouldReconfigure ||
      shouldReconfigureAudioSession ||
      shouldCheckActive ||
      shouldUpdateTorch ||
      shouldUpdateZoom ||
      shouldReconfigureFormat ||
      shouldReconfigureDevice ||
      shouldUpdateVideoStabilization ||
      shouldUpdateOrientation {
      CameraQueues.cameraQueue.async {
        // Video Configuration
        if shouldReconfigure {
          self.configureCaptureSession()
        }
        if shouldReconfigureFormat {
          self.configureFormat()
        }
        if shouldReconfigureDevice {
          self.configureDevice()
        }
        if shouldUpdateVideoStabilization, let videoStabilizationMode = self.videoStabilizationMode as String? {
          self.captureSession.setVideoStabilizationMode(videoStabilizationMode)
        }

        if shouldUpdateZoom {
          let zoomClamped = max(min(CGFloat(self.zoom.doubleValue), self.maxAvailableZoom), self.minAvailableZoom)
          self.zoom(factor: zoomClamped, animated: false)
          self.pinchScaleOffset = zoomClamped
        }

        if shouldCheckActive && self.captureSession.isRunning != self.isActive {
          if self.isActive {
            ReactLogger.log(level: .info, message: "Starting Session...")
            self.captureSession.startRunning()
            ReactLogger.log(level: .info, message: "Started Session!")
          } else {
            ReactLogger.log(level: .info, message: "Stopping Session...")
            self.captureSession.stopRunning()
            ReactLogger.log(level: .info, message: "Stopped Session!")
          }
        }

        if shouldUpdateOrientation {
          self.updateOrientation()
        }

        // Workaround: if torch mode is set immediately after `startRunning()`,
        // the session isn't quite ready yet and will ignore it - so delay it slightly.
        if shouldUpdateTorch {
          CameraQueues.cameraQueue.asyncAfter(deadline: .now() + 0.1) {
            self.setTorchMode(self.torch)
          }
        }
      }

      // Audio Configuration
      if shouldReconfigureAudioSession {
        CameraQueues.audioQueue.async {
          self.configureAudioSession()
        }
      }
    }
  }

  @objc
  func onOrientationChanged() {
    updateOrientation()
  }

  // pragma MARK: Event Invokers
  internal final func invokeOnError(_ error: CameraError, cause: NSError? = nil) {
    ReactLogger.log(level: .error, message: "Invoking onError(): \(error.message)")
    guard let onError = onError else { return }

    var causeDictionary: [String: Any]?
    if let cause = cause {
      causeDictionary = [
        "code": cause.code,
        "domain": cause.domain,
        "message": cause.description,
        "details": cause.userInfo,
      ]
    }
    onError([
      "code": error.code,
      "message": error.message,
      "cause": causeDictionary ?? NSNull(),
    ])
  }

  internal final func invokeOnInitialized() {
    ReactLogger.log(level: .info, message: "Camera initialized!")
    guard let onInitialized = onInitialized else { return }
    onInitialized([String: Any]())
  }
}