react-native-vision-camera/ios/CameraView+AVCaptureSession.swift
Marc Rousavy 37a3548a81
feat: Full Android rewrite (CameraX -> Camera2) (#1674)
* Nuke CameraX

* fix: Run View Finder on UI Thread

* Open Camera, set up Threads

* fix init

* Mirror if needed

* Try PreviewView

* Use max resolution

* Add `hardwareLevel` property

* Check if output type is supported

* Replace `frameRateRanges` with `minFps` and `maxFps`

* Remove `isHighestPhotoQualitySupported`

* Remove `colorSpace`

The native platforms will use the best / most accurate colorSpace by default anyway.

* HDR

* Check from format

* fix

* Remove `supportsParallelVideoProcessing`

* Correctly return video/photo sizes on Android now. Finally

* Log all Device props

* Log if optimized usecase is used

* Cleanup

* Configure Camera Input only once

* Revert "Configure Camera Input only once"

This reverts commit 0fd6c03f54c7566cb5592053720c4a8743aba92e.

* Extract Camera configuration

* Try to reconfigure all

* Hook based

* Properly set up `CameraSession`

* Delete unused

* fix: Fix recreate when outputs change

* Update NativePreviewView.kt

* Use callback for closing

* Catch CameraAccessException

* Finally got it stable

* Remove isMirrored

* Implement `takePhoto()`

* Add ExifInterface library

* Run findViewById on UI Thread

* Add Photo Output Surface to takePhoto

* Fix Video Stabilization Modes

* Optimize Imports

* More logs

* Update CameraSession.kt

* Close Image

* Use separate Executor in CameraQueue

* Delete hooks

* Use same Thread again

* If opened, call error

* Update CameraSession.kt

* Log HW level

* fix: Don't enable Stream Use Case if it's not 100% supported

* Move some stuff

* Cleanup PhotoOutputSynchronizer

* Try just open in suspend fun

* Some synchronization fixes

* fix logs

* Update CameraDevice+createCaptureSession.kt

* Update CameraDevice+createCaptureSession.kt

* fixes

* fix: Use Snapshot Template for speed capture prio

* Use PREVIEW template for repeating request

* Use `TEMPLATE_RECORD` if video use-case is attached

* Use `isRunning` flag

* Recreate session every time on active/inactive

* Lazily get values in capture session

* Stability

* Rebuild session if outputs change

* Set `didOutputsChange` back to false

* Capture first in lock

* Try

* kinda fix it? idk

* fix: Keep Outputs

* Refactor into single method

* Update CameraView.kt

* Use Enums for type safety

* Implement Orientation (I think)

* Move RefCount management to Java (Frame)

* Don't crash when dropping a Frame

* Prefer Devices with higher max resolution

* Prefer multi-cams

* Use FastImage for Media Page

* Return orientation in takePhoto()

* Load orientation from EXIF Data

* Add `isMirrored` props and documentation for PhotoFile

* fix: Return `not-determined` on Android

* Update CameraViewModule.kt

* chore: Upgrade packages

* fix: Fix Metro Config

* Cleanup config

* Properly mirror Images on save

* Prepare MediaRecorder

* Start/Stop MediaRecorder

* Remove `takeSnapshot()`

It no longer works on Android and never worked on iOS. Users could use useFrameProcessor to take a Snapshot instead.

* Use `MediaCodec`

* Move to `VideoRecording` class

* Cleanup Snapshot

* Create `SkiaPreviewView` hybrid class

* Create OpenGL context

* Create `SkiaPreviewView`

* Fix texture creation missing context

* Draw red frame

* Somehow get it working

* Add Skia CMake setup

* Start looping

* Init OpenGL

* Refactor into `SkiaRenderer`

* Cleanup PreviewSize

* Set up

* Only re-render UI if there is a new Frame

* Preview

* Fix init

* Try rendering Preview

* Update SkiaPreviewView.kt

* Log version

* Try using Skia (fail)

* Drawwwww!!!!!!!!!! 🎉

* Use Preview Size

* Clear first

* Refactor into SkiaRenderer

* Add `previewType: "none"` on iOS

* Simplify a lot

* Draw Camera? For some reason? I have no idea anymore

* Fix OpenGL errors

* Got it kinda working again?

* Actually draw Frame woah

* Clean up code

* Cleanup

* Update on main

* Synchronize render calls

* holy shit

* Update SkiaRenderer.cpp

* Update SkiaRenderer.cpp

* Refactor

* Update SkiaRenderer.cpp

* Check for `NO_INPUT_TEXTURE`

* Post & Wait

* Set input size

* Add Video back again

* Allow session without preview

* Convert JPEG to byte[]

* feat: Use `ImageReader` and use YUV Image Buffers in Skia Context (#1689)

* Try to pass YUV Buffers as Pixmaps

* Create pixmap!

* Clean up

* Render to preview

* Only render if we have an output surface

* Update SkiaRenderer.cpp

* Fix Y+U+V sampling code

* Cleanup

* Fix Semaphore 0

* Use 4:2:0 YUV again idk

* Update SkiaRenderer.h

* Set minSdk to 26

* Set surface

* Revert "Set minSdk to 26"

This reverts commit c4085b7c16c628532e5c2d68cf7ed11c751d0b48.

* Set previewType

* feat: Video Recording with Camera2 (#1691)

* Rename

* Update CameraSession.kt

* Use `SurfaceHolder` instead of `SurfaceView` for output

* Update CameraOutputs.kt

* Update CameraSession.kt

* fix: Fix crash when Preview is null

* Check if snapshot capture is supported

* Update RecordingSession.kt

* S

* Use `MediaRecorder`

* Make audio optional

* Add Torch

* Output duration

* Update RecordingSession.kt

* Start RecordingSession

* logs

* More log

* Base for preparing pass-through Recording

* Use `ImageWriter` to append Images to the Recording Surface

* Stream PRIVATE GPU_SAMPLED_IMAGE Images

* Add flags

* Close session on stop

* Allow customizing `videoCodec` and `fileType`

* Enable Torch

* Fix Torch Mode

* Fix comparing outputs with hashCode

* Update CameraSession.kt

* Correctly pass along Frame Processor

* fix: Use AUDIO_BIT_RATE of 16 * 44,1Khz

* Use CAMCORDER instead of MIC microphone

* Use 1 channel

* fix: Use `Orientation`

* Add `native` PixelFormat

* Update iOS to latest Skia integration

* feat: Add `pixelFormat` property to Camera

* Catch error in configureSession

* Fix JPEG format

* Clean up best match finder

* Update CameraDeviceDetails.kt

* Clamp sizes by maximum CamcorderProfile size

* Remove `getAvailableVideoCodecs`

* chore: release 3.0.0-rc.5

* Use maximum video size of RECORD as default

* Update CameraDeviceDetails.kt

* Add a todo

* Add JSON device to issue report

* Prefer `full` devices and flash

* Lock to 30 FPS on Samsung

* Implement Zoom

* Refactor

* Format -> PixelFormat

* fix: Feat `pixelFormat` -> `pixelFormats`

* Update TROUBLESHOOTING.mdx

* Format

* fix: Implement `zoom` for Photo Capture

* fix: Don't run if `isActive` is `false`

* fix: Call `examplePlugin(frame)`

* fix: Fix Flash

* fix: Use `react-native-worklets-core`!

* fix: Fix import
2023-08-21 12:50:14 +02:00

//
// CameraView+AVCaptureSession.swift
// VisionCamera
//
// Created by Marc Rousavy on 26.03.21.
// Copyright © 2021 mrousavy. All rights reserved.
//

import AVFoundation
import Foundation

/**
 Extension for CameraView that sets up the AVCaptureSession, Device and Format.
 */
extension CameraView {
  // pragma MARK: Configure Capture Session

  /**
   Configures the Capture Session.
   */
  final func configureCaptureSession() {
    ReactLogger.log(level: .info, message: "Configuring Session...")
    isReady = false

    #if targetEnvironment(simulator)
      invokeOnError(.device(.notAvailableOnSimulator))
      return
    #endif

    guard cameraId != nil else {
      invokeOnError(.device(.noDevice))
      return
    }
    let cameraId = self.cameraId! as String

    ReactLogger.log(level: .info, message: "Initializing Camera with device \(cameraId)...")
    captureSession.beginConfiguration()
    defer {
      captureSession.commitConfiguration()
    }

    // pragma MARK: Capture Session Inputs
    // Video Input
    do {
      if let videoDeviceInput = videoDeviceInput {
        captureSession.removeInput(videoDeviceInput)
        self.videoDeviceInput = nil
      }
      ReactLogger.log(level: .info, message: "Adding Video input...")
      guard let videoDevice = AVCaptureDevice(uniqueID: cameraId) else {
        invokeOnError(.device(.invalid))
        return
      }
      videoDeviceInput = try AVCaptureDeviceInput(device: videoDevice)
      guard captureSession.canAddInput(videoDeviceInput!) else {
        invokeOnError(.parameter(.unsupportedInput(inputDescriptor: "video-input")))
        return
      }
      captureSession.addInput(videoDeviceInput!)
    } catch {
      invokeOnError(.device(.invalid))
      return
    }

    // pragma MARK: Capture Session Outputs

    // Photo Output
    if let photoOutput = photoOutput {
      captureSession.removeOutput(photoOutput)
      self.photoOutput = nil
    }
    if photo?.boolValue == true {
      ReactLogger.log(level: .info, message: "Adding Photo output...")
      photoOutput = AVCapturePhotoOutput()

      if enableHighQualityPhotos?.boolValue == true {
        // TODO: In iOS 16 this will be removed in favor of maxPhotoDimensions.
        photoOutput!.isHighResolutionCaptureEnabled = true
        if #available(iOS 13.0, *) {
          // TODO: Test if this actually does any fusion or if this just calls the captureOutput twice. If the latter, remove it.
          photoOutput!.isVirtualDeviceConstituentPhotoDeliveryEnabled = photoOutput!.isVirtualDeviceConstituentPhotoDeliverySupported
          photoOutput!.maxPhotoQualityPrioritization = .quality
        } else {
          photoOutput!.isDualCameraDualPhotoDeliveryEnabled = photoOutput!.isDualCameraDualPhotoDeliverySupported
        }
      }
      if enableDepthData {
        photoOutput!.isDepthDataDeliveryEnabled = photoOutput!.isDepthDataDeliverySupported
      }
      if #available(iOS 12.0, *), enablePortraitEffectsMatteDelivery {
        photoOutput!.isPortraitEffectsMatteDeliveryEnabled = photoOutput!.isPortraitEffectsMatteDeliverySupported
      }
      guard captureSession.canAddOutput(photoOutput!) else {
        invokeOnError(.parameter(.unsupportedOutput(outputDescriptor: "photo-output")))
        return
      }
      captureSession.addOutput(photoOutput!)
      if videoDeviceInput!.device.position == .front {
        // Mirror photos coming from the front camera so they match the preview.
        photoOutput!.mirror()
      }
    }
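
    // NOTE: `mirror()` above is a small VisionCamera helper defined elsewhere in
    // the library. A minimal sketch of its assumed shape, flipping every
    // connection of the output (assumption, not the verbatim implementation):
    //
    //   extension AVCaptureOutput {
    //     func mirror() {
    //       connections.forEach { connection in
    //         if connection.isVideoMirroringSupported {
    //           connection.automaticallyAdjustsVideoMirroring = false
    //           connection.isVideoMirrored = true
    //         }
    //       }
    //     }
    //   }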

    // Video Output + Frame Processor
    if let videoOutput = videoOutput {
      captureSession.removeOutput(videoOutput)
      self.videoOutput = nil
    }
    if video?.boolValue == true || enableFrameProcessor {
      ReactLogger.log(level: .info, message: "Adding Video Data output...")
      videoOutput = AVCaptureVideoDataOutput()
      guard captureSession.canAddOutput(videoOutput!) else {
        invokeOnError(.parameter(.unsupportedOutput(outputDescriptor: "video-output")))
        return
      }
      videoOutput!.setSampleBufferDelegate(self, queue: CameraQueues.videoQueue)
      videoOutput!.alwaysDiscardsLateVideoFrames = false

      if let pixelFormat = pixelFormat as? String {
        // Map the JS `pixelFormat` union to a CoreVideo pixel format type.
        let defaultFormat = CMFormatDescriptionGetMediaSubType(videoDeviceInput!.device.activeFormat.formatDescription)
        var pixelFormatType: OSType = defaultFormat
        switch pixelFormat {
        case "yuv":
          // 4:2:0 bi-planar YUV (video range), the common native camera format.
          pixelFormatType = kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange
        case "rgb":
          pixelFormatType = kCVPixelFormatType_32BGRA
        case "native":
          pixelFormatType = defaultFormat
        default:
          invokeOnError(.parameter(.invalid(unionName: "pixelFormat", receivedValue: pixelFormat)))
          return
        }
        videoOutput!.videoSettings = [
          String(kCVPixelBufferPixelFormatTypeKey): pixelFormatType,
        ]
      }
      captureSession.addOutput(videoOutput!)
    }

    onOrientationChanged()

    invokeOnInitialized()
    isReady = true
    ReactLogger.log(level: .info, message: "Session successfully configured!")
  }
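
  // NOTE: Frames from the Video Data Output above are delivered through
  // AVCaptureVideoDataOutputSampleBufferDelegate, which CameraView implements
  // elsewhere in VisionCamera (the Frame Processor / video recording path).
  // Assumed shape of that callback:
  //
  //   func captureOutput(_ output: AVCaptureOutput,
  //                      didOutput sampleBuffer: CMSampleBuffer,
  //                      from connection: AVCaptureConnection) {
  //     // forward the CMSampleBuffer to the Frame Processor and/or RecordingSession
  //   }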

  // pragma MARK: Configure Device

  /**
   Configures the Video Device with the given FPS and HDR modes.
   */
  final func configureDevice() {
    ReactLogger.log(level: .info, message: "Configuring Device...")
    guard let device = videoDeviceInput?.device else {
      invokeOnError(.session(.cameraNotReady))
      return
    }

    do {
      try device.lockForConfiguration()
      // Make sure the device gets unlocked again, even on the early-return error paths below.
      defer {
        device.unlockForConfiguration()
      }

      if let fps = fps?.int32Value {
        let supportsGivenFps = device.activeFormat.videoSupportedFrameRateRanges.contains { range in
          return range.includes(fps: Double(fps))
        }
        if !supportsGivenFps {
          invokeOnError(.format(.invalidFps(fps: Int(fps))))
          return
        }

        // A min/max frame duration of 1/fps locks the device to exactly the requested frame rate.
        let duration = CMTimeMake(value: 1, timescale: fps)
        device.activeVideoMinFrameDuration = duration
        device.activeVideoMaxFrameDuration = duration
      } else {
        // No FPS given - reset to the active format's default frame durations.
        device.activeVideoMinFrameDuration = CMTime.invalid
        device.activeVideoMaxFrameDuration = CMTime.invalid
      }

      if hdr != nil {
        if hdr == true && !device.activeFormat.isVideoHDRSupported {
          invokeOnError(.format(.invalidHdr))
          return
        }
        if !device.automaticallyAdjustsVideoHDREnabled {
          if device.isVideoHDREnabled != hdr!.boolValue {
            device.isVideoHDREnabled = hdr!.boolValue
          }
        }
      }

      if lowLightBoost != nil {
        if lowLightBoost == true && !device.isLowLightBoostSupported {
          invokeOnError(.device(.lowLightBoostNotSupported))
          return
        }
        if device.automaticallyEnablesLowLightBoostWhenAvailable != lowLightBoost!.boolValue {
          device.automaticallyEnablesLowLightBoostWhenAvailable = lowLightBoost!.boolValue
        }
      }

      ReactLogger.log(level: .info, message: "Device successfully configured!")
    } catch let error as NSError {
      invokeOnError(.device(.configureError), cause: error)
      return
    }
  }
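
  // NOTE: `includes(fps:)` above is a tiny AVFrameRateRange helper defined
  // elsewhere in VisionCamera. A minimal sketch of its assumed shape:
  //
  //   extension AVFrameRateRange {
  //     func includes(fps: Double) -> Bool {
  //       return fps >= minFrameRate && fps <= maxFrameRate
  //     }
  //   }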

  // pragma MARK: Configure Format

  /**
   Configures the Video Device to find the best matching Format.
   */
  final func configureFormat() {
    ReactLogger.log(level: .info, message: "Configuring Format...")
    guard let filter = format else {
      // Format Filter was null. Ignore it.
      return
    }
    guard let device = videoDeviceInput?.device else {
      invokeOnError(.session(.cameraNotReady))
      return
    }

    if device.activeFormat.matchesFilter(filter) {
      ReactLogger.log(level: .info, message: "Active format already matches filter.")
      return
    }

    // get matching format
    let matchingFormats = device.formats.filter { $0.matchesFilter(filter) }.sorted { $0.isBetterThan($1) }
    guard let format = matchingFormats.first else {
      invokeOnError(.format(.invalidFormat))
      return
    }

    do {
      try device.lockForConfiguration()
      device.activeFormat = format
      device.unlockForConfiguration()
      ReactLogger.log(level: .info, message: "Format successfully configured!")
    } catch let error as NSError {
      invokeOnError(.device(.configureError), cause: error)
      return
    }
  }
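
  // NOTE: `matchesFilter(_:)` and `isBetterThan(_:)` are Format helpers defined
  // elsewhere in VisionCamera: the former checks the JS format dictionary
  // (dimensions, FPS ranges, HDR, ...) against an AVCaptureDevice.Format, the
  // latter orders candidates. A minimal sketch of an ordering by resolution
  // (an assumption for illustration, not the library's exact logic):
  //
  //   extension AVCaptureDevice.Format {
  //     func isBetterThan(_ other: AVCaptureDevice.Format) -> Bool {
  //       let lhs = CMVideoFormatDescriptionGetDimensions(formatDescription)
  //       let rhs = CMVideoFormatDescriptionGetDimensions(other.formatDescription)
  //       return Int(lhs.width) * Int(lhs.height) > Int(rhs.width) * Int(rhs.height)
  //     }
  //   }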

  // pragma MARK: Notifications/Interruptions

  @objc
  func sessionRuntimeError(notification: Notification) {
    ReactLogger.log(level: .error, message: "Unexpected Camera Runtime Error occurred!")
    guard let error = notification.userInfo?[AVCaptureSessionErrorKey] as? AVError else {
      return
    }
    invokeOnError(.unknown(message: error._nsError.description), cause: error._nsError)

    if isActive {
      // restart capture session after an error occurred
      CameraQueues.cameraQueue.async {
        self.captureSession.startRunning()
      }
    }
  }
}
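
// MARK: - Usage sketch (illustrative)
//
// A minimal sketch of how these three methods are typically driven together
// from the camera queue whenever props change. The `updateConfiguration()`
// entry point below is hypothetical; `CameraQueues.cameraQueue`, `isActive`
// and `captureSession` are real members used elsewhere in this file.
//
//   extension CameraView {
//     func updateConfiguration() {
//       CameraQueues.cameraQueue.async {
//         self.configureCaptureSession() // session topology: inputs + outputs
//         self.configureFormat()         // pick the best matching device format
//         self.configureDevice()         // FPS, HDR, low-light boost
//         if self.isActive && !self.captureSession.isRunning {
//           self.captureSession.startRunning()
//         }
//       }
//     }
//   }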