* Clean up Frame Processor
* Create FrameProcessorHolder
* Create FrameProcessorDelegate in ObjC++
* Move frame processor to FrameProcessorDelegate
* Decorate runtime, check for null
* Update FrameProcessorDelegate.mm
* Cleanup FrameProcessorBindings.mm
* Fix RuntimeDecorator.h import
* Update FrameProcessorDelegate.mm
* "React" -> "React Helper" to avoid confusion
* Rename folders again
* Fix podspec flattening a lot of headers, causing REA nameclash
* Fix header imports to avoid REA naming collision
* Lazily initialize jsi::Runtime on DispatchQueue
* Install frame processor bindings from Swift
* First try to call jsi::Function (frame processor) 👀
* Call viewForReactTag on RCT main thread
* Fix bridge accessing
* Add more logs
* Update CameraViewManager.swift
* Add more TODOs
* Re-indent .cpp files
* Fix RCTTurboModule import podspec
* Remove unnecessary include check for swift umbrella header
* Merge branch 'main' into frame-processors
* Docs: use static width for images (283)
* Create validate-cpp.yml
* Update a lot of packages to latest
* Set SWIFT_VERSION to 5.2 in podspec
* Create clean.sh
* Delete unused C++ files
* podspec: Remove CLANG_CXX_LANGUAGE_STANDARD and OTHER_CFLAGS
* Update pod lockfiles
* Regenerate lockfiles
* Remove IOSLogger
* Use NSLog
* Create FrameProcessorManager (inherits from REA RuntimeManager)
* Create reanimated::RuntimeManager shared_ptr
* Re-integrate pods
* Add react-native-reanimated >=2 peerDependency
* Add metro-config
* blacklist -> exclusionList
* Try to call worklet
* Fix jsi::Value* initializer
* Call ShareableValue::adapt (makeShareable) with React/JS Runtime
* Add null-checks
* Lift runtime manager creation out of delegate, into bindings
* Remove debug statement
* Make RuntimeManager unique_ptr
* Set _FRAME_PROCESSOR
* Extract convertJSIFunctionToFrameProcessorCallback
* Print frame
* Merge branch 'main' into frame-processors
* Reformat Swift code
* Install reanimated from npm again
* Re-integrate Pods
* Dependabot: Also scan example/ and docs/
* Update validate-cpp.yml
* Create FrameProcessorUtils
* Create Frame.h
* Abstract HostObject creation away
* Fix types
* Fix frame processor call
* Add todo
* Update lockfiles
* Add C++ contributing instructions
* Update CONTRIBUTING.md
* Add android/src/main/cpp to cpplint
* Update cpplint.sh
* Fix a few cpplint errors
* Fix globals
* Fix a few more cpplint errors
* Update App.tsx
* Update AndroidLogger.cpp
* Format
* Fix cpplint script (check-cpp)
* Try to simplify frame processor
* y
* Update FrameProcessorUtils.mm
* Update FrameProcessorBindings.mm
* Update CameraView.swift
* Update CameraViewManager.m
* Restructure everything
* fix
* Fix `@objc` export (make public)
* Refactor installFrameProcessorBindings into FrameProcessorRuntimeManager
* Add swift RCTBridge.runOnJS helper
* Fix run(onJS)
* Add pragma once
* Add `&self` to lambda
* Update FrameProcessorRuntimeManager.mm
* reorder imports
* Fix imports
* forward declare
* Rename extension
* Destroy buffer after execution
* Add FrameProcessorPluginRegistry base
* Merge branch 'main' into frame-processors
* Add frameProcessor to types
* Update Camera.tsx
* Fix rebase merge
* Remove movieOutput
* Use `useFrameProcessor`
* Fix bad merge
* Add additional ESLint rules
* Update lockfiles
* Update CameraViewManager.m
* Add support for V8 runtime
* Add frame processor plugins API
* Print plugin invoke
* Fix React Utils in podspec
* Fix runOnJS swift name
* Remove invalid redecl of `captureSession`
* Use REA 2.1.0 which includes all my big PRs 🎉
* Update validate-cpp.yml
* Update Podfile.lock
* Remove Flipper
* Fix dereferencing
* Capture `self` by value. Fucking hell, what a dumb mistake.
* Override a few HostObject functions
* Expose isReady, width, height, bytesPerRow and planesCount
* use hook again
* Expose property names
* FrameProcessor -> Frame
* Update CameraView+RecordVideo.swift
* Add Swift support for Frame Processors Plugins
* Add macros for plugin installation
* Add ObjC frame processor plugin
* Correctly install frame processor plugins
* Don't require custom name for macro
* Check if plugin already exists
* Implement QR Code Frame Processor Plugin in Swift
* Adjust ObjC style frame processor macro
* optimize
* Add `frameProcessorFrameDropRate`
* Fix types
* Only log once
* Log if it executes slowly
* Implement `frameProcessorFps`
* Implement manual encoded video recordings
* Use recommended video settings
* Add fileType types
* Ignore if input is not ready for media data
* Add completion handler
* Add audio buffer sampling
* Init only for video frame
* use AVAssetWriterInputPixelBufferAdaptor
* Remove AVAssetWriterInputPixelBufferAdaptor
* Rotate VideoWriter
* Always assume portrait orientation
* Update RecordingSession.swift
* Use a separate Queue for Audio
* Format Swift
* Update CameraView+RecordVideo.swift
* Use `videoQueue` instead of `cameraQueue`
* Move example plugins to example app
* Fix hardcoded name in plugin macro
* QRFrame... -> QRCodeFrame...
* Update FrameProcessorPlugin.h
* Add example frame processors to JS base
* Update QRCodeFrameProcessorPluginSwift.m
* Add docs to create FP Plugins
* Update FRAME_PROCESSORS_CREATE.mdx
* Update FRAME_PROCESSORS_CREATE.mdx
* Use `AVAssetWriterInputPixelBufferAdaptor` for efficient pixel buffer recycling
* Add customizable `pixelFormat`
* Use native format if available
* Update project.pbxproj
* Set video width and height as source-pixel-buffer attributes
* Catch
* Update App.tsx
* Don't explicitly set video dimensions, let CVPixelBufferPool handle it
* Add a few logs
* Cleanup
* Update CameraView+RecordVideo.swift
* Eagerly initialize asset writer to fix stutter at first frame
* Use `cameraQueue` DispatchQueue to not block CaptureDataOutputDelegate
* Fix duration calculation
* cleanup
* Cleanup
* Swiftformat
* Return available video codecs
* Only show frame drop notification for video output
* Remove photo and video codec functionality: it was too much complexity and probably never used anyways.
* Revert all android related changes for now
* Cleanup
* Remove unused header
* Update AVAssetWriter.Status+descriptor.swift
* Only call Frame Processor for Video Frames
* Fix `if`
* Add support for Frame Processor plugin parameters/arguments
* Fix arg support
* Move to JSIUtils.mm
* Update JSIUtils.h
* Update FRAME_PROCESSORS_CREATE.mdx
* Update FRAME_PROCESSORS_CREATE.mdx
* Upgrade packages for docs/
* fix docs
* Rename
* highlight lines
* docs
* community plugins
* Update FRAME_PROCESSOR_CREATE_FINAL.mdx
* Update FRAME_PROCESSOR_PLUGIN_LIST.mdx
* Update FRAME_PROCESSOR_PLUGIN_LIST.mdx
* Update dependencies (1/2)
* Update dependencies (2/2)
* Update Gemfile.lock
* add FP docs
* Update README.md
* Make `lastFrameProcessor` private
* add `frameProcessor` docs
* fix docs
* adjust docs
* Update DEVICES.mdx
* fix
* s
* Add logs demo
* add metro restart note
* Update FRAME_PROCESSOR_CREATE_PLUGIN_IOS.mdx
* Mirror video device
* Update AVCaptureVideoDataOutput+mirror.swift
* Create .swift-version
* Enable whole module optimization
* Fix recording mirrored video
* Swift format
* Clean dictionary on `markInvalid`
* Fix cleanup
* Add docs for disabling frame processors
* Update project.pbxproj
* Revert "Update project.pbxproj" (This reverts commit e67861e51b88b4888a6940e2d20388f3044211d0.)
* Log frame drop reason
* Format
* add more samples
* Add clang-format
* also check .mm
* Revert "also check .mm" (This reverts commit 8b9d5e2c29866b05909530d104f6633d6c49eadd.)
* Revert "Add clang-format" (This reverts commit 7643ac808e0fc34567ea1f814e73d84955381636.)
* Use `kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange` as default
* Read matching video attributes from videoSettings
* Add TODO
* Swiftformat
* Conditionally disable frame processors
* Assert if trying to use frame processors when disabled
* Add frame-processors demo gif
* Allow disabling frame processors via `VISION_CAMERA_DISABLE_FRAME_PROCESSORS`
* Update FrameProcessorRuntimeManager.mm
* Update FRAME_PROCESSORS.mdx
* Update project.pbxproj
* Update FRAME_PROCESSORS_CREATE_OVERVIEW.mdx
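Many of the commits above build up the Frame Processor plugin API (`FrameProcessorPlugin.h`, the plugin installation macros, the Swift QR code example plugin). As a rough sketch of what such a Swift plugin looks like in this era of the API (the `callback(_:withArgs:)` signature, the `Frame.buffer` property, and the export macro name are recalled from the PR's docs and should be treated as assumptions, not verbatim code from this diff):

```swift
// Hypothetical Swift Frame Processor plugin sketch. Assumptions: `Frame`
// exposes the underlying CMSampleBuffer as `buffer`, and plugins are static
// `callback` functions exported via the ObjC macro in FrameProcessorPlugin.h.
import AVFoundation

@objc(ExamplePlugin)
public class ExamplePlugin: NSObject {
  @objc
  public static func callback(_ frame: Frame!, withArgs _: [Any]!) -> Any! {
    // Grab the pixel buffer of the current camera frame.
    guard let imageBuffer = CMSampleBufferGetImageBuffer(frame.buffer) else {
      return nil
    }
    // Return any JS-convertible value (numbers, strings, arrays, dictionaries).
    return [
      "width": CVPixelBufferGetWidth(imageBuffer),
      "height": CVPixelBufferGetHeight(imageBuffer),
    ]
  }
}
```

On the JS side, the worklet created with `useFrameProcessor` calls the plugin through the name registered by the export macro; in this version plugins were exposed as `__`-prefixed globals (e.g. `__examplePlugin(frame)`). The file below, also part of this PR, shows the iOS capture-session setup that feeds frames to these processors: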
```swift
//
//  CameraView+AVCaptureSession.swift
//  VisionCamera
//
//  Created by Marc Rousavy on 26.03.21.
//  Copyright © 2021 Facebook. All rights reserved.
//

import AVFoundation
import Foundation

/**
 Extension for CameraView that sets up the AVCaptureSession, Device and Format.
 */
extension CameraView {
  /**
   Configures the Capture Session.
   */
  final func configureCaptureSession() {
    ReactLogger.log(level: .info, message: "Configuring Session...")
    isReady = false

    #if targetEnvironment(simulator)
      return invokeOnError(.device(.notAvailableOnSimulator))
    #endif

    guard cameraId != nil else {
      return invokeOnError(.device(.noDevice))
    }
    let cameraId = self.cameraId! as String

    ReactLogger.log(level: .info, message: "Initializing Camera with device \(cameraId)...")
    captureSession.beginConfiguration()
    defer {
      captureSession.commitConfiguration()
    }

    // Disable automatic Audio Session configuration because we configure it in CameraView+AVAudioSession.swift (called before Camera gets activated)
    captureSession.automaticallyConfiguresApplicationAudioSession = false

    // If preset is set, use preset. Otherwise use format.
    if let preset = self.preset {
      var sessionPreset: AVCaptureSession.Preset?
      do {
        sessionPreset = try AVCaptureSession.Preset(withString: preset)
      } catch let EnumParserError.unsupportedOS(supportedOnOS: os) {
        return invokeOnError(.parameter(.unsupportedOS(unionName: "Preset", receivedValue: preset, supportedOnOs: os)))
      } catch {
        return invokeOnError(.parameter(.invalid(unionName: "Preset", receivedValue: preset)))
      }
      if sessionPreset != nil {
        if captureSession.canSetSessionPreset(sessionPreset!) {
          captureSession.sessionPreset = sessionPreset!
        } else {
          // non-fatal error, so continue with configuration
          invokeOnError(.format(.invalidPreset(preset: preset)))
        }
      }
    }

    // INPUTS
    // Video Input
    do {
      if let videoDeviceInput = self.videoDeviceInput {
        captureSession.removeInput(videoDeviceInput)
      }
      guard let videoDevice = AVCaptureDevice(uniqueID: cameraId) else {
        return invokeOnError(.device(.invalid))
      }
      zoom = NSNumber(value: Double(videoDevice.neutralZoomPercent))
      videoDeviceInput = try AVCaptureDeviceInput(device: videoDevice)
      guard captureSession.canAddInput(videoDeviceInput!) else {
        return invokeOnError(.parameter(.unsupportedInput(inputDescriptor: "video-input")))
      }
      captureSession.addInput(videoDeviceInput!)
    } catch {
      return invokeOnError(.device(.invalid))
    }

    // OUTPUTS
    if let photoOutput = self.photoOutput {
      captureSession.removeOutput(photoOutput)
    }
    // Photo Output
    photoOutput = AVCapturePhotoOutput()
    photoOutput!.isDepthDataDeliveryEnabled = photoOutput!.isDepthDataDeliverySupported && enableDepthData
    if let enableHighResolutionCapture = self.enableHighResolutionCapture?.boolValue {
      photoOutput!.isHighResolutionCaptureEnabled = enableHighResolutionCapture
    }
    if #available(iOS 12.0, *) {
      photoOutput!.isPortraitEffectsMatteDeliveryEnabled = photoOutput!.isPortraitEffectsMatteDeliverySupported && self.enablePortraitEffectsMatteDelivery
    }
    guard captureSession.canAddOutput(photoOutput!) else {
      return invokeOnError(.parameter(.unsupportedOutput(outputDescriptor: "photo-output")))
    }
    captureSession.addOutput(photoOutput!)
    if videoDeviceInput!.device.position == .front {
      photoOutput!.mirror()
    }

    // Video Output + Frame Processor
    if let videoOutput = self.videoOutput {
      captureSession.removeOutput(videoOutput)
      self.videoOutput = nil
    }
    ReactLogger.log(level: .info, message: "Adding Video Data output...")
    videoOutput = AVCaptureVideoDataOutput()
    guard captureSession.canAddOutput(videoOutput!) else {
      return invokeOnError(.parameter(.unsupportedOutput(outputDescriptor: "video-output")))
    }
    videoOutput!.setSampleBufferDelegate(self, queue: videoQueue)
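    // alwaysDiscardsLateVideoFrames drops frames that arrive while the delegate
    // queue is still busy (e.g. a slow Frame Processor) instead of queueing them,
    // keeping latency bounded at the cost of dropped frames.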
    videoOutput!.alwaysDiscardsLateVideoFrames = true
    captureSession.addOutput(videoOutput!)
    if videoDeviceInput!.device.position == .front {
      videoOutput!.mirror()
    }

    // Audio Output
    if let audioOutput = self.audioOutput {
      captureSession.removeOutput(audioOutput)
      self.audioOutput = nil
    }
    ReactLogger.log(level: .info, message: "Adding Audio Data output...")
    audioOutput = AVCaptureAudioDataOutput()
    guard captureSession.canAddOutput(audioOutput!) else {
      return invokeOnError(.parameter(.unsupportedOutput(outputDescriptor: "audio-output")))
    }
    audioOutput!.setSampleBufferDelegate(self, queue: audioQueue)
    captureSession.addOutput(audioOutput!)

    invokeOnInitialized()
    isReady = true
    ReactLogger.log(level: .info, message: "Session successfully configured!")
  }

  /**
   Configures the Video Device with the given FPS, HDR and ColorSpace.
   */
  final func configureDevice() {
    ReactLogger.log(level: .info, message: "Configuring Device...")
    guard let device = videoDeviceInput?.device else {
      return invokeOnError(.session(.cameraNotReady))
    }

    do {
      try device.lockForConfiguration()

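      // Pinning min and max frame duration to the same value (1/fps seconds)
      // locks the device to a fixed frame rate; CMTime.invalid restores the
      // format's default frame-rate range.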
      if let fps = self.fps?.int32Value {
        let duration = CMTimeMake(value: 1, timescale: fps)
        device.activeVideoMinFrameDuration = duration
        device.activeVideoMaxFrameDuration = duration
      } else {
        device.activeVideoMinFrameDuration = CMTime.invalid
        device.activeVideoMaxFrameDuration = CMTime.invalid
      }
      if hdr != nil {
        if hdr == true && !device.activeFormat.isVideoHDRSupported {
          return invokeOnError(.format(.invalidHdr))
        }
        if !device.automaticallyAdjustsVideoHDREnabled {
          if device.isVideoHDREnabled != hdr!.boolValue {
            device.isVideoHDREnabled = hdr!.boolValue
          }
        }
      }
      if lowLightBoost != nil {
        if lowLightBoost == true && !device.isLowLightBoostSupported {
          return invokeOnError(.device(.lowLightBoostNotSupported))
        }
        if device.automaticallyEnablesLowLightBoostWhenAvailable != lowLightBoost!.boolValue {
          device.automaticallyEnablesLowLightBoostWhenAvailable = lowLightBoost!.boolValue
        }
      }
      if colorSpace != nil, let avColorSpace = try? AVCaptureColorSpace(string: String(colorSpace!)) {
        device.activeColorSpace = avColorSpace
      }

      device.unlockForConfiguration()
      ReactLogger.log(level: .info, message: "Device successfully configured!")
    } catch let error as NSError {
      return invokeOnError(.device(.configureError), cause: error)
    }
  }

  /**
   Configures the Video Device to find the best matching Format.
   */
  final func configureFormat() {
    ReactLogger.log(level: .info, message: "Configuring Format...")
    guard let filter = self.format else {
      // Format Filter was null. Ignore it.
      return
    }
    guard let device = videoDeviceInput?.device else {
      return invokeOnError(.session(.cameraNotReady))
    }

    if device.activeFormat.matchesFilter(filter) {
      ReactLogger.log(level: .info, message: "Active format already matches filter.")
      return
    }

    // get matching format
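    // Keep only formats that satisfy the filter, then sort so the best
    // candidate (per the isBetterThan heuristic) comes first.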
    let matchingFormats = device.formats.filter { $0.matchesFilter(filter) }.sorted { $0.isBetterThan($1) }
    guard let format = matchingFormats.first else {
      return invokeOnError(.format(.invalidFormat))
    }

    do {
      try device.lockForConfiguration()
      device.activeFormat = format
      device.unlockForConfiguration()
      ReactLogger.log(level: .info, message: "Format successfully configured!")
    } catch let error as NSError {
      return invokeOnError(.device(.configureError), cause: error)
    }
  }

  @objc
  func sessionRuntimeError(notification: Notification) {
    ReactLogger.log(level: .error, message: "Unexpected Camera Runtime Error occurred!")
    guard let error = notification.userInfo?[AVCaptureSessionErrorKey] as? AVError else {
      return
    }

    invokeOnError(.unknown(message: error._nsError.description), cause: error._nsError)

    if isActive {
      // restart capture session after an error occurred
      queue.async {
        self.captureSession.startRunning()
      }
    }
  }

  @objc
  func sessionInterruptionBegin(notification: Notification) {
    ReactLogger.log(level: .error, message: "Capture Session Interruption begin Notification!")
    guard let reasonNumber = notification.userInfo?[AVCaptureSessionInterruptionReasonKey] as? NSNumber else {
      return
    }
    let reason = AVCaptureSession.InterruptionReason(rawValue: reasonNumber.intValue)

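    // reason is an Optional (the rawValue initializer can fail), so unknown
    // interruption reasons simply fall through to the default case.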
    switch reason {
    case .audioDeviceInUseByAnotherClient:
      // remove audio input so iOS thinks nothing is wrong and won't pause the session.
      removeAudioInput()
    default:
      // don't do anything, iOS will automatically pause session
      break
    }
  }

  @objc
  func sessionInterruptionEnd(notification: Notification) {
    ReactLogger.log(level: .error, message: "Capture Session Interruption end Notification!")
    guard let reasonNumber = notification.userInfo?[AVCaptureSessionInterruptionReasonKey] as? NSNumber else {
      return
    }
    let reason = AVCaptureSession.InterruptionReason(rawValue: reasonNumber.intValue)

    switch reason {
    case .audioDeviceInUseByAnotherClient:
      // add audio again because we removed it when we received the interruption.
      configureAudioSession()
    default:
      // don't do anything, iOS will automatically resume session
      break
    }
  }
}
```
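The three `@objc` methods at the end are notification handlers; their registration is not part of this file. A minimal sketch of how they would be wired up elsewhere in `CameraView`, assuming standard `NotificationCenter` observation of the capture session:

```swift
// Sketch (assumed wiring, not from this file): subscribe the handlers above
// to the capture session's lifecycle notifications.
NotificationCenter.default.addObserver(self,
                                       selector: #selector(sessionRuntimeError),
                                       name: .AVCaptureSessionRuntimeError,
                                       object: captureSession)
NotificationCenter.default.addObserver(self,
                                       selector: #selector(sessionInterruptionBegin),
                                       name: .AVCaptureSessionWasInterrupted,
                                       object: captureSession)
NotificationCenter.default.addObserver(self,
                                       selector: #selector(sessionInterruptionEnd),
                                       name: .AVCaptureSessionInterruptionEnded,
                                       object: captureSession)
```

Passing the session as the `object:` parameter scopes the observers to this capture session, so notifications from any other session in the process are ignored.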