feat: Full Android rewrite (CameraX -> Camera2) (#1674)

* Nuke CameraX

* fix: Run View Finder on UI Thread

* Open Camera, set up Threads

* fix init

* Mirror if needed

* Try PreviewView

* Use max resolution

* Add `hardwareLevel` property

* Check if output type is supported

* Replace `frameRateRanges` with `minFps` and `maxFps`

* Remove `isHighestPhotoQualitySupported`

* Remove `colorSpace`

The native platforms will use the best / most accurate colorSpace by default anyway.

* HDR

* Check from format

* fix

* Remove `supportsParallelVideoProcessing`

* Correctly return video/photo sizes on Android now. Finally

* Log all Device props

* Log if optimized use-case is used

* Cleanup

* Configure Camera Input only once

* Revert "Configure Camera Input only once"

This reverts commit 0fd6c03f54c7566cb5592053720c4a8743aba92e.

* Extract Camera configuration

* Try to reconfigure all

* Hook based

* Properly set up `CameraSession`

* Delete unused

* fix: Fix recreate when outputs change

* Update NativePreviewView.kt

* Use callback for closing

* Catch CameraAccessException

* Finally got it stable

* Remove isMirrored

* Implement `takePhoto()`

* Add ExifInterface library

* Run findViewById on UI Thread

* Add Photo Output Surface to takePhoto

* Fix Video Stabilization Modes

* Optimize Imports

* More logs

* Update CameraSession.kt

* Close Image

* Use separate Executor in CameraQueue

* Delete hooks

* Use same Thread again

* If opened, call error

* Update CameraSession.kt

* Log HW level

* fix: Don't enable Stream Use Case if it's not 100% supported
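
A minimal sketch of that guard, with an illustrative helper name; stream use cases are only advertised on API 33+ via `CameraCharacteristics.SCALER_AVAILABLE_STREAM_USE_CASES`:

```kotlin
import android.hardware.camera2.CameraCharacteristics
import android.os.Build

// Sketch: only opt into a stream use case if the device explicitly lists it.
fun supportsStreamUseCase(characteristics: CameraCharacteristics, useCase: Long): Boolean {
  if (Build.VERSION.SDK_INT < Build.VERSION_CODES.TIRAMISU) return false
  val available = characteristics.get(CameraCharacteristics.SCALER_AVAILABLE_STREAM_USE_CASES) ?: return false
  return available.contains(useCase)
}
```

Only when this returns true would `OutputConfiguration.setStreamUseCase()` be called on the session outputs.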

* Move some stuff

* Cleanup PhotoOutputSynchronizer

* Try just open in suspend fun

* Some synchronization fixes

* fix logs

* Update CameraDevice+createCaptureSession.kt

* Update CameraDevice+createCaptureSession.kt

* fixes

* fix: Use Snapshot Template for speed capture prio

* Use PREVIEW template for repeating request

* Use `TEMPLATE_RECORD` if video use-case is attached
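
The last two bullets come down to picking the capture template from the attached outputs; a rough sketch of that decision (the `hasVideoOutput` flag is an assumption standing in for however the session tracks its outputs):

```kotlin
import android.hardware.camera2.CameraDevice
import android.hardware.camera2.CaptureRequest

// Sketch: TEMPLATE_RECORD tunes the pipeline for steady video when a video output is attached,
// otherwise TEMPLATE_PREVIEW is the low-latency default for the repeating request.
fun createRepeatingRequest(device: CameraDevice, hasVideoOutput: Boolean): CaptureRequest.Builder {
  val template = if (hasVideoOutput) CameraDevice.TEMPLATE_RECORD else CameraDevice.TEMPLATE_PREVIEW
  return device.createCaptureRequest(template)
}
```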

* Use `isRunning` flag

* Recreate session every time on active/inactive

* Lazily get values in capture session

* Stability

* Rebuild session if outputs change

* Set `didOutputsChange` back to false

* Capture first in lock

* Try

* kinda fix it? idk

* fix: Keep Outputs

* Refactor into single method

* Update CameraView.kt

* Use Enums for type safety
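
The pattern here is mapping each string union coming from JS onto a Kotlin enum once, at the prop boundary; a sketch of what such a parser can look like (the real ones live in `com.mrousavy.camera.parsers`, and the exact shape here is illustrative):

```kotlin
// Sketch: parse the raw string prop once, then pass a typed value around (e.g. the torch prop).
enum class Torch(val unionValue: String) {
  OFF("off"),
  ON("on");

  companion object {
    fun fromUnionValue(value: String?): Torch = when (value) {
      null, "off" -> OFF
      "on" -> ON
      else -> throw IllegalArgumentException("Invalid torch value: $value")
    }
  }
}
```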

* Implement Orientation (I think)

* Move RefCount management to Java (Frame)

* Don't crash when dropping a Frame

* Prefer Devices with higher max resolution

* Prefer multi-cams

* Use FastImage for Media Page

* Return orientation in takePhoto()

* Load orientation from EXIF Data

* Add `isMirrored` props and documentation for PhotoFile
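
Reading the orientation back out of the captured JPEG is plain `androidx.exifinterface` usage; a small sketch of that idea (the helper name and the mirrored-orientation mapping are illustrative):

```kotlin
import androidx.exifinterface.media.ExifInterface

// Sketch: derive orientation and mirroring for the takePhoto() result from the file's EXIF tag.
fun readExifOrientation(path: String): Pair<Int, Boolean> {
  val exif = ExifInterface(path)
  val orientation = exif.getAttributeInt(ExifInterface.TAG_ORIENTATION, ExifInterface.ORIENTATION_NORMAL)
  val isMirrored = when (orientation) {
    ExifInterface.ORIENTATION_FLIP_HORIZONTAL,
    ExifInterface.ORIENTATION_FLIP_VERTICAL,
    ExifInterface.ORIENTATION_TRANSPOSE,
    ExifInterface.ORIENTATION_TRANSVERSE -> true
    else -> false
  }
  return orientation to isMirrored
}
```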

* fix: Return `not-determined` on Android

* Update CameraViewModule.kt

* chore: Upgrade packages

* fix: Fix Metro Config

* Cleanup config

* Properly mirror Images on save

* Prepare MediaRecorder

* Start/Stop MediaRecorder

* Remove `takeSnapshot()`

It no longer works on Android and never worked on iOS; users can use the `useFrameProcessor` hook to take a snapshot instead.

* Use `MediaCodec`

* Move to `VideoRecording` class

* Cleanup Snapshot

* Create `SkiaPreviewView` hybrid class

* Create OpenGL context

* Create `SkiaPreviewView`

* Fix texture creation missing context

* Draw red frame

* Somehow get it working

* Add Skia CMake setup

* Start looping

* Init OpenGL

* Refactor into `SkiaRenderer`

* Cleanup PreviewSize

* Set up

* Only re-render UI if there is a new Frame

* Preview

* Fix init

* Try rendering Preview

* Update SkiaPreviewView.kt

* Log version

* Try using Skia (fail)

* Drawwwww!!!!!!!!!! 🎉

* Use Preview Size

* Clear first

* Refactor into SkiaRenderer

* Add `previewType: "none"` on iOS

* Simplify a lot

* Draw Camera? For some reason? I have no idea anymore

* Fix OpenGL errors

* Got it kinda working again?

* Actually draw Frame woah

* Clean up code

* Cleanup

* Update on main

* Synchronize render calls

* holy shit

* Update SkiaRenderer.cpp

* Update SkiaRenderer.cpp

* Refactor

* Update SkiaRenderer.cpp

* Check for `NO_INPUT_TEXTURE`

* Post & Wait

* Set input size

* Add Video back again

* Allow session without preview

* Convert JPEG to byte[]

* feat: Use `ImageReader` and use YUV Image Buffers in Skia Context (#1689)

* Try to pass YUV Buffers as Pixmaps

* Create pixmap!

* Clean up

* Render to preview

* Only render if we have an output surface

* Update SkiaRenderer.cpp

* Fix Y+U+V sampling code

* Cleanup

* Fix Semaphore 0

* Use 4:2:0 YUV again idk

* Update SkiaRenderer.h

* Set minSdk to 26

* Set surface

* Revert "Set minSdk to 26"

This reverts commit c4085b7c16c628532e5c2d68cf7ed11c751d0b48.

* Set previewType

* feat: Video Recording with Camera2 (#1691)

* Rename

* Update CameraSession.kt

* Use `SurfaceHolder` instead of `SurfaceView` for output

* Update CameraOutputs.kt

* Update CameraSession.kt

* fix: Fix crash when Preview is null

* Check if snapshot capture is supported

* Update RecordingSession.kt

* S

* Use `MediaRecorder`

* Make audio optional

* Add Torch

* Output duration

* Update RecordingSession.kt

* Start RecordingSession

* logs

* More log

* Base for preparing pass-through Recording

* Use `ImageWriter` to append Images to the Recording Surface

* Stream PRIVATE GPU_SAMPLED_IMAGE Images
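
The pass-through recording described in the last two bullets: camera frames arrive in an `ImageReader` as `PRIVATE`, GPU-sampleable images and are forwarded to the recorder's input surface with an `ImageWriter`. A hedged sketch of that pipeline (sizes, buffer counts and the surface/handler parameters are placeholders):

```kotlin
import android.graphics.ImageFormat
import android.hardware.HardwareBuffer
import android.media.ImageReader
import android.media.ImageWriter
import android.os.Handler
import android.view.Surface

// Sketch: stream PRIVATE images from the camera and append each one to the recording surface.
fun createPassThroughPipeline(width: Int, height: Int, recordingSurface: Surface, handler: Handler): ImageReader {
  val usage = HardwareBuffer.USAGE_GPU_SAMPLED_IMAGE or HardwareBuffer.USAGE_VIDEO_ENCODE
  val reader = ImageReader.newInstance(width, height, ImageFormat.PRIVATE, 3, usage)
  val writer = ImageWriter.newInstance(recordingSurface, 3)

  reader.setOnImageAvailableListener({ r ->
    val image = r.acquireNextImage() ?: return@setOnImageAvailableListener
    // queueInputImage takes ownership of the Image and releases it once it has been consumed.
    writer.queueInputImage(image)
  }, handler)

  // reader.surface is what the camera capture session renders into.
  return reader
}
```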

* Add flags

* Close session on stop

* Allow customizing `videoCodec` and `fileType`

* Enable Torch

* Fix Torch Mode

* Fix comparing outputs with hashCode

* Update CameraSession.kt

* Correctly pass along Frame Processor

* fix: Use AUDIO_BIT_RATE of 16 * 44.1 kHz

* Use CAMCORDER instead of MIC microphone

* Use 1 channel
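
Translated into `MediaRecorder` calls, the three audio tweaks above look roughly like this (a sketch of just the audio settings; the 44.1 kHz sampling rate is inferred from the bit-rate math, and in a full setup `setAudioSource` must come before `setOutputFormat` with the encoder settings after it):

```kotlin
import android.media.MediaRecorder

// Sketch: CAMCORDER audio source, mono, 44.1 kHz sampling, 16 * 44100 bps encoding bit rate.
fun configureAudio(recorder: MediaRecorder) {
  recorder.setAudioSource(MediaRecorder.AudioSource.CAMCORDER) // tuned for the device's camcorder mic
  recorder.setAudioChannels(1)
  recorder.setAudioSamplingRate(44_100)
  recorder.setAudioEncodingBitRate(16 * 44_100) // 705,600 bps
}
```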

* fix: Use `Orientation`

* Add `native` PixelFormat

* Update iOS to latest Skia integration

* feat: Add `pixelFormat` property to Camera
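
Internally the new `pixelFormat` prop has to resolve to an `android.graphics.ImageFormat` constant for the frame-processor/video output (the `pixelFormat.toImageFormat()` call in the diff below); a hedged sketch of that mapping, limited to the two values that appear in this PR:

```kotlin
import android.graphics.ImageFormat

// Sketch: map the JS-facing pixelFormat union onto Camera2 image formats.
enum class PixelFormat(val unionValue: String) {
  NATIVE("native"), // let the camera pick its native, usually GPU-private, format
  YUV("yuv");       // YUV_420_888 buffers, readable from Frame Processors

  fun toImageFormat(): Int = when (this) {
    NATIVE -> ImageFormat.PRIVATE
    YUV -> ImageFormat.YUV_420_888
  }
}
```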

* Catch error in configureSession

* Fix JPEG format

* Clean up best match finder

* Update CameraDeviceDetails.kt

* Clamp sizes by maximum CamcorderProfile size

* Remove `getAvailableVideoCodecs`

* chore: release 3.0.0-rc.5

* Use maximum video size of RECORD as default
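
Both the CamcorderProfile clamping a few bullets up and this RECORD default boil down to asking `CamcorderProfile` for the largest size the device's encoder pipeline is known to handle; a sketch (helper name is illustrative, and the numeric camera ID assumption only holds for built-in cameras):

```kotlin
import android.media.CamcorderProfile
import android.util.Size

// Sketch: the maximum recording size advertised by the device, used to clamp
// format sizes and as the default video size.
fun getMaximumVideoSize(cameraId: String): Size? {
  val id = cameraId.toIntOrNull() ?: return null
  if (!CamcorderProfile.hasProfile(id, CamcorderProfile.QUALITY_HIGH)) return null
  val profile = CamcorderProfile.get(id, CamcorderProfile.QUALITY_HIGH)
  return Size(profile.videoFrameWidth, profile.videoFrameHeight)
}
```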

* Update CameraDeviceDetails.kt

* Add a todo

* Add JSON device to issue report

* Prefer `full` devices and flash

* Lock to 30 FPS on Samsung

* Implement Zoom
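
On Camera2, per-request zoom is a single value on Android 11+ and a crop rectangle before that; a minimal sketch of the newer path (the pre-R crop-region fallback is omitted):

```kotlin
import android.hardware.camera2.CaptureRequest
import android.os.Build

// Sketch: apply the zoom factor to a capture request builder.
fun applyZoom(request: CaptureRequest.Builder, zoom: Float) {
  if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.R) {
    request.set(CaptureRequest.CONTROL_ZOOM_RATIO, zoom)
  }
  // Older devices would compute a CaptureRequest.SCALER_CROP_REGION from the sensor's active array instead.
}
```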

* Refactor

* Format -> PixelFormat

* fix: Feat `pixelFormat` -> `pixelFormats`

* Update TROUBLESHOOTING.mdx

* Format

* fix: Implement `zoom` for Photo Capture

* fix: Don't run if `isActive` is `false`

* fix: Call `examplePlugin(frame)`

* fix: Fix Flash

* fix: Use `react-native-worklets-core`!

* fix: Fix import

Marc Rousavy (committed by GitHub) on 2023-08-21 12:50:14 +02:00
parent 61fd4e0474, commit 37a3548a81
141 changed files with 3991 additions and 2251 deletions


@@ -5,80 +5,60 @@ import android.annotation.SuppressLint
import android.content.Context
import android.content.pm.PackageManager
import android.content.res.Configuration
import android.hardware.camera2.*
import android.hardware.camera2.CameraManager
import android.util.Log
import android.util.Range
import android.view.*
import android.view.View.OnTouchListener
import android.util.Size
import android.view.Surface
import android.view.View
import android.widget.FrameLayout
import androidx.camera.camera2.interop.Camera2Interop
import androidx.camera.core.*
import androidx.camera.core.impl.*
import androidx.camera.extensions.*
import androidx.camera.lifecycle.ProcessCameraProvider
import androidx.camera.video.*
import androidx.camera.video.VideoCapture
import androidx.camera.view.PreviewView
import androidx.core.content.ContextCompat
import androidx.lifecycle.*
import com.facebook.jni.HybridData
import com.facebook.proguard.annotations.DoNotStrip
import com.facebook.react.bridge.*
import com.mrousavy.camera.frameprocessor.Frame
import com.facebook.react.bridge.ReadableMap
import com.mrousavy.camera.extensions.containsAny
import com.mrousavy.camera.extensions.installHierarchyFitter
import com.mrousavy.camera.frameprocessor.FrameProcessor
import com.mrousavy.camera.frameprocessor.FrameProcessorPlugin
import com.mrousavy.camera.frameprocessor.FrameProcessorPluginRegistry
import com.mrousavy.camera.utils.*
import kotlinx.coroutines.*
import kotlinx.coroutines.guava.await
import java.lang.IllegalArgumentException
import java.util.concurrent.ExecutorService
import java.util.concurrent.Executors
import kotlin.math.max
import kotlin.math.min
import com.mrousavy.camera.parsers.PixelFormat
import com.mrousavy.camera.parsers.Orientation
import com.mrousavy.camera.parsers.PreviewType
import com.mrousavy.camera.parsers.Torch
import com.mrousavy.camera.parsers.VideoStabilizationMode
import com.mrousavy.camera.skia.SkiaPreviewView
import com.mrousavy.camera.skia.SkiaRenderer
import com.mrousavy.camera.utils.outputs.CameraOutputs
import kotlinx.coroutines.CoroutineScope
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.launch
import java.io.Closeable
//
// TODOs for the CameraView which are currently too hard to implement, either because of CameraX's limitations or my brain capacity.
//
// CameraView
// TODO: Actually use correct sizes for video and photo (currently it's both the video size)
// TODO: Configurable FPS higher than 30
// TODO: High-speed video recordings (export in CameraViewModule::getAvailableVideoDevices(), and set in CameraView::configurePreview()) (120FPS+)
// TODO: configureSession() enableDepthData
// TODO: configureSession() enableHighQualityPhotos
// TODO: configureSession() enablePortraitEffectsMatteDelivery
// TODO: configureSession() colorSpace
// CameraView+RecordVideo
// TODO: Better startRecording()/stopRecording() (promise + callback, wait for TurboModules/JSI)
// TODO: videoStabilizationMode
// TODO: Return Video size/duration
// CameraView+TakePhoto
// TODO: Mirror selfie images
// TODO: takePhoto() depth data
// TODO: takePhoto() raw capture
// TODO: takePhoto() photoCodec ("hevc" | "jpeg" | "raw")
// TODO: takePhoto() qualityPrioritization
// TODO: takePhoto() enableAutoRedEyeReduction
// TODO: takePhoto() enableAutoStabilization
// TODO: takePhoto() enableAutoDistortionCorrection
// TODO: takePhoto() return with jsi::Value Image reference for faster capture
@Suppress("KotlinJniMissingFunction") // I use fbjni, Android Studio is not smart enough to realize that.
@SuppressLint("ClickableViewAccessibility", "ViewConstructor")
class CameraView(context: Context, private val frameProcessorThread: ExecutorService) : FrameLayout(context), LifecycleOwner {
@SuppressLint("ClickableViewAccessibility", "ViewConstructor", "MissingPermission")
class CameraView(context: Context) : FrameLayout(context) {
companion object {
const val TAG = "CameraView"
const val TAG_PERF = "CameraView.performance"
private val propsThatRequireSessionReconfiguration = arrayListOf("cameraId", "format", "fps", "hdr", "lowLightBoost", "photo", "video", "enableFrameProcessor")
private val arrayListOfZoom = arrayListOf("zoom")
private val propsThatRequirePreviewReconfiguration = arrayListOf("cameraId", "previewType")
private val propsThatRequireSessionReconfiguration = arrayListOf("cameraId", "format", "photo", "video", "enableFrameProcessor", "pixelFormat")
private val propsThatRequireFormatReconfiguration = arrayListOf("fps", "hdr", "videoStabilizationMode", "lowLightBoost")
}
// react properties
// props that require reconfiguring
var cameraId: String? = null // this is actually not a react prop directly, but the result of setting device={}
var cameraId: String? = null
var enableDepthData = false
var enableHighQualityPhotos: Boolean? = null
var enablePortraitEffectsMatteDelivery = false
@@ -87,406 +67,186 @@ class CameraView(context: Context, private val frameProcessorThread: ExecutorSer
var video: Boolean? = null
var audio: Boolean? = null
var enableFrameProcessor = false
var pixelFormat: PixelFormat = PixelFormat.NATIVE
// props that require format reconfiguring
var format: ReadableMap? = null
var fps: Int? = null
var videoStabilizationMode: VideoStabilizationMode? = null
var hdr: Boolean? = null // nullable bool
var colorSpace: String? = null
var lowLightBoost: Boolean? = null // nullable bool
var previewType: PreviewType = PreviewType.NONE
// other props
var isActive = false
var torch = "off"
var torch: Torch = Torch.OFF
var zoom: Float = 1f // in "factor"
var orientation: String? = null
var enableZoomGesture = false
set(value) {
field = value
setOnTouchListener(if (value) touchEventListener else null)
}
var orientation: Orientation? = null
// private properties
private var isMounted = false
private val reactContext: ReactContext
get() = context as ReactContext
internal val cameraManager = context.getSystemService(Context.CAMERA_SERVICE) as CameraManager
@Suppress("JoinDeclarationAndAssignment")
internal val previewView: PreviewView
private val cameraExecutor = Executors.newSingleThreadExecutor()
internal val takePhotoExecutor = Executors.newSingleThreadExecutor()
internal val recordVideoExecutor = Executors.newSingleThreadExecutor()
internal var coroutineScope = CoroutineScope(Dispatchers.Main)
// session
internal val cameraSession: CameraSession
private var previewView: View? = null
private var previewSurface: Surface? = null
internal var camera: Camera? = null
internal var imageCapture: ImageCapture? = null
internal var videoCapture: VideoCapture<Recorder>? = null
public var frameProcessor: FrameProcessor? = null
private var preview: Preview? = null
private var imageAnalysis: ImageAnalysis? = null
internal var activeVideoRecording: Recording? = null
private var extensionsManager: ExtensionsManager? = null
private val scaleGestureListener: ScaleGestureDetector.SimpleOnScaleGestureListener
private val scaleGestureDetector: ScaleGestureDetector
private val touchEventListener: OnTouchListener
private val lifecycleRegistry: LifecycleRegistry
private var hostLifecycleState: Lifecycle.State
private val inputRotation: Int
get() {
return context.displayRotation
}
private val outputRotation: Int
get() {
if (orientation != null) {
// user is overriding output orientation
return when (orientation!!) {
"portrait" -> Surface.ROTATION_0
"landscapeRight" -> Surface.ROTATION_90
"portraitUpsideDown" -> Surface.ROTATION_180
"landscapeLeft" -> Surface.ROTATION_270
else -> throw InvalidTypeScriptUnionError("orientation", orientation!!)
}
} else {
// use same as input rotation
return inputRotation
}
private var skiaRenderer: SkiaRenderer? = null
internal var frameProcessor: FrameProcessor? = null
set(value) {
field = value
cameraSession.setFrameProcessor(frameProcessor)
}
private val inputOrientation: Orientation
get() = cameraSession.orientation
internal val outputOrientation: Orientation
get() = orientation ?: inputOrientation
private var minZoom: Float = 1f
private var maxZoom: Float = 1f
@Suppress("RedundantIf")
internal val fallbackToSnapshot: Boolean
@SuppressLint("UnsafeOptInUsageError")
get() {
if (video != true && !enableFrameProcessor) {
// Both use-cases are disabled, so `photo` is the only use-case anyways. Don't need to fallback here.
return false
}
cameraId?.let { cameraId ->
val cameraManager = reactContext.getSystemService(Context.CAMERA_SERVICE) as? CameraManager
cameraManager?.let {
val characteristics = cameraManager.getCameraCharacteristics(cameraId)
val hardwareLevel = characteristics.get(CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL)
if (hardwareLevel == CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL_LEGACY) {
// Camera only supports a single use-case at a time
return true
} else {
if (video == true && enableFrameProcessor) {
// Camera supports max. 2 use-cases, but both are occupied by `frameProcessor` and `video`
return true
} else {
// Camera supports max. 2 use-cases and only one is occupied (either `frameProcessor` or `video`), so we can add `photo`
return false
}
}
}
}
return false
}
init {
previewView = PreviewView(context)
previewView.layoutParams = LayoutParams(LayoutParams.MATCH_PARENT, LayoutParams.MATCH_PARENT)
previewView.installHierarchyFitter() // If this is not called correctly, view finder will be black/blank
addView(previewView)
scaleGestureListener = object : ScaleGestureDetector.SimpleOnScaleGestureListener() {
override fun onScale(detector: ScaleGestureDetector): Boolean {
zoom = max(min((zoom * detector.scaleFactor), maxZoom), minZoom)
update(arrayListOfZoom)
return true
}
}
scaleGestureDetector = ScaleGestureDetector(context, scaleGestureListener)
touchEventListener = OnTouchListener { _, event -> return@OnTouchListener scaleGestureDetector.onTouchEvent(event) }
hostLifecycleState = Lifecycle.State.INITIALIZED
lifecycleRegistry = LifecycleRegistry(this)
reactContext.addLifecycleEventListener(object : LifecycleEventListener {
override fun onHostResume() {
hostLifecycleState = Lifecycle.State.RESUMED
updateLifecycleState()
// workaround for https://issuetracker.google.com/issues/147354615, preview must be bound on resume
update(propsThatRequireSessionReconfiguration)
}
override fun onHostPause() {
hostLifecycleState = Lifecycle.State.CREATED
updateLifecycleState()
}
override fun onHostDestroy() {
hostLifecycleState = Lifecycle.State.DESTROYED
updateLifecycleState()
cameraExecutor.shutdown()
takePhotoExecutor.shutdown()
recordVideoExecutor.shutdown()
reactContext.removeLifecycleEventListener(this)
}
})
this.installHierarchyFitter()
setupPreviewView()
cameraSession = CameraSession(context, cameraManager, { invokeOnInitialized() }, { error -> invokeOnError(error) })
}
override fun onConfigurationChanged(newConfig: Configuration?) {
super.onConfigurationChanged(newConfig)
updateOrientation()
}
@SuppressLint("RestrictedApi")
private fun updateOrientation() {
preview?.targetRotation = inputRotation
imageCapture?.targetRotation = outputRotation
videoCapture?.targetRotation = outputRotation
imageAnalysis?.targetRotation = outputRotation
}
override fun getLifecycle(): Lifecycle {
return lifecycleRegistry
}
/**
* Updates the custom Lifecycle to match the host activity's lifecycle, and if it's active we narrow it down to the [isActive] and [isAttachedToWindow] fields.
*/
private fun updateLifecycleState() {
val lifecycleBefore = lifecycleRegistry.currentState
if (hostLifecycleState == Lifecycle.State.RESUMED) {
// Host Lifecycle (Activity) is currently active (RESUMED), so we narrow it down to the view's lifecycle
if (isActive && isAttachedToWindow) {
lifecycleRegistry.currentState = Lifecycle.State.RESUMED
} else {
lifecycleRegistry.currentState = Lifecycle.State.CREATED
}
} else {
// Host Lifecycle (Activity) is currently inactive (STARTED or DESTROYED), so that overrules our view's lifecycle
lifecycleRegistry.currentState = hostLifecycleState
}
Log.d(TAG, "Lifecycle went from ${lifecycleBefore.name} -> ${lifecycleRegistry.currentState.name} (isActive: $isActive | isAttachedToWindow: $isAttachedToWindow)")
// TODO: updateOrientation()
}
override fun onAttachedToWindow() {
super.onAttachedToWindow()
updateLifecycleState()
if (!isMounted) {
isMounted = true
invokeOnViewReady()
}
updateLifecycle()
}
override fun onDetachedFromWindow() {
super.onDetachedFromWindow()
updateLifecycleState()
updateLifecycle()
}
/**
* Invalidate all React Props and reconfigure the device
*/
fun update(changedProps: ArrayList<String>) = previewView.post {
// TODO: Does this introduce too much overhead?
// I need to .post on the previewView because it might've not been initialized yet
// I need to use CoroutineScope.launch because of the suspend fun [configureSession]
coroutineScope.launch {
try {
val shouldReconfigureSession = changedProps.containsAny(propsThatRequireSessionReconfiguration)
val shouldReconfigureZoom = shouldReconfigureSession || changedProps.contains("zoom")
val shouldReconfigureTorch = shouldReconfigureSession || changedProps.contains("torch")
val shouldUpdateOrientation = shouldReconfigureSession || changedProps.contains("orientation")
private fun setupPreviewView() {
this.previewView?.let { previewView ->
removeView(previewView)
if (previewView is Closeable) previewView.close()
}
this.previewSurface = null
if (changedProps.contains("isActive")) {
updateLifecycleState()
}
if (shouldReconfigureSession) {
when (previewType) {
PreviewType.NONE -> {
// Do nothing.
}
PreviewType.NATIVE -> {
val cameraId = cameraId ?: throw NoCameraDeviceError()
this.previewView = NativePreviewView(context, cameraManager, cameraId) { surface ->
previewSurface = surface
configureSession()
}
if (shouldReconfigureZoom) {
val zoomClamped = max(min(zoom, maxZoom), minZoom)
camera!!.cameraControl.setZoomRatio(zoomClamped)
}
if (shouldReconfigureTorch) {
camera!!.cameraControl.enableTorch(torch == "on")
}
if (shouldUpdateOrientation) {
updateOrientation()
}
} catch (e: Throwable) {
Log.e(TAG, "update() threw: ${e.message}")
invokeOnError(e)
}
PreviewType.SKIA -> {
if (skiaRenderer == null) skiaRenderer = SkiaRenderer()
this.previewView = SkiaPreviewView(context, skiaRenderer!!)
configureSession()
}
}
this.previewView?.let { previewView ->
previewView.layoutParams = LayoutParams(LayoutParams.MATCH_PARENT, LayoutParams.MATCH_PARENT)
addView(previewView)
}
}
/**
* Configures the camera capture session. This should only be called when the camera device changes.
*/
@SuppressLint("RestrictedApi", "UnsafeOptInUsageError")
private suspend fun configureSession() {
fun update(changedProps: ArrayList<String>) {
Log.i(TAG, "Props changed: $changedProps")
try {
val startTime = System.currentTimeMillis()
Log.i(TAG, "Configuring session...")
val shouldReconfigurePreview = changedProps.containsAny(propsThatRequirePreviewReconfiguration)
val shouldReconfigureSession = shouldReconfigurePreview || changedProps.containsAny(propsThatRequireSessionReconfiguration)
val shouldReconfigureFormat = shouldReconfigureSession || changedProps.containsAny(propsThatRequireFormatReconfiguration)
val shouldReconfigureZoom = /* TODO: When should we reconfigure this? */ shouldReconfigureSession || changedProps.contains("zoom")
val shouldReconfigureTorch = /* TODO: When should we reconfigure this? */ shouldReconfigureSession || changedProps.contains("torch")
val shouldUpdateOrientation = /* TODO: When should we reconfigure this? */ shouldReconfigureSession || changedProps.contains("orientation")
val shouldCheckActive = shouldReconfigureFormat || changedProps.contains("isActive")
if (shouldReconfigurePreview) {
setupPreviewView()
}
if (shouldReconfigureSession) {
configureSession()
}
if (shouldReconfigureFormat) {
configureFormat()
}
if (shouldCheckActive) {
updateLifecycle()
}
if (shouldReconfigureZoom) {
updateZoom()
}
if (shouldReconfigureTorch) {
updateTorch()
}
if (shouldUpdateOrientation) {
// TODO: updateOrientation()
}
} catch (e: Throwable) {
Log.e(TAG, "update() threw: ${e.message}")
invokeOnError(e)
}
}
private fun configureSession() {
try {
Log.i(TAG, "Configuring Camera Device...")
if (ContextCompat.checkSelfPermission(context, Manifest.permission.CAMERA) != PackageManager.PERMISSION_GRANTED) {
throw CameraPermissionError()
}
if (cameraId == null) {
throw NoCameraDeviceError()
}
if (format != null)
Log.i(TAG, "Configuring session with Camera ID $cameraId and custom format...")
else
Log.i(TAG, "Configuring session with Camera ID $cameraId and default format options...")
val cameraId = cameraId ?: throw NoCameraDeviceError()
// Used to bind the lifecycle of cameras to the lifecycle owner
val cameraProvider = ProcessCameraProvider.getInstance(reactContext).await()
val format = format
val targetVideoSize = if (format != null) Size(format.getInt("videoWidth"), format.getInt("videoHeight")) else null
val targetPhotoSize = if (format != null) Size(format.getInt("photoWidth"), format.getInt("photoHeight")) else null
// TODO: Allow previewSurface to be null/none
val previewSurface = previewSurface ?: return
var cameraSelector = CameraSelector.Builder().byID(cameraId!!).build()
if (targetVideoSize != null) skiaRenderer?.setInputSurfaceSize(targetVideoSize.width, targetVideoSize.height)
val tryEnableExtension: (suspend (extension: Int) -> Unit) = lambda@ { extension ->
if (extensionsManager == null) {
Log.i(TAG, "Initializing ExtensionsManager...")
extensionsManager = ExtensionsManager.getInstanceAsync(context, cameraProvider).await()
}
if (extensionsManager!!.isExtensionAvailable(cameraSelector, extension)) {
Log.i(TAG, "Enabling extension $extension...")
cameraSelector = extensionsManager!!.getExtensionEnabledCameraSelector(cameraSelector, extension)
} else {
Log.e(TAG, "Extension $extension is not available for the given Camera!")
throw when (extension) {
ExtensionMode.HDR -> HdrNotContainedInFormatError()
ExtensionMode.NIGHT -> LowLightBoostNotContainedInFormatError()
else -> Error("Invalid extension supplied! Extension $extension is not available.")
}
}
}
val previewOutput = CameraOutputs.PreviewOutput(previewSurface)
val photoOutput = if (photo == true) {
CameraOutputs.PhotoOutput(targetPhotoSize)
} else null
val videoOutput = if (video == true || enableFrameProcessor) {
CameraOutputs.VideoOutput(targetVideoSize, video == true, enableFrameProcessor, pixelFormat.toImageFormat())
} else null
val previewBuilder = Preview.Builder()
.setTargetRotation(inputRotation)
cameraSession.configureSession(cameraId, previewOutput, photoOutput, videoOutput)
} catch (e: Throwable) {
Log.e(TAG, "Failed to configure session: ${e.message}", e)
invokeOnError(e)
}
}
val imageCaptureBuilder = ImageCapture.Builder()
.setTargetRotation(outputRotation)
.setCaptureMode(ImageCapture.CAPTURE_MODE_MINIMIZE_LATENCY)
private fun configureFormat() {
cameraSession.configureFormat(fps, videoStabilizationMode, hdr, lowLightBoost)
}
val videoRecorderBuilder = Recorder.Builder()
.setExecutor(cameraExecutor)
private fun updateLifecycle() {
cameraSession.setIsActive(isActive && isAttachedToWindow)
}
val imageAnalysisBuilder = ImageAnalysis.Builder()
.setTargetRotation(outputRotation)
.setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
.setBackgroundExecutor(frameProcessorThread)
private fun updateZoom() {
cameraSession.setZoom(zoom)
}
if (format == null) {
// let CameraX automatically find best resolution for the target aspect ratio
Log.i(TAG, "No custom format has been set, CameraX will automatically determine best configuration...")
val aspectRatio = aspectRatio(previewView.height, previewView.width) // flipped because it's in sensor orientation.
previewBuilder.setTargetAspectRatio(aspectRatio)
imageCaptureBuilder.setTargetAspectRatio(aspectRatio)
// TODO: Aspect Ratio for Video Recorder?
imageAnalysisBuilder.setTargetAspectRatio(aspectRatio)
} else {
// User has selected a custom format={}. Use that
val format = DeviceFormat(format!!)
Log.i(TAG, "Using custom format - photo: ${format.photoSize}, video: ${format.videoSize} @ $fps FPS")
if (video == true) {
previewBuilder.setTargetResolution(format.videoSize)
} else {
previewBuilder.setTargetResolution(format.photoSize)
}
imageCaptureBuilder.setTargetResolution(format.photoSize)
imageAnalysisBuilder.setTargetResolution(format.photoSize)
// TODO: Ability to select resolution exactly depending on format? Just like on iOS...
when (min(format.videoSize.height, format.videoSize.width)) {
in 0..480 -> videoRecorderBuilder.setQualitySelector(QualitySelector.from(Quality.SD))
in 480..720 -> videoRecorderBuilder.setQualitySelector(QualitySelector.from(Quality.HD, FallbackStrategy.lowerQualityThan(Quality.HD)))
in 720..1080 -> videoRecorderBuilder.setQualitySelector(QualitySelector.from(Quality.FHD, FallbackStrategy.lowerQualityThan(Quality.FHD)))
in 1080..2160 -> videoRecorderBuilder.setQualitySelector(QualitySelector.from(Quality.UHD, FallbackStrategy.lowerQualityThan(Quality.UHD)))
in 2160..4320 -> videoRecorderBuilder.setQualitySelector(QualitySelector.from(Quality.HIGHEST, FallbackStrategy.lowerQualityThan(Quality.HIGHEST)))
}
fps?.let { fps ->
if (format.frameRateRanges.any { it.contains(fps) }) {
// Camera supports the given FPS (frame rate range)
val frameDuration = (1_000_000_000.0 / fps.toDouble()).toLong()
Log.i(TAG, "Setting AE_TARGET_FPS_RANGE to $fps-$fps, and SENSOR_FRAME_DURATION to $frameDuration")
Camera2Interop.Extender(previewBuilder)
.setCaptureRequestOption(CaptureRequest.CONTROL_AE_TARGET_FPS_RANGE, Range(fps, fps))
.setCaptureRequestOption(CaptureRequest.SENSOR_FRAME_DURATION, frameDuration)
// TODO: Frame Rate/FPS for Video Recorder?
} else {
throw FpsNotContainedInFormatError(fps)
}
}
if (hdr == true) {
tryEnableExtension(ExtensionMode.HDR)
}
if (lowLightBoost == true) {
tryEnableExtension(ExtensionMode.NIGHT)
}
}
// Unbind use cases before rebinding
videoCapture = null
imageCapture = null
imageAnalysis = null
cameraProvider.unbindAll()
// Bind use cases to camera
val useCases = ArrayList<UseCase>()
if (video == true) {
Log.i(TAG, "Adding VideoCapture use-case...")
val videoRecorder = videoRecorderBuilder.build()
videoCapture = VideoCapture.withOutput(videoRecorder)
videoCapture!!.targetRotation = outputRotation
useCases.add(videoCapture!!)
}
if (photo == true) {
if (fallbackToSnapshot) {
Log.i(TAG, "Tried to add photo use-case (`photo={true}`) but the Camera device only supports " +
"a single use-case at a time. Falling back to Snapshot capture.")
} else {
Log.i(TAG, "Adding ImageCapture use-case...")
imageCapture = imageCaptureBuilder.build()
useCases.add(imageCapture!!)
}
}
if (enableFrameProcessor) {
Log.i(TAG, "Adding ImageAnalysis use-case...")
imageAnalysis = imageAnalysisBuilder.build().apply {
setAnalyzer(cameraExecutor) { image ->
// Call JS Frame Processor
val frame = Frame(image)
frameProcessor?.call(frame)
// ...frame gets closed in FrameHostObject implementation via JS ref counting
}
}
useCases.add(imageAnalysis!!)
}
preview = previewBuilder.build()
Log.i(TAG, "Attaching ${useCases.size} use-cases...")
camera = cameraProvider.bindToLifecycle(this, cameraSelector, preview, *useCases.toTypedArray())
preview!!.setSurfaceProvider(previewView.surfaceProvider)
minZoom = camera!!.cameraInfo.zoomState.value?.minZoomRatio ?: 1f
maxZoom = camera!!.cameraInfo.zoomState.value?.maxZoomRatio ?: 1f
val duration = System.currentTimeMillis() - startTime
Log.i(TAG_PERF, "Session configured in $duration ms! Camera: ${camera!!}")
invokeOnInitialized()
} catch (exc: Throwable) {
Log.e(TAG, "Failed to configure session: ${exc.message}")
throw when (exc) {
is CameraError -> exc
is IllegalArgumentException -> {
if (exc.message?.contains("too many use cases") == true) {
ParallelVideoProcessingNotSupportedError(exc)
} else {
InvalidCameraDeviceError(exc)
}
}
else -> UnknownCameraError(exc)
}
private fun updateTorch() {
CoroutineScope(Dispatchers.Default).launch {
cameraSession.setTorchMode(torch == Torch.ON)
}
}
}