feat: Full Android rewrite (CameraX -> Camera2) (#1674)
* Nuke CameraX
* fix: Run View Finder on UI Thread
* Open Camera, set up Threads
* fix init
* Mirror if needed
* Try PreviewView
* Use max resolution
* Add `hardwareLevel` property
* Check if output type is supported
* Replace `frameRateRanges` with `minFps` and `maxFps`
* Remove `isHighestPhotoQualitySupported`
* Remove `colorSpace` - the native platforms will use the best / most accurate colorSpace by default anyways
* HDR
* Check from format
* fix
* Remove `supportsParallelVideoProcessing`
* Correctly return video/photo sizes on Android now. Finally
* Log all Device props
* Log if optimized usecase is used
* Cleanup
* Configure Camera Input only once
* Revert "Configure Camera Input only once" (reverts commit 0fd6c03f54c7566cb5592053720c4a8743aba92e)
* Extract Camera configuration
* Try to reconfigure all
* Hook based
* Properly set up `CameraSession`
* Delete unused
* fix: Fix recreate when outputs change
* Update NativePreviewView.kt
* Use callback for closing
* Catch CameraAccessException
* Finally got it stable
* Remove isMirrored
* Implement `takePhoto()`
* Add ExifInterface library
* Run findViewById on UI Thread
* Add Photo Output Surface to takePhoto
* Fix Video Stabilization Modes
* Optimize Imports
* More logs
* Update CameraSession.kt
* Close Image
* Use separate Executor in CameraQueue
* Delete hooks
* Use same Thread again
* If opened, call error
* Update CameraSession.kt
* Log HW level
* fix: Don't enable Stream Use Case if it's not 100% supported
* Move some stuff
* Cleanup PhotoOutputSynchronizer
* Try just open in suspend fun
* Some synchronization fixes
* fix logs
* Update CameraDevice+createCaptureSession.kt
* Update CameraDevice+createCaptureSession.kt
* fixes
* fix: Use Snapshot Template for speed capture prio
* Use PREVIEW template for repeating request
* Use `TEMPLATE_RECORD` if video use-case is attached
* Use `isRunning` flag
* Recreate session every time on active/inactive
* Lazily get values in capture session
* Stability
* Rebuild session if outputs change
* Set `didOutputsChange` back to false
* Capture first in lock
* Try
* kinda fix it? idk
* fix: Keep Outputs
* Refactor into single method
* Update CameraView.kt
* Use Enums for type safety
* Implement Orientation (I think)
* Move RefCount management to Java (Frame)
* Don't crash when dropping a Frame
* Prefer Devices with higher max resolution
* Prefer multi-cams
* Use FastImage for Media Page
* Return orientation in takePhoto()
* Load orientation from EXIF Data
* Add `isMirrored` props and documentation for PhotoFile
* fix: Return `not-determined` on Android
* Update CameraViewModule.kt
* chore: Upgrade packages
* fix: Fix Metro Config
* Cleanup config
* Properly mirror Images on save
* Prepare MediaRecorder
* Start/Stop MediaRecorder
* Remove `takeSnapshot()` - it no longer works on Android and never worked on iOS; users could use useFrameProcessor to take a Snapshot
* Use `MediaCodec`
* Move to `VideoRecording` class
* Cleanup Snapshot
* Create `SkiaPreviewView` hybrid class
* Create OpenGL context
* Create `SkiaPreviewView`
* Fix texture creation missing context
* Draw red frame
* Somehow get it working
* Add Skia CMake setup
* Start looping
* Init OpenGL
* Refactor into `SkiaRenderer`
* Cleanup PreviewSize
* Set up
* Only re-render UI if there is a new Frame
* Preview
* Fix init
* Try rendering Preview
* Update SkiaPreviewView.kt
* Log version
* Try using Skia (fail)
* Drawwwww!!!!!!!!!! 🎉
* Use Preview Size
* Clear first
* Refactor into SkiaRenderer
* Add `previewType: "none"` on iOS
* Simplify a lot
* Draw Camera? For some reason? I have no idea anymore
* Fix OpenGL errors
* Got it kinda working again?
* Actually draw Frame woah
* Clean up code
* Cleanup
* Update on main
* Synchronize render calls
* holy shit
* Update SkiaRenderer.cpp
* Update SkiaRenderer.cpp
* Refactor
* Update SkiaRenderer.cpp
* Check for `NO_INPUT_TEXTURE`
* Post & Wait
* Set input size
* Add Video back again
* Allow session without preview
* Convert JPEG to byte[]
* feat: Use `ImageReader` and use YUV Image Buffers in Skia Context (#1689)
* Try to pass YUV Buffers as Pixmaps
* Create pixmap!
* Clean up
* Render to preview
* Only render if we have an output surface
* Update SkiaRenderer.cpp
* Fix Y+U+V sampling code
* Cleanup
* Fix Semaphore 0
* Use 4:2:0 YUV again idk
* Update SkiaRenderer.h
* Set minSdk to 26
* Set surface
* Revert "Set minSdk to 26" (reverts commit c4085b7c16c628532e5c2d68cf7ed11c751d0b48)
* Set previewType
* feat: Video Recording with Camera2 (#1691)
* Rename
* Update CameraSession.kt
* Use `SurfaceHolder` instead of `SurfaceView` for output
* Update CameraOutputs.kt
* Update CameraSession.kt
* fix: Fix crash when Preview is null
* Check if snapshot capture is supported
* Update RecordingSession.kt
* S
* Use `MediaRecorder`
* Make audio optional
* Add Torch
* Output duration
* Update RecordingSession.kt
* Start RecordingSession
* logs
* More log
* Base for preparing pass-through Recording
* Use `ImageWriter` to append Images to the Recording Surface
* Stream PRIVATE GPU_SAMPLED_IMAGE Images
* Add flags
* Close session on stop
* Allow customizing `videoCodec` and `fileType`
* Enable Torch
* Fix Torch Mode
* Fix comparing outputs with hashCode
* Update CameraSession.kt
* Correctly pass along Frame Processor
* fix: Use AUDIO_BIT_RATE of 16 * 44,1Khz
* Use CAMCORDER instead of MIC microphone
* Use 1 channel
* fix: Use `Orientation`
* Add `native` PixelFormat
* Update iOS to latest Skia integration
* feat: Add `pixelFormat` property to Camera
* Catch error in configureSession
* Fix JPEG format
* Clean up best match finder
* Update CameraDeviceDetails.kt
* Clamp sizes by maximum CamcorderProfile size
* Remove `getAvailableVideoCodecs`
* chore: release 3.0.0-rc.5
* Use maximum video size of RECORD as default
* Update CameraDeviceDetails.kt
* Add a todo
* Add JSON device to issue report
* Prefer `full` devices and flash
* Lock to 30 FPS on Samsung
* Implement Zoom
* Refactor
* Format -> PixelFormat
* fix: Feat `pixelFormat` -> `pixelFormats`
* Update TROUBLESHOOTING.mdx
* Format
* fix: Implement `zoom` for Photo Capture
* fix: Don't run if `isActive` is `false`
* fix: Call `examplePlugin(frame)`
* fix: Fix Flash
* fix: Use `react-native-worklets-core`!
* fix: Fix import
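The change set is easiest to follow with the bare Camera2 flow in mind: open a `CameraDevice`, create a `CameraCaptureSession` for the output `Surface`s, then keep a repeating request running. A condensed Kotlin sketch of that flow (illustrative only, not code from this PR; `cameraId`, `previewSurface`, `cameraExecutor` and `cameraHandler` are assumed to exist, and the CAMERA permission is assumed to be granted):

```kotlin
// Minimal Camera2 happy path: open device -> create session -> repeating preview request.
cameraManager.openCamera(cameraId, object : CameraDevice.StateCallback() {
  override fun onOpened(device: CameraDevice) {
    val outputs = listOf(OutputConfiguration(previewSurface))
    val config = SessionConfiguration(SessionConfiguration.SESSION_REGULAR, outputs, cameraExecutor,
      object : CameraCaptureSession.StateCallback() {
        override fun onConfigured(session: CameraCaptureSession) {
          val request = device.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW)
            .apply { addTarget(previewSurface) }
            .build()
          // Stream frames continuously until the session is closed.
          session.setRepeatingRequest(request, null, null)
        }
        override fun onConfigureFailed(session: CameraCaptureSession) = Unit
      })
    device.createCaptureSession(config)
  }
  override fun onDisconnected(device: CameraDevice) = device.close()
  override fun onError(device: CameraDevice, error: Int) = device.close()
}, cameraHandler)
```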
@@ -6,6 +6,7 @@ set(PACKAGE_NAME "VisionCamera")
 set(BUILD_DIR ${CMAKE_SOURCE_DIR}/build)
 set(CMAKE_VERBOSE_MAKEFILE ON)
 set(CMAKE_CXX_STANDARD 17)
+set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -DSK_GL -DSK_GANESH -DSK_BUILD_FOR_ANDROID")

 # Folly
 include("${NODE_MODULES_DIR}/react-native/ReactAndroid/cmake-utils/folly-flags.cmake")
@@ -14,10 +15,9 @@ add_compile_options(${folly_FLAGS})
 # Third party libraries (Prefabs)
 find_package(ReactAndroid REQUIRED CONFIG)
 find_package(fbjni REQUIRED CONFIG)
-find_package(react-native-worklets REQUIRED CONFIG)
+find_package(react-native-worklets-core REQUIRED CONFIG)
 find_library(LOG_LIB log)

-
 set(RNSKIA_PATH ${NODE_MODULES_DIR}/@shopify/react-native-skia)
 if(EXISTS ${RNSKIA_PATH})
   find_package(shopify_react-native-skia REQUIRED CONFIG)
@@ -27,6 +27,14 @@ else()
   message("VisionCamera: Skia integration disabled!")
 ENDIF()

+set(SKIA_LIBS_PATH "${RNSKIA_PATH}/libs/android/${ANDROID_ABI}")
+add_library(skia STATIC IMPORTED)
+set_property(TARGET skia PROPERTY IMPORTED_LOCATION "${SKIA_LIBS_PATH}/libskia.a")
+add_library(svg STATIC IMPORTED)
+set_property(TARGET svg PROPERTY IMPORTED_LOCATION "${SKIA_LIBS_PATH}/libsvg.a")
+add_library(skshaper STATIC IMPORTED)
+set_property(TARGET skshaper PROPERTY IMPORTED_LOCATION "${SKIA_LIBS_PATH}/libskshaper.a")
+
 # Add react-native-vision-camera sources
 add_library(
   ${PACKAGE_NAME}
@@ -37,6 +45,7 @@ add_library(
   src/main/cpp/JSIJNIConversion.cpp
   src/main/cpp/VisionCamera.cpp
   src/main/cpp/VisionCameraProxy.cpp
+  src/main/cpp/skia/SkiaRenderer.cpp
   src/main/cpp/java-bindings/JFrame.cpp
   src/main/cpp/java-bindings/JFrameProcessor.cpp
   src/main/cpp/java-bindings/JFrameProcessorPlugin.cpp
@@ -62,6 +71,16 @@ target_include_directories(
   # just one directory. HOWEVER, skia itself uses relative paths in
   # their include statements, and so we have to include the path to skia)
   "${RNSKIA_PATH}/cpp/skia"
+
+  "${RNSKIA_PATH}/cpp/skia/include/config/"
+  "${RNSKIA_PATH}/cpp/skia/include/core/"
+  "${RNSKIA_PATH}/cpp/skia/include/effects/"
+  "${RNSKIA_PATH}/cpp/skia/include/utils/"
+  "${RNSKIA_PATH}/cpp/skia/include/pathops/"
+  "${RNSKIA_PATH}/cpp/skia/modules/"
+  # "${RNSKIA_PATH}/cpp/skia/modules/skparagraph/include/"
+  "${RNSKIA_PATH}/cpp/skia/include/"
+  "${RNSKIA_PATH}/cpp/skia"
 )

 # Link everything together
@@ -73,6 +92,12 @@ target_link_libraries(
   ReactAndroid::reactnativejni           # <-- RN: React Native JNI bindings
   ReactAndroid::folly_runtime            # <-- RN: For casting JSI <> Java objects
   fbjni::fbjni                           # <-- fbjni
-  react-native-worklets::rnworklets      # <-- RN Worklets
+  react-native-worklets-core::rnworklets # <-- RN Worklets
+  GLESv2                                 # <-- Optional: OpenGL (for Skia)
+  EGL                                    # <-- Optional: OpenGL (EGL) (for Skia)
   ${SKIA_PACKAGE}                        # <-- Optional: RN Skia
+  jnigraphics
+  skia
+  svg
+  skshaper
 )
@@ -142,22 +142,13 @@ dependencies {
   //noinspection GradleDynamicVersion
   implementation 'com.facebook.react:react-android:+'

-  implementation 'androidx.core:core-ktx:1.3.2'
+  implementation "androidx.core:core-ktx:1.3.2"
   implementation "org.jetbrains.kotlin:kotlin-stdlib:$kotlin_version"
   implementation "org.jetbrains.kotlinx:kotlinx-coroutines-guava:1.5.2"
   implementation "org.jetbrains.kotlinx:kotlinx-coroutines-android:1.5.2"
+  implementation "androidx.exifinterface:exifinterface:1.3.6"

-  implementation "androidx.camera:camera-core:1.1.0"
-  implementation "androidx.camera:camera-camera2:1.1.0"
-  implementation "androidx.camera:camera-lifecycle:1.1.0"
-  implementation "androidx.camera:camera-video:1.1.0"
-
-  implementation "androidx.camera:camera-view:1.1.0"
-  implementation "androidx.camera:camera-extensions:1.1.0"
-
-  implementation "androidx.exifinterface:exifinterface:1.3.3"

-  implementation project(":react-native-worklets")
+  implementation project(":react-native-worklets-core")
+  implementation project(":shopify_react-native-skia")
 }
@@ -8,7 +8,7 @@
 #include <fbjni/fbjni.h>
 #include <jni.h>

-#include <react-native-worklets/WKTJsiHostObject.h>
+#include <react-native-worklets-core/WKTJsiHostObject.h>
 #include "JSITypedArray.h"

 #include <vector>
@@ -18,7 +18,7 @@ namespace vision {

 using namespace facebook;

-FrameHostObject::FrameHostObject(const jni::alias_ref<JFrame::javaobject>& frame): frame(make_global(frame)), _refCount(0) { }
+FrameHostObject::FrameHostObject(const jni::alias_ref<JFrame::javaobject>& frame): frame(make_global(frame)) { }

 FrameHostObject::~FrameHostObject() {
   // Hermes' Garbage Collector (Hades GC) calls destructors on a separate Thread
@@ -37,6 +37,7 @@ std::vector<jsi::PropNameID> FrameHostObject::getPropertyNames(jsi::Runtime& rt)
   result.push_back(jsi::PropNameID::forUtf8(rt, std::string("orientation")));
   result.push_back(jsi::PropNameID::forUtf8(rt, std::string("isMirrored")));
   result.push_back(jsi::PropNameID::forUtf8(rt, std::string("timestamp")));
+  result.push_back(jsi::PropNameID::forUtf8(rt, std::string("pixelFormat")));
   // Conversion
   result.push_back(jsi::PropNameID::forUtf8(rt, std::string("toString")));
   result.push_back(jsi::PropNameID::forUtf8(rt, std::string("toArrayBuffer")));
@@ -94,8 +95,7 @@ jsi::Value FrameHostObject::get(jsi::Runtime& runtime, const jsi::PropNameID& pr
   if (name == "incrementRefCount") {
     auto incrementRefCount = JSI_HOST_FUNCTION_LAMBDA {
       // Increment retain count by one.
-      std::lock_guard lock(this->_refCountMutex);
-      this->_refCount++;
+      this->frame->incrementRefCount();
       return jsi::Value::undefined();
     };
     return jsi::Function::createFromHostFunction(runtime,
@@ -106,12 +106,8 @@ jsi::Value FrameHostObject::get(jsi::Runtime& runtime, const jsi::PropNameID& pr

   if (name == "decrementRefCount") {
     auto decrementRefCount = JSI_HOST_FUNCTION_LAMBDA {
-      // Decrement retain count by one. If the retain count is zero, we close the Frame.
-      std::lock_guard lock(this->_refCountMutex);
-      this->_refCount--;
-      if (_refCount < 1) {
-        this->frame->close();
-      }
+      // Decrement retain count by one. If the retain count is zero, the Frame gets closed.
+      this->frame->decrementRefCount();
       return jsi::Value::undefined();
     };
     return jsi::Function::createFromHostFunction(runtime,
@@ -136,6 +132,10 @@ jsi::Value FrameHostObject::get(jsi::Runtime& runtime, const jsi::PropNameID& pr
     auto string = this->frame->getOrientation();
     return jsi::String::createFromUtf8(runtime, string->toStdString());
   }
+  if (name == "pixelFormat") {
+    auto string = this->frame->getPixelFormat();
+    return jsi::String::createFromUtf8(runtime, string->toStdString());
+  }
   if (name == "timestamp") {
     return jsi::Value(static_cast<double>(this->frame->getTimestamp()));
   }
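The interesting change here is that the JS-facing `incrementRefCount`/`decrementRefCount` no longer maintain a counter on the C++ HostObject; they delegate to the Java `Frame` ("Move RefCount management to Java (Frame)" in the commit message). A plausible shape for that Java side, sketched in Kotlin (the actual Frame class is not part of this excerpt):

```kotlin
import android.media.Image

// Sketch only: a thread-safe retain count that closes the underlying Image
// once the last holder releases it.
class Frame(private val image: Image /*, timestamp, orientation, isMirrored, ... */) {
  private var refCount = 0

  @Synchronized
  fun incrementRefCount() {
    refCount++
  }

  @Synchronized
  fun decrementRefCount() {
    refCount--
    if (refCount <= 0) close() // last holder closes the underlying Image
  }

  fun close() {
    image.close()
  }
}
```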
@@ -9,7 +9,6 @@
 #include <fbjni/fbjni.h>
 #include <vector>
 #include <string>
-#include <mutex>

 #include "java-bindings/JFrame.h"

@@ -31,9 +30,6 @@ class JSI_EXPORT FrameHostObject : public jsi::HostObject {

 private:
  static auto constexpr TAG = "VisionCamera";
-
-  size_t _refCount;
-  std::mutex _refCountMutex;
 };

 } // namespace vision
@@ -4,6 +4,7 @@
 #include "java-bindings/JFrameProcessor.h"
 #include "java-bindings/JVisionCameraProxy.h"
 #include "VisionCameraProxy.h"
+#include "skia/SkiaRenderer.h"

 JNIEXPORT jint JNICALL JNI_OnLoad(JavaVM *vm, void *) {
   return facebook::jni::initialize(vm, [] {
@@ -11,5 +12,6 @@ JNIEXPORT jint JNICALL JNI_OnLoad(JavaVM *vm, void *) {
     vision::JFrameProcessor::registerNatives();
     vision::JVisionCameraProxy::registerNatives();
     vision::JVisionCameraScheduler::registerNatives();
+    vision::SkiaRenderer::registerNatives();
   });
 }
@@ -42,6 +42,11 @@ local_ref<JString> JFrame::getOrientation() const {
   return getOrientationMethod(self());
 }

+local_ref<JString> JFrame::getPixelFormat() const {
+  static const auto getPixelFormatMethod = getClass()->getMethod<JString()>("getPixelFormat");
+  return getPixelFormatMethod(self());
+}
+
 int JFrame::getPlanesCount() const {
   static const auto getPlanesCountMethod = getClass()->getMethod<jint()>("getPlanesCount");
   return getPlanesCountMethod(self());
@@ -57,6 +62,16 @@ local_ref<JArrayByte> JFrame::toByteArray() const {
   return toByteArrayMethod(self());
 }

+void JFrame::incrementRefCount() {
+  static const auto incrementRefCountMethod = getClass()->getMethod<void()>("incrementRefCount");
+  incrementRefCountMethod(self());
+}
+
+void JFrame::decrementRefCount() {
+  static const auto decrementRefCountMethod = getClass()->getMethod<void()>("decrementRefCount");
+  decrementRefCountMethod(self());
+}
+
 void JFrame::close() {
   static const auto closeMethod = getClass()->getMethod<void()>("close");
   closeMethod(self());
@@ -24,7 +24,10 @@ struct JFrame : public JavaClass<JFrame> {
   int getBytesPerRow() const;
   jlong getTimestamp() const;
   local_ref<JString> getOrientation() const;
+  local_ref<JString> getPixelFormat() const;
   local_ref<JArrayByte> toByteArray() const;
+  void incrementRefCount();
+  void decrementRefCount();
   void close();
 };
@@ -9,8 +9,8 @@
 #include <jni.h>
 #include <fbjni/fbjni.h>

-#include <react-native-worklets/WKTJsiWorklet.h>
-#include <react-native-worklets/WKTJsiHostObject.h>
+#include <react-native-worklets-core/WKTJsiWorklet.h>
+#include <react-native-worklets-core/WKTJsiHostObject.h>

 #include "JFrame.h"
 #include "FrameHostObject.h"
@@ -11,8 +11,8 @@
 #include <jsi/jsi.h>
 #include <react/jni/ReadableNativeMap.h>

-#include <react-native-worklets/WKTJsiWorklet.h>
-#include <react-native-worklets/WKTJsiWorkletContext.h>
+#include <react-native-worklets-core/WKTJsiWorklet.h>
+#include <react-native-worklets-core/WKTJsiWorkletContext.h>

 #include "FrameProcessorPluginHostObject.h"
@@ -6,7 +6,7 @@
 #include <fbjni/fbjni.h>
 #include <jsi/jsi.h>
-#include <react-native-worklets/WKTJsiWorkletContext.h>
+#include <react-native-worklets-core/WKTJsiWorkletContext.h>
 #include <react/jni/ReadableNativeMap.h>

 #include "JFrameProcessorPlugin.h"
android/src/main/cpp/skia/OpenGLError.h (new file, 26 lines)
@@ -0,0 +1,26 @@
//
// Created by Marc Rousavy on 09.08.23.
//

#pragma once

#include <string>
#include <stdexcept>
#include <GLES2/gl2.h>
#include <EGL/egl.h> // needed for EGLint, eglGetError() and EGL_SUCCESS below

namespace vision {

inline std::string getEglErrorIfAny() {
  EGLint error = glGetError();
  if (error != GL_NO_ERROR) return " Error: " + std::to_string(error);
  error = eglGetError();
  if (error != EGL_SUCCESS) return " Error: " + std::to_string(error);
  return "";
}

class OpenGLError: public std::runtime_error {
 public:
  explicit OpenGLError(const std::string&& message): std::runtime_error(message + getEglErrorIfAny()) {}
};

} // namespace vision
android/src/main/cpp/skia/SkiaRenderer.cpp (new file, 327 lines)
@@ -0,0 +1,327 @@
//
// Created by Marc Rousavy on 10.08.23.
//

#include "SkiaRenderer.h"
#include <android/log.h>
#include "OpenGLError.h"

#include <core/SkColorSpace.h>
#include <core/SkCanvas.h>
#include <core/SkYUVAPixmaps.h>

#include <gpu/gl/GrGLInterface.h>
#include <gpu/GrDirectContext.h>
#include <gpu/GrBackendSurface.h>
#include <gpu/ganesh/SkSurfaceGanesh.h>
#include <gpu/ganesh/SkImageGanesh.h>

#include <android/native_window_jni.h>
#include <android/surface_texture_jni.h>

// from <gpu/ganesh/gl/GrGLDefines.h>
#define GR_GL_TEXTURE_EXTERNAL 0x8D65
#define GR_GL_RGBA8 0x8058
#define ACTIVE_SURFACE_ID 0

namespace vision {

jni::local_ref<SkiaRenderer::jhybriddata> SkiaRenderer::initHybrid(jni::alias_ref<jhybridobject> javaPart) {
  return makeCxxInstance(javaPart);
}

SkiaRenderer::SkiaRenderer(const jni::alias_ref<jhybridobject>& javaPart) {
  _javaPart = jni::make_global(javaPart);

  __android_log_print(ANDROID_LOG_INFO, TAG, "Initializing SkiaRenderer...");

  _previewSurface = nullptr;
  _previewWidth = 0;
  _previewHeight = 0;
  _inputSurfaceTextureId = NO_INPUT_TEXTURE;
}

SkiaRenderer::~SkiaRenderer() {
  if (_glDisplay != EGL_NO_DISPLAY) {
    eglMakeCurrent(_glDisplay, EGL_NO_SURFACE, EGL_NO_SURFACE, EGL_NO_CONTEXT);
    if (_glSurface != EGL_NO_SURFACE) {
      __android_log_print(ANDROID_LOG_INFO, TAG, "Destroying OpenGL Surface...");
      eglDestroySurface(_glDisplay, _glSurface);
      _glSurface = EGL_NO_SURFACE;
    }
    if (_glContext != EGL_NO_CONTEXT) {
      __android_log_print(ANDROID_LOG_INFO, TAG, "Destroying OpenGL Context...");
      eglDestroyContext(_glDisplay, _glContext);
      _glContext = EGL_NO_CONTEXT;
    }
    __android_log_print(ANDROID_LOG_INFO, TAG, "Destroying OpenGL Display...");
    eglTerminate(_glDisplay);
    _glDisplay = EGL_NO_DISPLAY;
  }
  if (_skiaContext != nullptr) {
    _skiaContext->abandonContext();
    _skiaContext = nullptr;
  }
  destroyOutputSurface();
}

void SkiaRenderer::ensureOpenGL(ANativeWindow* surface) {
  bool successful;
  // EGLDisplay
  if (_glDisplay == EGL_NO_DISPLAY) {
    __android_log_print(ANDROID_LOG_INFO, TAG, "Initializing EGLDisplay..");
    _glDisplay = eglGetDisplay(EGL_DEFAULT_DISPLAY);
    if (_glDisplay == EGL_NO_DISPLAY) throw OpenGLError("Failed to get default OpenGL Display!");

    EGLint major;
    EGLint minor;
    successful = eglInitialize(_glDisplay, &major, &minor);
    if (!successful) throw OpenGLError("Failed to initialize OpenGL!");
  }

  // EGLConfig
  if (_glConfig == nullptr) {
    __android_log_print(ANDROID_LOG_INFO, TAG, "Initializing EGLConfig..");
    EGLint attributes[] = {EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
                           EGL_SURFACE_TYPE, EGL_WINDOW_BIT,
                           EGL_ALPHA_SIZE, 8,
                           EGL_BLUE_SIZE, 8,
                           EGL_GREEN_SIZE, 8,
                           EGL_RED_SIZE, 8,
                           EGL_DEPTH_SIZE, 0,
                           EGL_STENCIL_SIZE, 0,
                           EGL_NONE};
    EGLint numConfigs;
    successful = eglChooseConfig(_glDisplay, attributes, &_glConfig, 1, &numConfigs);
    if (!successful || numConfigs == 0) throw OpenGLError("Failed to choose OpenGL config!");
  }

  // EGLContext
  if (_glContext == EGL_NO_CONTEXT) {
    __android_log_print(ANDROID_LOG_INFO, TAG, "Initializing EGLContext..");
    EGLint contextAttributes[] = {EGL_CONTEXT_CLIENT_VERSION, 2, EGL_NONE};
    _glContext = eglCreateContext(_glDisplay, _glConfig, nullptr, contextAttributes);
    if (_glContext == EGL_NO_CONTEXT) throw OpenGLError("Failed to create OpenGL context!");
  }

  // EGLSurface
  if (_glSurface == EGL_NO_SURFACE) {
    __android_log_print(ANDROID_LOG_INFO, TAG, "Initializing EGLSurface..");
    _glSurface = eglCreateWindowSurface(_glDisplay, _glConfig, surface, nullptr);
    _skiaContext = GrDirectContext::MakeGL();
  }

  successful = eglMakeCurrent(_glDisplay, _glSurface, _glSurface, _glContext);
  if (!successful || eglGetError() != EGL_SUCCESS) throw OpenGLError("Failed to use current OpenGL context!");
}

void SkiaRenderer::setOutputSurface(jobject previewSurface) {
  __android_log_print(ANDROID_LOG_INFO, TAG, "Setting Output Surface..");
  destroyOutputSurface();

  _previewSurface = ANativeWindow_fromSurface(jni::Environment::current(), previewSurface);
  _glSurface = EGL_NO_SURFACE;
}

void SkiaRenderer::destroyOutputSurface() {
  __android_log_print(ANDROID_LOG_INFO, TAG, "Destroying Output Surface..");
  if (_glSurface != EGL_NO_SURFACE) {
    eglDestroySurface(_glDisplay, _glSurface);
    _glSurface = EGL_NO_SURFACE;
    if (_skiaContext != nullptr) {
      _skiaContext->abandonContext();
      _skiaContext = nullptr;
    }
  }
  if (_previewSurface != nullptr) {
    ANativeWindow_release(_previewSurface);
    _previewSurface = nullptr;
  }
}

void SkiaRenderer::setOutputSurfaceSize(int width, int height) {
  _previewWidth = width;
  _previewHeight = height;
}

void SkiaRenderer::setInputTextureSize(int width, int height) {
  _inputWidth = width;
  _inputHeight = height;
}

void SkiaRenderer::renderLatestFrameToPreview() {
  __android_log_print(ANDROID_LOG_INFO, TAG, "renderLatestFrameToPreview()");
  if (_previewSurface == nullptr) {
    throw std::runtime_error("Cannot render latest frame to preview without a preview surface! "
                             "renderLatestFrameToPreview() needs to be called after setPreviewSurface().");
  }
  if (_inputSurfaceTextureId == NO_INPUT_TEXTURE) {
    throw std::runtime_error("Cannot render latest frame to preview without an input texture! "
                             "renderLatestFrameToPreview() needs to be called after prepareInputTexture().");
  }
  ensureOpenGL(_previewSurface);

  if (_skiaContext == nullptr) {
    _skiaContext = GrDirectContext::MakeGL();
  }
  _skiaContext->resetContext();

  GrGLTextureInfo textureInfo {
    // OpenGL will automatically convert YUV -> RGB because it's an EXTERNAL texture
    .fTarget = GR_GL_TEXTURE_EXTERNAL,
    .fID = _inputSurfaceTextureId,
    .fFormat = GR_GL_RGBA8,
    .fProtected = skgpu::Protected::kNo,
  };
  GrBackendTexture texture(_inputWidth,
                           _inputHeight,
                           GrMipMapped::kNo,
                           textureInfo);
  sk_sp<SkImage> frame = SkImages::AdoptTextureFrom(_skiaContext.get(),
                                                    texture,
                                                    kTopLeft_GrSurfaceOrigin,
                                                    kN32_SkColorType,
                                                    kOpaque_SkAlphaType);

  GrGLFramebufferInfo fboInfo {
    // FBO #0 is the currently active OpenGL Surface (eglMakeCurrent)
    .fFBOID = ACTIVE_SURFACE_ID,
    .fFormat = GR_GL_RGBA8,
    .fProtected = skgpu::Protected::kNo,
  };
  GrBackendRenderTarget renderTarget(_previewWidth,
                                     _previewHeight,
                                     0,
                                     8,
                                     fboInfo);
  SkSurfaceProps props(0, kUnknown_SkPixelGeometry);
  sk_sp<SkSurface> surface = SkSurfaces::WrapBackendRenderTarget(_skiaContext.get(),
                                                                 renderTarget,
                                                                 kTopLeft_GrSurfaceOrigin,
                                                                 kN32_SkColorType,
                                                                 nullptr,
                                                                 &props);

  __android_log_print(ANDROID_LOG_INFO, TAG, "Rendering %ix%i Frame to %ix%i Preview..", frame->width(), frame->height(), surface->width(), surface->height());

  auto canvas = surface->getCanvas();

  canvas->clear(SkColors::kBlack);

  auto duration = std::chrono::system_clock::now().time_since_epoch();
  auto millis = std::chrono::duration_cast<std::chrono::milliseconds>(duration).count();

  canvas->drawImage(frame, 0, 0);

  // TODO: Run Skia Frame Processor
  auto rect = SkRect::MakeXYWH(150, 250, millis % 3000 / 10, millis % 3000 / 10);
  auto paint = SkPaint();
  paint.setColor(SkColors::kRed);
  canvas->drawRect(rect, paint);

  // Flush
  canvas->flush();

  bool successful = eglSwapBuffers(_glDisplay, _glSurface);
  if (!successful || eglGetError() != EGL_SUCCESS) throw OpenGLError("Failed to swap OpenGL buffers!");
}

void SkiaRenderer::renderCameraFrameToOffscreenCanvas(jni::alias_ref<jni::JByteBuffer> yBuffer,
                                                      jni::alias_ref<jni::JByteBuffer> uBuffer,
                                                      jni::alias_ref<jni::JByteBuffer> vBuffer) {
  __android_log_print(ANDROID_LOG_INFO, TAG, "Begin render...");
  ensureOpenGL(_previewSurface);
  if (_skiaContext == nullptr) {
    _skiaContext = GrDirectContext::MakeGL();
  }
  _skiaContext->resetContext();

  // See https://en.wikipedia.org/wiki/Chroma_subsampling - we're in 4:2:0
  size_t bytesPerRow = sizeof(uint8_t) * _inputWidth;

  SkImageInfo yInfo = SkImageInfo::MakeA8(_inputWidth, _inputHeight);
  SkPixmap yPixmap(yInfo, yBuffer->getDirectAddress(), bytesPerRow);

  SkImageInfo uInfo = SkImageInfo::MakeA8(_inputWidth / 2, _inputHeight / 2);
  SkPixmap uPixmap(uInfo, uBuffer->getDirectAddress(), bytesPerRow / 2);

  SkImageInfo vInfo = SkImageInfo::MakeA8(_inputWidth / 2, _inputHeight / 2);
  SkPixmap vPixmap(vInfo, vBuffer->getDirectAddress(), bytesPerRow / 2);

  SkYUVAInfo info(SkISize::Make(_inputWidth, _inputHeight),
                  SkYUVAInfo::PlaneConfig::kY_U_V,
                  SkYUVAInfo::Subsampling::k420,
                  SkYUVColorSpace::kRec709_Limited_SkYUVColorSpace);
  SkPixmap externalPixmaps[3] = { yPixmap, uPixmap, vPixmap };
  SkYUVAPixmaps pixmaps = SkYUVAPixmaps::FromExternalPixmaps(info, externalPixmaps);

  sk_sp<SkImage> image = SkImages::TextureFromYUVAPixmaps(_skiaContext.get(), pixmaps);

  GrGLFramebufferInfo fboInfo {
    // FBO #0 is the currently active OpenGL Surface (eglMakeCurrent)
    .fFBOID = ACTIVE_SURFACE_ID,
    .fFormat = GR_GL_RGBA8,
    .fProtected = skgpu::Protected::kNo,
  };
  GrBackendRenderTarget renderTarget(_previewWidth,
                                     _previewHeight,
                                     0,
                                     8,
                                     fboInfo);
  SkSurfaceProps props(0, kUnknown_SkPixelGeometry);
  sk_sp<SkSurface> surface = SkSurfaces::WrapBackendRenderTarget(_skiaContext.get(),
                                                                 renderTarget,
                                                                 kTopLeft_GrSurfaceOrigin,
                                                                 kN32_SkColorType,
                                                                 nullptr,
                                                                 &props);

  auto canvas = surface->getCanvas();

  canvas->clear(SkColors::kBlack);

  auto duration = std::chrono::system_clock::now().time_since_epoch();
  auto millis = std::chrono::duration_cast<std::chrono::milliseconds>(duration).count();

  canvas->drawImage(image, 0, 0);

  // TODO: Run Skia Frame Processor
  auto rect = SkRect::MakeXYWH(150, 250, millis % 3000 / 10, millis % 3000 / 10);
  auto paint = SkPaint();
  paint.setColor(SkColors::kRed);
  canvas->drawRect(rect, paint);

  // Flush
  canvas->flush();

  bool successful = eglSwapBuffers(_glDisplay, _glSurface);
  if (!successful || eglGetError() != EGL_SUCCESS) throw OpenGLError("Failed to swap OpenGL buffers!");

  __android_log_print(ANDROID_LOG_INFO, TAG, "Rendered!");
}

void SkiaRenderer::registerNatives() {
  registerHybrid({
    makeNativeMethod("initHybrid", SkiaRenderer::initHybrid),
    makeNativeMethod("setInputTextureSize", SkiaRenderer::setInputTextureSize),
    makeNativeMethod("setOutputSurface", SkiaRenderer::setOutputSurface),
    makeNativeMethod("destroyOutputSurface", SkiaRenderer::destroyOutputSurface),
    makeNativeMethod("setOutputSurfaceSize", SkiaRenderer::setOutputSurfaceSize),
    makeNativeMethod("renderLatestFrameToPreview", SkiaRenderer::renderLatestFrameToPreview),
    makeNativeMethod("renderCameraFrameToOffscreenCanvas", SkiaRenderer::renderCameraFrameToOffscreenCanvas),
  });
}

} // namespace vision
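On the Kotlin side, a `YUV_420_888` Image from the `ImageReader` maps directly onto those three pixmaps. One plausible call-site, for illustration only (the real one is not in this excerpt):

```kotlin
import android.graphics.ImageFormat
import android.media.Image

// Hypothetical call-site: feed a YUV_420_888 Image into the renderer.
// Assumes planar output (pixelStride == 1), matching the renderer's
// tightly-packed SkPixmap row strides above.
fun onCameraFrame(image: Image, renderer: SkiaRenderer) {
  require(image.format == ImageFormat.YUV_420_888)
  val y = image.planes[0].buffer // full-resolution luma
  val u = image.planes[1].buffer // quarter-size chroma (4:2:0)
  val v = image.planes[2].buffer // quarter-size chroma (4:2:0)
  renderer.renderCameraFrameToOffscreenCanvas(y, u, v)
  image.close()
}
```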
android/src/main/cpp/skia/SkiaRenderer.h (new file, 77 lines)
@@ -0,0 +1,77 @@
//
// Created by Marc Rousavy on 10.08.23.
//

#pragma once

#include <jni.h>
#include <fbjni/fbjni.h>
#include <fbjni/ByteBuffer.h>

#include <GLES2/gl2.h>
#include <EGL/egl.h>
#include <include/core/SkSurface.h>
#include <android/native_window.h>

namespace vision {

using namespace facebook;

#define NO_INPUT_TEXTURE 7654321

class SkiaRenderer: public jni::HybridClass<SkiaRenderer> {
 // JNI Stuff
 public:
  static auto constexpr kJavaDescriptor = "Lcom/mrousavy/camera/skia/SkiaRenderer;";
  static void registerNatives();

 private:
  friend HybridBase;
  jni::global_ref<SkiaRenderer::javaobject> _javaPart;
  explicit SkiaRenderer(const jni::alias_ref<jhybridobject>& javaPart);

 public:
  static jni::local_ref<jhybriddata> initHybrid(jni::alias_ref<jhybridobject> javaPart);
  ~SkiaRenderer();

 private:
  // Input Texture (Camera)
  void setInputTextureSize(int width, int height);
  // Output Surface (Preview)
  void setOutputSurface(jobject previewSurface);
  void destroyOutputSurface();
  void setOutputSurfaceSize(int width, int height);

  /**
   * Renders the latest Camera Frame from the Input Texture onto the Preview Surface. (60 FPS)
   */
  void renderLatestFrameToPreview();
  /**
   * Renders the latest Camera Frame into its Input Texture and runs the Skia Frame Processor (1..240 FPS)
   */
  void renderCameraFrameToOffscreenCanvas(jni::alias_ref<jni::JByteBuffer> yBuffer,
                                          jni::alias_ref<jni::JByteBuffer> uBuffer,
                                          jni::alias_ref<jni::JByteBuffer> vBuffer);

 private:
  // OpenGL Context
  EGLContext _glContext = EGL_NO_CONTEXT;
  EGLDisplay _glDisplay = EGL_NO_DISPLAY;
  EGLSurface _glSurface = EGL_NO_SURFACE;
  EGLConfig _glConfig = nullptr;
  // Skia Context
  sk_sp<GrDirectContext> _skiaContext;

  // Input Texture (Camera/Offscreen)
  GLuint _inputSurfaceTextureId = NO_INPUT_TEXTURE;
  int _inputWidth, _inputHeight;
  // Output Texture (Surface/Preview)
  ANativeWindow* _previewSurface;
  int _previewWidth, _previewHeight;

  void ensureOpenGL(ANativeWindow* surface);

  static auto constexpr TAG = "SkiaRenderer";
};

} // namespace vision
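The `kJavaDescriptor` and the registered natives imply a Kotlin/Java peer class roughly like this (assumed shape only; the real class is not in this diff):

```kotlin
package com.mrousavy.camera.skia

import android.view.Surface
import com.facebook.jni.HybridData
import com.facebook.proguard.annotations.DoNotStrip
import java.nio.ByteBuffer

// Sketch of the fbjni peer: holds the C++ instance via HybridData and
// declares the native methods registered in SkiaRenderer::registerNatives().
@Suppress("KotlinJniMissingFunction")
class SkiaRenderer {
  @DoNotStrip
  private val mHybridData: HybridData = initHybrid()

  private external fun initHybrid(): HybridData

  external fun setInputTextureSize(width: Int, height: Int)
  external fun setOutputSurface(surface: Surface)
  external fun destroyOutputSurface()
  external fun setOutputSurfaceSize(width: Int, height: Int)
  external fun renderLatestFrameToPreview()
  external fun renderCameraFrameToOffscreenCanvas(yBuffer: ByteBuffer, uBuffer: ByteBuffer, vBuffer: ByteBuffer)
}
```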
android/src/main/java/com/mrousavy/camera/CameraQueues.kt (new file, 36 lines)
@@ -0,0 +1,36 @@
package com.mrousavy.camera

import android.os.Handler
import android.os.HandlerThread
import kotlinx.coroutines.CoroutineDispatcher
import kotlinx.coroutines.android.asCoroutineDispatcher
import kotlinx.coroutines.asExecutor
import java.util.concurrent.Executor

class CameraQueues {
  companion object {
    val cameraQueue = CameraQueue("mrousavy/VisionCamera.main")
    val videoQueue = CameraQueue("mrousavy/VisionCamera.video")
    val previewQueue = CameraQueue("mrousavy/VisionCamera.preview")
  }

  class CameraQueue(name: String) {
    val handler: Handler
    private val thread: HandlerThread
    val executor: Executor
    val coroutineDispatcher: CoroutineDispatcher

    init {
      thread = HandlerThread(name)
      thread.start()
      handler = Handler(thread.looper)
      coroutineDispatcher = handler.asCoroutineDispatcher(name)
      executor = coroutineDispatcher.asExecutor()
    }

    protected fun finalize() {
      thread.quitSafely()
    }
  }
}
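Usage, with hypothetical call-sites for illustration: Camera2 callback APIs take `handler`, executor-based APIs take `executor`, and coroutines dispatch onto the same `HandlerThread` via `coroutineDispatcher`, so camera state is only ever touched from one thread:

```kotlin
import kotlinx.coroutines.CoroutineScope
import kotlinx.coroutines.launch

fun example() {
  val queue = CameraQueues.cameraQueue

  // Camera2 callback APIs take the Handler:
  //   cameraManager.openCamera(cameraId, stateCallback, queue.handler)
  // Executor-based APIs (SessionConfiguration etc.) take queue.executor.

  // Coroutines dispatch onto the very same HandlerThread:
  CoroutineScope(queue.coroutineDispatcher).launch {
    // open the camera, build the session, start the repeating request, ...
  }
}
```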
android/src/main/java/com/mrousavy/camera/CameraSession.kt (new file, 478 lines)
@@ -0,0 +1,478 @@
package com.mrousavy.camera

import android.content.Context
import android.graphics.Rect
import android.hardware.camera2.CameraCaptureSession
import android.hardware.camera2.CameraCharacteristics
import android.hardware.camera2.CameraDevice
import android.hardware.camera2.CameraManager
import android.hardware.camera2.CaptureRequest
import android.hardware.camera2.CaptureResult
import android.hardware.camera2.TotalCaptureResult
import android.media.Image
import android.os.Build
import android.util.Log
import android.util.Range
import android.util.Size
import com.mrousavy.camera.extensions.SessionType
import com.mrousavy.camera.extensions.capture
import com.mrousavy.camera.extensions.createCaptureSession
import com.mrousavy.camera.extensions.createPhotoCaptureRequest
import com.mrousavy.camera.extensions.openCamera
import com.mrousavy.camera.extensions.tryClose
import com.mrousavy.camera.extensions.zoomed
import com.mrousavy.camera.frameprocessor.Frame
import com.mrousavy.camera.frameprocessor.FrameProcessor
import com.mrousavy.camera.parsers.CameraDeviceError
import com.mrousavy.camera.parsers.Flash
import com.mrousavy.camera.parsers.Orientation
import com.mrousavy.camera.parsers.QualityPrioritization
import com.mrousavy.camera.parsers.VideoCodec
import com.mrousavy.camera.parsers.VideoFileType
import com.mrousavy.camera.parsers.VideoStabilizationMode
import com.mrousavy.camera.utils.PhotoOutputSynchronizer
import com.mrousavy.camera.utils.RecordingSession
import com.mrousavy.camera.utils.outputs.CameraOutputs
import kotlinx.coroutines.CoroutineScope
import kotlinx.coroutines.launch
import kotlinx.coroutines.sync.Mutex
import kotlinx.coroutines.sync.withLock
import java.io.Closeable
import java.lang.IllegalArgumentException
import java.util.concurrent.CancellationException
import kotlin.coroutines.CoroutineContext
import kotlin.math.min

// TODO: Use reprocessable YUV capture session for more efficient Skia Frame Processing

class CameraSession(private val context: Context,
                    private val cameraManager: CameraManager,
                    private val onInitialized: () -> Unit,
                    private val onError: (e: Throwable) -> Unit): CoroutineScope, Closeable, CameraOutputs.Callback, CameraManager.AvailabilityCallback() {
  companion object {
    private const val TAG = "CameraSession"
  }

  data class CapturedPhoto(val image: Image,
                           val metadata: TotalCaptureResult,
                           val orientation: Orientation,
                           val isMirrored: Boolean,
                           val format: Int): Closeable {
    override fun close() {
      image.close()
    }
  }

  // setInput(..)
  private var cameraId: String? = null

  // setOutputs(..)
  private var outputs: CameraOutputs? = null

  // setIsActive(..)
  private var isActive = false

  // configureFormat(..)
  private var fps: Int? = null
  private var videoStabilizationMode: VideoStabilizationMode? = null
  private var lowLightBoost: Boolean? = null
  private var hdr: Boolean? = null

  // zoom(..)
  private var zoom: Float = 1.0f

  private var captureSession: CameraCaptureSession? = null
  private var cameraDevice: CameraDevice? = null
  private val photoOutputSynchronizer = PhotoOutputSynchronizer()
  private val mutex = Mutex()
  private var isRunning = false
  private var enableTorch = false
  private var recording: RecordingSession? = null
  private var frameProcessor: FrameProcessor? = null

  override val coroutineContext: CoroutineContext = CameraQueues.cameraQueue.coroutineDispatcher

  init {
    cameraManager.registerAvailabilityCallback(this, CameraQueues.cameraQueue.handler)
  }

  override fun close() {
    cameraManager.unregisterAvailabilityCallback(this)
    photoOutputSynchronizer.clear()
    captureSession?.close()
    cameraDevice?.tryClose()
    outputs?.close()
    isRunning = false
  }

  val orientation: Orientation
    get() {
      val cameraId = cameraId ?: return Orientation.PORTRAIT
      val characteristics = cameraManager.getCameraCharacteristics(cameraId)
      val sensorRotation = characteristics.get(CameraCharacteristics.SENSOR_ORIENTATION) ?: 0
      return Orientation.fromRotationDegrees(sensorRotation)
    }

  fun configureSession(cameraId: String,
                       preview: CameraOutputs.PreviewOutput? = null,
                       photo: CameraOutputs.PhotoOutput? = null,
                       video: CameraOutputs.VideoOutput? = null) {
    Log.i(TAG, "Configuring Session for Camera $cameraId...")
    val outputs = CameraOutputs(cameraId,
                                cameraManager,
                                preview,
                                photo,
                                video,
                                this)
    if (this.cameraId == cameraId && this.outputs == outputs && isActive == isRunning) {
      Log.i(TAG, "Nothing changed in configuration, canceling..")
    }

    this.cameraId = cameraId
    this.outputs = outputs
    launch {
      startRunning()
    }
  }

  fun configureFormat(fps: Int? = null,
                      videoStabilizationMode: VideoStabilizationMode? = null,
                      hdr: Boolean? = null,
                      lowLightBoost: Boolean? = null) {
    Log.i(TAG, "Setting Format (fps: $fps | videoStabilization: $videoStabilizationMode | hdr: $hdr | lowLightBoost: $lowLightBoost)...")
    this.fps = fps
    this.videoStabilizationMode = videoStabilizationMode
    this.hdr = hdr
    this.lowLightBoost = lowLightBoost
    launch {
      startRunning()
    }
  }

  /**
   * Starts or stops the Camera.
   */
  fun setIsActive(isActive: Boolean) {
    Log.i(TAG, "Setting isActive: $isActive (isRunning: $isRunning)")
    this.isActive = isActive
    if (isActive == isRunning) return

    launch {
      if (isActive) {
        startRunning()
      } else {
        stopRunning()
      }
    }
  }

  fun setFrameProcessor(frameProcessor: FrameProcessor?) {
    this.frameProcessor = frameProcessor
  }

  suspend fun takePhoto(qualityPrioritization: QualityPrioritization,
                        flashMode: Flash,
                        enableRedEyeReduction: Boolean,
                        enableAutoStabilization: Boolean,
                        outputOrientation: Orientation): CapturedPhoto {
    val captureSession = captureSession ?: throw CameraNotReadyError()
    val outputs = outputs ?: throw CameraNotReadyError()

    val photoOutput = outputs.photoOutput ?: throw PhotoNotEnabledError()

    val cameraCharacteristics = cameraManager.getCameraCharacteristics(captureSession.device.id)
    val orientation = outputOrientation.toSensorRelativeOrientation(cameraCharacteristics)
    val captureRequest = captureSession.device.createPhotoCaptureRequest(cameraManager,
                                                                         photoOutput.surface,
                                                                         zoom,
                                                                         qualityPrioritization,
                                                                         flashMode,
                                                                         enableRedEyeReduction,
                                                                         enableAutoStabilization,
                                                                         orientation)
    Log.i(TAG, "Photo capture 0/2 - starting capture...")
    val result = captureSession.capture(captureRequest)
    val timestamp = result[CaptureResult.SENSOR_TIMESTAMP]!!
    Log.i(TAG, "Photo capture 1/2 complete - received metadata with timestamp $timestamp")
    try {
      val image = photoOutputSynchronizer.await(timestamp)

      val isMirrored = cameraCharacteristics.get(CameraCharacteristics.LENS_FACING) == CameraCharacteristics.LENS_FACING_FRONT

      Log.i(TAG, "Photo capture 2/2 complete - received ${image.width} x ${image.height} image.")
      return CapturedPhoto(image, result, orientation, isMirrored, image.format)
    } catch (e: CancellationException) {
      throw CaptureAbortedError(false)
    }
  }
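`takePhoto()` relies on `PhotoOutputSynchronizer` to pair the capture metadata (keyed by `SENSOR_TIMESTAMP`) with the `Image` that arrives asynchronously via `onPhotoCaptured`. The class itself is not part of this diff; a minimal sketch of how it could work:

```kotlin
import android.media.Image
import kotlinx.coroutines.CompletableDeferred

// Sketch only: matches a capture's sensor timestamp with the Image that the
// ImageReader delivers later, whichever side arrives first.
class PhotoOutputSynchronizer {
  private val photoOutputQueue = HashMap<Long, CompletableDeferred<Image>>()

  private operator fun get(key: Long): CompletableDeferred<Image> =
    synchronized(this) {
      photoOutputQueue.getOrPut(key) { CompletableDeferred() }
    }

  suspend fun await(timestamp: Long): Image {
    val image = this[timestamp].await()
    synchronized(this) { photoOutputQueue.remove(timestamp) }
    return image
  }

  fun set(timestamp: Long, image: Image) {
    this[timestamp].complete(image)
  }

  fun clear() {
    synchronized(this) {
      photoOutputQueue.forEach { it.value.cancel() }
      photoOutputQueue.clear()
    }
  }
}
```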

  override fun onPhotoCaptured(image: Image) {
    Log.i(CameraView.TAG, "Photo captured! ${image.width} x ${image.height}")
    photoOutputSynchronizer.set(image.timestamp, image)
  }

  override fun onVideoFrameCaptured(image: Image) {
    // TODO: Correctly get orientation and everything
    val frame = Frame(image, System.currentTimeMillis(), Orientation.PORTRAIT, false)
    frame.incrementRefCount()

    // Call (Skia-) Frame Processor
    frameProcessor?.call(frame)

    // Write Image to the Recording
    recording?.appendImage(image)

    frame.decrementRefCount()
  }

  suspend fun startRecording(enableAudio: Boolean,
                             codec: VideoCodec,
                             fileType: VideoFileType,
                             callback: (video: RecordingSession.Video) -> Unit) {
    mutex.withLock {
      if (recording != null) throw RecordingInProgressError()
      val outputs = outputs ?: throw CameraNotReadyError()
      val videoOutput = outputs.videoOutput ?: throw VideoNotEnabledError()

      val recording = RecordingSession(context, enableAudio, videoOutput.size, fps, codec, orientation, fileType, callback)
      recording.start()
      this.recording = recording
    }
  }

  suspend fun stopRecording() {
    mutex.withLock {
      val recording = recording ?: throw NoRecordingInProgressError()

      recording.stop()
      this.recording = null
    }
  }

  suspend fun pauseRecording() {
    mutex.withLock {
      val recording = recording ?: throw NoRecordingInProgressError()
      recording.pause()
    }
  }

  suspend fun resumeRecording() {
    mutex.withLock {
      val recording = recording ?: throw NoRecordingInProgressError()
      recording.resume()
    }
  }

  suspend fun setTorchMode(enableTorch: Boolean) {
    if (this.enableTorch != enableTorch) {
      this.enableTorch = enableTorch
      startRunning()
    }
  }

  fun setZoom(zoom: Float) {
    if (this.zoom != zoom) {
      this.zoom = zoom
      launch {
        startRunning()
      }
    }
  }

  override fun onCameraAvailable(cameraId: String) {
    super.onCameraAvailable(cameraId)
    Log.i(TAG, "Camera became available: $cameraId")
  }

  override fun onCameraUnavailable(cameraId: String) {
    super.onCameraUnavailable(cameraId)
    Log.i(TAG, "Camera became un-available: $cameraId")
  }

  /**
   * Opens a [CameraDevice]. If there already is an open Camera for the given [cameraId], use that.
   */
  private suspend fun getCameraDevice(cameraId: String, onClosed: (error: Throwable) -> Unit): CameraDevice {
    val currentDevice = cameraDevice
    if (currentDevice?.id == cameraId) {
      // We already opened that device
      return currentDevice
    }
    // Close previous device
    cameraDevice?.tryClose()
    cameraDevice = null

    val device = cameraManager.openCamera(cameraId, { camera, reason ->
      Log.d(TAG, "Camera Closed ($cameraDevice == $camera)")
      if (cameraDevice == camera) {
        // The current CameraDevice has been closed, handle that!
        onClosed(reason)
        cameraDevice = null
      } else {
        // A new CameraDevice has been opened, we don't care about this one anymore.
      }
    }, CameraQueues.cameraQueue)

    // Cache device in memory
    cameraDevice = device
    return device
  }
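`getCameraDevice` calls a suspending `CameraManager.openCamera` extension from `com.mrousavy.camera.extensions`. That extension is not in this diff; a simplified sketch matching the call-site above (signature assumed; errors that occur before `onOpened` are omitted for brevity):

```kotlin
import android.annotation.SuppressLint
import android.hardware.camera2.CameraDevice
import android.hardware.camera2.CameraManager
import kotlinx.coroutines.suspendCancellableCoroutine
import kotlin.coroutines.resume

// Sketch: resume the coroutine once the device opens; report later
// disconnects/errors through onClosed.
@SuppressLint("MissingPermission")
suspend fun CameraManager.openCamera(cameraId: String,
                                     onClosed: (camera: CameraDevice, reason: Throwable) -> Unit,
                                     queue: CameraQueues.CameraQueue): CameraDevice =
  suspendCancellableCoroutine { continuation ->
    openCamera(cameraId, object : CameraDevice.StateCallback() {
      override fun onOpened(camera: CameraDevice) {
        continuation.resume(camera)
      }
      override fun onDisconnected(camera: CameraDevice) {
        onClosed(camera, CameraDisconnectedError(cameraId, CameraDeviceError.DISCONNECTED))
      }
      override fun onError(camera: CameraDevice, errorCode: Int) {
        onClosed(camera, RuntimeException("Camera $cameraId error: $errorCode"))
      }
    }, queue.handler)
  }
```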

  // Caches the result of outputs.hashCode() of the last getCaptureSession call
  private var lastOutputsHashCode: Int? = null

  private suspend fun getCaptureSession(cameraDevice: CameraDevice,
                                        outputs: CameraOutputs,
                                        onClosed: () -> Unit): CameraCaptureSession {
    val currentSession = captureSession
    if (currentSession?.device == cameraDevice && outputs.hashCode() == lastOutputsHashCode) {
      // We already opened a CameraCaptureSession on this device
      return currentSession
    }
    captureSession?.close()
    captureSession = null

    val session = cameraDevice.createCaptureSession(cameraManager, SessionType.REGULAR, outputs, { session ->
      Log.d(TAG, "Capture Session Closed ($captureSession == $session)")
      if (captureSession == session) {
        // The current CameraCaptureSession has been closed, handle that!
        onClosed()
        captureSession = null
      } else {
        // A new CameraCaptureSession has been opened, we don't care about this one anymore.
      }
    }, CameraQueues.cameraQueue)

    // Cache session in memory
    captureSession = session
    lastOutputsHashCode = outputs.hashCode()
    return session
  }

  private fun getPreviewCaptureRequest(captureSession: CameraCaptureSession,
                                       outputs: CameraOutputs,
                                       fps: Int? = null,
                                       videoStabilizationMode: VideoStabilizationMode? = null,
                                       lowLightBoost: Boolean? = null,
                                       hdr: Boolean? = null,
                                       torch: Boolean? = null): CaptureRequest {
    val template = if (outputs.videoOutput != null) CameraDevice.TEMPLATE_RECORD else CameraDevice.TEMPLATE_PREVIEW
    val captureRequest = captureSession.device.createCaptureRequest(template)
    outputs.previewOutput?.let { output ->
      Log.i(TAG, "Adding output surface ${output.outputType}..")
      captureRequest.addTarget(output.surface)
    }
    outputs.videoOutput?.let { output ->
      Log.i(TAG, "Adding output surface ${output.outputType}..")
      captureRequest.addTarget(output.surface)
    }

    if (fps != null) {
      // TODO: Samsung advertises 60 FPS but only allows 30 FPS for some reason.
      val isSamsung = Build.MANUFACTURER == "samsung"
      val targetFps = if (isSamsung) 30 else fps

      captureRequest.set(CaptureRequest.CONTROL_AE_TARGET_FPS_RANGE, Range(targetFps, targetFps))
    }
    if (videoStabilizationMode != null) {
      captureRequest.set(CaptureRequest.CONTROL_VIDEO_STABILIZATION_MODE, videoStabilizationMode.toDigitalStabilizationMode())
      captureRequest.set(CaptureRequest.LENS_OPTICAL_STABILIZATION_MODE, videoStabilizationMode.toOpticalStabilizationMode())
    }
    if (lowLightBoost == true) {
      captureRequest.set(CaptureRequest.CONTROL_SCENE_MODE, CaptureRequest.CONTROL_SCENE_MODE_NIGHT)
    }
    if (hdr == true) {
      if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.LOLLIPOP_MR1) {
        captureRequest.set(CaptureRequest.CONTROL_SCENE_MODE, CaptureRequest.CONTROL_SCENE_MODE_HDR)
      }
    }
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.R) {
      captureRequest.set(CaptureRequest.CONTROL_ZOOM_RATIO, zoom)
    } else {
      val cameraCharacteristics = cameraManager.getCameraCharacteristics(cameraId!!)
      val size = cameraCharacteristics.get(CameraCharacteristics.SENSOR_INFO_ACTIVE_ARRAY_SIZE)!!
      captureRequest.set(CaptureRequest.SCALER_CROP_REGION, size.zoomed(zoom))
    }

    val torchMode = if (torch == true) CaptureRequest.FLASH_MODE_TORCH else CaptureRequest.FLASH_MODE_OFF
    captureRequest.set(CaptureRequest.FLASH_MODE, torchMode)

    return captureRequest.build()
  }
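The `zoomed` extension used for the pre-API-30 `SCALER_CROP_REGION` fallback is not shown in this diff; a sketch of what it plausibly does (crop the sensor's active array `Rect` around its center by the zoom factor):

```kotlin
import android.graphics.Rect

// Sketch: halve the crop window's extent per axis as zoom increases.
fun Rect.zoomed(zoomFactor: Float): Rect {
  val halfWidth = width() / zoomFactor / 2f
  val halfHeight = height() / zoomFactor / 2f
  return Rect((centerX() - halfWidth).toInt(),
              (centerY() - halfHeight).toInt(),
              (centerX() + halfWidth).toInt(),
              (centerY() + halfHeight).toInt())
}
```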

  private fun destroy() {
    Log.i(TAG, "Destroying session..")
    captureSession?.stopRepeating()
    captureSession?.close()
    captureSession = null

    cameraDevice?.close()
    cameraDevice = null

    isRunning = false
  }

  private suspend fun startRunning() {
    isRunning = false
    val cameraId = cameraId ?: return
    if (!isActive) return

    Log.i(TAG, "Starting Camera Session...")

    try {
      mutex.withLock {
        val fps = fps
        val videoStabilizationMode = videoStabilizationMode
        val lowLightBoost = lowLightBoost
        val hdr = hdr
        val outputs = outputs

        if (outputs == null || outputs.size == 0) {
          Log.i(TAG, "CameraSession doesn't have any Outputs, canceling..")
          destroy()
          return@withLock
        }

        // 2. Open Camera Device
        val camera = getCameraDevice(cameraId) { reason ->
          isRunning = false
          onError(reason)
        }

        // 3. Create capture session with outputs
        val session = getCaptureSession(camera, outputs) {
          isRunning = false
          onError(CameraDisconnectedError(cameraId, CameraDeviceError.DISCONNECTED))
        }

        // 4. Create repeating request (configures FPS, HDR, etc.)
        val repeatingRequest = getPreviewCaptureRequest(session, outputs, fps, videoStabilizationMode, lowLightBoost, hdr)

        // 5. Start repeating request
        session.setRepeatingRequest(repeatingRequest, null, null)

        Log.i(TAG, "Camera Session started!")
        isRunning = true
        this.captureSession = session
        this.outputs = outputs
        this.cameraDevice = camera

        onInitialized()
      }
    } catch (e: IllegalStateException) {
      Log.e(TAG, "Failed to start Camera Session, this session is already closed.", e)
    }
  }

  private suspend fun stopRunning() {
    Log.i(TAG, "Stopping Camera Session...")
    try {
      mutex.withLock {
        destroy()
        Log.i(TAG, "Camera Session stopped!")
      }
    } catch (e: IllegalStateException) {
      Log.e(TAG, "Failed to stop Camera Session, this session is already closed.", e)
    }
  }
}
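Taken together, a consumer such as `CameraView` drives the session roughly like this (assumed wiring, not code from this PR; `previewOutput` and `photoOutput` are hypothetical `CameraOutputs` values):

```kotlin
// Sketch of the call pattern: configure outputs/format, then toggle activity.
val session = CameraSession(context, cameraManager,
  onInitialized = { Log.i(TAG, "Camera initialized!") },
  onError = { error -> Log.e(TAG, "Camera failed!", error) })

session.configureSession(cameraId = "0", preview = previewOutput, photo = photoOutput)
session.configureFormat(fps = 30, hdr = false)
session.setIsActive(true) // opens the device and starts the repeating request
```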
@@ -1,29 +1,7 @@
 package com.mrousavy.camera

-import androidx.camera.core.FocusMeteringAction
 import com.facebook.react.bridge.ReadableMap
-import kotlinx.coroutines.guava.await
-import kotlinx.coroutines.withContext
-import java.util.concurrent.TimeUnit

 suspend fun CameraView.focus(pointMap: ReadableMap) {
-  val cameraControl = camera?.cameraControl ?: throw CameraNotReadyError()
-  if (!pointMap.hasKey("x") || !pointMap.hasKey("y")) {
-    throw InvalidTypeScriptUnionError("point", pointMap.toString())
-  }
-
-  val dpi = resources.displayMetrics.density
-  val x = pointMap.getDouble("x") * dpi
-  val y = pointMap.getDouble("y") * dpi
-
-  // Getting the point from the previewView needs to be run on the UI thread
-  val point = withContext(coroutineScope.coroutineContext) {
-    previewView.meteringPointFactory.createPoint(x.toFloat(), y.toFloat())
-  }
-
-  val action = FocusMeteringAction.Builder(point, FocusMeteringAction.FLAG_AF or FocusMeteringAction.FLAG_AE)
-    .setAutoCancelDuration(5, TimeUnit.SECONDS) // auto-reset after 5 seconds
-    .build()
-
-  cameraControl.startFocusAndMetering(action).await()
+  // TODO: CameraView.focus!!
 }
@@ -3,27 +3,15 @@ package com.mrousavy.camera
|
||||
import android.Manifest
|
||||
import android.annotation.SuppressLint
|
||||
import android.content.pm.PackageManager
|
||||
import androidx.camera.video.FileOutputOptions
|
||||
import androidx.camera.video.VideoRecordEvent
|
||||
import androidx.core.content.ContextCompat
|
||||
import androidx.core.util.Consumer
|
||||
import com.facebook.react.bridge.*
|
||||
import com.mrousavy.camera.utils.makeErrorMap
|
||||
import java.io.File
|
||||
import java.text.SimpleDateFormat
|
||||
import com.mrousavy.camera.parsers.Torch
|
||||
import com.mrousavy.camera.parsers.VideoCodec
|
||||
import com.mrousavy.camera.parsers.VideoFileType
|
||||
import com.mrousavy.camera.utils.RecordingSession
|
||||
import java.util.*
|
||||
|
||||
data class TemporaryFile(val path: String)
|
||||
|
||||
fun CameraView.startRecording(options: ReadableMap, onRecordCallback: Callback) {
|
||||
if (videoCapture == null) {
|
||||
if (video == true) {
|
||||
throw CameraNotReadyError()
|
||||
} else {
|
||||
throw VideoNotEnabledError()
|
||||
}
|
||||
}
|
||||
|
||||
suspend fun CameraView.startRecording(options: ReadableMap, onRecordCallback: Callback) {
|
||||
// check audio permission
|
||||
if (audio == true) {
|
||||
if (ContextCompat.checkSelfPermission(context, Manifest.permission.RECORD_AUDIO) != PackageManager.PERMISSION_GRANTED) {
|
||||
@@ -34,89 +22,38 @@ fun CameraView.startRecording(options: ReadableMap, onRecordCallback: Callback)
  if (options.hasKey("flash")) {
    val enableFlash = options.getString("flash") == "on"
    // overrides current torch mode value to enable flash while recording
    camera!!.cameraControl.enableTorch(enableFlash)
    cameraSession.setTorchMode(enableFlash)
  }
  var codec = VideoCodec.H264
  if (options.hasKey("videoCodec")) {
    codec = VideoCodec.fromUnionValue(options.getString("videoCodec"))
  }
  var fileType = VideoFileType.MP4
  if (options.hasKey("fileType")) {
    fileType = VideoFileType.fromUnionValue(options.getString("fileType"))
  }

  val id = SimpleDateFormat("yyyyMMdd_HHmmss", Locale.US).format(Date())
  val file = File.createTempFile("VisionCamera-${id}", ".mp4")
  val fileOptions = FileOutputOptions.Builder(file).build()

  val recorder = videoCapture!!.output
  var recording = recorder.prepareRecording(context, fileOptions)

  if (audio == true) {
    @SuppressLint("MissingPermission")
    recording = recording.withAudioEnabled()
  val callback = { video: RecordingSession.Video ->
    val map = Arguments.createMap()
    map.putString("path", video.path)
    map.putDouble("duration", video.durationMs.toDouble() / 1000.0)
    onRecordCallback(map, null)
  }

  activeVideoRecording = recording.start(ContextCompat.getMainExecutor(context), object : Consumer<VideoRecordEvent> {
    override fun accept(event: VideoRecordEvent?) {
      if (event is VideoRecordEvent.Finalize) {
        if (event.hasError()) {
          // error occurred!
          val error = when (event.error) {
            VideoRecordEvent.Finalize.ERROR_ENCODING_FAILED -> VideoEncoderError(event.cause)
            VideoRecordEvent.Finalize.ERROR_FILE_SIZE_LIMIT_REACHED -> FileSizeLimitReachedError(event.cause)
            VideoRecordEvent.Finalize.ERROR_INSUFFICIENT_STORAGE -> InsufficientStorageError(event.cause)
            VideoRecordEvent.Finalize.ERROR_INVALID_OUTPUT_OPTIONS -> InvalidVideoOutputOptionsError(event.cause)
            VideoRecordEvent.Finalize.ERROR_NO_VALID_DATA -> NoValidDataError(event.cause)
            VideoRecordEvent.Finalize.ERROR_RECORDER_ERROR -> RecorderError(event.cause)
            VideoRecordEvent.Finalize.ERROR_SOURCE_INACTIVE -> InactiveSourceError(event.cause)
            else -> UnknownCameraError(event.cause)
          }
          val map = makeErrorMap("${error.domain}/${error.id}", error.message, error)
          onRecordCallback(null, map)
        } else {
          // recording saved successfully!
          val map = Arguments.createMap()
          map.putString("path", event.outputResults.outputUri.toString())
          map.putDouble("duration", /* seconds */ event.recordingStats.recordedDurationNanos.toDouble() / 1000000.0 / 1000.0)
          map.putDouble("size", /* kB */ event.recordingStats.numBytesRecorded.toDouble() / 1000.0)
          onRecordCallback(map, null)
        }

        // reset the torch mode
        camera!!.cameraControl.enableTorch(torch == "on")
      }
    }
  })
  cameraSession.startRecording(audio == true, codec, fileType, callback)
}

@SuppressLint("RestrictedApi")
fun CameraView.pauseRecording() {
  if (videoCapture == null) {
    throw CameraNotReadyError()
  }
  if (activeVideoRecording == null) {
    throw NoRecordingInProgressError()
  }

  activeVideoRecording!!.pause()
suspend fun CameraView.pauseRecording() {
  cameraSession.pauseRecording()
}

@SuppressLint("RestrictedApi")
fun CameraView.resumeRecording() {
  if (videoCapture == null) {
    throw CameraNotReadyError()
  }
  if (activeVideoRecording == null) {
    throw NoRecordingInProgressError()
  }

  activeVideoRecording!!.resume()
suspend fun CameraView.resumeRecording() {
  cameraSession.resumeRecording()
}

@SuppressLint("RestrictedApi")
fun CameraView.stopRecording() {
  if (videoCapture == null) {
    throw CameraNotReadyError()
  }
  if (activeVideoRecording == null) {
    throw NoRecordingInProgressError()
  }

  activeVideoRecording!!.stop()

  // reset torch mode to original value
  camera!!.cameraControl.enableTorch(torch == "on")
suspend fun CameraView.stopRecording() {
  cameraSession.stopRecording()
  cameraSession.setTorchMode(torch == Torch.ON)
}
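The RecordingSession class these functions delegate to is not shown in this hunk; a rough sketch of the MediaRecorder-backed shape the calls above assume (assumptions: the recorder is already fully configured, and duration is tracked with wall-clock time):

import android.media.MediaRecorder

class RecordingSessionSketch(
  private val recorder: MediaRecorder, // assumed pre-configured (sources, codec, output file)
  private val outputPath: String
) {
  data class Video(val path: String, val durationMs: Long)

  private var startedAt = 0L

  fun start() {
    recorder.prepare()
    recorder.start()
    startedAt = System.currentTimeMillis()
  }

  fun stop(): Video {
    recorder.stop()
    recorder.release()
    return Video(outputPath, System.currentTimeMillis() - startedAt)
  }
}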
@@ -1,114 +1,115 @@
package com.mrousavy.camera

import android.annotation.SuppressLint
import android.content.Context
import android.graphics.Bitmap
import android.graphics.BitmapFactory
import android.graphics.ImageFormat
import android.graphics.Matrix
import android.hardware.camera2.*
import android.util.Log
import androidx.camera.camera2.interop.Camera2CameraInfo
import androidx.camera.core.ImageCapture
import androidx.camera.core.ImageProxy
import androidx.exifinterface.media.ExifInterface
import com.facebook.react.bridge.Arguments
import com.facebook.react.bridge.ReadableMap
import com.facebook.react.bridge.WritableMap
import com.mrousavy.camera.parsers.Flash
import com.mrousavy.camera.parsers.QualityPrioritization
import com.mrousavy.camera.utils.*
import kotlinx.coroutines.*
import java.io.File
import kotlin.system.measureTimeMillis
import java.io.FileOutputStream
import java.io.OutputStream

private const val TAG = "CameraView.takePhoto"

@SuppressLint("UnsafeOptInUsageError")
suspend fun CameraView.takePhoto(options: ReadableMap): WritableMap = coroutineScope {
  if (fallbackToSnapshot) {
    Log.i(CameraView.TAG, "takePhoto() called, but falling back to Snapshot because 1 use-case is already occupied.")
    return@coroutineScope takeSnapshot(options)
  }
suspend fun CameraView.takePhoto(optionsMap: ReadableMap): WritableMap {
  val options = optionsMap.toHashMap()
  Log.i(TAG, "Taking photo... Options: $options")

  val startFunc = System.nanoTime()
  Log.i(CameraView.TAG, "takePhoto() called")
  if (imageCapture == null) {
    if (photo == true) {
      throw CameraNotReadyError()
    } else {
      throw PhotoNotEnabledError()
    }
  }
  val qualityPrioritization = options["qualityPrioritization"] as? String ?: "balanced"
  val flash = options["flash"] as? String ?: "off"
  val enableAutoRedEyeReduction = options["enableAutoRedEyeReduction"] == true
  val enableAutoStabilization = options["enableAutoStabilization"] == true
  val skipMetadata = options["skipMetadata"] == true

  if (options.hasKey("flash")) {
    val flashMode = options.getString("flash")
    imageCapture!!.flashMode = when (flashMode) {
      "on" -> ImageCapture.FLASH_MODE_ON
      "off" -> ImageCapture.FLASH_MODE_OFF
      "auto" -> ImageCapture.FLASH_MODE_AUTO
      else -> throw InvalidTypeScriptUnionError("flash", flashMode ?: "(null)")
    }
  }
  // All those options are not yet implemented - see https://github.com/mrousavy/react-native-vision-camera/issues/75
  if (options.hasKey("photoCodec")) {
    // TODO photoCodec
  }
  if (options.hasKey("qualityPrioritization")) {
    // TODO qualityPrioritization
  }
  if (options.hasKey("enableAutoRedEyeReduction")) {
    // TODO enableAutoRedEyeReduction
  }
  if (options.hasKey("enableDualCameraFusion")) {
    // TODO enableDualCameraFusion
  }
  if (options.hasKey("enableAutoStabilization")) {
    // TODO enableAutoStabilization
  }
  if (options.hasKey("enableAutoDistortionCorrection")) {
    // TODO enableAutoDistortionCorrection
  }
  val skipMetadata = if (options.hasKey("skipMetadata")) options.getBoolean("skipMetadata") else false
  val flashMode = Flash.fromUnionValue(flash)
  val qualityPrioritizationMode = QualityPrioritization.fromUnionValue(qualityPrioritization)

  val camera2Info = Camera2CameraInfo.from(camera!!.cameraInfo)
  val lensFacing = camera2Info.getCameraCharacteristic(CameraCharacteristics.LENS_FACING)
  val photo = cameraSession.takePhoto(qualityPrioritizationMode,
    flashMode,
    enableAutoRedEyeReduction,
    enableAutoStabilization,
    outputOrientation)

  val results = awaitAll(
    async(coroutineContext) {
      Log.d(CameraView.TAG, "Taking picture...")
      val startCapture = System.nanoTime()
      val pic = imageCapture!!.takePicture(takePhotoExecutor)
      val endCapture = System.nanoTime()
      Log.i(CameraView.TAG_PERF, "Finished image capture in ${(endCapture - startCapture) / 1_000_000}ms")
      pic
    },
    async(Dispatchers.IO) {
      Log.d(CameraView.TAG, "Creating temp file...")
      File.createTempFile("mrousavy", ".jpg", context.cacheDir).apply { deleteOnExit() }
    }
  )
  val photo = results.first { it is ImageProxy } as ImageProxy
  val file = results.first { it is File } as File
  photo.use {
    Log.i(TAG, "Successfully captured ${photo.image.width} x ${photo.image.height} photo!")

    val exif: ExifInterface?
    @Suppress("BlockingMethodInNonBlockingContext")
    withContext(Dispatchers.IO) {
      Log.d(CameraView.TAG, "Saving picture to ${file.absolutePath}...")
      val milliseconds = measureTimeMillis {
        val flipHorizontally = lensFacing == CameraCharacteristics.LENS_FACING_FRONT
        photo.save(file, flipHorizontally)
      }
      Log.i(CameraView.TAG_PERF, "Finished image saving in ${milliseconds}ms")
      // TODO: Read Exif from existing in-memory photo buffer instead of file?
      exif = if (skipMetadata) null else ExifInterface(file)
    val cameraCharacteristics = cameraManager.getCameraCharacteristics(cameraId!!)

    val path = savePhotoToFile(context, cameraCharacteristics, photo)

    Log.i(TAG, "Successfully saved photo to file! $path")

    val map = Arguments.createMap()
    map.putString("path", path)
    map.putInt("width", photo.image.width)
    map.putInt("height", photo.image.height)
    map.putString("orientation", photo.orientation.unionValue)
    map.putBoolean("isRawPhoto", photo.format == ImageFormat.RAW_SENSOR)
    map.putBoolean("isMirrored", photo.isMirrored)

    // TODO: Add metadata prop to resulting photo

    return map
  }

  val map = Arguments.createMap()
  map.putString("path", file.absolutePath)
  map.putInt("width", photo.width)
  map.putInt("height", photo.height)
  map.putBoolean("isRawPhoto", photo.isRaw)

  val metadata = exif?.buildMetadataMap()
  map.putMap("metadata", metadata)

  photo.close()

  Log.d(CameraView.TAG, "Finished taking photo!")

  val endFunc = System.nanoTime()
  Log.i(CameraView.TAG_PERF, "Finished function execution in ${(endFunc - startFunc) / 1_000_000}ms")
  return@coroutineScope map
}

private fun writeImageToStream(imageBytes: ByteArray, stream: OutputStream, isMirrored: Boolean) {
  if (isMirrored) {
    val bitmap = BitmapFactory.decodeByteArray(imageBytes, 0, imageBytes.size)
    val matrix = Matrix()
    matrix.preScale(-1f, 1f)
    val processedBitmap = Bitmap.createBitmap(bitmap, 0, 0, bitmap.width, bitmap.height, matrix, false)
    processedBitmap.compress(Bitmap.CompressFormat.JPEG, 100, stream)
  } else {
    stream.write(imageBytes)
  }
}

private suspend fun savePhotoToFile(context: Context,
                                    cameraCharacteristics: CameraCharacteristics,
                                    photo: CameraSession.CapturedPhoto): String {
  return withContext(Dispatchers.IO) {
    when (photo.format) {
      // When the format is JPEG or DEPTH JPEG we can simply save the bytes as-is
      ImageFormat.JPEG, ImageFormat.DEPTH_JPEG -> {
        val buffer = photo.image.planes[0].buffer
        val bytes = ByteArray(buffer.remaining()).apply { buffer.get(this) }
        val file = createFile(context, ".jpg")
        FileOutputStream(file).use { stream ->
          writeImageToStream(bytes, stream, photo.isMirrored)
        }
        return@withContext file.absolutePath
      }

      // When the format is RAW we use the DngCreator utility library
      ImageFormat.RAW_SENSOR -> {
        val dngCreator = DngCreator(cameraCharacteristics, photo.metadata)
        val file = createFile(context, ".dng")
        FileOutputStream(file).use { stream ->
          // TODO: Make sure orientation is loaded properly here?
          dngCreator.writeImage(stream, photo.image)
        }
        return@withContext file.absolutePath
      }

      else -> {
        throw Error("Failed to save Photo to file, image format is not supported! ${photo.format}")
      }
    }
  }
}

private fun createFile(context: Context, extension: String): File {
  return File.createTempFile("mrousavy", extension, context.cacheDir).apply { deleteOnExit() }
}
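One way the orientation TODO in the RAW branch could be resolved: DngCreator accepts an EXIF orientation before writeImage() is called. A sketch, assuming the parsers.Orientation enum uses these case names (they mirror the union values but are not confirmed by this diff):

import android.hardware.camera2.DngCreator
import androidx.exifinterface.media.ExifInterface

// Hypothetical helper, not part of this PR.
fun applyOrientationSketch(dngCreator: DngCreator, orientation: Orientation) {
  val exifOrientation = when (orientation) {
    Orientation.PORTRAIT -> ExifInterface.ORIENTATION_NORMAL
    Orientation.LANDSCAPE_RIGHT -> ExifInterface.ORIENTATION_ROTATE_90
    Orientation.PORTRAIT_UPSIDE_DOWN -> ExifInterface.ORIENTATION_ROTATE_180
    Orientation.LANDSCAPE_LEFT -> ExifInterface.ORIENTATION_ROTATE_270
  }
  dngCreator.setOrientation(exifOrientation)
}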
@@ -1,60 +0,0 @@
package com.mrousavy.camera

import android.graphics.Bitmap
import androidx.exifinterface.media.ExifInterface
import com.facebook.react.bridge.Arguments
import com.facebook.react.bridge.ReadableMap
import com.facebook.react.bridge.WritableMap
import com.mrousavy.camera.utils.buildMetadataMap
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.coroutineScope
import kotlinx.coroutines.withContext
import java.io.File
import java.io.FileOutputStream
import kotlinx.coroutines.guava.await

suspend fun CameraView.takeSnapshot(options: ReadableMap): WritableMap = coroutineScope {
  val camera = camera ?: throw CameraNotReadyError()
  val enableFlash = options.getString("flash") == "on"

  try {
    if (enableFlash) {
      camera.cameraControl.enableTorch(true).await()
    }

    val bitmap = withContext(coroutineScope.coroutineContext) {
      previewView.bitmap ?: throw CameraNotReadyError()
    }

    val quality = if (options.hasKey("quality")) options.getInt("quality") else 100

    val file: File
    val exif: ExifInterface
    @Suppress("BlockingMethodInNonBlockingContext")
    withContext(Dispatchers.IO) {
      file = File.createTempFile("mrousavy", ".jpg", context.cacheDir).apply { deleteOnExit() }
      FileOutputStream(file).use { stream ->
        bitmap.compress(Bitmap.CompressFormat.JPEG, quality, stream)
      }
      exif = ExifInterface(file)
    }

    val map = Arguments.createMap()
    map.putString("path", file.absolutePath)
    map.putInt("width", bitmap.width)
    map.putInt("height", bitmap.height)
    map.putBoolean("isRawPhoto", false)

    val skipMetadata =
      if (options.hasKey("skipMetadata")) options.getBoolean("skipMetadata") else false
    val metadata = if (skipMetadata) null else exif.buildMetadataMap()
    map.putMap("metadata", metadata)

    return@coroutineScope map
  } finally {
    if (enableFlash) {
      // reset to `torch` property
      camera.cameraControl.enableTorch(this@takeSnapshot.torch == "on")
    }
  }
}
@@ -5,80 +5,60 @@ import android.annotation.SuppressLint
import android.content.Context
import android.content.pm.PackageManager
import android.content.res.Configuration
import android.hardware.camera2.*
import android.hardware.camera2.CameraManager
import android.util.Log
import android.util.Range
import android.view.*
import android.view.View.OnTouchListener
import android.util.Size
import android.view.Surface
import android.view.View
import android.widget.FrameLayout
import androidx.camera.camera2.interop.Camera2Interop
import androidx.camera.core.*
import androidx.camera.core.impl.*
import androidx.camera.extensions.*
import androidx.camera.lifecycle.ProcessCameraProvider
import androidx.camera.video.*
import androidx.camera.video.VideoCapture
import androidx.camera.view.PreviewView
import androidx.core.content.ContextCompat
import androidx.lifecycle.*
import com.facebook.jni.HybridData
import com.facebook.proguard.annotations.DoNotStrip
import com.facebook.react.bridge.*
import com.mrousavy.camera.frameprocessor.Frame
import com.facebook.react.bridge.ReadableMap
import com.mrousavy.camera.extensions.containsAny
import com.mrousavy.camera.extensions.installHierarchyFitter
import com.mrousavy.camera.frameprocessor.FrameProcessor
import com.mrousavy.camera.frameprocessor.FrameProcessorPlugin
import com.mrousavy.camera.frameprocessor.FrameProcessorPluginRegistry
import com.mrousavy.camera.utils.*
import kotlinx.coroutines.*
import kotlinx.coroutines.guava.await
import java.lang.IllegalArgumentException
import java.util.concurrent.ExecutorService
import java.util.concurrent.Executors
import kotlin.math.max
import kotlin.math.min
import com.mrousavy.camera.parsers.PixelFormat
import com.mrousavy.camera.parsers.Orientation
import com.mrousavy.camera.parsers.PreviewType
import com.mrousavy.camera.parsers.Torch
import com.mrousavy.camera.parsers.VideoStabilizationMode
import com.mrousavy.camera.skia.SkiaPreviewView
import com.mrousavy.camera.skia.SkiaRenderer
import com.mrousavy.camera.utils.outputs.CameraOutputs
import kotlinx.coroutines.CoroutineScope
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.launch
import java.io.Closeable

//
// TODOs for the CameraView which are currently too hard to implement either because of CameraX' limitations, or my brain capacity.
//
// CameraView
// TODO: Actually use correct sizes for video and photo (currently it's both the video size)
// TODO: Configurable FPS higher than 30
// TODO: High-speed video recordings (export in CameraViewModule::getAvailableVideoDevices(), and set in CameraView::configurePreview()) (120FPS+)
// TODO: configureSession() enableDepthData
// TODO: configureSession() enableHighQualityPhotos
// TODO: configureSession() enablePortraitEffectsMatteDelivery
// TODO: configureSession() colorSpace

// CameraView+RecordVideo
// TODO: Better startRecording()/stopRecording() (promise + callback, wait for TurboModules/JSI)
// TODO: videoStabilizationMode
// TODO: Return Video size/duration

// CameraView+TakePhoto
// TODO: Mirror selfie images
// TODO: takePhoto() depth data
// TODO: takePhoto() raw capture
// TODO: takePhoto() photoCodec ("hevc" | "jpeg" | "raw")
// TODO: takePhoto() qualityPrioritization
// TODO: takePhoto() enableAutoRedEyeReduction
// TODO: takePhoto() enableAutoStabilization
// TODO: takePhoto() enableAutoDistortionCorrection
// TODO: takePhoto() return with jsi::Value Image reference for faster capture

@Suppress("KotlinJniMissingFunction") // I use fbjni, Android Studio is not smart enough to realize that.
@SuppressLint("ClickableViewAccessibility", "ViewConstructor")
class CameraView(context: Context, private val frameProcessorThread: ExecutorService) : FrameLayout(context), LifecycleOwner {
@SuppressLint("ClickableViewAccessibility", "ViewConstructor", "MissingPermission")
class CameraView(context: Context) : FrameLayout(context) {
  companion object {
    const val TAG = "CameraView"
    const val TAG_PERF = "CameraView.performance"

    private val propsThatRequireSessionReconfiguration = arrayListOf("cameraId", "format", "fps", "hdr", "lowLightBoost", "photo", "video", "enableFrameProcessor")
    private val arrayListOfZoom = arrayListOf("zoom")
    private val propsThatRequirePreviewReconfiguration = arrayListOf("cameraId", "previewType")
    private val propsThatRequireSessionReconfiguration = arrayListOf("cameraId", "format", "photo", "video", "enableFrameProcessor", "pixelFormat")
    private val propsThatRequireFormatReconfiguration = arrayListOf("fps", "hdr", "videoStabilizationMode", "lowLightBoost")
  }

  // react properties
  // props that require reconfiguring
  var cameraId: String? = null // this is actually not a react prop directly, but the result of setting device={}
  var cameraId: String? = null
  var enableDepthData = false
  var enableHighQualityPhotos: Boolean? = null
  var enablePortraitEffectsMatteDelivery = false
@@ -87,406 +67,186 @@ class CameraView(context: Context, private val frameProcessorThread: ExecutorSer
  var video: Boolean? = null
  var audio: Boolean? = null
  var enableFrameProcessor = false
  var pixelFormat: PixelFormat = PixelFormat.NATIVE
  // props that require format reconfiguring
  var format: ReadableMap? = null
  var fps: Int? = null
  var videoStabilizationMode: VideoStabilizationMode? = null
  var hdr: Boolean? = null // nullable bool
  var colorSpace: String? = null
  var lowLightBoost: Boolean? = null // nullable bool
  var previewType: PreviewType = PreviewType.NONE
  // other props
  var isActive = false
  var torch = "off"
  var torch: Torch = Torch.OFF
  var zoom: Float = 1f // in "factor"
  var orientation: String? = null
  var enableZoomGesture = false
    set(value) {
      field = value
      setOnTouchListener(if (value) touchEventListener else null)
    }
  var orientation: Orientation? = null

  // private properties
  private var isMounted = false
  private val reactContext: ReactContext
    get() = context as ReactContext
  internal val cameraManager = context.getSystemService(Context.CAMERA_SERVICE) as CameraManager

  @Suppress("JoinDeclarationAndAssignment")
  internal val previewView: PreviewView
  private val cameraExecutor = Executors.newSingleThreadExecutor()
  internal val takePhotoExecutor = Executors.newSingleThreadExecutor()
  internal val recordVideoExecutor = Executors.newSingleThreadExecutor()
  internal var coroutineScope = CoroutineScope(Dispatchers.Main)
  // session
  internal val cameraSession: CameraSession
  private var previewView: View? = null
  private var previewSurface: Surface? = null

  internal var camera: Camera? = null
  internal var imageCapture: ImageCapture? = null
  internal var videoCapture: VideoCapture<Recorder>? = null
  public var frameProcessor: FrameProcessor? = null
  private var preview: Preview? = null
  private var imageAnalysis: ImageAnalysis? = null

  internal var activeVideoRecording: Recording? = null

  private var extensionsManager: ExtensionsManager? = null

  private val scaleGestureListener: ScaleGestureDetector.SimpleOnScaleGestureListener
  private val scaleGestureDetector: ScaleGestureDetector
  private val touchEventListener: OnTouchListener

  private val lifecycleRegistry: LifecycleRegistry
  private var hostLifecycleState: Lifecycle.State

  private val inputRotation: Int
    get() {
      return context.displayRotation
    }
  private val outputRotation: Int
    get() {
      if (orientation != null) {
        // user is overriding output orientation
        return when (orientation!!) {
          "portrait" -> Surface.ROTATION_0
          "landscapeRight" -> Surface.ROTATION_90
          "portraitUpsideDown" -> Surface.ROTATION_180
          "landscapeLeft" -> Surface.ROTATION_270
          else -> throw InvalidTypeScriptUnionError("orientation", orientation!!)
        }
      } else {
        // use same as input rotation
        return inputRotation
      }
  private var skiaRenderer: SkiaRenderer? = null
  internal var frameProcessor: FrameProcessor? = null
    set(value) {
      field = value
      cameraSession.setFrameProcessor(frameProcessor)
    }

  private val inputOrientation: Orientation
    get() = cameraSession.orientation
  internal val outputOrientation: Orientation
    get() = orientation ?: inputOrientation

  private var minZoom: Float = 1f
  private var maxZoom: Float = 1f

  @Suppress("RedundantIf")
  internal val fallbackToSnapshot: Boolean
    @SuppressLint("UnsafeOptInUsageError")
    get() {
      if (video != true && !enableFrameProcessor) {
        // Both use-cases are disabled, so `photo` is the only use-case anyways. Don't need to fallback here.
        return false
      }
      cameraId?.let { cameraId ->
        val cameraManger = reactContext.getSystemService(Context.CAMERA_SERVICE) as? CameraManager
        cameraManger?.let {
          val characteristics = cameraManger.getCameraCharacteristics(cameraId)
          val hardwareLevel = characteristics.get(CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL)
          if (hardwareLevel == CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL_LEGACY) {
            // Camera only supports a single use-case at a time
            return true
          } else {
            if (video == true && enableFrameProcessor) {
              // Camera supports max. 2 use-cases, but both are occupied by `frameProcessor` and `video`
              return true
            } else {
              // Camera supports max. 2 use-cases and only one is occupied (either `frameProcessor` or `video`), so we can add `photo`
              return false
            }
          }
        }
      }
      return false
    }

  init {
    previewView = PreviewView(context)
    previewView.layoutParams = LayoutParams(LayoutParams.MATCH_PARENT, LayoutParams.MATCH_PARENT)
    previewView.installHierarchyFitter() // If this is not called correctly, view finder will be black/blank
    addView(previewView)

    scaleGestureListener = object : ScaleGestureDetector.SimpleOnScaleGestureListener() {
      override fun onScale(detector: ScaleGestureDetector): Boolean {
        zoom = max(min((zoom * detector.scaleFactor), maxZoom), minZoom)
        update(arrayListOfZoom)
        return true
      }
    }
    scaleGestureDetector = ScaleGestureDetector(context, scaleGestureListener)
    touchEventListener = OnTouchListener { _, event -> return@OnTouchListener scaleGestureDetector.onTouchEvent(event) }

    hostLifecycleState = Lifecycle.State.INITIALIZED
    lifecycleRegistry = LifecycleRegistry(this)
    reactContext.addLifecycleEventListener(object : LifecycleEventListener {
      override fun onHostResume() {
        hostLifecycleState = Lifecycle.State.RESUMED
        updateLifecycleState()
        // workaround for https://issuetracker.google.com/issues/147354615, preview must be bound on resume
        update(propsThatRequireSessionReconfiguration)
      }
      override fun onHostPause() {
        hostLifecycleState = Lifecycle.State.CREATED
        updateLifecycleState()
      }
      override fun onHostDestroy() {
        hostLifecycleState = Lifecycle.State.DESTROYED
        updateLifecycleState()
        cameraExecutor.shutdown()
        takePhotoExecutor.shutdown()
        recordVideoExecutor.shutdown()
        reactContext.removeLifecycleEventListener(this)
      }
    })
    this.installHierarchyFitter()
    setupPreviewView()
    cameraSession = CameraSession(context, cameraManager, { invokeOnInitialized() }, { error -> invokeOnError(error) })
  }

  override fun onConfigurationChanged(newConfig: Configuration?) {
    super.onConfigurationChanged(newConfig)
    updateOrientation()
  }

  @SuppressLint("RestrictedApi")
  private fun updateOrientation() {
    preview?.targetRotation = inputRotation
    imageCapture?.targetRotation = outputRotation
    videoCapture?.targetRotation = outputRotation
    imageAnalysis?.targetRotation = outputRotation
  }

  override fun getLifecycle(): Lifecycle {
    return lifecycleRegistry
  }

  /**
   * Updates the custom Lifecycle to match the host activity's lifecycle, and if it's active we narrow it down to the [isActive] and [isAttachedToWindow] fields.
   */
  private fun updateLifecycleState() {
    val lifecycleBefore = lifecycleRegistry.currentState
    if (hostLifecycleState == Lifecycle.State.RESUMED) {
      // Host Lifecycle (Activity) is currently active (RESUMED), so we narrow it down to the view's lifecycle
      if (isActive && isAttachedToWindow) {
        lifecycleRegistry.currentState = Lifecycle.State.RESUMED
      } else {
        lifecycleRegistry.currentState = Lifecycle.State.CREATED
      }
    } else {
      // Host Lifecycle (Activity) is currently inactive (STARTED or DESTROYED), so that overrules our view's lifecycle
      lifecycleRegistry.currentState = hostLifecycleState
    }
    Log.d(TAG, "Lifecycle went from ${lifecycleBefore.name} -> ${lifecycleRegistry.currentState.name} (isActive: $isActive | isAttachedToWindow: $isAttachedToWindow)")
    // TODO: updateOrientation()
  }

  override fun onAttachedToWindow() {
    super.onAttachedToWindow()
    updateLifecycleState()
    if (!isMounted) {
      isMounted = true
      invokeOnViewReady()
    }
    updateLifecycle()
  }

  override fun onDetachedFromWindow() {
    super.onDetachedFromWindow()
    updateLifecycleState()
    updateLifecycle()
  }

  /**
   * Invalidate all React Props and reconfigure the device
   */
  fun update(changedProps: ArrayList<String>) = previewView.post {
    // TODO: Does this introduce too much overhead?
    // I need to .post on the previewView because it might've not been initialized yet
    // I need to use CoroutineScope.launch because of the suspend fun [configureSession]
    coroutineScope.launch {
      try {
        val shouldReconfigureSession = changedProps.containsAny(propsThatRequireSessionReconfiguration)
        val shouldReconfigureZoom = shouldReconfigureSession || changedProps.contains("zoom")
        val shouldReconfigureTorch = shouldReconfigureSession || changedProps.contains("torch")
        val shouldUpdateOrientation = shouldReconfigureSession || changedProps.contains("orientation")
  private fun setupPreviewView() {
    this.previewView?.let { previewView ->
      removeView(previewView)
      if (previewView is Closeable) previewView.close()
    }
    this.previewSurface = null

        if (changedProps.contains("isActive")) {
          updateLifecycleState()
        }
        if (shouldReconfigureSession) {
    when (previewType) {
      PreviewType.NONE -> {
        // Do nothing.
      }
      PreviewType.NATIVE -> {
        val cameraId = cameraId ?: throw NoCameraDeviceError()
        this.previewView = NativePreviewView(context, cameraManager, cameraId) { surface ->
          previewSurface = surface
          configureSession()
        }
        if (shouldReconfigureZoom) {
          val zoomClamped = max(min(zoom, maxZoom), minZoom)
          camera!!.cameraControl.setZoomRatio(zoomClamped)
        }
        if (shouldReconfigureTorch) {
          camera!!.cameraControl.enableTorch(torch == "on")
        }
        if (shouldUpdateOrientation) {
          updateOrientation()
        }
      } catch (e: Throwable) {
        Log.e(TAG, "update() threw: ${e.message}")
        invokeOnError(e)
      }
      PreviewType.SKIA -> {
        if (skiaRenderer == null) skiaRenderer = SkiaRenderer()
        this.previewView = SkiaPreviewView(context, skiaRenderer!!)
        configureSession()
      }
    }

    this.previewView?.let { previewView ->
      previewView.layoutParams = LayoutParams(LayoutParams.MATCH_PARENT, LayoutParams.MATCH_PARENT)
      addView(previewView)
    }
  }

  /**
   * Configures the camera capture session. This should only be called when the camera device changes.
   */
  @SuppressLint("RestrictedApi", "UnsafeOptInUsageError")
  private suspend fun configureSession() {
  fun update(changedProps: ArrayList<String>) {
    Log.i(TAG, "Props changed: $changedProps")
    try {
      val startTime = System.currentTimeMillis()
      Log.i(TAG, "Configuring session...")
      val shouldReconfigurePreview = changedProps.containsAny(propsThatRequirePreviewReconfiguration)
      val shouldReconfigureSession = shouldReconfigurePreview || changedProps.containsAny(propsThatRequireSessionReconfiguration)
      val shouldReconfigureFormat = shouldReconfigureSession || changedProps.containsAny(propsThatRequireFormatReconfiguration)
      val shouldReconfigureZoom = /* TODO: When should we reconfigure this? */ shouldReconfigureSession || changedProps.contains("zoom")
      val shouldReconfigureTorch = /* TODO: When should we reconfigure this? */ shouldReconfigureSession || changedProps.contains("torch")
      val shouldUpdateOrientation = /* TODO: When should we reconfigure this? */ shouldReconfigureSession || changedProps.contains("orientation")
      val shouldCheckActive = shouldReconfigureFormat || changedProps.contains("isActive")

      if (shouldReconfigurePreview) {
        setupPreviewView()
      }
      if (shouldReconfigureSession) {
        configureSession()
      }
      if (shouldReconfigureFormat) {
        configureFormat()
      }
      if (shouldCheckActive) {
        updateLifecycle()
      }

      if (shouldReconfigureZoom) {
        updateZoom()
      }
      if (shouldReconfigureTorch) {
        updateTorch()
      }
      if (shouldUpdateOrientation) {
        // TODO: updateOrientation()
      }
    } catch (e: Throwable) {
      Log.e(TAG, "update() threw: ${e.message}")
      invokeOnError(e)
    }
  }

  private fun configureSession() {
    try {
      Log.i(TAG, "Configuring Camera Device...")

      if (ContextCompat.checkSelfPermission(context, Manifest.permission.CAMERA) != PackageManager.PERMISSION_GRANTED) {
        throw CameraPermissionError()
      }
      if (cameraId == null) {
        throw NoCameraDeviceError()
      }
      if (format != null)
        Log.i(TAG, "Configuring session with Camera ID $cameraId and custom format...")
      else
        Log.i(TAG, "Configuring session with Camera ID $cameraId and default format options...")
      val cameraId = cameraId ?: throw NoCameraDeviceError()

      // Used to bind the lifecycle of cameras to the lifecycle owner
      val cameraProvider = ProcessCameraProvider.getInstance(reactContext).await()
      val format = format
      val targetVideoSize = if (format != null) Size(format.getInt("videoWidth"), format.getInt("videoHeight")) else null
      val targetPhotoSize = if (format != null) Size(format.getInt("photoWidth"), format.getInt("photoHeight")) else null
      // TODO: Allow previewSurface to be null/none
      val previewSurface = previewSurface ?: return

      var cameraSelector = CameraSelector.Builder().byID(cameraId!!).build()
      if (targetVideoSize != null) skiaRenderer?.setInputSurfaceSize(targetVideoSize.width, targetVideoSize.height)

      val tryEnableExtension: (suspend (extension: Int) -> Unit) = lambda@ { extension ->
        if (extensionsManager == null) {
          Log.i(TAG, "Initializing ExtensionsManager...")
          extensionsManager = ExtensionsManager.getInstanceAsync(context, cameraProvider).await()
        }
        if (extensionsManager!!.isExtensionAvailable(cameraSelector, extension)) {
          Log.i(TAG, "Enabling extension $extension...")
          cameraSelector = extensionsManager!!.getExtensionEnabledCameraSelector(cameraSelector, extension)
        } else {
          Log.e(TAG, "Extension $extension is not available for the given Camera!")
          throw when (extension) {
            ExtensionMode.HDR -> HdrNotContainedInFormatError()
            ExtensionMode.NIGHT -> LowLightBoostNotContainedInFormatError()
            else -> Error("Invalid extension supplied! Extension $extension is not available.")
          }
        }
      }
      val previewOutput = CameraOutputs.PreviewOutput(previewSurface)
      val photoOutput = if (photo == true) {
        CameraOutputs.PhotoOutput(targetPhotoSize)
      } else null
      val videoOutput = if (video == true || enableFrameProcessor) {
        CameraOutputs.VideoOutput(targetVideoSize, video == true, enableFrameProcessor, pixelFormat.toImageFormat())
      } else null

      val previewBuilder = Preview.Builder()
        .setTargetRotation(inputRotation)
      cameraSession.configureSession(cameraId, previewOutput, photoOutput, videoOutput)
    } catch (e: Throwable) {
      Log.e(TAG, "Failed to configure session: ${e.message}", e)
      invokeOnError(e)
    }
  }

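Both versions of update() rely on a containsAny extension from com.mrousavy.camera.extensions that is not shown in this diff; judging by its usage, the implementation is trivial (assumed shape):

// Assumed sketch of com.mrousavy.camera.extensions.containsAny:
fun <T> Collection<T>.containsAny(elements: Collection<T>): Boolean =
  elements.any { contains(it) }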
      val imageCaptureBuilder = ImageCapture.Builder()
        .setTargetRotation(outputRotation)
        .setCaptureMode(ImageCapture.CAPTURE_MODE_MINIMIZE_LATENCY)
  private fun configureFormat() {
    cameraSession.configureFormat(fps, videoStabilizationMode, hdr, lowLightBoost)
  }

      val videoRecorderBuilder = Recorder.Builder()
        .setExecutor(cameraExecutor)
  private fun updateLifecycle() {
    cameraSession.setIsActive(isActive && isAttachedToWindow)
  }

      val imageAnalysisBuilder = ImageAnalysis.Builder()
        .setTargetRotation(outputRotation)
        .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
        .setBackgroundExecutor(frameProcessorThread)
  private fun updateZoom() {
    cameraSession.setZoom(zoom)
  }

      if (format == null) {
        // let CameraX automatically find best resolution for the target aspect ratio
        Log.i(TAG, "No custom format has been set, CameraX will automatically determine best configuration...")
        val aspectRatio = aspectRatio(previewView.height, previewView.width) // flipped because it's in sensor orientation.
        previewBuilder.setTargetAspectRatio(aspectRatio)
        imageCaptureBuilder.setTargetAspectRatio(aspectRatio)
        // TODO: Aspect Ratio for Video Recorder?
        imageAnalysisBuilder.setTargetAspectRatio(aspectRatio)
      } else {
        // User has selected a custom format={}. Use that
        val format = DeviceFormat(format!!)
        Log.i(TAG, "Using custom format - photo: ${format.photoSize}, video: ${format.videoSize} @ $fps FPS")
        if (video == true) {
          previewBuilder.setTargetResolution(format.videoSize)
        } else {
          previewBuilder.setTargetResolution(format.photoSize)
        }
        imageCaptureBuilder.setTargetResolution(format.photoSize)
        imageAnalysisBuilder.setTargetResolution(format.photoSize)

        // TODO: Ability to select resolution exactly depending on format? Just like on iOS...
        when (min(format.videoSize.height, format.videoSize.width)) {
          in 0..480 -> videoRecorderBuilder.setQualitySelector(QualitySelector.from(Quality.SD))
          in 480..720 -> videoRecorderBuilder.setQualitySelector(QualitySelector.from(Quality.HD, FallbackStrategy.lowerQualityThan(Quality.HD)))
          in 720..1080 -> videoRecorderBuilder.setQualitySelector(QualitySelector.from(Quality.FHD, FallbackStrategy.lowerQualityThan(Quality.FHD)))
          in 1080..2160 -> videoRecorderBuilder.setQualitySelector(QualitySelector.from(Quality.UHD, FallbackStrategy.lowerQualityThan(Quality.UHD)))
          in 2160..4320 -> videoRecorderBuilder.setQualitySelector(QualitySelector.from(Quality.HIGHEST, FallbackStrategy.lowerQualityThan(Quality.HIGHEST)))
        }

        fps?.let { fps ->
          if (format.frameRateRanges.any { it.contains(fps) }) {
            // Camera supports the given FPS (frame rate range)
            val frameDuration = (1.0 / fps.toDouble()).toLong() * 1_000_000_000

            Log.i(TAG, "Setting AE_TARGET_FPS_RANGE to $fps-$fps, and SENSOR_FRAME_DURATION to $frameDuration")
            Camera2Interop.Extender(previewBuilder)
              .setCaptureRequestOption(CaptureRequest.CONTROL_AE_TARGET_FPS_RANGE, Range(fps, fps))
              .setCaptureRequestOption(CaptureRequest.SENSOR_FRAME_DURATION, frameDuration)
            // TODO: Frame Rate/FPS for Video Recorder?
          } else {
            throw FpsNotContainedInFormatError(fps)
          }
        }
        if (hdr == true) {
          tryEnableExtension(ExtensionMode.HDR)
        }
        if (lowLightBoost == true) {
          tryEnableExtension(ExtensionMode.NIGHT)
        }
      }

      // Unbind use cases before rebinding
      videoCapture = null
      imageCapture = null
      imageAnalysis = null
      cameraProvider.unbindAll()

      // Bind use cases to camera
      val useCases = ArrayList<UseCase>()
      if (video == true) {
        Log.i(TAG, "Adding VideoCapture use-case...")

        val videoRecorder = videoRecorderBuilder.build()
        videoCapture = VideoCapture.withOutput(videoRecorder)
        videoCapture!!.targetRotation = outputRotation
        useCases.add(videoCapture!!)
      }
      if (photo == true) {
        if (fallbackToSnapshot) {
          Log.i(TAG, "Tried to add photo use-case (`photo={true}`) but the Camera device only supports " +
            "a single use-case at a time. Falling back to Snapshot capture.")
        } else {
          Log.i(TAG, "Adding ImageCapture use-case...")
          imageCapture = imageCaptureBuilder.build()
          useCases.add(imageCapture!!)
        }
      }
      if (enableFrameProcessor) {
        Log.i(TAG, "Adding ImageAnalysis use-case...")
        imageAnalysis = imageAnalysisBuilder.build().apply {
          setAnalyzer(cameraExecutor) { image ->
            // Call JS Frame Processor
            val frame = Frame(image)
            frameProcessor?.call(frame)
            // ...frame gets closed in FrameHostObject implementation via JS ref counting
          }
        }
        useCases.add(imageAnalysis!!)
      }

      preview = previewBuilder.build()
      Log.i(TAG, "Attaching ${useCases.size} use-cases...")
      camera = cameraProvider.bindToLifecycle(this, cameraSelector, preview, *useCases.toTypedArray())
      preview!!.setSurfaceProvider(previewView.surfaceProvider)

      minZoom = camera!!.cameraInfo.zoomState.value?.minZoomRatio ?: 1f
      maxZoom = camera!!.cameraInfo.zoomState.value?.maxZoomRatio ?: 1f

      val duration = System.currentTimeMillis() - startTime
      Log.i(TAG_PERF, "Session configured in $duration ms! Camera: ${camera!!}")
      invokeOnInitialized()
    } catch (exc: Throwable) {
      Log.e(TAG, "Failed to configure session: ${exc.message}")
      throw when (exc) {
        is CameraError -> exc
        is IllegalArgumentException -> {
          if (exc.message?.contains("too many use cases") == true) {
            ParallelVideoProcessingNotSupportedError(exc)
          } else {
            InvalidCameraDeviceError(exc)
          }
        }
        else -> UnknownCameraError(exc)
      }
  private fun updateTorch() {
    CoroutineScope(Dispatchers.Default).launch {
      cameraSession.setTorchMode(torch == Torch.ON)
    }
  }
}
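For illustration, here is how a single changed prop flows through the new update() above; "zoom" is not in any reconfiguration list, so only updateZoom() runs (view is a hypothetical CameraView instance):

view.zoom = 2f
view.update(arrayListOf("zoom")) // forwards to cameraSession.setZoom() without rebuilding the session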
@@ -3,16 +3,20 @@ package com.mrousavy.camera
import com.facebook.react.bridge.ReactApplicationContext
import com.facebook.react.bridge.ReadableMap
import com.facebook.react.common.MapBuilder
import com.facebook.react.uimanager.ViewGroupManager
import com.facebook.react.uimanager.ThemedReactContext
import com.facebook.react.uimanager.ViewGroupManager
import com.facebook.react.uimanager.annotations.ReactProp
import com.mrousavy.camera.parsers.PixelFormat
import com.mrousavy.camera.parsers.Orientation
import com.mrousavy.camera.parsers.PreviewType
import com.mrousavy.camera.parsers.Torch
import com.mrousavy.camera.parsers.VideoStabilizationMode

@Suppress("unused")
class CameraViewManager(reactContext: ReactApplicationContext) : ViewGroupManager<CameraView>() {

  public override fun createViewInstance(context: ThemedReactContext): CameraView {
    val cameraViewModule = context.getNativeModule(CameraViewModule::class.java)!!
    return CameraView(context, cameraViewModule.frameProcessorThread)
    return CameraView(context)
  }

  override fun onAfterUpdateTransaction(view: CameraView) {
@@ -69,6 +73,14 @@ class CameraViewManager(reactContext: ReactApplicationContext) : ViewGroupManage
    view.enableFrameProcessor = enableFrameProcessor
  }

  @ReactProp(name = "pixelFormat")
  fun setPixelFormat(view: CameraView, pixelFormat: String?) {
    val newPixelFormat = PixelFormat.fromUnionValue(pixelFormat)
    if (view.pixelFormat != newPixelFormat)
      addChangedPropToTransaction(view, "pixelFormat")
    view.pixelFormat = newPixelFormat ?: PixelFormat.NATIVE
  }

  @ReactProp(name = "enableDepthData")
  fun setEnableDepthData(view: CameraView, enableDepthData: Boolean) {
    if (view.enableDepthData != enableDepthData)
@@ -76,6 +88,22 @@ class CameraViewManager(reactContext: ReactApplicationContext) : ViewGroupManage
    view.enableDepthData = enableDepthData
  }

  @ReactProp(name = "videoStabilizationMode")
  fun setVideoStabilizationMode(view: CameraView, videoStabilizationMode: String?) {
    val newMode = VideoStabilizationMode.fromUnionValue(videoStabilizationMode)
    if (view.videoStabilizationMode != newMode)
      addChangedPropToTransaction(view, "videoStabilizationMode")
    view.videoStabilizationMode = newMode
  }

  @ReactProp(name = "previewType")
  fun setPreviewType(view: CameraView, previewType: String) {
    val newMode = PreviewType.fromUnionValue(previewType)
    if (view.previewType != newMode)
      addChangedPropToTransaction(view, "previewType")
    view.previewType = newMode
  }

  @ReactProp(name = "enableHighQualityPhotos")
  fun setEnableHighQualityPhotos(view: CameraView, enableHighQualityPhotos: Boolean?) {
    if (view.enableHighQualityPhotos != enableHighQualityPhotos)
@@ -121,13 +149,6 @@ class CameraViewManager(reactContext: ReactApplicationContext) : ViewGroupManage
    view.lowLightBoost = lowLightBoost
  }

  @ReactProp(name = "colorSpace")
  fun setColorSpace(view: CameraView, colorSpace: String?) {
    if (view.colorSpace != colorSpace)
      addChangedPropToTransaction(view, "colorSpace")
    view.colorSpace = colorSpace
  }

  @ReactProp(name = "isActive")
  fun setIsActive(view: CameraView, isActive: Boolean) {
    if (view.isActive != isActive)
@@ -137,9 +158,10 @@ class CameraViewManager(reactContext: ReactApplicationContext) : ViewGroupManage

  @ReactProp(name = "torch")
  fun setTorch(view: CameraView, torch: String) {
    if (view.torch != torch)
    val newMode = Torch.fromUnionValue(torch)
    if (view.torch != newMode)
      addChangedPropToTransaction(view, "torch")
    view.torch = torch
    view.torch = newMode
  }

  @ReactProp(name = "zoom")
@@ -150,18 +172,12 @@ class CameraViewManager(reactContext: ReactApplicationContext) : ViewGroupManage
    view.zoom = zoomFloat
  }

  @ReactProp(name = "enableZoomGesture")
  fun setEnableZoomGesture(view: CameraView, enableZoomGesture: Boolean) {
    if (view.enableZoomGesture != enableZoomGesture)
      addChangedPropToTransaction(view, "enableZoomGesture")
    view.enableZoomGesture = enableZoomGesture
  }

  @ReactProp(name = "orientation")
  fun setOrientation(view: CameraView, orientation: String) {
    if (view.orientation != orientation)
  fun setOrientation(view: CameraView, orientation: String?) {
    val newMode = Orientation.fromUnionValue(orientation)
    if (view.orientation != newMode)
      addChangedPropToTransaction(view, "orientation")
    view.orientation = orientation
    view.orientation = newMode
  }

  companion object {
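The addChangedPropToTransaction helper these setters call lives in the companion object this hunk cuts off; a plausible sketch of its shape (assumed, not shown in the diff):

// Collect changed prop names per view; onAfterUpdateTransaction() flushes them into view.update().
private val changedPropsPerView = HashMap<Int, ArrayList<String>>()

fun addChangedPropToTransaction(view: CameraView, changedProp: String) {
  changedPropsPerView.getOrPut(view.id) { ArrayList() }.add(changedProp)
}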
@@ -6,23 +6,20 @@ import android.content.pm.PackageManager
|
||||
import android.hardware.camera2.CameraManager
|
||||
import android.os.Build
|
||||
import android.util.Log
|
||||
import androidx.camera.extensions.ExtensionsManager
|
||||
import androidx.camera.lifecycle.ProcessCameraProvider
|
||||
import androidx.core.content.ContextCompat
|
||||
import com.facebook.react.bridge.*
|
||||
import com.facebook.react.module.annotations.ReactModule
|
||||
import com.facebook.react.modules.core.PermissionAwareActivity
|
||||
import com.facebook.react.modules.core.PermissionListener
|
||||
import com.facebook.react.uimanager.UIManagerHelper
|
||||
import com.facebook.react.bridge.ReactApplicationContext
|
||||
import com.mrousavy.camera.frameprocessor.VisionCameraInstaller
|
||||
import java.util.concurrent.ExecutorService
|
||||
import com.mrousavy.camera.frameprocessor.VisionCameraProxy
|
||||
import com.mrousavy.camera.parsers.*
|
||||
import com.mrousavy.camera.utils.*
|
||||
import kotlinx.coroutines.*
|
||||
import kotlinx.coroutines.guava.await
|
||||
import java.util.concurrent.Executors
|
||||
import kotlin.coroutines.resume
|
||||
import kotlin.coroutines.resumeWithException
|
||||
import kotlin.coroutines.suspendCoroutine
|
||||
|
||||
@ReactModule(name = CameraViewModule.TAG)
|
||||
@Suppress("unused")
|
||||
@@ -32,12 +29,10 @@ class CameraViewModule(reactContext: ReactApplicationContext): ReactContextBaseJ
|
||||
var RequestCode = 10
|
||||
}
|
||||
|
||||
var frameProcessorThread: ExecutorService = Executors.newSingleThreadExecutor()
|
||||
private val coroutineScope = CoroutineScope(Dispatchers.Default) // TODO: or Dispatchers.Main?
|
||||
|
||||
override fun invalidate() {
|
||||
super.invalidate()
|
||||
frameProcessorThread.shutdown()
|
||||
if (coroutineScope.isActive) {
|
||||
coroutineScope.cancel("CameraViewModule has been destroyed.")
|
||||
}
|
||||
@@ -47,17 +42,22 @@ class CameraViewModule(reactContext: ReactApplicationContext): ReactContextBaseJ
|
||||
return TAG
|
||||
}
|
||||
|
||||
private fun findCameraView(viewId: Int): CameraView {
|
||||
Log.d(TAG, "Finding view $viewId...")
|
||||
val view = if (reactApplicationContext != null) UIManagerHelper.getUIManager(reactApplicationContext, viewId)?.resolveView(viewId) as CameraView? else null
|
||||
Log.d(TAG, if (reactApplicationContext != null) "Found view $viewId!" else "Couldn't find view $viewId!")
|
||||
return view ?: throw ViewNotFoundError(viewId)
|
||||
private suspend fun findCameraView(viewId: Int): CameraView {
|
||||
return suspendCoroutine { continuation ->
|
||||
UiThreadUtil.runOnUiThread {
|
||||
Log.d(TAG, "Finding view $viewId...")
|
||||
val view = if (reactApplicationContext != null) UIManagerHelper.getUIManager(reactApplicationContext, viewId)?.resolveView(viewId) as CameraView? else null
|
||||
Log.d(TAG, if (reactApplicationContext != null) "Found view $viewId!" else "Couldn't find view $viewId!")
|
||||
if (view != null) continuation.resume(view)
|
||||
else continuation.resumeWithException(ViewNotFoundError(viewId))
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
@ReactMethod(isBlockingSynchronousMethod = true)
|
||||
fun installFrameProcessorBindings(): Boolean {
|
||||
return try {
|
||||
val proxy = VisionCameraProxy(reactApplicationContext, frameProcessorThread)
|
||||
val proxy = VisionCameraProxy(reactApplicationContext)
|
||||
VisionCameraInstaller.install(proxy)
|
||||
true
|
||||
} catch (e: Error) {
|
||||
@@ -69,24 +69,13 @@ class CameraViewModule(reactContext: ReactApplicationContext): ReactContextBaseJ
|
||||
@ReactMethod
|
||||
fun takePhoto(viewTag: Int, options: ReadableMap, promise: Promise) {
|
||||
coroutineScope.launch {
|
||||
val view = findCameraView(viewTag)
|
||||
withPromise(promise) {
|
||||
val view = findCameraView(viewTag)
|
||||
view.takePhoto(options)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
@Suppress("unused")
|
||||
@ReactMethod
|
||||
fun takeSnapshot(viewTag: Int, options: ReadableMap, promise: Promise) {
|
||||
coroutineScope.launch {
|
||||
withPromise(promise) {
|
||||
val view = findCameraView(viewTag)
|
||||
view.takeSnapshot(options)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// TODO: startRecording() cannot be awaited, because I can't have a Promise and a onRecordedCallback in the same function. Hopefully TurboModules allows that
|
||||
@ReactMethod
|
||||
fun startRecording(viewTag: Int, options: ReadableMap, onRecordCallback: Callback) {
|
||||
@@ -98,7 +87,7 @@ class CameraViewModule(reactContext: ReactApplicationContext): ReactContextBaseJ
|
||||
val map = makeErrorMap("${error.domain}/${error.id}", error.message, error)
|
||||
onRecordCallback(null, map)
|
||||
} catch (error: Throwable) {
|
||||
val map = makeErrorMap("capture/unknown", "An unknown error occurred while trying to start a video recording!", error)
|
||||
val map = makeErrorMap("capture/unknown", "An unknown error occurred while trying to start a video recording! ${error.message}", error)
|
||||
onRecordCallback(null, map)
|
||||
}
|
||||
}
|
||||
@@ -106,36 +95,42 @@ class CameraViewModule(reactContext: ReactApplicationContext): ReactContextBaseJ
|
||||
|
||||
@ReactMethod
|
||||
fun pauseRecording(viewTag: Int, promise: Promise) {
|
||||
withPromise(promise) {
|
||||
val view = findCameraView(viewTag)
|
||||
view.pauseRecording()
|
||||
return@withPromise null
|
||||
coroutineScope.launch {
|
||||
withPromise(promise) {
|
||||
val view = findCameraView(viewTag)
|
||||
view.pauseRecording()
|
||||
return@withPromise null
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
@ReactMethod
|
||||
fun resumeRecording(viewTag: Int, promise: Promise) {
|
||||
withPromise(promise) {
|
||||
coroutineScope.launch {
|
||||
val view = findCameraView(viewTag)
|
||||
view.resumeRecording()
|
||||
return@withPromise null
|
||||
withPromise(promise) {
|
||||
view.resumeRecording()
|
||||
return@withPromise null
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
@ReactMethod
|
||||
fun stopRecording(viewTag: Int, promise: Promise) {
|
||||
withPromise(promise) {
|
||||
coroutineScope.launch {
|
||||
val view = findCameraView(viewTag)
|
||||
view.stopRecording()
|
||||
return@withPromise null
|
||||
withPromise(promise) {
|
||||
view.stopRecording()
|
||||
return@withPromise null
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
@ReactMethod
|
||||
fun focus(viewTag: Int, point: ReadableMap, promise: Promise) {
|
||||
coroutineScope.launch {
|
||||
val view = findCameraView(viewTag)
|
||||
withPromise(promise) {
|
||||
val view = findCameraView(viewTag)
|
||||
view.focus(point)
|
||||
return@withPromise null
|
||||
}
|
||||
@@ -146,13 +141,11 @@ class CameraViewModule(reactContext: ReactApplicationContext): ReactContextBaseJ
|
||||
fun getAvailableCameraDevices(promise: Promise) {
|
||||
coroutineScope.launch {
|
||||
withPromise(promise) {
|
||||
val cameraProvider = ProcessCameraProvider.getInstance(reactApplicationContext).await()
|
||||
val extensionsManager = ExtensionsManager.getInstanceAsync(reactApplicationContext, cameraProvider).await()
|
||||
val manager = reactApplicationContext.getSystemService(Context.CAMERA_SERVICE) as CameraManager
|
||||
|
||||
val devices = Arguments.createArray()
|
||||
manager.cameraIdList.forEach { cameraId ->
|
||||
val device = CameraDevice(manager, extensionsManager, cameraId)
|
||||
val device = CameraDeviceDetails(manager, cameraId)
|
||||
devices.pushMap(device.toMap())
|
||||
}
|
||||
promise.resolve(devices)
|
||||
@@ -160,23 +153,36 @@ class CameraViewModule(reactContext: ReactApplicationContext): ReactContextBaseJ
     }
   }
 
+  private fun canRequestPermission(permission: String): Boolean {
+    val activity = currentActivity as? PermissionAwareActivity
+    return activity?.shouldShowRequestPermissionRationale(permission) ?: false
+  }
+
   @ReactMethod
   fun getCameraPermissionStatus(promise: Promise) {
     val status = ContextCompat.checkSelfPermission(reactApplicationContext, Manifest.permission.CAMERA)
-    promise.resolve(parsePermissionStatus(status))
+    var parsed = PermissionStatus.fromPermissionStatus(status)
+    if (parsed == PermissionStatus.DENIED && canRequestPermission(Manifest.permission.CAMERA)) {
+      parsed = PermissionStatus.NOT_DETERMINED
+    }
+    promise.resolve(parsed.unionValue)
   }
 
   @ReactMethod
   fun getMicrophonePermissionStatus(promise: Promise) {
     val status = ContextCompat.checkSelfPermission(reactApplicationContext, Manifest.permission.RECORD_AUDIO)
-    promise.resolve(parsePermissionStatus(status))
+    var parsed = PermissionStatus.fromPermissionStatus(status)
+    if (parsed == PermissionStatus.DENIED && canRequestPermission(Manifest.permission.RECORD_AUDIO)) {
+      parsed = PermissionStatus.NOT_DETERMINED
+    }
+    promise.resolve(parsed.unionValue)
   }
 
   @ReactMethod
   fun requestCameraPermission(promise: Promise) {
     if (Build.VERSION.SDK_INT < Build.VERSION_CODES.M) {
       // API 21 and below always grants permission on app install
-      return promise.resolve("authorized")
+      return promise.resolve(PermissionStatus.GRANTED.unionValue)
     }
 
     val activity = reactApplicationContext.currentActivity
@@ -185,7 +191,8 @@ class CameraViewModule(reactContext: ReactApplicationContext): ReactContextBaseJ
     val listener = PermissionListener { requestCode: Int, _: Array<String>, grantResults: IntArray ->
       if (requestCode == currentRequestCode) {
         val permissionStatus = if (grantResults.isNotEmpty()) grantResults[0] else PackageManager.PERMISSION_DENIED
-        promise.resolve(parsePermissionStatus(permissionStatus))
+        val parsed = PermissionStatus.fromPermissionStatus(permissionStatus)
+        promise.resolve(parsed.unionValue)
         return@PermissionListener true
       }
       return@PermissionListener false
@@ -200,7 +207,7 @@ class CameraViewModule(reactContext: ReactApplicationContext): ReactContextBaseJ
   fun requestMicrophonePermission(promise: Promise) {
     if (Build.VERSION.SDK_INT < Build.VERSION_CODES.M) {
       // API 21 and below always grants permission on app install
-      return promise.resolve("authorized")
+      return promise.resolve(PermissionStatus.GRANTED.unionValue)
     }
 
     val activity = reactApplicationContext.currentActivity
@@ -209,7 +216,8 @@ class CameraViewModule(reactContext: ReactApplicationContext): ReactContextBaseJ
     val listener = PermissionListener { requestCode: Int, _: Array<String>, grantResults: IntArray ->
       if (requestCode == currentRequestCode) {
         val permissionStatus = if (grantResults.isNotEmpty()) grantResults[0] else PackageManager.PERMISSION_DENIED
-        promise.resolve(parsePermissionStatus(permissionStatus))
+        val parsed = PermissionStatus.fromPermissionStatus(permissionStatus)
+        promise.resolve(parsed.unionValue)
         return@PermissionListener true
       }
       return@PermissionListener false
@@ -1,7 +1,7 @@
 package com.mrousavy.camera
 
 import android.graphics.ImageFormat
-import androidx.camera.video.VideoRecordEvent.Finalize.VideoRecordError
+import com.mrousavy.camera.parsers.CameraDeviceError
 import com.mrousavy.camera.utils.outputs.CameraOutputs
 
 abstract class CameraError(
   /**
@@ -37,16 +37,14 @@ class CameraPermissionError : CameraError("permission", "camera-permission-denie
 class InvalidTypeScriptUnionError(unionName: String, unionValue: String) : CameraError("parameter", "invalid-parameter", "The given value for $unionName could not be parsed! (Received: $unionValue)")
 
 class NoCameraDeviceError : CameraError("device", "no-device", "No device was set! Use `getAvailableCameraDevices()` to select a suitable Camera device.")
 class InvalidCameraDeviceError(cause: Throwable) : CameraError("device", "invalid-device", "The given Camera device could not be found for use-case binding!", cause)
-class ParallelVideoProcessingNotSupportedError(cause: Throwable) : CameraError("device", "parallel-video-processing-not-supported", "The given LEGACY Camera device does not support parallel " +
-  "video processing (`video={true}` + `frameProcessor={...}`). Disable either `video` or `frameProcessor`. To find out if a device supports parallel video processing, check the `supportsParallelVideoProcessing` property on the CameraDevice. " +
-  "See https://react-native-vision-camera.com/docs/guides/devices#the-supportsparallelvideoprocessing-prop for more information.", cause)
 class NoFlashAvailableError : CameraError("device", "flash-unavailable", "The Camera Device does not have a flash unit! Make sure you select a device where `hasFlash`/`hasTorch` is true!")
+class PixelFormatNotSupportedError(format: String) : CameraError("device", "pixel-format-not-supported", "The pixelFormat $format is not supported on the given Camera Device!")
 
-class FpsNotContainedInFormatError(fps: Int) : CameraError("format", "invalid-fps", "The given FPS were not valid for the currently selected format. Make sure you select a format which `frameRateRanges` includes $fps FPS!")
+class FpsNotContainedInFormatError(fps: Int) : CameraError("format", "invalid-fps", "The given format cannot run at $fps FPS! Make sure your FPS is lower than `format.maxFps` but higher than `format.minFps`.")
 class HdrNotContainedInFormatError : CameraError(
   "format", "invalid-hdr",
   "The currently selected format does not support HDR capture! " +
-  "Make sure you select a format which `frameRateRanges` includes `supportsPhotoHDR`!"
+  "Make sure you select a format which includes `supportsPhotoHDR`!"
 )
 class LowLightBoostNotContainedInFormatError : CameraError(
   "format", "invalid-low-light-boost",
@@ -55,11 +53,14 @@ class LowLightBoostNotContainedInFormatError : CameraError(
 )
 
 class CameraNotReadyError : CameraError("session", "camera-not-ready", "The Camera is not ready yet! Wait for the onInitialized() callback!")
+class CameraCannotBeOpenedError(cameraId: String, error: CameraDeviceError) : CameraError("session", "camera-cannot-be-opened", "The given Camera device (id: $cameraId) could not be opened! Error: $error")
+class CameraSessionCannotBeConfiguredError(cameraId: String, outputs: CameraOutputs) : CameraError("session", "cannot-create-session", "Failed to create a Camera Session for Camera $cameraId! Outputs: $outputs")
+class CameraDisconnectedError(cameraId: String, error: CameraDeviceError) : CameraError("session", "camera-has-been-disconnected", "The given Camera device (id: $cameraId) has been disconnected! Error: $error")
 
 class VideoNotEnabledError : CameraError("capture", "video-not-enabled", "Video capture is disabled! Pass `video={true}` to enable video recordings.")
 class PhotoNotEnabledError : CameraError("capture", "photo-not-enabled", "Photo capture is disabled! Pass `photo={true}` to enable photo capture.")
 
+class InvalidFormatError(format: Int) : CameraError("capture", "invalid-photo-format", "The Photo has an invalid format! Expected ${ImageFormat.YUV_420_888}, actual: $format")
 class CaptureAbortedError(wasImageCaptured: Boolean) : CameraError("capture", "aborted", "The image capture was aborted! Was Image captured: $wasImageCaptured")
 class UnknownCaptureError(wasImageCaptured: Boolean) : CameraError("capture", "unknown", "An unknown error occurred while trying to capture an Image! Was Image captured: $wasImageCaptured")
 
 class VideoEncoderError(cause: Throwable?) : CameraError("capture", "encoder-error", "The recording failed while encoding.\n" +
   "This error may be generated when the video or audio codec encounters an error during encoding. " +
@@ -104,8 +105,10 @@ class FileSizeLimitReachedError(cause: Throwable?) : CameraError("capture", "fil
   "The file size limitation will refer to OutputOptions.getFileSizeLimit(). The output file will still be generated with this error.",
   cause)
 
-class NoRecordingInProgressError : CameraError("capture", "no-recording-in-progress", "No active recording in progress!")
+class NoRecordingInProgressError : CameraError("capture", "no-recording-in-progress", "There was no active video recording in progress! Did you call stopRecording() twice?")
+class RecordingInProgressError : CameraError("capture", "recording-in-progress", "There is already an active video recording in progress! Did you call startRecording() twice?")
 
 class ViewNotFoundError(viewId: Int) : CameraError("system", "view-not-found", "The given view (ID $viewId) was not found in the view manager.")
 
 class UnknownCameraError(cause: Throwable?) : CameraError("unknown", "unknown", cause?.message ?: "An unknown camera error occured.", cause)
@@ -0,0 +1,75 @@
+package com.mrousavy.camera
+
+import android.annotation.SuppressLint
+import android.content.Context
+import android.hardware.camera2.CameraManager
+import android.util.Log
+import android.util.Size
+import android.view.Surface
+import android.view.SurfaceHolder
+import android.view.SurfaceView
+import com.mrousavy.camera.extensions.getPreviewSize
+import kotlin.math.roundToInt
+
+/**
+ * A [SurfaceView] that can be adjusted to a specified aspect ratio and
+ * performs center-crop transformation of input frames.
+ */
+@SuppressLint("ViewConstructor")
+class NativePreviewView(context: Context,
+                        cameraManager: CameraManager,
+                        cameraId: String,
+                        private val onSurfaceChanged: (surface: Surface?) -> Unit): SurfaceView(context) {
+  private val targetSize: Size
+  private val aspectRatio: Float
+    get() = targetSize.width.toFloat() / targetSize.height.toFloat()
+
+  init {
+    val characteristics = cameraManager.getCameraCharacteristics(cameraId)
+    targetSize = characteristics.getPreviewSize()
+
+    Log.i(TAG, "Using Preview Size ${targetSize.width} x ${targetSize.height}.")
+    holder.setFixedSize(targetSize.width, targetSize.height)
+    holder.addCallback(object: SurfaceHolder.Callback {
+      override fun surfaceCreated(holder: SurfaceHolder) {
+        Log.i(TAG, "Surface created! ${holder.surface}")
+        onSurfaceChanged(holder.surface)
+      }
+
+      override fun surfaceChanged(holder: SurfaceHolder, format: Int, width: Int, height: Int) {
+        Log.i(TAG, "Surface resized! ${holder.surface} ($width x $height in format #$format)")
+      }
+
+      override fun surfaceDestroyed(holder: SurfaceHolder) {
+        Log.i(TAG, "Surface destroyed! ${holder.surface}")
+        onSurfaceChanged(null)
+      }
+    })
+  }
+
+  override fun onMeasure(widthMeasureSpec: Int, heightMeasureSpec: Int) {
+    super.onMeasure(widthMeasureSpec, heightMeasureSpec)
+    val width = MeasureSpec.getSize(widthMeasureSpec)
+    val height = MeasureSpec.getSize(heightMeasureSpec)
+    Log.d(TAG, "onMeasure($width, $height)")
+
+    // Performs center-crop transformation of the camera frames
+    val newWidth: Int
+    val newHeight: Int
+    val actualRatio = if (width > height) aspectRatio else 1f / aspectRatio
+    if (width < height * actualRatio) {
+      newHeight = height
+      newWidth = (height * actualRatio).roundToInt()
+    } else {
+      newWidth = width
+      newHeight = (width / actualRatio).roundToInt()
+    }
+
+    Log.d(TAG, "Measured dimensions set: $newWidth x $newHeight")
+    setMeasuredDimension(newWidth, newHeight)
+  }
+
+  companion object {
+    private const val TAG = "NativePreviewView"
+  }
+}
@@ -0,0 +1,27 @@
+package com.mrousavy.camera.extensions
+
+import android.media.CamcorderProfile
+import android.util.Size
+
+private val qualitiesMap = mapOf(
+  Size(176 - 1, 144 - 1) to CamcorderProfile.QUALITY_LOW,
+  Size(176, 144) to CamcorderProfile.QUALITY_QCIF,
+  Size(320, 240) to CamcorderProfile.QUALITY_QVGA,
+  Size(352, 288) to CamcorderProfile.QUALITY_CIF,
+  Size(640, 480) to CamcorderProfile.QUALITY_VGA,
+  Size(720, 480) to CamcorderProfile.QUALITY_480P,
+  Size(1280, 720) to CamcorderProfile.QUALITY_720P,
+  Size(1920, 1080) to CamcorderProfile.QUALITY_1080P,
+  Size(2048, 1080) to CamcorderProfile.QUALITY_2K,
+  Size(2560, 1440) to CamcorderProfile.QUALITY_QHD,
+  Size(3840, 2160) to CamcorderProfile.QUALITY_2160P,
+  Size(4096, 2160) to CamcorderProfile.QUALITY_4KDCI,
+  Size(7680, 4320) to CamcorderProfile.QUALITY_8KUHD,
+  Size(7680 + 1, 4320 + 1) to CamcorderProfile.QUALITY_HIGH,
+)
+
+fun getCamcorderQualityForSize(size: Size): Int {
+  // Find closest match
+  val closestMatch = qualitiesMap.keys.closestTo(size)
+  return qualitiesMap[closestMatch] ?: CamcorderProfile.QUALITY_HIGH
+}
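
For illustration, a minimal sketch of how this closest-match lookup behaves (hypothetical call site, not part of the diff; the camera ID is an assumption):

  // 1900x1070 is closest to the 1920x1080 entry, so QUALITY_1080P is returned.
  val quality = getCamcorderQualityForSize(Size(1900, 1070))
  val profile = CamcorderProfile.get(0, quality) // 0 = assumed back-camera ID
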
@@ -0,0 +1,42 @@
+package com.mrousavy.camera.extensions
+
+import android.hardware.camera2.CameraCaptureSession
+import android.hardware.camera2.CaptureFailure
+import android.hardware.camera2.CaptureRequest
+import android.hardware.camera2.TotalCaptureResult
+import com.mrousavy.camera.CameraQueues
+import com.mrousavy.camera.CaptureAbortedError
+import com.mrousavy.camera.UnknownCaptureError
+import kotlin.coroutines.resume
+import kotlin.coroutines.resumeWithException
+import kotlin.coroutines.suspendCoroutine
+
+suspend fun CameraCaptureSession.capture(captureRequest: CaptureRequest): TotalCaptureResult {
+  return suspendCoroutine { continuation ->
+    this.capture(captureRequest, object: CameraCaptureSession.CaptureCallback() {
+      override fun onCaptureCompleted(
+        session: CameraCaptureSession,
+        request: CaptureRequest,
+        result: TotalCaptureResult
+      ) {
+        super.onCaptureCompleted(session, request, result)
+        continuation.resume(result)
+      }
+
+      override fun onCaptureFailed(
+        session: CameraCaptureSession,
+        request: CaptureRequest,
+        failure: CaptureFailure
+      ) {
+        super.onCaptureFailed(session, request, failure)
+        val wasImageCaptured = failure.wasImageCaptured()
+        val error = when (failure.reason) {
+          CaptureFailure.REASON_ERROR -> UnknownCaptureError(wasImageCaptured)
+          CaptureFailure.REASON_FLUSHED -> CaptureAbortedError(wasImageCaptured)
+          else -> UnknownCaptureError(wasImageCaptured)
+        }
+        continuation.resumeWithException(error)
+      }
+    }, CameraQueues.cameraQueue.handler)
+  }
+}
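
For illustration, a minimal sketch of calling this suspending wrapper (hypothetical session and request, error handling elided):

  // Inside a coroutine: suspends until onCaptureCompleted or onCaptureFailed fires.
  val result: TotalCaptureResult = session.capture(captureRequest)
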
@@ -0,0 +1,72 @@
+package com.mrousavy.camera.extensions
+
+import android.content.res.Resources
+import android.graphics.ImageFormat
+import android.hardware.camera2.CameraCharacteristics
+import android.hardware.camera2.params.StreamConfigurationMap
+import android.media.CamcorderProfile
+import android.os.Build
+import android.util.Log
+import android.util.Size
+import android.view.SurfaceHolder
+import android.view.SurfaceView
+
+private fun getMaximumPreviewSize(): Size {
+  // See https://developer.android.com/reference/android/hardware/camera2/params/StreamConfigurationMap
+  // According to the Android Developer documentation, PREVIEW streams can have a resolution
+  // of up to the phone's display's resolution, with a maximum of 1920x1080.
+  val display1080p = Size(1920, 1080)
+  val displaySize = Size(Resources.getSystem().displayMetrics.widthPixels, Resources.getSystem().displayMetrics.heightPixels)
+  val isHighResScreen = displaySize.bigger >= display1080p.bigger || displaySize.smaller >= display1080p.smaller
+  Log.i("PreviewSize", "Phone has a ${displaySize.width} x ${displaySize.height} screen.")
+  return if (isHighResScreen) display1080p else displaySize
+}
+
+/**
+ * Gets the maximum Preview Resolution this device is capable of streaming at. (For [SurfaceView])
+ */
+fun CameraCharacteristics.getPreviewSize(): Size {
+  val config = this.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP)!!
+  val previewSize = getMaximumPreviewSize()
+  val outputSizes = config.getOutputSizes(SurfaceHolder::class.java).sortedByDescending { it.width * it.height }
+  return outputSizes.first { it.bigger <= previewSize.bigger && it.smaller <= previewSize.smaller }
+}
+
+private fun getMaximumVideoSize(cameraId: String): Size? {
+  if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.S) {
+    val profiles = CamcorderProfile.getAll(cameraId, CamcorderProfile.QUALITY_HIGH)
+    if (profiles != null) {
+      val largestProfile = profiles.videoProfiles.maxBy { it.width * it.height }
+      return Size(largestProfile.width, largestProfile.height)
+    }
+  }
+
+  val cameraIdInt = cameraId.toIntOrNull()
+  if (cameraIdInt != null) {
+    val profile = CamcorderProfile.get(cameraIdInt, CamcorderProfile.QUALITY_HIGH)
+    return Size(profile.videoFrameWidth, profile.videoFrameHeight)
+  }
+
+  return null
+}
+
+fun CameraCharacteristics.getVideoSizes(cameraId: String, format: Int): List<Size> {
+  val config = this.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP)!!
+  val sizes = config.getOutputSizes(format) ?: emptyArray()
+  val maxVideoSize = getMaximumVideoSize(cameraId)
+  if (maxVideoSize != null) {
+    return sizes.filter { it.bigger <= maxVideoSize.bigger }
+  }
+  return sizes.toList()
+}
+
+fun CameraCharacteristics.getPhotoSizes(format: Int): List<Size> {
+  val config = this.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP)!!
+  val sizes = config.getOutputSizes(format) ?: emptyArray()
+  val highResSizes = if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.M) {
+    config.getHighResolutionOutputSizes(format)
+  } else {
+    null
+  } ?: emptyArray()
+  return sizes.plus(highResSizes).toList()
+}
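
For illustration, a minimal sketch of querying these helpers (hypothetical camera ID "0"; assumes a CameraManager is in scope):

  val characteristics = cameraManager.getCameraCharacteristics("0")
  val preview = characteristics.getPreviewSize()                            // capped at display size / 1080p
  val videoSizes = characteristics.getVideoSizes("0", ImageFormat.PRIVATE)  // clamped by CamcorderProfile
  val photoSizes = characteristics.getPhotoSizes(ImageFormat.JPEG)          // includes high-res output sizes
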
@@ -0,0 +1,99 @@
+package com.mrousavy.camera.extensions
+
+import android.hardware.camera2.CameraCaptureSession
+import android.hardware.camera2.CameraCharacteristics
+import android.hardware.camera2.CameraDevice
+import android.hardware.camera2.CameraManager
+import android.hardware.camera2.params.OutputConfiguration
+import android.hardware.camera2.params.SessionConfiguration
+import android.os.Build
+import android.util.Log
+import android.view.Surface
+import androidx.annotation.RequiresApi
+import com.mrousavy.camera.CameraQueues
+import com.mrousavy.camera.CameraSessionCannotBeConfiguredError
+import com.mrousavy.camera.utils.outputs.CameraOutputs
+import kotlinx.coroutines.suspendCancellableCoroutine
+import kotlin.coroutines.resume
+import kotlin.coroutines.resumeWithException
+
+enum class SessionType {
+  REGULAR,
+  HIGH_SPEED;
+
+  @RequiresApi(Build.VERSION_CODES.P)
+  fun toSessionType(): Int {
+    return when(this) {
+      REGULAR -> SessionConfiguration.SESSION_REGULAR
+      HIGH_SPEED -> SessionConfiguration.SESSION_HIGH_SPEED
+    }
+  }
+}
+
+private const val TAG = "CreateCaptureSession"
+private var sessionId = 1000
+
+suspend fun CameraDevice.createCaptureSession(cameraManager: CameraManager,
+                                              sessionType: SessionType,
+                                              outputs: CameraOutputs,
+                                              onClosed: (session: CameraCaptureSession) -> Unit,
+                                              queue: CameraQueues.CameraQueue): CameraCaptureSession {
+  return suspendCancellableCoroutine { continuation ->
+    val characteristics = cameraManager.getCameraCharacteristics(id)
+    val hardwareLevel = characteristics.get(CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL)!!
+    val sessionId = sessionId++
+    Log.i(TAG, "Camera $id: Creating Capture Session #$sessionId... " +
+      "Hardware Level: $hardwareLevel | Outputs: $outputs")
+
+    val callback = object: CameraCaptureSession.StateCallback() {
+      override fun onConfigured(session: CameraCaptureSession) {
+        Log.i(TAG, "Camera $id: Capture Session #$sessionId configured!")
+        continuation.resume(session)
+      }
+
+      override fun onConfigureFailed(session: CameraCaptureSession) {
+        Log.e(TAG, "Camera $id: Failed to configure Capture Session #$sessionId!")
+        continuation.resumeWithException(CameraSessionCannotBeConfiguredError(id, outputs))
+      }
+
+      override fun onClosed(session: CameraCaptureSession) {
+        super.onClosed(session)
+        Log.i(TAG, "Camera $id: Capture Session #$sessionId closed!")
+        onClosed(session)
+      }
+    }
+
+    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.N) {
+      // API >= 24
+      val outputConfigurations = arrayListOf<OutputConfiguration>()
+      outputs.previewOutput?.let { output ->
+        outputConfigurations.add(output.toOutputConfiguration(characteristics))
+      }
+      outputs.photoOutput?.let { output ->
+        outputConfigurations.add(output.toOutputConfiguration(characteristics))
+      }
+      outputs.videoOutput?.let { output ->
+        outputConfigurations.add(output.toOutputConfiguration(characteristics))
+      }
+
+      if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.P) {
+        // API >= 28
+        Log.i(TAG, "Using new API (>=28)")
+        val config = SessionConfiguration(sessionType.toSessionType(), outputConfigurations, queue.executor, callback)
+        this.createCaptureSession(config)
+      } else {
+        // API >= 24
+        Log.i(TAG, "Using legacy API (<28)")
+        this.createCaptureSessionByOutputConfigurations(outputConfigurations, callback, queue.handler)
+      }
+    } else {
+      // API < 24
+      Log.i(TAG, "Using legacy API (<24)")
+      val surfaces = arrayListOf<Surface>()
+      outputs.previewOutput?.let { surfaces.add(it.surface) }
+      outputs.photoOutput?.let { surfaces.add(it.surface) }
+      outputs.videoOutput?.let { surfaces.add(it.surface) }
+      this.createCaptureSession(surfaces, callback, queue.handler)
+    }
+  }
+}
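
For illustration, a minimal call-site sketch (assumes an already-opened CameraDevice and configured CameraOutputs):

  val session = device.createCaptureSession(cameraManager, SessionType.REGULAR, outputs,
    onClosed = { /* rebuild the session here if outputs changed */ },
    queue = CameraQueues.cameraQueue)
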
@@ -0,0 +1,97 @@
+package com.mrousavy.camera.extensions
+
+import android.hardware.camera2.CameraCharacteristics
+import android.hardware.camera2.CameraDevice
+import android.hardware.camera2.CameraManager
+import android.hardware.cam2.CaptureRequest
+import android.os.Build
+import android.view.Surface
+import com.mrousavy.camera.parsers.Flash
+import com.mrousavy.camera.parsers.Orientation
+import com.mrousavy.camera.parsers.QualityPrioritization
+
+private fun supportsSnapshotCapture(cameraCharacteristics: CameraCharacteristics): Boolean {
+  // As per CameraDevice.TEMPLATE_VIDEO_SNAPSHOT in documentation:
+  val hardwareLevel = cameraCharacteristics.get(CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL)!!
+  if (hardwareLevel == CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL_LEGACY) return false
+
+  val capabilities = cameraCharacteristics.get(CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES)!!
+  val hasDepth = capabilities.contains(CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES_DEPTH_OUTPUT)
+  val isBackwardsCompatible = capabilities.contains(CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES_BACKWARD_COMPATIBLE)
+  if (hasDepth && !isBackwardsCompatible) return false
+
+  return true
+}
+
+fun CameraDevice.createPhotoCaptureRequest(cameraManager: CameraManager,
+                                           surface: Surface,
+                                           zoom: Float,
+                                           qualityPrioritization: QualityPrioritization,
+                                           flashMode: Flash,
+                                           enableRedEyeReduction: Boolean,
+                                           enableAutoStabilization: Boolean,
+                                           orientation: Orientation): CaptureRequest {
+  val cameraCharacteristics = cameraManager.getCameraCharacteristics(this.id)
+
+  val template = if (qualityPrioritization == QualityPrioritization.SPEED && supportsSnapshotCapture(cameraCharacteristics)) {
+    CameraDevice.TEMPLATE_VIDEO_SNAPSHOT
+  } else {
+    CameraDevice.TEMPLATE_STILL_CAPTURE
+  }
+  val captureRequest = this.createCaptureRequest(template)
+
+  // TODO: Maybe we can even expose that prop directly?
+  val jpegQuality = when (qualityPrioritization) {
+    QualityPrioritization.SPEED -> 85
+    QualityPrioritization.BALANCED -> 92
+    QualityPrioritization.QUALITY -> 100
+  }
+  captureRequest[CaptureRequest.JPEG_QUALITY] = jpegQuality.toByte()
+
+  captureRequest.set(CaptureRequest.JPEG_ORIENTATION, orientation.toDegrees())
+
+  // Set the Flash Mode
+  when (flashMode) {
+    Flash.OFF -> {
+      captureRequest[CaptureRequest.CONTROL_AE_MODE] = CaptureRequest.CONTROL_AE_MODE_ON
+    }
+    Flash.ON -> {
+      captureRequest[CaptureRequest.CONTROL_AE_MODE] = CaptureRequest.CONTROL_AE_MODE_ON_ALWAYS_FLASH
+    }
+    Flash.AUTO -> {
+      if (enableRedEyeReduction) {
+        captureRequest[CaptureRequest.CONTROL_AE_MODE] = CaptureRequest.CONTROL_AE_MODE_ON_AUTO_FLASH_REDEYE
+      } else {
+        captureRequest[CaptureRequest.CONTROL_AE_MODE] = CaptureRequest.CONTROL_AE_MODE_ON_AUTO_FLASH
+      }
+    }
+  }
+
+  if (enableAutoStabilization) {
+    // Enable optical or digital image stabilization
+    val digitalStabilization = cameraCharacteristics.get(CameraCharacteristics.CONTROL_AVAILABLE_VIDEO_STABILIZATION_MODES)
+    val hasDigitalStabilization = digitalStabilization?.contains(CameraCharacteristics.CONTROL_VIDEO_STABILIZATION_MODE_ON) ?: false
+
+    val opticalStabilization = cameraCharacteristics.get(CameraCharacteristics.LENS_INFO_AVAILABLE_OPTICAL_STABILIZATION)
+    val hasOpticalStabilization = opticalStabilization?.contains(CameraCharacteristics.LENS_OPTICAL_STABILIZATION_MODE_ON) ?: false
+    if (hasOpticalStabilization) {
+      captureRequest[CaptureRequest.CONTROL_VIDEO_STABILIZATION_MODE] = CaptureRequest.CONTROL_VIDEO_STABILIZATION_MODE_OFF
+      captureRequest[CaptureRequest.LENS_OPTICAL_STABILIZATION_MODE] = CaptureRequest.LENS_OPTICAL_STABILIZATION_MODE_ON
+    } else if (hasDigitalStabilization) {
+      captureRequest[CaptureRequest.CONTROL_VIDEO_STABILIZATION_MODE] = CaptureRequest.CONTROL_VIDEO_STABILIZATION_MODE_ON
+    } else {
+      // no stabilization is supported. ignore it
+    }
+  }
+
+  if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.R) {
+    captureRequest[CaptureRequest.CONTROL_ZOOM_RATIO] = zoom
+  } else {
+    val size = cameraCharacteristics.get(CameraCharacteristics.SENSOR_INFO_ACTIVE_ARRAY_SIZE)!!
+    captureRequest.set(CaptureRequest.SCALER_CROP_REGION, size.zoomed(zoom))
+  }
+
+  captureRequest.addTarget(surface)
+
+  return captureRequest.build()
+}
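
For illustration, a minimal sketch tying the request builder to a capture (hypothetical values; photoSurface and session are assumed to exist):

  val request = device.createPhotoCaptureRequest(cameraManager, photoSurface,
    zoom = 1f, qualityPrioritization = QualityPrioritization.BALANCED,
    flashMode = Flash.AUTO, enableRedEyeReduction = false,
    enableAutoStabilization = true, orientation = Orientation.PORTRAIT)
  val result = session.capture(request)
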
@@ -0,0 +1,68 @@
+package com.mrousavy.camera.extensions
+
+import android.annotation.SuppressLint
+import android.hardware.camera2.CameraDevice
+import android.hardware.camera2.CameraManager
+import android.os.Build
+import android.util.Log
+import com.mrousavy.camera.CameraCannotBeOpenedError
+import com.mrousavy.camera.CameraDisconnectedError
+import com.mrousavy.camera.CameraQueues
+import com.mrousavy.camera.parsers.CameraDeviceError
+import kotlinx.coroutines.suspendCancellableCoroutine
+import kotlin.coroutines.resume
+import kotlin.coroutines.resumeWithException
+
+private const val TAG = "CameraManager"
+
+@SuppressLint("MissingPermission")
+suspend fun CameraManager.openCamera(cameraId: String,
+                                     onDisconnected: (camera: CameraDevice, reason: Throwable) -> Unit,
+                                     queue: CameraQueues.CameraQueue): CameraDevice {
+  return suspendCancellableCoroutine { continuation ->
+    Log.i(TAG, "Camera $cameraId: Opening...")
+
+    val callback = object: CameraDevice.StateCallback() {
+      override fun onOpened(camera: CameraDevice) {
+        Log.i(TAG, "Camera $cameraId: Opened!")
+        continuation.resume(camera)
+      }
+
+      override fun onDisconnected(camera: CameraDevice) {
+        Log.i(TAG, "Camera $cameraId: Disconnected!")
+        if (continuation.isActive) {
+          continuation.resumeWithException(CameraCannotBeOpenedError(cameraId, CameraDeviceError.DISCONNECTED))
+        } else {
+          onDisconnected(camera, CameraDisconnectedError(cameraId, CameraDeviceError.DISCONNECTED))
+        }
+        camera.tryClose()
+      }
+
+      override fun onError(camera: CameraDevice, errorCode: Int) {
+        Log.e(TAG, "Camera $cameraId: Error! $errorCode")
+        val error = CameraDeviceError.fromCameraDeviceError(errorCode)
+        if (continuation.isActive) {
+          continuation.resumeWithException(CameraCannotBeOpenedError(cameraId, error))
+        } else {
+          onDisconnected(camera, CameraDisconnectedError(cameraId, error))
+        }
+        camera.tryClose()
+      }
+    }
+
+    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.P) {
+      this.openCamera(cameraId, queue.executor, callback)
+    } else {
+      this.openCamera(cameraId, callback, queue.handler)
+    }
+  }
+}
+
+fun CameraDevice.tryClose() {
+  try {
+    Log.i(TAG, "Camera $id: Closing...")
+    this.close()
+  } catch (e: Throwable) {
+    Log.e(TAG, "Camera $id: Failed to close!", e)
+  }
+}
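
For illustration, a minimal sketch of opening a device with this wrapper (hypothetical camera ID "0"; inside a coroutine):

  val device = cameraManager.openCamera("0",
    onDisconnected = { camera, reason -> camera.tryClose() },
    queue = CameraQueues.cameraQueue)
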
@@ -1,4 +1,4 @@
-package com.mrousavy.camera.utils
+package com.mrousavy.camera.extensions
 
 import android.content.Context
 import android.os.Build
@@ -0,0 +1,21 @@
+package com.mrousavy.camera.extensions
+
+import android.os.Handler
+import java.util.concurrent.Semaphore
+
+/**
+ * Posts a Message to this Handler and blocks the calling Thread until the Handler finished executing the given job.
+ */
+fun Handler.postAndWait(job: () -> Unit) {
+  val semaphore = Semaphore(0)
+
+  this.post {
+    try {
+      job()
+    } finally {
+      semaphore.release()
+    }
+  }
+
+  semaphore.acquire()
+}
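
For illustration, a minimal usage sketch (assumes a background Handler is in scope; note that the calling thread blocks, so calling this from the Handler's own thread would deadlock):

  CameraQueues.cameraQueue.handler.postAndWait {
    // runs on the camera thread; the caller resumes once this returns
  }
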
@@ -1,4 +1,4 @@
-package com.mrousavy.camera.utils
+package com.mrousavy.camera.extensions
 
 fun <T> List<T>.containsAny(elements: List<T>): Boolean {
   return elements.any { element -> this.contains(element) }
@@ -0,0 +1,38 @@
+package com.mrousavy.camera.extensions
+
+import android.hardware.camera2.params.DynamicRangeProfiles
+import android.media.MediaCodecInfo
+import android.media.MediaFormat
+import android.os.Build
+import android.util.Log
+import androidx.annotation.RequiresApi
+
+@RequiresApi(Build.VERSION_CODES.N)
+private fun getTransferFunction(codecProfile: Int) = when (codecProfile) {
+  MediaCodecInfo.CodecProfileLevel.HEVCProfileMain10 -> MediaFormat.COLOR_TRANSFER_HLG
+  MediaCodecInfo.CodecProfileLevel.HEVCProfileMain10HDR10 -> MediaFormat.COLOR_TRANSFER_ST2084
+  MediaCodecInfo.CodecProfileLevel.HEVCProfileMain10HDR10Plus -> MediaFormat.COLOR_TRANSFER_ST2084
+  else -> MediaFormat.COLOR_TRANSFER_SDR_VIDEO
+}
+
+fun MediaFormat.setDynamicRangeProfile(dynamicRangeProfile: Long) {
+  val profile = when (dynamicRangeProfile) {
+    DynamicRangeProfiles.HLG10 -> MediaCodecInfo.CodecProfileLevel.HEVCProfileMain10
+    DynamicRangeProfiles.HDR10 -> MediaCodecInfo.CodecProfileLevel.HEVCProfileMain10HDR10
+    DynamicRangeProfiles.HDR10_PLUS -> MediaCodecInfo.CodecProfileLevel.HEVCProfileMain10HDR10Plus
+    else -> null
+  }
+
+  if (profile != null) {
+    Log.i("MediaFormat", "Using HDR Profile $profile")
+    this.setInteger(MediaFormat.KEY_PROFILE, profile)
+    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.N) {
+      this.setInteger(MediaFormat.KEY_COLOR_STANDARD, MediaFormat.COLOR_STANDARD_BT2020)
+      this.setInteger(MediaFormat.KEY_COLOR_RANGE, MediaFormat.COLOR_RANGE_FULL)
+      this.setInteger(MediaFormat.KEY_COLOR_TRANSFER, getTransferFunction(profile))
+    }
+    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.TIRAMISU) {
+      this.setFeatureEnabled(MediaCodecInfo.CodecCapabilities.FEATURE_HdrEditing, true)
+    }
+  }
+}
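
For illustration, a minimal sketch of applying this to an encoder format (hypothetical resolution; assumes an HDR-capable HEVC encoder):

  val format = MediaFormat.createVideoFormat(MediaFormat.MIMETYPE_VIDEO_HEVC, 3840, 2160)
  format.setDynamicRangeProfile(DynamicRangeProfiles.HLG10) // sets HEVCProfileMain10 + BT.2020/HLG keys
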
@@ -0,0 +1,14 @@
+package com.mrousavy.camera.extensions
+
+import android.graphics.Rect
+
+fun Rect.zoomed(zoomFactor: Float): Rect {
+  val height = bottom - top
+  val width = right - left
+
+  // Shrink the Rect towards its center: a zoomFactor of 1.0 returns the full Rect.
+  val offsetX = (width - width / zoomFactor) / 2
+  val offsetY = (height - height / zoomFactor) / 2
+  return Rect((left + offsetX).toInt(), (top + offsetY).toInt(), (right - offsetX).toInt(), (bottom - offsetY).toInt())
+}
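
A worked example of the crop math (values assumed): for a 4000x3000 sensor Rect and zoomFactor 2.0, the offsets are (4000 - 2000) / 2 = 1000 and (3000 - 1500) / 2 = 750, so the crop region is Rect(1000, 750, 3000, 2250), i.e. half the sensor, centered.
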
@@ -0,0 +1,48 @@
+package com.mrousavy.camera.extensions
+
+import android.util.Size
+import android.util.SizeF
+import android.view.Surface
+import kotlin.math.abs
+import kotlin.math.max
+import kotlin.math.min
+
+fun List<Size>.closestToOrMax(size: Size?): Size {
+  return if (size != null) {
+    this.minBy { abs(it.width - size.width) + abs(it.height - size.height) }
+  } else {
+    this.maxBy { it.width * it.height }
+  }
+}
+
+fun Collection<Size>.closestTo(size: Size): Size {
+  return this.minBy { abs(it.width - size.width) + abs(it.height - size.height) }
+}
+
+/**
+ * Rotate by a given Surface Rotation
+ */
+fun Size.rotated(surfaceRotation: Int): Size {
+  return when (surfaceRotation) {
+    Surface.ROTATION_0 -> Size(width, height)
+    Surface.ROTATION_90 -> Size(height, width)
+    Surface.ROTATION_180 -> Size(width, height)
+    Surface.ROTATION_270 -> Size(height, width)
+    else -> Size(width, height)
+  }
+}
+
+val Size.bigger: Int
+  get() = max(width, height)
+val Size.smaller: Int
+  get() = min(width, height)
+
+val SizeF.bigger: Float
+  get() = max(this.width, this.height)
+val SizeF.smaller: Float
+  get() = min(this.width, this.height)
+
+operator fun Size.compareTo(other: Size): Int {
+  return (this.width * this.height).compareTo(other.width * other.height)
+}
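
For illustration, a short sketch of these helpers (values assumed):

  listOf(Size(1280, 720), Size(1920, 1080)).closestTo(Size(1900, 1000)) // -> 1920x1080
  Size(1920, 1080).rotated(Surface.ROTATION_90)                         // -> 1080x1920
  Size(1920, 1080).bigger                                               // -> 1920
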
@@ -1,4 +1,4 @@
-package com.mrousavy.camera.utils
+package com.mrousavy.camera.extensions
 
 import android.view.View
 import android.view.ViewGroup
@@ -1,4 +1,4 @@
-package com.mrousavy.camera.utils
+package com.mrousavy.camera.extensions
 
 import com.facebook.react.bridge.WritableArray
@@ -1,4 +1,4 @@
-package com.mrousavy.camera.utils
+package com.mrousavy.camera.extensions
 
 import com.facebook.react.bridge.WritableMap
@@ -1,43 +1,47 @@
 package com.mrousavy.camera.frameprocessor;
 
 import android.annotation.SuppressLint;
 import android.graphics.ImageFormat;
-import android.graphics.Matrix;
 import android.media.Image;
-import androidx.camera.core.ImageProxy;
 import com.facebook.proguard.annotations.DoNotStrip;
+import com.mrousavy.camera.parsers.PixelFormat;
+import com.mrousavy.camera.parsers.Orientation;
 
 import java.nio.ByteBuffer;
 
 public class Frame {
-    private final ImageProxy imageProxy;
+    private final Image image;
+    private final boolean isMirrored;
+    private final long timestamp;
+    private final Orientation orientation;
     private int refCount = 0;
 
-    public Frame(ImageProxy imageProxy) {
-        this.imageProxy = imageProxy;
+    public Frame(Image image, long timestamp, Orientation orientation, boolean isMirrored) {
+        this.image = image;
+        this.timestamp = timestamp;
+        this.orientation = orientation;
+        this.isMirrored = isMirrored;
    }
 
-    public ImageProxy getImageProxy() {
-        return imageProxy;
+    public Image getImage() {
+        return image;
    }
 
    @SuppressWarnings("unused")
    @DoNotStrip
    public int getWidth() {
-        return imageProxy.getWidth();
+        return image.getWidth();
    }
 
    @SuppressWarnings("unused")
    @DoNotStrip
    public int getHeight() {
-        return imageProxy.getHeight();
+        return image.getHeight();
    }
 
    @SuppressWarnings("unused")
    @DoNotStrip
    public boolean getIsValid() {
        try {
-            @SuppressLint("UnsafeOptInUsageError")
-            Image image = imageProxy.getImage();
-            if (image == null) return false;
            // will throw an exception if the image is already closed
            image.getCropRect();
            // no exception thrown, image must still be valid.
@@ -51,40 +55,38 @@ public class Frame {
    @SuppressWarnings("unused")
    @DoNotStrip
    public boolean getIsMirrored() {
-        Matrix matrix = imageProxy.getImageInfo().getSensorToBufferTransformMatrix();
-        // TODO: Figure out how to get isMirrored from ImageProxy
-        return false;
+        return isMirrored;
    }
 
    @SuppressWarnings("unused")
    @DoNotStrip
    public long getTimestamp() {
-        return imageProxy.getImageInfo().getTimestamp();
+        return timestamp;
    }
 
    @SuppressWarnings("unused")
    @DoNotStrip
    public String getOrientation() {
-        int rotation = imageProxy.getImageInfo().getRotationDegrees();
-        if (rotation >= 45 && rotation < 135)
-            return "landscapeRight";
-        if (rotation >= 135 && rotation < 225)
-            return "portraitUpsideDown";
-        if (rotation >= 225 && rotation < 315)
-            return "landscapeLeft";
-        return "portrait";
+        return orientation.getUnionValue();
    }
 
+    @SuppressWarnings("unused")
+    @DoNotStrip
+    public String getPixelFormat() {
+        PixelFormat format = PixelFormat.Companion.fromImageFormat(image.getFormat());
+        return format.getUnionValue();
+    }
+
    @SuppressWarnings("unused")
    @DoNotStrip
    public int getPlanesCount() {
-        return imageProxy.getPlanes().length;
+        return image.getPlanes().length;
    }
 
    @SuppressWarnings("unused")
    @DoNotStrip
    public int getBytesPerRow() {
-        return imageProxy.getPlanes()[0].getRowStride();
+        return image.getPlanes()[0].getRowStride();
    }
 
    private static byte[] byteArrayCache;
@@ -92,10 +94,10 @@ public class Frame {
    @SuppressWarnings("unused")
    @DoNotStrip
    public byte[] toByteArray() {
-        switch (imageProxy.getFormat()) {
+        switch (image.getFormat()) {
            case ImageFormat.YUV_420_888:
-                ByteBuffer yBuffer = imageProxy.getPlanes()[0].getBuffer();
-                ByteBuffer vuBuffer = imageProxy.getPlanes()[2].getBuffer();
+                ByteBuffer yBuffer = image.getPlanes()[0].getBuffer();
+                ByteBuffer vuBuffer = image.getPlanes()[2].getBuffer();
                int ySize = yBuffer.remaining();
                int vuSize = vuBuffer.remaining();
 
@@ -106,15 +108,45 @@ public class Frame {
                yBuffer.get(byteArrayCache, 0, ySize);
                vuBuffer.get(byteArrayCache, ySize, vuSize);
 
                return byteArrayCache;
+            case ImageFormat.JPEG:
+                ByteBuffer rgbBuffer = image.getPlanes()[0].getBuffer();
+                int size = rgbBuffer.remaining();
+
+                if (byteArrayCache == null || byteArrayCache.length != size) {
+                    byteArrayCache = new byte[size];
+                }
+                rgbBuffer.get(byteArrayCache);
+
+                return byteArrayCache;
            default:
-                throw new RuntimeException("Cannot convert Frame with Format " + imageProxy.getFormat() + " to byte array!");
+                throw new RuntimeException("Cannot convert Frame with Format " + image.getFormat() + " to byte array!");
        }
    }
 
+    @SuppressWarnings("unused")
+    @DoNotStrip
+    public void incrementRefCount() {
+        synchronized (this) {
+            refCount++;
+        }
+    }
+
+    @SuppressWarnings("unused")
+    @DoNotStrip
+    public void decrementRefCount() {
+        synchronized (this) {
+            refCount--;
+            if (refCount <= 0) {
+                // If no reference is held on this Image, close it.
+                image.close();
+            }
+        }
+    }
+
    @SuppressWarnings("unused")
    @DoNotStrip
    private void close() {
-        imageProxy.close();
+        image.close();
    }
 }
@@ -2,20 +2,21 @@ package com.mrousavy.camera.frameprocessor
 
 import android.util.Log
 import androidx.annotation.Keep
+import androidx.annotation.UiThread
 import com.facebook.jni.HybridData
 import com.facebook.proguard.annotations.DoNotStrip
 import com.facebook.react.bridge.ReactApplicationContext
 import com.facebook.react.bridge.ReadableNativeMap
+import com.facebook.react.bridge.UiThreadUtil
 import com.facebook.react.turbomodule.core.CallInvokerHolderImpl
 import com.facebook.react.uimanager.UIManagerHelper
 import com.mrousavy.camera.CameraView
 import com.mrousavy.camera.ViewNotFoundError
 import java.lang.ref.WeakReference
-import java.util.concurrent.ExecutorService
 
 @Suppress("KotlinJniMissingFunction") // we use fbjni.
-class VisionCameraProxy(context: ReactApplicationContext, frameProcessorThread: ExecutorService) {
+class VisionCameraProxy(context: ReactApplicationContext) {
   companion object {
     const val TAG = "VisionCameraProxy"
     init {
@@ -36,11 +37,12 @@ class VisionCameraProxy(context: ReactApplicationContext, frameProcessorThread:
   init {
     val jsCallInvokerHolder = context.catalystInstance.jsCallInvokerHolder as CallInvokerHolderImpl
     val jsRuntimeHolder = context.javaScriptContextHolder.get()
-    mScheduler = VisionCameraScheduler(frameProcessorThread)
+    mScheduler = VisionCameraScheduler()
    mContext = WeakReference(context)
    mHybridData = initHybrid(jsRuntimeHolder, jsCallInvokerHolder, mScheduler)
  }
 
+  @UiThread
  private fun findCameraViewById(viewId: Int): CameraView {
    Log.d(TAG, "Finding view $viewId...")
    val ctx = mContext.get()
@@ -52,15 +54,19 @@ class VisionCameraProxy(context: ReactApplicationContext, frameProcessorThread:
  @DoNotStrip
  @Keep
  fun setFrameProcessor(viewId: Int, frameProcessor: FrameProcessor) {
-    val view = findCameraViewById(viewId)
-    view.frameProcessor = frameProcessor
+    UiThreadUtil.runOnUiThread {
+      val view = findCameraViewById(viewId)
+      view.frameProcessor = frameProcessor
+    }
  }
 
  @DoNotStrip
  @Keep
  fun removeFrameProcessor(viewId: Int) {
-    val view = findCameraViewById(viewId)
-    view.frameProcessor = null
+    UiThreadUtil.runOnUiThread {
+      val view = findCameraViewById(viewId)
+      view.frameProcessor = null
+    }
  }
 
  @DoNotStrip
@@ -2,6 +2,8 @@ package com.mrousavy.camera.frameprocessor;
 
 import com.facebook.jni.HybridData;
 import com.facebook.proguard.annotations.DoNotStrip;
+import com.mrousavy.camera.CameraQueues;
 
 import java.util.concurrent.ExecutorService;
 
 @SuppressWarnings("JavaJniMissingFunction") // using fbjni here
@@ -9,10 +11,8 @@ public class VisionCameraScheduler {
    @SuppressWarnings({"unused", "FieldCanBeLocal"})
    @DoNotStrip
    private final HybridData mHybridData;
-    private final ExecutorService frameProcessorThread;
 
-    public VisionCameraScheduler(ExecutorService frameProcessorThread) {
-        this.frameProcessorThread = frameProcessorThread;
+    public VisionCameraScheduler() {
        mHybridData = initHybrid();
    }
 
@@ -22,6 +22,8 @@ public class VisionCameraScheduler {
    @SuppressWarnings("unused")
    @DoNotStrip
    private void scheduleTrigger() {
-        frameProcessorThread.submit(this::trigger);
+        CameraQueues.CameraQueue videoQueue = CameraQueues.Companion.getVideoQueue();
+        // TODO: Make sure post(this::trigger) works.
+        videoQueue.getHandler().post(this::trigger);
    }
 }
@@ -0,0 +1,25 @@
+package com.mrousavy.camera.parsers
+
+import android.hardware.camera2.CameraDevice
+
+enum class CameraDeviceError(override val unionValue: String): JSUnionValue {
+  CAMERA_ALREADY_IN_USE("camera-already-in-use"),
+  TOO_MANY_OPEN_CAMERAS("too-many-open-cameras"),
+  CAMERA_IS_DISABLED_BY_ANDROID("camera-is-disabled-by-android"),
+  UNKNOWN_CAMERA_DEVICE_ERROR("unknown-camera-device-error"),
+  UNKNOWN_FATAL_CAMERA_SERVICE_ERROR("unknown-fatal-camera-service-error"),
+  DISCONNECTED("camera-has-been-disconnected");
+
+  companion object {
+    fun fromCameraDeviceError(cameraDeviceError: Int): CameraDeviceError {
+      return when (cameraDeviceError) {
+        CameraDevice.StateCallback.ERROR_CAMERA_IN_USE -> CAMERA_ALREADY_IN_USE
+        CameraDevice.StateCallback.ERROR_MAX_CAMERAS_IN_USE -> TOO_MANY_OPEN_CAMERAS
+        CameraDevice.StateCallback.ERROR_CAMERA_DISABLED -> CAMERA_IS_DISABLED_BY_ANDROID
+        CameraDevice.StateCallback.ERROR_CAMERA_DEVICE -> UNKNOWN_CAMERA_DEVICE_ERROR
+        CameraDevice.StateCallback.ERROR_CAMERA_SERVICE -> UNKNOWN_FATAL_CAMERA_SERVICE_ERROR
+        else -> UNKNOWN_CAMERA_DEVICE_ERROR
+      }
+    }
+  }
+}

android/src/main/java/com/mrousavy/camera/parsers/Flash.kt (new file)
@@ -0,0 +1,20 @@
+package com.mrousavy.camera.parsers
+
+import com.mrousavy.camera.InvalidTypeScriptUnionError
+
+enum class Flash(override val unionValue: String): JSUnionValue {
+  OFF("off"),
+  ON("on"),
+  AUTO("auto");
+
+  companion object: JSUnionValue.Companion<Flash> {
+    override fun fromUnionValue(unionValue: String?): Flash {
+      return when (unionValue) {
+        "off" -> OFF
+        "on" -> ON
+        "auto" -> AUTO
+        else -> throw InvalidTypeScriptUnionError("flash", unionValue ?: "(null)")
+      }
+    }
+  }
+}
@@ -0,0 +1,24 @@
+package com.mrousavy.camera.parsers
+
+import android.hardware.camera2.CameraCharacteristics
+
+enum class HardwareLevel(override val unionValue: String): JSUnionValue {
+  LEGACY("legacy"),
+  LIMITED("limited"),
+  EXTERNAL("external"),
+  FULL("full"),
+  LEVEL_3("level-3");
+
+  companion object {
+    fun fromCameraCharacteristics(cameraCharacteristics: CameraCharacteristics): HardwareLevel {
+      return when (cameraCharacteristics.get(CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL)) {
+        CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL_LEGACY -> LEGACY
+        CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL_LIMITED -> LIMITED
+        CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL_EXTERNAL -> EXTERNAL
+        CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL_FULL -> FULL
+        CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL_3 -> LEVEL_3
+        else -> LEGACY
+      }
+    }
+  }
+}
@@ -1,48 +0,0 @@
-package com.mrousavy.camera.parsers
-
-import android.graphics.ImageFormat
-
-/**
- * Parses ImageFormat/PixelFormat int to a string representation useable for the TypeScript types.
- */
-fun parseImageFormat(imageFormat: Int): String {
-  return when (imageFormat) {
-    ImageFormat.YUV_420_888 -> "yuv"
-    ImageFormat.YUV_422_888 -> "yuv"
-    ImageFormat.YUV_444_888 -> "yuv"
-    ImageFormat.JPEG -> "jpeg"
-    ImageFormat.DEPTH_JPEG -> "jpeg-depth"
-    ImageFormat.RAW_SENSOR -> "raw"
-    ImageFormat.RAW_PRIVATE -> "raw"
-    ImageFormat.HEIC -> "heic"
-    ImageFormat.PRIVATE -> "private"
-    ImageFormat.DEPTH16 -> "depth-16"
-    else -> "unknown"
-    /*
-    ImageFormat.UNKNOWN -> "TODOFILL"
-    ImageFormat.RGB_565 -> "TODOFILL"
-    ImageFormat.YV12 -> "TODOFILL"
-    ImageFormat.Y8 -> "TODOFILL"
-    ImageFormat.NV16 -> "TODOFILL"
-    ImageFormat.NV21 -> "TODOFILL"
-    ImageFormat.YUY2 -> "TODOFILL"
-    ImageFormat.FLEX_RGB_888 -> "TODOFILL"
-    ImageFormat.FLEX_RGBA_8888 -> "TODOFILL"
-    ImageFormat.RAW10 -> "TODOFILL"
-    ImageFormat.RAW12 -> "TODOFILL"
-    ImageFormat.DEPTH_POINT_CLOUD -> "TODOFILL"
-    @Suppress("DUPLICATE_LABEL_IN_WHEN")
-    PixelFormat.UNKNOWN -> "TODOFILL"
-    PixelFormat.TRANSPARENT -> "TODOFILL"
-    PixelFormat.TRANSLUCENT -> "TODOFILL"
-    PixelFormat.RGBX_8888 -> "TODOFILL"
-    PixelFormat.RGBA_F16 -> "TODOFILL"
-    PixelFormat.RGBA_8888 -> "TODOFILL"
-    PixelFormat.RGBA_1010102 -> "TODOFILL"
-    PixelFormat.OPAQUE -> "TODOFILL"
-    @Suppress("DUPLICATE_LABEL_IN_WHEN")
-    PixelFormat.RGB_565 -> "TODOFILL"
-    PixelFormat.RGB_888 -> "TODOFILL"
-    */
-  }
-}
@@ -0,0 +1,9 @@
+package com.mrousavy.camera.parsers
+
+interface JSUnionValue {
+  val unionValue: String
+
+  interface Companion<T> {
+    fun fromUnionValue(unionValue: String?): T?
+  }
+}
@@ -0,0 +1,20 @@
+package com.mrousavy.camera.parsers
+
+import android.hardware.camera2.CameraCharacteristics
+
+enum class LensFacing(override val unionValue: String): JSUnionValue {
+  BACK("back"),
+  FRONT("front"),
+  EXTERNAL("external");
+
+  companion object {
+    fun fromCameraCharacteristics(cameraCharacteristics: CameraCharacteristics): LensFacing {
+      return when (cameraCharacteristics.get(CameraCharacteristics.LENS_FACING)!!) {
+        CameraCharacteristics.LENS_FACING_BACK -> BACK
+        CameraCharacteristics.LENS_FACING_FRONT -> FRONT
+        CameraCharacteristics.LENS_FACING_EXTERNAL -> EXTERNAL
+        else -> EXTERNAL
+      }
+    }
+  }
+}
@@ -1,15 +0,0 @@
-package com.mrousavy.camera.parsers
-
-import android.hardware.camera2.CameraCharacteristics
-
-/**
- * Parses Lens Facing int to a string representation useable for the TypeScript types.
- */
-fun parseLensFacing(lensFacing: Int?): String? {
-  return when (lensFacing) {
-    CameraCharacteristics.LENS_FACING_BACK -> "back"
-    CameraCharacteristics.LENS_FACING_FRONT -> "front"
-    CameraCharacteristics.LENS_FACING_EXTERNAL -> "external"
-    else -> null
-  }
-}
@@ -0,0 +1,56 @@
+package com.mrousavy.camera.parsers
+
+import android.hardware.camera2.CameraCharacteristics
+
+enum class Orientation(override val unionValue: String): JSUnionValue {
+  PORTRAIT("portrait"),
+  LANDSCAPE_RIGHT("landscape-right"),
+  PORTRAIT_UPSIDE_DOWN("portrait-upside-down"),
+  LANDSCAPE_LEFT("landscape-left");
+
+  fun toDegrees(): Int {
+    return when(this) {
+      PORTRAIT -> 0
+      LANDSCAPE_RIGHT -> 90
+      PORTRAIT_UPSIDE_DOWN -> 180
+      LANDSCAPE_LEFT -> 270
+    }
+  }
+
+  fun toSensorRelativeOrientation(cameraCharacteristics: CameraCharacteristics): Orientation {
+    val sensorOrientation = cameraCharacteristics.get(CameraCharacteristics.SENSOR_ORIENTATION)!!
+
+    // Convert target orientation to rotation degrees (0, 90, 180, 270)
+    var rotationDegrees = this.toDegrees()
+
+    // Reverse device orientation for front-facing cameras
+    val facingFront = cameraCharacteristics.get(CameraCharacteristics.LENS_FACING) == CameraCharacteristics.LENS_FACING_FRONT
+    if (facingFront) rotationDegrees = -rotationDegrees
+
+    // Rotate sensor rotation by target rotation
+    val newRotationDegrees = (sensorOrientation + rotationDegrees + 360) % 360
+
+    return fromRotationDegrees(newRotationDegrees)
+  }
+
+  companion object: JSUnionValue.Companion<Orientation> {
+    override fun fromUnionValue(unionValue: String?): Orientation? {
+      return when (unionValue) {
+        "portrait" -> PORTRAIT
+        "landscape-right" -> LANDSCAPE_RIGHT
+        "portrait-upside-down" -> PORTRAIT_UPSIDE_DOWN
+        "landscape-left" -> LANDSCAPE_LEFT
+        else -> null
+      }
+    }
+
+    fun fromRotationDegrees(rotationDegrees: Int): Orientation {
+      return when (rotationDegrees) {
+        in 45..135 -> LANDSCAPE_RIGHT
+        in 135..225 -> PORTRAIT_UPSIDE_DOWN
+        in 225..315 -> LANDSCAPE_LEFT
+        else -> PORTRAIT
+      }
+    }
+  }
+}
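
A worked example of toSensorRelativeOrientation (values assumed): for a back camera with SENSOR_ORIENTATION 90 and a target of LANDSCAPE_RIGHT (90 degrees), the result is (90 + 90 + 360) % 360 = 180, i.e. PORTRAIT_UPSIDE_DOWN; for a front camera the target is negated first, giving (90 - 90 + 360) % 360 = 0, i.e. PORTRAIT.
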
@@ -1,11 +0,0 @@
-package com.mrousavy.camera.parsers
-
-import android.content.pm.PackageManager
-
-fun parsePermissionStatus(status: Int): String {
-  return when (status) {
-    PackageManager.PERMISSION_DENIED -> "denied"
-    PackageManager.PERMISSION_GRANTED -> "authorized"
-    else -> "not-determined"
-  }
-}
@@ -0,0 +1,19 @@
+package com.mrousavy.camera.parsers
+
+import android.content.pm.PackageManager
+
+enum class PermissionStatus(override val unionValue: String): JSUnionValue {
+  DENIED("denied"),
+  NOT_DETERMINED("not-determined"),
+  GRANTED("granted");
+
+  companion object {
+    fun fromPermissionStatus(status: Int): PermissionStatus {
+      return when (status) {
+        PackageManager.PERMISSION_DENIED -> DENIED
+        PackageManager.PERMISSION_GRANTED -> GRANTED
+        else -> NOT_DETERMINED
+      }
+    }
+  }
+}
@@ -0,0 +1,57 @@
+package com.mrousavy.camera.parsers
+
+import android.graphics.ImageFormat
+import com.mrousavy.camera.PixelFormatNotSupportedError
+
+@Suppress("FoldInitializerAndIfToElvis")
+enum class PixelFormat(override val unionValue: String): JSUnionValue {
+  YUV("yuv"),
+  RGB("rgb"),
+  DNG("dng"),
+  NATIVE("native"),
+  UNKNOWN("unknown");
+
+  private fun bestMatch(formats: IntArray, targetFormats: Array<Int>): Int? {
+    targetFormats.forEach { format ->
+      if (formats.contains(format)) return format
+    }
+    return null
+  }
+
+  fun toImageFormat(): Int {
+    val result = when (this) {
+      YUV -> ImageFormat.YUV_420_888
+      RGB -> ImageFormat.JPEG
+      DNG -> ImageFormat.RAW_SENSOR
+      NATIVE -> ImageFormat.PRIVATE
+      UNKNOWN -> null
+    }
+    if (result == null) {
+      throw PixelFormatNotSupportedError(this.unionValue)
+    }
+    return result
+  }
+
+  companion object: JSUnionValue.Companion<PixelFormat> {
+    fun fromImageFormat(imageFormat: Int): PixelFormat {
+      return when (imageFormat) {
+        ImageFormat.YUV_420_888 -> YUV
+        ImageFormat.JPEG, ImageFormat.DEPTH_JPEG -> RGB
+        ImageFormat.RAW_SENSOR -> DNG
+        ImageFormat.PRIVATE -> NATIVE
+        else -> UNKNOWN
+      }
+    }
+
+    override fun fromUnionValue(unionValue: String?): PixelFormat? {
+      return when (unionValue) {
+        "yuv" -> YUV
+        "rgb" -> RGB
+        "dng" -> DNG
+        "native" -> NATIVE
+        "unknown" -> UNKNOWN
+        else -> null
+      }
+    }
+  }
+}
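
For illustration, a round-trip sketch (values taken from the enum above):

  PixelFormat.fromImageFormat(ImageFormat.YUV_420_888) // -> YUV ("yuv")
  PixelFormat.YUV.toImageFormat()                      // -> ImageFormat.YUV_420_888
  PixelFormat.UNKNOWN.toImageFormat()                  // throws PixelFormatNotSupportedError
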
@@ -0,0 +1,18 @@
+package com.mrousavy.camera.parsers
+
+enum class PreviewType(override val unionValue: String): JSUnionValue {
+  NONE("none"),
+  NATIVE("native"),
+  SKIA("skia");
+
+  companion object: JSUnionValue.Companion<PreviewType> {
+    override fun fromUnionValue(unionValue: String?): PreviewType {
+      return when (unionValue) {
+        "none" -> NONE
+        "native" -> NATIVE
+        "skia" -> SKIA
+        else -> NONE
+      }
+    }
+  }
+}
@@ -0,0 +1,18 @@
+package com.mrousavy.camera.parsers
+
+enum class QualityPrioritization(override val unionValue: String): JSUnionValue {
+  SPEED("speed"),
+  BALANCED("balanced"),
+  QUALITY("quality");
+
+  companion object: JSUnionValue.Companion<QualityPrioritization> {
+    override fun fromUnionValue(unionValue: String?): QualityPrioritization {
+      return when (unionValue) {
+        "speed" -> SPEED
+        "balanced" -> BALANCED
+        "quality" -> QUALITY
+        else -> BALANCED
+      }
+    }
+  }
+}
@@ -1,20 +0,0 @@
-package com.mrousavy.camera.parsers
-
-import android.util.Size
-import android.util.SizeF
-import kotlin.math.max
-import kotlin.math.min
-
-val Size.bigger: Int
-  get() = max(this.width, this.height)
-val Size.smaller: Int
-  get() = min(this.width, this.height)
-
-val SizeF.bigger: Float
-  get() = max(this.width, this.height)
-val SizeF.smaller: Float
-  get() = min(this.width, this.height)
-
-fun areUltimatelyEqual(size1: Size, size2: Size): Boolean {
-  return size1.width * size1.height == size2.width * size2.height
-}
android/src/main/java/com/mrousavy/camera/parsers/Torch.kt (new file, 16 lines)
@@ -0,0 +1,16 @@
package com.mrousavy.camera.parsers

enum class Torch(override val unionValue: String): JSUnionValue {
  OFF("off"),
  ON("on");

  companion object: JSUnionValue.Companion<Torch> {
    override fun fromUnionValue(unionValue: String?): Torch {
      return when (unionValue) {
        "off" -> OFF
        "on" -> ON
        else -> OFF
      }
    }
  }
}
@@ -0,0 +1,25 @@
package com.mrousavy.camera.parsers

import android.media.MediaRecorder

enum class VideoCodec(override val unionValue: String): JSUnionValue {
  H264("h264"),
  H265("h265");

  fun toVideoCodec(): Int {
    return when (this) {
      H264 -> MediaRecorder.VideoEncoder.H264
      H265 -> MediaRecorder.VideoEncoder.HEVC
    }
  }

  companion object: JSUnionValue.Companion<VideoCodec> {
    override fun fromUnionValue(unionValue: String?): VideoCodec {
      return when (unionValue) {
        "h264" -> H264
        "h265" -> H265
        else -> H264
      }
    }
  }
}
@@ -0,0 +1,26 @@
package com.mrousavy.camera.parsers

import android.media.MediaRecorder
import com.mrousavy.camera.InvalidTypeScriptUnionError

enum class VideoFileType(override val unionValue: String): JSUnionValue {
  MOV("mov"),
  MP4("mp4");

  fun toExtension(): String {
    return when (this) {
      MOV -> ".mov"
      MP4 -> ".mp4"
    }
  }

  companion object: JSUnionValue.Companion<VideoFileType> {
    override fun fromUnionValue(unionValue: String?): VideoFileType {
      return when (unionValue) {
        "mov" -> MOV
        "mp4" -> MP4
        else -> throw InvalidTypeScriptUnionError("fileType", unionValue ?: "(null)")
      }
    }
  }
}
@@ -1,12 +0,0 @@
package com.mrousavy.camera.parsers

import android.hardware.camera2.CameraMetadata.*

fun parseVideoStabilizationMode(stabiliazionMode: Int): String {
  return when (stabiliazionMode) {
    CONTROL_VIDEO_STABILIZATION_MODE_OFF -> "off"
    CONTROL_VIDEO_STABILIZATION_MODE_ON -> "standard"
    CONTROL_VIDEO_STABILIZATION_MODE_PREVIEW_STABILIZATION -> "cinematic"
    else -> "off"
  }
}
@@ -0,0 +1,59 @@
package com.mrousavy.camera.parsers

import android.hardware.camera2.CameraMetadata.CONTROL_VIDEO_STABILIZATION_MODE_OFF
import android.hardware.camera2.CameraMetadata.CONTROL_VIDEO_STABILIZATION_MODE_ON
import android.hardware.camera2.CameraMetadata.CONTROL_VIDEO_STABILIZATION_MODE_PREVIEW_STABILIZATION
import android.hardware.camera2.CameraMetadata.LENS_OPTICAL_STABILIZATION_MODE_OFF
import android.hardware.camera2.CameraMetadata.LENS_OPTICAL_STABILIZATION_MODE_ON

enum class VideoStabilizationMode(override val unionValue: String): JSUnionValue {
  OFF("off"),
  STANDARD("standard"),
  CINEMATIC("cinematic"),
  CINEMATIC_EXTENDED("cinematic-extended");

  fun toDigitalStabilizationMode(): Int {
    return when (this) {
      OFF -> CONTROL_VIDEO_STABILIZATION_MODE_OFF
      STANDARD -> CONTROL_VIDEO_STABILIZATION_MODE_ON
      CINEMATIC -> 2 /* CONTROL_VIDEO_STABILIZATION_MODE_PREVIEW_STABILIZATION */
      else -> CONTROL_VIDEO_STABILIZATION_MODE_OFF
    }
  }

  fun toOpticalStabilizationMode(): Int {
    return when (this) {
      OFF -> LENS_OPTICAL_STABILIZATION_MODE_OFF
      CINEMATIC_EXTENDED -> LENS_OPTICAL_STABILIZATION_MODE_ON
      else -> LENS_OPTICAL_STABILIZATION_MODE_OFF
    }
  }

  companion object: JSUnionValue.Companion<VideoStabilizationMode> {
    override fun fromUnionValue(unionValue: String?): VideoStabilizationMode? {
      return when (unionValue) {
        "off" -> OFF
        "standard" -> STANDARD
        "cinematic" -> CINEMATIC
        "cinematic-extended" -> CINEMATIC_EXTENDED
        else -> null
      }
    }

    fun fromDigitalVideoStabilizationMode(stabilizationMode: Int): VideoStabilizationMode {
      return when (stabilizationMode) {
        CONTROL_VIDEO_STABILIZATION_MODE_OFF -> OFF
        CONTROL_VIDEO_STABILIZATION_MODE_ON -> STANDARD
        CONTROL_VIDEO_STABILIZATION_MODE_PREVIEW_STABILIZATION -> CINEMATIC
        else -> OFF
      }
    }

    fun fromOpticalVideoStabilizationMode(stabilizationMode: Int): VideoStabilizationMode {
      return when (stabilizationMode) {
        LENS_OPTICAL_STABILIZATION_MODE_OFF -> OFF
        LENS_OPTICAL_STABILIZATION_MODE_ON -> CINEMATIC_EXTENDED
        else -> OFF
      }
    }
  }
}
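Reviewer note: a sketch of how a capture session might consume these mappings. `CaptureRequest.CONTROL_VIDEO_STABILIZATION_MODE` and `CaptureRequest.LENS_OPTICAL_STABILIZATION_MODE` are standard Camera2 keys; `applyStabilization` itself is hypothetical and not part of this diff.

// Sketch: apply a VideoStabilizationMode to a CaptureRequest.Builder.
fun applyStabilization(builder: CaptureRequest.Builder, mode: VideoStabilizationMode) {
  if (mode == VideoStabilizationMode.CINEMATIC_EXTENDED) {
    // optical (lens-shift) stabilization
    builder.set(CaptureRequest.LENS_OPTICAL_STABILIZATION_MODE, mode.toOpticalStabilizationMode())
  } else {
    // digital (EIS) stabilization
    builder.set(CaptureRequest.CONTROL_VIDEO_STABILIZATION_MODE, mode.toDigitalStabilizationMode())
  }
}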
@@ -0,0 +1,75 @@
package com.mrousavy.camera.skia

import android.annotation.SuppressLint
import android.content.Context
import android.util.Log
import android.view.Choreographer
import android.view.SurfaceHolder
import android.view.SurfaceView
import com.mrousavy.camera.extensions.postAndWait

@SuppressLint("ViewConstructor")
class SkiaPreviewView(context: Context,
                      private val skiaRenderer: SkiaRenderer): SurfaceView(context), SurfaceHolder.Callback {
  companion object {
    private const val TAG = "SkiaPreviewView"
  }

  private var isAlive = true

  init {
    holder.addCallback(this)
  }

  private fun startLooping(choreographer: Choreographer) {
    choreographer.postFrameCallback {
      synchronized(this) {
        if (!isAlive) return@synchronized

        Log.i(TAG, "tick..")

        // Refresh UI (60 FPS)
        skiaRenderer.onPreviewFrame()
        startLooping(choreographer)
      }
    }
  }

  override fun surfaceCreated(holder: SurfaceHolder) {
    synchronized(this) {
      Log.i(TAG, "onSurfaceCreated(..)")

      skiaRenderer.thread.postAndWait {
        // Create C++ part (OpenGL/Skia context)
        skiaRenderer.setPreviewSurface(holder.surface)
        isAlive = true

        // Start updating the Preview View (~60 FPS)
        startLooping(Choreographer.getInstance())
      }
    }
  }

  override fun surfaceChanged(holder: SurfaceHolder, format: Int, w: Int, h: Int) {
    synchronized(this) {
      Log.i(TAG, "surfaceChanged($w, $h)")

      skiaRenderer.thread.postAndWait {
        // Update C++ OpenGL Surface size
        skiaRenderer.setPreviewSurfaceSize(w, h)
      }
    }
  }

  override fun surfaceDestroyed(holder: SurfaceHolder) {
    synchronized(this) {
      isAlive = false
      Log.i(TAG, "surfaceDestroyed(..)")

      skiaRenderer.thread.postAndWait {
        // Clean up C++ part (OpenGL/Skia context)
        skiaRenderer.destroyPreviewSurface()
      }
    }
  }
}
@@ -0,0 +1,98 @@
package com.mrousavy.camera.skia

import android.graphics.ImageFormat
import android.view.Surface
import com.facebook.jni.HybridData
import com.facebook.proguard.annotations.DoNotStrip
import com.mrousavy.camera.CameraQueues
import com.mrousavy.camera.frameprocessor.Frame
import java.io.Closeable
import java.nio.ByteBuffer

@Suppress("KotlinJniMissingFunction")
class SkiaRenderer: Closeable {
  @DoNotStrip
  private var mHybridData: HybridData
  private var hasNewFrame = false
  private var hasOutputSurface = false

  val thread = CameraQueues.previewQueue.handler

  init {
    mHybridData = initHybrid()
  }

  override fun close() {
    hasNewFrame = false
    thread.post {
      synchronized(this) {
        destroyOutputSurface()
        mHybridData.resetNative()
      }
    }
  }

  fun setPreviewSurface(surface: Surface) {
    synchronized(this) {
      setOutputSurface(surface)
      hasOutputSurface = true
    }
  }

  fun setPreviewSurfaceSize(width: Int, height: Int) {
    synchronized(this) {
      setOutputSurfaceSize(width, height)
    }
  }

  fun destroyPreviewSurface() {
    synchronized(this) {
      destroyOutputSurface()
      hasOutputSurface = false
    }
  }

  fun setInputSurfaceSize(width: Int, height: Int) {
    synchronized(this) {
      setInputTextureSize(width, height)
    }
  }

  /**
   * Called on every Camera Frame (1..240 FPS)
   */
  fun onCameraFrame(frame: Frame) {
    synchronized(this) {
      if (!hasOutputSurface) return
      if (frame.image.format != ImageFormat.YUV_420_888) {
        throw Error("Failed to render Camera Frame! Expected Image format #${ImageFormat.YUV_420_888} (ImageFormat.YUV_420_888), received #${frame.image.format}.")
      }
      val (y, u, v) = frame.image.planes
      renderCameraFrameToOffscreenCanvas(y.buffer, u.buffer, v.buffer)
      hasNewFrame = true
    }
  }

  /**
   * Called on every UI Frame (60 FPS)
   */
  fun onPreviewFrame() {
    synchronized(this) {
      if (!hasOutputSurface) return
      if (!hasNewFrame) return
      renderLatestFrameToPreview()
      hasNewFrame = false
    }
  }

  private external fun initHybrid(): HybridData

  private external fun renderCameraFrameToOffscreenCanvas(yBuffer: ByteBuffer,
                                                          uBuffer: ByteBuffer,
                                                          vBuffer: ByteBuffer)
  private external fun renderLatestFrameToPreview()
  private external fun setInputTextureSize(width: Int, height: Int)
  private external fun setOutputSurface(surface: Any)
  private external fun setOutputSurfaceSize(width: Int, height: Int)
  private external fun destroyOutputSurface()
}
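Reviewer note: the renderer deliberately runs on two clocks. The camera thread uploads YUV planes to the offscreen Skia canvas at sensor rate, while the Choreographer loop in SkiaPreviewView blits at most once per vsync and only when hasNewFrame is set, so the preview never redraws a stale frame. A wiring sketch (the function names outside SkiaRenderer are assumptions, not this PR's API):

// Camera side (1..240 FPS): push the Frame into the offscreen canvas.
fun onFrameProcessorFrame(frame: Frame, renderer: SkiaRenderer) {
  renderer.onCameraFrame(frame)
}

// UI side (~60 FPS): pull; no-op unless a new offscreen frame exists.
fun onChoreographerTick(renderer: SkiaRenderer) {
  renderer.onPreviewFrame()
}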
@@ -1,28 +0,0 @@
package com.mrousavy.camera.utils

import androidx.camera.core.AspectRatio
import kotlin.math.abs
import kotlin.math.max
import kotlin.math.min

private const val RATIO_4_3_VALUE = 4.0 / 3.0
private const val RATIO_16_9_VALUE = 16.0 / 9.0

/**
 * [androidx.camera.core.ImageAnalysisConfig] requires enum value of
 * [androidx.camera.core.AspectRatio]. Currently it has values of 4:3 & 16:9.
 *
 * Detecting the most suitable ratio for dimensions provided in @params by counting absolute
 * of preview ratio to one of the provided values.
 *
 * @param width - preview width
 * @param height - preview height
 * @return suitable aspect ratio
 */
fun aspectRatio(width: Int, height: Int): Int {
  val previewRatio = max(width, height).toDouble() / min(width, height)
  if (abs(previewRatio - RATIO_4_3_VALUE) <= abs(previewRatio - RATIO_16_9_VALUE)) {
    return AspectRatio.RATIO_4_3
  }
  return AspectRatio.RATIO_16_9
}
@@ -1,58 +0,0 @@
package com.mrousavy.camera.utils

import android.hardware.camera2.CameraCharacteristics
import android.util.Size
import com.facebook.react.bridge.Arguments
import com.facebook.react.bridge.ReadableArray
import com.mrousavy.camera.parsers.bigger
import kotlin.math.PI
import kotlin.math.atan

// 35mm is 135 film format, a standard in which focal lengths are usually measured
val Size35mm = Size(36, 24)

/**
 * Convert a given array of focal lengths to the corresponding TypeScript union type name.
 *
 * Possible values for single cameras:
 * * `"wide-angle-camera"`
 * * `"ultra-wide-angle-camera"`
 * * `"telephoto-camera"`
 *
 * Sources for the focal length categories:
 * * [Telephoto Lens (wikipedia)](https://en.wikipedia.org/wiki/Telephoto_lens)
 * * [Normal Lens (wikipedia)](https://en.wikipedia.org/wiki/Normal_lens)
 * * [Wide-Angle Lens (wikipedia)](https://en.wikipedia.org/wiki/Wide-angle_lens)
 * * [Ultra-Wide-Angle Lens (wikipedia)](https://en.wikipedia.org/wiki/Ultra_wide_angle_lens)
 */
fun CameraCharacteristics.getDeviceTypes(): ReadableArray {
  // TODO: Check if getDeviceType() works correctly, even for logical multi-cameras
  val focalLengths = this.get(CameraCharacteristics.LENS_INFO_AVAILABLE_FOCAL_LENGTHS)!!
  val sensorSize = this.get(CameraCharacteristics.SENSOR_INFO_PHYSICAL_SIZE)!!

  // To get valid focal length standards we have to upscale to the 35mm measurement (film standard)
  val cropFactor = Size35mm.bigger / sensorSize.bigger

  val deviceTypes = Arguments.createArray()

  val containsTelephoto = focalLengths.any { l -> (l * cropFactor) > 35 } // TODO: Telephoto lenses are > 85mm, but we don't have anything between that range..
  // val containsNormalLens = focalLengths.any { l -> (l * cropFactor) > 35 && (l * cropFactor) <= 55 }
  val containsWideAngle = focalLengths.any { l -> (l * cropFactor) >= 24 && (l * cropFactor) <= 35 }
  val containsUltraWideAngle = focalLengths.any { l -> (l * cropFactor) < 24 }

  if (containsTelephoto)
    deviceTypes.pushString("telephoto-camera")
  if (containsWideAngle)
    deviceTypes.pushString("wide-angle-camera")
  if (containsUltraWideAngle)
    deviceTypes.pushString("ultra-wide-angle-camera")

  return deviceTypes
}

fun CameraCharacteristics.getFieldOfView(): Double {
  val focalLengths = this.get(CameraCharacteristics.LENS_INFO_AVAILABLE_FOCAL_LENGTHS)!!
  val sensorSize = this.get(CameraCharacteristics.SENSOR_INFO_PHYSICAL_SIZE)!!

  return 2 * atan(sensorSize.bigger / (focalLengths[0] * 2)) * (180 / PI)
}
@@ -1,44 +1,43 @@
 package com.mrousavy.camera.utils

 import android.graphics.ImageFormat
 import android.hardware.camera2.CameraCharacteristics
 import android.hardware.camera2.CameraExtensionCharacteristics
 import android.hardware.camera2.CameraManager
 import android.hardware.camera2.CameraMetadata
 import android.hardware.camera2.params.DynamicRangeProfiles
 import android.os.Build
 import android.util.Range
 import android.util.Size
-import androidx.camera.core.CameraSelector
-import androidx.camera.extensions.ExtensionMode
-import androidx.camera.extensions.ExtensionsManager
 import com.facebook.react.bridge.Arguments
 import com.facebook.react.bridge.ReadableArray
 import com.facebook.react.bridge.ReadableMap
-import com.mrousavy.camera.parsers.bigger
-import com.mrousavy.camera.parsers.parseImageFormat
-import com.mrousavy.camera.parsers.parseLensFacing
-import com.mrousavy.camera.parsers.parseVideoStabilizationMode
+import com.mrousavy.camera.extensions.bigger
+import com.mrousavy.camera.extensions.getPhotoSizes
+import com.mrousavy.camera.extensions.getVideoSizes
+import com.mrousavy.camera.parsers.PixelFormat
+import com.mrousavy.camera.parsers.HardwareLevel
+import com.mrousavy.camera.parsers.LensFacing
+import com.mrousavy.camera.parsers.VideoStabilizationMode
 import kotlin.math.PI
 import kotlin.math.atan

-class CameraDevice(private val cameraManager: CameraManager, extensionsManager: ExtensionsManager, private val cameraId: String) {
-  private val cameraSelector = CameraSelector.Builder().byID(cameraId).build()
+class CameraDeviceDetails(private val cameraManager: CameraManager, private val cameraId: String) {
   private val characteristics = cameraManager.getCameraCharacteristics(cameraId)
-  private val hardwareLevel = characteristics.get(CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL) ?: CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL_LEGACY
+  private val hardwareLevel = HardwareLevel.fromCameraCharacteristics(characteristics)
   private val capabilities = characteristics.get(CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES) ?: IntArray(0)
   private val extensions = getSupportedExtensions()

   // device characteristics
-  private val isMultiCam = capabilities.contains(CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES_LOGICAL_MULTI_CAMERA)
-  private val supportsDepthCapture = capabilities.contains(CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES_DEPTH_OUTPUT)
+  private val isMultiCam = capabilities.contains(11 /* TODO: CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES_LOGICAL_MULTI_CAMERA */)
+  private val supportsDepthCapture = capabilities.contains(8 /* TODO: CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES_DEPTH_OUTPUT */)
   private val supportsRawCapture = capabilities.contains(CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES_RAW)
-  private val supportsLowLightBoost = extensionsManager.isExtensionAvailable(cameraSelector, ExtensionMode.NIGHT) || extensions.contains(CameraExtensionCharacteristics.EXTENSION_NIGHT)
-  private val lensFacing = characteristics.get(CameraCharacteristics.LENS_FACING)!!
+  private val supportsLowLightBoost = extensions.contains(4 /* TODO: CameraExtensionCharacteristics.EXTENSION_NIGHT */)
+  private val lensFacing = LensFacing.fromCameraCharacteristics(characteristics)
   private val hasFlash = characteristics.get(CameraCharacteristics.FLASH_INFO_AVAILABLE) ?: false
-  private val focalLengths = characteristics.get(CameraCharacteristics.LENS_INFO_AVAILABLE_FOCAL_LENGTHS) ?: FloatArray(0)
+  private val focalLengths = characteristics.get(CameraCharacteristics.LENS_INFO_AVAILABLE_FOCAL_LENGTHS) ?: floatArrayOf(35f /* 35mm default */)
   private val sensorSize = characteristics.get(CameraCharacteristics.SENSOR_INFO_PHYSICAL_SIZE)!!
   private val name = (if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.P) characteristics.get(CameraCharacteristics.INFO_VERSION)
-  else null) ?: "${parseLensFacing(lensFacing)} (${cameraId})"
+  else null) ?: "$lensFacing (${cameraId})"

   // "formats" (all possible configurations for this device)
   private val zoomRange = (if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.R) characteristics.get(CameraCharacteristics.CONTROL_ZOOM_RATIO_RANGE)
@@ -50,11 +49,10 @@ class CameraDevice(private val cameraManager: CameraManager, extensionsManager:
   private val isoRange = characteristics.get(CameraCharacteristics.SENSOR_INFO_SENSITIVITY_RANGE) ?: Range(0, 0)
   private val digitalStabilizationModes = characteristics.get(CameraCharacteristics.CONTROL_AVAILABLE_VIDEO_STABILIZATION_MODES) ?: IntArray(0)
   private val opticalStabilizationModes = characteristics.get(CameraCharacteristics.LENS_INFO_AVAILABLE_OPTICAL_STABILIZATION) ?: IntArray(0)
-  private val supportsPhotoHdr = extensionsManager.isExtensionAvailable(cameraSelector, ExtensionMode.HDR) || extensions.contains(CameraExtensionCharacteristics.EXTENSION_HDR)
+  private val supportsPhotoHdr = extensions.contains(3 /* TODO: CameraExtensionCharacteristics.EXTENSION_HDR */)
   private val supportsVideoHdr = getHasVideoHdr()

-  // see https://developer.android.com/reference/android/hardware/camera2/CameraDevice#regular-capture
-  private val supportsParallelVideoProcessing = hardwareLevel != CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL_LEGACY && hardwareLevel != CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL_LIMITED
+  private val videoFormat = ImageFormat.YUV_420_888

   // get extensions (HDR, Night Mode, ..)
   private fun getSupportedExtensions(): List<Int> {
@@ -72,37 +70,21 @@ class CameraDevice(private val cameraManager: CameraManager, extensionsManager:
       val availableProfiles = characteristics.get(CameraCharacteristics.REQUEST_AVAILABLE_DYNAMIC_RANGE_PROFILES)
         ?: DynamicRangeProfiles(LongArray(0))
       return availableProfiles.supportedProfiles.contains(DynamicRangeProfiles.HLG10)
         || availableProfiles.supportedProfiles.contains(DynamicRangeProfiles.HDR10)
     }
   }
   return false
 }

-  private fun createFrameRateRanges(ranges: Array<Range<Int>>): ReadableArray {
-    val array = Arguments.createArray()
-    ranges.forEach { range ->
-      val map = Arguments.createMap()
-      map.putInt("minFrameRate", range.lower)
-      map.putInt("maxFrameRate", range.upper)
-      array.pushMap(map)
-    }
-    return array
-  }
-
-  private fun createFrameRateRanges(minFps: Int, maxFps: Int): ReadableArray {
-    return createFrameRateRanges(arrayOf(Range(minFps, maxFps)))
-  }
-
-  private fun createColorSpaces(): ReadableArray {
-    val array = Arguments.createArray()
-    array.pushString("yuv")
-    return array
-  }
-
   private fun createStabilizationModes(): ReadableArray {
     val array = Arguments.createArray()
-    val videoStabilizationModes = digitalStabilizationModes.plus(opticalStabilizationModes)
-    videoStabilizationModes.forEach { videoStabilizationMode ->
-      array.pushString(parseVideoStabilizationMode(videoStabilizationMode))
+    digitalStabilizationModes.forEach { videoStabilizationMode ->
+      val mode = VideoStabilizationMode.fromDigitalVideoStabilizationMode(videoStabilizationMode)
+      array.pushString(mode.unionValue)
     }
+    opticalStabilizationModes.forEach { videoStabilizationMode ->
+      val mode = VideoStabilizationMode.fromOpticalVideoStabilizationMode(videoStabilizationMode)
+      array.pushString(mode.unionValue)
+    }
     return array
   }
@@ -141,69 +123,77 @@ class CameraDevice(private val cameraManager: CameraManager, extensionsManager:
     return 2 * atan(sensorSize.bigger / (focalLengths[0] * 2)) * (180 / PI)
   }

-  private fun buildFormatMap(outputSize: Size, outputFormat: Int, fpsRanges: ReadableArray): ReadableMap {
-    val highResSizes = (if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.M) cameraConfig.getHighResolutionOutputSizes(outputFormat) else null) ?: emptyArray()
-
-    val map = Arguments.createMap()
-    map.putInt("photoHeight", outputSize.height)
-    map.putInt("photoWidth", outputSize.width)
-    map.putInt("videoHeight", outputSize.height)
-    map.putInt("videoWidth", outputSize.width)
-    map.putBoolean("isHighestPhotoQualitySupported", highResSizes.contains(outputSize))
-    map.putInt("maxISO", isoRange.upper)
-    map.putInt("minISO", isoRange.lower)
-    map.putDouble("fieldOfView", getFieldOfView())
-    map.putArray("colorSpaces", createColorSpaces())
-    map.putBoolean("supportsVideoHDR", supportsVideoHdr)
-    map.putBoolean("supportsPhotoHDR", supportsPhotoHdr)
-    map.putString("autoFocusSystem", "contrast-detection") // TODO: Is this wrong?
-    map.putArray("videoStabilizationModes", createStabilizationModes())
-    map.putString("pixelFormat", parseImageFormat(outputFormat))
-    map.putArray("frameRateRanges", fpsRanges)
-    return map
+  private fun getVideoSizes(): List<Size> {
+    return characteristics.getVideoSizes(cameraId, videoFormat)
   }
+  private fun getPhotoSizes(): List<Size> {
+    return characteristics.getPhotoSizes(ImageFormat.JPEG)
+  }

   private fun getFormats(): ReadableArray {
     val array = Arguments.createArray()

-    val highSpeedSizes = cameraConfig.highSpeedVideoSizes
+    val videoSizes = getVideoSizes()
+    val photoSizes = getPhotoSizes()

-    val outputFormats = cameraConfig.outputFormats
-    outputFormats.forEach { outputFormat ->
-      // Normal Video/Photo Sizes
-      val outputSizes = cameraConfig.getOutputSizes(outputFormat)
-      outputSizes.forEach { outputSize ->
-        val frameDuration = cameraConfig.getOutputMinFrameDuration(outputFormat, outputSize)
-        val maxFps = (1.0 / (frameDuration.toDouble() / 1000000000)).toInt()
-        val minFps = 1
+    videoSizes.forEach { videoSize ->
+      val frameDuration = cameraConfig.getOutputMinFrameDuration(videoFormat, videoSize)
+      val maxFps = (1.0 / (frameDuration.toDouble() / 1_000_000_000)).toInt()

-        val map = buildFormatMap(outputSize, outputFormat, createFrameRateRanges(minFps, maxFps))
-        array.pushMap(map)
-      }
-
-      // High-Speed (Slow Motion) Video Sizes
-      highSpeedSizes.forEach { outputSize ->
-        val highSpeedRanges = cameraConfig.getHighSpeedVideoFpsRangesFor(outputSize)
-
-        val map = buildFormatMap(outputSize, outputFormat, createFrameRateRanges(highSpeedRanges))
+      photoSizes.forEach { photoSize ->
+        val map = buildFormatMap(photoSize, videoSize, Range(1, maxFps))
         array.pushMap(map)
       }
     }

+    // TODO: Add high-speed video ranges (high-fps / slow-motion)

     return array
   }

+  // Get available pixel formats for the given Size
+  private fun createPixelFormats(size: Size): ReadableArray {
+    val formats = cameraConfig.outputFormats
+    val array = Arguments.createArray()
+    formats.forEach { format ->
+      val sizes = cameraConfig.getOutputSizes(format)
+      val hasSize = sizes.any { it.width == size.width && it.height == size.height }
+      if (hasSize) {
+        array.pushString(PixelFormat.fromImageFormat(format).unionValue)
+      }
+    }
+    return array
+  }
+
+  private fun buildFormatMap(photoSize: Size, videoSize: Size, fpsRange: Range<Int>): ReadableMap {
+    val map = Arguments.createMap()
+    map.putInt("photoHeight", photoSize.height)
+    map.putInt("photoWidth", photoSize.width)
+    map.putInt("videoHeight", videoSize.height)
+    map.putInt("videoWidth", videoSize.width)
+    map.putInt("minISO", isoRange.lower)
+    map.putInt("maxISO", isoRange.upper)
+    map.putInt("minFps", fpsRange.lower)
+    map.putInt("maxFps", fpsRange.upper)
+    map.putDouble("fieldOfView", getFieldOfView())
+    map.putBoolean("supportsVideoHDR", supportsVideoHdr)
+    map.putBoolean("supportsPhotoHDR", supportsPhotoHdr)
+    map.putString("autoFocusSystem", "contrast-detection") // TODO: Is this wrong?
+    map.putArray("videoStabilizationModes", createStabilizationModes())
+    map.putArray("pixelFormats", createPixelFormats(videoSize))
+    return map
+  }

   // convert to React Native JS object (map)
   fun toMap(): ReadableMap {
     val map = Arguments.createMap()
     map.putString("id", cameraId)
     map.putArray("devices", getDeviceTypes())
-    map.putString("position", parseLensFacing(lensFacing))
+    map.putString("position", lensFacing.unionValue)
     map.putString("name", name)
     map.putBoolean("hasFlash", hasFlash)
     map.putBoolean("hasTorch", hasFlash)
     map.putBoolean("isMultiCam", isMultiCam)
-    map.putBoolean("supportsParallelVideoProcessing", supportsParallelVideoProcessing)
     map.putBoolean("supportsRawCapture", supportsRawCapture)
     map.putBoolean("supportsDepthCapture", supportsDepthCapture)
     map.putBoolean("supportsLowLightBoost", supportsLowLightBoost)
@@ -211,6 +201,37 @@ class CameraDevice(private val cameraManager: CameraManager, extensionsManager:
     map.putDouble("minZoom", minZoom)
     map.putDouble("maxZoom", maxZoom)
     map.putDouble("neutralZoom", 1.0) // Zoom is always relative to 1.0 on Android
+    map.putString("hardwareLevel", hardwareLevel.unionValue)
+
+    val array = Arguments.createArray()
+    cameraConfig.outputFormats.forEach { f ->
+      val str = when (f) {
+        ImageFormat.YUV_420_888 -> "YUV_420_888"
+        ImageFormat.YUV_422_888 -> "YUV_422_888"
+        ImageFormat.YUV_444_888 -> "YUV_444_888"
+        ImageFormat.JPEG -> "JPEG"
+        ImageFormat.DEPTH16 -> "DEPTH16"
+        ImageFormat.DEPTH_JPEG -> "DEPTH_JPEG"
+        ImageFormat.FLEX_RGBA_8888 -> "FLEX_RGBA_8888"
+        ImageFormat.FLEX_RGB_888 -> "FLEX_RGB_888"
+        ImageFormat.YUY2 -> "YUY2"
+        ImageFormat.Y8 -> "Y8"
+        ImageFormat.YV12 -> "YV12"
+        ImageFormat.HEIC -> "HEIC"
+        ImageFormat.PRIVATE -> "PRIVATE"
+        ImageFormat.RAW_PRIVATE -> "RAW_PRIVATE"
+        ImageFormat.RAW_SENSOR -> "RAW_SENSOR"
+        ImageFormat.RAW10 -> "RAW10"
+        ImageFormat.RAW12 -> "RAW12"
+        ImageFormat.NV16 -> "NV16"
+        ImageFormat.NV21 -> "NV21"
+        ImageFormat.UNKNOWN -> "UNKNOWN"
+        ImageFormat.YCBCR_P010 -> "YCBCR_P010"
+        else -> "unknown ($f)"
+      }
+      array.pushString(str)
+    }
+    map.putArray("pixelFormats", array)
+
     map.putArray("formats", getFormats())
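Reviewer note: a worked example for getFieldOfView() above, with illustrative numbers (not from a real device). A sensor whose longer physical edge is 6.4mm behind a 4.25mm lens gives:

// 2 * atan(6.4 / (2 * 4.25)) * (180 / PI) ≈ 74 degrees
val fieldOfView = 2 * atan(6.4 / (4.25 * 2)) * (180 / PI)

which is the field of view reported to JS for that format.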
@@ -1,25 +0,0 @@
package com.mrousavy.camera.utils

import android.annotation.SuppressLint
import androidx.camera.camera2.interop.Camera2CameraInfo
import androidx.camera.core.CameraSelector
import java.lang.IllegalArgumentException

/**
 * Create a new [CameraSelector] which selects the camera with the given [cameraId]
 */
@SuppressLint("UnsafeOptInUsageError")
fun CameraSelector.Builder.byID(cameraId: String): CameraSelector.Builder {
  return this.addCameraFilter { cameras ->
    cameras.filter { cameraInfoX ->
      try {
        val cameraInfo = Camera2CameraInfo.from(cameraInfoX)
        return@filter cameraInfo.cameraId == cameraId
      } catch (e: IllegalArgumentException) {
        // Occurs when the [cameraInfoX] is not castable to a Camera2 Info object.
        // We can ignore this error because the [getAvailableCameraDevices()] func only returns Camera2 devices.
        return@filter false
      }
    }
  }
}
@@ -1,33 +0,0 @@
package com.mrousavy.camera.utils

import android.util.Range
import android.util.Size
import com.facebook.react.bridge.ReadableMap

class DeviceFormat(map: ReadableMap) {
  val frameRateRanges: List<Range<Int>>
  val photoSize: Size
  val videoSize: Size

  init {
    frameRateRanges = map.getArray("frameRateRanges")!!.toArrayList().map { range ->
      if (range is HashMap<*, *>)
        rangeFactory(range["minFrameRate"], range["maxFrameRate"])
      else
        throw IllegalArgumentException("DeviceFormat: frameRateRanges contained a Range that was not of type HashMap<*,*>! Actual Type: ${range?.javaClass?.name}")
    }
    photoSize = Size(map.getInt("photoWidth"), map.getInt("photoHeight"))
    videoSize = Size(map.getInt("videoWidth"), map.getInt("videoHeight"))
  }
}

fun rangeFactory(minFrameRate: Any?, maxFrameRate: Any?): Range<Int> {
  return when (minFrameRate) {
    is Int -> Range(minFrameRate, maxFrameRate as Int)
    is Double -> Range(minFrameRate.toInt(), (maxFrameRate as Double).toInt())
    else -> throw IllegalArgumentException(
      "DeviceFormat: frameRateRanges contained a Range that didn't have minFrameRate/maxFrameRate of types Int/Double! " +
        "Actual Type: ${minFrameRate?.javaClass?.name} & ${maxFrameRate?.javaClass?.name}"
    )
  }
}
@@ -1,62 +0,0 @@
package com.mrousavy.camera.utils

import androidx.exifinterface.media.ExifInterface
import com.facebook.react.bridge.Arguments
import com.facebook.react.bridge.WritableMap

fun ExifInterface.buildMetadataMap(): WritableMap {
  val metadataMap = Arguments.createMap()
  metadataMap.putInt("Orientation", this.getAttributeInt(ExifInterface.TAG_ORIENTATION, ExifInterface.ORIENTATION_NORMAL))

  val tiffMap = Arguments.createMap()
  tiffMap.putInt("ResolutionUnit", this.getAttributeInt(ExifInterface.TAG_RESOLUTION_UNIT, 0))
  tiffMap.putString("Software", this.getAttribute(ExifInterface.TAG_SOFTWARE))
  tiffMap.putString("Make", this.getAttribute(ExifInterface.TAG_MAKE))
  tiffMap.putString("DateTime", this.getAttribute(ExifInterface.TAG_DATETIME))
  tiffMap.putDouble("XResolution", this.getAttributeDouble(ExifInterface.TAG_X_RESOLUTION, 0.0))
  tiffMap.putString("Model", this.getAttribute(ExifInterface.TAG_MODEL))
  tiffMap.putDouble("YResolution", this.getAttributeDouble(ExifInterface.TAG_Y_RESOLUTION, 0.0))
  metadataMap.putMap("{TIFF}", tiffMap)

  val exifMap = Arguments.createMap()
  exifMap.putString("DateTimeOriginal", this.getAttribute(ExifInterface.TAG_DATETIME_ORIGINAL))
  exifMap.putDouble("ExposureTime", this.getAttributeDouble(ExifInterface.TAG_EXPOSURE_TIME, 0.0))
  exifMap.putDouble("FNumber", this.getAttributeDouble(ExifInterface.TAG_F_NUMBER, 0.0))
  val lensSpecificationArray = Arguments.createArray()
  this.getAttributeRange(ExifInterface.TAG_LENS_SPECIFICATION)?.forEach { lensSpecificationArray.pushInt(it.toInt()) }
  exifMap.putArray("LensSpecification", lensSpecificationArray)
  exifMap.putDouble("ExposureBiasValue", this.getAttributeDouble(ExifInterface.TAG_EXPOSURE_BIAS_VALUE, 0.0))
  exifMap.putInt("ColorSpace", this.getAttributeInt(ExifInterface.TAG_COLOR_SPACE, ExifInterface.COLOR_SPACE_S_RGB))
  exifMap.putInt("FocalLenIn35mmFilm", this.getAttributeInt(ExifInterface.TAG_FOCAL_LENGTH_IN_35MM_FILM, 0))
  exifMap.putDouble("BrightnessValue", this.getAttributeDouble(ExifInterface.TAG_BRIGHTNESS_VALUE, 0.0))
  exifMap.putInt("ExposureMode", this.getAttributeInt(ExifInterface.TAG_EXPOSURE_MODE, ExifInterface.EXPOSURE_MODE_AUTO.toInt()))
  exifMap.putString("LensModel", this.getAttribute(ExifInterface.TAG_LENS_MODEL))
  exifMap.putInt("SceneType", this.getAttributeInt(ExifInterface.TAG_SCENE_TYPE, ExifInterface.SCENE_TYPE_DIRECTLY_PHOTOGRAPHED.toInt()))
  exifMap.putInt("PixelXDimension", this.getAttributeInt(ExifInterface.TAG_PIXEL_X_DIMENSION, 0))
  exifMap.putDouble("ShutterSpeedValue", this.getAttributeDouble(ExifInterface.TAG_SHUTTER_SPEED_VALUE, 0.0))
  exifMap.putInt("SensingMethod", this.getAttributeInt(ExifInterface.TAG_SENSING_METHOD, ExifInterface.SENSOR_TYPE_NOT_DEFINED.toInt()))
  val subjectAreaArray = Arguments.createArray()
  this.getAttributeRange(ExifInterface.TAG_SUBJECT_AREA)?.forEach { subjectAreaArray.pushInt(it.toInt()) }
  exifMap.putArray("SubjectArea", subjectAreaArray)
  exifMap.putDouble("ApertureValue", this.getAttributeDouble(ExifInterface.TAG_APERTURE_VALUE, 0.0))
  exifMap.putString("SubsecTimeDigitized", this.getAttribute(ExifInterface.TAG_SUBSEC_TIME_DIGITIZED))
  exifMap.putDouble("FocalLength", this.getAttributeDouble(ExifInterface.TAG_FOCAL_LENGTH, 0.0))
  exifMap.putString("LensMake", this.getAttribute(ExifInterface.TAG_LENS_MAKE))
  exifMap.putString("SubsecTimeOriginal", this.getAttribute(ExifInterface.TAG_SUBSEC_TIME_ORIGINAL))
  exifMap.putString("OffsetTimeDigitized", this.getAttribute(ExifInterface.TAG_OFFSET_TIME_DIGITIZED))
  exifMap.putInt("PixelYDimension", this.getAttributeInt(ExifInterface.TAG_PIXEL_Y_DIMENSION, 0))
  val isoSpeedRatingsArray = Arguments.createArray()
  this.getAttributeRange(ExifInterface.TAG_PHOTOGRAPHIC_SENSITIVITY)?.forEach { isoSpeedRatingsArray.pushInt(it.toInt()) }
  exifMap.putArray("ISOSpeedRatings", isoSpeedRatingsArray)
  exifMap.putInt("WhiteBalance", this.getAttributeInt(ExifInterface.TAG_WHITE_BALANCE, 0))
  exifMap.putString("DateTimeDigitized", this.getAttribute(ExifInterface.TAG_DATETIME_DIGITIZED))
  exifMap.putString("OffsetTimeOriginal", this.getAttribute(ExifInterface.TAG_OFFSET_TIME_ORIGINAL))
  exifMap.putString("ExifVersion", this.getAttribute(ExifInterface.TAG_EXIF_VERSION))
  exifMap.putString("OffsetTime", this.getAttribute(ExifInterface.TAG_OFFSET_TIME))
  exifMap.putInt("Flash", this.getAttributeInt(ExifInterface.TAG_FLASH, ExifInterface.FLAG_FLASH_FIRED.toInt()))
  exifMap.putInt("ExposureProgram", this.getAttributeInt(ExifInterface.TAG_EXPOSURE_PROGRAM, ExifInterface.EXPOSURE_PROGRAM_NOT_DEFINED.toInt()))
  exifMap.putInt("MeteringMode", this.getAttributeInt(ExifInterface.TAG_METERING_MODE, ExifInterface.METERING_MODE_UNKNOWN.toInt()))
  metadataMap.putMap("{Exif}", exifMap)

  return metadataMap
}
android/src/main/java/com/mrousavy/camera/utils/ExifUtils.kt (new file, 20 lines)
@@ -0,0 +1,20 @@
package com.mrousavy.camera.utils

import androidx.exifinterface.media.ExifInterface

class ExifUtils {
  companion object {
    fun computeExifOrientation(rotationDegrees: Int, mirrored: Boolean) = when {
      rotationDegrees == 0 && !mirrored -> ExifInterface.ORIENTATION_NORMAL
      rotationDegrees == 0 && mirrored -> ExifInterface.ORIENTATION_FLIP_HORIZONTAL
      rotationDegrees == 180 && !mirrored -> ExifInterface.ORIENTATION_ROTATE_180
      rotationDegrees == 180 && mirrored -> ExifInterface.ORIENTATION_FLIP_VERTICAL
      rotationDegrees == 90 && !mirrored -> ExifInterface.ORIENTATION_ROTATE_90
      rotationDegrees == 90 && mirrored -> ExifInterface.ORIENTATION_TRANSPOSE
      rotationDegrees == 270 && !mirrored -> ExifInterface.ORIENTATION_ROTATE_270
      rotationDegrees == 270 && mirrored -> ExifInterface.ORIENTATION_TRANSVERSE
      else -> ExifInterface.ORIENTATION_UNDEFINED
    }
  }
}
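Reviewer note: the original `when` had two contradictory branches for `rotationDegrees == 270 && mirrored` and mapped the unmirrored 270 case to TRANSVERSE; the version above deduplicates the branches into the standard EXIF mapping. A usage sketch (the file write is not part of this diff; `photoFile` is an assumption):

val exif = ExifInterface(photoFile)
val orientation = ExifUtils.computeExifOrientation(rotationDegrees = 90, mirrored = true)
// EXIF stores orientation as a stringified integer tag;
// 90 + mirrored yields ORIENTATION_TRANSPOSE here
exif.setAttribute(ExifInterface.TAG_ORIENTATION, orientation.toString())
exif.saveAttributes()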
@@ -1,41 +0,0 @@
package com.mrousavy.camera.utils

import androidx.camera.core.ImageCapture
import androidx.camera.core.ImageCaptureException
import androidx.camera.core.ImageProxy
import java.util.concurrent.Executor
import kotlin.coroutines.resume
import kotlin.coroutines.resumeWithException
import kotlin.coroutines.suspendCoroutine

suspend inline fun ImageCapture.takePicture(options: ImageCapture.OutputFileOptions, executor: Executor) = suspendCoroutine<ImageCapture.OutputFileResults> { cont ->
  this.takePicture(
    options, executor,
    object : ImageCapture.OnImageSavedCallback {
      override fun onImageSaved(outputFileResults: ImageCapture.OutputFileResults) {
        cont.resume(outputFileResults)
      }

      override fun onError(exception: ImageCaptureException) {
        cont.resumeWithException(exception)
      }
    }
  )
}

suspend inline fun ImageCapture.takePicture(executor: Executor) = suspendCoroutine<ImageProxy> { cont ->
  this.takePicture(
    executor,
    object : ImageCapture.OnImageCapturedCallback() {
      override fun onCaptureSuccess(image: ImageProxy) {
        super.onCaptureSuccess(image)
        cont.resume(image)
      }

      override fun onError(exception: ImageCaptureException) {
        super.onError(exception)
        cont.resumeWithException(exception)
      }
    }
  )
}
@@ -1,12 +0,0 @@
package com.mrousavy.camera.utils

import android.graphics.ImageFormat
import androidx.camera.core.ImageProxy

val ImageProxy.isRaw: Boolean
  get() {
    return when (format) {
      ImageFormat.RAW_SENSOR, ImageFormat.RAW10, ImageFormat.RAW12, ImageFormat.RAW_PRIVATE -> true
      else -> false
    }
  }
@@ -1,127 +0,0 @@
package com.mrousavy.camera.utils

import android.graphics.Bitmap
import android.graphics.BitmapFactory
import android.graphics.ImageFormat
import android.graphics.Matrix
import android.util.Log
import androidx.camera.core.ImageProxy
import androidx.exifinterface.media.ExifInterface
import com.mrousavy.camera.CameraView
import com.mrousavy.camera.InvalidFormatError
import java.io.ByteArrayOutputStream
import java.io.File
import java.io.FileOutputStream
import java.nio.ByteBuffer
import kotlin.system.measureTimeMillis

// TODO: Fix this flip() function (this outputs a black image)
fun flip(imageBytes: ByteArray, imageWidth: Int): ByteArray {
  // separate out the sub arrays
  var holder = ByteArray(imageBytes.size)
  var subArray = ByteArray(imageWidth)
  var subCount = 0
  for (i in imageBytes.indices) {
    subArray[subCount] = imageBytes[i]
    subCount++
    if (i % imageWidth == 0) {
      subArray.reverse()
      if (i == imageWidth) {
        holder = subArray
      } else {
        holder += subArray
      }
      subCount = 0
      subArray = ByteArray(imageWidth)
    }
  }
  subArray = ByteArray(imageWidth)
  System.arraycopy(imageBytes, imageBytes.size - imageWidth, subArray, 0, subArray.size)
  return holder + subArray
}

// TODO: This function is slow. Figure out a faster way to flip images, preferably via directly manipulating the byte[] Exif flags
fun flipImage(imageBytes: ByteArray): ByteArray {
  val bitmap = BitmapFactory.decodeByteArray(imageBytes, 0, imageBytes.size)
  val matrix = Matrix()

  val exif = ExifInterface(imageBytes.inputStream())
  val orientation = exif.getAttributeInt(ExifInterface.TAG_ORIENTATION, ExifInterface.ORIENTATION_UNDEFINED)

  when (orientation) {
    ExifInterface.ORIENTATION_ROTATE_180 -> {
      matrix.setRotate(180f)
      matrix.postScale(-1f, 1f)
    }
    ExifInterface.ORIENTATION_FLIP_VERTICAL -> {
      matrix.setRotate(180f)
    }
    ExifInterface.ORIENTATION_TRANSPOSE -> {
      matrix.setRotate(90f)
    }
    ExifInterface.ORIENTATION_ROTATE_90 -> {
      matrix.setRotate(90f)
      matrix.postScale(-1f, 1f)
    }
    ExifInterface.ORIENTATION_TRANSVERSE -> {
      matrix.setRotate(-90f)
    }
    ExifInterface.ORIENTATION_ROTATE_270 -> {
      matrix.setRotate(-90f)
      matrix.postScale(-1f, 1f)
    }
  }

  val newBitmap = Bitmap.createBitmap(bitmap, 0, 0, bitmap.width, bitmap.height, matrix, true)
  val stream = ByteArrayOutputStream()
  newBitmap.compress(Bitmap.CompressFormat.JPEG, 100, stream)
  return stream.toByteArray()
}

fun ImageProxy.save(file: File, flipHorizontally: Boolean) {
  when (format) {
    // TODO: ImageFormat.RAW_SENSOR
    // TODO: ImageFormat.DEPTH_JPEG
    ImageFormat.JPEG -> {
      val buffer = planes[0].buffer
      var bytes = ByteArray(buffer.remaining())

      // copy image from buffer to byte array
      buffer.get(bytes)

      if (flipHorizontally) {
        val milliseconds = measureTimeMillis {
          bytes = flipImage(bytes)
        }
        Log.i(CameraView.TAG_PERF, "Flipping Image took $milliseconds ms.")
      }

      val output = FileOutputStream(file)
      output.write(bytes)
      output.close()
    }
    ImageFormat.YUV_420_888 -> {
      // "prebuffer" simply contains the meta information about the following planes.
      val prebuffer = ByteBuffer.allocate(16)
      prebuffer.putInt(width)
        .putInt(height)
        .putInt(planes[1].pixelStride)
        .putInt(planes[1].rowStride)

      val output = FileOutputStream(file)
      output.write(prebuffer.array()) // write meta information to file
      // Now write the actual planes.
      var buffer: ByteBuffer
      var bytes: ByteArray

      for (i in 0..2) {
        buffer = planes[i].buffer
        bytes = ByteArray(buffer.remaining()) // makes byte array large enough to hold image
        buffer.get(bytes) // copies image from buffer to byte array
        output.write(bytes) // write the byte array to file
      }
      output.close()
    }
    else -> throw InvalidFormatError(format)
  }
}
@@ -0,0 +1,32 @@
package com.mrousavy.camera.utils

import android.media.Image
import kotlinx.coroutines.CompletableDeferred

class PhotoOutputSynchronizer {
  private val photoOutputQueue = HashMap<Long, CompletableDeferred<Image>>()

  private operator fun get(key: Long): CompletableDeferred<Image> {
    if (!photoOutputQueue.containsKey(key)) {
      photoOutputQueue[key] = CompletableDeferred()
    }
    return photoOutputQueue[key]!!
  }

  suspend fun await(timestamp: Long): Image {
    val image = this[timestamp].await()
    photoOutputQueue.remove(timestamp)
    return image
  }

  fun set(timestamp: Long, image: Image) {
    this[timestamp].complete(image)
  }

  fun clear() {
    photoOutputQueue.forEach {
      it.value.cancel()
    }
    photoOutputQueue.clear()
  }
}
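Reviewer note: the class above bridges two threads with one CompletableDeferred per capture timestamp: the caller awaits, the ImageReader callback completes. A sketch (the surrounding function names are assumptions, not this PR's API):

val synchronizer = PhotoOutputSynchronizer()

// Caller side (coroutine): suspend until the matching Image arrives.
suspend fun awaitPhoto(captureTimestamp: Long): Image =
  synchronizer.await(captureTimestamp)

// ImageReader side: complete the pending Deferred for this timestamp.
fun onPhotoCaptured(image: Image) {
  synchronizer.set(image.timestamp, image)
}

Note the backing HashMap itself is not synchronized, so this appears to assume both sides access the synchronizer from the same camera queue.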
@@ -0,0 +1,158 @@
package com.mrousavy.camera.utils

import android.content.Context
import android.media.Image
import android.media.ImageWriter
import android.media.MediaCodec
import android.media.MediaRecorder
import android.os.Build
import android.util.Log
import android.util.Size
import android.view.Surface
import com.mrousavy.camera.parsers.Orientation
import com.mrousavy.camera.parsers.VideoCodec
import com.mrousavy.camera.parsers.VideoFileType
import com.mrousavy.camera.utils.outputs.CameraOutputs
import java.io.File

class RecordingSession(context: Context,
                       private val enableAudio: Boolean,
                       private val videoSize: Size,
                       private val fps: Int? = null,
                       private val codec: VideoCodec = VideoCodec.H264,
                       private val orientation: Orientation,
                       private val fileType: VideoFileType = VideoFileType.MP4,
                       private val callback: (video: Video) -> Unit) {
  companion object {
    private const val TAG = "RecordingSession"
    // bits per second
    private const val VIDEO_BIT_RATE = 10_000_000
    private const val AUDIO_SAMPLING_RATE = 44_100
    private const val AUDIO_BIT_RATE = 16 * AUDIO_SAMPLING_RATE
    private const val AUDIO_CHANNELS = 1
  }

  data class Video(val path: String, val durationMs: Long)

  private val recorder: MediaRecorder
  private val outputFile: File
  private var startTime: Long? = null
  private var imageWriter: ImageWriter? = null
  val surface: Surface

  init {
    if (Build.VERSION.SDK_INT < Build.VERSION_CODES.M) {
      throw Error("Video Recording is only supported on Devices running Android version 23 (M) or newer.")
    }

    surface = MediaCodec.createPersistentInputSurface()

    outputFile = File.createTempFile("mrousavy", fileType.toExtension(), context.cacheDir)

    Log.i(TAG, "Creating RecordingSession for ${outputFile.absolutePath}")

    recorder = if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.S) MediaRecorder(context) else MediaRecorder()

    if (enableAudio) recorder.setAudioSource(MediaRecorder.AudioSource.CAMCORDER)
    recorder.setVideoSource(MediaRecorder.VideoSource.SURFACE)

    recorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4)
    recorder.setOutputFile(outputFile.absolutePath)
    recorder.setVideoEncodingBitRate(VIDEO_BIT_RATE)
    recorder.setVideoSize(videoSize.width, videoSize.height)
    if (fps != null) recorder.setVideoFrameRate(fps)

    Log.i(TAG, "Using $codec Video Codec..")
    recorder.setVideoEncoder(codec.toVideoCodec())
    if (enableAudio) {
      Log.i(TAG, "Adding Audio Channel..")
      recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AAC)
      recorder.setAudioEncodingBitRate(AUDIO_BIT_RATE)
      recorder.setAudioSamplingRate(AUDIO_SAMPLING_RATE)
      recorder.setAudioChannels(AUDIO_CHANNELS)
    }
    recorder.setInputSurface(surface)
    recorder.setOrientationHint(orientation.toDegrees())

    recorder.setOnErrorListener { _, what, extra ->
      Log.e(TAG, "MediaRecorder Error: $what ($extra)")
      stop()
    }
    recorder.setOnInfoListener { _, what, extra ->
      Log.i(TAG, "MediaRecorder Info: $what ($extra)")
    }

    Log.i(TAG, "Created $this!")
  }

  fun start() {
    synchronized(this) {
      Log.i(TAG, "Starting RecordingSession..")
      recorder.prepare()
      recorder.start()
      startTime = System.currentTimeMillis()
    }
  }

  fun stop() {
    synchronized(this) {
      Log.i(TAG, "Stopping RecordingSession..")
      try {
        recorder.stop()
        recorder.release()

        if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.M) {
          imageWriter?.close()
          imageWriter = null
        }
      } catch (e: Error) {
        Log.e(TAG, "Failed to stop MediaRecorder!", e)
      }

      val stopTime = System.currentTimeMillis()
      val durationMs = stopTime - (startTime ?: stopTime)
      callback(Video(outputFile.absolutePath, durationMs))
    }
  }

  fun pause() {
    synchronized(this) {
      if (Build.VERSION.SDK_INT < Build.VERSION_CODES.N) {
        throw Error("Pausing a recording is only supported on Devices running Android version 24 (N) or newer.")
      }
      Log.i(TAG, "Pausing Recording Session..")
      recorder.pause()
    }
  }

  fun resume() {
    synchronized(this) {
      if (Build.VERSION.SDK_INT < Build.VERSION_CODES.N) {
        throw Error("Resuming a recording is only supported on Devices running Android version 24 (N) or newer.")
      }
      Log.i(TAG, "Resuming Recording Session..")
      recorder.resume()
    }
  }

  fun appendImage(image: Image) {
    synchronized(this) {
      if (Build.VERSION.SDK_INT < Build.VERSION_CODES.M) {
        throw Error("Video Recording is only supported on Devices running Android version 23 (M) or newer.")
      }

      // TODO: Correctly mirror/flip Image in OpenGL pipeline, otherwise flipping camera while recording results in inverted frames

      if (imageWriter == null) {
        imageWriter = ImageWriter.newInstance(surface, CameraOutputs.VIDEO_OUTPUT_BUFFER_SIZE)
      }
      image.timestamp = System.nanoTime()
      imageWriter!!.queueInputImage(image)
    }
  }

  override fun toString(): String {
    val audio = if (enableAudio) "with audio" else "without audio"
    return "${videoSize.width} x ${videoSize.height} @ $fps FPS $codec $fileType $orientation RecordingSession ($audio)"
  }
}
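Reviewer note: a construction sketch for RecordingSession. The values and `onRecorded` are assumptions, and `Orientation.PORTRAIT` assumes the Orientation parser added elsewhere in this PR:

val session = RecordingSession(
  context,
  enableAudio = true,
  videoSize = Size(1920, 1080),
  fps = 30,
  codec = VideoCodec.H265,
  orientation = Orientation.PORTRAIT,
  fileType = VideoFileType.MP4,
  callback = { video -> onRecorded(video.path, video.durationMs) }
)
session.start()
// frames reach the encoder either directly via session.surface, or
// one-by-one via session.appendImage(image) (ImageWriter pass-through)
session.stop() // invokes the callback with path + duration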
@@ -1,17 +0,0 @@
package com.mrousavy.camera.utils

import android.util.Size
import android.view.Surface

/**
 * Rotate by a given Surface Rotation
 */
fun Size.rotated(surfaceRotation: Int): Size {
  return when (surfaceRotation) {
    Surface.ROTATION_0 -> Size(width, height)
    Surface.ROTATION_90 -> Size(height, width)
    Surface.ROTATION_180 -> Size(width, height)
    Surface.ROTATION_270 -> Size(height, width)
    else -> Size(width, height)
  }
}
@@ -0,0 +1,145 @@
|
||||
package com.mrousavy.camera.utils.outputs
|
||||
|
||||
import android.graphics.ImageFormat
|
||||
import android.hardware.HardwareBuffer
|
||||
import android.hardware.camera2.CameraCharacteristics
|
||||
import android.hardware.camera2.CameraManager
|
||||
import android.media.Image
|
||||
import android.media.ImageReader
|
||||
import android.media.MediaCodec
|
||||
import android.util.Log
|
||||
import android.util.Size
|
||||
import android.view.Surface
|
||||
import com.mrousavy.camera.CameraQueues
|
||||
import com.mrousavy.camera.extensions.closestToOrMax
|
||||
import com.mrousavy.camera.extensions.getPhotoSizes
|
||||
import com.mrousavy.camera.extensions.getPreviewSize
|
||||
import com.mrousavy.camera.extensions.getVideoSizes
|
||||
import com.mrousavy.camera.frameprocessor.Frame
|
||||
import com.mrousavy.camera.frameprocessor.FrameProcessor
|
||||
import com.mrousavy.camera.parsers.Orientation
|
||||
import java.io.Closeable
|
||||
import java.lang.IllegalStateException
|
||||
|
||||
class CameraOutputs(val cameraId: String,
|
||||
cameraManager: CameraManager,
|
||||
val preview: PreviewOutput? = null,
|
||||
val photo: PhotoOutput? = null,
|
||||
val video: VideoOutput? = null,
|
||||
val callback: Callback): Closeable {
|
||||
companion object {
|
||||
private const val TAG = "CameraOutputs"
|
||||
const val VIDEO_OUTPUT_BUFFER_SIZE = 3
|
||||
const val PHOTO_OUTPUT_BUFFER_SIZE = 3
|
||||
}
|
||||
|
||||
data class PreviewOutput(val surface: Surface)
|
||||
data class PhotoOutput(val targetSize: Size? = null,
|
||||
                         val format: Int = ImageFormat.JPEG)

  data class VideoOutput(val targetSize: Size? = null,
                         val enableRecording: Boolean = false,
                         val enableFrameProcessor: Boolean? = false,
                         val format: Int = ImageFormat.PRIVATE,
                         val hdrProfile: Long? = null /* DynamicRangeProfiles */)

  interface Callback {
    fun onPhotoCaptured(image: Image)
    fun onVideoFrameCaptured(image: Image)
  }

  var previewOutput: SurfaceOutput? = null
    private set
  var photoOutput: ImageReaderOutput? = null
    private set
  var videoOutput: SurfaceOutput? = null
    private set

  val size: Int
    get() {
      var size = 0
      if (previewOutput != null) size++
      if (photoOutput != null) size++
      if (videoOutput != null) size++
      return size
    }

  override fun equals(other: Any?): Boolean {
    if (other !is CameraOutputs) return false
    return this.cameraId == other.cameraId
        && (this.preview == null) == (other.preview == null)
        && this.photo?.targetSize == other.photo?.targetSize
        && this.photo?.format == other.photo?.format
        && this.video?.enableRecording == other.video?.enableRecording
        && this.video?.targetSize == other.video?.targetSize
        && this.video?.format == other.video?.format
  }

  override fun hashCode(): Int {
    var result = cameraId.hashCode()
    result += (preview?.hashCode() ?: 0)
    result += (photo?.hashCode() ?: 0)
    result += (video?.hashCode() ?: 0)
    return result
  }

  override fun close() {
    photoOutput?.close()
    videoOutput?.close()
  }

  override fun toString(): String {
    val strings = arrayListOf<String>()
    previewOutput?.let { strings.add(it.toString()) }
    photoOutput?.let { strings.add(it.toString()) }
    videoOutput?.let { strings.add(it.toString()) }
    return strings.joinToString(", ", "[", "]")
  }

  init {
    val characteristics = cameraManager.getCameraCharacteristics(cameraId)
    val config = characteristics.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP)!!

    Log.i(TAG, "Preparing Outputs for Camera $cameraId...")

    // Preview output: Low resolution repeating images (SurfaceView)
    if (preview != null) {
      Log.i(TAG, "Adding native preview view output.")
      previewOutput = SurfaceOutput(preview.surface, characteristics.getPreviewSize(), SurfaceOutput.OutputType.PREVIEW)
    }

    // Photo output: High quality still images (takePhoto())
    if (photo != null) {
      val size = characteristics.getPhotoSizes(photo.format).closestToOrMax(photo.targetSize)

      val imageReader = ImageReader.newInstance(size.width, size.height, photo.format, PHOTO_OUTPUT_BUFFER_SIZE)
      imageReader.setOnImageAvailableListener({ reader ->
        val image = reader.acquireLatestImage() ?: return@setOnImageAvailableListener
        callback.onPhotoCaptured(image)
      }, CameraQueues.cameraQueue.handler)

      Log.i(TAG, "Adding ${size.width}x${size.height} photo output. (Format: ${photo.format})")
      photoOutput = ImageReaderOutput(imageReader, SurfaceOutput.OutputType.PHOTO)
    }

    // Video output: High resolution repeating images (startRecording() or useFrameProcessor())
    if (video != null) {
      val size = characteristics.getVideoSizes(cameraId, video.format).closestToOrMax(video.targetSize)

      val flags = HardwareBuffer.USAGE_GPU_SAMPLED_IMAGE or HardwareBuffer.USAGE_VIDEO_ENCODE
      val imageReader = ImageReader.newInstance(size.width, size.height, video.format, VIDEO_OUTPUT_BUFFER_SIZE, flags)
      imageReader.setOnImageAvailableListener({ reader ->
        try {
          val image = reader.acquireNextImage() ?: return@setOnImageAvailableListener
          callback.onVideoFrameCaptured(image)
        } catch (e: IllegalStateException) {
          Log.e(TAG, "Failed to acquire a new Image, dropping a Frame.. The Frame Processor cannot keep up with the Camera's FPS!", e)
        }
      }, CameraQueues.videoQueue.handler)

      Log.i(TAG, "Adding ${size.width}x${size.height} video output. (Format: ${video.format} | HDR: ${video.hdrProfile})")
      videoOutput = ImageReaderOutput(imageReader, SurfaceOutput.OutputType.VIDEO)
    }

    Log.i(TAG, "Prepared $size Outputs for Camera $cameraId!")
  }
}
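
For context, a minimal sketch of how a caller could use the custom `equals()` above to avoid tearing down the capture session when nothing actually changed. `OutputsCache` and `update()` are hypothetical names used for illustration; they are not part of this diff:

// Sketch only: cache the current CameraOutputs and signal a session rebuild
// only when the new configuration actually differs (per CameraOutputs.equals()).
class OutputsCache {
  private var current: CameraOutputs? = null

  /** Returns true if the capture session must be recreated for [next]. */
  fun update(next: CameraOutputs): Boolean {
    if (next == current) return false // same cameraId and same photo/video settings
    current?.close()                  // releases the underlying ImageReaders
    current = next
    return true
  }
}
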
@@ -0,0 +1,22 @@
package com.mrousavy.camera.utils.outputs

import android.media.ImageReader
import android.util.Log
import android.util.Size
import java.io.Closeable

/**
 * A [SurfaceOutput] that uses an [ImageReader] as its surface.
 */
class ImageReaderOutput(private val imageReader: ImageReader,
                        outputType: OutputType,
                        dynamicRangeProfile: Long? = null): Closeable, SurfaceOutput(imageReader.surface, Size(imageReader.width, imageReader.height), outputType, dynamicRangeProfile) {
  override fun close() {
    Log.i(TAG, "Closing ${imageReader.width}x${imageReader.height} $outputType ImageReader..")
    imageReader.close()
  }

  override fun toString(): String {
    return "$outputType (${imageReader.width} x ${imageReader.height} in format #${imageReader.imageFormat})"
  }
}
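
A quick usage sketch for this wrapper; the size and format below are arbitrary example values, not taken from this diff:

import android.graphics.ImageFormat
import android.media.ImageReader

// Sketch only: wrap an ImageReader so it can be treated like any other SurfaceOutput.
val reader = ImageReader.newInstance(4032, 3024, ImageFormat.JPEG, 3 /* maxImages */)
val photoOutput = ImageReaderOutput(reader, SurfaceOutput.OutputType.PHOTO)
// photoOutput.surface (inherited from SurfaceOutput) is what gets attached to the
// capture session, while photoOutput.close() closes the ImageReader itself.
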
@@ -0,0 +1,80 @@
package com.mrousavy.camera.utils.outputs

import android.hardware.camera2.CameraCharacteristics
import android.hardware.camera2.CameraMetadata
import android.hardware.camera2.params.OutputConfiguration
import android.os.Build
import android.util.Log
import android.util.Size
import android.view.Surface
import androidx.annotation.RequiresApi
import java.io.Closeable

/**
 * A general-purpose Camera Output that writes to a [Surface]
 */
open class SurfaceOutput(val surface: Surface,
                         val size: Size,
                         val outputType: OutputType,
                         private val dynamicRangeProfile: Long? = null,
                         private val closeSurfaceOnEnd: Boolean = false): Closeable {
  companion object {
    const val TAG = "SurfaceOutput"

    private fun supportsOutputType(characteristics: CameraCharacteristics, outputType: OutputType): Boolean {
      if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.TIRAMISU) {
        val availableUseCases = characteristics.get(CameraCharacteristics.SCALER_AVAILABLE_STREAM_USE_CASES)
        if (availableUseCases != null) {
          if (availableUseCases.contains(outputType.toOutputType().toLong())) {
            return true
          }
        }
      }

      return false
    }
  }

  @RequiresApi(Build.VERSION_CODES.N)
  fun toOutputConfiguration(characteristics: CameraCharacteristics): OutputConfiguration {
    val result = OutputConfiguration(surface)
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.TIRAMISU) {
      if (dynamicRangeProfile != null) {
        result.dynamicRangeProfile = dynamicRangeProfile
        Log.i(TAG, "Using dynamic range profile ${result.dynamicRangeProfile} for $outputType output.")
      }
      if (supportsOutputType(characteristics, outputType)) {
        result.streamUseCase = outputType.toOutputType().toLong()
        Log.i(TAG, "Using optimized stream use case ${result.streamUseCase} for $outputType output.")
      }
    }
    return result
  }

  override fun toString(): String {
    return "$outputType (${size.width} x ${size.height})"
  }

  override fun close() {
    if (closeSurfaceOnEnd) {
      surface.release()
    }
  }

  enum class OutputType {
    PHOTO,
    VIDEO,
    PREVIEW,
    VIDEO_AND_PREVIEW;

    @RequiresApi(Build.VERSION_CODES.TIRAMISU)
    fun toOutputType(): Int {
      return when(this) {
        PHOTO -> CameraMetadata.SCALER_AVAILABLE_STREAM_USE_CASES_STILL_CAPTURE
        VIDEO -> CameraMetadata.SCALER_AVAILABLE_STREAM_USE_CASES_VIDEO_RECORD
        PREVIEW -> CameraMetadata.SCALER_AVAILABLE_STREAM_USE_CASES_PREVIEW
        VIDEO_AND_PREVIEW -> CameraMetadata.SCALER_AVAILABLE_STREAM_USE_CASES_PREVIEW_VIDEO_STILL
      }
    }
  }
}
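
For reference, a sketch of how these `OutputConfiguration`s could be fed into Camera2's `SessionConfiguration` path (API 28+). The `createSession` function and its parameters are illustrative, not part of this diff; only the Camera2 framework calls are real APIs:

import android.hardware.camera2.CameraCaptureSession
import android.hardware.camera2.CameraCharacteristics
import android.hardware.camera2.CameraDevice
import android.hardware.camera2.params.SessionConfiguration
import android.os.Build
import androidx.annotation.RequiresApi
import java.util.concurrent.Executor

// Sketch only: map each SurfaceOutput to an OutputConfiguration and open a session.
@RequiresApi(Build.VERSION_CODES.P)
fun createSession(device: CameraDevice,
                  characteristics: CameraCharacteristics,
                  outputs: List<SurfaceOutput>,
                  executor: Executor,
                  callback: CameraCaptureSession.StateCallback) {
  val configurations = outputs.map { it.toOutputConfiguration(characteristics) }
  device.createCaptureSession(
    SessionConfiguration(SessionConfiguration.SESSION_REGULAR, configurations, executor, callback))
}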