* feat: Use JSI's `ArrayBuffer` instead of `TypedArray`
* fix: Fix memory move
* feat: Implement iOS
* Format
* Update JSIJNIConversion.cpp
* fix: Fix Android `toArrayBuffer` and other issues
* Catch Frame Processor call errors
* Update return type
* Use `CPU_READ_OFTEN` flag as well
* CPU flag
* Run destructors under `jni::ThreadScope`
* Update FrameProcessorPluginHostObject.cpp
* fix: Fix `toArrayBuffer()` crash
* Update Frame.ts
* feat: Create `TypedArray` class for Frame Processor Plugins
* Type
* feat: Pass `VisionCameraProxy` along (BREAKING)
* feat: Finish implementation
* Log a bit
* feat: Successfully convert JSI <> JNI buffers
* Wrap buffer
* fix: Fix using wrong Runtime
* feat: Add docs
* Add zero-copy example (see the sketch below)
* Format C++
* Create iOS base
* feat: Finish iOS implementation
* chore: Format
* fix: Use `NSData` instead of `NSMutableData`
* Format
* fix: Fix build when Frame Processors are disabled
* chore: Rename `TypedArray` to `SharedArray`
* fix: Fix Swift typings for Array
* Remove a few default inits
* fix: Fix Android build
* fix: Use `NSInteger`
* Update SharedArray.mm
* fix: Expose bytes directly on iOS (NSData was immutable)
* feat: Re-throw error on JS side instead of just logging on native side
* fix: Fix proxy
* fix: Fix app crash by only logging error
* fix: Use `global.ErrorUtils` (from reanimated)
* fix: Fix multi-threaded access on Java
* fix: Thread-lock access on iOS as well
* fix: Add missing header implementation
* Update Podfile.lock
* fix: Don't use `CFGetRetainCount`
* fix: Lock access on iOS as well
* C++ format
* More detailed error
* chore: Move getters into `Frame`
* Format c++
* Use enum `orientation` again
* Format
* fix: Synchronize `isValid` on Java
* Also log `pixelFormat`
* feat: Use Java enums in C++
* Format C++
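The `toArrayBuffer()` / `SharedArray` work above gives Frame Processors zero-copy access to the pixel data. A minimal sketch of consuming it from JS, assuming the v3 `useFrameProcessor` hook and `frame.toArrayBuffer()` API:
```ts
import { useFrameProcessor } from 'react-native-vision-camera'

export function usePixelProbe() {
  return useFrameProcessor((frame) => {
    'worklet'
    // Wraps the native pixel buffer without copying it (zero-copy).
    const buffer = frame.toArrayBuffer()
    const pixels = new Uint8Array(buffer)
    console.log(`first byte: ${pixels[0]} (${frame.width}x${frame.height})`)
  }, [])
}
```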
* Remove unused `hasConstants` method. Do not throw an error on the `minBy` call when the filtered `videoProfiles` list contains zero items.
* Remove changes not related to the fix.
* fix: Use `bitRate` multiplier instead of setting it to an absolute value
* Pass override
* Format
* Rename
* feat: Also implement Android
* fix: Log Mbps properly
* fix: Up-/down-scale bit-rate if options differ (sketched below)
* fix: Parse in Manager
* Update RecordingSession+getRecommendedBitRate.kt
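To illustrate the multiplier approach, here is a hedged sketch with hypothetical names (`recommendedBitRate`, `multiplier`); the idea is scaling a device-recommended base instead of hard-coding an absolute value:
```ts
// Hypothetical sketch: scale the recommended bit-rate up or down with a
// multiplier so the result stays reasonable across very different devices.
function getTargetBitRate(recommendedBitRate: number, multiplier: number): number {
  const target = recommendedBitRate * multiplier
  console.log(`Target bit-rate: ${(target / 1_000_000).toFixed(1)} Mbps`)
  return target
}

getTargetBitRate(10_000_000, 1.2) // e.g. "high" quality as a 1.2x multiplier -> 12.0 Mbps
```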
* fix: Close `CameraSession` if the View is removed
* fix: Use ViewManager's `onDropViewInstance` instead
* fix: Only stop repeating if we have a session
* fix: Reset `configuration` on `close()`
* feat: Split `videoHdr` and `photoHdr` into two settings
* fix: Rename all `hdr`
* fix: Fix HDR on Android
* Update CameraDeviceDetails.kt
* Update CameraDeviceDetails.kt
* fix: Correctly configure `pixelFormat` AFTER `format`
* Update CameraSession+Configuration.swift
* fix: Also after format changed
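With the split, HDR is now enabled per-output on the JS side; a sketch, where the format-filter field names are an assumption mirroring the new prop names:
```tsx
import { Camera, useCameraDevice, useCameraFormat } from 'react-native-vision-camera'

export function HdrCamera() {
  const device = useCameraDevice('back')
  // Filter field names (`videoHdr`/`photoHdr`) are assumed to mirror the props.
  const format = useCameraFormat(device, [{ videoHdr: true }, { photoHdr: true }])
  if (device == null) return null
  return (
    <Camera
      style={{ flex: 1 }}
      device={device}
      format={format}
      isActive={true}
      video={true}
      photo={true}
      videoHdr={true} // previously a single `hdr` prop covered both outputs
      photoHdr={true}
    />
  )
}
```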
* Android & TypeScript parts of scanned-code corner points; scanned frame dimensions are also included in the callback (example below). #2076
* TS fix. #2076
* Implement iOS parts of code scanner corner points with additional scanned frame data.
* Add example page for code scanning
* Use Point type from Point.ts
* Update package/src/CodeScanner.ts
Add parameters description to CodeScanner callback.
Co-authored-by: Marc Rousavy <marcrousavy@hotmail.com>
* Update package/src/CodeScanner.ts
More expressive description for CodeScannerFrame.
Co-authored-by: Marc Rousavy <marcrousavy@hotmail.com>
* Update package/src/CodeScanner.ts
Co-authored-by: Marc Rousavy <marcrousavy@hotmail.com>
* Update package/src/CodeScanner.ts
Co-authored-by: Marc Rousavy <marcrousavy@hotmail.com>
* Update package/ios/Core/CameraSession+CodeScanner.swift
Co-authored-by: Marc Rousavy <marcrousavy@hotmail.com>
* Update package/ios/Core/CameraSession+CodeScanner.swift
Co-authored-by: Marc Rousavy <marcrousavy@hotmail.com>
* Remove default values from `CodeScannerFrame`
* Linting
* Multiply code corner points in swift
---------
Co-authored-by: stemy <balazs.stemler@metrix.co.hu>
Co-authored-by: Zoli <iamzozo@metrix.co.hu>
Co-authored-by: Marc Rousavy <marcrousavy@hotmail.com>
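Putting the corner points and frame dimensions together, a scanning setup might look like this (a sketch against the `useCodeScanner` API from #2076):
```tsx
import { Camera, useCameraDevice, useCodeScanner } from 'react-native-vision-camera'

export function ScannerPage() {
  const device = useCameraDevice('back')
  const codeScanner = useCodeScanner({
    codeTypes: ['qr', 'ean-13'],
    onCodeScanned: (codes, frame) => {
      // `frame` carries the scanned frame's dimensions (CodeScannerFrame).
      console.log(`scanned inside a ${frame.width}x${frame.height} frame:`)
      for (const code of codes) {
        // `corners` holds the code's corner points in frame coordinates.
        console.log(code.value, code.corners)
      }
    },
  })
  if (device == null) return null
  return <Camera style={{ flex: 1 }} device={device} isActive={true} codeScanner={codeScanner} />
}
```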
* fix: set correct namespace in build.gradle
* chore: refactor Android project for compatibility with multiple Gradle versions
---------
Co-authored-by: Marc Rousavy <me@mrousavy.com>
Implements a semi-working version of flash photo capture for Android.
This isn't properly implemented, because a proper implementation requires a fully custom precapture sequence: enable the torch, wait for AE/AF to adjust, lock AE/AF, capture with a single torch burst, and then turn the torch off again. This is quite complex, which is why feature request #1890 is marked at $3,000.
For now, this is a simple flash burst which _sometimes works_ and _sometimes doesn't_, depending heavily on the device (see the usage sketch below).
If anyone wants truly working flash capture, consider sponsoring #1890.
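For reference, the flash burst is triggered through the regular `takePhoto` API; a sketch:
```ts
import { useRef } from 'react'
import { Camera } from 'react-native-vision-camera'

export function usePhotoWithFlash() {
  const camera = useRef<Camera>(null)
  return async () => {
    // On Android this currently fires the simple flash burst described above,
    // not a full precapture sequence, so results vary by device.
    return await camera.current?.takePhoto({ flash: 'on' })
  }
}
```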
* feat: Create base for `CameraConfiguration` diff
* Fix
* Write three configure methods
* Build?
* More
* Update CameraView+RecordVideo.kt
* Fix errors
* Update CameraDeviceDetails.kt
* Update CameraSession.kt
* Auto-resize Preview View
* More
* Make it work? idk
* Format
* Call `configure` under a mutex, and change `isActive`
* fix: Make Outputs comparable
* fix: Make CodeScanner comparable
* Format
* fix: Update outputs after reconfiguring
* Update CameraPage.tsx
* fix: Close CaptureSession before
In VisionCamera v1 & v2 there were two ObjC macros that helped with the creation/registration of Frame Processors, but these were removed with v3.
This PR reintroduces those macros, which will not only make Frame Processor development easier, but also fix issues people had with the registration of Swift Frame Processors (`+load` vs `+initialize` issues).
Docs were also updated to reflect that the macros should be used to correctly initialize and register ObjC/Swift Frame Processors (see the JS-side sketch below).
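On the JS side, a natively registered plugin is then looked up by name; a sketch assuming the v3 `VisionCameraProxy.initFrameProcessorPlugin` API (the exact function name has varied across v3 releases):
```ts
import { VisionCameraProxy, useFrameProcessor } from 'react-native-vision-camera'

// 'example_plugin' stands in for whatever name the native macro registered.
const plugin = VisionCameraProxy.initFrameProcessorPlugin('example_plugin')

export function useExamplePlugin() {
  return useFrameProcessor((frame) => {
    'worklet'
    if (plugin == null) throw new Error('Failed to load example_plugin!')
    console.log(plugin.call(frame))
  }, [])
}
```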
Moves everything Camera-related into `core/` / `Core/` so that it is better encapsulated from React Native.
Benefits:
1. Code is much better organized. Should be easier for collaborators now, and cleaner codebase for me.
2. Locking is fully atomic, as you can now only configure the session through a lock/mutex which is batch-overridable (sketched below)
* On iOS, this makes Camera startup time **MUCH** faster; I measured speedups from **1.5 seconds** to only **240 milliseconds**, since we only lock/commit once! 🚀
* On Android, this fixes a few out-of-sync/concurrency issues like "Capture Request contains unconfigured Input/Output Surface!" since it is now a single lock-operation! 💪
3. It is easier to integrate VisionCamera outside of React Native (e.g. Native iOS Apps, NativeScript, Flutter, etc)
With this PR, VisionCamera V3 is up to **7x** faster than V2
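The "lock once, commit once" idea is language-agnostic; here is a simplified TypeScript sketch of the pattern (not the actual native implementation, all names hypothetical):
```ts
// Hypothetical sketch: callers mutate a draft config under a serialized lock,
// and the session diffs and commits all changes in a single operation.
type CameraConfig = { isActive: boolean; zoom: number; videoHdr: boolean }

class CameraSession {
  private config: CameraConfig = { isActive: false, zoom: 1, videoHdr: false }
  private lock: Promise<void> = Promise.resolve()

  configure(update: (draft: CameraConfig) => void): Promise<void> {
    // Serialize configure calls: each one locks, applies its batch, commits once.
    this.lock = this.lock.then(() => {
      const draft = { ...this.config }
      update(draft)
      this.commit(this.config, draft)
      this.config = draft
    })
    return this.lock
  }

  private commit(prev: CameraConfig, next: CameraConfig): void {
    // Only touch what actually changed (the "diff").
    if (prev.zoom !== next.zoom) console.log(`zoom -> ${next.zoom}`)
    if (prev.videoHdr !== next.videoHdr) console.log(`videoHdr -> ${next.videoHdr}`)
    if (prev.isActive !== next.isActive) console.log(`isActive -> ${next.isActive}`)
  }
}

// Any number of changes still costs exactly one lock/commit:
new CameraSession().configure((c) => { c.zoom = 2; c.isActive = true })
```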
* feat: Route images through `ImageWriter` into OpenGL pipeline
* fix: Use RGB format
* fix: Every device supports YUV, RGB and NATIVE
* Update VideoPipeline.kt
* Log format
* Plug `ImageReader` into the OpenGL pipeline
* Call Frame Processor
* Format
* Remove logs
* feat: Use `HardwareBuffer` for `toArrayBuffer()`
* Format
* fix: Incorrect zoom on Android < 11
Fixes #1865
* Clamp zoom on Android
Some unclamped zoom values crash. For example, zoom={0.5} crashes
(tested on Android 9).
* Extract zoom into an extension (Android)
* Update package/android/src/main/java/com/mrousavy/camera/extensions/CaptureRequest+setZoom.kt
---------
Co-authored-by: Marc Rousavy <marcrousavy@hotmail.com>
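On the JS side, the corresponding safe pattern is clamping the `zoom` prop into the device's supported range (`minZoom`/`maxZoom` are existing `CameraDevice` fields):
```ts
import type { CameraDevice } from 'react-native-vision-camera'

// Clamp a requested zoom factor into the supported range, since unclamped
// values (e.g. zoom={0.5}) crashed on some Android versions.
function clampZoom(device: CameraDevice, requestedZoom: number): number {
  return Math.max(device.minZoom, Math.min(device.maxZoom, requestedZoom))
}
```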
* feat: Implement `resizeMode` prop for iOS
- `"cover"`: Keep aspect ratio, but fill entire parent view (centered).
- `"contain"`: Keep aspect ratio, but make sure the entire content is visible even if it introduces additional blank areas (centered).
* chore: Update prop docs
* Update CameraProps.ts
* Lint & Format
* feat(preview): respect format's aspect ratio
* fix: code guidelines and previewSize in PreviewView
* feat: add resizeMode 'cover' and 'contain' on Android
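In JSX the new prop is used like any other `Camera` prop:
```tsx
import { Camera, useCameraDevice } from 'react-native-vision-camera'

export function PreviewExample() {
  const device = useCameraDevice('back')
  if (device == null) return null
  // "cover" scales by the larger view/content ratio and crops;
  // "contain" scales by the smaller one and leaves blank areas.
  return <Camera style={{ flex: 1 }} device={device} isActive={true} resizeMode="contain" />
}
```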
1. Reverts 4e96eb77e0 (PR #1789) to bring the C++ OpenGL GPU Pipeline back.
2. Fixes the "initHybrid JNI not found" error by loading the native JNI/C++ library in `VideoPipeline.kt`.
This PR has two downsides:
1. `pixelFormat="yuv"` does not work on Android, since OpenGL only works in RGB.
2. OpenGL rendering is fast, but it has an overhead. I think for Camera -> Video Recording we shouldn't be using an entire OpenGL rendering pipeline.
The original plan was to use something similar to how it works on iOS by just passing GPU buffers around, but the android.media APIs just aren't as advanced yet. `ImageReader`/`ImageWriter` is way too buggy and doesn't really work with `MediaRecorder`/`MediaCodec`.
This sucks, I hope in the future we can use something like `AHardwareBuffer`s.
* feat: Add support for LiDAR, TrueDepth, External (USB) and Continuity Camera Devices (iOS 17)
* Rename `devices` -> `physicalDevices`
* fix: Comment out iOS 17 cameras for now
* fix: Move `supportsDepthCapture` to `format`
* fix: Fall back to `wide-angle-camera` for any unknown types
* Update CameraPage.tsx
* `descriptor` -> `physicalDeviceDescriptor`
* Update CameraDevice.ts
* Format
* feat: Expose `userPreferredCameraDevice`
Uses the new iOS 17 API where the user can prefer a default device; otherwise we fall back to the first of the available devices
* fix: Expose as property
* Add TODO comments
* fix: Format code
* fix: Compile below Swift 5.9
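On the JS side the new physical device types plug into device selection, e.g. via the v3 `useCameraDevice` hook:
```ts
import { useCameraDevice } from 'react-native-vision-camera'

export function useTripleCamera() {
  // Prefer a triple-camera device if available; the hook falls back to simpler
  // devices (and, on iOS 17, the user's preferred device can drive the default).
  return useCameraDevice('back', {
    physicalDevices: ['ultra-wide-angle-camera', 'wide-angle-camera', 'telephoto-camera'],
  })
}
```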