* Nuke CameraX
* fix: Run View Finder on UI Thread
* Open Camera, set up Threads
* fix init
* Mirror if needed
* Try PreviewView
* Use max resolution
* Add `hardwareLevel` property
* Check if output type is supported
* Replace `frameRateRanges` with `minFps` and `maxFps`
* Remove `isHighestPhotoQualitySupported`
* Remove `colorSpace`
The native platforms already use the best and most accurate colorSpace by default anyway.
* HDR
* Check from format
* fix
* Remove `supportsParallelVideoProcessing`
* Correctly return video/photo sizes on Android now. Finally
* Log all Device props
* Log if optimized usecase is used
* Cleanup
* Configure Camera Input only once
* Revert "Configure Camera Input only once"
This reverts commit 0fd6c03f54c7566cb5592053720c4a8743aba92e.
* Extract Camera configuration
* Try to reconfigure all
* Hook based
* Properly set up `CameraSession`
* Delete unused
* fix: Fix recreate when outputs change
* Update NativePreviewView.kt
* Use callback for closing
* Catch CameraAccessException
* Finally got it stable
* Remove isMirrored
* Implement `takePhoto()`
* Add ExifInterface library
* Run findViewById on UI Thread
* Add Photo Output Surface to takePhoto
* Fix Video Stabilization Modes
* Optimize Imports
* More logs
* Update CameraSession.kt
* Close Image
* Use separate Executor in CameraQueue
* Delete hooks
* Use same Thread again
* If opened, call error
* Update CameraSession.kt
* Log HW level
* fix: Don't enable Stream Use Case if it's not 100% supported
* Move some stuff
* Cleanup PhotoOutputSynchronizer
* Try just open in suspend fun
* Some synchronization fixes
* fix logs
* Update CameraDevice+createCaptureSession.kt
* Update CameraDevice+createCaptureSession.kt
* fixes
* fix: Use Snapshot Template for speed capture prio
* Use PREVIEW template for repeating request
* Use `TEMPLATE_RECORD` if video use-case is attached
* Use `isRunning` flag
* Recreate session every time on active/inactive
* Lazily get values in capture session
* Stability
* Rebuild session if outputs change
* Set `didOutputsChange` back to false
* Capture first in lock
* Try
* kinda fix it? idk
* fix: Keep Outputs
* Refactor into single method
* Update CameraView.kt
* Use Enums for type safety
* Implement Orientation (I think)
* Move RefCount management to Java (Frame)
* Don't crash when dropping a Frame
* Prefer Devices with higher max resolution
* Prefer multi-cams
* Use FastImage for Media Page
* Return orientation in takePhoto()
* Load orientation from EXIF Data
* Add `isMirrored` props and documentation for PhotoFile
* fix: Return `not-determined` on Android
* Update CameraViewModule.kt
* chore: Upgrade packages
* fix: Fix Metro Config
* Cleanup config
* Properly mirror Images on save
* Prepare MediaRecorder
* Start/Stop MediaRecorder
* Remove `takeSnapshot()`
It no longer works on Android and never worked on iOS. Users can use `useFrameProcessor` to take a snapshot instead.
* Use `MediaCodec`
* Move to `VideoRecording` class
* Cleanup Snapshot
* Create `SkiaPreviewView` hybrid class
* Create OpenGL context
* Create `SkiaPreviewView`
* Fix texture creation missing context
* Draw red frame
* Somehow get it working
* Add Skia CMake setup
* Start looping
* Init OpenGL
* Refactor into `SkiaRenderer`
* Cleanup PreviewSize
* Set up
* Only re-render UI if there is a new Frame
* Preview
* Fix init
* Try rendering Preview
* Update SkiaPreviewView.kt
* Log version
* Try using Skia (fail)
* Drawwwww!!!!!!!!!! 🎉
* Use Preview Size
* Clear first
* Refactor into SkiaRenderer
* Add `previewType: "none"` on iOS
* Simplify a lot
* Draw Camera? For some reason? I have no idea anymore
* Fix OpenGL errors
* Got it kinda working again?
* Actually draw Frame woah
* Clean up code
* Cleanup
* Update on main
* Synchronize render calls
* holy shit
* Update SkiaRenderer.cpp
* Update SkiaRenderer.cpp
* Refactor
* Update SkiaRenderer.cpp
* Check for `NO_INPUT_TEXTURE`
* Post & Wait
* Set input size
* Add Video back again
* Allow session without preview
* Convert JPEG to byte[]
* feat: Use `ImageReader` and use YUV Image Buffers in Skia Context (#1689)
* Try to pass YUV Buffers as Pixmaps
* Create pixmap!
* Clean up
* Render to preview
* Only render if we have an output surface
* Update SkiaRenderer.cpp
* Fix Y+U+V sampling code
* Cleanup
* Fix Semaphore 0
* Use 4:2:0 YUV again idk
* Update SkiaRenderer.h
* Set minSdk to 26
* Set surface
* Revert "Set minSdk to 26"
This reverts commit c4085b7c16c628532e5c2d68cf7ed11c751d0b48.
* Set previewType
* feat: Video Recording with Camera2 (#1691)
* Rename
* Update CameraSession.kt
* Use `SurfaceHolder` instead of `SurfaceView` for output
* Update CameraOutputs.kt
* Update CameraSession.kt
* fix: Fix crash when Preview is null
* Check if snapshot capture is supported
* Update RecordingSession.kt
* S
* Use `MediaRecorder`
* Make audio optional
* Add Torch
* Output duration
* Update RecordingSession.kt
* Start RecordingSession
* logs
* More log
* Base for preparing pass-through Recording
* Use `ImageWriter` to append Images to the Recording Surface
* Stream PRIVATE GPU_SAMPLED_IMAGE Images
* Add flags
* Close session on stop
* Allow customizing `videoCodec` and `fileType`
* Enable Torch
* Fix Torch Mode
* Fix comparing outputs with hashCode
* Update CameraSession.kt
* Correctly pass along Frame Processor
* fix: Use AUDIO_BIT_RATE of 16 * 44,1Khz
* Use CAMCORDER instead of MIC microphone
* Use 1 channel
* fix: Use `Orientation`
* Add `native` PixelFormat
* Update iOS to latest Skia integration
* feat: Add `pixelFormat` property to Camera
* Catch error in configureSession
* Fix JPEG format
* Clean up best match finder
* Update CameraDeviceDetails.kt
* Clamp sizes by maximum CamcorderProfile size
* Remove `getAvailableVideoCodecs`
* chore: release 3.0.0-rc.5
* Use maximum video size of RECORD as default
* Update CameraDeviceDetails.kt
* Add a todo
* Add JSON device to issue report
* Prefer `full` devices and flash
* Lock to 30 FPS on Samsung
* Implement Zoom
* Refactor
* Format -> PixelFormat
* fix: Feat `pixelFormat` -> `pixelFormats`
* Update TROUBLESHOOTING.mdx
* Format
* fix: Implement `zoom` for Photo Capture
* fix: Don't run if `isActive` is `false`
* fix: Call `examplePlugin(frame)`
* fix: Fix Flash
* fix: Use `react-native-worklets-core`!
* fix: Fix import
* fix: Fix CI for "Build Android"
* update versions
* Update Gemfile.lock
* format swift
* fix: Fix swift lint
* Update .swiftlint.yml
* Use C++17 for lint
* fix: Fix C++ lints
Before, Frame Processors ran on a separate Thread.
After, Frame Processors run fully synchronously, always at the same FPS as the Camera.
Two new functions have been introduced:
* `runAtTargetFps(fps: number, func: () => void)`: Runs the given code as often as the given `fps`, effectively throttling its calls.
* `runAsync(frame: Frame, func: () => void)`: Runs the given function on a separate Thread for Frame Processing. A strong reference to the Frame is held for as long as the function takes to execute.
You can use `runAtTargetFps` to throttle calls to a specific API (e.g. if your Camera is running at 60 FPS but you only want to run face detection at ~25 FPS, use `runAtTargetFps(25, ...)`).
You can use `runAsync` to run a heavy algorithm asynchronously, so that the Camera is not blocked while your algorithm runs. This is useful if your main synchronous processor draws something while your async processor does some image analysis on the side.
You can also combine both functions.
Examples:
```js
const frameProcessor = useFrameProcessor((frame) => {
'worklet'
console.log("I'm running at 60 FPS!")
}, [])
```
```js
const frameProcessor = useFrameProcessor((frame) => {
'worklet'
console.log("I'm running at 60 FPS!")
runAtTargetFps(10, () => {
'worklet'
console.log("I'm running at 10 FPS!")
})
}, [])
```
```js
const frameProcessor = useFrameProcessor((frame) => {
'worklet'
console.log("I'm running at 60 FPS!")
runAsync(frame, () => {
'worklet'
console.log("I'm running on another Thread, I can block for longer!")
})
}, [])
```
```js
const frameProcessor = useFrameProcessor((frame) => {
'worklet'
console.log("I'm running at 60 FPS!")
runAtTargetFps(10, () => {
'worklet'
runAsync(frame, () => {
'worklet'
console.log("I'm running on another Thread at 10 FPS, I can block for longer!")
})
})
}, [])
```
* feat: Allow returning of ImageProxy in a Frame Processor
* chore: Clean up
* fix: Missing space
* Update useFrameProcessor.ts
* Revert "Update useFrameProcessor.ts"
This reverts commit 9c645489cdfdf2079972669756a2cd20cc81e25e.
* Calculate a format's video dimensions based on supported resolutions and photo dimensions
* Add Android fallback strategy for recording quality
* Ensure that session props are not ignored when app is resumed
* Simplify setting Android video dimensions in format
* Modify Android imageAnalysisBuilder to use photoSize
* Update onHostResume function to reference android preview issue
* Add missing Android capture errors
Accessing `previewView.bitmap` was throwing an error because it wasn't being done on the main thread. Any access to `previewView` must happen on the main (UI) thread; this commit fixes the issue by ensuring that access now runs there.
Fixes #547
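The fix described above can be sketched roughly as follows. This is a minimal, hypothetical illustration (the function name `takeSnapshot` and its surrounding code are assumptions, not the actual patch): the point is only that the `previewView.bitmap` read is hopped onto the main dispatcher via `withContext`.

```kotlin
import android.graphics.Bitmap
import androidx.camera.view.PreviewView
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.withContext

// Hypothetical sketch: PreviewView (including its `bitmap` getter) may only be
// touched on the main (UI) thread, so we switch dispatchers before reading it.
suspend fun takeSnapshot(previewView: PreviewView): Bitmap? =
    withContext(Dispatchers.Main) {
        previewView.bitmap
    }
```

Calling this from any coroutine is then safe regardless of which thread the caller is on.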
This commit fixes #758. I was having the same issue and looked into it a bit. I found
[this StackOverflow answer](https://stackoverflow.com/a/60585382), which describes a
solution to the same problem. Rather than manually calculating the focus point, we can
let the PreviewView do it for us. This fixes the issue because the PreviewView
factors in any scaling or resizing of the view on the screen, which we weren't doing
before. The only potential issue is that this needs to run on the UI thread
(which is what the `withContext` does), but I've tested it with frame processors
enabled and disabled, and found no issues in either case.
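A rough sketch of the PreviewView-based approach described above, under the assumption that a CameraX `Camera` handle is available (the function name `focusAt` and its parameters are illustrative, not the actual patch). PreviewView's `MeteringPointFactory` already accounts for the view's scaling and cropping, so no manual coordinate math is needed:

```kotlin
import androidx.camera.core.Camera
import androidx.camera.core.FocusMeteringAction
import androidx.camera.view.PreviewView

// Hypothetical sketch: convert a tap position in view coordinates into a
// MeteringPoint via the PreviewView, then start auto-focus metering on it.
// Must be called on the main (UI) thread.
fun focusAt(previewView: PreviewView, camera: Camera, x: Float, y: Float) {
    val point = previewView.meteringPointFactory.createPoint(x, y)
    val action = FocusMeteringAction.Builder(point, FocusMeteringAction.FLAG_AF)
        .build()
    camera.cameraControl.startFocusAndMetering(action)
}
```

The key design point is that the factory, not the caller, owns the view-to-sensor coordinate transform.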
* fix: Use `rootDir` instead of `projectDir`
* Revert "fix: Use `rootDir` instead of `projectDir`"
This reverts commit 058e0a110bcf9b688e12a1cccbac2f23a29fa55c.
* fix: Find node_modules path where react-native/ lives
* fix: Figure out VisionCameraExample project
* Revert "fix: Figure out VisionCameraExample project"
This reverts commit 7ca455098244dd62280d40586062803d1ccc2c5f.