* Nuke CameraX
* fix: Run View Finder on UI Thread
* Open Camera, set up Threads
* fix init
* Mirror if needed
* Try PreviewView
* Use max resolution
* Add `hardwareLevel` property
* Check if output type is supported
* Replace `frameRateRanges` with `minFps` and `maxFps`
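A minimal sketch of how `hardwareLevel` and the `minFps` / `maxFps` pair could be derived from Camera2; the function names and strings are illustrative, not the library's actual API:
```kotlin
import android.hardware.camera2.CameraCharacteristics
import android.hardware.camera2.CameraMetadata
import android.util.Range

// Map the Camera2 hardware level constant to a readable string.
fun getHardwareLevel(characteristics: CameraCharacteristics): String =
  when (characteristics.get(CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL)) {
    CameraMetadata.INFO_SUPPORTED_HARDWARE_LEVEL_LEGACY -> "legacy"
    CameraMetadata.INFO_SUPPORTED_HARDWARE_LEVEL_LIMITED -> "limited"
    CameraMetadata.INFO_SUPPORTED_HARDWARE_LEVEL_FULL -> "full"
    CameraMetadata.INFO_SUPPORTED_HARDWARE_LEVEL_3 -> "level-3"
    else -> "unknown"
  }

// Collapse the list of AE target FPS ranges into a single minFps/maxFps pair.
fun getFpsBounds(characteristics: CameraCharacteristics): Pair<Int, Int> {
  val ranges: Array<Range<Int>> =
    characteristics.get(CameraCharacteristics.CONTROL_AE_AVAILABLE_TARGET_FPS_RANGES) ?: emptyArray()
  val minFps = ranges.minOfOrNull { it.lower } ?: 0
  val maxFps = ranges.maxOfOrNull { it.upper } ?: 0
  return minFps to maxFps
}
```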
* Remove `isHighestPhotoQualitySupported`
* Remove `colorSpace`
The native platforms will use the best / most accurate colorSpace by default anyway.
* HDR
* Check from format
* fix
* Remove `supportsParallelVideoProcessing`
* Correctly return video/photo sizes on Android now. Finally
* Log all Device props
* Log if optimized usecase is used
* Cleanup
* Configure Camera Input only once
* Revert "Configure Camera Input only once"
This reverts commit 0fd6c03f54c7566cb5592053720c4a8743aba92e.
* Extract Camera configuration
* Try to reconfigure all
* Hook based
* Properly set up `CameraSession`
* Delete unused
* fix: Fix recreate when outputs change
* Update NativePreviewView.kt
* Use callback for closing
* Catch CameraAccessException
* Finally got it stable
* Remove isMirrored
* Implement `takePhoto()`
* Add ExifInterface library
* Run findViewById on UI Thread
* Add Photo Output Surface to takePhoto
* Fix Video Stabilization Modes
* Optimize Imports
* More logs
* Update CameraSession.kt
* Close Image
* Use separate Executor in CameraQueue
* Delete hooks
* Use same Thread again
* If opened, call error
* Update CameraSession.kt
* Log HW level
* fix: Don't enable Stream Use Case if it's not 100% supported
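A rough sketch of that check, assuming API 33+ (the helper name is hypothetical): the stream use case is only applied if the device explicitly advertises it.
```kotlin
import android.hardware.camera2.CameraCharacteristics
import android.os.Build

// Only opt into a Camera2 stream use case if the device lists it as supported.
// Pass e.g. CameraMetadata.SCALER_AVAILABLE_STREAM_USE_CASES_VIDEO_RECORD.toLong().
fun isStreamUseCaseSupported(characteristics: CameraCharacteristics, useCase: Long): Boolean {
  if (Build.VERSION.SDK_INT < Build.VERSION_CODES.TIRAMISU) return false
  val supported = characteristics.get(CameraCharacteristics.SCALER_AVAILABLE_STREAM_USE_CASES)
  return supported?.contains(useCase) == true
}
```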
* Move some stuff
* Cleanup PhotoOutputSynchronizer
* Try just open in suspend fun
* Some synchronization fixes
* fix logs
* Update CameraDevice+createCaptureSession.kt
* Update CameraDevice+createCaptureSession.kt
* fixes
* fix: Use Snapshot Template for speed capture priority
* Use PREVIEW template for repeating request
* Use `TEMPLATE_RECORD` if video use-case is attached
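Roughly, the template selection described by the two commits above; the `hasVideoOutput` parameter is a stand-in for however the session tracks its attached outputs:
```kotlin
import android.hardware.camera2.CameraDevice
import android.hardware.camera2.CaptureRequest

// Use TEMPLATE_RECORD for the repeating request when a video output is attached,
// otherwise fall back to TEMPLATE_PREVIEW.
fun createRepeatingRequest(device: CameraDevice, hasVideoOutput: Boolean): CaptureRequest.Builder {
  val template = if (hasVideoOutput) CameraDevice.TEMPLATE_RECORD else CameraDevice.TEMPLATE_PREVIEW
  return device.createCaptureRequest(template)
}
```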
* Use `isRunning` flag
* Recreate session every time on active/inactive
* Lazily get values in capture session
* Stability
* Rebuild session if outputs change
* Set `didOutputsChange` back to false
* Capture first in lock
* Try
* kinda fix it? idk
* fix: Keep Outputs
* Refactor into single method
* Update CameraView.kt
* Use Enums for type safety
* Implement Orientation (I think)
* Move RefCount management to Java (Frame)
* Don't crash when dropping a Frame
* Prefer Devices with higher max resolution
* Prefer multi-cams
* Use FastImage for Media Page
* Return orientation in takePhoto()
* Load orientation from EXIF Data
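A small sketch of reading that orientation back via the ExifInterface library; the function name and simplified return type are for illustration only:
```kotlin
import androidx.exifinterface.media.ExifInterface

// Read rotation and mirroring from the saved photo's EXIF orientation tag.
fun readExifOrientation(filePath: String): Pair<Int, Boolean> {
  val exif = ExifInterface(filePath)
  val rotationDegrees = exif.rotationDegrees // 0, 90, 180 or 270
  val isMirrored = exif.isFlipped            // true if the orientation flips the image
  return rotationDegrees to isMirrored
}
```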
* Add `isMirrored` props and documentation for PhotoFile
* fix: Return `not-determined` on Android
* Update CameraViewModule.kt
* chore: Upgrade packages
* fix: Fix Metro Config
* Cleanup config
* Properly mirror Images on save
* Prepare MediaRecorder
* Start/Stop MediaRecorder
* Remove `takeSnapshot()`
It no longer works on Android and never worked on iOS. Users can use `useFrameProcessor` to take a Snapshot instead.
* Use `MediaCodec`
* Move to `VideoRecording` class
* Cleanup Snapshot
* Create `SkiaPreviewView` hybrid class
* Create OpenGL context
* Create `SkiaPreviewView`
* Fix texture creation missing context
* Draw red frame
* Somehow get it working
* Add Skia CMake setup
* Start looping
* Init OpenGL
* Refactor into `SkiaRenderer`
* Cleanup PreviewSize
* Set up
* Only re-render UI if there is a new Frame
* Preview
* Fix init
* Try rendering Preview
* Update SkiaPreviewView.kt
* Log version
* Try using Skia (fail)
* Drawwwww!!!!!!!!!! 🎉
* Use Preview Size
* Clear first
* Refactor into SkiaRenderer
* Add `previewType: "none"` on iOS
* Simplify a lot
* Draw Camera? For some reason? I have no idea anymore
* Fix OpenGL errors
* Got it kinda working again?
* Actually draw Frame woah
* Clean up code
* Cleanup
* Update on main
* Synchronize render calls
* holy shit
* Update SkiaRenderer.cpp
* Update SkiaRenderer.cpp
* Refactor
* Update SkiaRenderer.cpp
* Check for `NO_INPUT_TEXTURE`
* Post & Wait
* Set input size
* Add Video back again
* Allow session without preview
* Convert JPEG to byte[]
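For reference, converting a JPEG `Image` to a `byte[]` is a single-plane copy; a minimal sketch:
```kotlin
import android.media.Image

// A JPEG Image has exactly one plane; copy its buffer into a byte array.
fun jpegToByteArray(image: Image): ByteArray {
  val buffer = image.planes[0].buffer
  val bytes = ByteArray(buffer.remaining())
  buffer.get(bytes)
  return bytes
}
```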
* feat: Use `ImageReader` and use YUV Image Buffers in Skia Context (#1689)
* Try to pass YUV Buffers as Pixmaps
* Create pixmap!
* Clean up
* Render to preview
* Only render if we have an output surface
* Update SkiaRenderer.cpp
* Fix Y+U+V sampling code
* Cleanup
* Fix Semaphore 0
* Use 4:2:0 YUV again idk
* Update SkiaRenderer.h
* Set minSdk to 26
* Set surface
* Revert "Set minSdk to 26"
This reverts commit c4085b7c16c628532e5c2d68cf7ed11c751d0b48.
* Set previewType
* feat: Video Recording with Camera2 (#1691)
* Rename
* Update CameraSession.kt
* Use `SurfaceHolder` instead of `SurfaceView` for output
* Update CameraOutputs.kt
* Update CameraSession.kt
* fix: Fix crash when Preview is null
* Check if snapshot capture is supported
* Update RecordingSession.kt
* S
* Use `MediaRecorder`
* Make audio optional
* Add Torch
* Output duration
* Update RecordingSession.kt
* Start RecordingSession
* logs
* More log
* Base for preparing pass-through Recording
* Use `ImageWriter` to append Images to the Recording Surface
* Stream PRIVATE GPU_SAMPLED_IMAGE Images
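A sketch of the pass-through idea from the two commits above, assuming API 29+ for the usage-flag overload (the exact wiring in the library may differ): PRIVATE, GPU-sampleable Images are read from the camera and forwarded to the recording Surface without a CPU copy.
```kotlin
import android.graphics.ImageFormat
import android.hardware.HardwareBuffer
import android.media.ImageReader
import android.media.ImageWriter
import android.view.Surface

// Create a PRIVATE ImageReader that can be sampled by the GPU, and forward
// every Image it produces to the recorder's input Surface via an ImageWriter.
fun connectPassThrough(width: Int, height: Int, recordingSurface: Surface): ImageReader {
  val reader = ImageReader.newInstance(
    width, height, ImageFormat.PRIVATE, /* maxImages */ 3,
    HardwareBuffer.USAGE_GPU_SAMPLED_IMAGE
  )
  val writer = ImageWriter.newInstance(recordingSurface, /* maxImages */ 3)
  reader.setOnImageAvailableListener({ r ->
    val image = r.acquireNextImage() ?: return@setOnImageAvailableListener
    writer.queueInputImage(image) // the writer takes ownership of the Image
  }, null)
  return reader
}
```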
* Add flags
* Close session on stop
* Allow customizing `videoCodec` and `fileType`
* Enable Torch
* Fix Torch Mode
* Fix comparing outputs with hashCode
* Update CameraSession.kt
* Correctly pass along Frame Processor
* fix: Use AUDIO_BIT_RATE of 16 * 44.1 kHz
* Use CAMCORDER instead of MIC microphone
* Use 1 channel
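The audio setup implied by the three commits above, as a rough sketch (in real MediaRecorder code, `setOutputFormat` and `setAudioEncoder` must be called in the right order around these settings):
```kotlin
import android.media.MediaRecorder

// CAMCORDER source, mono, 44.1 kHz sampling, 16 * 44,100 bps encoding bit rate.
fun configureAudio(recorder: MediaRecorder) {
  recorder.setAudioSource(MediaRecorder.AudioSource.CAMCORDER)
  // setOutputFormat(...) and setAudioEncoder(...) go here in the real setup
  recorder.setAudioChannels(1)
  recorder.setAudioSamplingRate(44_100)
  recorder.setAudioEncodingBitRate(16 * 44_100)
}
```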
* fix: Use `Orientation`
* Add `native` PixelFormat
* Update iOS to latest Skia integration
* feat: Add `pixelFormat` property to Camera
* Catch error in configureSession
* Fix JPEG format
* Clean up best match finder
* Update CameraDeviceDetails.kt
* Clamp sizes by maximum CamcorderProfile size
* Remove `getAvailableVideoCodecs`
* chore: release 3.0.0-rc.5
* Use maximum video size of RECORD as default
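A sketch of what that clamp can look like; Camera2's documentation defines the RECORD size as the device's CamcorderProfile QUALITY_HIGH resolution:
```kotlin
import android.media.CamcorderProfile
import android.util.Size

// The largest size the device can actually record, per its CamcorderProfile.
fun getMaxRecordingSize(cameraId: Int): Size {
  val profile = CamcorderProfile.get(cameraId, CamcorderProfile.QUALITY_HIGH)
  return Size(profile.videoFrameWidth, profile.videoFrameHeight)
}
```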
* Update CameraDeviceDetails.kt
* Add a todo
* Add JSON device to issue report
* Prefer `full` devices and flash
* Lock to 30 FPS on Samsung
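Roughly, the Samsung workaround (the exact condition the library checks may differ):
```kotlin
import android.hardware.camera2.CaptureRequest
import android.os.Build
import android.util.Range

// Pin the AE target FPS range to a fixed 30 FPS on Samsung devices.
fun applySamsungFpsWorkaround(builder: CaptureRequest.Builder) {
  if (Build.MANUFACTURER.equals("samsung", ignoreCase = true)) {
    builder.set(CaptureRequest.CONTROL_AE_TARGET_FPS_RANGE, Range(30, 30))
  }
}
```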
* Implement Zoom
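A minimal sketch of applying zoom on the capture request: on Android 11+ `CONTROL_ZOOM_RATIO` can be set directly, while older devices would need a `SCALER_CROP_REGION` based fallback (omitted here).
```kotlin
import android.hardware.camera2.CaptureRequest
import android.os.Build

// Apply the zoom factor to the request builder (Android 11+ only in this sketch).
fun applyZoom(builder: CaptureRequest.Builder, zoom: Float) {
  if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.R) {
    builder.set(CaptureRequest.CONTROL_ZOOM_RATIO, zoom)
  }
}
```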
* Refactor
* Format -> PixelFormat
* fix: Feat `pixelFormat` -> `pixelFormats`
* Update TROUBLESHOOTING.mdx
* Format
* fix: Implement `zoom` for Photo Capture
* fix: Don't run if `isActive` is `false`
* fix: Call `examplePlugin(frame)`
* fix: Fix Flash
* fix: Use `react-native-worklets-core`!
* fix: Fix import
Before, Frame Processors ran on a separate Thread.
After, Frame Processors run fully synchronously and always at the same FPS as the Camera.
Two new functions have been introduced:
* `runAtTargetFps(fps: number, func: () => void)`: Runs the given code as often as the given `fps`, effectively throttling its calls.
* `runAsync(frame: Frame, func: () => void)`: Runs the given function on a separate Thread for Frame Processing. A strong reference to the Frame is held as long as the function takes to execute.
You can use `runAtTargetFps` to throttle calls to a specific API (e.g. if your Camera is running at 60 FPS but you only want to run face detection at ~25 FPS, use `runAtTargetFps(25, ...)`).
You can use `runAsync` to run a heavy algorithm asynchronously, so that the Camera is not blocked while your algorithm runs. This is useful if your main synchronous processor draws something and your async processor is doing some image analysis on the side.
You can also combine both functions.
Examples:
```js
const frameProcessor = useFrameProcessor((frame) => {
  'worklet'
  console.log("I'm running at 60 FPS!")
}, [])
```
```js
const frameProcessor = useFrameProcessor((frame) => {
  'worklet'
  console.log("I'm running at 60 FPS!")
  runAtTargetFps(10, () => {
    'worklet'
    console.log("I'm running at 10 FPS!")
  })
}, [])
```
```js
const frameProcessor = useFrameProcessor((frame) => {
  'worklet'
  console.log("I'm running at 60 FPS!")
  runAsync(frame, () => {
    'worklet'
    console.log("I'm running on another Thread, I can block for longer!")
  })
}, [])
```
```js
const frameProcessor = useFrameProcessor((frame) => {
  'worklet'
  console.log("I'm running at 60 FPS!")
  runAtTargetFps(10, () => {
    'worklet'
    runAsync(frame, () => {
      'worklet'
      console.log("I'm running on another Thread at 10 FPS, I can block for longer!")
    })
  })
}, [])
```
* Add `onFrameProcessorPerformanceSuggestionAvailable` and make `frameProcessorFps` support `auto`
* Implement performance suggestion and auto-adjusting
* Fix FPS setting, evaluate correctly
* Floor suggested FPS
* Remove `console.log` for frame drop warnings.
* Swift format
* Use `30` magic number
* only call if FPS is different
* Update CameraView.swift
* Implement Android 1/2
* Cleanup
* Update `frameProcessorFps` if available
* Optimize `FrameProcessorPerformanceDataCollector` initialization
* Cache call
* Set frameProcessorFps directly (Kotlin setter)
* Don't suggest if same value
* Call suggestion every second
* reset time on set
* Always store 15 last samples
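A hypothetical sketch of that sample collection (not the library's actual class), keeping the last 15 execution times, flooring the suggested FPS, and falling back to the `30` default:
```kotlin
import kotlin.math.floor

// Keep a rolling window of frame processor execution times and derive a suggested FPS.
class PerformanceSampleCollector(private val maxSamples: Int = 15) {
  private val samples = ArrayDeque<Double>() // seconds per frame

  fun addSample(executionTimeSeconds: Double) {
    samples.addLast(executionTimeSeconds)
    if (samples.size > maxSamples) samples.removeFirst()
  }

  val suggestedFps: Int
    get() {
      if (samples.isEmpty()) return 30
      return floor(1.0 / samples.average()).toInt()
    }
}
```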
* reset counter too
* Update FrameProcessorPerformanceDataCollector.swift
* Update CameraView+RecordVideo.swift
* Update CameraView.kt
* iOS: Redesign evaluation
* Update CameraView+RecordVideo.swift
* Android: Redesign evaluation
* Update CameraView.kt
* Update REA to latest alpha and install RNScreens
* Fix frameProcessorFps updating