* feat: Call Skia Renderer
* Use default NativePreviewView for Skia
* Render to separate FBO
* It appears once
* Refactor a lot lol
* Pass width/height
* Read width/heights
* Update SkiaRenderer.cpp
* Read stencil/samples
* Use switch for target
* Clear full red
* Update VideoPipeline.cpp
* fix: Use `BorrowTextureFrom` instead of `AdoptTextureFrom`
* Get it to work
* Draw Camera Frame again (only works for first frame)
* glDisable(GL_BLEND)
* Use Frame Buffer again
* Simplify Skia offscreen surface creation
* fix: Get it to kinda work?
* fix: Remove `sampler2D` shader
Only the EXTERNAL_OES one kinda works
* Revert "fix: Remove `sampler2D` shader"
This reverts commit bf241a82f440f5a442f23a2b10329b813e7cdb3e.
* Revert "fix: Get it to kinda work?"
This reverts commit ea6a8784ad8dc7d05e8076591874f021b51dd84a.
* fix: Use Skia for rendering
* Simplify drawing code a lot
* Clean up drawing loop a bit more
* Some docs
* Update SkiaRenderer.cpp
* Surface
* try to use Matrix
* Use BottomLeft as a surface origin again
* Get actual surface dimensions
* Use 1x1 pbuffer instead
* Update SkiaRenderer.cpp
* Update SkiaRenderer.cpp
* feat: Implement Skia Frame Processor (#1735)
* feat: Implement JS Skia Frame Processor
* Update SkiaRenderer.cpp
* push
* Create Frame from C++
* compile
* Compile
* Update VideoPipeline.cpp
* Fix JNI local ref
* Use `HardwareBuffer` for implementation
* feat: Custom `Frame` implementation that uses CPU `ByteBuffer` (#1736)
* try: Try to just create a CPU based ByteBuffer
* fix: Fix Java Type
* fix remaining errors
* try fixing FrameFactory
* Use `free`
* fix: Fix scene mode crash on some emulators
* fix: Fix scene mode crash on some emulators
* Fix getting pixels
* fix: Fix buffer not being freed
* Add some docs to `Frame`
* Test Skia again
* Use `getCurrentPresentationTime()`
* Remove `FrameFactory.cpp`
* Update VideoPipeline.h
* Update VideoPipeline.cpp
* Nuke CameraX
* fix: Run View Finder on UI Thread
* Open Camera, set up Threads
* fix init
* Mirror if needed
* Try PreviewView
* Use max resolution
* Add `hardwareLevel` property
* Check if output type is supported
* Replace `frameRateRanges` with `minFps` and `maxFps`
* Remove `isHighestPhotoQualitySupported`
* Remove `colorSpace`
The native platforms will use the best / most accurate colorSpace by default anyway.
* HDR
* Check from format
* fix
* Remove `supportsParallelVideoProcessing`
* Correctly return video/photo sizes on Android now. Finally
* Log all Device props
* Log if optimized usecase is used
* Cleanup
* Configure Camera Input only once
* Revert "Configure Camera Input only once"
This reverts commit 0fd6c03f54c7566cb5592053720c4a8743aba92e.
* Extract Camera configuration
* Try to reconfigure all
* Hook based
* Properly set up `CameraSession`
* Delete unused
* fix: Fix recreate when outputs change
* Update NativePreviewView.kt
* Use callback for closing
* Catch CameraAccessException
* Finally got it stable
* Remove isMirrored
* Implement `takePhoto()`
* Add ExifInterface library
* Run findViewById on UI Thread
* Add Photo Output Surface to takePhoto
* Fix Video Stabilization Modes
* Optimize Imports
* More logs
* Update CameraSession.kt
* Close Image
* Use separate Executor in CameraQueue
* Delete hooks
* Use same Thread again
* If opened, call error
* Update CameraSession.kt
* Log HW level
* fix: Don't enable Stream Use Case if it's not 100% supported
* Move some stuff
* Cleanup PhotoOutputSynchronizer
* Try just open in suspend fun
* Some synchronization fixes
* fix logs
* Update CameraDevice+createCaptureSession.kt
* Update CameraDevice+createCaptureSession.kt
* fixes
* fix: Use Snapshot Template for speed capture prio
* Use PREVIEW template for repeating request
* Use `TEMPLATE_RECORD` if video use-case is attached
* Use `isRunning` flag
* Recreate session every time on active/inactive
* Lazily get values in capture session
* Stability
* Rebuild session if outputs change
* Set `didOutputsChange` back to false
* Capture first in lock
* Try
* kinda fix it? idk
* fix: Keep Outputs
* Refactor into single method
* Update CameraView.kt
* Use Enums for type safety
* Implement Orientation (I think)
* Move RefCount management to Java (Frame)
* Don't crash when dropping a Frame
* Prefer Devices with higher max resolution
* Prefer multi-cams
* Use FastImage for Media Page
* Return orientation in takePhoto()
* Load orientation from EXIF Data
* Add `isMirrored` props and documentation for PhotoFile
* fix: Return `not-determined` on Android
* Update CameraViewModule.kt
* chore: Upgrade packages
* fix: Fix Metro Config
* Cleanup config
* Properly mirror Images on save
* Prepare MediaRecorder
* Start/Stop MediaRecorder
* Remove `takeSnapshot()`
It no longer works on Android and never worked on iOS. Users can use `useFrameProcessor` to take a snapshot instead.
* Use `MediaCodec`
* Move to `VideoRecording` class
* Cleanup Snapshot
* Create `SkiaPreviewView` hybrid class
* Create OpenGL context
* Create `SkiaPreviewView`
* Fix texture creation missing context
* Draw red frame
* Somehow get it working
* Add Skia CMake setup
* Start looping
* Init OpenGL
* Refactor into `SkiaRenderer`
* Cleanup PreviewSize
* Set up
* Only re-render UI if there is a new Frame
* Preview
* Fix init
* Try rendering Preview
* Update SkiaPreviewView.kt
* Log version
* Try using Skia (fail)
* Drawwwww!!!!!!!!!! 🎉
* Use Preview Size
* Clear first
* Refactor into SkiaRenderer
* Add `previewType: "none"` on iOS
* Simplify a lot
* Draw Camera? For some reason? I have no idea anymore
* Fix OpenGL errors
* Got it kinda working again?
* Actually draw Frame woah
* Clean up code
* Cleanup
* Update on main
* Synchronize render calls
* holy shit
* Update SkiaRenderer.cpp
* Update SkiaRenderer.cpp
* Refactor
* Update SkiaRenderer.cpp
* Check for `NO_INPUT_TEXTURE`
* Post & Wait
* Set input size
* Add Video back again
* Allow session without preview
* Convert JPEG to byte[]
* feat: Use `ImageReader` and use YUV Image Buffers in Skia Context (#1689)
* Try to pass YUV Buffers as Pixmaps
* Create pixmap!
* Clean up
* Render to preview
* Only render if we have an output surface
* Update SkiaRenderer.cpp
* Fix Y+U+V sampling code
* Cleanup
* Fix Semaphore 0
* Use 4:2:0 YUV again idk
* Update SkiaRenderer.h
* Set minSdk to 26
* Set surface
* Revert "Set minSdk to 26"
This reverts commit c4085b7c16c628532e5c2d68cf7ed11c751d0b48.
* Set previewType
* feat: Video Recording with Camera2 (#1691)
* Rename
* Update CameraSession.kt
* Use `SurfaceHolder` instead of `SurfaceView` for output
* Update CameraOutputs.kt
* Update CameraSession.kt
* fix: Fix crash when Preview is null
* Check if snapshot capture is supported
* Update RecordingSession.kt
* S
* Use `MediaRecorder`
* Make audio optional
* Add Torch
* Output duration
* Update RecordingSession.kt
* Start RecordingSession
* logs
* More log
* Base for preparing pass-through Recording
* Use `ImageWriter` to append Images to the Recording Surface
* Stream PRIVATE GPU_SAMPLED_IMAGE Images
* Add flags
* Close session on stop
* Allow customizing `videoCodec` and `fileType`
* Enable Torch
* Fix Torch Mode
* Fix comparing outputs with hashCode
* Update CameraSession.kt
* Correctly pass along Frame Processor
* fix: Use AUDIO_BIT_RATE of 16 * 44.1 kHz
* Use CAMCORDER instead of MIC microphone
* Use 1 channel
* fix: Use `Orientation`
* Add `native` PixelFormat
* Update iOS to latest Skia integration
* feat: Add `pixelFormat` property to Camera
* Catch error in configureSession
* Fix JPEG format
* Clean up best match finder
* Update CameraDeviceDetails.kt
* Clamp sizes by maximum CamcorderProfile size
* Remove `getAvailableVideoCodecs`
* chore: release 3.0.0-rc.5
* Use maximum video size of RECORD as default
* Update CameraDeviceDetails.kt
* Add a todo
* Add JSON device to issue report
* Prefer `full` devices and flash
* Lock to 30 FPS on Samsung
* Implement Zoom
* Refactor
* Format -> PixelFormat
* fix: Feat `pixelFormat` -> `pixelFormats`
* Update TROUBLESHOOTING.mdx
* Format
* fix: Implement `zoom` for Photo Capture
* fix: Don't run if `isActive` is `false`
* fix: Call `examplePlugin(frame)`
* fix: Fix Flash
* fix: Use `react-native-worklets-core`!
* fix: Fix import
* feat: Add more Error insights when the Camera Module cannot be found
* Assert JSI is available
* Update error description
* fix
* Update CameraError.ts
Before, Frame Processors ran on a separate Thread.
After, Frame Processors run fully synchronously and always at the same FPS as the Camera.
Two new functions have been introduced:
* `runAtTargetFps(fps: number, func: () => void)`: Runs the given code at most as often as the given `fps`, effectively throttling its calls.
* `runAsync(frame: Frame, func: () => void)`: Runs the given function on a separate Thread for Frame Processing. A strong reference to the Frame is held for as long as the function takes to execute.
You can use `runAtTargetFps` to throttle calls to a specific API (e.g. if your Camera is running at 60 FPS but you only want to run face detection at ~25 FPS, use `runAtTargetFps(25, ...)`).
You can use `runAsync` to run a heavy algorithm asynchronously, so that the Camera is not blocked while your algorithm runs. This is useful if your main synchronous processor draws something and your async processor does some image analysis on the side.
You can also combine both functions.
Examples:
```js
const frameProcessor = useFrameProcessor((frame) => {
  'worklet'
  console.log("I'm running at 60 FPS!")
}, [])
```
```js
const frameProcessor = useFrameProcessor((frame) => {
  'worklet'
  console.log("I'm running at 60 FPS!")

  runAtTargetFps(10, () => {
    'worklet'
    console.log("I'm running at 10 FPS!")
  })
}, [])
```
```js
const frameProcessor = useFrameProcessor((frame) => {
  'worklet'
  console.log("I'm running at 60 FPS!")

  runAsync(frame, () => {
    'worklet'
    console.log("I'm running on another Thread, I can block for longer!")
  })
}, [])
```
```js
const frameProcessor = useFrameProcessor((frame) => {
  'worklet'
  console.log("I'm running at 60 FPS!")

  runAtTargetFps(10, () => {
    'worklet'
    runAsync(frame, () => {
      'worklet'
      console.log("I'm running on another Thread at 10 FPS, I can block for longer!")
    })
  })
}, [])
```
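Conceptually, a throttle like `runAtTargetFps` can be sketched as a simple timestamp gate. This is a hypothetical standalone sketch of the idea, not the library's actual worklet-based implementation — `createFpsThrottle` and its parameters are illustrative names:

```js
// Hypothetical sketch of an FPS throttle similar in spirit to runAtTargetFps.
// Core idea: only invoke the callback if enough time has elapsed since the
// last invocation.
function createFpsThrottle(targetFps) {
  const minIntervalMs = 1000 / targetFps
  let lastCallMs = -Infinity
  return (nowMs, func) => {
    if (nowMs - lastCallMs >= minIntervalMs) {
      lastCallMs = nowMs
      func()
    }
  }
}

// Usage: with 60 calls per second coming in (one every ~16.7 ms),
// a 10 FPS throttle lets roughly every 6th call through.
const throttle = createFpsThrottle(10)
let ran = 0
for (let frame = 0; frame < 60; frame++) {
  throttle(frame * (1000 / 60), () => { ran++ })
}
console.log(ran) // → 10
```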
* Calculate a format's video dimensions based on supported resolutions and photo dimensions
* Add Android fallback strategy for recording quality
* Ensure that session props are not ignored when app is resumed
* Simplify setting Android video dimensions in format
* Modify Android imageAnalysisBuilder to use photoSize
* Update onHostResume function to reference android preview issue
* Add missing Android capture errors
* feat: disableFrameProcessors for android via expo-config-plugin prop
* chore: naming
* feat: fix shared library issues with expo config plug prop flag
* fix: use a glob pattern instead of listing every single shared lib
* fix: use wildcard since libc++ is not enough (libhermes, libjni, libjsi etc)
* feat: 🎉 disable frame processors for iOS as well
* chore: comments
* chore: make eslint/ts happy
* chore: cleanup
* refactor: no need to pass a param here. We just want to disable it
* chore: remove withDangerouslyHandleAndroidSharedLibrary
* chore: remove danger plugin
* add video codec value
* add types
* use `recommendedVideoSettings` method instead
* lint
* refactor for better readability
* add a method to get available codecs (ios)
* improve tsDoc description of the videoCodec option
Co-authored-by: Marc Rousavy <marcrousavy@hotmail.com>
* ios format
Co-authored-by: Marc Rousavy <marcrousavy@hotmail.com>
* Add custom `onViewReady` event to get layout
`componentDidMount` is async, so the native view _might_ not exist yet, causing a race condition in the `setFrameProcessor` code.
This PR fixes that by calling `setFrameProcessor` only after the native view has actually mounted; to ensure that, I created a custom event that fires at that point.
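The deferral pattern can be sketched framework-free as follows. This is a minimal sketch, assuming a binding object between JS and the native view — `ViewBinding`, `applyToNativeView`, and `appliedFrameProcessor` are hypothetical names, not the library's real API; the real implementation wires this through the native `onViewReady` event:

```js
// Hypothetical sketch: defer setting the frame processor until the native
// view has actually mounted, instead of racing componentDidMount.
class ViewBinding {
  constructor() {
    this.isReady = false
    this.pendingFrameProcessor = undefined
    this.appliedFrameProcessor = undefined
  }

  // Called from JS. If the native view is not mounted yet,
  // remember the processor instead of applying it too early.
  setFrameProcessor(fp) {
    if (this.isReady) {
      this.applyToNativeView(fp)
    } else {
      this.pendingFrameProcessor = fp
    }
  }

  // Fired by the custom onViewReady event once the native view exists.
  onViewReady() {
    this.isReady = true
    if (this.pendingFrameProcessor !== undefined) {
      this.applyToNativeView(this.pendingFrameProcessor)
      this.pendingFrameProcessor = undefined
    }
  }

  applyToNativeView(fp) {
    // In the real library this would call into the native view manager.
    this.appliedFrameProcessor = fp
  }
}
```

The design choice here is that the JS side never has to know whether the mount has happened yet: the binding buffers at most one pending processor and flushes it exactly once when the ready event fires.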
* Update CameraView.swift
* fix: switched incorrect property ordering for qualityPrioritization options
fix: added extra step required to create a frame processor plugin on Android
* fix: adjusted the highlighted line
* chore: added guidelines on how to generate and check docs updates
* chore: change instructions so they aren't so unnecessarily wordy! :P
* Add `onFrameProcessorPerformanceSuggestionAvailable` and make `frameProcessorFps` support `auto`
* Implement performance suggestion and auto-adjusting
* Fix FPS setting, evaluate correctly
* Floor suggested FPS
* Remove `console.log` for frame drop warnings.
* Swift format
* Use `30` magic number
* only call if FPS is different
* Update CameraView.swift
* Implement Android 1/2
* Cleanup
* Update `frameProcessorFps` if available
* Optimize `FrameProcessorPerformanceDataCollector` initialization
* Cache call
* Set frameProcessorFps directly (Kotlin setter)
* Don't suggest if same value
* Call suggestion every second
* reset time on set
* Always store 15 last samples
* reset counter too
* Update FrameProcessorPerformanceDataCollector.swift
* Update CameraView+RecordVideo.swift
* Update CameraView.kt
* iOS: Redesign evaluation
* Update CameraView+RecordVideo.swift
* Android: Redesign evaluation
* Update CameraView.kt
* Update REA to latest alpha and install RNScreens
* Fix frameProcessorFps updating
* fix: Fix UI Thread race condition in `setFrameProcessor(...)`
* Revert "fix: Fix UI Thread race condition in `setFrameProcessor(...)`"
This reverts commit 9c524e123cff6843d7d11db602a5027d1bb06b4b.
* Use `setImmediate` to call `setFrameProcessor(...)`
* Fix frame processor order of applying
* Add `enableFrameProcessor` prop that defines if a FP is added
* rename constant
* Implement `enableFrameProcessor` prop for Android and make `frameProcessorFps` faster
* link to troubleshooting guide
* Update TROUBLESHOOTING.mdx
* Add logs for use-cases
* fix log
* set initial frame processor in `onLayout` instead of `componentDidMount`