* feat: Use JSI's `ArrayBuffer` instead of `TypedArray` (see the `ArrayBuffer` sketch after this list)
* fix: Fix memory move
* feat: Implement iOS
* Format
* Update JSIJNIConversion.cpp
* fix: Fix Android `toArrayBuffer` and other issues
* Catch Frame Processor call errors
* Update return type
* Use `CPU_READ_OFTEN` flag as well
* CPU flag
* Run destructors under `jni::ThreadScope` (see the `ThreadScope` sketch after this list)
* Update FrameProcessorPluginHostObject.cpp
* fix: Fix `toArrayBuffer()` crash
* Update Frame.ts
* feat: Create `TypedArray` class for Frame Processor Plugins
* Type
* feat: Pass `VisionCameraProxy` along (BREAKING)
* feat: Finish implementation
* Log a bit
* feat: Successfully convert JSI <> JNI buffers (see the buffer-conversion sketch after this list)
* Wrap buffer
* fix: Fix using wrong Runtime
* feat: Add docs
* Add zero-copy example
* Format C++
* Create iOS base
* feat: Finish iOS implementation
* chore: Format
* fix: Use `NSData` instead of `NSMutableData`
* Format
* fix: Fix build when Frame Processors are disabled
* chore: Rename `TypedArray` to `SharedArray`
* fix: Fix Swift typings for Array
* Remove a few default inits
* fix: Fix Android build
* fix: Use `NSInteger`
* Update SharedArray.mm
* fix: Expose bytes directly on iOS (NSData was immutable)
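The `ArrayBuffer` change above roughly maps to JSI's `MutableBuffer` API. Below is a minimal sketch, assuming a simple `std::vector`-backed store; the `NativeBuffer` class and `makeArrayBuffer` function are illustrative, not VisionCamera's actual `SharedArray` implementation:

```cpp
#include <jsi/jsi.h>
#include <memory>
#include <vector>

using namespace facebook;

// Illustrative backing store; VisionCamera's real SharedArray may differ.
class NativeBuffer : public jsi::MutableBuffer {
 public:
  explicit NativeBuffer(size_t size) : data_(size) {}
  size_t size() const override { return data_.size(); }
  uint8_t* data() override { return data_.data(); }

 private:
  std::vector<uint8_t> data_;
};

// Wrap the native memory as a JS ArrayBuffer without copying.
jsi::ArrayBuffer makeArrayBuffer(jsi::Runtime& runtime, size_t size) {
  auto buffer = std::make_shared<NativeBuffer>(size);
  return jsi::ArrayBuffer(runtime, std::move(buffer));
}
```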
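The `jni::ThreadScope` commit addresses a common fbjni pitfall: destructors of objects holding JNI references can run on threads that were never attached to the JVM (for example, a JS garbage-collector thread). A minimal sketch of the pattern, with a hypothetical `FrameHolder` class:

```cpp
#include <fbjni/fbjni.h>

using namespace facebook;

// Hypothetical holder whose destructor may run on a C++-only thread
// that is not attached to the JVM.
class FrameHolder {
 public:
  explicit FrameHolder(jni::alias_ref<jobject> frame)
      : frame_(jni::make_global(frame)) {}

  ~FrameHolder() {
    // Attach the current thread to the JVM for the duration of this scope
    // so that releasing the global_ref below is safe.
    jni::ThreadScope scope;
    frame_.reset();
  }

 private:
  jni::global_ref<jobject> frame_;
};
```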
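For the JSI <> JNI buffer conversion, fbjni's `JByteBuffer::wrapBytes` can expose an `ArrayBuffer`'s memory as a direct `java.nio.ByteBuffer` without copying. A minimal sketch (the `toByteBuffer` function name is illustrative):

```cpp
#include <fbjni/ByteBuffer.h>
#include <fbjni/fbjni.h>
#include <jsi/jsi.h>

using namespace facebook;

// Wrap a JSI ArrayBuffer's memory as a direct java.nio.ByteBuffer.
// No copy is made, so the Java side must not outlive the JS buffer.
jni::local_ref<jni::JByteBuffer> toByteBuffer(jsi::Runtime& runtime,
                                              jsi::ArrayBuffer& arrayBuffer) {
  return jni::JByteBuffer::wrapBytes(arrayBuffer.data(runtime),
                                     arrayBuffer.size(runtime));
}
```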
1. Reverts 4e96eb77e0 (PR #1789) to bring the C++ OpenGL GPU Pipeline back.
2. Fixes the "initHybrid JNI not found" error by loading the native JNI/C++ library in `VideoPipeline.kt`.
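For context, `initHybrid` is an fbjni native method that only becomes resolvable once the `.so` containing its registration has been loaded, which is why the `System.loadLibrary` call in `VideoPipeline.kt` fixes the error. A rough sketch of the C++ side (the Java descriptor here is hypothetical):

```cpp
#include <fbjni/fbjni.h>

using namespace facebook;

// Sketch of an fbjni HybridClass; the Java descriptor is hypothetical.
struct VideoPipeline : public jni::HybridClass<VideoPipeline> {
  static constexpr auto kJavaDescriptor = "Lcom/mrousavy/camera/utils/VideoPipeline;";

  static jni::local_ref<jhybriddata> initHybrid(jni::alias_ref<jhybridobject>) {
    return makeCxxInstance();
  }

  static void registerNatives() {
    // This registration only happens after the .so is loaded; calling
    // initHybrid from Kotlin before System.loadLibrary(...) fails with
    // "initHybrid JNI not found".
    registerHybrid({makeNativeMethod("initHybrid", VideoPipeline::initHybrid)});
  }
};
```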
This PR has two downsides:
1. `pixelFormat="yuv"` does not work on Android, since OpenGL only renders in RGB.
2. OpenGL rendering is fast, but it still adds overhead. I don't think a Camera -> Video Recording path should need an entire OpenGL rendering pipeline.
The original plan was to do something similar to how it works on iOS by just passing GPU buffers around, but the android.media APIs just aren't as advanced yet: `ImageReader`/`ImageWriter` are way too buggy and don't really work with `MediaRecorder`/`MediaCodec`.
This sucks; I hope we can use something like `AHardwareBuffer`s in the future.
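For reference, a minimal sketch of the NDK `AHardwareBuffer` allocation API hinted at above, combining GPU usage with the `CPU_READ_OFTEN` flag mentioned in the commit list (dimensions, format, and the function name are illustrative):

```cpp
#include <android/hardware_buffer.h>

// Allocate a GPU-shareable buffer that the CPU can also read often,
// combining GPU sampling with the CPU_READ_OFTEN usage flag.
AHardwareBuffer* allocateFrameBuffer(uint32_t width, uint32_t height) {
  AHardwareBuffer_Desc desc = {};
  desc.width = width;
  desc.height = height;
  desc.layers = 1;
  desc.format = AHARDWAREBUFFER_FORMAT_R8G8B8A8_UNORM;
  desc.usage = AHARDWAREBUFFER_USAGE_GPU_SAMPLED_IMAGE |
               AHARDWAREBUFFER_USAGE_CPU_READ_OFTEN;

  AHardwareBuffer* buffer = nullptr;
  if (AHardwareBuffer_allocate(&desc, &buffer) != 0) {
    return nullptr;  // allocation failed
  }
  return buffer;  // caller releases with AHardwareBuffer_release()
}
```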