feat: Complete iOS Codebase rewrite (#1647)
* Make Frame Processors an extra subspec
* Update VisionCamera.podspec
* Make optional
* Make VisionCamera compile without Skia
* Fix
* Add skia again
* Update VisionCamera.podspec
* Make VisionCamera build without Frame Processors
* Rename error to `system/frame-processors-unavailable`
* Fix Frame Processor returning early
* Remove `preset`, FP partial rewrite
* Only warn on frame drop
* Fix wrong queue
* fix: Run on CameraQueue again
* Update CameraView.swift
* fix: Activate audio session asynchronously on audio queue
* Update CameraView+RecordVideo.swift
* Update PreviewView.h
* Cleanups
* Cleanup
* fix cast
* feat: Add LiDAR Depth Camera support
* Upgrade Ruby
* Add vector icons type
* Update Gemfile.lock
* fix: Stop queues on deinit
* Also load `builtInTrueDepthCamera`
* Update CameraViewManager.swift
* Update SkImageHelpers.mm
* Extract FrameProcessorCallback to FrameProcessor: holds more context now :)
* Rename to .m
* fix: Add `RCTLog` import
* Create SkiaFrameProcessor
* Update CameraBridge.h
* Call Frame Processor
* Fix defines
* fix: Allow deleting callback funcs
* fix Skia build
* batch
* Just call `setSkiaFrameProcessor`
* Rewrite in Swift
* Pass `SkiaRenderer`
* Fix Import
* Move `PreviewView` to Swift
* Fix Layer
* Set Skia Canvas to Frame Host Object
* Make `DrawableFrameHostObject` subclass
* Fix TS types
* Use same MTLDevice and apply scale
* Make getter
* Extract `setTorch` and `Preview`
* fix: Fix nil metal device
* Don't wait for session stop in deinit
* Use main pixel ratio
* Use unique_ptr for Render Contexts
* fix: Fix SkiaPreviewDisplayLink broken after deinit
* inline `getTextureCache`
* Update CameraPage.tsx
* chore: Format iOS
* perf: Allow MTLLayer to be optimized for only frame buffers
* Add RN Video types
* fix: Fix Frame Processors if guard
* Find nodeModules recursively
* Create `Frame.isDrawable`
* Add `cocoapods-check` dependency
@@ -18,34 +18,6 @@ Each camera device (see [Camera Devices](devices)) provides a number of capture
If you don't want to specify the best format for your camera device, you don't have to. The Camera _automatically chooses the best matching format_ for the current camera device. This is why the Camera's `format` property is _optional_.
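
For reference (this sketch is not part of the diff), relying on that automatic selection just means rendering the Camera without a `format` or `preset` prop. It assumes the same `useCameraDevices` hook, `StyleSheet` import, and hypothetical `LoadingView` component used in the `preset` example below:

```tsx
function App() {
  const devices = useCameraDevices()
  const device = devices.back

  if (device == null) return <LoadingView />
  // no `format` or `preset` prop, so the best matching format is chosen automatically
  return <Camera style={StyleSheet.absoluteFill} device={device} />
}
```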

If you don't want to do a lot of filtering, but still want to let the camera know what your intentions are, you can use the Camera's `preset` property.

For example, use the `'medium'` preset if you want to create a video-chat application that shouldn't excessively use mobile data:

```tsx
function App() {
  const devices = useCameraDevices()
  const device = devices.back

  if (device == null) return <LoadingView />
  return (
    <Camera
      style={StyleSheet.absoluteFill}
      device={device}
      preset="medium"
    />
  )
}
```

:::note
See the [CameraPreset.ts](https://github.com/mrousavy/react-native-vision-camera/blob/main/src/CameraPreset.ts) type for more information about presets
:::

:::warning
You cannot set `preset` and `format` at the same time; if `format` is set, `preset` must be `undefined` and vice versa!
:::

### What you need to know about cameras

To understand a bit more about camera formats, you first need to understand a few "general camera basics":
@@ -110,7 +82,7 @@ export const sortFormatsByResolution = (left: CameraDeviceFormat, right: CameraD
  // in this case, points aren't "normalized" (e.g. higher resolution = 1 point, lower resolution = -1 points)
  let leftPoints = left.photoHeight * left.photoWidth
  let rightPoints = right.photoHeight * right.photoWidth

  // we also care about video dimensions, not only photo.
  leftPoints += left.videoWidth * left.videoHeight
  rightPoints += right.videoWidth * right.videoHeight
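
The hunk above only shows the comparator's scoring logic. As a rough sketch of how such a comparator might be applied to a device's formats (not part of the commit; the `./formatSorter` import path and the `useBestFormat` hook name are made up for illustration):

```ts
import { useMemo } from 'react'
import { useCameraDevices } from 'react-native-vision-camera'
// comparator from this guide, assumed to be exported from a local helper module
import { sortFormatsByResolution } from './formatSorter'

function useBestFormat() {
  const devices = useCameraDevices()
  const device = devices.back

  return useMemo(() => {
    if (device?.formats == null) return undefined
    // copy before sorting so the device's formats array isn't mutated in place
    const sorted = [...device.formats].sort(sortFormatsByResolution)
    // assuming the comparator orders formats from highest to lowest resolution,
    // the first entry is the best match
    return sorted[0]
  }, [device?.formats])
}
```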