## ios
This folder contains the iOS-platform-specific code for react-native-vision-camera.
### Prerequisites
- Install the Xcode command line tools: `xcode-select --install`
- Install SwiftFormat and SwiftLint: `brew install swiftformat swiftlint`
### Getting Started
It is recommended that you work on the code using the Example project (`example/ios/VisionCameraExample.xcworkspace`), since that always includes the React Native header files and makes it easy to test your changes.
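A typical way to get the example app running might look like the following. This is only a sketch assuming the standard React Native workspace layout of this repo; check the example's own documentation for the exact steps:

```sh
# Hypothetical setup flow for the example app (paths assumed from the repo layout).
cd example
yarn install                            # install JS dependencies, including the local VisionCamera package
cd ios
pod install                             # install native pods and generate the .xcworkspace
open VisionCameraExample.xcworkspace    # open the workspace in Xcode
```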
You can, however, still edit the library project here by opening `VisionCamera.xcodeproj`. This has the advantage of automatically formatting your code (SwiftFormat) and showing you linter errors (SwiftLint) when you build (⌘+B).
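Formatting and linting on build is typically wired up as an Xcode "Run Script" build phase. The following is only an illustrative sketch of such a phase, not necessarily the exact one used in `VisionCamera.xcodeproj`:

```sh
# Hypothetical "Run Script" build phase; the real phase in VisionCamera.xcodeproj may differ.
if which swiftformat >/dev/null; then
  swiftformat "${SRCROOT}"              # re-format Swift sources in place
else
  echo "warning: SwiftFormat not installed, run 'brew install swiftformat'"
fi

if which swiftlint >/dev/null; then
  swiftlint                             # report lint violations as Xcode warnings/errors
else
  echo "warning: SwiftLint not installed, run 'brew install swiftlint'"
fi
```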
### Committing
Before committing, make sure that you're not violating the Swift or C++ code styles. To do that, run the following command:

`yarn check-ios`
This will also try to automatically fix any errors by re-formatting the Swift code.
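Under the hood, a check script like this usually just runs the two tools over the `ios/` sources. A rough, hypothetical equivalent is shown below; the actual script is defined in the repo's `package.json`/scripts and may differ:

```sh
# Hypothetical approximation of `yarn check-ios`; see package.json for the real script.
cd ios
swiftformat .        # re-format Swift sources in place
swiftlint --fix      # auto-correct fixable lint violations
swiftlint            # report any remaining violations
```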