Before, Frame Processors ran on a separate Thread. After, Frame Processors run fully synchronously and always at the same FPS as the Camera. Two new functions have been introduced:

* `runAtTargetFps(fps: number, func: () => void)`: Runs the given code as often as the given `fps`, effectively throttling its calls.
* `runAsync(frame: Frame, func: () => void)`: Runs the given function on a separate Thread for Frame Processing. A strong reference to the Frame is held for as long as the function takes to execute.

You can use `runAtTargetFps` to throttle calls to a specific API (e.g. if your Camera is running at 60 FPS but you only want to run face detection at ~25 FPS, use `runAtTargetFps(25, ...)`).

You can use `runAsync` to run a heavy algorithm asynchronously, so that the Camera is not blocked while your algorithm runs. This is useful if your main sync processor draws something while your async processor does some image analysis on the side.

You can also combine both functions. Examples:

```js
const frameProcessor = useFrameProcessor((frame) => {
  'worklet'
  console.log("I'm running at 60 FPS!")
}, [])
```

```js
const frameProcessor = useFrameProcessor((frame) => {
  'worklet'
  console.log("I'm running at 60 FPS!")

  runAtTargetFps(10, () => {
    'worklet'
    console.log("I'm running at 10 FPS!")
  })
}, [])
```

```js
const frameProcessor = useFrameProcessor((frame) => {
  'worklet'
  console.log("I'm running at 60 FPS!")

  runAsync(frame, () => {
    'worklet'
    console.log("I'm running on another Thread, I can block for longer!")
  })
}, [])
```

```js
const frameProcessor = useFrameProcessor((frame) => {
  'worklet'
  console.log("I'm running at 60 FPS!")

  runAtTargetFps(10, () => {
    'worklet'
    runAsync(frame, () => {
      'worklet'
      console.log("I'm running on another Thread at 10 FPS, I can block for longer!")
    })
  })
}, [])
```
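For illustration, the throttling behaviour described above can be sketched in plain JavaScript. This is a hypothetical model of the interval-gating logic only — it is not VisionCamera's actual implementation, and `createFpsThrottle` is a made-up helper name:

```javascript
// Hypothetical sketch of FPS throttling, similar in spirit to runAtTargetFps.
// NOT VisionCamera's implementation — just a model of the timing logic.
function createFpsThrottle(targetFps) {
  const minIntervalMs = 1000 / targetFps
  let lastRunMs = -Infinity
  return (nowMs, func) => {
    // Only invoke func if enough time has passed since the last invocation.
    if (nowMs - lastRunMs >= minIntervalMs) {
      lastRunMs = nowMs
      func()
    }
  }
}

// Simulate a 60 FPS camera (one frame every ~16.7 ms) throttled to 10 FPS:
const runAt10Fps = createFpsThrottle(10)
let calls = 0
for (let frame = 0; frame < 60; frame++) {
  runAt10Fps(frame * (1000 / 60), () => { calls++ })
}
console.log(calls) // only about 10 of the 60 frames trigger the callback
```

In the real API the camera delivers frames to the worklet and `runAtTargetFps` keeps its own timing state per call site; the sketch only demonstrates the interval gating.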
# ios
This folder contains the iOS-platform-specific code for react-native-vision-camera.
## Prerequisites
- Install the Xcode command line tools:

  ```sh
  xcode-select --install
  ```

- Install SwiftFormat and SwiftLint:

  ```sh
  brew install swiftformat swiftlint
  ```
## Getting Started
It is recommended that you work on the code using the Example project (`example/ios/VisionCameraExample.xcworkspace`), since it always includes the React Native header files and lets you easily test your changes.
You can, however, still edit the library project here by opening `VisionCamera.xcodeproj`; this has the advantage of automatically formatting your code (SwiftFormat) and showing you linter errors (SwiftLint) when you build (⌘+B).
## Committing
Before committing, make sure that you're not violating the Swift or C++ code styles. To do that, run the following command:

```sh
yarn check-ios
```
This will also try to automatically fix any errors by re-formatting the Swift code.