feat: native Frame type to provide Orientation (#186)

* Use Frame.h

* Add orientation

* Determine buffer orientation

* Replace plugins

* fix calls

* Update FRAME_PROCESSOR_CREATE_PLUGIN_IOS.mdx

* Update FRAME_PROCESSOR_CREATE_PLUGIN_IOS.mdx

* format

* Update CameraPage.tsx

* Update FRAME_PROCESSOR_CREATE_PLUGIN_IOS.mdx

* Add links to docs

* Use `.` syntax

* Make properties `readonly`

* Fix `@synthesize` backing store
Marc Rousavy
2021-06-09 10:57:05 +02:00
committed by GitHub
parent 7025fc1cbe
commit 68a716b506
21 changed files with 179 additions and 116 deletions

View File

@@ -32,23 +32,23 @@ To achieve **maximum performance**, the `scanQRCodes` function is written in a n
The Frame Processor Plugin Registry API automatically manages type conversion between JS and native. Values are converted into the most efficient data structures, as seen here:
| JS Type | Objective-C Type | Java Type |
|----------------------|---------------------------|----------------------------|
| `number` | `NSNumber*` (double) | `double` |
| `boolean` | `NSNumber*` (boolean) | `boolean` |
| `string` | `NSString*` | `String` |
| `[]` | `NSArray*` | `Array<Object>` |
| `{}` | `NSDictionary*` | `HashMap<Object>` |
| `undefined` / `null` | `nil` | `null` |
| `(any, any) => void` | `RCTResponseSenderBlock` | `(Object, Object) -> void` |
| `Frame` | `CMSampleBufferRefHolder` | `ImageProxy` |
| JS Type | Objective-C Type | Java Type |
|----------------------|-------------------------------|----------------------------|
| `number` | `NSNumber*` (double) | `double` |
| `boolean` | `NSNumber*` (boolean) | `boolean` |
| `string` | `NSString*` | `String` |
| `[]` | `NSArray*` | `Array<Object>` |
| `{}` | `NSDictionary*` | `HashMap<Object>` |
| `undefined` / `null` | `nil` | `null` |
| `(any, any) => void` | [`RCTResponseSenderBlock`][4] | `(Object, Object) -> void` |
| [`Frame`][1] | [`Frame*`][2] | [`ImageProxy`][3] |
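As a quick illustration of this mapping (an editorial sketch, not part of this commit: the plugin name `labelImage`, its `__labelImage` worklet global, and the `Frame` import from the package root are assumptions), a JS wrapper could pass a `string` and a `number` that arrive natively as the `NSString*` and `NSNumber*` entries of the `args` array, and receive an `NSDictionary*` back as a plain JS object:

```ts
import type { Frame } from 'react-native-vision-camera';

// Hypothetical plugin, registered natively under the name "labelImage".
export function labelImage(frame: Frame, modelName: string, minConfidence: number): Record<string, number> {
  'worklet';
  // modelName crosses the bridge as NSString*, minConfidence as NSNumber* (double);
  // an NSDictionary* returned by the native side arrives here as a plain JS object.
  // @ts-expect-error __labelImage is injected into the worklet runtime by the plugin registry
  return __labelImage(frame, modelName, minConfidence);
}
```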
### Return values
Return values will automatically be converted to JS values, assuming they are representable in the ["Types" table](#types). So the following Objective-C frame processor:
```objc
static inline id detectObject(CMSampleBufferRef buffer, NSArray* args) {
static inline id detectObject(Frame* frame, NSArray* args) {
  return @"cat";
}
```
@@ -63,15 +63,17 @@ export function detectObject(frame: Frame): string {
}
```
You can also manipulate the buffer and return it (or a copy) by using the `CMSampleBufferRefHolder` class:
You can also manipulate the buffer and return it (or a copy) by using the `Frame` class:
```objc
static inline id resize(CMSampleBufferRef buffer, NSArray* args) {
#import <VisionCamera/Frame.h>
static inline id resize(Frame* frame, NSArray* args) {
  NSNumber* width = [args objectAtIndex:0];
  NSNumber* height = [args objectAtIndex:1];
  CMSampleBufferRef resizedBuffer = CMSampleBufferCopyAndResize(buffer, width, height);
  return [[CMSampleBufferRefHolder alloc] initWithBuffer:resizedBuffer];
  CMSampleBufferRef resizedBuffer = CMSampleBufferCopyAndResize(frame.buffer, width, height);
  return [[Frame alloc] initWithBuffer:resizedBuffer orientation:frame.orientation];
}
```
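On the JS side, such a plugin would typically be exposed through a small worklet wrapper. A minimal sketch, assuming the native function is registered under the name `resize` (so the registry injects it as the `__resize` global) and that `Frame` is exported from the package root:

```ts
import type { Frame } from 'react-native-vision-camera';

export function resize(frame: Frame, width: number, height: number): Frame {
  'worklet';
  // width and height cross the bridge as NSNumber* entries of the args array;
  // the returned Frame* comes back as a JS Frame, per the table above.
  // @ts-expect-error __resize is injected into the worklet runtime by the plugin registry
  return __resize(frame, width, height);
}
```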
@@ -116,9 +118,9 @@ For example, a realtime video chat application might use WebRTC to send the fram
```objc
static dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0ul);
static inline id sendFrameToWebRTC(CMSampleBufferRef buffer, NSArray* args) {
static inline id sendFrameToWebRTC(Frame* frame, NSArray* args) {
  CMSampleBufferRef bufferCopy;
  CMSampleBufferCreateCopy(kCFAllocatorDefault, buffer, &bufferCopy);
  CMSampleBufferCreateCopy(kCFAllocatorDefault, frame.buffer, &bufferCopy);
  dispatch_async(queue, ^{
    NSString* serverURL = (NSString*)args[0];
@@ -171,3 +173,8 @@ Your Frame Processor Plugins have to be fast. VisionCamera automatically detects
<br />
#### 🚀 Create your first Frame Processor Plugin for [iOS](frame-processors-plugins-ios) or [Android](frame-processors-plugins-android)!
[1]: https://github.com/cuvent/react-native-vision-camera/blob/main/src/Frame.ts
[2]: https://github.com/cuvent/react-native-vision-camera/blob/main/ios/Frame%20Processor/Frame.h
[3]: https://developer.android.com/reference/androidx/camera/core/ImageProxy
[4]: https://github.com/facebook/react-native/blob/9a43eac7a32a6ba3164a048960101022a92fcd5a/React/Base/RCTBridgeModule.h#L20-L24

View File

@@ -27,15 +27,18 @@ iOS Frame Processor Plugins can be written in either **Objective-C** or **Swift*
2. Create an Objective-C source file; for the QR Code Plugin this will be called `QRCodeFrameProcessorPlugin.m`.
3. Add the following code:
```objc {9}
```objc {11}
#import <VisionCamera/FrameProcessorPlugin.h>
#import <VisionCamera/Frame.h>
@interface QRCodeFrameProcessorPlugin : NSObject
@end
@implementation QRCodeFrameProcessorPlugin
static inline id scanQRCodes(CMSampleBufferRef buffer, NSArray* args) {
static inline id scanQRCodes(Frame* frame, NSArray* args) {
  CMSampleBufferRef buffer = frame.buffer;
  UIImageOrientation orientation = frame.orientation;
  // code goes here
  return @[];
}
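For completeness, here is a sketch of the JS side of this plugin (an editorial addition, not part of this diff). It assumes the native function is registered under the name `scanQRCodes`, that the registry therefore exposes it to worklets as the `__scanQRCodes` global, and that `Frame` and `useFrameProcessor` are exported from the package root:

```ts
import { useFrameProcessor } from 'react-native-vision-camera';
import type { Frame } from 'react-native-vision-camera';

// Worklet wrapper around the native plugin registered as "scanQRCodes".
export function scanQRCodes(frame: Frame): string[] {
  'worklet';
  // @ts-expect-error __scanQRCodes is injected into the worklet runtime by the plugin registry
  return __scanQRCodes(frame);
}

// Example hook that runs the plugin on every frame (Camera setup omitted).
export function useQRCodeScanner() {
  return useFrameProcessor((frame) => {
    'worklet';
    const qrCodes = scanQRCodes(frame);
    console.log(`QR codes in frame: ${qrCodes.length}`);
  }, []);
}
```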
@@ -62,6 +65,7 @@ The JS function name will be equal to the Objective-C function name you choose (
```objc
#import <VisionCamera/FrameProcessorPlugin.h>
#import <VisionCamera/Frame.h>
```
3. Create an Objective-C source file with the same name as the Swift file; for the QR Code Plugin this will be `QRCodeFrameProcessorPlugin.m`. Add the following code:
@@ -79,12 +83,14 @@ The first parameter in the Macro specifies the JS function name. Make sure it is
4. In the Swift file, add the following code:
```swift {6}
```swift {8}
@objc(QRCodeFrameProcessorPlugin)
public class QRCodeFrameProcessorPlugin: NSObject, FrameProcessorPluginBase {
  @objc
  public static func callback(_: CMSampleBuffer!, withArgs _: [Any]!) -> Any! {
  public static func callback(_ frame: Frame!, withArgs _: [Any]!) -> Any! {
    let buffer = frame.buffer
    let orientation = frame.orientation
    // code goes here
    return []
  }