From 8189173120e27aec1f0ac544a3f083335ddce007 Mon Sep 17 00:00:00 2001
From: Marc Rousavy
Date: Mon, 31 May 2021 14:11:33 +0200
Subject: [PATCH] Update FRAME_PROCESSORS.mdx

---
 docs/docs/guides/FRAME_PROCESSORS.mdx | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/docs/guides/FRAME_PROCESSORS.mdx b/docs/docs/guides/FRAME_PROCESSORS.mdx
index 0345861..5e68947 100644
--- a/docs/docs/guides/FRAME_PROCESSORS.mdx
+++ b/docs/docs/guides/FRAME_PROCESSORS.mdx
@@ -148,7 +148,7 @@ I have used [MLKit Vision Image Labeling](https://firebase.google.com/docs/ml-ki
 * Fully natively (written in pure Objective-C, no React interaction at all), I have measured an average of **68ms** per call.
 * As a Frame Processor Plugin (written in Objective-C, called through a JS Frame Processor function), I have measured an average of **69ms** per call, meaning **the Frame Processor API only takes ~1ms longer than a fully native implementation**, making it **the fastest way to run any sort of Frame Processing in React Native**.
 
-> All measurements are recorded on an iPhone 11 Pro, benchmarked total execution time of the [`captureOutput`](https://developer.apple.com/documentation/avfoundation/avcapturevideodataoutputsamplebufferdelegate/1385775-captureoutput) function by using [`mach_absolute_time`](https://developer.apple.com/documentation/kernel/1462446-mach_absolute_time).
+> All measurements are recorded on an iPhone 11 Pro, benchmarked total execution time of the [`captureOutput`](https://developer.apple.com/documentation/avfoundation/avcapturevideodataoutputsamplebufferdelegate/1385775-captureoutput) function by using [`CFAbsoluteTimeGetCurrent`](https://developer.apple.com/documentation/corefoundation/1543542-cfabsolutetimegetcurrent).
 
 Running smaller images (lower than 4k resolution) is much quicker and many algorithms can even run at 60 FPS.
 ### ESLint react-hooks plugin
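
For context, the line changed in this patch describes timing the delegate's `captureOutput` callback with `CFAbsoluteTimeGetCurrent`. Below is a minimal sketch of that measurement approach in Objective-C, assuming a standard `AVCaptureVideoDataOutputSampleBufferDelegate`; the actual benchmark harness and the MLKit labeling call are not part of this patch, so the `processFrame:` helper is hypothetical.

```objc
#import <AVFoundation/AVFoundation.h>

// Hypothetical sketch: measure one captureOutput call with CFAbsoluteTimeGetCurrent.
- (void)captureOutput:(AVCaptureOutput *)output
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {
  CFAbsoluteTime start = CFAbsoluteTimeGetCurrent();

  [self processFrame:sampleBuffer]; // placeholder for the frame-processing work being benchmarked

  double elapsedMs = (CFAbsoluteTimeGetCurrent() - start) * 1000.0;
  NSLog(@"captureOutput took %.2f ms", elapsedMs);
}
```

`CFAbsoluteTimeGetCurrent` returns wall-clock seconds as a double, so multiplying the difference by 1000 gives the per-call time in milliseconds that the quoted 68ms/69ms averages refer to.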