We at Margelo are planning the next major version for react-native-vision-camera: VisionCamera V3 ✨
For VisionCamera V3 we are targeting one major new feature and a ton of stability and performance improvements:
Write-back Frame Processors. We are introducing a new feature that lets you draw directly onto a Frame inside a Frame Processor using RN Skia. This allows you to render face masks, filters, overlays, color shadings, shaders (including Metal), etc.
Uses a hardware-accelerated Skia layer for rendering the Preview
Enables cool examples like an inverted-colors shader filter, a VHS filter (inspired by Snapchat's VHS + distortion filter), and a realtime text/bounding-box overlay
Realtime face blurring or license-plate blurring
Easy-to-write color correction and beauty filters
All in plain JS (RN Skia): no native code, full hot reload, while still maintaining pretty much native performance!
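To make the write-back idea concrete, here is a plain-TypeScript sketch of the per-pixel math behind an inverted-colors filter. In V3 this kind of operation would run as an RN Skia shader on the GPU inside a Frame Processor; the function below only illustrates the operation itself and is not VisionCamera API:

```typescript
// Per-pixel math of an "inverted colors" filter, operating on an RGBA buffer.
// In VisionCamera V3 this would be expressed as an RN Skia shader, not JS loops.
function invertRGBA(pixels: Uint8Array): Uint8Array {
  const out = new Uint8Array(pixels.length);
  for (let i = 0; i < pixels.length; i += 4) {
    out[i] = 255 - pixels[i];         // invert red
    out[i + 1] = 255 - pixels[i + 1]; // invert green
    out[i + 2] = 255 - pixels[i + 2]; // invert blue
    out[i + 3] = pixels[i + 3];       // keep alpha untouched
  }
  return out;
}

// One red, fully opaque pixel becomes cyan
const inverted = invertRGBA(new Uint8Array([255, 0, 0, 255]));
```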
Sync Frame Processors. Frame Processors will now be fully synchronous and run on the same thread the Camera runs on.
Pretty much on par with native performance now.
Run frame processing without any delay: everything you access until your function returns is the latest data.
Use runAtTargetFps(fps, ...) to run code at a throttled FPS rate inside your Frame Processor
Use runAsync(...) to run code on a separate thread for background processing inside your Frame Processor. This can take longer without blocking the Camera.
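Internally, `runAtTargetFps` boils down to a time-based throttle. This standalone sketch (my own illustration, not the actual implementation) shows the idea with plain millisecond timestamps:

```typescript
// Hypothetical sketch of the throttling behind runAtTargetFps(fps, fn):
// only invoke fn if at least 1000/fps milliseconds passed since the last run.
function makeFpsThrottle(targetFps: number) {
  const minIntervalMs = 1000 / targetFps;
  let lastRunMs = -Infinity;
  return (nowMs: number, fn: () => void): boolean => {
    if (nowMs - lastRunMs < minIntervalMs) return false; // skip this frame
    lastRunMs = nowMs;
    fn();
    return true;
  };
}

// Simulate a camera delivering one frame every 16 ms, with a 2 FPS target:
const throttle = makeFpsThrottle(2);
let runs = 0;
for (let t = 0; t < 1000; t += 16) {
  throttle(t, () => { runs++; });
}
// runs is now 2: the callback fired at t=0 and t=512
```

In the real API the Frame Processor itself is still called for every camera frame; `runAtTargetFps` just decides per call whether the throttled body executes.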
Migrate VisionCamera to RN 0.71. Benefits:
Much simpler build setup. The CMakeLists/build.gradle files will be simplified as we will use prefabs, and a ton of annoying build errors should be fixed.
Up to date with latest React Native version
Prefabs support on Android
No more Boost/Glog/Folly downloading/extracting
Completely redesigned declarative API for device/format selection (resolution, fps, low-light, ...)
Control exactly which FPS you want to record at
Know exactly whether a desired format is supported, and fall back to a different one if not
Control the exact resolution and know what is supported (e.g. higher than 1080p, but no higher than 4k, ...)
Control settings like low-light mode, compression, recording codec (H.264 or H.265), etc.
Add a reactive API for getAvailableCameraDevices() so external devices can be plugged in/out at runtime
Add zero-shutter lag API for CameraX
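The exact shape of the V3 selection API is not final, but the bullets above describe predicate-style filtering over the formats a device reports. Here is a plain-TypeScript sketch of that idea (all type and function names here are hypothetical):

```typescript
// Hypothetical shape of a camera format and a declarative picker over it.
interface CameraFormat {
  width: number;
  height: number;
  fps: number[]; // frame rates supported at this resolution
}

interface FormatRequest {
  minHeight?: number;
  maxHeight?: number;
  fps?: number;
}

// Return the first format satisfying every requested constraint,
// or undefined so the caller can fall back to a different request.
function pickFormat(formats: CameraFormat[], req: FormatRequest): CameraFormat | undefined {
  return formats.find(
    (f) =>
      (req.minHeight === undefined || f.height >= req.minHeight) &&
      (req.maxHeight === undefined || f.height <= req.maxHeight) &&
      (req.fps === undefined || f.fps.includes(req.fps))
  );
}

const formats: CameraFormat[] = [
  { width: 1280, height: 720, fps: [30, 60, 120, 240] },
  { width: 1920, height: 1080, fps: [30, 60] },
  { width: 3840, height: 2160, fps: [30] },
];

// "At least 1080p, no higher than 4k, at 60 FPS" -> the 1920x1080 format
const hd60 = pickFormat(formats, { minHeight: 1080, maxHeight: 2160, fps: 60 });
// 240 FPS slow-motion is only supported at 720p here -> falls back to that
const slowMo = pickFormat(formats, { fps: 240 });
```

The point of returning `undefined` instead of throwing is exactly the "know if a format is supported and fall back" bullet above.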
Rewrite the native Android part from CameraX to Camera2
Much more stability as CameraX just isn't mature enough yet
Much more flexibility with devices/formats
Slow-motion / 240 FPS recording on Android
Use a custom Worklet Runtime instead of Reanimated
Fixes a ton of crashes and stability issues in Frame Processors/Plugins
Improves compilation time as we don't need to extract Reanimated anymore
Improve Performance of Frame Processors by caching FrameHostObject instance
Improve error handling by using default JS error handler instead of console.error (mContext.handleException(..))
More access to the Frame in Frame Processors:
toByteArray(): Gets the Frame data as a byte array. The type is Uint8Array (TypedArray/ArrayBuffer). Keep in mind that Frame buffers are usually allocated on the GPU, so this comes with the performance cost of a GPU -> CPU copy operation. I've optimized it a bit to run pretty fast :)
orientation: The orientation of the Frame, e.g. "portrait"
isMirrored: Whether the Frame is mirrored (e.g. in selfie cams)
timestamp: The presentation timestamp of the Frame
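To illustrate why toByteArray() is the expensive member of this list: the GPU -> CPU copy moves the whole pixel buffer every call. A small back-of-the-envelope calculation (assuming two common camera pixel formats; this is arithmetic for illustration, not a VisionCamera API):

```typescript
// Bytes per frame for two common camera pixel formats.
type PixelFormat = "rgba" | "yuv420";

function frameByteLength(width: number, height: number, format: PixelFormat): number {
  switch (format) {
    case "rgba":
      return width * height * 4; // 4 bytes per pixel
    case "yuv420":
      return (width * height * 3) / 2; // 1.5 bytes per pixel on average
  }
}

// A single 1080p YUV420 frame is ~3 MB; at 60 FPS that is ~186 MB/s
// crossing the GPU -> CPU boundary if you call toByteArray() every frame.
const yuvBytes = frameByteLength(1920, 1080, "yuv420");  // 3,110,400
const rgbaBytes = frameByteLength(1920, 1080, "rgba");   // 8,294,400
```

This is also why pairing toByteArray() with runAtTargetFps or runAsync is usually a good idea.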
Of course we can't just put weeks of effort into this project for free. This is why we are looking for 5-10 partners who are interested in seeing this become reality by funding the development of those features.
On top of seeing this become reality, we will also create a sponsors section for your company logo in the VisionCamera documentation/README, and we will test the new VisionCamera V3 version in your app to ensure its compatibility with your use case.
If you are interested, reach out to me on Twitter: https://twitter.com/mrousavy or via email: me@mrousavy.com
Demo
Here's the current proof of concept we built in 3 hours:
Progress
So far I have spent around 60 hours improving that proof of concept and creating the demos above. I also refined the iOS part a bit, created some fixes, did some research, and improved the Skia handling.
Here is the current Draft PR: #1345
Here's a TODO list:
iOS
Make sure we use high-performance Skia drawing operations
Make sure Frames are not out of sync with the screen refresh rate (60 Hz / 120 Hz)
Do some performance profiling and see if we can improve something
Make sure everything continues to work when not using Skia
Swap the REA Runtime with the custom Worklet Runtime
Implement synchronous Frame Processors
Implement runAtTargetFps
Implement runAsync
Implement toByteArray(), orientation, isMirrored and timestamp on Frame
Add orientation to Frame
Convert it to a TurboModule/Fabric
Rewrite to new simple & declarative API
Android
Set up Skia Preview
Pass Skia Canvas to JS Frame Processor
Make sure we use high-performance Skia drawing operations
Make sure Frames are not out of sync with the screen refresh rate (60 Hz / 120 Hz)
Do some performance profiling and see if we can improve something
Make sure everything continues to work when not using Skia
Swap the REA Runtime with the custom Worklet Runtime
Implement synchronous Frame Processors
Implement runAtTargetFps
Implement runAsync
Implement toByteArray(), orientation, isMirrored and timestamp on Frame
Add orientation to Frame
Convert it to a TurboModule/Fabric
Rewrite from CameraX to Camera2
Rewrite to new simple & declarative API
Documentation
Create documentation for write-back/Skia Frame Processors
Create documentation for synchronous Frame Processors
Create documentation for runAtTargetFps
Create documentation for runAsync
Create a realtime face blurring example
Create a realtime license plate blurring example
Create a realtime text recognition/overlay example
Create a realtime color grading/beauty filter example
Create a realtime face outline/landmarks detector example
I reckon this will be around 500 hours of effort in total.
Update 15.2.2023: I just started working on this here: feat: ✨ V3 ✨ #1466. No one is paying me for this, so I am doing it all in my free time. I decided to just ignore issues/backlash so that I can work as productively as I can. If someone is complaining, they should either offer a fix (PR) or pay me. If I listened to every issue, the library would never get better :)