Add MoViNet model #2304
Comments
Hi @innat, thank you for this suggestion. We will keep this open, but at this point it is of low priority for the team.
@divyashreepathihalli Thanks for the confirmation. I pulled MoViNet out of the TF Model Garden and am maintaining it in a dedicated repo (private for now). The codebase is somewhat complex due to the large number of configurations. I will keep updating the codebase, so please let me know when KerasCV is ready to take it.
If you have code ready to go that works well across all backends, please feel free to open a PR. We will review it and add it.
Thanks for reporting the issue! We have consolidated the development of KerasCV into the new KerasHub package, which supports image, text, and multi-modal models. Please read the announcement.
With our focus shifted to KerasHub, we are not planning any further development or releases in KerasCV. If you encounter a KerasCV feature that is missing from KerasHub, or would like to propose an addition to the library, please file an issue with KerasHub.
Firstly, identifying which features are required or missing can be done effectively either by practitioners or the Keras team. Tickets in
This issue is stale because it has been open for 14 days with no activity. It will be closed if no further activity occurs. Thank you.
Short Description
MoViNets: Mobile Video Networks for Efficient Video Recognition
Mobile Video Networks (MoViNets) are efficient video classification models runnable on mobile devices. MoViNets demonstrate state-of-the-art accuracy and efficiency on several large-scale video action recognition datasets.
On Kinetics-600, MoViNet-A6 achieves 84.8% top-1 accuracy, outperforming recent Vision Transformer models like ViViT (83.0%) and VATT (83.6%) without any additional training data, while using 10x fewer FLOPs. The streaming MoViNet-A0 achieves 72% accuracy while using 3x fewer FLOPs than MobileNetV3-large (68%).
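For reference, below is a minimal sketch of how a base (non-streaming) MoViNet classifier is built with the existing TF Model Garden implementation mentioned in this thread. The `official.projects.movinet` module paths and the 172x172 A0 input resolution are assumptions based on that repository, not KerasCV API.

```python
# Sketch only: assumes the TF Model Garden `official.projects.movinet` package is installed.
import tensorflow as tf
from official.projects.movinet.modeling import movinet
from official.projects.movinet.modeling import movinet_model

# Build the A0 backbone and wrap it in a Kinetics-600 classifier head.
backbone = movinet.Movinet(model_id='a0')
model = movinet_model.MovinetClassifier(backbone=backbone, num_classes=600)

# A0 variant: batch of 1, 8 frames, 172x172 RGB clips.
model.build([1, 8, 172, 172, 3])

video = tf.random.uniform([1, 8, 172, 172, 3])  # dummy clip with values in [0, 1]
logits = model(video)                           # shape: [1, 600]
```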
Papers
MoViNets
Existing Implementations
Other Information
The streaming version of this model makes it quite impressive, and it would be a valuable addition; see the sketch below.
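To illustrate why the streaming variant is attractive, here is a hedged sketch of frame-by-frame inference with the causal, stateful configuration from the TF Model Garden implementation. The `use_external_states` / `init_states` / `output_states` API shown is an assumption based on that codebase and its tutorials, not something KerasCV provides today.

```python
# Sketch only: assumes the TF Model Garden streaming MoViNet configuration.
import tensorflow as tf
from official.projects.movinet.modeling import movinet
from official.projects.movinet.modeling import movinet_model

# Causal backbone with externally managed stream states.
backbone = movinet.Movinet(
    model_id='a0',
    causal=True,
    conv_type='2plus1d',
    se_type='2plus3d',
    use_external_states=True,
)
model = movinet_model.MovinetClassifier(
    backbone, num_classes=600, output_states=True)
model.build([1, 1, 172, 172, 3])

video = tf.random.uniform([1, 8, 172, 172, 3])  # dummy 8-frame clip
states = model.init_states(tf.shape(video))     # initialise stream state once

# Feed one frame at a time; the returned states carry temporal context forward,
# so per-step cost stays constant regardless of clip length.
for frame in tf.split(video, video.shape[1], axis=1):
    logits, states = model({**states, 'image': frame})
```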