MoViNets: Mobile Video Networks for Efficient Video Recognition
- URL: http://arxiv.org/abs/2103.11511v1
- Date: Sun, 21 Mar 2021 23:06:38 GMT
- Title: MoViNets: Mobile Video Networks for Efficient Video Recognition
- Authors: Dan Kondratyuk, Liangzhe Yuan, Yandong Li, Li Zhang, Mingxing Tan,
Matthew Brown, Boqing Gong
- Abstract summary: 3D convolutional neural networks (CNNs) are accurate at video recognition but require large computation and memory budgets.
We propose a three-step approach to improve computational efficiency while substantially reducing the peak memory usage of 3D CNNs.
- Score: 52.49314494202433
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present Mobile Video Networks (MoViNets), a family of computation and
memory efficient video networks that can operate on streaming video for online
inference. 3D convolutional neural networks (CNNs) are accurate at video
recognition but require large computation and memory budgets and do not support
online inference, making them difficult to deploy on mobile devices. We propose a
three-step approach to improve computational efficiency while substantially
reducing the peak memory usage of 3D CNNs. First, we design a video network
search space and employ neural architecture search to generate efficient and
diverse 3D CNN architectures. Second, we introduce the Stream Buffer technique
that decouples memory from video clip duration, allowing 3D CNNs to embed
arbitrary-length streaming video sequences for both training and inference with
a small constant memory footprint. Third, we propose a simple ensembling
technique to improve accuracy further without sacrificing efficiency. These
three progressive techniques allow MoViNets to achieve state-of-the-art
accuracy and efficiency on the Kinetics, Moments in Time, and Charades video
action recognition datasets. For instance, MoViNet-A5-Stream achieves the same
accuracy as X3D-XL on Kinetics 600 while requiring 80% fewer FLOPs and 65% less
memory. Code will be made available at
https://github.com/tensorflow/models/tree/master/official/vision.
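To make the Stream Buffer idea concrete, here is a minimal sketch of a causal temporal convolution that caches its trailing input frames between clips. This is a simplified illustration under assumed shapes, with hypothetical names (StreamingTemporalConv and its fields are ours), not the authors' TensorFlow implementation:

```python
import numpy as np

class StreamingTemporalConv:
    """Causal temporal convolution that carries a buffer of past frames
    between clips (a simplified stand-in for the Stream Buffer)."""

    def __init__(self, kernel_size: int, channels: int):
        self.k = kernel_size
        # Random weights stand in for trained parameters.
        self.w = np.random.randn(kernel_size, channels) * 0.01
        # Trailing (k - 1) input frames from the previous clip; zeros act
        # as causal padding for the very first clip.
        self.buffer = np.zeros((kernel_size - 1, channels))

    def __call__(self, clip: np.ndarray) -> np.ndarray:
        # clip: (frames, channels). Prepend the cached frames so the
        # convolution sees temporal context across the clip boundary.
        x = np.concatenate([self.buffer, clip], axis=0)
        # Carry the last (k - 1) input frames into the next call.
        self.buffer = x[-(self.k - 1):]
        # Channel-wise causal convolution over time, producing one output
        # per input frame of this clip.
        return np.stack([(x[t:t + self.k] * self.w).sum(axis=0)
                         for t in range(clip.shape[0])])

# Processing a long video in short clips gives the same activations as
# processing it whole, but peak memory scales with the clip length only.
conv = StreamingTemporalConv(kernel_size=3, channels=4)
video = np.random.randn(32, 4)
streamed = np.concatenate([conv(video[i:i + 8]) for i in range(0, 32, 8)])
```

Because only kernel_size - 1 frames of activations cross each clip boundary, peak memory is decoupled from video duration, which is what lets the streaming MoViNet variants embed arbitrary-length sequences with a small constant footprint.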
Related papers
- OA-CNNs: Omni-Adaptive Sparse CNNs for 3D Semantic Segmentation [70.17681136234202]
We reexamine the design distinctions and test the limits of what a sparse CNN can achieve.
We propose two key components, i.e., adaptive receptive fields (spatially) and adaptive relation, to bridge the gap.
This exploration led to the creation of Omni-Adaptive 3D CNNs (OA-CNNs), a family of networks that integrates lightweight modules to boost the adaptivity of sparse CNNs at minimal computational cost.
arXiv Detail & Related papers (2024-03-21T14:06:38Z) - Maximizing Spatio-Temporal Entropy of Deep 3D CNNs for Efficient Video
Recognition [25.364148451584356]
3D convolutional neural networks (CNNs) have been the prevailing option for video recognition.
We propose to automatically design efficient 3D CNN architectures via a novel training-free neural architecture search approach.
Experiments on Something-Something V1&V2 and Kinetics400 demonstrate that the E3D family achieves state-of-the-art performance.
arXiv Detail & Related papers (2023-03-05T15:11:53Z) - Continual 3D Convolutional Neural Networks for Real-time Processing of
Videos [93.73198973454944]
We introduce Continual 3D Convolutional Neural Networks (Co3D CNNs).
Co3D CNNs process videos frame by frame rather than clip by clip (see the frame-wise sketch after this list).
We show that Co3D CNNs initialised on the weights from pre-existing state-of-the-art video recognition models reduce floating point operations for frame-wise computations by 10.0-12.4x while improving accuracy on Kinetics-400 by 2.3-3.8%.
arXiv Detail & Related papers (2021-05-31T18:30:52Z) - 3D CNNs with Adaptive Temporal Feature Resolutions [83.43776851586351]
Similarity Guided Sampling (SGS) module can be plugged into any existing 3D CNN architecture.
SGS empowers 3D CNNs by learning the similarity of temporal features and grouping similar features together.
Our evaluations show that the proposed module improves the state-of-the-art by reducing the computational cost (GFLOPs) by half while preserving or even improving the accuracy.
arXiv Detail & Related papers (2020-11-17T14:34:05Z) - Dissected 3D CNNs: Temporal Skip Connections for Efficient Online Video
Processing [15.980090046426193]
Convolutional Neural Networks with 3D kernels (3D-CNNs) currently achieve state-of-the-art results in video recognition tasks.
We propose dissected 3D-CNNs, where the intermediate volumes of the network are dissected and propagated over the depth (time) dimension for future calculations.
For action classification, the dissected versions of ResNet models perform 77-90% fewer computations during online operation.
arXiv Detail & Related papers (2020-09-30T12:48:52Z) - RT3D: Achieving Real-Time Execution of 3D Convolutional Neural Networks
on Mobile Devices [57.877112704841366]
This paper proposes RT3D, a model compression and mobile acceleration framework for 3D CNNs.
For the first time, real-time execution of 3D CNNs is achieved on off-the-shelf mobiles.
arXiv Detail & Related papers (2020-07-20T02:05:32Z) - A Real-time Action Representation with Temporal Encoding and Deep
Compression [115.3739774920845]
We propose a new real-time convolutional architecture, called Temporal Convolutional 3D Network (T-C3D), for action representation.
T-C3D learns video action representations in a hierarchical multi-granularity manner while obtaining a high process speed.
Our method improves on state-of-the-art real-time methods on the UCF101 action recognition benchmark by 5.4% in accuracy, runs 2x faster at inference, and stores the model in less than 5 MB.
arXiv Detail & Related papers (2020-06-17T06:30:43Z)
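Several entries above (Co3D CNNs, dissected 3D-CNNs) share a frame-wise caching mechanic: each temporal layer keeps its most recent intermediate activations so online inference touches one new frame at a time instead of recomputing a whole clip. A hedged sketch of that idea, with hypothetical names and a channel-wise convolution for brevity:

```python
import numpy as np

class FrameWiseTemporalLayer:
    """Per-frame temporal convolution with a cache of recent activations
    (an illustrative sketch, not any paper's exact implementation)."""

    def __init__(self, kernel_size: int, channels: int):
        # Random weights stand in for trained parameters.
        self.w = np.random.randn(kernel_size, channels) * 0.01
        # Ring buffer of the last `kernel_size` input frames,
        # zero-initialised as causal padding.
        self.cache = np.zeros((kernel_size, channels))

    def step(self, frame: np.ndarray) -> np.ndarray:
        # Shift in the newest frame, dropping the oldest cached one.
        self.cache = np.concatenate([self.cache[1:], frame[None]], axis=0)
        # A causal temporal convolution evaluated only at the current
        # time step: one output frame per input frame.
        return (self.cache * self.w).sum(axis=0)

# Online inference: per-frame cost is O(kernel_size), independent of how
# long the stream has been running.
layer = FrameWiseTemporalLayer(kernel_size=3, channels=4)
for frame in np.random.randn(100, 4):
    features = layer.step(frame)
```

The design choice these papers exploit is the same one: cached activations replace the redundant recomputation a sliding-clip 3D CNN would perform, turning per-frame cost into a small constant.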