Maximizing Spatio-Temporal Entropy of Deep 3D CNNs for Efficient Video
Recognition
- URL: http://arxiv.org/abs/2303.02693v1
- Date: Sun, 5 Mar 2023 15:11:53 GMT
- Title: Maximizing Spatio-Temporal Entropy of Deep 3D CNNs for Efficient Video
Recognition
- Authors: Junyan Wang, Zhenhong Sun, Yichen Qian, Dong Gong, Xiuyu Sun, Ming
Lin, Maurice Pagnucco, Yang Song
- Abstract summary: 3D convolutional neural networks (CNNs) have been the prevailing option for video recognition.
We propose to automatically design efficient 3D CNN architectures via a novel training-free neural architecture search approach.
Experiments on Something-Something V1&V2 and Kinetics400 demonstrate that the E3D family achieves state-of-the-art performance.
- Score: 25.364148451584356
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: 3D convolutional neural networks (CNNs) have been the prevailing option for
video recognition. To capture the temporal information, 3D convolutions are
computed along the sequences, leading to cubically growing and expensive
computations. To reduce the computational cost, previous methods resort to
manually designed 3D/2D CNN structures with approximations or automatic search,
which sacrifice the modeling ability or make training time-consuming. In this
work, we propose to automatically design efficient 3D CNN architectures via a
novel training-free neural architecture search approach tailored for 3D CNNs
considering the model complexity. To measure the expressiveness of 3D CNNs
efficiently, we formulate a 3D CNN as an information system and derive an
analytic entropy score, based on the Maximum Entropy Principle. Specifically,
we propose a spatio-temporal entropy score (STEntr-Score) with a refinement
factor to handle the discrepancy of visual information in spatial and temporal
dimensions, through dynamically leveraging the correlation between the feature
map size and kernel size depth-wisely. Highly efficient and expressive 3D CNN
architectures, i.e., entropy-based 3D CNNs (E3D family), can then be efficiently
searched by maximizing the STEntr-Score under a given computational budget, via
an evolutionary algorithm without training the network parameters. Extensive
experiments on Something-Something V1&V2 and Kinetics400 demonstrate that the
E3D family achieves state-of-the-art performance with higher computational
efficiency. Code is available at
https://github.com/alibaba/lightweight-neural-architecture-search.
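The search procedure described above — maximizing an analytic score under a compute budget with an evolutionary algorithm, never training any candidate — can be pictured with a small sketch. The cost model and the `entropy_proxy` below are toy stand-ins and assumptions for illustration, not the paper's STEntr-Score:

```python
import math
import random

def flops(arch):
    # toy cost model: sum of channels^2 * kernel volume per layer
    return sum(c * c * k ** 3 for c, k in arch)

def entropy_proxy(arch):
    # toy expressiveness score: wider layers and larger kernels score
    # higher, loosely mimicking an entropy-style objective (NOT STEntr-Score)
    return sum(math.log(c) + math.log(k) for c, k in arch)

def mutate(arch):
    # perturb one layer's (channels, kernel) configuration
    arch = list(arch)
    i = random.randrange(len(arch))
    c, _ = arch[i]
    arch[i] = (max(8, c + random.choice((-8, 8))), random.choice((1, 3)))
    return arch

def search(budget, steps=200, seed=0):
    # evolutionary hill-climbing: accept a mutant only if it fits the
    # budget and raises the score; no network weights are ever trained
    random.seed(seed)
    best = [(16, 3)] * 4  # initial 4-layer 3D CNN: (channels, kernel)
    best_score = entropy_proxy(best)
    for _ in range(steps):
        cand = mutate(best)
        if flops(cand) <= budget and entropy_proxy(cand) > best_score:
            best, best_score = cand, entropy_proxy(cand)
    return best

arch = search(budget=2_000_000)
# accepted candidates stay within the FLOP budget by construction
```

The key property the sketch mirrors is that scoring a candidate is a cheap closed-form computation, so the search loop runs in seconds rather than the GPU-days a train-and-evaluate NAS would need.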
Related papers
- Intelligent 3D Network Protocol for Multimedia Data Classification using
Deep Learning [0.0]
We implement Hybrid Deep Learning Architecture that combines STIP and 3D CNN features to enhance the performance of 3D videos effectively.
The results are compared with state-of-the-art frameworks from literature for action recognition on UCF101 with an accuracy of 95%.
arXiv Detail & Related papers (2022-07-23T12:24:52Z) - Gate-Shift-Fuse for Video Action Recognition [43.8525418821458]
Gate-Shift-Fuse (GSF) is a novel spatio-temporal feature extraction module which controls interactions in spatio-temporal decomposition and learns to adaptively route features through time, combining them in a data-dependent manner.
GSF can be inserted into existing 2D CNNs to convert them into efficient, high-performing spatio-temporal feature extractors with negligible parameter and compute overhead.
We perform an extensive analysis of GSF using two popular 2D CNN families and achieve state-of-the-art or competitive performance on five standard action recognition benchmarks.
arXiv Detail & Related papers (2022-03-16T19:19:04Z) - Continual 3D Convolutional Neural Networks for Real-time Processing of
Videos [93.73198973454944]
We introduce Continual 3D Convolutional Neural Networks (Co3D CNNs).
Co3D CNNs process videos frame-by-frame rather than clip-by-clip.
We show that Co3D CNNs initialised with the weights of pre-existing state-of-the-art video recognition models reduce floating point operations for frame-wise computations by 10.0-12.4x while improving accuracy on Kinetics-400 by 2.3-3.8%.
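Frame-by-frame processing can be pictured as a temporal convolution over a fixed-size buffer of recent frame features: each new frame is pushed in, and an output is emitted as soon as a full temporal window is available. This buffer mechanic is an illustrative assumption, not the paper's Co3D CNN implementation:

```python
from collections import deque

class ContinualTemporalConv:
    """Sliding-window temporal convolution over scalar frame features."""

    def __init__(self, kernel):
        self.kernel = list(kernel)               # temporal weights, length T
        self.buffer = deque(maxlen=len(kernel))  # last T frame features

    def step(self, frame_feature):
        # push the newest frame; emit an output only once the window is full
        self.buffer.append(frame_feature)
        if len(self.buffer) < self.buffer.maxlen:
            return None                          # warming up: no output yet
        return sum(w * v for w, v in zip(self.kernel, self.buffer))

conv = ContinualTemporalConv(kernel=[0.25, 0.5, 0.25])
outs = [conv.step(x) for x in [1.0, 2.0, 3.0, 4.0]]
# outs -> [None, None, 2.0, 3.0]: outputs begin once 3 frames are buffered
```

Because the buffer retains intermediate state between calls, each new frame costs only one window's worth of work instead of recomputing an entire clip, which is the efficiency gain the frame-wise formulation targets.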
arXiv Detail & Related papers (2021-05-31T18:30:52Z) - MoViNets: Mobile Video Networks for Efficient Video Recognition [52.49314494202433]
3D convolutional neural networks (CNNs) are accurate at video recognition but require large computation and memory budgets.
We propose a three-step approach to improve computational efficiency while substantially reducing the peak memory usage of 3D CNNs.
arXiv Detail & Related papers (2021-03-21T23:06:38Z) - Hyperspectral Image Classification: Artifacts of Dimension Reduction on
Hybrid CNN [1.2875323263074796]
2D and 3D CNN models have proved highly efficient in exploiting the spatial and spectral information of Hyperspectral Images.
This work proposed a lightweight CNN (3D followed by 2D-CNN) model which significantly reduces the computational cost.
arXiv Detail & Related papers (2021-01-25T18:43:57Z) - 3D CNNs with Adaptive Temporal Feature Resolutions [83.43776851586351]
Similarity Guided Sampling (SGS) module can be plugged into any existing 3D CNN architecture.
SGS empowers 3D CNNs by learning the similarity of temporal features and grouping similar features together.
Our evaluations show that the proposed module improves the state-of-the-art by reducing the computational cost (GFLOPs) by half while preserving or even improving the accuracy.
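The idea of grouping similar temporal features to cut compute can be sketched with a simple heuristic: merge adjacent frame features whose cosine similarity exceeds a threshold, then average each group into one representative. This threshold rule is an illustrative assumption, not the paper's SGS module, which learns the grouping end-to-end:

```python
import math

def cosine(a, b):
    # cosine similarity between two feature vectors
    num = sum(x * y for x, y in zip(a, b))
    den = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return num / den

def group_frames(frames, thresh=0.99):
    # merge runs of near-duplicate adjacent frames into groups
    groups = [[frames[0]]]
    for f in frames[1:]:
        if cosine(groups[-1][-1], f) >= thresh:
            groups[-1].append(f)  # near-duplicate: extend current group
        else:
            groups.append([f])    # dissimilar: start a new temporal step
    # average each group down to a single representative feature
    return [[sum(c) / len(g) for c in zip(*g)] for g in groups]

frames = [[1.0, 0.0], [1.0, 0.01], [0.0, 1.0]]
reduced = group_frames(frames)
# the two near-identical frames collapse, leaving 2 temporal steps
```

Halving the temporal resolution this way halves the work of every subsequent 3D convolution, which is where the reported GFLOPs reduction comes from.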
arXiv Detail & Related papers (2020-11-17T14:34:05Z) - RT3D: Achieving Real-Time Execution of 3D Convolutional Neural Networks
on Mobile Devices [57.877112704841366]
This paper proposes RT3D, a model compression and mobile acceleration framework for 3D CNNs.
For the first time, real-time execution of 3D CNNs is achieved on off-the-shelf mobiles.
arXiv Detail & Related papers (2020-07-20T02:05:32Z) - A Real-time Action Representation with Temporal Encoding and Deep
Compression [115.3739774920845]
We propose a new real-time convolutional architecture, called Temporal Convolutional 3D Network (T-C3D), for action representation.
T-C3D learns video action representations in a hierarchical multi-granularity manner while obtaining a high process speed.
Our method achieves clear improvements over state-of-the-art real-time methods on the UCF101 action recognition benchmark: 5.4% higher accuracy and 2x faster inference, with a model smaller than 5MB.
arXiv Detail & Related papers (2020-06-17T06:30:43Z) - Learning Local Neighboring Structure for Robust 3D Shape Representation [143.15904669246697]
Representation learning for 3D meshes is important in many computer vision and graphics applications.
We propose a local structure-aware anisotropic convolutional operation (LSA-Conv)
Our model produces significant improvement in 3D shape reconstruction compared to state-of-the-art methods.
arXiv Detail & Related papers (2020-04-21T13:40:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.