X3D: Expanding Architectures for Efficient Video Recognition
- URL: http://arxiv.org/abs/2004.04730v1
- Date: Thu, 9 Apr 2020 17:59:47 GMT
- Title: X3D: Expanding Architectures for Efficient Video Recognition
- Authors: Christoph Feichtenhofer
- Abstract summary: X3D is a family of efficient video networks that progressively expand a tiny 2D image classification architecture.
Inspired by feature selection methods in machine learning, a simple stepwise network expansion approach is employed.
We report competitive accuracy at unprecedented efficiency on video classification and detection benchmarks.
- Score: 21.539880641349693
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents X3D, a family of efficient video networks that
progressively expand a tiny 2D image classification architecture along multiple
network axes, in space, time, width and depth. Inspired by feature selection
methods in machine learning, a simple stepwise network expansion approach is
employed that expands a single axis in each step, such that a good
accuracy-to-complexity trade-off is achieved. To expand X3D to a specific target
complexity, we perform progressive forward expansion followed by backward
contraction. X3D achieves state-of-the-art performance while requiring 4.8x and
5.5x fewer multiply-adds and parameters for similar accuracy to previous work.
Our most surprising finding is that networks with high spatiotemporal
resolution can perform well, while being extremely light in terms of network
width and parameters. We report competitive accuracy at unprecedented
efficiency on video classification and detection benchmarks. Code will be
available at: https://github.com/facebookresearch/SlowFast
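The abstract's progressive expansion step can be illustrated with a short sketch. This is not the authors' implementation; the axis names, expansion factors, and the toy scoring and cost functions below are stand-ins chosen for demonstration only.

```python
# Illustrative sketch of X3D-style stepwise network expansion (not the
# authors' code). Each step expands exactly one axis, keeping the expansion
# whose trained accuracy is best, until the complexity budget is reached.
# Backward contraction (shrinking back to exactly meet the target) is omitted.

AXES = ["temporal_duration", "frame_rate", "spatial_res", "width", "depth"]

def expand_axis(cfg, axis, factor=2.0):
    """Return a copy of the config with a single axis expanded by `factor`."""
    new = dict(cfg)
    new[axis] *= factor
    return new

def progressive_expand(cfg, target_cost, score, cost):
    """Greedy forward expansion: at each step, try expanding every axis and
    keep the candidate with the best score, until `target_cost` is reached."""
    while cost(cfg) < target_cost:
        candidates = [expand_axis(cfg, ax) for ax in AXES]
        cfg = max(candidates, key=score)
    return cfg

# Toy stand-ins for multiply-add counting and train-then-evaluate accuracy.
def toy_cost(cfg):
    c = 1.0
    for v in cfg.values():
        c *= v
    return c

def toy_score(cfg):
    # Toy proxy that favors spatiotemporal resolution over width/depth,
    # echoing the paper's finding that thin, high-resolution models do well.
    return (cfg["spatial_res"] * cfg["temporal_duration"]) / (cfg["width"] * cfg["depth"])

base = {ax: 1.0 for ax in AXES}
expanded = progressive_expand(base, target_cost=16.0, score=toy_score, cost=toy_cost)
print(expanded)
```

With these toy functions the greedy loop repeatedly doubles the temporal axis until the budget is met; with a real train-and-evaluate score, different axes win at different steps, producing the balanced expansion the paper reports.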
Related papers
- An Efficient 3D Convolutional Neural Network with Channel-wise, Spatial-grouped, and Temporal Convolutions [3.798710743290466]
We introduce a simple and very efficient 3D convolutional neural network for video action recognition.
We evaluate the performance and efficiency of our proposed network on several video action recognition datasets.
arXiv Detail & Related papers (2025-03-02T08:47:06Z)
- TSP3D: Text-guided Sparse Voxel Pruning for Efficient 3D Visual Grounding [74.033589504806]
We propose an efficient multi-level convolution architecture for 3D visual grounding.
Our method achieves the top inference speed, surpassing the previous fastest method by 100% in FPS.
arXiv Detail & Related papers (2025-02-14T18:59:59Z)
- Auto-X3D: Ultra-Efficient Video Understanding via Finer-Grained Neural Architecture Search [73.05693037548932]
The X3D work presents a new family of efficient video models by expanding a hand-crafted image architecture along multiple axes.
A probabilistic neural architecture search method is adopted to efficiently search in such a large space.
Evaluations on the Kinetics and Something-Something-V2 benchmarks confirm that our Auto-X3D models outperform existing ones by up to 1.3% in accuracy under similar FLOPs.
arXiv Detail & Related papers (2021-12-09T05:40:33Z)
- MoViNets: Mobile Video Networks for Efficient Video Recognition [52.49314494202433]
3D convolutional neural networks (CNNs) are accurate at video recognition but require large computation and memory budgets.
We propose a three-step approach to improve computational efficiency while substantially reducing the peak memory usage of 3D CNNs.
arXiv Detail & Related papers (2021-03-21T23:06:38Z)
- 2D or not 2D? Adaptive 3D Convolution Selection for Efficient Video Recognition [84.697097472401]
We introduce Ada3D, a conditional computation framework that learns instance-specific 3D usage policies to determine which frames and convolution layers to use in a 3D network.
We demonstrate that our method achieves similar accuracies to state-of-the-art 3D models while requiring 20%-50% less computation across different datasets.
arXiv Detail & Related papers (2020-12-29T21:40:38Z)
- Making a Case for 3D Convolutions for Object Segmentation in Videos [16.167397418720483]
We show that 3D convolutional networks can be effectively applied to dense video prediction tasks such as salient object segmentation.
We propose a 3D decoder architecture that comprises novel 3D Global Convolution layers and 3D Refinement modules.
Our approach outperforms the existing state of the art by a large margin on the DAVIS'16 Unsupervised, FBMS and ViSal benchmarks.
arXiv Detail & Related papers (2020-08-26T12:24:23Z)
- Searching Efficient 3D Architectures with Sparse Point-Voxel Convolution [34.713667358316286]
Self-driving cars need to understand 3D scenes efficiently and accurately in order to drive safely.
Existing 3D perception models are not able to recognize small instances very well due to the low-resolution voxelization and aggressive downsampling.
We propose Sparse Point-Voxel Convolution (SPVConv), a lightweight 3D module that equips the vanilla Sparse Convolution with the high-resolution point-based branch.
arXiv Detail & Related papers (2020-07-31T14:27:27Z)
- A Real-time Action Representation with Temporal Encoding and Deep Compression [115.3739774920845]
We propose a new real-time convolutional architecture, called Temporal Convolutional 3D Network (T-C3D), for action representation.
T-C3D learns video action representations in a hierarchical multi-granularity manner while achieving high processing speed.
Our method improves on state-of-the-art real-time methods on the UCF101 action recognition benchmark by 5.4% in accuracy and 2x in inference speed, with a model smaller than 5 MB.
arXiv Detail & Related papers (2020-06-17T06:30:43Z) - Lightweight Multi-View 3D Pose Estimation through Camera-Disentangled
Representation [57.11299763566534]
We present a solution to recover 3D pose from multi-view images captured with spatially calibrated cameras.
We exploit 3D geometry to fuse input images into a unified latent representation of pose, which is disentangled from camera view-points.
Our architecture then conditions the learned representation on camera projection operators to produce accurate per-view 2D detections.
arXiv Detail & Related papers (2020-04-05T12:52:29Z)
- CAKES: Channel-wise Automatic KErnel Shrinking for Efficient 3D Networks [87.02416370081123]
3D Convolution Neural Networks (CNNs) have been widely applied to 3D scene understanding, such as video analysis and volumetric image recognition.
We propose Channel-wise Automatic KErnel Shrinking (CAKES), to enable efficient 3D learning by shrinking standard 3D convolutions into a set of economic operations.
arXiv Detail & Related papers (2020-03-28T14:21:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.