Would Mega-scale Datasets Further Enhance Spatiotemporal 3D CNNs?
- URL: http://arxiv.org/abs/2004.04968v1
- Date: Fri, 10 Apr 2020 09:44:19 GMT
- Title: Would Mega-scale Datasets Further Enhance Spatiotemporal 3D CNNs?
- Authors: Hirokatsu Kataoka, Tenga Wakamiya, Kensho Hara, Yutaka Satoh
- Abstract summary: In the early era of deep neural networks, 2D CNNs were better than 3D CNNs for video recognition.
Recent studies revealed that 3D CNNs trained on a large-scale video dataset can outperform 2D CNNs.
- Score: 18.95620388632382
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: How can we collect and use a video dataset to further improve spatiotemporal
3D Convolutional Neural Networks (3D CNNs)? In order to positively answer this
open question in video recognition, we have conducted an exploration study
using a couple of large-scale video datasets and 3D CNNs. In the early era of
deep neural networks, 2D CNNs were better than 3D CNNs for video recognition.
Recent studies revealed that 3D CNNs trained on a large-scale video dataset
can outperform 2D CNNs. However, recent work has relied heavily on
architecture exploration rather than on dataset design. Therefore, in the
present paper, we conduct an exploration study aimed at improving
spatiotemporal 3D CNNs, with the following findings: (i) Recently proposed
large-scale video datasets help improve spatiotemporal 3D CNNs in terms of
video classification accuracy. We reveal that a carefully annotated dataset
(e.g., Kinetics-700) effectively pre-trains a video representation for a video
classification task. (ii) We examine the relationship between the number of
categories (#category), the number of instances (#instance), and video
classification accuracy. The results show that, when constructing a video
dataset, #category should be fixed first, and #instance increased thereafter.
(iii) In order to practically extend a video dataset, we simply concatenate
publicly available datasets, namely the Kinetics-700 and Moments in Time (MiT)
datasets. Compared with Kinetics-700 pre-training, the merged dataset further
enhances spatiotemporal 3D CNNs, e.g., by +0.9, +3.4, and +1.1 in fine-tuning
accuracy on the UCF-101, HMDB-51, and ActivityNet datasets, respectively.
(iv) In terms of recognition architecture, pre-training on Kinetics-700 or the
merged dataset allows recognition performance to keep improving up to a
200-layer Residual Network (ResNet), whereas the Kinetics-400 pre-trained
model cannot successfully optimize the 200-layer architecture.
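The dataset-merging recipe in point (iii) of the abstract is straightforward to reproduce with standard tooling. Below is a minimal PyTorch sketch, assuming the merged label space is simply the disjoint union of the two datasets' classes (the abstract does not specify the exact merging mechanism); `fake_videos` and `OffsetLabels` are illustrative stand-ins, and in practice the loaders would be real Kinetics-700 and MiT datasets.

```python
import torch
from torch.utils.data import ConcatDataset, DataLoader, Dataset, TensorDataset

class OffsetLabels(Dataset):
    """Shift a dataset's labels by a fixed offset so that two datasets
    can share one merged label space."""
    def __init__(self, base, offset):
        self.base, self.offset = base, offset

    def __len__(self):
        return len(self.base)

    def __getitem__(self, i):
        clip, label = self.base[i]
        return clip, label + self.offset

# Stand-ins for real loaders (e.g. torchvision.datasets.Kinetics for
# Kinetics-700; MiT would need a custom Dataset). Clips are (C, T, H, W).
def fake_videos(n, num_classes):
    return TensorDataset(torch.randn(n, 3, 16, 112, 112),
                         torch.randint(num_classes, (n,)))

kinetics = fake_videos(32, 700)  # Kinetics-700: 700 classes
mit = fake_videos(32, 339)       # Moments in Time: 339 classes

# Merged label space: 0..699 are Kinetics labels, 700..1038 are MiT labels,
# so a single 1039-way classification head pre-trains on the concatenation.
merged = ConcatDataset([kinetics, OffsetLabels(mit, offset=700)])
loader = DataLoader(merged, batch_size=8, shuffle=True)

clips, labels = next(iter(loader))
print(clips.shape, int(labels.min()), int(labels.max()))
```

Fine-tuning on UCF-101, HMDB-51, or ActivityNet would then swap in a new classification head on top of the pre-trained backbone, matching the evaluation setup described in the abstract.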
Related papers
- OA-CNNs: Omni-Adaptive Sparse CNNs for 3D Semantic Segmentation [70.17681136234202]
We reexamine the design distinctions and test the limits of what a sparse CNN can achieve.
We propose two key components, i.e., adaptive receptive fields (spatially) and adaptive relation, to bridge the gap.
This exploration led to the creation of Omni-Adaptive 3D CNNs (OA-CNNs), a family of networks that integrates a lightweight module.
arXiv Detail & Related papers (2024-03-21T14:06:38Z)
- AutoSynth: Learning to Generate 3D Training Data for Object Point Cloud Registration [69.21282992341007]
AutoSynth automatically generates 3D training data for point cloud registration.
We replace the point cloud registration network with a much smaller surrogate network, leading to a 4056.43x speedup.
Our results on TUD-L, LINEMOD and Occluded-LINEMOD evidence that a neural network trained on our searched dataset yields consistently better performance than the same one trained on the widely used ModelNet40 dataset.
arXiv Detail & Related papers (2023-09-20T09:29:44Z)
- Maximizing Spatio-Temporal Entropy of Deep 3D CNNs for Efficient Video Recognition [25.364148451584356]
3D convolutional neural networks (CNNs) have been the prevailing option for video recognition.
We propose to automatically design efficient 3D CNN architectures via a novel training-free neural architecture search approach.
Experiments on Something-Something V1&V2 and Kinetics400 demonstrate that the searched E3D family achieves state-of-the-art performance.
arXiv Detail & Related papers (2023-03-05T15:11:53Z)
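The "training-free" part of the search in the entry above can be illustrated with a toy proxy: rank randomly initialized candidates by an entropy-like statistic of their responses, with no gradient steps. The scoring function below is a hypothetical stand-in of my own, not the paper's actual spatio-temporal entropy objective.

```python
import torch
from torch import nn

def entropy_proxy(model, clip_shape=(1, 3, 8, 56, 56), bins=64):
    """Score an untrained model by the Shannon entropy of its activation
    histogram on one random clip. A hypothetical stand-in proxy; the E3D
    paper defines its own spatio-temporal entropy measure."""
    model.eval()
    with torch.no_grad():
        feats = model(torch.randn(clip_shape)).flatten()
    hist = torch.histc(feats, bins=bins)  # range defaults to data min/max
    p = hist / hist.sum()
    p = p[p > 0]                          # drop empty bins
    return float(-(p * p.log()).sum())

# Two candidate 3D stems; a training-free search keeps the higher scorer.
candidates = {
    "thin": nn.Sequential(nn.Conv3d(3, 16, 3, padding=1), nn.ReLU()),
    "wide": nn.Sequential(nn.Conv3d(3, 64, 3, padding=1), nn.ReLU()),
}
scores = {name: entropy_proxy(m) for name, m in candidates.items()}
print(scores, "->", max(scores, key=scores.get))
```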
- Intelligent 3D Network Protocol for Multimedia Data Classification using Deep Learning [0.0]
We implement a hybrid deep learning architecture that combines STIP and 3D CNN features to effectively enhance performance on 3D videos.
The results are compared with state-of-the-art frameworks from the literature for action recognition on UCF101, achieving an accuracy of 95%.
arXiv Detail & Related papers (2022-07-23T12:24:52Z)
- Continual 3D Convolutional Neural Networks for Real-time Processing of Videos [93.73198973454944]
We introduce Continual 3D Convolutional Neural Networks (Co3D CNNs).
Co3D CNNs process videos frame-by-frame rather than clip-by-clip (a sketch of the frame-wise caching idea follows this entry).
We show that Co3D CNNs initialised with weights from preexisting state-of-the-art video recognition models reduce floating point operations for frame-wise computations by 10.0-12.4x while improving accuracy on Kinetics-400 by 2.3-3.8%.
arXiv Detail & Related papers (2021-05-31T18:30:52Z)
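To make the clip-to-stream conversion concrete, here is a small illustrative sketch (my own, not the authors' code): a temporal convolution with kernel size k only ever needs the features of the last k frames, so a FIFO buffer of cached per-frame features turns a clip-based layer into a per-frame one.

```python
import torch
from torch import nn
from collections import deque

class ContinualTemporalConv:
    """Frame-wise evaluation of a temporal (Conv1d-over-time) layer:
    cache the last k per-frame feature vectors instead of recomputing
    a whole clip per step. Illustrative sketch, not the Co3D code."""
    def __init__(self, conv: nn.Conv1d):
        self.conv = conv
        self.buffer = deque(maxlen=conv.kernel_size[0])  # last k frames

    def step(self, frame_feat: torch.Tensor):
        # frame_feat: (batch, channels) features of the newest frame.
        self.buffer.append(frame_feat)
        if len(self.buffer) < self.buffer.maxlen:
            return None  # still warming up
        # Stack cached frames into (batch, channels, k) and convolve once.
        clip = torch.stack(tuple(self.buffer), dim=-1)
        with torch.no_grad():
            return self.conv(clip)[..., -1]  # output for the newest frame

conv = nn.Conv1d(64, 128, kernel_size=3)
stream = ContinualTemporalConv(conv)
for t in range(5):
    out = stream.step(torch.randn(1, 64))
    print(t, None if out is None else tuple(out.shape))
```

Applying this caching idea throughout a network is what avoids recomputing overlapping clips, which is consistent with the per-frame FLOP reductions the entry reports.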
- MoViNets: Mobile Video Networks for Efficient Video Recognition [52.49314494202433]
3D convolutional neural networks (CNNs) are accurate at video recognition but require large computation and memory budgets.
We propose a three-step approach to improve computational efficiency while substantially reducing the peak memory usage of 3D CNNs.
arXiv Detail & Related papers (2021-03-21T23:06:38Z)
- 3D CNNs with Adaptive Temporal Feature Resolutions [83.43776851586351]
The Similarity Guided Sampling (SGS) module can be plugged into any existing 3D CNN architecture.
SGS empowers 3D CNNs by learning the similarity of temporal features and grouping similar features together (a simplified grouping sketch follows this entry).
Our evaluations show that the proposed module improves the state-of-the-art by reducing the computational cost (GFLOPs) by half while preserving or even improving the accuracy.
arXiv Detail & Related papers (2020-11-17T14:34:05Z)
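As a rough illustration of the grouping idea in the SGS entry above (a fixed-threshold simplification of my own; SGS learns its similarity measure rather than thresholding one), adjacent temporal feature maps that are nearly identical can be averaged into a single map, reducing the temporal resolution that later layers must process:

```python
import torch
import torch.nn.functional as F

def group_similar_frames(feats: torch.Tensor, threshold: float = 0.95):
    """feats: (T, C, H, W) temporal feature maps. Average runs of adjacent
    frames whose cosine similarity exceeds `threshold`. A fixed-threshold
    simplification of Similarity Guided Sampling."""
    flat = feats.flatten(1)                                 # (T, C*H*W)
    sim = F.cosine_similarity(flat[:-1], flat[1:], dim=1)   # (T-1,)
    groups, current = [], [feats[0]]
    for t in range(1, feats.shape[0]):
        if sim[t - 1] >= threshold:
            current.append(feats[t])       # redundant frame: same group
        else:
            groups.append(torch.stack(current).mean(0))
            current = [feats[t]]
    groups.append(torch.stack(current).mean(0))
    return torch.stack(groups)             # (T' <= T, C, H, W)

feats = torch.randn(8, 16, 7, 7)
feats[1] = feats[0] + 0.01 * torch.randn_like(feats[0])  # near-duplicate
print(group_similar_frames(feats).shape)
```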
- RT3D: Achieving Real-Time Execution of 3D Convolutional Neural Networks on Mobile Devices [57.877112704841366]
This paper proposes RT3D, a model compression and mobile acceleration framework for 3D CNNs.
For the first time, real-time execution of 3D CNNs is achieved on off-the-shelf mobile devices.
arXiv Detail & Related papers (2020-07-20T02:05:32Z)
- An Information-rich Sampling Technique over Spatio-Temporal CNN for Classification of Human Actions in Videos [5.414308305392762]
We propose a novel scheme for human action recognition in videos, using a 3-dimensional Convolutional Neural Network (3D CNN) based classifier.
In this paper, a 3D CNN architecture is proposed to extract features, followed by a Long Short-Term Memory (LSTM) network to recognize human actions (a generic sketch of this CNN-to-LSTM composition follows this entry).
Experiments are performed on the KTH and WEIZMANN human action datasets, where the scheme is shown to produce results comparable to state-of-the-art techniques.
arXiv Detail & Related papers (2020-02-06T05:07:41Z)
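The 3D-CNN-followed-by-LSTM composition in the last entry is a common pattern; below is a generic, minimal sketch of it (not the paper's exact architecture): a small 3D convolutional stem encodes each short snippet, and an LSTM aggregates snippet features over time before classification.

```python
import torch
from torch import nn

class CNN3DLSTM(nn.Module):
    """Generic 3D-CNN + LSTM action classifier: a 3D conv stem encodes
    each short snippet; an LSTM aggregates snippet features over time."""
    def __init__(self, num_classes=6, hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),                    # (B*S, 32, 1, 1, 1)
        )
        self.lstm = nn.LSTM(input_size=32, hidden_size=hidden,
                            batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, snippets):
        # snippets: (B, S, C, T, H, W) -- S short snippets per video.
        B, S = snippets.shape[:2]
        x = snippets.flatten(0, 1)                      # (B*S, C, T, H, W)
        f = self.encoder(x).flatten(1).view(B, S, 32)   # (B, S, 32)
        out, _ = self.lstm(f)
        return self.head(out[:, -1])                    # last hidden state

model = CNN3DLSTM(num_classes=6)  # e.g. the 6 KTH action classes
logits = model(torch.randn(2, 4, 3, 8, 32, 32))
print(logits.shape)               # torch.Size([2, 6])
```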
This list is automatically generated from the titles and abstracts of the papers on this site.