An Information-rich Sampling Technique over Spatio-Temporal CNN for
Classification of Human Actions in Videos
- URL: http://arxiv.org/abs/2002.02100v2
- Date: Fri, 7 Feb 2020 06:42:20 GMT
- Title: An Information-rich Sampling Technique over Spatio-Temporal CNN for
Classification of Human Actions in Videos
- Authors: S.H. Shabbeer Basha, Viswanath Pulabaigari, Snehasis Mukherjee
- Abstract summary: We propose a novel scheme for human action recognition in videos, using a 3-dimensional Convolutional Neural Network (3D CNN) based classifier.
In this paper, a 3D CNN architecture is proposed to extract spatio-temporal features, followed by a Long Short-Term Memory (LSTM) network to recognize human actions.
Experiments on the KTH and WEIZMANN human action datasets show that the method produces results comparable to state-of-the-art techniques.
- Score: 5.414308305392762
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a novel scheme for human action recognition in videos, using a
3-dimensional Convolutional Neural Network (3D CNN) based classifier.
Traditionally in deep learning based human activity recognition approaches,
either a few random frames or every $k^{th}$ frame of the video is considered
for training the 3D CNN, where $k$ is a small positive integer, like 4, 5, or
6. This kind of sampling reduces the volume of the input data, which speeds up
training of the network and also avoids over-fitting to some extent, thus
enhancing the performance of the 3D CNN model. In the proposed video sampling
technique, consecutive $k$ frames of a video are aggregated into a single frame
by computing a Gaussian-weighted summation of the $k$ frames. The resulting
frame (the aggregated frame) preserves the information better than the
conventional approaches and is experimentally shown to perform better. In this
paper, a 3D CNN architecture is proposed to extract the spatio-temporal
features, followed by a Long Short-Term Memory (LSTM) network to recognize
human actions.
The proposed 3D CNN architecture is capable of handling the videos where the
camera is placed at a distance from the performer. Experiments are performed
on the KTH and WEIZMANN human action datasets, on which the proposed method
produces results comparable to state-of-the-art techniques.
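A minimal sketch of the aggregation step, assuming NumPy and illustrative values for the window size $k$ and the Gaussian spread sigma (the abstract fixes neither; centring the Gaussian on the middle of the window is also an assumption):

```python
import numpy as np

def gaussian_weights(k, sigma=1.0):
    """Normalised Gaussian weights centred on the middle of a k-frame window."""
    t = np.arange(k) - (k - 1) / 2.0
    w = np.exp(-0.5 * (t / sigma) ** 2)
    return w / w.sum()  # normalise so the aggregated frame keeps the input's scale

def aggregate_frames(video, k, sigma=1.0):
    """Collapse each run of k consecutive frames into one Gaussian-weighted frame.

    video: array of shape (T, H, W) or (T, H, W, C); T is trimmed to a
    multiple of k for simplicity.
    """
    w = gaussian_weights(k, sigma)
    T = (video.shape[0] // k) * k
    windows = video[:T].reshape(T // k, k, *video.shape[1:])
    return np.tensordot(windows, w, axes=([1], [0]))  # weighted sum over each window

# Example: 48 grayscale frames aggregated with k = 6 yield an 8-frame clip.
video = np.random.rand(48, 120, 160).astype(np.float32)
clip = aggregate_frames(video, k=6)  # shape (8, 120, 160)
```

Unlike keeping every $k^{th}$ frame, each output frame here blends all $k$ frames of its window, so no frame's content is discarded outright. The 3D CNN followed by an LSTM can be sketched in the same hedged spirit; the layer sizes below are illustrative PyTorch assumptions, not the paper's architecture (num_classes=6 matches KTH's six action classes):

```python
import torch
import torch.nn as nn

class C3DLSTM(nn.Module):
    """Toy 3D-CNN feature extractor followed by an LSTM classifier."""
    def __init__(self, num_classes=6, in_ch=1):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_ch, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d((1, 2, 2)),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d((2, 2, 2)),
        )
        self.lstm = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)
        self.head = nn.Linear(64, num_classes)

    def forward(self, x):                       # x: (B, C, T, H, W) of aggregated frames
        f = self.features(x)                    # (B, 32, T', H', W')
        f = f.mean(dim=(3, 4)).transpose(1, 2)  # (B, T', 32): one descriptor per time step
        out, _ = self.lstm(f)
        return self.head(out[:, -1])            # classify from the final LSTM state
```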
Related papers
- F4D: Factorized 4D Convolutional Neural Network for Efficient
Video-level Representation Learning [4.123763595394021]
Most existing 3D convolutional neural network (CNN)-based methods for video-level representation learning are clip-based.
We propose a factorized 4D CNN architecture with attention (F4D) that is capable of learning more effective, finer-grained, long-term temporal video representations.
arXiv Detail & Related papers (2023-11-28T19:21:57Z)
- Maximizing Spatio-Temporal Entropy of Deep 3D CNNs for Efficient Video
Recognition [25.364148451584356]
3D convolutional neural networks (CNNs) have been the prevailing option for video recognition.
We propose to automatically design efficient 3D CNN architectures via a novel training-free neural architecture search approach.
Experiments on Something-Something V1&V2 and Kinetics400 demonstrate that the E3D family achieves state-of-the-art performance.
arXiv Detail & Related papers (2023-03-05T15:11:53Z)
- Intelligent 3D Network Protocol for Multimedia Data Classification using
Deep Learning [0.0]
We implement a hybrid deep learning architecture that combines STIP and 3D CNN features to effectively enhance performance on 3D videos.
The results are compared with state-of-the-art frameworks from the literature for action recognition on UCF101, with an accuracy of 95%.
arXiv Detail & Related papers (2022-07-23T12:24:52Z)
- Continual 3D Convolutional Neural Networks for Real-time Processing of
Videos [93.73198973454944]
We introduce Continual 3D Convolutional Neural Networks (Co3D CNNs).
Co3D CNNs process videos frame-by-frame rather than clip-by-clip.
We show that Co3D CNNs initialised on the weights from preexisting state-of-the-art video recognition models reduce floating point operations for frame-wise computations by 10.0-12.4x while improving accuracy on Kinetics-400 by 2.3-3.8%.
arXiv Detail & Related papers (2021-05-31T18:30:52Z)
- MoViNets: Mobile Video Networks for Efficient Video Recognition [52.49314494202433]
3D convolutional neural networks (CNNs) are accurate at video recognition but require large computation and memory budgets.
We propose a three-step approach to improve computational efficiency while substantially reducing the peak memory usage of 3D CNNs.
arXiv Detail & Related papers (2021-03-21T23:06:38Z)
- 2D or not 2D? Adaptive 3D Convolution Selection for Efficient Video
Recognition [84.697097472401]
We introduce Ada3D, a conditional computation framework that learns instance-specific 3D usage policies to determine frames and convolution layers to be used in a 3D network.
We demonstrate that our method achieves similar accuracies to state-of-the-art 3D models while requiring 20%-50% less computation across different datasets.
arXiv Detail & Related papers (2020-12-29T21:40:38Z)
- 3D CNNs with Adaptive Temporal Feature Resolutions [83.43776851586351]
The Similarity Guided Sampling (SGS) module can be plugged into any existing 3D CNN architecture.
SGS empowers 3D CNNs by learning the similarity of temporal features and grouping similar features together.
Our evaluations show that the proposed module improves the state-of-the-art by reducing the computational cost (GFLOPs) by half while preserving or even improving the accuracy.
arXiv Detail & Related papers (2020-11-17T14:34:05Z)
- A Real-time Action Representation with Temporal Encoding and Deep
Compression [115.3739774920845]
We propose a new real-time convolutional architecture, called Temporal Convolutional 3D Network (T-C3D), for action representation.
T-C3D learns video action representations in a hierarchical multi-granularity manner while obtaining a high processing speed.
Our method achieves clear improvements over state-of-the-art real-time methods on the UCF101 action recognition benchmark: 5.4% higher accuracy and 2 times faster inference, with a model of less than 5MB.
arXiv Detail & Related papers (2020-06-17T06:30:43Z)
- Would Mega-scale Datasets Further Enhance Spatiotemporal 3D CNNs? [18.95620388632382]
In the early era of deep neural networks, 2D CNNs were better than 3D CNNs in the context of video recognition.
Recent studies revealed that 3D CNNs can outperform 2D CNNs trained on a large-scale video dataset.
arXiv Detail & Related papers (2020-04-10T09:44:19Z)
- V4D: 4D Convolutional Neural Networks for Video-level Representation
Learning [58.548331848942865]
Most 3D CNNs for video representation learning are clip-based, and thus do not consider the video-level temporal evolution of features.
We propose Video-level 4D Convolutional Neural Networks, or V4D, to model long-range representation with 4D convolutions.
V4D achieves excellent results, surpassing recent 3D CNNs by a large margin.
arXiv Detail & Related papers (2020-02-18T09:27:41Z)