Exploring Temporal Context and Human Movement Dynamics for Online Action
Detection in Videos
- URL: http://arxiv.org/abs/2106.13967v1
- Date: Sat, 26 Jun 2021 08:34:19 GMT
- Title: Exploring Temporal Context and Human Movement Dynamics for Online Action
Detection in Videos
- Authors: Vasiliki I. Vasileiou, Nikolaos Kardaris, Petros Maragos
- Abstract summary: Temporal context and human movement dynamics can be effectively employed for online action detection.
Our approach uses various state-of-the-art architectures and appropriately combines the extracted features in order to improve action detection.
- Score: 32.88517041655816
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Nowadays, the interaction between humans and robots is constantly expanding,
requiring more and more human motion recognition applications to operate in
real time. However, most works on temporal action detection and recognition
perform these tasks in an offline manner, i.e., temporally segmented videos are
classified as a whole. In this paper, based on the recently proposed framework
of Temporal Recurrent Networks, we explore how temporal context and human
movement dynamics can be effectively employed for online action detection. Our
approach uses various state-of-the-art architectures and appropriately combines
the extracted features in order to improve action detection. We evaluate our
method on a challenging but widely used dataset for temporal action
localization, THUMOS'14. Our experiments show significant improvement over the
baseline method, achieving state-of-the-art results on THUMOS'14.
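The online setting described in the abstract can be sketched as a recurrent cell that consumes per-frame features one at a time and emits a class distribution per frame using only past context. The snippet below is a minimal illustrative sketch, not the paper's actual Temporal Recurrent Network: all dimensions, weight initialisations, and the plain tanh recurrence are stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: per-frame appearance and motion features are
# assumed already concatenated, mirroring the idea of combining streams.
FEAT_DIM, HIDDEN, NUM_CLASSES = 8, 16, 4

# Randomly initialised weights stand in for a trained recurrent model.
W_in = rng.normal(scale=0.1, size=(HIDDEN, FEAT_DIM))
W_rec = rng.normal(scale=0.1, size=(HIDDEN, HIDDEN))
W_out = rng.normal(scale=0.1, size=(NUM_CLASSES, HIDDEN))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def online_detect(frames):
    """Emit one class distribution per frame, using only past context."""
    h = np.zeros(HIDDEN)
    scores = []
    for x in frames:  # frames arrive one at a time (online setting)
        h = np.tanh(W_in @ x + W_rec @ h)  # accumulate temporal context
        scores.append(softmax(W_out @ h))
    return np.stack(scores)

# Simulate a stream of 10 frames of combined appearance/motion features.
stream = rng.normal(size=(10, FEAT_DIM))
probs = online_detect(stream)
print(probs.shape)  # one distribution per incoming frame
```

The key contrast with offline detection is that each per-frame prediction here depends only on frames seen so far, never on the full segmented video.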
Related papers
- Spatio-Temporal Branching for Motion Prediction using Motion Increments [55.68088298632865]
Human motion prediction (HMP) has emerged as a popular research topic due to its diverse applications.
Traditional methods rely on hand-crafted features and machine learning techniques.
We propose a novel spatio-temporal branching network using incremental information for HMP.
arXiv Detail & Related papers (2023-08-02T12:04:28Z) - Human Activity Recognition Using Cascaded Dual Attention CNN and
Bi-Directional GRU Framework [3.3721926640077795]
Vision-based human activity recognition has emerged as one of the essential research areas in video analytics domain.
This paper presents a computationally efficient yet generic spatial-temporal cascaded framework that exploits the deep discriminative spatial and temporal features for human activity recognition.
The proposed framework attains up to a 167-fold improvement in execution speed, measured in frames per second, compared to most contemporary action recognition methods.
arXiv Detail & Related papers (2022-08-09T20:34:42Z) - Continuous Human Action Recognition for Human-Machine Interaction: A
Review [39.593687054839265]
Recognising actions within an input video is a challenging but necessary task for applications that require real-time human-machine interaction.
We provide an overview of the feature extraction and learning strategies used in most state-of-the-art methods.
We investigate the application of such models to real-world scenarios and discuss several limitations and key research directions.
arXiv Detail & Related papers (2022-02-26T09:25:44Z) - Skeleton-Based Mutually Assisted Interacted Object Localization and
Human Action Recognition [111.87412719773889]
We propose a joint learning framework for "interacted object localization" and "human action recognition" based on skeleton data.
Our method achieves the best or competitive performance with the state-of-the-art methods for human action recognition.
arXiv Detail & Related papers (2021-10-28T10:09:34Z) - Efficient Modelling Across Time of Human Actions and Interactions [92.39082696657874]
We argue that the current fixed-size spatio-temporal kernels in 3D convolutional neural networks (CNNs) can be improved to better deal with temporal variations in the input.
We study how to better discriminate between classes of actions by enhancing their feature differences over different layers of the architecture.
The proposed approaches are evaluated on several benchmark action recognition datasets and show competitive results.
arXiv Detail & Related papers (2021-10-05T15:39:11Z) - Deep Learning-based Action Detection in Untrimmed Videos: A Survey [20.11911785578534]
Most real-world videos are lengthy and untrimmed with sparse segments of interest.
The task of temporal activity detection in untrimmed videos aims to localize the temporal boundary of actions.
This paper provides an overview of deep learning-based algorithms to tackle temporal action detection in untrimmed videos.
arXiv Detail & Related papers (2021-09-30T22:42:25Z) - Collaborative Distillation in the Parameter and Spectrum Domains for
Video Action Recognition [79.60708268515293]
This paper explores how to train small and efficient networks for action recognition.
We propose two distillation strategies in the frequency domain, namely the feature spectrum and parameter distribution distillations respectively.
Our method can achieve higher performance than state-of-the-art methods with the same backbone.
arXiv Detail & Related papers (2020-09-15T07:29:57Z) - Attention-Oriented Action Recognition for Real-Time Human-Robot
Interaction [11.285529781751984]
We propose an attention-oriented multi-level network framework to meet the need for real-time interaction.
Specifically, a Pre-Attention network is employed to roughly focus on the interactor in the scene at low resolution.
The other compact CNN receives the extracted skeleton sequence as input for action recognition.
arXiv Detail & Related papers (2020-07-02T12:41:28Z) - Intra- and Inter-Action Understanding via Temporal Action Parsing [118.32912239230272]
We construct a new dataset developed on sport videos with manual annotations of sub-actions, and conduct a study on temporal action parsing on top.
Our study shows that a sport activity usually consists of multiple sub-actions and that the awareness of such temporal structures is beneficial to action recognition.
We also investigate a number of temporal parsing methods, and thereon devise an improved method that is capable of mining sub-actions from training data without knowing their labels.
arXiv Detail & Related papers (2020-05-20T17:45:18Z) - ZSTAD: Zero-Shot Temporal Activity Detection [107.63759089583382]
We propose a novel task setting called zero-shot temporal activity detection (ZSTAD), where activities that have never been seen in training can still be detected.
We design an end-to-end deep network based on R-C3D as the architecture for this solution.
Experiments on both the THUMOS14 and the Charades datasets show promising performance in terms of detecting unseen activities.
arXiv Detail & Related papers (2020-03-12T02:40:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information (including all content) and is not responsible for any consequences of its use.