Coarse Temporal Attention Network (CTA-Net) for Driver's Activity
Recognition
- URL: http://arxiv.org/abs/2101.06636v1
- Date: Sun, 17 Jan 2021 10:15:37 GMT
- Title: Coarse Temporal Attention Network (CTA-Net) for Driver's Activity
Recognition
- Authors: Zachary Wharton, Ardhendu Behera, Yonghuai Liu, Nik Bessis
- Abstract summary: Driver's activities are difficult to distinguish because they are executed by the same subject with similar body-part movements, resulting in only subtle changes.
Our model, named the Coarse Temporal Attention Network (CTA-Net), introduces coarse temporal branches within a trainable glimpse network.
The model then uses an innovative attention mechanism to generate high-level, action-specific contextual information for activity recognition.
- Score: 14.07119502083967
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: There is significant progress in recognizing traditional human activities
from videos focusing on highly distinctive actions involving discriminative
body movements, body-object and/or human-human interactions. Driver's
activities are different since they are executed by the same subject with
similar body-part movements, resulting in subtle changes. To address this, we
propose a novel framework by exploiting the spatiotemporal attention to model
the subtle changes. Our model is named Coarse Temporal Attention Network
(CTA-Net), in which coarse temporal branches are introduced in a trainable
glimpse network. The goal is to allow the glimpse to capture high-level
temporal relationships, such as 'during', 'before' and 'after' by focusing on a
specific part of a video. These branches also respect the topology of the
temporal dynamics in the video, ensuring that different branches learn
meaningful spatial and temporal changes. The model then uses an innovative
attention mechanism to generate high-level action specific contextual
information for activity recognition by exploring the hidden states of an LSTM.
The attention mechanism helps in learning to decide the importance of each
hidden state for the recognition task by weighing them when constructing the
representation of the video. Our approach is evaluated on four publicly
accessible datasets and outperforms the state-of-the-art by a considerable
margin with only RGB video as input.
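The abstract describes the recognition head in prose: an LSTM consumes features produced by the coarse temporal branches, and an attention mechanism weighs the LSTM's hidden states when constructing the video representation. Below is a minimal PyTorch sketch of that attention-over-hidden-states idea, based only on the abstract; the module name, feature dimension, hidden size, and single-layer LSTM are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of attention over LSTM hidden states, as described in the
# abstract. All names and sizes here are illustrative assumptions only.
import torch
import torch.nn as nn


class TemporalAttentionHead(nn.Module):
    def __init__(self, feat_dim=2048, hidden_dim=512, num_classes=10):
        super().__init__()
        # The LSTM consumes per-segment features (e.g. from the coarse
        # temporal branches / glimpse network described in the abstract).
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        # Scores each hidden state; a softmax over time turns the scores
        # into importance weights.
        self.attn_score = nn.Linear(hidden_dim, 1)
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, segment_feats):
        # segment_feats: (batch, time, feat_dim)
        hidden_states, _ = self.lstm(segment_feats)                     # (B, T, H)
        weights = torch.softmax(self.attn_score(hidden_states), dim=1)  # (B, T, 1)
        video_repr = (weights * hidden_states).sum(dim=1)               # weighted sum over time
        return self.classifier(video_repr), weights


# Usage: a batch of 8 clips, each summarized by 16 time steps of 2048-d features.
logits, attn = TemporalAttentionHead()(torch.randn(8, 16, 2048))
```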
Related papers
- Deeply-Coupled Convolution-Transformer with Spatial-temporal Complementary Learning for Video-based Person Re-identification [91.56939957189505]
We propose a novel spatial-temporal complementary learning framework named Deeply-Coupled Convolution-Transformer (DCCT) for high-performance video-based person Re-ID.
Our framework can attain better performance than most state-of-the-art methods.
arXiv Detail & Related papers (2023-04-27T12:16:44Z)
- Learning State-Aware Visual Representations from Audible Interactions [39.08554113807464]
We propose a self-supervised algorithm to learn representations from egocentric video data.
We use audio signals to identify moments of likely interactions which are conducive to better learning.
We validate these contributions extensively on two large-scale egocentric datasets.
arXiv Detail & Related papers (2022-09-27T17:57:13Z)
- Exploring Motion and Appearance Information for Temporal Sentence Grounding [52.01687915910648]
We propose a Motion-Appearance Reasoning Network (MARN) to solve temporal sentence grounding.
We develop separate motion and appearance branches to learn motion-guided and appearance-guided object relations.
Our proposed MARN significantly outperforms previous state-of-the-art methods by a large margin.
arXiv Detail & Related papers (2022-01-03T02:44:18Z)
- Relational Self-Attention: What's Missing in Attention for Video Understanding [52.38780998425556]
We introduce a relational feature transform, dubbed relational self-attention (RSA).
Our experiments and ablation studies show that the RSA network substantially outperforms convolution and self-attention counterparts.
arXiv Detail & Related papers (2021-11-02T15:36:11Z)
- Efficient Modelling Across Time of Human Actions and Interactions [92.39082696657874]
We argue that current fixed-sized temporal kernels in 3D convolutional neural networks (CNNs) can be improved to better deal with temporal variations in the input.
We study how we can better handle variations between classes of actions by enhancing their feature differences over different layers of the architecture.
The proposed approaches are evaluated on several benchmark action recognition datasets and show competitive results.
arXiv Detail & Related papers (2021-10-05T15:39:11Z)
- Identity-aware Graph Memory Network for Action Detection [37.65846189707054]
We explicitly highlight the identity information of the actors in terms of both long-term and short-term context through a graph memory network.
Specifically, we propose an identity-aware graph neural network (IGNN) to comprehensively conduct long-term relation modeling.
We develop a dual attention module (DAM) to generate an identity-aware constraint that reduces interference from actors of different identities.
arXiv Detail & Related papers (2021-08-26T02:34:55Z)
- Learning Asynchronous and Sparse Human-Object Interaction in Videos [56.73059840294019]
Asynchronous-Sparse Interaction Graph Networks (ASSIGN) is able to automatically detect the structure of interaction events associated with entities in a video scene.
ASSIGN is tested on human-object interaction recognition and shows superior performance in segmenting and labeling human sub-activities and object affordances from raw videos.
arXiv Detail & Related papers (2021-03-03T23:43:55Z)
- DS-Net: Dynamic Spatiotemporal Network for Video Salient Object Detection [78.04869214450963]
We propose a novel dynamic spatiotemporal network (DS-Net) for more effective fusion of temporal and spatial information.
We show that the proposed method achieves superior performance compared with state-of-the-art algorithms.
arXiv Detail & Related papers (2020-12-09T06:42:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.