DanHAR: Dual Attention Network For Multimodal Human Activity Recognition
Using Wearable Sensors
- URL: http://arxiv.org/abs/2006.14435v4
- Date: Wed, 21 Jul 2021 08:21:37 GMT
- Title: DanHAR: Dual Attention Network For Multimodal Human Activity Recognition
Using Wearable Sensors
- Authors: Wenbin Gao, Lei Zhang, Qi Teng, Jun He, Hao Wu
- Abstract summary: We propose a novel dual attention method called DanHAR, which introduces the framework of blending channel attention and temporal attention on a CNN.
DanHAR achieves state-of-the-art performance with negligible overhead of parameters.
- Score: 9.492607098644536
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Human activity recognition (HAR) in ubiquitous computing has begun
to incorporate attention into the context of deep neural networks (DNNs), in
which rich sensing data from multimodal sensors such as accelerometers and
gyroscopes are used to infer human activities. Recently, two attention methods
were proposed that combine attention with Gated Recurrent Unit (GRU) and Long
Short-Term Memory (LSTM) networks, capturing the dependencies of sensing
signals in both the spatial and temporal domains simultaneously. However,
recurrent networks often have weaker feature-representation power than
convolutional neural networks (CNNs). On the other hand, two attention
mechanisms, i.e., hard attention and soft attention, have been applied in the
temporal domain in combination with CNNs, paying more attention to the target
activity within a long sequence. However, they can only tell where to focus and
miss channel information, which plays an important role in deciding what to
focus on. As a result, they fail to address the spatial-temporal dependencies
of multimodal sensing signals, compared with attention-based GRU or LSTM. In
this paper, we propose a novel dual attention method called DanHAR, which
introduces a framework blending channel attention and temporal attention on a
CNN, demonstrating superiority in improving the comprehensibility of multimodal
HAR. Extensive experiments on four public HAR datasets and a weakly labeled
dataset show that DanHAR achieves state-of-the-art performance with negligible
parameter overhead. Furthermore, a visualization analysis is provided to show
that our attention amplifies the more important sensor modalities and timesteps
during classification, which agrees well with human intuition.
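The blend of channel and temporal attention described in the abstract can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the pooling choices, the small gating MLP, and all weight shapes are illustrative assumptions; the paper's actual modules sit inside a CNN and are learned end to end.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(x, w1, w2):
    # x: (channels, timesteps) sensor feature map.
    # Squeeze over time, then gate each channel ("what to focus on").
    squeeze = x.mean(axis=1)                       # (C,)
    gates = sigmoid(w2 @ np.tanh(w1 @ squeeze))    # (C,) values in (0, 1)
    return x * gates[:, None]

def temporal_attention(x, wt):
    # Pool over channels, then gate each timestep ("where to focus").
    pooled = x.mean(axis=0)                        # (T,)
    gates = sigmoid(wt * pooled)                   # (T,) values in (0, 1)
    return x * gates[None, :]

def dual_attention(x, w1, w2, wt):
    # Channel attention followed by temporal attention, as in DanHAR's
    # dual-attention framing (order and details are assumptions here).
    return temporal_attention(channel_attention(x, w1, w2), wt)
```

Because both gates lie in (0, 1), the output is an element-wise reweighting of the input that amplifies some modalities and timesteps relative to others, which is the behavior the paper's visualization analysis highlights.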
Related papers
- Know Thy Neighbors: A Graph Based Approach for Effective Sensor-Based
Human Activity Recognition in Smart Homes [0.0]
We propose a novel graph-guided neural network approach for Human Activity Recognition (HAR) in smart homes.
We accomplish this by learning a more expressive graph structure representing the sensor network in a smart home.
Our approach maps discrete input sensor measurements to a feature space through the application of attention mechanisms.
arXiv Detail & Related papers (2023-11-16T02:43:13Z)
- Learning Feature Matching via Matchable Keypoint-Assisted Graph Neural
Network [52.29330138835208]
Accurately matching local features between a pair of images is a challenging computer vision task.
Previous studies typically use attention-based graph neural networks (GNNs) with fully-connected graphs over keypoints within/across images.
We propose MaKeGNN, a sparse attention-based GNN architecture which bypasses non-repeatable keypoints and leverages matchable ones to guide message passing.
arXiv Detail & Related papers (2023-07-04T02:50:44Z)
- Influencer Detection with Dynamic Graph Neural Networks [56.1837101824783]
We investigate different dynamic Graph Neural Networks (GNNs) configurations for influencer detection.
We show that using deep multi-head attention in GNN and encoding temporal attributes significantly improves performance.
arXiv Detail & Related papers (2022-11-15T13:00:25Z)
- Beyond the Gates of Euclidean Space: Temporal-Discrimination-Fusions and
Attention-based Graph Neural Network for Human Activity Recognition [5.600003119721707]
Human activity recognition (HAR) through wearable devices has received much interest due to its numerous applications in fitness tracking, wellness screening, and supported living.
Traditional deep learning (DL) has set the state-of-the-art performance in the HAR domain.
We propose an approach based on Graph Neural Networks (GNNs) for structuring the input representation and exploiting the relations among the samples.
arXiv Detail & Related papers (2022-06-10T03:04:23Z)
- Correlation-Aware Deep Tracking [83.51092789908677]
We propose a novel target-dependent feature network inspired by the self-/cross-attention scheme.
Our network deeply embeds cross-image feature correlation in multiple layers of the feature network.
Our model can be flexibly pre-trained on abundant unpaired images, leading to notably faster convergence than the existing methods.
arXiv Detail & Related papers (2022-03-03T11:53:54Z)
- Continuity-Discrimination Convolutional Neural Network for Visual Object
Tracking [150.51667609413312]
This paper proposes a novel model, named Continuity-Discrimination Convolutional Neural Network (CD-CNN) for visual object tracking.
To address this problem, CD-CNN models temporal appearance continuity based on the idea of temporal slowness.
In order to alleviate inaccurate target localization and drifting, we propose a novel notion, object-centroid.
arXiv Detail & Related papers (2021-04-18T06:35:03Z)
- Spatial-Temporal Correlation and Topology Learning for Person
Re-Identification in Videos [78.45050529204701]
We propose a novel framework to pursue discriminative and robust representation by modeling cross-scale spatial-temporal correlation.
CTL utilizes a CNN backbone and a key-points estimator to extract semantic local features from the human body.
It explores a context-reinforced topology to construct multi-scale graphs by considering both global contextual information and the physical connections of the human body.
arXiv Detail & Related papers (2021-04-15T14:32:12Z)
- Coordinate Attention for Efficient Mobile Network Design [96.40415345942186]
We propose a novel attention mechanism for mobile networks by embedding positional information into channel attention.
Unlike channel attention that transforms a feature tensor to a single feature vector via 2D global pooling, the coordinate attention factorizes channel attention into two 1D feature encoding processes.
Our coordinate attention is beneficial to ImageNet classification and behaves better in down-stream tasks, such as object detection and semantic segmentation.
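The factorization this summary describes, replacing a single 2D global pooling with two direction-aware 1D encodings, can be sketched in NumPy. This is a simplified illustration, not the paper's module: the real coordinate attention shares a convolutional transform between the two branches, while the weight matrices and pooling here are assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def coordinate_attention(x, w_h, w_w):
    # x: (C, H, W) feature map.
    # Factorize 2D global pooling into two 1D pooled encodings, so the
    # attention weights keep positional information along each axis.
    pooled_h = x.mean(axis=2)        # (C, H): pool along width
    pooled_w = x.mean(axis=1)        # (C, W): pool along height
    a_h = sigmoid(w_h @ pooled_h)    # (C, H) gates, position-aware along H
    a_w = sigmoid(w_w @ pooled_w)    # (C, W) gates, position-aware along W
    return x * a_h[:, :, None] * a_w[:, None, :]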
arXiv Detail & Related papers (2021-03-04T09:18:02Z)
- A Two-stream Neural Network for Pose-based Hand Gesture Recognition [23.50938160992517]
Pose-based hand gesture recognition has been widely studied in recent years.
This paper proposes a two-stream neural network with one stream being a self-attention based graph convolutional network (SAGCN).
The residual-connection enhanced Bi-IndRNN extends an IndRNN with the capability of bidirectional processing for temporal modelling.
arXiv Detail & Related papers (2021-01-22T03:22:26Z)
- Deep ConvLSTM with self-attention for human activity decoding using
wearables [0.0]
We propose a deep neural network architecture that not only captures features of multiple sensor time-series data but also selects important time points.
We show the validity of the proposed approach across different data sampling strategies and demonstrate that the self-attention mechanism gave a significant improvement.
The proposed methods open avenues for better decoding of human activity from multiple body sensors over extended periods of time.
arXiv Detail & Related papers (2020-05-02T04:30:31Z)
- Human Activity Recognition from Wearable Sensor Data Using
Self-Attention [2.9023633922848586]
We present a self-attention based neural network model for activity recognition from body-worn sensor data.
We performed experiments on four popular publicly available HAR datasets: PAMAP2, Opportunity, Skoda and USC-HAD.
Our model achieves significant performance improvement over recent state-of-the-art models in both benchmark test subjects and leave-one-subject-out evaluation.
arXiv Detail & Related papers (2020-03-17T14:16:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences.