Human Activity Recognition from Wearable Sensor Data Using
Self-Attention
- URL: http://arxiv.org/abs/2003.09018v1
- Date: Tue, 17 Mar 2020 14:16:57 GMT
- Title: Human Activity Recognition from Wearable Sensor Data Using
Self-Attention
- Authors: Saif Mahmud, M Tanjid Hasan Tonmoy, Kishor Kumar Bhaumik, A K M
Mahbubur Rahman, M Ashraful Amin, Mohammad Shoyaib, Muhammad Asif Hossain
Khan, Amin Ahsan Ali
- Abstract summary: We present a self-attention based neural network model for activity recognition from body-worn sensor data.
We performed experiments on four popular publicly available HAR datasets: PAMAP2, Opportunity, Skoda and USC-HAD.
Our model achieves significant performance improvement over recent state-of-the-art models in both benchmark test subjects and Leave-one-subject-out evaluation.
- Score: 2.9023633922848586
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Human Activity Recognition from body-worn sensor data poses an inherent
challenge in capturing spatial and temporal dependencies of time-series
signals. In this regard, the existing recurrent or convolutional or their
hybrid models for activity recognition struggle to capture spatio-temporal
context from the feature space of sensor reading sequence. To address this
complex problem, we propose a self-attention based neural network model that
foregoes recurrent architectures and utilizes different types of attention
mechanisms to generate higher dimensional feature representation used for
classification. We performed extensive experiments on four popular publicly
available HAR datasets: PAMAP2, Opportunity, Skoda and USC-HAD. Our model
achieves significant performance improvement over recent state-of-the-art models
in both benchmark test subjects and Leave-one-subject-out evaluation. We also
observe that the sensor attention maps produced by our model are able to capture
the importance of the modality and placement of the sensors in predicting the
different activity classes.
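The core operation the abstract describes, self-attention over a window of sensor readings, can be sketched with plain numpy. This is an illustrative sketch only, not the authors' implementation; the projection matrices `w_q`, `w_k`, `w_v` and the window/feature sizes are hypothetical stand-ins.

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Scaled dot-product self-attention over a window of sensor readings.

    x: (T, d) array -- T time steps of d-dimensional sensor features.
    w_q, w_k, w_v: (d, d_k) learned projection matrices (hypothetical here).
    Returns the attended features (T, d_k) and the attention map (T, T).
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(k.shape[-1])        # (T, T) pairwise scores
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)       # softmax over time steps
    return attn @ v, attn

rng = np.random.default_rng(0)
T, d, d_k = 8, 6, 4                                # 8 time steps, 6 sensor channels
x = rng.normal(size=(T, d))
out, attn = self_attention(x, *(rng.normal(size=(d, d_k)) for _ in range(3)))
print(out.shape, attn.shape)                       # (8, 4) (8, 8)
```

Each row of `attn` is a distribution over time steps, which is what makes the attention maps interpretable as importance scores for sensor readings.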
Related papers
- Scaling Wearable Foundation Models [54.93979158708164]
We investigate the scaling properties of sensor foundation models across compute, data, and model size.
Using a dataset of up to 40 million hours of in-situ heart rate, heart rate variability, electrodermal activity, accelerometer, skin temperature, and altimeter per-minute data from over 165,000 people, we create LSM.
Our results establish the scaling laws of LSM for tasks such as imputation and extrapolation, both across time and sensor modalities.
arXiv Detail & Related papers (2024-10-17T15:08:21Z) - Sensor Data Augmentation from Skeleton Pose Sequences for Improving Human Activity Recognition [5.669438716143601]
Human Activity Recognition (HAR) has not fully capitalized on the proliferation of deep learning.
We propose a novel approach to improve wearable sensor-based HAR by introducing a pose-to-sensor network model.
Our contributions include the integration of simultaneous training, direct pose-to-sensor generation, and a comprehensive evaluation on the MM-Fit dataset.
arXiv Detail & Related papers (2024-04-25T10:13:18Z) - Spatio-Temporal Branching for Motion Prediction using Motion Increments [55.68088298632865]
Human motion prediction (HMP) has emerged as a popular research topic due to its diverse applications.
Traditional methods rely on hand-crafted features and machine learning techniques.
We propose a novel spatio-temporal branching network using incremental information for HMP.
arXiv Detail & Related papers (2023-08-02T12:04:28Z) - A Real-time Human Pose Estimation Approach for Optimal Sensor Placement
in Sensor-based Human Activity Recognition [63.26015736148707]
This paper introduces a novel methodology to resolve the issue of optimal sensor placement for Human Activity Recognition.
The derived skeleton data provides a unique strategy for identifying the optimal sensor location.
Our findings indicate that the vision-based method for sensor placement offers comparable results to the conventional deep learning approach.
arXiv Detail & Related papers (2023-07-06T10:38:14Z) - Dataset Bias in Human Activity Recognition [57.91018542715725]
This contribution statistically curates the training data to assess to what degree the physical characteristics of humans influence HAR performance.
We evaluate the performance of a state-of-the-art convolutional neural network on two HAR datasets that vary in the sensors, activities, and recording settings for time-series HAR.
arXiv Detail & Related papers (2023-01-19T12:33:50Z) - A Spatio-Temporal Multilayer Perceptron for Gesture Recognition [70.34489104710366]
We propose a multilayer state-weighted perceptron for gesture recognition in the context of autonomous vehicles.
An evaluation on the TCG and Drive&Act datasets is provided to showcase the promising performance of our approach.
We deploy our model to our autonomous vehicle to show its real-time capability and stable execution.
arXiv Detail & Related papers (2022-04-25T08:42:47Z) - Attention-Based Sensor Fusion for Human Activity Recognition Using IMU
Signals [4.558966602878624]
We propose a novel attention-based approach to human activity recognition using multiple IMU sensors worn at different body locations.
An attention-based fusion mechanism is developed to learn the importance of sensors at different body locations and to generate an attentive feature representation.
The proposed approach is evaluated using five public datasets and it outperforms state-of-the-art methods on a wide variety of activity categories.
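The attention-based fusion idea above, learning how much each body-worn sensor contributes to the final representation, can be sketched as a softmax-weighted sum of per-sensor feature vectors. This is a minimal sketch under assumed shapes; the scoring parameters `w` and `v` are hypothetical, not taken from the paper.

```python
import numpy as np

def attentive_fusion(features, w, v):
    """Fuse per-sensor feature vectors into one attentive representation.

    features: (S, d) -- one d-dimensional feature vector per IMU sensor.
    w: (d, h), v: (h,) -- parameters of a small scoring network (hypothetical).
    Returns the fused (d,) vector and per-sensor attention weights (S,).
    """
    scores = np.tanh(features @ w) @ v               # one scalar score per sensor
    scores -= scores.max()                           # numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()  # softmax over sensors
    return weights @ features, weights

rng = np.random.default_rng(1)
S, d, h = 5, 16, 8                                   # 5 IMUs, 16-dim features each
feats = rng.normal(size=(S, d))
fused, weights = attentive_fusion(feats, rng.normal(size=(d, h)),
                                  rng.normal(size=h))
print(fused.shape, weights.shape)                    # (16,) (5,)
```

The learned `weights` form a distribution over sensors, which is how such a mechanism can expose the relative importance of different body locations.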
arXiv Detail & Related papers (2021-12-20T17:00:27Z) - Hierarchical Self Attention Based Autoencoder for Open-Set Human
Activity Recognition [2.492343817244558]
A self-attention based approach is proposed for wearable sensor based human activity recognition.
It incorporates self-attention based feature representations from encoder to detect unseen activity classes in open-set recognition setting.
We conduct extensive validation experiments that indicate significantly improved robustness to noise and subject specific variability in body-worn sensor signals.
arXiv Detail & Related papers (2021-03-07T06:21:18Z) - Semantics-aware Adaptive Knowledge Distillation for Sensor-to-Vision
Action Recognition [131.6328804788164]
We propose a framework, named Semantics-aware Adaptive Knowledge Distillation Networks (SAKDN), to enhance action recognition in the vision-sensor modality (videos).
The SAKDN uses multiple wearable-sensors as teacher modalities and uses RGB videos as student modality.
arXiv Detail & Related papers (2020-09-01T03:38:31Z) - Deep ConvLSTM with self-attention for human activity decoding using
wearables [0.0]
We propose a deep neural network architecture that not only captures features of multiple sensor time-series data but also selects important time points.
We show the validity of the proposed approach across different data sampling strategies and demonstrate that the self-attention mechanism gave a significant improvement.
The proposed methods open avenues for better decoding of human activity from multiple body sensors over extended periods of time.
arXiv Detail & Related papers (2020-05-02T04:30:31Z) - Human Action Recognition and Assessment via Deep Neural Network
Self-Organization [0.0]
This chapter introduces a set of hierarchical models for the learning and recognition of actions from depth maps and RGB images.
A particularity of these models is the use of growing self-organizing networks that quickly adapt to non-stationary distributions.
arXiv Detail & Related papers (2020-01-04T15:58:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.