Deep ConvLSTM with self-attention for human activity decoding using
wearables
- URL: http://arxiv.org/abs/2005.00698v2
- Date: Fri, 18 Dec 2020 03:08:37 GMT
- Title: Deep ConvLSTM with self-attention for human activity decoding using
wearables
- Authors: Satya P. Singh, Aimé Lay-Ekuakille, Deepak Gangwar, Madan Kumar
Sharma, Sukrit Gupta
- Abstract summary: We propose a deep neural network architecture that not only captures the spatio-temporal features of multiple sensor time-series data but also selects important time points.
We show the validity of the proposed approach across different data sampling strategies and demonstrate that the self-attention mechanism gave a significant improvement in performance.
The proposed methods open avenues for better decoding of human activity from multiple body sensors over extended periods of time.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Decoding human activity accurately from wearable sensors can aid in
applications related to healthcare and context awareness. The present
approaches in this domain use recurrent and/or convolutional models to capture
the spatio-temporal features from time-series data from multiple sensors. We
propose a deep neural network architecture that not only captures the
spatio-temporal features of multiple sensor time-series data but also selects and
learns important time points by utilizing a self-attention mechanism. We show
the validity of the proposed approach across different data sampling strategies
on six public datasets and demonstrate that the self-attention mechanism gave a
significant improvement in performance over deep networks using a combination
of recurrent and convolutional networks. We also show that the proposed approach
gave a statistically significant performance enhancement over previous
state-of-the-art methods for the tested datasets. The proposed methods open
avenues for better decoding of human activity from multiple body sensors over
extended periods of time. The code implementation for the proposed model is
available at https://github.com/isukrit/encodingHumanActivity.
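Below is a minimal sketch of the kind of architecture the abstract describes: 1D convolutions for local spatio-temporal features, an LSTM for longer-range temporal dependencies, and self-attention that weights informative time points. It is illustrative only and does not reproduce the authors' implementation (see the repository linked above); the layer sizes, kernel widths, and input shapes are assumptions.
```python
# Illustrative sketch (PyTorch), not the authors' exact model.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvLSTMSelfAttention(nn.Module):
    def __init__(self, n_channels, n_classes, conv_filters=64, lstm_units=128):
        super().__init__()
        # 1D convolutions over time extract local features from the sensor channels
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, conv_filters, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(conv_filters, conv_filters, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        # LSTM models longer-range temporal dependencies
        self.lstm = nn.LSTM(conv_filters, lstm_units, batch_first=True)
        # Self-attention scores each time step so important points are weighted up
        self.attn = nn.Linear(lstm_units, 1)
        self.fc = nn.Linear(lstm_units, n_classes)

    def forward(self, x):                      # x: (batch, time, channels)
        h = self.conv(x.transpose(1, 2))       # (batch, filters, time)
        h, _ = self.lstm(h.transpose(1, 2))    # (batch, time, lstm_units)
        scores = self.attn(h).squeeze(-1)      # (batch, time)
        weights = F.softmax(scores, dim=1)     # attention over time points
        context = (weights.unsqueeze(-1) * h).sum(dim=1)  # weighted temporal summary
        return self.fc(context)                # activity class logits

model = ConvLSTMSelfAttention(n_channels=9, n_classes=6)
logits = model(torch.randn(8, 128, 9))         # 8 windows, 128 time steps, 9 sensor channels
```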
Related papers
- Scaling Wearable Foundation Models [54.93979158708164]
We investigate the scaling properties of sensor foundation models across compute, data, and model size.
Using a dataset of up to 40 million hours of in-situ heart rate, heart rate variability, electrodermal activity, accelerometer, skin temperature, and altimeter per-minute data from over 165,000 people, we create LSM.
Our results establish the scaling laws of LSM for tasks such as imputation and extrapolation, both across time and across sensor modalities.
arXiv Detail & Related papers (2024-10-17T15:08:21Z)
- DynImp: Dynamic Imputation for Wearable Sensing Data Through Sensory and Temporal Relatedness [78.98998551326812]
We argue that traditional methods have rarely made use of both the time-series dynamics of the data and the relatedness of features from different sensors.
We propose a model, termed DynImp, that handles missingness at different time points using nearest neighbors along the feature axis (a simplified sketch of this idea appears after the list below).
We show that the method can exploit multi-modality features from related sensors and also learn from historical time-series dynamics to reconstruct the data under extreme missingness.
arXiv Detail & Related papers (2022-09-26T21:59:14Z)
- UMSNet: An Universal Multi-sensor Network for Human Activity Recognition [10.952666953066542]
This paper proposes a universal multi-sensor network (UMSNet) for human activity recognition.
In particular, we propose a new lightweight sensor residual block (the LSR block), which improves performance.
Our framework has a clear structure and can be directly applied to various types of multi-modal time series classification tasks.
arXiv Detail & Related papers (2022-05-24T03:29:54Z) - A Spatio-Temporal Multilayer Perceptron for Gesture Recognition [70.34489104710366]
We propose a multilayer state-weighted perceptron for gesture recognition in the context of autonomous vehicles.
An evaluation on the TCG and Drive&Act datasets is provided to showcase the promising performance of our approach.
We deploy our model on our autonomous vehicle to demonstrate its real-time capability and stable execution.
arXiv Detail & Related papers (2022-04-25T08:42:47Z) - Self-Attention Neural Bag-of-Features [103.70855797025689]
We build on the recently introduced 2D-Attention and reformulate the attention learning methodology.
We propose a joint feature-temporal attention mechanism that learns a joint 2D attention mask highlighting relevant information.
arXiv Detail & Related papers (2022-01-26T17:54:14Z) - Deep Cellular Recurrent Network for Efficient Analysis of Time-Series
Data with Spatial Information [52.635997570873194]
This work proposes a novel deep cellular recurrent neural network (DCRNN) architecture to process complex multi-dimensional time series data with spatial information.
The proposed architecture achieves state-of-the-art performance while using substantially fewer trainable parameters than comparable methods in the literature.
arXiv Detail & Related papers (2021-01-12T20:08:18Z) - Energy Aware Deep Reinforcement Learning Scheduling for Sensors
Correlated in Time and Space [62.39318039798564]
We propose a scheduling mechanism capable of taking advantage of correlated information.
The proposed mechanism is capable of determining the frequency with which sensors should transmit their updates.
We show that our solution can significantly extend the sensors' lifetime.
arXiv Detail & Related papers (2020-11-19T09:53:27Z) - A Tree-structure Convolutional Neural Network for Temporal Features
Exaction on Sensor-based Multi-resident Activity Recognition [4.619245607612873]
We propose an end-to-end Tree-Structure Convolutional neural network based framework for Multi-Resident Activity Recognition (TSC-MRAR).
First, we treat each sample as an event and obtain the current event embedding from the previous sensor readings in the sliding window.
Then, in order to automatically generate the temporal features, a tree-structure network is designed to derive the temporal dependence of nearby readings.
arXiv Detail & Related papers (2020-11-05T14:31:00Z) - Benchmarking Deep Learning Interpretability in Time Series Predictions [41.13847656750174]
Saliency methods are used extensively to highlight the importance of input features in model predictions.
We set out to extensively compare the performance of various saliency-based interpretability methods across diverse neural architectures.
arXiv Detail & Related papers (2020-10-26T22:07:53Z) - ESPRESSO: Entropy and ShaPe awaRe timE-Series SegmentatiOn for
processing heterogeneous sensor data [5.142415132534397]
We propose ESPRESSO, a hybrid segmentation model for multi-dimensional time-series.
ESPRESSO exploits the entropy and temporal shape properties of time-series.
It offers superior performance to four state-of-the-art methods across seven public datasets of wearable and wear-free sensing.
arXiv Detail & Related papers (2020-07-24T10:41:20Z) - Human Activity Recognition from Wearable Sensor Data Using
Self-Attention [2.9023633922848586]
We present a self-attention based neural network model for activity recognition from body-worn sensor data.
We performed experiments on four popular publicly available HAR datasets: PAMAP2, Opportunity, Skoda and USC-HAD.
Our model achieves a significant performance improvement over recent state-of-the-art models in both benchmark test subjects and leave-one-subject-out evaluation.
arXiv Detail & Related papers (2020-03-17T14:16:57Z)
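The DynImp entry above describes filling missing sensor readings using nearest neighbors along the feature (sensor-channel) axis. DynImp itself is a learned model; as a rough, simplified stand-in for that idea only, the sketch below lays a recording out as a (time points x channels) matrix and uses scikit-learn's KNNImputer, which finds time points with similar observed channel values and averages their readings. The array shapes, missingness rate, and neighbor count are assumptions for illustration.
```python
# Simplified illustration of nearest-neighbor imputation along the feature axis;
# this is NOT the DynImp model, only the underlying idea.
import numpy as np
from sklearn.impute import KNNImputer

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 9))            # 500 time points, 9 sensor channels (assumed)
X[rng.random(X.shape) < 0.2] = np.nan    # knock out ~20% of readings

# For each missing entry, find time points whose observed channel values are
# similar and average their readings for the missing channel.
imputer = KNNImputer(n_neighbors=5, weights="distance")
X_filled = imputer.fit_transform(X)
print(X_filled.shape)                    # (500, 9), no NaNs remaining
```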