Human Activity Recognition using Deep Learning Models on Smartphones and
Smartwatches Sensor Data
- URL: http://arxiv.org/abs/2103.03836v1
- Date: Sun, 28 Feb 2021 06:49:52 GMT
- Title: Human Activity Recognition using Deep Learning Models on Smartphones and
Smartwatches Sensor Data
- Authors: Bolu Oluwalade, Sunil Neela, Judy Wawira, Tobiloba Adejumo, Saptarshi
Purkayastha
- Abstract summary: We use the popular WISDM dataset for activity recognition.
We show that smartphones and smartwatches don't capture data in the same way due to the location where they are worn.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, human activity recognition has garnered considerable
attention both in industrial and academic research because of the wide
deployment of sensors, such as accelerometers and gyroscopes, in products such
as smartphones and smartwatches. Activity recognition is currently applied in
various fields where valuable information about an individual's functional
ability and lifestyle is needed. In this study, we used the popular WISDM
dataset for activity recognition. Using multivariate analysis of covariance
(MANCOVA), we established a statistically significant difference (p<0.05)
between the data generated from the sensors embedded in smartphones and
smartwatches. By doing this, we show that smartphones and smartwatches don't
capture data in the same way due to the location where they are worn. We
deployed several neural network architectures to classify 15 different hand and
non-hand-oriented activities. These models include Long short-term memory
(LSTM), Bi-directional Long short-term memory (BiLSTM), Convolutional Neural
Network (CNN), and Convolutional LSTM (ConvLSTM). The developed models
performed best with watch accelerometer data. Also, we saw that the
classification precision obtained with the convolutional input classifiers (CNN
and ConvLSTM) was higher than the end-to-end LSTM classifier in 12 of the 15
activities. Additionally, the CNN model for the watch accelerometer was better
able to classify non-hand oriented activities when compared to hand-oriented
activities.
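The LSTM, BiLSTM, CNN, and ConvLSTM classifiers above all consume fixed-length windows of raw sensor readings. The paper does not spell out its segmentation code; the sketch below shows the standard sliding-window step under illustrative assumptions (2 s windows with 50% overlap at WISDM's 20 Hz sampling rate):

```python
import numpy as np

def sliding_windows(signal: np.ndarray, window: int, stride: int) -> np.ndarray:
    """Segment a (timesteps, channels) sensor stream into overlapping windows.

    Returns an array of shape (num_windows, window, channels), the usual
    input layout for LSTM/CNN activity classifiers.
    """
    n = (signal.shape[0] - window) // stride + 1
    return np.stack([signal[i * stride : i * stride + window] for i in range(n)])

# Example: 10 s of 3-axis accelerometer data at 20 Hz, cut into
# 2 s (40-sample) windows with 50% overlap (20-sample stride).
stream = np.random.randn(200, 3)
windows = sliding_windows(stream, window=40, stride=20)
print(windows.shape)  # (9, 40, 3)
```

The window length and stride here are illustrative defaults, not values taken from the paper.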
Related papers
- Scaling Wearable Foundation Models [54.93979158708164]
We investigate the scaling properties of sensor foundation models across compute, data, and model size.
Using a dataset of up to 40 million hours of in-situ heart rate, heart rate variability, electrodermal activity, accelerometer, skin temperature, and altimeter per-minute data from over 165,000 people, we create LSM.
Our results establish the scaling laws of LSM for tasks such as imputation and extrapolation, both across time and across sensor modalities.
arXiv Detail & Related papers (2024-10-17T15:08:21Z)
- Incremental Online Learning Algorithms Comparison for Gesture and Visual Smart Sensors [68.8204255655161]
This paper compares four state-of-the-art algorithms in two real applications: gesture recognition based on accelerometer data and image classification.
Our results confirm these systems' reliability and the feasibility of deploying them in tiny-memory MCUs.
arXiv Detail & Related papers (2022-09-01T17:05:20Z)
- Human Activity Recognition on Time Series Accelerometer Sensor Data using LSTM Recurrent Neural Networks [0.2294014185517203]
In this study, we focus on the use of smartwatch accelerometer sensors to recognize eating activity.
We collected sensor data from 10 participants while consuming pizza.
We developed an LSTM-ANN architecture that demonstrated 90% success in identifying individual bites, compared with puffing, medication-taking, or jogging activities.
arXiv Detail & Related papers (2022-06-03T19:24:20Z)
- Braille Letter Reading: A Benchmark for Spatio-Temporal Pattern Recognition on Neuromorphic Hardware [50.380319968947035]
Recent deep learning approaches have reached high accuracy in such tasks, but their implementation on conventional embedded solutions remains very expensive in both computation and energy.
We propose a new benchmark for computing tactile pattern recognition at the edge through letters reading.
We trained and compared feed-forward and recurrent spiking neural networks (SNNs) offline using back-propagation through time with surrogate gradients, then we deployed them on the Intel Loihi neuromorphic chip for efficient inference.
Our results show that the LSTM outperforms the recurrent SNN in terms of accuracy by 14%. However, the recurrent SNN on Loihi is 237 times more energy efficient.
arXiv Detail & Related papers (2022-05-30T14:30:45Z)
- UMSNet: An Universal Multi-sensor Network for Human Activity Recognition [10.952666953066542]
This paper proposes a universal multi-sensor network (UMSNet) for human activity recognition.
In particular, we propose a new lightweight sensor residual block (called LSR block), which improves the performance.
Our framework has a clear structure and can be directly applied to various types of multi-modal Time Series Classification tasks.
arXiv Detail & Related papers (2022-05-24T03:29:54Z)
- Mobile Behavioral Biometrics for Passive Authentication [65.94403066225384]
This work carries out a comparative analysis of unimodal and multimodal behavioral biometric traits.
Experiments are performed over HuMIdb, one of the largest and most comprehensive freely available mobile user interaction databases.
In our experiments, the most discriminative background sensor is the magnetometer, whereas among touch tasks the best results are achieved with keystroke.
arXiv Detail & Related papers (2022-03-14T17:05:59Z)
- HAR-GCNN: Deep Graph CNNs for Human Activity Recognition From Highly Unlabeled Mobile Sensor Data [61.79595926825511]
Acquiring balanced datasets containing accurate activity labels requires humans to correctly annotate and potentially interfere with the subjects' normal activities in real-time.
We propose HAR-GCNN, a deep graph CNN model that leverages the correlation between chronologically adjacent sensor measurements to predict the correct labels for unclassified activities.
HAR-GCNN shows superior performance relative to previously used baseline methods, improving classification accuracy by about 25% and up to 68% on different datasets.
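HAR-GCNN's central idea is to link chronologically adjacent measurements in a graph. A minimal sketch of one way such a chain adjacency could be built (the self-loops, neighbourhood size, and row normalization are my assumptions, not the authors' formulation):

```python
import numpy as np

def chain_adjacency(n: int, k: int = 1) -> np.ndarray:
    """Adjacency matrix over n chronologically ordered sensor windows,
    linking each node to its k neighbours on either side plus a self-loop,
    then row-normalizing as in a simple graph-convolution layer."""
    a = np.eye(n)
    for offset in range(1, k + 1):
        a += np.eye(n, k=offset) + np.eye(n, k=-offset)
    return a / a.sum(axis=1, keepdims=True)

A = chain_adjacency(5)
print(A.shape)  # (5, 5)
```

Multiplying node features by `A` then mixes each window's representation with its chronological neighbours, which is the correlation the model exploits.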
arXiv Detail & Related papers (2022-03-07T01:23:46Z)
- Human Activity Recognition models using Limited Consumer Device Sensors and Machine Learning [0.0]
Human activity recognition has grown in popularity with its increase of applications within daily lifestyles and medical environments.
This paper presents the findings of different models that are limited to train using sensor data from smartphones and smartwatches.
Results show promise for models trained strictly using limited sensor data collected from only smartphones and smartwatches coupled with traditional machine learning concepts and algorithms.
arXiv Detail & Related papers (2022-01-21T06:54:05Z)
- Deep ConvLSTM with self-attention for human activity decoding using wearables [0.0]
We propose a deep neural network architecture that not only captures features of multiple sensor time-series data but also selects important time points.
We show the validity of the proposed approach across different data sampling strategies and demonstrate that the self-attention mechanism gave a significant improvement.
The proposed methods open avenues for better decoding of human activity from multiple body sensors over extended periods of time.
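The time-point selection described above can be sketched as attention pooling over the time axis; this toy version with a single learned scoring vector is illustrative only, not the paper's exact mechanism:

```python
import numpy as np

def temporal_attention(features: np.ndarray, w: np.ndarray) -> np.ndarray:
    """Score each time step of (timesteps, dim) features with vector w,
    softmax the scores, and return the attention-weighted sum (dim,)."""
    scores = features @ w                     # one score per time step
    weights = np.exp(scores - scores.max())   # numerically stable softmax
    weights /= weights.sum()
    return weights @ features                 # pooled summary vector

feats = np.random.randn(40, 8)                # e.g. 40 time steps, 8 features
summary = temporal_attention(feats, np.random.randn(8))
print(summary.shape)  # (8,)
```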
arXiv Detail & Related papers (2020-05-02T04:30:31Z)
- Sequential Weakly Labeled Multi-Activity Localization and Recognition on Wearable Sensors using Recurrent Attention Networks [13.64024154785943]
We propose a recurrent attention network (RAN) to handle sequential weakly labeled multi-activity recognition and localization tasks.
Our RAN model can simultaneously infer multi-activity types from the coarse-grained sequential weak labels.
It will greatly reduce the burden of manual labeling.
arXiv Detail & Related papers (2020-04-13T04:57:09Z)
- Stance Detection Benchmark: How Robust Is Your Stance Detection? [65.91772010586605]
Stance Detection (StD) aims to detect an author's stance towards a certain topic or claim.
We introduce a StD benchmark that learns from ten StD datasets of various domains in a multi-dataset learning setting.
Within this benchmark setup, we are able to present new state-of-the-art results on five of the datasets.
arXiv Detail & Related papers (2020-01-06T13:37:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.