Human Activity Recognition on Time Series Accelerometer Sensor Data
using LSTM Recurrent Neural Networks
- URL: http://arxiv.org/abs/2206.07654v1
- Date: Fri, 3 Jun 2022 19:24:20 GMT
- Title: Human Activity Recognition on Time Series Accelerometer Sensor Data
using LSTM Recurrent Neural Networks
- Authors: Chrisogonas O. Odhiambo, Sanjoy Saha, Corby K. Martin, Homayoun
Valafar
- Abstract summary: In this study, we focus on the use of smartwatch accelerometer sensors to recognize eating activity.
We collected sensor data from 10 participants while consuming pizza.
We developed an LSTM-ANN architecture that achieved 90% success in distinguishing individual bites from puffing, medication-taking, and jogging activities.
- Score: 0.2294014185517203
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The use of sensors available through smart devices has pervaded everyday life
in several applications including human activity monitoring, healthcare, and
social networks. In this study, we focus on the use of smartwatch accelerometer
sensors to recognize eating activity. More specifically, we collected sensor
data from 10 participants while consuming pizza. Using this information, and
other comparable data available for similar events such as smoking and
medication-taking, and the dissimilar activity of jogging, we developed an
LSTM-ANN architecture that demonstrated 90% success in distinguishing
individual bites from puffing, medication-taking, and jogging activities.
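The abstract describes the classifier only at a high level: LSTM layers over fixed-length smartwatch accelerometer windows, followed by a fully connected (ANN) head that separates bites from puffing, medication-taking, and jogging. The following is a minimal sketch of what such an LSTM-ANN could look like, assuming a tri-axial input window and purely illustrative layer sizes; the paper's actual hyperparameters are not given here.

```python
# Minimal, illustrative LSTM-ANN classifier for smartwatch accelerometer windows.
# Window length, layer sizes, and training settings are assumptions, not the
# configuration published by the authors.
import numpy as np
from tensorflow.keras import layers, models

WINDOW = 128                                          # assumed samples per gesture window
CLASSES = ["bite", "puff", "medication", "jogging"]   # activities named in the abstract

model = models.Sequential([
    layers.Input(shape=(WINDOW, 3)),                  # tri-axial accelerometer window
    layers.LSTM(64, return_sequences=True),           # temporal feature extraction
    layers.LSTM(32),                                  # summarize the sequence
    layers.Dense(32, activation="relu"),              # fully connected (ANN) head
    layers.Dropout(0.3),
    layers.Dense(len(CLASSES), activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Synthetic stand-in data, only to show the expected tensor shapes; real inputs
# would be segmented windows of smartwatch accelerometer recordings.
X = np.random.randn(256, WINDOW, 3).astype("float32")
y = np.random.randint(0, len(CLASSES), size=256)
model.fit(X, y, epochs=2, batch_size=32, validation_split=0.2)
```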
Related papers
- Scaling Wearable Foundation Models [54.93979158708164]
We investigate the scaling properties of sensor foundation models across compute, data, and model size.
Using a dataset of up to 40 million hours of in-situ heart rate, heart rate variability, electrodermal activity, accelerometer, skin temperature, and altimeter per-minute data from over 165,000 people, we create LSM.
Our results establish scaling laws for LSM on tasks such as imputation and extrapolation, both across time and across sensor modalities.
arXiv Detail & Related papers (2024-10-17T15:08:21Z)
- MultiIoT: Benchmarking Machine Learning for the Internet of Things [70.74131118309967]
The next generation of machine learning systems must be adept at perceiving and interacting with the physical world.
Sensory data from motion, thermal, geolocation, depth, wireless signals, video, and audio are increasingly used to model the states of physical environments.
Existing efforts are often specialized to a single sensory modality or prediction task.
This paper proposes MultiIoT, the most expansive and unified IoT benchmark to date, encompassing over 1.15 million samples from 12 modalities and 8 real-world tasks.
arXiv Detail & Related papers (2023-11-10T18:13:08Z)
- Multi-Channel Time-Series Person and Soft-Biometric Identification [65.83256210066787]
This work investigates person and soft-biometrics identification from recordings of humans performing different activities using deep architectures.
We evaluate the method on four datasets of multi-channel time-series human activity recognition (HAR).
Soft-biometric-based attribute representation shows promising results and emphasizes the necessity of larger datasets.
arXiv Detail & Related papers (2023-04-04T07:24:51Z)
- Your Day in Your Pocket: Complex Activity Recognition from Smartphone Accelerometers [7.335712499936904]
This paper investigates the recognition of complex activities exclusively using smartphone accelerometer data.
We used a large smartphone sensing dataset collected from over 600 users in five countries during the pandemic.
Deep learning-based, binary classification of eight complex activities can be achieved with AUROC scores up to 0.76 with partially personalized models.
arXiv Detail & Related papers (2023-01-17T16:22:30Z)
- A Wireless-Vision Dataset for Privacy Preserving Human Activity Recognition [53.41825941088989]
A new WiFi-based and video-based neural network (WiNN) is proposed to improve the robustness of activity recognition.
Our results show that the WiVi dataset satisfies the primary demand, and all three branches of the proposed pipeline maintain more than 80% activity recognition accuracy.
arXiv Detail & Related papers (2022-05-24T10:49:11Z)
- Mobile Behavioral Biometrics for Passive Authentication [65.94403066225384]
This work carries out a comparative analysis of unimodal and multimodal behavioral biometric traits.
Experiments are performed over HuMIdb, one of the largest and most comprehensive freely available mobile user interaction databases.
In our experiments, the most discriminative background sensor is the magnetometer, whereas among touch tasks the best results are achieved with keystroke.
arXiv Detail & Related papers (2022-03-14T17:05:59Z)
- HAR-GCNN: Deep Graph CNNs for Human Activity Recognition From Highly Unlabeled Mobile Sensor Data [61.79595926825511]
Acquiring balanced datasets containing accurate activity labels requires humans to correctly annotate and potentially interfere with the subjects' normal activities in real-time.
We propose HAR-GCNN, a deep graph CNN model that leverages the correlation between chronologically adjacent sensor measurements to predict the correct labels for unclassified activities.
HAR-GCNN shows superior performance relative to previously used baseline methods, improving classification accuracy by about 25% and up to 68% on different datasets.
arXiv Detail & Related papers (2022-03-07T01:23:46Z)
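The HAR-GCNN entry above describes the idea only in words: treat chronologically adjacent activity windows as neighbors in a graph and let graph convolutions propagate information toward windows with missing labels. The NumPy sketch below is a loose, untrained illustration of that propagation pattern under assumed sizes; it builds a chain-graph adjacency over consecutive windows and runs a two-layer graph-convolution forward pass. It is not the authors' model.

```python
# Illustrative forward pass of a tiny graph convolution over a chain graph of
# chronologically adjacent activity windows (assumed sizes, random weights).
import numpy as np

rng = np.random.default_rng(0)
T, F, C = 20, 16, 4                   # windows in the sequence, features per window, classes

# Chain adjacency with self-loops: window t is connected to windows t-1 and t+1.
A = np.eye(T)
for t in range(T - 1):
    A[t, t + 1] = A[t + 1, t] = 1.0
D_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
A_hat = D_inv_sqrt @ A @ D_inv_sqrt   # symmetrically normalized adjacency

X = rng.standard_normal((T, F))       # stand-in per-window sensor features
W1 = rng.standard_normal((F, 32)) * 0.1
W2 = rng.standard_normal((32, C)) * 0.1

# Two graph-convolution layers: H' = relu(A_hat @ H @ W); neighboring windows share evidence.
H = np.maximum(A_hat @ X @ W1, 0.0)
logits = A_hat @ H @ W2
print(logits.argmax(axis=1))          # predicted activity label per window (untrained)
```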
- Physical Activity Recognition by Utilising Smartphone Sensor Signals [0.0]
This study collected human activity data from 60 participants over two days, covering six activities recorded by the gyroscope and accelerometer sensors of a modern smartphone.
The proposed approach achieved a classification accuracy of 98% in identifying four different activities.
arXiv Detail & Related papers (2022-01-20T09:58:52Z)
- Attention-Based Sensor Fusion for Human Activity Recognition Using IMU Signals [4.558966602878624]
We propose a novel attention-based approach to human activity recognition using multiple IMU sensors worn at different body locations.
An attention-based fusion mechanism is developed to learn the importance of sensors at different body locations and to generate an attentive feature representation.
The proposed approach is evaluated using five public datasets and it outperforms state-of-the-art methods on a wide variety of activity categories.
arXiv Detail & Related papers (2021-12-20T17:00:27Z)
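The attention-based fusion in the entry above is described at a high level: encode each body-worn IMU separately, learn a weight for each sensor location, and classify from the weighted combination. The Keras sketch below shows one plausible shape of that pattern; the number of sensors, window length, channel count, and layer widths are invented for the example and are not taken from the paper.

```python
# Illustrative attention-based fusion of several IMU streams (assumed sizes).
from tensorflow.keras import layers, models

N_SENSORS, WINDOW, CHANNELS, N_CLASSES = 3, 100, 6, 10   # e.g. wrist, hip, ankle IMUs

inputs, encoded = [], []
for i in range(N_SENSORS):
    x_in = layers.Input(shape=(WINDOW, CHANNELS), name=f"imu_{i}")
    h = layers.Conv1D(32, 5, activation="relu")(x_in)     # per-sensor encoder
    h = layers.GlobalAveragePooling1D()(h)                # per-sensor feature vector
    encoded.append(layers.Reshape((1, 32))(h))
    inputs.append(x_in)

feats = layers.Concatenate(axis=1)(encoded)               # (batch, sensors, 32)
scores = layers.Dense(1)(feats)                           # unnormalized score per sensor
weights = layers.Softmax(axis=1)(scores)                  # attention over sensor locations
fused = layers.Flatten()(layers.Dot(axes=1)([weights, feats]))  # attention-weighted sum

hidden = layers.Dense(64, activation="relu")(fused)
out = layers.Dense(N_CLASSES, activation="softmax")(hidden)

model = models.Model(inputs, out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
```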
- Human Activity Recognition using Deep Learning Models on Smartphones and Smartwatches Sensor Data [0.0]
We use the popular WISDM dataset for activity recognition.
We show that smartphones and smartwatches do not capture data in the same way, owing to the different locations where they are worn.
arXiv Detail & Related papers (2021-02-28T06:49:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.