HAR-GCNN: Deep Graph CNNs for Human Activity Recognition From Highly
Unlabeled Mobile Sensor Data
- URL: http://arxiv.org/abs/2203.03087v1
- Date: Mon, 7 Mar 2022 01:23:46 GMT
- Authors: Abduallah Mohamed, Fernando Lejarza, Stephanie Cahail, Christian
Claudel, Edison Thomaz
- Abstract summary: Acquiring balanced datasets containing accurate activity labels requires humans to correctly annotate and potentially interfere with the subjects' normal activities in real-time.
We propose HAR-GCNN, a deep graph CNN model that leverages the correlation between chronologically adjacent sensor measurements to predict the correct labels for unclassified activities.
HAR-GCNN shows superior performance relative to previously used baseline methods, improving classification accuracy by about 25% and up to 68% on different datasets.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The problem of human activity recognition from mobile sensor data applies to
multiple domains, such as health monitoring, personal fitness, daily life
logging, and senior care. A critical challenge for training human activity
recognition models is data quality. Acquiring balanced datasets containing
accurate activity labels requires humans to correctly annotate and potentially
interfere with the subjects' normal activities in real-time. Despite the
likelihood of incorrect annotation or lack thereof, there is often an inherent
chronology to human behavior. For example, we take a shower after we exercise.
This implicit chronology can be used to learn unknown labels and classify
future activities. In this work, we propose HAR-GCNN, a deep graph CNN model
that leverages the correlation between chronologically adjacent sensor
measurements to predict the correct labels for unclassified activities that
have at least one activity label. We propose a new training strategy enforcing
that the model predicts the missing activity labels by leveraging the known
ones. HAR-GCNN shows superior performance relative to previously used baseline
methods, improving classification accuracy by about 25% and up to 68% on
different datasets. Code is available at
https://github.com/abduallahmohamed/HAR-GCNN.
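The abstract describes two ingredients: a graph whose nodes are chronologically adjacent activity windows, and a training strategy that hides some activity labels and asks the model to recover them from the known ones. A minimal Python sketch of both ideas follows; the chain-graph construction, function names, and masking scheme here are illustrative assumptions, not the paper's exact implementation:

```python
import random

def chain_adjacency(n):
    # Chain graph over n chronologically ordered activity windows:
    # node i links to its temporal neighbors i-1 and i+1, plus a
    # self-loop so each node keeps its own features during convolution.
    return [[1 if i == j or abs(i - j) == 1 else 0 for j in range(n)]
            for i in range(n)]

def mask_labels(labels, keep_frac, seed=0):
    # Training strategy sketch: hide a fraction of labels (marked None);
    # the model is then trained to predict the hidden labels from the
    # labels that remain known on neighboring nodes.
    rng = random.Random(seed)
    n_hide = round(len(labels) * (1 - keep_frac))
    hidden = set(rng.sample(range(len(labels)), n_hide))
    return [None if i in hidden else y for i, y in enumerate(labels)]

adj = chain_adjacency(5)                     # 5 consecutive activity windows
y = mask_labels([0, 1, 2, 1, 0], keep_frac=0.6)  # keep ~60% of labels
```

A real model would replace the adjacency lists with tensors and run graph convolutions over the window features, but the chain structure and partial-label masking above are the core of what the training objective exploits.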
Related papers
- Dataset Bias in Human Activity Recognition [57.91018542715725]
This contribution statistically curates the training data to assess the degree to which the physical characteristics of humans influence HAR performance.
We evaluate the performance of a state-of-the-art convolutional neural network on two time-series HAR datasets that vary in their sensors, activities, and recordings.
arXiv Detail & Related papers (2023-01-19T12:33:50Z)
- Domain Adaptation Under Behavioral and Temporal Shifts for Natural Time Series Mobile Activity Recognition [31.43183992755392]
Existing datasets typically consist of scripted movements.
Our long-term goal is to perform mobile activity recognition in natural settings.
Because of the large variations present in human behavior, we collect data from many participants across two different age groups.
arXiv Detail & Related papers (2022-07-10T02:48:34Z)
- Beyond the Gates of Euclidean Space: Temporal-Discrimination-Fusions and Attention-based Graph Neural Network for Human Activity Recognition [5.600003119721707]
Human activity recognition (HAR) through wearable devices has received much interest due to its numerous applications in fitness tracking, wellness screening, and supported living.
Traditional deep learning (DL) has set the state-of-the-art performance in the HAR domain.
We propose an approach based on Graph Neural Networks (GNNs) for structuring the input representation and exploiting the relations among the samples.
arXiv Detail & Related papers (2022-06-10T03:04:23Z)
- Human Activity Recognition on wrist-worn accelerometers using self-supervised neural networks [0.0]
Measures of Activity of Daily Living (ADL) are an important indicator of overall health but difficult to measure in-clinic.
We propose a self-supervised learning paradigm to create a robust representation of accelerometer data that can generalize across devices and subjects.
We also propose a segmentation algorithm which can identify segments of salient activity and boost HAR accuracy on continuous real-life data.
arXiv Detail & Related papers (2021-12-22T23:35:20Z)
- Self-supervised Pretraining with Classification Labels for Temporal Activity Detection [54.366236719520565]
Temporal Activity Detection aims to predict activity classes per frame.
Due to the expensive frame-level annotations required for detection, the scale of detection datasets is limited.
This work proposes a novel self-supervised pretraining method for detection leveraging classification labels.
arXiv Detail & Related papers (2021-11-26T18:59:28Z)
- Towards Good Practices for Efficiently Annotating Large-Scale Image Classification Datasets [90.61266099147053]
We investigate efficient annotation strategies for collecting multi-class classification labels for a large collection of images.
We propose modifications and best practices aimed at minimizing human labeling effort.
Simulated experiments on a 125k-image subset of ImageNet100 show that it can be annotated to 80% top-1 accuracy with 0.35 annotations per image on average.
arXiv Detail & Related papers (2021-04-26T16:29:32Z)
- HHAR-net: Hierarchical Human Activity Recognition using Neural Networks [2.4530909757679633]
This research aims at building a hierarchical classification with Neural Networks to recognize human activities.
We evaluate our model on the Extrasensory dataset, a dataset collected in the wild that contains data from smartphones and smartwatches.
arXiv Detail & Related papers (2020-10-28T17:06:42Z)
- ConsNet: Learning Consistency Graph for Zero-Shot Human-Object Interaction Detection [101.56529337489417]
We consider the problem of Human-Object Interaction (HOI) Detection, which aims to locate and recognize HOI instances in the form of <human, action, object> in images.
We argue that multi-level consistencies among objects, actions and interactions are strong cues for generating semantic representations of rare or previously unseen HOIs.
Our model takes visual features of candidate human-object pairs and word embeddings of HOI labels as inputs, maps them into visual-semantic joint embedding space and obtains detection results by measuring their similarities.
arXiv Detail & Related papers (2020-08-14T09:11:18Z)
- Don't Wait, Just Weight: Improving Unsupervised Representations by Learning Goal-Driven Instance Weights [92.16372657233394]
Self-supervised learning techniques can boost performance by learning useful representations from unlabelled data.
We show that by learning Bayesian instance weights for the unlabelled data, we can improve the downstream classification accuracy.
Our method, BetaDataWeighter, is evaluated using the popular self-supervised rotation prediction task on STL-10 and Visual Decathlon.
arXiv Detail & Related papers (2020-06-22T15:59:32Z)
- Sequential Weakly Labeled Multi-Activity Localization and Recognition on Wearable Sensors using Recurrent Attention Networks [13.64024154785943]
We propose a recurrent attention network (RAN) to handle sequential weakly labeled multi-activity recognition and localization tasks.
Our RAN model can simultaneously infer multi-activity types from the coarse-grained sequential weak labels.
It will greatly reduce the burden of manual labeling.
arXiv Detail & Related papers (2020-04-13T04:57:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.