BSDGAN: Balancing Sensor Data Generative Adversarial Networks for Human
Activity Recognition
- URL: http://arxiv.org/abs/2208.03647v1
- Date: Sun, 7 Aug 2022 05:48:48 GMT
- Title: BSDGAN: Balancing Sensor Data Generative Adversarial Networks for Human
Activity Recognition
- Authors: Yifan Hu and Yu Wang
- Abstract summary: Human Activity Recognition (HAR) based on sensor data has become an active research topic in the field of machine learning.
Due to the inconsistent frequency of human activities, the amount of data for each activity in the human activity dataset is imbalanced.
We propose Balancing Sensor Data Generative Adversarial Networks (BSDGAN) to generate sensor data for minority human activities.
- Score: 10.46273607225732
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The development of IoT technology enables a variety of sensors to be
integrated into mobile devices. Human Activity Recognition (HAR) based on
sensor data has become an active research topic in the field of machine
learning and ubiquitous computing. However, due to the inconsistent frequency
of human activities, the amount of data for each activity in the human activity
dataset is imbalanced. Considering the limited sensor resources and the high
cost of manually labeled sensor data, human activity recognition faces the
challenge of highly imbalanced activity datasets. In this paper, we propose
Balancing Sensor Data Generative Adversarial Networks (BSDGAN) to generate
sensor data for minority human activities. The proposed BSDGAN consists of a
generator model and a discriminator model. Considering the extreme imbalance of
the human activity dataset, an autoencoder is employed to initialize the training
process of BSDGAN, ensuring that the data features of each activity can be learned.
The generated activity data is combined with the original dataset to balance
the amount of activity data across human activity classes. We deployed multiple
human activity recognition models on two publicly available imbalanced human
activity datasets, WISDM and UNIMIB. Experimental results show that the
proposed BSDGAN can effectively capture the data features of real human
activity sensor data and generate realistic synthetic sensor data. Meanwhile, the
balanced activity dataset effectively helps the activity recognition models
improve their recognition accuracy.
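
To make the two-stage training concrete, the sketch below gives one plausible reading of the approach: an autoencoder whose decoder doubles as the GAN generator is first pretrained on minority-class windows, and standard adversarial updates follow. The window shape, network sizes, and PyTorch implementation are assumptions for illustration, not the authors' code.

```python
# Illustrative sketch (not the authors' code): an autoencoder-initialized GAN
# for flattened sensor windows, assuming 3-axis accelerometer segments of 128 samples.
import torch
import torch.nn as nn

WINDOW = 128 * 3   # flattened 3-axis window length (assumed)
LATENT = 64

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT, 256), nn.ReLU(),
            nn.Linear(256, WINDOW), nn.Tanh(),
        )
    def forward(self, z):
        return self.net(z)

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(WINDOW, 256), nn.ReLU(),
            nn.Linear(256, LATENT),
        )
    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(WINDOW, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1), nn.Sigmoid(),
        )
    def forward(self, x):
        return self.net(x)

def pretrain_autoencoder(encoder, generator, minority_windows, epochs=50):
    """Stage 1: train encoder + generator as an autoencoder so the generator
    already reproduces minority-class feature structure before adversarial training."""
    opt = torch.optim.Adam(list(encoder.parameters()) + list(generator.parameters()), lr=1e-3)
    mse = nn.MSELoss()
    for _ in range(epochs):
        recon = generator(encoder(minority_windows))
        loss = mse(recon, minority_windows)
        opt.zero_grad(); loss.backward(); opt.step()

def adversarial_step(generator, discriminator, real, opt_g, opt_d):
    """Stage 2: one standard GAN update on a batch of real minority-class windows."""
    bce = nn.BCELoss()
    z = torch.randn(real.size(0), LATENT)
    fake = generator(z)
    # Discriminator update: real windows -> 1, generated windows -> 0
    d_loss = bce(discriminator(real), torch.ones(real.size(0), 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(real.size(0), 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator update: fool the discriminator
    g_loss = bce(discriminator(fake), torch.ones(real.size(0), 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

The generated minority-class windows would then be appended to the original dataset until every activity class reaches a comparable sample count, as described in the abstract.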
Related papers
- Scaling Wearable Foundation Models [54.93979158708164]
We investigate the scaling properties of sensor foundation models across compute, data, and model size.
Using a dataset of up to 40 million hours of in-situ heart rate, heart rate variability, electrodermal activity, accelerometer, skin temperature, and altimeter per-minute data from over 165,000 people, we create LSM.
Our results establish the scaling laws of LSM for tasks such as imputation and extrapolation, both across time and sensor modalities.
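
As a rough illustration of what "establishing a scaling law" involves, the snippet below fits a saturating power law to hypothetical loss-versus-data points; the numbers and functional form are assumptions for demonstration only.

```python
# Minimal illustration (hypothetical numbers): fitting a saturating power law
# L(N) = a * N**(-b) + c to validation loss measured at several data scales.
import numpy as np
from scipy.optimize import curve_fit

def power_law(n, a, b, c):
    return a * np.power(n, -b) + c

hours = np.array([1e4, 1e5, 1e6, 1e7, 4e7])      # assumed amounts of sensor data
loss  = np.array([0.92, 0.71, 0.55, 0.44, 0.40])  # assumed validation losses

(a, b, c), _ = curve_fit(power_law, hours, loss, p0=(1.0, 0.2, 0.3), maxfev=10000)
print(f"fitted exponent b = {b:.3f}, irreducible loss c = {c:.3f}")
```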
arXiv Detail & Related papers (2024-10-17T15:08:21Z)
- Sensor Data Augmentation from Skeleton Pose Sequences for Improving Human Activity Recognition [5.669438716143601]
Human Activity Recognition (HAR) has not fully capitalized on the proliferation of deep learning.
We propose a novel approach to improve wearable sensor-based HAR by introducing a pose-to-sensor network model.
Our contributions include the integration of simultaneous training, direct pose-to-sensor generation, and a comprehensive evaluation on the MM-Fit dataset.
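
A minimal sketch of the pose-to-sensor idea follows, assuming 17 three-dimensional joints and a 100-frame window; the actual architecture and training setup in the paper may differ.

```python
# Illustrative sketch (assumed shapes, not the paper's architecture): a small
# pose-to-sensor network mapping a sequence of skeleton joints to a virtual
# 3-axis accelerometer signal, so pose datasets can augment sensor-based HAR.
import torch
import torch.nn as nn

N_JOINTS, SEQ_LEN = 17, 100   # assumed: 17 3-D joints, 100-frame window

class PoseToSensor(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        self.rnn = nn.GRU(N_JOINTS * 3, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 3)    # 3-axis accelerometer per frame

    def forward(self, pose_seq):            # (batch, SEQ_LEN, N_JOINTS*3)
        h, _ = self.rnn(pose_seq)
        return self.head(h)                 # (batch, SEQ_LEN, 3)

model = PoseToSensor()
fake_pose = torch.randn(8, SEQ_LEN, N_JOINTS * 3)
synthetic_imu = model(fake_pose)            # synthetic sensor windows for HAR training
```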
arXiv Detail & Related papers (2024-04-25T10:13:18Z)
- Unsupervised Embedding Learning for Human Activity Recognition Using Wearable Sensor Data [2.398608007786179]
We present an unsupervised approach to project the human activities into an embedding space in which similar activities will be located closely together.
Results of experiments on three labeled benchmark datasets demonstrate the effectiveness of the framework.
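
The paper's exact objective is not given in this summary, but a triplet-margin loss is one common way to learn such an embedding space; the sketch below is illustrative only.

```python
# Illustrative sketch: pull windows of the same activity together and push
# different activities apart in a learned embedding space (assumed shapes).
import torch
import torch.nn as nn

embed = nn.Sequential(nn.Linear(128 * 3, 256), nn.ReLU(), nn.Linear(256, 64))
criterion = nn.TripletMarginLoss(margin=1.0)

anchor   = torch.randn(32, 128 * 3)   # windows of activity A
positive = torch.randn(32, 128 * 3)   # other windows of activity A
negative = torch.randn(32, 128 * 3)   # windows of different activities
loss = criterion(embed(anchor), embed(positive), embed(negative))
loss.backward()
```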
arXiv Detail & Related papers (2023-07-21T08:52:47Z)
- Unsupervised Statistical Feature-Guided Diffusion Model for Sensor-based Human Activity Recognition [3.2319909486685354]
A key problem holding up progress in wearable sensor-based human activity recognition is the unavailability of diverse and labeled training data.
We propose an unsupervised statistical feature-guided diffusion model specifically optimized for wearable sensor-based human activity recognition.
By conditioning the diffusion model on statistical information such as mean, standard deviation, Z-score, and skewness, we generate diverse and representative synthetic sensor data.
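
The conditioning signal itself is easy to illustrate. The sketch below computes per-channel mean, standard deviation, a Z-score summary, and skewness for a sensor window; how the Z-score is reduced to a fixed-length vector here is an assumption.

```python
# Illustrative sketch: building per-channel statistics that could condition such
# a diffusion model. The Z-score summarization is an assumption.
import numpy as np
from scipy.stats import skew

def conditioning_vector(window: np.ndarray) -> np.ndarray:
    """window: (samples, channels) raw sensor segment."""
    mean = window.mean(axis=0)
    std = window.std(axis=0) + 1e-8
    zscore = (window - mean) / std              # standardized signal
    z_summary = np.abs(zscore).mean(axis=0)     # assumed: mean |z| per channel
    return np.concatenate([mean, std, z_summary, skew(window, axis=0)])

cond = conditioning_vector(np.random.randn(128, 3))   # shape (12,) for a 3-channel window
```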
arXiv Detail & Related papers (2023-05-30T15:12:59Z)
- Human Activity Recognition Using Self-Supervised Representations of Wearable Data [0.0]
Development of accurate algorithms for human activity recognition (HAR) is hindered by the lack of large real-world labeled datasets.
Here we develop a 6-class HAR model with strong performance when evaluated on real-world datasets not seen during training.
arXiv Detail & Related papers (2023-04-26T07:33:54Z)
- Synthetic-to-Real Domain Adaptation for Action Recognition: A Dataset and Baseline Performances [76.34037366117234]
We introduce a new dataset called Robot Control Gestures (RoCoG-v2).
The dataset is composed of both real and synthetic videos from seven gesture classes.
We present results using state-of-the-art action recognition and domain adaptation algorithms.
arXiv Detail & Related papers (2023-03-17T23:23:55Z)
- cGAN-Based High Dimensional IMU Sensor Data Generation for Enhanced Human Activity Recognition in Therapeutic Activities [0.0]
A novel GAN network called TheraGAN was developed to generate IMU signals associated with rehabilitation activities.
The generated signals closely mimicked the real signals, and adding generated data resulted in a significant improvement in the performance of all tested networks.
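
Independently of the generator used, folding synthetic data back into training usually amounts to oversampling the under-represented classes. A minimal sketch follows, assuming a hypothetical generate(label, k) callable that returns k synthetic windows for a class.

```python
# Illustrative sketch: augment a training set with generated windows until every
# activity class has roughly the same number of examples (generate() is assumed).
import numpy as np

def balance_with_synthetic(x, y, generate):
    """x: (n, ...) real windows, y: (n,) integer labels,
    generate(label, k) -> (k, ...) synthetic windows for that label."""
    counts = np.bincount(y)
    target = counts.max()
    extra_x, extra_y = [], []
    for label, count in enumerate(counts):
        if count < target:
            synth = generate(label, target - count)
            extra_x.append(synth)
            extra_y.append(np.full(len(synth), label))
    if extra_x:
        x = np.concatenate([x, *extra_x])
        y = np.concatenate([y, *extra_y])
    return x, y
```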
arXiv Detail & Related papers (2023-02-16T00:08:28Z)
- Dataset Bias in Human Activity Recognition [57.91018542715725]
This contribution statistically curates the training data to assess to what degree the physical characteristics of humans influence HAR performance.
We evaluate the performance of a state-of-the-art convolutional neural network on two HAR datasets that vary in the sensors, activities, and recording conditions for time-series HAR.
arXiv Detail & Related papers (2023-01-19T12:33:50Z)
- Video-based Pose-Estimation Data as Source for Transfer Learning in Human Activity Recognition [71.91734471596433]
Human Activity Recognition (HAR) using on-body devices identifies specific human actions in unconstrained environments.
Previous works demonstrated that transfer learning is a good strategy for addressing scenarios with scarce data.
This paper proposes using datasets intended for human-pose estimation as a source for transfer learning.
arXiv Detail & Related papers (2022-12-02T18:19:36Z)
- HAR-GCNN: Deep Graph CNNs for Human Activity Recognition From Highly Unlabeled Mobile Sensor Data [61.79595926825511]
Acquiring balanced datasets containing accurate activity labels requires humans to correctly annotate and potentially interfere with the subjects' normal activities in real-time.
We propose HAR-GCNN, a deep graph CNN model that leverages the correlation between chronologically adjacent sensor measurements to predict the correct labels for unclassified activities.
HAR-GCNN shows superior performance relative to previously used baseline methods, improving classification accuracy by about 25% and up to 68% on different datasets.
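
A minimal sketch of the underlying idea, not the authors' implementation: consecutive sensor windows are linked in a chain graph, and a normalized message-passing step lets each window borrow features from its temporal neighbors before classification.

```python
# Illustrative sketch: exploit chronological adjacency by linking consecutive
# windows in a chain graph and averaging neighbor features (one graph-conv step).
import numpy as np

def chain_adjacency(n_windows: int) -> np.ndarray:
    """Adjacency matrix (with self-loops) connecting each window to its temporal neighbors."""
    a = np.eye(n_windows)
    idx = np.arange(n_windows - 1)
    a[idx, idx + 1] = 1.0
    a[idx + 1, idx] = 1.0
    return a

def graph_smooth(features: np.ndarray) -> np.ndarray:
    """One normalized message-passing step: X' = D^-1 A X (a minimal graph-conv layer)."""
    a = chain_adjacency(features.shape[0])
    deg = a.sum(axis=1, keepdims=True)
    return (a @ features) / deg

window_features = np.random.randn(10, 64)   # 10 chronologically ordered window embeddings
context_features = graph_smooth(window_features)
```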
arXiv Detail & Related papers (2022-03-07T01:23:46Z)
- Transformer Networks for Data Augmentation of Human Physical Activity Recognition [61.303828551910634]
State-of-the-art models like Recurrent Generative Adversarial Networks (RGAN) are used to generate realistic synthetic data.
In this paper, transformer-based generative adversarial networks, which have global attention on the data, are compared with RGAN on the PAMAP2 and Real World Human Activity Recognition data sets.
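
For intuition, the sketch below shows a toy transformer-based generator in which self-attention gives every time step a global view of the window being synthesized; the dimensions and layer counts are assumptions, not the paper's architecture.

```python
# Minimal sketch of a transformer-based generator for IMU windows (assumed shapes).
import torch
import torch.nn as nn

SEQ_LEN, CHANNELS, D_MODEL = 100, 6, 64   # assumed IMU window shape

class AttnGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(D_MODEL, D_MODEL)
        layer = nn.TransformerEncoderLayer(d_model=D_MODEL, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(D_MODEL, CHANNELS)

    def forward(self, z):                   # z: (batch, SEQ_LEN, D_MODEL) noise
        h = self.encoder(self.proj(z))      # global attention across all time steps
        return self.head(h)                 # (batch, SEQ_LEN, CHANNELS) synthetic signal

fake = AttnGenerator()(torch.randn(4, SEQ_LEN, D_MODEL))
```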
arXiv Detail & Related papers (2021-09-02T16:47:29Z)