PhysioGAN: Training High Fidelity Generative Model for Physiological
Sensor Readings
- URL: http://arxiv.org/abs/2204.13597v1
- Date: Mon, 25 Apr 2022 07:38:43 GMT
- Title: PhysioGAN: Training High Fidelity Generative Model for Physiological
Sensor Readings
- Authors: Moustafa Alzantot, Luis Garcia, Mani Srivastava
- Abstract summary: We present PHYSIOGAN, a generative model that produces high fidelity synthetic physiological sensor readings.
We evaluate it against state-of-the-art techniques on two real-world datasets: an ECG classification dataset and a motion-sensor activity recognition dataset.
- Score: 6.029263679246354
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Generative models such as the variational autoencoder (VAE) and the
generative adversarial network (GAN) have proven to be remarkably powerful for
generating synthetic data that preserves the statistical properties and utility
of real-world datasets, especially in the context of images and natural
language text. Nevertheless, until now there has been no successful
demonstration of how to apply either method to generate useful physiological
sensor data; state-of-the-art techniques in this context have achieved only
limited success. We present PHYSIOGAN, a generative model that produces high
fidelity synthetic physiological sensor readings. PHYSIOGAN consists of an
encoder, a decoder, and a discriminator. We evaluate PHYSIOGAN against
state-of-the-art techniques on two real-world datasets: an ECG classification
dataset and a motion-sensor activity recognition dataset. We compare PHYSIOGAN
to the baseline models not only on the accuracy of class-conditional generation
but also on the sample diversity and sample novelty of the synthetic datasets.
We show that PHYSIOGAN generates samples with higher utility than other
generative models: classification models trained only on synthetic data
generated by PHYSIOGAN suffer just a 10% and 20% decrease in classification
accuracy relative to models trained on the real data. Furthermore, we
demonstrate the use of PHYSIOGAN for sensor data imputation, producing
plausible results.
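The abstract describes PHYSIOGAN only at the component level: an encoder, a decoder, and a discriminator that together produce class-conditional sensor readings. The sketch below is a minimal PyTorch illustration of that encoder/decoder/discriminator layout for multichannel physiological time series; the recurrent layers, latent size, class-embedding conditioning, and all module names are assumptions made for illustration, not the paper's actual architecture.

```python
# Minimal sketch (assumptions: GRU layers, latent size, class-embedding
# conditioning). Only the encoder/decoder/discriminator split comes from the
# abstract; everything else is illustrative.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps a sensor sequence (batch, time, channels) to a latent code."""
    def __init__(self, n_channels, latent_dim, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(n_channels, hidden, batch_first=True)
        self.to_mu = nn.Linear(hidden, latent_dim)
        self.to_logvar = nn.Linear(hidden, latent_dim)

    def forward(self, x):
        _, h = self.rnn(x)                      # h: (1, batch, hidden)
        h = h.squeeze(0)
        return self.to_mu(h), self.to_logvar(h)

class Decoder(nn.Module):
    """Generates a sequence from a latent code and a class label."""
    def __init__(self, n_channels, latent_dim, n_classes, seq_len, hidden=64):
        super().__init__()
        self.seq_len = seq_len
        self.label_emb = nn.Embedding(n_classes, latent_dim)
        self.rnn = nn.GRU(2 * latent_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_channels)

    def forward(self, z, y):
        cond = torch.cat([z, self.label_emb(y)], dim=-1)      # (batch, 2*latent)
        steps = cond.unsqueeze(1).repeat(1, self.seq_len, 1)  # repeat per time step
        h, _ = self.rnn(steps)
        return self.out(h)                                    # (batch, time, channels)

class Discriminator(nn.Module):
    """Scores whether a (sequence, label) pair looks real."""
    def __init__(self, n_channels, n_classes, hidden=64):
        super().__init__()
        self.label_emb = nn.Embedding(n_classes, n_channels)
        self.rnn = nn.GRU(2 * n_channels, hidden, batch_first=True)
        self.score = nn.Linear(hidden, 1)

    def forward(self, x, y):
        lab = self.label_emb(y).unsqueeze(1).expand(-1, x.size(1), -1)
        _, h = self.rnn(torch.cat([x, lab], dim=-1))
        return self.score(h.squeeze(0))                       # real/fake logit

# Usage: sample class-conditional synthetic readings, e.g. for a
# train-on-synthetic / test-on-real evaluation of the kind reported above.
if __name__ == "__main__":
    n_channels, latent_dim, n_classes, seq_len = 6, 32, 5, 128
    enc = Encoder(n_channels, latent_dim)
    dec = Decoder(n_channels, latent_dim, n_classes, seq_len)
    disc = Discriminator(n_channels, n_classes)
    z = torch.randn(16, latent_dim)
    y = torch.randint(0, n_classes, (16,))
    fake = dec(z, y)                   # (16, 128, 6) synthetic windows
    mu, logvar = enc(fake)             # encode, e.g. for reconstruction losses
    logit = disc(fake, y)              # (16, 1) adversarial score
```

In a layout like this, the encoder/decoder pair can be trained with a reconstruction objective (which also supports imputing partially observed readings), while the discriminator provides adversarial feedback that pushes decoded sequences toward realistic, label-consistent samples; the resulting labeled synthetic sequences can then be fed to a downstream classifier.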
Related papers
- Synthetic Data Generation by Supervised Neural Gas Network for Physiological Emotion Recognition Data [0.0]
This study introduces an innovative approach to synthetic data generation using a Supervised Neural Gas (SNG) network.
The SNG efficiently processes the input data, creating synthetic instances that closely mimic the original data distributions.
arXiv Detail & Related papers (2025-01-19T15:34:05Z)
- Exploring the Impact of Synthetic Data on Human Gesture Recognition Tasks Using GANs [0.0]
This study is the first to explore the feasibility of synthesizing motion gestures for allergic rhinitis from wearable IoT device data using Generative Adversarial Networks (GANs).
We also focus on these AI models' performance in terms of fidelity, diversity, and privacy.
arXiv Detail & Related papers (2024-12-09T11:15:47Z)
- Synthetic ECG Generation for Data Augmentation and Transfer Learning in Arrhythmia Classification [1.7614607439356635]
We explore the usefulness of synthetic data generated with different deep generative models.
We investigate the effects of transfer learning by fine-tuning a synthetically pre-trained model and then adding increasing proportions of real data.
arXiv Detail & Related papers (2024-11-27T15:46:34Z)
- Synthetic Image Learning: Preserving Performance and Preventing Membership Inference Attacks [5.0243930429558885]
This paper introduces Knowledge Recycling (KR), a pipeline designed to optimise the generation and use of synthetic data for training downstream classifiers.
At the heart of this pipeline is Generative Knowledge Distillation (GKD), the proposed technique that significantly improves the quality and usefulness of the information.
The results show a significant reduction in the performance gap between models trained on real and synthetic data, with models based on synthetic data outperforming those trained on real data in some cases.
arXiv Detail & Related papers (2024-07-22T10:31:07Z)
- Learning from Synthetic Data for Visual Grounding [55.21937116752679]
We show that SynGround can improve the localization capabilities of off-the-shelf vision-and-language models.
Data generated with SynGround improves the pointing game accuracy of pretrained ALBEF and BLIP models by 4.81 and 17.11 absolute percentage points, respectively.
arXiv Detail & Related papers (2024-03-20T17:59:43Z)
- Synthetic location trajectory generation using categorical diffusion models [50.809683239937584]
Diffusion probabilistic models (DPMs) have rapidly evolved to be one of the predominant generative models for the simulation of synthetic data.
We propose using DPMs for the generation of synthetic individual location trajectories (ILTs), which are sequences of variables representing physical locations visited by individuals.
arXiv Detail & Related papers (2024-02-19T15:57:39Z)
- Derm-T2IM: Harnessing Synthetic Skin Lesion Data via Stable Diffusion Models for Enhanced Skin Disease Classification using ViT and CNN [1.0499611180329804]
We aim to incorporate enhanced data transformation techniques by extending the recent success of few-shot learning.
We investigate the impact of incorporating newly generated synthetic data into the training pipeline of state-of-art machine learning models.
arXiv Detail & Related papers (2024-01-10T13:46:03Z)
- On the Stability of Iterative Retraining of Generative Models on their own Data [56.153542044045224]
We study the impact of training generative models on mixed datasets.
We first prove the stability of iterative training under the condition that the initial generative models approximate the data distribution well enough.
We empirically validate our theory on both synthetic and natural images by iteratively training normalizing flows and state-of-the-art diffusion models.
arXiv Detail & Related papers (2023-09-30T16:41:04Z)
- Synthetic data, real errors: how (not) to publish and use synthetic data [86.65594304109567]
We show how the generative process affects the downstream ML task.
We introduce Deep Generative Ensemble (DGE) to approximate the posterior distribution over the generative process model parameters.
arXiv Detail & Related papers (2023-05-16T07:30:29Z)
- Is synthetic data from generative models ready for image recognition? [69.42645602062024]
We study whether and how synthetic images generated from state-of-the-art text-to-image generation models can be used for image recognition tasks.
We showcase the power and shortcomings of synthetic data from existing generative models, and propose strategies for better applying synthetic data to recognition tasks.
arXiv Detail & Related papers (2022-10-14T06:54:24Z)
- CAFE: Learning to Condense Dataset by Aligning Features [72.99394941348757]
We propose a novel scheme to Condense dataset by Aligning FEatures (CAFE).
At the heart of our approach is an effective strategy to align features from the real and synthetic data across various scales.
We validate the proposed CAFE across various datasets, and demonstrate that it generally outperforms the state of the art.
arXiv Detail & Related papers (2022-03-03T05:58:49Z)