Protect and Extend -- Using GANs for Synthetic Data Generation of
Time-Series Medical Records
- URL: http://arxiv.org/abs/2402.14042v2
- Date: Fri, 1 Mar 2024 11:46:26 GMT
- Title: Protect and Extend -- Using GANs for Synthetic Data Generation of
Time-Series Medical Records
- Authors: Navid Ashrafi, Vera Schmitt, Robert P. Spang, Sebastian Möller,
Jan-Niklas Voigt-Antons
- Abstract summary: This research compares state-of-the-art GAN-based models for synthetic data generation to produce time-series synthetic medical records of dementia patients.
Our experiments indicate the superiority of the privacy-preserving GAN (PPGAN) model over other models regarding privacy preservation.
- Score: 1.9749268648715583
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Preservation of private user data is of paramount importance for high
Quality of Experience (QoE) and acceptability, particularly for services that
handle sensitive data, such as IT-based health services. Whereas anonymization
techniques have been shown to be prone to data re-identification, synthetic
data generation has gradually replaced anonymization, since it is relatively
less time- and resource-consuming and more robust to data leakage. Generative
Adversarial Networks (GANs) have been used to generate synthetic datasets,
especially GAN frameworks adhering to differential privacy. This research
compares state-of-the-art GAN-based models for synthetic data generation to
produce time-series synthetic medical records of dementia patients that can be
distributed without privacy concerns. Predictive modeling, autocorrelation,
and distribution analysis are used to assess the Quality of Generating (QoG)
of the generated data. The privacy preservation of the respective models is
assessed by applying membership inference attacks to determine potential data
leakage risks. Our experiments indicate that the privacy-preserving GAN
(PPGAN) model outperforms the other models in privacy preservation while
maintaining an acceptable level of QoG. The presented results can support
better data protection for medical use cases in the future.
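The abstract describes the evaluation pipeline only at a high level. As a rough
illustration, the sketch below computes two of the mentioned QoG signals
(autocorrelation and value-distribution similarity) and a generic
distance-threshold membership inference check in Python with NumPy/SciPy. The
array shapes, the Kolmogorov-Smirnov comparison, and the threshold attack are
illustrative assumptions, not necessarily the exact procedures used in the
paper.

```python
# Minimal sketch of the evaluation ideas described in the abstract:
# (1) QoG via autocorrelation and distribution comparison, and
# (2) a simple distance-based membership inference check.
# Shapes and the attack itself are illustrative assumptions.
import numpy as np
from scipy.stats import ks_2samp


def autocorrelation(series: np.ndarray, lag: int = 1) -> float:
    """Lag-k autocorrelation of a 1-D time series."""
    x = series - series.mean()
    denom = np.dot(x, x)
    if denom == 0:
        return 0.0
    return np.dot(x[:-lag], x[lag:]) / denom


def qog_report(real: np.ndarray, synthetic: np.ndarray, lag: int = 1) -> dict:
    """Compare real vs. synthetic records of shape (n_records, n_timesteps)."""
    acf_real = np.mean([autocorrelation(r, lag) for r in real])
    acf_syn = np.mean([autocorrelation(s, lag) for s in synthetic])
    # Kolmogorov-Smirnov statistic on the pooled value distributions.
    ks_stat, _ = ks_2samp(real.ravel(), synthetic.ravel())
    return {"acf_real": acf_real, "acf_synthetic": acf_syn, "ks_statistic": ks_stat}


def membership_inference(train: np.ndarray, holdout: np.ndarray,
                         synthetic: np.ndarray, threshold: float) -> float:
    """Distance-threshold attack: records closer than `threshold` to any
    synthetic record are predicted to be training members. Returns the
    attack accuracy; values near 0.5 suggest little leakage."""
    def predicted_member(record: np.ndarray) -> bool:
        dists = np.linalg.norm(synthetic - record, axis=1)
        return bool(dists.min() < threshold)

    hits = sum(predicted_member(r) for r in train)            # true positives
    false_alarms = sum(predicted_member(r) for r in holdout)  # false positives
    total = len(train) + len(holdout)
    return (hits + (len(holdout) - false_alarms)) / total
```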
Related papers
- Evaluating Differentially Private Synthetic Data Generation in High-Stakes Domains [9.123834467375532]
We explore the feasibility of using synthetic data generated from differentially private language models in place of real data to facilitate the development of NLP in high-stakes domains.
Our results show that prior simplistic evaluations have failed to highlight utility, privacy, and fairness issues in the synthetic data.
arXiv Detail & Related papers (2024-10-10T19:31:02Z)
- KIPPS: Knowledge infusion in Privacy Preserving Synthetic Data Generation [0.0]
Generative Deep Learning models struggle to model discrete and non-Gaussian features that have domain constraints.
Generative models create synthetic data that repeats sensitive features, which is a privacy risk.
This paper proposes a novel model, KIPPS, that infuses Domain and Regulatory Knowledge from Knowledge Graphs into Generative Deep Learning models for enhanced Privacy Preserving Synthetic data generation.
arXiv Detail & Related papers (2024-09-25T19:50:03Z)
- Mitigating the Privacy Issues in Retrieval-Augmented Generation (RAG) via Pure Synthetic Data [51.41288763521186]
Retrieval-augmented generation (RAG) enhances the outputs of language models by integrating relevant information retrieved from external knowledge sources.
RAG systems may face severe privacy risks when retrieving private data.
We propose using synthetic data as a privacy-preserving alternative for the retrieval data.
arXiv Detail & Related papers (2024-06-20T22:53:09Z)
- Synthesizing Multimodal Electronic Health Records via Predictive Diffusion Models [69.06149482021071]
We propose a novel EHR data generation model called EHRPD.
It is a diffusion-based model designed to predict the next visit based on the current one while also incorporating time interval estimation.
We conduct experiments on two public datasets and evaluate EHRPD from fidelity, privacy, and utility perspectives.
arXiv Detail & Related papers (2024-06-20T02:20:23Z)
- Quantifying and Mitigating Privacy Risks for Tabular Generative Models [13.153278585144355]
Synthetic data from generative models emerges as the privacy-preserving data-sharing solution.
We propose DP-TLDM, Differentially Private Tabular Latent Diffusion Model.
We show that DP-TLDM improves the synthetic quality by an average of 35% in data resemblance, 15% in the utility for downstream tasks, and 50% in data discriminability.
arXiv Detail & Related papers (2024-03-12T17:27:49Z)
- MedDiffusion: Boosting Health Risk Prediction via Diffusion-based Data Augmentation [58.93221876843639]
This paper introduces a novel, end-to-end diffusion-based risk prediction model, named MedDiffusion.
It enhances risk prediction performance by creating synthetic patient data during training to enlarge sample space.
It discerns hidden relationships between patient visits using a step-wise attention mechanism, enabling the model to automatically retain the most vital information for generating high-quality data.
arXiv Detail & Related papers (2023-10-04T01:36:30Z)
- Beyond Privacy: Navigating the Opportunities and Challenges of Synthetic Data [91.52783572568214]
Synthetic data may become a dominant force in the machine learning world, promising a future where datasets can be tailored to individual needs.
We discuss which fundamental challenges the community needs to overcome for wider relevance and application of synthetic data.
arXiv Detail & Related papers (2023-04-07T16:38:40Z)
- Differentially Private Synthetic Medical Data Generation using Convolutional GANs [7.2372051099165065]
We develop a differentially private framework for synthetic data generation using Rényi differential privacy.
Our approach builds on convolutional autoencoders and convolutional generative adversarial networks to preserve some of the critical characteristics of the generated synthetic data.
We demonstrate that our model outperforms existing state-of-the-art models under the same privacy budget (a rough sketch of the differentially private training mechanism such frameworks rely on appears after this list).
arXiv Detail & Related papers (2020-12-22T01:03:49Z)
- Privacy-preserving medical image analysis [53.4844489668116]
We present PriMIA, a software framework designed for privacy-preserving machine learning (PPML) in medical imaging.
We show significantly better classification performance of a securely aggregated federated learning model compared to human experts on unseen datasets.
We empirically evaluate the framework's security against a gradient-based model inversion attack.
arXiv Detail & Related papers (2020-12-10T13:56:00Z)
- Hide-and-Seek Privacy Challenge [88.49671206936259]
The NeurIPS 2020 Hide-and-Seek Privacy Challenge is a novel two-tracked competition to accelerate progress in tackling both problems.
In our head-to-head format, participants in the synthetic data generation track (i.e. "hiders") and the patient re-identification track (i.e. "seekers") are directly pitted against each other by way of a new, high-quality intensive care time-series dataset.
arXiv Detail & Related papers (2020-07-23T15:50:59Z)
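Several of the listed works, including the convolutional-GAN paper above, build
on differentially private training of the generator or discriminator. As a
rough illustration of the underlying mechanism, per-sample gradient clipping
followed by calibrated Gaussian noise (DP-SGD), the PyTorch sketch below
performs one noisy discriminator update. The architecture, clipping bound, and
noise multiplier are illustrative assumptions, and the sketch omits the
Rényi/moments privacy accounting a real implementation would need.

```python
# Minimal DP-SGD-style discriminator update: per-sample gradient clipping
# plus calibrated Gaussian noise, the mechanism differentially private GAN
# training builds on. Architecture and hyperparameters are illustrative
# assumptions; privacy accounting is omitted.
import torch
import torch.nn as nn

CLIP_NORM = 1.0         # per-sample gradient clipping bound (assumption)
NOISE_MULTIPLIER = 1.1  # noise scale relative to CLIP_NORM (assumption)

discriminator = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 1))
optimizer = torch.optim.SGD(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()


def dp_discriminator_step(batch: torch.Tensor, labels: torch.Tensor) -> None:
    """One noisy update; `labels` are float targets in {0., 1.} of shape (batch,)."""
    params = [p for p in discriminator.parameters() if p.requires_grad]
    summed = [torch.zeros_like(p) for p in params]

    # Microbatch of size 1 so each example's gradient can be clipped
    # individually (simple but slow; real code would vectorize this).
    for x, y in zip(batch, labels):
        loss = loss_fn(discriminator(x.unsqueeze(0)), y.view(1, 1))
        grads = torch.autograd.grad(loss, params)
        total_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = torch.clamp(CLIP_NORM / (total_norm + 1e-6), max=1.0)
        for acc, g in zip(summed, grads):
            acc.add_(g * scale)

    # Add noise calibrated to the clipping bound, average, and step.
    with torch.no_grad():
        for p, acc in zip(params, summed):
            noise = NOISE_MULTIPLIER * CLIP_NORM * torch.randn_like(acc)
            p.grad = (acc + noise) / batch.shape[0]
    optimizer.step()
```

In a GAN setting, only the discriminator touches real patient records, so most
differentially private GAN designs apply the noisy updates to the discriminator
alone and train the generator normally against it.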