Differentially Private Generative Adversarial Networks with Model
Inversion
- URL: http://arxiv.org/abs/2201.03139v1
- Date: Mon, 10 Jan 2022 02:26:26 GMT
- Title: Differentially Private Generative Adversarial Networks with Model
Inversion
- Authors: Dongjie Chen, Sen-ching Samson Cheung, Chen-Nee Chuah, Sally Ozonoff
- Abstract summary: To protect sensitive data in training a Generative Adversarial Network (GAN), the standard approach is to use a differentially private (DP) stochastic gradient descent method.
We propose the Differentially Private Model Inversion (DPMI) method, in which the private data is first mapped to the latent space via a public generator.
Our approach outperforms the standard DP-GAN method based on Inception Score, Fréchet Inception Distance, and classification accuracy under the same privacy guarantee.
- Score: 6.651002556438805
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To protect sensitive data in training a Generative Adversarial Network (GAN),
the standard approach is to use a differentially private (DP) stochastic gradient
descent method in which controlled noise is added to the gradients. The quality
of the output synthetic samples can be adversely affected, and the training of
the network may not even converge in the presence of this noise. We propose
the Differentially Private Model Inversion (DPMI) method, in which the private data is
first mapped to the latent space via a public generator, followed by a
lower-dimensional DP-GAN with better convergence properties. Experimental
results on the standard datasets CIFAR10 and SVHN, as well as on a facial landmark
dataset for Autism screening, show that our approach outperforms the standard
DP-GAN method based on Inception Score, Fréchet Inception Distance, and
classification accuracy under the same privacy guarantee.
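As a concrete illustration of the model-inversion step, the sketch below recovers a latent code for each private sample by minimizing the reconstruction error against a pretrained public generator. This is a minimal sketch assuming a PyTorch generator module; the optimizer, learning rate, and step count are illustrative assumptions, not the paper's reported settings.

```python
import torch

def invert_to_latent(generator, x_private, latent_dim=100, steps=500, lr=1e-2):
    """Map private samples into the latent space of a public generator by
    minimizing ||G(z) - x||^2 over z (illustrative sketch; the paper's exact
    loss and hyperparameters may differ)."""
    z = torch.randn(x_private.size(0), latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((generator(z) - x_private) ** 2).mean()
        loss.backward()
        opt.step()
    return z.detach()  # lower-dimensional codes on which a DP-GAN is then trained
```

The recovered codes then stand in for raw images as the DP-GAN's training data, so the DP noise is injected in a space of much lower dimension, which is the intuition behind the better convergence claimed above.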
Related papers
- Rethinking Improved Privacy-Utility Trade-off with Pre-existing Knowledge for DP Training [31.559864332056648]
We propose a generic differential privacy framework with heterogeneous noise (DP-Hero).
Atop DP-Hero, we instantiate a heterogeneous version of DP-SGD, in which the noise injected into gradient updates is non-uniform and guided by prior-established model parameters.
We conduct comprehensive experiments to verify and explain the effectiveness of the proposed DP-Hero, showing improved training accuracy compared with state-of-the-art works.
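A rough sketch of what a heterogeneous-noise DP-SGD step could look like is given below. The layer-wise noise rule (scaling noise by the magnitudes of a prior model's weights) is a hypothetical heuristic used purely for illustration; DP-Hero's published mechanism and privacy accounting differ.

```python
import torch

def hetero_dp_sgd_step(params, grads, prior_params, clip_norm=1.0,
                       base_sigma=1.0, lr=0.1):
    """One DP-SGD-style update with heterogeneous noise. Gradients are
    globally clipped, then each layer receives noise whose scale depends on
    a prior model's weights (assumed rule, for illustration only)."""
    total = torch.sqrt(sum(g.pow(2).sum() for g in grads))
    scale = (clip_norm / (total + 1e-12)).clamp(max=1.0)   # global clipping
    for p, g, prior in zip(params, grads, prior_params):
        sigma = base_sigma / (prior.abs().mean() + 1e-6)   # layer-wise scale (assumed)
        noisy_g = g * scale + sigma * clip_norm * torch.randn_like(g)
        p.data.add_(noisy_g, alpha=-lr)
```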
arXiv Detail & Related papers (2024-09-05T08:40:54Z)
- LLM-based Privacy Data Augmentation Guided by Knowledge Distillation with a Distribution Tutor for Medical Text Classification [67.92145284679623]
We propose a DP-based tutor that models the noised private distribution and controls sample generation at a low privacy cost.
We theoretically analyze our model's privacy protection and empirically verify its effectiveness.
arXiv Detail & Related papers (2024-02-26T11:52:55Z)
- Domain Generalization Guided by Gradient Signal to Noise Ratio of Parameters [69.24377241408851]
Overfitting to the source domain is a common issue in gradient-based training of deep neural networks.
We propose to base the selection on the gradient-signal-to-noise ratio (GSNR) of the network's parameters.
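GSNR is commonly defined per parameter as the squared mean of the per-sample gradients divided by their variance; a minimal NumPy sketch of that quantity follows (the array layout is an assumption for illustration):

```python
import numpy as np

def gsnr(per_sample_grads, eps=1e-12):
    """Gradient signal-to-noise ratio per parameter: squared mean of the
    per-sample gradients divided by their variance across samples.
    per_sample_grads: shape (num_samples, num_params)."""
    mean = per_sample_grads.mean(axis=0)
    var = per_sample_grads.var(axis=0)
    return mean ** 2 / (var + eps)  # high GSNR = consistent gradient direction
```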
arXiv Detail & Related papers (2023-10-11T10:21:34Z)
- DPGOMI: Differentially Private Data Publishing with Gaussian Optimized Model Inversion [8.204115285718437]
We propose Differentially Private Data Publishing with Gaussian Optimized Model Inversion (DPGOMI) to address this issue.
Our approach involves mapping private data to the latent space using a public generator, followed by a lower-dimensional DP-GAN with better convergence properties.
Our results show that DPGOMI outperforms the standard DP-GAN method in terms of Inception Score, Fréchet Inception Distance, and classification performance.
arXiv Detail & Related papers (2023-10-06T18:46:22Z)
- Don't Generate Me: Training Differentially Private Generative Models with Sinkhorn Divergence [73.14373832423156]
We propose DP-Sinkhorn, a novel optimal transport-based generative method for learning data distributions from private data with differential privacy.
Unlike existing approaches for training differentially private generative models, we do not rely on adversarial objectives.
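For reference, the core of the Sinkhorn machinery is a fixed-point iteration for entropy-regularized optimal transport; below is a textbook NumPy sketch between two point clouds with uniform weights. DP-Sinkhorn's actual semi-debiased divergence and privacy mechanism are more involved than this.

```python
import numpy as np

def sinkhorn_cost(x, y, epsilon=0.1, n_iters=200):
    """Entropy-regularized OT cost between point clouds x (n,d) and y (m,d)
    with uniform weights -- a generic Sinkhorn sketch, not DP-Sinkhorn's
    debiased, differentially private estimator."""
    cost = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)  # pairwise sq. distances
    K = np.exp(-cost / epsilon)                            # Gibbs kernel
    a, b = np.full(len(x), 1 / len(x)), np.full(len(y), 1 / len(y))
    u = np.ones(len(x))
    for _ in range(n_iters):                               # alternating scaling updates
        v = b / (K.T @ u)
        u = a / (K @ v)
    plan = u[:, None] * K * v[None, :]                     # transport plan
    return (plan * cost).sum()
```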
arXiv Detail & Related papers (2021-11-01T18:10:21Z)
- NeuralDP Differentially private neural networks by design [61.675604648670095]
We propose NeuralDP, a technique for privatising the activations of some layer within a neural network.
We experimentally demonstrate on two datasets that our method offers substantially improved privacy-utility trade-offs compared to DP-SGD.
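One plausible reading of layer-level privatization is to bound each example's activation norm and add calibrated Gaussian noise at that layer. The sketch below shows that idea only; it is not NeuralDP's exact mechanism or its privacy accounting.

```python
import torch

def privatize_activations(h, clip=1.0, sigma=1.0):
    """Clip each example's activation vector to L2 norm <= clip, then add
    Gaussian noise (an illustrative take on privatizing a layer's output;
    NeuralDP's actual mechanism differs in detail)."""
    flat = h.flatten(start_dim=1)                       # (batch, features)
    norms = flat.norm(dim=1, keepdim=True).clamp(min=1e-12)
    clipped = flat * (clip / norms).clamp(max=1.0)      # per-example clipping
    noisy = clipped + sigma * clip * torch.randn_like(clipped)
    return noisy.view_as(h)
```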
arXiv Detail & Related papers (2021-07-30T12:40:19Z)
- RDP-GAN: A Rényi-Differential Privacy based Generative Adversarial Network [75.81653258081435]
Generative adversarial networks (GANs) have attracted increasing attention recently owing to their impressive ability to generate realistic samples with high privacy protection.
However, when GANs are applied to sensitive or private training examples, such as medical or financial records, they may still divulge individuals' sensitive and private information.
We propose a Rényi-differentially private GAN (RDP-GAN), which achieves differential privacy (DP) in a GAN by carefully adding random noise to the value of the loss function during training.
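A minimal sketch of loss-value perturbation follows, assuming per-sample losses are clipped to bound their sensitivity before Gaussian noise is added; RDP-GAN's calibrated noise scale and Rényi-DP accounting are not reproduced here.

```python
import torch

def perturb_losses(per_sample_losses, clip=1.0, sigma=0.1):
    """Clip each sample's loss to bound its sensitivity, then add Gaussian
    noise before aggregation (illustrative; RDP-GAN calibrates the noise
    via Renyi-DP accounting)."""
    clipped = per_sample_losses.clamp(-clip, clip)
    noisy = clipped + sigma * clip * torch.randn_like(clipped)
    return noisy.mean()
```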
arXiv Detail & Related papers (2020-07-04T09:51:02Z)
- Differentially Private Federated Learning with Laplacian Smoothing [72.85272874099644]
Federated learning aims to protect data privacy by collaboratively learning a model without sharing private data among users.
An adversary may still be able to infer the private training data by attacking the released model.
Differential privacy provides a statistical protection against such attacks at the price of significantly degrading the accuracy or utility of the trained models.
arXiv Detail & Related papers (2020-05-01T04:28:38Z)
- DP-CGAN: Differentially Private Synthetic Data and Label Generation [18.485995499841]
We introduce a Differentially Private Conditional GAN (DP-CGAN) training framework based on a new clipping and perturbation strategy.
We show that DP-CGAN can generate visually and empirically promising results on the MNIST dataset with a single-digit epsilon parameter in differential privacy.
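The clipping-and-perturbation recipe underlying DP-SGD-style GAN training looks roughly like the sketch below, applied to the discriminator's per-sample gradients; DP-CGAN's specific improved strategy (and its use of a conditional generator) is not reproduced here.

```python
import torch

def dp_clip_and_perturb(param, per_sample_grads, clip=1.0, sigma=1.0, lr=0.05):
    """Clip each sample's gradient to L2 norm <= clip, sum, add Gaussian
    noise, and average -- the generic DP-SGD update (DP-CGAN's actual
    clipping strategy refines this).
    per_sample_grads: shape (batch, *param.shape)."""
    flat = per_sample_grads.flatten(start_dim=1)
    norms = flat.norm(dim=1).clamp(min=1e-12)              # per-sample norms
    scale = (clip / norms).clamp(max=1.0)
    shape = (-1,) + (1,) * (per_sample_grads.dim() - 1)
    clipped = per_sample_grads * scale.view(shape)
    noisy = clipped.sum(dim=0) + sigma * clip * torch.randn_like(param)
    param.data -= lr * noisy / per_sample_grads.size(0)
```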
arXiv Detail & Related papers (2020-01-27T11:26:58Z)