On the Privacy Properties of GAN-generated Samples
- URL: http://arxiv.org/abs/2206.01349v1
- Date: Fri, 3 Jun 2022 00:29:35 GMT
- Title: On the Privacy Properties of GAN-generated Samples
- Authors: Zinan Lin, Vyas Sekar, Giulia Fanti
- Abstract summary: We show that GAN-generated samples inherently satisfy some (weak) privacy guarantees.
We also study the robustness of GAN-generated samples to membership inference attacks.
- Score: 12.765060550622422
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The privacy implications of generative adversarial networks (GANs) are a topic of great interest, leading to several recent algorithms for training GANs with privacy guarantees. By drawing connections to the generalization properties of GANs, we prove that under some assumptions, GAN-generated samples inherently satisfy some (weak) privacy guarantees. First, we show that if a GAN is trained on m samples and used to generate n samples, the generated samples are (epsilon, delta)-differentially private for (epsilon, delta) pairs where delta scales as O(n/m). We show that under some special conditions, this upper bound is tight. Next, we study the robustness of GAN-generated samples to membership inference attacks. We model membership inference as a hypothesis test in which the adversary must determine whether a given sample was drawn from the training dataset or from the underlying data distribution. We show that this adversary can achieve an area under the ROC curve that scales no better than O(m^{-1/4}).
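To make the hypothesis-test framing concrete, below is a minimal, self-contained sketch. It is a toy construction under stated assumptions (1-D Gaussian data, a memorizing stand-in for a trained GAN, and a nearest-neighbor attack statistic), not code or the exact adversary from the paper; names such as `toy_gan_samples` and `attack_score` are illustrative only. Qualitatively, with the number of released samples n held fixed, the attack's AUC drifts back toward 0.5 as the training-set size m grows, which is the regime where the paper's O(n/m) delta bound and O(m^{-1/4}) AUC bound become meaningful.

```python
# Toy illustration of membership inference as a hypothesis test against released
# GAN samples (assumptions: 1-D Gaussian data, a memorizing "generator" stand-in,
# and a nearest-neighbor attack statistic; none of this is from the paper's code).
import numpy as np

rng = np.random.default_rng(0)

def roc_auc(member_scores, nonmember_scores):
    """Empirical AUC: probability a random member outscores a random non-member
    (ties counted as 1/2)."""
    pos = np.asarray(member_scores)[:, None]
    neg = np.asarray(nonmember_scores)[None, :]
    return np.mean(pos > neg) + 0.5 * np.mean(pos == neg)

def toy_gan_samples(train, n):
    """Stand-in generator: re-release n training points with small Gaussian noise,
    mimicking a GAN that partially memorizes its training set."""
    idx = rng.integers(0, len(train), size=n)
    return train[idx] + 0.01 * rng.standard_normal(n)

def attack_score(points, generated):
    """Adversary's test statistic: negative distance to the nearest released sample.
    Being close to a released sample is evidence for membership (H1) over H0."""
    return -np.min(np.abs(generated[None, :] - points[:, None]), axis=1)

n = 100  # number of generated samples released to the adversary
for m in (100, 1_000, 10_000):  # training-set sizes
    train = rng.standard_normal(m)                    # training set ~ data distribution
    members = train[rng.integers(0, m, size=1_000)]   # challenge points from the training set
    nonmembers = rng.standard_normal(1_000)           # fresh points from the data distribution
    generated = toy_gan_samples(train, n)
    auc = roc_auc(attack_score(members, generated), attack_score(nonmembers, generated))
    # The paper's guarantees: delta scales as O(n/m), and the attack AUC's excess over
    # 0.5 vanishes at rate O(m^{-1/4}); this toy only shows the qualitative trend.
    print(f"m={m:6d}  n/m={n/m:.3f}  attack AUC={auc:.3f}")
```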
Related papers
- Self-Guided Generation of Minority Samples Using Diffusion Models [57.319845580050924]
We present a novel approach for generating minority samples that live on low-density regions of a data manifold.
Our framework is built upon diffusion models, leveraging the principle of guided sampling.
Experiments on benchmark real datasets demonstrate that our approach can greatly improve the capability of creating realistic low-likelihood minority instances.
arXiv Detail & Related papers (2024-07-16T10:03:29Z)
- A Privacy-Preserving Walk in the Latent Space of Generative Models for Medical Applications [11.39717289910264]
Generative Adversarial Networks (GANs) have demonstrated their ability to generate synthetic samples that match a target distribution.
GANs tend to embed near-duplicates of real samples in the latent space.
We propose a latent space navigation strategy able to generate diverse synthetic samples that may support effective training of deep models (a rough sketch of this idea appears after the related-papers list).
arXiv Detail & Related papers (2023-07-06T13:35:48Z)
- Joint Bayesian Inference of Graphical Structure and Parameters with a Single Generative Flow Network [59.79008107609297]
We propose in this paper to approximate the joint posterior over both the structure and the parameters of a Bayesian Network.
We use a single GFlowNet whose sampling policy follows a two-phase process.
Since the parameters are included in the posterior distribution, this leaves more flexibility for the local probability models.
arXiv Detail & Related papers (2023-05-30T19:16:44Z)
- Breaking the Spurious Causality of Conditional Generation via Fairness Intervention with Corrective Sampling [77.15766509677348]
Conditional generative models often inherit spurious correlations from the training dataset.
This can result in label-conditional distributions that are imbalanced with respect to another latent attribute.
We propose a general two-step strategy to mitigate this issue.
arXiv Detail & Related papers (2022-12-05T08:09:33Z)
- Selectively increasing the diversity of GAN-generated samples [8.980453507536017]
We propose a novel method to selectively increase the diversity of GAN-generated samples.
We show the superiority of our method in a synthetic benchmark as well as a real-life scenario simulating data from the Zero Degree Calorimeter of the ALICE experiment at CERN.
arXiv Detail & Related papers (2022-07-04T16:27:06Z)
- Generative Models with Information-Theoretic Protection Against Membership Inference Attacks [6.840474688871695]
Deep generative models, such as Generative Adversarial Networks (GANs), synthesize diverse high-fidelity data samples.
GANs may disclose private information from the data they are trained on, making them susceptible to adversarial attacks.
We propose an information theoretically motivated regularization term that prevents the generative model from overfitting to training data and encourages generalizability.
arXiv Detail & Related papers (2022-05-31T19:29:55Z)
- imdpGAN: Generating Private and Specific Data with Generative Adversarial Networks [19.377726080729293]
imdpGAN is an end-to-end framework that simultaneously achieves privacy protection and learns latent representations.
We show that imdpGAN preserves the privacy of the individual data point, and learns latent codes to control the specificity of the generated samples.
arXiv Detail & Related papers (2020-09-29T08:03:32Z)
- RDP-GAN: A Rényi-Differential Privacy based Generative Adversarial Network [75.81653258081435]
Generative adversarial networks (GANs) have attracted increasing attention recently owing to their impressive ability to generate realistic samples with high privacy protection.
However, when GANs are applied to sensitive or private training examples, such as medical or financial records, they may still divulge individuals' sensitive and private information.
We propose a Rényi-differentially private GAN (RDP-GAN), which achieves differential privacy (DP) in a GAN by carefully adding random noise to the value of the loss function during training.
arXiv Detail & Related papers (2020-07-04T09:51:02Z)
- Discriminator Contrastive Divergence: Semi-Amortized Generative Modeling by Exploring Energy of the Discriminator [85.68825725223873]
Generative Adversarial Networks (GANs) have shown great promise in modeling high dimensional data.
We introduce the Discriminator Contrastive Divergence, which is well motivated by the property of WGAN's discriminator.
We demonstrate the benefits of significantly improved generation on both synthetic data and several real-world image generation benchmarks.
arXiv Detail & Related papers (2020-04-05T01:50:16Z)
- GANs with Conditional Independence Graphs: On Subadditivity of Probability Divergences [70.30467057209405]
Generative Adversarial Networks (GANs) are modern methods to learn the underlying distribution of a data set.
GANs are designed in a model-free fashion where no additional information about the underlying distribution is available.
We propose a principled design of a model-based GAN that uses a set of simple discriminators on the neighborhoods of the Bayes-net/MRF.
arXiv Detail & Related papers (2020-03-02T04:31:22Z)
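As a rough illustration of the latent-space navigation idea in the privacy-preserving-walk paper above (again a toy under simplifying assumptions, not the authors' strategy): a random walk through a generator's latent space releases a point only if its output is not a near-duplicate of any real training sample. The generator, distance threshold, and function names below are hypothetical stand-ins.

```python
# Toy sketch (not the method from "A Privacy-Preserving Walk in the Latent Space of
# Generative Models for Medical Applications"): walk through a generator's latent space
# and keep only points whose outputs are not near-duplicates of real training samples.
import numpy as np

rng = np.random.default_rng(1)

def toy_generator(z, W):
    """Stand-in generator: a fixed nonlinear map from latent space to data space."""
    return np.tanh(z @ W)

def min_distance_to_real(x, real_data):
    """Distance from a generated point to its nearest real training sample."""
    return np.min(np.linalg.norm(real_data - x, axis=1))

def privacy_aware_walk(W, real_data, steps=200, step_size=0.3, min_dist=0.05):
    """Random walk in latent space; reject outputs that fall too close to real samples,
    so released synthetic points are diverse rather than near-duplicates."""
    latent_dim = W.shape[0]
    z = rng.standard_normal(latent_dim)
    kept = []
    for _ in range(steps):
        z = z + step_size * rng.standard_normal(latent_dim)
        x = toy_generator(z, W)
        if min_distance_to_real(x, real_data) >= min_dist:
            kept.append(x)
    return np.array(kept)

latent_dim, data_dim, m = 4, 8, 500
W = rng.standard_normal((latent_dim, data_dim))
real_data = toy_generator(rng.standard_normal((m, latent_dim)), W)  # pretend training set
synthetic = privacy_aware_walk(W, real_data)
print(f"kept {len(synthetic)} of 200 walk steps as sufficiently non-duplicate samples")
```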
This list is automatically generated from the titles and abstracts of the papers on this site.