privGAN: Protecting GANs from membership inference attacks at low cost
- URL: http://arxiv.org/abs/2001.00071v4
- Date: Sun, 13 Dec 2020 18:27:26 GMT
- Title: privGAN: Protecting GANs from membership inference attacks at low cost
- Authors: Sumit Mukherjee, Yixi Xu, Anusua Trivedi, Juan Lavista Ferres
- Abstract summary: Generative Adversarial Networks (GANs) have made releasing synthetic images a viable approach to sharing data without releasing the original dataset.
Recent work has shown that GAN models and their synthetically generated data can be used by an adversary to infer training set membership.
Here we develop a new GAN architecture (privGAN) where the generator is trained not only to cheat the discriminator but also to defend against membership inference attacks.
- Score: 5.735035463793008
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Generative Adversarial Networks (GANs) have made releasing synthetic
images a viable approach to sharing data without releasing the original dataset.
It has been shown that such synthetic data can be used for a variety of
downstream tasks such as training classifiers that would otherwise require the
original dataset to be shared. However, recent work has shown that the GAN
models and their synthetically generated data can be used to infer the training
set membership by an adversary who has access to the entire dataset and some
auxiliary information. Current approaches to mitigate this problem (such as
DPGAN) lead to dramatically poorer generated sample quality than the original
non-private GANs. Here we develop a new GAN architecture (privGAN), where the
generator is trained not only to cheat the discriminator but also to defend
against membership inference attacks. The new mechanism provides protection
against this mode of attack while leading to negligible loss in downstream
performance. In addition, our algorithm has been shown to explicitly prevent
overfitting to the training set, which explains why our protection is so
effective. The main contributions of this paper are: i) we propose a novel GAN
architecture that can generate synthetic data in a privacy preserving manner
without additional hyperparameter tuning and architecture selection, ii) we
provide a theoretical understanding of the optimal solution of the privGAN loss
function, iii) we demonstrate the effectiveness of our model against several
white- and black-box attacks on several benchmark datasets, iv) we demonstrate
on three common benchmark datasets that synthetic images generated by privGAN
lead to negligible loss in downstream performance when compared against
non-private GANs.
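
The abstract does not spell out the privGAN loss, but the core idea of training the generator to fool both its own discriminator and a built-in membership-inference adversary can be illustrated with a small, hypothetical PyTorch sketch. The sketch below assumes two disjoint training splits, one generator/discriminator pair per split, and an auxiliary "privacy discriminator" Dp that tries to guess which split a fake sample came from; the network sizes, the weight lambda_priv, and the exact form of the privacy term are illustrative choices, not values taken from the paper.

```python
# Hypothetical privGAN-style sketch (not the authors' code).
import torch
import torch.nn as nn

torch.manual_seed(0)
z_dim, x_dim, lambda_priv = 16, 32, 1.0  # illustrative sizes and weight

def mlp(in_dim, out_dim):
    return nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, out_dim))

G  = [mlp(z_dim, x_dim) for _ in range(2)]                           # one generator per data split
D  = [nn.Sequential(mlp(x_dim, 1), nn.Sigmoid()) for _ in range(2)]  # one discriminator per split
Dp = mlp(x_dim, 2)                                                   # privacy discriminator: which split?

bce, ce = nn.BCELoss(), nn.CrossEntropyLoss()
opt_G  = torch.optim.Adam([p for g in G for p in g.parameters()], lr=2e-4)
opt_D  = torch.optim.Adam([p for d in D for p in d.parameters()], lr=2e-4)
opt_Dp = torch.optim.Adam(Dp.parameters(), lr=2e-4)

# Toy stand-ins for two disjoint partitions of the real training set.
splits = [torch.randn(256, x_dim), torch.randn(256, x_dim) + 1.0]

for step in range(200):
    for i in range(2):
        real = splits[i][torch.randint(0, 256, (64,))]
        fake = G[i](torch.randn(64, z_dim))

        # 1) Per-split discriminator: separate real from fake.
        d_loss = bce(D[i](real), torch.ones(64, 1)) + bce(D[i](fake.detach()), torch.zeros(64, 1))
        opt_D.zero_grad(); d_loss.backward(); opt_D.step()

        # 2) Privacy discriminator: guess which generator (split) produced the sample.
        dp_loss = ce(Dp(fake.detach()), torch.full((64,), i))
        opt_Dp.zero_grad(); dp_loss.backward(); opt_Dp.step()

        # 3) Generator: fool its own discriminator AND confuse the privacy
        #    discriminator (here by pushing it toward the wrong split; the
        #    paper's exact privacy term may differ).
        g_loss = bce(D[i](fake), torch.ones(64, 1)) \
                 + lambda_priv * ce(Dp(fake), torch.full((64,), 1 - i))
        opt_G.zero_grad(); g_loss.backward(); opt_G.step()
```

Intuitively, if the privacy discriminator cannot tell which split a generated sample came from, the released synthetic data carries little information about any individual training record, which is the property the membership inference attacks in the paper probe.
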
Related papers
- Adversarial Robustification via Text-to-Image Diffusion Models [56.37291240867549]
Adversarial robustness has conventionally been regarded as a challenging property to encode in neural networks.
We develop a scalable and model-agnostic solution to achieve adversarial robustness without using any data.
arXiv Detail & Related papers (2024-07-26T10:49:14Z) - VFLGAN: Vertical Federated Learning-based Generative Adversarial Network for Vertically Partitioned Data Publication [16.055684281505474]
This article proposes a Vertical Federated Learning-based Generative Adversarial Network, VFLGAN, for vertically partitioned data publication.
The quality of the synthetic dataset generated by VFLGAN is 3.2 times better than that generated by VertiGAN.
We also propose a practical auditing scheme that applies membership inference attacks to estimate privacy leakage through the synthetic dataset.
arXiv Detail & Related papers (2024-04-15T12:25:41Z) - Model Stealing Attack against Graph Classification with Authenticity, Uncertainty and Diversity [80.16488817177182]
GNNs are vulnerable to model stealing attacks, in which an adversary with query access attempts to duplicate the target model.
We introduce three model stealing attacks adapted to different real-world scenarios.
arXiv Detail & Related papers (2023-12-18T05:42:31Z) - Preserving Privacy in GANs Against Membership Inference Attack [30.668589815716775]
Generative Adversarial Networks (GANs) have been widely used for generating synthetic data.
Recent works showed that GANs might leak information regarding their training data samples.
This makes GANs vulnerable to Membership Inference Attacks (MIAs).
arXiv Detail & Related papers (2023-11-06T15:04:48Z) - Ownership Protection of Generative Adversarial Networks [9.355840335132124]
Generative adversarial networks (GANs) have shown remarkable success in image synthesis.
It is critical to technically protect the intellectual property of GANs.
We propose a new ownership protection method based on the common characteristics of a target model and its stolen models.
arXiv Detail & Related papers (2023-06-08T14:31:58Z) - PS-FedGAN: An Efficient Federated Learning Framework Based on Partially Shared Generative Adversarial Networks For Data Privacy [56.347786940414935]
Federated Learning (FL) has emerged as an effective learning paradigm for distributed computation.
This work proposes a novel FL framework that requires only partial GAN model sharing.
Named PS-FedGAN, this new framework enhances the GAN releasing and training mechanism to address heterogeneous data distributions.
arXiv Detail & Related papers (2023-05-19T05:39:40Z) - A Linear Reconstruction Approach for Attribute Inference Attacks against Synthetic Data [1.5293427903448022]
We introduce a new attribute inference attack against synthetic data.
We show that our attack can be highly accurate even on arbitrary records.
We then evaluate the tradeoff between protecting privacy and preserving statistical utility.
arXiv Detail & Related papers (2023-01-24T14:56:36Z) - Generative Models with Information-Theoretic Protection Against Membership Inference Attacks [6.840474688871695]
Deep generative models, such as Generative Adversarial Networks (GANs), synthesize diverse high-fidelity data samples.
GANs may disclose private information from the data they are trained on, making them susceptible to adversarial attacks.
We propose an information theoretically motivated regularization term that prevents the generative model from overfitting to training data and encourages generalizability.
arXiv Detail & Related papers (2022-05-31T19:29:55Z) - Deceive D: Adaptive Pseudo Augmentation for GAN Training with Limited Data [125.7135706352493]
Generative adversarial networks (GANs) typically require ample data for training in order to synthesize high-fidelity images.
Recent studies have shown that training GANs with limited data remains formidable due to discriminator overfitting.
This paper introduces a novel strategy called Adaptive Pseudo Augmentation (APA) to encourage healthy competition between the generator and the discriminator.
arXiv Detail & Related papers (2021-11-12T18:13:45Z) - Negative Data Augmentation [127.28042046152954]
We show that negative data augmentation samples provide information on the support of the data distribution.
We introduce a new GAN training objective where we use NDA as an additional source of synthetic data for the discriminator.
Empirically, models trained with our method achieve improved conditional/unconditional image generation along with improved anomaly detection capabilities.
arXiv Detail & Related papers (2021-02-09T20:28:35Z) - Partially Conditioned Generative Adversarial Networks [75.08725392017698]
Generative Adversarial Networks (GANs) let one synthesise artificial datasets by implicitly modelling the underlying probability distribution of a real-world training dataset.
With the introduction of Conditional GANs and their variants, these methods were extended to generating samples conditioned on ancillary information available for each sample within the dataset.
In this work, we argue that standard Conditional GANs are not suitable for such a task and propose a new Adversarial Network architecture and training strategy.
arXiv Detail & Related papers (2020-07-06T15:59:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.