Private GANs, Revisited
- URL: http://arxiv.org/abs/2302.02936v2
- Date: Thu, 5 Oct 2023 04:47:52 GMT
- Title: Private GANs, Revisited
- Authors: Alex Bie, Gautam Kamath, Guojun Zhang
- Abstract summary: We show that the canonical approach for training differentially private GANs can yield significantly improved results after modifications to training.
We show that a simple fix -- taking more discriminator steps between generator steps -- restores parity between the generator and discriminator and improves results.
- Score: 16.570354461039603
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We show that the canonical approach for training differentially private GANs -- updating the discriminator with differentially private stochastic gradient descent (DPSGD) -- can yield significantly improved results after modifications to training. Specifically, we propose that existing instantiations of this approach neglect to consider how adding noise only to discriminator updates inhibits discriminator training, disrupting the balance between the generator and discriminator necessary for successful GAN training. We show that a simple fix -- taking more discriminator steps between generator steps -- restores parity between the generator and discriminator and improves results. Additionally, with the goal of restoring parity, we experiment with other modifications -- namely, large batch sizes and adaptive discriminator update frequency -- to improve discriminator training and see further improvements in generation quality. Our results demonstrate that on standard image synthesis benchmarks, DPSGD outperforms all alternative GAN privatization schemes. Code: https://github.com/alexbie98/dpgan-revisit.
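To make the recipe concrete, here is a minimal PyTorch-style sketch of DPSGD applied only to the discriminator, with several discriminator steps taken between generator steps. This is a sketch under stated assumptions, not the paper's implementation: the non-saturating loss, the hyperparameter names and defaults (n_d, noise_mult, clip_norm), and the naive per-example clipping loop are illustrative choices.

```python
import torch
import torch.nn.functional as F

def dp_discriminator_step(D, G, d_opt, real_batch, z_dim, noise_mult, clip_norm, device):
    """One DPSGD update of the discriminator: per-example gradient clipping on a
    non-saturating real/fake loss, then Gaussian noise added to the gradient sum.
    Only the discriminator touches real data, so only it is privatized."""
    d_opt.zero_grad()
    params = [p for p in D.parameters() if p.requires_grad]
    grad_sum = [torch.zeros_like(p) for p in params]

    z = torch.randn(len(real_batch), z_dim, device=device)
    fake_batch = G(z).detach()

    # Naive per-example loop for clarity; vectorized per-sample gradients
    # (e.g. via Opacus) would be used in practice.
    for x_real, x_fake in zip(real_batch, fake_batch):
        loss = F.softplus(-D(x_real.unsqueeze(0))).mean() + \
               F.softplus(D(x_fake.unsqueeze(0))).mean()
        grads = torch.autograd.grad(loss, params)
        norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = torch.clamp(clip_norm / (norm + 1e-12), max=1.0)
        for acc, g in zip(grad_sum, grads):
            acc.add_(g * scale)

    # Calibrated Gaussian noise, then average over the batch.
    for p, acc in zip(params, grad_sum):
        p.grad = (acc + torch.randn_like(acc) * noise_mult * clip_norm) / len(real_batch)
    d_opt.step()

def generator_step(D, G, g_opt, batch_size, z_dim, device):
    """Ordinary non-private generator update: the generator never sees real data,
    so its privacy follows from post-processing of the DP discriminator."""
    g_opt.zero_grad()
    z = torch.randn(batch_size, z_dim, device=device)
    F.softplus(-D(G(z))).mean().backward()
    g_opt.step()

def train(D, G, d_opt, g_opt, loader, z_dim, n_d=50,
          noise_mult=1.0, clip_norm=1.0, device="cpu"):
    """Core loop: n_d discriminator steps per generator step. Prior instantiations
    effectively used n_d = 1; the abstract's fix is taking many more."""
    for step, real_batch in enumerate(loader, start=1):
        dp_discriminator_step(D, G, d_opt, real_batch.to(device),
                              z_dim, noise_mult, clip_norm, device)
        if step % n_d == 0:
            generator_step(D, G, g_opt, len(real_batch), z_dim, device)
```

The other two modifications named in the abstract map onto this sketch directly: large batch sizes correspond to the loader's batch size, and adaptive discriminator update frequency corresponds to adjusting n_d during training from some measure of discriminator strength rather than fixing it in advance.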
Related papers
- SCP-GAN: Self-Correcting Discriminator Optimization for Training Consistency Preserving Metric GAN on Speech Enhancement Tasks [28.261911789087463]
We introduce several improvements to the GAN training schemes, which can be applied to most GAN-based SE models.
We present self-correcting optimization for training a GAN discriminator on SE tasks, which helps avoid "harmful" training directions.
We have tested our proposed methods on several state-of-the-art GAN-based SE models and obtained consistent improvements.
arXiv Detail & Related papers (2022-10-26T04:48:40Z)
- Improving GANs with A Dynamic Discriminator [106.54552336711997]
We argue that a discriminator with an on-the-fly adjustment on its capacity can better accommodate such a time-varying task.
A comprehensive empirical study confirms that the proposed training strategy, termed as DynamicD, improves the synthesis performance without incurring any additional cost or training objectives.
arXiv Detail & Related papers (2022-09-20T17:57:33Z)
- Augmentation-Aware Self-Supervision for Data-Efficient GAN Training [68.81471633374393]
Training generative adversarial networks (GANs) with limited data is challenging because the discriminator is prone to overfitting.
We propose a novel augmentation-aware self-supervised discriminator that predicts the augmentation parameter of the augmented data.
We compare our method with state-of-the-art (SOTA) methods using the class-conditional BigGAN and unconditional StyleGAN2 architectures.
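A rough sketch of the augmentation-prediction idea described above: a shared backbone with an adversarial head and an auxiliary head trained to identify the applied augmentation. The backbone, the simplified discrete task (predicting which of four rotations was applied, standing in for the parameter prediction in the summary), and the loss weighting are assumptions for illustration, not the paper's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AugAwareDiscriminator(nn.Module):
    """Shared feature extractor with two heads: the usual real/fake logit and
    an auxiliary head predicting the applied augmentation."""
    def __init__(self, backbone, feat_dim, num_augs=4):
        super().__init__()
        self.backbone = backbone              # any module mapping images -> (B, feat_dim)
        self.adv_head = nn.Linear(feat_dim, 1)
        self.aug_head = nn.Linear(feat_dim, num_augs)

    def forward(self, x):
        h = self.backbone(x)
        return self.adv_head(h), self.aug_head(h)

def d_loss_with_aug_prediction(D, x, is_real, lam=1.0):
    """Rotate each image by a random multiple of 90 degrees, then combine the
    adversarial loss with cross-entropy on the rotation label."""
    k = torch.randint(0, 4, (x.size(0),), device=x.device)
    x_aug = torch.stack([torch.rot90(img, int(r), dims=(1, 2)) for img, r in zip(x, k)])
    adv_logit, aug_logit = D(x_aug)
    adv = F.softplus(-adv_logit).mean() if is_real else F.softplus(adv_logit).mean()
    return adv + lam * F.cross_entropy(aug_logit, k)
```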
arXiv Detail & Related papers (2022-05-31T10:35:55Z)
- Revisiting Consistency Regularization for Semi-Supervised Learning [80.28461584135967]
We propose an improved consistency regularization framework by a simple yet effective technique, FeatDistLoss.
Experimental results show that our model defines a new state of the art for various datasets and settings.
arXiv Detail & Related papers (2021-12-10T20:46:13Z)
- Re-using Adversarial Mask Discriminators for Test-time Training under Distribution Shifts [10.647970046084916]
We argue that training stable discriminators produces expressive loss functions that we can re-use at inference to detect and correct segmentation mistakes.
We show that we can combine discriminators with image reconstruction costs (via decoders) to further improve the model.
Our method is simple and improves the test-time performance of pre-trained GANs.
arXiv Detail & Related papers (2021-08-26T17:31:46Z)
- Data-Efficient Instance Generation from Instance Discrimination [40.71055888512495]
We propose a data-efficient Instance Generation (InsGen) method based on instance discrimination.
arXiv Detail & Related papers (2021-06-08T17:52:59Z)
- Training GANs with Stronger Augmentations via Contrastive Discriminator [80.8216679195]
We introduce a contrastive representation learning scheme into the GAN discriminator, coined ContraD.
This "fusion" enables the discriminators to work with much stronger augmentations without increasing their training instability.
Our experimental results show that GANs with ContraD consistently improve FID and IS compared to other recent techniques incorporating data augmentations.
arXiv Detail & Related papers (2021-03-17T16:04:54Z)
- RDP-GAN: A Rényi-Differential Privacy based Generative Adversarial Network [75.81653258081435]
Generative adversarial network (GAN) has attracted increasing attention recently owing to its impressive ability to generate realistic samples with high privacy protection.
However, when GANs are applied to sensitive or private training examples, such as medical or financial records, they may still divulge individuals' sensitive and private information.
We propose a Rényi-differentially private GAN (RDP-GAN), which achieves differential privacy (DP) in a GAN by carefully adding random noise to the value of the loss function during training.
arXiv Detail & Related papers (2020-07-04T09:51:02Z)
- When Relation Networks meet GANs: Relation GANs with Triplet Loss [110.7572918636599]
Training stability is still a lingering concern of generative adversarial networks (GANs).
In this paper, we explore a relation network architecture for the discriminator and design a triplet loss which performs better generalization and stability.
Experiments on benchmark datasets show that the proposed relation discriminator and new loss can provide significant improvements on various vision tasks (a minimal triplet-loss sketch follows this entry).
arXiv Detail & Related papers (2020-02-24T11:35:28Z)
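As a rough illustration of the triplet loss mentioned in the Relation GANs entry above (the relation-network discriminator itself is omitted; the margin, the embedding function embed, and the anchor/positive/negative sampling are placeholder assumptions, not the paper's design):

```python
import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Standard triplet loss on embeddings: pull the anchor toward the positive
    and push it at least `margin` further away from the negative."""
    d_pos = F.pairwise_distance(anchor, positive)
    d_neg = F.pairwise_distance(anchor, negative)
    return F.relu(d_pos - d_neg + margin).mean()

# Illustrative GAN usage: embed(.) is a placeholder feature extractor shared with
# the discriminator; two real images act as anchor/positive, a generated image as
# the negative.
#   loss_d = triplet_loss(embed(x_real_a), embed(x_real_b), embed(G(z).detach()))
```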
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.