Training GANs with Stronger Augmentations via Contrastive Discriminator
- URL: http://arxiv.org/abs/2103.09742v1
- Date: Wed, 17 Mar 2021 16:04:54 GMT
- Title: Training GANs with Stronger Augmentations via Contrastive Discriminator
- Authors: Jongheon Jeong and Jinwoo Shin
- Abstract summary: We introduce a contrastive representation learning scheme into the GAN discriminator, coined ContraD.
This "fusion" enables the discriminators to work with much stronger augmentations without increasing their training instability.
Our experimental results show that GANs with ContraD consistently improve FID and IS compared to other recent techniques incorporating data augmentations.
- Score: 80.8216679195
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent works in Generative Adversarial Networks (GANs) are actively
revisiting various data augmentation techniques as an effective way to prevent
discriminator overfitting. It remains unclear, however, which augmentations
actually improve GANs and, in particular, how to apply a wider range of
augmentations in training. In this paper, we propose a novel way
to address these questions by incorporating a recent contrastive representation
learning scheme into the GAN discriminator, coined ContraD. This "fusion"
enables the discriminators to work with much stronger augmentations without
increasing their training instability, thereby preventing the discriminator
overfitting issue in GANs more effectively. Even better, we observe that the
contrastive learning itself also benefits from our GAN training, i.e., by
maintaining discriminative features between real and fake samples, suggesting a
strong coherence between the two worlds: good contrastive representations are
also good for GAN discriminators, and vice versa. Our experimental results show
that GANs with ContraD consistently improve FID and IS compared to other recent
techniques incorporating data augmentations, still maintaining highly
discriminative features in the discriminator in terms of the linear evaluation.
Finally, as a byproduct, we also show that our GANs trained in an unsupervised
manner (without labels) can induce many conditional generative models via a
simple latent sampling, leveraging the learned features of ContraD. Code is
available at https://github.com/jh-jeong/ContraD.
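To make the "fusion" concrete, below is a minimal, self-contained PyTorch sketch of a ContraD-style discriminator update: the encoder is trained with a SimCLR-style contrastive loss over two strongly augmented views of the real batch, while a small GAN head is trained on stop-gradient features. This is a simplification of the paper's full objective (which also includes a contrastive term over fake samples), and the names `encoder`, `proj_head`, `disc_head`, `strong_aug`, and `nt_xent` are illustrative; see the linked repository for the authors' implementation.

```python
# Illustrative ContraD-style discriminator loss (simplified sketch, not the
# authors' exact objective). Toy MLP modules; 32x32 RGB inputs assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F

def nt_xent(z1, z2, tau=0.1):
    """SimCLR's NT-Xent loss for two batches of projected views (B, d)."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)                # (2B, d)
    sim = z @ z.t() / tau                                      # scaled cosine sims
    mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float("-inf"))                 # drop self-pairs
    pos = torch.arange(len(z), device=z.device).roll(len(z) // 2)
    return F.cross_entropy(sim, pos)                           # positive: i <-> i+B

# Toy modules standing in for the real networks.
encoder   = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256), nn.ReLU())
proj_head = nn.Linear(256, 128)  # contrastive projection head
disc_head = nn.Linear(256, 1)    # small GAN head on top of the encoder

def d_loss(real, fake, strong_aug):
    # Contrastive term: the encoder is trained on two *strong* augmentations
    # of the real batch, SimCLR-style. (The paper adds a further contrastive
    # term over fake samples, omitted here for brevity.)
    h1, h2 = encoder(strong_aug(real)), encoder(strong_aug(real))
    loss_con = nt_xent(proj_head(h1), proj_head(h2))
    # Adversarial term: the GAN head sees stop-gradient features, so the
    # adversarial signal cannot destabilize the strongly-augmented encoder.
    h_fake = encoder(strong_aug(fake))
    logit_r, logit_f = disc_head(h1.detach()), disc_head(h_fake.detach())
    return loss_con + F.relu(1 - logit_r).mean() + F.relu(1 + logit_f).mean()

# Smoke test with random tensors and an identity "augmentation".
real, fake = torch.randn(8, 3, 32, 32), torch.randn(8, 3, 32, 32)
print(d_loss(real, fake, strong_aug=lambda x: x).item())
```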
Related papers
- Unilaterally Aggregated Contrastive Learning with Hierarchical Augmentation for Anomaly Detection [64.50126371767476]
We propose Unilaterally Aggregated Contrastive Learning with Hierarchical Augmentation (UniCon-HA)
We explicitly encourage the concentration of inliers and the dispersion of virtual outliers via supervised and unsupervised contrastive losses.
Our method is evaluated under three AD settings including unlabeled one-class, unlabeled multi-class, and labeled multi-class.
arXiv Detail & Related papers (2023-08-20T04:01:50Z)
- Private GANs, Revisited [16.570354461039603]
We show that the canonical approach to training differentially private GANs can yield significantly improved results after modifications to training.
In particular, a simple fix -- taking more discriminator steps between generator steps -- restores parity between the generator and the discriminator and improves results (a minimal loop sketch appears after this list).
arXiv Detail & Related papers (2023-02-06T17:11:09Z)
- Improving GANs with A Dynamic Discriminator [106.54552336711997]
We argue that a discriminator with an on-the-fly adjustment on its capacity can better accommodate such a time-varying task.
A comprehensive empirical study confirms that the proposed training strategy, termed DynamicD, improves the synthesis performance without incurring any additional cost or training objectives.
arXiv Detail & Related papers (2022-09-20T17:57:33Z)
- Augmentation-Aware Self-Supervision for Data-Efficient GAN Training [68.81471633374393]
Training generative adversarial networks (GANs) with limited data is challenging because the discriminator is prone to overfitting.
We propose a novel augmentation-aware self-supervised discriminator that predicts the augmentation parameter of the augmented data (a two-head sketch appears after this list).
We compare our method with state-of-the-art (SOTA) methods using the class-conditional BigGAN and unconditional StyleGAN2 architectures.
arXiv Detail & Related papers (2022-05-31T10:35:55Z)
- Reusing the Task-specific Classifier as a Discriminator: Discriminator-free Adversarial Domain Adaptation [55.27563366506407]
We introduce a discriminator-free adversarial learning network (DALN) for unsupervised domain adaptation (UDA)
DALN achieves explicit domain alignment and category distinguishment through a unified objective.
DALN compares favorably against the existing state-of-the-art (SOTA) methods on a variety of public datasets.
arXiv Detail & Related papers (2022-04-08T04:40:18Z)
- Re-using Adversarial Mask Discriminators for Test-time Training under Distribution Shifts [10.647970046084916]
We argue that training stable discriminators produces expressive loss functions that we can re-use at inference to detect and correct segmentation mistakes.
We show that we can combine discriminators with image reconstruction costs (via decoders) to further improve the model.
Our method is simple and improves the test-time performance of pre-trained GANs.
arXiv Detail & Related papers (2021-08-26T17:31:46Z)
- Hybrid Generative-Contrastive Representation Learning [32.84066504783469]
We show that a transformer-based encoder-decoder architecture trained with both contrastive and generative losses can learn highly discriminative and robust representations without hurting the generative performance.
arXiv Detail & Related papers (2021-06-11T04:23:48Z)
- Data-Efficient Instance Generation from Instance Discrimination [40.71055888512495]
We propose a data-efficient Instance Generation (InsGen) method based on instance discrimination.
arXiv Detail & Related papers (2021-06-08T17:52:59Z)
- When Relation Networks meet GANs: Relation GANs with Triplet Loss [110.7572918636599]
Training stability is still a lingering concern of generative adversarial networks (GANs)
In this paper, we explore a relation network architecture for the discriminator and design a triplet loss that improves generalization and stability (see the triplet-loss sketch after this list).
Experiments on benchmark datasets show that the proposed relation discriminator and new loss yield significant improvements on various vision tasks.
arXiv Detail & Related papers (2020-02-24T11:35:28Z)
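The discriminator-step fix from "Private GANs, Revisited" (referenced above) is easy to picture as a training loop. A minimal sketch, where `d_step` and `g_step` are hypothetical update functions and the paper's differentially-private optimizer (DP-SGD) is omitted:

```python
from itertools import cycle

def train_gan(loader, d_step, g_step, n_gen_steps, n_disc=5):
    """Take n_disc discriminator steps for every generator step."""
    batches = cycle(loader)          # iterate over real data indefinitely
    for _ in range(n_gen_steps):
        for _ in range(n_disc):      # extra D steps restore D/G parity
            d_step(next(batches))    # each D update sees a fresh real batch
        g_step()                     # one G update per n_disc D updates
```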
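The augmentation-aware self-supervised discriminator (from the data-efficient GAN training entry above) can be pictured as a shared backbone with two heads: the usual real/fake logit plus a head that regresses the parameters of the augmentation applied to its input. A minimal sketch; the 4-dimensional parameter vector and all module names are assumptions, not the paper's parameterization:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256), nn.ReLU())
adv_head = nn.Linear(256, 1)  # real/fake logit
aug_head = nn.Linear(256, 4)  # assumed params, e.g. (dx, dy, flip, brightness)

def d_loss(x_aug, aug_params, is_real):
    """x_aug: augmented batch; aug_params: the parameters that produced it."""
    h = backbone(x_aug)
    sign = 1.0 if is_real else -1.0
    adv = F.relu(1 - sign * adv_head(h)).mean()  # hinge adversarial loss
    # Auxiliary task: recover the augmentation parameters, which discourages
    # the discriminator from simply memorizing the augmented copies.
    aux = F.mse_loss(aug_head(h), aug_params)
    return adv + aux

# Smoke test with random data and random "augmentation parameters".
x, p = torch.randn(8, 3, 32, 32), torch.rand(8, 4)
print(d_loss(x, p, is_real=True).item())
```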
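For the relation discriminator above, the triplet loss itself is standard. A minimal sketch, assuming an embedding function `f` and one plausible pairing (two real samples as anchor/positive, a fake sample as negative); the paper's exact relation pairing may differ:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def triplet_d_loss(f, real_a, real_p, fake_n, margin=1.0):
    """Pull embeddings of two real samples together and push a fake sample
    at least `margin` farther from the anchor than the positive is."""
    za, zp, zn = f(real_a), f(real_p), f(fake_n)
    d_pos = F.pairwise_distance(za, zp)  # real-real distance
    d_neg = F.pairwise_distance(za, zn)  # real-fake distance
    return F.relu(d_pos - d_neg + margin).mean()

# Smoke test: a toy embedding network on 32x32 RGB batches.
f = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 64))
a, p, n = (torch.randn(8, 3, 32, 32) for _ in range(3))
print(triplet_d_loss(f, a, p, n).item())
```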