cGANs with Auxiliary Discriminative Classifier
- URL: http://arxiv.org/abs/2107.10060v2
- Date: Thu, 22 Jul 2021 06:16:51 GMT
- Title: cGANs with Auxiliary Discriminative Classifier
- Authors: Liang Hou, Qi Cao, Huawei Shen, Xueqi Cheng
- Abstract summary: Conditional generative models aim to learn the underlying joint distribution of data and labels.
Auxiliary classifier generative adversarial networks (AC-GAN) have been widely used but suffer from low intra-class diversity in generated samples.
We propose novel cGANs with an auxiliary discriminative classifier (ADC-GAN) to address this issue of AC-GAN.
- Score: 43.78253518292111
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Conditional generative models aim to learn the underlying joint distribution
of data and labels, and thus realize conditional generation. Among them,
auxiliary classifier generative adversarial networks (AC-GAN) have been widely
used, but suffer from the issue of low intra-class diversity on generated
samples. In this paper, we point out that the fundamental reason is that the
classifier of AC-GAN is generator-agnostic, and thus cannot provide informative
guidance to the generator to approximate the target joint distribution, leading
to a minimization of conditional entropy that decreases the intra-class
diversity. Based on this finding, we propose novel cGANs with auxiliary
discriminative classifier (ADC-GAN) to address the issue of AC-GAN.
Specifically, the auxiliary discriminative classifier becomes generator-aware
by distinguishing between the real and fake data while recognizing their
labels. We then optimize the generator based on the auxiliary classifier along
with the original discriminator to match the joint and marginal distributions
of the generated samples with those of the real samples. We provide theoretical
analysis and empirical evidence on synthetic and real-world datasets to
demonstrate the superiority of the proposed ADC-GAN compared to competitive
cGANs.
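The discriminative classifier described above can be sketched minimally: assuming K classes, it predicts over 2K joint targets, the first K for (real, class k) and the last K for (fake, class k), so it must distinguish real from fake data while recognizing labels. The function names below are illustrative, not taken from the paper's code.

```python
import numpy as np

def adc_target(label: int, is_real: bool, num_classes: int) -> int:
    """Map a (label, real/fake) pair to one of 2K joint class indices."""
    return label if is_real else num_classes + label

def softmax_cross_entropy(logits: np.ndarray, target: int) -> float:
    """Numerically stable cross-entropy against a single target index."""
    z = logits - logits.max()
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[target]

# Example: K = 3 classes; a real and a fake sample, both of class 2.
K = 3
logits = np.zeros(2 * K)  # untrained classifier: uniform over the 2K outputs
loss_real = softmax_cross_entropy(logits, adc_target(2, True, K))   # target index 2
loss_fake = softmax_cross_entropy(logits, adc_target(2, False, K))  # target index 5
```

The generator can then be trained against this classifier (alongside the ordinary discriminator), since the classifier's predictions now depend on whether data came from the generator.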
Related papers
- Generative Model Based Noise Robust Training for Unsupervised Domain Adaptation [108.11783463263328]
This paper proposes a Generative model-based Noise-Robust Training method (GeNRT).
It eliminates domain shift while mitigating label noise.
Experiments on Office-Home, PACS, and Digit-Five show that our GeNRT achieves comparable performance to state-of-the-art methods.
arXiv Detail & Related papers (2023-03-10T06:43:55Z)
- RepFair-GAN: Mitigating Representation Bias in GANs Using Gradient Clipping [2.580765958706854]
We define a new fairness notion for generative models in terms of the distribution of generated samples sharing the same protected attributes.
We show that this fairness notion is violated even when the dataset contains equally represented groups.
We show that controlling the groups' gradient norm by performing group-wise gradient norm clipping in the discriminator leads to a more fair data generation.
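The group-wise clipping described above might be sketched as follows. This is an illustrative reading of the summary, not the paper's exact procedure: each protected group's accumulated discriminator gradient is rescaled to a shared norm bound so that no group dominates the update. The names are hypothetical.

```python
import numpy as np

def groupwise_clip(grads_by_group: dict, max_norm: float) -> dict:
    """Rescale each group's gradient so its L2 norm is at most max_norm."""
    clipped = {}
    for group, grad in grads_by_group.items():
        norm = np.linalg.norm(grad)
        scale = min(1.0, max_norm / (norm + 1e-12))  # leave small gradients untouched
        clipped[group] = grad * scale
    return clipped

grads = {"group_a": np.array([3.0, 4.0]),   # norm 5.0 -> rescaled to norm 1.0
         "group_b": np.array([0.3, 0.4])}   # norm 0.5 -> unchanged
clipped = groupwise_clip(grads, max_norm=1.0)
```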
arXiv Detail & Related papers (2022-07-13T14:58:48Z)
- UQGAN: A Unified Model for Uncertainty Quantification of Deep Classifiers trained via Conditional GANs [9.496524884855559]
We present an approach to quantifying uncertainty for deep neural networks in image classification, based on generative adversarial networks (GANs).
Instead of shielding the entire in-distribution data with GAN generated OoD examples, we shield each class separately with out-of-class examples generated by a conditional GAN.
In particular, we improve over the OoD detection and FP detection performance of state-of-the-art GAN-training based classifiers.
arXiv Detail & Related papers (2022-01-31T14:42:35Z)
- Deceive D: Adaptive Pseudo Augmentation for GAN Training with Limited Data [125.7135706352493]
Generative adversarial networks (GANs) typically require ample data for training in order to synthesize high-fidelity images.
Recent studies have shown that training GANs with limited data remains formidable due to discriminator overfitting.
This paper introduces a novel strategy called Adaptive Pseudo Augmentation (APA) to encourage healthy competition between the generator and the discriminator.
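A minimal sketch of the pseudo-augmentation idea, under the assumption (from the summary, not verified against the paper's code) that with some adaptively adjusted probability p, generated samples are presented to the discriminator as if they were real, deceiving an overfitted discriminator. The overfitting signal `lambda_d` and all names here are hypothetical.

```python
import random

def apa_probability(p: float, lambda_d: float, step_size: float = 0.01) -> float:
    """Raise p when the discriminator overfits (lambda_d > 0), else lower it; clamp to [0, 1]."""
    return min(max(p + step_size * (1 if lambda_d > 0 else -1), 0.0), 1.0)

def maybe_pseudo_augment(real_batch, fake_batch, p: float, rng=random):
    """With probability p, show the discriminator a fake batch labeled as 'real'."""
    return fake_batch if rng.random() < p else real_batch
```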
arXiv Detail & Related papers (2021-11-12T18:13:45Z)
- Improving Model Compatibility of Generative Adversarial Networks by Boundary Calibration [24.28407308818025]
Boundary-Calibration GANs (BCGANs) are proposed to improve GAN's model compatibility.
BCGANs generate realistic images like the original GANs but also achieve superior model compatibility.
arXiv Detail & Related papers (2021-11-03T16:08:09Z)
- A Unified View of cGANs with and without Classifiers [24.28407308818025]
Conditional Generative Adversarial Networks (cGANs) are implicit generative models that allow sampling from class-conditional distributions.
Some representative cGANs avoid this shortcoming and reach state-of-the-art performance without using classifiers.
In this work, we demonstrate that classifiers can be properly leveraged to improve cGANs.
arXiv Detail & Related papers (2021-11-01T15:36:33Z)
- Self-supervised GANs with Label Augmentation [43.78253518292111]
We propose a novel self-supervised GANs framework with label augmentation, i.e., augmenting the GAN labels (real or fake) with the self-supervised pseudo-labels.
We demonstrate that the proposed method significantly outperforms competitive baselines on both generative modeling and representation learning.
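The label-augmentation scheme described above can be sketched under one common self-supervision choice, rotation prediction; the paper summary does not specify the pretext task, so the 4-rotation setup and all names here are assumptions. The binary GAN label (real/fake) is combined with the pseudo-label into a product label space of 2 x 4 = 8 joint classes.

```python
ROTATIONS = (0, 90, 180, 270)  # assumed self-supervised pseudo-labels

def augmented_label(is_real: bool, rotation: int) -> int:
    """Map a (real/fake, rotation) pair to one of 8 joint class indices."""
    r = ROTATIONS.index(rotation)
    return r if is_real else len(ROTATIONS) + r

# Real image rotated 180 degrees -> class 2; fake image rotated 270 -> class 7.
example_real = augmented_label(True, 180)
example_fake = augmented_label(False, 270)
```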
arXiv Detail & Related papers (2021-06-16T07:58:00Z)
- Improving Generative Adversarial Networks with Local Coordinate Coding [150.24880482480455]
Generative adversarial networks (GANs) have shown remarkable success in generating realistic data from some predefined prior distribution.
In practice, semantic information might be represented by some latent distribution learned from data.
We propose an LCCGAN model with local coordinate coding (LCC) to improve the performance of generating data.
arXiv Detail & Related papers (2020-07-28T09:17:50Z) - Discriminator Contrastive Divergence: Semi-Amortized Generative Modeling
by Exploring Energy of the Discriminator [85.68825725223873]
Generative Adversarial Networks (GANs) have shown great promise in modeling high dimensional data.
We introduce the Discriminator Contrastive Divergence, which is well motivated by the property of WGAN's discriminator.
We demonstrate significantly improved generation on both synthetic data and several real-world image generation benchmarks.
arXiv Detail & Related papers (2020-04-05T01:50:16Z) - When Relation Networks meet GANs: Relation GANs with Triplet Loss [110.7572918636599]
Training stability is still a lingering concern of generative adversarial networks (GANs).
In this paper, we explore a relation network architecture for the discriminator and design a triplet loss which performs better generalization and stability.
Experiments on benchmark datasets show that the proposed relation discriminator and new loss provide significant improvement on various vision tasks.
arXiv Detail & Related papers (2020-02-24T11:35:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.