Unbiased Auxiliary Classifier GANs with MINE
- URL: http://arxiv.org/abs/2006.07567v1
- Date: Sat, 13 Jun 2020 05:51:51 GMT
- Title: Unbiased Auxiliary Classifier GANs with MINE
- Authors: Ligong Han, Anastasis Stathopoulos, Tao Xue, Dimitris Metaxas
- Abstract summary: We propose an Unbiased Auxiliary Classifier GAN (UAC-GAN) that utilizes the Mutual Information Neural Estimator (MINE) to estimate the mutual information between the generated data distribution and the labels.
Our UAC-GAN performs better than AC-GAN and TAC-GAN on three datasets.
- Score: 7.902878869106766
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Auxiliary Classifier GANs (AC-GANs) are widely used conditional generative
models and are capable of generating high-quality images. Previous work has
pointed out that AC-GAN learns a biased distribution. To remedy this, Twin
Auxiliary Classifier GAN (TAC-GAN) introduces a twin classifier to the min-max
game. However, it has been reported that using a twin auxiliary classifier may
cause instability in training. To this end, we propose an Unbiased Auxiliary
Classifier GAN (UAC-GAN) that utilizes the Mutual Information Neural Estimator (MINE) to
estimate the mutual information between the generated data distribution and
labels. To further improve the performance, we also propose a novel
projection-based statistics network architecture for MINE. Experimental results
on three datasets, namely Mixture of Gaussians (MoG), MNIST, and CIFAR10, show
that our UAC-GAN performs better than AC-GAN and TAC-GAN. Code
can be found on the project website.
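As a rough illustration of the two ingredients named in the abstract, the sketch below shows, in PyTorch, a projection-style statistics network and the Donsker-Varadhan lower bound that MINE maximizes to estimate the mutual information between generated samples and their labels. This is not the authors' released code: the names ProjectionStatNet and mine_lower_bound, the MLP feature extractor, and the layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn


class ProjectionStatNet(nn.Module):
    """Statistics network T(x, y) for MINE, sketched in the style of a
    projection discriminator: an unconditional score psi(phi(x)) plus an
    inner product between the features phi(x) and a learned label embedding."""

    def __init__(self, x_dim: int, num_classes: int, hidden: int = 128):
        super().__init__()
        self.phi = nn.Sequential(
            nn.Linear(x_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.psi = nn.Linear(hidden, 1)                 # unconditional term
        self.embed = nn.Embedding(num_classes, hidden)  # label projection

    def forward(self, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        h = self.phi(x)
        return self.psi(h) + (self.embed(y) * h).sum(dim=1, keepdim=True)


def mine_lower_bound(stat_net: nn.Module, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Donsker-Varadhan lower bound on I(X; Y):
    E_{p(x,y)}[T(x, y)] - log E_{p(x)p(y)}[exp(T(x, y))].

    The given (x, y) pairs play the role of joint samples; samples from the
    product of marginals are approximated by shuffling labels within the batch."""
    y_shuffled = y[torch.randperm(y.size(0))]
    t_joint = stat_net(x, y)                 # T evaluated under the joint
    t_marginal = stat_net(x, y_shuffled)     # T evaluated under p(x)p(y)
    n = torch.tensor(float(x.size(0)))
    # log E[exp(T)] is computed as logsumexp(T) - log(N) for numerical stability.
    return t_joint.mean() - (torch.logsumexp(t_marginal.squeeze(1), dim=0) - torch.log(n))
```

In training, the statistics network would be updated to maximize this bound on generated pairs (x = G(z, y)), and the resulting mutual information estimate enters the generator's objective to counteract the label-conditioning bias of the auxiliary classifier; the exact sign, weighting, and architecture should be taken from the paper and the code on the project website.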
Related papers
- Additional Look into GAN-based Augmentation for Deep Learning COVID-19 Image Classification [57.1795052451257]
We study the dependence of the GAN-based augmentation performance on dataset size with a focus on small samples.
We train StyleGAN2-ADA on both dataset sizes and then, after validating the quality of the generated images, use the trained GANs as one of the augmentation approaches in multi-class classification problems.
The GAN-based augmentation approach is found to be comparable with classical augmentation in the case of medium and large datasets but underperforms in the case of smaller datasets.
arXiv Detail & Related papers (2024-01-26T08:28:13Z)
- Sequential training of GANs against GAN-classifiers reveals correlated "knowledge gaps" present among independently trained GAN instances [1.104121146441257]
We iteratively train GAN-classifiers and train GANs that "fool" the classifiers.
We examine the effect on GAN training dynamics, output quality, and GAN-classifier generalization.
arXiv Detail & Related papers (2023-03-27T18:18:15Z)
- Rebooting ACGAN: Auxiliary Classifier GANs with Stable Training [45.70113212633225]
Conditional Generative Adversarial Networks (cGANs) generate realistic images by incorporating class information into the GAN.
One of the most popular cGANs is the auxiliary classifier GAN with softmax cross-entropy loss (ACGAN).
ACGAN tends to generate easily classifiable samples with a lack of diversity.
arXiv Detail & Related papers (2021-11-01T17:51:33Z)
- A Unified View of cGANs with and without Classifiers [24.28407308818025]
Conditional Generative Adversarial Networks (cGANs) are implicit generative models that allow sampling from class-conditional distributions.
Some representative cGANs avoid the shortcomings associated with classifiers and reach state-of-the-art performance without using them.
In this work, we demonstrate that classifiers can be properly leveraged to improve cGANs.
arXiv Detail & Related papers (2021-11-01T15:36:33Z)
- cGANs with Auxiliary Discriminative Classifier [43.78253518292111]
Conditional generative models aim to learn the underlying joint distribution of data and labels.
Auxiliary classifier generative adversarial networks (AC-GAN) have been widely used, but suffer from low intra-class diversity in generated samples.
We propose a novel cGAN with an auxiliary discriminative classifier (ADC-GAN) to address this issue of AC-GAN.
arXiv Detail & Related papers (2021-07-21T13:06:32Z)
- Learning Efficient GANs for Image Translation via Differentiable Masks and co-Attention Distillation [130.30465659190773]
Generative Adversarial Networks (GANs) have been widely used in image translation, but their high computation and storage costs impede deployment on mobile devices.
We introduce a novel GAN compression method, termed DMAD, by proposing a Differentiable Mask and a co-Attention Distillation.
Experiments show DMAD can reduce the Multiply Accumulate Operations (MACs) of CycleGAN by 13x and those of Pix2Pix by 4x while retaining performance comparable to the full model.
arXiv Detail & Related papers (2020-11-17T02:39:19Z)
- Improving Generative Adversarial Networks with Local Coordinate Coding [150.24880482480455]
Generative adversarial networks (GANs) have shown remarkable success in generating realistic data from some predefined prior distribution.
In practice, semantic information might be represented by some latent distribution learned from data.
We propose an LCCGAN model with local coordinate coding (LCC) to improve the quality of generated data.
arXiv Detail & Related papers (2020-07-28T09:17:50Z)
- Discriminator Contrastive Divergence: Semi-Amortized Generative Modeling by Exploring Energy of the Discriminator [85.68825725223873]
Generative Adversarial Networks (GANs) have shown great promise in modeling high dimensional data.
We introduce the Discriminator Contrastive Divergence, which is well motivated by the property of WGAN's discriminator.
We demonstrate the benefits of significantly improved generation on both synthetic data and several real-world image generation benchmarks.
arXiv Detail & Related papers (2020-04-05T01:50:16Z)
- GANs with Conditional Independence Graphs: On Subadditivity of Probability Divergences [70.30467057209405]
Generative Adversarial Networks (GANs) are modern methods to learn the underlying distribution of a data set.
GANs are designed in a model-free fashion where no additional information about the underlying distribution is available.
We propose a principled design of a model-based GAN that uses a set of simple discriminators on the neighborhoods of the Bayes-net/MRF.
arXiv Detail & Related papers (2020-03-02T04:31:22Z)
- xAI-GAN: Enhancing Generative Adversarial Networks via Explainable AI Systems [16.360144499713524]
Generative Adversarial Networks (GANs) are a revolutionary class of Deep Neural Networks (DNNs) that have been successfully used to generate realistic images, music, text, and other data.
We propose a new class of GAN that leverages recent advances in explainable AI (xAI) systems to provide a "richer" form of corrective feedback from discriminators to generators.
We observe that xAI-GANs provide an improvement of up to 23.18% in the quality of generated images on both the MNIST and FMNIST datasets over standard GANs.
arXiv Detail & Related papers (2020-02-24T18:38:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.