Rebooting ACGAN: Auxiliary Classifier GANs with Stable Training
- URL: http://arxiv.org/abs/2111.01118v1
- Date: Mon, 1 Nov 2021 17:51:33 GMT
- Title: Rebooting ACGAN: Auxiliary Classifier GANs with Stable Training
- Authors: Minguk Kang, Woohyeon Shim, Minsu Cho, Jaesik Park
- Abstract summary: Conditional Generative Adversarial Networks (cGAN) generate realistic images by incorporating class information into GAN.
One of the most popular cGANs is the auxiliary classifier GAN with softmax cross-entropy loss (ACGAN).
ACGAN also tends to generate easily classifiable samples with a lack of diversity.
- Score: 45.70113212633225
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Conditional Generative Adversarial Networks (cGAN) generate realistic images
by incorporating class information into GAN. While one of the most popular
cGANs is an auxiliary classifier GAN with softmax cross-entropy loss (ACGAN),
it is widely known that training ACGAN is challenging as the number of classes
in the dataset increases. ACGAN also tends to generate easily classifiable
samples with a lack of diversity. In this paper, we introduce two cures for
ACGAN. First, we identify that gradient exploding in the classifier can cause
an undesirable collapse in early training, and projecting input vectors onto a
unit hypersphere can resolve the problem. Second, we propose the Data-to-Data
Cross-Entropy loss (D2D-CE) to exploit relational information in the
class-labeled dataset. On this foundation, we propose the Rebooted Auxiliary
Classifier Generative Adversarial Network (ReACGAN). The experimental results
show that ReACGAN achieves state-of-the-art generation results on CIFAR10,
Tiny-ImageNet, CUB200, and ImageNet datasets. We also verify that ReACGAN
benefits from differentiable augmentations and that D2D-CE harmonizes with
StyleGAN2 architecture. Model weights and a software package that provides
implementations of representative cGANs and all experiments in our paper are
available at https://github.com/POSTECH-CVLab/PyTorch-StudioGAN.
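The paper's two cures can be illustrated with a minimal numpy sketch (the function names, shapes, and simplifications below are illustrative, not StudioGAN's actual API): L2-projecting embeddings onto the unit hypersphere bounds feature norms, which counteracts the gradient exploding identified in the classifier, and a simplified D2D-CE-style loss contrasts each sample against class proxies and against other samples in the batch, exploiting relational (data-to-data) information. The margin terms of the real D2D-CE loss are omitted here for brevity.

```python
import numpy as np

def project_to_hypersphere(x, eps=1e-8):
    """L2-normalize each row so embeddings lie on the unit hypersphere,
    bounding their norms (and hence classifier gradient magnitudes)."""
    return x / (np.linalg.norm(x, axis=1, keepdims=True) + eps)

def d2d_ce_sketch(embeddings, proxies, labels, temperature=0.1):
    """Simplified data-to-data cross-entropy: for each sample the positive
    logit is similarity to its class proxy, and the negative logits are
    similarities to other-class samples in the batch, so sample-to-sample
    relations enter the loss (margins of the real D2D-CE are omitted)."""
    e = project_to_hypersphere(embeddings)               # (N, D)
    p = project_to_hypersphere(proxies)                  # (C, D)
    pos = np.sum(e * p[labels], axis=1) / temperature    # (N,) sample-to-proxy
    neg = e @ e.T / temperature                          # (N, N) sample-to-sample
    np.fill_diagonal(neg, -np.inf)                       # a sample is not its own negative
    same = labels[:, None] == labels[None, :]            # same-class pairs
    neg[same] = -np.inf                                  # ...are not negatives either
    logits = np.concatenate([pos[:, None], neg], axis=1)
    m = logits.max(axis=1, keepdims=True)                # stable log-softmax
    log_softmax = logits - (m + np.log(np.exp(logits - m).sum(axis=1, keepdims=True)))
    return -log_softmax[:, 0].mean()                     # NLL of the positive
```

In the actual method the embeddings come from the discriminator and the proxies are learned class embeddings; the contrastive structure above is what distinguishes D2D-CE from ACGAN's plain softmax cross-entropy, which compares each sample only against class weights.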
Related papers
- Additional Look into GAN-based Augmentation for Deep Learning COVID-19
Image Classification [57.1795052451257]
We study the dependence of the GAN-based augmentation performance on dataset size with a focus on small samples.
We train StyleGAN2-ADA with both sets and then, after validating the quality of generated images, we use trained GANs as one of the augmentations approaches in multi-class classification problems.
The GAN-based augmentation approach is found to be comparable with classical augmentation in the case of medium and large datasets but underperforms in the case of smaller datasets.
arXiv Detail & Related papers (2024-01-26T08:28:13Z) - SMaRt: Improving GANs with Score Matching Regularity [94.81046452865583]
Generative adversarial networks (GANs) usually struggle in learning from highly diverse data, whose underlying manifold is complex.
We show that score matching serves as a promising solution to this issue thanks to its capability of persistently pushing the generated data points towards the real data manifold.
We propose to improve the optimization of GANs with score matching regularity (SMaRt)
arXiv Detail & Related papers (2023-11-30T03:05:14Z) - Spider GAN: Leveraging Friendly Neighbors to Accelerate GAN Training [20.03447539784024]
We propose a novel approach for training GANs with images as inputs, but without enforcing any pairwise constraints.
The process can be made efficient by identifying closely related datasets, or a "friendly neighborhood" of the target distribution.
We show that the Spider GAN formulation results in faster convergence, as the generator can discover correspondence even between seemingly unrelated datasets.
arXiv Detail & Related papers (2023-05-12T17:03:18Z) - LD-GAN: Low-Dimensional Generative Adversarial Network for Spectral
Image Generation with Variance Regularization [72.4394510913927]
Deep learning methods are state-of-the-art for spectral image (SI) computational tasks.
GANs enable diverse augmentation by learning and sampling from the data distribution.
GAN-based SI generation is challenging because the high-dimensional nature of this kind of data hinders the convergence of GAN training, yielding suboptimal generation.
We propose a statistical regularization to control the low-dimensional representation variance for the autoencoder training and to achieve high diversity of samples generated with the GAN.
arXiv Detail & Related papers (2023-04-29T00:25:02Z) - Sequential training of GANs against GAN-classifiers reveals correlated
"knowledge gaps" present among independently trained GAN instances [1.104121146441257]
We iteratively train GAN-classifiers and train GANs that "fool" the classifiers.
We examine the effect on GAN training dynamics, output quality, and GAN-classifier generalization.
arXiv Detail & Related papers (2023-03-27T18:18:15Z) - DGL-GAN: Discriminator Guided Learning for GAN Compression [57.6150859067392]
Generative Adversarial Networks (GANs) with high computation costs have achieved remarkable results in synthesizing high-resolution images from random noise.
We propose a novel yet simple Discriminator Guided Learning approach for compressing vanilla GAN, dubbed DGL-GAN.
arXiv Detail & Related papers (2021-12-13T09:24:45Z) - A Unified View of cGANs with and without Classifiers [24.28407308818025]
Conditional Generative Adversarial Networks (cGANs) are implicit generative models that allow sampling from class-conditional distributions.
Some representative cGANs avoid the shortcomings of classifier-based training and reach state-of-the-art performance without using classifiers.
In this work, we demonstrate that classifiers can be properly leveraged to improve cGANs.
arXiv Detail & Related papers (2021-11-01T15:36:33Z) - Ultra-Data-Efficient GAN Training: Drawing A Lottery Ticket First, Then
Training It Toughly [114.81028176850404]
Training generative adversarial networks (GANs) with limited data generally results in deteriorated performance and collapsed models.
We decompose the data-hungry GAN training into two sequential sub-problems.
Such a coordinated framework enables us to focus on lower-complexity and more data-efficient sub-problems.
arXiv Detail & Related papers (2021-02-28T05:20:29Z) - Unbiased Auxiliary Classifier GANs with MINE [7.902878869106766]
We propose an Unbiased Auxiliary Classifier GAN (UAC-GAN) that utilizes the Mutual Information Neural Estimator (MINE) to estimate the mutual information between the generated data distribution and labels.
Our UAC-GAN performs better than AC-GAN and TACGAN on three datasets.
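MINE estimates mutual information via the Donsker-Varadhan lower bound I(X;Y) ≥ E_{p(x,y)}[T] − log E_{p(x)p(y)}[e^T], normally maximized over a trained neural critic T. The sketch below is a hypothetical illustration only: it replaces the trained network with a fixed hand-picked bilinear critic, which is enough to show the bound going positive for correlated data.

```python
import numpy as np

def dv_bound(x, y, critic):
    """Donsker-Varadhan lower bound on I(X;Y):
    E_{p(x,y)}[T(x,y)] - log E_{p(x)p(y)}[exp(T(x,y))].
    Joint samples are the paired (x, y); product-of-marginals samples
    are formed by shuffling y within the batch."""
    joint = critic(x, y).mean()
    y_shuffled = np.random.default_rng(1).permutation(y)
    t = critic(x, y_shuffled)
    marginal = np.log(np.exp(t - t.max()).mean()) + t.max()  # stable log-mean-exp
    return joint - marginal

rng = np.random.default_rng(0)
n, rho = 100_000, 0.9
x = rng.normal(size=n)
y = rho * x + np.sqrt(1 - rho**2) * rng.normal(size=n)  # corr(x, y) = rho

# MINE would train a small network as the critic; here a fixed bilinear
# critic T(x, y) = 0.3 * x * y suffices to expose the dependence.
critic = lambda a, b: 0.3 * a * b
mi_lower = dv_bound(x, y, critic)  # positive lower bound for dependent x, y
```

Because this is a lower bound, the estimate (about 0.22 nats here) sits below the true mutual information of the Gaussian pair (about 0.83 nats for rho = 0.9); training the critic tightens the bound.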
arXiv Detail & Related papers (2020-06-13T05:51:51Z) - xAI-GAN: Enhancing Generative Adversarial Networks via Explainable AI
Systems [16.360144499713524]
Generative Adversarial Networks (GANs) are a revolutionary class of Deep Neural Networks (DNNs) that have been successfully used to generate realistic images, music, text, and other data.
We propose a new class of GAN that leverages recent advances in explainable AI (xAI) systems to provide a "richer" form of corrective feedback from discriminators to generators.
We observe xAI-GANs provide an improvement of up to 23.18% in the quality of generated images on both MNIST and FMNIST datasets over standard GANs.
arXiv Detail & Related papers (2020-02-24T18:38:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.