Collapse by Conditioning: Training Class-conditional GANs with Limited
Data
- URL: http://arxiv.org/abs/2201.06578v1
- Date: Mon, 17 Jan 2022 18:59:23 GMT
- Title: Collapse by Conditioning: Training Class-conditional GANs with Limited
Data
- Authors: Mohamad Shahbazi, Martin Danelljan, Danda Pani Paudel, Luc Van Gool
- Abstract summary: We propose a training strategy for conditional GANs (cGANs) that effectively prevents the observed mode-collapse by leveraging unconditional learning.
Our training strategy starts with an unconditional GAN and gradually injects conditional information into the generator and the objective function.
The proposed method for training cGANs with limited data results not only in stable training but also in generating high-quality images.
- Score: 109.30895503994687
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Class-conditioning offers a direct means of controlling a Generative
Adversarial Network (GAN) based on a discrete input variable. While necessary
in many applications, the additional information provided by the class labels
could even be expected to benefit the training of the GAN itself. Contrary to
this belief, we observe that class-conditioning causes mode collapse in limited
data settings, where unconditional learning leads to satisfactory generative
ability. Motivated by this observation, we propose a training strategy for
conditional GANs (cGANs) that effectively prevents the observed mode-collapse
by leveraging unconditional learning. Our training strategy starts with an
unconditional GAN and gradually injects conditional information into the
generator and the objective function. The proposed method for training cGANs
with limited data results not only in stable training but also in generating
high-quality images, thanks to the early-stage exploitation of the shared
information across classes. We analyze the aforementioned mode collapse problem
in comprehensive experiments on four datasets. Our approach demonstrates
outstanding results compared with state-of-the-art methods and established
baselines. The code is available at:
https://github.com/mshahbazi72/transitional-cGAN
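The gradual injection described in the abstract can be pictured as a conditioning weight that ramps from 0 to 1 over training, applied both to the generator's class embedding and to the conditional term of the objective. Below is a minimal PyTorch sketch of that idea; the module structure, the additive blending, and the linear schedule are illustrative assumptions, not the authors' implementation (see the repository above for that).

```python
# Minimal sketch: start unconditional, then gradually inject class
# information. All names and the schedule are hypothetical; see the
# official repository above for the authors' actual implementation.
import torch
import torch.nn as nn

class TransitionalGenerator(nn.Module):
    def __init__(self, z_dim=64, n_classes=10, img_dim=784):
        super().__init__()
        self.embed = nn.Embedding(n_classes, z_dim)
        self.net = nn.Sequential(
            nn.Linear(z_dim, 256), nn.ReLU(),
            nn.Linear(256, img_dim), nn.Tanh(),
        )

    def forward(self, z, y, lam):
        # lam = 0.0: purely unconditional; lam = 1.0: fully class-conditional.
        return self.net(z + lam * self.embed(y))

def transition_weight(step, t_start=2000, t_end=4000):
    """Linear ramp from unconditional (0.0) to conditional (1.0)."""
    if step <= t_start:
        return 0.0
    if step >= t_end:
        return 1.0
    return (step - t_start) / (t_end - t_start)

# The same weight would scale the conditional term of the GAN objective,
# so the generator input and the loss transition together.
G = TransitionalGenerator()
z = torch.randn(8, 64)
y = torch.randint(0, 10, (8,))
fake = G(z, y, lam=transition_weight(step=3000))  # mid-transition, lam = 0.5
print(fake.shape)  # torch.Size([8, 784])
```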
Related papers
- Diffusion-Driven Data Replay: A Novel Approach to Combat Forgetting in Federated Class Continual Learning [13.836798036474143]
A key challenge in Federated Class Continual Learning is catastrophic forgetting.
We propose a novel method of data replay based on diffusion models.
Our method significantly outperforms existing baselines.
arXiv Detail & Related papers (2024-09-02T10:07:24Z)
- Taming the Tail in Class-Conditional GANs: Knowledge Sharing via Unconditional Training at Lower Resolutions [10.946446480162148]
GANs tend to favor classes with more samples, leading to the generation of low-quality and less diverse samples in tail classes.
We propose a straightforward yet effective method for knowledge sharing, allowing tail classes to borrow rich information from classes with more abundant training data.
Experiments on several long-tail benchmarks and GAN architectures demonstrate a significant improvement over existing methods in both the diversity and fidelity of the generated images.
arXiv Detail & Related papers (2024-02-26T23:03:00Z)
- Zero-Shot Conditioning of Score-Based Diffusion Models by Neuro-Symbolic Constraints [1.1826485120701153]
We propose a method that, given a pre-trained unconditional score-based generative model, samples from the conditional distribution under arbitrary logical constraints.
We show how to manipulate the learned score in order to sample from an un-normalized distribution conditional on a user-defined constraint (see the identity sketched below).
We define a flexible and numerically stable neuro-symbolic framework for encoding soft logical constraints.
arXiv Detail & Related papers (2023-08-31T08:25:47Z)
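Conditioning a pretrained score model is usually grounded in the Bayes decomposition of the conditional score; the constraint-likelihood term below is a generic stand-in, since the paper's neuro-symbolic encoding of soft constraints is not detailed in this summary.

```latex
\nabla_x \log p(x \mid c) = \nabla_x \log p(x) + \nabla_x \log p(c \mid x)
```

The first term on the right is what the pretrained unconditional model already provides; "manipulating the learned score" amounts to adding a gradient contribution from the constraint term.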
- Information-theoretic stochastic contrastive conditional GAN: InfoSCC-GAN [6.201770337181472]
We present a contrastive conditional generative adversarial network (InfoSCC-GAN) with an explorable latent space.
InfoSCC-GAN is derived from an information-theoretic formulation of the mutual information between the input data and the latent space representation.
Experiments show that InfoSCC-GAN outperforms the "vanilla" EigenGAN in image generation on the AFHQ and CelebA datasets.
arXiv Detail & Related papers (2021-12-17T17:56:30Z)
- Black-Box Diagnosis and Calibration on GAN Intra-Mode Collapse: A Pilot Study [116.05514467222544]
Generative adversarial networks (GANs) are now capable of producing images of incredible realism.
One concern raised is whether the state-of-the-art GAN's learned distribution still suffers from mode collapse.
This paper explores how to diagnose and calibrate GAN intra-mode collapse in a novel black-box setting.
arXiv Detail & Related papers (2021-07-23T06:03:55Z)
- Efficient Conditional GAN Transfer with Knowledge Propagation across Classes [85.38369543858516]
cGANs provide new opportunities for knowledge transfer compared to the unconditional setup.
New classes may borrow knowledge from the related old classes, or share knowledge among themselves to improve the training.
The proposed GAN transfer method explicitly propagates knowledge from the old classes to the new classes.
arXiv Detail & Related papers (2021-02-12T18:55:34Z)
- S2cGAN: Semi-Supervised Training of Conditional GANs with Fewer Labels [1.3764085113103222]
Conditional GANs (cGANs) provide a mechanism to control the generation process by conditioning the output on a user-defined input.
We propose a framework for semi-supervised training of cGANs which utilizes sparse labels to learn the conditional mapping.
We demonstrate the effectiveness of our method on multiple datasets and different conditional tasks.
arXiv Detail & Related papers (2020-10-23T19:13:44Z)
- Conditional Hybrid GAN for Sequence Generation [56.67961004064029]
We propose a novel conditional hybrid GAN (C-Hybrid-GAN) for conditional discrete-valued sequence generation.
We exploit the Gumbel-Softmax technique to approximate the distribution of discrete-valued sequences (sketched below).
We demonstrate that the proposed C-Hybrid-GAN outperforms the existing methods in context-conditioned discrete-valued sequence generation.
arXiv Detail & Related papers (2020-09-18T03:52:55Z)
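The Gumbel-Softmax trick mentioned in the entry above replaces non-differentiable categorical sampling with a differentiable, approximately one-hot relaxation, which is what makes adversarial training on discrete sequences tractable. A generic sketch follows (not code from the C-Hybrid-GAN paper); PyTorch also ships this as torch.nn.functional.gumbel_softmax.

```python
# Generic Gumbel-Softmax relaxation: a differentiable stand-in for
# sampling from a categorical distribution over discrete tokens.
import torch
import torch.nn.functional as F

def gumbel_softmax_sample(logits, tau=1.0):
    """Draw an approximately one-hot, differentiable sample from `logits`."""
    u = torch.rand_like(logits)
    gumbel = -torch.log(-torch.log(u + 1e-20) + 1e-20)  # Gumbel(0, 1) noise
    return F.softmax((logits + gumbel) / tau, dim=-1)   # lower tau = harder

logits = torch.randn(4, 10)                 # batch of 4, vocabulary of 10
soft_tokens = gumbel_softmax_sample(logits, tau=0.5)
print(soft_tokens.sum(dim=-1))              # each row sums to 1
```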
- On Leveraging Pretrained GANs for Generation with Limited Data [83.32972353800633]
Generative adversarial networks (GANs) can generate highly realistic images that are often indistinguishable by humans from real images.
Most of the generated images are not contained in the training dataset, suggesting potential for augmenting training sets with GAN-generated data.
We leverage existing GAN models pretrained on large-scale datasets to introduce additional knowledge, following the concept of transfer learning.
An extensive set of experiments is presented to demonstrate the effectiveness of the proposed techniques on generation with limited data.
arXiv Detail & Related papers (2020-02-26T21:53:36Z)
- Freeze the Discriminator: a Simple Baseline for Fine-Tuning GANs [104.85633684716296]
We show that simple fine-tuning of GANs with frozen lower layers of the discriminator performs surprisingly well (see the sketch below).
This simple baseline, FreezeD, significantly outperforms previous techniques used in both unconditional and conditional GANs.
arXiv Detail & Related papers (2020-02-25T15:30:17Z)
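FreezeD amounts to disabling gradients for the discriminator's lower, feature-extracting layers while fine-tuning the remaining ones. A minimal sketch follows; the architecture and the split point are illustrative assumptions, not the paper's exact setup.

```python
# Sketch of the FreezeD baseline: freeze the lower discriminator layers
# during fine-tuning. The network and split point here are hypothetical.
import torch
import torch.nn as nn

disc = nn.Sequential(
    nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),   # lower
    nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2), # lower
    nn.Flatten(), nn.Linear(128 * 8 * 8, 1),   # head (assumes 32x32 inputs)
)

n_frozen = 4  # freeze the first four modules (both conv blocks)
for module in list(disc.children())[:n_frozen]:
    for p in module.parameters():
        p.requires_grad = False

# Only the unfrozen head is passed to the optimizer.
trainable = [p for p in disc.parameters() if p.requires_grad]
opt = torch.optim.Adam(trainable, lr=1e-4)
scores = disc(torch.randn(2, 3, 32, 32))  # shape: (2, 1)
```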