A New Perspective on Stabilizing GANs training: Direct Adversarial
Training
- URL: http://arxiv.org/abs/2008.09041v5
- Date: Tue, 19 Jul 2022 14:47:32 GMT
- Title: A New Perspective on Stabilizing GANs training: Direct Adversarial
Training
- Authors: Ziqiang Li, Pengfei Xia, Rentuo Tao, Hongjing Niu, Bin Li
- Abstract summary: Training instability is still one of the open problems for all GAN-based algorithms.
It is found that the images produced by the generator sometimes act like adversarial examples for the discriminator during training.
We propose the Direct Adversarial Training method to stabilize the training process of GANs.
- Score: 10.66166999381244
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Generative Adversarial Networks (GANs) are the most popular image generation
models that have achieved remarkable progress on various computer vision tasks.
However, training instability is still one of the open problems for all
GAN-based algorithms. Many methods have been proposed to stabilize the training
of GANs, focusing respectively on loss functions, regularization and
normalization techniques, training algorithms, and model architectures.
Different from the above methods, this paper presents a new perspective on
stabilizing GAN training. It is found that the images produced by the generator
sometimes act like adversarial examples for the discriminator during training,
which may be part of the reason for the unstable training of GANs. Based on
this finding, we propose the Direct Adversarial Training (DAT) method to
stabilize the training process of GANs. Furthermore, we prove that the DAT
method is able to minimize the Lipschitz constant of the discriminator
adaptively. The effectiveness of DAT is verified on multiple loss functions,
network architectures, hyper-parameters,
and datasets. Specifically, DAT achieves significant improvements of 11.5% FID
on CIFAR-100 unconditional generation based on SSGAN, 10.5% FID on STL-10
unconditional generation based on SSGAN, and 13.2% FID on LSUN-Bedroom
unconditional generation based on SSGAN. Code will be available at
https://github.com/iceli1007/DAT-GAN
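The abstract describes the mechanism only at a high level, so a rough illustration may help: the generator's outputs can behave like adversarial examples for the discriminator, and DAT counters this by training the discriminator directly on adversarially perturbed inputs, which the authors show also controls its Lipschitz constant adaptively. Below is a minimal PyTorch-style sketch of one discriminator update under this idea, assuming a single FGSM-like perturbation step and a hinge loss; the helper names, the perturbation budget eps, and the loss choice are illustrative assumptions, not the released implementation.

```python
# Hypothetical sketch of a DAT-style discriminator update: perturb the
# discriminator's inputs in the direction that increases its loss, then
# train on the perturbed batch. Assumes images normalized to [-1, 1];
# eps and the hinge loss are illustrative choices, not the paper's code.
import torch
import torch.nn.functional as F

def fgsm_perturb(d, x, is_real, eps=2.0 / 255):
    """Single-step adversarial perturbation of discriminator inputs."""
    x_adv = x.detach().clone().requires_grad_(True)
    logits = d(x_adv)
    loss = F.relu(1.0 - logits).mean() if is_real else F.relu(1.0 + logits).mean()
    grad, = torch.autograd.grad(loss, x_adv)
    # Step along the sign of the gradient, i.e. toward an adversarial example.
    return (x_adv + eps * grad.sign()).clamp(-1.0, 1.0).detach()

def discriminator_step(d, g, d_opt, real, z):
    fake = g(z).detach()
    # Feed adversarially perturbed real and generated images to D.
    real_adv = fgsm_perturb(d, real, is_real=True)
    fake_adv = fgsm_perturb(d, fake, is_real=False)
    d_loss = F.relu(1.0 - d(real_adv)).mean() + F.relu(1.0 + d(fake_adv)).mean()
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()
    return d_loss.item()
```

Whether the generator update also sees perturbed inputs, and how eps is scheduled during training, are details to take from the paper and its repository rather than from this sketch.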
Related papers
- CHAIN: Enhancing Generalization in Data-Efficient GANs via lipsCHitz continuity constrAIned Normalization [36.20084231028338]
Generative Adversarial Networks (GANs) have significantly advanced image generation, but their performance heavily depends on abundant training data.
In scenarios with limited data, GANs often struggle with discriminator overfitting and unstable training.
We present CHAIN, which replaces the conventional centering step with zero-mean regularization and integrates a Lipschitz continuity constraint in the scaling step.
arXiv Detail & Related papers (2024-03-31T01:41:36Z)
- LD-GAN: Low-Dimensional Generative Adversarial Network for Spectral Image Generation with Variance Regularization [72.4394510913927]
Deep learning methods are state-of-the-art for spectral image (SI) computational tasks.
GANs enable diverse augmentation by learning and sampling from the data distribution.
GAN-based SI generation is challenging because the high dimensionality of this kind of data hinders the convergence of GAN training, yielding suboptimal generation.
We propose a statistical regularization that controls the variance of the low-dimensional representation during autoencoder training and achieves high diversity in the samples generated with the GAN.
arXiv Detail & Related papers (2023-04-29T00:25:02Z)
- Augmentation-Aware Self-Supervision for Data-Efficient GAN Training [68.81471633374393]
Training generative adversarial networks (GANs) with limited data is challenging because the discriminator is prone to overfitting.
We propose a novel augmentation-aware self-supervised discriminator that predicts the augmentation parameter of the augmented data.
We compare our method with state-of-the-art (SOTA) methods using the class-conditional BigGAN and unconditional StyleGAN2 architectures.
arXiv Detail & Related papers (2022-05-31T10:35:55Z)
- Collapse by Conditioning: Training Class-conditional GANs with Limited Data [109.30895503994687]
We propose a training strategy for conditional GANs (cGANs) that effectively prevents the observed mode-collapse by leveraging unconditional learning.
Our training strategy starts with an unconditional GAN and gradually injects conditional information into the generator and the objective function.
The proposed method for training cGANs with limited data results not only in stable training but also in generating high-quality images.
arXiv Detail & Related papers (2022-01-17T18:59:23Z)
- Regularizing Generative Adversarial Networks under Limited Data [88.57330330305535]
This work proposes a regularization approach for training robust GAN models on limited data.
We show a connection between the regularized loss and an f-divergence called LeCam-divergence, which we find is more robust under limited training data (a rough sketch of this style of regularizer appears after this list).
arXiv Detail & Related papers (2021-04-07T17:59:06Z)
- Ultra-Data-Efficient GAN Training: Drawing A Lottery Ticket First, Then Training It Toughly [114.81028176850404]
Training generative adversarial networks (GANs) with limited data generally results in deteriorated performance and collapsed models.
We decompose the data-hungry GAN training into two sequential sub-problems.
Such a coordinated framework enables us to focus on lower-complexity and more data-efficient sub-problems.
arXiv Detail & Related papers (2021-02-28T05:20:29Z)
- InfoMax-GAN: Improved Adversarial Image Generation via Information Maximization and Contrastive Learning [39.316605441868944]
Generative Adversarial Networks (GANs) are fundamental to many generative modelling applications.
We propose a principled framework to simultaneously mitigate two fundamental issues in GANs: catastrophic forgetting of the discriminator and mode collapse of the generator.
Our approach significantly stabilizes GAN training and improves GAN performance for image synthesis across five datasets.
arXiv Detail & Related papers (2020-07-09T06:56:11Z)
- Improving GAN Training with Probability Ratio Clipping and Sample Reweighting [145.5106274085799]
Generative adversarial networks (GANs) often suffer from inferior performance due to unstable training.
We propose a new variational GAN training framework which enjoys superior training stability.
By plugging the training approach in diverse state-of-the-art GAN architectures, we obtain significantly improved performance over a range of tasks.
arXiv Detail & Related papers (2020-06-12T01:39:48Z)
- Training Generative Adversarial Networks with Limited Data [42.72100066471578]
We propose an adaptive discriminator augmentation mechanism that significantly stabilizes training in limited data regimes.
Good results are now possible using only a few thousand training images, often matching StyleGAN2 results with an order of magnitude fewer images.
arXiv Detail & Related papers (2020-06-11T17:06:34Z)
- Stabilizing Training of Generative Adversarial Nets via Langevin Stein Variational Gradient Descent [11.329376606876101]
We propose to stabilize GAN training via a novel particle-based variational inference method, Langevin Stein variational gradient descent (LSVGD).
We show that the LSVGD dynamics has an implicit regularization effect that enhances the spread and diversity of particles.
arXiv Detail & Related papers (2020-04-22T11:20:04Z)
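As a side note to the "Regularizing Generative Adversarial Networks under Limited Data" entry above, the LeCam regularizer it refers to is commonly implemented by pulling the discriminator's real and fake predictions toward exponential moving averages of the opposite kind of prediction. The sketch below is a simplified reconstruction from the cited paper, not code from this site or the DAT authors; the decay, the weight lambda_lc, and the exact anchoring of the EMAs should be checked against the original work.

```python
# Simplified sketch of a LeCam-style regularizer for the discriminator,
# added to the usual adversarial loss with a small weight. The EMA decay
# and the pairing of predictions with EMAs follow one common reading of
# the cited paper and should be verified against its implementation.
import torch

class LeCamRegularizer:
    def __init__(self, decay=0.99):
        self.decay = decay
        self.ema_real = 0.0  # running mean of D outputs on real images
        self.ema_fake = 0.0  # running mean of D outputs on generated images

    def __call__(self, d_real, d_fake):
        with torch.no_grad():  # EMAs are tracked without gradients
            self.ema_real = self.decay * self.ema_real + (1 - self.decay) * d_real.mean().item()
            self.ema_fake = self.decay * self.ema_fake + (1 - self.decay) * d_fake.mean().item()
        # Penalize real predictions that drift far from the fake EMA and
        # fake predictions that drift far from the real EMA.
        return ((d_real - self.ema_fake) ** 2).mean() + ((d_fake - self.ema_real) ** 2).mean()

# Usage inside a discriminator step (lambda_lc is an assumed weight):
#   d_loss = adv_loss + lambda_lc * lecam(d(real_images), d(fake_images))
```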
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.