DigGAN: Discriminator gradIent Gap Regularization for GAN Training with
Limited Data
- URL: http://arxiv.org/abs/2211.14694v1
- Date: Sun, 27 Nov 2022 01:03:58 GMT
- Title: DigGAN: Discriminator gradIent Gap Regularization for GAN Training with
Limited Data
- Authors: Tiantian Fang, Ruoyu Sun, Alex Schwing
- Abstract summary: We propose a Discriminator gradIent Gap regularized GAN (DigGAN) formulation which can be added to any existing GAN.
DigGAN augments existing GANs with a regularizer that narrows the gap between the norm of the gradient of the discriminator's prediction w.r.t. real images and w.r.t. generated samples.
We observe that this formulation avoids bad attractors within the GAN loss landscape, and we find DigGAN to significantly improve the results of GAN training when limited data is available.
- Score: 13.50061291734299
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Generative adversarial nets (GANs) have been remarkably successful at
learning to sample from distributions specified by a given dataset,
particularly if the given dataset is reasonably large compared to its
dimensionality. However, given limited data, classical GANs have struggled, and
strategies like output-regularization, data-augmentation, use of pre-trained
models and pruning have been shown to lead to improvements. Notably, the
applicability of these strategies is 1) often constrained to particular
settings, e.g., availability of a pretrained GAN; or 2) increases training
time, e.g., when using pruning. In contrast, we propose a Discriminator
gradIent Gap regularized GAN (DigGAN) formulation which can be added to any
existing GAN. DigGAN augments existing GANs with a regularizer that narrows the
gap between the norm of the gradient of a discriminator's prediction w.r.t. real
images and w.r.t. generated samples. We observe that this formulation avoids
bad attractors within the GAN loss landscape, and we find DigGAN to
significantly improve the results of GAN training when limited data is
available. Code is available at https://github.com/AilsaF/DigGAN.
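The regularizer described above can be sketched in a few lines. The following is a minimal NumPy sketch with a toy one-layer discriminator whose input gradient is written out analytically; in a real GAN the discriminator is a deep network and the input gradients come from autograd. The exact form of the gap penalty (here, the squared difference of the batch-mean gradient norms) is one plausible reading of the abstract, and the names `diggan_penalty` and `lambda_dig` are illustrative, not taken from the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy discriminator D(x) = tanh(w.x + b).  In practice this is a deep network
# and dD/dx is obtained via autograd; here it is written out analytically so
# the example stays self-contained.
w = rng.normal(size=8)
b = 0.1

def disc_input_grad(x):
    # dD/dx for D(x) = tanh(w.x + b) is (1 - tanh(w.x + b)^2) * w, per sample.
    z = np.tanh(x @ w + b)
    return (1.0 - z ** 2)[:, None] * w[None, :]

def diggan_penalty(x_real, x_fake):
    """Squared gap between the mean input-gradient norms of the
    discriminator on real samples and on generated samples."""
    g_real = np.linalg.norm(disc_input_grad(x_real), axis=1).mean()
    g_fake = np.linalg.norm(disc_input_grad(x_fake), axis=1).mean()
    return (g_real - g_fake) ** 2

x_real = rng.normal(size=(16, 8))            # stand-in for a real batch
x_fake = rng.normal(loc=2.0, size=(16, 8))   # stand-in for generator output

penalty = diggan_penalty(x_real, x_fake)
# The regularizer would be added to the usual discriminator loss with a
# weight, e.g.:  d_loss = adversarial_loss + lambda_dig * penalty
print(penalty)
```

Because the penalty is a squared gap, it is non-negative and vanishes when the discriminator's gradient norms on real and generated batches match, which is the condition the regularizer encourages.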
Related papers
- LD-GAN: Low-Dimensional Generative Adversarial Network for Spectral
Image Generation with Variance Regularization [72.4394510913927]
Deep learning methods are state-of-the-art for spectral image (SI) computational tasks.
GANs enable diverse augmentation by learning and sampling from the data distribution.
GAN-based SI generation is challenging since the high-dimensional nature of this kind of data hinders the convergence of GAN training, yielding suboptimal generation.
We propose a statistical regularization to control the low-dimensional representation variance for the autoencoder training and to achieve high diversity of samples generated with the GAN.
arXiv Detail & Related papers (2023-04-29T00:25:02Z)
- Combating Mode Collapse in GANs via Manifold Entropy Estimation [70.06639443446545]
Generative Adversarial Networks (GANs) have shown compelling results in various tasks and applications.
We propose a novel training pipeline to address the mode collapse issue of GANs.
arXiv Detail & Related papers (2022-08-25T12:33:31Z)
- Augmentation-Aware Self-Supervision for Data-Efficient GAN Training [68.81471633374393]
Training generative adversarial networks (GANs) with limited data is challenging because the discriminator is prone to overfitting.
We propose a novel augmentation-aware self-supervised discriminator that predicts the augmentation parameter of the augmented data.
We compare our method with state-of-the-art (SOTA) methods using the class-conditional BigGAN and unconditional StyleGAN2 architectures.
arXiv Detail & Related papers (2022-05-31T10:35:55Z)
- Adaptive DropBlock Enhanced Generative Adversarial Networks for Hyperspectral Image Classification [36.679303770326264]
We propose an Adaptive DropBlock-enhanced Generative Adversarial Networks (ADGAN) for hyperspectral image (HSI) classification.
The discriminator in a GAN always contradicts itself and tries to associate fake labels with the minority-class samples, thus impairing classification performance.
Experimental results on three HSI datasets demonstrated that the proposed ADGAN achieved superior performance over state-of-the-art GAN-based methods.
arXiv Detail & Related papers (2022-01-22T01:43:59Z)
- Tail of Distribution GAN (TailGAN): Generative-Adversarial-Network-Based Boundary Formation [0.0]
We create a GAN-based tail formation model for anomaly detection, the Tail of distribution GAN (TailGAN).
Using TailGAN, we leverage GANs for anomaly detection and use maximum entropy regularization.
We evaluate TailGAN for identifying Out-of-Distribution (OoD) data; its performance on MNIST, CIFAR-10, Baggage X-Ray, and OoD data is competitive with methods from the literature.
arXiv Detail & Related papers (2021-07-24T17:29:21Z)
- Improving Generative Adversarial Networks with Local Coordinate Coding [150.24880482480455]
Generative adversarial networks (GANs) have shown remarkable success in generating realistic data from some predefined prior distribution.
In practice, semantic information might be represented by some latent distribution learned from data.
We propose an LCCGAN model with local coordinate coding (LCC) to improve the performance of generating data.
arXiv Detail & Related papers (2020-07-28T09:17:50Z)
- On Data Augmentation for GAN Training [39.074761323958406]
We propose Data Augmentation Optimized for GAN (DAG) to enable the use of augmented data in GAN training.
We conduct experiments to apply DAG to different GAN models.
When DAG is used with certain GAN models, the system achieves state-of-the-art Fréchet Inception Distance (FID) scores.
arXiv Detail & Related papers (2020-06-09T15:19:26Z)
- Discriminator Contrastive Divergence: Semi-Amortized Generative Modeling by Exploring Energy of the Discriminator [85.68825725223873]
Generative Adversarial Networks (GANs) have shown great promise in modeling high dimensional data.
We introduce the Discriminator Contrastive Divergence, which is well motivated by the property of WGAN's discriminator.
We demonstrate the benefits of significantly improved generation on both synthetic data and several real-world image generation benchmarks.
arXiv Detail & Related papers (2020-04-05T01:50:16Z)
- On Leveraging Pretrained GANs for Generation with Limited Data [83.32972353800633]
Generative adversarial networks (GANs) can generate highly realistic images that are often indistinguishable (by humans) from real images.
Most images so generated are not contained in a training dataset, suggesting potential for augmenting training sets with GAN-generated data.
We leverage existing GAN models pretrained on large-scale datasets to introduce additional knowledge, following the concept of transfer learning.
An extensive set of experiments is presented to demonstrate the effectiveness of the proposed techniques on generation with limited data.
arXiv Detail & Related papers (2020-02-26T21:53:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.