Deceive D: Adaptive Pseudo Augmentation for GAN Training with Limited
Data
- URL: http://arxiv.org/abs/2111.06849v1
- Date: Fri, 12 Nov 2021 18:13:45 GMT
- Title: Deceive D: Adaptive Pseudo Augmentation for GAN Training with Limited
Data
- Authors: Liming Jiang, Bo Dai, Wayne Wu, Chen Change Loy
- Abstract summary: Generative adversarial networks (GANs) typically require ample data for training in order to synthesize high-fidelity images.
Recent studies have shown that training GANs with limited data remains formidable due to discriminator overfitting.
This paper introduces a novel strategy called Adaptive Pseudo Augmentation (APA) to encourage healthy competition between the generator and the discriminator.
- Score: 125.7135706352493
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative adversarial networks (GANs) typically require ample data for
training in order to synthesize high-fidelity images. Recent studies have shown
that training GANs with limited data remains formidable due to discriminator
overfitting, the underlying cause that impedes the generator's convergence.
This paper introduces a novel strategy called Adaptive Pseudo Augmentation
(APA) to encourage healthy competition between the generator and the
discriminator. As an alternative method to existing approaches that rely on
standard data augmentations or model regularization, APA alleviates overfitting
by employing the generator itself to augment the real data distribution with
generated images, which deceives the discriminator adaptively. Extensive
experiments demonstrate the effectiveness of APA in improving synthesis quality
in the low-data regime. We provide a theoretical analysis to examine the
convergence and rationality of our new training strategy. APA is simple and
effective. It can be added seamlessly to powerful contemporary GANs, such as
StyleGAN2, with negligible computational cost.
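The adaptive deception step the abstract describes can be sketched without a real GAN: an overfitting heuristic on the discriminator's real-image outputs drives a deception probability `p`, and with probability `p` per sample the discriminator is shown generator outputs presented as real. A minimal NumPy sketch follows; the heuristic `E[sign(D(x_real))]`, the 0.6 threshold, and the 0.01 step size are illustrative ADA-style assumptions, not the paper's exact constants, and arrays stand in for images and logits.

```python
import numpy as np

def update_p(p, d_real_logits, target=0.6, step=0.01):
    """Adjust the deception probability from an overfitting heuristic.

    lam = E[sign(D(x_real))] rises toward 1 as the discriminator grows
    overconfident on real data, a symptom of overfitting; p is nudged
    up when lam exceeds the target and down otherwise.
    """
    lam = float(np.mean(np.sign(d_real_logits)))
    p += step if lam > target else -step
    return float(np.clip(p, 0.0, 1.0))

def pseudo_augment(real_batch, fake_batch, p, rng):
    """With probability p per sample, present a generated image to the
    discriminator as if it were real (the 'deception' step)."""
    mask = rng.random(len(real_batch)) < p
    out = real_batch.copy()
    out[mask] = fake_batch[mask]
    return out, mask

rng = np.random.default_rng(0)
real = np.ones((8, 4))   # stand-ins for real images
fake = np.zeros((8, 4))  # stand-ins for generator outputs

# Overconfident discriminator: all real logits positive, so p increases.
p = update_p(0.5, d_real_logits=np.array([2.0, 1.5, 3.0]))
mixed, mask = pseudo_augment(real, fake, p, rng)
```

Because the pseudo-augmented samples come from the generator itself, the mechanism needs no extra model or hand-tuned augmentation pipeline, which is why the abstract can claim negligible computational cost.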
Related papers
- Parallelly Tempered Generative Adversarial Networks [7.94957965474334]
A generative adversarial network (GAN) has been a representative backbone model in generative artificial intelligence (AI).
This work analyzes the training instability and inefficiency in the presence of mode collapse by linking it to multimodality in the target distribution.
With our newly developed GAN objective function, the generator can learn all the tempered distributions simultaneously.
arXiv Detail & Related papers (2024-11-18T18:01:13Z)
- MCGAN: Enhancing GAN Training with Regression-Based Generator Loss [5.7645234295847345]
A generative adversarial network (GAN) has emerged as a powerful tool for generating high-fidelity data.
We propose an algorithm called Monte Carlo GAN (MCGAN)
This approach, utilizing an innovative generative loss function, termed the regression loss, reformulates generator training as a regression task.
We show that our method requires a weaker condition on the discriminator for effective generator training.
arXiv Detail & Related papers (2024-05-27T14:15:52Z)
- Bt-GAN: Generating Fair Synthetic Healthdata via Bias-transforming Generative Adversarial Networks [3.3903891679981593]
We present Bias-transforming Generative Adversarial Networks (Bt-GAN), a GAN-based synthetic data generator specifically designed for the healthcare domain.
Our results demonstrate that Bt-GAN achieves SOTA accuracy while significantly improving fairness and minimizing bias.
arXiv Detail & Related papers (2024-04-21T12:16:38Z)
- LD-GAN: Low-Dimensional Generative Adversarial Network for Spectral Image Generation with Variance Regularization [72.4394510913927]
Deep learning methods are state-of-the-art for spectral image (SI) computational tasks.
GANs enable diverse augmentation by learning and sampling from the data distribution.
GAN-based SI generation is challenging because the high dimensionality of this kind of data hinders the convergence of GAN training, yielding suboptimal generation.
We propose a statistical regularization to control the low-dimensional representation variance for the autoencoder training and to achieve high diversity of samples generated with the GAN.
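The summary does not give LD-GAN's exact regularizer, but the idea of controlling the variance of the low-dimensional representation can be illustrated with a common squared-error form over per-dimension batch variance; the target value and the loss shape below are illustrative assumptions, not the paper's loss.

```python
import numpy as np

def variance_regularizer(latents, target_var=1.0):
    """Penalize per-dimension latent variance that drifts from a target.

    Keeping the autoencoder's low-dimensional representation spread out
    discourages collapsed codes; this squared-error form is one
    illustrative way to do it.
    """
    var = latents.var(axis=0)  # variance over the batch, per dimension
    return float(np.mean((var - target_var) ** 2))

rng = np.random.default_rng(1)
z = rng.normal(0.0, 1.0, size=(512, 16))     # well-spread latents
z_collapsed = np.full((512, 16), 0.3)        # collapsed latents, zero variance

loss_ok = variance_regularizer(z)
loss_bad = variance_regularizer(z_collapsed)
```

The collapsed batch is penalized far more heavily than the well-spread one, which is the behavior a diversity-preserving regularizer needs.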
arXiv Detail & Related papers (2023-04-29T00:25:02Z)
- Improving Adversarial Robustness by Contrastive Guided Diffusion Process [19.972628281993487]
We propose Contrastive-Guided Diffusion Process (Contrastive-DP) to guide the diffusion model in data generation.
We show that enhancing the distinguishability among the generated data is critical for improving adversarial robustness.
arXiv Detail & Related papers (2022-10-18T07:20:53Z)
- Improving GANs with A Dynamic Discriminator [106.54552336711997]
We argue that a discriminator with an on-the-fly adjustment on its capacity can better accommodate such a time-varying task.
A comprehensive empirical study confirms that the proposed training strategy, termed DynamicD, improves synthesis performance without incurring any additional cost or training objectives.
arXiv Detail & Related papers (2022-09-20T17:57:33Z)
- Augmentation-Aware Self-Supervision for Data-Efficient GAN Training [68.81471633374393]
Training generative adversarial networks (GANs) with limited data is challenging because the discriminator is prone to overfitting.
We propose a novel augmentation-aware self-supervised discriminator that predicts the augmentation parameter of the augmented data.
We compare our method with state-of-the-art (SOTA) methods using the class-conditional BigGAN and unconditional StyleGAN2 architectures.
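The augmentation-aware idea, a discriminator head that predicts which augmentation was applied, can be sketched with 90-degree rotations as the augmentation family; the choice of rotations as the parameter being predicted is an illustrative assumption, not necessarily the paper's augmentation set.

```python
import numpy as np

def augment_with_labels(images, rng):
    """Rotate each image by a random multiple of 90 degrees and return
    the rotation index as the self-supervision target an auxiliary
    discriminator head would be trained to predict."""
    k = rng.integers(0, 4, size=len(images))  # the augmentation parameter
    rotated = np.stack([np.rot90(img, int(ki)) for img, ki in zip(images, k)])
    return rotated, k

rng = np.random.default_rng(2)
batch = rng.normal(size=(16, 32, 32))  # stand-ins for images
aug, labels = augment_with_labels(batch, rng)
```

Predicting the parameter forces the discriminator to attend to image content rather than memorizing the limited real set, which is the overfitting mechanism the summary targets.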
arXiv Detail & Related papers (2022-05-31T10:35:55Z)
- CAFE: Learning to Condense Dataset by Aligning Features [72.99394941348757]
We propose a novel scheme to Condense dataset by Aligning FEatures (CAFE).
At the heart of our approach is an effective strategy to align features from the real and synthetic data across various scales.
We validate the proposed CAFE across various datasets, and demonstrate that it generally outperforms the state of the art.
arXiv Detail & Related papers (2022-03-03T05:58:49Z)
- Negative Data Augmentation [127.28042046152954]
We show that negative data augmentation samples provide information on the support of the data distribution.
We introduce a new GAN training objective where we use NDA as an additional source of synthetic data for the discriminator.
Empirically, models trained with our method achieve improved conditional/unconditional image generation along with improved anomaly detection capabilities.
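One transformation NDA uses to produce out-of-support samples is a jigsaw shuffle: permuting image patches keeps local texture but breaks global structure, and the result is fed to the discriminator as an extra "fake" source. A sketch of the negative-sample construction, with NumPy arrays standing in for images:

```python
import numpy as np

def jigsaw_negative(image, grid=2, rng=None):
    """Create an out-of-distribution negative by permuting image patches.

    The shuffled image lies off the support of the real data
    distribution, which is the information NDA hands the discriminator.
    """
    if rng is None:
        rng = np.random.default_rng()
    h, w = image.shape
    ph, pw = h // grid, w // grid
    patches = [image[i * ph:(i + 1) * ph, j * pw:(j + 1) * pw]
               for i in range(grid) for j in range(grid)]
    order = rng.permutation(len(patches))
    out = np.empty_like(image)
    for idx, src in enumerate(order):
        i, j = divmod(idx, grid)
        out[i * ph:(i + 1) * ph, j * pw:(j + 1) * pw] = patches[src]
    return out

rng = np.random.default_rng(3)
img = np.arange(16.0).reshape(4, 4)
neg = jigsaw_negative(img, grid=2, rng=rng)
```

The negative keeps exactly the original pixel values, only rearranged, so it is hard in pixel statistics yet clearly out of distribution in structure.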
arXiv Detail & Related papers (2021-02-09T20:28:35Z)
- HGAN: Hybrid Generative Adversarial Network [25.940501417539416]
We propose a hybrid generative adversarial network (HGAN) for which we can enforce data density estimation via an autoregressive model.
A novel deep architecture within the GAN formulation is developed to adversarially distill the autoregressive model information in addition to simple GAN training approach.
arXiv Detail & Related papers (2021-02-07T03:54:12Z)
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of this information and is not responsible for any consequences of its use.