On Data Augmentation for GAN Training
- URL: http://arxiv.org/abs/2006.05338v3
- Date: Thu, 31 Dec 2020 08:34:10 GMT
- Title: On Data Augmentation for GAN Training
- Authors: Ngoc-Trung Tran, Viet-Hung Tran, Ngoc-Bao Nguyen, Trung-Kien Nguyen,
Ngai-Man Cheung
- Abstract summary: We propose Data Augmentation Optimized for GAN (DAG) to enable the use of augmented data in GAN training.
We conduct experiments to apply DAG to different GAN models.
When DAG is used in some GAN models, the system establishes state-of-the-art Frechet Inception Distance (FID) scores.
- Score: 39.074761323958406
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent successes in Generative Adversarial Networks (GAN) have affirmed the
importance of using more data in GAN training. Yet it is expensive to collect
data in many domains such as medical applications. Data Augmentation (DA) has
been applied in these applications. In this work, we first argue that the
classical DA approach could mislead the generator to learn the distribution of
the augmented data, which could be different from that of the original data. We
then propose a principled framework, termed Data Augmentation Optimized for GAN
(DAG), to enable the use of augmented data in GAN training to improve the
learning of the original distribution. We provide theoretical analysis to show
that using our proposed DAG aligns with the original GAN in minimizing the
Jensen-Shannon (JS) divergence between the original distribution and model
distribution. Importantly, the proposed DAG effectively leverages the augmented
data to improve the learning of discriminator and generator. We conduct
experiments to apply DAG to different GAN models: unconditional GAN,
conditional GAN, self-supervised GAN and CycleGAN using datasets of natural
images and medical images. The results show that DAG achieves consistent and
considerable improvements across these models. Furthermore, when DAG is used in
some GAN models, the system establishes state-of-the-art Frechet Inception
Distance (FID) scores. Our code is available.
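The core idea of DAG, as described above, is to train on augmented data without letting the generator drift toward the augmented distribution: each augmentation gets its own discriminator branch, and the losses are averaged across branches. The following is a minimal numpy sketch of that branch-averaged discriminator objective; the specific augmentations, the toy fixed "discriminator heads", and the batch shapes are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical augmentations T_k (identity is the original GAN branch).
augmentations = [
    lambda x: x,            # identity
    lambda x: x[:, ::-1],   # reverse features (stand-in for a flip)
    lambda x: -x,           # sign flip (stand-in for another transform)
]

def discriminator_loss(d_real, d_fake):
    """Standard GAN discriminator loss (binary cross-entropy)."""
    eps = 1e-8
    return -(np.log(d_real + eps).mean() + np.log(1.0 - d_fake + eps).mean())

def dag_discriminator_loss(real_batch, fake_batch, heads):
    """Average per-branch losses over all augmentations.

    Branch k applies T_k to both real and fake samples and scores them with
    its own discriminator head D_k, so T_k(real) is never presented as a
    sample of the original (unaugmented) distribution.
    """
    losses = [
        discriminator_loss(D(T(real_batch)), D(T(fake_batch)))
        for T, D in zip(augmentations, heads)
    ]
    return float(np.mean(losses))

# Toy discriminator heads: fixed linear scorers with a sigmoid output.
def make_head(w):
    return lambda x: 1.0 / (1.0 + np.exp(-(x * w).sum(axis=1)))

heads = [make_head(w) for w in rng.normal(size=(3, 8))]
real = rng.normal(loc=1.0, size=(16, 8))
fake = rng.normal(loc=-1.0, size=(16, 8))

loss = dag_discriminator_loss(real, fake, heads)
print(loss)
```

In the actual framework the heads would share most of their weights and be trained jointly with the generator; the sketch only shows how the per-augmentation losses combine.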
Related papers
- Convolutional Learning on Directed Acyclic Graphs [10.282099295800322]
We develop a novel convolutional architecture tailored for learning from data defined over directed acyclic graphs (DAGs).
The network integrates learnable DAG filters that account for the partial ordering induced by the graph topology.
arXiv Detail & Related papers (2024-05-05T21:30:18Z)
- SMaRt: Improving GANs with Score Matching Regularity [94.81046452865583]
Generative adversarial networks (GANs) usually struggle in learning from highly diverse data, whose underlying manifold is complex.
We show that score matching serves as a promising solution to this issue thanks to its capability of persistently pushing the generated data points towards the real data manifold.
We propose to improve the optimization of GANs with score matching regularity (SMaRt)
arXiv Detail & Related papers (2023-11-30T03:05:14Z)
- Generative adversarial networks for data-scarce spectral applications [0.0]
We report on an application of GANs in the domain of synthetic spectral data generation.
We show that CWGANs can act as a surrogate model with improved performance in the low-data regime.
arXiv Detail & Related papers (2023-07-14T16:27:24Z)
- LD-GAN: Low-Dimensional Generative Adversarial Network for Spectral Image Generation with Variance Regularization [72.4394510913927]
Deep learning methods are state-of-the-art for spectral image (SI) computational tasks.
GANs enable diverse augmentation by learning and sampling from the data distribution.
GAN-based SI generation is challenging: the high dimensionality of this kind of data hinders the convergence of GAN training, yielding suboptimal generation.
We propose a statistical regularization to control the low-dimensional representation variance for the autoencoder training and to achieve high diversity of samples generated with the GAN.
arXiv Detail & Related papers (2023-04-29T00:25:02Z)
- DigGAN: Discriminator gradIent Gap Regularization for GAN Training with Limited Data [13.50061291734299]
We propose a Discriminator gradIent Gap regularized GAN (DigGAN) formulation which can be added to any existing GAN.
DigGAN augments existing GANs by encouraging a narrow gap between the norm of the gradient of the discriminator's prediction w.r.t. real images and w.r.t. generated samples.
We observe this formulation to avoid bad attractors within the GAN loss landscape, and we find DigGAN to significantly improve the results of GAN training when limited data is available.
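The gradient-gap idea in this summary can be sketched as a simple penalty term; a minimal numpy illustration follows, assuming the per-sample gradient norms have already been computed (in a real implementation they would come from autograd), with the function name and `weight` parameter being hypothetical:

```python
import numpy as np

def gradient_gap_penalty(grad_real_norms, grad_fake_norms, weight=1.0):
    """Penalize the squared gap between the expected gradient norm of the
    discriminator's prediction w.r.t. real images and w.r.t. generated
    samples, added to the usual GAN loss."""
    gap = grad_real_norms.mean() - grad_fake_norms.mean()
    return weight * gap ** 2

# Matched norms incur no penalty; a gap of 1 costs weight * 1.
print(gradient_gap_penalty(np.ones(4), np.ones(4)))          # 0.0
print(gradient_gap_penalty(np.full(4, 2.0), np.full(4, 1.0)))  # 1.0
```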
arXiv Detail & Related papers (2022-11-27T01:03:58Z)
- Augmentation-Aware Self-Supervision for Data-Efficient GAN Training [68.81471633374393]
Training generative adversarial networks (GANs) with limited data is challenging because the discriminator is prone to overfitting.
We propose a novel augmentation-aware self-supervised discriminator that predicts the augmentation parameter of the augmented data.
We compare our method with state-of-the-art (SOTA) methods using the class-conditional BigGAN and unconditional StyleGAN2 architectures.
arXiv Detail & Related papers (2022-05-31T10:35:55Z)
- GAN-based Data Augmentation for Chest X-ray Classification [0.0]
Generative Adversarial Networks (GANs) offer a novel method of synthetic data augmentation.
GAN-based augmentation leads to higher downstream performance for underrepresented classes.
This suggests that GAN-based augmentation is a promising area of research for improving network performance when data collection is prohibitively expensive.
arXiv Detail & Related papers (2021-07-07T01:36:48Z)
- Class Balancing GAN with a Classifier in the Loop [58.29090045399214]
We introduce a novel theoretically motivated Class Balancing regularizer for training GANs.
Our regularizer makes use of the knowledge from a pre-trained classifier to ensure balanced learning of all the classes in the dataset.
We demonstrate the utility of our regularizer in learning representations for long-tailed distributions via achieving better performance than existing approaches over multiple datasets.
arXiv Detail & Related papers (2021-06-17T11:41:30Z)
- Generative Data Augmentation for Commonsense Reasoning [75.26876609249197]
G-DAUGC is a novel generative data augmentation method that aims to achieve more accurate and robust learning in the low-resource setting.
G-DAUGC consistently outperforms existing data augmentation methods based on back-translation.
Our analysis demonstrates that G-DAUGC produces a diverse set of fluent training examples, and that its selection and training approaches are important for performance.
arXiv Detail & Related papers (2020-04-24T06:12:10Z)
- xAI-GAN: Enhancing Generative Adversarial Networks via Explainable AI Systems [16.360144499713524]
Generative Adversarial Networks (GANs) are a revolutionary class of Deep Neural Networks (DNNs) that have been successfully used to generate realistic images, music, text, and other data.
We propose a new class of GAN that leverages recent advances in explainable AI (xAI) systems to provide a "richer" form of corrective feedback from discriminators to generators.
We observe xAI-GANs provide an improvement of up to 23.18% in the quality of generated images on both MNIST and FMNIST datasets over standard GANs.
arXiv Detail & Related papers (2020-02-24T18:38:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.