Image Augmentations for GAN Training
- URL: http://arxiv.org/abs/2006.02595v1
- Date: Thu, 4 Jun 2020 00:16:02 GMT
- Title: Image Augmentations for GAN Training
- Authors: Zhengli Zhao, Zizhao Zhang, Ting Chen, Sameer Singh, Han Zhang
- Abstract summary: We provide insights and guidelines on how to augment images for both vanilla GANs and GANs with regularizations.
Surprisingly, we find that vanilla GANs attain generation quality on par with recent state-of-the-art results when augmentations are applied to both real and generated images.
- Score: 57.65145659417266
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Data augmentations have been widely studied to improve the accuracy and
robustness of classifiers. However, the potential of image augmentation in
improving GAN models for image synthesis has not been thoroughly investigated
in previous studies. In this work, we systematically study the effectiveness of
various existing augmentation techniques for GAN training in a variety of
settings. We provide insights and guidelines on how to augment images for both
vanilla GANs and GANs with regularizations, improving the fidelity of the
generated images substantially. Surprisingly, we find that vanilla GANs attain
generation quality on par with recent state-of-the-art results if we use
augmentations on both real and generated images. When this GAN training is
combined with other augmentation-based regularization techniques, such as
contrastive loss and consistency regularization, the augmentations further
improve the quality of generated images. We provide new state-of-the-art
results for conditional generation on CIFAR-10 with both consistency loss and
contrastive loss as additional regularizations.
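The recipe described in the abstract is simple to state: apply the same kind of stochastic image augmentation to both the real and the generated images that the discriminator sees. Below is a minimal sketch of one discriminator update under this scheme; the specific augmentation (reflection-padded random shift plus horizontal flip) and the hinge loss are illustrative assumptions, not necessarily the exact configuration used in the paper.

```python
# Minimal sketch: augment BOTH real and generated images before the
# discriminator, as the abstract suggests. The augmentation choice
# (random shift + horizontal flip) and the hinge loss are assumptions
# for illustration, not the paper's exact setup.
import torch
import torch.nn.functional as F

def augment(x, pad=4):
    # Random shift via reflection pad-and-crop, then a random horizontal flip.
    _, _, h, w = x.shape
    x = F.pad(x, (pad, pad, pad, pad), mode='reflect')
    top = torch.randint(0, 2 * pad + 1, (1,)).item()
    left = torch.randint(0, 2 * pad + 1, (1,)).item()
    x = x[:, :, top:top + h, left:left + w]
    if torch.rand(1).item() < 0.5:
        x = torch.flip(x, dims=[3])
    return x

def discriminator_step(D, G, d_opt, real, z):
    # One discriminator update; note that real AND fake batches are augmented.
    with torch.no_grad():
        fake = G(z)
    d_real = D(augment(real))
    d_fake = D(augment(fake))
    loss = F.relu(1.0 - d_real).mean() + F.relu(1.0 + d_fake).mean()  # hinge loss
    d_opt.zero_grad()
    loss.backward()
    d_opt.step()
    return loss.item()
```

A generator step could be written analogously; the detail to take from the abstract is that the augmentation is applied to both real and generated images, rather than to the real images alone.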
Related papers
- Enhancing Low Dose Computed Tomography Images Using Consistency Training Techniques [7.694256285730863]
In this paper, we introduce the beta noise distribution, which provides flexibility in adjusting noise levels.
High Noise Improved Consistency Training (HN-iCT) is trained in a supervised fashion.
Our results indicate that unconditional image generation using HN-iCT significantly outperforms basic CT and iCT training techniques with NFE=1.
arXiv Detail & Related papers (2024-11-19T02:48:36Z)
- Ultrasound Image Enhancement using CycleGAN and Perceptual Loss [4.428854369140015]
This work introduces an advanced framework designed to enhance ultrasound images, especially those captured by portable hand-held devices.
We utilize an enhanced generative adversarial network (CycleGAN) model for ultrasound image enhancement across five organ systems.
arXiv Detail & Related papers (2023-12-18T23:21:00Z)
- DifAugGAN: A Practical Diffusion-style Data Augmentation for GAN-based Single Image Super-resolution [88.13972071356422]
We propose a diffusion-style data augmentation scheme for GAN-based image super-resolution (SR) methods, known as DifAugGAN.
It involves adapting the diffusion process in generative diffusion models for improving the calibration of the discriminator during training.
Our DifAugGAN can be a Plug-and-Play strategy for current GAN-based SISR methods to improve the calibration of the discriminator and thus improve SR performance.
arXiv Detail & Related papers (2023-11-30T12:37:53Z)
- Performance of GAN-based augmentation for deep learning COVID-19 image classification [57.1795052451257]
The biggest challenge in the application of deep learning to the medical domain is the availability of training data.
Data augmentation is a typical methodology used in machine learning when confronted with a limited data set.
In this work, a StyleGAN2-ADA model of Generative Adversarial Networks is trained on the limited COVID-19 chest X-ray image set.
arXiv Detail & Related papers (2023-04-18T15:39:58Z)
- Bridging Synthetic and Real Images: a Transferable and Multiple Consistency aided Fundus Image Enhancement Framework [61.74188977009786]
We propose an end-to-end optimized teacher-student framework to simultaneously conduct image enhancement and domain adaptation.
We also propose a novel multi-stage multi-attention guided enhancement network (MAGE-Net) as the backbones of our teacher and student network.
arXiv Detail & Related papers (2023-02-23T06:16:15Z)
- Dynamic Test-Time Augmentation via Differentiable Functions [3.686808512438363]
DynTTA is an image enhancement method that generates recognition-friendly images without retraining the recognition model.
DynTTA is based on differentiable data augmentation techniques and generates a blended image from many augmented images to improve the recognition accuracy under distribution shifts.
arXiv Detail & Related papers (2022-12-09T06:06:47Z)
- Evolving GAN Formulations for Higher Quality Image Synthesis [15.861807854144228]
Generative Adversarial Networks (GANs) have extended deep learning to complex generation and translation tasks.
GANs are notoriously difficult to train: Mode collapse and other instabilities in the training process often degrade the quality of the generated results.
This paper presents a new technique called TaylorGAN for improving GANs by discovering customized loss functions for each of its two networks.
arXiv Detail & Related papers (2021-02-17T05:11:21Z)
- Generative Data Augmentation for Commonsense Reasoning [75.26876609249197]
G-DAUGC is a novel generative data augmentation method that aims to achieve more accurate and robust learning in the low-resource setting.
G-DAUGC consistently outperforms existing data augmentation methods based on back-translation.
Our analysis demonstrates that G-DAUGC produces a diverse set of fluent training examples, and that its selection and training approaches are important for performance.
arXiv Detail & Related papers (2020-04-24T06:12:10Z)
- Improved Consistency Regularization for GANs [102.17007700413326]
We propose several modifications to the consistency regularization procedure designed to improve its performance.
For unconditional image synthesis on CIFAR-10 and CelebA, our modifications yield the best known FID scores on various GAN architectures.
On ImageNet-2012, we apply our technique to the original BigGAN model and improve the FID from 6.66 to 5.38, which is the best score at that model size.
arXiv Detail & Related papers (2020-02-11T22:53:21Z)
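Both the abstract above and this last entry lean on consistency regularization, which penalizes the discriminator for changing its output when the input image is augmented. A minimal sketch of such a penalty, assuming a plain squared-difference form and a hand-picked weight (the exact regularizers in these papers may differ), is:

```python
import torch

def consistency_penalty(D, x, augment, weight=10.0):
    # Penalize differences between the discriminator's outputs on an image
    # and on an augmented copy of it. The squared-difference form and the
    # weight are illustrative assumptions, not the papers' exact choices.
    d_clean = D(x)
    d_aug = D(augment(x))
    return weight * ((d_clean - d_aug) ** 2).mean()
```

In practice this term would simply be added to the adversarial discriminator loss of a step like the one sketched earlier.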
This list is automatically generated from the titles and abstracts of the papers on this site.