On Leveraging Pretrained GANs for Generation with Limited Data
- URL: http://arxiv.org/abs/2002.11810v3
- Date: Sat, 8 Aug 2020 03:59:02 GMT
- Title: On Leveraging Pretrained GANs for Generation with Limited Data
- Authors: Miaoyun Zhao, Yulai Cong, Lawrence Carin
- Abstract summary: generative adversarial networks (GANs) can generate highly realistic images, that are often indistinguishable (by humans) from real images.
Most images so generated are not contained in a training dataset, suggesting potential for augmenting training sets with GAN-generated data.
We leverage existing GAN models pretrained on large-scale datasets to introduce additional knowledge, following the concept of transfer learning.
An extensive set of experiments is presented to demonstrate the effectiveness of the proposed techniques on generation with limited data.
- Score: 83.32972353800633
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent work has shown generative adversarial networks (GANs) can generate
highly realistic images that are often indistinguishable (by humans) from real
images. Most images so generated are not contained in the training dataset,
suggesting potential for augmenting training sets with GAN-generated data.
While this scenario is of particular relevance when there are limited data
available, there is still the issue of training the GAN itself based on that
limited data. To facilitate this, we leverage existing GAN models pretrained on
large-scale datasets (like ImageNet) to introduce additional knowledge (which
may not exist within the limited data), following the concept of transfer
learning. Demonstrated by natural-image generation, we reveal that low-level
filters (those close to observations) of both the generator and discriminator
of pretrained GANs can be transferred to facilitate generation in a
perceptually-distinct target domain with limited training data. To further
adapt the transferred filters to the target domain, we propose adaptive filter
modulation (AdaFM). An extensive set of experiments is presented to demonstrate
the effectiveness of the proposed techniques on generation with limited data.
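To make the AdaFM idea concrete, below is a minimal PyTorch-style sketch (module and parameter names are ours; an illustration, not the authors' released code). The transferred low-level convolutional filter is kept frozen, and learnable per-channel scale (gamma) and shift (beta) parameters modulate it toward the target domain:
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaFMConv2d(nn.Module):
    """Conv layer whose pretrained kernel is frozen and modulated by
    learnable per-(out, in)-channel scale and shift (AdaFM sketch)."""

    def __init__(self, pretrained_conv: nn.Conv2d):
        super().__init__()
        # Freeze the transferred low-level filter.
        self.weight = nn.Parameter(pretrained_conv.weight.detach(), requires_grad=False)
        bias = pretrained_conv.bias
        self.bias = nn.Parameter(bias.detach(), requires_grad=False) if bias is not None else None
        self.stride = pretrained_conv.stride
        self.padding = pretrained_conv.padding
        c_out, c_in = self.weight.shape[:2]
        # Learnable modulation parameters, broadcast over the kernel's spatial dims.
        self.gamma = nn.Parameter(torch.ones(c_out, c_in, 1, 1))
        self.beta = nn.Parameter(torch.zeros(c_out, c_in, 1, 1))

    def forward(self, x):
        w = self.gamma * self.weight + self.beta  # W' = gamma * W + beta
        return F.conv2d(x, w, self.bias, stride=self.stride, padding=self.padding)
```
In a transfer setup, one would wrap the low-level convolutions of the pretrained generator (and discriminator) with such a module and train only the modulation parameters, together with the higher-level layers, on the limited target data.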
Related papers
- Generative adversarial networks for data-scarce spectral applications [0.0]
We report on an application of GANs in the domain of synthetic spectral data generation.
We show that CWGANs can act as a surrogate model with improved performance in the low-data regime.
arXiv Detail & Related papers (2023-07-14T16:27:24Z)
- LD-GAN: Low-Dimensional Generative Adversarial Network for Spectral Image Generation with Variance Regularization [72.4394510913927]
Deep learning methods are state-of-the-art for spectral image (SI) computational tasks.
GANs enable diverse augmentation by learning and sampling from the data distribution.
GAN-based SI generation is challenging, since the high-dimensional nature of this kind of data hinders the convergence of GAN training, yielding suboptimal generation.
We propose a statistical regularization to control the low-dimensional representation variance for the autoencoder training and to achieve high diversity of samples generated with the GAN.
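As a rough illustration of such a statistical regularization (the exact loss in the paper may differ; the target value is an assumption), one could penalize per-dimension latent variance that drifts from a target during autoencoder training:
```python
import torch

def variance_regularizer(latents: torch.Tensor, target_std: float = 1.0) -> torch.Tensor:
    """Penalize per-dimension latent std deviating from a target value.

    latents: (batch, dim) encoder outputs. Hypothetical form of the
    regularizer, sketched from the abstract's description.
    """
    std = latents.std(dim=0)                 # per-dimension std over the batch
    return ((std - target_std) ** 2).mean()  # keep representation variance controlled

# total_loss = reconstruction_loss + lam * variance_regularizer(z)
```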
arXiv Detail & Related papers (2023-04-29T00:25:02Z)
- MineGAN++: Mining Generative Models for Efficient Knowledge Transfer to Limited Data Domains [77.46963293257912]
We propose a novel knowledge transfer method for generative models based on mining the knowledge that is most beneficial to a specific target domain.
This is done using a miner network that identifies which part of the generative distribution of each pretrained GAN outputs samples closest to the target domain.
We show that the proposed method, called MineGAN, effectively transfers knowledge to domains with few target images, outperforming existing methods.
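A minimal sketch of the mining idea (architecture and sizes are assumptions, not the paper's): a small miner network transforms the input noise fed to a frozen pretrained generator so that its samples move toward the target domain.
```python
import torch
import torch.nn as nn

class Miner(nn.Module):
    """Small MLP that reshapes input noise before a frozen pretrained
    generator, steering its output distribution toward the target domain."""

    def __init__(self, z_dim: int = 128, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, z_dim),
        )

    def forward(self, z):
        return self.net(z)

# Training outline (pretrained_G frozen; miner and discriminator are
# trained adversarially on the few target images):
#   z = torch.randn(batch_size, 128)
#   fake = pretrained_G(miner(z))
```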
arXiv Detail & Related papers (2021-04-28T13:10:56Z)
- Regularizing Generative Adversarial Networks under Limited Data [88.57330330305535]
This work proposes a regularization approach for training robust GAN models on limited data.
We show a connection between the regularized loss and an f-divergence called LeCam-divergence, which we find is more robust under limited training data.
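A hedged sketch of a regularizer of this form (variable names and the decay constant are assumptions based on the abstract's description): the discriminator's predictions on real and generated images are each anchored to an exponential moving average of the other.
```python
import torch

class LeCamRegularizer:
    """Anchor D's real predictions to an EMA of its fake predictions,
    and vice versa. Sketch of the regularization idea, not the paper's code."""

    def __init__(self, decay: float = 0.99):
        self.decay = decay
        self.ema_real = 0.0
        self.ema_fake = 0.0

    def __call__(self, d_real: torch.Tensor, d_fake: torch.Tensor) -> torch.Tensor:
        # Update exponential moving averages of the discriminator outputs.
        self.ema_real = self.decay * self.ema_real + (1 - self.decay) * d_real.mean().item()
        self.ema_fake = self.decay * self.ema_fake + (1 - self.decay) * d_fake.mean().item()
        # R = E[(D(x) - ema_fake)^2] + E[(D(G(z)) - ema_real)^2]
        return ((d_real - self.ema_fake) ** 2).mean() + ((d_fake - self.ema_real) ** 2).mean()

# d_loss = adversarial_loss + lam * lecam(d_real_logits, d_fake_logits)
```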
arXiv Detail & Related papers (2021-04-07T17:59:06Z)
- Negative Data Augmentation [127.28042046152954]
We show that negative data augmentation samples provide information on the support of the data distribution.
We introduce a new GAN training objective where we use NDA as an additional source of synthetic data for the discriminator.
Empirically, models trained with our method achieve improved conditional/unconditional image generation along with improved anomaly detection capabilities.
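A minimal sketch of this objective (helper names are ours): NDA samples, e.g. jigsaw-shuffled real images, are scored as fake alongside generator output in the discriminator loss.
```python
import torch
import torch.nn.functional as F

def d_loss_with_nda(D, real, fake, nda_fake):
    """Discriminator loss treating negative-data-augmentation samples
    as extra fakes alongside generator output. A sketch of the idea,
    not the paper's implementation."""
    logits_real = D(real)
    logits_fake = D(torch.cat([fake, nda_fake], dim=0))
    loss_real = F.binary_cross_entropy_with_logits(logits_real, torch.ones_like(logits_real))
    loss_fake = F.binary_cross_entropy_with_logits(logits_fake, torch.zeros_like(logits_fake))
    return loss_real + loss_fake
```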
arXiv Detail & Related papers (2021-02-09T20:28:35Z)
- Data Instance Prior for Transfer Learning in GANs [25.062518859107946]
We propose a novel transfer learning method for GANs in the limited data domain.
We show that the proposed method effectively transfers knowledge to domains with few target images.
We also show the utility of the data instance prior in large-scale unconditional image generation and image editing tasks.
arXiv Detail & Related papers (2020-12-08T07:40:30Z)
- Adversarially-Trained Deep Nets Transfer Better: Illustration on Image Classification [53.735029033681435]
Transfer learning is a powerful methodology for adapting deep neural networks pre-trained on image recognition tasks to new domains.
In this work, we demonstrate that adversarially-trained models transfer better than non-adversarially-trained models.
arXiv Detail & Related papers (2020-07-11T22:48:42Z)