Data Instance Prior for Transfer Learning in GANs
- URL: http://arxiv.org/abs/2012.04256v1
- Date: Tue, 8 Dec 2020 07:40:30 GMT
- Title: Data Instance Prior for Transfer Learning in GANs
- Authors: Puneet Mangla, Nupur Kumari, Mayank Singh, Vineeth N Balasubramanian,
Balaji Krishnamurthy
- Abstract summary: We propose a novel transfer learning method for GANs in the limited data domain.
We show that the proposed method effectively transfers knowledge to domains with few target images.
We also show the utility of the data instance prior in large-scale unconditional image generation and image editing tasks.
- Score: 25.062518859107946
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advances in generative adversarial networks (GANs) have shown
remarkable progress in generating high-quality images. However, this gain in
performance depends on the availability of a large amount of training data. In
limited data regimes, training typically diverges, and therefore the generated
samples are of low quality and lack diversity. Previous works have addressed
training in low-data settings by leveraging transfer learning and data
augmentation techniques. We propose a novel transfer learning method for GANs
in the limited-data domain by leveraging an informative data prior derived from
self-supervised/supervised pre-trained networks trained on a diverse source
domain. We perform experiments on several standard vision datasets using
various GAN architectures (BigGAN, SNGAN, StyleGAN2) to demonstrate that the
proposed method effectively transfers knowledge to domains with few target
images, outperforming existing state-of-the-art techniques in terms of image
quality and diversity. We also show the utility of the data instance prior in
large-scale unconditional image generation and image editing tasks.
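The abstract does not spell out how the data instance prior is constructed, so the following is a minimal, hypothetical sketch of one plausible instantiation: each target image is embedded with a network pre-trained on a diverse source domain, and the embedding is projected into the GAN's latent space to serve as an instance-specific prior during fine-tuning. All names here (`instance_prior`, `project`, `LATENT_DIM`) are illustrative, not the authors' implementation.

```python
# Hypothetical sketch: deriving per-instance latent priors from a pretrained
# encoder for GAN transfer. Not the paper's actual implementation.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

LATENT_DIM = 128  # assumed latent size of the target GAN

# Pretrained source-domain encoder (supervised here; could be self-supervised).
encoder = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
encoder.fc = nn.Identity()  # keep the 2048-d penultimate features
encoder.eval()

# Projection from encoder features to the GAN's latent space
# (could be learned jointly with the generator, or fixed, e.g. via PCA).
project = nn.Linear(2048, LATENT_DIM)

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def instance_prior(image_paths):
    """Map each target-domain image to a latent prior vector."""
    batch = torch.stack([preprocess(Image.open(p).convert("RGB"))
                         for p in image_paths])
    feats = encoder(batch)      # (N, 2048) source-domain features
    return project(feats)       # (N, LATENT_DIM) instance priors

# During fine-tuning, one would sample around each instance prior instead of
# pure noise, e.g.: z = priors[i] + sigma * torch.randn(LATENT_DIM)
```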
Related papers
- Rethinking Transformers Pre-training for Multi-Spectral Satellite
Imagery [78.43828998065071]
Recent advances in unsupervised learning have demonstrated the ability of large vision models to achieve promising results on downstream tasks.
Such pre-training techniques have also been explored recently in the remote sensing domain due to the availability of large amounts of unlabelled data.
In this paper, we revisit transformer pre-training and leverage multi-scale information that is effectively utilized across multiple modalities.
arXiv Detail & Related papers (2024-03-08T16:18:04Z)
- Training on Thin Air: Improve Image Classification with Generated Data [28.96941414724037]
Diffusion Inversion is a simple yet effective method to generate diverse, high-quality training data for image classification.
Our approach captures the original data distribution and ensures data coverage by inverting images to the latent space of Stable Diffusion.
We identify three key components that allow our generated images to successfully supplant the original dataset.
arXiv Detail & Related papers (2023-05-24T16:33:02Z)
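As a rough illustration of the inversion idea (not the paper's exact Stable Diffusion procedure), one can recover a latent code for a given image by gradient descent on a frozen generator's reconstruction error; `invert` and its parameters below are assumptions for illustration.

```python
# Generic latent-inversion sketch (a simplified stand-in, not the paper's
# exact procedure): find a latent code z such that G(z) reconstructs a given
# image, by gradient descent through a frozen generator G.
import torch
import torch.nn.functional as F

def invert(G, image, latent_dim=128, steps=500, lr=0.05):
    """Optimize a latent code so the frozen generator G reproduces `image`."""
    z = torch.randn(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.mse_loss(G(z), image)  # pixel reconstruction loss
        loss.backward()
        opt.step()
    return z.detach()

# New training images can then be sampled by perturbing inverted codes:
# x_new = G(z_inv + 0.1 * torch.randn_like(z_inv))
```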
- Unsupervised Domain Transfer with Conditional Invertible Neural Networks [83.90291882730925]
We propose a domain transfer approach based on conditional invertible neural networks (cINNs).
Our method inherently guarantees cycle consistency through its invertible architecture, and network training can efficiently be conducted with maximum likelihood.
Our method enables the generation of realistic spectral data and outperforms the state of the art on two downstream classification tasks.
arXiv Detail & Related papers (2023-03-17T18:00:27Z)
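A minimal sketch of the maximum-likelihood objective that such invertible networks admit via the change-of-variables formula; it assumes a hypothetical `flow(x, condition)` that returns the latent code and the log-determinant of the Jacobian.

```python
# Maximum-likelihood training sketch for an invertible network, assuming
# flow(x, condition) returns (z, log_det_jacobian). The exact cINN
# architecture is not specified here.
import math
import torch

def nll_loss(z, log_det_jac):
    """Negative log-likelihood under a standard-normal latent prior:
    -log p(x) = 0.5*||z||^2 + 0.5*D*log(2*pi) - log|det df/dx|."""
    d = z.shape[1]
    log_pz = -0.5 * (z ** 2).sum(dim=1) - 0.5 * d * math.log(2 * math.pi)
    return -(log_pz + log_det_jac).mean()

# Cycle consistency comes for free: flow.inverse(z, condition) recovers x
# exactly, because the mapping is bijective by construction.
```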
- Effective Data Augmentation With Diffusion Models [65.09758931804478]
We address the lack of diversity in data augmentation with image-to-image transformations parameterized by pre-trained text-to-image diffusion models.
Our method edits images to change their semantics using an off-the-shelf diffusion model, and generalizes to novel visual concepts from a few labelled examples.
We evaluate our approach on few-shot image classification tasks and on a real-world weed recognition task, and observe improved accuracy across the tested domains.
arXiv Detail & Related papers (2023-02-07T20:42:28Z)
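A hedged stand-in for this kind of augmentation using the Hugging Face `diffusers` library; the model id, prompt, and `strength` value are illustrative choices, not necessarily the paper's setup.

```python
# Diffusion-based augmentation via image-to-image editing with an
# off-the-shelf pipeline (illustrative; the paper's setup may differ).
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

def augment(image_path, prompt, strength=0.4):
    """Semantically edit a labelled image while keeping its class identity.
    Low `strength` preserves structure; higher values change more semantics."""
    init = Image.open(image_path).convert("RGB").resize((512, 512))
    return pipe(prompt=prompt, image=init, strength=strength,
                guidance_scale=7.5).images[0]

# e.g. augment("weed_01.jpg", "a photo of a weed in a sunny field")
```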
- D3T-GAN: Data-Dependent Domain Transfer GANs for Few-shot Image Generation [17.20913584422917]
Few-shot image generation aims to generate realistic images by training a GAN model on only a few samples.
A typical solution for few-shot generation is to transfer a well-trained GAN model from a data-rich source domain to the data-deficient target domain.
We propose a novel self-supervised transfer scheme termed D3T-GAN, addressing cross-domain GAN transfer in few-shot image generation.
arXiv Detail & Related papers (2022-05-12T11:32:39Z)
- Regularizing Generative Adversarial Networks under Limited Data [88.57330330305535]
This work proposes a regularization approach for training robust GAN models on limited data.
We show a connection between the regularized loss and an f-divergence called LeCam-divergence, which we find is more robust under limited training data.
arXiv Detail & Related papers (2021-04-07T17:59:06Z)
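A sketch of the LeCam regularizer as commonly described for this paper: the discriminator's outputs on real and fake batches are anchored to exponential moving averages of its predictions on the opposite sample type. The decay value is an assumption, and the details are a best-effort reading rather than the authors' reference code.

```python
# LeCam regularization sketch (after Tseng et al., 2021).
import torch

class LeCamRegularizer:
    def __init__(self, decay=0.99):
        self.decay = decay
        self.ema_real = 0.0  # alpha_R: EMA of D(x) on real images
        self.ema_fake = 0.0  # alpha_F: EMA of D(G(z)) on generated images

    def __call__(self, d_real, d_fake):
        # Update the moving anchors (no gradient flows through the EMAs).
        self.ema_real = self.decay * self.ema_real + (1 - self.decay) * d_real.mean().item()
        self.ema_fake = self.decay * self.ema_fake + (1 - self.decay) * d_fake.mean().item()
        # R_LC = E[(D(x) - alpha_F)^2] + E[(D(G(z)) - alpha_R)^2]
        return ((d_real - self.ema_fake) ** 2).mean() + \
               ((d_fake - self.ema_real) ** 2).mean()

# Added to the discriminator loss with a small weight, e.g.:
# d_loss = gan_loss + lambda_lecam * lecam(d_real_logits, d_fake_logits)
```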
- Multiclass non-Adversarial Image Synthesis, with Application to Classification from Very Small Sample [6.243995448840211]
We present a novel non-adversarial generative method - Clustered Optimization of LAtent space (COLA).
In the full data regime, our method is capable of generating diverse multi-class images with no supervision.
In the small-data regime, where only a small sample of labeled images is available for training with no access to additional unlabeled data, our results surpass state-of-the-art GAN models trained on the same amount of data.
arXiv Detail & Related papers (2020-11-25T18:47:27Z)
- Adversarially-Trained Deep Nets Transfer Better: Illustration on Image Classification [53.735029033681435]
Transfer learning is a powerful methodology for adapting pre-trained deep neural networks on image recognition tasks to new domains.
In this work, we demonstrate that adversarially-trained models transfer better than non-adversarially-trained models.
arXiv Detail & Related papers (2020-07-11T22:48:42Z)
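A minimal sketch of the recipe this abstract implies: adversarially train on the source task (single-step FGSM here for brevity; stronger attacks such as PGD are also common), then transfer by swapping the classification head and fine-tuning on the target data.

```python
# Adversarial training followed by transfer, sketched with FGSM.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

def fgsm_example(model, x, y, eps=4 / 255):
    """Single-step adversarial perturbation of a batch."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()  # (sketch: also populates model grads; zero them in a real loop)
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

model = models.resnet18(num_classes=1000)
# ... source training loop: train on fgsm_example(model, x, y) batches ...

# Transfer: swap the head for the target task and fine-tune.
model.fc = nn.Linear(model.fc.in_features, 10)  # e.g. 10 target classes
```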
- On Leveraging Pretrained GANs for Generation with Limited Data [83.32972353800633]
Generative adversarial networks (GANs) can generate highly realistic images that are often indistinguishable (by humans) from real images.
Most images so generated are not contained in a training dataset, suggesting potential for augmenting training sets with GAN-generated data.
We leverage existing GAN models pretrained on large-scale datasets to introduce additional knowledge, following the concept of transfer learning.
An extensive set of experiments is presented to demonstrate the effectiveness of the proposed techniques on generation with limited data.
arXiv Detail & Related papers (2020-02-26T21:53:36Z)
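One common transfer pattern consistent with this abstract, sketched under the assumption that only late-stage generator layers are fine-tuned on the small target set; the layer-name prefixes are hypothetical and the paper's exact recipe may differ.

```python
# Fine-tuning a pretrained generator on limited data by freezing most of it.
import torch
import torch.nn as nn

def prepare_for_transfer(generator: nn.Module,
                         trainable_prefixes=("block4", "to_rgb")):
    """Freeze all generator parameters except the named late-stage layers."""
    for name, param in generator.named_parameters():
        param.requires_grad = any(name.startswith(p) for p in trainable_prefixes)
    return generator

# Optimize only the unfrozen parameters during target-domain fine-tuning:
# opt = torch.optim.Adam(
#     [p for p in generator.parameters() if p.requires_grad], lr=2e-4)
```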