Few-Shot Adaptation of Generative Adversarial Networks
- URL: http://arxiv.org/abs/2010.11943v1
- Date: Thu, 22 Oct 2020 17:59:29 GMT
- Title: Few-Shot Adaptation of Generative Adversarial Networks
- Authors: Esther Robb and Wen-Sheng Chu and Abhishek Kumar and Jia-Bin Huang
- Abstract summary: This paper proposes a simple and effective method, Few-Shot GAN (FSGAN), for adapting GANs in few-shot settings (fewer than 100 images).
FSGAN learns to adapt the singular values of the pre-trained weights while freezing the corresponding singular vectors.
We show that our method has significant visual quality gains compared with existing GAN adaptation methods.
- Score: 54.014885321880755
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative Adversarial Networks (GANs) have shown remarkable performance in
image synthesis tasks, but typically require a large number of training samples
to achieve high-quality synthesis. This paper proposes a simple and effective
method, Few-Shot GAN (FSGAN), for adapting GANs in few-shot settings (less than
100 images). FSGAN repurposes component analysis techniques and learns to adapt
the singular values of the pre-trained weights while freezing the corresponding
singular vectors. This provides a highly expressive parameter space for
adaptation while constraining changes to the pretrained weights. We validate
our method in a challenging few-shot setting of 5-100 images in the target
domain. We show that our method has significant visual quality gains compared
with existing GAN adaptation methods. We report qualitative and quantitative
results showing the effectiveness of our method. We additionally highlight a
problem for few-shot synthesis in the standard quantitative metric used by
data-efficient image synthesis works. Code and additional results are available
at http://e-271.github.io/few-shot-gan.
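The core idea in the abstract, adapting only the singular values of pre-trained weights while freezing the singular vectors, can be illustrated with a minimal NumPy sketch. This is an assumed, simplified illustration of the technique, not the authors' implementation; the weight matrix and update here are synthetic stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pre-trained weight matrix of one generator layer.
W = rng.standard_normal((8, 6))

# Decompose once: W = U @ diag(s) @ Vt.
U, s, Vt = np.linalg.svd(W, full_matrices=False)

# Few-shot adaptation: only the singular values s are trainable;
# U and Vt stay frozen. Here a gradient step is simulated by a
# small multiplicative perturbation of s.
s_adapted = s * (1.0 + 0.05 * rng.standard_normal(s.shape))
W_adapted = U @ np.diag(s_adapted) @ Vt

# With unchanged singular values, the pre-trained weight is
# recovered exactly, so adaptation starts from the source model.
assert np.allclose(U @ np.diag(s) @ Vt, W)
```

This constrains the adapted layer to the span of the pre-trained singular vectors, which is how the method limits changes to the pre-trained weights while keeping an expressive parameter space.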
Related papers
- E$^{2}$GAN: Efficient Training of Efficient GANs for Image-to-Image Translation [69.72194342962615]
We introduce and address a novel research direction: can the process of distilling GANs from diffusion models be made significantly more efficient?
First, we construct a base GAN model with generalized features, adaptable to different concepts through fine-tuning, eliminating the need for training from scratch.
Second, we identify crucial layers within the base GAN model and employ Low-Rank Adaptation (LoRA) with a simple yet effective rank search process, rather than fine-tuning the entire base model.
Third, we investigate the minimal amount of data necessary for fine-tuning, further reducing the overall training time.
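The Low-Rank Adaptation (LoRA) step described above can be sketched as follows. This is a generic, assumed illustration of LoRA applied to a single frozen layer; the dimensions, rank, and layer choice are placeholders, not details from the E$^{2}$GAN paper.

```python
import numpy as np

rng = np.random.default_rng(1)

d_out, d_in, r = 16, 12, 4  # r is the assumed LoRA rank

# Frozen pre-trained weight of one "crucial" layer.
W0 = rng.standard_normal((d_out, d_in))

# Trainable low-rank factors. B starts at zero so the adapted
# layer is initially identical to the pre-trained one.
A = rng.standard_normal((r, d_in)) * 0.01
B = np.zeros((d_out, r))

def forward(x):
    # Effective weight is W0 + B @ A; during fine-tuning only
    # A and B receive gradients, so r*(d_in + d_out) parameters
    # are updated instead of d_out*d_in.
    return (W0 + B @ A) @ x

x = rng.standard_normal(d_in)
# With B = 0, the output matches the frozen model exactly.
assert np.allclose(forward(x), W0 @ x)
```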
arXiv Detail & Related papers (2024-01-11T18:59:14Z)
- Diversified in-domain synthesis with efficient fine-tuning for few-shot classification [64.86872227580866]
Few-shot image classification aims to learn an image classifier using only a small set of labeled examples per class.
We propose DISEF, a novel approach which addresses the generalization challenge in few-shot learning using synthetic data.
We validate our method in ten different benchmarks, consistently outperforming baselines and establishing a new state-of-the-art for few-shot classification.
arXiv Detail & Related papers (2023-12-05T17:18:09Z)
- Improving Diversity in Zero-Shot GAN Adaptation with Semantic Variations [61.132408427908175]
Zero-shot GAN adaptation aims to reuse well-trained generators to synthesize images of an unseen target domain.
When adapted with only a single representative text feature instead of real images, the synthesized images gradually lose diversity.
We propose a novel method to find semantic variations of the target text in the CLIP space.
arXiv Detail & Related papers (2023-08-21T08:12:28Z)
- Improving GAN Training via Feature Space Shrinkage [69.98365478398593]
We propose AdaptiveMix, which shrinks regions of training data in the image representation space of the discriminator.
Since directly bounding the feature space is intractable, we propose to construct hard samples and narrow the feature distance between hard and easy samples.
The evaluation results demonstrate that our AdaptiveMix can facilitate the training of GANs and effectively improve the image quality of generated samples.
arXiv Detail & Related papers (2023-03-02T20:22:24Z)
- ScoreMix: A Scalable Augmentation Strategy for Training GANs with Limited Data [93.06336507035486]
Generative Adversarial Networks (GANs) typically suffer from overfitting when limited training data is available.
We present ScoreMix, a novel and scalable data augmentation approach for various image synthesis tasks.
arXiv Detail & Related papers (2022-10-27T02:55:15Z)
- InfoMax-GAN: Improved Adversarial Image Generation via Information Maximization and Contrastive Learning [39.316605441868944]
Generative Adversarial Networks (GANs) are fundamental to many generative modelling applications.
We propose a principled framework to simultaneously mitigate two fundamental issues in GANs: catastrophic forgetting of the discriminator and mode collapse of the generator.
Our approach significantly stabilizes GAN training and improves GAN performance for image synthesis across five datasets.
arXiv Detail & Related papers (2020-07-09T06:56:11Z)
- Training End-to-end Single Image Generators without GANs [27.393821783237186]
AugurOne is a novel approach for training single image generative models.
Our approach trains an upscaling neural network using non-affine augmentations of the (single) input image.
A compact latent space is jointly learned allowing for controlled image synthesis.
arXiv Detail & Related papers (2020-04-07T17:58:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.