One-Shot Generative Domain Adaptation
- URL: http://arxiv.org/abs/2111.09876v1
- Date: Thu, 18 Nov 2021 18:55:08 GMT
- Title: One-Shot Generative Domain Adaptation
- Authors: Ceyuan Yang, Yujun Shen, Zhiyi Zhang, Yinghao Xu, Jiapeng Zhu, Zhirong
Wu, Bolei Zhou
- Abstract summary: This work aims at transferring a Generative Adversarial Network (GAN) pre-trained on one image domain to a new domain using as few as just one target image.
- Score: 39.17324951275831
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This work aims at transferring a Generative Adversarial Network (GAN)
pre-trained on one image domain to a new domain using as few as just one target
image. The main challenge is that, under such limited supervision, it is
extremely difficult to synthesize photo-realistic and highly diverse images
while acquiring the representative characteristics of the target. Different
from existing approaches that adopt a vanilla fine-tuning strategy, we
introduce two lightweight modules into the generator and the discriminator,
respectively. Concretely, we add an attribute adaptor to the generator while
freezing its original parameters, through which the generator can reuse the
prior knowledge to the fullest extent and hence maintain synthesis quality and
diversity. We then equip the well-learned discriminator backbone with an
attribute classifier to ensure that the generator captures the appropriate
characteristics from the reference image. Furthermore, considering the poor
diversity of the training data (i.e., as few as one image), we propose to also
constrain the diversity of the generative domain during training, alleviating
the optimization difficulty. Our approach brings appealing results under
various settings, substantially surpassing state-of-the-art alternatives,
especially in terms of synthesis diversity. Notably, our method works well even
with large domain gaps and robustly converges within a few minutes for each
experiment.
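To make the two lightweight modules concrete, here is a minimal PyTorch sketch of the setup the abstract describes: an attribute adaptor transforms the latent code while the generator weights stay frozen, and an attribute classifier head sits on the frozen discriminator backbone. The per-channel affine form of the adaptor, the linear classifier head, and all sizes are illustrative assumptions rather than the paper's exact architecture.

```python
import torch
import torch.nn as nn

class AttributeAdaptor(nn.Module):
    """Lightweight transform of the latent code; a per-channel affine is an
    assumed form (the abstract only specifies a lightweight module)."""
    def __init__(self, latent_dim: int):
        super().__init__()
        self.scale = nn.Parameter(torch.ones(latent_dim))
        self.shift = nn.Parameter(torch.zeros(latent_dim))

    def forward(self, w: torch.Tensor) -> torch.Tensor:
        return w * self.scale + self.shift

class AttributeClassifier(nn.Module):
    """Small head on frozen discriminator features that checks whether an
    image carries the target attribute."""
    def __init__(self, feat_dim: int):
        super().__init__()
        self.head = nn.Linear(feat_dim, 1)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return self.head(feats)

def freeze(module: nn.Module) -> None:
    for p in module.parameters():
        p.requires_grad_(False)

latent_dim, feat_dim = 512, 512
# Placeholders standing in for a pre-trained GAN (e.g., a StyleGAN).
generator = nn.Sequential(nn.Linear(latent_dim, latent_dim), nn.ReLU())
disc_backbone = nn.Sequential(nn.Linear(latent_dim, feat_dim), nn.ReLU())

freeze(generator)      # reuse prior knowledge: only the adaptor is trained
freeze(disc_backbone)  # only the classifier head is trained

adaptor = AttributeAdaptor(latent_dim)
classifier = AttributeClassifier(feat_dim)
opt = torch.optim.Adam(
    list(adaptor.parameters()) + list(classifier.parameters()), lr=1e-3)

z = torch.randn(4, latent_dim)
fake = generator(adaptor(z))              # adapted synthesis, frozen weights
logits = classifier(disc_backbone(fake))  # attribute supervision signal
```

Only the adaptor and the classifier head receive gradients, which is what keeps adaptation cheap enough to converge within minutes.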
Related papers
- I2I-Galip: Unsupervised Medical Image Translation Using Generative Adversarial CLIP [30.506544165999564]
Unpaired image-to-image translation is a challenging task due to the absence of paired examples.
We propose a new image-to-image translation framework named Image-to-Image-Generative-Adversarial-CLIP (I2I-Galip).
arXiv Detail & Related papers (2024-09-19T01:44:50Z)
- Improving Diversity in Zero-Shot GAN Adaptation with Semantic Variations [61.132408427908175]
Zero-shot GAN adaptation aims to reuse well-trained generators to synthesize images of an unseen target domain.
With only a single representative text feature instead of real images, the synthesized images gradually lose diversity.
We propose a novel method to find semantic variations of the target text in the CLIP space.
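As a rough sketch of how such semantic variations might be used, assuming access to CLIP text embeddings (the perturbation scheme and the simplified directional loss below are guesses at the mechanism, not the paper's method):

```python
import torch
import torch.nn.functional as F

def semantic_variations(target_feat: torch.Tensor, n: int = 8,
                        sigma: float = 0.1) -> torch.Tensor:
    """Sample n perturbed copies of a CLIP text feature; Gaussian noise
    plus renormalization is an illustrative choice."""
    target_feat = F.normalize(target_feat, dim=-1)
    noise = sigma * torch.randn(n, target_feat.shape[-1])
    return F.normalize(target_feat + noise, dim=-1)

def directional_loss(img_feats, src_feat, text_variants):
    """Align each image's CLIP-space shift away from the source domain with
    a randomly assigned text variant, spreading samples across semantics.
    (A simplification of the usual CLIP directional loss.)"""
    delta = F.normalize(img_feats - src_feat, dim=-1)
    idx = torch.randint(0, text_variants.shape[0], (img_feats.shape[0],))
    return (1 - (delta * text_variants[idx]).sum(-1)).mean()

# Toy usage with random stand-ins for 512-d CLIP features (ViT-B/32 size).
variants = semantic_variations(torch.randn(512))
loss = directional_loss(torch.randn(4, 512), torch.randn(512), variants)
```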
arXiv Detail & Related papers (2023-08-21T08:12:28Z)
- Few-shot Image Generation via Masked Discrimination [20.998032566820907]
Few-shot image generation aims to generate images of high quality and great diversity with limited data.
It is difficult for modern GANs to avoid overfitting when trained on only a few images.
This work presents a novel approach to realize few-shot GAN adaptation via masked discrimination.
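One plausible reading of "masked discrimination" is that the discriminator must judge from partial evidence so it cannot memorize the few training images. The channel-level masking and rate below are assumptions for illustration:

```python
import torch
import torch.nn as nn

class MaskedDiscriminatorHead(nn.Module):
    """Randomly zero discriminator features before the real/fake decision;
    masking granularity and rate p are assumed, not taken from the paper."""
    def __init__(self, feat_dim: int, p: float = 0.5):
        super().__init__()
        self.p = p
        self.fc = nn.Linear(feat_dim, 1)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        if self.training:
            mask = (torch.rand_like(feats) > self.p).float()
            feats = feats * mask
        return self.fc(feats)

head = MaskedDiscriminatorHead(feat_dim=512)
logits = head(torch.randn(4, 512))  # decision made from partial features
```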
arXiv Detail & Related papers (2022-10-27T06:02:22Z)
- Auto-regressive Image Synthesis with Integrated Quantization [55.51231796778219]
This paper presents a versatile framework for conditional image generation.
It incorporates the inductive bias of CNNs and powerful sequence modeling of auto-regression.
Our method achieves superior, more diverse image generation compared with the state of the art.
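As a generic illustration of pairing quantization with auto-regression (not the paper's specific "integrated quantization" design, and with all sizes assumed): CNN features are snapped to their nearest codebook entries, and the resulting index sequence is modeled token by token.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

codebook = nn.Embedding(512, 64)  # 512 codes of 64 dims (sizes assumed)

def quantize(feats: torch.Tensor) -> torch.Tensor:
    """Map each feature vector to the index of its nearest codebook entry."""
    flat = feats.reshape(-1, feats.shape[-1])           # (B*N, 64)
    dists = torch.cdist(flat, codebook.weight)          # (B*N, 512)
    return dists.argmin(dim=-1).reshape(feats.shape[:-1])

# An auto-regressive prior over code indices; a single causal Transformer
# layer stands in for a full sequence model.
prior = nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)
to_logits = nn.Linear(64, 512)

feats = torch.randn(2, 16, 64)           # e.g., a flattened 4x4 feature grid
indices = quantize(feats)
tokens = codebook(indices)               # embed the discrete codes
causal = torch.triu(torch.full((16, 16), float("-inf")), diagonal=1)
logits = to_logits(prior(tokens, src_mask=causal))

# Next-code prediction: position t predicts the code at t + 1.
loss = F.cross_entropy(logits[:, :-1].reshape(-1, 512),
                       indices[:, 1:].reshape(-1))
```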
arXiv Detail & Related papers (2022-07-21T22:19:17Z)
- Towards Diverse and Faithful One-shot Adaption of Generative Adversarial Networks [54.80435295622583]
One-shot generative domain adaptation aims to transfer a generator pre-trained on one domain to a new domain using only one reference image.
We present a novel one-shot generative domain adaption method, i.e., DiFa, for diverse generation and faithful adaptation.
arXiv Detail & Related papers (2022-07-18T16:29:41Z)
- Improving Diversity with Adversarially Learned Transformations for Domain Generalization [81.26960899663601]
We present a novel framework, adversarially learned transformations (ALT), that uses a neural network to model plausible yet hard image transformations.
We show that ALT can naturally work with existing diversity modules to produce highly distinct and large transformations of the source domain, leading to state-of-the-art performance.
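A skeletal rendering of that idea, assuming the transformation network is trained by gradient ascent on the task loss (network shapes, optimizers, and step counts are placeholders):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# A tiny image-to-image network stands in for the learned transformation.
transform = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(16, 3, 3, padding=1))
classifier = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))

opt_t = torch.optim.SGD(transform.parameters(), lr=0.1)
opt_c = torch.optim.SGD(classifier.parameters(), lr=0.01)

x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))

# Adversarial step: update the transform to MAXIMIZE the task loss,
# producing hard but plausible views of the source domain.
loss_adv = -F.cross_entropy(classifier(transform(x)), y)
opt_t.zero_grad(); loss_adv.backward(); opt_t.step()

# Task step: train the classifier on the hardened images.
opt_c.zero_grad()
loss_cls = F.cross_entropy(classifier(transform(x).detach()), y)
loss_cls.backward(); opt_c.step()
```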
arXiv Detail & Related papers (2022-06-15T18:05:24Z)
- A Closer Look at Few-shot Image Generation [38.83570296616384]
When transferring pre-trained GANs to small target datasets, the generator tends to replicate the training samples.
Several methods have been proposed to address this few-shot image generation task, but there is a lack of effort to analyze them under a unified framework.
We propose a framework to analyze existing methods during adaptation.
A second contribution proposes applying a mutual information (MI) objective to retain the source domain's rich multi-level diversity in the target-domain generator.
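One standard way to apply mutual information in this setting is an InfoNCE-style contrastive bound: features of the same latent code rendered by the source and adapted generators form positive pairs, and other codes serve as negatives. The sketch below implements that bound; it should not be read as the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def infonce_mi_bound(src_feats: torch.Tensor, tgt_feats: torch.Tensor,
                     tau: float = 0.07) -> torch.Tensor:
    """InfoNCE lower bound on the MI between source- and target-generator
    features of the same latent codes (row i of each is a positive pair)."""
    src = F.normalize(src_feats, dim=-1)
    tgt = F.normalize(tgt_feats, dim=-1)
    logits = src @ tgt.t() / tau          # (B, B) similarity matrix
    labels = torch.arange(src.shape[0])   # positives sit on the diagonal
    return F.cross_entropy(logits, labels)

# Toy usage: the same codes rendered by the frozen source G and adapted G.
loss = infonce_mi_bound(torch.randn(8, 256), torch.randn(8, 256))
```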
arXiv Detail & Related papers (2022-05-08T07:46:26Z)
- StEP: Style-based Encoder Pre-training for Multi-modal Image Synthesis [68.3787368024951]
We propose a novel approach for multi-modal image-to-image (I2I) translation.
We learn a latent embedding, jointly with the generator, that models the variability of the output domain.
Specifically, we pre-train a generic style encoder using a novel proxy task to learn an embedding of images, from arbitrary domains, into a low-dimensional style latent space.
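A minimal sketch of that pre-training step, assuming the proxy task takes a triplet form in which two crops of one image share a style while a crop of another image does not (the encoder architecture, the 8-d style space, and the proxy task details are illustrative):

```python
import torch
import torch.nn as nn

# Generic style encoder mapping images into a low-dimensional style space.
encoder = nn.Sequential(nn.Conv2d(3, 32, 4, stride=2), nn.ReLU(),
                        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                        nn.Linear(32, 8))  # 8-d style code (size assumed)

triplet = nn.TripletMarginLoss(margin=0.5)

anchor = torch.randn(4, 3, 64, 64)
positive = torch.randn(4, 3, 64, 64)  # e.g., another crop of the same image
negative = torch.randn(4, 3, 64, 64)  # a crop from a different image
loss = triplet(encoder(anchor), encoder(positive), encoder(negative))
```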
arXiv Detail & Related papers (2021-04-14T19:58:24Z)