D3T-GAN: Data-Dependent Domain Transfer GANs for Few-shot Image
Generation
- URL: http://arxiv.org/abs/2205.06032v1
- Date: Thu, 12 May 2022 11:32:39 GMT
- Title: D3T-GAN: Data-Dependent Domain Transfer GANs for Few-shot Image
Generation
- Authors: Xintian Wu, Huanyu Wang, Yiming Wu, Xi Li
- Abstract summary: Few-shot image generation aims at generating realistic images through training a GAN model given few samples.
A typical solution for few-shot generation is to transfer a well-trained GAN model from a data-rich source domain to the data-deficient target domain.
We propose a novel self-supervised transfer scheme termed D3T-GAN, addressing cross-domain GAN transfer in few-shot image generation.
- Score: 17.20913584422917
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As an important and challenging problem, few-shot image generation aims at
generating realistic images through training a GAN model given few samples. A
typical solution for few-shot generation is to transfer a well-trained GAN
model from a data-rich source domain to the data-deficient target domain. In
this paper, we propose a novel self-supervised transfer scheme termed D3T-GAN,
addressing cross-domain GAN transfer in few-shot image generation.
Specifically, we design two individual strategies to transfer knowledge between
generators and discriminators, respectively. To transfer knowledge between
generators, we conduct a data-dependent transformation, which projects and
reconstructs the target samples into the source generator space. Then, we
perform knowledge transfer from transformed samples to generated samples. To
transfer knowledge between discriminators, we design a multi-level discriminant
knowledge distillation from the source discriminator to the target
discriminator on both the real and fake samples. Extensive experiments show
that our method improves the quality of generated images and achieves
state-of-the-art FID scores on commonly used datasets.
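To make the two transfer strategies concrete, here is a minimal PyTorch-style sketch assembled purely from the abstract's description; `source_G`, `target_G`, `source_D`, `target_D`, the `z_dim` attribute, and the `features(x)` accessor are illustrative assumptions, not the authors' actual interfaces.

```python
import torch
import torch.nn.functional as F

def invert_sample(source_G, x_target, steps=500, lr=0.05):
    """Data-dependent transformation (sketch): project a target sample into the
    frozen source generator's space by optimizing a latent code that
    reconstructs it."""
    z = torch.randn(1, source_G.z_dim, requires_grad=True)  # assumes a z_dim attribute
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.l1_loss(source_G(z), x_target)  # a perceptual term could be added
        loss.backward()
        opt.step()
    return z.detach()

def generator_transfer_loss(target_G, source_G, z_inv):
    """Transfer knowledge from the transformed (reconstructed) samples to the
    target generator's outputs at the same latent codes."""
    with torch.no_grad():
        transformed = source_G(z_inv)   # reconstruction in source generator space
    return F.l1_loss(target_G(z_inv), transformed)

def discriminator_distill_loss(target_D, source_D, x_real, x_fake):
    """Multi-level discriminant knowledge distillation (sketch): match
    intermediate discriminator features on both real and fake samples.
    Assumes each discriminator exposes per-layer activations via features(x)."""
    loss = 0.0
    for x in (x_real, x_fake):
        with torch.no_grad():
            feats_src = source_D.features(x)
        feats_tgt = target_D.features(x)
        for f_s, f_t in zip(feats_src, feats_tgt):
            loss = loss + F.mse_loss(f_t, f_s)
    return loss
```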
Related papers
- Cross-domain and Cross-dimension Learning for Image-to-Graph
Transformers [50.576354045312115]
Direct image-to-graph transformation is a challenging task that solves object detection and relationship prediction in a single model.
We introduce a set of methods enabling cross-domain and cross-dimension transfer learning for image-to-graph transformers.
We demonstrate our method's utility in cross-domain and cross-dimension experiments, where we pretrain our models on 2D satellite images before applying them to vastly different target domains in 2D and 3D.
arXiv Detail & Related papers (2024-03-11T10:48:56Z) - Exploring Incompatible Knowledge Transfer in Few-shot Image Generation [107.81232567861117]
Few-shot image generation learns to generate diverse and high-fidelity images from a target domain using a few reference samples.
Existing FSIG methods select, preserve, and transfer prior knowledge from a source generator to learn the target generator.
We propose knowledge truncation, which is a complementary operation to knowledge preservation and is implemented by a lightweight pruning-based method.
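As a rough illustration of what lightweight pruning-based knowledge truncation could look like, the sketch below zeroes the generator filters that respond least to the few target samples; the activation-magnitude criterion and the names (`forward_to_layer`, `prune_ratio`) are assumptions for illustration, not the paper's exact method.

```python
import torch

@torch.no_grad()
def truncate_incompatible_filters(conv, target_batch, forward_to_layer, prune_ratio=0.1):
    """Pruning-based knowledge truncation (sketch): score each output filter of
    `conv` by its mean activation magnitude on the few target samples and zero
    the lowest-scoring fraction, discarding source knowledge least useful for
    the target domain."""
    acts = forward_to_layer(target_batch)        # (N, C, H, W) activations at this layer
    scores = acts.abs().mean(dim=(0, 2, 3))      # one importance score per filter
    k = int(prune_ratio * scores.numel())
    prune_idx = scores.argsort()[:k]             # least-activated filters
    conv.weight[prune_idx] = 0.0                 # zero out incompatible filters
    if conv.bias is not None:
        conv.bias[prune_idx] = 0.0
```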
arXiv Detail & Related papers (2023-04-15T14:57:15Z) - Guided Image-to-Image Translation by Discriminator-Generator
Communication [71.86347329356244]
The goal of Image-to-image (I2I) translation is to transfer an image from a source domain to a target domain.
One major branch of this research formulates I2I translation based on Generative Adversarial Networks (GANs).
arXiv Detail & Related papers (2023-03-07T02:29:36Z) - Diffusion Guided Domain Adaptation of Image Generators [22.444668833151677]
We show that classifier-free guidance can be leveraged as a critic, enabling generators to distill knowledge from large-scale text-to-image diffusion models.
Generators can be efficiently shifted into new domains indicated by text prompts without access to ground-truth samples from the target domains.
Although not trained to minimize CLIP loss, our model achieves equally high CLIP scores and significantly lower FID than prior work on short prompts.
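A hedged sketch of the general idea follows (score-distillation-style guidance, which may differ from the paper's exact formulation): noise the generated image, query a frozen diffusion model for its classifier-free-guided noise estimate, and backpropagate the mismatch into the generator. `diffusion_eps(x_t, t, cond)` and the cosine schedule are hypothetical stand-ins.

```python
import math
import torch

def diffusion_guidance_loss(generator, z, diffusion_eps, prompt_emb, null_emb,
                            guidance_scale=7.5, T=1000):
    """Score-distillation-style critic (sketch): noise the generated image, query
    a frozen diffusion model for its classifier-free-guided noise estimate, and
    push the generator so its outputs agree with that estimate."""
    x = generator(z)                                  # generated image, e.g. in [-1, 1]
    t = torch.randint(1, T, (x.shape[0],))            # random diffusion timesteps
    noise = torch.randn_like(x)
    a = (torch.cos(t.float() / T * math.pi / 2) ** 2).view(-1, 1, 1, 1)  # toy schedule
    x_t = a.sqrt() * x + (1 - a).sqrt() * noise       # forward-noised image
    with torch.no_grad():
        eps_c = diffusion_eps(x_t, t, prompt_emb)     # text-conditioned prediction
        eps_u = diffusion_eps(x_t, t, null_emb)       # unconditional prediction
        eps = eps_u + guidance_scale * (eps_c - eps_u)  # classifier-free guidance
    # SDS-style objective: its gradient w.r.t. x is proportional to (eps - noise)
    return ((eps - noise) * x).mean()
```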
arXiv Detail & Related papers (2022-12-08T18:46:19Z) - Few-shot Image Generation via Masked Discrimination [20.998032566820907]
Few-shot image generation aims to generate images of high quality and great diversity with limited data.
It is difficult for modern GANs to avoid overfitting when trained on only a few images.
This work presents a novel approach to realize few-shot GAN adaptation via masked discrimination.
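One plausible reading of masked discrimination, sketched below under stated assumptions: randomly drop a fraction of the discriminator's intermediate feature channels during adaptation so it cannot memorize the few target images. `D_backbone`, `D_head`, and `mask_ratio` are illustrative names, not the paper's code.

```python
import torch

def masked_discriminator_logit(D_backbone, D_head, x, mask_ratio=0.3, training=True):
    """Masked discrimination (sketch): randomly zero a fraction of the
    discriminator's intermediate feature channels during adaptation so it must
    judge images from partial evidence and overfits the few samples less."""
    feats = D_backbone(x)                                       # (N, C, H, W)
    if training:
        keep = (torch.rand(feats.shape[1], device=feats.device) > mask_ratio).float()
        feats = feats * keep.view(1, -1, 1, 1)                  # drop ~mask_ratio channels
    return D_head(feats)                                        # real/fake logit
```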
arXiv Detail & Related papers (2022-10-27T06:02:22Z) - Towards Diverse and Faithful One-shot Adaption of Generative Adversarial
Networks [54.80435295622583]
One-shot generative domain adaption aims to transfer a pre-trained generator on one domain to a new domain using one reference image only.
We present a novel one-shot generative domain adaption method, i.e., DiFa, for diverse generation and faithful adaptation.
arXiv Detail & Related papers (2022-07-18T16:29:41Z) - A Systematic Evaluation of Domain Adaptation in Facial Expression
Recognition [0.0]
This paper provides a systematic evaluation of domain adaptation in facial expression recognition.
We use state-of-the-art transfer learning techniques and six commonly-used facial expression datasets.
We find the sobering result that transfer-learning accuracy is not high and varies idiosyncratically with the target dataset.
arXiv Detail & Related papers (2021-06-29T14:41:19Z) - MineGAN++: Mining Generative Models for Efficient Knowledge Transfer to
Limited Data Domains [77.46963293257912]
We propose a novel knowledge transfer method for generative models based on mining the knowledge that is most beneficial to a specific target domain.
This is done using a miner network that identifies which part of the generative distribution of each pretrained GAN outputs samples closest to the target domain.
We show that the proposed method, called MineGAN, effectively transfers knowledge to domains with few target images, outperforming existing methods.
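The miner is described as a small network prepended to a frozen pretrained generator; a minimal sketch under that reading follows, with all sizes and names illustrative.

```python
import torch
import torch.nn as nn

class Miner(nn.Module):
    """MineGAN-style miner (sketch): a small MLP that steers input noise toward
    the region of the frozen pretrained generator's latent space whose outputs
    look most like the target domain."""
    def __init__(self, z_dim=512, hidden=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim, hidden), nn.LeakyReLU(0.2),
            nn.Linear(hidden, z_dim),
        )

    def forward(self, z):
        return self.net(z)

# Adversarial mining step (generator frozen, only miner and critic update):
#   z = torch.randn(batch, 512)
#   fake = frozen_G(miner(z))
# with the usual GAN losses, gradients flow through `miner` but frozen_G's
# weights are never updated.
```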
arXiv Detail & Related papers (2021-04-28T13:10:56Z) - Domain Adaptation for Learning Generator from Paired Few-Shot Data [72.04430033118426]
We propose a Paired Few-shot GAN (PFS-GAN) model for learning generators with sufficient source data and only a few target samples.
Our method has better quantitative and qualitative results on the generated target-domain data with higher diversity in comparison to several baselines.
arXiv Detail & Related papers (2021-02-25T10:11:44Z) - Six-channel Image Representation for Cross-domain Object Detection [17.854940064699985]
Deep learning models are data-driven, and their performance is highly dependent on abundant and diverse datasets.
Image-to-image translation techniques are often employed to generate fake data of specific scenes for training the models.
We propose to combine the original 3-channel images with their corresponding GAN-generated fake images to form 6-channel representations of the dataset.
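The 6-channel construction itself is a simple channel-wise concatenation; a minimal sketch (assuming paired `(N, 3, H, W)` tensors) follows. Note that the downstream detector's first convolution must then accept 6 input channels.

```python
import torch

def six_channel_batch(real_rgb, fake_rgb):
    """Build the 6-channel representation (sketch): concatenate each original
    3-channel image with its GAN-generated counterpart along the channel axis,
    giving the detector both appearances of the same scene."""
    assert real_rgb.shape == fake_rgb.shape            # (N, 3, H, W) each
    return torch.cat([real_rgb, fake_rgb], dim=1)      # (N, 6, H, W)
```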
arXiv Detail & Related papers (2021-01-03T04:50:03Z) - Data Instance Prior for Transfer Learning in GANs [25.062518859107946]
We propose a novel transfer learning method for GANs in the limited data domain.
We show that the proposed method effectively transfers knowledge to domains with few target images.
We also show the utility of data instance prior in large-scale unconditional image generation and image editing tasks.
arXiv Detail & Related papers (2020-12-08T07:40:30Z)