One-Shot Domain Adaptation For Face Generation
- URL: http://arxiv.org/abs/2003.12869v1
- Date: Sat, 28 Mar 2020 18:50:13 GMT
- Title: One-Shot Domain Adaptation For Face Generation
- Authors: Chao Yang, Ser-Nam Lim
- Abstract summary: We propose a framework capable of generating face images that fall into the same distribution as that of a given one-shot example.
We develop an iterative optimization scheme that rapidly adapts the weights of the model to shift the output's high-level distribution to the target's.
To generate images of the same distribution, we introduce a style-mixing technique that transfers the low-level statistics from the target to faces randomly generated with the model.
- Score: 34.882820002799626
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose a framework capable of generating face images that
fall into the same distribution as that of a given one-shot example. We
leverage a pre-trained StyleGAN model that has already learned the generic face
distribution. Given the one-shot target, we develop an iterative optimization
scheme that rapidly adapts the weights of the model to shift the output's
high-level distribution to the target's. To generate images of the same
distribution, we introduce a style-mixing technique that transfers the
low-level statistics from the target to faces randomly generated with the
model. With that, we are able to generate an unlimited number of faces that
inherit from the distribution of both generic human faces and the one-shot
example. The newly generated faces can serve as augmented training data for
other downstream tasks. Such a setting is appealing as it requires labeling
very few examples, or even a single one, in the target domain, which is often
the case for real-world face manipulations that result from a variety of unknown and unique
distributions, each with extremely low prevalence. We show the effectiveness of
our one-shot approach for detecting face manipulations and compare it with
other few-shot domain adaptation methods qualitatively and quantitatively.
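As a rough illustration of the two mechanisms the abstract describes (iterative weight adaptation, then style mixing), here is a minimal PyTorch sketch. The toy generator, the plain pixel loss, and the coarse/fine layer split are illustrative assumptions standing in for the pretrained StyleGAN and the paper's actual losses.

```python
# Minimal sketch, not the authors' code: a toy "styled" generator stands in
# for StyleGAN, and a plain pixel loss stands in for the paper's losses.
import torch
import torch.nn.functional as F

class ToyStyleGenerator(torch.nn.Module):
    """Stand-in generator with per-layer style inputs, StyleGAN-style."""
    def __init__(self, w_dim=64, img_dim=3 * 32 * 32):
        super().__init__()
        self.coarse = torch.nn.Linear(w_dim, 256)  # high-level structure
        self.fine = torch.nn.Linear(w_dim, 256)    # low-level statistics
        self.to_img = torch.nn.Linear(256, img_dim)

    def forward(self, w_coarse, w_fine=None):
        w_fine = w_coarse if w_fine is None else w_fine
        h = torch.relu(self.coarse(w_coarse)) * torch.sigmoid(self.fine(w_fine))
        return torch.tanh(self.to_img(h))

def adapt_weights(G, w_batch, target, steps=200, lr=1e-3):
    """Step 1: iteratively fine-tune the generator's weights so that random
    outputs drift toward the one-shot target (pixel MSE as a placeholder
    for a high-level, perceptual distribution loss)."""
    opt = torch.optim.Adam(G.parameters(), lr=lr)
    for _ in range(steps):
        loss = F.mse_loss(G(w_batch), target.expand(w_batch.shape[0], -1))
        opt.zero_grad(); loss.backward(); opt.step()
    return G

def style_mix(G, w_random, w_target):
    """Step 2: style mixing -- random codes drive the coarse layers while
    the target's code drives the fine (low-level statistics) layers."""
    return G(w_random, w_target)

G = ToyStyleGenerator()
target = torch.rand(1, 3 * 32 * 32) * 2 - 1   # the one-shot example
w_target = torch.randn(1, 64)                 # e.g. obtained by inversion
G = adapt_weights(G, torch.randn(8, 64), target)
faces = style_mix(G, torch.randn(16, 64), w_target)
print(faces.shape)  # torch.Size([16, 3072])
```

In the real pipeline the target's latent code would come from GAN inversion, and the losses would operate on feature statistics rather than raw pixels.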
Related papers
- Score Neural Operator: A Generative Model for Learning and Generalizing Across Multiple Probability Distributions [7.851040662069365]
We introduce the Score Neural Operator, which learns the mapping from multiple probability distributions to their score functions within a unified framework.
Our approach offers significant potential for few-shot learning applications, where a single image from a new distribution can be leveraged to generate multiple distinct images from that distribution.
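For intuition, the object the operator maps to is the score function s(x) = ∇_x log p(x). The quick check below (an illustration, not the paper's model) recovers the closed-form Gaussian score with autograd.

```python
# The object being learned: the score s(x) = grad_x log p(x).
# For N(mu, sigma^2) it is -(x - mu) / sigma^2; autograd agrees.
import math
import torch

mu, sigma = 1.5, 0.8
x = torch.tensor(0.3, requires_grad=True)
log_p = -0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))
score = torch.autograd.grad(log_p, x)[0]
print(score.item(), -(0.3 - mu) / sigma ** 2)  # both ~1.875
```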
arXiv Detail & Related papers (2024-10-11T06:00:34Z)
- ZoDi: Zero-Shot Domain Adaptation with Diffusion-Based Image Transfer [13.956618446530559]
This paper proposes a zero-shot domain adaptation method based on diffusion models, called ZoDi.
First, we utilize an off-the-shelf diffusion model to synthesize target-like images by transferring the domain of source images to the target domain.
Second, we train the model using both the source images and the synthesized images with the original representations to learn domain-robust representations.
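Schematically, and only as a sketch, the two steps might look as follows; `diffusion_domain_transfer` is a stub for the off-the-shelf image-to-image diffusion model ZoDi would call, and the task network and consistency loss are placeholder interpretations of the summary above.

```python
# Hypothetical sketch of ZoDi's two steps; the diffusion transfer is stubbed.
import torch
import torch.nn.functional as F

def diffusion_domain_transfer(src):
    """Stub: re-render source images in the target domain while preserving
    content (in ZoDi, a pretrained image-to-image diffusion model)."""
    return (src * 0.5).detach()  # placeholder appearance change

model = torch.nn.Conv2d(3, 8, 3, padding=1)   # stand-in task network
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

src = torch.rand(4, 3, 64, 64)                # labeled source batch
tgt_like = diffusion_domain_transfer(src)     # step 1: synthesize

for _ in range(10):                           # step 2: train on both
    # same content in both domains -> encourage matching representations
    # (the supervised task loss on the source labels is omitted for brevity)
    loss = F.mse_loss(model(src), model(tgt_like))
    opt.zero_grad(); loss.backward(); opt.step()
```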
arXiv Detail & Related papers (2024-03-20T14:58:09Z)
- Real-World Image Variation by Aligning Diffusion Inversion Chain [53.772004619296794]
A domain gap exists between generated images and real-world images, which poses a challenge in generating high-quality variations of real-world images.
We propose a novel inference pipeline called Real-world Image Variation by ALignment (RIVAL).
Our pipeline enhances the generation quality of image variations by aligning the image generation process to the source image's inversion chain.
arXiv Detail & Related papers (2023-05-30T04:09:47Z)
- Taming Normalizing Flows [22.15640952962115]
We propose an algorithm for taming Normalizing Flow models.
We focus on Normalizing Flows because they can compute the exact likelihood of generating a given image.
Taming is achieved with a fast fine-tuning process without retraining the model from scratch, achieving the goal in a matter of minutes.
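The exactness comes from the change-of-variables formula, log p(x) = log p_base(f(x)) + log|det ∂f/∂x|. A one-layer elementwise affine flow (illustrative, not the paper's model) shows it in a few lines.

```python
# Exact likelihood under a flow: log p(x) = log p_base(z) + log|det dz/dx|.
# One elementwise affine layer keeps the Jacobian diagonal and trivial.
import torch

s = torch.tensor([2.0, 0.5])                 # scales (invertible: s != 0)
b = torch.tensor([1.0, -1.0])                # shifts
base = torch.distributions.Normal(0.0, 1.0)

def exact_log_likelihood(x):
    z = (x - b) / s                          # inverse flow  z = f(x)
    log_det = -torch.log(s.abs()).sum()      # dz_i/dx_i = 1/s_i
    return base.log_prob(z).sum(-1) + log_det

x = torch.tensor([[0.2, 0.7], [1.0, -1.0]])
print(exact_log_likelihood(x))               # exact values, no sampling
```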
arXiv Detail & Related papers (2022-11-29T18:56:04Z)
- Towards Diverse and Faithful One-shot Adaption of Generative Adversarial Networks [54.80435295622583]
One-shot generative domain adaption aims to transfer a pre-trained generator on one domain to a new domain using one reference image only.
We present DiFa, a novel one-shot generative domain adaption method for diverse generation and faithful adaptation.
arXiv Detail & Related papers (2022-07-18T16:29:41Z)
- One-Shot Adaptation of GAN in Just One CLIP [51.188396199083336]
We present a novel single-shot GAN adaptation method through unified CLIP space manipulations.
Specifically, our model employs a two-step training strategy, the first step being reference image search in the source generator using CLIP-guided latent optimization.
We show that our model generates diverse outputs with the target texture and outperforms the baseline models both qualitatively and quantitatively.
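A sketch of what CLIP-guided latent optimization can look like: freeze everything and move a latent until the generator's output matches the reference in CLIP space. The toy generator and the raw-pixel CLIP input (skipping CLIP's usual preprocessing) are simplifications; only `clip.load` and `encode_image` follow the real API.

```python
# Sketch of CLIP-guided latent optimization (the first training step above).
import torch
import clip  # pip install git+https://github.com/openai/CLIP.git

device = "cpu"  # CPU keeps CLIP in fp32, which this sketch assumes
clip_model, _ = clip.load("ViT-B/32", device=device)
clip_model.requires_grad_(False)       # gradients flow to the latent only

G = torch.nn.Sequential(               # stand-in for the source generator
    torch.nn.Linear(128, 3 * 224 * 224), torch.nn.Tanh())

def clip_embed(img_flat):
    e = clip_model.encode_image(img_flat.view(-1, 3, 224, 224))
    return e / e.norm(dim=-1, keepdim=True)

reference = torch.rand(1, 3 * 224 * 224) * 2 - 1       # the one reference
ref_emb = clip_embed(reference).detach()

w = torch.randn(1, 128, requires_grad=True)            # latent to search
opt = torch.optim.Adam([w], lr=0.05)
for _ in range(100):
    loss = 1 - (clip_embed(G(w)) * ref_emb).sum()      # cosine distance
    opt.zero_grad(); loss.backward(); opt.step()
# a second step would then adapt the generator itself; details omitted here.
```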
arXiv Detail & Related papers (2022-03-17T13:03:06Z)
- Meta Internal Learning [88.68276505511922]
Internal learning for single-image generation is a framework in which a generator is trained to produce novel images from a single input image.
We propose a meta-learning approach that enables training over a collection of images in order to model the internal statistics of each sample image more effectively.
Our results show that the models obtained are as suitable as single-image GANs for many common image applications.
arXiv Detail & Related papers (2021-10-06T16:27:38Z)
- Improving the Fairness of Deep Generative Models without Retraining [41.6580482370894]
Generative Adversarial Networks (GANs) advance face synthesis by learning the underlying distribution of observed data.
Despite the high quality of the generated faces, some minority groups are rarely generated from the trained models due to a biased image generation process.
We propose an interpretable baseline method to balance the output facial attributes without retraining.
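As one simple reading of "balancing outputs without retraining" (not necessarily the paper's exact method), the generator can be left frozen while the latent inputs are rejection-sampled so each attribute group appears equally often; the attribute classifier below is a stub.

```python
# Hypothetical sketch: balance attribute groups at sampling time, with the
# generator and (stubbed) attribute classifier both kept frozen.
import torch

def attribute(z):
    """Stub for a pretrained attribute classifier applied to G(z);
    returns 0/1 group labels. Here: a fixed linear rule on z."""
    return (z[:, 0] > 0.8).long()     # imbalanced: group 1 is rare

def balanced_sample(n_per_group, dim=16):
    buckets = {0: [], 1: []}
    while min(len(v) for v in buckets.values()) < n_per_group:
        z = torch.randn(256, dim)
        for zi, a in zip(z, attribute(z)):
            if len(buckets[a.item()]) < n_per_group:
                buckets[a.item()].append(zi)
    return torch.stack(buckets[0] + buckets[1])

z_balanced = balanced_sample(8)       # feed to the frozen generator
print(z_balanced.shape)               # torch.Size([16, 16])
```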
arXiv Detail & Related papers (2020-12-09T03:20:41Z)
- Few-shot Image Generation with Elastic Weight Consolidation [53.556446614013105]
Few-shot image generation seeks to generate more data of a given domain with only a few available training examples.
We adapt a pretrained model, without introducing any additional parameters, to the few examples of the target domain.
We demonstrate the effectiveness of our algorithm by generating high-quality results of different target domains.
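Elastic Weight Consolidation itself has a compact form: an adaptation loss plus a quadratic penalty, weighted by Fisher information, on drifting from the pretrained weights. A minimal sketch follows, with the model, target data, and Fisher estimates all standing in as placeholders.

```python
# EWC in one line: penalize moving parameters that were important (high
# Fisher information) for the source task while fitting the few examples.
import torch

def ewc_penalty(model, theta_star, fisher, lam=1e4):
    """lam/2 * sum_i F_i * (theta_i - theta*_i)^2 over all parameters."""
    loss = 0.0
    for name, p in model.named_parameters():
        loss = loss + (fisher[name] * (p - theta_star[name]) ** 2).sum()
    return 0.5 * lam * loss

model = torch.nn.Linear(8, 8)                 # stand-in pretrained generator
theta_star = {n: p.detach().clone() for n, p in model.named_parameters()}
# Fisher is normally estimated from source-task gradients; random here.
fisher = {n: torch.rand_like(p) for n, p in model.named_parameters()}

x = torch.randn(4, 8)                         # the few target examples
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
for _ in range(50):
    adapt_loss = (model(x) - x).pow(2).mean() # placeholder adaptation loss
    loss = adapt_loss + ewc_penalty(model, theta_star, fisher)
    opt.zero_grad(); loss.backward(); opt.step()
```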
arXiv Detail & Related papers (2020-12-04T18:57:13Z)
- Improving Face Recognition from Hard Samples via Distribution Distillation Loss [131.61036519863856]
Large facial variations are the main challenge in face recognition.
We propose a novel Distribution Distillation Loss to narrow the performance gap between easy and hard samples.
We have conducted extensive experiments on both generic large-scale face benchmarks and benchmarks with diverse variations in race, resolution, and pose.
arXiv Detail & Related papers (2020-02-10T11:25:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.