Few-shot Image Generation with Elastic Weight Consolidation
- URL: http://arxiv.org/abs/2012.02780v1
- Date: Fri, 4 Dec 2020 18:57:13 GMT
- Title: Few-shot Image Generation with Elastic Weight Consolidation
- Authors: Yijun Li, Richard Zhang, Jingwan Lu, Eli Shechtman
- Abstract summary: Few-shot image generation seeks to generate more data of a given domain, with only a few available training examples.
We adapt a pretrained model, without introducing any additional parameters, to the few examples of the target domain.
We demonstrate the effectiveness of our algorithm by generating high-quality results of different target domains.
- Score: 53.556446614013105
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Few-shot image generation seeks to generate more data of a given domain, with
only a few available training examples. As it is unreasonable to expect to fully
infer the distribution from just a few observations (e.g., emojis), we seek to
leverage a large, related source domain as pretraining (e.g., human faces).
Thus, we wish to preserve the diversity of the source domain, while adapting to
the appearance of the target. We adapt a pretrained model, without introducing
any additional parameters, to the few examples of the target domain. Crucially,
we regularize the changes of the weights during this adaptation, in order to
best preserve the information of the source dataset, while fitting the target.
We demonstrate the effectiveness of our algorithm by generating high-quality
results of different target domains, including those with extremely few
examples (e.g., <10). We also analyze the performance of our method with
respect to some important factors, such as the number of examples and the
dissimilarity between the source and target domain.
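The weight regularization the abstract describes follows Elastic Weight Consolidation: each weight is anchored to its pretrained value, penalized in proportion to an estimate of its importance to the source task (typically a diagonal Fisher information estimate). A minimal sketch of the penalty term, assuming the pretrained weights and per-weight importances are already available (the function and parameter names are illustrative, not the authors' code):

```python
def ewc_penalty(theta, theta_star, fisher, lam=1.0):
    """EWC regularizer: lam * sum_i F_i * (theta_i - theta*_i)^2.

    theta      -- current (adapted) weights
    theta_star -- pretrained source-model weights
    fisher     -- per-weight importance (e.g., diagonal Fisher estimate)
    lam        -- regularization strength
    """
    return lam * sum(f * (t - ts) ** 2
                     for f, t, ts in zip(fisher, theta, theta_star))

# Total adaptation loss would be the target-domain loss plus this term:
penalty = ewc_penalty(theta=[1.5, 0.0], theta_star=[1.0, 0.0],
                      fisher=[2.0, 0.5], lam=1.0)
# penalty = 2.0 * (1.5 - 1.0)**2 = 0.5
```

Because important source weights (large F_i) are held near their pretrained values while unimportant ones are free to move, the adapted model fits the few target examples without forgetting the diversity learned from the source domain, and without adding any new parameters.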
Related papers
- DG-TTA: Out-of-domain medical image segmentation through Domain Generalization and Test-Time Adaptation [43.842694540544194]
We propose to combine domain generalization and test-time adaptation to create a highly effective approach for reusing pre-trained models in unseen target domains.
We demonstrate that our method, combined with pre-trained whole-body CT models, can effectively segment MR images with high accuracy.
arXiv Detail & Related papers (2023-12-11T10:26:21Z) - Prior Omission of Dissimilar Source Domain(s) for Cost-Effective Few-Shot Learning [24.647313693814798]
Few-shot slot tagging is an emerging research topic in the field of Natural Language Understanding (NLU).
With sufficient annotated data from source domains, the key challenge is how to train and adapt the model to another target domain which only has few labels.
arXiv Detail & Related papers (2021-09-11T09:30:59Z) - Few-shot Image Generation via Cross-domain Correspondence [98.2263458153041]
Training generative models, such as GANs, on a target domain containing limited examples can easily result in overfitting.
In this work, we seek to utilize a large source domain for pretraining and transfer the diversity information from source to target.
To further reduce overfitting, we present an anchor-based strategy to encourage different levels of realism over different regions in the latent space.
arXiv Detail & Related papers (2021-04-13T17:59:35Z) - Domain Adaptation for Learning Generator from Paired Few-Shot Data [72.04430033118426]
We propose a Paired Few-shot GAN (PFS-GAN) model for learning generators with sufficient source data and a few target data.
Our method has better quantitative and qualitative results on the generated target-domain data with higher diversity in comparison to several baselines.
arXiv Detail & Related papers (2021-02-25T10:11:44Z) - $n$-Reference Transfer Learning for Saliency Prediction [73.17061116358036]
We propose a few-shot transfer learning paradigm for saliency prediction.
The proposed framework is gradient-based and model-agnostic.
The results show that the proposed framework achieves a significant performance improvement.
arXiv Detail & Related papers (2020-07-09T23:20:44Z) - Domain Adaptation for Semantic Parsing [68.81787666086554]
We propose a novel semantic parser for domain adaptation, where we have much less annotated data in the target domain compared to the source domain.
Our semantic parser benefits from a two-stage coarse-to-fine framework, and thus can provide different and accurate treatments for the two stages.
Experiments on a benchmark dataset show that our method consistently outperforms several popular domain adaptation strategies.
arXiv Detail & Related papers (2020-06-23T14:47:41Z) - Towards Fair Cross-Domain Adaptation via Generative Learning [50.76694500782927]
Domain Adaptation (DA) aims to adapt a model trained on a well-labeled source domain to an unlabeled target domain with a different distribution.
We develop a novel Generative Few-shot Cross-domain Adaptation (GFCA) algorithm for fair cross-domain classification.
arXiv Detail & Related papers (2020-03-04T23:25:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.