Few-shot Image Generation via Adaptation-Aware Kernel Modulation
- URL: http://arxiv.org/abs/2210.16559v3
- Date: Tue, 9 May 2023 16:42:00 GMT
- Title: Few-shot Image Generation via Adaptation-Aware Kernel Modulation
- Authors: Yunqing Zhao, Keshigeyan Chandrasegaran, Milad Abdollahzadeh, Ngai-Man
Cheung
- Abstract summary: Few-shot image generation (FSIG) aims to generate new and diverse samples given an extremely limited number of samples from a domain.
Recent work has addressed the problem using a transfer learning approach, leveraging a GAN pretrained on a large-scale source domain dataset.
We propose Adaptation-Aware kernel Modulation (AdAM) to address general FSIG of different source-target domain proximity.
- Score: 33.191479192580275
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Few-shot image generation (FSIG) aims to learn to generate new and diverse
samples given an extremely limited number of samples from a domain, e.g., 10
training samples. Recent work has addressed the problem using a transfer learning
approach, leveraging a GAN pretrained on a large-scale source domain dataset
and adapting that model to the target domain based on very limited target
domain samples. Central to recent FSIG methods are knowledge preserving
criteria, which aim to select a subset of the source model's knowledge to be
preserved into the adapted model. However, a major limitation of existing
methods is that their knowledge preserving criteria consider only source
domain/source task, and they fail to consider target domain/adaptation task in
selecting source model's knowledge, casting doubt on their suitability for
setups of different proximity between source and target domain. Our work makes
two contributions. As our first contribution, we revisit recent FSIG works and
their experiments. Our key finding is that, under setups in which the assumption
of close proximity between source and target domains is relaxed, existing
state-of-the-art (SOTA) methods, which consider only the source domain/source task
in knowledge preserving, perform no better than a baseline fine-tuning method.
To address the limitation of existing methods, as our second contribution, we
propose Adaptation-Aware kernel Modulation (AdAM) to address general FSIG of
different source-target domain proximity. Extensive experimental results show
that the proposed method consistently achieves SOTA performance across
source/target domains of different proximity, including challenging setups where
source and target domains are further apart. Project Page:
https://yunqing-me.github.io/AdAM/
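The abstract describes modulating a subset of the pretrained model's kernels based on their importance for the adaptation task. The paper's actual probing and modulation parameterization are more involved; the following is only an illustrative sketch (the Fisher-style scoring, function names, and per-kernel scaling here are assumptions, not the paper's implementation):

```python
import numpy as np

def importance_scores(grads):
    # Fisher-style proxy for adaptation importance: mean squared
    # gradient magnitude per kernel, obtained from a short probing
    # run on the few target samples (illustrative, not the exact probe).
    return (grads ** 2).reshape(len(grads), -1).mean(axis=1)

def modulate_kernels(weights, scores, keep_ratio=0.5, scale=None):
    # Apply a learnable per-kernel scale only to the kernels deemed
    # most important for adaptation; the remaining kernels keep their
    # pretrained source-domain values, i.e., that knowledge is preserved.
    num = len(weights)
    scale = np.ones(num) if scale is None else scale
    k = max(1, int(num * keep_ratio))
    chosen = np.argsort(scores)[-k:]   # top-k most important kernels
    out = weights.copy()
    out[chosen] = scale[chosen, None, None] * weights[chosen]
    return out, chosen
```

The key point is the selection criterion: because the scores come from probing with the target samples, the choice of which kernels to modulate is adaptation-aware rather than determined by the source task alone.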
Related papers
- Revisiting the Domain Shift and Sample Uncertainty in Multi-source
Active Domain Transfer [69.82229895838577]
Active Domain Adaptation (ADA) aims to maximally boost model adaptation in a new target domain by actively selecting a limited number of target data to annotate.
This setting neglects the more practical scenario where training data are collected from multiple sources.
This motivates us to target a new and challenging setting of knowledge transfer that extends ADA from a single source domain to multiple source domains.
arXiv Detail & Related papers (2023-11-21T13:12:21Z)
- Adaptive Semantic Consistency for Cross-domain Few-shot Classification [27.176106714652327]
Cross-domain few-shot classification (CD-FSC) aims to identify novel target classes with a few samples.
We propose a simple plug-and-play Adaptive Semantic Consistency framework, which improves cross-domain robustness.
The proposed ASC enables explicit transfer of source domain knowledge to prevent the model from overfitting the target domain.
arXiv Detail & Related papers (2023-08-01T15:37:19Z)
- AdAM: Few-Shot Image Generation via Adaptation-Aware Kernel Modulation [71.58154388819887]
Few-shot image generation (FSIG) aims to generate new and diverse images given few (e.g., 10) training samples.
Recent work has addressed FSIG by leveraging a GAN pre-trained on a large-scale source domain and adapting it to the target domain with few target samples.
We propose Adaptation-Aware kernel Modulation (AdAM) for general FSIG of different source-target domain proximity.
arXiv Detail & Related papers (2023-07-04T03:56:43Z)
- Divide and Contrast: Source-free Domain Adaptation via Adaptive Contrastive Learning [122.62311703151215]
Divide and Contrast (DaC) aims to connect the good ends of both worlds while bypassing their limitations.
DaC divides the target data into source-like and target-specific samples, where either group of samples is treated with tailored goals.
We further align the source-like domain with the target-specific samples using a memory bank-based Maximum Mean Discrepancy (MMD) loss to reduce the distribution mismatch.
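The Maximum Mean Discrepancy loss mentioned above measures the distance between two feature distributions via kernel mean embeddings. A minimal sketch of the biased squared-MMD estimate with an RBF kernel follows (the memory bank and feature extractor are omitted; `sigma` and the function names are illustrative assumptions, not DaC's implementation):

```python
import numpy as np

def rbf_kernel(x, y, sigma=1.0):
    # Pairwise RBF (Gaussian) kernel matrix between rows of x and y.
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def mmd2(x, y, sigma=1.0):
    # Biased estimate of squared Maximum Mean Discrepancy:
    # E[k(x, x')] + E[k(y, y')] - 2 E[k(x, y)].
    kxx = rbf_kernel(x, x, sigma).mean()
    kyy = rbf_kernel(y, y, sigma).mean()
    kxy = rbf_kernel(x, y, sigma).mean()
    return kxx + kyy - 2 * kxy
```

Minimizing this quantity between source-like and target-specific features pulls the two groups toward a common distribution, which is the alignment role it plays in the DaC abstract.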
arXiv Detail & Related papers (2022-11-12T09:21:49Z)
- Meta-DMoE: Adapting to Domain Shift by Meta-Distillation from Mixture-of-Experts [33.21435044949033]
Most existing methods perform training on multiple source domains using a single model.
We propose a novel framework for unsupervised test-time adaptation, which is formulated as a knowledge distillation process.
arXiv Detail & Related papers (2022-10-08T02:28:10Z)
- Source-Free Domain Adaptation via Distribution Estimation [106.48277721860036]
Domain Adaptation aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain whose data distributions are different.
Recently, Source-Free Domain Adaptation (SFDA) has drawn much attention, which tries to tackle the domain adaptation problem without using source data.
In this work, we propose a novel framework called SFDA-DE to address SFDA task via source Distribution Estimation.
arXiv Detail & Related papers (2022-04-24T12:22:19Z)
- Generalized Source-free Domain Adaptation [47.907168218249694]
We propose a new domain adaptation paradigm called Generalized Source-free Domain Adaptation (G-SFDA)
For target performance our method is on par with or better than existing DA and SFDA methods, specifically it achieves state-of-the-art performance (85.4%) on VisDA.
arXiv Detail & Related papers (2021-08-03T16:34:12Z)
- Do We Really Need to Access the Source Data? Source Hypothesis Transfer for Unsupervised Domain Adaptation [102.67010690592011]
Unsupervised Domain Adaptation (UDA) aims to leverage the knowledge learned from a labeled source dataset to solve similar tasks in a new unlabeled domain.
Prior UDA methods typically require to access the source data when learning to adapt the model.
This work tackles a practical setting where only a trained source model is available, and investigates how to effectively utilize such a model without source data to solve UDA problems.
arXiv Detail & Related papers (2020-02-20T03:13:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.