Domain Adaptation for Learning Generator from Paired Few-Shot Data
- URL: http://arxiv.org/abs/2102.12765v1
- Date: Thu, 25 Feb 2021 10:11:44 GMT
- Title: Domain Adaptation for Learning Generator from Paired Few-Shot Data
- Authors: Chun-Chih Teng and Pin-Yu Chen and Wei-Chen Chiu
- Abstract summary: We propose a Paired Few-shot GAN (PFS-GAN) model for learning generators from sufficient source-domain data and only a few target-domain samples.
Our method achieves better quantitative and qualitative results, with higher diversity in the generated target-domain data, than several baselines.
- Score: 72.04430033118426
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a Paired Few-shot GAN (PFS-GAN) model for learning generators
from sufficient source-domain data and only a few target-domain samples. While
generative model learning typically needs large-scale training data, our PFS-GAN
exploits not only the concept of few-shot learning but also domain shift to
transfer knowledge across domains, which alleviates the issue of obtaining a
low-quality generator when training only on target-domain data. The cross-domain datasets are
assumed to have two properties: (1) each target-domain sample has a
source-domain counterpart, and (2) the two domains share similar content
information but differ in appearance. Our PFS-GAN aims to learn a
disentangled representation of images, composed of domain-invariant
content features and domain-specific appearance features. Furthermore, a
relation loss is introduced on the content features while shifting the
appearance features to increase the structural diversity. Extensive experiments
show that our method achieves better quantitative and qualitative results, with
higher diversity, on the generated target-domain data compared to several
baselines.
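The abstract describes disentangling each image into domain-invariant content and domain-specific appearance, with a relation loss constraining content features across the paired domains. Below is a minimal sketch of that idea, assuming a PyTorch setup; the module shapes, the pairwise-distance relation loss, and the reconstruction term are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of the content/appearance disentanglement idea (not the authors' code).
# Assumptions: PyTorch, 32x32 RGB images, toy layer sizes, and a pairwise distance
# matrix as one plausible reading of the "relation loss".
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Maps an image to a flat feature vector (content or appearance)."""
    def __init__(self, out_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, out_dim),
        )

    def forward(self, x):
        return self.net(x)

class Generator(nn.Module):
    """Decodes a (content, appearance) pair back into an image."""
    def __init__(self, content_dim, appearance_dim):
        super().__init__()
        self.fc = nn.Linear(content_dim + appearance_dim, 64 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, content, appearance):
        h = self.fc(torch.cat([content, appearance], dim=1))
        return self.net(h.view(-1, 64, 8, 8))

def relation_loss(content_src, content_tgt):
    """Match pairwise relations among content features across the two domains."""
    return F.mse_loss(torch.cdist(content_tgt, content_tgt),
                      torch.cdist(content_src, content_src))

if __name__ == "__main__":
    enc_content, enc_appearance = Encoder(16), Encoder(8)
    gen = Generator(content_dim=16, appearance_dim=8)
    src = torch.randn(4, 3, 32, 32)   # source-domain images
    tgt = torch.randn(4, 3, 32, 32)   # their paired target-domain counterparts
    c_src, c_tgt = enc_content(src), enc_content(tgt)
    fake_tgt = gen(c_src, enc_appearance(tgt))  # source content + target appearance
    loss = relation_loss(c_src, c_tgt) + F.l1_loss(fake_tgt, tgt)
    print(fake_tgt.shape, float(loss))
```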
Related papers
- TAL: Two-stream Adaptive Learning for Generalizable Person
Re-identification [115.31432027711202]
We argue that both domain-specific and domain-invariant features are crucial for improving the generalization ability of re-id models.
We propose two-stream adaptive learning (TAL) to simultaneously model these two kinds of information.
Our framework can be applied to both single-source and multi-source domain generalization tasks.
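The summary above points to a two-stream design: one stream for domain-invariant features shared across sources, another for domain-specific ones. A minimal, hedged sketch follows, assuming a PyTorch setup; the per-domain linear branches and the additive fusion are assumptions, not the TAL architecture.

```python
# Hedged sketch of a two-stream feature extractor (not the TAL paper's code).
import torch
import torch.nn as nn

class TwoStream(nn.Module):
    def __init__(self, in_dim, feat_dim, num_domains):
        super().__init__()
        # Stream 1: shared, domain-invariant projection.
        self.invariant = nn.Linear(in_dim, feat_dim)
        # Stream 2: one domain-specific branch per source domain.
        self.specific = nn.ModuleList(
            [nn.Linear(in_dim, feat_dim) for _ in range(num_domains)]
        )

    def forward(self, x, domain_id):
        # Fuse the two kinds of information (here: simple addition).
        return self.invariant(x) + self.specific[domain_id](x)

model = TwoStream(in_dim=128, feat_dim=64, num_domains=3)
features = model(torch.randn(16, 128), domain_id=1)
print(features.shape)  # torch.Size([16, 64])
```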
arXiv Detail & Related papers (2021-11-29T01:27:42Z) - Adversarial Dual Distinct Classifiers for Unsupervised Domain Adaptation [67.83872616307008]
Unsupervised Domain Adaptation (UDA) attempts to recognize unlabeled target samples by building a learning model from a differently-distributed labeled source domain.
In this paper, we propose a novel Adversarial Dual Distinct Classifiers Network (AD$^2$CN) to simultaneously align the source and target data distributions and match task-specific category boundaries.
To be specific, a domain-invariant feature generator is exploited to embed the source and target data into a latent common space with the guidance of discriminative cross-domain alignment.
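One common way to realize dual distinct classifiers over a shared latent space is the discrepancy recipe sketched below: both classifiers are trained on labeled source data, and their disagreement on target samples is maximized by the classifiers and minimized by the feature generator. This is a hedged, generic illustration assuming PyTorch, not the exact AD$^2$CN objective; all dimensions are invented.

```python
# Hedged sketch of a dual-classifier discrepancy step (not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

feat = nn.Sequential(nn.Linear(100, 64), nn.ReLU(), nn.Linear(64, 32))  # shared feature generator
clf1, clf2 = nn.Linear(32, 10), nn.Linear(32, 10)                       # two distinct task classifiers

x_src, y_src = torch.randn(8, 100), torch.randint(0, 10, (8,))  # labeled source batch
x_tgt = torch.randn(8, 100)                                     # unlabeled target batch

# Supervised loss on the source domain keeps both decision boundaries task-specific.
f_src = feat(x_src)
loss_sup = F.cross_entropy(clf1(f_src), y_src) + F.cross_entropy(clf2(f_src), y_src)

# Classifier disagreement on target samples: maximized when updating the
# classifiers, minimized when updating the generator (adversarial alignment).
f_tgt = feat(x_tgt)
p1, p2 = F.softmax(clf1(f_tgt), dim=1), F.softmax(clf2(f_tgt), dim=1)
discrepancy = (p1 - p2).abs().mean()
print(float(loss_sup), float(discrepancy))
```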
arXiv Detail & Related papers (2020-08-27T01:29:10Z) - Adaptively-Accumulated Knowledge Transfer for Partial Domain Adaptation [66.74638960925854]
Partial domain adaptation (PDA) deals with a realistic and challenging problem in which the source-domain label space subsumes the target-domain label space.
We propose an Adaptively-Accumulated Knowledge Transfer framework (A$^2$KT) to align the relevant categories across the two domains.
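To make "align the relevant categories" concrete, here is a hedged sketch of a standard partial-domain-adaptation ingredient: target predictions are accumulated into per-class weights so that source categories that rarely appear in the target contribute less to the source loss. It assumes PyTorch and invented sizes, and is not the A$^2$KT framework itself.

```python
# Hedged sketch of class-level re-weighting for partial DA (illustrative only).
import torch
import torch.nn.functional as F

num_classes = 10
probs_tgt = F.softmax(torch.randn(32, num_classes), dim=1)  # predictions on unlabeled target data

# Accumulate soft predictions into per-class weights and normalize by the max.
class_weight = probs_tgt.mean(dim=0)
class_weight = class_weight / class_weight.max()

# Down-weight source samples whose classes are (predicted to be) absent
# from the target label space.
logits_src = torch.randn(32, num_classes)
labels_src = torch.randint(0, num_classes, (32,))
per_sample = F.cross_entropy(logits_src, labels_src, reduction="none")
loss_src = (class_weight[labels_src] * per_sample).mean()
print(class_weight, float(loss_src))
```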
arXiv Detail & Related papers (2020-08-27T00:53:43Z) - Physically-Constrained Transfer Learning through Shared Abundance Space
for Hyperspectral Image Classification [14.840925517957258]
We propose a new transfer learning scheme to bridge the gap between the source and target domains.
The proposed method is referred to as physically-constrained transfer learning through shared abundance space.
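"Shared abundance space" refers to the linear mixing model common in hyperspectral imaging: every pixel is a nonnegative, sum-to-one combination of endmember spectra. The sketch below only illustrates that physical constraint with domain-specific endmembers and shared abundances; the dimensions and random data are assumptions, and it is not the paper's transfer scheme.

```python
# Hedged illustration of a shared abundance space under the linear mixing model.
import torch

bands_src, bands_tgt, n_end, n_pix = 100, 120, 5, 200
E_src = torch.rand(bands_src, n_end)     # source-domain endmember spectra
E_tgt = torch.rand(bands_tgt, n_end)     # target-domain endmember spectra

A = torch.rand(n_end, n_pix)
A = A / A.sum(dim=0, keepdim=True)       # physical constraint: abundances >= 0, sum to 1

X_src = E_src @ A                        # source pixels explained by the shared abundances
X_tgt = E_tgt @ A                        # target pixels explained by the same abundances
print(X_src.shape, X_tgt.shape)
```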
arXiv Detail & Related papers (2020-08-19T17:41:37Z) - Domain Adaptation for Semantic Parsing [68.81787666086554]
We propose a novel semantic parser for domain adaptation, where we have much fewer annotated data in the target domain than in the source domain.
Our semantic parser benefits from a two-stage coarse-to-fine framework and thus can provide different and accurate treatments for the two stages.
Experiments on a benchmark dataset show that our method consistently outperforms several popular domain adaptation strategies.
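As a toy illustration of what a two-stage coarse-to-fine treatment can look like in semantic parsing, the sketch below first picks a coarse logical-form template and then fills in its fine-grained slots. The templates, rules, and example query are invented and purely illustrative; the paper's parser is a learned model, not rules.

```python
# Toy, rule-based coarse-to-fine parse (illustrative; not the paper's model).
TEMPLATES = {
    "count": "SELECT COUNT(*) FROM {table}",
    "list": "SELECT * FROM {table}",
}

def coarse_stage(query: str) -> str:
    """Stage 1: choose the coarse logical-form template."""
    return "count" if "how many" in query.lower() else "list"

def fine_stage(query: str, template_key: str) -> str:
    """Stage 2: fill the template's fine-grained slots."""
    table = "flights" if "flight" in query.lower() else "items"
    return TEMPLATES[template_key].format(table=table)

query = "How many flights leave today?"
print(fine_stage(query, coarse_stage(query)))  # SELECT COUNT(*) FROM flights
```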
arXiv Detail & Related papers (2020-06-23T14:47:41Z) - Towards Fair Cross-Domain Adaptation via Generative Learning [50.76694500782927]
Domain Adaptation (DA) aims to adapt a model trained on a well-labeled source domain to an unlabeled target domain with a different distribution.
We develop a novel Generative Few-shot Cross-domain Adaptation (GFCA) algorithm for fair cross-domain classification.
arXiv Detail & Related papers (2020-03-04T23:25:09Z)