Cross-Domain Few-Shot Classification via Inter-Source Stylization
- URL: http://arxiv.org/abs/2208.08015v2
- Date: Tue, 29 Aug 2023 09:05:58 GMT
- Title: Cross-Domain Few-Shot Classification via Inter-Source Stylization
- Authors: Huali Xu, Shuaifeng Zhi, Li Liu
- Abstract summary: Cross-Domain Few-Shot Classification (CDFSC) aims to accurately classify a target dataset with limited labelled data.
This paper proposes a solution that makes use of multiple source domains without the need for additional labeling costs.
- Score: 11.008292768447614
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The goal of Cross-Domain Few-Shot Classification (CDFSC) is to accurately
classify a target dataset with limited labelled data by exploiting the
knowledge of a richly labelled auxiliary dataset, despite the differences
between the domains of the two datasets. Some existing approaches require
labelled samples from multiple domains for model training. However, these
methods fail when the sample labels are scarce. To overcome this challenge,
this paper proposes a solution that makes use of multiple source domains
without additional labelling cost. Specifically, one of the source
domains is fully labelled, while the others are unlabelled. An Inter-Source
Stylization Network (ISSNet) is then introduced to perform stylization across
multiple source domains, enriching the data distribution and the model's
generalization capability. Experiments on 8 target datasets show that ISSNet
leverages unlabelled data from multiple source domains and significantly reduces the
negative impact of domain gaps on classification performance compared to
several baseline methods.
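The abstract does not spell out how ISSNet performs the inter-source stylization; a common way to transfer style between domains at the feature level is AdaIN-style statistic swapping. The sketch below (PyTorch, with made-up tensor shapes and the hypothetical function name `adain_stylize`) illustrates that general idea only, not the paper's actual layer:
```python
import torch

def adain_stylize(content_feat: torch.Tensor, style_feat: torch.Tensor,
                  eps: float = 1e-5) -> torch.Tensor:
    """Re-style `content_feat` (B, C, H, W) with the channel-wise mean/std of
    `style_feat`. Generic AdaIN-style operation for illustration only; the
    abstract does not specify ISSNet's actual stylization mechanism."""
    c_mean = content_feat.mean(dim=(2, 3), keepdim=True)
    c_std = content_feat.std(dim=(2, 3), keepdim=True) + eps
    s_mean = style_feat.mean(dim=(2, 3), keepdim=True)
    s_std = style_feat.std(dim=(2, 3), keepdim=True) + eps
    return (content_feat - c_mean) / c_std * s_std + s_mean

# Hypothetical usage: features from the labelled source adopt the style statistics
# of an unlabelled auxiliary source, enriching the training distribution.
labelled_feats = torch.randn(8, 64, 32, 32)    # labelled-source feature maps
unlabelled_feats = torch.randn(8, 64, 32, 32)  # unlabelled-source feature maps
stylized = adain_stylize(labelled_feats, unlabelled_feats)
```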
Related papers
- Data-Efficient CLIP-Powered Dual-Branch Networks for Source-Free Unsupervised Domain Adaptation [4.7589762171821715]
Source-free Unsupervised Domain Adaptation (SF-UDA) aims to transfer a model's performance from a labeled source domain to an unlabeled target domain without direct access to source samples.
We introduce a data-efficient, CLIP-powered dual-branch network (CDBN) to address the dual challenges of limited source data and privacy concerns.
CDBN achieves near state-of-the-art performance with far fewer source domain samples than existing methods across 31 transfer tasks on seven datasets.
arXiv Detail & Related papers (2024-10-21T09:25:49Z) - Instance Level Affinity-Based Transfer for Unsupervised Domain Adaptation [74.71931918541748]
We propose an instance affinity based criterion for source to target transfer during adaptation, called ILA-DA.
We first propose a reliable and efficient method to extract similar and dissimilar samples across source and target, and utilize a multi-sample contrastive loss to drive the domain alignment process.
We verify the effectiveness of ILA-DA by observing consistent improvements in accuracy over popular domain adaptation approaches on a variety of benchmark datasets.
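The summary mentions a multi-sample contrastive loss driving domain alignment. As an illustration of that general idea (not ILA-DA's exact objective), a simple InfoNCE-style loss over cross-domain positives and negatives could look like this:
```python
import torch

def cross_domain_contrastive_loss(anchor: torch.Tensor,
                                  positives: torch.Tensor,
                                  negatives: torch.Tensor,
                                  temperature: float = 0.1) -> torch.Tensor:
    """InfoNCE-style loss: pull an anchor feature towards cross-domain samples
    judged similar to it and push it away from dissimilar ones. Illustrative
    only; this is not the exact ILA-DA objective.
    anchor: (D,), positives: (P, D), negatives: (N, D), all L2-normalised."""
    pos_sim = positives @ anchor / temperature   # similarities to positives, (P,)
    neg_sim = negatives @ anchor / temperature   # similarities to negatives, (N,)
    all_sim = torch.cat([pos_sim, neg_sim])
    # negative log of the probability mass assigned to the positive set
    return torch.logsumexp(all_sim, dim=0) - torch.logsumexp(pos_sim, dim=0)
```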
arXiv Detail & Related papers (2021-04-03T01:33:14Z) - Inferring Latent Domains for Unsupervised Deep Domain Adaptation [54.963823285456925]
Unsupervised Domain Adaptation (UDA) refers to the problem of learning a model in a target domain where labeled data are not available.
This paper introduces a novel deep architecture which addresses the problem of UDA by automatically discovering latent domains in visual datasets.
We evaluate our approach on publicly available benchmarks, showing that it outperforms state-of-the-art domain adaptation methods.
arXiv Detail & Related papers (2021-03-25T14:33:33Z) - Adversarial Dual Distinct Classifiers for Unsupervised Domain Adaptation [67.83872616307008]
Unsupervised Domain Adaptation (UDA) attempts to recognize the unlabeled target samples by building a learning model from a differently-distributed labeled source domain.
In this paper, we propose a novel Adversarial Dual Distinct Classifiers Network (AD$^2$CN) to align the source and target data distributions while matching task-specific category boundaries.
To be specific, a domain-invariant feature generator is exploited to embed the source and target data into a latent common space with the guidance of discriminative cross-domain alignment.
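For intuition, dual-classifier adaptation methods typically train two distinct classifiers on top of a shared feature generator and use their prediction discrepancy on target data as an adversarial signal. The following is a generic sketch of that pattern with made-up layer sizes; the exact AD^2CN architecture and losses are not given in the summary above:
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Generic dual-classifier discrepancy sketch (not the exact AD^2CN design).
feat_dim, num_classes = 256, 10
generator = nn.Sequential(nn.Linear(784, feat_dim), nn.ReLU())  # domain-invariant feature generator
clf1 = nn.Linear(feat_dim, num_classes)                         # first task classifier
clf2 = nn.Linear(feat_dim, num_classes)                         # second, "distinct" classifier

def classifier_discrepancy(x_target: torch.Tensor) -> torch.Tensor:
    """Mean L1 distance between the two classifiers' predictions on target data:
    the classifiers are trained to maximise it, the generator to minimise it."""
    f = generator(x_target)
    p1 = F.softmax(clf1(f), dim=1)
    p2 = F.softmax(clf2(f), dim=1)
    return (p1 - p2).abs().mean()
```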
arXiv Detail & Related papers (2020-08-27T01:29:10Z) - Discriminative Cross-Domain Feature Learning for Partial Domain Adaptation [70.45936509510528]
Partial domain adaptation aims to adapt knowledge from a larger and more diverse source domain to a smaller target domain with fewer classes.
Recent practice on domain adaptation manages to extract effective features by incorporating the pseudo labels for the target domain.
It is essential to align target data with only a small set of source data.
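The pseudo-labelling step mentioned above is commonly implemented by keeping only high-confidence target predictions. A generic sketch of that step (not this paper's specific procedure):
```python
import torch
import torch.nn.functional as F

def confident_pseudo_labels(logits: torch.Tensor, threshold: float = 0.95):
    """Keep pseudo-labels only for target samples whose top predicted probability
    exceeds `threshold`. Generic pseudo-labelling, not the specific procedure of
    the partial-domain-adaptation paper summarised above.
    logits: (B, K) classifier outputs on target data."""
    probs = F.softmax(logits, dim=1)
    confidence, labels = probs.max(dim=1)
    keep = confidence >= threshold
    return labels[keep], keep  # pseudo-labels and the mask of retained samples
```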
arXiv Detail & Related papers (2020-08-26T03:18:53Z) - Clarinet: A One-step Approach Towards Budget-friendly Unsupervised Domain Adaptation [39.53192710720228]
In unsupervised domain adaptation (UDA), classifiers for the target domain are trained with massive true-label data from the source domain and unlabeled data from the target domain.
We consider a novel problem setting, named budget-friendly UDA (BFUDA), where the classifier for the target domain has to be trained with complementary-label data from the source domain and unlabeled data from the target domain.
The complementary label adversarial network (CLARINET) is proposed to solve the BFUDA problem.
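Complementary-label data only tell the learner which class a sample does not belong to. A simple illustrative loss for such data (not CLARINET's actual one-step adversarial objective) penalises the probability mass assigned to the complementary class:
```python
import torch
import torch.nn.functional as F

def complementary_label_loss(logits: torch.Tensor, comp_labels: torch.Tensor) -> torch.Tensor:
    """Penalise the probability assigned to the class each sample is known NOT to
    belong to. Illustrative complementary-label loss, not CLARINET's objective.
    logits: (B, K); comp_labels: (B,) indices of the complementary classes."""
    probs = F.softmax(logits, dim=1)
    p_comp = probs.gather(1, comp_labels.unsqueeze(1)).squeeze(1)
    return -torch.log((1.0 - p_comp).clamp_min(1e-12)).mean()
```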
arXiv Detail & Related papers (2020-07-29T05:31:58Z) - DACS: Domain Adaptation via Cross-domain Mixed Sampling [4.205692673448206]
Unsupervised domain adaptation attempts to train on labelled data from one domain, and simultaneously learn from unlabelled data in the domain of interest.
We propose DACS: Domain Adaptation via Cross-domain mixed Sampling, which mixes images from the two domains along with the corresponding labels and pseudo-labels.
We demonstrate the effectiveness of our solution by achieving state-of-the-art results for GTA5 to Cityscapes.
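A minimal sketch of the cross-domain mixing idea, in the spirit of DACS but not its exact implementation: pixels of selected classes are copied from a labelled source image onto a target image, and the label map is mixed with the target pseudo-label map using the same mask.
```python
import torch

def cross_domain_mix(src_img, src_label, tgt_img, tgt_pseudo, classes):
    """Copy the pixels of the given classes from a labelled source image onto a
    target image and mix the labels with the target pseudo-labels using the same
    mask. ClassMix-style sketch only, not DACS's exact implementation.
    src_img/tgt_img: (3, H, W); src_label/tgt_pseudo: (H, W) class-index maps."""
    mask = torch.zeros_like(src_label, dtype=torch.bool)
    for c in classes:
        mask |= (src_label == c)
    mixed_img = torch.where(mask.unsqueeze(0), src_img, tgt_img)
    mixed_label = torch.where(mask, src_label, tgt_pseudo)
    return mixed_img, mixed_label
```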
arXiv Detail & Related papers (2020-07-17T00:43:11Z) - Towards Fair Cross-Domain Adaptation via Generative Learning [50.76694500782927]
Domain Adaptation (DA) aims to adapt a model trained on a well-labeled source domain to an unlabeled target domain with a different data distribution.
We develop a novel Generative Few-shot Cross-domain Adaptation (GFCA) algorithm for fair cross-domain classification.
arXiv Detail & Related papers (2020-03-04T23:25:09Z) - Multi-source Domain Adaptation for Visual Sentiment Classification [92.53780541232773]
We propose a novel multi-source domain adaptation (MDA) method, termed Multi-source Sentiment Generative Adversarial Network (MSGAN).
To handle data from multiple source domains, MSGAN learns to find a unified sentiment latent space where data from both the source and target domains share a similar distribution.
Extensive experiments conducted on four benchmark datasets demonstrate that MSGAN significantly outperforms the state-of-the-art MDA approaches for visual sentiment classification.
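The unified latent space described above is typically obtained with a GAN-style game between a feature encoder and a domain discriminator. The sketch below shows only that generic pattern, with made-up layer sizes; MSGAN's multi-source, sentiment-specific design is more elaborate.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Generic GAN-style feature alignment: the discriminator learns to separate
# source from target features while the encoder learns to fool it.
encoder = nn.Sequential(nn.Linear(2048, 128), nn.ReLU())
discriminator = nn.Linear(128, 1)

def domain_adversarial_losses(x_src: torch.Tensor, x_tgt: torch.Tensor):
    f_src, f_tgt = encoder(x_src), encoder(x_tgt)
    ones_s = torch.ones(x_src.size(0), 1)
    zeros_t = torch.zeros(x_tgt.size(0), 1)
    # Discriminator step: source -> 1, target -> 0 (features detached)
    d_loss = (F.binary_cross_entropy_with_logits(discriminator(f_src.detach()), ones_s)
              + F.binary_cross_entropy_with_logits(discriminator(f_tgt.detach()), zeros_t))
    # Encoder step: make target features indistinguishable from source
    g_loss = F.binary_cross_entropy_with_logits(discriminator(f_tgt), torch.ones(x_tgt.size(0), 1))
    return d_loss, g_loss
```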
arXiv Detail & Related papers (2020-01-12T08:37:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.