Inductive Unsupervised Domain Adaptation for Few-Shot Classification via
Clustering
- URL: http://arxiv.org/abs/2006.12816v1
- Date: Tue, 23 Jun 2020 08:17:48 GMT
- Title: Inductive Unsupervised Domain Adaptation for Few-Shot Classification via
Clustering
- Authors: Xin Cong, Bowen Yu, Tingwen Liu, Shiyao Cui, Hengzhu Tang, Bin Wang
- Abstract summary: Few-shot classification tends to struggle when it needs to adapt to diverse domains.
We introduce a framework, DaFeC, to improve Domain adaptation performance for Few-shot classification via Clustering.
Our approach outperforms previous work with absolute gains (in classification accuracy) of 4.95%, 9.55%, 3.99% and 11.62% under four few-shot settings.
- Score: 16.39667909141402
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Few-shot classification tends to struggle when it needs to adapt to diverse
domains. Due to the non-overlapping label space between domains, the
performance of conventional domain adaptation is limited. Previous work tackles
the problem in a transductive manner, by assuming access to the full set of
test data, which is too restrictive for many real-world applications. In this
paper, we set out to tackle this issue by introducing an inductive framework,
DaFeC, to improve Domain adaptation performance for Few-shot classification via
Clustering. We first build a representation extractor to derive features for
unlabeled data from the target domain (no test data is necessary) and then
group them with a cluster miner. The generated pseudo-labeled data and the
labeled source-domain data are used as supervision to update the parameters of
the few-shot classifier. In order to derive high-quality pseudo labels, we
propose a Clustering Promotion Mechanism, to learn better features for the
target domain via Similarity Entropy Minimization and Adversarial Distribution
Alignment, which are combined with a Cosine Annealing Strategy. Experiments are
performed on the FewRel 2.0 dataset. Our approach outperforms previous work
with absolute gains (in classification accuracy) of 4.95%, 9.55%, 3.99% and
11.62%, respectively, under four few-shot settings.
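The pipeline described in the abstract (extract features for unlabeled target data, mine clusters to obtain pseudo-labels, then train the few-shot classifier on the pseudo-labeled target data together with labeled source data) can be sketched as follows. This is a minimal illustration only: it uses plain k-means as the cluster miner and a prototype (nearest-class-mean) classifier, and it does not reproduce the paper's trained representation extractor or its Clustering Promotion Mechanism. All function names and data are hypothetical.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means, standing in for the 'cluster miner' that produces pseudo-labels."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest center.
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        # Recompute centers; keep the old center if a cluster emptied.
        centers = np.stack([X[labels == j].mean(0) if (labels == j).any()
                            else centers[j] for j in range(k)])
    return labels

def prototype_classify(support_X, support_y, query_X):
    """Prototype-style few-shot classifier: nearest class mean in feature space."""
    protos = np.stack([support_X[support_y == c].mean(0)
                       for c in np.unique(support_y)])
    d = ((query_X[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
    return d.argmin(1)

# Stand-in "features" for unlabeled target-domain data: two separated blobs.
rng = np.random.default_rng(1)
target = np.concatenate([rng.normal(0, 0.3, (20, 2)),
                         rng.normal(5, 0.3, (20, 2))])
pseudo = kmeans(target, k=2)               # pseudo-labels from clustering
pred = prototype_classify(target, pseudo,  # classify two held-out queries
                          np.array([[0.0, 0.0], [5.0, 5.0]]))
```

In the paper the supervision for the classifier would combine these pseudo-labeled target examples with the labeled source-domain data; here only the clustering-to-pseudo-label step is shown.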
Related papers
- Unsupervised Domain Adaptation via Distilled Discriminative Clustering [45.39542287480395]
We re-cast the domain adaptation problem as discriminative clustering of target data.
We propose to jointly train the network using parallel, supervised learning objectives over labeled source data.
We conduct careful ablation studies and extensive experiments on five popular benchmark datasets.
arXiv Detail & Related papers (2023-02-23T13:03:48Z)
- Source-Free Domain Adaptation via Distribution Estimation [106.48277721860036]
Domain Adaptation aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain whose data distributions are different.
Recently, Source-Free Domain Adaptation (SFDA) has drawn much attention, which tries to tackle the domain adaptation problem without using source data.
In this work, we propose a novel framework called SFDA-DE to address SFDA task via source Distribution Estimation.
arXiv Detail & Related papers (2022-04-24T12:22:19Z)
- Low-confidence Samples Matter for Domain Adaptation [47.552605279925736]
Domain adaptation (DA) aims to transfer knowledge from a label-rich source domain to a related but label-scarce target domain.
We propose a novel contrastive learning method by processing low-confidence samples.
We evaluate the proposed method in both unsupervised and semi-supervised DA settings.
arXiv Detail & Related papers (2022-02-06T15:45:45Z)
- Cross-domain Contrastive Learning for Unsupervised Domain Adaptation [108.63914324182984]
Unsupervised domain adaptation (UDA) aims to transfer knowledge learned from a fully-labeled source domain to a different unlabeled target domain.
We build upon contrastive self-supervised learning to align features so as to reduce the domain discrepancy between training and testing sets.
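The feature-alignment idea summarized here can be illustrated with a generic InfoNCE-style contrastive loss, which pulls an anchor feature toward a positive counterpart from the other domain and away from negatives. This is a hedged sketch with made-up inputs and an illustrative pairing scheme, not the paper's exact loss.

```python
import numpy as np

def info_nce(anchor, positive, negatives, tau=0.1):
    """anchor, positive: (d,); negatives: (n, d). Returns a scalar InfoNCE loss."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)
    # Logit 0 is the positive pair; the rest are negatives.
    logits = np.array([cos(anchor, positive)] +
                      [cos(anchor, n) for n in negatives]) / tau
    logits -= logits.max()  # numerical stability
    return -np.log(np.exp(logits[0]) / np.exp(logits).sum())

src = np.array([1.0, 0.0])                       # source-domain feature
tgt_pos = np.array([0.9, 0.1])                   # same class, target domain
tgt_negs = np.array([[0.0, 1.0], [-1.0, 0.2]])   # other classes, target domain
loss_aligned = info_nce(src, tgt_pos, tgt_negs)
loss_misaligned = info_nce(src, tgt_negs[0],
                           np.array([tgt_pos, tgt_negs[1]]))
```

Minimizing such a loss over cross-domain pairs encourages same-class features from the two domains to coincide, which is the sense in which contrastive learning reduces domain discrepancy.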
arXiv Detail & Related papers (2021-06-10T06:32:30Z)
- Correlated Adversarial Joint Discrepancy Adaptation Network [6.942003070153651]
We propose a novel approach called correlated adversarial joint discrepancy adaptation network (CAJNet).
By training the joint features, we can align the marginal and conditional distributions between the two domains.
In addition, we introduce a probability-based top-$\mathcal{K}$ correlated label ($\mathcal{K}$-label), which is a powerful indicator of the target domain.
arXiv Detail & Related papers (2021-05-18T19:52:08Z)
- Cross-Domain Adaptive Clustering for Semi-Supervised Domain Adaptation [85.6961770631173]
In semi-supervised domain adaptation, a few labeled samples per class in the target domain guide features of the remaining target samples to aggregate around them.
We propose a novel approach called Cross-domain Adaptive Clustering to address this problem.
arXiv Detail & Related papers (2021-04-19T16:07:32Z)
- Instance Level Affinity-Based Transfer for Unsupervised Domain Adaptation [74.71931918541748]
We propose an instance affinity based criterion for source to target transfer during adaptation, called ILA-DA.
We first propose a reliable and efficient method to extract similar and dissimilar samples across source and target, and utilize a multi-sample contrastive loss to drive the domain alignment process.
We verify the effectiveness of ILA-DA by observing consistent improvements in accuracy over popular domain adaptation approaches on a variety of benchmark datasets.
arXiv Detail & Related papers (2021-04-03T01:33:14Z)
- Exploring Category-Agnostic Clusters for Open-Set Domain Adaptation [138.29273453811945]
We present Self-Ensembling with Category-agnostic Clusters (SE-CC) -- a novel architecture that steers domain adaptation with category-agnostic clusters in the target domain.
Clustering is performed over all the unlabeled target samples to obtain category-agnostic clusters, which reveal the underlying data-space structure peculiar to the target domain.
arXiv Detail & Related papers (2020-06-11T16:19:02Z)
- Towards Fair Cross-Domain Adaptation via Generative Learning [50.76694500782927]
Domain Adaptation (DA) aims to adapt a model trained on the well-labeled source domain to the unlabeled target domain, which lies in a different distribution.
We develop a novel Generative Few-shot Cross-domain Adaptation (GFCA) algorithm for fair cross-domain classification.
arXiv Detail & Related papers (2020-03-04T23:25:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.