High-level semantic feature matters few-shot unsupervised domain
adaptation
- URL: http://arxiv.org/abs/2301.01956v1
- Date: Thu, 5 Jan 2023 08:39:52 GMT
- Title: High-level semantic feature matters few-shot unsupervised domain
adaptation
- Authors: Lei Yu, Wanqi Yang, Shengqi Huang, Lei Wang, Ming Yang
- Abstract summary: We propose a novel task-specific semantic feature learning method (TSECS) for FS-UDA.
TSECS learns high-level semantic features for image-to-class similarity measurement.
We show that the proposed method significantly outperforms SOTA methods in FS-UDA by a large margin.
- Score: 15.12545632709954
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In few-shot unsupervised domain adaptation (FS-UDA), most existing methods
followed the few-shot learning (FSL) methods to leverage the low-level local
features (learned from conventional convolutional models, e.g., ResNet) for
classification. However, the goals of FS-UDA and FSL are related yet distinct,
since FS-UDA aims to classify samples in the target domain rather than the
source domain. We found that local features are insufficient for FS-UDA: they
can introduce noise or bias into classification and cannot be used to
effectively align the two domains. To address these issues, we aim to refine
the local features to be more discriminative and more relevant to
classification.
Thus, we propose a novel task-specific semantic feature learning method (TSECS)
for FS-UDA. TSECS learns high-level semantic features for image-to-class
similarity measurement. Based on the high-level features, we design a
cross-domain self-training strategy to leverage the few labeled samples in
source domain to build the classifier in target domain. In addition, we
minimize the KL divergence of the high-level feature distributions between
source and target domains to shorten the distance of the samples between the
two domains. Extensive experiments on DomainNet show that the proposed method
significantly outperforms SOTA methods in FS-UDA by a large margin (i.e., 10%).
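The abstract describes two measurable ingredients: image-to-class similarity based on high-level features, and minimizing the KL divergence between the source and target feature distributions. A minimal numpy sketch of that alignment idea is below; the function name, the use of class prototypes, and the batch-mean class distributions are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax over the given axis.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def kl_alignment_loss(src_feats, tgt_feats, prototypes):
    """Hypothetical sketch of KL-based domain alignment.

    src_feats, tgt_feats: (batch, dim) high-level features per domain.
    prototypes: (num_classes, dim) class prototypes (assumed here).
    Returns KL(p || q) between the mean image-to-class similarity
    distributions of the source (p) and target (q) batches.
    """
    # Image-to-class similarities -> per-domain class distributions.
    p = softmax(src_feats @ prototypes.T).mean(axis=0)  # source
    q = softmax(tgt_feats @ prototypes.T).mean(axis=0)  # target
    eps = 1e-8  # avoid log(0)
    return float(np.sum(p * np.log((p + eps) / (q + eps))))
```

Driving this loss toward zero pushes the two domains' class-similarity distributions together, which is the stated purpose of the KL term in the abstract.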
Related papers
- Domain Adaptive Few-Shot Open-Set Learning [36.39622440120531]
We propose Domain Adaptive Few-Shot Open Set Recognition (DA-FSOS) and introduce a meta-learning-based architecture named DAFOS-NET.
Our training approach ensures that DAFOS-NET can generalize well to new scenarios in the target domain.
We present three benchmarks for DA-FSOS based on the Office-Home, mini-ImageNet/CUB, and DomainNet datasets.
arXiv Detail & Related papers (2023-09-22T12:04:47Z) - Unsupervised Domain Adaptation via Style-Aware Self-intermediate Domain [52.783709712318405]
Unsupervised domain adaptation (UDA) has attracted considerable attention, which transfers knowledge from a label-rich source domain to a related but unlabeled target domain.
We propose a novel style-aware feature fusion method (SAFF) to bridge the large domain gap and transfer knowledge while alleviating the loss of class-discriminative information.
arXiv Detail & Related papers (2022-09-05T10:06:03Z) - From Big to Small: Adaptive Learning to Partial-Set Domains [94.92635970450578]
Domain adaptation targets at knowledge acquisition and dissemination from a labeled source domain to an unlabeled target domain under distribution shift.
Recent advances show that deep pre-trained models of large scale endow rich knowledge to tackle diverse downstream tasks of small scale.
This paper introduces Partial Domain Adaptation (PDA), a learning paradigm that relaxes the identical-class-space assumption so that the source class space subsumes the target class space.
arXiv Detail & Related papers (2022-03-14T07:02:45Z) - Improving Transferability of Domain Adaptation Networks Through Domain
Alignment Layers [1.3766148734487902]
Multi-source unsupervised domain adaptation (MSDA) aims at learning a predictor for an unlabeled domain by leveraging weak knowledge from a bag of source models.
We propose to embed Multi-Source version of DomaIn Alignment Layers (MS-DIAL) at different levels of the predictor.
Our approach can improve state-of-the-art MSDA methods, yielding relative gains of up to +30.64% on their classification accuracies.
arXiv Detail & Related papers (2021-09-06T18:41:19Z) - Few-shot Unsupervised Domain Adaptation with Image-to-class Sparse
Similarity Encoding [24.64900089320843]
This paper investigates a valuable setting called few-shot unsupervised domain adaptation (FS-UDA).
In this setting, the source domain data are labelled but with only a few shots per category, while the target domain data are unlabelled.
We develop a general UDA model to solve the few-shot labeled data per category and the domain adaptation between support and query sets.
arXiv Detail & Related papers (2021-08-06T06:15:02Z) - Instance Level Affinity-Based Transfer for Unsupervised Domain
Adaptation [74.71931918541748]
We propose an instance affinity based criterion for source to target transfer during adaptation, called ILA-DA.
We first propose a reliable and efficient method to extract similar and dissimilar samples across source and target, and utilize a multi-sample contrastive loss to drive the domain alignment process.
We verify the effectiveness of ILA-DA by observing consistent improvements in accuracy over popular domain adaptation approaches on a variety of benchmark datasets.
arXiv Detail & Related papers (2021-04-03T01:33:14Z) - Effective Label Propagation for Discriminative Semi-Supervised Domain
Adaptation [76.41664929948607]
Semi-supervised domain adaptation (SSDA) methods have demonstrated great potential in large-scale image classification tasks.
We present a novel and effective method to tackle this problem by using effective inter-domain and intra-domain semantic information propagation.
Our source code and pre-trained models will be released soon.
arXiv Detail & Related papers (2020-12-04T14:28:19Z) - Domain Conditioned Adaptation Network [90.63261870610211]
We propose a Domain Conditioned Adaptation Network (DCAN) to excite distinct convolutional channels with a domain conditioned channel attention mechanism.
This is the first work to explore the domain-wise convolutional channel activation for deep DA networks.
arXiv Detail & Related papers (2020-05-14T04:23:24Z) - Towards Fair Cross-Domain Adaptation via Generative Learning [50.76694500782927]
Domain Adaptation (DA) targets at adapting a model trained over the well-labeled source domain to the unlabeled target domain lying in different distributions.
We develop a novel Generative Few-shot Cross-domain Adaptation (GFCA) algorithm for fair cross-domain classification.
arXiv Detail & Related papers (2020-03-04T23:25:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.