Semi-supervised Domain Adaptation via Prototype-based Multi-level
Learning
- URL: http://arxiv.org/abs/2305.02693v3
- Date: Fri, 22 Dec 2023 05:39:11 GMT
- Title: Semi-supervised Domain Adaptation via Prototype-based Multi-level
Learning
- Authors: Xinyang Huang, Chuang Zhu and Wenkai Chen
- Abstract summary: In semi-supervised domain adaptation (SSDA), a few labeled target samples of each class help the model to transfer knowledge representation from the fully labeled source domain to the target domain.
We propose a Prototype-based Multi-level Learning (ProML) framework to better tap the potential of labeled target samples.
- Score: 4.232614032390374
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In semi-supervised domain adaptation (SSDA), a few labeled target samples of
each class help the model to transfer knowledge representation from the fully
labeled source domain to the target domain. Many existing methods ignore the
benefit of exploiting the labeled target samples at multiple levels. To make
better use of this additional data, we propose a novel Prototype-based
Multi-level Learning (ProML) framework that taps the full potential of the
labeled target samples. To achieve intra-domain adaptation, we first introduce a
pseudo-label aggregation based on intra-domain optimal transport that helps
the model align the feature distribution of unlabeled target samples with the
class prototypes. At the inter-domain level, we propose a cross-domain
alignment loss that helps the model use the target prototypes for cross-domain
knowledge transfer.
We further propose a dual consistency loss based on prototype similarity and
a linear classifier to promote discriminative learning of compact target
feature representations at the batch level. Extensive experiments on three
datasets, DomainNet, VisDA2017, and Office-Home, demonstrate that our proposed
method achieves state-of-the-art performance in SSDA.
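The abstract does not spell out the aggregation step. Below is a minimal sketch of what prototype-based optimal-transport pseudo-labeling could look like, assuming an entropic Sinkhorn formulation with uniform marginals; the function names, the Sinkhorn variant, and all hyperparameters are illustrative assumptions, not the authors' released implementation.

```python
# Illustrative sketch, NOT the authors' ProML code. Assumes an entropic
# Sinkhorn OT formulation with uniform class and sample marginals.
import torch
import torch.nn.functional as F

def class_prototypes(feats, labels, num_classes):
    """Mean feature per class, computed from the few labeled target
    samples (SSDA provides at least one labeled sample per class)."""
    protos = feats.new_zeros(num_classes, feats.size(1))
    for c in range(num_classes):
        protos[c] = feats[labels == c].mean(dim=0)
    return F.normalize(protos, dim=1)

def ot_pseudo_labels(feats, protos, eps=0.05, n_iters=3):
    """Softly assign unlabeled target features to class prototypes via
    Sinkhorn iterations over a prototype-similarity score matrix."""
    sims = F.normalize(feats, dim=1) @ protos.t()   # [N, C] similarities
    q = torch.exp(sims / eps).t()                   # [C, N] transport plan
    q /= q.sum()
    num_classes, num_samples = q.shape
    for _ in range(n_iters):
        q /= q.sum(dim=1, keepdim=True)             # balance class marginal
        q /= num_classes
        q /= q.sum(dim=0, keepdim=True)             # balance sample marginal
        q /= num_samples
    return (q * num_samples).t()                    # [N, C], rows sum to 1
```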
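The batch-level dual consistency can likewise be pictured as an agreement term between the two predictive views the abstract names, prototype similarity and the linear classifier; the symmetric-KL choice below is an assumption made for illustration only.

```python
# Illustrative sketch; the symmetric KL agreement term is an assumption
# about how the batch-level dual consistency could be instantiated.
import torch
import torch.nn.functional as F

def dual_consistency_loss(feats, protos, classifier, tau=0.1):
    """Encourage the prototype-similarity prediction and the linear
    classifier's prediction to agree on every sample in the batch."""
    p_proto = F.softmax(F.normalize(feats, dim=1) @ protos.t() / tau, dim=1)
    p_cls = F.softmax(classifier(feats), dim=1)

    def kl(p, q):
        return (p * (p.clamp_min(1e-8).log() - q.clamp_min(1e-8).log())).sum(1)

    return 0.5 * (kl(p_proto, p_cls) + kl(p_cls, p_proto)).mean()
```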
Related papers
- Inter-Domain Mixup for Semi-Supervised Domain Adaptation [108.40945109477886]
Semi-supervised domain adaptation (SSDA) aims to bridge source and target domain distributions, with a small number of target labels available.
Existing SSDA work fails to make full use of label information from both source and target domains for feature alignment across domains.
This paper presents a novel SSDA approach, Inter-domain Mixup with Neighborhood Expansion (IDMNE), to tackle this issue.
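The summary does not detail IDMNE's mixing rule. The sketch below assumes standard mixup applied across a labeled source batch and a labeled target batch, which is only one plausible reading of "inter-domain mixup", not the paper's published code.

```python
# Illustrative guess at inter-domain mixup; not IDMNE's published code.
import torch

def inter_domain_mixup(x_src, y_src, x_tgt, y_tgt, alpha=1.0):
    """Convexly combine labeled source and labeled target samples;
    labels are assumed one-hot so they can be mixed the same way."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    x_mix = lam * x_src + (1.0 - lam) * x_tgt
    y_mix = lam * y_src + (1.0 - lam) * y_tgt
    return x_mix, y_mix
```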
arXiv Detail & Related papers (2024-01-21T10:20:46Z)
- Polycentric Clustering and Structural Regularization for Source-free Unsupervised Domain Adaptation [20.952542421577487]
Source-Free Domain Adaptation (SFDA) aims to solve the domain adaptation problem by transferring the knowledge learned from a pre-trained source model to an unseen target domain.
Most existing methods assign pseudo-labels to the target data by generating feature prototypes.
In this paper, a framework named PCSR is proposed to tackle SFDA via a novel intra-class Polycentric Clustering and Structural Regularization strategy.
arXiv Detail & Related papers (2022-10-14T02:20:48Z)
- Multi-Anchor Active Domain Adaptation for Semantic Segmentation [25.93409207335442]
Unsupervised domain adaptation has proven to be an effective approach for alleviating the intensive workload of manual annotation.
We propose a novel multi-anchor active learning strategy to assist domain adaptation for the semantic segmentation task.
arXiv Detail & Related papers (2021-08-18T07:33:13Z)
- Cross-Domain Adaptive Clustering for Semi-Supervised Domain Adaptation [85.6961770631173]
In semi-supervised domain adaptation, a few labeled samples per class in the target domain guide features of the remaining target samples to aggregate around them.
We propose a novel approach called Cross-domain Adaptive Clustering to address this problem.
arXiv Detail & Related papers (2021-04-19T16:07:32Z)
- Semi-Supervised Domain Adaptation with Prototypical Alignment and Consistency Learning [86.6929930921905]
This paper studies how much a few labeled target samples can help address domain shift.
To explore the full potential of landmarks, we incorporate a prototypical alignment (PA) module which calculates a target prototype for each class from the landmarks.
Specifically, we severely perturb the labeled images, making PA non-trivial to achieve and thus promoting model generalizability.
arXiv Detail & Related papers (2021-04-19T08:46:08Z)
- Instance Level Affinity-Based Transfer for Unsupervised Domain Adaptation [74.71931918541748]
We propose an instance-affinity-based criterion for source-to-target transfer during adaptation, called ILA-DA.
We first propose a reliable and efficient method to extract similar and dissimilar samples across the source and target domains, and utilize a multi-sample contrastive loss to drive the domain alignment process.
We verify the effectiveness of ILA-DA by observing consistent improvements in accuracy over popular domain adaptation approaches on a variety of benchmark datasets.
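A multi-sample contrastive loss of the kind described can be sketched as a supervised InfoNCE over the extracted similar and dissimilar sets; the exact formulation below is an assumption, not ILA-DA's actual loss.

```python
# Illustrative multi-sample contrastive term; not ILA-DA's actual loss.
import torch
import torch.nn.functional as F

def multi_sample_contrastive(anchor, positives, negatives, tau=0.1):
    """Pull an anchor toward several similar samples and push it away
    from dissimilar ones. Shapes: anchor [D], positives [P, D],
    negatives [N, D]."""
    a = F.normalize(anchor, dim=0)
    pos = (F.normalize(positives, dim=1) @ a) / tau   # [P] similarities
    neg = (F.normalize(negatives, dim=1) @ a) / tau   # [N] similarities
    all_sims = torch.cat([pos, neg])
    # Each positive competes against every candidate (InfoNCE, averaged).
    return (torch.logsumexp(all_sims, dim=0) - pos).mean()
```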
arXiv Detail & Related papers (2021-04-03T01:33:14Z)
- Your Classifier can Secretly Suffice Multi-Source Domain Adaptation [72.47706604261992]
Multi-Source Domain Adaptation (MSDA) deals with the transfer of task knowledge from multiple labeled source domains to an unlabeled target domain.
We present a different perspective on MSDA, wherein deep models are observed to implicitly align the domains under label supervision.
arXiv Detail & Related papers (2021-03-20T12:44:13Z)
- Cross-domain Detection via Graph-induced Prototype Alignment [114.8952035552862]
We propose a Graph-induced Prototype Alignment (GPA) framework to seek category-level domain alignment.
In addition, in order to alleviate the negative effect of class-imbalance on domain adaptation, we design a Class-reweighted Contrastive Loss.
Our approach outperforms existing methods by a remarkable margin.
arXiv Detail & Related papers (2020-03-28T17:46:55Z)
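Class reweighting against imbalance is commonly implemented as inverse-frequency scaling of per-sample loss terms; the snippet below shows that generic pattern only, not GPA's specific Class-reweighted Contrastive Loss.

```python
# Generic inverse-frequency reweighting; shown to illustrate the idea,
# not GPA's specific Class-reweighted Contrastive Loss.
import torch

def class_reweighted_mean(loss_per_sample, labels, num_classes):
    """Scale each sample's loss by the inverse frequency of its class
    in the batch before averaging, so rare classes are not drowned out."""
    counts = torch.bincount(labels, minlength=num_classes).clamp_min(1).float()
    weights = counts.sum() / (num_classes * counts)   # inverse frequency
    return (weights[labels] * loss_per_sample).mean()
```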