Semi-Supervised Domain Adaptation by Similarity based Pseudo-label
Injection
- URL: http://arxiv.org/abs/2209.01881v1
- Date: Mon, 5 Sep 2022 10:28:08 GMT
- Title: Semi-Supervised Domain Adaptation by Similarity based Pseudo-label
Injection
- Authors: Abhay Rawat, Isha Dua, Saurav Gupta and Rahul Tallamraju
- Abstract summary: One of the primary challenges in Semi-supervised Domain Adaptation (SSDA) is the skewed ratio between the number of labeled source and target samples.
Recent works in SSDA show that aligning only the labeled target samples with the source samples potentially leads to incomplete domain alignment of the target domain to the source domain.
In our approach, to align the two domains, we leverage contrastive losses to learn a semantically meaningful, domain-agnostic feature space.
- Score: 0.735996217853436
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: One of the primary challenges in Semi-supervised Domain Adaptation (SSDA) is
the skewed ratio between the number of labeled source and target samples,
causing the model to be biased towards the source domain. Recent works in SSDA
show that aligning only the labeled target samples with the source samples
potentially leads to incomplete domain alignment of the target domain to the
source domain. In our approach, to align the two domains, we leverage
contrastive losses to learn a semantically meaningful, domain-agnostic
feature space using the supervised samples from both domains. To mitigate
challenges caused by the skewed label ratio, we pseudo-label the unlabeled
target samples by comparing their feature representation to those of the
labeled samples from both the source and target domains. Furthermore, to
increase the support of the target domain, these potentially noisy
pseudo-labels are gradually injected into the labeled target dataset over the
course of training. Specifically, we use a temperature scaled cosine similarity
measure to assign a soft pseudo-label to the unlabeled target samples.
Additionally, we compute an exponential moving average of the soft
pseudo-labels for each unlabeled sample. These pseudo-labels are progressively
injected into (or removed from) the labeled target dataset based on a
confidence threshold to supplement the alignment of the source and target
distributions. Finally, we use a supervised contrastive loss on the labeled and
pseudo-labeled datasets to align the source and target distributions. Using our
proposed approach, we showcase state-of-the-art performance on the SSDA
benchmarks Office-Home, DomainNet and Office-31.
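To make the pipeline in the abstract concrete, here is a minimal PyTorch sketch of the pseudo-labelling step: a temperature-scaled cosine similarity turns agreement with the labeled source and target features into soft class distributions, an exponential moving average smooths those distributions over training, and a confidence threshold decides which samples are injected into (or removed from) the labeled target pool. The function names and the values of tau, the momentum and the threshold are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn.functional as F

def soft_pseudo_labels(unlabeled_feats, labeled_feats, labeled_targets,
                       num_classes, tau=0.1):
    """Soft pseudo-labels for unlabeled target samples from a
    temperature-scaled cosine similarity to all labeled samples."""
    u = F.normalize(unlabeled_feats, dim=1)                    # (U, D)
    lab = F.normalize(labeled_feats, dim=1)                    # (L, D)
    sim = u @ lab.t() / tau                                    # (U, L) cosine / tau
    weights = sim.softmax(dim=1)                               # similarity weights
    one_hot = F.one_hot(labeled_targets, num_classes).float()  # (L, C)
    return weights @ one_hot                                   # (U, C) soft labels

def ema_update(ema_labels, new_labels, momentum=0.9):
    """Exponential moving average of the soft pseudo-labels across steps."""
    return momentum * ema_labels + (1.0 - momentum) * new_labels

def select_confident(ema_labels, threshold=0.8):
    """Samples whose max EMA probability clears the threshold are injected
    into the labeled target set; the rest stay (or go back) out."""
    confidence, hard_labels = ema_labels.max(dim=1)
    return confidence >= threshold, hard_labels
```

Recomputing the soft labels each step, folding them into the EMA, and re-applying the threshold lets samples enter and later leave the pseudo-labeled pool as their confidence evolves, which is the gradual inject/remove behaviour the abstract describes.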
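The abstract then applies a supervised contrastive loss to the labeled and pseudo-labeled sets to align the two distributions. The sketch below uses the generic supervised contrastive formulation of Khosla et al. (2020); the paper may use a different variant, so treat the details as an assumption.

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(feats, labels, tau=0.07):
    """Pull together samples sharing a (pseudo-)label; push apart the rest."""
    z = F.normalize(feats, dim=1)                      # (N, D) unit embeddings
    sim = z @ z.t() / tau                              # (N, N) scaled similarity
    self_mask = torch.eye(z.size(0), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float('-inf'))    # drop self-comparisons
    log_prob = sim - sim.logsumexp(dim=1, keepdim=True)
    log_prob = log_prob.masked_fill(self_mask, 0.0)    # avoid -inf * 0 = NaN
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    pos_counts = pos_mask.sum(dim=1).clamp(min=1)      # anchors may lack positives
    loss = -(log_prob * pos_mask.float()).sum(dim=1) / pos_counts
    return loss.mean()
```

Because positives are defined purely by (pseudo-)labels, source and target samples of the same class attract each other in the embedding space, which is what drives the cross-domain alignment.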
Related papers
- Evidential Graph Contrastive Alignment for Source-Free Blending-Target Domain Adaptation [3.0134158269410207]
We propose a new method called Evidential Contrastive Alignment (ECA) to decouple the blending target domain and alleviate the effect from noisy target pseudo labels.
ECA outperforms other methods with considerable gains and achieves comparable results compared with those that have domain labels or source data in prior.
arXiv Detail & Related papers (2024-08-14T13:02:20Z)
- Domain Adaptation Using Pseudo Labels [16.79672078512152]
In the absence of labeled target data, unsupervised domain adaptation approaches seek to align the marginal distributions of the source and target domains.
We deploy a pretrained network to determine accurate labels for the target domain using a multi-stage pseudo-label refinement procedure.
Our results on multiple datasets demonstrate the effectiveness of our simple procedure in comparison with complex state-of-the-art techniques.
arXiv Detail & Related papers (2024-02-09T22:15:11Z)
- Inter-Domain Mixup for Semi-Supervised Domain Adaptation [108.40945109477886]
Semi-supervised domain adaptation (SSDA) aims to bridge source and target domain distributions, with a small number of target labels available.
Existing SSDA work fails to make full use of label information from both source and target domains for feature alignment across domains.
This paper presents a novel SSDA approach, Inter-domain Mixup with Neighborhood Expansion (IDMNE), to tackle this issue.
arXiv Detail & Related papers (2024-01-21T10:20:46Z)
- Adaptive Betweenness Clustering for Semi-Supervised Domain Adaptation [108.40945109477886]
We propose a novel SSDA approach named Graph-based Adaptive Betweenness Clustering (G-ABC) for achieving categorical domain alignment.
Our method outperforms previous state-of-the-art SSDA approaches, demonstrating the superiority of the proposed G-ABC algorithm.
arXiv Detail & Related papers (2024-01-21T09:57:56Z)
- Improving Pseudo Labels With Intra-Class Similarity for Unsupervised Domain Adaptation [14.059958451082544]
We propose a novel approach to improve the accuracy of the pseudo labels in the target domain.
The proposed method can boost the accuracy of the pseudo labels and further lead to more discriminative and domain invariant features.
arXiv Detail & Related papers (2022-07-25T12:42:24Z)
- Cycle Label-Consistent Networks for Unsupervised Domain Adaptation [57.29464116557734]
Domain adaptation aims to leverage a labeled source domain to learn a classifier for the unlabeled target domain with a different distribution.
We propose a simple yet efficient domain adaptation method, the Cycle Label-Consistent Network (CLCN), which exploits the cycle consistency of classification labels.
We demonstrate the effectiveness of our approach on the MNIST-USPS-SVHN, Office-31, Office-Home and ImageCLEF-DA benchmarks.
arXiv Detail & Related papers (2022-05-27T13:09:08Z)
- CA-UDA: Class-Aware Unsupervised Domain Adaptation with Optimal Assignment and Pseudo-Label Refinement [84.10513481953583]
Unsupervised domain adaptation (UDA) focuses on selecting good pseudo-labels as surrogates for the missing labels in the target data.
Source-domain bias that degrades the pseudo-labels can still exist, since a network shared between the source and target domains is typically used for pseudo-label selection.
We propose CA-UDA to improve the quality of the pseudo-labels and UDA results with optimal assignment, a pseudo-label refinement strategy and class-aware domain alignment.
arXiv Detail & Related papers (2022-05-26T18:45:04Z)
- Cross-Domain Adaptive Clustering for Semi-Supervised Domain Adaptation [85.6961770631173]
In semi-supervised domain adaptation, a few labeled samples per class in the target domain guide features of the remaining target samples to aggregate around them.
We propose a novel approach called Cross-domain Adaptive Clustering to address this problem.
arXiv Detail & Related papers (2021-04-19T16:07:32Z)
- Learning Target Domain Specific Classifier for Partial Domain Adaptation [85.71584004185031]
Unsupervised domain adaptation (UDA) aims at reducing the distribution discrepancy when transferring knowledge from a labeled source domain to an unlabeled target domain.
This paper focuses on a more realistic UDA scenario, where the target label space is subsumed by the source label space.
arXiv Detail & Related papers (2020-08-25T02:28:24Z)