Domain Adaptation Using Pseudo Labels
- URL: http://arxiv.org/abs/2402.06809v3
- Date: Tue, 12 Mar 2024 00:59:34 GMT
- Title: Domain Adaptation Using Pseudo Labels
- Authors: Sachin Chhabra, Hemanth Venkateswara and Baoxin Li
- Abstract summary: In the absence of labeled target data, unsupervised domain adaptation approaches seek to align the marginal distributions of the source and target domains.
We deploy a pretrained network to determine accurate labels for the target domain using a multi-stage pseudo-label refinement procedure.
Our results on multiple datasets demonstrate the effectiveness of our simple procedure in comparison with complex state-of-the-art techniques.
- Score: 16.79672078512152
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the absence of labeled target data, unsupervised domain adaptation
approaches seek to align the marginal distributions of the source and target
domains in order to train a classifier for the target. Unsupervised domain
alignment procedures are category-agnostic and end up misaligning the
categories. We address this problem by deploying a pretrained network to
determine accurate labels for the target domain using a multi-stage
pseudo-label refinement procedure. The filters are based on the confidence,
distance (conformity), and consistency of the pseudo labels. Our results on
multiple datasets demonstrate the effectiveness of our simple procedure in
comparison with complex state-of-the-art techniques.
Related papers
- Domain-Invariant Feature Alignment Using Variational Inference For Partial Domain Adaptation [6.04077629908308]
The experimental findings in numerous cross-domain classification tasks demonstrate that the proposed technique delivers superior or comparable accuracy to existing methods.
arXiv Detail & Related papers (2022-12-03T10:39:14Z)
- Semi-Supervised Domain Adaptation by Similarity based Pseudo-label Injection [0.735996217853436]
One of the primary challenges in Semi-supervised Domain Adaptation (SSDA) is the skewed ratio between the number of labeled source and target samples.
Recent works in SSDA show that aligning only the labeled target samples with the source samples potentially leads to incomplete domain alignment of the target domain to the source domain.
In our approach, to align the two domains, we leverage contrastive losses to learn a semantically meaningful and a domain agnostic feature space.
arXiv Detail & Related papers (2022-09-05T10:28:08Z)
- Cycle Label-Consistent Networks for Unsupervised Domain Adaptation [57.29464116557734]
Domain adaptation aims to leverage a labeled source domain to learn a classifier for the unlabeled target domain with a different distribution.
We propose a simple yet efficient domain adaptation method, i.e. the Cycle Label-Consistent Network (CLCN), by exploiting the cycle consistency of classification labels.
We demonstrate the effectiveness of our approach on the MNIST-USPS-SVHN, Office-31, Office-Home and ImageCLEF-DA benchmarks.
arXiv Detail & Related papers (2022-05-27T13:09:08Z)
- Cross-Domain Adaptive Clustering for Semi-Supervised Domain Adaptation [85.6961770631173]
In semi-supervised domain adaptation, a few labeled samples per class in the target domain guide features of the remaining target samples to aggregate around them.
We propose a novel approach called Cross-domain Adaptive Clustering to address this problem.
arXiv Detail & Related papers (2021-04-19T16:07:32Z)
- Instance Level Affinity-Based Transfer for Unsupervised Domain Adaptation [74.71931918541748]
We propose an instance affinity based criterion for source to target transfer during adaptation, called ILA-DA.
We first propose a reliable and efficient method to extract similar and dissimilar samples across source and target, and utilize a multi-sample contrastive loss to drive the domain alignment process.
We verify the effectiveness of ILA-DA by observing consistent improvements in accuracy over popular domain adaptation approaches on a variety of benchmark datasets.
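The ILA-DA summary above mentions extracting similar and dissimilar cross-domain pairs and driving alignment with a multi-sample contrastive loss. A generic InfoNCE-style sketch of such a loss is below; the pairing criterion (shared pseudo label) and all parameter names are illustrative assumptions, not ILA-DA's exact affinity measure.

```python
import numpy as np

def cross_domain_contrastive_loss(src_feats, src_labels,
                                  tgt_feats, tgt_pseudo, temperature=0.1):
    """Multi-sample contrastive loss over source/target pairs (generic sketch).
    Each target sample is pulled toward source samples sharing its pseudo
    label (positives) and pushed from the rest (negatives), InfoNCE-style.
    """
    # L2-normalize so dot products are cosine similarities.
    s = src_feats / np.linalg.norm(src_feats, axis=1, keepdims=True)
    t = tgt_feats / np.linalg.norm(tgt_feats, axis=1, keepdims=True)

    sims = t @ s.T / temperature                       # (N_tgt, N_src)
    pos = src_labels[None, :] == tgt_pseudo[:, None]   # positive-pair mask

    # Log-softmax over all source samples for each target anchor.
    log_p = sims - np.log(np.exp(sims).sum(axis=1, keepdims=True))

    # Average the negative log-likelihood over each anchor's positives.
    per_anchor = -(log_p * pos).sum(axis=1) / np.maximum(pos.sum(axis=1), 1)
    return per_anchor.mean()
```

Minimizing this loss pulls same-class features together across domains, which is the mechanism the entry credits for driving domain alignment.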
arXiv Detail & Related papers (2021-04-03T01:33:14Z)
- Your Classifier can Secretly Suffice Multi-Source Domain Adaptation [72.47706604261992]
Multi-Source Domain Adaptation (MSDA) deals with the transfer of task knowledge from multiple labeled source domains to an unlabeled target domain.
We present a different perspective to MSDA wherein deep models are observed to implicitly align the domains under label supervision.
arXiv Detail & Related papers (2021-03-20T12:44:13Z)
- Learning Target Domain Specific Classifier for Partial Domain Adaptation [85.71584004185031]
Unsupervised domain adaptation (UDA) aims at reducing the distribution discrepancy when transferring knowledge from a labeled source domain to an unlabeled target domain.
This paper focuses on a more realistic UDA scenario, where the target label space is subsumed by the source label space.
arXiv Detail & Related papers (2020-08-25T02:28:24Z)
- Domain Adaptation with Auxiliary Target Domain-Oriented Classifier [115.39091109079622]
Domain adaptation aims to transfer knowledge from a label-rich but heterogeneous domain to a label-scarce domain.
One of the most popular SSL techniques is pseudo-labeling that assigns pseudo labels for each unlabeled data.
We propose a new pseudo-labeling framework called Auxiliary Target Domain-Oriented Classifier (ATDOC).
ATDOC alleviates the bias by introducing an auxiliary classifier for target data only, to improve the quality of pseudo labels.
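The ATDOC summary describes an auxiliary classifier built for target data only, so that pseudo labels come from target statistics rather than the biased source classifier. One illustrative way to realize such a target-only auxiliary classifier is a nearest-centroid labeler over target features; the class and parameter names below are hypothetical and this is not the paper's exact construction.

```python
import numpy as np

class NearestCentroidPseudoLabeler:
    """Auxiliary target-only classifier (illustrative sketch): per-class
    centroids are accumulated from target features and their current labels,
    then used to assign pseudo labels by nearest centroid."""

    def __init__(self, num_classes, feat_dim, momentum=0.9):
        self.centroids = np.zeros((num_classes, feat_dim))
        self.counts = np.zeros(num_classes)
        self.momentum = momentum

    def update(self, feats, labels):
        # Exponential-moving-average update of per-class centroids.
        for c in np.unique(labels):
            mean_c = feats[labels == c].mean(axis=0)
            if self.counts[c] == 0:
                self.centroids[c] = mean_c
            else:
                self.centroids[c] = (self.momentum * self.centroids[c]
                                     + (1 - self.momentum) * mean_c)
            self.counts[c] += (labels == c).sum()

    def pseudo_label(self, feats):
        # Assign each target feature the class of its nearest centroid.
        d = np.linalg.norm(feats[:, None, :] - self.centroids[None, :, :],
                           axis=2)
        return d.argmin(axis=1)
```

Because the centroids are estimated from target data alone, the resulting pseudo labels are less tied to source-domain decision boundaries, which is the bias-reduction idea the entry attributes to the auxiliary classifier.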
arXiv Detail & Related papers (2020-07-08T15:01:35Z)
- Learning a Domain Classifier Bank for Unsupervised Adaptive Object Detection [48.19258721979389]
In this paper, we propose a fine-grained domain alignment approach for object detectors based on deep networks.
We develop a bare object detector with the proposed fine-grained domain alignment mechanism as the adaptive detector.
Experiments on three popular transferring benchmarks demonstrate the effectiveness of our method.
arXiv Detail & Related papers (2020-07-06T09:12:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.