Improving Pseudo Labels With Intra-Class Similarity for Unsupervised Domain Adaptation
- URL: http://arxiv.org/abs/2207.12139v1
- Date: Mon, 25 Jul 2022 12:42:24 GMT
- Title: Improving Pseudo Labels With Intra-Class Similarity for Unsupervised Domain Adaptation
- Authors: Jie Wang, Xiao-Lei Zhang
- Abstract summary: We propose a novel approach to improve the accuracy of the pseudo labels in the target domain.
The proposed method can boost the accuracy of the pseudo labels and further lead to more discriminative and domain-invariant features.
- Score: 14.059958451082544
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised domain adaptation (UDA) transfers knowledge from a label-rich
source domain to a different but related, fully unlabeled target domain. To
address the problem of domain shift, a growing number of UDA methods adopt pseudo
labels of the target samples to improve generalization on the target domain.
However, inaccurate pseudo labels of the target samples may yield suboptimal
performance, with errors accumulating during optimization. Moreover, once the
pseudo labels are generated, how to remedy them remains largely unexplored. In
this paper, we propose a novel approach to improve the accuracy of the pseudo
labels in the target domain. It first generates coarse pseudo labels with a
conventional UDA method. Then, it iteratively exploits the intra-class
similarity of the target samples to improve the generated coarse pseudo labels,
and aligns the source and target domains with the improved pseudo labels. The
pseudo labels are refined by first deleting samples that are dissimilar to
their class, and then using spanning trees to eliminate the remaining wrongly
labeled samples within each class. We have applied the proposed approach to
several conventional UDA methods as an additional term. Experimental results
demonstrate that the proposed method boosts the accuracy of the pseudo labels
and leads to more discriminative and domain-invariant features than the
conventional baselines.
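The abstract describes the cleaning procedure only in prose, so the following is a minimal Python sketch of one plausible reading: within each pseudo-class, samples dissimilar to the class centroid are deleted first, then a minimum spanning tree over the survivors is pruned at unusually long edges and only the largest component keeps its labels. The function name and the thresholds `sim_keep` and `edge_cut` are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np
from scipy.sparse.csgraph import connected_components, minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

def clean_pseudo_labels(feats, pseudo, sim_keep=0.5, edge_cut=2.0):
    """Return a boolean mask of target samples whose pseudo labels survive."""
    keep = np.zeros(len(pseudo), dtype=bool)
    for c in np.unique(pseudo):
        idx = np.flatnonzero(pseudo == c)
        f = feats[idx]
        # Step 1: delete samples dissimilar to their class centroid.
        centroid = f.mean(axis=0)
        sims = f @ centroid / (
            np.linalg.norm(f, axis=1) * np.linalg.norm(centroid) + 1e-8)
        idx, f = idx[sims > sim_keep], f[sims > sim_keep]
        if len(idx) < 3:
            keep[idx] = True
            continue
        # Step 2: prune a minimum spanning tree over the survivors and keep
        # only the largest connected component, discarding samples attached
        # through unusually long edges as likely mislabeled.
        mst = minimum_spanning_tree(squareform(pdist(f))).toarray()
        edges = mst[mst > 0]
        mst[mst > edges.mean() + edge_cut * edges.std()] = 0.0
        _, comp = connected_components(mst, directed=False)
        keep[idx[comp == np.argmax(np.bincount(comp))]] = True
    return keep
```

In the paper's iterative scheme, presumably only the surviving samples (where `keep` is True) would feed the next round of domain alignment before the pseudo labels are re-estimated.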
Related papers
- Domain Adaptation Using Pseudo Labels [16.79672078512152]
In the absence of labeled target data, unsupervised domain adaptation approaches seek to align the marginal distributions of the source and target domains.
We deploy a pretrained network to determine accurate labels for the target domain using a multi-stage pseudo-label refinement procedure.
Our results on multiple datasets demonstrate the effectiveness of our simple procedure in comparison with complex state-of-the-art techniques.
arXiv Detail & Related papers (2024-02-09T22:15:11Z)
- Adaptive Betweenness Clustering for Semi-Supervised Domain Adaptation [108.40945109477886]
We propose a novel SSDA approach named Graph-based Adaptive Betweenness Clustering (G-ABC) for achieving categorical domain alignment.
Our method outperforms previous state-of-the-art SSDA approaches, demonstrating the superiority of the proposed G-ABC algorithm.
arXiv Detail & Related papers (2024-01-21T09:57:56Z)
- Refined Pseudo labeling for Source-free Domain Adaptive Object Detection [9.705172026751294]
Source-free domain adaptive object detection adapts source-trained detectors to target domains using only unlabeled target data.
Existing source-free methods typically rely on pseudo labeling, where performance depends heavily on the choice of confidence threshold.
We present a category-aware adaptive threshold estimation module, which adaptively provides the appropriate threshold for each category.
arXiv Detail & Related papers (2023-03-07T08:31:42Z)
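As a rough illustration of what a per-category adaptive threshold can look like, the sketch below derives one threshold per class from the confidence distribution of the detector's own predictions; the percentile rule and all names are assumptions, not the module proposed in the paper.

```python
import numpy as np

def category_thresholds(probs, base=0.9, pct=80):
    """probs: (N, C) class scores for detections on unlabeled target data."""
    preds, conf = probs.argmax(axis=1), probs.max(axis=1)
    thr = np.full(probs.shape[1], base)
    for c in range(probs.shape[1]):
        c_conf = conf[preds == c]
        if len(c_conf):
            # Classes the detector scores less confidently get a lower bar,
            # so rare categories still contribute pseudo boxes.
            thr[c] = min(base, np.percentile(c_conf, pct))
    return thr
```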
- Semi-Supervised Domain Adaptation by Similarity based Pseudo-label Injection [0.735996217853436]
One of the primary challenges in Semi-supervised Domain Adaptation (SSDA) is the skewed ratio between the number of labeled source and target samples.
Recent works in SSDA show that aligning only the labeled target samples with the source samples potentially leads to incomplete domain alignment of the target domain to the source domain.
In our approach, to align the two domains, we leverage contrastive losses to learn a semantically meaningful and domain-agnostic feature space.
arXiv Detail & Related papers (2022-09-05T10:28:08Z)
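A supervised contrastive loss over mixed source and (pseudo-)labeled target features is one common instantiation of such an objective. The PyTorch sketch below is an assumed form, not the paper's exact loss; the temperature `tau` and the masking scheme are illustrative.

```python
import torch
import torch.nn.functional as F

def supcon_loss(feats, labels, tau=0.1):
    """feats: (N, D) features from both domains; labels: true or pseudo labels."""
    z = F.normalize(feats, dim=1)
    sim = z @ z.t() / tau                                # scaled cosine similarities
    self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    pos_mask = (labels[None, :] == labels[:, None]) & ~self_mask
    sim = sim.masked_fill(self_mask, float('-inf'))      # exclude self-pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_counts = pos_mask.sum(1).clamp(min=1)
    # Pull same-class pairs together regardless of which domain they come from.
    loss = -(log_prob.masked_fill(~pos_mask, 0.0).sum(1) / pos_counts)
    return loss[pos_mask.any(1)].mean()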
- CA-UDA: Class-Aware Unsupervised Domain Adaptation with Optimal Assignment and Pseudo-Label Refinement [84.10513481953583]
Unsupervised domain adaptation (UDA) focuses on the selection of good pseudo-labels as surrogates for the missing labels in the target data.
However, source-domain bias that degrades the pseudo-labels can persist, since the network shared by the source and target domains is typically used for pseudo-label selection.
We propose CA-UDA to improve the quality of the pseudo-labels and UDA results with optimal assignment, a pseudo-label refinement strategy, and class-aware domain alignment.
arXiv Detail & Related papers (2022-05-26T18:45:04Z)
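One standard way to realize an "optimal assignment" of pseudo-labels is to match target clusters to classes by solving a linear assignment problem; the sketch below assumes a Euclidean cost against source class centroids, which may differ from CA-UDA's actual formulation.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_clusters(tgt_centroids, src_centroids):
    """Both (C, D): map each target cluster to the class whose source
    centroid it matches, minimizing the total assignment cost."""
    cost = np.linalg.norm(tgt_centroids[:, None] - src_centroids[None, :], axis=2)
    rows, cols = linear_sum_assignment(cost)
    return dict(zip(rows.tolist(), cols.tolist()))
```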
- Cross-Domain Gradient Discrepancy Minimization for Unsupervised Domain Adaptation [22.852237073492894]
Unsupervised Domain Adaptation (UDA) aims to generalize the knowledge learned from a well-labeled source domain to an unlabeled target domain.
We propose a cross-domain gradient discrepancy minimization (CGDM) method that explicitly minimizes the discrepancy between the gradients generated by source samples and target samples.
To compute the gradient signal of the target samples, we further obtain target pseudo labels through clustering-based self-supervised learning.
arXiv Detail & Related papers (2021-06-08T07:35:40Z)
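A plausible form of such a discrepancy term compares the gradients of the source loss and the pseudo-labeled target loss over shared parameters; the cosine formulation below is an assumption, not necessarily the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def gradient_discrepancy(model, src_loss, tgt_loss):
    """1 - cosine similarity between source and target loss gradients."""
    params = [p for p in model.parameters() if p.requires_grad]
    g_src = torch.autograd.grad(src_loss, params, create_graph=True)
    g_tgt = torch.autograd.grad(tgt_loss, params, create_graph=True)
    gs = torch.cat([g.reshape(-1) for g in g_src])
    gt = torch.cat([g.reshape(-1) for g in g_tgt])
    return 1.0 - F.cosine_similarity(gs, gt, dim=0)  # 0 when gradients align
```

Because the gradients are taken with `create_graph=True`, the returned term can itself be minimized by backpropagation alongside the usual classification losses.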
- Rethinking Pseudo Labels for Semi-Supervised Object Detection [84.697097472401]
We introduce certainty-aware pseudo labels tailored for object detection.
We dynamically adjust the thresholds used to generate pseudo labels and reweight loss functions for each category to alleviate the class imbalance problem.
Our approach improves supervised baselines by up to 10% AP using only 1-10% labeled data from COCO.
arXiv Detail & Related papers (2021-06-01T01:32:03Z)
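For the loss-reweighting part, a simple assumed scheme weights each category inversely to its pseudo-label frequency; this illustrates the idea, not the paper's exact weighting.

```python
import numpy as np

def class_weights(pseudo_labels, num_classes, eps=1e-6):
    """Loss weights inversely proportional to pseudo-label frequency."""
    counts = np.bincount(pseudo_labels, minlength=num_classes).astype(float)
    w = 1.0 / (counts + eps)
    return w * num_classes / w.sum()  # normalized so the weights average to 1
```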
- Cycle Self-Training for Domain Adaptation [85.14659717421533]
Cycle Self-Training (CST) is a principled self-training algorithm that enforces pseudo-labels to generalize across domains.
CST recovers target ground truth, while both invariant feature learning and vanilla self-training fail.
Empirical results indicate that CST significantly improves over prior state-of-the-art methods on standard UDA benchmarks.
arXiv Detail & Related papers (2021-03-05T10:04:25Z)
- Domain Adaptation with Auxiliary Target Domain-Oriented Classifier [115.39091109079622]
Domain adaptation aims to transfer knowledge from a label-rich but heterogeneous domain to a label-scarce domain.
One of the most popular SSL techniques is pseudo-labeling, which assigns a pseudo label to each unlabeled sample.
We propose a new pseudo-labeling framework called Auxiliary Target Domain-Oriented Classifier (ATDOC).
ATDOC alleviates the bias by introducing an auxiliary classifier for target data only, to improve the quality of the pseudo labels.
arXiv Detail & Related papers (2020-07-08T15:01:35Z)
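The auxiliary-classifier idea can be pictured as a second head next to the source-supervised one, so that target pseudo labels come from a head that never sees source labels. The module below is an assumed illustration, not ATDOC's actual architecture (which the summary does not detail).

```python
import torch.nn as nn

class TwoHeadClassifier(nn.Module):
    """Source-supervised head plus a target-only auxiliary pseudo labeler."""
    def __init__(self, backbone, feat_dim, num_classes):
        super().__init__()
        self.backbone = backbone
        self.main_head = nn.Linear(feat_dim, num_classes)  # trained on source labels
        self.aux_head = nn.Linear(feat_dim, num_classes)   # trained on target data only

    def forward(self, x):
        f = self.backbone(x)
        return self.main_head(f), self.aux_head(f)
```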
- Sparsely-Labeled Source Assisted Domain Adaptation [64.75698236688729]
This paper proposes a novel Sparsely-Labeled Source Assisted Domain Adaptation (SLSA-DA) algorithm.
Due to the label scarcity problem, the projected clustering is conducted on both the source and target domains.
arXiv Detail & Related papers (2020-05-08T15:37:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.