Unsupervised domain adaptation via double classifiers based on high
confidence pseudo label
- URL: http://arxiv.org/abs/2105.04729v1
- Date: Tue, 11 May 2021 00:51:31 GMT
- Title: Unsupervised domain adaptation via double classifiers based on high
confidence pseudo label
- Authors: Huihuang Chen, Li Li, Jie Chen, Kuo-Yi Lin
- Abstract summary: Unsupervised domain adaptation (UDA) aims to solve the problem of transferring knowledge from a labeled source domain to an unlabeled target domain.
Many domain adaptation (DA) methods use class centroids to align the local distributions of different domains, that is, to align the corresponding classes.
This work rethinks what alignment between domains actually means and studies how to achieve it.
- Score: 8.132250810529873
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Unsupervised domain adaptation (UDA) aims to solve the problem of
transferring knowledge from a labeled source domain to an unlabeled target
domain. Recently, many domain adaptation (DA) methods have used class centroids
to align the local distributions of different domains, that is, to align the
corresponding classes. This improves the effect of domain adaptation, but
domain differences exist not only between classes but also between samples.
This work rethinks what alignment between domains actually means and studies
how to achieve it. Previous DA methods considered only one distributional
aspect of the aligned samples, such as the full distribution or the local
distribution. In addition to aligning the global distribution, real domain
adaptation should also align the meso-level and micro-level distributions.
Therefore, this study proposes a double-classifier method based on
high-confidence pseudo labels (DCP). By aligning the centroids and the
distributions between centroids and samples across the two classifiers, DCP
realizes meso- and micro-level distribution alignment between domains. In
addition, to reduce the chain errors caused by incorrect pseudo labels, this
study proposes a high-confidence labeling method that reduces labeling errors.
To verify its versatility, this study evaluates DCP on digit recognition and
object recognition datasets. The results show that our method achieves
state-of-the-art results on most current domain adaptation benchmark datasets.
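The paper does not include code, but the high-confidence labeling idea from the abstract can be illustrated with a minimal sketch: a target sample receives a pseudo label only when both classifiers agree on the class and both are confident. The function name and the 0.95 threshold are assumptions for illustration, not details from the paper.

```python
import numpy as np

def select_high_confidence_pseudo_labels(probs_a, probs_b, threshold=0.95):
    """Keep a target sample only when both classifiers predict the same
    class AND both exceed the confidence threshold (illustrative sketch)."""
    pred_a = probs_a.argmax(axis=1)
    pred_b = probs_b.argmax(axis=1)
    conf_a = probs_a.max(axis=1)
    conf_b = probs_b.max(axis=1)
    mask = (pred_a == pred_b) & (conf_a >= threshold) & (conf_b >= threshold)
    return mask, pred_a

# Softmax outputs of the two classifiers on three target samples:
probs_a = np.array([[0.98, 0.02], [0.60, 0.40], [0.97, 0.03]])
probs_b = np.array([[0.96, 0.04], [0.55, 0.45], [0.30, 0.70]])
mask, labels = select_high_confidence_pseudo_labels(probs_a, probs_b)
# Only the first sample is kept: both classifiers agree with >= 0.95 confidence;
# the second is low-confidence, the third is a disagreement.
```

Requiring agreement between the two classifiers is what reduces the chain errors mentioned above: a sample mislabeled by one classifier is unlikely to be confidently mislabeled the same way by both.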
Related papers
- centroIDA: Cross-Domain Class Discrepancy Minimization Based on
Accumulative Class-Centroids for Imbalanced Domain Adaptation [17.97306640457707]
We propose a cross-domain class discrepancy minimization method based on accumulative class centroids for IDA (centroIDA).
A series of experiments shows that our method outperforms other SOTA methods on the IDA problem, especially as the degree of label shift increases.
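Several of the papers in this list align per-class centroids across domains. A minimal sketch of that shared building block, assuming feature matrices and (pseudo) labels are available; the function names and the squared-distance loss are illustrative choices, not the exact formulation of any listed paper.

```python
import numpy as np

def class_centroids(features, labels, num_classes):
    """Mean feature vector per class; empty classes get zero vectors."""
    centroids = np.zeros((num_classes, features.shape[1]))
    for c in range(num_classes):
        members = features[labels == c]
        if len(members) > 0:
            centroids[c] = members.mean(axis=0)
    return centroids

def centroid_alignment_loss(src_feats, src_labels, tgt_feats, tgt_labels, num_classes):
    """Mean squared distance between matching source and target class centroids."""
    cs = class_centroids(src_feats, src_labels, num_classes)
    ct = class_centroids(tgt_feats, tgt_labels, num_classes)
    return float(((cs - ct) ** 2).sum(axis=1).mean())
```

In practice, target labels are pseudo labels, and centroids are usually accumulated with a moving average across mini-batches rather than recomputed from scratch.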
arXiv Detail & Related papers (2023-08-21T10:35:32Z) - Cycle Label-Consistent Networks for Unsupervised Domain Adaptation [57.29464116557734]
Domain adaptation aims to leverage a labeled source domain to learn a classifier for the unlabeled target domain with a different distribution.
We propose a simple yet efficient domain adaptation method, the Cycle Label-Consistent Network (CLCN), which exploits the cycle consistency of classification labels.
We demonstrate the effectiveness of our approach on the MNIST-USPS-SVHN, Office-31, Office-Home and ImageCLEF-DA benchmarks.
arXiv Detail & Related papers (2022-05-27T13:09:08Z) - Source-Free Domain Adaptation via Distribution Estimation [106.48277721860036]
Domain Adaptation aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain whose data distributions are different.
Recently, Source-Free Domain Adaptation (SFDA) has drawn much attention, which tries to tackle domain adaptation problem without using source data.
In this work, we propose a novel framework called SFDA-DE to address SFDA task via source Distribution Estimation.
arXiv Detail & Related papers (2022-04-24T12:22:19Z) - Cross-Region Domain Adaptation for Class-level Alignment [32.586107376036075]
We propose a method that applies adversarial training to align two feature distributions in the target domain.
It uses a self-training framework to split the image into two regions, which form two distributions to align in the feature space.
We term this approach cross-region adaptation (CRA) to distinguish it from previous methods that align distributions across domains.
arXiv Detail & Related papers (2021-09-14T04:13:35Z) - Cross-domain Contrastive Learning for Unsupervised Domain Adaptation [108.63914324182984]
Unsupervised domain adaptation (UDA) aims to transfer knowledge learned from a fully-labeled source domain to a different unlabeled target domain.
We build upon contrastive self-supervised learning to align features so as to reduce the domain discrepancy between training and testing sets.
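Contrastive alignment, as used in the paper above, pulls an anchor feature toward a positive (e.g. a sample of the same class from the other domain) and pushes it away from negatives. A minimal single-anchor InfoNCE-style sketch with cosine similarity; the function name, temperature value, and toy vectors are illustrative assumptions, not the paper's exact loss.

```python
import numpy as np

def info_nce(anchor, positive, negatives, temperature=0.1):
    """InfoNCE loss for one anchor: low when the anchor is closest to the
    positive, high when a negative is closer (cosine similarity)."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    sims = np.array([cos(anchor, positive)] + [cos(anchor, n) for n in negatives])
    logits = sims / temperature
    logits -= logits.max()  # numerical stability before exponentiation
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])  # index 0 is the positive pair

anchor = np.array([1.0, 0.0])
aligned_loss = info_nce(anchor, np.array([1.0, 0.0]), [np.array([0.0, 1.0])])
swapped_loss = info_nce(anchor, np.array([0.0, 1.0]), [np.array([1.0, 0.0])])
# aligned_loss is near zero; swapped_loss is large.
```

Minimizing this loss over cross-domain pairs reduces the discrepancy between training and testing feature distributions, which is the mechanism the summary above describes.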
arXiv Detail & Related papers (2021-06-10T06:32:30Z) - Instance Level Affinity-Based Transfer for Unsupervised Domain
Adaptation [74.71931918541748]
We propose an instance affinity based criterion for source to target transfer during adaptation, called ILA-DA.
We first propose a reliable and efficient method to extract similar and dissimilar samples across source and target, and utilize a multi-sample contrastive loss to drive the domain alignment process.
We verify the effectiveness of ILA-DA by observing consistent improvements in accuracy over popular domain adaptation approaches on a variety of benchmark datasets.
arXiv Detail & Related papers (2021-04-03T01:33:14Z) - Your Classifier can Secretly Suffice Multi-Source Domain Adaptation [72.47706604261992]
Multi-Source Domain Adaptation (MSDA) deals with the transfer of task knowledge from multiple labeled source domains to an unlabeled target domain.
We present a different perspective to MSDA wherein deep models are observed to implicitly align the domains under label supervision.
arXiv Detail & Related papers (2021-03-20T12:44:13Z) - Learning Target Domain Specific Classifier for Partial Domain Adaptation [85.71584004185031]
Unsupervised domain adaptation (UDA) aims at reducing the distribution discrepancy when transferring knowledge from a labeled source domain to an unlabeled target domain.
This paper focuses on a more realistic UDA scenario, where the target label space is subsumed by the source label space.
arXiv Detail & Related papers (2020-08-25T02:28:24Z) - Class Distribution Alignment for Adversarial Domain Adaptation [32.95056492475652]
Conditional ADversarial Image Translation (CADIT) is proposed to explicitly align the class distributions given samples between the two domains.
It integrates a discriminative structure-preserving loss and a joint adversarial generation loss.
Our approach achieves superior classification in the target domain when compared to the state-of-the-art methods.
arXiv Detail & Related papers (2020-04-20T15:58:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.