Cross-domain error minimization for unsupervised domain adaptation
- URL: http://arxiv.org/abs/2106.15057v1
- Date: Tue, 29 Jun 2021 02:00:29 GMT
- Title: Cross-domain error minimization for unsupervised domain adaptation
- Authors: Yuntao Du, Yinghao Chen, Fengli Cui, Xiaowen Zhang, Chongjun Wang
- Abstract summary: Unsupervised domain adaptation aims to transfer knowledge from a labeled source domain to an unlabeled target domain.
Previous methods focus on learning domain-invariant features to decrease the discrepancy between the feature distributions and to minimize the source error.
We propose a curriculum learning based strategy to select the target samples with more accurate pseudo-labels during training.
- Score: 2.9766397696234996
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Unsupervised domain adaptation aims to transfer knowledge from a labeled
source domain to an unlabeled target domain. Previous methods focus on learning
domain-invariant features to decrease the discrepancy between the feature
distributions, as well as on minimizing the source error, and have made
remarkable progress. However, a recently proposed theory reveals that such a
strategy is not sufficient for successful domain adaptation. It shows that
besides a small source error, both the discrepancy between the feature
distributions and the discrepancy between the labeling functions should be
small across domains. The discrepancy between the labeling functions is
essentially the cross-domain error, which is ignored by existing methods. To
overcome this issue, in this paper a novel method is proposed that integrates
all of these objectives into a unified optimization framework. Moreover, the
incorrect pseudo labels widely used in previous methods can lead to error
accumulation during learning. To alleviate this problem, the pseudo labels are
obtained by utilizing structural information of the target domain in addition
to the source classifier, and we propose a curriculum learning based strategy
to select the target samples with more accurate pseudo-labels during training.
Comprehensive experiments are conducted, and the results validate that our
approach outperforms state-of-the-art methods.
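The pseudo-labeling strategy described in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes that soft assignments to target-feature class centroids stand in for the "structural information of the target domain", uses a fixed 0.5 fusion weight between classifier and structural probabilities, and the function names `structural_probs` and `curriculum_select` are hypothetical.

```python
import numpy as np

def structural_probs(feats, centroids):
    """Soft assignment of target samples to class centroids via a softmax
    over negative squared distances -- an assumed stand-in for the paper's
    use of target-domain structure."""
    d2 = ((feats[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
    logits = -d2
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    p = np.exp(logits)
    return p / p.sum(axis=1, keepdims=True)

def curriculum_select(clf_probs, feats, centroids, keep_frac):
    """Fuse source-classifier probabilities with structural probabilities,
    assign pseudo-labels, and keep only the most confident fraction of
    target samples (the curriculum grows as keep_frac is raised)."""
    probs = 0.5 * clf_probs + 0.5 * structural_probs(feats, centroids)
    labels = probs.argmax(axis=1)           # pseudo-labels for all samples
    conf = probs.max(axis=1)                # confidence of each pseudo-label
    k = max(1, int(keep_frac * len(labels)))
    keep = np.argsort(-conf)[:k]            # indices of the k most confident
    return keep, labels
```

In a training loop, `keep_frac` would typically be increased each epoch so that harder, less confident target samples are only admitted once the model has improved.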
Related papers
- Domain Adaptation Using Pseudo Labels [16.79672078512152]
In the absence of labeled target data, unsupervised domain adaptation approaches seek to align the marginal distributions of the source and target domains.
We deploy a pretrained network to determine accurate labels for the target domain using a multi-stage pseudo-label refinement procedure.
Our results on multiple datasets demonstrate the effectiveness of our simple procedure in comparison with complex state-of-the-art techniques.
arXiv Detail & Related papers (2024-02-09T22:15:11Z) - centroIDA: Cross-Domain Class Discrepancy Minimization Based on
Accumulative Class-Centroids for Imbalanced Domain Adaptation [17.97306640457707]
We propose a cross-domain class discrepancy minimization method based on accumulative class-centroids for IDA (centroIDA).
A series of experiments shows that our method outperforms other SOTA methods on the IDA problem, especially as the degree of label shift increases.
arXiv Detail & Related papers (2023-08-21T10:35:32Z) - Learning Unbiased Transferability for Domain Adaptation by Uncertainty
Modeling [107.24387363079629]
Domain adaptation aims to transfer knowledge from a labeled source domain to an unlabeled or a less labeled but related target domain.
Due to the imbalance between the amount of annotated data in the source and target domains, only the target distribution is aligned to the source domain.
We propose a non-intrusive Unbiased Transferability Estimation Plug-in (UTEP) by modeling the uncertainty of a discriminator in adversarial-based DA methods to optimize unbiased transfer.
arXiv Detail & Related papers (2022-06-02T21:58:54Z) - Cycle Label-Consistent Networks for Unsupervised Domain Adaptation [57.29464116557734]
Domain adaptation aims to leverage a labeled source domain to learn a classifier for the unlabeled target domain with a different distribution.
We propose a simple yet efficient domain adaptation method, i.e. Cycle Label-Consistent Network (CLCN), by exploiting the cycle consistency of classification labels.
We demonstrate the effectiveness of our approach on the MNIST-USPS-SVHN, Office-31, Office-Home and ImageCLEF-DA benchmarks.
arXiv Detail & Related papers (2022-05-27T13:09:08Z) - Unsupervised domain adaptation via double classifiers based on high
confidence pseudo label [8.132250810529873]
Unsupervised domain adaptation (UDA) aims to solve the problem of knowledge transfer from a labeled source domain to an unlabeled target domain.
Many domain adaptation (DA) methods use centroids to align the local distributions of different domains, that is, to align corresponding classes.
This work rethinks what is the alignment between different domains, and studies how to achieve the real alignment between different domains.
arXiv Detail & Related papers (2021-05-11T00:51:31Z) - Discriminative Cross-Domain Feature Learning for Partial Domain
Adaptation [70.45936509510528]
Partial domain adaptation aims to adapt knowledge from a larger and more diverse source domain to a smaller target domain with fewer classes.
Recent practice on domain adaptation manages to extract effective features by incorporating the pseudo labels for the target domain.
It is essential to align target data with only a small set of source data.
arXiv Detail & Related papers (2020-08-26T03:18:53Z) - Learning Target Domain Specific Classifier for Partial Domain Adaptation [85.71584004185031]
Unsupervised domain adaptation (UDA) aims at reducing the distribution discrepancy when transferring knowledge from a labeled source domain to an unlabeled target domain.
This paper focuses on a more realistic UDA scenario, where the target label space is subsumed by the source label space.
arXiv Detail & Related papers (2020-08-25T02:28:24Z) - Cross-domain Self-supervised Learning for Domain Adaptation with Few
Source Labels [78.95901454696158]
We propose a novel Cross-Domain Self-supervised learning approach for domain adaptation.
Our method significantly boosts performance of target accuracy in the new target domain with few source labels.
arXiv Detail & Related papers (2020-03-18T15:11:07Z) - A Balanced and Uncertainty-aware Approach for Partial Domain Adaptation [142.31610972922067]
This work addresses the unsupervised domain adaptation problem, especially in the case of class labels in the target domain being only a subset of those in the source domain.
We build on domain adversarial learning and propose a novel domain adaptation method, BA$^3$US, with two new techniques termed Balanced Adversarial Alignment (BAA) and Adaptive Uncertainty Suppression (AUS).
Experimental results on multiple benchmarks demonstrate that our BA$^3$US surpasses state-of-the-art methods for partial domain adaptation tasks.
arXiv Detail & Related papers (2020-03-05T11:37:06Z) - Missing-Class-Robust Domain Adaptation by Unilateral Alignment for Fault
Diagnosis [3.786700931138978]
Domain adaptation aims at improving model performance by leveraging the learned knowledge in the source domain and transferring it to the target domain.
Recently, domain adversarial methods have been particularly successful in alleviating the distribution shift between the source and the target domains.
We demonstrate in this paper that the performance of domain adversarial methods can be vulnerable to an incomplete target label space during training.
arXiv Detail & Related papers (2020-01-07T13:19:04Z)
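Several of the entries above (centroIDA, the double-classifier method) align domains through class centroids. As a minimal sketch of the general idea, assuming an exponential-moving-average ("accumulative") centroid update and a simple mean-squared centroid discrepancy; the names `update_centroids` and `centroid_discrepancy` are hypothetical and not taken from any of the listed papers:

```python
import numpy as np

def update_centroids(centroids, feats, labels, num_classes, momentum=0.9):
    """Accumulative class-centroid update: each class centroid moves toward
    the mean of the current batch's features for that class, smoothed by an
    EMA so that rare classes keep a stable estimate across iterations."""
    new = centroids.copy()
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            batch_mean = feats[mask].mean(axis=0)
            new[c] = momentum * centroids[c] + (1 - momentum) * batch_mean
    return new

def centroid_discrepancy(src_centroids, tgt_centroids):
    """Mean squared distance between matched source/target class centroids;
    minimizing this term pulls same-class clusters together across domains."""
    return float(((src_centroids - tgt_centroids) ** 2).sum(axis=1).mean())
```

A training loop would update both source and target centroids per batch (target labels being pseudo-labels) and add `centroid_discrepancy` to the loss.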
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.