Clarinet: A One-step Approach Towards Budget-friendly Unsupervised Domain Adaptation
- URL: http://arxiv.org/abs/2007.14612v2
- Date: Thu, 4 Mar 2021 06:09:13 GMT
- Title: Clarinet: A One-step Approach Towards Budget-friendly Unsupervised Domain Adaptation
- Authors: Yiyang Zhang, Feng Liu, Zhen Fang, Bo Yuan, Guangquan Zhang, Jie Lu
- Abstract summary: In unsupervised domain adaptation (UDA), classifiers for the target domain are trained with massive true-label data from the source domain and unlabeled data from the target domain.
We consider a novel problem setting, named budget-friendly UDA (BFUDA), where the classifier for the target domain has to be trained with complementary-label data from the source domain and unlabeled data from the target domain.
The complementary label adversarial network (CLARINET) is proposed to solve the BFUDA problem.
- Score: 39.53192710720228
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In unsupervised domain adaptation (UDA), classifiers for the target domain are trained with massive true-label data from the source domain and unlabeled data from the target domain. However, it may be difficult to collect fully-true-label data in a source domain given a limited budget. To mitigate this problem, we consider a novel problem setting, named budget-friendly UDA (BFUDA), where the classifier for the target domain has to be trained with complementary-label data from the source domain and unlabeled data from the target domain. A complementary label specifies a class that an instance does not belong to, so the complementary-label source data required by BFUDA is much less costly to collect than the true-label source data required by ordinary UDA. To this end, the complementary label adversarial network (CLARINET) is proposed to solve the BFUDA problem. CLARINET maintains two deep networks simultaneously: one focuses on classifying the complementary-label source data, while the other takes care of the source-to-target distributional adaptation. Experiments show that CLARINET significantly outperforms a series of competent baselines.
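To make the two-network design concrete, here is a minimal PyTorch-style sketch, not the authors' implementation: the complementary-label loss is one simple surrogate (it pushes down the predicted probability of the single class an example is known not to belong to; CLARINET's actual risk estimator differs), the adaptation branch is a generic DANN-style gradient-reversal domain discriminator, and all names and layer sizes are hypothetical placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Gradient reversal: identity on the forward pass, negated
    (scaled) gradient on the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

# Two networks trained simultaneously (sizes are placeholders):
feature_extractor = nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU())
label_classifier = nn.Linear(256, 10)                 # complementary-label branch
domain_discriminator = nn.Sequential(                 # source-vs-target branch
    nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 1))

def complementary_loss(logits, comp_labels):
    """Push down the probability of the one class each example is known
    NOT to belong to (a simple surrogate, not CLARINET's estimator)."""
    probs = F.softmax(logits, dim=1)
    p_comp = probs.gather(1, comp_labels.unsqueeze(1)).squeeze(1)
    return -torch.log(1.0 - p_comp + 1e-8).mean()

def training_step(x_src, comp_y_src, x_tgt, lambd=0.1):
    f_src = feature_extractor(x_src)
    f_tgt = feature_extractor(x_tgt)
    # (1) learn from complementary-label source data
    cls_loss = complementary_loss(label_classifier(f_src), comp_y_src)
    # (2) adversarially align source and target feature distributions
    feats = GradReverse.apply(torch.cat([f_src, f_tgt]), lambd)
    dom_logits = domain_discriminator(feats)
    dom_labels = torch.cat([torch.ones(len(f_src), 1), torch.zeros(len(f_tgt), 1)])
    dom_loss = F.binary_cross_entropy_with_logits(dom_logits, dom_labels)
    return cls_loss + dom_loss

# Example usage with random stand-in data:
x_s, x_t = torch.randn(32, 784), torch.randn(32, 784)
cbar = torch.randint(0, 10, (32,))                    # complementary labels
loss = training_step(x_s, cbar, x_t)
loss.backward()
```

In a full training loop both networks would be updated jointly from this combined loss, with the reversal strength lambd typically annealed over training.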
Related papers
- Inter-Domain Mixup for Semi-Supervised Domain Adaptation [108.40945109477886] (2024-01-21)
Semi-supervised domain adaptation (SSDA) aims to bridge source and target domain distributions, with a small number of target labels available.
Existing SSDA work fails to make full use of label information from both source and target domains for feature alignment across domains.
This paper presents a novel SSDA approach, Inter-domain Mixup with Neighborhood Expansion (IDMNE), to tackle this issue; a generic cross-domain mixup sketch is given after this list.
- IT-RUDA: Information Theory Assisted Robust Unsupervised Domain Adaptation [7.225445443960775] (2022-10-24)
Distribution shift between the train (source) and test (target) datasets is a common problem encountered in machine learning applications.
UDA techniques carry out knowledge transfer from a label-rich source domain to an unlabeled target domain.
Outliers in either the source or the target dataset can introduce additional challenges when using UDA in practice.
- Cross-Domain Few-Shot Classification via Inter-Source Stylization [11.008292768447614] (2022-08-17)
Cross-Domain Few-Shot Classification (CDFSC) aims to accurately classify a target dataset with limited labelled data.
This paper proposes a solution that makes use of multiple source domains without the need for additional labeling costs.
- Cycle Label-Consistent Networks for Unsupervised Domain Adaptation [57.29464116557734] (2022-05-27)
Domain adaptation aims to leverage a labeled source domain to learn a classifier for an unlabeled target domain with a different distribution.
We propose a simple yet efficient domain adaptation method, the Cycle Label-Consistent Network (CLCN), which exploits the cycle consistency of classification labels.
We demonstrate the effectiveness of our approach on the MNIST-USPS-SVHN, Office-31, Office-Home and ImageCLEF-DA benchmarks.
- Source-Free Domain Adaptation via Distribution Estimation [106.48277721860036] (2022-04-24)
Domain adaptation aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain whose data distribution is different.
Recently, Source-Free Domain Adaptation (SFDA), which tackles the domain adaptation problem without using source data, has drawn much attention.
In this work, we propose a novel framework called SFDA-DE to address the SFDA task via source Distribution Estimation.
- Discriminative Cross-Domain Feature Learning for Partial Domain Adaptation [70.45936509510528] (2020-08-26)
Partial domain adaptation aims to adapt knowledge from a larger and more diverse source domain to a smaller target domain with fewer classes.
Recent practice on domain adaptation manages to extract effective features by incorporating pseudo labels for the target domain.
It is therefore essential to align target data with only a small, relevant subset of the source data.
- Learning Target Domain Specific Classifier for Partial Domain Adaptation [85.71584004185031] (2020-08-25)
Unsupervised domain adaptation (UDA) aims at reducing the distribution discrepancy when transferring knowledge from a labeled source domain to an unlabeled target domain.
This paper focuses on a more realistic UDA scenario, where the target label space is subsumed by the source label space.
- Learning from a Complementary-label Source Domain: Theory and Algorithms [39.53192710720228] (2020-08-04)
We propose a novel setting in which the source domain is composed of complementary-label data.
A complementary label adversarial network (CLARINET) is proposed to solve the CC-UDA and PC-UDA problems.
Experiments show that CLARINET significantly outperforms a series of competent baselines on handwritten-digit recognition and object recognition tasks.
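As a rough illustration of the cross-domain mixup idea behind entries like IDMNE above, here is a minimal sketch of generic inter-domain mixup, assuming pseudo-labels are available for target samples; IDMNE's actual method involves more than this (e.g., its Neighborhood Expansion component), and every name here is hypothetical.

```python
import torch

def inter_domain_mixup(x_src, y_src, x_tgt, y_tgt, alpha=0.2):
    """Convexly combine source samples with (pseudo-)labeled target samples.
    y_src / y_tgt are soft or one-hot label tensors of shape (B, num_classes)."""
    lam = torch.distributions.Beta(alpha, alpha).sample()
    x_mix = lam * x_src + (1 - lam) * x_tgt
    y_mix = lam * y_src + (1 - lam) * y_tgt
    return x_mix, y_mix

# Example: mix a source batch with a pseudo-labeled target batch.
x_s, x_t = torch.randn(16, 3, 32, 32), torch.randn(16, 3, 32, 32)
y_s = torch.eye(10)[torch.randint(0, 10, (16,))]   # one-hot source labels
y_t = torch.softmax(torch.randn(16, 10), dim=1)    # e.g. model pseudo-labels
x_mix, y_mix = inter_domain_mixup(x_s, y_s, x_t, y_t)
```

Training on such mixed pairs encourages the classifier to behave linearly between the two domains, which is the intuition behind using mixup for cross-domain feature alignment.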
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.