Learning from a Complementary-label Source Domain: Theory and Algorithms
- URL: http://arxiv.org/abs/2008.01454v1
- Date: Tue, 4 Aug 2020 10:49:35 GMT
- Title: Learning from a Complementary-label Source Domain: Theory and Algorithms
- Authors: Yiyang Zhang, Feng Liu, Zhen Fang, Bo Yuan, Guangquan Zhang, Jie Lu
- Abstract summary: We propose a novel setting in which the source domain is composed of complementary-label data.
A complementary label adversarial network (CLARINET) is proposed to solve CC-UDA and PC-UDA problems.
Experiments show that CLARINET significantly outperforms a series of competent baselines on handwritten-digits-recognition and objects-recognition tasks.
- Score: 39.53192710720228
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In unsupervised domain adaptation (UDA), a classifier for the target domain
is trained with massive true-label data from the source domain and unlabeled
data from the target domain. However, collecting fully true-labeled data in the
source domain is costly and sometimes impossible. Compared to true
labels, a complementary label specifies a class that a pattern does not belong
to, hence collecting complementary labels would be less laborious than
collecting true labels. Thus, in this paper, we propose a novel setting in which
the source domain is composed of complementary-label data, and we first prove a
theoretical bound for it. We consider two cases of this setting: in one, the
source domain contains only complementary-label data (completely
complementary unsupervised domain adaptation, CC-UDA); in the other, the
source domain has plenty of complementary-label data and a small amount of
true-label data (partly complementary unsupervised domain adaptation, PC-UDA).
To this end, a complementary label adversarial network (CLARINET) is proposed
to solve CC-UDA and PC-UDA problems. CLARINET maintains two deep networks
simultaneously, where one focuses on classifying complementary-label source
data and the other takes care of source-to-target distributional adaptation.
Experiments show that CLARINET significantly outperforms a series of competent
baselines on handwritten-digits-recognition and objects-recognition tasks.
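The abstract describes two coupled pieces: a classifier trained only from complementary labels and an adversarial source-to-target adapter. The sketch below is a minimal illustration of those two ideas, not the authors' implementation: the network sizes, the specific complementary-label surrogate loss (minimizing -log(1 - p) on the class an example is known not to belong to), and the DANN-style domain discriminator are all illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy components; sizes assume flattened 28x28 digit images and 10 classes.
feature_extractor = nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU())
classifier = nn.Linear(256, 10)    # ordinary-label classifier head
discriminator = nn.Linear(256, 2)  # source-vs-target domain head

def complementary_loss(logits, comp_labels):
    # One common surrogate for complementary-label learning: drive down the
    # predicted probability of the single class each example is known NOT to
    # belong to, i.e. minimize -log(1 - p_comp). This is an assumed stand-in,
    # not CLARINET's exact loss.
    p = F.softmax(logits, dim=1)
    p_comp = p.gather(1, comp_labels.unsqueeze(1)).squeeze(1)
    return -torch.log1p(-p_comp.clamp(max=1 - 1e-6)).mean()

def training_step(x_src, comp_y_src, x_tgt):
    f_src, f_tgt = feature_extractor(x_src), feature_extractor(x_tgt)
    # (1) classify complementary-label source data
    cls_loss = complementary_loss(classifier(f_src), comp_y_src)
    # (2) adversarial source-to-target adaptation: the discriminator learns to
    # separate domains, while the feature extractor is updated to fool it
    # (typically via a gradient-reversal layer in DANN-style training).
    dom_logits = discriminator(torch.cat([f_src, f_tgt]))
    dom_labels = torch.cat([torch.zeros(len(x_src)),
                            torch.ones(len(x_tgt))]).long()
    dom_loss = F.cross_entropy(dom_logits, dom_labels)
    return cls_loss, dom_loss
```

Here `comp_y_src` holds, for each source example, the index of one class it does not belong to; the exact loss and the two-network training schedule in CLARINET follow the paper, which this sketch does not reproduce.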
Related papers
- Inter-Domain Mixup for Semi-Supervised Domain Adaptation [108.40945109477886]
Semi-supervised domain adaptation (SSDA) aims to bridge source and target domain distributions, with a small number of target labels available.
Existing SSDA work fails to make full use of label information from both source and target domains for feature alignment across domains.
This paper presents a novel SSDA approach, Inter-domain Mixup with Neighborhood Expansion (IDMNE), to tackle this issue; a generic mixup sketch appears after this list.
arXiv Detail & Related papers (2024-01-21T10:20:46Z)
- Discovering Domain Disentanglement for Generalized Multi-source Domain Adaptation [48.02978226737235]
A typical multi-source domain adaptation (MSDA) approach aims to transfer knowledge learned from a set of labeled source domains, to an unlabeled target domain.
We propose a variational domain disentanglement (VDD) framework, which decomposes the domain representations and semantic features for each instance by encouraging dimension-wise independence.
arXiv Detail & Related papers (2022-07-11T04:33:08Z)
- Cycle Label-Consistent Networks for Unsupervised Domain Adaptation [57.29464116557734]
Domain adaptation aims to leverage a labeled source domain to learn a classifier for the unlabeled target domain with a different distribution.
We propose a simple yet efficient domain adaptation method, the Cycle Label-Consistent Network (CLCN), which exploits the cycle consistency of classification labels.
We demonstrate the effectiveness of our approach on the MNIST-USPS-SVHN, Office-31, Office-Home and ImageCLEF-DA benchmarks.
arXiv Detail & Related papers (2022-05-27T13:09:08Z)
- Cross-domain Contrastive Learning for Unsupervised Domain Adaptation [108.63914324182984]
Unsupervised domain adaptation (UDA) aims to transfer knowledge learned from a fully-labeled source domain to a different unlabeled target domain.
We build upon contrastive self-supervised learning to align features so as to reduce the domain discrepancy between training and testing sets.
arXiv Detail & Related papers (2021-06-10T06:32:30Z)
- Learning Target Domain Specific Classifier for Partial Domain Adaptation [85.71584004185031]
Unsupervised domain adaptation (UDA) aims at reducing the distribution discrepancy when transferring knowledge from a labeled source domain to an unlabeled target domain.
This paper focuses on a more realistic UDA scenario, where the target label space is subsumed by the source label space.
arXiv Detail & Related papers (2020-08-25T02:28:24Z)
- Clarinet: A One-step Approach Towards Budget-friendly Unsupervised Domain Adaptation [39.53192710720228]
In unsupervised domain adaptation (UDA), classifiers for the target domain are trained with massive true-label data from the source domain and unlabeled data from the target domain.
We consider a novel problem setting, named budget-friendly UDA (BFUDA), where the classifier for the target domain has to be trained with complementary-label data from the source domain and unlabeled data from the target domain.
The complementary label adversarial network (CLARINET) is proposed to solve the BFUDA problem.
arXiv Detail & Related papers (2020-07-29T05:31:58Z)
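As referenced in the Inter-Domain Mixup entry above, here is a generic sketch of cross-domain mixup: the textbook mixup recipe applied between a labeled source example and a pseudo-labeled target example. IDMNE's actual method adds neighborhood expansion and other components not shown here, and the use of target pseudo-labels is an assumption.

```python
import torch

def inter_domain_mixup(x_src, y_src, x_tgt, y_tgt_pseudo, alpha=0.2):
    # Mix one labeled source example with one pseudo-labeled target example;
    # y_* are one-hot or soft label vectors of the same shape.
    lam = torch.distributions.Beta(alpha, alpha).sample()  # lam in (0, 1)
    x_mix = lam * x_src + (1 - lam) * x_tgt
    y_mix = lam * y_src + (1 - lam) * y_tgt_pseudo
    return x_mix, y_mix
```

Training a classifier on (x_mix, y_mix) exposes it to points between the two domain distributions, which is the intuition behind inter-domain mixup.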
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.