Deep Co-Training with Task Decomposition for Semi-Supervised Domain
Adaptation
- URL: http://arxiv.org/abs/2007.12684v5
- Date: Wed, 22 Sep 2021 21:55:14 GMT
- Title: Deep Co-Training with Task Decomposition for Semi-Supervised Domain
Adaptation
- Authors: Luyu Yang, Yan Wang, Mingfei Gao, Abhinav Shrivastava, Kilian Q.
Weinberger, Wei-Lun Chao, Ser-Nam Lim
- Abstract summary: Semi-supervised domain adaptation (SSDA) aims to adapt models trained on a labeled source domain to a different but related target domain.
We propose to explicitly decompose the SSDA task into two sub-tasks: a semi-supervised learning (SSL) task in the target domain and an unsupervised domain adaptation (UDA) task across domains.
- Score: 80.55236691733506
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Semi-supervised domain adaptation (SSDA) aims to adapt models trained on a
labeled source domain to a different but related target domain, from which
unlabeled data and a small set of labeled data are provided. Current methods
that treat source and target supervision without distinction overlook their
inherent discrepancy, resulting in a source-dominated model that has not
effectively used the target supervision. In this paper, we argue that the
labeled target data needs to be distinguished for effective SSDA, and propose
to explicitly decompose the SSDA task into two sub-tasks: a semi-supervised
learning (SSL) task in the target domain and an unsupervised domain adaptation
(UDA) task across domains. By doing so, the two sub-tasks can better leverage
the corresponding supervision and thus yield very different classifiers. To
integrate the strengths of the two classifiers, we apply the well-established
co-training framework, in which the two classifiers exchange their
high-confidence predictions to iteratively "teach each other" so that both
classifiers can excel in the target domain. We call our approach Deep
Co-training with Task decomposition (DeCoTa). DeCoTa requires no adversarial
training and is easy to implement. Moreover, DeCoTa is well-founded on the
theoretical conditions under which co-training succeeds. As a result, DeCoTa
achieves state-of-the-art results on several SSDA datasets, outperforming the
prior art by a notable 4% margin on DomainNet. Code is available at
https://github.com/LoyoYang/DeCoTa
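To make the pseudo-label exchange concrete, here is a minimal PyTorch-style sketch of a DeCoTa-style co-training step. It is an illustration, not the authors' implementation: the classifiers `f` (SSL sub-task) and `g` (UDA sub-task), the optimizers, and the confidence threshold `TAU` are all assumed names; the released code at the link above is the authoritative reference.

```python
# Minimal sketch of a DeCoTa-style co-training step (illustrative only).
# Assumptions (not taken from the released code): f is the SSL classifier
# trained with labeled + unlabeled target data, g is the UDA classifier
# trained with labeled source + unlabeled target data, and TAU is a
# confidence threshold for exchanging pseudo-labels.
import torch
import torch.nn.functional as F

TAU = 0.95  # hypothetical confidence threshold


def exchange_pseudo_labels(f, g, x_u):
    """Each classifier pseudo-labels the unlabeled target batch for the other."""
    with torch.no_grad():
        conf_f, labels_f = F.softmax(f(x_u), dim=1).max(dim=1)
        conf_g, labels_g = F.softmax(g(x_u), dim=1).max(dim=1)
    # f's confident predictions will supervise g, and vice versa.
    return (labels_f, conf_f >= TAU), (labels_g, conf_g >= TAU)


def co_training_step(f, g, opt_f, opt_g, x_t, y_t, x_s, y_s, x_u):
    (pl_from_f, mask_for_g), (pl_from_g, mask_for_f) = exchange_pseudo_labels(f, g, x_u)

    # SSL sub-task: labeled target data plus pseudo-labels received from g.
    loss_f = F.cross_entropy(f(x_t), y_t)
    if mask_for_f.any():
        loss_f = loss_f + F.cross_entropy(f(x_u[mask_for_f]), pl_from_g[mask_for_f])

    # UDA sub-task: labeled source data plus pseudo-labels received from f.
    loss_g = F.cross_entropy(g(x_s), y_s)
    if mask_for_g.any():
        loss_g = loss_g + F.cross_entropy(g(x_u[mask_for_g]), pl_from_f[mask_for_g])

    opt_f.zero_grad(); loss_f.backward(); opt_f.step()
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_f.item(), loss_g.item()
```

Because each sub-task sees a different mix of supervision, the two classifiers make sufficiently different errors, which is the classic condition for co-training to help.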
Related papers
- Divide and Contrast: Source-free Domain Adaptation via Adaptive Contrastive Learning [122.62311703151215]
Divide and Contrast (DaC) aims to combine the strengths of both worlds while bypassing their limitations.
DaC divides the target data into source-like and target-specific samples, where each group is treated with tailored goals.
We further align the source-like domain with the target-specific samples using a memory bank-based Maximum Mean Discrepancy (MMD) loss to reduce the distribution mismatch.
arXiv Detail & Related papers (2022-11-12T09:21:49Z)
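Below is a rough, self-contained sketch of the memory-bank MMD term mentioned in the DaC summary above. The RBF kernel, the bandwidth `sigma`, and the variable names are assumptions for illustration, not the paper's exact formulation.

```python
# Illustrative memory-bank MMD sketch: squared MMD between source-like
# features stored in a bank and target-specific batch features. The RBF
# kernel and bandwidth sigma are assumptions, not the paper's exact choice.
import torch


def rbf_kernel(x, y, sigma=1.0):
    # Gram matrix of an RBF kernel over the rows of x and y.
    sq_dists = torch.cdist(x, y) ** 2
    return torch.exp(-sq_dists / (2 * sigma ** 2))


def mmd_loss(bank_feats, target_feats, sigma=1.0):
    """Biased estimate of MMD^2 between the two feature sets."""
    k_bb = rbf_kernel(bank_feats, bank_feats, sigma).mean()
    k_tt = rbf_kernel(target_feats, target_feats, sigma).mean()
    k_bt = rbf_kernel(bank_feats, target_feats, sigma).mean()
    return k_bb + k_tt - 2.0 * k_bt
```

Minimizing this term pulls the two feature sets toward the same distribution in the kernel's reproducing space; the memory bank simply provides a larger, more stable sample of source-like features than a single batch would.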
- ACT: Semi-supervised Domain-adaptive Medical Image Segmentation with Asymmetric Co-training [34.017031149886556]
Unsupervised domain adaptation (UDA) has been extensively explored to alleviate domain shifts between source and target domains.
We propose to exploit both labeled source and target domain data, in addition to unlabeled target data in a unified manner.
We present a novel asymmetric co-training (ACT) framework to integrate these subsets and prevent the source domain data from dominating.
arXiv Detail & Related papers (2022-06-05T23:48:00Z)
- Semi-supervised Domain Adaptive Structure Learning [72.01544419893628]
Semi-supervised domain adaptation (SSDA) is a challenging problem requiring methods to overcome both (1) overfitting to poorly annotated data and (2) distribution shift across domains.
We introduce an adaptive structure learning method to regularize the cooperation of SSL and DA.
arXiv Detail & Related papers (2021-12-12T06:11:16Z)
- Contrastive Learning and Self-Training for Unsupervised Domain Adaptation in Semantic Segmentation [71.77083272602525]
UDA attempts to provide efficient knowledge transfer from a labeled source domain to an unlabeled target domain.
We propose a contrastive learning approach that adapts category-wise centroids across domains.
We extend our method with self-training, where we use a memory-efficient temporal ensemble to generate consistent and reliable pseudo-labels.
arXiv Detail & Related papers (2021-05-05T11:55:53Z)
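The category-wise centroid adaptation in the entry above can be sketched as follows. This is a guess at the general pattern (per-batch class centroids, a squared-distance penalty between domains), not the paper's exact loss; the function names are invented for illustration.

```python
# Illustrative sketch of aligning category-wise centroids across domains.
# Per-batch centroid estimates and the squared-Euclidean penalty are
# assumptions; the paper's exact formulation may differ.
import torch


def class_centroids(feats, labels, num_classes):
    """Mean feature vector per class; classes absent from the batch stay zero."""
    centroids = feats.new_zeros(num_classes, feats.size(1))
    counts = feats.new_zeros(num_classes)
    centroids = centroids.index_add(0, labels, feats)
    counts = counts.index_add(0, labels, torch.ones_like(labels, dtype=feats.dtype))
    centroids = centroids / counts.clamp(min=1).unsqueeze(1)
    return centroids, counts > 0


def centroid_alignment_loss(src_feats, src_labels, tgt_feats, tgt_pseudo, num_classes):
    c_src, p_src = class_centroids(src_feats, src_labels, num_classes)
    c_tgt, p_tgt = class_centroids(tgt_feats, tgt_pseudo, num_classes)
    both = p_src & p_tgt  # align only classes observed in both domains
    if not both.any():
        return src_feats.new_zeros(())
    return ((c_src[both] - c_tgt[both]) ** 2).sum(dim=1).mean()
```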
- Curriculum Graph Co-Teaching for Multi-Target Domain Adaptation [78.28390172958643]
We identify two key aspects that can help to alleviate multiple domain shifts in multi-target domain adaptation (MTDA).
We propose Curriculum Graph Co-Teaching (CGCT), which uses a dual classifier head, one of which is a graph convolutional network (GCN) that aggregates features from similar samples across the domains.
When the domain labels are available, we propose Domain-aware Curriculum Learning (DCL), a sequential adaptation strategy that first adapts on the easier target domains, followed by the harder ones.
arXiv Detail & Related papers (2021-04-01T23:41:41Z)
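As a loose illustration of the dual classifier head idea in the CGCT entry above, the sketch below pairs a plain linear head with a single GCN-style layer that mixes the features of similar samples within a batch. The similarity-graph construction and layer sizes are assumptions, not CGCT's actual architecture.

```python
# Illustrative dual-head sketch: an MLP head plus a GCN-style head that
# aggregates features of similar samples via a batch-wise cosine-similarity
# graph. Graph construction and sizes are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DualHead(nn.Module):
    def __init__(self, feat_dim, num_classes):
        super().__init__()
        self.mlp_head = nn.Linear(feat_dim, num_classes)
        self.gcn_head = nn.Linear(feat_dim, num_classes)

    def forward(self, feats):
        # Head 1: plain linear classifier on each sample's own features.
        logits_mlp = self.mlp_head(feats)
        # Head 2: aggregate each sample with its similar neighbors in the
        # batch (one GCN-style propagation step), then classify.
        normed = F.normalize(feats, dim=1)
        adj = F.softmax(normed @ normed.t(), dim=1)  # row-normalized affinity
        logits_gcn = self.gcn_head(adj @ feats)
        return logits_mlp, logits_gcn
```

The two heads see the same features through different receptive fields (individual sample vs. neighborhood aggregate), which is what makes their predictions complementary for co-teaching.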
- Adversarial Learning for Zero-shot Domain Adaptation [31.334196673143257]
Zero-shot domain adaptation (ZSDA) is a problem where neither data samples nor labels are available for parameter learning in the target domain.
We propose a new method for ZSDA by transferring domain shift from an irrelevant task to the task of interest.
We evaluate the proposed method on benchmark datasets and achieve state-of-the-art performance.
arXiv Detail & Related papers (2020-09-11T03:41:32Z)
- Alleviating Semantic-level Shift: A Semi-supervised Domain Adaptation Method for Semantic Segmentation [97.8552697905657]
A key challenge of this task is how to alleviate the data distribution discrepancy between the source and target domains.
We propose Alleviating Semantic-level Shift (ASS), which can successfully promote the distribution consistency from both global and local views.
We apply our ASS to two domain adaptation tasks, from GTA5 to Cityscapes and from Synthia to Cityscapes.
arXiv Detail & Related papers (2020-04-02T03:25:05Z)
- Partially-Shared Variational Auto-encoders for Unsupervised Domain Adaptation with Target Shift [11.873435088539459]
This paper proposes a novel approach for unsupervised domain adaptation (UDA) with target shift.
The proposed method, partially shared variational autoencoders (PS-VAEs), uses pair-wise feature alignment instead of feature distribution matching.
PS-VAEs convert each sample between domains with a CycleGAN-based architecture while preserving its label-related content.
arXiv Detail & Related papers (2020-01-22T06:41:31Z)
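The contrast between pair-wise feature alignment and distribution matching in the PS-VAE entry above can be sketched as follows. `encoder` and `translate_s2t` are hypothetical stand-ins for the shared encoder and the CycleGAN-style translator, and the MSE pairing loss is an assumption for illustration.

```python
# Sketch of pair-wise feature alignment (as opposed to matching whole
# feature distributions): a sample and its domain-translated counterpart
# should map to the same latent features. `encoder` and `translate_s2t`
# are hypothetical modules, not names from the paper.
import torch
import torch.nn.functional as F


def pairwise_alignment_loss(encoder, translate_s2t, x_source):
    x_fake_target = translate_s2t(x_source)  # CycleGAN-style translation
    z_src = encoder(x_source)
    z_fake = encoder(x_fake_target)
    # Align each sample with its own translated pair rather than pushing
    # the two feature distributions together as a whole; this is what keeps
    # the method robust to target shift (unequal class proportions).
    return F.mse_loss(z_fake, z_src.detach())
```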
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences of its use.