Co-Teaching for Unsupervised Domain Adaptation and Expansion
- URL: http://arxiv.org/abs/2204.01210v3
- Date: Thu, 14 Sep 2023 01:54:51 GMT
- Title: Co-Teaching for Unsupervised Domain Adaptation and Expansion
- Authors: Kaibin Tian, Qijie Wei, Xirong Li
- Abstract summary: Unsupervised Domain Adaptation (UDA) essentially trades a model's performance on a source domain for improving its performance on a target domain.
UDE tries to adapt the model for the target domain as UDA does, and in the meantime maintains its source-domain performance.
In both UDA and UDE settings, a model tailored to a given domain, be it the source or the target domain, is assumed to handle samples from that domain well.
We question this assumption by reporting cross-domain visual ambiguity, and exploit this finding to propose Co-Teaching (CT).
- Score: 12.455364571022576
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised Domain Adaptation (UDA) essentially trades a model's performance
on a source domain for improving its performance on a target domain. To resolve
the issue, Unsupervised Domain Expansion (UDE) has been proposed recently. UDE
tries to adapt the model for the target domain as UDA does, and in the meantime
maintains its source-domain performance. In both UDA and UDE settings, a model
tailored to a given domain, be it the source or the target domain, is
assumed to handle samples from that domain well. We question this
assumption by reporting the existence of cross-domain visual ambiguity: given
the lack of a crystal-clear boundary between the two domains, samples from
one domain can be visually close to the other domain. Such samples are
typically a minority in their host domain, so they tend to be overlooked by
the domain-specific model, but can be better handled by a model from the other
domain. We exploit this finding, and accordingly propose Co-Teaching (CT). The
CT method is instantiated with knowledge distillation based CT (kdCT) plus
mixup based CT (miCT). Specifically, kdCT transfers knowledge from a
leading-teacher network and an assistant-teacher network to a student network,
so the cross-domain ambiguity will be better handled by the student. Meanwhile,
miCT further enhances the generalization ability of the student. Extensive
experiments on two image classification datasets and two driving-scene
segmentation datasets justify the viability of CT for UDA and UDE.
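To make the two CT components concrete, here is a hedged, PyTorch-style sketch of kdCT and miCT as described above. The blending weight alpha, the temperature tau, the Beta(0.2, 0.2) mixup prior, and all function names are illustrative assumptions; the paper's actual loss weighting and teacher-selection rules are not reproduced here.

```python
# Hedged sketch of the two Co-Teaching components described in the abstract.
# alpha, tau, the Beta(0.2, 0.2) mixup prior and the function names are
# illustrative assumptions, not the paper's exact configuration.
import torch
import torch.nn.functional as F


def kd_ct_loss(student_logits, leading_logits, assistant_logits,
               alpha=0.7, tau=2.0):
    """kdCT: the student mimics a soft blend of the leading teacher
    (same-domain expert) and the assistant teacher (other-domain expert),
    so cross-domain-ambiguous samples still receive a sensible soft target."""
    p_lead = F.softmax(leading_logits.detach() / tau, dim=1)
    p_assist = F.softmax(assistant_logits.detach() / tau, dim=1)
    target = alpha * p_lead + (1.0 - alpha) * p_assist       # blended soft target
    log_p_student = F.log_softmax(student_logits / tau, dim=1)
    return F.kl_div(log_p_student, target, reduction="batchmean") * tau ** 2


def mi_ct_loss(student, x_a, x_b, teacher_probs_a, teacher_probs_b,
               beta_param=0.2):
    """miCT: train the student on convex combinations of two inputs and the
    matching combination of teacher-provided soft labels, which regularizes
    the student and improves generalization."""
    lam = torch.distributions.Beta(beta_param, beta_param).sample().item()
    x_mix = lam * x_a + (1.0 - lam) * x_b
    y_mix = lam * teacher_probs_a + (1.0 - lam) * teacher_probs_b
    log_p = F.log_softmax(student(x_mix), dim=1)
    return -(y_mix * log_p).sum(dim=1).mean()                # soft cross-entropy
```

In a full training loop the two terms would be summed with task-specific weights, and which network acts as the leading teacher for a given batch depends on that batch's domain; both choices follow the paper rather than this sketch.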
Related papers
- CDA: Contrastive-adversarial Domain Adaptation [11.354043674822451]
We propose a two-stage model for domain adaptation called Contrastive-adversarial Domain Adaptation (CDA).
While the adversarial component facilitates domain-level alignment, two-stage contrastive learning exploits class information to achieve higher intra-class compactness across domains (a hedged sketch of such a combined objective appears after this list).
arXiv Detail & Related papers (2023-01-10T07:43:21Z)
- Unsupervised Domain Adaptation for Cross-Modality Retinal Vessel Segmentation via Disentangling Representation Style Transfer and Collaborative Consistency Learning [3.9562534927482704]
We propose DCDA, a novel cross-modality unsupervised domain adaptation framework for tasks with large domain shifts.
Our framework achieves Dice scores close to the target-trained oracle in both directions (OCTA to OCT and OCT to OCTA), significantly outperforming other state-of-the-art methods.
arXiv Detail & Related papers (2022-01-13T07:03:16Z)
- Cross-domain Contrastive Learning for Unsupervised Domain Adaptation [108.63914324182984]
Unsupervised domain adaptation (UDA) aims to transfer knowledge learned from a fully-labeled source domain to a different unlabeled target domain.
We build upon contrastive self-supervised learning to align features so as to reduce the domain discrepancy between training and testing sets.
arXiv Detail & Related papers (2021-06-10T06:32:30Z)
- Contrastive Learning and Self-Training for Unsupervised Domain Adaptation in Semantic Segmentation [71.77083272602525]
UDA attempts to provide efficient knowledge transfer from a labeled source domain to an unlabeled target domain.
We propose a contrastive learning approach that adapts category-wise centroids across domains.
We extend our method with self-training, where we use a memory-efficient temporal ensemble to generate consistent and reliable pseudo-labels.
arXiv Detail & Related papers (2021-05-05T11:55:53Z)
- Unsupervised Domain Expansion for Visual Categorization [12.427064803221729]
Unsupervised domain expansion (UDE) aims to adapt a deep model for the target domain with its unlabeled data, while maintaining the model's performance on the source domain.
We develop a knowledge distillation based learning mechanism, KDDE, which optimizes a single objective wherein the source and target domains are equally treated (a minimal sketch of such a single-objective distillation step follows this list).
arXiv Detail & Related papers (2021-04-01T03:27:35Z)
- Prototypical Cross-domain Self-supervised Learning for Few-shot Unsupervised Domain Adaptation [91.58443042554903]
We propose an end-to-end Prototypical Cross-domain Self-Supervised Learning (PCS) framework for Few-shot Unsupervised Domain Adaptation (FUDA).
PCS not only performs cross-domain low-level feature alignment, but it also encodes and aligns semantic structures in the shared embedding space across domains.
Compared with state-of-the-art methods, PCS improves the mean classification accuracy over different domain pairs on FUDA by 10.5%, 3.5%, 9.0%, and 13.2% on Office, Office-Home, VisDA-2017, and DomainNet, respectively.
arXiv Detail & Related papers (2021-03-31T02:07:42Z)
- Dual-Teacher++: Exploiting Intra-domain and Inter-domain Knowledge with Reliable Transfer for Cardiac Segmentation [69.09432302497116]
We propose a cutting-edge semi-supervised domain adaptation framework, namely Dual-Teacher++.
We design novel dual teacher models, including an inter-domain teacher model to explore cross-modality priors from the source domain (e.g., MR) and an intra-domain teacher model to investigate the knowledge beneath the unlabeled target domain.
In this way, the student model can obtain reliable dual-domain knowledge and yield improved performance on target domain data.
arXiv Detail & Related papers (2021-01-07T05:17:38Z)
- Conditional Coupled Generative Adversarial Networks for Zero-Shot Domain Adaptation [31.334196673143257]
Machine learning models trained in one domain perform poorly in other domains due to domain shift.
We propose conditional coupled generative adversarial networks (CoCoGAN) by extending the coupled generative adversarial networks (CoGAN) into a conditioning model.
Our proposed CoCoGAN is able to capture the joint distribution of dual-domain samples in two different tasks, i.e., the relevant task (RT) and an irrelevant task (IRT).
arXiv Detail & Related papers (2020-09-11T04:36:42Z)
- Domain2Vec: Domain Embedding for Unsupervised Domain Adaptation [56.94873619509414]
Conventional unsupervised domain adaptation studies the knowledge transfer between a limited number of domains.
We propose a novel Domain2Vec model to provide vectorial representations of visual domains based on joint learning of feature disentanglement and Gram matrix.
We demonstrate that our embedding is capable of predicting domain similarities that match our intuition about visual relations between different domains.
arXiv Detail & Related papers (2020-07-17T22:05:09Z)
- Mind the Gap: Enlarging the Domain Gap in Open Set Domain Adaptation [65.38975706997088]
Open set domain adaptation (OSDA) assumes the presence of unknown classes in the target domain.
We show that existing state-of-the-art methods suffer a considerable performance drop in the presence of larger domain gaps.
We propose a novel framework to specifically address the larger domain gaps.
arXiv Detail & Related papers (2020-03-08T14:20:24Z)
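The CDA entry in the list above pairs adversarial domain-level alignment with class-aware contrastive learning. The following is a minimal sketch of how those two loss terms could be combined; the supervised-contrastive-style formulation, the temperature, and the tensor shapes are assumptions for illustration, not the paper's actual two-stage procedure.

```python
# Illustrative combination of an adversarial domain-alignment loss and a
# class-aware contrastive loss (assumed formulation, not the CDA paper's
# exact two-stage schedule).
import torch
import torch.nn.functional as F


def adversarial_domain_loss(domain_logits, domain_labels):
    """Domain-level alignment: a discriminator separates source from target
    features; reversing its gradient into the encoder (e.g., via a gradient
    reversal layer) pushes the encoder toward domain-invariant features."""
    return F.binary_cross_entropy_with_logits(domain_logits, domain_labels)


def class_contrastive_loss(features, labels, temperature=0.1):
    """Intra-class compactness: features that share a (pseudo-)label attract,
    all other pairs repel, in a supervised-contrastive-style objective."""
    z = F.normalize(features, dim=1)
    sim = z @ z.t() / temperature                        # pairwise similarities
    positives = labels.unsqueeze(0).eq(labels.unsqueeze(1)).float()
    positives.fill_diagonal_(0)                          # drop self-pairs
    self_mask = torch.ones_like(sim).fill_diagonal_(0)
    log_prob = sim - torch.log((torch.exp(sim) * self_mask).sum(1, keepdim=True) + 1e-8)
    pos_count = positives.sum(1).clamp(min=1)
    return -((positives * log_prob).sum(1) / pos_count).mean()
```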
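The Unsupervised Domain Expansion entry above describes a distillation mechanism that optimizes a single objective in which the source and target domains are treated equally. Below is a minimal sketch under that description; the expert-model names, the temperature, and the exact 0.5/0.5 weighting are assumptions, not the paper's confirmed configuration.

```python
# Assumed single-objective distillation step for domain expansion: one
# expanded model distills from a source-domain expert on source images and
# from a target-adapted expert on target images, with equal domain weights.
import torch.nn.functional as F


def ude_distillation_loss(expanded, source_expert, target_expert,
                          x_source, x_target, tau=1.0):
    def kd(student_logits, teacher_logits):
        return F.kl_div(F.log_softmax(student_logits / tau, dim=1),
                        F.softmax(teacher_logits / tau, dim=1),
                        reduction="batchmean") * tau ** 2

    loss_src = kd(expanded(x_source), source_expert(x_source).detach())
    loss_tgt = kd(expanded(x_target), target_expert(x_target).detach())
    return 0.5 * loss_src + 0.5 * loss_tgt       # both domains weighted equally
```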
This list is automatically generated from the titles and abstracts of the papers on this site.