Domain Adaptive Ensemble Learning
- URL: http://arxiv.org/abs/2003.07325v3
- Date: Wed, 8 Sep 2021 07:36:36 GMT
- Title: Domain Adaptive Ensemble Learning
- Authors: Kaiyang Zhou, Yongxin Yang, Yu Qiao, Tao Xiang
- Abstract summary: We propose a unified framework termed domain adaptive ensemble learning (DAEL) to address both problems.
Experiments on three multi-source UDA and two DG datasets show that DAEL improves the state of the art on both problems, often by significant margins.
- Score: 141.98192460069765
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The problem of generalizing deep neural networks from multiple source domains
to a target one is studied under two settings: When unlabeled target data is
available, it is a multi-source unsupervised domain adaptation (UDA) problem,
otherwise a domain generalization (DG) problem. We propose a unified framework
termed domain adaptive ensemble learning (DAEL) to address both problems. A
DAEL model is composed of a CNN feature extractor shared across domains and
multiple classifier heads each trained to specialize in a particular source
domain. Each such classifier is an expert for its own domain and a non-expert for
the others. DAEL aims to learn these experts collaboratively so that, when forming
an ensemble, they can leverage complementary information from each other to be
more effective for an unseen target domain. To this end, each source domain is
used in turn as a pseudo-target-domain with its own expert providing
supervisory signal to the ensemble of non-experts learned from the other
sources. For unlabeled target data under the UDA setting, where a real expert does
not exist, DAEL uses pseudo-labels to supervise the ensemble learning. Extensive
experiments on three multi-source UDA datasets and two DG datasets show that
DAEL improves the state of the art on both problems, often by significant
margins. The code is released at
\url{https://github.com/KaiyangZhou/Dassl.pytorch}.
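The abstract describes DAEL's structure concretely: a shared feature extractor, one classifier head ("expert") per source domain, and a training scheme in which each source domain in turn acts as a pseudo-target whose expert supervises the ensemble of the remaining non-expert heads. The following is a minimal, illustrative PyTorch sketch of that idea, not the released Dassl.pytorch implementation; names such as DAELSketch and dael_consistency_step are assumptions for illustration, and details from the paper (data augmentation, the UDA pseudo-labeling step, confidence thresholds) are omitted.

```python
# Minimal sketch of the DAEL idea (assumed structure, not the authors' code):
# a shared feature extractor with one classifier head per source domain, where
# the pseudo-target expert supervises the ensemble of non-expert heads.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DAELSketch(nn.Module):
    def __init__(self, feat_dim: int, num_classes: int, num_domains: int):
        super().__init__()
        # Shared feature extractor; a tiny MLP stands in for the CNN backbone here.
        self.backbone = nn.Sequential(
            nn.Flatten(), nn.Linear(3 * 32 * 32, feat_dim), nn.ReLU()
        )
        # One classifier head per source domain (the domain "experts").
        self.experts = nn.ModuleList(
            [nn.Linear(feat_dim, num_classes) for _ in range(num_domains)]
        )

    def forward(self, x, domain: int = None):
        f = self.backbone(x)
        if domain is not None:
            return self.experts[domain](f)      # single expert's prediction
        logits = torch.stack([head(f) for head in self.experts], dim=0)
        return logits.mean(dim=0)               # ensemble over all heads


def dael_consistency_step(model, x, pseudo_target: int, num_domains: int):
    """One collaborative step: the pseudo-target domain's expert provides the
    supervisory signal for the ensemble of the remaining non-expert heads."""
    with torch.no_grad():
        expert_prob = F.softmax(model(x, domain=pseudo_target), dim=1)
    f = model.backbone(x)
    non_experts = [
        model.experts[d](f) for d in range(num_domains) if d != pseudo_target
    ]
    ensemble_logits = torch.stack(non_experts, dim=0).mean(dim=0)
    # Soft cross-entropy between the expert's prediction and the non-expert ensemble.
    return -(expert_prob * F.log_softmax(ensemble_logits, dim=1)).sum(dim=1).mean()


if __name__ == "__main__":
    model = DAELSketch(feat_dim=128, num_classes=7, num_domains=3)
    images = torch.randn(8, 3, 32, 32)          # a batch from source domain 0
    loss = dael_consistency_step(model, images, pseudo_target=0, num_domains=3)
    loss.backward()
    print(float(loss))
```

At inference time, calling the model without a domain index averages all expert heads, which is the ensemble applied to the unseen target domain.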
Related papers
- Reducing Source-Private Bias in Extreme Universal Domain Adaptation [11.875619863954238]
Universal Domain Adaptation (UniDA) aims to transfer knowledge from a labeled source domain to an unlabeled target domain.
We show that state-of-the-art methods struggle when the source domain has significantly more non-overlapping classes than overlapping ones.
We propose using self-supervised learning to preserve the structure of the target data.
arXiv Detail & Related papers (2024-10-15T04:51:37Z) - MultiMatch: Multi-task Learning for Semi-supervised Domain Generalization [55.06956781674986]
We tackle the semi-supervised domain generalization task, where only a small amount of labeled data is available in each source domain.
We propose MultiMatch, which extends FixMatch to a multi-task learning framework and produces high-quality pseudo-labels for SSDG.
A series of experiments validates the effectiveness of the proposed method, which outperforms existing semi-supervised methods and the SSDG method on several benchmark DG datasets.
arXiv Detail & Related papers (2022-08-11T14:44:33Z) - Dynamic Instance Domain Adaptation [109.53575039217094]
Most studies on unsupervised domain adaptation assume that each domain's training samples come with domain labels.
We develop a dynamic neural network with adaptive convolutional kernels that generates instance-adaptive residuals, adapting domain-agnostic deep features to each individual instance.
Our model, dubbed DIDA-Net, achieves state-of-the-art performance on several commonly used single-source and multi-source UDA datasets.
arXiv Detail & Related papers (2022-03-09T20:05:54Z) - Generalizable Person Re-identification with Relevance-aware Mixture of Experts [45.13716166680772]
We propose a novel method called the relevance-aware mixture of experts (RaMoE).
RaMoE uses an effective voting-based mixture mechanism to dynamically leverage source domains' diverse characteristics to improve the model's generalization.
Since the target domains are unseen during training, we propose a novel learning-to-learn algorithm combined with our relation alignment loss to update the voting network.
arXiv Detail & Related papers (2021-05-19T14:19:34Z) - Unsupervised Multi-Source Domain Adaptation for Person Re-Identification [39.817734080890695]
Unsupervised domain adaptation (UDA) methods for person re-identification (re-ID) aim at transferring re-ID knowledge from labeled source data to unlabeled target data.
We introduce the multi-source concept into UDA person re-ID field, where multiple source datasets are used during training.
The proposed method outperforms state-of-the-art UDA person re-ID methods by a large margin, and even achieves comparable performance to the supervised approaches without any post-processing techniques.
arXiv Detail & Related papers (2021-04-27T03:33:35Z) - Inferring Latent Domains for Unsupervised Deep Domain Adaptation [54.963823285456925]
Unsupervised Domain Adaptation (UDA) refers to the problem of learning a model in a target domain where labeled data are not available.
This paper introduces a novel deep architecture which addresses the problem of UDA by automatically discovering latent domains in visual datasets.
We evaluate our approach on publicly available benchmarks, showing that it outperforms state-of-the-art domain adaptation methods.
arXiv Detail & Related papers (2021-03-25T14:33:33Z) - Deep Domain-Adversarial Image Generation for Domain Generalisation [115.21519842245752]
Machine learning models typically suffer from the domain shift problem when trained on a source dataset and evaluated on a target dataset of different distribution.
To overcome this problem, domain generalisation (DG) methods aim to leverage data from multiple source domains so that a trained model can generalise to unseen domains.
We propose a novel DG approach based on Deep Domain-Adversarial Image Generation (DDAIG).
arXiv Detail & Related papers (2020-03-12T23:17:47Z) - Mind the Gap: Enlarging the Domain Gap in Open Set Domain Adaptation [65.38975706997088]
Open set domain adaptation (OSDA) assumes the presence of unknown classes in the target domain.
We show that existing state-of-the-art methods suffer a considerable performance drop in the presence of larger domain gaps.
We propose a novel framework to specifically address the larger domain gaps.
arXiv Detail & Related papers (2020-03-08T14:20:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.