Better Pseudo-label: Joint Domain-aware Label and Dual-classifier for
Semi-supervised Domain Generalization
- URL: http://arxiv.org/abs/2110.04820v1
- Date: Sun, 10 Oct 2021 15:17:27 GMT
- Title: Better Pseudo-label: Joint Domain-aware Label and Dual-classifier for
Semi-supervised Domain Generalization
- Authors: Ruiqi Wang, Lei Qi, Yinghuan Shi and Yang Gao
- Abstract summary: We propose a novel framework via joint domain-aware labels and dual-classifier to produce high-quality pseudo-labels.
To predict accurate pseudo-labels under domain shift, a domain-aware pseudo-labeling module is developed.
- Also, since generalization and pseudo-labeling have inconsistent goals, we employ a dual-classifier to perform pseudo-labeling and domain generalization independently during training.
- Score: 26.255457629490135
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the goal of directly generalizing trained models to unseen target
domains, domain generalization (DG), a newly proposed learning paradigm, has
attracted considerable attention. Previous DG models usually require a
sufficient quantity of annotated samples from observed source domains during
training. In this paper, we relax this requirement of full annotation and
investigate semi-supervised domain generalization (SSDG), where only one
source domain is fully annotated while the other source domains remain
completely unlabeled during training. To tackle the challenges of bridging
the domain gap between the observed source domains and of generalizing to
unseen target domains, we propose a novel deep framework that combines
domain-aware labels with a dual-classifier to produce high-quality
pseudo-labels. Concretely, a domain-aware pseudo-labeling module is developed
to predict accurate pseudo-labels under domain shift. Moreover, generalization
and pseudo-labeling have inconsistent goals: the former prevents overfitting
on all source domains, while the latter may overfit the unlabeled source
domains in pursuit of high accuracy. We therefore employ a dual-classifier to
perform pseudo-labeling and domain generalization independently during
training. Extensive results on publicly available DG benchmark datasets show
the efficacy of our proposed SSDG method compared to well-designed baselines
and state-of-the-art semi-supervised learning methods.
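The abstract describes the framework only at a high level, so the following is a minimal PyTorch-style sketch of the idea as stated: a shared feature extractor with two independent heads, where one head produces confidence-thresholded pseudo-labels for the unlabeled source domains and the other is trained for the generalization objective. All module names, the confidence threshold, and the loss combination are illustrative assumptions, and the domain-aware part of the authors' pseudo-labeling module is not reproduced here.

```python
# Minimal sketch (assumed design, not the authors' released code): a shared
# encoder with two independent classifier heads, one dedicated to
# pseudo-labeling the unlabeled source domains and one to the
# domain-generalization objective.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DualClassifierSSDG(nn.Module):
    def __init__(self, encoder: nn.Module, feat_dim: int, num_classes: int):
        super().__init__()
        self.encoder = encoder                            # shared feature extractor
        self.pl_head = nn.Linear(feat_dim, num_classes)   # pseudo-labeling classifier
        self.dg_head = nn.Linear(feat_dim, num_classes)   # generalization classifier

    def forward(self, x):
        feats = self.encoder(x)
        return self.pl_head(feats), self.dg_head(feats)


def training_step(model, x_lab, y_lab, x_unlab, threshold=0.95):
    """One hypothetical step: supervised loss on the labeled source domain plus
    a pseudo-label loss on the unlabeled source domains."""
    pl_logits, dg_logits = model(x_lab)
    sup_loss = F.cross_entropy(pl_logits, y_lab) + F.cross_entropy(dg_logits, y_lab)

    # Pseudo-labels come only from the dedicated pseudo-labeling head, so the
    # generalization head is not driven to overfit the unlabeled source domains.
    with torch.no_grad():
        pl_unlab, _ = model(x_unlab)
        conf, pseudo_y = pl_unlab.softmax(dim=-1).max(dim=-1)
        mask = conf.ge(threshold).float()                 # keep confident predictions

    _, dg_unlab = model(x_unlab)
    unsup_loss = (F.cross_entropy(dg_unlab, pseudo_y, reduction="none") * mask).mean()
    return sup_loss + unsup_loss
```

In the actual method the pseudo-labeling is additionally domain-aware, i.e., it accounts for the shift between the labeled and unlabeled source domains, which a plain softmax threshold as above does not.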
Related papers
- Disentangling Masked Autoencoders for Unsupervised Domain Generalization [57.56744870106124]
Unsupervised domain generalization is fast gaining attention but is still far from well-studied.
Disentangled Masked Autoencoder (DisMAE) aims to discover the disentangled representations that faithfully reveal intrinsic features.
DisMAE co-trains the asymmetric dual-branch architecture with semantic and lightweight variation encoders.
arXiv Detail & Related papers (2024-07-10T11:11:36Z)
- Adaptive Betweenness Clustering for Semi-Supervised Domain Adaptation [108.40945109477886]
We propose a novel SSDA approach named Graph-based Adaptive Betweenness Clustering (G-ABC) for achieving categorical domain alignment.
Our method outperforms previous state-of-the-art SSDA approaches, demonstrating the superiority of the proposed G-ABC algorithm.
arXiv Detail & Related papers (2024-01-21T09:57:56Z)
- MultiMatch: Multi-task Learning for Semi-supervised Domain Generalization [55.06956781674986]
We address the semi-supervised domain generalization task, where only a small amount of labeled data is available in each source domain.
We propose MultiMatch, which extends FixMatch to a multi-task learning framework to produce high-quality pseudo-labels for SSDG; a sketch of the underlying FixMatch mechanism is given after this list.
A series of experiments validates the effectiveness of the proposed method, which outperforms existing semi-supervised methods and the SSDG method on several benchmark DG datasets.
arXiv Detail & Related papers (2022-08-11T14:44:33Z)
- Discovering Domain Disentanglement for Generalized Multi-source Domain Adaptation [48.02978226737235]
A typical multi-source domain adaptation (MSDA) approach aims to transfer knowledge learned from a set of labeled source domains, to an unlabeled target domain.
We propose a variational domain disentanglement (VDD) framework, which decomposes the domain representations and semantic features for each instance by encouraging dimension-wise independence.
arXiv Detail & Related papers (2022-07-11T04:33:08Z)
- Compound Domain Generalization via Meta-Knowledge Encoding [55.22920476224671]
We introduce Style-induced Domain-specific Normalization (SDNorm) to re-normalize the multi-modal underlying distributions.
We harness the prototype representations, the centroids of classes, to perform relational modeling in the embedding space.
Experiments on four standard Domain Generalization benchmarks reveal that COMEN exceeds the state-of-the-art performance without the need for domain supervision.
arXiv Detail & Related papers (2022-03-24T11:54:59Z)
- Unsupervised Domain Generalization for Person Re-identification: A Domain-specific Adaptive Framework [50.88463458896428]
Domain generalization (DG) has attracted much attention in person re-identification (ReID) recently.
Existing methods usually need the source domains to be labeled, which could be a significant burden for practical ReID tasks.
We propose a simple and efficient domain-specific adaptive framework, and realize it with an adaptive normalization module.
arXiv Detail & Related papers (2021-11-30T02:35:51Z)
- Semi-Supervised Domain Generalization with Evolving Intermediate Domain [24.75184388536862]
Domain Generalization aims to generalize a model trained on multiple source domains to an unseen target domain.
We introduce a novel paradigm of DG, termed Semi-Supervised Domain Generalization (SSDG).
We develop a pseudo-labeling phase and a generalization phase independently for SSDG.
arXiv Detail & Related papers (2021-11-19T13:55:57Z)
- Inferring Latent Domains for Unsupervised Deep Domain Adaptation [54.963823285456925]
Unsupervised Domain Adaptation (UDA) refers to the problem of learning a model in a target domain where labeled data are not available.
This paper introduces a novel deep architecture which addresses the problem of UDA by automatically discovering latent domains in visual datasets.
We evaluate our approach on publicly available benchmarks, showing that it outperforms state-of-the-art domain adaptation methods.
arXiv Detail & Related papers (2021-03-25T14:33:33Z)
- Robust Domain-Free Domain Generalization with Class-aware Alignment [4.442096198968069]
Domain-Free Domain Generalization (DFDG) is a model-agnostic method to achieve better generalization performance on the unseen test domain.
DFDG uses novel strategies to learn domain-invariant class-discriminative features.
It obtains competitive performance on both time series sensor and image classification public datasets.
arXiv Detail & Related papers (2021-02-17T17:46:06Z)
- Domain Generalization via Semi-supervised Meta Learning [7.722498348924133]
We propose the first method of domain generalization to leverage unlabeled samples.
It is trained by a meta learning approach to mimic the distribution shift between the input source domains and unseen target domains.
Experimental results on benchmark datasets indicate that the proposed method outperforms state-of-the-art domain generalization and semi-supervised learning methods.
arXiv Detail & Related papers (2020-09-26T18:05:04Z)
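For context on the pseudo-labeling theme shared by the main paper and the MultiMatch entry above, here is a minimal sketch of the FixMatch mechanism that MultiMatch reportedly extends: weakly augmented views produce confidence-thresholded pseudo-labels that supervise strongly augmented views. The function name and threshold are illustrative choices, and the multi-task extension itself is not reproduced.

```python
# Minimal sketch of FixMatch-style pseudo-labeling on unlabeled data, the base
# mechanism that MultiMatch is reported to extend to a multi-task setting.
import torch
import torch.nn.functional as F


def fixmatch_unlabeled_loss(model, x_weak, x_strong, threshold=0.95):
    """Pseudo-label weakly augmented samples, then train on the strong views."""
    with torch.no_grad():
        probs = model(x_weak).softmax(dim=-1)     # predictions on weak augmentations
        conf, pseudo_y = probs.max(dim=-1)
        mask = conf.ge(threshold).float()         # keep only confident pseudo-labels

    logits_strong = model(x_strong)               # predictions on strong augmentations
    per_sample = F.cross_entropy(logits_strong, pseudo_y, reduction="none")
    return (per_sample * mask).mean()
```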