Label-Efficient Domain Generalization via Collaborative Exploration and
Generalization
- URL: http://arxiv.org/abs/2208.03644v1
- Date: Sun, 7 Aug 2022 05:34:50 GMT
- Title: Label-Efficient Domain Generalization via Collaborative Exploration and
Generalization
- Authors: Junkun Yuan, Xu Ma, Defang Chen, Kun Kuang, Fei Wu, Lanfen Lin
- Abstract summary: This paper introduces label-efficient domain generalization (LEDG) to enable model generalization with label-limited source domains.
We propose a novel framework called Collaborative Exploration and Generalization (CEG) which jointly optimizes active exploration and semi-supervised generalization.
- Score: 28.573872986524794
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Considerable progress has been made in domain generalization (DG), which aims
to learn a model from multiple well-annotated source domains that generalizes
to unknown target domains. However, it can be prohibitively expensive to obtain
sufficient annotation for source datasets in many real scenarios. To escape
from the dilemma between domain generalization and annotation costs, in this
paper, we introduce a novel task named label-efficient domain generalization
(LEDG) to enable model generalization with label-limited source domains. To
address this challenging task, we propose a novel framework called
Collaborative Exploration and Generalization (CEG) which jointly optimizes
active exploration and semi-supervised generalization. Specifically, in active
exploration, to explore class and domain discriminability while avoiding
information divergence and redundancy, we query the labels of the samples with
the highest overall ranking of class uncertainty, domain representativeness,
and information diversity. In semi-supervised generalization, we design
MixUp-based intra- and inter-domain knowledge augmentation to expand domain
knowledge and generalize domain invariance. We unify active exploration and
semi-supervised generalization in a collaborative way and promote mutual
enhancement between them, boosting model generalization with limited
annotation. Extensive experiments show that CEG yields superior generalization
performance. In particular, with only a 5% annotation budget, CEG achieves
results competitive with previous DG methods trained on fully labeled data on
the PACS dataset.
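The abstract describes the two components of CEG only at a high level, so the following is a minimal, hypothetical Python sketch of how they could be organized. The use of prediction entropy for class uncertainty, cosine similarity to a domain centroid for domain representativeness, a greedy farthest-point rule for information diversity, rank averaging as the "overall ranking", and Beta-distributed MixUp coefficients are all assumptions made for illustration, not the paper's exact formulation; the function names (query_samples, mixup_augment) are invented here.

```python
import numpy as np


def _softmax(logits):
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)


def _ranks(v):
    # Normalized ranks in [0, 1]; larger value -> higher rank.
    return np.argsort(np.argsort(v)) / max(len(v) - 1, 1)


def query_samples(feats, logits, domain_centroid, budget):
    """Pick `budget` unlabeled samples to annotate by combining class
    uncertainty, domain representativeness, and information diversity."""
    probs = _softmax(logits)
    uncertainty = -(probs * np.log(probs + 1e-12)).sum(axis=1)   # entropy

    f = feats / (np.linalg.norm(feats, axis=1, keepdims=True) + 1e-12)
    c = domain_centroid / (np.linalg.norm(domain_centroid) + 1e-12)
    representativeness = f @ c                                    # cosine similarity

    selected, remaining = [], list(range(len(feats)))
    for _ in range(budget):
        if selected:
            # Diversity: distance to the closest already-selected sample,
            # discouraging redundant queries (greedy farthest-point rule).
            diversity = np.linalg.norm(
                feats[remaining][:, None] - feats[selected][None], axis=2
            ).min(axis=1)
        else:
            diversity = np.ones(len(remaining))
        score = (_ranks(uncertainty[remaining])
                 + _ranks(representativeness[remaining])
                 + _ranks(diversity)) / 3.0                       # overall ranking
        pick = remaining[int(score.argmax())]
        selected.append(pick)
        remaining.remove(pick)
    return selected


def mixup_augment(x_a, y_a, x_b, y_b, alpha=0.2):
    """MixUp-style knowledge augmentation: intra-domain if (a, b) come from
    the same source domain, inter-domain otherwise. y_* are one-hot labels
    or pseudo-labels."""
    lam = np.random.beta(alpha, alpha)
    return lam * x_a + (1 - lam) * x_b, lam * y_a + (1 - lam) * y_b
```

The collaboration the abstract refers to would then amount to alternating these two steps, so that the semi-supervised model improves the features and pseudo-labels used for the next query round; the actual schedule and loss terms are given in the paper.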
Related papers
- Generalized Universal Domain Adaptation with Generative Flow Networks [76.1350941965148]
Generalized Universal Domain Adaptation aims to achieve precise prediction of all target labels including unknown categories.
GUDA bridges the gap between label distribution shift-based and label space mismatch-based variants.
We propose an active domain adaptation algorithm named GFlowDA, which selects diverse samples with probabilities proportional to a reward function.
arXiv Detail & Related papers (2023-05-08T05:34:15Z) - Localized Adversarial Domain Generalization [83.4195658745378]
Adversarial domain generalization is a popular approach to domain generalization.
We propose localized adversarial domain generalization with space compactness maintenance (LADG).
We conduct comprehensive experiments on the WILDS DG benchmark to validate our approach.
arXiv Detail & Related papers (2022-05-09T08:30:31Z) - Compound Domain Generalization via Meta-Knowledge Encoding [55.22920476224671]
We introduce Style-induced Domain-specific Normalization (SDNorm) to re-normalize the multi-modal underlying distributions.
We harness the prototype representations, the centroids of classes, to perform relational modeling in the embedding space.
Experiments on four standard Domain Generalization benchmarks reveal that COMEN exceeds the state-of-the-art performance without the need for domain supervision.
arXiv Detail & Related papers (2022-03-24T11:54:59Z) - META: Mimicking Embedding via oThers' Aggregation for Generalizable
Person Re-identification [68.39849081353704]
Domain generalizable (DG) person re-identification (ReID) aims to perform well on unseen domains without access to target-domain data at training time.
This paper presents a new approach called Mimicking Embedding via oThers' Aggregation (META) for DG ReID.
arXiv Detail & Related papers (2021-12-16T08:06:50Z) - Unsupervised Domain Generalization for Person Re-identification: A
Domain-specific Adaptive Framework [50.88463458896428]
Domain generalization (DG) has attracted much attention in person re-identification (ReID) recently.
Existing methods usually need the source domains to be labeled, which could be a significant burden for practical ReID tasks.
We propose a simple and efficient domain-specific adaptive framework, and realize it with an adaptive normalization module.
arXiv Detail & Related papers (2021-11-30T02:35:51Z) - Semi-Supervised Domain Generalization with Evolving Intermediate Domain [24.75184388536862]
Domain Generalization aims to generalize a model trained on multiple source domains to an unseen target domain.
We introduce a novel paradigm of DG, termed Semi-Supervised Domain Generalization (SSDG).
We develop a pseudo labeling phase and a generalization phase independently for SSDG.
arXiv Detail & Related papers (2021-11-19T13:55:57Z) - Better Pseudo-label: Joint Domain-aware Label and Dual-classifier for
Semi-supervised Domain Generalization [26.255457629490135]
We propose a novel framework via joint domain-aware labels and dual-classifier to produce high-quality pseudo-labels.
To predict accurate pseudo-labels under domain shift, a domain-aware pseudo-labeling module is developed.
Also, since generalization and pseudo-labeling have inconsistent goals, we employ a dual classifier to perform pseudo-labeling and domain generalization independently during training.
arXiv Detail & Related papers (2021-10-10T15:17:27Z) - Domain-Specific Bias Filtering for Single Labeled Domain Generalization [19.679447374738498]
Domain generalization utilizes multiple labeled source datasets to train a generalizable model for unseen target domains.
Due to expensive annotation costs, the requirement of labeling all the source data is hard to meet in real-world applications.
We propose a novel method called Domain-Specific Bias Filtering (DSBF), which filters out the domain-specific bias of the trained model with the unlabeled source data.
arXiv Detail & Related papers (2021-10-02T05:08:01Z) - Semi-Supervised Domain Generalization with Stochastic StyleMatch [90.98288822165482]
In real-world applications, we might have only a few labels available from each source domain due to high annotation cost.
In this work, we investigate semi-supervised domain generalization, a more realistic and practical setting.
Our proposed approach, StyleMatch, is inspired by FixMatch, a state-of-the-art semi-supervised learning method based on pseudo-labeling.
arXiv Detail & Related papers (2021-06-01T16:00:08Z)