Domain-Specific Bias Filtering for Single Labeled Domain Generalization
- URL: http://arxiv.org/abs/2110.00726v1
- Date: Sat, 2 Oct 2021 05:08:01 GMT
- Title: Domain-Specific Bias Filtering for Single Labeled Domain Generalization
- Authors: Junkun Yuan, Xu Ma, Defang Chen, Kun Kuang, Fei Wu, Lanfen Lin
- Abstract summary: Domain generalization utilizes multiple labeled source datasets to train a generalizable model for unseen target domains.
Due to expensive annotation costs, labeling all the source data is often infeasible in real-world applications.
We propose a novel method called Domain-Specific Bias Filtering (DSBF), which filters out the model's domain-specific bias using the unlabeled source data.
- Score: 19.679447374738498
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Domain generalization (DG) utilizes multiple labeled source datasets to train
a generalizable model for unseen target domains. However, due to expensive
annotation costs, labeling all the source data is often infeasible in
real-world applications. In this paper, we investigate a Single
Labeled Domain Generalization (SLDG) task with only one source domain being
labeled, which is more practical and challenging than the Conventional Domain
Generalization (CDG). A major obstacle in the SLDG task is the
discriminability-generalization bias: discriminative information in the labeled
source dataset may contain domain-specific bias, constraining the
generalization of the trained model. To tackle this challenging task, we
propose a novel method called Domain-Specific Bias Filtering (DSBF), which
initializes a discriminative model with the labeled source data and filters out
its domain-specific bias with the unlabeled source data for generalization
improvement. We divide the filtering process into: (1) Feature extractor
debiasing using k-means clustering-based semantic feature re-extraction; and
(2) Classifier calibrating using attention-guided semantic feature projection.
DSBF unifies the exploration of the labeled and the unlabeled source data to
enhance the discriminability and generalization of the trained model, resulting
in a highly generalizable model. We further provide theoretical analysis to
verify the proposed domain-specific bias filtering process. Extensive
experiments on multiple datasets show the superior performance of DSBF in
tackling both the challenging SLDG task and the CDG task.
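The abstract only names the two filtering steps; a minimal sketch of how they could fit together is below. All function names, hyperparameters, and the softmax-attention form are illustrative assumptions, not details from the paper: unlabeled source features are clustered with k-means to obtain semantic centroids, and a feature is then re-expressed as an attention-weighted combination of those centroids.

```python
import numpy as np

def kmeans(features, k, iters=20, seed=0):
    """Cluster unlabeled source features into k semantic groups (plain k-means)."""
    rng = np.random.default_rng(seed)
    # initialize centroids from random samples
    centers = features[rng.choice(len(features), k, replace=False)]
    for _ in range(iters):
        # assign each feature to its nearest centroid
        dists = np.linalg.norm(features[:, None] - centers[None], axis=-1)
        assign = dists.argmin(axis=1)
        # recompute centroids as cluster means
        for j in range(k):
            members = features[assign == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return centers, assign

def attention_projection(feature, centers):
    """Project one feature onto a softmax-weighted combination of centroids
    (a hypothetical stand-in for the paper's attention-guided projection)."""
    scores = centers @ feature
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ centers
```

The idea of the sketch: cluster assignments from the unlabeled data act as pseudo-semantic labels for re-extracting features, and the projection pulls each feature toward shared semantic anchors, suppressing directions that only the labeled domain exhibits.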
Related papers
- Disentangling Masked Autoencoders for Unsupervised Domain Generalization [57.56744870106124]
Unsupervised domain generalization is fast gaining attention but is still far from well-studied.
Disentangled Masked Autoencoder (DisMAE) aims to discover disentangled representations that faithfully reveal intrinsic features.
DisMAE co-trains the asymmetric dual-branch architecture with semantic and lightweight variation encoders.
arXiv Detail & Related papers (2024-07-10T11:11:36Z)
- Label-Efficient Domain Generalization via Collaborative Exploration and Generalization [28.573872986524794]
This paper introduces label-efficient domain generalization (LEDG) to enable model generalization with label-limited source domains.
We propose a novel framework called Collaborative Exploration and Generalization (CEG), which jointly optimizes active exploration and semi-supervised generalization.
arXiv Detail & Related papers (2022-08-07T05:34:50Z)
- Source-Free Domain Adaptation via Distribution Estimation [106.48277721860036]
Domain Adaptation aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain whose data distributions are different.
Recently, Source-Free Domain Adaptation (SFDA), which tries to tackle the domain adaptation problem without using source data, has drawn much attention.
In this work, we propose a novel framework called SFDA-DE to address SFDA task via source Distribution Estimation.
arXiv Detail & Related papers (2022-04-24T12:22:19Z)
- Compound Domain Generalization via Meta-Knowledge Encoding [55.22920476224671]
We introduce Style-induced Domain-specific Normalization (SDNorm) to re-normalize the multi-modal underlying distributions.
We harness prototype representations, the centroids of classes, to perform relational modeling in the embedding space.
Experiments on four standard Domain Generalization benchmarks reveal that COMEN exceeds the state-of-the-art performance without the need of domain supervision.
arXiv Detail & Related papers (2022-03-24T11:54:59Z)
- Improving Multi-Domain Generalization through Domain Re-labeling [31.636953426159224]
We study the important link between pre-specified domain labels and the generalization performance.
We introduce a general approach for multi-domain generalization, MulDEns, that uses an ERM-based deep ensembling backbone.
We show that MulDEns does not require tailoring the augmentation strategy or the training process specific to a dataset.
arXiv Detail & Related papers (2021-12-17T23:21:50Z)
- Better Pseudo-label: Joint Domain-aware Label and Dual-classifier for Semi-supervised Domain Generalization [26.255457629490135]
We propose a novel framework via joint domain-aware labels and dual-classifier to produce high-quality pseudo-labels.
To predict accurate pseudo-labels under domain shift, a domain-aware pseudo-labeling module is developed.
Also, considering the inconsistent goals of generalization and pseudo-labeling, we employ a dual-classifier to perform pseudo-labeling and domain generalization independently during training.
arXiv Detail & Related papers (2021-10-10T15:17:27Z)
- Domain-Irrelevant Representation Learning for Unsupervised Domain Generalization [22.980607134596077]
Domain generalization (DG) aims to help models trained on a set of source domains generalize better on unseen target domains.
Since unlabeled data are far more accessible, we explore how unsupervised learning can help deep models generalize across domains.
We propose a Domain-Irrelevant Unsupervised Learning (DIUL) method to cope with the significant and misleading heterogeneity within unlabeled data.
arXiv Detail & Related papers (2021-07-13T16:20:50Z)
- Generalizable Representation Learning for Mixture Domain Face Anti-Spoofing [53.82826073959756]
The face anti-spoofing approach based on domain generalization (DG) has drawn growing attention due to its robustness for unseen scenarios.
We propose domain dynamic adjustment meta-learning (D2AM), which works without using domain labels.
arXiv Detail & Related papers (2021-05-06T06:04:59Z)
- Inferring Latent Domains for Unsupervised Deep Domain Adaptation [54.963823285456925]
Unsupervised Domain Adaptation (UDA) refers to the problem of learning a model in a target domain where labeled data are not available.
This paper introduces a novel deep architecture which addresses the problem of UDA by automatically discovering latent domains in visual datasets.
We evaluate our approach on publicly available benchmarks, showing that it outperforms state-of-the-art domain adaptation methods.
arXiv Detail & Related papers (2021-03-25T14:33:33Z)
- Robust Domain-Free Domain Generalization with Class-aware Alignment [4.442096198968069]
Domain-Free Domain Generalization (DFDG) is a model-agnostic method to achieve better generalization performance on the unseen test domain.
DFDG uses novel strategies to learn domain-invariant class-discriminative features.
It obtains competitive performance on both time series sensor and image classification public datasets.
arXiv Detail & Related papers (2021-02-17T17:46:06Z)
- Dual Distribution Alignment Network for Generalizable Person Re-Identification [174.36157174951603]
Domain generalization (DG) serves as a promising solution for person Re-Identification (Re-ID).
We present a Dual Distribution Alignment Network (DDAN) which handles this challenge by selectively aligning distributions of multiple source domains.
We evaluate our DDAN on a large-scale Domain Generalization Re-ID (DG Re-ID) benchmark.
arXiv Detail & Related papers (2020-07-27T00:08:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.