Exploiting Domain-Specific Features to Enhance Domain Generalization
- URL: http://arxiv.org/abs/2110.09410v1
- Date: Mon, 18 Oct 2021 15:42:39 GMT
- Title: Exploiting Domain-Specific Features to Enhance Domain Generalization
- Authors: Manh-Ha Bui, Toan Tran, Anh Tuan Tran, Dinh Phung
- Abstract summary: Domain Generalization (DG) aims to train a model, from multiple observed source domains, in order to perform well on unseen target domains.
Prior DG approaches have focused on extracting domain-invariant information across sources to generalize on target domains.
We propose meta-Domain Specific-Domain Invariant (mDSDI) - a novel, theoretically sound framework.
- Score: 10.774902700296249
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Domain Generalization (DG) aims to train a model, from multiple observed
source domains, in order to perform well on unseen target domains. To obtain
the generalization capability, prior DG approaches have focused on extracting
domain-invariant information across sources to generalize on target domains,
while useful domain-specific information, which strongly correlates with labels
in individual domains and with generalization to target domains, is usually
ignored. In this paper, we propose meta-Domain Specific-Domain Invariant
(mDSDI) - a novel theoretically sound framework that extends beyond the
invariance view to further capture the usefulness of domain-specific
information. Our key insight is to disentangle features in the latent space
while jointly learning both domain-invariant and domain-specific features in a
unified framework. The domain-specific representation is optimized through the
meta-learning framework to adapt from source domains, targeting a robust
generalization on unseen domains. We empirically show that mDSDI provides
competitive results with state-of-the-art techniques in DG. A further ablation
study with our generated dataset, Background-Colored-MNIST, confirms the
hypothesis that domain-specific information is essential, leading to better
results than using domain-invariant features alone.
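To make the framework description concrete, here is a minimal, hedged sketch of a dual-branch model in the spirit of mDSDI: one encoder for domain-invariant features, one for domain-specific features, a shared classifier over both, a simple cross-covariance penalty standing in for the paper's disentanglement objective, and a one-step MAML-style meta-update of the domain-specific branch evaluated on a held-out source domain. The module names, the penalty, and the single-step episodic update are illustrative assumptions, not the authors' implementation; it assumes PyTorch >= 2.0 for torch.func.functional_call.

```python
# Illustrative sketch only -- not the authors' mDSDI code.
# Assumes PyTorch >= 2.0 (torch.func.functional_call).
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.func import functional_call


class DualBranchDG(nn.Module):
    """Jointly learns domain-invariant and domain-specific features."""

    def __init__(self, in_dim, feat_dim, num_classes):
        super().__init__()
        self.enc_inv = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())   # domain-invariant branch
        self.enc_spec = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())  # domain-specific branch
        self.classifier = nn.Linear(2 * feat_dim, num_classes)                 # sees both representations

    def forward(self, x):
        z_inv, z_spec = self.enc_inv(x), self.enc_spec(x)
        logits = self.classifier(torch.cat([z_inv, z_spec], dim=1))
        return logits, z_inv, z_spec


def disentangle_penalty(z_inv, z_spec):
    # Simple cross-covariance penalty: a stand-in for the disentanglement
    # objective, pushing the two branches to encode different factors.
    z_inv = z_inv - z_inv.mean(dim=0)
    z_spec = z_spec - z_spec.mean(dim=0)
    cov = z_inv.t() @ z_spec / max(z_inv.size(0) - 1, 1)
    return (cov ** 2).mean()


def train_step(model, optimizer, domain_batches, inner_lr=0.01, lam=0.1):
    """domain_batches: list of (x, y) pairs, one per source domain (at least two).
    The last domain plays the role of the meta-test domain."""
    meta_train, (x_te, y_te) = domain_batches[:-1], domain_batches[-1]

    # 1) Supervised + disentanglement loss on the meta-train domains.
    loss = 0.0
    for x, y in meta_train:
        logits, z_inv, z_spec = model(x)
        loss = loss + F.cross_entropy(logits, y) + lam * disentangle_penalty(z_inv, z_spec)

    # 2) One-step virtual update of the domain-specific encoder only,
    #    then evaluate it on the held-out source domain (MAML-style).
    params = dict(model.named_parameters())
    spec_names = [n for n in params if n.startswith("enc_spec")]
    grads = torch.autograd.grad(loss, [params[n] for n in spec_names], create_graph=True)
    fast = dict(params)
    for n, g in zip(spec_names, grads):
        fast[n] = params[n] - inner_lr * g
    logits_te, _, _ = functional_call(model, fast, (x_te,))
    meta_loss = F.cross_entropy(logits_te, y_te)

    # 3) Joint update of all parameters.
    optimizer.zero_grad()
    (loss + meta_loss).backward()
    optimizer.step()
    return loss.item(), meta_loss.item()
```

The point of the sketch is only the overall shape suggested by the abstract: a classifier over two disentangled branches, with the domain-specific branch receiving an episodic (meta-train / meta-test) update across source domains; the paper's actual objectives and architecture are more elaborate.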
Related papers
- Domain Generalization via Causal Adjustment for Cross-Domain Sentiment Analysis [59.73582306457387]
We focus on the problem of domain generalization for cross-domain sentiment analysis.
We propose a backdoor adjustment-based causal model to disentangle the domain-specific and domain-invariant representations (the standard backdoor-adjustment identity is recalled after this list).
A series of experiments demonstrates the strong performance and robustness of our model.
arXiv Detail & Related papers (2024-02-22T13:26:56Z)
- MetaDefa: Meta-learning based on Domain Enhancement and Feature Alignment for Single Domain Generalization [12.095382249996032]
A novel meta-learning method based on domain enhancement and feature alignment (MetaDefa) is proposed to improve the model generalization performance.
In this paper, domain-invariant features are fully explored by focusing on similar target regions in the feature spaces of the source and augmented domains.
Extensive experiments on two publicly available datasets show that MetaDefa has significant generalization performance advantages in unknown multiple target domains.
arXiv Detail & Related papers (2023-11-27T15:13:02Z)
- Domain Generalization for Domain-Linked Classes [8.738092015092207]
In the real world, classes may often be domain-linked, i.e., expressed only in a specific domain.
We propose a Fair and cONtrastive feature-space regularization algorithm for Domain-linked DG, FOND.
arXiv Detail & Related papers (2023-06-01T16:39:50Z)
- Aggregation of Disentanglement: Reconsidering Domain Variations in Domain Generalization [9.577254317971933]
We argue that the domain variations also contain useful information, i.e., classification-aware information, for downstream tasks.
We propose a novel paradigm called Domain Disentanglement Network (DDN) to disentangle the domain expert features from the source domain images.
We also propound a new contrastive learning method to guide the domain expert features to form a more balanced and separable feature space.
arXiv Detail & Related papers (2023-02-05T09:48:57Z)
- Adaptive Mixture of Experts Learning for Generalizable Face Anti-Spoofing [37.75738807247752]
Face anti-spoofing approaches based on domain generalization (DG) have drawn growing attention.
Existing DG-based face anti-spoofing (FAS) approaches capture domain-invariant features to generalize to various unseen domains.
We propose an Adaptive Mixture of Experts Learning framework, which exploits the domain-specific information to adaptively establish the link among the seen source domains and unseen target domains.
arXiv Detail & Related papers (2022-07-20T13:02:51Z)
- Compound Domain Generalization via Meta-Knowledge Encoding [55.22920476224671]
We introduce Style-induced Domain-specific Normalization (SDNorm) to re-normalize the multi-modal underlying distributions.
We harness the prototype representations, the centroids of classes, to perform relational modeling in the embedding space.
Experiments on four standard Domain Generalization benchmarks reveal that COMEN exceeds state-of-the-art performance without the need for domain supervision.
arXiv Detail & Related papers (2022-03-24T11:54:59Z)
- META: Mimicking Embedding via oThers' Aggregation for Generalizable Person Re-identification [68.39849081353704]
Domain generalizable (DG) person re-identification (ReID) aims to test across unseen domains without access to the target domain data at training time.
This paper presents a new approach called Mimicking Embedding via oThers' Aggregation (META) for DG ReID.
arXiv Detail & Related papers (2021-12-16T08:06:50Z)
- Adaptive Domain-Specific Normalization for Generalizable Person Re-Identification [81.30327016286009]
We propose a novel adaptive domain-specific normalization approach (AdsNorm) for generalizable person Re-ID.
arXiv Detail & Related papers (2021-05-07T02:54:55Z)
- Open Domain Generalization with Domain-Augmented Meta-Learning [83.59952915761141]
We study the novel and practical problem of Open Domain Generalization (OpenDG).
We propose a Domain-Augmented Meta-Learning framework to learn open-domain generalizable representations.
Experiment results on various multi-domain datasets demonstrate that the proposed Domain-Augmented Meta-Learning (DAML) outperforms prior methods for unseen domain recognition.
arXiv Detail & Related papers (2021-04-08T09:12:24Z)
- Cluster, Split, Fuse, and Update: Meta-Learning for Open Compound Domain Adaptive Semantic Segmentation [102.42638795864178]
We propose a principled meta-learning based approach to OCDA for semantic segmentation.
We cluster the target domain into multiple sub-target domains by image style, extracted in an unsupervised manner (a minimal style-clustering sketch follows this list).
A meta-learner is thereafter deployed to learn to fuse sub-target domain-specific predictions, conditioned upon the style code.
We learn to update the model online via the model-agnostic meta-learning (MAML) algorithm to further improve generalization.
arXiv Detail & Related papers (2020-12-15T13:21:54Z)
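For the causal-adjustment entry above: the backdoor adjustment it builds on is the standard identity from causal inference. The form below treats the domain D as the adjusted (confounding) variable; it is the textbook formula, not the paper's specific estimator.

```latex
% Backdoor adjustment, with the domain D as the adjusted variable
% (assuming D satisfies the backdoor criterion for X -> Y):
\[
  P\bigl(Y \mid \mathrm{do}(X = x)\bigr) \;=\; \sum_{d} P\bigl(Y \mid X = x, D = d\bigr)\, P(D = d).
\]
```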
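For the last entry (Cluster, Split, Fuse, and Update): a minimal sketch of the "cluster the target domain by image style" step, using channel-wise feature statistics as a stand-in style code and k-means for the unsupervised grouping. The encoder, the choice of statistics, and the number of clusters are illustrative assumptions, not the paper's pipeline.

```python
# Illustrative sketch only: group target images into sub-target domains by
# "style". Channel-wise mean/std of intermediate features serve as a simple
# style code; k-means provides the unsupervised clustering.
import torch
from sklearn.cluster import KMeans


@torch.no_grad()
def style_codes(encoder, images):
    """images: (N, C, H, W) tensor; encoder: any conv feature extractor.
    Returns an (N, 2 * C_feat) array of per-image channel-wise mean/std."""
    feats = encoder(images)                # (N, C_feat, H', W')
    mean = feats.mean(dim=(2, 3))          # channel-wise mean
    std = feats.std(dim=(2, 3))            # channel-wise std
    return torch.cat([mean, std], dim=1).cpu().numpy()


def cluster_by_style(encoder, images, num_subdomains=3):
    codes = style_codes(encoder, images)
    # Each image receives a sub-target-domain label in {0, ..., num_subdomains - 1}.
    return KMeans(n_clusters=num_subdomains, n_init=10).fit_predict(codes)
```

Each resulting cluster can then be treated as a sub-target domain whose predictions are fused by the meta-learner, and the MAML-style online update mentioned in the entry follows the same inner/outer pattern as the sketch given after the abstract above.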
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.