Aggregation of Disentanglement: Reconsidering Domain Variations in
Domain Generalization
- URL: http://arxiv.org/abs/2302.02350v5
- Date: Wed, 20 Sep 2023 19:47:44 GMT
- Title: Aggregation of Disentanglement: Reconsidering Domain Variations in
Domain Generalization
- Authors: Daoan Zhang, Mingkai Chen, Chenming Li, Lingyun Huang, Jianguo Zhang
- Abstract summary: We argue that domain variations also contain useful information, i.e., classification-aware information, for downstream tasks.
We propose a novel paradigm called Domain Disentanglement Network (DDN) to disentangle the domain expert features from the source domain images.
We also propose a new contrastive learning method to guide the domain expert features to form a more balanced and separable feature space.
- Score: 9.577254317971933
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Domain Generalization (DG) is a fundamental challenge for machine learning
models; it aims to improve model generalization across various domains. Previous
methods focus on generating domain-invariant features from various source
domains. However, we argue that domain variations also contain useful
information, i.e., classification-aware information, for downstream tasks, which
has been largely ignored. Rather than learning domain-invariant features
from source domains, we decouple the input images into Domain Expert Features
and noise. The proposed domain expert features lie in a learned latent space
where the images in each domain can be classified independently, enabling the
implicit use of classification-aware domain variations. Based on the analysis,
we propose a novel paradigm called Domain Disentanglement Network (DDN) to
disentangle the domain expert features from the source domain images and
aggregate the source domain expert features for representing the target test
domain. We also propose a new contrastive learning method to guide the domain
expert features to form a more balanced and separable feature space.
Experiments on the widely-used benchmarks of PACS, VLCS, OfficeHome, DomainNet,
and TerraIncognita demonstrate the competitive performance of our method
compared to the recently proposed alternatives.
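The abstract outlines three components: disentangling each image's representation into a classification-aware domain expert feature and noise, guiding the expert features with a contrastive loss, and aggregating the source-domain experts to represent the target domain. The abstract gives no implementation details, so the following is only a minimal PyTorch sketch of that pipeline; the module names, the supervised-contrastive form of the loss, and the uniform-average aggregation are assumptions rather than the authors' code.
```python
# Minimal sketch of the disentangle-and-aggregate idea; names, shapes, and
# the uniform-average aggregation are assumptions, not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DomainDisentangler(nn.Module):
    """Split backbone features into a classification-aware 'domain expert'
    part and a residual noise part (assumed two-head design)."""
    def __init__(self, feat_dim: int, expert_dim: int):
        super().__init__()
        self.to_expert = nn.Linear(feat_dim, expert_dim)
        self.to_noise = nn.Linear(feat_dim, expert_dim)

    def forward(self, feats: torch.Tensor):
        return self.to_expert(feats), self.to_noise(feats)

def aggregate_expert_features(expert_feats_per_domain, weights=None):
    """Represent the unseen target domain by combining source-domain expert
    features; the uniform weighting here is an assumption."""
    stacked = torch.stack(expert_feats_per_domain)          # (num_domains, B, D)
    if weights is None:
        weights = torch.full((stacked.size(0),), 1.0 / stacked.size(0))
    return (weights.view(-1, 1, 1) * stacked).sum(dim=0)    # (B, D)

def expert_contrastive_loss(expert_feats, labels, temperature: float = 0.1):
    """Supervised-contrastive-style loss pulling same-class expert features
    together so the expert space stays balanced and separable (assumed form)."""
    z = F.normalize(expert_feats, dim=1)
    sim = z @ z.t() / temperature
    self_mask = torch.eye(z.size(0), dtype=torch.bool)
    pos = labels.unsqueeze(0).eq(labels.unsqueeze(1)) & ~self_mask
    log_prob = sim - torch.logsumexp(
        sim.masked_fill(self_mask, float("-inf")), dim=1, keepdim=True
    )
    return -(log_prob[pos]).mean()
```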
Related papers
- Domain Generalization via Causal Adjustment for Cross-Domain Sentiment
Analysis [59.73582306457387]
We focus on the problem of domain generalization for cross-domain sentiment analysis.
We propose a backdoor adjustment-based causal model to disentangle the domain-specific and domain-invariant representations.
A series of experiments demonstrates the strong performance and robustness of our model.
arXiv Detail & Related papers (2024-02-22T13:26:56Z)
- Meta-causal Learning for Single Domain Generalization [102.53303707563612]
Single domain generalization aims to learn a model from a single training domain (source domain) and apply it to multiple unseen test domains (target domains).
Existing methods focus on expanding the distribution of the training domain to cover the target domains, but without estimating the domain shift between the source and target domains.
We propose a new learning paradigm, namely simulate-analyze-reduce, which first simulates the domain shift by building an auxiliary domain as the target domain, then learns to analyze the causes of domain shift, and finally learns to reduce the domain shift for model adaptation.
arXiv Detail & Related papers (2023-04-07T15:46:38Z)
- Domain Generalization by Learning and Removing Domain-specific Features [15.061481139046952]
Domain generalization aims to learn a model that can generalize to unseen domains.
We propose a new approach that aims to explicitly remove domain-specific features for domain generalization.
We develop an encoder-decoder network to map each input image into a new image space where the learned domain-specific features are removed.
arXiv Detail & Related papers (2022-12-14T08:46:46Z)
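The summary above states only that an encoder-decoder maps each input image into a new image space with the learned domain-specific features removed. A rough illustration of that idea could look like the following; the explicit subtraction of a predicted domain-specific component and the tiny network sizes are assumptions, not the paper's design.
```python
# Rough illustration of mapping an image into a space with a domain-specific
# component removed; the subtraction step and architecture are assumptions.
import torch
import torch.nn as nn

class RemoveDomainSpecific(nn.Module):
    def __init__(self, channels: int = 3, hidden: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(channels, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(),
        )
        self.domain_head = nn.Conv2d(hidden, hidden, 1)   # predicts the domain-specific part
        self.decoder = nn.Conv2d(hidden, channels, 3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.encoder(x)
        h_domain = self.domain_head(h)       # estimated domain-specific features
        return self.decoder(h - h_domain)    # decode only the remaining content

# Usage: images mapped this way would then feed an ordinary classifier.
x = torch.randn(4, 3, 64, 64)
x_clean = RemoveDomainSpecific()(x)          # same shape as x
```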
- Adaptive Mixture of Experts Learning for Generalizable Face Anti-Spoofing [37.75738807247752]
Face anti-spoofing approaches based on domain generalization (DG) have drawn growing attention.
Existing DG-based FAS approaches capture domain-invariant features in order to generalize to various unseen domains.
We propose an Adaptive Mixture of Experts Learning framework, which exploits domain-specific information to adaptively establish the link between the seen source domains and unseen target domains.
arXiv Detail & Related papers (2022-07-20T13:02:51Z)
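The Adaptive Mixture of Experts summary describes using domain-specific information to adaptively link seen source domains to unseen targets. A generic mixture-of-domain-experts sketch of that idea follows; the per-domain expert heads and the soft gating are assumptions, not the framework's actual components.
```python
# Generic mixture-of-domain-experts sketch; gating and expert heads are assumptions.
import torch
import torch.nn as nn

class DomainMoE(nn.Module):
    def __init__(self, feat_dim: int, num_classes: int, num_source_domains: int):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Linear(feat_dim, num_classes) for _ in range(num_source_domains)
        )
        self.gate = nn.Linear(feat_dim, num_source_domains)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        weights = self.gate(feats).softmax(dim=-1)               # (B, num_domains)
        logits = torch.stack([e(feats) for e in self.experts])   # (num_domains, B, C)
        # Each sample is handled as an adaptive combination of the seen-domain experts,
        # so an unseen-domain input borrows from the most relevant source domains.
        return (weights.t().unsqueeze(-1) * logits).sum(dim=0)   # (B, C)
```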
- Domain Invariant Masked Autoencoders for Self-supervised Learning from Multi-domains [73.54897096088149]
We propose a Domain-invariant Masked AutoEncoder (DiMAE) for self-supervised learning from multi-domains.
The core idea is to augment the input image with style noise from different domains and then reconstruct the image from the embedding of the augmented image.
Experiments on PACS and DomainNet illustrate that DiMAE achieves considerable gains compared with recent state-of-the-art methods.
arXiv Detail & Related papers (2022-05-10T09:49:40Z)
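The DiMAE summary gives only the core training signal: perturb an image with style noise from other domains, then reconstruct the original image from the embedding of the perturbed one. A stripped-down sketch of that signal is below; the AdaIN-style statistic swap used as style noise and the tiny autoencoder are assumptions, and the masking part of the masked autoencoder is omitted.
```python
# Stripped-down sketch of "perturb style, reconstruct content"; the statistic
# swap and tiny autoencoder are assumptions, not the DiMAE architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

def style_noise(x: torch.Tensor, style_ref: torch.Tensor, eps: float = 1e-5):
    """Re-normalize x with the channel statistics of an image from another domain."""
    mu_x, std_x = x.mean((2, 3), keepdim=True), x.std((2, 3), keepdim=True) + eps
    mu_s, std_s = style_ref.mean((2, 3), keepdim=True), style_ref.std((2, 3), keepdim=True) + eps
    return (x - mu_x) / std_x * std_s + mu_s

encoder = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
decoder = nn.Conv2d(16, 3, 3, padding=1)

x = torch.randn(2, 3, 32, 32)      # images from one source domain
ref = torch.randn(2, 3, 32, 32)    # images from a different source domain
x_aug = style_noise(x, ref)        # style-perturbed input
recon = decoder(encoder(x_aug))
loss = F.mse_loss(recon, x)        # reconstruct the ORIGINAL image
```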
- Exploiting Domain-Specific Features to Enhance Domain Generalization [10.774902700296249]
Domain Generalization (DG) aims to train a model from multiple observed source domains so that it performs well on unseen target domains.
Prior DG approaches have focused on extracting domain-invariant information across sources to generalize on target domains.
We propose meta-Domain Specific-Domain Invariant (mD) - a novel theoretically sound framework.
arXiv Detail & Related papers (2021-10-18T15:42:39Z)
- Structured Latent Embeddings for Recognizing Unseen Classes in Unseen Domains [108.11746235308046]
We propose a novel approach that learns domain-agnostic structured latent embeddings by projecting images from different domains.
Our experiments on the challenging DomainNet and DomainNet-LS benchmarks show the superiority of our approach over existing methods.
arXiv Detail & Related papers (2021-07-12T17:57:46Z)
- AFAN: Augmented Feature Alignment Network for Cross-Domain Object Detection [90.18752912204778]
Unsupervised domain adaptation for object detection is a challenging problem with many real-world applications.
We propose a novel augmented feature alignment network (AFAN) which integrates intermediate domain image generation and domain-adversarial training.
Our approach significantly outperforms the state-of-the-art methods on standard benchmarks for both similar and dissimilar domain adaptations.
arXiv Detail & Related papers (2021-06-10T05:01:20Z)
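The AFAN summary mentions intermediate-domain image generation combined with domain-adversarial training. The adversarial part of such methods is commonly implemented with a gradient reversal layer; a minimal sketch of that standard component is shown below, where the feature extractor and domain discriminator are placeholders rather than AFAN's actual networks.
```python
# Minimal gradient-reversal sketch of domain-adversarial training; the feature
# extractor and domain discriminator are placeholders, not AFAN's networks.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd: float):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reversed gradients push features toward domain-indistinguishability.
        return -ctx.lambd * grad_output, None

features = nn.Sequential(nn.Linear(128, 64), nn.ReLU())
domain_clf = nn.Linear(64, 2)          # source vs. target discriminator

x = torch.randn(8, 128)
domain_labels = torch.randint(0, 2, (8,))
feats = features(x)
domain_logits = domain_clf(GradReverse.apply(feats, 1.0))
adv_loss = F.cross_entropy(domain_logits, domain_labels)
adv_loss.backward()
```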
- Open Domain Generalization with Domain-Augmented Meta-Learning [83.59952915761141]
We study a novel and practical problem of Open Domain Generalization (OpenDG).
We propose a Domain-Augmented Meta-Learning framework to learn open-domain generalizable representations.
Experiment results on various multi-domain datasets demonstrate that the proposed Domain-Augmented Meta-Learning (DAML) outperforms prior methods for unseen domain recognition.
arXiv Detail & Related papers (2021-04-08T09:12:24Z)
- Unsupervised Domain Expansion from Multiple Sources [39.03086451203708]
This paper presents a method for unsupervised multi-source domain expansion (UMSDE) where only the pre-learned models of the source domains and unlabelled new domain data are available.
We propose to use the predicted class probability of the unlabelled data in the new domain produced by different source models to jointly mitigate the biases among domains, exploit the discriminative information in the new domain, and preserve the performance in the source domains.
arXiv Detail & Related papers (2020-05-26T07:02:35Z)
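The UMSDE summary says the class probabilities predicted by the different pre-learned source models on unlabelled new-domain data are used jointly to mitigate inter-domain bias. The simplest reading of that idea is to average the source models' probabilities and derive pseudo-labels, as sketched below; the uniform averaging and the pseudo-labelling step are assumptions, not the UMSDE algorithm.
```python
# Simplest reading of combining source-model predictions on unlabelled
# new-domain data; uniform averaging and pseudo-labelling are assumptions.
import torch
import torch.nn as nn

num_classes = 10
source_models = [nn.Linear(32, num_classes) for _ in range(3)]  # stand-ins for pre-learned models

unlabelled_new_domain = torch.randn(16, 32)
with torch.no_grad():
    probs = torch.stack(
        [m(unlabelled_new_domain).softmax(dim=-1) for m in source_models]
    )                                          # (num_sources, N, C)
    avg_probs = probs.mean(dim=0)              # jointly use all source models
    pseudo_labels = avg_probs.argmax(dim=-1)   # could supervise adaptation on the new domain
```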