Generalizing to Unseen Domains: A Survey on Domain Generalization
- URL: http://arxiv.org/abs/2103.03097v1
- Date: Tue, 2 Mar 2021 06:04:11 GMT
- Title: Generalizing to Unseen Domains: A Survey on Domain Generalization
- Authors: Jindong Wang, Cuiling Lan, Chang Liu, Yidong Ouyang, Tao Qin
- Abstract summary: Domain generalization deals with a challenging setting where one or several different but related domain(s) are given.
The goal is to learn a model that can generalize to an unseen test domain.
This paper presents the first review of recent advances in domain generalization.
- Score: 59.16754307820612
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Domain generalization (DG), i.e., out-of-distribution generalization, has
attracted increasing interest in recent years. Domain generalization deals with
a challenging setting where one or several different but related domain(s) are
given, and the goal is to learn a model that can generalize to an unseen test
domain. Great progress has been achieved over the years. This paper presents the
first review of recent advances in domain generalization. First, we provide a
formal definition of domain generalization and discuss several related fields.
Second, we thoroughly review the theories related to domain generalization and
carefully analyze the theory behind generalization. Third, we categorize recent
algorithms into three classes and present them in detail: data manipulation,
representation learning, and learning strategy, each of which contains several
popular algorithms. Fourth, we introduce the commonly used datasets and
applications. Finally, we summarize the existing literature and present some
potential research topics for the future.
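To make the three algorithm classes concrete, here is a minimal sketch of one well-known data-manipulation technique, inter-domain Mixup, which synthesizes new training points by blending examples drawn from two different source domains. The function, tensor shapes, and loss setup below are illustrative assumptions, not code from the surveyed paper.

```python
import torch
import torch.nn.functional as F

def interdomain_mixup(x_a, y_a, x_b, y_b, alpha=0.2):
    """Blend examples from two source domains (data-manipulation class).

    x_a, x_b: batches drawn from two different source domains.
    y_a, y_b: one-hot label tensors for the two batches.
    Returns mixed inputs and mixed soft labels.
    """
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    x_mix = lam * x_a + (1.0 - lam) * x_b
    y_mix = lam * y_a + (1.0 - lam) * y_b
    return x_mix, y_mix

# Illustrative usage with random stand-in data (3 classes, 8 examples).
x_a, x_b = torch.randn(8, 32), torch.randn(8, 32)
y_a = F.one_hot(torch.randint(0, 3, (8,)), 3).float()
y_b = F.one_hot(torch.randint(0, 3, (8,)), 3).float()
x_mix, y_mix = interdomain_mixup(x_a, y_a, x_b, y_b)
logits = torch.nn.Linear(32, 3)(x_mix)
loss = -(y_mix * F.log_softmax(logits, dim=1)).sum(dim=1).mean()
```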
Related papers
- Domain Generalization via Causal Adjustment for Cross-Domain Sentiment Analysis [59.73582306457387]
We focus on the problem of domain generalization for cross-domain sentiment analysis.
We propose a backdoor adjustment-based causal model to disentangle the domain-specific and domain-invariant representations.
A series of experiments demonstrates the strong performance and robustness of our model.
arXiv Detail & Related papers (2024-02-22T13:26:56Z)
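The backdoor-adjustment causal model itself is not reproduced here, but the disentanglement idea it relies on can be sketched generically: two encoder branches split a representation into a domain-invariant part (supervised by the sentiment label) and a domain-specific part (supervised by the domain label), with an orthogonality penalty pushing the branches apart. All module names, sizes, and loss weights below are illustrative assumptions, not the paper's model.

```python
import torch
import torch.nn as nn

class DisentangledEncoder(nn.Module):
    """Generic two-branch disentanglement (not the paper's exact model)."""
    def __init__(self, dim_in=128, dim_z=32, n_classes=2, n_domains=4):
        super().__init__()
        self.inv = nn.Sequential(nn.Linear(dim_in, dim_z), nn.ReLU())   # domain-invariant branch
        self.spec = nn.Sequential(nn.Linear(dim_in, dim_z), nn.ReLU())  # domain-specific branch
        self.cls = nn.Linear(dim_z, n_classes)   # sentiment head on invariant features
        self.dom = nn.Linear(dim_z, n_domains)   # domain head on specific features

    def forward(self, x):
        z_inv, z_spec = self.inv(x), self.spec(x)
        return self.cls(z_inv), self.dom(z_spec), z_inv, z_spec

model = DisentangledEncoder()
x = torch.randn(16, 128)        # stand-in input features
y = torch.randint(0, 2, (16,))  # sentiment labels
d = torch.randint(0, 4, (16,))  # domain labels
y_hat, d_hat, z_inv, z_spec = model(x)
ortho = (z_inv * z_spec).sum(dim=1).pow(2).mean()  # decorrelate the two branches
loss = nn.functional.cross_entropy(y_hat, y) \
     + nn.functional.cross_entropy(d_hat, d) + 0.1 * ortho
loss.backward()
```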
- A Survey on Domain Generalization for Medical Image Analysis [9.410880477358942]
Domain Generalization for MedIA aims to address the domain shift challenge by generalizing effectively and performing robustly across unknown data distributions.
We provide a formal definition of domain shift and domain generalization in the medical field, and discuss several related settings.
We summarize the recent methods from three viewpoints: data manipulation level, feature representation level, and model training level, and present some algorithms in detail.
arXiv Detail & Related papers (2024-02-07T17:08:27Z)
- Federated Domain Generalization: A Survey [12.84261944926547]
In machine learning, data is often distributed across different devices, organizations, or edge nodes.
In response to this challenge, there has been a surge of interest in federated domain generalization.
This paper presents the first survey of recent advances in this area.
arXiv Detail & Related papers (2023-06-02T07:55:42Z)
- Compound Domain Generalization via Meta-Knowledge Encoding [55.22920476224671]
We introduce Style-induced Domain-specific Normalization (SDNorm) to re-normalize the multi-modal underlying distributions.
We harness the prototype representations, the centroids of classes, to perform relational modeling in the embedding space.
Experiments on four standard Domain Generalization benchmarks reveal that COMEN surpasses state-of-the-art performance without requiring domain supervision.
arXiv Detail & Related papers (2022-03-24T11:54:59Z)
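SDNorm's exact formulation is not given in this summary; as a rough illustration of the general idea of domain-specific re-normalization, the sketch below keeps a separate BatchNorm branch per source domain and routes each example through its own domain's statistics. The routing scheme, sizes, and class name are assumptions, not SDNorm itself.

```python
import torch
import torch.nn as nn

class DomainSpecificNorm(nn.Module):
    """One BatchNorm branch per source domain (illustrative, not SDNorm)."""
    def __init__(self, num_features, n_domains):
        super().__init__()
        self.bns = nn.ModuleList(nn.BatchNorm1d(num_features) for _ in range(n_domains))

    def forward(self, x, domain_ids):
        out = torch.empty_like(x)
        for d, bn in enumerate(self.bns):
            mask = domain_ids == d
            if mask.any():
                out[mask] = bn(x[mask])  # normalize with this domain's statistics
        return out

# Illustrative usage: four samples from each of three source domains.
norm = DomainSpecificNorm(num_features=64, n_domains=3)
x = torch.randn(12, 64)
domain_ids = torch.tensor([0] * 4 + [1] * 4 + [2] * 4)
z = norm(x, domain_ids)
```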
- Open Domain Generalization with Domain-Augmented Meta-Learning [83.59952915761141]
We study a novel and practical problem of Open Domain Generalization (OpenDG).
We propose a Domain-Augmented Meta-Learning framework to learn open-domain generalizable representations.
Experiment results on various multi-domain datasets demonstrate that the proposed Domain-Augmented Meta-Learning (DAML) outperforms prior methods for unseen domain recognition.
arXiv Detail & Related papers (2021-04-08T09:12:24Z)
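DAML's specifics (domain augmentation, open classes) go beyond a short snippet, but the episodic leave-one-domain-out backbone that many meta-learning DG methods build on can be sketched. Each episode holds one source domain out as meta-test to imitate an unseen target. The helper, model, and batches below are hypothetical, and the inner adaptation step of full meta-learning is deliberately omitted for brevity.

```python
import random
import torch
import torch.nn as nn

def meta_episode_loss(model, domain_batches):
    """One leave-one-domain-out episode (generic sketch, not DAML itself).

    domain_batches: list of (x, y) batches, one per source domain. A full
    meta-learner would differentiate the meta-test loss through an inner
    gradient step on the meta-train loss; here both losses are simply summed.
    """
    held_out = random.randrange(len(domain_batches))
    loss_train = sum(
        nn.functional.cross_entropy(model(x), y)
        for i, (x, y) in enumerate(domain_batches) if i != held_out
    )
    x_te, y_te = domain_batches[held_out]
    loss_test = nn.functional.cross_entropy(model(x_te), y_te)
    return loss_train + loss_test

# Illustrative usage with three stand-in source domains.
model = nn.Linear(16, 5)
batches = [(torch.randn(8, 16), torch.randint(0, 5, (8,))) for _ in range(3)]
loss = meta_episode_loss(model, batches)
loss.backward()
```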
- Domain Generalization: A Survey [146.68420112164577]
Domain generalization (DG) aims to achieve OOD generalization using only source domain data for model learning.
For the first time, a comprehensive literature review is provided to summarize ten years of development in DG.
arXiv Detail & Related papers (2021-03-03T16:12:22Z)
- Learning to Learn with Variational Information Bottleneck for Domain Generalization [128.90691697063616]
Domain generalization models learn to generalize to previously unseen domains, but suffer from prediction uncertainty and domain shift.
We introduce a probabilistic meta-learning model for domain generalization, in which parameters shared across domains are modeled as distributions.
To deal with domain shift, we learn domain-invariant representations through a proposed meta variational information bottleneck principle, which we call MetaVIB.
arXiv Detail & Related papers (2020-07-15T12:05:52Z)
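MetaVIB's meta-learning machinery is not reproduced here; the sketch below only shows the plain variational information bottleneck objective that such models build on: a stochastic encoder whose latent code is pulled toward a standard normal prior by a KL penalty. Dimensions and the bottleneck weight are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VIBClassifier(nn.Module):
    """Plain variational information bottleneck (the base of MetaVIB-style models)."""
    def __init__(self, dim_in=64, dim_z=16, n_classes=4):
        super().__init__()
        self.enc = nn.Linear(dim_in, 2 * dim_z)  # outputs mean and log-variance
        self.cls = nn.Linear(dim_z, n_classes)

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        # KL(q(z|x) || N(0, I)), averaged over the batch
        kl = 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1).sum(dim=1).mean()
        return self.cls(z), kl

model = VIBClassifier()
x, y = torch.randn(32, 64), torch.randint(0, 4, (32,))
logits, kl = model(x)
beta = 1e-3  # bottleneck strength (assumed)
loss = F.cross_entropy(logits, y) + beta * kl
loss.backward()
```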
- In Search of Lost Domain Generalization [25.43757332883202]
We implement DomainBed, a testbed for domain generalization.
We conduct extensive experiments using DomainBed and find that, when carefully implemented, empirical risk minimization shows state-of-the-art performance.
arXiv Detail & Related papers (2020-07-02T23:08:07Z)
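The ERM baseline that DomainBed found so competitive is simple enough to state in full: pool all source domains and minimize the average loss, ignoring domain identity. Below is a minimal sketch with stand-in data and a stand-in model, not DomainBed's own code.

```python
import torch
import torch.nn as nn

# ERM for domain generalization: pool every source domain and train as usual.
domains = [(torch.randn(64, 20), torch.randint(0, 3, (64,))) for _ in range(4)]
x = torch.cat([xd for xd, _ in domains])
y = torch.cat([yd for _, yd in domains])

model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 3))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(100):  # a few optimization steps on the pooled data
    opt.zero_grad()
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    opt.step()
```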
- Representation via Representations: Domain Generalization via Adversarially Learned Invariant Representations [14.751829773340537]
We examine adversarial censoring techniques for learning invariant representations from multiple "studies" (or domains).
In many contexts, such as medical forecasting, domain generalization from studies in populous areas provides a different flavor of fairness, one not anticipated in previous work on algorithmic fairness.
arXiv Detail & Related papers (2020-06-20T02:35:03Z)
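Adversarial censoring of this kind is often implemented with a gradient reversal layer: a domain classifier tries to recover the study/domain from the representation, while reversed gradients push the encoder to erase that signal. The sketch below is a generic version of this pattern, with assumed sizes and heads, not the paper's exact construction.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; flips the gradient sign on the way back."""
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_out):
        return -grad_out

encoder = nn.Sequential(nn.Linear(30, 16), nn.ReLU())
task_head = nn.Linear(16, 2)    # label prediction
domain_head = nn.Linear(16, 5)  # adversary guessing the study/domain

x = torch.randn(32, 30)
y = torch.randint(0, 2, (32,))
d = torch.randint(0, 5, (32,))

z = encoder(x)
loss = nn.functional.cross_entropy(task_head(z), y) \
     + nn.functional.cross_entropy(domain_head(GradReverse.apply(z)), d)
loss.backward()  # reversed gradients drive the encoder to censor domain information
```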
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.