Domain Generalization: A Survey
- URL: http://arxiv.org/abs/2103.02503v1
- Date: Wed, 3 Mar 2021 16:12:22 GMT
- Title: Domain Generalization: A Survey
- Authors: Kaiyang Zhou, Ziwei Liu, Yu Qiao, Tao Xiang, Chen Change Loy
- Abstract summary: Domain generalization (DG) aims to achieve OOD generalization by only using source domain data for model learning.
For the first time, a comprehensive literature review is provided to summarize the ten-year development in DG.
- Score: 146.68420112164577
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Generalization to out-of-distribution (OOD) data is a capability natural to
humans yet challenging for machines to reproduce. This is because most
statistical learning algorithms strongly rely on the i.i.d. assumption while in
practice the target data often come from a different distribution than the
source data, known as domain shift. Domain generalization (DG) aims to achieve
OOD generalization by only using source domain data for model learning. Since
first introduced in 2011, research in DG has undergone a decade of progress. Ten
years of research on this topic have led to a broad spectrum of methodologies,
e.g., based on domain alignment, meta-learning, data augmentation, or ensemble
learning, just to name a few; and have covered various applications such as
object recognition, segmentation, action recognition, and person
re-identification. In this paper, for the first time, a comprehensive
literature review is provided to summarize the ten-year development in DG.
First, we cover the background by giving the problem definitions and discussing
how DG is related to other fields like domain adaptation and transfer learning.
Second, we conduct a thorough review into existing methods and present a
taxonomy based on their methodologies and motivations. Finally, we conclude
this survey with potential research directions.
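The DG setting defined above (learn only from source-domain data, then evaluate on an unseen target domain) can be illustrated with a minimal sketch. Everything here is an assumption for illustration, not the survey's method: the synthetic two-dimensional data, the mean-offset model of domain shift, and the nearest-centroid classifier standing in for "model learning".

```python
import numpy as np

rng = np.random.default_rng(0)

def make_domain(shift, n=200):
    """Synthetic binary-classification domain: the two class means are
    separated by a fixed offset, and each domain adds its own mean shift
    (a toy stand-in for domain shift)."""
    y = rng.integers(0, 2, size=n)
    X = rng.normal(size=(n, 2)) + np.where(y[:, None] == 1, 2.0, -2.0) + shift
    return X, y

# Three source domains plus one unseen target domain with a larger shift.
sources = [make_domain(np.array([s, 0.0])) for s in (0.0, 0.5, 1.0)]
target_X, target_y = make_domain(np.array([1.5, 0.0]))

# Simplest DG baseline (ERM): pool all source domains and fit one model.
X_pool = np.vstack([X for X, _ in sources])
y_pool = np.concatenate([y for _, y in sources])

# Nearest-centroid classifier as a placeholder model.
centroids = np.stack([X_pool[y_pool == c].mean(axis=0) for c in (0, 1)])

def predict(X):
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return dists.argmin(axis=1)

# The target domain is never seen during training -- this is the OOD test.
acc = (predict(target_X) == target_y).mean()
print(f"accuracy on unseen target domain: {acc:.2f}")
```

With well-separated classes the pooled-ERM baseline transfers reasonably here; the point of DG research is precisely the regimes where such pooling degrades under larger shifts.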
Related papers
- Domain Generalization through Meta-Learning: A Survey [6.524870790082051]
Deep neural networks (DNNs) have revolutionized artificial intelligence but often underperform when faced with out-of-distribution (OOD) data.
This survey paper delves into the realm of meta-learning with a focus on its contribution to domain generalization.
arXiv Detail & Related papers (2024-04-03T14:55:17Z)
- A Survey on Domain Generalization for Medical Image Analysis [9.410880477358942]
Domain Generalization for MedIA aims to address the domain shift challenge by generalizing effectively and performing robustly across unknown data distributions.
We provide a formal definition of domain shift and domain generalization in medical field, and discuss several related settings.
We summarize the recent methods from three viewpoints: data manipulation level, feature representation level, and model training level, and present some algorithms in detail.
arXiv Detail & Related papers (2024-02-07T17:08:27Z)
- Federated Domain Generalization: A Survey [12.84261944926547]
In machine learning, data is often distributed across different devices, organizations, or edge nodes and cannot be centralized.
In response to this challenge, there has been a surge of interest in federated domain generalization.
This paper presents the first survey of recent advances in this area.
arXiv Detail & Related papers (2023-06-02T07:55:42Z)
- Domain Generalization in Machine Learning Models for Wireless Communications: Concepts, State-of-the-Art, and Open Issues [32.61904205763364]
Data-driven machine learning (ML) is promoted as a potential technology for next-generation wireless systems.
Most of these applications rely on supervised learning, which assumes that the source (training) and target (test) data are independent and identically distributed (i.i.d.).
This assumption is often violated in the real world due to domain or distribution shifts between the source and the target data.
Domain generalization (DG) tackles the OOD-related issues by learning models on several distinct source domains/datasets.
arXiv Detail & Related papers (2023-03-13T15:52:30Z)
- A Comprehensive Survey on Source-free Domain Adaptation [69.17622123344327]
The research of Source-Free Domain Adaptation (SFDA) has drawn growing attention in recent years.
We provide a comprehensive survey of recent advances in SFDA and organize them into a unified categorization scheme.
We compare the results of more than 30 representative SFDA methods on three popular classification benchmarks.
arXiv Detail & Related papers (2023-02-23T06:32:09Z)
- Reappraising Domain Generalization in Neural Networks [8.06370138649329]
Domain generalization (DG) of machine learning algorithms is defined as their ability to learn a domain agnostic hypothesis from multiple training distributions.
We find that a straightforward Empirical Risk Minimization (ERM) baseline consistently outperforms existing DG methods.
We propose a classwise-DG formulation, where for each class, we randomly select one of the domains and keep it aside for testing.
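The classwise-DG split described above (for each class, one randomly chosen domain is held out for testing) can be sketched as follows. The domain and class names and the toy sample list are hypothetical placeholders, not the paper's dataset or code.

```python
import random

# Hypothetical domains and classes; each sample is a (domain, class) pair.
domains = ["photo", "sketch", "cartoon"]
classes = ["dog", "cat", "horse"]
samples = [(d, c) for d in domains for c in classes for _ in range(4)]

random.seed(0)
# Classwise-DG: for each class, hold out one randomly chosen domain for testing.
held_out = {c: random.choice(domains) for c in classes}

train = [s for s in samples if s[0] != held_out[s[1]]]
test = [s for s in samples if s[0] == held_out[s[1]]]

# No (domain, class) combination seen at training time appears in the test set.
assert not set(train) & set(test)
print(held_out, len(train), len(test))
```

Unlike standard leave-one-domain-out DG, every domain can contribute to both training and testing here; what is unseen at test time is each class's held-out domain.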
arXiv Detail & Related papers (2021-10-15T10:06:40Z)
- f-Domain-Adversarial Learning: Theory and Algorithms [82.97698406515667]
Unsupervised domain adaptation is used in many machine learning applications where, during training, a model has access to unlabeled data in the target domain.
We derive a novel generalization bound for domain adaptation that exploits a new measure of discrepancy between distributions based on a variational characterization of f-divergences.
arXiv Detail & Related papers (2021-06-21T18:21:09Z)
- Inferring Latent Domains for Unsupervised Deep Domain Adaptation [54.963823285456925]
Unsupervised Domain Adaptation (UDA) refers to the problem of learning a model in a target domain where labeled data are not available.
This paper introduces a novel deep architecture which addresses the problem of UDA by automatically discovering latent domains in visual datasets.
We evaluate our approach on publicly available benchmarks, showing that it outperforms state-of-the-art domain adaptation methods.
arXiv Detail & Related papers (2021-03-25T14:33:33Z)
- Generalizing to Unseen Domains: A Survey on Domain Generalization [59.16754307820612]
Domain generalization deals with a challenging setting where one or several different but related domain(s) are given.
The goal is to learn a model that can generalize to an unseen test domain.
This paper presents the first review for recent advances in domain generalization.
arXiv Detail & Related papers (2021-03-02T06:04:11Z)
- Learning to Generate Novel Domains for Domain Generalization [115.21519842245752]
This paper focuses on the task of learning from multiple source domains a model that generalizes well to unseen domains.
We employ a data generator to synthesize data from pseudo-novel domains to augment the source domains.
Our method, L2A-OT, outperforms current state-of-the-art DG methods on four benchmark datasets.
arXiv Detail & Related papers (2020-07-07T09:34:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.