In Search of Lost Domain Generalization
- URL: http://arxiv.org/abs/2007.01434v1
- Date: Thu, 2 Jul 2020 23:08:07 GMT
- Title: In Search of Lost Domain Generalization
- Authors: Ishaan Gulrajani, David Lopez-Paz
- Abstract summary: We implement DomainBed, a testbed for domain generalization.
We conduct extensive experiments using DomainBed and find that, when carefully implemented, empirical risk minimization shows state-of-the-art performance.
- Score: 25.43757332883202
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The goal of domain generalization algorithms is to predict well on
distributions different from those seen during training. While a myriad of
domain generalization algorithms exist, inconsistencies in experimental
conditions -- datasets, architectures, and model selection criteria -- render
fair and realistic comparisons difficult. In this paper, we are interested in
understanding how useful domain generalization algorithms are in realistic
settings. As a first step, we realize that model selection is non-trivial for
domain generalization tasks. Contrary to prior work, we argue that domain
generalization algorithms without a model selection strategy should be regarded
as incomplete. Next, we implement DomainBed, a testbed for domain
generalization including seven multi-domain datasets, nine baseline algorithms,
and three model selection criteria. We conduct extensive experiments using
DomainBed and find that, when carefully implemented, empirical risk
minimization shows state-of-the-art performance across all datasets. Looking
forward, we hope that the release of DomainBed, along with contributions from
fellow researchers, will streamline reproducible and rigorous research in
domain generalization.
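As a concrete illustration of the baseline the paper finds so competitive, the sketch below shows one pooled-ERM update in PyTorch: minibatches from every source domain are concatenated and the average cross-entropy loss over the pooled batch is minimized. This is a minimal sketch, not DomainBed's actual code; the network, optimizer, and tensor shapes are illustrative placeholders.

```python
import torch
import torch.nn as nn

def erm_update(network, optimizer, minibatches):
    """One ERM step: concatenate (x, y) minibatches from every source domain
    and minimize the average cross-entropy loss over the pooled batch."""
    all_x = torch.cat([x for x, _ in minibatches])
    all_y = torch.cat([y for _, y in minibatches])
    loss = nn.functional.cross_entropy(network(all_x), all_y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage: a linear classifier and random tensors standing in for two source domains.
network = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 7))
optimizer = torch.optim.Adam(network.parameters(), lr=1e-4)
minibatches = [(torch.randn(16, 3, 32, 32), torch.randint(0, 7, (16,)))
               for _ in range(2)]
print(erm_update(network, optimizer, minibatches))
```

The sketch covers only the training update; as the abstract stresses, a complete method must also specify a model selection strategy, which is why DomainBed pairs its nine algorithms with three model selection criteria.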
Related papers
- Non-stationary Domain Generalization: Theory and Algorithm [11.781050299571692]
In this paper, we study domain generalization in non-stationary environments.
We first examine the impact of environmental non-stationarity on model performance.
Then, we propose a novel algorithm based on adaptive invariant representation learning.
arXiv Detail & Related papers (2024-05-10T21:32:43Z)
- Domain Adversarial Active Learning for Domain Generalization Classification [8.003401798449337]
Domain generalization models aim to learn cross-domain knowledge from source domain data to improve performance on unknown target domains.
Recent research has demonstrated that diverse and rich source domain samples can enhance domain generalization capability.
We propose a domain-adversarial active learning (DAAL) algorithm for classification tasks in domain generalization.
arXiv Detail & Related papers (2024-03-10T10:59:22Z)
- Single-domain Generalization in Medical Image Segmentation via Test-time Adaptation from Shape Dictionary [64.5632303184502]
Domain generalization typically requires data from multiple source domains for model learning.
This paper studies the important yet challenging single domain generalization problem, in which a model is learned under the worst-case scenario with only one source domain to directly generalize to different unseen target domains.
We present a novel approach to address this problem in medical image segmentation, which extracts and integrates semantic shape prior information that is invariant across domains.
arXiv Detail & Related papers (2022-06-29T08:46:27Z)
- TAL: Two-stream Adaptive Learning for Generalizable Person Re-identification [115.31432027711202]
We argue that both domain-specific and domain-invariant features are crucial for improving the generalization ability of re-id models.
We propose a two-stream adaptive learning framework (TAL) to model these two kinds of information simultaneously.
Our framework can be applied to both single-source and multi-source domain generalization tasks.
arXiv Detail & Related papers (2021-11-29T01:27:42Z)
- Failure Modes of Domain Generalization Algorithms [29.772370301145543]
We propose an evaluation framework for domain generalization algorithms.
We show that the largest contributor to the generalization error varies across methods, datasets, regularization strengths and even training lengths.
arXiv Detail & Related papers (2021-11-26T20:04:19Z)
- f-Domain-Adversarial Learning: Theory and Algorithms [82.97698406515667]
Unsupervised domain adaptation is used in many machine learning applications where, during training, a model has access to unlabeled data in the target domain.
We derive a novel generalization bound for domain adaptation that exploits a new measure of discrepancy between distributions based on a variational characterization of f-divergences (a general form of this characterization is sketched after this list).
arXiv Detail & Related papers (2021-06-21T18:21:09Z)
- Generalizing to Unseen Domains: A Survey on Domain Generalization [59.16754307820612]
Domain generalization deals with a challenging setting where one or several different but related domains are given.
The goal is to learn a model that can generalize to an unseen test domain.
This paper presents the first review for recent advances in domain generalization.
arXiv Detail & Related papers (2021-03-02T06:04:11Z)
- Model-Based Domain Generalization [96.84818110323518]
We propose a novel approach for the domain generalization problem called Model-Based Domain Generalization.
Our algorithms beat the current state-of-the-art methods on the very-recently-proposed WILDS benchmark by up to 20 percentage points.
arXiv Detail & Related papers (2021-02-23T00:59:02Z)
- Batch Normalization Embeddings for Deep Domain Generalization [50.51405390150066]
Domain generalization aims at training machine learning models to perform robustly across different and unseen domains.
We show a significant increase in classification accuracy over current state-of-the-art techniques on popular domain generalization benchmarks.
arXiv Detail & Related papers (2020-11-25T12:02:57Z)
- Representation via Representations: Domain Generalization via Adversarially Learned Invariant Representations [14.751829773340537]
We examine adversarial censoring techniques for learning invariant representations from multiple "studies" (or domains).
In many contexts, such as medical forecasting, domain generalization from studies in populous areas provides fairness of a different flavor, not anticipated in previous work on algorithmic fairness.
arXiv Detail & Related papers (2020-06-20T02:35:03Z)
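For the f-Domain-Adversarial Learning entry above, a hedged sketch of the kind of discrepancy measure it refers to: the variational (Fenchel-dual) characterization writes an f-divergence as a supremum over critic functions T, with f* the convex conjugate of the generator f. The exact discrepancy and critic class used in that paper may differ.

```latex
% Variational (Fenchel-dual) characterization of an f-divergence.
% T ranges over critic functions; f^* is the convex conjugate of f.
% Illustrative general form only; the paper's exact measure may differ.
D_f(P \,\|\, Q) \;=\; \sup_{T} \;
  \mathbb{E}_{x \sim P}\!\left[ T(x) \right]
  \;-\;
  \mathbb{E}_{x \sim Q}\!\left[ f^{*}\!\bigl( T(x) \bigr) \right]
```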