Model-Based Domain Generalization
- URL: http://arxiv.org/abs/2102.11436v1
- Date: Tue, 23 Feb 2021 00:59:02 GMT
- Title: Model-Based Domain Generalization
- Authors: Alexander Robey and George J. Pappas and Hamed Hassani
- Abstract summary: We propose a novel approach for the domain generalization problem called Model-Based Domain Generalization.
Our algorithms beat the current state-of-the-art methods on the recently proposed WILDS benchmark by up to 20 percentage points.
- Score: 96.84818110323518
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider the problem of domain generalization, in which a predictor is
trained on data drawn from a family of related training domains and tested on a
distinct and unseen test domain. While a variety of approaches have been
proposed for this setting, it was recently shown that no existing algorithm can
consistently outperform empirical risk minimization (ERM) over the training
domains. Motivated by this gap, in this paper we propose a novel approach to the domain
generalization problem called Model-Based Domain Generalization. In our
approach, we first use unlabeled data from the training domains to learn
multi-modal domain transformation models that map data from one training domain
to any other domain. Next, we propose a constrained optimization-based
formulation for domain generalization which enforces that a trained predictor
be invariant to distributional shifts under the underlying domain
transformation model. Finally, we propose a novel algorithmic framework for
efficiently solving this constrained optimization problem. In our experiments,
we show that this approach outperforms both ERM and domain generalization
algorithms on numerous well-known, challenging datasets, including WILDS, PACS,
and ImageNet. In particular, our algorithms beat the current state-of-the-art
methods on the recently proposed WILDS benchmark by up to 20 percentage
points.
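In practice, such a constrained problem is usually handled by relaxing the invariance constraint into a penalty on the disagreement between the predictor's outputs on an input and on its image under the learned domain transformation model. The PyTorch sketch below illustrates this penalized objective; the `transform` network stands in for a pretrained domain translation model (hypothetical here), and the fixed weight `lam` is a simplification of the paper's primal-dual treatment of the constraint.

```python
import torch
import torch.nn.functional as F

def mbdg_style_loss(model, transform, x, y, lam=1.0):
    """ERM loss plus a penalty encouraging the predictor to be invariant
    under a learned domain transformation model.

    `transform` stands in for a pretrained domain translation network
    (an assumption of this sketch); `lam` is a fixed penalty weight,
    whereas the paper updates a dual variable instead of hand-tuning it.
    """
    logits = model(x)
    erm = F.cross_entropy(logits, y)
    with torch.no_grad():
        x_t = transform(x)  # map x into another training domain
    logits_t = model(x_t)
    # KL divergence between predictive distributions before/after the
    # domain transformation: small values mean the predictor is invariant.
    invariance = F.kl_div(
        F.log_softmax(logits_t, dim=1),
        F.softmax(logits, dim=1),
        reduction="batchmean",
    )
    return erm + lam * invariance
```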
Related papers
- Constrained Maximum Cross-Domain Likelihood for Domain Generalization [14.91361835243516]
Domain generalization aims to learn, on multiple source domains, a generalizable model that is expected to perform well on unseen test domains.
In this paper, we propose a novel domain generalization method, which minimizes the KL-divergence between posterior distributions from different domains.
Experiments on four standard benchmark datasets, i.e., Digits-DG, PACS, Office-Home and miniDomainNet, highlight the superior performance of our method.
arXiv Detail & Related papers (2022-10-09T03:41:02Z)
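As a rough sketch of the cross-domain KL idea above: compute the model's mean predictive distribution on a batch from each source domain and penalize the symmetrized KL divergence over all domain pairs. Averaging the softmax outputs per batch is an illustrative choice; the paper's exact posterior construction and likelihood constraint differ in detail.

```python
import torch
import torch.nn.functional as F

def cross_domain_kl_penalty(model, domain_batches):
    """Symmetrized KL divergence, averaged over all pairs of training
    domains, between the model's mean predictive distributions.

    `domain_batches` is a list of input tensors, one batch per domain.
    """
    probs = [F.softmax(model(x), dim=1).mean(dim=0) for x in domain_batches]
    penalty, pairs = 0.0, 0
    for i in range(len(probs)):
        for j in range(i + 1, len(probs)):
            p, q = probs[i], probs[j]
            # 0.5 * (KL(p || q) + KL(q || p))
            penalty = penalty + 0.5 * (
                torch.sum(p * (p.log() - q.log()))
                + torch.sum(q * (q.log() - p.log()))
            )
            pairs += 1
    return penalty / max(pairs, 1)
```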
- Single-domain Generalization in Medical Image Segmentation via Test-time Adaptation from Shape Dictionary [64.5632303184502]
Domain generalization typically requires data from multiple source domains for model learning.
This paper studies the important yet challenging single domain generalization problem, in which a model is learned under the worst-case scenario with only one source domain to directly generalize to different unseen target domains.
We present a novel approach to address this problem in medical image segmentation, which extracts and integrates semantic shape prior information that is invariant across domains.
arXiv Detail & Related papers (2022-06-29T08:46:27Z)
- A Novel Mix-normalization Method for Generalizable Multi-source Person Re-identification [49.548815417844786]
Person re-identification (Re-ID) has achieved great success in the supervised scenario.
However, it is difficult to transfer the supervised model directly to arbitrary unseen domains, because the model overfits the seen source domains.
We propose MixNorm, which consists of domain-aware mix-normalization (DMN) and domain-aware center regularization (DCR).
arXiv Detail & Related papers (2022-01-24T18:09:38Z)
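Mix-normalization in the spirit of DMN can be sketched as mixing instance-level feature statistics across a multi-domain batch, so the network trains on features re-normalized under perturbed, cross-domain statistics. The pairing by random permutation and the Beta-distributed mixing weight below follow the common statistics-mixing recipe and are assumptions of this sketch, not necessarily the paper's exact module.

```python
import torch

def mix_normalization(x, alpha=0.1, eps=1e-6):
    """Mix instance-level normalization statistics across a batch that
    pools several source domains.

    x: conv features of shape (B, C, H, W). Each instance's channel-wise
    mean/std is blended with that of a randomly paired instance, so the
    features are re-normalized under shifted statistics.
    """
    B = x.size(0)
    mu = x.mean(dim=(2, 3), keepdim=True)                 # (B, C, 1, 1)
    sig = (x.var(dim=(2, 3), keepdim=True) + eps).sqrt()  # (B, C, 1, 1)
    x_norm = (x - mu) / sig
    perm = torch.randperm(B)
    # Beta-distributed convex combination of own and paired statistics.
    lam = torch.distributions.Beta(alpha, alpha).sample((B, 1, 1, 1))
    mu_mix = lam * mu + (1.0 - lam) * mu[perm]
    sig_mix = lam * sig + (1.0 - lam) * sig[perm]
    return x_norm * sig_mix + mu_mix
```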
- f-Domain-Adversarial Learning: Theory and Algorithms [82.97698406515667]
Unsupervised domain adaptation is used in many machine learning applications where, during training, a model has access to unlabeled data in the target domain.
We derive a novel generalization bound for domain adaptation that exploits a new measure of discrepancy between distributions based on a variational characterization of f-divergences.
arXiv Detail & Related papers (2021-06-21T18:21:09Z)
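The variational characterization behind this discrepancy writes an f-divergence as a supremum over critics, D_f(P||Q) >= E_P[T(X)] - E_Q[f*(T(X))], which gives a sample-based lower-bound estimator. A minimal sketch, instantiated for the KL divergence, whose convex conjugate is f*(t) = exp(t - 1); the critic network and its training loop are assumed, and this is the generic estimator rather than the authors' full algorithm.

```python
import torch

def kl_variational_lower_bound(critic_p, critic_q):
    """Sample-based lower bound on KL(P || Q) from the variational
    (f-GAN style) characterization D_f(P||Q) >= E_P[T] - E_Q[f*(T)],
    using the convex conjugate f*(t) = exp(t - 1) of f(u) = u * log(u).

    critic_p, critic_q: critic outputs T(x) on samples from P and Q.
    Maximizing this bound over the critic tightens the estimate.
    """
    return critic_p.mean() - torch.exp(critic_q - 1.0).mean()
```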
- Adaptive Methods for Real-World Domain Generalization [32.030688845421594]
In our work, we investigate whether it is possible to leverage domain information from unseen test samples themselves.
We propose a domain-adaptive approach consisting of two steps: (a) we first learn a discriminative domain embedding from unsupervised training examples, and (b) we use this domain embedding as supplementary information to build a domain-adaptive model.
Our approach achieves state-of-the-art performance on various domain generalization benchmarks.
arXiv Detail & Related papers (2021-03-29T17:44:35Z)
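The two-step recipe above can be sketched directly: (a) pool an encoding of unlabeled samples from a domain into a single embedding, and (b) feed that embedding to the classifier as supplementary input. The module below is an illustrative architecture under those assumptions, not the paper's exact model.

```python
import torch
import torch.nn as nn

class DomainAdaptiveClassifier(nn.Module):
    """Illustrative domain-adaptive classifier: a domain embedding is
    computed from unlabeled same-domain samples and concatenated to each
    instance's features before classification."""

    def __init__(self, feat_dim, embed_dim, num_classes):
        super().__init__()
        self.domain_encoder = nn.Linear(feat_dim, embed_dim)
        self.classifier = nn.Linear(feat_dim + embed_dim, num_classes)

    def forward(self, x, domain_batch):
        # (a) average an encoding of unlabeled samples from the domain
        d = self.domain_encoder(domain_batch).mean(dim=0, keepdim=True)
        d = d.expand(x.size(0), -1)
        # (b) condition the prediction on the domain embedding
        return self.classifier(torch.cat([x, d], dim=1))
```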
- Domain Invariant Representation Learning with Domain Density Transformations [30.29600757980369]
Domain generalization refers to the problem where we aim to train a model on data from a set of source domains so that the model can generalize to unseen target domains.
We show how to use generative adversarial networks to learn the required domain density transformations, making the method practical to implement.
arXiv Detail & Related papers (2021-02-09T19:25:32Z)
- Hierarchical Variational Auto-Encoding for Unsupervised Domain Generalization [4.670305538969914]
We choose a generative approach within the framework of variational autoencoders and propose an unsupervised algorithm that is able to generalize to new domains without supervision.
Our method is able to learn representations that disentangle domain-specific information from class-label specific information even in complex settings.
arXiv Detail & Related papers (2021-01-23T07:09:59Z)
- Learning Invariant Representations and Risks for Semi-supervised Domain Adaptation [109.73983088432364]
We propose the first method that aims to simultaneously learn invariant representations and risks under the setting of semi-supervised domain adaptation (Semi-DA).
We introduce the LIRR algorithm for jointly Learning Invariant Representations and Risks.
arXiv Detail & Related papers (2020-10-09T15:42:35Z)
- Discrepancy Minimization in Domain Generalization with Generative Nearest Neighbors [13.047289562445242]
Domain generalization (DG) addresses the problem of domain shift, where a machine learning model trained on multiple source domains fails to generalize well on a target domain with different statistics.
Multiple approaches tackle domain generalization by learning domain-invariant representations across the source domains, but such representations fail to guarantee generalization on the shifted target domain.
We propose a Generative Nearest Neighbor based Discrepancy Minimization (GNNDM) method, which comes with a theoretical guarantee that the target error is upper bounded by the error in the labeling process.
arXiv Detail & Related papers (2020-07-28T14:54:25Z)
- Learning to Learn with Variational Information Bottleneck for Domain Generalization [128.90691697063616]
Domain generalization models learn to generalize to previously unseen domains, but suffer from prediction uncertainty and domain shift.
We introduce a probabilistic meta-learning model for domain generalization, in which parameters shared across domains are modeled as distributions.
To deal with domain shift, we learn domain-invariant representations via a proposed meta variational information bottleneck principle, which we call MetaVIB.
arXiv Detail & Related papers (2020-07-15T12:05:52Z)
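The base objective MetaVIB builds on is the standard variational information bottleneck: a prediction term plus a KL term compressing a stochastic representation toward a standard normal prior. Below is a minimal sketch of that base loss; the meta-learning episode structure and the modeling of shared parameters as distributions are omitted.

```python
import torch
import torch.nn.functional as F

def vib_loss(mu, logvar, logits, y, beta=1e-3):
    """Standard variational information bottleneck objective: cross
    entropy on predictions from a stochastic code z ~ N(mu, exp(logvar))
    plus a KL term pulling the code toward a standard normal prior."""
    ce = F.cross_entropy(logits, y)
    # Closed-form KL( N(mu, diag(exp(logvar))) || N(0, I) ), batch mean.
    kl = -0.5 * torch.mean(
        torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1)
    )
    return ce + beta * kl
```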
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.