Adaptive Risk Minimization: Learning to Adapt to Domain Shift
- URL: http://arxiv.org/abs/2007.02931v4
- Date: Wed, 1 Dec 2021 18:54:12 GMT
- Title: Adaptive Risk Minimization: Learning to Adapt to Domain Shift
- Authors: Marvin Zhang, Henrik Marklund, Nikita Dhawan, Abhishek Gupta, Sergey
Levine, Chelsea Finn
- Abstract summary: A fundamental assumption of most machine learning algorithms is that the training and test data are drawn from the same underlying distribution.
In this work, we consider the problem setting of domain generalization, where the training data are structured into domains and there may be multiple test time shifts.
We introduce the framework of adaptive risk minimization (ARM), in which models are directly optimized for effective adaptation to shift by learning to adapt on the training domains.
- Score: 109.87561509436016
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A fundamental assumption of most machine learning algorithms is that the
training and test data are drawn from the same underlying distribution.
However, this assumption is violated in almost all practical applications:
machine learning systems are regularly tested under distribution shift, due to
changing temporal correlations, atypical end users, or other factors. In this
work, we consider the problem setting of domain generalization, where the
training data are structured into domains and there may be multiple test time
shifts, corresponding to new domains or domain distributions. Most prior
methods aim to learn a single robust model or invariant feature space that
performs well on all domains. In contrast, we aim to learn models that adapt at
test time to domain shift using unlabeled test points. Our primary contribution
is to introduce the framework of adaptive risk minimization (ARM), in which
models are directly optimized for effective adaptation to shift by learning to
adapt on the training domains. Compared to prior methods for robustness,
invariance, and adaptation, ARM methods provide performance gains of 1-4% test
accuracy on a number of image classification problems exhibiting domain shift.
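One instantiation of this adapt-at-test-time idea replaces a model's training-set normalization statistics with statistics computed from the unlabeled test batch. The sketch below is a toy illustration of that mechanism, not the authors' implementation: the data, the `normalize` helper, and the simple mean/variance swap are illustrative assumptions standing in for BatchNorm running statistics in a real network.

```python
import numpy as np

def normalize(x, mean, var, eps=1e-5):
    # Feature-wise standardization, analogous to a BatchNorm layer.
    return (x - mean) / np.sqrt(var + eps)

rng = np.random.default_rng(0)

# "Training-domain" statistics (stand-ins for BatchNorm running stats).
train_x = rng.normal(loc=0.0, scale=1.0, size=(256, 4))
train_mean, train_var = train_x.mean(0), train_x.var(0)

# A shifted test domain: same task, different input statistics.
test_x = rng.normal(loc=3.0, scale=2.0, size=(64, 4))

# Static model: reuses training statistics under shift.
static_feats = normalize(test_x, train_mean, train_var)

# Adaptive model: recomputes statistics from the unlabeled test batch.
adapted_feats = normalize(test_x, test_x.mean(0), test_x.var(0))

print(np.abs(static_feats.mean(0)).max())   # far from 0 under shift
print(np.abs(adapted_feats.mean(0)).max())  # near 0 after adaptation
```

The adapted features are re-centered for the new domain while the static ones remain biased, which is the kind of gap that learning to adapt on the training domains is meant to exploit.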
Related papers
- Robustness, Evaluation and Adaptation of Machine Learning Models in the
Wild [4.304803366354879]
We study causes of impaired robustness to domain shifts and present algorithms for training domain robust models.
A key source of model brittleness is domain overfitting, which our new training algorithms suppress while encouraging domain-general hypotheses.
arXiv Detail & Related papers (2023-03-05T21:41:16Z)
- Improving Domain Generalization with Domain Relations [77.63345406973097]
This paper focuses on domain shifts, which occur when the model is applied to new domains that are different from the ones it was trained on.
We propose a new approach called D$^3$G to learn domain-specific models.
Our results show that D$3$G consistently outperforms state-of-the-art methods.
arXiv Detail & Related papers (2023-02-06T08:11:16Z)
- Domain Adaptation Principal Component Analysis: base linear method for
learning with out-of-distribution data [55.41644538483948]
Domain adaptation is a popular paradigm in modern machine learning.
We present Domain Adaptation Principal Component Analysis (DAPCA), a method that finds a linear reduced data representation useful for solving the domain adaptation task.
arXiv Detail & Related papers (2022-08-28T21:10:56Z)
- Learning Instance-Specific Adaptation for Cross-Domain Segmentation [79.61787982393238]
We propose a test-time adaptation method for cross-domain image segmentation.
Given a new unseen instance at test time, we adapt a pre-trained model by conducting instance-specific BatchNorm calibration.
arXiv Detail & Related papers (2022-03-30T17:59:45Z)
- On Universal Black-Box Domain Adaptation [53.7611757926922]
We study an arguably least restrictive setting of domain adaptation, in the sense of practical deployment: only the prediction interface of the source model is available to the target domain, and the label-space relations between the two domains are allowed to be different and unknown.
We propose a unified self-training framework, regularized by consistency of predictions in local neighborhoods of target samples.
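A minimal sketch of such neighborhood-consistency-regularized self-training is given below. It is an assumption-laden toy, not the paper's method: the black-box `source_predict` rule, the data, and the fixed agreement threshold are all hypothetical, and consistency is enforced by simply filtering pseudo-labels whose k nearest neighbors disagree.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical black-box source model: only its prediction interface is
# available to the target domain (no weights, no features).
def source_predict(x):
    return (x.sum(axis=1) > 0).astype(int)  # stand-in decision rule

# Unlabeled target samples, slightly shifted from the source domain.
target_x = rng.normal(0.5, 1.0, size=(100, 2))

# Self-training: use black-box predictions as pseudo-labels ...
pseudo = source_predict(target_x)

# ... regularized by consistency in local neighborhoods: keep a
# pseudo-label only if most of its k nearest neighbors agree with it.
k = 5
dists = np.linalg.norm(target_x[:, None] - target_x[None, :], axis=-1)
np.fill_diagonal(dists, np.inf)           # exclude self-matches
neighbors = np.argsort(dists, axis=1)[:, :k]
agree = (pseudo[neighbors] == pseudo[:, None]).mean(axis=1)
keep = agree >= 0.6
print(keep.sum(), "of", len(pseudo), "pseudo-labels kept for training")
```

Samples near the source model's decision boundary tend to have disagreeing neighbors and are filtered out, so the retained pseudo-labels form a cleaner training signal for the target model.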
arXiv Detail & Related papers (2021-04-10T02:21:09Z)
- Learning Invariant Representations and Risks for Semi-supervised Domain
Adaptation [109.73983088432364]
We propose the first method that aims to simultaneously learn invariant representations and risks under the setting of semi-supervised domain adaptation (Semi-DA).
We introduce the LIRR algorithm for jointly Learning Invariant Representations and Risks.
arXiv Detail & Related papers (2020-10-09T15:42:35Z)
- A Brief Review of Domain Adaptation [1.2043574473965317]
This paper focuses on unsupervised domain adaptation, where the labels are only available in the source domain.
It presents successful shallow and deep approaches to the domain adaptation problem.
arXiv Detail & Related papers (2020-10-07T07:05:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.