Domain-Specific Risk Minimization for Out-of-Distribution Generalization
- URL: http://arxiv.org/abs/2208.08661v4
- Date: Wed, 24 May 2023 01:54:32 GMT
- Title: Domain-Specific Risk Minimization for Out-of-Distribution Generalization
- Authors: Yi-Fan Zhang, Jindong Wang, Jian Liang, Zhang Zhang, Baosheng Yu,
Liang Wang, Dacheng Tao, Xing Xie
- Abstract summary: We first establish a generalization bound that explicitly considers the adaptivity gap.
The bound motivates two strategies: we propose effective gap estimation methods for guiding the selection of a better hypothesis for the target, and we minimize the gap directly by adapting model parameters using online target samples.
- Score: 104.17683265084757
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent domain generalization (DG) approaches typically use the hypothesis
learned on source domains for inference on the unseen target domain. However,
such a hypothesis can be arbitrarily far from the optimal one for the target
domain, induced by a gap termed the ``adaptivity gap''. Without exploiting the domain information from unseen test samples, estimating and minimizing the adaptivity gap is intractable, which prevents us from robustifying a model to arbitrary unknown distributions. In this paper, we first establish a generalization bound
that explicitly considers the adaptivity gap. Our bound motivates two strategies to reduce the gap: the first ensembles multiple classifiers to enrich the hypothesis space and uses effective gap estimation methods to guide the selection of a better hypothesis for the target; the second minimizes the gap directly by adapting model parameters using online target samples. We thus propose Domain-specific Risk Minimization (DRM). During training, DRM models the distributions of different
source domains separately; for inference, DRM performs online model steering
using the source hypothesis for each arriving target sample. Extensive
experiments demonstrate the effectiveness of the proposed DRM for domain
generalization with the following advantages: 1) it significantly outperforms
competitive baselines on different distributional shift settings; 2) it
achieves either comparable or superior accuracies on all source domains
compared to vanilla empirical risk minimization; 3) it remains simple and efficient during training; and 4) it is complementary to invariant learning
approaches.
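To make the train/inference scheme above concrete, here is a minimal PyTorch sketch of the idea, assuming one classifier head per source domain and predictive entropy as the steering criterion at inference; the paper's actual gap-estimation and online-adaptation procedures may differ.

```python
import torch
import torch.nn as nn

class DRMStyleEnsemble(nn.Module):
    """Sketch: one classifier head per source domain on a shared backbone."""

    def __init__(self, backbone: nn.Module, feat_dim: int,
                 num_classes: int, num_domains: int):
        super().__init__()
        self.backbone = backbone
        self.heads = nn.ModuleList(
            [nn.Linear(feat_dim, num_classes) for _ in range(num_domains)]
        )

    def forward(self, x: torch.Tensor, domain: int) -> torch.Tensor:
        # Training: each source domain is modeled by its own hypothesis (head).
        return self.heads[domain](self.backbone(x))

    @torch.no_grad()
    def predict(self, x: torch.Tensor) -> torch.Tensor:
        # Inference: per arriving target sample, steer towards the source
        # hypothesis that looks most confident (entropy is an assumed proxy
        # for the adaptivity gap, not the paper's exact criterion).
        feats = self.backbone(x)
        logits = torch.stack([h(feats) for h in self.heads], dim=1)  # (B, D, C)
        probs = logits.softmax(dim=-1)
        entropy = -(probs * probs.clamp_min(1e-12).log()).sum(-1)    # (B, D)
        best = entropy.argmin(dim=1)                                 # (B,)
        return logits[torch.arange(x.size(0)), best]                 # (B, C)
```

The entropy rule stands in for the abstract's gap estimation; the second strategy (adapting parameters with online target samples) would additionally update the selected head rather than only selecting it.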
Related papers
- Domain Agnostic Conditional Invariant Predictions for Domain Generalization [20.964740750976667]
We propose a Discriminant Risk Minimization (DRM) theory and the corresponding algorithm to capture the invariant features without domain labels.
In DRM theory, we prove that reducing the discrepancy between the prediction distributions of the overall source domain and any of its subsets contributes to obtaining invariant features.
We evaluate our algorithm against various domain generalization methods on multiple real-world datasets.
arXiv Detail & Related papers (2024-06-09T02:38:52Z)
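One hedged reading of the Discriminant Risk Minimization objective in the entry above, sketched in PyTorch: compare the average prediction distribution of the whole source batch with that of random subsets. The use of KL divergence and half-size subsets here is an assumption; the summary does not specify the paper's exact discrepancy measure.

```python
import torch
import torch.nn.functional as F

def subset_discrepancy(logits: torch.Tensor, num_subsets: int = 4) -> torch.Tensor:
    """Compare the batch-average prediction distribution with that of random
    half-size subsets (assumed instantiation of the DRM-theory objective)."""
    probs = logits.softmax(dim=-1)
    overall = probs.mean(dim=0)              # prediction distribution, whole batch
    loss = logits.new_zeros(())
    for _ in range(num_subsets):
        idx = torch.randperm(probs.size(0))[: probs.size(0) // 2]
        subset_mean = probs[idx].mean(dim=0) # prediction distribution, subset
        # KL(overall || subset) as the discrepancy; an assumed choice of measure.
        loss = loss + F.kl_div(subset_mean.log(), overall, reduction="sum")
    return loss / num_subsets
```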
- Unsupervised Domain Adaptation Based on the Predictive Uncertainty of Models [1.6498361958317636]
Unsupervised domain adaptation (UDA) aims to improve the prediction performance in the target domain under distribution shifts from the source domain.
We present a novel UDA method that learns domain-invariant features that minimize the domain divergence.
arXiv Detail & Related papers (2022-11-16T12:23:32Z)
- Divide and Contrast: Source-free Domain Adaptation via Adaptive Contrastive Learning [122.62311703151215]
Divide and Contrast (DaC) aims to combine the strengths of both worlds while bypassing their limitations.
DaC divides the target data into source-like and target-specific samples, where either group of samples is treated with tailored goals.
We further align the source-like domain with the target-specific samples using a memory bank-based Maximum Mean Discrepancy (MMD) loss to reduce the distribution mismatch.
arXiv Detail & Related papers (2022-11-12T09:21:49Z)
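The memory-bank-based MMD loss mentioned in the DaC entry above can be illustrated with a plain RBF-kernel MMD between two feature batches; the memory bank itself is abstracted away, so treat this as a sketch rather than the paper's implementation.

```python
import torch

def rbf_mmd2(x: torch.Tensor, y: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    """Biased estimate of squared MMD between feature batches x (n, d) and
    y (m, d) under an RBF kernel."""
    def k(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        d2 = torch.cdist(a, b).pow(2)            # pairwise squared distances
        return torch.exp(-d2 / (2.0 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2.0 * k(x, y).mean()

# Usage sketch: pull "source-like" and "target-specific" features together.
source_like = torch.randn(32, 256)       # e.g. features drawn from a memory bank
target_specific = torch.randn(32, 256)   # features of the current target batch
loss = rbf_mmd2(source_like, target_specific)
```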
- Distributionally Robust Domain Adaptation [12.02023514105999]
Domain Adaptation (DA) has recently received significant attention due to its potential to adapt a learning model across source and target domains with mismatched distributions.
In this paper, we propose DRDA, a distributionally robust domain adaptation method.
arXiv Detail & Related papers (2022-10-30T17:29:22Z)
- Source-Free Domain Adaptation via Distribution Estimation [106.48277721860036]
Domain Adaptation aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain whose data distributions are different.
Recently, Source-Free Domain Adaptation (SFDA) has drawn much attention, which tries to tackle the domain adaptation problem without using source data.
In this work, we propose a novel framework called SFDA-DE to address SFDA task via source Distribution Estimation.
arXiv Detail & Related papers (2022-04-24T12:22:19Z)
- KL Guided Domain Adaptation [88.19298405363452]
Domain adaptation is an important problem and often needed for real-world applications.
A common approach in the domain adaptation literature is to learn a representation of the input that has the same distributions over the source and the target domain.
We show that with a probabilistic representation network, the KL term can be estimated efficiently via minibatch samples.
arXiv Detail & Related papers (2021-06-14T22:24:23Z)
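The minibatch KL estimate mentioned in the KL Guided Domain Adaptation entry above can be sketched as follows, assuming the probabilistic representation network outputs a diagonal Gaussian per sample and each marginal is approximated by a uniform minibatch mixture; these details are assumptions, not the paper's exact estimator.

```python
import torch
import torch.distributions as D

def minibatch_kl(mu_s, std_s, mu_t, std_t, num_draws: int = 128) -> torch.Tensor:
    """Monte Carlo estimate of KL(p_S(z) || p_T(z)), with each marginal
    approximated by a uniform mixture of the minibatch's per-sample Gaussians."""
    def mixture(mu, std):
        components = D.Independent(D.Normal(mu, std), 1)  # one Gaussian per sample
        weights = D.Categorical(torch.ones(mu.size(0)))   # uniform mixture weights
        return D.MixtureSameFamily(weights, components)
    p_s, p_t = mixture(mu_s, std_s), mixture(mu_t, std_t)
    z = p_s.sample((num_draws,))                          # draws from the source marginal
    return (p_s.log_prob(z) - p_t.log_prob(z)).mean()

# Usage sketch with hypothetical encoder outputs (per-sample means and stds):
mu_s, std_s = torch.randn(32, 64), torch.rand(32, 64) + 0.1
mu_t, std_t = torch.randn(32, 64), torch.rand(32, 64) + 0.1
kl_term = minibatch_kl(mu_s, std_s, mu_t, std_t)
```

In training, one would instead draw z through the encoder's reparameterized samples so the KL term stays differentiable; this sketch only illustrates the minibatch estimation.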
- Regressive Domain Adaptation for Unsupervised Keypoint Detection [67.2950306888855]
Domain adaptation (DA) aims at transferring knowledge from a labeled source domain to an unlabeled target domain.
We present a method of regressive domain adaptation (RegDA) for unsupervised keypoint detection.
Our method brings a large improvement of 8% to 11% in PCK on different datasets.
arXiv Detail & Related papers (2021-03-10T16:45:22Z)
- Learning Invariant Representations and Risks for Semi-supervised Domain Adaptation [109.73983088432364]
We propose the first method that aims to simultaneously learn invariant representations and risks under the setting of semi-supervised domain adaptation (Semi-DA).
We introduce the LIRR algorithm for jointly Learning Invariant Representations and Risks.
arXiv Detail & Related papers (2020-10-09T15:42:35Z)