A principled approach to model validation in domain generalization
- URL: http://arxiv.org/abs/2304.00629v1
- Date: Sun, 2 Apr 2023 21:12:13 GMT
- Title: A principled approach to model validation in domain generalization
- Authors: Boyang Lyu, Thuan Nguyen, Matthias Scheutz, Prakash Ishwar, Shuchin
Aeron
- Abstract summary: We propose a novel model selection method suggesting that the validation process should account for both the classification risk and the domain discrepancy.
We validate the effectiveness of the proposed method by numerical results on several domain generalization datasets.
- Score: 30.459247038765568
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Domain generalization aims to learn a model with good generalization ability,
that is, a model that performs well not only on the several seen domains but also on
unseen domains with different data distributions. State-of-the-art
domain generalization methods typically train a representation function
followed by a classifier jointly to minimize both the classification risk and
the domain discrepancy. However, when it comes to model selection, most of
these methods rely on traditional validation routines that select models solely
based on the lowest classification risk on the validation set. In this paper,
we theoretically demonstrate a trade-off between minimizing classification risk
and mitigating domain discrepancy, i.e., it is impossible to achieve the
minimum of these two objectives simultaneously. Motivated by this theoretical
result, we propose a novel model selection method suggesting that the
validation process should account for both the classification risk and the
domain discrepancy. We validate the effectiveness of the proposed method by
numerical results on several domain generalization datasets.
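The selection criterion described in the abstract can be sketched in code. This is a minimal illustration rather than the authors' exact formulation: the discrepancy measure (mean pairwise distance between per-domain feature means), the trade-off weight `lam`, and the assumption that each model exposes scikit-learn-style `predict` and `transform` methods are all illustrative choices.

```python
import numpy as np

def select_model(models, X_val, y_val, domains_val, lam=1.0):
    """Pick the model with the lowest combined validation score:
    classification risk plus `lam` times a domain-discrepancy penalty.

    A sketch of the paper's idea; the specific discrepancy and the
    weight `lam` are assumptions, not the authors' exact criterion.
    """
    best_model, best_score = None, float("inf")
    for model in models:
        # Classification risk: 0-1 error on the validation set.
        risk = np.mean(model.predict(X_val) != y_val)
        # Domain discrepancy: mean pairwise distance between the
        # per-domain means of the learned representation.
        feats = model.transform(X_val)
        means = [feats[domains_val == d].mean(axis=0)
                 for d in np.unique(domains_val)]
        pair_dists = [np.linalg.norm(means[i] - means[j])
                      for i in range(len(means))
                      for j in range(i + 1, len(means))]
        disc = np.mean(pair_dists) if pair_dists else 0.0
        score = risk + lam * disc
        if score < best_score:
            best_model, best_score = model, score
    return best_model
```

Setting `lam=0` recovers the traditional routine criticized in the abstract (lowest classification risk alone); larger `lam` trades validation accuracy for smaller domain discrepancy.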
Related papers
- Non-stationary Domain Generalization: Theory and Algorithm [11.781050299571692]
In this paper, we study domain generalization in a non-stationary environment.
We first examine the impact of environmental non-stationarity on model performance.
Then, we propose a novel algorithm based on adaptive invariant representation learning.
arXiv Detail & Related papers (2024-05-10T21:32:43Z)
- Domain-Specific Risk Minimization for Out-of-Distribution Generalization [104.17683265084757]
We first establish a generalization bound that explicitly considers the adaptivity gap.
We propose effective gap estimation methods for guiding the selection of a better hypothesis for the target.
The other method is minimizing the gap directly by adapting model parameters using online target samples.
arXiv Detail & Related papers (2022-08-18T06:42:49Z)
- Domain Generalization via Selective Consistency Regularization for Time Series Classification [16.338176636365752]
Domain generalization methods aim to learn models robust to domain shift with data from a limited number of source domains.
We propose a novel representation learning methodology that selectively enforces prediction consistency between source domains.
arXiv Detail & Related papers (2022-06-16T01:57:35Z)
- A Prototype-Oriented Framework for Unsupervised Domain Adaptation [52.25537670028037]
We provide a memory- and computation-efficient probabilistic framework to extract class prototypes and align the target features with them.
We demonstrate the general applicability of our method on a wide range of scenarios, including single-source, multi-source, class-imbalance, and source-private domain adaptation.
arXiv Detail & Related papers (2021-10-22T19:23:22Z)
- Model-Based Domain Generalization [96.84818110323518]
We propose a novel approach for the domain generalization problem called Model-Based Domain Generalization.
Our algorithms beat the current state-of-the-art methods on the recently proposed WILDS benchmark by up to 20 percentage points.
arXiv Detail & Related papers (2021-02-23T00:59:02Z)
- Selecting Treatment Effects Models for Domain Adaptation Using Causal Knowledge [82.5462771088607]
We propose a novel model selection metric specifically designed for ITE methods under the unsupervised domain adaptation setting.
In particular, we propose selecting models whose predictions of interventions' effects satisfy known causal structures in the target domain.
arXiv Detail & Related papers (2021-02-11T21:03:14Z)
- Learning Invariant Representations and Risks for Semi-supervised Domain Adaptation [109.73983088432364]
We propose the first method that aims to simultaneously learn invariant representations and risks under the setting of semi-supervised domain adaptation (Semi-DA).
We introduce the LIRR algorithm for jointly Learning Invariant Representations and Risks.
arXiv Detail & Related papers (2020-10-09T15:42:35Z)
- Estimating Generalization under Distribution Shifts via Domain-Invariant Representations [75.74928159249225]
We use a set of domain-invariant predictors as a proxy for the unknown, true target labels.
The error of the resulting risk estimate depends on the target risk of the proxy model.
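The proxy-label idea in this entry can be sketched as follows. Treating the disagreement rate with a domain-invariant proxy predictor as the risk estimate is a simplification of the paper's approach, and the function name is hypothetical.

```python
import numpy as np

def proxy_risk_estimate(model_preds, proxy_preds):
    """Estimate a model's target-domain risk as its disagreement rate
    with a domain-invariant proxy predictor on unlabeled target data.
    The estimate is only as good as the proxy: its error grows with
    the proxy model's own (unknown) risk on the target domain."""
    model_preds = np.asarray(model_preds)
    proxy_preds = np.asarray(proxy_preds)
    return float(np.mean(model_preds != proxy_preds))
```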
arXiv Detail & Related papers (2020-07-06T17:21:24Z)
- Domain segmentation and adjustment for generalized zero-shot learning [22.933463036413624]
In zero-shot learning, synthesizing unseen data with generative models has been the most popular method to address the imbalance of training data between seen and unseen classes.
We argue that synthesizing unseen data may not be an ideal approach for addressing the domain shift caused by the imbalance of the training data.
In this paper, we propose to realize the generalized zero-shot recognition in different domains.
arXiv Detail & Related papers (2020-02-01T15:00:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.