Out-of-domain Generalization from a Single Source: An Uncertainty
Quantification Approach
- URL: http://arxiv.org/abs/2108.02888v1
- Date: Thu, 5 Aug 2021 23:53:55 GMT
- Title: Out-of-domain Generalization from a Single Source: An Uncertainty
Quantification Approach
- Authors: Xi Peng, Fengchun Qiao, Long Zhao
- Abstract summary: We study a worst-case scenario in generalization: Out-of-domain generalization from a single source.
The goal is to learn a robust model from a single source and expect it to generalize over many unknown distributions.
We propose uncertainty-guided domain generalization to tackle the limitations.
- Score: 17.334457450818473
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We study a worst-case scenario in generalization: Out-of-domain
generalization from a single source. The goal is to learn a robust model from a
single source and expect it to generalize over many unknown distributions. This
challenging problem has seldom been investigated, and existing solutions
suffer from limitations such as neglecting uncertainty assessment and label
augmentation. In this paper, we propose uncertainty-guided domain
generalization to tackle the aforementioned limitations. The key idea is to
augment the source capacity in both feature and label spaces, while the
augmentation is guided by uncertainty assessment. To the best of our knowledge,
this is the first work to (1) quantify the generalization uncertainty from a
single source and (2) leverage it to guide both feature and label augmentation
for robust generalization. The model training and deployment are effectively
organized in a Bayesian meta-learning framework. We conduct extensive
comparisons and ablation studies to validate our approach. The results
demonstrate superior performance across a wide range of tasks, including image
classification, semantic segmentation, text classification, and speech
recognition.
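The abstract's core idea, uncertainty-guided augmentation in both feature and label spaces, can be illustrated with a minimal sketch. This is not the authors' implementation; the function names, the Monte Carlo variance estimate, and the noise/smoothing scheme are illustrative assumptions about how such guidance could work:

```python
import numpy as np

def predictive_uncertainty(logits_samples):
    """Per-sample variance of softmax probabilities across T stochastic
    forward passes (e.g. MC-dropout); a simple stand-in for a Bayesian
    uncertainty assessment. Input shape: (T, N, C)."""
    z = logits_samples - logits_samples.max(axis=-1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)  # (T, N, C)
    return probs.var(axis=0).mean(axis=-1)                     # (N,)

def uncertainty_guided_augment(features, labels_onehot, uncertainty, rng,
                               max_noise=0.5, max_smooth=0.2):
    """Feature space: add Gaussian noise whose magnitude scales with
    uncertainty. Label space: smooth labels in proportion to uncertainty."""
    u = uncertainty / (uncertainty.max() + 1e-8)       # normalize to [0, 1]
    noise = rng.normal(size=features.shape) * (max_noise * u)[:, None]
    aug_x = features + noise
    eps = (max_smooth * u)[:, None]
    num_classes = labels_onehot.shape[1]
    aug_y = labels_onehot * (1 - eps) + eps / num_classes
    return aug_x, aug_y
```

Samples the model is uncertain about receive stronger feature perturbations and softer labels, so the augmented source covers more of the space where generalization is least reliable.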
Related papers
- Uncertainty-guided Contrastive Learning for Single Source Domain Generalisation [15.907643838530655]
In this paper, we introduce a novel model referred to as Contrastive Uncertainty Domain Generalisation Network (CUDGNet).
The key idea is to augment the source capacity in both input and label spaces through the fictitious domain generator.
Our method also provides efficient uncertainty estimation at inference time from a single forward pass through the generator subnetwork.
arXiv Detail & Related papers (2024-03-12T10:47:45Z)
- Rethinking Multi-domain Generalization with A General Learning Objective [19.28143363034362]
Multi-domain generalization (mDG) universally aims to minimize the discrepancy between training and testing distributions.
Existing mDG literature lacks a general learning objective paradigm.
We propose to leverage a $Y$-mapping to relax the constraint.
arXiv Detail & Related papers (2024-02-29T05:00:30Z)
- Activate and Reject: Towards Safe Domain Generalization under Category Shift [71.95548187205736]
We study a practical problem of Domain Generalization under Category Shift (DGCS).
It aims to simultaneously detect unknown-class samples and classify known-class samples in the target domains.
Compared to prior DG works, we face two new challenges: 1) how to learn the concept of "unknown" during training with only source known-class samples, and 2) how to adapt the source-trained model to unseen environments.
arXiv Detail & Related papers (2023-10-07T07:53:12Z)
- Probabilistic Test-Time Generalization by Variational Neighbor-Labeling [62.158807685159736]
This paper strives for domain generalization, where models are trained exclusively on source domains before being deployed on unseen target domains.
It applies probabilistic pseudo-labeling to target samples to generalize the source-trained model to the target domain at test time.
Variational neighbor labels incorporate information from neighboring target samples to generate more robust pseudo labels.
arXiv Detail & Related papers (2023-07-08T18:58:08Z)
- When Neural Networks Fail to Generalize? A Model Sensitivity Perspective [82.36758565781153]
Domain generalization (DG) aims to train a model to perform well in unseen domains under different distributions.
This paper considers a more realistic yet more challenging scenario, namely Single Domain Generalization (Single-DG).
We empirically identify a model property that correlates strongly with its generalization ability, which we coin "model sensitivity".
We propose a novel strategy of Spectral Adversarial Data Augmentation (SADA) to generate augmented images targeted at the highly sensitive frequencies.
arXiv Detail & Related papers (2022-12-01T20:15:15Z)
- A Call to Reflect on Evaluation Practices for Failure Detection in Image Classification [0.491574468325115]
We present the first large-scale empirical study enabling the benchmarking of confidence scoring functions.
The finding that a simple softmax response baseline is the overall best-performing method underlines the drastic shortcomings of current evaluation practices.
arXiv Detail & Related papers (2022-11-28T12:25:27Z)
- Robustness Implies Generalization via Data-Dependent Generalization Bounds [24.413499775513145]
This paper proves that robustness implies generalization via data-dependent generalization bounds.
We present several examples, including ones for lasso and deep learning, in which our bounds are provably preferable.
arXiv Detail & Related papers (2022-06-27T17:58:06Z)
- Invariance Principle Meets Information Bottleneck for Out-of-Distribution Generalization [77.24152933825238]
We show that for linear classification tasks we need stronger restrictions on the distribution shifts, or otherwise OOD generalization is impossible.
We prove that a form of the information bottleneck constraint along with invariance helps address key failures when invariant features capture all the information about the label and also retains the existing success when they do not.
arXiv Detail & Related papers (2021-06-11T20:42:27Z)
- Semi-Supervised Domain Generalization with Stochastic StyleMatch [90.98288822165482]
In real-world applications, we might have only a few labels available from each source domain due to high annotation cost.
In this work, we investigate semi-supervised domain generalization, a more realistic and practical setting.
Our proposed approach, StyleMatch, is inspired by FixMatch, a state-of-the-art semi-supervised learning method based on pseudo-labeling.
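The FixMatch-style pseudo-labeling that StyleMatch builds on can be sketched briefly. This is a generic illustration of the confidence-thresholding step common to FixMatch-like methods, not StyleMatch itself; the threshold value is an assumption:

```python
import numpy as np

def pseudo_label(weak_logits, tau=0.95):
    """FixMatch-style pseudo-labeling: predict on weakly augmented
    unlabeled inputs and keep only predictions whose confidence
    exceeds the threshold tau. Returns (labels, keep_mask)."""
    z = weak_logits - weak_logits.max(axis=-1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    confidence = probs.max(axis=-1)
    labels = probs.argmax(axis=-1)
    return labels, confidence >= tau
```

The retained pseudo labels are then used as supervision targets for strongly augmented views of the same inputs, which is the mechanism that lets a few labels per source domain go a long way.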
arXiv Detail & Related papers (2021-06-01T16:00:08Z)
- Uncertainty-guided Model Generalization to Unseen Domains [15.813136035004867]
We study a worst-case scenario in generalization: Out-of-domain generalization from a single source.
The goal is to learn a robust model from a single source and expect it to generalize over many unknown distributions.
The key idea is to augment the source capacity in both input and label spaces, while the augmentation is guided by uncertainty assessment.
arXiv Detail & Related papers (2021-03-12T21:13:21Z)
- In Search of Robust Measures of Generalization [79.75709926309703]
We develop bounds on generalization error, optimization error, and excess risk.
When evaluated empirically, most of these bounds are numerically vacuous.
We argue that generalization measures should instead be evaluated within the framework of distributional robustness.
arXiv Detail & Related papers (2020-10-22T17:54:25Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the accuracy of the information presented and is not responsible for any consequences of its use.