Uncertainty Modeling for Out-of-Distribution Generalization
- URL: http://arxiv.org/abs/2202.03958v1
- Date: Tue, 8 Feb 2022 16:09:12 GMT
- Title: Uncertainty Modeling for Out-of-Distribution Generalization
- Authors: Xiaotong Li, Yongxing Dai, Yixiao Ge, Jun Liu, Ying Shan, Ling-Yu Duan
- Abstract summary: We argue that the feature statistics can be properly manipulated to improve the generalization ability of deep learning models.
Common methods often consider the feature statistics as deterministic values measured from the learned features.
We improve the network generalization ability by modeling the uncertainty of domain shifts with synthesized feature statistics during training.
- Score: 56.957731893992495
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Though remarkable progress has been achieved in various vision tasks, deep
neural networks still suffer obvious performance degradation when tested in
out-of-distribution scenarios. We argue that the feature statistics (mean and
standard deviation), which carry the domain characteristics of the training
data, can be properly manipulated to improve the generalization ability of deep
learning models. Common methods often consider the feature statistics as
deterministic values measured from the learned features and do not explicitly
consider the uncertain statistics discrepancy caused by potential domain shifts
during testing. In this paper, we improve the network generalization ability by
modeling the uncertainty of domain shifts with synthesized feature statistics
during training. Specifically, we hypothesize that the feature statistic, after
considering the potential uncertainties, follows a multivariate Gaussian
distribution. Hence, each feature statistic is no longer a deterministic value,
but a probabilistic point with diverse distribution possibilities. With the
uncertain feature statistics, the models can be trained to alleviate the domain
perturbations and achieve better robustness against potential domain shifts.
Our method can be readily integrated into networks without additional
parameters. Extensive experiments demonstrate that our proposed method
consistently improves the network generalization ability on multiple vision
tasks, including image classification, semantic segmentation, and instance
retrieval. The code will be released soon at
https://github.com/lixiaotong97/DSU.
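The mechanism described in the abstract can be sketched in a few lines: treat each channel's mean and standard deviation as Gaussian random variables whose variances are estimated from the mini-batch, sample new statistics, and re-standardize the features with them. The following NumPy sketch is our illustration of that idea, not the authors' released code (which lives at the repository above); the function name `dsu_perturb` and the probability `p` of applying the perturbation are our assumptions.

```python
import numpy as np

def dsu_perturb(x, p=0.5, eps=1e-6, rng=None):
    """Perturb per-channel feature statistics with sampled uncertainty.

    x: feature maps of shape (B, C, H, W). Each statistic (mean, std) is
    treated as a Gaussian whose variance is estimated across the batch;
    a resampled (mean, std) pair re-standardizes the features, simulating
    a domain shift. Illustrative sketch, not the authors' implementation.
    """
    rng = np.random.default_rng() if rng is None else rng
    if rng.random() > p:  # applied stochastically during training only
        return x
    b, c, h, w = x.shape
    mu = x.mean(axis=(2, 3), keepdims=True)        # (B, C, 1, 1)
    sig = x.std(axis=(2, 3), keepdims=True) + eps
    # Uncertainty of the statistics: their variance across the batch.
    std_mu = np.sqrt(mu.var(axis=0, keepdims=True) + eps)
    std_sig = np.sqrt(sig.var(axis=0, keepdims=True) + eps)
    # Sample new statistics from the estimated Gaussians.
    new_mu = mu + rng.standard_normal((b, c, 1, 1)) * std_mu
    new_sig = sig + rng.standard_normal((b, c, 1, 1)) * std_sig
    # Re-standardize with the sampled statistics.
    return new_sig * (x - mu) / sig + new_mu
```

At test time the operation reduces to the identity (set `p=0`), which is consistent with the abstract's claim that the method adds no parameters to the network.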
Related papers
- Toward Robust Uncertainty Estimation with Random Activation Functions [3.0586855806896045]
We propose a novel approach for uncertainty quantification via ensembles, called Random Activation Functions (RAFs) Ensemble.
RAFs Ensemble outperforms state-of-the-art ensemble uncertainty quantification methods on both synthetic and real-world datasets.
arXiv Detail & Related papers (2023-02-28T13:17:56Z)
- Modeling Uncertain Feature Representation for Domain Generalization [49.129544670700525]
We show that our method consistently improves the network generalization ability on multiple vision tasks.
Our methods are simple yet effective and can be readily integrated into networks without additional trainable parameters or loss constraints.
arXiv Detail & Related papers (2023-01-16T14:25:02Z)
- Feature Shift Detection: Localizing Which Features Have Shifted via Conditional Distribution Tests [12.468665026043382]
In military sensor networks, users will want to detect when one or more of the sensors has been compromised.
We first define a formalization of this problem as multiple conditional distribution hypothesis tests.
For both efficiency and flexibility, we propose a test statistic based on the density model score function.
arXiv Detail & Related papers (2021-07-14T18:23:24Z)
- Predicting with Confidence on Unseen Distributions [90.68414180153897]
We connect domain adaptation and predictive uncertainty literature to predict model accuracy on challenging unseen distributions.
We find that the difference of confidences (DoC) of a classifier's predictions successfully estimates the classifier's performance change over a variety of shifts.
We specifically investigate the distinction between synthetic and natural distribution shifts and observe that despite its simplicity DoC consistently outperforms other quantifications of distributional difference.
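The DoC quantity from this summary is simple enough to sketch: it is the drop in a classifier's average top-class confidence between a source and a target distribution. The function names below and the direct subtraction used to estimate target accuracy are our illustrative simplifications, not the paper's exact protocol.

```python
import numpy as np

def difference_of_confidences(probs_src, probs_tgt):
    """DoC: drop in mean top-class confidence from source to target.

    probs_*: (N, K) arrays of softmax probabilities. Sketch of the
    quantity described in the summary above.
    """
    return probs_src.max(axis=1).mean() - probs_tgt.max(axis=1).mean()

def predict_target_accuracy(src_accuracy, probs_src, probs_tgt):
    # Hypothetical simplification: assume the accuracy drop tracks the
    # confidence drop one-to-one (the paper uses DoC as a predictor of
    # the performance change under shift).
    return src_accuracy - difference_of_confidences(probs_src, probs_tgt)
```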
arXiv Detail & Related papers (2021-07-07T15:50:18Z)
- Improving Uncertainty Calibration via Prior Augmented Data [56.88185136509654]
Neural networks have proven successful at learning from complex data distributions by acting as universal function approximators.
However, they are often overconfident in their predictions, which leads to inaccurate and miscalibrated probabilistic predictions.
We propose a solution by seeking out regions of feature space where the model is unjustifiably overconfident, and conditionally raising the entropy of those predictions towards that of the prior distribution of the labels.
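One simple way to realize the conditional entropy-raising step described above is to mix flagged predictions toward the label prior, since entropy is concave and the mixture moves each prediction toward the prior's entropy. The mask, mixing weight, and function name below are hypothetical; this is a sketch of the mechanism, not the paper's method.

```python
import numpy as np

def raise_entropy_toward_prior(probs, prior, overconfident_mask, lam=0.5):
    """Mix flagged predictions toward the label prior to raise entropy.

    probs: (N, K) softmax outputs; prior: (K,) label prior;
    overconfident_mask: (N,) bool flags for inputs judged unjustifiably
    overconfident (e.g. far from the training data). Hypothetical sketch.
    """
    out = probs.copy()
    out[overconfident_mask] = (1 - lam) * probs[overconfident_mask] + lam * prior
    return out
```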
arXiv Detail & Related papers (2021-02-22T07:02:37Z)
- Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)
- Diversity inducing Information Bottleneck in Model Ensembles [73.80615604822435]
In this paper, we target the problem of generating effective ensembles of neural networks by encouraging diversity in prediction.
We explicitly optimize a diversity inducing adversarial loss for learning latent variables and thereby obtain diversity in the output predictions necessary for modeling multi-modal data.
Compared to the most competitive baselines, we show significant improvements in classification accuracy, under a shift in the data distribution.
arXiv Detail & Related papers (2020-03-10T03:10:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.