Unlabelled Data Improves Bayesian Uncertainty Calibration under
Covariate Shift
- URL: http://arxiv.org/abs/2006.14988v1
- Date: Fri, 26 Jun 2020 13:50:19 GMT
- Title: Unlabelled Data Improves Bayesian Uncertainty Calibration under
Covariate Shift
- Authors: Alex J. Chan, Ahmed M. Alaa, Zhaozhi Qian and Mihaela van der Schaar
- Abstract summary: We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
- Score: 100.52588638477862
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Modern neural networks have proven to be powerful function approximators,
providing state-of-the-art performance in a multitude of applications. However,
they fall short in their ability to quantify confidence in their predictions,
which is crucial in high-stakes applications that involve critical
decision-making. Bayesian neural networks (BNNs) aim to solve this problem by
placing a prior distribution over the network's parameters, thereby inducing a
posterior distribution that encapsulates predictive uncertainty. While existing
variants of BNNs based on Monte Carlo dropout produce reliable (albeit
approximate) uncertainty estimates over in-distribution data, they tend to
exhibit over-confidence in predictions made on target data whose feature
distribution differs from the training data, i.e., the covariate shift setup.
In this paper, we develop an approximate Bayesian inference scheme based on
posterior regularisation, wherein unlabelled target data serve as
"pseudo-labels" of model confidence that regularise the model's loss on
labelled source data. We show that this approach significantly improves
the accuracy of uncertainty quantification on covariate-shifted data sets, with
minimal modification to the underlying model architecture. We demonstrate the
utility of our method in the context of transferring prognostic models of
prostate cancer across globally diverse populations.
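To make the scheme concrete, here is a minimal sketch, assuming an MC-dropout classifier and a squared entropy-gap penalty; the names (`MCDropoutNet`, `target_entropy`, `lam`) and the exact form of the regulariser are illustrative assumptions, not the paper's precise objective:
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MCDropoutNet(nn.Module):
    """Small classifier whose dropout layer stays active for MC sampling."""
    def __init__(self, d_in, d_hidden, n_classes, p=0.5):
        super().__init__()
        self.fc1 = nn.Linear(d_in, d_hidden)
        self.drop = nn.Dropout(p)
        self.fc2 = nn.Linear(d_hidden, n_classes)

    def forward(self, x):
        return self.fc2(self.drop(F.relu(self.fc1(x))))

def predictive_entropy(model, x, n_samples=10):
    """Entropy of the MC-dropout predictive mean (keep the model in
    train mode so dropout stays stochastic)."""
    probs = torch.stack([F.softmax(model(x), dim=-1) for _ in range(n_samples)])
    p_mean = probs.mean(0)
    return -(p_mean * p_mean.clamp_min(1e-8).log()).sum(-1).mean()

def regularised_loss(model, x_src, y_src, x_tgt, lam=0.1, target_entropy=0.5):
    """Source-domain NLL plus a penalty pulling target-domain predictive
    entropy toward a reference level acting as a confidence 'pseudo-label'."""
    nll = F.cross_entropy(model(x_src), y_src)
    gap = (predictive_entropy(model, x_tgt) - target_entropy) ** 2
    return nll + lam * gap
```
In this reading, the unlabelled target batch never contributes labels; it only tells the model how uncertain it ought to be away from the source distribution.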
Related papers
- Tractable Function-Space Variational Inference in Bayesian Neural Networks [72.97620734290139]
A popular approach for estimating the predictive uncertainty of neural networks is to define a prior distribution over the network parameters.
We propose a scalable function-space variational inference method that allows incorporating prior information.
We show that the proposed method leads to state-of-the-art uncertainty estimation and predictive performance on a range of prediction tasks.
arXiv Detail & Related papers (2023-12-28T18:33:26Z)
- Multiclass Alignment of Confidence and Certainty for Network Calibration [10.15706847741555]
Recent studies reveal that deep neural networks (DNNs) are prone to making overconfident predictions.
We propose a new train-time calibration method, which features a simple, plug-and-play auxiliary loss known as multi-class alignment of predictive mean confidence and predictive certainty (MACC).
Our method achieves state-of-the-art calibration performance for both in-domain and out-domain predictions.
arXiv Detail & Related papers (2023-09-06T00:56:24Z)
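As a rough illustration of the MACC idea just summarised, one plausible form of such an auxiliary loss is sketched below, assuming both confidence and certainty are computed from stochastic forward passes; the paper's exact definitions may differ:
```python
import torch

def macc_auxiliary_loss(mc_probs):
    """mc_probs: (S, B, C) softmax outputs from S stochastic forward passes.
    Aligns the predicted class's mean confidence with a certainty score
    derived from its spread across samples (illustrative definitions)."""
    p_mean = mc_probs.mean(0)                             # (B, C)
    pred = p_mean.argmax(-1, keepdim=True)                # (B, 1)
    conf = p_mean.gather(1, pred).squeeze(1)              # mean confidence
    spread = mc_probs.std(0).gather(1, pred).squeeze(1)   # sample std
    certainty = 1.0 - spread / (spread.max() + 1e-8)      # low spread = certain
    return (conf - certainty).abs().mean()
```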
- Confidence estimation of classification based on the distribution of the neural network output layer [4.529188601556233]
One of the most common problems preventing the application of prediction models in the real world is a lack of generalization.
We propose novel methods that estimate the uncertainty of individual predictions generated by a neural network classification model.
The proposed methods infer the confidence of a particular prediction based on the distribution of the logit values corresponding to this prediction.
arXiv Detail & Related papers (2022-10-14T12:32:50Z)
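One plausible instantiation of this logit-distribution idea is the following; the fitting and scoring rules are assumptions for illustration, not the paper's method:
```python
import numpy as np

def fit_logit_stats(val_logits, val_preds, val_correct):
    """Per-class mean/std of the winning logit over correct validation predictions."""
    stats = {}
    for c in np.unique(val_preds):
        vals = val_logits[(val_preds == c) & val_correct, c]
        stats[c] = (vals.mean(), vals.std() + 1e-8)
    return stats

def logit_confidence(logits, stats):
    """Scores how typical the winning logit is for its class; an atypically
    low logit maps to low confidence via a sigmoid of its z-score."""
    pred = int(np.argmax(logits))
    mu, sd = stats[pred]
    z = (logits[pred] - mu) / sd
    return float(1.0 / (1.0 + np.exp(-z)))
```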
- Uncertainty-guided Source-free Domain Adaptation [77.3844160723014]
Source-free domain adaptation (SFDA) aims to adapt a classifier to an unlabelled target data set using only a pre-trained source model.
We propose quantifying the uncertainty in the source model predictions and utilizing it to guide the target adaptation.
arXiv Detail & Related papers (2022-08-16T08:03:30Z)
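A minimal sketch of how prediction uncertainty can guide target adaptation, assuming an entropy-weighted self-training loss; this is an illustrative reading, not necessarily the paper's exact mechanism:
```python
import math
import torch
import torch.nn.functional as F

def uncertainty_weighted_pseudo_loss(logits_tgt):
    """Self-training on unlabelled target data: each pseudo-label is
    down-weighted by its normalised predictive entropy."""
    probs = F.softmax(logits_tgt, dim=-1)
    pseudo = probs.argmax(-1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(-1)
    weight = 1.0 - entropy / math.log(logits_tgt.shape[-1])
    per_sample = F.cross_entropy(logits_tgt, pseudo, reduction="none")
    return (weight.detach() * per_sample).mean()
```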
- Training on Test Data with Bayesian Adaptation for Covariate Shift [96.3250517412545]
Deep neural networks often make inaccurate predictions with unreliable uncertainty estimates.
We derive a Bayesian model that provides for a well-defined relationship between unlabeled inputs under distributional shift and model parameters.
We show that our method improves both accuracy and uncertainty estimation.
arXiv Detail & Related papers (2021-09-27T01:09:08Z)
- Predicting with Confidence on Unseen Distributions [90.68414180153897]
We connect domain adaptation and predictive uncertainty literature to predict model accuracy on challenging unseen distributions.
We find that the difference of confidences (DoC) of a classifier's predictions successfully estimates the classifier's performance change over a variety of shifts.
We specifically investigate the distinction between synthetic and natural distribution shifts and observe that, despite its simplicity, DoC consistently outperforms other quantifications of distributional difference.
arXiv Detail & Related papers (2021-07-07T15:50:18Z)
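The DoC statistic lends itself to a very small sketch: estimate accuracy on a shifted set by subtracting the drop in average confidence from the known source accuracy (function names here are illustrative):
```python
import numpy as np

def avg_confidence(probs):
    """Mean top-class softmax probability over a dataset of shape (N, C)."""
    return probs.max(axis=1).mean()

def doc_accuracy_estimate(probs_source, acc_source, probs_shifted):
    """Difference of confidences (DoC): estimated accuracy on the shifted
    set is the source accuracy minus the drop in average confidence."""
    doc = avg_confidence(probs_source) - avg_confidence(probs_shifted)
    return acc_source - doc
```
The appeal is that this needs only softmax outputs and unlabelled data from the shifted distribution.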
- A Kernel Framework to Quantify a Model's Local Predictive Uncertainty under Data Distributional Shifts [21.591460685054546]
Internal layer outputs of a trained neural network contain all of the information related to both its mapping function and its input data distribution.
We propose a framework for predictive uncertainty quantification of a trained neural network that explicitly estimates the PDF of its raw prediction space.
The kernel framework is observed to provide much more precise model uncertainty estimates, as measured by its ability to detect model prediction errors.
arXiv Detail & Related papers (2021-03-02T00:31:53Z)
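A hedged sketch of density-based uncertainty in the raw prediction space, using a generic Gaussian KDE in place of the paper's specific kernel construction:
```python
import numpy as np
from scipy.stats import gaussian_kde

def fit_prediction_kde(train_logits):
    """Fit a KDE over raw predictions; gaussian_kde expects shape (dims, n)."""
    return gaussian_kde(train_logits.T)

def local_uncertainty(kde, test_logits):
    """Low density under the training-time KDE flags atypical predictions,
    which is where prediction errors are more likely."""
    return -np.log(kde(test_logits.T) + 1e-12)
```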
- Improving Uncertainty Calibration via Prior Augmented Data [56.88185136509654]
Neural networks have proven successful at learning from complex data distributions by acting as universal function approximators.
However, they are often overconfident, which leads to inaccurate and miscalibrated probabilistic predictions.
We propose a solution by seeking out regions of feature space where the model is unjustifiably overconfident, and conditionally raising the entropy of those predictions towards that of the prior distribution of the labels.
arXiv Detail & Related papers (2021-02-22T07:02:37Z)
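The conditional entropy-raising step can be sketched as a KL penalty pulling predictions on suspect inputs toward the label prior; identifying those regions is a separate component, and everything below is illustrative:
```python
import torch
import torch.nn.functional as F

def entropy_raising_penalty(logits_suspect, label_prior):
    """KL(prior || model) on inputs flagged as unjustifiably overconfident,
    pulling those predictions toward the prior distribution of the labels."""
    log_p = F.log_softmax(logits_suspect, dim=-1)
    return F.kl_div(log_p, label_prior.expand_as(log_p), reduction="batchmean")
```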
- Towards Trustworthy Predictions from Deep Neural Networks with Fast Adversarial Calibration [2.8935588665357077]
We propose an efficient yet general modelling approach for obtaining well-calibrated, trustworthy probabilities for samples obtained after a domain shift.
We introduce a new training strategy combining an entropy-encouraging loss term with an adversarial calibration loss term and demonstrate that this results in well-calibrated and technically trustworthy predictions.
arXiv Detail & Related papers (2020-12-20T13:39:29Z)
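A minimal sketch of combining an entropy-encouraging term with a calibration penalty on adversarially perturbed inputs; the FGSM perturbation and the Brier-style confidence-vs-accuracy gap are assumptions, not the paper's exact losses:
```python
import torch
import torch.nn.functional as F

def mean_entropy(logits):
    """Average predictive entropy; encouraging it discourages overconfidence."""
    p = F.softmax(logits, dim=-1)
    return -(p * p.clamp_min(1e-8).log()).sum(-1).mean()

def calibrated_adversarial_loss(model, x, y, eps=0.01, lam_ent=0.1, lam_cal=1.0):
    """Cross-entropy, minus an entropy-encouraging term, plus a simple
    confidence-vs-accuracy gap on FGSM-perturbed copies of the batch."""
    x = x.clone().requires_grad_(True)
    logits = model(x)
    ce = F.cross_entropy(logits, y)
    grad, = torch.autograd.grad(ce, x, retain_graph=True)
    x_adv = (x + eps * grad.sign()).detach()
    logits_adv = model(x_adv)
    conf = F.softmax(logits_adv, dim=-1).max(dim=-1).values
    hit = (logits_adv.argmax(dim=-1) == y).float()
    cal_gap = ((conf - hit) ** 2).mean()
    return ce - lam_ent * mean_entropy(logits) + lam_cal * cal_gap
```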