Bayesian Neural Network Versus Ex-Post Calibration For Prediction Uncertainty
- URL: http://arxiv.org/abs/2209.14594v1
- Date: Thu, 29 Sep 2022 07:22:19 GMT
- Title: Bayesian Neural Network Versus Ex-Post Calibration For Prediction Uncertainty
- Authors: Satya Borgohain, Klaus Ackermann and Ruben Loaiza-Maya
- Abstract summary: Probabilistic predictions from neural networks account for predictive uncertainty during classification.
In practice, most models are trained as non-probabilistic neural networks, which by default do not capture this inherent uncertainty.
A plausible alternative to the calibration approach is to use Bayesian neural networks, which directly model a predictive distribution.
- Score: 0.2343856409260935
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Probabilistic predictions from neural networks that account for predictive
uncertainty during classification are crucial in many real-world and high-impact
decision-making settings. In practice, however, most models are trained as
non-probabilistic neural networks, which by default do not capture this inherent
uncertainty. This well-known problem has led to the development of post-hoc
calibration procedures, such as Platt scaling (logistic), isotonic and beta
calibration, which transform the scores into well-calibrated empirical
probabilities. A plausible alternative to the calibration approach is to use
Bayesian neural networks, which directly model a predictive distribution.
Although they have been applied to image and text datasets, they have seen
limited adoption in the tabular and small-data regime. In this paper, we
demonstrate that Bayesian neural networks yield performance competitive with
calibrated neural networks, and we conduct experiments across a wide array of
datasets.
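As a concrete illustration, below is a minimal sketch of post-hoc calibration with scikit-learn. The base classifier and synthetic data are placeholders; Platt scaling corresponds to method="sigmoid" and isotonic calibration to method="isotonic", while beta calibration is not in scikit-learn and would need a separate package.

```python
# Minimal sketch of post-hoc (ex-post) calibration, assuming scikit-learn.
# The base classifier and synthetic data are illustrative placeholders.
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

base = RandomForestClassifier(random_state=0)

# Platt scaling fits a logistic regression on the base model's scores;
# method="isotonic" would fit a monotone, nonparametric map instead.
calibrated = CalibratedClassifierCV(base, method="sigmoid", cv=5)
calibrated.fit(X_train, y_train)

probs = calibrated.predict_proba(X_test)[:, 1]  # calibrated probabilities
```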
Related papers
- Bridging Precision and Confidence: A Train-Time Loss for Calibrating Object Detection [58.789823426981044]
We propose a novel auxiliary loss formulation that aims to align the class confidence of bounding boxes with the accuracy of their predictions.
Our results reveal that our train-time loss surpasses strong calibration baselines in reducing calibration error in both in-domain and out-of-domain scenarios.
arXiv Detail & Related papers (2023-03-25T08:56:21Z)
- Improved uncertainty quantification for neural networks with Bayesian last layer [0.0]
Uncertainty quantification is an important task in machine learning.
We present a reformulation of the log-marginal likelihood of a neural network (NN) with a Bayesian last layer (BLL), which allows for efficient training using backpropagation.
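A rough sketch of the underlying idea, not the paper's exact reformulation: with a Gaussian prior on the last-layer weights and Gaussian observation noise, the log-marginal likelihood of a Bayesian last layer has a closed form that can be backpropagated through the feature extractor. All names and hyperparameters below are illustrative, and this naive version is O(N^3) in the number of samples.

```python
# Illustrative sketch of a Bayesian last layer (BLL): closed-form
# log-marginal likelihood of a Gaussian-linear last layer, differentiable
# w.r.t. the feature extractor. Not the paper's exact reformulation.
import torch

def bll_log_marginal_likelihood(phi, y, alpha=1.0, beta=10.0):
    """phi: (N, D) features from the NN body; y: (N,) regression targets.
    Prior w ~ N(0, alpha^-1 I), observation noise variance beta^-1."""
    n = phi.shape[0]
    # Marginal covariance of y: K = beta^-1 I + alpha^-1 Phi Phi^T
    K = torch.eye(n) / beta + phi @ phi.T / alpha
    dist = torch.distributions.MultivariateNormal(
        torch.zeros(n), covariance_matrix=K
    )
    return dist.log_prob(y)

# Training sketch: maximize the marginal likelihood by backprop through
# both the feature network and the hyperparameters, e.g.
#   loss = -bll_log_marginal_likelihood(feature_net(x), y)
```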
arXiv Detail & Related papers (2023-02-21T20:23:56Z)
- NCTV: Neural Clamping Toolkit and Visualization for Neural Network Calibration [66.22668336495175]
Neural networks whose calibration is not taken into account will not gain trust from humans.
We introduce the Neural Clamping Toolkit, the first open-source framework designed to help developers apply state-of-the-art, model-agnostic calibration methods.
arXiv Detail & Related papers (2022-11-29T15:03:05Z)
- Single Model Uncertainty Estimation via Stochastic Data Centering [39.71621297447397]
We are interested in estimating the uncertainties of deep neural networks.
We present a striking new finding: an ensemble of neural networks with the same weight initialization, trained on datasets that are shifted by a constant bias, gives rise to slightly inconsistent trained models, and this inconsistency can be exploited to estimate predictive uncertainty.
We show that the resulting $\Delta$-UQ uncertainty estimates are superior to many current methods on a variety of benchmarks.
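A heavily simplified sketch of the anchoring idea as described above; the architecture, anchor distribution, and names are assumptions, not the paper's exact recipe. A single network is trained on inputs re-expressed relative to a random constant shift (an anchor), and at test time the spread of predictions across anchors serves as the uncertainty estimate.

```python
# Simplified sketch of uncertainty via stochastic data centering (anchoring).
# Architecture and anchor scheme are illustrative assumptions.
import torch
import torch.nn as nn

class AnchoredNet(nn.Module):
    def __init__(self, in_dim, hidden=64):
        super().__init__()
        # The network sees the anchor c and the residual x - c.
        self.net = nn.Sequential(
            nn.Linear(2 * in_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, x, c):
        return self.net(torch.cat([x - c, c], dim=-1))

@torch.no_grad()
def predict_with_uncertainty(model, x, n_anchors=32):
    # Average predictions over random anchors; their spread acts as an
    # epistemic uncertainty estimate, mimicking an ensemble with one model.
    preds = torch.stack(
        [model(x, torch.randn_like(x)) for _ in range(n_anchors)]
    )
    return preds.mean(0), preds.std(0)
```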
arXiv Detail & Related papers (2022-07-14T23:54:54Z)
- Bayesian Convolutional Neural Networks for Limited Data Hyperspectral Remote Sensing Image Classification [14.464344312441582]
We use a special class of deep neural networks, namely Bayesian neural networks, to classify hyperspectral remote sensing (HSRS) images.
Bayesian neural networks provide an inherent tool for measuring uncertainty.
We show that a Bayesian network can outperform a similarly-constructed non-Bayesian convolutional neural network (CNN) and an off-the-shelf Random Forest (RF).
arXiv Detail & Related papers (2022-05-19T00:02:16Z)
- NUQ: Nonparametric Uncertainty Quantification for Deterministic Neural Networks [151.03112356092575]
We show a principled way to measure the uncertainty of a classifier's predictions based on the Nadaraya-Watson nonparametric estimate of the conditional label distribution.
We demonstrate the strong performance of the method in uncertainty estimation tasks on a variety of real-world image datasets.
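In outline, a toy sketch of the core estimator, not the paper's full NUQ procedure: the Nadaraya-Watson estimate of the conditional label distribution is a kernel-weighted average of one-hot training labels. The kernel and bandwidth below are illustrative choices.

```python
# Toy Nadaraya-Watson estimate of p(y | x): kernel-weighted average of
# one-hot training labels. Bandwidth and kernel are illustrative choices.
import numpy as np

def nadaraya_watson_label_dist(x, X_train, y_train, n_classes, bandwidth=1.0):
    # Gaussian kernel weights between the query x and each training point.
    d2 = np.sum((X_train - x) ** 2, axis=1)
    w = np.exp(-d2 / (2 * bandwidth ** 2))
    onehot = np.eye(n_classes)[y_train]            # (N, C) one-hot labels
    p = w @ onehot / np.maximum(w.sum(), 1e-12)    # kernel-weighted average
    return p  # estimated conditional label distribution at x

# An uncertainty score can then be read off the estimate, e.g. its entropy:
#   u = -np.sum(p * np.log(p + 1e-12))
```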
arXiv Detail & Related papers (2022-02-07T12:30:45Z)
- Improving Uncertainty Calibration via Prior Augmented Data [56.88185136509654]
Neural networks have proven successful at learning from complex data distributions by acting as universal function approximators.
They are often overconfident in their predictions, which leads to inaccurate and miscalibrated probabilistic predictions.
We propose a solution by seeking out regions of feature space where the model is unjustifiably overconfident, and conditionally raising the entropy of those predictions towards that of the prior distribution of the labels.
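One way to realize such a penalty, sketched under my own assumptions rather than as the paper's exact loss: add a term on augmented points that pulls the model's predictions toward the prior distribution of the labels, thereby raising their entropy.

```python
# Sketch of an entropy-raising regularizer: on augmented points where the
# model should not be confident, pull predictions toward the label prior.
# The prior, the weighting lam, and the point selection are assumptions.
import torch
import torch.nn.functional as F

def prior_regularized_loss(logits, targets, aug_logits, label_prior, lam=0.1):
    # Standard cross-entropy on real labelled data.
    ce = F.cross_entropy(logits, targets)
    # Cross-entropy from the label prior to the model on augmented points;
    # minimizing it is equivalent to minimizing KL(prior || model), which
    # raises predictive entropy toward that of the prior.
    log_probs = F.log_softmax(aug_logits, dim=-1)
    reg = -(label_prior * log_probs).sum(-1).mean()
    return ce + lam * reg
```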
arXiv Detail & Related papers (2021-02-22T07:02:37Z)
- Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)
- Regularizing Class-wise Predictions via Self-knowledge Distillation [80.76254453115766]
We propose a new regularization method that penalizes differences between the predictive distributions of similar samples.
This regularizes the dark knowledge (i.e., the knowledge about wrong predictions) of a single network.
Our experimental results on various image classification tasks demonstrate that this simple yet powerful method can significantly improve generalization ability.
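A minimal sketch of such a class-wise regularizer; the temperature, weighting, and pairing strategy are illustrative assumptions. Each sample's predictive distribution is matched to that of another sample sharing its label, with the second forward pass detached so the network distills from itself.

```python
# Sketch of class-wise self-knowledge distillation: penalize the KL
# divergence between predictive distributions of two samples that share
# a label. Temperature T and pairing strategy are illustrative assumptions.
import torch
import torch.nn.functional as F

def class_wise_self_distillation_loss(model, x, x_same_class, y, T=4.0, lam=1.0):
    logits = model(x)
    ce = F.cross_entropy(logits, y)
    with torch.no_grad():                    # the "teacher" pass is detached
        target_logits = model(x_same_class)  # another sample of the same class
    kl = F.kl_div(
        F.log_softmax(logits / T, dim=-1),
        F.softmax(target_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    return ce + lam * kl
```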
arXiv Detail & Related papers (2020-03-31T06:03:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.