Learnable Uncertainty under Laplace Approximations
- URL: http://arxiv.org/abs/2010.02720v2
- Date: Mon, 7 Jun 2021 15:06:19 GMT
- Title: Learnable Uncertainty under Laplace Approximations
- Authors: Agustinus Kristiadi, Matthias Hein, Philipp Hennig
- Abstract summary: We develop a formalism to explicitly "train" the uncertainty, decoupled from the prediction itself.
We show that such units can be trained via an uncertainty-aware objective, improving standard Laplace approximations' performance.
- Score: 65.24701908364383
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Laplace approximations are classic, computationally lightweight means for
constructing Bayesian neural networks (BNNs). As in other approximate BNNs, one
cannot necessarily expect the induced predictive uncertainty to be calibrated.
Here we develop a formalism to explicitly "train" the uncertainty, decoupled
from the prediction itself. To this end, we introduce uncertainty
units for Laplace-approximated networks: Hidden units associated with a
particular weight structure that can be added to any pre-trained,
point-estimated network. Due to their weights, these units are inactive -- they
do not affect the predictions. But their presence changes the geometry (in
particular the Hessian) of the loss landscape, thereby affecting the network's
uncertainty estimates under a Laplace approximation. We show that such units
can be trained via an uncertainty-aware objective, improving standard Laplace
approximations' performance in various uncertainty quantification tasks.
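The mechanism the abstract describes rests on a standard property of Laplace approximations: the posterior over weights is approximated by a Gaussian centered at the point estimate, with covariance given by the inverse Hessian of the loss, so anything that changes the Hessian changes the predictive uncertainty without moving the prediction. The following minimal sketch (not the authors' code; a toy 1-D linear model with an assumed noise variance and prior precision) illustrates how the Hessian at a MAP estimate yields a posterior variance and hence input-dependent predictive uncertainty:

```python
import numpy as np

# Toy sketch (not the paper's method): Laplace approximation for a
# 1-D linear model f(x) = w * x with Gaussian likelihood. The posterior
# over w is approximated as N(w_map, H^{-1}), where H is the Hessian of
# the negative log-posterior at the MAP estimate.

rng = np.random.default_rng(0)
x = rng.normal(size=50)
y = 2.0 * x + 0.1 * rng.normal(size=50)

noise_var = 0.1 ** 2   # assumed observation-noise variance
prior_prec = 1.0       # assumed prior precision on w

# MAP estimate of w (ridge-regression closed form)
w_map = (x @ y / noise_var) / (x @ x / noise_var + prior_prec)

# Hessian of the negative log-posterior w.r.t. w (a scalar here);
# its inverse is the Laplace posterior variance over w
hessian = x @ x / noise_var + prior_prec
w_var = 1.0 / hessian

def predictive_var(x_star):
    # Predictive variance at a test point: grows with |x_star| and with
    # w_var, so inputs far from the data receive larger uncertainty.
    return x_star ** 2 * w_var + noise_var

print(predictive_var(0.0) < predictive_var(10.0))  # prints True
```

In the paper's setting, the extra "uncertainty units" have outgoing weights that leave the prediction (and MAP estimate) untouched, but their parameters enlarge the Hessian above, which is exactly the quantity the uncertainty-aware objective then tunes.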
Related papers
- Uncertainty Propagation in Node Classification [9.03984964980373]
We focus on measuring uncertainty of graph neural networks (GNNs) for the task of node classification.
We propose a Bayesian uncertainty propagation (BUP) method, which embeds GNNs in a Bayesian modeling framework.
We present an uncertainty-oriented loss for node classification that allows the GNNs to explicitly integrate predictive uncertainty into the learning procedure.
arXiv Detail & Related papers (2023-04-03T12:18:23Z)
- Looking at the posterior: accuracy and uncertainty of neural-network predictions [0.0]
We show that prediction accuracy depends on both epistemic and aleatoric uncertainty.
We introduce a novel acquisition function that outperforms common uncertainty-based methods.
arXiv Detail & Related papers (2022-11-26T16:13:32Z)
- Uncertainty estimation of pedestrian future trajectory using Bayesian approximation [137.00426219455116]
Under dynamic traffic scenarios, planning based on deterministic predictions is not trustworthy.
The authors propose to quantify uncertainty during forecasting using Bayesian approximation, capturing uncertainty that deterministic approaches fail to capture.
The effect of dropout weights and long-term prediction on future state uncertainty has been studied.
arXiv Detail & Related papers (2022-05-04T04:23:38Z)
- NUQ: Nonparametric Uncertainty Quantification for Deterministic Neural Networks [151.03112356092575]
We show the principled way to measure the uncertainty of predictions for a classifier based on Nadaraya-Watson's nonparametric estimate of the conditional label distribution.
We demonstrate the strong performance of the method in uncertainty estimation tasks on a variety of real-world image datasets.
arXiv Detail & Related papers (2022-02-07T12:30:45Z)
- Robust uncertainty estimates with out-of-distribution pseudo-inputs training [0.0]
We propose to explicitly train the uncertainty predictor in regions where no data are given, to make it reliable there.
As one cannot train without data, we provide mechanisms for generating pseudo-inputs in informative low-density regions of the input space.
With a holistic evaluation, we demonstrate that this yields robust and interpretable predictions of uncertainty while retaining state-of-the-art performance on diverse tasks.
arXiv Detail & Related papers (2022-01-15T17:15:07Z)
- Dense Uncertainty Estimation [62.23555922631451]
In this paper, we investigate neural networks and uncertainty estimation techniques to achieve both accurate deterministic prediction and reliable uncertainty estimation.
We work on two types of uncertainty estimation solutions, namely ensemble-based methods and generative-model-based methods, and explain their pros and cons when used in fully-, semi-, and weakly-supervised frameworks.
arXiv Detail & Related papers (2021-10-13T01:23:48Z)
- Uncertainty Intervals for Graph-based Spatio-Temporal Traffic Prediction [0.0]
We propose a Spatio-Temporal neural network that is trained to estimate a density given the measurements of previous timesteps, conditioned on a quantile.
Our method of density estimation is fully parameterised by our neural network and does not use a likelihood approximation internally.
This approach produces uncertainty estimates without the need to sample during inference, such as in Monte Carlo Dropout.
arXiv Detail & Related papers (2020-12-09T18:02:26Z)
- Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)
- Being Bayesian, Even Just a Bit, Fixes Overconfidence in ReLU Networks [65.24701908364383]
We show that a sufficient condition for calibrated uncertainty on a ReLU network is "to be a bit Bayesian".
We further validate these findings empirically via various standard experiments using common deep ReLU networks and Laplace approximations.
arXiv Detail & Related papers (2020-02-24T08:52:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.