Variational Laplace for Bayesian neural networks
- URL: http://arxiv.org/abs/2011.10443v2
- Date: Tue, 10 Aug 2021 13:20:59 GMT
- Title: Variational Laplace for Bayesian neural networks
- Authors: Ali Unlu, Laurence Aitchison
- Abstract summary: We develop variational Laplace for Bayesian neural networks (BNNs).
We exploit a local approximation of the curvature of the likelihood to estimate the ELBO without the need for sampling the neural-network weights.
We show that early-stopping can be avoided by increasing the learning rate for the variance parameters.
- Score: 33.46810568687292
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We develop variational Laplace for Bayesian neural networks (BNNs) which
exploits a local approximation of the curvature of the likelihood to estimate
the ELBO without the need for stochastic sampling of the neural-network
weights. The Variational Laplace objective is simple to evaluate, as it is (in
essence) the log-likelihood, plus weight-decay, plus a squared-gradient
regularizer. Variational Laplace gave better test performance and expected
calibration errors than maximum a-posteriori inference and standard
sampling-based variational inference, despite using the same variational
approximate posterior. Finally, we emphasise care needed in benchmarking
standard VI as there is a risk of stopping before the variance parameters have
converged. We show that early-stopping can be avoided by increasing the
learning rate for the variance parameters.
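To make the objective above concrete, here is a minimal PyTorch-style sketch, assuming a factorized Gaussian approximate posterior with per-parameter variances and a zero-mean Gaussian prior; the function name, the `log_sigma2` dictionary and the exact weighting of the KL-style terms are illustrative assumptions rather than the paper's reference implementation.

```python
# Minimal sketch (not the authors' code): a variational-Laplace-style loss
# built from the three ingredients named in the abstract: the log-likelihood
# at the posterior mean, weight decay from the Gaussian prior, and a
# squared-gradient regularizer weighted by the per-parameter posterior
# variances. `log_sigma2` maps parameter names to log-variance tensors.
import torch
import torch.nn.functional as F

def variational_laplace_loss(model, log_sigma2, x, y, prior_var=1.0):
    # Negative log-likelihood evaluated at the posterior mean (the current weights).
    nll = F.cross_entropy(model(x), y, reduction="sum")

    # Gradients of the NLL w.r.t. the weights; create_graph=True so the
    # squared-gradient penalty can itself be differentiated during training.
    params = list(model.parameters())
    grads = torch.autograd.grad(nll, params, create_graph=True)

    reg = 0.0
    for (name, p), g in zip(model.named_parameters(), grads):
        sigma2 = log_sigma2[name].exp()
        # KL-style terms: weight decay on the mean, shrinkage of the variance
        # towards the prior, and an entropy term that keeps the variance from
        # collapsing to zero.
        reg = reg + 0.5 * ((p ** 2 + sigma2) / prior_var - log_sigma2[name]).sum()
        # Local-curvature term approximated by the squared gradient.
        reg = reg + 0.5 * (sigma2 * g ** 2).sum()

    return nll + reg
```

In line with the abstract's remark about early stopping, one natural usage pattern is to place the `log_sigma2` tensors in their own optimizer parameter group with a larger learning rate than the weights, so that the variance parameters have converged by the time validation performance is monitored.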
Related papers
- Variational Classification [51.2541371924591]
We derive a variational objective to train the model, analogous to the evidence lower bound (ELBO) used to train variational auto-encoders.
Treating inputs to the softmax layer as samples of a latent variable, our abstracted perspective reveals a potential inconsistency.
We induce a chosen latent distribution, instead of the implicit assumption found in a standard softmax layer.
arXiv Detail & Related papers (2023-05-17T17:47:19Z) - Improved uncertainty quantification for neural networks with Bayesian last layer [0.0]
Uncertainty quantification is an important task in machine learning.
We present a reformulation of the log-marginal likelihood of a NN with BLL which allows for efficient training using backpropagation.
arXiv Detail & Related papers (2023-02-21T20:23:56Z) - Variational Bayesian Neural Networks via Resolution of Singularities [1.2183405753834562]
We advocate for the importance of singular learning theory (SLT) as it pertains to the theory and practice of variational inference in Bayesian neural networks (BNNs).
We lay to rest some of the confusion surrounding discrepancies between downstream predictive performance, measured via e.g. the test log predictive density, and the variational objective.
We use the SLT-corrected form for singular posterior distributions to inform the design of the variational family itself.
arXiv Detail & Related papers (2023-02-13T00:32:49Z) - Variational Neural Networks [88.24021148516319]
We propose a method for uncertainty estimation in neural networks called Variational Neural Network (VNN).
VNN generates parameters for the output distribution of a layer by transforming its inputs with learnable sub-layers.
In uncertainty quality estimation experiments, we show that VNNs achieve better uncertainty quality than Monte Carlo Dropout or Bayes By Backpropagation methods.
arXiv Detail & Related papers (2022-07-04T15:41:02Z) - Differentially private training of neural networks with Langevin dynamics for calibrated predictive uncertainty [58.730520380312676]
We show that differentially private stochastic gradient descent (DP-SGD) can yield poorly calibrated, overconfident deep learning models.
This represents a serious issue for safety-critical applications, e.g. in medical diagnosis.
arXiv Detail & Related papers (2021-07-09T08:14:45Z) - Sampling-free Variational Inference for Neural Networks with Multiplicative Activation Noise [51.080620762639434]
We propose a more efficient parameterization of the posterior approximation for sampling-free variational inference.
Our approach yields competitive results for standard regression problems and scales well to large-scale image classification tasks.
arXiv Detail & Related papers (2021-03-15T16:16:18Z) - Variational Laplace for Bayesian neural networks [25.055754094939527]
Variational Laplace exploits a local approximation of the curvature of the likelihood to estimate the ELBO without the need for sampling the neural-network weights.
We show that early-stopping can be avoided by increasing the learning rate for the variance parameters.
arXiv Detail & Related papers (2021-02-27T14:06:29Z) - Improving predictions of Bayesian neural nets via local linearization [79.21517734364093]
We argue that the Gauss-Newton approximation should be understood as a local linearization of the underlying Bayesian neural network (BNN).
Because we use this linearized model for posterior inference, we should also predict using this modified model instead of the original one.
We refer to this modified predictive as "GLM predictive" and show that it effectively resolves common underfitting problems of the Laplace approximation; a minimal sketch of this idea follows the list below.
arXiv Detail & Related papers (2020-08-19T12:35:55Z) - Variational Variance: Simple, Reliable, Calibrated Heteroscedastic Noise Variance Parameterization [3.553493344868413]
We propose critiques to test predictive mean and variance calibration and the predictive distribution's ability to generate sensible data.
We find that our solution, treating heteroscedastic variance variationally, sufficiently regularizes the variance to pass these posterior predictive checks (PPCs).
arXiv Detail & Related papers (2020-06-08T19:58:35Z) - Neural Control Variates [71.42768823631918]
We show that a set of neural networks can address the challenge of finding a good approximation of the integrand.
We derive a theoretically optimal, variance-minimizing loss function, and propose an alternative, composite loss for stable online training in practice.
Specifically, we show that the learned light-field approximation is of sufficient quality for high-order bounces, allowing us to omit the error correction and thereby dramatically reduce the noise at the cost of negligible visible bias.
arXiv Detail & Related papers (2020-06-02T11:17:55Z)
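For the "Improving predictions of Bayesian neural nets via local linearization" entry above, the following is a minimal sketch of the GLM-predictive idea for a scalar-output regression network, assuming a diagonal Gaussian posterior over the weights (for example from a diagonal Laplace approximation); the function name, the flattened `posterior_var` vector and the additive `noise_var` are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch: linearize the network around its trained (MAP) weights and
# push a diagonal Gaussian weight posterior through that linear model to get
# a Gaussian predictive distribution.
import torch

def glm_predictive(model, x, posterior_var, noise_var=1.0):
    # Forward pass at the MAP weights gives the predictive mean
    # (scalar-output regression with a single input is assumed).
    out = model(x).sum()
    params = list(model.parameters())
    grads = torch.autograd.grad(out, params)
    # Jacobian of the output with respect to the flattened weights.
    j = torch.cat([g.reshape(-1) for g in grads])
    mean = out.detach()
    # Linearized predictive variance: J diag(posterior_var) J^T plus observation noise.
    var = (j ** 2 * posterior_var).sum() + noise_var
    return mean, var
```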
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.