A Framework for Variational Inference of Lightweight Bayesian Neural
Networks with Heteroscedastic Uncertainties
- URL: http://arxiv.org/abs/2402.14532v1
- Date: Thu, 22 Feb 2024 13:24:43 GMT
- Title: A Framework for Variational Inference of Lightweight Bayesian Neural
Networks with Heteroscedastic Uncertainties
- Authors: David J. Schodt, Ryan Brown, Michael Merritt, Samuel Park, Delsin
Menolascino, Mark A. Peot
- Abstract summary: We show that both the heteroscedastic aleatoric and epistemic variance can be embedded into the variances of learned BNN parameters.
We introduce a relatively simple framework for sampling-free variational inference suitable for lightweight BNNs.
- Score: 0.31457219084519006
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Obtaining heteroscedastic predictive uncertainties from a Bayesian Neural
Network (BNN) is vital to many applications. Often, heteroscedastic aleatoric
uncertainties are learned as outputs of the BNN in addition to the predictive
means; however, doing so may necessitate adding more learnable parameters to the
network. In this work, we demonstrate that both the heteroscedastic aleatoric
and epistemic variance can be embedded into the variances of learned BNN
parameters, improving predictive performance for lightweight networks. By
complementing this approach with moment propagation for inference, we
introduce a relatively simple framework for sampling-free variational inference
suitable for lightweight BNNs.
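As a concrete illustration of the abstract's two ingredients, below is a minimal numpy sketch of sampling-free moment propagation through a small network whose weights carry learned per-parameter variances. This is not the authors' implementation: the mean-field Gaussian posterior, the toy shapes, and all numbers are assumptions, and the ReLU moments are the standard closed-form rectified-Gaussian formulas.

```python
# Minimal sketch (not the authors' code): sampling-free moment propagation
# through Bayesian linear layers under a mean-field Gaussian posterior.
import numpy as np
from scipy.stats import norm

def linear_moments(x_mean, x_var, w_mean, w_var, b_mean, b_var):
    """Mean/variance of y = W x + b for independent Gaussian W, b and input x."""
    y_mean = w_mean @ x_mean + b_mean
    # Var[sum_j W_ij x_j] = sum_j S_ij (mu_j^2 + v_j) + M_ij^2 v_j
    y_var = w_var @ (x_mean**2 + x_var) + (w_mean**2) @ x_var + b_var
    return y_mean, y_var

def relu_moments(m, v):
    """Closed-form mean/variance of relu(z) for z ~ N(m, v), elementwise."""
    s = np.sqrt(np.maximum(v, 1e-12))
    a = m / s
    mean = m * norm.cdf(a) + s * norm.pdf(a)
    second = (m**2 + v) * norm.cdf(a) + m * s * norm.pdf(a)
    return mean, np.maximum(second - mean**2, 0.0)

rng = np.random.default_rng(0)
x = rng.normal(size=4)  # deterministic input, so its variance is zero
h_mean, h_var = linear_moments(
    x, np.zeros(4),
    rng.normal(size=(8, 4)) * 0.3, np.full((8, 4), 0.05),  # learned weight means/variances
    np.zeros(8), np.full(8, 0.01))
h_mean, h_var = relu_moments(h_mean, h_var)
y_mean, y_var = linear_moments(
    h_mean, h_var,
    rng.normal(size=(1, 8)) * 0.3, np.full((1, 8), 0.05),
    np.zeros(1), np.full(1, 0.01))
print("predictive mean:", y_mean, "predictive variance:", y_var)
```

The printed predictive variance combines the weight variances with the variance propagated from earlier layers; the paper's point is that heteroscedastic aleatoric variance can be absorbed into these same parameter variances rather than predicted by an extra output head.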
Related papers
- Uncertainty in Graph Neural Networks: A Survey [50.63474656037679]
Graph Neural Networks (GNNs) have been extensively used in various real-world applications.
However, the predictive uncertainty of GNNs stemming from diverse sources can lead to unstable and erroneous predictions.
This survey aims to provide a comprehensive overview of GNNs from the perspective of uncertainty.
arXiv Detail & Related papers (2024-03-11T21:54:52Z)
- A Compact Representation for Bayesian Neural Networks By Removing Permutation Symmetry [22.229664343428055]
We show that the role of permutations can be meaningfully quantified by a number-of-transpositions metric.
We then show that the recently proposed rebasin method allows us to summarize HMC samples into a compact representation.
We show that this compact representation allows us to compare trained BNNs directly in weight space across sampling methods and variational inference.
arXiv Detail & Related papers (2023-12-31T23:57:05Z)
- Variational Inference for Bayesian Neural Networks under Model and Parameter Uncertainty [12.211659310564425]
We apply the concept of model uncertainty as a framework for structural learning in BNNs.
We suggest an adaptation of a scalable variational inference approach with reparametrization of marginal inclusion probabilities.
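The summary above is terse, so here is a small illustrative sketch of what marginal inclusion probabilities parameterize in this setting, assuming a Bernoulli-Gaussian ("spike-and-slab") variational posterior over each weight; the particular form and all numbers are assumptions, not the paper's code.

```python
# Illustrative sketch (not the paper's code): a spike-and-slab variational
# posterior over one weight matrix, with gamma the marginal inclusion probs.
import numpy as np

rng = np.random.default_rng(1)
mu    = rng.normal(size=(8, 4)) * 0.3   # slab means
sigma = np.full((8, 4), 0.1)            # slab standard deviations
gamma = np.full((8, 4), 0.7)            # marginal inclusion probabilities

# Moments of w ~ gamma * N(mu, sigma^2) + (1 - gamma) * delta_0,
# usable directly in a sampling-free (moment-based) forward pass.
w_mean = gamma * mu
w_var  = gamma * (sigma**2 + mu**2) - w_mean**2
print("mean weight:", w_mean.mean(), "mean weight variance:", w_var.mean())
```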
arXiv Detail & Related papers (2023-05-01T16:38:17Z)
- Variational Neural Networks [88.24021148516319]
We propose a method for uncertainty estimation in neural networks called the Variational Neural Network (VNN).
A VNN generates the parameters of a layer's output distribution by transforming its inputs with learnable sub-layers.
In uncertainty quality estimation experiments, we show that VNNs achieve better uncertainty quality than Monte Carlo Dropout or Bayes By Backpropagation methods.
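A rough sketch of the layer construction described above, assuming Gaussian output distributions whose mean and log-standard-deviation come from two linear sub-layers; the shapes, noise model, and initialisation are illustrative guesses, not the VNN paper's implementation.

```python
# Sketch of a variational layer: two learnable sub-layers map the input to
# the mean and log-std of a Gaussian over the layer's output, then sample.
import numpy as np

class VariationalLayer:
    def __init__(self, d_in, d_out, rng):
        self.w_mu  = rng.normal(size=(d_out, d_in)) * 0.1  # mean sub-layer
        self.w_rho = rng.normal(size=(d_out, d_in)) * 0.1  # log-std sub-layer
        self.rng = rng

    def __call__(self, x):
        mu  = self.w_mu @ x
        std = np.exp(self.w_rho @ x)  # input-dependent spread
        return mu + std * self.rng.standard_normal(mu.shape)

rng = np.random.default_rng(2)
layer = VariationalLayer(4, 3, rng)
samples = np.stack([layer(np.ones(4)) for _ in range(1000)])
print("per-unit output std:", samples.std(axis=0))  # uncertainty from the layer itself
```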
arXiv Detail & Related papers (2022-07-04T15:41:02Z)
- Tackling covariate shift with node-based Bayesian neural networks [26.64657196802115]
Node-based BNNs induce uncertainty by multiplying each hidden node with latent random variables, while learning a point-estimate of the weights.
In this paper, we interpret these latent noise variables as implicit representations of simple and domain-agnostic data perturbations during training.
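A minimal illustration of the node-based construction: point-estimate weights, with each hidden node multiplied by its own latent random variable. The Gaussian noise model here is an assumption for the sketch, not necessarily the paper's exact choice.

```python
# Node-based BNN sketch: deterministic weights, multiplicative latent node noise.
import numpy as np

rng = np.random.default_rng(3)
W1 = rng.normal(size=(8, 4)) * 0.3  # point-estimate weights, layer 1
W2 = rng.normal(size=(1, 8)) * 0.3  # point-estimate weights, layer 2

def forward(x):
    z = rng.normal(1.0, 0.2, size=8)   # latent multiplicative node variables
    h = np.maximum(W1 @ x, 0.0) * z    # each hidden node scaled by its noise
    return W2 @ h

preds = np.array([forward(np.ones(4)) for _ in range(1000)])
print("predictive mean/std:", preds.mean(), preds.std())  # spread comes from node noise only
```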
arXiv Detail & Related papers (2022-06-06T08:56:19Z)
- Nonlocal optimization of binary neural networks [0.8379286663107844]
We explore training Binary Neural Networks (BNNs) as a discrete variable inference problem over a factor graph.
We propose algorithms to overcome the intractability of their current formulation.
Compared to traditional gradient methods for BNNs, our results indicate that both belief propagation (BP) and survey propagation (SP) find better configurations of the parameters in the BNN.
arXiv Detail & Related papers (2022-04-05T02:14:53Z)
- Comparative Analysis of Interval Reachability for Robust Implicit and Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs).
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs.
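For context, here is a small sketch of plain interval bound propagation through one affine layer and a ReLU, the kind of feedforward baseline the entry compares against; the implicit-equation machinery of INNs is beyond this snippet, and all shapes and numbers are illustrative.

```python
# Interval bound propagation sketch: exact interval image of an affine map,
# followed by a ReLU (which is monotone, so bounds pass through directly).
import numpy as np

def affine_interval(lo, hi, W, b):
    """Exact interval image of x -> W x + b for x in [lo, hi]."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

rng = np.random.default_rng(4)
W, b = rng.normal(size=(3, 4)), rng.normal(size=3)
lo, hi = np.full(4, -0.1), np.full(4, 0.1)          # an L-infinity input ball
lo, hi = affine_interval(lo, hi, W, b)
lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)   # ReLU
print("output bounds:", lo, hi)
```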
arXiv Detail & Related papers (2022-04-01T03:31:27Z)
- NUQ: Nonparametric Uncertainty Quantification for Deterministic Neural Networks [151.03112356092575]
We show a principled way to measure the uncertainty of a classifier's predictions based on the Nadaraya-Watson nonparametric estimate of the conditional label distribution.
We demonstrate the strong performance of the method in uncertainty estimation tasks on a variety of real-world image datasets.
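A compact sketch of the Nadaraya-Watson idea underlying NUQ: estimate p(y | x) by kernel-weighting the labels of nearby training points and read uncertainty off the resulting distribution. The Gaussian kernel, bandwidth, and toy data are illustrative assumptions, not the paper's procedure.

```python
# Nadaraya-Watson estimate of the conditional label distribution p(y | x).
import numpy as np

def nw_class_probs(x, X_train, y_train, n_classes, h=0.5):
    w = np.exp(-np.sum((X_train - x) ** 2, axis=1) / (2 * h**2))  # kernel weights
    probs = np.bincount(y_train, weights=w, minlength=n_classes)
    return probs / probs.sum()

rng = np.random.default_rng(5)
X = rng.normal(size=(200, 2))
y = (X[:, 0] > 0).astype(int)                 # toy two-class labels
p = nw_class_probs(np.array([0.1, 0.0]), X, y, 2)
entropy = -(p * np.log(p + 1e-12)).sum()      # one simple uncertainty readout
print("p(y|x):", p, "entropy:", entropy)
```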
arXiv Detail & Related papers (2022-02-07T12:30:45Z)
- Sampling-free Variational Inference for Neural Networks with Multiplicative Activation Noise [51.080620762639434]
We propose a more efficient parameterization of the posterior approximation for sampling-free variational inference.
Our approach yields competitive results for standard regression problems and scales well to large-scale image classification tasks.
arXiv Detail & Related papers (2021-03-15T16:16:18Z)
- Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)