Variational Inference on the Final-Layer Output of Neural Networks
- URL: http://arxiv.org/abs/2302.02420v4
- Date: Sun, 17 Dec 2023 20:16:31 GMT
- Title: Variational Inference on the Final-Layer Output of Neural Networks
- Authors: Yadi Wei, Roni Khardon
- Abstract summary: This paper proposes to combine the advantages of both approaches by performing Variational Inference in the Final layer Output space (VIFO).
We use neural networks to learn the mean and the variance of the probabilistic output.
Experiments show that VIFO and ensembles of VIFO provide a good tradeoff in terms of run time and uncertainty quantification.
- Score: 3.716663957642983
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Traditional neural networks are simple to train but they typically produce
overconfident predictions. In contrast, Bayesian neural networks provide good
uncertainty quantification but optimizing them is time consuming due to the
large parameter space. This paper proposes to combine the advantages of both
approaches by performing Variational Inference in the Final layer Output space
(VIFO), because the output space is much smaller than the parameter space. We
use neural networks to learn the mean and the variance of the probabilistic
output. Like standard, non-Bayesian models, VIFO enjoys simple training and one
can use Rademacher complexity to provide risk bounds for the model. On the
other hand, using the Bayesian formulation we incorporate collapsed variational
inference with VIFO which significantly improves the performance in practice.
Experiments show that VIFO and ensembles of VIFO provide a good tradeoff in
terms of run time and uncertainty quantification, especially for
out-of-distribution data.
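A minimal sketch of the idea described above, assuming a PyTorch-style classifier: a deterministic backbone feeds two heads that predict the mean and log-variance of a Gaussian over the logits, and training combines a Monte Carlo expected cross-entropy with a KL term to a prior over the output. The names (VIFOHead, vifo_loss), the isotropic Gaussian prior, and the KL weight are illustrative assumptions, not the paper's exact formulation.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class VIFOHead(nn.Module):
    """Sketch: a deterministic backbone plus two linear heads that predict the
    mean and log-variance of a Gaussian over the final-layer output (logits)."""
    def __init__(self, backbone: nn.Module, feat_dim: int, num_classes: int):
        super().__init__()
        self.backbone = backbone
        self.mean_head = nn.Linear(feat_dim, num_classes)
        self.log_var_head = nn.Linear(feat_dim, num_classes)

    def forward(self, x):
        h = self.backbone(x)
        return self.mean_head(h), self.log_var_head(h)

def vifo_loss(mu, log_var, y, prior_var=1.0, num_samples=8, kl_weight=1e-3):
    """Monte Carlo expected cross-entropy over sampled logits, plus a KL term
    between the predicted Gaussian output and an isotropic Gaussian prior."""
    std = torch.exp(0.5 * log_var)
    nll = 0.0
    for _ in range(num_samples):
        logits = mu + std * torch.randn_like(std)  # reparameterization trick
        nll = nll + F.cross_entropy(logits, y)
    nll = nll / num_samples
    # KL( N(mu, sigma^2) || N(0, prior_var) ), summed over output dimensions
    kl = 0.5 * ((mu ** 2 + std ** 2) / prior_var - 1.0
                - log_var + math.log(prior_var)).sum(dim=1).mean()
    return nll + kl_weight * kl
```

Because only the output-space distribution is learned, the extra cost over a standard network is two small linear heads and a few sampled cross-entropies per batch, which is consistent with the abstract's point that the output space is much smaller than the parameter space.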
Related papers
- Favour: FAst Variance Operator for Uncertainty Rating [0.034530027457862]
Bayesian Neural Networks (BNN) have emerged as a crucial approach for interpreting ML predictions.
By sampling from the posterior distribution, data scientists may estimate the uncertainty of an inference.
Previous work proposed propagating the first and second moments of the posterior directly through the network.
This method is even slower than sampling, so the propagated variance needs to be approximated.
Our contribution is a more principled variance propagation framework.
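For context, the first- and second-moment propagation that this line of work builds on can be written in closed form for a single linear layer; the rule below is the standard independence-based formula, included only as background, and does not reproduce Favour's more efficient operator.

```python
import numpy as np

def propagate_linear_moments(mu_x, var_x, mu_w, var_w, mu_b=0.0, var_b=0.0):
    """Propagate mean and variance of y = W x + b through a linear layer,
    assuming inputs and posterior weights are independent.
    mu_w/var_w have shape (out, in); mu_x/var_x have shape (in,)."""
    mu_y = mu_w @ mu_x + mu_b
    var_y = var_w @ (mu_x ** 2 + var_x) + (mu_w ** 2) @ var_x + var_b
    return mu_y, var_y
```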
arXiv Detail & Related papers (2023-11-21T22:53:20Z)
- Deep Neural Networks Tend To Extrapolate Predictably [51.303814412294514]
Conventional wisdom suggests that neural network predictions tend to be unpredictable and overconfident when faced with out-of-distribution (OOD) inputs.
We observe that neural network predictions often tend towards a constant value as input data becomes increasingly OOD.
We show how one can leverage our insights in practice to enable risk-sensitive decision-making in the presence of OOD inputs.
arXiv Detail & Related papers (2023-10-02T03:25:32Z)
- Evidence Networks: simple losses for fast, amortized, neural Bayesian model comparison [0.0]
Evidence Networks can enable Bayesian model comparison when state-of-the-art methods fail.
We introduce the leaky parity-odd power transform, leading to the novel "l-POP-Exponential" loss function.
We show that Evidence Networks are explicitly independent of dimensionality of the parameter space and scale mildly with the complexity of the posterior probability density function.
arXiv Detail & Related papers (2023-05-18T18:14:53Z)
- Sparsifying Bayesian neural networks with latent binary variables and normalizing flows [10.865434331546126]
We will consider two extensions to the latent binary Bayesian neural networks (LBBNN) method.
Firstly, by using the local reparametrization trick (LRT) to sample the hidden units directly, we get a more computationally efficient algorithm.
More importantly, by using normalizing flows on the variational posterior distribution of the LBBNN parameters, the network learns a more flexible variational posterior distribution than the mean field Gaussian.
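The local reparametrization trick mentioned in this entry is a standard device for Gaussian mean-field layers; the snippet below is a generic illustration of it (sampling pre-activations instead of a weight matrix) and not the LBBNN implementation, which additionally carries latent binary inclusion variables and normalizing flows.

```python
import torch

def lrt_linear(x, w_mu, w_log_var):
    """Local reparametrization trick for a mean-field Gaussian linear layer:
    sample the pre-activations directly from their induced Gaussian instead
    of sampling a full weight matrix (cheaper, lower-variance gradients).
    x: (batch, in); w_mu/w_log_var: (in, out)."""
    act_mu = x @ w_mu
    act_var = (x ** 2) @ torch.exp(w_log_var)
    eps = torch.randn_like(act_mu)
    return act_mu + act_var.sqrt() * eps
```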
arXiv Detail & Related papers (2023-05-05T09:40:28Z)
- GFlowOut: Dropout with Generative Flow Networks [76.59535235717631]
Monte Carlo Dropout has been widely used as a relatively cheap way for approximate Inference.
Recent works show that the dropout mask can be viewed as a latent variable, which can be inferred with variational inference.
GFlowOut leverages the recently proposed probabilistic framework of Generative Flow Networks (GFlowNets) to learn the posterior distribution over dropout masks.
arXiv Detail & Related papers (2022-10-24T03:00:01Z)
- Look beyond labels: Incorporating functional summary information in Bayesian neural networks [11.874130244353253]
We present a simple approach to incorporate summary information about the predicted probability.
The available summary information is incorporated as augmented data and modeled with a Dirichlet process.
We show how the method can inform the model about task difficulty or class imbalance.
arXiv Detail & Related papers (2022-07-04T07:06:45Z)
- Transformers Can Do Bayesian Inference [56.99390658880008]
We present Prior-Data Fitted Networks (PFNs)
PFNs leverage in-context learning in large-scale machine learning techniques to approximate a large set of posteriors.
We demonstrate that PFNs can near-perfectly mimic Gaussian processes and also enable efficient Bayesian inference for intractable problems.
arXiv Detail & Related papers (2021-12-20T13:07:39Z)
- Kalman Bayesian Neural Networks for Closed-form Online Learning [5.220940151628734]
We propose a novel approach for BNN learning via closed-form Bayesian inference.
The calculation of the predictive distribution of the output and the update of the weight distribution are treated as Bayesian filtering and smoothing problems.
This allows closed-form expressions for training the network's parameters in a sequential/online fashion without gradient descent.
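As a rough illustration of treating the weight update as Bayesian filtering, a generic linear-Gaussian Kalman measurement update is sketched below; in the paper's setting the state plays the role of the weight posterior and H would come from a linearization of the network, so this is background, not the paper's algorithm.

```python
import numpy as np

def kalman_update(mu, Sigma, H, y, R):
    """Generic Kalman measurement update for a linear-Gaussian model
    y = H w + noise, noise ~ N(0, R), prior w ~ N(mu, Sigma)."""
    S = H @ Sigma @ H.T + R              # innovation covariance
    K = Sigma @ H.T @ np.linalg.inv(S)   # Kalman gain
    mu_post = mu + K @ (y - H @ mu)
    Sigma_post = (np.eye(len(mu)) - K @ H) @ Sigma
    return mu_post, Sigma_post
```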
arXiv Detail & Related papers (2021-10-03T07:29:57Z)
- Towards an Understanding of Benign Overfitting in Neural Networks [104.2956323934544]
Modern machine learning models often employ a huge number of parameters and are typically optimized to have zero training loss.
We examine how these benign overfitting phenomena occur in a two-layer neural network setting.
We show that it is possible for the two-layer ReLU network interpolator to achieve a near minimax-optimal learning rate.
arXiv Detail & Related papers (2021-06-06T19:08:53Z)
- Sampling-free Variational Inference for Neural Networks with Multiplicative Activation Noise [51.080620762639434]
We propose a more efficient parameterization of the posterior approximation for sampling-free variational inference.
Our approach yields competitive results for standard regression problems and scales well to large-scale image classification tasks.
arXiv Detail & Related papers (2021-03-15T16:16:18Z)
- Improving Uncertainty Calibration via Prior Augmented Data [56.88185136509654]
Neural networks have proven successful at learning from complex data distributions by acting as universal function approximators.
They are often overconfident in their predictions, which leads to inaccurate and miscalibrated probabilistic predictions.
We propose a solution by seeking out regions of feature space where the model is unjustifiably overconfident, and conditionally raising the entropy of those predictions towards that of the prior distribution of the labels.
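One plausible reading of "conditionally raising the entropy of those predictions towards that of the prior distribution of the labels" is a masked KL penalty toward the label prior, sketched below; the mask that flags overconfident regions is left abstract because the selection mechanism is specific to the paper.

```python
import torch
import torch.nn.functional as F

def entropy_raising_penalty(logits, label_prior, overconfident_mask):
    """Penalize divergence from the label prior for inputs flagged as lying in
    unjustifiably overconfident regions, pulling predictive entropy upward.
    logits: (batch, classes); label_prior: (classes,); mask: (batch,) bool."""
    log_probs = F.log_softmax(logits, dim=-1)
    kl_to_prior = (label_prior * (label_prior.log() - log_probs)).sum(dim=-1)
    return (kl_to_prior * overconfident_mask.float()).mean()
```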
arXiv Detail & Related papers (2021-02-22T07:02:37Z)