Variational Bayesian Neural Networks via Resolution of Singularities
- URL: http://arxiv.org/abs/2302.06035v1
- Date: Mon, 13 Feb 2023 00:32:49 GMT
- Title: Variational Bayesian Neural Networks via Resolution of Singularities
- Authors: Susan Wei, Edmund Lau
- Abstract summary: We advocate for the importance of singular learning theory (SLT) as it pertains to the theory and practice of variational inference in Bayesian neural networks (BNNs).
We lay to rest some of the confusion surrounding discrepancies between downstream predictive performance, measured via e.g. the test log predictive density, and the variational objective.
We use the SLT-corrected asymptotic form for singular posterior distributions to inform the design of the variational family itself.
- Score: 1.2183405753834562
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: In this work, we advocate for the importance of singular learning theory
(SLT) as it pertains to the theory and practice of variational inference in
Bayesian neural networks (BNNs). To begin, using SLT, we lay to rest some of
the confusion surrounding discrepancies between downstream predictive
performance measured via e.g., the test log predictive density, and the
variational objective. Next, we use the SLT-corrected asymptotic form for
singular posterior distributions to inform the design of the variational family
itself. Specifically, we build upon the idealized variational family introduced
in \citet{bhattacharya_evidence_2020} which is theoretically appealing but
practically intractable. Our proposal takes shape as a normalizing flow where
the base distribution is a carefully-initialized generalized gamma. We conduct
experiments comparing this to the canonical Gaussian base distribution and show
improvements in terms of variational free energy and variational generalization
error.
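A minimal sketch of the kind of variational family the abstract describes: a normalizing flow whose base distribution is a generalized gamma rather than a Gaussian. The single affine layer, shape/scale values, and initialization below are illustrative assumptions, not the authors' implementation.

    # Sketch: normalizing flow with a generalized gamma base distribution
    # (illustrative only; layers and parameter values are assumptions).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Base distribution: generalized gamma with assumed shape/scale values.
    base = stats.gengamma(a=2.0, c=1.5, scale=0.1)

    def affine_flow(z, log_scale, shift):
        """One elementwise affine flow layer: x = exp(log_scale) * z + shift."""
        x = np.exp(log_scale) * z + shift
        log_det = np.sum(log_scale)  # log |det dx/dz| for an elementwise affine map
        return x, log_det

    d = 4                                          # toy parameter dimension
    z = base.rvs(size=d, random_state=rng)
    log_scale, shift = np.zeros(d), np.zeros(d)    # identity-like initialization
    x, log_det = affine_flow(z, log_scale, shift)

    # Variational log density of the pushed-forward sample (change of variables).
    log_q = base.logpdf(z).sum() - log_det
    print(x, log_q)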
Related papers
- A Non-negative VAE: the Generalized Gamma Belief Network [49.970917207211556]
The gamma belief network (GBN) has demonstrated its potential for uncovering multi-layer interpretable latent representations in text data.
We introduce the generalized gamma belief network (Generalized GBN) in this paper, which extends the original linear generative model to a more expressive non-linear generative model.
We also propose an upward-downward Weibull inference network to approximate the posterior distribution of the latent variables.
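As a hedged illustration of the Weibull inference-network idea in that summary, the snippet below draws reparameterized Weibull samples via the inverse CDF; the shape and scale values are placeholders, not the paper's settings.

    # Sketch: reparameterized (inverse-CDF) sampling from a Weibull variational
    # posterior, the kind of draw a Weibull inference network relies on.
    import numpy as np

    rng = np.random.default_rng(0)

    def sample_weibull(shape, scale, rng):
        """x = scale * (-log(1 - u))**(1/shape), differentiable in shape and scale."""
        u = rng.uniform(size=np.shape(scale))
        return scale * (-np.log1p(-u)) ** (1.0 / shape)

    # Placeholder shape/scale values for a 5-dimensional latent layer.
    latents = sample_weibull(shape=2.0, scale=np.ones(5), rng=rng)
    print(latents)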
arXiv Detail & Related papers (2024-08-06T18:18:37Z) - Modify Training Directions in Function Space to Reduce Generalization Error [9.821059922409091]
We propose a modified natural gradient descent method in the neural network function space based on the eigendecompositions of neural tangent kernel and Fisher information matrix.
We explicitly derive the generalization error of the learned neural network function using theoretical methods from eigendecomposition and statistics theory.
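A rough sketch, under stated assumptions, of a natural-gradient-style update built from the eigendecomposition of a Fisher-information-like matrix; the toy Jacobian, damping value, and learning rate are placeholders rather than the paper's method.

    # Sketch: precondition a gradient with the eigendecomposition of an empirical
    # Fisher-style matrix (toy sizes; damping and learning rate are assumptions).
    import numpy as np

    rng = np.random.default_rng(0)
    d = 6
    J = rng.normal(size=(20, d))          # toy per-example Jacobian of model outputs
    fisher = J.T @ J / J.shape[0]         # empirical Fisher-style matrix

    grad = rng.normal(size=d)             # toy Euclidean gradient
    eigvals, eigvecs = np.linalg.eigh(fisher)

    damping = 1e-3
    # Scale each eigendirection of the gradient by 1 / (eigenvalue + damping).
    nat_grad = eigvecs @ ((eigvecs.T @ grad) / (eigvals + damping))

    theta = np.zeros(d)
    theta -= 0.1 * nat_grad               # one modified-direction descent step
    print(theta)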
arXiv Detail & Related papers (2023-07-25T07:11:30Z) - Instance-Dependent Generalization Bounds via Optimal Transport [51.71650746285469]
Existing generalization bounds fail to explain crucial factors that drive the generalization of modern neural networks.
We derive instance-dependent generalization bounds that depend on the local Lipschitz regularity of the learned prediction function in the data space.
We empirically analyze our generalization bounds for neural networks, showing that the bound values are meaningful and capture the effect of popular regularization methods during training.
arXiv Detail & Related papers (2022-11-02T16:39:42Z) - Optimization Variance: Exploring Generalization Properties of DNNs [83.78477167211315]
The test error of a deep neural network (DNN) often demonstrates double descent.
We propose a novel metric, optimization variance (OV), to measure the diversity of model updates.
arXiv Detail & Related papers (2021-06-03T09:34:17Z) - Variational Laplace for Bayesian neural networks [25.055754094939527]
Variational Laplace exploits a local approximation of the likelihood to estimate the ELBO without the need for sampling the neural-network weights.
We show that early-stopping can be avoided by increasing the learning rate for the variance parameters.
arXiv Detail & Related papers (2021-02-27T14:06:29Z) - Variational Laplace for Bayesian neural networks [33.46810568687292]
We develop variational Laplace for Bayesian neural networks (BNNs).
We exploit a local approximation of the curvature of the likelihood to estimate the ELBO without the need for sampling the neural-network weights.
We show that early-stopping can be avoided by increasing the learning rate for the variance parameters.
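A toy sketch of the variational-Laplace idea in the two entries above: the expected log-likelihood term of the ELBO is approximated with a second-order (curvature) expansion around the variational mean rather than Monte Carlo weight samples. The linear-Gaussian model and diagonal Gaussian q below are assumptions chosen so the expansion is exact.

    # Sketch: curvature-based ELBO estimate for a toy linear-Gaussian model
    # (assumed data and variational parameters; no weight sampling needed).
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(50, 3))
    y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=50)
    noise_var, prior_var = 0.1 ** 2, 1.0

    def log_lik(w):
        resid = y - X @ w
        return -0.5 * np.sum(resid ** 2) / noise_var \
               - 0.5 * len(y) * np.log(2 * np.pi * noise_var)

    mu = np.zeros(3)                      # variational mean
    sigma2 = 0.05 * np.ones(3)            # variational (diagonal) variances

    # E_q[log p(D|w)] ~= log p(D|mu) + 0.5 * tr(H * Sigma), H = Hessian of log-lik.
    hess_diag = -np.sum(X ** 2, axis=0) / noise_var
    expected_loglik = log_lik(mu) + 0.5 * np.sum(hess_diag * sigma2)

    # KL(q || N(0, prior_var * I)) for a diagonal Gaussian q.
    kl = 0.5 * np.sum(sigma2 / prior_var + mu ** 2 / prior_var
                      - 1.0 - np.log(sigma2 / prior_var))
    print(expected_loglik - kl)           # the curvature-based ELBO estimate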
arXiv Detail & Related papers (2020-11-20T15:16:18Z) - Statistical Guarantees for Transformation Based Models with Applications to Implicit Variational Inference [8.333191406788423]
We provide theoretical justification for the use of non-linear latent variable models (NL-LVMs) in non-parametric inference.
We use the NL-LVMs to construct an implicit family of variational distributions, deemed GP-IVI.
To the best of our knowledge, this is the first work on providing theoretical guarantees for implicit variational inference.
arXiv Detail & Related papers (2020-10-23T21:06:29Z) - Improving predictions of Bayesian neural nets via local linearization [79.21517734364093]
We argue that the Gauss-Newton approximation should be understood as a local linearization of the underlying Bayesian neural network (BNN).
Because we use this linearized model for posterior inference, we should also predict using this modified model instead of the original one.
We refer to this modified predictive as "GLM predictive" and show that it effectively resolves common underfitting problems of the Laplace approximation.
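A minimal sketch of the GLM-predictive idea: predict with the network linearized around the MAP weights, so the Gaussian weight posterior induces a Gaussian over outputs. The two-parameter toy network and Laplace covariance below are illustrative assumptions.

    # Sketch: linearized ("GLM") prediction for a tiny scalar network
    # (assumed MAP weights and Laplace covariance).
    import numpy as np

    def f(x, w):
        """Toy scalar network: tanh feature times a linear readout."""
        return np.tanh(w[0] * x) * w[1]

    def jacobian(x, w, eps=1e-6):
        """Finite-difference Jacobian of f with respect to the weights."""
        return np.array([(f(x, w + eps * np.eye(len(w))[i]) - f(x, w)) / eps
                         for i in range(len(w))])

    w_map = np.array([0.8, 1.5])          # assumed MAP estimate
    sigma = np.diag([0.05, 0.02])         # assumed Laplace posterior covariance

    x = 0.7
    jac = jacobian(x, w_map)
    pred_mean = f(x, w_map)               # linearized predictive mean
    pred_var = jac @ sigma @ jac          # J Sigma J^T for a scalar output
    print(pred_mean, pred_var)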
arXiv Detail & Related papers (2020-08-19T12:35:55Z) - Bayesian Deep Learning and a Probabilistic Perspective of Generalization [56.69671152009899]
We show that deep ensembles provide an effective mechanism for approximate Bayesian marginalization.
We also propose a related approach that further improves the predictive distribution by marginalizing within basins of attraction.
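As a small illustration of the ensemble-as-marginalization idea above, the sketch below averages ensemble members' predictive distributions rather than their parameters; the random logits stand in for independently trained networks.

    # Sketch: deep-ensemble predictive as an approximate Bayesian average
    # (random logits are stand-ins for trained networks; sizes are assumptions).
    import numpy as np

    rng = np.random.default_rng(0)

    def softmax(z):
        z = z - z.max(axis=-1, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=-1, keepdims=True)

    n_members, n_classes = 5, 3
    logits = rng.normal(size=(n_members, n_classes))  # one logit vector per member

    member_probs = softmax(logits)
    bayes_predictive = member_probs.mean(axis=0)      # marginalize over members
    print(bayes_predictive)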
arXiv Detail & Related papers (2020-02-20T15:13:27Z) - Decision-Making with Auto-Encoding Variational Bayes [71.44735417472043]
We show that a posterior approximation distinct from the variational distribution should be used for making decisions.
Motivated by these theoretical results, we propose learning several approximate proposals for the best model.
In addition to toy examples, we present a full-fledged case study of single-cell RNA sequencing.
arXiv Detail & Related papers (2020-02-17T19:23:36Z)