Bernstein Flows for Flexible Posteriors in Variational Bayes
- URL: http://arxiv.org/abs/2202.05650v2
- Date: Fri, 23 Feb 2024 16:04:18 GMT
- Title: Bernstein Flows for Flexible Posteriors in Variational Bayes
- Authors: Oliver Dürr, Stephan Hörling, Daniel Dold, Ivonne Kovylov, and Beate Sick
- Abstract summary: Variational inference (VI) is a technique to approximate difficult-to-compute posteriors by optimization.
This paper presents Bernstein flow variational inference (BF-VI), a robust and easy-to-use method, flexible enough to approximate complex posteriors.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Variational inference (VI) is a technique to approximate difficult-to-compute
posteriors by optimization. In contrast to MCMC, VI scales to many
observations. In the case of complex posteriors, however, state-of-the-art VI
approaches often yield unsatisfactory posterior approximations. This paper
presents Bernstein flow variational inference (BF-VI), a robust and easy-to-use
method, flexible enough to approximate complex multivariate posteriors. BF-VI
combines ideas from normalizing flows and Bernstein polynomial-based
transformation models. In benchmark experiments, we compare BF-VI solutions
with exact posteriors, MCMC solutions, and state-of-the-art VI methods
including normalizing flow based VI. We show for low-dimensional models that
BF-VI accurately approximates the true posterior; in higher-dimensional models,
BF-VI outperforms other VI methods. Further, we use BF-VI to develop a Bayesian
model for the semi-structured Melanoma challenge data, combining a CNN model
part for image data with an interpretable model part for tabular data, and
demonstrate for the first time the use of VI in semi-structured models.
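The core construction behind BF-VI, as described in the abstract, pushes a simple base distribution through a monotone Bernstein polynomial, in the spirit of a normalizing flow. Below is a minimal, illustrative Python sketch of that idea under my own assumptions: the function names, the chosen polynomial degree, and the sigmoid squashing step are illustrative choices rather than the paper's implementation, and in practice the variational parameters would be fitted by maximizing the ELBO using the change-of-variables density of the flow.

```python
# Minimal sketch of a Bernstein-polynomial flow (illustrative, not the authors' code):
# a standard-normal base variable is squashed to (0, 1) and transformed by a
# monotone Bernstein polynomial to produce samples from a flexible q(w).

import numpy as np
from scipy.special import comb, expit  # expit = logistic sigmoid
from scipy.stats import norm

def increasing_coefficients(unconstrained):
    """Map unconstrained parameters to strictly increasing Bernstein coefficients."""
    theta0 = unconstrained[0]
    deltas = np.log1p(np.exp(unconstrained[1:]))       # softplus > 0
    return np.concatenate([[theta0], theta0 + np.cumsum(deltas)])

def bernstein_flow(z, theta):
    """Bernstein polynomial of degree M = len(theta) - 1, evaluated on z in [0, 1]."""
    M = len(theta) - 1
    k = np.arange(M + 1)
    basis = comb(M, k) * np.power.outer(z, k) * np.power.outer(1.0 - z, M - k)
    return basis @ theta                               # increasing theta => monotone map

# Sampling path of the variational distribution q(w):
rng = np.random.default_rng(0)
unconstrained = rng.normal(size=10)                    # variational parameters (would be optimized)
theta = increasing_coefficients(unconstrained)
eps = norm.rvs(size=5000, random_state=0)              # base samples
w = bernstein_flow(expit(eps), theta)                  # samples from the flexible posterior approximation
```

Because the coefficients are constrained to be increasing, the transformation is monotone, so the flow's density follows from the usual change-of-variables formula; higher polynomial degrees make the approximating family more flexible.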
Related papers
- You Only Accept Samples Once: Fast, Self-Correcting Stochastic Variational Inference [0.0]
YOASOVI is an algorithm for performing fast, self-correcting intuition optimization for Variational Inference (VI) on large Bayesian hierarchical models.
To accomplish this, we take advantage of available information on the objective function used for VI at each iteration and replace regular Monte Carlo sampling with acceptance sampling.
arXiv Detail & Related papers (2024-06-05T01:28:53Z) - Amortized Variational Inference: When and Why? [17.1222896154385]
Amortized variational inference (A-VI) learns a common inference function, which maps each observation to its corresponding latent variable's approximate posterior.
We derive necessary, sufficient, and verifiable conditions on a latent variable model under which A-VI can attain the optimal solution of factorized VI (F-VI); a minimal sketch contrasting amortized and factorized VI appears after this list.
arXiv Detail & Related papers (2023-07-20T16:45:22Z) - FineMorphs: Affine-diffeomorphic sequences for regression [1.1421942894219896]
The model states are optimally "reshaped" by diffeomorphisms generated by smooth vector fields during learning.
Affine transformations and vector fields are optimized within an optimal control setting.
The model can naturally reduce (or increase) dimensionality and adapt to large datasets via suboptimal vector fields.
arXiv Detail & Related papers (2023-05-26T20:54:18Z) - Black Box Variational Inference with a Deterministic Objective: Faster,
More Accurate, and Even More Black Box [14.362625828893654]
We introduce "deterministic ADVI" (DADVI) to address issues with ADVI.
DADVI replaces the intractable MFVB objective with a fixed Monte Carlo approximation.
We show that DADVI and the underlying sample average approximation (SAA) can perform well with relatively few samples even in very high dimensions; a toy illustration of the fixed-sample idea also appears after this list.
arXiv Detail & Related papers (2023-04-11T22:45:18Z) - Variational Laplace Autoencoders [53.08170674326728]
Variational autoencoders employ an amortized inference model to approximate the posterior of latent variables.
We present a novel approach that addresses the limited posterior expressiveness of the fully-factorized Gaussian assumption.
We also present a general framework named Variational Laplace Autoencoders (VLAEs) for training deep generative models.
arXiv Detail & Related papers (2022-11-30T18:59:27Z) - Manifold Gaussian Variational Bayes on the Precision Matrix [70.44024861252554]
We propose an optimization algorithm for Variational Inference (VI) in complex models.
We develop an efficient algorithm for Gaussian variational inference, manifold Gaussian variational Bayes on the precision matrix (MGVBP), whose updates satisfy the positive-definite constraint on the variational covariance matrix.
Due to its black-box nature, MGVBP stands as a ready-to-use solution for VI in complex models.
arXiv Detail & Related papers (2022-10-26T10:12:31Z) - Amortized Variational Inference: A Systematic Review [0.0]
The core principle of Variational Inference (VI) is to convert the statistical inference problem of computing complex posterior probability densities into a tractable optimization problem.
The traditional VI algorithm is not scalable to large data sets and is unable to readily infer out-of-bounds data points.
Recent developments in the field, like black box-, and amortized-VI, have helped address these issues.
arXiv Detail & Related papers (2022-09-22T09:45:10Z) - Quasi Black-Box Variational Inference with Natural Gradients for
Bayesian Learning [84.90242084523565]
We develop an optimization algorithm suitable for Bayesian learning in complex models.
Our approach relies on natural gradient updates within a general black-box framework for efficient training with limited model-specific derivations.
arXiv Detail & Related papers (2022-05-23T18:54:27Z) - Loss function based second-order Jensen inequality and its application
to particle variational inference [112.58907653042317]
Particle variational inference (PVI) uses an ensemble of models as an empirical approximation for the posterior distribution.
PVI iteratively updates each model with a repulsion force to ensure the diversity of the optimized models.
We derive a novel generalization error bound and show that it can be reduced by enhancing the diversity of models.
arXiv Detail & Related papers (2021-06-09T12:13:51Z) - Efficient Semi-Implicit Variational Inference [65.07058307271329]
We propose an efficient and scalable method for semi-implicit variational inference (SIVI).
Our method optimizes a rigorous lower bound on SIVI's evidence.
arXiv Detail & Related papers (2021-01-15T11:39:09Z) - Meta-Learning Divergences of Variational Inference [49.164944557174294]
Variational inference (VI) plays an essential role in approximate Bayesian inference.
We propose a meta-learning algorithm to learn the divergence metric suited for the task of interest.
We demonstrate our approach outperforms standard VI on Gaussian mixture distribution approximation.
arXiv Detail & Related papers (2020-07-06T17:43:01Z)
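As referenced in the "Amortized Variational Inference: When and Why?" entry above, the following is a minimal sketch, under my own assumptions, of the distinction between factorized VI (separate variational parameters per observation) and amortized VI (one shared inference function mapping each observation to its approximate posterior). The tiny linear encoder and all names are illustrative, not taken from that paper.

```python
# Factorized vs. amortized variational inference (illustrative sketch only).

import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 3))            # 1000 observations, 3 features each

# Factorized VI: one (mu_i, log_sigma_i) pair per observation -> parameter count grows with the data.
fvi_params = np.zeros((X.shape[0], 2))

# Amortized VI: a single inference function shared by all observations.
W = rng.normal(scale=0.1, size=(3, 2))    # weights of a toy linear encoder
b = np.zeros(2)

def inference_function(x):
    """Map an observation to (mu, log_sigma) of its latent variable's approximate posterior."""
    out = x @ W + b
    return out[..., 0], out[..., 1]

mu, log_sigma = inference_function(X)     # posterior parameters for all observations at once
```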
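And, as flagged in the DADVI entry, here is a toy sketch of the sample average approximation idea: one set of Monte Carlo draws is frozen, so the ELBO estimate becomes a deterministic function that any standard optimizer can handle. The conjugate normal-mean model and all names are my own illustrative choices, not the DADVI paper's setup.

```python
# Deterministic (fixed-sample) ELBO optimization for a Gaussian mean-field family (illustrative sketch).

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
x = rng.normal(loc=2.0, scale=1.0, size=50)   # observed data
z_fixed = rng.normal(size=30)                 # base samples drawn once and then frozen

def negative_elbo(params):
    """Deterministic ELBO estimate built from the frozen base samples."""
    mu, log_sigma = params
    sigma = np.exp(log_sigma)
    theta = mu + sigma * z_fixed              # reparameterized samples of the latent mean
    log_prior = -0.5 * theta**2 - 0.5 * np.log(2 * np.pi)
    log_lik = np.array([np.sum(-0.5 * (x - t) ** 2 - 0.5 * np.log(2 * np.pi)) for t in theta])
    entropy = log_sigma + 0.5 * np.log(2 * np.pi * np.e)   # entropy of the Gaussian q
    return -(np.mean(log_lik + log_prior) + entropy)

# Because the objective is deterministic, an off-the-shelf optimizer can be applied directly.
result = minimize(negative_elbo, x0=np.array([0.0, 0.0]), method="L-BFGS-B")
mu_hat, log_sigma_hat = result.x
```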
This list is automatically generated from the titles and abstracts of the papers on this site.