An Introduction to Variational Inference
- URL: http://arxiv.org/abs/2108.13083v1
- Date: Mon, 30 Aug 2021 09:40:04 GMT
- Title: An Introduction to Variational Inference
- Authors: Ankush Ganguly and Samuel W. F. Earp
- Abstract summary: In this paper, we introduce the concept of Variational Inference (VI), a popular method in machine learning that uses optimization techniques to estimate complex probability densities.
We discuss the applications of VI to variational auto-encoders (VAE) and the VAE-Generative Adversarial Network (VAE-GAN).
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Approximating complex probability densities is a core problem in modern
statistics. In this paper, we introduce the concept of Variational Inference
(VI), a popular method in machine learning that uses optimization techniques to
estimate complex probability densities. This reliance on optimization allows VI to converge
faster than classical methods such as Markov chain Monte Carlo (MCMC) sampling.
Conceptually, VI works by choosing a family of probability density functions
and then finding the one closest to the actual probability density -- often
using the Kullback-Leibler (KL) divergence as the optimization metric. We
introduce the Evidence Lower Bound (ELBO) to tractably optimize the approximating
probability density and we review the ideas behind mean-field variational
inference. Finally, we discuss the applications of VI to variational
auto-encoders (VAE) and VAE-Generative Adversarial Network (VAE-GAN). With this
paper, we aim to explain the concept of VI and assist in future research with
this approach.
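
To make the mean-field and ELBO ideas concrete, here is a minimal, self-contained sketch (not code from the paper; the toy bivariate Gaussian target and all names are illustrative) that fits a factorized Gaussian q to a correlated Gaussian target by maximizing a Monte Carlo estimate of the ELBO:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
mu = np.array([1.0, -1.0])                    # toy posterior mean (assumed known here)
Sigma = np.array([[1.0, 0.8], [0.8, 1.0]])    # correlated covariance the factorized q cannot capture
Sigma_inv = np.linalg.inv(Sigma)
eps = rng.standard_normal((2000, 2))          # fixed base noise -> deterministic MC objective

def log_p(z):
    """Log target density, up to an additive constant."""
    d = z - mu
    return -0.5 * np.einsum("ij,jk,ik->i", d, Sigma_inv, d)

def neg_elbo(params):
    """Negative ELBO = -(E_q[log p(z)] + H[q]) for q = N(m, diag(s^2))."""
    m, log_s = params[:2], params[2:]
    s = np.exp(log_s)
    z = m + s * eps                           # reparameterization trick
    entropy = 0.5 * np.sum(1.0 + np.log(2.0 * np.pi * s**2))
    return -(log_p(z).mean() + entropy)

res = minimize(neg_elbo, x0=np.zeros(4), method="L-BFGS-B")
m_hat, s_hat = res.x[:2], np.exp(res.x[2:])
print("variational means:", m_hat)            # close to the target means
print("variational stds :", s_hat)            # under-estimate the marginal stds, as minimizing KL(q||p) predicts
```

The fixed base noise makes the Monte Carlo objective deterministic, so an off-the-shelf optimizer can be used in place of stochastic gradient ascent.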
Related papers
- PAVI: Plate-Amortized Variational Inference [55.975832957404556]
Inference is challenging for large population studies where millions of measurements are performed over a cohort of hundreds of subjects.
This large cardinality renders off-the-shelf Variational Inference (VI) computationally impractical.
In this work, we design structured VI families that efficiently tackle large population studies.
arXiv Detail & Related papers (2023-08-30T13:22:20Z)
- Amortized Variational Inference: When and Why? [17.1222896154385]
Amortized variational inference (A-VI) learns a common inference function, which maps each observation to its corresponding latent variable's approximate posterior.
We derive conditions on a latent variable model, which are necessary, sufficient, and verifiable, under which A-VI can attain the optimal solution of classical factorized VI (F-VI).
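
As a toy illustration of when amortization loses nothing (a hypothetical conjugate example, not the paper's conditions): in a linear-Gaussian model the per-observation optimal posterior mean is a fixed linear function of the data, so a single shared inference function recovers the factorized optimum exactly.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma2 = 0.5                                          # assumed observation-noise variance
x = rng.standard_normal(5) * np.sqrt(1.0 + sigma2)    # toy data from z ~ N(0,1), x|z ~ N(z, sigma2)

# F-VI: one free variational mean per observation; its optimum is the exact posterior mean here.
fvi_means = x / (1.0 + sigma2)

# A-VI: one inference function shared by all observations; a linear map suffices in this model.
w = 1.0 / (1.0 + sigma2)
avi_means = w * x

print(np.allclose(fvi_means, avi_means))              # True: amortization attains the factorized optimum
```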
arXiv Detail & Related papers (2023-07-20T16:45:22Z)
- On the Convergence of Coordinate Ascent Variational Inference [11.166959724276337]
We consider the common coordinate ascent variational inference (CAVI) algorithm for implementing mean-field (MF) VI.
We provide general conditions for certifying global or local exponential convergence of CAVI.
A new notion of generalized correlation is introduced to characterize how the constituent blocks interact in shaping the VI objective functional.
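
For intuition, here is a minimal CAVI sketch on a toy bivariate Gaussian target (a standard textbook example, not taken from the paper); the coordinate updates converge exponentially fast whenever the two blocks are not perfectly correlated.

```python
import numpy as np

mu = np.array([1.0, -2.0])                     # toy target mean
Sigma = np.array([[1.0, 0.7], [0.7, 2.0]])     # toy target covariance
Lam = np.linalg.inv(Sigma)                     # precision matrix

m = np.zeros(2)                                # variational means, arbitrary initialization
for sweep in range(20):                        # a few sweeps suffice in 2-D
    for i in range(2):
        j = 1 - i
        # Optimal q_i given q_j: Gaussian with the mean below and variance 1 / Lam[i, i]
        m[i] = mu[i] - Lam[i, j] * (m[j] - mu[j]) / Lam[i, i]

print("CAVI means    :", m)                    # converge to the target means mu
print("CAVI variances:", 1.0 / np.diag(Lam))   # mean-field variances 1 / Lambda_ii
```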
arXiv Detail & Related papers (2023-06-01T20:19:30Z)
- Manifold Gaussian Variational Bayes on the Precision Matrix [70.44024861252554]
We propose an optimization algorithm for Variational Inference (VI) in complex models.
We develop an efficient algorithm for Gaussian Variational Inference whose updates satisfy the positive definite constraint on the variational covariance matrix.
Due to its black-box nature, the proposed method, Manifold Gaussian Variational Bayes on the Precision matrix (MGVBP), stands as a ready-to-use solution for VI in complex models.
arXiv Detail & Related papers (2022-10-26T10:12:31Z)
- Amortized Variational Inference: A Systematic Review [0.0]
The core principle of Variational Inference (VI) is to convert the statistical inference problem of computing complex posterior probability densities into a tractable optimization problem.
The traditional VI algorithm is not scalable to large data sets and is unable to readily infer out-of-bounds data points.
Recent developments in the field, such as black-box VI and amortized VI, have helped address these issues.
arXiv Detail & Related papers (2022-09-22T09:45:10Z)
- Regularizing Variational Autoencoder with Diversity and Uncertainty Awareness [61.827054365139645]
Variational Autoencoder (VAE) approximates the posterior of latent variables based on amortized variational inference.
We propose an alternative model, DU-VAE, for learning a more Diverse and less Uncertain latent space.
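
For reference, a minimal sketch of the two terms a standard VAE objective balances (the linear encoder/decoder and all names are hypothetical, and unrelated to DU-VAE):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(8)                     # one toy observation
W_enc = 0.1 * rng.standard_normal((2, 8))      # hypothetical linear "encoder"
W_dec = 0.1 * rng.standard_normal((8, 2))      # hypothetical linear "decoder"

mu = W_enc @ x                                 # q(z|x) mean from the encoder
log_var = np.full(2, -1.0)                     # q(z|x) log-variance (kept fixed for brevity)
eps = rng.standard_normal(2)
z = mu + np.exp(0.5 * log_var) * eps           # reparameterization trick: differentiable sample

x_hat = W_dec @ z                              # decoder mean
recon = 0.5 * np.sum((x - x_hat) ** 2)         # Gaussian reconstruction term (up to constants)
kl = 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)   # closed-form KL(q(z|x) || N(0, I))

print(f"reconstruction={recon:.3f}  KL={kl:.3f}  negative ELBO={recon + kl:.3f}")
```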
arXiv Detail & Related papers (2021-10-24T07:58:13Z)
- Interpolating between sampling and variational inference with infinite stochastic mixtures [6.021787236982659]
Sampling and Variational Inference (VI) are two large families of methods for approximate inference with complementary strengths.
Here, we develop a framework for constructing intermediate algorithms that balance the strengths of both sampling and VI.
This work is a first step towards a highly flexible yet simple family of inference methods that combines the complementary strengths of sampling and VI.
arXiv Detail & Related papers (2021-10-18T20:50:06Z)
- Variational Refinement for Importance Sampling Using the Forward Kullback-Leibler Divergence [77.06203118175335]
Variational Inference (VI) is a popular alternative to exact sampling in Bayesian inference.
Importance sampling (IS) is often used to fine-tune and de-bias the estimates of approximate Bayesian inference procedures.
We propose a novel combination of optimization and sampling techniques for approximate Bayesian inference.
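
As a schematic illustration (not the paper's refinement scheme), the sketch below uses a variational approximation as an importance-sampling proposal and applies self-normalized weights to de-bias an expectation under the target:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
p = stats.norm(loc=2.0, scale=1.5)             # "true" posterior (known here only for illustration)
q = stats.norm(loc=1.5, scale=1.0)             # imperfect variational approximation used as proposal

z = q.rvs(size=20_000, random_state=rng)
log_w = p.logpdf(z) - q.logpdf(z)              # log importance weights
w = np.exp(log_w - log_w.max())
w /= w.sum()                                   # self-normalized weights

print("plain VI estimate of E[z]:", z.mean())         # biased toward the proposal mean 1.5
print("IS-corrected estimate    :", np.sum(w * z))    # close to the true mean 2.0
```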
arXiv Detail & Related papers (2021-06-30T11:00:24Z)
- Loss function based second-order Jensen inequality and its application to particle variational inference [112.58907653042317]
Particle variational inference (PVI) uses an ensemble of models as an empirical approximation for the posterior distribution.
PVI iteratively updates each model with a repulsion force to ensure the diversity of the optimized models.
We derive a novel generalization error bound and show that it can be reduced by enhancing the diversity of models.
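
For concreteness, here is a minimal Stein-variational-gradient-descent-style sketch of the attraction-plus-repulsion update typical of particle VI (illustrative only; the paper's ensembles of neural models are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(4)
particles = 0.1 * rng.standard_normal(50)      # 1-D particles, initialized near zero

def grad_log_p(z):
    return -(z - 3.0)                          # score of the toy target p = N(3, 1)

h = 0.5                                        # RBF kernel bandwidth (fixed for simplicity)
for step in range(1000):
    diff = particles[:, None] - particles[None, :]    # pairwise differences z_i - z_j
    K = np.exp(-diff**2 / (2.0 * h))                  # RBF kernel matrix
    grad_K = diff * K / h                             # d k(z_j, z_i) / d z_j: the repulsive term
    phi = (K @ grad_log_p(particles) + grad_K.sum(axis=1)) / len(particles)
    particles += 0.1 * phi                            # SVGD-style update

print("particle mean:", particles.mean())     # close to 3.0
print("particle std :", particles.std())      # roughly 1.0; repulsion prevents collapse to the mode
```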
arXiv Detail & Related papers (2021-06-09T12:13:51Z)
- Meta-Learning Divergences of Variational Inference [49.164944557174294]
Variational inference (VI) plays an essential role in approximate Bayesian inference.
We propose a meta-learning algorithm to learn the divergence metric suited for the task of interest.
We demonstrate that our approach outperforms standard VI on Gaussian mixture distribution approximation.
arXiv Detail & Related papers (2020-07-06T17:43:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.