Recursive Monte Carlo and Variational Inference with Auxiliary Variables
- URL: http://arxiv.org/abs/2203.02836v1
- Date: Sat, 5 Mar 2022 23:52:40 GMT
- Title: Recursive Monte Carlo and Variational Inference with Auxiliary Variables
- Authors: Alexander K. Lew, Marco Cusumano-Towner, and Vikash K. Mansinghka
- Abstract summary: Recursive auxiliary-variable inference (RAVI) is a new framework for exploiting flexible proposals.
RAVI generalizes and unifies several existing methods for inference with expressive approximating families.
We illustrate RAVI's design framework and theorems by using them to analyze and improve upon Salimans et al. (2015)'s Markov Chain Variational Inference.
- Score: 64.25762042361839
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A key challenge in applying Monte Carlo and variational inference (VI) is the
design of proposals and variational families that are flexible enough to
closely approximate the posterior, but simple enough to admit tractable
densities and variational bounds. This paper presents recursive
auxiliary-variable inference (RAVI), a new framework for exploiting flexible
proposals, for example based on involved simulations or stochastic
optimization, within Monte Carlo and VI algorithms. The key idea is to estimate
intractable proposal densities via meta-inference: additional Monte Carlo or
variational inference targeting the proposal, rather than the model. RAVI
generalizes and unifies several existing methods for inference with expressive
approximating families, which we show correspond to specific choices of
meta-inference algorithm, and provides new theory for analyzing their bias and
variance. We illustrate RAVI's design framework and theorems by using them to
analyze and improve upon Salimans et al. (2015)'s Markov Chain Variational
Inference, and to design a novel sampler for Dirichlet process mixtures,
achieving state-of-the-art results on a standard benchmark dataset from
astronomy and on a challenging data-cleaning task with Medicare hospital data.
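To make the meta-inference idea concrete, below is a minimal numpy sketch, not the paper's code: an auxiliary-variable proposal q(x, u) whose marginal q(x) is never evaluated, a tractable meta-proposal r(u | x) standing in for the exact conditional q(u | x), and the importance weight p(x) r(u | x) / q(x, u), which yields properly weighted samples once u is integrated out. The toy densities and all names are our own choices; here q's marginal happens to be tractable so the output can be sanity-checked.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_normal(x, mean, var):
    return -0.5 * (np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

# Target p(x): here N(2, 1), so E_p[x] = 2.
def log_p(x):
    return log_normal(x, 2.0, 1.0)

# Auxiliary-variable proposal q(x, u): u ~ N(0, 1), x | u ~ N(u, 0.5).
# We never evaluate the marginal q(x), mimicking a flexible proposal
# whose marginal density is intractable.
def sample_q(n):
    u = rng.normal(0.0, 1.0, n)
    x = rng.normal(u, np.sqrt(0.5), n)
    return x, u

def log_q_joint(x, u):
    return log_normal(u, 0.0, 1.0) + log_normal(x, u, 0.5)

# Meta-inference: a tractable approximation r(u | x) to the exact
# conditional q(u | x) (which is N(2x/3, 1/3) in this toy). It is
# deliberately crude; RAVI's theory bounds the extra variance a poor
# meta-inference strategy introduces.
def log_r(u, x):
    return log_normal(u, 0.6 * x, 0.5)

# Properly weighted importance sampling: draw (x, u) ~ q jointly and
# weight by p(x) * r(u | x) / q(x, u). Integrating out u shows the
# weighted samples target p, even though q(x) itself is never computed.
n = 200_000
x, u = sample_q(n)
log_w = log_p(x) + log_r(u, x) - log_q_joint(x, u)
w = np.exp(log_w - log_w.max())
print("self-normalized estimate of E_p[x]:", np.sum(w * x) / np.sum(w))
# Prints approximately 2.0.
```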
Related papers
- BI-EqNO: Generalized Approximate Bayesian Inference with an Equivariant Neural Operator Framework [9.408644291433752]
We introduce BI-EqNO, an equivariant neural operator framework for generalized approximate Bayesian inference.
Through data-driven training, BI-EqNO learns to transform priors into posteriors conditioned on observation data.
We demonstrate BI-EqNO's utility through two examples: (1) as a generalized Gaussian process (gGP) for regression, and (2) as an ensemble neural filter (EnNF) for sequential data assimilation.
arXiv Detail & Related papers (2024-10-21T18:39:16Z)
- On Discriminative Probabilistic Modeling for Self-Supervised Representation Learning [85.75164588939185]
We study the discriminative probabilistic modeling problem on a continuous domain for (multimodal) self-supervised representation learning.
We conduct generalization error analysis to reveal the limitation of current InfoNCE-based contrastive loss for self-supervised representation learning.
arXiv Detail & Related papers (2024-10-11T18:02:46Z)
- Markov chain Monte Carlo without evaluating the target: an auxiliary variable approach [9.426953273977496]
Markov chain Monte Carlo algorithms can be unified under a simple common procedure.
We develop the theory of the new framework, applying it to existing algorithms to simplify and extend their results.
Several new algorithms emerge from this framework, with improved performance demonstrated on both synthetic and real datasets.
arXiv Detail & Related papers (2024-06-07T20:06:23Z)
- Variational Inference for GARCH-family Models [84.84082555964086]
Variational Inference is a robust approach for Bayesian inference in machine learning models.
We show that Variational Inference is an attractive, remarkably well-calibrated, and competitive method for Bayesian learning.
arXiv Detail & Related papers (2023-10-05T10:21:31Z)
- Monte Carlo inference for semiparametric Bayesian regression [5.488491124945426]
This paper introduces a simple, general, and efficient strategy for joint posterior inference of an unknown transformation and all regression model parameters.
It delivers (1) joint posterior consistency under general conditions, including multiple model misspecifications, and (2) efficient Monte Carlo (not Markov chain Monte Carlo) inference for the transformation and all parameters for important special cases.
arXiv Detail & Related papers (2023-06-08T18:42:42Z)
- Manifold Gaussian Variational Bayes on the Precision Matrix [70.44024861252554]
We propose an optimization algorithm for Variational Inference (VI) in complex models.
We develop an efficient algorithm for Gaussian Variational Inference whose updates satisfy the positive definite constraint on the variational covariance matrix.
Due to its black-box nature, MGVBP stands as a ready-to-use solution for VI in complex models.
arXiv Detail & Related papers (2022-10-26T10:12:31Z)
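The positive-definite constraint mentioned in the entry above can be made concrete. MGVBP's actual updates exploit the manifold structure of the precision matrix; the numpy sketch below is a generic, hypothetical illustration of the constraint itself, not the paper's algorithm: a naive additive step can leave the positive-definite cone, while the common workaround of stepping on a Cholesky factor preserves positive definiteness by construction.

```python
import numpy as np

# Precision matrix of a 2-D Gaussian variational family.
P = np.array([[1.0, 0.9],
              [0.9, 1.0]])

# A naive additive update on P itself (e.g., a large SGD step) can
# leave the positive-definite cone:
step = np.array([[0.0, 0.5],
                 [0.5, 0.0]])
P_bad = P + step
print(np.linalg.eigvalsh(P_bad))  # one eigenvalue is negative

# A standard workaround: update a lower-triangular Cholesky factor L
# and rebuild P = L @ L.T, which stays positive definite as long as
# the diagonal of L stays nonzero.
L = np.linalg.cholesky(P)
L_new = L + 0.5 * np.tril(np.ones_like(L))  # any step on the factor
P_new = L_new @ L_new.T
print(np.linalg.eigvalsh(P_new))  # all eigenvalues > 0
```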
- Quasi Black-Box Variational Inference with Natural Gradients for Bayesian Learning [84.90242084523565]
We develop an optimization algorithm suitable for Bayesian learning in complex models.
Our approach relies on natural gradient updates within a general black-box framework for efficient training with limited model-specific derivations.
arXiv Detail & Related papers (2022-05-23T18:54:27Z)
- Scalable Control Variates for Monte Carlo Methods via Stochastic Optimization [62.47170258504037]
This paper presents a framework that encompasses and generalizes existing approaches that use control variates, kernels, and neural networks.
Novel theoretical results are presented to provide insight into the variance reduction that can be achieved, and an empirical assessment, including applications to Bayesian inference, is provided in support.
arXiv Detail & Related papers (2020-06-12T22:03:25Z)
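To make the control-variate idea from the entry above concrete, here is a minimal numpy sketch with a hand-picked control, rather than the learned controls the paper studies: subtracting a mean-zero control g(x) = x, with a coefficient fit from the samples, sharply reduces the variance of a plain Monte Carlo estimate of E[exp(X)] for X ~ N(0, 1).

```python
import numpy as np

rng = np.random.default_rng(1)

# Goal: estimate E[f(X)] for X ~ N(0, 1) with f(x) = exp(x);
# the true value is exp(1/2).
x = rng.normal(size=50_000)
f = np.exp(x)

# Control variate: g(x) = x, with known mean E[g(X)] = 0. The corrected
# estimator f - beta * g is unbiased for any fixed beta; the
# variance-optimal choice is beta = Cov(f, g) / Var(g). Fitting beta
# from the same samples adds only O(1/n) bias.
g = x
C = np.cov(f, g)
beta = C[0, 1] / C[1, 1]

plain = f.mean()
cv = (f - beta * g).mean()
print(f"true        : {np.exp(0.5):.4f}")
print(f"plain MC    : {plain:.4f}  (var {f.var():.3f})")
print(f"with control: {cv:.4f}  (var {(f - beta * g).var():.3f})")
```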
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences of its use.