Rethinking Variational Inference for Probabilistic Programs with Stochastic Support
- URL: http://arxiv.org/abs/2311.00594v1
- Date: Wed, 1 Nov 2023 15:38:51 GMT
- Title: Rethinking Variational Inference for Probabilistic Programs with Stochastic Support
- Authors: Tim Reichelt, Luke Ong, Tom Rainforth
- Abstract summary: We introduce Support Decomposition Variational Inference (SDVI), a new variational inference (VI) approach for probabilistic programs with stochastic support.
SDVI instead breaks the program down into sub-programs with static support, before automatically building separate sub-guides for each.
This decomposition significantly aids in the construction of suitable variational families, enabling, in turn, substantial improvements in inference performance.
- Score: 23.07504711090434
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce Support Decomposition Variational Inference (SDVI), a new
variational inference (VI) approach for probabilistic programs with stochastic
support. Existing approaches to this problem rely on designing a single global
variational guide on a variable-by-variable basis, while maintaining the
stochastic control flow of the original program. SDVI instead breaks the
program down into sub-programs with static support, before automatically
building separate sub-guides for each. This decomposition significantly aids in
the construction of suitable variational families, enabling, in turn,
substantial improvements in inference performance.
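To make the decomposition concrete, the following is a minimal, self-contained sketch of the idea on a toy model (an illustration under assumptions, not the paper's implementation): a discrete branch choice gives the program stochastic support, each branch is treated as a static-support sub-program with its own Gaussian sub-guide, and the branches are then re-weighted using their local ELBOs. The model, variable names, and hyperparameters below are invented for illustration.
```python
# Toy illustration of the SDVI idea (assumptions only, not the authors' code).
import torch
import torch.distributions as dist

torch.manual_seed(0)
y = torch.tensor(2.5)                      # a single (made-up) observation
branch_prior = torch.tensor([0.5, 0.5])    # p(k): prior over control-flow branches

def log_joint(k, theta, y):
    """log p(theta, y | k): branch-specific prior and likelihood."""
    if k == 0:
        lp = dist.Normal(0.0, 1.0).log_prob(theta)       # theta ~ N(0, 1)
        lp = lp + dist.Normal(theta, 0.5).log_prob(y)    # y ~ N(theta, 0.5)
    else:
        lp = dist.Normal(0.0, 3.0).log_prob(theta)       # theta ~ N(0, 3)
        lp = lp + dist.Normal(theta, 2.0).log_prob(y)    # y ~ N(theta, 2)
    return lp

def fit_sub_guide(k, steps=2000, lr=0.05, n_mc=8):
    """Fit a Gaussian sub-guide q_k(theta) for the static-support sub-program k."""
    loc = torch.zeros(1, requires_grad=True)
    log_scale = torch.zeros(1, requires_grad=True)
    opt = torch.optim.Adam([loc, log_scale], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        q = dist.Normal(loc, log_scale.exp())
        theta = q.rsample((n_mc,))                       # reparameterised samples
        elbo = (log_joint(k, theta, y) - q.log_prob(theta)).mean()
        (-elbo).backward()
        opt.step()
    with torch.no_grad():                                # final ELBO estimate for this sub-program
        q = dist.Normal(loc, log_scale.exp())
        theta = q.sample((2000,))
        elbo = (log_joint(k, theta, y) - q.log_prob(theta)).mean()
    return loc.detach(), log_scale.detach(), elbo

elbos = []
for k in (0, 1):
    loc, log_scale, elbo = fit_sub_guide(k)
    elbos.append(elbo)
    print(f"branch {k}: q_k = N({loc.item():.2f}, {log_scale.exp().item():.2f}), local ELBO = {elbo.item():.3f}")

# Each local ELBO lower-bounds log p(y | k), so the posterior over branches is
# approximated by softmax(log p(k) + ELBO_k).
weights = torch.softmax(branch_prior.log() + torch.stack(elbos), dim=0)
print("approximate posterior branch weights:", weights.tolist())
```
Because each sub-guide only ever has to cover one control-flow path, its variational family can be matched to that path's fixed support; this is the property the abstract credits for the improved inference performance.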
Related papers
- Probabilistic Programming with Programmable Variational Inference [45.593974530502095]
We propose a more modular approach to supporting variational inference in PPLs, based on compositional program transformation.
Our design enables modular reasoning about many interacting concerns, including automatic differentiation, density, tracing, and the application of unbiased gradient estimation strategies.
We implement our approach as an extension of the Gen probabilistic programming system (genjax.vi, implemented in JAX) and evaluate it on several deep generative modeling tasks.
arXiv Detail & Related papers (2024-06-22T05:49:37Z)
- Semi-Implicit Variational Inference via Score Matching [9.654640791869431]
Semi-implicit variational inference (SIVI) greatly enriches the expressiveness of variational families.
Current SIVI approaches often use surrogate evidence lower bounds (ELBOs) or employ expensive inner-loop MCMC runs for unbiased ELBOs for training.
We propose SIVI-SM, a new method for SIVI based on an alternative training objective via score matching.
arXiv Detail & Related papers (2023-08-19T13:32:54Z)
- Recursive Monte Carlo and Variational Inference with Auxiliary Variables [64.25762042361839]
Recursive auxiliary-variable inference (RAVI) is a new framework for exploiting flexible proposals.
RAVI generalizes and unifies several existing methods for inference with expressive families.
We illustrate RAVI's design framework and theorems by using them to analyze and improve upon Salimans et al.'s Markov Chain Variational Inference.
arXiv Detail & Related papers (2022-03-05T23:52:40Z)
- A Variational Inference Approach to Inverse Problems with Gamma Hyperpriors [60.489902135153415]
This paper introduces a variational iterative alternating scheme for hierarchical inverse problems with gamma hyperpriors.
The proposed variational inference approach yields accurate reconstruction, provides meaningful uncertainty quantification, and is easy to implement.
arXiv Detail & Related papers (2021-11-26T06:33:29Z)
- Regularizing Variational Autoencoder with Diversity and Uncertainty Awareness [61.827054365139645]
The Variational Autoencoder (VAE) approximates the posterior of latent variables based on amortized variational inference.
We propose an alternative model, DU-VAE, for learning a more Diverse and less Uncertain latent space.
arXiv Detail & Related papers (2021-10-24T07:58:13Z)
- Automatic variational inference with cascading flows [6.252236971703546]
We present a new family of variational programs that embed the forward-pass.
A cascading flows program interposes a newly designed highway flow architecture in between the conditional distributions of the prior program.
We evaluate the performance of the new variational programs in a series of structured inference problems.
arXiv Detail & Related papers (2021-02-09T12:44:39Z)
- Posterior Differential Regularization with f-divergence for Improving Model Robustness [95.05725916287376]
We focus on methods that regularize the model posterior difference between clean and noisy inputs.
We generalize the posterior differential regularization to the family of $f$-divergences.
Our experiments show that regularizing the posterior differential with $f$-divergence can substantially improve model robustness.
arXiv Detail & Related papers (2020-10-23T19:58:01Z)
- Meta-Learning Divergences of Variational Inference [49.164944557174294]
Variational inference (VI) plays an essential role in approximate Bayesian inference.
We propose a meta-learning algorithm to learn the divergence metric suited for the task of interest.
We demonstrate that our approach outperforms standard VI on Gaussian mixture distribution approximation.
arXiv Detail & Related papers (2020-07-06T17:43:01Z)
- Stochastically Differentiable Probabilistic Programs [18.971852464650144]
The existence of discrete random variables prevents the use of many basic gradient-based inference engines.
We present a novel approach to running inference efficiently and robustly in such programs using the Markov chain Monte Carlo family of algorithms.
arXiv Detail & Related papers (2020-03-02T08:04:41Z)
- Automatic structured variational inference [12.557212589634112]
We introduce automatic structured variational inference (ASVI), a fully automated method for constructing structured variational families.
We find that ASVI provides a clear improvement in performance when compared with other popular approaches.
arXiv Detail & Related papers (2020-02-03T10:52:30Z)
- Target-Embedding Autoencoders for Supervised Representation Learning [111.07204912245841]
This paper analyzes a framework for improving generalization in a purely supervised setting, where the target space is high-dimensional.
We motivate and formalize the general framework of target-embedding autoencoders (TEA) for supervised prediction, learning intermediate latent representations jointly optimized to be both predictable from features and predictive of targets.
arXiv Detail & Related papers (2020-01-23T02:37:10Z)
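As a rough illustration of the objective in the entry above (a sketch under assumptions, not the architecture from the TEA paper): the target is auto-encoded through a latent code that is simultaneously trained to be predictable from the features, so the latent is jointly optimized to reconstruct the target and to be reachable from the inputs. All layer sizes and names below are hypothetical.
```python
# Rough sketch of a target-embedding autoencoder objective (hypothetical
# layer sizes and names): the latent code z is jointly trained to reconstruct
# the high-dimensional target y and to be predictable from the features x.
import torch
import torch.nn as nn

class TargetEmbeddingAE(nn.Module):
    def __init__(self, x_dim, y_dim, z_dim):
        super().__init__()
        self.encode_y = nn.Sequential(nn.Linear(y_dim, 64), nn.ReLU(), nn.Linear(64, z_dim))
        self.decode_y = nn.Sequential(nn.Linear(z_dim, 64), nn.ReLU(), nn.Linear(64, y_dim))
        self.predict_z = nn.Sequential(nn.Linear(x_dim, 64), nn.ReLU(), nn.Linear(64, z_dim))

    def loss(self, x, y, alpha=1.0):
        z = self.encode_y(y)                                  # embed the target
        recon = ((self.decode_y(z) - y) ** 2).mean()          # z must reconstruct the target
        pred = ((self.predict_z(x) - z) ** 2).mean()          # z must be predictable from features
        return recon + alpha * pred                           # jointly optimised, as in the summary

# Hypothetical usage on random data:
model = TargetEmbeddingAE(x_dim=10, y_dim=50, z_dim=8)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = torch.randn(128, 10), torch.randn(128, 50)
for _ in range(200):
    opt.zero_grad()
    model.loss(x, y).backward()
    opt.step()
```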
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.