Accelerating Metropolis-Hastings with Lightweight Inference Compilation
- URL: http://arxiv.org/abs/2010.12128v1
- Date: Fri, 23 Oct 2020 02:05:37 GMT
- Title: Accelerating Metropolis-Hastings with Lightweight Inference Compilation
- Authors: Feynman Liang, Nimar Arora, Nazanin Tehrani, Yucen Li, Michael
Tingley, Erik Meijer
- Abstract summary: Lightweight Inference Compilation (LIC) implements amortized inference within an open-universe probabilistic programming language.
LIC forgoes importance sampling of linear execution traces in favor of operating directly on Bayesian networks.
Experimental results show LIC can produce proposers which have fewer parameters, greater robustness to nuisance random variables, and improved posterior sampling.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In order to construct accurate proposers for Metropolis-Hastings Markov Chain
Monte Carlo, we integrate ideas from probabilistic graphical models and neural
networks in an open-source framework we call Lightweight Inference Compilation
(LIC). LIC implements amortized inference within an open-universe declarative
probabilistic programming language (PPL). Graph neural networks are used to
parameterize proposal distributions as functions of Markov blankets, which
during "compilation" are optimized to approximate single-site Gibbs sampling
distributions. Unlike prior work in inference compilation (IC), LIC forgoes
importance sampling of linear execution traces in favor of operating directly
on Bayesian networks. Through using a declarative PPL, the Markov blankets of
nodes (which may be non-static) are queried at inference-time to produce
proposers. Experimental results show LIC can produce proposers which have fewer
parameters, greater robustness to nuisance random variables, and improved
posterior sampling in a Bayesian logistic regression and $n$-schools inference
application.
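The "compilation" step described in the abstract, fitting a proposal distribution to approximate a single-site Gibbs conditional as a function of a node's Markov blanket, can be sketched on a toy conjugate model. The model, the linear-Gaussian proposal family, and all variable names below are illustrative assumptions for this sketch, not LIC's actual architecture (which parameterizes proposals with graph neural networks inside a declarative PPL):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: z ~ N(0, 1); y ~ N(z, 0.5^2).
# The Markov blanket of z is {y}. We amortize the Gibbs conditional
# p(z | y) with a proposal q(z | y) = N(w*y + b, s^2), "compiled" by
# minimizing the inclusive KL(p || q), i.e. maximizing
# E_{p(z, y)}[log q(z | y)] over samples drawn from the joint.
z = rng.normal(0.0, 1.0, size=5000)   # latent samples from the prior
y = rng.normal(z, 0.5)                # observations (Markov blanket of z)

# For a linear-Gaussian proposal family this objective has a
# closed-form maximizer: ordinary least squares of z on y.
w, b = np.polyfit(y, z, 1)            # proposal mean function m(y) = w*y + b
s = np.std(z - (w * y + b))           # homoscedastic proposal scale

# Use the compiled proposer inside Metropolis-Hastings for a fixed y_obs.
y_obs = 1.2

def log_joint(zv):
    # log p(z) + log p(y_obs | z), up to additive constants
    return -0.5 * zv**2 - 0.5 * ((y_obs - zv) / 0.5) ** 2

def log_q(zv):
    # log density of the compiled proposal, up to additive constants
    return -0.5 * ((zv - (w * y_obs + b)) / s) ** 2

samples, cur = [], 0.0
for _ in range(4000):
    prop = rng.normal(w * y_obs + b, s)   # independence proposal from q
    log_alpha = (log_joint(prop) - log_joint(cur)) + (log_q(cur) - log_q(prop))
    if np.log(rng.uniform()) < log_alpha:
        cur = prop
    samples.append(cur)

post_mean = np.mean(samples[1000:])
# The exact posterior here is N(0.96, 0.2), so a well-compiled proposer
# yields high MH acceptance and a posterior mean estimate near 0.96.
```

Because the proposal family contains the true conditional in this conjugate toy, the compiled proposer nearly matches the exact posterior and the chain mixes in a few steps; the paper's contribution is obtaining analogous Markov-blanket-conditioned proposers automatically, for models where no closed form exists.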
Related papers
- Distributed Markov Chain Monte Carlo Sampling based on the Alternating
Direction Method of Multipliers
In this paper, we propose a distributed sampling scheme based on the alternating direction method of multipliers.
We provide both theoretical guarantees of our algorithm's convergence and experimental evidence of its superiority to the state-of-the-art.
In simulation, we deploy our algorithm on linear and logistic regression tasks and illustrate its fast convergence compared to existing gradient-based methods.
arXiv Detail & Related papers (2024-01-29T02:08:40Z) - Amortizing intractable inference in large language models
We use amortized Bayesian inference to sample from intractable posterior distributions.
We empirically demonstrate that this distribution-matching paradigm of LLM fine-tuning can serve as an effective alternative to maximum-likelihood training.
As an important application, we interpret chain-of-thought reasoning as a latent variable modeling problem.
arXiv Detail & Related papers (2023-10-06T16:36:08Z) - ReCAB-VAE: Gumbel-Softmax Variational Inference Based on Analytic
Divergence
We present a novel divergence-like metric which corresponds to the upper bound of the Kullback-Leibler divergence (KLD) of a relaxed categorical distribution.
We also propose a relaxed categorical analytic bound variational autoencoder (ReCAB-VAE) that successfully models both continuous and relaxed latent representations.
arXiv Detail & Related papers (2022-05-09T08:11:46Z) - Distributional Gradient Boosting Machines
Our framework is based on XGBoost and LightGBM.
We show that our framework achieves state-of-the-art forecast accuracy.
arXiv Detail & Related papers (2022-04-02T06:32:19Z) - A deep learning based surrogate model for stochastic simulators
We propose a deep learning-based surrogate model for stochastic simulators.
We utilize conditional maximum mean discrepancy (CMMD) as the loss-function.
Results obtained indicate the excellent performance of the proposed approach.
arXiv Detail & Related papers (2021-10-24T11:38:47Z) - Sequential Likelihood-Free Inference with Implicit Surrogate Proposal
This paper introduces Implicit Surrogate Proposal (ISP) to generate a cumulated dataset with further sample efficiency.
ISP constructs the cumulative dataset in the most diverse way by drawing i.i.d. samples in a feed-forward fashion.
We demonstrate that ISP outperforms the baseline inference algorithms on simulations with multi-modal posteriors.
arXiv Detail & Related papers (2020-10-15T08:59:23Z) - Implicit Distributional Reinforcement Learning
We propose an implicit distributional actor-critic (IDAC) built on two deep generator networks (DGNs) and a semi-implicit actor (SIA) powered by a flexible policy distribution.
We observe IDAC outperforms state-of-the-art algorithms on representative OpenAI Gym environments.
arXiv Detail & Related papers (2020-07-13T02:52:18Z) - Likelihood-Free Inference with Deep Gaussian Processes
Surrogate models have been successfully used in likelihood-free inference to decrease the number of simulator evaluations.
We propose a Deep Gaussian Process (DGP) surrogate model that can handle more irregularly behaved target distributions.
Our experiments show how DGPs can outperform GPs on objective functions with multimodal distributions and maintain a comparable performance in unimodal cases.
arXiv Detail & Related papers (2020-06-18T14:24:05Z) - Distributionally Robust Chance Constrained Programming with Generative
Adversarial Networks (GANs)
A novel generative adversarial network (GAN) based data-driven distributionally robust chance constrained programming framework is proposed.
GAN is applied to fully extract distributional information from historical data in a nonparametric and unsupervised way.
The proposed framework is then applied to supply chain optimization under demand uncertainty.
arXiv Detail & Related papers (2020-02-28T00:05:22Z) - Semi-Supervised Learning with Normalizing Flows
FlowGMM is an end-to-end approach to generative semi-supervised learning with normalizing flows.
We show promising results on a wide range of applications, including AG-News and Yahoo Answers text data.
arXiv Detail & Related papers (2019-12-30T17:36:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all abstracts and summaries) and is not responsible for any consequences of its use.