Involutive MCMC: a Unifying Framework
- URL: http://arxiv.org/abs/2006.16653v1
- Date: Tue, 30 Jun 2020 10:21:42 GMT
- Title: Involutive MCMC: a Unifying Framework
- Authors: Kirill Neklyudov, Max Welling, Evgenii Egorov, Dmitry Vetrov
- Abstract summary: We describe a wide range of MCMC algorithms in terms of iMCMC.
We formulate a number of "tricks" which one can use as design principles for developing new MCMC algorithms.
We demonstrate the latter with two examples where we transform known reversible MCMC algorithms into more efficient irreversible ones.
- Score: 64.46316409766764
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Markov Chain Monte Carlo (MCMC) is a computational approach to fundamental
problems such as inference, integration, optimization, and simulation. The
field has developed a broad spectrum of algorithms, varying in the way they are
motivated, the way they are applied and how efficiently they sample. Despite
all the differences, many of them share the same core principle, which we unify
as the Involutive MCMC (iMCMC) framework. Building upon this, we describe a
wide range of MCMC algorithms in terms of iMCMC, and formulate a number of
"tricks" which one can use as design principles for developing new MCMC
algorithms. Thus, iMCMC provides a unified view of many known MCMC algorithms,
which facilitates the derivation of powerful extensions. We demonstrate the
latter with two examples where we transform known reversible MCMC algorithms
into more efficient irreversible ones.
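As a concrete illustration of the framework the abstract describes, the sketch below shows a single iMCMC step: sample an auxiliary variable v, apply a deterministic involution f to the pair (x, v), and accept with a Jacobian-corrected Metropolis-Hastings test. This is a minimal sketch, not the authors' reference code; the Gaussian auxiliary distribution, the `flip_walk` involution, and all function names are illustrative choices, picked so that the step reduces to plain random-walk Metropolis.

```python
import numpy as np

def imcmc_step(x, log_p, involution, log_abs_det_jac, rng, sigma=1.0):
    """One involutive MCMC step for a 1-D target.

    x               : current state (float)
    log_p           : log of the (unnormalised) target density
    involution      : map f(x, v) -> (x', v') with f(f(x, v)) == (x, v)
    log_abs_det_jac : log |det df/d(x, v)| evaluated at (x, v)
    """
    # 1. Sample the auxiliary variable v ~ p(v | x); here a simple Gaussian.
    v = sigma * rng.standard_normal()
    log_q = lambda u: -0.5 * (u / sigma) ** 2   # log N(0, sigma^2), up to a constant

    # 2. Apply the deterministic involution to obtain the proposal.
    x_new, v_new = involution(x, v)

    # 3. Metropolis-Hastings test on the joint density p(x) p(v | x),
    #    corrected by the Jacobian of the involution.
    log_alpha = (log_p(x_new) + log_q(v_new)
                 - log_p(x) - log_q(v)
                 + log_abs_det_jac(x, v))
    if np.log(rng.uniform()) < min(0.0, log_alpha):
        return x_new
    return x

# Random-walk Metropolis recovered as an iMCMC instance:
# f(x, v) = (x + v, -v) is an involution with |det J| = 1.
flip_walk = lambda x, v: (x + v, -v)
unit_jac = lambda x, v: 0.0

rng = np.random.default_rng(0)
log_target = lambda x: -0.5 * x ** 2            # standard normal target
x, samples = 0.0, []
for _ in range(5000):
    x = imcmc_step(x, log_target, flip_walk, unit_jac, rng)
    samples.append(x)
print(np.mean(samples), np.var(samples))        # should be near 0 and 1
```

Swapping in a different involution, for example a leapfrog integrator followed by a momentum flip, turns the same skeleton into Hamiltonian Monte Carlo, which is the sense in which iMCMC unifies these samplers.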
Related papers
- MCMC-driven learning [64.94438070592365]
This paper is intended to appear as a chapter of the Handbook of Markov Chain Monte Carlo.
Its goal is to unify various problems at the intersection of Markov chain Monte Carlo (MCMC) and machine learning.
arXiv Detail & Related papers (2024-02-14T22:10:42Z)
- Learning Energy-Based Prior Model with Diffusion-Amortized MCMC [89.95629196907082]
The common practice of learning latent-space EBMs with non-convergent short-run MCMC for prior and posterior sampling hinders further progress of the model.
We introduce a simple but effective diffusion-based amortization method for long-run MCMC sampling and develop a novel learning algorithm for the latent space EBM based on it.
arXiv Detail & Related papers (2023-10-05T00:23:34Z)
- Bayesian Decision Trees Inspired from Evolutionary Algorithms [64.80360020499555]
We propose replacing Markov Chain Monte Carlo (MCMC) with an inherently parallel algorithm, Sequential Monte Carlo (SMC).
Experiments show that SMC combined with Evolutionary Algorithms (EA) can produce more accurate results than MCMC in 100 times fewer iterations.
arXiv Detail & Related papers (2023-05-30T06:17:35Z)
- Nonparametric Involutive Markov Chain Monte Carlo [6.445605125467574]
We show that NP-iMCMC can generalise numerous existing iMCMC algorithms to work on nonparametric models.
Applying our method to the recently proposed Nonparametric HMC, an instance of (Multiple Step) NP-iMCMC, we have constructed several nonparametric extensions.
arXiv Detail & Related papers (2022-11-02T13:21:52Z)
- Knowledge Removal in Sampling-based Bayesian Inference [86.14397783398711]
When even a single data-deletion request arrives, companies may need to delete entire models learned with massive resources.
Existing works propose methods to remove knowledge learned from data for explicitly parameterized models.
In this paper, we propose the first machine unlearning algorithm for MCMC.
arXiv Detail & Related papers (2022-03-24T10:03:01Z)
- Variational Combinatorial Sequential Monte Carlo Methods for Bayesian Phylogenetic Inference [4.339931151475307]
We introduce Variational Combinatorial Sequential Monte Carlo (VCSMC), a powerful framework that establishes variational sequential search to learn distributions over intricate combinatorial structures.
We show that VCSMC and CSMC are efficient and explore higher probability spaces than existing methods on a range of tasks.
arXiv Detail & Related papers (2021-05-31T19:44:24Z)
- Accelerating MCMC algorithms through Bayesian Deep Networks [7.054093620465401]
Markov Chain Monte Carlo (MCMC) algorithms are commonly used for their versatility in sampling from complicated probability distributions.
As the dimension of the distribution gets larger, the computational costs for a satisfactory exploration of the sampling space become challenging.
We show an alternative way of performing adaptive MCMC by using the output of Bayesian Neural Networks as the initial proposal for the Markov chain.
arXiv Detail & Related papers (2020-11-29T04:29:00Z)
- MCMC-Interactive Variational Inference [56.58416764959414]
We propose MCMC-interactive variational inference (MIVI) to estimate the posterior in a time-constrained manner.
MIVI takes advantage of the complementary properties of variational inference and MCMC to encourage mutual improvement.
Experiments show that MIVI not only accurately approximates the posteriors but also facilitates the design of gradient-based MCMC and Gibbs sampling transitions.
arXiv Detail & Related papers (2020-10-02T17:43:20Z)
- MetFlow: A New Efficient Method for Bridging the Gap between Markov Chain Monte Carlo and Variational Inference [20.312106392307406]
We propose a new computationally efficient method to combine Variational Inference (VI) with Markov Chain Monte Carlo (MCMC).
This approach can be used with generic MCMC kernels, but is especially well suited to MetFlow, a novel family of MCMC algorithms we introduce.
arXiv Detail & Related papers (2020-02-27T16:50:30Z)