MCMC-driven learning
- URL: http://arxiv.org/abs/2402.09598v1
- Date: Wed, 14 Feb 2024 22:10:42 GMT
- Title: MCMC-driven learning
- Authors: Alexandre Bouchard-Côté, Trevor Campbell, Geoff Pleiss, Nikola Surjanovic
- Abstract summary: This paper is intended to appear as a chapter for the Handbook of Markov Chain Monte Carlo.
The goal of this paper is to unify various problems at the intersection of Markov chain Monte Carlo (MCMC) and machine learning.
- Score: 64.94438070592365
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper is intended to appear as a chapter for the Handbook of Markov
Chain Monte Carlo. The goal of this chapter is to unify various problems at the
intersection of Markov chain Monte Carlo (MCMC) and machine learning (including
black-box variational inference, adaptive MCMC, normalizing flow construction
and transport-assisted MCMC, surrogate-likelihood MCMC, coreset construction
for MCMC with big data, Markov chain gradient descent, Markovian score
climbing, and more) within one common framework. By doing so, the theory and
methods developed for each may be translated and generalized.
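As a concrete instance of the adaptive-MCMC pattern surveyed in the chapter, here is a minimal, hypothetical sketch in which a random-walk proposal scale is learned online from the chain's own accept/reject decisions via a Robbins-Monro update toward a target acceptance rate. All function and parameter names are illustrative and not taken from the chapter:

```python
import numpy as np

def adaptive_rwm(log_target, x0, n_iter=5000, target_accept=0.44, seed=0):
    """Random-walk Metropolis with Robbins-Monro step-size adaptation.

    The proposal scale is 'learned' online from the chain's own accept
    decisions -- a simple instance of the MCMC-driven learning pattern.
    """
    rng = np.random.default_rng(seed)
    x, log_p = x0, log_target(x0)
    log_sigma = 0.0  # log proposal scale, adapted on the fly
    samples = np.empty(n_iter)
    for t in range(n_iter):
        prop = x + np.exp(log_sigma) * rng.standard_normal()
        log_p_prop = log_target(prop)
        accept = np.log(rng.uniform()) < log_p_prop - log_p
        if accept:
            x, log_p = prop, log_p_prop
        # Decaying Robbins-Monro step pushes the empirical acceptance
        # rate toward target_accept.
        log_sigma += (t + 1) ** -0.6 * (float(accept) - target_accept)
        samples[t] = x
    return samples, np.exp(log_sigma)

# Standard-normal target: the adapted scale settles near the classic
# 1-D optimum of roughly 2.4.
samples, sigma = adaptive_rwm(lambda x: -0.5 * x**2, x0=0.0)
```

The adaptation step is exactly the kind of stochastic-approximation update that the chapter's framework places alongside gradient-based learning of variational or flow parameters.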
Related papers
- AutoStep: Locally adaptive involutive MCMC [51.186543293659376]
AutoStep MCMC selects an appropriate step size at each iteration adapted to the local geometry of the target distribution.
We show that AutoStep MCMC is competitive with state-of-the-art methods in terms of effective sample size per unit cost.
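A rough, illustrative sketch of the idea (not the paper's actual algorithm): grow or shrink a proposal step at the current point until the Metropolis acceptance probability falls inside a target band. AutoStep additionally applies a correction so that the resulting chain remains exact; that correction is omitted here, and all names are hypothetical:

```python
import numpy as np

def local_step_size(x, log_target, rng, init=1.0, lo=0.1, hi=0.9,
                    max_iter=30):
    """Pick a step size adapted to the local geometry (illustrative only).

    Doubles or halves a random-walk step until the Metropolis acceptance
    probability at the current point lies in [lo, hi].
    """
    v = rng.standard_normal()
    eps = init

    def accept_prob(e):
        d = log_target(x + e * v) - log_target(x)
        return 1.0 if d >= 0 else float(np.exp(d))

    for _ in range(max_iter):
        a = accept_prob(eps)
        if a < lo:
            eps *= 0.5      # step too bold here: shrink
        elif a > hi:
            eps *= 2.0      # step too timid here: grow
        else:
            break
    return eps

# A sharply peaked target forces a small step; a nearly flat one
# allows a much larger step from the same starting point.
eps_sharp = local_step_size(0.0, lambda t: -50.0 * t**2,
                            np.random.default_rng(0))
eps_flat = local_step_size(0.0, lambda t: -0.005 * t**2,
                           np.random.default_rng(0))
```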
arXiv Detail & Related papers (2024-10-24T17:17:11Z)
- Scalable Monte Carlo for Bayesian Learning [9.510897794182082]
This book aims to provide a graduate-level introduction to advanced topics in Markov chain Monte Carlo (MCMC) algorithms.
Most of these topics have emerged within the last decade, and have driven substantial recent practical and theoretical advances in the field.
A particular focus is on methods that are scalable with respect to either the amount of data, or the data dimension, motivated by the emerging high-priority application areas in machine learning and AI.
arXiv Detail & Related papers (2024-07-17T17:19:56Z)
- Convergence Bounds for Sequential Monte Carlo on Multimodal Distributions using Soft Decomposition [6.872242798058046]
We prove bounds on the variance of a function $f$ under the empirical measure of the samples obtained by the Sequential Monte Carlo (SMC) algorithm.
We show that bounds can be obtained in the truly multi-modal setting, with mixing times that depend on local MCMC dynamics.
arXiv Detail & Related papers (2024-05-29T22:43:45Z)
- Coreset Markov Chain Monte Carlo [15.310842498680483]
State of the art methods for tuning coreset weights are expensive, require nontrivial user input, and impose constraints on the model.
We propose a new method -- Coreset MCMC -- that simulates a Markov chain targeting the coreset posterior, while simultaneously updating the coreset weights.
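To make "targeting the coreset posterior" concrete, here is a minimal, hypothetical construction for a 1-D Gaussian mean model with known unit variance. All names are illustrative; the paper's method also adapts the weights during sampling, which is omitted here:

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(2.0, 1.0, size=10_000)            # full dataset
idx = rng.choice(data.size, size=50, replace=False)
core = data[idx]                                     # m << n coreset points
w = np.full(core.size, data.size / core.size)        # uniform initial weights

def coreset_log_post(theta):
    """Coreset posterior: prior plus a *weighted* coreset log-likelihood
    standing in for the full-data sum."""
    log_prior = -0.5 * theta**2                      # N(0, 1) prior
    log_lik = -0.5 * np.sum(w * (core - theta)**2)   # known unit variance
    return log_prior + log_lik

# With this conjugate model the coreset posterior mode is available in
# closed form, so we can check it tracks the full-data posterior mode.
core_mode = np.sum(w * core) / (np.sum(w) + 1.0)
full_mode = np.sum(data) / (data.size + 1.0)
```

Coreset MCMC interleaves Markov chain moves on a target like `coreset_log_post` with stochastic updates of `w` so that the coreset posterior approaches the full posterior; only the fixed-weight target is sketched above.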
arXiv Detail & Related papers (2023-10-25T23:53:27Z)
- Knowledge Removal in Sampling-based Bayesian Inference [86.14397783398711]
When a single data-deletion request arrives, companies may need to discard entire models trained with massive resources.
Existing works propose methods to remove knowledge learned from data for explicitly parameterized models.
In this paper, we propose the first machine unlearning algorithm for MCMC.
arXiv Detail & Related papers (2022-03-24T10:03:01Z)
- Continual Repeated Annealed Flow Transport Monte Carlo [93.98285297760671]
We propose Continual Repeated Annealed Flow Transport Monte Carlo (CRAFT), which combines a sequential Monte Carlo sampler with variational inference using normalizing flows.
We show that CRAFT can achieve impressively accurate results on a lattice field example.
arXiv Detail & Related papers (2022-01-31T10:58:31Z)
- MCMC-Interactive Variational Inference [56.58416764959414]
We propose MCMC-interactive variational inference (MIVI) to estimate the posterior in a time-constrained manner.
MIVI takes advantage of the complementary properties of variational inference and MCMC to encourage mutual improvement.
Experiments show that MIVI not only accurately approximates the posteriors but also facilitates designs of gradient MCMC and Gibbs sampling transitions.
arXiv Detail & Related papers (2020-10-02T17:43:20Z)
- Involutive MCMC: a Unifying Framework [64.46316409766764]
We describe a wide range of MCMC algorithms in terms of involutive MCMC (iMCMC).
We formulate a number of "tricks" which one can use as design principles for developing new MCMC algorithms.
We demonstrate the latter with two examples where we transform known reversible MCMC algorithms into more efficient irreversible ones.
arXiv Detail & Related papers (2020-06-30T10:21:42Z)
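The involutive view described in the last entry can be illustrated with a minimal sketch (illustrative names, not the paper's code): random-walk Metropolis arises from the auxiliary variable v ~ N(0, sigma^2) together with the involution f(x, v) = (x + v, -v), which satisfies f(f(x, v)) = (x, v) and has unit Jacobian, so the iMCMC acceptance ratio reduces to the familiar Metropolis ratio:

```python
import numpy as np

def imcmc_step(x, log_target, rng, sigma=1.0):
    """One involutive-MCMC step recovering random-walk Metropolis."""
    v = sigma * rng.standard_normal()        # refresh auxiliary variable
    x_new, v_new = x + v, -v                 # apply the involution

    def log_joint(x_, v_):
        return log_target(x_) - 0.5 * (v_ / sigma) ** 2

    # Unit Jacobian, so no determinant term appears in the ratio.
    if np.log(rng.uniform()) < log_joint(x_new, v_new) - log_joint(x, v):
        return x_new
    return x

# Sample a standard normal target with the involutive step.
rng = np.random.default_rng(0)
x, chain = 0.0, []
for _ in range(5000):
    x = imcmc_step(x, lambda t: -0.5 * t**2, rng)
    chain.append(x)
```

Because v_new = -v has the same density as v, the auxiliary terms cancel and the step is exactly a random-walk Metropolis update; swapping in other involutions (e.g. a leapfrog flow with momentum flip) yields other known samplers in the same template.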
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.