Importance is Important: Generalized Markov Chain Importance Sampling Methods
- URL: http://arxiv.org/abs/2304.06251v2
- Date: Wed, 02 Oct 2024 16:17:33 GMT
- Title: Importance is Important: Generalized Markov Chain Importance Sampling Methods
- Authors: Guanxun Li, Aaron Smith, Quan Zhou
- Abstract summary: We show that for any multiple-try Metropolis algorithm, one can always accept the proposal and evaluate the importance weight that is needed to correct for the bias without extra computational cost.
We propose an alternative MCMC sampler on discrete spaces that is also outside the Metropolis--Hastings framework.
- Score: 4.611170084430822
- License:
- Abstract: We show that for any multiple-try Metropolis algorithm, one can always accept the proposal and evaluate the importance weight that is needed to correct for the bias without extra computational cost. This results in a general, convenient, and rejection-free Markov chain Monte Carlo (MCMC) sampling scheme. By further leveraging the importance sampling perspective on Metropolis--Hastings algorithms, we propose an alternative MCMC sampler on discrete spaces that is also outside the Metropolis--Hastings framework, along with a general theory on its complexity. Numerical examples suggest that the proposed algorithms are consistently more efficient than the original Metropolis--Hastings versions.
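The bias correction described in the abstract rests on the importance-sampling identity: samples drawn from a proposal can be reweighted by the ratio of an unnormalized target to the proposal density. The sketch below illustrates that general principle with a plain self-normalized importance sampling estimator; it is a minimal stand-alone example, not the authors' multiple-try Metropolis scheme, and the target/proposal choices are illustrative assumptions.

```python
import math
import random

def snis_mean(n=200_000, seed=0):
    """Self-normalized importance sampling estimate of E_pi[X],
    where pi is known only up to a normalizing constant."""
    rng = random.Random(seed)
    # Target: N(2, 1), unnormalized (constants cancel in the ratio below).
    log_target = lambda x: -0.5 * (x - 2.0) ** 2
    # Proposal: N(0, 2) -- easy to sample from, heavier-tailed than the target.
    mu_q, sd_q = 0.0, 2.0
    log_q = lambda x: -0.5 * ((x - mu_q) / sd_q) ** 2 - math.log(sd_q)
    num = den = 0.0
    for _ in range(n):
        x = rng.gauss(mu_q, sd_q)
        w = math.exp(log_target(x) - log_q(x))  # importance weight
        num += w * x
        den += w
    return num / den  # converges to E_pi[X] = 2 as n grows
```

Because the estimator normalizes by the total weight, any multiplicative constants in the target or proposal cancel, which is why evaluating the target only up to a normalizing constant suffices.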
Related papers
- Covariance estimation using Markov chain Monte Carlo [2.209921757303168]
We show that when $\pi$ satisfies a Poincaré inequality and the chain possesses a spectral gap, we can achieve similar sample complexity using MCMC.
We provide guarantees regarding isotropic rounding procedures for sampling uniformly on convex bodies.
arXiv Detail & Related papers (2024-10-22T16:27:29Z) - Markov chain Monte Carlo without evaluating the target: an auxiliary variable approach [9.426953273977496]
In sampling tasks, it is common for target distributions to be known up to a normalizing constant.
In many situations, even evaluating the unnormalized distribution can be costly or infeasible.
We develop a novel framework that allows the use of auxiliary variables in both the proposal and the acceptance-rejection step.
arXiv Detail & Related papers (2024-06-07T20:06:23Z) - Exploring the Algorithm-Dependent Generalization of AUPRC Optimization
with List Stability [107.65337427333064]
Optimization of the Area Under the Precision-Recall Curve (AUPRC) is a crucial problem for machine learning.
In this work, we present the first trial in the algorithm-dependent generalization of AUPRC optimization.
Experiments on three image retrieval datasets speak to the effectiveness and soundness of our framework.
arXiv Detail & Related papers (2022-09-27T09:06:37Z) - Langevin Monte Carlo for Contextual Bandits [72.00524614312002]
Langevin Monte Carlo Thompson Sampling (LMC-TS) is proposed to directly sample from the posterior distribution in contextual bandits.
We prove that the proposed algorithm achieves the same sublinear regret bound as the best Thompson sampling algorithms for a special case of contextual bandits.
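The Langevin dynamics underlying LMC-TS can be illustrated with the standard unadjusted Langevin algorithm (ULA) update, x ← x + (η/2)∇log π(x) + √η ξ. The sketch below is a generic one-dimensional ULA targeting a standard normal; it is an illustration of the Langevin update itself, with step size and target chosen for the example, not the contextual-bandit algorithm from the paper.

```python
import math
import random

def ula_samples(n_steps=100_000, step=0.1, seed=0):
    """Unadjusted Langevin algorithm targeting a standard normal.
    For pi = N(0, 1), grad log pi(x) = -x, so each update is
    x <- x + (step/2) * (-x) + sqrt(step) * standard_normal_noise."""
    rng = random.Random(seed)
    x, out = 0.0, []
    for _ in range(n_steps):
        x = x + 0.5 * step * (-x) + math.sqrt(step) * rng.gauss(0.0, 1.0)
        out.append(x)
    return out
```

Note that without a Metropolis accept/reject correction, ULA's stationary distribution is only approximately the target; the discretization bias shrinks as the step size decreases.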
arXiv Detail & Related papers (2022-06-22T17:58:23Z) - A Survey of Monte Carlo Methods for Parameter Estimation [0.0]
This paper reviews Monte Carlo (MC) methods for the estimation of static parameters in signal processing applications.
A historical note on the development of MC schemes is also provided, followed by the basic MC method and a brief description of the rejection sampling (RS) algorithm.
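The rejection sampling (RS) algorithm mentioned above draws a candidate from a proposal and accepts it with probability f(x) / (M q(x)), where M bounds the density ratio. A minimal sketch, using an illustrative Beta(2,2) target with a uniform proposal:

```python
import random

def rejection_sample(n, seed=0):
    """Draw n samples from the density proportional to f(x) = x * (1 - x)
    on [0, 1] (a Beta(2, 2)) using a Uniform(0, 1) proposal.
    The envelope constant M = 0.25 is the maximum of f, attained at x = 0.5."""
    rng = random.Random(seed)
    f = lambda x: x * (1.0 - x)
    M = 0.25
    out = []
    while len(out) < n:
        x = rng.random()              # candidate from the proposal
        if rng.random() * M < f(x):   # accept with probability f(x) / M
            out.append(x)
    return out
```

Accepted draws are exact samples from the target, but the acceptance rate degrades as M grows, which is one motivation for the MCMC alternatives surveyed here.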
arXiv Detail & Related papers (2021-07-25T14:57:58Z) - Sampling in Combinatorial Spaces with SurVAE Flow Augmented MCMC [83.48593305367523]
Hybrid Monte Carlo is a powerful Markov Chain Monte Carlo method for sampling from complex continuous distributions.
We introduce a new approach based on augmenting Monte Carlo methods with SurVAE Flows to sample from discrete distributions.
We demonstrate the efficacy of our algorithm on a range of examples from statistics, computational physics and machine learning, and observe improvements compared to alternative algorithms.
arXiv Detail & Related papers (2021-02-04T02:21:08Z) - Accelerating MCMC algorithms through Bayesian Deep Networks [7.054093620465401]
Markov Chain Monte Carlo (MCMC) algorithms are commonly used for their versatility in sampling from complicated probability distributions.
As the dimension of the distribution gets larger, the computational costs for a satisfactory exploration of the sampling space become challenging.
We show an alternative way of performing adaptive MCMC, by using the outcome of Bayesian Neural Networks as the initial proposal for the Markov Chain.
arXiv Detail & Related papers (2020-11-29T04:29:00Z) - Adaptive Sampling for Best Policy Identification in Markov Decision
Processes [79.4957965474334]
We investigate the problem of best-policy identification in discounted Markov Decision Processes (MDPs) when the learner has access to a generative model.
The advantages of state-of-the-art algorithms are discussed and illustrated.
arXiv Detail & Related papers (2020-09-28T15:22:24Z) - A Parallel Evolutionary Multiple-Try Metropolis Markov Chain Monte Carlo
Algorithm for Sampling Spatial Partitions [0.0]
We develop an Evolutionary Markov Chain Monte Carlo (EMCMC) algorithm for sampling spatial partitions that lie within a spatial state space.
Our algorithm combines the advantages of evolutionary algorithms (EAs) in traversing large and complex state spaces with the theoretical convergence properties of Markov Chain Monte Carlo algorithms for sampling from unknown distributions.
We further expand the reach of our EMCMC algorithm by harnessing the computational power afforded by massively parallel architecture.
arXiv Detail & Related papers (2020-07-22T14:28:44Z) - Active Model Estimation in Markov Decision Processes [108.46146218973189]
We study the problem of efficient exploration in order to learn an accurate model of an environment, modeled as a Markov decision process (MDP).
We show that our Markov-based algorithm outperforms both our original algorithm and the maximum entropy algorithm in the small sample regime.
arXiv Detail & Related papers (2020-03-06T16:17:24Z) - Batch Stationary Distribution Estimation [98.18201132095066]
We consider the problem of approximating the stationary distribution of an ergodic Markov chain given a set of sampled transitions.
We propose a consistent estimator that is based on recovering a correction ratio function over the given data.
arXiv Detail & Related papers (2020-03-02T09:10:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences.