SpreadNUTS -- Moderate Dynamic Extension of Paths for No-U-Turn Sampling
& Partitioning Visited Regions
- URL: http://arxiv.org/abs/2307.06279v1
- Date: Sun, 9 Jul 2023 05:00:25 GMT
- Authors: Fareed Sheriff
- Abstract summary: This paper introduces modifications to the no-U-turn sampler (NUTS), a Hamiltonian Monte Carlo (HMC) algorithm. The modified sampler, SpreadNUTS, aims to explore the sample space faster than NUTS, yielding faster convergence to the true distribution.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Markov chain Monte Carlo (MCMC) methods have existed for a long time and the
field is well-explored. The purpose of MCMC methods is to approximate a
distribution through repeated sampling; most MCMC algorithms exhibit
asymptotically optimal behavior in that they converge to the true distribution
at the limit. However, what differentiates these algorithms are their practical
convergence guarantees and efficiency. While a sampler may eventually
approximate a distribution well, because it is used in the real world it is
necessary that the point at which the sampler yields a good estimate of the
distribution is reachable in a reasonable amount of time. Similarly, if it is
computationally difficult or intractable to produce good samples from a
distribution for use in estimation, then there is no real-world utility
afforded by the sampler. Thus, most MCMC methods these days focus on improving
efficiency and speeding up convergence. However, many MCMC algorithms suffer
from random walk behavior and often only mitigate such behavior as outright
erasing random walks is difficult. Hamiltonian Monte Carlo (HMC) is a class of
MCMC methods that theoretically exhibit no random walk behavior because of
properties related to Hamiltonian dynamics. This paper introduces modifications
to a specific HMC algorithm known as the no-U-turn sampler (NUTS) that aim to
explore the sample space faster than NUTS, yielding a sampler with faster
convergence to the true distribution.
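The HMC machinery the abstract describes can be sketched concretely. The following is a minimal illustration, not the paper's implementation: a standard leapfrog integrator for Hamiltonian dynamics and the usual NUTS U-turn stopping rule; all function names are illustrative.

```python
import numpy as np

def leapfrog(theta, r, grad_log_p, eps, n_steps):
    """Simulate Hamiltonian dynamics with the leapfrog integrator.
    theta: position (parameters), r: momentum,
    grad_log_p: gradient of the log target density."""
    theta, r = theta.astype(float).copy(), r.astype(float).copy()
    r = r + 0.5 * eps * grad_log_p(theta)      # initial half-step for momentum
    for _ in range(n_steps - 1):
        theta = theta + eps * r                # full position step
        r = r + eps * grad_log_p(theta)        # full momentum step
    theta = theta + eps * r                    # final position step
    r = r + 0.5 * eps * grad_log_p(theta)      # final half-step for momentum
    return theta, r

def is_u_turn(theta_minus, theta_plus, r_minus, r_plus):
    """NUTS stopping rule: the trajectory has begun to double back
    when its endpoints start moving toward each other."""
    dtheta = theta_plus - theta_minus
    return bool(dtheta @ r_minus < 0 or dtheta @ r_plus < 0)
```

On a standard Gaussian target (`grad_log_p = lambda x: -x`), leapfrog approximately conserves the Hamiltonian; this near-conservation is what lets HMC propose distant states without random-walk behavior, and the U-turn test is what lets NUTS pick trajectory lengths adaptively.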
Related papers
- Convergence Bounds for Sequential Monte Carlo on Multimodal Distributions using Soft Decomposition [6.872242798058046]
We prove bounds on the variance of a function $f$ under the empirical measure of the samples obtained by the Sequential Monte Carlo (SMC) algorithm.
We show that bounds can be obtained in the truly multi-modal setting, with mixing times that depend on local MCMC dynamics.
arXiv Detail & Related papers (2024-05-29T22:43:45Z) - On Cyclical MCMC Sampling [13.002470980542707]
We show that cyclical MCMC converges to the desired probability distribution in settings where the Markov kernels used are fast mixing.
We also show that cyclical MCMC estimates well the local shape of the target distribution around each mode, even when we do not have convergence to the target.
arXiv Detail & Related papers (2024-03-01T02:20:44Z) - Reverse Diffusion Monte Carlo [19.35592726471155]
We propose a novel Monte Carlo sampling algorithm called reverse diffusion Monte Carlo (rdMC).
rdMC is distinct from the Markov chain Monte Carlo (MCMC) methods.
arXiv Detail & Related papers (2023-07-05T05:42:03Z) - Bayesian Decision Trees Inspired from Evolutionary Algorithms [64.80360020499555]
We propose replacing Markov chain Monte Carlo (MCMC) with an inherently parallel algorithm, Sequential Monte Carlo (SMC).
Experiments show that SMC combined with evolutionary algorithms (EA) can produce more accurate results than MCMC in 100 times fewer iterations.
arXiv Detail & Related papers (2023-05-30T06:17:35Z) - Parallel Approaches to Accelerate Bayesian Decision Trees [1.9728521995447947]
We propose two methods for exploiting parallelism in the MCMC.
In the first, we replace the MCMC with another numerical Bayesian approach.
In the second, we consider data partitioning.
arXiv Detail & Related papers (2023-01-22T09:56:26Z) - Langevin Monte Carlo for Contextual Bandits [72.00524614312002]
Langevin Monte Carlo Thompson Sampling (LMC-TS) is proposed to directly sample from the posterior distribution in contextual bandits.
We prove that the proposed algorithm achieves the same sublinear regret bound as the best Thompson sampling algorithms for a special case of contextual bandits.
arXiv Detail & Related papers (2022-06-22T17:58:23Z) - Sampling in Combinatorial Spaces with SurVAE Flow Augmented MCMC [83.48593305367523]
Hybrid Monte Carlo is a powerful Markov Chain Monte Carlo method for sampling from complex continuous distributions.
We introduce a new approach based on augmenting Monte Carlo methods with SurVAE Flows to sample from discrete distributions.
We demonstrate the efficacy of our algorithm on a range of examples from statistics, computational physics and machine learning, and observe improvements compared to alternative algorithms.
arXiv Detail & Related papers (2021-02-04T02:21:08Z) - AMAGOLD: Amortized Metropolis Adjustment for Efficient Stochastic Gradient MCMC [37.768023232677244]
Stochastic gradient Hamiltonian Monte Carlo (SGHMC) is an efficient method for sampling from continuous distributions.
We propose a novel second-order SG-MCMC algorithm---AMAGOLD---that infrequently uses Metropolis-Hastings (M-H) corrections to remove bias.
We prove AMAGOLD converges to the target distribution with a fixed, rather than a diminishing, step size, and that its convergence rate is at most a constant factor slower than a full-batch baseline.
arXiv Detail & Related papers (2020-02-29T06:57:43Z) - On Thompson Sampling with Langevin Algorithms [106.78254564840844]
Thompson sampling for multi-armed bandit problems enjoys favorable performance in both theory and practice.
It suffers from a significant limitation computationally, arising from the need for samples from posterior distributions at every iteration.
We propose two Markov Chain Monte Carlo (MCMC) methods tailored to Thompson sampling to address this issue.
arXiv Detail & Related papers (2020-02-23T22:35:29Z) - Improving Sampling Accuracy of Stochastic Gradient MCMC Methods via Non-uniform Subsampling of Gradients [54.90670513852325]
We propose a non-uniform subsampling scheme to improve the sampling accuracy.
The proposed scheme, EWSG, is designed so that a non-uniform gradient-MCMC method mimics the statistical behavior of a batch-gradient-MCMC method.
In our practical implementation of EWSG, the non-uniform subsampling is performed efficiently via a Metropolis-Hastings chain on the data index.
arXiv Detail & Related papers (2020-02-20T18:56:18Z) - Targeted stochastic gradient Markov chain Monte Carlo for hidden Markov models with rare latent states [48.705095800341944]
Markov chain Monte Carlo (MCMC) algorithms for hidden Markov models often rely on the forward-backward sampler.
This makes them computationally slow as the length of the time series increases, motivating the development of sub-sampling-based approaches.
We propose a targeted sub-sampling approach that over-samples observations corresponding to rare latent states when calculating the gradient of parameters associated with them.
arXiv Detail & Related papers (2018-10-31T17:44:20Z)
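Several of the papers above rely on non-uniform subsampling of gradients with importance reweighting (EWSG's Metropolis-Hastings chain on data indices, and the targeted over-sampling of rare latent states). A minimal sketch of the underlying unbiased estimator, with illustrative names and none of the papers' specific index-selection machinery, is:

```python
import numpy as np

def subsampled_gradient(grads, probs, batch_size, rng):
    """Unbiased estimate of the mean per-example gradient from a
    non-uniform subsample: draw index i with probability probs[i]
    and reweight its gradient by 1 / (N * probs[i]), so that
    E[g_i / (N * probs[i])] = (1/N) * sum_i g_i."""
    n = len(grads)
    idx = rng.choice(n, size=batch_size, p=probs)
    est = np.zeros_like(grads[0])
    for i in idx:
        est += grads[i] / (n * probs[i])
    return est / batch_size
```

Choosing `probs` to over-sample informative examples (e.g. those tied to rare latent states) reduces the variance of this estimator while the reweighting keeps it unbiased.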
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.