Truncated Log-concave Sampling with Reflective Hamiltonian Monte Carlo
- URL: http://arxiv.org/abs/2102.13068v1
- Date: Thu, 25 Feb 2021 18:34:45 GMT
- Title: Truncated Log-concave Sampling with Reflective Hamiltonian Monte Carlo
- Authors: Apostolos Chalkis, Vissarion Fisikopoulos, Marios Papachristou, Elias
Tsigaridas
- Abstract summary: We introduce Reflective Hamiltonian Monte Carlo (ReHMC), an HMC-based algorithm, to sample from a log-concave distribution restricted to a convex polytope.
We prove that, starting from a warm start, it mixes in $\widetilde O(\kappa d^2 \ell^2 \log (1 / \varepsilon))$ steps for a well-rounded polytope.
Experiments suggest that ReHMC outperforms Hit-and-Run and Coordinate-Hit-and-Run in terms of the time it needs to produce an independent sample.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We introduce Reflective Hamiltonian Monte Carlo (ReHMC), an HMC-based
algorithm, to sample from a log-concave distribution restricted to a convex
polytope. We prove that, starting from a warm start, it mixes in $\widetilde
O(\kappa d^2 \ell^2 \log (1 / \varepsilon))$ steps for a well-rounded polytope,
ignoring logarithmic factors, where $\kappa$ is the condition number of the
negative log-density, $d$ is the dimension, $\ell$ is an upper bound on the
number of reflections, and $\varepsilon$ is the accuracy parameter. We also
developed an open-source implementation of ReHMC and performed an
experimental study on various high-dimensional datasets. Experiments suggest
that ReHMC outperforms Hit-and-Run and Coordinate-Hit-and-Run in terms of
the time it needs to produce an independent sample.
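To make the reflection mechanics concrete, here is a minimal Python sketch of one leapfrog trajectory whose momentum reflects off the facets of the polytope $\{x : Ax \le b\}$. It is a simplified reading of the abstract rather than the authors' implementation, it omits the Metropolis accept/reject filter that completes an HMC transition, and all names are illustrative:

```python
import numpy as np

def reflective_leapfrog(x, v, grad_f, A, b, step, n_leap, max_refl=1000):
    """Leapfrog trajectory with momentum reflections at the facets of
    {x : Ax <= b}; grad_f is the gradient of the negative log-density."""
    for _ in range(n_leap):
        v = v - 0.5 * step * grad_f(x)            # half-step momentum kick
        t_left = step
        for _ in range(max_refl):                 # position step, with reflections
            rates = A @ v                         # approach speed toward each facet
            slack = b - A @ x                     # distance to each facet
            with np.errstate(divide="ignore", invalid="ignore"):
                times = np.where(rates > 0, slack / rates, np.inf)
            i = int(np.argmin(times))
            if times[i] >= t_left:                # no facet is hit in the time left
                x = x + t_left * v
                break
            x = x + times[i] * v                  # walk exactly to the facet ...
            a = A[i]
            v = v - 2.0 * (a @ v) / (a @ a) * a   # ... and reflect the momentum
            t_left -= times[i]
        v = v - 0.5 * step * grad_f(x)            # half-step momentum kick
    return x, v
```

The reflection $v \mapsto v - 2 \frac{a^\top v}{\|a\|^2} a$ is the standard billiard bounce off the hyperplane $a^\top x = b_i$; the bound $\ell$ in the mixing-time statement caps how many such bounces a single trajectory may take.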
Related papers
- Projection by Convolution: Optimal Sample Complexity for Reinforcement Learning in Continuous-Space MDPs [56.237917407785545]
We consider the problem of learning an $\varepsilon$-optimal policy in a general class of continuous-space Markov decision processes (MDPs) having smooth Bellman operators.
Key to our solution is a novel projection technique based on ideas from harmonic analysis.
Our result bridges the gap between two popular but conflicting perspectives on continuous-space MDPs.
arXiv Detail & Related papers (2024-05-10T09:58:47Z)
- Efficient Sampling on Riemannian Manifolds via Langevin MCMC [51.825900634131486]
We study the task of efficiently sampling from a Gibbs distribution $d\pi^* = e^{-h} \, d\mathrm{vol}_g$ over a Riemannian manifold $M$ via (geometric) Langevin MCMC.
Our results apply to general settings where $\pi^*$ can be non-log-concave and $M$ can have negative Ricci curvature.
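For context, one step of the Euclidean special case of unadjusted Langevin MCMC is shown below; the geometric variant replaces the straight-line move with the exponential map of $M$ and draws the noise in the tangent space. A minimal sketch with illustrative names:

```python
import numpy as np

def langevin_step(x, grad_h, step, rng):
    """One unadjusted Langevin step for the density proportional to e^{-h}:
    gradient drift plus Gaussian noise of matched scale."""
    noise = rng.standard_normal(x.shape)
    return x - step * grad_h(x) + np.sqrt(2.0 * step) * noise
```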
arXiv Detail & Related papers (2024-02-15T22:59:14Z)
- Composite QDrift-Product Formulas for Quantum and Classical Simulations in Real and Imaginary Time [0.18374319565577155]
Recent work has shown that it can be advantageous to implement a composite channel that partitions the Hamiltonian $H$ for a given simulation problem into subsets.
We show that this approach holds in imaginary time, making it a candidate classical algorithm for quantum Monte Carlo calculations.
We provide exact numerical simulations of algorithmic cost by counting the number of gates of the form $e^{-i H_j t}$ and $e^{-H_j \beta}$ needed to meet a given error tolerance.
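For orientation, the QDrift primitive underlying such composite channels draws Hamiltonian terms with probability proportional to their norms; a hypothetical sketch of that sampling step (not the paper's composite partitioning scheme):

```python
import numpy as np

def qdrift_sequence(norms, t, n_gates, rng):
    """Sample a QDrift-style gate sequence for H = sum_j H_j, where
    norms[j] bounds ||H_j||. Each sampled index j stands for one gate
    exp(-i * H_j * tau) with tau = lam * t / n_gates, lam = sum(norms)."""
    lam = norms.sum()
    idx = rng.choice(len(norms), size=n_gates, p=norms / lam)
    tau = lam * t / n_gates
    return idx, tau
```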
arXiv Detail & Related papers (2023-06-28T21:31:26Z)
- Near Sample-Optimal Reduction-based Policy Learning for Average Reward MDP [58.13930707612128]
This work considers the sample complexity of obtaining an $\varepsilon$-optimal policy in an average reward Markov Decision Process (AMDP).
We prove an upper bound of $\widetilde O(H \varepsilon^{-3} \ln \frac{1}{\delta})$ samples per state-action pair, where $H := \mathrm{sp}(h^*)$ is the span of the bias of any optimal policy, $\varepsilon$ is the accuracy, and $\delta$ is the failure probability.
arXiv Detail & Related papers (2022-12-01T15:57:58Z)
- Hamiltonian Monte Carlo for efficient Gaussian sampling: long and random steps [0.0]
Hamiltonian Monte Carlo (HMC) is a Markov chain algorithm for sampling from a high-dimensional distribution with density $e^{-f(x)}$.
We show that HMC can sample from a distribution that is $\varepsilon$-close in total variation distance using $\widetilde O(\sqrt{\kappa} \, d^{1/4} \log(1/\varepsilon))$ gradient queries.
arXiv Detail & Related papers (2022-09-26T15:29:29Z)
- Accelerating Hamiltonian Monte Carlo via Chebyshev Integration Time [13.427128424538502]
Hamiltonian Monte Carlo (HMC) is a popular method in sampling.
We propose a scheme of time-varying integration time based on the roots of Chebyshev polynomials.
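One plausible reading of such a scheme (not the paper's exact prescription) maps the Chebyshev roots onto the eigenvalue interval $[m, L]$ of an $m$-strongly convex, $L$-smooth target and derives an integration time from each root:

```python
import numpy as np

def chebyshev_integration_times(m, L, K):
    """Candidate time-varying integration times for HMC: map the K roots
    of the Chebyshev polynomial T_K onto [m, L], set t_k = 1/sqrt(lam_k)."""
    k = np.arange(1, K + 1)
    roots = np.cos((2 * k - 1) * np.pi / (2 * K))  # roots of T_K in (-1, 1)
    lam = 0.5 * (L + m) + 0.5 * (L - m) * roots    # affine map onto [m, L]
    return 1.0 / np.sqrt(lam)
```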
arXiv Detail & Related papers (2022-07-05T17:42:22Z)
- Settling the Sample Complexity of Model-Based Offline Reinforcement Learning [50.5790774201146]
Offline reinforcement learning (RL) learns a policy from pre-collected data without further exploration.
Prior algorithms or analyses either suffer from suboptimal sample complexities or incur high burn-in cost to reach sample optimality.
We demonstrate that the model-based (or "plug-in") approach achieves minimax-optimal sample complexity without burn-in cost.
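In its simplest form, the plug-in approach builds the empirical MDP from the offline dataset and plans in it; a bare sketch with illustrative names (the analyses in this line of work additionally add pessimism penalties for rarely visited state-action pairs):

```python
import numpy as np

def plug_in_policy(counts, rewards, gamma, n_iter=500):
    """counts[s, a, s'] are offline transition counts, rewards[s, a] the
    rewards; build the empirical model and run value iteration on it."""
    totals = np.maximum(counts.sum(axis=2, keepdims=True), 1)
    P_hat = counts / totals                          # empirical transitions
    q = np.zeros_like(rewards, dtype=float)
    for _ in range(n_iter):
        q = rewards + gamma * P_hat @ q.max(axis=1)  # Bellman optimality update
    return q.argmax(axis=1)                          # greedy policy in the model
```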
arXiv Detail & Related papers (2022-04-11T17:26:19Z)
- An Introduction to Hamiltonian Monte Carlo Method for Sampling [26.555110725656963]
Hamiltonian Monte Carlo (HMC) is a Hamiltonian dynamics-inspired algorithm for sampling from a Gibbs density $\pi(x) \propto e^{-f(x)}$.
We show that idealized HMC preserves $pi$ and we establish its convergence when $f$ is strongly convex and smooth.
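Concretely, idealized HMC augments the state with a Gaussian momentum $v$ and follows the exact flow of the Hamiltonian $H(x, v) = f(x) + \frac{1}{2} \|v\|^2$:

$$\frac{dx}{dt} = v, \qquad \frac{dv}{dt} = -\nabla f(x).$$

Running this flow for a fixed time and then resampling $v$ leaves $\pi$ invariant; practical HMC discretizes the flow, which is where step-size bias and Metropolis corrections enter.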
arXiv Detail & Related papers (2021-08-27T03:28:20Z)
- Mixing Time Guarantees for Unadjusted Hamiltonian Monte Carlo [1.14219428942199]
We provide quantitative upper bounds on the total variation mixing time of the Markov chain corresponding to the unadjusted Hamiltonian Monte Carlo (uHMC) algorithm.
For two general classes of models and fixed time discretization step size $h$, the mixing time is shown to depend only logarithmically on the dimension.
We show that an $\varepsilon$-accurate approximation of the target distribution $\mu$ in total variation distance can be achieved by uHMC.
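The "unadjusted" qualifier means the Metropolis filter is skipped, so the discretization bias enters the total-variation error directly; a minimal sketch of one uHMC transition with illustrative names:

```python
import numpy as np

def uhmc_step(x, grad_f, step, n_leap, rng):
    """Resample the momentum, run leapfrog, keep the endpoint
    unconditionally (no accept/reject)."""
    v = rng.standard_normal(x.shape)      # fresh Gaussian momentum
    v = v - 0.5 * step * grad_f(x)        # half kick
    for i in range(n_leap):
        x = x + step * v                  # drift
        if i < n_leap - 1:
            v = v - step * grad_f(x)      # full kicks between drifts
    return x                              # unadjusted: endpoint always kept
```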
arXiv Detail & Related papers (2021-05-03T14:13:47Z)
- A New Framework for Variance-Reduced Hamiltonian Monte Carlo [88.84622104944503]
We propose a new framework of variance-reduced Hamiltonian Monte Carlo (HMC) methods for sampling from an $L$-smooth and $m$-strongly log-concave distribution.
We show that HMC methods based on unbiased gradient estimators, including SAGA and SVRG, achieve the highest gradient efficiency with small batch sizes.
Experimental results on both synthetic and real-world benchmark data show that our new framework significantly outperforms both full-gradient and stochastic-gradient HMC approaches.
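The SVRG-style control-variate estimator at the heart of such methods can be sketched as follows; it stays unbiased while its variance shrinks as the iterate approaches the snapshot point. Names are illustrative, and the paper plugs such estimators into the HMC momentum updates:

```python
import numpy as np

def svrg_gradient(x, x_snap, g_snap, grad_i, batch):
    """Unbiased estimate of grad f(x) for f = (1/n) sum_i f_i, given the
    full gradient g_snap at the snapshot x_snap and per-component
    gradients grad_i(z, i)."""
    g = np.zeros_like(x)
    for i in batch:                           # small mini-batch of indices
        g += grad_i(x, i) - grad_i(x_snap, i)
    return g / len(batch) + g_snap            # control variate keeps it unbiased
```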
arXiv Detail & Related papers (2021-02-09T02:44:24Z)
- Sample Complexity of Asynchronous Q-Learning: Sharper Analysis and Variance Reduction [63.41789556777387]
Asynchronous Q-learning aims to learn the optimal action-value function (or Q-function) of a Markov decision process (MDP).
We show that the number of samples needed to yield an entrywise $\varepsilon$-accurate estimate of the Q-function is at most on the order of $\frac{1}{\mu_{\min}(1-\gamma)^5 \varepsilon^2} + \frac{t_{\mathrm{mix}}}{\mu_{\min}(1-\gamma)}$, up to some logarithmic factor.
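For reference, the asynchronous update that this bound analyzes touches only the entry visited by the behavior trajectory, which is why the minimum occupancy $\mu_{\min}$ and the mixing time $t_{\mathrm{mix}}$ appear; a minimal sketch:

```python
import numpy as np

def async_q_update(Q, s, a, r, s_next, alpha, gamma):
    """One asynchronous Q-learning step: move only the visited entry
    Q[s, a] toward its bootstrapped TD target."""
    target = r + gamma * Q[s_next].max()
    Q[s, a] += alpha * (target - Q[s, a])
    return Q
```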
arXiv Detail & Related papers (2020-06-04T17:51:00Z)