Connecting the Dots: Numerical Randomized Hamiltonian Monte Carlo with
State-Dependent Event Rates
- URL: http://arxiv.org/abs/2005.01285v3
- Date: Mon, 31 Jan 2022 17:25:29 GMT
- Title: Connecting the Dots: Numerical Randomized Hamiltonian Monte Carlo with
State-Dependent Event Rates
- Authors: Tore Selland Kleppe
- Abstract summary: We introduce a robust, easy-to-use, and computationally fast alternative to conventional Markov chain Monte Carlo methods for continuous target distributions.
The proposed algorithm may yield large speedups and improvements in stability relative to relevant benchmarks.
Granted access to a high-quality ODE code, the proposed methodology is both easy to implement and use, even for highly challenging and high-dimensional target distributions.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Numerical Generalized Randomized Hamiltonian Monte Carlo is introduced as a
robust, easy-to-use, and computationally fast alternative to conventional Markov
chain Monte Carlo methods for continuous target distributions. A wide class of
piecewise deterministic Markov processes generalizing Randomized HMC (Bou-Rabee
and Sanz-Serna, 2017) by allowing for state-dependent event rates is defined.
Under very mild restrictions, such processes will have the desired target
distribution as an invariant distribution. Secondly, the numerical
implementation of such processes, based on adaptive numerical integration of
second-order ordinary differential equations (ODEs), is considered. The
numerical implementation yields an approximate, yet highly robust algorithm
that, unlike conventional Hamiltonian Monte Carlo, enables the exploitation of
the complete Hamiltonian trajectories (hence the title). The proposed algorithm
may yield large speedups and improvements in stability relative to relevant
benchmarks, while incurring numerical biases that are negligible relative to
the overall Monte Carlo errors. Granted access to a high-quality ODE code, the
proposed methodology is both easy to implement and use, even for highly
challenging and high-dimensional target distributions.
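A minimal sketch of the underlying idea (illustrative only, not the paper's algorithm): the constant-rate special case of Randomized HMC, with fixed-step leapfrog standing in for the paper's adaptive ODE integration and state-dependent event rates.

    import numpy as np

    def randomized_hmc(grad_logpi, x0, rate, step, n_events, rng):
        # Toy Randomized HMC (constant event rate, the Bou-Rabee/Sanz-Serna
        # special case): leapfrog-integrate the Hamiltonian ODEs
        # dx/dt = p, dp/dt = grad log pi(x), and fully refresh the momentum
        # at Exp(rate) event times. No Metropolis correction: the O(step^2)
        # bias stands in for the adaptively controlled ODE error of the paper.
        x = np.asarray(x0, dtype=float)
        samples = []
        for _ in range(n_events):
            p = rng.standard_normal(x.shape)        # momentum refresh event
            n_steps = max(1, int(rng.exponential(1.0 / rate) / step))
            p = p + 0.5 * step * grad_logpi(x)      # leapfrog: initial half-kick
            for _ in range(n_steps - 1):
                x = x + step * p
                p = p + step * grad_logpi(x)
            x = x + step * p
            p = p + 0.5 * step * grad_logpi(x)      # final half-kick
            samples.append(x.copy())
        return np.array(samples)

    # usage: standard bivariate normal target, grad log pi(x) = -x
    rng = np.random.default_rng(0)
    draws = randomized_hmc(lambda x: -x, np.zeros(2), rate=1.0, step=0.05,
                           n_events=2000, rng=rng)
    print(draws.mean(axis=0), draws.var(axis=0))    # approx. 0 and 1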
Related papers
- Non-linear Quantum Monte Carlo [1.237454174824584]
Quantum computing provides a quadratic speedup over classical Monte Carlo methods for mean estimation.
We propose a quantum-inside-quantum Monte Carlo algorithm that achieves such a speedup for a broad class of non-linear estimation problems.
arXiv Detail & Related papers (2025-02-07T17:13:27Z)
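For context on the claimed quadratic speedup, a quick numerical check of the classical rate (illustrative; not from the paper):

    import numpy as np

    # Classical baseline behind the quoted speedup: the RMSE of a Monte
    # Carlo mean estimate shrinks like n**-0.5, so accuracy eps costs
    # O(1/eps**2) samples; quantum amplitude estimation reduces this to
    # O(1/eps).
    rng = np.random.default_rng(1)
    for n in (10**2, 10**3, 10**4):
        estimates = rng.standard_normal((500, n)).mean(axis=1)
        print(n, estimates.std())     # empirical RMSE, roughly n**-0.5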
- Randomized Kaczmarz Methods with Beyond-Krylov Convergence [8.688801614519988]
We introduce Kaczmarz++, an accelerated randomized block Kaczmarz algorithm that exploits outlying singular values in the input to attain fast Krylov-style convergence.
We show that Kaczmarz++ captures large outlying singular values provably faster than popular Krylov methods, for both over- and under-determined systems.
arXiv Detail & Related papers (2025-01-20T18:55:51Z)
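A sketch of the classical baseline that Kaczmarz++ accelerates (names and parameters are illustrative):

    import numpy as np

    def randomized_kaczmarz(A, b, n_iters, rng):
        # Classical randomized Kaczmarz for Ax = b: project the iterate onto
        # a random row's hyperplane, rows sampled proportionally to their
        # squared norms (Strohmer-Vershynin). Kaczmarz++ accelerates this
        # with block updates that capture outlying singular values; this
        # sketch is only the baseline it builds on.
        x = np.zeros(A.shape[1])
        row_norms = np.einsum('ij,ij->i', A, A)
        probs = row_norms / row_norms.sum()
        for _ in range(n_iters):
            i = rng.choice(A.shape[0], p=probs)
            x += (b[i] - A[i] @ x) / row_norms[i] * A[i]
        return x

    rng = np.random.default_rng(0)
    A = rng.standard_normal((200, 20))
    x_true = rng.standard_normal(20)
    print(np.linalg.norm(randomized_kaczmarz(A, A @ x_true, 5000, rng) - x_true))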
- Multi-fidelity Hamiltonian Monte Carlo [1.86413150130483]
We propose a novel two-stage Hamiltonian Monte Carlo algorithm with a surrogate model.
The acceptance probability is computed in the first stage via a standard HMC proposal.
If the proposal is accepted, the posterior is evaluated in the second stage using the high-fidelity numerical solver.
arXiv Detail & Related papers (2024-05-08T13:03:55Z)
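A minimal sketch of the two-stage (delayed-acceptance) mechanism, using a symmetric random-walk proposal in place of the paper's HMC proposal; everything here is illustrative:

    import numpy as np

    def delayed_acceptance_mh(logp_cheap, logp_exact, x0, step, n, rng):
        # Two-stage Metropolis: stage one screens proposals with the cheap
        # surrogate; only survivors evaluate the expensive exact density,
        # and stage two corrects so the exact posterior stays invariant.
        x = float(x0)
        lc, le = logp_cheap(x), logp_exact(x)
        out = []
        for _ in range(n):
            xp = x + step * rng.standard_normal()   # symmetric proposal
            lcp = logp_cheap(xp)
            if np.log(rng.uniform()) < lcp - lc:            # stage 1
                lep = logp_exact(xp)
                if np.log(rng.uniform()) < (lep - le) - (lcp - lc):  # stage 2
                    x, lc, le = xp, lcp, lep
            out.append(x)
        return np.array(out)

    # usage: Gaussian surrogate for a quartic-tilted "expensive" density
    rng = np.random.default_rng(0)
    draws = delayed_acceptance_mh(lambda x: -0.5 * x**2,
                                  lambda x: -0.5 * x**2 - 0.1 * x**4,
                                  0.0, 1.0, 20_000, rng)
    print(draws.mean(), draws.var())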
- Combining Normalizing Flows and Quasi-Monte Carlo [0.0]
Recent advances in machine learning have led to the development of new techniques for enhancing Monte Carlo methods.
We demonstrate through numerical experiments that this combination can yield an estimator with significantly lower variance than sampling the flow with classic Monte Carlo.
arXiv Detail & Related papers (2024-01-11T14:17:06Z)
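A minimal sketch of the flow-plus-QMC idea, with the Gaussian inverse CDF standing in for a trained flow (assumes SciPy; illustrative only):

    import numpy as np
    from scipy.stats import norm, qmc

    # A trained flow is a deterministic map T applied to base samples, so it
    # can be driven by quasi-random points instead of pseudo-random ones.
    # Here T is just the Gaussian inverse CDF standing in for a real flow.
    T = norm.ppf
    f = lambda x: x ** 2                       # integrand; E[f] = 1 under N(0,1)

    n = 2 ** 12
    u_mc = np.random.default_rng(0).uniform(size=(n, 1))
    u_qmc = qmc.Sobol(d=1, scramble=True, seed=0).random(n)
    print("MC :", float(f(T(u_mc)).mean()))    # plain Monte Carlo through T
    print("QMC:", float(f(T(u_qmc)).mean()))   # scrambled Sobol through T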
- Automatic Rao-Blackwellization for Sequential Monte Carlo with Belief Propagation [4.956977275061968]
Exact Bayesian inference on state-space models (SSMs) is in general intractable.
We propose a mixed inference algorithm that computes closed-form solutions using belief propagation as much as possible.
arXiv Detail & Related papers (2023-12-15T15:05:25Z)
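The Rao-Blackwell identity the paper automates, in toy form (illustrative only):

    import numpy as np

    # Core identity: replacing f(X, Y) by E[f(X, Y) | X], when available in
    # closed form, never increases variance (Rao-Blackwell). Here the target
    # is E[Y] with X ~ N(0,1) and Y | X ~ N(X, 1), so E[Y | X] = X.
    rng = np.random.default_rng(0)
    x = rng.standard_normal(10_000)
    y = x + rng.standard_normal(10_000)
    print("naive MC      :", y.mean())    # variance 2/n
    print("Rao-Blackwell :", x.mean())    # variance 1/n, same target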
- Monte Carlo Neural PDE Solver for Learning PDEs via Probabilistic Representation [59.45669299295436]
We propose a Monte Carlo PDE solver for training unsupervised neural solvers.
We use the PDEs' probabilistic representation, which regards macroscopic phenomena as ensembles of random particles.
Our experiments on convection-diffusion, Allen-Cahn, and Navier-Stokes equations demonstrate significant improvements in accuracy and efficiency.
arXiv Detail & Related papers (2023-02-10T08:05:19Z)
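A toy instance of the probabilistic representation, for the heat equation (illustrative; not the paper's neural solver):

    import numpy as np

    # Feynman-Kac representation of the heat equation u_t = nu * u_xx:
    #   u(x, t) = E[ g(x + sqrt(2 * nu * t) * Z) ],  Z ~ N(0, 1),
    # i.e. the solution at a point is an average over random particles.
    def heat_mc(g, x, t, nu, n_particles, rng):
        z = rng.standard_normal(n_particles)
        return g(x + np.sqrt(2.0 * nu * t) * z).mean()

    rng = np.random.default_rng(0)
    g = np.sin                                   # initial condition u(x, 0)
    x, t, nu = 1.0, 0.5, 0.1
    print(heat_mc(g, x, t, nu, 100_000, rng))    # Monte Carlo value
    print(np.exp(-nu * t) * np.sin(x))           # exact solution for comparison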
- Compound Batch Normalization for Long-tailed Image Classification [77.42829178064807]
We propose a compound batch normalization method based on a Gaussian mixture.
It can model the feature space more comprehensively and reduce the dominance of head classes.
The proposed method outperforms existing methods on long-tailed image classification.
arXiv Detail & Related papers (2022-12-02T07:31:39Z)
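A crude sketch of normalizing with per-component rather than global batch statistics; hard assignments stand in for the paper's learned Gaussian mixture, and all names are illustrative:

    import numpy as np

    def mixture_normalize(x, k, rng, eps=1e-5):
        # Hard-assign each sample to one of k centers drawn from the batch,
        # then normalize with that component's own mean/variance instead of
        # one global batch statistic, so head classes dominate less.
        centers = x[rng.choice(len(x), size=k, replace=False)]
        assign = ((x[:, None, :] - centers[None]) ** 2).sum(-1).argmin(1)
        out = np.empty_like(x)
        for j in range(k):
            xj = x[assign == j]
            out[assign == j] = (xj - xj.mean(0)) / np.sqrt(xj.var(0) + eps)
        return out

    # usage: imbalanced batch, 64 "head" samples and 8 "tail" samples
    rng = np.random.default_rng(0)
    batch = np.vstack([rng.normal(0, 1, (64, 8)), rng.normal(5, 2, (8, 8))])
    print(mixture_normalize(batch, k=2, rng=rng).mean(0).round(2))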
- Scalable Variational Gaussian Processes via Harmonic Kernel Decomposition [54.07797071198249]
We introduce a new scalable variational Gaussian process approximation which provides a high-fidelity approximation while retaining general applicability.
We demonstrate that, on a range of regression and classification problems, our approach can exploit input space symmetries such as translations and reflections.
Notably, our approach achieves state-of-the-art results on CIFAR-10 among pure GP models.
arXiv Detail & Related papers (2021-06-10T18:17:57Z)
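A toy illustration of kernel symmetrization, a simpler relative of the paper's harmonic decomposition (illustrative only):

    import numpy as np

    # Averaging a kernel over a symmetry group of the input space (here the
    # reflection x -> -x) yields a GP prior invariant under that symmetry;
    # the harmonic decomposition in the paper goes further and splits the
    # kernel into orthogonal per-symmetry components.
    def rbf(x, y):
        return np.exp(-0.5 * np.sum((x - y) ** 2))

    def sym_rbf(x, y):
        # double average over {identity, reflection}; since
        # rbf(-x, -y) == rbf(x, y) this reduces to two terms
        return 0.5 * (rbf(x, y) + rbf(-x, y))

    x = np.array([0.7, -0.2])
    print(rbf(x, -x), sym_rbf(x, -x))   # sym_rbf treats x and -x alike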
- Parallel Stochastic Mirror Descent for MDPs [72.75921150912556]
We consider the problem of learning the optimal policy for infinite-horizon Markov decision processes (MDPs).
A variant of mirror descent is proposed for convex programming problems with Lipschitz-continuous functionals.
We analyze this algorithm in a general case and obtain an estimate of the convergence rate that does not accumulate errors during the operation of the method.
arXiv Detail & Related papers (2021-02-27T19:28:39Z)
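A minimal sketch of mirror descent in the simplex geometry used for MDP policies (illustrative; not the paper's parallel stochastic scheme):

    import numpy as np

    # Mirror descent with the entropy mirror map on the probability simplex
    # (exponentiated gradient): multiplicative update, then renormalization.
    def mirror_descent_simplex(grad, x0, lr, n_iters):
        x = np.asarray(x0, dtype=float)
        for _ in range(n_iters):
            x = x * np.exp(-lr * grad(x))   # mirror (multiplicative) step
            x /= x.sum()                    # Bregman projection onto simplex
        return x

    # usage: minimize <c, x> over the simplex; mass concentrates on argmin c
    c = np.array([0.3, 0.1, 0.7])
    print(mirror_descent_simplex(lambda x: c, np.ones(3) / 3, 0.5, 200).round(3))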
- Accelerated Message Passing for Entropy-Regularized MAP Inference [89.15658822319928]
Maximum a posteriori (MAP) inference in discrete-valued random fields is a fundamental problem in machine learning.
Due to the difficulty of this problem, linear programming (LP) relaxations are commonly used to derive specialized message passing algorithms.
We present randomized methods for accelerating these algorithms by leveraging techniques that underlie classical accelerated gradient methods.
arXiv Detail & Related papers (2020-07-01T18:43:32Z)
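A minimal sketch of the acceleration ingredient (Nesterov extrapolation) on a smooth toy objective; illustrative only:

    import numpy as np

    # Nesterov/FISTA-style momentum: a gradient step at an extrapolated
    # point. This lookahead is the ingredient the paper carries over to
    # message passing on the entropy-regularized LP relaxation.
    def nesterov(grad, x0, lr, n_iters):
        x = y = np.asarray(x0, dtype=float)
        t = 1.0
        for _ in range(n_iters):
            x_new = y - lr * grad(y)                      # gradient step
            t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
            y = x_new + (t - 1.0) / t_new * (x_new - x)   # extrapolation
            x, t = x_new, t_new
        return x

    # usage: ill-conditioned quadratic f(x) = 0.5 * x'Ax, smoothness L = 100
    A = np.diag([1.0, 100.0])
    x = nesterov(lambda v: A @ v, np.ones(2), lr=0.01, n_iters=300)
    print(np.linalg.norm(x))          # near 0, much faster than plain GD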
- Efficiently Sampling Functions from Gaussian Process Posteriors [76.94808614373609]
We propose an easy-to-use and general-purpose approach for fast posterior sampling.
We demonstrate how decoupled sample paths accurately represent Gaussian process posteriors at a fraction of the usual cost.
arXiv Detail & Related papers (2020-02-21T14:03:16Z)
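A minimal sketch of the pathwise (Matheron's rule) update with a random-feature prior, along the lines the abstract describes; all parameter choices are illustrative:

    import numpy as np

    # Pathwise/decoupled posterior sampling via Matheron's rule:
    #   f_post(.) = f_prior(.) + K(., X)(K(X, X) + s2 I)^{-1}(y - f_prior(X) - eps)
    # with the prior path drawn cheaply from random Fourier features, so a
    # whole function sample costs one small solve.
    rng = np.random.default_rng(0)

    def rbf(a, b):
        return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2)

    m = 500                                        # number of random features
    omega = rng.standard_normal(m)                 # RBF spectral frequencies
    phase = rng.uniform(0.0, 2.0 * np.pi, m)
    w = rng.standard_normal(m)
    prior = lambda x: np.sqrt(2.0 / m) * np.cos(np.outer(x, omega) + phase) @ w

    X = np.linspace(-2, 2, 10)                     # training inputs
    y = np.sin(2 * X) + 0.1 * rng.standard_normal(10)
    Xs = np.linspace(-3, 3, 100)                   # test inputs
    s2 = 0.01                                      # noise variance

    eps = np.sqrt(s2) * rng.standard_normal(10)
    update = rbf(Xs, X) @ np.linalg.solve(rbf(X, X) + s2 * np.eye(10),
                                          y - prior(X) - eps)
    sample = prior(Xs) + update                    # one posterior function draw
    print(sample[::25].round(3))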
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.