Connecting the Dots: Numerical Randomized Hamiltonian Monte Carlo with
State-Dependent Event Rates
- URL: http://arxiv.org/abs/2005.01285v3
- Date: Mon, 31 Jan 2022 17:25:29 GMT
- Title: Connecting the Dots: Numerical Randomized Hamiltonian Monte Carlo with
State-Dependent Event Rates
- Authors: Tore Selland Kleppe
- Abstract summary: We introduce a robust, easy to use and computationally fast alternative to conventional Markov chain Monte Carlo methods for continuous target distributions.
The proposed algorithm may yield large speedups and improvements in stability relative to relevant benchmarks.
Granted access to a high-quality ODE code, the proposed methodology is both easy to implement and use, even for highly challenging and high-dimensional target distributions.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Numerical Generalized Randomized Hamiltonian Monte Carlo is introduced
as a robust, easy-to-use and computationally fast alternative to conventional Markov
chain Monte Carlo methods for continuous target distributions. A wide class of
piecewise deterministic Markov processes generalizing Randomized HMC (Bou-Rabee
and Sanz-Serna, 2017) by allowing for state-dependent event rates is defined.
Under very mild restrictions, such processes will have the desired target
distribution as an invariant distribution. Secondly, the numerical
implementation of such processes, based on adaptive numerical integration of
second order ordinary differential equations (ODEs) is considered. The
numerical implementation yields an approximate, yet highly robust algorithm
that, unlike conventional Hamiltonian Monte Carlo, enables the exploitation of
the complete Hamiltonian trajectories (hence the title). The proposed algorithm
may yield large speedups and improvements in stability relative to relevant
benchmarks, while incurring numerical biases that are negligible relative to
the overall Monte Carlo errors. Granted access to a high-quality ODE code, the
proposed methodology is both easy to implement and use, even for highly
challenging and high-dimensional target distributions.
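The core construction (momentum refreshed at random event times, with the full numerically integrated Hamiltonian trajectory kept between events) can be illustrated with a minimal sketch. This is not the paper's algorithm: it uses a constant event rate and a fixed-step leapfrog integrator in place of the state-dependent rates and adaptive ODE integration described above, and it targets a standard Gaussian for simplicity; all function names are illustrative.

```python
import numpy as np

def grad_U(q):
    # Gradient of the potential U(q) = 0.5 * ||q||^2 (standard Gaussian target).
    return q

def leapfrog_step(q, p, dt):
    # One fixed-step leapfrog step; the paper instead uses adaptive numerical
    # integration of the second-order ODE with error control.
    p = p - 0.5 * dt * grad_U(q)
    q = q + dt * p
    p = p - 0.5 * dt * grad_U(q)
    return q, p

def randomized_hmc(n_events, dim=2, rate=1.0, dt=0.1, rng=None):
    # Randomized HMC with a *constant* event rate: momentum is fully refreshed
    # at the events of a Poisson clock, and every point along the numerically
    # integrated trajectory between events is kept as a sample ("connecting
    # the dots" rather than using only trajectory endpoints).
    rng = np.random.default_rng(rng)
    q = np.zeros(dim)
    samples = []
    for _ in range(n_events):
        p = rng.standard_normal(dim)             # momentum refresh at event
        duration = rng.exponential(1.0 / rate)   # exponential inter-event time
        for _ in range(max(1, int(duration / dt))):
            q, p = leapfrog_step(q, p, dt)
            samples.append(q.copy())
    return np.asarray(samples)
```

For the standard Gaussian target the samples should have mean near zero and roughly unit variance; note there is no Metropolis correction, consistent with the approximate-but-robust character of the method described above.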
Related papers
- Multi-fidelity Hamiltonian Monte Carlo [1.86413150130483]
We propose a novel two-stage Hamiltonian Monte Carlo algorithm with a surrogate model.
The acceptance probability is computed in the first stage via a standard HMC proposal.
If the proposal is accepted, the posterior is evaluated in the second stage using the high-fidelity numerical solver.
arXiv Detail & Related papers (2024-05-08T13:03:55Z)
- Gaussian Process Regression with Soft Inequality and Monotonicity Constraints [0.0]
We introduce a new GP method that enforces the physical constraints in a probabilistic manner.
This GP model is trained by quantum-inspired Hamiltonian Monte Carlo (QHMC).
arXiv Detail & Related papers (2024-04-03T17:09:25Z)
- Combining Normalizing Flows and Quasi-Monte Carlo [0.0]
Recent advances in machine learning have led to the development of new methods for enhancing Monte Carlo methods.
We demonstrate through numerical experiments that this combination can lead to an estimator with significantly lower variance than if the flow were sampled with classic Monte Carlo.
arXiv Detail & Related papers (2024-01-11T14:17:06Z)
- Automatic Rao-Blackwellization for Sequential Monte Carlo with Belief Propagation [4.956977275061968]
Exact Bayesian inference on state-space models (SSMs) is in general intractable.
We propose a mixed inference algorithm that computes closed-form solutions using belief propagation as much as possible.
arXiv Detail & Related papers (2023-12-15T15:05:25Z)
- Monte Carlo Neural PDE Solver for Learning PDEs via Probabilistic Representation [59.45669299295436]
We propose a Monte Carlo PDE solver for training unsupervised neural solvers.
We use the PDEs' probabilistic representation, which regards macroscopic phenomena as ensembles of random particles.
Our experiments on convection-diffusion, Allen-Cahn, and Navier-Stokes equations demonstrate significant improvements in accuracy and efficiency.
arXiv Detail & Related papers (2023-02-10T08:05:19Z)
- Compound Batch Normalization for Long-tailed Image Classification [77.42829178064807]
We propose a compound batch normalization method based on a Gaussian mixture.
It can model the feature space more comprehensively and reduce the dominance of head classes.
The proposed method outperforms existing methods on long-tailed image classification.
arXiv Detail & Related papers (2022-12-02T07:31:39Z)
- Scalable Variational Gaussian Processes via Harmonic Kernel Decomposition [54.07797071198249]
We introduce a new scalable variational Gaussian process approximation which provides a high fidelity approximation while retaining general applicability.
We demonstrate that, on a range of regression and classification problems, our approach can exploit input space symmetries such as translations and reflections.
Notably, our approach achieves state-of-the-art results on CIFAR-10 among pure GP models.
arXiv Detail & Related papers (2021-06-10T18:17:57Z)
- Parallel Stochastic Mirror Descent for MDPs [72.75921150912556]
We consider the problem of learning the optimal policy for infinite-horizon Markov decision processes (MDPs)
A variant of Mirror Descent is proposed for convex programming problems with Lipschitz-continuous functionals.
We analyze this algorithm in a general case and obtain an estimate of the convergence rate that does not accumulate errors during the operation of the method.
arXiv Detail & Related papers (2021-02-27T19:28:39Z)
- Community Detection in the Stochastic Block Model by Mixed Integer Programming [3.8073142980733]
The Degree-Corrected Stochastic Block Model (DCSBM) is a popular model for generating random graphs with community structure given an expected degree sequence.
The standard approach to community detection based on the DCSBM is to search for the model parameters most likely to have produced the observed network data through maximum likelihood estimation (MLE).
We present mathematical programming formulations and exact solution methods that can provably find the model parameters and community assignments of maximum likelihood given an observed graph.
arXiv Detail & Related papers (2021-01-26T22:04:40Z)
- Accelerated Message Passing for Entropy-Regularized MAP Inference [89.15658822319928]
Maximum a posteriori (MAP) inference in discrete-valued random fields is a fundamental problem in machine learning.
Due to the difficulty of this problem, linear programming (LP) relaxations are commonly used to derive specialized message passing algorithms.
We present randomized methods for accelerating these algorithms by leveraging techniques that underlie classical accelerated gradient.
arXiv Detail & Related papers (2020-07-01T18:43:32Z)
- Efficiently Sampling Functions from Gaussian Process Posteriors [76.94808614373609]
We propose an easy-to-use and general-purpose approach for fast posterior sampling.
We demonstrate how decoupled sample paths accurately represent Gaussian process posteriors at a fraction of the usual cost.
arXiv Detail & Related papers (2020-02-21T14:03:16Z)
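The decoupled sampling idea in the last entry above can be sketched via Matheron's rule: a posterior sample is a prior sample plus a data-driven update. The dense Cholesky prior draw below is a deliberate simplification; the cited paper's contribution is to replace exactly this expensive prior term with a cheap approximation (e.g. random Fourier features), which this sketch does not do. The function names, kernel choice, and lengthscale are illustrative assumptions.

```python
import numpy as np

def rbf(a, b, ls=0.5):
    # Squared-exponential kernel; the lengthscale is an illustrative choice.
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls**2)

def pathwise_posterior_sample(X, y, Xs, noise=1e-4, rng=None):
    # Matheron's rule: (posterior sample) = (prior sample) + (data-driven update).
    rng = np.random.default_rng(rng)
    n, m = len(X), len(Xs)
    Z = np.concatenate([X, Xs])
    K = rbf(Z, Z) + 1e-8 * np.eye(n + m)                    # jitter for stability
    f = np.linalg.cholesky(K) @ rng.standard_normal(n + m)  # joint prior draw
    f_X, f_Xs = f[:n], f[n:]
    eps = np.sqrt(noise) * rng.standard_normal(n)           # simulated obs. noise
    Kxx = rbf(X, X) + noise * np.eye(n)
    update = rbf(Xs, X) @ np.linalg.solve(Kxx, y - f_X - eps)
    return f_Xs + update
```

With small observation noise, a sample evaluated at the training inputs should nearly interpolate the training targets, which makes the construction easy to sanity-check.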
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.