Self-Tuning Hamiltonian Monte Carlo for Accelerated Sampling
- URL: http://arxiv.org/abs/2309.13593v2
- Date: Sun, 26 Nov 2023 13:36:57 GMT
- Title: Self-Tuning Hamiltonian Monte Carlo for Accelerated Sampling
- Authors: Henrik Christiansen and Federico Errica and Francesco Alesiani
- Abstract summary: Hamiltonian Monte Carlo simulations crucially depend on the integration timestep and the number of integration steps.
We present an adaptive general-purpose framework to automatically tune such parameters.
We show that a good correspondence between loss and autocorrelation time can be established.
- Score: 12.163119957680802
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The performance of Hamiltonian Monte Carlo simulations crucially depends on
both the integration timestep and the number of integration steps. We present
an adaptive general-purpose framework to automatically tune such parameters,
based on a local loss function which promotes the fast exploration of
phase-space. We show that a good correspondence between loss and
autocorrelation time can be established, allowing for gradient-based
optimization using a fully-differentiable set-up. The loss is constructed in
such a way that it also allows for gradient-driven learning of a distribution
over the number of integration steps. Our approach is demonstrated for the
one-dimensional harmonic oscillator and alanine dipeptide, a small protein
common as a test case for simulation methods. Through the application to the
harmonic oscillator, we highlight the importance of not using a fixed timestep
to avoid a rugged loss surface with many local minima, which would otherwise trap the
optimization. In the case of alanine dipeptide, by tuning the only free
parameter of our loss definition, we find a good correspondence between it and
the autocorrelation times, resulting in a $>100$-fold speed-up in optimization
of simulation parameters compared to a grid-search. For this system, we also
extend the integrator to allow for atom-dependent timesteps, providing a
further reduction of $25\%$ in autocorrelation times.
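As a concrete illustration of the tuning problem, below is a minimal Python sketch of HMC on the one-dimensional harmonic oscillator. It is not the authors' implementation: the expected squared jump distance (ESJD) stands in for the paper's local exploration loss, and a grid scan over the timestep stands in for the gradient-based tuning through the fully-differentiable set-up; all names and constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_potential(x):
    # 1D harmonic oscillator: U(x) = 0.5 * x**2, so dU/dx = x.
    return x

def leapfrog(x, p, dt, n_steps):
    # Standard leapfrog integration of Hamiltonian dynamics.
    p = p - 0.5 * dt * grad_potential(x)
    for _ in range(n_steps - 1):
        x = x + dt * p
        p = p - dt * grad_potential(x)
    x = x + dt * p
    p = p - 0.5 * dt * grad_potential(x)
    return x, p

def hmc_step(x, dt, n_steps):
    # One HMC proposal with a Metropolis accept/reject correction.
    p = rng.normal()
    x_new, p_new = leapfrog(x, p, dt, n_steps)
    dh = (0.5 * x_new**2 + 0.5 * p_new**2) - (0.5 * x**2 + 0.5 * p**2)
    if rng.random() < np.exp(min(0.0, -dh)):
        return x_new, (x_new - x) ** 2   # accepted: squared jump distance
    return x, 0.0                        # rejected: no movement

def negative_esjd(dt, n_steps, n_samples=2000):
    # Surrogate loss promoting fast phase-space exploration: minus the ESJD.
    x, total = 0.0, 0.0
    for _ in range(n_samples):
        x, sjd = hmc_step(x, dt, n_steps)
        total += sjd
    return -total / n_samples

# Grid scan over the timestep (the paper instead tunes dt, and a
# distribution over n_steps, by gradient descent).
for dt in (0.1, 0.5, 1.0, 1.5):
    print(f"dt={dt}: loss={negative_esjd(dt, n_steps=10):.3f}")
```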
Related papers
- Accelerating Real-Time Coupled Cluster Methods with Single-Precision
Arithmetic and Adaptive Numerical Integration [3.469636229370366]
We show that single-precision arithmetic reduces both the storage and multiplicative costs of the real-time simulation by approximately a factor of two.
Additional speedups of up to a factor of 14 in test simulations of water clusters are obtained via a straightforward GPU-based implementation.
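The factor-of-two storage saving is simple to verify directly; the tensor shape below is a made-up stand-in for the stored quantities.

```python
import numpy as np

# Hypothetical tensor held in double vs. single precision.
amps64 = np.zeros((64, 64, 64), dtype=np.float64)
amps32 = amps64.astype(np.float32)
print(amps64.nbytes / amps32.nbytes)  # 2.0: single precision halves storage
```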
arXiv Detail & Related papers (2022-05-10T21:21:49Z) - Spatio-Temporal Variational Gaussian Processes [26.60276485130467]
We introduce a scalable approach to Gaussian process inference that combines spatio-temporal filtering with natural variational inference.
We derive a sparse approximation that constructs a state-space model over a reduced set of inducing points.
We show that for separable Markov kernels the full and sparse cases exactly recover the standard variational GP.
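The sketch below is not the paper's sparse spatio-temporal construction, but the building block it rests on: a GP with a Matern-1/2 (Ornstein-Uhlenbeck) kernel is exactly a linear state-space model, so its posterior can be obtained by Kalman filtering in linear time. All values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# A GP with kernel k(t, t') = s2 * exp(-|t - t'| / ell) is equivalent to a
# one-dimensional Ornstein-Uhlenbeck state-space model, so its posterior
# mean can be computed by Kalman filtering in O(n).
s2, ell, noise = 1.0, 0.5, 0.1**2
t = np.linspace(0.0, 5.0, 200)
y = np.sin(2 * t) + 0.1 * rng.normal(size=t.size)   # noisy observations

m, P, means = 0.0, s2, []                           # stationary prior
for k in range(t.size):
    if k > 0:                                       # predict step
        A = np.exp(-(t[k] - t[k - 1]) / ell)        # OU transition
        m, P = A * m, A * A * P + s2 * (1 - A * A)  # noise keeps Var = s2
    K = P / (P + noise)                             # Kalman gain
    m, P = m + K * (y[k] - m), (1 - K) * P          # update step
    means.append(m)

print(np.round(means[-5:], 3))                      # filtered posterior means
```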
arXiv Detail & Related papers (2021-11-02T16:53:31Z) - An automatic differentiation system for the age of differential privacy [65.35244647521989]
We introduce Tritium, an automatic differentiation-based sensitivity analysis framework for differentially private (DP) machine learning (ML).
arXiv Detail & Related papers (2021-09-22T08:07:42Z) - Differentiable Annealed Importance Sampling and the Perils of Gradient
Noise [68.44523807580438]
Annealed importance sampling (AIS) and related algorithms are highly effective tools for marginal likelihood estimation.
Differentiability is a desirable property as it would admit the possibility of optimizing marginal likelihood as an objective.
We propose a differentiable algorithm by abandoning Metropolis-Hastings steps, which further unlocks mini-batch computation.
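A minimal sketch of the AIS weight recursion with the Metropolis-Hastings correction dropped, as the summary describes; the prior, target, unadjusted Langevin kernel, and all settings are illustrative assumptions rather than the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

# Anneal from a standard normal prior to an unnormalized Gaussian target.
def log_gamma(x, beta):
    # (1 - beta) * log-prior + beta * log-target, up to constants
    return (1 - beta) * (-0.5 * x**2) + beta * (-0.5 * ((x - 2.0) / 0.5) ** 2)

def grad_log_gamma(x, beta):
    return (1 - beta) * (-x) + beta * (-(x - 2.0) / 0.25)

def ais(n_particles=5000, n_anneal=200, eps=0.02):
    betas = np.linspace(0.0, 1.0, n_anneal + 1)
    x = rng.normal(size=n_particles)        # exact samples from the prior
    log_w = np.zeros(n_particles)
    for b_prev, b in zip(betas[:-1], betas[1:]):
        # Importance-weight update: ratio of successive annealed densities.
        log_w += log_gamma(x, b) - log_gamma(x, b_prev)
        # Unadjusted Langevin transition: dropping the Metropolis-Hastings
        # step keeps every operation differentiable in x and eps.
        x = x + eps * grad_log_gamma(x, b) + np.sqrt(2 * eps) * rng.normal(size=x.size)
    # Log of the estimated ratio of normalizing constants (roughly log 0.5 here).
    return log_w.max() + np.log(np.mean(np.exp(log_w - log_w.max())))

print(ais())
```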
arXiv Detail & Related papers (2021-07-21T17:10:14Z) - Accurate methods for the analysis of strong-drive effects in parametric
gates [94.70553167084388]
We show how to efficiently extract gate parameters using exact numerics and a perturbative analytical approach.
We identify optimal regimes of operation for different types of gates including $i$SWAP, controlled-Z, and CNOT.
arXiv Detail & Related papers (2021-07-06T02:02:54Z) - Improving the Transient Times for Distributed Stochastic Gradient
Methods [5.215491794707911]
We study a distributed stochastic gradient algorithm, called exact diffusion with adaptive stepsizes (EDAS).
We show EDAS achieves the same network-independent convergence rate as centralized stochastic gradient descent (SGD).
To the best of our knowledge, EDAS achieves the shortest transient time when the average of the $n$ cost functions is strongly convex.
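A toy sketch of the exact-diffusion recursion (adapt, correct, combine) underlying EDAS, using a constant rather than adaptive stepsize and deterministic quadratic costs; the ring topology and constants are illustrative.

```python
import numpy as np

# n agents on a ring minimize f_i(w) = 0.5 * (w - c_i)^2; the global
# minimizer of the average cost is mean(c) = 2.0.
n, alpha, iters = 5, 0.2, 200
c = np.arange(n, dtype=float)

# Doubly stochastic combination matrix: self plus two ring neighbors.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.25

w = np.zeros(n)                    # per-agent iterates
psi_prev = w.copy()
for _ in range(iters):
    psi = w - alpha * (w - c)      # adapt: local gradient step
    phi = psi + w - psi_prev       # correct: removes steady-state bias
    w = W @ phi                    # combine: average with neighbors
    psi_prev = psi

print(np.round(w, 4))              # every agent converges to 2.0
```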
arXiv Detail & Related papers (2021-05-11T08:09:31Z) - DiffPD: Differentiable Projective Dynamics with Contact [65.88720481593118]
We present DiffPD, an efficient differentiable soft-body simulator with implicit time integration.
We evaluate the performance of DiffPD and observe a speedup of 4-19 times compared to the standard Newton's method in various applications.
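DiffPD's projective-dynamics solver is beyond a short sketch, but the implicit time integration it accelerates is easy to illustrate: on a stiff 1D spring, backward Euler stays stable at a timestep where explicit Euler diverges. All constants are illustrative.

```python
# Implicit (backward) Euler for a stiff damped spring: x'' = -k x - c x'.
k_spring, c_damp, dt = 1000.0, 1.0, 0.01   # explicit Euler diverges at this dt
x, v = 1.0, 0.0
for _ in range(1000):
    # Solve the implicit update  v' = v + dt*(-k x' - c v'),  x' = x + dt*v'
    # for (x', v'); for this linear system the solution is closed-form.
    v = (v - dt * k_spring * x) / (1 + dt * c_damp + dt * dt * k_spring)
    x = x + dt * v
print(x, v)  # decays toward rest instead of blowing up
```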
arXiv Detail & Related papers (2021-01-15T00:13:33Z) - Accelerated, Optimal, and Parallel: Some Results on Model-Based
Stochastic Optimization [33.71051480619541]
We extend the Approximate-Proximal Point (aProx) family of model-based methods for solving convex optimization problems.
We provide non-asymptotic convergence guarantees and an acceleration scheme for which we provide linear speedup in minibatch size.
We show improved convergence rates and matching lower bounds identifying new fundamental constants for "interpolation" problems.
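A sketch of one member of the aProx family, the truncated-model update, on a noiseless "interpolation" problem of the kind the summary mentions: for a nonnegative loss it reduces to a Polyak-like step clipped at the stepsize. Data and constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# Robust linear regression with per-sample loss f_i(x) = |a_i . x - b_i|;
# the data are consistent, so every loss can be driven to zero.
a = rng.normal(size=(500, 5))
x_true = np.arange(5.0)
b = a @ x_true

x, alpha = np.zeros(5), 1.0
for _ in range(3000):
    i = rng.integers(500)                # sample one data point
    r = a[i] @ x - b[i]
    f, g = abs(r), np.sign(r) * a[i]     # loss value and subgradient
    if f > 0:
        # Truncated-model step: minimize max(f + g.(y - x), 0) + |y - x|^2 / (2 * alpha).
        x = x - min(alpha, f / (g @ g)) * g
print(np.round(x, 3))                    # recovers x_true = [0, 1, 2, 3, 4]
```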
arXiv Detail & Related papers (2021-01-07T18:58:39Z) - Fast and differentiable simulation of driven quantum systems [58.720142291102135]
We introduce a semi-analytic method based on the Dyson expansion that allows us to time-evolve driven quantum systems much faster than standard numerical methods.
We show results of the optimization of a two-qubit gate using transmon qubits in the circuit QED architecture.
arXiv Detail & Related papers (2020-12-16T21:43:38Z) - Self-Tuning Stochastic Optimization with Curvature-Aware Gradient
Filtering [53.523517926927894]
We explore the use of exact per-sample Hessian-vector products and gradients to construct self-tuning quadratics.
We prove that our model-based procedure converges in the noisy gradient setting.
This is an interesting step toward constructing self-tuning quadratics.
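Not the paper's gradient-filtering procedure, but a minimal sketch of the curvature-aware ingredient: a Hessian-vector product defines a local quadratic model whose minimizer along the gradient direction yields a self-tuned step, with no hand-chosen learning rate. The quadratic objective is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(4)

# Illustrative objective f(x) = 0.5 x'Ax - b'x with well-conditioned SPD A.
M = rng.normal(size=(10, 10))
A = M @ M.T + 10 * np.eye(10)
b = rng.normal(size=10)

def grad(x):
    return A @ x - b

def hvp(v):
    # Exact Hessian-vector product (in practice obtained via autodiff).
    return A @ v

x = np.zeros(10)
for _ in range(50):
    g = grad(x)
    # Quadratic model along -g; its exact minimizer gives the step size.
    x = x - (g @ g) / (g @ hvp(g)) * g
print(np.linalg.norm(grad(x)))  # ~0: converged with self-tuned steps
```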
arXiv Detail & Related papers (2020-11-09T22:07:30Z) - Convergence and sample complexity of gradient methods for the model-free
linear quadratic regulator problem [27.09339991866556]
We show how gradient methods search for an optimal controller for an unknown dynamical system by directly searching over the corresponding space of controllers.
We take a step towards demystifying the performance and efficiency of such methods by focusing on the gradient-flow dynamics over the set of stabilizing feedback gains; a similar result holds for the forward discretization of the ODE.
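A toy scalar illustration of gradient descent over the set of stabilizing feedback gains; a finite-difference gradient stands in for the model-free estimate, and all constants are illustrative.

```python
import numpy as np

# Scalar discrete-time LQR: x_{t+1} = a x_t + b u_t with u_t = -k x_t and
# cost J(k) = sum_t q x_t^2 + r u_t^2, starting from x_0 = 1.
a, b, q, r = 1.2, 1.0, 1.0, 0.1

def cost(k):
    rho = a - b * k                        # closed-loop dynamics
    if abs(rho) >= 1:                      # non-stabilizing gain: infinite cost
        return np.inf
    return (q + r * k**2) / (1 - rho**2)   # geometric series in rho^2

# Gradient descent restricted (by the cost barrier) to stabilizing gains.
k, lr, h = 0.5, 0.05, 1e-5                 # k = 0.5 is stabilizing: |a - b*k| < 1
for _ in range(200):
    g = (cost(k + h) - cost(k - h)) / (2 * h)   # finite-difference gradient
    k -= lr * g

print(k, cost(k))                          # near the optimal gain
```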
arXiv Detail & Related papers (2019-12-26T16:56:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.