Boltzmann sampling with quantum annealers via fast Stein correction
- URL: http://arxiv.org/abs/2309.04120v1
- Date: Fri, 8 Sep 2023 04:47:10 GMT
- Title: Boltzmann sampling with quantum annealers via fast Stein correction
- Authors: Ryosuke Shibukawa and Ryo Tamura and Koji Tsuda
- Abstract summary: A fast and approximate method is developed to compute the sample weights, and used to correct the samples generated by D-Wave quantum annealers.
In benchmarking problems, it is observed that the residual error of thermal average calculations is reduced significantly.
- Score: 1.37736442859694
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Despite the attempts to apply a quantum annealer to Boltzmann sampling, it is
still impossible to perform accurate sampling at arbitrary temperatures.
Conventional distribution correction methods such as importance sampling and
resampling cannot be applied, because the analytical expression of sampling
distribution is unknown for a quantum annealer. Stein correction (Liu and Lee,
2017) can correct the samples by weighting without the knowledge of the
sampling distribution, but the naive implementation requires the solution of a
large-scale quadratic program, hampering usage in practical problems. In this
letter, a fast and approximate method based on random feature map and
exponentiated gradient updates is developed to compute the sample weights, and
used to correct the samples generated by D-Wave quantum annealers. In
benchmarking problems, it is observed that the residual error of thermal
average calculations is reduced significantly. If combined with our method,
quantum annealers may emerge as a viable alternative to long-established Markov
chain Monte Carlo methods.
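The core idea of Stein correction can be illustrated with a minimal 1-D sketch. The toy below assumes a standard-normal target whose score function is known, builds the exact Stein kernel from an RBF base kernel, and minimizes the quadratic form $w^\top K_p w$ over the probability simplex with exponentiated-gradient updates; the paper's random-feature approximation of the kernel is replaced here by the exact kernel for clarity, and all names and parameters are illustrative, not the authors' implementation.

```python
import numpy as np

def stein_kernel_gaussian(x, bandwidth=1.0):
    """Stein (Langevin) kernel matrix for a standard-normal target,
    built from an RBF base kernel. For p = N(0, 1) the score is s(x) = -x."""
    d = x[:, None] - x[None, :]                  # pairwise differences x_i - x_j
    k = np.exp(-d**2 / (2.0 * bandwidth))        # RBF base kernel
    s = -x                                       # score of the target
    # k_p(x, y) = s(x)s(y)k + s(x) d_y k + s(y) d_x k + d_x d_y k  (1-D)
    return (s[:, None] * s[None, :]
            + s[:, None] * (d / bandwidth)
            - s[None, :] * (d / bandwidth)
            + (1.0 / bandwidth - d**2 / bandwidth**2)) * k

def stein_weights(x, n_iter=500, eta=0.05):
    """Exponentiated-gradient minimisation of w^T K_p w over the simplex."""
    K = stein_kernel_gaussian(x)
    w = np.full(len(x), 1.0 / len(x))
    for _ in range(n_iter):
        w = w * np.exp(-eta * 2.0 * K @ w)       # mirror-descent step
        w /= w.sum()                             # renormalise onto the simplex
    return w

rng = np.random.default_rng(0)
x = rng.normal(loc=1.0, scale=1.0, size=200)     # biased sampler N(1,1), target N(0,1)
w = stein_weights(x)
print(abs(np.average(x, weights=w)), abs(x.mean()))
```

The weighted mean moves toward the target mean of zero, mimicking how the weights correct thermal averages computed from annealer samples.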
Related papers
- NETS: A Non-Equilibrium Transport Sampler [15.58993313831079]
We propose an algorithm, termed the Non-Equilibrium Transport Sampler (NETS).
NETS can be viewed as a variant of annealed importance sampling (AIS) based on Jarzynski's equality.
We show that this drift is the minimizer of a variety of objective functions, which can all be estimated in an unbiased fashion.
arXiv Detail & Related papers (2024-10-03T17:35:38Z)
- Calibrating Bayesian Generative Machine Learning for Bayesiamplification [0.0]
We show a clear scheme for quantifying the calibration of Bayesian generative machine learning models.
Well calibrated uncertainties can then be used to roughly estimate the number of uncorrelated truth samples.
arXiv Detail & Related papers (2024-08-01T18:00:05Z)
- Symmetric Q-learning: Reducing Skewness of Bellman Error in Online Reinforcement Learning [55.75959755058356]
In deep reinforcement learning, estimating the value function is essential to evaluate the quality of states and actions.
A recent study suggested that the error distribution for training the value function is often skewed because of the properties of the Bellman operator.
We propose a method called Symmetric Q-learning, in which synthetic noise generated from a zero-mean distribution is added to the target values to produce a Gaussian error distribution.
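The symmetrizing effect described above can be demonstrated numerically: adding independent zero-mean noise to a skewed error distribution leaves the third central moment unchanged while inflating the variance, which shrinks the standardized skewness. This toy uses centred exponential draws as stand-in "Bellman errors"; it illustrates the statistical mechanism only, not the RL algorithm itself.

```python
import numpy as np

def skewness(e):
    """Third standardized moment of a sample."""
    e = e - e.mean()
    return (e**3).mean() / (e**2).mean()**1.5

rng = np.random.default_rng(1)
# Skewed stand-in "Bellman errors": centred exponential draws (skewness = 2)
errors = rng.exponential(scale=1.0, size=100_000) - 1.0
# Zero-mean synthetic noise added to the targets
noisy = errors + rng.normal(scale=2.0, size=errors.size)

print(skewness(errors), skewness(noisy))
```

The noise-augmented errors are markedly less skewed, which is the property the method exploits when training the value function.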
arXiv Detail & Related papers (2024-03-12T14:49:19Z)
- Iterated Denoising Energy Matching for Sampling from Boltzmann Densities [109.23137009609519]
Iterated Denoising Energy Matching (iDEM)
iDEM alternates between (I) sampling regions of high model density from a diffusion-based sampler and (II) using these samples in our matching objective.
We show that the proposed approach achieves state-of-the-art performance on all metrics and trains $2$-$5\times$ faster.
arXiv Detail & Related papers (2024-02-09T01:11:23Z)
- Deep Evidential Learning for Bayesian Quantile Regression [3.6294895527930504]
It is desirable to have accurate uncertainty estimation from a single deterministic forward-pass model.
This paper proposes a deep Bayesian quantile regression model that can estimate the quantiles of a continuous target distribution without the Gaussian assumption.
arXiv Detail & Related papers (2023-08-21T11:42:16Z)
- Sharp Calibrated Gaussian Processes [58.94710279601622]
State-of-the-art approaches for designing calibrated models rely on inflating the Gaussian process posterior variance.
We present a calibration approach that generates predictive quantiles using a computation inspired by the vanilla Gaussian process posterior variance.
Our approach is shown to yield a calibrated model under reasonable assumptions.
arXiv Detail & Related papers (2023-02-23T12:17:36Z)
- Importance sampling for stochastic quantum simulations [68.8204255655161]
We introduce the qDrift protocol, which builds random product formulas by sampling from the Hamiltonian according to the coefficients.
We show that the simulation cost can be reduced while achieving the same accuracy, by considering the individual simulation cost during the sampling stage.
Results are confirmed by numerical simulations performed on a lattice nuclear effective field theory.
arXiv Detail & Related papers (2022-12-12T15:06:32Z)
- The Accuracy vs. Sampling Overhead Trade-off in Quantum Error Mitigation Using Monte Carlo-Based Channel Inversion [84.66087478797475]
Quantum error mitigation (QEM) is a class of promising techniques for reducing the computational error of variational quantum algorithms.
We consider a practical channel inversion strategy based on Monte Carlo sampling, which introduces additional computational error.
We show that when the computational error is small compared to the dynamic range of the error-free results, it scales with the square root of the number of gates.
arXiv Detail & Related papers (2022-01-20T00:05:01Z)
- Accuracy of the typicality approach using Chebyshev polynomials [0.0]
Trace estimators allow one to approximate thermodynamic equilibrium observables with astonishing accuracy.
Here we report an alternative approach which employs a Chebyshev polynomial expansion of the exponential Boltzmann weights.
This method also turns out to be very accurate in general, but shows systematic inaccuracies at low temperatures.
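The low-temperature difficulty noted above has a simple scalar analogue: for a fixed polynomial degree, a Chebyshev interpolant of the Boltzmann weight $e^{-\beta x}$ over a spectral window degrades rapidly as $\beta$ grows, because the function develops an enormous dynamic range. This is a hedged illustration of the approximation behaviour, not the paper's trace-estimation scheme.

```python
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

def cheb_error(beta, deg=10):
    """Max deviation of a degree-`deg` Chebyshev interpolant of the
    Boltzmann weight e^{-beta*x} on the spectral window [-1, 1]."""
    c = Chebyshev.interpolate(lambda x: np.exp(-beta * x), deg, domain=[-1, 1])
    xs = np.linspace(-1.0, 1.0, 2001)
    return np.max(np.abs(c(xs) - np.exp(-beta * xs)))

print(cheb_error(1.0), cheb_error(20.0))     # high vs low temperature
```

At high temperature (small $\beta$) ten terms already give near machine-precision accuracy, while at low temperature (large $\beta$) the same degree fails badly, so the required degree grows with inverse temperature.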
arXiv Detail & Related papers (2021-04-27T14:23:36Z)
- Efficiently Sampling Functions from Gaussian Process Posteriors [76.94808614373609]
We propose an easy-to-use and general-purpose approach for fast posterior sampling.
We demonstrate how decoupled sample paths accurately represent Gaussian process posteriors at a fraction of the usual cost.
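A decoupled sample path can be sketched via Matheron's rule: draw an approximate prior sample cheaply with random Fourier features, then apply an exact pathwise correction using the kernel and the observed data. The data, lengthscale, and noise level below are hypothetical, and this is a minimal sketch of the pathwise idea, not the authors' full method.

```python
import numpy as np

rng = np.random.default_rng(0)
LS = 0.5                                       # RBF lengthscale (assumed)

def rbf(a, b):
    """Squared-exponential kernel matrix."""
    return np.exp(-(a[:, None] - b[None, :])**2 / (2 * LS**2))

# Hypothetical training data
X = np.linspace(-2.0, 2.0, 15)
y = np.sin(2.0 * X)
sigma = 0.05                                   # observation noise std

# (I) Weight-space prior sample via random Fourier features of the RBF kernel
D = 2000
W = rng.normal(scale=1.0 / LS, size=D)         # spectral frequencies
b = rng.uniform(0.0, 2.0 * np.pi, size=D)
theta = rng.normal(size=D)

def prior(x):
    return np.sqrt(2.0 / D) * np.cos(np.outer(x, W) + b) @ theta

# (II) Function-space (Matheron) update using the exact kernel
eps = rng.normal(scale=sigma, size=len(X))
K = rbf(X, X) + sigma**2 * np.eye(len(X))
v = np.linalg.solve(K, y - prior(X) - eps)

def posterior(x):
    return prior(x) + rbf(x, X) @ v

resid = np.max(np.abs(posterior(X) - y))       # posterior sample roughly fits the data
```

Because the expensive solve is done once, the resulting `posterior` function can be evaluated at any number of test points at linear cost, which is the source of the claimed speedup.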
arXiv Detail & Related papers (2020-02-21T14:03:16Z)
- Stochastic Normalizing Flows [2.323220706791067]
We show that normalizing flows can be used to learn the transformation of a simple prior distribution into a target distribution.
We derive an efficient training procedure by which both the sampler's and the flow's parameters can be optimized end-to-end.
We illustrate the representational power, sampling efficiency and correctness of SNFs on several benchmarks including applications to molecular sampling systems in equilibrium.
arXiv Detail & Related papers (2020-02-16T23:29:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided (including all listed papers) and is not responsible for any consequences of its use.