Online calibration scheme for training restricted Boltzmann machines with quantum annealing
- URL: http://arxiv.org/abs/2307.09785v2
- Date: Mon, 17 Feb 2025 11:00:32 GMT
- Title: Online calibration scheme for training restricted Boltzmann machines with quantum annealing
- Authors: Takeru Goto, Masayuki Ohzeki
- Abstract summary: We propose a scheme to calibrate the internal parameters of a quantum annealer to obtain well-approximated samples for training a restricted Boltzmann machine (RBM).
Our results indicate that the scheme performs on par with Gibbs sampling.
- Abstract: We propose a scheme to calibrate the internal parameters of a quantum annealer to obtain well-approximated samples for training a restricted Boltzmann machine (RBM). Empirically, samples from quantum annealers obey the Boltzmann distribution, making them suitable for RBM training. Quantum annealers exploit physical phenomena to generate a large number of samples in a short time. However, hardware imperfections make it challenging to obtain accurate samples. Existing research often estimates the inverse temperature to compensate for these imperfections. Our scheme efficiently reuses the samples drawn for RBM training to estimate the internal parameters as well. Furthermore, we consider additional parameters beyond the inverse temperature and demonstrate that they improve sample quality. We evaluate our approach by comparing the obtained samples with those from classical Gibbs sampling, which theoretically generates accurate samples. Our results indicate that our scheme achieves performance on par with Gibbs sampling. In addition, training with our estimation scheme outperforms the contrastive divergence algorithm, a standard training algorithm for RBMs.
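To make the calibration idea concrete, here is a minimal sketch: a tiny RBM whose partition function can be enumerated exactly, an exact Boltzmann sampler standing in for the annealer (at an unknown effective inverse temperature), and a moment-matching estimate of that temperature. The RBM size, the parameter scales, and the bisection estimator are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
n_vis, n_hid = 4, 3                          # tiny, so Z is exactly enumerable

W = rng.normal(scale=0.5, size=(n_vis, n_hid))   # couplings (assumed scale)
b = rng.normal(scale=0.1, size=n_vis)            # visible biases
c = rng.normal(scale=0.1, size=n_hid)            # hidden biases

def all_states(n):
    # All binary vectors of length n, one per row.
    return np.array([[(i >> k) & 1 for k in range(n)] for i in range(2 ** n)])

V, H = all_states(n_vis), all_states(n_hid)

# RBM energy E(v, h) = -v^T W h - b^T v - c^T h for every joint state.
E = -(V @ W @ H.T + (V @ b)[:, None] + (H @ c)[None, :]).ravel()

def mean_energy(beta):
    # Exact model average <E> under p(v, h) proportional to exp(-beta * E).
    p = np.exp(-beta * (E - E.min()))
    p /= p.sum()
    return p @ E

# Stand-in for the annealer: exact Boltzmann samples at an *unknown*
# effective inverse temperature that the calibration must recover.
beta_true = 0.63
p = np.exp(-beta_true * (E - E.min())); p /= p.sum()
samples = rng.choice(E.size, size=20_000, p=p)
target = E[samples].mean()

# Moment matching: <E>_model(beta) decreases monotonically in beta,
# so bisect for the beta that reproduces the empirical mean energy.
lo, hi = 1e-3, 10.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if mean_energy(mid) > target else (lo, mid)

print(f"estimated effective beta: {0.5 * (lo + hi):.3f}  (true: {beta_true})")
```

In a realistic setting the partition function is intractable, which is precisely why the paper estimates such parameters from the same hardware samples used for training; this toy only shows the shape of the estimation problem.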
Related papers
- Iterated Denoising Energy Matching for Sampling from Boltzmann Densities [109.23137009609519]
Iterated Denoising Energy Matching (iDEM) alternates between (I) sampling regions of high model density from a diffusion-based sampler and (II) using these samples in a matching objective.
We show that the proposed approach achieves state-of-the-art performance on all metrics and trains $2$-$5\times$ faster.
arXiv Detail & Related papers (2024-02-09T01:11:23Z)
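The alternation described above can be pictured on a toy grid: step (I) draws points where the current model puts mass, step (II) regresses the model's energy toward the target energy at those points. This reproduces only the outer-loop structure, under assumed toy dynamics; iDEM's actual diffusion sampler and stochastic matching objective are not implemented here.

```python
import numpy as np

rng = np.random.default_rng(1)
xs = np.linspace(-3.0, 3.0, 61)          # toy 1-D state space (assumption)
E_target = 0.5 * xs ** 2                 # target Boltzmann density ~ exp(-E)
logits = np.zeros_like(xs)               # model log-density (unnormalized)

for _ in range(1000):
    # (I) sample regions of high model density.
    p = np.exp(logits - logits.max())
    p /= p.sum()
    idx = rng.choice(xs.size, size=64, p=p)
    # (II) "energy matching" at the sampled points: pull the model
    # energy (-logits) toward the target energy with a small step.
    grad = (-logits[idx]) - E_target[idx]
    np.add.at(logits, idx, 0.1 * grad)

p_model = np.exp(logits - logits.max()); p_model /= p_model.sum()
p_tgt = np.exp(-E_target); p_tgt /= p_tgt.sum()
print("total variation distance:", 0.5 * np.abs(p_model - p_tgt).sum())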
- Certainty In, Certainty Out: REVQCs for Quantum Machine Learning [15.908051575681458]
We discuss the statistical theory that enables highly accurate and precise sample inference.
We show the effectiveness of this training method by assessing several variational quantum circuits.
arXiv Detail & Related papers (2023-10-16T17:53:30Z)
- Boltzmann sampling with quantum annealers via fast Stein correction [1.37736442859694]
A fast, approximate method is developed to compute sample weights, which are then used to correct the samples generated by D-Wave quantum annealers.
On benchmark problems, the residual error of thermal-average calculations is observed to be reduced significantly.
arXiv Detail & Related papers (2023-09-08T04:47:10Z)
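The correction step referred to above can be pictured with plain self-normalized importance weights on a toy discrete system. The paper's contribution is a fast Stein-based estimator of such weights when the sampler's true distribution is unknown; this sketch assumes the distorted distribution is known and only shows the reweighting itself.

```python
import numpy as np

rng = np.random.default_rng(2)
E = rng.normal(size=256)                  # energies of a toy discrete system

beta = 1.0                                # target inverse temperature
p_tgt = np.exp(-beta * E); p_tgt /= p_tgt.sum()

# Imperfect hardware: samples actually come from a distorted distribution
# (here, a wrong effective temperature -- an assumption for illustration).
p_hw = np.exp(-0.7 * beta * E); p_hw /= p_hw.sum()
x = rng.choice(E.size, size=50_000, p=p_hw)

f = E                                     # observable: thermal average of E
naive = f[x].mean()                       # biased: ignores the distortion

# Self-normalized importance weights w ~ p_tgt / p_hw at the sampled states.
w = p_tgt[x] / p_hw[x]
corrected = (w * f[x]).sum() / w.sum()

exact = p_tgt @ f
print(f"exact {exact:.4f} | naive {naive:.4f} | corrected {corrected:.4f}")
```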
- Entropy-based Training Methods for Scalable Neural Implicit Sampler [15.978655106034113]
Efficiently sampling from un-normalized target distributions is a fundamental problem in scientific computing and machine learning.
In this paper, we propose an efficient and scalable neural implicit sampler that overcomes the limitations of existing approaches.
Our sampler can generate large batches of samples with low computational costs by leveraging a neural transformation that directly maps easily sampled latent vectors to target samples.
arXiv Detail & Related papers (2023-06-08T05:56:05Z)
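The "implicit sampler" mechanism is a deterministic network pushed forward from an easy latent distribution; a minimal (untrained) sketch of that forward pass follows. The network shape and the Gaussian latent are assumptions, and the paper's entropy-based training objectives are omitted.

```python
import numpy as np

rng = np.random.default_rng(3)
d_latent, d_out = 8, 2

# A tiny random two-layer network standing in for the trained sampler.
W1 = rng.normal(scale=0.5, size=(d_latent, 32))
W2 = rng.normal(scale=0.5, size=(32, d_out))

def sample(n):
    # One cheap forward pass turns n easy latent draws into n "target" samples.
    z = rng.normal(size=(n, d_latent))
    return np.tanh(z @ W1) @ W2

batch = sample(10_000)                    # large batches at low cost
print(batch.shape, batch.mean(axis=0))
```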
- A hybrid quantum-classical approach for inference on restricted Boltzmann machines [1.0928470926399563]
A Boltzmann machine is a powerful machine learning model with many real-world applications.
Statistical inference on a Boltzmann machine can be carried out by sampling from its posterior distribution.
Quantum computers promise to solve some non-trivial problems efficiently.
arXiv Detail & Related papers (2023-03-31T11:10:31Z)
- Hard Sample Matters a Lot in Zero-Shot Quantization [52.32914196337281]
Zero-shot quantization (ZSQ) is promising for compressing and accelerating deep neural networks when the data for training full-precision models are inaccessible.
In ZSQ, network quantization is performed using synthetic samples; thus, the performance of quantized models depends heavily on the quality of the synthetic samples.
We propose HArd sample Synthesizing and Training (HAST) to address this issue.
arXiv Detail & Related papers (2023-03-24T06:22:57Z)
- Sharp Calibrated Gaussian Processes [58.94710279601622]
State-of-the-art approaches for designing calibrated models rely on inflating the Gaussian process posterior variance.
We present a calibration approach that generates predictive quantiles using a computation inspired by the vanilla Gaussian process posterior variance.
Our approach is shown to yield a calibrated model under reasonable assumptions.
arXiv Detail & Related papers (2023-02-23T12:17:36Z)
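A predictive quantile from a GP posterior has the form mu(x) + z_alpha * sigma(x); below is a minimal exact-GP sketch computing a central 90% band. The kernel, noise level, and data are assumptions, and the paper's sharpened calibration procedure itself is not implemented.

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.uniform(-3, 3, size=20)
y = np.sin(X) + 0.1 * rng.normal(size=20)

def k(a, b, ell=1.0):
    # Squared-exponential kernel (an assumed choice).
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

noise = 0.1 ** 2
Kxx = k(X, X) + noise * np.eye(X.size)
Xs = np.linspace(-3, 3, 7)
Ks = k(Xs, X)

# Standard exact GP posterior via a Cholesky factorization.
L = np.linalg.cholesky(Kxx)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
mu = Ks @ alpha
v = np.linalg.solve(L, Ks.T)
var = np.diag(k(Xs, Xs)) - np.sum(v * v, axis=0)
sd = np.sqrt(np.maximum(var, 0.0))

z90 = 1.645                               # z-score for a central 90% band
for x0, m, s in zip(Xs, mu, sd):
    print(f"x={x0:+.2f}  90% interval: [{m - z90 * s:.3f}, {m + z90 * s:.3f}]")
```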
- Importance sampling for stochastic quantum simulations [68.8204255655161]
We introduce the qDrift protocol, which builds random product formulas by sampling terms of the Hamiltonian according to their coefficients.
We show that the simulation cost can be reduced while achieving the same accuracy, by considering the individual simulation cost during the sampling stage.
Results are confirmed by numerical simulations performed on a lattice nuclear effective field theory.
arXiv Detail & Related papers (2022-12-12T15:06:32Z)
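qDrift's core move, as summarized, is to sample Hamiltonian terms with probability proportional to their coefficients and chain short exponentials; a minimal two-qubit sketch follows. The Hamiltonian, evolution time, and step count are assumptions, and the paper's importance-sampling refinement (weighting by per-term simulation cost) is not shown.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(5)

# Pauli matrices and a small two-qubit Hamiltonian H = sum_j h_j P_j.
I2 = np.eye(2); X = np.array([[0.0, 1.0], [1.0, 0.0]]); Z = np.diag([1.0, -1.0])
terms = [np.kron(Z, Z), np.kron(X, I2), np.kron(I2, X)]
h = np.array([1.0, 0.4, 0.25])            # positive coefficients (assumed)

t, N = 1.0, 2000                          # total time, number of qDrift steps
lam = h.sum()
tau = lam * t / N                         # fixed duration of each sampled step

U = np.eye(4, dtype=complex)
for _ in range(N):
    j = rng.choice(len(terms), p=h / lam) # sample term j w.p. h_j / lambda
    U = expm(-1j * tau * terms[j]) @ U

H = sum(hj * Pj for hj, Pj in zip(h, terms))
U_exact = expm(-1j * t * H)
print("operator-norm error:", np.linalg.norm(U - U_exact, ord=2))
```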
- Learning a Restricted Boltzmann Machine using biased Monte Carlo sampling [0.6554326244334867]
We show that sampling the equilibrium distribution via Markov Chain Monte Carlo can be dramatically accelerated using biased sampling techniques.
We also show that this sampling technique can be exploited to improve the computation of the log-likelihood gradient during training.
arXiv Detail & Related papers (2022-06-02T21:29:01Z)
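The log-likelihood gradient mentioned above is the difference between data and model correlations, grad W = <v h>_data - <v h>_model. A minimal CD-1-style update, using one step of block Gibbs as a standard stand-in for the paper's biased sampler, is sketched below; the data and shapes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)
n_vis, n_hid, lr = 6, 4, 0.05

W = 0.01 * rng.normal(size=(n_vis, n_hid))
b = np.zeros(n_vis); c = np.zeros(n_hid)
data = (rng.random((100, n_vis)) < 0.5).astype(float)   # toy binary data

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for _ in range(50):
    # Positive phase: hidden activations given the data.
    ph = sigmoid(data @ W + c)
    # Negative phase: one step of block Gibbs (CD-1).
    h = (rng.random(ph.shape) < ph).astype(float)
    pv = sigmoid(h @ W.T + b)
    v = (rng.random(pv.shape) < pv).astype(float)
    ph_neg = sigmoid(v @ W + c)
    # Gradient of the log-likelihood: <v h>_data - <v h>_model.
    W += lr * (data.T @ ph - v.T @ ph_neg) / len(data)
    b += lr * (data - v).mean(axis=0)
    c += lr * (ph - ph_neg).mean(axis=0)

print("trained W norm:", np.linalg.norm(W))
```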
- Unrolling Particles: Unsupervised Learning of Sampling Distributions [102.72972137287728]
Particle filtering is used to compute good nonlinear estimates of complex systems.
We show in simulations that the resulting particle filter yields good estimates in a wide range of scenarios.
arXiv Detail & Related papers (2021-10-06T16:58:34Z)
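For reference, here is a generic bootstrap particle filter on a toy 1-D nonlinear model. The model, noise levels, and particle count are assumptions; the paper's learned (unrolled) proposal distribution is not implemented, so this is only the baseline it improves on.

```python
import numpy as np

rng = np.random.default_rng(7)
T, N = 50, 500                            # time steps, particles

# Toy nonlinear state-space model (an assumed example).
def f(x):
    return 0.5 * x + 2.0 * np.cos(0.5 * x)   # state transition mean

x_true = np.zeros(T)
for t in range(1, T):
    x_true[t] = f(x_true[t - 1]) + rng.normal(scale=0.5)
y = x_true + rng.normal(scale=0.7, size=T)   # noisy observations

particles = rng.normal(scale=1.0, size=N)
est = np.zeros(T)
for t in range(T):
    # Propagate through the dynamics (bootstrap proposal).
    particles = f(particles) + rng.normal(scale=0.5, size=N)
    # Weight by the observation likelihood, then resample.
    logw = -0.5 * ((y[t] - particles) / 0.7) ** 2
    w = np.exp(logw - logw.max()); w /= w.sum()
    est[t] = w @ particles
    particles = particles[rng.choice(N, size=N, p=w)]

print("RMSE:", np.sqrt(np.mean((est - x_true) ** 2)))
```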
- Jo-SRC: A Contrastive Approach for Combating Noisy Labels [58.867237220886885]
We propose a noise-robust approach named Jo-SRC (Joint Sample Selection and Model Regularization based on Consistency).
Specifically, we train the network in a contrastive learning manner. Predictions from two different views of each sample are used to estimate its "likelihood" of being clean or out-of-distribution.
arXiv Detail & Related papers (2021-03-24T07:26:07Z)
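The two-view "likelihood" of being clean can be pictured as an agreement score between the two predicted distributions. The sketch below uses a Jensen-Shannon divergence, which is one natural choice and an assumption here, not necessarily the paper's exact criterion; the logits are synthetic.

```python
import numpy as np

rng = np.random.default_rng(8)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def js_divergence(p, q, eps=1e-12):
    # Symmetric, bounded divergence between two predicted distributions.
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log((a + eps) / (b + eps)), axis=-1)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Predictions for each sample under two different augmented views;
# rows 2 and 3 get large view-to-view perturbations ("noisy" samples).
logits_v1 = rng.normal(size=(5, 10))
sigma = np.array([0.1, 0.1, 2.0, 2.0, 0.1])[:, None]
logits_v2 = logits_v1 + sigma * rng.normal(size=(5, 10))
p1, p2 = softmax(logits_v1), softmax(logits_v2)

# Low divergence across views -> more likely a clean sample.
scores = 1.0 - js_divergence(p1, p2) / np.log(2)   # normalize to [0, 1]
print("cleanliness scores:", np.round(scores, 3))
```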