Quantum Annealing Enhanced Markov-Chain Monte Carlo
- URL: http://arxiv.org/abs/2502.08060v1
- Date: Wed, 12 Feb 2025 01:54:27 GMT
- Title: Quantum Annealing Enhanced Markov-Chain Monte Carlo
- Authors: Shunta Arai, Tadashi Kadowaki
- Abstract summary: We propose quantum annealing-enhanced Markov Chain Monte Carlo (QAEMCMC), where QA is integrated into the MCMC subroutine.
QA efficiently explores low-energy configurations and overcomes local minima, enabling the generation of proposal states with a high acceptance probability.
Our results reveal larger spectral gaps, faster convergence of energy observables, and reduced total variation distance between the empirical and target distributions.
- Score: 0.0
- Abstract: In this study, we propose quantum annealing-enhanced Markov Chain Monte Carlo (QAEMCMC), where QA is integrated into the MCMC subroutine. QA efficiently explores low-energy configurations and overcomes local minima, enabling the generation of proposal states with a high acceptance probability. We benchmark QAEMCMC for the Sherrington-Kirkpatrick model and demonstrate its superior performance over the classical MCMC method. Our results reveal larger spectral gaps, faster convergence of energy observables, and reduced total variation distance between the empirical and target distributions. QAEMCMC accelerates MCMC and provides an efficient method for complex systems, paving the way for scalable quantum-assisted sampling strategies.
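The abstract describes the algorithmic skeleton of QAEMCMC: a Metropolis-Hastings chain over spin configurations of the Sherrington-Kirkpatrick model, where the proposal state comes from a quantum annealing run that is biased toward low-energy configurations. The sketch below is a hypothetical illustration of that structure only, not the paper's implementation: the QA hardware call is replaced by a short classical simulated-annealing sweep as a stand-in, and a symmetric proposal is assumed in the acceptance rule, whereas the actual method would account for the annealer's proposal distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

def sk_couplings(n):
    """Random symmetric SK couplings J_ij ~ N(0, 1/n), zero diagonal."""
    J = rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, n))
    J = (J + J.T) / 2.0
    np.fill_diagonal(J, 0.0)
    return J

def energy(J, s):
    """SK energy E(s) = -1/2 s^T J s for spins s in {-1, +1}^n."""
    return -0.5 * s @ J @ s

def anneal_proposal(J, s, sweeps=10, beta_max=2.0):
    """Stand-in for the QA subroutine: a short simulated-annealing run
    that biases the proposal toward low-energy configurations."""
    s = s.copy()
    n = len(s)
    for beta in np.linspace(0.1, beta_max, sweeps):
        for i in rng.permutation(n):
            dE = 2.0 * s[i] * (J[i] @ s)  # energy change from flipping spin i
            if dE <= 0 or rng.random() < np.exp(-beta * dE):
                s[i] = -s[i]
    return s

def qaemcmc(J, beta, steps=200):
    """Metropolis-Hastings with annealing-generated proposals.
    NOTE: the proposal is treated as symmetric here for simplicity."""
    n = J.shape[0]
    s = rng.choice([-1, 1], size=n)
    energies = []
    for _ in range(steps):
        s_new = anneal_proposal(J, s)
        dE = energy(J, s_new) - energy(J, s)
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            s = s_new  # accept the low-energy proposal
        energies.append(energy(J, s))
    return s, energies
```

Because the annealing step concentrates proposals near local minima, acceptance probabilities stay high even at low temperature, which is the mechanism the abstract credits for the larger spectral gaps and faster convergence.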
Related papers
- eQMARL: Entangled Quantum Multi-Agent Reinforcement Learning for Distributed Cooperation over Quantum Channels [98.314893665023]
Quantum computing has sparked a potential synergy between quantum entanglement and cooperation in multi-agent environments.
Current state-of-the-art quantum MARL (QMARL) implementations rely on classical information sharing.
eQMARL is a distributed actor-critic framework that facilitates cooperation over a quantum channel.
arXiv Detail & Related papers (2024-05-24T18:43:05Z) - Quantum Dynamical Hamiltonian Monte Carlo [0.0]
A ubiquitous problem in machine learning is sampling from probability distributions that we only have access to via their log probability.
We extend the well-known Hamiltonian Monte Carlo (HMC) method for Markov Chain Monte Carlo (MCMC) sampling to leverage quantum computation in a hybrid manner.
arXiv Detail & Related papers (2024-03-04T07:08:23Z) - Learning Energy-Based Prior Model with Diffusion-Amortized MCMC [89.95629196907082]
Common practice of learning latent space EBMs with non-convergent short-run MCMC for prior and posterior sampling is hindering the model from further progress.
We introduce a simple but effective diffusion-based amortization method for long-run MCMC sampling and develop a novel learning algorithm for the latent space EBM based on it.
arXiv Detail & Related papers (2023-10-05T00:23:34Z) - Wasserstein Quantum Monte Carlo: A Novel Approach for Solving the Quantum Many-Body Schrödinger Equation [56.9919517199927]
"Wasserstein Quantum Monte Carlo" (WQMC) uses the gradient flow induced by the Wasserstein metric, rather than Fisher-Rao metric, and corresponds to transporting the probability mass, rather than teleporting it.
We demonstrate empirically that the dynamics of WQMC results in faster convergence to the ground state of molecular systems.
arXiv Detail & Related papers (2023-07-06T17:54:08Z) - QAOA-MC: Markov chain Monte Carlo enhanced by Quantum Alternating Operator Ansatz [0.6181093777643575]
We propose the use of Quantum Alternating Operator Ansatz (QAOA) for quantum-enhanced Monte Carlo.
This work represents an important step toward realizing practical quantum advantage with currently available quantum computers.
arXiv Detail & Related papers (2023-05-15T16:47:31Z) - A self-consistent field approach for the variational quantum eigensolver: orbital optimization goes adaptive [52.77024349608834]
We present a self-consistent field (SCF) approach within the Adaptive Derivative-Assembled Problem-Tailored Ansatz Variational Quantum Eigensolver (ADAPT-VQE).
This framework is used for efficient quantum simulations of chemical systems on near-term quantum computers.
arXiv Detail & Related papers (2022-12-21T23:15:17Z) - Mitigating Out-of-Distribution Data Density Overestimation in Energy-Based Models [54.06799491319278]
Deep energy-based models (EBMs) are receiving increasing attention due to their ability to learn complex distributions.
To train deep EBMs, the maximum likelihood estimation (MLE) with short-run Langevin Monte Carlo (LMC) is often used.
We investigate why the MLE with short-run LMC can converge to EBMs with wrong density estimates.
arXiv Detail & Related papers (2022-05-30T02:49:17Z) - Overcoming barriers to scalability in variational quantum Monte Carlo [6.41594296153579]
The variational quantum Monte Carlo (VQMC) method received significant attention in the recent past because of its ability to overcome the curse of dimensionality inherent in many-body quantum systems.
Close parallels exist between VQMC and the emerging hybrid quantum-classical computational paradigm of variational quantum algorithms.
arXiv Detail & Related papers (2021-06-24T20:36:50Z) - Sampling in Combinatorial Spaces with SurVAE Flow Augmented MCMC [83.48593305367523]
Hybrid Monte Carlo is a powerful Markov Chain Monte Carlo method for sampling from complex continuous distributions.
We introduce a new approach based on augmenting Monte Carlo methods with SurVAE Flows to sample from discrete distributions.
We demonstrate the efficacy of our algorithm on a range of examples from statistics, computational physics and machine learning, and observe improvements compared to alternative algorithms.
arXiv Detail & Related papers (2021-02-04T02:21:08Z) - Accelerating MCMC algorithms through Bayesian Deep Networks [7.054093620465401]
Markov Chain Monte Carlo (MCMC) algorithms are commonly used for their versatility in sampling from complicated probability distributions.
As the dimension of the distribution gets larger, the computational costs for a satisfactory exploration of the sampling space become challenging.
We show an alternative way of performing adaptive MCMC, by using the outcome of Bayesian Neural Networks as the initial proposal for the Markov Chain.
arXiv Detail & Related papers (2020-11-29T04:29:00Z) - Non-convex Learning via Replica Exchange Stochastic Gradient MCMC [25.47669573608621]
We propose an adaptive replica exchange SGMCMC (reSGMCMC) to automatically correct the bias and study the corresponding properties.
Empirically, we validate the algorithm through extensive experiments on various setups.
arXiv Detail & Related papers (2020-08-12T15:02:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.