Instance Independence of Single Layer Quantum Approximate Optimization
Algorithm on Mixed-Spin Models at Infinite Size
- URL: http://arxiv.org/abs/2102.12043v3
- Date: Tue, 7 Sep 2021 15:16:03 GMT
- Title: Instance Independence of Single Layer Quantum Approximate Optimization
Algorithm on Mixed-Spin Models at Infinite Size
- Authors: Jahan Claes and Wim van Dam
- Abstract summary: We show that for mixed-spin models the performance of depth $1$ QAOA is independent of the specific instance in the limit of infinite sized systems.
We also give explicit expressions for the higher moments of the expected energy, thereby proving that the expected performance of QAOA concentrates.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper studies the application of the Quantum Approximate Optimization
Algorithm (QAOA) to spin-glass models with random multi-body couplings in the
limit of a large number of spins. We show that for such mixed-spin models the
performance of depth $1$ QAOA is independent of the specific instance in the
limit of infinite sized systems and we give an explicit formula for the
expected performance. We also give explicit expressions for the higher moments
of the expected energy, thereby proving that the expected performance of QAOA
concentrates.
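Since the central claim is that the depth-1 QAOA energy concentrates across random instances as the system size grows, a small numerical check can make this concrete. The sketch below is an illustrative assumption rather than the paper's analytic derivation: it brute-force simulates depth-1 QAOA on a few random Sherrington-Kirkpatrick-style 2-spin instances (a special case of the mixed-spin models considered here) and prints the per-spin expected energy at fixed, unoptimized angles (gamma, beta).

```python
import numpy as np

def qaoa1_energy(J, gamma, beta):
    """Exact depth-1 QAOA energy <C> for the Ising cost C(z) = sum_{i<j} J_ij z_i z_j,
    computed by brute-force statevector simulation (small n only)."""
    n = J.shape[0]
    dim = 2 ** n
    # Spin values z_i in {+1, -1} for every computational basis state.
    bits = (np.arange(dim)[:, None] >> np.arange(n)) & 1
    z = 1 - 2 * bits
    cost = np.einsum('ki,ij,kj->k', z, np.triu(J, 1), z)  # diagonal of C
    # |psi> = exp(-i beta B) exp(-i gamma C) |+>^n with mixer B = sum_i X_i.
    psi = np.full(dim, 1 / np.sqrt(dim), dtype=complex)
    psi *= np.exp(-1j * gamma * cost)
    c, s = np.cos(beta), -1j * np.sin(beta)
    for q in range(n):  # apply exp(-i beta X_q) qubit by qubit
        psi = psi.reshape(-1, 2, 2 ** q)
        a = psi[:, 0, :].copy()
        b = psi[:, 1, :].copy()
        psi[:, 0, :] = c * a + s * b
        psi[:, 1, :] = s * a + c * b
        psi = psi.reshape(dim)
    return float(np.real(np.vdot(psi, cost * psi)))

rng = np.random.default_rng(0)
n, gamma, beta = 10, 0.4, 0.3
# Gaussian couplings J_ij ~ N(0, 1/n): the 2-spin (Sherrington-Kirkpatrick)
# special case of the mixed-spin models; the angles are illustrative, not optimal.
samples = []
for _ in range(8):
    J = rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, n))
    samples.append(qaoa1_energy(J, gamma, beta) / n)
print("per-spin <C> across instances:", np.round(samples, 3))
```

Repeating this with larger n (within brute-force reach, roughly n <= 20) should show the instance-to-instance spread shrinking, which is the finite-size shadow of the concentration the paper proves in the infinite-size limit.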
Related papers
- MG-Net: Learn to Customize QAOA with Circuit Depth Awareness [51.78425545377329]
Quantum Approximate Optimization Algorithm (QAOA) and its variants exhibit immense potential in tackling optimization challenges.
The requisite circuit depth for satisfactory performance is problem-specific and often exceeds the maximum capability of current quantum devices.
We introduce the Mixer Generator Network (MG-Net), a unified deep learning framework adept at dynamically formulating optimal mixer Hamiltonians.
arXiv Detail & Related papers (2024-09-27T12:28:18Z) - The role of gaps in digitized counterdiabatic QAOA for fully-connected spin models [0.0]
Counterdiabatic (CD) corrections to the quantum approximate optimization algorithm (QAOA) have been proposed, yielding faster convergence within the desired accuracy than standard QAOA.
We show that the performance of the algorithm is related to the spectral properties of the instances analyzed.
arXiv Detail & Related papers (2024-09-05T13:17:56Z) - Application of Langevin Dynamics to Advance the Quantum Natural Gradient Optimization Algorithm [47.47843839099175]
A Quantum Natural Gradient (QNG) algorithm for optimization of variational quantum circuits has been proposed recently.
In this study, we employ the Langevin equation with a QNG force to demonstrate that its discrete-time solution gives a generalized form, which we call Momentum-QNG.
arXiv Detail & Related papers (2024-09-03T15:21:16Z) - Parameter Generation of Quantum Approximate Optimization Algorithm with Diffusion Model [3.6959187484738902]
Quantum computing presents a prospect for revolutionizing the field of probabilistic optimization.
We present the Quantum Approximate Optimization Algorithm (QAOA), which is a hybrid quantum-classical algorithm.
We show that the diffusion model is capable of learning the distribution of high-performing parameters and then synthesizing new parameters closer to optimal ones.
arXiv Detail & Related papers (2024-07-17T01:18:27Z) - PAPAL: A Provable PArticle-based Primal-Dual ALgorithm for Mixed Nash
Equilibrium [62.51015395213579]
We consider the problem of finding a mixed Nash equilibrium in two-player zero-sum continuous games with nonconvex-nonconcave objective functions.
The proposed algorithm employs the movements of particles to represent the updates of random strategies for the $\epsilon$-mixed Nash equilibrium.
arXiv Detail & Related papers (2023-03-02T05:08:15Z) - A kernel-based quantum random forest for improved classification [0.0]
The use of Quantum Machine Learning (QML) to enhance traditional classical learning methods has seen various limitations to its realisation.
We extend the linear quantum support vector machine (QSVM) with a kernel function computed through quantum kernel estimation (QKE).
To limit overfitting, we further extend the model to employ a low-rank Nyström approximation to the kernel matrix.
arXiv Detail & Related papers (2022-10-05T15:57:31Z) - Evaluating the Convergence of Tabu Enhanced Hybrid Quantum Optimization [58.720142291102135]
We introduce the Tabu Enhanced Hybrid Quantum Optimization metaheuristic approach, useful for solving optimization problems on quantum hardware.
We address the theoretical convergence of the proposed scheme from the viewpoint of collisions in the object that stores the tabu states, based on the Ising model.
arXiv Detail & Related papers (2022-09-05T07:23:03Z) - Performance and limitations of the QAOA at constant levels on large
sparse hypergraphs and spin glass models [15.857373057387669]
We prove concentration properties at any constant level (number of layers) on ensembles of random optimization problems in the infinite size limit.
Our analysis can be understood via a saddle-point approximation of a sum-over-paths integral.
We show that the performance of the QAOA at constant levels is bounded away from optimality for pure $q$-spin models when $q \ge 4$ is even.
arXiv Detail & Related papers (2022-04-21T17:40:39Z) - Empirical performance bounds for quantum approximate optimization [0.27998963147546135]
Quantifying performance bounds provides insight into when QAOA may be viable for solving real-world applications.
We find QAOA exceeds the Goemans-Williamson approximation ratio bound for most graphs.
The resulting data set is presented as a benchmark for establishing empirical bounds on QAOA performance.
arXiv Detail & Related papers (2021-02-12T23:12:09Z) - Momentum Q-learning with Finite-Sample Convergence Guarantee [49.38471009162477]
This paper analyzes a class of momentum-based Q-learning algorithms with finite-sample guarantee.
We establish the convergence guarantee for MomentumQ with linear function approximations and Markovian sampling.
We demonstrate through various experiments that the proposed MomentumQ outperforms other momentum-based Q-learning algorithms.
arXiv Detail & Related papers (2020-07-30T12:27:03Z) - Slice Sampling for General Completely Random Measures [74.24975039689893]
We present a novel Markov chain Monte Carlo algorithm for posterior inference that adaptively sets the truncation level using auxiliary slice variables.
The efficacy of the proposed algorithm is evaluated on several popular nonparametric models.
arXiv Detail & Related papers (2020-06-24T17:53:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.