Recursive greedy initialization of the quantum approximate optimization
algorithm with guaranteed improvement
- URL: http://arxiv.org/abs/2209.01159v2
- Date: Tue, 6 Jun 2023 12:59:53 GMT
- Title: Recursive greedy initialization of the quantum approximate optimization
algorithm with guaranteed improvement
- Authors: Stefan H. Sack, Raimel A. Medina, Richard Kueng and Maksym Serbyn
- Abstract summary: Quantum approximate optimization algorithm (QAOA) is a variational quantum algorithm, where a quantum computer implements a variational ansatz consisting of $p$ layers of alternating unitary operators.
We present an analytic construction of $2p+1$ transition states for QAOA with $p+1$ layers that use the local minimum of QAOA with $p$ layers.
- Score: 1.720510639137902
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The quantum approximate optimization algorithm (QAOA) is a variational
quantum algorithm, where a quantum computer implements a variational ansatz
consisting of $p$ layers of alternating unitary operators and a classical
computer is used to optimize the variational parameters. For a random
initialization, the optimization typically leads to local minima with poor
performance, motivating the search for initialization strategies of QAOA
variational parameters. Although numerous heuristic initializations exist, an
analytical understanding and performance guarantees for large $p$ remain
elusive. We introduce a greedy initialization of QAOA that guarantees
improved performance with an increasing number of layers. Our main result is
an analytic construction of $2p+1$ transition states - saddle points with a
unique negative curvature direction - for QAOA with $p+1$ layers that use the
local minimum of QAOA with $p$ layers. Transition states connect to new local
minima, which are guaranteed to lower the energy compared to the minimum found
for $p$ layers. We use the GREEDY procedure to navigate the number of local
minima, which increases exponentially with $p$, resulting from the recursive
application of our analytic construction. The performance of the GREEDY
procedure matches available initialization strategies while providing a
guarantee for the minimal energy to decrease with an increasing number of
layers $p$.
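To make the construction concrete, here is a minimal, self-contained Python sketch (our illustration, not the authors' code): it simulates QAOA for MaxCut on a 4-vertex ring in plain NumPy, pads a $p$-layer optimum with a zero-initialized layer to form the $2p+1$ candidates, and keeps the best of the resulting local minima. The index pairing in transition_states and the random nudge off the saddle are simplifying assumptions standing in for the paper's exact transition-state construction and its descent along the unique negative-curvature direction.

    import numpy as np
    from scipy.optimize import minimize

    # Toy MaxCut instance: a 4-vertex ring. COST is the diagonal of the cost
    # Hamiltonian, i.e. the negated cut value of every basis state.
    EDGES = [(0, 1), (1, 2), (2, 3), (3, 0)]
    N = 4
    DIM = 2 ** N
    COST = np.array([-sum(((z >> i) & 1) != ((z >> j) & 1) for i, j in EDGES)
                     for z in range(DIM)], dtype=float)

    def qaoa_energy(params):
        """Energy <H_C> of the p-layer QAOA state, params = (gammas, betas)."""
        p = len(params) // 2
        gammas, betas = params[:p], params[p:]
        state = np.full(DIM, 1 / np.sqrt(DIM), dtype=complex)
        for g, b in zip(gammas, betas):
            state = np.exp(-1j * g * COST) * state            # cost layer
            for q in range(N):                                # mixer exp(-i b X_q)
                psi = state.reshape(2 ** (N - 1 - q), 2, 2 ** q).copy()
                a0, a1 = psi[:, 0, :].copy(), psi[:, 1, :].copy()
                psi[:, 0, :] = np.cos(b) * a0 - 1j * np.sin(b) * a1
                psi[:, 1, :] = np.cos(b) * a1 - 1j * np.sin(b) * a0
                state = psi.reshape(DIM)
        return float(np.real(np.vdot(state, COST * state)))

    def transition_states(gammas, betas):
        """Pad a p-layer optimum with gamma=0 at slot i and beta=0 at slot j.
        The pairing j in {i-1, i} yields 2p+1 candidates; the paper specifies
        which pairs are genuine saddle points of the (p+1)-layer landscape."""
        p = len(gammas)
        for i in range(p + 1):
            for j in (i - 1, i):
                if 0 <= j <= p:
                    yield np.insert(gammas, i, 0.0), np.insert(betas, j, 0.0)

    def greedy_step(gammas, betas, rng):
        """Descend from every transition state; keep the lowest-energy minimum."""
        best = None
        for g, b in transition_states(gammas, betas):
            x0 = np.concatenate([g, b])
            # Nudge off the saddle; the paper instead follows the unique
            # negative-curvature direction, which is deterministic.
            x0 = x0 + 1e-3 * rng.standard_normal(x0.size)
            res = minimize(qaoa_energy, x0, method="BFGS")
            if best is None or res.fun < best.fun:
                best = res
        p = len(gammas) + 1
        return best.x[:p], best.x[p:], best.fun

    rng = np.random.default_rng(7)
    res = minimize(qaoa_energy, np.array([0.5, 0.3]), method="BFGS")  # p = 1
    g, b = res.x[:1], res.x[1:]
    for p in range(2, 5):
        g, b, e = greedy_step(g, b, rng)
        print(f"p={p}: E={e:.4f}")  # non-increasing, per the paper's guarantee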
Related papers
- A Recursive Lower Bound on the Energy Improvement of the Quantum Approximate Optimization Algorithm [0.0]
We use an analytic expansion of the cost function around transition states to gain insight into deep QAOA.
Our numerical study confirms the accuracy of our approximations, and reveals that the obtained bound and the true value of the QAOA cost function gain have a characteristic exponential decrease with the number of layers $p$.
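In symbols (our schematic notation, not the paper's), the reported behavior is
$$ \Delta E_p = E^{\min}_{p} - E^{\min}_{p+1} \sim C\, e^{-c\, p}, \qquad C, c > 0, $$
i.e. both the lower bound and the true energy gain shrink exponentially as layers are added.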
arXiv Detail & Related papers (2024-05-16T14:24:37Z) - Trainability Barriers in Low-Depth QAOA Landscapes [0.0]
Quantum Alternating Operator Ansatz (QAOA) is a prominent variational quantum algorithm for solving optimization problems.
Previous results have given analytical performance guarantees for a small, fixed number of parameters.
We study the difficulty of training in the intermediate regime, which is the focus of most current numerical studies.
arXiv Detail & Related papers (2024-02-15T18:45:30Z) - Iterative Layerwise Training for Quantum Approximate Optimization
Algorithm [0.39945675027960637]
The capability of the quantum approximate optimization algorithm (QAOA) in solving optimization problems has been intensively studied in recent years.
We propose an iterative layerwise optimization strategy and explore the possibility of reducing the optimization cost of solving problems with QAOA.
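As a rough, hedged illustration of the layerwise idea (our simplification, not the paper's exact schedule), one can grow the circuit one layer at a time and warm-start each optimization from the previous solution; energy below is a generic stand-in for a $2p$-parameter QAOA cost:

    import numpy as np
    from scipy.optimize import minimize

    # Stand-in for a p-layer QAOA energy; any smooth function of 2p parameters
    # works here (in practice, a circuit simulator or quantum device).
    def energy(params):
        return float(np.sum(np.cos(params) + 0.1 * params ** 2))

    def layerwise(p_max):
        gammas, betas = np.array([]), np.array([])
        for p in range(1, p_max + 1):
            # Append a fresh layer initialized near zero, then re-optimize all
            # parameters starting from the (p-1)-layer solution.
            x0 = np.concatenate([gammas, [0.01], betas, [0.01]])
            res = minimize(energy, x0, method="BFGS")
            gammas, betas = res.x[:p], res.x[p:]
            print(f"p={p}: E={res.fun:.4f}")
        return gammas, betas

    layerwise(4)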
arXiv Detail & Related papers (2023-09-24T05:12:48Z) - Using Differential Evolution to avoid local minima in Variational
Quantum Algorithms [0.0]
Variational Quantum Algorithms (VQAs) are among the most promising NISQ-era algorithms for harnessing quantum computing.
Our goal in this paper is to study alternative optimization methods that can avoid or reduce the effect of local minima and barren plateau problems.
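As a runnable, hedged illustration (a generic multimodal toy cost, not the paper's benchmarks), SciPy's differential evolution minimizes a VQA-style landscape without gradients, which is what lets it sidestep local minima and vanishing-gradient regions:

    import numpy as np
    from scipy.optimize import differential_evolution

    # Toy stand-in for a VQA cost with many local minima; in practice this
    # would evaluate a parameterized quantum circuit.
    def vqa_cost(theta):
        return float(np.sum(np.sin(3 * theta) ** 2 + 0.1 * (theta - 1) ** 2))

    # Differential evolution evolves a population of parameter vectors, so it
    # needs no gradients and is less prone to stalling in the first basin found.
    bounds = [(-np.pi, np.pi)] * 6     # one (low, high) pair per parameter
    result = differential_evolution(vqa_cost, bounds, seed=0, maxiter=200)
    print(result.x, result.fun)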
arXiv Detail & Related papers (2023-03-21T20:31:06Z) - LAWS: Look Around and Warm-Start Natural Gradient Descent for Quantum
Neural Networks [11.844238544360149]
Variational quantum algorithms (VQAs) have recently received significant attention due to their promising performance on Noisy Intermediate-Scale Quantum (NISQ) computers.
VQAs run on parameterized quantum circuits (PQCs); with randomly initialized parameters, they are characterized by barren plateaus (BP), where the gradient vanishes exponentially in the number of qubits.
In this paper, we first revisit the quantum natural gradient (QNG), one of the most popular algorithms used in VQAs, from the classical first-order optimization point of view.
Then, we propose a Look Around and Warm-Start (LAWS) natural gradient descent scheme.
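In symbols, the barren plateau statement above takes the standard form: for sufficiently expressive (approximately 2-design) circuits on $n$ qubits,
$$ \mathrm{Var}_{\theta}\big[\partial_k E(\theta)\big] \in \mathcal{O}(b^{-n}), \qquad b > 1, $$
so gradient magnitudes, and with them trainability, decay exponentially with system size.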
arXiv Detail & Related papers (2022-05-05T14:16:40Z) - Efficiently Escaping Saddle Points in Bilevel Optimization [48.925688192913]
Bilevel optimization is one of the fundamental problems in machine learning.
Recent developments in bilevel optimization have converged on the first fundamental nonasymptotic multi-step analysis.
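For reference, the standard bilevel problem has the nested form
$$ \min_{x}\; f\big(x,\, y^{*}(x)\big) \quad \text{s.t.} \quad y^{*}(x) \in \arg\min_{y}\; g(x, y), $$
with upper-level objective $f$ and lower-level objective $g$; saddle points arise in the resulting nonconvex landscape.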
arXiv Detail & Related papers (2022-02-08T07:10:06Z) - Adaptive Multi-Goal Exploration [118.40427257364729]
We show how AdaGoal can be used to tackle the objective of learning an $\epsilon$-optimal goal-conditioned policy.
AdaGoal is anchored in the high-level algorithmic structure of existing methods for goal-conditioned deep reinforcement learning.
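In one common formulation (our notation, not necessarily the paper's exact criterion), a goal-conditioned policy $\pi$ is $\epsilon$-optimal if
$$ V^{\pi}(s_0, g) \ge V^{*}(s_0, g) - \epsilon \quad \text{for every goal } g, $$
where $V^{*}$ denotes the optimal goal-conditioned value function.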
arXiv Detail & Related papers (2021-11-23T17:59:50Z) - Adaptive pruning-based optimization of parameterized quantum circuits [62.997667081978825]
Variational hybrid quantum-classical algorithms are powerful tools to maximize the use of Noisy Intermediate Scale Quantum devices.
We propose a strategy for such ansätze used in variational quantum algorithms, which we call "Parameter-Efficient Circuit Training" (PECT).
Instead of optimizing all of the ansatz parameters at once, PECT launches a sequence of variational algorithms.
arXiv Detail & Related papers (2020-10-01T18:14:11Z) - Gradient Free Minimax Optimization: Variance Reduction and Faster
Convergence [120.9336529957224]
In this paper, we consider the nonconvex-strongly-concave setting of gradient-free minimax optimization.
We show that a novel zeroth-order variance reduced descent algorithm achieves the best known query complexity.
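For context, here is the generic two-point zeroth-order gradient estimator that such methods build on (textbook form; the paper's variance-reduced estimator is more elaborate):

    import numpy as np

    def zo_gradient(f, x, delta=1e-3, rng=None):
        """Two-point zeroth-order estimate of grad f(x): unbiased up to
        O(delta^2) for u uniform on the unit sphere, since E[d * u u^T] = I."""
        rng = np.random.default_rng() if rng is None else rng
        u = rng.standard_normal(x.size)
        u /= np.linalg.norm(u)
        return x.size * (f(x + delta * u) - f(x - delta * u)) / (2 * delta) * u

    f = lambda x: float(np.sum(x ** 2))
    # Single samples are noisy; the estimate matches grad f(x) = 2x in expectation.
    print(zo_gradient(f, np.ones(5), rng=np.random.default_rng(0)))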
arXiv Detail & Related papers (2020-06-16T17:55:46Z) - Convergence of adaptive algorithms for weakly convex constrained
optimization [59.36386973876765]
We prove the $\tilde{\mathcal{O}}(t^{-1/4})$ rate of convergence for the norm of the gradient of the Moreau envelope.
Our analysis works with mini-batch size of $1$, constant first and second order moment parameters, and possibly smooth optimization domains.
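For context, the Moreau envelope referenced above is
$$ f_{\lambda}(x) = \min_{y} \Big\{ f(y) + \tfrac{1}{2\lambda} \|y - x\|^{2} \Big\}, $$
and $\|\nabla f_{\lambda}(x)\|$ is the standard near-stationarity measure for weakly convex problems, which the entry bounds at rate $\tilde{\mathcal{O}}(t^{-1/4})$.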
arXiv Detail & Related papers (2020-06-11T17:43:19Z) - Towards Better Understanding of Adaptive Gradient Algorithms in
Generative Adversarial Nets [71.05306664267832]
Adaptive algorithms perform gradient updates using the history of gradients and are ubiquitous in training deep neural networks.
In this paper we analyze a variant of the Optimistic Adagrad algorithm for nonconvex-nonconcave min-max problems.
Our experiments show that the advantage of adaptive gradient algorithms over non-adaptive ones in GAN training can be observed empirically.
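For reference, the optimistic update underlying such algorithms (written here without the adaptive per-coordinate scaling) is
$$ x_{t+1} = x_{t} - \eta\,(2 g_{t} - g_{t-1}), $$
where $g_t$ is the stochastic gradient at step $t$; the correction $g_t - g_{t-1}$ damps the cycling that plain gradient descent-ascent exhibits on min-max problems.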
arXiv Detail & Related papers (2019-12-26T22:10:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.