Trainability Barriers in Low-Depth QAOA Landscapes
- URL: http://arxiv.org/abs/2402.10188v2
- Date: Wed, 09 Oct 2024 14:51:54 GMT
- Title: Trainability Barriers in Low-Depth QAOA Landscapes
- Authors: Joel Rajakumar, John Golden, Andreas Bärtschi, Stephan Eidenbenz
- Abstract summary: Quantum Alternating Operator Ansatz (QAOA) is a prominent variational quantum algorithm for solving optimization problems.
Previous results have given analytical performance guarantees for a small, fixed number of parameters.
We study the difficulty of training in the intermediate regime, which is the focus of most current numerical studies.
- Abstract: The Quantum Alternating Operator Ansatz (QAOA) is a prominent variational quantum algorithm for solving combinatorial optimization problems. Its effectiveness depends on identifying input parameters that yield high-quality solutions. However, understanding the complexity of training QAOA remains an under-explored area. Previous results have given analytical performance guarantees for a small, fixed number of parameters. At the opposite end of the spectrum, barren plateaus are likely to emerge at $\Omega(n)$ parameters for $n$ qubits. In this work, we study the difficulty of training in the intermediate regime, which is the focus of most current numerical studies and near-term hardware implementations. Through extensive numerical analysis of the quality and quantity of local minima, we argue that QAOA landscapes can exhibit a superpolynomial growth in the number of low-quality local minima even when the number of parameters scales logarithmically with $n$. This means that the common technique of gradient descent from randomly initialized parameters is doomed to fail beyond small $n$, and emphasizes the need for good initial guesses of the optimal parameters.
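Below is a minimal numpy sketch (not the authors' code) of the kind of experiment the abstract describes: run plain gradient ascent on a small QAOA/Max-Cut energy from many random parameter initializations and tally the distinct local-optimum values that the runs reach. The graph, depth, learning rate, and step counts are arbitrary illustrative choices.

```python
import numpy as np
from itertools import product

# Minimal illustration (not the paper's code): multi-start gradient ascent on a
# small QAOA / Max-Cut landscape, tallying the distinct local-optimum values
# that random initializations reach.

def maxcut_costs(n, edges):
    """Diagonal of the Max-Cut cost 'Hamiltonian' over all 2^n bitstrings."""
    zs = np.array(list(product([1, -1], repeat=n)))             # (2^n, n) spins
    return np.sum([(1 - zs[:, i] * zs[:, j]) / 2 for i, j in edges], axis=0)

def qaoa_expectation(params, n, costs):
    """<C> for depth-p QAOA with transverse-field mixer (statevector simulation)."""
    p = len(params) // 2
    gammas, betas = params[:p], params[p:]
    state = np.full(2 ** n, 1 / np.sqrt(2 ** n), dtype=complex)  # |+>^n
    for gamma, beta in zip(gammas, betas):
        state = np.exp(-1j * gamma * costs) * state              # phase separator
        psi = state.reshape([2] * n)
        for q in range(n):                                       # e^{-i beta X_q} on each qubit
            psi = np.moveaxis(psi, q, 0)
            a, b = psi[0].copy(), psi[1].copy()
            psi[0] = np.cos(beta) * a - 1j * np.sin(beta) * b
            psi[1] = np.cos(beta) * b - 1j * np.sin(beta) * a
            psi = np.moveaxis(psi, 0, q)
        state = psi.reshape(-1)
    return float(np.real(np.vdot(state, costs * state)))

def gradient_ascent(f, x0, lr=0.05, steps=200, eps=1e-4):
    """Plain finite-difference gradient ascent (we maximize the expected cut)."""
    x = x0.copy()
    for _ in range(steps):
        grad = np.array([(f(x + eps * e) - f(x - eps * e)) / (2 * eps)
                         for e in np.eye(len(x))])
        x = x + lr * grad
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, p = 6, 3
    edges = [(i, (i + 1) % n) for i in range(n)]                 # ring graph
    costs = maxcut_costs(n, edges)
    f = lambda x: qaoa_expectation(x, n, costs)
    finals = [round(f(gradient_ascent(f, rng.uniform(0, np.pi, 2 * p))), 3)
              for _ in range(15)]
    print("distinct local-optimum values reached:", sorted(set(finals)))
```

The paper's argument is that, as $n$ and the depth grow, the number of such low-quality local optima grows superpolynomially, so random restarts of this kind stop finding good parameters.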
Related papers
- Accelerated zero-order SGD under high-order smoothness and overparameterized regime [79.85163929026146]
We present a novel gradient-free algorithm to solve convex optimization problems.
Such problems are encountered in medicine, physics, and machine learning.
We provide convergence guarantees for the proposed algorithm under both types of noise.
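As a rough illustration of the gradient-free setting (two function evaluations per step along a random direction), here is a generic two-point zeroth-order estimator; the paper's accelerated method, high-order smoothness assumptions, and noise models are not reproduced here.

```python
import numpy as np

# Generic two-point zeroth-order gradient estimator and SGD-style loop
# (illustration only; the paper's accelerated variant and noise models differ).

def zo_gradient(f, x, h=1e-3, rng=None):
    """Estimate grad f(x) from two function values along a random unit direction."""
    rng = rng or np.random.default_rng()
    u = rng.standard_normal(x.shape)
    u /= np.linalg.norm(u)
    return len(x) * (f(x + h * u) - f(x - h * u)) / (2 * h) * u

def zo_sgd(f, x0, lr=0.1, iters=2000, seed=0):
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for _ in range(iters):
        x = x - lr * zo_gradient(f, x, rng=rng)
    return x

if __name__ == "__main__":
    f = lambda x: np.sum((x - 1.0) ** 2)   # smooth convex test function
    print(zo_sgd(f, np.zeros(5)))          # should end up close to the all-ones vector
```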
arXiv Detail & Related papers (2024-11-21T10:26:17Z) - A Sample Efficient Alternating Minimization-based Algorithm For Robust Phase Retrieval [56.67706781191521]
In this work, we present a robust phase retrieval problem where the task is to recover an unknown signal.
Our proposed approach avoids the need for computationally expensive spectral methods, using a simple gradient step, and is robust to outliers.
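A generic sketch of gradient descent on the amplitude least-squares phase-retrieval objective is shown below; it only illustrates the "simple gradient step" idea, not the paper's alternating-minimization scheme, sample-complexity analysis, or robustness construction, and the measurement model and step size are illustrative assumptions.

```python
import numpy as np

# Generic gradient descent on the amplitude least-squares phase-retrieval
# objective (illustration only; not the paper's alternating-minimization
# scheme, sample-complexity analysis, or robustness construction).

def phase_retrieval_gd(A, y, iters=500, lr=0.1, seed=0):
    """Minimize (1/2m) * sum_i (|a_i^T x| - y_i)^2 with plain gradient steps."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = rng.standard_normal(n) / np.sqrt(n)        # random (non-spectral) init
    for _ in range(iters):
        z = A @ x
        grad = A.T @ ((np.abs(z) - y) * np.sign(z)) / m
        x = x - lr * grad
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n, m = 20, 200
    x_true = rng.standard_normal(n)
    A = rng.standard_normal((m, n))
    y = np.abs(A @ x_true)                         # phaseless (magnitude-only) measurements
    x_hat = phase_retrieval_gd(A, y)
    err = min(np.linalg.norm(x_hat - x_true), np.linalg.norm(x_hat + x_true))
    print("distance to the true signal (up to global sign):", err)
```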
arXiv Detail & Related papers (2024-09-07T06:37:23Z) - Linearly simplified QAOA parameters and transferability [0.6834295298053009]
The Quantum Approximate Optimization Algorithm (QAOA) provides a way to solve optimization problems using quantum computers.
We present some numerical results that are obtained for instances of the random Ising model and of the max-cut problem.
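One common way to "linearly simplify" QAOA parameters is a linear ramp in the layer index, so that two scalars generate all $2p$ angles; the sketch below assumes that form, which may differ from the paper's exact parameterization.

```python
import numpy as np

# Sketch of a linear parameterization of QAOA angles: all 2p angles are
# generated from two scalars via a linear ramp in the layer index
# (a hypothetical form chosen for illustration; the paper's exact
# parameterization may differ).

def linear_qaoa_params(p, delta_gamma, delta_beta):
    """gamma_k ramps up with the layer index, beta_k ramps down (annealing-like)."""
    k = np.arange(1, p + 1)
    gammas = delta_gamma * k / p
    betas = delta_beta * (1 - k / p)
    return gammas, betas

if __name__ == "__main__":
    gammas, betas = linear_qaoa_params(p=6, delta_gamma=0.8, delta_beta=0.7)
    print("gammas:", np.round(gammas, 3))
    print("betas: ", np.round(betas, 3))
```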
arXiv Detail & Related papers (2024-05-01T17:34:32Z) - A Mean-Field Analysis of Neural Stochastic Gradient Descent-Ascent for Functional Minimax Optimization [90.87444114491116]
This paper studies minimax optimization problems defined over infinite-dimensional function classes of overparametrized two-layer neural networks.
We address (i) the convergence of the gradient descent-ascent algorithm and (ii) the representation learning of the neural networks.
Results show that the feature representation induced by the neural networks is allowed to deviate from the initial one by a magnitude of $O(\alpha^{-1})$, measured in terms of the Wasserstein distance.
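For reference, the basic gradient descent-ascent update analyzed in this line of work looks as follows on a toy saddle problem; the paper's mean-field, two-layer-network setting is not reproduced here.

```python
# Plain gradient descent-ascent (GDA) on a toy saddle problem
# f(x, y) = 0.5*x^2 + x*y - 0.5*y^2, whose unique saddle point is (0, 0).
# Only the core update rule is shown; the paper's mean-field analysis of
# two-layer-network players is not reproduced here.

def gda(lr=0.05, steps=2000):
    x, y = 1.0, -1.0
    for _ in range(steps):
        gx = x + y            # df/dx: descent step in x
        gy = x - y            # df/dy: ascent step in y
        x, y = x - lr * gx, y + lr * gy
    return x, y

if __name__ == "__main__":
    print(gda())              # should approach (0.0, 0.0)
```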
arXiv Detail & Related papers (2024-04-18T16:46:08Z) - A Parameter Setting Heuristic for the Quantum Alternating Operator Ansatz [0.0]
We introduce a strategy for parameter setting suitable for common cases in which the number of distinct cost values grows only polynomially with the problem size.
We define a Classical Homogeneous Proxy for QAOA in which Perfect Homogeneity holds exactly, and which yields information describing both states and expectation values.
For up to $3$ QAOA levels we are easily able to find parameters that match approximation ratios returned by previous globally optimized approaches.
arXiv Detail & Related papers (2022-11-17T00:18:06Z) - Twisted hybrid algorithms for combinatorial optimization [68.8204255655161]
Proposed hybrid algorithms encode a cost function into a problem Hamiltonian and optimize its energy by varying over a set of states with low circuit complexity.
We show that for levels $p = 2, \ldots, 6$, the level $p$ can be reduced by one while roughly maintaining the expected approximation ratio.
arXiv Detail & Related papers (2022-03-01T19:47:16Z) - Unsupervised strategies for identifying optimal parameters in Quantum Approximate Optimization Algorithm [3.508346077709686]
We study unsupervised Machine Learning approaches for setting parameters without optimization.
We showcase them within Recursive-QAOA up to depth $3$ where the number of QAOA parameters used per iteration is limited to $3$.
We obtain similar performances to the case where we extensively optimize the angles, hence saving numerous circuit calls.
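A toy version of "parameter setting without optimization" is to cluster angles that were optimal on previously solved instances and reuse a cluster representative on new ones; the sketch below uses plain k-means on a synthetic angle bank and only loosely imitates the general idea (the paper's specific unsupervised procedure and its Recursive-QAOA setting are not reproduced).

```python
import numpy as np

# Toy illustration: choose QAOA angles for new instances by clustering angles
# that were optimal on previously solved instances, instead of optimizing per
# instance. The clustering here is plain k-means on a synthetic angle bank;
# the paper's actual unsupervised procedure and its Recursive-QAOA setting
# are not reproduced.

def kmeans(points, k, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = np.argmin(dists, axis=1)
        centers = np.array([points[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return centers

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    # Hypothetical bank of previously optimized (gamma, beta) pairs for depth-1 QAOA.
    angle_bank = np.concatenate([rng.normal([0.4, 0.3], 0.03, size=(30, 2)),
                                 rng.normal([0.9, 0.2], 0.03, size=(30, 2))])
    representative_angles = kmeans(angle_bank, k=2)
    print("candidate angle sets to reuse on new instances:\n", representative_angles)
```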
arXiv Detail & Related papers (2022-02-18T19:55:42Z) - Scaling Quantum Approximate Optimization on Near-term Hardware [49.94954584453379]
We quantify scaling of the expected resource requirements by optimized circuits for hardware architectures with varying levels of connectivity.
We show that the number of measurements, and hence the total time to solution, grows exponentially with problem size and problem graph degree.
These problems may be alleviated by increasing hardware connectivity or by recently proposed modifications to the QAOA that achieve higher performance with fewer circuit layers.
arXiv Detail & Related papers (2022-01-06T21:02:30Z) - Parameters Fixing Strategy for Quantum Approximate Optimization Algorithm [0.0]
We propose a strategy that gives a high approximation ratio on average, even at large circuit depths, by initializing QAOA with the optimal parameters obtained at the previous depths.
We test our strategy on the Max-Cut problem for certain classes of graphs, such as 3-regular graphs and Erdős-Rényi graphs.
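The layer-wise warm start described above can be sketched as a control-flow loop: optimize depth $p$, then start depth $p+1$ from those angles with the new layer initialized near zero. In the sketch, `toy_energy` is a stand-in surrogate, not a QAOA simulator, and the optimizer choice is arbitrary.

```python
import numpy as np
from scipy.optimize import minimize

# Control-flow sketch of the parameter-fixing strategy: optimize depth p, then
# start depth p+1 from those angles with the new layer initialized near zero.
# `toy_energy` is a stand-in surrogate over 2p angles, NOT a QAOA simulator;
# substitute a real expectation-value routine in its place.

def toy_energy(params):
    p = len(params) // 2
    gammas, betas = params[:p], params[p:]
    # Surrogate landscape in which deeper circuits can reach lower values.
    return -np.sum(np.sin(gammas) * np.cos(betas)) + 0.1 * np.sum(params ** 2)

def optimize_with_parameter_fixing(max_p):
    gammas, betas = np.array([]), np.array([])
    for p in range(1, max_p + 1):
        # Warm start: reuse the previous optimum, append a small angle for the new layer.
        x0 = np.concatenate([gammas, [0.01], betas, [0.01]])
        res = minimize(toy_energy, x0, method="Nelder-Mead")
        gammas, betas = res.x[:p], res.x[p:]
        print(f"p={p}: surrogate energy = {res.fun:.4f}")
    return gammas, betas

if __name__ == "__main__":
    optimize_with_parameter_fixing(max_p=4)
```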
arXiv Detail & Related papers (2021-08-11T15:44:16Z) - Parameter Concentration in Quantum Approximate Optimization [0.0]
We find that optimal QAOA circuit parameters concentrate as an inverse polynomial in the problem size.
Our results are demonstrated analytically for variational state preparations at $p=1,2$ (corresponding to 2 and 4 parameters, respectively).
arXiv Detail & Related papers (2021-03-22T16:24:00Z) - Adaptive Sampling for Best Policy Identification in Markov Decision Processes [79.4957965474334]
We investigate the problem of best-policy identification in discounted Markov Decision Processes (MDPs) when the learner has access to a generative model.
The advantages of state-of-the-art algorithms are discussed and illustrated.
arXiv Detail & Related papers (2020-09-28T15:22:24Z)