Extrapolation method to optimize linear-ramp QAOA parameters: Evaluation of QAOA runtime scaling
- URL: http://arxiv.org/abs/2504.08577v1
- Date: Fri, 11 Apr 2025 14:30:26 GMT
- Title: Extrapolation method to optimize linear-ramp QAOA parameters: Evaluation of QAOA runtime scaling
- Authors: Vanessa Dehn, Martin Zaefferer, Gerhard Hellstern, Florentin Reiter, Thomas Wellens
- Abstract summary: The linear-ramp QAOA has been proposed to address this issue, as it relies on only two parameters which have to be optimized. We apply this method to several use cases such as portfolio optimization, feature selection and clustering, and compare the quantum runtime scaling with that of classical methods.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The Quantum Approximate Optimization Algorithm (QAOA) has been suggested as a promising candidate for the solution of combinatorial optimization problems. Yet, whether - or under what conditions - it may offer an advantage compared to classical algorithms remains to be proven. Using the standard variational form of QAOA requires a high number of circuit parameters that have to be optimized at a sufficiently large depth, which constitutes a bottleneck for achieving a potential scaling advantage. The linear-ramp QAOA (LR-QAOA) has been proposed to address this issue, as it relies on only two parameters which have to be optimized. Based on this, we develop a method to estimate suitable values for those parameters through extrapolation, starting from smaller problem sizes (number of qubits) towards larger problem sizes. We apply this method to several use cases such as portfolio optimization, feature selection and clustering, and compare the quantum runtime scaling with that of classical methods. In the case of portfolio optimization, we demonstrate superior scaling compared to the classical runtime.
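As a rough illustration of the workflow described in the abstract, the sketch below constructs a linear-ramp QAOA schedule from its two free parameters and extrapolates values for those parameters from small problem sizes to a larger one via a simple polynomial fit. The ramp convention, function names, and the use of `numpy.polyfit` are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def linear_ramp_schedule(delta_beta, delta_gamma, p):
    """One common linear-ramp convention: mixer angles ramp down while cost
    angles ramp up, so all 2*p angles are fixed by just two scalars."""
    k = np.arange(1, p + 1)
    betas = delta_beta * (1.0 - k / p)   # decreasing mixer angles
    gammas = delta_gamma * (k / p)       # increasing cost angles
    return betas, gammas

def extrapolate_parameters(sizes, optimized_params, target_size, degree=1):
    """Fit the two LR-QAOA parameters found at small problem sizes and
    extrapolate them to a larger target size."""
    optimized_params = np.asarray(optimized_params)           # shape (n_sizes, 2)
    fits = [np.polyfit(sizes, optimized_params[:, i], degree) for i in range(2)]
    return np.array([np.polyval(f, target_size) for f in fits])

if __name__ == "__main__":
    # Hypothetical optimized (delta_beta, delta_gamma) values for 4-10 qubits.
    sizes = np.array([4, 6, 8, 10])
    params = np.array([[0.90, 0.40], [0.80, 0.45], [0.75, 0.50], [0.70, 0.52]])
    guess = extrapolate_parameters(sizes, params, target_size=20)
    betas, gammas = linear_ramp_schedule(*guess, p=12)
    print("extrapolated (delta_beta, delta_gamma):", guess)
```

In practice, the parameter values at each small size would come from a classical optimization or grid scan before the fit is applied.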
Related papers
- Differentially Private Optimization with Sparse Gradients [60.853074897282625]
We study differentially private (DP) optimization problems under sparsity of individual gradients.
Building on this, we obtain pure- and approximate-DP algorithms with almost optimal rates for convex optimization with sparse gradients.
arXiv Detail & Related papers (2024-04-16T20:01:10Z)
- Hybrid GRU-CNN Bilinear Parameters Initialization for Quantum Approximate Optimization Algorithm [7.502733639318316]
We propose a hybrid optimization approach that integrates Gated Recurrent Units (GRU), Convolutional Neural Networks (CNN), and a bilinear strategy as an innovative alternative to conventional approximations for predicting optimal parameters of QAOA circuits.
We employ the bilinear strategy to initialize QAOA circuit parameters at greater depths, with reference parameters obtained from GRU-CNN optimization.
arXiv Detail & Related papers (2023-11-14T03:00:39Z)
- Probabilistic tensor optimization of quantum circuits for the max-$k$-cut problem [0.0]
We propose a technique for optimizing parameterized circuits in variational quantum algorithms.
We illustrate our approach on the example of the quantum approximate optimization algorithm (QAOA) applied to the max-$k$-cut problem.
arXiv Detail & Related papers (2023-10-16T12:56:22Z)
- Twisted hybrid algorithms for combinatorial optimization [68.8204255655161]
The proposed hybrid algorithms encode a cost function into a problem Hamiltonian and optimize its energy by varying over a set of states with low circuit complexity.
We show that for levels $p=2,\ldots,6$, the level $p$ can be reduced by one while roughly maintaining the expected approximation ratio.
arXiv Detail & Related papers (2022-03-01T19:47:16Z)
- STORM+: Fully Adaptive SGD with Momentum for Nonconvex Optimization [74.1615979057429]
We investigate stochastic nonconvex optimization problems where the objective is an expectation over smooth loss functions.
Our work builds on the STORM algorithm, in conjunction with a novel approach to adaptively set the learning rate and momentum parameters.
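The summary above notes that the work builds on the STORM algorithm; the sketch below shows the STORM-style variance-reduced momentum update to make that building block concrete. The adaptive learning-rate and momentum schedules that define STORM+ are replaced by fixed constants here, so this is only an illustration of the underlying estimator, not the paper's algorithm.

```python
import numpy as np

def storm_style_sgd(sample_fn, grad_fn, x0, steps=1000, lr=0.02, a=0.1, seed=0):
    """STORM-style update: the direction d_t mixes the fresh stochastic gradient
    with a correction term evaluated at the previous iterate using the *same*
    sample, which reduces the variance of the search direction.

    sample_fn(rng) draws one data sample; grad_fn(x, sample) returns a
    stochastic gradient at x for that sample. lr and a are fixed here,
    whereas STORM+ would set them adaptively.
    """
    rng = np.random.default_rng(seed)
    x_prev = np.asarray(x0, dtype=float)
    d = grad_fn(x_prev, sample_fn(rng))        # initial direction from one sample
    x = x_prev - lr * d
    for _ in range(steps - 1):
        xi = sample_fn(rng)                    # one fresh sample per step
        d = grad_fn(x, xi) + (1.0 - a) * (d - grad_fn(x_prev, xi))
        x_prev, x = x, x - lr * d
    return x

if __name__ == "__main__":
    # Toy stochastic least-squares problem: minimize E[(a^T(x - x*))^2].
    x_star = np.array([1.0, -2.0])
    sample = lambda rng: rng.normal(size=2)
    grad = lambda x, a_vec: 2.0 * (a_vec @ (x - x_star)) * a_vec
    print(storm_style_sgd(sample, grad, x0=np.zeros(2)))
```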
arXiv Detail & Related papers (2021-11-01T15:43:36Z)
- Parameters Fixing Strategy for Quantum Approximate Optimization Algorithm [0.0]
We propose a strategy that gives a high approximation ratio on average, even at large circuit depths, by initializing QAOA with the optimal parameters obtained from the previous depths.
We test our strategy on the Max-Cut problem for certain classes of graphs such as 3-regular graphs and Erdős-Rényi graphs.
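A generic sketch of this depth-progressive warm start is shown below: the optimum found at depth p seeds the optimization at depth p+1, with the new layer's angles initialized to small values. The cost callback, the COBYLA optimizer, and the padding rule are illustrative assumptions rather than the authors' exact procedure.

```python
import numpy as np
from scipy.optimize import minimize

def optimize_depth_progressively(qaoa_energy, max_depth, seed=0):
    """Warm-start QAOA at depth p+1 with the optimal angles found at depth p.

    qaoa_energy(gammas, betas) -> float is assumed to evaluate the QAOA
    expectation value for the given angle vectors (e.g. on a simulator).
    """
    rng = np.random.default_rng(seed)
    params = rng.uniform(0, np.pi, size=2)        # (gamma_1, beta_1) at depth 1
    results = {}
    for p in range(1, max_depth + 1):
        res = minimize(
            lambda v: qaoa_energy(v[:p], v[p:]),  # flat vector -> (gammas, betas)
            params,
            method="COBYLA",
        )
        results[p] = (res.x.copy(), res.fun)
        # Reuse the optimum and append a small random angle for each new layer.
        gammas, betas = res.x[:p], res.x[p:]
        params = np.concatenate([gammas, rng.uniform(0, 0.1, 1),
                                 betas, rng.uniform(0, 0.1, 1)])
    return results

if __name__ == "__main__":
    # Toy stand-in for a real QAOA expectation value, just for smoke-testing.
    toy_energy = lambda g, b: float(np.sum(np.cos(g)) - np.sum(np.sin(b)))
    for p, (angles, energy) in optimize_depth_progressively(toy_energy, 3).items():
        print(f"depth {p}: energy {energy:.3f}")
```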
arXiv Detail & Related papers (2021-08-11T15:44:16Z)
- Adaptive pruning-based optimization of parameterized quantum circuits [62.997667081978825]
Variational hybrid quantum-classical algorithms are powerful tools to maximize the use of Noisy Intermediate-Scale Quantum devices.
We propose a strategy for training such ansätze in variational quantum algorithms, which we call "Parameter-Efficient Circuit Training" (PECT).
Instead of optimizing all of the ansatz parameters at once, PECT launches a sequence of variational algorithms.
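To illustrate what "a sequence of variational algorithms" can look like, the generic sketch below optimizes one block of ansatz parameters at a time while freezing the rest. It conveys only the spirit of sequential, restricted optimization; it is not the authors' PECT procedure, which is built around adaptive pruning.

```python
import numpy as np
from scipy.optimize import minimize

def blockwise_variational_optimization(energy_fn, n_params, block_size=4, sweeps=2, seed=0):
    """Optimize ansatz parameters block by block instead of all at once.

    energy_fn(theta) -> float is assumed to evaluate the variational cost
    for a full parameter vector theta (e.g. via a circuit simulator).
    """
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0, 2 * np.pi, size=n_params)
    for _ in range(sweeps):
        for start in range(0, n_params, block_size):
            idx = np.arange(start, min(start + block_size, n_params))

            def restricted(sub, idx=idx):
                full = theta.copy()          # freeze all parameters outside the block
                full[idx] = sub
                return energy_fn(full)

            res = minimize(restricted, theta[idx], method="COBYLA")
            theta[idx] = res.x               # accept the optimized block
    return theta, energy_fn(theta)

if __name__ == "__main__":
    # Toy cost standing in for a variational energy evaluation.
    cost = lambda theta: float(np.sum(np.sin(theta) ** 2))
    theta, value = blockwise_variational_optimization(cost, n_params=8)
    print(f"final cost: {value:.4f}")
```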
arXiv Detail & Related papers (2020-10-01T18:14:11Z)
- Convergence of adaptive algorithms for weakly convex constrained optimization [59.36386973876765]
We prove the $\tilde{\mathcal{O}}(t^{-1/4})$ rate of convergence for the norm of the gradient of the Moreau envelope.
Our analysis works with mini-batch size of $1$, constant first and second order moment parameters, and possibly smooth optimization domains.
arXiv Detail & Related papers (2020-06-11T17:43:19Z)
- Evaluation of QAOA based on the approximation ratio of individual samples [0.0]
We simulate the performance of QAOA applied to the Max-Cut problem and compare it with some of the best classical alternatives.
In light of the evolving computational complexity-theoretic guidance for QAOA, we utilize a framework for the search for quantum advantage.
arXiv Detail & Related papers (2020-06-08T18:00:18Z)
- Cross Entropy Hyperparameter Optimization for Constrained Problem Hamiltonians Applied to QAOA [68.11912614360878]
Hybrid quantum-classical algorithms such as the Quantum Approximate Optimization Algorithm (QAOA) are considered one of the most encouraging approaches for taking advantage of near-term quantum computers in practical applications.
Such algorithms are usually implemented in a variational form, combining a classical optimization method with a quantum machine to find good solutions to an optimization problem.
In this study, we apply a Cross-Entropy method to shape the optimization landscape, which allows the classical optimizer to find better parameters more easily and hence results in improved performance.
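The core loop of the Cross-Entropy method referred to above is simple: sample candidate parameters from a proposal distribution, keep the best-scoring "elite" fraction, and refit the distribution to those elites. The generic sketch below, with a diagonal Gaussian proposal and a toy objective, is an illustrative assumption; applying it to QAOA as in the paper would amount to plugging in the corresponding energy evaluation as the objective.

```python
import numpy as np

def cross_entropy_minimize(objective, dim, iterations=30, population=64,
                           elite_frac=0.2, seed=0):
    """Generic Cross-Entropy optimization with a diagonal Gaussian proposal."""
    rng = np.random.default_rng(seed)
    mean, std = np.zeros(dim), np.ones(dim)
    n_elite = max(1, int(elite_frac * population))
    for _ in range(iterations):
        samples = rng.normal(mean, std, size=(population, dim))
        scores = np.array([objective(s) for s in samples])
        elites = samples[np.argsort(scores)[:n_elite]]    # keep lowest objective values
        mean, std = elites.mean(axis=0), elites.std(axis=0) + 1e-6
    return mean

if __name__ == "__main__":
    # Toy objective standing in for, e.g., a QAOA energy as a function of its angles.
    print(cross_entropy_minimize(lambda x: float(np.sum((x - 1.5) ** 2)), dim=4))
```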
arXiv Detail & Related papers (2020-03-11T13:52:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.