QAOA-Predictor: Forecasting Success Probabilities and Minimal Depths for Efficient Fixed-Parameter Optimization
- URL: http://arxiv.org/abs/2603.02990v1
- Date: Tue, 03 Mar 2026 13:43:44 GMT
- Title: QAOA-Predictor: Forecasting Success Probabilities and Minimal Depths for Efficient Fixed-Parameter Optimization
- Authors: Rodrigo Coelho, Georg Kruse, Jeanette Miriam Lorenz
- Abstract summary: We propose a novel approach using a Graph Neural Network (GNN) to predict Quantum Approximate Optimization Algorithm (QAOA) performance. We demonstrate that the GNN accurately predicts QAOA performance within a 10% margin of the true values.
- Score: 0.9558392439655014
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Quantum Computing promises to solve complex combinatorial optimization problems more efficiently than classical methods, with the Quantum Approximate Optimization Algorithm (QAOA) being a leading candidate. Recent fixed-parameter variations of QAOA eliminate costly run-time optimization, but determining their optimal initialization as well as the number of required layers (p) for a target solution remains a critical, unsolved challenge. In this work, we propose a novel approach using a Graph Neural Network (GNN) to predict QAOA performance: Based on a graph representation of the problem, the GNN forecasts the probability of the optimal solution in the resulting distribution across different parameter initializations and layer depths for a wide variety of combinatorial optimization problems. We demonstrate that the GNN accurately predicts QAOA performance within a 10% margin of the true values. Furthermore, the model exhibits strong generalization capabilities across unseen problem classes, larger problem sizes, and higher layer counts. Our approach allows us to identify viable problem instances for QAOA and to select an adequate parameter initialization strategy with minimal layer depth, without the need for costly parameter optimization.
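The core idea of the abstract can be sketched in miniature: aggregate node features over the problem graph with a few message-passing rounds, then read out a success-probability estimate. This is a hand-rolled illustrative toy, not the paper's trained GNN; the weights, the degree/depth features, and the logistic readout are all assumptions.

```python
import math

def message_passing_step(adj, features, w_self=0.6, w_neigh=0.4):
    """One round of mean-neighbour aggregation over an adjacency list."""
    new_features = []
    for v, feats in enumerate(features):
        neigh = adj[v]
        if neigh:
            mean_neigh = [
                sum(features[u][k] for u in neigh) / len(neigh)
                for k in range(len(feats))
            ]
        else:
            mean_neigh = [0.0] * len(feats)
        new_features.append([
            w_self * feats[k] + w_neigh * mean_neigh[k]
            for k in range(len(feats))
        ])
    return new_features

def predict_success_probability(adj, depth_p, rounds=2):
    """Toy readout: pool node features, squash into (0, 1)."""
    n = len(adj)
    # Initial node feature: normalized degree plus the target depth p.
    feats = [[len(adj[v]) / max(n - 1, 1), float(depth_p)] for v in range(n)]
    for _ in range(rounds):
        feats = message_passing_step(adj, feats)
    pooled = sum(sum(f) for f in feats) / n
    return 1.0 / (1.0 + math.exp(-pooled))  # logistic readout in (0, 1)

# 4-cycle graph as adjacency lists, queried at depth p = 3.
cycle4 = [[1, 3], [0, 2], [1, 3], [0, 2]]
print(round(predict_success_probability(cycle4, depth_p=3), 3))
```

In the paper the analogous readout is learned from data across problem classes, initializations, and depths; here the point is only the shape of the pipeline: graph in, scalar success probability out.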
Related papers
- Adam assisted Fully informed Particle Swarm Optimization (Adam-FIPSO) based Parameter Prediction for the Quantum Approximate Optimization Algorithm (QAOA) [1.024113475677323]
The Quantum Approximate Optimization Algorithm (QAOA) is a prominent variational algorithm used for solving optimization problems such as the Max-Cut problem. A key challenge in QAOA lies in efficiently identifying suitable parameters that lead to high-quality solutions.
arXiv Detail & Related papers (2025-06-07T13:14:41Z)
- Extrapolation method to optimize linear-ramp QAOA parameters: Evaluation of QAOA runtime scaling [0.0]
The linear-ramp QAOA has been proposed to address this issue, as it relies on only two parameters which have to be optimized. We apply this method to several use cases such as portfolio optimization, feature selection and clustering, and compare the quantum runtime scaling with that of classical methods.
arXiv Detail & Related papers (2025-04-11T14:30:26Z)
- Scalable Min-Max Optimization via Primal-Dual Exact Pareto Optimization [66.51747366239299]
We propose a smooth variant of the min-max problem based on the augmented Lagrangian. The proposed algorithm scales better with the number of objectives than subgradient-based strategies.
arXiv Detail & Related papers (2025-03-16T11:05:51Z)
- Benchmarking Metaheuristic-Integrated QAOA against Quantum Annealing [0.0]
The study provides insights into the strengths and limitations of both Quantum Annealing and metaheuristic-integrated QAOA across different problem domains.
The findings suggest that the hybrid approach can leverage classical optimization strategies to enhance the solution quality and convergence speed of QAOA.
arXiv Detail & Related papers (2023-09-28T18:55:22Z)
- Iterative Layerwise Training for Quantum Approximate Optimization Algorithm [0.39945675027960637]
The capability of the quantum approximate optimization algorithm (QAOA) in solving the optimization problems has been intensively studied in recent years.
We propose the iterative layerwise optimization strategy and explore the possibility for the reduction of optimization cost in solving problems with QAOA.
arXiv Detail & Related papers (2023-09-24T05:12:48Z)
- Iterative-Free Quantum Approximate Optimization Algorithm Using Neural Networks [20.051757447006043]
We propose a practical method that uses a simple, fully connected neural network to find better parameters tailored to a new given problem instance.
Our method is consistently the fastest to converge while also achieving the best final result.
arXiv Detail & Related papers (2022-08-21T14:05:11Z)
- Parameters Fixing Strategy for Quantum Approximate Optimization Algorithm [0.0]
We propose a strategy to give high approximation ratio on average, even at large circuit depths, by initializing QAOA with the optimal parameters obtained from the previous depths.
We test our strategy on the Max-Cut problem for certain classes of graphs, such as 3-regular graphs and Erdős-Rényi graphs.
arXiv Detail & Related papers (2021-08-11T15:44:16Z)
- Recent Theoretical Advances in Non-Convex Optimization [56.88981258425256]
Motivated by recent increased interest in the analysis of optimization algorithms for non-convex optimization in deep networks and other problems in data science, we give an overview of recent theoretical results on optimization algorithms for non-convex optimization.
arXiv Detail & Related papers (2020-12-11T08:28:51Z)
- Adaptive pruning-based optimization of parameterized quantum circuits [62.997667081978825]
Variational hybrid quantum-classical algorithms are powerful tools to maximize the use of Noisy Intermediate-Scale Quantum devices.
We propose a strategy for such ansätze used in variational quantum algorithms, which we call "Parameter-Efficient Circuit Training" (PECT).
Instead of optimizing all of the ansatz parameters at once, PECT launches a sequence of variational algorithms.
arXiv Detail & Related papers (2020-10-01T18:14:11Z)
- Automatically Learning Compact Quality-aware Surrogates for Optimization Problems [55.94450542785096]
Solving optimization problems with unknown parameters requires learning a predictive model to predict the values of the unknown parameters and then solving the problem using these values.
Recent work has shown that including the optimization problem as a layer in the model training pipeline results in predictions that lead to higher-quality decisions.
We show that we can improve solution quality by learning a low-dimensional surrogate model of a large optimization problem.
arXiv Detail & Related papers (2020-06-18T19:11:54Z)
- Convergence of adaptive algorithms for weakly convex constrained optimization [59.36386973876765]
We prove the $\tilde{\mathcal{O}}(t^{-1/4})$ rate of convergence for the norm of the gradient of the Moreau envelope.
Our analysis works with a mini-batch size of $1$, constant first- and second-order moment parameters, and possibly unbounded optimization domains.
arXiv Detail & Related papers (2020-06-11T17:43:19Z)
- Cross Entropy Hyperparameter Optimization for Constrained Problem Hamiltonians Applied to QAOA [68.11912614360878]
Hybrid quantum-classical algorithms such as the Quantum Approximate Optimization Algorithm (QAOA) are considered among the most promising approaches for taking advantage of near-term quantum computers in practical applications.
Such algorithms are usually implemented in a variational form, combining a classical optimization method with a quantum machine to find good solutions to an optimization problem.
In this study we apply a Cross-Entropy method to shape the optimization landscape, which allows the classical optimizer to find better parameters more easily and hence results in improved performance.
arXiv Detail & Related papers (2020-03-11T13:52:41Z)
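The Cross-Entropy method mentioned in the last entry can be sketched generically: sample candidate parameters from a Gaussian, keep the elite fraction under the cost function, and refit the sampling distribution to the elites. The two-parameter quadratic landscape below is an illustrative stand-in with its optimum at (0.8, 0.4), not the constrained-Hamiltonian cost from the paper.

```python
import math
import random

def toy_landscape(gamma, beta):
    """Stand-in objective; higher is better, peak at gamma=0.8, beta=0.4."""
    return -((gamma - 0.8) ** 2 + (beta - 0.4) ** 2)

def cross_entropy_search(score, iters=30, samples=50, elite_frac=0.2, seed=0):
    rng = random.Random(seed)
    mean, std = [0.0, 0.0], [1.0, 1.0]
    n_elite = max(1, int(samples * elite_frac))
    for _ in range(iters):
        # Sample a batch of (gamma, beta) candidates from the current Gaussian.
        batch = [[rng.gauss(mean[k], std[k]) for k in range(2)]
                 for _ in range(samples)]
        batch.sort(key=lambda p: score(*p), reverse=True)
        elites = batch[:n_elite]
        # Refit mean and std to the elite samples.
        for k in range(2):
            mean[k] = sum(p[k] for p in elites) / n_elite
            var = sum((p[k] - mean[k]) ** 2 for p in elites) / n_elite
            std[k] = math.sqrt(var) + 1e-6  # floor to avoid premature collapse
    return mean

best = cross_entropy_search(toy_landscape)
print([round(v, 2) for v in best])
```

In the QAOA setting, `score` would instead evaluate the expectation value of the problem Hamiltonian for the sampled circuit parameters; the sampling-and-refit loop is unchanged.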
This list is automatically generated from the titles and abstracts of the papers in this site.