Scaling QAOA: transferring optimal adiabatic schedules from small-scale to large-scale variational circuits
- URL: http://arxiv.org/abs/2602.14986v1
- Date: Mon, 16 Feb 2026 18:12:13 GMT
- Title: Scaling QAOA: transferring optimal adiabatic schedules from small-scale to large-scale variational circuits
- Authors: Ugo Nzongani, Dylan Laplace Mermoud, Arthur Braida,
- Abstract summary: We propose a schedule-learning framework that transfers spectral-gap-informed adiabatic control strategies from small-scale instances to larger systems. Our results suggest that gap-informed schedule transfers provide a scalable and parameter-efficient strategy for QAOA.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The Quantum Approximate Optimization Algorithm (QAOA) is a leading approach for combinatorial optimization on near-term quantum devices, yet its scalability is limited by the difficulty of optimizing \(2p\) variational parameters for a large number \(p\) of layers. Recent empirical studies indicate that optimal QAOA angles exhibit concentration and transferability across problem sizes. Leveraging this observation, we propose a schedule-learning framework that transfers spectral-gap-informed adiabatic control strategies from small-scale instances to larger systems. Our method extracts the spectral gap profile of small problems and constructs a continuous schedule governed by \(\partial_t s = \kappa\, g^q(s)\), where \(g(s)\) is the instantaneous gap and \((\kappa, q)\) are global hyperparameters. Discretizing this schedule yields closed-form expressions for all QAOA angles, reducing the classical optimization task from \(2p\) parameters to only \(2\), independent of circuit depth. This drastic parameter compression mitigates classical optimization overhead and reduces sensitivity to barren plateau phenomena. Numerical simulations on random QUBO and 3-regular MaxCut instances demonstrate that the learnt schedules transfer effectively to larger systems while achieving competitive approximation ratios. Our results suggest that gap-informed schedule transfers provide a scalable and parameter-efficient strategy for QAOA.
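A minimal sketch of the pipeline the abstract describes, assuming a given gap profile \(g(s)\): integrate \(\partial_t s = \kappa\, g^q(s)\), then discretize the resulting schedule into \(p\) layers. The mapping from the schedule \(s(t)\) to the angles \((\gamma_k, \beta_k)\) used below is the standard Trotterized-annealing correspondence, assumed here for illustration; the paper's exact closed-form expressions, the function names, and the toy gap profile are not taken from the abstract.

```python
# Gap-informed schedule construction: integrate ds/dt = kappa * g(s)**q
# for a given gap profile g(s), then discretize s(t) into p QAOA layers.
# The s(t) -> (gamma_k, beta_k) mapping is the standard Trotterized-
# annealing correspondence, assumed for illustration.
import numpy as np
from scipy.integrate import solve_ivp

def gap_informed_schedule(g, kappa, q, p):
    """Return QAOA angles (gamma, beta) for p layers from a gap profile g."""
    # Stop the integration when s(t) reaches 1.
    reach_one = lambda t, s: s[0] - 1.0
    reach_one.terminal = True
    sol = solve_ivp(lambda t, s: kappa * g(s[0]) ** q,
                    (0.0, 1e6), [0.0],
                    events=reach_one, dense_output=True, max_step=0.1)
    T = sol.t_events[0][0]             # total anneal time
    dt = T / p                         # Trotter step
    t_k = (np.arange(p) + 0.5) * dt    # midpoint of each slice
    s_k = sol.sol(t_k)[0]              # schedule sampled per layer
    gamma = dt * s_k                   # cost-Hamiltonian angles
    beta = dt * (1.0 - s_k)            # mixer angles
    return gamma, beta

# Toy gap profile with an avoided crossing near s = 0.5; (kappa, q) are
# the two global hyperparameters left to the classical optimizer.
g = lambda s: 0.1 + abs(s - 0.5)
gamma, beta = gap_informed_schedule(g, kappa=1.0, q=1.0, p=20)
```

Because only \((\kappa, q)\) enter the construction, the classical outer loop tunes two numbers regardless of the depth \(p\).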
Related papers
- Hyperparameter Transfer Enables Consistent Gains of Matrix-Preconditioned Optimizers Across Scales [55.91454326946738]
We study how the optimal learning rate and weight decay should scale with model width and depth for a wide range of optimizers. We find that scaling the learning rate according to $\mu$P improves transfer, but can still suffer from significant finite-width deviations. For compute-optimal scaling, we find that scaling independent weight decay as $1/\mathrm{width}$ is nearly optimal across optimizers.
arXiv Detail & Related papers (2025-12-05T11:03:41Z) - Iterative Interpolation Schedules for Quantum Approximate Optimization Algorithm [1.845978975395919]
We present an iterative method that exploits the smoothness of optimal parameter schedules by expressing them in a basis of functions. We demonstrate that our method achieves better performance with fewer optimization steps than current approaches. For the largest LABS instance, we achieve near-optimal merit factors with schedules exceeding 1000 layers, an order of magnitude beyond previous methods (a minimal depth-transfer sketch in this spirit follows the list below).
arXiv Detail & Related papers (2025-04-02T12:53:21Z) - Graph Representation Learning for Parameter Transferability in Quantum Approximate Optimization Algorithm [1.0971022294548696]
The quantum approximate optimization algorithm (QAOA) is one of the most promising candidates for achieving quantum advantage through quantum-enhanced optimization.
In this work, we apply five different graph embedding techniques to determine good donor candidates for parameter transferability.
Using this technique, we effectively reduce the number of iterations required for parameter optimization, obtaining an approximate solution to the target problem with an order of magnitude speedup.
arXiv Detail & Related papers (2024-01-12T16:01:53Z) - How Much Entanglement Do Quantum Optimization Algorithms Require? [0.0]
We study the entanglement generated during the execution of ADAPT-QAOA.
By incrementally restricting the flexibility of the ADAPT-QAOA ansatz, we find that a larger amount of entanglement entropy at earlier stages coincides with faster convergence at later stages.
arXiv Detail & Related papers (2022-05-24T18:00:02Z) - Twisted hybrid algorithms for combinatorial optimization [68.8204255655161]
The proposed hybrid algorithms encode a cost function into a problem Hamiltonian and optimize its energy by varying over a set of states with low circuit complexity.
We show that for levels $p=2,\ldots,6$, the level $p$ can be reduced by one while roughly maintaining the expected approximation ratio.
arXiv Detail & Related papers (2022-03-01T19:47:16Z) - Faster One-Sample Stochastic Conditional Gradient Method for Composite Convex Minimization [61.26619639722804]
We propose a conditional gradient method (CGM) for minimizing convex finite-sum objectives formed as a sum of smooth and non-smooth terms.
The proposed method, equipped with a stochastic average gradient (SAG) estimator, requires only one sample per iteration. Nevertheless, it guarantees fast convergence rates on par with more sophisticated variance reduction techniques.
arXiv Detail & Related papers (2022-02-26T19:10:48Z) - STORM+: Fully Adaptive SGD with Momentum for Nonconvex Optimization [74.1615979057429]
We investigate stochastic non-convex optimization problems where the objective is an expectation over smooth loss functions.
Our work builds on the STORM algorithm, in conjunction with a novel approach to adaptively set the learning rate and momentum parameters.
arXiv Detail & Related papers (2021-11-01T15:43:36Z) - Parameters Fixing Strategy for Quantum Approximate Optimization Algorithm [0.0]
We propose a strategy that gives a high approximation ratio on average, even at large circuit depths, by initializing QAOA with the optimal parameters obtained at previous depths.
We test our strategy on the Max-Cut problem for certain classes of graphs such as 3-regular graphs and Erdős–Rényi graphs.
arXiv Detail & Related papers (2021-08-11T15:44:16Z) - Adaptive pruning-based optimization of parameterized quantum circuits [62.997667081978825]
Variational hybrid quantum-classical algorithms are powerful tools for maximizing the use of Noisy Intermediate-Scale Quantum (NISQ) devices.
We propose a strategy for optimizing the ansatze used in variational quantum algorithms, which we call Parameter-Efficient Circuit Training (PECT).
Instead of optimizing all of the ansatz parameters at once, PECT launches a sequence of variational algorithms.
arXiv Detail & Related papers (2020-10-01T18:14:11Z) - Balancing Rates and Variance via Adaptive Batch-Size for Stochastic Optimization Problems [120.21685755278509]
In this work, we seek to balance the fact that an attenuating step-size is required for exact convergence against the fact that a constant step-size learns faster, albeit only up to an error neighborhood.
Rather than fixing the minibatch size and the step-size at the outset, we propose to allow these parameters to evolve adaptively.
arXiv Detail & Related papers (2020-07-02T16:02:02Z)
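Several of the entries above (the iterative interpolation schedules and the parameters-fixing strategy) share a common workflow: angles optimized at a small depth seed the optimizer for a larger-depth circuit. A minimal sketch under that reading, with a generic linear interpolation standing in for the papers' own basis expansions and initialization rules:

```python
# Depth-to-depth QAOA parameter transfer: interpolate angles optimized at
# a small depth p onto a larger depth P, then use them as the starting
# point for the larger circuit's classical optimizer. The linear
# interpolation rule is an illustrative assumption.
import numpy as np

def transfer_angles(gamma_p, beta_p, P):
    """Interpolate depth-p QAOA angles onto depth P >= p."""
    p = len(gamma_p)
    x_small = (np.arange(p) + 0.5) / p   # layer index as a fraction of depth
    x_large = (np.arange(P) + 0.5) / P
    gamma_P = np.interp(x_large, x_small, gamma_p)
    beta_P = np.interp(x_large, x_small, beta_p)
    return gamma_P, beta_P

# Example: placeholder "optimal" angles from depth 5 seed a depth-20 circuit.
gamma5 = np.linspace(0.1, 0.8, 5)
beta5 = np.linspace(0.7, 0.05, 5)
gamma20, beta20 = transfer_angles(gamma5, beta5, 20)
```

The same smoothness assumption that makes such interpolation work is what the gap-informed schedules of the main paper exploit in continuous form.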
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.