Improving the efficiency of QAOA using efficient parameter transfer initialization and targeted-single-layer regularized optimization with minimal performance degradation
- URL: http://arxiv.org/abs/2601.15760v1
- Date: Thu, 22 Jan 2026 08:51:03 GMT
- Title: Improving the efficiency of QAOA using efficient parameter transfer initialization and targeted-single-layer regularized optimization with minimal performance degradation
- Authors: Shubham Patel, Utkarsh Mishra
- Abstract summary: We investigate the MaxCut problem in three different families of graphs using a quantum approximate optimization algorithm (QAOA) ansatz. For 3-regular (3R), Erdős-Rényi (ER), and Barabási-Albert (BA) graphs, the parameter transfer approach achieved a mean approximation ratio of 0.9443 for targeted single-layer optimization. This represents 98.88 percent of optimal performance, with an 8.06-times computational speedup on unweighted graphs.
- Score: 1.7761223012399538
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The quantum approximate optimization algorithm (QAOA) has promising applications in combinatorial optimization problems (COPs). We investigated the MaxCut problem in three different families of graphs using a QAOA ansatz with parameter transfer initialization followed by targeted single-layer optimization. For 3-regular (3R), Erdős-Rényi (ER), and Barabási-Albert (BA) graphs, the parameter transfer approach achieved a mean approximation ratio of 0.9443 for targeted single-layer optimization, compared to 0.9551 for full optimization. This represents 98.88 percent of optimal performance, with an 8.06-times computational speedup on unweighted graphs. In weighted graph families, however, relative performance is lower (less than 90 percent) for graphs with more nodes, suggesting that parameter transfer followed by targeted single-layer optimization is not ideal for weighted graph families; nevertheless, we find that for some weighted families (weighted 3-regular) this approach works perfectly. In 8.92 percent of test cases, targeted single-layer optimization outperformed full optimization, indicating that the complex parameter landscape can trap full optimization in sub-optimal local minima. To mitigate this inconsistency, ridge (L2) regularization is used to smooth the solution landscape, which helps the optimizer find better parameters during full optimization and reduces these inconsistent test cases from 8.92 percent to 3.81 percent. This work demonstrates that efficient parameter initialization and targeted single-layer optimization can improve the efficiency of QAOA with minimal performance degradation.
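The abstract describes a concrete pipeline: transfer pre-trained angles, then re-optimize only one targeted layer, optionally under an L2 penalty. Below is a minimal sketch of that pipeline, not the authors' code: the toy NumPy statevector simulator, the graph, and the "transferred" angles are all illustrative assumptions, and the ridge penalty is attached to the tuned layer purely for demonstration (the paper applies L2 regularization during full optimization).

```python
# A minimal sketch of parameter-transfer initialization followed by
# targeted-single-layer optimization for QAOA MaxCut. Everything here is
# illustrative: a toy dense statevector simulator (small n only), made-up
# "transferred" angles, and an optional ridge (L2) penalty on the tuned pair.
import itertools
import numpy as np
from scipy.optimize import minimize

def maxcut_costs(n, edges):
    """Cut value of every computational basis state (length 2**n)."""
    z = np.array(list(itertools.product([1, -1], repeat=n)))
    return np.sum([(1 - z[:, i] * z[:, j]) / 2 for i, j in edges], axis=0)

def qaoa_expectation(params, n, edges):
    """<C> of the depth-p QAOA state; params = [gamma_1..gamma_p, beta_1..beta_p]."""
    p = len(params) // 2
    gammas, betas = params[:p], params[p:]
    costs = maxcut_costs(n, edges)
    state = np.full(2**n, 2**(-n / 2), dtype=complex)      # |+>^n
    for gamma, beta in zip(gammas, betas):
        state = np.exp(-1j * gamma * costs) * state        # diagonal cost unitary
        for q in range(n):                                 # mixer: RX(2*beta) per qubit
            state = state.reshape(2**q, 2, -1)
            a, b = state[:, 0, :].copy(), state[:, 1, :].copy()
            c, s = np.cos(beta), -1j * np.sin(beta)
            state[:, 0, :], state[:, 1, :] = c * a + s * b, s * a + c * b
            state = state.reshape(-1)
    return float(np.real(np.vdot(state, costs * state)))

def targeted_single_layer(transferred, n, edges, layer=-1, lam=0.0):
    """Freeze all transferred angles except one (gamma_k, beta_k) pair."""
    transferred = np.asarray(transferred, dtype=float)
    p = len(transferred) // 2
    idx = [layer % p, p + layer % p]                       # gamma_k and beta_k

    def objective(x):
        full = transferred.copy()
        full[idx] = x
        # negate <C> because we maximize the cut; lam is the ridge penalty
        return -qaoa_expectation(full, n, edges) + lam * np.sum(x**2)

    res = minimize(objective, transferred[idx], method="COBYLA")
    full = transferred.copy()
    full[idx] = res.x
    return full, qaoa_expectation(full, n, edges)

# Toy run: complete graph K4 (3-regular), depth p = 2, illustrative angles.
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
params, value = targeted_single_layer([0.4, 0.7, 0.6, 0.3], 4, edges, lam=0.01)
print("approximation ratio:", value / maxcut_costs(4, edges).max())
```

The approximation ratios quoted in the abstract are this ratio averaged over graph instances; only the two angles of the targeted layer enter the classical optimizer, which is the source of the reported speedup.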
Related papers
- Benchmarking Lie-Algebraic Pretraining and Non-Variational QWOA for the MaxCut Problem [4.103893081207555]
This paper provides a comparative performance analysis of two strategies designed to improve trainability. We benchmark both methods on the unweighted MaxCut problem using a circuit depth of $p = 256$ across 200 Erdős-Rényi and 200 3-regular graphs. Both approaches significantly improve upon the standard randomly initialized QWOA: NV-QWOA attains a mean approximation ratio of 98.9% in just 60 iterations, while the Lie-algebraic pretrained QWOA improves to 77.71% after 500 iterations.
arXiv Detail & Related papers (2025-12-28T09:42:02Z)
- Optimizing Optimizers for Fast Gradient-Based Learning [53.81268610971847]
We lay the theoretical foundation for automating the design of optimizers in gradient-based learning. By treating gradient loss signals as a function that translates into parameter motions, the problem reduces to a family of convex optimization problems.
arXiv Detail & Related papers (2025-12-06T09:50:41Z)
- Adam assisted Fully informed Particle Swarm Optimization (Adam-FIPSO) based Parameter Prediction for the Quantum Approximate Optimization Algorithm (QAOA) [1.024113475677323]
The Quantum Approximate Optimization Algorithm (QAOA) is a prominent variational algorithm used for solving optimization problems such as the Max-Cut problem. A key challenge in QAOA lies in efficiently identifying suitable parameters that lead to high-quality solutions.
arXiv Detail & Related papers (2025-06-07T13:14:41Z)
- Iterative Interpolation Schedules for Quantum Approximate Optimization Algorithm [1.845978975395919]
We present an iterative method that exploits the smoothness of optimal parameter schedules by expressing them in a basis of functions (a small interpolation sketch follows). We demonstrate that our method achieves better performance with fewer optimization steps than current approaches. For the largest LABS instance, we achieve near-optimal merit factors with schedules exceeding 1000 layers, an order of magnitude beyond previous methods.
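A minimal illustration of the schedule-interpolation idea (my sketch, not the paper's method): fit the per-layer angles with a low-degree polynomial basis and resample at a larger depth to warm-start the next optimization.

```python
# Sketch: exploit the smoothness of optimal QAOA schedules by fitting the
# depth-p angles in a polynomial basis and resampling them at a new depth.
import numpy as np

def interpolate_schedule(angles, new_depth, degree=3):
    """Fit a low-degree polynomial to per-layer angles, resample at new_depth."""
    t_old = np.linspace(0.0, 1.0, len(angles))
    t_new = np.linspace(0.0, 1.0, new_depth)
    coeffs = np.polyfit(t_old, angles, min(degree, len(angles) - 1))
    return np.polyval(coeffs, t_new)

gammas = np.array([0.2, 0.5, 0.7, 0.8])   # illustrative depth-4 schedule
print(interpolate_schedule(gammas, 5))    # warm start for depth 5
```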
arXiv Detail & Related papers (2025-04-02T12:53:21Z)
- Scalable Min-Max Optimization via Primal-Dual Exact Pareto Optimization [66.51747366239299]
We propose a smooth variant of the min-max problem based on the augmented Lagrangian. The proposed algorithm scales better with the number of objectives than subgradient-based strategies.
arXiv Detail & Related papers (2025-03-16T11:05:51Z)
- Iterative Layerwise Training for Quantum Approximate Optimization Algorithm [0.39945675027960637]
The capability of the quantum approximate optimization algorithm (QAOA) to solve optimization problems has been intensively studied in recent years.
We propose an iterative layerwise optimization strategy and explore the possibility of reducing the optimization cost when solving problems with QAOA; a sketch of the layerwise idea follows.
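The layerwise idea is close to the targeted-single-layer step of the main paper. A minimal sketch of one reading of it, reusing the toy qaoa_expectation and targeted_single_layer helpers from the sketch after the main abstract: grow the circuit one layer at a time and tune only the newly appended (gamma, beta) pair.

```python
# Sketch of iterative layerwise optimization (my reading, not the paper's
# code): append one QAOA layer per step and optimize only that layer,
# reusing the toy helpers defined in the earlier sketch.
import numpy as np

def iterative_layerwise(n, edges, depth, init_angle=0.1):
    params = np.array([], dtype=float)
    value = 0.0
    for _ in range(depth):
        p = len(params) // 2
        # append a fresh layer with a small initial angle for gamma and beta
        params = np.concatenate([params[:p], [init_angle],
                                 params[p:], [init_angle]])
        params, value = targeted_single_layer(params, n, edges, layer=-1)
    return params, value

edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
print(iterative_layerwise(4, edges, depth=3)[1])
```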
arXiv Detail & Related papers (2023-09-24T05:12:48Z)
- An Efficient Batch Constrained Bayesian Optimization Approach for Analog Circuit Synthesis via Multi-objective Acquisition Ensemble [11.64233949999656]
We propose an efficient parallelizable Bayesian optimization algorithm via Multi-objective ACquisition function Ensemble (MACE).
Our proposed algorithm can reduce the overall simulation time by a factor of up to 74 compared to differential evolution (DE) on the unconstrained optimization problem when the batch size is 15.
For the constrained optimization problem, it can speed up the optimization process by a factor of up to 15 compared to the weighted expected improvement based Bayesian optimization (WEIBO) approach, again with a batch size of 15.
arXiv Detail & Related papers (2021-06-28T13:21:28Z)
- Optimizing Large-Scale Hyperparameters via Automated Learning Algorithm [97.66038345864095]
We propose a new hyperparameter optimization method with zeroth-order hyper-gradients (HOZOG).
Specifically, we first formulate hyperparameter optimization as an A-based constrained optimization problem.
Then, we use the average zeroth-order hyper-gradients to update hyperparameters; a sketch of such an estimator follows.
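A minimal sketch of an averaged zeroth-order gradient estimate of the kind this summary refers to (my illustration; the toy objective and step sizes are not from the paper): probe the objective along random directions and average the resulting finite-difference estimates.

```python
# Sketch: averaged zeroth-order gradient estimate for a black-box objective.
import numpy as np

def zeroth_order_grad(f, x, mu=1e-3, samples=8, rng=np.random.default_rng(0)):
    grad = np.zeros_like(x)
    fx = f(x)
    for _ in range(samples):
        u = rng.standard_normal(x.shape)          # random probe direction
        grad += (f(x + mu * u) - fx) / mu * u     # directional finite difference
    return grad / samples

f = lambda x: np.sum((x - 1.0) ** 2)              # toy objective
x = np.zeros(3)
for _ in range(100):
    x -= 0.05 * zeroth_order_grad(f, x)           # hyper-gradient-style update
print(x)                                          # approaches [1, 1, 1]
```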
arXiv Detail & Related papers (2021-02-17T21:03:05Z)
- Convergence of adaptive algorithms for weakly convex constrained optimization [59.36386973876765]
We prove the $\tilde{\mathcal{O}}(t^{-1/4})$ rate of convergence for the norm of the gradient of the Moreau envelope.
Our analysis works with mini-batch size of $1$, constant first and second order moment parameters, and possibly smooth optimization domains.
arXiv Detail & Related papers (2020-06-11T17:43:19Z)
- Cross Entropy Hyperparameter Optimization for Constrained Problem Hamiltonians Applied to QAOA [68.11912614360878]
Hybrid quantum-classical algorithms such as the Quantum Approximate Optimization Algorithm (QAOA) are considered one of the most encouraging approaches for taking advantage of near-term quantum computers in practical applications.
Such algorithms are usually implemented in a variational form, combining a classical optimization method with a quantum machine to find good solutions to an optimization problem.
In this study we apply a Cross-Entropy method to shape the optimization landscape, which allows the classical optimizer to find better parameters more easily and hence results in improved performance; a generic cross-entropy sketch follows.
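A minimal cross-entropy-method sketch (my illustration of the general technique, not the paper's QAOA setup): sample candidate parameters from a Gaussian, keep the elite fraction, and refit the distribution to them.

```python
# Sketch: generic cross-entropy method over a Gaussian search distribution.
import numpy as np

def cross_entropy_minimize(f, dim, iters=30, pop=64, elite_frac=0.2,
                           rng=np.random.default_rng(1)):
    mean, std = np.zeros(dim), np.ones(dim)
    n_elite = max(1, int(pop * elite_frac))
    for _ in range(iters):
        samples = rng.normal(mean, std, size=(pop, dim))
        scores = np.array([f(s) for s in samples])
        elite = samples[np.argsort(scores)[:n_elite]]  # lowest scores win
        mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-6
    return mean

print(cross_entropy_minimize(lambda x: np.sum((x - 0.5) ** 2), dim=4))
```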
arXiv Detail & Related papers (2020-03-11T13:52:41Z)
- Proximal Gradient Algorithm with Momentum and Flexible Parameter Restart for Nonconvex Optimization [73.38702974136102]
Various types of parameter restart schemes have been proposed for accelerated algorithms to facilitate their fast convergence in practice.
In this paper, we propose a proximal gradient algorithm with momentum and flexible parameter restart for solving nonsmooth nonconvex problems; a minimal restart sketch follows.
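A minimal sketch of a proximal gradient method with Nesterov momentum and a function-value restart rule (my illustration of the technique named in the title, not the paper's exact algorithm), on a toy LASSO problem:

```python
# Sketch: accelerated proximal gradient with a function-value restart rule,
# minimizing f(x) + lam*||x||_1 with smooth f; the restart drops momentum
# whenever the composite objective increases.
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def apg_restart(grad_f, obj, x0, step, lam, iters=200):
    x_prev, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(iters):
        x = soft_threshold(y - step * grad_f(y), step * lam)
        if obj(x) > obj(x_prev):                  # restart: discard momentum
            y, t = x_prev.copy(), 1.0
            x = soft_threshold(y - step * grad_f(y), step * lam)
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x + ((t - 1.0) / t_next) * (x - x_prev)   # momentum extrapolation
        x_prev, t = x, t_next
    return x_prev

# Toy LASSO instance (random data, illustrative only).
rng = np.random.default_rng(2)
A, b, lam = rng.standard_normal((20, 10)), rng.standard_normal(20), 0.1
grad_f = lambda x: A.T @ (A @ x - b)
obj = lambda x: 0.5 * np.sum((A @ x - b) ** 2) + lam * np.sum(np.abs(x))
step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1/L with L = sigma_max(A)^2
print(apg_restart(grad_f, obj, np.zeros(10), step, lam))
```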
arXiv Detail & Related papers (2020-02-26T16:06:27Z)