Iterative Layerwise Training for Quantum Approximate Optimization
Algorithm
- URL: http://arxiv.org/abs/2309.13552v1
- Date: Sun, 24 Sep 2023 05:12:48 GMT
- Title: Iterative Layerwise Training for Quantum Approximate Optimization
Algorithm
- Authors: Xinwei Lee, Xinjian Yan, Ningyi Xie, Yoshiyuki Saito, Dongsheng Cai,
Nobuyoshi Asai
- Abstract summary: The capability of the quantum approximate optimization algorithm (QAOA) to solve combinatorial optimization problems has been studied intensively in recent years.
We propose an iterative layerwise optimization strategy and explore how it can reduce the optimization cost of solving problems with QAOA.
- Score: 0.39945675027960637
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The capability of the quantum approximate optimization algorithm (QAOA) in
solving combinatorial optimization problems has been intensively studied in
recent years due to its application in the quantum-classical hybrid regime.
Despite difficulties innate to variational quantum algorithms (VQA), such as
barren plateaus and the local minima problem, QAOA remains one of the
applications suitable for the recent noisy intermediate-scale quantum (NISQ)
devices. Recent works have shown that the performance of QAOA largely depends
on the initial parameters, which motivates parameter initialization strategies
that provide good starting points for the optimization of QAOA. Optimization
strategies, on the other hand, focus on the optimization part of QAOA rather
than on parameter initialization. Rather than offering absolute advantages,
these strategies usually impose trade-offs on the performance of the
optimization. One such example is the layerwise optimization strategy, in
which the QAOA parameters are optimized layer by layer instead of all at once
(full optimization). The layerwise strategy costs less in total than the full
optimization, in exchange for a lower approximation ratio. In this work, we
propose the iterative layerwise optimization strategy and explore the
possibility of reducing the optimization cost of solving problems with QAOA.
Using numerical simulations, we find that by combining the iterative layerwise
strategy with proper initialization strategies, the optimization cost can be
significantly reduced in exchange for a minor reduction in the approximation
ratio. We also show that in some cases, the approximation ratio given by the
iterative layerwise strategy is even higher than that given by full
optimization.
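To make the layerwise idea concrete, below is a minimal sketch (not the paper's implementation) contrasting full optimization of all 2p parameters with a bare layerwise schedule that optimizes one layer at a time, using a plain NumPy statevector simulation of QAOA for MaxCut on a toy ring graph. The helper names, the COBYLA optimizer, the evaluation budgets, and the toy graph are illustrative assumptions; the iterative layerwise strategy proposed in the paper additionally revisits layers and combines the schedule with initialization strategies, which this sketch does not reproduce.

```python
import numpy as np
from scipy.optimize import minimize

# Toy MaxCut instance: ring graph on n qubits (illustrative choice, not the paper's benchmark).
n = 8
edges = [(i, (i + 1) % n) for i in range(n)]

dim = 2 ** n
idx = np.arange(dim)
bits = (idx[:, None] >> np.arange(n)) & 1             # bit q of each basis state z
cut = np.zeros(dim)
for i, j in edges:
    cut += bits[:, i] ^ bits[:, j]                    # diagonal MaxCut cost C(z) = number of cut edges

def qaoa_expectation(params, p):
    """<C> for a depth-p QAOA state; params = (gamma_1..gamma_p, beta_1..beta_p)."""
    gammas, betas = params[:p], params[p:]
    psi = np.full(dim, 1 / np.sqrt(dim), dtype=complex)          # |+>^n
    for gamma, beta in zip(gammas, betas):
        psi = psi * np.exp(-1j * gamma * cut)                    # U_C(gamma): diagonal phase
        for q in range(n):                                       # U_B(beta) = prod_q exp(-i beta X_q)
            psi = np.cos(beta) * psi - 1j * np.sin(beta) * psi[idx ^ (1 << q)]
    return float(np.real(np.sum(np.abs(psi) ** 2 * cut)))

def full_train(p, evals=800):
    """Optimize all 2p parameters at once (full optimization)."""
    res = minimize(lambda x: -qaoa_expectation(x, p), 0.1 * np.ones(2 * p),
                   method="COBYLA", options={"maxiter": evals})
    return res.x, -res.fun, res.nfev

def layerwise_train(p_max, evals_per_layer=200):
    """Bare layerwise schedule: freeze trained layers, optimize only the new (gamma_p, beta_p)."""
    gammas, betas, total_evals = [], [], 0
    for p in range(1, p_max + 1):
        def objective(x):
            full = np.concatenate([gammas, [x[0]], betas, [x[1]]])
            return -qaoa_expectation(full, p)
        res = minimize(objective, x0=[0.1, 0.1], method="COBYLA",
                       options={"maxiter": evals_per_layer})
        gammas.append(res.x[0]); betas.append(res.x[1])
        total_evals += res.nfev
    params = np.concatenate([gammas, betas])
    return params, qaoa_expectation(params, p_max), total_evals

p = 4
c_max = cut.max()                                     # brute-force optimum of the toy instance
for name, (params, energy, nfev) in [("full", full_train(p)),
                                     ("layerwise", layerwise_train(p))]:
    print(f"{name:9s}: approx. ratio = {energy / c_max:.4f}, circuit evaluations = {nfev}")
```

On this toy instance the layerwise run typically uses far fewer circuit evaluations at the cost of a somewhat lower approximation ratio, which is the trade-off described in the abstract.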
Related papers
- A coherent approach to quantum-classical optimization [0.0]
Hybrid quantum-classical optimization techniques have been shown to allow for the reduction of quantum computational resources.
We identify the coherence entropy as a crucial metric in determining the suitability of quantum states.
We propose a quantum-classical optimization protocol that significantly improves on previous approaches for such tasks.
arXiv Detail & Related papers (2024-09-20T22:22:53Z) - Restricted Global Optimization for QAOA [0.0]
We show that local optimization methods are inherently inadequate within the complex cost landscape of QAOA.
Instead, global optimization techniques greatly improve QAOA's performance across diverse problem instances.
arXiv Detail & Related papers (2023-09-21T15:47:07Z) - Quantum approximate optimization via learning-based adaptive
optimization [5.399532145408153]
Quantum approximate optimization algorithm (QAOA) is designed to solve optimization problems.
Our results demonstrate that the algorithm greatly outperforms conventional approximations in terms of speed, accuracy, efficiency and stability.
This work helps to unlock the full power of QAOA and paves the way toward achieving quantum advantage in practical classical tasks.
arXiv Detail & Related papers (2023-03-27T02:14:56Z) - A Depth-Progressive Initialization Strategy for Quantum Approximate
Optimization Algorithm [0.0]
We first discuss the patterns of optimal parameters in QAOA in two directions.
We then discuss the symmetries and periodicity of the expectation value, which are used to determine the bounds of the search space.
We propose a strategy that predicts new initial parameters by taking the differences between previously optimized parameters (a minimal sketch of this extrapolation idea appears after this list).
arXiv Detail & Related papers (2022-09-22T23:49:11Z) - Parameters Fixing Strategy for Quantum Approximate Optimization
Algorithm [0.0]
We propose a strategy to give a high approximation ratio on average, even at large circuit depths, by initializing QAOA with the optimal parameters obtained from the previous depths.
We test our strategy on the Max-cut problem of certain classes of graphs, such as 3-regular graphs and Erdős–Rényi graphs.
arXiv Detail & Related papers (2021-08-11T15:44:16Z) - Zeroth-Order Hybrid Gradient Descent: Towards A Principled Black-Box
Optimization Framework [100.36569795440889]
This work is on zeroth-order (ZO) optimization, which does not require first-order information.
We show that with a graceful design in coordinate importance sampling, the proposed ZO optimization method is efficient both in terms of complexity and function query cost.
arXiv Detail & Related papers (2020-12-21T17:29:58Z) - Bilevel Optimization: Convergence Analysis and Enhanced Design [63.64636047748605]
Bilevel optimization is a tool for many machine learning problems.
We propose a novel stochastic bilevel algorithm, named stocBiO, built on a sample-efficient hypergradient estimator.
arXiv Detail & Related papers (2020-10-15T18:09:48Z) - Adaptive pruning-based optimization of parameterized quantum circuits [62.997667081978825]
Variational hybrid quantum-classical algorithms are powerful tools for maximizing the use of Noisy Intermediate-Scale Quantum (NISQ) devices.
We propose a strategy for such ansatze used in variational quantum algorithms, which we call "Parameter-Efficient Circuit Training" (PECT).
Instead of optimizing all of the ansatz parameters at once, PECT launches a sequence of variational algorithms.
arXiv Detail & Related papers (2020-10-01T18:14:11Z) - Convergence of adaptive algorithms for weakly convex constrained
optimization [59.36386973876765]
We prove the $\tilde{\mathcal{O}}(t^{-1/4})$ rate of convergence for the norm of the gradient of the Moreau envelope.
Our analysis works with a mini-batch size of $1$, constant first- and second-order moment parameters, and possibly unbounded optimization domains.
arXiv Detail & Related papers (2020-06-11T17:43:19Z) - Cross Entropy Hyperparameter Optimization for Constrained Problem
Hamiltonians Applied to QAOA [68.11912614360878]
Hybrid quantum-classical algorithms such as the Quantum Approximate Optimization Algorithm (QAOA) are considered one of the most encouraging approaches for taking advantage of near-term quantum computers in practical applications.
Such algorithms are usually implemented in a variational form, combining a classical optimization method with a quantum machine to find good solutions to an optimization problem.
In this study we apply a Cross-Entropy method to shape this landscape, which allows the classical optimizer to find better parameters more easily and hence results in improved performance.
arXiv Detail & Related papers (2020-03-11T13:52:41Z) - Adaptivity of Stochastic Gradient Methods for Nonconvex Optimization [71.03797261151605]
Adaptivity is an important yet under-studied property in modern optimization theory.
Our algorithm is proved to achieve the best-available convergence rate for non-PL objectives while simultaneously outperforming existing algorithms for PL objectives.
arXiv Detail & Related papers (2020-02-13T05:42:27Z)
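The depth-progressive initialization and parameters-fixing entries above both reuse lower-depth optima as starting points when the circuit depth is increased. A minimal sketch of that reuse-and-extrapolate idea, assuming the qaoa_expectation helper from the earlier sketch and purely illustrative constants (the actual prediction rules in those papers are more refined), could look like this:

```python
import numpy as np
from scipy.optimize import minimize

def grow_initial_params(gammas, betas):
    """Illustrative depth-progressive guess: keep the depth-(p-1) optimum and append a new
    (gamma, beta) pair extrapolated from the last parameter differences."""
    if len(gammas) == 0:
        return np.array([0.1]), np.array([0.1])                            # arbitrary small start at p = 1
    if len(gammas) == 1:
        return np.append(gammas, gammas[-1]), np.append(betas, betas[-1])  # plain reuse at p = 2
    new_g = gammas[-1] + (gammas[-1] - gammas[-2])                         # linear extrapolation of the trend
    new_b = betas[-1] + (betas[-1] - betas[-2])
    return np.append(gammas, new_g), np.append(betas, new_b)

gammas, betas = np.array([]), np.array([])
for p in range(1, 5):
    gammas, betas = grow_initial_params(gammas, betas)          # warm start from previous depth
    x0 = np.concatenate([gammas, betas])
    res = minimize(lambda x, p=p: -qaoa_expectation(x, p), x0, method="COBYLA")
    gammas, betas = res.x[:p], res.x[p:]
    print(f"p={p}: <C> = {qaoa_expectation(res.x, p):.4f} after {res.nfev} evaluations")
```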