A Depth-Progressive Initialization Strategy for Quantum Approximate
Optimization Algorithm
- URL: http://arxiv.org/abs/2209.11348v2
- Date: Wed, 28 Sep 2022 02:25:16 GMT
- Title: A Depth-Progressive Initialization Strategy for Quantum Approximate
Optimization Algorithm
- Authors: Xinwei Lee, Ningyi Xie, Yoshiyuki Saito, Dongsheng Cai, Nobuyoshi Asai
- Abstract summary: We first discuss the patterns of optimal parameters in QAOA in two directions.
We then discuss the symmetries and periodicity of the expectation, which are used to determine the bounds of the search space.
We propose a strategy which predicts the new initial parameters by taking the difference between previous optimal parameters.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The quantum approximate optimization algorithm (QAOA) is known for its
capability and universality in solving combinatorial optimization problems on
near-term quantum devices. The results yielded by QAOA depend strongly on its
initial variational parameters. Hence, parameter selection for QAOA has become
an active area of research, since bad initialization can degrade the quality of
the results, especially at large circuit depths. We first discuss the patterns
of optimal parameters in QAOA in two directions: the angle index and the
circuit depth. Then, we discuss the symmetries and periodicity of the
expectation, which are used to determine the bounds of the search space. Based
on the patterns in optimal parameters and the bounds restriction, we propose a
strategy that predicts the new initial parameters by taking the difference
between previous optimal parameters. Unlike most other strategies, the strategy
we propose does not require multiple trials to ensure success. It only requires
one prediction when progressing to the next depth. We compare this strategy
with our previously proposed strategy and the layerwise strategy on solving the
Max-cut problem, in terms of the approximation ratio and the optimization cost.
We also address the non-optimality in previous parameters, which is seldom
discussed in other works, despite its importance in explaining the behavior of
variational quantum algorithms.
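The core idea of the abstract, progressing from depth p to depth p + 1 with a single prediction derived from differences of earlier optima, can be illustrated with a minimal sketch. This is an assumption-laden reading, not the paper's exact formula: here the p optimal angles found at depth p are reused, and one new angle is extrapolated from the difference of the last two.

```python
def predict_next_angles(opt_angles):
    """Predict p + 1 initial angles for depth p + 1 from the p optimal
    angles found at depth p (e.g. the gamma or beta angles of QAOA).

    Illustrative rule (hypothetical, not taken verbatim from the paper):
    keep the p optimal angles and append one new angle extrapolated
    linearly from the difference of the last two.
    """
    if len(opt_angles) < 2:
        raise ValueError("need at least two angles to take a difference")
    # Difference between the last two optimal angles drives the prediction.
    step = opt_angles[-1] - opt_angles[-2]
    return list(opt_angles) + [opt_angles[-1] + step]
```

A depth-progressive run would then alternate this one-shot prediction with a classical optimization of the predicted angles at the new depth, avoiding the multiple random restarts that other initialization schemes require.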
Related papers
- Adiabatic-Passage-Based Parameter Setting for Quantum Approximate
Optimization Algorithm [0.7252027234425334]
We propose a novel adiabatic-passage-based parameter setting method.
This method remarkably reduces the optimization cost, specifically when applied to the 3-SAT problem, to a sublinear level.
arXiv Detail & Related papers (2023-11-30T01:06:41Z) - Hybrid GRU-CNN Bilinear Parameters Initialization for Quantum
Approximate Optimization Algorithm [7.502733639318316]
We propose a hybrid optimization approach that integrates Gated Recurrent Units (GRU), Convolutional Neural Networks (CNN), and a bilinear strategy as an innovative alternative to conventional approximations for predicting optimal parameters of QAOA circuits.
We employ the bilinear strategy to initialize the QAOA circuit parameters at greater depths, with reference parameters obtained from GRU-CNN optimization.
arXiv Detail & Related papers (2023-11-14T03:00:39Z) - Probabilistic tensor optimization of quantum circuits for the
max-$k$-cut problem [0.0]
We propose a technique for optimizing parameterized circuits in variational quantum algorithms.
We illustrate our approach on the example of the quantum approximate optimization algorithm (QAOA) applied to the max-$k$-cut problem.
arXiv Detail & Related papers (2023-10-16T12:56:22Z) - Iterative Layerwise Training for Quantum Approximate Optimization
Algorithm [0.39945675027960637]
The capability of the quantum approximate optimization algorithm (QAOA) in solving optimization problems has been intensively studied in recent years.
We propose the iterative layerwise optimization strategy and explore the possibility for the reduction of optimization cost in solving problems with QAOA.
arXiv Detail & Related papers (2023-09-24T05:12:48Z) - Regret Bounds for Expected Improvement Algorithms in Gaussian Process
Bandit Optimization [63.8557841188626]
The expected improvement (EI) algorithm is one of the most popular strategies for optimization under uncertainty.
We propose a variant of EI with a standard incumbent defined via the GP predictive mean.
We show that our algorithm converges and achieves a cumulative regret bound of $\mathcal{O}(\gamma_T\sqrt{T})$.
arXiv Detail & Related papers (2022-03-15T13:17:53Z) - Parameters Fixing Strategy for Quantum Approximate Optimization
Algorithm [0.0]
We propose a strategy to give high approximation ratio on average, even at large circuit depths, by initializing QAOA with the optimal parameters obtained from the previous depths.
We test our strategy on the Max-cut problem of certain classes of graphs such as the 3-regular graphs and the Erdős-Rényi graphs.
arXiv Detail & Related papers (2021-08-11T15:44:16Z) - Bilevel Optimization: Convergence Analysis and Enhanced Design [63.64636047748605]
Bilevel optimization is a tool for many machine learning problems.
We propose a novel stochastic, efficient gradient estimator named stoc-BiO.
arXiv Detail & Related papers (2020-10-15T18:09:48Z) - Adaptive pruning-based optimization of parameterized quantum circuits [62.997667081978825]
Variational hybrid quantum-classical algorithms are powerful tools to maximize the use of Noisy Intermediate-Scale Quantum devices.
We propose a strategy for such ansatze used in variational quantum algorithms, which we call "Parameter-Efficient Circuit Training" (PECT).
Instead of optimizing all of the ansatz parameters at once, PECT launches a sequence of variational algorithms.
arXiv Detail & Related papers (2020-10-01T18:14:11Z) - Convergence of adaptive algorithms for weakly convex constrained
optimization [59.36386973876765]
We prove the $\tilde{\mathcal{O}}(t^{-1/4})$ rate of convergence for the norm of the gradient of the Moreau envelope.
Our analysis works with mini-batch size of $1$, constant first and second order moment parameters, and possibly smooth optimization domains.
arXiv Detail & Related papers (2020-06-11T17:43:19Z) - Cross Entropy Hyperparameter Optimization for Constrained Problem
Hamiltonians Applied to QAOA [68.11912614360878]
Hybrid quantum-classical algorithms such as the Quantum Approximate Optimization Algorithm (QAOA) are considered one of the most encouraging approaches for taking advantage of near-term quantum computers in practical applications.
Such algorithms are usually implemented in a variational form, combining a classical optimization method with a quantum machine to find good solutions to an optimization problem.
In this study we apply a Cross-Entropy method to shape this landscape, which allows the classical optimizer to find better parameters more easily and hence results in improved performance.
arXiv Detail & Related papers (2020-03-11T13:52:41Z) - Adaptivity of Stochastic Gradient Methods for Nonconvex Optimization [71.03797261151605]
Adaptivity is an important yet under-studied property in modern optimization theory.
Our algorithm is proved to achieve the best-available convergence for non-PL objectives simultaneously while outperforming existing algorithms for PL objectives.
arXiv Detail & Related papers (2020-02-13T05:42:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.