Multilevel leapfrogging initialization for quantum approximate
optimization algorithm
- URL: http://arxiv.org/abs/2306.06986v4
- Date: Tue, 12 Mar 2024 05:20:53 GMT
- Title: Multilevel leapfrogging initialization for quantum approximate
optimization algorithm
- Authors: Xiao-Hui Ni, Bin-Bin Cai, Hai-Ling Liu, Su-Juan Qin, Fei Gao and
Qiao-Yan Wen
- Abstract summary: A Multilevel Leapfrogging Interpolation (MLI) strategy is proposed to reduce the running costs of deep QAOA.
Results show that MLI achieves the same quasi-optima as INTERP while consuming only 1/2 of the running costs required by INTERP.
Greedy-MLI has better stability (i.e., a higher average approximation ratio) than INTERP and MLI, beyond obtaining the same quasi-optima.
- Score: 3.126276325914251
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, Zhou et al. proposed a novel Interpolation-based (INTERP)
strategy to generate the initial parameters for the Parameterized Quantum
Circuit (PQC) in the Quantum Approximate Optimization Algorithm (QAOA). INTERP
produces a guess for the initial parameters at level $i+1$ by applying linear
interpolation to the optimized parameters at level $i$, achieving better
performance than random initialization (RI). Nevertheless, INTERP incurs
extensive running costs for deep QAOA because it requires optimization at
each level of the PQC. To address this problem, a Multilevel Leapfrogging
Interpolation (MLI) strategy is proposed. MLI produces guesses for the
initial parameters at levels $i+1$ through $i+l$ ($l>1$) from the optimized
parameters at level $i$, omitting the optimization rounds at levels $i+1$
through $i+l-1$. As a result, MLI performs optimization at only a few levels
rather than at every level; this operation is referred to as Multilevel
Leapfrogging optimization (M-Leap). The performance of MLI is investigated on
the MaxCut problem. Compared with INTERP, MLI eliminates most of the
optimization rounds. Remarkably, the simulation results demonstrate that MLI
achieves the same quasi-optima as INTERP while consuming only 1/2 of the
running costs required by INTERP. In addition, since the only RI in MLI
occurs at level $1$, the greedy-MLI strategy is presented. The simulation
results suggest that greedy-MLI has better stability (i.e., a higher average
approximation ratio) than INTERP and MLI, beyond obtaining the same
quasi-optima as INTERP. Given its efficiency in finding quasi-optima, the
idea of M-Leap might be extended to other training tasks, especially those
requiring numerous optimization rounds, such as training adaptive quantum
circuits.
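The leapfrogging idea is easy to make concrete. Below is a minimal Python sketch, not the authors' code: `qaoa_cost` is a toy smooth surrogate standing in for a real MaxCut expectation value, `interp_guess` uses `np.interp` rather than the paper's exact interpolation convention, and `leap` plays the role of $l$.

```python
import numpy as np
from scipy.optimize import minimize

def interp_guess(schedule, new_depth):
    """Resample an optimized depth-p parameter schedule to `new_depth`
    points by linear interpolation (the spirit of INTERP; the paper's
    exact boundary convention may differ)."""
    old_grid = np.linspace(0.0, 1.0, len(schedule))
    new_grid = np.linspace(0.0, 1.0, new_depth)
    return np.interp(new_grid, old_grid, schedule)

def qaoa_cost(gammas, betas):
    """Toy smooth surrogate for a MaxCut expectation value, used only so
    the sketch runs end to end; replace with a real circuit simulation
    or hardware call."""
    return -np.sum(np.sin(gammas) * np.cos(betas))

def mli(p_max, leap=2, init=(0.1, 0.1)):
    """INTERP optimizes at every depth 1..p_max; MLI leapfrogs `leap`
    levels at a time, interpolating guesses for the skipped depths and
    optimizing only at depths 1, 1+leap, 1+2*leap, ..., p_max."""
    gammas = np.array([init[0]])
    betas = np.array([init[1]])
    depth = 1
    while True:
        x0 = np.concatenate([gammas, betas])
        res = minimize(lambda x: qaoa_cost(x[:depth], x[depth:]),
                       x0, method="BFGS")
        gammas, betas = res.x[:depth], res.x[depth:]
        if depth == p_max:
            return gammas, betas
        depth = min(depth + leap, p_max)      # leap over intermediate levels
        gammas = interp_guess(gammas, depth)  # guesses for the new depth
        betas = interp_guess(betas, depth)

# Example: a depth-7 schedule with a leap of 2 optimizes at depths 1, 3, 5, 7
gammas, betas = mli(p_max=7, leap=2)
```

With `leap=2`, the sweep optimizes at roughly half as many levels as an every-level INTERP schedule, consistent with the halved running costs reported above.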
Related papers
- A Multilevel Approach For Solving Large-Scale QUBO Problems With Noisy Hybrid Quantum Approximate Optimization [3.3493770627144004]
We experimentally test how existing quantum processing units (QPUs) perform as subsolvers within a multilevel strategy.
We find approximate solutions to $10$ instances of fully connected $82$-qubit Sherrington-Kirkpatrick graphs.
We observe that quantum optimization results are competitive with classical solvers in terms of solution quality.
arXiv Detail & Related papers (2024-08-14T20:06:32Z) - Memory-Efficient Gradient Unrolling for Large-Scale Bi-level Optimization [71.35604981129838]
Traditional gradient-based bi-level optimization algorithms are ill-suited to meet the demands of large-scale applications.
We introduce $(\text{FG})^2\text{U}$, which achieves an unbiased approximation of the meta gradient for bi-level optimization.
$(\text{FG})^2\text{U}$ is inherently designed to support parallel computing, enabling it to effectively leverage large-scale distributed computing systems.
arXiv Detail & Related papers (2024-06-20T08:21:52Z) - Iterative or Innovative? A Problem-Oriented Perspective for Code Optimization [81.88668100203913]
Large language models (LLMs) have demonstrated strong capabilities in solving a wide range of programming tasks.
In this paper, we explore code optimization with a focus on performance enhancement, specifically aiming to optimize code for minimal execution time.
arXiv Detail & Related papers (2024-06-17T16:10:10Z) - Two Optimizers Are Better Than One: LLM Catalyst Empowers Gradient-Based Optimization for Prompt Tuning [69.95292905263393]
We show that gradient-based optimization and large language models (LLMs) are complementary to each other, suggesting a collaborative optimization approach.
Our code is released at https://www.guozix.com/guozix/LLM-catalyst.
arXiv Detail & Related papers (2024-05-30T06:24:14Z) - Localized Zeroth-Order Prompt Optimization [54.964765668688806]
We propose a novel algorithm, namely localized zeroth-order prompt optimization (ZOPO).
ZOPO incorporates a derived Gaussian process based on the Neural Tangent Kernel into standard zeroth-order optimization for an efficient search of well-performing local optima in prompt optimization.
Remarkably, ZOPO outperforms existing baselines in terms of both the optimization performance and the query efficiency.
arXiv Detail & Related papers (2024-03-05T14:18:15Z) - Large Language Models as Optimizers [106.52386531624532]
We propose Optimization by PROmpting (OPRO), a simple and effective approach to leverage large language models (LLMs) as optimizers.
In each optimization step, the LLM generates new solutions from the prompt that contains previously generated solutions with their values.
We demonstrate that the best prompts optimized by OPRO outperform human-designed prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks.
arXiv Detail & Related papers (2023-09-07T00:07:15Z) - Quantum Alternating Operator Ansatz for Solving the Minimum Exact Cover
Problem [4.697039614904225]
We adopt the quantum alternating operator ansatz (QAOA+) to solve the minimum exact cover (MEC) problem.
The numerical results show that the solution can be obtained with high probability when the level $p$ of the algorithm is low.
We also optimize the quantum circuit by removing the single-qubit rotation gates $R_Z$.
arXiv Detail & Related papers (2022-11-28T12:45:52Z) - Min-Max Bilevel Multi-objective Optimization with Applications in
Machine Learning [30.25074797092709]
This paper is the first to propose a min-max bilevel multi-objective optimization framework.
It highlights applications in representation learning and hyper-objective learning.
arXiv Detail & Related papers (2022-03-03T18:56:13Z) - An Efficient Batch Constrained Bayesian Optimization Approach for Analog
Circuit Synthesis via Multi-objective Acquisition Ensemble [11.64233949999656]
We propose an efficient parallelizable Bayesian optimization algorithm via Multi-objective ACquisition function Ensemble (MACE).
Our proposed algorithm can reduce the overall simulation time by up to 74 times compared to differential evolution (DE) for the unconstrained optimization problem when the batch size is 15.
For the constrained optimization problem, our proposed algorithm can speed up the optimization process by up to 15 times compared to the weighted expected improvement based Bayesian optimization (WEIBO) approach, when the batch size is 15.
arXiv Detail & Related papers (2021-06-28T13:21:28Z) - Adaptive pruning-based optimization of parameterized quantum circuits [62.997667081978825]
Variational hybrid quantum-classical algorithms are powerful tools to maximize the use of Noisy Intermediate-Scale Quantum (NISQ) devices.
We propose a strategy for such ansätze used in variational quantum algorithms, which we call "Parameter-Efficient Circuit Training" (PECT).
Instead of optimizing all of the ansatz parameters at once, PECT launches a sequence of variational algorithms.
arXiv Detail & Related papers (2020-10-01T18:14:11Z)