Proactively incremental-learning QAOA
- URL: http://arxiv.org/abs/2311.02302v1
- Date: Sat, 4 Nov 2023 02:15:26 GMT
- Title: Proactively incremental-learning QAOA
- Authors: Lingxiao Li, Jing Li, Yanqi Song, Sujuan Qin, Qiaoyan Wen, and Fei Gao
- Abstract summary: We propose an advanced Quantum Approximate Optimization Algorithm (QAOA) based on incremental learning.
Our method achieves superior performance in Approximation Ratio (AR) and training time compared to prevalent QAOA approaches.
- Score: 9.677961025372115
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Solving optimization problems with high performance is the goal of
existing work on the Quantum Approximate Optimization Algorithm (QAOA). With
this intention, we propose an advanced QAOA based on incremental learning, in
which the training trajectory is proactively segmented into incremental phases.
Taking the MaxCut problem as our example, we randomly select a small subgraph
from the whole graph and, in the first phase, train the quantum circuit to
obtain optimized parameters for the MaxCut of that subgraph. Then, in each
subsequent incremental phase, a portion of the remaining nodes and edges is
added to the current subgraph, and the circuit is retrained to obtain new
optimized parameters. This operation is repeated until the MaxCut problem on
the whole graph is solved. The key point is that the optimized parameters of
the previous phase are reused as the initial parameters of the current phase.
Numerous simulation experiments show that our method outperforms prevalent
QAOA approaches in Approximation Ratio (AR) and training time. Specifically,
the AR exceeds that of standard QAOA by 13.17% on weighted random graphs.
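As a concrete illustration of the training loop the abstract describes, here is a minimal sketch in Python. Everything beyond the abstract's description is an assumption for illustration: the equal-increment subgraph growth schedule, the COBYLA classical optimizer, the brute-force statevector simulator, and the helper names (`maxcut_costs`, `qaoa_expectation`, `incremental_qaoa`) are our own choices, not the authors' implementation.

```python
# Minimal sketch of incremental-learning QAOA for MaxCut (illustrative only).
# Assumptions not taken from the paper: equal-sized growth phases, COBYLA as
# the classical optimizer, and a brute-force statevector simulator, so keep
# graphs small (<= ~16 nodes).
import numpy as np
import networkx as nx
from scipy.optimize import minimize

def maxcut_costs(graph):
    """Cut value of every one of the 2^n bitstrings (diagonal of the cost)."""
    nodes = list(graph.nodes())
    idx = {v: i for i, v in enumerate(nodes)}
    n = len(nodes)
    bits = (np.arange(2 ** n)[:, None] >> np.arange(n)) & 1
    costs = np.zeros(2 ** n)
    for u, v, data in graph.edges(data=True):
        costs += data.get("weight", 1.0) * (bits[:, idx[u]] != bits[:, idx[v]])
    return costs

def qaoa_expectation(params, costs, n):
    """Expected cut value of a depth-p QAOA state (p = len(params) // 2)."""
    p = len(params) // 2
    gammas, betas = params[:p], params[p:]
    state = np.full(2 ** n, 2 ** (-n / 2), dtype=complex)  # uniform |+>^n
    for gamma, beta in zip(gammas, betas):
        state = np.exp(-1j * gamma * costs) * state  # cost layer e^{-i gamma C}
        for q in range(n):  # mixer layer: e^{-i beta X} on each qubit
            s = state.reshape(2 ** (n - q - 1), 2, 2 ** q)
            a, b = s[:, 0, :].copy(), s[:, 1, :].copy()
            c, ms = np.cos(beta), -1j * np.sin(beta)
            s[:, 0, :], s[:, 1, :] = c * a + ms * b, ms * a + c * b
            state = s.reshape(-1)
    return float(np.abs(state) ** 2 @ costs)

def incremental_qaoa(graph, p=2, n_phases=3, seed=0):
    """Train phase by phase, warm-starting each phase with the last optimum."""
    rng = np.random.default_rng(seed)
    order = list(graph.nodes())
    rng.shuffle(order)  # random node order defines the subgraph growth
    sizes = np.linspace(max(2, len(order) // n_phases),
                        len(order), n_phases).astype(int)
    sizes[-1] = len(order)  # the final phase always covers the whole graph
    params = rng.uniform(0.0, np.pi, 2 * p)  # random start for phase 1 only
    for size in sizes:
        sub = graph.subgraph(order[:size])
        costs = maxcut_costs(sub)
        res = minimize(
            lambda x: -qaoa_expectation(x, costs, sub.number_of_nodes()),
            params, method="COBYLA")
        params = res.x  # reuse: the previous optimum seeds the next phase
    return params, -res.fun

G = nx.random_regular_graph(3, 10, seed=1)
params, expected_cut = incremental_qaoa(G, p=2, n_phases=3)
print(f"expected cut value on the full graph: {expected_cut:.3f}")
```

The mechanism the paper highlights is the single line `params = res.x`: each phase's optimized parameters seed the optimizer for the next, larger subgraph, so later phases start near a good region of the landscape instead of from random initialization.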
Related papers
- Stochastic Zeroth-Order Optimization under Strongly Convexity and Lipschitz Hessian: Minimax Sample Complexity [59.75300530380427]
We consider the problem of optimizing second-order smooth and strongly convex functions, where the algorithm has access only to noisy evaluations of the objective function it queries.
We provide the first tight characterization for the rate of the minimax simple regret by developing matching upper and lower bounds.
arXiv Detail & Related papers (2024-06-28T02:56:22Z)
- Iterative-Free Quantum Approximate Optimization Algorithm Using Neural Networks [20.051757447006043]
We propose a practical method that uses a simple, fully connected neural network to find better parameters tailored to a new given problem instance.
Our method is consistently the fastest to converge while also achieving the best final result.
arXiv Detail & Related papers (2022-08-21T14:05:11Z)
- STORM+: Fully Adaptive SGD with Momentum for Nonconvex Optimization [74.1615979057429]
We investigate stochastic nonconvex optimization problems where the objective is an expectation over smooth loss functions.
Our work builds on the STORM algorithm, in conjunction with a novel approach to adaptively set the learning rate and momentum parameters.
arXiv Detail & Related papers (2021-11-01T15:43:36Z)
- Parameters Fixing Strategy for Quantum Approximate Optimization Algorithm [0.0]
We propose a strategy that achieves a high approximation ratio on average, even at large circuit depths, by initializing QAOA with the optimal parameters obtained from the previous depths.
We test our strategy on the Max-cut problem of certain classes of graphs such as the 3-regular graphs and the Erdős-Rényi graphs.
arXiv Detail & Related papers (2021-08-11T15:44:16Z)
- Quantum annealing initialization of the quantum approximate optimization algorithm [0.0]
Quantum approximate optimization algorithm (QAOA) is a prospective near-term quantum algorithm.
The external parameter optimization required in QAOA could become a performance bottleneck.
In this work we visualize the optimization landscape of the QAOA applied to the MaxCut problem on random graphs.
arXiv Detail & Related papers (2021-01-14T17:45:13Z)
- Zeroth-Order Hybrid Gradient Descent: Towards A Principled Black-Box Optimization Framework [100.36569795440889]
This work concerns zeroth-order (ZO) optimization, which does not require first-order information.
We show that with a graceful design in coordinate importance sampling, the proposed ZO optimization method is efficient both in terms of complexity and function query cost.
arXiv Detail & Related papers (2020-12-21T17:29:58Z)
- Optimizing embedding-related quantum annealing parameters for reducing hardware bias [0.0]
We show that parameter optimization can be done for an entire class of problems, given each instance uses a previously chosen fixed embedding.
Specifically, in the training phase, we fix an embedding E of a complete graph onto the hardware of the annealer, and then run an optimization algorithm to tune a set of parameter values.
In the testing phase, we estimate how well the parameters computed during the training phase work on a random selection of other graphs from that class.
arXiv Detail & Related papers (2020-11-02T04:13:51Z)
- Adaptive pruning-based optimization of parameterized quantum circuits [62.997667081978825]
Variational hybrid quantum-classical algorithms are powerful tools to maximize the use of Noisy Intermediate-Scale Quantum (NISQ) devices.
We propose a strategy for the ansatze used in variational quantum algorithms, which we call "Parameter-Efficient Circuit Training" (PECT).
Instead of optimizing all of the ansatz parameters at once, PECT launches a sequence of variational algorithms.
arXiv Detail & Related papers (2020-10-01T18:14:11Z)
- Balancing Rates and Variance via Adaptive Batch-Size for Stochastic Optimization Problems [120.21685755278509]
In this work, we seek to balance the fact that an attenuating step-size is required for exact convergence against the fact that a constant step-size learns faster, albeit only up to an error.
Rather than fixing the minibatch and the step-size at the outset, we propose to allow these parameters to evolve adaptively.
arXiv Detail & Related papers (2020-07-02T16:02:02Z)
- Convergence of adaptive algorithms for weakly convex constrained optimization [59.36386973876765]
We prove the $\tilde{\mathcal{O}}(t^{-1/4})$ rate of convergence for the norm of the gradient of the Moreau envelope.
Our analysis works with mini-batch size of $1$, constant first and second order moment parameters, and possibly smooth optimization domains.
arXiv Detail & Related papers (2020-06-11T17:43:19Z)
- Accelerating Quantum Approximate Optimization Algorithm using Machine Learning [6.735657356113614]
We propose a machine learning based approach to accelerate quantum approximate optimization algorithm (QAOA) implementation.
QAOA is a quantum-classical hybrid algorithm proposed to demonstrate so-called quantum supremacy.
We show that the proposed approach can curtail the number of optimization iterations by up to 65.7%, based on an analysis performed with 264 flavors of graphs.
arXiv Detail & Related papers (2020-02-04T02:21:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.