LAWS: Look Around and Warm-Start Natural Gradient Descent for Quantum
Neural Networks
- URL: http://arxiv.org/abs/2205.02666v1
- Date: Thu, 5 May 2022 14:16:40 GMT
- Title: LAWS: Look Around and Warm-Start Natural Gradient Descent for Quantum
Neural Networks
- Authors: Zeyi Tao, Jindi Wu, Qi Xia, Qun Li
- Abstract summary: Variational quantum algorithms (VQAs) have recently received significant attention due to their promising performance on Noisy Intermediate-Scale Quantum (NISQ) computers.
VQAs run on parameterized quantum circuits (PQC) with randomly initialized parameters are characterized by barren plateaus (BP), where the gradient vanishes exponentially in the number of qubits.
In this paper, we first review quantum natural gradient (QNG), one of the most popular algorithms used in VQA, from the classical first-order optimization point of view.
Then, we propose a Look Around Warm-Start QNG (LAWS) algorithm to mitigate the BP issue.
- Score: 11.844238544360149
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Variational quantum algorithms (VQAs) have recently received significant
attention from the research community due to their promising performance in
Noisy Intermediate-Scale Quantum computers (NISQ). However, VQAs run on
parameterized quantum circuits (PQC) with randomly initialized parameters are
characterized by barren plateaus (BP) where the gradient vanishes exponentially
in the number of qubits. In this paper, we first review quantum natural
gradient (QNG), which is one of the most popular algorithms used in VQA, from
the classical first-order optimization point of view. Then, we propose a
Look Around Warm-Start QNG (LAWS) algorithm to mitigate the widespread BP
issue. LAWS is a combinatorial optimization strategy that takes advantage of
model parameter initialization and the fast convergence of QNG. LAWS
repeatedly reinitializes the parameter search space for the next parameter
update. The reinitialized search space is carefully chosen by sampling the
gradient close to the current optimum. Moreover, we present a unified
framework (WS-SGD) for integrating parameter initialization techniques into
the optimizer. We provide convergence proofs for the proposed framework for
both convex and non-convex objective functions based on the
Polyak-Łojasiewicz (PL) condition. Our experimental results show that the
proposed algorithm can mitigate BP and achieve better generalization in
quantum classification problems.
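To make the warm-start loop concrete, the following is a minimal NumPy sketch of the "look around" idea described above, assuming a toy differentiable loss in place of a real PQC and an illustrative candidate-selection rule; the paper's exact sampling scheme and QNG preconditioning are not reproduced here.
```python
import numpy as np

# Toy stand-in for a PQC loss and its gradient; the actual LAWS paper
# evaluates these on a parameterized quantum circuit.
def loss(theta):
    return float(np.sum(np.sin(theta) ** 2))

def grad(theta):
    return np.sin(2.0 * theta)

def laws_sketch(theta0, outer=20, inner=5, k=8, radius=0.1, lr=0.05, seed=0):
    """Warm-start loop: 'look around' the current optimum, restart from a
    promising sample, then take a few fast gradient steps (plain SGD here;
    QNG would precondition by the quantum Fisher information metric)."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    for _ in range(outer):
        # Reinitialize the search space by sampling near the current optimum.
        candidates = theta + radius * rng.standard_normal((k, theta.size))
        # Illustrative selection rule (an assumption): prefer the candidate
        # whose gradient is least vanishing, to step away from flat regions.
        theta = max(candidates, key=lambda c: float(np.linalg.norm(grad(c))))
        for _ in range(inner):  # fast local refinement from the warm start
            theta = theta - lr * grad(theta)
    return theta

theta_star = laws_sketch(np.full(4, 0.3))
print(loss(theta_star))  # close to 0 for this toy loss
```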
Related papers
- Application of Langevin Dynamics to Advance the Quantum Natural Gradient Optimization Algorithm [47.47843839099175]
A Quantum Natural Gradient (QNG) algorithm for optimization of variational quantum circuits has been proposed recently.
In this study, we employ the Langevin equation with a QNG force to demonstrate that its discrete-time solution gives a generalized form, which we call Momentum-QNG.
arXiv Detail & Related papers (2024-09-03T15:21:16Z)
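As a rough sketch of the Momentum-QNG update described above: a discrete-time Langevin step whose force is the natural gradient, i.e. the gradient preconditioned by the quantum Fisher metric. The regularization and hyperparameter names below are our assumptions, not the paper's.
```python
import numpy as np

def momentum_qng_step(theta, v, g, F, eta=0.05, beta=0.9, eps=1e-6):
    """One illustrative step: a momentum buffer driven by a QNG 'force',
    i.e. the gradient g preconditioned by the regularized metric F."""
    force = np.linalg.solve(F + eps * np.eye(F.shape[0]), g)
    v = beta * v - eta * force   # discretized Langevin/momentum update
    return theta + v, v
```
- Challenges of variational quantum optimization with measurement shot noise [0.0]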
We study the scaling of the quantum resources to reach a fixed success probability as the problem size increases.
Our results suggest that hybrid quantum-classical algorithms should possibly avoid a brute force classical outer loop.
arXiv Detail & Related papers (2023-07-31T18:01:15Z)
- Optimizing Variational Quantum Algorithms with qBang: Efficiently Interweaving Metric and Momentum to Navigate Flat Energy Landscapes [0.0]
Variational quantum algorithms (VQAs) represent a promising approach to utilizing current quantum computing infrastructures.
We propose the quantum Broyden adaptive natural gradient (qBang) approach, a novel optimizer that aims to distill the best aspects of existing approaches.
arXiv Detail & Related papers (2023-04-27T00:06:48Z)
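One way to read "Broyden adaptive natural gradient" is as a low-cost secant-style approximation of the metric combined with momentum; the sketch below is our interpretation under that assumption, not qBang's published algorithm.
```python
import numpy as np

def broyden_metric_update(B, dtheta, dg):
    """Rank-1 (good Broyden) update so the approximate metric B
    satisfies the secant condition B @ dtheta = dg."""
    denom = float(dtheta @ dtheta) + 1e-12
    return B + np.outer(dg - B @ dtheta, dtheta) / denom

def qbang_like_step(theta, v, g, B, eta=0.05, beta=0.9, eps=1e-6):
    # Precondition the gradient by the Broyden-approximated metric,
    # then apply a momentum update (illustrative hyperparameters).
    nat_grad = np.linalg.solve(B + eps * np.eye(B.shape[0]), g)
    v = beta * v - eta * nat_grad
    return theta + v, v
```
- Sample-Then-Optimize Batch Neural Thompson Sampling [50.800944138278474]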
We introduce two algorithms for black-box optimization based on the Thompson sampling (TS) policy.
To choose an input query, we only need to train an NN and then select the query that maximizes the trained NN's output.
Our algorithms sidestep the need to invert the large parameter matrix yet still preserve the validity of the TS policy.
arXiv Detail & Related papers (2022-10-13T09:01:58Z)
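The sample-then-optimize step above can be sketched with an off-the-shelf regressor standing in for the paper's NN; treating the network's random initialization as the Thompson sample and using a finite candidate pool are illustrative assumptions.
```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def sto_ts_query(X_obs, y_obs, candidates, seed):
    # Randomness enters only through the NN's random initialization, a
    # stand-in for drawing one Thompson sample; no parameter matrix is
    # ever inverted.
    nn = MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000,
                      random_state=seed)
    nn.fit(X_obs, y_obs)                       # train on past observations
    scores = nn.predict(candidates)
    return candidates[int(np.argmax(scores))]  # next input to evaluate
```
- Iterative-Free Quantum Approximate Optimization Algorithm Using Neural Networks [20.051757447006043]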
We propose a practical method that uses a simple, fully connected neural network to find better parameters tailored to a new given problem instance.
Our method is consistently the fastest to converge while also achieving the best final result.
arXiv Detail & Related papers (2022-08-21T14:05:11Z)
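The approach above amounts to supervised regression from problem-instance features to good QAOA angles, with no per-instance iteration at inference time; the feature map and training pairs below are hypothetical placeholders, not the paper's data.
```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# X_train: instance features; Y_train: known good (gamma, beta) angles.
X_train = np.random.rand(200, 10)     # hypothetical feature vectors
Y_train = np.random.rand(200, 2)      # hypothetical depth-1 QAOA angles

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000)
model.fit(X_train, Y_train)

x_new = np.random.rand(1, 10)         # features of a new instance
gamma_beta = model.predict(x_new)[0]  # angles used directly, no iteration
```
- Parameters Fixing Strategy for Quantum Approximate Optimization Algorithm [0.0]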
We propose a strategy that gives a high approximation ratio on average, even at large circuit depths, by initializing QAOA with the optimal parameters obtained from the previous depths.
We test our strategy on the Max-Cut problem for certain classes of graphs, such as 3-regular graphs and Erdős-Rényi graphs.
arXiv Detail & Related papers (2021-08-11T15:44:16Z)
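The depth-wise warm start described above can be sketched as follows; the toy objective and the zero-padding rule for the new layer's angles are assumptions made for illustration.
```python
import numpy as np
from scipy.optimize import minimize

def qaoa_energy(params, depth):
    # Placeholder for a depth-`depth` QAOA expectation value; the real
    # objective would evaluate a circuit on the Max-Cut Hamiltonian.
    gammas, betas = params[:depth], params[depth:]
    return float(np.sum(np.cos(gammas) ** 2 + np.sin(betas) ** 2))

params = np.array([0.1, 0.1])  # depth-1 initial guess (gamma, beta)
for depth in range(1, 6):
    res = minimize(qaoa_energy, params, args=(depth,), method="COBYLA")
    best = res.x
    # Parameter fixing: warm-start depth p+1 with the depth-p optimum,
    # appending zeros for the new layer (one plausible padding rule).
    params = np.concatenate([best[:depth], [0.0], best[depth:], [0.0]])
```
- FLIP: A flexible initializer for arbitrarily-sized parametrized quantum circuits [105.54048699217668]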
We propose a FLexible Initializer for arbitrarily-sized Parametrized quantum circuits.
FLIP can be applied to any family of PQCs, and instead of relying on a generic set of initial parameters, it is tailored to learn the structure of successful parameters.
We illustrate the advantage of using FLIP in three scenarios: a family of problems with proven barren plateaus, PQC training to solve max-cut problem instances, and PQC training for finding the ground state energies of 1D Fermi-Hubbard models.
arXiv Detail & Related papers (2021-03-15T17:38:33Z)
- Quantum annealing initialization of the quantum approximate optimization algorithm [0.0]
Quantum approximate optimization algorithm (QAOA) is a prospective near-term quantum algorithm.
The external parameter optimization required in QAOA could become a performance bottleneck.
In this work we visualize the optimization landscape of the QAOA applied to the MaxCut problem on random graphs.
arXiv Detail & Related papers (2021-01-14T17:45:13Z)
- Adaptive pruning-based optimization of parameterized quantum circuits [62.997667081978825]
Variational hybrid quantum-classical algorithms are powerful tools to maximize the use of Noisy Intermediate-Scale Quantum devices.
We propose a strategy for training such ansätze used in variational quantum algorithms, which we call "Parameter Efficient Circuit Training" (PECT).
Instead of optimizing all of the ansatz parameters at once, PECT launches a sequence of variational algorithms.
arXiv Detail & Related papers (2020-10-01T18:14:11Z)
- Convergence of adaptive algorithms for weakly convex constrained optimization [59.36386973876765]
We prove the $\tilde{\mathcal{O}}(t^{-1/4})$ rate of convergence for the norm of the gradient of the Moreau envelope.
Our analysis works with mini-batch size of $1$, constant first and second order moment parameters, and possibly smooth optimization domains.
arXiv Detail & Related papers (2020-06-11T17:43:19Z)
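For context (our addition), the Moreau envelope appearing in this rate is the standard smoothing $f_{\lambda}(x) = \min_{y}\bigl\{ f(y) + \tfrac{1}{2\lambda}\lVert x-y\rVert^{2} \bigr\}$, so the stated result bounds $\lVert \nabla f_{\lambda}(x_t) \rVert$ by $\tilde{\mathcal{O}}(t^{-1/4})$.
- Proximal Gradient Algorithm with Momentum and Flexible Parameter Restart for Nonconvex Optimization [73.38702974136102]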
Various types of parameter restart schemes have been proposed for accelerated algorithms to facilitate their practical convergence.
In this paper, we propose an algorithm for solving nonsmooth problems.
arXiv Detail & Related papers (2020-02-26T16:06:27Z)
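As a generic illustration of a function-value restart for an accelerated method (our example; not necessarily the restart criterion this paper proposes):
```python
import numpy as np

def heavy_ball_with_restart(grad, theta, f, eta=0.01, beta=0.9, steps=1000):
    """Heavy-ball momentum with a simple function-value restart: whenever
    the objective increases, reset the momentum buffer to zero."""
    v = np.zeros_like(theta)
    f_prev = f(theta)
    for _ in range(steps):
        v = beta * v - eta * grad(theta)
        theta = theta + v
        f_curr = f(theta)
        if f_curr > f_prev:      # restart criterion (one common choice)
            v = np.zeros_like(theta)
        f_prev = f_curr
    return theta
```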