Quantum Natural Gradient with Efficient Backtracking Line Search
- URL: http://arxiv.org/abs/2211.00615v1
- Date: Tue, 1 Nov 2022 17:29:32 GMT
- Title: Quantum Natural Gradient with Efficient Backtracking Line Search
- Authors: Touheed Anwar Atif, Uchenna Chukwu, Jesse Berwald and Raouf Dridi
- Abstract summary: We present an adaptive implementation of QNGD based on Armijo's rule, which is an efficient backtracking line search.
Our results are yet another confirmation of the importance of differential geometry in variational quantum computations.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We consider the Quantum Natural Gradient Descent (QNGD) scheme which was
recently proposed to train variational quantum algorithms. QNGD is Steepest
Gradient Descent (SGD) operating on the complex projective space equipped with
the Fubini-Study metric. Here we present an adaptive implementation of QNGD
based on Armijo's rule, an efficient backtracking line search with proven
convergence. The proposed algorithm is tested using noisy
simulators on three different models with various initializations. Our results
show that Adaptive QNGD dynamically adapts the step size and consistently
outperforms the original QNGD, which requires knowledge of optimal step size to
perform competitively. In addition, we show that the additional complexity
involved in performing the line search in Adaptive QNGD is minimal, ensuring
that the gains provided by the proposed adaptive strategy dominate any increase
in complexity. Furthermore, our benchmarking demonstrates that a simple SGD
algorithm (implemented in the Euclidean space) equipped with the adaptive
scheme above can yield performance similar to the QNGD scheme with optimal
step size.
Our results are yet another confirmation of the importance of differential
geometry in variational quantum computations. As a matter of fact, we expect
advanced mathematics to play a prominent role in the NISQ era, guiding the
design of faster and more efficient algorithms.
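For concreteness, below is a minimal sketch of the adaptive scheme the abstract describes, using a classical stand-in for the circuit cost, its gradient, and the Fubini-Study metric tensor; the constants alpha0, c, and rho are conventional Armijo choices, not values taken from the paper. At each step the natural gradient direction d = g(theta)^{-1} grad f(theta) is computed, and the step size alpha is shrunk (alpha <- rho * alpha) until the Armijo sufficient-decrease condition f(theta - alpha d) <= f(theta) - c * alpha * <grad f(theta), d> holds.

```python
# Hedged sketch: adaptive natural gradient descent with Armijo backtracking.
# The cost, gradient, and metric below are classical stand-ins; in the paper
# these would be a variational-circuit energy, its parameter-shift gradient,
# and the Fubini-Study metric tensor estimated on a quantum device.
import numpy as np

def cost(theta):
    # Toy non-convex objective standing in for a VQE-style energy landscape.
    return np.sum(np.sin(theta) ** 2) + 0.1 * np.sum(theta ** 2)

def gradient(theta):
    # Analytic gradient of the toy cost (parameter-shift rule in practice).
    return np.sin(2 * theta) + 0.2 * theta

def metric(theta):
    # Stand-in for the Fubini-Study metric tensor g(theta); a positive
    # definite diagonal matrix so the natural gradient is well defined.
    return np.eye(theta.size) + 0.1 * np.diag(np.cos(theta) ** 2)

def adaptive_qngd(theta, alpha0=1.0, c=1e-4, rho=0.5, eps=1e-8, steps=100):
    """Natural gradient descent with Armijo backtracking line search."""
    for _ in range(steps):
        f0 = cost(theta)
        grad = gradient(theta)
        g = metric(theta) + eps * np.eye(theta.size)  # regularized metric
        direction = np.linalg.solve(g, grad)          # natural gradient step
        decrease = grad @ direction                   # predicted decrease rate
        alpha = alpha0
        # Armijo rule: shrink alpha until sufficient decrease is achieved.
        while cost(theta - alpha * direction) > f0 - c * alpha * decrease:
            alpha *= rho
            if alpha < 1e-12:  # safeguard against a stalled line search
                break
        theta = theta - alpha * direction
    return theta

rng = np.random.default_rng(0)
theta = rng.uniform(-np.pi, np.pi, size=4)
print("final cost:", cost(adaptive_qngd(theta)))
```

On actual hardware, cost would be an expectation value estimated from measurement shots, gradient would come from the parameter-shift rule, and metric from an estimate of the Fubini-Study tensor; the backtracking loop itself only adds extra cost evaluations per step, consistent with the minimal additional complexity reported in the abstract.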
Related papers
- Exact Gauss-Newton Optimization for Training Deep Neural Networks [0.0]
We present EGN, a second-order optimization algorithm that combines the generalized Gauss-Newton (GN) Hessian approximation with low-rank linear algebra to compute the descent direction.
We show how improvements such as line search, adaptive regularization, and momentum can be seamlessly added to EGN to further accelerate the algorithm (a generic Gauss-Newton step is sketched after this list).
arXiv Detail & Related papers (2024-05-23T10:21:05Z) - Federated Conditional Stochastic Optimization [110.513884892319]
Conditional stochastic optimization has found applications in a wide range of machine learning tasks, such as invariant learning, AUPRC maximization, and MAML.
This paper proposes conditional stochastic optimization algorithms for the federated learning setting.
arXiv Detail & Related papers (2023-10-04T01:47:37Z) - Adaptive projected variational quantum dynamics [0.0]
We propose an adaptive quantum algorithm to prepare accurate variational time evolved wave functions.
The method is based on the projected Variational Quantum Dynamics (pVQD) algorithm.
We apply the new algorithm to the simulation of driven spin models and fermionic systems.
arXiv Detail & Related papers (2023-07-06T18:00:04Z) - Optimization strategies in WAHTOR algorithm for quantum computing
empirical ansatz: a comparative study [0.0]
This work introduces a non-adiabatic version of the WAHTOR algorithm and compares its efficiency with three implementations.
Calculating first and second-order derivatives of the Hamiltonian at fixed VQE parameters does not introduce a prototypical QPU overload.
We find that in the case of Hubbard model systems the trust-region non-adiabatic optimization is more efficient.
arXiv Detail & Related papers (2023-06-19T15:07:55Z) - Stochastic Ratios Tracking Algorithm for Large Scale Machine Learning
Problems [0.7614628596146599]
We propose a novel algorithm for adaptive step length selection in the classical SGD framework.
Under reasonable conditions, the algorithm produces step lengths in line with well-established theoretical requirements.
We show that the algorithm can generate step lengths comparable to the best step length obtained from manual tuning.
arXiv Detail & Related papers (2023-05-17T06:22:11Z) - Faster Adaptive Federated Learning [84.38913517122619]
Federated learning has attracted increasing attention with the emergence of distributed data.
In this paper, we propose an efficient adaptive algorithm (i.e., FAFED) based on a momentum-based variance-reduction technique in the cross-silo FL setting.
arXiv Detail & Related papers (2022-12-02T05:07:50Z) - Adaptive shot allocation for fast convergence in variational quantum
algorithms [0.0]
We present a new gradient descent method using an adaptive number of shots at each step, called the global Coupled Adaptive Number of Shots (gCANS) method.
These improvements reduce both the time and money required to run VQAs on current cloud platforms.
arXiv Detail & Related papers (2021-08-23T22:29:44Z) - Sinkhorn Natural Gradient for Generative Models [125.89871274202439]
We propose a novel Sinkhorn Natural Gradient (SiNG) algorithm which acts as a steepest descent method on the probability space endowed with the Sinkhorn divergence.
We show that the Sinkhorn information matrix (SIM), a key component of SiNG, has an explicit expression and can be evaluated accurately with complexity that scales logarithmically with the desired accuracy.
In our experiments, we quantitatively compare SiNG with state-of-the-art SGD-type solvers on generative tasks to demonstrate the efficiency and efficacy of the method.
arXiv Detail & Related papers (2020-11-09T02:51:17Z) - Adaptive pruning-based optimization of parameterized quantum circuits [62.997667081978825]
Variational hybrid quantum-classical algorithms are powerful tools for maximizing the use of Noisy Intermediate-Scale Quantum devices.
We propose a strategy for such ansatze used in variational quantum algorithms, which we call "Parameter-Efficient Circuit Training" (PECT).
Instead of optimizing all of the ansatz parameters at once, PECT launches a sequence of variational algorithms.
arXiv Detail & Related papers (2020-10-01T18:14:11Z) - Iterative Algorithm Induced Deep-Unfolding Neural Networks: Precoding
Design for Multiuser MIMO Systems [59.804810122136345]
We propose a framework for deep-unfolding, where a general form of iterative algorithm induced deep-unfolding neural network (IAIDNN) is developed.
An efficient IAIDNN based on the structure of the classic weighted minimum mean-square error (WMMSE) iterative algorithm is developed.
We show that the proposed IAIDNN efficiently achieves the performance of the iterative WMMSE algorithm with reduced computational complexity.
arXiv Detail & Related papers (2020-06-15T02:57:57Z) - Adaptivity of Stochastic Gradient Methods for Nonconvex Optimization [71.03797261151605]
Adaptivity is an important yet under-studied property in modern optimization theory.
Our algorithm is proved to achieve the best-available convergence for non-PL objectives while simultaneously outperforming existing algorithms for PL objectives.
arXiv Detail & Related papers (2020-02-13T05:42:27Z)
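As referenced in the EGN entry above, the following is a hedged sketch of a generic damped generalized Gauss-Newton descent direction for a least-squares objective 0.5*||r(x)||^2. It is a textbook construction under an assumed damping parameter lam, not the paper's actual low-rank implementation.

```python
# Hedged sketch of a generalized Gauss-Newton (GN) descent direction.
# Generic construction for illustration only; lam is an assumed damping value.
import numpy as np

def gauss_newton_direction(J, r, lam=1e-3):
    """Solve (J^T J + lam*I) d = -J^T r for the descent direction d."""
    n = J.shape[1]
    H = J.T @ J + lam * np.eye(n)  # damped GN Hessian approximation
    g = J.T @ r                    # gradient of 0.5 * ||r||^2
    return np.linalg.solve(H, -g)

# Toy usage: one GN step on a linear least-squares residual r = J x - y.
rng = np.random.default_rng(1)
J = rng.normal(size=(20, 5))
y = rng.normal(size=20)
x = np.zeros(5)
x = x + gauss_newton_direction(J, J @ x - y)
print("residual norm:", np.linalg.norm(J @ x - y))
```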
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.