Quantum Natural Gradient with Efficient Backtracking Line Search
- URL: http://arxiv.org/abs/2211.00615v1
- Date: Tue, 1 Nov 2022 17:29:32 GMT
- Title: Quantum Natural Gradient with Efficient Backtracking Line Search
- Authors: Touheed Anwar Atif, Uchenna Chukwu, Jesse Berwald and Raouf Dridi
- Abstract summary: We present an adaptive implementation of QNGD based on Armijo's rule, which is an efficient backtracking line search.
Our results are yet another confirmation of the importance of differential geometry in variational quantum computations.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We consider the Quantum Natural Gradient Descent (QNGD) scheme which was
recently proposed to train variational quantum algorithms. QNGD is Steepest
Gradient Descent (SGD) operating on the complex projective space equipped with
the Fubini-Study metric. Here we present an adaptive implementation of QNGD
based on Armijo's rule, which is an efficient backtracking line search that
enjoys a proven convergence. The proposed algorithm is tested using noisy
simulators on three different models with various initializations. Our results
show that Adaptive QNGD dynamically adapts the step size and consistently
outperforms the original QNGD, which requires knowledge of the optimal step
size to perform competitively. In addition, we show that the additional
complexity involved in performing the line search in Adaptive QNGD is minimal,
ensuring that the gains provided by the proposed adaptive strategy dominate any
increase in complexity. Additionally, our benchmarking demonstrates that a simple SGD
algorithm (implemented in the Euclidean space) equipped with the adaptive
scheme above, can yield performances similar to the QNGD scheme with optimal
step size.
Our results are yet another confirmation of the importance of differential
geometry in variational quantum computations. As a matter of fact, we foresee
advanced mathematics to play a prominent role in the NISQ era in guiding the
design of faster and more efficient algorithms.
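The adaptive scheme described above can be sketched in code. The following is a minimal illustration, not the paper's implementation: a fixed symmetric positive-definite matrix stands in for the Fubini-Study metric tensor, a toy quadratic stands in for the variational cost, and the function name and default constants (`eta0`, `beta`, `c`) are our own choices.

```python
import numpy as np

def armijo_qngd_step(theta, cost, grad, metric,
                     eta0=1.0, beta=0.5, c=1e-4, max_halvings=30):
    """One adaptive QNGD step: natural-gradient direction plus Armijo backtracking.

    theta  : current parameters
    cost   : callable returning the (estimated) cost at given parameters
    grad   : gradient of the cost at theta
    metric : metric tensor (SPD matrix) at theta, e.g. the Fubini-Study metric
    """
    # Natural-gradient direction: solve G d = grad rather than inverting G.
    direction = np.linalg.solve(metric, grad)
    f0 = cost(theta)
    # Armijo sufficient-decrease threshold per unit step length.
    slope = c * (grad @ direction)
    eta = eta0
    for _ in range(max_halvings):
        # Accept the step once the cost drops by at least eta * slope.
        if cost(theta - eta * direction) <= f0 - eta * slope:
            break
        eta *= beta  # backtrack: shrink the step and try again
    return theta - eta * direction, eta

# Toy example: anisotropic quadratic cost, with the Hessian playing
# the role of the metric (exact for a quadratic).
A = np.diag([10.0, 1.0])
cost = lambda th: 0.5 * th @ A @ th
grad_fn = lambda th: A @ th
metric = A

theta = np.array([1.0, 1.0])
for _ in range(5):
    theta, eta = armijo_qngd_step(theta, cost, grad_fn(theta), metric)
```

Because the backtracking loop only ever shrinks the trial step until the sufficient-decrease condition holds, no manually tuned learning rate is needed; each accepted step is guaranteed not to overshoot, which is the convergence property the abstract refers to.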
Related papers
- Quantum evolutionary algorithm for TSP combinatorial optimisation problem [0.0]
This paper presents a quantum genetic algorithm (QGA) for solving the traveling salesman problem (TSP).
We compare how well this new approach works against the traditional method, a classical genetic algorithm (CGA).
arXiv Detail & Related papers (2024-09-20T08:27:42Z) - Application of Langevin Dynamics to Advance the Quantum Natural Gradient Optimization Algorithm [47.47843839099175]
A Quantum Natural Gradient (QNG) algorithm for optimization of variational quantum circuits has been proposed recently.
In this study, we employ the Langevin equation with a QNG force to demonstrate that its discrete-time solution gives a generalized form, which we call Momentum-QNG.
arXiv Detail & Related papers (2024-09-03T15:21:16Z) - Compact Multi-Threshold Quantum Information Driven Ansatz For Strongly Interactive Lattice Spin Models [0.0]
We introduce a systematic procedure for ansatz building based on approximate Quantum Mutual Information (QMI)
Our approach generates a layered-structured ansatz, where each layer's qubit pairs are selected based on their QMI values, resulting in more efficient state preparation and optimization routines.
Our results show that the Multi-QIDA method reduces the computational complexity while maintaining high precision, making it a promising tool for quantum simulations in lattice spin models.
arXiv Detail & Related papers (2024-08-05T17:07:08Z) - Federated Conditional Stochastic Optimization [110.513884892319]
Conditional stochastic optimization has found applications in a wide range of machine learning tasks, such as invariant learning, AUPRC maximization, and MAML.
This paper proposes algorithms for federated conditional stochastic optimization in the distributed learning setting.
arXiv Detail & Related papers (2023-10-04T01:47:37Z) - Adaptive projected variational quantum dynamics [0.0]
We propose an adaptive quantum algorithm to prepare accurate variational time evolved wave functions.
The method is based on the projected Variational Quantum Dynamics (pVQD) algorithm.
We apply the new algorithm to the simulation of driven spin models and fermionic systems.
arXiv Detail & Related papers (2023-07-06T18:00:04Z) - Stochastic Ratios Tracking Algorithm for Large Scale Machine Learning Problems [0.7614628596146599]
We propose a novel algorithm for adaptive step length selection in the classical SGD framework.
Under reasonable conditions, the algorithm produces step lengths in line with well-established theoretical requirements.
We show that the algorithm can generate step lengths comparable to the best step length obtained from manual tuning.
arXiv Detail & Related papers (2023-05-17T06:22:11Z) - Faster Adaptive Federated Learning [84.38913517122619]
Federated learning has attracted increasing attention with the emergence of distributed data.
In this paper, we propose an efficient adaptive algorithm (i.e., FAFED) based on momentum-based variance reduced technique in cross-silo FL.
arXiv Detail & Related papers (2022-12-02T05:07:50Z) - Sinkhorn Natural Gradient for Generative Models [125.89871274202439]
We propose a novel Sinkhorn Natural Gradient (SiNG) algorithm which acts as a steepest descent method on the probability space endowed with the Sinkhorn divergence.
We show that the Sinkhorn information matrix (SIM), a key component of SiNG, has an explicit expression and can be evaluated accurately with complexity that scales logarithmically with respect to the desired accuracy.
In our experiments, we quantitatively compare SiNG with state-of-the-art SGD-type solvers on generative tasks to demonstrate the efficiency and efficacy of our method.
arXiv Detail & Related papers (2020-11-09T02:51:17Z) - Adaptive pruning-based optimization of parameterized quantum circuits [62.997667081978825]
Variational hybrid quantum-classical algorithms are powerful tools to maximize the use of Noisy Intermediate-Scale Quantum devices.
We propose a strategy for such ansatze used in variational quantum algorithms, which we call "Parameter-Efficient Circuit Training" (PECT).
Instead of optimizing all of the ansatz parameters at once, PECT launches a sequence of variational algorithms.
arXiv Detail & Related papers (2020-10-01T18:14:11Z) - Adaptivity of Stochastic Gradient Methods for Nonconvex Optimization [71.03797261151605]
Adaptivity is an important yet under-studied property in modern optimization theory.
Our algorithm is proved to achieve the best-available convergence for non-PL objectives while simultaneously outperforming existing algorithms for PL objectives.
arXiv Detail & Related papers (2020-02-13T05:42:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.