Random Natural Gradient
- URL: http://arxiv.org/abs/2311.04135v3
- Date: Thu, 10 Oct 2024 17:56:45 GMT
- Title: Random Natural Gradient
- Authors: Ioannis Kolotouros, Petros Wallden
- Abstract summary: Quantum Natural Gradient (QNG) is a method that uses information about the local geometry of the quantum state-space.
We propose two methods that reduce the resources/state preparations required for QNG, while keeping the advantages and performance of the QNG-based optimization.
- Abstract: Hybrid quantum-classical algorithms appear to be the most promising approach for near-term quantum applications. An important bottleneck is the classical optimization loop, where multiple local minima and the emergence of barren plateaus make these approaches less appealing. To improve the optimization, the Quantum Natural Gradient (QNG) method [Quantum 4, 269 (2020)] was introduced: a method that uses information about the local geometry of the quantum state-space. While QNG-based optimization is promising, each step requires more quantum resources, since computing the QNG requires $O(m^2)$ quantum state preparations, where $m$ is the number of parameters in the parameterized circuit. In this work we propose two methods that reduce the resources/state preparations required for QNG while keeping the advantages and performance of QNG-based optimization. Specifically, we first introduce the Random Natural Gradient (RNG), which uses random measurements and the classical Fisher information matrix (as opposed to the quantum Fisher information used in QNG). The essential quantum resources reduce to linear $O(m)$, a quadratic "speed-up", and in our numerical simulations RNG matches QNG in accuracy. We give theoretical arguments for RNG and then benchmark the method against QNG on both classical and quantum problems. Second, inspired by stochastic-coordinate methods, we propose a novel approximation to the QNG, which we call Stochastic-Coordinate Quantum Natural Gradient, that optimizes only a small (randomly sampled) fraction of the total parameters at each iteration. This method also performs equally well in our benchmarks while using fewer resources than QNG.
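To make the two proposals concrete, the sketch below mimics one optimization step of each on a classical toy model. It is a minimal NumPy illustration, not the authors' code: the softmax stand-in for measurement-outcome probabilities, the finite-difference derivatives, and all hyperparameters (lr, reg, k) are assumptions; a real implementation would estimate the outcome distribution by sampling a parameterized circuit in randomly chosen measurement bases.

```python
# Minimal NumPy sketch of the two proposed optimizers, on a classical toy
# model. Everything here (the softmax stand-in for measurement-outcome
# probabilities, finite-difference derivatives, hyperparameters) is an
# illustrative assumption, not the authors' implementation.
import numpy as np

gen = np.random.default_rng(0)

def outcome_probs(theta, W):
    """Toy stand-in for the outcome distribution p_k(theta). A real RNG
    implementation would sample a parameterized quantum circuit in a
    randomly chosen measurement basis instead."""
    logits = W @ theta
    e = np.exp(logits - logits.max())
    return e / e.sum()

def classical_fim(theta, W, eps=1e-6):
    """Classical Fisher information F_ij = sum_k p_k d_i(log p_k) d_j(log p_k),
    via finite differences: one extra distribution evaluation per parameter,
    i.e. O(m) -- versus the O(m^2) state preparations of the quantum FIM."""
    m = theta.size
    logp = np.log(outcome_probs(theta, W))
    dlogp = np.empty((logp.size, m))
    for i in range(m):
        shift = np.zeros(m)
        shift[i] = eps
        dlogp[:, i] = (np.log(outcome_probs(theta + shift, W)) - logp) / eps
    p = np.exp(logp)
    return (dlogp * p[:, None]).T @ dlogp

def grad_loss(theta, W, target, eps=1e-6):
    """Finite-difference gradient of a toy loss: squared distance between
    the outcome distribution and a target distribution."""
    def loss(t):
        return np.sum((outcome_probs(t, W) - target) ** 2)
    g = np.zeros_like(theta)
    for i in range(theta.size):
        shift = np.zeros_like(theta)
        shift[i] = eps
        g[i] = (loss(theta + shift) - loss(theta)) / eps
    return g

def rng_step(theta, W, target, lr=0.2, reg=1e-3):
    """One Random-Natural-Gradient-style step: precondition the gradient
    with the regularized *classical* FIM instead of the quantum FIM."""
    F = classical_fim(theta, W) + reg * np.eye(theta.size)
    return theta - lr * np.linalg.solve(F, grad_loss(theta, W, target))

def scqng_step(theta, W, target, k=2, lr=0.2, reg=1e-3):
    """One Stochastic-Coordinate-QNG-style step: update only a random
    subset S of k parameters, using the k-by-k block of the metric."""
    S = gen.choice(theta.size, size=k, replace=False)
    F = classical_fim(theta, W)[np.ix_(S, S)] + reg * np.eye(k)
    g = grad_loss(theta, W, target)
    new = theta.copy()
    new[S] -= lr * np.linalg.solve(F, g[S])
    return new

# Tiny demo: steer a 4-parameter model toward a uniform target distribution.
m, d = 4, 8
W = gen.normal(size=(d, m))
target = np.full(d, 1.0 / d)
theta = gen.normal(size=m)
for _ in range(100):
    theta = rng_step(theta, W, target)
print("final loss:", np.sum((outcome_probs(theta, W) - target) ** 2))
```

With k=2, scqng_step also loosely echoes the pairwise coordinate-descent idea of the 2QNSCD paper listed below. The key accounting point survives the toy setting: building the classical FIM needs only one extra distribution evaluation per parameter, i.e. O(m), whereas the quantum FIM generally needs O(m^2) state preparations.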
Related papers
- Optimizing random local Hamiltonians by dissipation [44.99833362998488]
We prove that a simplified quantum Gibbs sampling algorithm achieves an $\Omega(\frac{1}{k})$-fraction approximation of the optimum.
Our results suggest that finding low-energy states for sparsified (quasi)local spin and fermionic models is quantumly easy but classically nontrivial.
arXiv Detail & Related papers (2024-11-04T20:21:16Z)
- Application of Langevin Dynamics to Advance the Quantum Natural Gradient Optimization Algorithm [47.47843839099175]
A Quantum Natural Gradient (QNG) algorithm for optimization of variational quantum circuits has been proposed recently.
In this study, we employ the Langevin equation with a QNG force to demonstrate that its discrete-time solution gives a generalized form, which we call Momentum-QNG (a hedged sketch of such an update rule appears after this list).
arXiv Detail & Related papers (2024-09-03T15:21:16Z)
- Quantum Natural Stochastic Pairwise Coordinate Descent [6.187270874122921]
Quantum machine learning through variational quantum algorithms (VQAs) has gained substantial attention in recent years.
This paper introduces the quantum natural pairwise coordinate descent (2QNSCD) optimization method.
We develop a highly sparse unbiased estimator of the novel metric tensor using a quantum circuit with gate complexity $\Theta(1)$ times that of the parameterized quantum circuit and single-shot quantum measurements.
arXiv Detail & Related papers (2024-07-18T18:57:29Z)
- Advantages of multistage quantum walks over QAOA [0.7852714805965528]
We compare the quantum approximate optimization algorithm (QAOA) with multi-stage quantum walks (MSQW).
We obtain evidence that MSQW outperforms QAOA, using equivalent resources.
arXiv Detail & Related papers (2024-07-09T08:39:32Z)
- Hybrid Quantum-Classical Scheduling for Accelerating Neural Network Training with Newton's Gradient Descent [37.59299233291882]
We propose Q-Newton, a hybrid quantum-classical scheduler for accelerating neural network training with Newton's GD.
Q-Newton utilizes a streamlined scheduling module that coordinates between quantum and classical linear solvers.
Our evaluation showcases the potential for Q-Newton to significantly reduce the total training time compared to commonly used quantum machines.
arXiv Detail & Related papers (2024-04-30T23:55:03Z)
- Pre-optimizing variational quantum eigensolvers with tensor networks [1.4512477254432858]
We present and benchmark an approach that finds good starting parameters for parameterized quantum circuits by simulating VQE with tensor networks (VTNE).
We apply this approach to the 1D and 2D Fermi-Hubbard model with system sizes that use up to 32 qubits.
In 2D, the parameters found by VTNE yield significantly lower energies than the starting configurations, and we show that starting VQE from these parameters requires non-trivially fewer operations to reach a given energy.
arXiv Detail & Related papers (2023-10-19T17:57:58Z)
- Efficient DCQO Algorithm within the Impulse Regime for Portfolio Optimization [41.94295877935867]
We propose a faster digital quantum algorithm for portfolio optimization using the digitized-counterdiabatic quantum optimization (DCQO) paradigm.
Our approach notably reduces the circuit depth requirement of the algorithm and enhances the solution accuracy, making it suitable for current quantum processors.
We experimentally demonstrate the advantages of our protocol using up to 20 qubits on an IonQ trapped-ion quantum computer.
arXiv Detail & Related papers (2023-08-29T17:53:08Z)
- End-to-end resource analysis for quantum interior point methods and portfolio optimization [63.4863637315163]
We provide a complete quantum circuit-level description of the algorithm from problem input to problem output.
We report the number of logical qubits and the quantity/depth of non-Clifford T-gates needed to run the algorithm.
arXiv Detail & Related papers (2022-11-22T18:54:48Z)
- FLIP: A flexible initializer for arbitrarily-sized parametrized quantum circuits [105.54048699217668]
We propose FLIP, a FLexible Initializer for arbitrarily-sized Parametrized quantum circuits.
FLIP can be applied to any family of PQCs, and instead of relying on a generic set of initial parameters, it is tailored to learn the structure of successful parameters.
We illustrate the advantage of using FLIP in three scenarios: a family of problems with proven barren plateaus, PQC training to solve max-cut problem instances, and PQC training for finding the ground state energies of 1D Fermi-Hubbard models.
arXiv Detail & Related papers (2021-03-15T17:38:33Z)
- Quantum annealing initialization of the quantum approximate optimization algorithm [0.0]
The quantum approximate optimization algorithm (QAOA) is a prospective near-term quantum algorithm.
The external parameter optimization required in QAOA could become a performance bottleneck.
In this work we visualize the optimization landscape of the QAOA applied to the MaxCut problem on random graphs.
arXiv Detail & Related papers (2021-01-14T17:45:13Z)
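As promised for the Momentum-QNG entry above: a hedged sketch of what a discrete-time solution of a Langevin equation with a natural-gradient force can look like. The damping coefficient $\mu$, learning rate $\eta$, and the exact discretization below are illustrative assumptions, not taken from that paper.

```latex
% Hedged sketch: momentum (discretized Langevin friction) on top of a
% metric-preconditioned, i.e. natural-gradient, force term.
\theta_{t+1} = \theta_t + \mu \left( \theta_t - \theta_{t-1} \right)
             - \eta \, F^{-1}(\theta_t) \, \nabla \mathcal{L}(\theta_t)
```

Here $F(\theta)$ is the Fisher-information metric (quantum in QNG, classical in RNG) and $\mathcal{L}$ is the loss; setting $\mu = 0$ recovers the plain natural-gradient step.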
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.