On the dynamical Lie algebras of quantum approximate optimization algorithms
- URL: http://arxiv.org/abs/2407.12587v1
- Date: Wed, 17 Jul 2024 14:12:30 GMT
- Title: On the dynamical Lie algebras of quantum approximate optimization algorithms
- Authors: Jonathan Allcock, Miklos Santha, Pei Yuan, Shengyu Zhang
- Abstract summary: Dynamical Lie algebras (DLAs) have emerged as a valuable tool in the study of parameterized quantum circuits.
In this work, we investigate DLAs for the quantum approximate optimization algorithm (QAOA)
We show that the dimension of the DLA is $O(n^3)$ and give an explicit basis for the DLA.
- Score: 4.987686869768721
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Dynamical Lie algebras (DLAs) have emerged as a valuable tool in the study of parameterized quantum circuits, helping to characterize both their expressiveness and trainability. In particular, the absence or presence of barren plateaus (BPs) -- flat regions in parameter space that prevent the efficient training of variational quantum algorithms -- has recently been shown to be intimately related to quantities derived from the associated DLA. In this work, we investigate DLAs for the quantum approximate optimization algorithm (QAOA), one of the most studied variational quantum algorithms for solving graph MaxCut and other combinatorial optimization problems. While DLAs for QAOA circuits have been studied before, existing results have either been based on numerical evidence, or else correspond to DLA generators specifically chosen to be universal for quantum computation on a subspace of states. We initiate an analytical study of barren plateaus and other statistics of QAOA algorithms, and give bounds on the dimensions of the corresponding DLAs and their centers for general graphs. We then focus on the $n$-vertex cycle and complete graphs. For the cycle graph, we give an explicit basis and identify its decomposition into the direct sum of a $2$-dimensional center and a semisimple component isomorphic to $n-1$ copies of $su(2)$. We give an explicit basis for this isomorphism, and a closed-form expression for the variance of the cost function, proving the absence of BPs. For the complete graph, we prove that the dimension of the DLA is $O(n^3)$ and give an explicit basis for the DLA.
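The DLA discussed in the abstract is the Lie closure of the QAOA generators $\{iH_C, iH_B\}$ under commutators. The following is a minimal numerical sketch of that definition, not the paper's analytical construction: it closes the MaxCut cost and mixer generators for a small cycle graph under commutators and reports the resulting dimension. All function names here are illustrative.

```python
# Minimal sketch (not from the paper): compute the dimension of the dynamical
# Lie algebra generated by {iH_C, iH_B} for MaxCut QAOA on a small cycle
# graph, by repeatedly closing the generator set under commutators.
import itertools
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def op_on_site(op, site, n):
    """Tensor `op` on qubit `site` with identities elsewhere."""
    out = np.array([[1.0 + 0j]])
    for k in range(n):
        out = np.kron(out, op if k == site else I2)
    return out

def qaoa_generators(n):
    """iH_C (cycle-graph MaxCut cost) and iH_B (transverse-field mixer)."""
    H_C = sum(op_on_site(Z, j, n) @ op_on_site(Z, (j + 1) % n, n)
              for j in range(n))
    H_B = sum(op_on_site(X, j, n) for j in range(n))
    return [1j * H_C, 1j * H_B]

def lie_closure_dim(gens, max_rounds=20):
    """Dimension of the real Lie closure, via ranks of vectorized matrices.

    For anti-Hermitian matrices, complex linear independence of the
    vectorized elements coincides with real independence, so the rank
    equals the real dimension of the algebra.
    """
    basis = []

    def try_add(M):
        norm = np.linalg.norm(M)
        if norm < 1e-10:          # commutator vanished: nothing new
            return False
        stacked = np.array([B.ravel() for B in basis + [M / norm]])
        if np.linalg.matrix_rank(stacked) > len(basis):
            basis.append(M / norm)
            return True
        return False

    for G in gens:
        try_add(G)
    for _ in range(max_rounds):
        # Try all commutators of the current basis; repeat until stable.
        grew = any([try_add(A @ B - B @ A)
                    for A, B in itertools.combinations(list(basis), 2)])
        if not grew:
            break
    return len(basis)

if __name__ == "__main__":
    n = 3  # keep small: matrices are 2^n x 2^n
    print(f"DLA dimension for the {n}-cycle:",
          lie_closure_dim(qaoa_generators(n)))
```

For small $n$ this brute-force closure can be checked against the paper's analytical bases; it becomes infeasible quickly, since the matrices have size $2^n \times 2^n$.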
Related papers
- Computational-Statistical Gaps in Gaussian Single-Index Models [77.1473134227844]
Single-Index Models are high-dimensional regression problems with planted structure.
We show that computationally efficient algorithms, both within the Statistical Query (SQ) and the Low-Degree Polynomial (LDP) frameworks, necessarily require $\Omega(d^{k^\star/2})$ samples.
arXiv Detail & Related papers (2024-03-08T18:50:19Z) - The Adjoint Is All You Need: Characterizing Barren Plateaus in Quantum
Ansätze [3.2773906224402802]
We formulate a theory of Barren Plateaus for parameterized quantum circuits whose observables lie in their dynamical Lie algebra (DLA).
Our theory provides, for the first time, the ability to compute the variance of the gradient of the cost function of the quantum compound ansatz.
arXiv Detail & Related papers (2023-09-14T17:50:04Z) - GRAPE optimization for open quantum systems with time-dependent
decoherence rates driven by coherent and incoherent controls [77.34726150561087]
The GRadient Ascent Pulse Engineering (GRAPE) method is widely used for optimization in quantum control.
We adapt the GRAPE method to optimize objective functionals for open quantum systems driven by both coherent and incoherent controls.
The efficiency of the algorithm is demonstrated through numerical simulations for the state-to-state transition problem.
arXiv Detail & Related papers (2023-07-17T13:37:18Z) - Fast quantum algorithm for differential equations [0.5895819801677125]
We present a quantum algorithm with numerical complexity that is polylogarithmic in $N$ but independent of $\kappa$ for a large class of PDEs.
Our algorithm generates a quantum state that enables extracting features of the solution.
arXiv Detail & Related papers (2023-06-20T18:01:07Z) - Global optimization of MPS in quantum-inspired numerical analysis [0.0]
The study focuses on the search for the lowest eigenstates of a Hamiltonian.
Five algorithms are introduced: imaginary-time evolution, steepest gradient descent, an improved descent, an implicitly restarted Arnoldi method, and density matrix renormalization group (DMRG) optimization.
arXiv Detail & Related papers (2023-03-16T16:03:51Z) - End-to-end resource analysis for quantum interior point methods and portfolio optimization [63.4863637315163]
We provide a complete quantum circuit-level description of the algorithm from problem input to problem output.
We report the number of logical qubits and the quantity/depth of non-Clifford T-gates needed to run the algorithm.
arXiv Detail & Related papers (2022-11-22T18:54:48Z) - Improved Convergence Rate of Stochastic Gradient Langevin Dynamics with
Variance Reduction and its Application to Optimization [50.83356836818667]
Stochastic Gradient Langevin Dynamics is one of the most fundamental algorithms for solving non-convex optimization problems.
In this paper, we show two variants of this kind, namely the Variance Reduced Langevin Dynamics and the Recursive Gradient Langevin Dynamics.
arXiv Detail & Related papers (2022-03-30T11:39:00Z) - Experimental analysis of quantum annealers and hybrid solvers using
benchmark optimization problems [0.0]
This paper studies the Hamiltonian Cycle Problem (HCP) and the Traveling Salesman Problem (TSP) on D-Wave's quantum systems.
arXiv Detail & Related papers (2022-02-17T23:46:27Z) - Tightening the Dependence on Horizon in the Sample Complexity of
Q-Learning [59.71676469100807]
This work sharpens the sample complexity of synchronous Q-learning to an order of $\frac{|\mathcal{S}||\mathcal{A}|}{(1-\gamma)^4\varepsilon^2}$ for any $0 < \varepsilon < 1$.
Our finding unveils the effectiveness of vanilla Q-learning, which matches that of speedy Q-learning without requiring extra computation and storage.
arXiv Detail & Related papers (2021-02-12T14:22:05Z) - Power Normalizations in Fine-grained Image, Few-shot Image and Graph
Classification [38.84294567166725]
We study Power Normalizations (PN) in the deep learning setup via a novel PN layer pooling feature maps.
We investigate the role and meaning of MaxExp and Gamma, two popular PN functions.
We show that SPN on the autocorrelation/covariance matrix and the Heat Diffusion Process (HDP) on a graph Laplacian matrix are closely related.
arXiv Detail & Related papers (2020-12-27T17:06:06Z) - On Linear Stochastic Approximation: Fine-grained Polyak-Ruppert and
Non-Asymptotic Concentration [115.1954841020189]
We study the asymptotic and non-asymptotic properties of approximation procedures with Polyak-Ruppert averaging.
We prove a central limit theorem (CLT) for the averaged iterates with fixed step size and number of iterations going to infinity.
arXiv Detail & Related papers (2020-04-09T17:54:18Z)
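The iterate-averaging idea behind the last entry can be illustrated with a toy sketch. This is an assumed setup for illustration only, not the paper's setting: a constant-step linear stochastic approximation for solving $Ax = b$ from noisy updates, comparing the last iterate against the Polyak-Ruppert running average.

```python
# Toy illustration (assumed setup): constant-step linear stochastic
# approximation for A x = b with additive noise, comparing the last
# iterate to the Polyak-Ruppert (running) average of all iterates.
import numpy as np

rng = np.random.default_rng(0)

A = np.array([[2.0, 0.5], [0.5, 1.0]])   # positive-definite system matrix
b = np.array([1.0, -1.0])
x_star = np.linalg.solve(A, b)           # target fixed point

step, T = 0.05, 20_000
x = np.zeros(2)
avg = np.zeros(2)
for t in range(1, T + 1):
    noise = rng.normal(scale=0.1, size=2)
    x = x - step * (A @ x - b + noise)   # noisy fixed-point update
    avg += (x - avg) / t                 # running average of x_1, ..., x_t

print("last iterate error:", np.linalg.norm(x - x_star))
print("averaged error:    ", np.linalg.norm(avg - x_star))
```

With a constant step the last iterate keeps fluctuating at a noise floor set by the step size, while the average concentrates around the solution as $T$ grows, which is the qualitative effect the CLT in the entry above formalizes.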
This list is automatically generated from the titles and abstracts of the papers in this site.