Equivalence of cost concentration and gradient vanishing for quantum circuits: An elementary proof in the Riemannian formulation
- URL: http://arxiv.org/abs/2402.07883v2
- Date: Wed, 11 Sep 2024 13:56:14 GMT
- Title: Equivalence of cost concentration and gradient vanishing for quantum circuits: An elementary proof in the Riemannian formulation
- Authors: Qiang Miao, Thomas Barthel
- Abstract summary: The optimization of quantum circuits can be hampered by a decay of average gradient amplitudes with increasing system size.
The barren plateau problem is equivalent to an exponential decay of the variance of cost-function differences.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The optimization of quantum circuits can be hampered by a decay of average gradient amplitudes with increasing system size. When the decay is exponential, this is called the barren plateau problem. Considering explicit circuit parametrizations (in terms of rotation angles), it has been shown in Arrasmith et al., Quantum Sci. Technol. 7, 045015 (2022) that barren plateaus are equivalent to an exponential decay of the variance of cost-function differences. We show that the issue is particularly simple in the (parametrization-free) Riemannian formulation of such optimization problems and obtain a tighter bound for the cost-function variance. An elementary derivation shows that the single-gate variance of the cost function is strictly equal to half the variance of the Riemannian single-gate gradient, where we sample variable gates according to the uniform Haar measure. The total variances of the cost function and its gradient are then both bounded from above by the sum of single-gate variances and, conversely, bound single-gate variances from above. So, decays of gradients and cost-function variations go hand in hand, and barren plateau problems cannot be resolved by avoiding gradient-based in favor of gradient-free optimization methods.
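For intuition, here is a minimal numerical sketch of the phenomenon (not a reproduction of the paper's exact single-gate identity or its factor of one half): for a toy cost $C(U) = \mathrm{Tr}(O\, U \rho U^\dagger)$ with $U$ drawn from the Haar measure, both the variance of the cost and the variance of a Riemannian directional derivative shrink together as the number of qubits grows. The state, observable, and helper names below are our own illustrative choices.

```python
import numpy as np

def haar_unitary(d, rng):
    """Sample a d x d unitary from the Haar measure via QR of a complex Ginibre matrix."""
    z = (rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))) / np.sqrt(2.0)
    q, r = np.linalg.qr(z)
    phases = np.diagonal(r) / np.abs(np.diagonal(r))  # fix phases so the law is exactly Haar
    return q * phases

def cost(U, rho, O):
    """Toy cost function C(U) = Tr(O U rho U^dagger)."""
    return np.real(np.trace(O @ U @ rho @ U.conj().T))

def directional_derivative(U, rho, O, A):
    """Riemannian directional derivative d/dt C(U e^{tA}) at t=0, i.e.
    Tr(O U [A, rho] U^dagger), for a skew-Hermitian tangent direction A."""
    return np.real(np.trace(O @ U @ (A @ rho - rho @ A) @ U.conj().T))

rng = np.random.default_rng(0)
for n_qubits in range(2, 8):
    d = 2 ** n_qubits
    rho = np.zeros((d, d))
    rho[0, 0] = 1.0                                        # pure state |0...0><0...0|
    O = np.kron(np.diag([1.0, -1.0]), np.eye(d // 2))      # Pauli-Z on the first qubit
    H = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    A = 0.5j * (H + H.conj().T)                            # random skew-Hermitian direction
    A /= np.linalg.norm(A)                                 # unit Frobenius norm
    costs, derivs = [], []
    for _ in range(500):
        U = haar_unitary(d, rng)
        costs.append(cost(U, rho, O))
        derivs.append(directional_derivative(U, rho, O, A))
    print(f"{n_qubits} qubits: Var[C] = {np.var(costs):.2e}, Var[dC] = {np.var(derivs):.2e}")
```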
Related papers
- Generalizing Stochastic Smoothing for Differentiation and Gradient Estimation [59.86921150579892]
We deal with the problem of gradient estimation for differentiable relaxations of algorithms, operators, simulators, and other non-differentiable functions.
We develop variance reduction strategies for differentiable sorting and ranking, differentiable shortest-paths on graphs, differentiable rendering for pose estimation, as well as differentiable cryo-ET simulations.
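As a toy illustration of the smoothing idea in this entry (not the paper's generalized estimators), perturbing the input of a non-differentiable function with Gaussian noise makes its expectation differentiable, and the identity $\nabla_x\, \mathbb{E}[f(x+\sigma\varepsilon)] = \mathbb{E}[f(x+\sigma\varepsilon)\,\varepsilon]/\sigma$ yields a Monte Carlo Jacobian estimate; subtracting the baseline $f(x)$ is a simple variance-reduction step. The function names below are hypothetical.

```python
import numpy as np

def hard_ranks(x):
    """Non-differentiable ranking: position of each entry in the sorted order."""
    return np.argsort(np.argsort(x)).astype(float)

def smoothed_jacobian(f, x, sigma=0.1, n_samples=10_000, rng=None):
    """Monte Carlo Jacobian of E[f(x + sigma * eps)] w.r.t. x, with eps ~ N(0, I).
    Uses the Gaussian smoothing identity with f(x) as a variance-reducing baseline."""
    rng = rng or np.random.default_rng(0)
    eps = rng.standard_normal((n_samples, x.size))
    fx = np.array([f(x + sigma * e) for e in eps])   # shape (n_samples, out_dim)
    baseline = f(x)
    # J[o, i] = E[(f_o(x + sigma*eps) - f_o(x)) * eps_i] / sigma
    return np.einsum("no,ni->oi", fx - baseline, eps) / (n_samples * sigma)

x = np.array([0.3, -1.2, 0.7, 0.1])
J = smoothed_jacobian(hard_ranks, x)
print("estimated Jacobian of smoothed ranks:\n", np.round(J, 2))
```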
arXiv Detail & Related papers (2024-10-10T17:10:00Z) - Benign overfitting in Fixed Dimension via Physics-Informed Learning with Smooth Inductive Bias [8.668428992331808]
We develop a Sobolev norm learning curve for kernel ridge(less) regression when addressing (elliptical) linear inverse problems.
Our results show that the PDE operators in the inverse problem can stabilize the variance and even lead to benign overfitting for fixed-dimensional problems.
arXiv Detail & Related papers (2024-06-13T14:54:30Z) - Adaptive Stochastic Variance Reduction for Non-convex Finite-Sum
Minimization [52.25843977506935]
We propose an adaptive variance-reduction method, called AdaSpider, for $L$-smooth, non-convex functions with a finite-sum structure.
In doing so, we are able to compute an $\epsilon$-stationary point with $\tilde{O}\left(n + \sqrt{n}/\epsilon^2\right)$ calls.
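For context, the sketch below shows a bare-bones SPIDER-style recursive gradient estimator on a least-squares finite sum; the step size and refresh period are hand-picked, whereas AdaSpider's contribution is to set such quantities adaptively without knowing the smoothness constant. All names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 500, 10
A = rng.standard_normal((n, p))
b = A @ rng.standard_normal(p) + 0.1 * rng.standard_normal(n)

def grad_i(x, i):
    """Gradient of the i-th summand f_i(x) = 0.5 * (a_i^T x - b_i)^2."""
    return (A[i] @ x - b[i]) * A[i]

def full_grad(x):
    """Gradient of f(x) = (1/n) * sum_i f_i(x)."""
    return A.T @ (A @ x - b) / n

x = np.zeros(p)
eta, q = 0.02, 20                 # step size and refresh period (hand-tuned, not adaptive)
v = full_grad(x)                  # SPIDER estimator, initialized with a full gradient
for k in range(1, 3001):
    x_next = x - eta * v
    if k % q == 0:
        v = full_grad(x_next)                        # periodic full-gradient refresh
    else:
        i = rng.integers(n)
        v = grad_i(x_next, i) - grad_i(x, i) + v     # recursive variance-reduced update
    x = x_next

print("||grad f(x)|| after 3000 steps:", np.linalg.norm(full_grad(x)))
```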
arXiv Detail & Related papers (2022-11-03T14:41:46Z) - Optimal policy evaluation using kernel-based temporal difference methods [78.83926562536791]
We use reproducing kernel Hilbert spaces for estimating the value function of an infinite-horizon discounted Markov reward process (MRP).
We derive a non-asymptotic upper bound on the error with explicit dependence on the eigenvalues of the associated kernel operator.
We prove minimax lower bounds over sub-classes of MRPs.
arXiv Detail & Related papers (2021-09-24T14:48:20Z) - Differentiable Annealed Importance Sampling and the Perils of Gradient
Noise [68.44523807580438]
Annealed importance sampling (AIS) and related algorithms are highly effective tools for marginal likelihood estimation.
Differentiability is a desirable property as it would admit the possibility of optimizing marginal likelihood as an objective.
We propose a differentiable algorithm by abandoning Metropolis-Hastings steps, which further unlocks mini-batch computation.
arXiv Detail & Related papers (2021-07-21T17:10:14Z) - Variational Transport: A Convergent Particle-BasedAlgorithm for Distributional Optimization [106.70006655990176]
A distributional optimization problem arises widely in machine learning and statistics.
We propose a novel particle-based algorithm, dubbed variational transport, which approximately performs Wasserstein gradient descent.
We prove that when the objective function satisfies a functional version of the Polyak-Lojasiewicz (PL) condition (Polyak, 1963) and smoothness conditions, variational transport converges linearly.
arXiv Detail & Related papers (2020-12-21T18:33:13Z) - On barren plateaus and cost function locality in variational quantum
algorithms [0.0]
Variational quantum algorithms rely on gradient-based optimization to iteratively minimize a cost function evaluated by measuring output(s) of a quantum processor.
A barren plateau is the phenomenon of exponentially vanishing gradients in sufficiently expressive parametrized quantum circuits.
arXiv Detail & Related papers (2020-11-20T18:00:01Z) - Large gradients via correlation in random parameterized quantum circuits [0.0]
The presence of exponentially vanishing gradients in cost function landscapes is an obstacle to optimization by gradient descent methods.
We prove that reducing the dimensionality of the parameter space can allow one to circumvent the vanishing gradient phenomenon.
arXiv Detail & Related papers (2020-05-25T16:15:53Z) - The Implicit Regularization of Stochastic Gradient Flow for Least
Squares [24.976079444818552]
We study the implicit regularization of mini-batch stochastic gradient descent, when applied to the fundamental problem of least squares regression.
We leverage a continuous-time stochastic differential equation having the same moments as stochastic gradient descent, which we call stochastic gradient flow.
We give a bound on the excess risk of stochastic gradient flow at time $t$, over ridge regression with tuning parameter $\lambda = 1/t$.
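A quick numerical sanity check of this correspondence (under our own scaling conventions: loss $\|y - Xb\|^2/(2n)$, gradient flow started at zero; the paper's exact risk bound is not reproduced) could look as follows.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)
n, p = 200, 20
X = rng.standard_normal((n, p))
y = X @ rng.standard_normal(p) + 0.5 * rng.standard_normal(n)

S = X.T @ X / n                  # sample covariance
g = X.T @ y / n

def gradient_flow(t):
    """Closed-form gradient flow for f(b) = ||y - X b||^2 / (2n), started at b = 0:
    b(t) = S^{-1} (I - exp(-t S)) X^T y / n."""
    return np.linalg.solve(S, (np.eye(p) - expm(-t * S)) @ g)

def ridge(lam):
    """Ridge estimator for the same loss plus (lam / 2) * ||b||^2."""
    return np.linalg.solve(S + lam * np.eye(p), g)

for t in [0.5, 2.0, 10.0, 50.0]:
    b_gf, b_r = gradient_flow(t), ridge(1.0 / t)
    rel = np.linalg.norm(b_gf - b_r) / np.linalg.norm(b_r)
    print(f"t = {t:5.1f}: ||gradient flow - ridge(1/t)|| / ||ridge(1/t)|| = {rel:.3f}")
```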
arXiv Detail & Related papers (2020-03-17T16:37:25Z) - Implicit differentiation of Lasso-type models for hyperparameter
optimization [82.73138686390514]
We introduce an efficient implicit differentiation algorithm, without matrix inversion, tailored for Lasso-type problems.
Our approach scales to high-dimensional data by leveraging the sparsity of the solutions.
arXiv Detail & Related papers (2020-02-20T18:43:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.