Stochastic noise can be helpful for variational quantum algorithms
- URL: http://arxiv.org/abs/2210.06723v2
- Date: Thu, 8 Jun 2023 15:31:46 GMT
- Title: Stochastic noise can be helpful for variational quantum algorithms
- Authors: Junyu Liu, Frederik Wilde, Antonio Anna Mele, Liang Jiang, Jens Eisert
- Abstract summary: Saddle points constitute a crucial challenge for first-order gradient descent algorithms.
We provide evidence that the saddle points problem can be naturally avoided in variational quantum algorithms by exploiting the presence of stochasticity.
- Score: 8.8226732803857
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Saddle points constitute a crucial challenge for first-order gradient descent
algorithms. In notions of classical machine learning, they are avoided for
example by means of stochastic gradient descent methods. In this work, we
provide evidence that the saddle points problem can be naturally avoided in
variational quantum algorithms by exploiting the presence of stochasticity. We
prove convergence guarantees and present practical examples in numerical
simulations and on quantum hardware. We argue that the natural stochasticity of
variational algorithms can be beneficial for avoiding strict saddle points,
i.e., those saddle points with at least one negative Hessian eigenvalue. This
insight that some levels of shot noise could help is expected to add a new
perspective to notions of near-term variational quantum algorithms.
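To make the claimed mechanism concrete, here is a minimal, purely classical sketch (not taken from the paper, and with no quantum circuit involved): the toy cost f(x, y) = x^2 + (y^2 - 1)^2 has a strict saddle at the origin, where the Hessian has eigenvalues +2 and -4, and additive Gaussian perturbations of the exact gradient stand in for the finite-shot noise of a variational gradient estimate. Exact gradient descent started on the saddle's stable manifold stalls at the saddle, while the noisy iterates escape toward a minimizer.

```python
# Toy classical sketch (illustrative only, not the paper's experiments):
# Gaussian noise on the gradient stands in for finite-shot noise.
# The cost f(x, y) = x**2 + (y**2 - 1)**2 has a strict saddle at the
# origin (Hessian eigenvalues +2 and -4) and minima at (0, +1), (0, -1).
import numpy as np

def cost(theta):
    x, y = theta
    return x**2 + (y**2 - 1.0)**2

def grad(theta):
    """Exact gradient of the toy cost."""
    x, y = theta
    return np.array([2.0 * x, 4.0 * y * (y**2 - 1.0)])

def descend(theta0, noise_scale, steps=500, lr=0.05, seed=0):
    """Gradient descent with additive Gaussian noise on every gradient call."""
    rng = np.random.default_rng(seed)
    theta = np.array(theta0, dtype=float)
    for _ in range(steps):
        g = grad(theta) + noise_scale * rng.standard_normal(2)
        theta -= lr * g
    return theta

start = [0.5, 0.0]  # on the stable manifold (y = 0) of the strict saddle at (0, 0)

exact = descend(start, noise_scale=0.0)   # noiseless: converges to the saddle
noisy = descend(start, noise_scale=0.05)  # noisy: escapes toward a minimizer

print("exact GD:", exact, "cost:", cost(exact))  # ~(0, 0), cost ~1: stuck at the saddle
print("noisy GD:", noisy, "cost:", cost(noisy))  # ~(0, +-1), cost ~0: saddle escaped
```

In an actual variational quantum algorithm, the role of the Gaussian perturbation would be played by the shot noise of gradient estimates obtained from finitely many measurements (for example via parameter-shift rules); the toy cost only illustrates why such noise can push iterates off the stable manifold of a strict saddle, which is the effect the paper analyzes.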
Related papers
- A Unified Theory of Stochastic Proximal Point Methods without Smoothness [52.30944052987393]
Proximal point methods have attracted considerable interest owing to their numerical stability and robustness against imperfect tuning.
This paper presents a comprehensive analysis of a broad range of variants of the stochastic proximal point method (SPPM).
arXiv Detail & Related papers (2024-05-24T21:09:19Z) - Denoising Gradient Descent in Variational Quantum Algorithms [0.0]
We introduce an algorithm for mitigating the adverse effects of noise on gradient descent in variational quantum algorithms.
We empirically demonstrate the advantages offered by our algorithm on randomized parametrized quantum circuits.
arXiv Detail & Related papers (2024-03-06T16:15:25Z) - Riemannian stochastic optimization methods avoid strict saddle points [68.80251170757647]
We show that policies under study avoid strict saddle points / submanifolds with probability 1.
This result provides an important sanity check as it shows that, almost always, the limit state of an algorithm can only be a local minimizer.
arXiv Detail & Related papers (2023-11-04T11:12:24Z) - Quantum speedups for stochastic optimization [18.32349609443295]
We consider the problem of minimizing a continuous function given quantum access to a stochastic gradient oracle.
We provide two new methods for the special case of minimizing a Lipschitz function.
arXiv Detail & Related papers (2023-08-03T07:39:10Z) - High-Probability Bounds for Stochastic Optimization and Variational
Inequalities: the Case of Unbounded Variance [59.211456992422136]
We propose algorithms with high-probability convergence results under less restrictive assumptions.
These results justify the usage of the considered methods for solving problems that do not fit standard functional classes in optimization.
arXiv Detail & Related papers (2023-02-02T10:37:23Z) - Clipped Stochastic Methods for Variational Inequalities with
Heavy-Tailed Noise [64.85879194013407]
We prove the first high-probability results with logarithmic dependence on the confidence level for methods for solving monotone and structured non-monotone VIPs.
Our results match the best-known ones in the light-tails case and are novel for structured non-monotone problems.
In addition, we numerically validate that the gradient noise of many practical formulations is heavy-tailed and show that clipping improves the performance of SEG/SGDA.
arXiv Detail & Related papers (2022-06-02T15:21:55Z) - Towards Efficient Ansatz Architecture for Variational Quantum Algorithms [12.728075253374062]
Variational quantum algorithms are expected to demonstrate the advantage of quantum computing on noisy quantum computers.
Training such variational quantum algorithms suffers from gradient vanishing as the size of the algorithm increases.
We propose a novel training scheme to mitigate such noise-induced gradient vanishing.
arXiv Detail & Related papers (2021-11-26T19:41:35Z) - Mitigating Noise-Induced Gradient Vanishing in Variational Quantum
Algorithm Training [14.343891429004639]
Variational quantum algorithms are expected to demonstrate the advantage of quantum computing on noisy quantum computers.
Training such variational quantum algorithms suffers from gradient vanishing as the size of the algorithm increases.
We propose a novel training scheme to mitigate such noise-induced gradient vanishing.
arXiv Detail & Related papers (2021-11-25T18:28:34Z) - Quantum algorithms for quantum dynamics: A performance study on the
spin-boson model [68.8204255655161]
Quantum algorithms for quantum dynamics simulations are traditionally based on implementing a Trotter-approximation of the time-evolution operator.
Variational quantum algorithms have become an indispensable alternative, enabling small-scale simulations on present-day hardware.
We show that, despite providing a clear reduction of quantum gate cost, the variational method in its current implementation is unlikely to lead to a quantum advantage.
arXiv Detail & Related papers (2021-08-09T18:00:05Z) - Differentiable Annealed Importance Sampling and the Perils of Gradient
Noise [68.44523807580438]
Annealed importance sampling (AIS) and related algorithms are highly effective tools for marginal likelihood estimation.
Differentiability is a desirable property as it would admit the possibility of optimizing marginal likelihood as an objective.
We propose a differentiable algorithm by abandoning Metropolis-Hastings steps, which further unlocks mini-batch computation.
arXiv Detail & Related papers (2021-07-21T17:10:14Z) - Stochastic optimization with momentum: convergence, fluctuations, and
traps avoidance [0.0]
In this paper, a general stochastic optimization procedure is studied, unifying several variants of stochastic gradient descent such as, among others, the stochastic heavy ball method, the stochastic Nesterov Accelerated Gradient (S-NAG), and the widely used Adam method.
Trap avoidance is studied by viewing the algorithm as a noisy discretization of a non-autonomous ordinary differential equation.
arXiv Detail & Related papers (2020-12-07T19:14:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.