Accelerating Noisy VQE Optimization with Gaussian Processes
- URL: http://arxiv.org/abs/2204.07331v3
- Date: Wed, 3 Aug 2022 18:29:04 GMT
- Title: Accelerating Noisy VQE Optimization with Gaussian Processes
- Authors: Juliane Mueller, Wim Lavrijsen, Costin Iancu, Wibe de Jong
- Abstract summary: We introduce the use of Gaussian Processes (GP) as surrogate models to reduce the impact of noise.
ImFil is a state-of-the-art, gradient-free method which, in comparative studies, has been shown to outperform other optimizers on noisy VQE problems.
We show that when noise is present, the GP+ImFil approach finds results closer to the true global minimum in fewer evaluations than standalone ImFil.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Hybrid variational quantum algorithms, which combine a classical optimizer
with evaluations on a quantum chip, are the most promising candidates to show
quantum advantage on current noisy, intermediate-scale quantum (NISQ) devices.
The classical optimizer is required to perform well in the presence of noise in
the objective function evaluations, or else it becomes the weakest link in the
algorithm. We introduce the use of Gaussian Processes (GP) as surrogate models
to reduce the impact of noise and to provide high quality seeds to escape local
minima, whether real or noise-induced. We build this as a framework on top of
local optimizations, for which we choose Implicit Filtering (ImFil) in this
study. ImFil is a state-of-the-art, gradient-free method, which in comparative
studies has been shown to outperform other optimizers on noisy VQE problems. The result is a new
method: "GP+ImFil". We show that when noise is present, the GP+ImFil approach
finds results closer to the true global minimum in fewer evaluations than
standalone ImFil, and that it works particularly well for larger dimensional
problems. Using GP to seed local searches in a multi-modal landscape shows
mixed results: although it is capable of improving on ImFil standalone, it does
not do so consistently and would only be preferred over other, more exhaustive,
multistart methods if resources are constrained.
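The paper does not include code, but the core idea of the GP+ImFil framework can be sketched: fit a Gaussian Process surrogate to noisy objective evaluations, then use the minimizer of the smooth posterior mean as a seed for the local search. The sketch below is a hypothetical toy illustration on a 1-D noisy landscape; the kernel, hyperparameters, and the grid-search stand-in for ImFil are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def rbf_kernel(a, b, length_scale=0.5):
    """Squared-exponential kernel matrix between 1-D point sets a and b."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return np.exp(-0.5 * d2 / length_scale**2)

def gp_posterior_mean(x_train, y_train, x_query, noise_var=0.04, length_scale=0.5):
    """GP regression posterior mean (zero prior mean, RBF kernel).

    The noise_var term on the diagonal is what lets the surrogate
    average out shot noise instead of interpolating it.
    """
    K = rbf_kernel(x_train, x_train, length_scale) + noise_var * np.eye(len(x_train))
    K_star = rbf_kernel(x_query, x_train, length_scale)
    return K_star @ np.linalg.solve(K, y_train)

rng = np.random.default_rng(0)

# Toy "energy landscape" standing in for a VQE objective,
# with additive noise as a proxy for shot noise on the quantum chip.
true_f = lambda x: np.sin(3 * x) + 0.3 * x**2
x_train = rng.uniform(-2.0, 2.0, 40)
y_train = true_f(x_train) + 0.2 * rng.standard_normal(40)

# Minimize the smooth GP posterior mean to obtain a high-quality seed;
# in the paper this seed would initialize the local ImFil search.
grid = np.linspace(-2.0, 2.0, 400)
mu = gp_posterior_mean(x_train, y_train, grid)
seed = grid[np.argmin(mu)]
```

Because the posterior mean averages over the noise, its minimizer tends to sit near the true global minimum even when individual noisy evaluations would mislead a direct local search.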
Related papers
- Classical Post-processing for Unitary Block Optimization Scheme to Reduce the Effect of Noise on Optimization of Variational Quantum Eigensolvers [0.0]
Variational Quantum Eigensolvers (VQE) are a promising approach for finding the classically intractable ground state of a Hamiltonian.
Here we develop two classical post-processing techniques which improve UBOS especially when measurements have large noise.
arXiv Detail & Related papers (2024-04-29T18:11:53Z)
- Trainability Analysis of Quantum Optimization Algorithms from a Bayesian Lens [2.9356265132808024]
We show that a noiseless QAOA circuit with a depth of $\tilde{\mathcal{O}}(\log n)$ can be trained efficiently.
Our results offer theoretical performance guarantees for quantum algorithms in the noisy intermediate-scale quantum era.
arXiv Detail & Related papers (2023-10-10T02:56:28Z)
- High-Probability Convergence for Composite and Distributed Stochastic Minimization and Variational Inequalities with Heavy-Tailed Noise [96.80184504268593]
Gradient clipping is one of the key algorithmic ingredients to derive good high-probability guarantees.
Clipping can spoil the convergence of the popular methods for composite and distributed optimization.
arXiv Detail & Related papers (2023-10-03T07:49:17Z)
- Distributed Extra-gradient with Optimal Complexity and Communication Guarantees [60.571030754252824]
We consider monotone variational inequality (VI) problems in multi-GPU settings where multiple processors/workers/clients have access to local dual vectors.
Extra-gradient, which is a de facto algorithm for monotone VI problems, has not been designed to be communication-efficient.
We propose a quantized generalized extra-gradient (Q-GenX), which is an unbiased and adaptive compression method tailored to solve VIs.
arXiv Detail & Related papers (2023-08-17T21:15:04Z)
- QAOA Performance in Noisy Devices: The Effect of Classical Optimizers and Ansatz Depth [0.32985979395737786]
The Quantum Approximate Optimization Algorithm (QAOA) is a variational quantum algorithm for noisy intermediate-scale quantum (NISQ) computers.
This paper presents an investigation into the impact of realistic noise on the classical optimizers.
We find that while there is no significant difference in the performance of the classical optimizers in a noiseless state-vector simulation, Adam and AMSGrad perform best in the presence of shot noise.
arXiv Detail & Related papers (2023-07-19T17:22:44Z)
- Using Differential Evolution to avoid local minima in Variational Quantum Algorithms [0.0]
Variational Quantum Algorithms (VQAs) are among the most promising NISQ-era algorithms for harnessing quantum computing.
Our goal in this paper is to study alternative optimization methods that can avoid or reduce the effect of local minima and barren plateau problems.
arXiv Detail & Related papers (2023-03-21T20:31:06Z)
- Accelerating variational quantum algorithms with multiple quantum processors [78.36566711543476]
Variational quantum algorithms (VQAs) have the potential of utilizing near-term quantum machines to gain certain computational advantages.
Modern VQAs suffer from cumbersome computational overhead, hampered by the tradition of employing a solitary quantum processor to handle large data.
Here we devise an efficient distributed optimization scheme, called QUDIO, to address this issue.
arXiv Detail & Related papers (2021-06-24T08:18:42Z)
- A Comparison of Various Classical Optimizers for a Variational Quantum Linear Solver [0.0]
Variational Hybrid Quantum Classical Algorithms (VHQCAs) are a class of quantum algorithms intended to run on noisy quantum devices.
These algorithms employ a parameterized quantum circuit (ansatz) and a quantum-classical feedback loop.
A classical device is used to optimize the parameters in order to minimize a cost function that can be computed far more efficiently on a quantum device.
arXiv Detail & Related papers (2021-06-16T10:40:00Z)
- High Probability Complexity Bounds for Non-Smooth Stochastic Optimization with Heavy-Tailed Noise [51.31435087414348]
It is essential to theoretically guarantee that algorithms provide small objective residual with high probability.
Existing methods for non-smooth convex optimization have complexity bounds with dependence on confidence level.
We propose novel stepsize rules for two methods with gradient clipping.
arXiv Detail & Related papers (2021-06-10T17:54:21Z)
- Learning based signal detection for MIMO systems with unknown noise statistics [84.02122699723536]
This paper aims to devise a generalized maximum likelihood (ML) estimator to robustly detect signals with unknown noise statistics.
In practice, there is little or even no statistical knowledge on the system noise, which in many cases is non-Gaussian, impulsive and not analyzable.
Our framework is driven by an unsupervised learning approach, where only the noise samples are required.
arXiv Detail & Related papers (2021-01-21T04:48:15Z)
- Plug-And-Play Learned Gaussian-mixture Approximate Message Passing [71.74028918819046]
We propose a plug-and-play compressed sensing (CS) recovery algorithm suitable for any i.i.d. source prior.
Our algorithm builds upon Borgerding's learned AMP (LAMP), yet significantly improves it by adopting a universal denoising function within the algorithm.
Numerical evaluation shows that the L-GM-AMP algorithm achieves state-of-the-art performance without any knowledge of the source prior.
arXiv Detail & Related papers (2020-11-18T16:40:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides (including this list) and is not responsible for any consequences of its use.