An Empirical Comparison of Optimizers for Quantum Machine Learning with
SPSA-based Gradients
- URL: http://arxiv.org/abs/2305.00224v1
- Date: Thu, 27 Apr 2023 15:19:49 GMT
- Title: An Empirical Comparison of Optimizers for Quantum Machine Learning with
SPSA-based Gradients
- Authors: Marco Wiedmann and Marc Hölle and Maniraman Periyasamy and Nico
Meyer and Christian Ufrecht and Daniel D. Scherer and Axel Plinge and
Christopher Mutschler
- Abstract summary: We introduce a novel approach that uses the approximated gradient from SPSA in combination with state-of-the-art classical gradient-based optimizers.
We demonstrate numerically that this outperforms both standard SPSA and the parameter-shift rule in terms of convergence rate and absolute error in simple regression tasks.
- Score: 1.2532932830320982
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Variational quantum algorithms (VQAs) have attracted a lot of attention from the
quantum computing community over the last few years. Their hybrid quantum-classical nature with relatively
shallow quantum circuits makes them a promising platform for demonstrating the
capabilities of NISQ devices. Although the classical machine learning community
focuses on gradient-based parameter optimization, finding near-exact gradients
for variational quantum circuits (VQCs) with the parameter-shift rule introduces a large sampling overhead.
Therefore, gradient-free optimizers have gained popularity in quantum machine
learning circles. Among the most promising candidates is the SPSA algorithm,
due to its low computational cost and inherent noise resilience. We introduce a
novel approach that uses the approximated gradient from SPSA in combination
with state-of-the-art gradient-based classical optimizers. We demonstrate
numerically that this outperforms both standard SPSA and the parameter-shift
rule in terms of convergence rate and absolute error in simple regression
tasks. The improvement of our novel approach over SPSA with stochastic gradient
descent is amplified further when shot and hardware noise are taken into account.
We also demonstrate that error mitigation does not significantly affect our
results.
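To make the core idea concrete, the sketch below (illustrative only, not the authors' implementation; the toy loss stands in for a VQC expectation value) feeds a two-evaluation SPSA gradient estimate into a plain Adam update:

```python
import numpy as np

def spsa_gradient(loss, theta, c=0.1, rng=np.random.default_rng(0)):
    """One SPSA measurement pair: two loss evaluations estimate the whole gradient."""
    delta = rng.choice([-1.0, 1.0], size=theta.shape)   # Rademacher perturbation
    diff = loss(theta + c * delta) - loss(theta - c * delta)
    return diff / (2.0 * c) * delta                     # 1/delta_i == delta_i for +-1 entries

def adam_step(theta, grad, state, lr=0.05, b1=0.9, b2=0.999, eps=1e-8):
    """Plain Adam update driven by the (noisy) SPSA gradient estimate."""
    m, v, t = state
    t += 1
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat, v_hat = m / (1 - b1 ** t), v / (1 - b2 ** t)
    return theta - lr * m_hat / (np.sqrt(v_hat) + eps), (m, v, t)

# Hypothetical stand-in for a VQC cost landscape, for illustration only.
loss = lambda th: float(np.sum(np.sin(th) ** 2))
theta = np.array([0.8, -1.2, 0.3])
state = (np.zeros_like(theta), np.zeros_like(theta), 0)
for _ in range(200):
    theta, state = adam_step(theta, spsa_gradient(loss, theta), state)
```

For n parameters, a full parameter-shift gradient needs 2n circuit evaluations per step, whereas the SPSA estimate above always needs two, which is where the sampling advantage comes from.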
Related papers
- Efficient Quantum Gradient and Higher-order Derivative Estimation via Generalized Hadamard Test [2.5545813981422882]
Gradient-based methods are crucial for understanding the behavior of parameterized quantum circuits (PQCs).
Existing gradient estimation methods, such as Finite Difference, Shift Rule, Hadamard Test, and Direct Hadamard Test, often yield suboptimal gradient circuits for certain PQCs.
We introduce the Flexible Hadamard Test, which, when applied to first-order gradient estimation methods, can invert the roles of ansatz generators and observables.
We also introduce Quantum Automatic Differentiation (QAD), a unified gradient method that adaptively selects the best gradient estimation technique for individual parameters within a PQC.
arXiv Detail & Related papers (2024-08-10T02:08:54Z) - Guided-SPSA: Simultaneous Perturbation Stochastic Approximation assisted by the Parameter Shift Rule [4.943277284710129]
We introduce a novel gradient estimation approach called Guided-SPSA, which meaningfully combines the parameter-shift rule and SPSA-based gradient approximation.
The Guided-SPSA results in a 15% to 25% reduction in the number of circuit evaluations required during training for a similar or better optimality of the solution found.
We demonstrate the performance of Guided-SPSA on different paradigms of quantum machine learning, such as regression, classification, and reinforcement learning.
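The summary does not spell out the combination rule, so the following is only one plausible reading (an assumption, not Guided-SPSA itself): compute exact parameter-shift gradients for a chosen subset of parameters and fill in the rest with a single SPSA estimate, trading circuit evaluations against gradient accuracy.

```python
import numpy as np

def parameter_shift_grad(loss, theta, idx, shift=np.pi / 2):
    """Exact gradient of a Pauli-rotation parameter via the parameter-shift rule."""
    e = np.zeros_like(theta)
    e[idx] = shift
    return 0.5 * (loss(theta + e) - loss(theta - e))

def blended_grad(loss, theta, exact_idx, c=0.1, rng=np.random.default_rng(1)):
    """Hypothetical blend: parameter-shift on a chosen subset, SPSA for the rest."""
    delta = rng.choice([-1.0, 1.0], size=theta.shape)
    spsa = (loss(theta + c * delta) - loss(theta - c * delta)) / (2 * c) * delta
    grad = spsa.copy()
    for i in exact_idx:                 # overwrite the chosen subset with exact values
        grad[i] = parameter_shift_grad(loss, theta, i)
    return grad
```

With m exactly evaluated parameters, this hypothetical blend costs 2m + 2 circuit evaluations per step instead of 2n for a full parameter-shift gradient.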
arXiv Detail & Related papers (2024-04-24T09:13:39Z) - Domain Generalization Guided by Gradient Signal to Noise Ratio of
Parameters [69.24377241408851]
Overfitting to the source domain is a common issue in gradient-based training of deep neural networks.
We propose to base the selection on the gradient-signal-to-noise ratio (GSNR) of a network's parameters.
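GSNR is commonly defined per parameter as the squared mean of the per-sample gradient divided by its variance across samples; a minimal computation under that standard definition (the paper's exact estimator may differ) looks like:

```python
import numpy as np

def gsnr(per_sample_grads):
    """per_sample_grads: array of shape (num_samples, num_params).
    Returns the gradient signal-to-noise ratio for each parameter."""
    mean = per_sample_grads.mean(axis=0)
    var = per_sample_grads.var(axis=0) + 1e-12   # guard against zero variance
    return mean ** 2 / var
```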
arXiv Detail & Related papers (2023-10-11T10:21:34Z) - Variational Quantum Approximate Spectral Clustering for Binary
Clustering Problems [0.7550566004119158]
We introduce the Variational Quantum Approximate Spectral Clustering (VQASC) algorithm.
VQASC requires optimization of fewer parameters than the system size, N, traditionally required in classical problems.
We present numerical results from both synthetic and real-world datasets.
arXiv Detail & Related papers (2023-09-08T17:54:42Z) - Scaling Forward Gradient With Local Losses [117.22685584919756]
Forward learning is a biologically plausible alternative to backprop for learning deep neural networks.
We show that it is possible to substantially reduce the variance of the forward gradient by applying perturbations to activations rather than weights.
Our approach matches backprop on MNIST and CIFAR-10 and significantly outperforms previously proposed backprop-free algorithms on ImageNet.
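For orientation, the basic weight-space forward gradient projects the loss onto a random direction and scales that direction by the resulting directional derivative; the sketch below approximates the directional derivative with finite differences and shows only this baseline estimator, not the activation-perturbation variant the paper proposes.

```python
import numpy as np

def forward_gradient(loss, theta, eps=1e-4, rng=np.random.default_rng(2)):
    """Weight-space forward gradient: an unbiased estimate of the true gradient
    built from a single directional derivative along a random direction v."""
    v = rng.standard_normal(theta.shape)
    directional = (loss(theta + eps * v) - loss(theta - eps * v)) / (2 * eps)
    return directional * v
```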
arXiv Detail & Related papers (2022-10-07T03:52:27Z) - LAWS: Look Around and Warm-Start Natural Gradient Descent for Quantum
Neural Networks [11.844238544360149]
Variational quantum algorithms (VQAs) have recently received significant attention due to their promising performance on Noisy Intermediate-Scale Quantum (NISQ) computers.
VQAs run on parameterized quantum circuits (PQCs); with randomly initialized variational parameters they suffer from barren plateaus (BP), where the gradient vanishes exponentially in the number of qubits.
In this paper, we first revisit quantum natural gradient (QNG), one of the most popular algorithms used in VQAs, from the classical first-order optimization point of view.
Then, we propose LAWS, a Look Around and Warm-Start variant of natural gradient descent.
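For reference, a generic quantum natural gradient step preconditions the ordinary gradient with the (pseudo-inverse of the) Fubini-Study metric tensor of the circuit; this sketch shows only that baseline update, not the LAWS look-around/warm-start scheme:

```python
import numpy as np

def qng_step(theta, grad, metric, lr=0.1):
    """One quantum natural gradient step: precondition the gradient with the
    pseudo-inverse of the circuit's Fubini-Study metric tensor."""
    return theta - lr * np.linalg.pinv(metric) @ grad
```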
arXiv Detail & Related papers (2022-05-05T14:16:40Z) - Faster One-Sample Stochastic Conditional Gradient Method for Composite
Convex Minimization [61.26619639722804]
We propose a conditional gradient method (CGM) for minimizing convex finite-sum objectives formed as a sum of smooth and non-smooth terms.
The proposed method, equipped with a stochastic average gradient (SAG) estimator, requires only one sample per iteration. Nevertheless, it guarantees fast convergence rates on par with more sophisticated variance reduction techniques.
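As a rough reminder of the conditional-gradient template the method builds on (simplified here to the probability simplex with an exact gradient; the paper's one-sample SAG estimator and composite term are omitted):

```python
import numpy as np

def frank_wolfe_simplex(grad, x0, steps=100):
    """Conditional gradient (Frank-Wolfe) on the probability simplex: the linear
    minimization oracle just picks the vertex with the smallest gradient entry."""
    x = x0.copy()
    for t in range(steps):
        g = grad(x)                     # exact gradient here; the paper uses a one-sample estimate
        s = np.zeros_like(x)
        s[np.argmin(g)] = 1.0           # LMO over the simplex
        gamma = 2.0 / (t + 2.0)         # standard step-size schedule
        x = (1 - gamma) * x + gamma * s
    return x
```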
arXiv Detail & Related papers (2022-02-26T19:10:48Z) - Communication-Efficient Federated Learning via Quantized Compressed
Sensing [82.10695943017907]
The presented framework consists of gradient compression for wireless devices and gradient reconstruction for a parameter server.
Thanks to gradient sparsification and quantization, our strategy can achieve a higher compression ratio than one-bit gradient compression.
We demonstrate that the framework achieves almost identical performance with the case that performs no compression.
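As a much simpler illustration of the sparsify-then-quantize idea such frameworks exploit (not the paper's quantized compressed sensing encoder), a device could transmit only the top-k gradient entries in a few-bit representation:

```python
import numpy as np

def sparsify_and_quantize(grad, k=8, levels=16):
    """Keep the k largest-magnitude entries, then uniformly quantize their values."""
    idx = np.argsort(np.abs(grad))[-k:]                # indices of the top-k entries
    vals = grad[idx]
    scale = np.abs(vals).max() + 1e-12
    q = np.round(vals / scale * (levels // 2)).astype(np.int8)
    return idx, q, scale                               # what the device would transmit

def reconstruct(idx, q, scale, dim, levels=16):
    """Parameter-server side: rebuild a sparse gradient vector from the message."""
    g = np.zeros(dim)
    g[idx] = q.astype(float) / (levels // 2) * scale
    return g
```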
arXiv Detail & Related papers (2021-11-30T02:13:54Z) - A Comparison of Various Classical Optimizers for a Variational Quantum
Linear Solver [0.0]
Variational Hybrid Quantum Classical Algorithms (VHQCAs) are a class of quantum algorithms intended to run on noisy quantum devices.
These algorithms employ a parameterized quantum circuit (ansatz) and a quantum-classical feedback loop.
A classical device is used to optimize the parameters in order to minimize a cost function that can be computed far more efficiently on a quantum device.
arXiv Detail & Related papers (2021-06-16T10:40:00Z) - Zeroth-Order Hybrid Gradient Descent: Towards A Principled Black-Box
Optimization Framework [100.36569795440889]
This work studies zeroth-order (ZO) optimization, which does not require first-order information.
We show that with a graceful design in coordinate importance sampling, the proposed ZO optimization method is efficient both in terms of complexity and function query cost.
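One way to read "coordinate importance sampling" (an illustrative assumption, not necessarily the paper's scheme): estimate a single randomly chosen coordinate of the gradient by finite differences, draw that coordinate from a non-uniform distribution, and reweight so the estimator stays unbiased.

```python
import numpy as np

def zo_coordinate_grad(loss, x, probs, mu=1e-3, rng=np.random.default_rng(3)):
    """Zeroth-order coordinate estimate: sample coordinate i ~ probs, finite-difference it,
    and divide by probs[i] so the estimator stays unbiased in expectation."""
    i = rng.choice(len(x), p=probs)
    e = np.zeros_like(x)
    e[i] = mu
    g = np.zeros_like(x)
    g[i] = (loss(x + e) - loss(x - e)) / (2 * mu) / probs[i]
    return g
```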
arXiv Detail & Related papers (2020-12-21T17:29:58Z) - Self-Tuning Stochastic Optimization with Curvature-Aware Gradient
Filtering [53.523517926927894]
We explore the use of exact per-sample Hessian-vector products and gradients to construct self-tuning quadratics.
We prove that our model-based procedure converges in the noisy gradient setting.
This is an interesting step for constructing self-tuning quadratics.
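For intuition about the quadratic models involved: given an exact Hessian-vector product, the step size along the negative gradient that minimizes the local quadratic model has a closed form; the sketch below shows that generic step, not the paper's filtering procedure.

```python
import numpy as np

def quadratic_model_step(theta, grad, hvp):
    """hvp: callable returning the Hessian-vector product H @ v.
    Exact line search for the local quadratic model along -grad."""
    Hg = hvp(grad)
    alpha = grad @ grad / (grad @ Hg + 1e-12)   # optimal step for the quadratic model
    return theta - alpha * grad
```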
arXiv Detail & Related papers (2020-11-09T22:07:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.