An Empirical Comparison of Optimizers for Quantum Machine Learning with
SPSA-based Gradients
- URL: http://arxiv.org/abs/2305.00224v1
- Date: Thu, 27 Apr 2023 15:19:49 GMT
- Authors: Marco Wiedmann and Marc Hölle and Maniraman Periyasamy and Nico
Meyer and Christian Ufrecht and Daniel D. Scherer and Axel Plinge and
Christopher Mutschler
- Abstract summary: We introduce a novel approach that uses approximated gradient from SPSA in combination with state-of-the-art classical gradients.
We demonstrate numerically that this outperforms both standard SPSA and the parameter-shift rule in terms of convergence rate and absolute error in simple regression tasks.
- Score: 1.2532932830320982
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Variational quantum algorithms (VQAs) have attracted a lot of attention
from the quantum computing community over the last few years. Their hybrid
quantum-classical nature with relatively
shallow quantum circuits makes them a promising platform for demonstrating the
capabilities of NISQ devices. Although the classical machine learning community
focuses on gradient-based parameter optimization, finding near-exact gradients
for variational quantum circuits (VQCs) with the parameter-shift rule introduces a large sampling overhead.
Therefore, gradient-free optimizers have gained popularity in quantum machine
learning circles. Among the most promising candidates is the SPSA algorithm,
due to its low computational cost and inherent noise resilience. We introduce a
novel approach that uses the approximated gradient from SPSA in combination
with state-of-the-art gradient-based classical optimizers. We demonstrate
numerically that this outperforms both standard SPSA and the parameter-shift
rule in terms of convergence rate and absolute error in simple regression
tasks. The improvement of our novel approach over SPSA with stochastic gradient
descent is even amplified when shot- and hardware-noise are taken into account.
We also demonstrate that error mitigation does not significantly affect our
results.
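The core idea of the abstract, feeding an SPSA gradient estimate into a modern gradient-based classical optimizer, can be sketched as follows. This is a minimal illustration only: the toy quadratic loss, the Adam hyperparameters, and the iteration count are illustrative assumptions, not the authors' experimental setup.

```python
import numpy as np

def spsa_gradient(loss, theta, c=0.1, rng=None):
    """Two-evaluation SPSA gradient estimate with a Rademacher perturbation."""
    rng = np.random.default_rng() if rng is None else rng
    delta = rng.choice([-1.0, 1.0], size=theta.shape)
    diff = loss(theta + c * delta) - loss(theta - c * delta)
    # For +/-1 perturbations, dividing by delta equals multiplying by it.
    return diff / (2.0 * c) * delta

def adam_step(theta, grad, state, lr=0.05, b1=0.9, b2=0.999, eps=1e-8):
    """One step of Adam using whatever gradient estimate it is handed."""
    m, v, t = state
    t += 1
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    theta = theta - lr * (m / (1 - b1 ** t)) / (np.sqrt(v / (1 - b2 ** t)) + eps)
    return theta, (m, v, t)

# Hypothetical stand-in for a VQC loss landscape; minimum at theta = 1.
loss = lambda th: float(np.sum((th - 1.0) ** 2))

theta = np.zeros(3)
state = (np.zeros(3), np.zeros(3), 0)
rng = np.random.default_rng(0)
for _ in range(500):
    theta, state = adam_step(theta, spsa_gradient(loss, theta, rng=rng), state)
```

Each iteration costs only two loss evaluations regardless of the number of parameters, which is the sampling advantage of SPSA over per-parameter shift rules.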
Related papers
- Illustration of Barren Plateaus in Quantum Computing [2.6652671351756125]
Variational Quantum Circuits (VQCs) have emerged as a promising paradigm for quantum machine learning in the NISQ era.
This paper investigates how parameter sharing fundamentally alters the optimization landscape through deceptive gradients.
arXiv Detail & Related papers (2026-02-18T15:56:54Z) - Continual Quantum Architecture Search with Tensor-Train Encoding: Theory and Applications to Signal Processing [68.35481158940401]
CL-QAS is a continual quantum architecture search framework.
It mitigates the challenges of costly amplitude encoding and forgetting in variational quantum circuits.
It achieves controllable robustness and expressivity, sample-efficient generalization, and smooth convergence without barren plateaus.
arXiv Detail & Related papers (2026-01-10T02:36:03Z) - Escaping Barren Plateaus in Variational Quantum Algorithms Using Negative Learning Rate in Quantum Internet of Things [8.98664000532717]
Variational Quantum Algorithms (VQAs) are becoming the primary computational primitive for next-generation quantum computers.
Under device-constrained execution conditions, the scalability of learning is severely limited by barren plateaus.
We present a novel approach for escaping barren plateaus by incorporating negative learning rates into the optimization process.
arXiv Detail & Related papers (2025-11-28T03:32:33Z) - Enhancing Hybrid Methods in Parameterized Quantum Circuit Optimization [0.0]
Parameterized quantum circuits (PQCs) play an essential role in the application of variational quantum algorithms (VQAs) on noisy quantum devices.
We introduce two new hybrid algorithms that are more robust and scalable.
We find that they are feasible for NISQ devices with different noise profiles.
arXiv Detail & Related papers (2025-10-09T12:24:57Z) - Looking elsewhere: improving variational Monte Carlo gradients by importance sampling [41.94295877935867]
Neural-network quantum states (NQS) offer a powerful and expressive ansatz for representing quantum many-body wave functions.
It is well known that some scenarios, such as sharply peaked wave functions emerging in quantum chemistry, lead to high-variance gradient estimators that hinder the effectiveness of variational optimization.
In this work we investigate a systematic strategy to tackle these sampling issues by means of adaptively tuned importance sampling.
Our approach can reduce the computational cost of vanilla VMC considerably, up to a factor of 100 when targeting highly peaked quantum chemistry wavefunctions.
arXiv Detail & Related papers (2025-07-07T18:00:03Z) - Preconditioning Natural and Second Order Gradient Descent in Quantum Optimization: A Performance Benchmark [0.0]
We introduce a novel approach to stabilizing BFGS updates against gradient noise.
To address noise sensitivity, we show that incorporating a penalization in the BFGS update improves outcomes.
arXiv Detail & Related papers (2025-04-23T08:44:18Z) - Decentralized Nonconvex Composite Federated Learning with Gradient Tracking and Momentum [78.27945336558987]
Decentralized federated learning (DFL) eliminates reliance on the client-server architecture.
Non-smooth regularization is often incorporated into machine learning tasks.
We propose a novel DNCFL algorithm to solve these problems.
arXiv Detail & Related papers (2025-04-17T08:32:25Z) - Efficient Quantum Gradient and Higher-order Derivative Estimation via Generalized Hadamard Test [2.5545813981422882]
Gradient-based methods are crucial for understanding the behavior of parameterized quantum circuits (PQCs).
Existing gradient estimation methods, such as Finite Difference, Shift Rule, Hadamard Test, and Direct Hadamard Test, often yield suboptimal gradient circuits for certain PQCs.
We introduce the Flexible Hadamard Test, which, when applied to first-order gradient estimation methods, can invert the roles of ansatz generators and observables.
We also introduce Quantum Automatic Differentiation (QAD), a unified gradient method that adaptively selects the best gradient estimation technique for individual parameters within a PQC.
arXiv Detail & Related papers (2024-08-10T02:08:54Z) - Guided-SPSA: Simultaneous Perturbation Stochastic Approximation assisted by the Parameter Shift Rule [4.943277284710129]
We introduce a novel gradient estimation approach called Guided-SPSA, which meaningfully combines the parameter-shift rule and SPSA-based gradient approximation.
Guided-SPSA yields a 15% to 25% reduction in the number of circuit evaluations required during training for a similar or better optimality of the solution found.
We demonstrate the performance of Guided-SPSA on different paradigms of quantum machine learning, such as regression, classification, and reinforcement learning.
arXiv Detail & Related papers (2024-04-24T09:13:39Z) - Domain Generalization Guided by Gradient Signal to Noise Ratio of
Parameters [69.24377241408851]
Overfitting to the source domain is a common issue in gradient-based training of deep neural networks.
We propose to base the selection on gradient-signal-to-noise ratio (GSNR) of network's parameters.
arXiv Detail & Related papers (2023-10-11T10:21:34Z) - Variational Quantum Approximate Spectral Clustering for Binary
Clustering Problems [0.7550566004119158]
We introduce the Variational Quantum Approximate Spectral Clustering (VQASC) algorithm.
VQASC requires optimization of fewer parameters than the system size, N, traditionally required in classical problems.
We present numerical results from both synthetic and real-world datasets.
arXiv Detail & Related papers (2023-09-08T17:54:42Z) - Scaling Forward Gradient With Local Losses [117.22685584919756]
Forward learning is a biologically plausible alternative to backprop for learning deep neural networks.
We show that it is possible to substantially reduce the variance of the forward gradient by applying perturbations to activations rather than weights.
Our approach matches backprop on MNIST and CIFAR-10 and significantly outperforms previously proposed backprop-free algorithms on ImageNet.
arXiv Detail & Related papers (2022-10-07T03:52:27Z) - LAWS: Look Around and Warm-Start Natural Gradient Descent for Quantum
Neural Networks [11.844238544360149]
Variational quantum algorithms (VQAs) have recently received significant attention due to their promising performance on Noisy Intermediate-Scale Quantum (NISQ) computers.
VQAs run on parameterized quantum circuits (PQCs) with randomly initialized variational parameters and are characterized by barren plateaus (BP), where the gradient vanishes exponentially in the number of qubits.
In this paper, we first revisit quantum natural gradient (QNG), one of the most popular algorithms used in VQAs, from the classical first-order optimization point of view.
Then, we propose a Look Around and Warm-Start (LAWS) natural gradient descent scheme.
arXiv Detail & Related papers (2022-05-05T14:16:40Z) - Faster One-Sample Stochastic Conditional Gradient Method for Composite
Convex Minimization [61.26619639722804]
We propose a conditional gradient method (CGM) for minimizing convex finite-sum objectives formed as a sum of smooth and non-smooth terms.
The proposed method, equipped with a stochastic average gradient (SAG) estimator, requires only one sample per iteration. Nevertheless, it guarantees fast convergence rates on par with more sophisticated variance reduction techniques.
arXiv Detail & Related papers (2022-02-26T19:10:48Z) - Communication-Efficient Federated Learning via Quantized Compressed
Sensing [82.10695943017907]
The presented framework consists of gradient compression for wireless devices and gradient reconstruction for a parameter server.
Thanks to gradient sparsification and quantization, our strategy can achieve a higher compression ratio than one-bit gradient compression.
We demonstrate that the framework achieves almost identical performance with the case that performs no compression.
arXiv Detail & Related papers (2021-11-30T02:13:54Z) - A Comparison of Various Classical Optimizers for a Variational Quantum
Linear Solver [0.0]
Variational Hybrid Quantum Classical Algorithms (VHQCAs) are a class of quantum algorithms intended to run on noisy quantum devices.
These algorithms employ a parameterized quantum circuit (ansatz) and a quantum-classical feedback loop.
A classical device is used to optimize the parameters in order to minimize a cost function that can be computed far more efficiently on a quantum device.
arXiv Detail & Related papers (2021-06-16T10:40:00Z) - Zeroth-Order Hybrid Gradient Descent: Towards A Principled Black-Box
Optimization Framework [100.36569795440889]
This work studies zeroth-order (ZO) optimization, which does not require first-order information.
We show that, with a graceful design in coordinate importance sampling, the proposed ZO optimization method is efficient both in terms of complexity and function query cost.
arXiv Detail & Related papers (2020-12-21T17:29:58Z) - Self-Tuning Stochastic Optimization with Curvature-Aware Gradient
Filtering [53.523517926927894]
We explore the use of exact per-sample Hessian-vector products and gradients to construct self-tuning quadratics.
We prove that our model-based procedure converges in the noisy gradient setting.
This is an interesting step for constructing self-tuning quadratics.
arXiv Detail & Related papers (2020-11-09T22:07:30Z)
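Several entries above, and the abstract itself, rely on the parameter-shift rule for exact gradients of parameterized circuits. As a minimal reference, the sketch below checks the rule on a single-qubit RY rotation simulated with plain linear algebra; the circuit and observable are generic textbook choices, not taken from any of the papers listed.

```python
import numpy as np

def ry(theta):
    """Single-qubit RY rotation gate as a 2x2 real matrix."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

Z = np.diag([1.0, -1.0])  # Pauli-Z observable

def expectation(theta):
    """<psi(theta)| Z |psi(theta)> with |psi(theta)> = RY(theta)|0>; equals cos(theta)."""
    psi = ry(theta) @ np.array([1.0, 0.0])
    return float(psi @ Z @ psi)

def parameter_shift(theta):
    """Exact gradient from two circuit evaluations shifted by +/- pi/2."""
    return 0.5 * (expectation(theta + np.pi / 2) - expectation(theta - np.pi / 2))

theta = 0.7
grad = parameter_shift(theta)  # analytically, d/dtheta cos(theta) = -sin(theta)
```

Because each parameter needs two extra circuit evaluations, the cost of the rule grows linearly with the parameter count, which is the sampling overhead that motivates SPSA-style estimators in the abstract above.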
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.