Gradient Estimation with Constant Scaling for Hybrid Quantum Machine
Learning
- URL: http://arxiv.org/abs/2211.13981v1
- Date: Fri, 25 Nov 2022 09:45:35 GMT
- Title: Gradient Estimation with Constant Scaling for Hybrid Quantum Machine
Learning
- Authors: Thomas Hoffmann and Douglas Brown
- Abstract summary: We present a novel method for determining gradients of parameterised quantum circuits (PQCs) in machine learning models.
The gradients of PQC layers can be calculated with an overhead of two evaluations per circuit per forward-pass, independent of the number of circuit parameters.
We find that, as the number of qubits increases, our method converges significantly faster than the parameter shift rule and to a comparable accuracy.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a novel method for determining gradients of parameterised quantum
circuits (PQCs) in hybrid quantum-classical machine learning models by applying
the multivariate version of the simultaneous perturbation stochastic
approximation (SPSA) algorithm. The gradients of PQC layers can be calculated
with an overhead of two evaluations per circuit per forward-pass, independent of
the number of circuit parameters, compared to the linear scaling of the
parameter shift rule. These are then used in the backpropagation algorithm by
applying the chain rule. We compare our method to the parameter shift rule for
different circuit widths and batch sizes, and for a range of learning rates. We
find that, as the number of qubits increases, our method converges
significantly faster than the parameter shift rule and to a comparable
accuracy, even when considering the optimal learning rate for each method.
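To make the scaling contrast concrete, here is a minimal sketch, not the authors' implementation, of an SPSA-style estimator next to the parameter shift rule; the callable f stands in for a PQC expectation value, and the perturbation size c and all names are illustrative assumptions (Python with NumPy):

    import numpy as np

    def spsa_gradient(f, theta, c=0.01, rng=None):
        # Two evaluations of f estimate every gradient component at once.
        rng = np.random.default_rng() if rng is None else rng
        delta = rng.choice([-1.0, 1.0], size=theta.shape)  # Rademacher directions
        df = f(theta + c * delta) - f(theta - c * delta)   # two circuit runs
        return df / (2.0 * c) * delta                      # 1/delta_i equals delta_i

    def parameter_shift_gradient(f, theta):
        # Exact gradient for Pauli-generated gates: 2n evaluations for n parameters.
        grad = np.zeros_like(theta)
        for i in range(theta.size):
            e_i = np.zeros_like(theta)
            e_i[i] = np.pi / 2
            grad[i] = 0.5 * (f(theta + e_i) - f(theta - e_i))
        return grad

In a hybrid model, the SPSA estimate would be treated as the gradient of the PQC layer and propagated through the classical layers via the chain rule, as the abstract describes.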
Related papers
- Adaptive variational quantum dynamics simulations with compressed circuits and fewer measurements [4.2643127089535104]
We show an improved version of the adaptive variational quantum dynamics simulation (AVQDS) method, which we call AVQDS(T)
The algorithm adaptively adds layers of disjoint unitary gates to the ansatz circuit so as to keep the McLachlan distance, a measure of the accuracy of the variational dynamics, below a fixed threshold.
We also show a method based on eigenvalue truncation to solve the linear equations of motion for the variational parameters with enhanced noise resilience.
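As a rough sketch of the eigenvalue-truncation idea, assuming the variational equations of motion take the standard linear form M dθ/dt = V, with the cutoff as an assumed hyperparameter:

    import numpy as np

    def solve_truncated(M, V, cutoff=1e-6):
        # Solve M x = V for symmetric M, zeroing small eigenvalues so that
        # noise-dominated directions do not blow up the solution.
        w, U = np.linalg.eigh(M)
        inv_w = np.where(np.abs(w) > cutoff, 1.0 / w, 0.0)  # truncated pseudo-inverse
        return U @ (inv_w * (U.T @ V))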
arXiv Detail & Related papers (2024-08-13T02:56:43Z)
- Efficient Quantum Gradient and Higher-order Derivative Estimation via Generalized Hadamard Test [2.5545813981422882]
Gradient-based methods are crucial for understanding the behavior of parameterized quantum circuits (PQCs)
Existing gradient estimation methods, such as Finite Difference, Shift Rule, Hadamard Test, and Direct Hadamard Test, often yield suboptimal gradient circuits for certain PQCs.
We introduce the Flexible Hadamard Test, which, when applied to first-order gradient estimation methods, can invert the roles of ansatz generators and observables.
We also introduce Quantum Automatic Differentiation (QAD), a unified gradient method that adaptively selects the best gradient estimation technique for individual parameters within a PQC.
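For reference, the quantity a Hadamard test estimates can be checked exactly with statevectors; a toy sketch with illustrative names, not the paper's circuits (psi is assumed normalised):

    import numpy as np

    def hadamard_test(U, psi):
        # Ancilla prepared in |+>, controlled-U, ancilla measured in the X basis:
        # P(+) - P(-) equals Re<psi|U|psi>, estimated from samples on hardware.
        plus = (psi + U @ psi) / 2.0
        minus = (psi - U @ psi) / 2.0
        return np.linalg.norm(plus) ** 2 - np.linalg.norm(minus) ** 2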
arXiv Detail & Related papers (2024-08-10T02:08:54Z)
- Guided-SPSA: Simultaneous Perturbation Stochastic Approximation assisted by the Parameter Shift Rule [4.943277284710129]
We introduce a novel gradient estimation approach called Guided-SPSA, which meaningfully combines the parameter-shift rule and SPSA-based gradient approximation.
The Guided-SPSA results in a 15% to 25% reduction in the number of circuit evaluations required during training for a similar or better optimality of the solution found.
We demonstrate the performance of Guided-SPSA on different paradigms of quantum machine learning, such as regression, classification, and reinforcement learning.
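The precise combination schedule is the paper's contribution; purely to illustrate blending the two estimators, a hypothetical sketch could refine a random subset of an SPSA estimate with exact parameter shift evaluations (reusing spsa_gradient from the sketch above):

    import numpy as np

    def guided_gradient(f, theta, k, c=0.01, rng=None):
        # Cheap SPSA estimate everywhere, then k coordinates refined exactly.
        rng = np.random.default_rng() if rng is None else rng
        grad = spsa_gradient(f, theta, c=c, rng=rng)
        for i in rng.choice(theta.size, size=k, replace=False):
            e_i = np.zeros_like(theta)
            e_i[i] = np.pi / 2
            grad[i] = 0.5 * (f(theta + e_i) - f(theta - e_i))
        return grad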
arXiv Detail & Related papers (2024-04-24T09:13:39Z)
- Parsimonious Optimisation of Parameters in Variational Quantum Circuits [1.303764728768944]
We propose a novel Quantum-Gradient Sampling that requires the execution of at most two circuits per iteration to update the optimisable parameters.
Our proposed method achieves similar convergence rates to classical gradient descent, and empirically outperforms gradient coordinate descent and SPSA.
arXiv Detail & Related papers (2023-06-20T18:50:18Z)
- Gradient-descent quantum process tomography by learning Kraus operators [63.69764116066747]
We perform quantum process tomography (QPT) for both discrete- and continuous-variable quantum systems.
We use a constrained gradient-descent (GD) approach on the so-called Stiefel manifold during optimization to obtain the Kraus operators.
The GD-QPT matches the performance of both compressed-sensing (CS) and projected least-squares (PLS) QPT in benchmarks with two-qubit random processes.
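One standard way to keep stacked Kraus operators feasible during gradient descent is a QR-based retraction onto the Stiefel manifold; a generic sketch, not necessarily the paper's exact scheme:

    import numpy as np

    def retract_to_stiefel(S):
        # S stacks the Kraus operators vertically; S^dagger S = I keeps the
        # channel trace preserving. QR re-orthonormalises the columns.
        Q, R = np.linalg.qr(S)
        phases = np.diag(R) / np.abs(np.diag(R))
        return Q * phases  # fix the phase gauge so the factorisation is unique

    # One projected step: S = retract_to_stiefel(S - lr * grad)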
arXiv Detail & Related papers (2022-08-01T12:48:48Z)
- Information flow in parameterized quantum circuits [0.4893345190925177]
We introduce a new way to quantify information flow in quantum systems.
We propose a new distance metric using the mutual information between gate nodes.
We then present an optimization procedure for variational algorithms using paths based on the distance measure.
arXiv Detail & Related papers (2022-07-11T19:30:47Z)
- Automated differential equation solver based on the parametric approximation optimization [77.34726150561087]
The article presents a method that uses an optimization algorithm to obtain a solution as a parameterized approximation.
It allows solving a wide class of equations in an automated manner without changing the algorithm's parameters.
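As a toy instance of the idea, solving u' = u with u(0) = 1 by optimizing the coefficients of a polynomial ansatz; the ansatz, loss weights, and use of SciPy are assumptions for illustration:

    import numpy as np
    from scipy.optimize import minimize

    ts = np.linspace(0.0, 1.0, 50)

    def residual(coeffs):
        # Penalise the ODE residual plus the initial-condition mismatch.
        u = np.polyval(coeffs, ts)
        du = np.polyval(np.polyder(coeffs), ts)
        return np.mean((du - u) ** 2) + (np.polyval(coeffs, 0.0) - 1.0) ** 2

    result = minimize(residual, np.zeros(6))
    print(np.polyval(result.x, 1.0))  # should approach e = 2.71828...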
arXiv Detail & Related papers (2022-05-11T10:06:47Z)
- Twisted hybrid algorithms for combinatorial optimization [68.8204255655161]
The proposed hybrid algorithms encode a cost function into a problem Hamiltonian and optimize its energy by varying over a set of states with low circuit complexity.
We show that for levels $p=2,\ldots,6$, the level $p$ can be reduced by one while roughly maintaining the expected approximation ratio.
arXiv Detail & Related papers (2022-03-01T19:47:16Z)
- Adaptive pruning-based optimization of parameterized quantum circuits [62.997667081978825]
Variational hybrid quantum-classical algorithms are powerful tools to maximize the use of Noisy Intermediate-Scale Quantum devices.
We propose a strategy for such ansätze used in variational quantum algorithms, which we call "Parameter-Efficient Circuit Training" (PECT)
Instead of optimizing all of the ansatz parameters at once, PECT launches a sequence of variational algorithms.
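In spirit, the sequence of variational runs resembles block-coordinate descent over the ansatz parameters; a hypothetical sketch (the paper's selection of parameter subsets is more sophisticated, and spsa_gradient is reused from the first sketch):

    import numpy as np

    def blockwise_optimize(loss, theta, blocks, steps=100, lr=0.1):
        # Optimise one parameter block at a time with any gradient estimator.
        rng = np.random.default_rng()
        for block in blocks:
            for _ in range(steps):
                g = spsa_gradient(loss, theta, rng=rng)
                theta[block] -= lr * g[block]  # update only the active block
        return theta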
arXiv Detail & Related papers (2020-10-01T18:14:11Z)
- Accelerated Message Passing for Entropy-Regularized MAP Inference [89.15658822319928]
Maximum a posteriori (MAP) inference in discrete-valued random fields is a fundamental problem in machine learning.
Due to the difficulty of this problem, linear programming (LP) relaxations are commonly used to derive specialized message passing algorithms.
We present randomized methods for accelerating these algorithms by leveraging techniques that underlie classical accelerated gradient methods.
arXiv Detail & Related papers (2020-07-01T18:43:32Z)
- Scalable Gradients for Stochastic Differential Equations [40.70998833051251]
The adjoint sensitivity method scalably computes gradients of solutions to ordinary differential equations.
We generalize this method to stochastic differential equations, allowing time-efficient and constant-memory computation of gradients.
We use our method to fit stochastic dynamics defined by neural networks, achieving competitive performance on a 50-dimensional motion capture dataset.
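This line of work is implemented in the torchsde package; assuming that library, a minimal usage sketch with an illustrative Ornstein-Uhlenbeck model:

    import torch
    import torchsde

    class OrnsteinUhlenbeck(torch.nn.Module):
        noise_type = "diagonal"
        sde_type = "ito"

        def __init__(self):
            super().__init__()
            self.theta = torch.nn.Parameter(torch.tensor(0.5))

        def f(self, t, y):  # drift
            return -self.theta * y

        def g(self, t, y):  # diffusion
            return 0.3 * torch.ones_like(y)

    sde = OrnsteinUhlenbeck()
    y0 = torch.full((8, 1), 1.0)               # batch of 8, one state dimension
    ts = torch.linspace(0.0, 1.0, 20)
    ys = torchsde.sdeint_adjoint(sde, y0, ts)  # constant-memory adjoint gradients
    ys[-1].pow(2).mean().backward()            # gradient flows into sde.theta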
arXiv Detail & Related papers (2020-01-05T23:05:55Z)