Quantum Shadow Gradient Descent for Variational Quantum Algorithms
- URL: http://arxiv.org/abs/2310.06935v2
- Date: Thu, 22 Aug 2024 18:22:30 GMT
- Title: Quantum Shadow Gradient Descent for Variational Quantum Algorithms
- Authors: Mohsen Heidari, Mobasshir A Naved, Zahra Honjani, Wenbo Xie, Arjun Jacob Grama, Wojciech Szpankowski
- Abstract summary: Gradient-based optimizers have been proposed for training variational quantum circuits in quantum neural networks (QNNs).
The task of gradient estimation has proven to be challenging due to distinctive quantum features such as state collapse and measurement incompatibility.
We develop a novel procedure called quantum shadow gradient descent (QSGD) that uses a single sample per iteration to estimate all components of the gradient.
- Score: 14.286227676294034
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Gradient-based optimizers have been proposed for training variational quantum circuits in settings such as quantum neural networks (QNNs). The task of gradient estimation, however, has proven to be challenging, primarily due to distinctive quantum features such as state collapse and measurement incompatibility. Conventional techniques, such as the parameter-shift rule, necessitate several fresh samples in each iteration to estimate the gradient due to the stochastic nature of state measurement. Owing to state collapse from measurement, the inability to reuse samples in subsequent iterations motivates a crucial inquiry into whether fundamentally more efficient approaches to sample utilization exist. In this paper, we affirm the feasibility of such efficiency enhancements through a novel procedure called quantum shadow gradient descent (QSGD), which uses a single sample per iteration to estimate all components of the gradient. Our approach is based on an adaptation of shadow tomography that significantly enhances sample efficiency. Through detailed theoretical analysis, we show that QSGD has a significantly faster convergence rate than existing methods under locality conditions. We present detailed numerical experiments supporting all of our theoretical claims.
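The abstract's baseline is the parameter-shift rule, which spends two fresh, finite-shot circuit evaluations on every gradient component in every iteration. Below is a minimal single-qubit numpy sketch of that baseline, not of QSGD itself; the RX(theta) circuit, the Z observable, the shot count, and the helper names are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of the parameter-shift baseline described in the abstract
# (NOT the paper's QSGD): each gradient component costs two fresh, finite-shot
# circuit evaluations. Circuit, observable, and shot count are assumptions.

def rx(theta):
    """Single-qubit rotation about the X axis."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -1j * s], [-1j * s, c]])

def expval_z(theta, shots, rng):
    """Finite-shot estimate of <Z> on RX(theta)|0> (each shot collapses the state)."""
    state = rx(theta) @ np.array([1.0, 0.0], dtype=complex)
    p0 = abs(state[0]) ** 2                       # probability of outcome +1
    outcomes = rng.choice([1.0, -1.0], size=shots, p=[p0, 1.0 - p0])
    return outcomes.mean()

def parameter_shift_grad(theta, shots, rng):
    """d<Z>/dtheta = (f(theta + pi/2) - f(theta - pi/2)) / 2."""
    return 0.5 * (expval_z(theta + np.pi / 2, shots, rng)
                  - expval_z(theta - np.pi / 2, shots, rng))

rng = np.random.default_rng(0)
theta = 0.3
print("shift-rule estimate:", parameter_shift_grad(theta, shots=1000, rng=rng))
print("exact gradient     :", -np.sin(theta))    # <Z> = cos(theta), so d/dtheta = -sin(theta)
```

QSGD's claim is that this per-component, per-iteration sampling cost can be avoided: a single measured sample per iteration, processed through a shadow-tomography-style estimator, suffices to estimate all gradient components at once.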
Related papers
- Efficient Quantum Gradient and Higher-order Derivative Estimation via Generalized Hadamard Test [2.5545813981422882]
Gradient-based methods are crucial for understanding the behavior of parameterized quantum circuits (PQCs)
Existing gradient estimation methods, such as Finite Difference, Shift Rule, Hadamard Test, and Direct Hadamard Test, often yield suboptimal gradient circuits for certain PQCs.
We introduce the Flexible Hadamard Test, which, when applied to first-order gradient estimation methods, can invert the roles of ansatz generators and observables.
We also introduce Quantum Automatic Differentiation (QAD), a unified gradient method that adaptively selects the best gradient estimation technique for individual parameters within a PQC.
arXiv Detail & Related papers (2024-08-10T02:08:54Z) - Quantum Natural Stochastic Pairwise Coordinate Descent [6.187270874122921]
Quantum machine learning through variational quantum algorithms (VQAs) has gained substantial attention in recent years.
This paper introduces the quantum natural stochastic pairwise coordinate descent (2QNSCD) optimization method; a generic two-coordinate descent sketch is given after this list.
We develop a highly sparse unbiased estimator of the novel metric tensor using a quantum circuit with gate complexity $\Theta(1)$ times that of the parameterized quantum circuit and single-shot quantum measurements.
arXiv Detail & Related papers (2024-07-18T18:57:29Z) - Regularization of Riemannian optimization: Application to process tomography and quantum machine learning [0.0]
We investigate the influence of various regularization terms added to the cost function of gradient descent algorithms.
Motivated by Lasso regularization, we apply penalties for large ranks of the quantum channel.
We apply the method to quantum process tomography and a quantum machine learning problem.
arXiv Detail & Related papers (2024-04-30T15:56:16Z) - CSQ: Growing Mixed-Precision Quantization Scheme with Bi-level Continuous Sparsification [51.81850995661478]
Mixed-precision quantization has been widely applied to deep neural networks (DNNs).
Previous attempts on bit-level regularization and pruning-based dynamic precision adjustment during training suffer from noisy gradients and unstable convergence.
We propose Continuous Sparsification Quantization (CSQ), a bit-level training method to search for mixed-precision quantization schemes with improved stability.
arXiv Detail & Related papers (2022-12-06T05:44:21Z) - NIPQ: Noise proxy-based Integrated Pseudo-Quantization [9.207644534257543]
The straight-through estimator (STE) incurs unstable convergence during quantization-aware training (QAT).
We propose a novel noise proxy-based integrated pseudo-quantization (NIPQ) that enables unified support of pseudo-quantization for both activations and weights.
NIPQ outperforms existing quantization algorithms in various vision and language applications by a large margin.
arXiv Detail & Related papers (2022-06-02T01:17:40Z) - Improved Quantum Algorithms for Fidelity Estimation [77.34726150561087]
We develop new and efficient quantum algorithms for fidelity estimation with provable performance guarantees.
Our algorithms use advanced quantum linear algebra techniques, such as the quantum singular value transformation.
We prove that fidelity estimation to any non-trivial constant additive accuracy is hard in general.
arXiv Detail & Related papers (2022-03-30T02:02:16Z) - Quantum algorithms for quantum dynamics: A performance study on the spin-boson model [68.8204255655161]
Quantum algorithms for quantum dynamics simulations are traditionally based on implementing a Trotter-approximation of the time-evolution operator.
Variational quantum algorithms have become an indispensable alternative, enabling small-scale simulations on present-day hardware.
We show that, despite providing a clear reduction of quantum gate cost, the variational method in its current implementation is unlikely to lead to a quantum advantage.
arXiv Detail & Related papers (2021-08-09T18:00:05Z) - Benchmarking adaptive variational quantum eigensolvers [63.277656713454284]
We benchmark the accuracy of VQE and ADAPT-VQE in calculating electronic ground states and potential energy curves.
We find both methods provide good estimates of the energy and ground state.
Gradient-based optimization is more economical and delivers superior performance than analogous simulations carried out with gradient-free optimizers.
arXiv Detail & Related papers (2020-11-02T19:52:04Z) - A Statistical Framework for Low-bitwidth Training of Deep Neural Networks [70.77754244060384]
Fully quantized training (FQT) uses low-bitwidth hardware by quantizing the activations, weights, and gradients of a neural network model.
One major challenge with FQT is the lack of theoretical understanding, in particular of how gradient quantization impacts convergence properties.
arXiv Detail & Related papers (2020-10-27T13:57:33Z) - Neural network quantum state tomography in a two-qubit experiment [52.77024349608834]
Machine learning inspired variational methods provide a promising route towards scalable state characterization for quantum simulators.
We benchmark and compare several such approaches by applying them to measured data from an experiment producing two-qubit entangled states.
We find that in the presence of experimental imperfections and noise, confining the variational manifold to physical states greatly improves the quality of the reconstructed states.
arXiv Detail & Related papers (2020-07-31T17:25:12Z) - Measuring Analytic Gradients of General Quantum Evolution with the Stochastic Parameter Shift Rule [0.0]
We study the problem of estimating the gradient of the function to be optimized directly from quantum measurements.
We derive a mathematically exact formula that provides an algorithm for estimating the gradient of any multi-qubit parametric quantum evolution.
Our algorithm continues to work, although with some approximations, even when all the available quantum gates are noisy.
arXiv Detail & Related papers (2020-05-20T18:24:11Z)
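As a companion to the Quantum Natural Stochastic Pairwise Coordinate Descent entry above, the following toy sketch shows a plain two-coordinate stochastic descent loop on a classical stand-in cost. It is not that paper's 2QNSCD method (the quantum natural metric tensor and its single-shot, sparse estimator are omitted); the cost function, learning rate, and parameter count are assumptions made for illustration.

```python
import numpy as np

# Toy sketch of pairwise (two-coordinate) stochastic descent, loosely in the spirit
# of the 2QNSCD entry above. NOT that paper's method: the quantum natural metric
# tensor and its estimator are omitted; cost, learning rate, and sizes are assumed.

rng = np.random.default_rng(1)

def cost(theta):
    # Classical stand-in for a variational cost; a real VQA would measure a circuit.
    return float(np.sum(np.cos(theta)))

def shift_partial(theta, j):
    # Parameter-shift-style partial derivative along coordinate j
    # (exact here because each coordinate enters the cost through a cosine).
    e = np.zeros_like(theta)
    e[j] = np.pi / 2
    return 0.5 * (cost(theta + e) - cost(theta - e))

theta = rng.uniform(0.5, 2 * np.pi - 0.5, size=6)    # avoid the unstable point at 0
lr = 0.1
for _ in range(300):
    i, j = rng.choice(theta.size, size=2, replace=False)   # random coordinate pair
    theta[i] -= lr * shift_partial(theta, i)
    theta[j] -= lr * shift_partial(theta, j)

print("final cost:", cost(theta))   # approaches the minimum, sum(cos) = -6
```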