Optimized numerical gradient and Hessian estimation for variational
quantum algorithms
- URL: http://arxiv.org/abs/2206.12643v3
- Date: Sun, 20 Nov 2022 02:31:06 GMT
- Title: Optimized numerical gradient and Hessian estimation for variational
quantum algorithms
- Authors: Y. S. Teo
- Abstract summary: We show that tunable numerical estimators offer estimation errors that drop exponentially with the number of circuit qubits.
We demonstrate that the scaled parameter-shift estimators beat the standard unscaled ones in estimation accuracy in every situation.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Sampling noisy intermediate-scale quantum devices is a fundamental step that
converts coherent quantum-circuit outputs to measurement data for running
variational quantum algorithms that utilize gradient and Hessian methods in
cost-function optimization tasks. This step, however, introduces estimation
errors in the resulting gradient or Hessian computations. To minimize these
errors, we discuss tunable numerical estimators, which are the
finite-difference (including their generalized versions) and scaled
parameter-shift estimators [introduced in Phys. Rev. A 103, 012405 (2021)], and
propose operational circuit-averaged methods to optimize them. We show that
these optimized numerical estimators offer estimation errors that drop
exponentially with the number of circuit qubits for a given sampling-copy
number, revealing a direct compatibility with the barren-plateau phenomenon. In
particular, there exists a critical sampling-copy number below which an
optimized difference estimator gives a smaller average estimation error in
contrast to the standard (analytical) parameter-shift estimator, which exactly
computes gradient and Hessian components. Moreover, this critical number grows
exponentially with the circuit-qubit number. Finally, by forsaking analyticity,
we demonstrate that the scaled parameter-shift estimators beat the standard
unscaled ones in estimation accuracy in every situation, with comparable
performances to those of the difference estimators within significant
copy-number ranges, and are the best ones if larger copy numbers are
affordable.
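As a minimal illustration of the trade-off the abstract describes (a sketch, not the paper's own method): for a toy one-qubit circuit $e^{-i\theta X/2}|0\rangle$ measured in $Z$, the cost function is $\cos\theta$, and both a central finite-difference estimator and the parameter-shift estimator can be computed from shot-noise-limited expectation values. The circuit, shot model, and all parameter values below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sampled_expectation(theta, shots):
    """Shot-noise estimate of <Z> = cos(theta) for the circuit e^{-i theta X/2}|0>."""
    p_plus = (1 + np.cos(theta)) / 2  # probability of measuring +1
    outcomes = rng.choice([1.0, -1.0], size=shots, p=[p_plus, 1 - p_plus])
    return outcomes.mean()

def finite_difference(theta, h, shots):
    # central difference: biased for finite h, but the 1/(2h) prefactor
    # can amplify shot noise less than the parameter-shift rule at small h... or more;
    # the optimal h depends on the available copy number
    return (sampled_expectation(theta + h, shots)
            - sampled_expectation(theta - h, shots)) / (2 * h)

def parameter_shift(theta, shots):
    # standard (analytical) parameter-shift rule: exact gradient in the
    # noiseless limit for this gate, shift s = pi/2
    return (sampled_expectation(theta + np.pi / 2, shots)
            - sampled_expectation(theta - np.pi / 2, shots)) / 2

theta, shots, trials = 0.7, 200, 2000
exact = -np.sin(theta)  # d/dtheta cos(theta)
fd = np.array([finite_difference(theta, 0.5, shots) for _ in range(trials)])
ps = np.array([parameter_shift(theta, shots) for _ in range(trials)])
print("exact gradient:", exact)
print("finite-difference MSE:", np.mean((fd - exact) ** 2))
print("parameter-shift  MSE:", np.mean((ps - exact) ** 2))
```

Comparing the two mean-squared errors while varying `shots` reproduces the qualitative effect described above: below some critical copy number, the (biased) difference estimator can have smaller average error than the exact parameter-shift estimator.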
Related papers
- Multivariate root-n-consistent smoothing parameter free matching estimators and estimators of inverse density weighted expectations [51.000851088730684]
We develop novel modifications of nearest-neighbor and matching estimators which converge at the parametric $\sqrt{n}$-rate.
We stress that our estimators do not involve nonparametric function estimators and, in particular, do not rely on sample-size-dependent smoothing parameters.
arXiv Detail & Related papers (2024-07-11T13:28:34Z) - Optimal Low-Depth Quantum Signal-Processing Phase Estimation [0.029541734875307393]
We introduce Quantum Signal-Processing Phase Estimation algorithms that are robust against challenges and achieve optimal performance.
Our approach achieves an unprecedented standard deviation accuracy of $10^{-4}$ radians for estimating unwanted swap angles in superconducting two-qubit experiments.
Our results are rigorously validated against the quantum Fisher information, confirming our protocol's ability to achieve unmatched precision for two-qubit gate learning.
arXiv Detail & Related papers (2024-06-17T10:33:52Z) - Robustness of optimized numerical estimation schemes for noisy
variational quantum algorithms [0.0]
We explore the extent to which numerical schemes remain statistically more accurate for a given number of sampling copies in the presence of noise.
For noise-channel error terms that are independent of the circuit parameters, we show that these optimized SPS estimators can significantly reduce mean-squared-error biases, even without any knowledge about the noise channel.
arXiv Detail & Related papers (2023-10-07T08:43:26Z) - Parsimonious Optimisation of Parameters in Variational Quantum Circuits [1.303764728768944]
We propose a novel Quantum-Gradient Sampling that requires the execution of at most two circuits per iteration to update the optimisable parameters.
Our proposed method achieves similar convergence rates to classical gradient descent, and empirically outperforms gradient coordinate descent, and SPSA.
arXiv Detail & Related papers (2023-06-20T18:50:18Z) - Fast gradient estimation for variational quantum algorithms [0.6445605125467572]
We propose a new gradient estimation method to mitigate the measurement challenge.
Within a Bayesian framework, we use prior information about the circuit to find an estimation strategy.
We demonstrate that this approach can significantly outperform traditional gradient estimation methods.
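One way to picture a Bayesian gradient estimate (a hypothetical sketch under conjugate-Gaussian assumptions, not the algorithm of the paper above): prior information about a gradient component shrinks the noisy sample mean toward the prior, trading a small bias for reduced variance. All numbers below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

true_grad = 0.4                       # unknown quantity being estimated
prior_mean, prior_var = 0.0, 0.25     # assumed Gaussian prior on the gradient
shot_var, shots = 1.0, 50             # per-shot measurement variance, copy number

# noisy single-shot measurements of the gradient component
samples = true_grad + rng.normal(0.0, np.sqrt(shot_var), size=shots)

# conjugate Gaussian update: posterior mean shrinks the sample mean
# toward the prior mean by a weight set by the two variances
like_var = shot_var / shots           # variance of the sample mean
weight = prior_var / (prior_var + like_var)
posterior_mean = prior_mean + weight * (samples.mean() - prior_mean)
posterior_var = prior_var * like_var / (prior_var + like_var)
print(posterior_mean, posterior_var)
```

The posterior variance is strictly smaller than both the prior variance and the sample-mean variance, which is the sense in which prior circuit information can reduce the measurement cost of gradient estimation.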
arXiv Detail & Related papers (2022-10-12T18:00:00Z) - Dual-Frequency Quantum Phase Estimation Mitigates the Spectral Leakage
of Quantum Algorithms [76.15799379604898]
Quantum phase estimation suffers from spectral leakage when the reciprocal of the record length is not an integer multiple of the unknown phase.
We propose a dual-frequency estimator, which approaches the Cramer-Rao bound, when multiple samples are available.
arXiv Detail & Related papers (2022-01-23T17:20:34Z) - Differentiable Annealed Importance Sampling and the Perils of Gradient
Noise [68.44523807580438]
Annealed importance sampling (AIS) and related algorithms are highly effective tools for marginal likelihood estimation.
Differentiability is a desirable property as it would admit the possibility of optimizing marginal likelihood as an objective.
We propose a differentiable algorithm by abandoning Metropolis-Hastings steps, which further unlocks mini-batch computation.
arXiv Detail & Related papers (2021-07-21T17:10:14Z) - Rao-Blackwellizing the Straight-Through Gumbel-Softmax Gradient
Estimator [93.05919133288161]
We show that the variance of the straight-through variant of the popular Gumbel-Softmax estimator can be reduced through Rao-Blackwellization.
This provably reduces the mean squared error.
We empirically demonstrate that this leads to variance reduction, faster convergence, and generally improved performance in two unsupervised latent variable models.
arXiv Detail & Related papers (2020-10-09T22:54:38Z) - Estimating the gradient and higher-order derivatives on quantum hardware [1.2891210250935146]
We show how arbitrary-order derivatives can be analytically evaluated in terms of simple parameter-shift rules.
We also consider the impact of statistical noise by studying the mean squared error of different derivative estimators.
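The first-order shift rule, applied twice, gives higher derivatives as well. A minimal noiseless sketch for the toy cost $f(\theta)=\cos\theta$ (the generalized shift rule with the $1/(2\sin s)$ prefactor is the scaled estimator discussed in the main abstract; the cost function here is an illustrative assumption):

```python
import numpy as np

def f(theta):
    # toy cost: <Z> = cos(theta) for the circuit e^{-i theta X/2}|0>
    return np.cos(theta)

def ps_gradient(f, theta, s=np.pi / 2):
    # generalized parameter-shift rule; exact for single-qubit
    # rotation-generated costs at any shift s with sin(s) != 0
    return (f(theta + s) - f(theta - s)) / (2 * np.sin(s))

def ps_second_derivative(f, theta):
    # iterating the s = pi/2 rule twice collapses four shifts to three:
    # f''(theta) = [f(theta + pi) - 2 f(theta) + f(theta - pi)] / 4
    return (f(theta + np.pi) - 2 * f(theta) + f(theta - np.pi)) / 4

theta = 0.3
print(ps_gradient(f, theta), -np.sin(theta))           # both equal -sin(theta)
print(ps_second_derivative(f, theta), -np.cos(theta))  # both equal -cos(theta)
```

With sampling noise, each shifted evaluation becomes an estimate, and the mean squared error of the resulting derivative estimators depends on the shift, which is the trade-off these papers analyze.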
arXiv Detail & Related papers (2020-08-14T18:00:10Z) - Instability, Computational Efficiency and Statistical Accuracy [101.32305022521024]
We develop a framework that characterizes statistical accuracy through the interplay between the deterministic convergence rate of the algorithm at the population level and its degree of (in)stability when applied to an empirical object based on $n$ samples.
We provide applications of our general results to several concrete classes of models, including Gaussian mixture estimation, non-linear regression models, and informative non-response models.
arXiv Detail & Related papers (2020-05-22T22:30:52Z) - SUMO: Unbiased Estimation of Log Marginal Probability for Latent
Variable Models [80.22609163316459]
We introduce an unbiased estimator of the log marginal likelihood and its gradients for latent variable models based on randomized truncation of infinite series.
We show that models trained using our estimator give better test-set likelihoods than a standard importance-sampling based approach for the same average computational cost.
arXiv Detail & Related papers (2020-04-01T11:49:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.