Nearly Optimal Quantum Algorithm for Estimating Multiple Expectation
Values
- URL: http://arxiv.org/abs/2111.09283v3
- Date: Tue, 11 Oct 2022 21:47:17 GMT
- Title: Nearly Optimal Quantum Algorithm for Estimating Multiple Expectation
Values
- Authors: William J. Huggins, Kianna Wan, Jarrod McClean, Thomas E. O'Brien,
Nathan Wiebe, Ryan Babbush
- Abstract summary: We describe an approach that leverages Gilyén et al.'s quantum gradient estimation algorithm to achieve $\mathcal{O}(\sqrt{M}/\varepsilon)$ scaling up to logarithmic factors.
We prove that this scaling is worst-case optimal in the high-precision regime if the state preparation is treated as a black box.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many quantum algorithms involve the evaluation of expectation values. Optimal
strategies for estimating a single expectation value are known, requiring a
number of state preparations that scales with the target error $\varepsilon$ as
$\mathcal{O}(1/\varepsilon)$. In this paper, we address the task of estimating
the expectation values of $M$ different observables, each to within additive
error $\varepsilon$, with the same $1/\varepsilon$ dependence. We describe an
approach that leverages Gilyén et al.'s quantum gradient estimation algorithm
to achieve $\mathcal{O}(\sqrt{M}/\varepsilon)$ scaling up to logarithmic
factors, regardless of the commutation properties of the $M$ observables. We
prove that this scaling is worst-case optimal in the high-precision regime if
the state preparation is treated as a black box, even when the operators are
mutually commuting. We highlight the flexibility of our approach by presenting
several generalizations, including a strategy for accelerating the estimation
of a collection of dynamic correlation functions.
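At a schematic level, the connection to gradient estimation can be made explicit: the $M$ expectation values appear as the components of the gradient of a single scalar function of $M$ parameters. The display below is an illustrative version of this reduction, consistent with the abstract but not the paper's exact construction.

```latex
% Illustrative reduction: encode all M expectation values in one gradient.
f(\vec{\theta}) \;=\; \langle \psi |\, e^{\, i \sum_{j=1}^{M} \theta_j O_j} \,| \psi \rangle ,
\qquad
\left. \frac{\partial f}{\partial \theta_j} \right|_{\vec{\theta} = 0}
\;=\; i \, \langle \psi | O_j | \psi \rangle .
% Gilyen et al.'s algorithm estimates an M-dimensional gradient to additive
% error \varepsilon using on the order of \sqrt{M}/\varepsilon (up to
% logarithmic factors) coherent queries, matching the stated scaling.
```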
Related papers
- Heisenberg-limited adaptive gradient estimation for multiple observables
In quantum mechanics, the measurement of the expectation value of a general observable carries an inherent statistical uncertainty.
We provide an adaptive quantum algorithm for estimating the expectation values of $M$ general observables to within root mean squared error $\varepsilon$.
Our method paves the way to precisely understanding and predicting various physical properties of complicated quantum systems using quantum computers.
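For context, the two precision regimes at stake can be stated in one line; this is standard background on measurement scaling rather than a result specific to this paper.

```latex
% Shot-noise (standard quantum limit) vs. Heisenberg-limited error scaling
% for an estimator built from N state preparations:
\varepsilon_{\mathrm{SQL}} \;=\; \mathcal{O}\!\left( 1/\sqrt{N} \right),
\qquad
\varepsilon_{\mathrm{Heisenberg}} \;=\; \mathcal{O}\!\left( 1/N \right).
```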
arXiv Detail & Related papers (2024-06-05T14:16:47Z) - Dividing quantum circuits for time evolution of stochastic processes by orthogonal series density estimation
Quantum Monte Carlo integration (QMCI) is a quantum algorithm to estimate expectations of random variables.
In this paper, we propose a method to divide the time-evolution circuit $U_X(t)$ based on orthogonal series density estimation.
Our method achieves circuit depth and total query complexity scaling as $O(\sqrt{N})$ and $O(N^{3/2})$, respectively.
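As background on the classical subroutine named here, orthogonal series density estimation approximates a density from samples by estimating basis coefficients as sample means. The following is a minimal self-contained sketch using a cosine basis on [0, 1]; the basis choice and truncation order are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def cosine_basis(k, x):
    """Orthonormal cosine basis on [0, 1]: phi_0 = 1, phi_k = sqrt(2) cos(k pi x)."""
    return np.ones_like(x) if k == 0 else np.sqrt(2.0) * np.cos(k * np.pi * x)

def osde(samples, order):
    """Estimate coefficients c_k = E[phi_k(X)] by sample means, then return
    the truncated series estimate f_hat(x) = sum_k c_k phi_k(x)."""
    coeffs = [cosine_basis(k, samples).mean() for k in range(order + 1)]
    def f_hat(x):
        return sum(c * cosine_basis(k, x) for k, c in enumerate(coeffs))
    return f_hat

# Usage: recover a Beta(2, 5) density (supported on [0, 1]) from samples.
rng = np.random.default_rng(0)
samples = rng.beta(2.0, 5.0, size=20_000)
f_hat = osde(samples, order=12)
grid = np.linspace(0.0, 1.0, 5)
print(np.round(f_hat(grid), 3))  # crude density estimate on a coarse grid
```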
arXiv Detail & Related papers (2024-06-04T01:50:21Z) - Quantum option pricing via the Karhunen-Loève expansion
We consider the problem of pricing discretely monitored Asian options over $T$ monitoring points where the underlying asset is modeled by a geometric Brownian motion.
We provide two quantum algorithms with complexity poly-logarithmic in $T$ and in $1/\epsilon$, where $\epsilon$ is the additive approximation error.
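For orientation, the Karhunen-Loève expansion represents Brownian motion on $[0, T]$ as a series in independent Gaussians, which is the structure the quantum algorithms exploit. Below is a purely classical Monte Carlo sketch of pricing a discretely monitored arithmetic-average Asian call with a truncated KL expansion; all parameter values are illustrative.

```python
import numpy as np

def kl_brownian_paths(t, z, T):
    """Brownian motion at times t via a truncated Karhunen-Loeve expansion:
    W_t = sum_k z_k * sqrt(2T)/((k-1/2)*pi) * sin((k-1/2)*pi*t/T).
    t: (m,) monitoring times; z: (paths, K) i.i.d. standard normals."""
    k = np.arange(1, z.shape[1] + 1)                      # 1..K
    phi = (np.sqrt(2.0 * T) / ((k - 0.5) * np.pi)) * np.sin(
        np.outer(t, (k - 0.5) * np.pi / T))               # shape (m, K)
    return z @ phi.T                                      # shape (paths, m)

# Illustrative parameters (assumptions, not from the paper).
S0, K_strike, r, sigma, T, m = 100.0, 100.0, 0.05, 0.2, 1.0, 12
rng = np.random.default_rng(1)
t = T * np.arange(1, m + 1) / m                           # monitoring points
W = kl_brownian_paths(t, rng.standard_normal((100_000, 64)), T)
S = S0 * np.exp((r - 0.5 * sigma**2) * t + sigma * W)     # GBM at monitoring points
payoff = np.maximum(S.mean(axis=1) - K_strike, 0.0)       # arithmetic Asian call
print("price ~", np.exp(-r * T) * payoff.mean())
```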
arXiv Detail & Related papers (2024-02-15T17:37:23Z) - Nearly Minimax Optimal Regret for Learning Linear Mixture Stochastic
Shortest Path
We study the Stochastic Shortest Path (SSP) problem with a linear mixture transition kernel.
An agent repeatedly interacts with an environment and seeks to reach a certain goal state while minimizing the cumulative cost.
Existing works often assume a strictly positive lower bound on the cost function or an upper bound on the expected length of the optimal policy.
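Concretely, the objective described here can be written as follows; this is the standard SSP formulation with a linear mixture kernel, included for clarity rather than quoted from the paper.

```latex
% Stochastic shortest path: reach the goal g at minimum expected cumulative cost.
\min_{\pi} \; \mathbb{E}_{\pi}\!\left[ \sum_{t=1}^{\tau_g} c(s_t, a_t) \right],
\qquad
\tau_g \;=\; \min \{ t : s_{t+1} = g \},
% with a linear mixture transition kernel
% P(s' \mid s, a) = \langle \phi(s', s, a), \theta^{*} \rangle.
```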
arXiv Detail & Related papers (2024-02-14T07:52:00Z) - On Convergence of Incremental Gradient for Non-Convex Smooth Functions
In machine learning and network optimization, algorithms like shuffle SGD are popular because they minimize the number of cache misses and have good cache locality.
This paper delves into the convergence properties of SGD algorithms with arbitrary data ordering.
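To make the setting concrete, here is a minimal sketch of incremental/shuffled SGD, where each epoch visits every sample exactly once in some (possibly arbitrary) order; the least-squares objective is an illustrative choice.

```python
import numpy as np

def shuffled_sgd(grad_i, x0, n, epochs, lr, rng=None, reshuffle=True):
    """Incremental gradient descent: one pass per epoch over all n component
    functions, in a permuted order (shuffle SGD) or a fixed arbitrary order."""
    x = x0.copy()
    order = np.arange(n)
    rng = rng or np.random.default_rng(0)
    for _ in range(epochs):
        if reshuffle:
            rng.shuffle(order)          # random reshuffling each epoch
        for i in order:
            x -= lr * grad_i(x, i)      # step on the i-th component gradient
    return x

# Usage: least squares, f(x) = (1/n) sum_i (a_i . x - b_i)^2 (illustrative).
rng = np.random.default_rng(0)
A, x_true = rng.standard_normal((200, 5)), np.ones(5)
b = A @ x_true
grad_i = lambda x, i: 2.0 * (A[i] @ x - b[i]) * A[i]
x_hat = shuffled_sgd(grad_i, np.zeros(5), n=200, epochs=50, lr=0.01)
print(np.round(x_hat, 3))  # ~ x_true
```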
arXiv Detail & Related papers (2023-05-30T17:47:27Z) - Provably Efficient Reinforcement Learning via Surprise Bound
We propose a provably efficient reinforcement learning algorithm (both computationally and statistically) with general value function approximations.
Our algorithm achieves reasonable regret bounds when applied to both the linear setting and the sparse high-dimensional linear setting.
arXiv Detail & Related papers (2023-02-22T20:21:25Z) - Quantum tomography using state-preparation unitaries
We describe algorithms to obtain an approximate classical description of a $d$-dimensional quantum state when given access to a unitary that prepares it.
We show that it takes $\widetilde{\Theta}(d/\varepsilon)$ applications of the unitary to obtain an $\varepsilon$-$\ell_2$-approximation of the state.
We give an efficient algorithm for obtaining Schatten $q$-optimal estimates of a rank-$r$ mixed state.
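As a point of comparison for that query complexity, the naive approach simulated below just runs the state-preparation unitary and measures in the computational basis; it only recovers amplitude magnitudes and needs on the order of $d/\varepsilon^2$ repetitions, whereas the coherent algorithms above reach $\widetilde{\Theta}(d/\varepsilon)$ and recover phases. The unitary here is a random stand-in, not part of the paper.

```python
import numpy as np

d, shots = 8, 200_000
rng = np.random.default_rng(2)

# Stand-in state-preparation unitary: a random unitary via QR decomposition.
Q, _ = np.linalg.qr(rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d)))
psi = Q[:, 0]                          # |psi> = U|0>

# Naive sampling baseline: measure U|0> in the computational basis.
probs = np.abs(psi) ** 2
probs /= probs.sum()                   # guard against floating-point drift
counts = rng.multinomial(shots, probs)
amp_est = np.sqrt(counts / shots)      # estimates |<k|psi>| only; phases lost

print("l2 error of magnitudes:", np.linalg.norm(amp_est - np.abs(psi)))
```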
arXiv Detail & Related papers (2022-07-18T17:56:18Z) - Quantum Goemans-Williamson Algorithm with the Hadamard Test and
Approximate Amplitude Constraints
We introduce a variational quantum algorithm for the Goemans-Williamson algorithm that uses only $n+1$ qubits.
Efficient optimization is achieved by encoding the objective matrix as a properly parameterized unitary conditioned on an auxiliary qubit.
We demonstrate the effectiveness of our protocol by devising an efficient quantum implementation of the Goemans-Williamson algorithm for various NP-hard problems.
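For reference, the classical Goemans-Williamson pipeline being accelerated is: solve the MaxCut SDP relaxation, then round with a random hyperplane. The sketch below substitutes a simple Burer-Monteiro gradient step for a proper SDP solver; that substitution is an assumption made to keep the example dependency-free.

```python
import numpy as np

def gw_maxcut(W, dim=8, iters=2000, lr=0.05, seed=3):
    """Goemans-Williamson for MaxCut: approximately solve the SDP relaxation
    max sum_{i<j} W_ij (1 - v_i . v_j) / 2 over unit vectors v_i (here via a
    Burer-Monteiro stand-in), then round with a random hyperplane."""
    rng = np.random.default_rng(seed)
    n = W.shape[0]
    V = rng.standard_normal((n, dim))
    V /= np.linalg.norm(V, axis=1, keepdims=True)
    for _ in range(iters):
        V -= lr * (W @ V)                       # decrease sum_ij W_ij v_i.v_j
        V /= np.linalg.norm(V, axis=1, keepdims=True)
    r = rng.standard_normal(dim)                # random hyperplane rounding
    x = np.sign(V @ r)
    cut = sum(W[i, j] for i in range(n) for j in range(i + 1, n) if x[i] != x[j])
    return x, cut

# Usage: a 5-cycle, whose optimal cut value is 4.
n = 5
W = np.zeros((n, n))
for i in range(n):
    W[i, (i + 1) % n] = W[(i + 1) % n, i] = 1.0
x, cut = gw_maxcut(W)
print("cut assignment:", x, "cut value:", cut)
```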
arXiv Detail & Related papers (2022-06-30T03:15:23Z) - Heavy-tailed Streaming Statistical Estimation
We consider the task of heavy-tailed statistical estimation given streaming $p$-dimensional samples.
We design a clipped stochastic gradient descent algorithm and provide an improved analysis under a more nuanced condition on the noise of the gradients.
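Gradient clipping is the key mechanism named here: truncating each stochastic gradient to a norm ball tames heavy-tailed noise. Below is a minimal sketch for streaming mean estimation with Student-t (heavy-tailed) samples; the clipping radius and step-size schedule are illustrative choices, not the paper's.

```python
import numpy as np

def clipped_sgd_mean(stream, p, clip=5.0):
    """Streaming mean estimation via clipped SGD on f(x) = E||x - Z||^2 / 2:
    each stochastic gradient x - z is clipped to norm <= clip before stepping."""
    x = np.zeros(p)
    for t, z in enumerate(stream, start=1):
        g = x - z                               # stochastic gradient
        norm = np.linalg.norm(g)
        if norm > clip:
            g *= clip / norm                    # clip heavy-tailed gradients
        x -= g / t                              # Robbins-Monro step size 1/t
    return x

# Usage: heavy-tailed (Student-t, df = 2.5) samples around a true mean.
rng = np.random.default_rng(4)
p, true_mean = 3, np.array([1.0, -2.0, 0.5])
stream = (true_mean + rng.standard_t(2.5, size=p) for _ in range(50_000))
print(np.round(clipped_sgd_mean(stream, p), 3))  # ~ true_mean
```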
arXiv Detail & Related papers (2021-08-25T21:30:27Z) - A Stochastic Path-Integrated Differential EstimatoR Expectation
Maximization Algorithm
The Expectation Maximization (EM) algorithm is of key importance for inference in latent variable models, including mixtures of regressors and experts and models with missing observations.
This paper introduces a novel EM algorithm, called SPIDER-EM, built on a conditional expectation estimator.
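The variance-reduction idea behind the name can be summarized in one recursion; the notation below is the generic SPIDER estimator applied to EM sufficient statistics, a paraphrase rather than the paper's exact display.

```latex
% SPIDER-style variance-reduced estimate of the E-step statistic
% \bar{s}(\theta) = (1/n) \sum_i \bar{s}_i(\theta), updated on minibatches B_t:
S_{t+1} \;=\; S_t \;+\; \frac{1}{|B_t|} \sum_{i \in B_t}
\Big( \bar{s}_i(\theta_t) - \bar{s}_i(\theta_{t-1}) \Big),
\qquad
\theta_{t+1} \;=\; \bar{\theta}(S_{t+1}) .
% The M-step map \bar{\theta}(\cdot) maximizes the complete-data likelihood
% for the given statistics; S is refreshed with a full pass periodically.
```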
arXiv Detail & Related papers (2020-11-30T15:49:31Z) - Sample Complexity of Asynchronous Q-Learning: Sharper Analysis and
Variance Reduction
Asynchronous Q-learning aims to learn the optimal action-value function (or Q-function) of a Markov decision process (MDP).
We show that the number of samples needed to yield an entrywise $\varepsilon$-accurate estimate of the Q-function is at most on the order of $\frac{1}{\mu_{\min}(1-\gamma)^5\varepsilon^2} + \frac{t_{\mathrm{mix}}}{\mu_{\min}(1-\gamma)}$ up to some logarithmic factor.
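To ground the setting, asynchronous Q-learning updates one state-action entry at a time along a single Markovian trajectory. Here is a minimal tabular sketch on a randomly generated MDP; the environment and learning-rate schedule are illustrative assumptions.

```python
import numpy as np

def async_q_learning(P, R, gamma, steps, seed=5):
    """Tabular asynchronous Q-learning: follow one behavior trajectory and
    update only the visited (s, a) entry at each step."""
    rng = np.random.default_rng(seed)
    S, A = R.shape
    Q = np.zeros((S, A))
    s = 0
    for t in range(1, steps + 1):
        a = rng.integers(A)                       # uniform behavior policy
        s_next = rng.choice(S, p=P[s, a])         # Markovian transition
        target = R[s, a] + gamma * Q[s_next].max()
        lr = 1.0 / (1.0 + 0.01 * t)               # decaying step size
        Q[s, a] += lr * (target - Q[s, a])        # asynchronous update
        s = s_next
    return Q

# Usage: a random 4-state, 2-action MDP (illustrative).
rng = np.random.default_rng(0)
S, A = 4, 2
P = rng.dirichlet(np.ones(S), size=(S, A))        # P[s, a] is a distribution
R = rng.random((S, A))
Q = async_q_learning(P, R, gamma=0.9, steps=200_000)
print(np.round(Q, 2))
```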
arXiv Detail & Related papers (2020-06-04T17:51:00Z)