A Note on Quantum Phase Estimation
- URL: http://arxiv.org/abs/2304.02241v1
- Date: Wed, 5 Apr 2023 06:05:42 GMT
- Title: A Note on Quantum Phase Estimation
- Authors: Yao-Ting Lin
- Abstract summary: We show an alternative, simpler and self-contained proof of query lower bounds.
Specifically, our proof consists of basic linear algebra without using the knowledge of Boolean function analysis and adversary methods.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this work, we study the phase estimation problem. We show an alternative,
simpler and self-contained proof of query lower bounds. Technically, compared
to the previous proofs [NW99, Bes05], our proof is considerably more elementary.
Specifically, our proof consists of basic linear algebra without using the
knowledge of Boolean function analysis and adversary methods. Qualitatively,
our bound is tight in the low success probability regime and offers a more
fine-grained trade-off. In particular, we prove that for any $\epsilon > 0, p
\geq 0$, every algorithm requires at least $\Omega(p/{\epsilon})$ queries to
obtain an ${\epsilon}$-approximation for the phase with probability at least p.
However, the existing bounds hold only when $p > 1/2$. Quantitatively, our
bound is tight since it matches the well-known phase estimation algorithm of
Cleve, Ekert, Macchiavello, and Mosca [CEMM98] which requires $O(1/{\epsilon})$
queries to obtain an ${\epsilon}$-approximation with a constant probability.
Following the derivation of the lower bound in our framework, we give a new and
intuitive interpretation of the phase estimation algorithm of [CEMM98], which
might be of independent interest.
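As a rough illustration of the $O(1/\epsilon)$ behavior of the [CEMM98] algorithm mentioned above, the outcome distribution of textbook phase estimation with $t$ ancilla qubits can be simulated classically. The sketch below is our own illustration (not code from the paper): it assumes the standard circuit in which the register is prepared in $\frac{1}{\sqrt{N}}\sum_j e^{2\pi i j \varphi}|j\rangle$ with $N = 2^t$ via controlled powers of $U$, followed by an inverse QFT and measurement.

```python
import numpy as np

def qpe_outcome_probs(phi: float, t: int) -> np.ndarray:
    """Outcome distribution of textbook QPE with t ancilla qubits.

    The state before the inverse QFT is (1/sqrt(N)) * sum_j e^{2*pi*i*j*phi} |j>,
    with N = 2^t; applying the (unitary) inverse QFT and measuring yields
    outcome k with probability |(1/N) * sum_j e^{2*pi*i*j*(phi - k/N)}|^2.
    """
    N = 2 ** t
    v = np.exp(2j * np.pi * phi * np.arange(N)) / np.sqrt(N)
    # np.fft.fft computes sum_j v_j e^{-2*pi*i*j*k/N}; dividing by sqrt(N)
    # turns it into the unitary inverse-QFT amplitudes.
    amps = np.fft.fft(v) / np.sqrt(N)
    return np.abs(amps) ** 2

# Estimating phi by the most likely t-bit outcome k / 2^t: t ancilla qubits
# give precision epsilon ~ 2^{-t}, at the cost of O(2^t) = O(1/epsilon)
# applications of U, matching the upper bound discussed above.
t = 8
phi = 0.3141
probs = qpe_outcome_probs(phi, t)
phi_hat = probs.argmax() / 2 ** t
```

When $\varphi$ is an exact $t$-bit fraction the distribution concentrates entirely on the correct outcome; otherwise the most likely outcome is still within $2^{-t}$ of $\varphi$, with the residual probability spread over neighboring grid points.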
Related papers
- Outlier-robust Mean Estimation near the Breakdown Point via Sum-of-Squares [4.335413713700667]
We give a new analysis of the canonical sum-of-squares program introduced in [kothari2018robust].
We show that this program efficiently achieves optimal error rate for all $\varepsilon \in [0, \frac{1}{2})$.
arXiv Detail & Related papers (2024-11-21T16:57:05Z) - Fast Rates for Bandit PAC Multiclass Classification [73.17969992976501]
We study multiclass PAC learning with bandit feedback, where inputs are classified into one of $K$ possible labels and feedback is limited to whether or not the predicted labels are correct.
Our main contribution is in designing a novel learning algorithm for the agnostic $(\varepsilon, \delta)$-PAC version of the problem.
arXiv Detail & Related papers (2024-06-18T08:54:04Z) - Convergence of AdaGrad for Non-convex Objectives: Simple Proofs and
Relaxed Assumptions [33.49807210207286]
A new auxiliary function $\xi$ helps eliminate the complexity of handling the correlation between the numerator and denominator of AdaGrad's iterations.
We show that AdaGrad succeeds in converging under the $(L_0, L_1)$-smooth condition as long as the learning rate is lower than a threshold.
arXiv Detail & Related papers (2023-05-29T09:33:04Z) - Tight Bounds for Quantum Phase Estimation and Related Problems [0.90238471756546]
We show that a phase estimation algorithm with precision $\delta$ and error probability $\epsilon$ has cost $\Omega\left(\frac{1}{\delta}\log\frac{1}{\epsilon}\right)$.
We also show that having lots of advice (applications of the advice-preparing unitary) does not significantly reduce cost, and neither does knowledge of the eigenbasis of $U$.
arXiv Detail & Related papers (2023-05-08T17:46:01Z) - Breaking the Lower Bound with (Little) Structure: Acceleration in
Non-Convex Stochastic Optimization with Heavy-Tailed Noise [28.780192812703948]
We consider the optimization problem with smooth but not necessarily convex objectives in the heavy-tailed noise regime.
We show that one can achieve a faster rate than that dictated by the lower bound $\Omega(T^{\frac{1-p}{3p-2}})$ with only a tiny bit of structure.
We also establish that it guarantees a high-probability convergence rate of $O(\log(T/\delta)\, T^{\frac{1-p}{3p-2}})$ under a mild condition.
arXiv Detail & Related papers (2023-02-14T00:23:42Z) - Unitarity estimation for quantum channels [7.323367190336826]
Unitarity estimation is a basic and important problem in quantum device certification and benchmarking.
We provide a unified framework for unitarity estimation, which induces ancilla-efficient algorithms.
We show that both the $d$-dependence and $\epsilon$-dependence of our algorithms are optimal.
arXiv Detail & Related papers (2022-12-19T09:36:33Z) - Best Policy Identification in Linear MDPs [70.57916977441262]
We investigate the problem of best policy identification in discounted linear Markov Decision Processes in the fixed confidence setting under a generative model.
The lower bound, obtained as the solution of an intricate non-convex optimization program, can be used as the starting point to devise such algorithms.
arXiv Detail & Related papers (2022-08-11T04:12:50Z) - High Probability Bounds for a Class of Nonconvex Algorithms with AdaGrad
Stepsize [55.0090961425708]
We propose a new, simplified high-probability analysis of AdaGrad for smooth, non-convex problems.
We present our analysis in a modular way and obtain a complementary $\mathcal{O}(1/T)$ convergence rate in the deterministic setting.
To the best of our knowledge, this is the first high-probability analysis for AdaGrad with a truly adaptive scheme, i.e., completely oblivious to the knowledge of smoothness.
arXiv Detail & Related papers (2022-04-06T13:50:33Z) - Improved Sample Complexity for Incremental Autonomous Exploration in
MDPs [132.88757893161699]
We learn the set of $\epsilon$-optimal goal-conditioned policies attaining all states that are incrementally reachable within $L$ steps.
DisCo is the first algorithm that can return an $\epsilon/c_{\min}$-optimal policy for any cost-sensitive shortest-path problem.
arXiv Detail & Related papers (2020-12-29T14:06:09Z) - On the Almost Sure Convergence of Stochastic Gradient Descent in
Non-Convex Problems [75.58134963501094]
This paper analyzes the trajectories of stochastic gradient descent (SGD).
We show that SGD avoids strict saddle points/manifolds with probability $1$ for a broad class of step-size policies.
arXiv Detail & Related papers (2020-06-19T14:11:26Z) - Sample Complexity of Asynchronous Q-Learning: Sharper Analysis and
Variance Reduction [63.41789556777387]
Asynchronous Q-learning aims to learn the optimal action-value function (or Q-function) of a Markov decision process (MDP).
We show that the number of samples needed to yield an entrywise $\varepsilon$-accurate estimate of the Q-function is at most on the order of $\frac{1}{\mu_{\min}(1-\gamma)^5\varepsilon^2} + \frac{t_{\mathrm{mix}}}{\mu_{\min}(1-\gamma)}$ up to some logarithmic factor.
arXiv Detail & Related papers (2020-06-04T17:51:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.