Optimal algorithms for learning quantum phase states
- URL: http://arxiv.org/abs/2208.07851v2
- Date: Wed, 3 May 2023 05:22:04 GMT
- Title: Optimal algorithms for learning quantum phase states
- Authors: Srinivasan Arunachalam, Sergey Bravyi, Arkopal Dutt, Theodore J. Yoder
- Abstract summary: We show that the sample complexity of learning an unknown degree-$d$ phase state is $\Theta(n^d)$ if we allow separable measurements.
We also consider learning phase states when $f$ has sparsity-$s$, degree-$d$ in its $\mathbb{F}_2$ representation.
- Score: 8.736370689844682
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We analyze the complexity of learning $n$-qubit quantum phase states. A
degree-$d$ phase state is defined as a superposition of all $2^n$ basis vectors
$x$ with amplitudes proportional to $(-1)^{f(x)}$, where $f$ is a degree-$d$
Boolean polynomial over $n$ variables. We show that the sample complexity of
learning an unknown degree-$d$ phase state is $\Theta(n^d)$ if we allow
separable measurements and $\Theta(n^{d-1})$ if we allow entangled
measurements. Our learning algorithm based on separable measurements has
runtime $\textsf{poly}(n)$ (for constant $d$) and is well-suited for near-term
demonstrations as it requires only single-qubit measurements in the Pauli $X$
and $Z$ bases. We show similar bounds on the sample complexity for learning
generalized phase states with complex-valued amplitudes. We further consider
learning phase states when $f$ has sparsity-$s$, degree-$d$ in its
$\mathbb{F}_2$ representation (with sample complexity $O(2^d sn)$), $f$ has
Fourier-degree-$t$ (with sample complexity $O(2^{2t})$), and learning quadratic
phase states with $\varepsilon$-global depolarizing noise (with sample
complexity $O(n^{1+\varepsilon})$). These learning algorithms give us a
procedure to learn the diagonal unitaries of the Clifford hierarchy and
IQP circuits.
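The abstract notes that the separable-measurement algorithm needs only single-qubit measurements in the Pauli $X$ and $Z$ bases. For the simplest degree-1 case this reduces to a familiar fact: applying $H^{\otimes n}$ to a linear phase state $2^{-n/2}\sum_x (-1)^{a\cdot x}|x\rangle$ concentrates all amplitude on $|a\rangle$, so an $X$-basis measurement reveals $a$ in one shot. The numpy sketch below illustrates only this degree-1 special case, not the paper's general degree-$d$ algorithm; the function names and the choice of hidden vector `a` are illustrative.

```python
import numpy as np
from itertools import product

def walsh_hadamard(v):
    # Apply H^{(x)n} (the n-fold Hadamard) to a state vector of length 2^n
    # via the standard in-place butterfly, then normalize.
    n = int(np.log2(len(v)))
    v = v.astype(float).copy()
    for q in range(n):
        step = 1 << q
        for i in range(0, len(v), 2 * step):
            for j in range(i, i + step):
                a, b = v[j], v[j + step]
                v[j], v[j + step] = a + b, a - b
    return v / np.sqrt(2) ** n

n = 4
a = [1, 0, 1, 1]  # hidden linear function f(x) = a.x mod 2 (illustrative choice)

# Degree-1 phase state: amplitudes (-1)^{f(x)} / 2^{n/2}, with x[0] as the
# most significant bit of the basis-state index.
amps = np.array([(-1) ** (sum(ai * xi for ai, xi in zip(a, x)) % 2)
                 for x in product([0, 1], repeat=n)]) / np.sqrt(2) ** n

out = walsh_hadamard(amps)
idx = int("".join(map(str, a)), 2)
print(abs(out[idx]))  # ~ 1.0: all amplitude sits on |a>, so measuring recovers a
```

For degree $d > 1$ the phase is a nonlinear polynomial and a single global Hadamard no longer suffices, which is where the paper's $\Theta(n^d)$ separable-measurement bound comes in.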
Related papers
- Measuring quantum relative entropy with finite-size effect [53.64687146666141]
We study the estimation of relative entropy $D(\rho\|\sigma)$ when $\sigma$ is known.
Our estimator attains the Cramér-Rao type bound when the dimension $d$ is fixed.
arXiv Detail & Related papers (2024-06-25T06:07:20Z) - Near-Optimal Bounds for Learning Gaussian Halfspaces with Random
Classification Noise [50.64137465792738]
We show that any efficient SQ algorithm for the problem requires sample complexity at least $\Omega(d^{1/2}/(\max\{p, \epsilon\})^2)$.
Our lower bound suggests that this quadratic dependence on $1/\epsilon$ is inherent for efficient algorithms.
arXiv Detail & Related papers (2023-07-13T18:59:28Z) - Layered State Discovery for Incremental Autonomous Exploration [106.37656068276901]
Layered Autonomous Exploration (LAE) is a novel algorithm for AX that attains a sample complexity of $\tilde{\mathcal{O}}(L S^{\rightarrow}_{L} A \ln^{12}(S^{\rightarrow}_{L}))$.
arXiv Detail & Related papers (2023-02-07T22:58:12Z) - Tight Bounds on the Hardness of Learning Simple Nonparametric Mixtures [9.053430799456587]
We study the problem of learning nonparametric distributions in a finite mixture.
We establish tight bounds on the sample complexity for learning the component distributions in such models.
arXiv Detail & Related papers (2022-03-28T23:53:48Z) - Optimal learning of quantum Hamiltonians from high-temperature Gibbs
states [0.9453554184019105]
We show how to learn the coefficients of a Hamiltonian to error $\varepsilon$ with sample complexity $S = O(\log N/(\beta\varepsilon)^2)$ and time linear in the sample size, $O(SN)$.
In the appendix, we show that virtually the same algorithm can be used to learn $H$ from a real-time evolution unitary $e^{-itH}$ in a small-$t$ regime with similar sample and time complexity.
arXiv Detail & Related papers (2021-08-10T18:00:49Z) - Improved Sample Complexity for Incremental Autonomous Exploration in
MDPs [132.88757893161699]
We learn the set of $\epsilon$-optimal goal-conditioned policies attaining all states that are incrementally reachable within $L$ steps.
DisCo is the first algorithm that can return an $\epsilon/c_{\min}$-optimal policy for any cost-sensitive shortest-path problem.
arXiv Detail & Related papers (2020-12-29T14:06:09Z) - Improved quantum data analysis [1.8416014644193066]
We give a quantum "Threshold Search" algorithm that requires only $O((\log^2 m)/\epsilon^2)$ samples of a $d$-dimensional state.
We also give an alternative Hypothesis Selection method using $\tilde{O}((\log^3 m)/\epsilon^2)$ samples.
arXiv Detail & Related papers (2020-11-22T01:22:37Z) - An Optimal Separation of Randomized and Quantum Query Complexity [67.19751155411075]
We prove that for every decision tree, the absolute values of the Fourier coefficients of a given order $\ell$ sum to at most $c^{\ell}\sqrt{\binom{d}{\ell}}\,(1+\log n)^{\ell-1}$, where $n$ is the number of variables, $d$ is the tree depth, and $c>0$ is an absolute constant.
arXiv Detail & Related papers (2020-08-24T06:50:57Z) - Sample Efficient Reinforcement Learning via Low-Rank Matrix Estimation [30.137884459159107]
We consider the question of learning the $Q$-function in a sample-efficient manner for reinforcement learning with continuous state and action spaces.
We develop a simple, iterative learning algorithm that finds an $\epsilon$-optimal $Q$-function with sample complexity of $\widetilde{O}(\frac{1}{\epsilon^{\max(d_1, d_2)+2}})$ when the optimal $Q$-function has low rank $r$ and the discount factor $\gamma$ is below a certain threshold.
arXiv Detail & Related papers (2020-06-11T00:55:35Z) - Model-Free Reinforcement Learning: from Clipped Pseudo-Regret to Sample
Complexity [59.34067736545355]
Given an MDP with $S$ states, $A$ actions, a discount factor $\gamma \in (0,1)$, and an approximation threshold $\epsilon > 0$, we provide a model-free algorithm to learn an $\epsilon$-optimal policy.
For small enough $\epsilon$, we show an algorithm with improved sample complexity.
arXiv Detail & Related papers (2020-06-06T13:34:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.