An optimal tradeoff between entanglement and copy complexity for state
tomography
- URL: http://arxiv.org/abs/2402.16353v1
- Date: Mon, 26 Feb 2024 07:18:57 GMT
- Title: An optimal tradeoff between entanglement and copy complexity for state
tomography
- Authors: Sitan Chen, Jerry Li, Allen Liu
- Abstract summary: We study tomography in the natural setting where one can make measurements of $t$ copies at a time.
This is the first smooth entanglement-copy protocol known for any quantum learning task.
A key insight is to use Schur-Weyl sampling not to estimate the spectrum of $\rho$, but to estimate the deviation of $\rho$ from the maximally mixed state.
- Score: 24.737530909081915
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: There has been significant interest in understanding how practical
constraints on contemporary quantum devices impact the complexity of quantum
learning. For the classic question of tomography, recent work tightly
characterized the copy complexity for any protocol that can only measure one
copy of the unknown state at a time, showing it is polynomially worse than if
one can make fully-entangled measurements. While we now have a fairly complete
picture of the rates for such tasks in the near-term and fault-tolerant
regimes, it remains poorly understood what the landscape in between looks like.
In this work, we study tomography in the natural setting where one can make
measurements of $t$ copies at a time. For sufficiently small $\epsilon$, we
show that for any $t \le d^2$,
$\widetilde{\Theta}(\frac{d^3}{\sqrt{t}\epsilon^2})$ copies are necessary and
sufficient to learn an unknown $d$-dimensional state $\rho$ to trace distance
$\epsilon$. This gives a smooth and optimal interpolation between the known
rates for single-copy and fully-entangled measurements.
To our knowledge, this is the first smooth entanglement-copy tradeoff known
for any quantum learning task, and for tomography, no intermediate point on
this curve was known, even at $t = 2$. An important obstacle is that unlike the
optimal single-copy protocol, the optimal fully-entangled protocol is
inherently biased and thus precludes naive batching approaches. Instead, we
devise a novel two-stage procedure that uses Keyl's algorithm to refine a crude
estimate for $\rho$ based on single-copy measurements. A key insight is to use
Schur-Weyl sampling not to estimate the spectrum of $\rho$, but to estimate the
deviation of $\rho$ from the maximally mixed state. When $\rho$ is far from the
maximally mixed state, we devise a novel quantum splitting procedure that
reduces to the case where $\rho$ is close to maximally mixed.
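As a sanity check on the interpolation claimed above, the rate $\widetilde{\Theta}(d^3/(\sqrt{t}\,\epsilon^2))$ should recover the known single-copy rate $d^3/\epsilon^2$ at $t = 1$ and the fully-entangled rate $d^2/\epsilon^2$ at $t = d^2$. A minimal numerical sketch (ignoring constants and logarithmic factors, with illustrative values of $d$ and $\epsilon$):

```python
import math

def copy_complexity(d: int, t: int, eps: float) -> float:
    """Claimed tomography rate d^3 / (sqrt(t) * eps^2), up to log factors."""
    return d**3 / (math.sqrt(t) * eps**2)

d, eps = 16, 0.5  # illustrative values only

# t = 1 recovers the single-copy rate d^3 / eps^2.
single_copy = copy_complexity(d, 1, eps)

# t = d^2 recovers the fully-entangled rate d^2 / eps^2.
fully_entangled = copy_complexity(d, d**2, eps)

assert single_copy == d**3 / eps**2
assert fully_entangled == d**2 / eps**2
```

This is purely a consistency check of the stated formula's endpoints, not an implementation of the two-stage protocol itself.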
Related papers
- Quantum state testing with restricted measurements [30.641152457827527]
We develop an information-theoretic framework that yields unified copy complexity lower bounds for restricted families of non-adaptive measurements.
We demonstrate a separation between these two schemes, showing the power of randomized measurement schemes over fixed ones.
arXiv Detail & Related papers (2024-08-30T17:48:00Z) - Projection by Convolution: Optimal Sample Complexity for Reinforcement Learning in Continuous-Space MDPs [56.237917407785545]
We consider the problem of learning an $\varepsilon$-optimal policy in a general class of continuous-space Markov decision processes (MDPs) having smooth Bellman operators.
Key to our solution is a novel projection technique based on ideas from harmonic analysis.
Our result bridges the gap between two popular but conflicting perspectives on continuous-space MDPs.
arXiv Detail & Related papers (2024-05-10T09:58:47Z) - Nearly Minimax Optimal Regret for Learning Linear Mixture Stochastic
Shortest Path [80.60592344361073]
We study the Stochastic Shortest Path (SSP) problem with a linear mixture transition kernel.
An agent repeatedly interacts with an environment and seeks to reach a certain goal state while minimizing the cumulative cost.
Existing works often assume a strictly positive lower bound of the iteration cost function or an upper bound of the expected length for the optimal policy.
arXiv Detail & Related papers (2024-02-14T07:52:00Z) - Lower bounds for learning quantum states with single-copy measurements [3.2590610391507444]
We study the problems of quantum tomography and shadow tomography using measurements performed on individual copies of an unknown $d$-dimensional state.
In particular, this rigorously establishes the optimality of the folklore "Pauli tomography" algorithm in terms of its complexity.
arXiv Detail & Related papers (2022-07-29T02:26:08Z) - When Does Adaptivity Help for Quantum State Learning? [19.89243001385691]
We show that any protocol using incoherent measurements requires $\Omega(d^3/\epsilon^2)$ copies, matching the best known upper bound.
We give an adaptive algorithm that outputs a state which is $\gamma$-close in infidelity to $\rho$ using only $\widetilde{O}(d^3/\gamma)$ copies, which is optimal for incoherent measurements.
arXiv Detail & Related papers (2022-06-10T17:59:16Z) - Sharper Rates and Flexible Framework for Nonconvex SGD with Client and
Data Sampling [64.31011847952006]
We revisit the problem of finding an approximately stationary point of the average of $n$ smooth and possibly nonconvex functions.
We generalize the proposed method so that it can provably work with virtually any sampling mechanism.
We provide the most general and most accurate analysis of optimal bounds in the smooth nonconvex regime.
arXiv Detail & Related papers (2022-06-05T21:32:33Z) - Tight Bounds for Quantum State Certification with Incoherent
Measurements [18.566266990940374]
When $\sigma$ is the maximally mixed state $\frac{1}{d} I_d$, this is known as mixedness testing.
We focus on algorithms which use incoherent measurements, i.e. which only measure one copy of $\rho$ at a time.
arXiv Detail & Related papers (2022-04-14T17:59:31Z) - Settling the Sample Complexity of Model-Based Offline Reinforcement
Learning [50.5790774201146]
Offline reinforcement learning (RL) learns using pre-collected data without further exploration.
Prior algorithms or analyses either suffer from suboptimal sample complexities or incur high burn-in cost to reach sample optimality.
We demonstrate that the model-based (or "plug-in") approach achieves minimax-optimal sample complexity without burn-in cost.
arXiv Detail & Related papers (2022-04-11T17:26:19Z) - Distributed quantum inner product estimation [14.222887950206658]
A benchmarking task known as cross-platform verification has been proposed that aims to estimate the fidelity of states prepared on two quantum computers.
No quantum communication can be performed between the two physical platforms due to hardware constraints.
We show that the sample complexity must be at least $\Omega(\max\{1/\varepsilon^2, \sqrt{d}/\varepsilon\})$, even in the strongest setting.
arXiv Detail & Related papers (2021-11-05T05:35:03Z) - Sample Complexity of Asynchronous Q-Learning: Sharper Analysis and
Variance Reduction [63.41789556777387]
Asynchronous Q-learning aims to learn the optimal action-value function (or Q-function) of a Markov decision process (MDP).
We show that the number of samples needed to yield an entrywise $\varepsilon$-accurate estimate of the Q-function is at most on the order of $\frac{1}{\mu_{\min}(1-\gamma)^5\varepsilon^2} + \frac{t_{\mathrm{mix}}}{\mu_{\min}(1-\gamma)}$ up to some logarithmic factor.
arXiv Detail & Related papers (2020-06-04T17:51:00Z) - Breaking the Sample Size Barrier in Model-Based Reinforcement Learning
with a Generative Model [50.38446482252857]
This paper is concerned with the sample efficiency of reinforcement learning, assuming access to a generative model (or simulator).
We first consider $\gamma$-discounted infinite-horizon Markov decision processes (MDPs) with state space $\mathcal{S}$ and action space $\mathcal{A}$.
We prove that a plain model-based planning algorithm suffices to achieve minimax-optimal sample complexity given any target accuracy level.
arXiv Detail & Related papers (2020-05-26T17:53:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.