The debiased Keyl's algorithm: a new unbiased estimator for full state tomography
- URL: http://arxiv.org/abs/2510.07788v1
- Date: Thu, 09 Oct 2025 05:07:12 GMT
- Title: The debiased Keyl's algorithm: a new unbiased estimator for full state tomography
- Authors: Angelos Pelecanos, Jack Spilecki, John Wright
- Abstract summary: We present the debiased Keyl's algorithm, the first estimator for full state tomography which is both unbiased and sample-optimal. We show that $n = O(rd/\varepsilon^2)$ copies are sufficient to learn a rank-$r$ mixed state to trace distance error $\varepsilon$, which is optimal. We further show that $n = O(rd/\varepsilon^2)$ copies are sufficient to learn to error $\varepsilon$ in the more challenging Bures distance, which is also optimal.
- Score: 1.4302622916198997
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the problem of quantum state tomography, one is given $n$ copies of an unknown rank-$r$ mixed state $\rho \in \mathbb{C}^{d \times d}$ and asked to produce an estimator of $\rho$. In this work, we present the debiased Keyl's algorithm, the first estimator for full state tomography which is both unbiased and sample-optimal. We derive an explicit formula for the second moment of our estimator, with which we show the following applications. (1) We give a new proof that $n = O(rd/\varepsilon^2)$ copies are sufficient to learn a rank-$r$ mixed state to trace distance error $\varepsilon$, which is optimal. (2) We further show that $n = O(rd/\varepsilon^2)$ copies are sufficient to learn to error $\varepsilon$ in the more challenging Bures distance, which is also optimal. (3) We consider full state tomography when one is only allowed to measure $k$ copies at once. We show that $n = O\left(\max \left(\frac{d^3}{\sqrt{k}\varepsilon^2}, \frac{d^2}{\varepsilon^2} \right) \right)$ copies suffice to learn in trace distance. This improves on the prior work of Chen et al. and matches their lower bound. (4) For shadow tomography, we show that $O(\log(m)/\varepsilon^2)$ copies are sufficient to learn $m$ given observables $O_1, \dots, O_m$ in the "high accuracy regime", when $\varepsilon = O(1/d)$, improving on a result of Chen et al. More generally, we show that if $\mathrm{tr}(O_i^2) \leq F$ for all $i$, then $n = O\Big(\log(m) \cdot \Big(\min\Big\{\frac{\sqrt{r F}}{\varepsilon}, \frac{F^{2/3}}{\varepsilon^{4/3}}\Big\} + \frac{1}{\varepsilon^2}\Big)\Big)$ copies suffice, improving on existing work. (5) For quantum metrology, we give a locally unbiased algorithm whose mean squared error matrix is upper bounded by twice the inverse of the quantum Fisher information matrix in the asymptotic limit of large $n$, which is optimal.
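The bounds above are easiest to compare numerically. Below is a minimal Python sketch, not taken from the paper: it simply evaluates the quoted asymptotic formulas with all hidden big-O constants set to 1, for arbitrary illustrative values of $d$, $r$, $\varepsilon$, $m$, and $F$ (the function names are invented for this sketch).

```python
# Minimal sketch (assumption: all big-O constants set to 1) comparing the
# sample-complexity bounds quoted in the abstract. Function names are
# hypothetical; the paper states only the asymptotic expressions.
import math

def n_trace_or_bures(r: int, d: int, eps: float) -> float:
    """Copies for trace or Bures distance eps: O(r d / eps^2)."""
    return r * d / eps**2

def n_k_copy(d: int, k: int, eps: float) -> float:
    """Copies when only k can be measured at once:
    O(max(d^3 / (sqrt(k) eps^2), d^2 / eps^2))."""
    return max(d**3 / (math.sqrt(k) * eps**2), d**2 / eps**2)

def n_shadow(m: int, r: int, F: float, eps: float) -> float:
    """Copies for m observables with tr(O_i^2) <= F:
    O(log(m) * (min(sqrt(r F)/eps, F^(2/3)/eps^(4/3)) + 1/eps^2))."""
    return math.log(m) * (min(math.sqrt(r * F) / eps,
                              F ** (2 / 3) / eps ** (4 / 3)) + 1 / eps**2)

if __name__ == "__main__":
    d, r, eps = 100, 10, 0.1
    print(f"full tomography:   n ~ {n_trace_or_bures(r, d, eps):.3g}")
    # The d^3/sqrt(k) branch dominates until k ~ d^2, where the two
    # arguments of the max meet.
    for k in (1, d, d**2):
        print(f"k = {k:>5} at once:  n ~ {n_k_copy(d, k, eps):.3g}")
    print(f"shadow tomography: n ~ {n_shadow(10**6, r, 1.0, eps):.3g}")
```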
Related papers
- Instance-optimal high-precision shadow tomography with few-copy measurements: A metrological approach [2.956729394666618]
We study the sample complexity of shadow tomography in the high-precision regime. We use possibly adaptive measurements that act on $O(\mathrm{polylog}(d))$ copies of $\rho$ at a time.
arXiv Detail & Related papers (2026-02-04T19:00:00Z) - Shadow Tomography Against Adversaries [31.34964957208756]
We show that all non-adaptive shadow tomography algorithms must incur an error of $\varepsilon = \tilde{O}(\max_{i \in [M]} \|O_i\|_{\mathrm{HS}})$ for some choice of observables. We design an algorithm that achieves an error of $\varepsilon = \tilde{O}(\min\{\sqrt{M}, \sqrt{d}\})$ for some choice of observables, even with unlimited copies.
arXiv Detail & Related papers (2025-12-05T06:06:07Z) - Information-Computation Tradeoffs for Noiseless Linear Regression with Oblivious Contamination [65.37519531362157]
We show that any efficient Statistical Query algorithm for this task requires VSTAT complexity at least $\tilde{\Omega}(d^{1/2}/\alpha^2)$.
arXiv Detail & Related papers (2025-10-12T15:42:44Z) - Optimal lower bounds for quantum state tomography [0.9969485010222057]
We show that $n = \Omega(rd/\varepsilon^2)$ copies are necessary to learn a rank-$r$ mixed state $\rho \in \mathbb{C}^{d \times d}$ up to error $\varepsilon$ in trace distance. A key technical ingredient in our proof, which may be of independent interest, is a reduction which converts any algorithm for projector tomography which learns to error $\varepsilon$ in trace distance to an algorithm which learns to error $O(\varepsilon)$ in the more stringent Bures distance.
arXiv Detail & Related papers (2025-10-09T02:36:48Z) - Optimal Sketching for Residual Error Estimation for Matrix and Vector Norms [50.15964512954274]
We study the problem of residual error estimation for matrix and vector norms using a linear sketch.
We demonstrate that this gives a substantial advantage empirically, for roughly the same sketch size and accuracy as in previous work.
We also show an $\Omega(k^{2/p} n^{1-2/p})$ lower bound for the sparse recovery problem, which is tight up to a $\mathrm{poly}(\log n)$ factor.
arXiv Detail & Related papers (2024-08-16T02:33:07Z) - Optimal high-precision shadow estimation [22.01044188849049]
Formally, we give a protocol that measures $O(\log(m)/\epsilon^2)$ copies of an unknown mixed state $\rho \in \mathbb{C}^{d \times d}$.
We show via dimensionality reduction that we can rescale $\epsilon$ and $d$ to reduce to the regime where $\epsilon \leq O(d^{-1/2})$.
arXiv Detail & Related papers (2024-07-18T19:42:49Z) - Sample-Efficient Linear Regression with Self-Selection Bias [7.605563562103568]
We consider the problem of linear regression with self-selection bias in the unknown-index setting.
We provide a novel and near-optimally sample-efficient (in terms of $k$) algorithm to recover $\mathbf{w}_1, \ldots, \mathbf{w}_k$.
Our algorithm succeeds under significantly relaxed noise assumptions, and therefore also succeeds in the related setting of max-linear regression.
arXiv Detail & Related papers (2024-02-22T02:20:24Z) - Distribution-Independent Regression for Generalized Linear Models with Oblivious Corruptions [49.69852011882769]
We show the first algorithms for the problem of regression for generalized linear models (GLMs) in the presence of additive oblivious noise.
We present an algorithm that tackles this problem in its most general distribution-independent setting.
This is the first algorithmic result for GLM regression with oblivious noise which can handle more than half the samples being arbitrarily corrupted.
arXiv Detail & Related papers (2023-09-20T21:41:59Z) - Quantum chi-squared tomography and mutual information testing [1.8416014644193066]
For quantum state tomography on rank-$r$, dimension-$d$ states, we show that $\widetilde{O}(r^{.5} d^{1.5}/\epsilon) \leq \widetilde{O}(d^{3}/\epsilon)$ copies suffice for accuracy $\epsilon$ with respect to (Bures) $\chi^2$-divergence.
We also improve the best known sample complexity for the classical version of mutual information testing to $\widetilde{O}(d$…
arXiv Detail & Related papers (2023-05-29T18:00:02Z) - Detection of Dense Subhypergraphs by Low-Degree Polynomials [72.4451045270967]
Detection of a planted dense subgraph in a random graph is a fundamental statistical and computational problem.
We consider detecting the presence of a planted $G^{(r)}(n^{\gamma}, n^{-\alpha})$ subhypergraph in a $G^{(r)}(n, n^{-\beta})$ hypergraph.
Our results are already new in the graph case $r=2$, as we consider the subtle log-density regime where hardness based on average-case reductions is not known.
arXiv Detail & Related papers (2023-04-17T10:38:08Z) - Differentially Private Stochastic Gradient Descent with Low-Noise [49.981789906200035]
Modern machine learning algorithms aim to extract fine-grained information from data to provide accurate predictions, which often conflicts with the goal of privacy protection.
This paper addresses the practical and theoretical importance of developing machine learning algorithms that ensure good predictive performance while preserving privacy.
arXiv Detail & Related papers (2022-09-09T08:54:13Z) - Learning a Single Neuron with Adversarial Label Noise via Gradient Descent [50.659479930171585]
We study a function of the form $\mathbf{x} \mapsto \sigma(\mathbf{w} \cdot \mathbf{x})$ for monotone activations.
The goal of the learner is to output a hypothesis vector $\mathbf{w}$ such that $F(\mathbf{w}) = C \cdot \mathrm{OPT} + \epsilon$ with high probability.
arXiv Detail & Related papers (2022-06-17T17:55:43Z) - Low-degree learning and the metric entropy of polynomials [44.99833362998488]
We prove that any (deterministic or randomized) algorithm which learns $\mathscr{F}_{n,d}$ with $L_2$-accuracy $\varepsilon$ requires at least $\Omega((1-\sqrt{\varepsilon})\,2^d \log n)$ queries, and that $\log \mathsf{M}(\mathscr{F}_{n,d}, \|\cdot\|_{L_2}, \varepsilon)$ satisfies the two-sided estimate $c(1-\varepsilon)\,2^d \log n \leq \log \mathsf{M}(\mathscr{F}_{n,d}, \|\cdot\|_{L_2}, \varepsilon) \leq$ …
arXiv Detail & Related papers (2022-03-17T23:52:08Z) - Self-training Converts Weak Learners to Strong Learners in Mixture Models [86.7137362125503]
We show that a pseudolabeler $\boldsymbol{\beta}_{\mathrm{pl}}$ can achieve classification error at most $C_{\mathrm{err}}$.
We additionally show that by running gradient descent on the logistic loss one can obtain a pseudolabeler $\boldsymbol{\beta}_{\mathrm{pl}}$ with classification error $C_{\mathrm{err}}$ using only $O(d)$ labeled examples.
arXiv Detail & Related papers (2021-06-25T17:59:16Z) - Infinite-Horizon Offline Reinforcement Learning with Linear Function Approximation: Curse of Dimensionality and Algorithm [46.36534144138337]
In this paper, we investigate the sample complexity of policy evaluation in offline reinforcement learning.
Under the low distribution shift assumption, we show that there is an algorithm that needs at most $O\left(\max\left\{\frac{\left\Vert \theta^{\pi}\right\Vert_2^4}{\varepsilon^4}\log\frac{d}{\delta},\ \frac{1}{\varepsilon^2}\left(d+\log\frac{1}{\delta}\right)\right\}\right)$ samples to approximate the value function.
arXiv Detail & Related papers (2021-03-17T18:18:57Z) - Model-Free Reinforcement Learning: from Clipped Pseudo-Regret to Sample Complexity [59.34067736545355]
Given an MDP with $S$ states, $A$ actions, a discount factor $\gamma \in (0,1)$, and an approximation threshold $\epsilon > 0$, we provide a model-free algorithm to learn an $\epsilon$-optimal policy.
For small enough $\epsilon$, we give an algorithm with improved sample complexity.
arXiv Detail & Related papers (2020-06-06T13:34:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.