Shadow Tomography Against Adversaries
- URL: http://arxiv.org/abs/2512.05451v1
- Date: Fri, 05 Dec 2025 06:06:07 GMT
- Title: Shadow Tomography Against Adversaries
- Authors: Maryam Aliakbarpour, Vladimir Braverman, Nai-Hui Chia, Chia-Ying Lin, Yuhan Liu, Aadil Oufkir, Yu-Ching Shen,
- Abstract summary: We show that all non-adaptive shadow tomography algorithms must incur an error of $\varepsilon=\tilde{\Omega}(\gamma\min\{\sqrt{M}, \sqrt{d}\})$ for some choice of observables, even with unlimited copies. We design an algorithm that achieves an error of $\varepsilon=\tilde{O}(\gamma\max_{i\in[M]}\|O_i\|_{HS})$, nearly matching this worst-case lower bound.
- Score: 31.34964957208756
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study single-copy shadow tomography in the adversarially robust setting, where the goal is to learn the expectation values of $M$ observables $O_1, \ldots, O_M$ to $\varepsilon$ accuracy, but a $\gamma$-fraction of the outcomes can be arbitrarily corrupted by an adversary. We show that all non-adaptive shadow tomography algorithms must incur an error of $\varepsilon=\tilde{\Omega}(\gamma\min\{\sqrt{M}, \sqrt{d}\})$ for some choice of observables, even with unlimited copies. Unfortunately, the classical shadows algorithm of [HKP20] and naive algorithms that directly measure each observable suffer even more. We design an algorithm that achieves an error of $\varepsilon=\tilde{O}(\gamma\max_{i\in[M]}\|O_i\|_{HS})$, which nearly matches our worst-case error lower bound for $M\ge d$ and guarantees better accuracy when the observables have stronger structure. Remarkably, the algorithm needs only $n=\frac{1}{\gamma^2}\log(M/\delta)$ copies to achieve that error with probability at least $1-\delta$, matching the sample complexity of the classical shadows algorithm that achieves the same error without corrupted measurement outcomes. Our algorithm is conceptually simple and easy to implement. Classical simulation for fidelity estimation shows that our algorithm enjoys much stronger robustness than [HKP20] under adversarial noise. Finally, based on a reduction from full-state tomography to shadow tomography, we prove that for rank-$r$ states, both the near-optimal asymptotic error of $\varepsilon=\tilde{O}(\gamma\sqrt{r})$ and copy complexity $\tilde{O}(dr^2/\varepsilon^2)=\tilde{O}(dr/\gamma^2)$ can be achieved for adversarially robust state tomography, closing the large gap in [ABCL25], where the optimal error could only be achieved using a pseudo-polynomial number of copies in $d$.
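The abstract describes the algorithm as conceptually simple, with a copy complexity of $n=\frac{1}{\gamma^2}\log(M/\delta)$ that matches the uncorrupted classical shadows guarantee. The paper's exact procedure is not reproduced in this digest; the sketch below is a generic median-of-means aggregation (all names, parameters, and the simulated data are illustrative assumptions, not the authors' implementation) showing why block-median aggregation of single-copy estimates tolerates a $\gamma$-fraction of adversarially corrupted outcomes, while the naive mean used by unrobust estimators does not.

```python
import numpy as np

def median_of_means(samples, num_blocks):
    """Split samples into blocks, average within each block,
    and return the median of the block averages. Outliers can
    spoil at most the blocks they land in, so a small corrupted
    fraction cannot move the median far from the true mean."""
    samples = np.asarray(samples, dtype=float)
    blocks = np.array_split(samples, num_blocks)
    return float(np.median([block.mean() for block in blocks]))

# Toy simulation (illustrative, not the paper's experiment):
# per-copy estimates of a single expectation value Tr(O rho) = 0.5,
# with a gamma-fraction of outcomes overwritten by an adversary.
rng = np.random.default_rng(0)
n, gamma = 20_000, 0.05
estimates = rng.normal(loc=0.5, scale=1.0, size=n)
k = int(gamma * n)
estimates[:k] = 1e6           # adversary corrupts a gamma-fraction

naive = estimates.mean()                          # ruined by corruption
robust = median_of_means(estimates, num_blocks=200)  # stays near 0.5
```

With these numbers, the naive mean is dragged to roughly $\gamma \cdot 10^6 \approx 5 \times 10^4$, while the median-of-means estimate stays within a few hundredths of the true value 0.5. For context, plugging illustrative values into the paper's stated copy complexity, $\gamma=0.1$, $M=10^3$, $\delta=10^{-2}$ gives $n = \frac{1}{0.01}\log(10^5) \approx 1.2 \times 10^3$ copies.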
Related papers
- Instance-optimal high-precision shadow tomography with few-copy measurements: A metrological approach [2.956729394666618]
We study the sample complexity of shadow tomography in the high-precision regime. We use possibly adaptive measurements that act on $O(\mathrm{polylog}(d))$ copies of the state at a time.
arXiv Detail & Related papers (2026-02-04T19:00:00Z) - The debiased Keyl's algorithm: a new unbiased estimator for full state tomography [1.4302622916198997]
We present the debiased Keyl's algorithm, the first estimator for full state tomography that is both unbiased and sample-optimal. We show that $n = O(rd/\varepsilon^2)$ copies are sufficient to learn a rank-$r$ mixed state to trace distance error $\varepsilon$, which is optimal. We further show that $n = O(rd/\varepsilon^2)$ copies are sufficient to learn to error $\varepsilon$ in the more challenging Bures distance, which is also optimal.
arXiv Detail & Related papers (2025-10-09T05:07:12Z) - Optimal lower bounds for quantum state tomography [0.9969485010222057]
We show that $n = \Omega(rd/\varepsilon^2)$ copies are necessary to learn a rank-$r$ mixed state $\rho \in \mathbb{C}^{d \times d}$ up to error $\varepsilon$ in trace distance. A key technical ingredient in our proof, which may be of independent interest, is a reduction that converts any algorithm for projector tomography that learns to error $\varepsilon$ in trace distance into an algorithm that learns to error $O(\varepsilon)$ in the more stringent Bures distance.
arXiv Detail & Related papers (2025-10-09T02:36:48Z) - Optimal high-precision shadow estimation [22.01044188849049]
Formally, we give a protocol that measures $O(\log(m)/\epsilon^2)$ copies of an unknown mixed state $\rho \in \mathbb{C}^{d \times d}$.
We show via dimensionality reduction that we can rescale $\epsilon$ and $d$ to reduce to the regime where $\epsilon \le O(d^{-1/2})$.
arXiv Detail & Related papers (2024-07-18T19:42:49Z) - Distribution-Independent Regression for Generalized Linear Models with Oblivious Corruptions [49.69852011882769]
We show the first algorithms for the problem of regression for generalized linear models (GLMs) in the presence of additive oblivious noise.
We present an algorithm that tackles this problem in its most general distribution-independent setting.
This is the first algorithmic result for GLM regression with oblivious noise that can handle more than half the samples being arbitrarily corrupted.
arXiv Detail & Related papers (2023-09-20T21:41:59Z) - Efficiently Learning One-Hidden-Layer ReLU Networks via Schur Polynomials [50.90125395570797]
We study the problem of PAC learning a linear combination of $k$ ReLU activations under the standard Gaussian distribution on $\mathbb{R}^d$ with respect to the square loss.
Our main result is an efficient algorithm for this learning task with sample and computational complexity $(dk/\epsilon)^{O(k)}$, where $\epsilon>0$ is the target accuracy.
arXiv Detail & Related papers (2023-07-24T14:37:22Z) - Near-Optimal Bounds for Learning Gaussian Halfspaces with Random Classification Noise [50.64137465792738]
We show that any efficient SQ algorithm for the problem requires sample complexity at least $\Omega(d^{1/2}/(\max\{p, \epsilon\})^2)$.
Our lower bound suggests that this quadratic dependence on $1/epsilon$ is inherent for efficient algorithms.
arXiv Detail & Related papers (2023-07-13T18:59:28Z) - Detection of Dense Subhypergraphs by Low-Degree Polynomials [72.4451045270967]
Detection of a planted dense subgraph in a random graph is a fundamental statistical and computational problem.
We consider detecting the presence of a planted $G^{(r)}(n^{\gamma}, n^{-\alpha})$ subhypergraph in a $G^{(r)}(n, n^{-\beta})$ hypergraph.
Our results are already new in the graph case $r=2$, as we consider the subtle log-density regime where hardness based on average-case reductions is not known.
arXiv Detail & Related papers (2023-04-17T10:38:08Z) - Quantum tomography using state-preparation unitaries [0.22940141855172028]
We describe algorithms to obtain an approximate classical description of a $d$-dimensional quantum state when given access to a unitary.
We show that it takes $\widetilde{\Theta}(d/\varepsilon)$ applications of the unitary to obtain an $\varepsilon$-$\ell$-approximation of the state.
We give an efficient algorithm for obtaining Schatten $q$-optimal estimates of a rank-$r$ mixed state.
arXiv Detail & Related papers (2022-07-18T17:56:18Z) - Robust Sparse Mean Estimation via Sum of Squares [42.526664955704746]
We study the problem of high-dimensional sparse mean estimation in the presence of an $epsilon$-fraction of adversarial outliers.
Our algorithms follow the Sum-of-Squares-based proofs-to-algorithms approach.
arXiv Detail & Related papers (2022-06-07T16:49:54Z) - Streaming Complexity of SVMs [110.63976030971106]
We study the space complexity of solving the bias-regularized SVM problem in the streaming model.
We show that, for both problems, for dimensions of order $\frac{1}{\lambda\epsilon}$, one can obtain streaming algorithms with space polynomially smaller than $\frac{1}{\lambda\epsilon}$.
arXiv Detail & Related papers (2020-07-07T17:10:00Z) - Model-Free Reinforcement Learning: from Clipped Pseudo-Regret to Sample Complexity [59.34067736545355]
Given an MDP with $S$ states, $A$ actions, a discount factor $\gamma \in (0,1)$, and an approximation threshold $\epsilon > 0$, we provide a model-free algorithm to learn an $\epsilon$-optimal policy.
For small enough $\epsilon$, we show an algorithm with improved sample complexity.
arXiv Detail & Related papers (2020-06-06T13:34:41Z) - Locally Private Hypothesis Selection [96.06118559817057]
We output a distribution from $\mathcal{Q}$ whose total variation distance to $p$ is comparable to the best such distribution.
We show that the constraint of local differential privacy incurs an exponential increase in cost.
Our algorithms result in exponential improvements on the round complexity of previous methods.
arXiv Detail & Related papers (2020-02-21T18:30:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information shown and is not responsible for any consequences of its use.