Optimal high-precision shadow estimation
- URL: http://arxiv.org/abs/2407.13874v1
- Date: Thu, 18 Jul 2024 19:42:49 GMT
- Title: Optimal high-precision shadow estimation
- Authors: Sitan Chen, Jerry Li, Allen Liu
- Abstract summary: Formally we give a protocol that measures $O(\log(m)/\epsilon^2)$ copies of an unknown mixed state $\rho\in\mathbb{C}^{d\times d}$.
We show via dimensionality reduction that we can rescale $\epsilon$ and $d$ to reduce to the regime where $\epsilon \le O(d^{-1/2})$.
- Score: 22.01044188849049
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We give the first tight sample complexity bounds for shadow tomography and classical shadows in the regime where the target error is below some sufficiently small inverse polynomial in the dimension of the Hilbert space. Formally we give a protocol that, given any $m\in\mathbb{N}$ and $\epsilon \le O(d^{-12})$, measures $O(\log(m)/\epsilon^2)$ copies of an unknown mixed state $\rho\in\mathbb{C}^{d\times d}$ and outputs a classical description of $\rho$ which can then be used to estimate any collection of $m$ observables to within additive accuracy $\epsilon$. Previously, even for the simpler task of shadow tomography -- where the $m$ observables are known in advance -- the best known rates either scaled benignly but suboptimally in all of $m, d, \epsilon$, or scaled optimally in $\epsilon, m$ but had additional polynomial factors in $d$ for general observables. Intriguingly, we also show, via dimensionality reduction, that we can rescale $\epsilon$ and $d$ to reduce to the regime where $\epsilon \le O(d^{-1/2})$. Our algorithm draws upon representation-theoretic tools recently developed in the context of full state tomography.
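The protocol above is an optimal refinement of the classical-shadows primitive of Huang, Kueng, and Preskill: measure each copy in a random basis, invert the measurement channel to obtain an unbiased snapshot of $\rho$, and combine snapshots with median-of-means so that all $m$ observables are estimated at only a $\log(m)$ overhead. A minimal single-qubit sketch of that standard primitive (random Pauli bases; this is NOT the paper's representation-theoretic protocol, and all function names here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
I2 = np.eye(2)
# Measurement bases: columns are the eigenvectors of X, Y, Z.
X_basis = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
Y_basis = np.array([[1, 1], [1j, -1j]]) / np.sqrt(2)
bases = [X_basis, Y_basis, I2]

def sample_snapshot(rho):
    """Measure one copy of rho in a uniformly random Pauli basis and
    return the inverted-channel snapshot 3*U|b><b|U^dag - I, whose
    expectation over basis and outcome equals rho."""
    U = bases[rng.integers(3)]
    probs = np.real([U[:, b].conj() @ rho @ U[:, b] for b in (0, 1)])
    b = rng.choice(2, p=probs / probs.sum())
    v = U[:, b:b + 1]
    return 3 * (v @ v.conj().T) - I2

def shadow_estimate(rho, observables, n_copies, n_batches=10):
    """Median-of-means estimate of Tr(O rho) for each observable O."""
    snaps = [sample_snapshot(rho) for _ in range(n_copies)]
    estimates = []
    for O in observables:
        batch_means = [
            np.mean([np.real(np.trace(O @ s)) for s in chunk])
            for chunk in np.array_split(snaps, n_batches)
        ]
        estimates.append(np.median(batch_means))
    return estimates

# Example: rho = |0><0|; the true values are <Z> = 1 and <X> = 0.
rho = np.array([[1, 0], [0, 0]], dtype=complex)
Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
z_est, x_est = shadow_estimate(rho, [Z, X], n_copies=3000)
```

The factor 3 in the snapshot inverts the depolarizing channel induced by the random Pauli measurement; the median-of-means over batches is what yields the $\log(m)$ dependence on the number of observables.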
Related papers
- Optimal Sketching for Residual Error Estimation for Matrix and Vector Norms [50.15964512954274]
We study the problem of residual error estimation for matrix and vector norms using a linear sketch.
We demonstrate that this gives a substantial advantage empirically, for roughly the same sketch size and accuracy as in previous work.
We also show an $\Omega(k^{2/p} n^{1-2/p})$ lower bound for the sparse recovery problem, which is tight up to a $\mathrm{poly}(\log n)$ factor.
arXiv Detail & Related papers (2024-08-16T02:33:07Z) - Projection by Convolution: Optimal Sample Complexity for Reinforcement Learning in Continuous-Space MDPs [56.237917407785545]
We consider the problem of learning an $\varepsilon$-optimal policy in a general class of continuous-space Markov decision processes (MDPs) having smooth Bellman operators.
Key to our solution is a novel projection technique based on ideas from harmonic analysis.
Our result bridges the gap between two popular but conflicting perspectives on continuous-space MDPs.
arXiv Detail & Related papers (2024-05-10T09:58:47Z) - Resource-efficient shadow tomography using equatorial stabilizer measurements [0.0]
Equatorial-stabilizer-based shadow-tomography schemes can estimate $M$ observables using $\mathcal{O}(\log(M)\,\mathrm{poly}(n)\,1/\varepsilon^2)$ sampling copies.
We numerically confirm our theoretically-derived shadow-tomographic sampling complexities with random pure states and multiqubit graph states.
arXiv Detail & Related papers (2023-11-24T17:33:44Z) - A Unified Framework for Uniform Signal Recovery in Nonlinear Generative
Compressed Sensing [68.80803866919123]
Under nonlinear measurements, most prior results are non-uniform, i.e., they hold with high probability for a fixed $\mathbf{x}^*$ rather than for all $\mathbf{x}^*$ simultaneously.
Our framework accommodates GCS with 1-bit/uniformly quantized observations and single index models as canonical examples.
We also develop a concentration inequality that produces tighter bounds for product processes whose index sets have low metric entropy.
arXiv Detail & Related papers (2023-09-25T17:54:19Z) - Quantum and classical low-degree learning via a dimension-free Remez
inequality [52.12931955662553]
We show a new way to relate functions on the hypergrid to their harmonic extensions over the polytorus.
We show that the supremum of a function $f$ over products of the cyclic group $\{\exp(2\pi i k/K)\}_{k=1}^{K}$ approximates its supremum over the polytorus.
We extend to new spaces a recent line of work [EI22, CHP, VZ22] that gave similarly efficient methods for learning low-degree functions on hypercubes and observables on qubits.
arXiv Detail & Related papers (2023-01-04T04:15:40Z) - Sample-optimal classical shadows for pure states [0.0]
We consider the classical shadows task for pure states in the setting of both joint and independent measurements.
In the independent measurement setting, we show that $\mathcal{O}(\sqrt{Bd}\,\epsilon^{-1} + \epsilon^{-2})$ samples suffice.
arXiv Detail & Related papers (2022-11-21T19:24:17Z) - Quantum tomography using state-preparation unitaries [0.22940141855172028]
We describe algorithms to obtain an approximate classical description of a $d$-dimensional quantum state when given access to a unitary.
We show that it takes $\widetilde{\Theta}(d/\varepsilon)$ applications of the unitary to obtain an $\varepsilon$-$\ell$-approximation of the state.
We give an efficient algorithm for obtaining Schatten $q$-optimal estimates of a rank-$r$ mixed state.
arXiv Detail & Related papers (2022-07-18T17:56:18Z) - Learning a Single Neuron with Adversarial Label Noise via Gradient
Descent [50.659479930171585]
We study a function of the form $\mathbf{x}\mapsto\sigma(\mathbf{w}\cdot\mathbf{x})$ for monotone activations.
The goal of the learner is to output a hypothesis vector $\mathbf{w}$ such that $F(\mathbf{w}) = C\cdot\mathrm{OPT} + \epsilon$ with high probability.
arXiv Detail & Related papers (2022-06-17T17:55:43Z) - Sparse sketches with small inversion bias [79.77110958547695]
Inversion bias arises when averaging estimates of quantities that depend on the inverse covariance.
We develop a framework for analyzing inversion bias, based on our proposed concept of an $(\epsilon,\delta)$-unbiased estimator for random matrices.
We show that when the sketching matrix $S$ is dense and has i.i.d. sub-gaussian entries, the estimator is $(\epsilon,\delta)$-unbiased for $(A^\top A)^{-1}$ with a sketch of size $m=O(d+\sqrt{d}/\epsilon)$.
arXiv Detail & Related papers (2020-11-21T01:33:15Z) - Faster Binary Embeddings for Preserving Euclidean Distances [9.340611077939828]
We propose a fast, distance-preserving, binary embedding algorithm to transform a dataset $\mathcal{T}\subseteq\mathbb{R}^n$ into binary sequences in the cube $\{\pm 1\}^m$.
Our method is both fast and memory efficient, with time complexity $O(m)$ and space complexity $O(m)$.
arXiv Detail & Related papers (2020-10-01T22:41:41Z)
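For reference, the simplest binary embedding is the sign-of-random-projection (SimHash-style) construction, where the normalized Hamming distance between codes estimates the angle between the original vectors. This baseline is not the paper's algorithm (which is faster and preserves Euclidean rather than angular distance); all names below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def binary_embed(points, m):
    """Map each row of `points` (vectors in R^n) to a sign vector in
    {+-1}^m via a dense random Gaussian projection. A baseline
    construction for comparison, not the paper's fast method."""
    n = points.shape[1]
    A = rng.standard_normal((n, m))
    return np.sign(points @ A)

def angle_estimate(b1, b2):
    """pi times the normalized Hamming distance between two codes
    estimates the angle between the original vectors (the SimHash
    collision identity: P[signs disagree] = angle / pi)."""
    return np.pi * np.mean(b1 != b2)

# Two orthogonal vectors: the true angle is pi/2.
x = np.array([1.0, 0.0, 0.0])
y = np.array([0.0, 1.0, 0.0])
B = binary_embed(np.stack([x, y]), m=4096)
est = angle_estimate(B[0], B[1])  # close to pi/2 for large m
```

With $m$ bits, the Hamming estimate of the collision probability concentrates at rate $O(1/\sqrt{m})$, which is why larger codes preserve distances more accurately.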
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.