Improved classical shadows from local symmetries in the Schur basis
- URL: http://arxiv.org/abs/2405.09525v1
- Date: Wed, 15 May 2024 17:33:10 GMT
- Title: Improved classical shadows from local symmetries in the Schur basis
- Authors: Daniel Grier, Sihan Liu, Gaurav Mahajan
- Abstract summary: We study the sample complexity of the classical shadows task.
We present the first joint measurement protocol for classical shadows whose sample complexity scales with the rank of the unknown state.
- Score: 4.462208715451194
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We study the sample complexity of the classical shadows task: what is the fewest number of copies of an unknown state you need to measure to predict expected values with respect to some class of observables? Large joint measurements are likely required in order to minimize sample complexity, but previous joint measurement protocols only work when the unknown state is pure. We present the first joint measurement protocol for classical shadows whose sample complexity scales with the rank of the unknown state. In particular we prove $\mathcal O(\sqrt{rB}/\epsilon^2)$ samples suffice, where $r$ is the rank of the state, $B$ is a bound on the squared Frobenius norm of the observables, and $\epsilon$ is the target accuracy. In the low-rank regime, this is a nearly quadratic advantage over traditional approaches that use single-copy measurements. We present several intermediate results that may be of independent interest: a solution to a new formulation of classical shadows that captures functions of non-identical input states; a generalization of a ``nice'' Schur basis used for optimal qubit purification and quantum majority vote; and a measurement strategy that allows us to use local symmetries in the Schur basis to avoid intractable Weingarten calculations in the analysis.
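As a rough numerical illustration (not from the paper; constants are suppressed and the names are ours), the claimed $\mathcal O(\sqrt{rB}/\epsilon^2)$ joint-measurement bound can be compared against the $\mathcal O(B/\epsilon^2)$ scaling of traditional single-copy classical shadows, showing the nearly quadratic advantage in the low-rank regime:

```python
import math

def joint_bound(r, B, eps):
    """Scaling of the joint-measurement protocol: sqrt(r*B)/eps^2.
    r: rank of the unknown state, B: bound on squared Frobenius
    norm of the observables, eps: target accuracy."""
    return math.sqrt(r * B) / eps**2

def single_copy_bound(B, eps):
    """Scaling of traditional single-copy classical shadows: B/eps^2."""
    return B / eps**2

# Low-rank regime (r much smaller than B): sqrt(r*B) is far below B.
r, B, eps = 4, 10_000, 0.1
print(joint_bound(r, B, eps))     # sqrt(40000) / 0.01 = 20000.0
print(single_copy_bound(B, eps))  # 10000 / 0.01 = 1000000.0
```

With $r = 4$ and $B = 10^4$, the joint-measurement scaling is a factor of 50 smaller, consistent with the roughly $\sqrt{B/r}$ improvement the abstract describes.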
Related papers
- Principal eigenstate classical shadows [0.0]
Given many copies of an unknown quantum state $\rho$, we consider the task of learning a classical description of its principal eigenstate.
We present a protocol for this task scaling with the principal eigenvalue $\lambda$ and show that it is optimal within a space of natural approaches.
arXiv Detail & Related papers (2024-05-22T19:13:05Z)
- Biased Estimator Channels for Classical Shadows [0.0]
We consider a biased scheme, intentionally introducing a bias by rescaling the conventional classical shadows estimators.
We analytically prove average case as well as worst- and best-case scenarios, and rigorously prove that it is, in principle, always worth biasing the estimators.
arXiv Detail & Related papers (2024-02-14T19:00:01Z)
- Efficient Local Classical Shadow Tomography with Number Conservation [0.0]
Shadow tomography aims to build a classical description of a quantum state from a sequence of simple random measurements.
We propose and analyze a new local shadow protocol adapted to systems with fundamental number conservation laws, such as ultracold atoms.
arXiv Detail & Related papers (2023-11-15T19:00:01Z)
- Kernel-based off-policy estimation without overlap: Instance optimality beyond semiparametric efficiency [53.90687548731265]
We study optimal procedures for estimating a linear functional based on observational data.
For any convex and symmetric function class $\mathcal{F}$, we derive a non-asymptotic local minimax bound on the mean-squared error.
arXiv Detail & Related papers (2023-01-16T02:57:37Z)
- Lower bounds for learning quantum states with single-copy measurements [3.2590610391507444]
We study the problems of quantum tomography and shadow tomography using measurements performed on individual copies of an unknown $d$-dimensional state.
In particular, this rigorously establishes the optimality of the folklore ``Pauli tomography'' algorithm in terms of its complexity.
arXiv Detail & Related papers (2022-07-29T02:26:08Z)
- Estimating Quantum Hamiltonians via Joint Measurements of Noisy Non-Commuting Observables [0.0]
We introduce a method for performing a single joint measurement that can be implemented locally.
We derive bounds on the number of experimental repetitions required to estimate energies up to a certain precision.
We adapt the joint measurement strategy to minimise the sample complexity when the implementation of measurements is assumed noisy.
arXiv Detail & Related papers (2022-06-17T17:42:54Z)
- Towards Sample-Optimal Compressive Phase Retrieval with Sparse and Generative Priors [59.33977545294148]
We show that $O(k \log L)$ samples suffice to guarantee that the signal is close to any vector that minimizes an amplitude-based empirical loss function.
We adapt this result to sparse phase retrieval, and show that $O(s \log n)$ samples are sufficient for a similar guarantee when the underlying signal is $s$-sparse and $n$-dimensional.
arXiv Detail & Related papers (2021-06-29T12:49:54Z)
- Byzantine-Resilient Non-Convex Stochastic Gradient Descent [61.6382287971982]
We study adversary-resilient distributed optimization, in which machines can independently compute gradients and cooperate.
Our algorithm is based on a new concentration technique, and we analyze its sample complexity.
It is very practical: it improves upon the performance of all prior methods when no Byzantine machines are present.
arXiv Detail & Related papers (2020-12-28T17:19:32Z)
- Making Affine Correspondences Work in Camera Geometry Computation [62.7633180470428]
Local features provide region-to-region rather than point-to-point correspondences.
We propose guidelines for effective use of region-to-region matches in the course of a full model estimation pipeline.
Experiments show that affine solvers can achieve accuracy comparable to point-based solvers at faster run-times.
arXiv Detail & Related papers (2020-07-20T12:07:48Z)
- Learning Minimax Estimators via Online Learning [55.92459567732491]
We consider the problem of designing minimax estimators for estimating parameters of a probability distribution.
We construct an algorithm for finding a mixed-strategy Nash equilibrium.
arXiv Detail & Related papers (2020-06-19T22:49:42Z)
- Sample Complexity of Asynchronous Q-Learning: Sharper Analysis and Variance Reduction [63.41789556777387]
Asynchronous Q-learning aims to learn the optimal action-value function (or Q-function) of a Markov decision process (MDP).
We show that the number of samples needed to yield an entrywise $\varepsilon$-accurate estimate of the Q-function is at most on the order of $\frac{1}{\mu_{\min}(1-\gamma)^5\varepsilon^2} + \frac{t_{\mathrm{mix}}}{\mu_{\min}(1-\gamma)}$ up to some logarithmic factor.
arXiv Detail & Related papers (2020-06-04T17:51:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.