Real classical shadows
- URL: http://arxiv.org/abs/2410.23481v1
- Date: Wed, 30 Oct 2024 22:15:39 GMT
- Title: Real classical shadows
- Authors: Maxwell West, Antonio Anna Mele, Martin Larocca, M. Cerezo
- Abstract summary: We study the case where $\mathcal{U}$ corresponds to either local or global orthogonal Clifford gates, and $\mathcal{W}$ consists of real-valued vectors.
Our results show that for various situations of interest, this ``real'' classical shadow protocol improves the sample complexity over the standard scheme.
- Abstract: Efficiently learning expectation values of a quantum state using classical shadow tomography has become a fundamental task in quantum information theory. In a classical shadows protocol, one measures a state in a chosen basis $\mathcal{W}$ after it has evolved under a unitary transformation randomly sampled from a chosen distribution $\mathcal{U}$. In this work we study the case where $\mathcal{U}$ corresponds to either local or global orthogonal Clifford gates, and $\mathcal{W}$ consists of real-valued vectors. Our results show that for various situations of interest, this ``real'' classical shadow protocol improves the sample complexity over the standard scheme based on general Clifford unitaries. For example, when one is interested in estimating the expectation values of arbitrary real-valued observables, global orthogonal Cliffords decrease the required number of samples by a factor of two. More dramatically, for $k$-local observables composed only of real-valued Pauli operators, sampling local orthogonal Cliffords leads to a reduction by an exponential-in-$k$ factor in the sample complexity over local unitary Cliffords. Finally, we show that by measuring in a basis containing complex-valued vectors, orthogonal shadows can, in the limit of large system size, exactly reproduce the original unitary shadows protocol.
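To make the protocol described in the abstract concrete, below is a minimal single-qubit sketch of the standard local-Clifford classical shadows scheme that the paper benchmarks against (random Pauli-basis measurement followed by the inverse of the measurement channel). It is only an illustrative sketch using numpy; the target state, observable, and sample count are assumptions for the example, and the paper's orthogonal ("real") Clifford variant is not implemented here.

```python
import numpy as np

# Pauli matrices
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Measuring in a random Pauli basis (X, Y, or Z) is equivalent to applying a
# random single-qubit Clifford rotation before a computational-basis measurement.
BASIS_ROTATIONS = {
    "X": np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2),    # Hadamard
    "Y": np.array([[1, -1j], [1, 1j]], dtype=complex) / np.sqrt(2),  # H S^dagger
    "Z": I2,
}

def shadow_estimate(rho, observable, num_samples, rng):
    """Estimate Tr(O rho) for one qubit via classical shadows (local Cliffords)."""
    estimates = []
    for _ in range(num_samples):
        basis = rng.choice(["X", "Y", "Z"])
        U = BASIS_ROTATIONS[basis]
        # Outcome probabilities of the computational-basis measurement on U rho U^dagger.
        probs = np.clip(np.real(np.diag(U @ rho @ U.conj().T)), 0.0, None)
        b = rng.choice([0, 1], p=probs / probs.sum())
        ket = U.conj().T[:, b].reshape(2, 1)            # rotated-back outcome state |b>
        snapshot = 3 * (ket @ ket.conj().T) - I2        # inverse of the measurement channel
        estimates.append(np.real(np.trace(observable @ snapshot)))
    return np.mean(estimates)

rng = np.random.default_rng(0)
plus = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)  # |+><+|
print(shadow_estimate(plus, X, 20_000, rng))              # should be close to 1
```

Averaging many such single-shot snapshots converges to the true expectation value; the paper's contribution concerns how the required number of snapshots changes when the random rotations are drawn from orthogonal (real) Cliffords instead.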
Related papers
- Improved classical shadows from local symmetries in the Schur basis [4.462208715451194]
We study the sample complexity of the classical shadows task.
We present the first joint measurement protocol for classical shadows whose sample complexity scales with the rank of the unknown state.
arXiv Detail & Related papers (2024-05-15T17:33:10Z) - Learning with Norm Constrained, Over-parameterized, Two-layer Neural Networks [54.177130905659155]
Recent studies show that a reproducing kernel Hilbert space (RKHS) is not a suitable space to model functions by neural networks.
In this paper, we study a suitable function space for over-parameterized two-layer neural networks with bounded norms.
arXiv Detail & Related papers (2024-04-29T15:04:07Z) - Computational-Statistical Gaps in Gaussian Single-Index Models [77.1473134227844]
Single-Index Models are high-dimensional regression problems with planted structure.
We show that computationally efficient algorithms, both within the Statistical Query (SQ) and the Low-Degree Polynomial (LDP) framework, necessarily require $\Omega(d^{k^\star/2})$ samples.
arXiv Detail & Related papers (2024-03-08T18:50:19Z) - Resource-efficient shadow tomography using equatorial stabilizer measurements [0.0]
Equatorial-stabilizer-based shadow-tomography schemes can estimate $M$ observables using $\mathcal{O}(\log(M)\,\mathrm{poly}(n)\,1/\varepsilon^2)$ sampling copies.
We numerically confirm our theoretically-derived shadow-tomographic sampling complexities with random pure states and multiqubit graph states.
arXiv Detail & Related papers (2023-11-24T17:33:44Z) - Polynomial-Time Solutions for ReLU Network Training: A Complexity
Classification via Max-Cut and Zonotopes [70.52097560486683]
We prove that the hardness of approximation of ReLU networks not only mirrors the complexity of the Max-Cut problem but also, in certain special cases, exactly corresponds to it.
In particular, when $\epsilon \leq \sqrt{84/83}-1 \approx 0.006$, we show that it is NP-hard to find an approximate global optimum of the ReLU network objective with relative error $\epsilon$ with respect to the objective value.
arXiv Detail & Related papers (2023-11-18T04:41:07Z) - Operator relaxation and the optimal depth of classical shadows [0.0]
We study the sample complexity of learning the expectation value of Pauli operators via ``shallow shadows''.
We show that the shadow norm is expressed in terms of properties of the Heisenberg time evolution of operators under the randomizing circuit.
arXiv Detail & Related papers (2022-12-22T18:46:46Z) - Closed-form analytic expressions for shadow estimation with brickwork
circuits [0.4997673761305335]
Properties of quantum systems can be estimated using classical shadows.
We derive analytical expressions for shadow estimation using brickwork circuits.
We find improved sample complexity in the estimation of observables supported on sufficiently many qubits.
arXiv Detail & Related papers (2022-11-17T19:01:15Z) - Instance-Dependent Generalization Bounds via Optimal Transport [51.71650746285469]
Existing generalization bounds fail to explain crucial factors that drive the generalization of modern neural networks.
We derive instance-dependent generalization bounds that depend on the local Lipschitz regularity of the learned prediction function in the data space.
We empirically analyze our generalization bounds for neural networks, showing that the bound values are meaningful and capture the effect of popular regularization methods during training.
arXiv Detail & Related papers (2022-11-02T16:39:42Z) - Classical shadows with Pauli-invariant unitary ensembles [0.0]
We consider the class of Pauli-invariant unitary ensembles that are invariant under multiplication by a Pauli operator.
Our results pave the way for more efficient or robust protocols for predicting important properties of quantum states.
arXiv Detail & Related papers (2022-02-07T15:06:30Z) - Sample Complexity of Asynchronous Q-Learning: Sharper Analysis and
Variance Reduction [63.41789556777387]
Asynchronous Q-learning aims to learn the optimal action-value function (or Q-function) of a Markov decision process (MDP)
We show that the number of samples needed to yield an entrywise $\varepsilon$-accurate estimate of the Q-function is at most on the order of $\frac{1}{\mu_{\min}(1-\gamma)^5\varepsilon^2} + \frac{t_{\mathrm{mix}}}{\mu_{\min}(1-\gamma)}$ up to some logarithmic factor.
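As a rough illustration of the asynchronous Q-learning update this bound refers to, here is a toy tabular sketch: a single trajectory is followed and only the visited state-action entry is updated at each step. The two-state MDP, behavior policy, step-size schedule, and discount factor below are made-up assumptions for the example, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n_states, n_actions, gamma = 2, 2, 0.9

# Toy MDP: P[s, a] is the next-state distribution, R[s, a] the expected reward.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.7, 0.3]]])
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])

Q = np.zeros((n_states, n_actions))
s = 0
for t in range(200_000):
    a = rng.integers(n_actions)                   # behavior policy: uniform exploration
    s_next = rng.choice(n_states, p=P[s, a])
    alpha = 0.5 / (1 + 0.001 * t)                 # decaying step size
    # Asynchronous update: only the visited (s, a) entry changes at this step.
    Q[s, a] += alpha * (R[s, a] + gamma * Q[s_next].max() - Q[s, a])
    s = s_next

print(Q)  # approximate optimal action-value function of the toy MDP
```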
arXiv Detail & Related papers (2020-06-04T17:51:00Z) - Neural Networks are Convex Regularizers: Exact Polynomial-time Convex
Optimization Formulations for Two-layer Networks [70.15611146583068]
We develop exact representations of training two-layer neural networks with rectified linear units (ReLUs)
Our theory utilizes semi-infinite duality and minimum norm regularization.
arXiv Detail & Related papers (2020-02-24T21:32:41Z)