Stability of classical shadows under gate-dependent noise
- URL: http://arxiv.org/abs/2310.19947v3
- Date: Wed, 19 Feb 2025 19:40:21 GMT
- Title: Stability of classical shadows under gate-dependent noise
- Authors: Raphael Brieger, Markus Heinrich, Ingo Roth, Martin Kliesch
- Abstract summary: We prove that any shadow estimation protocol involving Clifford unitaries is stable under gate-dependent noise for observables with bounded stabilizer norm.
We find that so-called robust shadows, which aim at mitigating noise, can introduce a large bias in the presence of gate-dependent noise compared to unmitigated classical shadows.
On a technical level, we identify average noise channels that affect shadow estimators and allow for more fine-grained control of noise-induced biases.
- Abstract: Expectation values of observables are routinely estimated using so-called classical shadows, the outcomes of randomized basis measurements on a repeatedly prepared quantum state. In order to trust the accuracy of shadow estimation in practice, it is crucial to understand the behavior of the estimators under realistic noise. In this work, we prove that any shadow estimation protocol involving Clifford unitaries is stable under gate-dependent noise for observables with bounded stabilizer norm, a quantity originally introduced in the context of simulating Clifford circuits. In contrast, we demonstrate with concrete examples that the estimation of 'magic' observables can lead to highly misleading results in the presence of miscalibration errors, with a worst-case bias scaling exponentially in the system size. We further find that so-called robust shadows, aiming at mitigating noise, can introduce a large bias in the presence of gate-dependent noise compared to unmitigated classical shadows. Nevertheless, we guarantee the functioning of robust shadows for a more general noise setting than in previous works. On a technical level, we identify average noise channels that affect shadow estimators and allow for a more fine-grained control of noise-induced biases.
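For orientation, the following is a minimal, noise-free sketch of the standard local classical-shadow protocol (random single-qubit Pauli basis measurements), written as a NumPy simulation. It is meant only to illustrate the kind of estimator the abstract refers to; it is not the paper's gate-dependent-noise analysis, and all function names are our own.

import numpy as np

I2 = np.eye(2, dtype=complex)
PAULIS = [np.array([[0, 1], [1, 0]], dtype=complex),      # X
          np.array([[0, -1j], [1j, 0]], dtype=complex),   # Y
          np.array([[1, 0], [0, -1]], dtype=complex)]     # Z

def sample_outcomes(rho, bases, rng):
    """Sample one +/-1 outcome string for measuring each qubit in its chosen Pauli basis."""
    n = len(bases)
    probs, strings = [], []
    for k in range(2 ** n):                                # enumerate joint outcomes
        signs = [1 - 2 * ((k >> q) & 1) for q in range(n)]
        proj = np.array([[1.0 + 0j]])
        for b, s in zip(bases, signs):
            proj = np.kron(proj, (I2 + s * PAULIS[b]) / 2)
        probs.append(max(np.real(np.trace(proj @ rho)), 0.0))   # Born probability
        strings.append(signs)
    probs = np.array(probs) / sum(probs)
    return strings[rng.choice(2 ** n, p=probs)]

def snapshot(bases, signs):
    """Inverted measurement channel for local Pauli shadows: kron_i (3 |s_i><s_i|_{b_i} - I)."""
    rho_hat = np.array([[1.0 + 0j]])
    for b, s in zip(bases, signs):
        rho_hat = np.kron(rho_hat, 3 * (I2 + s * PAULIS[b]) / 2 - I2)
    return rho_hat

def shadow_expectation(rho, obs, n_qubits, n_snapshots, rng):
    """Classical-shadow estimate of tr(obs @ rho) from n_snapshots random Pauli measurements."""
    total = 0.0
    for _ in range(n_snapshots):
        bases = rng.integers(0, 3, size=n_qubits)          # random basis per qubit
        signs = sample_outcomes(rho, bases, rng)
        total += np.real(np.trace(obs @ snapshot(bases, signs)))
    return total / n_snapshots

# Example: estimate <X x X> on the Bell state (|00> + |11>)/sqrt(2); the exact value is 1.
rng = np.random.default_rng(0)
bell = np.zeros((4, 1), dtype=complex)
bell[0] = bell[3] = 1 / np.sqrt(2)
rho = bell @ bell.conj().T
obs = np.kron(PAULIS[0], PAULIS[0])
print(shadow_expectation(rho, obs, n_qubits=2, n_snapshots=2000, rng=rng))

With the ideal per-qubit inverse channel M^{-1}(P) = 3P - I, the average over snapshots is an unbiased estimate of tr(obs @ rho); the paper's question is how this estimator behaves when the randomizing unitaries are implemented by noisy, gate-dependent operations.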
Related papers
- Noise-mitigated randomized measurements and self-calibrating shadow
estimation [0.0]
We introduce an error-mitigated method of randomized measurements, giving rise to a robust shadow estimation procedure.
On the practical side, we show that error mitigation and shadow estimation can be carried out using the same session of quantum experiments.
arXiv Detail & Related papers (2024-03-07T18:53:56Z) - Biased Estimator Channels for Classical Shadows [0.0]
We consider a biased scheme that intentionally introduces a bias by rescaling the conventional classical shadow estimators.
We analytically treat the average-case as well as worst- and best-case scenarios, and rigorously prove that it is, in principle, always worth biasing the estimators; a generic bias-variance illustration of this point is sketched after this list.
arXiv Detail & Related papers (2024-02-14T19:00:01Z) - Shadow tomography with noisy readouts [0.0]
Shadow tomography is a scalable technique to characterise the quantum state of a quantum computer or quantum simulator.
By construction, classical shadows are intrinsically sensitive to readout noise.
We show that classical shadows admit much more flexible constructions beyond the standard ones.
arXiv Detail & Related papers (2023-10-26T11:47:51Z) - Nearly Heisenberg-limited noise-unbiased frequency estimation by
tailored sensor design [0.0]
We consider entanglement-assisted frequency estimation by Ramsey interferometry.
We show that noise renders standard measurement statistics biased or ill-defined.
We introduce ratio estimators which, at the cost of doubling the resources, are insensitive to noise and retain the precision scaling of standard ones.
arXiv Detail & Related papers (2023-05-01T17:32:55Z) - Latent Class-Conditional Noise Model [54.56899309997246]
We introduce a Latent Class-Conditional Noise model (LCCN) to parameterize the noise transition under a Bayesian framework.
We then deduce a dynamic label regression method for LCCN, whose Gibbs sampler allows us to efficiently infer the latent true labels.
Our approach safeguards the stable update of the noise transition, avoiding the arbitrary tuning from a mini-batch of samples used in previous work.
arXiv Detail & Related papers (2023-02-19T15:24:37Z) - Optimizing the Noise in Self-Supervised Learning: from Importance
Sampling to Noise-Contrastive Estimation [80.07065346699005]
It is widely assumed that the optimal noise distribution should be made equal to the data distribution, as in Generative Adversarial Networks (GANs).
We turn to Noise-Contrastive Estimation which grounds this self-supervised task as an estimation problem of an energy-based model of the data.
We soberly conclude that the optimal noise may be hard to sample from, and the gain in efficiency can be modest compared to choosing the noise distribution equal to the data's.
arXiv Detail & Related papers (2023-01-23T19:57:58Z) - The Optimal Noise in Noise-Contrastive Learning Is Not What You Think [80.07065346699005]
We show that deviating from the assumption that the optimal noise distribution equals the data distribution can actually lead to better statistical estimators.
In particular, the optimal noise distribution is different from the data's and even from a different family.
arXiv Detail & Related papers (2022-03-02T13:59:20Z) - Analyzing and Improving the Optimization Landscape of Noise-Contrastive
Estimation [50.85788484752612]
Noise-contrastive estimation (NCE) is a statistically consistent method for learning unnormalized probabilistic models.
It has been empirically observed that the choice of the noise distribution is crucial for NCE's performance.
In this work, we formally pinpoint reasons for NCE's poor performance when an inappropriate noise distribution is used.
arXiv Detail & Related papers (2021-10-21T16:57:45Z) - Classical Shadows With Noise [0.0]
We study the effects of noise on the classical shadows protocol.
We derive an analytical upper bound for the sample complexity in terms of a shadow seminorm for both local and global noise.
Our results can be used to prove rigorous sample complexity upper bounds in the cases of depolarizing noise and amplitude damping.
arXiv Detail & Related papers (2020-11-23T17:43:42Z) - Shape Matters: Understanding the Implicit Bias of the Noise Covariance [76.54300276636982]
Noise in gradient descent provides a crucial implicit regularization effect for training overparameterized models.
We show that parameter-dependent noise -- induced by mini-batches or label perturbation -- is far more effective than Gaussian noise.
Our analysis reveals that parameter-dependent noise introduces a bias towards local minima with smaller noise variance, whereas spherical Gaussian noise does not.
arXiv Detail & Related papers (2020-06-15T18:31:02Z) - Contextual Linear Bandits under Noisy Features: Towards Bayesian Oracles [65.9694455739978]
We study contextual linear bandit problems under feature uncertainty, where the features are noisy and have missing entries.
Our analysis reveals that the optimal hypothesis can significantly deviate from the underlying realizability function, depending on the noise characteristics.
This implies that classical approaches cannot guarantee a non-trivial regret bound.
arXiv Detail & Related papers (2017-03-03T21:39:56Z)
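As background for the 'Biased Estimator Channels for Classical Shadows' entry above (the sketch referenced there), the generic mean-squared-error calculation below shows why deliberately rescaling an unbiased estimator can pay off; it is our illustration, not that paper's specific scheme. For an unbiased estimator $\hat{o}$ of a target value $o$ with variance $\sigma^2$, the rescaled estimator $\lambda\hat{o}$ has
$\mathrm{MSE}(\lambda) = \mathbb{E}[(\lambda\hat{o}-o)^2] = \lambda^2\sigma^2 + (1-\lambda)^2 o^2$,
which is minimized at $\lambda^\ast = o^2/(o^2+\sigma^2) < 1$, giving $\mathrm{MSE}(\lambda^\ast) = \sigma^2 o^2/(o^2+\sigma^2) < \sigma^2 = \mathrm{MSE}(1)$. A slight rescaling toward zero therefore always reduces the mean-squared error whenever the variance is nonzero, at the price of introducing a bias.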
This list is automatically generated from the titles and abstracts of the papers on this site.