Biased Estimator Channels for Classical Shadows
- URL: http://arxiv.org/abs/2402.09511v1
- Date: Wed, 14 Feb 2024 19:00:01 GMT
- Title: Biased Estimator Channels for Classical Shadows
- Authors: Zhenyu Cai, Adrian Chapman, Hamza Jnane, Bálint Koczor
- Abstract summary: We consider a biased scheme: intentionally introducing a bias by rescaling the conventional classical shadow estimators can reduce the error in the finite-sample regime.
We analytically characterize the average-case as well as the worst- and best-case scenarios, and rigorously prove that it is, in principle, always worth biasing the estimators.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Extracting classical information from quantum systems is of fundamental
importance, and classical shadows allow us to extract a large amount of
information using relatively few measurements. Conventional shadow estimators
are unbiased and thus approach the true mean in the infinite-sample limit. In
this work, we consider a biased scheme: intentionally introducing a bias by
rescaling the conventional classical shadow estimators can reduce the error in
the finite-sample regime. The approach is straightforward to implement and
requires no quantum resources. We analytically characterize the average-case as
well as the worst- and best-case scenarios, and rigorously prove that it is, in principle,
always worth biasing the estimators. We illustrate our approach in a quantum
simulation task of a $12$-qubit spin-ring problem and demonstrate how
estimating expected values of non-local perturbations can be significantly more
efficient using our biased scheme.
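The core rescaling idea admits a simple shrinkage-style numerical illustration. The sketch below is not the authors' construction: it assumes Gaussian snapshot noise and uses the textbook mean-squared-error-optimal shrinkage factor, which depends on the unknown true mean and is used here purely to show that a rescaled (biased) estimator can beat the unbiased one at finite sample size.

```python
# Minimal sketch (assumed model, not the paper's implementation): rescaling an
# unbiased estimator by a factor lam < 1 trades a small bias for a variance
# reduction, lowering the mean-squared error (MSE) at finite N.
import numpy as np

rng = np.random.default_rng(0)

mu, sigma = 0.3, 1.0   # true expectation value and per-snapshot noise (assumed)
N = 50                 # number of classical-shadow snapshots
trials = 100_000       # Monte Carlo repetitions used to estimate the MSE

# Each row is one experiment: N noisy single-snapshot estimates of mu.
samples = rng.normal(mu, sigma, size=(trials, N))
xbar = samples.mean(axis=1)  # conventional unbiased estimator

# MSE(lam) = (lam - 1)^2 mu^2 + lam^2 sigma^2 / N is minimized at
# lam* = mu^2 / (mu^2 + sigma^2 / N) < 1, so some biasing always helps.
# NOTE: lam* depends on the unknown mu; here it is plugged in for illustration.
lam = mu**2 / (mu**2 + sigma**2 / N)

mse_unbiased = np.mean((xbar - mu) ** 2)
mse_biased = np.mean((lam * xbar - mu) ** 2)
print(f"lam* = {lam:.3f}")
print(f"MSE unbiased: {mse_unbiased:.5f}, MSE biased: {mse_biased:.5f}")
```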
Related papers
- High-Dimensional Subspace Expansion Using Classical Shadows [0.0]
We introduce a post-processing technique for classical shadow measurement data that enhances the precision of ground state estimation.
We analytically investigate noise propagation within our method, and upper bound the statistical fluctuations due to the limited number of snapshots in classical shadows.
In numerical simulations, our method can achieve a reduction in the energy estimation errors in many cases, sometimes by more than an order of magnitude.
arXiv Detail & Related papers (2024-06-17T13:37:27Z) - Efficacy of virtual purification-based error mitigation on quantum metrology [1.7635061227370266]
Noise is the main obstacle that prevents us from fully exploiting quantum advantages in various quantum information tasks.
We study factors determining whether virtual purification-based error mitigation (VPEM) can reduce the bias.
Based on our analysis, we predict whether VPEM can effectively reduce a bias and numerically verify our results.
arXiv Detail & Related papers (2023-03-28T09:13:12Z) - Self-supervised debiasing using low rank regularization [59.84695042540525]
Spurious correlations can cause strong biases in deep neural networks, impairing generalization ability.
We propose a self-supervised debiasing framework potentially compatible with unlabeled samples.
Remarkably, the proposed debiasing framework significantly improves the generalization performance of self-supervised learning baselines.
arXiv Detail & Related papers (2022-10-11T08:26:19Z) - On Classical and Hybrid Shadows of Quantum States [0.0]
Classical shadows are a computationally efficient approach to storing quantum states on a classical computer.
We discuss the advantages and limitations of using classical shadows to simulate many-body dynamics.
We introduce the notion of a hybrid shadow, constructed from measurements on a part of the system instead of the entirety.
arXiv Detail & Related papers (2022-06-14T06:25:24Z) - Information-Theoretic Bias Reduction via Causal View of Spurious Correlation [71.9123886505321]
We propose an information-theoretic bias measurement technique through a causal interpretation of spurious correlation.
We present a novel debiasing framework against the algorithmic bias, which incorporates a bias regularization loss.
The proposed bias measurement and debiasing approaches are validated in diverse realistic scenarios.
arXiv Detail & Related papers (2022-01-10T01:19:31Z) - Learning to Estimate Without Bias [57.82628598276623]
The Gauss-Markov theorem states that the weighted least-squares estimator is the linear minimum-variance unbiased estimator (MVUE) in linear models.
In this paper, we take a first step towards extending this result to non-linear settings via deep learning with bias constraints.
A second motivation for the bias-constrained estimator (BCE) is in applications where multiple estimates of the same unknown are averaged for improved performance.
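Why averaging motivates unbiasedness can be seen in a toy simulation. The sketch below is illustrative only (the numbers, including the 0.2 bias, are assumptions, not values from the paper): variance averages away over many independent estimates, while a systematic bias does not.

```python
# Toy example (assumed numbers): averaging K independent unbiased estimates
# drives the error toward zero, while a shared bias survives averaging.
import numpy as np

rng = np.random.default_rng(1)
theta = 2.0  # unknown quantity being estimated (assumed)
K = 1000     # number of independent estimates to average

unbiased = theta + rng.normal(0.0, 1.0, K)      # zero-mean noise
biased = theta + 0.2 + rng.normal(0.0, 0.5, K)  # lower variance, bias of 0.2

print(f"error of averaged unbiased estimates: {abs(unbiased.mean() - theta):.4f}")
print(f"error of averaged biased estimates:   {abs(biased.mean() - theta):.4f}")
# The unbiased average converges to theta as K grows; the biased average
# converges to theta + 0.2, so the bias becomes the dominant error term.
```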
arXiv Detail & Related papers (2021-10-24T10:23:51Z) - Scalable Marginal Likelihood Estimation for Model Selection in Deep Learning [78.83598532168256]
Marginal-likelihood based model-selection is rarely used in deep learning due to estimation difficulties.
Our work shows that marginal likelihoods can improve generalization and be useful when validation data is unavailable.
arXiv Detail & Related papers (2021-04-11T09:50:24Z) - Continuous Wasserstein-2 Barycenter Estimation without Minimax Optimization [94.18714844247766]
Wasserstein barycenters provide a geometric notion of the weighted average of probability measures based on optimal transport.
We present a scalable algorithm to compute Wasserstein-2 barycenters given sample access to the input measures.
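As a concrete illustration of the notion (separate from the paper's scalable algorithm), one-dimensional Gaussians admit a closed-form Wasserstein-2 barycenter: quantile functions average, giving a Gaussian whose mean and standard deviation are the weighted averages of the inputs'. A minimal sketch, with illustrative numbers:

```python
# Closed-form 1-D example (assumed inputs, not the paper's method): the W2
# barycenter of Gaussians N(mu_i, sigma_i^2) with weights w_i is Gaussian
# with mean sum_i w_i * mu_i and standard deviation sum_i w_i * sigma_i.
import numpy as np

mus = np.array([0.0, 4.0, 10.0])    # means of the input Gaussians
sigmas = np.array([1.0, 2.0, 0.5])  # standard deviations
w = np.array([0.5, 0.3, 0.2])       # barycenter weights, summing to 1

bary_mu = np.dot(w, mus)        # weighted average of the means
bary_sigma = np.dot(w, sigmas)  # weighted average of the std. deviations
print(f"W2 barycenter: N({bary_mu:.2f}, {bary_sigma:.2f}^2)")
```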
arXiv Detail & Related papers (2021-02-02T21:01:13Z) - A Bayesian analysis of classical shadows [0.2867517731896504]
We investigate classical shadows through the lens of Bayesian mean estimation (BME).
In direct tests on numerical data, BME is found to attain significantly lower error on average, but classical shadows prove remarkably more accurate in specific situations.
We introduce an observable-oriented pseudo-likelihood that successfully emulates the dimension-independence and state-specific optimality of classical shadows.
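A hedged toy contrast between the two estimators, for a single qubit measured N times in the Z basis (a simplification assumed here, not the paper's full BME procedure): the empirical mean is the shadow-style unbiased estimate, while the posterior mean under a uniform prior shrinks toward zero.

```python
# Toy BME-vs-empirical-mean comparison (assumptions: single qubit, N projective
# Z measurements, uniform Beta(1,1) prior on the +1 outcome probability).
import numpy as np

rng = np.random.default_rng(2)
z_true = 0.4              # true <Z> (assumed)
p = (1 + z_true) / 2      # probability of observing outcome +1
N = 20
k = rng.binomial(N, p)    # number of +1 outcomes observed

z_empirical = 2 * k / N - 1          # shadow-style unbiased estimate
z_bayes = 2 * (k + 1) / (N + 2) - 1  # posterior mean under the Beta(1,1) prior

print(f"empirical mean estimate: {z_empirical:+.3f}")
print(f"Bayesian mean estimate:  {z_bayes:+.3f}  (shrunk toward 0)")
```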
arXiv Detail & Related papers (2020-12-16T14:45:18Z) - Amortized Conditional Normalized Maximum Likelihood: Reliable Out of Distribution Uncertainty Estimation [99.92568326314667]
We propose the amortized conditional normalized maximum likelihood (ACNML) method as a scalable general-purpose approach for uncertainty estimation.
Our algorithm builds on the conditional normalized maximum likelihood (CNML) coding scheme, which has minimax optimal properties according to the minimum description length principle.
We demonstrate that ACNML compares favorably to a number of prior techniques for uncertainty estimation in terms of calibration on out-of-distribution inputs.
arXiv Detail & Related papers (2020-11-05T08:04:34Z)