Decoding probabilistic syndrome measurement and the role of entropy
- URL: http://arxiv.org/abs/2302.11631v1
- Date: Wed, 22 Feb 2023 20:12:48 GMT
- Title: Decoding probabilistic syndrome measurement and the role of entropy
- Authors: João F. Doriguello
- Abstract summary: We study the performance of the toric code under a model of probabilistic stabiliser measurement.
We find that, even under a completely continuous model of syndrome extraction, the threshold can be maintained at a reasonably high value of $1.69\%$ by suitably modifying the decoder.
- Score: 0.0
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: In realistic stabiliser-based quantum error correction there are many ways in
which real physical systems deviate from simple toy models of error. Stabiliser
measurements may not always be deterministic or may suffer from erasure errors,
such that they do not supply syndrome outcomes required for error correction.
In this paper, we study the performance of the toric code under a model of
probabilistic stabiliser measurement. We find that, even under a completely
continuous model of syndrome extraction, the threshold can be maintained at a
reasonably high value of $1.69\%$ by suitably modifying the decoder using the
edge-contraction method of Stace and Barrett (Physical Review A 81, 022317
(2010)), compared to a value of $2.93\%$ for deterministic stabiliser
measurements. Finally, we study the role of entropic factors, which account for
degenerate error configurations, in improving the performance of the decoder.
We find that, in the limit of completely continuous stabiliser measurement, any
further advantage provided by these factors becomes negligible, in contrast to
the case of deterministic measurements.
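To make the decoding setup concrete, here is a minimal illustrative sketch, not the paper's implementation, of a space-time matching graph in which missing stabiliser outcomes are absorbed by contracting the corresponding time-like edges (in the spirit of the Stace-Barrett edge-contraction method), together with a matching weight that can include an entropic term for degenerate error configurations. The 1-D ring of checks, the probabilities, and all function names are assumptions chosen for brevity; the toric code uses a 2-D lattice of checks.

```python
# Illustrative sketch only (not the paper's code): a space-time graph for
# repeated stabiliser measurement, contraction of edges whose stabiliser
# outcome was never obtained, and an entropy-corrected matching weight.
import math
import networkx as nx

def build_spacetime_graph(n_checks, n_rounds, p_data, p_meas):
    """One vertex per (check, round).  Space-like edges model data-qubit
    errors; time-like edges model faulty measurement outcomes."""
    w_space = -math.log(p_data / (1 - p_data))
    w_time = -math.log(p_meas / (1 - p_meas))
    G = nx.Graph()
    for t in range(n_rounds):
        for c in range(n_checks):
            G.add_edge((c, t), ((c + 1) % n_checks, t), weight=w_space)
    for t in range(n_rounds - 1):
        for c in range(n_checks):
            G.add_edge((c, t), (c, t + 1), weight=w_time)
    return G

def contract_unmeasured(G, missing):
    """For each (check, round) whose outcome is missing, contract the
    time-like edge above it so the decoder never conditions on the absent
    syndrome bit.  Parallel edges simply collapse here; a real decoder
    would keep the smaller of the two weights."""
    alias = {}                            # merged-away vertex -> survivor
    def find(x):
        while x in alias:
            x = alias[x]
        return x
    for (c, t) in sorted(missing):
        u, v = find((c, t)), find((c, t + 1))
        if u != v and G.has_node(u) and G.has_node(v):
            G = nx.contracted_nodes(G, u, v, self_loops=False)
            alias[v] = u
    return G

def defect_pair_weight(distance, p, n_configs=1):
    """Weight for matching two defects separated by `distance` elementary
    errors of probability p, reduced by an entropic bonus log(n_configs)
    counting the degenerate minimum-weight error configurations."""
    return -distance * math.log(p / (1 - p)) - math.log(n_configs)

# Defects (vertices flagged by the detection events) would then be paired
# by minimum-weight perfect matching, e.g. nx.min_weight_matching applied
# to a complete graph whose edges carry defect_pair_weight values.
```

The entropic bonus in defect_pair_weight corresponds to the degeneracy counting whose extra benefit, according to the abstract, becomes negligible in the completely continuous measurement limit.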
Related papers
- Calibrating Deep Neural Network using Euclidean Distance [5.675312975435121]
In machine learning, Focal Loss is commonly used to reduce misclassification rates by emphasizing hard-to-classify samples.
High calibration error indicates a misalignment between predicted probabilities and actual outcomes, affecting model reliability.
This research introduces a novel loss function called Focal Calibration Loss (FCL), designed to improve probability calibration while retaining the advantages of Focal Loss in handling difficult samples.
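For reference, here is a minimal sketch of the standard focal loss mechanism this entry refers to; the Euclidean-distance calibration term that FCL adds is not described in the summary above, so it is not shown here.

```python
# Standard focal loss for binary classification (Lin et al.), shown only to
# illustrate the "emphasize hard-to-classify samples" mechanism mentioned in
# the summary; it is not the FCL variant proposed in that paper.
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """p: predicted probability of the positive class, y: 0/1 labels.
    The (1 - p_t)**gamma factor down-weights well-classified samples."""
    p = np.clip(p, 1e-7, 1 - 1e-7)
    p_t = np.where(y == 1, p, 1 - p)
    alpha_t = np.where(y == 1, alpha, 1 - alpha)
    return -alpha_t * (1 - p_t) ** gamma * np.log(p_t)

# A confident correct prediction contributes far less than a confident mistake.
print(focal_loss(np.array([0.95, 0.05]), np.array([1, 1])))
```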
arXiv Detail & Related papers (2024-10-23T23:06:50Z)
- Regulating Model Reliance on Non-Robust Features by Smoothing Input Marginal Density [93.32594873253534]
Trustworthy machine learning requires meticulous regulation of model reliance on non-robust features.
We propose a framework to delineate and regulate such features by attributing model predictions to the input.
arXiv Detail & Related papers (2024-07-05T09:16:56Z)
- Orthogonal Causal Calibration [55.28164682911196]
We prove generic upper bounds on the calibration error of any causal parameter estimate $\theta$ with respect to any loss $\ell$.
We use our bound to analyze the convergence of two sample splitting algorithms for causal calibration.
arXiv Detail & Related papers (2024-06-04T03:35:25Z)
- Minimax Optimal Estimation of Stability Under Distribution Shift [8.893526921869137]
We analyze the stability of a system under distribution shift.
The stability measure is defined in terms of a more intuitive quantity: the level of acceptable performance degradation.
Our characterization of the minimax convergence rate shows that evaluating stability against large performance degradation incurs a statistical cost.
arXiv Detail & Related papers (2022-12-13T02:40:30Z)
- Monotonicity and Double Descent in Uncertainty Estimation with Gaussian Processes [52.92110730286403]
It is commonly believed that the marginal likelihood should be reminiscent of cross-validation metrics and that both should deteriorate with larger input dimensions.
We prove that by tuning hyperparameters, the performance, as measured by the marginal likelihood, improves monotonically with the input dimension.
We also prove that cross-validation metrics exhibit qualitatively different behavior that is characteristic of double descent.
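For context, the two quantities compared in that entry can be written down directly; below is a minimal sketch of the Gaussian-process log marginal likelihood and a leave-one-out predictive score using the standard closed-form expressions. The kernel, hyperparameters, and data are placeholders, not taken from that paper.

```python
# Standard Gaussian-process formulas (Rasmussen & Williams, ch. 5), shown
# only to make precise which quantities the entry compares.
import numpy as np

def rbf_kernel(X, lengthscale=1.0, noise=0.1):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2) + noise ** 2 * np.eye(len(X))

def log_marginal_likelihood(K, y):
    """log p(y) = -1/2 y^T K^{-1} y - 1/2 log|K| - n/2 log(2*pi)."""
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (-0.5 * y @ alpha - np.log(np.diag(L)).sum()
            - 0.5 * len(y) * np.log(2 * np.pi))

def loo_log_predictive(K, y):
    """Leave-one-out log predictive density obtained from K^{-1} without
    refitting n separate models (Rasmussen & Williams, Sec. 5.4.2)."""
    Kinv = np.linalg.inv(K)
    var = 1.0 / np.diag(Kinv)
    mu = y - (Kinv @ y) * var
    return (-0.5 * np.log(2 * np.pi * var) - 0.5 * (y - mu) ** 2 / var).sum()

rng = np.random.default_rng(0)
X, y = rng.normal(size=(30, 5)), rng.normal(size=30)
K = rbf_kernel(X)
print(log_marginal_likelihood(K, y), loo_log_predictive(K, y))
```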
arXiv Detail & Related papers (2022-10-14T08:09:33Z)
- Robustness and Accuracy Could Be Reconcilable by (Proper) Definition [109.62614226793833]
The trade-off between robustness and accuracy has been widely studied in the adversarial literature.
We find that it may stem from the improperly defined robust error, which imposes an inductive bias of local invariance.
By definition, the proposed SCORE (self-consistent robust error) facilitates the reconciliation between robustness and accuracy, while still handling the worst-case uncertainty.
arXiv Detail & Related papers (2022-02-21T10:36:09Z)
- Localization Uncertainty Estimation for Anchor-Free Object Detection [48.931731695431374]
There are several limitations of the existing uncertainty estimation methods for anchor-based object detection.
We propose a new localization uncertainty estimation method called UAD for anchor-free object detection.
Our method captures the uncertainty of the four box-offset directions in a homogeneous way, so that it can tell which direction is uncertain.
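As a generic illustration of per-direction localization uncertainty, the sketch below scores each box offset with a Gaussian negative log-likelihood; this is a common formulation assumed here for illustration only and is not claimed to be the UAD loss itself.

```python
# Generic per-direction localization uncertainty: the network predicts a mean
# and a log-variance for each of the four box offsets (left, top, right,
# bottom), trained with a Gaussian negative log-likelihood.  Illustrative
# assumption, not the exact UAD formulation.
import numpy as np

def directional_nll(mu, log_var, target):
    """mu, log_var, target: arrays of shape (N, 4), one column per offset.
    Returns per-direction losses, so uncertain directions can be inspected."""
    var = np.exp(log_var)
    return 0.5 * (log_var + (target - mu) ** 2 / var + np.log(2 * np.pi))

mu = np.array([[10.0, 12.0, 30.0, 28.0]])
log_var = np.array([[0.1, 2.0, 0.1, 0.1]])   # high predicted variance on "top"
target = np.array([[10.5, 14.0, 29.5, 28.2]])
print(directional_nll(mu, log_var, target))  # "top" carries the largest term
```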
arXiv Detail & Related papers (2020-06-28T13:49:30Z)
- Fine-Grained Analysis of Stability and Generalization for Stochastic Gradient Descent [55.85456985750134]
We introduce a new stability measure called on-average model stability, for which we develop novel bounds controlled by the risks of SGD iterates.
This yields generalization bounds depending on the behavior of the best model, and leads to the first-ever-known fast bounds in the low-noise setting.
To the best of our knowledge, this gives the first-ever-known stability and generalization bounds for SGD with even non-differentiable loss functions.
arXiv Detail & Related papers (2020-06-15T06:30:19Z)
- Estimation of Accurate and Calibrated Uncertainties in Deterministic models [0.8702432681310401]
We devise a method to transform a deterministic prediction into a probabilistic one.
We show that, in doing so, one has to compromise between the accuracy and the reliability (calibration) of such a model.
We show several examples both with synthetic data, where the underlying hidden noise can accurately be recovered, and with large real-world datasets.
arXiv Detail & Related papers (2020-03-11T04:02:56Z)
- Optimistic robust linear quadratic dual control [4.94950858749529]
We present a dual control strategy that attempts to combine the performance of certainty equivalence with the practical utility of robustness.
The formulation preserves structure in the representation of parametric uncertainty, which allows the controller to target reduction of uncertainty in the parameters that 'matter most' for the control task.
arXiv Detail & Related papers (2019-12-31T02:02:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.