Combining $T_1$ and $T_2$ estimation with randomized benchmarking and bounding the diamond distance
- URL: http://arxiv.org/abs/2008.09197v1
- Date: Thu, 20 Aug 2020 20:28:35 GMT
- Title: Combining $T_1$ and $T_2$ estimation with randomized benchmarking and bounding the diamond distance
- Authors: Hillary Dawkins, Joel Wallman, Joseph Emerson
- Abstract summary: Learning about specific sources of error is essential for optimizing experimental design and error correction methods.
We consider the case where errors are dominated by the generalized damping channel.
We provide bounds that allow robust estimation of the thresholds for fault-tolerance.
- Score: 6.445605125467574
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The characterization of errors in a quantum system is a fundamental step for
two important goals. First, learning about specific sources of error is
essential for optimizing experimental design and error correction methods.
Second, verifying that the error is below some threshold value is required to
meet the criteria of threshold theorems. We consider the case where errors are
dominated by the generalized damping channel (encompassing the common intrinsic
processes of amplitude damping and dephasing) but may also contain additional
unknown error sources. We demonstrate the robustness of standard $T_1$ and
$T_2$ estimation methods and provide expressions for the expected error in
these estimates under the additional error sources. We then derive expressions
that allow a comparison of the actual and expected results of fine-grained
randomized benchmarking experiments based on the damping parameters. Given the
results of this comparison, we provide bounds that allow robust estimation of
the thresholds for fault-tolerance.
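For intuition, here is a rough numerical sketch (not the paper's code) of the generalized damping channel: amplitude damping with rate $1/T_1$ composed with pure dephasing chosen so coherences decay as $e^{-t/T_2}$, valid when $T_2 \le 2T_1$. All parameter values are illustrative assumptions.

```python
# A rough sketch (not the paper's code) of the generalized damping channel:
# amplitude damping with rate 1/T1 composed with pure dephasing so that
# coherences decay as exp(-t/T2).  Requires T2 <= 2*T1; values are toy.
import numpy as np

def generalized_damping(rho, t, T1, T2):
    gamma = 1.0 - np.exp(-t / T1)              # amplitude-damping probability
    lam = 1.0 - np.exp(t / T1 - 2.0 * t / T2)  # residual dephasing strength
    A = [np.array([[1, 0], [0, np.sqrt(1 - gamma)]]),  # damping Kraus ops
         np.array([[0, np.sqrt(gamma)], [0, 0]])]
    P = [np.array([[1, 0], [0, np.sqrt(1 - lam)]]),    # dephasing Kraus ops
         np.array([[0, 0], [0, np.sqrt(lam)]])]
    out = np.zeros((2, 2), dtype=complex)
    for a in A:
        for p in P:
            K = p @ a
            out += K @ rho @ K.conj().T
    return out

rho_plus = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)   # |+><+|
rho_t = generalized_damping(rho_plus, t=1.0, T1=50.0, T2=70.0)
print(rho_t[1, 1].real, 0.5 * np.exp(-1 / 50))   # population decays with T1
print(rho_t[0, 1].real, 0.5 * np.exp(-1 / 70))   # coherence decays with T2
```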
Related papers
- Semiparametric conformal prediction [79.6147286161434]
We construct a conformal prediction set accounting for the joint correlation structure of the vector-valued non-conformity scores.
We flexibly estimate the joint cumulative distribution function (CDF) of the scores.
Our method yields desired coverage and competitive efficiency on a range of real-world regression problems.
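A rough sketch of the mechanics, using a plain empirical joint CDF as a crude stand-in for the paper's semiparametric estimator; the data, the score definition, and the `joint_cdf` helper are illustrative assumptions.

```python
# A rough sketch of split conformal prediction with vector-valued scores,
# reducing each score vector to a scalar via an empirical joint CDF, then
# applying the usual scalar conformal quantile rule.
import numpy as np

rng = np.random.default_rng(0)
scores_cal = np.abs(rng.normal(size=(500, 2)))   # calibration scores (2 outputs)
score_test = np.array([1.2, 0.8])                # candidate test score vector

def joint_cdf(s, sample):
    """Empirical P(S_1 <= s_1, S_2 <= s_2) over the calibration sample."""
    return np.mean(np.all(sample <= s, axis=1))

u_cal = np.array([joint_cdf(s, scores_cal) for s in scores_cal])
alpha, n = 0.1, len(u_cal)
q = np.quantile(u_cal, np.ceil((1 - alpha) * (n + 1)) / n)
print("in prediction set:", joint_cdf(score_test, scores_cal) <= q)
```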
arXiv Detail & Related papers (2024-11-04T14:29:02Z)
- Error Feedback under $(L_0,L_1)$-Smoothness: Normalization and Momentum [56.37522020675243]
We provide the first proof of convergence for normalized error feedback algorithms across a wide range of machine learning problems.
We show that due to their larger allowable stepsizes, our new normalized error feedback algorithms outperform their non-normalized counterparts on various tasks.
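A rough sketch of error feedback with a normalized step; the `top1` compressor, the toy quadratic objective, and the step-size schedule are illustrative assumptions, not the paper's exact algorithm.

```python
# A rough sketch of normalized error feedback: compress the error-corrected
# gradient, remember the dropped part, and take a normalized decaying step.
import numpy as np

def top1(v):
    """Toy contractive compressor: keep only the largest-magnitude entry."""
    out = np.zeros_like(v)
    i = np.argmax(np.abs(v))
    out[i] = v[i]
    return out

x, e = np.ones(10), np.zeros(10)   # iterate and error-feedback memory
for t in range(200):
    g = x + e                      # gradient of 0.5*||x||^2, plus the memory
    c = top1(g)                    # compress the corrected gradient
    e = g - c                      # remember what the compressor dropped
    step = 1.0 / (t + 1)           # decaying step size
    x -= step * c / (np.linalg.norm(c) + 1e-12)   # normalized update
print(np.linalg.norm(x))           # decreased from the initial sqrt(10)
```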
arXiv Detail & Related papers (2024-10-22T10:19:27Z)
- Orthogonal Causal Calibration [55.28164682911196]
We prove generic upper bounds on the calibration error of any causal parameter estimate $\theta$ with respect to any loss $\ell$.
We use our bound to analyze the convergence of two sample splitting algorithms for causal calibration.
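A rough sketch of one sample-splitting calibration scheme: isotonic-calibrate a CATE estimate against IPW pseudo-outcomes on a held-out fold. The known propensity, toy data, and estimator choice are all assumptions, not the paper's exact algorithms.

```python
# A rough sketch of sample-splitting causal calibration: fit a monotone
# recalibration map on one fold, evaluate on the other.
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(1)
n = 2000
x = rng.normal(size=n)
tau = 1.0 + x                          # true treatment effect
t = rng.binomial(1, 0.5, size=n)       # randomized treatment, e(x) = 0.5
y = x + t * tau + rng.normal(size=n)

theta_hat = 0.5 + 1.5 * x              # a miscalibrated CATE estimate
pseudo = (t - 0.5) / 0.25 * y          # IPW pseudo-outcome, E[pseudo|x] = tau

fold = rng.random(n) < 0.5             # sample splitting
iso = IsotonicRegression(out_of_bounds="clip")
iso.fit(theta_hat[fold], pseudo[fold])           # calibrate on fold one
theta_cal = iso.predict(theta_hat[~fold])        # evaluate on fold two
print("raw MSE:", np.mean((theta_hat[~fold] - tau[~fold]) ** 2))
print("calibrated MSE:", np.mean((theta_cal - tau[~fold]) ** 2))
```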
arXiv Detail & Related papers (2024-06-04T03:35:25Z)
- Symmetric Q-learning: Reducing Skewness of Bellman Error in Online Reinforcement Learning [55.75959755058356]
In deep reinforcement learning, estimating the value function is essential to evaluate the quality of states and actions.
A recent study suggested that the error distribution for training the value function is often skewed because of the properties of the Bellman operator.
We propose a method called Symmetric Q-learning, in which synthetic noise generated from a zero-mean distribution is added to the target values to produce a Gaussian error distribution.
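A rough tabular sketch of that idea; the toy chain MDP and noise scale are assumptions (the paper works in deep RL with a learned noise distribution).

```python
# A rough sketch: add zero-mean synthetic noise to the TD target.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, gamma, lr = 5, 2, 0.9, 0.1
Q = np.zeros((n_states, n_actions))

def step(s, a):
    """Toy chain: action 1 moves right; reward on reaching the last state."""
    s2 = min(s + a, n_states - 1)
    return s2, float(s2 == n_states - 1)

for _ in range(2000):
    s = 0
    for _ in range(20):
        a = int(rng.integers(n_actions))
        s2, r = step(s, a)
        noise = rng.normal(0.0, 0.05)             # zero-mean synthetic noise
        target = r + gamma * Q[s2].max() + noise  # perturbed TD target
        Q[s, a] += lr * (target - Q[s, a])
        s = s2
print(Q)   # zero-mean noise leaves the fixed point unbiased
```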
arXiv Detail & Related papers (2024-03-12T14:49:19Z)
- Robust Bayesian Inference for Berkson and Classical Measurement Error Models [9.712913056924826]
We propose a nonparametric framework for dealing with measurement error.
It is suitable for both Classical and Berkson error models.
It offers flexibility in the choice of loss function depending on the type of regression model.
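For contrast, a rough simulation of the two error models named above: classical error ($w = x + u$, the observation is noisy) attenuates an ordinary least-squares slope, while Berkson error ($x = w + u$, the truth scatters around an assigned value) does not. All data choices are illustrative.

```python
# A rough simulation distinguishing classical from Berkson measurement error.
import numpy as np

rng = np.random.default_rng(0)
n, beta = 100_000, 2.0
u = rng.normal(0, 1, n)

# classical: regress y on the noisy observation w = x + u
x = rng.normal(0, 1, n)
y = beta * x + rng.normal(0, 0.1, n)
w = x + u
print("classical slope:", np.polyfit(w, y, 1)[0])   # ~ beta/2, attenuated

# Berkson: the true covariate scatters around the assigned value w
w_assigned = rng.normal(0, 1, n)
x_true = w_assigned + u
y2 = beta * x_true + rng.normal(0, 0.1, n)
print("Berkson slope:", np.polyfit(w_assigned, y2, 1)[0])   # ~ beta, unbiased
```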
arXiv Detail & Related papers (2023-06-02T11:48:15Z)
- Asymptotic Characterisation of Robust Empirical Risk Minimisation Performance in the Presence of Outliers [18.455890316339595]
We study robust linear regression in high dimensions, where both the dimension $d$ and the number of data points $n$ diverge at a fixed ratio $\alpha = n/d$, and consider a data model that includes outliers.
We provide exact asymptotics for the performance of empirical risk minimisation (ERM) using $\ell_2$-regularised $\ell_2$, $\ell_1$, and Huber losses.
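A rough sketch of one estimator in this family, $\ell_2$-regularised ERM with a Huber loss on contaminated data; the data model and hyperparameters are illustrative assumptions.

```python
# A rough sketch of l2-regularised Huber-loss ERM on data with outliers.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, d, lam, delta = 400, 100, 0.1, 1.0
X = rng.normal(size=(n, d)) / np.sqrt(d)
w_star = rng.normal(size=d)
y = X @ w_star + rng.normal(0, 0.1, n)
outlier = rng.random(n) < 0.1
y[outlier] += rng.normal(0, 10.0, outlier.sum())   # heavy contamination

def huber(r):
    a = np.abs(r)
    return np.where(a <= delta, 0.5 * r ** 2, delta * (a - 0.5 * delta))

def objective(w):
    return huber(y - X @ w).mean() + 0.5 * lam * w @ w

w_hat = minimize(objective, np.zeros(d), method="L-BFGS-B").x
print(np.linalg.norm(w_hat - w_star) / np.linalg.norm(w_star))
```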
arXiv Detail & Related papers (2023-05-30T12:18:39Z)
- Benchmarking quantum logic operations relative to thresholds for fault tolerance [0.02171671840172762]
We use gate set tomography to perform precision characterization of a set of two-qubit logic gates to study randomized compiling (RC) on a superconducting quantum processor.
We show that the average and worst-case error rates are equal for randomly compiled gates, and measure a maximum worst-case error of 0.0197(3) for our gate set.
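A rough numerical note on why this matters, using assumed values: for a Pauli (stochastic) channel, as produced by randomized compiling, the worst-case (diamond-distance) error saturates the lower bound $(d+1)r/d$ in the average gate infidelity $r$, whereas an untailored coherent error is only bounded by roughly $\sqrt{d(d+1)r}$ (Wallman-Flammia-style bounds).

```python
# A rough comparison of worst-case error scales (assumed infidelity value).
d = 4                                    # two-qubit gate, d = 2**2
r = 0.005                                # assumed average gate infidelity
eps_pauli = (d + 1) / d * r              # worst-case error, stochastic noise
eps_coherent = (d * (d + 1) * r) ** 0.5  # worst-case scale, coherent noise
print(eps_pauli, eps_coherent)           # 0.00625 vs ~0.32
```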
arXiv Detail & Related papers (2022-07-18T17:41:58Z)
- Optimal policy evaluation using kernel-based temporal difference methods [78.83926562536791]
We use reproducing kernel Hilbert spaces to estimate the value function of an infinite-horizon discounted Markov reward process (MRP).
We derive a non-asymptotic upper bound on the error with explicit dependence on the eigenvalues of the associated kernel operator.
We prove minimax lower bounds over sub-classes of MRPs.
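A rough sketch of regularised kernel least-squares policy evaluation: represent the value function in an RBF kernel space and solve the empirical Bellman equations on sampled transitions. The toy transitions, kernel bandwidth, and regulariser are assumptions rather than the paper's exact estimator.

```python
# A rough sketch of kernelized policy evaluation on sampled transitions.
import numpy as np

rng = np.random.default_rng(0)
n, gamma, reg = 200, 0.9, 1e-2
S = rng.uniform(-1, 1, size=(n, 1))                          # states
S2 = np.clip(S + rng.normal(0, 0.1, size=(n, 1)), -1, 1)     # next states
R = (np.abs(S).ravel() < 0.2).astype(float)                  # rewards near 0

def k(A, B, bw=0.3):
    """RBF kernel matrix between two sets of states."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * bw ** 2))

K, K2 = k(S, S), k(S2, S)
# V(s) = k(s, S) @ alpha; require V(s_i) ~= r_i + gamma * V(s'_i)
alpha = np.linalg.solve(K - gamma * K2 + reg * np.eye(n), R)
V = lambda s: (k(np.atleast_2d(s), S) @ alpha)[0]
print(V([0.0]), V([0.9]))   # value near the rewarding region vs far away
```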
arXiv Detail & Related papers (2021-09-24T14:48:20Z)
- Efficient diagnostics for quantum error correction [0.0]
We present a scalable experimental approach based on Pauli error reconstruction to predict the performance of codes.
Numerical evidence demonstrates that our method significantly outperforms predictions based on standard error metrics for various error models.
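A rough single-qubit illustration of Pauli error reconstruction: Pauli error rates and measurable Pauli fidelities are related by a symplectic Walsh-Hadamard transform, so measured fidelities can be inverted to rates. The rates used below are assumed for illustration.

```python
# A rough single-qubit Pauli error reconstruction via the Walsh-Hadamard
# transform, which is its own inverse up to a factor of 4.
import numpy as np

# rows/columns ordered I, X, Y, Z; entry is -1 iff the Paulis anticommute
W = np.array([[1,  1,  1,  1],
              [1,  1, -1, -1],
              [1, -1,  1, -1],
              [1, -1, -1,  1]])
p_true = np.array([0.97, 0.02, 0.005, 0.005])   # assumed Pauli error rates
f = W @ p_true                                  # Pauli fidelities (measurable)
p_rec = W @ f / 4                               # invert the transform
print(p_rec)                                    # recovers p_true
```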
arXiv Detail & Related papers (2021-08-24T16:28:29Z)
- Amortized Conditional Normalized Maximum Likelihood: Reliable Out of Distribution Uncertainty Estimation [99.92568326314667]
We propose the amortized conditional normalized maximum likelihood (ACNML) method as a scalable general-purpose approach for uncertainty estimation.
Our algorithm builds on the conditional normalized maximum likelihood (CNML) coding scheme, which has minimax optimal properties according to the minimum description length principle.
We demonstrate that ACNML compares favorably to a number of prior techniques for uncertainty estimation in terms of calibration on out-of-distribution inputs.
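A rough sketch of the underlying (non-amortized) CNML computation that ACNML approximates: refit the model once per candidate label and normalize the resulting likelihoods. The data and logistic model are illustrative assumptions.

```python
# A rough sketch of exact CNML for binary classification.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = (X[:, 0] > 0).astype(int)
x_query = np.array([[3.0, 0.0]])      # a test input, possibly off-distribution

probs = []
for y_cand in (0, 1):
    Xa = np.vstack([X, x_query])      # append the query with candidate label
    ya = np.append(y, y_cand)
    model = LogisticRegression().fit(Xa, ya)
    probs.append(model.predict_proba(x_query)[0, y_cand])
cnml = np.array(probs) / sum(probs)   # normalize over candidate labels
print(cnml)                           # conservative predictive distribution
```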
arXiv Detail & Related papers (2020-11-05T08:04:34Z)
- Error bounds in estimating the out-of-sample prediction error using leave-one-out cross validation in high-dimensions [19.439945058410203]
We study the problem of out-of-sample risk estimation in the high dimensional regime.
Extensive empirical evidence confirms the accuracy of leave-one-out cross validation.
One technical advantage of the theory is that it can be used to clarify and connect some results from the recent literature on scalable approximate LO.
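A rough sketch of why leave-one-out (LO) can be cheap in the first place: for ridge regression the LOO residuals have the closed form $e_i/(1-h_{ii})$, so no refitting is needed. The data sizes and regulariser are illustrative.

```python
# A rough sketch of exact one-pass LOO residuals for ridge regression.
import numpy as np

rng = np.random.default_rng(0)
n, d, lam = 50, 10, 1.0
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + rng.normal(0, 0.5, n)

H = X @ np.linalg.solve(X.T @ X + lam * np.eye(d), X.T)   # hat matrix
loo = (y - H @ y) / (1 - np.diag(H))    # exact LOO residuals in one pass
print("LOO risk estimate:", np.mean(loo ** 2))

# brute-force check for one held-out point
i, mask = 0, np.arange(n) != 0
w0 = np.linalg.solve(X[mask].T @ X[mask] + lam * np.eye(d),
                     X[mask].T @ y[mask])
print(loo[i], y[i] - X[i] @ w0)         # the two values agree
```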
arXiv Detail & Related papers (2020-03-03T20:07:07Z)
- Understanding and Mitigating the Tradeoff Between Robustness and Accuracy [88.51943635427709]
Adversarial training augments the training set with perturbations to improve the robust error.
We show that the standard error can increase even when the augmented perturbations carry noiseless observations from the optimal linear predictor.
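A rough sketch of the augmentation being analysed: adversarial training of a linear regressor with closed-form $\ell_\infty$ perturbations under noiseless labels. Whether clean error improves or degrades depends on the geometry, which is the paper's point; the sizes, epsilon, and optimizer below are assumptions.

```python
# A rough sketch of adversarial training for a linear model: augment with
# worst-case l-infinity perturbations that keep their noiseless labels.
import numpy as np

rng = np.random.default_rng(0)
n, d, eps, lr = 50, 20, 0.1, 0.01
X = rng.normal(size=(n, d))
w_star = rng.normal(size=d)
y = X @ w_star                      # noiseless observations
w = np.zeros(d)
for _ in range(500):
    resid = X @ w - y
    # closed-form worst-case l-inf perturbation of the squared loss
    X_adv = X + eps * np.sign(resid)[:, None] * np.sign(w)[None, :]
    Xa = np.vstack([X, X_adv])      # augment with the perturbed copies
    ya = np.concatenate([y, y])     # perturbed points keep noiseless labels
    w -= lr * 2 * Xa.T @ (Xa @ w - ya) / len(ya)
print("clean MSE of robustly trained model:", np.mean((X @ w - y) ** 2))
```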
arXiv Detail & Related papers (2020-02-25T08:03:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information above) and is not responsible for any consequences.