Optimized experiment design and analysis for fully randomized
benchmarking
- URL: http://arxiv.org/abs/2312.15836v1
- Date: Tue, 26 Dec 2023 00:41:47 GMT
- Title: Optimized experiment design and analysis for fully randomized
benchmarking
- Authors: Alex Kwiatkowski, Laurent J. Stephenson, Hannah M. Knaack, Alejandra
L. Collopy, Christina M. Bowers, Dietrich Leibfried, Daniel H. Slichter,
Scott Glancy, Emanuel Knill
- Abstract summary: We investigate the advantages of fully randomized benchmarking, where a new random sequence is drawn for each experimental trial.
The advantages of full randomization include smaller confidence intervals on the inferred step error.
We experimentally observe such improvements in Clifford randomized benchmarking experiments on a single trapped ion qubit.
- Score: 34.82692226532414
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Randomized benchmarking (RB) is a widely used strategy to assess the quality
of available quantum gates in a computational context. RB involves applying
known random sequences of gates to an initial state and using the statistics of
a final measurement step to determine an effective depolarizing error per step
of the sequence, which is a metric of gate quality. Here we investigate the
advantages of fully randomized benchmarking, where a new random sequence is
drawn for each experimental trial. The advantages of full randomization include
smaller confidence intervals on the inferred step error, the ability to use
maximum likelihood analysis without heuristics, straightforward optimization of
the sequence lengths, and the ability to model and measure behaviors that go
beyond the typical assumption of time-independent error rates. We discuss
models of time-dependent or non-Markovian errors that generalize the basic RB
model of a single exponential decay of the success probability. For any of
these models, we implement a concrete protocol to minimize the uncertainty of
the estimated parameters given a fixed time constraint on the complete
experiment, and we implement a maximum likelihood analysis. We consider several
previously published experiments and determine the potential for improvements
with optimized full randomization. We experimentally observe such improvements
in Clifford randomized benchmarking experiments on a single trapped ion qubit
at the National Institute of Standards and Technology (NIST). For an experiment
with uniform lengths and intentionally repeated sequences the step error was
$2.42^{+0.30}_{-0.22}\times 10^{-5}$, and for an optimized fully randomized
experiment of the same total duration the step error was
$2.57^{+0.07}_{-0.06}\times 10^{-5}$. We find a substantial decrease in the
uncertainty of the step error as a result of optimized fully randomized
benchmarking.
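The basic analysis the abstract describes, a single exponential decay of the success probability fit by maximum likelihood over independent per-trial outcomes, can be sketched as follows. This is a minimal illustration, not the paper's protocol: the sequence lengths, trial counts, and the idealized SPAM-free model p(m) = 0.5·r^m + 0.5 are all assumed, and the step error uses the single-qubit relation ε = (1 − r)/2.

```python
import math
import random

random.seed(7)

# Minimal sketch of fully randomized RB analysis (all numbers hypothetical).
# Single-qubit model with ideal SPAM: success probability after m steps is
#   p(m) = A * r**m + B,  with A = B = 0.5,
# and the step error is epsilon = (1 - r) / 2.
A, B = 0.5, 0.5
eps_true = 2.5e-5                  # step error on the scale of the experiment
r_true = 1.0 - 2.0 * eps_true

def p_success(m, r):
    return A * r ** m + B

# Fully randomized design: a fresh random sequence per trial, so each trial
# is an independent Bernoulli draw at its sequence length m.
lengths = [1, 2000, 20000]         # assumed design points
trials = [(m, random.random() < p_success(m, r_true))
          for _ in range(2000) for m in lengths]

def log_likelihood(r):
    ll = 0.0
    for m, success in trials:
        p = min(max(p_success(m, r), 1e-12), 1.0 - 1e-12)
        ll += math.log(p if success else 1.0 - p)
    return ll

# Grid-search MLE for r; no fitting heuristics are needed because the
# per-trial likelihood is an exact product of Bernoulli terms.
grid = [1.0 - 2.0 * (i * 1e-6) for i in range(1, 101)]
r_hat = max(grid, key=log_likelihood)
eps_hat = (1.0 - r_hat) / 2.0
print(f"estimated step error: {eps_hat:.2e}")
```

Because every trial uses its own fresh sequence, the likelihood factorizes over trials, which is what makes the heuristic-free maximum likelihood analysis and the sequence-length optimization mentioned in the abstract straightforward.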
Related papers
- On High dimensional Poisson models with measurement error: hypothesis
testing for nonlinear nonconvex optimization [13.369004892264146]
We study estimation and testing for high-dimensional Poisson regression models with measurement error, which have wide applications in data analysis.
We propose to estimate the regression parameter by minimizing a penalized objective.
The proposed method is applied to the Alzheimer's Disease Initiative.
arXiv Detail & Related papers (2022-12-31T06:58:42Z)
- Near-Optimal Non-Parametric Sequential Tests and Confidence Sequences
with Possibly Dependent Observations [44.71254888821376]
We provide the first type-I-error and expected-rejection-time guarantees under general, possibly dependent data generating processes.
We show how to apply our results to inference on parameters defined by estimating equations, such as average treatment effects.
arXiv Detail & Related papers (2022-12-29T18:37:08Z)
- Reliability analysis of discrete-state performance functions via
adaptive sequential sampling with detection of failure surfaces [0.0]
The paper presents a new efficient and robust method for rare event probability estimation.
The method can estimate the probabilities of multiple failure types.
It can accommodate this information to increase the accuracy of the estimated probabilities.
arXiv Detail & Related papers (2022-08-04T05:59:25Z)
- Sampling Strategy Optimization for Randomized Benchmarking [4.7362989868031855]
Randomized benchmarking (RB) is a widely used method for estimating the average fidelity of gates implemented on a quantum computing device.
We propose a method for fully optimizing an RB configuration so that the confidence interval of the estimated fidelity is minimized.
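The general idea behind this kind of design optimization can be illustrated with a toy Fisher-information comparison of candidate sequence-length allocations. This is an illustration of the principle only, not the cited paper's algorithm; the decay model, lengths, budget, and error scale are all assumed.

```python
# Toy illustration of RB design optimization via Fisher information.
# (Not the cited paper's algorithm; the decay model, lengths, budget,
# and error scale are all assumed for illustration.)
A, B = 0.5, 0.5
r = 1.0 - 2.0 * 2.5e-5             # assumed step-error scale

def fisher_info_per_trial(m):
    # For a Bernoulli outcome with success probability p(m) = A*r**m + B,
    # the Fisher information about r is (dp/dr)^2 / (p * (1 - p)).
    p = A * r ** m + B
    dp = A * m * r ** (m - 1)
    return dp * dp / (p * (1.0 - p))

budget = 6000                       # fixed total number of trials
designs = {
    "all short (m=100)": {100: budget},
    "all long (m=20000)": {20000: budget},
    "mixed": {100: budget // 2, 20000: budget // 2},
}
totals = {name: sum(n * fisher_info_per_trial(m) for m, n in alloc.items())
          for name, alloc in designs.items()}
for name, info in totals.items():
    print(f"{name}: total Fisher information {info:.3e}")
```

At this error scale, allocating trials to longer sequences carries far more information per trial about the decay parameter, which is why optimizing the length distribution can shrink the confidence interval at fixed total cost.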
arXiv Detail & Related papers (2021-09-16T01:14:13Z)
- High Probability Complexity Bounds for Non-Smooth Stochastic Optimization with Heavy-Tailed Noise [51.31435087414348]
It is essential to theoretically guarantee that algorithms provide small objective residual with high probability.
Existing methods for non-smooth convex optimization have complexity bounds with dependence on confidence level.
We propose novel stepsize rules for two methods with gradient clipping.
arXiv Detail & Related papers (2021-06-10T17:54:21Z)
- Balancing Rates and Variance via Adaptive Batch-Size for Stochastic
Optimization Problems [120.21685755278509]
In this work, we seek to balance the fact that an attenuating step-size is required for exact convergence against the fact that a constant step-size learns faster, but only up to a fixed error.
Rather than fixing the minibatch size and the step-size at the outset, we propose to allow these parameters to evolve adaptively.
arXiv Detail & Related papers (2020-07-02T16:02:02Z)
- Individual Calibration with Randomized Forecasting [116.2086707626651]
We show that calibration for individual samples is possible in the regression setup if the predictions are randomized.
We design a training objective to enforce individual calibration and use it to train randomized regression functions.
arXiv Detail & Related papers (2020-06-18T05:53:10Z)
- Minimax Estimation of Conditional Moment Models [40.95498063465325]
We introduce a min-max criterion function, under which the estimation problem can be thought of as solving a zero-sum game.
We analyze the statistical estimation rate of the resulting estimator for arbitrary hypothesis spaces.
We show how our modified mean squared error rate, combined with conditions that bound the ill-posedness of the inverse problem, leads to mean squared error rates.
arXiv Detail & Related papers (2020-06-12T14:02:38Z)
- On the Optimality of Randomization in Experimental Design: How to
Randomize for Minimax Variance and Design-Based Inference [58.442274475425144]
I study the minimax-optimal design for a two-arm controlled experiment where conditional mean outcomes may vary in a given set.
The optimal design is shown to be the mixed-strategy optimal design (MSOD) of Kallus.
I therefore propose the inference-constrained MSOD, which is minimax-optimal among all designs subject to such constraints.
arXiv Detail & Related papers (2020-05-06T21:43:50Z)
- SUMO: Unbiased Estimation of Log Marginal Probability for Latent
Variable Models [80.22609163316459]
We introduce an unbiased estimator of the log marginal likelihood and its gradients for latent variable models based on randomized truncation of infinite series.
We show that models trained using our estimator give better test-set likelihoods than a standard importance-sampling based approach for the same average computational cost.
arXiv Detail & Related papers (2020-04-01T11:49:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.