Sampling Strategy Optimization for Randomized Benchmarking
- URL: http://arxiv.org/abs/2109.07653v1
- Date: Thu, 16 Sep 2021 01:14:13 GMT
- Title: Sampling Strategy Optimization for Randomized Benchmarking
- Authors: Toshinari Itoko and Rudy Raymond
- Abstract summary: Randomized benchmarking (RB) is a widely used method for estimating the average fidelity of gates implemented on a quantum computing device.
We propose a method for fully optimizing an RB configuration so that the confidence interval of the estimated fidelity is minimized.
- Score: 4.7362989868031855
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Randomized benchmarking (RB) is a widely used method for estimating the
average fidelity of gates implemented on a quantum computing device. The
stochastic error of the average gate fidelity estimated by RB depends on the
sampling strategy (i.e., how to sample sequences to be run in the protocol).
The sampling strategy is determined by a set of configurable parameters (an RB
configuration) that includes Clifford lengths (a list of the number of
independent Clifford gates in a sequence) and the number of sequences for each
Clifford length. The RB configuration is often chosen heuristically and there
has been little research on its best configuration. Therefore, we propose a
method for fully optimizing an RB configuration so that the confidence interval
of the estimated fidelity is minimized while not increasing the total execution
time of sequences. By experiments on real devices, we demonstrate the efficacy
of the optimization method against heuristic selection in reducing the variance
of the estimated fidelity.
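The configuration the abstract describes (Clifford lengths plus a number of sequences per length) feeds a standard exponential-decay fit. The sketch below is illustrative only: the lengths, sequence counts, noise level, and decay parameters are made-up values, and a simple grid search stands in for both the paper's confidence-interval optimizer and a proper nonlinear fit. It shows how survival probabilities at each Clifford length are fit to the standard RB decay model A·p^m + B and converted to an average gate fidelity.

```python
import numpy as np

# Hypothetical RB configuration (not from the paper):
# Clifford lengths m, and how many random sequences are sampled per length.
clifford_lengths = np.array([1, 10, 25, 50, 100, 200])
num_sequences = np.array([30, 30, 30, 30, 30, 30])  # heuristic: uniform allocation

rng = np.random.default_rng(0)

# Simulate mean survival probabilities following the standard RB decay
# model A * p**m + B; statistical noise shrinks as more sequences are run.
A_true, B_true, p_true = 0.5, 0.5, 0.995
means = A_true * p_true**clifford_lengths + B_true
observed = means + rng.normal(0.0, 0.01 / np.sqrt(num_sequences))

# Fit p by least squares on the decay model (A and B held fixed for
# simplicity; a full RB analysis fits all three parameters).
def sum_sq_residuals(p):
    return np.sum((observed - (A_true * p**clifford_lengths + B_true)) ** 2)

# Simple grid search over p -- a stand-in for a proper nonlinear fit.
grid = np.linspace(0.98, 1.0, 2001)
p_hat = grid[np.argmin([sum_sq_residuals(p) for p in grid])]

# Average gate fidelity for a single qubit (d = 2): F = p + (1 - p) / d.
d = 2
fidelity = p_hat + (1 - p_hat) / d
print(round(fidelity, 4))
```

Allocating `num_sequences` non-uniformly across lengths (more sequences where the decay curve is most informative) is the kind of choice the paper's optimization targets; the uniform allocation above is the heuristic baseline it improves on.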
Related papers
- Adaptive Online Bayesian Estimation of Frequency Distributions with Local Differential Privacy [0.4604003661048266]
We propose a novel approach for the adaptive and online estimation of the frequency distribution of a finite number of categories under the local differential privacy (LDP) framework.
The proposed algorithm performs Bayesian parameter estimation via posterior sampling and adapts the randomization mechanism for LDP based on the obtained posterior samples.
We provide a theoretical analysis showing that (i) the posterior distribution targeted by the algorithm converges to the true parameter even for approximate posterior sampling, and (ii) the algorithm selects the optimal subset with high probability if posterior sampling is performed exactly.
arXiv Detail & Related papers (2024-05-11T13:59:52Z)
- Optimized experiment design and analysis for fully randomized benchmarking [34.82692226532414]
We investigate the advantages of fully randomized benchmarking, where a new random sequence is drawn for each experimental trial.
The advantages of full randomization include smaller confidence intervals on the inferred step error.
We experimentally observe such improvements in Clifford randomized benchmarking experiments on a single trapped ion qubit.
arXiv Detail & Related papers (2023-12-26T00:41:47Z)
- Low-rank extended Kalman filtering for online learning of neural networks from streaming data [71.97861600347959]
We propose an efficient online approximate Bayesian inference algorithm for estimating the parameters of a nonlinear function from a potentially non-stationary data stream.
The method is based on the extended Kalman filter (EKF), but uses a novel low-rank plus diagonal decomposition of the posterior matrix.
In contrast to methods based on variational inference, our method is fully deterministic, and does not require step-size tuning.
arXiv Detail & Related papers (2023-05-31T03:48:49Z)
- Benchmarking optimality of time series classification methods in distinguishing diffusions [1.0775419935941009]
This study proposes to benchmark the optimality of TSC algorithms in distinguishing diffusion processes by the likelihood ratio test (LRT).
The LRT benchmarks are computationally efficient because the LRT does not need training, and the diffusion processes can be efficiently simulated and are flexible to reflect the specific features of real-world applications.
arXiv Detail & Related papers (2023-01-30T17:49:12Z)
- Optimization of Annealed Importance Sampling Hyperparameters [77.34726150561087]
Annealed Importance Sampling (AIS) is a popular algorithm used to estimate the intractable marginal likelihood of deep generative models.
We present a parametric AIS process with flexible intermediary distributions and optimize the bridging distributions to use fewer sampling steps.
We assess the performance of our optimized AIS for marginal likelihood estimation of deep generative models and compare it to other estimators.
arXiv Detail & Related papers (2022-09-27T07:58:25Z)
- Optimal Algorithms for Mean Estimation under Local Differential Privacy [55.32262879188817]
We show that PrivUnit achieves the optimal variance among a large family of locally private randomizers.
We also develop a new variant of PrivUnit based on the Gaussian distribution which is more amenable to mathematical analysis and enjoys the same optimality guarantees.
arXiv Detail & Related papers (2022-05-05T06:43:46Z)
- Faster Born probability estimation via gate merging and frame optimisation [3.9198548406564604]
Outcome probabilities of any quantum circuit can be estimated using Monte Carlo sampling.
We propose two classical sub-routines: circuit gate optimisation and frame optimisation.
We numerically demonstrate that our methods provide improved scaling in the negativity overhead for all tested cases of random circuits.
arXiv Detail & Related papers (2022-02-24T14:18:34Z)
- Variational Refinement for Importance Sampling Using the Forward Kullback-Leibler Divergence [77.06203118175335]
Variational Inference (VI) is a popular alternative to exact sampling in Bayesian inference.
Importance sampling (IS) is often used to fine-tune and de-bias the estimates of approximate Bayesian inference procedures.
We propose a novel combination of optimization and sampling techniques for approximate Bayesian inference.
arXiv Detail & Related papers (2021-06-30T11:00:24Z)
- Local policy search with Bayesian optimization [73.0364959221845]
Reinforcement learning aims to find an optimal policy by interaction with an environment.
Policy gradients for local search are often obtained from random perturbations.
We develop an algorithm utilizing a probabilistic model of the objective function and its gradient.
arXiv Detail & Related papers (2021-06-22T16:07:02Z)
- Coherent randomized benchmarking [68.8204255655161]
We show that superpositions of different random sequences, rather than independent samples, can be used.
We show that this leads to a uniform and simple protocol with significant advantages with respect to gates that can be benchmarked.
arXiv Detail & Related papers (2020-10-26T18:00:34Z)
- Decentralised Learning with Random Features and Distributed Gradient Descent [39.00450514924611]
We investigate the generalisation performance of Distributed Gradient Descent with Implicit Regularisation and Random Features in a homogenous setting.
We establish high probability bounds on the predictive performance for each agent as a function of the step size, number of iterations, inverse spectral gap of the communication matrix and number of Random Features.
We present simulations that show how the number of Random Features, iterations and samples impact predictive performance.
arXiv Detail & Related papers (2020-07-01T09:55:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.