Sampling Strategy Optimization for Randomized Benchmarking
- URL: http://arxiv.org/abs/2109.07653v1
- Date: Thu, 16 Sep 2021 01:14:13 GMT
- Title: Sampling Strategy Optimization for Randomized Benchmarking
- Authors: Toshinari Itoko and Rudy Raymond
- Abstract summary: Randomized benchmarking (RB) is a widely used method for estimating the average fidelity of gates implemented on a quantum computing device.
We propose a method for fully optimizing an RB configuration so that the confidence interval of the estimated fidelity is minimized.
- Score: 4.7362989868031855
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Randomized benchmarking (RB) is a widely used method for estimating the
average fidelity of gates implemented on a quantum computing device. The
stochastic error of the average gate fidelity estimated by RB depends on the
sampling strategy (i.e., how to sample sequences to be run in the protocol).
The sampling strategy is determined by a set of configurable parameters (an RB
configuration) that includes Clifford lengths (a list of the number of
independent Clifford gates in a sequence) and the number of sequences for each
Clifford length. The RB configuration is often chosen heuristically and there
has been little research on its best configuration. Therefore, we propose a
method for fully optimizing an RB configuration so that the confidence interval
of the estimated fidelity is minimized while not increasing the total execution
time of sequences. By experiments on real devices, we demonstrate the efficacy
of the optimization method against heuristic selection in reducing the variance
of the estimated fidelity.
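As a minimal sketch of the estimation step the abstract describes (the decay model is the standard single-qubit RB model; the Clifford lengths, noise level, and fixed asymptote B = 1/2 are illustrative assumptions, not the paper's optimized configuration): survival probabilities at each Clifford length are fit to A·p^m + B, and the average gate fidelity follows from the depolarizing parameter p.

```python
import numpy as np

# Hypothetical RB configuration: a list of Clifford lengths.
lengths = np.array([1, 10, 25, 50, 100, 150])
rng = np.random.default_rng(0)
p_true = 0.98
# Simulated mean survival probabilities following A*p^m + B with
# A = B = 1/2, plus small sampling noise (stand-in for device data).
survival = 0.5 * p_true**lengths + 0.5 + rng.normal(0, 0.002, lengths.size)

# With the asymptote fixed at B = 1/2 (symmetric single-qubit noise),
# log(survival - 1/2) is linear in m with slope log(p).
slope, intercept = np.polyfit(lengths, np.log(survival - 0.5), 1)
p_est = np.exp(slope)
avg_fidelity = (1 + p_est) / 2  # average gate fidelity for one qubit (d = 2)
```

The choice of `lengths` and the number of sequences per length is exactly the "RB configuration" whose optimization the paper addresses: it controls the variance of `p_est` at a fixed total execution time.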
Related papers
- Scaling LLM Inference with Optimized Sample Compute Allocation [56.524278187351925]
We propose OSCA, an algorithm to find an optimal mix of different inference configurations.
Our experiments show that with our learned mixed allocation, we can achieve accuracy better than the best single configuration.
OSCA is also shown to be effective in agentic tasks beyond single-turn settings, achieving better accuracy on SWE-Bench with 3x less compute than the default configuration.
arXiv Detail & Related papers (2024-10-29T19:17:55Z)
- Truncating Trajectories in Monte Carlo Policy Evaluation: an Adaptive Approach [51.76826149868971]
Policy evaluation via Monte Carlo simulation is at the core of many MC Reinforcement Learning (RL) algorithms.
We propose as a quality index a surrogate of the mean squared error of a return estimator that uses trajectories of different lengths.
We present an adaptive algorithm called Robust and Iterative Data collection strategy Optimization (RIDO).
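A toy illustration of the bias–variance trade-off behind truncating trajectories (this is not the RIDO algorithm; the reward model and horizon here are hypothetical): truncating a discounted return at horizon T introduces a bias bounded by gamma^T · r_max / (1 - gamma), in exchange for cheaper trajectories under a fixed interaction budget.

```python
import numpy as np

gamma, r_max = 0.95, 1.0
rng = np.random.default_rng(6)

def truncated_return(T, rng):
    # Stand-in for an environment rollout: i.i.d. Uniform(0, r_max) rewards.
    rewards = rng.uniform(0, r_max, size=T)
    return np.sum(gamma ** np.arange(T) * rewards)

# True infinite-horizon expected return: E[r] / (1 - gamma) = 10.
true_mean = 0.5 * r_max / (1 - gamma)
est = np.mean([truncated_return(200, rng) for _ in range(500)])
# Worst-case truncation bias for horizon T = 200.
bias_bound = gamma**200 * r_max / (1 - gamma)
```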
arXiv Detail & Related papers (2024-10-17T11:47:56Z)
- Adaptive Online Bayesian Estimation of Frequency Distributions with Local Differential Privacy [0.4604003661048266]
We propose a novel approach for the adaptive and online estimation of the frequency distribution of a finite number of categories under the local differential privacy (LDP) framework.
The proposed algorithm performs Bayesian parameter estimation via posterior sampling and adapts the randomization mechanism for LDP based on the obtained posterior samples.
We provide a theoretical analysis showing that (i) the posterior distribution targeted by the algorithm converges to the true parameter even for approximate posterior sampling, and (ii) the algorithm selects the optimal subset with high probability if posterior sampling is performed exactly.
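The randomization mechanism the algorithm adapts can be any LDP-compliant channel; a minimal sketch using k-ary randomized response (a standard LDP mechanism, chosen here for illustration rather than taken from the paper), with the usual unbiased debiasing step for frequency estimation:

```python
import numpy as np

def k_rr(x, k, eps, rng):
    # k-ary randomized response: report the true category with
    # probability e^eps / (e^eps + k - 1), else a uniform other category.
    p = np.exp(eps) / (np.exp(eps) + k - 1)
    if rng.random() < p:
        return x
    other = rng.integers(k - 1)
    return other if other < x else other + 1

def debias(counts, n, k, eps):
    # Invert the mechanism: E[report freq_j] = q + f_j * (p - q).
    p = np.exp(eps) / (np.exp(eps) + k - 1)
    q = (1 - p) / (k - 1)
    return (counts / n - q) / (p - q)

rng = np.random.default_rng(1)
k, eps, n = 4, 2.0, 20000
true_freq = np.array([0.4, 0.3, 0.2, 0.1])
data = rng.choice(k, size=n, p=true_freq)
reports = np.array([k_rr(x, k, eps, rng) for x in data])
est = debias(np.bincount(reports, minlength=k), n, k, eps)
```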
arXiv Detail & Related papers (2024-05-11T13:59:52Z)
- Optimized experiment design and analysis for fully randomized benchmarking [34.82692226532414]
We investigate the advantages of fully randomized benchmarking, where a new random sequence is drawn for each experimental trial.
The advantages of full randomization include smaller confidence intervals on the inferred step error.
We experimentally observe such improvements in Clifford randomized benchmarking experiments on a single trapped ion qubit.
arXiv Detail & Related papers (2023-12-26T00:41:47Z)
- Benchmarking optimality of time series classification methods in distinguishing diffusions [1.0775419935941009]
This study proposes to benchmark the optimality of TSC algorithms in distinguishing diffusion processes by the likelihood ratio test (LRT).
The LRT benchmarks are computationally efficient because the LRT does not need training, and the diffusion processes can be efficiently simulated and are flexible to reflect the specific features of real-world applications.
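A minimal sketch of an LRT for discretized diffusions (the Ornstein–Uhlenbeck drift rates and step sizes are illustrative assumptions, not the paper's benchmark settings): the log-likelihood of a path under the Euler–Maruyama discretization is Gaussian in the increments, and the test compares it under two candidate drifts.

```python
import numpy as np

def simulate_ou(theta, x0, dt, n, rng):
    # Euler-Maruyama simulation of dX = -theta * X dt + dW
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        x[i + 1] = x[i] - theta * x[i] * dt + np.sqrt(dt) * rng.normal()
    return x

def log_lik(path, theta, dt):
    # Log-density of the Euler-discretized increments
    # (up to a theta-independent additive constant).
    dx = np.diff(path)
    mean = -theta * path[:-1] * dt
    return np.sum(-0.5 * (dx - mean) ** 2 / dt)

rng = np.random.default_rng(2)
dt, n = 0.01, 2000
path = simulate_ou(theta=1.0, x0=1.0, dt=dt, n=n, rng=rng)
# LRT statistic: positive values favor theta = 1.0 over theta = 3.0.
lrt = log_lik(path, 1.0, dt) - log_lik(path, 3.0, dt)
```

No training is involved, which is the computational advantage the summary highlights: the test only evaluates likelihoods along the simulated path.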
arXiv Detail & Related papers (2023-01-30T17:49:12Z)
- Optimization of Annealed Importance Sampling Hyperparameters [77.34726150561087]
Annealed Importance Sampling (AIS) is a popular algorithm used to estimate the intractable marginal likelihood of deep generative models.
We present a parametric AIS process with flexible intermediary distributions and optimize the bridging distributions so that sampling requires fewer steps.
We assess the performance of our optimized AIS for marginal likelihood estimation of deep generative models and compare it to other estimators.
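A toy AIS sketch for context (a one-dimensional Gaussian example with geometric bridging distributions; the paper's parametric, learned bridges are not reproduced here): weights accumulate across a temperature schedule, with a Metropolis move at each bridge.

```python
import numpy as np

def ais_log_ratio(n_particles, betas, rng):
    # AIS from prior f0 = exp(-x^2/2) to target f1 = exp(-(x-2)^2/2),
    # both unnormalized with equal normalizers, so the true log ratio is 0.
    def log_f0(x): return -0.5 * x**2
    def log_f1(x): return -0.5 * (x - 2.0)**2

    x = rng.normal(size=n_particles)  # exact samples from the prior
    log_w = np.zeros(n_particles)
    for b_prev, b in zip(betas[:-1], betas[1:]):
        # Weight update for the geometric bridge f0^(1-b) * f1^b.
        log_w += (b - b_prev) * (log_f1(x) - log_f0(x))
        # One Metropolis step targeting the current bridge distribution.
        prop = x + rng.normal(scale=0.5, size=n_particles)
        def log_bridge(y): return (1 - b) * log_f0(y) + b * log_f1(y)
        accept = np.log(rng.random(n_particles)) < log_bridge(prop) - log_bridge(x)
        x = np.where(accept, prop, x)
    return np.log(np.mean(np.exp(log_w)))  # estimates log(Z1 / Z0)

rng = np.random.default_rng(3)
log_ratio = ais_log_ratio(2000, np.linspace(0, 1, 51), rng)
```

The number and placement of the `betas` are exactly the hyperparameters whose optimization reduces the step count.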
arXiv Detail & Related papers (2022-09-27T07:58:25Z)
- Faster Born probability estimation via gate merging and frame optimisation [3.9198548406564604]
Outcome probabilities of any quantum circuit can be estimated using Monte Carlo sampling.
We propose two classical sub-routines: circuit gate optimisation and frame optimisation.
We numerically demonstrate that our methods provide improved scaling in the negativity overhead for all tested cases of random circuits.
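A toy version of the quasi-probability Monte Carlo estimator underlying this line of work (the decomposition and per-term values below are made up for illustration): the outcome probability is written as a signed combination, terms are sampled proportionally to |q_i|, and the sampling cost grows with the negativity M = Σ|q_i|, the overhead that gate merging and frame optimisation reduce.

```python
import numpy as np

# Signed quasi-probability decomposition p = sum_i q_i * f_i.
q = np.array([0.7, 0.5, -0.2])   # hypothetical signed weights
f = np.array([0.9, 0.3, 0.4])    # hypothetical per-term outcome expectations
p_exact = np.dot(q, f)

M = np.abs(q).sum()              # negativity; estimator variance grows ~ M**2
probs = np.abs(q) / M
rng = np.random.default_rng(4)
idx = rng.choice(len(q), size=50000, p=probs)
# Unbiased estimator: sample term i with prob |q_i|/M, weight by M*sign(q_i).
samples = M * np.sign(q[idx]) * f[idx]
p_est = samples.mean()
```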
arXiv Detail & Related papers (2022-02-24T14:18:34Z)
- Variational Refinement for Importance Sampling Using the Forward Kullback-Leibler Divergence [77.06203118175335]
Variational Inference (VI) is a popular alternative to exact sampling in Bayesian inference.
Importance sampling (IS) is often used to fine-tune and de-bias the estimates of approximate Bayesian inference procedures.
We propose a novel combination of optimization and sampling techniques for approximate Bayesian inference.
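The de-biasing step mentioned above is typically self-normalized importance sampling; a minimal sketch on a one-dimensional Gaussian example (the proposal and target here are illustrative, not the paper's variational families):

```python
import numpy as np

# De-bias a variational approximation q = N(0.5, 1) to a target
# posterior p = N(0, 1) via self-normalized importance sampling.
rng = np.random.default_rng(5)
x = rng.normal(loc=0.5, scale=1.0, size=100000)   # samples from q

log_p = -0.5 * x**2                # unnormalized target log-density
log_q = -0.5 * (x - 0.5)**2        # proposal log-density (same scale)
log_w = log_p - log_q
w = np.exp(log_w - log_w.max())    # subtract max for numerical stability
w /= w.sum()

post_mean = np.sum(w * x)          # IS-corrected posterior mean (true: 0)
ess = 1.0 / np.sum(w**2)           # effective sample size diagnostic
```

The effective sample size is the standard diagnostic for how well the variational proposal matches the target; refining q (as the paper proposes) raises it.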
arXiv Detail & Related papers (2021-06-30T11:00:24Z)
- Local policy search with Bayesian optimization [73.0364959221845]
Reinforcement learning aims to find an optimal policy by interaction with an environment.
Policy gradients for local search are often obtained from random perturbations.
We develop an algorithm utilizing a probabilistic model of the objective function and its gradient.
arXiv Detail & Related papers (2021-06-22T16:07:02Z)
- Coherent randomized benchmarking [68.8204255655161]
We show that superpositions of different random sequences can be used in place of independent samples.
This leads to a uniform and simple protocol with significant advantages in terms of the gates that can be benchmarked.
arXiv Detail & Related papers (2020-10-26T18:00:34Z)
- Decentralised Learning with Random Features and Distributed Gradient Descent [39.00450514924611]
We investigate the generalisation performance of Distributed Gradient Descent with Implicit Regularisation and Random Features in a homogeneous setting.
We establish high probability bounds on the predictive performance for each agent as a function of the step size, number of iterations, inverse spectral gap of the communication matrix and number of Random Features.
We present simulations that show how the number of Random Features, iterations and samples impact predictive performance.
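For context, random features approximate a kernel by a finite-dimensional feature map; a minimal random Fourier features sketch for the RBF kernel (dimensions and bandwidth here are illustrative, not the paper's experimental setup):

```python
import numpy as np

rng = np.random.default_rng(7)
d, D = 2, 500                      # input dimension, number of random features
W = rng.normal(size=(D, d))        # frequencies for an RBF kernel, sigma = 1
b = rng.uniform(0, 2 * np.pi, D)

def features(X):
    # Random Fourier feature map: phi(x)^T phi(y) ~ exp(-||x - y||^2 / 2)
    return np.sqrt(2.0 / D) * np.cos(X @ W.T + b)

x, y = rng.normal(size=(2, d))
k_approx = (features(x[None]) @ features(y[None]).T).item()
k_exact = np.exp(-0.5 * np.sum((x - y) ** 2))
```

The number of features D is one of the quantities the paper's bounds depend on: the approximation error shrinks roughly like 1/sqrt(D).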
arXiv Detail & Related papers (2020-07-01T09:55:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.