Randomized Benchmarking Beyond Groups
- URL: http://arxiv.org/abs/2203.12703v2
- Date: Tue, 20 Dec 2022 20:51:15 GMT
- Title: Randomized Benchmarking Beyond Groups
- Authors: Jianxin Chen, Dawei Ding, Cupjin Huang
- Abstract summary: We formulate the \emph{universal randomized benchmarking} (URB) framework.
This framework does away with the group structure and replaces the recovery gate plus measurement component with a general ``post-processing'' POVM.
We study the twirling map corresponding to the gate ensemble specified by the scheme.
- Score: 7.223374810607328
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Randomized benchmarking (RB) is the gold standard for experimentally
evaluating the quality of quantum operations. The current framework for RB is
centered on groups and their representations, but this can be problematic. For
example, Clifford circuits need up to $O(n^2)$ gates, and thus Clifford RB
cannot scale to larger devices. Attempts to remedy this include new schemes
such as linear cross-entropy benchmarking (XEB), cycle benchmarking, and
non-uniform RB, but they do not fall within the group-based RB framework. In
this work, we formulate the \emph{universal randomized benchmarking (URB)
framework} which does away with the group structure and also replaces the
recovery gate plus measurement component with a general ``post-processing''
POVM. Not only does this framework cover most of the existing benchmarking
schemes, but it also gives the language for and helps inspire the formulation
of new schemes. We specifically consider a class of URB schemes called
\emph{twirling schemes}. For twirling schemes, the post-processing POVM
approximately factorizes into an intermediate channel, inverting maps, and a
final measurement. This leads us to study the twirling map corresponding to the
gate ensemble specified by the scheme. We prove that if this twirling map is
strictly within unit distance of the Haar twirling map in induced diamond norm,
the probability of measurement as a function of gate length is a single
exponential decay up to small error terms. The core technical tool we use is
the matrix perturbation theory of linear operators on quantum channels.
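As a rough illustration (not taken from the paper), the single-exponential decay described above can be modeled as $p(m) = A\lambda^m + B$, where $m$ is the gate-sequence length, $\lambda$ is the decay rate, and $A$, $B$ absorb state-preparation and measurement effects. The sketch below generates noiseless synthetic data from this hypothetical model and recovers $\lambda$ from successive differences, which cancel the offset $B$:

```python
import numpy as np

# Hypothetical single-exponential RB decay model: p(m) = A * lam**m + B.
# The parameter values here are illustrative, not from the paper.
A, lam, B = 0.5, 0.97, 0.5
lengths = np.arange(0, 50)
p = A * lam**lengths + B

# Successive differences eliminate the constant offset B:
# p[m+1] - p[m] = A * lam**m * (lam - 1), so the ratio of
# consecutive differences is exactly lam for noiseless data.
diffs = np.diff(p)
lam_est = np.mean(diffs[1:] / diffs[:-1])
```

With experimental data one would instead fit the decay curve by nonlinear least squares, since noise makes the difference-ratio estimator unstable.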
Related papers
- Metric Convolutions: A Unifying Theory to Adaptive Convolutions [3.481985817302898]
Metric convolutions replace standard convolutions in image processing and deep learning.
They require fewer parameters and provide better generalisation.
Our approach shows competitive performance in standard denoising and classification tasks.
arXiv Detail & Related papers (2024-06-08T08:41:12Z) - Stable Nonconvex-Nonconcave Training via Linear Interpolation [51.668052890249726]
This paper presents a theoretical analysis of linear interpolation as a principled method for stabilizing (large-scale) neural network training.
We argue that instabilities in the optimization process are often caused by the nonmonotonicity of the loss landscape and show how linear interpolation can help by leveraging the theory of nonexpansive operators.
arXiv Detail & Related papers (2023-10-20T12:45:12Z) - Revisiting Rotation Averaging: Uncertainties and Robust Losses [51.64986160468128]
We argue that the main problem of current methods is the minimized cost function that is only weakly connected with the input data via the estimated epipolar geometries.
We propose to better model the underlying noise distributions by directly propagating the uncertainty from the point correspondences into the rotation averaging.
arXiv Detail & Related papers (2023-03-09T11:51:20Z) - Randomized benchmarking with random quantum circuits [1.3406858660972554]
We derive guarantees for gates from arbitrary compact groups under experimentally plausible assumptions.
We show that many relevant filtered RB schemes can be realized with random quantum circuits in linear depth.
We show filtered RB to be sample-efficient for several relevant groups, including protocols addressing higher-order cross-talk.
arXiv Detail & Related papers (2022-12-12T19:00:19Z) - Faster Born probability estimation via gate merging and frame optimisation [3.9198548406564604]
Outcome probabilities of any quantum circuit can be estimated using Monte Carlo sampling.
We propose two classical sub-routines: gate merging and frame optimisation.
We numerically demonstrate that our methods provide improved scaling in the negativity overhead for all tested cases of random circuits.
arXiv Detail & Related papers (2022-02-24T14:18:34Z) - Scaling Structured Inference with Randomization [64.18063627155128]
We propose a family of randomized dynamic programming (RDP) algorithms for scaling structured models to tens of thousands of latent states.
Our method is widely applicable to classical DP-based inference.
It is also compatible with automatic differentiation, so it can be integrated with neural networks seamlessly.
arXiv Detail & Related papers (2021-12-07T11:26:41Z) - A framework for randomized benchmarking over compact groups [0.6091702876917279]
Characterization of experimental systems is an essential step in developing and improving quantum hardware.
A collection of protocols known as Randomized Benchmarking (RB) was developed in the past decade, which provides an efficient way to measure error rates in quantum systems.
A general framework for RB was proposed, which encompassed most of the known RB protocols and overcame the limitation on error models in previous works.
In this work we generalize the RB framework to continuous groups of gates and show that as long as the noise level is reasonably small, the output can be approximated as a linear combination of matrix exponential decays.
arXiv Detail & Related papers (2021-11-19T18:43:47Z) - Optimal policy evaluation using kernel-based temporal difference methods [78.83926562536791]
We use reproducing kernel Hilbert spaces for estimating the value function of an infinite-horizon discounted Markov reward process.
We derive a non-asymptotic upper bound on the error with explicit dependence on the eigenvalues of the associated kernel operator.
We prove minimax lower bounds over sub-classes of MRPs.
arXiv Detail & Related papers (2021-09-24T14:48:20Z) - Unfolding Projection-free SDP Relaxation of Binary Graph Classifier via GDPA Linearization [59.87663954467815]
Algorithm unfolding creates an interpretable and parsimonious neural network architecture by implementing each iteration of a model-based algorithm as a neural layer.
In this paper, leveraging a recent linear algebraic theorem called Gershgorin disc perfect alignment (GDPA), we unroll a projection-free algorithm for semi-definite programming relaxation (SDR) of a binary graph classifier.
Experimental results show that our unrolled network outperformed pure model-based graph classifiers, and achieved comparable performance to pure data-driven networks but using far fewer parameters.
arXiv Detail & Related papers (2021-09-10T07:01:15Z) - Character randomized benchmarking for non-multiplicity-free groups with applications to subspace, leakage, and matchgate randomized benchmarking [14.315027895958304]
We extend the original character RB derivation to explicitly treat non-multiplicity-free groups.
We develop a new leakage RB protocol that applies to more general groups of gates.
This provides one of the few examples of a scalable non-Clifford RB protocol.
arXiv Detail & Related papers (2020-10-30T18:00:01Z) - Orbital MCMC [82.54438698903775]
We propose two practical algorithms for constructing periodic orbits from any diffeomorphism.
We also perform an empirical study demonstrating the practical advantages of both kernels.
arXiv Detail & Related papers (2020-10-15T22:25:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.