Randomized Benchmarking with Synthetic Quantum Circuits
- URL: http://arxiv.org/abs/2412.18578v1
- Date: Tue, 24 Dec 2024 18:10:00 GMT
- Title: Randomized Benchmarking with Synthetic Quantum Circuits
- Authors: Yale Fan, Riley Murray, Thaddeus D. Ladd, Kevin Young, Robin Blume-Kohout
- Abstract summary: We introduce a broad framework for enhancing the sample efficiency of randomized benchmarking (RB).
Our strategy, which applies to any benchmarking group, uses "synthetic" quantum circuits with classical post-processing of both input and output data.
We show that, for experimentally accessible high-spin systems, synthetic RB protocols can reduce the complexity of measuring rotationally invariant error rates.
- Score: 0.471858286267785
- License:
- Abstract: Randomized benchmarking (RB) comprises a set of mature and widely used techniques for assessing the quality of operations on a quantum information-processing system. Modern RB protocols for multiqubit systems extract physically relevant error rates by exploiting the structure of the group representation generated by the set of benchmarked operations. However, existing techniques become prohibitively inefficient for representations that are highly reducible yet decompose into irreducible subspaces of high dimension. These situations prevail when benchmarking high-dimensional systems such as qudits or bosonic modes, where experimental control is limited to implementing a small subset of all possible unitary operations. In this work, we introduce a broad framework for enhancing the sample efficiency of RB that is sufficiently powerful to extend the practical reach of RB beyond the multiqubit setting. Our strategy, which applies to any benchmarking group, uses "synthetic" quantum circuits with classical post-processing of both input and output data to leverage the full structure of reducible superoperator representations. To demonstrate the efficacy of our approach, we develop a detailed theory of RB for systems with rotational symmetry. Such systems carry a natural action of the group $\text{SU}(2)$, and they form the basis for several novel quantum error-correcting codes. We show that, for experimentally accessible high-spin systems, synthetic RB protocols can reduce the complexity of measuring rotationally invariant error rates by more than two orders of magnitude relative to standard approaches such as character RB.
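The abstract's error-rate extraction rests on the standard RB signal model, in which the survival probability decays exponentially with circuit depth. Below is a minimal illustrative sketch of fitting that model to synthetic data; all parameter values are hypothetical, and this shows generic RB decay fitting, not the paper's synthetic-circuit protocol.

```python
import numpy as np
from scipy.optimize import curve_fit

def rb_decay(m, A, f, B):
    # Standard RB model: survival probability decays exponentially in depth m.
    return A * f**m + B

# Synthetic data for illustration (hypothetical noise parameters).
rng = np.random.default_rng(0)
depths = np.arange(1, 101, 5)
probs = rb_decay(depths, 0.5, 0.98, 0.5) + rng.normal(0, 0.005, depths.size)

(A, f, B), _ = curve_fit(rb_decay, depths, probs, p0=[0.5, 0.95, 0.5])

# Conventional conversion to an average error rate for dimension d.
d = 2
r = (d - 1) / d * (1 - f)
print(f"decay f = {f:.4f}, error rate r = {r:.2e}")
```

The fitted decay parameter `f`, rather than any single circuit's outcome, carries the physically relevant error information, which is what makes RB robust to state-preparation and measurement errors.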
Related papers
- Bosonic randomized benchmarking with passive transformations [0.1874930567916036]
We develop an RB protocol for passive Gaussian transformations, which we call bosonic passive RB.
The protocol is based on the recently developed filtered RB framework and is designed to isolate the multitude of exponential decays arising for bosonic systems.
The protocol's sampling requirements show a mild scaling with the number of modes, suggesting that bosonic passive RB is experimentally feasible for a moderate number of modes.
arXiv Detail & Related papers (2024-08-20T18:09:20Z)
- DB-LLM: Accurate Dual-Binarization for Efficient LLMs [83.70686728471547]
Large language models (LLMs) have significantly advanced the field of natural language processing.
Existing ultra-low-bit quantization always causes severe accuracy drops.
We propose a novel Dual-Binarization method for LLMs, namely DB-LLM.
arXiv Detail & Related papers (2024-02-19T09:04:30Z)
- Randomized benchmarking with random quantum circuits [1.3406858660972554]
We derive guarantees for gates from arbitrary compact groups under experimentally plausible assumptions.
We show that many relevant filtered RB schemes can be realized with random quantum circuits in linear depth.
We show filtered RB to be sample-efficient for several relevant groups, including protocols addressing higher-order cross-talk.
arXiv Detail & Related papers (2022-12-12T19:00:19Z)
- Dynamic Dual Trainable Bounds for Ultra-low Precision Super-Resolution Networks [82.18396309806577]
We propose a novel activation quantizer, referred to as Dynamic Dual Trainable Bounds (DDTB)
Our DDTB exhibits significant performance improvements in ultra-low precision.
For example, our DDTB achieves a 0.70dB PSNR increase on Urban100 benchmark when quantizing EDSR to 2-bit and scaling up output images to x4.
arXiv Detail & Related papers (2022-03-08T04:26:18Z)
- Automatic Mixed-Precision Quantization Search of BERT [62.65905462141319]
Pre-trained language models such as BERT have shown remarkable effectiveness in various natural language processing tasks.
These models usually contain millions of parameters, which prevents them from practical deployment on resource-constrained devices.
We propose an automatic mixed-precision quantization framework designed for BERT that can simultaneously conduct quantization and pruning in a subgroup-wise level.
arXiv Detail & Related papers (2021-12-30T06:32:47Z)
- A framework for randomized benchmarking over compact groups [0.6091702876917279]
Characterization of experimental systems is an essential step in developing and improving quantum hardware.
A collection of protocols known as randomized benchmarking (RB), developed over the past decade, provides an efficient way to measure error rates in quantum systems.
A general framework for RB was proposed, which encompassed most of the known RB protocols and overcame the limitation on error models in previous works.
In this work we generalize the RB framework to continuous groups of gates and show that as long as the noise level is reasonably small, the output can be approximated as a linear combination of matrix exponential decays.
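When the benchmarked representation is reducible, the measured signal is not a single exponential but a linear combination of decays, one per contributing subspace. A minimal sketch of separating two such decays by fitting (hypothetical decay rates and amplitudes; real protocols such as filtered or character RB isolate the decays by post-processing rather than by direct multi-exponential fitting, which is ill-conditioned when the rates are close):

```python
import numpy as np
from scipy.optimize import curve_fit

def multi_decay(m, A1, f1, A2, f2):
    # Linear combination of two exponential decays, one per subspace
    # contributing to the measured signal (illustrative toy model).
    return A1 * f1**m + A2 * f2**m

# Synthetic signal with two well-separated decay rates.
rng = np.random.default_rng(1)
depths = np.arange(1, 200, 4)
signal = multi_decay(depths, 0.6, 0.99, 0.4, 0.90) + rng.normal(0, 0.002, depths.size)

(A1, f1, A2, f2), _ = curve_fit(
    multi_decay, depths, signal,
    p0=[0.5, 0.98, 0.5, 0.85],
    bounds=([0, 0, 0, 0], [1, 1, 1, 1]),
)
print(f"f1 = {f1:.4f}, f2 = {f2:.4f}")
```

This works here only because the two rates (0.99 vs. 0.90) are well separated and the noise is small; as the number of decays grows or the rates cluster, direct fitting degrades rapidly, which is the sample-efficiency problem the frameworks above address.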
arXiv Detail & Related papers (2021-11-19T18:43:47Z)
- Efficient Micro-Structured Weight Unification and Pruning for Neural Network Compression [56.83861738731913]
Deep Neural Network (DNN) models are essential for practical applications, especially for resource limited devices.
Previous unstructured or structured weight pruning methods rarely translate into real inference acceleration.
We propose a generalized weight unification framework at a hardware compatible micro-structured level to achieve high amount of compression and acceleration.
arXiv Detail & Related papers (2021-06-15T17:22:59Z)
- Fully Quantized Image Super-Resolution Networks [81.75002888152159]
We propose a Fully Quantized image Super-Resolution framework (FQSR) to jointly optimize efficiency and accuracy.
We apply our quantization scheme on multiple mainstream super-resolution architectures, including SRResNet, SRGAN and EDSR.
Our FQSR using low bits quantization can achieve on par performance compared with the full-precision counterparts on five benchmark datasets.
arXiv Detail & Related papers (2020-11-29T03:53:49Z)
- PAMS: Quantized Super-Resolution via Parameterized Max Scale [84.55675222525608]
Deep convolutional neural networks (DCNNs) have shown dominant performance in the task of super-resolution (SR).
We propose a new quantization scheme termed PArameterized Max Scale (PAMS), which applies the trainable truncated parameter to explore the upper bound of the quantization range adaptively.
Experiments demonstrate that the proposed PAMS scheme can well compress and accelerate the existing SR models such as EDSR and RDN.
arXiv Detail & Related papers (2020-11-09T06:16:05Z)
- Character randomized benchmarking for non-multiplicity-free groups with applications to subspace, leakage, and matchgate randomized benchmarking [14.315027895958304]
We extend the original character RB derivation to explicitly treat non-multiplicity-free groups.
We develop a new leakage RB protocol that applies to more general groups of gates.
This yields one of the few examples of a scalable non-Clifford RB protocol.
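Character RB isolates individual exponential decays by averaging data against the irreducible characters of the benchmarking group. The mechanism is character orthogonality, sketched below for the cyclic group Z_n, a deliberately simple, multiplicity-free toy (the paper's contribution is the harder non-multiplicity-free case):

```python
import numpy as np

# Irreps of the cyclic group Z_n are 1-D with characters
# chi_k(g_j) = exp(2*pi*i*k*j/n).
n = 8
elems = np.arange(n)
chi = np.exp(2j * np.pi * np.outer(np.arange(n), elems) / n)  # chi[k, j]

# A signal that is a sum of irrep contributions with unknown weights c_k
# (hypothetical values standing in for per-irrep decay amplitudes).
c_true = np.zeros(n)
c_true[1], c_true[3] = 0.7, 0.3
signal = chi.T @ c_true  # f(g_j) = sum_k c_k * chi_k(g_j)

# Character projection: averaging f against conj(chi_k) over the group
# isolates c_k, since (1/n) * sum_j conj(chi_k(g_j)) chi_l(g_j) = delta_kl.
c_est = (chi.conj() @ signal) / n
```

In an RB experiment the same weighted average is applied to measured survival probabilities, so each irrep's decay can be fit separately instead of untangling a sum of exponentials.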
arXiv Detail & Related papers (2020-10-30T18:00:01Z)
- A general framework for randomized benchmarking [1.1969556745575978]
Randomized benchmarking (RB) refers to a collection of protocols that in the past decade have become central methods for characterizing quantum gates.
We develop a rigorous framework of RB general enough to encompass virtually all known protocols as well as novel, more flexible extensions.
arXiv Detail & Related papers (2020-10-15T18:38:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and accepts no responsibility for any consequences of its use.