Defining Standard Strategies for Quantum Benchmarks
- URL: http://arxiv.org/abs/2303.02108v1
- Date: Fri, 3 Mar 2023 17:50:34 GMT
- Title: Defining Standard Strategies for Quantum Benchmarks
- Authors: Mirko Amico, Helena Zhang, Petar Jurcevic, Lev S. Bishop, Paul Nation,
Andrew Wack, and David C. McKay
- Abstract summary: We define a set of characteristics that any benchmark should follow, and make a distinction between benchmarks and diagnostics.
We discuss the issue of benchmark optimizations, detail when those optimizations are appropriate, and how they should be reported.
We introduce a scalable mirror quantum volume benchmark.
- Score: 0.1759008116536278
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As quantum computers grow in size and scope, a question of great importance
is how best to benchmark performance. Here we define a set of characteristics
that any benchmark should follow -- randomized, well-defined, holistic, device
independent -- and make a distinction between benchmarks and diagnostics. We
use Quantum Volume (QV) [1] as an example case for clear rules in benchmarking,
illustrating the implications for using different success statistics, as in
Ref. [2]. We discuss the issue of benchmark optimizations, detail when those
optimizations are appropriate, and how they should be reported. Reporting the
use of quantum error mitigation techniques is especially critical for
interpreting benchmarking results, as their ability to yield highly accurate
observables comes with exponential overhead, which is often omitted in
performance evaluations. Finally, we use application-oriented and mirror
benchmarking techniques to demonstrate some of the highlighted optimization
principles, and introduce a scalable mirror quantum volume benchmark. We
elucidate the importance of simple optimizations for improving benchmarking
results, and note that such omissions can make a critical difference in
comparisons. For example, when running mirror randomized benchmarking, we
observe a reduction in error per qubit from 2% to 1% on a 26-qubit circuit with
the inclusion of dynamic decoupling.
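The abstract's point about "different success statistics" for Quantum Volume can be illustrated with a small sketch. The standard QV protocol counts a circuit as passing when more than 2/3 of measured bitstrings land in the heavy set (outputs whose ideal probability exceeds the median). This is a minimal, hypothetical illustration with made-up probabilities and counts, not the authors' implementation:

```python
# Sketch of the heavy-output statistic behind Quantum Volume (QV).
# A QV circuit "passes" if the fraction of measured shots falling in the
# heavy set (bitstrings with above-median ideal probability) exceeds 2/3.
import statistics

def heavy_set(ideal_probs):
    """Bitstrings whose ideal output probability is above the median."""
    median = statistics.median(ideal_probs.values())
    return {s for s, p in ideal_probs.items() if p > median}

def heavy_output_fraction(counts, ideal_probs):
    """Fraction of measured shots that landed in the heavy set."""
    heavy = heavy_set(ideal_probs)
    shots = sum(counts.values())
    return sum(n for s, n in counts.items() if s in heavy) / shots

# Toy 2-qubit example with invented ideal probabilities and measured counts.
ideal = {"00": 0.45, "01": 0.30, "10": 0.15, "11": 0.10}
counts = {"00": 500, "01": 300, "10": 120, "11": 80}
hof = heavy_output_fraction(counts, ideal)
print(hof)          # → 0.8
print(hof > 2 / 3)  # → True (passes the QV heavy-output threshold)
```

Changing which statistic is computed here (e.g. mean success probability instead of heavy-output fraction) is exactly the kind of rule change the paper argues must be reported.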
Related papers
- Featuremetric benchmarking: Quantum computer benchmarks based on circuit features [1.0842830860169255]
Benchmarks that concisely summarize the performance of many-qubit quantum computers are essential for measuring progress towards the goal of useful quantum computation.
We present a benchmarking framework that is based on quantifying how a quantum computer's performance varies as a function of features of those circuits.
arXiv Detail & Related papers (2025-04-17T01:49:02Z)
- Systematic benchmarking of quantum computers: status and recommendations [1.1961811541956795]

Benchmarking is crucial for assessing the performance of quantum computers.
The document highlights key aspects such as component-level, system-level, software-level, HPC-level, and application-level benchmarks.
arXiv Detail & Related papers (2025-03-06T19:05:13Z)
- Benchmarking Quantum Computers: Towards a Standard Performance Evaluation Approach [0.7499722271664147]
We review the most important aspects of both classical processor benchmarks and the metrics comprising them.
We analyze the intrinsic properties that characterize the paradigm of quantum computing.
We propose general guidelines for quantum benchmarking.
arXiv Detail & Related papers (2024-07-15T17:39:59Z)
- Towards Robust Benchmarking of Quantum Optimization Algorithms [3.9456729020535013]
A key problem in existing benchmarking frameworks is the lack of equal optimization effort for the best quantum and classical approaches, respectively.
This paper presents a comprehensive set of guidelines comprising universal steps towards fair benchmarks.
arXiv Detail & Related papers (2024-05-13T10:35:23Z)
- Scalable Full-Stack Benchmarks for Quantum Computers [0.0]
We introduce a technique for creating efficient benchmarks from any set of quantum computations.
Our benchmarks assess the integrated performance of a quantum processor's classical compilation algorithms.
arXiv Detail & Related papers (2023-12-21T18:31:42Z)
- Randomized Benchmarking of Local Zeroth-Order Optimizers for Variational Quantum Systems [65.268245109828]
We compare the performance of classical optimizers across a series of partially-randomized tasks.
We focus on local zeroth-order optimizers due to their generally favorable performance and query efficiency on quantum systems.
arXiv Detail & Related papers (2023-10-14T02:13:26Z)
- Majorization-based benchmark of the complexity of quantum processors [105.54048699217668]
We numerically simulate and characterize the operation of various quantum processors.
We identify and assess quantum complexity by comparing the performance of each device against benchmark lines.
We find that the majorization-based benchmark holds as long as the circuits' output states have, on average, high purity.
arXiv Detail & Related papers (2023-04-10T23:01:10Z)
- Optimization of Annealed Importance Sampling Hyperparameters [77.34726150561087]
Annealed Importance Sampling (AIS) is a popular algorithm used to estimate the intractable marginal likelihood of deep generative models.
We present a parametric AIS process with flexible intermediary distributions and optimize the bridging distributions to use fewer steps for sampling.
We assess the performance of our optimized AIS for marginal likelihood estimation of deep generative models and compare it to other estimators.
arXiv Detail & Related papers (2022-09-27T07:58:25Z)
- Analyzing the Impact of Undersampling on the Benchmarking and Configuration of Evolutionary Algorithms [3.967483941966979]
We show that care should be taken when making decisions based on limited data.
We show examples of performance losses of more than 20%, even when using statistical races to dynamically adjust the number of runs.
arXiv Detail & Related papers (2022-04-20T09:53:59Z)
- Generalization Metrics for Practical Quantum Advantage in Generative Models [68.8204255655161]
Generative modeling is a widely accepted natural use case for quantum computers.
We construct a simple and unambiguous approach to probe practical quantum advantage for generative modeling by measuring the algorithm's generalization performance.
Our simulation results show that our quantum-inspired models have up to a $68\times$ enhancement in generating unseen unique and valid samples.
arXiv Detail & Related papers (2022-01-21T16:35:35Z)
- The Benchmark Lottery [114.43978017484893]
"A benchmark lottery" describes the overall fragility of the machine learning benchmarking process.
We show that the relative performance of algorithms may be altered significantly simply by choosing different benchmark tasks.
arXiv Detail & Related papers (2021-07-14T21:08:30Z)
- Benchmarking quantum co-processors in an application-centric, hardware-agnostic and scalable way [0.0]
We introduce a new benchmark, dubbed the Atos Q-score (TM).
The Q-score measures the maximum number of qubits that can be used effectively to solve the MaxCut optimization problem.
We provide an open-source implementation of Q-score that makes it easy to compute the Q-score of any quantum hardware.
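The Q-score is defined in terms of the MaxCut optimization problem. As a point of reference for what a solver is being benchmarked against, here is a hypothetical brute-force sketch (not the open-source Q-score implementation) that computes the exact maximum cut of a small graph:

```python
# Illustrative brute-force MaxCut solver for small graphs -- the exact
# classical quantity a Q-score-style benchmark compares solutions against.
from itertools import product

def maxcut_value(n_nodes, edges):
    """Exhaustively search all 2^n bipartitions for the maximum cut."""
    best = 0
    for assignment in product((0, 1), repeat=n_nodes):
        # An edge is cut when its endpoints fall on opposite sides.
        cut = sum(1 for u, v in edges if assignment[u] != assignment[v])
        best = max(best, cut)
    return best

# Toy 4-node cycle graph: alternating sides cut all 4 edges.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(maxcut_value(4, edges))  # → 4
```

Exhaustive search is only feasible for tiny graphs; the point of the Q-score is precisely to measure how many qubits can be used effectively on instances where such enumeration is out of reach.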
arXiv Detail & Related papers (2021-02-25T16:26:23Z)
- Pseudo-Convolutional Policy Gradient for Sequence-to-Sequence Lip-Reading [96.48553941812366]
Lip-reading aims to infer the speech content from the lip movement sequence.
Traditional learning process of seq2seq models suffers from two problems.
We propose a novel pseudo-convolutional policy gradient (PCPG) based method to address these two problems.
arXiv Detail & Related papers (2020-03-09T09:12:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.