A Review and Collection of Metrics and Benchmarks for Quantum Computers: definitions, methodologies and software
- URL: http://arxiv.org/abs/2502.06717v1
- Date: Mon, 10 Feb 2025 17:48:27 GMT
- Title: A Review and Collection of Metrics and Benchmarks for Quantum Computers: definitions, methodologies and software
- Authors: Deep Lall, Abhishek Agarwal, Weixi Zhang, Lachlan Lindoy, Tobias Lindström, Stephanie Webster, Simon Hall, Nicholas Chancellor, Petros Wallden, Raul Garcia-Patron, Elham Kashefi, Viv Kendon, Jonathan Pritchard, Alessandro Rossi, Animesh Datta, Theodoros Kapourniotis, Konstantinos Georgopoulos, Ivan Rungger
- Abstract summary: This article provides a review of metrics and benchmarks for quantum computers.
It includes a consistent format of the definitions across all metrics and a reproducible approach by linking the metrics to open-source software used to evaluate them.
We identify five areas where international standardization working groups could be established.
- Score: 29.981227868010002
- Abstract: Quantum computers have the potential to provide an advantage over classical computers in a number of areas. Numerous metrics to benchmark the performance of quantum computers, ranging from their individual hardware components to entire applications, have been proposed over the years. Navigating the resulting extensive literature can be overwhelming. Objective comparisons are further hampered in practice as different variations of the same metric are used, and the data disclosed together with a reported metric value is often not sufficient to reproduce the measurements. This article addresses these challenges by providing a review of metrics and benchmarks for quantum computers and 1) a comprehensive collection of benchmarks allowing holistic comparisons of quantum computers, 2) a consistent format of the definitions across all metrics including a transparent description of the methodology and of the main assumptions and limitations, and 3) a reproducible approach by linking the metrics to open-source software used to evaluate them. We identify five areas where international standardization working groups could be established, namely: i) the identification and agreement on the categories of metrics that comprehensively benchmark device performance; ii) the identification and agreement on a set of well-established metrics that together comprehensively benchmark performance; iii) the identification of metrics specific to hardware platforms, including non-gate-based quantum computers; iv) inter-laboratory comparison studies to develop best practice guides for measurement methodology; and v) agreement on what data and software should be reported together with a metric value to ensure trust, transparency and reproducibility. We provide potential routes to advancing these areas. We expect this compendium to accelerate the progress of quantum computing hardware towards quantum advantage.
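To make the reproducibility point concrete, below is a minimal sketch of one well-established metric of the kind such reviews cover: the heavy-output probability at the core of the Quantum Volume benchmark. It assumes the ideal output distribution of a random circuit and the measured bitstring counts are already available; it is an illustration, not the paper's reference implementation.

```python
# Minimal sketch: heavy-output probability, the statistic behind the
# Quantum Volume benchmark. Assumes the ideal output distribution of a
# random circuit and the measured bitstring counts are given.
import numpy as np

def heavy_output_probability(ideal_probs: dict[str, float],
                             counts: dict[str, int]) -> float:
    """Fraction of measured shots landing on 'heavy' outputs,
    i.e. bitstrings whose ideal probability exceeds the median."""
    median = np.median(list(ideal_probs.values()))
    heavy = {b for b, p in ideal_probs.items() if p > median}
    shots = sum(counts.values())
    return sum(n for b, n in counts.items() if b in heavy) / shots

# Toy 2-qubit example: a device passes the Quantum Volume test at a
# given width/depth if the heavy-output probability exceeds 2/3.
ideal = {"00": 0.45, "01": 0.30, "10": 0.15, "11": 0.10}
measured = {"00": 430, "01": 280, "10": 180, "11": 110}
print(heavy_output_probability(ideal, measured))  # 0.71 -> above 2/3
```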
Related papers
- Benchmarking quantum computers [0.0]
Good benchmarks empower scientists, engineers, programmers, and users to understand a computing system's power.
Bad benchmarks can misdirect research and inhibit progress.
We discuss the role of benchmarks and benchmarking, and how good benchmarks can drive and measure progress.
arXiv Detail & Related papers (2024-07-11T19:25:30Z)
- QuAS: Quantum Application Score for benchmarking the utility of quantum computers [0.0]
This paper presents a revised holistic scoring method called the Quantum Application Score (QuAS).
We discuss how to integrate both and thereby obtain an application-level metric that better quantifies the practical utility of quantum computers.
We evaluate the new metric on different hardware platforms such as D-Wave and IBM as well as quantum simulators of Quantum Inspire and Rigetti.
arXiv Detail & Related papers (2024-06-06T09:39:58Z)
- Multimodal deep representation learning for quantum cross-platform verification [60.01590250213637]
Cross-platform verification, a key task in early-stage quantum computing, aims to characterize how similarly two imperfect quantum devices execute identical algorithms.
We introduce a multimodal learning approach, recognizing that the data in this task naturally falls into two distinct modalities.
We devise a multimodal neural network to independently extract knowledge from these modalities, followed by a fusion operation to create a comprehensive data representation.
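As an illustration of the two-branch "encode each modality separately, then fuse" pattern described above, here is a minimal PyTorch sketch; the layer sizes and the concatenation-based fusion are placeholder choices, not the architecture from the paper.

```python
# Illustrative two-branch multimodal network: each modality gets its own
# encoder, and the embeddings are fused by concatenation. Dimensions and
# layer choices are placeholders, not the paper's architecture.
import torch
import torch.nn as nn

class MultimodalFusion(nn.Module):
    def __init__(self, dim_a: int, dim_b: int, hidden: int = 64):
        super().__init__()
        self.encoder_a = nn.Sequential(nn.Linear(dim_a, hidden), nn.ReLU())
        self.encoder_b = nn.Sequential(nn.Linear(dim_b, hidden), nn.ReLU())
        # Fusion head maps the concatenated embeddings to a similarity score.
        self.head = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU(),
                                  nn.Linear(hidden, 1))

    def forward(self, x_a: torch.Tensor, x_b: torch.Tensor) -> torch.Tensor:
        z = torch.cat([self.encoder_a(x_a), self.encoder_b(x_b)], dim=-1)
        return self.head(z)

model = MultimodalFusion(dim_a=32, dim_b=128)
score = model(torch.randn(8, 32), torch.randn(8, 128))  # batch of 8
```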
arXiv Detail & Related papers (2023-11-07T04:35:03Z)
- Majorization-based benchmark of the complexity of quantum processors [105.54048699217668]
We numerically simulate and characterize the operation of various quantum processors.
We identify and assess quantum complexity by comparing the performance of each device against benchmark lines.
We find that the majorization-based benchmark holds as long as the circuits' output states have, on average, high purity.
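The underlying order relation is standard: a distribution p majorizes q when every partial sum of p, sorted in decreasing order, dominates the corresponding partial sum of q. A minimal check, assuming the circuits' output distributions are available as probability vectors:

```python
# Sketch of the majorization relation the benchmark is built on: p
# majorizes q if every partial sum of p, sorted in decreasing order,
# dominates the corresponding partial sum of q.
import numpy as np

def majorizes(p: np.ndarray, q: np.ndarray) -> bool:
    """True if p majorizes q (equal-length probability vectors)."""
    cp = np.cumsum(np.sort(p)[::-1])
    cq = np.cumsum(np.sort(q)[::-1])
    return bool(np.all(cp >= cq - 1e-12))  # tolerance for float error

uniform = np.full(4, 0.25)
peaked = np.array([0.7, 0.2, 0.05, 0.05])
print(majorizes(peaked, uniform))  # True: every distribution majorizes uniform
```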
arXiv Detail & Related papers (2023-04-10T23:01:10Z)
- Extending the Q-score to an Application-level Quantum Metric Framework [0.0]
Evaluating the performance of quantum devices is an important step towards scaling them up and eventually using them in practice.
A prominent quantum metric is given by the Q-score metric of Atos.
We show that the Q-score defines a framework of quantum metrics, which allows benchmarking using different problems, user settings and solvers.
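For context, the Q-score counts the largest problem size n at which a device still solves MaxCut on random G(n, 1/2) graphs significantly better than chance, via an acceptance ratio β(n). The sketch below uses the baseline formulas from the original Atos proposal; treat the constants as illustrative and consult the paper for the exact protocol.

```python
# Sketch of the Q-score acceptance ratio beta(n). It rescales the average
# MaxCut value C(n) returned by a quantum solver on random G(n, 1/2)
# graphs between a random-assignment baseline and an estimate of the
# optimum. Baseline formulas follow the original Q-score proposal; treat
# them as illustrative.
def q_score_beta(n: int, avg_cut: float) -> float:
    c_rand = n * (n - 1) / 8            # expected cut of a random split
    c_opt = n**2 / 8 + 0.178 * n**1.5   # asymptotic estimate of the optimum
    return (avg_cut - c_rand) / (c_opt - c_rand)

# A size n counts towards the Q-score if beta(n) stays above 0.2.
print(q_score_beta(n=10, avg_cut=16.0))  # ~0.69 -> passes at n=10
```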
arXiv Detail & Related papers (2023-02-01T18:03:13Z)
- QAFactEval: Improved QA-Based Factual Consistency Evaluation for Summarization [116.56171113972944]
We show that carefully choosing the components of a QA-based metric is critical to performance.
Our solution improves upon the best-performing entailment-based metric and achieves state-of-the-art performance.
arXiv Detail & Related papers (2021-12-16T00:38:35Z)
- Benchmarking Small-Scale Quantum Devices on Computing Graph Edit Distance [52.77024349608834]
Graph Edit Distance (GED) measures the degree of (dis)similarity between two graphs in terms of the operations needed to make them identical.
In this paper we present a comparative study of two quantum approaches to computing GED.
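For small instances, the classical baseline is easy to reproduce: networkx ships an exact (exponential-time) GED solver.

```python
# Graph Edit Distance in practice: networkx's exact solver shows what the
# quantum approaches compete against on small instances.
import networkx as nx

g1 = nx.cycle_graph(4)  # 4-cycle
g2 = nx.path_graph(4)   # path on 4 nodes
# Minimum number of node/edge insertions, deletions and substitutions
# needed to turn g1 into g2.
print(nx.graph_edit_distance(g1, g2))  # 1: deleting one edge suffices
```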
arXiv Detail & Related papers (2021-11-19T12:35:26Z)
- Application-Oriented Performance Benchmarks for Quantum Computing [0.0]
The benchmarking suite is designed to be readily accessible to a broad audience of users.
Our methodology is constructed to anticipate advances in quantum computing hardware that are likely to emerge in the next five years.
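Application-oriented suites of this kind typically score each benchmark circuit by comparing its measured output distribution with the ideal one; Hellinger fidelity is one widely used choice. A minimal sketch of that scoring step (an illustration, not the suite's actual code):

```python
# Common scoring step in application-oriented benchmark suites: compare a
# circuit's measured output distribution against the ideal one via
# Hellinger fidelity.
import numpy as np

def hellinger_fidelity(p: dict[str, float], q: dict[str, float]) -> float:
    keys = set(p) | set(q)
    bc = sum(np.sqrt(p.get(k, 0.0) * q.get(k, 0.0)) for k in keys)
    return bc ** 2  # 1.0 for identical distributions, 0.0 for disjoint ones

ideal = {"00": 0.5, "11": 0.5}  # e.g. a Bell state
measured = {"00": 0.47, "01": 0.03, "10": 0.04, "11": 0.46}
print(hellinger_fidelity(ideal, measured))  # ~0.93, close to 1 for a good device
```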
arXiv Detail & Related papers (2021-10-07T01:45:06Z)
- Scalable Benchmarks for Gate-Based Quantum Computers [5.735035463793008]
We develop and release an advanced quantum benchmarking framework.
It measures the performance of universal quantum devices in a hardware-agnostic way.
We present the benchmark results of twenty-one different quantum devices from IBM, Rigetti and IonQ.
arXiv Detail & Related papers (2021-04-21T18:00:12Z)
- GO FIGURE: A Meta Evaluation of Factuality in Summarization [131.1087461486504]
We introduce GO FIGURE, a meta-evaluation framework for evaluating factuality evaluation metrics.
Our benchmark analysis on ten factuality metrics reveals that our framework provides a robust and efficient evaluation.
It also reveals that while QA metrics generally improve over standard metrics that measure factuality across domains, performance is highly dependent on the way in which questions are generated.
arXiv Detail & Related papers (2020-10-24T08:30:20Z)
- Towards Question-Answering as an Automatic Metric for Evaluating the Content Quality of a Summary [65.37544133256499]
We propose a metric to evaluate the content quality of a summary using question-answering (QA).
We demonstrate the experimental benefits of QA-based metrics through an analysis of our proposed metric, QAEval.
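The QA-based recipe is easy to sketch: derive question/answer pairs from a reference, answer each question from the candidate summary, and score the overlap. Below, `answer_question` is a placeholder for any extractive QA model, not QAEval's actual component; the overlap score is standard token-level F1.

```python
# Skeleton of a QA-based summary metric in the spirit of QAEval. The QA
# model is a placeholder supplied by the caller.
def token_f1(pred: str, gold: str) -> float:
    p, g = pred.lower().split(), gold.lower().split()
    common = sum(min(p.count(t), g.count(t)) for t in set(p))
    if not common:
        return 0.0
    precision, recall = common / len(p), common / len(g)
    return 2 * precision * recall / (precision + recall)

def qa_metric(qa_pairs, summary, answer_question) -> float:
    """Average F1 between gold answers and answers found in the summary."""
    scores = [token_f1(answer_question(q, summary), gold)
              for q, gold in qa_pairs]
    return sum(scores) / len(scores)
```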
arXiv Detail & Related papers (2020-10-01T15:33:09Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.