Systematic benchmarking of quantum computers: status and recommendations
- URL: http://arxiv.org/abs/2503.04905v1
- Date: Thu, 06 Mar 2025 19:05:13 GMT
- Title: Systematic benchmarking of quantum computers: status and recommendations
- Authors: Jeanette Miriam Lorenz, Thomas Monz, Jens Eisert, Daniel Reitzner, Félicien Schopfer, Frédéric Barbaresco, Krzysztof Kurowski, Ward van der Schoot, Thomas Strohm, Jean Senellart, Cécile M. Perrault, Martin Knufinke, Ziyad Amodjee, Mattia Giardini
- Abstract summary: Benchmarking is crucial for assessing the performance of quantum computers. The document highlights key aspects such as component-level, system-level, software-level, HPC-level, and application-level benchmarks.
- Score: 1.1961811541956795
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Architectures for quantum computing can only be scaled up when they are accompanied by suitable benchmarking techniques. The document provides a comprehensive overview of the state and recommendations for systematic benchmarking of quantum computers. Benchmarking is crucial for assessing the performance of quantum computers, including the hardware, software, as well as algorithms and applications. The document highlights key aspects such as component-level, system-level, software-level, HPC-level, and application-level benchmarks. Component-level benchmarks focus on the performance of individual qubits and gates, while system-level benchmarks evaluate the entire quantum processor. Software-level benchmarks consider the compiler's efficiency and error mitigation techniques. HPC-level and cloud benchmarks address integration with classical systems and cloud platforms, respectively. Application-level benchmarks measure performance in real-world use cases. The document also discusses the importance of standardization to ensure reproducibility and comparability of benchmarks, and highlights ongoing efforts in the quantum computing community towards establishing these benchmarks. Recommendations for future steps emphasize the need for developing standardized evaluation routines and integrating benchmarks with broader quantum technology activities.
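To make the component-level category concrete, the sketch below illustrates one standard technique in that class: estimating an average gate error from a randomized-benchmarking-style decay curve. This is an illustrative example, not the paper's own method; it assumes a single-qubit depolarizing model with a known asymptote B = 0.5 and uses noiseless synthetic survival data.

```python
import numpy as np

# Component-level benchmark sketch (assumption: single-qubit depolarizing
# noise, known asymptote B = 0.5, noiseless synthetic data).

def rb_decay(m, A, f, B=0.5):
    """Survival probability after m random gates: A * f**m + B."""
    return A * f ** m + B

def fit_decay_rate(lengths, survival, B=0.5):
    """Recover the decay parameter f via a log-linear fit of (survival - B)."""
    y = np.log(np.asarray(survival) - B)
    slope, _ = np.polyfit(lengths, y, 1)
    return float(np.exp(slope))

lengths = np.arange(1, 51)
survival = rb_decay(lengths, A=0.5, f=0.99)   # synthetic ideal data
f_est = fit_decay_rate(lengths, survival)
# Average error per gate for a single qubit (d = 2): r = (1 - f) * (d - 1) / d.
r_est = (1 - f_est) / 2
print(round(f_est, 4), round(r_est, 5))       # → 0.99 0.005
```

The same decay-fitting idea underlies many component-level protocols; real experiments would replace the synthetic data with measured survival probabilities and fit A and B as well.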
Related papers
- Benchmarking Post-Training Quantization in LLMs: Comprehensive Taxonomy, Unified Evaluation, and Comparative Analysis [89.60263788590893]
The post-training quantization (PTQ) technique has been extensively adopted for large language model (LLM) compression.
Existing algorithms focus primarily on performance, overlooking the trade-off among model size, performance, and quantization bitwidth.
We provide a novel benchmark for LLM PTQ in this paper.
arXiv Detail & Related papers (2025-02-18T07:35:35Z) - A Review and Collection of Metrics and Benchmarks for Quantum Computers: definitions, methodologies and software [29.981227868010002]
This article provides a review of metrics and benchmarks for quantum computers. It includes a consistent format of the definitions across all metrics and a reproducible approach by linking the metrics to open-source software used to evaluate them. We identify five areas where international standardization working groups could be established.
arXiv Detail & Related papers (2025-02-10T17:48:27Z) - Benchmarking Quantum Computers: Towards a Standard Performance Evaluation Approach [0.7499722271664147]
We review the most important aspects of both classical processor benchmarks and the metrics comprising them.
We analyze the intrinsic properties that characterize the paradigm of quantum computing.
We propose general guidelines for quantum benchmarking.
arXiv Detail & Related papers (2024-07-15T17:39:59Z) - Benchmarking quantum computers [0.0]
Good benchmarks empower scientists, engineers, programmers, and users to understand a computing system's power.
Bad benchmarks can misdirect research and inhibit progress.
We discuss the role of benchmarks and benchmarking, and how good benchmarks can drive and measure progress.
arXiv Detail & Related papers (2024-07-11T19:25:30Z) - Non-unitary Coupled Cluster Enabled by Mid-circuit Measurements on Quantum Computers [37.69303106863453]
We propose a state preparation method based on coupled cluster (CC) theory, which is a pillar of quantum chemistry on classical computers.
Our approach leads to a reduction of the classical computation overhead, and the number of CNOT and T gates by 28% and 57% on average.
arXiv Detail & Related papers (2024-06-17T14:10:10Z) - Majorization-based benchmark of the complexity of quantum processors [105.54048699217668]
We numerically simulate and characterize the operation of various quantum processors.
We identify and assess quantum complexity by comparing the performance of each device against benchmark lines.
We find that the majorization-based benchmark holds as long as the circuits' output states have, on average, high purity.
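As a minimal illustration of the majorization idea behind this benchmark (an assumption about the general concept, not the paper's implementation): a distribution p majorizes q when every partial sum of p's outcome probabilities, sorted in decreasing order, is at least the corresponding partial sum for q.

```python
import numpy as np

def majorizes(p, q, tol=1e-12):
    """Return True if distribution p majorizes distribution q:
    all partial sums of p (sorted decreasing) dominate those of q."""
    cp = np.cumsum(np.sort(p)[::-1])
    cq = np.cumsum(np.sort(q)[::-1])
    return bool(np.all(cp >= cq - tol))

# A sharply peaked output distribution (high purity) majorizes the
# uniform distribution, but not vice versa.
peaked = np.array([0.7, 0.2, 0.08, 0.02])
uniform = np.full(4, 0.25)
print(majorizes(peaked, uniform))   # → True
print(majorizes(uniform, peaked))   # → False
```

Intuitively, the more a device's output concentrates on few outcomes, the higher it sits in the majorization order, which is why the benchmark degrades as output purity drops.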
arXiv Detail & Related papers (2023-04-10T23:01:10Z) - When BERT Meets Quantum Temporal Convolution Learning for Text Classification in Heterogeneous Computing [75.75419308975746]
This work proposes a vertical federated learning architecture based on variational quantum circuits to demonstrate the competitive performance of a quantum-enhanced pre-trained BERT model for text classification.
Our experiments on intent classification show that our proposed BERT-QTC model attains competitive experimental results in the Snips and ATIS spoken language datasets.
arXiv Detail & Related papers (2022-02-17T09:55:21Z) - QUARK: A Framework for Quantum Computing Application Benchmarking [0.0]
We propose an application-centric benchmark method and the QUARK framework to foster the investigation and creation of application benchmarks for QC.
This paper makes a case for application-level benchmarks and provides an in-depth "pen and paper" benchmark formulation of two reference problems.
arXiv Detail & Related papers (2022-02-07T09:41:24Z) - The Benchmark Lottery [114.43978017484893]
The term "benchmark lottery" describes the overall fragility of the machine learning benchmarking process.
We show that the relative performance of algorithms may be altered significantly simply by choosing different benchmark tasks.
arXiv Detail & Related papers (2021-07-14T21:08:30Z) - Accelerating variational quantum algorithms with multiple quantum processors [78.36566711543476]
Variational quantum algorithms (VQAs) have the potential of utilizing near-term quantum machines to gain certain computational advantages.
Modern VQAs suffer from cumbersome computational overhead, since they have traditionally employed a single quantum processor to handle large data.
Here we devise an efficient distributed optimization scheme, called QUDIO, to address this issue.
arXiv Detail & Related papers (2021-06-24T08:18:42Z) - Towards Question-Answering as an Automatic Metric for Evaluating the Content Quality of a Summary [65.37544133256499]
We propose a metric to evaluate the content quality of a summary using question-answering (QA).
We demonstrate the experimental benefits of QA-based metrics through an analysis of our proposed metric, QAEval.
arXiv Detail & Related papers (2020-10-01T15:33:09Z) - Application-Motivated, Holistic Benchmarking of a Full Quantum Computing Stack [0.0]
Quantum computing systems need to be benchmarked in terms of practical tasks they would be expected to do.
We propose 3 "application-motivated" circuit classes for benchmarking: deep, shallow, and square.
We quantify the performance of a quantum computing system in running circuits from these classes using several figures of merit.
arXiv Detail & Related papers (2020-06-01T21:21:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.