Predictive Models from Quantum Computer Benchmarks
- URL: http://arxiv.org/abs/2305.08796v1
- Date: Mon, 15 May 2023 17:00:23 GMT
- Title: Predictive Models from Quantum Computer Benchmarks
- Authors: Daniel Hothem, Jordan Hines, Karthik Nataraj, Robin Blume-Kohout, and
Timothy Proctor
- Abstract summary: Holistic benchmarks for quantum computers are essential for testing and summarizing the performance of quantum hardware.
We introduce a general framework for building predictive models from benchmarking data using capability models.
Our case studies use data from cloud-accessible quantum computers and simulations of noisy quantum computers.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Holistic benchmarks for quantum computers are essential for testing and
summarizing the performance of quantum hardware. However, holistic benchmarks
-- such as algorithmic or randomized benchmarks -- typically do not predict a
processor's performance on circuits outside the benchmark's necessarily very
limited set of test circuits. In this paper, we introduce a general framework
for building predictive models from benchmarking data using capability models.
Capability models can be fit to many kinds of benchmarking data and used for a
variety of predictive tasks. We demonstrate this flexibility with two case
studies. In the first case study, we predict circuit (i) process fidelities and
(ii) success probabilities by fitting error rates models to two kinds of
volumetric benchmarking data. Error rates models are simple, yet versatile
capability models which assign effective error rates to individual gates, or
more general circuit components. In the second case study, we construct a
capability model for predicting circuit success probabilities by applying
transfer learning to ResNet50, a neural network trained for image
classification. Our case studies use data from cloud-accessible quantum
computers and simulations of noisy quantum computers.
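The abstract's first case study fits "error rates models" that assign an effective error rate to each gate and predict a circuit's success probability from them. As a hedged illustration only (the gate names, data, and fitting procedure below are invented for this sketch, not taken from the paper), such a model can be fit by linear least squares in log space, since log p = Σ_g n_g · log(1 − ε_g):

```python
import numpy as np

# Hypothetical sketch of an "error rates model": each gate type g has an
# effective error rate eps_g, and a circuit's predicted success probability
# is the product of (1 - eps_g) over its gates. All names and numbers here
# are illustrative, not from the paper.

def predict_success(gate_counts, eps):
    """Predicted success probability for a circuit given per-gate error rates.

    gate_counts: {gate_name: count}, eps: {gate_name: error_rate}.
    """
    p = 1.0
    for gate, n in gate_counts.items():
        p *= (1.0 - eps[gate]) ** n
    return p

def fit_error_rates(circuits, observed, gates):
    """Fit per-gate error rates by least squares in log space:
    log p = sum_g n_g * log(1 - eps_g), linear in log(1 - eps_g)."""
    A = np.array([[c.get(g, 0) for g in gates] for c in circuits], dtype=float)
    b = np.log(np.asarray(observed, dtype=float))
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return {g: 1.0 - np.exp(xi) for g, xi in zip(gates, x)}

# Synthetic demo: generate noiseless data from known rates, then recover them.
true_eps = {"cx": 0.01, "h": 0.001}
circuits = [{"cx": 5, "h": 10}, {"cx": 20, "h": 4}, {"cx": 2, "h": 30}]
observed = [predict_success(c, true_eps) for c in circuits]
fitted = fit_error_rates(circuits, observed, ["cx", "h"])
print(fitted)  # close to true_eps, since the synthetic data is noiseless
```

On real benchmarking data the observed probabilities are noisy and the model is only approximate, which is where the paper's more general capability-model framework comes in.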
Related papers
- Quantum Active Learning [3.3202982522589934]
Training a quantum neural network typically demands a substantial labeled training set for supervised learning.
QAL effectively trains the model, achieving performance comparable to that on fully labeled datasets.
We elucidate the negative result that QAL is overtaken by a random-sampling baseline through a range of numerical experiments.
arXiv Detail & Related papers (2024-05-28T14:39:54Z) - Learning to rank quantum circuits for hardware-optimized performance enhancement [0.0]
We introduce and experimentally test a machine-learning-based method for ranking logically equivalent quantum circuits.
We compare our method to two common approaches: random layout selection and a publicly available baseline called Mapomatic.
Our best model leads to a $1.8\times$ reduction in selection error when compared to the baseline approach and a $3.2\times$ reduction when compared to random selection.
arXiv Detail & Related papers (2024-04-09T18:00:01Z) - Classical-to-Quantum Transfer Learning Facilitates Machine Learning with Variational Quantum Circuit [62.55763504085508]
We prove that a classical-to-quantum transfer learning architecture using a Variational Quantum Circuit (VQC) improves the representation and generalization (estimation error) capabilities of the VQC model.
We show that the architecture of classical-to-quantum transfer learning leverages pre-trained classical generative AI models, making it easier to find the optimal parameters for the VQC in the training stage.
arXiv Detail & Related papers (2023-05-18T03:08:18Z) - A Framework for Demonstrating Practical Quantum Advantage: Racing Quantum against Classical Generative Models [62.997667081978825]
We build on a previously proposed framework for evaluating the generalization performance of generative models.
We establish the first comparative race towards practical quantum advantage (PQA) between classical and quantum generative models.
Our results suggest that QCBMs are more efficient in the data-limited regime than the other state-of-the-art classical generative models.
arXiv Detail & Related papers (2023-03-27T22:48:28Z) - A performance characterization of quantum generative models [35.974070202997176]
We compare quantum circuits used for quantum generative modeling.
We learn the underlying probability distribution of the data sets via two popular training methods.
We empirically find that a variant of the discrete architecture, which learns the copula of the probability distribution, outperforms all other methods.
arXiv Detail & Related papers (2023-01-23T11:00:29Z) - A didactic approach to quantum machine learning with a single qubit [68.8204255655161]
We focus on the case of learning with a single qubit, using data re-uploading techniques.
We implement the different proposed formulations in toy and real-world datasets using the qiskit quantum computing SDK.
arXiv Detail & Related papers (2022-11-23T18:25:32Z) - Do Quantum Circuit Born Machines Generalize? [58.720142291102135]
We present the first work in the literature that establishes the QCBM's generalization performance as an integral evaluation metric for quantum generative models.
We show that the QCBM is able to effectively learn the reweighted dataset and generate unseen samples with higher quality than those in the training set.
arXiv Detail & Related papers (2022-07-27T17:06:34Z) - When BERT Meets Quantum Temporal Convolution Learning for Text Classification in Heterogeneous Computing [75.75419308975746]
This work proposes a vertical federated learning architecture based on variational quantum circuits to demonstrate the competitive performance of a quantum-enhanced pre-trained BERT model for text classification.
Our experiments on intent classification show that our proposed BERT-QTC model attains competitive experimental results in the Snips and ATIS spoken language datasets.
arXiv Detail & Related papers (2022-02-17T09:55:21Z) - Generalization Metrics for Practical Quantum Advantage in Generative Models [68.8204255655161]
Generative modeling is a widely accepted natural use case for quantum computers.
We construct a simple and unambiguous approach to probe practical quantum advantage for generative modeling by measuring the algorithm's generalization performance.
Our simulation results show that our quantum-inspired models have up to a $68\times$ enhancement in generating unseen unique and valid samples.
arXiv Detail & Related papers (2022-01-21T16:35:35Z) - Comparing concepts of quantum and classical neural network models for image classification task [0.456877715768796]
This material includes the results of experiments on training and performance of a hybrid quantum-classical neural network.
Although its simulation is time-consuming, the quantum network outperforms the classical network.
arXiv Detail & Related papers (2021-08-19T18:49:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.