Efficient flexible characterization of quantum processors with nested
error models
- URL: http://arxiv.org/abs/2103.02188v1
- Date: Wed, 3 Mar 2021 05:23:09 GMT
- Title: Efficient flexible characterization of quantum processors with nested
error models
- Authors: Erik Nielsen, Kenneth Rudinger, Timothy Proctor, Kevin Young, Robin
Blume-Kohout
- Abstract summary: We present a technique for finding a good error model for a quantum processor.
The technique iteratively tests a nested sequence of models against data obtained from the processor.
We demonstrate the technique by using it to characterize a simulated noisy 2-qubit processor.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a simple and powerful technique for finding a good error model for
a quantum processor. The technique iteratively tests a nested sequence of
models against data obtained from the processor, and keeps track of the
best-fit model and its wildcard error (a quantification of the unmodeled error)
at each step. Each best-fit model, along with a quantification of its unmodeled
error, constitutes a characterization of the processor. We explain how quantum
processor models can be compared with experimental data and to each other. We
demonstrate the technique by using it to characterize a simulated noisy 2-qubit
processor.
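As a rough, self-contained illustration of this loop (and nothing more: the data, model names, and significance threshold below are invented, and the real technique operates on full quantum processor models rather than coin-flip statistics), the following Python sketch fits a nested sequence of binomial models to simulated two-circuit outcome counts, uses a likelihood-ratio test to decide whether each extra parameter is justified, and reports the largest gap between predicted and observed outcome probabilities as a crude stand-in for the wildcard error.

import numpy as np
from scipy.stats import chi2

# Simulated data: counts of outcome "1" for two circuits, 1000 shots each.
shots = np.array([1000, 1000])
ones = np.array([521, 468])
freqs = ones / shots

def log_likelihood(probs):
    # Binomial log-likelihood of the observed counts under predicted probabilities.
    probs = np.clip(probs, 1e-12, 1 - 1e-12)
    return float(np.sum(ones * np.log(probs) + (shots - ones) * np.log(1 - probs)))

# A nested sequence of models: each contains the previous one as a special case.
models = [
    ("ideal (0 parameters)", 0, np.array([0.5, 0.5])),
    ("shared bias (1 parameter)", 1, np.full(2, ones.sum() / shots.sum())),
    ("per-circuit bias (2 parameters)", 2, freqs),
]

prev_ll, prev_k = None, None
for name, k, best_fit_probs in models:
    ll = log_likelihood(best_fit_probs)
    # Crude "wildcard"-style gauge of unmodeled error: the worst prediction gap.
    wildcard = float(np.max(np.abs(best_fit_probs - freqs)))
    if prev_ll is None:
        verdict = "baseline"
    else:
        # Likelihood-ratio test against the previous, smaller nested model.
        p_value = chi2.sf(2.0 * (ll - prev_ll), df=k - prev_k)
        verdict = "extra parameters justified" if p_value < 0.05 else "extra parameters not justified"
    print(f"{name}: logL={ll:.1f}, wildcard~{wildcard:.3f}, {verdict}")
    prev_ll, prev_k = ll, k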
Related papers
- Data-Efficient Quantum Noise Modeling via Machine Learning [0.3279176777295314]
We introduce a data-efficient, machine learning-based framework to construct accurate, parameterized noise models for superconducting quantum processors.
We show that a model trained exclusively on small-scale circuits accurately predicts the behavior of larger validation circuits.
arXiv Detail & Related papers (2025-09-16T10:30:28Z)
- An Efficient Quantum Classifier Based on Hamiltonian Representations [50.467930253994155]
Quantum machine learning (QML) is a discipline that seeks to transfer the advantages of quantum computing to data-driven tasks.
We propose an efficient approach that circumvents the costs associated with data encoding by mapping inputs to a finite set of Pauli strings.
We evaluate our approach on text and image classification tasks, against well-established classical and quantum models.
arXiv Detail & Related papers (2025-04-13T11:49:53Z)
- Bounding the systematic error in quantum error mitigation due to model violation [0.0]
We develop a methodology to efficiently compute upper bounds on the impact of error-model inaccuracy in error mitigation.
Our protocols require no additional experiments, and instead rely on comparisons between the error model and the error-learning data.
We show that our estimated upper bounds are typically close to the worst observed performance of error mitigation on random circuits.
arXiv Detail & Related papers (2024-08-20T16:27:00Z)
- Learning to rank quantum circuits for hardware-optimized performance enhancement [0.0]
We introduce and experimentally test a machine-learning-based method for ranking logically equivalent quantum circuits.
We compare our method to two common approaches: random layout selection and a publicly available baseline called Mapomatic.
Our best model leads to a $1.8\times$ reduction in selection error when compared to the baseline approach and a $3.2\times$ reduction when compared to random selection.
arXiv Detail & Related papers (2024-04-09T18:00:01Z)
- Volumetric Benchmarking of Quantum Computing Noise Models [3.0098885383612104]
We present a systematic approach to benchmark noise models for quantum computing applications.
It compares the results of hardware experiments to predictions of noise models for a representative set of quantum circuits.
We also construct a noise model and optimize its parameters with a series of training circuits.
arXiv Detail & Related papers (2023-06-14T10:49:01Z)
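A minimal sketch of the comparison step that entry describes (assumptions only: the circuits, outcome distributions, and use of total variation distance as the figure of merit are illustrative, not the benchmark's actual protocol):

import numpy as np

def tvd(p, q):
    # Total variation distance between two outcome distributions.
    return 0.5 * float(np.sum(np.abs(np.asarray(p) - np.asarray(q))))

# Hypothetical per-circuit distributions over the four 2-qubit outcomes.
predicted = {
    "circuit_A": [0.90, 0.04, 0.04, 0.02],   # noise-model prediction
    "circuit_B": [0.48, 0.02, 0.02, 0.48],
}
measured = {
    "circuit_A": [0.86, 0.06, 0.05, 0.03],   # hardware counts, normalized
    "circuit_B": [0.45, 0.04, 0.03, 0.48],
}

scores = {name: tvd(predicted[name], measured[name]) for name in predicted}
print(scores)                       # per-circuit model-vs-hardware discrepancy
print("worst case:", max(scores.values()))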
- Hard Sample Matters a Lot in Zero-Shot Quantization [52.32914196337281]
Zero-shot quantization (ZSQ) is promising for compressing and accelerating deep neural networks when the data for training full-precision models are inaccessible.
In ZSQ, network quantization is performed using synthetic samples; thus, the performance of quantized models depends heavily on the quality of those samples.
We propose HArd sample Synthesizing and Training (HAST) to address this issue.
arXiv Detail & Related papers (2023-03-24T06:22:57Z)
- A didactic approach to quantum machine learning with a single qubit [68.8204255655161]
We focus on the case of learning with a single qubit, using data re-uploading techniques.
We implement the different proposed formulations on toy and real-world datasets using the Qiskit quantum computing SDK.
arXiv Detail & Related papers (2022-11-23T18:25:32Z)
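To make the single-qubit data re-uploading pattern mentioned in that entry concrete, here is a minimal Qiskit sketch (illustrative only: the gate choices, layer structure, and parameter values are assumptions, not the paper's circuits):

from qiskit import QuantumCircuit

def reuploading_circuit(x, thetas):
    # One qubit: the same input x is re-uploaded in every layer,
    # interleaved with trainable single-qubit rotations.
    qc = QuantumCircuit(1, 1)
    for theta, phi, lam in thetas:
        qc.ry(x, 0)               # data re-uploading step
        qc.u(theta, phi, lam, 0)  # trainable rotation
    qc.measure(0, 0)
    return qc

qc = reuploading_circuit(x=0.7, thetas=[(0.1, 0.2, 0.3), (0.4, 0.5, 0.6)])
print(qc.draw())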
- Generalization Metrics for Practical Quantum Advantage in Generative Models [68.8204255655161]
Generative modeling is a widely accepted natural use case for quantum computers.
We construct a simple and unambiguous approach to probe practical quantum advantage for generative modeling by measuring the algorithm's generalization performance.
Our simulation results show that our quantum-inspired models have up to a $68\times$ enhancement in generating unseen unique and valid samples.
arXiv Detail & Related papers (2022-01-21T16:35:35Z)
- Measuring NISQ Gate-Based Qubit Stability Using a 1+1 Field Theory and Cycle Benchmarking [50.8020641352841]
We study coherent errors on a quantum hardware platform using a transverse field Ising model Hamiltonian as a sample user application.
We identify inter-day and intra-day qubit calibration drift and the impacts of quantum circuit placement on groups of qubits in different physical locations on the processor.
This paper also discusses how these measurements can provide a better understanding of these types of errors and how they may improve efforts to validate the accuracy of quantum computations.
arXiv Detail & Related papers (2022-01-08T23:12:55Z)
- Model-agnostic multi-objective approach for the evolutionary discovery of mathematical models [55.41644538483948]
In modern data science, it is often more important to understand the properties of a model and to identify which parts could be replaced to obtain better results.
We use multi-objective evolutionary optimization for composite data-driven model learning to obtain the algorithm's desired properties.
arXiv Detail & Related papers (2021-07-07T11:17:09Z)
- Distilling Interpretable Models into Human-Readable Code [71.11328360614479]
Human-readability is an important and desirable standard for machine-learned model interpretability.
We propose to train interpretable models using conventional methods, and then distill them into concise, human-readable code.
We describe a piecewise-linear curve-fitting algorithm that produces high-quality results efficiently and reliably across a broad range of use cases.
arXiv Detail & Related papers (2021-01-21T01:46:36Z)
- Wildcard error: Quantifying unmodeled errors in quantum processors [0.0]
Error models for quantum computing processors describe their deviation from ideal behavior and predict the consequences in applications.
We show how to resolve inconsistencies, and quantify the rate of unmodeled errors, by augmenting error models with a parameterized wildcard error model.
The amount of wildcard error required to restore consistency with data quantifies how much unmodeled error was observed.
arXiv Detail & Related papers (2020-12-22T18:22:08Z)
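A crude, hypothetical reading of that idea in code (the actual wildcard error model is parameterized, e.g. per gate, and consistency is judged with proper statistical tests rather than the zero-slack comparison used here):

import numpy as np

def tvd(p, q):
    # Total variation distance between two outcome distributions.
    return 0.5 * float(np.sum(np.abs(np.asarray(p) - np.asarray(q))))

# Hypothetical per-circuit outcome distributions (model) and observed frequencies (data).
model_probs = {"c1": [0.95, 0.05], "c2": [0.50, 0.50], "c3": [0.80, 0.20]}
data_freqs  = {"c1": [0.90, 0.10], "c2": [0.47, 0.53], "c3": [0.78, 0.22]}

# Granting every circuit a TVD allowance of w restores consistency (with zero
# statistical slack) exactly when w is at least the worst-case model-data TVD,
# so that worst case is the minimal uniform wildcard error.
w = max(tvd(model_probs[c], data_freqs[c]) for c in model_probs)
print(f"minimal uniform wildcard error: {w:.3f}")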
- Modeling Noisy Quantum Circuits Using Experimental Characterization [0.40611352512781856]
Noisy intermediate-scale quantum (NISQ) devices offer unique platforms to test and evaluate the behavior of non-fault-tolerant quantum computing.
We present a test-driven approach to characterizing NISQ programs that manages the complexity of noisy circuit modeling.
arXiv Detail & Related papers (2020-01-23T16:45:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.