Data-Efficient Quantum Noise Modeling via Machine Learning
- URL: http://arxiv.org/abs/2509.12933v1
- Date: Tue, 16 Sep 2025 10:30:28 GMT
- Title: Data-Efficient Quantum Noise Modeling via Machine Learning
- Authors: Yanjun Ji, Marco Roth, David A. Kreplin, Ilia Polian, Frank K. Wilhelm,
- Abstract summary: We introduce a data-efficient, machine learning-based framework to construct accurate, parameterized noise models for superconducting quantum processors. We show that a model trained exclusively on small-scale circuits accurately predicts the behavior of larger validation circuits.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Maximizing the computational utility of near-term quantum processors requires predictive noise models that inform robust, noise-aware compilation and error mitigation. Conventional models often fail to capture the complex error dynamics of real hardware or require prohibitive characterization overhead. We introduce a data-efficient, machine learning-based framework to construct accurate, parameterized noise models for superconducting quantum processors. Our approach circumvents costly characterization protocols by learning hardware-specific error parameters directly from the measurement data of existing application and benchmark circuits. The generality and robustness of the framework are demonstrated through comprehensive benchmarking across multiple quantum devices and algorithms. Crucially, we show that a model trained exclusively on small-scale circuits accurately predicts the behavior of larger validation circuits. Our data-efficient approach achieves up to a 65% improvement in model fidelity quantified by the Hellinger distance between predicted and experimental circuit output distributions, compared to standard noise models derived from device properties. This work establishes a practical paradigm for noise characterization, providing a crucial tool for developing more effective noise-aware compilation and error-mitigation strategies.
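The abstract quantifies model fidelity with the Hellinger distance between predicted and experimental circuit output distributions. A minimal sketch of that metric, assuming discrete output distributions as probability vectors (the function name and NumPy implementation are illustrative, not taken from the paper):

```python
import numpy as np

def hellinger(p, q):
    """Hellinger distance between two discrete probability distributions.

    H(P, Q) = (1 / sqrt(2)) * || sqrt(P) - sqrt(Q) ||_2
    Ranges from 0 (identical) to 1 (disjoint support).
    """
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return np.sqrt(np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)) / np.sqrt(2.0)
```

For example, two identical distributions give a distance of 0, while distributions with disjoint support give 1.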
Related papers
- Noise Hypernetworks: Amortizing Test-Time Compute in Diffusion Models [57.49136894315871]
The new paradigm of test-time scaling has yielded remarkable breakthroughs in reasoning models and generative vision models. We propose one solution to the problem of integrating test-time scaling knowledge into a model during post-training. We replace reward-guided test-time noise optimization in diffusion models with a Noise Hypernetwork that modulates the initial input noise.
arXiv Detail & Related papers (2025-08-13T17:33:37Z) - Handling Label Noise via Instance-Level Difficulty Modeling and Dynamic Optimization [33.13911801301048]
Deep neural networks degrade in generalization performance under noisy supervision. Existing methods focus on isolating clean subsets or correcting noisy labels. We propose a novel two-stage noisy learning framework that enables instance-level optimization.
arXiv Detail & Related papers (2025-05-01T19:12:58Z) - Sparse Non-Markovian Noise Modeling of Transmon-Based Multi-Qubit Operations [0.0]
The influence of noise on quantum dynamics is one of the main factors preventing current quantum processors from performing accurate quantum computations. We present an approach for effective noise modeling of multi-qubit operations on transmon-based devices. We show that the model can capture and predict a wide range of single- and two-qubit behaviors, including non-temporally correlated noise sources.
arXiv Detail & Related papers (2024-12-20T17:37:26Z) - One-step Noisy Label Mitigation [86.57572253460125]
Mitigating the detrimental effects of noisy labels on the training process has become increasingly critical.
We propose One-step Anti-Noise (OSA), a model-agnostic noisy label mitigation paradigm.
We empirically demonstrate the superiority of OSA, highlighting its enhanced training robustness, improved task transferability, ease of deployment, and reduced computational costs.
arXiv Detail & Related papers (2024-10-02T18:42:56Z) - Volumetric Benchmarking of Quantum Computing Noise Models [3.0098885383612104]
We present a systematic approach to benchmark noise models for quantum computing applications.
It compares the results of hardware experiments to predictions of noise models for a representative set of quantum circuits.
We also construct a noise model and optimize its parameters with a series of training circuits.
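The workflow in this entry, constructing a noise model and optimizing its parameters against a set of training circuits, can be sketched with a toy depolarizing-style model fitted by grid search over the Hellinger distance (the mixing model, parameter grid, and function names are illustrative assumptions, not the paper's actual method):

```python
import numpy as np

def hellinger(p, q):
    """Hellinger distance between two discrete probability distributions."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return np.sqrt(np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)) / np.sqrt(2.0)

def depolarized(ideal, lam):
    """Toy noise model: mix the ideal distribution with the uniform one."""
    return (1.0 - lam) * ideal + lam * np.full_like(ideal, 1.0 / ideal.size)

def fit_noise_parameter(ideal_dists, measured_dists,
                        grid=np.linspace(0.0, 1.0, 101)):
    """Grid-search the depolarizing strength that best explains the
    training circuits, minimizing the mean Hellinger distance."""
    costs = [np.mean([hellinger(depolarized(i, lam), m)
                      for i, m in zip(ideal_dists, measured_dists)])
             for lam in grid]
    return grid[int(np.argmin(costs))]
```

A real framework would replace the single depolarizing strength with per-gate, hardware-specific error parameters, but the fit-against-training-circuits loop has the same shape.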
arXiv Detail & Related papers (2023-06-14T10:49:01Z) - Realistic Noise Synthesis with Diffusion Models [44.404059914652194]
Deep denoising models require extensive real-world training data, which is challenging to acquire. We propose a novel Realistic Noise Synthesis Diffusor (RNSD) method using diffusion models to address these challenges.
arXiv Detail & Related papers (2023-05-23T12:56:01Z) - Improve Noise Tolerance of Robust Loss via Noise-Awareness [60.34670515595074]
We propose a meta-learning method capable of adaptively learning a hyperparameter prediction function, called the Noise-Aware-Robust-Loss-Adjuster (NARL-Adjuster for brevity).
We integrate four SOTA robust loss functions with our algorithm, and comprehensive experiments substantiate the general applicability and effectiveness of the proposed method in both noise tolerance and performance.
arXiv Detail & Related papers (2023-01-18T04:54:58Z) - Improving the Robustness of Summarization Models by Detecting and Removing Input Noise [50.27105057899601]
We present a large empirical study quantifying the sometimes severe loss in performance from different types of input noise for a range of datasets and model sizes.
We propose a light-weight method for detecting and removing such noise in the input during model inference without requiring any training, auxiliary models, or even prior knowledge of the type of noise.
arXiv Detail & Related papers (2022-12-20T00:33:11Z) - Characterizing and mitigating coherent errors in a trapped ion quantum processor using hidden inverses [0.20315704654772418]
Quantum computing testbeds exhibit high-fidelity quantum control over small collections of qubits.
These noisy intermediate-scale devices can support a sufficient number of sequential operations prior to decoherence.
While the results of these algorithms are imperfect, these imperfections can help bootstrap quantum computer testbed development.
arXiv Detail & Related papers (2022-05-27T20:35:24Z) - Generalization Metrics for Practical Quantum Advantage in Generative Models [68.8204255655161]
Generative modeling is a widely accepted natural use case for quantum computers.
We construct a simple and unambiguous approach to probe practical quantum advantage for generative modeling by measuring the algorithm's generalization performance.
Our simulation results show that our quantum-inspired models have up to a $68\times$ enhancement in generating unseen unique and valid samples.
arXiv Detail & Related papers (2022-01-21T16:35:35Z) - A deep learning model for noise prediction on near-term quantum devices [137.6408511310322]
We train a convolutional neural network on experimental data from a quantum device to learn a hardware-specific noise model.
A compiler then uses the trained network as a noise predictor and inserts sequences of gates in circuits so as to minimize expected noise.
arXiv Detail & Related papers (2020-05-21T17:47:29Z) - Modeling Noisy Quantum Circuits Using Experimental Characterization [0.40611352512781856]
Noisy intermediate-scale quantum (NISQ) devices offer unique platforms to test and evaluate the behavior of non-fault-tolerant quantum computing.
We present a test-driven approach to characterizing NISQ programs that manages the complexity of noisy circuit modeling.
arXiv Detail & Related papers (2020-01-23T16:45:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.