Adaptive folding and noise filtering for robust quantum error mitigation
- URL: http://arxiv.org/abs/2505.04463v1
- Date: Wed, 07 May 2025 14:35:01 GMT
- Title: Adaptive folding and noise filtering for robust quantum error mitigation
- Authors: Kathrin F. Koenig, Finn Reinecke, Thomas Wellens
- Abstract summary: This paper presents noise-adaptive folding, a technique that enhances zero-noise extrapolation (ZNE). We introduce two filtering methods: one relies on measuring error strength, while the other utilizes statistical filtering to improve the extrapolation process. Our findings demonstrate that these adaptive methods effectively strengthen error mitigation against noise fluctuations, thereby enhancing the precision and reliability of quantum computations.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Coping with noise in quantum computation poses significant challenges due to its unpredictable nature and the complexities of accurate modeling. This paper presents noise-adaptive folding, a technique that enhances zero-noise extrapolation (ZNE) through the use of adaptive scaling factors based on circuit error measurements. Furthermore, we introduce two filtering methods: one relies on measuring error strength, while the other utilizes statistical filtering to improve the extrapolation process. Comparing our approach with standard ZNE reveals that adaptive scaling factors can be optimized using either a noise model or direct error strength measurements from inverted circuits. The integration of adaptive scaling with filtering techniques leads to notable improvements in expectation-value extrapolation over standard ZNE. Our findings demonstrate that these adaptive methods effectively strengthen error mitigation against noise fluctuations, thereby enhancing the precision and reliability of quantum computations.
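As a rough illustration of the ZNE step that the abstract builds on: noisy expectation values are measured at several noise scale factors (e.g. produced by folding gates as G → G G†G), and a fit in the scale factor is extrapolated to the zero-noise limit. The scale factors, expectation values, and the linear fit below are illustrative assumptions for a minimal sketch, not the paper's adaptive scaling factors or measured hardware data.

```python
import numpy as np

# Hypothetical noisy expectation values measured at several noise scale
# factors (e.g. obtained by unitary folding G -> G (G^dag G)^k).
# These numbers are illustrative placeholders, not real hardware data.
scale_factors = np.array([1.0, 1.5, 2.0, 3.0])
noisy_expectations = np.array([0.82, 0.74, 0.67, 0.55])

def zne_extrapolate(scales, values, degree=1):
    """Fit a polynomial in the noise scale and evaluate it at zero noise."""
    coeffs = np.polyfit(scales, values, degree)
    return np.polyval(coeffs, 0.0)

mitigated = zne_extrapolate(scale_factors, noisy_expectations)
```

The extrapolated value at scale factor 0 lies above every measured value, which is the intended effect: the fit estimates what the expectation value would be without noise. Adaptive-folding schemes such as the one in this paper change how the scale factors are chosen, not this final extrapolation step.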
Related papers
- Adaptive Conformal Inference by Betting [51.272991377903274]
We consider the problem of adaptive conformal inference without any assumptions about the data generating process. Existing approaches for adaptive conformal inference are based on optimizing the pinball loss using variants of online gradient descent. We propose a different approach for adaptive conformal inference that leverages parameter-free online convex optimization techniques.
arXiv Detail & Related papers (2024-12-26T18:42:08Z) - Taming Sensitive Weights : Noise Perturbation Fine-tuning for Robust LLM Quantization [5.718172547021947]
We propose Noise Perturbation Fine-tuning (NPFT) to tame the sensitive weights' impact on the quantization error. NPFT identifies outlier weights and adds random weight perturbations to the outliers as the model goes through a PEFT optimization. When applied to OPT and LLaMA models, our NPFT method achieves stable performance improvements for both uniform and non-uniform quantizers.
arXiv Detail & Related papers (2024-12-08T21:46:22Z) - Improving Zero-noise Extrapolation for Quantum-gate Error Mitigation using a Noise-aware Folding Method [0.0]
We introduce a noise-aware folding technique that enhances Zero-Noise Extrapolation (ZNE).
Our method redistributes noise using calibration data based on hardware noise models.
By employing a noise-adaptive compilation method combined with our proposed folding mechanism, we enhance the ZNE accuracy of quantum gate-based computing.
arXiv Detail & Related papers (2024-01-23T05:36:40Z) - Neural Operator Variational Inference based on Regularized Stein Discrepancy for Deep Gaussian Processes [23.87733307119697]
We introduce Neural Operator Variational Inference (NOVI) for Deep Gaussian Processes.
NOVI uses a neural generator to obtain a sampler and minimizes the Regularized Stein Discrepancy in L2 space between the generated distribution and true posterior.
We demonstrate that the bias introduced by our method can be controlled by multiplying the divergence with a constant, which leads to robust error control and ensures the stability and precision of the algorithm.
arXiv Detail & Related papers (2023-09-22T06:56:35Z) - Adaptive mitigation of time-varying quantum noise [0.1227734309612871]
Current quantum computers suffer from non-stationary noise channels with high error rates.
We propose a Bayesian inference-based adaptive algorithm that can learn and mitigate quantum noise in response to changing channel conditions.
arXiv Detail & Related papers (2023-08-16T01:33:07Z) - Sharp Calibrated Gaussian Processes [58.94710279601622]
State-of-the-art approaches for designing calibrated models rely on inflating the Gaussian process posterior variance.
We present a calibration approach that generates predictive quantiles using a computation inspired by the vanilla Gaussian process posterior variance.
Our approach is shown to yield a calibrated model under reasonable assumptions.
arXiv Detail & Related papers (2023-02-23T12:17:36Z) - Improve Noise Tolerance of Robust Loss via Noise-Awareness [60.34670515595074]
We propose a meta-learning method which is capable of adaptively learning a hyperparameter prediction function, called Noise-Aware-Robust-Loss-Adjuster (NARL-Adjuster for brevity).
We integrate four SOTA robust loss functions with our algorithm, and comprehensive experiments substantiate the general applicability and effectiveness of the proposed method in both noise tolerance and performance.
arXiv Detail & Related papers (2023-01-18T04:54:58Z) - Adaptive Noisy Data Augmentation for Regularized Estimation and Inference in Generalized Linear Models [15.817569026827451]
We propose the AdaPtive Noise Augmentation (PANDA) procedure to regularize the estimation and inference of generalized linear models (GLMs).
We demonstrate that PANDA performs as well as or better than existing approaches using the same types of regularizers on simulated and real-life data.
arXiv Detail & Related papers (2022-04-18T22:02:37Z) - Partial Identification with Noisy Covariates: A Robust Optimization Approach [94.10051154390237]
Causal inference from observational datasets often relies on measuring and adjusting for covariates.
We show that this robust optimization approach can extend a wide range of causal adjustment methods to perform partial identification.
Across synthetic and real datasets, we find that this approach provides ATE bounds with a higher coverage probability than existing methods.
arXiv Detail & Related papers (2022-02-22T04:24:26Z) - High Probability Complexity Bounds for Non-Smooth Stochastic Optimization with Heavy-Tailed Noise [51.31435087414348]
It is essential to theoretically guarantee that algorithms provide a small objective residual with high probability.
Existing methods for non-smooth convex optimization have complexity bounds with dependence on confidence level.
We propose novel stepsize rules for two methods with gradient clipping.
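The gradient-clipping idea in this summary can be sketched as a single update step. This is a minimal sketch of generic norm-based clipping with hypothetical parameter values; it does not reproduce the paper's specific stepsize rules for heavy-tailed noise.

```python
import numpy as np

def clipped_sgd_step(x, grad, stepsize, clip_level):
    """One SGD step with norm-based gradient clipping: rescale the
    gradient whenever its Euclidean norm exceeds clip_level, which
    bounds the update size even under heavy-tailed gradient noise."""
    norm = np.linalg.norm(grad)
    if norm > clip_level:
        grad = grad * (clip_level / norm)
    return x - stepsize * grad

# A heavy-tailed gradient spike of norm 50 is clipped to norm 1,
# so the step moves at most stepsize in Euclidean distance.
x = np.array([1.0, 1.0])
spiky_grad = np.array([30.0, 40.0])
x_new = clipped_sgd_step(x, spiky_grad, stepsize=0.1, clip_level=1.0)
```

Clipping caps each step at `stepsize * clip_level`, which is what makes high-probability bounds possible when the noise has unbounded variance.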
arXiv Detail & Related papers (2021-06-10T17:54:21Z) - Robust Optimal Transport with Applications in Generative Modeling and Domain Adaptation [120.69747175899421]
Optimal Transport (OT) distances such as Wasserstein have been used in several areas such as GANs and domain adaptation.
We propose a computationally-efficient dual form of the robust OT optimization that is amenable to modern deep learning applications.
Our approach can train state-of-the-art GAN models on noisy datasets corrupted with outlier distributions.
arXiv Detail & Related papers (2020-10-12T17:13:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.