Effect of Correlated Errors on Quantum Memory
- URL: http://arxiv.org/abs/2408.08786v1
- Date: Fri, 16 Aug 2024 14:59:10 GMT
- Title: Effect of Correlated Errors on Quantum Memory
- Authors: Smita Bagewadi, Avhishek Chatterjee
- Abstract summary: We introduce a classical correlation model based on hidden random fields for modeling errors with long-range correlations.
We show that this proposed model can capture certain correlation patterns not captured by the joint (system and bath) Hamiltonian model with pairwise terms.
- Score: 1.3198143828338362
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent improvements in LDPC code based fault-tolerance for memory against i.i.d. errors naturally lead to the question of fault-tolerance against errors with long-range correlations. We introduce a classical correlation model based on hidden random fields for modeling such errors. We show that this proposed model can capture certain correlation patterns not captured by the joint (system and bath) Hamiltonian model with pairwise terms. Towards that, we derive a converse result for retention time in the presence of an error distribution, which is from the proposed class and exhibits quadratically small correlations. On the other hand, we show that for a broad subclass of error distributions within the proposed model, Tanner codes can ensure exponential retention time when the error rate is sufficiently low. The proposed model is analytically tractable due to the existence of a rich probability literature and thus, can offer insights complementary to the joint Hamiltonian model with pairwise terms.
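The hidden-random-field idea in the abstract is concrete enough to simulate. Below is a minimal sketch of such an error model, assuming a hidden Gaussian field thresholded into bit flips; the construction and all names are illustrative, not taken from the paper. A smooth latent field keeps each qubit's marginal flip rate low while coupling flips on nearby qubits, which is the long-range-correlation pattern the abstract contrasts with i.i.d. noise.

```python
# Minimal sketch of a hidden-random-field error model (illustrative, not the
# paper's construction): a smooth hidden Gaussian field is thresholded into
# bit flips, so marginal error rates stay low while flips on nearby qubits
# are strongly coupled.
import numpy as np
from scipy.stats import norm

def sample_correlated_flips(n_qubits, corr_length=10.0, flip_rate=0.01, rng=None):
    """One error pattern in {0,1}^n with distance-decaying flip correlations."""
    rng = np.random.default_rng(rng)
    idx = np.arange(n_qubits)
    # Hidden field: zero-mean Gaussian with squared-exponential covariance.
    cov = np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / corr_length) ** 2)
    latent = rng.multivariate_normal(np.zeros(n_qubits), cov + 1e-9 * np.eye(n_qubits))
    # A common threshold gives each qubit marginal flip probability flip_rate,
    # while the shared field correlates exceedances on nearby qubits.
    return (latent > norm.ppf(1 - flip_rate)).astype(int)

# Nearby qubits flip together far more often than independence would allow.
patterns = np.array([sample_correlated_flips(200, rng=s) for s in range(2000)])
print("marginal flip rate:", patterns.mean())
print("corr at distance 1:  ", np.corrcoef(patterns[:, 0], patterns[:, 1])[0, 1])
print("corr at distance 100:", np.corrcoef(patterns[:, 0], patterns[:, 100])[0, 1])
```

Conditioned on the hidden field, the flips are independent; structure of this kind is what makes the model class amenable to the probability literature the abstract refers to.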
Related papers
- Towards Robust Text Classification: Mitigating Spurious Correlations with Causal Learning [2.7813683000222653]
We propose the Causally Calibrated Robust (CCR) classifier to reduce models' reliance on spurious correlations.
CCR integrates a causal feature selection method based on counterfactual reasoning, along with an inverse propensity weighting (IPW) loss function.
We show that CCR achieves state-of-the-art performance among methods without group labels and, in some cases, can compete with models that utilize group labels.
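For context, a generic inverse propensity weighting loss looks like the sketch below. This is the standard IPW form, not the paper's exact CCR objective, and the propensity estimates are assumed to come from a separate bias model.

```python
# Generic IPW loss sketch (illustrative; not the paper's CCR objective):
# each example's loss is reweighted by the inverse of its estimated
# propensity, down-weighting patterns the biased training set over-represents.
import torch
import torch.nn.functional as F

def ipw_cross_entropy(logits, labels, propensity, eps=1e-6):
    """Cross-entropy with per-example inverse-propensity weights."""
    per_example = F.cross_entropy(logits, labels, reduction="none")
    weights = 1.0 / propensity.clamp(min=eps)   # rare patterns get more weight
    weights = weights / weights.mean()          # normalize to keep loss scale
    return (weights * per_example).mean()

logits = torch.randn(4, 2)
labels = torch.tensor([0, 1, 0, 1])
propensity = torch.tensor([0.9, 0.1, 0.8, 0.2])  # assumed external estimates
print(ipw_cross_entropy(logits, labels, propensity))
```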
arXiv Detail & Related papers (2024-11-01T21:29:07Z)
- Embedded Nonlocal Operator Regression (ENOR): Quantifying model error in learning nonlocal operators [8.585650361148558]
We propose a new framework to learn a nonlocal homogenized surrogate model and its structural model error.
This framework provides discrepancy-adaptive uncertainty quantification for homogenized material response predictions in long-term simulations.
arXiv Detail & Related papers (2024-10-27T04:17:27Z)
- Multivariate Probabilistic Time Series Forecasting with Correlated Errors [17.212396544233307]
We present a plug-and-play method that learns the covariance structure of errors over multiple steps for autoregressive models with Gaussian-distributed errors.
The learned covariance matrix can be used to calibrate predictions based on observed residuals.
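A minimal version of that calibration step, assuming jointly Gaussian multi-step errors (an illustration of the idea in the summary, not the paper's method): estimate the error covariance from historical residuals, then use the conditional Gaussian mean to correct the steps not yet observed.

```python
# Sketch: correct later forecast steps using errors already observed,
# via the conditional mean of a zero-mean Gaussian error vector.
import numpy as np

def calibrate_forecast(forecast, observed_err, err_cov):
    """Adjust steps k..h-1 of `forecast` given errors seen on steps 0..k-1.

    forecast     : (h,) point forecasts for the next h steps
    observed_err : (k,) realized errors (actual - forecast) on the first k steps
    err_cov      : (h, h) covariance of the h-step error vector, estimated
                   from historical residuals
    """
    k = len(observed_err)
    S11 = err_cov[:k, :k]   # covariance of the observed errors
    S21 = err_cov[k:, :k]   # cross-covariance, future vs observed
    expected_future_err = S21 @ np.linalg.solve(S11, observed_err)
    calibrated = forecast.copy()
    calibrated[k:] += expected_future_err
    return calibrated

rng = np.random.default_rng(0)
hist = rng.standard_normal((500, 5))      # 500 historical 5-step error vectors
hist[:, 1:] += 0.8 * hist[:, :-1]         # inject step-to-step correlation
cov = np.cov(hist, rowvar=False)
print(calibrate_forecast(np.zeros(5), hist[0, :2], cov))
```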
arXiv Detail & Related papers (2024-02-01T20:27:19Z)
- A PAC-Bayesian Perspective on the Interpolating Information Criterion [54.548058449535155]
We show how a PAC-Bayes bound is obtained for a general class of models, characterizing factors which influence performance in the interpolating regime.
We quantify how the test error of overparameterized models achieving effectively zero training error depends on the quality of the implicit regularization imposed by, e.g., the combination of model and parameter-initialization scheme.
arXiv Detail & Related papers (2023-11-13T01:48:08Z)
- Sample Complexity Bounds for Score-Matching: Causal Discovery and Generative Modeling [82.36856860383291]
We demonstrate that accurate estimation of the score function is achievable by training a standard deep ReLU neural network.
We establish bounds on the error rate of recovering causal relationships using the score-matching-based causal discovery method.
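As a toy illustration of score estimation with a ReLU network (here via denoising score matching, a standard trainable surrogate; the paper's precise setting may differ): perturb the data with Gaussian noise and regress the network onto the tractable target -(x_tilde - x) / sigma^2.

```python
# Denoising score matching with a small ReLU MLP (toy sketch, 1D data).
import torch
import torch.nn as nn

torch.manual_seed(0)
sigma = 0.5
net = nn.Sequential(nn.Linear(1, 64), nn.ReLU(),
                    nn.Linear(64, 64), nn.ReLU(),
                    nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for _ in range(3000):
    x = 1.0 + 2.0 * torch.randn(256, 1)   # data ~ N(1, 4)
    noise = torch.randn_like(x)
    x_tilde = x + sigma * noise
    target = -noise / sigma               # = -(x_tilde - x) / sigma**2
    loss = ((net(x_tilde) - target) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# The net approximates the score of the noised density N(1, 4 + sigma^2).
x_test = torch.tensor([[2.0]])
print(net(x_test).item(), "vs closed form", -(2.0 - 1.0) / (4.0 + sigma**2))
```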
arXiv Detail & Related papers (2023-10-27T13:09:56Z)
- On how to avoid exacerbating spurious correlations when models are overparameterized [33.315813572333745]
We show that VS-loss learns a model that is fair towards minorities even when spurious features are strong.
Compared to previous works, our bounds hold for more general models, are non-asymptotic, and apply even in scenarios of extreme imbalance.
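For reference, the VS (vector-scaling) loss is commonly written as cross-entropy on per-class adjusted logits, with a multiplicative and an additive adjustment derived from class frequencies. The sketch below uses that common form with illustrative hyperparameters, not values taken from this paper.

```python
# VS-loss sketch in the usual class-imbalance form (assumed, not from the
# paper): cross-entropy on logits rescaled per class (delta) and shifted
# per class (iota), which enlarges the loss on minority classes.
import torch
import torch.nn.functional as F

def vs_loss(logits, labels, class_priors, gamma=0.3, tau=1.0):
    delta = (class_priors / class_priors.max()) ** gamma  # multiplicative adj.
    iota = tau * torch.log(class_priors)                  # additive adjustment
    return F.cross_entropy(logits * delta + iota, labels)

priors = torch.tensor([0.9, 0.1])        # 90/10 class imbalance
logits = torch.randn(8, 2)
labels = torch.randint(0, 2, (8,))
print(vs_loss(logits, labels, priors))
```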
arXiv Detail & Related papers (2022-06-25T21:53:44Z)
- Estimation of Bivariate Structural Causal Models by Variational Gaussian Process Regression Under Likelihoods Parametrised by Normalising Flows [74.85071867225533]
One major drawback of state-of-the-art artificial intelligence is its lack of explainability.
Causal mechanisms can be described by structural causal models.
arXiv Detail & Related papers (2021-09-06T14:52:58Z)
- Efficient Causal Inference from Combined Observational and Interventional Data through Causal Reductions [68.6505592770171]
Unobserved confounding is one of the main challenges when estimating causal effects.
We propose a novel causal reduction method that replaces an arbitrary number of possibly high-dimensional latent confounders with a single latent confounder.
We propose a learning algorithm to estimate the parameterized reduced model jointly from observational and interventional data.
arXiv Detail & Related papers (2021-03-08T14:29:07Z)
- Autoregressive Score Matching [113.4502004812927]
We propose autoregressive conditional score models (AR-CSM), in which the joint distribution is parameterized in terms of the derivatives of univariate log-conditionals (scores).
For AR-CSM models, the divergence between data and model distributions can be computed and optimized efficiently, requiring no expensive sampling or adversarial training.
We show with extensive experimental results that it can be applied to density estimation on synthetic data, image generation, image denoising, and training latent variable models with implicit encoders.
arXiv Detail & Related papers (2020-10-24T07:01:24Z)
- Error Autocorrelation Objective Function for Improved System Modeling [1.2760453906939444]
We introduce a "whitening" cost function based on the Ljung-Box statistic, which penalizes not only the magnitude of the errors but also the correlations between them.
The results show significant improvement in generalization for recurrent neural networks (RNNs) and 2D image autoencoders.
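The Ljung-Box statistic named above has the closed form Q = n(n+2) * sum_{k=1..h} rho_k^2 / (n - k), where rho_k is the lag-k autocorrelation of the residuals. A sketch of a combined objective in that spirit follows; the lambda weighting and function names are assumptions, not the paper's exact formulation.

```python
# Whitening objective sketch: MSE plus a Ljung-Box penalty on residual
# autocorrelation, pushing the model toward white-noise-like errors.
import numpy as np

def ljung_box_q(residuals, max_lag=10):
    """Ljung-Box statistic Q = n(n+2) * sum_k rho_k^2 / (n - k)."""
    r = residuals - residuals.mean()
    n = len(r)
    denom = np.dot(r, r)
    q = 0.0
    for k in range(1, max_lag + 1):
        rho_k = np.dot(r[:-k], r[k:]) / denom   # lag-k autocorrelation
        q += rho_k**2 / (n - k)
    return n * (n + 2) * q

def whitening_loss(y_true, y_pred, lam=0.01, max_lag=10):
    resid = y_true - y_pred
    return np.mean(resid**2) + lam * ljung_box_q(resid, max_lag)

rng = np.random.default_rng(0)
y = np.sin(np.linspace(0, 20, 400))
print(whitening_loss(y, np.zeros_like(y)))                      # correlated: large
print(whitening_loss(y, y + 0.01 * rng.standard_normal(400)))   # near-white: small
```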
arXiv Detail & Related papers (2020-08-08T19:20:32Z)
- Accounting for Unobserved Confounding in Domain Generalization [107.0464488046289]
This paper investigates the problem of learning robust, generalizable prediction models from a combination of datasets.
Part of the challenge of learning robust models lies in the influence of unobserved confounders.
We demonstrate the empirical performance of our approach on healthcare data from different modalities.
arXiv Detail & Related papers (2020-07-21T08:18:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.