Expander qLDPC Codes against Long-range Correlated Errors in Memory
- URL: http://arxiv.org/abs/2510.04561v1
- Date: Mon, 06 Oct 2025 07:51:09 GMT
- Title: Expander qLDPC Codes against Long-range Correlated Errors in Memory
- Authors: Yash Deepak Kashtikar, Pranay Mathur, Sudharsan Senthil, Avhishek Chatterjee,
- Abstract summary: Fault-tolerance using constant space-overhead against long-range correlated errors is an important practical question. We prove a similar result for square-root distance qLDPC codes and provide an explicit expression for the noise threshold in terms of the code rate.
- Score: 2.076589766776339
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fault-tolerance using constant space-overhead against long-range correlated errors is an important practical question. In the pioneering works [Terhal and Burkard, PRA 2005], [Aliferis et al, PRA 2005], [Aharonov et al, PRL 2006], fault-tolerance using poly-logarithmic overhead against long-range correlation modeled by pairwise joint Hamiltonian was proven when the total correlation of an error at a qubit location with errors at other locations was $O(1)$, i.e., the total correlation at a location did not scale with the number of qubits. This condition, under spatial symmetry, can simply be stated as the correlation between locations decaying faster than $\frac{1}{\text{dist}^{\text{dim}}}$. However, the pairwise Hamiltonian model remained intractable for constant overhead codes. Recently, [Bagewadi and Chatterjee, PRA 2025] introduced and analyzed the generalized hidden Markov random field (MRF) model, which provably captures all stationary distributions, including long-range correlations [Kunsch et al, Ann. App. Prob. 1995]. It resulted in a noise threshold in the case of long-range correlation, for memory corrected by the linear-distance Tanner codes [Leverrier and Zemor, FOCS 2022] for super-polynomial time. In this paper, we prove a similar result for square-root distance qLDPC codes and provide an explicit expression for the noise threshold in terms of the code rate, for up to $o(\sqrt{\text{\#qubits}})$ scaling of the total correlation of error at a location with errors at other locations.
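The abstract's decay condition can be checked numerically: if pairwise correlations decay faster than $\frac{1}{\text{dist}^{\text{dim}}}$, the total correlation at a qubit location stays $O(1)$ as the lattice grows, while decay at exactly that rate diverges logarithmically. The sketch below (an illustration of the summability condition, not code from the paper; the correlation profiles are assumed power laws) sums pairwise correlations on a 2D lattice:

```python
# Illustration (assumed power-law correlations, not the paper's model):
# on a 2D qubit lattice (dim = 2), sum the pairwise correlation of the
# origin with every other site. Decay 1/dist^2.5 (faster than 1/dist^dim)
# gives a bounded total; decay 1/dist^2.0 (exactly 1/dist^dim) grows
# logarithmically with the lattice size.

import math

def total_correlation(L, exponent):
    """Sum of 1/dist^exponent over all sites (i, j) != (0, 0) in a (2L+1)^2 window."""
    total = 0.0
    for i in range(-L, L + 1):
        for j in range(-L, L + 1):
            if i == 0 and j == 0:
                continue
            total += 1.0 / math.hypot(i, j) ** exponent
    return total

for L in (10, 50, 200):
    # critical decay 1/dist^2 vs summable decay 1/dist^2.5
    print(L, total_correlation(L, 2.0), total_correlation(L, 2.5))
```

Growing the window from L = 50 to L = 200 barely changes the exponent-2.5 sum, while the exponent-2.0 sum keeps increasing, matching the stated threshold between bounded and unbounded total correlation.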
Related papers
- Dimension-Independent Convergence of Underdamped Langevin Monte Carlo in KL Divergence [50.719298242863744]
Underdamped Langevin dynamics (ULD) is a widely-used sampler for Gibbs distributions $\propto e^{-V}$. We prove the first dimension-free KL divergence bounds for discretized ULD.
arXiv Detail & Related papers (2026-03-02T22:14:38Z) - Harmful Overfitting in Sobolev Spaces [49.47221415754556]
We study the generalization behavior of functions in Sobolev spaces $W^{k,p}(\mathbb{R}^d)$ that perfectly fit a noisy training data set. We show that approximately norm-minimizing interpolators, which are canonical solutions selected by smoothness bias, exhibit harmful overfitting. Our proof uses a geometric argument which identifies harmful neighborhoods of the training data using Sobolev inequalities.
arXiv Detail & Related papers (2026-01-31T17:40:56Z) - An exact Error Threshold of Surface Code under Correlated Nearest-Neighbor Errors: A Statistical Mechanical Analysis [1.8787735186976828]
Current exact surface code threshold analyses are based on the assumption of independent and identically distributed (i.i.d.) errors. Here, we establish an error-edge map, which allows for the mapping of quantum error correction to a square-octagonal random bond Ising model. We then present the exact threshold under a realistic noise model that combines independent single-qubit errors with correlated errors between nearest-neighbor data qubits.
arXiv Detail & Related papers (2025-10-28T08:35:07Z) - Convergence of Unadjusted Langevin in High Dimensions: Delocalization of Bias [21.64772960240025]
We show that as the dimension $d$ of the problem increases, the number of iterations required to ensure convergence within a desired error increases. A key technical challenge we address is the lack of a one-step contraction property in the $W_{2,\ell^\infty}$ metric to measure convergence.
arXiv Detail & Related papers (2024-08-20T01:24:54Z) - Effect of Correlated Errors on Quantum Memory [1.3198143828338362]
We introduce a correlation model which is a generalization of the well-known hidden random fields. We show that for a broad class of non-Markov and (possibly) non-stationary error distributions, quantum Tanner codes ensure an exponential retention time.
arXiv Detail & Related papers (2024-08-16T14:59:10Z) - Towards Understanding the Generalizability of Delayed Stochastic Gradient Descent [63.43247232708004]
Stochastic gradient descent performed in an asynchronous manner plays a crucial role in training large-scale machine learning models. Existing generalization error bounds are rather pessimistic and cannot reveal the correlation between asynchronous delays and generalization. Our theoretical results indicate that asynchronous delays reduce the generalization error of the delayed SGD algorithm.
arXiv Detail & Related papers (2023-08-18T10:00:27Z) - Kernel-based off-policy estimation without overlap: Instance optimality beyond semiparametric efficiency [53.90687548731265]
We study optimal procedures for estimating a linear functional based on observational data.
For any convex and symmetric function class $\mathcal{F}$, we derive a non-asymptotic local minimax bound on the mean-squared error.
arXiv Detail & Related papers (2023-01-16T02:57:37Z) - Pipelined correlated minimum weight perfect matching of the surface code [56.01788646782563]
We describe a pipeline approach to decoding the surface code using minimum weight perfect matching.
An independent no-communication parallelizable processing stage reweights the graph according to likely correlations.
A later general stage finishes the matching.
We validate the new algorithm on the fully fault-tolerant toric, unrotated, and rotated surface codes.
arXiv Detail & Related papers (2022-05-19T19:58:02Z) - Exponential Family Model-Based Reinforcement Learning via Score Matching [97.31477125728844]
We propose an optimistic model-based algorithm, dubbed SMRL, for finite-horizon episodic reinforcement learning (RL).
SMRL uses score matching, an unnormalized density estimation technique that enables efficient estimation of the model parameter by ridge regression.
arXiv Detail & Related papers (2021-12-28T15:51:07Z) - Is there a correlation length in a model with long-range interactions? [0.0]
We identify a correlation length $\xi$ that diverges as the critical point is approached.
At distances shorter than $\xi$, the correlator decays with the same power law as at the critical point.
At distances longer than $\xi$, it decays faster, with a steeper power law.
arXiv Detail & Related papers (2021-07-06T10:06:18Z) - Spatially relaxed inference on high-dimensional linear models [48.989769153211995]
We study the properties of ensembled clustered inference algorithms which combine spatially constrained clustering, statistical inference, and ensembling to aggregate several clustered inference solutions.
We show that ensembled clustered inference algorithms control the $\delta$-FWER under standard assumptions for $\delta$ equal to the largest cluster diameter.
arXiv Detail & Related papers (2021-06-04T16:37:19Z) - Categorizing Readout Error Correlations on Near Term Quantum Computers [2.0813318162800707]
Readout errors are a significant source of noise for near term quantum computers.
Recent proposals to use sub-exponential approximations rely on small and/or short-ranged error correlations.
We introduce and demonstrate a methodology to categorize and quantify multiqubit readout error correlations.
arXiv Detail & Related papers (2021-04-09T21:19:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.