Online Gaussian elimination for quantum LDPC decoding
- URL: http://arxiv.org/abs/2504.05080v2
- Date: Wed, 09 Apr 2025 18:18:31 GMT
- Title: Online Gaussian elimination for quantum LDPC decoding
- Authors: Sam J. Griffiths, Asmae Benhemou, Dan E. Browne
- Abstract summary: We present an online variant of the Gaussian elimination algorithm which maintains an LUP decomposition. It is equivalent to performing Gaussian elimination once on the final system of equations. We show empirically that our online variant outperforms the original offline decoder in average-case time complexity.
- Score: 3.1952340441132474
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Decoders for quantum LDPC codes generally rely on solving a parity-check equation with Gaussian elimination, with the generalised union-find decoder performing this repeatedly on growing clusters. We present an online variant of the Gaussian elimination algorithm which maintains an LUP decomposition in order to process only new rows and columns as they are added to a system of equations. This is equivalent to performing Gaussian elimination once on the final system of equations, in contrast to the multiple rounds of Gaussian elimination employed by the generalised union-find decoder. It thus significantly reduces the number of operations performed by the decoder. We consider the generalised union-find decoder as an example use case and present a complexity analysis demonstrating that both variants take time cubic in the number of qubits in the general case, but that the number of operations performed by the online variant is lower by an amount which itself scales cubically. This analysis is also extended to the regime of 'well-behaved' codes in which the number of growth iterations required is bounded logarithmically in error weight. Finally, we show empirically that our online variant outperforms the original offline decoder in average-case time complexity on codes with sparser parity-check matrices or greater covering radius.
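The incremental idea behind the online variant can be illustrated with a minimal GF(2) sketch. This is a toy row-echelon update that processes each new parity-check row exactly once, rather than re-running elimination on the whole cluster after every growth step; it is an illustration of the online principle, not the paper's LUP-based implementation (class and method names are hypothetical):

```python
class OnlineGF2Elimination:
    """Toy incremental Gaussian elimination over GF(2).

    Each new row is reduced against the stored pivot rows once, so
    previously processed rows are never revisited. A real online LUP
    decoder also handles new columns and records the permutations.
    """

    def __init__(self, n_cols):
        self.n = n_cols
        self.pivot_rows = {}  # pivot column -> stored reduced row

    def add_row(self, row):
        """Reduce a new row against existing pivots; O(rank * n) per row.

        Returns True if the row was independent (a new pivot was added),
        False if it was a GF(2) linear combination of earlier rows.
        """
        row = list(row)
        for col, prow in self.pivot_rows.items():
            if row[col]:
                row = [a ^ b for a, b in zip(row, prow)]
        for col, bit in enumerate(row):
            if bit:
                self.pivot_rows[col] = row
                return True
        return False

    def rank(self):
        return len(self.pivot_rows)
```

Because stored rows already have zeros in all earlier pivot columns, one pass over the pivots fully reduces each incoming row, which is what lets the online decoder avoid repeating work across growth iterations.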
Related papers
- Cluster Decomposition for Improved Erasure Decoding of Quantum LDPC Codes [7.185960422285947]
We introduce a new erasure decoder that applies to arbitrary quantum LDPC codes. By allowing clusters of unconstrained size, this decoder achieves maximum-likelihood (ML) performance. For the general quantum LDPC codes we studied, the cluster decoder can be used to estimate the ML performance curve.
arXiv Detail & Related papers (2024-12-11T23:14:23Z)
- An Efficient Algorithm for Clustered Multi-Task Compressive Sensing [60.70532293880842]
Clustered multi-task compressive sensing is a hierarchical model that solves multiple compressive sensing tasks.
The existing inference algorithm for this model is computationally expensive and does not scale well in high dimensions.
We propose a new algorithm that substantially accelerates model inference by avoiding the need to explicitly compute these covariance matrices.
arXiv Detail & Related papers (2023-09-30T15:57:14Z)
- Fault-Tolerant Computing with Single Qudit Encoding [49.89725935672549]
We discuss stabilizer quantum-error correction codes implemented in a single multi-level qudit.
These codes can be customized to the specific physical errors on the qudit, effectively suppressing them.
We demonstrate a fault-tolerant implementation on molecular spin qudits, showcasing nearly exponential error suppression with only linear growth in qudit size.
arXiv Detail & Related papers (2023-07-20T10:51:23Z)
- qecGPT: decoding Quantum Error-correcting Codes with Generative Pre-trained Transformers [5.392298820599664]
We propose a framework for decoding quantum error-correcting codes with generative modeling.
We use autoregressive neural networks, specifically Transformers, to learn the joint probability of logical operators and syndromes.
Our framework is general and can be applied to any error model and quantum codes with different topologies.
arXiv Detail & Related papers (2023-07-18T07:34:02Z)
- Quick Adaptive Ternary Segmentation: An Efficient Decoding Procedure For Hidden Markov Models [70.26374282390401]
Decoding the original signal (i.e., the hidden chain) from the noisy observations is one of the main goals in nearly all HMM-based data analyses.
We present Quick Adaptive Ternary (QATS), a divide-and-conquer procedure which decodes the hidden sequence in polylogarithmic computational complexity.
arXiv Detail & Related papers (2023-05-29T19:37:48Z)
- Neural Belief Propagation Decoding of Quantum LDPC Codes Using Overcomplete Check Matrices [60.02503434201552]
We propose to decode QLDPC codes based on a check matrix with redundant rows, generated from linear combinations of the rows in the original check matrix.
This approach yields a significant improvement in decoding performance with the additional advantage of very low decoding latency.
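The redundant-rows idea above can be sketched in a few lines: new checks are formed as GF(2) sums of rows of the original parity-check matrix. This toy helper (name and selection strategy are assumptions; the paper chooses combinations more carefully) simply appends all pairwise row sums:

```python
import itertools

import numpy as np


def overcomplete_checks(H, max_weight=2):
    """Append GF(2) sums of up to `max_weight` rows of H as redundant checks.

    A toy illustration of building an overcomplete check matrix; every
    appended row is a valid check because it is a linear combination of
    existing checks over GF(2).
    """
    extra = []
    for w in range(2, max_weight + 1):
        for combo in itertools.combinations(range(H.shape[0]), w):
            extra.append(np.bitwise_xor.reduce(H[list(combo)], axis=0))
    return np.vstack([H] + extra) if extra else H
```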
arXiv Detail & Related papers (2022-12-20T13:41:27Z)
- Efficient Nearest Neighbor Search for Cross-Encoder Models using Matrix Factorization [60.91600465922932]
We present an approach that avoids the use of a dual-encoder for retrieval, relying solely on the cross-encoder.
Our approach provides test-time recall-vs-computational cost trade-offs superior to the current widely-used methods.
arXiv Detail & Related papers (2022-10-23T00:32:04Z)
- An efficient decoder for a linear distance quantum LDPC code [0.1657441317977376]
We present a linear time decoder for the recent asymptotically good quantum LDPC codes.
Our decoder is an iterative algorithm which searches for corrections within constant-sized regions.
arXiv Detail & Related papers (2022-06-14T02:17:09Z)
- Optimization-based Block Coordinate Gradient Coding for Mitigating Partial Stragglers in Distributed Learning [58.91954425047425]
This paper aims to design a new gradient coding scheme for mitigating partial stragglers in distributed learning.
We propose a gradient coordinate coding scheme with L coding parameters representing L possibly different diversities for the L coordinates, which generalizes most existing gradient coding schemes.
arXiv Detail & Related papers (2022-06-06T09:25:40Z)
- Sparse Coding with Multi-Layer Decoders using Variance Regularization [19.8572592390623]
We propose a novel sparse coding protocol which prevents a collapse in the codes without the need to regularize the decoder.
Our method regularizes the codes directly so that each latent code component has variance greater than a fixed threshold.
We show that sparse autoencoders with multi-layer decoders trained using our variance regularization method produce higher quality reconstructions with sparser representations.
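The variance constraint described above can be sketched as a hinge penalty on the per-component batch variance of the latent codes. This is a minimal illustration of the idea, not the authors' exact loss (the function name and threshold are assumptions):

```python
import numpy as np


def variance_hinge_penalty(codes, threshold=1.0):
    """Hinge penalty pushing each latent component's batch variance
    above `threshold`, which discourages code collapse.

    codes: array of shape (batch, latent_dim).
    """
    var = codes.var(axis=0)  # variance of each latent component over the batch
    return np.maximum(0.0, threshold - var).sum()
```

Components whose batch variance already exceeds the threshold contribute nothing, so the penalty only acts on directions of the code that are starting to collapse.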
arXiv Detail & Related papers (2021-12-16T21:46:23Z)
- Quantum Reduction of Finding Short Code Vectors to the Decoding Problem [0.9269394037577176]
We give a quantum reduction from finding short codewords in a random linear code to decoding for the Hamming metric.
This is the first time such a reduction (classical or quantum) has been obtained.
arXiv Detail & Related papers (2021-06-04T22:42:38Z)
- Pruning Neural Belief Propagation Decoders [77.237958592189]
We introduce a method to tailor an overcomplete parity-check matrix to (neural) BP decoding using machine learning.
We achieve performance within 0.27 dB and 1.5 dB of the ML performance while reducing the complexity of the decoder.
arXiv Detail & Related papers (2020-01-21T12:05:46Z)