Cluster Decomposition for Improved Erasure Decoding of Quantum LDPC Codes
- URL: http://arxiv.org/abs/2412.08817v1
- Date: Wed, 11 Dec 2024 23:14:23 GMT
- Title: Cluster Decomposition for Improved Erasure Decoding of Quantum LDPC Codes
- Authors: Hanwen Yao, Mert Gökduman, Henry D. Pfister
- Abstract summary: We introduce a new erasure decoder that applies to arbitrary quantum LDPC codes.
By allowing clusters of unconstrained size, this decoder achieves maximum-likelihood (ML) performance.
For the general quantum LDPC codes we studied, the cluster decoder can be used to estimate the ML performance curve.
- Score: 7.185960422285947
- License:
- Abstract: We introduce a new erasure decoder that applies to arbitrary quantum LDPC codes. Dubbed the cluster decoder, it generalizes the decomposition idea of Vertical-Horizontal (VH) decoding introduced by Connelly et al. in 2022. Like the VH decoder, the idea is to first run the peeling decoder and then post-process the resulting stopping set. The cluster decoder breaks the stopping set into a tree of clusters which can be solved sequentially via Gaussian Elimination (GE). By allowing clusters of unconstrained size, this decoder achieves maximum-likelihood (ML) performance with reduced complexity compared with full GE. When GE is applied only to clusters whose sizes are less than a constant, the performance is degraded but the complexity becomes linear in the block length. Our simulation results show that, for hypergraph product codes, the cluster decoder with constant cluster size achieves near-ML performance similar to VH decoding in the low-erasure-rate regime. For the general quantum LDPC codes we studied, the cluster decoder can be used to estimate the ML performance curve with reduced complexity over a wide range of erasure rates.
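To make the pipeline in the abstract concrete, here is a minimal Python sketch of the general recipe it describes: first run the peeling decoder on the erasure pattern, then break the residual stopping set into clusters and solve each one by Gaussian elimination (GE) over GF(2), optionally capping the cluster size. This is an illustration under stated assumptions, not the authors' implementation: it uses plain connected-component clustering rather than the paper's cluster-tree decomposition, treats the problem as a single classical check matrix (for a CSS code one would run it separately on the X and Z checks), and the function names (`peel`, `clusters`, `gf2_solve`, `cluster_decode`) and the `max_cluster` parameter are invented for this sketch.

```python
import numpy as np

def peel(H, syndrome, erased):
    """Repeatedly solve checks that touch exactly one erased position."""
    erased = set(erased)
    syndrome = syndrome.copy() % 2
    correction = np.zeros(H.shape[1], dtype=np.uint8)
    progress = True
    while progress and erased:
        progress = False
        for c in range(H.shape[0]):
            touched = [v for v in np.flatnonzero(H[c]) if v in erased]
            if len(touched) == 1:
                v = touched[0]
                correction[v] = syndrome[c]
                if correction[v]:
                    syndrome = (syndrome + H[:, v]) % 2
                erased.discard(v)
                progress = True
    return correction, syndrome, erased   # residual erasures form the stopping set

def clusters(H, stopping_set):
    """Group residual erased columns that share a check into connected clusters."""
    remaining = set(stopping_set)
    while remaining:
        seed = remaining.pop()
        cluster, frontier = {seed}, {seed}
        while frontier:
            checks = {c for v in frontier for c in np.flatnonzero(H[:, v])}
            frontier = {v for c in checks for v in np.flatnonzero(H[c]) if v in remaining}
            remaining -= frontier
            cluster |= frontier
        yield sorted(cluster)

def gf2_solve(A, b):
    """Return one solution x of A x = b over GF(2), assuming the system is consistent."""
    A, b = A.copy().astype(np.uint8), b.copy().astype(np.uint8)
    m, n = A.shape
    pivots, row = [], 0
    for col in range(n):
        pivot = next((r for r in range(row, m) if A[r, col]), None)
        if pivot is None:
            continue
        A[[row, pivot]], b[[row, pivot]] = A[[pivot, row]], b[[pivot, row]]
        for r in range(m):
            if r != row and A[r, col]:
                A[r] ^= A[row]
                b[r] ^= b[row]
        pivots.append(col)
        row += 1
    x = np.zeros(n, dtype=np.uint8)
    for r, col in enumerate(pivots):
        x[col] = b[r]            # free variables stay 0
    return x

def cluster_decode(H, syndrome, erased, max_cluster=None):
    """Peel, then solve each residual cluster by Gaussian elimination over GF(2)."""
    correction, residual, stopping_set = peel(H, syndrome, erased)
    for cols in clusters(H, stopping_set):
        if max_cluster is not None and len(cols) > max_cluster:
            return None          # bounded-cluster variant declares failure here
        rows = sorted({c for v in cols for c in np.flatnonzero(H[:, v])})
        correction[cols] = gf2_solve(H[np.ix_(rows, cols)], residual[rows])
    return correction

if __name__ == "__main__":
    # Toy classical example: 3 checks on 4 bits, two erased positions.
    H = np.array([[1, 1, 0, 0],
                  [0, 1, 1, 0],
                  [0, 0, 1, 1]], dtype=np.uint8)
    error = np.array([0, 1, 1, 0], dtype=np.uint8)
    syndrome = (H @ error) % 2
    print(cluster_decode(H, syndrome, erased=[1, 2]))   # -> [0 1 1 0]
```

With `max_cluster=None` the GE step handles clusters of any size, corresponding to the ML-performance regime described above; a small constant cap mirrors the linear-complexity variant, with the decoder declaring failure when a cluster exceeds the cap.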
Related papers
- Threshold Selection for Iterative Decoding of $(v,w)$-regular Binary Codes [84.0257274213152]
Iterative bit flipping decoders are an efficient choice for sparse $(v,w)$-regular codes.
We propose concrete criteria for threshold determination, backed by a closed-form model.
arXiv Detail & Related papers (2025-01-23T17:38:22Z) - Highly Efficient Parallel Row-Layered Min-Sum MDPC Decoder for McEliece Cryptosystem [6.583725235299022]
The medium-density parity-check (MDPC) code-based McEliece cryptosystem remains a finalist in the post-quantum cryptography standardization process.
The min-sum decoding algorithm achieves a better performance-complexity tradeoff than other decoding algorithms for MDPC codes.
For the first time, the row-layered scheduling scheme is exploited to substantially reduce the memory requirement of MDPC decoders.
arXiv Detail & Related papers (2024-07-17T16:19:42Z) - Ambiguity Clustering: an accurate and efficient decoder for qLDPC codes [0.0]
We introduce the Ambiguity Clustering decoder (AC) which divides measurement data into clusters that can be decoded independently.
With 0.3% circuit-level depolarising noise, AC is up to 27x faster than BP-OSD with matched accuracy.
Our implementation decodes the 144-qubit Gross code in 135 µs per round of syndrome extraction on an M2 CPU.
arXiv Detail & Related papers (2024-06-20T17:39:31Z) - Graph Neural Networks for Enhanced Decoding of Quantum LDPC Codes [6.175503577352742]
We propose a differentiable iterative decoder for quantum low-density parity-check (LDPC) codes.
The proposed algorithm is composed of classical belief propagation (BP) decoding stages and intermediate graph neural network (GNN) layers.
arXiv Detail & Related papers (2023-10-26T19:56:25Z) - Deep Polar Codes [19.265010348250897]
We introduce a novel class of pre-transformed polar codes, termed deep polar codes.
We first present a deep polar encoder that harnesses a series of multi-layered polar transformations with varying sizes.
Our encoding method offers flexibility in rate-profiling, embracing a wide range of code rates and blocklengths.
arXiv Detail & Related papers (2023-08-06T03:29:18Z) - Modular decoding: parallelizable real-time decoding for quantum
computers [55.41644538483948]
Real-time quantum computation will require decoding algorithms capable of extracting logical outcomes from a stream of data generated by noisy quantum hardware.
We propose modular decoding, an approach capable of addressing this challenge with minimal additional communication and without sacrificing decoding accuracy.
We introduce the edge-vertex decomposition, a concrete instance of modular decoding for lattice-surgery style fault-tolerant blocks.
arXiv Detail & Related papers (2023-03-08T19:26:10Z) - Neural Belief Propagation Decoding of Quantum LDPC Codes Using
Overcomplete Check Matrices [60.02503434201552]
We propose to decode QLDPC codes based on a check matrix with redundant rows, generated from linear combinations of the rows in the original check matrix.
This approach yields a significant improvement in decoding performance with the additional advantage of very low decoding latency.
arXiv Detail & Related papers (2022-12-20T13:41:27Z) - Techniques for combining fast local decoders with global decoders under
circuit-level noise [0.0]
We introduce the construction of local neural network (NN) decoders using three-dimensional convolutions.
These local decoders are adapted to circuit-level noise and can be applied to surface code volumes of arbitrary size.
Their application removes errors arising from a certain number of faults, which serves to substantially reduce the syndrome density.
arXiv Detail & Related papers (2022-08-02T00:08:48Z) - Gradient Coding with Dynamic Clustering for Straggler-Tolerant
Distributed Learning [55.052517095437]
Gradient descent (GD) is widely employed to parallelize the learning task by distributing the dataset across multiple workers.
A significant performance bottleneck for the per-iteration completion time in distributed synchronous GD is straggling workers.
Coded distributed techniques have been introduced recently to mitigate stragglers and to speed up GD iterations by assigning redundant computations to workers.
We propose a novel dynamic gradient coding (GC) scheme that assigns redundant data to workers, providing the flexibility to choose from among a set of possible codes depending on the past straggling behavior.
arXiv Detail & Related papers (2021-03-01T18:51:29Z) - On Sparsifying Encoder Outputs in Sequence-to-Sequence Models [90.58793284654692]
We take the Transformer as the testbed and introduce a layer of gates between the encoder and the decoder.
The gates are regularized using the expected value of the sparsity-inducing $L_0$ penalty.
We investigate the effects of this sparsification on two machine translation and two summarization tasks.
arXiv Detail & Related papers (2020-04-24T16:57:52Z) - Pruning Neural Belief Propagation Decoders [77.237958592189]
We introduce a method to tailor an overcomplete parity-check matrix to (neural) BP decoding using machine learning.
We achieve performance within 0.27 dB and 1.5 dB of the ML performance while reducing the complexity of the decoder.
arXiv Detail & Related papers (2020-01-21T12:05:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.