Pruning Neural Belief Propagation Decoders
- URL: http://arxiv.org/abs/2001.07464v2
- Date: Thu, 22 Oct 2020 18:40:04 GMT
- Title: Pruning Neural Belief Propagation Decoders
- Authors: Andreas Buchberger, Christian Häger, Henry D. Pfister, Laurent
Schmalen, Alexandre Graell i Amat
- Abstract summary: We introduce a method to tailor an overcomplete parity-check matrix to (neural) BP decoding using machine learning.
We achieve performance within 0.27 dB and 1.5 dB of the ML performance while reducing the complexity of the decoder.
- Score: 77.237958592189
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider near maximum-likelihood (ML) decoding of short linear block codes
based on neural belief propagation (BP) decoding recently introduced by
Nachmani et al. While this method significantly outperforms conventional BP
decoding, the underlying parity-check matrix may still limit the overall
performance. In this paper, we introduce a method to tailor an overcomplete
parity-check matrix to (neural) BP decoding using machine learning. We consider
the weights in the Tanner graph as an indication of the importance of the
connected check nodes (CNs) to decoding and use them to prune unimportant CNs.
As the pruning is not tied over iterations, the final decoder uses a different
parity-check matrix in each iteration. For Reed-Muller and short low-density
parity-check codes, we achieve performance within 0.27 dB and 1.5 dB of the ML
performance while reducing the complexity of the decoder.
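The pruning idea in the abstract can be sketched in a few lines: treat each check node's learned weight as an importance score and keep only the highest-scoring rows of an overcomplete parity-check matrix. The matrix, the scores, and the `keep` count below are illustrative assumptions; the actual decoder learns per-edge weights through neural BP training and may prune differently in each iteration.

```python
import numpy as np

def prune_check_nodes(H_oc, cn_weights, keep):
    """Keep the `keep` check nodes (rows of the overcomplete
    parity-check matrix H_oc) with the largest learned importance
    scores, dropping the rest. `cn_weights` is assumed to hold one
    learned score per check node."""
    order = np.argsort(cn_weights)[::-1]   # most important first
    kept = np.sort(order[:keep])           # preserve original row order
    return H_oc[kept]

# Toy overcomplete matrix: 4 check nodes for a length-6 code.
H_oc = np.array([[1, 1, 0, 1, 0, 0],
                 [0, 1, 1, 0, 1, 0],
                 [1, 0, 1, 0, 0, 1],
                 [1, 1, 1, 1, 1, 1]])
weights = np.array([0.9, 0.1, 0.7, 0.4])   # hypothetical learned scores
H_pruned = prune_check_nodes(H_oc, weights, keep=2)
```

Since the paper prunes independently per iteration, one would call such a routine once per BP iteration with that iteration's weights, yielding a different parity-check matrix each time.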
Related papers
- An almost-linear time decoding algorithm for quantum LDPC codes under circuit-level noise [0.562479170374811]
We introduce the belief propagation plus ordered Tanner forest (BP+OTF) algorithm as an almost-linear time decoder for quantum low-density parity-check codes.
We show that the BP+OTF decoder achieves logical error suppression within an order of magnitude of state-of-the-art inversion-based decoders.
arXiv Detail & Related papers (2024-09-02T19:50:57Z) - Bit-flipping Decoder Failure Rate Estimation for (v,w)-regular Codes [84.0257274213152]
We propose a new technique to provide accurate estimates of the DFR of a two-iterations (parallel) bit flipping decoder.
We validate our results, providing comparisons of the modeled and simulated weight of the syndrome, the incorrectly-guessed error bit distribution at the end of the first iteration, and the two-iteration Decoding Failure Rate (DFR).
arXiv Detail & Related papers (2024-01-30T11:40:24Z) - Graph Neural Networks for Enhanced Decoding of Quantum LDPC Codes [6.175503577352742]
We propose a differentiable iterative decoder for quantum low-density parity-check (LDPC) codes.
The proposed algorithm is composed of classical belief propagation (BP) decoding stages and intermediate graph neural network (GNN) layers.
arXiv Detail & Related papers (2023-10-26T19:56:25Z) - Machine Learning-Aided Efficient Decoding of Reed-Muller Subcodes [59.55193427277134]
Reed-Muller (RM) codes achieve the capacity of general binary-input memoryless symmetric channels.
RM codes only admit limited sets of rates.
Efficient decoders are available for RM codes at finite lengths.
arXiv Detail & Related papers (2023-01-16T04:11:14Z) - Neural Belief Propagation Decoding of Quantum LDPC Codes Using
Overcomplete Check Matrices [60.02503434201552]
We propose to decode QLDPC codes based on a check matrix with redundant rows, generated from linear combinations of the rows in the original check matrix.
This approach yields a significant improvement in decoding performance with the additional advantage of very low decoding latency.
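The redundant-row construction described above can be illustrated with a small sketch: new check rows are GF(2) linear combinations (XORs) of rows of the original check matrix. The helper name and the exhaustive pairing below are assumptions for illustration; the paper's actual row-selection strategy may differ.

```python
import itertools
import numpy as np

def overcomplete_check_matrix(H, max_combo=2):
    """Augment H with redundant rows formed as XORs (GF(2) sums)
    of up to `max_combo` original rows. Selection is exhaustive
    here purely for illustration."""
    seen = {tuple(r) for r in H}
    extra = []
    for k in range(2, max_combo + 1):
        for combo in itertools.combinations(range(H.shape[0]), k):
            r = tuple(np.bitwise_xor.reduce(H[list(combo)], axis=0))
            if r not in seen and any(r):   # skip duplicates and the zero row
                seen.add(r)
                extra.append(np.array(r))
    return np.vstack([H] + extra) if extra else H

# Toy example: two checks over three bits; their XOR is a valid
# redundant check because codewords satisfy both originals.
H = np.array([[1, 1, 0],
              [0, 1, 1]])
H_oc = overcomplete_check_matrix(H)
```

Every added row is satisfied by all codewords (a sum of valid checks is itself a valid check), so the code is unchanged while BP gains extra message-passing paths.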
arXiv Detail & Related papers (2022-12-20T13:41:27Z) - A Scalable Graph Neural Network Decoder for Short Block Codes [49.25571364253986]
We propose a novel decoding algorithm for short block codes based on an edge-weighted graph neural network (EW-GNN)
The EW-GNN decoder operates on the Tanner graph with an iterative message-passing structure.
We show that the EW-GNN decoder outperforms the BP and deep-learning-based BP methods in terms of the decoding error rate.
arXiv Detail & Related papers (2022-11-13T17:13:12Z) - Graph Neural Networks for Channel Decoding [71.15576353630667]
We showcase competitive decoding performance for various coding schemes, such as low-density parity-check (LDPC) and BCH codes.
The idea is to let a neural network (NN) learn a generalized message passing algorithm over a given graph.
We benchmark our proposed decoder against state-of-the-art in conventional channel decoding as well as against recent deep learning-based results.
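As background for the message-passing decoders in these papers, one iteration of plain min-sum BP on a Tanner graph might look like the following. This is a generic sketch, not any specific paper's decoder; the function name and the dense-matrix message layout are assumptions.

```python
import numpy as np

def min_sum_iteration(H, llr, cn_msgs):
    """One min-sum BP iteration on the Tanner graph of H.
    `llr` holds channel log-likelihood ratios; `cn_msgs` the current
    check-to-variable messages (same shape as H)."""
    m, n = H.shape
    # Variable-to-check: total belief minus the incoming message,
    # restricted to edges present in H.
    total = llr + cn_msgs.sum(axis=0)
    vn_msgs = (total - cn_msgs) * H
    # Check-to-variable: product of signs times the minimum magnitude
    # over the *other* incoming messages at each check node.
    new_cn = np.zeros_like(cn_msgs, dtype=float)
    for i in range(m):
        idx = np.flatnonzero(H[i])
        for j in idx:
            others = vn_msgs[i, idx[idx != j]]
            new_cn[i, j] = np.prod(np.sign(others)) * np.min(np.abs(others))
    return new_cn, total

# Toy run: two checks over three bits, hypothetical channel LLRs.
H = np.array([[1, 1, 0],
              [0, 1, 1]])
llr = np.array([2.0, -1.0, 3.0])
cn, belief = min_sum_iteration(H, llr, np.zeros_like(H, dtype=float))
```

Neural BP attaches trainable weights to the edge messages in such an update, and GNN decoders replace the hand-crafted update rules with learned ones over the same graph.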
arXiv Detail & Related papers (2022-07-29T15:29:18Z) - ADMM-based Decoder for Binary Linear Codes Aided by Deep Learning [40.25456611849273]
This work presents a deep neural network aided decoding algorithm for binary linear codes.
Based on the concept of deep unfolding, we design a decoding network by unfolding the alternating direction method of multipliers (ADMM)-penalized decoder.
Numerical results show that the resulting DL-aided decoders outperform the original ADMM-penalized decoder.
arXiv Detail & Related papers (2020-02-14T03:32:14Z)
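Deep unfolding, as used in the ADMM-aided decoder above, turns each iteration of a fixed-point algorithm into a network layer with its own trainable parameter. A minimal toy sketch follows; the step function and parameter values are placeholders, not the paper's ADMM-penalized update.

```python
import numpy as np

def unfolded_iterations(x0, step_fn, thetas):
    """Deep-unfolding sketch: run one generic update `step_fn(x, theta)`
    per layer, each layer carrying its own (trainable) parameter theta.
    In the paper, the update would be the ADMM-penalized decoding step."""
    x = x0
    for theta in thetas:          # one entry per unfolded layer
        x = step_fn(x, theta)
    return x

# Toy example: unfolded gradient steps toward a target vector, with a
# hand-picked step size per layer standing in for learned parameters.
target = np.array([1.0, -1.0])
step = lambda x, t: x - t * (x - target)
out = unfolded_iterations(np.zeros(2), step, thetas=[0.5, 0.5, 0.5])
```

Training then optimizes the per-layer `thetas` end-to-end (e.g., by backpropagation through the unrolled layers), which is how the DL-aided decoder can outperform the fixed-parameter original.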
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.