Boosting Learning for LDPC Codes to Improve the Error-Floor Performance
- URL: http://arxiv.org/abs/2310.07194v2
- Date: Mon, 30 Oct 2023 02:58:46 GMT
- Title: Boosting Learning for LDPC Codes to Improve the Error-Floor Performance
- Authors: Hee-Youl Kwak, Dae-Young Yun, Yongjune Kim, Sang-Hyo Kim, Jong-Seon No
- Abstract summary: We propose training methods for neural min-sum (NMS) decoders to eliminate the error-floor effect.
We show that assigning different weights to unsatisfied check nodes effectively lowers the error-floor with a minimal number of weights.
The proposed NMS decoder can be integrated into existing LDPC decoders without incurring extra hardware costs.
- Score: 16.297253625958174
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Low-density parity-check (LDPC) codes have been successfully commercialized
in communication systems due to their strong error correction capabilities and
simple decoding process. However, the error-floor phenomenon of LDPC codes, in
which the error rate stops decreasing rapidly at a certain level, presents
challenges for achieving extremely low error rates and deploying LDPC codes in
scenarios demanding ultra-high reliability. In this work, we propose training
methods for neural min-sum (NMS) decoders to eliminate the error-floor effect.
First, by leveraging the boosting learning technique of ensemble networks, we
divide the decoding network into two neural decoders and train the post decoder
to be specialized for uncorrected words that the first decoder fails to
correct. Secondly, to address the vanishing gradient issue in training, we
introduce a block-wise training schedule that locally trains a block of weights
while retraining the preceding block. Lastly, we show that assigning different
weights to unsatisfied check nodes effectively lowers the error-floor with a
minimal number of weights. By applying these training methods to standard LDPC
codes, we achieve the best error-floor performance compared to other decoding
methods. The proposed NMS decoder, optimized solely through novel training
methods without additional modules, can be integrated into existing LDPC
decoders without incurring extra hardware costs. The source code is available
at https://github.com/ghy1228/LDPC_Error_Floor .
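The core idea, a min-sum check-node update scaled by learned weights, with a separate weight reserved for unsatisfied check nodes (UCNs), can be sketched as follows. This is a minimal illustration, not the paper's exact parameterization: the function name, the single shared weight per satisfied/unsatisfied state, and the hard-decision parity test are simplifying assumptions.

```python
def nms_check_update(v2c, w_sat, w_ucn):
    """One neural min-sum (NMS) check-node update (illustrative sketch).

    v2c   : incoming variable-to-check messages (LLRs) on the check's edges
    w_sat : learned weight applied when the check node is satisfied
    w_ucn : separate learned weight for an unsatisfied check node (UCN);
            weighting UCNs differently is the idea used to lower the
            error floor with very few extra weights
    Returns the outgoing check-to-variable messages.
    """
    # Hard-decision parity: a negative LLR maps to bit 1; odd parity
    # means the check is currently unsatisfied.
    parity = sum(1 for m in v2c if m < 0) % 2
    w = w_ucn if parity else w_sat

    c2v = []
    for i in range(len(v2c)):
        others = v2c[:i] + v2c[i + 1:]      # extrinsic rule: exclude edge i
        sign = 1.0
        for m in others:
            sign *= 1.0 if m >= 0 else -1.0
        mag = min(abs(m) for m in others)   # min-sum magnitude approximation
        c2v.append(w * sign * mag)          # learned scaling replaces the
                                            # usual fixed offset/scale
    return c2v
```

In training, `w_sat` and `w_ucn` would be trainable parameters optimized by backpropagation through unrolled decoding iterations; here they are plain floats to keep the sketch self-contained.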
Related papers
- Accelerating Error Correction Code Transformers [56.75773430667148]
We introduce a novel acceleration method for transformer-based decoders.
We achieve a 90% compression ratio and reduce arithmetic operation energy consumption by at least 224 times on modern hardware.
arXiv Detail & Related papers (2024-10-08T11:07:55Z)
- Boosted Neural Decoders: Achieving Extreme Reliability of LDPC Codes for 6G Networks [15.190674451882964]
6G networks require a frame error rate (FER) below $10^{-9}$.
Low-density parity-check (LDPC) codes, the standard in 5G new radio (NR), encounter a challenge known as the error floor phenomenon.
We introduce an innovative solution: boosted neural min-sum (NMS) decoder.
arXiv Detail & Related papers (2024-05-22T07:48:24Z)
- Learning Linear Block Error Correction Codes [62.25533750469467]
We propose for the first time a unified encoder-decoder training of binary linear block codes.
We also propose a novel Transformer model in which the self-attention masking is performed in a differentiable fashion for the efficient backpropagation of the code gradient.
arXiv Detail & Related papers (2024-05-07T06:47:12Z)
- Graph Neural Networks for Enhanced Decoding of Quantum LDPC Codes [6.175503577352742]
We propose a differentiable iterative decoder for quantum low-density parity-check (LDPC) codes.
The proposed algorithm is composed of classical belief propagation (BP) decoding stages and intermediate graph neural network (GNN) layers.
arXiv Detail & Related papers (2023-10-26T19:56:25Z)
- Denoising Diffusion Error Correction Codes [92.10654749898927]
Recently, neural decoders have demonstrated their advantage over classical decoding techniques.
Recent state-of-the-art neural decoders suffer from high complexity and lack the important iterative scheme characteristic of many legacy decoders.
We propose to employ denoising diffusion models for the soft decoding of linear codes at arbitrary block lengths.
arXiv Detail & Related papers (2022-09-16T11:00:50Z)
- Graph Neural Networks for Channel Decoding [71.15576353630667]
We showcase competitive decoding performance for various coding schemes, such as low-density parity-check (LDPC) and BCH codes.
The idea is to let a neural network (NN) learn a generalized message passing algorithm over a given graph.
We benchmark our proposed decoder against state-of-the-art in conventional channel decoding as well as against recent deep learning-based results.
arXiv Detail & Related papers (2022-07-29T15:29:18Z)
- Boost decoding performance of finite geometry LDPC codes with deep learning tactics [3.1519370595822274]
We seek a low-complexity and high-performance decoder for a class of finite geometry LDPC codes.
It is elaborated on how to generate high-quality training data effectively.
arXiv Detail & Related papers (2022-05-01T14:41:16Z)
- Adversarial Neural Networks for Error Correcting Codes [76.70040964453638]
We introduce a general framework to boost the performance and applicability of machine learning (ML) models.
We propose to combine ML decoders with a competing discriminator network that tries to distinguish between codewords and noisy words.
Our framework is game-theoretic, motivated by generative adversarial networks (GANs).
arXiv Detail & Related papers (2021-12-21T19:14:44Z)
- Efficient Decoding of Surface Code Syndromes for Error Correction in Quantum Computing [0.09236074230806578]
We propose a two-level (low and high) ML-based decoding scheme, where the first level corrects errors on physical qubits and the second one corrects any existing logical errors.
Our results show that our proposed decoding method achieves $\sim 10\times$ and $\sim 2\times$ higher values of pseudo-threshold and threshold, respectively.
We show that using more sophisticated ML models with higher training/testing time does not provide significant improvement in decoder performance.
arXiv Detail & Related papers (2021-10-21T04:54:44Z)
- FAID Diversity via Neural Networks [23.394836086114413]
We propose a new approach to design the decoder diversity of finite alphabet iterative decoders (FAIDs) for Low-Density Parity Check (LDPC) codes.
The proposed decoder diversity is achieved by training a recurrent quantized neural network (RQNN) to learn/design FAIDs.
arXiv Detail & Related papers (2021-05-10T05:14:42Z)
- Pruning Neural Belief Propagation Decoders [77.237958592189]
We introduce a method to tailor an overcomplete parity-check matrix to (neural) BP decoding using machine learning.
We achieve performance within 0.27 dB and 1.5 dB of the ML performance while reducing the complexity of the decoder.
arXiv Detail & Related papers (2020-01-21T12:05:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.