Boost decoding performance of finite geometry LDPC codes with deep
learning tactics
- URL: http://arxiv.org/abs/2205.00481v1
- Date: Sun, 1 May 2022 14:41:16 GMT
- Title: Boost decoding performance of finite geometry LDPC codes with deep
learning tactics
- Authors: Guangwen Li, Xiao Yu
- Abstract summary: We seek a low-complexity and high-performance decoder for a class of finite geometry LDPC codes.
We also elaborate on how to generate high-quality training data effectively.
- Score: 3.1519370595822274
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: It is known that a standard min-sum decoder can be unrolled into a
neural network after weighting each edge. We adopt a similar decoding framework
to seek a low-complexity, high-performance decoder for a class of finite
geometry LDPC codes of short and moderate block lengths. We elaborate on how to
generate high-quality training data effectively, and we illustrate the strong
link between the training loss and the bit error rate of a neural decoder by
tracing their evolution curves. Since the objectives of neural networks and
error-correction decoders can conflict, we highlight the necessity of
restraining the number of trainable parameters, both to ensure training
convergence and to reduce decoding complexity. Consequently, for the LDPC codes
considered, their rigorous algebraic structure makes it feasible to cut the
number of trainable parameters down to only one, while incurring only marginal
performance loss in simulation.
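
As a concrete illustration of the unrolled min-sum framework with a single shared weight, here is a minimal numpy sketch (not the authors' implementation; the weight value, scheduling, and stopping rule are placeholders):

```python
import numpy as np

def neural_min_sum_decode(H, llr, weight=0.8, max_iters=20):
    """Weighted (neural) min-sum decoding with one shared weight.

    Minimal sketch, not the authors' code: `weight` stands in for the
    single trainable parameter the abstract says survives after
    exploiting the codes' algebraic structure; its value here is made up.
    H: (m, n) binary parity-check matrix; llr: (n,) channel LLRs.
    """
    m, n = H.shape
    c2v = np.zeros((m, n))                      # check-to-variable messages
    hard = (llr < 0).astype(int)
    for _ in range(max_iters):
        total = llr + c2v.sum(axis=0)           # current posterior LLRs
        v2c = np.where(H == 1, total[None, :] - c2v, 0.0)
        for c in range(m):                      # weighted min-sum update
            idx = np.flatnonzero(H[c])
            vals = v2c[c, idx]
            signs = np.where(vals < 0, -1.0, 1.0)
            mags = np.abs(vals)
            for j, v in enumerate(idx):
                rest = np.delete(np.arange(len(idx)), j)
                c2v[c, v] = weight * signs[rest].prod() * mags[rest].min()
        hard = ((llr + c2v.sum(axis=0)) < 0).astype(int)
        if not (H.dot(hard) % 2).any():         # all checks satisfied
            break
    return hard
```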
Related papers
- Learned layered coding for Successive Refinement in the Wyner-Ziv Problem [18.134147308944446]
We propose a data-driven approach to explicitly learn the progressive encoding of a continuous source.
This setup refers to the successive refinement of the Wyner-Ziv coding problem.
We demonstrate that RNNs can explicitly retrieve layered binning solutions akin to scalable nested quantization.
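To make "scalable nested quantization" concrete, here is a toy successive-refinement quantizer in Python (purely illustrative; the paper learns this behavior with an RNN rather than hard-coding it):

```python
def successive_refinement(x, num_layers=3, low=-1.0, high=1.0):
    """Toy nested scalar quantizer: each layer emits one bit that halves
    the current cell, so decoding any prefix of the bitstream yields a
    progressively finer estimate of the source sample x."""
    bits, estimates = [], []
    lo, hi = low, high
    for _ in range(num_layers):
        mid = (lo + hi) / 2
        bit = int(x >= mid)
        bits.append(bit)
        lo, hi = (mid, hi) if bit else (lo, mid)
        estimates.append((lo + hi) / 2)   # reconstruction after this layer
    return bits, estimates

# Example: each extra layer refines the estimate of x = 0.3
print(successive_refinement(0.3))  # ([1, 0, 1], [0.5, 0.25, 0.375])
```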
arXiv Detail & Related papers (2023-11-06T12:45:32Z)
- Graph Neural Networks for Enhanced Decoding of Quantum LDPC Codes [6.175503577352742]
We propose a differentiable iterative decoder for quantum low-density parity-check (LDPC) codes.
The proposed algorithm is composed of classical belief propagation (BP) decoding stages and intermediate graph neural network (GNN) layers.
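A skeleton of such an interleaved architecture might look as follows (our reading of the summary, not the authors' code; the BP stage is a crude one-shot min-sum pass that skips extrinsic-message bookkeeping, and the learned layers are per-node MLPs rather than full GNN message passing, to keep the sketch short):

```python
import torch
import torch.nn as nn

def min_sum_iteration(llr, H):
    """One naive min-sum pass over a {0,1} parity-check tensor H."""
    out = llr.clone()
    for c in range(H.shape[0]):
        idx = torch.nonzero(H[c]).flatten()
        vals = llr[idx]
        for j, v in enumerate(idx):
            rest = torch.cat([vals[:j], vals[j + 1:]])
            out[v] = out[v] + torch.sign(rest).prod() * rest.abs().min()
    return out

class BPGNN(nn.Module):
    """Classical BP stages interleaved with small learned layers that
    apply a residual, per-bit correction to the running beliefs."""
    def __init__(self, num_stages=3, hidden=16):
        super().__init__()
        self.refine = nn.ModuleList(
            nn.Sequential(nn.Linear(1, hidden), nn.ReLU(),
                          nn.Linear(hidden, 1)) for _ in range(num_stages))

    def forward(self, llr, H):
        beliefs = llr
        for layer in self.refine:
            beliefs = min_sum_iteration(beliefs, H)            # BP stage
            beliefs = beliefs + layer(beliefs.unsqueeze(-1)).squeeze(-1)
        return torch.sigmoid(-beliefs)   # soft estimates of P(bit = 1)
```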
arXiv Detail & Related papers (2023-10-26T19:56:25Z)
- Boosting Learning for LDPC Codes to Improve the Error-Floor Performance [16.297253625958174]
We propose training methods for neural min-sum (NMS) decoders to eliminate the error-floor effect.
We show that assigning different weights to unsatisfied check nodes effectively lowers the error-floor with a minimal number of weights.
The proposed NMS decoder can be integrated into existing LDPC decoders without incurring extra hardware costs.
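One plausible reading of weighting unsatisfied check nodes differently is sketched below (the weight values and their exact placement are assumptions, not the paper's trained ones):

```python
import numpy as np

def weighted_check_update(v2c_row, check_satisfied, w_sat=1.0, w_unsat=0.7):
    """Min-sum check-node update whose scaling factor depends on whether
    the check is currently satisfied.

    v2c_row: incoming variable-to-check messages of one check node.
    """
    w = w_sat if check_satisfied else w_unsat
    out = np.empty_like(v2c_row, dtype=float)
    for j in range(len(v2c_row)):
        rest = np.delete(v2c_row, j)        # extrinsic: exclude own input
        out[j] = w * np.prod(np.sign(rest)) * np.abs(rest).min()
    return out

# Example: an unsatisfied check scales its outgoing messages down
msgs = np.array([1.2, -0.4, 2.0, 0.8])
print(weighted_check_update(msgs, check_satisfied=False))
```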
arXiv Detail & Related papers (2023-10-11T05:05:40Z)
- A Cryogenic Memristive Neural Decoder for Fault-tolerant Quantum Error Correction [0.0]
We design and analyze a neural decoder based on an in-memory crossbar (IMC) architecture.
We develop hardware-aware re-training methods to mitigate the fidelity loss.
This work provides a pathway to scalable, fast, and low-power cryogenic IMC hardware for integrated fault-tolerant QEC.
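Hardware-aware retraining is commonly realized by injecting device-like noise into the weights during training; the sketch below shows that generic trick (whether the paper uses exactly this noise model is an assumption on our part):

```python
import torch
import torch.nn as nn

class NoisyLinear(nn.Linear):
    """Linear layer with multiplicative weight noise at train time,
    mimicking per-device conductance variation in a memristive crossbar.
    The noise scale `sigma` is an illustrative value."""
    def __init__(self, in_features, out_features, sigma=0.05):
        super().__init__(in_features, out_features)
        self.sigma = sigma

    def forward(self, x):
        if self.training:
            # sample fresh conductance perturbations every forward pass
            noise = 1 + self.sigma * torch.randn_like(self.weight)
            return nn.functional.linear(x, self.weight * noise, self.bias)
        return super().forward(x)   # clean weights at inference
```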
arXiv Detail & Related papers (2023-07-18T17:46:33Z) - Neural Belief Propagation Decoding of Quantum LDPC Codes Using
Overcomplete Check Matrices [60.02503434201552]
We propose to decode QLDPC codes based on a check matrix with redundant rows, generated from linear combinations of the rows in the original check matrix.
This approach yields a significant improvement in decoding performance with the additional advantage of very low decoding latency.
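The stated construction, adding redundant rows that are GF(2) linear combinations of existing rows, can be sketched in a few lines (the example matrix and row pairs are our own):

```python
import numpy as np

def overcomplete_check_matrix(H, extra_pairs):
    """Append redundant rows formed as GF(2) sums of existing rows.

    H: binary parity-check matrix; extra_pairs: (i, j) row indices to
    combine, chosen here purely for illustration.
    """
    extra = [H[i] ^ H[j] for i, j in extra_pairs]   # XOR = addition in GF(2)
    return np.vstack([H] + extra)

H = np.array([[1, 1, 0, 1, 0],
              [0, 1, 1, 0, 1],
              [1, 0, 1, 1, 1]], dtype=np.uint8)
H_oc = overcomplete_check_matrix(H, [(0, 1), (1, 2)])
print(H_oc.shape)  # (5, 5): same code, more (redundant) checks for BP
```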
arXiv Detail & Related papers (2022-12-20T13:41:27Z)
- NAF: Neural Attenuation Fields for Sparse-View CBCT Reconstruction [79.13750275141139]
This paper proposes a novel and fast self-supervised solution for sparse-view CBCT reconstruction.
The desired attenuation coefficients are represented as a continuous function of 3D spatial coordinates, parameterized by a fully-connected deep neural network.
A learning-based encoder entailing hash coding is adopted to help the network capture high-frequency details.
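In outline, the network (here `NAFSketch`, a name of our own) maps a hash-encoded 3D coordinate to a non-negative attenuation coefficient; the single-level, nearest-corner hash encoder below is a drastic simplification of the paper's learning-based multi-resolution encoder, with made-up sizes:

```python
import torch
import torch.nn as nn

class HashEncoder(nn.Module):
    """Single-level, nearest-corner hash encoding of 3D points."""
    def __init__(self, table_size=2**14, feat_dim=2, resolution=64):
        super().__init__()
        self.table = nn.Parameter(1e-2 * torch.randn(table_size, feat_dim))
        self.res = resolution
        # large primes decorrelate the three coordinate axes in the hash
        self.register_buffer("primes",
                             torch.tensor([1, 2654435761, 805459861]))

    def forward(self, xyz):                 # xyz in [0, 1]^3, shape (N, 3)
        grid = (xyz * self.res).long()      # nearest grid corner only
        h = (grid * self.primes).sum(-1) % self.table.shape[0]
        return self.table[h]                # learned features, (N, feat_dim)

class NAFSketch(nn.Module):
    """Attenuation field: encoded 3D point -> non-negative coefficient."""
    def __init__(self):
        super().__init__()
        self.enc = HashEncoder()
        self.mlp = nn.Sequential(nn.Linear(2, 64), nn.ReLU(),
                                 nn.Linear(64, 1), nn.Softplus())

    def forward(self, xyz):
        return self.mlp(self.enc(xyz))
```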
arXiv Detail & Related papers (2022-09-29T04:06:00Z)
- Graph Neural Networks for Channel Decoding [71.15576353630667]
We showcase competitive decoding performance for various coding schemes, such as low-density parity-check (LDPC) and BCH codes.
The idea is to let a neural network (NN) learn a generalized message passing algorithm over a given graph.
We benchmark our proposed decoder against the state of the art in conventional channel decoding as well as against recent deep learning-based results.
arXiv Detail & Related papers (2022-07-29T15:29:18Z)
- CodeRL: Mastering Code Generation through Pretrained Models and Deep Reinforcement Learning [92.36705236706678]
"CodeRL" is a new framework for program synthesis tasks through pretrained LMs and deep reinforcement learning.
During inference, we introduce a new generation procedure with a critical sampling strategy.
For the model backbones, we extend the encoder-decoder architecture of CodeT5 with enhanced learning objectives.
arXiv Detail & Related papers (2022-07-05T02:42:15Z)
- Reducing Redundancy in the Bottleneck Representation of the Autoencoders [98.78384185493624]
Autoencoders are a type of unsupervised neural network that can be used to solve various tasks.
We propose a scheme to explicitly penalize feature redundancies in the bottleneck representation.
We tested our approach across different tasks: dimensionality reduction using three different datasets, image compression using the MNIST dataset, and image denoising using Fashion-MNIST.
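One plausible instantiation of "explicitly penalizing feature redundancies" is to penalize the off-diagonal entries of the bottleneck covariance matrix, as sketched below (the paper's exact regularizer may differ):

```python
import torch

def redundancy_penalty(z):
    """Sum of squared off-diagonal entries of the feature covariance
    matrix: zero when bottleneck features are pairwise uncorrelated.

    z: (batch, dim) bottleneck activations.
    """
    z = z - z.mean(dim=0, keepdim=True)          # center each feature
    cov = (z.T @ z) / (z.shape[0] - 1)           # (dim, dim) covariance
    off_diag = cov - torch.diag(torch.diag(cov))
    return (off_diag ** 2).sum()

# Usage: total_loss = reconstruction_loss + lam * redundancy_penalty(z)
```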
arXiv Detail & Related papers (2022-02-09T18:48:02Z)
- Rate Distortion Characteristic Modeling for Neural Image Compression [59.25700168404325]
End-to-end optimization gives neural image compression (NIC) superior lossy compression performance.
However, distinct models must be trained to reach different points in the rate-distortion (R-D) space.
We formulate the essential mathematical functions that describe the R-D behavior of NIC using deep networks and statistical modeling.
arXiv Detail & Related papers (2021-06-24T12:23:05Z)
- FAID Diversity via Neural Networks [23.394836086114413]
We propose a new approach to design the decoder diversity of finite alphabet iterative decoders (FAIDs) for Low-Density Parity Check (LDPC) codes.
The proposed decoder diversity is achieved by training a recurrent quantized neural network (RQNN) to learn/design FAIDs.
arXiv Detail & Related papers (2021-05-10T05:14:42Z)