A Scalable Graph Neural Network Decoder for Short Block Codes
- URL: http://arxiv.org/abs/2211.06962v1
- Date: Sun, 13 Nov 2022 17:13:12 GMT
- Title: A Scalable Graph Neural Network Decoder for Short Block Codes
- Authors: Kou Tian, Chentao Yue, Changyang She, Yonghui Li, and Branka Vucetic
- Abstract summary: We propose a novel decoding algorithm for short block codes based on an edge-weighted graph neural network (EW-GNN).
The EW-GNN decoder operates on the Tanner graph with an iterative message-passing structure.
We show that the EW-GNN decoder outperforms the BP and deep-learning-based BP methods in terms of the decoding error rate.
- Score: 49.25571364253986
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work, we propose a novel decoding algorithm for short block codes
based on an edge-weighted graph neural network (EW-GNN). The EW-GNN decoder
operates on the Tanner graph with an iterative message-passing structure, which
algorithmically aligns with the conventional belief propagation (BP) decoding
method. In each iteration, the "weight" on the message passed along each edge
is obtained from a fully connected neural network that has the reliability
information from nodes/edges as its input. Compared to existing
deep-learning-based decoding schemes, the EW-GNN decoder is characterised by
its scalability, meaning that 1) the number of trainable parameters is
independent of the codeword length, and 2) an EW-GNN decoder trained with
shorter/simpler codes can be directly used for longer/more sophisticated codes of
different code rates. Furthermore, simulation results show that the EW-GNN
decoder outperforms the BP and deep-learning-based BP methods from the
literature in terms of the decoding error rate.
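The abstract describes the core mechanism: standard BP message passing on the Tanner graph, with each edge message scaled by a weight produced by a small fully connected network that takes reliability information as input. The following numpy sketch illustrates that idea only; it is not the paper's actual architecture. The `edge_weight_net` and `ew_bp_decode` names, the choice of reliability features, the network shape, and the zero placeholder parameters are all assumptions for illustration.

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code (for illustration only).
H = np.array([
    [1, 1, 0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [0, 1, 1, 1, 0, 0, 1],
])

def edge_weight_net(feat, W1, b1, W2, b2):
    # Tiny fully connected net: per-edge reliability features -> positive scalar.
    h = np.maximum(0.0, feat @ W1 + b1)  # ReLU hidden layer
    return np.exp(h @ W2 + b2)           # exp keeps the edge weight positive

def ew_bp_decode(llr, H, params, n_iter=5):
    m, n = H.shape
    W1, b1, W2, b2 = params
    v2c = np.tile(llr, (m, 1)) * H       # variable-to-check messages (channel LLRs)
    c2v = np.zeros_like(v2c)
    for _ in range(n_iter):
        # Check-to-variable update (tanh rule), scaled by a learned edge weight.
        for i in range(m):
            edges = np.nonzero(H[i])[0]
            for j in edges:
                others = edges[edges != j]
                prod = np.clip(np.prod(np.tanh(v2c[i, others] / 2.0)),
                               -0.999999, 0.999999)
                msg = 2.0 * np.arctanh(prod)
                # Assumed reliability features: |message| and |channel LLR|.
                feat = np.array([abs(msg), abs(llr[j])])
                c2v[i, j] = msg * edge_weight_net(feat, W1, b1, W2, b2)
        # Variable-to-check update (standard BP).
        for j in range(n):
            for i in np.nonzero(H[:, j])[0]:
                v2c[i, j] = llr[j] + c2v[:, j].sum() - c2v[i, j]
    return (llr + c2v.sum(axis=0) < 0).astype(int)  # hard decision

# Untrained placeholder parameters: every edge weight is exp(0) = 1,
# so this sketch reduces to plain BP decoding.
params = (np.zeros((2, 4)), np.zeros(4), np.zeros(4), 0.0)
llr = np.full(7, 4.0)                    # all-zeros codeword, clean channel
decoded = ew_bp_decode(llr, H, params)
```

Because the same small network is applied to every edge, the number of trainable parameters does not grow with the codeword length, which is consistent with the scalability property claimed in the abstract.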
Related papers
- On the Design and Performance of Machine Learning Based Error Correcting Decoders [3.8289109929360245]
We first consider the so-called single-label neural network (SLNN) and multi-label neural network (MLNN) decoders, which have been reported to achieve near maximum likelihood (ML) performance.
We then turn our attention to two transformer-based decoders: the error correction code transformer (ECCT) and the cross-attention message passing transformer (CrossMPT).
arXiv Detail & Related papers (2024-10-21T11:23:23Z)
- Decoding Quantum LDPC Codes Using Graph Neural Networks [52.19575718707659]
We propose a novel decoding method for Quantum Low-Density Parity-Check (QLDPC) codes based on Graph Neural Networks (GNNs).
The proposed GNN-based QLDPC decoder exploits the sparse graph structure of QLDPC codes and can be implemented as a message-passing decoding algorithm.
arXiv Detail & Related papers (2024-08-09T16:47:49Z)
- Graph Neural Networks for Enhanced Decoding of Quantum LDPC Codes [6.175503577352742]
We propose a differentiable iterative decoder for quantum low-density parity-check (LDPC) codes.
The proposed algorithm is composed of classical belief propagation (BP) decoding stages and intermediate graph neural network (GNN) layers.
arXiv Detail & Related papers (2023-10-26T19:56:25Z)
- For One-Shot Decoding: Self-supervised Deep Learning-Based Polar Decoder [1.4964546566293881]
We propose a self-supervised deep learning-based decoding scheme that enables one-shot decoding of polar codes.
In the proposed scheme, rather than using the information bit vectors as labels for training the neural network (NN), the NN is trained to function as a bounded distance decoder.
arXiv Detail & Related papers (2023-07-16T11:12:58Z)
- Generalization Bounds for Neural Belief Propagation Decoders [10.96453955114324]
In this paper, we investigate the generalization capabilities of neural network based decoders.
Specifically, the generalization gap of a decoder is the difference between its empirical and expected bit error rates.
Results are presented for both regular and irregular parity-check matrices.
arXiv Detail & Related papers (2023-05-17T19:56:04Z)
- Graph Neural Networks for Channel Decoding [71.15576353630667]
We showcase competitive decoding performance for various coding schemes, such as low-density parity-check (LDPC) and BCH codes.
The idea is to let a neural network (NN) learn a generalized message passing algorithm over a given graph.
We benchmark our proposed decoder against state-of-the-art in conventional channel decoding as well as against recent deep learning-based results.
arXiv Detail & Related papers (2022-07-29T15:29:18Z)
- Two-Timescale End-to-End Learning for Channel Acquisition and Hybrid Precoding [94.40747235081466]
We propose an end-to-end deep learning-based joint transceiver design algorithm for millimeter wave (mmWave) massive multiple-input multiple-output (MIMO) systems.
We develop a DNN architecture that maps the received pilots into feedback bits at the receiver, and then further maps the feedback bits into the hybrid precoder at the transmitter.
arXiv Detail & Related papers (2021-10-22T20:49:02Z)
- Dynamic Neural Representational Decoders for High-Resolution Semantic Segmentation [98.05643473345474]
We propose a novel decoder, termed the dynamic neural representational decoder (NRD).
As each location on the encoder's output corresponds to a local patch of the semantic labels, in this work, we represent these local patches of labels with compact neural networks.
This neural representation enables our decoder to leverage the smoothness prior in the semantic label space, and thus makes our decoder more efficient.
arXiv Detail & Related papers (2021-07-30T04:50:56Z)
- Pruning Neural Belief Propagation Decoders [77.237958592189]
We introduce a method to tailor an overcomplete parity-check matrix to (neural) BP decoding using machine learning.
We achieve performance within 0.27 dB and 1.5 dB of the ML performance while reducing the complexity of the decoder.
arXiv Detail & Related papers (2020-01-21T12:05:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.