Cyclically Equivariant Neural Decoders for Cyclic Codes
- URL: http://arxiv.org/abs/2105.05540v1
- Date: Wed, 12 May 2021 09:41:13 GMT
- Title: Cyclically Equivariant Neural Decoders for Cyclic Codes
- Authors: Xiangyu Chen and Min Ye
- Abstract summary: We propose a novel neural decoder for cyclic codes by exploiting their cyclically invariant property.
Our new decoder consistently outperforms previous neural decoders when decoding cyclic codes.
Finally, we propose a list decoding procedure that can significantly reduce the decoding error probability for BCH codes and punctured RM codes.
- Score: 33.63188063525036
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Neural decoders were introduced as a generalization of the classic Belief
Propagation (BP) decoding algorithms, where the trellis graph in the BP
algorithm is viewed as a neural network, and the weights in the trellis graph
are optimized by training the neural network. In this work, we propose a novel
neural decoder for cyclic codes by exploiting their cyclically invariant
property. More precisely, we impose a shift-invariant structure on the weights
of our neural decoder so that any cyclic shift of inputs results in the same
cyclic shift of outputs. Extensive simulations with BCH codes and punctured
Reed-Muller (RM) codes show that our new decoder consistently outperforms
previous neural decoders when decoding cyclic codes. Finally, we propose a list
decoding procedure that can significantly reduce the decoding error probability
for BCH codes and punctured RM codes. For certain high-rate codes, the gap
between our list decoder and the Maximum Likelihood decoder is less than
$0.1$dB. Code available at
https://github.com/cyclicallyneuraldecoder/CyclicallyEquivariantNeuralDecoders
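To make the equivariance property concrete: a linear layer whose weight matrix is circulant is exactly cyclically equivariant, which is the structural constraint the abstract describes. Below is a minimal NumPy sketch (not the authors' implementation; the `circulant` helper and all names are illustrative), using a length-7 block as in the (7,4) Hamming code:

```python
import numpy as np

def circulant(first_row):
    """Build a circulant matrix: row k is the first row cyclically shifted by k."""
    n = len(first_row)
    return np.stack([np.roll(first_row, k) for k in range(n)])

rng = np.random.default_rng(0)
n = 7                      # block length, e.g. the (7,4) Hamming code
w = rng.normal(size=n)     # one shared weight vector (n parameters, not n*n)
W = circulant(w)

x = rng.normal(size=n)     # e.g. channel LLRs for one received word
shift = 3

# Equivariance check: shifting the input shifts the output by the same amount.
assert np.allclose(W @ np.roll(x, shift), np.roll(W @ x, shift))
```

Because every row of W is a cyclic shift of the same weight vector, the layer has n free parameters instead of n*n, and the equivariance holds exactly by construction rather than being learned.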
Related papers
- Learning Linear Block Error Correction Codes [62.25533750469467]
We propose for the first time a unified encoder-decoder training of binary linear block codes.
We also propose a novel Transformer model in which the self-attention masking is performed in a differentiable fashion for the efficient backpropagation of the code gradient.
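A hedged sketch of what differentiable masking could look like (an assumed formulation for illustration, not the paper's exact model): the hard 0/1 attention mask is replaced by a learnable soft mask in (0, 1), so gradients can flow through the masking pattern itself.

```python
import torch
import torch.nn as nn

n, d = 7, 16                                   # toy sequence length and width
q, k = torch.randn(n, d), torch.randn(n, d)
mask_logits = nn.Parameter(torch.zeros(n, n))  # learnable masking pattern

scores = q @ k.T / d ** 0.5
soft_mask = torch.sigmoid(mask_logits)         # in (0, 1), differentiable
attn = torch.softmax(scores + torch.log(soft_mask + 1e-9), dim=-1)
# As mask_logits saturate, soft_mask approaches a hard 0/1 mask, but gradients
# with respect to the masking pattern remain available during training.
```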
arXiv Detail & Related papers (2024-05-07T06:47:12Z)
- Graph Neural Networks for Enhanced Decoding of Quantum LDPC Codes [6.175503577352742]
We propose a differentiable iterative decoder for quantum low-density parity-check (LDPC) codes.
The proposed algorithm is composed of classical belief propagation (BP) decoding stages and intermediate graph neural network (GNN) layers.
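A hedged architectural sketch of this alternation (the `min_sum_iteration` and `refine` modules below are simplified stand-ins, not the paper's code; a per-node MLP substitutes for a full GNN layer, and a small classical code replaces the quantum one):

```python
import torch
import torch.nn as nn

# Toy parity-check matrix: the (7,4) Hamming code, a classical stand-in.
H = torch.tensor([[1, 0, 1, 0, 1, 0, 1],
                  [0, 1, 1, 0, 0, 1, 1],
                  [0, 0, 0, 1, 1, 1, 1]], dtype=torch.float32)
m, n = H.shape

def min_sum_iteration(llr, v2c):
    """One classical min-sum BP iteration on the Tanner graph of H."""
    c2v = torch.zeros_like(v2c)
    for i in range(m):                # check-to-variable messages
        for j in range(n):
            if H[i, j] > 0:
                others = torch.stack([v2c[i, k] for k in range(n)
                                      if H[i, k] > 0 and k != j])
                c2v[i, j] = torch.sign(others).prod() * others.abs().min()
    belief = llr + (c2v * H).sum(dim=0)             # per-variable belief
    return belief, (belief.unsqueeze(0) - c2v) * H  # extrinsic v->c messages

# Learned intermediate layer: a shared per-node MLP refining each variable's
# state from (channel LLR, current belief) -- a stand-in for a full GNN layer.
refine = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))

llr0 = torch.randn(n)                    # toy channel LLRs
llr, v2c = llr0, llr0.unsqueeze(0) * H   # initialize messages with channel LLRs
for _ in range(3):                       # alternate BP stages and learned layers
    belief, v2c = min_sum_iteration(llr, v2c)
    llr = refine(torch.stack([llr0, belief], dim=1)).squeeze(1)
hard_decision = (belief < 0).int()       # hard decisions from final beliefs
```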
arXiv Detail & Related papers (2023-10-26T19:56:25Z)
- Optimizing Serially Concatenated Neural Codes with Classical Decoders [8.692972779213932]
We show that a classical decoding algorithm can be applied to a non-trivial, real-valued neural code.
As the BCJR algorithm is fully differentiable, it is possible to train, or fine-tune, the neural encoder in an end-to-end fashion.
arXiv Detail & Related papers (2022-12-20T15:40:08Z)
- Neural Belief Propagation Decoding of Quantum LDPC Codes Using Overcomplete Check Matrices [60.02503434201552]
We propose to decode QLDPC codes based on a check matrix with redundant rows, generated from linear combinations of the rows in the original check matrix.
This approach yields a significant improvement in decoding performance with the additional advantage of very low decoding latency.
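A minimal sketch of the overcomplete-matrix construction (a toy classical example; the paper targets quantum LDPC codes): since the row space of a parity-check matrix over GF(2) is closed under addition, any XOR of rows yields a valid redundant check.

```python
import itertools
import numpy as np

# Toy parity-check matrix of the (7,4) Hamming code.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]], dtype=np.uint8)

# Append every XOR of two distinct rows as a redundant parity check.
redundant = [H[i] ^ H[j] for i, j in itertools.combinations(range(H.shape[0]), 2)]
H_over = np.vstack([H] + redundant)

print(H_over.shape)  # (6, 7): 3 original checks + 3 redundant checks
```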
arXiv Detail & Related papers (2022-12-20T13:41:27Z)
- A Scalable Graph Neural Network Decoder for Short Block Codes [49.25571364253986]
We propose a novel decoding algorithm for short block codes based on an edge-weighted graph neural network (EW-GNN).
The EW-GNN decoder operates on the Tanner graph with an iterative message-passing structure.
We show that the EW-GNN decoder outperforms the BP and deep-learning-based BP methods in terms of the decoding error rate.
arXiv Detail & Related papers (2022-11-13T17:13:12Z) - Graph Neural Networks for Channel Decoding [71.15576353630667]
The idea is to let a neural network (NN) learn a generalized message-passing algorithm over a given graph.
We showcase competitive decoding performance for various coding schemes, such as low-density parity-check (LDPC) and BCH codes.
We benchmark our proposed decoder against the state of the art in conventional channel decoding as well as against recent deep-learning-based results.
arXiv Detail & Related papers (2022-07-29T15:29:18Z)
- Adversarial Neural Networks for Error Correcting Codes [76.70040964453638]
We introduce a general framework to boost the performance and applicability of machine learning (ML) models.
We propose to combine ML decoders with a competing discriminator network that tries to distinguish between codewords and noisy words.
Our framework is game-theoretic, motivated by generative adversarial networks (GANs).
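A minimal PyTorch sketch of the discriminator side of this game (an assumption about the setup, not the authors' code): the discriminator learns to separate clean codewords from noisy words, and a neural decoder could then be trained to fool it.

```python
import torch
import torch.nn as nn

n = 7  # toy block length

discriminator = nn.Sequential(
    nn.Linear(n, 32), nn.ReLU(),
    nn.Linear(32, 1),  # logit: "is this a clean codeword?"
)
opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def discriminator_step(codewords, noisy_words):
    """One update: push logits up on codewords, down on noisy words."""
    logits_real = discriminator(codewords)
    logits_fake = discriminator(noisy_words)
    loss = bce(logits_real, torch.ones_like(logits_real)) + \
           bce(logits_fake, torch.zeros_like(logits_fake))
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Toy usage: random +/-1 words stand in for BPSK-modulated codewords,
# and their AWGN-corrupted versions play the role of noisy words.
codewords = torch.randint(0, 2, (64, n)).float() * 2 - 1
noisy = codewords + 0.5 * torch.randn_like(codewords)
discriminator_step(codewords, noisy)
```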
arXiv Detail & Related papers (2021-12-21T19:14:44Z)
- ProductAE: Towards Training Larger Channel Codes based on Neural Product Codes [9.118706387430885]
It is prohibitively complex to design and train relatively large neural channel codes via deep learning techniques.
In this paper, we construct ProductAEs, a computationally efficient family of deep-learning driven (encoder, decoder) pairs.
We show significant gains, over all ranges of signal-to-noise ratio (SNR), for a code of parameters $(100,225)$ and a moderate-length code of parameters $(196,441)$.
arXiv Detail & Related papers (2021-10-09T06:00:40Z)
- Improving the List Decoding Version of the Cyclically Equivariant Neural Decoder [33.63188063525036]
We propose an improved version of the list decoding algorithm for BCH codes and punctured RM codes.
Our new decoder provides up to $2$dB gain over the previous list decoder when measured by BER.
arXiv Detail & Related papers (2021-06-15T08:37:36Z)
- Pruning Neural Belief Propagation Decoders [77.237958592189]
We introduce a method to tailor an overcomplete parity-check matrix to (neural) BP decoding using machine learning.
We achieve performance within 0.27 dB and 1.5 dB of the ML performance while reducing the complexity of the decoder.
arXiv Detail & Related papers (2020-01-21T12:05:46Z)