KO codes: Inventing Nonlinear Encoding and Decoding for Reliable
Wireless Communication via Deep-learning
- URL: http://arxiv.org/abs/2108.12920v1
- Date: Sun, 29 Aug 2021 21:08:30 GMT
- Title: KO codes: Inventing Nonlinear Encoding and Decoding for Reliable
Wireless Communication via Deep-learning
- Authors: Ashok Vardhan Makkuva, Xiyang Liu, Mohammad Vahid Jamali, Hessam
Mahdavifar, Sewoong Oh, Pramod Viswanath
- Abstract summary: Landmark codes underpin reliable physical layer communication, e.g., Reed-Muller, BCH, Convolutional, Turbo, LDPC and Polar codes.
In this paper, we construct KO codes, a computationally efficient family of deep-learning driven (encoder, decoder) pairs.
KO codes beat state-of-the-art Reed-Muller and Polar codes, under low-complexity successive cancellation decoding.
- Score: 76.5589486928387
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Landmark codes underpin reliable physical layer communication, e.g.,
Reed-Muller, BCH, Convolutional, Turbo, LDPC and Polar codes: each is a linear
code and represents a mathematical breakthrough. The impact on humanity is
huge: each of these codes has been used in global wireless communication
standards (satellite, WiFi, cellular). Reliability of communication over the
classical additive white Gaussian noise (AWGN) channel enables benchmarking and
ranking of the different codes. In this paper, we construct KO codes, a
computationally efficient family of deep-learning driven (encoder, decoder)
pairs that outperform the state-of-the-art reliability performance on the
standardized AWGN channel. KO codes beat state-of-the-art Reed-Muller and Polar
codes, under the low-complexity successive cancellation decoding, in the
challenging short-to-medium block length regime on the AWGN channel. We show
that the gains of KO codes stem primarily from the nonlinear mapping of
information bits directly to transmitted real symbols (bypassing modulation),
while still admitting an efficient, high-performance decoder. The key technical
innovation that makes this possible is the design of a novel family of neural
architectures inspired by the computation tree of the Kronecker Operation (KO)
central to Reed-Muller and Polar codes. These architectures pave the way for
the discovery of a much richer class of hitherto unexplored nonlinear algebraic
structures. The code is available at https://github.com/deepcomm/KOcodes
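To make the Kronecker-tree idea concrete, the sketch below (a loose illustration, not the authors' implementation; all module names and sizes are assumptions) replaces the linear Plotkin map (u, v) -> (u, u + v) at each node of the Reed-Muller/Polar computation tree with a small neural network, so information bits are mapped nonlinearly and directly to real symbols:

```python
import torch
import torch.nn as nn

class PlotkinMLP(nn.Module):
    """Hypothetical learned replacement for the linear Plotkin map (u, v) -> (u, u + v)."""
    def __init__(self, hidden=32):
        super().__init__()
        self.g = nn.Sequential(nn.Linear(2, hidden), nn.SiLU(), nn.Linear(hidden, 1))

    def forward(self, u, v):
        # u, v: (batch, half_len) real-valued half-codewords
        return self.g(torch.stack([u, v], dim=-1)).squeeze(-1)

class KOStyleEncoder(nn.Module):
    """Recursive encoder over a depth-m Plotkin tree, one learned map per level."""
    def __init__(self, depth):
        super().__init__()
        self.maps = nn.ModuleList([PlotkinMLP() for _ in range(depth)])

    def forward(self, bits):                      # bits: (batch, 2**depth) in {0, 1}
        return self._encode(1.0 - 2.0 * bits, 0)  # BPSK-style +/-1 symbols

    def _encode(self, x, level):
        if x.shape[1] == 1:
            return x
        u, v = x.chunk(2, dim=1)                  # left/right halves of the block
        u = self._encode(u, level + 1)
        v = self._encode(v, level + 1)
        # Plotkin concatenation with a learned nonlinear combiner: (u, g(u, v))
        return torch.cat([u, self.maps[level](u, v)], dim=1)

# usage: enc = KOStyleEncoder(depth=6); y = enc(torch.randint(0, 2, (8, 64)).float())
# (a transmit-power constraint on y would be enforced in practice)
```

In the paper's setting a matched neural decoder runs over the same computation tree; this sketch covers only the encoding skeleton.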
Related papers
- Decoding Quasi-Cyclic Quantum LDPC Codes [23.22566380210149]
Quantum low-density parity-check (qLDPC) codes are an important component in the quest for fault tolerance.
Recent progress on qLDPC codes has led to constructions which are asymptotically good, and which admit linear-time decoders to correct errors affecting a constant fraction of codeword qubits.
In practice, the surface/toric codes, which are the product of two repetition codes, are still often the qLDPC codes of choice.
arXiv Detail & Related papers (2024-11-07T06:25:27Z)
- Factor Graph Optimization of Error-Correcting Codes for Belief Propagation Decoding [62.25533750469467]
Low-Density Parity-Check (LDPC) codes possess several advantages over other families of codes.
The proposed approach is shown to improve on the decoding performance of existing popular codes by orders of magnitude.
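For context, belief propagation runs message passing on the factor graph defined by a parity-check matrix H; a generic min-sum sketch of such a decoder (illustrative only, not the paper's optimized graphs) looks like:

```python
import numpy as np

def min_sum_decode(H, llr, iters=20):
    """H: (m, n) binary parity-check matrix; llr: (n,) channel LLRs."""
    m, n = H.shape
    msg = np.zeros((m, n))                      # check-to-variable messages
    for _ in range(iters):
        total = llr + msg.sum(axis=0)           # variable-node beliefs
        v2c = (total - msg) * H                 # extrinsic variable-to-check messages
        for i in range(m):                      # min-sum check-node update
            idx = np.flatnonzero(H[i])
            vals = v2c[i, idx]
            sgn = np.prod(np.sign(vals + 1e-12))
            for k, j in enumerate(idx):
                others = np.delete(vals, k)
                msg[i, j] = sgn * np.sign(vals[k] + 1e-12) * np.abs(others).min()
        hard = (llr + msg.sum(axis=0)) < 0      # hard bit decisions
        if not np.any(H @ hard % 2):            # stop once all checks are satisfied
            break
    return hard.astype(int)
```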
arXiv Detail & Related papers (2024-06-09T12:08:56Z)
- Learning Linear Block Error Correction Codes [62.25533750469467]
We propose for the first time a unified encoder-decoder training of binary linear block codes.
We also propose a novel Transformer model in which the self-attention masking is performed in a differentiable fashion for the efficient backpropagation of the code gradient.
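A minimal sketch of what differentiable attention masking could look like, assuming (this is not the paper's exact architecture) a soft mask derived from learnable code-structure logits in place of a hard binary mask:

```python
import torch
import torch.nn.functional as F

def soft_masked_attention(q, k, v, mask_logits, tau=1.0):
    # q, k, v: (batch, heads, n, d); mask_logits: (n, n) learnable logits,
    # assumed here to be tied to the code's parity-check connectivity.
    scores = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5
    soft_mask = torch.sigmoid(mask_logits / tau)   # in (0, 1), differentiable
    scores = scores + torch.log(soft_mask + 1e-9)  # approaches -inf where mask ~ 0
    return F.softmax(scores, dim=-1) @ v           # gradients reach mask_logits
```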
arXiv Detail & Related papers (2024-05-07T06:47:12Z)
- DeepPolar: Inventing Nonlinear Large-Kernel Polar Codes via Deep Learning [36.10365210143751]
Polar codes have emerged as the state-of-the-art error-correction code for short-to-medium block length regimes.
DeepPolar codes extend the conventional Polar coding framework by utilizing a larger kernel size and parameterizing these kernels and matched decoders through neural networks.
Our results demonstrate that these data-driven codes effectively leverage the benefits of a larger kernel size, resulting in enhanced reliability when compared to both existing neural codes and conventional Polar codes.
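A rough sketch of the large-kernel idea, under the assumption (names and sizes are illustrative) that the binary 2x2 polar kernel is replaced by a learned l-to-l map applied Kronecker-power style across the block:

```python
import torch
import torch.nn as nn

class NeuralKernel(nn.Module):
    """Hypothetical learned map replacing the binary l x l polar kernel."""
    def __init__(self, ell, hidden=64):
        super().__init__()
        self.ell = ell
        self.net = nn.Sequential(nn.Linear(ell, hidden), nn.ReLU(),
                                 nn.Linear(hidden, ell))

    def forward(self, x):              # acts on the last dimension (size ell)
        return self.net(x)

def deeppolar_style_encode(bits, kernel, m):
    """Kronecker-power style transform of length ell**m with a learned kernel."""
    ell = kernel.ell
    x = 1.0 - 2.0 * bits               # {0, 1} bits -> {+1, -1} real symbols
    x = x.view(bits.shape[0], *([ell] * m))
    for axis in range(1, m + 1):       # apply the kernel along every tensor axis,
        x = torch.movedim(x, axis, -1) # mirroring the Kronecker power structure
        x = kernel(x)
        x = torch.movedim(x, -1, axis)
    return x.reshape(bits.shape[0], ell ** m)

# usage: y = deeppolar_style_encode(bits, NeuralKernel(ell=4), m=3)  # bits: (batch, 64)
```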
arXiv Detail & Related papers (2024-02-14T00:18:10Z)
- Optimizing Serially Concatenated Neural Codes with Classical Decoders [8.692972779213932]
We show that a classical decoding algorithm can be applied to a non-trivial, real-valued neural code.
As the BCJR algorithm is fully differentiable, it is possible to train, or fine-tune, the neural encoder in an end-to-end fashion.
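The end-to-end idea can be sketched with a toy differentiable soft decoder: exact bitwise MAP by enumerating all 2^k messages stands in for BCJR (which computes the same posteriors efficiently on a trellis). Everything below, including sizes and names, is an illustrative assumption:

```python
import itertools
import torch
import torch.nn as nn

k, n, sigma = 4, 8, 0.7
msgs = torch.tensor(list(itertools.product([0., 1.], repeat=k)))  # all 2^k messages

encoder = nn.Sequential(nn.Linear(k, 32), nn.ELU(), nn.Linear(32, n))
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

for step in range(2000):
    codebook = encoder(msgs)                    # (2^k, n) real-valued neural code
    codebook = codebook / codebook.std()        # crude transmit-power normalization
    idx = torch.randint(len(msgs), (128,))
    y = codebook[idx] + sigma * torch.randn(128, n)   # AWGN channel
    # differentiable soft decoding: posterior over messages, then bit marginals
    logits = -torch.cdist(y, codebook) ** 2 / (2 * sigma ** 2)   # (128, 2^k)
    post = logits.softmax(dim=-1)
    bit_prob = post @ msgs                      # P(bit = 1 | y), shape (128, k)
    loss = nn.functional.binary_cross_entropy(
        bit_prob.clamp(1e-6, 1 - 1e-6), msgs[idx])
    opt.zero_grad(); loss.backward(); opt.step()   # gradients reach the encoder
```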
arXiv Detail & Related papers (2022-12-20T15:40:08Z)
- Graph Neural Networks for Channel Decoding [71.15576353630667]
We showcase competitive decoding performance for various coding schemes, such as low-density parity-check (LDPC) and BCH codes.
The idea is to let a neural network (NN) learn a generalized message passing algorithm over a given graph.
We benchmark our proposed decoder against state-of-the-art in conventional channel decoding as well as against recent deep learning-based results.
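A hedged sketch of that idea: a small learned update aggregates messages over the Tanner graph defined by a parity-check matrix H (the graph handling, feature sizes, and update rule here are assumptions, not the paper's model):

```python
import torch
import torch.nn as nn

class GNNDecoder(nn.Module):
    """Minimal sketch: a learned message-passing update on the Tanner graph of
    parity-check matrix H (dense masking for clarity, not speed)."""
    def __init__(self, H, iters=5, d=16):
        super().__init__()
        self.register_buffer("H", H.float())    # (m, n) check/variable adjacency
        self.iters = iters
        self.check_to_var = nn.Sequential(nn.Linear(1, d), nn.ReLU(), nn.Linear(d, 1))

    def forward(self, llr):                     # llr: (batch, n) channel LLRs
        beliefs = llr
        for _ in range(self.iters):
            # check-node aggregation over neighboring variables, then a learned transform
            c = torch.tanh(beliefs / 2) @ self.H.t()            # (batch, m)
            c = self.check_to_var(c.unsqueeze(-1)).squeeze(-1)
            # variable-node update: channel LLR plus aggregated check messages
            beliefs = llr + c @ self.H                          # (batch, n)
        return beliefs                          # soft outputs; sign gives bits
```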
arXiv Detail & Related papers (2022-07-29T15:29:18Z)
- Adversarial Neural Networks for Error Correcting Codes [76.70040964453638]
We introduce a general framework to boost the performance and applicability of machine learning (ML) models.
We propose to combine ML decoders with a competing discriminator network that tries to distinguish between codewords and noisy words.
Our framework is game-theoretic, motivated by generative adversarial networks (GANs).
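A minimal sketch of such a GAN-style setup, with illustrative architectures and loss weights (not the paper's): the discriminator learns to tell clean codewords from the decoder's reconstructions, and the decoder is trained both to reconstruct and to fool it.

```python
import torch
import torch.nn as nn

n = 32
decoder = nn.Sequential(nn.Linear(n, 64), nn.ReLU(), nn.Linear(64, n))
disc = nn.Sequential(nn.Linear(n, 64), nn.ReLU(), nn.Linear(64, 1))
opt_dec = torch.optim.Adam(decoder.parameters(), lr=1e-3)
opt_disc = torch.optim.Adam(disc.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def train_step(codewords, noisy):
    # 1) discriminator: real codewords vs. (detached) decoder reconstructions
    recon = decoder(noisy).detach()
    d_loss = bce(disc(codewords), torch.ones(len(codewords), 1)) + \
             bce(disc(recon), torch.zeros(len(recon), 1))
    opt_disc.zero_grad(); d_loss.backward(); opt_disc.step()
    # 2) decoder: reconstruct the codeword *and* fool the discriminator
    recon = decoder(noisy)
    g_loss = nn.functional.mse_loss(recon, codewords) + \
             0.1 * bce(disc(recon), torch.ones(len(recon), 1))
    opt_dec.zero_grad(); g_loss.backward(); opt_dec.step()

# usage: train_step(c, c + 0.5 * torch.randn_like(c)) for a batch of codewords c
```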
arXiv Detail & Related papers (2021-12-21T19:14:44Z)
- ProductAE: Towards Training Larger Channel Codes based on Neural Product Codes [9.118706387430885]
It is prohibitively complex to design and train relatively large neural channel codes via deep learning techniques.
In this paper, we construct ProductAEs, a computationally efficient family of deep-learning driven (encoder, decoder) pairs.
We show significant gains, over all ranges of signal-to-noise ratio (SNR), for a code of parameters $(100,225)$ and a moderate-length code of parameters $(196,441)$.
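The product construction itself is simple to sketch: arrange the k1*k2 message bits in a grid, encode rows with one short neural code and columns with another, so only small constituent encoders are ever trained. Here (k1, n1) = (k2, n2) = (10, 15) matches the quoted (100, 225) parameters, though the encoder architectures are assumptions:

```python
import torch
import torch.nn as nn

k1, n1, k2, n2 = 10, 15, 10, 15               # (k1*k2, n1*n2) = (100, 225)
row_enc = nn.Sequential(nn.Linear(k1, 64), nn.ELU(), nn.Linear(64, n1))
col_enc = nn.Sequential(nn.Linear(k2, 64), nn.ELU(), nn.Linear(64, n2))

def product_encode(bits):                     # bits: (batch, k1 * k2)
    grid = bits.view(-1, k2, k1)              # k2 rows of k1 bits each
    rows = row_enc(grid)                      # encode rows: (batch, k2, n1)
    cols = col_enc(rows.transpose(1, 2))      # encode columns: (batch, n1, n2)
    return cols.reshape(-1, n1 * n2)          # (batch, 225) real symbols
```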
arXiv Detail & Related papers (2021-10-09T06:00:40Z)
- COSEA: Convolutional Code Search with Layer-wise Attention [90.35777733464354]
We propose a new deep learning architecture, COSEA, which leverages convolutional neural networks with layer-wise attention to capture the code's intrinsic structural logic.
COSEA can achieve significant improvements over state-of-the-art methods on code search tasks.
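A speculative sketch of layer-wise attention over a convolutional encoder (all shapes and names are assumptions about COSEA's design): pooled outputs of successive convolutional layers are combined with learned attention weights, letting the model emphasize whichever depth best captures a snippet's structure.

```python
import torch
import torch.nn as nn

class LayerwiseAttentionCNN(nn.Module):
    def __init__(self, vocab, d=128, layers=3):
        super().__init__()
        self.embed = nn.Embedding(vocab, d)
        self.convs = nn.ModuleList(
            [nn.Conv1d(d, d, kernel_size=3, padding=1) for _ in range(layers)])
        self.attn = nn.Linear(d, 1)            # scores one pooled vector per layer

    def forward(self, tokens):                 # tokens: (batch, seq_len) token ids
        x = self.embed(tokens).transpose(1, 2) # (batch, d, seq_len)
        pooled = []
        for conv in self.convs:
            x = torch.relu(conv(x))
            pooled.append(x.mean(dim=2))       # (batch, d) summary of each layer
        stack = torch.stack(pooled, dim=1)     # (batch, layers, d)
        w = self.attn(stack).softmax(dim=1)    # attention weights over layers
        return (w * stack).sum(dim=1)          # (batch, d) code embedding
```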
arXiv Detail & Related papers (2020-10-19T13:53:38Z)