Robust Non-Linear Feedback Coding via Power-Constrained Deep Learning
- URL: http://arxiv.org/abs/2304.13178v2
- Date: Wed, 7 Jun 2023 23:00:22 GMT
- Title: Robust Non-Linear Feedback Coding via Power-Constrained Deep Learning
- Authors: Junghoon Kim, Taejoon Kim, David Love, Christopher Brinton
- Abstract summary: We develop a new family of non-linear feedback codes that greatly enhance robustness to channel noise.
Our autoencoder-based architecture is designed to learn codes based on consecutive blocks of bits.
We show that our scheme outperforms state-of-the-art feedback codes by wide margins over practical forward and feedback noise regimes.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The design of codes for feedback-enabled communications has been a
long-standing open problem. Recent research on non-linear, deep learning-based
coding schemes has demonstrated significant improvements in communication
reliability over linear codes, but these schemes remain vulnerable to
forward and feedback noise over the channel. In this paper, we develop a new
family of non-linear feedback codes that greatly enhance robustness to channel
noise. Our autoencoder-based architecture is designed to learn codes based on
consecutive blocks of bits, which yields de-noising advantages over bit-by-bit
processing to help overcome the physical separation between the encoder and
decoder over a noisy channel. Moreover, we develop a power control layer at the
encoder to explicitly incorporate hardware constraints into the learning
optimization, and prove that the resulting average power constraint is
satisfied asymptotically. Numerical experiments demonstrate that our scheme
outperforms state-of-the-art feedback codes by wide margins over practical
forward and feedback noise regimes, and provide information-theoretic insights
on the behavior of our non-linear codes. Moreover, we observe that, in a long
blocklength regime, canonical error correction codes are still preferable to
feedback codes when the feedback noise becomes high.
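The encoder-side power control described in the abstract can be sketched concretely. The snippet below is a minimal illustration (hypothetical function name and shapes, not the authors' implementation): a layer that rescales a block of encoded symbols so the block's empirical average power meets a budget P. Note the paper's learned layer enforces the constraint only asymptotically, whereas this simplified per-block normalization satisfies it exactly.

```python
import numpy as np

def power_control(x, P=1.0, eps=1e-12):
    """Rescale a block of encoded symbols so its average power equals P.

    x : 1-D array of real channel symbols produced by the encoder.
    P : average power budget per symbol.
    Illustrative normalization only; not the paper's exact layer.
    """
    avg_power = np.mean(x ** 2)
    return np.sqrt(P / (avg_power + eps)) * x

rng = np.random.default_rng(0)
symbols = rng.normal(scale=3.0, size=64)   # unconstrained encoder output
tx = power_control(symbols, P=1.0)
print(np.mean(tx ** 2))                    # ~1.0: budget met for this block
```

In a learning pipeline, a normalization of this kind would sit as the final encoder layer, so that training optimizes the code under the hardware power constraint rather than around it.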
Related papers
- Factor Graph Optimization of Error-Correcting Codes for Belief Propagation Decoding
Low-Density Parity-Check (LDPC) codes possess several advantages over other families of codes.
The proposed approach is shown to outperform the decoding performance of existing popular codes by orders of magnitude.
arXiv Detail & Related papers (2024-06-09T12:08:56Z)
- Learning Linear Block Error Correction Codes
We propose for the first time a unified encoder-decoder training of binary linear block codes.
We also propose a novel Transformer model in which the self-attention masking is performed in a differentiable fashion for the efficient backpropagation of the code gradient.
arXiv Detail & Related papers (2024-05-07T06:47:12Z)
- LightCode: Light Analytical and Neural Codes for Channels with Feedback
We focus on designing low-complexity coding schemes that are interpretable and more suitable for communication systems.
First, we demonstrate that PowerBlast, an analytical coding scheme inspired by the Schalkwijk-Kailath (SK) and Gallager-Nakiboğlu (GN) schemes, achieves notable reliability improvements over both SK and GN schemes.
Next, to enhance reliability in low-SNR regions, we propose LightCode, a lightweight neural code that achieves state-of-the-art reliability while using a fraction of the memory and compute required by existing deep learning-based codes.
arXiv Detail & Related papers (2024-03-16T01:04:34Z)
- Neural Belief Propagation Decoding of Quantum LDPC Codes Using Overcomplete Check Matrices
We propose to decode QLDPC codes based on a check matrix with redundant rows, generated from linear combinations of the rows in the original check matrix.
This approach yields a significant improvement in decoding performance with the additional advantage of very low decoding latency.
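The redundant-row construction lends itself to a short sketch. The toy example below (my illustration over GF(2) using a Hamming code, not the authors' code) appends pairwise XOR combinations of the parity-check rows; every redundant row is still a valid check, since it annihilates any codeword that the original rows annihilate.

```python
import numpy as np
from itertools import combinations

# Toy parity-check matrix of the (7,4) Hamming code (rows over GF(2)).
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]], dtype=np.uint8)

def overcomplete(H):
    """Append all pairwise XOR (GF(2) sum) combinations of rows of H."""
    extra = [H[i] ^ H[j] for i, j in combinations(range(H.shape[0]), 2)]
    return np.vstack([H] + extra)

H_oc = overcomplete(H)
print(H_oc.shape)  # (6, 7): 3 original checks plus 3 redundant ones

# Every row of the overcomplete matrix still checks a valid codeword.
c = np.array([1, 0, 0, 0, 1, 1, 0], dtype=np.uint8)  # a Hamming codeword
assert not np.any((H_oc @ c) % 2)
```

The extra rows change nothing algebraically, but they give a belief propagation decoder more message-passing paths, which is the source of the performance gain reported above.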
arXiv Detail & Related papers (2022-12-20T13:41:27Z)
- Feedback is Good, Active Feedback is Better: Block Attention Active Feedback Codes
We show that GBAF codes can also be used for channels with active feedback.
We implement a pair of transformer architectures, at the transmitter and the receiver, which interact with each other sequentially.
We achieve a new state-of-the-art BLER performance, especially in the low SNR regime.
arXiv Detail & Related papers (2022-11-03T11:44:06Z)
- Denoising Diffusion Error Correction Codes
Recently, neural decoders have demonstrated their advantage over classical decoding techniques.
Recent state-of-the-art neural decoders suffer from high complexity and lack the important iterative scheme characteristic of many legacy decoders.
We propose to employ denoising diffusion models for the soft decoding of linear codes at arbitrary block lengths.
arXiv Detail & Related papers (2022-09-16T11:00:50Z)
- Error Correction Code Transformer
We propose to extend for the first time the Transformer architecture to the soft decoding of linear codes at arbitrary block lengths.
We embed each channel output dimension into a high-dimensional space to obtain a richer representation of the bit information, allowing the bits to be processed separately.
The proposed approach demonstrates the extreme power and flexibility of Transformers and outperforms existing state-of-the-art neural decoders by large margins at a fraction of their time complexity.
arXiv Detail & Related papers (2022-03-27T15:25:58Z)
- DRF Codes: Deep SNR-Robust Feedback Codes
We present a new deep-neural-network (DNN) based error correction code for fading channels with output feedback, called deep SNR-robust feedback (DRF) code.
We show that DRF codes significantly outperform the state of the art in terms of both SNR-robustness and error rate over the additive white Gaussian noise (AWGN) channel with feedback.
arXiv Detail & Related papers (2021-12-22T10:47:25Z)
- Adversarial Neural Networks for Error Correcting Codes
We introduce a general framework to boost the performance and applicability of machine learning (ML) models.
We propose to combine ML decoders with a competing discriminator network that tries to distinguish between codewords and noisy words.
Our framework is game-theoretic and motivated by generative adversarial networks (GANs).
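The adversarial idea can be sketched with a toy discriminator (entirely illustrative: a hand-crafted feature and logistic regression stand in for the paper's neural discriminator). It learns to separate exact repetition codewords from noise-corrupted words; in the full framework, such a signal would push an ML decoder's outputs toward valid codewords.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: "codewords" are BPSK repetition words; "noisy words" add channel noise.
bits = rng.integers(0, 2, size=(256, 1))
clean = np.repeat(2.0 * bits - 1.0, 8, axis=1)           # exact codewords
noisy = clean + rng.normal(scale=1.0, size=clean.shape)  # channel-corrupted

# One hand-crafted feature keeps the discriminator linear: within-word variance
# is exactly 0 for repetition codewords and positive for noisy words.
def feat(X):
    return X.var(axis=1)

x = np.concatenate([feat(clean), feat(noisy)])
y = np.concatenate([np.ones(256), np.zeros(256)])  # 1 = clean codeword

# Logistic-regression "discriminator" trained by plain gradient descent.
w, b = 0.0, 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(x * w + b)))  # sigmoid
    w -= 0.5 * np.mean((p - y) * x)
    b -= 0.5 * np.mean(p - y)

p = 1.0 / (1.0 + np.exp(-(x * w + b)))
acc = float(np.mean((p > 0.5) == (y == 1)))
print(acc)  # well above chance: the two classes are separable on this feature
```

In the GAN-style setup described above, the decoder and the discriminator would be trained against each other, with the discriminator's gradient serving as an extra training signal for the decoder.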
arXiv Detail & Related papers (2021-12-21T19:14:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.