All you need is feedback: Communication with block attention feedback codes
- URL: http://arxiv.org/abs/2206.09457v1
- Date: Sun, 19 Jun 2022 17:55:04 GMT
- Title: All you need is feedback: Communication with block attention feedback codes
- Authors: Emre Ozfatura, Yulin Shao, Alberto Perotti, Branislav Popovic, Deniz Gunduz
- Abstract summary: Communication over a feedback channel is a problem for which conventional codes do not provide effective solutions.
We introduce a novel learning-aided code design for feedback channels, called generalized block attention feedback (GBAF) codes.
- Score: 12.459538600658034
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Deep learning based channel code designs have recently gained interest as an
alternative to conventional coding algorithms, particularly for channels for
which existing codes do not provide effective solutions. Communication over a
feedback channel is one such problem, for which promising results have recently
been obtained by employing various deep learning architectures. In this paper,
we introduce a novel learning-aided code design for feedback channels, called
generalized block attention feedback (GBAF) codes, which i) employs a modular
architecture that can be implemented using different neural network
architectures; ii) provides order-of-magnitude improvements in the probability
of error compared to existing designs; and iii) can transmit at desired code
rates.
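
Since the abstract above does not spell out the GBAF architecture, the following is only a minimal sketch of the block-structured, attention-based feedback idea it describes, assuming a transformer-style parity module and passive (noisy-echo) feedback over AWGN channels; all class names, default dimensions, and the noise model are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class BlockFeedbackEncoder(nn.Module):
    """Sketch of a block-structured, attention-based feedback encoder.

    The message is split into `num_blocks` blocks of `block_len` bits.
    In every interaction round the encoder sees, for each block, its
    message bits, the symbols it has already sent, and the noisy
    feedback received so far; the blocks attend to one another through
    a small transformer and one new parity symbol is emitted per block.
    """

    def __init__(self, block_len=4, num_rounds=9, d_model=32, nhead=4):
        super().__init__()
        self.num_rounds = num_rounds
        # per-round input: message bits + past transmissions + past feedback
        self.embed = nn.Linear(block_len + 2 * num_rounds, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead,
                                           dim_feedforward=64,
                                           batch_first=True)
        self.attn = nn.TransformerEncoder(layer, num_layers=2)
        self.out = nn.Linear(d_model, 1)  # one parity symbol per block and round

    def forward(self, bits, fwd_noise, fb_noise):
        # bits: (batch, num_blocks, block_len) in {0, 1}, as floats
        # fwd_noise, fb_noise: (batch, num_blocks, num_rounds) AWGN samples
        batch, nblk, _ = bits.shape
        sent = bits.new_zeros(batch, nblk, self.num_rounds)
        fedback = bits.new_zeros(batch, nblk, self.num_rounds)
        received = []
        for t in range(self.num_rounds):
            x = torch.cat([2 * bits - 1, sent, fedback], dim=-1)
            h = self.attn(self.embed(x))           # blocks attend to each other
            parity = torch.tanh(self.out(h)).squeeze(-1)
            y = parity + fwd_noise[:, :, t]        # forward channel output
            z = y + fb_noise[:, :, t]              # noisy (passive) feedback
            sent, fedback = sent.clone(), fedback.clone()
            sent[:, :, t] = parity
            fedback[:, :, t] = z
            received.append(y)
        # what the decoder would observe: (batch, num_blocks, num_rounds)
        return torch.stack(received, dim=-1)


# toy usage: 8 codewords, 16 blocks of 4 bits, 9 interaction rounds
enc = BlockFeedbackEncoder()
bits = torch.randint(0, 2, (8, 16, 4)).float()
rx = enc(bits, 0.5 * torch.randn(8, 16, 9), 0.1 * torch.randn(8, 16, 9))
```

A decoder (not shown) would map `rx` back to bit estimates, and the encoder and decoder would typically be trained end-to-end against a bit- or block-error surrogate loss; in this sketch the per-block rate is simply block_len / num_rounds.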
Related papers
- Factor Graph Optimization of Error-Correcting Codes for Belief Propagation Decoding [62.25533750469467]
Low-Density Parity-Check (LDPC) codes possess several advantages over other families of codes.
The proposed approach is shown to improve decoding performance over existing popular codes by orders of magnitude.
arXiv Detail & Related papers (2024-06-09T12:08:56Z)
- Learning Linear Block Error Correction Codes [62.25533750469467]
We propose for the first time a unified encoder-decoder training of binary linear block codes.
We also propose a novel Transformer model in which the self-attention masking is performed in a differentiable fashion for efficient backpropagation of the code gradient (an illustrative sketch of soft masking appears after this list).
arXiv Detail & Related papers (2024-05-07T06:47:12Z)
- Coding for Gaussian Two-Way Channels: Linear and Learning-Based Approaches [28.98777190628006]
We propose two different two-way coding strategies: linear coding and learning-based coding.
For learning-based coding, we introduce a novel recurrent neural network (RNN)-based coding architecture.
Our two-way coding methodologies outperform conventional channel coding schemes significantly in sum-error performance.
arXiv Detail & Related papers (2023-12-31T12:40:18Z)
- Robust Non-Linear Feedback Coding via Power-Constrained Deep Learning [7.941112438865385]
We develop a new family of non-linear feedback codes that greatly enhance robustness to channel noise.
Our autoencoder-based architecture is designed to learn codes based on consecutive blocks of bits.
We show that our scheme outperforms state-of-the-art feedback codes by wide margins over practical forward and feedback noise regimes.
arXiv Detail & Related papers (2023-04-25T22:21:26Z)
- Feedback is Good, Active Feedback is Better: Block Attention Active Feedback Codes [13.766611137136168]
We show that GBAF codes can also be used for channels with active feedback.
We implement a pair of transformer architectures, at the transmitter and the receiver, which interact with each other sequentially.
We achieve a new state-of-the-art block error rate (BLER) performance, especially in the low SNR regime.
arXiv Detail & Related papers (2022-11-03T11:44:06Z)
- Denoising Diffusion Error Correction Codes [92.10654749898927]
Recently, neural decoders have demonstrated their advantage over classical decoding techniques.
However, state-of-the-art neural decoders suffer from high complexity and lack the important iterative scheme characteristic of many legacy decoders.
We propose to employ denoising diffusion models for the soft decoding of linear codes at arbitrary block lengths.
arXiv Detail & Related papers (2022-09-16T11:00:50Z)
- Graph Neural Networks for Channel Decoding [71.15576353630667]
We showcase competitive decoding performance for various coding schemes, such as low-density parity-check (LDPC) and BCH codes.
The idea is to let a neural network (NN) learn a generalized message passing algorithm over a given graph (a toy version is sketched after this list).
We benchmark our proposed decoder against state-of-the-art in conventional channel decoding as well as against recent deep learning-based results.
arXiv Detail & Related papers (2022-07-29T15:29:18Z)
- Error Correction Code Transformer [92.10654749898927]
We propose to extend for the first time the Transformer architecture to the soft decoding of linear codes at arbitrary block lengths.
We embed each channel output dimension into a high-dimensional representation so that the bit information can be processed separately.
The proposed approach demonstrates the extreme power and flexibility of Transformers and outperforms existing state-of-the-art neural decoders by large margins at a fraction of their time complexity.
arXiv Detail & Related papers (2022-03-27T15:25:58Z)
- KO codes: Inventing Nonlinear Encoding and Decoding for Reliable Wireless Communication via Deep-learning [76.5589486928387]
Landmark codes underpin reliable physical layer communication, e.g., Reed-Muller, BCH, convolutional, Turbo, LDPC and Polar codes.
In this paper, we construct KO codes, a computationally efficient family of deep-learning driven (encoder, decoder) pairs.
KO codes beat state-of-the-art Reed-Muller and Polar codes, under the low-complexity successive cancellation decoding.
arXiv Detail & Related papers (2021-08-29T21:08:30Z)
- Deep Extended Feedback Codes [9.112162560071937]
The encoder in the DEF architecture transmits an information message followed by a sequence of parity symbols.
DEF codes generalize Deepcode in several ways to provide better error correction capability.
Performance evaluations show that DEF codes have better performance compared to other DNN-based codes for channels with feedback.
arXiv Detail & Related papers (2021-05-04T08:41:14Z)
- Capacity-Approaching Autoencoders for Communications [4.86067125387358]
The current approach to train an autoencoder relies on the use of the cross-entropy loss function.
We propose a methodology that computes an estimate of the channel capacity and constructs an optimal coded signal approaching it (a sketch of one such capacity estimator appears after this list).
arXiv Detail & Related papers (2020-09-11T08:19:06Z)
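
For the "Learning Linear Block Error Correction Codes" entry above, the phrase "self-attention masking performed in a differentiable fashion" can be illustrated with a soft, learnable gate added to the attention logits. This is only a generic sketch of that idea; the class and parameter names are assumptions, and the construction is not taken from that paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SoftMaskedSelfAttention(nn.Module):
    """Self-attention with a learnable, differentiable mask.

    Instead of a hard 0/1 mask, a log-sigmoid gate is added to the
    attention logits, so gradients flow through the masking pattern
    and the mask can be learned jointly with the rest of the model.
    """

    def __init__(self, seq_len, d_model, nhead=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, nhead, batch_first=True)
        # one learnable logit per (query, key) pair
        self.mask_logits = nn.Parameter(torch.zeros(seq_len, seq_len))

    def forward(self, x):
        # a float attn_mask is added to the attention scores; strongly
        # negative gate values softly suppress the corresponding edge
        soft_mask = F.logsigmoid(self.mask_logits)
        out, _ = self.attn(x, x, x, attn_mask=soft_mask)
        return out


# toy usage: a sequence of 15 code symbols embedded in 32 dimensions
layer = SoftMaskedSelfAttention(seq_len=15, d_model=32)
y = layer(torch.randn(2, 15, 32))
```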
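
The "Graph Neural Networks for Channel Decoding" entry refers to learning a generalized message-passing algorithm over a graph. The toy sketch below passes node-level (rather than edge-level) messages over the Tanner graph defined by a parity-check matrix, with small MLPs in place of the fixed belief-propagation update rules; it only illustrates the idea and is not that paper's architecture.

```python
import torch
import torch.nn as nn


class TannerGNNDecoder(nn.Module):
    """Toy learned message passing on the Tanner graph of a code.

    `H` is a binary parity-check matrix of shape (n_checks, n_vars).
    Node states are updated by small MLPs instead of the fixed update
    rules of classical belief propagation.
    """

    def __init__(self, H, hidden=16, iters=5):
        super().__init__()
        self.register_buffer("H", H.float())
        self.iters = iters
        self.chk_update = nn.Sequential(nn.Linear(2, hidden), nn.ReLU(),
                                        nn.Linear(hidden, 1))
        self.var_update = nn.Sequential(nn.Linear(2, hidden), nn.ReLU(),
                                        nn.Linear(hidden, 1))

    def forward(self, llr):
        # llr: (batch, n_vars) channel log-likelihood ratios
        v = llr
        c = llr.new_zeros(llr.size(0), self.H.size(0))
        for _ in range(self.iters):
            # variable -> check: aggregate states of neighbouring variables
            agg_v = v @ self.H.t()
            c = self.chk_update(torch.stack([c, agg_v], dim=-1)).squeeze(-1)
            # check -> variable: aggregate states of neighbouring checks
            agg_c = c @ self.H
            v = llr + self.var_update(torch.stack([v, agg_c], dim=-1)).squeeze(-1)
        return v  # refined soft bit estimates (the sign gives the hard decision)


# toy usage on a (7, 4) Hamming code; weights are untrained, so this only
# exercises the shapes
H = torch.tensor([[1, 1, 0, 1, 1, 0, 0],
                  [1, 0, 1, 1, 0, 1, 0],
                  [0, 1, 1, 1, 0, 0, 1]])
decoder = TannerGNNDecoder(H)
soft_bits = decoder(torch.randn(2, 7))
```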
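
Finally, the "Capacity-Approaching Autoencoders for Communications" entry replaces the usual cross-entropy objective with an estimate of the channel capacity. A MINE-style (Donsker-Varadhan) critic is one common way to estimate the mutual information between channel input and output and is sketched below as an assumption; that paper may use a different estimator, and all names here are illustrative.

```python
import math

import torch
import torch.nn as nn


class MINECritic(nn.Module):
    """Donsker-Varadhan lower bound on I(X; Y) between channel input X
    and channel output Y, usable as a training signal for an encoder
    that tries to approach the channel capacity."""

    def __init__(self, dim_x, dim_y, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim_x + dim_y, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, x, y):
        # critic score on paired (joint) samples
        joint = self.net(torch.cat([x, y], dim=-1)).squeeze(-1).mean()
        # critic score on shuffled samples approximates the product of marginals
        y_shuffled = y[torch.randperm(y.size(0))]
        scores = self.net(torch.cat([x, y_shuffled], dim=-1)).squeeze(-1)
        marginal = torch.logsumexp(scores, dim=0) - math.log(y.size(0))
        return joint - marginal  # lower bound on I(X; Y) in nats


# toy usage: pretend `x` are encoder outputs sent over an AWGN channel
x = torch.randn(256, 2)
y = x + 0.5 * torch.randn_like(x)
critic = MINECritic(dim_x=2, dim_y=2)
mi_estimate = critic(x, y)  # maximise this w.r.t. both encoder and critic
```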
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences arising from its use.