Feedback is Good, Active Feedback is Better: Block Attention Active
Feedback Codes
- URL: http://arxiv.org/abs/2211.01730v1
- Date: Thu, 3 Nov 2022 11:44:06 GMT
- Title: Feedback is Good, Active Feedback is Better: Block Attention Active
Feedback Codes
- Authors: Emre Ozfatura and Yulin Shao and Amin Ghazanfari and Alberto Perotti
and Branislav Popovic and Deniz Gunduz
- Abstract summary: We show that GBAF codes can also be used for channels with active feedback.
We implement a pair of transformer architectures, at the transmitter and the receiver, which interact with each other sequentially.
We achieve new state-of-the-art BLER performance, especially in the low SNR regime.
- Score: 13.766611137136168
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural network (DNN)-assisted channel coding designs, such as
low-complexity neural decoders for existing codes or end-to-end
neural-network-based autoencoder designs, have recently been gaining interest due to
their improved performance and flexibility, particularly for communication
scenarios in which high-performing structured code designs do not exist.
Communication in the presence of feedback is one such communication scenario,
and practical code design for feedback channels has remained an open challenge
in coding theory for many decades. Recently, DNN-based designs have shown
impressive results in exploiting feedback. In particular, generalized block
attention feedback (GBAF) codes, which utilize the popular transformer
architecture, have achieved significant improvements in block error rate
(BLER) performance. However, previous works have focused mainly on passive
feedback, where the transmitter observes a noisy version of the signal at the
receiver. In this work, we show that GBAF codes can also be used for channels
with active feedback. We implement a pair of transformer architectures, at the
transmitter and the receiver, which interact with each other sequentially, and
achieve new state-of-the-art BLER performance, especially in the low SNR
regime.
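
The sequential interaction between the two transformers can be pictured with a minimal, self-contained sketch. This is not the authors' implementation: the module sizes, block/round dimensions, SNR values, and power normalization below are illustrative assumptions, and the final decoding stage is omitted. It only shows the active-feedback loop in which the transmitter encodes the message bits together with past feedback, and the receiver actively encodes its observations into the next feedback symbol.

```python
# Hypothetical sketch of one active-feedback interaction loop (assumed names/values).
import torch
import torch.nn as nn

class SeqModel(nn.Module):
    """Small transformer mapping a sequence of per-block features to one symbol per block."""
    def __init__(self, d_in, d_model=32, n_layers=2, n_heads=2):
        super().__init__()
        self.embed = nn.Linear(d_in, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, dim_feedforward=64, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.out = nn.Linear(d_model, 1)

    def forward(self, x):                      # x: (batch, blocks, d_in)
        h = self.encoder(self.embed(x))        # attention runs across the blocks
        return self.out(h).squeeze(-1)         # (batch, blocks): one real symbol per block

def awgn(x, snr_db):
    """Add white Gaussian noise at the given SNR (unit signal power assumed)."""
    sigma = 10 ** (-snr_db / 20)
    return x + sigma * torch.randn_like(x)

# Toy dimensions (assumptions): bits split into blocks of size m, T interaction rounds.
batch, blocks, m, T = 8, 4, 3, 5
fwd_snr_db, fb_snr_db = 1.0, 20.0

bits = torch.randint(0, 2, (batch, blocks, m)).float()
tx = SeqModel(d_in=m + 2 * T)                  # sees bits, its own past symbols, and past feedback
rx = SeqModel(d_in=T)                          # sees the noisy forward symbols received so far

tx_hist = torch.zeros(batch, blocks, T)        # symbols the transmitter has sent
fb_hist = torch.zeros(batch, blocks, T)        # (noisy) feedback the transmitter has received
rx_obs = torch.zeros(batch, blocks, T)         # noisy symbols the receiver has observed

for t in range(T):
    # Transmitter: next parity symbol from bits, past transmissions, and past active feedback.
    c = tx(torch.cat([bits, tx_hist, fb_hist], dim=-1))
    c = c / (c.pow(2).mean().sqrt() + 1e-8)    # crude average-power normalization (assumption)
    tx_hist[..., t] = c
    rx_obs[..., t] = awgn(c, fwd_snr_db)       # noisy forward channel

    # Receiver: actively encodes what it has observed into the next feedback symbol.
    f = rx(rx_obs)
    f = f / (f.pow(2).mean().sqrt() + 1e-8)
    fb_hist[..., t] = awgn(f, fb_snr_db)       # noisy feedback channel

# A final receiver stage (omitted here) would map rx_obs to bit estimates and the whole
# loop would be trained end-to-end, e.g. with a cross-entropy loss on the decoded blocks.
```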
Related papers
- Learning Linear Block Error Correction Codes [62.25533750469467]
We propose for the first time a unified encoder-decoder training of binary linear block codes.
We also propose a novel Transformer model in which the self-attention masking is performed in a differentiable fashion for the efficient backpropagation of the code gradient.
arXiv Detail & Related papers (2024-05-07T06:47:12Z)
- Joint Channel Estimation and Feedback with Masked Token Transformers in Massive MIMO Systems [74.52117784544758]
This paper proposes an encoder-decoder based network that unveils the intrinsic frequency-domain correlation within the CSI matrix.
The entire encoder-decoder network is utilized for channel compression.
Our method outperforms state-of-the-art channel estimation and feedback techniques in joint tasks.
arXiv Detail & Related papers (2023-06-08T06:15:17Z)
- Robust Non-Linear Feedback Coding via Power-Constrained Deep Learning [7.941112438865385]
We develop a new family of non-linear feedback codes that greatly enhance robustness to channel noise.
Our autoencoder-based architecture is designed to learn codes based on consecutive blocks of bits.
We show that our scheme outperforms state-of-the-art feedback codes by wide margins over practical forward and feedback noise regimes.
arXiv Detail & Related papers (2023-04-25T22:21:26Z)
- Spiking Neural Network Decision Feedback Equalization [70.3497683558609]
We propose an SNN-based equalizer with a feedback structure akin to the decision feedback equalizer (DFE).
We show that our approach clearly outperforms conventional linear equalizers for three different exemplary channels.
The proposed SNN with a decision feedback structure enables the path to competitive energy-efficient transceivers.
arXiv Detail & Related papers (2022-11-09T09:19:15Z)
- Denoising Diffusion Error Correction Codes [92.10654749898927]
Recently, neural decoders have demonstrated their advantage over classical decoding techniques.
Recent state-of-the-art neural decoders suffer from high complexity and lack the important iterative scheme characteristic of many legacy decoders.
We propose to employ denoising diffusion models for the soft decoding of linear codes at arbitrary block lengths.
arXiv Detail & Related papers (2022-09-16T11:00:50Z)
- All you need is feedback: Communication with block attention feedback codes [12.459538600658034]
Communication over a feedback channel is one such problem.
We introduce a novel learning-aided code design for feedback channels, called generalized block attention feedback (GBAF) codes.
arXiv Detail & Related papers (2022-06-19T17:55:04Z)
- Error Correction Code Transformer [92.10654749898927]
We propose to extend for the first time the Transformer architecture to the soft decoding of linear codes at arbitrary block lengths.
We encode each element of the channel output into a high-dimensional representation so that the bit information can be processed separately.
The proposed approach demonstrates the extreme power and flexibility of Transformers and outperforms existing state-of-the-art neural decoders by large margins at a fraction of their time complexity.
arXiv Detail & Related papers (2022-03-27T15:25:58Z)
- DRF Codes: Deep SNR-Robust Feedback Codes [2.6074034431152344]
We present a new deep-neural-network (DNN) based error correction code for fading channels with output feedback, called deep SNR-robust feedback (DRF) code.
We show that DRF codes significantly outperform the state of the art in terms of both SNR robustness and error rate in the additive white Gaussian noise (AWGN) channel with feedback.
arXiv Detail & Related papers (2021-12-22T10:47:25Z)
- Variational Autoencoders: A Harmonic Perspective [79.49579654743341]
We study Variational Autoencoders (VAEs) from the perspective of harmonic analysis.
We show that the encoder variance of a VAE controls the frequency content of the functions parameterised by the VAE encoder and decoder neural networks.
arXiv Detail & Related papers (2021-05-31T10:39:25Z)
- Deep Extended Feedback Codes [9.112162560071937]
The encoder in the DEF architecture transmits an information message followed by a sequence of parity symbols.
DEF codes generalize Deepcode in several ways to provide better error correction capability.
Performance evaluations show that DEF codes have better performance compared to other DNN-based codes for channels with feedback.
arXiv Detail & Related papers (2021-05-04T08:41:14Z)
- Capacity-Approaching Autoencoders for Communications [4.86067125387358]
The current approach to train an autoencoder relies on the use of the cross-entropy loss function.
We propose a methodology that computes an estimate of the channel capacity and constructs an optimal coded signal approaching it.
arXiv Detail & Related papers (2020-09-11T08:19:06Z)