Coding for Gaussian Two-Way Channels: Linear and Learning-Based Approaches
- URL: http://arxiv.org/abs/2401.00477v1
- Date: Sun, 31 Dec 2023 12:40:18 GMT
- Title: Coding for Gaussian Two-Way Channels: Linear and Learning-Based Approaches
- Authors: Junghoon Kim, Taejoon Kim, Anindya Bijoy Das, Seyyedali
Hosseinalipour, David J. Love, Christopher G. Brinton
- Abstract summary: We propose two different two-way coding strategies: linear coding and learning-based coding.
For learning-based coding, we introduce a novel recurrent neural network (RNN)-based coding architecture.
Our two-way coding methodologies significantly outperform conventional channel coding schemes in sum-error performance.
- Score: 28.98777190628006
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Although user cooperation cannot improve the capacity of Gaussian two-way
channels (GTWCs) with independent noises, it can improve communication
reliability. In this work, we aim to enhance and balance the communication
reliability in GTWCs by minimizing the sum of error probabilities via joint
design of encoders and decoders at the users. We first formulate general
encoding/decoding functions, where the user cooperation is captured by the
coupling of user encoding processes. The coupling effect renders the
encoder/decoder design non-trivial, requiring effective decoding to capture
this effect, as well as efficient power management at the encoders within power
constraints. To address these challenges, we propose two different two-way
coding strategies: linear coding and learning-based coding. For linear coding,
we propose optimal linear decoding and offer new insights into how encoding leverages user cooperation to balance reliability. We then propose an efficient
algorithm for joint encoder/decoder design. For learning-based coding, we
introduce a novel recurrent neural network (RNN)-based coding architecture,
where we propose interactive RNNs and a power control layer for encoding, and
we incorporate bi-directional RNNs with an attention mechanism for decoding.
Through simulations, we show that our two-way coding methodologies outperform
conventional channel coding schemes (that do not utilize user cooperation)
significantly in sum-error performance. We also demonstrate that our linear
coding excels at high signal-to-noise ratios (SNRs), while our RNN-based coding
performs best at low SNRs. We further investigate our two-way coding strategies
in terms of power distribution, two-way coding benefit, different coding rates,
and block-length gain.
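To make the linear-coding strategy concrete, the following is a minimal NumPy sketch (not the paper's optimized design): two users exchange unit-power message symbols over a GTWC with independent noises, each encoder linearly combines its own message with the most recently received sample (the coupling of the user encoding processes), and the decoder is the optimal linear (LMMSE) estimator, fit here from Monte Carlo covariances. The encoder weights `a`, `b`, the noise levels, and the block length are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 3                        # channel uses per message symbol (rate 1/T)
P = 1.0                      # average power budget per user (assumption)
sigma1, sigma2 = 0.5, 1.0    # independent noise levels (assumption)

def simulate(n):
    """Run n independent two-way sessions; return messages and receptions."""
    m1, m2 = rng.standard_normal(n), rng.standard_normal(n)
    y1, y2 = np.zeros((n, T)), np.zeros((n, T))   # y_k: what user k hears
    # Hand-picked (unoptimized) linear encoder weights: each transmission
    # mixes the fresh message with the last received sample, coupling the
    # two users' encoding processes through the channel.
    a, b = 0.8, 0.4
    x1, x2 = np.sqrt(P) * m1, np.sqrt(P) * m2
    for t in range(T):
        y1[:, t] = x2 + sigma1 * rng.standard_normal(n)
        y2[:, t] = x1 + sigma2 * rng.standard_normal(n)
        x1 = a * m1 + b * y1[:, t]
        x2 = a * m2 + b * y2[:, t]
        # Empirical normalization to respect E[x_k^2] <= P.
        x1 *= np.sqrt(P / np.mean(x1 ** 2))
        x2 *= np.sqrt(P / np.mean(x2 ** 2))
    return m1, m2, y1, y2

# Optimal linear (LMMSE) decoder at user 1 for user 2's message:
#   m2_hat = C_my @ inv(C_yy) @ y1, covariances estimated by Monte Carlo.
n_train, n_test = 200_000, 50_000
m1, m2, y1, _ = simulate(n_train)
w = np.linalg.solve(y1.T @ y1 / n_train, m2 @ y1 / n_train)

_, m2_t, y1_t, _ = simulate(n_test)
print(f"LMMSE decoding MSE at user 1: {np.mean((y1_t @ w - m2_t) ** 2):.4f}")
```

The paper jointly optimizes encoders and decoders; this sketch fixes the encoder and only fits the decoder, which is enough to show where the coupling enters and why power management at the encoders is needed.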
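For the learning-based strategy, here is a hedged PyTorch sketch of the three ingredients the abstract names: interactive RNN encoders (each step consumes the message bits together with the last received symbol), a power-control layer, and a bi-directional RNN decoder with attention. GRU cells, layer sizes, softmax attention, and the noise level are my assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class TwoWayEncoder(nn.Module):
    """One user's encoder: a GRU cell driven at each channel use by the
    message bits plus the previously received symbol (interaction)."""
    def __init__(self, msg_len, hidden=64):
        super().__init__()
        self.cell = nn.GRUCell(msg_len + 1, hidden)  # +1: last received symbol
        self.to_symbol = nn.Linear(hidden, 1)

    def step(self, msg_bits, y_prev, h):
        h = self.cell(torch.cat([msg_bits, y_prev], dim=-1), h)
        return self.to_symbol(h), h

def power_control(x, P=1.0, eps=1e-8):
    """Scale a batch of symbols so their empirical average power is P."""
    return x * (P ** 0.5) / (x.pow(2).mean().sqrt() + eps)

class TwoWayDecoder(nn.Module):
    """Bi-directional GRU over the received sequence, attention pooling,
    then a linear readout of the partner's message bits."""
    def __init__(self, msg_len, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(1, hidden, bidirectional=True, batch_first=True)
        self.attn = nn.Linear(2 * hidden, 1)
        self.readout = nn.Linear(2 * hidden, msg_len)

    def forward(self, y_seq):                    # y_seq: (batch, T, 1)
        states, _ = self.rnn(y_seq)              # (batch, T, 2*hidden)
        alpha = torch.softmax(self.attn(states), dim=1)
        return self.readout((alpha * states).sum(dim=1))

# One interactive encoding pass over the two-way channel.
B, K, T = 32, 4, 6                 # batch, bits per user, channel uses
enc1, enc2, dec1 = TwoWayEncoder(K), TwoWayEncoder(K), TwoWayDecoder(K)
m1 = torch.randint(0, 2, (B, K)).float()
m2 = torch.randint(0, 2, (B, K)).float()
h1, h2 = torch.zeros(B, 64), torch.zeros(B, 64)
y1_prev, y2_prev = torch.zeros(B, 1), torch.zeros(B, 1)
received_at_1 = []
for t in range(T):
    x1, h1 = enc1.step(m1, y1_prev, h1)
    x2, h2 = enc2.step(m2, y2_prev, h2)
    x1, x2 = power_control(x1), power_control(x2)
    y1_prev = x2 + 0.5 * torch.randn(B, 1)   # independent Gaussian noises
    y2_prev = x1 + 0.5 * torch.randn(B, 1)
    received_at_1.append(y1_prev)
logits = dec1(torch.stack(received_at_1, dim=1))   # user 1 decodes user 2
loss = nn.functional.binary_cross_entropy_with_logits(logits, m2)
```

End-to-end training would backpropagate this bit-wise loss through the channel noise, the power control, and both interacting encoders, mirroring the joint encoder/decoder design described in the abstract.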
Related papers
- On the Design and Performance of Machine Learning Based Error Correcting Decoders [3.8289109929360245]
We first consider the so-called single-label neural network (SLNN) and multi-label neural network (MLNN) decoders, which have been reported to achieve near-maximum-likelihood (ML) performance.
We then turn our attention to two transformer-based decoders: the error correction code transformer (ECCT) and the cross-attention message passing transformer (CrossMPT).
arXiv Detail & Related papers (2024-10-21T11:23:23Z)
- Accelerating Error Correction Code Transformers [56.75773430667148]
We introduce a novel acceleration method for transformer-based decoders.
We achieve a 90% compression ratio and reduce the energy consumption of arithmetic operations by a factor of at least 224 on modern hardware.
arXiv Detail & Related papers (2024-10-08T11:07:55Z)
- Learning Linear Block Error Correction Codes [62.25533750469467]
We propose, for the first time, unified encoder-decoder training of binary linear block codes.
We also propose a novel Transformer model in which the self-attention masking is performed in a differentiable fashion for the efficient backpropagation of the code gradient.
arXiv Detail & Related papers (2024-05-07T06:47:12Z)
- Efficient Encoder-Decoder Transformer Decoding for Decomposable Tasks [53.550782959908524]
We introduce a new configuration for encoder-decoder models that improves efficiency on structured output and decomposable tasks.
Our method, prompt-in-decoder (PiD), encodes the input once and decodes the output in parallel, boosting both training and inference efficiency.
arXiv Detail & Related papers (2024-03-19T19:27:23Z)
- A Scalable Graph Neural Network Decoder for Short Block Codes [49.25571364253986]
We propose a novel decoding algorithm for short block codes based on an edge-weighted graph neural network (EW-GNN).
The EW-GNN decoder operates on the Tanner graph with an iterative message-passing structure.
We show that the EW-GNN decoder outperforms belief propagation (BP) and deep-learning-based BP methods in terms of decoding error rate.
arXiv Detail & Related papers (2022-11-13T17:13:12Z)
- Graph Neural Networks for Channel Decoding [71.15576353630667]
The idea is to let a neural network (NN) learn a generalized message passing algorithm over a given graph.
We showcase competitive decoding performance for various coding schemes, such as low-density parity-check (LDPC) and BCH codes.
We benchmark our proposed decoder against state-of-the-art in conventional channel decoding as well as against recent deep learning-based results.
arXiv Detail & Related papers (2022-07-29T15:29:18Z)
- Adversarial Neural Networks for Error Correcting Codes [76.70040964453638]
We introduce a general framework to boost the performance and applicability of machine learning (ML) models.
We propose to combine ML decoders with a competing discriminator network that tries to distinguish between codewords and noisy words.
Our framework is game-theoretic, motivated by generative adversarial networks (GANs).
arXiv Detail & Related papers (2021-12-21T19:14:44Z)
- Infomax Neural Joint Source-Channel Coding via Adversarial Bit Flip [41.28049430114734]
We propose a novel regularization method called Infomax Adversarial-Bit-Flip (IABF) to improve the stability and robustness of the neural joint source-channel coding scheme.
Our IABF can achieve state-of-the-art performance on both compression and error correction benchmarks and outperform the baselines by a significant margin.
arXiv Detail & Related papers (2020-04-03T10:00:02Z)