Decoding 5G-NR Communications via Deep Learning
- URL: http://arxiv.org/abs/2007.07644v1
- Date: Wed, 15 Jul 2020 12:00:20 GMT
- Title: Decoding 5G-NR Communications via Deep Learning
- Authors: Pol Henarejos and Miguel Ángel Vázquez
- Abstract summary: We propose to use Autoencoding Neural Networks (ANN) jointly with a Deep Neural Network (DNN) to construct Autoencoding Deep Neural Networks (ADNN) for demapping and decoding.
Results show that, for a given BER target, $3$ dB less Signal-to-Noise Ratio (SNR) is required in Additive White Gaussian Noise (AWGN) channels.
- Score: 6.09170287691728
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Upcoming modern communications are based on 5G specifications and aim at
providing solutions for novel vertical industries. One of the major changes of
the physical layer is the use of Low-Density Parity-Check (LDPC) codes for
channel coding. Although LDPC codes introduce additional computational
complexity compared with the previous generation, where Turbo codes were used,
they provide a reasonable trade-off between complexity and Bit Error Rate
(BER). In parallel, Deep Learning algorithms are experiencing a new
revolution, especially in image and video processing. In this context, several
of these approaches can be exploited in radio communications. In this paper we
propose to use Autoencoding Neural Networks (ANN) jointly with a Deep Neural
Network (DNN) to construct Autoencoding Deep Neural Networks (ADNN) for
demapping and decoding. The results show that, for a given BER target, $3$ dB
less Signal-to-Noise Ratio (SNR) is required in Additive White Gaussian Noise
(AWGN) channels.
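To give the headline number some context, a $3$ dB gain at a fixed BER target means the same error rate is reached with half the transmit power. The sketch below is not from the paper: it assumes, purely for illustration, a conventional QPSK mapping with hard-decision demapping over an AWGN channel, and measures how BER moves across a 3 dB SNR gap.

```python
import numpy as np

rng = np.random.default_rng(0)

def qpsk_mod(bits):
    # Gray-mapped QPSK: pairs of bits -> unit-energy complex symbols
    b = bits.reshape(-1, 2)
    return ((1 - 2 * b[:, 0]) + 1j * (1 - 2 * b[:, 1])) / np.sqrt(2)

def awgn(x, snr_db):
    # Complex AWGN with per-symbol SNR given in dB
    n0 = 10 ** (-snr_db / 10)
    noise = np.sqrt(n0 / 2) * (rng.standard_normal(x.shape)
                               + 1j * rng.standard_normal(x.shape))
    return x + noise

def qpsk_demap(y):
    # Hard-decision demapping back to bits (sign of each quadrature rail)
    bits = np.empty((y.size, 2), dtype=int)
    bits[:, 0] = (y.real < 0).astype(int)
    bits[:, 1] = (y.imag < 0).astype(int)
    return bits.ravel()

bits = rng.integers(0, 2, 200_000)
for snr_db in (4.0, 7.0):  # a 3 dB gap, mirroring the paper's headline result
    ber = np.mean(qpsk_demap(awgn(qpsk_mod(bits), snr_db)) != bits)
    print(f"SNR {snr_db} dB -> BER {ber:.4f}")
```

The reported result is that the ADNN receiver reaches a target BER at an SNR 3 dB below what a conventional demapping and decoding chain like this one needs.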
Related papers
- Decoding Quantum LDPC Codes Using Graph Neural Networks [52.19575718707659]
We propose a novel decoding method for Quantum Low-Density Parity-Check (QLDPC) codes based on Graph Neural Networks (GNNs)
The proposed GNN-based QLDPC decoder exploits the sparse graph structure of QLDPC codes and can be implemented as a message-passing decoding algorithm.
arXiv Detail & Related papers (2024-08-09T16:47:49Z)
- A Scalable Graph Neural Network Decoder for Short Block Codes [49.25571364253986]
We propose a novel decoding algorithm for short block codes based on an edge-weighted graph neural network (EW-GNN)
The EW-GNN decoder operates on the Tanner graph with an iterative message-passing structure.
We show that the EW-GNN decoder outperforms the BP and deep-learning-based BP methods in terms of the decoding error rate.
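The iterative message passing that the EW-GNN augments with learned edge weights can be illustrated by its classical counterpart. Below is a minimal min-sum decoder on the Tanner graph of a (7,4) Hamming code; the specific code and sizes are illustrative stand-ins, not from the abstract.

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code (deliberately tiny;
# practical LDPC codes are much larger and sparser)
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def min_sum_decode(llr, H, iters=10):
    """Classical min-sum message passing on the Tanner graph of H.
    llr: channel log-likelihood ratios (positive favours bit 0).
    Returns hard decisions after at most `iters` iterations."""
    m, n = H.shape
    c2v = np.zeros((m, n))               # check-to-variable messages
    hard = (llr < 0).astype(int)
    for _ in range(iters):
        # variable-to-check: total belief minus the incoming edge message
        total = llr + c2v.sum(axis=0)
        v2c = np.where(H == 1, total - c2v, 0.0)
        # check-to-variable: sign product x minimum magnitude over the
        # *other* edges of each check node
        for i in range(m):
            idx = np.flatnonzero(H[i])
            msgs = v2c[i, idx]
            for k, j in enumerate(idx):
                others = np.delete(msgs, k)
                c2v[i, j] = np.prod(np.sign(others)) * np.min(np.abs(others))
        hard = ((llr + c2v.sum(axis=0)) < 0).astype(int)
        if not ((H @ hard) % 2).any():   # all parity checks satisfied
            break
    return hard

# all-zero codeword over a noisy channel: bit 2 arrives flipped/unreliable
llr = np.array([2.0, 2.0, -1.5, 2.0, 2.0, 2.0, 2.0])
print(min_sum_decode(llr, H))            # → [0 0 0 0 0 0 0]
```

The EW-GNN keeps this iterative structure but learns per-edge weights on the exchanged messages instead of using the fixed min-sum update.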
arXiv Detail & Related papers (2022-11-13T17:13:12Z)
- Spiking Neural Network Decision Feedback Equalization [70.3497683558609]
We propose an SNN-based equalizer with a feedback structure akin to the decision feedback equalizer (DFE)
We show that our approach clearly outperforms conventional linear equalizers for three different exemplary channels.
The proposed SNN with a decision feedback structure enables the path to competitive energy-efficient transceivers.
arXiv Detail & Related papers (2022-11-09T09:19:15Z)
- Graph Neural Networks for Channel Decoding [71.15576353630667]
We showcase competitive decoding performance for various coding schemes, such as low-density parity-check (LDPC) and BCH codes.
The idea is to let a neural network (NN) learn a generalized message passing algorithm over a given graph.
We benchmark our proposed decoder against state-of-the-art in conventional channel decoding as well as against recent deep learning-based results.
arXiv Detail & Related papers (2022-07-29T15:29:18Z)
- KO codes: Inventing Nonlinear Encoding and Decoding for Reliable Wireless Communication via Deep-learning [76.5589486928387]
Landmark codes underpin reliable physical layer communication, e.g., Reed-Muller, BCH, Convolutional, Turbo, LDPC and Polar codes.
In this paper, we construct KO codes, a computationally efficient family of deep-learning-driven (encoder, decoder) pairs.
KO codes beat state-of-the-art Reed-Muller and Polar codes, under the low-complexity successive cancellation decoding.
arXiv Detail & Related papers (2021-08-29T21:08:30Z)
- Quantized Neural Networks via {-1, +1} Encoding Decomposition and Acceleration [83.84684675841167]
We propose a novel encoding scheme using {-1, +1} to decompose quantized neural networks (QNNs) into multi-branch binary networks.
We validate the effectiveness of our method on large-scale image classification, object detection, and semantic segmentation tasks.
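The abstract does not spell out the paper's exact decomposition, but the general idea can be sketched: an odd-level quantized weight matrix can be written as a power-of-two-weighted sum of {-1, +1} matrices, turning one matmul into a few cheap binary matmuls. Everything below (levels, shapes, the greedy rule) is an assumption made for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def to_pm1_branches(w, k):
    """Decompose odd integers in [-(2**k - 1), 2**k - 1] as
    w = sum_i 2**i * b_i with every b_i in {-1, +1} (greedy, MSB first)."""
    branches = []
    r = w.astype(np.int64)
    for i in reversed(range(k)):
        b = np.where(r >= 0, 1, -1)      # pick the sign that shrinks the residual
        branches.append((i, b))
        r = r - b * (1 << i)
    assert not r.any()                   # the decomposition is exact
    return branches

# 2-bit-style quantized weights with odd levels {-3, -1, +1, +3}
W = rng.choice([-3, -1, 1, 3], size=(4, 5))
x = rng.integers(-5, 6, size=(3, 4))

# the full matmul equals a weighted sum of binary-branch matmuls
ref = x @ W
multi = sum((1 << i) * (x @ b) for i, b in to_pm1_branches(W, 2))
print(np.array_equal(ref, multi))        # → True
```

The acceleration argument is that each branch `x @ b` involves only {-1, +1} weights, which maps to additions and subtractions rather than multiplications.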
arXiv Detail & Related papers (2021-06-18T03:11:15Z)
- Learning to Time-Decode in Spiking Neural Networks Through the Information Bottleneck [37.376989855065545]
One of the key challenges in training Spiking Neural Networks (SNNs) is that target outputs typically come in the form of natural signals.
Conventional approaches handcraft target spiking signals, which in turn implicitly fixes the mechanisms used to decode spikes into natural signals.
This work introduces a hybrid variational autoencoder architecture, consisting of an encoding SNN and a decoding Artificial Neural Network.
arXiv Detail & Related papers (2021-06-02T14:14:47Z)
- DeepRx: Fully Convolutional Deep Learning Receiver [8.739166282613118]
DeepRx is a fully convolutional neural network that executes the whole receiver pipeline from frequency domain signal stream to uncoded bits in a 5G-compliant fashion.
We demonstrate that DeepRx outperforms traditional methods.
arXiv Detail & Related papers (2020-05-04T13:53:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.