Doubly Residual Neural Decoder: Towards Low-Complexity High-Performance
Channel Decoding
- URL: http://arxiv.org/abs/2102.03959v1
- Date: Mon, 8 Feb 2021 01:48:16 GMT
- Title: Doubly Residual Neural Decoder: Towards Low-Complexity High-Performance
Channel Decoding
- Authors: Siyu Liao, Chunhua Deng, Miao Yin, Bo Yuan
- Abstract summary: Deep neural networks have been successfully applied in channel coding to improve decoding performance.
We propose a doubly residual neural (DRN) decoder.
DRN enables significant decoding performance improvement while maintaining low complexity.
- Score: 19.48350605321212
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Recently, deep neural networks have been successfully applied in
channel coding to improve decoding performance. However, state-of-the-art
neural channel decoders cannot achieve high decoding performance and low
complexity simultaneously. To overcome this challenge, in this paper we
propose the doubly residual neural (DRN) decoder. By integrating both residual
input and residual learning into the design of the neural channel decoder,
DRN enables significant decoding performance improvement while maintaining low
complexity. Extensive experimental results show that, on different types of
channel codes, our DRN decoder consistently outperforms state-of-the-art
decoders in terms of decoding performance, model size, and computational cost.
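To make the abstract's "doubly residual" idea concrete, here is a minimal PyTorch sketch of a decoder in that spirit: every stage re-reads the raw channel LLRs (residual input) and emits only an additive correction to the running estimate (residual learning). The stage network, layer sizes, and iteration count are illustrative assumptions, not the paper's exact architecture.

```python
# Sketch of a doubly residual decoder: residual input + residual learning.
import torch
import torch.nn as nn

class DRNStage(nn.Module):
    """One decoding stage: refines the soft estimate given the channel LLRs."""
    def __init__(self, n: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * n, hidden),  # takes [estimate, channel LLRs]
            nn.ReLU(),
            nn.Linear(hidden, n),
        )

    def forward(self, est: torch.Tensor, llr: torch.Tensor) -> torch.Tensor:
        # Residual input: the raw channel LLRs are re-injected at every stage.
        update = self.net(torch.cat([est, llr], dim=-1))
        # Residual learning: the stage outputs a correction, not a new estimate.
        return est + update

class DRNDecoder(nn.Module):
    def __init__(self, n: int, num_stages: int = 5):
        super().__init__()
        self.stages = nn.ModuleList(DRNStage(n) for _ in range(num_stages))

    def forward(self, llr: torch.Tensor) -> torch.Tensor:
        est = llr  # start from the channel observation
        for stage in self.stages:
            est = stage(est, llr)
        return torch.sigmoid(est)  # soft bit estimates

llr = torch.randn(8, 16)   # batch of 8 codewords, block length 16 (assumed)
decoder = DRNDecoder(n=16)
print(decoder(llr).shape)  # torch.Size([8, 16])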
Related papers
- Efficient Encoder-Decoder Transformer Decoding for Decomposable Tasks [53.550782959908524]
We introduce a new configuration for encoder-decoder models that improves efficiency on structured output and decomposable tasks.
Our method, prompt-in-decoder (PiD), encodes the input once and decodes the output in parallel, boosting both training and inference efficiency.
arXiv Detail & Related papers (2024-03-19T19:27:23Z)
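A minimal sketch of the encode-once, decode-in-parallel idea described in the PiD entry above; the toy transformer, prompt shapes, and dimensions are assumptions for illustration.

```python
# Encode the input once; decode all sub-task prompts as one parallel batch.
import torch
import torch.nn as nn

d_model, n_prompts = 32, 4
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True), num_layers=2)
decoder = nn.TransformerDecoder(
    nn.TransformerDecoderLayer(d_model, nhead=4, batch_first=True), num_layers=2)

src = torch.randn(1, 50, d_model)   # one long input sequence (assumed shape)
memory = encoder(src)               # encoded exactly once

# Each decomposed sub-task gets its own prompt; the shared encoder memory is
# broadcast so all sub-tasks decode in parallel as a single batch.
prompts = torch.randn(n_prompts, 5, d_model)
outputs = decoder(prompts, memory.expand(n_prompts, -1, -1))
print(outputs.shape)                # torch.Size([4, 5, 32])
```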
- Coding for Gaussian Two-Way Channels: Linear and Learning-Based Approaches [28.98777190628006]
We propose two different two-way coding strategies: linear coding and learning-based coding.
For learning-based coding, we introduce a novel recurrent neural network (RNN)-based coding architecture.
Our two-way coding methodologies outperform conventional channel coding schemes significantly in sum-error performance.
arXiv Detail & Related papers (2023-12-31T12:40:18Z)
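A minimal sketch of what an RNN-based feedback coding loop like the one named above could look like: at each channel use, a recurrent cell maps the message and the noisy feedback received so far to the next transmit symbol. The GRU size, power constraint, and noise levels are assumptions.

```python
# Toy RNN coding loop for a channel with (noisy) output feedback.
import torch
import torch.nn as nn

msg_bits, steps, hidden = 8, 12, 32
cell = nn.GRUCell(msg_bits + 1, hidden)  # input: message + last feedback
to_symbol = nn.Linear(hidden, 1)

msg = torch.randint(0, 2, (4, msg_bits)).float()  # batch of 4 messages
h = torch.zeros(4, hidden)
feedback = torch.zeros(4, 1)
sent = []
for _ in range(steps):
    h = cell(torch.cat([msg, feedback], dim=1), h)
    x = torch.tanh(to_symbol(h))              # power-constrained symbol
    y = x + 0.1 * torch.randn_like(x)         # forward AWGN channel (assumed)
    feedback = y + 0.1 * torch.randn_like(y)  # noisy feedback link (assumed)
    sent.append(x)
print(torch.stack(sent, dim=1).shape)         # torch.Size([4, 12, 1])
```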
- Joint Channel Estimation and Feedback with Masked Token Transformers in Massive MIMO Systems [74.52117784544758]
This paper proposes an encoder-decoder based network that unveils the intrinsic frequency-domain correlation within the CSI matrix.
The entire encoder-decoder network is utilized for channel compression.
Our method outperforms state-of-the-art channel estimation and feedback techniques in joint tasks.
arXiv Detail & Related papers (2023-06-08T06:15:17Z)
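A minimal sketch of the encoder-decoder channel compression pipeline summarized above: the user side compresses the CSI matrix to a short feedback vector and the base station reconstructs it. The dense layers and dimensions are assumptions; the paper's masked-token transformer is not reproduced here.

```python
# Toy CSI compression/feedback autoencoder.
import torch
import torch.nn as nn

subcarriers, antennas, feedback_dim = 32, 16, 64
enc = nn.Sequential(nn.Flatten(),
                    nn.Linear(subcarriers * antennas * 2, feedback_dim))
dec = nn.Sequential(nn.Linear(feedback_dim, subcarriers * antennas * 2),
                    nn.Unflatten(1, (subcarriers, antennas, 2)))

csi = torch.randn(4, subcarriers, antennas, 2)  # real/imag CSI tensor
codeword = enc(csi)                             # compressed feedback vector
recon = dec(codeword)                           # reconstruction at base station
print(codeword.shape, recon.shape)              # (4, 64) (4, 32, 16, 2)
```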
- Channelformer: Attention based Neural Solution for Wireless Channel Estimation and Effective Online Training [1.0499453838486013]
We propose an encoder-decoder neural architecture (called Channelformer) to achieve improved channel estimation.
We employ multi-head attention in the encoder and a residual convolutional neural architecture as the decoder.
We also propose an effective online training method based on the fifth generation (5G) new radio (NR) configuration for modern communication systems.
arXiv Detail & Related papers (2023-02-08T23:18:23Z)
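A minimal sketch of the architecture named in the Channelformer entry: a multi-head attention encoder over pilot observations followed by a residual convolutional decoder. All sizes are illustrative assumptions.

```python
# Multi-head attention encoder + residual CNN decoder for channel estimation.
import torch
import torch.nn as nn

class ResidualConvBlock(nn.Module):
    def __init__(self, ch: int):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(ch, ch, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(ch, ch, kernel_size=3, padding=1))

    def forward(self, x):
        return x + self.conv(x)  # residual refinement

d_model = 32
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True), num_layers=2)
decoder = nn.Sequential(ResidualConvBlock(d_model), ResidualConvBlock(d_model),
                        nn.Conv1d(d_model, 2, kernel_size=1))  # real/imag out

pilots = torch.randn(4, 64, d_model)  # embedded pilot observations (assumed)
feat = encoder(pilots)                # attention over pilot positions
est = decoder(feat.transpose(1, 2))   # conv decoder works channels-first
print(est.shape)                      # torch.Size([4, 2, 64])
```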
- Denoising Diffusion Error Correction Codes [92.10654749898927]
Recently, neural decoders have demonstrated their advantage over classical decoding techniques.
However, state-of-the-art neural decoders suffer from high complexity and lack the important iterative scheme characteristic of many legacy decoders.
We propose to employ denoising diffusion models for the soft decoding of linear codes at arbitrary block lengths.
arXiv Detail & Related papers (2022-09-16T11:00:50Z)
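A minimal sketch of iterative, diffusion-style soft decoding as summarized above: a learned denoiser is applied repeatedly to the received word with a step-index conditioning. The denoiser, step schedule, and conditioning are simplified assumptions, not the paper's exact method.

```python
# Toy reverse-diffusion loop that iteratively refines a received word.
import torch
import torch.nn as nn

n, steps = 16, 10
denoiser = nn.Sequential(nn.Linear(n + 1, 64), nn.ReLU(), nn.Linear(64, n))

received = torch.randn(4, n)  # noisy channel output for 4 words (assumed)
x = received
for t in reversed(range(steps)):
    t_embed = torch.full((4, 1), t / steps)        # scalar step conditioning
    noise_est = denoiser(torch.cat([x, t_embed], dim=1))
    x = x - noise_est / steps                      # small denoising step
soft_bits = torch.sigmoid(x)                       # soft decisions
print(soft_bits.shape)                             # torch.Size([4, 16])
```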
- Graph Neural Networks for Channel Decoding [71.15576353630667]
We showcase competitive decoding performance for various coding schemes, such as low-density parity-check (LDPC) and BCH codes.
The idea is to let a neural network (NN) learn a generalized message passing algorithm over a given graph.
We benchmark our proposed decoder against state-of-the-art in conventional channel decoding as well as against recent deep learning-based results.
arXiv Detail & Related papers (2022-07-29T15:29:18Z)
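A minimal sketch of learned message passing over a Tanner graph, in the spirit of the GNN decoder above: variable-node states are refined from check-node feedback by a small trained network. The toy parity-check matrix, update network, and iteration count are assumptions.

```python
# Learned message passing over a toy Tanner graph.
import torch
import torch.nn as nn

H = torch.tensor([[1, 1, 0, 1, 0, 0],   # toy parity-check matrix (3 x 6)
                  [0, 1, 1, 0, 1, 0],
                  [1, 0, 1, 0, 0, 1]]).float()
update = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))

llr = torch.randn(6)                     # channel LLRs for one codeword
state = llr.clone()
for _ in range(5):                       # message-passing iterations
    check_feedback = H.t() @ torch.tanh(H @ state)  # aggregate from checks
    pair = torch.stack([state, check_feedback], dim=1)
    state = llr + update(pair).squeeze(1)           # learned node update
print(torch.sigmoid(state))              # soft bit beliefs
```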
- DRF Codes: Deep SNR-Robust Feedback Codes [2.6074034431152344]
We present a new deep-neural-network (DNN) based error correction code for fading channels with output feedback, called the deep SNR-robust feedback (DRF) code.
We show that DRF codes significantly outperform the state of the art in terms of both SNR-robustness and error rate in an additive white Gaussian noise (AWGN) channel with feedback.
arXiv Detail & Related papers (2021-12-22T10:47:25Z)
- Adversarial Neural Networks for Error Correcting Codes [76.70040964453638]
We introduce a general framework to boost the performance and applicability of machine learning (ML) models.
We propose to combine ML decoders with a competing discriminator network that tries to distinguish between codewords and noisy words.
Our framework is game-theoretic and motivated by generative adversarial networks (GANs).
arXiv Detail & Related papers (2021-12-21T19:14:44Z)
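A minimal sketch of the adversarial setup summarized above: a neural decoder trained jointly against a discriminator that tries to distinguish true codewords from noisy or decoded words. The networks, code, and loss weighting are toy assumptions rather than the paper's framework.

```python
# GAN-style training step: decoder vs. codeword/noise discriminator.
import torch
import torch.nn as nn

n = 16
decoder = nn.Sequential(nn.Linear(n, 64), nn.ReLU(),
                        nn.Linear(64, n), nn.Sigmoid())
critic = nn.Sequential(nn.Linear(n, 64), nn.ReLU(),
                       nn.Linear(64, 1), nn.Sigmoid())
bce = nn.BCELoss()

codewords = torch.randint(0, 2, (32, n)).float()  # stand-in valid codewords
noisy = codewords + 0.5 * torch.randn_like(codewords)

decoded = decoder(noisy)
# Discriminator: true codewords -> 1, decoder outputs -> 0.
d_loss = bce(critic(codewords), torch.ones(32, 1)) + \
         bce(critic(decoded.detach()), torch.zeros(32, 1))
# Decoder: reconstruct the codeword AND fool the discriminator.
g_loss = bce(decoded, codewords) + bce(critic(decoded), torch.ones(32, 1))
print(float(d_loss), float(g_loss))
```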
- Dynamic Neural Representational Decoders for High-Resolution Semantic Segmentation [98.05643473345474]
We propose a novel decoder, termed the dynamic neural representational decoder (NRD).
As each location on the encoder's output corresponds to a local patch of the semantic labels, in this work, we represent these local patches of labels with compact neural networks.
This neural representation enables our decoder to leverage the smoothness prior in the semantic label space, and thus makes our decoder more efficient.
arXiv Detail & Related papers (2021-07-30T04:50:56Z)
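A minimal sketch of the dynamic neural representation idea in the entry above: a controller predicts the weights of a tiny network that renders the label patch for one encoder location. The patch size, class count, and tiny-network shape are assumptions.

```python
# Controller predicts the weights of a tiny per-patch label network.
import torch
import torch.nn as nn

feat_dim, hidden, classes, patch = 64, 8, 21, 4
# Weights of a per-patch MLP (2 -> hidden -> classes) over (x, y) coordinates.
n_params = 2 * hidden + hidden + hidden * classes + classes
controller = nn.Linear(feat_dim, n_params)

feat = torch.randn(feat_dim)                  # one encoder location's feature
w = controller(feat)
w1, b1, w2, b2 = torch.split(w, [2 * hidden, hidden, hidden * classes, classes])

xy = torch.stack(torch.meshgrid(torch.linspace(0, 1, patch),
                                torch.linspace(0, 1, patch),
                                indexing="ij"), dim=-1).reshape(-1, 2)
h = torch.relu(xy @ w1.view(2, hidden) + b1)  # tiny dynamic MLP
logits = h @ w2.view(hidden, classes) + b2    # per-pixel class scores
print(logits.shape)                           # torch.Size([16, 21])
```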
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.