For One-Shot Decoding: Self-supervised Deep Learning-Based Polar Decoder
- URL: http://arxiv.org/abs/2307.08004v2
- Date: Sun, 30 Jul 2023 04:26:55 GMT
- Title: For One-Shot Decoding: Self-supervised Deep Learning-Based Polar Decoder
- Authors: Huiying Song, Yihao Luo, Yuma Fukuzawa
- Abstract summary: We propose a self-supervised deep learning-based decoding scheme that enables one-shot decoding of polar codes.
In the proposed scheme, rather than using the information bit vectors as labels for training the neural network (NN), the NN is trained to function as a bounded-distance decoder.
- Score: 1.4964546566293881
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a self-supervised deep learning-based decoding scheme that enables
one-shot decoding of polar codes. In the proposed scheme, rather than using the
information bit vectors as labels for training the neural network (NN) through
supervised learning as in the conventional scheme, the NN is trained to
function as a bounded-distance decoder by leveraging the generator matrix of
polar codes through self-supervised learning. This approach eliminates the
reliance on predefined labels, making it possible to train directly on the
actual data within communication systems and thereby broadening the scheme's
applicability. Furthermore, computer simulations demonstrate that (i) the bit
error rate (BER) and block error rate (BLER) performance of the proposed
scheme approaches that of the maximum a posteriori (MAP) decoder for very
short packets, and (ii) the proposed NN decoder (NND) generalizes far better
than the conventional one.
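As a concrete illustration of the training idea, here is a minimal sketch (PyTorch), assuming the self-supervised signal comes from re-encoding the NN's soft message estimate with the polar generator matrix and matching it to the hard-decided channel word; the paper's exact bounded-distance objective, architecture, and hyperparameters may differ.

```python
# Self-supervised training sketch for an NN polar decoder (illustrative).
import torch
import torch.nn as nn

def polar_generator(n):
    """G_N: the n-fold Kronecker power of F = [[1,0],[1,1]]."""
    F = torch.tensor([[1., 0.], [1., 1.]])
    G = F
    for _ in range(n - 1):
        G = torch.kron(G, F)
    return G  # entries in {0, 1}

n = 3
N = 2 ** n                     # block length
G = polar_generator(n)

net = nn.Sequential(nn.Linear(N, 64), nn.ReLU(), nn.Linear(64, N), nn.Tanh())
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def soft_encode(u_soft, G):
    # Bipolar trick: XOR of bits = product of +/-1 signs, so a soft
    # codeword bit is the product of the soft message bits selected by
    # the corresponding column of G. u_soft in (-1, 1) represents 1 - 2u.
    terms = torch.where(G.t().bool().unsqueeze(0),       # (1, N, N)
                        u_soft.unsqueeze(1),              # (batch, 1, N)
                        torch.ones_like(u_soft).unsqueeze(1))
    return terms.prod(dim=-1)                             # (batch, N)

for _ in range(1000):
    y = torch.randn(32, N)          # stand-in for received channel LLRs
    u_soft = net(y)                 # soft message estimate in (-1, 1)
    x_soft = soft_encode(u_soft, G) # differentiable re-encoding
    target = torch.sign(y)          # hard decision on the channel word
    loss = ((x_soft - target) ** 2).mean()  # no information-bit labels used
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The bipolar product trick makes the modulo-2 re-encoding differentiable, which is what allows the generator matrix, rather than predefined labels, to provide the training signal.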
Related papers
- Learning Linear Block Error Correction Codes [62.25533750469467]
We propose for the first time a unified encoder-decoder training of binary linear block codes.
We also propose a novel Transformer model in which the self-attention masking is performed in a differentiable fashion for the efficient backpropagation of the code gradient.
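A minimal sketch of what differentiable self-attention masking can look like (PyTorch), assuming the hard 0/1 mask is relaxed to a learnable logit matrix squashed by a sigmoid; the paper's exact mechanism may differ.

```python
# Differentiable attention masking sketch (illustrative).
import torch
import torch.nn.functional as F

def masked_attention(q, k, v, mask_logits):
    d = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d ** 0.5       # (T, T)
    soft_mask = torch.sigmoid(mask_logits)            # in (0, 1), differentiable
    weights = F.softmax(scores, dim=-1) * soft_mask
    weights = weights / weights.sum(dim=-1, keepdim=True).clamp_min(1e-9)
    return weights @ v

T, d = 16, 32
mask_logits = torch.zeros(T, T, requires_grad=True)   # learned with the model
q = k = v = torch.randn(T, d)
out = masked_attention(q, k, v, mask_logits)
out.sum().backward()                                  # gradients reach the mask
```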
arXiv Detail & Related papers (2024-05-07T06:47:12Z)
- Flexible polar encoding for information reconciliation in QKD [2.627883025193776]
Quantum Key Distribution (QKD) enables two parties to establish a common secret key that is information-theoretically secure.
Errors, generally attributed to the adversary's tampering with the quantum channel, must be corrected using classical communication over a public channel.
We show that the reliability sequence can be derived and used to design an encoder independent of the choice of decoder.
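For illustration, a decoder-independent reliability sequence can be derived, for example, from the Bhattacharyya recursion on a binary erasure channel (a standard polar construction; the paper's derivation for the QKD setting may differ):

```python
# Reliability sequence sketch via the BEC Bhattacharyya recursion.
def reliability_sequence(n, eps=0.5):
    z = [eps]                            # Bhattacharyya parameter of the channel
    for _ in range(n):
        # channel splitting: Z(W-) = 2Z - Z^2, Z(W+) = Z^2
        z = [2 * zi - zi * zi for zi in z] + [zi * zi for zi in z]
    # lower Z means more reliable; return indices from most to least reliable
    # (index convention here is bit-reversed relative to u; fine for illustration)
    return sorted(range(len(z)), key=lambda i: z[i])

print(reliability_sequence(3))   # reliability order for block length 8
```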
arXiv Detail & Related papers (2023-11-30T16:01:10Z)
- Graph Neural Networks for Enhanced Decoding of Quantum LDPC Codes [6.175503577352742]
We propose a differentiable iterative decoder for quantum low-density parity-check (LDPC) codes.
The proposed algorithm is composed of classical belief propagation (BP) decoding stages and intermediate graph neural network (GNN) layers.
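A rough architectural sketch of such an interleaving (PyTorch), where `bp_stage` is only a placeholder for a differentiable belief-propagation update and `GNNLayer` is a generic message-passing layer; the paper's layers are tailored to quantum LDPC codes:

```python
# BP stages alternating with GNN layers (architectural sketch only).
import torch
import torch.nn as nn

class GNNLayer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.update = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU())
    def forward(self, msgs, adj):
        agg = adj @ msgs                 # sum messages from neighboring edges
        return self.update(torch.cat([msgs, agg], dim=-1))

def bp_stage(msgs, adj):
    # placeholder: a differentiable min-sum/BP update would go here
    return torch.tanh(adj @ msgs)

E, dim = 48, 16                          # edges, message width
adj = (torch.rand(E, E) < 0.1).float()   # stand-in edge adjacency
msgs = torch.randn(E, dim)
layers = nn.ModuleList(GNNLayer(dim) for _ in range(3))

for layer in layers:                     # classical BP stage, then GNN refinement
    msgs = bp_stage(msgs, adj)
    msgs = layer(msgs, adj)
```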
arXiv Detail & Related papers (2023-10-26T19:56:25Z)
- A Scalable Graph Neural Network Decoder for Short Block Codes [49.25571364253986]
We propose a novel decoding algorithm for short block codes based on an edge-weighted graph neural network (EW-GNN).
The EW-GNN decoder operates on the Tanner graph with an iterative message-passing structure.
We show that the EW-GNN decoder outperforms the BP and deep-learning-based BP methods in terms of the decoding error rate.
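For intuition, a toy weighted min-sum iteration on a Tanner graph (numpy), with the EW-GNN's learned per-edge weights modeled by a fixed array `w`; in the paper these weights are produced by the neural network:

```python
# Weighted min-sum message passing on a Tanner graph (toy example).
import numpy as np

H = np.array([[1, 1, 0, 1, 0, 0],     # parity-check matrix of a toy code
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])
llr = np.array([1.2, -0.4, 0.9, 2.0, -1.1, 0.3])   # channel LLRs
w = np.ones_like(H, dtype=float)                    # stand-in edge weights

v2c = H * llr                                       # variable-to-check init
for _ in range(5):
    c2v = np.zeros_like(v2c, dtype=float)
    for c, v in zip(*np.nonzero(H)):                # check-to-variable (min-sum)
        others = [v2c[c, j] for j in np.nonzero(H[c])[0] if j != v]
        c2v[c, v] = np.prod(np.sign(others)) * min(abs(m) for m in others)
    c2v *= w                                        # learned edge weighting
    for c, v in zip(*np.nonzero(H)):                # variable-to-check update
        v2c[c, v] = llr[v] + sum(c2v[j, v] for j in np.nonzero(H[:, v])[0] if j != c)

decoded = ((llr + c2v.sum(axis=0)) < 0).astype(int)
print(decoded)
```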
arXiv Detail & Related papers (2022-11-13T17:13:12Z)
- Denoising Diffusion Error Correction Codes [92.10654749898927]
Recently, neural decoders have demonstrated their advantage over classical decoding techniques.
Recent state-of-the-art neural decoders suffer from high complexity and lack the important iterative scheme characteristic of many legacy decoders.
We propose to employ denoising diffusion models for the soft decoding of linear codes at arbitrary block lengths.
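A minimal sketch of a reverse-diffusion refinement loop for soft decoding (PyTorch), assuming an untrained stand-in noise predictor `eps_model` and a simplified update rule; the paper's parameterization and noise schedule differ in detail:

```python
# Reverse-diffusion denoising of a received word (illustrative loop).
import torch
import torch.nn as nn

N, steps = 16, 10
eps_model = nn.Sequential(nn.Linear(N + 1, 64), nn.ReLU(), nn.Linear(64, N))
betas = torch.linspace(1e-2, 2e-1, steps)

x = torch.randn(1, N)                      # stand-in for the noisy channel word
for t in reversed(range(steps)):
    t_emb = torch.full((1, 1), float(t) / steps)          # timestep conditioning
    eps_hat = eps_model(torch.cat([x, t_emb], dim=-1))    # predicted noise
    x = (x - betas[t] * eps_hat) / (1 - betas[t]).sqrt()  # one denoising step

x_hat = (x < 0).long()                     # hard decision on the bipolar word
```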
arXiv Detail & Related papers (2022-09-16T11:00:50Z)
- Graph Neural Networks for Channel Decoding [71.15576353630667]
We showcase competitive decoding performance for various coding schemes, such as low-density parity-check (LDPC) and BCH codes.
The idea is to let a neural network (NN) learn a generalized message passing algorithm over a given graph.
We benchmark our proposed decoder against state-of-the-art in conventional channel decoding as well as against recent deep learning-based results.
arXiv Detail & Related papers (2022-07-29T15:29:18Z)
- Scalable Polar Code Construction for Successive Cancellation List Decoding: A Graph Neural Network-Based Approach [11.146177972345138]
This paper first maps a polar code to a heterogeneous graph called the polar-code-construction message-passing (PCCMP) graph.
Next, a graph-neural-network-based iterative message-passing (IMP) algorithm is proposed that aims to find a PCCMP graph corresponding to the polar code.
Numerical experiments show that IMP-based polar-code constructions outperform classical constructions under CRC-aided successive-cancellation list (CA-SCL) decoding.
arXiv Detail & Related papers (2022-07-03T19:27:43Z)
- Using Deep Neural Networks to Predict and Improve the Performance of Polar Codes [3.6804038214708563]
We introduce a methodology that trains deep neural networks to predict the frame error rate of polar codes from their frozen-bit construction sequences.
On generated datasets, the proposed methodology produces codes more efficient than those used to train the neural networks.
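A minimal sketch of such a predictor (PyTorch), assuming the frozen set is encoded as a binary mask and the FER labels come from simulation; the architecture and data here are illustrative stand-ins:

```python
# DNN mapping a frozen-bit mask to a predicted frame error rate.
import torch
import torch.nn as nn

N = 64
model = nn.Sequential(nn.Linear(N, 128), nn.ReLU(),
                      nn.Linear(128, 1), nn.Sigmoid())   # FER in (0, 1)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# stand-in dataset: random frozen masks with simulated FER labels
masks = (torch.rand(512, N) < 0.5).float()
fers = torch.rand(512, 1)

for _ in range(200):
    pred = model(masks)
    loss = nn.functional.binary_cross_entropy(pred, fers)
    opt.zero_grad()
    loss.backward()
    opt.step()

# the trained predictor can then rank candidate constructions cheaply
best = masks[model(masks).argmin()]
```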
arXiv Detail & Related papers (2021-05-11T10:24:51Z)
- Short-Term Memory Optimization in Recurrent Neural Networks by Autoencoder-based Initialization [79.42778415729475]
We explore an alternative solution based on explicit memorization using linear autoencoders for sequences.
We show how such pretraining can better support solving hard classification tasks with long sequences.
We show that the proposed approach achieves a much lower reconstruction error for long sequences and a better gradient propagation during the finetuning phase.
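A minimal sketch of the initialization idea (PyTorch), assuming the linear sequence autoencoder is trained by gradient descent here; the paper's construction may differ. The pretrained matrices then seed the RNN's input and recurrent weights:

```python
# Initialize an RNN from a pretrained linear sequence autoencoder.
import torch
import torch.nn as nn

T, d, h = 10, 4, 32
A = nn.Parameter(0.01 * torch.randn(h, h))      # recurrent weights
B = nn.Parameter(0.01 * torch.randn(h, d))      # input weights
C = nn.Parameter(0.01 * torch.randn(T * d, h))  # decoder: state -> sequence
opt = torch.optim.Adam([A, B, C], lr=1e-3)

for _ in range(2000):
    x = torch.randn(64, T, d)
    s = torch.zeros(64, h)
    for t in range(T):                           # linear encoding recurrence
        s = s @ A.t() + x[:, t] @ B.t()
    x_rec = (s @ C.t()).view(64, T, d)           # reconstruct the full sequence
    loss = ((x_rec - x) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

rnn = nn.RNN(d, h, batch_first=True)
with torch.no_grad():                            # copy pretrained memory weights
    rnn.weight_hh_l0.copy_(A)
    rnn.weight_ih_l0.copy_(B)
```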
arXiv Detail & Related papers (2020-11-05T14:57:16Z)
- A Compressive Sensing Approach for Federated Learning over Massive MIMO Communication Systems [82.2513703281725]
Federated learning is a privacy-preserving approach in which a central server trains a global model in collaboration with wireless devices.
We present a compressive sensing approach for federated learning over massive multiple-input multiple-output communication systems.
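A minimal sketch of the compressive-sensing step (numpy), assuming the local update is sparse, the measurement matrix is Gaussian, and recovery uses orthogonal matching pursuit; the paper's estimator for the massive MIMO setting may differ:

```python
# Compressive sensing of a sparse model update with OMP recovery.
import numpy as np

rng = np.random.default_rng(0)
d, m, k = 256, 64, 8                 # update dim, measurements, sparsity
g = np.zeros(d)
g[rng.choice(d, k, replace=False)] = rng.standard_normal(k)  # sparse update

Phi = rng.standard_normal((m, d)) / np.sqrt(m)   # measurement matrix
y = Phi @ g                                       # compressed transmission

# orthogonal matching pursuit at the server
support, r = [], y.copy()
for _ in range(k):
    support.append(int(np.argmax(np.abs(Phi.T @ r))))         # best new atom
    sol, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None) # refit on support
    r = y - Phi[:, support] @ sol                             # update residual

g_hat = np.zeros(d)
g_hat[support] = sol
print(np.linalg.norm(g - g_hat))     # near zero when m is large enough
```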
arXiv Detail & Related papers (2020-03-18T05:56:27Z)