Interpreting Neural Min-Sum Decoders
- URL: http://arxiv.org/abs/2205.10684v2
- Date: Tue, 11 Apr 2023 18:41:32 GMT
- Title: Interpreting Neural Min-Sum Decoders
- Authors: Sravan Kumar Ankireddy and Hyeji Kim
- Abstract summary: We provide insights into the weights learned and their connection to the structure of the underlying code.
We show that the weights are heavily influenced by the distribution of short cycles in the code.
We show that the decoders with learned weights achieve higher reliability than those with weights optimized analytically.
- Score: 7.012710335689297
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In decoding linear block codes, it was shown that noticeable reliability
gains can be achieved by introducing learnable parameters to the Belief
Propagation (BP) decoder. Despite the success of these methods, there are two
key open problems. The first is the lack of interpretation of the learned
weights, and the other is the lack of analysis for non-AWGN channels. In this
work, we aim to bridge this gap by providing insights into the weights learned
and their connection to the structure of the underlying code. We show that the
weights are heavily influenced by the distribution of short cycles in the code.
We next look at the performance of these decoders in non-AWGN channels, both
synthetic and over-the-air channels, and study the complexity vs. performance
trade-offs, demonstrating that increasing the number of parameters helps
significantly in complex channels. Finally, we show that the decoders with
learned weights achieve higher reliability than those with weights optimized
analytically under the Gaussian approximation.
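To make the setting concrete, below is a minimal NumPy sketch of one iteration of a weighted ("neural") min-sum decoder: each check-to-variable message of the classical min-sum rule is scaled by a learnable weight, and setting all weights to one recovers plain min-sum. The array layout, the per-edge weight placement, and all names are illustrative assumptions, not the authors' implementation.
```python
import numpy as np

def neural_min_sum_iteration(llr, H, v2c, weights):
    """One weighted min-sum iteration (illustrative sketch, dense arrays).

    llr     : (n,) channel log-likelihood ratios
    H       : (m, n) binary parity-check matrix (float 0/1)
    v2c     : (m, n) variable-to-check messages, zero off the edges
    weights : (m, n) learned multiplicative weights on check-to-variable edges
    """
    m, n = H.shape
    c2v = np.zeros((m, n))
    for i in range(m):
        idx = np.flatnonzero(H[i])
        for j in idx:
            others = idx[idx != j]
            sign = np.prod(np.sign(v2c[i, others]))
            mag = np.min(np.abs(v2c[i, others]))
            c2v[i, j] = weights[i, j] * sign * mag  # learned scaling of min-sum
    total = llr + c2v.sum(axis=0)                   # marginal LLR per variable
    v2c_new = H * (total[None, :] - c2v)            # extrinsic variable update
    return c2v, v2c_new, total

# Usage: initialize messages with channel LLRs, iterate, hard-decide.
H = np.array([[1, 1, 0, 1, 0],
              [0, 1, 1, 0, 1]], dtype=float)
llr = np.array([1.2, -0.4, 0.9, -2.1, 0.3])
weights = np.ones_like(H)          # all-ones weights recover plain min-sum
v2c = H * llr[None, :]
for _ in range(5):
    c2v, v2c, total = neural_min_sum_iteration(llr, H, v2c, weights)
bits = (total < 0).astype(int)     # hard decision on the marginal LLRs
```
In the learned setting, a fixed number of such iterations is typically unrolled and the weights trained by backpropagating a bit-level loss through the unrolled decoder; the paper's analysis then asks what code structure, such as short cycles in the Tanner graph, these trained weights reflect.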
Related papers
- Encoder-Decoder Gemma: Improving the Quality-Efficiency Trade-Off via Adaptation [52.19855651708349]
We study a novel problem: adapting decoder-only large language models to encoder-decoder models.
We argue that adaptation not only inherits the capabilities of decoder-only LLMs but also reduces the computational demand.
Under a similar inference budget, encoder-decoder LLMs achieve comparable (often better) pretraining performance but substantially better finetuning performance than their decoder-only counterparts.
arXiv Detail & Related papers (2025-04-08T17:13:41Z)
- LightCode: Light Analytical and Neural Codes for Channels with Feedback [10.619569069690185]
We focus on designing low-complexity coding schemes that are interpretable and more suitable for communication systems.
First, we demonstrate that PowerBlast, an analytical coding scheme inspired by the Schalkwijk-Kailath (SK) and Gallager-Nakiboglu (GN) schemes, achieves notable reliability improvements over both SK and GN schemes.
Next, to enhance reliability in low-SNR regions, we propose LightCode, a lightweight neural code that achieves state-of-the-art reliability while using a fraction of the memory and compute of existing deep-learning-based codes.
arXiv Detail & Related papers (2024-03-16T01:04:34Z)
- Coding for Gaussian Two-Way Channels: Linear and Learning-Based Approaches [28.98777190628006]
We propose two different two-way coding strategies: linear coding and learning-based coding.
For learning-based coding, we introduce a novel recurrent neural network (RNN)-based coding architecture.
Our two-way coding methodologies significantly outperform conventional channel coding schemes in sum-error performance.
arXiv Detail & Related papers (2023-12-31T12:40:18Z)
- Graph Neural Networks for Enhanced Decoding of Quantum LDPC Codes [6.175503577352742]
We propose a differentiable iterative decoder for quantum low-density parity-check (LDPC) codes.
The proposed algorithm is composed of classical belief propagation (BP) decoding stages and intermediate graph neural network (GNN) layers.
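As a structural sketch of that alternation (not the paper's architecture): parameter-free BP stages interleave with learned layers acting on the messages. The placeholder layer below, its linear/tanh form, and all names are assumptions for illustration.
```python
import torch
import torch.nn as nn

class MessageTransform(nn.Module):
    """Placeholder learned layer acting on a flat vector of edge messages."""
    def __init__(self, num_edges):
        super().__init__()
        self.lin = nn.Linear(num_edges, num_edges)

    def forward(self, msgs):
        return torch.tanh(self.lin(msgs))

class HybridBPGNNDecoder(nn.Module):
    """Alternates classical BP stages with learned message transforms."""
    def __init__(self, num_edges, num_stages, bp_stage):
        super().__init__()
        self.bp_stage = bp_stage  # callable running a few classical BP iterations
        self.learned = nn.ModuleList(MessageTransform(num_edges)
                                     for _ in range(num_stages))

    def forward(self, msgs):
        for layer in self.learned:
            msgs = self.bp_stage(msgs)  # classical, parameter-free stage
            msgs = layer(msgs)          # learned correction between stages
        return msgs

# Smoke test with an identity stage standing in for real BP.
decoder = HybridBPGNNDecoder(num_edges=24, num_stages=3, bp_stage=lambda m: m)
out = decoder(torch.randn(4, 24))
```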
arXiv Detail & Related papers (2023-10-26T19:56:25Z)
- Joint Channel Estimation and Feedback with Masked Token Transformers in Massive MIMO Systems [74.52117784544758]
This paper proposes an encoder-decoder based network that unveils the intrinsic frequency-domain correlation within the CSI matrix.
The entire encoder-decoder network is utilized for channel compression.
Our method outperforms state-of-the-art channel estimation and feedback techniques in joint tasks.
arXiv Detail & Related papers (2023-06-08T06:15:17Z)
- Robust Non-Linear Feedback Coding via Power-Constrained Deep Learning [7.941112438865385]
We develop a new family of non-linear feedback codes that greatly enhance robustness to channel noise.
Our autoencoder-based architecture is designed to learn codes based on consecutive blocks of bits.
We show that our scheme outperforms state-of-the-art feedback codes by wide margins over practical forward and feedback noise regimes.
arXiv Detail & Related papers (2023-04-25T22:21:26Z)
- Neural Belief Propagation Decoding of Quantum LDPC Codes Using Overcomplete Check Matrices [60.02503434201552]
We propose to decode QLDPC codes based on a check matrix with redundant rows, generated from linear combinations of the rows in the original check matrix.
This approach yields a significant improvement in decoding performance with the additional advantage of very low decoding latency.
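The row-redundancy construction is easy to illustrate on a classical binary check matrix (the paper targets quantum LDPC codes, but the operation on the check matrix is the same): redundant rows are GF(2) sums of existing rows, which add check nodes to the decoding graph without changing the code. Picking random row pairs below is an assumption for illustration; which combinations to add is the design question.
```python
import numpy as np

def overcomplete_check_matrix(H, num_extra, seed=None):
    """Append `num_extra` redundant rows, each the GF(2) sum (XOR) of two
    distinct existing rows. Sketch only; duplicates are not filtered."""
    rng = np.random.default_rng(seed)
    rows = []
    for _ in range(num_extra):
        i, j = rng.choice(H.shape[0], size=2, replace=False)
        rows.append(H[i] ^ H[j])  # a valid parity check of the same code
    return np.vstack([H] + rows)

# Example: the (7,4) Hamming code gains two redundant checks.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]], dtype=np.uint8)
H_oc = overcomplete_check_matrix(H, num_extra=2, seed=0)
```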
arXiv Detail & Related papers (2022-12-20T13:41:27Z)
- Graph Neural Networks for Channel Decoding [71.15576353630667]
We showcase competitive decoding performance for various coding schemes, such as low-density parity-check (LDPC) and BCH codes.
The idea is to let a neural network (NN) learn a generalized message passing algorithm over a given graph.
We benchmark our proposed decoder against state-of-the-art in conventional channel decoding as well as against recent deep learning-based results.
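As a hedged sketch of that idea, the core is a learned message-passing layer over the decoding graph; the small MLP message function and GRU node update below are generic choices assumed for illustration, not taken from the paper.
```python
import torch
import torch.nn as nn

class LearnedMessagePassing(nn.Module):
    """One generic learned message-passing step over a directed edge list."""
    def __init__(self, dim):
        super().__init__()
        self.msg = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(),
                                 nn.Linear(dim, dim))
        self.upd = nn.GRUCell(dim, dim)

    def forward(self, h, edges):
        # h: (num_nodes, dim) node states; edges: (num_edges, 2) src->dst pairs.
        src, dst = edges[:, 0], edges[:, 1]
        m = self.msg(torch.cat([h[src], h[dst]], dim=-1))  # per-edge messages
        agg = torch.zeros_like(h).index_add_(0, dst, m)    # sum incoming messages
        return self.upd(agg, h)                            # learned node update

# Smoke test on a toy graph.
layer = LearnedMessagePassing(dim=16)
h = torch.zeros(5, 16)
edges = torch.tensor([[0, 1], [1, 2], [2, 0], [3, 4]])
h = layer(h, edges)
```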
arXiv Detail & Related papers (2022-07-29T15:29:18Z)
- Multifidelity data fusion in convolutional encoder/decoder networks [0.0]
We analyze the regression accuracy of convolutional neural networks assembled from encoders, decoders and skip connections.
We demonstrate their accuracy when trained on a few high-fidelity samples together with many low-fidelity samples.
arXiv Detail & Related papers (2022-05-10T21:51:22Z)
- Adversarial Neural Networks for Error Correcting Codes [76.70040964453638]
We introduce a general framework to boost the performance and applicability of machine learning (ML) models.
We propose to combine ML decoders with a competing discriminator network that tries to distinguish between codewords and noisy words.
Our framework is game-theoretic, motivated by generative adversarial networks (GANs).
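A minimal sketch of that game, under assumed shapes and losses: the discriminator learns to separate true codewords from decoded (noisy) words, and the decoder is additionally rewarded when its outputs pass as codewords.
```python
import torch
import torch.nn as nn

n = 16                                  # illustrative block length
disc = nn.Sequential(nn.Linear(n, 64), nn.ReLU(), nn.Linear(64, 1))
bce = nn.BCEWithLogitsLoss()

def discriminator_loss(codewords, decoded):
    """Train the discriminator to output 1 on codewords, 0 on decoder output."""
    real = bce(disc(codewords), torch.ones(codewords.size(0), 1))
    fake = bce(disc(decoded.detach()), torch.zeros(decoded.size(0), 1))
    return real + fake

def decoder_adversarial_loss(decoded):
    """Adversarial term added to the decoder's usual reconstruction loss."""
    return bce(disc(decoded), torch.ones(decoded.size(0), 1))
```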
arXiv Detail & Related papers (2021-12-21T19:14:44Z)
- Variational Autoencoders: A Harmonic Perspective [79.49579654743341]
We study Variational Autoencoders (VAEs) from the perspective of harmonic analysis.
We show that the encoder variance of a VAE controls the frequency content of the functions parameterised by the VAE encoder and decoder neural networks.
arXiv Detail & Related papers (2021-05-31T10:39:25Z)
- Pruning Neural Belief Propagation Decoders [77.237958592189]
We introduce a method to tailor an overcomplete parity-check matrix to (neural) BP decoding using machine learning.
We achieve performance within 0.27 dB and 1.5 dB of maximum-likelihood (ML) performance while reducing the complexity of the decoder.
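A hedged sketch of one pruning step, assuming (this is not stated in the summary) that the selection criterion is the magnitude of a learned per-check importance weight: keep the strongest rows of the overcomplete matrix and drop the rest.
```python
import numpy as np

def prune_check_matrix(H_oc, row_weights, keep):
    """Keep the `keep` rows of an overcomplete check matrix whose learned
    importance weights are largest (magnitude criterion is an assumption)."""
    order = np.argsort(row_weights)[::-1]   # most important rows first
    return H_oc[np.sort(order[:keep])]      # preserve original row order
```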
arXiv Detail & Related papers (2020-01-21T12:05:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.