Neural Polar Decoders for Deletion Channels
- URL: http://arxiv.org/abs/2507.12329v1
- Date: Wed, 16 Jul 2025 15:22:34 GMT
- Title: Neural Polar Decoders for Deletion Channels
- Authors: Ziv Aharoni, Henry D. Pfister
- Abstract summary: This paper introduces a neural polar decoder (NPD) for deletion channels with a constant deletion rate. Existing polar decoders for deletion channels exhibit high computational complexity of $O(N^4)$, where $N$ is the block length. We demonstrate that employing NPDs for deletion channels can reduce the computational complexity.
- Score: 10.362077573132634
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper introduces a neural polar decoder (NPD) for deletion channels with a constant deletion rate. Existing polar decoders for deletion channels exhibit high computational complexity of $O(N^4)$, where $N$ is the block length. This limits the application of polar codes for deletion channels to short-to-moderate block lengths. In this work, we demonstrate that employing NPDs for deletion channels can reduce the computational complexity. First, we extend the architecture of the NPD to support deletion channels. Specifically, the NPD architecture consists of four neural networks (NNs), each replicating a fundamental successive cancellation (SC) decoder operation. To support deletion channels, we change the architecture of only one of them. The computational complexity of the NPD is $O(AN\log N)$, where the parameter $A$ represents a computational budget determined by the user and is independent of the channel. We evaluate the extended NPD on deletion channels with deletion rates $\delta\in\{0.01, 0.1\}$ and verify it against the ground truth given by the trellis decoder of Tal et al. We further show that, due to the reduced complexity of the NPD, we are able to incorporate list decoding and further improve performance. We believe that the extended NPD presented here could have applications in future technologies like DNA storage.
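For intuition, the sketch below illustrates the kind of architecture the abstract describes: a successive cancellation (SC) decoder whose core operations (channel embedding, check-node, bit-node, and soft decision) are replaced by small neural networks, so the SC recursion and its $O(N\log N)$ structure are preserved and the per-node cost is the NN budget $A$. This is a minimal, hypothetical PyTorch sketch, not the authors' implementation: the module names, embedding dimensions, frozen-bit pattern, SC pairing convention, and the handling of deletions (here reduced to embedding one observation per position) are all illustrative assumptions.

```python
import torch
import torch.nn as nn


def mlp(d_in, d_out, hidden=32):
    return nn.Sequential(nn.Linear(d_in, hidden), nn.ReLU(), nn.Linear(hidden, d_out))


class NeuralSCDecoder(nn.Module):
    """SC decoder skeleton in which the core operations are small neural networks.

    The recursion is the standard SC recursion, so the complexity is O(A * N log N),
    where A is the (user-chosen) cost of one network evaluation.
    """

    def __init__(self, d=16):
        super().__init__()
        self.embed = mlp(1, d)        # channel observation -> embedding
                                      # (for deletion channels, this is the network one would change)
        self.check = mlp(2 * d, d)    # "check-node" operation on a pair of embeddings (f-function analogue)
        self.bit = mlp(2 * d + 1, d)  # "bit-node" operation, conditioned on left partial sums (g-function analogue)
        self.soft = mlp(d, 1)         # embedding -> LLR (soft decision)

    def decode(self, e, frozen):
        """Decode a sub-block from its embeddings e ([n, d]); returns (u_hat, partial_sums)."""
        n = e.shape[0]
        if n == 1:
            llr = self.soft(e).view(())
            u = torch.zeros(1) if frozen[0] else (llr < 0).float().view(1)
            return u, u.clone()
        top, bot = e[: n // 2], e[n // 2:]
        # Left branch: neural analogue of the f-function on pairs of embeddings.
        u_l, x_l = self.decode(self.check(torch.cat([top, bot], dim=-1)), frozen[: n // 2])
        # Right branch: neural analogue of the g-function, conditioned on the left partial sums.
        u_r, x_r = self.decode(self.bit(torch.cat([top, bot, x_l.unsqueeze(-1)], dim=-1)),
                               frozen[n // 2:])
        return torch.cat([u_l, u_r]), torch.cat([(x_l + x_r) % 2, x_r])


# Toy usage: N = 8, half the positions frozen to zero.
decoder = NeuralSCDecoder(d=16)
y = torch.randn(8, 1)                                   # stand-in channel observations
frozen = torch.tensor([1, 1, 1, 0, 1, 0, 0, 0]).bool()
u_hat, _ = decoder.decode(decoder.embed(y), frozen)
```

List decoding, mentioned in the abstract, would wrap this recursion with path management (as in SC list decoding) and is omitted here.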
Related papers
- Neural Polar Decoders for DNA Data Storage [10.362077573132634]
Synchronization errors, such as insertions and deletions, present a fundamental challenge in DNA-based data storage systems. We propose a data-driven approach based on neural polar decoders (NPDs) to design low-complexity decoders for channels with synchronization errors.
arXiv Detail & Related papers (2025-06-20T15:26:38Z) - Data-Driven Neural Polar Codes for Unknown Channels With and Without Memory [20.793209871685445]
We propose a data-driven methodology for designing polar codes for channels with and without memory.
The proposed method leverages the structure of the successive cancellation (SC) decoder to devise a neural SC (NSC) decoder.
The NSC decoder uses neural networks (NNs) to replace the core elements of the original SC decoder, the check-node, the bit-node and the soft decision.
arXiv Detail & Related papers (2023-09-06T16:44:08Z) - Sharp Lower Bounds on Interpolation by Deep ReLU Neural Networks at Irregularly Spaced Data [2.7195102129095003]
Deep ReLU neural networks can interpolate values at $N$ datapoints which are separated by a distance $\delta$.
We show that $\Omega(N)$ parameters are required in the regime where $\delta$ is exponentially small in $N$.
As an application we give a lower bound on the approximation rates that deep ReLU neural networks can achieve for Sobolev spaces at the embedding endpoint.
arXiv Detail & Related papers (2023-02-02T02:46:20Z) - Data-free Backdoor Removal based on Channel Lipschitzness [8.273169655380896]
Recent studies have shown that Deep Neural Networks (DNNs) are vulnerable to backdoor attacks.
In this work, we introduce a novel concept called Channel Lipschitz Constant (CLC), which is defined as the Lipschitz constant of the mapping from the input images to the output of each channel.
Since UCLC can be directly calculated from the weight matrices, we can detect potential backdoor channels in a data-free manner (a simplified computation is sketched after this list).
arXiv Detail & Related papers (2022-08-05T11:46:22Z) - Graph Neural Networks for Channel Decoding [71.15576353630667]
We showcase competitive decoding performance for various coding schemes, such as low-density parity-check (LDPC) and BCH codes.
The idea is to let a neural network (NN) learn a generalized message passing algorithm over a given graph.
We benchmark our proposed decoder against state-of-the-art in conventional channel decoding as well as against recent deep learning-based results.
arXiv Detail & Related papers (2022-07-29T15:29:18Z) - Two-Timescale End-to-End Learning for Channel Acquisition and Hybrid Precoding [94.40747235081466]
We propose an end-to-end deep learning-based joint transceiver design algorithm for millimeter wave (mmWave) massive multiple-input multiple-output (MIMO) systems.
We develop a DNN architecture that maps the received pilots into feedback bits at the receiver, and then further maps the feedback bits into the hybrid precoder at the transmitter.
arXiv Detail & Related papers (2021-10-22T20:49:02Z) - GDP: Stabilized Neural Network Pruning via Gates with Differentiable Polarization [84.57695474130273]
Gate-based or importance-based pruning methods aim to remove the least important channels.
GDP can be plugged before convolutional layers without bells and whistles, to control the on-and-off of each channel.
Experiments conducted over CIFAR-10 and ImageNet datasets show that the proposed GDP achieves the state-of-the-art performance.
arXiv Detail & Related papers (2021-09-06T03:17:10Z) - Decoding 5G-NR Communications via Deep Learning [6.09170287691728]
We propose to use Autoencoding Neural Networks (ANN) jointly with a Deep Neural Network (DNN) to construct Autoencoding Deep Neural Networks (ADNN) for demapping and decoding.
Results show that, for a given BER target, the required Signal to Noise Ratio (SNR) is reduced by $3$ dB in Additive White Gaussian Noise (AWGN) channels.
arXiv Detail & Related papers (2020-07-15T12:00:20Z) - You Only Spike Once: Improving Energy-Efficient Neuromorphic Inference to ANN-Level Accuracy [51.861168222799186]
Spiking Neural Networks (SNNs) are a type of neuromorphic, or brain-inspired, network.
SNNs are sparse, accessing very few weights, and typically only use addition operations instead of the more power-intensive multiply-and-accumulate operations.
In this work, we aim to overcome the limitations of TTFS-encoded neuromorphic systems.
arXiv Detail & Related papers (2020-06-03T15:55:53Z) - Pruning Neural Belief Propagation Decoders [77.237958592189]
We introduce a method to tailor an overcomplete parity-check matrix to (neural) BP decoding using machine learning.
We achieve performance within 0.27 dB and 1.5 dB of the ML performance while reducing the complexity of the decoder.
arXiv Detail & Related papers (2020-01-21T12:05:46Z) - Discrimination-aware Network Pruning for Deep Model Compression [79.44318503847136]
Existing pruning methods either train from scratch with sparsity constraints or minimize the reconstruction error between the feature maps of the pre-trained models and the compressed ones.
We propose a simple-yet-effective method called discrimination-aware channel pruning (DCP) to choose the channels that actually contribute to the discriminative power.
Experiments on both image classification and face recognition demonstrate the effectiveness of our methods.
arXiv Detail & Related papers (2020-01-04T07:07:41Z)
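As a side note on the "Data-free Backdoor Removal based on Channel Lipschitzness" entry above, the following is a minimal sketch of how a per-channel Lipschitz score can be computed purely from a layer's weights. The exact index (UCLC) and threshold used in that paper may differ; the spectral-norm score and the mean-plus-k-sigma outlier rule below are simplifying assumptions.

```python
import torch
import torch.nn as nn


def channel_lipschitz_scores(conv: nn.Conv2d) -> torch.Tensor:
    """One data-free score per output channel: the spectral norm of that channel's
    kernel reshaped to [in_channels, kH*kW]. This is a weight-only proxy for the
    channel's Lipschitz constant (the exact bound in the paper may differ)."""
    w = conv.weight.detach()                      # [out, in, kH, kW]
    mats = w.flatten(start_dim=2)                 # [out, in, kH*kW]
    return torch.linalg.matrix_norm(mats, ord=2)  # spectral norm, batched over channels


def suspicious_channels(conv: nn.Conv2d, k: float = 3.0) -> torch.Tensor:
    """Flag channels whose score is an outlier (greater than mean + k * std)."""
    s = channel_lipschitz_scores(conv)
    return torch.nonzero(s > s.mean() + k * s.std()).flatten()


# Toy usage: zero out (prune) the flagged channels of a convolutional layer.
conv = nn.Conv2d(3, 64, kernel_size=3, padding=1)
idx = suspicious_channels(conv)
with torch.no_grad():
    conv.weight[idx] = 0.0
    if conv.bias is not None:
        conv.bias[idx] = 0.0
```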
This list is automatically generated from the titles and abstracts of the papers on this site.