ProductAE: Towards Training Larger Channel Codes based on Neural Product
Codes
- URL: http://arxiv.org/abs/2110.04466v1
- Date: Sat, 9 Oct 2021 06:00:40 GMT
- Title: ProductAE: Towards Training Larger Channel Codes based on Neural Product
Codes
- Authors: Mohammad Vahid Jamali, Hamid Saber, Homayoon Hatami, Jung Hyun Bae
- Abstract summary: It is prohibitively complex to design and train relatively large neural channel codes via deep learning techniques.
In this paper, we construct ProductAEs, a computationally efficient family of deep-learning driven (encoder, decoder) pairs.
We show significant gains, over all ranges of signal-to-noise ratio (SNR), for a code of parameters $(100,225)$ and a moderate-length code of parameters $(196,441)$.
- Score: 9.118706387430885
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: There have been significant research activities in recent years to automate
the design of channel encoders and decoders via deep learning. Due to the
dimensionality challenge in channel coding, it is prohibitively complex to
design and train relatively large neural channel codes via deep learning
techniques. Consequently, most of the results in the literature are limited to
relatively short codes having less than 100 information bits. In this paper, we
construct ProductAEs, a computationally efficient family of deep-learning
driven (encoder, decoder) pairs that aim at enabling the training of
relatively large channel codes (both encoders and decoders) with a manageable
training complexity. We build upon the ideas from classical product codes, and
propose constructing large neural codes using smaller code components. More
specifically, instead of directly training the encoder and decoder for a large
neural code of dimension $k$ and blocklength $n$, we provide a framework that
requires training neural encoders and decoders for the code parameters
$(k_1,n_1)$ and $(k_2,n_2)$ such that $k_1 k_2=k$ and $n_1 n_2=n$. Our training
results show significant gains over polar codes under successive cancellation (SC)
decoding, across all ranges of signal-to-noise ratio (SNR), for a code of
parameters $(100,225)$ and a moderate-length code of parameters $(196,441)$.
Moreover, our results demonstrate meaningful gains over Turbo Autoencoder
(TurboAE) and state-of-the-art classical codes. This is the first work to
design product autoencoders and a pioneering work on training large channel
codes.
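To make the construction concrete, here is a minimal sketch of product-style encoding. The component encoders are toy placeholders (random linear maps with a tanh nonlinearity, an assumption for illustration only, not the paper's trained networks); the point is the row-then-column structure that turns two small $(k_2,n_2)$ and $(k_1,n_1)$ encoders into a $(k_1 k_2, n_1 n_2)$ code.
```python
import numpy as np

rng = np.random.default_rng(0)

def make_toy_encoder(k, n):
    # Placeholder for a trained neural component encoder: maps k +/-1 symbols
    # to n real-valued code symbols. Only the input/output shapes matter here.
    W = rng.normal(size=(k, n)) / np.sqrt(k)
    return lambda u: np.tanh(u @ W)

k1, n1 = 10, 15            # column component code (k1, n1)
k2, n2 = 10, 15            # row component code (k2, n2) -> overall (100, 225)

enc_row = make_toy_encoder(k2, n2)
enc_col = make_toy_encoder(k1, n1)

msg = rng.integers(0, 2, size=(k1, k2))   # k = k1*k2 = 100 message bits
u = 1.0 - 2.0 * msg                       # map bits {0,1} to symbols {+1,-1}

rows_encoded = np.stack([enc_row(u[i]) for i in range(k1)])            # k1 x n2
codeword = np.stack([enc_col(rows_encoded[:, j]) for j in range(n2)],
                    axis=1)                                            # n1 x n2
print(codeword.shape)      # (15, 15): n = n1*n2 = 225 real-valued symbols
```
Decoding mirrors this structure: component neural decoders operate on the columns and rows of the received $n_1 \times n_2$ array, so only the small $(k_1,n_1)$ and $(k_2,n_2)$ pairs ever need to be trained.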
Related papers
- Decoding Quasi-Cyclic Quantum LDPC Codes [23.22566380210149]
Quantum low-density parity-check (qLDPC) codes are an important component in the quest for fault tolerance.
Recent progress on qLDPC codes has led to constructions which are asymptotically good, and which admit linear-time decoders to correct errors affecting a constant fraction of codeword qubits.
In practice, the surface/toric codes, which are the product of two repetition codes, are still often the qLDPC codes of choice.
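To unpack the "product of two repetition codes" remark, the sketch below shows one standard way to form such a product (the hypergraph product), which yields the toric code's X- and Z-type checks when both components are cyclic repetition codes; the 3x3 size is only illustrative.
```python
import numpy as np

def repetition_parity(n):
    # Cyclic parity-check matrix of the length-n repetition code
    H = np.zeros((n, n), dtype=int)
    for i in range(n):
        H[i, i] = 1
        H[i, (i + 1) % n] = 1
    return H

H1, H2 = repetition_parity(3), repetition_parity(3)
m1, n1 = H1.shape
m2, n2 = H2.shape

# Hypergraph product: X- and Z-type checks of the resulting CSS quantum code
HX = np.hstack([np.kron(H1, np.eye(n2, dtype=int)),
                np.kron(np.eye(m1, dtype=int), H2.T)])
HZ = np.hstack([np.kron(np.eye(n1, dtype=int), H2),
                np.kron(H1.T, np.eye(m2, dtype=int))])

assert np.all((HX @ HZ.T) % 2 == 0)   # CSS condition: X and Z checks commute
print(HX.shape, HZ.shape)             # 9 checks each on 18 qubits (3x3 toric code)
```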
arXiv Detail & Related papers (2024-11-07T06:25:27Z)
- Learning Linear Block Error Correction Codes [62.25533750469467]
We propose for the first time a unified encoder-decoder training of binary linear block codes.
We also propose a novel Transformer model in which the self-attention masking is performed in a differentiable fashion for the efficient backpropagation of the code gradient.
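A rough illustration of the masking idea (a generic sketch, not the paper's exact architecture): replacing a hard 0/1 attention mask with a sigmoid of learnable logits keeps the masking step differentiable, so gradients can flow back through it.
```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
L, d = 6, 8                                      # sequence length, head dimension
Q, K, V = (rng.normal(size=(L, d)) for _ in range(3))

mask_logits = rng.normal(size=(L, L))            # learnable parameters in practice
soft_mask = 1.0 / (1.0 + np.exp(-mask_logits))   # differentiable mask in (0, 1)

scores = Q @ K.T / np.sqrt(d)
scores = scores + np.log(soft_mask + 1e-9)       # a hard mask would add -inf instead
out = softmax(scores) @ V
print(out.shape)
```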
arXiv Detail & Related papers (2024-05-07T06:47:12Z)
- ProductAE: Toward Deep Learning Driven Error-Correction Codes of Large Dimensions [8.710629810511252]
Product Autoencoder (ProductAE) is a computationally efficient family of deep-learning driven (encoder, decoder) pairs.
We build upon ideas from classical product codes and propose constructing large neural codes using smaller code components.
Our training results show successful training of ProductAEs of dimensions as large as $k = 300$ bits with meaningful performance gains.
arXiv Detail & Related papers (2023-03-29T03:10:09Z)
- Machine Learning-Aided Efficient Decoding of Reed-Muller Subcodes [59.55193427277134]
Reed-Muller (RM) codes achieve the capacity of general binary-input memoryless symmetric channels.
RM codes only admit limited sets of rates.
Efficient decoders are available for RM codes at finite lengths.
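The rate limitation follows directly from the code parameters: $RM(r,m)$ has blocklength $2^m$ and dimension $\sum_{i=0}^{r}\binom{m}{i}$, so for a fixed blocklength only $m+1$ rates are available. A quick check (illustrative script):
```python
from math import comb

def rm_params(r, m):
    # Blocklength and dimension of the Reed-Muller code RM(r, m)
    return 2 ** m, sum(comb(m, i) for i in range(r + 1))

m = 7
for r in range(m + 1):
    n, k = rm_params(r, m)
    print(f"RM({r},{m}): n={n}, k={k}, rate={k / n:.3f}")
```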
arXiv Detail & Related papers (2023-01-16T04:11:14Z)
- Optimizing Serially Concatenated Neural Codes with Classical Decoders [8.692972779213932]
We show that a classical decoding algorithm can be applied to a non-trivial, real-valued neural code.
As the BCJR algorithm is fully differentiable, it is possible to train, or fine-tune, the neural encoder in an end-to-end fashion.
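A toy PyTorch sketch of the end-to-end idea, with a plain linear soft-output head standing in for a differentiable decoder such as BCJR (the shapes and architecture are hypothetical): because every step is built from differentiable operations, the loss gradient reaches the encoder weights.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
k, n, sigma = 4, 8, 0.5

encoder = nn.Sequential(nn.Linear(k, 32), nn.ELU(), nn.Linear(32, n))  # toy neural encoder
decoder_head = nn.Linear(n, k)   # stand-in for a differentiable classical decoder

bits = torch.randint(0, 2, (64, k)).float()
received = encoder(bits) + sigma * torch.randn(64, n)   # AWGN channel
loss = F.binary_cross_entropy_with_logits(decoder_head(received), bits)
loss.backward()
print(encoder[0].weight.grad.abs().mean())   # nonzero: gradients reach the encoder
```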
arXiv Detail & Related papers (2022-12-20T15:40:08Z)
- A Scalable Graph Neural Network Decoder for Short Block Codes [49.25571364253986]
We propose a novel decoding algorithm for short block codes based on an edge-weighted graph neural network (EW-GNN).
The EW-GNN decoder operates on the Tanner graph with an iterative message-passing structure.
We show that the EW-GNN decoder outperforms the BP and deep-learning-based BP methods in terms of the decoding error rate.
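For intuition, the classical relative of this idea is weighted (normalized) min-sum decoding on the Tanner graph, where scalar weights scale the check-node messages; roughly speaking, EW-GNN-style decoders learn such weights per edge with a neural network. A minimal sketch with hand-set weights (assumed values, not from the paper):
```python
import numpy as np

def weighted_min_sum(H, llr, weights, iters=10):
    # Min-sum belief propagation on the Tanner graph of H; `weights` scales the
    # check-to-variable messages per iteration (hand-set here, learned in neural decoders).
    m, n = H.shape
    V = H * llr                         # variable-to-check messages, init to channel LLRs
    C = np.zeros_like(V, dtype=float)   # check-to-variable messages
    for it in range(iters):
        for i in range(m):              # check-node update (exclude the target edge)
            idx = np.flatnonzero(H[i])
            for j in idx:
                others = idx[idx != j]
                C[i, j] = (weights[it] * np.prod(np.sign(V[i, others]))
                           * np.min(np.abs(V[i, others])))
        for j in range(n):              # variable-node update (exclude the target edge)
            idx = np.flatnonzero(H[:, j])
            for i in idx:
                V[i, j] = llr[j] + C[idx, j].sum() - C[i, j]
    posterior = llr + C.sum(axis=0)
    return (posterior < 0).astype(int)  # hard decisions (positive LLR means bit 0)

# Example: (7,4) Hamming code, all-zero codeword sent over an AWGN channel
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
rng = np.random.default_rng(1)
rx = 1.0 + 0.8 * rng.normal(size=7)      # BPSK(+1) for all-zero bits, sigma = 0.8
print(weighted_min_sum(H, 2 * rx / 0.8 ** 2, weights=np.full(10, 0.8)))
```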
arXiv Detail & Related papers (2022-11-13T17:13:12Z)
- Tackling Long Code Search with Splitting, Encoding, and Aggregating [67.02322603435628]
We propose a new baseline SEA (Split, Encode and Aggregate) for long code search.
It splits long code into code blocks, encodes these blocks into embeddings, and aggregates them to obtain a comprehensive long code representation.
With GraphCodeBERT as the encoder, SEA achieves an overall mean reciprocal ranking score of 0.785, which is 10.1% higher than GraphCodeBERT on the CodeSearchNet benchmark.
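A schematic of the split-encode-aggregate flow (a deterministic hash-based embedding stands in for GraphCodeBERT, and simple mean pooling is assumed as the aggregator):
```python
import numpy as np

DIM = 8

def embed_token(token):
    # Deterministic toy embedding keyed on the token text; a real code encoder
    # such as GraphCodeBERT plays this role in the paper.
    rng = np.random.default_rng(abs(hash(token)) % (2 ** 32))
    return rng.normal(size=DIM)

def sea_encode(code_tokens, block_size=16):
    # Split the long code into blocks, encode each block, aggregate into one vector.
    blocks = [code_tokens[i:i + block_size]
              for i in range(0, len(code_tokens), block_size)]
    block_vecs = [np.mean([embed_token(t) for t in b], axis=0) for b in blocks]
    return np.mean(block_vecs, axis=0)

long_code = "def add ( a , b ) : return a + b".split() * 20   # toy long snippet
print(sea_encode(long_code).shape)   # (8,): one comprehensive representation
```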
arXiv Detail & Related papers (2022-08-24T02:27:30Z)
- Graph Neural Networks for Channel Decoding [71.15576353630667]
We showcase competitive decoding performance for various coding schemes, such as low-density parity-check (LDPC) and BCH codes.
The idea is to let a neural network (NN) learn a generalized message passing algorithm over a given graph.
We benchmark our proposed decoder against state-of-the-art in conventional channel decoding as well as against recent deep learning-based results.
arXiv Detail & Related papers (2022-07-29T15:29:18Z)
- KO codes: Inventing Nonlinear Encoding and Decoding for Reliable Wireless Communication via Deep-learning [76.5589486928387]
Landmark codes underpin reliable physical layer communication, e.g., Reed-Muller, BCH, Convolution, Turbo, LDPC and Polar codes.
In this paper, we construct KO codes, a computationally efficient family of deep-learning driven (encoder, decoder) pairs.
KO codes beat state-of-the-art Reed-Muller and Polar codes, under the low-complexity successive cancellation decoding.
arXiv Detail & Related papers (2021-08-29T21:08:30Z)
- Cyclically Equivariant Neural Decoders for Cyclic Codes [33.63188063525036]
We propose a novel neural decoder for cyclic codes by exploiting their cyclically invariant property.
Our new decoder consistently outperforms previous neural decoders when decoding cyclic codes.
Finally, we propose a list decoding procedure that can significantly reduce the decoding error probability for BCH codes and punctured RM codes.
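One way to see the cyclic invariance being exploited (a symmetrization sketch under assumed toy decoders, not the paper's weight-sharing construction): averaging shift-decode-unshift branches makes any soft-output decoder commute with cyclic shifts of the received word.
```python
import numpy as np

def cyclic_equivariant_decode(received, base_decoder):
    # Group-average over all cyclic shifts: decode each shifted word, undo the
    # shift, and average; the result is equivariant to cyclic shifts.
    n = len(received)
    branches = [np.roll(base_decoder(np.roll(received, s)), -s) for s in range(n)]
    return np.mean(branches, axis=0)

rng = np.random.default_rng(0)
y = rng.normal(size=7)
base = lambda v: np.tanh(v + 0.1 * np.arange(v.size))   # toy decoder, NOT shift-equivariant
soft = cyclic_equivariant_decode(y, base)
# Equivariance check: shifting the input shifts the output identically
assert np.allclose(np.roll(soft, 1), cyclic_equivariant_decode(np.roll(y, 1), base))
```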
arXiv Detail & Related papers (2021-05-12T09:41:13Z)
- General tensor network decoding of 2D Pauli codes [0.0]
We propose a decoder that approximates maximum likelihood decoding for 2D stabiliser and subsystem codes subject to Pauli noise.
We numerically demonstrate the power of this decoder by studying four classes of codes under three noise models.
We show that the thresholds yielded by our decoder are state-of-the-art, and numerically consistent with optimal thresholds where available.
arXiv Detail & Related papers (2021-01-11T19:00:03Z)