Adversarial Neural Networks for Error Correcting Codes
- URL: http://arxiv.org/abs/2112.11491v1
- Date: Tue, 21 Dec 2021 19:14:44 GMT
- Title: Adversarial Neural Networks for Error Correcting Codes
- Authors: Hung T. Nguyen, Steven Bottone, Kwang Taik Kim, Mung Chiang, H.
Vincent Poor
- Abstract summary: We introduce a general framework to boost the performance and applicability of machine learning (ML) models.
We propose to combine ML decoders with a competing discriminator network that tries to distinguish between codewords and noisy words.
Our framework is game-theoretic, motivated by generative adversarial networks (GANs).
- Score: 76.70040964453638
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Error correcting codes are a fundamental component in modern-day
communication systems, demanding extremely high throughput, ultra-reliability
and low latency. Recent approaches using machine learning (ML) models as the
decoders offer both improved performance and great adaptability to unknown
environments, where traditional decoders struggle. We introduce a general
framework to further boost the performance and applicability of ML models. We
propose to combine ML decoders with a competing discriminator network that
tries to distinguish between codewords and noisy words, and, hence, guides the
decoding models to recover transmitted codewords. Our framework is
game-theoretic, motivated by generative adversarial networks (GANs), with the
decoder and discriminator competing in a zero-sum game. The decoder learns to
simultaneously decode and generate codewords while the discriminator learns to
tell the differences between decoded outputs and codewords. Thus, the decoder
is able to decode noisy received signals into codewords, increasing the
probability of successful decoding. We show a strong connection of our
framework with the optimal maximum likelihood decoder by proving that this
decoder defines a Nash equilibrium point of our game. Hence, training to
equilibrium has a good possibility of achieving the optimal maximum likelihood
performance. Moreover, our framework does not require training labels, which
are typically unavailable during communications, and, thus, seemingly can be
trained online and adapt to channel dynamics. To demonstrate the performance of
our framework, we combine it with recent neural decoders and show
improved performance compared to the original models and traditional decoding
algorithms on various codes.
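As a rough illustration of the adversarial setup described above, the sketch below trains a small neural decoder against a discriminator that tries to tell decoded outputs apart from valid codewords, so the decoder never needs the transmitted word as a training label. This is a minimal sketch under our own assumptions (a toy (7,4) Hamming code, BPSK over an AWGN channel, arbitrary network sizes and loss weights), not the authors' implementation; the paper's actual architectures and objective may differ.

    # Minimal sketch (not the authors' code): a GAN-style, label-free training
    # loop in which a neural decoder competes with a discriminator that tries
    # to distinguish decoded outputs from valid codewords.
    import itertools
    import torch
    import torch.nn as nn

    n, k = 7, 4  # toy (7,4) Hamming code, used here purely for illustration
    G = torch.tensor([[1, 0, 0, 0, 1, 1, 0],
                      [0, 1, 0, 0, 1, 0, 1],
                      [0, 0, 1, 0, 0, 1, 1],
                      [0, 0, 0, 1, 1, 1, 1]], dtype=torch.float32)
    msgs = torch.tensor(list(itertools.product([0, 1], repeat=k)), dtype=torch.float32)
    codebook = (msgs @ G) % 2  # all 2^k valid codewords

    decoder = nn.Sequential(nn.Linear(n, 64), nn.ReLU(), nn.Linear(64, n), nn.Sigmoid())
    disc = nn.Sequential(nn.Linear(n, 64), nn.ReLU(), nn.Linear(64, 1))
    opt_dec = torch.optim.Adam(decoder.parameters(), lr=1e-3)
    opt_disc = torch.optim.Adam(disc.parameters(), lr=1e-3)
    bce = nn.BCEWithLogitsLoss()

    for step in range(2000):
        # Simulated channel: random codewords, BPSK, AWGN. In deployment y is
        # simply the received signal; the transmitted word is never used as a
        # training label below.
        sent = codebook[torch.randint(len(codebook), (128,))]
        y = (1 - 2 * sent) + 0.5 * torch.randn_like(sent)

        # "Real" samples are arbitrary valid codewords, which requires only
        # knowledge of the code, not of what was actually transmitted.
        real = codebook[torch.randint(len(codebook), (128,))]

        # Discriminator step: codewords are "real", decoder outputs are "fake".
        fake = decoder(y).detach()
        d_loss = bce(disc(real), torch.ones(128, 1)) + bce(disc(fake), torch.zeros(128, 1))
        opt_disc.zero_grad()
        d_loss.backward()
        opt_disc.step()

        # Decoder step: produce outputs the discriminator accepts as codewords
        # while staying consistent with the received signal.
        x_hat = decoder(y)
        g_loss = bce(disc(x_hat), torch.ones(128, 1)) + ((1 - 2 * x_hat) - y).pow(2).mean()
        opt_dec.zero_grad()
        g_loss.backward()
        opt_dec.step()

The two objectives mirror the zero-sum game in the abstract: the discriminator's gain is the decoder's loss on the adversarial term, while the added consistency term (our choice) anchors the decoder's output to the channel observation.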
Related papers
- Learning Linear Block Error Correction Codes [62.25533750469467]
We propose for the first time a unified encoder-decoder training of binary linear block codes.
We also propose a novel Transformer model in which the self-attention masking is performed in a differentiable fashion for the efficient backpropagation of the code gradient.
arXiv Detail & Related papers (2024-05-07T06:47:12Z)
- A blockBP decoder for the surface code [0.0]
We present a new decoder for the surface code, which combines the accuracy of the tensor-network decoders with the efficiency and parallelism of the belief-propagation algorithm.
Our decoder is therefore a belief-propagation decoder that works in the degenerate maximum likelihood decoding framework.
arXiv Detail & Related papers (2024-02-07T13:32:32Z)
- Coding for Gaussian Two-Way Channels: Linear and Learning-Based Approaches [28.98777190628006]
We propose two different two-way coding strategies: linear coding and learning-based coding.
For learning-based coding, we introduce a novel recurrent neural network (RNN)-based coding architecture.
Our two-way coding methodologies significantly outperform conventional channel coding schemes in terms of sum-error performance.
arXiv Detail & Related papers (2023-12-31T12:40:18Z)
- The END: An Equivariant Neural Decoder for Quantum Error Correction [73.4384623973809]
We introduce a data-efficient neural decoder that exploits the symmetries of the problem.
We propose a novel equivariant architecture that achieves state-of-the-art accuracy compared to previous neural decoders.
arXiv Detail & Related papers (2023-04-14T19:46:39Z)
- Optimizing Serially Concatenated Neural Codes with Classical Decoders [8.692972779213932]
We show that a classical decoding algorithm can be applied to a non-trivial, real-valued neural code.
As the BCJR algorithm is fully differentiable, it is possible to train, or fine-tune, the neural encoder in an end-to-end fashion.
arXiv Detail & Related papers (2022-12-20T15:40:08Z)
- Denoising Diffusion Error Correction Codes [92.10654749898927]
Recently, neural decoders have demonstrated their advantage over classical decoding techniques.
Recent state-of-the-art neural decoders suffer from high complexity and lack the important iterative scheme characteristic of many legacy decoders.
We propose to employ denoising diffusion models for the soft decoding of linear codes at arbitrary block lengths.
arXiv Detail & Related papers (2022-09-16T11:00:50Z)
- Graph Neural Networks for Channel Decoding [71.15576353630667]
We showcase competitive decoding performance for various coding schemes, such as low-density parity-check (LDPC) and BCH codes.
The idea is to let a neural network (NN) learn a generalized message passing algorithm over a given graph.
We benchmark our proposed decoder against state-of-the-art in conventional channel decoding as well as against recent deep learning-based results.
arXiv Detail & Related papers (2022-07-29T15:29:18Z)
- Less is More: Pre-training a Strong Siamese Encoder Using a Weak Decoder [75.84152924972462]
Many real-world applications use Siamese networks to efficiently match text sequences at scale.
This paper pre-trains language models dedicated to sequence matching in Siamese architectures.
arXiv Detail & Related papers (2021-02-18T08:08:17Z)