FAID Diversity via Neural Networks
- URL: http://arxiv.org/abs/2105.04118v1
- Date: Mon, 10 May 2021 05:14:42 GMT
- Title: FAID Diversity via Neural Networks
- Authors: Xin Xiao, Nithin Raveendran, Bane Vasic, Shu Lin, and Ravi Tandon
- Abstract summary: We propose a new approach to design the decoder diversity of finite alphabet iterative decoders (FAIDs) for Low-Density Parity Check (LDPC) codes.
The proposed decoder diversity is achieved by training a recurrent quantized neural network (RQNN) to learn/design FAIDs.
- Score: 23.394836086114413
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Decoder diversity is a powerful error correction framework in which a
collection of decoders collaboratively correct a set of error patterns
otherwise uncorrectable by any individual decoder. In this paper, we propose a
new approach to design the decoder diversity of finite alphabet iterative
decoders (FAIDs) for Low-Density Parity Check (LDPC) codes over the binary
symmetric channel (BSC), for the purpose of lowering the error floor while
guaranteeing the waterfall performance. The proposed decoder diversity is
achieved by training a recurrent quantized neural network (RQNN) to
learn/design FAIDs. We demonstrated for the first time that a machine-learned
decoder can surpass in performance a man-made decoder of the same complexity.
As RQNNs can model a broad class of FAIDs, they are capable of learning an
arbitrary FAID. To provide sufficient knowledge of the error floor to the RQNN,
the training sets are constructed by sampling from the set of most problematic
error patterns - trapping sets. In contrast to the existing methods that use
the cross-entropy function as the loss function, we introduce a
frame-error-rate (FER) based loss function to train the RQNN with the objective
of correcting specific error patterns rather than reducing the bit error rate
(BER). The examples and simulation results show that the RQNN-aided decoder
diversity increases the error correction capability of LDPC codes and lowers
the error floor.
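
As a rough illustration of the decoder-diversity idea, the sketch below runs a small sequence of FAIDs on the BSC until one of them converges to a codeword. The toy parity-check matrix and the offset-min-sum family are illustrative assumptions, not taken from the paper: offset min-sum with clipped messages is one simple instance of the broad FAID class, whereas the paper's RQNN learns far more general variable-node update maps.

```python
import numpy as np

# Toy 3 x 6 parity-check matrix, illustrative only (not a code from the paper).
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]], dtype=int)

def faid_decode(y, H, offset, n_iters=20, L=3):
    """One decoder from a simple parametric FAID family: offset min-sum with
    messages clipped to the finite alphabet {-L, ..., +L}. `offset` selects
    the decoder within the family."""
    m, n = H.shape
    chan = np.where(y == 0, L, -L)                 # +/-L channel values (BSC)
    v2c = (H * chan).astype(float)                 # initial var-to-check msgs
    hard = y.copy()
    for _ in range(n_iters):
        # Check-node update: sign product times (min magnitude - offset).
        c2v = np.zeros_like(v2c)
        for i in range(m):
            idx = np.flatnonzero(H[i])
            for j in idx:
                others = idx[idx != j]
                mag = max(np.abs(v2c[i, others]).min() - offset, 0)
                c2v[i, j] = np.prod(np.sign(v2c[i, others])) * mag
        total = chan + c2v.sum(axis=0)             # posterior per bit
        hard = (total < 0).astype(int)
        if not np.any((H @ hard) % 2):             # zero syndrome: codeword
            return hard, True
        # Variable-node update: clip extrinsic sums to the finite alphabet.
        for j in range(n):
            idx = np.flatnonzero(H[:, j])
            for i in idx:
                v2c[i, j] = np.clip(total[j] - c2v[i, j], -L, L)
    return hard, False

def diversity_decode(y, H, offsets=(0, 1, 2)):
    """Decoder diversity: run the FAIDs in sequence until one converges."""
    for off in offsets:
        word, ok = faid_decode(y, H, offset=off)
        if ok:
            return word, ok
    return word, ok

y = np.array([1, 0, 0, 0, 0, 0])   # all-zero codeword with bit 0 flipped
word, ok = diversity_decode(y, H)
```

In the paper, the constituent FAIDs are chosen so that error patterns missed by one decoder, in particular trapping-set errors, are corrected by another.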
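The FER-oriented loss can likewise be sketched. A BER-style cross-entropy averages errors over all bits, whereas a frame is in error as soon as any single bit is wrong, so a smooth per-frame maximum captures the objective better. The surrogate below, including its temperature `tau`, is an assumed stand-in rather than the paper's exact loss.

```python
import torch

def fer_surrogate_loss(soft_bits, targets, tau=0.1):
    """FER-style surrogate loss (an assumed stand-in, not the paper's exact
    loss). soft_bits in (0, 1) and targets in {0, 1} have shape (batch, n)."""
    bit_err = (soft_bits - targets).abs()          # soft per-bit error
    # Smooth per-frame maximum over bits: log-sum-exp -> max as tau -> 0.
    frame_err = tau * torch.logsumexp(bit_err / tau, dim=1)
    # A BER-oriented loss would instead average bit_err over all bits,
    # trading away a few hard frames for many easy bits.
    return frame_err.mean()
```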
Related papers
- On the Design and Performance of Machine Learning Based Error Correcting Decoders [3.8289109929360245]
We first consider the so-called single-label neural network (SLNN) and the multi-label neural network (MLNN) decoders which have been reported to achieve near maximum likelihood (ML) performance.
We then turn our attention to two transformer-based decoders: the error correction code transformer (ECCT) and the cross-attention message passing transformer (CrossMPT).
arXiv Detail & Related papers (2024-10-21T11:23:23Z) - Transformer-QEC: Quantum Error Correction Code Decoding with Transferable Transformers [18.116657629047253]
We introduce a transformer-based Quantum Error Correction (QEC) decoder.
It employs self-attention to achieve a global receptive field across all input syndromes.
It incorporates a mixed loss training approach, combining both local physical error and global parity label losses.
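A plausible form of such a mixed loss is sketched below: a local term on per-qubit physical-error predictions plus a global term on the frame-level parity (logical) label. The weighting `alpha` and the tensor shapes are assumptions, not taken from the paper.

```python
import torch.nn.functional as F

def mixed_qec_loss(local_logits, local_labels, global_logit, parity_label,
                   alpha=0.5):
    """Mixed loss sketch: local per-qubit error prediction plus a global
    parity-label term. local_logits: (batch, n_qubits), global_logit: (batch,)."""
    local = F.binary_cross_entropy_with_logits(local_logits, local_labels)
    glob = F.binary_cross_entropy_with_logits(global_logit, parity_label)
    return alpha * local + (1 - alpha) * glob
```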
arXiv Detail & Related papers (2023-11-27T18:52:25Z) - Testing the Accuracy of Surface Code Decoders [55.616364225463066]
Large-scale, fault-tolerant quantum computations will be enabled by quantum error-correcting codes (QECC).
This work presents the first systematic technique to test the accuracy and effectiveness of different QECC decoding schemes.
arXiv Detail & Related papers (2023-11-21T10:22:08Z) - Learned layered coding for Successive Refinement in the Wyner-Ziv Problem [18.134147308944446]
We propose a data-driven approach to explicitly learn the progressive encoding of a continuous source.
This setup corresponds to successive refinement in the Wyner-Ziv coding problem.
We demonstrate that RNNs can explicitly retrieve layered binning solutions akin to scalable nested quantization.
arXiv Detail & Related papers (2023-11-06T12:45:32Z) - The END: An Equivariant Neural Decoder for Quantum Error Correction [73.4384623973809]
We introduce a data efficient neural decoder that exploits the symmetries of the problem.
We propose a novel equivariant architecture that achieves state-of-the-art accuracy compared to previous neural decoders.
arXiv Detail & Related papers (2023-04-14T19:46:39Z) - Denoising Diffusion Error Correction Codes [92.10654749898927]
Recently, neural decoders have demonstrated their advantage over classical decoding techniques.
Recent state-of-the-art neural decoders suffer from high complexity and lack the important iterative scheme characteristic of many legacy decoders.
We propose to employ denoising diffusion models for the soft decoding of linear codes at arbitrary block lengths.
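Schematically, the decoding loop resembles reverse diffusion over the received soft word. The sketch below is a drastically simplified stand-in: the real model conditions on the code structure (e.g. the syndrome) and uses a proper noise schedule.

```python
import torch
import torch.nn as nn

class Denoiser(nn.Module):
    """Tiny stand-in for the learned noise predictor; the real model is far
    larger and conditions on the code's parity-check structure."""
    def __init__(self, n):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n, 4 * n), nn.ReLU(),
                                 nn.Linear(4 * n, n))

    def forward(self, x):
        return self.net(x)

def diffusion_decode(y, model, steps=10, step_size=0.1):
    """Iteratively denoise the received soft word y (batch, n): each step
    subtracts a fraction of the predicted residual noise, loosely following
    the reverse-diffusion recipe (schedules are simplified away)."""
    x = y.clone()
    for _ in range(steps):
        x = x - step_size * model(x)   # one simplified reverse step
    return torch.sign(x)               # hard decisions on the code bits
```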
arXiv Detail & Related papers (2022-09-16T11:00:50Z) - Graph Neural Networks for Channel Decoding [71.15576353630667]
We showcase competitive decoding performance for various coding schemes, such as low-density parity-check (LDPC) and BCH codes.
The idea is to let a neural network (NN) learn a generalized message passing algorithm over a given graph.
We benchmark our proposed decoder against state-of-the-art in conventional channel decoding as well as against recent deep learning-based results.
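A common concrete instance of a learned message-passing decoder is weighted ("neural") min-sum over the Tanner graph, sketched below. It illustrates the idea rather than the paper's exact architecture; `H`, the messages, and the weights `w` are assumed to be torch tensors.

```python
import torch

def neural_minsum_iteration(llr, v2c, H, w):
    """One iteration of weighted min-sum on the Tanner graph of H.
    llr: (n,) channel LLRs, v2c: (m, n) variable-to-check messages,
    w: learnable (m, n) per-edge weights."""
    m, n = H.shape
    c2v = torch.zeros_like(v2c)
    for i in range(m):                              # check-node update
        idx = torch.nonzero(H[i]).flatten()
        for j in idx:
            others = idx[idx != j]
            sgn = torch.prod(torch.sign(v2c[i, others]))
            mag = torch.min(torch.abs(v2c[i, others]))
            c2v[i, j] = w[i, j] * sgn * mag         # learnable edge weight
    new_v2c = torch.zeros_like(v2c)
    for j in range(n):                              # variable-node update
        idx = torch.nonzero(H[:, j]).flatten()
        for i in idx:
            new_v2c[i, j] = llr[j] + c2v[idx, j].sum() - c2v[i, j]
    posterior = llr + c2v.sum(dim=0)                # soft decisions
    return new_v2c, posterior
```

Because the update keeps the Tanner-graph structure and only adds trainable weights, the learned decoder generalizes across noise levels much like classical belief propagation.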
arXiv Detail & Related papers (2022-07-29T15:29:18Z) - Boost decoding performance of finite geometry LDPC codes with deep learning tactics [3.1519370595822274]
We seek a low-complexity and high-performance decoder for a class of finite geometry LDPC codes.
The paper elaborates on how to generate high-quality training data effectively.
arXiv Detail & Related papers (2022-05-01T14:41:16Z) - Reducing Redundancy in the Bottleneck Representation of the Autoencoders [98.78384185493624]
Autoencoders are a type of unsupervised neural network that can be used to solve various tasks.
We propose a scheme to explicitly penalize feature redundancies in the bottleneck representation.
We tested our approach across different tasks: dimensionality reduction using three different datasets, image compression using the MNIST dataset, and image denoising using Fashion-MNIST.
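One natural way to penalize such redundancies, sketched below as an assumption rather than the paper's exact regularizer, is to suppress the off-diagonal correlations between bottleneck features.

```python
import torch

def redundancy_penalty(z, eps=1e-8):
    """Standardize the bottleneck features z (batch, d) and penalize their
    off-diagonal correlations, so each feature carries distinct information."""
    z = (z - z.mean(dim=0)) / (z.std(dim=0) + eps)
    corr = (z.T @ z) / z.shape[0]                   # d x d feature correlation
    off_diag = corr - torch.diag(torch.diag(corr))
    return (off_diag ** 2).sum()

# Typical use: loss = reconstruction_loss + lam * redundancy_penalty(bottleneck)
```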
arXiv Detail & Related papers (2022-02-09T18:48:02Z) - Adversarial Neural Networks for Error Correcting Codes [76.70040964453638]
We introduce a general framework to boost the performance and applicability of machine learning (ML) models.
We propose to combine ML decoders with a competing discriminator network that tries to distinguish between codewords and noisy words.
Our framework is game-theoretic, motivated by generative adversarial networks (GANs).
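The training loop then alternates between the two players, roughly as sketched below. The model and optimizer interfaces, the discriminator's (batch, 1) output shape, and the weighting `lam` are assumptions, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def adversarial_step(decoder, discriminator, y, c, opt_dec, opt_dis, lam=0.1):
    """One adversarial training step: y are noisy channel words, c the
    corresponding true codewords; discriminator maps a word to a real/fake logit."""
    batch = c.shape[0]
    real, fake = torch.ones(batch, 1), torch.zeros(batch, 1)
    # Discriminator step: separate true codewords from decoder outputs.
    x_hat = decoder(y).detach()
    d_loss = (F.binary_cross_entropy_with_logits(discriminator(c), real)
              + F.binary_cross_entropy_with_logits(discriminator(x_hat), fake))
    opt_dis.zero_grad()
    d_loss.backward()
    opt_dis.step()
    # Decoder step: reconstruct the codeword AND fool the discriminator.
    x_hat = decoder(y)
    g_loss = (F.mse_loss(x_hat, c)
              + lam * F.binary_cross_entropy_with_logits(discriminator(x_hat), real))
    opt_dec.zero_grad()
    g_loss.backward()
    opt_dec.step()
    return d_loss.item(), g_loss.item()
```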
arXiv Detail & Related papers (2021-12-21T19:14:44Z) - Error-rate-agnostic decoding of topological stabilizer codes [0.0]
We develop a decoder that depends on the bias, i.e., the relative probability of phase-flip to bit-flip errors, but is agnostic to error rate.
Our decoder is based on counting the number and effective weight of the most likely error chains in each equivalence class of a given syndrome.
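In schematic form, each equivalence class can be scored by the combined probability of its most likely chains, converted back to an effective weight; the sketch below ignores the phase/bit-flip bias that the actual decoder tracks.

```python
import math

def class_score(chain_weights, p):
    """Effective weight of an equivalence class, from the weights (number of
    elementary errors) of its most likely chains: sum their probabilities
    p**w and map the total back to the weight scale."""
    total = sum(p ** w for w in chain_weights)
    return math.log(total) / math.log(p)

def decode(classes, p=0.01):
    """Pick the equivalence class with the smallest effective weight.
    classes: dict mapping class label -> list of chain weights."""
    return min(classes, key=lambda label: class_score(classes[label], p))
```

For small error rates this ranking is dominated by the minimal chain weight and the number of chains attaining it, which is what makes an error-rate-agnostic formulation possible.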
arXiv Detail & Related papers (2021-12-03T15:45:12Z)
This list is automatically generated from the titles and abstracts of the papers on this site.