Machine Learning-Aided Efficient Decoding of Reed-Muller Subcodes
- URL: http://arxiv.org/abs/2301.06251v2
- Date: Tue, 1 Aug 2023 00:45:25 GMT
- Title: Machine Learning-Aided Efficient Decoding of Reed-Muller Subcodes
- Authors: Mohammad Vahid Jamali, Xiyang Liu, Ashok Vardhan Makkuva, Hessam
Mahdavifar, Sewoong Oh, and Pramod Viswanath
- Abstract summary: Reed-Muller (RM) codes achieve the capacity of general binary-input memoryless symmetric channels.
RM codes only admit limited sets of rates.
Efficient decoders are available for RM codes at finite lengths.
- Score: 59.55193427277134
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reed-Muller (RM) codes achieve the capacity of general binary-input
memoryless symmetric channels and are conjectured to have a comparable
performance to that of random codes in terms of scaling laws. However, such
results are established assuming maximum-likelihood decoders for general code
parameters. Also, RM codes only admit limited sets of rates. Efficient decoders,
such as the successive cancellation list (SCL) decoder and the recently
introduced recursive projection-aggregation (RPA) decoders, are available for RM codes at
finite lengths. In this paper, we focus on subcodes of RM codes with flexible
rates. We first extend the RPA decoding algorithm to RM subcodes. To lower the
complexity of our decoding algorithm, referred to as subRPA, we investigate
different approaches to prune the projections. Next, we derive the
soft-decision based version of our algorithm, called soft-subRPA, that not only
improves upon the performance of subRPA but also enables a differentiable
decoding algorithm. Building upon the soft-subRPA algorithm, we then provide a
framework for training a machine learning (ML) model to search for
"good" sets of projections that minimize the decoding error rate.
Training our ML model enables achieving performance very close to that of
full-projection decoding with a significantly smaller number of projections. We
also show that the choice of the projections in decoding RM subcodes matters
significantly, and our ML-aided projection pruning scheme is able to find a
"good" selection, i.e., with negligible performance degradation compared
to the full-projection case, given a reasonable number of projections.
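
As a rough illustration of the ML-aided projection pruning described above, the following PyTorch sketch attaches a learnable weight to each candidate projection and aggregates the per-projection soft estimates with those weights, so that a decoding loss can drive unhelpful projections toward zero weight. It is a minimal sketch under simplifying assumptions, not the authors' soft-subRPA implementation: the names (ProjectionPruner, soft_decode_projection), the coordinate-pairing form of the projections, and the box-plus stub standing in for decoding the projected code are all hypothetical.

```python
import torch
import torch.nn as nn

class ProjectionPruner(nn.Module):
    def __init__(self, projections):
        super().__init__()
        # projections: list of LongTensors of shape (2, n//2); each tensor pairs
        # up the n code coordinates, a stand-in for the coset projections used
        # by RPA-style decoders.
        self.projections = projections
        # One learnable logit per candidate projection: sigmoid(logit) acts as a
        # soft keep/drop weight, which is the quantity being pruned.
        self.logits = nn.Parameter(torch.zeros(len(projections)))

    def soft_decode_projection(self, llr, proj):
        a, b = llr[..., proj[0]], llr[..., proj[1]]
        # Box-plus (min-sum) combination of the paired LLRs: a toy placeholder
        # for actually soft-decoding the projected code.
        proj_llr = torch.sign(a) * torch.sign(b) * torch.minimum(a.abs(), b.abs())
        # Scatter the projected estimate back to both parent coordinates so the
        # "vote" has the same length as the original LLR vector.
        vote = torch.zeros_like(llr)
        vote[..., proj[0]] = proj_llr
        vote[..., proj[1]] = proj_llr
        return vote

    def forward(self, llr):
        weights = torch.sigmoid(self.logits)          # soft projection selection
        votes = torch.stack([self.soft_decode_projection(llr, p)
                             for p in self.projections])
        # Weighted aggregation of the per-projection votes into refined LLRs.
        agg = (weights.view(-1, *([1] * llr.dim())) * votes).sum(0) / weights.sum()
        return llr + agg

# Toy usage: n = 8 coordinates, two hypothetical pairings as "projections".
n = 8
projections = [torch.stack([torch.arange(0, n, 2), torch.arange(1, n, 2)]),
               torch.tensor([[0, 1, 2, 3], [4, 5, 6, 7]])]
pruner = ProjectionPruner(projections)
refined_llr = pruner(torch.randn(4, n))               # batch of 4 noisy LLR vectors
```

Training such a module end to end (e.g. with a binary cross-entropy loss between the refined LLRs and the transmitted codeword) and then keeping only the highest-weight projections at inference time mirrors the kind of learned projection selection the paper describes.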
Related papers
- Limitations of the decoding-to-LPN reduction via code smoothing [59.90381090395222]
The Learning Parity with Noise (LPN) problem underlies several classic cryptographic primitives.
This paper attempts to find a reduction to LPN from the decoding problem of linear codes, for which several hardness results exist.
We characterize the efficiency of the reduction in terms of the parameters of the decoding and LPN problems.
arXiv Detail & Related papers (2024-08-07T12:54:43Z)
- Bit-flipping Decoder Failure Rate Estimation for (v,w)-regular Codes [84.0257274213152]
We propose a new technique to provide accurate estimates of the decoding failure rate (DFR) of a two-iteration (parallel) bit-flipping decoder.
We validate our results by comparing the modeled and simulated syndrome weight, the distribution of incorrectly guessed error bits at the end of the first iteration, and the two-iteration DFR (a minimal bit-flipping sketch appears after this list).
arXiv Detail & Related papers (2024-01-30T11:40:24Z)
- Reduction from sparse LPN to LPN, Dual Attack 3.0 [1.9106435311144374]
A new algorithm, called RLPN-decoding, which relies on a completely different approach, was recently introduced.
The algorithm proposed in this paper significantly outperforms ISD algorithms and RLPN for code rates smaller than 0.42.
This algorithm can be viewed as the code-based cryptography cousin of recent dual attacks in lattice-based cryptography.
arXiv Detail & Related papers (2023-12-01T17:35:29Z)
- Deep Learning Assisted Multiuser MIMO Load Modulated Systems for Enhanced Downlink mmWave Communications [68.96633803796003]
This paper is focused on multiuser load modulation arrays (MU-LMAs) which are attractive due to their low system complexity and reduced cost for millimeter wave (mmWave) multi-input multi-output (MIMO) systems.
The existing precoding algorithm for downlink MU-LMA relies on a sub-array structured (SAS) transmitter which may suffer from decreased degrees of freedom and complex system configuration.
In this paper, we conceive an MU-LMA system employing a full-array structured (FAS) transmitter and propose two algorithms accordingly.
arXiv Detail & Related papers (2023-11-08T08:54:56Z)
- Efficient Syndrome Decoder for Heavy Hexagonal QECC via Machine Learning [1.1156329459915602]
Recent advances have shown that topological codes can be efficiently decoded by deploying machine learning (ML) techniques.
We first propose an ML-based decoder for the heavy hexagonal code and establish its efficiency in terms of threshold and pseudo-threshold values.
A novel technique based on rank to determine the equivalent error classes is presented, which is empirically faster than the one based on linear search.
arXiv Detail & Related papers (2022-10-18T10:16:14Z)
- Conservation laws and quantum error correction: towards a generalised matching decoder [2.1756081703276]
We explore decoding algorithms for the surface code, a prototypical quantum low-density parity-check code.
The decoder works by exploiting underlying structure that arises due to materialised symmetries among surface-code stabilizer elements.
We propose a systematic way of constructing a minimum-weight perfect-matching decoder for codes with certain characteristic properties (a minimal matching sketch appears after this list).
arXiv Detail & Related papers (2022-07-13T18:00:00Z)
- Less is More: Pre-training a Strong Siamese Encoder Using a Weak Decoder [75.84152924972462]
Many real-world applications use Siamese networks to efficiently match text sequences at scale.
This paper pre-trains language models dedicated to sequence matching in Siamese architectures.
arXiv Detail & Related papers (2021-02-18T08:08:17Z)
- A PDD Decoder for Binary Linear Codes With Neural Check Polytope Projection [43.97522161614078]
We propose a PDD algorithm to address the fundamental polytope based maximum likelihood (ML) decoding problem.
We also propose to integrate machine learning techniques into the most time-consuming part of the PDD decoding algorithm.
We present a specially designed neural check polytope projection (NCPP) algorithm to decrease the decoding latency.
arXiv Detail & Related papers (2020-06-11T07:57:15Z)
- ADMM-based Decoder for Binary Linear Codes Aided by Deep Learning [40.25456611849273]
This work presents a deep neural network aided decoding algorithm for binary linear codes.
Based on the concept of deep unfolding, we design a decoding network by unfolding the alternating direction method of multipliers (ADMM)-penalized decoder.
Numerical results show that the resulting DL-aided decoders outperform the original ADMM-penalized decoder (a minimal deep-unfolding sketch appears after this list).
arXiv Detail & Related papers (2020-02-14T03:32:14Z)
- Pruning Neural Belief Propagation Decoders [77.237958592189]
We introduce a method to tailor an overcomplete parity-check matrix to (neural) BP decoding using machine learning.
We achieve performance within 0.27 dB and 1.5 dB of maximum-likelihood (ML) performance while reducing the complexity of the decoder.
arXiv Detail & Related papers (2020-01-21T12:05:46Z)
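
As referenced in the bit-flipping entry above, the following NumPy sketch shows a two-iteration parallel bit-flipping decoder together with a Monte-Carlo DFR estimate. The random sparse parity-check matrix, the error weight, and the majority-style flipping threshold are illustrative assumptions, not the (v,w)-regular construction or the closed-form DFR model of that paper.

```python
import numpy as np

def parallel_bit_flip(H, y, iterations=2):
    """Parallel bit-flipping: in each pass, flip every bit that violates
    more parity checks than a simple majority threshold allows."""
    x = y.copy()
    col_weight = H.sum(axis=0)                   # column weight (v for a (v,w)-regular H)
    for _ in range(iterations):
        syndrome = (H @ x) % 2                   # which checks are unsatisfied
        if not syndrome.any():
            break                                # already a codeword
        upc = H.T @ syndrome                     # unsatisfied-check count per bit
        x = (x + (upc > col_weight / 2)) % 2     # flip all offending bits in parallel
    return x

# Monte-Carlo DFR estimate with the all-zero codeword (illustrative parameters).
rng = np.random.default_rng(0)
n, r, t, trials = 200, 100, 10, 500
H = (rng.random((r, n)) < 0.05).astype(int)      # random sparse parity-check matrix
failures = 0
for _ in range(trials):
    e = np.bincount(rng.choice(n, size=t, replace=False), minlength=n)  # weight-t error
    failures += parallel_bit_flip(H, e).any()    # nonzero output = decoding failure
print("estimated DFR:", failures / trials)
```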
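
As referenced in the matching-decoder entry, here is a minimal sketch of the minimum-weight perfect-matching step that underlies surface-code decoding, using networkx. Pairing defects by Manhattan distance and ignoring lattice boundaries are simplifying assumptions; the symmetry-based construction in that paper is more general.

```python
import networkx as nx

def match_defects(defects):
    """defects: list of (row, col) positions where stabilizer measurements flipped.
    Returns a pairing of defects with minimum total Manhattan distance."""
    G = nx.Graph()
    for i, (ri, ci) in enumerate(defects):
        for j, (rj, cj) in enumerate(defects):
            if i < j:
                # Negate the distance: maximum-weight matching on negated weights
                # is minimum-weight matching on the original distances.
                G.add_edge(i, j, weight=-(abs(ri - rj) + abs(ci - cj)))
    pairing = nx.max_weight_matching(G, maxcardinality=True)
    return [(defects[i], defects[j]) for i, j in pairing]

# Four defects on a toy lattice; corrections would then be applied along paths
# connecting each matched pair.
print(match_defects([(0, 0), (0, 3), (2, 1), (5, 5)]))
```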
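
Finally, the deep-unfolding pattern referenced in the ADMM entry: a classical iterative update is laid out as a fixed number of layers whose step sizes become trainable parameters. A plain gradient step on a least-squares surrogate stands in for the ADMM-penalized update of the actual decoder; the class name and objective are illustrative assumptions.

```python
import torch
import torch.nn as nn

class UnfoldedDecoder(nn.Module):
    """Deep unfolding: T iterations of an update rule become T layers, each with
    its own learnable step size, trained end to end on decoding examples."""
    def __init__(self, num_layers=5):
        super().__init__()
        self.step = nn.Parameter(torch.full((num_layers,), 0.1))

    def forward(self, A, b, x0):
        x = x0
        for alpha in self.step:                  # one loop pass = one network layer
            x = x - alpha * (A.T @ (A @ x - b))  # gradient step on ||Ax - b||^2
        return x

# Toy usage: the unfolded network refines an initial guess toward A x = b.
A, b = torch.randn(6, 4), torch.randn(6)
net = UnfoldedDecoder()
x_hat = net(A, b, torch.zeros(4))
```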