Limitations of the decoding-to-LPN reduction via code smoothing
- URL: http://arxiv.org/abs/2408.03742v2
- Date: Tue, 3 Sep 2024 01:12:22 GMT
- Title: Limitations of the decoding-to-LPN reduction via code smoothing
- Authors: Madhura Pathegama, Alexander Barg
- Abstract summary: The Learning Parity with Noise (LPN) problem underlies several classic cryptographic primitives.
This paper attempts to find a reduction from the decoding problem of linear codes, for which several hardness results exist.
We characterize the efficiency of the reduction in terms of the parameters of the decoding and LPN problems.
- Score: 59.90381090395222
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Learning Parity with Noise (LPN) problem underlies several classic cryptographic primitives. Researchers have endeavored to demonstrate the algorithmic difficulty of this problem by attempting to find a reduction from the decoding problem of linear codes, for which several hardness results exist. Earlier studies used code smoothing as a technical tool to achieve such reductions, showing that they are possible for codes with vanishing rate. This has left open the question of attaining a reduction with positive-rate codes. Addressing this case, we characterize the efficiency of the reduction in terms of the parameters of the decoding and LPN problems. As a conclusion, we isolate the parameter regimes for which a meaningful reduction is possible and the regimes for which its existence is unlikely.
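To make the problem in the abstract concrete, here is a minimal sketch of how LPN samples are generated: each sample is a uniformly random vector together with its inner product with a fixed secret, flipped independently with probability tau. This is a standard textbook formulation, not code from the paper; the function and variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def lpn_samples(secret, m, tau, rng):
    """Draw m LPN samples (a_i, <a_i, s> + e_i mod 2) with e_i ~ Bernoulli(tau)."""
    n = len(secret)
    A = rng.integers(0, 2, size=(m, n))       # uniformly random query vectors a_i
    e = (rng.random(m) < tau).astype(int)     # noise bits, each 1 with probability tau
    b = (A @ secret + e) % 2                  # noisy inner products over GF(2)
    return A, b

s = rng.integers(0, 2, size=16)               # hypothetical 16-bit secret
A, b = lpn_samples(s, 200, 0.125, rng)
```

With tau = 0 the samples are exact linear equations in the secret and Gaussian elimination recovers it; the hardness of LPN comes entirely from the noise.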
Related papers
- Quantum advantage from soft decoders [0.7728149002473877]
We provide improvements for some instantiations of the Optimal Polynomial Interpolation (OPI) problem.
Our results provide natural and convincing decoding problems for which we believe to have a quantum advantage.
In order to be able to use a decoder in the setting of Regev's reduction, we provide a novel reduction from a syndrome to a coset sampling problem.
arXiv Detail & Related papers (2024-11-19T15:12:03Z)
- Localized statistics decoding: A parallel decoding algorithm for quantum low-density parity-check codes [3.001631679133604]
We introduce localized statistics decoding for arbitrary quantum low-density parity-check codes.
Our decoder is more amenable to implementation on specialized hardware, positioning it as a promising candidate for decoding real-time syndromes from experiments.
arXiv Detail & Related papers (2024-06-26T18:00:09Z)
- Factor Graph Optimization of Error-Correcting Codes for Belief Propagation Decoding [62.25533750469467]
Low-Density Parity-Check (LDPC) codes possess several advantages over other families of codes.
The proposed approach is shown to outperform the decoding performance of existing popular codes by orders of magnitude.
arXiv Detail & Related papers (2024-06-09T12:08:56Z)
- Reduction from sparse LPN to LPN, Dual Attack 3.0 [1.9106435311144374]
A new algorithm called RLPN-decoding, which relies on a completely different approach, was recently introduced.
It significantly outperforms ISD algorithms and RLPN for code rates smaller than 0.42.
This algorithm can be viewed as the code-based cryptography cousin of recent dual attacks in lattice-based cryptography.
arXiv Detail & Related papers (2023-12-01T17:35:29Z)
- Machine Learning-Aided Efficient Decoding of Reed-Muller Subcodes [59.55193427277134]
Reed-Muller (RM) codes achieve the capacity of general binary-input memoryless symmetric channels.
However, RM codes admit only a limited set of rates.
Efficient decoders are available for RM codes at finite lengths.
arXiv Detail & Related papers (2023-01-16T04:11:14Z)
- Neural Belief Propagation Decoding of Quantum LDPC Codes Using Overcomplete Check Matrices [60.02503434201552]
We propose to decode QLDPC codes based on a check matrix with redundant rows, generated from linear combinations of the rows in the original check matrix.
This approach yields a significant improvement in decoding performance with the additional advantage of very low decoding latency.
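The construction described above can be sketched in a few lines: appending GF(2) linear combinations of existing check-matrix rows yields redundant checks without changing the code, since every new row lies in the row space of the original matrix. This is a generic illustration of that idea, not the paper's implementation; the function name and the toy matrix are hypothetical.

```python
import numpy as np

def overcomplete_check_matrix(H, num_extra, rng):
    """Append rows formed as GF(2) sums of random row pairs of H."""
    extra = []
    m = H.shape[0]
    for _ in range(num_extra):
        i, j = rng.choice(m, size=2, replace=False)
        extra.append((H[i] + H[j]) % 2)       # linear combination over GF(2)
    return np.vstack([H] + extra)

rng = np.random.default_rng(1)
H = np.array([[1, 1, 0, 1, 0],                # toy parity-check matrix
              [0, 1, 1, 0, 1],
              [1, 0, 1, 1, 1]])
H_oc = overcomplete_check_matrix(H, 2, rng)
```

Because the appended rows are redundant, any codeword that satisfies H also satisfies H_oc; the extra rows only change the factor graph seen by the belief propagation decoder.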
arXiv Detail & Related papers (2022-12-20T13:41:27Z)
- An Accelerated Doubly Stochastic Gradient Method with Faster Explicit Model Identification [97.28167655721766]
We propose a novel doubly accelerated gradient descent (ADSGD) method for sparsity regularized loss minimization problems.
We first prove that ADSGD can achieve a linear convergence rate and lower overall computational complexity.
arXiv Detail & Related papers (2022-08-11T22:27:22Z)
- Boost decoding performance of finite geometry LDPC codes with deep learning tactics [3.1519370595822274]
We seek a low-complexity and high-performance decoder for a class of finite geometry LDPC codes.
We elaborate on how to generate high-quality training data effectively.
arXiv Detail & Related papers (2022-05-01T14:41:16Z)
- Accelerated Message Passing for Entropy-Regularized MAP Inference [89.15658822319928]
Maximum a posteriori (MAP) inference in discrete-valued random fields is a fundamental problem in machine learning.
Due to the difficulty of this problem, linear programming (LP) relaxations are commonly used to derive specialized message passing algorithms.
We present randomized methods for accelerating these algorithms by leveraging techniques that underlie classical accelerated gradient methods.
arXiv Detail & Related papers (2020-07-01T18:43:32Z)
- A PDD Decoder for Binary Linear Codes With Neural Check Polytope Projection [43.97522161614078]
We propose a PDD algorithm to address the fundamental polytope based maximum likelihood (ML) decoding problem.
We also propose to integrate machine learning techniques into the most time-consuming part of the PDD decoding algorithm.
We present a specially designed neural check polytope projection (NCPP) algorithm to decrease the decoding latency.
arXiv Detail & Related papers (2020-06-11T07:57:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.