The syzygy distinguisher
- URL: http://arxiv.org/abs/2407.15740v4
- Date: Wed, 31 Jul 2024 11:18:35 GMT
- Title: The syzygy distinguisher
- Authors: Hugues Randriambololona
- Abstract summary: We present a new distinguisher for alternant and hence Goppa codes, whose complexity is subexponential in the error-correcting capability.
In particular, it applies to the codes used in the Classic McEliece candidate for postquantum cryptography standardization.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a new distinguisher for alternant and Goppa codes, whose complexity is subexponential in the error-correcting capability, hence better than that of generic decoding algorithms. Moreover it does not suffer from the strong regime limitations of the previous distinguishers or structure recovery algorithms: in particular, it applies to the codes used in the Classic McEliece candidate for postquantum cryptography standardization. The invariants that allow us to distinguish are graded Betti numbers of the homogeneous coordinate ring of a shortening of the dual code. This is the first time since its introduction in 1978 that an analysis of the McEliece cryptosystem breaks the exponential barrier.
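The paper's invariants (graded Betti numbers of the homogeneous coordinate ring of a shortening of the dual code) require computer-algebra machinery such as minimal free resolutions and are not reproduced here. As a loose, hedged illustration of the general idea of an algebraic invariant that separates structured codes from random ones, the sketch below computes the far simpler dimension of the Schur (componentwise-product) square of a binary code, the invariant behind earlier, more limited distinguishers; the parameters and the random-code baseline are illustrative assumptions, not the paper's construction.

```python
# Illustrative sketch only: NOT the paper's Betti-number invariant, but the
# simpler "square code" dimension used by earlier distinguishers. For a random
# [n, k] code the square typically has dimension min(n, k(k+1)/2); structured
# codes can fall measurably short of this generic value.
import numpy as np

def gf2_rank(M):
    """Rank of a binary matrix over GF(2), by Gaussian elimination."""
    M = M.copy() % 2
    rank, (rows, cols) = 0, M.shape
    for c in range(cols):
        pivot = next((r for r in range(rank, rows) if M[r, c]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]
        for r in range(rows):
            if r != rank and M[r, c]:
                M[r] ^= M[rank]
        rank += 1
    return rank

def square_code_dim(G):
    """Dimension of the Schur (componentwise-product) square of the code spanned by G."""
    k = len(G)
    prods = [G[i] * G[j] for i in range(k) for j in range(i, k)]
    return gf2_rank(np.array(prods))

rng = np.random.default_rng(0)
k, n = 10, 60
G_rand = rng.integers(0, 2, size=(k, n))          # random code, illustrative sizes
print("square dim:", square_code_dim(G_rand), "generic bound:", min(n, k * (k + 1) // 2))
```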
Related papers
- Minimum Description Length and Generalization Guarantees for
Representation Learning [16.2444595840653]
This paper presents a framework that allows us to derive upper bounds on the generalization error of a representation learning algorithm.
Rather than the mutual information between the encoder's input and the representation, our new bounds involve the "multi-letter" relative entropy.
To the best of the authors' knowledge, the established generalization bounds are the first of their kind for Information Bottleneck (IB) type encoders and representation learning.
arXiv Detail & Related papers (2024-02-05T18:12:28Z) - Bit-flipping Decoder Failure Rate Estimation for (v,w)-regular Codes [84.0257274213152]
We propose a new technique to provide accurate estimates of the decoding failure rate (DFR) of a two-iteration (parallel) bit-flipping decoder.
We validate our results by comparing the modeled and simulated weight of the syndrome, the distribution of incorrectly guessed error bits at the end of the first iteration, and the two-iteration DFR.
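As a hedged sketch of how such failure rates are usually estimated empirically, the snippet below runs a two-iteration parallel bit-flipping decoder over a random sparse parity-check matrix and counts failures by Monte Carlo; the block length, column weight, error weight, and flipping threshold are illustrative assumptions and do not reproduce the (v,w)-regular setting or the analytical model of the paper.

```python
# Monte Carlo DFR estimate for a toy two-iteration parallel bit-flipping decoder.
# All parameters are illustrative, not the paper's.
import numpy as np

rng = np.random.default_rng(1)
n, r, col_weight, err_weight, threshold, trials = 200, 100, 5, 6, 3, 500

# Random sparse parity-check matrix with fixed column weight (rows are not regular).
H = np.zeros((r, n), dtype=int)
for j in range(n):
    H[rng.choice(r, size=col_weight, replace=False), j] = 1

def bit_flip_decode(H, syndrome, iterations=2):
    """Each iteration flips, in parallel, every bit touching >= `threshold` unsatisfied checks."""
    guess = np.zeros(H.shape[1], dtype=int)
    for _ in range(iterations):
        residual = (syndrome + H @ guess) % 2    # checks still unsatisfied by the current guess
        counts = H.T @ residual                  # unsatisfied-check count per bit
        guess = (guess + (counts >= threshold)) % 2
    return guess

failures = 0
for _ in range(trials):
    e = np.zeros(n, dtype=int)
    e[rng.choice(n, size=err_weight, replace=False)] = 1
    if not np.array_equal(bit_flip_decode(H, (H @ e) % 2), e):
        failures += 1
print(f"estimated DFR over {trials} trials: {failures / trials:.3f}")
```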
arXiv Detail & Related papers (2024-01-30T11:40:24Z) - Fault-Tolerant Computing with Single Qudit Encoding [49.89725935672549]
We discuss stabilizer quantum-error correction codes implemented in a single multi-level qudit.
These codes can be customized to the specific physical errors on the qudit, effectively suppressing them.
We demonstrate a fault-tolerant implementation on molecular spin qudits, showcasing nearly exponential error suppression with only linear qudit size growth.
arXiv Detail & Related papers (2023-07-20T10:51:23Z) - Machine Learning-Aided Efficient Decoding of Reed-Muller Subcodes [59.55193427277134]
Reed-Muller (RM) codes achieve the capacity of general binary-input memoryless symmetric channels.
However, RM codes only admit a limited set of rates.
Efficient decoders are available for RM codes at finite lengths.
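For context, the sketch below builds the generator matrix of RM(r, m), whose rows are the evaluations of all monomials of degree at most r in m Boolean variables over the 2^m points of F_2^m; it shows only the plain code construction, not the subcodes or the machine-learning-aided decoders studied in the paper.

```python
# Minimal RM(r, m) generator-matrix construction (illustrative only).
from itertools import combinations, product
import numpy as np

def reed_muller_generator(r, m):
    """Rows = evaluations of all degree-<=r monomials at every point of F_2^m."""
    points = np.array(list(product([0, 1], repeat=m)))         # all 2^m evaluation points
    rows = []
    for deg in range(r + 1):
        for subset in combinations(range(m), deg):
            rows.append(points[:, list(subset)].prod(axis=1))  # monomial x_{i1}...x_{id}
    return np.array(rows) % 2

G = reed_muller_generator(1, 3)   # RM(1, 3) is the [8, 4, 4] extended Hamming code
print(G.shape)                    # (4, 8): dimension 4, length 8
```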
arXiv Detail & Related papers (2023-01-16T04:11:14Z) - Conservation laws and quantum error correction: towards a generalised
matching decoder [2.1756081703276]
We explore decoding algorithms for the surface code, a prototypical quantum low-density parity-check code.
The decoder works by exploiting underlying structure that arises due to materialised symmetries among surface-code stabilizer elements.
We propose a systematic way of constructing a minimum-weight perfect-matching decoder for codes with certain characteristic properties.
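To make the matching idea concrete, the hedged toy below pairs syndrome defects on a cyclic repetition code by minimum total distance using a perfect matching; it assumes networkx is available and is far simpler than the surface-code decoder construction described in the paper.

```python
# Toy minimum-weight perfect matching of syndrome defects on a ring of bits.
# Illustrative only; not the generalized surface-code decoder of the paper.
import networkx as nx
import numpy as np

rng = np.random.default_rng(3)
n, p = 21, 0.08
error = rng.random(n) < p                                 # i.i.d. bit flips on a length-n ring
# A parity check sits between neighbouring bits and is violated ("defect")
# exactly where a chain of errors ends, so defects always come in pairs.
defects = [i for i in range(n) if error[i] != error[(i + 1) % n]]

# Complete graph on defects, weighted by ring distance (negated, because
# networkx provides a maximum-weight matching routine).
G = nx.Graph()
for a in range(len(defects)):
    for b in range(a + 1, len(defects)):
        d = abs(defects[a] - defects[b])
        G.add_edge(defects[a], defects[b], weight=-min(d, n - d))

pairs = nx.max_weight_matching(G, maxcardinality=True)
print("defects:", defects)
print("matched pairs:", sorted(tuple(sorted(pr)) for pr in pairs))
```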
arXiv Detail & Related papers (2022-07-13T18:00:00Z) - Combining Varied Learners for Binary Classification using Stacked
Generalization [3.1871776847712523]
This paper performs binary classification using Stacked Generalization on high dimensional Polycystic Ovary Syndrome dataset.
Various metrics are reported in the paper, which also points out a subtle transgression found with the Receiver Operating Characteristic curve that was proved to be incorrect.
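As a minimal, hedged sketch of stacked generalization for binary classification, the snippet below uses scikit-learn's StackingClassifier on synthetic data; the base learners, meta-learner, and dataset are illustrative assumptions, not the PCOS data or the exact learner ensemble of the paper.

```python
# Stacked generalization sketch on synthetic data (illustrative choices throughout).
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=40, n_informative=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("svc", SVC(probability=True, random_state=0))],
    final_estimator=LogisticRegression(),   # meta-learner over base predictions
    cv=5,                                    # out-of-fold predictions limit leakage
)
stack.fit(X_tr, y_tr)
print("held-out accuracy:", stack.score(X_te, y_te))
```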
arXiv Detail & Related papers (2022-02-17T21:47:52Z) - Sparse Coding with Multi-Layer Decoders using Variance Regularization [19.8572592390623]
We propose a novel sparse coding protocol which prevents a collapse in the codes without the need to regularize the decoder.
Our method regularizes the codes directly so that each latent code component has variance greater than a fixed threshold.
We show that sparse autoencoders with multi-layer decoders trained using our variance regularization method produce higher quality reconstructions with sparser representations.
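A hedged sketch of the variance-regularization idea follows: components of the latent code whose batch standard deviation falls below a fixed threshold are penalized, so sparsity can be encouraged without the codes collapsing to zero. It is written with PyTorch, and the threshold, weights, and shapes are illustrative assumptions rather than the paper's settings.

```python
# Variance-regularized sparse-coding loss (illustrative sketch, not the paper's exact objective).
import torch

def variance_regularizer(codes, threshold=1.0):
    """Hinge penalty on per-component standard deviation below `threshold`."""
    std = codes.var(dim=0, unbiased=False).sqrt()
    return torch.relu(threshold - std).mean()

def sparse_coding_loss(x, x_hat, codes, l1_weight=0.1, var_weight=1.0):
    recon = ((x - x_hat) ** 2).mean()          # reconstruction error
    sparsity = codes.abs().mean()              # L1 penalty encouraging sparse codes
    return recon + l1_weight * sparsity + var_weight * variance_regularizer(codes)

codes = torch.randn(64, 128)                   # a batch of latent codes (illustrative shapes)
x, x_hat = torch.randn(64, 256), torch.randn(64, 256)
print(sparse_coding_loss(x, x_hat, codes))
```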
arXiv Detail & Related papers (2021-12-16T21:46:23Z) - Trellis Decoding For Qudit Stabilizer Codes And Its Application To Qubit
Topological Codes [3.9962751777898955]
We show that trellis decoders have strong structure, extend the results using classical coding theory as a guide, and demonstrate a canonical form from which the structural properties of the decoding graph may be computed.
The modified decoder works for any stabilizer code $S$ and separates into two parts: a one-time, offline stage that builds a compact, graphical representation of the normalizer of the code, $S^\perp$, and a quick, parallel, online computation using the Viterbi algorithm.
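To illustrate only the quick, online Viterbi half of that split, the hedged sketch below runs standard Viterbi decoding over the trellis of a small classical convolutional code (the octal (7, 5) code); building the compact trellis of a stabilizer code from its normalizer, as the paper describes, is considerably more involved and is not attempted here.

```python
# Viterbi over the trellis of the classical rate-1/2, memory-2 (7, 5) convolutional
# code: an illustration of the dynamic-programming pass only, not the paper's
# stabilizer-code trellis construction.
import itertools

TAPS = [(1, 1, 1), (1, 0, 1)]                   # generator taps, octal 7 and 5

def encode(bits, state=(0, 0)):
    out = []
    for b in bits:
        window = (b,) + state
        out += [sum(w * t for w, t in zip(window, g)) % 2 for g in TAPS]
        state = (b, state[0])
    return out

def viterbi_decode(received):
    states = list(itertools.product((0, 1), repeat=2))
    cost, path = {(0, 0): 0}, {(0, 0): []}      # start in the all-zero state
    for i in range(0, len(received), 2):
        new_cost, new_path = {}, {}
        for s in states:
            if s not in cost:
                continue
            for b in (0, 1):
                window = (b,) + s
                expected = [sum(w * t for w, t in zip(window, g)) % 2 for g in TAPS]
                metric = cost[s] + sum(e != r for e, r in zip(expected, received[i:i + 2]))
                nxt = (b, s[0])
                if metric < new_cost.get(nxt, float("inf")):
                    new_cost[nxt], new_path[nxt] = metric, path[s] + [b]
        cost, path = new_cost, new_path
    return path[min(cost, key=cost.get)]        # survivor of the cheapest final state

msg = [1, 0, 1, 1, 0]
rx = encode(msg)
rx[3] ^= 1                                       # inject a single channel bit flip
print(viterbi_decode(rx) == msg)                 # True: the flip is corrected
```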
arXiv Detail & Related papers (2021-06-15T16:01:42Z) - Hierarchical Poset Decoding for Compositional Generalization in Language [52.13611501363484]
We formalize human language understanding as a structured prediction task where the output is a partially ordered set (poset).
Current encoder-decoder architectures do not take the poset structure of semantics into account properly.
We propose a novel hierarchical poset decoding paradigm for compositional generalization in language.
arXiv Detail & Related papers (2020-10-15T14:34:26Z) - Accelerated Message Passing for Entropy-Regularized MAP Inference [89.15658822319928]
Maximum a posteriori (MAP) inference in discrete-valued random fields is a fundamental problem in machine learning.
Due to the difficulty of this problem, linear programming (LP) relaxations are commonly used to derive specialized message passing algorithms.
We present randomized methods for accelerating these algorithms by leveraging techniques that underlie classical accelerated gradient methods.
arXiv Detail & Related papers (2020-07-01T18:43:32Z) - Pseudo-Convolutional Policy Gradient for Sequence-to-Sequence
Lip-Reading [96.48553941812366]
Lip-reading aims to infer the speech content from the lip movement sequence.
The traditional learning process of seq2seq models suffers from two problems.
We propose a novel pseudo-convolutional policy gradient (PCPG) based method to address these two problems.
arXiv Detail & Related papers (2020-03-09T09:12:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.