NoMod: A Non-modular Attack on Module Learning With Errors
- URL: http://arxiv.org/abs/2510.02162v1
- Date: Thu, 02 Oct 2025 16:12:13 GMT
- Title: NoMod: A Non-modular Attack on Module Learning With Errors
- Authors: Cristian Bassotto, Ermes Franch, Marina Krček, Stjepan Picek
- Abstract summary: Quantum computing threatens classical public-key cryptography. We present NoMod ML-Attack, a hybrid white-box cryptanalytic method. We release our implementation in an anonymous repository.
- Score: 16.228565693406576
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The advent of quantum computing threatens classical public-key cryptography, motivating NIST's adoption of post-quantum schemes such as those based on the Module Learning With Errors (Module-LWE) problem. We present NoMod ML-Attack, a hybrid white-box cryptanalytic method that circumvents the challenge of modeling modular reduction by treating wrap-arounds as statistical corruption and casting secret recovery as robust linear estimation. Our approach combines optimized lattice preprocessing--including reduced-vector saving and algebraic amplification--with robust estimators trained via Tukey's Biweight loss. Experiments show NoMod achieves full recovery of binary secrets for dimension $n = 350$, recovery of sparse binomial secrets for $n = 256$, and successful recovery of sparse secrets in CRYSTALS-Kyber settings with parameters $(n, k) = (128, 3)$ and $(256, 2)$. We release our implementation in an anonymous repository https://anonymous.4open.science/r/NoMod-3BD4.
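The core estimation idea (treating modular wrap-arounds as gross outliers and casting secret recovery as robust linear regression) can be sketched with iteratively reweighted least squares under Tukey's biweight. The dimensions, corruption model, and IRLS recipe below are illustrative assumptions, not the paper's actual pipeline:

```python
import numpy as np

def tukey_weights(r, c=4.685):
    """Tukey biweight weights: redescending, zero beyond the cutoff c."""
    u = r / c
    w = (1 - u**2) ** 2
    w[np.abs(u) >= 1] = 0.0
    return w

def robust_lstsq(A, b, iters=20, c=4.685):
    """IRLS with Tukey's biweight: rows corrupted by wrap-around
    end up down-weighted to (near) zero."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    for _ in range(iters):
        r = b - A @ x
        # Robust residual scale via the median absolute deviation (MAD).
        scale = 1.4826 * np.median(np.abs(r - np.median(r))) + 1e-12
        w = tukey_weights(r / scale, c)
        sw = np.sqrt(w)
        x = np.linalg.lstsq(sw[:, None] * A, sw * b, rcond=None)[0]
    return x

# Toy demo: recover a binary secret from noisy linear equations where a
# minority of rows are "wrapped" by a large modulus-like offset.
rng = np.random.default_rng(0)
n, m, q = 20, 400, 3329
A = rng.integers(-50, 50, size=(m, n)).astype(float)
s = rng.integers(0, 2, size=n).astype(float)
b = A @ s + rng.normal(0, 0.5, size=m)
wrapped = rng.random(m) < 0.1
b[wrapped] -= q  # wrap-around corruption on ~10% of the rows
s_hat = np.round(robust_lstsq(A, b))
print(np.array_equal(s_hat, s))
```

The redescending loss is the key design choice: unlike Huber's loss, Tukey's biweight gives exactly zero influence to residuals past the cutoff, so heavily wrapped samples stop pulling on the fit entirely.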
Related papers
- Evolution Strategies at the Hyperscale [57.75314521465674]
We introduce EGGROLL, an evolution strategies (ES) algorithm designed to scale backprop-free optimization to large population sizes. ES is a family of powerful black-box optimization methods that can handle non-differentiable or noisy objectives. EGGROLL overcomes these bottlenecks by generating random matrices $A \in \mathbb{R}^{m \times r}$, $B \in \mathbb{R}^{n \times r}$ with $r \ll \min(m,n)$ to form a low-rank matrix perturbation $AB^\top$.
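The low-rank perturbation trick can be illustrated with a small NumPy sketch; the shapes and the matrix-vector shortcut are assumptions for illustration, not EGGROLL's actual code:

```python
import numpy as np

# Instead of a dense Gaussian perturbation E in R^{m x n}, sample
# A in R^{m x r} and B in R^{n x r} with r << min(m, n) and use E = A B^T.
rng = np.random.default_rng(0)
m, n, r = 512, 256, 4
A = rng.standard_normal((m, r))
B = rng.standard_normal((n, r))
E = A @ B.T  # rank-r perturbation; only O((m + n) r) numbers to store

# Applying the perturbed weights to an input never materializes E:
W = rng.standard_normal((m, n))
x = rng.standard_normal(n)
y_lowrank = W @ x + A @ (B.T @ x)  # O((m + n) r) extra work per sample
y_dense = (W + E) @ x              # O(m n) work, for comparison
print(np.allclose(y_lowrank, y_dense))  # True
```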
arXiv Detail & Related papers (2025-11-20T18:56:05Z) - INC: An Indirect Neural Corrector for Auto-Regressive Hybrid PDE Solvers [61.84396402100827]
We propose the Indirect Neural Corrector (INC), which integrates learned corrections into the governing equations. INC reduces the error amplification to the order of $t^{-1} + L$, where $t$ is the timestep and $L$ the Lipschitz constant. We test INC in extensive benchmarks covering numerous differentiable solvers, neural backbones, and test cases ranging from a 1D chaotic system to 3D turbulence.
arXiv Detail & Related papers (2025-11-16T20:14:28Z) - MaskPro: Linear-Space Probabilistic Learning for Strict (N:M)-Sparsity on Large Language Models [53.36415620647177]
Semi-structured sparsity offers a promising solution by strategically retaining $N$ elements out of every $M$ weights. Existing (N:M)-compatible approaches typically fall into two categories: rule-based layerwise greedy search, which suffers from considerable errors, and gradient-driven learning, which incurs prohibitive training costs. We propose MaskPro, a novel linear-space probabilistic framework that learns a prior categorical distribution for every $M$ consecutive weights and subsequently leverages this distribution to generate the (N:M)-sparsity via $N$-way sampling.
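The grouped sampling described above can be sketched as follows; `sample_nm_mask` and its magnitude-based logits are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def sample_nm_mask(logits, N, M, rng):
    """For each group of M consecutive weights, sample N positions to keep
    (without replacement) from a categorical distribution over the group."""
    logits = logits.reshape(-1, M)
    mask = np.zeros_like(logits)
    for g in range(logits.shape[0]):
        p = np.exp(logits[g] - logits[g].max())  # stable softmax
        p /= p.sum()
        keep = rng.choice(M, size=N, replace=False, p=p)
        mask[g, keep] = 1.0
    return mask.reshape(-1)

rng = np.random.default_rng(0)
w = rng.standard_normal(16)
logits = np.abs(w)  # e.g. bias the distribution toward large-magnitude weights
mask = sample_nm_mask(logits, N=2, M=4, rng=rng)
sparse_w = w * mask
# Every group of 4 consecutive weights keeps exactly 2 nonzeros (2:4 sparsity).
print(mask.reshape(-1, 4).sum(axis=1))  # [2. 2. 2. 2.]
```

Because the mask is sampled rather than hard-selected, the distribution parameters can be updated from a scalar loss signal without storing dense gradients, which is the linear-space appeal.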
arXiv Detail & Related papers (2025-06-15T15:02:59Z) - ALLMod: Exploring Area-Efficiency of LUT-based Large Number Modular Reduction via Hybrid Workloads [18.634794494170617]
High-bit-width operations are crucial for enhancing security. They are computationally intensive due to the large number of modular operations required. ALLMod is a novel approach that improves the area efficiency of LUT-based large-number modular reduction.
arXiv Detail & Related papers (2025-03-20T07:47:34Z) - Optimized circuits for windowed modular arithmetic with applications to quantum attacks against RSA [45.810803542748495]
Windowed arithmetic is a technique for reducing the cost of quantum circuits with space--time tradeoffs. In this work we introduce four optimizations to windowed modular exponentiation. These lead to a 3% improvement in Toffoli count and Toffoli depth for modular exponentiation circuits relevant to cryptographic applications.
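The classical analogue of windowing, fixed-window ($2^w$-ary) modular exponentiation, shows the precompute-and-lookup structure the quantum circuits exploit via table lookups. This sketch is purely illustrative and says nothing about the quantum constructions themselves:

```python
def windowed_modexp(base, exp, mod, w=4):
    """Fixed-window modular exponentiation: precompute base^0 .. base^(2^w - 1)
    mod `mod`, then consume the exponent w bits at a time, squaring w times
    per window and multiplying in one table entry."""
    table = [1] * (1 << w)
    for i in range(1, 1 << w):
        table[i] = (table[i - 1] * base) % mod
    result = 1
    nbits = max(exp.bit_length(), 1)
    nwin = (nbits + w - 1) // w
    for i in reversed(range(nwin)):  # most significant window first
        for _ in range(w):
            result = (result * result) % mod
        digit = (exp >> (i * w)) & ((1 << w) - 1)
        result = (result * table[digit]) % mod
    return result

print(windowed_modexp(7, 123456789, 1000003) == pow(7, 123456789, 1000003))  # True
```

The tradeoff is visible here too: a larger window $w$ doubles the table size but halves (roughly) the number of multiplications per exponent bit, which is the space--time knob the paper tunes in the quantum setting.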
arXiv Detail & Related papers (2025-02-24T16:59:16Z) - Cloning Games, Black Holes and Cryptography [50.022147589030304]
We introduce a new toolkit for analyzing cloning games. This framework allows us to analyze a new cloning game based on binary phase states. We show that the optimal bound for the binary phase variant offers quantitative insights into information scrambling in idealized models of black holes.
arXiv Detail & Related papers (2024-11-07T14:09:32Z) - Making Hard Problems Easier with Custom Data Distributions and Loss Regularization: A Case Study in Modular Arithmetic [30.93087957720688]
We develop techniques that significantly boost the performance of ML models on modular arithmetic tasks. Our core innovation is the use of custom training data distributions and a carefully designed loss function. Our techniques also help ML models learn other well-studied problems better, including copy, associative recall, and parity.
arXiv Detail & Related papers (2024-10-04T16:19:33Z) - Estimating the Decoding Failure Rate of Binary Regular Codes Using Iterative Decoding [84.0257274213152]
We propose a new technique that provides accurate estimates of the Decoding Failure Rate (DFR) of a two-iteration (parallel) bit-flipping decoder. We validate our results, comparing the modeled and simulated weight of the syndrome, the distribution of incorrectly guessed error bits at the end of the first iteration, and the two-iteration DFR.
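A generic parallel bit-flipping iteration can be sketched as follows; the threshold rule and the toy parity-check matrix are illustrative assumptions, not the decoder analyzed in the paper:

```python
import numpy as np

def bit_flip_iteration(H, syndrome, hard, threshold):
    """One parallel bit-flipping step: flip every bit involved in more than
    `threshold` unsatisfied parity checks, then recompute the syndrome."""
    unsat = H[syndrome == 1]    # parity-check rows currently unsatisfied
    counts = unsat.sum(axis=0)  # per-bit count of unsatisfied checks
    hard = hard ^ (counts > threshold)
    syndrome = (H @ hard) % 2
    return hard, syndrome

# Toy chain code: each check constrains two adjacent bits.
H = np.array([[1, 1, 0, 0, 0],
              [0, 1, 1, 0, 0],
              [0, 0, 1, 1, 0],
              [0, 0, 0, 1, 1]])
hard = np.array([0, 0, 1, 0, 0])  # all-zero codeword with one flipped bit
syndrome = (H @ hard) % 2
hard, syndrome = bit_flip_iteration(H, syndrome, hard, threshold=1)
print(int(syndrome.sum()))  # 0: the single error was corrected
```

Decoding fails when the flip rule corrects some bits while introducing others; estimating how often that happens after exactly two such iterations is what the DFR model above targets.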
arXiv Detail & Related papers (2024-01-30T11:40:24Z) - FABind: Fast and Accurate Protein-Ligand Binding [127.7790493202716]
FABind is an end-to-end model that combines pocket prediction and docking to achieve accurate and fast protein-ligand binding.
Our proposed model demonstrates strong advantages in terms of effectiveness and efficiency compared to existing methods.
arXiv Detail & Related papers (2023-10-10T16:39:47Z) - SALSA PICANTE: a machine learning attack on LWE with binary secrets [8.219373043653507]
We present PICANTE, an enhanced machine learning attack on LWE with sparse binary secrets.
PICANTE recovers secrets in much larger dimensions (up to $n=350$) and with larger Hamming weights.
While PICANTE does not threaten NIST's proposed LWE standards, it demonstrates significant improvement over SALSA.
arXiv Detail & Related papers (2023-03-07T19:01:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.