The cool and the cruel: separating hard parts of LWE secrets
- URL: http://arxiv.org/abs/2403.10328v2
- Date: Thu, 10 Oct 2024 14:57:29 GMT
- Title: The cool and the cruel: separating hard parts of LWE secrets
- Authors: Niklas Nolte, Mohamed Malhou, Emily Wenger, Samuel Stevens, Cathy Li, François Charton, Kristin Lauter
- Abstract summary: Known attacks on sparse binary LWE secrets include the sparse dual attack and the hybrid sparse dual meet-in-the-middle attack.
In this paper, we provide a new statistical attack with low memory requirements.
- Score: 11.000531626756853
- License:
- Abstract: Sparse binary LWE secrets are under consideration for standardization for Homomorphic Encryption and its applications to private computation. Known attacks on sparse binary LWE secrets include the sparse dual attack and the hybrid sparse dual meet-in-the-middle attack, which requires significant memory. In this paper, we provide a new statistical attack with low memory requirements. The attack relies on some initial lattice reduction. The key observation is that, after lattice reduction is applied to the rows of a q-ary-like embedded random matrix $\mathbf{A}$, the entries with high variance are concentrated in the early columns of the extracted matrix. This allows us to separate out the "hard part" of the LWE secret. We can first solve the sub-problem of finding the "cruel" bits of the secret in the early columns, and then find the remaining "cool" bits in linear time. We use statistical techniques to distinguish distributions to identify both the cruel and the cool bits of the secret. We provide concrete attack timings for recovering secrets in dimensions $n=256$, $512$, and $768$. For the lattice reduction stage, we leverage recent improvements in lattice reduction (e.g., flatter) applied in parallel. We also apply our new attack in the RLWE setting for $2$-power cyclotomic rings, showing that these RLWE instances are much more vulnerable to this attack than LWE.
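To make the cruel/cool separation concrete, here is a minimal numpy sketch of the idea under some assumptions: it takes already-reduced samples (RA, Rb) as input, uses a toy variance threshold (`ratio`) to split columns, and brute-forces the cruel bits only for toy sizes. The function names and threshold are illustrative, not the paper's implementation.

```python
import numpy as np
from itertools import product

def center_mod(x, q):
    """Map values mod q into the centered range roughly [-q/2, q/2)."""
    return ((x + q // 2) % q) - q // 2

def split_cruel_cool(RA, q, ratio=0.5):
    """Split columns of the reduced matrix RA into high-variance ('cruel') and
    low-variance ('cool') indices. The threshold `ratio` (relative to the std
    of a uniform mod-q variable, q/sqrt(12)) is a toy choice."""
    stds = center_mod(RA, q).std(axis=0)
    uniform_std = q / np.sqrt(12.0)
    cruel = np.where(stds > ratio * uniform_std)[0]
    cool = np.where(stds <= ratio * uniform_std)[0]
    return cruel, cool

def brute_force_cruel(RA, Rb, cruel, q):
    """Try all binary assignments of the cruel bits. For the correct guess the
    residuals Rb - RA[:, cruel] @ s concentrate near 0 mod q, because the
    cool-column contribution and the error remain small after reduction."""
    best_guess, best_spread = None, np.inf
    A_cruel = RA[:, cruel]
    for bits in product([0, 1], repeat=len(cruel)):
        s = np.array(bits)
        spread = np.abs(center_mod(Rb - A_cruel @ s, q)).mean()
        if spread < best_spread:
            best_guess, best_spread = s, spread
    return best_guess
```

Once a cruel-bit guess is fixed, each cool bit can be tested independently with the same kind of distinguisher, which is what makes the second stage linear in the number of remaining bits.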
Related papers
- Benchmarking Attacks on Learning with Errors [9.031051362571436]
Lattice cryptography schemes based on the learning with errors (LWE) hardness assumption have been standardized by NIST for use as post-quantum cryptosystems.
We provide the first benchmarks for LWE secret recovery on standardized parameters, for small and low-weight (sparse) secrets.
We extend the SALSA and Cool & Cruel attacks in significant ways, and implement and scale up MitM attacks for the first time.
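For readers unfamiliar with the setting these benchmarks target, the sketch below generates LWE samples with a sparse binary secret; the concrete parameter values and the rounded-Gaussian error are illustrative stand-ins, not the benchmark's actual settings.

```python
import numpy as np

def gen_sparse_lwe(n=256, m=512, q=3329, h=10, sigma=3.0, seed=0):
    """Generate m LWE samples (A, b = A @ s + e mod q) with a binary secret s
    of Hamming weight h. Rounding a continuous Gaussian is used here as a
    simple stand-in for a proper discrete Gaussian error sampler."""
    rng = np.random.default_rng(seed)
    A = rng.integers(0, q, size=(m, n))
    s = np.zeros(n, dtype=int)
    s[rng.choice(n, size=h, replace=False)] = 1          # sparse binary secret
    e = np.rint(rng.normal(0.0, sigma, size=m)).astype(int)
    b = (A @ s + e) % q
    return A, b, s
```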
arXiv Detail & Related papers (2024-08-01T19:21:20Z) - Salsa Fresca: Angular Embeddings and Pre-Training for ML Attacks on Learning With Errors [10.800552110718714]
Learning with Errors (LWE) is a hard math problem underlying post-quantum cryptography systems for key exchange and digital signatures.
Prior work proposed new machine learning (ML)-based attacks on LWE problems with small, sparse secrets, but these attacks require millions of LWE samples to train on and take days to recover secrets.
We propose three key methods -- better preprocessing, angular embeddings and model pre-training -- to improve these attacks.
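The "angular embeddings" idea can be illustrated with a short sketch: values mod q are placed on the unit circle so that residues near 0 and near q-1 end up close together. This is a generic circular encoding written for illustration; the paper's exact embedding and model inputs may differ.

```python
import numpy as np

def angular_embed(a, q):
    """Map integers mod q to points on the unit circle, so that wrap-around
    neighbours (e.g. q-1 and 0) receive nearby embeddings."""
    theta = 2.0 * np.pi * (np.asarray(a) % q) / q
    return np.stack([np.cos(theta), np.sin(theta)], axis=-1)
```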
arXiv Detail & Related papers (2024-02-02T00:48:27Z) - SmoothLLM: Defending Large Language Models Against Jailbreaking Attacks [99.23352758320945]
We propose SmoothLLM, the first algorithm designed to mitigate jailbreaking attacks on large language models (LLMs).
Based on our finding that adversarially-generated prompts are brittle to character-level changes, our defense first randomly perturbs multiple copies of a given input prompt, and then aggregates the corresponding predictions to detect adversarial inputs.
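As a rough illustration of the perturb-and-aggregate idea (not the paper's code), the sketch below perturbs several copies of a prompt at the character level and majority-votes over a jailbreak judge; `generate` and `is_jailbroken` are assumed stand-in callables for the target model and the response checker.

```python
import random
import string

def smoothllm_style_flag(prompt, generate, is_jailbroken,
                         n_copies=8, swap_frac=0.1, seed=0):
    """Randomly swap a fraction of characters in n_copies of the prompt,
    run each copy through the model, and flag the prompt as adversarial
    if a majority of the resulting responses are judged jailbroken."""
    rng = random.Random(seed)
    votes = 0
    for _ in range(n_copies):
        chars = list(prompt)
        k = max(1, int(swap_frac * len(chars)))
        for i in rng.sample(range(len(chars)), k):
            chars[i] = rng.choice(string.printable)
        if is_jailbroken(generate("".join(chars))):
            votes += 1
    return votes > n_copies // 2
```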
arXiv Detail & Related papers (2023-10-05T17:01:53Z) - Communication-Constrained Bandits under Additive Gaussian Noise [111.06688156723018]
We study a distributed multi-armed bandit where a client supplies the learner with communication-constrained feedback.
We propose a multi-phase bandit algorithm, $\mathtt{UE}\text{-UCB++}$, which matches the corresponding lower bound up to a minor additive factor.
arXiv Detail & Related papers (2023-04-25T09:31:20Z) - SALSA PICANTE: a machine learning attack on LWE with binary secrets [8.219373043653507]
We present PICANTE, an enhanced machine learning attack on LWE with sparse binary secrets.
PICANTE recovers secrets in much larger dimensions (up to $n=350$) and with larger Hamming weights.
While PICANTE does not threaten NIST's proposed LWE standards, it demonstrates significant improvement over SALSA.
arXiv Detail & Related papers (2023-03-07T19:01:01Z) - Contextual Bandits with Packing and Covering Constraints: A Modular Lagrangian Approach via Regression [65.8785736964253]
We consider contextual bandits with linear constraints (CBwLC), a variant of contextual bandits in which the algorithm consumes multiple resources subject to linear constraints on total consumption.
This problem generalizes contextual bandits with knapsacks (CBwK), allowing for packing and covering constraints, as well as positive and negative resource consumption.
We provide the first algorithm for CBwLC (or CBwK) that is based on regression oracles. The algorithm is simple, computationally efficient, and statistically optimal under mild assumptions.
arXiv Detail & Related papers (2022-11-14T16:08:44Z) - Hindering Adversarial Attacks with Implicit Neural Representations [25.422201099331637]
The Lossy Implicit Network Activation Coding (LINAC) defence successfully hinders several common adversarial attacks.
We devise a Parametric Bypass Approximation (PBA) attack strategy for key-based defences, which successfully invalidates an existing method in this category.
arXiv Detail & Related papers (2022-10-22T13:10:24Z) - PDPGD: Primal-Dual Proximal Gradient Descent Adversarial Attack [92.94132883915876]
State-of-the-art deep neural networks are sensitive to small input perturbations.
Many defence methods have been proposed that attempt to improve robustness to adversarial noise.
However, evaluating adversarial robustness has proven to be extremely challenging.
arXiv Detail & Related papers (2021-06-03T01:45:48Z) - Limitations on Uncloneable Encryption and Simultaneous One-Way-to-Hiding [17.660958043781154]
We study uncloneable quantum encryption schemes for classical messages.
We focus on the information-theoretic setting and give several limitations on the structure and security of these schemes.
arXiv Detail & Related papers (2021-03-26T15:12:10Z) - GreedyFool: Distortion-Aware Sparse Adversarial Attack [138.55076781355206]
Modern deep neural networks (DNNs) are vulnerable to adversarial samples.
Sparse adversarial samples can fool the target model by only perturbing a few pixels.
We propose a novel two-stage, distortion-aware greedy method dubbed "GreedyFool".
arXiv Detail & Related papers (2020-10-26T17:59:07Z) - SMYRF: Efficient Attention using Asymmetric Clustering [103.47647577048782]
We propose a novel type of balanced clustering algorithm to approximate attention.
SMYRF can be used as a drop-in replacement for dense attention layers without any retraining.
We show that SMYRF can be used interchangeably with dense attention before and after training.
arXiv Detail & Related papers (2020-10-11T18:49:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.