Gröbner Basis Cryptanalysis of Ciminion and Hydra
- URL: http://arxiv.org/abs/2405.05040v1
- Date: Wed, 8 May 2024 13:14:04 GMT
- Title: Gröbner Basis Cryptanalysis of Ciminion and Hydra
- Authors: Matthias Johann Steiner
- Abstract summary: Quadratic system solving attacks pose a serious threat to symmetric key primitives like Ciminion and Hydra.
For Ciminion we construct a degree reverse lexicographic (DRL) Gröbner basis for the iterated model via affine transformations.
For Hydra we provide a computer-aided proof in SageMath that a quadratic DRL Gröbner basis is already contained within the iterated system for the Hydra heads.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Ciminion and Hydra are two recently introduced symmetric key Pseudo-Random Functions for Multi-Party Computation applications. For efficiency both primitives utilize quadratic permutations at round level. Therefore, polynomial system solving-based attacks pose a serious threat to these primitives. For Ciminion we construct a quadratic degree reverse lexicographic (DRL) Gr\"obner basis for the iterated polynomial model via affine transformations. For Hydra we provide a computer-aided proof in SageMath that a quadratic DRL Gr\"obner basis is already contained within the iterated polynomial system for the Hydra heads after affine transformations and a linear change of coordinates. Our Ciminion DRL Gr\"obner basis simplifies cryptanalysis, since one does not need to impose genericity assumptions, like being regular or semi-regular, anymore to derive complexity estimates on key recovery attacks. In the Hydra proposal it was claimed that $r_\mathcal{H} = 31$ rounds for the heads are sufficient to achieve $128$ bits of security against Gr\"obner basis attacks for key recovery. However, for $r_\mathcal{H} = 31$ standard term order conversion to a lexicographic (LEX) Gr\"obner basis for our Hydra DRL Gr\"obner basis requires just $126$ bits. Moreover, via the Eigenvalue Method up to $r_\mathcal{H} = 33$ rounds can be attacked below $128$ bits.
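The workflow the abstract describes, computing a DRL (grevlex) Gröbner basis and then converting to a LEX basis for key recovery, can be sketched on a toy system. This is a minimal illustration using sympy on a made-up quadratic system, not the actual Ciminion/Hydra round polynomials; the paper's own computations use SageMath on the real iterated models.

```python
# A minimal sketch of the DRL -> LEX Groebner basis workflow, on a toy
# quadratic system -- NOT the actual Ciminion/Hydra round polynomials.
from sympy import symbols, groebner

x, y, z = symbols('x y z')

# Toy quadratic polynomial system standing in for an iterated model
F = [x**2 + y - z, x*y - 1, y**2 - z]

# Groebner basis in the degree reverse lexicographic (DRL/grevlex) order
G_drl = groebner(F, x, y, z, order='grevlex')

# Basis in lexicographic (LEX) order; for key recovery one typically
# converts DRL -> LEX (e.g. via FGLM) to obtain a univariate polynomial
# in the last variable that can then be solved
G_lex = groebner(F, x, y, z, order='lex')

print(G_drl)
print(G_lex)
```

For a zero-dimensional system like this one, the LEX basis contains a univariate polynomial in the last variable; the cost of the DRL-to-LEX conversion step is exactly what the paper's $126$-bit estimate for $r_\mathcal{H} = 31$ refers to.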
Related papers
- Learning and Computation of $Φ$-Equilibria at the Frontier of Tractability [85.07238533644636]
$\Phi$-equilibria form a powerful and flexible framework at the heart of online learning and game theory.
We show that an efficient online algorithm incurs average $\Phi$-regret at most $\epsilon$ using $\mathrm{poly}(d, k)/\epsilon^2$ rounds.
We also show nearly matching lower bounds in the online setting, thereby obtaining for the first time a family of deviations that captures the learnability of $\Phi$-regret.
arXiv Detail & Related papers (2025-02-25T19:08:26Z) - Post-quantum encryption algorithms of high-degree 3-variable polynomial congruences: BS cryptosystems and BS key generation [0.0]
We will construct post-quantum encryption algorithms based on the three-variable Beal-Schur congruence.
We will apply this result to generate simple and secure post-quantum encryption algorithms.
arXiv Detail & Related papers (2024-08-14T14:19:46Z) - Fast computation of 2-isogenies in dimension 4 and cryptographic applications [0.0]
We present algorithms to compute chains of $2$-isogenies between abelian varieties of dimension $g \geq 1$ with theta-coordinates of level $n=2$.
We are able to run a complete key recovery attack on SIDH when the endomorphism ring of the starting curve is unknown within a few seconds on a laptop.
arXiv Detail & Related papers (2024-07-22T09:19:20Z) - A new multivariate primitive from CCZ equivalence [1.8024397171920885]
Two secret invertible affine maps $\mathcal{S}, \mathcal{T}$ are applied to a set of polynomials $\mathcal{F}$.
The resulting map $\mathcal{G}$ and $\mathcal{F}$ are then said to be affine equivalent.
arXiv Detail & Related papers (2024-05-31T16:15:02Z) - Lossy Cryptography from Code-Based Assumptions [7.880958076366762]
A proliferation of advanced cryptographic primitives implies hard problems in the complexity class $SZK$.
This poses a barrier for building advanced primitives from code-based assumptions.
We propose a new code-based assumption, Dense-Sparse, that falls in the complexity class $BPP^{SZK}$ and is conjectured to be secure against subexponential time adversaries.
arXiv Detail & Related papers (2024-02-06T02:17:08Z) - Learning Hierarchical Polynomials with Three-Layer Neural Networks [56.71223169861528]
We study the problem of learning hierarchical functions over the standard Gaussian distribution with three-layer neural networks.
For a large subclass of degree $k$ polynomials $p$, a three-layer neural network trained via layer-wise gradient descent on the square loss learns the target $h$ up to vanishing test error.
This work demonstrates the ability of three-layer neural networks to learn complex features and as a result, learn a broad class of hierarchical functions.
arXiv Detail & Related papers (2023-11-23T02:19:32Z) - Learning to Compute Gröbner Bases [3.8214695776749013]
This paper is the first to address the learning of Gröbner bases with Transformers.
The training requires many pairs of a system and the associated Gr"obner basis.
We propose a hybrid input to handle tokens with continuity bias and avoid the growth of the vocabulary set.
arXiv Detail & Related papers (2023-11-21T11:54:21Z) - Horizon-free Reinforcement Learning in Adversarial Linear Mixture MDPs [72.40181882916089]
We show that our algorithm achieves an $\tilde{O}\big((d+\log(|\mathcal{S}|^2 |\mathcal{A}|))\sqrt{K}\big)$ regret with full-information feedback, where $d$ is the dimension of a known feature mapping linearly parametrizing the unknown transition kernel of the MDP, $K$ is the number of episodes, and $|\mathcal{S}|$ and $|\mathcal{A}|$ are the cardinalities of the state and action spaces.
arXiv Detail & Related papers (2023-05-15T05:37:32Z) - Revisiting Weighted Strategy for Non-stationary Parametric Bandits [82.1942459195896]
This paper revisits the weighted strategy for non-stationary parametric bandits.
We propose a refined analysis framework, which produces a simpler weight-based algorithm.
Our new framework can be used to improve regret bounds of other parametric bandits.
arXiv Detail & Related papers (2023-03-05T15:11:14Z) - Average-Case Complexity of Tensor Decomposition for Low-Degree Polynomials [93.59919600451487]
"Statistical-computational gaps" occur in many statistical inference tasks.
We consider a model for random order-3 decomposition where one component is slightly larger in norm than the rest.
We show that the largest component can be accurately estimated from the tensor entries when $r \ll n^{3/2}$, but not when $r \gg n^{3/2}$.
arXiv Detail & Related papers (2022-11-10T00:40:37Z) - Near-optimal fitting of ellipsoids to random points [68.12685213894112]
A basic problem of fitting an ellipsoid to random points has connections to low-rank matrix decompositions, independent component analysis, and principal component analysis.
We resolve this conjecture up to logarithmic factors by constructing a fitting ellipsoid for some $n = \Omega(d^2/\mathrm{polylog}(d))$.
Our proof demonstrates feasibility of the least squares construction of Saunderson et al. using a convenient decomposition of a certain non-standard random matrix.
arXiv Detail & Related papers (2022-08-19T18:00:34Z) - Computationally Efficient Horizon-Free Reinforcement Learning for Linear Mixture MDPs [111.75736569611159]
We propose the first computationally efficient horizon-free algorithm for linear mixture MDPs.
Our algorithm adopts a weighted least squares estimator for the unknown transition dynamics.
This also improves upon the best-known algorithms in this setting when the $\sigma_k^2$'s are known.
arXiv Detail & Related papers (2022-05-23T17:59:18Z) - Contextual Recommendations and Low-Regret Cutting-Plane Algorithms [49.91214213074933]
We consider the following variant of contextual linear bandits motivated by routing applications in navigational engines and recommendation systems.
We design novel cutting-plane algorithms with low "regret" -- the total distance between the true point $w*$ and the hyperplanes the separation oracle returns.
arXiv Detail & Related papers (2021-06-09T05:39:05Z) - Variance-Aware Confidence Set: Variance-Dependent Bound for Linear Bandits and Horizon-Free Bound for Linear Mixture MDP [76.94328400919836]
We show how to construct variance-aware confidence sets for linear bandits and linear mixture Markov Decision Processes (MDPs).
For linear bandits, we obtain an $\widetilde{O}(\mathrm{poly}(d)\sqrt{1 + \sum_{i=1}^K \sigma_i^2})$ regret bound, where $d$ is the feature dimension.
For linear mixture MDPs, we obtain an $\widetilde{O}(\mathrm{poly}(d)\sqrt{K})$ regret bound.
arXiv Detail & Related papers (2021-01-29T18:57:52Z) - Learning Polynomials of Few Relevant Dimensions [12.122488725065388]
Polynomial regression is a basic primitive in learning and statistics.
We give an algorithm that learns the polynomial within accuracy $\epsilon$ with sample complexity roughly $N = O_{r,d}(n \log^2(1/\epsilon) (\log n)^d)$ and running time $O_{r,d}(N n^2)$.
arXiv Detail & Related papers (2020-04-28T18:00:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.