Injective Rank Metric Trapdoor Functions with Homogeneous Errors
- URL: http://arxiv.org/abs/2310.08962v1
- Date: Fri, 13 Oct 2023 09:12:25 GMT
- Title: Injective Rank Metric Trapdoor Functions with Homogeneous Errors
- Authors: Étienne Burle, Philippe Gaborit, Younes Hatri, Ayoub Otmani
- Abstract summary: In rank-metric cryptography, a vector from a finite dimensional linear space over a finite field is viewed as the linear space spanned by its entries.
We introduce a new construction of injective one-way trapdoor functions.
Our solution departs from the frequent way of building public key primitives from error-correcting codes.
- Score: 2.117778717665161
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In rank-metric cryptography, a vector from a finite dimensional linear space over a finite field is viewed as the linear space spanned by its entries. The rank decoding problem, the analogue of the problem of decoding a random linear code, consists in recovering a basis of a random noise vector that was used to perturb a set of random linear equations sharing a secret solution. Assuming the intractability of this problem, we introduce a new construction of injective one-way trapdoor functions. Our solution departs from the frequent way of building public-key primitives from error-correcting codes, where ad hoc assumptions about a hidden structure are made to establish security. Our method produces a hard-to-distinguish linear code together with low-weight vectors which constitute the secret that helps recover the inputs. The key idea is to focus on trapdoor functions that take sufficiently many input vectors sharing the same support. Applying the error-correcting algorithm designed for Low Rank Parity Check (LRPC) codes then yields an inverting algorithm that recovers the inputs with overwhelming probability.
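The rank-metric viewpoint used in the abstract (a vector over GF(q^m) is identified with the GF(q)-linear space spanned by its entries) and the notion of a homogeneous error can be made concrete with a short sketch. The Python snippet below is not taken from the paper: it only shows, for q = 2, one way to compute the rank weight of a vector and to sample an error whose coordinates all lie in a fixed w-dimensional support; the parameter values and function names are arbitrary choices made for illustration.

```python
# Illustrative sketch only (not code from the paper): elements of GF(2^m) are
# represented by m-bit integers, i.e. by their coordinate vectors over GF(2).
import random

def gf2_rank(rows):
    """Rank over GF(2) of the matrix whose rows are the given m-bit integers."""
    pivots = {}          # highest set bit -> reduced row having that pivot
    rank = 0
    for row in rows:
        cur = row
        while cur:
            hb = cur.bit_length() - 1
            if hb in pivots:
                cur ^= pivots[hb]    # eliminate the leading bit
            else:
                pivots[hb] = cur
                rank += 1
                break
    return rank

def rank_weight(vec):
    """Rank weight of a vector over GF(2^m): dimension of the GF(2)-span of its entries."""
    return gf2_rank(vec)

def sample_support(m, w, rng=random):
    """Pick w GF(2)-linearly independent elements of GF(2^m); they span the error support."""
    basis = []
    while len(basis) < w:
        cand = rng.randrange(1, 1 << m)
        if gf2_rank(basis + [cand]) == len(basis) + 1:
            basis.append(cand)
    return basis

def sample_homogeneous_error(support, n, rng=random):
    """A 'homogeneous' error: every coordinate is a random GF(2)-combination of the
    support basis, so all coordinates lie in the same subspace and the vector's
    rank weight is at most len(support)."""
    err = []
    for _ in range(n):
        e = 0
        for b in support:
            if rng.getrandbits(1):
                e ^= b
        err.append(e)
    return err

# Illustrative parameters (placeholders, not the paper's proposed sizes)
m, n, w = 37, 41, 4
support = sample_support(m, w)
e = sample_homogeneous_error(support, n)
print(rank_weight(e))   # at most w, and typically exactly w when n is much larger than w
```

As the abstract explains, the inverting algorithm operates on sufficiently many inputs sharing the same support and recovers them by running the decoder designed for LRPC codes.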
Related papers
- Limitations of the decoding-to-LPN reduction via code smoothing [59.90381090395222]
The Learning Parity with Noise (LPN) problem underlies several classic cryptographic primitives.
This paper attempts to find a reduction to LPN from the decoding problem of linear codes, for which several hardness results exist.
We characterize the efficiency of the reduction in terms of the parameters of the decoding and LPN problems.
arXiv Detail & Related papers (2024-08-07T12:54:43Z) - Linear Codes for Hyperdimensional Computing [9.7902367664742]
We show that random linear codes offer a rich subcode structure that can be used to form key-value stores.
We show that under the framework we develop, random linear codes admit simple recovery algorithms to factor (either bundled or bound) compositional representations.
arXiv Detail & Related papers (2024-03-05T19:18:44Z) - Reduction from sparse LPN to LPN, Dual Attack 3.0 [1.9106435311144374]
A new algorithm called RLPN-decoding, which relies on a completely different approach, was recently introduced.
It significantly outperforms ISD decoders for code rates smaller than 0.42.
This algorithm can be viewed as the code-based cryptography cousin of recent dual attacks in lattice-based cryptography.
arXiv Detail & Related papers (2023-12-01T17:35:29Z) - An Upper-Bound on the Decoding Failure Probability of the LRPC Decoder [0.0]
Low Rank Parity Check (LRPC) codes form a class of rank-metric error-correcting codes that was purposely introduced to design public-key encryption schemes.
An LRPC code is defined from a parity check matrix whose entries belong to a relatively low-dimensional vector subspace of a large finite field (a small illustrative sketch of this structure appears after this list).
arXiv Detail & Related papers (2023-09-25T10:47:04Z) - Iterative Sketching for Secure Coded Regression [66.53950020718021]
We propose methods for speeding up distributed linear regression.
Specifically, we randomly rotate the basis of the system of equations and then subsample blocks, to simultaneously secure the information and reduce the dimension of the regression problem.
arXiv Detail & Related papers (2023-08-08T11:10:42Z) - Quantum Worst-Case to Average-Case Reductions for All Linear Problems [66.65497337069792]
We study the problem of designing worst-case to average-case reductions for quantum algorithms.
We provide an explicit and efficient transformation of quantum algorithms that are only correct on a small fraction of their inputs into ones that are correct on all inputs.
arXiv Detail & Related papers (2022-12-06T22:01:49Z) - Finding the disjointness of stabilizer codes is NP-complete [77.34726150561087]
We show that the problem of calculating the $c$-disjointness, or even approximating it to within a constant multiplicative factor, is NP-complete.
We provide bounds on the disjointness for various code families, including the CSS codes, $d$ codes and hypergraph codes.
Our results indicate that finding fault-tolerant logical gates for generic quantum error-correcting codes is a computationally challenging task.
arXiv Detail & Related papers (2021-08-10T15:00:20Z) - Hybrid Trilinear and Bilinear Programming for Aligning Partially Overlapping Point Sets [85.71360365315128]
In many applications, we need algorithms which can align partially overlapping point sets and are invariant to the corresponding transformations, which is achieved here by minimizing the objective of the RPM algorithm.
We first show that this objective is a cubic polynomial. We then utilize the convex envelopes of trilinear and bilinear monomials to derive its lower bound.
We next develop a branch-and-bound (BnB) algorithm which only branches over the transformation variables and runs efficiently.
arXiv Detail & Related papers (2021-01-19T04:24:23Z) - Learning algorithms from circuit lower bounds [0.0]
We revisit known constructions of efficient learning algorithms from various notions of constructive circuit lower bounds.
We prove that if it is possible to find efficiently, in a particular interactive way, errors of many p-size circuits attempting to solve hard problems, then p-size circuits can be PAC learned over the uniform distribution.
arXiv Detail & Related papers (2020-12-28T04:47:36Z) - Accelerated Message Passing for Entropy-Regularized MAP Inference [89.15658822319928]
Maximum a posteriori (MAP) inference in discrete-valued random fields is a fundamental problem in machine learning.
Due to the difficulty of this problem, linear programming (LP) relaxations are commonly used to derive specialized message passing algorithms.
We present randomized methods for accelerating these algorithms by leveraging techniques that underlie classical accelerated gradient methods.
arXiv Detail & Related papers (2020-07-01T18:43:32Z) - Encoder blind combinatorial compressed sensing [5.177947445379688]
We consider the problem of designing a decoder to recover a set of sparse codes from their linear measurements alone.
The contribution of this paper is a computationally efficient decoding algorithm, Decoder-Expander Based Factorisation.
arXiv Detail & Related papers (2020-04-10T16:26:11Z)
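The LRPC entry above defines a parity-check matrix whose entries live in a low-dimensional subspace F of a large field. The key algebraic fact exploited by the LRPC decoder, and, per the abstract, by the inverting algorithm of the present paper, is that the syndrome coordinates of an error with support E then lie in the product space E·F, from which E can be recovered. The following self-contained Python sketch was written for this summary and is not taken from any of the papers; the field size, toy dimensions and all names are illustrative assumptions.

```python
# Illustrative sketch only: checks that syndrome coordinates of a homogeneous error,
# computed with an LRPC-style parity-check matrix, lie in the product space E.F.
import random

M, MOD = 8, 0x11B          # GF(2^8) with the AES modulus x^8 + x^4 + x^3 + x + 1

def gf_mul(a, b):
    """Multiplication in GF(2^M) = GF(2)[x]/(MOD), elements stored as M-bit integers."""
    res = 0
    while b:
        if b & 1:
            res ^= a
        b >>= 1
        a <<= 1
        if a >> M:
            a ^= MOD
    return res

def gf2_basis(elems):
    """Row-reduce a set of M-bit integers over GF(2); returns {pivot bit: reduced row}."""
    pivots = {}
    for x in elems:
        cur = x
        while cur:
            hb = cur.bit_length() - 1
            if hb in pivots:
                cur ^= pivots[hb]
            else:
                pivots[hb] = cur
                break
    return pivots

def in_span(x, pivots):
    """Is x in the GF(2)-span described by the pivots dictionary?"""
    cur = x
    while cur:
        hb = cur.bit_length() - 1
        if hb not in pivots:
            return False
        cur ^= pivots[hb]
    return True

def random_subspace_basis(dim, rng=random):
    """dim GF(2)-linearly independent elements of GF(2^M)."""
    basis = []
    while len(basis) < dim:
        cand = rng.randrange(1, 1 << M)
        if len(gf2_basis(basis + [cand])) == len(basis) + 1:
            basis.append(cand)
    return basis

def random_combination(basis, rng=random):
    """A uniformly random GF(2)-linear combination of the basis elements."""
    e = 0
    for b in basis:
        if rng.getrandbits(1):
            e ^= b
    return e

n, k, d, r = 10, 5, 2, 2   # toy parameters, far too small for any security
F = random_subspace_basis(d)                 # small space containing the entries of H
E = random_subspace_basis(r)                 # support of the error e
H = [[random_combination(F) for _ in range(n)] for _ in range(n - k)]
e = [random_combination(E) for _ in range(n)]

# Syndrome s = H * e^T over GF(2^M); addition in characteristic 2 is XOR.
s = [0] * (n - k)
for i in range(n - k):
    for j in range(n):
        s[i] ^= gf_mul(H[i][j], e[j])

# Every syndrome coordinate is a sum of products of elements of F and E, so it lies
# in the span of the pairwise products of the two bases.
EF = gf2_basis([gf_mul(f, g) for f in F for g in E])
print(all(in_span(si, EF) for si in s))      # True
```

The LRPC decoding algorithm builds on exactly this containment: it recovers the error support from the space spanned by the syndrome coordinates and then solves a linear system for the error itself.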