A New Class of Algorithms for Finding Short Vectors in Lattices Lifted from Co-dimension $k$ Codes
- URL: http://arxiv.org/abs/2401.12383v1
- Date: Mon, 22 Jan 2024 22:17:41 GMT
- Title: A New Class of Algorithms for Finding Short Vectors in Lattices Lifted from Co-dimension $k$ Codes
- Authors: Robert Lin, Peter W. Shor
- Abstract summary: We introduce a new class of algorithms for finding a short vector in lattices defined by codes of co-dimension $k$ over $\mathbb{Z}_P^d$, where $P$ is prime.
The co-dimension $1$ case is solved by exploiting the packing properties of the projections mod $P$ of an initial set of non-lattice vectors onto a single dual codeword.
We show that it is possible to minimize the number of iterations, and thus the overall length expansion factor, to obtain a short lattice vector.
- Score: 1.8416014644193066
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We introduce a new class of algorithms for finding a short vector in lattices defined by codes of co-dimension $k$ over $\mathbb{Z}_P^d$, where $P$ is prime. The co-dimension $1$ case is solved by exploiting the packing properties of the projections mod $P$ of an initial set of non-lattice vectors onto a single dual codeword. The technical tools we introduce are sorting of the projections followed by single-step pairwise Euclidean reduction of the projections, resulting in monotonic convergence of the positive-valued projections to zero. The length of vectors grows by a geometric factor each iteration. For fixed $P$ and $d$, and large enough user-defined input sets, we show that it is possible to minimize the number of iterations, and thus the overall length expansion factor, to obtain a short lattice vector. Thus we obtain a novel approach for controlling the output length, which resolves an open problem posed by Noah Stephens-Davidowitz (the possibility of an approximation scheme for the shortest-vector problem (SVP) which does not reduce to near-exact SVP). In our approach, one may obtain short vectors even when the lattice dimension is quite large, e.g., 8000. For fixed $P$, the algorithm yields shorter vectors for larger $d$. We additionally present a number of extensions and generalizations of our fundamental co-dimension $1$ method. These include a method for obtaining many different lattice vectors by multiplying the dual codeword by an integer and then modding by $P$; a co-dimension $k$ generalization; a large input set generalization; and finally, a "block" generalization, which involves the replacement of pairwise (Euclidean) reduction by a $k$-party (non-Euclidean) reduction. The $k$-block generalization of our algorithm constitutes a class of polynomial-time algorithms indexed by $k\geq 2$, which yield successively improved approximations for the short vector problem.
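To make the co-dimension $1$ routine concrete, the following is a minimal Python sketch of the idea as described in the abstract, not the authors' implementation: it assumes the lattice $\{v \in \mathbb{Z}^d : \langle v, c\rangle \equiv 0 \pmod{P}\}$ for a single dual codeword $c$, starts from a user-sized pool of random $0/1$ vectors, and the names `codim1_short_vector` and `pool_size` are invented here for illustration.

```python
import numpy as np

def codim1_short_vector(c, P, pool_size=4096, seed=0):
    """Illustrative sketch of the co-dimension 1 idea: track the
    projections mod P of a pool of integer vectors onto the dual
    codeword c, sort them, and apply single-step pairwise Euclidean
    reduction so the positive projections shrink toward 0 while the
    vector lengths grow by a geometric factor each iteration."""
    rng = np.random.default_rng(seed)
    c = np.asarray(c, dtype=np.int64)
    # Large user-defined input set of short non-lattice vectors (0/1 entries).
    V = rng.integers(0, 2, size=(pool_size, len(c)), dtype=np.int64)
    proj = (V @ c) % P  # projections mod P onto the dual codeword

    while len(V) >= 2:
        # A nonzero vector whose projection vanishes mod P lies in the lattice.
        hits = V[(proj == 0) & np.any(V != 0, axis=1)]
        if len(hits):
            return hits[np.argmin(np.linalg.norm(hits, axis=1))]
        order = np.argsort(proj)        # sort the projections
        V, proj = V[order], proj[order]
        m = (len(V) // 2) * 2           # even-length prefix for pairing
        # Single-step pairwise Euclidean reduction: difference adjacent
        # sorted pairs, so new projections stay in [0, P) and decrease
        # monotonically, while entries roughly double per round.
        V = V[1:m:2] - V[0:m:2]
        proj = proj[1:m:2] - proj[0:m:2]
    return None  # pool exhausted; retry with a larger pool_size
```

For example, with $P = 127$ and a random codeword `c` over $\mathbb{Z}_P^{64}$, the routine returns a lattice vector after at most about $\log_2(\texttt{pool\_size})$ halving rounds; the paper's length-control analysis (choosing the input set large enough to minimize the iteration count and hence the overall expansion factor) is not reproduced in this sketch.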
Related papers
- Efficiently Learning One-Hidden-Layer ReLU Networks via Schur Polynomials [50.90125395570797]
We study the problem of PAC learning a linear combination of $k$ ReLU activations under the standard Gaussian distribution on $\mathbb{R}^d$ with respect to the square loss.
Our main result is an efficient algorithm for this learning task with sample and computational complexity $(dk/\epsilon)^{O(k)}$, where $\epsilon>0$ is the target accuracy.
arXiv Detail & Related papers (2023-07-24T14:37:22Z)
- Projection-Free Online Convex Optimization via Efficient Newton Iterations [10.492474737007722]
This paper presents new projection-free algorithms for Online Convex Optimization (OCO) over a convex domain $\mathcal{K} \subset \mathbb{R}^d$.
arXiv Detail & Related papers (2023-06-19T18:48:53Z)
- Orthogonal Directions Constrained Gradient Method: from non-linear equality constraints to Stiefel manifold [16.099883128428054]
We propose a novel algorithm, the Orthogonal Directions Constrained Gradient Method (ODCGM).
ODCGM only requires computing a projection onto a vector space.
We show that ODCGM exhibits near-optimal oracle complexities.
arXiv Detail & Related papers (2023-03-16T12:25:53Z)
- Pseudonorm Approachability and Applications to Regret Minimization [73.54127663296906]
We convert high-dimensional $\ell_\infty$-approachability problems to low-dimensional pseudonorm approachability problems.
We develop an algorithmic theory of pseudonorm approachability, analogous to previous work on approachability for $\ell_2$ and other norms.
arXiv Detail & Related papers (2023-02-03T03:19:14Z)
- Halpern-Type Accelerated and Splitting Algorithms For Monotone Inclusions [14.445131747570962]
We develop a new type of accelerated algorithms to solve some classes of maximally monotone equations.
Our methods rely on a so-called Halpern-type fixed-point iteration [32], which has recently been exploited by a number of researchers.
arXiv Detail & Related papers (2021-10-15T15:26:32Z)
- Clustering Mixture Models in Almost-Linear Time via List-Decodable Mean Estimation [58.24280149662003]
We study the problem of list-decodable mean estimation, where an adversary can corrupt a majority of the dataset.
We develop new algorithms for list-decodable mean estimation, achieving nearly-optimal statistical guarantees.
arXiv Detail & Related papers (2021-06-16T03:34:14Z)
- Small Covers for Near-Zero Sets of Polynomials and Learning Latent Variable Models [56.98280399449707]
We show that there exists an $\epsilon$-cover for $S$ of cardinality $M = (k/\epsilon)^{O_d(k^{1/d})}$.
Building on our structural result, we obtain significantly improved learning algorithms for several fundamental high-dimensional probabilistic models with hidden variables.
arXiv Detail & Related papers (2020-12-14T18:14:08Z)
- FANOK: Knockoffs in Linear Time [73.5154025911318]
We describe a series of algorithms that efficiently implement Gaussian model-X knockoffs to control the false discovery rate on large-scale feature selection problems.
We test our methods on problems with $p$ as large as $500,000$.
arXiv Detail & Related papers (2020-06-15T21:55:34Z)
- Robust Sub-Gaussian Principal Component Analysis and Width-Independent Schatten Packing [22.337756118270217]
We develop two methods for a fundamental statistical task: given an $\epsilon$-corrupted set of $n$ samples from a $d$-dimensional sub-Gaussian distribution, return an approximate top eigenvector of its covariance.
Our first robust algorithm runs iterative filtering, returns an approximate eigenvector, and is based on a simple filtering approach.
Our second, which attains a slightly worse approximation factor, runs in nearly-trivial time and iterations under a mild spectral gap assumption.
arXiv Detail & Related papers (2020-06-12T07:45:38Z)
- Private Learning of Halfspaces: Simplifying the Construction and Reducing the Sample Complexity [63.29100726064574]
We present a differentially private learner for halfspaces over a finite grid $G$ in $\mathbb{R}^d$ with sample complexity $\approx d^{2.5}\cdot 2^{\log^*|G|}$.
The building block for our learner is a new differentially private algorithm for approximately solving the linear feasibility problem.
arXiv Detail & Related papers (2020-04-16T16:12:10Z)