Algorithms for Sparse LPN and LSPN Against Low-noise
- URL: http://arxiv.org/abs/2407.19215v4
- Date: Mon, 25 Nov 2024 10:16:22 GMT
- Title: Algorithms for Sparse LPN and LSPN Against Low-noise
- Authors: Xue Chen, Wenxuan Shu, Zhaienhe Zhou
- Abstract summary: We study learning algorithms for two variants of the classical learning parity with noise (LPN) problem.
We provide a new algorithmic framework that improves the state of the art for a wide range of parameters.
- Score: 1.2143710013809321
- Abstract: We study learning algorithms for two sparse variants of the classical learning parity with noise (LPN) problem. We provide a new algorithmic framework that improves the state of the art for a wide range of parameters. This framework has a simple structure different from previous approaches: the first step is a domain reduction via the knowledge of sparsity; then it solves sub-problems by Gaussian elimination. Let $n$ be the dimension, $k$ be the sparsity parameter, and $\eta$ be the noise rate such that each label gets flipped with probability $\eta$. The sparse LPN problem (with various parameters) has wide applications in cryptography. For $m=n^{1+(\frac{k}{2}-1)(1-\delta)}$ with $\delta \in (0,1)$, the best known algorithm has running time $\min\{e^{\eta n}, e^{\tilde{O}(n^{\delta})}\}$. We present a distinguishing algorithm for sparse LPN with time complexity $e^{O(\eta \cdot n^{\frac{1+\delta}{2}})}$ and sample complexity $m=n^{1+(\frac{k-1}{2})(1-\delta)}$. Furthermore, we show a learning algorithm for sparse LPN with time complexity $e^{\tilde{O}(\eta \cdot n^{\frac{1+\delta}{2}})}$ and $m=\max\{1,\frac{\eta \cdot n^{\frac{1+\delta}{2}}}{k^2}\} \cdot \tilde{O}(n)^{1+(\frac{k-1}{2})(1-\delta)}$ samples. The learning sparse parity with noise (LSPN) problem assumes the hidden parity is $k$-sparse. LSPN has been extensively studied in both learning theory and cryptography. However, the state of the art needs ${n \choose k/2} = \Omega(n/k)^{k/2}$ time for a wide range of parameters, while the simple enumeration algorithm takes ${n \choose k}=O(n/k)^k$ time. Our LSPN algorithm runs in time $O(\eta \cdot n/k)^k$ for any $\eta$ and $k$, improving the state of the art for learning sparse parity in a wide range of parameters.
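To make the two-step framework concrete, below is a minimal, self-contained Python sketch. All parameter values and names are illustrative assumptions, not the paper's actual procedure; in particular, the paper copes with noise by solving many sub-problems, whereas this toy version simply hopes that the equations selected in the sub-domain are noise-free. Step 1 keeps only the $k$-sparse samples supported inside a small sub-domain (domain reduction); Step 2 solves the restricted system over $\mathbb{F}_2$ by Gaussian elimination.

```python
import random

# Toy sketch of the two-step framework for sparse LPN over F_2: samples are
# (a, b) with k-sparse a in {0,1}^n and b = <a, s> + Bernoulli(eta) noise for
# a hidden secret s. Vectors are encoded as Python ints (bitmasks).

def gaussian_elim_gf2(rows, labels):
    """Solve the system {<r, x> = b} over GF(2); return one solution or None."""
    pivots = {}  # pivot column -> (reduced row, label)
    for r, b in zip(rows, labels):
        while r:
            col = r.bit_length() - 1
            if col not in pivots:
                break
            pr, pb = pivots[col]
            r, b = r ^ pr, b ^ pb
        if r == 0:
            if b == 1:
                return None  # inconsistent: a noisy equation slipped in
            continue
        pivots[r.bit_length() - 1] = (r, b)
    x = 0
    for col in sorted(pivots):  # back-substitute; free variables set to 0
        r, b = pivots[col]
        low = r & ((1 << col) - 1)
        if b ^ (bin(low & x).count("1") & 1):
            x |= 1 << col
    return x

def sparse_lpn_demo(n=40, k=3, eta=0.002, m=3000, domain=12, seed=7):
    rng = random.Random(seed)
    secret = rng.getrandbits(n)
    samples = []
    for _ in range(m):
        a = sum(1 << i for i in rng.sample(range(n), k))  # k-sparse row
        b = bin(a & secret).count("1") & 1                # inner product mod 2
        if rng.random() < eta:                            # low-noise label flip
            b ^= 1
        samples.append((a, b))
    # Step 1 (domain reduction): sparsity makes samples supported entirely
    # inside the first `domain` coordinates reasonably common.
    mask = (1 << domain) - 1
    sub = [(a, b) for a, b in samples if a & ~mask == 0]
    # Step 2: Gaussian elimination recovers the secret restricted to the
    # sub-domain, provided no flipped label was selected.
    sol = gaussian_elim_gf2([a for a, _ in sub], [b for _, b in sub])
    return sol, secret & mask

guess, truth = sparse_lpn_demo()
print(guess == truth)  # typically True at this noise rate; None/False if a
                       # flipped label landed in the sub-domain
```

At a low noise rate $\eta$, the restricted equations are all correct with decent probability, which loosely mirrors why the running times above degrade gracefully with $\eta$.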
Related papers
- A Near-optimal Algorithm for Learning Margin Halfspaces with Massart Noise [36.29182619215646]
We study the problem of PAC learning $\gamma$-margin halfspaces in the presence of Massart noise.
Our algorithm is simple and practical, relying on online SGD on a carefully selected sequence of convex losses; a toy sketch of online SGD on a convex surrogate loss appears after this list.
arXiv Detail & Related papers (2025-01-16T17:44:18Z) - Implicit High-Order Moment Tensor Estimation and Learning Latent Variable Models [39.33814194788341]
We study the task of learning latent-variable models.
Motivated by such applications, we develop a general efficient algorithm for implicit moment computation.
By leveraging our general algorithm, we obtain the first efficient learners for the following models.
arXiv Detail & Related papers (2024-11-23T23:13:24Z) - Faster Private Minimum Spanning Trees [11.72102598708538]
We present a new differentially private MST algorithm that matches the utility of existing in-place methods while running in asymptotically faster time.
We present a data structure that allows us to sample a noisy minimum weight edge among at most $O(n^2)$ cut edges in $O(\sqrt{n} \log n)$ time; a toy sketch of exponential-mechanism-style noisy-minimum selection appears after this list.
arXiv Detail & Related papers (2024-08-13T16:00:30Z) - A Sub-Quadratic Time Algorithm for Robust Sparse Mean Estimation [6.853165736531941]
We study the algorithmic problem of sparse mean estimation in the presence of adversarial outliers.
Our main contribution is an algorithm for robust sparse mean estimation which runs in subquadratic time using $\mathrm{poly}(k, \log d, 1/\epsilon)$ samples.
arXiv Detail & Related papers (2024-03-07T18:23:51Z) - Do you know what q-means? [50.045011844765185]
Clustering is one of the most important tools for analysis of large datasets.
We present an improved version of the "$q$-means" algorithm for clustering.
We also present a "dequantized" algorithm for $\varepsilon$-$k$-means which runs in $O\big(\frac{k^2}{\varepsilon^2}(\sqrt{k}d + \log(Nd))\big)$ time.
arXiv Detail & Related papers (2023-08-18T17:52:12Z) - Efficiently Learning One-Hidden-Layer ReLU Networks via Schur Polynomials [50.90125395570797]
We study the problem of PAC learning a linear combination of $k$ ReLU activations under the standard Gaussian distribution on $\mathbb{R}^d$ with respect to the square loss.
Our main result is an efficient algorithm for this learning task with sample and computational complexity $(dk/\epsilon)^{O(k)}$, where $\epsilon>0$ is the target accuracy.
arXiv Detail & Related papers (2023-07-24T14:37:22Z) - Near-Optimal Bounds for Learning Gaussian Halfspaces with Random Classification Noise [50.64137465792738]
We show that any efficient SQ algorithm for the problem requires sample complexity at least $\Omega(d^{1/2}/(\max\{p, \epsilon\})^2)$.
Our lower bound suggests that this quadratic dependence on $1/\epsilon$ is inherent for efficient algorithms.
arXiv Detail & Related papers (2023-07-13T18:59:28Z) - Most Neural Networks Are Almost Learnable [52.40331776572531]
We show that for any fixed $\epsilon>0$ and depth $i$, there is a poly-time algorithm that learns random Xavier networks of depth $i$.
The algorithm runs in time and sample complexity of $(\bar{d})^{\mathrm{poly}(\epsilon^{-1})}$, where $\bar{d}$ is the size of the network.
For some cases of sigmoid and ReLU-like activations, the bound can be improved to $(\bar{d})^{\mathrm{polylog}(\epsilon^{-1})}$.
arXiv Detail & Related papers (2023-05-25T22:27:42Z) - Private estimation algorithms for stochastic block models and mixture models [63.07482515700984]
We introduce general tools for designing efficient private estimation algorithms.
We give the first efficient $(\epsilon, \delta)$-differentially private algorithm for both weak recovery and exact recovery.
arXiv Detail & Related papers (2023-01-11T09:12:28Z) - Streaming Complexity of SVMs [110.63976030971106]
We study the space complexity of solving the bias-regularized SVM problem in the streaming model.
We show that, for both problems, for dimensions $d = 1, 2$, one can obtain streaming algorithms with space polynomially smaller than $\frac{1}{\lambda\epsilon}$.
arXiv Detail & Related papers (2020-07-07T17:10:00Z)
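Following up on the Massart-noise halfspace entry above, here is a minimal sketch of online SGD on a convex surrogate loss for halfspaces. It uses the classical LeakyReLU-style convex loss as a stand-in; the paper's carefully selected sequence of convex losses is not reproduced, and all names and constants are illustrative assumptions.

```python
import numpy as np

def online_sgd_halfspace(stream, d, lam=0.1, lr=0.05):
    """Projected online SGD on l(w; x, y) = LeakyReLU_lam(-y * <w, x>),
    a standard convex surrogate for halfspace learning under label noise
    (illustrative; not the paper's exact loss sequence)."""
    w = np.zeros(d)
    for x, y in stream:  # y in {-1, +1}, x a length-d numpy array
        margin = -y * np.dot(w, x)
        # LeakyReLU_lam has slope (1 - lam) for t >= 0 and lam for t < 0,
        # so it is convex and penalizes misclassified points more heavily.
        slope = (1.0 - lam) if margin >= 0 else lam
        w += lr * slope * y * x        # gradient of the loss is -slope*y*x
        norm = np.linalg.norm(w)
        if norm > 1.0:                 # project back onto the unit ball
            w /= norm
    return w

# Tiny usage example on synthetic data with 10% random label noise.
rng = np.random.default_rng(0)
w_star = np.array([1.0, 0.0, 0.0])
X = rng.standard_normal((2000, 3))
y = np.sign(X @ w_star)
y[rng.random(2000) < 0.1] *= -1
w_hat = online_sgd_halfspace(zip(X, y), d=3)
print(np.dot(w_hat, w_star) / np.linalg.norm(w_hat))  # near 1 if well aligned
```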
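And as referenced in the "Faster Private Minimum Spanning Trees" entry, a naive exponential-mechanism-style selector for a noisy minimum-weight edge. The paper's contribution is a data structure that performs this kind of sampling over up to $O(n^2)$ cut edges in $O(\sqrt{n} \log n)$ time; this linear-time sketch only illustrates the selection rule, and all names and parameters are assumptions.

```python
import math
import random

def noisy_min_edge(edges, epsilon, sensitivity=1.0, rng=random):
    """Pick an edge with probability proportional to
    exp(-epsilon * weight / (2 * sensitivity)): the exponential mechanism
    with utility equal to the negated weight, so lighter edges are favored.

    `edges` is a list of (edge_id, weight); `sensitivity` bounds how much
    one individual's data can change any single weight.
    """
    w_min = min(w for _, w in edges)  # shift weights for numerical stability
    scores = [math.exp(-epsilon * (w - w_min) / (2.0 * sensitivity))
              for _, w in edges]
    total = sum(scores)
    r = rng.random() * total
    acc = 0.0
    for (eid, _), s in zip(edges, scores):
        acc += s
        if acc >= r:
            return eid
    return edges[-1][0]

# Example: the lightest edge is chosen most often, but not always.
edges = [("e1", 1.0), ("e2", 1.2), ("e3", 5.0)]
print(noisy_min_edge(edges, epsilon=2.0))
```

Smaller $\epsilon$ flattens the distribution (more privacy, less accuracy); repeating such selections across cuts, roughly as in a private Prim-style construction, is the kind of step the paper's data structure accelerates.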