Algorithms for Sparse LPN and LSPN Against Low-noise
- URL: http://arxiv.org/abs/2407.19215v4
- Date: Mon, 25 Nov 2024 10:16:22 GMT
- Title: Algorithms for Sparse LPN and LSPN Against Low-noise
- Authors: Xue Chen, Wenxuan Shu, Zhaienhe Zhou
- Abstract summary: We study learning algorithms for two variants of the classical learning parity with noise (LPN) problem.
We provide a new algorithmic framework that improves the state of the art for a wide range of parameters.
- Score: 1.2143710013809321
- Abstract: We study learning algorithms for two sparse variants of the classical learning parity with noise (LPN) problem. We provide a new algorithmic framework that improves the state of the art for a wide range of parameters. This framework has a simple structure different from previous approaches: the first step is a domain reduction via the knowledge of sparsity; then it solves sub-problems by Gaussian elimination. Let $n$ be the dimension, $k$ be the sparsity parameter, and $\eta$ be the noise rate such that each label gets flipped with probability $\eta$. The sparse LPN problem (with various parameters) has wide applications in cryptography. For $m=n^{1+(\frac{k}{2}-1)(1-\delta)}$ with $\delta \in (0,1)$, the best known algorithm has running time $\min\{e^{\eta n}, e^{\tilde{O}(n^{\delta})}\}$. We present a distinguishing algorithm for sparse LPN with time complexity $e^{O(\eta \cdot n^{\frac{1+\delta}{2}})}$ and sample complexity $m=n^{1+(\frac{k-1}{2})(1-\delta)}$. Furthermore, we show a learning algorithm for sparse LPN in time complexity $e^{\tilde{O}(\eta \cdot n^{\frac{1+\delta}{2}})}$ and $m=\max\{1,\frac{\eta \cdot n^{\frac{1+\delta}{2}}}{k^2}\} \cdot \tilde{O}(n)^{1+(\frac{k-1}{2})(1-\delta)}$ samples. The learning sparse parity with noise (LSPN) problem assumes the hidden parity is $k$-sparse. LSPN has been extensively studied in both learning theory and cryptography. However, the state of the art needs ${n \choose k/2} = \Omega(n/k)^{k/2}$ time for a wide range of parameters while the simple enumeration algorithm takes ${n \choose k}=O(n/k)^k$ time. Our LSPN algorithm runs in time $O(\eta \cdot n/k)^k$ for any $\eta$ and $k$. This improves the state-of-the-art for learning sparse parity in a wide range of parameters.
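As a concrete illustration of the framework's second step, the toy Python sketch below implements only the classical Gaussian-elimination baseline for LPN, not the paper's algorithm: draw $n$ equations, which are simultaneously noise-free with probability $(1-\eta)^n$, solve the resulting square system over GF(2), and test the candidate secret against the remaining samples, repeating for an expected $e^{O(\eta n)}$ trials. The sample generator, the acceptance threshold, and all function names are illustrative assumptions, not the paper's code.

```python
# Toy sketch of the classical Gaussian-elimination baseline for LPN over GF(2).
# Assumptions: dense random samples, a simple statistical acceptance test.
import numpy as np

rng = np.random.default_rng(0)

def lpn_samples(secret, eta, m):
    """Draw m noisy LPN samples (A, b) with b = A @ secret + Bernoulli(eta) noise over GF(2)."""
    n = secret.size
    A = rng.integers(0, 2, size=(m, n), dtype=np.uint8)
    noise = (rng.random(m) < eta).astype(np.uint8)
    b = ((A.astype(np.int64) @ secret + noise) % 2).astype(np.uint8)
    return A, b

def solve_gf2(A, b):
    """Gauss-Jordan elimination over GF(2) for a square system; None if singular."""
    A, b = A.copy(), b.copy()
    n = A.shape[1]
    for col in range(n):
        pivots = np.nonzero(A[col:, col])[0]
        if pivots.size == 0:
            return None                      # singular draw: the outer loop resamples
        p = col + pivots[0]
        A[[col, p]], b[[col, p]] = A[[p, col]], b[[p, col]]
        rows = A[:, col].astype(bool)
        rows[col] = False
        A[rows] ^= A[col]                    # eliminate the pivot column everywhere else
        b[rows] ^= b[col]
    return b                                 # A is now the identity, so b holds the solution

def recover_secret(A, b, eta, n, trials):
    """Resample n equations until all are noise-free, i.e. ~(1-eta)^{-n} expected trials."""
    for _ in range(trials):
        idx = rng.choice(A.shape[0], size=n, replace=False)
        x = solve_gf2(A[idx], b[idx])
        if x is None:
            continue
        # the true secret disagrees with roughly an eta fraction of all samples
        err = ((A.astype(np.int64) @ x + b) % 2).mean()
        if err < (eta + 0.5) / 2:            # illustrative acceptance threshold
            return x
    return None

n, k, eta = 24, 4, 0.05
secret = np.zeros(n, dtype=np.uint8)
secret[rng.choice(n, size=k, replace=False)] = 1   # k-sparse hidden parity
A, b = lpn_samples(secret, eta, m=2000)
x = recover_secret(A, b, eta, n, trials=200)
print("recovered:", x is not None and np.array_equal(x, secret))
```

The paper's framework pays this $e^{O(\eta \cdot n)}$-type cost only on sub-problems: the domain-reduction step exploits sparsity to shrink the effective dimension before Gaussian elimination is invoked, which is where the improved exponents quoted in the abstract arise.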
Related papers
- Implicit High-Order Moment Tensor Estimation and Learning Latent Variable Models [39.33814194788341]
We study the task of learning latent-variable models.
Motivated by such applications, we develop a general efficient algorithm for implicit moment computation.
By leveraging our general algorithm, we obtain the first-time learners for the following models.
arXiv Detail & Related papers (2024-11-23T23:13:24Z) - Towards Optimal Circuit Size for Sparse Quantum State Preparation [10.386753939552872]
We consider the preparation for $n$-qubit sparse quantum states with $s$ non-zero amplitudes and propose two algorithms.
The first algorithm uses $O(ns/\log n + n)$ gates, improving upon previous methods by a factor of $O(\log n)$.
The second algorithm is tailored for binary strings that exhibit a short Hamiltonian path.
arXiv Detail & Related papers (2024-04-08T02:13:40Z) - A Sub-Quadratic Time Algorithm for Robust Sparse Mean Estimation [6.853165736531941]
We study the algorithmic problem of sparse mean estimation in the presence of adversarial outliers.
Our main contribution is an algorithm for robust sparse mean estimation which runs in subquadratic time using $\mathrm{poly}(k, \log d, 1/\epsilon)$ samples.
arXiv Detail & Related papers (2024-03-07T18:23:51Z) - Efficiently Learning One-Hidden-Layer ReLU Networks via Schur Polynomials [50.90125395570797]
We study the problem of PAC learning a linear combination of $k$ ReLU activations under the standard Gaussian distribution on $\mathbb{R}^d$ with respect to the square loss.
Our main result is an efficient algorithm for this learning task with sample and computational complexity $(dk/\epsilon)^{O(k)}$, where $\epsilon > 0$ is the target accuracy.
arXiv Detail & Related papers (2023-07-24T14:37:22Z) - Near-Optimal Bounds for Learning Gaussian Halfspaces with Random Classification Noise [50.64137465792738]
We show that any efficient SQ algorithm for the problem requires sample complexity at least $\Omega(d^{1/2}/(\max\{p, \epsilon\})^2)$.
Our lower bound suggests that this quadratic dependence on $1/epsilon$ is inherent for efficient algorithms.
arXiv Detail & Related papers (2023-07-13T18:59:28Z) - Sparse PCA Beyond Covariance Thresholding [2.311583680973075]
We show that for every $t \ll k$ there exists an algorithm running in time $n \cdot d^{O(t)}$ that solves this problem as long as $\lambda \gtrsim \frac{k}{\sqrt{nt}} \sqrt{\ln(2 + td/k^2)}$.
We also provide algorithms for the sparse planted vector problem that have better guarantees than the state of the art in some regimes.
arXiv Detail & Related papers (2023-02-20T18:45:24Z) - Sketching Algorithms and Lower Bounds for Ridge Regression [65.0720777731368]
We give a sketching-based iterative algorithm that computes $1+\varepsilon$ approximate solutions for the ridge regression problem; a minimal sketch-and-solve illustration appears after this list.
We also show that this algorithm can be used to give faster algorithms for kernel ridge regression.
arXiv Detail & Related papers (2022-04-13T22:18:47Z) - Threshold Phenomena in Learning Halfspaces with Massart Noise [56.01192577666607]
We study the problem of PAC learning halfspaces on $\mathbb{R}^d$ with Massart noise under Gaussian marginals.
Our results qualitatively characterize the complexity of learning halfspaces in the Massart model.
arXiv Detail & Related papers (2021-08-19T16:16:48Z) - Streaming Complexity of SVMs [110.63976030971106]
We study the space complexity of solving the bias-regularized SVM problem in the streaming model.
We show that for both problems, for dimensions of $\frac{1}{\lambda\epsilon}$, one can obtain streaming algorithms with space polynomially smaller than $\frac{1}{\lambda\epsilon}$.
arXiv Detail & Related papers (2020-07-07T17:10:00Z) - Improved quantum algorithm for A-optimal projection [4.248054546466641]
This paper corrects the time complexity of Duan et al.'s algorithm to $\frac{\kappa^{4s}\sqrt{k^s}}{\epsilon^{s}}\,\mathrm{polylog}$.
Our algorithm achieves at least a polynomial speedup compared to Duan et al.'s algorithm.
arXiv Detail & Related papers (2020-06-10T09:31:53Z) - Model-Free Reinforcement Learning: from Clipped Pseudo-Regret to Sample Complexity [59.34067736545355]
Given an MDP with $S$ states, $A$ actions, the discount factor $\gamma \in (0,1)$, and an approximation threshold $\epsilon > 0$, we provide a model-free algorithm to learn an $\epsilon$-optimal policy.
For small enough $\epsilon$, we show an algorithm with improved sample complexity.
arXiv Detail & Related papers (2020-06-06T13:34:41Z)
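Returning to the ridge-regression entry above ("Sketching Algorithms and Lower Bounds for Ridge Regression"): the fragment below shows the basic one-shot sketch-and-solve idea in Python under a standard Gaussian-sketch assumption. The cited paper's algorithm is iterative with $(1+\varepsilon)$ guarantees; this is only an assumption-laden illustration of how sketching compresses the problem, and all names are hypothetical.

```python
# One-shot sketch-and-solve for ridge regression: min_x ||Ax - b||^2 + lam * ||x||^2.
# A dense Gaussian sketch is used here for clarity only.
import numpy as np

rng = np.random.default_rng(1)

def ridge_sketch_and_solve(A, b, lam, sketch_rows):
    """Compress the m rows of A with a Gaussian sketch S, then solve the
    sketched normal equations; this approximates the exact ridge solution
    when sketch_rows is large enough (a subspace-embedding condition)."""
    m, d = A.shape
    S = rng.normal(0.0, 1.0 / np.sqrt(sketch_rows), size=(sketch_rows, m))
    SA, Sb = S @ A, S @ b
    return np.linalg.solve(SA.T @ SA + lam * np.eye(d), SA.T @ Sb)

# Tiny usage check against the exact ridge solution.
m, d, lam = 5000, 20, 0.1
A = rng.normal(size=(m, d))
x_true = rng.normal(size=d)
b = A @ x_true + 0.01 * rng.normal(size=m)
x_exact = np.linalg.solve(A.T @ A + lam * np.eye(d), A.T @ b)
x_sketch = ridge_sketch_and_solve(A, b, lam, sketch_rows=400)
print("relative error:", np.linalg.norm(x_sketch - x_exact) / np.linalg.norm(x_exact))
```

Note that applying a dense Gaussian sketch is itself expensive; in practice fast sketches (e.g., sparse or subsampled-transform sketches) make the compression step cheap, which is why sketching pays off when $m \gg d$.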
This list is automatically generated from the titles and abstracts of the papers in this site.