A Sub-Quadratic Time Algorithm for Robust Sparse Mean Estimation
- URL: http://arxiv.org/abs/2403.04726v1
- Date: Thu, 7 Mar 2024 18:23:51 GMT
- Title: A Sub-Quadratic Time Algorithm for Robust Sparse Mean Estimation
- Authors: Ankit Pensia
- Abstract summary: We study the algorithmic problem of sparse mean estimation in the presence of adversarial outliers.
Our main contribution is an algorithm for robust sparse mean estimation which runs in \emph{subquadratic} time using $\mathrm{poly}(k,\log d,1/\epsilon)$ samples.
- Score: 6.853165736531941
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the algorithmic problem of sparse mean estimation in the presence of
adversarial outliers. Specifically, the algorithm observes a \emph{corrupted}
set of samples from $\mathcal{N}(\mu,\mathbf{I}_d)$, where the unknown mean
$\mu \in \mathbb{R}^d$ is constrained to be $k$-sparse. A series of prior works
has developed efficient algorithms for robust sparse mean estimation with
sample complexity $\mathrm{poly}(k,\log d, 1/\epsilon)$ and runtime $d^2
\mathrm{poly}(k,\log d,1/\epsilon)$, where $\epsilon$ is the fraction of
contamination. In particular, the fastest runtime of existing algorithms is
quadratic ($\Omega(d^2)$), which can be prohibitive in high dimensions. This
quadratic barrier in the runtime stems from the reliance of these algorithms on
the sample covariance matrix, which is of size $d^2$. Our main contribution is
an algorithm for robust sparse mean estimation which runs in
\emph{subquadratic} time using $\mathrm{poly}(k,\log d,1/\epsilon)$ samples. We
also provide analogous results for robust sparse PCA. Our results build on
algorithmic advances in detecting weak correlations, a generalized version of
the light-bulb problem by Valiant.
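As a rough illustration of the problem setup (not the paper's subquadratic algorithm), the sketch below generates an $\epsilon$-corrupted sample from $\mathcal{N}(\mu,\mathbf{I}_d)$ with a $k$-sparse mean and applies a naive baseline: a coordinate-wise median followed by top-$k$ truncation. This baseline runs in $O(nd)$ time and avoids the $d^2$ covariance matrix entirely, but its error guarantees are weaker than those of the algorithms discussed in the abstract; all parameter values and the crude corruption model are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# illustrative parameters: dimension, sparsity, sample size, corruption fraction
d, k, n, eps = 1000, 10, 500, 0.1

# unknown k-sparse mean (support on the first k coordinates for simplicity)
mu = np.zeros(d)
mu[:k] = 1.0

# inliers drawn from N(mu, I_d); an eps-fraction of rows replaced by outliers
X = rng.standard_normal((n, d)) + mu
m = int(eps * n)
X[:m] = 10.0  # crude stand-in for adversarial corruption

# baseline estimator: coordinate-wise median, then keep the k largest entries
med = np.median(X, axis=0)
est = np.zeros(d)
top = np.argsort(np.abs(med))[-k:]
est[top] = med[top]

err = np.linalg.norm(est - mu)
```

The coordinate-wise median is robust in each coordinate separately, so the truncation step recovers the support here, but the resulting $\ell_2$ error grows with $\sqrt{k}$; closing the gap to the optimal error while staying below quadratic time in $d$ is what the techniques above address.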
Related papers
- Implicit High-Order Moment Tensor Estimation and Learning Latent Variable Models [39.33814194788341]
We study the task of learning latent-variable models.
Motivated by such applications, we develop a general efficient algorithm for implicit moment computation.
By leveraging our general algorithm, we obtain the first polynomial-time learners for the following models.
arXiv Detail & Related papers (2024-11-23T23:13:24Z) - Do you know what q-means? [50.045011844765185]
Clustering is one of the most important tools for analysis of large datasets.
We present an improved version of the "$q$-means" algorithm for clustering.
We also present a "dequantized" algorithm for $\varepsilon$-$k$-means which runs in $O\big(\frac{k^2}{\varepsilon^2}(\sqrt{k}d + \log(Nd))\big)$ time.
arXiv Detail & Related papers (2023-08-18T17:52:12Z) - Near-Optimal Bounds for Learning Gaussian Halfspaces with Random
Classification Noise [50.64137465792738]
We show that any efficient SQ algorithm for the problem requires sample complexity at least $\Omega(d^{1/2}/(\max\{p, \epsilon\})^2)$.
Our lower bound suggests that this quadratic dependence on $1/\epsilon$ is inherent for efficient algorithms.
arXiv Detail & Related papers (2023-07-13T18:59:28Z) - Fast $(1+\varepsilon)$-Approximation Algorithms for Binary Matrix
Factorization [54.29685789885059]
We introduce efficient $(1+\varepsilon)$-approximation algorithms for the binary matrix factorization (BMF) problem.
The goal is to approximate $\mathbf{A}$ as a product of low-rank factors.
Our techniques generalize to other common variants of the BMF problem.
arXiv Detail & Related papers (2023-06-02T18:55:27Z) - Sparse PCA Beyond Covariance Thresholding [2.311583680973075]
We show that for every $t \ll k$ there exists an algorithm running in time $n \cdot d^{O(t)}$ that solves this problem as long as the signal strength is $\gtrsim \frac{k}{\sqrt{nt}}\sqrt{\ln(2 + td/k^2)}$.
We also provide polynomial-time algorithms for the sparse planted vector problem that have better guarantees than the state of the art in some regimes.
arXiv Detail & Related papers (2023-02-20T18:45:24Z) - List-Decodable Sparse Mean Estimation via Difference-of-Pairs Filtering [42.526664955704746]
We develop a novel, conceptually simpler technique for list-decodable sparse mean estimation.
In particular, for distributions with "certifiably bounded" $t$-th moments in $k$-sparse directions, our algorithm achieves error of $(1/\alpha)^{O(1/t)}$ with sample complexity $m = (k\log(n))^{O(t)}/\alpha$ and running time $\mathrm{poly}(mn^t)$.
For the special case of Gaussian inliers, our algorithm achieves the optimal error guarantee of $\Theta(\sqrt{\log(1/\alpha)})$.
arXiv Detail & Related papers (2022-06-10T17:38:18Z) - Robust Sparse Mean Estimation via Sum of Squares [42.526664955704746]
We study the problem of high-dimensional sparse mean estimation in the presence of an $epsilon$-fraction of adversarial outliers.
Our algorithms follow the Sum-of-Squares based proofs-to-algorithms approach.
arXiv Detail & Related papers (2022-06-07T16:49:54Z) - Distribution Compression in Near-linear Time [27.18971095426405]
We introduce Compress++, a simple meta-procedure for speeding up any thinning algorithm.
It delivers $\sqrt{n}$ points with $\mathcal{O}(\sqrt{\log n/n})$ integration error and better-than-Monte-Carlo maximum mean discrepancy.
arXiv Detail & Related papers (2021-11-15T17:42:57Z) - Clustering Mixture Models in Almost-Linear Time via List-Decodable Mean
Estimation [58.24280149662003]
We study the problem of list-decodable mean estimation, where an adversary can corrupt a majority of the dataset.
We develop new algorithms for list-decodable mean estimation, achieving nearly-optimal statistical guarantees.
arXiv Detail & Related papers (2021-06-16T03:34:14Z) - List-Decodable Mean Estimation in Nearly-PCA Time [50.79691056481693]
We study the fundamental task of list-decodable mean estimation in high dimensions.
Our algorithm runs in time $\widetilde{O}(ndk)$ for all $k = O(\sqrt{d}) \cup \Omega(d)$, where $n$ is the size of the dataset.
A variant of our algorithm has runtime $\widetilde{O}(ndk)$ for all $k$, at the expense of an $O(\sqrt{\log k})$ factor in the recovery guarantee.
arXiv Detail & Related papers (2020-11-19T17:21:37Z) - Streaming Complexity of SVMs [110.63976030971106]
We study the space complexity of solving the bias-regularized SVM problem in the streaming model.
We show that for both problems, for dimensions of $\frac{1}{\lambda\epsilon}$, one can obtain streaming algorithms with space polynomially smaller than $\frac{1}{\lambda\epsilon}$.
arXiv Detail & Related papers (2020-07-07T17:10:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.