Average Case Column Subset Selection for Entrywise $\ell_1$-Norm Loss
- URL: http://arxiv.org/abs/2004.07986v1
- Date: Thu, 16 Apr 2020 22:57:06 GMT
- Title: Average Case Column Subset Selection for Entrywise $\ell_1$-Norm Loss
- Authors: Zhao Song, David P. Woodruff, Peilin Zhong
- Abstract summary: It is known that in the worst case, to obtain a good rank-$k$ approximation to a matrix, one needs an arbitrarily large $n^{\Omega(1)}$ number of columns.
We show that under certain minimal and realistic distributional settings, it is possible to obtain a $(1+\epsilon)$-approximation with a nearly linear running time and poly$(k/\epsilon)+O(k\log n)$ columns.
This is the first algorithm of any kind achieving a $(1+\epsilon)$-approximation for entrywise $\ell_1$-norm loss low rank approximation.
- Score: 76.02734481158458
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: We study the column subset selection problem with respect to the entrywise
$\ell_1$-norm loss. It is known that in the worst case, to obtain a good
rank-$k$ approximation to a matrix, one needs an arbitrarily large
$n^{\Omega(1)}$ number of columns to obtain a $(1+\epsilon)$-approximation to
the best entrywise $\ell_1$-norm low rank approximation of an $n \times n$
matrix. Nevertheless, we show that under certain minimal and realistic
distributional settings, it is possible to obtain a
$(1+\epsilon)$-approximation with a nearly linear running time and
poly$(k/\epsilon)+O(k\log n)$ columns. Namely, we show that if the input matrix
$A$ has the form $A = B + E$, where $B$ is an arbitrary rank-$k$ matrix, and
$E$ is a matrix with i.i.d. entries drawn from any distribution $\mu$ for which
the $(1+\gamma)$-th moment exists, for an arbitrarily small constant $\gamma >
0$, then it is possible to obtain a $(1+\epsilon)$-approximate column subset
selection to the entrywise $\ell_1$-norm in nearly linear time. Conversely, we
show that if the first moment does not exist, then it is not possible to obtain
a $(1+\epsilon)$-approximate subset selection algorithm even if one chooses any
$n^{o(1)}$ columns. This is the first algorithm of any kind for achieving a
$(1+\epsilon)$-approximation for entrywise $\ell_1$-norm loss low rank
approximation.
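To make the setting concrete, the sketch below (Python; a hedged toy, not the paper's nearly linear time algorithm) generates $A = B + E$ with i.i.d. heavy-tailed noise whose $(1+\gamma)$-th moment exists, fits all of $A$ in the span of a random column subset, and reports the entrywise $\ell_1$ cost; each $\ell_1$ regression is approximated by iteratively reweighted least squares (IRLS), a standard heuristic.

```python
import numpy as np

def l1_fit(M, b, iters=50, eps=1e-8):
    """Approximate argmin_x ||M @ x - b||_1 via iteratively reweighted
    least squares; an exact solver would use linear programming."""
    x = np.linalg.lstsq(M, b, rcond=None)[0]
    for _ in range(iters):
        w = 1.0 / np.sqrt(np.maximum(np.abs(M @ x - b), eps))
        x = np.linalg.lstsq(M * w[:, None], w * b, rcond=None)[0]
    return x

rng = np.random.default_rng(0)
n, k = 200, 5
B = rng.standard_normal((n, k)) @ rng.standard_normal((k, n))  # arbitrary rank-k
E = rng.standard_t(df=1.2, size=(n, n))  # i.i.d. noise: moments of order < 1.2
A = B + E                                # exist, so gamma < 0.2 works here

S = rng.choice(n, size=3 * k, replace=False)   # a (random) column subset
X = np.column_stack([l1_fit(A[:, S], A[:, j]) for j in range(n)])
cost = np.abs(A - A[:, S] @ X).sum()           # entrywise l1 loss of the subset
print(f"l1 cost of subset: {cost:.0f}; baseline ||E||_1 = {np.abs(E).sum():.0f}")
```

Setting `df=1` gives Cauchy noise, whose first moment does not exist; that is exactly the regime in which the converse above rules out a $(1+\epsilon)$-approximation from any $n^{o(1)}$ columns.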
Related papers
- LevAttention: Time, Space, and Streaming Efficient Algorithm for Heavy Attentions [54.54897832889028]
We show that for any $K$, there is a "universal set" $U \subset [n]$ of size independent of $n$, such that for any $Q$ and any row $i$, the large attention scores $A_{i,j}$ in row $i$ of $A$ all have $j \in U$.
We empirically show the benefits of our scheme for vision transformers, showing how to train new models that use our universal set while training as well.
arXiv Detail & Related papers (2024-10-07T19:47:13Z)
- Optimal Sketching for Residual Error Estimation for Matrix and Vector Norms [50.15964512954274]
We study the problem of residual error estimation for matrix and vector norms using a linear sketch.
We demonstrate that this gives a substantial advantage empirically, for roughly the same sketch size and accuracy as in previous work.
We also show an $\Omega(k^{2/p} n^{1-2/p})$ lower bound for the sparse recovery problem, which is tight up to a $\mathrm{poly}(\log n)$ factor.
arXiv Detail & Related papers (2024-08-16T02:33:07Z)
- Optimal Embedding Dimension for Sparse Subspace Embeddings [4.042707434058959]
A random $m \times n$ matrix $S$ is an oblivious subspace embedding (OSE) with parameter $\epsilon$ if, for any fixed $d$-dimensional subspace, $S$ preserves the norms of all vectors in that subspace up to a $1 \pm \epsilon$ factor with high probability.
We show that an $m \times n$ random matrix $S$ with $m \geq (1+\theta)d$ is an oblivious subspace embedding with $\epsilon = O_\theta(1)$.
We use this to construct the first oblivious subspace embedding with $O(d)$ embedding dimension that can be applied faster than current matrix multiplication time.
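As a quick, hypothetical illustration of such sparse embeddings (an OSNAP-style construction with $s$ nonzeros per column is assumed here; this is a sketch, not the paper's construction):

```python
import numpy as np

def sparse_embedding(m, n, s, rng):
    """Toy sparse OSE: each column receives s nonzeros of value
    +-1/sqrt(s) placed in s distinct random rows."""
    S = np.zeros((m, n))
    for j in range(n):
        rows = rng.choice(m, size=s, replace=False)
        S[rows, j] = rng.choice([-1.0, 1.0], size=s) / np.sqrt(s)
    return S

rng = np.random.default_rng(2)
n, d, m, s = 2000, 20, 100, 4            # m = (1 + theta) d with theta = 4
U, _ = np.linalg.qr(rng.standard_normal((n, d)))  # orthonormal subspace basis
sv = np.linalg.svd(sparse_embedding(m, n, s, rng) @ U, compute_uv=False)
print(f"singular values of S @ U lie in [{sv.min():.2f}, {sv.max():.2f}]")
```

Singular values bounded away from $0$ and $\infty$ certify that $S$ embeds the subspace with constant distortion, i.e. the $\epsilon = O_\theta(1)$ regime above.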
arXiv Detail & Related papers (2023-11-17T18:01:58Z)
- Fast $(1+\varepsilon)$-Approximation Algorithms for Binary Matrix Factorization [54.29685789885059]
We introduce efficient $(1+\varepsilon)$-approximation algorithms for the binary matrix factorization (BMF) problem.
The goal is to approximate $\mathbf{A}$ as a product of low-rank factors.
Our techniques generalize to other common variants of the BMF problem.
arXiv Detail & Related papers (2023-06-02T18:55:27Z)
- Krylov Methods are (nearly) Optimal for Low-Rank Approximation [8.017116107657206]
We show that any algorithm requires $\Omega\left(\log(n)/\varepsilon^{1/2}\right)$ matrix-vector products, exactly matching the upper bound obtained by Krylov methods.
Our lower bound addresses Open Question 1 of [Woo14], providing evidence for the lack of progress on algorithms for Spectral LRA.
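For reference, the Krylov method attaining this upper bound looks roughly like the following randomized block Krylov iteration (a sketch in the style of Musco and Musco, 2015; names and parameters here are illustrative, not from the paper):

```python
import numpy as np

def block_krylov_lra(A, k, q, rng):
    """Rank-k approximation from the block Krylov subspace
    span[A G, (A A^T) A G, ..., (A A^T)^{q-1} A G]."""
    G = rng.standard_normal((A.shape[1], k))
    Y, blocks = A @ G, []
    for _ in range(q):                      # q (block) matrix-vector products
        blocks.append(Y)
        Y = A @ (A.T @ Y)
    Q, _ = np.linalg.qr(np.hstack(blocks))  # orthonormal basis of the subspace
    U, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ U[:, :k]) * s[:k] @ Vt[:k]  # best rank-k fit within span(Q)

rng = np.random.default_rng(3)
A = rng.standard_normal((300, 300))
Ak = block_krylov_lra(A, k=10, q=8, rng=rng)
sigma = np.linalg.svd(A, compute_uv=False)
print(f"Krylov error {np.linalg.norm(A - Ak, 2):.3f} vs optimal {sigma[10]:.3f}")
```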
arXiv Detail & Related papers (2023-04-06T16:15:19Z)
- Low-Rank Approximation with $1/\epsilon^{1/3}$ Matrix-Vector Products [58.05771390012827]
We study iterative methods based on Krylov subspaces for low-rank approximation under any Schatten-$p$ norm.
Our main result is an algorithm that uses only $\tilde{O}(k/\sqrt{\epsilon})$ matrix-vector products.
arXiv Detail & Related papers (2022-02-10T16:10:41Z)
- Learning a Latent Simplex in Input-Sparsity Time [58.30321592603066]
We consider the problem of learning a latent $k$-vertex simplex $K \subset \mathbb{R}^d$, given access to $A \in \mathbb{R}^{d \times n}$.
We show that the dependence on $k$ in the running time is unnecessary given a natural assumption about the mass of the top $k$ singular values of $A$.
arXiv Detail & Related papers (2021-05-17T16:40:48Z)
- Optimal $\ell_1$ Column Subset Selection and a Fast PTAS for Low Rank Approximation [0.0]
We give the first column subset selection-based $\ell_p$ low rank approximation algorithm sampling $\tilde{O}(k)$ columns.
We extend our results to obtain tight upper and lower bounds for column subset selection-based $\ell_p$ low rank approximation for any $1 \le p < 2$ (a toy version of the $p = 1$ objective is sketched after this entry).
arXiv Detail & Related papers (2020-07-20T17:50:30Z)
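To make the objective of the last entry concrete, here is a toy brute-force $\ell_1$ column subset selection (purely illustrative: real algorithms sample $\tilde{O}(k)$ columns instead of enumerating subsets, and exact $\ell_1$ regression would use an LP solver rather than IRLS):

```python
import numpy as np
from itertools import combinations

def subset_l1_cost(A, S, iters=30, eps=1e-8):
    """Entrywise l1 cost of fitting every column of A in the span of
    the columns indexed by S; each l1 regression is approximated by
    iteratively reweighted least squares."""
    M, cost = A[:, list(S)], 0.0
    for j in range(A.shape[1]):
        b = A[:, j]
        x = np.linalg.lstsq(M, b, rcond=None)[0]
        for _ in range(iters):
            w = 1.0 / np.sqrt(np.maximum(np.abs(M @ x - b), eps))
            x = np.linalg.lstsq(M * w[:, None], w * b, rcond=None)[0]
        cost += np.abs(M @ x - b).sum()
    return cost

rng = np.random.default_rng(1)
n, k = 8, 2
A = (rng.standard_normal((n, k)) @ rng.standard_normal((k, n))
     + 0.1 * rng.standard_t(df=1.5, size=(n, n)))
best = min(combinations(range(n), k), key=lambda S: subset_l1_cost(A, S))
print("best k-column subset:", best)
```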
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.