Low-Rank Approximation with $1/\epsilon^{1/3}$ Matrix-Vector Products
- URL: http://arxiv.org/abs/2202.05120v1
- Date: Thu, 10 Feb 2022 16:10:41 GMT
- Title: Low-Rank Approximation with $1/\epsilon^{1/3}$ Matrix-Vector Products
- Authors: Ainesh Bakshi, Kenneth L. Clarkson, David P. Woodruff
- Abstract summary: We study iterative methods based on Krylov subspaces for low-rank approximation under any Schatten-$p$ norm.
Our main result is an algorithm that uses only $\tilde{O}(kp^{1/6}/\epsilon^{1/3})$ matrix-vector products.
- Score: 58.05771390012827
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We study iterative methods based on Krylov subspaces for low-rank
approximation under any Schatten-$p$ norm. Here, given access to a matrix $A$
through matrix-vector products, an accuracy parameter $\epsilon$, and a target
rank $k$, the goal is to find a rank-$k$ matrix $Z$ with orthonormal columns
such that $\| A(I -ZZ^\top)\|_{S_p} \leq (1+\epsilon)\min_{U^\top U = I_k}
\|A(I - U U^\top)\|_{S_p}$, where $\|M\|_{S_p}$ denotes the $\ell_p$ norm of
the singular values of $M$. For the special cases of $p=2$ (Frobenius norm)
and $p = \infty$ (Spectral norm), Musco and Musco (NeurIPS 2015) obtained an
algorithm based on Krylov methods that uses $\tilde{O}(k/\sqrt{\epsilon})$
matrix-vector products, improving on the na\"ive $\tilde{O}(k/\epsilon)$
dependence obtainable by the power method, where $\tilde{O}$ suppresses
poly$(\log(dk/\epsilon))$ factors.
Our main result is an algorithm that uses only
$\tilde{O}(kp^{1/6}/\epsilon^{1/3})$ matrix-vector products, and works for all
$p \geq 1$. For $p = 2$ our bound improves the previous
$\tilde{O}(k/\epsilon^{1/2})$ bound to $\tilde{O}(k/\epsilon^{1/3})$. Since the
Schatten-$p$ and Schatten-$\infty$ norms are the same up to a $1+ \epsilon$
factor when $p \geq (\log d)/\epsilon$, our bound recovers the result of Musco
and Musco for $p = \infty$. Further, we prove a matrix-vector query lower bound
of $\Omega(1/\epsilon^{1/3})$ for any fixed constant $p \geq 1$, showing that
surprisingly $\tilde{\Theta}(1/\epsilon^{1/3})$ is the optimal complexity for
constant~$k$.
To obtain our results, we introduce several new techniques, including
optimizing over multiple Krylov subspaces simultaneously, and pinching
inequalities for partitioned operators. Our lower bound for $p \in [1,2]$ uses
the Araki-Lieb-Thirring trace inequality, whereas for $p>2$, we appeal to a
norm-compression inequality for aligned partitioned operators.
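For orientation, here is a minimal Python sketch of the access model and the baseline the abstract refers to: the Schatten-$p$ error $\|A(I-ZZ^\top)\|_{S_p}$ from the problem definition, and a standard block Krylov iteration in the spirit of Musco and Musco that touches $A$ only through matrix-vector products. The function names, the iteration count `q`, and the NumPy setup are illustrative assumptions; this is not the paper's $\tilde{O}(kp^{1/6}/\epsilon^{1/3})$ algorithm, which optimizes over multiple Krylov subspaces simultaneously.

```python
# Minimal sketch (illustrative, not the paper's algorithm): the Schatten-p
# error from the problem definition and a standard block Krylov baseline.
import numpy as np


def schatten_p_error(A, Z, p):
    """l_p norm of the singular values of A(I - Z Z^T), i.e. ||A(I - Z Z^T)||_{S_p}."""
    s = np.linalg.svd(A - (A @ Z) @ Z.T, compute_uv=False)
    return s.max() if np.isinf(p) else (s ** p).sum() ** (1.0 / p)


def block_krylov_lowrank(matvec, rmatvec, n, k, q, seed=0):
    """Rank-k factor Z (n x k, orthonormal columns) built from a block Krylov space.

    matvec(X) = A @ X and rmatvec(Y) = A.T @ Y are the only ways A is accessed,
    matching the matrix-vector product model; each loop iteration costs 2k products.
    Taking q on the order of log(n)/sqrt(eps) gives the Musco-Musco style baseline
    that the abstract improves on.
    """
    rng = np.random.default_rng(seed)
    X = rmatvec(matvec(rng.standard_normal((n, k))))   # (A^T A) @ Pi for a Gaussian Pi
    blocks = [X]
    for _ in range(q):
        X = rmatvec(matvec(X))                         # next power of A^T A applied to Pi
        blocks.append(X)
    Q, _ = np.linalg.qr(np.hstack(blocks))             # orthonormal basis of the Krylov space
    B = matvec(Q)                                      # A restricted to that subspace
    _, _, Wt = np.linalg.svd(B, full_matrices=False)
    return Q @ Wt[:k].T                                # best rank-k factor within span(Q)


# Example usage on a random matrix (illustration only).
if __name__ == "__main__":
    A = np.random.default_rng(1).standard_normal((400, 300))
    Z = block_krylov_lowrank(lambda X: A @ X, lambda Y: A.T @ Y, n=300, k=5, q=8)
    print(schatten_p_error(A, Z, p=2))   # compare against the best rank-5 error of A
```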
Related papers
- LevAttention: Time, Space, and Streaming Efficient Algorithm for Heavy Attentions [54.54897832889028]
We show that for any $K$, there is a "universal set" $U \subset [n]$ of size independent of $n$, such that for any $Q$ and any row $i$, the large attention scores $A_{i,j}$ in row $i$ of $A$ all have $j \in U$.
We empirically show the benefits of our scheme for vision transformers, showing how to train new models that use our universal set during training as well.
arXiv Detail & Related papers (2024-10-07T19:47:13Z) - Optimal Sketching for Residual Error Estimation for Matrix and Vector Norms [50.15964512954274]
We study the problem of residual error estimation for matrix and vector norms using a linear sketch.
We demonstrate that this gives a substantial advantage empirically, for roughly the same sketch size and accuracy as in previous work.
We also show an $\Omega(k^{2/p} n^{1-2/p})$ lower bound for the sparse recovery problem, which is tight up to a $\mathrm{poly}(\log n)$ factor.
arXiv Detail & Related papers (2024-08-16T02:33:07Z) - Coresets for Multiple $\ell_p$ Regression [47.790126028106734]
We construct coresets of size $\tilde{O}(\varepsilon^{-2}d)$ for $p < 2$ and $\tilde{O}(\varepsilon^{-p}d^{p/2})$ for $p > 2$.
For $1 < p < 2$, every matrix has a subset of $\tilde{O}(\varepsilon^{-1}k)$ rows which spans a $(\varepsilon^{-1}k)$-approximately optimal $k$-dimensional subspace for $\ell_p$ subspace approximation.
arXiv Detail & Related papers (2024-06-04T15:50:42Z) - Optimal Embedding Dimension for Sparse Subspace Embeddings [4.042707434058959]
A random $m \times n$ matrix $S$ is an oblivious subspace embedding (OSE) if, for any fixed $d$-dimensional subspace, it preserves the norms of all vectors in that subspace up to a $(1 \pm \epsilon)$ factor with good probability.
We show that an $m \times n$ random matrix $S$ with $m \geq (1+\theta)d$ is an oblivious subspace embedding with $\epsilon = O_\theta(1)$.
We use this to construct the first oblivious subspace embedding with $O(d)$ embedding dimension that can be applied faster than current matrix multiplication time.
arXiv Detail & Related papers (2023-11-17T18:01:58Z) - Krylov Methods are (nearly) Optimal for Low-Rank Approximation [8.017116107657206]
We show that any algorithm requires $\Omega\left(\log(n)/\varepsilon^{1/2}\right)$ matrix-vector products, exactly matching the upper bound obtained by Krylov methods.
Our lower bound addresses Open Question 1 in [Woo14], providing evidence for the lack of progress on algorithms for Spectral LRA.
arXiv Detail & Related papers (2023-04-06T16:15:19Z) - Optimal $\ell_1$ Column Subset Selection and a Fast PTAS for Low Rank
Approximation [0.0]
We give the first column subset selection-based $\ell_p$ low rank approximation algorithm that samples only $\tilde{O}(k)$ columns.
We extend our results to obtain tight upper and lower bounds for column subset selection-based $\ell_p$ low rank approximation for any $1 < p < 2$.
arXiv Detail & Related papers (2020-07-20T17:50:30Z) - Average Case Column Subset Selection for Entrywise $\ell_1$-Norm Loss [76.02734481158458]
It is known that in the worst case, to obtain a good rank-$k$ approximation to a matrix, one needs an arbitrarily large $n^{\Omega(1)}$ number of columns.
We show that under certain minimal and realistic distributional settings, it is possible to obtain a $(1+\epsilon)$-approximation with a nearly linear running time and $\mathrm{poly}(k/\epsilon) + O(k\log n)$ columns.
This is the first algorithm of any kind for achieving a $(1+\epsilon)$-approximation for entrywise $\ell_1$-norm low rank approximation.
arXiv Detail & Related papers (2020-04-16T22:57:06Z) - On the Complexity of Minimizing Convex Finite Sums Without Using the
Indices of the Individual Functions [62.01594253618911]
We exploit the finite noise structure of finite sums to derive a matching $O(n^2)$-upper bound under the global oracle model.
Following a similar approach, we propose a novel adaptation of SVRG which is both compatible with such oracles and achieves complexity bounds of $\tilde{O}(n^2 + n\sqrt{L/\mu})\log(1/\epsilon)$ and $O(n\sqrt{L/\epsilon})$, for $\mu > 0$ and $\mu = 0$, respectively.
arXiv Detail & Related papers (2020-02-09T03:39:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.