Localized sketching for matrix multiplication and ridge regression
- URL: http://arxiv.org/abs/2003.09097v1
- Date: Fri, 20 Mar 2020 04:25:32 GMT
- Title: Localized sketching for matrix multiplication and ridge regression
- Authors: Rakshith S Srinivasa, Mark A Davenport, Justin Romberg
- Abstract summary: We consider sketched approximate matrix multiplication and ridge regression in the novel setting of localized sketching.
We show that, under mild conditions, block diagonal sketching matrices require only $O(\text{stable rank}/\epsilon^2)$ and $O(\text{stat. dim.}/\epsilon)$ total sample complexity for matrix multiplication and ridge regression, respectively.
- Score: 29.61816941867831
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider sketched approximate matrix multiplication and ridge regression
in the novel setting of localized sketching, where at any given point, only
part of the data matrix is available. This corresponds to a block diagonal
structure on the sketching matrix. We show that, under mild conditions, block
diagonal sketching matrices require only $O(\text{stable rank}/\epsilon^2)$ and
$O(\text{stat. dim.}/\epsilon)$ total sample complexity for matrix multiplication and
ridge regression, respectively. This matches the state-of-the-art bounds that
are obtained using global sketching matrices. The localized nature of the
sketching considered here allows different parts of the data matrix to be
sketched independently, which makes the approach more amenable to computation
in distributed and streaming settings and results in a smaller memory and
computational footprint.
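Here, the stable rank is $\|A\|_F^2/\|A\|_2^2$ and the statistical dimension is $\mathrm{sd}_\lambda(A) = \sum_i \sigma_i^2/(\sigma_i^2 + \lambda)$, both standard measures of effective dimensionality. Below is a minimal numpy sketch of the block diagonal construction for both tasks; the Gaussian per-block sketches and all dimensions are illustrative assumptions, not the paper's exact construction or parameter settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Data matrix A (n x d) and targets b, split row-wise into p local blocks,
# mirroring the localized setting where each block lives at a different node.
n, d, p, m_i, lam = 1200, 40, 4, 80, 1.0
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)
A_blocks = np.split(A, p)
b_blocks = np.split(b, p)

# One independent m_i x (n/p) Gaussian sketch S_i per block; stacking the
# products S_i A_i is exactly the action of the block diagonal matrix
# S = diag(S_1, ..., S_p), so each block is sketched where it is stored.
S = [rng.standard_normal((m_i, n // p)) / np.sqrt(m_i) for _ in range(p)]
SA = np.vstack([S_i @ A_i for S_i, A_i in zip(S, A_blocks)])
Sb = np.concatenate([S_i @ b_i for S_i, b_i in zip(S, b_blocks)])

# Approximate matrix multiplication: A^T A is estimated by (SA)^T (SA).
mm_err = np.linalg.norm(A.T @ A - SA.T @ SA) / np.linalg.norm(A.T @ A)

# Sketched ridge regression: solve the (p * m_i)-row problem instead of
# the n-row one.
x_sk = np.linalg.solve(SA.T @ SA + lam * np.eye(d), SA.T @ Sb)
x_ex = np.linalg.solve(A.T @ A + lam * np.eye(d), A.T @ b)
print(f"matmul rel. error: {mm_err:.3f}")
print(f"ridge  rel. error: {np.linalg.norm(x_sk - x_ex) / np.linalg.norm(x_ex):.3f}")
```

Because each $S_i$ touches only its own block, the sketches can be computed independently at each node or streaming window and merged afterwards, which is the computational advantage the abstract highlights.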
Related papers
- Distributed Least Squares in Small Space via Sketching and Bias Reduction [0.0]
Matrix sketching is a powerful tool for reducing the size of large data matrices.
We show that these limitations can be circumvented in the distributed setting by designing sketching methods that minimize the bias of the estimator, rather than its error.
In particular, we give a sparse sketching method running in optimal space and current matrix multiplication time, which recovers a nearly-unbiased least squares estimator using two passes over the data.
arXiv Detail & Related papers (2024-05-08T18:16:37Z)
- DNNLasso: Scalable Graph Learning for Matrix-Variate Data [2.7195102129095003]
We introduce a diagonally non-negative graphical lasso model for estimating the Kronecker-sum structured precision matrix.
DNNLasso outperforms the state-of-the-art methods by a large margin in both accuracy and computational time.
arXiv Detail & Related papers (2024-03-05T02:49:00Z)
- Unfolding Projection-free SDP Relaxation of Binary Graph Classifier via GDPA Linearization [59.87663954467815]
Algorithm unfolding creates an interpretable and parsimonious neural network architecture by implementing each iteration of a model-based algorithm as a neural layer.
In this paper, leveraging a recent linear algebraic theorem called Gershgorin disc perfect alignment (GDPA), we unroll a projection-free algorithm for semi-definite programming relaxation (SDR) of a binary graph classifier.
Experimental results show that our unrolled network outperformed pure model-based graph classifiers, and achieved comparable performance to pure data-driven networks but using far fewer parameters.
arXiv Detail & Related papers (2021-09-10T07:01:15Z)
- Non-PSD Matrix Sketching with Applications to Regression and Optimization [56.730993511802865]
We present dimensionality reduction methods for non-PSD and "square-roots" matrices.
We show how these techniques can be used for multiple downstream tasks.
arXiv Detail & Related papers (2021-06-16T04:07:48Z)
- Learning-Augmented Sketches for Hessians [54.97773807211337]
We show how to design learned sketches for the Hessian in the context of second order methods.
We show empirically that learned sketches, compared with their "non-learned" counterparts, improve the approximation accuracy for important problems.
arXiv Detail & Related papers (2021-02-24T14:50:59Z)
- Effective and Sparse Count-Sketch via k-means clustering [17.26941311020711]
We propose to reduce the reconstruction error of count-sketch by using the k-means clustering algorithm to obtain the low-dimensional sketched matrix.
Our experimental results based on six real-life classification datasets have demonstrated that our proposed method achieves higher accuracy than the original count-sketch.
arXiv Detail & Related papers (2020-11-24T11:44:02Z)
- Learning the Positions in CountSketch [51.15935547615698]
We consider sketching algorithms which first compress data by multiplication with a random sketch matrix, and then apply the sketch to quickly solve an optimization problem.
In this work we propose the first learning algorithm that also optimizes the locations of the non-zero entries.
We show this algorithm gives better accuracy for low rank approximation than previous work, and apply it to other problems such as $k$-means clustering for the first time.
arXiv Detail & Related papers (2020-07-20T05:06:29Z)
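As a point of reference for the two CountSketch papers above, the classic (non-learned) baseline draws the hash positions and signs at random; the learned variants replace the random positions with optimized ones. Below is a minimal sketch-then-solve example with illustrative sizes, assuming a least-squares downstream task.

```python
import numpy as np

rng = np.random.default_rng(1)

def countsketch(M, m, rng):
    """Classic CountSketch: hash each row of M to one of m buckets with a
    random sign; equivalent to S @ M for a sparse S with one +/-1 per column."""
    n = M.shape[0]
    buckets = rng.integers(0, m, size=n)        # random (non-learned) positions
    signs = rng.choice([-1.0, 1.0], size=n)
    SM = np.zeros((m, M.shape[1]))
    np.add.at(SM, buckets, signs[:, None] * M)  # scatter-add signed rows
    return SM

# Sketch-then-solve least squares: sketch [A | b] with one draw of the
# sketch, then solve the small m x d problem.
n, d, m = 5000, 30, 400
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)
SAb = countsketch(np.column_stack([A, b]), m, rng)
x_sketch, *_ = np.linalg.lstsq(SAb[:, :d], SAb[:, d], rcond=None)
x_exact, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.linalg.norm(x_sketch - x_exact) / np.linalg.norm(x_exact))
```

Each column of the implicit sketching matrix has exactly one nonzero, so computing the sketch takes a single pass over the rows of the input.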
- Sketching Transformed Matrices with Applications to Natural Language Processing [76.6222695417524]
We propose a space-efficient sketching algorithm for computing the product of a given small matrix with the transformed matrix.
We show that our approach obtains small error and is efficient in both space and time.
arXiv Detail & Related papers (2020-02-23T03:07:31Z)
- Optimal Iterative Sketching with the Subsampled Randomized Hadamard Transform [64.90148466525754]
We study the performance of iterative sketching for least-squares problems.
We show that the convergence rates for Haar and randomized Hadamard matrices are identical, and asymptotically improve upon random projections.
These techniques may be applied to other algorithms that employ randomized dimension reduction.
arXiv Detail & Related papers (2020-02-03T16:17:50Z)
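To illustrate the setup studied in the last paper, below is a compact numpy sketch of iterative sketching for least squares with a subsampled randomized Hadamard transform (SRHT), in the iterative Hessian sketch style with a fresh sketch each step. It assumes $n$ is a power of two and uses illustrative dimensions; it is not the paper's exact algorithm or analysis.

```python
import numpy as np

rng = np.random.default_rng(2)

def fwht(a):
    """Fast Walsh-Hadamard transform along axis 0; len(a) must be a power of 2."""
    a = a.copy()
    h = 1
    while h < len(a):
        for i in range(0, len(a), 2 * h):
            for j in range(i, i + h):
                a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
        h *= 2
    return a / np.sqrt(len(a))              # orthonormal scaling

def srht(M, m, rng):
    """SRHT sketch: sqrt(n/m) * P H D M with random signs D, Hadamard H,
    and a uniform row subsample P."""
    n = M.shape[0]
    signs = rng.choice([-1.0, 1.0], size=n)
    rows = rng.choice(n, size=m, replace=False)
    return np.sqrt(n / m) * fwht(signs[:, None] * M)[rows]

# Iterative Hessian sketch for min ||Ax - b||^2: refresh the sketch each
# iteration and take a Newton-like step on the sketched Hessian.
n, d, m, iters = 1024, 20, 128, 5
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.05 * rng.standard_normal(n)
x = np.zeros(d)
for _ in range(iters):
    SA = srht(A, m, rng)
    x += np.linalg.solve(SA.T @ SA, A.T @ (b - A @ x))

x_exact, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.linalg.norm(x - x_exact) / np.linalg.norm(x_exact))
```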
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.