Ranky : An Approach to Solve Distributed SVD on Large Sparse Matrices
- URL: http://arxiv.org/abs/2009.09767v1
- Date: Mon, 21 Sep 2020 11:36:28 GMT
- Title: Ranky : An Approach to Solve Distributed SVD on Large Sparse Matrices
- Authors: Resul Tugay, Sule Gunduz Oguducu
- Abstract summary: Singular Value Decomposition (SVD) is a well studied research topic in many fields and applications.
We propose Ranky, a set of methods to solve the rank problem on large and sparse matrices in a distributed manner.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Singular Value Decomposition (SVD) is a well-studied research topic in many
fields and applications, from data mining to image processing. The data arising from
these applications can be represented as a matrix that is both large and sparse. Most
existing algorithms compute the singular values and the left and right singular vectors
of a large dense matrix, but not of a large sparse matrix. Even when they can find the
SVD of a large matrix, computing it for a large dense matrix has high time complexity
because the algorithms are sequential. Distributed approaches have been proposed for
computing the SVD of large matrices. However, the rank of the matrix remains a problem
when solving the SVD with these distributed algorithms. In this paper we propose Ranky,
a set of methods to solve the rank problem on large and sparse matrices in a distributed
manner. Experimental results show that the Ranky approach recovers the singular values
and the left and right singular vectors of a given large and sparse matrix with
negligible error.
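As a point of reference for the problem setting the abstract describes, below is a minimal single-machine sketch of truncated SVD on a large sparse matrix using SciPy's `svds`; this is not the distributed Ranky method itself, and the matrix shape, density, and number of singular triplets `k` are illustrative assumptions.

```python
# A minimal single-machine illustration of the problem setting: truncated SVD
# of a large sparse matrix with SciPy/ARPACK. This is NOT the distributed
# Ranky method from the paper; the shape, density, and k are assumptions.
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import svds

A = sparse_random(100_000, 5_000, density=1e-4, format="csr", random_state=42)

k = 20                              # number of leading singular triplets
u, s, vt = svds(A, k=k)             # truncated SVD of the sparse matrix
order = np.argsort(s)[::-1]         # svds returns singular values in ascending order
u, s, vt = u[:, order], s[order], vt[order]

print("leading singular values:", np.round(s[:5], 4))
```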
Related papers
- One-sided Matrix Completion from Two Observations Per Row [95.87811229292056]
We propose a natural algorithm that involves imputing the missing values of the matrix $X^TX$.
We evaluate our algorithm on one-sided recovery of synthetic data and low-coverage genome sequencing.
arXiv Detail & Related papers (2023-06-06T22:35:16Z) - Multiresolution kernel matrix algebra [0.0]
We show that the compression of kernel matrices by means of samplets produces optimally sparse matrices in a certain S-format.
The inverse of a kernel matrix (if it exists) is compressible in the S-format as well.
The matrix algebra is justified mathematically by pseudo differential calculus.
arXiv Detail & Related papers (2022-11-21T17:50:22Z) - High-Dimensional Sparse Bayesian Learning without Covariance Matrices [66.60078365202867]
We introduce a new inference scheme that avoids explicit construction of the covariance matrix.
Our approach couples a little-known diagonal estimation result from numerical linear algebra with the conjugate gradient algorithm.
On several simulations, our method scales better than existing approaches in computation time and memory.
arXiv Detail & Related papers (2022-02-25T16:35:26Z) - Learning in High-Dimensional Feature Spaces Using ANOVA-Based Fast
Matrix-Vector Multiplication [0.0]
The kernel matrix is typically dense and large-scale. Depending on the dimension of the feature space, even computing all of its entries in reasonable time becomes a challenging task.
We propose the use of an ANOVA kernel, where we construct several kernels based on lower-dimensional feature spaces for which we provide fast algorithms realizing the matrix-vector products.
Based on a feature grouping approach, we then show how the fast matrix-vector products can be embedded into a learning method, choosing kernel ridge regression and the preconditioned conjugate gradient solver.
arXiv Detail & Related papers (2021-11-19T10:29:39Z) - Sparse Factorization of Large Square Matrices [10.94053598642913]
In this paper, we propose to approximate a large square matrix with a product of sparse full-rank matrices.
In the approximation, our method needs only $N(\log N)^2$ non-zero numbers for an $N \times N$ full matrix (a small numeric comparison of this budget appears after this list).
We show that our method gives a better approximation when the approximated matrix is sparse and high-rank.
arXiv Detail & Related papers (2021-09-16T18:42:21Z) - Non-PSD Matrix Sketching with Applications to Regression and
Optimization [56.730993511802865]
We present dimensionality reduction methods for non-PSD and "square-roots" matrices.
We show how these techniques can be used for multiple downstream tasks.
arXiv Detail & Related papers (2021-06-16T04:07:48Z) - What if Neural Networks had SVDs? [66.91160214071088]
Various Neural Networks employ time-consuming matrix operations like matrix inversion.
We present an algorithm that is fast enough to speed up several matrix operations.
arXiv Detail & Related papers (2020-09-29T12:58:52Z) - Fast algorithms for robust principal component analysis with an upper
bound on the rank [12.668565749446476]
The robust principal component analysis (RPCA) decomposes a data matrix into a low-rank part and a sparse part.
The first type of algorithm applies regularization terms to the singular values of a matrix to obtain a low-rank matrix.
The second type of algorithm represents the low-rank matrix as the product of two smaller matrices (the standard objectives for both families are written out after this list).
arXiv Detail & Related papers (2020-08-18T15:07:57Z) - Sketching Transformed Matrices with Applications to Natural Language
Processing [76.6222695417524]
We propose a space-efficient sketching algorithm for computing the product of a given small matrix with the transformed matrix.
We show that our approach obtains small error and is efficient in both space and time.
arXiv Detail & Related papers (2020-02-23T03:07:31Z) - Optimal Iterative Sketching with the Subsampled Randomized Hadamard
Transform [64.90148466525754]
We study the performance of iterative sketching for least-squares problems.
We show that the convergence rates for Haar and randomized Hadamard matrices are identical, and improve upon random projections.
These techniques may be applied to other algorithms that employ randomized dimension reduction (a minimal SRHT construction is sketched after this list).
arXiv Detail & Related papers (2020-02-03T16:17:50Z)
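For a sense of scale on the $N(\log N)^2$ non-zero budget quoted in the Sparse Factorization entry above, here is a small numeric comparison; the values of $N$ and the use of the base-2 logarithm are assumptions chosen only for illustration.

```python
# Compare the N(log2 N)^2 non-zero budget from the sparse factorization entry
# with the N^2 entries of a full matrix. The values of N and the base-2 log
# are assumptions made here purely for illustration.
import math

for N in (2**10, 2**16, 2**20):
    budget = N * math.log2(N) ** 2   # non-zeros allowed by the factorization
    full = N * N                     # entries of a dense N x N matrix
    print(f"N={N:>9,d}  N(log2 N)^2={budget:,.0f}  N^2={full:,d}  ratio={full / budget:,.0f}x")
```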
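For the RPCA entry above, the two algorithm families it describes correspond, in their most common textbook forms (not necessarily the exact objectives used in that paper), to a convex program that regularizes the singular values through the nuclear norm and a non-convex factorized program with $L = UV^\top$:

$$\min_{L,S}\ \|L\|_* + \lambda\,\|S\|_1 \quad \text{s.t.} \quad M = L + S$$

$$\min_{U,V,S}\ \tfrac{1}{2}\,\|UV^\top + S - M\|_F^2 + \lambda\,\|S\|_1, \qquad L = UV^\top,\ \ U \in \mathbb{R}^{m \times r},\ V \in \mathbb{R}^{n \times r}$$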
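For the iterative sketching entry above, here is a minimal sketch-and-solve illustration of the subsampled randomized Hadamard transform (SRHT) applied to a least-squares problem; it assumes the row count is a power of two, builds a dense Hadamard matrix for clarity, and is not the iterative scheme analyzed in that paper.

```python
# A minimal SRHT sketch-and-solve for least squares min_x ||Ax - b||_2.
# Assumptions: n is a power of two and the dense Hadamard matrix fits in
# memory; this illustrates the transform, not that paper's iterative method.
import numpy as np
from scipy.linalg import hadamard

rng = np.random.default_rng(0)
n, d, m = 1024, 20, 200                      # rows, columns, sketch size (illustrative)
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.01 * rng.standard_normal(n)

D = rng.choice([-1.0, 1.0], size=n)          # random sign flips
H = hadamard(n) / np.sqrt(n)                 # orthonormal Hadamard matrix
rows = rng.choice(n, size=m, replace=False)  # uniform row subsampling
S = np.sqrt(n / m) * H[rows] * D             # m x n SRHT (subsample, Hadamard, signs)

x_sketch = np.linalg.lstsq(S @ A, S @ b, rcond=None)[0]
x_exact = np.linalg.lstsq(A, b, rcond=None)[0]
print("relative error:", np.linalg.norm(x_sketch - x_exact) / np.linalg.norm(x_exact))
```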