Accurate and fast matrix factorization for low-rank learning
- URL: http://arxiv.org/abs/2104.10785v1
- Date: Wed, 21 Apr 2021 22:35:02 GMT
- Title: Accurate and fast matrix factorization for low-rank learning
- Authors: Reza Godaz, Reza Monsefi, Faezeh Toutounian, Reshad Hosseini
- Abstract summary: We tackle two important challenges related to the accurate partial singular value decomposition (SVD) and numerical rank estimation of a huge matrix.
We use Krylov-subspace techniques, such as the Golub-Kahan bidiagonalization process and Ritz vectors, to achieve these goals.
- Score: 4.435094091999926
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper we tackle two important challenges that arise in low-rank learning: quickly computing an accurate partial singular value decomposition (SVD) of a huge matrix, and estimating its numerical rank. We use Krylov-subspace techniques, in particular the Golub-Kahan bidiagonalization process together with Ritz vectors, to achieve these goals. Our experiments show that, within a similar execution time, the proposed methods compute more accurate singular values and corresponding singular vectors than traditional and randomized SVD (R-SVD) methods. The proposed methods are appropriate for applications involving huge matrices where accuracy across the whole spectrum of desired singular values, and in all of the corresponding singular vectors, is essential. We evaluate our method on a real application, Riemannian similarity learning (RSL), between two image datasets, MNIST and USPS.
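For readers unfamiliar with the core tool named in the abstract, the following is a minimal sketch of Golub-Kahan (Lanczos) bidiagonalization with Ritz-based recovery of the leading singular triplets. It illustrates the general technique only, not the authors' implementation; all names are mine, and a production solver would add breakdown handling, restarting, and convergence tests.

```python
import numpy as np

def golub_kahan_partial_svd(A, k, rng=None):
    """Approximate the k leading singular triplets of A via k steps of
    Golub-Kahan (Lanczos) bidiagonalization.  Illustrative sketch:
    full reorthogonalization, no restarting or convergence tests.
    """
    rng = np.random.default_rng() if rng is None else rng
    m, n = A.shape
    U = np.zeros((m, k + 1))          # left Lanczos vectors
    V = np.zeros((n, k))              # right Lanczos vectors
    B = np.zeros((k + 1, k))          # lower-bidiagonal projection of A

    U[:, 0] = rng.standard_normal(m)
    U[:, 0] /= np.linalg.norm(U[:, 0])

    for j in range(k):
        v = A.T @ U[:, j] - (B[j, j - 1] * V[:, j - 1] if j > 0 else 0.0)
        v -= V[:, :j] @ (V[:, :j].T @ v)              # reorthogonalize
        B[j, j] = np.linalg.norm(v)                   # alpha_j
        V[:, j] = v / B[j, j]

        u = A @ V[:, j] - B[j, j] * U[:, j]
        u -= U[:, :j + 1] @ (U[:, :j + 1].T @ u)      # reorthogonalize
        B[j + 1, j] = np.linalg.norm(u)               # beta_{j+1}
        U[:, j + 1] = u / B[j + 1, j]

    # Ritz approximations from the small bidiagonal matrix B = P diag(s) Q^T:
    P, s, Qt = np.linalg.svd(B, full_matrices=False)
    return U @ P, s, V @ Qt.T                         # approx U_k, sigma, V_k

# Example: approximate leading triplets of a 2000 x 500 matrix.
A = np.random.default_rng(0).standard_normal((2000, 500))
Uk, sk, Vk = golub_kahan_partial_svd(A, k=25)  # k above the target rank helps
```

Full reorthogonalization keeps the Lanczos vectors numerically orthogonal at a modest extra cost per step; without it, the computed Ritz values drift in finite precision.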
Related papers
- Robust SVD Made Easy: A fast and reliable algorithm for large-scale data
analysis [0.0]
Existing robust SVD algorithms often sacrifice speed for robustness or fail in the presence of only a few outliers.
This study introduces an efficient algorithm, called Spherically Normalized SVD, for robust SVD approximation.
The proposed algorithm achieves remarkable speed by utilizing only two applications of a standard reduced-rank SVD algorithm.
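The blurb does not spell out the algorithm, so the following is only a plausible reading of the "two reduced-rank SVDs" idea, with all names and details my own assumptions: normalizing each row onto the unit sphere caps the leverage of any single outlying row, a first SVD of the normalized matrix estimates the singular subspace, and a second SVD on the data projected into that subspace restores scale information.

```python
import numpy as np

def spherically_normalized_svd(X, k, eps=1e-12):
    """Hypothetical sketch of a robust rank-k SVD via spherical normalization.

    An assumption-laden illustration of the 'two reduced-rank SVDs' idea,
    not the algorithm from the paper.
    """
    # Step 1: project rows onto the unit sphere to cap outlier influence.
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    Xs = X / np.maximum(norms, eps)

    # First reduced-rank SVD: estimate of the right singular subspace.
    _, _, Vt = np.linalg.svd(Xs, full_matrices=False)
    Vk = Vt[:k].T                                   # n x k basis

    # Step 2: second SVD on the data restricted to that subspace,
    # recovering the scale information lost by the normalization.
    U, s, Wt = np.linalg.svd(X @ Vk, full_matrices=False)
    return U, s, Vk @ Wt.T
```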
arXiv Detail & Related papers (2024-02-15T07:08:11Z) - An Efficient Algorithm for Clustered Multi-Task Compressive Sensing [60.70532293880842]
Clustered multi-task compressive sensing is a hierarchical model that solves multiple compressive sensing tasks.
The existing inference algorithm for this model is computationally expensive and does not scale well in high dimensions.
We propose a new algorithm that substantially accelerates model inference by avoiding the need to explicitly compute the covariance matrices that arise during inference.
arXiv Detail & Related papers (2023-09-30T15:57:14Z) - High-Dimensional Sparse Bayesian Learning without Covariance Matrices [66.60078365202867]
We introduce a new inference scheme that avoids explicit construction of the covariance matrix.
Our approach couples a little-known diagonal estimation result from numerical linear algebra with the conjugate gradient algorithm.
On several simulations, our method scales better than existing approaches in computation time and memory.
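The blurb pairs a stochastic diagonal-estimation result with conjugate gradients; a minimal sketch of that combination (my own illustration, assuming a Bekas-style probing estimator for diag(A^{-1})) looks as follows. Each Rademacher probe v yields A^{-1}v via CG, so no matrix, inverse, or covariance is ever formed.

```python
import numpy as np
from scipy.sparse.linalg import cg, LinearOperator

def estimate_inverse_diagonal(A_mv, n, num_probes=64, rng=None):
    """Estimate diag(A^{-1}) for an SPD operator given only by matvecs A_mv.

    Combines a stochastic diagonal estimator (Rademacher probes) with
    conjugate gradients.  Illustrative sketch, not the paper's exact scheme.
    """
    rng = np.random.default_rng() if rng is None else rng
    A = LinearOperator((n, n), matvec=A_mv)
    num = np.zeros(n)
    den = np.zeros(n)
    for _ in range(num_probes):
        v = rng.choice([-1.0, 1.0], size=n)       # Rademacher probe
        x, info = cg(A, v)                        # x ~= A^{-1} v
        num += v * x
        den += v * v
    return num / den

# Example: SPD tridiagonal operator of size 200, applied matrix-free.
n = 200
def A_mv(x):
    y = 4.0 * x
    y[:-1] -= x[1:]
    y[1:] -= x[:-1]
    return y

d = estimate_inverse_diagonal(A_mv, n)
```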
arXiv Detail & Related papers (2022-02-25T16:35:26Z) - Learning Log-Determinant Divergences for Positive Definite Matrices [47.61701711840848]
In this paper, we propose to learn similarity measures in a data-driven manner.
We capitalize on the αβ-log-det divergence, which is a meta-divergence parametrized by scalars α and β.
Our key idea is to cast these parameters in a continuum and learn them from data.
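For concreteness, one standard closed form of this divergence (following Cichocki et al.; stated here as background, not taken from this paper) evaluates it through the generalized eigenvalues of the pair (P, Q):

```python
import numpy as np
from scipy.linalg import eigh

def alpha_beta_logdet(P, Q, alpha, beta):
    """Alpha-beta log-det divergence between SPD matrices P and Q.

    Uses the eigenvalue form (assumed here, following Cichocki et al.):
        D = 1/(alpha*beta) * sum_i log( (alpha*lam_i**beta
                                         + beta*lam_i**(-alpha))
                                        / (alpha + beta) )
    where lam_i are the generalized eigenvalues of (P, Q).
    Valid for alpha, beta, and alpha + beta all nonzero.
    """
    lam = eigh(P, Q, eigvals_only=True)           # generalized eigenvalues
    terms = (alpha * lam**beta + beta * lam**(-alpha)) / (alpha + beta)
    return np.sum(np.log(terms)) / (alpha * beta)
```

Learning then amounts to treating alpha and beta as continuous parameters and optimizing them against a metric-learning loss, which is the key idea of the entry above.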
arXiv Detail & Related papers (2021-04-13T19:09:43Z) - On the Efficient Implementation of the Matrix Exponentiated Gradient
Algorithm for Low-Rank Matrix Optimization [26.858608065417663]
Convex optimization over the spectrahedron has important applications in machine learning, signal processing and statistics.
We propose efficient implementations of the matrix exponentiated gradient (MEG) method, tailored for optimization with low-rank matrices, which use only a single low-rank SVD on each iteration.
We also provide efficiently-computable certificates for the correct convergence of our methods.
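As background for this entry, the baseline MEG update over the spectrahedron (unit-trace PSD matrices) is a matrix mirror-descent step. The sketch below shows the textbook update with a full eigendecomposition, which is exactly the per-iteration cost the paper replaces with a single low-rank SVD; the sketch is mine, not the paper's implementation.

```python
import numpy as np

def meg_step(X, grad, eta):
    """One matrix exponentiated gradient step on the spectrahedron
    {X PSD, trace(X) = 1}: X <- exp(log X - eta*grad) / trace(...).

    Textbook dense version; X assumed strictly PD, grad symmetric.
    """
    # log X via eigendecomposition.
    w, V = np.linalg.eigh(X)
    logX = (V * np.log(w)) @ V.T

    M = logX - eta * grad
    mw, mV = np.linalg.eigh(M)
    mw -= mw.max()                      # shift for numerical stability
    E = (mV * np.exp(mw)) @ mV.T        # exp(M) up to a positive scale
    return E / np.trace(E)              # scale cancels in the normalization
```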
arXiv Detail & Related papers (2020-12-18T19:14:51Z) - Projection techniques to update the truncated SVD of evolving matrices [17.22107982549168]
This paper considers the problem of updating the rank-k truncated Singular Value Decomposition (SVD) of matrices subject to the addition of new rows and/or columns over time.
The proposed framework is purely algebraic and targets general updating problems.
Results on matrices from real applications suggest that the proposed algorithm can lead to higher accuracy.
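A concrete instance of the problem this entry addresses: given the rank-k truncated SVD of A, fold in a block of new rows E without refactorizing A. The sketch below follows the classical Zha-Simon projection update, stated as an assumption; the paper develops refined variants in this projection family.

```python
import numpy as np

def update_truncated_svd_rows(U, s, V, E):
    """Zha-Simon style rank-k truncated SVD update when rows E are appended.

    U (m x k), s (k,), V (n x k): current factors, A ~= U diag(s) V^T.
    E (p x n): new rows.  Returns updated rank-k factors of [A; E].
    Illustrative baseline, not the paper's refined algorithm.
    """
    k, m, p = s.size, U.shape[0], E.shape[0]
    EV = E @ V                                   # p x k
    # Component of the new rows orthogonal to span(V):
    C = E.T - V @ EV.T                           # n x p, (I - V V^T) E^T
    Q, R = np.linalg.qr(C)                       # Q: n x p, R: p x p

    # Small (k+p) x (k+p) core matrix whose SVD updates the factorization.
    K = np.block([[np.diag(s), np.zeros((k, p))],
                  [EV,         R.T            ]])
    F, theta, Gt = np.linalg.svd(K)

    U_new = np.block([[U, np.zeros((m, p))],
                      [np.zeros((p, k)), np.eye(p)]]) @ F[:, :k]
    V_new = np.hstack([V, Q]) @ Gt[:k].T
    return U_new, theta[:k], V_new
```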
arXiv Detail & Related papers (2020-10-13T13:46:08Z) - Effective Dimension Adaptive Sketching Methods for Faster Regularized
Least-Squares Optimization [56.05635751529922]
We propose a new randomized algorithm for solving L2-regularized least-squares problems based on sketching.
We consider two of the most popular random embeddings, namely, Gaussian embeddings and the Subsampled Randomized Hadamard Transform (SRHT).
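To make the sketch-and-solve idea in this entry concrete, here is a minimal Gaussian-embedding version, an illustration under my own choices (the paper's effective-dimension-adaptive sketch and SRHT variant are more refined): the n x d problem is compressed to m x d before the ridge system is solved exactly.

```python
import numpy as np

def sketched_ridge(A, b, lam, m, rng=None):
    """Solve min_x ||Ax - b||^2 + lam*||x||^2 approximately by sketching.

    A Gaussian embedding S (m x n) compresses the data; the small ridge
    problem is then solved exactly.  Minimal sketch-and-solve illustration.
    """
    rng = np.random.default_rng() if rng is None else rng
    n, d = A.shape
    S = rng.standard_normal((m, n)) / np.sqrt(m)   # Gaussian embedding
    SA, Sb = S @ A, S @ b
    # Normal equations of the sketched ridge problem.
    return np.linalg.solve(SA.T @ SA + lam * np.eye(d), SA.T @ Sb)
```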
arXiv Detail & Related papers (2020-06-10T15:00:09Z) - Error Estimation for Sketched SVD via the Bootstrap [60.67199274260768]
This paper develops a fully data-driven bootstrap method that numerically estimates the actual error of sketched singular vectors/values.
The method is computationally inexpensive, because it operates only on sketched objects, and it requires no passes over the full matrix being factored.
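The key property claimed here, operating only on the sketched matrix, can be illustrated as follows. This is a minimal, assumption-laden rendering of the bootstrap idea rather than the paper's calibrated procedure: resample the rows of the sketch with replacement, recompute the singular values, and use the spread of the deviations as an error estimate.

```python
import numpy as np

def bootstrap_sv_error(SA, k, B=100, rng=None):
    """Bootstrap error estimate for the top-k singular values of a sketch SA.

    Operates only on the sketched matrix (no pass over the full data):
    rows of SA are resampled with replacement, singular values recomputed,
    and the 90th percentile of the deviations is reported per value.
    Minimal illustration, not the paper's calibrated method.
    """
    rng = np.random.default_rng() if rng is None else rng
    m = SA.shape[0]
    s_hat = np.linalg.svd(SA, compute_uv=False)[:k]
    devs = np.empty((B, k))
    for b in range(B):
        idx = rng.integers(0, m, size=m)          # resample sketch rows
        s_b = np.linalg.svd(SA[idx], compute_uv=False)[:k]
        devs[b] = np.abs(s_b - s_hat)
    return np.percentile(devs, 90, axis=0)
```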
arXiv Detail & Related papers (2020-03-10T19:14:08Z) - Estimating Multiple Precision Matrices with Cluster Fusion
Regularization [0.90238471756546]
We propose a penalized-likelihood method for estimating multiple precision matrices from different classes.
Most existing methods either incorporate no information on the relationships between the precision matrices, or require this information to be known a priori.
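For intuition, the penalized-likelihood objective underlying this entry has the generic form below; the pairwise fusion penalty shown is my own placeholder for the paper's cluster fusion regularizer.

```python
import numpy as np

def penalized_neg_loglik(omegas, covs, lam):
    """Generic penalized Gaussian likelihood for K precision matrices.

    omegas: list of K precision matrices (one per class)
    covs:   list of K sample covariance matrices
    lam:    fusion penalty weight

    Negative log-likelihood sum_k [tr(S_k O_k) - logdet(O_k)] plus a
    pairwise fusion penalty pulling the precisions together; a stand-in
    for the paper's cluster fusion regularization.
    """
    nll = sum(np.trace(S @ O) - np.linalg.slogdet(O)[1]
              for O, S in zip(omegas, covs))
    fusion = sum(np.linalg.norm(Oi - Oj, 'fro')**2
                 for i, Oi in enumerate(omegas)
                 for Oj in omegas[i + 1:])
    return nll + lam * fusion
```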
arXiv Detail & Related papers (2020-03-01T01:03:22Z) - Optimal Randomized First-Order Methods for Least-Squares Problems [56.05635751529922]
This class of algorithms encompasses several randomized methods among the fastest solvers for least-squares problems.
We focus on two classical embeddings, namely, Gaussian projections and subsampled Hadamard transforms.
Our resulting algorithm yields the best complexity known for solving least-squares problems with no condition number dependence.
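The "no condition number dependence" claim in this entry comes from preconditioning rather than sketch-and-solve. A minimal sketch-and-precondition illustration, with my own parameter choices, using a Gaussian projection and LSQR:

```python
import numpy as np
from scipy.sparse.linalg import lsqr, LinearOperator

def sketch_and_precondition_lstsq(A, b, m, rng=None):
    """Solve min ||Ax - b|| with a randomized preconditioner.

    QR of a Gaussian sketch S @ A yields R; A R^{-1} is well conditioned,
    so LSQR converges in few iterations regardless of cond(A).
    Minimal illustration of the sketch-and-precondition idea.
    """
    rng = np.random.default_rng() if rng is None else rng
    n, d = A.shape
    S = rng.standard_normal((m, n)) / np.sqrt(m)
    _, R = np.linalg.qr(S @ A)                    # R: d x d

    # Preconditioned operator y -> A (R^{-1} y); we solve for y = R x.
    Rinv = lambda y: np.linalg.solve(R, y)
    M = LinearOperator((n, d),
                       matvec=lambda y: A @ Rinv(y),
                       rmatvec=lambda z: np.linalg.solve(R.T, A.T @ z))
    y = lsqr(M, b)[0]
    return Rinv(y)                                # x = R^{-1} y
```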
arXiv Detail & Related papers (2020-02-21T17:45:32Z)