Projection techniques to update the truncated SVD of evolving matrices
- URL: http://arxiv.org/abs/2010.06392v1
- Date: Tue, 13 Oct 2020 13:46:08 GMT
- Title: Projection techniques to update the truncated SVD of evolving matrices
- Authors: Vassilis Kalantzis, Georgios Kollias, Shashanka Ubaru, Athanasios N. Nikolakopoulos, Lior Horesh, Kenneth L. Clarkson
- Abstract summary: This paper considers the problem of updating the rank-k truncated Singular Value Decomposition (SVD) of matrices subject to the addition of new rows and/or columns over time.
The proposed framework is purely algebraic and targets general updating problems.
Results on matrices from real applications suggest that the proposed algorithm can lead to higher accuracy.
- Score: 17.22107982549168
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: This paper considers the problem of updating the rank-k truncated Singular
Value Decomposition (SVD) of matrices subject to the addition of new rows
and/or columns over time. Such matrix problems represent an important
computational kernel in applications such as Latent Semantic Indexing and
Recommender Systems. Nonetheless, the proposed framework is purely algebraic
and targets general updating problems. The algorithm presented in this paper
adopts a projection viewpoint and focuses on building a pair of subspaces
which approximate the linear span of the sought singular vectors of the updated
matrix. We discuss and analyze two different choices to form the projection
subspaces. Results on matrices from real applications suggest that the proposed
algorithm can lead to higher accuracy, especially for the singular triplets
associated with the largest modulus singular values. Several practical details
and key differences with other approaches are also discussed.
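For orientation, the sketch below shows a classical projection-style update of this kind: the Zha-Simon row update, in which the projection subspaces are spanned by the current singular vectors augmented with the (orthogonalized) new rows. The paper analyzes alternative choices of these subspaces, so this is a baseline illustration rather than the proposed algorithm; all names are ours.

```python
import numpy as np

def update_truncated_svd(Uk, sk, Vk, E):
    """Rank-k truncated SVD of [A; E], given A ~= Uk @ diag(sk) @ Vk.T.

    A Zha-Simon-style projection update: exact when A has rank k,
    an approximation otherwise. Uk: m x k, sk: (k,), Vk: n x k, E: s x n.
    """
    k, s = len(sk), E.shape[0]
    # Orthogonalize the new rows against span(Vk): (I - Vk Vk^T) E^T = Q R.
    Q, R = np.linalg.qr(E.T - Vk @ (Vk.T @ E.T))
    # Project [A; E] onto the pair of subspaces; the result F is small.
    F = np.block([[np.diag(sk), np.zeros((k, Q.shape[1]))],
                  [E @ Vk,      R.T]])
    Uf, sf, Vft = np.linalg.svd(F, full_matrices=False)
    # Lift the singular vectors of F back to the full space and truncate.
    U_new = np.block([[Uk,               np.zeros((Uk.shape[0], s))],
                      [np.zeros((s, k)), np.eye(s)]]) @ Uf[:, :k]
    V_new = np.hstack([Vk, Q]) @ Vft.T[:, :k]
    return U_new, sf[:k], V_new
```

In exact arithmetic this reproduces the truncated SVD of the updated matrix when A has rank exactly k; otherwise the accuracy depends on how well the chosen subspaces capture the updated singular vectors, which is what the two subspace choices discussed in the abstract aim to improve.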
Related papers
- An Efficient Algorithm for Clustered Multi-Task Compressive Sensing [60.70532293880842]
Clustered multi-task compressive sensing is a hierarchical model that solves multiple compressive sensing tasks.
The existing inference algorithm for this model is computationally expensive and does not scale well in high dimensions.
We propose a new algorithm that substantially accelerates model inference by avoiding explicit computation of the underlying covariance matrices.
arXiv Detail & Related papers (2023-09-30T15:57:14Z)
- Mode-wise Principal Subspace Pursuit and Matrix Spiked Covariance Model [13.082805815235975]
We introduce a novel framework called Mode-wise Principal Subspace Pursuit (MOP-UP) to extract hidden variations in both the row and column dimensions for matrix data.
The effectiveness and practical merits of the proposed framework are demonstrated through experiments on both simulated and real datasets.
arXiv Detail & Related papers (2023-07-02T13:59:47Z)
- Sufficient dimension reduction for feature matrices [3.04585143845864]
We propose a method called the principal support matrix machine (PSMM) for matrix sufficient dimension reduction.
Our numerical analysis demonstrates that the PSMM outperforms existing methods and has strong interpretability in real data applications.
arXiv Detail & Related papers (2023-03-07T23:16:46Z)
- Learning Graphical Factor Models with Riemannian Optimization [70.13748170371889]
This paper proposes a flexible algorithmic framework for graph learning under low-rank structural constraints.
The problem is expressed as penalized maximum likelihood estimation of an elliptical distribution.
We leverage geometries of positive definite matrices and positive semi-definite matrices of fixed rank that are well suited to elliptical models.
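For reference, in the Gaussian special case of the elliptical family this penalized maximum likelihood problem takes the familiar form below, with S the sample covariance, Theta the precision matrix, and h a penalty enforcing the low-rank structural constraint; this is a generic template rather than the paper's exact objective.

```latex
\min_{\Theta \succ 0} \;\; \operatorname{tr}(S\Theta) \;-\; \log\det\Theta \;+\; \lambda\, h(\Theta)
```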
arXiv Detail & Related papers (2022-10-21T13:19:45Z)
- Generalized Leverage Scores: Geometric Interpretation and Applications [15.86621510551207]
We extend the definition of leverage scores to relate the columns of a matrix to arbitrary subsets of singular vectors.
We employ this result to design approximation algorithms with provable guarantees for two well-known problems.
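Concretely, one natural reading of such generalized scores assigns column j of A the value ||V_S^T e_j||^2 for an index set S of right singular vectors, the classical leverage score being the special case S = {1, ..., k}. A minimal sketch under that assumption:

```python
import numpy as np

def generalized_leverage_scores(A, idx):
    """Leverage score of each column of A with respect to the right
    singular vectors selected by `idx`; pass range(k) for the classical
    top-k scores. (Illustrative definition, not taken from the paper.)"""
    _, _, Vt = np.linalg.svd(A, full_matrices=False)  # rows of Vt are right singular vectors
    return np.sum(Vt[list(idx), :] ** 2, axis=0)      # one score per column
```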
arXiv Detail & Related papers (2022-06-16T10:14:08Z)
- High-Dimensional Sparse Bayesian Learning without Covariance Matrices [66.60078365202867]
We introduce a new inference scheme that avoids explicit construction of the covariance matrix.
Our approach couples a little-known diagonal estimation result from numerical linear algebra with the conjugate gradient algorithm.
On several simulations, our method scales better than existing approaches in computation time and memory.
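The pairing the abstract describes can be sketched generically: estimate the diagonal of an inverse via matrix-free probing, with each solve carried out by conjugate gradients. The Hutchinson-type estimator below is a standard stand-in, not necessarily the specific diagonal estimation result the authors use.

```python
import numpy as np

def cg_solve(matvec, b, tol=1e-8, maxiter=500):
    """Plain conjugate gradients for an SPD operator given only a matvec."""
    x = np.zeros_like(b)
    r, p = b.copy(), b.copy()
    rs = r @ r
    for _ in range(maxiter):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) <= tol * np.linalg.norm(b):
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def estimate_inverse_diagonal(matvec, n, num_probes=64, seed=0):
    """Hutchinson-style estimate of diag(A^{-1}) from Rademacher probes;
    A enters only through `matvec`, so no covariance matrix is ever formed."""
    rng = np.random.default_rng(seed)
    acc = np.zeros(n)
    for _ in range(num_probes):
        z = rng.choice([-1.0, 1.0], size=n)
        acc += z * cg_solve(matvec, z)  # elementwise z * (A^{-1} z)
    return acc / num_probes
```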
arXiv Detail & Related papers (2022-02-25T16:35:26Z)
- Non-PSD Matrix Sketching with Applications to Regression and Optimization [56.730993511802865]
We present dimensionality reduction methods for non-PSD and "square-root" matrices.
We show how these techniques can be used for multiple downstream tasks.
arXiv Detail & Related papers (2021-06-16T04:07:48Z)
- Robust Low-rank Matrix Completion via an Alternating Manifold Proximal Gradient Continuation Method [47.80060761046752]
Robust low-rank matrix completion (RMC) has been studied extensively for computer vision, signal processing and machine learning applications.
This problem aims to decompose a partially observed matrix into the superposition of a low-rank matrix and a sparse matrix, where the sparse matrix captures the grossly corrupted entries of the matrix.
A widely used approach to tackle RMC is to consider a convex formulation, which minimizes the nuclear norm of the low-rank matrix (to promote low-rankness) and the l1 norm of the sparse matrix (to promote sparsity).
In this paper, motivated by some recent works on low-rank...
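In symbols, with M the observed matrix and P_Omega the projection onto the observed entries, the convex formulation described above is conventionally written as:

```latex
\min_{L,\,S} \;\; \|L\|_{*} + \lambda \|S\|_{1}
\quad \text{subject to} \quad P_{\Omega}(L + S) = P_{\Omega}(M)
```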
arXiv Detail & Related papers (2020-08-18T04:46:22Z)
- Estimating Multiple Precision Matrices with Cluster Fusion Regularization [0.90238471756546]
We propose a penalized likelihood method for estimating multiple precision matrices from different classes.
Most existing methods either incorporate no information on relationships between the precision matrices, or require this information to be known a priori.
arXiv Detail & Related papers (2020-03-01T01:03:22Z)
- Optimal Iterative Sketching with the Subsampled Randomized Hadamard Transform [64.90148466525754]
We study the performance of iterative sketching for least-squares problems.
We show that the convergence rates for Haar and randomized Hadamard matrices are identical, and asymptotically improve upon random projections.
These techniques may be applied to other algorithms that employ randomized dimension reduction.
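One standard instance of this setup is the iterative Hessian sketch with a refreshed SRHT, sketched below. This is an illustration of the family of methods studied, not necessarily the exact scheme the paper analyzes, and it assumes the row count is a power of two (as scipy.linalg.hadamard requires) and a sketch size m comfortably larger than the column count.

```python
import numpy as np
from scipy.linalg import hadamard

def srht(m, n, rng):
    """m x n subsampled randomized Hadamard transform (n a power of two)."""
    H = hadamard(n) / np.sqrt(n)                 # orthonormal Hadamard matrix
    d = rng.choice([-1.0, 1.0], size=n)          # random sign flips
    rows = rng.choice(n, size=m, replace=False)  # row subsampling
    return np.sqrt(n / m) * H[rows] * d

def iterative_hessian_sketch(A, b, m, iters=20, seed=0):
    """Iterative sketching for min ||Ax - b||_2: the Newton step uses the
    sketched Hessian (SA)^T (SA) with a fresh sketch each iteration."""
    rng = np.random.default_rng(seed)
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        SA = srht(m, A.shape[0], rng) @ A
        g = A.T @ (A @ x - b)                    # exact gradient
        x -= np.linalg.solve(SA.T @ SA, g)       # sketched Newton step
    return x
```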
arXiv Detail & Related papers (2020-02-03T16:17:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.