Generalized Leverage Scores: Geometric Interpretation and Applications
- URL: http://arxiv.org/abs/2206.08054v1
- Date: Thu, 16 Jun 2022 10:14:08 GMT
- Title: Generalized Leverage Scores: Geometric Interpretation and Applications
- Authors: Bruno Ordozgoiti, Antonis Matakos, Aristides Gionis
- Abstract summary: We extend the definition of leverage scores to relate the columns of a matrix to arbitrary subsets of singular vectors.
We employ this result to design approximation algorithms with provable guarantees for two well-known problems.
- Score: 15.86621510551207
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In problems involving matrix computations, the concept of leverage has found
a large number of applications. In particular, leverage scores, which relate
the columns of a matrix to the subspaces spanned by its leading singular
vectors, are helpful in revealing column subsets to approximately factorize a
matrix with quality guarantees. As such, they provide a solid foundation for a
variety of machine-learning methods. In this paper we extend the definition of
leverage scores to relate the columns of a matrix to arbitrary subsets of
singular vectors. We establish a precise connection between column and
singular-vector subsets, by relating the concepts of leverage scores and
principal angles between subspaces. We employ this result to design
approximation algorithms with provable guarantees for two well-known problems:
generalized column subset selection and sparse canonical correlation analysis.
We run numerical experiments to provide further insight on the proposed
methods. The novel bounds we derive improve our understanding of fundamental
concepts in matrix approximations. In addition, our insights may serve as
building blocks for further contributions.
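Concretely, the classical rank-k leverage score of column j of a matrix A is the squared Euclidean norm of the j-th column of V_k^T, where V_k collects the top-k right singular vectors; the generalization studied in the paper replaces the leading k vectors with an arbitrary subset. A minimal NumPy sketch of the classical scores (random data; variable names are illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))  # 50 x 20 data matrix
k = 5

# Thin SVD: rows of Vt are the right singular vectors of A
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Rank-k leverage score of column j: squared norm of the
# j-th column of the top-k block of Vt
leverage = np.sum(Vt[:k] ** 2, axis=0)

# Standard sanity checks: each score lies in [0, 1], and the
# scores sum to k (the trace of the rank-k projector)
assert np.all((leverage >= 0) & (leverage <= 1 + 1e-12))
assert np.isclose(leverage.sum(), k)
```

Replacing `Vt[:k]` with `Vt[S]` for an arbitrary index set `S` gives the generalized scores in the sense of the abstract, i.e. leverage with respect to an arbitrary subset of singular vectors rather than the leading ones.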
Related papers
- Understanding Matrix Function Normalizations in Covariance Pooling through the Lens of Riemannian Geometry [63.694184882697435]
Global Covariance Pooling (GCP) has been demonstrated to improve the performance of Deep Neural Networks (DNNs) by exploiting second-order statistics of high-level representations.
arXiv Detail & Related papers (2024-07-15T07:11:44Z)
- Synergistic eigenanalysis of covariance and Hessian matrices for enhanced binary classification [72.77513633290056]
We present a novel approach that combines the eigenanalysis of a covariance matrix evaluated on a training set with a Hessian matrix evaluated on a deep learning model.
Our method captures intricate patterns and relationships, enhancing classification performance.
arXiv Detail & Related papers (2024-02-14T16:10:42Z)
- A Bayesian Perspective for Determinant Minimization Based Robust Structured Matrix Factorization [10.355894890759377]
We introduce a Bayesian perspective for the structured matrix factorization problem.
We show that the corresponding maximum a posteriori estimation problem boils down to the robust determinant approach for structured matrix factorization.
arXiv Detail & Related papers (2023-02-16T16:48:41Z)
- Accelerated structured matrix factorization [0.0]
Matrix factorization exploits the idea that, in complex high-dimensional data, the actual signal typically lies in lower-dimensional structures.
By exploiting Bayesian shrinkage priors, we devise a computationally convenient approach for high-dimensional matrix factorization.
The dependence between row and column entities is modeled by inducing flexible sparse patterns within factors.
arXiv Detail & Related papers (2022-12-13T11:35:01Z)
- Learning Graphical Factor Models with Riemannian Optimization [70.13748170371889]
This paper proposes a flexible algorithmic framework for graph learning under low-rank structural constraints.
The problem is expressed as penalized maximum likelihood estimation of an elliptical distribution.
We leverage geometries of positive definite matrices and positive semi-definite matrices of fixed rank that are well suited to elliptical models.
arXiv Detail & Related papers (2022-10-21T13:19:45Z)
- Semi-Supervised Subspace Clustering via Tensor Low-Rank Representation [64.49871502193477]
We propose a novel semi-supervised subspace clustering method, which is able to simultaneously augment the initial supervisory information and construct a discriminative affinity matrix.
Comprehensive experimental results on six commonly-used benchmark datasets demonstrate the superiority of our method over state-of-the-art methods.
arXiv Detail & Related papers (2022-05-21T01:47:17Z)
- High-Dimensional Sparse Bayesian Learning without Covariance Matrices [66.60078365202867]
We introduce a new inference scheme that avoids explicit construction of the covariance matrix.
Our approach couples a little-known diagonal estimation result from numerical linear algebra with the conjugate gradient algorithm.
On several simulations, our method scales better than existing approaches in computation time and memory.
arXiv Detail & Related papers (2022-02-25T16:35:26Z)
- Matrix Decomposition and Applications [8.034728173797953]
In 1954, Alston S. Householder published Principles of Numerical Analysis, one of the first modern treatments on matrix decomposition.
Matrix decomposition has since become a core technology in machine learning, largely due to the development of the back-propagation algorithm for fitting neural networks.
arXiv Detail & Related papers (2022-01-01T08:13:48Z)
- Learning a Compressive Sensing Matrix with Structural Constraints via Maximum Mean Discrepancy Optimization [17.104994036477308]
We introduce a learning-based algorithm to obtain a measurement matrix for compressive sensing related recovery problems.
The recent success of such metrics in neural-network-related topics motivates a machine-learning-based solution to the problem.
arXiv Detail & Related papers (2021-10-14T08:35:54Z)
- Projection techniques to update the truncated SVD of evolving matrices [17.22107982549168]
This paper considers the problem of updating the rank-k truncated Singular Value Decomposition (SVD) of matrices subject to the addition of new rows and/or columns over time.
The proposed framework is purely algebraic and targets general updating problems.
Results on matrices from real applications suggest that the proposed algorithm can lead to higher accuracy.
arXiv Detail & Related papers (2020-10-13T13:46:08Z)
- Multi-View Spectral Clustering with High-Order Optimal Neighborhood Laplacian Matrix [57.11971786407279]
Multi-view spectral clustering can effectively reveal the intrinsic cluster structure among data.
This paper proposes a multi-view spectral clustering algorithm that learns a high-order optimal neighborhood Laplacian matrix.
Our proposed algorithm generates the optimal Laplacian matrix by searching the neighborhood of the linear combination of both the first-order and high-order base.
arXiv Detail & Related papers (2020-08-31T12:28:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.