Denise: Deep Robust Principal Component Analysis for Positive
Semidefinite Matrices
- URL: http://arxiv.org/abs/2004.13612v4
- Date: Tue, 6 Jun 2023 09:37:37 GMT
- Title: Denise: Deep Robust Principal Component Analysis for Positive
Semidefinite Matrices
- Authors: Calypso Herrera, Florian Krach, Anastasis Kratsios, Pierre Ruyssen,
Josef Teichmann
- Abstract summary: Denise is a deep learning-based algorithm for robust PCA of covariance matrices.
Experiments show that Denise matches state-of-the-art performance in terms of decomposition quality.
- Score: 8.1371986647556
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The robust PCA of covariance matrices plays an essential role when isolating
key explanatory features. The currently available methods for performing such a
low-rank plus sparse decomposition are matrix-specific, meaning those
algorithms must be re-run for every new matrix. Since these algorithms are
computationally expensive, it is preferable to learn and store a function that
nearly instantaneously performs this decomposition when evaluated. Therefore,
we introduce Denise, a deep learning-based algorithm for robust PCA of
covariance matrices, or more generally, of symmetric positive semidefinite
matrices, which learns precisely such a function. Theoretical guarantees for
Denise are provided. These include a novel universal approximation theorem
adapted to our geometric deep learning problem and convergence to an optimal
solution to the learning problem. Our experiments show that Denise matches
state-of-the-art performance in terms of decomposition quality, while being
approximately $2000\times$ faster than the state-of-the-art, principal
component pursuit (PCP), and $200 \times$ faster than the current
speed-optimized method, fast PCP.
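For context, the classical baseline that Denise is benchmarked against, principal component pursuit (PCP), solves min ||L||_* + λ||S||_1 subject to L + S = M. A minimal ADMM sketch of standard PCP (not Denise itself; the parameter defaults follow common conventions and the function names are illustrative):

```python
import numpy as np

def svt(X, tau):
    # Singular value thresholding: shrink singular values by tau.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def shrink(X, tau):
    # Entrywise soft-thresholding (proximal operator of the l1 norm).
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def pcp(M, lam=None, mu=None, n_iter=200):
    # Alternately threshold singular values (low-rank part L) and
    # entries (sparse part S), with dual variable Y enforcing M = L + S.
    m, n = M.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = mu if mu is not None else 0.25 * m * n / (np.abs(M).sum() + 1e-12)
    S = np.zeros_like(M)
    Y = np.zeros_like(M)
    for _ in range(n_iter):
        L = svt(M - S + Y / mu, 1.0 / mu)
        S = shrink(M - L + Y / mu, lam / mu)
        Y = Y + mu * (M - L - S)
    return L, S
```

Denise replaces this per-matrix iteration with a single forward pass of a trained network, which is where the quoted speedups come from.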
Related papers
- Stochastic Optimization for Non-convex Problem with Inexact Hessian
Matrix, Gradient, and Function [99.31457740916815]
Trust-region (TR) methods and adaptive regularization using cubics (ARC) have proven to have very appealing theoretical properties.
We show that TR and ARC methods can simultaneously accommodate inexact computations of the Hessian, gradient, and function values.
arXiv Detail & Related papers (2023-10-18T10:29:58Z)
- Spectral Entry-wise Matrix Estimation for Low-Rank Reinforcement Learning [53.445068584013896]
We study matrix estimation problems arising in reinforcement learning (RL) with low-rank structure.
In low-rank bandits, the matrix to be recovered specifies the expected arm rewards, and for low-rank Markov Decision Processes (MDPs), it may for example characterize the transition kernel of the MDP.
We show that simple spectral-based matrix estimation approaches efficiently recover the singular subspaces of the matrix and exhibit nearly-minimal entry-wise error.
arXiv Detail & Related papers (2023-10-10T17:06:41Z)
- An Efficient Algorithm for Clustered Multi-Task Compressive Sensing [60.70532293880842]
Clustered multi-task compressive sensing is a hierarchical model that solves multiple compressive sensing tasks.
The existing inference algorithm for this model is computationally expensive and does not scale well in high dimensions.
We propose a new algorithm that substantially accelerates model inference by avoiding the need to explicitly compute the covariance matrices that arise during inference.
arXiv Detail & Related papers (2023-09-30T15:57:14Z)
- Randomly pivoted Cholesky: Practical approximation of a kernel matrix with few entry evaluations [2.7796535578871575]
The randomly pivoted partial Cholesky algorithm (RPCholesky) computes a factorized rank-k approximation of an N x N positive-semidefinite (psd) matrix.
The simplicity, effectiveness, and robustness of RPCholesky strongly support its use in scientific computing and machine learning applications.
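RPCholesky is simple enough to sketch in a few lines. The following is a plain NumPy transcription of the pivot-sampling idea described above (the interface and variable names are my own, not the paper's reference implementation): each pivot is sampled proportionally to the residual diagonal, and a Cholesky-style column update is applied.

```python
import numpy as np

def rp_cholesky(A, k, seed=None):
    # Rank-k approximation A ~= F @ F.T of a psd matrix A via
    # randomly pivoted partial Cholesky.
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    F = np.zeros((n, k))
    d = np.diag(A).astype(float).copy()  # residual diagonal
    for j in range(k):
        i = rng.choice(n, p=d / d.sum())  # sample pivot ~ residual diagonal
        g = A[:, i] - F[:, :j] @ F[i, :j]  # residual column at the pivot
        g = g / np.sqrt(g[i])
        F[:, j] = g
        d = np.maximum(d - g**2, 0.0)      # update residual diagonal
    return F
```

Only k columns of A (plus its diagonal) are ever read, which is the point: the cost is O(nk^2) arithmetic and k + 1 "entry evaluation" passes rather than forming all of A.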
arXiv Detail & Related papers (2022-07-13T19:51:24Z)
- Sublinear Time Approximation of Text Similarity Matrices [50.73398637380375]
We introduce a generalization of the popular Nyström method to the indefinite setting.
Our algorithm can be applied to any similarity matrix and runs in sublinear time in the size of the matrix.
We show that our method, along with a simple variant of CUR decomposition, performs very well in approximating a variety of similarity matrices.
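For reference, the classical psd Nyström approximation that this paper generalizes reconstructs a similarity matrix from a few landmark columns; the indefinite extension is the paper's contribution and is not shown here. A minimal sketch:

```python
import numpy as np

def nystrom(K, idx):
    # Classical Nystrom approximation of a psd similarity matrix:
    # K ~= C @ pinv(W) @ C.T, where C = K[:, idx] holds the landmark
    # columns and W = K[idx, idx] is the landmark-landmark block.
    C = K[:, idx]
    W = K[np.ix_(idx, idx)]
    return C @ np.linalg.pinv(W) @ C.T
```

Only the landmark columns of K are touched, which is what makes sublinear-time variants possible when entries of K are expensive to evaluate.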
arXiv Detail & Related papers (2021-12-17T17:04:34Z)
- Robust CUR Decomposition: Theory and Imaging Applications [9.280330114137778]
This paper considers the use of Robust PCA in a CUR decomposition framework and applications thereof.
We consider two key imaging applications of Robust PCA: video foreground-background separation and face modeling.
arXiv Detail & Related papers (2021-01-05T17:58:15Z)
- Fast and Accurate Pseudoinverse with Sparse Matrix Reordering and Incremental Approach [4.710916891482697]
A pseudoinverse is a generalization of a matrix inverse, which has been extensively utilized in machine learning.
FastPI is a novel incremental singular value decomposition (SVD) based pseudoinverse method for sparse matrices.
We show that FastPI computes the pseudoinverse faster than other approximate methods without loss of accuracy.
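The SVD route that FastPI accelerates is easy to state. A dense NumPy baseline (the textbook Moore-Penrose construction, not FastPI's incremental sparse algorithm) looks like:

```python
import numpy as np

def svd_pinv(A, rcond=1e-12):
    # Moore-Penrose pseudoinverse via SVD: invert singular values above
    # a relative cutoff, zero out the rest.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    cutoff = rcond * s.max()
    s_inv = np.where(s > cutoff, 1.0 / s, 0.0)
    return Vt.T @ (s_inv[:, None] * U.T)
```

The full SVD costs O(min(m, n) * m * n), which is exactly the bottleneck that incremental, sparsity-exploiting methods such as FastPI aim to avoid.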
arXiv Detail & Related papers (2020-11-09T07:47:10Z)
- QR and LQ Decomposition Matrix Backpropagation Algorithms for Square, Wide, and Deep -- Real or Complex -- Matrices and Their Software Implementation [0.0]
This article presents matrix backpropagation algorithms for the QR decomposition of matrices $A_{m,n}$, that are either square ($m = n$), wide ($m < n$), or deep ($m > n$), with rank $k = \min(m, n)$.
We derive novel matrix backpropagation results for the pivoted (full-rank) QR decomposition and for the LQ decomposition of deep input matrices.
arXiv Detail & Related papers (2020-09-19T21:03:37Z)
- Approximation Algorithms for Sparse Principal Component Analysis [57.5357874512594]
Principal component analysis (PCA) is a widely used dimension reduction technique in machine learning and statistics.
Various approaches to obtain sparse principal direction loadings have been proposed, which are termed Sparse Principal Component Analysis.
We present thresholding as a provably accurate, polynomial-time approximation algorithm for the SPCA problem.
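A toy version of the thresholding idea is easy to write down (this illustrates the heuristic only; the paper's algorithm and its accuracy guarantees are more refined): take the leading eigenvector of the covariance matrix and keep only its k largest-magnitude loadings.

```python
import numpy as np

def thresholded_spca(A, k):
    # Sparse principal direction by thresholding: compute the leading
    # eigenvector of the symmetric matrix A, zero out all but its k
    # largest-magnitude entries, and renormalize.
    w, V = np.linalg.eigh(A)
    v = V[:, -1]                        # leading eigenvector
    keep = np.argsort(np.abs(v))[-k:]   # indices of the k largest loadings
    x = np.zeros_like(v)
    x[keep] = v[keep]
    return x / np.linalg.norm(x)
```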
arXiv Detail & Related papers (2020-06-23T04:25:36Z)
- Fast Generalized Matrix Regression with Applications in Machine Learning [46.79740387890322]
We propose a fast generalized matrix regression algorithm (Fast GMR) which utilizes sketching technique to solve the GMR problem efficiently.
We apply the Fast GMR algorithm to the symmetric positive definite matrix approximation and single pass singular value decomposition.
arXiv Detail & Related papers (2019-12-27T07:03:37Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.