Matrix Decomposition and Applications
- URL: http://arxiv.org/abs/2201.00145v3
- Date: Thu, 28 Dec 2023 08:19:55 GMT
- Title: Matrix Decomposition and Applications
- Authors: Jun Lu
- Abstract summary: In 1954, Alston S. Householder published Principles of Numerical Analysis, one of the first modern treatments of matrix decomposition.
Matrix decomposition has since become a core technology in machine learning, largely due to the development of the backpropagation algorithm for fitting neural networks.
- Score: 8.034728173797953
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In 1954, Alston S. Householder published Principles of Numerical
Analysis, one of the first modern treatments of matrix decomposition, which
favored a (block) LU decomposition: the factorization of a matrix into the
product of lower and upper triangular matrices. Matrix decomposition has
since become a core technology in machine learning, largely due to the
development of the backpropagation algorithm for fitting neural networks.
The sole aim of this survey is to give a self-contained introduction to
concepts and mathematical tools in numerical linear algebra and matrix
analysis in order to seamlessly introduce matrix decomposition techniques
and their applications in subsequent sections. However, we cannot hope to
cover all the useful and interesting results concerning matrix decomposition
within this limited scope; for example, we omit the separate analyses of
Euclidean space, Hermitian space, Hilbert space, and the complex domain. We
refer the reader to the linear algebra literature for a more detailed
introduction to these topics.
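To make the factorization concrete, here is a minimal NumPy/SciPy sketch of LU decomposition with partial pivoting (an illustrative example, not code from the survey; the matrix A is arbitrary):

```python
import numpy as np
from scipy.linalg import lu

# An arbitrary square matrix to factor (illustrative values).
A = np.array([[4.0, 3.0, 2.0],
              [6.0, 3.0, 1.0],
              [8.0, 5.0, 9.0]])

# LU factorization with partial pivoting: P is a permutation matrix,
# L is unit lower triangular, U is upper triangular, A = P @ L @ U.
P, L, U = lu(A)

assert np.allclose(A, P @ L @ U)
```

With the factorization in hand, solving Ax = b reduces to one permutation and two triangular solves, which is why LU sits at the core of dense linear solvers.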
Related papers
- Understanding Matrix Function Normalizations in Covariance Pooling through the Lens of Riemannian Geometry [63.694184882697435]
Global Covariance Pooling (GCP) has been demonstrated to improve the performance of Deep Neural Networks (DNNs) by exploiting second-order statistics of high-level representations.
arXiv Detail & Related papers (2024-07-15T07:11:44Z)
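As a rough illustration of what GCP computes, here is a minimal NumPy sketch of covariance pooling with matrix square-root normalization (the function name, the eps regularizer, and the eigendecomposition route are my own illustrative choices, not that paper's Riemannian analysis):

```python
import numpy as np

def sqrt_normalized_covariance(features, eps=1e-5):
    """Covariance pooling of an (n_positions, n_channels) feature map,
    followed by matrix square-root normalization via eigendecomposition."""
    X = features - features.mean(axis=0, keepdims=True)
    cov = X.T @ X / max(X.shape[0] - 1, 1) + eps * np.eye(X.shape[1])
    w, V = np.linalg.eigh(cov)  # cov is symmetric positive semidefinite
    return V @ np.diag(np.sqrt(np.maximum(w, 0.0))) @ V.T  # cov^{1/2}
```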
- Recent and Upcoming Developments in Randomized Numerical Linear Algebra for Machine Learning [49.0767291348921]
Randomized Numerical Linear Algebra (RandNLA) is an area which uses randomness to develop improved algorithms for ubiquitous matrix problems.
This article provides a self-contained overview of RandNLA, in light of these developments.
arXiv Detail & Related papers (2024-06-17T02:30:55Z)
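A canonical RandNLA primitive is the randomized range finder behind randomized SVD; a minimal sketch (a generic textbook construction, not code from that survey):

```python
import numpy as np

def randomized_svd(A, k, oversample=10, seed=0):
    """Halko-Martinsson-Tropp style randomized SVD: project A onto a
    random subspace, orthonormalize, then do a small exact SVD."""
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    Omega = rng.normal(size=(n, k + oversample))  # random test matrix
    Q, _ = np.linalg.qr(A @ Omega)                # orthonormal range basis
    B = Q.T @ A                                   # small (k+p) x n matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :k], s[:k], Vt[:k, :]
```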
- Entrywise error bounds for low-rank approximations of kernel matrices [55.524284152242096]
We derive entrywise error bounds for low-rank approximations of kernel matrices obtained using the truncated eigen-decomposition.
A key technical innovation is a delocalisation result for the eigenvectors of the kernel matrix corresponding to small eigenvalues.
We validate our theory with an empirical study of a collection of synthetic and real-world datasets.
arXiv Detail & Related papers (2024-05-23T12:26:25Z)
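The approximation studied in the entry above is straightforward to reproduce; a minimal sketch (the RBF kernel, bandwidth, and data are illustrative assumptions):

```python
import numpy as np

# Build an RBF kernel matrix on synthetic points (illustrative choices).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq_dists / 2.0)

# Rank-r approximation from the truncated eigendecomposition of K.
r = 10
w, V = np.linalg.eigh(K)      # eigenvalues in ascending order
w_r, V_r = w[-r:], V[:, -r:]  # keep the r largest
K_r = V_r @ np.diag(w_r) @ V_r.T

# Entrywise (max-abs) error, the quantity that the paper bounds.
print(np.abs(K - K_r).max())
```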
- Matrix decompositions in Quantum Optics: Takagi/Autonne, Bloch-Messiah/Euler, Iwasawa, and Williamson [0.0]
We present four important matrix decompositions commonly used in quantum optics.
The first two of these decompositions are specialized versions of the singular-value decomposition.
The third factors any symplectic matrix in a unique way in terms of matrices that belong to different subgroups of the symplectic group.
arXiv Detail & Related papers (2024-03-07T15:43:17Z)
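For the Takagi/Autonne decomposition mentioned above, one standard construction obtains the unitary factor from an ordinary SVD in the generic case of distinct singular values; a hedged sketch (not necessarily the paper's algorithm):

```python
import numpy as np

def takagi(A):
    """Takagi/Autonne decomposition A = Q @ diag(s) @ Q.T of a complex
    symmetric A, assuming distinct singular values (generic case)."""
    U, s, Vh = np.linalg.svd(A)
    # For symmetric A with distinct singular values, D = U^H conj(V) is
    # a diagonal matrix of unit-modulus phases; absorbing its square
    # root into U makes the two SVD factors coincide.
    D = U.conj().T @ Vh.T          # conj(V) equals Vh.T
    Q = U * np.sqrt(np.diag(D))    # scale columns of U by phase roots
    return Q, s

rng = np.random.default_rng(1)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
A = M + M.T                        # a random complex symmetric matrix
Q, s = takagi(A)
assert np.allclose(A, Q @ np.diag(s) @ Q.T)
```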
- Sufficient dimension reduction for feature matrices [3.04585143845864]
We propose a method called principal support matrix machine (PSMM) for matrix sufficient dimension reduction.
Our numerical analysis demonstrates that the PSMM outperforms existing methods and has strong interpretability in real data applications.
arXiv Detail & Related papers (2023-03-07T23:16:46Z)
- Bayesian Matrix Decomposition and Applications [8.034728173797953]
The sole aim of this book is to give a self-contained introduction to concepts and mathematical tools in Bayesian matrix decomposition.
Beyond a modest background, the development is self-contained, with rigorous proofs provided throughout.
arXiv Detail & Related papers (2023-02-18T07:40:03Z)
- Generalized Leverage Scores: Geometric Interpretation and Applications [15.86621510551207]
We extend the definition of leverage scores to relate the columns of a matrix to arbitrary subsets of singular vectors.
We employ this result to design approximation algorithms with provable guarantees for two well-known problems.
arXiv Detail & Related papers (2022-06-16T10:14:08Z)
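For context, the classical rank-k leverage scores that the entry above generalizes can be computed directly from the SVD; a minimal sketch of the standard definition (not the paper's generalized scores):

```python
import numpy as np

def column_leverage_scores(A, k):
    """Rank-k leverage score of column j of a real matrix A: the squared
    Euclidean norm of the j-th column of Vt[:k], the top-k right singular
    vectors of A."""
    _, _, Vt = np.linalg.svd(A, full_matrices=False)
    return (Vt[:k, :] ** 2).sum(axis=0)
```

Columns with large scores are disproportionately influential, which is what makes leverage scores useful as sampling probabilities in sketching algorithms.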
- Non-PSD Matrix Sketching with Applications to Regression and Optimization [56.730993511802865]
We present dimensionality reduction methods for non-PSD and "square-root" matrices.
We show how these techniques can be used for multiple downstream tasks.
arXiv Detail & Related papers (2021-06-16T04:07:48Z)
- Projection techniques to update the truncated SVD of evolving matrices [17.22107982549168]
This paper considers the problem of updating the rank-k truncated Singular Value Decomposition (SVD) of matrices subject to the addition of new rows and/or columns over time.
The proposed framework is purely algebraic and targets general updating problems.
Results on matrices from real applications suggest that the proposed algorithm can lead to higher accuracy.
arXiv Detail & Related papers (2020-10-13T13:46:08Z)
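One classical projection technique in this family is the Zha-Simon update for appending rows to a matrix with a known rank-k truncated SVD; a hedged sketch (a simplified generic scheme, not the paper's exact algorithm):

```python
import numpy as np

def update_svd_add_rows(Uk, sk, Vk, B):
    """Zha-Simon style update of a rank-k truncated SVD
    A ~ Uk @ diag(sk) @ Vk.T after appending rows B, i.e. for [A; B]."""
    k, m = sk.size, B.shape[0]
    # Component of B^T outside the current right subspace span(Vk).
    Q, R = np.linalg.qr(B.T - Vk @ (Vk.T @ B.T))
    # Small (k+m) x (k+m) core matrix whose SVD yields the update.
    F = np.block([[np.diag(sk), np.zeros((k, m))],
                  [B @ Vk,      R.T]])
    Uf, sf, Vft = np.linalg.svd(F)
    # Rotate the factors back and re-truncate to rank k.
    U_new = np.block([[Uk, np.zeros((Uk.shape[0], m))],
                      [np.zeros((m, k)), np.eye(m)]]) @ Uf
    V_new = np.hstack([Vk, Q]) @ Vft.T
    return U_new[:, :k], sf[:k], V_new[:, :k]
```

If the input factors are exact, the augmented matrix is represented exactly before truncation, so the only error introduced is the final rank-k cut.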
- Robust Low-rank Matrix Completion via an Alternating Manifold Proximal Gradient Continuation Method [47.80060761046752]
Robust low-rank matrix completion (RMC) has been studied extensively for computer vision, signal processing and machine learning applications.
This problem aims to decompose a partially observed matrix into the superposition of a low-rank matrix and a sparse matrix, where the sparse matrix captures the grossly corrupted entries of the matrix.
A widely used approach to tackle RMC is to consider a convex formulation, which minimizes the nuclear norm of the low-rank matrix (to promote low-rankness) and the l1 norm of the sparse matrix (to promote sparsity).
In this paper, motivated by some recent works on low-rank...
arXiv Detail & Related papers (2020-08-18T04:46:22Z)
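The convex formulation described above is commonly solved with ADMM, alternating singular value thresholding on the low-rank term with entrywise soft thresholding on the sparse term; a minimal sketch for the fully observed case, i.e. robust PCA (an illustration of the generic convex approach, not the paper's manifold proximal gradient method):

```python
import numpy as np

def soft_threshold(X, tau):
    """Entrywise prox of the l1 norm."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    """Singular value thresholding: prox of the nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def rpca_admm(M, lam=None, mu=1.0, n_iter=200):
    """ADMM for min ||L||_* + lam * ||S||_1  s.t.  L + S = M."""
    if lam is None:
        lam = 1.0 / np.sqrt(max(M.shape))  # standard default choice
    S = np.zeros_like(M)
    Y = np.zeros_like(M)  # scaled dual variable
    for _ in range(n_iter):
        L = svt(M - S + Y / mu, 1.0 / mu)
        S = soft_threshold(M - L + Y / mu, lam / mu)
        Y = Y + mu * (M - L - S)
    return L, S
```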
- Granular Computing: An Augmented Scheme of Degranulation Through a Modified Partition Matrix [86.89353217469754]
Information granules, which form an abstract and efficient characterization of large volumes of numeric data, have been regarded as the fundamental constructs of Granular Computing.
Previous studies have shown that there is a relationship between the reconstruction error and the performance of the granulation process.
To enhance the quality of degranulation, in this study, we develop an augmented scheme through modifying the partition matrix.
arXiv Detail & Related papers (2020-04-03T03:20:09Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including any information presented) and is not responsible for any consequences of its use.