Identifiability in Exact Two-Layer Sparse Matrix Factorization
- URL: http://arxiv.org/abs/2110.01235v1
- Date: Mon, 4 Oct 2021 07:56:37 GMT
- Title: Identifiability in Exact Two-Layer Sparse Matrix Factorization
- Authors: Léon Zheng (LIP), Rémi Gribonval (LIP), Elisa Riccietti (LIP)
- Abstract summary: Sparse matrix factorization is the problem of approximating a matrix Z by a product of L sparse factors X^(L) X^(L-1) ... X^(1).
This paper focuses on identifiability issues that appear in this problem, in view of better understanding under which sparsity constraints the problem is well-posed.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Sparse matrix factorization is the problem of approximating a matrix Z by a
product of L sparse factors X^(L) X^(L-1) ... X^(1). This paper focuses on
identifiability issues that appear in this problem, in view of better
understanding under which sparsity constraints the problem is well-posed. We
give conditions under which the problem of factorizing a matrix into two sparse
factors admits a unique solution, up to unavoidable permutation and scaling
equivalences. Our general framework considers an arbitrary family of prescribed
sparsity patterns, allowing us to capture more structured notions of sparsity
than simply the count of nonzero entries. These conditions are shown to be
related to essential uniqueness of exact matrix decomposition into a sum of
rank-one matrices, with structured sparsity constraints. A companion paper
further exploits these conditions to derive identifiability properties in
multilayer sparse matrix factorization of some well-known matrices like the
Hadamard or the discrete Fourier transform matrices.
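The two equivalences the abstract refers to are easy to see numerically. Below is a minimal numpy sketch (an illustration only, not taken from the paper) of the permutation/scaling ambiguity of two-layer factorizations and of the equivalent view of Z = XY as an exact sum of rank-one matrices:
```python
import numpy as np

# Two sparse factors with arbitrary supports, and their product Z.
X = np.array([[1., 2., 0.],
              [0., 0., 3.],
              [4., 0., 0.]])
Y = np.array([[0., 5., 0.],
              [6., 0., 0.],
              [0., 0., 7.]])
Z = X @ Y

# Permutation/scaling ambiguity: for any permutation matrix P and any
# invertible diagonal D, the pair (X P D, D^{-1} P^T Y) factors the same Z.
P = np.eye(3)[:, [2, 0, 1]]
D = np.diag([2., -1., 0.5])
X2 = X @ P @ D
Y2 = np.linalg.inv(D) @ P.T @ Y
assert np.allclose(X2 @ Y2, Z)

# Equivalent view as an exact decomposition into a sum of rank-one matrices:
# Z = sum_r x_r y_r^T, with x_r the r-th column of X and y_r^T the r-th row of Y.
Z_rank_one = sum(np.outer(X[:, r], Y[r, :]) for r in range(3))
assert np.allclose(Z_rank_one, Z)
```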
Related papers
- Fitting Multilevel Factor Models [41.38783926370621]
We develop a novel, fast implementation of the expectation-maximization algorithm, tailored for multilevel factor models.
We show that the inverse of an invertible positive semidefinite (PSD) multilevel low-rank (MLR) matrix is also an MLR matrix with the same sparsity in its factors.
We present an algorithm that computes the Cholesky factorization of an expanded matrix with linear time and space complexities.
arXiv Detail & Related papers (2024-09-18T15:39:12Z)
- Entrywise error bounds for low-rank approximations of kernel matrices [55.524284152242096]
We derive entrywise error bounds for low-rank approximations of kernel matrices obtained using the truncated eigen-decomposition.
A key technical innovation is a delocalisation result for the eigenvectors of the kernel matrix corresponding to small eigenvalues.
We validate our theory with an empirical study of a collection of synthetic and real-world datasets.
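For intuition about the quantity being bounded (a toy check only, not the paper's bounds), the following numpy sketch builds a Gaussian kernel matrix, forms its rank-r approximation by truncated eigendecomposition, and compares the maximum entrywise error with the largest discarded eigenvalue, which gives the spectral-norm error:
```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))

# Gaussian (RBF) kernel matrix.
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq_dists / 2.0)

# Rank-r approximation via truncated eigendecomposition.
r = 20
vals, vecs = np.linalg.eigh(K)          # eigenvalues in ascending order
U, lam = vecs[:, -r:], vals[-r:]
K_r = (U * lam) @ U.T

print("max entrywise error :", np.abs(K - K_r).max())
print("spectral-norm error :", vals[-r - 1])  # largest discarded eigenvalue
```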
arXiv Detail & Related papers (2024-05-23T12:26:25Z)
- Matrix decompositions in Quantum Optics: Takagi/Autonne, Bloch-Messiah/Euler, Iwasawa, and Williamson [0.0]
We present four important matrix decompositions commonly used in quantum optics.
The first two of these decompositions are specialized versions of the singular-value decomposition.
The third decomposition factors any symplectic matrix in a unique way in terms of matrices belonging to different subgroups of the symplectic group.
arXiv Detail & Related papers (2024-03-07T15:43:17Z)
- A Bayesian Perspective for Determinant Minimization Based Robust Structured Matrix Factorization [10.355894890759377]
We introduce a Bayesian perspective for the structured matrix factorization problem.
We show that the corresponding maximum a posteriori estimation problem boils down to the robust determinant minimization approach for structured matrix factorization.
arXiv Detail & Related papers (2023-02-16T16:48:41Z)
- Learning Graphical Factor Models with Riemannian Optimization [70.13748170371889]
This paper proposes a flexible algorithmic framework for graph learning under low-rank structural constraints.
The problem is expressed as penalized maximum likelihood estimation of an elliptical distribution.
We leverage geometries of positive definite matrices and positive semi-definite matrices of fixed rank that are well suited to elliptical models.
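For context, the plain Gaussian special case of such penalized maximum likelihood graph learning is the graphical lasso; a baseline sketch using scikit-learn is shown below (the paper itself targets elliptical distributions with low-rank factor structure and uses Riemannian optimization instead):
```python
import numpy as np
from sklearn.covariance import GraphicalLasso

# Gaussian graphical lasso: penalized maximum likelihood estimation of a
# sparse precision matrix, whose nonzero pattern defines the learned graph.
rng = np.random.default_rng(5)
X = rng.multivariate_normal(np.zeros(10), np.eye(10), size=500)

model = GraphicalLasso(alpha=0.1).fit(X)
precision = model.precision_
print("nonzero precision entries:", np.count_nonzero(np.abs(precision) > 1e-8))
```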
arXiv Detail & Related papers (2022-10-21T13:19:45Z)
- Unitary Approximate Message Passing for Matrix Factorization [90.84906091118084]
We consider matrix factorization (MF) with certain constraints, which finds wide applications in various areas.
We develop a Bayesian approach to MF with an efficient message passing implementation, called UAMPMF.
We show that UAMPMF significantly outperforms state-of-the-art algorithms in terms of recovery accuracy, robustness and computational complexity.
arXiv Detail & Related papers (2022-07-31T12:09:32Z)
- Level compressibility of certain random unitary matrices [0.0]
The value of the spectral form factor at the origin, called the level compressibility, is an important characteristic of random spectra.
The paper is devoted to analytical calculations of this quantity for different random unitary matrices describing models with intermediate spectral statistics.
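A Monte-Carlo sketch of the quantity in question (using Haar-random CUE matrices as a reference ensemble rather than the intermediate-statistics models the paper actually studies):
```python
import numpy as np
from scipy.stats import unitary_group

# Monte-Carlo estimate of the spectral form factor K(t) = <|Tr U^t|^2> / N.
# Its small-t behaviour encodes the level compressibility: 0 for the CUE,
# 1 for Poisson statistics, and intermediate values in between.
N, n_samples = 64, 200

def form_factor(t):
    total = 0.0
    for _ in range(n_samples):
        theta = np.angle(np.linalg.eigvals(unitary_group.rvs(N)))
        total += abs(np.exp(1j * t * theta).sum()) ** 2
    return total / (n_samples * N)

for t in (1, 2, 4):
    # For Haar-random (CUE) matrices the exact value is min(t, N) / N.
    print(t, form_factor(t), min(t, N) / N)
```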
arXiv Detail & Related papers (2022-02-22T21:31:24Z)
- Identifiability in Exact Multilayer Sparse Matrix Factorization [0.0]
We prove that any matrix which is the product of L factors whose supports are exactly the so-called butterfly supports admits a unique sparse factorization into L factors.
This applies in particular to the Hadamard or the discrete Fourier transform matrix of size 2^L.
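The butterfly structure is easy to verify numerically in the Hadamard case; the following numpy sketch (an illustration, not the paper's code) builds the L butterfly factors and checks that their product is the Sylvester-Hadamard matrix, each factor having exactly two nonzeros per row:
```python
from functools import reduce
import numpy as np

H2 = np.array([[1., 1.], [1., -1.]])

# Butterfly factors of the Sylvester-Hadamard matrix of size 2^L:
# H_{2^L} = X_1 X_2 ... X_L with X_l = I_{2^(l-1)} (x) H_2 (x) I_{2^(L-l)}.
def butterfly_factors(L):
    return [np.kron(np.kron(np.eye(2 ** (l - 1)), H2), np.eye(2 ** (L - l)))
            for l in range(1, L + 1)]

L = 4
factors = butterfly_factors(L)
H = reduce(np.matmul, factors)
H_ref = reduce(np.kron, [H2] * L)       # Hadamard matrix of size 2^L
assert np.allclose(H, H_ref)
# Each butterfly factor has exactly two nonzeros per row (and per column).
assert all((np.count_nonzero(Xl, axis=1) == 2).all() for Xl in factors)
```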
arXiv Detail & Related papers (2021-10-04T07:50:51Z)
- Sparse Quadratic Optimisation over the Stiefel Manifold with Application to Permutation Synchronisation [71.27989298860481]
We address the non-convex optimisation problem of finding a matrix on the Stiefel manifold that maximises a quadratic objective function.
We propose a simple yet effective sparsity-promoting algorithm for finding the dominant eigenspace matrix.
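As a baseline illustration (the paper's algorithm adds a sparsity-promoting step that is not reproduced here), plain orthogonal iteration for the dominant eigenspace looks as follows:
```python
import numpy as np

def orthogonal_iteration(A, d, n_iters=100, seed=0):
    """Dominant d-dimensional eigenspace of a symmetric matrix A."""
    rng = np.random.default_rng(seed)
    X, _ = np.linalg.qr(rng.normal(size=(A.shape[0], d)))
    for _ in range(n_iters):
        X, _ = np.linalg.qr(A @ X)      # retraction back onto the Stiefel manifold
    return X

rng = np.random.default_rng(3)
M = rng.normal(size=(50, 50))
A = M @ M.T                             # symmetric test matrix
X = orthogonal_iteration(A, d=3)
# Residual is small when the columns of X span an invariant subspace of A.
print(np.linalg.norm(A @ X - X @ (X.T @ A @ X)))
```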
arXiv Detail & Related papers (2021-09-30T19:17:35Z)
- Non-PSD Matrix Sketching with Applications to Regression and Optimization [56.730993511802865]
We present dimensionality reduction methods for non-PSD and "square-root" matrices.
We show how these techniques can be used for multiple downstream tasks.
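For orientation, a generic sketch-and-solve regression baseline with a Gaussian sketching matrix is shown below (the paper develops specialized sketches for non-PSD matrices; this is only the standard starting point):
```python
import numpy as np

rng = np.random.default_rng(4)
n, d, m = 5000, 20, 200
A = rng.normal(size=(n, d))
b = rng.normal(size=n)

S = rng.normal(size=(m, n)) / np.sqrt(m)   # Gaussian sketching matrix
x_sketch = np.linalg.lstsq(S @ A, S @ b, rcond=None)[0]
x_exact = np.linalg.lstsq(A, b, rcond=None)[0]
print("sketch-and-solve error:", np.linalg.norm(x_sketch - x_exact))
```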
arXiv Detail & Related papers (2021-06-16T04:07:48Z)
- Eigendecomposition-Free Training of Deep Networks for Linear Least-Square Problems [107.3868459697569]
We introduce an eigendecomposition-free approach to training a deep network.
We show that our approach is much more robust than explicit differentiation of the eigendecomposition.
Our method has better convergence properties and yields state-of-the-art results.
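A hedged PyTorch sketch of the underlying idea (names and shapes here are hypothetical, not the paper's code): when the ground-truth solution e of a linear least-squares problem is known during training, one can penalize ||A e||^2 directly, driving e toward the smallest-eigenvalue eigenvector of A^T A without differentiating through an eigendecomposition:
```python
import torch

net = torch.nn.Linear(10, 64)       # hypothetical network producing A
x = torch.randn(32, 10)             # toy input batch
e = torch.randn(8)
e = e / e.norm()                    # toy ground-truth unit-norm solution

A = net(x).reshape(-1, 8)           # network output reshaped into a matrix
loss = (A @ e).pow(2).sum()         # = e^T A^T A e; no eigendecomposition needed
loss.backward()                     # gradients flow through A only
```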
arXiv Detail & Related papers (2020-04-15T04:29:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.