Identifiability in Exact Multilayer Sparse Matrix Factorization
- URL: http://arxiv.org/abs/2110.01230v1
- Date: Mon, 4 Oct 2021 07:50:51 GMT
- Title: Identifiability in Exact Multilayer Sparse Matrix Factorization
- Authors: Léon Zheng (LIP), Rémi Gribonval (LIP), Elisa Riccietti (LIP)
- Abstract summary: We prove that any matrix which is the product of L factors whose supports are exactly the so-called butterfly supports admits a unique sparse factorization into L factors.
This applies in particular to the Hadamard or the discrete Fourier transform matrix of size 2^L.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many well-known matrices Z are associated to fast transforms corresponding to
factorizations of the form Z = X^(L) ... X^(1), where each factor X^(l) is
sparse. Based on a general result for the case with two factors, established in a
companion paper, we investigate the essential uniqueness of such factorizations. We
show some identifiability results for the sparse factorization into two factors
of the discrete Fourier Transform, discrete cosine transform or discrete sine
transform matrices of size N = 2^L , when enforcing N/2-sparsity by column on
the left factor, and 2-sparsity by row on the right factor. We also show that
the analysis with two factors can be extended to the multilayer case, based on
a hierarchical factorization method. We prove that any matrix which is the
product of L factors whose supports are exactly the so-called butterfly
supports admits a unique sparse factorization into L factors. This applies in
particular to the Hadamard or the discrete Fourier transform matrix of size 2^L.
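As a concrete illustration of the butterfly structure discussed in the abstract, the short numpy sketch below builds L factors with butterfly supports using the standard Kronecker-product construction (an illustrative choice, not code from the paper; the helper name butterfly_factors is hypothetical), checks that each factor has exactly 2 nonzeros per row and per column, and verifies that their product is the Hadamard matrix of size 2^L.

import numpy as np

H2 = np.array([[1.0, 1.0], [1.0, -1.0]])  # 2x2 Hadamard block

def butterfly_factors(L):
    # Factor l is I_{2^(l-1)} (x) H2 (x) I_{2^(L-l)}: a standard realization of
    # the butterfly supports; each factor is 2^L x 2^L with 2 nonzeros per row
    # and per column.
    return [np.kron(np.kron(np.eye(2 ** (l - 1)), H2), np.eye(2 ** (L - l)))
            for l in range(1, L + 1)]

L = 3
factors = butterfly_factors(L)

# Every butterfly factor has exactly 2 nonzero entries per row and per column.
for F in factors:
    assert (np.count_nonzero(F, axis=0) == 2).all()
    assert (np.count_nonzero(F, axis=1) == 2).all()

# The product of the L factors is the Hadamard matrix of size 2^L.
Z = factors[0]
for F in factors[1:]:
    Z = Z @ F

H = np.array([[1.0]])
for _ in range(L):
    H = np.kron(H, H2)  # Sylvester construction of the Hadamard matrix
assert np.allclose(Z, H)
print("Product of", L, "butterfly factors equals the Hadamard matrix of size", 2 ** L)

The same support pattern underlies the radix-2 fast Fourier transform; the paper's result states that, given exactly these supports, such a product admits an essentially unique factorization into L sparse factors.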
Related papers
- Fitting Multilevel Factor Models [41.38783926370621]
We develop a novel, fast implementation of the expectation-maximization algorithm, tailored for multilevel factor models.
We show that the inverse of an invertible PSD MLR matrix is also an MLR matrix with the same sparsity in factors.
We present an algorithm that computes the Cholesky factorization of an expanded matrix with linear time and space complexities.
arXiv Detail & Related papers (2024-09-18T15:39:12Z)
- Unveiling Induction Heads: Provable Training Dynamics and Feature Learning in Transformers [54.20763128054692]
We study how a two-attention-layer transformer is trained to perform ICL on $n$-gram Markov chain data.
We prove that the gradient flow with respect to a cross-entropy ICL loss converges to a limiting model.
arXiv Detail & Related papers (2024-09-09T18:10:26Z)
- Maximal Ordinal Two-Factorizations [0.0]
We show that deciding on the existence of two-factorizations of a given size is an NP-complete problem.
We provide the algorithm Ord2Factor that allows us to compute large ordinal two-factorizations.
arXiv Detail & Related papers (2023-04-06T19:26:03Z)
- Unitary Approximate Message Passing for Matrix Factorization [90.84906091118084]
We consider matrix factorization (MF) with certain constraints, which finds wide applications in various areas.
We develop a Bayesian approach to MF with an efficient message passing implementation, called UAMPMF.
We show that UAMPMF significantly outperforms state-of-the-art algorithms in terms of recovery accuracy, robustness and computational complexity.
arXiv Detail & Related papers (2022-07-31T12:09:32Z)
- Phase Factors in Singular Value Decomposition and Schmidt Decomposition [0.0]
In the singular value decomposition (SVD) of a complex matrix A, the singular vectors, or the eigenvectors of AA† and A†A, are unique up to complex phase factors.
We summarize here three simple steps to consistently carry out the SVD and the Schmidt decomposition including the phase factors.
arXiv Detail & Related papers (2022-03-23T17:41:18Z)
- Level compressibility of certain random unitary matrices [0.0]
The value of the spectral form factor at the origin, called the level compressibility, is an important characteristic of random spectra.
The paper is devoted to analytical calculations of this quantity for different random unitary matrices describing models with intermediate spectral statistics.
arXiv Detail & Related papers (2022-02-22T21:31:24Z)
- Identifiability in Exact Two-Layer Sparse Matrix Factorization [0.0]
Sparse matrix factorization is the problem of approximating a matrix Z by a product of L sparse factors X^(L) X^(L-1) ... X^(1).
This paper focuses on identifiability issues that appear in this problem, in view of better understanding under which sparsity constraints the problem is well-posed.
arXiv Detail & Related papers (2021-10-04T07:56:37Z)
- Sawtooth Factorial Topic Embeddings Guided Gamma Belief Network [49.458250193768826]
We propose sawtooth factorial topic embedding guided GBN, a deep generative model of documents.
Both the words and topics are represented as embedding vectors of the same dimension.
Our models outperform other neural topic models on extracting deeper interpretable topics.
arXiv Detail & Related papers (2021-06-30T10:14:57Z)
- Feature Weighted Non-negative Matrix Factorization [92.45013716097753]
We propose the Feature weighted Non-negative Matrix Factorization (FNMF) in this paper.
FNMF learns the weights of features adaptively according to their importance.
It can be solved efficiently with the suggested optimization algorithm.
arXiv Detail & Related papers (2021-03-24T21:17:17Z)
- Finite-Function-Encoding Quantum States [52.77024349608834]
We introduce finite-function-encoding (FFE) states which encode arbitrary $d$-valued logic functions.
We investigate some of their structural properties.
arXiv Detail & Related papers (2020-12-01T13:53:23Z)
- Supervised Quantile Normalization for Low-rank Matrix Approximation [50.445371939523305]
We learn the parameters of quantile normalization operators that can operate row-wise on the values of $X$ and/or of its factorization $UV$ to improve the quality of the low-rank representation of $X$ itself.
We demonstrate the applicability of these techniques on synthetic and genomics datasets.
arXiv Detail & Related papers (2020-02-08T21:06:02Z)