Factor Fitting, Rank Allocation, and Partitioning in Multilevel Low Rank
Matrices
- URL: http://arxiv.org/abs/2310.19214v1
- Date: Mon, 30 Oct 2023 00:52:17 GMT
- Title: Factor Fitting, Rank Allocation, and Partitioning in Multilevel Low Rank
Matrices
- Authors: Tetiana Parshakova, Trevor Hastie, Eric Darve, Stephen Boyd
- Abstract summary: We address three problems that arise in fitting a given matrix by an MLR matrix in the Frobenius norm.
The first problem is factor fitting, where we adjust the factors of the MLR matrix.
The second is rank allocation, where we choose the ranks of the blocks in each level, subject to the total rank having a given value.
The final problem is to choose the hierarchical partition of rows and columns, along with the ranks and factors.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider multilevel low rank (MLR) matrices, defined as a row and column
permutation of a sum of matrices, each one a block diagonal refinement of the
previous one, with all blocks low rank given in factored form. MLR matrices
extend low rank matrices but share many of their properties, such as the total
storage required and complexity of matrix-vector multiplication. We address
three problems that arise in fitting a given matrix by an MLR matrix in the
Frobenius norm. The first problem is factor fitting, where we adjust the
factors of the MLR matrix. The second is rank allocation, where we choose the
ranks of the blocks in each level, subject to the total rank having a given
value, which preserves the total storage needed for the MLR matrix. The final
problem is to choose the hierarchical partition of rows and columns, along with
the ranks and factors. This paper is accompanied by an open source package that
implements the proposed methods.
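To make the MLR structure concrete, here is a small hypothetical sketch (the two-level partition, sizes, and ranks are invented for illustration, and the row/column permutations are taken to be the identity): each level contributes a block-diagonal term whose blocks are stored in factored form, so a matrix-vector product is a sum of cheap low-rank multiplies and the dense matrix is never formed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Level 1: one 6x6 block of rank 2, stored as B1 @ C1.
B1, C1 = rng.standard_normal((6, 2)), rng.standard_normal((2, 6))
# Level 2: two 3x3 diagonal blocks, each of rank 1, in factored form.
blocks2 = [(rng.standard_normal((3, 1)), rng.standard_normal((1, 3)))
           for _ in range(2)]

def mlr_matvec(x):
    """Multiply the MLR matrix by x using only the stored factors."""
    y = B1 @ (C1 @ x)                     # level-1 low-rank term
    for i, (B, C) in enumerate(blocks2):  # level-2 block-diagonal term
        s = slice(3 * i, 3 * (i + 1))
        y[s] += B @ (C @ x[s])
    return y

# Dense check: the MLR matrix is the sum of the per-level terms.
A = B1 @ C1
for i, (B, C) in enumerate(blocks2):
    A[3 * i:3 * (i + 1), 3 * i:3 * (i + 1)] += B @ C

x = rng.standard_normal(6)
assert np.allclose(mlr_matvec(x), A @ x)
```

Storage and matvec cost scale with the total rank rather than the matrix dimensions, which is the sense in which MLR matrices "share many properties" of plain low-rank matrices.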
Related papers
- Fitting Multilevel Factor Models [41.38783926370621]
We develop a novel, fast implementation of the expectation-maximization algorithm, tailored for multilevel factor models.
We show that the inverse of an invertible PSD MLR matrix is also an MLR matrix with the same sparsity in factors.
We present an algorithm that computes the Cholesky factorization of an expanded matrix with linear time and space complexities.
arXiv Detail & Related papers (2024-09-18T15:39:12Z)
- Mutually-orthogonal unitary and orthogonal matrices [6.9607365816307]
As an application in quantum information theory, we show that the minimum and maximum numbers of unextendible maximally entangled bases within a real two-qutrit system are three and four, respectively.
arXiv Detail & Related papers (2023-09-20T08:20:57Z)
- One-sided Matrix Completion from Two Observations Per Row [95.87811229292056]
We propose a natural algorithm that involves imputing the missing values of the matrix $X^T X$.
We evaluate our algorithm on one-sided recovery of synthetic data and low-coverage genome sequencing.
arXiv Detail & Related papers (2023-06-06T22:35:16Z)
- Optimal Low-Rank Matrix Completion: Semidefinite Relaxations and Eigenvector Disjunctions [6.537257913467247]
Low-rank matrix completion consists of computing a matrix of minimal complexity that recovers a given set of observations as accurately as possible.
New convex relaxations decrease the optimality gap by orders of magnitude compared to existing methods.
arXiv Detail & Related papers (2023-05-20T22:04:34Z)
- Learning idempotent representation for subspace clustering [7.6275971668447]
An ideal reconstruction coefficient matrix should have two properties: 1) it is block diagonal with each block indicating a subspace; 2) each block is fully connected.
We devise an idempotent representation (IDR) algorithm to pursue reconstruction coefficient matrices approximating normalized membership matrices.
Experiments conducted on both synthetic and real-world datasets show that IDR is an effective and efficient subspace clustering algorithm.
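As a small illustration of the idempotence property this entry alludes to (this example is ours, not from the paper's code): the normalized membership matrix M of a hard clustering, with M[i, j] = 1/|c| when rows i and j share a cluster of size |c| and 0 otherwise, is block diagonal and satisfies M @ M = M.

```python
import numpy as np

# Cluster labels for 5 points: cluster 0 has 2 members, cluster 1 has 3.
labels = np.array([0, 0, 1, 1, 1])

# Build the normalized membership matrix: each diagonal block is a
# constant block with entries 1/|cluster|.
M = np.zeros((5, 5))
for c in np.unique(labels):
    idx = np.flatnonzero(labels == c)
    M[np.ix_(idx, idx)] = 1.0 / idx.size

assert np.allclose(M @ M, M)          # idempotent: M is a projection
assert np.allclose(M.sum(axis=1), 1)  # rows sum to one
```

Each block is (1/|c|) times the all-ones matrix, and squaring it reproduces the block, which is why idempotence can serve as a target property for reconstruction coefficient matrices.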
arXiv Detail & Related papers (2022-07-29T01:39:25Z)
- Semi-Supervised Subspace Clustering via Tensor Low-Rank Representation [64.49871502193477]
We propose a novel semi-supervised subspace clustering method, which is able to simultaneously augment the initial supervisory information and construct a discriminative affinity matrix.
Comprehensive experimental results on six commonly-used benchmark datasets demonstrate the superiority of our method over state-of-the-art methods.
arXiv Detail & Related papers (2022-05-21T01:47:17Z)
- Identifiability in Exact Two-Layer Sparse Matrix Factorization [0.0]
Sparse matrix factorization is the problem of approximating a matrix Z by a product of L sparse factors $X^{(L)} X^{(L-1)} \cdots X^{(1)}$.
This paper focuses on identifiability issues that appear in this problem, in view of better understanding under which sparsity constraints the problem is well-posed.
arXiv Detail & Related papers (2021-10-04T07:56:37Z)
- Sparse Quadratic Optimisation over the Stiefel Manifold with Application to Permutation Synchronisation [71.27989298860481]
We address the non-convex optimisation problem of finding a matrix on the Stiefel manifold that maximises a quadratic objective function.
We propose a simple yet effective sparsity-promoting algorithm for finding the dominant eigenspace of a matrix.
arXiv Detail & Related papers (2021-09-30T19:17:35Z) - Non-PSD Matrix Sketching with Applications to Regression and
Optimization [56.730993511802865]
We present dimensionality reduction methods for non-PSD and "square-root" matrices.
We show how these techniques can be used for multiple downstream tasks.
arXiv Detail & Related papers (2021-06-16T04:07:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.