Spiky Rank and Its Applications to Rigidity and Circuits
- URL: http://arxiv.org/abs/2602.23503v1
- Date: Thu, 26 Feb 2026 21:20:00 GMT
- Title: Spiky Rank and Its Applications to Rigidity and Circuits
- Authors: Lianna Hambardzumyan, Konstantin Myasnikov, Artur Riazanov, Morgan Shirley, Adi Shraibman
- Abstract summary: Spiky rank is a new matrix parameter that enhances blocky rank by combining the structure of the latter with linear-algebraic flexibility. We show that large spiky rank implies high matrix rigidity, and that spiky rank lower bounds yield lower bounds for depth-2 ReLU circuits.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce spiky rank, a new matrix parameter that enhances blocky rank by combining the combinatorial structure of the latter with linear-algebraic flexibility. A spiky matrix is block-structured with diagonal blocks that are arbitrary rank-one matrices, and the spiky rank of a matrix is the minimum number of such matrices required to express it as a sum. This measure extends blocky rank to real matrices and is more robust for problems with both combinatorial and algebraic character. Our conceptual contribution is as follows: we propose spiky rank as a well-behaved candidate matrix complexity measure and demonstrate its potential through applications. We show that large spiky rank implies high matrix rigidity, and that spiky rank lower bounds yield lower bounds for depth-2 ReLU circuits, the basic building blocks of neural networks. On the technical side, we establish tight bounds for random matrices and develop a framework for explicit lower bounds, applying it to Hamming distance matrices and spectral expanders. Finally, we relate spiky rank to other matrix parameters, including blocky rank, sparsity, and the $\gamma_2$-norm.
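To make the definition concrete, the following NumPy sketch (the function name, partitions, and block vectors are our own illustrative choices, not from the paper) builds two spiky matrices — each block-diagonal under a row/column partition, with every diagonal block an arbitrary rank-one outer product — and sums them, yielding a matrix of spiky rank at most 2 by construction:

```python
import numpy as np

def spiky_matrix(n, blocks):
    # One "spiky" matrix: zero outside the diagonal blocks of a
    # row/column partition; each diagonal block is an arbitrary
    # rank-one outer product u v^T.
    M = np.zeros((n, n))
    for rows, cols, u, v in blocks:
        M[np.ix_(rows, cols)] = np.outer(u, v)
    return M

rng = np.random.default_rng(0)
n = 6
# First spiky matrix: partition {0,1,2} | {3,4,5}.
S1 = spiky_matrix(n, [
    (range(0, 3), range(0, 3), rng.standard_normal(3), rng.standard_normal(3)),
    (range(3, 6), range(3, 6), rng.standard_normal(3), rng.standard_normal(3)),
])
# Second spiky matrix: a different partition {0,...,3} | {4,5}.
S2 = spiky_matrix(n, [
    (range(0, 4), range(0, 4), rng.standard_normal(4), rng.standard_normal(4)),
    (range(4, 6), range(4, 6), rng.standard_normal(2), rng.standard_normal(2)),
])
A = S1 + S2  # spiky rank of A is at most 2 by construction
```

Because the two summands use different partitions, the sum A is generally neither block-diagonal nor low rank, which is what makes lower-bounding the number of required summands nontrivial.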
Related papers
- Spectral Estimation with Free Decompression [47.81955761814048]
We introduce a novel method of "free decompression" to estimate the spectrum of very large (impalpable) matrices. Our method can be used to extrapolate from the empirical spectral densities of small submatrices to infer the eigenspectrum of extremely large (impalpable) matrices.
arXiv Detail & Related papers (2025-06-13T17:49:25Z) - Cramer-Rao Bounds for Laplacian Matrix Estimation [56.1214184671173]
We derive closed-form matrix expressions for the Cramer-Rao Bound (CRB) specifically tailored to Laplacian matrix estimation. We demonstrate the use of CRBs in three representative applications: (i) topology identification in power systems, (ii) graph filter identification in diffused models, and (iii) precision matrix estimation in Gaussian Markov random fields under Laplacian constraints.
arXiv Detail & Related papers (2025-04-06T18:28:31Z) - Controlled measurement, Hermitian conjugation and normalization in matrix-manipulation algorithms [46.13392585104221]
We introduce the concept of controlled measurement that solves the problem of small access probability to the desired state of ancilla. Separate encoding of the real and imaginary parts of a complex matrix allows us to include the Hermitian conjugation in the list of matrix manipulations. We weaken the constraints on the absolute values of matrix elements unavoidably imposed by the normalization condition for a pure quantum state.
arXiv Detail & Related papers (2025-03-27T08:49:59Z) - Semi-supervised Symmetric Non-negative Matrix Factorization with Low-Rank Tensor Representation [27.14442336413482]
We propose a novel semi-supervised symmetric non-negative matrix factorization (SNMF) model by seeking a low-rank representation for the tensor synthesized by the pairwise constraint matrix.
We then propose an enhanced SNMF model, making the embedding matrix tailored to the above tensor low-rank representation.
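For context, plain symmetric NMF approximates a nonnegative symmetric affinity matrix A by H @ H.T with H >= 0. The following sketch uses a standard damped multiplicative update as a minimal baseline (this is the classical unsupervised update, not the semi-supervised tensor low-rank model of the cited paper; the toy affinity matrix is our own):

```python
import numpy as np

def symnmf(A, k, iters=200, beta=0.5, eps=1e-9):
    # Symmetric NMF: approximate symmetric nonnegative A by H @ H.T
    # with H >= 0, via the common damped multiplicative update
    # H <- H * (1 - beta + beta * (A H) / (H H^T H)).
    rng = np.random.default_rng(0)
    n = A.shape[0]
    H = rng.random((n, k))
    for _ in range(iters):
        H = H * (1 - beta + beta * (A @ H) / (H @ (H.T @ H) + eps))
    return H

# Toy block-structured affinity with two clusters of three nodes each.
A = np.kron(np.eye(2), np.ones((3, 3)))
H = symnmf(A, 2)
```

The update preserves nonnegativity of H by construction, since every factor in it is nonnegative; the rows of H then act as soft cluster memberships.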
arXiv Detail & Related papers (2024-05-04T14:58:47Z) - Factor Fitting, Rank Allocation, and Partitioning in Multilevel Low Rank Matrices [39.594033761023695]
We address three problems that arise in fitting a given matrix by an MLR matrix in the Frobenius norm. The first problem is factor fitting, where we adjust the factors of the MLR matrix. The second is rank allocation, where we choose the ranks of the blocks in each level, subject to the total rank having a given value. The final problem is to choose the hierarchical partition of rows and columns, along with the ranks and factors.
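The object being fitted can be sketched directly (the sizes, partition, and per-level ranks below are illustrative choices, not from the paper): a multilevel low rank (MLR) matrix is a sum over levels of block-diagonal matrices whose diagonal blocks are low rank, with the partition refining from level to level.

```python
import numpy as np

rng = np.random.default_rng(3)
n, r1, r2 = 8, 2, 1

# Level 1: a single global low-rank term covering the whole matrix.
A1 = rng.standard_normal((n, r1)) @ rng.standard_normal((r1, n))

# Level 2: a refined 2x2 block partition; each diagonal block is low rank.
A2 = np.zeros((n, n))
for lo, hi in [(0, 4), (4, 8)]:
    A2[lo:hi, lo:hi] = (rng.standard_normal((hi - lo, r2))
                        @ rng.standard_normal((r2, hi - lo)))

A = A1 + A2  # a two-level MLR matrix
```

Factor fitting adjusts the factor pairs above with the partition and ranks fixed; rank allocation redistributes r1 and r2 subject to a total-rank budget; the hardest problem also searches over the partition itself.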
arXiv Detail & Related papers (2023-10-30T00:52:17Z) - Mutually-orthogonal unitary and orthogonal matrices [6.9607365816307]
As an application in quantum information theory, we show that the minimum and maximum numbers of an unextendible maximally entangled bases within a real two-qutrit system are three and four, respectively.
arXiv Detail & Related papers (2023-09-20T08:20:57Z) - Semi-Supervised Subspace Clustering via Tensor Low-Rank Representation [64.49871502193477]
We propose a novel semi-supervised subspace clustering method, which is able to simultaneously augment the initial supervisory information and construct a discriminative affinity matrix.
Comprehensive experimental results on six commonly-used benchmark datasets demonstrate the superiority of our method over state-of-the-art methods.
arXiv Detail & Related papers (2022-05-21T01:47:17Z) - Non-PSD Matrix Sketching with Applications to Regression and Optimization [56.730993511802865]
We present dimensionality reduction methods for non-PSD and "square-root" matrices.
We show how these techniques can be used for multiple downstream tasks.
arXiv Detail & Related papers (2021-06-16T04:07:48Z) - On the Optimality of Nuclear-norm-based Matrix Completion for Problems with Smooth Non-linear Structure [19.069068837749885]
Matrix completion has proven widely effective in many problems where there is no reason to assume low-dimensional linear structure in the underlying matrix.
We show that nuclear-norm penalization is still effective for recovering these matrices when observations are missing completely at random.
arXiv Detail & Related papers (2021-05-05T05:34:32Z) - Robust Low-rank Matrix Completion via an Alternating Manifold Proximal Gradient Continuation Method [47.80060761046752]
Robust low-rank matrix completion (RMC) has been studied extensively for computer vision, signal processing and machine learning applications.
This problem aims to decompose a partially observed matrix into the superposition of a low-rank matrix and a sparse matrix, where the sparse matrix captures the grossly corrupted entries of the matrix.
A widely used approach to tackle RMC is to consider a convex formulation, which minimizes the nuclear norm of the low-rank matrix (to promote low-rankness) and the l1 norm of the sparse matrix (to promote sparsity).
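The two proximal operators behind this convex formulation are easy to state: singular value thresholding for the nuclear norm and entrywise soft thresholding for the l1 norm. The sketch below applies them in a naive alternating loop on a synthetic instance, purely for illustration (this is not the manifold proximal gradient continuation method of the cited paper, and the threshold 0.5 is an arbitrary choice):

```python
import numpy as np

def svt(X, tau):
    # Singular value thresholding: proximal operator of the nuclear norm.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft(X, tau):
    # Entrywise soft thresholding: proximal operator of the l1 norm.
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

# Synthetic RMC instance: low-rank plus sparse corruption, partially observed.
rng = np.random.default_rng(1)
n, r = 20, 2
L_true = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
S_true = np.where(rng.random((n, n)) < 0.05, 5.0, 0.0)   # gross corruptions
mask = rng.random((n, n)) < 0.8                          # observed entries
M = (L_true + S_true) * mask

L = np.zeros((n, n))
S = np.zeros((n, n))
for _ in range(50):
    L = svt(np.where(mask, M - S, L), 0.5)     # promote low-rankness
    S = soft(np.where(mask, M - L, 0.0), 0.5)  # promote sparsity
```

Each step shrinks one component while holding the other fixed: svt shrinks the singular values of the low-rank candidate, and soft shrinks the entries of the sparse candidate, mirroring the two penalty terms of the convex objective.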
In this paper, motivated by some recent works on low-
arXiv Detail & Related papers (2020-08-18T04:46:22Z) - Relative Error Bound Analysis for Nuclear Norm Regularized Matrix Completion [101.83262280224729]
We develop a relative error bound for nuclear norm regularized matrix completion.
We derive a relative upper bound for recovering the best low-rank approximation of the unknown matrix.
arXiv Detail & Related papers (2015-04-26T13:12:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.