Perturbation Analysis of Singular Values in Concatenated Matrices
- URL: http://arxiv.org/abs/2505.01427v2
- Date: Sun, 29 Jun 2025 16:31:24 GMT
- Title: Perturbation Analysis of Singular Values in Concatenated Matrices
- Authors: Maksym Shamrai
- Abstract summary: How does the singular value spectrum of a concatenated matrix relate to the spectra of its individual components? We establish analytical bounds that quantify the stability of singular values under small perturbations in submatrices. The results demonstrate that if the submatrices are close in norm, the dominant singular values of the concatenated matrix remain stable, enabling controlled trade-offs between accuracy and compression.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Concatenating matrices is a common technique for uncovering shared structures in data through singular value decomposition (SVD) and low-rank approximations. A fundamental question arises: How does the singular value spectrum of the concatenated matrix relate to the spectra of its individual components? In the present work, we develop a perturbation technique that extends classical results such as Weyl's inequality to concatenated matrices. We establish analytical bounds that quantify the stability of singular values under small perturbations in submatrices. The results demonstrate that if the submatrices are close in norm, the dominant singular values of the concatenated matrix remain stable, enabling controlled trade-offs between accuracy and compression. These results provide a theoretical basis for improved matrix clustering and compression strategies, with applications in numerical linear algebra, signal processing, and data-driven modeling.
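As a quick, hedged illustration of the stability claim (a minimal numerical sketch, not the paper's technique or proof): by Weyl's inequality for singular values, perturbing one block of a horizontal concatenation [A, B] by E shifts each singular value by at most ||E||_2, since [A, B+E] = [A, B] + [0, E] and ||[0, E]||_2 = ||E||_2. The matrix sizes and perturbation scale below are arbitrary illustrative choices.
```python
# Minimal numerical sketch (illustrative only): check that perturbing one
# submatrix of a concatenation shifts every singular value by at most the
# spectral norm of the perturbation.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 40))          # first block
B = rng.standard_normal((100, 60))          # second block
E = 1e-3 * rng.standard_normal((100, 60))   # small perturbation applied to B

s_orig = np.linalg.svd(np.hstack([A, B]), compute_uv=False)
s_pert = np.linalg.svd(np.hstack([A, B + E]), compute_uv=False)

max_shift = np.max(np.abs(s_pert - s_orig))
bound = np.linalg.norm(E, 2)                # spectral norm of the perturbation
print(f"max singular value shift: {max_shift:.3e} <= ||E||_2 = {bound:.3e}")
assert max_shift <= bound + 1e-10           # Weyl's inequality for singular values
```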
Related papers
- Spectral Estimation with Free Decompression [47.81955761814048]
We introduce a novel method of "free decompression" to estimate the spectrum of very large (impalpable) matrices. Our method can be used to extrapolate from the empirical spectral densities of small submatrices to infer the eigenspectrum of extremely large (impalpable) matrices.
arXiv Detail & Related papers (2025-06-13T17:49:25Z) - Infinitesimal Higher-Order Spectral Variations in Rectangular Real Random Matrices [0.0]
We present a theoretical framework for deriving the general $n$-th order Fréchet derivatives of singular values in real rectangular matrices. We leverage reduced resolvent operators from Kato's analytic perturbation theory for self-adjoint operators. Our framework equips researchers with a practical toolkit for higher-order spectral sensitivity studies in random matrix applications.
arXiv Detail & Related papers (2025-06-04T09:28:35Z) - Absorb and Converge: Provable Convergence Guarantee for Absorbing Discrete Diffusion Models [59.47572583027685]
We provide the first finite-time error bounds and convergence rate analysis for discrete diffusion models using absorbing rate matrices. We establish the first convergence guarantees for both the $\tau$-leaping and uniformization samplers under absorbing rate matrices. Under suitable assumptions, we provide convergence guarantees without early stopping.
arXiv Detail & Related papers (2025-06-02T23:14:35Z) - Cramer-Rao Bounds for Laplacian Matrix Estimation [56.1214184671173]
We derive closed-form matrix expressions for the Cramer-Rao Bound (CRB) specifically tailored to Laplacian matrix estimation. We demonstrate the use of CRBs in three representative applications: (i) topology identification in power systems, (ii) graph filter identification in diffused models, and (iii) precision matrix estimation in Gaussian Markov random fields under Laplacian constraints.
arXiv Detail & Related papers (2025-04-06T18:28:31Z) - Robust spectral clustering with rank statistics [0.3823356975862007]
We consider eigenvector-based clustering applied to a matrix of nonparametric rank statistics that is derived entrywise from the raw, original data matrix. Our main theoretical contributions are threefold and hold under flexible data generating conditions. For a dataset of human connectomes, our approach yields parsimonious dimensionality reduction and improved recovery of ground-truth neuroanatomical cluster structure.
arXiv Detail & Related papers (2024-08-19T16:33:44Z) - Entrywise error bounds for low-rank approximations of kernel matrices [55.524284152242096]
We derive entrywise error bounds for low-rank approximations of kernel matrices obtained using the truncated eigen-decomposition.
A key technical innovation is a delocalisation result for the eigenvectors of the kernel matrix corresponding to small eigenvalues.
We validate our theory with an empirical study of a collection of synthetic and real-world datasets.
arXiv Detail & Related papers (2024-05-23T12:26:25Z) - Implicit Regularization of Gradient Flow on One-Layer Softmax Attention [10.060496091806694]
We study gradient flow on the exponential loss for a classification problem with a one-layer softmax attention model.
Under a separability assumption on the data, we show that when gradient flow achieves the minimal loss value, it further implicitly minimizes the nuclear norm of the product of the key and query weight matrices.
arXiv Detail & Related papers (2024-03-13T17:02:27Z) - On confidence intervals for precision matrices and the eigendecomposition of covariance matrices [20.20416580970697]
This paper tackles the challenge of computing confidence bounds on the individual entries of eigenvectors of a covariance matrix of fixed dimension.
We derive a method to bound the entries of the inverse covariance matrix, the so-called precision matrix.
As an application of these results, we demonstrate a new statistical test, which allows us to test for non-zero values of the precision matrix.
arXiv Detail & Related papers (2022-08-25T10:12:53Z) - Adversarially-Trained Nonnegative Matrix Factorization [77.34726150561087]
We consider an adversarially-trained version of the nonnegative matrix factorization.
In our formulation, an attacker adds an arbitrary matrix of bounded norm to the given data matrix.
We design efficient algorithms inspired by adversarial training to optimize for dictionary and coefficient matrices.
arXiv Detail & Related papers (2021-04-10T13:13:17Z) - Nonconvex Matrix Completion with Linearly Parameterized Factors [10.163102766021373]
Parametric factorization holds for important examples including subspace clustering and matrix completion.
The effectiveness of our unified nonconvex matrix optimization method is also illustrated.
arXiv Detail & Related papers (2020-03-29T22:40:47Z) - Relative Error Bound Analysis for Nuclear Norm Regularized Matrix Completion [101.83262280224729]
We develop a relative error bound for nuclear norm regularized matrix completion.
We derive a relative upper bound for recovering the best low-rank approximation of the unknown matrix.
arXiv Detail & Related papers (2015-04-26T13:12:16Z)
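As a loose, hedged illustration of the setting in the last entry above (nuclear norm regularized matrix completion), the following sketch runs a standard singular value thresholding (proximal gradient) iteration and compares the recovered matrix with the best rank-r approximation of the ground truth, the benchmark used in relative error bounds. It is not the referenced paper's algorithm; the rank, sampling rate, regularization weight, and iteration count are arbitrary illustrative choices.
```python
# Illustrative sketch: nuclear-norm regularized matrix completion via
# proximal gradient with singular value thresholding (not the cited paper's method).
import numpy as np

rng = np.random.default_rng(1)
n, r = 60, 3
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # ground truth, rank r
mask = rng.random((n, n)) < 0.5                                 # observed entries

def svt(Z, tau):
    """Singular value thresholding: proximal operator of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

lam, step = 1.0, 1.0   # step size 1 is valid: the sampling operator is a projection
X = np.zeros((n, n))
for _ in range(300):
    grad = mask * (X - M)                   # gradient of 0.5 * ||P_Omega(X - M)||_F^2
    X = svt(X - step * grad, step * lam)

# Compare against the best rank-r approximation of M.
U, s, Vt = np.linalg.svd(M, full_matrices=False)
M_r = U[:, :r] @ np.diag(s[:r]) @ Vt[:r]
rel_err = np.linalg.norm(X - M_r, "fro") / np.linalg.norm(M_r, "fro")
print(f"relative error vs. best rank-{r} approximation: {rel_err:.3f}")
```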
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.