On the Upper Bounds for the Matrix Spectral Norm
- URL: http://arxiv.org/abs/2506.15660v1
- Date: Wed, 18 Jun 2025 17:39:03 GMT
- Title: On the Upper Bounds for the Matrix Spectral Norm
- Authors: Alexey Naumov, Maxim Rakhuba, Denis Ryapolov, Sergey Samsonov
- Abstract summary: We propose a new Counterbalance estimator that provides upper bounds on the norm and derive probabilistic guarantees on its underestimation. Compared to standard approaches such as the power method, the proposed estimator produces significantly tighter upper bounds in both synthetic and real-world settings.
- Score: 4.773232329447336
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We consider the problem of estimating the spectral norm of a matrix using only matrix-vector products. We propose a new Counterbalance estimator that provides upper bounds on the norm and derive probabilistic guarantees on its underestimation. Compared to standard approaches such as the power method, the proposed estimator produces significantly tighter upper bounds in both synthetic and real-world settings. Our method is especially effective for matrices with fast-decaying spectra, such as those arising in deep learning and inverse problems.
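As a minimal sketch of the power-method baseline the abstract compares against (not the proposed Counterbalance estimator; all names and parameters here are illustrative), the spectral norm can be approached from below using only matrix-vector products:

```python
import numpy as np

def power_method_norm(matvec, rmatvec, n, iters=50, seed=0):
    """Estimate ||A||_2 from matrix-vector products only, via power
    iteration on A^T A.  The estimate converges to the spectral norm
    from below, i.e. it is a lower bound, which is why upper-bound
    estimators are of separate interest."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(n)
    v /= np.linalg.norm(v)
    for _ in range(iters):
        w = rmatvec(matvec(v))      # one product with A, one with A^T
        v = w / np.linalg.norm(w)
    return np.linalg.norm(matvec(v))

# Matrix with a fast-decaying spectrum, the regime the abstract targets.
A = np.diag(2.0 ** -np.arange(100))
est = power_method_norm(lambda x: A @ x, lambda x: A.T @ x, 100)
```

With a large spectral gap, as here, the iteration converges quickly; for matrices with clustered top singular values many more iterations (or a different estimator) are needed.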
Related papers
- Spectral Estimation with Free Decompression [47.81955761814048]
We introduce a novel method of "free decompression" to estimate the spectrum of very large (impalpable) matrices.
Our method can be used to extrapolate from the empirical spectral densities of small submatrices to infer the eigenspectrum of extremely large (impalpable) matrices.
arXiv Detail & Related papers (2025-06-13T17:49:25Z)
- Higher-Order Singular-Value Derivatives of Rectangular Real Matrices [0.0]
We present a theoretical framework for deriving the general $n$-th order Fréchet derivatives of singular values of real rectangular matrices.
We leverage reduced resolvent operators from Kato's analytic perturbation theory for self-adjoint operators.
Our framework equips researchers with a practical toolkit for higher-order spectral sensitivity studies in random matrix applications.
arXiv Detail & Related papers (2025-06-04T09:28:35Z)
- Guarantees of a Preconditioned Subgradient Algorithm for Overparameterized Asymmetric Low-rank Matrix Recovery [8.722715843502321]
We provide, for the first time, linear convergence rates independent of the rank of the sought asymmetric matrix in the presence of gross corruptions.
By applying our approach to (robust) matrix sensing, we highlight its merits when the measurement operator satisfies a mixed-norm restricted isometry property.
arXiv Detail & Related papers (2024-10-22T08:58:44Z)
- Entrywise error bounds for low-rank approximations of kernel matrices [55.524284152242096]
We derive entrywise error bounds for low-rank approximations of kernel matrices obtained using the truncated eigen-decomposition.
A key technical innovation is a delocalisation result for the eigenvectors of the kernel matrix corresponding to small eigenvalues.
We validate our theory with an empirical study of a collection of synthetic and real-world datasets.
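The setting of these bounds can be sketched concretely: a rank-$r$ approximation of a symmetric kernel matrix from its truncated eigendecomposition, with the entrywise (max-abs) error the theory controls. This is an illustrative reconstruction of the setup, not the paper's code; the kernel and bandwidth are chosen here for demonstration.

```python
import numpy as np

def truncated_eig_approx(K, r):
    """Rank-r approximation of a symmetric kernel matrix K obtained by
    keeping the r largest-eigenvalue terms of its eigendecomposition."""
    vals, vecs = np.linalg.eigh(K)        # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:r]      # indices of the r largest
    return (vecs[:, idx] * vals[idx]) @ vecs[:, idx].T

# Gaussian kernel on random points: eigenvalues decay quickly, so a
# small rank already gives a small entrywise error.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 2))
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
K = np.exp(-sq_dists / 8.0)
K10 = truncated_eig_approx(K, 10)
entrywise_err = np.abs(K - K10).max()
```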
arXiv Detail & Related papers (2024-05-23T12:26:25Z)
- Generalizing and Improving Jacobian and Hessian Regularization [1.926971915834451]
We generalize previous efforts by extending the target matrix from zero to any matrix that admits efficient matrix-vector products.
The proposed paradigm allows us to construct novel regularization terms that enforce symmetry or diagonality on square Jacobian and Hessian matrices.
We introduce Lanczos-based spectral norm minimization to tackle this difficulty.
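As one hedged illustration of the Lanczos-based ingredient (assuming SciPy; the paper's own implementation may differ, and the Jacobian here is a random stand-in), the spectral norm of a matrix available only through products can be computed by a Lanczos-type iteration:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, svds

# Stand-in for a network Jacobian that is only accessible through
# J @ v and J.T @ u products, never formed entry by entry.
rng = np.random.default_rng(1)
J = rng.standard_normal((40, 30))
op = LinearOperator(
    shape=(40, 30),
    matvec=lambda v: J @ v,     # J v
    rmatvec=lambda u: J.T @ u,  # J^T u
)

# svds runs a Lanczos-type (ARPACK) iteration; with k=1 it returns the
# largest singular value, which equals the spectral norm ||J||_2.
_, s, _ = svds(op, k=1)
spectral_norm = float(s[0])
```

In a regularization setting this quantity (and its subgradient via the top singular vectors) would be recomputed per training step, which is why a matrix-free iteration matters.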
arXiv Detail & Related papers (2022-12-01T07:01:59Z)
- Trimmed Sampling Algorithm for the Noisy Generalized Eigenvalue Problem [0.0]
Solving the generalized eigenvalue problem is a useful method for finding energy eigenstates of large quantum systems.
The problem is especially severe when the matrix elements are evaluated numerically and carry significant error bars.
We introduce the trimmed sampling algorithm in order to solve this problem.
arXiv Detail & Related papers (2022-09-05T18:10:12Z)
- Adversarially-Trained Nonnegative Matrix Factorization [77.34726150561087]
We consider an adversarially-trained version of the nonnegative matrix factorization.
In our formulation, an attacker adds an arbitrary matrix of bounded norm to the given data matrix.
We design efficient algorithms inspired by adversarial training to optimize for dictionary and coefficient matrices.
arXiv Detail & Related papers (2021-04-10T13:13:17Z)
- A Scalable, Adaptive and Sound Nonconvex Regularizer for Low-rank Matrix Completion [60.52730146391456]
We propose a new scalable low-rank regularizer called the "nuclear Frobenius norm" regularizer, which is adaptive and sound.
It bypasses the computation of singular values and allows fast optimization.
It obtains state-of-the-art recovery performance while being the fastest among existing matrix-learning methods.
arXiv Detail & Related papers (2020-08-14T18:47:58Z)
- Understanding Implicit Regularization in Over-Parameterized Single Index Model [55.41685740015095]
We design regularization-free algorithms for the high-dimensional single index model.
We provide theoretical guarantees for the induced implicit regularization phenomenon.
arXiv Detail & Related papers (2020-07-16T13:27:47Z)
- Relative Error Bound Analysis for Nuclear Norm Regularized Matrix Completion [101.83262280224729]
We develop a relative error bound for nuclear norm regularized matrix completion.
We derive a relative upper bound for recovering the best low-rank approximation of the unknown matrix.
arXiv Detail & Related papers (2015-04-26T13:12:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.