Lipschitz Bounds for Persistent Laplacian Eigenvalues under One-Simplex Insertions
- URL: http://arxiv.org/abs/2506.21352v1
- Date: Thu, 26 Jun 2025 15:03:54 GMT
- Title: Lipschitz Bounds for Persistent Laplacian Eigenvalues under One-Simplex Insertions
- Authors: Le Vu Anh, Mehmet Dik, Nguyen Viet Anh
- Abstract summary: We prove a uniform Lipschitz bound for Persistent Laplacians. We deliver the first eigenvalue-level guarantee for spectral topological data analysis.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Persistent Laplacians are matrix operators that track how the shape and structure of data transform across scales and are popularly adopted in biology, physics, and machine learning. Their eigenvalues are concise descriptors of geometric and topological features in a filtration. Although earlier work established global algebraic stability for these operators, the precise change in a single eigenvalue when one simplex, such as a vertex, edge, or triangle, is added has remained unknown. This is important because downstream tools, including heat-kernel signatures and spectral neural networks, depend directly on these eigenvalues. We close this gap by proving a uniform Lipschitz bound: after inserting one simplex, every up-persistent Laplacian eigenvalue can vary by at most twice the Euclidean norm of that simplex's boundary, independent of filtration scale and complex size. This result delivers the first eigenvalue-level robustness guarantee for spectral topological data analysis. It guarantees that spectral features remain stable under local updates and enables reliable error control in dynamic data settings.
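The abstract's bound can be probed numerically in a minimal toy setting (an illustrative sketch only, not the paper's general construction for filtered complexes): for the 0-dimensional up-Laplacian of a small graph, inserting one edge σ should shift each sorted eigenvalue by at most 2‖∂σ‖.

```python
import numpy as np

def up_laplacian(n_vertices, edges):
    """L_0^up = B1 @ B1.T, where B1 is the vertex-edge boundary matrix."""
    B = np.zeros((n_vertices, len(edges)))
    for j, (u, v) in enumerate(edges):
        B[u, j] = -1.0  # oriented edge leaves u ...
        B[v, j] = 1.0   # ... and enters v
    return B @ B.T

edges = [(0, 1), (1, 2), (2, 3)]       # path graph on 4 vertices
L_old = up_laplacian(4, edges)

new_edge = (0, 3)                      # the inserted one-simplex sigma
L_new = up_laplacian(4, edges + [new_edge])

# Boundary vector of sigma and the paper's claimed Lipschitz bound 2*||d(sigma)||
boundary = np.zeros(4)
boundary[new_edge[0]], boundary[new_edge[1]] = -1.0, 1.0
bound = 2 * np.linalg.norm(boundary)   # = 2 * sqrt(2)

# eigvalsh returns eigenvalues in ascending order, so entries align
shift = np.abs(np.linalg.eigvalsh(L_new) - np.linalg.eigvalsh(L_old))
print(shift.max(), "<=", bound)
```

On this example the path graph becomes a 4-cycle, the largest eigenvalue shift is √2 ≈ 1.414, and the bound 2√2 ≈ 2.828 holds with room to spare; the insertion is the rank-one update L_new = L_old + ∂σ∂σᵀ, so Weyl's inequality already caps the shift at ‖∂σ‖².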
Related papers
- Towards Spectral Convergence of Locally Linear Embedding on Manifolds with Boundary [0.0]
We study the eigenvalues and eigenfunctions of a differential operator that governs the behavior of the unsupervised learning algorithm known as Locally Linear Embedding.
We show that a natural regularity condition on the eigenfunctions imposes a consistent boundary condition and use the Frobenius method to estimate pointwise behavior.
arXiv Detail & Related papers (2025-01-16T14:45:53Z)
- Towards Stable, Globally Expressive Graph Representations with Laplacian Eigenvectors [29.055130767451036]
We propose a novel method exploiting Laplacian eigenvectors to generate stable and globally expressive graph representations.
Our method deals with numerically close eigenvalues in a smooth fashion, ensuring its better robustness against perturbations.
arXiv Detail & Related papers (2024-10-13T06:02:25Z)
- Entrywise error bounds for low-rank approximations of kernel matrices [55.524284152242096]
We derive entrywise error bounds for low-rank approximations of kernel matrices obtained using the truncated eigen-decomposition.
A key technical innovation is a delocalisation result for the eigenvectors of the kernel matrix corresponding to small eigenvalues.
We validate our theory with an empirical study of a collection of synthetic and real-world datasets.
arXiv Detail & Related papers (2024-05-23T12:26:25Z)
- Improving Expressive Power of Spectral Graph Neural Networks with Eigenvalue Correction [55.57072563835959]
We propose an eigenvalue correction strategy that can free filters from the constraints of repeated eigenvalue inputs.
Concretely, the proposed eigenvalue correction strategy enhances the uniform distribution of eigenvalues, and improves the fitting capacity and expressive power of filters.
arXiv Detail & Related papers (2024-01-28T08:12:00Z)
- On the Stability of Expressive Positional Encodings for Graphs [46.967035678550594]
Using Laplacian eigenvectors as positional encodings faces two fundamental challenges.
We introduce Stable and Expressive Positional Encodings (SPE).
SPE is the first architecture that is (1) provably stable, and (2) universally expressive for basis invariant functions.
arXiv Detail & Related papers (2023-10-04T04:48:55Z)
- Efficient Bound of Lipschitz Constant for Convolutional Layers by Gram Iteration [122.51142131506639]
We introduce a precise, fast, and differentiable upper bound for the spectral norm of convolutional layers using circulant matrix theory.
We show through a comprehensive set of experiments that our approach outperforms other state-of-the-art methods in terms of precision, computational cost, and scalability.
It proves highly effective for the Lipschitz regularization of convolutional neural networks, with competitive results against concurrent approaches.
arXiv Detail & Related papers (2023-05-25T15:32:21Z)
- Does the Data Induce Capacity Control in Deep Learning? [0.0]
This paper studies how the dataset may be the cause of the anomalous generalization performance of deep networks.
We show that the data correlation matrix of typical classification datasets has an eigenspectrum where, after a sharp initial drop, a large number of small eigenvalues are distributed uniformly over an exponentially large range.
arXiv Detail & Related papers (2021-10-27T04:40:27Z)
- On the training of sparse and dense deep neural networks: less parameters, same performance [0.0]
We propose a variant of the spectral learning method that appeared in Giambagli et al., Nat. Commun. 2021.
The eigenvalues act as veritable knobs which can be freely tuned to enhance, or alternatively silence, the contribution of the input nodes.
Each spectral parameter reflects back on the whole set of inter-node weights, an attribute which we effectively exploit to yield sparse networks with stunning classification abilities.
arXiv Detail & Related papers (2021-06-17T14:54:23Z)
- Self-training Avoids Using Spurious Features Under Domain Shift [54.794607791641745]
In unsupervised domain adaptation, conditional entropy minimization and pseudo-labeling work even when the domain shifts are much larger than those analyzed by existing theory.
We identify and analyze one particular setting where the domain shift can be large, but certain spurious features correlate with the label in the source domain yet are independent of the label in the target domain.
arXiv Detail & Related papers (2020-06-17T17:51:42Z)
- A Random Matrix Analysis of Random Fourier Features: Beyond the Gaussian Kernel, a Precise Phase Transition, and the Corresponding Double Descent [85.77233010209368]
This article characterizes the exact asymptotics of random Fourier feature (RFF) regression in the realistic setting where the number of data samples $n$, their dimension $p$, and the feature dimension $N$ are all large and comparable.
This analysis also provides accurate estimates of training and test regression errors for large $n,p,N$.
arXiv Detail & Related papers (2020-06-09T02:05:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.