Improving Expressive Power of Spectral Graph Neural Networks with Eigenvalue Correction
- URL: http://arxiv.org/abs/2401.15603v3
- Date: Sun, 23 Feb 2025 14:18:49 GMT
- Title: Improving Expressive Power of Spectral Graph Neural Networks with Eigenvalue Correction
- Authors: Kangkang Lu, Yanhua Yu, Hao Fei, Xuan Li, Zixuan Yang, Zirui Guo, Meiyu Liang, Mengran Yin, Tat-Seng Chua
- Abstract summary: We propose an eigenvalue correction strategy that can free polynomial filters from the constraints of repeated eigenvalue inputs. Concretely, the proposed strategy makes the distribution of eigenvalues more uniform, which mitigates repeated eigenvalues and improves the fitting capacity and expressive power of the filters.
- Score: 55.57072563835959
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, spectral graph neural networks, characterized by polynomial filters, have garnered increasing attention and have achieved remarkable performance in tasks such as node classification. These models typically assume that the eigenvalues of the normalized Laplacian matrix are distinct from each other, and thus expect a polynomial filter to have high fitting ability. However, this paper empirically observes that normalized Laplacian matrices frequently possess repeated eigenvalues. Moreover, we theoretically establish that the number of distinguishable eigenvalues plays a pivotal role in determining the expressive power of spectral graph neural networks. In light of this observation, we propose an eigenvalue correction strategy that can free polynomial filters from the constraints of repeated eigenvalue inputs. Concretely, the proposed eigenvalue correction strategy makes the distribution of eigenvalues more uniform, thus mitigating repeated eigenvalues and improving the fitting capacity and expressive power of polynomial filters. Extensive experimental results on both synthetic and real-world datasets demonstrate the superiority of our method. The code is available at: https://github.com/Lukangkang123/EC-GNN
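The repository above holds the authors' implementation. As a hedged illustration of the strategy described here, the sketch below assumes the correction is a convex combination of the sorted Laplacian eigenvalues with an equispaced grid on [0, 2]; the mixing weight `beta` and the exact grid are assumptions of this sketch, not necessarily the paper's formula.

```python
import numpy as np

def corrected_eigenvalues(L_norm: np.ndarray, beta: float = 0.5):
    """Sketch of an eigenvalue correction for a normalized Laplacian.

    Mixes the sorted eigenvalues with equispaced values on [0, 2] so that
    repeated eigenvalues become distinguishable to a polynomial filter.
    The convex-combination form and `beta` are illustrative assumptions.
    """
    lam, U = np.linalg.eigh(L_norm)            # eigenvalues, ascending
    n = lam.shape[0]
    equispaced = 2.0 * np.arange(n) / (n - 1)  # uniform grid on [0, 2]
    return beta * lam + (1.0 - beta) * equispaced, U

# Toy usage: a 4-cycle, whose normalized Laplacian has eigenvalue 1 twice.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
d = A.sum(axis=1)
L = np.eye(4) - A / np.sqrt(np.outer(d, d))    # symmetric normalized Laplacian
lam_c, U = corrected_eigenvalues(L)
print(np.round(np.linalg.eigvalsh(L), 3))      # [0. 1. 1. 2.] -> repeated 1
print(np.round(lam_c, 3))                      # all distinct after correction
```

A polynomial filter g would then act on the corrected spectrum while keeping the original eigenvectors, i.e. the layer applies U @ diag(g(lam_c)) @ U.T to node features.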
Related papers
- Graph Fourier Transformer with Structure-Frequency Information [2.7852431537059426]
This paper proposes Grafourierformer, which combines the Graph Transformer (GT) with an inductive bias that carries joint frequency-structure information.
Experiments on various benchmarks show Grafourierformer consistently outperforms GNN and GT-based models in graph classification and node classification tasks.
arXiv Detail & Related papers (2025-04-28T12:38:02Z)
- Graph-theoretical approach to the eigenvalue spectrum of perturbed higher-order exceptional points [0.0]
We advocate a graph-theoretical perspective that contributes to the understanding of perturbative effects on the eigenvalue spectrum of higher-order exceptional points.
We consider an illustrative example, a system of microrings coupled by a semi-infinite waveguide with an end mirror.
arXiv Detail & Related papers (2024-09-20T11:56:15Z)
- GrassNet: State Space Model Meets Graph Neural Network [57.62885438406724]
Graph State Space Network (GrassNet) is a novel, theoretically grounded graph neural network that provides a simple yet effective scheme for designing arbitrary graph spectral filters.
To the best of our knowledge, this work is the first to employ state space models (SSMs) in the design of graph spectral filters for GNNs.
Extensive experiments on nine public benchmarks reveal that GrassNet achieves superior performance in real-world graph modeling tasks.
arXiv Detail & Related papers (2024-08-16T07:33:58Z)
- High-Dimensional Kernel Methods under Covariate Shift: Data-Dependent Implicit Regularization [83.06112052443233]
This paper studies kernel ridge regression in high dimensions under covariate shifts.
By a bias-variance decomposition, we theoretically demonstrate that the re-weighting strategy allows for decreasing the variance.
For bias, we analyze regularization at an arbitrary or a well-chosen scale, showing that the bias can behave very differently across regularization scales.
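For concreteness, a standard form of the re-weighting strategy for kernel ridge regression under covariate shift, in generic notation that is not necessarily the paper's:

```latex
\[
\hat f_w \;=\; \arg\min_{f \in \mathcal{H}}\;
  \frac{1}{n}\sum_{i=1}^{n} w(x_i)\,\bigl(y_i - f(x_i)\bigr)^2
  \;+\; \lambda \lVert f \rVert_{\mathcal{H}}^2,
\qquad
w(x) \;=\; \frac{p_{\mathrm{test}}(x)}{p_{\mathrm{train}}(x)},
\]
```

where $w$ is the importance ratio between the test and training covariate densities; the bias-variance decomposition mentioned above is applied to $\hat f_w$.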
arXiv Detail & Related papers (2024-06-05T12:03:27Z)
- Entrywise error bounds for low-rank approximations of kernel matrices [55.524284152242096]
We derive entrywise error bounds for low-rank approximations of kernel matrices obtained using the truncated eigen-decomposition.
A key technical innovation is a delocalisation result for the eigenvectors of the kernel matrix corresponding to small eigenvalues.
We validate our theory with an empirical study of a collection of synthetic and real-world datasets.
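A minimal numerical companion to this statement (the RBF kernel and rank are arbitrary choices for illustration):

```python
import numpy as np

# Rank-r approximation of a kernel matrix via truncated eigendecomposition,
# and the entrywise error that the paper's bounds control.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq / 2.0)                        # RBF kernel matrix

lam, U = np.linalg.eigh(K)                   # eigenvalues in ascending order
r = 20
K_r = (U[:, -r:] * lam[-r:]) @ U[:, -r:].T   # keep the r largest eigenpairs
print("max entrywise error:", np.abs(K - K_r).max())
```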
arXiv Detail & Related papers (2024-05-23T12:26:25Z)
- How Universal Polynomial Bases Enhance Spectral Graph Neural Networks: Heterophily, Over-smoothing, and Over-squashing [24.857304431611464]
Spectral Graph Neural Networks (GNNs) have gained increasing prevalence for heterophily graphs.
In an attempt to avert prohibitive computations, numerous polynomial filters have been proposed.
We demystify the correlation between the spectral properties of desired vectors and the heterophily degree.
We develop a novel adaptive heterophily basis wherein the basis vectors mutually form angles reflecting the heterophily degree of the graph.
arXiv Detail & Related papers (2024-05-21T03:28:45Z)
- Nonlinear spiked covariance matrices and signal propagation in deep neural networks [22.84097371842279]
We study the eigenvalue spectrum of the Conjugate Kernel defined by a nonlinear feature map of a feedforward neural network.
In this work, we characterize these signal eigenvalues and eigenvectors for a nonlinear version of the spiked covariance model.
We also study a simple regime of representation learning where the weight matrix develops a rank-one signal component over training.
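For reference, one common normalization of the Conjugate Kernel for a single-layer feature map; the paper works in a more general spiked-covariance, multi-layer setting, and conventions vary:

```latex
\[
K_{\mathrm{CK}} \;=\; \tfrac{1}{m}\,\sigma(XW)\,\sigma(XW)^{\top},
\qquad
X \in \mathbb{R}^{n \times d},\;
W \in \mathbb{R}^{d \times m},\;
W_{ij} \overset{\text{iid}}{\sim} \mathcal{N}(0,\, 1/d),
\]
```

where $\sigma$ is applied entrywise; the signal eigenvalues are the outliers that $K_{\mathrm{CK}}$ inherits from spikes in the covariance of $X$.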
arXiv Detail & Related papers (2024-02-15T17:31:19Z)
- Using Variational Eigensolvers on Low-End Hardware to Find the Ground State Energy of Simple Molecules [0.0]
Key properties of physical systems can be described by the eigenvalues of matrices that represent the system.
Computational algorithms that determine the eigenvalues of these matrices exist, but they generally suffer from a loss of performance as the matrix grows in size.
This process can be extended to quantum computation, which can find these eigenvalues with better performance than classical algorithms.
arXiv Detail & Related papers (2023-10-29T18:36:18Z)
- Eigenvalue sensitivity from eigenstate geometry near and beyond arbitrary-order exceptional points [0.0]
Systems with an effectively non-Hermitian Hamiltonian display an enhanced sensitivity to parametric and dynamic perturbations.
This sensitivity can be quantified by the phase rigidity, which mathematically corresponds to the eigenvalue condition number.
I derive an exact nonperturbative expression for this sensitivity measure that applies to arbitrary eigenvalue configurations.
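For context, the standard relation between the two quantities, with right and left eigenvectors $H R_i = \lambda_i R_i$ and $L_i^{\dagger} H = \lambda_i L_i^{\dagger}$ (notation generic, not the paper's):

```latex
\[
\kappa(\lambda_i) \;=\; \frac{\lVert L_i \rVert\,\lVert R_i \rVert}{\lvert L_i^{\dagger} R_i \rvert} \;\ge\; 1,
\qquad
\lvert r_i \rvert \;=\; \kappa(\lambda_i)^{-1},
\]
```

so a perturbation $H + \epsilon V$ shifts $\lambda_i$ by at most roughly $\epsilon\,\kappa(\lambda_i)\,\lVert V \rVert$ to first order, and $\kappa(\lambda_i) \to \infty$ (phase rigidity $\to 0$) as an exceptional point is approached.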
arXiv Detail & Related papers (2023-07-12T16:36:39Z)
- The Inductive Bias of Flatness Regularization for Deep Matrix Factorization [58.851514333119255]
This work takes the first step toward understanding the inductive bias of the minimum trace of the Hessian solutions in deep linear networks.
We show that for all depths greater than one, with the standard Restricted Isometry Property (RIP) on the measurements, minimizing the trace of the Hessian is approximately equivalent to minimizing the Schatten 1-norm of the corresponding end-to-end matrix parameters.
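A hedged rendering of the stated equivalence for a depth-$k$ linear network with end-to-end matrix $W_E = W_k \cdots W_1$, loss $\mathcal{L}$, and measurement operator $\mathcal{A}$ (notation generic):

```latex
\[
\min_{W_1, \dots, W_k \,:\, \mathcal{A}(W_k \cdots W_1) = y} \operatorname{tr}\!\bigl(\nabla^2 \mathcal{L}\bigr)
\;\approx\;
\min_{W_E \,:\, \mathcal{A}(W_E) = y} \lVert W_E \rVert_{S_1},
\]
```

where $\lVert \cdot \rVert_{S_1}$ is the Schatten 1-norm (nuclear norm), whose minimization is the classical convex surrogate for low rank.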
arXiv Detail & Related papers (2023-06-22T23:14:57Z)
- Specformer: Spectral Graph Neural Networks Meet Transformers [51.644312964537356]
Spectral graph neural networks (GNNs) learn graph representations via spectral-domain graph convolutions.
We introduce Specformer, which effectively encodes the set of all eigenvalues and performs self-attention in the spectral domain.
By stacking multiple Specformer layers, one can build a powerful spectral GNN.
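A minimal sketch of the eigenvalue-attention idea, as a toy illustration rather than the authors' Specformer architecture (layer sizes and the scalar encoding are assumptions):

```python
import torch
import torch.nn as nn

class EigenvalueAttention(nn.Module):
    """Embed each eigenvalue, let the eigenvalues attend to one another,
    and decode new filter responses (a set-to-set spectral filter)."""

    def __init__(self, d_model: int = 32, n_heads: int = 4):
        super().__init__()
        self.embed = nn.Linear(1, d_model)   # scalar eigenvalue -> vector
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.decode = nn.Linear(d_model, 1)  # vector -> filtered eigenvalue

    def forward(self, lam: torch.Tensor) -> torch.Tensor:
        h = self.embed(lam.unsqueeze(-1)).unsqueeze(0)  # (1, n, d_model)
        h, _ = self.attn(h, h, h)                       # eigenvalues interact
        return self.decode(h).squeeze(0).squeeze(-1)    # (n,) new responses

lam = torch.linspace(0.0, 2.0, steps=8)   # a Laplacian spectrum in [0, 2]
new_lam = EigenvalueAttention()(lam)
# A spectral layer would then apply U @ diag(new_lam) @ U.T to node features.
```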
arXiv Detail & Related papers (2023-03-02T07:36:23Z)
- Graph Polynomial Convolution Models for Node Classification of Non-Homophilous Graphs [52.52570805621925]
We investigate efficient learning from higher-order graph convolutions and learning directly from the adjacency matrix for node classification.
We show that the resulting model leads to new graphs and a residual scaling parameter.
We demonstrate that the proposed methods obtain improved accuracy for node classification on non-homophilous graphs.
arXiv Detail & Related papers (2022-09-12T04:46:55Z)
- Stochastic Parallelizable Eigengap Dilation for Large Graph Clustering [12.544602297450533]
We aim to identify clusters of nodes where most edges fall within clusters and only a few edges fall between clusters.
A core step of spectral clustering is performing an eigendecomposition of the corresponding graph Laplacian matrix.
This paper introduces a parallelizable approach to dilating the spectrum in order to accelerate SVD solvers and in turn, spectral clustering.
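The core step mentioned here, on a toy graph (a sign split stands in for k-means, and the paper's contribution, dilating the eigengaps, is omitted):

```python
import numpy as np

# Two triangles joined by one bridge edge; spectral clustering recovers them.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
A = np.zeros((6, 6))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
L = np.diag(A.sum(1)) - A            # combinatorial graph Laplacian
lam, U = np.linalg.eigh(L)           # the expensive step the paper speeds up
print(np.round(lam, 3))              # spectrum, small eigenvalues first
labels = (U[:, 1] > 0).astype(int)   # sign of the Fiedler vector
print(labels)                        # splits the triangles (up to label swap)
```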
arXiv Detail & Related papers (2022-07-29T10:13:07Z)
- Stability to Deformations of Manifold Filters and Manifold Neural Networks [89.53585099149973]
The paper defines and studies manifold (M) convolutional filters and neural networks (NNs).
The main technical contribution of the paper is to analyze the stability of manifold filters and MNNs to smooth deformations of the manifold.
arXiv Detail & Related papers (2021-06-07T15:41:03Z)
- Spectral Evolution with Approximated Eigenvalue Trajectories for Link Prediction [0.19116784879310028]
This paper presents a method to compute an approximation of the spectral evolution of eigenvalues based on the Rayleigh quotient.
It then proposes an algorithm to estimate the evolution of eigenvalues by extrapolating only a fraction of their approximated values.
The proposed model is used to characterize mention networks of users who posted tweets that include the most popular political hashtags in Colombia.
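A minimal sketch of the Rayleigh-quotient idea for a single eigenvalue; the paper's algorithm extrapolates trajectories for many eigenvalues over longer horizons:

```python
import numpy as np

# Reuse the previous snapshot's eigenvector v to estimate the evolving
# adjacency matrix's eigenvalue, instead of re-running a full eigensolver.
rng = np.random.default_rng(1)
M = rng.random((50, 50))
A0 = ((M + M.T) > 1.2).astype(float)        # symmetric graph at time t
v = np.linalg.eigh(A0)[1][:, -1]            # leading eigenvector at time t

A1 = A0.copy()
A1[3, 7] = A1[7, 3] = 1.0                   # a new edge appears at time t+1
rayleigh = v @ A1 @ v / (v @ v)             # cheap first-order estimate
exact = np.linalg.eigvalsh(A1)[-1]
print(f"Rayleigh estimate {rayleigh:.4f} vs. exact {exact:.4f}")
```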
arXiv Detail & Related papers (2020-06-22T22:42:50Z)