Improving Expressive Power of Spectral Graph Neural Networks with Eigenvalue Correction
- URL: http://arxiv.org/abs/2401.15603v2
- Date: Mon, 18 Mar 2024 09:00:41 GMT
- Title: Improving Expressive Power of Spectral Graph Neural Networks with Eigenvalue Correction
- Authors: Kangkang Lu, Yanhua Yu, Hao Fei, Xuan Li, Zixuan Yang, Zirui Guo, Meiyu Liang, Mengran Yin, Tat-Seng Chua
- Abstract summary: Spectral graph neural networks are characterized by polynomial filters.
We propose an eigenvalue correction strategy that frees polynomial filters from the constraints of repeated eigenvalue inputs.
- Score: 55.57072563835959
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, spectral graph neural networks, characterized by polynomial filters, have garnered increasing attention and have achieved remarkable performance in tasks such as node classification. These models typically assume that the eigenvalues of the normalized Laplacian matrix are distinct from each other, thus expecting a polynomial filter to have a high fitting ability. However, this paper empirically observes that normalized Laplacian matrices frequently possess repeated eigenvalues. Moreover, we theoretically establish that the number of distinguishable eigenvalues plays a pivotal role in determining the expressive power of spectral graph neural networks. In light of this observation, we propose an eigenvalue correction strategy that can free polynomial filters from the constraints of repeated eigenvalue inputs. Concretely, the proposed eigenvalue correction strategy makes the eigenvalues more uniformly distributed, thereby mitigating repeated eigenvalues and improving the fitting capacity and expressive power of polynomial filters. Extensive experimental results on both synthetic and real-world datasets demonstrate the superiority of our method.
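To make the mechanism concrete, below is a minimal NumPy sketch of an eigenvalue correction of this kind: it eigen-decomposes the symmetrically normalized Laplacian, reports how many eigenvalues are distinct, and interpolates the sorted spectrum with an equispaced grid on [0, 2] so that repeated eigenvalues are pulled apart. The mixing weight `beta` and the linear interpolation rule are illustrative assumptions, not the paper's released code or exact formulation.

```python
import numpy as np

def corrected_eigenvalues(adj, beta=0.5):
    """Eigen-decompose the normalized Laplacian and separate repeated eigenvalues.

    The equispaced grid on [0, 2] and the mixing weight `beta` are
    illustrative assumptions, not necessarily the paper's exact rule.
    """
    n = adj.shape[0]
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.zeros(n)
    d_inv_sqrt[deg > 0] = deg[deg > 0] ** -0.5
    # Symmetrically normalized Laplacian: L = I - D^{-1/2} A D^{-1/2}
    lap = np.eye(n) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    eigvals, eigvecs = np.linalg.eigh(lap)   # ascending order, values lie in [0, 2]
    print("distinct eigenvalues:", np.unique(np.round(eigvals, 6)).size, "of", n)
    uniform = np.linspace(0.0, 2.0, n)        # a perfectly uniform spectrum on [0, 2]
    corrected = beta * eigvals + (1.0 - beta) * uniform  # pull repeated values apart
    return corrected, eigvecs

# Toy usage: the 4-cycle, whose normalized Laplacian has a repeated eigenvalue at 1.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
new_vals, _ = corrected_eigenvalues(A, beta=0.5)
print(np.round(new_vals, 3))
```

On the 4-cycle the original spectrum is {0, 1, 1, 2}; after the correction the two repeated eigenvalues at 1 become distinct, which is the property a polynomial filter needs to assign them different responses.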
Related papers
- Graph-theoretical approach to the eigenvalue spectrum of perturbed higher-order exceptional points [0.0]
We advocate a graph-theoretical perspective that contributes to the understanding of perturbative effects on the eigenvalue spectrum of higher-order exceptional points.
We consider an illustrative example, a system of microrings coupled by a semi-infinite waveguide with an end mirror.
arXiv Detail & Related papers (2024-09-20T11:56:15Z) - High-Dimensional Kernel Methods under Covariate Shift: Data-Dependent Implicit Regularization [83.06112052443233]
This paper studies kernel ridge regression in high dimensions under covariate shifts.
By a bias-variance decomposition, we theoretically demonstrate that the re-weighting strategy allows for decreasing the variance.
For the bias, we analyze regularization at an arbitrary or well-chosen scale, showing that the bias can behave very differently under different regularization scales.
arXiv Detail & Related papers (2024-06-05T12:03:27Z) - Entrywise error bounds for low-rank approximations of kernel matrices [55.524284152242096]
We derive entrywise error bounds for low-rank approximations of kernel matrices obtained using the truncated eigen-decomposition.
A key technical innovation is a delocalisation result for the eigenvectors of the kernel matrix corresponding to small eigenvalues.
We validate our theory with an empirical study of a collection of synthetic and real-world datasets.
arXiv Detail & Related papers (2024-05-23T12:26:25Z) - How Universal Polynomial Bases Enhance Spectral Graph Neural Networks: Heterophily, Over-smoothing, and Over-squashing [24.857304431611464]
Spectral Graph Neural Networks (GNNs) have gained increasing prevalence for heterophilic graphs.
In an attempt to avert prohibitive computations, numerous filters have been proposed.
We demystify the correlation between the spectral property of desired vectors and the heterophily degrees.
We develop a novel adaptive heterophily basis whose basis vectors mutually form angles reflecting the heterophily degree of the graph.
arXiv Detail & Related papers (2024-05-21T03:28:45Z) - Nonlinear spiked covariance matrices and signal propagation in deep neural networks [22.84097371842279]
We study the eigenvalue spectrum of the Conjugate Kernel defined by a nonlinear feature map of a feedforward neural network.
In this work, we characterize these signal eigenvalues and eigenvectors for a nonlinear version of the spiked covariance model.
We also study a simple regime of representation learning where the weight matrix develops a rank-one signal component over training.
arXiv Detail & Related papers (2024-02-15T17:31:19Z) - Using Variational Eigensolvers on Low-End Hardware to Find the Ground State Energy of Simple Molecules [0.0]
Key properties of physical systems can be described by the eigenvalues of matrices that represent the system.
Computational algorithms that determine the eigenvalues of these matrices exist, but they generally suffer from a loss of performance as the matrix grows in size.
This process can be extended to quantum computation to find the eigenvalues with better performance than classical algorithms.
arXiv Detail & Related papers (2023-10-29T18:36:18Z) - Eigenvalue sensitivity from eigenstate geometry near and beyond arbitrary-order exceptional points [0.0]
Systems with an effectively non-Hermitian Hamiltonian display an enhanced sensitivity to parametric and dynamic perturbations.
This sensitivity can be quantified by the phase rigidity, which mathematically corresponds to the eigenvalue condition number.
I derive an exact nonperturbative expression for this sensitivity measure that applies to arbitrary eigenvalue configurations.
arXiv Detail & Related papers (2023-07-12T16:36:39Z) - The Inductive Bias of Flatness Regularization for Deep Matrix Factorization [58.851514333119255]
This work takes the first step toward understanding the inductive bias of the minimum trace of the Hessian solutions in deep linear networks.
We show that for all depths greater than one, with the standard Restricted Isometry Property (RIP) on the measurements, minimizing the trace of the Hessian is approximately equivalent to minimizing the Schatten 1-norm of the corresponding end-to-end matrix parameters.
arXiv Detail & Related papers (2023-06-22T23:14:57Z) - Graph Polynomial Convolution Models for Node Classification of Non-Homophilous Graphs [52.52570805621925]
We investigate efficient learning from higher-order graph convolution and learning directly from the adjacency matrix for node classification.
We show that the resulting model leads to new graph and residual scaling parameters.
We demonstrate that the proposed methods obtain improved accuracy for node classification of non-homophilous graphs.
arXiv Detail & Related papers (2022-09-12T04:46:55Z) - Stability to Deformations of Manifold Filters and Manifold Neural Networks [89.53585099149973]
The paper defines and studies manifold (M) convolutional filters and manifold neural networks (MNNs).
The main technical contribution of the paper is to analyze the stability of manifold filters and MNNs to smooth deformations of the manifold.
arXiv Detail & Related papers (2021-06-07T15:41:03Z)