Return of ChebNet: Understanding and Improving an Overlooked GNN on Long Range Tasks
- URL: http://arxiv.org/abs/2506.07624v1
- Date: Mon, 09 Jun 2025 10:41:34 GMT
- Title: Return of ChebNet: Understanding and Improving an Overlooked GNN on Long Range Tasks
- Authors: Ali Hariri, Álvaro Arroyo, Alessio Gravina, Moshe Eliasof, Carola-Bibiane Schönlieb, Davide Bacciu, Kamyar Azizzadenesheli, Xiaowen Dong, Pierre Vandergheynst
- Abstract summary: We revisit ChebNet to shed light on its ability to model distant node interactions. We cast ChebNet as a stable and non-dissipative dynamical system, which we coin Stable-ChebNet.
- Score: 37.738726537018294
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: ChebNet, one of the earliest spectral GNNs, has largely been overshadowed by Message Passing Neural Networks (MPNNs), which gained popularity for their simplicity and effectiveness in capturing local graph structure. Despite their success, MPNNs are limited in their ability to capture long-range dependencies between nodes. This has led researchers to adapt MPNNs through graph rewiring or to adopt Graph Transformers (GTs), which compromises the computational efficiency that characterized early spatial message-passing architectures and typically disregards the graph structure. Almost a decade after its original introduction, we revisit ChebNet to shed light on its ability to model distant node interactions. We find that, out of the box, ChebNet already shows competitive advantages relative to classical MPNNs and GTs on long-range benchmarks, while maintaining good scalability properties for high-order polynomials. However, we uncover that this polynomial expansion leads ChebNet into an unstable regime during training. To address this limitation, we cast ChebNet as a stable and non-dissipative dynamical system, which we coin Stable-ChebNet. Our Stable-ChebNet model allows for stable information propagation and has controllable dynamics that do not require eigendecompositions, positional encodings, or graph rewiring. Across several benchmarks, Stable-ChebNet achieves near state-of-the-art performance.
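Since the abstract centers on high-order Chebyshev filtering, a minimal NumPy sketch of a plain ChebNet layer may help make the mechanism concrete: an order-K Chebyshev expansion of the rescaled Laplacian mixes K-hop neighbourhoods with no eigendecomposition. The lmax = 2 bound, the random graph, and the weights are illustrative assumptions, and the stabilized dynamics of Stable-ChebNet are not reproduced here.

```python
import numpy as np

def chebnet_layer(X, A, thetas):
    """One ChebNet layer: sum_k T_k(L_hat) @ X @ Theta_k, where T_k are
    Chebyshev polynomials of the rescaled Laplacian L_hat = 2L/lmax - I."""
    n = A.shape[0]
    deg = A.sum(axis=1)
    # Symmetric normalized Laplacian L = I - D^{-1/2} A D^{-1/2}
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(deg, 1e-12))
    L = np.eye(n) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    # lmax <= 2 holds for the normalized Laplacian, so lmax = 2 is a common
    # eigendecomposition-free bound; then L_hat = L - I has spectrum in [-1, 1].
    L_hat = L - np.eye(n)
    # Chebyshev recurrence applied to X:
    # T_0 X = X, T_1 X = L_hat X, T_k X = 2 L_hat (T_{k-1} X) - T_{k-2} X
    Tx_prev, Tx = X, L_hat @ X
    out = Tx_prev @ thetas[0] + Tx @ thetas[1]
    for k in range(2, len(thetas)):
        Tx_prev, Tx = Tx, 2.0 * (L_hat @ Tx) - Tx_prev
        out = out + Tx @ thetas[k]
    return out

rng = np.random.default_rng(0)
A = rng.integers(0, 2, size=(6, 6)).astype(float)
A = np.triu(A, 1); A = A + A.T                  # random undirected graph
X = rng.normal(size=(6, 4))                      # node features
thetas = [rng.normal(size=(4, 8)) * 0.1 for _ in range(5)]  # order K = 4
H = chebnet_layer(X, A, thetas)
print(H.shape)  # (6, 8): an order-K filter mixes K-hop neighbourhoods
```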
Related papers
- From ChebNet to ChebGibbsNet [4.600280055873605]
We analyze the potential of Spectral Graph Convolutional Networks (SpecGCNs). Our experiments indicate that the proposed ChebGibbsNet is superior to other advanced SpecGCNs, such as GPR-GNN and BernNet, on both homogeneous and heterogeneous graphs.
arXiv Detail & Related papers (2024-12-02T18:37:45Z)
- Spatio-Spectral Graph Neural Networks [50.277959544420455]
We propose Spatio-Spectral Graph Neural Networks (S²GNNs), which combine spatially and spectrally parametrized graph filters.
We show that S²GNNs vanquish over-squashing and yield strictly tighter approximation-theoretic error bounds than MPGNNs. A toy sketch of this spatial/spectral split appears below.
arXiv Detail & Related papers (2024-05-29T14:28:08Z)
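As a toy illustration of the spatial/spectral split described in this entry (not the actual S²GNN parametrization), the sketch below adds a purely local aggregation to a spectral filter acting on the k smoothest Laplacian eigenvectors; all names and weights are illustrative assumptions.

```python
import numpy as np

def spatio_spectral_layer(X, A, W_spat, W_spec, k=2):
    """Toy spatio-spectral filter: a one-hop spatial aggregation (local
    structure) plus a spectral filter on the k lowest Laplacian
    eigenvectors (global, long-range structure)."""
    n = A.shape[0]
    d = 1.0 / np.sqrt(np.maximum(A.sum(axis=1), 1e-12))
    A_norm = d[:, None] * A * d[None, :]     # D^{-1/2} A D^{-1/2}
    L = np.eye(n) - A_norm                   # normalized Laplacian
    H_spat = A_norm @ X @ W_spat             # spatially parametrized part
    _, U = np.linalg.eigh(L)
    U_k = U[:, :k]                           # k smoothest eigenvectors
    H_spec = U_k @ (U_k.T @ X) @ W_spec      # spectrally parametrized part
    return H_spat + H_spec
```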
- Multicoated and Folded Graph Neural Networks with Strong Lottery Tickets [3.0894823679470087]
This paper introduces the Multi-Stage Folding and Unshared Masks methods to expand the search space in both architecture and parameters.
By achieving high sparsity, competitive performance, and high memory efficiency (up to a 98.7% reduction), the approach demonstrates its suitability for energy-efficient graph processing. A generic supermask sketch follows below for background.
arXiv Detail & Related papers (2023-12-06T02:16:44Z)
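For background on the strong-lottery-ticket setting this entry builds on, here is a generic supermask layer in PyTorch: frozen random weights with a learned binary mask trained through a straight-through estimator. This is a standard baseline sketch, not the paper's Multi-Stage Folding or Unshared Masks methods.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SupermaskLinear(nn.Module):
    """Strong-lottery-ticket layer: the weights stay at their random
    initialization; only a binary mask over them is learned."""
    def __init__(self, in_features, out_features, sparsity=0.5):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features),
                                   requires_grad=False)   # frozen weights
        self.scores = nn.Parameter(torch.randn(out_features, in_features))
        self.sparsity = sparsity

    def forward(self, x):
        # Keep the (1 - sparsity) fraction of weights with the highest scores.
        n = self.scores.numel()
        keep = max(1, int((1.0 - self.sparsity) * n))
        threshold = self.scores.flatten().kthvalue(n - keep + 1).values
        hard = (self.scores >= threshold).float()
        # Straight-through estimator: binary mask in the forward pass,
        # identity gradient to the scores in the backward pass.
        mask = hard.detach() + self.scores - self.scores.detach()
        return F.linear(x, self.weight * mask)

layer = SupermaskLinear(16, 8)
out = layer(torch.randn(4, 16))   # only layer.scores receives gradients
```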
- Regularization of polynomial networks for image recognition [78.4786845859205]
Polynomial Networks (PNs) have emerged as an alternative method with promising performance and improved interpretability.
We introduce a class of PNs that reach the performance of ResNet across six benchmarks.
arXiv Detail & Related papers (2023-03-24T10:05:22Z)
- Unifying Label-inputted Graph Neural Networks with Deep Equilibrium Models [12.71307159013144]
This work unifies the two families of Graph Neural Networks (GNNs) by interpreting label-inputted GNNs (LGNNs) within the theory of Implicit GNNs (IGNNs).
IGNN exploits information from the entire graph to capture long-range dependencies, but its network is constrained to guarantee the existence of the equilibrium.
In this work, implicit differentiation of IGNN is introduced to differentiate its infinite-range label propagation with constant memory, making the propagation both distant and adaptive. A compact fixed-point sketch follows below.
arXiv Detail & Related papers (2022-11-19T09:28:53Z)
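A compact sketch of the implicit-GNN pattern this entry unifies: iterate a contractive update to its equilibrium without storing intermediate states, then re-attach autograd at the fixed point. Exact constant-memory implicit differentiation solves a linear system in the backward pass; the one-step re-attachment below is a common cheap approximation, and the tanh update and all shapes are illustrative assumptions.

```python
import torch

def ignn_layer(W, A, B, X, tol=1e-5, max_iter=100):
    """Implicit GNN: find the equilibrium Z* of Z = tanh(W Z A + B X).
    W: (d, d), constrained (e.g. small norm) so the map is a contraction
    and the equilibrium exists; A: (n, n) adjacency; B: (d, f);
    X: (f, n) inputs (labels and/or features)."""
    with torch.no_grad():              # constant memory: no unrolled graph
        Z = torch.zeros(W.shape[0], X.shape[1])
        for _ in range(max_iter):
            Z_next = torch.tanh(W @ Z @ A + B @ X)
            converged = (Z_next - Z).norm() < tol
            Z = Z_next
            if converged:
                break
    # One differentiable step at the fixed point (a cheap stand-in for the
    # exact implicit gradient, which solves (I - J)^T u = dL/dZ* instead).
    return torch.tanh(W @ Z @ A + B @ X)

d, n, f = 4, 6, 3
W = 0.1 * torch.randn(d, d, requires_grad=True)   # small norm => contraction
A = torch.rand(n, n); A = ((A + A.T) > 1.2).float()
B = torch.randn(d, f, requires_grad=True)
X = torch.randn(f, n)
Z = ignn_layer(W, A, B, X)
Z.sum().backward()                    # gradients flow to W and B
```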
- Dynamic Graph Message Passing Networks for Visual Recognition [112.49513303433606]
Modelling long-range dependencies is critical for scene understanding tasks in computer vision.
A fully-connected graph is beneficial for such modelling, but its computational overhead is prohibitive.
We propose a dynamic graph message passing network that significantly reduces this computational complexity.
arXiv Detail & Related papers (2022-09-20T14:41:37Z)
- Lipschitz Continuity Retained Binary Neural Network [52.17734681659175]
We introduce Lipschitz continuity as a rigorous criterion for defining model robustness in Binary Neural Networks (BNNs).
We then propose to retain Lipschitz continuity as a regularization term to improve model robustness.
Our experiments show that this BNN-specific regularization method effectively strengthens the robustness of BNNs. A generic spectral-norm penalty sketch appears below.
arXiv Detail & Related papers (2022-07-13T22:55:04Z)
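A generic rendering of the idea of training against a Lipschitz criterion: estimate each layer's spectral norm (its L2 Lipschitz constant for linear maps) by power iteration and penalize its drift from 1. This is an assumption-laden full-precision stand-in, not the paper's BNN-specific retention term.

```python
import torch
import torch.nn.functional as F

def spectral_norm_estimate(weight, n_iters=5):
    """Power-iteration estimate of ||W||_2, which for a linear layer is
    its Lipschitz constant with respect to the L2 norm."""
    w = weight.reshape(weight.shape[0], -1)
    u = torch.randn(w.shape[0], device=w.device)
    for _ in range(n_iters):
        v = F.normalize(w.t() @ u, dim=0)
        u = F.normalize(w @ v, dim=0)
    return u @ (w @ v)

def lipschitz_regularized_loss(task_loss, model, lam=1e-3):
    """Add a penalty for layers whose spectral norm drifts away from 1."""
    penalty = sum((spectral_norm_estimate(p) - 1.0) ** 2
                  for p in model.parameters() if p.dim() >= 2)
    return task_loss + lam * penalty
```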
- Convolutional Neural Networks on Graphs with Chebyshev Approximation, Revisited [80.29747781984135]
ChebNet is one of the early attempts to approximate spectral graph convolutions using Chebyshev polynomials.
GCN simplifies ChebNet by utilizing only the first two Chebyshev polynomials while still outperforming it on real-world datasets.
We propose ChebNetII, a new GNN model based on Chebyshev interpolation, which enhances the original Chebyshev polynomial approximation while reducing the Runge phenomenon. A short demonstration of the Runge phenomenon follows below.
arXiv Detail & Related papers (2022-02-04T09:11:13Z)
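The Runge phenomenon mentioned here is easy to reproduce: a high-order polynomial interpolating Runge's function at equispaced points diverges near the interval ends, while Chebyshev nodes keep the error small, which is the numerical motivation for interpolation-based schemes like ChebNetII. The function and node count below are illustrative choices.

```python
import numpy as np

def interp_error(nodes, f, n_test=1000):
    """Fit a polynomial through the given nodes and report its maximum
    error on [-1, 1]."""
    coeffs = np.polyfit(nodes, f(nodes), deg=len(nodes) - 1)
    xs = np.linspace(-1, 1, n_test)
    return np.abs(np.polyval(coeffs, xs) - f(xs)).max()

f = lambda x: 1.0 / (1.0 + 25.0 * x**2)   # Runge's function
n = 15
equispaced = np.linspace(-1, 1, n)
chebyshev = np.cos((2 * np.arange(n) + 1) * np.pi / (2 * n))  # Chebyshev nodes
print("equispaced:", interp_error(equispaced, f))  # blows up near the ends
print("chebyshev: ", interp_error(chebyshev, f))   # stays small
```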
- From Stars to Subgraphs: Uplifting Any GNN with Local Structure Awareness [23.279464786779787]
We introduce a general framework to uplift any MPNN to be more expressive.
Our framework is strictly more powerful than 1&2-WL, and no less powerful than 3-WL.
Our method sets new state-of-the-art results by large margins on several well-known graph ML tasks. A schematic subgraph-uplifting sketch appears below.
arXiv Detail & Related papers (2021-10-07T19:08:08Z)
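Schematically, subgraph-based uplifting encodes each node by running a base GNN on its k-hop induced subgraph and pooling, rather than in one pass over the whole graph. The sketch below uses a toy mean-aggregation base GNN and is a simplified rendering, not the paper's exact formulation.

```python
import numpy as np

def subgraph_uplift(X, A, base_gnn, k=1):
    """Encode each node by applying base_gnn to its k-hop induced subgraph
    and mean-pooling the result, instead of one pass over the whole graph."""
    n = A.shape[0]
    reach = np.eye(n)
    for _ in range(k):                   # nodes reachable within k hops
        reach = reach + reach @ A
    reach = reach > 0
    out = []
    for v in range(n):
        idx = np.where(reach[v])[0]      # v's k-hop ball
        H = base_gnn(X[idx], A[np.ix_(idx, idx)])  # encode induced subgraph
        out.append(H.mean(axis=0))       # pool to a node embedding
    return np.stack(out)

def toy_base_gnn(X, A):                  # one round of mean aggregation
    deg = np.maximum(A.sum(axis=1, keepdims=True), 1.0)
    return X + (A @ X) / deg

rng = np.random.default_rng(0)
A = (rng.random((8, 8)) < 0.3).astype(float)
A = np.triu(A, 1); A = A + A.T
X = rng.normal(size=(8, 4))
print(subgraph_uplift(X, A, toy_base_gnn, k=2).shape)   # (8, 4)
```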
- MeliusNet: Can Binary Neural Networks Achieve MobileNet-level Accuracy? [12.050205584630922]
Binary Neural Networks (BNNs) are neural networks which use binary weights and activations instead of the typical 32-bit floating-point values.
In this paper, we present an architectural approach: MeliusNet. It alternates a DenseBlock, which increases the feature capacity, with our proposed ImprovementBlock, which increases the feature quality. A schematic of the alternating blocks follows below.
arXiv Detail & Related papers (2020-01-16T16:56:10Z)
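For context on the architecture named in this entry, the sketch mirrors its alternating-block pattern: a DenseBlock concatenates new channels (capacity) and an ImprovementBlock refines the newest channels with a residual (quality). Full-precision convolutions stand in for MeliusNet's binary ones, and c = 64 is an illustrative channel count.

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Appends c new feature channels (capacity up); the real MeliusNet
    uses binary convolutions here, ordinary convs stand in for brevity."""
    def __init__(self, in_ch, c=64):
        super().__init__()
        self.conv = nn.Sequential(nn.BatchNorm2d(in_ch),
                                  nn.Conv2d(in_ch, c, 3, padding=1, bias=False))
    def forward(self, x):
        return torch.cat([x, self.conv(x)], dim=1)

class ImprovementBlock(nn.Module):
    """Refines the c newest channels with a residual update (quality up)."""
    def __init__(self, in_ch, c=64):
        super().__init__()
        self.conv = nn.Sequential(nn.BatchNorm2d(in_ch),
                                  nn.Conv2d(in_ch, c, 3, padding=1, bias=False))
        self.c = c
    def forward(self, x):
        res = self.conv(x)
        return torch.cat([x[:, :-self.c], x[:, -self.c:] + res], dim=1)

blocks = nn.Sequential(DenseBlock(64), ImprovementBlock(128),
                       DenseBlock(128), ImprovementBlock(192))
print(blocks(torch.randn(1, 64, 32, 32)).shape)  # channels grow by c per DenseBlock
```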