On the Trade-off between Over-smoothing and Over-squashing in Deep Graph Neural Networks
- URL: http://arxiv.org/abs/2212.02374v2
- Date: Fri, 11 Aug 2023 13:51:55 GMT
- Title: On the Trade-off between Over-smoothing and Over-squashing in Deep Graph Neural Networks
- Authors: Jhony H. Giraldo, Konstantinos Skianis, Thierry Bouwmans, Fragkiskos D. Malliaros
- Abstract summary: Over-smoothing and over-squashing are key challenges when stacking graph convolutional layers.
Our work reveals that over-smoothing and over-squashing are intrinsically related to the spectral gap of the graph Laplacian.
To achieve a suitable compromise, we propose adding and removing edges as a viable approach.
- Score: 8.105516788827451
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Neural Networks (GNNs) have succeeded in various computer science
applications, yet deep GNNs underperform their shallow counterparts despite
deep learning's success in other domains. Over-smoothing and over-squashing are
key challenges when stacking graph convolutional layers, hindering deep
representation learning and information propagation from distant nodes. Our
work reveals that over-smoothing and over-squashing are intrinsically related
to the spectral gap of the graph Laplacian, resulting in an inevitable
trade-off between these two issues, as they cannot be alleviated
simultaneously. To achieve a suitable compromise, we propose adding and
removing edges as a viable approach. We introduce the Stochastic Jost and Liu
Curvature Rewiring (SJLR) algorithm, which is computationally efficient and
preserves fundamental properties compared to previous curvature-based methods.
Unlike existing approaches, SJLR performs edge addition and removal during GNN
training while maintaining the graph unchanged during testing. Comprehensive
comparisons demonstrate SJLR's competitive performance in addressing
over-smoothing and over-squashing.
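The abstract ties both phenomena to the spectral gap of the graph Laplacian and proposes trading them off by adding and removing edges. Below is a minimal Python sketch of that quantity only, not the authors' SJLR algorithm: it computes the spectral gap of the normalized Laplacian and shows how it moves when an edge is added across a bottleneck or a bridge edge is removed. The barbell graph and the specific edges are illustrative assumptions (requires networkx and numpy).
```python
import networkx as nx
import numpy as np


def spectral_gap(G: nx.Graph) -> float:
    """Second-smallest eigenvalue of the normalized graph Laplacian."""
    L = nx.normalized_laplacian_matrix(G).toarray()
    return float(np.sort(np.linalg.eigvalsh(L))[1])


# A barbell graph: two 5-cliques joined by a short path -- a classic bottleneck.
G = nx.barbell_graph(5, 2)
print("original gap:          ", spectral_gap(G))

# Adding a shortcut across the bottleneck improves connectivity and enlarges the
# gap (less over-squashing, but features mix faster, i.e. more over-smoothing).
G_add = G.copy()
G_add.add_edge(0, 11)
print("after adding an edge:  ", spectral_gap(G_add))

# Removing a bridge edge disconnects the two communities, driving the gap to
# zero (no mixing across the cut -- the over-squashing extreme).
G_remove = G.copy()
G_remove.remove_edge(5, 6)
print("after removing an edge:", spectral_gap(G_remove))
```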
Related papers
- On Vanishing Gradients, Over-Smoothing, and Over-Squashing in GNNs: Bridging Recurrent and Graph Learning [15.409865070022951]
Graph Neural Networks (GNNs) are models that leverage the graph structure to transmit information between nodes.
We show that a simple state-space formulation of a GNN effectively alleviates over-smoothing and over-squashing at no extra trainable parameter cost.
arXiv Detail & Related papers (2025-02-15T14:43:41Z)
- Effects of Random Edge-Dropping on Over-Squashing in Graph Neural Networks [8.524684315458243]
We present theoretical results that characterize the negative effects of DropEdge on sensitivity between distant nodes.
Our findings are easily extended to its variants, allowing us to build a comprehensive understanding of how they affect over-squashing.
Our conclusions highlight the need to re-evaluate various methods designed for training deep GNNs.
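(A minimal DropEdge-style sketch, for illustration only, appears after this list.)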
arXiv Detail & Related papers (2025-02-11T08:36:38Z)
- Learning to Reweight for Graph Neural Network [63.978102332612906]
Graph Neural Networks (GNNs) show promising results for graph tasks.
The generalization ability of existing GNNs degrades when there are distribution shifts between training and testing graph data.
We propose a novel nonlinear graph decorrelation method, which can substantially improve the out-of-distribution generalization ability.
arXiv Detail & Related papers (2023-12-19T12:25:10Z)
- Robust Graph Neural Network based on Graph Denoising [10.564653734218755]
Graph Neural Networks (GNNs) have emerged as a prominent alternative for addressing learning problems on non-Euclidean datasets.
This work proposes a robust implementation of GNNs that explicitly accounts for the presence of perturbations in the observed topology.
arXiv Detail & Related papers (2023-12-11T17:43:57Z)
- NodeFormer: A Scalable Graph Structure Learning Transformer for Node Classification [70.51126383984555]
We introduce a novel all-pair message passing scheme for efficiently propagating node signals between arbitrary nodes.
The efficient computation is enabled by a kernelized Gumbel-Softmax operator.
Experiments demonstrate the promising efficacy of the method in various tasks including node classification on graphs.
arXiv Detail & Related papers (2023-06-14T09:21:15Z)
- Distributed Graph Neural Network Training with Periodic Historical Embedding Synchronization [9.503080586294406]
Graph Neural Networks (GNNs) are prevalent in various applications such as social networks, recommender systems, and knowledge graphs.
Traditional sampling-based methods accelerate GNN training by dropping edges and nodes, which impairs graph integrity and model performance.
This paper proposes DIstributed Graph Embedding SynchronizaTion (DIGEST), a novel distributed GNN training framework.
arXiv Detail & Related papers (2022-05-31T18:44:53Z)
- Graph Neural Network with Curriculum Learning for Imbalanced Node Classification [21.085314408929058]
Graph Neural Network (GNN) is an emerging technique for graph-based learning tasks such as node classification.
In this work, we reveal the vulnerability of GNNs to imbalanced node labels.
We propose a novel graph neural network framework with curriculum learning (GNN-CL) consisting of two modules.
arXiv Detail & Related papers (2022-02-05T10:46:11Z)
- Very Deep Graph Neural Networks Via Noise Regularisation [57.450532911995516]
Graph Neural Networks (GNNs) perform learned message passing over an input graph.
We train a deep GNN with up to 100 message passing steps and achieve several state-of-the-art results.
arXiv Detail & Related papers (2021-06-15T08:50:10Z)
- Graph Neural Networks with Adaptive Frequency Response Filter [55.626174910206046]
We develop AdaGNN, a graph neural network framework with a smooth, adaptive frequency response filter.
We empirically validate the effectiveness of the proposed framework on various benchmark datasets.
arXiv Detail & Related papers (2021-04-26T19:31:21Z)
- Overcoming Catastrophic Forgetting in Graph Neural Networks [50.900153089330175]
Catastrophic forgetting refers to the tendency of a neural network to "forget" previously learned knowledge upon learning new tasks.
We propose a novel scheme dedicated to overcoming this problem and hence strengthening continual learning in graph neural networks (GNNs).
At the heart of our approach is a generic module, termed topology-aware weight preserving (TWP).
arXiv Detail & Related papers (2020-12-10T22:30:25Z)
- Learning to Drop: Robust Graph Neural Network via Topological Denoising [50.81722989898142]
We propose PTDNet, a parameterized topological denoising network, to improve the robustness and generalization performance of Graph Neural Networks (GNNs).
PTDNet prunes task-irrelevant edges by penalizing the number of edges in the sparsified graph with parameterized networks.
We show that PTDNet can improve the performance of GNNs significantly and the performance gain becomes larger for more noisy datasets.
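(A simplified PTDNet-style edge-masking sketch also appears after this list.)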
arXiv Detail & Related papers (2020-11-13T18:53:21Z)
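For the random edge-dropping entry above, here is a hedged, torch-only sketch of the DropEdge mechanism that the paper analyzes; it is not that paper's code, and the [2, num_edges] edge_index convention (as used in PyTorch Geometric) is an assumption. Like SJLR, the perturbation is applied only during training, while the full graph is used at test time.
```python
import torch


def drop_edge(edge_index: torch.Tensor, p: float = 0.5) -> torch.Tensor:
    """Randomly drop edges: keep each edge independently with probability 1 - p.

    edge_index follows the common [2, num_edges] COO layout; callers typically
    resample the mask at every training step and keep the full graph at test time.
    """
    if not 0.0 <= p < 1.0:
        raise ValueError("p must lie in [0, 1)")
    keep = torch.rand(edge_index.size(1)) >= p
    return edge_index[:, keep]


# Toy usage: a directed 4-cycle, thinned differently on every call.
edge_index = torch.tensor([[0, 1, 2, 3],
                           [1, 2, 3, 0]])
print(drop_edge(edge_index, p=0.5))
```
For the PTDNet entry, the sketch below is a simplified stand-in for parameterized edge pruning: a small network scores each edge from its endpoint features, a soft mask sparsifies the graph, and a penalty on the mask sum discourages keeping edges. The EdgeMasker module, its dimensions, and the sigmoid-plus-L1 relaxation are illustrative assumptions, not the paper's exact parameterization.
```python
import torch
import torch.nn as nn


class EdgeMasker(nn.Module):
    """Scores each edge from its endpoint features and outputs a soft keep-probability."""

    def __init__(self, feat_dim: int, hidden: int = 16):
        super().__init__()
        self.scorer = nn.Sequential(
            nn.Linear(2 * feat_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x: torch.Tensor, edge_index: torch.Tensor) -> torch.Tensor:
        src, dst = edge_index[0], edge_index[1]
        pair = torch.cat([x[src], x[dst]], dim=-1)            # [num_edges, 2 * feat_dim]
        return torch.sigmoid(self.scorer(pair)).squeeze(-1)   # soft mask in (0, 1)


# Toy usage: 4 nodes with 8-dim features and a directed 4-cycle.
x = torch.randn(4, 8)
edge_index = torch.tensor([[0, 1, 2, 3],
                           [1, 2, 3, 0]])
masker = EdgeMasker(feat_dim=8)
mask = masker(x, edge_index)

# Sparsity penalty on the (expected) number of kept edges; added to the task loss
# so that task-irrelevant edges are pushed toward a mask value of zero.
loss = 1e-2 * mask.sum()    # + task loss in a real pipeline
loss.backward()
print(mask.detach())
```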
This list is automatically generated from the titles and abstracts of the papers on this site.