Unifying over-smoothing and over-squashing in graph neural networks: A
physics informed approach and beyond
- URL: http://arxiv.org/abs/2309.02769v2
- Date: Wed, 13 Sep 2023 00:17:19 GMT
- Title: Unifying over-smoothing and over-squashing in graph neural networks: A
physics informed approach and beyond
- Authors: Zhiqi Shao, Dai Shi, Andi Han, Yi Guo, Qibin Zhao, Junbin Gao
- Abstract summary: Graph Neural Networks (GNNs) have emerged as one of the leading approaches for machine learning on graph-structured data.
Critical computational challenges such as over-smoothing, over-squashing, and limited expressive power continue to impact the performance of GNNs.
We introduce the Multi-Scaled Heat Kernel based GNN (MHKG), which amalgamates the effects of diverse filtering functions on node features.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph Neural Networks (GNNs) have emerged as one of the leading approaches
for machine learning on graph-structured data. Despite their great success,
critical computational challenges such as over-smoothing, over-squashing, and
limited expressive power continue to impact the performance of GNNs. In this
study, inspired by the time-reversal principle commonly utilized in classical
and quantum physics, we reverse the time direction of the graph heat equation.
The resulting reversal process yields a class of high-pass filtering functions
that enhance the sharpness of graph node features. Leveraging this concept, we
introduce the Multi-Scaled Heat Kernel based GNN (MHKG), which amalgamates the
effects of diverse filtering functions on node features. To explore more
flexible filtering conditions, we further generalize MHKG into a model termed
G-MHKG and thoroughly show the role of each element in controlling
over-smoothing, over-squashing, and expressive power. Notably, we illustrate
that all of the aforementioned issues can be characterized and analyzed via the
properties of the filtering functions, and we uncover a trade-off between
over-smoothing and over-squashing: enhancing node feature sharpness makes the
model suffer more from over-squashing, and vice versa. Furthermore, we
manipulate time again to show how G-MHKG can handle both issues under mild
conditions. Our conclusive experiments highlight the effectiveness of the
proposed models: they surpass several GNN baselines in performance across
graph datasets characterized by both homophily and heterophily.
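To make the time-reversal idea concrete, here is a minimal sketch (not the authors' implementation) of heat-kernel filtering with both time directions: `exp(-tL)` with t > 0 is the usual low-pass heat flow, while a negative t reverses time and sharpens features. The layer name `mhkg_layer`, the scales, and the weights are all illustrative assumptions.

```python
import numpy as np
from scipy.linalg import expm

def normalized_laplacian(A):
    """Symmetric normalized Laplacian L = I - D^{-1/2} A D^{-1/2}."""
    d = A.sum(axis=1)
    d_inv_sqrt = np.zeros_like(d)
    nz = d > 0
    d_inv_sqrt[nz] = d[nz] ** -0.5
    return np.eye(len(A)) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

def heat_filter(L, X, t):
    """Apply the graph heat kernel exp(-tL) to node features X.

    t > 0 runs the heat equation forward (low-pass smoothing);
    t < 0 reverses time, giving exp(+|t|L), a high-pass sharpening filter.
    """
    return expm(-t * L) @ X

def mhkg_layer(L, X, scales, weights):
    """Hypothetical multi-scale mix of forward and time-reversed heat kernels."""
    return sum(w * heat_filter(L, X, t) for w, t in zip(weights, scales))

# Toy example: a 4-node path graph with 2-dimensional features.
A = np.array([[0., 1., 0., 0.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [0., 0., 1., 0.]])
L = normalized_laplacian(A)
X = np.random.default_rng(0).normal(size=(4, 2))
H = mhkg_layer(L, X, scales=[1.0, -0.5], weights=[0.7, 0.3])  # smooth + sharpen
```

The trade-off in the abstract shows up directly here: pushing the negative scale further sharpens features but amplifies the largest-eigenvalue directions, which is the over-squashing side of the bargain.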
Related papers
- Dual-Frequency Filtering Self-aware Graph Neural Networks for Homophilic and Heterophilic Graphs [60.82508765185161]
We propose Dual-Frequency Filtering Self-aware Graph Neural Networks (DFGNN).
DFGNN integrates low-pass and high-pass filters to extract smooth and detailed topological features.
It dynamically adjusts filtering ratios to accommodate both homophilic and heterophilic graphs.
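A minimal sketch of the dual-frequency idea, assuming a per-node mixing ratio `alpha` (DFGNN predicts this ratio from the data; here it is simply passed in):

```python
import numpy as np

def dual_frequency_layer(A_hat, L, X, alpha):
    """Hypothetical dual-frequency filter in the spirit of DFGNN.

    A_hat : normalized adjacency, the low-pass operator
    L     : normalized Laplacian, the high-pass operator
    alpha : per-node mixing ratio in [0, 1]
    """
    low = A_hat @ X   # smooth features, useful on homophilic graphs
    high = L @ X      # detail features, useful on heterophilic graphs
    return alpha[:, None] * low + (1.0 - alpha[:, None]) * high
```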
arXiv Detail & Related papers (2024-11-18T04:57:05Z)
- Spectral Greedy Coresets for Graph Neural Networks [61.24300262316091]
The ubiquity of large-scale graphs in node-classification tasks hinders the real-world applications of Graph Neural Networks (GNNs).
This paper studies graph coresets for GNNs and avoids the interdependence issue by selecting ego-graphs based on their spectral embeddings.
Our spectral greedy graph coreset (SGGC) scales to graphs with millions of nodes, obviates the need for model pre-training, and applies to low-homophily graphs.
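A simplified stand-in for the selection step (SGGC's actual objective is more involved): each row of `Z` is the spectral embedding of one ego-graph, and a farthest-point greedy pass keeps k well-spread rows, working at the ego-graph level so node interdependence is avoided.

```python
import numpy as np

def greedy_spectral_coreset(Z, k):
    """Farthest-point greedy selection in a spectral embedding space."""
    chosen = [0]
    dists = np.linalg.norm(Z - Z[0], axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(dists))      # ego-graph farthest from the coreset
        chosen.append(nxt)
        dists = np.minimum(dists, np.linalg.norm(Z - Z[nxt], axis=1))
    return chosen
```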
arXiv Detail & Related papers (2024-05-27T17:52:12Z)
- Design Your Own Universe: A Physics-Informed Agnostic Method for Enhancing Graph Neural Networks [34.16727363891593]
We propose a model-agnostic enhancement framework for Graph Neural Networks (GNNs).
This framework enriches the graph structure by introducing additional nodes and rewiring connections with both positive and negative weights.
We theoretically verify that GNNs enhanced through our approach can effectively circumvent the over-smoothing issue and exhibit robustness against over-squashing.
Empirical validations on benchmarks for homophilic, heterophilic graphs, and long-term graph datasets show that GNNs enhanced by our method significantly outperform their original counterparts.
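The intuition behind signed rewiring can be sketched as diffusion on a signed graph Laplacian; the rewired weight matrix `W`, the step size `beta`, and the Euler discretization below are illustrative assumptions, not the paper's construction:

```python
import numpy as np

def signed_diffusion_step(W, X, beta=0.1):
    """One diffusion step on a rewired graph with signed edge weights.

    Positive (attractive) entries of W pull neighboring features together,
    negative (repulsive) entries push them apart, which is the intuition
    for circumventing over-smoothing.
    """
    d = np.abs(W).sum(axis=1)         # degrees from edge-weight magnitudes
    L_signed = np.diag(d) - W         # signed graph Laplacian
    return X - beta * (L_signed @ X)  # explicit Euler step of the flow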
arXiv Detail & Related papers (2024-01-26T00:47:43Z)
- GPatcher: A Simple and Adaptive MLP Model for Alleviating Graph Heterophily [15.93465948768545]
We demystify the impact of graph heterophily on graph neural network (GNN) filters.
We propose a simple yet powerful GNN named GPatcher by leveraging the patch-Mixer architectures.
Our model demonstrates outstanding performance on node classification compared with popular homophily GNNs and state-of-the-art heterophily GNNs.
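One plausible reading of the patch-Mixer analogy (a guess, not GPatcher's published architecture) is that multi-hop aggregations play the role of image patches:

```python
import numpy as np

def hop_patches(A_hat, X, num_hops):
    """Stack multi-hop aggregations as one 'patch' per node.

    An MLP-Mixer would then alternate mixing across the hop axis and the
    feature axis of the returned tensor.
    """
    patches, H = [X], X
    for _ in range(num_hops):
        H = A_hat @ H
        patches.append(H)
    return np.stack(patches, axis=1)  # (num_nodes, num_hops + 1, num_feats)
```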
arXiv Detail & Related papers (2023-06-25T20:57:35Z)
- What functions can Graph Neural Networks compute on random graphs? The role of Positional Encoding [0.0]
We aim to deepen the theoretical understanding of Graph Neural Networks (GNNs) on large graphs, with a focus on their expressive power.
Recently, several works showed that, on very general random graph models, GNNs converge to certain functions as the number of nodes grows.
arXiv Detail & Related papers (2023-05-24T07:09:53Z)
- A Non-Asymptotic Analysis of Oversmoothing in Graph Neural Networks [33.35609077417775]
We characterize the mechanism behind the oversmoothing phenomenon via a non-asymptotic analysis.
We show that oversmoothing happens once the mixing effect starts to dominate the denoising effect.
Our results suggest that while PPR mitigates oversmoothing at deeper layers, PPR-based architectures still achieve their best performance at a shallow depth.
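The mixing-versus-denoising picture lends itself to a quick numeric illustration; all parameters below are invented for the demo and the printed trend (error dips, then rises) is what the paper's analysis predicts, not its actual experiment:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two communities with distinct means plus i.i.d. noise: repeated
# neighborhood averaging first denoises within communities (error falls),
# then mixes the communities together (error rises again).
n = 100
signal = np.repeat([[-1.0], [1.0]], n // 2, axis=0)
X = signal + rng.normal(scale=1.0, size=(n, 1))

# SBM-like adjacency: within-community edges (0.2) denser than between (0.02).
blocks = np.repeat([0, 1], n // 2)
P = np.where(blocks[:, None] == blocks[None, :], 0.2, 0.02)
A = (rng.random((n, n)) < P).astype(float)
A = np.triu(A, 1)
A = A + A.T
A_hat = A / np.maximum(A.sum(axis=1, keepdims=True), 1.0)

for layer in range(1, 11):
    X = A_hat @ X
    print(f"layer {layer:2d}: distance to signal = {np.linalg.norm(X - signal):.3f}")
```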
arXiv Detail & Related papers (2022-12-21T00:33:59Z)
- EvenNet: Ignoring Odd-Hop Neighbors Improves Robustness of Graph Neural Networks [51.42338058718487]
Graph Neural Networks (GNNs) have received extensive research attention for their promising performance in graph machine learning.
Existing approaches, such as GCN and GPRGNN, are not robust in the face of homophily changes on test graphs.
We propose EvenNet, a spectral GNN corresponding to an even-polynomial graph filter.
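A sketch of the even-polynomial idea, assuming a GPR-style weighted sum over powers of the propagation matrix (the coefficients `weights` would be learned):

```python
import numpy as np

def evennet_filter(A_hat, X, weights):
    """Even-polynomial spectral filter: y = sum_k w_k * A_hat^(2k) @ x.

    Using only even powers makes the frequency response an even function
    of the spectrum, which is the property tied to robustness against
    homophily changes between training and test graphs.
    """
    out = weights[0] * X
    H = X
    for w in weights[1:]:
        H = A_hat @ (A_hat @ H)  # advance two hops at a time, skipping odd hops
        out = out + w * H
    return out
```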
arXiv Detail & Related papers (2022-05-27T10:48:14Z)
- Adaptive Kernel Graph Neural Network [21.863238974404474]
Graph neural networks (GNNs) have demonstrated great success in representation learning for graph-structured data.
In this paper, we propose a novel framework, namely the Adaptive Kernel Graph Neural Network (AKGNN).
AKGNN is the first attempt to learn the optimal graph kernel adaptively in a unified manner.
Experiments on acknowledged benchmark datasets demonstrate the outstanding performance of the proposed AKGNN.
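One way to picture an adaptive kernel is a learnable scalar trading off an all-pass component against a low-pass one; the parameterization below is an assumption for illustration, not AKGNN's published formula:

```python
import numpy as np

def adaptive_kernel_layer(A_hat, X, lam):
    """Sketch of an adaptive-kernel layer in the spirit of AKGNN.

    A learnable scalar lam >= 1 balances an all-pass component (the
    identity, preserving high frequencies) against a low-pass component
    (the normalized adjacency A_hat).
    """
    return (2.0 * (lam - 1.0) / lam) * X + (2.0 / lam) * (A_hat @ X)
```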
arXiv Detail & Related papers (2021-12-08T20:23:58Z)
- Graph Neural Networks with Adaptive Frequency Response Filter [55.626174910206046]
We develop a graph neural network framework, AdaGNN, with a smooth adaptive frequency response filter.
We empirically validate the effectiveness of the proposed framework on various benchmark datasets.
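A per-channel adaptive response can be sketched as one learnable coefficient per feature channel; the exact layer form below is a hedged reconstruction, not the reference code:

```python
import numpy as np

def adagnn_like_layer(L, X, phi):
    """Per-channel adaptive frequency response, AdaGNN-style sketch.

    phi holds one learnable coefficient per feature channel, so each
    channel removes a different amount of high-frequency content:
    larger phi[j] smooths channel j more aggressively.
    """
    return X - (L @ X) * phi[None, :]
```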
arXiv Detail & Related papers (2021-04-26T19:31:21Z)
- Towards Deeper Graph Neural Networks [63.46470695525957]
Graph convolutions perform neighborhood aggregation and represent one of the most important graph operations.
Stacking many such layers, however, often degrades performance, and several recent studies attribute this deterioration to the over-smoothing issue.
We propose Deep Adaptive Graph Neural Network (DAGNN) to adaptively incorporate information from large receptive fields.
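A sketch of the adaptive multi-hop combination, assuming transformation has already been decoupled from propagation (`H` is the MLP-transformed feature matrix) and using a softmax over hop scores for simplicity; `s_proj` stands in for DAGNN's learned retention-score projection:

```python
import numpy as np

def dagnn_like_readout(A_hat, H, s_proj, num_hops):
    """DAGNN-style adaptive combination of 0..K hop representations.

    Each node weighs its multi-hop representations with learned scores,
    so the effective receptive field adapts per node.
    """
    hops, Z = [H], H
    for _ in range(num_hops):
        Z = A_hat @ Z
        hops.append(Z)
    stacked = np.stack(hops, axis=1)          # (n, num_hops + 1, d)
    logits = stacked @ s_proj                 # (n, num_hops + 1)
    logits -= logits.max(axis=1, keepdims=True)
    scores = np.exp(logits)
    scores /= scores.sum(axis=1, keepdims=True)
    return (scores[:, :, None] * stacked).sum(axis=1)
```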
arXiv Detail & Related papers (2020-07-18T01:11:14Z)