Is Heterophily A Real Nightmare For Graph Neural Networks To Do Node
Classification?
- URL: http://arxiv.org/abs/2109.05641v1
- Date: Sun, 12 Sep 2021 23:57:05 GMT
- Authors: Sitao Luan, Chenqing Hua, Qincheng Lu, Jiaqi Zhu, Mingde Zhao, Shuyuan
Zhang, Xiao-Wen Chang, Doina Precup
- Abstract summary: Graph Neural Networks (GNNs) extend basic Neural Networks (NNs) by using graph structures based on the relational inductive bias (homophily assumption). The performance advantages of GNNs over graph-agnostic NNs are not generally satisfactory. Heterophily has been considered a main cause, and numerous works have been put forward to address it.
- Score: 44.71818395535755
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph Neural Networks (GNNs) extend basic Neural Networks (NNs) by
using graph structures based on the relational inductive bias (homophily
assumption). Though GNNs are believed to outperform NNs in real-world tasks,
the performance advantages of GNNs over graph-agnostic NNs are not generally
satisfactory. Heterophily has been considered a main cause, and numerous works
have been put forward to address it. In this paper, we first show that not all
cases of heterophily are harmful to GNNs with aggregation operations. We then
propose new metrics based on a similarity matrix that considers the influence
of both graph structure and input features on GNNs. Tests on synthetic graphs
demonstrate the advantages of these metrics over the commonly used homophily
metrics. From the metrics and the observations, we find that some cases of
harmful heterophily can be addressed by a diversification operation. Combining
this insight with knowledge of filterbanks, we propose the Adaptive Channel
Mixing (ACM) framework, which adaptively exploits aggregation, diversification,
and identity channels in each GNN layer to address harmful heterophily. We
validate ACM-augmented baselines on 10 real-world node classification tasks.
They consistently achieve significant performance gains and exceed
state-of-the-art GNNs on most of these tasks without incurring a significant
computational burden.
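The abstract describes ACM only at a high level. As a rough illustration, a single ACM-style layer with learned per-node mixing of the three channels might look like the following PyTorch sketch; the module names, gating scheme, and dense-adjacency assumption are illustrative choices, not the paper's exact design:

```python
import torch
import torch.nn as nn

class ACMLayer(nn.Module):
    """Sketch of an Adaptive Channel Mixing style layer: three channels
    (low-pass aggregation, high-pass diversification, identity), each with
    its own weights, mixed per node by learned softmax gates. The gating
    scheme here is an assumption, not the paper's exact design."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.w_low = nn.Linear(in_dim, out_dim)   # aggregation channel
        self.w_high = nn.Linear(in_dim, out_dim)  # diversification channel
        self.w_id = nn.Linear(in_dim, out_dim)    # identity channel
        self.gate = nn.Linear(3 * out_dim, 3)     # per-node channel weights

    def forward(self, x, adj_norm):
        # adj_norm: dense, symmetrically normalized adjacency matrix (N x N)
        h_low = self.w_low(adj_norm @ x)        # smooth features over neighbors
        h_high = self.w_high(x - adj_norm @ x)  # contrast features with neighbors
        h_id = self.w_id(x)                     # keep the raw features
        alpha = torch.softmax(
            self.gate(torch.cat([h_low, h_high, h_id], dim=-1)), dim=-1)
        return (alpha[:, 0:1] * h_low
                + alpha[:, 1:2] * h_high
                + alpha[:, 2:3] * h_id)

layer = ACMLayer(16, 8)
x = torch.randn(5, 16)
adj = torch.eye(5)           # stand-in for a normalized adjacency matrix
print(layer(x, adj).shape)   # torch.Size([5, 8])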
Related papers
- When Do Graph Neural Networks Help with Node Classification?
Investigating the Impact of Homophily Principle on Node Distinguishability [92.8279562472538]
The homophily principle has been believed to be the main reason for the performance superiority of Graph Neural Networks (GNNs) over Neural Networks on node classification tasks.
Recent research suggests that, even in the absence of homophily, the advantage of GNNs still exists as long as nodes from the same class share similar neighborhood patterns.
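For context, the edge homophily ratio that such works commonly revisit (the fraction of edges joining same-class nodes) can be computed as in this minimal sketch; the edge-list representation and the toy graph are illustrative assumptions:

```python
import numpy as np

def edge_homophily(edges, labels):
    """Fraction of edges whose endpoints share a class label --
    the standard edge-homophily ratio these works revisit."""
    edges = np.asarray(edges)    # shape (E, 2), node index pairs
    labels = np.asarray(labels)  # shape (N,), integer class labels
    same = labels[edges[:, 0]] == labels[edges[:, 1]]
    return same.mean()

# Toy example: a 4-node graph with two classes
edges = [(0, 1), (1, 2), (2, 3)]
labels = [0, 0, 1, 1]
print(edge_homophily(edges, labels))  # 2 of 3 edges intra-class -> 0.666...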
arXiv Detail & Related papers (2023-04-25T09:40:47Z)
- GNN-Ensemble: Towards Random Decision Graph Neural Networks [3.7620848582312405]
Graph Neural Networks (GNNs) have enjoyed widespread application to graph-structured data.
GNNs are required to learn latent patterns from a limited amount of training data to perform inferences on a vast amount of test data.
In this paper, we push ensemble learning of GNNs one step forward, with improved accuracy and robustness against adversarial attacks.
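The exact ensembling scheme is not spelled out in this summary; a minimal sketch of the core idea, combining independently trained models' predictions by majority vote (the helper name and toy data are hypothetical):

```python
import numpy as np

def majority_vote(prediction_sets):
    """Combine per-model class predictions (each shape (N,)) by majority vote."""
    preds = np.stack(prediction_sets)  # (num_models, N)
    # For each node, pick the most frequent predicted class across models.
    return np.apply_along_axis(
        lambda col: np.bincount(col).argmax(), 0, preds)

# Toy example with three "models" predicting classes for four nodes
model_preds = [np.array([0, 1, 1, 2]),
               np.array([0, 1, 2, 2]),
               np.array([1, 1, 2, 2])]
print(majority_vote(model_preds))  # -> [0 1 2 2]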
arXiv Detail & Related papers (2023-03-20T18:24:01Z)
- Revisiting Heterophily For Graph Neural Networks [42.41238892727136]
Graph Neural Networks (GNNs) extend basic Neural Networks (NNs) by using graph structures based on the relational inductive bias (homophily assumption).
Recent work has identified a non-trivial set of datasets where their performance compared to NNs is not satisfactory.
arXiv Detail & Related papers (2022-10-14T08:00:26Z)
- Exploiting Neighbor Effect: Conv-Agnostic GNNs Framework for Graphs with Heterophily [58.76759997223951]
We propose a new metric based on von Neumann entropy to re-examine the heterophily problem of GNNs.
We also propose a Conv-Agnostic GNN framework (CAGNNs) to enhance the performance of most GNNs on heterophily datasets.
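The summary does not define the metric precisely; a minimal sketch of the standard von Neumann graph entropy, computed from the eigenvalues of the trace-normalized Laplacian (assuming a dense adjacency matrix; the paper's exact variant may differ):

```python
import numpy as np

def von_neumann_entropy(adj):
    """Von Neumann entropy of a graph: Shannon entropy of the eigenvalues
    of the graph Laplacian rescaled to unit trace."""
    deg = np.diag(adj.sum(axis=1))
    lap = deg - adj
    rho = lap / np.trace(lap)           # density-matrix analogue of the graph
    eigvals = np.linalg.eigvalsh(rho)
    eigvals = eigvals[eigvals > 1e-12]  # drop (near-)zero eigenvalues
    return float(-(eigvals * np.log(eigvals)).sum())

# Toy example: 3-node path graph
adj = np.array([[0., 1., 0.],
                [1., 0., 1.],
                [0., 1., 0.]])
print(von_neumann_entropy(adj))  # about 0.562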
arXiv Detail & Related papers (2022-03-19T14:26:43Z)
- $p$-Laplacian Based Graph Neural Networks [27.747195341003263]
Graph neural networks (GNNs) have demonstrated superior performance for semi-supervised node classification on graphs.
We propose a new $p$-Laplacian based GNN model, termed as $p$GNN, whose message passing mechanism is derived from a discrete regularization framework.
We show that the new message passing mechanism works simultaneously as low-pass and high-pass filters, thus making $p$GNNs effective on both homophilic and heterophilic graphs.
arXiv Detail & Related papers (2021-11-14T13:16:28Z)
- Is Homophily a Necessity for Graph Neural Networks? [50.959340355849896]
Graph neural networks (GNNs) have shown great prowess in learning representations suitable for numerous graph-based machine learning tasks.
GNNs are widely believed to work well due to the homophily assumption ("like attracts like"), and fail to generalize to heterophilous graphs where dissimilar nodes connect.
Recent works design new architectures to overcome such heterophily-related limitations, citing poor baseline performance and new architecture improvements on a few heterophilous graph benchmark datasets as evidence for this notion.
In our experiments, we empirically find that standard graph convolutional networks (GCNs) can actually achieve better performance than such carefully designed methods on some commonly used heterophilous graphs.
arXiv Detail & Related papers (2021-06-11T02:44:00Z)
- The Surprising Power of Graph Neural Networks with Random Node Initialization [54.4101931234922]
Graph neural networks (GNNs) are effective models for representation learning on relational data.
Standard GNNs are limited in their expressive power, as they cannot distinguish graphs beyond the capability of the Weisfeiler-Leman graph isomorphism test.
In this work, we analyze the expressive power of GNNs with random node initialization (RNI).
We prove that these models are universal, a first such result for GNNs not relying on computationally demanding higher-order properties.
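Random node initialization itself is straightforward to sketch: concatenate freshly sampled random features onto each node's input before running a standard GNN (a minimal illustration, not the paper's exact experimental setup):

```python
import torch

def add_random_node_init(x, rni_dim=8):
    """Append freshly sampled random features to each node's input,
    breaking symmetry between otherwise indistinguishable nodes."""
    noise = torch.rand(x.size(0), rni_dim)  # resampled every forward pass
    return torch.cat([x, noise], dim=-1)

x = torch.ones(5, 3)                  # 5 nodes with identical features
print(add_random_node_init(x).shape)  # torch.Size([5, 11])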
arXiv Detail & Related papers (2020-10-02T19:53:05Z)
- Graph Neural Networks with Heterophily [40.23690407583509]
We propose a novel framework called CPGNN that generalizes GNNs for graphs with either homophily or heterophily.
We show that replacing the compatibility matrix in our framework with the identity matrix (which represents pure homophily) recovers GCN.
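The compatibility-matrix idea can be illustrated in a few lines: neighbors' class beliefs are mapped through a class-to-class matrix H before averaging, so the identity matrix recovers plain homophilous (GCN-style) smoothing. This is a sketch under simplified assumptions, not CPGNN's full training procedure:

```python
import numpy as np

def propagate_beliefs(adj_norm, beliefs, H):
    """One propagation step: neighbors' class beliefs are first mapped
    through the class compatibility matrix H, then averaged over edges."""
    return adj_norm @ (beliefs @ H)

# Toy setup: 3 nodes, 2 classes, row-normalized adjacency
adj_norm = np.array([[0., 1., 0.],
                     [.5, 0., .5],
                     [0., 1., 0.]])
beliefs = np.array([[1., 0.],
                    [0., 1.],
                    [1., 0.]])
H_homophily = np.eye(2)              # identity -> GCN-like smoothing
H_heterophily = np.array([[0., 1.],  # "opposites attract"
                          [1., 0.]])
print(propagate_beliefs(adj_norm, beliefs, H_homophily))
print(propagate_beliefs(adj_norm, beliefs, H_heterophily))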
arXiv Detail & Related papers (2020-09-28T18:29:36Z)
- Optimization and Generalization Analysis of Transduction through Gradient Boosting and Application to Multi-scale Graph Neural Networks [60.22494363676747]
It is known that current graph neural networks (GNNs) are difficult to make deep due to the problem known as over-smoothing.
Multi-scale GNNs are a promising approach for mitigating the over-smoothing problem.
We derive the optimization and generalization guarantees of transductive learning algorithms that include multi-scale GNNs.
arXiv Detail & Related papers (2020-06-15T17:06:17Z)