Revisiting Heterophily For Graph Neural Networks
- URL: http://arxiv.org/abs/2210.07606v1
- Date: Fri, 14 Oct 2022 08:00:26 GMT
- Title: Revisiting Heterophily For Graph Neural Networks
- Authors: Sitao Luan, Chenqing Hua, Qincheng Lu, Jiaqi Zhu, Mingde Zhao, Shuyuan
Zhang, Xiao-Wen Chang, Doina Precup
- Abstract summary: Graph Neural Networks (GNNs) extend basic Neural Networks (NNs) by using graph structures based on the relational inductive bias (homophily assumption).
Recent work has identified a non-trivial set of datasets where their performance compared to NNs is not satisfactory.
- Score: 42.41238892727136
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph Neural Networks (GNNs) extend basic Neural Networks (NNs) by using
graph structures based on the relational inductive bias (homophily assumption).
While GNNs have been commonly believed to outperform NNs in real-world tasks,
recent work has identified a non-trivial set of datasets where their
performance compared to NNs is not satisfactory. Heterophily has been
considered the main cause of this empirical observation and numerous works have
been put forward to address it. In this paper, we first revisit the widely used
homophily metrics and point out that their consideration of only graph-label
consistency is a shortcoming. Then, we study heterophily from the perspective
of post-aggregation node similarity and define new homophily metrics, which are
potentially advantageous compared to existing ones. Based on this
investigation, we prove that some harmful cases of heterophily can be
effectively addressed by local diversification operation. Then, we propose the
Adaptive Channel Mixing (ACM), a framework that adaptively exploits
aggregation, diversification and identity channels node-wise to extract richer
localized information for diverse node heterophily situations. ACM is more
powerful than the commonly used uni-channel framework for node classification
tasks on heterophilic graphs and is easy to implement in baseline GNN layers.
When evaluated on 10 benchmark node classification tasks, ACM-augmented
baselines consistently achieve significant performance gains, exceeding
state-of-the-art GNNs on most tasks without incurring a significant
computational burden.
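The abstract contrasts classic graph-label homophily metrics with ACM's three channels. Below is a minimal, hypothetical PyTorch sketch of both ideas: the standard edge-homophily metric that the paper revisits, and a three-channel layer that mixes aggregation (low-pass), diversification (high-pass) and identity signals with node-wise softmax gates. The dense adjacency, the per-channel scalar gates and all names are illustrative assumptions, not the authors' exact parameterization.

```python
# Hypothetical sketch of the ideas summarized above; not the authors' code.
import torch
import torch.nn as nn


def edge_homophily(edge_index: torch.Tensor, labels: torch.Tensor) -> float:
    """Classic edge homophily: fraction of edges joining same-label nodes.

    This is the graph-label-consistency style of metric that the paper
    argues is a shortcoming on its own.
    """
    src, dst = edge_index  # edge_index: [2, num_edges]
    return (labels[src] == labels[dst]).float().mean().item()


class ACMStyleLayer(nn.Module):
    """One layer mixing aggregation (low-pass), diversification (high-pass)
    and identity channels with node-wise learned weights (a sketch)."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.w_low = nn.Linear(in_dim, out_dim)   # aggregation channel
        self.w_high = nn.Linear(in_dim, out_dim)  # diversification channel
        self.w_id = nn.Linear(in_dim, out_dim)    # identity channel
        # One scalar gate per channel per node, normalized with softmax.
        self.gates = nn.ModuleList([nn.Linear(out_dim, 1) for _ in range(3)])

    def forward(self, x: torch.Tensor, adj_norm: torch.Tensor) -> torch.Tensor:
        # adj_norm: dense, symmetrically normalized adjacency [N, N].
        eye = torch.eye(adj_norm.size(0), device=x.device)
        h_low = self.w_low(adj_norm @ x)            # smooths over neighbors
        h_high = self.w_high((eye - adj_norm) @ x)  # local diversification
        h_id = self.w_id(x)                         # keeps raw features
        # Node-wise mixing weights over the three channels: [N, 3].
        alpha = torch.softmax(
            torch.cat([g(h) for g, h in
                       zip(self.gates, (h_low, h_high, h_id))], dim=-1),
            dim=-1)
        return (alpha[:, 0:1] * h_low + alpha[:, 1:2] * h_high
                + alpha[:, 2:3] * h_id)
```

In this sketch the high-pass channel (I - A)X plays the role of the local diversification operation mentioned in the abstract: for nodes whose neighbors mostly disagree with them, the corresponding gate can dominate node-wise, while homophilic nodes can lean on the low-pass channel.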
Related papers
- Learn from Heterophily: Heterophilous Information-enhanced Graph Neural Network [4.078409998614025]
Under heterophily, where nodes with different labels tend to be connected based on semantic meaning, Graph Neural Networks (GNNs) often exhibit suboptimal performance.
We propose and demonstrate that the valuable semantic information inherent in heterophily can be utilized effectively in graph learning.
We propose HiGNN, an innovative approach that constructs an additional graph structure integrating heterophilous information by leveraging the node distribution.
arXiv Detail & Related papers (2024-03-26T03:29:42Z)
- Evolving Computation Graphs [20.094508902123778]
Graph neural networks (GNNs) have demonstrated success in modeling relational data, especially for data that exhibits homophily.
We propose Evolving Computation Graphs (ECGs), a novel method for enhancing GNNs on heterophilic datasets.
arXiv Detail & Related papers (2023-06-22T14:58:18Z)
- Relation Embedding based Graph Neural Networks for Handling Heterogeneous Graph [58.99478502486377]
We propose a simple yet efficient framework that equips homogeneous GNNs with adequate ability to handle heterogeneous graphs.
Specifically, we propose Relation Embedding based Graph Neural Networks (RE-GNNs), which employ only one parameter per relation to embed the importance of edge type relations and self-loop connections.
arXiv Detail & Related papers (2022-09-23T05:24:18Z)
- Exploiting Neighbor Effect: Conv-Agnostic GNNs Framework for Graphs with Heterophily [58.76759997223951]
We propose a new metric based on von Neumann entropy to re-examine the heterophily problem of GNNs.
We also propose a Conv-Agnostic GNN framework (CAGNNs) to enhance the performance of most GNNs on heterophily datasets.
arXiv Detail & Related papers (2022-03-19T14:26:43Z)
- Is Heterophily A Real Nightmare For Graph Neural Networks To Do Node Classification? [44.71818395535755]
Graph Neural Networks (GNNs) extend basic Neural Networks (NNs) by using graph structures based on the inductive bias (homophily assumption).
The performance advantages of GNNs over graph-agnostic NNs do not appear to be generally satisfactory.
Heterophily has been considered a main cause, and numerous works have been put forward to address it.
arXiv Detail & Related papers (2021-09-12T23:57:05Z)
- Is Homophily a Necessity for Graph Neural Networks? [50.959340355849896]
Graph neural networks (GNNs) have shown great prowess in learning representations suitable for numerous graph-based machine learning tasks.
GNNs are widely believed to work well due to the homophily assumption ("like attracts like"), and fail to generalize to heterophilous graphs where dissimilar nodes connect.
Recent works design new architectures to overcome such heterophily-related limitations, citing poor baseline performance and new architecture improvements on a few heterophilous graph benchmark datasets as evidence for this notion.
In our experiments, we empirically find that standard graph convolutional networks (GCNs) can actually achieve better performance than such carefully designed methods on some commonly used heterophilous graphs.
arXiv Detail & Related papers (2021-06-11T02:44:00Z)
- On Local Aggregation in Heterophilic Graphs [11.100606980915144]
We show that properly tuned classical GNNs and multi-layer perceptrons match or exceed the accuracy of recent long-range aggregation methods on heterophilic graphs.
We propose the Neighborhood Information Content (NIC) metric, a novel information-theoretic graph metric.
arXiv Detail & Related papers (2021-06-06T19:12:31Z)
- Graph Neural Networks with Heterophily [40.23690407583509]
We propose a novel framework called CPGNN that generalizes GNNs for graphs with either homophily or heterophily.
We show that replacing the compatibility matrix in our framework with the identity (which represents pure homophily) reduces it to GCN (see the sketch after this list).
arXiv Detail & Related papers (2020-09-28T18:29:36Z)
- Towards Deeper Graph Neural Networks with Differentiable Group Normalization [61.20639338417576]
Graph neural networks (GNNs) learn the representation of a node by aggregating its neighbors.
Over-smoothing is one of the key issues which limit the performance of GNNs as the number of layers increases.
We introduce two over-smoothing metrics and a novel technique, differentiable group normalization (DGN).
arXiv Detail & Related papers (2020-06-12T07:18:02Z)
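The CPGNN entry above claims that an identity compatibility matrix collapses belief propagation to plain GCN-style aggregation. A tiny, hypothetical NumPy sketch of that reduction (the function name and shapes are illustrative assumptions, not CPGNN's actual implementation):

```python
# Hypothetical illustration of the compatibility-matrix reduction; not CPGNN code.
import numpy as np


def propagate_beliefs(adj_norm, beliefs, h_compat):
    """One propagation step: neighbors vote through a class-compatibility matrix.

    adj_norm: [N, N] normalized adjacency
    beliefs:  [N, C] per-node class beliefs
    h_compat: [C, C] class compatibility (identity = pure homophily)
    """
    return adj_norm @ beliefs @ h_compat


rng = np.random.default_rng(0)
n_nodes, n_classes = 4, 3
adj = rng.random((n_nodes, n_nodes))
adj = (adj + adj.T) / 2  # symmetric toy adjacency
beliefs = rng.random((n_nodes, n_classes))

# With h_compat = I the step collapses to plain homophilous aggregation.
assert np.allclose(propagate_beliefs(adj, beliefs, np.eye(n_classes)),
                   adj @ beliefs)
```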