Breaking the Entanglement of Homophily and Heterophily in
Semi-supervised Node Classification
- URL: http://arxiv.org/abs/2312.04111v2
- Date: Mon, 11 Mar 2024 01:25:39 GMT
- Title: Breaking the Entanglement of Homophily and Heterophily in
Semi-supervised Node Classification
- Authors: Henan Sun, Xunkai Li, Zhengyu Wu, Daohan Su, Rong-Hua Li, Guoren Wang
- Abstract summary: We introduce AMUD, which quantifies the relationship between node profiles and topology from a statistical perspective.
We also propose ADPA as a new directed graph learning paradigm for AMUD.
- Score: 25.831508778029097
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, graph neural networks (GNNs) have shown prominent performance in
semi-supervised node classification by leveraging knowledge from the graph
database. However, most existing GNNs follow the homophily assumption, where
connected nodes are more likely to exhibit similar feature distributions and
the same labels, and such an assumption has proven to be vulnerable in a
growing number of practical applications. As a supplement, heterophily reflects
dissimilarity in connected nodes, which has gained significant attention in
graph learning. To this end, data engineers aim to develop a powerful GNN model
that can ensure performance under both homophily and heterophily. Despite
numerous attempts, most existing GNNs struggle to achieve optimal node
representations due to the constraints of undirected graphs. The neglect of
directed edges results in sub-optimal graph representations, thereby hindering
the capacity of GNNs. To address this issue, we introduce AMUD, which
quantifies the relationship between node profiles and topology from a
statistical perspective, offering valuable insights for Adaptively Modeling the
natural directed graphs as the Undirected or Directed graph to maximize the
benefits from subsequent graph learning. Furthermore, we propose Adaptive
Directed Pattern Aggregation (ADPA) as a new directed graph learning paradigm
for AMUD. Empirical studies have demonstrated that AMUD guides efficient graph
learning. Meanwhile, extensive experiments on 16 benchmark datasets
substantiate the impressive performance of ADPA, outperforming baselines by
significant margins of 3.96%.
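The modeling decision behind AMUD can be illustrated with a toy statistic: measure how much label signal out-edges and in-edges each carry, and keep the graph directed only when the two views disagree. The sketch below uses plain node homophily as that statistic, with hypothetical edges, labels, and threshold; it is not the paper's actual estimator, only a minimal illustration of the directed-versus-undirected decision.

```python
import numpy as np

def node_homophily(adj_lists, labels):
    """Average fraction of a node's neighbours that share its label
    (nodes without neighbours in the given view are skipped)."""
    fracs = []
    for v, nbrs in enumerate(adj_lists):
        if nbrs:
            fracs.append(np.mean([labels[u] == labels[v] for u in nbrs]))
    return float(np.mean(fracs))

def build_views(edges, n):
    """Out-neighbour, in-neighbour and undirected adjacency lists."""
    out_adj = [set() for _ in range(n)]
    in_adj = [set() for _ in range(n)]
    for u, v in edges:
        out_adj[u].add(v)
        in_adj[v].add(u)
    und_adj = [out_adj[i] | in_adj[i] for i in range(n)]
    return out_adj, in_adj, und_adj

# Toy directed graph with 6 nodes and 2 classes (hypothetical data).
edges = [(0, 1), (1, 2), (3, 0), (4, 0), (3, 4), (4, 5), (5, 3), (2, 5)]
labels = np.array([0, 0, 0, 1, 1, 1])

out_adj, in_adj, und_adj = build_views(edges, n=6)
h_out = node_homophily(out_adj, labels)
h_in = node_homophily(in_adj, labels)
h_und = node_homophily(und_adj, labels)

# In the spirit of AMUD: if the two directions carry clearly different
# label signal, keep the graph directed; otherwise symmetrise it.
keep_directed = abs(h_out - h_in) > 0.1
print(h_out, h_in, h_und, keep_directed)
```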
Related papers
- Learn from Heterophily: Heterophilous Information-enhanced Graph Neural Network [4.078409998614025]
Under heterophily, where nodes with different labels tend to be connected based on semantic meaning, Graph Neural Networks (GNNs) often exhibit suboptimal performance.
We propose and demonstrate that the valuable semantic information inherent in heterophily can be utilized effectively in graph learning.
We propose HiGNN, an innovative approach that constructs an additional graph structure integrating heterophilous information by leveraging the node distribution.
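As an illustration of building an auxiliary graph from node information, the sketch below connects each node to its most similar peers in feature space (cosine kNN). The features are hypothetical and the construction is a generic stand-in; HiGNN's actual graph is built from node distributions and may differ substantially.

```python
import numpy as np

def knn_graph(features, k=2):
    """Connect each node to its k most similar nodes by cosine similarity.
    Illustrative only; not necessarily the paper's exact construction."""
    x = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-12)
    sim = x @ x.T
    np.fill_diagonal(sim, -np.inf)          # no self-loops
    nbrs = np.argsort(-sim, axis=1)[:, :k]  # top-k most similar nodes
    return [(i, int(j)) for i in range(len(features)) for j in nbrs[i]]

# Hypothetical node features (rows = nodes).
feats = np.array([[1.0, 0.0], [0.9, 0.1], [0.1, 1.0], [0.0, 0.9]])
extra_edges = knn_graph(feats, k=1)
print(extra_edges)  # edges of the auxiliary graph used alongside the original one
```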
arXiv Detail & Related papers (2024-03-26T03:29:42Z)
- Design Your Own Universe: A Physics-Informed Agnostic Method for Enhancing Graph Neural Networks [34.16727363891593]
We propose a model-agnostic enhancement framework for Graph Neural Networks (GNNs)
This framework enriches the graph structure by introducing additional nodes and rewiring connections with both positive and negative weights.
We theoretically verify that GNNs enhanced through our approach can effectively circumvent the over-smoothing issue and exhibit robustness against over-squashing.
Empirical validations on benchmarks for homophilic, heterophilic graphs, and long-term graph datasets show that GNNs enhanced by our method significantly outperform their original counterparts.
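A rough sketch of the "extra nodes plus signed rewiring" idea, under the assumption that one auxiliary node is added per class and connected to the original nodes with a positive or negative weight depending on their pseudo-labels. This is one plausible instantiation for illustration, not the paper's physics-informed procedure.

```python
import numpy as np

def enrich_graph(adj, pseudo_labels, n_classes, w_pos=1.0, w_neg=-0.5):
    """Append one auxiliary node per class and wire it to every original node
    with a positive weight if the node's (pseudo) label matches the class and
    a negative weight otherwise."""
    n = adj.shape[0]
    big = np.zeros((n + n_classes, n + n_classes))
    big[:n, :n] = adj
    for c in range(n_classes):
        aux = n + c
        for v in range(n):
            w = w_pos if pseudo_labels[v] == c else w_neg
            big[aux, v] = big[v, aux] = w
    return big

adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # toy graph
enriched = enrich_graph(adj, pseudo_labels=[0, 0, 1], n_classes=2)
print(enriched.shape)  # (5, 5): 3 original nodes + 2 auxiliary class nodes
```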
arXiv Detail & Related papers (2024-01-26T00:47:43Z)
- Label Deconvolution for Node Representation Learning on Large-scale Attributed Graphs against Learning Bias [75.44877675117749]
We propose an efficient label regularization technique, namely Label Deconvolution (LD), to alleviate the learning bias by a novel and highly scalable approximation to the inverse mapping of GNNs.
Experiments demonstrate that LD significantly outperforms state-of-the-art methods on Open Graph Benchmark datasets.
arXiv Detail & Related papers (2023-09-26T13:09:43Z)
- Evolving Computation Graphs [20.094508902123778]
Graph neural networks (GNNs) have demonstrated success in modeling relational data, especially for data that exhibits homophily.
We propose Evolving Computation Graphs (ECGs), a novel method for enhancing GNNs on heterophilic datasets.
arXiv Detail & Related papers (2023-06-22T14:58:18Z)
- DEGREE: Decomposition Based Explanation For Graph Neural Networks [55.38873296761104]
We propose DEGREE to provide a faithful explanation for GNN predictions.
By decomposing the information generation and aggregation mechanism of GNNs, DEGREE allows tracking the contributions of specific components of the input graph to the final prediction.
We also design a subgraph-level interpretation algorithm to reveal complex interactions between graph nodes that are overlooked by previous methods.
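To make "tracking contributions of input-graph components" concrete, the sketch below scores each node by how much the prediction for a target node changes when that node is removed. This leave-one-out attribution on a hypothetical mean-aggregation model is a generic baseline, not DEGREE's decomposition mechanism.

```python
import numpy as np

def simple_gnn_score(adj, feats, w):
    """One round of mean aggregation followed by a linear readout on node 0."""
    deg = adj.sum(1, keepdims=True) + 1e-12
    h = (adj @ feats) / deg
    return float(h[0] @ w)

def node_contributions(adj, feats, w):
    """Leave-one-node-out attribution: how much the prediction for node 0
    changes when each other node is removed from the graph."""
    base = simple_gnn_score(adj, feats, w)
    scores = {}
    for v in range(1, adj.shape[0]):
        masked = adj.copy()
        masked[v, :] = masked[:, v] = 0.0
        scores[v] = base - simple_gnn_score(masked, feats, w)
    return scores

adj = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)  # toy graph
feats = np.array([[0.2, 0.1], [1.0, 0.0], [0.0, 1.0]])
w = np.array([1.0, -1.0])
print(node_contributions(adj, feats, w))
```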
arXiv Detail & Related papers (2023-05-22T10:29:52Z)
- Edge Directionality Improves Learning on Heterophilic Graphs [42.5099159786891]
We introduce Directed Graph Neural Network (Dir-GNN), a novel framework for deep learning on directed graphs.
Dir-GNN can be used to extend any Message Passing Neural Network (MPNN) to account for edge directionality information.
We prove that Dir-GNN matches the expressivity of the Directed Weisfeiler-Lehman test, exceeding that of conventional MPNNs.
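A minimal sketch of direction-aware message passing in the spirit of Dir-GNN: aggregate separately over out-neighbors (via A) and in-neighbors (via A^T) with distinct weights, then mix the two messages. The weights, features, and mixing coefficient alpha below are hypothetical, and the published model includes further components.

```python
import numpy as np

def row_normalize(a):
    """Row-normalise an adjacency matrix (mean aggregation)."""
    d = a.sum(1, keepdims=True)
    return np.divide(a, d, out=np.zeros_like(a), where=d > 0)

def dir_mp_layer(adj, h, w_out, w_in, alpha=0.5):
    """One direction-aware message-passing layer: aggregate separately over
    out-neighbours (A) and in-neighbours (A^T), then mix the two messages."""
    msg_out = row_normalize(adj) @ h @ w_out    # messages along edge direction
    msg_in = row_normalize(adj.T) @ h @ w_in    # messages against edge direction
    return np.tanh(alpha * msg_out + (1 - alpha) * msg_in)

rng = np.random.default_rng(0)
adj = np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]], dtype=float)  # directed 3-cycle
h = rng.normal(size=(3, 4))                                     # node features
w_out, w_in = rng.normal(size=(4, 4)), rng.normal(size=(4, 4))
print(dir_mp_layer(adj, h, w_out, w_in).shape)  # (3, 4)
```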
arXiv Detail & Related papers (2023-05-17T18:06:43Z)
- Exploiting Neighbor Effect: Conv-Agnostic GNNs Framework for Graphs with Heterophily [58.76759997223951]
We propose a new metric based on von Neumann entropy to re-examine the heterophily problem of GNNs.
We also propose a Conv-Agnostic GNN framework (CAGNNs) to enhance the performance of most GNNs on heterophily datasets.
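The sketch below computes a standard von Neumann graph entropy by treating the trace-normalized Laplacian as a density matrix; the paper's neighbor-effect metric builds on this quantity but may not use exactly this formula, and the adjacency matrix shown is a toy example.

```python
import numpy as np

def von_neumann_graph_entropy(adj):
    """Von Neumann entropy of a graph: treat the trace-normalised combinatorial
    Laplacian as a density matrix and take -sum(lam * log lam) over its
    eigenvalues."""
    deg = np.diag(adj.sum(1))
    lap = deg - adj
    rho = lap / np.trace(lap)
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]            # 0 * log 0 is taken as 0
    return float(-np.sum(lam * np.log(lam)))

adj = np.array([[0, 1, 1, 0],
                [1, 0, 1, 0],
                [1, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)  # toy undirected graph
print(von_neumann_graph_entropy(adj))
```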
arXiv Detail & Related papers (2022-03-19T14:26:43Z)
- Graph Neural Networks for Graphs with Heterophily: A Survey [98.45621222357397]
We provide a comprehensive review of graph neural networks (GNNs) for heterophilic graphs.
Specifically, we propose a systematic taxonomy that essentially governs existing heterophilic GNN models.
We discuss the correlation between graph heterophily and various graph research domains, aiming to facilitate the development of more effective GNNs.
arXiv Detail & Related papers (2022-02-14T23:07:47Z)
- Is Homophily a Necessity for Graph Neural Networks? [50.959340355849896]
Graph neural networks (GNNs) have shown great prowess in learning representations suitable for numerous graph-based machine learning tasks.
GNNs are widely believed to work well due to the homophily assumption ("like attracts like"), and fail to generalize to heterophilous graphs where dissimilar nodes connect.
Recent works design new architectures to overcome such heterophily-related limitations, citing poor baseline performance and new architecture improvements on a few heterophilous graph benchmark datasets as evidence for this notion.
In our experiments, we empirically find that standard graph convolutional networks (GCNs) can actually achieve better performance than such carefully designed architectures on some commonly used heterophilous graphs.
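For reference, the baseline in question is the standard GCN with symmetrically normalized adjacency; a minimal two-layer forward pass is sketched below with hypothetical weights and features.

```python
import numpy as np

def gcn_norm(adj):
    """Symmetrically normalised adjacency with self-loops:
    D^{-1/2} (A + I) D^{-1/2}, the propagation operator of a standard GCN."""
    a_hat = adj + np.eye(adj.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(1))
    return a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def gcn_forward(adj, x, w1, w2):
    """Two-layer GCN forward pass (ReLU between layers, logits out)."""
    a_hat = gcn_norm(adj)
    h = np.maximum(a_hat @ x @ w1, 0.0)
    return a_hat @ h @ w2

rng = np.random.default_rng(0)
adj = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
x = rng.normal(size=(4, 5))                         # node features
w1, w2 = rng.normal(size=(5, 8)), rng.normal(size=(8, 3))
print(gcn_forward(adj, x, w1, w2).shape)            # (4, 3): per-node class logits
```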
arXiv Detail & Related papers (2021-06-11T02:44:00Z)
- Uniting Heterogeneity, Inductiveness, and Efficiency for Graph Representation Learning [68.97378785686723]
Graph neural networks (GNNs) have greatly advanced the performance of node representation learning on graphs.
However, a majority of GNNs are designed only for homogeneous graphs, leading to inferior adaptivity to the more informative heterogeneous graphs.
We propose a novel inductive, meta path-free message passing scheme that packs up heterogeneous node features with their associated edges from both low- and high-order neighbor nodes.
arXiv Detail & Related papers (2021-04-04T23:31:39Z)
- Multi-grained Semantics-aware Graph Neural Networks [13.720544777078642]
Graph Neural Networks (GNNs) are powerful techniques in representation learning for graphs.
This work proposes a unified model, AdamGNN, to interactively learn node and graph representations.
Experiments on 14 real-world graph datasets show that AdamGNN can significantly outperform 17 competing models on both node- and graph-wise tasks.
arXiv Detail & Related papers (2020-10-01T07:52:06Z)