HeRB: Heterophily-Resolved Structure Balancer for Graph Neural Networks
- URL: http://arxiv.org/abs/2504.17276v1
- Date: Thu, 24 Apr 2025 06:04:59 GMT
- Title: HeRB: Heterophily-Resolved Structure Balancer for Graph Neural Networks
- Authors: Ke-Jia Chen, Wenhui Mu, Zheng Liu
- Abstract summary: Heterophily-Resolved Structure Balancer (HeRB) for Graph Neural Networks (GNNs). HeRB consists of two innovative components: 1) a heterophily-lessening augmentation module which serves to reduce inter-class edges and increase intra-class edges; 2) a homophilic knowledge transfer mechanism to convey homophilic information from head nodes to tail nodes. Experimental results demonstrate that HeRB achieves superior performance on two homophilic and six heterophilic benchmark datasets.
- Score: 3.6560264185068916
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent research has witnessed the remarkable progress of Graph Neural Networks (GNNs) in the realm of graph data representation. However, GNNs still encounter the challenge of structural imbalance. Prior solutions to this problem did not take graph heterophily into account, namely that connected nodes possess distinct labels or features, thus resulting in a deficiency in effectiveness. Upon verifying the impact of heterophily on solving the structural imbalance problem, we propose to rectify the heterophily first and then transfer homophilic knowledge. To this end, we devise a method named HeRB (Heterophily-Resolved Structure Balancer) for GNNs. HeRB consists of two innovative components: 1) a heterophily-lessening augmentation module which serves to reduce inter-class edges and increase intra-class edges; 2) a homophilic knowledge transfer mechanism to convey homophilic information from head nodes to tail nodes. Experimental results demonstrate that HeRB achieves superior performance on two homophilic and six heterophilic benchmark datasets, and the ablation studies further validate the efficacy of the two proposed components.
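For intuition only, the following is a minimal sketch of the heterophily-lessening step described in the abstract (dropping inter-class edges and adding a few intra-class ones), written in plain PyTorch. The function name, the `add_ratio` parameter, the use of pseudo-labels, and the random sampling strategy are illustrative assumptions and do not reproduce the authors' augmentation module.

```python
# Minimal sketch of heterophily-lessening augmentation (illustrative only):
# drop edges whose endpoints carry different (pseudo-)labels and add a few
# random intra-class edges. The real HeRB module is more involved.
import torch

def lessen_heterophily(edge_index, pseudo_labels, add_ratio=0.05, seed=0):
    """edge_index: [2, E] LongTensor; pseudo_labels: [N] LongTensor of class ids."""
    src, dst = edge_index

    # 1) Remove inter-class edges.
    keep = pseudo_labels[src] == pseudo_labels[dst]
    kept_edges = edge_index[:, keep]

    # 2) Add roughly add_ratio * E random intra-class edges, spread over classes.
    gen = torch.Generator().manual_seed(seed)
    classes = pseudo_labels.unique()
    per_class = max(1, int(add_ratio * edge_index.size(1)) // classes.numel())
    new_src, new_dst = [], []
    for c in classes:
        nodes = (pseudo_labels == c).nonzero(as_tuple=True)[0]
        if nodes.numel() < 2:
            continue
        i = nodes[torch.randint(nodes.numel(), (per_class,), generator=gen)]
        j = nodes[torch.randint(nodes.numel(), (per_class,), generator=gen)]
        mask = i != j  # avoid self-loops
        new_src.append(i[mask])
        new_dst.append(j[mask])

    if new_src:
        added = torch.stack([torch.cat(new_src), torch.cat(new_dst)])
        return torch.cat([kept_edges, added], dim=1)
    return kept_edges
```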
Related papers
- THeGCN: Temporal Heterophilic Graph Convolutional Network [51.25112923442657]
We propose the Temporal Heterophilic Graph Convolutional Network (THeGCN) to accurately capture both edge (spatial) heterophily and temporal heterophily.
The THeGCN model consists of two key components: a sampler and an aggregator.
Extensive experiments conducted on 5 real-world datasets validate the efficacy of THeGCN.
arXiv Detail & Related papers (2024-12-21T01:52:03Z) - Redesigning graph filter-based GNNs to relax the homophily assumption [31.368672838207022]
Graph neural networks (GNNs) have become a workhorse approach for learning from data defined over irregular domains.
We present a simple yet effective architecture designed to mitigate the limitations of the homophily assumption.
The proposed architecture reinterprets the role of graph filters in convolutional GNNs, resulting in a more general architecture.
arXiv Detail & Related papers (2024-09-13T09:43:36Z) - The Heterophilic Graph Learning Handbook: Benchmarks, Models, Theoretical Analysis, Applications and Challenges [101.83124435649358]
The homophily principle states that nodes with the same labels or similar attributes are more likely to be connected.
Recent work has identified a non-trivial set of datasets where the performance of GNNs compared to NNs is not satisfactory.
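For reference, heterophily is commonly quantified with the edge homophily ratio, i.e., the fraction of edges whose endpoints share a label. The snippet below illustrates that standard definition; it is not code from the handbook.

```python
# Edge homophily ratio: fraction of edges whose endpoints share a label.
# Values near 1 indicate a homophilic graph, values near 0 a heterophilic one.
import torch

def edge_homophily(edge_index: torch.Tensor, labels: torch.Tensor) -> float:
    """edge_index: [2, E] LongTensor; labels: [N] LongTensor of class ids."""
    src, dst = edge_index
    return (labels[src] == labels[dst]).float().mean().item()
```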
arXiv Detail & Related papers (2024-07-12T18:04:32Z) - Heterophilous Distribution Propagation for Graph Neural Networks [23.897535976924722]
We propose heterophilous distribution propagation (HDP) for graph neural networks.
Instead of aggregating information from all neighborhoods, HDP adaptively separates the neighbors into homophilous and heterophilous parts.
We conduct extensive experiments on 9 benchmark datasets with different levels of homophily.
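As a rough illustration of the neighbor-separation idea (not HDP's learned mechanism), one could split a node's neighborhood by embedding similarity before aggregating; the cosine-similarity test and the fixed threshold below are assumptions made for the sketch.

```python
# Toy neighbor split: partition a node's neighbors into (pseudo-)homophilous
# and heterophilous sets by thresholding embedding similarity. HDP learns
# this separation end to end; this snippet only conveys the general idea.
import torch
import torch.nn.functional as F

def split_neighbors(h, edge_index, node, threshold=0.5):
    """h: [N, d] node embeddings; edge_index: [2, E]; node: target node id."""
    src, dst = edge_index
    neighbors = dst[src == node]
    sims = F.cosine_similarity(h[node].unsqueeze(0), h[neighbors], dim=-1)
    homophilous = neighbors[sims >= threshold]
    heterophilous = neighbors[sims < threshold]
    return homophilous, heterophilous
```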
arXiv Detail & Related papers (2024-05-31T06:40:56Z) - Refining Latent Homophilic Structures over Heterophilic Graphs for Robust Graph Convolution Networks [23.61142321685077]
Graph convolution networks (GCNs) are extensively utilized in various graph tasks to mine knowledge from spatial data.
Our study marks a pioneering attempt to quantitatively investigate GCN robustness over omnipresent heterophilic graphs for node classification.
We present a novel method that aims to harden GCNs by automatically learning Latent Homophilic Structures over heterophilic graphs.
arXiv Detail & Related papers (2023-12-27T05:35:14Z) - Demystifying Structural Disparity in Graph Neural Networks: Can One Size Fit All? [61.35457647107439]
Most real-world homophilic and heterophilic graphs comprise a mixture of nodes in both homophilic and heterophilic structural patterns.
We provide evidence that Graph Neural Networks (GNNs) on node classification typically perform admirably on homophilic nodes.
We then propose a rigorous, non-i.i.d. PAC-Bayesian generalization bound for GNNs, revealing reasons for the performance disparity.
arXiv Detail & Related papers (2023-06-02T07:46:20Z) - Heterophily-Aware Graph Attention Network [42.640057865981156]
Graph Neural Networks (GNNs) have shown remarkable success in graph representation learning.
Existing heterophilic GNNs tend to ignore the modeling of heterophily of each edge, which is also a vital part in tackling the heterophily problem.
We propose a novel Heterophily-Aware Graph Attention Network (HA-GAT) by fully exploring and utilizing the local distribution as the underlying heterophily.
arXiv Detail & Related papers (2023-02-07T03:21:55Z) - Resisting Graph Adversarial Attack via Cooperative Homophilous Augmentation [60.50994154879244]
Recent studies show that Graph Neural Networks are vulnerable and easily fooled by small perturbations.
In this work, we focus on an emerging but critical attack, namely the Graph Injection Attack (GIA).
We propose CHAGNN, a general defense framework against GIA based on cooperative homophilous augmentation of graph data and model.
arXiv Detail & Related papers (2022-11-15T11:44:31Z) - Restructuring Graph for Higher Homophily via Adaptive Spectral Clustering [7.223313563198697]
We show that a graph restructuring method can significantly boost the performance of six classical GNNs by an average of 25% on less-homophilic graphs.
The boosted performance is comparable to state-of-the-art methods.
arXiv Detail & Related papers (2022-06-06T06:38:53Z) - Heterogeneous Graph Neural Networks using Self-supervised Reciprocally Contrastive Learning [102.9138736545956]
Heterogeneous graph neural network (HGNN) is a very popular technique for the modeling and analysis of heterogeneous graphs.
We develop, for the first time, a novel and robust heterogeneous graph contrastive learning approach, namely HGCL, which introduces two views guided respectively by node attributes and graph topologies.
In this new approach, we adopt distinct and well-suited attribute and topology fusion mechanisms in the two views, which are conducive to mining relevant information from attributes and topologies separately.
arXiv Detail & Related papers (2022-04-30T12:57:02Z) - Is Homophily a Necessity for Graph Neural Networks? [50.959340355849896]
Graph neural networks (GNNs) have shown great prowess in learning representations suitable for numerous graph-based machine learning tasks.
GNNs are widely believed to work well due to the homophily assumption ("like attracts like"), and fail to generalize to heterophilous graphs where dissimilar nodes connect.
Recent works design new architectures to overcome such heterophily-related limitations, citing poor baseline performance and new architecture improvements on a few heterophilous graph benchmark datasets as evidence for this notion.
In our experiments, we empirically find that standard graph convolutional networks (GCNs) can actually achieve better performance than such carefully designed methods on some commonly used heterophilous graphs.
arXiv Detail & Related papers (2021-06-11T02:44:00Z) - Two Sides of the Same Coin: Heterophily and Oversmoothing in Graph Convolutional Neural Networks [33.25212467404069]
We theoretically characterize the connections between heterophily and oversmoothing.
We design a model that addresses the discrepancy in features and degrees between neighbors by incorporating signed messages and learned degree corrections.
Our experiments on 9 real networks show that our model achieves state-of-the-art performance under heterophily.
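A simplified sketch of the signed-message idea: neighbors judged dissimilar contribute with a negative sign during aggregation, which helps counteract oversmoothing under heterophily. The cosine-similarity sign test and the residual update below are assumptions made for illustration; the published model additionally learns degree corrections.

```python
# Simplified signed aggregation: messages from dissimilar neighbors get a
# negative sign. This only illustrates the sign mechanism, not the full model.
import torch
import torch.nn.functional as F

def signed_aggregate(h, edge_index):
    """h: [N, d] node features; edge_index: [2, E]; returns one signed-aggregation step."""
    src, dst = edge_index
    sign = torch.sign(F.cosine_similarity(h[src], h[dst], dim=-1))          # [E]
    msgs = torch.zeros_like(h).index_add_(0, dst, sign.unsqueeze(-1) * h[src])
    deg = torch.zeros(h.size(0), dtype=h.dtype).index_add_(0, dst, torch.ones_like(sign))
    return h + msgs / deg.clamp(min=1).unsqueeze(-1)  # residual + mean signed message
```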
arXiv Detail & Related papers (2021-02-12T11:52:34Z)