Refining Latent Homophilic Structures over Heterophilic Graphs for
Robust Graph Convolution Networks
- URL: http://arxiv.org/abs/2312.16418v1
- Date: Wed, 27 Dec 2023 05:35:14 GMT
- Authors: Chenyang Qiu, Guoshun Nan, Tianyu Xiong, Wendi Deng, Di Wang, Zhiyang
Teng, Lijuan Sun, Qimei Cui, Xiaofeng Tao
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph convolution networks (GCNs) are extensively utilized in various graph
tasks to mine knowledge from spatial data. Our study marks the pioneering
attempt to quantitatively investigate the GCN robustness over omnipresent
heterophilic graphs for node classification. We uncover that the predominant
vulnerability is caused by the structural out-of-distribution (OOD) issue. This
finding motivates us to present a novel method that aims to harden GCNs by
automatically learning Latent Homophilic Structures over heterophilic graphs.
We term this methodology LHS. To elaborate, our initial step involves
learning a latent structure by employing a novel self-expressive technique
based on multi-node interactions. Subsequently, the structure is refined using
a pairwisely constrained dual-view contrastive learning approach. We
iteratively perform the above procedure, enabling a GCN model to aggregate
information in a homophilic way on heterophilic graphs. Armed with such an
adaptable structure, we can properly mitigate the structural OOD threats over
heterophilic graphs. Experiments on various benchmarks show the effectiveness
of the proposed LHS approach for robust GCNs.
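The two-stage pipeline described in the abstract (self-expressive latent structure learning, then homophilic aggregation) can be sketched in a simplified form. The closed-form ridge solution for the self-expressive step and the toy normalized-aggregation step below are illustrative assumptions, not the paper's actual implementation; the dual-view contrastive refinement stage is omitted.

```python
import numpy as np

def self_expressive_structure(X, lam=0.1):
    """Learn a latent structure S such that X is approximately S @ X
    (self-expressiveness). Ridge-regularized closed form:
    S = X X^T (X X^T + lam I)^{-1}."""
    n = X.shape[0]
    G = X @ X.T
    S = G @ np.linalg.inv(G + lam * np.eye(n))
    np.fill_diagonal(S, 0.0)                 # drop self-connections
    return 0.5 * (np.abs(S) + np.abs(S.T))   # symmetric, non-negative weights

def gcn_aggregate(S, X):
    """One symmetrically normalized aggregation step over the learned structure."""
    A = S + np.eye(S.shape[0])               # re-add self-loops for aggregation
    d_inv_sqrt = 1.0 / np.sqrt(A.sum(axis=1))
    A_norm = d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    return A_norm @ X

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 4))                  # toy node features
S = self_expressive_structure(X)
H = gcn_aggregate(S, X)
print(H.shape)  # (6, 4)
```

In the full method these two steps would be iterated, with the refined structure replacing the raw (heterophilic) adjacency at each round.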
Related papers
- Learn from Heterophily: Heterophilous Information-enhanced Graph Neural Network [4.078409998614025]
Under heterophily, where nodes with different labels tend to be connected based on semantic meanings, Graph Neural Networks (GNNs) often exhibit suboptimal performance.
We propose and demonstrate that the valuable semantic information inherent in heterophily can be utilized effectively in graph learning.
We propose HiGNN, an innovative approach that constructs an additional graph structure integrating heterophilous information by leveraging node distribution.
arXiv Detail & Related papers (2024-03-26T03:29:42Z)
- Self-Attention Empowered Graph Convolutional Network for Structure Learning and Node Embedding [5.164875580197953]
In representation learning on graph-structured data, many popular graph neural networks (GNNs) fail to capture long-range dependencies.
This paper proposes a novel graph learning framework called the graph convolutional network with self-attention (GCN-SA).
The proposed scheme exhibits an exceptional generalization capability in node-level representation learning.
arXiv Detail & Related papers (2024-03-06T05:00:31Z)
- Demystifying Structural Disparity in Graph Neural Networks: Can One Size Fit All? [61.35457647107439]
Most real-world graphs, whether nominally homophilic or heterophilic, contain a mixture of nodes exhibiting both homophilic and heterophilic structural patterns.
We provide evidence that Graph Neural Networks (GNNs) on node classification typically perform admirably on homophilic nodes.
We then propose a rigorous, non-i.i.d. PAC-Bayesian generalization bound for GNNs, revealing reasons for the performance disparity.
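The homophilic-versus-heterophilic node distinction drawn here is usually quantified with a per-node homophily ratio: the fraction of a node's neighbours that share its label. A minimal sketch of that standard definition (not specific to any of the papers listed):

```python
import numpy as np

def node_homophily(edges, labels):
    """Per-node homophily ratio: fraction of a node's neighbours sharing
    its label (1.0 = fully homophilic, 0.0 = fully heterophilic)."""
    n = len(labels)
    same = np.zeros(n)
    deg = np.zeros(n)
    for u, v in edges:                 # undirected: count both directions
        for a, b in ((u, v), (v, u)):
            deg[a] += 1
            if labels[a] == labels[b]:
                same[a] += 1
    return np.where(deg > 0, same / np.maximum(deg, 1), 0.0)

labels = [0, 0, 1, 1]
edges = [(0, 1), (1, 2), (2, 3)]       # a 4-node path graph
print(node_homophily(edges, labels))   # [1.  0.5 0.5 1. ]
```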
arXiv Detail & Related papers (2023-06-02T07:46:20Z)
- Revisiting Heterophily in Graph Convolution Networks by Learning Representations Across Topological and Feature Spaces [20.775165967590173]
Graph convolution networks (GCNs) have been enormously successful in learning representations over several graph-based machine learning tasks.
We argue that by learning graph representations across two spaces, i.e., the topology space and the feature space, GCNs can address heterophily.
We experimentally demonstrate the performance of the proposed GCN framework on the semi-supervised node classification task.
arXiv Detail & Related papers (2022-11-01T16:21:10Z)
- Uncovering the Structural Fairness in Graph Contrastive Learning [87.65091052291544]
Graph contrastive learning (GCL) has emerged as a promising self-supervised approach for learning node representations.
We show that representations obtained by GCL methods are already fairer with respect to degree bias than those learned by GCN.
We devise a novel graph augmentation method, called GRAph contrastive learning for DEgree bias (GRADE), which applies different strategies to low- and high-degree nodes.
arXiv Detail & Related papers (2022-10-06T15:58:25Z)
- Heterogeneous Graph Neural Networks using Self-supervised Reciprocally Contrastive Learning [102.9138736545956]
Heterogeneous graph neural network (HGNN) is a very popular technique for the modeling and analysis of heterogeneous graphs.
We develop for the first time a novel and robust heterogeneous graph contrastive learning approach, namely HGCL, which introduces two views on respective guidance of node attributes and graph topologies.
In this new approach, we adopt distinct but most suitable attribute and topology fusion mechanisms in the two views, which are conducive to mining relevant information in attributes and topologies separately.
arXiv Detail & Related papers (2022-04-30T12:57:02Z)
- Towards Unsupervised Deep Graph Structure Learning [67.58720734177325]
We propose an unsupervised graph structure learning paradigm, where the learned graph topology is optimized by data itself without any external guidance.
Specifically, we generate a learning target from the original data as an "anchor graph", and use a contrastive loss to maximize the agreement between the anchor graph and the learned graph.
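The anchor-graph idea can be illustrated with a toy sketch: build the anchor as a k-nearest-neighbour graph over raw features, then score how well a learned edge-probability matrix agrees with it. The cosine-kNN construction and the binary cross-entropy agreement score below are simplifying stand-ins for the paper's actual contrastive objective:

```python
import numpy as np

def knn_anchor_graph(X, k=2):
    """Build an 'anchor graph' from raw features via cosine k-nearest neighbours."""
    n = X.shape[0]
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    sim = Xn @ Xn.T
    np.fill_diagonal(sim, -np.inf)             # exclude self-matches
    A = np.zeros((n, n))
    idx = np.argsort(-sim, axis=1)[:, :k]      # top-k most similar nodes
    for i in range(n):
        A[i, idx[i]] = 1.0
    return np.maximum(A, A.T)                  # symmetrize

def graph_agreement(A_learned, A_anchor, eps=1e-9):
    """Edge-wise binary cross-entropy between learned edge probabilities and
    the anchor graph; minimizing it maximizes agreement between the two."""
    P = np.clip(A_learned, eps, 1 - eps)
    return float(-np.mean(A_anchor * np.log(P) + (1 - A_anchor) * np.log(1 - P)))

X = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
anchor = knn_anchor_graph(X, k=1)
print(graph_agreement(anchor, anchor))  # near zero: perfect agreement
```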
arXiv Detail & Related papers (2022-01-17T11:57:29Z)
- Is Homophily a Necessity for Graph Neural Networks? [50.959340355849896]
Graph neural networks (GNNs) have shown great prowess in learning representations suitable for numerous graph-based machine learning tasks.
GNNs are widely believed to work well due to the homophily assumption ("like attracts like"), and fail to generalize to heterophilous graphs where dissimilar nodes connect.
Recent works design new architectures to overcome such heterophily-related limitations, citing poor baseline performance and new architecture improvements on a few heterophilous graph benchmark datasets as evidence for this notion.
In our experiments, we empirically find that standard graph convolutional networks (GCNs) can actually achieve better performance than…
arXiv Detail & Related papers (2021-06-11T02:44:00Z)
- Uniting Heterogeneity, Inductiveness, and Efficiency for Graph Representation Learning [68.97378785686723]
Graph neural networks (GNNs) have greatly advanced the performance of node representation learning on graphs.
A majority of GNNs are designed only for homogeneous graphs, leading to inferior adaptivity to the more informative heterogeneous graphs.
We propose a novel inductive, meta path-free message passing scheme that packs up heterogeneous node features with their associated edges from both low- and high-order neighbor nodes.
arXiv Detail & Related papers (2021-04-04T23:31:39Z)
- Beyond Homophily in Graph Neural Networks: Current Limitations and Effective Designs [28.77753005139331]
We investigate the representation power of graph neural networks in a semi-supervised node classification task under heterophily or low homophily.
Many popular GNNs fail to generalize to this setting, and are even outperformed by models that ignore the graph structure.
We identify a set of key designs that boost learning from the graph structure under heterophily.
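One frequently cited design of this kind is keeping the ego representation separate from the aggregated neighbour representation (concatenation instead of mixing them through self-loops). The layer below is an illustrative sketch of that idea, not this paper's exact architecture; the weight matrices are hypothetical parameters:

```python
import numpy as np

def heterophily_aware_layer(A, X, W_ego, W_nbr):
    """One message-passing step that keeps ego and neighbour embeddings
    separate, a design often credited with helping under heterophily."""
    deg = np.maximum(A.sum(axis=1, keepdims=True), 1.0)
    nbr_mean = (A @ X) / deg                   # aggregate neighbours only (no self-loop)
    return np.concatenate([X @ W_ego, nbr_mean @ W_nbr], axis=1)

rng = np.random.default_rng(1)
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # 3-node path
X = rng.normal(size=(3, 4))
H = heterophily_aware_layer(A, X, rng.normal(size=(4, 2)), rng.normal(size=(4, 2)))
print(H.shape)  # (3, 4): ego half concatenated with neighbour half
```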
arXiv Detail & Related papers (2020-06-20T02:05:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.