Rethinking the Power of Graph Canonization in Graph Representation
Learning with Stability
- URL: http://arxiv.org/abs/2309.00738v3
- Date: Fri, 9 Feb 2024 13:34:07 GMT
- Title: Rethinking the Power of Graph Canonization in Graph Representation
Learning with Stability
- Authors: Zehao Dong, Muhan Zhang, Philip R.O. Payne, Michael A Province, Carlos
Cruchaga, Tianyu Zhao, Fuhai Li, Yixin Chen
- Abstract summary: This paper proposes to maximize the expressivity of GNNs via graph canonization; the power of such GNNs is then studied from the perspective of model stability.
A stable GNN maps similar graphs to close graph representations in the vector space, and the stability of GNNs is critical for generalizing their performance to unseen graphs.
A comprehensive set of experiments demonstrates the effectiveness of the proposed method.
- Score: 29.026197379375557
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The expressivity of Graph Neural Networks (GNNs) has been studied broadly in
recent years to reveal the design principles for more powerful GNNs. Graph
canonization is known as a typical approach to distinguish non-isomorphic
graphs, yet it is rarely adopted when developing expressive GNNs. This paper
proposes to maximize the expressivity of GNNs via graph canonization; the power
of such GNNs is then studied from the perspective of model stability. A stable GNN will
map similar graphs to close graph representations in the vector space, and the
stability of GNNs is critical for generalizing their performance to unseen
graphs. We theoretically reveal the trade-off between expressivity and
stability in graph-canonization-enhanced GNNs. We then introduce the notion of
universal graph canonization as a general solution to this trade-off and
characterize a widely applicable sufficient condition for obtaining a
universal graph canonization. A comprehensive set of experiments demonstrates the effectiveness
of the proposed method. In many popular graph benchmark datasets, graph
canonization successfully enhances GNNs and provides highly competitive
performance, indicating the capability and great potential of the proposed
method in general graph representation learning. In graph datasets where the
sufficient condition holds, GNNs enhanced by universal graph canonization
consistently outperform GNN baselines and improve the SOTA performance by up to
$31\%$, providing the optimal solution to numerous
challenging real-world graph analytical tasks like gene network representation
learning in bioinformatics.
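To make the recipe concrete, here is a minimal, self-contained sketch (an illustration of the general idea under simplifying assumptions, not the authors' implementation): a canonical node ordering is computed once per graph, and the canonical positions are appended to the node features before a plain mean-aggregation message-passing step. The brute-force `canonical_order` helper is a hypothetical, exponential-time stand-in; practical canonization relies on tools such as nauty.

```python
# Minimal sketch of a graph-canonization-enhanced GNN layer (illustrative only;
# the paper's actual architecture and canonization algorithm may differ).
from itertools import permutations

def canonical_order(n, edges):
    """Brute-force canonical node ordering: pick the permutation whose
    relabeled edge set is lexicographically smallest. Exponential in n,
    so only for tiny graphs; real systems use tools such as nauty."""
    best_perm, best_code = None, None
    for perm in permutations(range(n)):
        code = tuple(sorted(tuple(sorted((perm[u], perm[v]))) for u, v in edges))
        if best_code is None or code < best_code:
            best_code, best_perm = code, perm
    return best_perm  # best_perm[v] = canonical position of node v

def message_passing_with_canonical_ids(n, edges, feats):
    """One mean-aggregation message-passing step where each node's feature
    is augmented with its (normalized) canonical position."""
    pos = canonical_order(n, edges)
    x = [f + [pos[v] / max(n - 1, 1)] for v, f in enumerate(feats)]
    nbrs = {v: [] for v in range(n)}
    for u, v in edges:
        nbrs[u].append(v)
        nbrs[v].append(u)
    out = []
    for v in range(n):
        agg = [sum(x[u][k] for u in nbrs[v]) / max(len(nbrs[v]), 1)
               for k in range(len(x[v]))]
        out.append([a + b for a, b in zip(x[v], agg)])  # combine self + neighbors
    return out

# Toy usage: a 4-cycle with constant features; the canonical ids distinguish
# nodes that plain symmetric aggregation cannot.
print(message_passing_with_canonical_ids(4, [(0, 1), (1, 2), (2, 3), (3, 0)],
                                          [[1.0]] * 4))
```

Since isomorphic graphs share a canonical form, the extra positional features add expressive power; but a small edit to a graph can change the canonical ordering abruptly, which is precisely the stability issue that the paper's universal graph canonization is designed to address.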
Related papers
- Scalable Implicit Graphon Learning [25.015678499211404]
We propose a scalable method that combines implicit neural representations (INRs) and graph neural networks (GNNs) to estimate a graphon from observed graphs.
We evaluate SIGL on synthetic and real-world graphs, showing that it outperforms existing methods and scales effectively to larger graphs.
arXiv Detail & Related papers (2024-10-22T22:44:24Z)
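A rough sketch of the INR-for-graphon idea above (a generic illustration with assumed names and training details, not the SIGL pipeline): a small MLP W(x, y) is fit so that its output matches the observed adjacency at sampled latent positions.

```python
# Illustrative INR-style graphon fit (a generic sketch, not the SIGL algorithm).
import torch
import torch.nn as nn

class GraphonINR(nn.Module):
    """MLP mapping latent coordinates (x, y) in [0,1]^2 to an edge
    probability; symmetrized so that W(x, y) = W(y, x)."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, x, y):
        a = self.net(torch.stack([x, y], dim=-1))
        b = self.net(torch.stack([y, x], dim=-1))
        return torch.sigmoid((a + b) / 2).squeeze(-1)

# Fit to one observed graph whose nodes get (assumed) sorted latent positions.
n = 50
xs = torch.linspace(0, 1, n)
adj = (torch.rand(n, n) < 0.5 * (xs[:, None] + xs[None, :])).float()
adj = torch.triu(adj, 1); adj = adj + adj.T  # symmetric, no self-loops

model = GraphonINR()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
gx, gy = torch.meshgrid(xs, xs, indexing="ij")
for step in range(200):
    pred = model(gx.reshape(-1), gy.reshape(-1)).reshape(n, n)
    loss = nn.functional.binary_cross_entropy(pred, adj)
    opt.zero_grad(); loss.backward(); opt.step()
print(f"final BCE: {loss.item():.3f}")
```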
- A Manifold Perspective on the Statistical Generalization of Graph Neural Networks [84.01980526069075]
We take a manifold perspective to establish the statistical generalization theory of GNNs on graphs sampled from a manifold in the spectral domain.
We prove that the generalization bounds of GNNs decrease linearly with the size of the graphs in the logarithmic scale, and increase linearly with the spectral continuity constants of the filter functions.
arXiv Detail & Related papers (2024-06-07T19:25:02Z)
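Stated schematically (my paraphrase of the scaling claim above; the exponent and constants below are placeholders, not the paper's exact bound), the gap shrinks polynomially in the number of sampled nodes N and grows linearly in the filters' spectral continuity constant C_f:

```latex
% Schematic scaling only; \alpha, c_1, c_2 are placeholder constants.
\mathrm{gap}(N) \;\lesssim\; \bigl(c_1 + c_2\, C_f\bigr)\, N^{-\alpha},
\qquad\text{i.e.}\qquad
\log \mathrm{gap}(N) \;\lesssim\; -\,\alpha \log N + \log\bigl(c_1 + c_2\, C_f\bigr).
```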
- Learning to Reweight for Graph Neural Network [63.978102332612906]
Graph Neural Networks (GNNs) show promising results for graph tasks.
The generalization ability of existing GNNs degrades when distribution shifts exist between the training and testing graph data.
We propose a novel nonlinear graph decorrelation method, which can substantially improve the out-of-distribution generalization ability of GNNs.
arXiv Detail & Related papers (2023-12-19T12:25:10Z)
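The flavor of reweighting-based decorrelation can be sketched as follows (my generic illustration; the paper's nonlinear graph decorrelation method is more sophisticated): learn non-negative sample weights that shrink the off-diagonal entries of the weighted feature covariance.

```python
# Generic sample-reweighting-by-decorrelation sketch (my illustration of the
# broad idea; the paper's nonlinear graph decorrelation method differs).
import numpy as np

def decorrelation_weights(X, iters=500, lr=0.05):
    """Learn non-negative sample weights (summing to 1) that shrink the
    off-diagonal entries of the weighted feature covariance, reducing
    spurious correlations between feature dimensions."""
    n, d = X.shape
    theta = np.zeros(n)                       # softmax parameters for weights
    for _ in range(iters):
        w = np.exp(theta); w /= w.sum()
        Xc = X - w @ X                        # center by the weighted mean
        cov = (w[:, None] * Xc).T @ Xc        # weighted covariance
        off = cov - np.diag(np.diag(cov))     # off-diagonal part to suppress
        # d||off||_F^2 / dw_i = 2 * Xc_i^T @ off @ Xc_i
        # (holding the weighted mean fixed, a common simplification)
        g = 2 * np.einsum('ij,jk,ik->i', Xc, off, Xc)
        theta -= lr * w * (g - w @ g)         # softmax chain rule
    w = np.exp(theta)
    return w / w.sum()

# Toy usage: features with a planted correlation get non-uniform weights.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3)); X[:, 2] = 0.9 * X[:, 0] + 0.1 * X[:, 2]
print(decorrelation_weights(X)[:5])
```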
- Edge Directionality Improves Learning on Heterophilic Graphs [42.5099159786891]
We introduce Directed Graph Neural Network (Dir-GNN), a novel framework for deep learning on directed graphs.
Dir-GNN can be used to extend any Message Passing Neural Network (MPNN) to account for edge directionality information.
We prove that Dir-GNN matches the expressivity of the Directed Weisfeiler-Lehman test, exceeding that of conventional MPNNs.
arXiv Detail & Related papers (2023-05-17T18:06:43Z)
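The directionality mechanism admits a compact sketch (a simplified rendering; Dir-GNN's actual layer, normalization, and combination functions follow the paper): in-neighbors and out-neighbors are aggregated with separate weight matrices and the two directions are combined.

```python
# Simplified directed message-passing layer in the spirit of Dir-GNN
# (illustrative; the paper's exact layer and normalization differ).
import torch
import torch.nn as nn

class DirectedLayer(nn.Module):
    def __init__(self, d_in, d_out):
        super().__init__()
        self.w_in = nn.Linear(d_in, d_out)    # transform for in-neighbors
        self.w_out = nn.Linear(d_in, d_out)   # transform for out-neighbors
        self.w_self = nn.Linear(d_in, d_out)  # transform for the node itself

    def forward(self, x, edge_index):
        # edge_index: (2, E) tensor of directed edges src -> dst
        src, dst = edge_index
        n = x.size(0)
        agg_in = torch.zeros(n, x.size(1))    # messages along incoming edges
        agg_out = torch.zeros(n, x.size(1))   # messages along outgoing edges
        agg_in.index_add_(0, dst, x[src])
        agg_out.index_add_(0, src, x[dst])
        ones = torch.ones(src.size(0))
        deg_in = torch.zeros(n).index_add_(0, dst, ones).clamp(min=1)
        deg_out = torch.zeros(n).index_add_(0, src, ones).clamp(min=1)
        return torch.relu(self.w_self(x)
                          + self.w_in(agg_in / deg_in[:, None])
                          + self.w_out(agg_out / deg_out[:, None]))

# Toy usage: 3 nodes, directed edges 0->1 and 1->2.
layer = DirectedLayer(4, 8)
x = torch.randn(3, 4)
edge_index = torch.tensor([[0, 1], [1, 2]])
print(layer(x, edge_index).shape)  # torch.Size([3, 8])
```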
- MentorGNN: Deriving Curriculum for Pre-Training GNNs [61.97574489259085]
We propose an end-to-end model named MentorGNN that aims to supervise the pre-training process of GNNs across graphs.
We shed new light on the problem of domain adaptation on relational data (i.e., graphs) by deriving a natural and interpretable upper bound on the generalization error of the pre-trained GNNs.
arXiv Detail & Related papers (2022-08-21T15:12:08Z)
- Measuring and Improving the Use of Graph Information in Graph Neural Networks [38.41049128525036]
Graph neural networks (GNNs) have been widely used for representation learning on graph data.
This paper introduces a context-surrounding GNN framework and proposes two smoothness metrics to measure the quantity and quality of information obtained from graph data.
A new GNN model, called CS-GNN, is then designed to improve the use of graph information based on the smoothness values of a graph.
arXiv Detail & Related papers (2022-06-27T10:27:28Z)
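A feature-smoothness measure of the kind the entry above builds on can be illustrated directly (a generic quantity for intuition; CS-GNN's two metrics are defined precisely in the paper):

```python
# A simple feature-smoothness measure over edges (illustrative of the kind of
# quantity CS-GNN uses; the paper's two metrics are defined differently).
import numpy as np

def feature_smoothness(X, edges):
    """Average squared distance between endpoint features, normalized by the
    average feature norm: small values mean neighbors look alike, i.e. the
    graph structure contributes little new information per edge."""
    diffs = [np.sum((X[u] - X[v]) ** 2) for u, v in edges]
    scale = np.mean(np.sum(X ** 2, axis=1)) + 1e-12
    return np.mean(diffs) / scale

# Toy usage: smooth vs. oscillating features on a path graph 0-1-2-3.
edges = [(0, 1), (1, 2), (2, 3)]
X_smooth = np.array([[0.0], [0.1], [0.2], [0.3]])
X_rough = np.array([[1.0], [-1.0], [1.0], [-1.0]])
print(feature_smoothness(X_smooth, edges), feature_smoothness(X_rough, edges))
```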
- Distribution Preserving Graph Representation Learning [11.340722297341788]
Graph neural networks (GNNs) are effective for modeling graphs and learning distributed representations of nodes and entire graphs.
We propose Distribution Preserving GNN (DP-GNN) - a GNN framework that can improve the generalizability of expressive GNN models.
We evaluate the proposed DP-GNN framework on multiple benchmark datasets for graph classification tasks.
arXiv Detail & Related papers (2022-02-27T19:16:26Z)
- Graph Neural Networks for Graphs with Heterophily: A Survey [98.45621222357397]
We provide a comprehensive review of graph neural networks (GNNs) for heterophilic graphs.
Specifically, we propose a systematic taxonomy that essentially governs existing heterophilic GNN models.
We discuss the correlation between graph heterophily and various graph research domains, aiming to facilitate the development of more effective GNNs.
arXiv Detail & Related papers (2022-02-14T23:07:47Z)
- Adaptive Kernel Graph Neural Network [21.863238974404474]
Graph neural networks (GNNs) have demonstrated great success in representation learning for graph-structured data.
In this paper, we propose a novel framework, namely the Adaptive Kernel Graph Neural Network (AKGNN).
AKGNN is the first attempt to learn to adapt to the optimal graph kernel in a unified manner.
Experiments are conducted on acknowledged benchmark datasets and promising results demonstrate the outstanding performance of our proposed AKGNN.
arXiv Detail & Related papers (2021-12-08T20:23:58Z)
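One way to picture an adaptive graph kernel (a generic sketch with an assumed parameterization; AKGNN's actual kernel adaptation follows the paper) is a propagation step whose mixing between self-features and the normalized adjacency is a learned scalar:

```python
# Generic adaptive spectral-filter layer (a sketch of the adaptive-kernel idea;
# AKGNN's actual parameterization of the graph kernel follows the paper).
import torch
import torch.nn as nn

class AdaptiveKernelLayer(nn.Module):
    """Propagates with (1 - t) * I + t * A_norm, where the mixing scalar t
    (how much of the graph kernel to apply) is learned end to end."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.lin = nn.Linear(d_in, d_out)
        self.logit_t = nn.Parameter(torch.zeros(1))  # t = sigmoid(logit_t)

    def forward(self, x, a_norm):
        t = torch.sigmoid(self.logit_t)
        h = (1 - t) * x + t * (a_norm @ x)   # adaptively mixed propagation
        return torch.relu(self.lin(h))

# Toy usage with a symmetrically normalized adjacency of a triangle graph.
A = torch.tensor([[0., 1., 1.], [1., 0., 1.], [1., 1., 0.]])
deg_inv_sqrt = A.sum(1).rsqrt()
a_norm = deg_inv_sqrt[:, None] * A * deg_inv_sqrt[None, :]
layer = AdaptiveKernelLayer(4, 8)
print(layer(torch.randn(3, 4), a_norm).shape)  # torch.Size([3, 8])
```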
- Learning to Drop: Robust Graph Neural Network via Topological Denoising [50.81722989898142]
We propose PTDNet, a parameterized topological denoising network, to improve the robustness and generalization performance of Graph Neural Networks (GNNs).
PTDNet prunes task-irrelevant edges by penalizing the number of edges in the sparsified graph with parameterized networks.
We show that PTDNet can significantly improve the performance of GNNs, and the performance gain becomes larger for noisier datasets.
arXiv Detail & Related papers (2020-11-13T18:53:21Z)
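The edge-count penalty idea above can be sketched as follows (an illustrative simplification; PTDNet's denoising network and regularizers differ): a small network scores each edge from its endpoint features, and the sum of the resulting soft mask is added to the task loss to penalize kept edges.

```python
# Parameterized edge-pruning sketch (illustrative of the PTDNet idea of
# penalizing the number of kept edges; the paper's denoising network differs).
import torch
import torch.nn as nn

class EdgePruner(nn.Module):
    """Scores each edge from its endpoint features and produces a soft mask;
    the mask's sum acts as a differentiable proxy for the edge count."""
    def __init__(self, d, hidden=16):
        super().__init__()
        self.scorer = nn.Sequential(nn.Linear(2 * d, hidden), nn.ReLU(),
                                    nn.Linear(hidden, 1))

    def forward(self, x, edge_index):
        src, dst = edge_index
        pair = torch.cat([x[src], x[dst]], dim=-1)
        mask = torch.sigmoid(self.scorer(pair)).squeeze(-1)  # per-edge keep prob
        sparsity_penalty = mask.sum()  # added to the task loss with a weight
        return mask, sparsity_penalty

# Toy usage: the penalty term pushes mask values toward zero during training.
pruner = EdgePruner(d=4)
x = torch.randn(5, 4)
edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 4]])
mask, penalty = pruner(x, edge_index)
print(mask.shape, float(penalty))  # torch.Size([4]) and a scalar
```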
- XGNN: Towards Model-Level Explanations of Graph Neural Networks [113.51160387804484]
Graph neural networks (GNNs) learn node features by aggregating and combining neighbor information.
GNNs are mostly treated as black boxes and lack human-intelligible explanations.
We propose a novel approach, known as XGNN, to interpret GNNs at the model-level.
arXiv Detail & Related papers (2020-06-03T23:52:43Z)