Size Generalization of Graph Neural Networks on Biological Data:
Insights and Practices from the Spectral Perspective
- URL: http://arxiv.org/abs/2305.15611v4
- Date: Wed, 7 Feb 2024 03:27:12 GMT
- Title: Size Generalization of Graph Neural Networks on Biological Data:
Insights and Practices from the Spectral Perspective
- Authors: Gaotang Li, Danai Koutra, Yujun Yan
- Abstract summary: We investigate size-induced distribution shifts in graphs and assess their impact on the ability of graph neural networks (GNNs) to generalize to larger graphs.
We introduce a simple yet effective model-agnostic strategy, which makes GNNs aware of important subgraph patterns to enhance their size generalizability.
- Score: 16.01608638659267
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We investigate size-induced distribution shifts in graphs and assess their
impact on the ability of graph neural networks (GNNs) to generalize to larger
graphs relative to the training data. Existing literature presents conflicting
conclusions on GNNs' size generalizability, primarily due to disparities in
application domains and underlying assumptions concerning size-induced
distribution shifts. Motivated by this, we take a data-driven approach: we
focus on real biological datasets and seek to characterize the types of
size-induced distribution shifts. Diverging from prior approaches, we adopt a
spectral perspective and identify that spectrum differences induced by size are
related to differences in subgraph patterns (e.g., average cycle lengths).
While previous studies have identified that GNNs' inability to capture
subgraph information negatively impacts their in-distribution generalization,
our findings further show that this decline is more pronounced when evaluating
on larger test graphs not encountered during training. Based on these spectral
insights, we introduce a simple yet effective model-agnostic strategy, which
makes GNNs aware of these important subgraph patterns to enhance their size
generalizability. Our empirical results reveal that our proposed
size-insensitive attention strategy substantially enhances graph classification
performance on large test graphs, which are 2-10 times larger than the training
graphs, resulting in an improvement in F1 scores by up to 8%.
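As a concrete illustration of the spectral lens described above, one can compare the normalized Laplacian spectra of graphs of different sizes. The sketch below is not the paper's pipeline; the cycle graphs, sizes, and the choice to compare only the largest eigenvalue are illustrative stand-ins for the ring-like subgraph patterns (e.g., average cycle lengths) the abstract mentions.

```python
import numpy as np

def normalized_laplacian_spectrum(adj):
    """Sorted eigenvalues of L = I - D^{-1/2} A D^{-1/2} for a simple graph."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.where(deg > 0, deg ** -0.5, 0.0)
    lap = np.eye(len(adj)) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    return np.sort(np.linalg.eigvalsh(lap))

def cycle_graph(n):
    """Adjacency matrix of an n-node cycle, a stand-in for ring-like subgraphs."""
    adj = np.zeros((n, n))
    idx = np.arange(n)
    adj[idx, (idx + 1) % n] = 1
    adj[(idx + 1) % n, idx] = 1
    return adj

# Spectrum of a small "training-size" ring vs. a larger "test-size" ring.
small = normalized_laplacian_spectrum(cycle_graph(5))
large = normalized_laplacian_spectrum(cycle_graph(29))

# The top eigenvalue shifts with cycle length: one simple instance of a
# size-induced spectral difference.
print(small[-1], large[-1])
```

For odd cycles the largest normalized Laplacian eigenvalue approaches 2 as the cycle grows, so even this toy comparison exhibits a spectrum difference driven purely by size.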
Related papers
- A Manifold Perspective on the Statistical Generalization of Graph Neural Networks [84.01980526069075]
We take a manifold perspective to establish the statistical generalization theory of GNNs on graphs sampled from a manifold in the spectral domain.
We prove that the generalization bounds of GNNs decrease linearly with the size of the graphs in the logarithmic scale, and increase linearly with the spectral continuity constants of the filter functions.
arXiv Detail & Related papers (2024-06-07T19:25:02Z)
- Enhancing Size Generalization in Graph Neural Networks through Disentangled Representation Learning [7.448831299106425]
DISGEN is a model-agnostic framework designed to disentangle size factors from graph representations.
Our empirical results show that DISGEN outperforms the state-of-the-art models by up to 6% on real-world datasets.
arXiv Detail & Related papers (2024-06-07T03:19:24Z)
- Towards Causal Classification: A Comprehensive Study on Graph Neural Networks [9.360596957822471]
Graph Neural Networks (GNNs) for processing graph-structured data have expanded their potential for causal analysis.
Our study delves into nine benchmark graph classification models, testing their strength and versatility across seven datasets.
Our findings are instrumental in furthering the understanding and practical application of GNNs in diverse data-centric fields.
arXiv Detail & Related papers (2024-01-27T15:35:05Z)
- Graph Out-of-Distribution Generalization with Controllable Data Augmentation [51.17476258673232]
Graph Neural Networks (GNNs) have demonstrated extraordinary performance in classifying graph properties.
Due to the selection bias of training and testing data, distribution deviation is widespread.
We propose OOD calibration to measure the distribution deviation of virtual samples.
arXiv Detail & Related papers (2023-08-16T13:10:27Z)
- Addressing the Impact of Localized Training Data in Graph Neural Networks [0.0]
Graph Neural Networks (GNNs) have achieved notable success in learning from graph-structured data.
This article aims to assess the impact of training GNNs on localized subsets of the graph.
We propose a regularization method to minimize distributional discrepancies between localized training data and graph inference.
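The summary above does not specify the form of the regularizer, but one standard way to penalize a distributional discrepancy between two sets of embeddings is a kernel maximum mean discrepancy (MMD) term; the sketch below uses that technique as an assumed stand-in, with an RBF kernel and synthetic embedding data that are purely illustrative.

```python
import numpy as np

def rbf_mmd2(x, y, sigma=1.0):
    """Squared maximum mean discrepancy with an RBF kernel: a standard
    penalty for the gap between two embedding distributions."""
    def k(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

rng = np.random.default_rng(0)
local = rng.normal(0.0, 1.0, size=(64, 8))    # embeddings from a localized training region
global_ = rng.normal(0.5, 1.0, size=(64, 8))  # embeddings from the full graph

# The penalty grows as the two embedding distributions drift apart.
print(rbf_mmd2(local, global_))
```

In a training loop, a term like this would be added to the task loss so that embeddings from the localized training subset stay close in distribution to embeddings from the rest of the graph.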
arXiv Detail & Related papers (2023-07-24T11:04:22Z)
- MentorGNN: Deriving Curriculum for Pre-Training GNNs [61.97574489259085]
We propose an end-to-end model named MentorGNN that aims to supervise the pre-training process of GNNs across graphs.
We shed new light on the problem of domain adaption on relational data (i.e., graphs) by deriving a natural and interpretable upper bound on the generalization error of the pre-trained GNNs.
arXiv Detail & Related papers (2022-08-21T15:12:08Z)
- Robust Optimization as Data Augmentation for Large-scale Graphs [117.2376815614148]
We propose FLAG (Free Large-scale Adversarial Augmentation on Graphs), which iteratively augments node features with gradient-based adversarial perturbations during training.
FLAG is a general-purpose approach for graph data, which universally works in node classification, link prediction, and graph classification tasks.
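The augmentation described above amounts to a few steps of gradient ascent on a feature perturbation before each model update. Below is a minimal sketch of that inner loop in the spirit of FLAG, using a toy closed-form gradient in place of a real GNN's backward pass; the loss, step size, and step count are assumptions, not the paper's settings.

```python
import numpy as np

def flag_perturb(x, loss_grad, step_size=1e-2, ascent_steps=3):
    """Gradient-ascent feature perturbation in the spirit of FLAG:
    repeatedly nudge the perturbation delta up the loss surface."""
    delta = np.zeros_like(x)
    for _ in range(ascent_steps):
        delta += step_size * loss_grad(x + delta)
    return x + delta

# Toy differentiable loss L(z) = 0.5 * ||z||^2, whose gradient is simply z.
grad = lambda z: z
x = np.ones(4)
x_adv = flag_perturb(x, grad)  # perturbed node features fed to the model
```

In an actual training loop, `loss_grad` would come from backpropagating the task loss through the GNN with respect to the node features, and the model would then be trained on the perturbed features `x_adv`.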
arXiv Detail & Related papers (2020-10-19T21:51:47Z)
- From Local Structures to Size Generalization in Graph Neural Networks [53.3202754533658]
Graph neural networks (GNNs) can process graphs of different sizes.
Their ability to generalize across sizes, specifically from small to large graphs, is still not well understood.
arXiv Detail & Related papers (2020-10-17T19:36:54Z)
- Towards Deeper Graph Neural Networks [63.46470695525957]
Graph convolutions perform neighborhood aggregation and represent one of the most important graph operations.
Several recent studies attribute the performance deterioration of deeper GNNs to the over-smoothing issue.
We propose Deep Adaptive Graph Neural Network (DAGNN) to adaptively incorporate information from large receptive fields.
arXiv Detail & Related papers (2020-07-18T01:11:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.