SizeShiftReg: a Regularization Method for Improving Size-Generalization in Graph Neural Networks
- URL: http://arxiv.org/abs/2207.07888v1
- Date: Sat, 16 Jul 2022 09:50:45 GMT
- Title: SizeShiftReg: a Regularization Method for Improving Size-Generalization in Graph Neural Networks
- Authors: Davide Buffelli, Pietro Liò, Fabio Vandin
- Abstract summary: Graph neural networks (GNNs) have become the de facto model of choice for graph classification.
We propose a regularization strategy that can be applied to any GNN to improve its generalization capabilities without requiring access to the test data.
Our regularization is based on the idea of simulating a shift in the size of the training graphs using coarsening techniques.
- Score: 5.008597638379227
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the past few years, graph neural networks (GNNs) have become the de facto
model of choice for graph classification. While, from the theoretical
viewpoint, most GNNs can operate on graphs of any size, it is empirically
observed that their classification performance degrades when they are applied
to graphs whose sizes differ from those in the training data. Previous
works have tried to tackle this issue in graph classification by providing the
model with inductive biases derived from assumptions on the generative process
of the graphs, or by requiring access to graphs from the test domain. The first
strategy is tied to the use of ad-hoc models and to the quality of the
assumptions made on the generative process, leaving open the question of how to
improve the performance of generic GNN models in general settings. On the other
hand, the second strategy can be applied to any GNN, but requires access to
information that is not always easy to obtain. In this work we consider the
scenario in which we only have access to the training data, and we propose a
regularization strategy that can be applied to any GNN to improve its
generalization capabilities from smaller to larger graphs without requiring
access to the test data. Our regularization is based on the idea of simulating
a shift in the size of the training graphs using coarsening techniques, and of
enforcing the model's robustness to such a shift. Experimental results on
standard datasets show that popular GNN models, trained on the 50% smallest
graphs in the dataset and tested on the 10% largest graphs, obtain performance
improvements of up to 30% when trained with our regularization strategy.
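To make the recipe in the abstract concrete, below is a minimal, illustrative sketch of the idea: coarsen each training graph to simulate a size shift, then penalize the gap between the model's pooled embeddings on the original and coarsened graphs. This is not the authors' implementation: the random-clustering coarsener and the L2 embedding gap are simple stand-ins for the coarsening algorithm and distributional distance used in the paper, and all names (TinyGNN, coarsen, lam, ratios) are hypothetical.

```python
# Illustrative sketch only (assumed names; dense tensors for simplicity).
import torch

def coarsen(adj: torch.Tensor, x: torch.Tensor, ratio: float = 0.5):
    """Shrink a graph by randomly clustering nodes (stand-in coarsener)."""
    n = adj.size(0)
    k = max(2, int(ratio * n))
    assign = torch.randint(0, k, (n,))              # random cluster per node
    p = torch.zeros(k, n)
    p[assign, torch.arange(n)] = 1.0                # cluster-assignment matrix
    size = p.sum(dim=1, keepdim=True).clamp(min=1.0)
    coarse_x = (p @ x) / size                       # mean features per cluster
    coarse_adj = ((p @ adj @ p.t()) > 0).float()    # clusters inherit edges
    coarse_adj.fill_diagonal_(0.0)
    return coarse_adj, coarse_x

class TinyGNN(torch.nn.Module):
    """Two dense message-passing layers plus a mean-pool classifier head."""
    def __init__(self, in_dim: int, hid: int, n_classes: int):
        super().__init__()
        self.lin1 = torch.nn.Linear(in_dim, hid)
        self.lin2 = torch.nn.Linear(hid, hid)
        self.head = torch.nn.Linear(hid, n_classes)

    def embed(self, adj, x):
        a = adj + torch.eye(adj.size(0))            # add self-loops
        a = a / a.sum(dim=1, keepdim=True)          # row-normalize
        h = torch.relu(self.lin1(a @ x))
        return torch.relu(self.lin2(a @ h))         # node embeddings

    def forward(self, adj, x):
        return self.head(self.embed(adj, x).mean(dim=0))  # graph-level logits

def loss_with_size_shift_reg(model, adj, x, y, lam=0.1, ratios=(0.8, 0.5)):
    """Cross-entropy plus a penalty on sensitivity to simulated size shifts."""
    ce = torch.nn.functional.cross_entropy(model(adj, x).view(1, -1), y.view(1))
    h = model.embed(adj, x).mean(dim=0)             # pooled embedding, original
    reg = 0.0
    for r in ratios:
        c_adj, c_x = coarsen(adj, x, r)             # simulated smaller graph
        h_c = model.embed(c_adj, c_x).mean(dim=0)   # pooled embedding, coarse
        reg = reg + (h - h_c).pow(2).sum()          # stand-in discrepancy
    return ce + lam * reg / len(ratios)
```

Because the penalty only touches the loss, the strategy can wrap any GNN: swap TinyGNN for another architecture and keep the coarsened forward passes.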
Related papers
- Faster Inference Time for GNNs using coarsening [1.323700980948722]
Coarsening-based methods reduce the graph to a smaller one, resulting in faster computation.
No previous research has tackled the cost incurred during inference.
This paper presents a novel approach to improving the scalability of GNNs through subgraph-based techniques.
arXiv Detail & Related papers (2024-10-19T06:27:24Z)
- Enhancing Size Generalization in Graph Neural Networks through Disentangled Representation Learning [7.448831299106425]
DISGEN is a model-agnostic framework designed to disentangle size factors from graph representations.
Our empirical results show that DISGEN outperforms the state-of-the-art models by up to 6% on real-world datasets.
arXiv Detail & Related papers (2024-06-07T03:19:24Z)
- Graph Unlearning with Efficient Partial Retraining [28.433619085748447]
Graph Neural Networks (GNNs) have achieved remarkable success in various real-world applications.
However, GNNs may be trained on undesirable graph data, which can degrade their performance and reliability.
We propose GraphRevoker, a novel graph unlearning framework that better preserves the utility of GNNs after unlearning.
arXiv Detail & Related papers (2024-03-12T06:22:10Z)
- Learning to Reweight for Graph Neural Network [63.978102332612906]
Graph Neural Networks (GNNs) show promising results for graph tasks.
The generalization ability of existing GNNs degrades when there are distribution shifts between testing and training graph data.
We propose a novel nonlinear graph decorrelation method, which can substantially improve the out-of-distribution generalization ability.
arXiv Detail & Related papers (2023-12-19T12:25:10Z)
- Edge Directionality Improves Learning on Heterophilic Graphs [42.5099159786891]
We introduce Directed Graph Neural Network (Dir-GNN), a novel framework for deep learning on directed graphs.
Dir-GNN can be used to extend any Message Passing Neural Network (MPNN) to account for edge directionality information.
We prove that Dir-GNN matches the expressivity of the Directed Weisfeiler-Lehman test, exceeding that of conventional MPNNs.
arXiv Detail & Related papers (2023-05-17T18:06:43Z)
- Graph Generative Model for Benchmarking Graph Neural Networks [73.11514658000547]
We introduce a novel graph generative model that learns and reproduces the distribution of real-world graphs in a privacy-controlled way.
Our model can successfully generate privacy-controlled, synthetic substitutes of large-scale real-world graphs that can be effectively used to benchmark GNN models.
arXiv Detail & Related papers (2022-07-10T06:42:02Z)
- Neural Graph Matching for Pre-training Graph Neural Networks [72.32801428070749]
Graph neural networks (GNNs) have shown powerful capacity for modeling structural data.
We present a novel Graph Matching based GNN Pre-Training framework, called GMPT.
The proposed method can be applied to fully self-supervised pre-training and coarse-grained supervised pre-training.
arXiv Detail & Related papers (2022-03-03T09:53:53Z)
- A Unified Lottery Ticket Hypothesis for Graph Neural Networks [82.31087406264437]
We present a unified GNN sparsification (UGS) framework that simultaneously prunes the graph adjacency matrix and the model weights.
We further generalize the popular lottery ticket hypothesis to GNNs for the first time, by defining a graph lottery ticket (GLT) as a pair of core sub-dataset and sparse sub-network.
arXiv Detail & Related papers (2021-02-12T21:52:43Z)
- Robust Optimization as Data Augmentation for Large-scale Graphs [117.2376815614148]
We propose FLAG (Free Large-scale Adversarial Augmentation on Graphs), which iteratively augments node features with gradient-based adversarial perturbations during training.
FLAG is a general-purpose approach for graph data, which works universally in node classification, link prediction, and graph classification tasks (a minimal sketch in the spirit of FLAG appears after this list).
arXiv Detail & Related papers (2020-10-19T21:51:47Z)
- GPT-GNN: Generative Pre-Training of Graph Neural Networks [93.35945182085948]
Graph neural networks (GNNs) have been demonstrated to be powerful in modeling graph-structured data.
We present the GPT-GNN framework to initialize GNNs by generative pre-training.
We show that GPT-GNN significantly outperforms state-of-the-art GNN models trained without pre-training by up to 9.1% across various downstream tasks.
arXiv Detail & Related papers (2020-06-27T20:12:33Z)
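As noted in the FLAG entry above, its gradient-based feature augmentation is simple enough to sketch. The snippet below is an illustrative sketch written against a generic PyTorch model, not the official implementation; the flag_step name, the model(x, edge_index) signature, the step size, and the number of ascent steps are all assumptions.

```python
# Illustrative sketch of FLAG-style adversarial feature augmentation
# (assumed names and signatures; not the official implementation).
import torch

def flag_step(model, x, edge_index, y, optimizer, step_size=1e-3, n_ascent=3):
    """One training step with gradient-ascent perturbations on node features."""
    model.train()
    optimizer.zero_grad()
    perturb = torch.empty_like(x).uniform_(-step_size, step_size)
    perturb.requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(x + perturb, edge_index), y)
    loss = loss / n_ascent
    for _ in range(n_ascent - 1):
        loss.backward()                      # grads accumulate on model weights
        with torch.no_grad():                # ascend on the perturbation only
            perturb += step_size * perturb.grad.sign()
            perturb.grad.zero_()
        loss = torch.nn.functional.cross_entropy(
            model(x + perturb, edge_index), y) / n_ascent
    loss.backward()
    optimizer.step()                         # descend on the averaged loss
    return float(loss)
```

The "free" trick is visible in the loop: the same backward passes that ascend on the perturbation also accumulate the weight gradients used by the final descent step.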
This list is automatically generated from the titles and abstracts of the papers on this site.