Survey on Generalization Theory for Graph Neural Networks
- URL: http://arxiv.org/abs/2503.15650v1
- Date: Wed, 19 Mar 2025 19:04:24 GMT
- Title: Survey on Generalization Theory for Graph Neural Networks
- Authors: Antonis Vasileiou, Stefanie Jegelka, Ron Levie, Christopher Morris
- Abstract summary: Message-passing graph neural networks (MPNNs) have emerged as the leading approach for machine learning on graphs. We systematically review the existing literature on the generalization abilities of MPNNs.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Message-passing graph neural networks (MPNNs) have emerged as the leading approach for machine learning on graphs, attracting significant attention in recent years. While a large set of works explored the expressivity of MPNNs, i.e., their ability to separate graphs and approximate functions over them, comparatively less attention has been directed toward investigating their generalization abilities, i.e., making meaningful predictions beyond the training data. Here, we systematically review the existing literature on the generalization abilities of MPNNs. We analyze the strengths and limitations of various studies in these domains, providing insights into their methodologies and findings. Furthermore, we identify potential avenues for future research, aiming to deepen our understanding of the generalization abilities of MPNNs.
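For context, the message-passing scheme that MPNNs implement can be summarized in a few lines of code. The sketch below shows a minimal, illustrative single layer in plain NumPy with sum aggregation and a ReLU update; the function name, weight shapes, and aggregation choice are assumptions for illustration, not the formulation used by any specific paper listed here.

```python
# Minimal, illustrative sketch of one message-passing (MPNN) layer.
# Assumptions: sum aggregation over neighbors, a linear message map,
# and a ReLU update; real MPNN variants differ in all three choices.
import numpy as np

def mpnn_layer(adj: np.ndarray, x: np.ndarray,
               w_msg: np.ndarray, w_upd: np.ndarray) -> np.ndarray:
    """One illustrative message-passing step.

    adj   : (n, n) adjacency matrix of the graph
    x     : (n, d) node feature matrix
    w_msg : (d, d) weight applied to neighbor messages
    w_upd : (d, d) weight applied to the node's own features
    """
    messages = adj @ (x @ w_msg)      # aggregate (sum) messages from neighbors
    updated = x @ w_upd + messages    # combine with the node's own state
    return np.maximum(updated, 0.0)   # ReLU update

# Tiny usage example on a 3-node path graph with random features.
rng = np.random.default_rng(0)
adj = np.array([[0., 1., 0.],
                [1., 0., 1.],
                [0., 1., 0.]])        # 3-node path graph
x = rng.normal(size=(3, 4))
w_msg, w_upd = rng.normal(size=(4, 4)), rng.normal(size=(4, 4))
h = mpnn_layer(adj, x, w_msg, w_upd)  # (3, 4) updated node embeddings
```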
Related papers
- Covered Forest: Fine-grained generalization analysis of graph neural networks [14.729609626353112]
We extend recent advances in graph similarity theory to assess the influence of graph structure, aggregation, and loss functions on MPNNs' generalization abilities. Our empirical study supports our theoretical insights, improving our understanding of MPNNs' generalization properties.
arXiv Detail & Related papers (2024-12-10T01:45:59Z)
- A Manifold Perspective on the Statistical Generalization of Graph Neural Networks [84.01980526069075]
We take a manifold perspective to establish the statistical generalization theory of GNNs on graphs sampled from a manifold in the spectral domain.
We prove that the generalization bounds of GNNs decrease linearly with the size of the graphs on a logarithmic scale and increase linearly with the spectral continuity constants of the filter functions.
arXiv Detail & Related papers (2024-06-07T19:25:02Z)
- Future Directions in the Theory of Graph Machine Learning [49.049992612331685]
Machine learning on graphs, especially using graph neural networks (GNNs), has seen a surge in interest due to the wide availability of graph data.
Despite their practical success, our theoretical understanding of the properties of GNNs remains highly incomplete.
arXiv Detail & Related papers (2024-02-03T22:55:31Z)
- The Expressive Power of Graph Neural Networks: A Survey [8.652204270723994]
We conduct a first survey of models for enhancing expressive power under different forms of definition. The models are reviewed under three categories: graph feature enhancement, graph topology enhancement, and GNN architecture enhancement.
arXiv Detail & Related papers (2023-08-16T09:12:21Z)
- A Survey on Explainability of Graph Neural Networks [4.612101932762187]
Graph neural networks (GNNs) are powerful graph-based deep-learning models.
This survey aims to provide a comprehensive overview of the existing explainability techniques for GNNs.
arXiv Detail & Related papers (2023-06-02T23:36:49Z)
- Transferability of coVariance Neural Networks and Application to Interpretable Brain Age Prediction using Anatomical Features [119.45320143101381]
Graph convolutional networks (GCN) leverage topology-driven graph convolutional operations to combine information across the graph for inference tasks.
We have studied GCNs with covariance matrices as graphs, in the form of coVariance neural networks (VNNs). VNNs inherit the scale-free data-processing architecture from GCNs, and here we show that VNNs exhibit transferability of performance over datasets whose covariance matrices converge to a limit object.
arXiv Detail & Related papers (2023-05-02T22:15:54Z)
- On the Expressiveness and Generalization of Hypergraph Neural Networks [77.65788763444877]
This extended abstract describes a framework for analyzing the expressiveness, learning, and (structural) generalization of hypergraph neural networks (HyperGNNs).
Specifically, we focus on how HyperGNNs can learn from finite datasets and generalize structurally to graph reasoning problems of arbitrary input sizes.
arXiv Detail & Related papers (2023-03-09T18:42:18Z)
- MentorGNN: Deriving Curriculum for Pre-Training GNNs [61.97574489259085]
We propose an end-to-end model named MentorGNN that aims to supervise the pre-training process of GNNs across graphs.
We shed new light on the problem of domain adaptation on relational data (i.e., graphs) by deriving a natural and interpretable upper bound on the generalization error of the pre-trained GNNs.
arXiv Detail & Related papers (2022-08-21T15:12:08Z)
- An Analysis of Attentive Walk-Aggregating Graph Neural Networks [34.866935881726256]
Graph neural networks (GNNs) have been shown to possess strong representation power.
We propose a novel GNN model, called AWARE, that aggregates information about the walks in the graph using attention schemes.
arXiv Detail & Related papers (2021-10-06T11:41:12Z)
- Node Masking: Making Graph Neural Networks Generalize and Scale Better [71.51292866945471]
Graph Neural Networks (GNNs) have received a lot of interest in recent times.
In this paper, we utilize some theoretical tools to better visualize the operations performed by state-of-the-art spatial GNNs.
We introduce a simple concept, Node Masking, that allows GNNs to generalize and scale better; a rough illustrative sketch of the general idea follows this entry.
arXiv Detail & Related papers (2020-01-17T06:26:40Z)
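The summary above does not spell out how nodes are masked; as a rough, hypothetical illustration of the general idea, the sketch below randomly zeroes out the feature rows of a subset of nodes before a message-passing step. The masking rate, function name, and the choice to zero features rather than remove nodes are assumptions and may differ from the paper's actual method.

```python
# Rough, hypothetical sketch of node masking: randomly drop the features of
# a subset of nodes before a GNN layer during training. The masking rate and
# the decision to zero features (rather than remove nodes entirely) are
# assumptions, not the paper's exact formulation.
import numpy as np

def mask_nodes(x: np.ndarray, mask_rate: float = 0.2,
               rng: np.random.Generator | None = None) -> np.ndarray:
    """Zero out the feature rows of a randomly chosen subset of nodes."""
    rng = rng or np.random.default_rng()
    keep = rng.random(x.shape[0]) >= mask_rate  # True for nodes that stay visible
    return x * keep[:, None]                    # masked nodes become all-zero rows

# Usage: apply masking to node features before a message-passing layer.
x = np.ones((5, 3))                             # 5 nodes, 3 features each
x_masked = mask_nodes(x, mask_rate=0.4, rng=np.random.default_rng(1))
```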
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the accuracy of the information provided and is not responsible for any consequences arising from its use.