Semi-Supervised Learning on Graphs using Graph Neural Networks
- URL: http://arxiv.org/abs/2602.17115v1
- Date: Thu, 19 Feb 2026 06:25:13 GMT
- Title: Semi-Supervised Learning on Graphs using Graph Neural Networks
- Authors: Juntong Chen, Claire Donnat, Olga Klopp, Johannes Schmidt-Hieber,
- Abstract summary: Graph neural networks (GNNs) work remarkably well in semi-supervised node regression. We study an aggregate-and-readout model that encompasses several common message passing architectures. We prove a sharp non-asymptotic risk bound that separates approximation, stochastic, and optimization errors.
- Score: 7.3886152750469
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph neural networks (GNNs) work remarkably well in semi-supervised node regression, yet a rigorous theory explaining when and why they succeed remains lacking. To address this gap, we study an aggregate-and-readout model that encompasses several common message passing architectures: node features are first propagated over the graph then mapped to responses via a nonlinear function. For least-squares estimation over GNNs with linear graph convolutions and a deep ReLU readout, we prove a sharp non-asymptotic risk bound that separates approximation, stochastic, and optimization errors. The bound makes explicit how performance scales with the fraction of labeled nodes and graph-induced dependence. Approximation guarantees are further derived for graph-smoothing followed by smooth nonlinear readouts, yielding convergence rates that recover classical nonparametric behavior under full supervision while characterizing performance when labels are scarce. Numerical experiments validate our theory, providing a systematic framework for understanding GNN performance and limitations.
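The aggregate-and-readout model described in the abstract (linear graph convolutions followed by a deep ReLU readout, fit by least squares on the labeled nodes) can be sketched as follows. This is a minimal NumPy illustration, not the paper's code: the graph, shapes, number of propagation steps, and symmetric normalization are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 20, 4                               # nodes, feature dimension

# Random undirected graph with self-loops, symmetric-normalized adjacency
# D^{-1/2} A D^{-1/2} (one common choice of propagation operator).
A = (rng.random((n, n)) < 0.3).astype(float)
A = np.triu(A, 1)
A = A + A.T + np.eye(n)
deg = A.sum(1)
S = A / np.sqrt(np.outer(deg, deg))

X = rng.standard_normal((n, d))            # node features

# Step 1 (aggregate): linear graph convolution propagates features over edges.
H = S @ (S @ X)                            # two propagation steps

# Step 2 (readout): a small ReLU network maps each node's smoothed features
# to a scalar response.
W1 = rng.standard_normal((d, 8))
W2 = rng.standard_normal((8, 1))
pred = np.maximum(H @ W1, 0.0) @ W2        # shape (n, 1)

# Semi-supervised least squares: the empirical risk averages squared error
# over the labeled nodes only.
labeled = rng.choice(n, size=5, replace=False)
y = rng.standard_normal((n, 1))            # placeholder responses
risk = float(np.mean((pred[labeled] - y[labeled]) ** 2))
```

The fraction of labeled nodes (`5 / n` here) is exactly the quantity the paper's risk bound makes explicit.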
Related papers
- Convergence of gradient based training for linear Graph Neural Networks [5.079602839359521]
We show that the gradient flow training of a linear GNN with mean squared loss converges to the global minimum at an exponential rate. We also study the convergence of linear GNNs under gradient descent training.
arXiv Detail & Related papers (2025-01-24T12:18:30Z)
- Generalization of Geometric Graph Neural Networks with Lipschitz Loss Functions [84.01980526069075]
We study the generalization capabilities of geometric graph neural networks (GNNs). We prove a generalization gap between the optimal empirical risk and the optimal statistical risk of this GNN. We verify this theoretical result with experiments on multiple real-world datasets.
arXiv Detail & Related papers (2024-09-08T18:55:57Z)
- Non-convolutional Graph Neural Networks [46.79328529882998]
We design a simple graph learning module entirely free of convolution operators, coined random walk with unifying memory (RUM) neural network.
RUM achieves competitive performance, but is also robust, memory-efficient, scalable, and faster than the simplest convolutional GNNs.
arXiv Detail & Related papers (2024-07-31T21:29:26Z)
- A Manifold Perspective on the Statistical Generalization of Graph Neural Networks [84.01980526069075]
We take a manifold perspective to establish the statistical generalization theory of GNNs on graphs sampled from a manifold in the spectral domain. We prove that the generalization bounds of GNNs decrease linearly with the size of the graphs in the logarithmic scale, and increase linearly with the spectral continuity constants of the filter functions.
arXiv Detail & Related papers (2024-06-07T19:25:02Z)
- Learning to Reweight for Graph Neural Network [63.978102332612906]
Graph Neural Networks (GNNs) show promising results for graph tasks.
Existing GNNs' generalization ability will degrade when there exist distribution shifts between testing and training graph data.
We propose a novel nonlinear graph decorrelation method, which can substantially improve the out-of-distribution generalization ability.
arXiv Detail & Related papers (2023-12-19T12:25:10Z)
- Robust Graph Neural Network based on Graph Denoising [10.564653734218755]
Graph Neural Networks (GNNs) have emerged as a prominent alternative to address learning problems dealing with non-Euclidean datasets.
This work proposes a robust implementation of GNNs that explicitly accounts for the presence of perturbations in the observed topology.
arXiv Detail & Related papers (2023-12-11T17:43:57Z)
- NodeFormer: A Scalable Graph Structure Learning Transformer for Node Classification [70.51126383984555]
We introduce a novel all-pair message passing scheme for efficiently propagating node signals between arbitrary nodes.
The efficient computation is enabled by a kernelized Gumbel-Softmax operator.
Experiments demonstrate the promising efficacy of the method in various tasks including node classification on graphs.
arXiv Detail & Related papers (2023-06-14T09:21:15Z)
- MentorGNN: Deriving Curriculum for Pre-Training GNNs [61.97574489259085]
We propose an end-to-end model named MentorGNN that aims to supervise the pre-training process of GNNs across graphs.
We shed new light on the problem of domain adaption on relational data (i.e., graphs) by deriving a natural and interpretable upper bound on the generalization error of the pre-trained GNNs.
arXiv Detail & Related papers (2022-08-21T15:12:08Z)
- Understanding Non-linearity in Graph Neural Networks from the Bayesian-Inference Perspective [33.01636846541052]
Graph neural networks (GNNs) have shown superiority in many prediction tasks over graphs.
We investigate the functions of non-linearity in GNNs for node classification tasks.
arXiv Detail & Related papers (2022-07-22T19:36:12Z)
- Implicit Graph Neural Networks [46.0589136729616]
We propose a graph learning framework called Implicit Graph Neural Networks (IGNN)
IGNNs consistently capture long-range dependencies and outperform state-of-the-art GNN models.
arXiv Detail & Related papers (2020-09-14T06:04:55Z)
- Fast Learning of Graph Neural Networks with Guaranteed Generalizability: One-hidden-layer Case [93.37576644429578]
Graph neural networks (GNNs) have made great progress recently on learning from graph-structured data in practice.
We provide a theoretically-grounded generalizability analysis of GNNs with one hidden layer for both regression and binary classification problems.
arXiv Detail & Related papers (2020-06-25T00:45:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.