A Short Note on Upper Bounds for Graph Neural Operator Convergence Rate
- URL: http://arxiv.org/abs/2510.20954v1
- Date: Thu, 23 Oct 2025 19:28:56 GMT
- Title: A Short Note on Upper Bounds for Graph Neural Operator Convergence Rate
- Authors: Roxanne Holden, Luana Ruiz
- Abstract summary: Spectral convergence of sampled graphs to graphons yields operator-level convergence rates. This note summarizes known bounds under no assumptions, global Lipschitz continuity, and piecewise-Lipschitz continuity, highlighting tradeoffs between assumptions and rates.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graphons, as limits of graph sequences, provide a framework for analyzing the asymptotic behavior of graph neural operators. Spectral convergence of sampled graphs to graphons yields operator-level convergence rates, enabling transferability analyses of GNNs. This note summarizes known bounds under no assumptions, global Lipschitz continuity, and piecewise-Lipschitz continuity, highlighting tradeoffs between assumptions and rates, and illustrating their empirical tightness on synthetic and real data.
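To make the spectral-convergence statement concrete, here is a minimal Python sketch (an illustration under assumptions, not the authors' code): graphs of growing size are sampled from a fixed Lipschitz graphon, and the top eigenvalues of the normalized adjacency A/n are compared against those of the graphon operator evaluated at the same sample points; the gap should shrink as n grows. The graphon W and all helper names are assumed for the example.

```python
# Minimal sketch (assumptions: the graphon W, all names) of the spectral
# convergence of sampled graphs to a graphon, as referenced in the abstract.
import numpy as np

def W(x, y):
    """A smooth, globally Lipschitz graphon on [0, 1]^2 (assumed example)."""
    return 0.5 * (1 + np.cos(np.pi * (x - y)))

def sample_graph(n, rng):
    """Sample an n-node graph: latent points u_i ~ U[0, 1], edges ~ Bern(W)."""
    u = rng.uniform(size=n)
    p = W(u[:, None], u[None, :])
    a = (rng.uniform(size=(n, n)) < p).astype(float)
    a = np.triu(a, 1)                      # no self-loops, symmetrize below
    return a + a.T, u

rng = np.random.default_rng(0)
for n in [100, 400, 1600]:
    A, u = sample_graph(n, rng)
    # The normalized adjacency A/n approximates the integral operator T_W,
    # so its leading eigenvalues should approach those of the graphon
    # operator evaluated on the same sample points as n grows.
    evals = np.linalg.eigvalsh(A / n)
    evals_w = np.linalg.eigvalsh(W(u[:, None], u[None, :]) / n)
    gap = np.max(np.abs(evals[-3:] - evals_w[-3:]))
    print(f"n={n:5d}  top-3 eigenvalue gap ~ {gap:.4f}")
```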
Related papers
- Semi-Supervised Learning on Graphs using Graph Neural Networks
Graph neural networks (GNNs) work remarkably well in semi-supervised node regression. We study an aggregate-and-readout model that encompasses several common message-passing architectures. We prove a sharp non-asymptotic risk bound that separates approximation, convergence, and optimization errors.
arXiv Detail & Related papers (2026-02-19T06:25:13Z)
- Pruning at Initialisation through the lens of Graphon Limit: Convergence, Expressivity, and Generalisation
Pruning at Initialisation (PaI) methods discover sparse, trainable networks before training, but their theoretical mechanisms remain elusive. In this work, we connect discrete pruning at initialisation to graph limit theory via graphons, establishing the graphon limit of PaI masks. We derive two fundamental theoretical results: (i) a Universal Approximation Theorem for sparse networks that depends only on the dimension of the active coordinate subspaces; and (ii) a Graphon-NTK generalisation bound demonstrating how the limit graphon's geometry aligns with informative features.
arXiv Detail & Related papers (2026-02-06T13:02:47Z) - QGraphLIME - Explaining Quantum Graph Neural Networks [0.48998185508205744]
Quantum graph neural networks offer a powerful paradigm for learning on graph-structured data. QGraphLIME treats model explanations as distributions over local surrogates fitted on structure-preserving perturbations of a graph.
arXiv Detail & Related papers (2025-10-07T08:39:13Z) - On the Convergence and Size Transferability of Continuous-depth Graph Neural Networks [3.3390650922528184]
Continuous-depth graph neural networks combine the structural inductive bias of GNNs with the continuous-depth architecture of neural ordinary differential equations (ODEs). We present a rigorous convergence analysis of graph neural ODEs with time-varying parameters in the infinite-node limit, providing theoretical insights into their size transferability.
arXiv Detail & Related papers (2025-10-04T19:59:21Z) - Generalization, Expressivity, and Universality of Graph Neural Networks on Attributed Graphs [53.27010448621372]
We analyze the universality and generalization of graph neural networks (GNNs) on graphs with node attributes. We prove a universal approximation theorem for GNNs and derive generalization bounds that hold on any data distribution of attributed graphs. Our work extends and unifies previous approaches, which either derived theory only for graphs without attributes, derived compact metrics under which GNNs are continuous but lack separation power, or derived metrics under which GNNs are continuous and separate points.
arXiv Detail & Related papers (2024-11-08T10:34:24Z) - Revealing Decurve Flows for Generalized Graph Propagation [108.80758541147418]
This study addresses the limitations of the traditional analysis of message passing, central to graph learning, by defining generalized propagation with directed and weighted graphs.
We include a preliminary exploration of learned propagation patterns in datasets, a first in the field.
arXiv Detail & Related papers (2024-02-13T14:13:17Z) - Graphon Pooling for Reducing Dimensionality of Signals and Convolutional
Operators on Graphs [131.53471236405628]
We present three methods that exploit the induced graphon representation of graphs and graph signals on partitions of [0, 1]^2 in graphon space.
We prove that those low-dimensional representations constitute a convergent sequence of graphs and graph signals.
We observe that graphon pooling performs significantly better than other approaches proposed in the literature when the dimensionality reduction ratios between layers are large (a minimal sketch of the partition-averaging construction appears after this list).
arXiv Detail & Related papers (2022-12-15T22:11:34Z)
- Stable and Transferable Hyper-Graph Neural Networks
We introduce an architecture for processing signals supported on hypergraphs via graph neural networks (GNNs).
We provide a framework for bounding the stability and transferability error of GNNs across arbitrary graphs via spectral similarity.
arXiv Detail & Related papers (2022-11-11T23:44:20Z)
- A Spectral Analysis of Graph Neural Networks on Dense and Sparse Graphs
We analyze how sparsity affects the graph spectra, and thus the performance of graph neural networks (GNNs) in node classification on dense and sparse graphs.
We show that GNNs can outperform spectral methods on sparse graphs, and illustrate these results with numerical examples on both synthetic and real graphs.
arXiv Detail & Related papers (2022-11-06T22:38:13Z)
- Graph Condensation via Receptive Field Distribution Matching
This paper focuses on creating a small graph to represent the original graph, so that GNNs trained on the size-reduced graph can make accurate predictions.
We view the original graph as a distribution of receptive fields and aim to synthesize a small graph whose receptive fields share a similar distribution.
arXiv Detail & Related papers (2022-06-28T02:10:05Z)
- Transferability of Graph Neural Networks: an Extended Graphon Approach
We study spectral graph convolutional neural networks (GCNNs).
In this paper, we consider a model of transferability based on graphon analysis.
arXiv Detail & Related papers (2021-09-21T10:59:01Z)
- Graph and graphon neural network stability
Graph neural networks (GNNs) are learning architectures that rely on knowledge of the graph structure to generate meaningful representations of network data.
We analyze GNN stability using kernel objects called graphons.
arXiv Detail & Related papers (2020-10-23T16:55:56Z)
- Posterior Consistency of Semi-Supervised Regression on Graphs
Graph-based semi-supervised regression (SSR) is the problem of estimating the value of a function on a weighted graph from its values (labels) on a small subset of the vertices.
This paper is concerned with the consistency of SSR in the context of classification, in the setting where the labels have small noise and the underlying graph weighting is consistent with well-clustered nodes.
We present a Bayesian formulation of SSR in which the weighted graph defines a Gaussian prior, through its graph Laplacian, and the labeled data define a likelihood (a minimal sketch of this Laplacian-prior construction also appears after this list).
arXiv Detail & Related papers (2020-07-25T00:00:19Z)
- Graph Pooling with Node Proximity for Hierarchical Representation Learning
We propose a novel graph pooling strategy that leverages node proximity to improve hierarchical representation learning of graph data together with its multi-hop topology.
Results show that the proposed graph pooling strategy is able to achieve state-of-the-art performance on a collection of public graph classification benchmark datasets.
arXiv Detail & Related papers (2020-06-19T13:09:44Z)
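As referenced in the graphon pooling entry above, here is a minimal Python sketch of the partition-averaging idea: both the adjacency matrix and the graph signal are averaged over the cells of a partition of the node set, corresponding to a partition of [0, 1] in graphon space. The function names and the equal-cell partition are illustrative assumptions, not code from that paper.

```python
# Minimal sketch (assumptions: equal cells, all names) of graphon pooling by
# partition averaging: the adjacency becomes an induced step graphon and the
# graph signal is averaged per cell.
import numpy as np

def graphon_pool(A, x, k):
    """Pool an n-node adjacency A and signal x down to k nodes by
    averaging over k roughly equal cells of the node interval."""
    n = A.shape[0]
    cells = np.array_split(np.arange(n), k)          # partition of node indices
    A_pool = np.empty((k, k))
    x_pool = np.empty(k)
    for i, ci in enumerate(cells):
        x_pool[i] = x[ci].mean()                     # average the signal per cell
        for j, cj in enumerate(cells):
            A_pool[i, j] = A[np.ix_(ci, cj)].mean()  # average each adjacency block
    return A_pool, x_pool

# Toy usage: pool a 200-node Erdos-Renyi graph and a random signal to 20 nodes.
rng = np.random.default_rng(1)
n, k = 200, 20
A = (rng.uniform(size=(n, n)) < 0.3).astype(float)
A = np.triu(A, 1); A = A + A.T
x = rng.normal(size=n)
A20, x20 = graphon_pool(A, x, k)
print(A20.shape, x20.shape)                          # (20, 20) (20,)
```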
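Likewise, for the Bayesian semi-supervised regression entry, here is a minimal sketch of the standard Laplacian-prior construction that summary describes: the graph Laplacian defines a Gaussian prior precision, the labeled vertices define a Gaussian likelihood, and the posterior mean reduces to a single linear solve. All names, the regularizer tau, and the noise level sigma are assumptions for illustration, not the paper's code.

```python
# Minimal sketch (assumed standard formulation) of Bayesian graph SSR:
# prior f ~ N(0, (L + tau*I)^{-1}), likelihood y_i ~ N(f_i, sigma^2) on the
# labeled vertices; the posterior mean is a linear solve.
import numpy as np

def ssr_posterior_mean(Wg, labeled, y, tau=1e-2, sigma=0.1):
    """Posterior mean of f given labels y on the vertex subset `labeled`."""
    n = Wg.shape[0]
    L = np.diag(Wg.sum(axis=1)) - Wg              # combinatorial graph Laplacian
    prior_prec = L + tau * np.eye(n)              # regularized prior precision
    H = np.zeros((len(labeled), n))
    H[np.arange(len(labeled)), labeled] = 1.0     # observation operator
    A = prior_prec + H.T @ H / sigma**2           # posterior precision
    b = H.T @ y / sigma**2
    return np.linalg.solve(A, b)

# Toy usage: two clusters joined by a weak edge, one label per cluster;
# the posterior mean should be roughly constant within each cluster.
n = 40
Wg = np.zeros((n, n))
Wg[:20, :20] = Wg[20:, 20:] = 1.0
Wg[0, 20] = Wg[20, 0] = 0.05                      # weak inter-cluster edge
np.fill_diagonal(Wg, 0.0)
f_hat = ssr_posterior_mean(Wg, labeled=np.array([0, 20]), y=np.array([1.0, -1.0]))
print(f_hat[:3], f_hat[-3:])
```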
This list is automatically generated from the titles and abstracts of the papers on this site.