Learning from one graph: transductive learning guarantees via the geometry of small random worlds
- URL: http://arxiv.org/abs/2509.06894v1
- Date: Mon, 08 Sep 2025 17:13:28 GMT
- Title: Learning from one graph: transductive learning guarantees via the geometry of small random worlds
- Authors: Nils Detering, Luca Galimberti, Anastasis Kratsios, Giulia Livieri, A. Martina Neuman
- Abstract summary: We develop new concentration-of-measure tools that leverage the geometric regularities of large graphs via low-dimensional metric embeddings. We establish two principal learning results: the first concerns arbitrary deterministic $k$-vertex graphs, and the second addresses random graphs that share key geometric properties with an Erd\H{o}s-R\'{e}nyi graph $\mathbf{G}=\mathbf{G}(k,p)$ in the regime $p \in \mathcal{O}((\log(k)/k)^{1/2})$.
- Score: 8.781923502647055
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Since their introduction by Kipf and Welling in $2017$, a primary use of graph convolutional networks is transductive node classification, where missing labels are inferred within a single observed graph and its feature matrix. Despite the widespread use of the network model, the statistical foundations of transductive learning remain limited, as standard inference frameworks typically rely on multiple independent samples rather than a single graph. In this work, we address these gaps by developing new concentration-of-measure tools that leverage the geometric regularities of large graphs via low-dimensional metric embeddings. The emergent regularities are captured using a random graph model; however, the methods remain applicable to deterministic graphs once observed. We establish two principal learning results. The first concerns arbitrary deterministic $k$-vertex graphs, and the second addresses random graphs that share key geometric properties with an Erd\H{o}s-R\'{e}nyi graph $\mathbf{G}=\mathbf{G}(k,p)$ in the regime $p \in \mathcal{O}((\log (k)/k)^{1/2})$. The first result serves as the basis for and illuminates the second. We then extend these results to the graph convolutional network setting, where additional challenges arise. Lastly, our learning guarantees remain informative even with a few labelled nodes $N$ and achieve the optimal nonparametric rate $\mathcal{O}(N^{-1/2})$ as $N$ grows.
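The sparsity regime $p \in \mathcal{O}((\log(k)/k)^{1/2})$ can be made concrete with a small sampler. The sketch below draws a $\mathbf{G}(k,p)$ graph at that sparsity level; the constant inside the $\mathcal{O}(\cdot)$ is set to $1$ here purely for illustration and is not taken from the paper:

```python
import math
import random

def sample_er_graph(k: int, seed: int = 0):
    """Sample an Erdos-Renyi graph G(k, p) with p = (log(k)/k)^(1/2),
    the sparsity regime studied in the paper (constant factor of 1 is
    an illustrative choice, not the paper's)."""
    p = math.sqrt(math.log(k) / k)
    rng = random.Random(seed)
    # Include each of the k*(k-1)/2 possible undirected edges independently
    # with probability p.
    edges = [(i, j) for i in range(k) for j in range(i + 1, k)
             if rng.random() < p]
    return edges, p

edges, p = sample_er_graph(200)
# The edge count concentrates around its mean p * k * (k - 1) / 2.
expected = p * 200 * 199 / 2
```

For $k = 200$ this gives $p \approx 0.16$, i.e. a sparse but connected-with-high-probability regime, which is the setting the paper's concentration tools are designed for.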
Related papers
- GraphTOP: Graph Topology-Oriented Prompting for Graph Neural Networks [66.07512871031163]
The "pre-training, adaptation" scheme pre-trains powerful Graph Neural Networks (GNNs) over unlabeled graph data. In the adaptation phase, graph prompting modifies input graph data with learnable prompts while keeping pre-trained GNN models frozen. We propose the first Graph Topology-Oriented Prompting (GraphTOP) framework to effectively adapt pre-trained GNN models for downstream tasks.
arXiv Detail & Related papers (2025-10-25T22:50:12Z) - Graph Semi-Supervised Learning for Point Classification on Data Manifolds [13.02854405679453]
We propose a graph semi-supervised learning framework for classification tasks on data manifolds. Motivated by the manifold hypothesis, we model data as points sampled from a low-dimensional manifold $\mathcal{M} \subset \mathbb{R}^F$. We show that, under uniform sampling from $\mathcal{M}$, the generalization gap of the semi-supervised task diminishes with increasing graph size.
arXiv Detail & Related papers (2025-06-13T19:52:54Z) - Generalization of Geometric Graph Neural Networks with Lipschitz Loss Functions [84.01980526069075]
We study the generalization capabilities of geometric graph neural networks (GNNs). We prove a generalization gap between the optimal empirical risk and the optimal statistical risk of this GNN. We verify this theoretical result with experiments on multiple real-world datasets.
arXiv Detail & Related papers (2024-09-08T18:55:57Z) - Two Heads Are Better Than One: Boosting Graph Sparse Training via Semantic and Topological Awareness [80.87683145376305]
Graph Neural Networks (GNNs) excel in various graph learning tasks but face computational challenges when applied to large-scale graphs.
We propose Graph Sparse Training (GST), which dynamically manipulates sparsity at the data level.
GST produces a sparse graph with maximum topological integrity and no performance degradation.
arXiv Detail & Related papers (2024-02-02T09:10:35Z) - A Gromov--Wasserstein Geometric View of Spectrum-Preserving Graph Coarsening [19.09507367362567]
This work studies graph coarsening from a different perspective, developing a theory for preserving graph distances.
The geometric approach is useful when working with a collection of graphs, such as in graph classification and regression.
Minimizing this difference can be done using the popular weighted kernel $K$-means method.
arXiv Detail & Related papers (2023-06-15T04:47:26Z) - Analysis and Approximate Inference of Large Random Kronecker Graphs [4.417282202068703]
We show that the adjacency of a large random Kronecker graph can be decomposed.
We propose a "denoise-and-solve" approach to infer the key graph parameters.
arXiv Detail & Related papers (2023-06-14T13:09:38Z) - Training Graph Neural Networks on Growing Stochastic Graphs [114.75710379125412]
Graph Neural Networks (GNNs) rely on graph convolutions to exploit meaningful patterns in networked data.
We propose to learn GNNs on very large graphs by leveraging the limit object of a sequence of growing graphs, the graphon.
arXiv Detail & Related papers (2022-10-27T16:00:45Z) - Effects of Graph Convolutions in Deep Networks [8.937905773981702]
We present a rigorous theoretical understanding of the effects of graph convolutions in multi-layer networks.
We show that a single graph convolution expands the regime of the distance between the means where multi-layer networks can classify the data.
We provide both theoretical and empirical insights into the performance of graph convolutions placed in different combinations among the layers of a network.
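The claim above, that a graph convolution enlarges the regime in which class means are separable, can be illustrated with a toy mean-aggregation convolution. This is a generic sketch of the operation, not the paper's exact construction:

```python
def graph_convolution(A, X):
    """One mean-aggregation graph convolution: each node's feature becomes
    the average over itself and its neighbours (adjacency with self-loops,
    row-normalised). A minimal illustrative sketch only."""
    n = len(A)
    out = []
    for i in range(n):
        nbrs = [j for j in range(n) if A[i][j] or j == i]
        out.append([sum(X[j][d] for j in nbrs) / len(nbrs)
                    for d in range(len(X[0]))])
    return out

# Two classes, two nodes each, with edges only within a class. Averaging
# over same-class neighbours shrinks the within-class spread while keeping
# the class means apart, making the data easier to classify.
A = [[0, 1, 0, 0],
     [1, 0, 0, 0],
     [0, 0, 0, 1],
     [0, 0, 1, 0]]
X = [[0.0], [2.0], [10.0], [12.0]]
H = graph_convolution(A, X)
# H is [[1.0], [1.0], [11.0], [11.0]]: zero within-class variance.
```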
arXiv Detail & Related papers (2022-04-20T08:24:43Z) - AnchorGAE: General Data Clustering via $O(n)$ Bipartite Graph Convolution [79.44066256794187]
We show how to convert a non-graph dataset into a graph by introducing the generative graph model, which is used to build graph convolution networks (GCNs).
A bipartite graph constructed by anchors is updated dynamically to exploit the high-level information behind data.
We theoretically prove that the simple update will lead to degeneration and a specific strategy is accordingly designed.
arXiv Detail & Related papers (2021-11-12T07:08:13Z) - RaWaNet: Enriching Graph Neural Network Input via Random Walks on Graphs [0.0]
Graph neural networks (GNNs) have gained increasing popularity and have shown very promising results for data that are represented by graphs.
We propose a random walk data processing of the graphs based on three selected lengths: (regular) walks of lengths 1 and 2, and a fractional walk of length $\gamma \in (0,1)$, in order to capture the different local and global dynamics on the graphs.
We test our method on various molecular datasets by passing the processed node features to the network in order to perform several classification and regression tasks.
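The integer-length walks above reduce to powers of the adjacency matrix; a dependency-free sketch of per-node walk-count features for lengths 1 and 2 follows. The fractional walk $A^\gamma$ for $\gamma \in (0,1)$ would additionally require a spectral decomposition ($A^\gamma = U \Lambda^\gamma U^\top$), which is omitted here to keep the sketch self-contained:

```python
def matmul(A, B):
    """Dense square matrix product (plain lists, no external libraries)."""
    n = len(A)
    return [[sum(A[i][t] * B[t][j] for t in range(n)) for j in range(n)]
            for i in range(n)]

def walk_features(A):
    """Per-node counts of walks of length 1 and 2, i.e. the row sums of
    A and A^2. Illustrative preprocessing only, not RaWaNet's full pipeline."""
    A2 = matmul(A, A)
    return [(sum(A[i]), sum(A2[i])) for i in range(len(A))]

# Path graph 0-1-2: the middle node starts more length-1 walks, and every
# node starts exactly two length-2 walks.
A = [[0, 1, 0],
     [1, 0, 1],
     [0, 1, 0]]
feats = walk_features(A)
# feats is [(1, 2), (2, 2), (1, 2)]
```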
arXiv Detail & Related papers (2021-09-15T20:04:01Z) - Line Graph Neural Networks for Link Prediction [71.00689542259052]
We consider the graph link prediction task, which is a classic graph analytical problem with many real-world applications.
In this formalism, a link prediction problem is converted to a graph classification task.
We propose to seek a radically different and novel path by making use of the line graphs in graph theory.
In particular, each node in a line graph corresponds to a unique edge in the original graph. Therefore, link prediction problems in the original graph can be equivalently solved as a node classification problem in its corresponding line graph, instead of a graph classification task.
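The edge-to-node reduction described above can be sketched directly: build the line graph $L(G)$ whose nodes are the edges of $G$, adjacent exactly when the original edges share an endpoint. This is a minimal illustration of the construction, not the paper's full link-prediction pipeline:

```python
from itertools import combinations

def line_graph(edges):
    """Build the line graph L(G): one node per edge of G, with two nodes
    adjacent iff their underlying edges share an endpoint."""
    nodes = [tuple(sorted(e)) for e in edges]
    adj = [tuple(sorted((u, v))) for u, v in combinations(nodes, 2)
           if set(u) & set(v)]
    return nodes, adj

# The line graph of a triangle is again a triangle: three edges, each pair
# sharing an endpoint.
nodes, adj = line_graph([(0, 1), (1, 2), (0, 2)])
```

A link-prediction query about edge $(u, v)$ in $G$ then becomes a node-classification query about the node $(u, v)$ of $L(G)$, which is the reduction the entry above describes.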
arXiv Detail & Related papers (2020-10-20T05:54:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.