Manifold limit for the training of shallow graph convolutional neural networks
- URL: http://arxiv.org/abs/2601.06025v1
- Date: Fri, 09 Jan 2026 18:59:20 GMT
- Title: Manifold limit for the training of shallow graph convolutional neural networks
- Authors: Johanna Tengler, Christoph Brune, José A. Iglesias
- Abstract summary: We study the consistency of the training of shallow graph convolutional neural networks (GCNNs) on proximity graphs of sampled point clouds. We prove $\Gamma$-convergence of regularized empirical risk minimization functionals and corresponding convergence of their global minimizers.
- Score: 1.2744523252873352
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the discrete-to-continuum consistency of the training of shallow graph convolutional neural networks (GCNNs) on proximity graphs of sampled point clouds under a manifold assumption. Graph convolution is defined spectrally via the graph Laplacian, whose low-frequency spectrum approximates that of the Laplace-Beltrami operator of the underlying smooth manifold, and shallow GCNNs of possibly infinite width are linear functionals on the space of measures on the parameter space. From this functional-analytic perspective, graph signals are seen as spatial discretizations of functions on the manifold, which leads to a natural notion of training data consistent across graph resolutions. To enable convergence results, the continuum parameter space is chosen as a weakly compact product of unit balls, with Sobolev regularity imposed on the output weight and bias, but not on the convolutional parameter. The corresponding discrete parameter spaces inherit this spectral decay, and are additionally restricted by a frequency cutoff adapted to the informative spectral window of the graph Laplacians. Under these assumptions, we prove $\Gamma$-convergence of regularized empirical risk minimization functionals and corresponding convergence of their global minimizers, in the sense of weak convergence of the parameter measures and uniform convergence of the functions over compact sets. This provides a formalization of mesh and sample independence for the training of such networks.
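The spectral notion of graph convolution used in the abstract can be made concrete in a few lines. Below is a minimal sketch, assuming an $\varepsilon$-proximity graph with truncated Gaussian weights, the unnormalized graph Laplacian, a sample from the unit circle, and an illustrative filter $h(\lambda) = e^{-\lambda}$ with frequency cutoff $K$; these are common choices for exposition, not the paper's exact construction or parameter spaces.

```python
# Minimal sketch of spectral graph convolution on a proximity graph.
# Assumptions (not from the paper): epsilon-graph with Gaussian weights,
# unnormalized Laplacian, sample drawn from the unit circle in R^2.
import numpy as np
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(0)

# Sample n points from a 1-D manifold (the unit circle) embedded in R^2.
n = 200
theta = rng.uniform(0.0, 2.0 * np.pi, size=n)
X = np.stack([np.cos(theta), np.sin(theta)], axis=1)

# Epsilon-proximity graph: connect points closer than eps, Gaussian weights.
eps = 0.3
D = squareform(pdist(X))
W = np.exp(-(D / eps) ** 2) * (D < eps)
np.fill_diagonal(W, 0.0)

# Unnormalized graph Laplacian L = diag(W 1) - W.
L = np.diag(W.sum(axis=1)) - W

# Spectral graph convolution: expand the signal in the Laplacian
# eigenbasis, apply a filter to the lowest K frequencies only.
evals, evecs = np.linalg.eigh(L)
K = 10                                  # frequency cutoff
h = lambda lam: np.exp(-lam)            # illustrative spectral filter
f = np.sin(3 * theta)                   # graph signal = sampled function
coeffs = evecs[:, :K].T @ f             # low-frequency coefficients
f_filtered = evecs[:, :K] @ (h(evals[:K]) * coeffs)
```

Restricting the filter to the lowest $K$ eigenpairs mirrors the frequency cutoff adapted to the informative spectral window of the graph Laplacians mentioned in the abstract: only the low-frequency part of the discrete spectrum reliably approximates the Laplace-Beltrami spectrum.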
Related papers
- Pruning at Initialisation through the lens of Graphon Limit: Convergence, Expressivity, and Generalisation [16.628325681877556]
Pruning at Initialisation (PaI) methods discover sparse, trainable subnetworks before training, but their theoretical mechanisms remain elusive. In this work, we connect discrete PaI masks to graph limit theory via graphons, establishing the graphon limit of PaI masks. We derive two fundamental theoretical results: (i) a Universal Approximation Theorem for sparse networks that depends only on the dimension of active coordinate subspaces; and (ii) a Graphon-NTK generalisation bound demonstrating how the limit graphon geometry aligns with informative features.
arXiv Detail & Related papers (2026-02-06T13:02:47Z) - Data-Driven Graph Filters via Adaptive Spectral Shaping [10.449640808601199]
We introduce Adaptive Spectral Shaping, a data-driven framework for graph filtering. The framework provides compact spectral modules that plug into graph signal processing pipelines and graph neural networks.
arXiv Detail & Related papers (2026-02-03T16:20:49Z) - The Curvature Rate λ: A Scalar Measure of Input-Space Sharpness in Neural Networks [0.0]
Curvature influences generalization, robustness, and how reliably neural networks respond to small input perturbations. We introduce a scalar curvature measure defined directly in input space: the curvature rate $\lambda$. This rate tracks the emergence of high-frequency structure in the decision boundary.
arXiv Detail & Related papers (2025-11-03T10:46:03Z) - Point Cloud Denoising With Fine-Granularity Dynamic Graph Convolutional Networks [58.050130177241186]
Noise perturbations often corrupt 3-D point clouds, hindering downstream tasks such as surface reconstruction, rendering, and further processing.
This paper introduces fine-granularity dynamic graph convolutional networks, called GDGCN, as a novel approach to denoising 3-D point clouds.
arXiv Detail & Related papers (2024-11-21T14:19:32Z) - A Mean-Field Analysis of Neural Stochastic Gradient Descent-Ascent for Functional Minimax Optimization [90.87444114491116]
This paper studies minimax optimization problems defined over infinite-dimensional function classes of overparameterized two-layer neural networks.
We address (i) the convergence of the gradient descent-ascent algorithm and (ii) the representation learning of the neural networks.
Results show that the feature representation induced by the neural networks is allowed to deviate from the initial one by the magnitude of $O(\alpha^{-1})$, measured in terms of the Wasserstein distance.
arXiv Detail & Related papers (2024-04-18T16:46:08Z) - Geometric Graph Filters and Neural Networks: Limit Properties and Discriminability Trade-offs [122.06927400759021]
We study the relationship between a graph neural network (GNN) and a manifold neural network (MNN) when the graph is constructed from a set of points sampled from the manifold.
We prove non-asymptotic error bounds showing that convolutional filters and neural networks on these graphs converge to convolutional filters and neural networks on the continuous manifold. A small numerical check of the underlying spectral convergence appears in the sketch after this list.
arXiv Detail & Related papers (2023-05-29T08:27:17Z) - Geometric Scattering on Measure Spaces [15.819230791757906]
We introduce a general, unified model for geometric scattering on measure spaces. We consider finite measure spaces that are obtained from randomly sampling an unknown manifold. We propose two methods for constructing a data-driven graph on which the associated graph scattering transform approximates the scattering transform on the underlying manifold.
arXiv Detail & Related papers (2022-08-17T22:40:09Z) - Spectral-Spatial Global Graph Reasoning for Hyperspectral Image Classification [50.899576891296235]
Convolutional neural networks have been widely applied to hyperspectral image classification.
Recent methods attempt to address the limitations of such convolutional models by performing graph convolutions on spatial topologies.
arXiv Detail & Related papers (2021-06-26T06:24:51Z) - Semiparametric Nonlinear Bipartite Graph Representation Learning with Provable Guarantees [106.91654068632882]
We consider bipartite graphs and formalize their representation learning problem as a statistical estimation problem of parameters in a semiparametric exponential family distribution.
We show that the proposed objective is strongly convex in a neighborhood around the ground truth, so that a gradient descent-based method achieves linear convergence rate.
Our estimator is robust to any model misspecification within the exponential family, which is validated in extensive experiments.
arXiv Detail & Related papers (2020-03-02T16:40:36Z) - Convex Geometry and Duality of Over-parameterized Neural Networks [70.15611146583068]
We develop a convex analytic approach to analyze finite-width two-layer ReLU networks.
We show that an optimal solution to the regularized training problem can be characterized as extreme points of a convex set.
In higher dimensions, we show that the training problem can be cast as a finite dimensional convex problem with infinitely many constraints.
arXiv Detail & Related papers (2020-02-25T23:05:33Z)
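As a numerical counterpart to the discrete-to-continuum statements above (referenced from the Geometric Graph Filters entry), the following hedged sketch checks the low-frequency spectral approximation on the simplest manifold, the unit circle, where the Laplace-Beltrami eigenvalues are $k^2$, each nonzero one with multiplicity 2. The $\varepsilon$-graph with Gaussian weights and the $n$-dependent scaling of $\varepsilon$ are illustrative choices, not the exact constructions of any paper above; the scale factor is calibrated on the first nonzero eigenvalue rather than derived analytically.

```python
# Hedged numerical check: after a single scaling, the low graph-Laplacian
# spectrum on samples of the unit circle should show the Laplace-Beltrami
# pattern 0, 1, 1, 4, 4, 9, 9, ... (eigenvalues k^2, multiplicity 2).
import numpy as np
from scipy.spatial.distance import pdist, squareform

def circle_graph_laplacian(n, eps, seed=0):
    """Unnormalized Laplacian of an epsilon-graph on n circle samples."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, 2.0 * np.pi, size=n)
    X = np.stack([np.cos(theta), np.sin(theta)], axis=1)
    D = squareform(pdist(X))
    W = np.exp(-(D / eps) ** 2) * (D < eps)
    np.fill_diagonal(W, 0.0)
    return np.diag(W.sum(axis=1)) - W

for n in (100, 400, 1600):
    # eps shrinks with n (an illustrative rate, eps ~ n^(-1/3)).
    L = circle_graph_laplacian(n, eps=4.0 * n ** (-1.0 / 3.0))
    evals = np.linalg.eigvalsh(L)[:7]
    scale = evals[1]  # calibrate on the first nonzero eigenvalue
    # Expect approximately [0, 1, 1, 4, 4, 9, 9] as n grows.
    print(n, np.round(evals / scale, 2))
```

The scaled ratios should approach the continuum pattern as $n$ grows and $\varepsilon$ shrinks, illustrating why only the low-frequency spectral window is informative: higher graph frequencies do not track the manifold spectrum.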