Graph Attention Network for Node Regression on Random Geometric Graphs with Erdős--Rényi contamination
- URL: http://arxiv.org/abs/2601.23239v1
- Date: Fri, 30 Jan 2026 18:09:03 GMT
- Title: Graph Attention Network for Node Regression on Random Geometric Graphs with Erdős--Rényi contamination
- Authors: Somak Laha, Suqi Liu, Morgane Austern
- Abstract summary: We propose and analyze a carefully designed, task-specific GAT that constructs denoised proxy features for regression. We prove that regressing the response variables on the proxies achieves lower error asymptotically in estimating the regression coefficient. We also demonstrate the effectiveness of the attention mechanism in several node regression tasks.
- Score: 2.982218441172364
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph attention networks (GATs) are widely used and often appear robust to noise in node covariates and edges, yet rigorous statistical guarantees demonstrating a provable advantage of GATs over non-attention graph neural networks~(GNNs) are scarce. We partially address this gap for node regression with graph-based errors-in-variables models under simultaneous covariate and edge corruption: responses are generated from latent node-level covariates, but only noise-perturbed versions of the latent covariates are observed; and the sample graph is a random geometric graph created from the node covariates but contaminated by independent Erdős--Rényi edges. We propose and analyze a carefully designed, task-specific GAT that constructs denoised proxy features for regression. We prove that regressing the response variables on the proxies achieves lower error asymptotically in (a) estimating the regression coefficient compared to the ordinary least squares (OLS) estimator on the noisy node covariates, and (b) predicting the response for an unlabelled node compared to a vanilla graph convolutional network~(GCN) -- under mild growth conditions. Our analysis leverages high-dimensional geometric tail bounds and concentration for neighbourhood counts and sample covariances. We verify our theoretical findings through experiments on synthetically generated data. We also perform experiments on real-world graphs and demonstrate the effectiveness of the attention mechanism in several node regression tasks.
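The abstract's data model can be made concrete with a short simulation. The sketch below (illustrative only; the parameter names `n`, `d`, `r`, `q` and all values are assumptions, not taken from the paper) generates latent covariates, responses from those covariates, noise-perturbed observed covariates, and a random geometric graph contaminated by independent Erdős--Rényi edges:

```python
# Minimal sketch of the errors-in-variables node-regression setting:
# latent covariates Z generate responses y, but only noisy X is observed,
# and the RGG built from Z is contaminated by Erdos-Renyi noise edges.
import numpy as np

rng = np.random.default_rng(0)

n, d = 200, 2          # nodes, latent dimension (illustrative)
r, q = 0.3, 0.02       # RGG connection radius, ER contamination rate
sigma_x, sigma_y = 0.1, 0.1

# Latent node covariates and responses generated from them.
Z = rng.uniform(0.0, 1.0, size=(n, d))
beta = np.array([1.5, -0.7])
y = Z @ beta + sigma_y * rng.standard_normal(n)

# Only noise-perturbed versions of the latent covariates are observed.
X = Z + sigma_x * rng.standard_normal((n, d))

# Random geometric graph built from the latent covariates ...
dist = np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=-1)
A_rgg = (dist < r) & ~np.eye(n, dtype=bool)

# ... contaminated by independent Erdos-Renyi edges.
noise = np.triu(rng.random((n, n)) < q, 1)
A = (A_rgg | noise | noise.T).astype(int)
```

On data like this, OLS of `y` on the noisy `X` suffers attenuation bias, which is the baseline the paper's GAT-based proxy regression is compared against.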
Related papers
- Semi-Supervised Learning on Graphs using Graph Neural Networks [7.3886152750469]
Graph neural networks (GNNs) work remarkably well in semi-supervised node regression. We study an aggregate-and-readout model that encompasses several common message passing architectures. We prove a sharp non-asymptotic risk bound that separates approximation, convergence, and optimization errors.
arXiv Detail & Related papers (2026-02-19T06:25:13Z)
- Investigating GNN Convergence on Large Randomly Generated Graphs with Realistic Node Feature Correlations [0.0]
We will introduce a novel method to generate random graphs that have correlated node features. The node features will be sampled in such a manner as to ensure correlation between neighbouring nodes. A theoretical analysis will strongly indicate that convergence can be avoided in some cases.
arXiv Detail & Related papers (2026-02-18T02:36:33Z)
- Generalization of Geometric Graph Neural Networks with Lipschitz Loss Functions [84.01980526069075]
We study the generalization capabilities of geometric graph neural networks (GNNs). We prove a generalization gap between the optimal empirical risk and the optimal statistical risk of this GNN. We verify this theoretical result with experiments on multiple real-world datasets.
arXiv Detail & Related papers (2024-09-08T18:55:57Z)
- Generalization of Graph Neural Networks is Robust to Model Mismatch [84.01980526069075]
Graph neural networks (GNNs) have demonstrated their effectiveness in various tasks supported by their generalization capabilities.
In this paper, we examine GNNs that operate on geometric graphs generated from manifold models.
Our analysis reveals the robustness of the GNN generalization in the presence of such model mismatch.
arXiv Detail & Related papers (2024-08-25T16:00:44Z)
- Geometric Graph Filters and Neural Networks: Limit Properties and Discriminability Trade-offs [122.06927400759021]
We study the relationship between a graph neural network (GNN) and a manifold neural network (MNN) when the graph is constructed from a set of points sampled from the manifold.
We prove non-asymptotic error bounds showing that convolutional filters and neural networks on these graphs converge to convolutional filters and neural networks on the continuous manifold.
arXiv Detail & Related papers (2023-05-29T08:27:17Z)
- Addressing Heterophily in Node Classification with Graph Echo State Networks [11.52174067809364]
We address the challenges of heterophilic graphs with Graph Echo State Network (GESN) for node classification.
GESN is a reservoir computing model for graphs, where node embeddings are computed by an untrained message-passing function.
Our experiments show that reservoir models are able to achieve better or comparable accuracy with respect to most fully trained deep models.
arXiv Detail & Related papers (2023-05-14T19:42:31Z)
- OrthoReg: Improving Graph-regularized MLPs via Orthogonality Regularization [66.30021126251725]
Graph Neural Networks (GNNs) are currently dominating in modeling graph-structured data.
Graph-regularized MLPs (GR-MLPs) implicitly inject the graph structure information into model weights, while their performance can hardly match that of GNNs in most tasks.
We show that GR-MLPs suffer from dimensional collapse, a phenomenon in which a few largest eigenvalues dominate the embedding space.
We propose OrthoReg, a novel GR-MLP model to mitigate the dimensional collapse issue.
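The dimensional collapse mentioned above is easy to diagnose numerically. The following sketch (illustrative only, not OrthoReg itself; all sizes and scales are assumed for the toy example) inspects the eigenvalue spectrum of an embedding covariance matrix, where collapse means a few top eigenvalues carry almost all of the variance:

```python
# Toy diagnostic for dimensional collapse: check how much of the total
# variance of node embeddings is concentrated in the top eigenvalues
# of their covariance matrix.
import numpy as np

rng = np.random.default_rng(0)

# Toy embeddings: 1000 nodes, 64 dimensions, with variance concentrated
# in the first two directions to mimic a collapsed embedding space.
n, d = 1000, 64
scales = np.full(d, 0.01)
scales[:2] = 10.0
H = rng.standard_normal((n, d)) * scales

cov = np.cov(H, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]

# Fraction of total variance explained by the top-2 eigenvalues;
# a value near 1 signals dimensional collapse.
top2_ratio = eigvals[:2].sum() / eigvals.sum()
```

Orthogonality regularization, as in OrthoReg, aims to keep this ratio low by encouraging embedding dimensions to stay decorrelated.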
arXiv Detail & Related papers (2023-01-31T21:20:48Z)
- Resisting Graph Adversarial Attack via Cooperative Homophilous Augmentation [60.50994154879244]
Recent studies show that Graph Neural Networks are vulnerable and easily fooled by small perturbations.
In this work, we focus on the emerging but critical attack, namely, the Graph Injection Attack (GIA).
We propose a general defense framework CHAGNN against GIA through cooperative homophilous augmentation of graph data and model.
arXiv Detail & Related papers (2022-11-15T11:44:31Z)
- From Spectral Graph Convolutions to Large Scale Graph Convolutional Networks [0.0]
Graph Convolutional Networks (GCNs) have been shown to be a powerful concept that has been successfully applied to a large variety of tasks.
We study the theory that paved the way to the definition of GCN, including related parts of classical graph theory.
arXiv Detail & Related papers (2022-07-12T16:57:08Z)
- Implicit vs Unfolded Graph Neural Networks [29.803948965931212]
We show that implicit (IGNN) and unfolded (UGNN) graph neural networks can achieve strong node classification accuracy across disparate regimes. While IGNN is substantially more memory-efficient, UGNN models support unique, integrated graph attention mechanisms and propagation rules.
arXiv Detail & Related papers (2021-11-12T07:49:16Z)
- Stability of Graph Convolutional Neural Networks to Stochastic Perturbations [122.12962842842349]
Graph convolutional neural networks (GCNNs) are nonlinear processing tools to learn representations from network data.
Current analysis considers deterministic perturbations but fails to provide relevant insights when topological changes are random.
This paper investigates the stability of GCNNs to stochastic graph perturbations induced by link losses.
arXiv Detail & Related papers (2021-06-19T16:25:28Z)
- Posterior Consistency of Semi-Supervised Regression on Graphs [14.65047105712853]
Graph-based semi-supervised regression (SSR) is the problem of estimating the value of a function on a weighted graph from its values (labels) on a small subset of the vertices.
This paper is concerned with the consistency of SSR in the context of classification, in the setting where the labels have small noise and the underlying graph weighting is consistent with well-clustered nodes.
We present a Bayesian formulation of SSR in which the weighted graph defines a Gaussian prior, using a graph Laplacian, and the labeled data defines a likelihood.
arXiv Detail & Related papers (2020-07-25T00:00:19Z)
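The Bayesian formulation of graph-based SSR described in the last entry can be sketched in a few lines. Below is a minimal numpy illustration (the chain graph, `tau`, `sigma`, and the label placement are all assumed toy choices, not from the paper): a Gaussian prior with precision built from the graph Laplacian, a Gaussian likelihood on the labelled vertices, and the resulting posterior mean.

```python
# Gaussian prior N(0, (L + tau*I)^{-1}) from the graph Laplacian L,
# Gaussian label likelihood, posterior mean via one linear solve.
import numpy as np

n = 10
# Weighted chain graph: adjacency W and graph Laplacian L = D - W.
W = np.zeros((n, n))
for i in range(n - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0
L = np.diag(W.sum(axis=1)) - W

tau, sigma = 0.1, 0.1              # prior regularisation, label noise std
P = L + tau * np.eye(n)            # prior precision of the latent function

labelled = np.array([0, n - 1])    # labels on a small subset of vertices
y = np.array([0.0, 1.0])
M = np.zeros((len(labelled), n))
M[np.arange(len(labelled)), labelled] = 1.0  # selects labelled vertices

# Posterior precision and posterior mean of the latent function f.
post_prec = P + (M.T @ M) / sigma**2
f_hat = np.linalg.solve(post_prec, (M.T @ y) / sigma**2)
```

The posterior mean smoothly interpolates the two labels along the chain, which is exactly the kind of estimator whose consistency the paper studies.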
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.