Resolving Node Identifiability in Graph Neural Processes via Laplacian Spectral Encodings
- URL: http://arxiv.org/abs/2511.19037v1
- Date: Mon, 24 Nov 2025 12:20:36 GMT
- Title: Resolving Node Identifiability in Graph Neural Processes via Laplacian Spectral Encodings
- Authors: Zimo Yan, Zheng Xie, Chang Liu, Yuan Wang
- Abstract summary: We provide theory for a Laplacian positional encoding that is invariant to eigenvector sign flips and to basis rotations within eigenspaces. We prove that this encoding yields node identifiability from a constant number of observations and establish a sample-complexity separation from architectures constrained by the Weisfeiler-Lehman test.
- Score: 9.343292907600913
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Message passing graph neural networks are widely used for learning on graphs, yet their expressive power is limited by the one-dimensional Weisfeiler-Lehman test and can fail to distinguish structurally different nodes. We provide rigorous theory for a Laplacian positional encoding that is invariant to eigenvector sign flips and to basis rotations within eigenspaces. We prove that this encoding yields node identifiability from a constant number of observations and establishes a sample-complexity separation from architectures constrained by the Weisfeiler-Lehman test. The analysis combines a monotone link between shortest-path and diffusion distance, spectral trilateration with a constant set of anchors, and quantitative spectral injectivity with logarithmic embedding size. As an instantiation, pairing this encoding with a neural-process style decoder yields significant gains on a drug-drug interaction task on chemical graphs, improving both the area under the ROC curve and the F1 score and demonstrating the practical benefits of resolving theoretical expressiveness limitations with principled positional information.
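The invariance claim above admits a compact concrete form. Below is a minimal numpy sketch of one standard encoding with the stated invariances, built from per-eigenspace projection norms with an illustrative heat-kernel weight; it is an assumption that the paper's construction resembles this, and the function name and toy graph are ours.

```python
# A minimal sketch of a Laplacian positional encoding invariant to eigenvector
# sign flips and to orthogonal rotations within eigenspaces. Illustrative of
# the property discussed in the abstract, not the paper's exact construction.
import numpy as np

def spectral_encoding(A, t=1.0, tol=1e-8):
    """Per-node encoding from eigenspace projection norms of the Laplacian.

    For each eigenspace block V_lam, the squared row norm ||P_lam e_i||^2 of
    the projector P_lam = V_lam V_lam^T is unchanged by sign flips and by any
    orthogonal change of basis chosen inside that eigenspace.
    """
    L = np.diag(A.sum(axis=1)) - A          # combinatorial graph Laplacian
    lam, V = np.linalg.eigh(L)              # eigenvalues in ascending order
    # group eigenvalues that coincide up to `tol` into eigenspaces
    groups, start = [], 0
    for k in range(1, len(lam) + 1):
        if k == len(lam) or lam[k] - lam[start] > tol:
            groups.append((lam[start], V[:, start:k]))
            start = k
    feats = np.zeros((A.shape[0], len(groups)))
    for j, (lv, Vlam) in enumerate(groups):
        # heat-kernel weight e^{-t lam} times squared projection norm per node
        feats[:, j] = np.exp(-t * lv) * (Vlam ** 2).sum(axis=1)
    return feats

# toy graph: flipping any eigenvector's sign cannot change the output,
# since only squared row norms of each eigenspace block are used
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(spectral_encoding(A))
```

Because replacing V_lam by V_lam Q for any orthogonal Q (in particular a sign flip) leaves V_lam V_lam^T unchanged, the encoding is well defined even on graphs with repeated Laplacian eigenvalues.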
Related papers
- Manifold limit for the training of shallow graph convolutional neural networks [1.2744523252873352]
We study the consistency of the training of shallow graph convolutional neural networks (GCNNs) on proximity graphs of sampled point clouds. We prove $\Gamma$-convergence of regularized empirical risk minimization functionals and the corresponding convergence of their global minimizers. A hedged sketch of this setting follows below.
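The sketch builds an epsilon-proximity graph on points sampled from a circle and checks that the rescaled graph Laplacian nearly reproduces an eigenfunction of the manifold Laplacian; the kernel, rescaling constant, and test function are illustrative assumptions, not the paper's.

```python
# A minimal sketch of a proximity graph on a sampled manifold (the circle).
import numpy as np

rng = np.random.default_rng(0)
n, eps = 400, 0.3
theta = rng.uniform(0, 2 * np.pi, n)
X = np.stack([np.cos(theta), np.sin(theta)], axis=1)   # samples on the circle

# epsilon-proximity graph: connect points closer than eps (excluding self)
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
W = ((D < eps) & (D > 0)).astype(float)
L = np.diag(W.sum(axis=1)) - W                         # graph Laplacian

# the rescaled Laplacian should act like a multiple of the manifold Laplacian;
# cos(theta) is an eigenfunction on the circle, so L_n f ~ c * f for large n
L_n = L / (n * eps ** 3)        # rescaling up to a kernel-dependent constant
f = np.cos(theta)
c = (f @ (L_n @ f)) / (f @ f)   # best scalar c in least squares
residual = np.linalg.norm(L_n @ f - c * f) / np.linalg.norm(f)
print(round(c, 3), round(residual, 3))  # small residual = near-eigenfunction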
arXiv Detail & Related papers (2026-01-09T18:59:20Z) - Superposition in Graph Neural Networks [11.888196115363298]
We study superposition, the sharing of directions by multiple features, directly in the latent space of graph neural networks (GNNs). Across GCN/GIN/GAT we find: increasing width produces a phase pattern in overlap; topology imprints overlap onto node-level features that pooling partially remixes into task-aligned graph axes.
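The overlap statistic such a study needs is easy to make concrete. The sketch below measures, for a set of feature directions in a width-m latent space, the mean largest |cosine| with any other feature; random directions stand in for learned GNN features, so this shows the measurement, not the reported phase pattern.

```python
# A minimal sketch of a superposition-overlap statistic: how much do feature
# directions in a width-m space share directions? The "features" here are
# random unit vectors, purely illustrative stand-ins for learned features.
import numpy as np

def mean_max_overlap(F):
    """Mean over features of the largest |cosine| with any other feature."""
    F = F / np.linalg.norm(F, axis=1, keepdims=True)
    G = np.abs(F @ F.T)
    np.fill_diagonal(G, 0.0)
    return G.max(axis=1).mean()

rng = np.random.default_rng(0)
k = 64                                   # number of features to embed
for width in (8, 16, 32, 64, 128):
    F = rng.normal(size=(k, width))      # k feature directions in R^width
    print(width, round(mean_max_overlap(F), 3))
# overlap shrinks as width grows: once there are more dimensions than
# features, directions can be nearly orthogonal and superposition relaxes
```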
arXiv Detail & Related papers (2025-08-31T16:43:29Z) - Sheaf Graph Neural Networks via PAC-Bayes Spectral Optimization [13.021238902084647]
Over-smoothing in Graph Neural Networks (GNNs) causes distinct node features to collapse. We introduce SGPC (Sheaf GNNs with PAC-Bayes), a unified architecture that combines cellular-sheaf message passing with several complementary mechanisms. Experiments on nine homophilic and heterophilic benchmarks show that SGPC outperforms state-of-the-art spectral and sheaf-based GNNs.
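For readers unfamiliar with cellular sheaves, the sketch below performs one sheaf-diffusion message-passing step with random restriction maps; the PAC-Bayes spectral optimization and SGPC's other mechanisms are not reproduced, and the graph, stalk dimension, and step size are illustrative assumptions.

```python
# A minimal sketch of one cellular-sheaf diffusion step, the message-passing
# core of sheaf GNNs. Restriction maps are random here; in practice they
# would be learned.
import numpy as np

rng = np.random.default_rng(0)
edges = [(0, 1), (1, 2), (2, 0), (2, 3)]   # toy graph
n, d = 4, 2                                # 4 nodes, stalk dimension 2
x = rng.normal(size=(n, d))                # one stalk vector per node

# one restriction map per (node, edge) incidence
F = {}
for e, (u, v) in enumerate(edges):
    F[(u, e)] = rng.normal(size=(d, d))
    F[(v, e)] = rng.normal(size=(d, d))

def sheaf_laplacian(x):
    """(L_F x)_u = sum over edges e=(u,v) of F_{u,e}^T (F_{u,e} x_u - F_{v,e} x_v)."""
    out = np.zeros_like(x)
    for e, (u, v) in enumerate(edges):
        diff = F[(u, e)] @ x[u] - F[(v, e)] @ x[v]
        out[u] += F[(u, e)].T @ diff
        out[v] -= F[(v, e)].T @ diff
    return out

alpha = 0.1
x = x - alpha * sheaf_laplacian(x)   # one sheaf-diffusion step
print(x)
```

With identity restriction maps this reduces to ordinary Laplacian smoothing; learned maps let neighboring stalks disagree in controlled ways, which is how sheaf architectures counteract over-smoothing.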
arXiv Detail & Related papers (2025-08-01T06:39:28Z) - A Spectral Interpretation of Redundancy in a Graph Reservoir [51.40366905583043]
This work revisits the definition of the reservoir in the Multiresolution Reservoir Graph Neural Network (MRGNN). It proposes a variant based on a Fairing algorithm originally introduced in the field of surface design in computer graphics. The core contribution of the paper lies in the theoretical analysis of the algorithm from a random-walks perspective.
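A hedged sketch of the surface-design fairing idea being invoked: Taubin's lambda|mu smoothing, read as alternating low-pass steps of the random-walk operator. Whether the MRGNN variant uses exactly this operator is an assumption; the transition matrix P below is the object a random-walk analysis would study.

```python
# A minimal sketch of Taubin lambda|mu fairing as a graph low-pass filter.
import numpy as np

def taubin_fairing(W, X, lam=0.5, mu=-0.53, steps=10):
    """Alternate shrinking (lam) and inflating (mu) Laplacian smoothing."""
    P = W / W.sum(axis=1, keepdims=True)   # random-walk transition matrix
    for _ in range(steps):
        X = X + lam * (P @ X - X)          # smooth: x <- x - lam * L_rw x
        X = X + mu * (P @ X - X)           # inflate back to limit shrinkage
    return X

# toy usage: node signals on a cycle graph
rng = np.random.default_rng(0)
n = 20
W = np.zeros((n, n))
idx = np.arange(n)
W[idx, (idx + 1) % n] = 1
W[(idx + 1) % n, idx] = 1                  # cycle adjacency, no self-loops
X = rng.normal(size=(n, 3))
print(np.linalg.norm(taubin_fairing(W, X) - X))
```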
arXiv Detail & Related papers (2025-07-17T10:02:57Z) - Making Sense Of Distributed Representations With Activation Spectroscopy [44.94093096989921]
There is growing evidence to suggest that relevant features are encoded across many neurons in a distributed fashion. This work explores one feasible path to both detecting and tracing the joint influence of neurons in a distributed representation.
arXiv Detail & Related papers (2025-01-26T07:33:42Z) - Learning local discrete features in explainable-by-design convolutional neural networks [0.0]
We introduce an explainable-by-design convolutional neural network (CNN) based on the lateral inhibition mechanism.
The model consists of a predictor, which is a high-accuracy CNN with residual or dense skip connections.
By collecting observations and directly calculating probabilities, we can explain causal relationships between motifs of adjacent levels.
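That probability-based explanation step can be made concrete with a few lines of counting. The sketch below estimates P(high-level motif | low-level motif) from simulated binary co-activations; the motif activations and rates are fabricated purely for illustration.

```python
# A minimal sketch of "collect observations, calculate probabilities":
# estimate conditional probabilities between motifs of adjacent levels
# directly from co-activation counts, with no gradient-based attribution.
import numpy as np

rng = np.random.default_rng(0)
low = rng.random((1000, 8)) < 0.3            # low-level motif activations
# a high-level motif that tends to fire when motifs 0 and 1 co-occur
high = (low[:, 0] & low[:, 1]) | (rng.random(1000) < 0.05)

# P(high | low_j) for each low-level motif j, directly from counts
for j in range(8):
    p = high[low[:, j]].mean()
    print(f"P(high | motif {j} active) = {p:.2f}")
# motifs 0 and 1 show much higher conditional probability, exposing the
# dependency between adjacent levels from observations alone
```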
arXiv Detail & Related papers (2024-10-31T18:39:41Z) - What Improves the Generalization of Graph Transformers? A Theoretical Dive into the Self-attention and Positional Encoding [67.59552859593985]
Graph Transformers, which incorporate self-attention and positional encoding, have emerged as a powerful architecture for various graph learning tasks.
This paper introduces the first theoretical investigation of a shallow Graph Transformer for semi-supervised classification.
arXiv Detail & Related papers (2024-06-04T05:30:16Z) - Neural Tangent Kernels Motivate Graph Neural Networks with Cross-Covariance Graphs [94.44374472696272]
We investigate NTKs and alignment in the context of graph neural networks (GNNs). Our results establish theoretical guarantees on the optimality of the alignment for a two-layer GNN.
These guarantees are characterized by the graph shift operator being a function of the cross-covariance between the input and the output data.
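A minimal sketch of that object: estimate the input-output cross-covariance from data, symmetrize it, and use it as the shift operator of a small two-layer GNN. The estimator and the toy forward pass are illustrative assumptions, not the paper's exact setup.

```python
# A minimal sketch of a cross-covariance graph shift operator (GSO).
import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 500                       # n nodes, m training samples
X = rng.normal(size=(m, n))         # input node signals
S_true = np.eye(n, k=1) + np.eye(n, k=-1)          # hidden path-graph GSO
Y = X @ S_true + 0.1 * rng.normal(size=(m, n))     # outputs on the same nodes

# empirical cross-covariance between input and output signals; for standard
# normal inputs this concentrates around S_true itself
S = (X - X.mean(0)).T @ (Y - Y.mean(0)) / m
S = 0.5 * (S + S.T)                 # symmetrize to obtain a valid GSO

def gnn_two_layer(x, w1=0.7, w2=0.5):
    """Two graph-filter layers with a ReLU, using S as the shift operator."""
    h = np.maximum(S @ x * w1, 0.0)
    return S @ h * w2

print(gnn_two_layer(X[0]))
```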
arXiv Detail & Related papers (2023-10-16T19:54:21Z) - HoloNets: Spectral Convolutions do extend to Directed Graphs [59.851175771106625]
Conventional wisdom dictates that spectral convolutional networks may only be deployed on undirected graphs.
Here we show this traditional reliance on the graph Fourier transform to be superfluous.
We provide a frequency-response interpretation of newly developed filters, investigate the influence of the basis used to express filters and discuss the interplay with characteristic operators on which networks are based.
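To see why no graph Fourier transform is needed, the sketch below applies a polynomial filter p(T) of an out-degree-normalized characteristic operator T on a directed graph; p(T)x is well defined without any eigendecomposition. The coefficients are arbitrary, and HoloNets' holomorphic filters generalize beyond this polynomial toy case.

```python
# A minimal sketch: polynomial spectral filtering on a *directed* graph,
# with no eigenvector basis or graph Fourier transform required.
import numpy as np

A = np.array([[0, 1, 0, 0],         # directed 4-cycle: 0->1->2->3->0
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [1, 0, 0, 0]], dtype=float)
T = A / A.sum(axis=1, keepdims=True)   # out-degree normalized operator

coeffs = [0.5, 0.3, 0.2]               # p(T) = 0.5 I + 0.3 T + 0.2 T^2
x = np.array([1.0, 0.0, 0.0, 0.0])     # signal on nodes
y, power = np.zeros_like(x), x.copy()
for c in coeffs:
    y += c * power                     # accumulate c_k * T^k x
    power = T @ power
print(y)
```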
arXiv Detail & Related papers (2023-10-03T17:42:09Z) - Understanding the Spectral Bias of Coordinate Based MLPs Via Training Dynamics [2.9443230571766854]
We study the connection between the computations of ReLU networks and the speed of gradient descent convergence.
We then use this formulation to study the severity of spectral bias in low dimensional settings, and how positional encoding overcomes this.
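The positional encoding in question is typically a random Fourier feature map, sketched below with a linear least-squares fit standing in for the MLP: raw coordinates cannot fit a high-frequency target, while the encoded coordinates can. The frequency scale and sizes are illustrative choices.

```python
# A minimal sketch of random Fourier features as a positional encoding that
# mitigates spectral bias for coordinate-based models.
import numpy as np

rng = np.random.default_rng(0)

def fourier_features(x, B):
    """Map coordinates x in R^d to [cos(2 pi x B), sin(2 pi x B)]."""
    proj = 2 * np.pi * x @ B
    return np.concatenate([np.cos(proj), np.sin(proj)], axis=-1)

x = np.linspace(0, 1, 256)[:, None]        # 1-D coordinates
B = rng.normal(scale=10.0, size=(1, 64))   # random frequency matrix

# least-squares fit of a high-frequency target on raw vs encoded coordinates
target = np.sin(40 * np.pi * x[:, 0])
for name, feats in [("raw", np.hstack([x, np.ones_like(x)])),
                    ("fourier", fourier_features(x, B))]:
    coef, *_ = np.linalg.lstsq(feats, target, rcond=None)
    err = np.linalg.norm(feats @ coef - target) / np.linalg.norm(target)
    print(name, round(err, 3))
# the encoded features fit the oscillatory target far better, which is the
# spectral-bias mitigation the paper analyses through training dynamics
```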
arXiv Detail & Related papers (2023-01-14T04:21:25Z) - Spectral-Spatial Global Graph Reasoning for Hyperspectral Image Classification [50.899576891296235]
Convolutional neural networks have been widely applied to hyperspectral image classification, but their convolutions operate on regular square regions that fit irregular spatial structure poorly. Recent methods attempt to address this issue by performing graph convolutions on spatial topologies.
arXiv Detail & Related papers (2021-06-26T06:24:51Z) - Multipole Graph Neural Operator for Parametric Partial Differential Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data in a structure suitable for neural networks.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
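A schematic of the multi-level idea, under our own simplifying assumptions (two levels, grid pooling, random weights): short-range interactions on a kNN graph plus an all-to-all coarse level whose cost is negligible, keeping the total near-linear in the number of nodes. This is not the paper's architecture itself.

```python
# A minimal two-level sketch of multi-scale interaction: fine-level local
# aggregation plus a cheap all-to-all coarse level broadcast back to nodes.
import numpy as np

rng = np.random.default_rng(0)
n, c, f = 256, 16, 4                 # fine nodes, coarse nodes, channels
X = rng.random((n, 2))               # node positions in the unit square
H = rng.normal(size=(n, f))          # node features

# fine level: average over k nearest neighbours (full distance matrix here
# for brevity; a real implementation uses neighbour search for linear cost)
D = np.linalg.norm(X[:, None] - X[None, :], axis=2)
knn = np.argsort(D, axis=1)[:, 1:9]
H_fine = H[knn].mean(axis=1)

# coarse level: pool nodes into a 4x4 spatial grid (16 cells), mix all-to-all
cell = (X[:, 0] * 4).astype(int) * 4 + (X[:, 1] * 4).astype(int)
pooled = np.zeros((c, f))
np.add.at(pooled, cell, H)
pooled /= np.maximum(np.bincount(cell, minlength=c)[:, None], 1)
K = rng.normal(size=(c, c)) / c      # dense coarse kernel: c x c is cheap
pooled = np.tanh(K @ pooled)         # long-range interaction at coarse level

H_new = H_fine + pooled[cell]        # broadcast coarse information back
print(H_new.shape)
```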
arXiv Detail & Related papers (2020-06-16T21:56:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.