Probing Graph Neural Network Activation Patterns Through Graph Topology
- URL: http://arxiv.org/abs/2602.21092v1
- Date: Tue, 24 Feb 2026 16:52:36 GMT
- Title: Probing Graph Neural Network Activation Patterns Through Graph Topology
- Authors: Floriano Tori, Lorenzo Bini, Marco Sorbi, Stéphane Marchand-Maillet, Vincent Ginis
- Abstract summary: Curvature notions on graphs provide a theoretical description of graph topology, highlighting bottlenecks and more densely connected regions. It remains unclear how the topology of a graph interacts with the learned preferences of Graph Neural Networks. Our work reframes curvature as a diagnostic probe for understanding when and why graph learning fails.
- Score: 4.917259718852399
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Curvature notions on graphs provide a theoretical description of graph topology, highlighting bottlenecks and more densely connected regions. Artifacts of the message passing paradigm in Graph Neural Networks, such as oversmoothing and oversquashing, have been attributed to these regions. However, it remains unclear how the topology of a graph interacts with the learned preferences of GNNs. Through Massive Activations (MAs), which correspond to extreme edge activation values in Graph Transformers, we probe this correspondence. Our findings on synthetic graphs and molecular benchmarks reveal that MAs do not preferentially concentrate on curvature extremes, despite their theoretical link to information flow. On the Long Range Graph Benchmark, we identify a systemic *curvature shift*: global attention mechanisms exacerbate topological bottlenecks, drastically increasing the prevalence of negative curvature. Our work reframes curvature as a diagnostic probe for understanding when and why graph learning fails.
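As a concrete illustration of the two quantities the abstract relates, the sketch below computes a simple combinatorial Forman curvature per edge and flags "massive" activations by a median-ratio rule. Both the curvature variant and the 100x threshold are assumptions chosen for illustration, not the paper's exact setup.

```python
# Minimal illustrative sketch (not the paper's code): edge curvature plus a
# median-ratio rule for flagging extreme ("massive") edge activations.
import networkx as nx
import numpy as np

def forman_curvature(G):
    """Combinatorial Forman-Ricci curvature of each edge: 4 - deg(u) - deg(v)."""
    return {(u, v): 4 - G.degree(u) - G.degree(v) for u, v in G.edges()}

def massive_edges(activations, ratio=100.0):
    """Flag edges whose activation magnitude exceeds `ratio` times the median."""
    med = np.median([abs(a) for a in activations.values()])
    return {e for e, a in activations.items() if abs(a) > ratio * max(med, 1e-12)}

G = nx.barbell_graph(5, 1)                              # two cliques joined through a bottleneck
curv = forman_curvature(G)
acts = {e: float(np.random.rand()) for e in G.edges()}  # stand-in for attention scores
bottleneck = min(curv, key=curv.get)                    # most negatively curved edge
print(bottleneck, massive_edges(acts))                  # do MAs land on the bottleneck?
```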
Related papers
- GraphTOP: Graph Topology-Oriented Prompting for Graph Neural Networks [66.07512871031163]
"Pre-training, adaptation" scheme pre-trains powerful Graph Neural Networks (GNNs) over unlabeled graph data.<n>In the adaptation phase, graph prompting modifies input graph data with learnable prompts while keeping pre-trained GNN models frozen.<n>We propose the first **Graph** **T**opology-**O**riented **P**rompting (GraphTOP) framework to effectively adapt pre-trained GNN models for downstream tasks.
arXiv Detail & Related papers (2025-10-25T22:50:12Z)
- Mitigating Over-Squashing in Graph Neural Networks by Spectrum-Preserving Sparsification [81.06278257153835]
We propose a graph rewiring method that balances structural bottleneck reduction and graph property preservation. Our method generates graphs with enhanced connectivity while maintaining sparsity and largely preserving the original graph spectrum.
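An illustrative sketch of the stated trade-off (not the paper's algorithm): score each candidate edge by its algebraic-connectivity gain, penalized by how far it moves the Laplacian spectrum. The weighting `alpha` is an assumed hyperparameter.

```python
# Greedy rewiring sketch: bottleneck reduction vs. spectrum preservation.
import networkx as nx
import numpy as np

def spectrum(G):
    return np.sort(nx.laplacian_spectrum(G))

def best_rewiring_edge(G, alpha=1.0):
    base, lam2 = spectrum(G), nx.algebraic_connectivity(G)
    best, best_score = None, -np.inf
    for u, v in nx.non_edges(G):
        H = G.copy()
        H.add_edge(u, v)
        gain = nx.algebraic_connectivity(H) - lam2   # bottleneck reduction
        drift = np.linalg.norm(spectrum(H) - base)   # spectral change to avoid
        score = gain - alpha * drift
        if score > best_score:
            best, best_score = (u, v), score
    return best

G = nx.barbell_graph(4, 2)
print(best_rewiring_edge(G))   # edge that eases the bottleneck at low spectral cost
```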
arXiv Detail & Related papers (2025-06-19T08:01:00Z)
- Disentangled Graph Representation Based on Substructure-Aware Graph Optimal Matching Kernel Convolutional Networks [4.912298804026689]
Graphs effectively characterize relational data, driving graph representation learning methods. Recent disentangled graph representation learning enhances interpretability by decoupling independent factors in graph data. This paper proposes the Graph Optimal Matching Kernel Convolutional Network (GOMKCN) to address the limitations of prior methods.
arXiv Detail & Related papers (2025-04-23T02:26:33Z)
- Beyond Message Passing: Neural Graph Pattern Machine [50.78679002846741]
We introduce the Neural Graph Pattern Machine (GPM), a novel framework that bypasses message passing by learning directly from graph substructures. GPM efficiently extracts, encodes, and prioritizes task-relevant graph patterns, offering greater expressivity and an improved ability to capture long-range dependencies.
arXiv Detail & Related papers (2025-01-30T20:37:47Z)
- Line Graph Vietoris-Rips Persistence Diagram for Topological Graph Representation Learning [3.6881508872690825]
We introduce a novel edge filtration-based persistence diagram, named the Topological Edge Diagram (TED). TED is mathematically proven to preserve node embedding information and to contain additional topological information. We also propose a neural network-based algorithm, named the Line Graph Vietoris-Rips (LGVR) Persistence Diagram, that extracts edge information by transforming a graph into its line graph.
arXiv Detail & Related papers (2024-12-23T10:46:44Z)
- Geometric Graph Filters and Neural Networks: Limit Properties and Discriminability Trade-offs [122.06927400759021]
We study the relationship between a graph neural network (GNN) and a manifold neural network (MNN) when the graph is constructed from a set of points sampled from the manifold.
We prove non-asymptotic error bounds showing that convolutional filters and neural networks on these graphs converge to convolutional filters and neural networks on the continuous manifold.
arXiv Detail & Related papers (2023-05-29T08:27:17Z)
- FoSR: First-order spectral rewiring for addressing oversquashing in GNNs [0.0]
Graph neural networks (GNNs) are able to leverage the structure of graph data by passing messages along the edges of the graph.
We propose a computationally efficient algorithm that prevents oversquashing by systematically adding edges to the graph.
We find experimentally that our algorithm outperforms existing graph rewiring methods in several graph classification tasks.
arXiv Detail & Related papers (2022-10-21T07:58:03Z)
- Graph Anomaly Detection with Graph Neural Networks: Current Status and Challenges [9.076649460696402]
Graph neural networks (GNNs) have been studied extensively and have successfully performed difficult machine learning tasks.
This survey is the first comprehensive review of graph anomaly detection methods based on GNNs.
arXiv Detail & Related papers (2022-09-29T16:47:57Z)
- Learning Graph Structure from Convolutional Mixtures [119.45320143101381]
We propose a graph convolutional relationship between the observed and latent graphs, and formulate the graph learning task as a network inverse (deconvolution) problem.
In lieu of eigendecomposition-based spectral methods, we unroll and truncate proximal gradient iterations to arrive at a parameterized neural network architecture that we call a Graph Deconvolution Network (GDN).
GDNs can learn a distribution of graphs in a supervised fashion, perform link prediction or edge-weight regression tasks by adapting the loss function, and they are inherently inductive.
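The unrolling pattern this summary describes can be sketched with a generic truncated ISTA in PyTorch; the learned step sizes and soft thresholds below are stand-ins for the GDN's actual parameterization, which the abstract does not detail.

```python
# Generic unrolled-ISTA sketch: truncated proximal-gradient iterations
# with learned per-layer step sizes and thresholds.
import torch
import torch.nn as nn

class UnrolledISTA(nn.Module):
    def __init__(self, n_layers=5):
        super().__init__()
        self.steps = nn.Parameter(torch.full((n_layers,), 0.1))        # learned step sizes
        self.thresholds = nn.Parameter(torch.full((n_layers,), 0.05))  # learned thresholds

    def forward(self, y, A):
        """Recover a sparse x with y ~ A @ x via truncated iterations."""
        x = torch.zeros(A.shape[1])
        for t, lam in zip(self.steps, self.thresholds):
            z = x - t * (A.T @ (A @ x - y))                # gradient step on 0.5*||Ax - y||^2
            x = torch.sign(z) * torch.relu(z.abs() - lam)  # soft-threshold prox
        return x

A = torch.randn(20, 50)
x_true = torch.zeros(50); x_true[:3] = 1.0
print(UnrolledISTA()(A @ x_true, A)[:5])   # rough sparse estimate
```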
arXiv Detail & Related papers (2022-05-19T14:08:15Z)
- Topological Relational Learning on Graphs [2.4692806302088868]
Graph neural networks (GNNs) have emerged as a powerful tool for graph classification and representation learning.
We propose a novel topological relational inference (TRI) framework which allows for integrating higher-order graph information into GNNs.
We show that the new TRI-GNN outperforms all 14 state-of-the-art baselines on 6 out of 7 graphs and exhibits higher robustness to perturbations.
arXiv Detail & Related papers (2021-10-29T04:03:27Z)
- Graph and graphon neural network stability [122.06927400759021]
Graph neural networks (GNNs) are learning architectures that rely on knowledge of the graph structure to generate meaningful representations of network data.
We analyze GNN stability using kernel objects called graphons.
arXiv Detail & Related papers (2020-10-23T16:55:56Z)
- Line Graph Neural Networks for Link Prediction [71.00689542259052]
We consider the graph link prediction task, which is a classic graph analytical problem with many real-world applications.
In the conventional formalism, a link prediction problem is converted into a graph classification task.
We propose a radically different path that makes use of line graphs from graph theory.
In particular, each node in a line graph corresponds to a unique edge in the original graph. Link prediction problems in the original graph can therefore be solved equivalently as node classification problems in its corresponding line graph, rather than as a graph classification task.
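The edge-to-node conversion is directly available in networkx; the toy example below shows how each edge of the original graph becomes a node of its line graph.

```python
# Each edge of G becomes a node of its line graph L, so a link query on G
# becomes a node query on L.
import networkx as nx

G = nx.path_graph(4)        # edges: (0, 1), (1, 2), (2, 3)
L = nx.line_graph(G)
print(sorted(L.nodes()))    # [(0, 1), (1, 2), (2, 3)] -- one node per edge of G
print(sorted(L.edges()))    # L-edges join G-edges that share an endpoint
```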
arXiv Detail & Related papers (2020-10-20T05:54:31Z)
- Graph Signal Processing -- Part III: Machine Learning on Graphs, from Graph Topology to Applications [19.29066508374268]
Part III of this monograph starts by addressing ways to learn graph topology.
Particular emphasis is placed on defining graph topology from the correlation and precision matrices of the observed data.
For learning sparse graphs, the least absolute shrinkage and selection operator (LASSO) is employed.
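A minimal sketch of this pipeline using scikit-learn's GraphicalLasso, where nonzero off-diagonal entries of the estimated precision matrix are read as graph edges; the data, the regularization strength, and the cutoff are illustrative choices.

```python
# Estimate a sparse precision matrix and read its support as a graph.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))        # 200 observations over 5 nodes
X[:, 1] += 0.8 * X[:, 0]             # induce one true dependency

model = GraphicalLasso(alpha=0.2).fit(X)   # alpha controls graph sparsity
P = model.precision_
edges = [(i, j) for i in range(5) for j in range(i + 1, 5) if abs(P[i, j]) > 1e-6]
print(edges)                         # expected to recover the 0-1 link
```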
arXiv Detail & Related papers (2020-01-02T13:14:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.