Structural Invariance Matters: Rethinking Graph Rewiring through Graph Metrics
- URL: http://arxiv.org/abs/2510.20556v1
- Date: Thu, 23 Oct 2025 13:38:41 GMT
- Title: Structural Invariance Matters: Rethinking Graph Rewiring through Graph Metrics
- Authors: Alexandre Benoit, Catherine Aitken, Yu He
- Abstract summary: We provide the first systematic analysis of how rewiring affects a range of graph structural metrics. We study seven diverse rewiring strategies and correlate changes in local and global graph properties with node classification accuracy. Our results reveal a consistent pattern: successful rewiring methods tend to preserve local structure while allowing for flexibility in global connectivity.
- Score: 52.077620040518646
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Graph rewiring has emerged as a key technique to alleviate over-squashing in Graph Neural Networks (GNNs) and Graph Transformers by modifying the graph topology to improve information flow. While effective, rewiring inherently alters the graph's structure, raising the risk of distorting important topology-dependent signals. Yet, despite the growing use of rewiring, little is known about which structural properties must be preserved to ensure both performance gains and structural fidelity. In this work, we provide the first systematic analysis of how rewiring affects a range of graph structural metrics, and how these changes relate to downstream task performance. We study seven diverse rewiring strategies and correlate changes in local and global graph properties with node classification accuracy. Our results reveal a consistent pattern: successful rewiring methods tend to preserve local structure while allowing for flexibility in global connectivity. These findings offer new insights into the design of effective rewiring strategies, bridging the gap between graph theory and practical GNN optimization.
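A concrete sketch of the analysis described above: compute a few local and global structural metrics before and after rewiring, then correlate the metric shifts with downstream accuracy. The metric selection and the `rewire` / `accuracy_of` inputs below are illustrative placeholders, not the authors' actual pipeline.
```python
# Minimal sketch (not the authors' code): measure how rewiring shifts
# structural metrics, then correlate those shifts with accuracy.
import networkx as nx
import numpy as np
from scipy.stats import spearmanr

def structural_metrics(G: nx.Graph) -> dict:
    """A few local and global metrics; the selection here is illustrative."""
    L = nx.normalized_laplacian_matrix(G).toarray()
    eigvals = np.sort(np.linalg.eigvalsh(L))
    return {
        "avg_clustering": nx.average_clustering(G),               # local structure
        "spectral_gap": float(eigvals[1]),                        # global connectivity
        "avg_degree": 2 * G.number_of_edges() / G.number_of_nodes(),
    }

def metric_shift(G_orig: nx.Graph, G_rewired: nx.Graph) -> dict:
    before, after = structural_metrics(G_orig), structural_metrics(G_rewired)
    return {k: after[k] - before[k] for k in before}

def correlate(graphs, rewire, accuracy_of):
    """`graphs`, `rewire`, and `accuracy_of` are hypothetical stand-ins for the
    datasets, a rewiring strategy, and a GNN training/evaluation run."""
    shifts, accs = [], []
    for G in graphs:
        G_rw = rewire(G)
        shifts.append(metric_shift(G, G_rw))
        accs.append(accuracy_of(G_rw))
    for metric in shifts[0]:
        rho, p = spearmanr([s[metric] for s in shifts], accs)
        print(f"{metric}: Spearman rho={rho:.3f} (p={p:.3f})")
```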
Related papers
- Spectral Neural Graph Sparsification [0.21932521132244476]
Graphs are central to modeling complex systems in domains such as social networks, molecular chemistry, and neuroscience. We propose the Spectral Preservation Network, a new framework for graph representation learning that generates reduced graphs serving as faithful proxies of the original. We evaluate the effectiveness of the Spectral Preservation Network on node-level sparsification by analyzing well-established metrics and benchmarking against state-of-the-art methods.
arXiv Detail & Related papers (2025-10-31T13:51:50Z) - Attention Beyond Neighborhoods: Reviving Transformer for Graph Clustering [21.941792185132996]
Attentive Graph Clustering Network (AGCN) is a novel architecture that reinterprets the graph itself as a form of attention. AGCN embeds the attention mechanism into the graph structure, enabling effective global information extraction. Our framework incorporates theoretical analysis to contrast the behavior of AGCN with that of Graph Neural Networks (GNNs) and Transformers.
arXiv Detail & Related papers (2025-09-18T14:51:13Z) - Mitigating Over-Squashing in Graph Neural Networks by Spectrum-Preserving Sparsification [81.06278257153835]
We propose a graph rewiring method that balances structural bottleneck reduction and graph property preservation. Our method generates graphs with enhanced connectivity while maintaining sparsity and largely preserving the original graph spectrum.
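As a rough illustration of what preserving the graph spectrum means operationally, one can compare the smallest normalized-Laplacian eigenvalues of the original and rewired graphs; the check below is a generic sketch, not the method proposed in the paper.
```python
# Generic spectrum-preservation check (illustrative, not the paper's method):
# compare the k smallest normalized-Laplacian eigenvalues before and after rewiring.
import networkx as nx
import numpy as np

def laplacian_spectrum(G: nx.Graph, k: int = 10) -> np.ndarray:
    L = nx.normalized_laplacian_matrix(G).toarray()
    return np.sort(np.linalg.eigvalsh(L))[:k]

def spectral_distortion(G_orig: nx.Graph, G_rewired: nx.Graph, k: int = 10) -> float:
    """L2 distance between the k smallest eigenvalues; small values mean the
    rewiring kept the global (spectral) structure largely intact."""
    return float(np.linalg.norm(laplacian_spectrum(G_orig, k) -
                                laplacian_spectrum(G_rewired, k)))
```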
arXiv Detail & Related papers (2025-06-19T08:01:00Z) - Rewiring Techniques to Mitigate Oversquashing and Oversmoothing in GNNs: A Survey [0.0]
Graph Neural Networks (GNNs) are powerful tools for learning from graph-structured data, but their effectiveness is often constrained by two critical challenges: oversquashing, where the excessive compression of information from distant nodes results in significant information loss, and oversmoothing, where repeated message-passing iterations homogenize node representations, obscuring meaningful distinctions.
In this survey, we examine graph rewiring techniques, a class of methods designed to address these structural bottlenecks by modifying graph topology to enhance information diffusion.
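To make the bottleneck picture concrete, the toy example below adds a single long-range edge across the most distant node pair of a barbell graph and reports the change in spectral gap; it is a didactic illustration, not one of the surveyed rewiring methods.
```python
# Toy illustration of rewiring against a structural bottleneck
# (didactic only; not one of the surveyed methods).
import networkx as nx
import numpy as np

def spectral_gap(G: nx.Graph) -> float:
    """Second-smallest eigenvalue of the normalized Laplacian."""
    L = nx.normalized_laplacian_matrix(G).toarray()
    return float(np.sort(np.linalg.eigvalsh(L))[1])

G = nx.barbell_graph(10, 0)  # two cliques joined by a single edge: a clear bottleneck
lengths = dict(nx.all_pairs_shortest_path_length(G))
u, v = max(((a, b) for a in lengths for b in lengths[a]),
           key=lambda p: lengths[p[0]][p[1]])

print("spectral gap before:", spectral_gap(G))
G.add_edge(u, v)  # one long-range edge across the bottleneck
print("spectral gap after: ", spectral_gap(G))
```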
arXiv Detail & Related papers (2024-11-26T13:38:12Z) - Learning to Model Graph Structural Information on MLPs via Graph Structure Self-Contrasting [50.181824673039436]
We propose a Graph Structure Self-Contrasting (GSSC) framework that learns graph structural information without message passing.
The proposed framework is based purely on Multi-Layer Perceptrons (MLPs), where the structural information is only implicitly incorporated as prior knowledge.
It first applies structural sparsification to remove potentially uninformative or noisy edges in the neighborhood, and then performs structural self-contrasting in the sparsified neighborhood to learn robust node representations.
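A minimal caricature of this two-step recipe is sketched below, with a feature-similarity edge filter and an InfoNCE-style objective standing in for the paper's learned sparsifier and contrasting scheme.
```python
# Caricature of the two-step recipe (placeholder scoring and loss, not the
# authors' implementation): sparsify edges, then contrast MLP embeddings of
# connected nodes against random negatives.
import torch
import torch.nn.functional as F

def sparsify(edge_index: torch.Tensor, x: torch.Tensor, keep_ratio: float = 0.5):
    """Keep the edges whose endpoints are most similar in feature space
    (a stand-in for a learned structural sparsifier)."""
    src, dst = edge_index
    score = F.cosine_similarity(x[src], x[dst])
    k = int(keep_ratio * edge_index.size(1))
    return edge_index[:, score.topk(k).indices]

def self_contrast_loss(z: torch.Tensor, edge_index: torch.Tensor, tau: float = 0.5):
    """InfoNCE-style loss: connected nodes are positives, random nodes negatives."""
    src, dst = edge_index
    z = F.normalize(z, dim=-1)
    pos = (z[src] * z[dst]).sum(-1) / tau                    # positive similarities
    neg = z[src] @ z[torch.randperm(z.size(0))].T / tau      # random negatives
    logits = torch.cat([pos.unsqueeze(1), neg], dim=1)
    return -(pos - torch.logsumexp(logits, dim=1)).mean()

# z = mlp(x) would come from a plain MLP encoder; no message passing is used.
```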
arXiv Detail & Related papers (2024-09-09T12:56:02Z) - Greener GRASS: Enhancing GNNs with Encoding, Rewiring, and Attention [12.409982249220812]
We introduce Graph Attention with Structures (GRASS), a novel GNN architecture, to enhance graph relative attention. GRASS rewires the input graph by superimposing a random regular graph to achieve long-range information propagation. It also employs a novel additive attention mechanism tailored for graph-structured data.
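The rewiring step, superimposing a random regular graph on the input graph, is easy to sketch with networkx; the attention mechanism is omitted, and the `rewired` edge flag is an assumption about how added edges might be distinguished from original ones.
```python
# Sketch of the rewiring step only (attention mechanism omitted): superimpose
# a random regular graph so every node gains a few long-range edges.
import networkx as nx

def superimpose_regular(G: nx.Graph, degree: int = 3, seed: int = 0) -> nx.Graph:
    n = G.number_of_nodes()
    if (n * degree) % 2:          # a d-regular graph on n nodes needs n*d even
        degree += 1
    R = nx.random_regular_graph(degree, n, seed=seed)
    H = G.copy()
    nodes = list(G.nodes())       # map the regular graph's 0..n-1 labels onto G's nodes
    for i, j in R.edges():
        # flag added edges so a model could treat them differently from original
        # ones (an assumption, not necessarily GRASS's exact bookkeeping)
        H.add_edge(nodes[i], nodes[j], rewired=True)
    return H
```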
arXiv Detail & Related papers (2024-07-08T06:21:56Z) - Control-based Graph Embeddings with Data Augmentation for Contrastive Learning [3.250579305400297]
We study the problem of unsupervised graph representation learning by harnessing the control properties of dynamical networks defined on graphs.
A crucial step in contrastive learning is the creation of 'augmented' graphs from the input graphs.
Here, we propose a unique method for generating these augmented graphs by leveraging the control properties of networks.
arXiv Detail & Related papers (2024-03-07T22:14:04Z) - Locality-Aware Graph-Rewiring in GNNs [5.356465360780597]
Graph Neural Networks (GNNs) are popular models for machine learning on graphs.
In this work, we identify three desiderata for graph-rewiring: (i) reduce over-squashing, (ii) respect the locality of the graph, and (iii) preserve the sparsity of the graph.
We propose a novel rewiring framework that satisfies all of (i)-(iii) through a locality-aware sequence of rewiring operations.
arXiv Detail & Related papers (2023-10-02T21:59:44Z) - Learning Graph Structure from Convolutional Mixtures [119.45320143101381]
We propose a graph convolutional relationship between the observed and latent graphs, and formulate the graph learning task as a network inverse (deconvolution) problem.
In lieu of eigendecomposition-based spectral methods, we unroll and truncate proximal gradient iterations to arrive at a parameterized neural network architecture that we call a Graph Deconvolution Network (GDN).
GDNs can learn a distribution of graphs in a supervised fashion, perform link prediction or edge-weight regression tasks by adapting the loss function, and they are inherently inductive.
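The unrolling idea can be sketched as a small module in which each layer mimics one proximal-gradient step, a data-fidelity gradient step followed by a soft-threshold, with learnable step sizes and thresholds. The fidelity term, prior, and class name below are assumed placeholders, not the actual GDN architecture.
```python
# Generic unrolled proximal-gradient sketch (assumed fidelity/prior choices;
# not the actual GDN architecture): recover a latent adjacency S from an
# observed matrix O treated as a "mixture" of S.
import torch
import torch.nn as nn

class UnrolledDeconvolution(nn.Module):
    def __init__(self, num_layers: int = 8):
        super().__init__()
        self.step = nn.Parameter(torch.full((num_layers,), 0.1))     # learnable step sizes
        self.thresh = nn.Parameter(torch.full((num_layers,), 0.05))  # learnable soft-thresholds

    def forward(self, O: torch.Tensor) -> torch.Tensor:
        S = torch.zeros_like(O)
        for t in range(self.step.numel()):
            grad = S - O                                        # gradient of a placeholder fidelity 0.5*||S - O||^2
            S = S - self.step[t] * grad                         # gradient step
            S = torch.relu(S.abs() - self.thresh[t]) * S.sign() # soft-threshold (sparsity prox)
            S = 0.5 * (S + S.transpose(-1, -2))                 # keep the estimate symmetric
        return S
```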
arXiv Detail & Related papers (2022-05-19T14:08:15Z) - Towards Unsupervised Deep Graph Structure Learning [67.58720734177325]
We propose an unsupervised graph structure learning paradigm, where the learned graph topology is optimized by the data itself without any external guidance.
Specifically, we generate a learning target from the original data as an "anchor graph", and use a contrastive loss to maximize the agreement between the anchor graph and the learned graph.
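One simple reading of maximizing the agreement between the anchor graph and the learned graph is an edge-level agreement objective between two adjacency estimates; the sketch below uses a kNN anchor and a binary cross-entropy term as stand-ins for the paper's actual construction.
```python
# Stand-in for the anchor/agreement idea (not the paper's exact objective):
# build a kNN "anchor" adjacency from raw features and push a learned
# adjacency towards agreeing with it.
import torch
import torch.nn.functional as F

def knn_anchor(x: torch.Tensor, k: int = 10) -> torch.Tensor:
    """Binary kNN graph from raw features, used as the learning target."""
    z = F.normalize(x, dim=-1)
    sim = z @ z.T
    idx = sim.topk(k + 1, dim=-1).indices[:, 1:]      # drop self-similarity
    anchor = torch.zeros_like(sim)
    anchor.scatter_(1, idx, 1.0)
    return ((anchor + anchor.T) > 0).float()          # symmetrize

def agreement_loss(learned_adj: torch.Tensor, anchor_adj: torch.Tensor) -> torch.Tensor:
    """Edge-level agreement between the learned and anchor adjacencies."""
    return F.binary_cross_entropy(learned_adj.clamp(1e-6, 1 - 1e-6), anchor_adj)
```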
arXiv Detail & Related papers (2022-01-17T11:57:29Z) - Spectral Graph Convolutional Networks With Lifting-based Adaptive Graph Wavelets [81.63035727821145]
Spectral graph convolutional networks (SGCNs) have been attracting increasing attention in graph representation learning.
We propose a novel class of spectral graph convolutional networks that implement graph convolutions with adaptive graph wavelets.
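A single lifting step on a graph signal can be sketched as a split into two node sets followed by predict and update passes over the edges between them; the fixed averaging operators below would be replaced by learned, adaptive ones in the paper's construction.
```python
# Minimal (non-adaptive) lifting step on a graph signal: split nodes into two
# sets, predict one set from the other, then update. The fixed averages here
# stand in for the paper's learned, adaptive predict/update operators.
import networkx as nx
import numpy as np

def lifting_step(G: nx.Graph, x: dict, even: set):
    """x maps node -> signal value; `even` is the set of nodes kept at the coarse scale."""
    odd = set(G.nodes()) - even
    detail, approx = {}, {}
    for v in odd:                         # predict: odd nodes from their even neighbours
        nbrs = [u for u in G.neighbors(v) if u in even]
        pred = np.mean([x[u] for u in nbrs]) if nbrs else 0.0
        detail[v] = x[v] - pred           # wavelet (detail) coefficient
    for u in even:                        # update: smooth even nodes with detail values
        nbrs = [v for v in G.neighbors(u) if v in odd]
        upd = 0.5 * np.mean([detail[v] for v in nbrs]) if nbrs else 0.0
        approx[u] = x[u] + upd            # scaling (approximation) coefficient
    return approx, detail
```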
arXiv Detail & Related papers (2021-08-03T17:57:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.