Spectral Neural Graph Sparsification
- URL: http://arxiv.org/abs/2510.27474v1
- Date: Fri, 31 Oct 2025 13:51:50 GMT
- Title: Spectral Neural Graph Sparsification
- Authors: Angelica Liguori, Ettore Ritacco, Pietro Sabatino, Annalisa Socievole,
- Abstract summary: Graphs are central to modeling complex systems in domains such as social networks, molecular chemistry, and neuroscience. We propose the Spectral Preservation Network, a new framework for graph representation learning that generates reduced graphs serving as faithful proxies of the original. We evaluate the effectiveness of the Spectral Preservation Network on node-level sparsification by analyzing well-established metrics and benchmarking against state-of-the-art methods.
- Score: 0.21932521132244476
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graphs are central to modeling complex systems in domains such as social networks, molecular chemistry, and neuroscience. While Graph Neural Networks, particularly Graph Convolutional Networks, have become standard tools for graph learning, they remain constrained by reliance on fixed structures and susceptibility to over-smoothing. We propose the Spectral Preservation Network, a new framework for graph representation learning that generates reduced graphs serving as faithful proxies of the original, enabling downstream tasks such as community detection, influence propagation, and information diffusion at a reduced computational cost. The Spectral Preservation Network introduces two key components: the Joint Graph Evolution layer and the Spectral Concordance loss. The former jointly transforms both the graph topology and the node feature matrix, allowing the structure and attributes to evolve adaptively across layers and overcoming the rigidity of static neighborhood aggregation. The latter regularizes these transformations by enforcing consistency in both the spectral properties of the graph and the feature vectors of the nodes. We evaluate the effectiveness of the Spectral Preservation Network on node-level sparsification by analyzing well-established metrics and benchmarking against state-of-the-art methods. The experimental results demonstrate the superior performance and clear advantages of our approach.
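The abstract names the Spectral Concordance loss but does not spell out its exact form. As a minimal sketch, assuming the loss compares the leading Laplacian eigenvalues of the original and reduced graphs and adds an L2 term on node features (the `mapping` correspondence below is hypothetical, not from the paper), one could write:

```python
import numpy as np

def laplacian(adj: np.ndarray) -> np.ndarray:
    """Combinatorial graph Laplacian L = D - A."""
    return np.diag(adj.sum(axis=1)) - adj

def spectral_concordance_loss(adj_full, adj_reduced, feats_full, feats_reduced,
                              mapping, k=8, alpha=1.0):
    """Penalize disagreement between the k smallest Laplacian eigenvalues of
    the original and reduced graphs, plus an L2 term on node features.

    `mapping[i]` gives the reduced-graph node that original node i collapses
    to (an assumed correspondence; the paper does not specify this detail).
    """
    ev_full = np.sort(np.linalg.eigvalsh(laplacian(adj_full)))[:k]
    ev_red = np.sort(np.linalg.eigvalsh(laplacian(adj_reduced)))[:k]
    spec_term = np.mean((ev_full - ev_red) ** 2)
    feat_term = np.mean((feats_full - feats_reduced[mapping]) ** 2)
    return spec_term + alpha * feat_term
```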
Related papers
- Advection-Diffusion on Graphs: A Bakry-Emery Laplacian for Spectral Graph Neural Networks [2.6124895681424323]
Graph Neural Networks (GNNs) often struggle to propagate information across long distances due to oversmoothing and oversquashing. We introduce a Bakry-Emery graph Laplacian that integrates diffusion and advection through a learnable node-wise potential. We develop mu-ChebNet, a spectral architecture that jointly learns the potential and Chebyshev filters.
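The summary describes a Laplacian that combines diffusion with advection via a node-wise potential. One plausible discretization, an assumption rather than the paper's exact operator, biases each edge weight along the potential gradient:

```python
import numpy as np

def drift_laplacian(adj: np.ndarray, potential: np.ndarray) -> np.ndarray:
    """Advection-diffusion Laplacian with a node-wise potential V.

    Edge (i, j) is reweighted by exp((V_i - V_j) / 2), so edges that descend
    the potential are amplified, giving an advective drift toward low
    potential. With V constant this reduces to the usual L = D - A.
    A sketch of the idea only; the paper's operator may differ.
    """
    bias = np.exp((potential[:, None] - potential[None, :]) / 2.0)
    a_drift = adj * bias                     # directed, potential-biased weights
    d_drift = np.diag(a_drift.sum(axis=1))   # out-degree matrix
    return d_drift - a_drift
```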
arXiv Detail & Related papers (2026-02-20T11:01:12Z)
- Mitigating Over-Squashing in Graph Neural Networks by Spectrum-Preserving Sparsification [81.06278257153835]
We propose a graph rewiring method that balances structural bottleneck reduction and graph property preservation. Our method generates graphs with enhanced connectivity while maintaining sparsity and largely preserving the original graph spectrum.
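As a rough diagnostic for "largely preserving the original graph spectrum" (the paper's actual criterion is not given in the summary), one can measure the relative change in the low Laplacian eigenvalues before and after rewiring:

```python
import numpy as np

def spectral_distortion(adj_before, adj_after, k=10):
    """Mean relative change in the k smallest nonzero Laplacian eigenvalues.

    A small value suggests the rewiring kept the graph's global (spectral)
    structure. A minimal diagnostic assuming both graphs share the same node
    set, not the paper's exact measure.
    """
    lap = lambda a: np.diag(a.sum(1)) - a
    ev_b = np.sort(np.linalg.eigvalsh(lap(adj_before)))
    ev_a = np.sort(np.linalg.eigvalsh(lap(adj_after)))
    ev_b, ev_a = ev_b[1:k + 1], ev_a[1:k + 1]   # skip the trivial 0 eigenvalue
    return float(np.mean(np.abs(ev_a - ev_b) / np.maximum(ev_b, 1e-12)))
```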
arXiv Detail & Related papers (2025-06-19T08:01:00Z)
- Unitary convolutions for learning on graphs and groups [0.9899763598214121]
We study unitary group convolutions, which allow for deeper networks that are more stable during training.
The main focus of the paper is graph neural networks, where we show that unitary graph convolutions provably avoid over-smoothing.
Our experimental results confirm that unitary graph convolutional networks achieve competitive performance on benchmark datasets.
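A minimal sketch of why unitary convolutions resist over-smoothing: propagating features with U = exp(i * theta * A) preserves their norm exactly, because i * theta * A is skew-Hermitian when A is symmetric. This illustrates the norm-preservation property only, not the paper's full architecture:

```python
import numpy as np
from scipy.linalg import expm

def unitary_graph_conv(adj: np.ndarray, x: np.ndarray, theta: float = 0.1):
    """Apply the unitary propagation operator U = exp(i*theta*A) to features."""
    u = expm(1j * theta * adj)          # unitary for symmetric adj
    return u @ x.astype(complex)        # norm-preserving message passing

# Sanity check: propagation does not change the total feature energy.
rng = np.random.default_rng(0)
a = rng.integers(0, 2, size=(6, 6)).astype(float)
a = ((a + a.T) > 0).astype(float)
np.fill_diagonal(a, 0)
x = rng.normal(size=(6, 4))
y = unitary_graph_conv(a, x)
assert np.allclose(np.linalg.norm(y), np.linalg.norm(x))
```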
arXiv Detail & Related papers (2024-10-07T21:09:14Z)
- Spectral Augmentations for Graph Contrastive Learning [50.149996923976836]
Contrastive learning has emerged as a premier method for learning representations with or without supervision.
Recent studies have shown its utility in graph representation learning for pre-training.
We propose a set of well-motivated graph transformation operations to provide a bank of candidates when constructing augmentations for a graph contrastive objective.
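The summary mentions a bank of candidate transformations. As an illustrative stand-in (an assumption, not the paper's operation set), the sketch below builds edge-dropped views of a graph and scores each candidate by how far it moves the Laplacian spectrum:

```python
import numpy as np

def edge_drop_candidates(adj, n_candidates=5, drop_frac=0.1, seed=0):
    """Build a bank of edge-dropped views, each scored by the spectral
    perturbation it induces (smaller score = gentler augmentation)."""
    rng = np.random.default_rng(seed)
    lap = lambda a: np.diag(a.sum(1)) - a
    base = np.linalg.eigvalsh(lap(adj))
    edges = np.argwhere(np.triu(adj) > 0)
    bank = []
    for _ in range(n_candidates):
        view = adj.copy()
        idx = rng.choice(len(edges), max(1, int(drop_frac * len(edges))),
                         replace=False)
        drop = edges[idx]
        view[drop[:, 0], drop[:, 1]] = view[drop[:, 1], drop[:, 0]] = 0
        score = np.linalg.norm(np.linalg.eigvalsh(lap(view)) - base)
        bank.append((view, score))
    return bank
```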
arXiv Detail & Related papers (2023-02-06T16:26:29Z)
- Learning Graph Structure from Convolutional Mixtures [119.45320143101381]
We propose a graph convolutional relationship between the observed and latent graphs, and formulate the graph learning task as a network inverse (deconvolution) problem.
In lieu of eigendecomposition-based spectral methods, we unroll and truncate proximal gradient iterations to arrive at a parameterized neural network architecture that we call a Graph Deconvolution Network (GDN).
GDNs can learn a distribution of graphs in a supervised fashion, perform link prediction or edge-weight regression tasks by adapting the loss function, and they are inherently inductive.
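A toy version of the unrolling idea, under an assumed mixture model observed ≈ S + S² (the real GDN learns its step sizes and thresholds end to end, which this fixed-parameter sketch does not):

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of the L1 norm (promotes sparse latent graphs)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def graph_deconvolution(observed, n_iters=10, step=0.1, tau=0.05):
    """Unrolled proximal gradient for the toy model observed ~ S + S @ S.

    Each iteration: gradient step on the squared reconstruction error,
    then soft-thresholding as the sparsity-promoting proximal step.
    """
    s = np.zeros_like(observed)
    for _ in range(n_iters):
        resid = (s + s @ s) - observed
        grad = resid + resid @ s.T + s.T @ resid  # gradient of 0.5*||.||^2
        s = soft_threshold(s - step * grad, tau)
    return s
```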
arXiv Detail & Related papers (2022-05-19T14:08:15Z)
- Spectral Graph Convolutional Networks With Lifting-based Adaptive Graph Wavelets [81.63035727821145]
Spectral graph convolutional networks (SGCNs) have been attracting increasing attention in graph representation learning.
We propose a novel class of spectral graph convolutional networks that implement graph convolutions with adaptive graph wavelets.
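One classical way to realize graph wavelets by lifting, shown here with fixed neighbor-average predict/update weights; the paper's adaptive version would learn these weights, so treat this as a structural sketch only:

```python
import numpy as np

def lifting_wavelet(adj, x, odd_mask):
    """One lifting step of a graph wavelet transform.

    Nodes are split into 'even' (kept) and 'odd' (predicted) sets. Predict:
    each odd node is estimated from its even neighbors, leaving a detail
    residual. Update: half of the detail is fed back to the even nodes.
    """
    even = ~odd_mask
    # Predict step: neighbor-average of even nodes for each odd node.
    w_pred = adj[np.ix_(odd_mask, even)]
    w_pred = w_pred / np.maximum(w_pred.sum(1, keepdims=True), 1e-12)
    detail = x[odd_mask] - w_pred @ x[even]
    # Update step: push part of the detail back onto the even nodes.
    w_upd = adj[np.ix_(even, odd_mask)]
    w_upd = w_upd / np.maximum(w_upd.sum(1, keepdims=True), 1e-12)
    approx = x[even] + 0.5 * (w_upd @ detail)
    return approx, detail
```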
arXiv Detail & Related papers (2021-08-03T17:57:53Z)
- Spectral-Spatial Global Graph Reasoning for Hyperspectral Image Classification [50.899576891296235]
Convolutional neural networks have been widely applied to hyperspectral image classification, but their regular, grid-shaped receptive fields fit irregular spatial structure poorly.
Recent methods attempt to address this issue by performing graph convolutions on spatial topologies.
arXiv Detail & Related papers (2021-06-26T06:24:51Z)
- Spectral Embedding of Graph Networks [76.27138343125985]
We introduce an unsupervised graph embedding that trades off local node similarity and connectivity, and global structure.
The embedding is based on a generalized graph Laplacian, whose eigenvectors compactly capture both network structure and neighborhood proximity in a single representation.
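The exact generalized Laplacian is not specified in the summary. A generic stand-in with a normalization exponent `gamma` (an assumed parameterization) that interpolates between the combinatorial and symmetric normalized Laplacian:

```python
import numpy as np

def spectral_embedding(adj, dim=2, gamma=0.5):
    """Node embedding from the bottom eigenvectors of D^-gamma (D - A) D^-gamma.

    gamma = 0 gives the combinatorial Laplacian, gamma = 0.5 the symmetric
    normalized one; a sketch, not the paper's specific generalized Laplacian.
    """
    deg = adj.sum(1)
    d_pow = np.diag(np.power(np.maximum(deg, 1e-12), -gamma))
    lap = d_pow @ (np.diag(deg) - adj) @ d_pow
    vals, vecs = np.linalg.eigh(lap)
    return vecs[:, 1:dim + 1]   # drop the near-trivial bottom eigenvector
```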
arXiv Detail & Related papers (2020-09-30T04:59:10Z)