Topologically-Stabilized Graph Neural Networks: Empirical Robustness Across Domains
- URL: http://arxiv.org/abs/2512.13852v1
- Date: Mon, 15 Dec 2025 19:39:11 GMT
- Title: Topologically-Stabilized Graph Neural Networks: Empirical Robustness Across Domains
- Authors: Jelena Losic
- Abstract summary: Graph Neural Networks (GNNs) have become the standard for graph representation learning but remain vulnerable to structural perturbations. We propose a novel framework that integrates persistent homology features with stability regularization to enhance robustness. Our approach demonstrates exceptional robustness to edge perturbations while maintaining competitive accuracy.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph Neural Networks (GNNs) have become the standard for graph representation learning but remain vulnerable to structural perturbations. We propose a novel framework that integrates persistent homology features with stability regularization to enhance robustness. Building on the stability theorems of persistent homology \cite{cohen2007stability}, our method combines GIN architectures with multi-scale topological features extracted from persistence images, enforced by Hiraoka-Kusano-inspired stability constraints. Across six diverse datasets spanning biochemical, social, and collaboration networks, our approach demonstrates exceptional robustness to edge perturbations while maintaining competitive accuracy. Notably, we observe minimal performance degradation (0-4\% on most datasets) under perturbation, significantly outperforming baseline stability. Our work provides both a theoretically grounded and empirically validated approach to robust graph learning that aligns with recent advances in topological regularization.
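As a reading aid, here is a minimal Python sketch of the recipe the abstract describes: a GIN-style encoder, graph-level pooling concatenated with a flattened persistence image, and a stability penalty tying clean and edge-perturbed predictions together. Everything here (the names `TopoGIN` and `stability_loss`, the dense-adjacency layer, and the exact form of the regularizer) is an assumption for illustration, not the authors' code.

```python
import torch
import torch.nn as nn


class GINLayer(nn.Module):
    """One GIN-style update: h <- MLP((1 + eps) * h + A @ h)."""

    def __init__(self, dim):
        super().__init__()
        self.eps = nn.Parameter(torch.zeros(1))
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, h, adj):  # h: [n, d] node features, adj: dense [n, n]
        return self.mlp((1 + self.eps) * h + adj @ h)


class TopoGIN(nn.Module):
    """Hypothetical model: GIN encoder + flattened persistence-image features."""

    def __init__(self, in_dim, hid, pi_dim, n_classes):
        super().__init__()
        self.embed = nn.Linear(in_dim, hid)
        self.layers = nn.ModuleList(GINLayer(hid) for _ in range(3))
        self.head = nn.Linear(hid + pi_dim, n_classes)

    def forward(self, x, adj, pers_img):  # pers_img: [pi_dim] persistence image
        h = self.embed(x)
        for layer in self.layers:
            h = layer(h, adj)
        g = h.sum(dim=0)  # sum-pool nodes into a graph embedding
        return self.head(torch.cat([g, pers_img]))


def stability_loss(model, x, adj, adj_pert, pi, pi_pert, y, lam=0.1):
    """Cross-entropy on the clean graph plus an (assumed) penalty on how far
    the prediction moves under an edge perturbation; y is a scalar class id."""
    logits = model(x, adj, pi)
    logits_pert = model(x, adj_pert, pi_pert)
    ce = nn.functional.cross_entropy(logits.unsqueeze(0), y.unsqueeze(0))
    return ce + lam * (logits - logits_pert).norm()
```

In practice `adj_pert` and `pi_pert` would be the adjacency matrix and the persistence image recomputed on a randomly edge-perturbed copy of the same graph; both are placeholders here.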
Related papers
- Ontology Neural Networks for Topologically Conditioned Constraint Satisfaction
We present an enhanced framework that integrates topological conditioning with gradient stabilization mechanisms.
The framework exhibits seed-independent convergence and graceful scaling behavior up to twenty-node problems.
arXiv Detail & Related papers (2026-01-08T18:01:52Z)
- PowerGrow: Feasible Co-Growth of Structures and Dynamics for Power Grid Synthesis
We present PowerGrow, a co-generative framework that significantly reduces computational overhead while maintaining operational validity.
Experiments across benchmark settings show that PowerGrow outperforms prior diffusion models in fidelity and diversity.
This demonstrates its ability to generate operationally valid and realistic power grid scenarios.
arXiv Detail & Related papers (2025-08-29T01:47:27Z)
- On the Stability of Graph Convolutional Neural Networks: A Probabilistic Perspective
We study how perturbations in the graph topology affect GCNN outputs and propose a novel formulation for analyzing model stability.
Unlike prior studies that focus only on worst-case perturbations, our distribution-aware formulation characterizes output perturbations across a broad range of input data; a Monte Carlo sketch follows this entry.
arXiv Detail & Related papers (2025-06-01T23:17:19Z)
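A minimal sketch of the distribution-aware idea this entry describes: instead of a single worst-case bound, sample random topology perturbations and inspect the empirical distribution of output shifts. The edge-flip noise model and all names are assumptions, not the paper's formulation.

```python
import numpy as np


def output_shift_samples(model_fn, x, adj, n_samples=100, p_flip=0.01, seed=0):
    """Sample ||f(A') - f(A)|| over random symmetric edge flips.

    model_fn(x, adj) -> output vector; the flip model is chosen only
    for illustration.
    """
    rng = np.random.default_rng(seed)
    base = model_fn(x, adj)
    shifts = []
    for _ in range(n_samples):
        flips = np.triu(rng.random(adj.shape) < p_flip, 1)
        flips = flips | flips.T  # keep the perturbed graph undirected
        adj_pert = np.where(flips, 1 - adj, adj)
        shifts.append(np.linalg.norm(model_fn(x, adj_pert) - base))
    return np.asarray(shifts)  # summarize with mean and tail quantiles
```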
- Hierarchical Uncertainty-Aware Graph Neural Network
This work introduces the Hierarchical Uncertainty-Aware Graph Neural Network (HU-GNN), a novel architecture that unifies multi-scale representation learning, principled uncertainty estimation, and self-supervised embedding diversity within a single end-to-end framework.
Specifically, HU-GNN adaptively forms node clusters and estimates uncertainty at multiple structural scales, from individual nodes to higher levels.
arXiv Detail & Related papers (2025-04-28T14:22:18Z)
- A Signed Graph Approach to Understanding and Mitigating Oversmoothing in GNNs
We present a unified theoretical perspective based on the framework of signed graphs.
We show that many existing strategies implicitly introduce negative edges that alter message passing to resist oversmoothing.
We propose Structural Balanced Propagation (SBP), a plug-and-play method that assigns signed edges based on either labels or feature similarity; a rough sketch follows this entry.
arXiv Detail & Related papers (2025-02-17T03:25:36Z)
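A rough, assumed illustration of the signed-edge mechanism (not the authors' SBP code): edges between label- or feature-similar nodes keep a positive sign, dissimilar pairs get a negative sign, and one propagation step averages signed neighbor messages so dissimilar neighbors repel.

```python
import numpy as np


def signed_propagate(x, edges, y=None, tau=0.5):
    """One signed-propagation step: similar neighbors attract, dissimilar repel.

    x: [n, d] features; edges: iterable of (u, v) pairs; y: optional labels,
    in which case the sign rule compares labels instead of cosine similarity.
    """
    n, _ = x.shape
    msg = np.zeros_like(x)
    deg = np.zeros(n)
    for u, v in edges:
        if y is not None:
            sign = 1.0 if y[u] == y[v] else -1.0  # label-based rule
        else:
            cos = x[u] @ x[v] / (np.linalg.norm(x[u]) * np.linalg.norm(x[v]) + 1e-9)
            sign = 1.0 if cos >= tau else -1.0    # feature-similarity rule
        msg[u] += sign * x[v]
        msg[v] += sign * x[u]
        deg[u] += 1
        deg[v] += 1
    return x + msg / np.maximum(deg, 1)[:, None]  # residual signed averaging
```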
- Edge Classification on Graphs: New Directions in Topological Imbalance
We identify a novel 'Topological Imbalance Issue', which arises from the skewed distribution of edges across different classes.
We introduce Topological Entropy (TE), a novel topology-based metric that measures the topological imbalance of each edge.
We develop two strategies, Topological Reweighting and TE Wedge-based Mixup, to focus training on (synthetic) edges according to their TEs; a reweighting sketch follows this entry.
arXiv Detail & Related papers (2024-06-17T16:02:36Z)
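Since the summary does not define TE, the sketch below treats it as a given per-edge score and illustrates only the reweighting strategy: edges with higher topological imbalance receive larger weight in the edge-classification loss. The weighting form and names are assumptions.

```python
import torch
import torch.nn.functional as F


def te_weighted_loss(edge_logits, edge_labels, te, alpha=1.0):
    """Weight the per-edge loss by a (given) topological-imbalance score.

    edge_logits: [m, C]; edge_labels: [m] class ids; te: [m] nonnegative scores.
    """
    weights = 1.0 + alpha * te / (te.mean() + 1e-9)  # emphasize high-TE edges
    per_edge = F.cross_entropy(edge_logits, edge_labels, reduction="none")
    return (weights * per_edge).mean()
```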
- The Risk of Federated Learning to Skew Fine-Tuning Features and Underperform Out-of-Distribution Robustness
Federated learning risks skewing fine-tuning features and compromising model robustness.
We introduce three robustness indicators and conduct experiments across diverse robust datasets.
Our approach markedly enhances robustness across diverse scenarios, encompassing various parameter-efficient fine-tuning methods.
arXiv Detail & Related papers (2024-01-25T09:18:51Z)
- On the Trade-Off between Stability and Representational Capacity in Graph Neural Networks
We study the stability of EdgeNet, a general GNN framework that unifies more than twenty solutions.
By studying the effect of different EdgeNet categories on the stability, we show that GNNs with fewer degrees of freedom in their parameter space, linked to a lower representational capacity, are more stable.
arXiv Detail & Related papers (2023-12-04T22:07:17Z)
- On the Intrinsic Structures of Spiking Neural Networks
Recent years have seen a surge of interest in SNNs owing to their remarkable potential to handle time-dependent and event-driven data.
However, there has been a dearth of comprehensive studies examining the impact of intrinsic structures within spiking computations.
This work delves into the intrinsic structures of SNNs, elucidating their influence on expressivity.
arXiv Detail & Related papers (2022-06-21T09:42:30Z)
- Stability of Neural Networks on Manifolds to Relative Perturbations
Graph Neural Networks (GNNs) show impressive performance in many practical scenarios.
GNNs are observed to scale well to large graphs, but this is contradicted by existing stability bounds, which grow with the number of nodes.
arXiv Detail & Related papers (2021-10-10T04:37:19Z)
- Training Stable Graph Neural Networks Through Constrained Learning
Graph Neural Networks (GNNs) rely on graph convolutions to learn features from network data.
GNNs are stable to different types of perturbations of the underlying graph, a property that they inherit from graph filters.
We propose a novel constrained-learning approach that imposes a constraint on the GNN's stability condition under a perturbation of choice; a primal-dual sketch follows this entry.
arXiv Detail & Related papers (2021-10-07T15:54:42Z)
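A hedged primal-dual sketch of constrained stability training as this entry describes it: minimize the task loss subject to a bound on the output change under a chosen perturbation. The Lagrangian form, update rules, and names are assumptions, not the authors' algorithm.

```python
import torch
import torch.nn.functional as F


def primal_dual_step(model, opt, lam, x, adj, adj_pert, y, eps=0.1, dual_lr=0.01):
    """One primal-dual update for: min task loss s.t. ||f(A) - f(A')|| <= eps.

    lam is a plain tensor (e.g. torch.tensor(0.0)) holding the Lagrange
    multiplier; it is updated by dual ascent, not by the optimizer.
    """
    out = model(x, adj)
    out_pert = model(x, adj_pert)
    task = F.cross_entropy(out, y)
    slack = (out - out_pert).norm() - eps  # constraint violation if positive
    loss = task + lam * slack              # Lagrangian relaxation
    opt.zero_grad()
    loss.backward()
    opt.step()                             # primal descent on model weights
    with torch.no_grad():
        lam += dual_lr * slack.detach()    # dual ascent on the multiplier
        lam.clamp_(min=0.0)                # keep it nonnegative
    return task.item(), slack.item()
```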
- On the Stability of Graph Convolutional Neural Networks under Edge Rewiring
Graph neural networks are experiencing a surge of popularity within the machine learning community.
Despite this, their stability, i.e., their robustness to small perturbations in the input, is not yet well understood.
We develop an interpretable upper bound showing that graph neural networks are stable to rewiring between high-degree nodes.
arXiv Detail & Related papers (2020-10-26T17:37:58Z)