Leveraging Vulnerabilities in Temporal Graph Neural Networks via Strategic High-Impact Assaults
- URL: http://arxiv.org/abs/2509.25418v1
- Date: Mon, 29 Sep 2025 19:21:56 GMT
- Title: Leveraging Vulnerabilities in Temporal Graph Neural Networks via Strategic High-Impact Assaults
- Authors: Dong Hyun Jeon, Lijing Zhu, Haifang Li, Pengze Li, Jingna Feng, Tiehang Duan, Houbing Herbert Song, Cui Tao, Shuteng Niu
- Abstract summary: Temporal Graph Neural Networks (TGNNs) have become indispensable for analyzing dynamic graphs in critical applications. The High Impact Attack (HIA) is a novel restricted black-box attack framework designed to expose critical vulnerabilities in TGNNs. HIA significantly reduces TGNN accuracy on the link prediction task, achieving up to a 35.55% decrease in Mean Reciprocal Rank (MRR).
- Score: 8.54812911128942
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Temporal Graph Neural Networks (TGNNs) have become indispensable for analyzing dynamic graphs in critical applications such as social networks, communication systems, and financial networks. However, the robustness of TGNNs against adversarial attacks, particularly sophisticated attacks that exploit the temporal dimension, remains a significant challenge. Existing attack methods for Spatio-Temporal Dynamic Graphs (STDGs) often rely on simplistic, easily detectable perturbations (e.g., random edge additions/deletions) and fail to strategically target the most influential nodes and edges for maximum impact. We introduce the High Impact Attack (HIA), a novel restricted black-box attack framework specifically designed to overcome these limitations and expose critical vulnerabilities in TGNNs. HIA leverages a data-driven surrogate model to identify structurally important nodes (central to network connectivity) and dynamically important nodes (critical for the graph's temporal evolution). It then employs a hybrid perturbation strategy, combining strategic edge injection (to create misleading connections) and targeted edge deletion (to disrupt essential pathways), maximizing TGNN performance degradation. Importantly, HIA minimizes the number of perturbations to enhance stealth, making it more challenging to detect. Comprehensive experiments on five real-world datasets and four representative TGNN architectures (TGN, JODIE, DySAT, and TGAT) demonstrate that HIA significantly reduces TGNN accuracy on the link prediction task, achieving up to a 35.55% decrease in Mean Reciprocal Rank (MRR) - a substantial improvement over state-of-the-art baselines. These results highlight fundamental vulnerabilities in current STDG models and underscore the urgent need for robust defenses that account for both structural and temporal dynamics.
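The abstract outlines a three-step pipeline: score nodes for structural and dynamic importance via a data-driven surrogate, then spend a small perturbation budget on a mix of edge injections and edge deletions around the top-ranked nodes. The sketch below illustrates that idea only; the function names, the degree-centrality and recency-weighting proxies, and the `alpha`/`budget` knobs are illustrative assumptions of ours, not the paper's actual learned surrogate or selection rule.

```python
# Hypothetical sketch of HIA-style node scoring and hybrid perturbation,
# inferred from the abstract alone. Requires networkx and numpy.
import networkx as nx
import numpy as np

def score_nodes(events, alpha=0.5):
    """Rank nodes by a blend of structural and temporal importance.

    events: list of (src, dst, timestamp) interaction tuples.
    alpha:  assumed weighting knob between the two scores.
    """
    g = nx.Graph()
    g.add_edges_from((u, v) for u, v, _ in events)

    # Structural importance: degree centrality as a cheap proxy for
    # "central to network connectivity" (the paper uses a learned
    # surrogate model instead).
    structural = nx.degree_centrality(g)

    # Dynamic importance: exponentially recency-weighted activity,
    # a stand-in for "critical to the graph's temporal evolution".
    t_max = max(t for _, _, t in events)
    dynamic = {n: 0.0 for n in g.nodes}
    for u, v, t in events:
        recency = np.exp(-(t_max - t))  # recent events weigh more
        dynamic[u] += recency
        dynamic[v] += recency
    d_max = max(dynamic.values()) or 1.0

    return {
        n: alpha * structural[n] + (1 - alpha) * dynamic[n] / d_max
        for n in g.nodes
    }

def hybrid_perturbation(events, budget=10, alpha=0.5):
    """Spend a small budget on edge injections and deletions among
    the highest-scoring nodes, keeping perturbations sparse for stealth."""
    scores = score_nodes(events, alpha)
    targets = sorted(scores, key=scores.get, reverse=True)[:budget]
    existing = {frozenset((u, v)) for u, v, _ in events}

    injections, deletions = [], []
    for i, u in enumerate(targets):
        for v in targets[i + 1:]:
            pair = frozenset((u, v))
            if pair in existing and len(deletions) < budget // 2:
                deletions.append((u, v))   # disrupt essential pathways
            elif pair not in existing and len(injections) < budget // 2:
                injections.append((u, v))  # create misleading connections
    return injections, deletions
```

In this sketch, stealth is modeled crudely as a hard budget split evenly between injections and deletions; per the abstract, HIA instead selects perturbations to maximize degradation of the victim TGNN's link-prediction MRR while keeping their total count small.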
Related papers
- ScaleGNN: Towards Scalable Graph Neural Networks via Adaptive High-order Neighboring Feature Fusion [73.85920403511706]
We propose ScaleGNN, a novel framework that adaptively fuses multi-hop node features for scalable and effective graph learning. We show that ScaleGNN consistently outperforms state-of-the-art GNNs in both predictive accuracy and computational efficiency.
arXiv Detail & Related papers (2025-04-22T14:05:11Z) - Grimm: A Plug-and-Play Perturbation Rectifier for Graph Neural Networks Defending against Poisoning Attacks [53.972077392749185]
Recent studies have revealed the vulnerability of graph neural networks (GNNs) to adversarial poisoning attacks on node classification tasks. Here we introduce Grimm, the first plug-and-play defense model.
arXiv Detail & Related papers (2024-12-11T17:17:02Z) - RIDA: A Robust Attack Framework on Incomplete Graphs [30.845056747923255]
Graph Neural Networks (GNNs) are vital in data science but are increasingly susceptible to adversarial attacks. Gray-box poisoning attacks are noteworthy due to their effectiveness and fewer constraints, yet existing methods assume access to complete graph information. To address this gap, we introduce the Robust Incomplete Deep Attack Framework (RIDA), the first algorithm for robust gray-box poisoning attacks on incomplete graphs.
arXiv Detail & Related papers (2024-07-25T16:33:35Z) - HGAttack: Transferable Heterogeneous Graph Adversarial Attack [63.35560741500611]
Heterogeneous Graph Neural Networks (HGNNs) are increasingly recognized for their performance in areas like the web and e-commerce.
This paper introduces HGAttack, the first dedicated gray-box evasion attack method for heterogeneous graphs.
arXiv Detail & Related papers (2024-01-18T12:47:13Z) - Everything Perturbed All at Once: Enabling Differentiable Graph Attacks [61.61327182050706]
Graph neural networks (GNNs) have been shown to be vulnerable to adversarial attacks.
We propose a novel attack method called Differentiable Graph Attack (DGA) to efficiently generate effective attacks.
Compared to the state-of-the-art, DGA achieves nearly equivalent attack performance with 6 times less training time and 11 times smaller GPU memory footprint.
arXiv Detail & Related papers (2023-08-29T20:14:42Z) - Spear and Shield: Adversarial Attacks and Defense Methods for Model-Based Link Prediction on Continuous-Time Dynamic Graphs [40.01361505644007]
We propose T-SPEAR, a simple and effective adversarial attack method for link prediction on continuous-time dynamic graphs.
We show that T-SPEAR significantly degrades the victim model's performance on link prediction tasks.
Our attacks are transferable to other TGNNs, which differ from the victim model assumed by the attacker.
arXiv Detail & Related papers (2023-08-21T15:09:51Z) - Resisting Graph Adversarial Attack via Cooperative Homophilous Augmentation [60.50994154879244]
Recent studies show that Graph Neural Networks are vulnerable and easily fooled by small perturbations.
In this work, we focus on an emerging but critical attack, namely the Graph Injection Attack (GIA).
We propose a general defense framework CHAGNN against GIA through cooperative homophilous augmentation of graph data and model.
arXiv Detail & Related papers (2022-11-15T11:44:31Z) - Spatially Focused Attack against Spatiotemporal Graph Neural Networks [8.665638585791235]
Deep spatiotemporal graph neural networks (GNNs) have achieved great success in traffic forecasting applications. If GNNs are vulnerable in real-world prediction applications, an attacker can manipulate the results and cause serious traffic congestion, or even a city-scale breakdown.
arXiv Detail & Related papers (2021-09-10T01:31:53Z) - Adversarial Attack on Graph Neural Networks as An Influence Maximization Problem [12.88476464580968]
Graph neural networks (GNNs) have attracted increasing interest.
There is an urgent need for understanding the robustness of GNNs under adversarial attacks.
arXiv Detail & Related papers (2021-06-21T00:47:44Z) - Uncertainty-Matching Graph Neural Networks to Defend Against Poisoning Attacks [43.60973654460398]
Graph Neural Networks (GNNs) are generalizations of neural networks to graph-structured data.
GNNs are vulnerable to adversarial attacks, i.e., a small perturbation to the structure can lead to a non-trivial performance degradation.
We propose the Uncertainty Matching GNN (UM-GNN), aimed at improving the robustness of GNN models.
arXiv Detail & Related papers (2020-09-30T05:29:42Z) - Information Obfuscation of Graph Neural Networks [96.8421624921384]
We study the problem of protecting sensitive attributes by information obfuscation when learning with graph structured data.
We propose a framework to locally filter out pre-determined sensitive attributes via adversarial training with the total variation and the Wasserstein distance.
arXiv Detail & Related papers (2020-09-28T17:55:04Z) - Graph Backdoor [53.70971502299977]
We present GTA, the first backdoor attack on graph neural networks (GNNs).
GTA departs in significant ways: it defines triggers as specific subgraphs, including both topological structures and descriptive features.
It can be instantiated for both transductive (e.g., node classification) and inductive (e.g., graph classification) tasks.
arXiv Detail & Related papers (2020-06-21T19:45:30Z)