Spear and Shield: Adversarial Attacks and Defense Methods for
Model-Based Link Prediction on Continuous-Time Dynamic Graphs
- URL: http://arxiv.org/abs/2308.10779v2
- Date: Fri, 23 Feb 2024 07:22:48 GMT
- Title: Spear and Shield: Adversarial Attacks and Defense Methods for
Model-Based Link Prediction on Continuous-Time Dynamic Graphs
- Authors: Dongjin Lee, Juho Lee, Kijung Shin
- Abstract summary: We propose T-SPEAR, a simple and effective adversarial attack method for link prediction on continuous-time dynamic graphs.
We show that T-SPEAR significantly degrades the victim model's performance on link prediction tasks.
Our attacks are transferable to other TGNNs, which differ from the victim model assumed by the attacker.
- Score: 40.01361505644007
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Real-world graphs are dynamic, constantly evolving with new interactions,
such as financial transactions in financial networks. Temporal Graph Neural
Networks (TGNNs) have been developed to effectively capture the evolving
patterns in dynamic graphs. While these models have demonstrated their
superiority, being widely adopted in various important fields, their
vulnerabilities against adversarial attacks remain largely unexplored. In this
paper, we propose T-SPEAR, a simple and effective adversarial attack method for
link prediction on continuous-time dynamic graphs, focusing on investigating
the vulnerabilities of TGNNs. Specifically, before the training procedure of a
victim model, which is a TGNN for link prediction, we inject edge perturbations
to the data that are unnoticeable in terms of the four constraints we propose,
and yet effective enough to cause malfunction of the victim model. Moreover, we
propose a robust training approach T-SHIELD to mitigate the impact of
adversarial attacks. By using edge filtering and enforcing temporal smoothness
to node embeddings, we enhance the robustness of the victim model. Our
experimental study shows that T-SPEAR significantly degrades the victim model's
performance on link prediction tasks, and even more, our attacks are
transferable to other TGNNs, which differ from the victim model assumed by the
attacker. Moreover, we demonstrate that T-SHIELD effectively filters out
adversarial edges and exhibits robustness against adversarial attacks,
surpassing the link prediction performance of the naive TGNN by up to 11.2%
under T-SPEAR.
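The abstract describes both methods only at a high level, so the two sketches below are illustrative stand-ins rather than the authors' algorithms, and every name in them (TemporalEdge, inject_perturbation_edges, filter_suspicious_edges, temporal_smoothness_loss) is hypothetical. The first sketch mimics the attack surface of T-SPEAR: adversarial edges are injected into the temporal edge stream before the victim TGNN is trained. Since the four unnoticeability constraints are not enumerated in the abstract, only two crude proxies are enforced here, a global edge budget and timestamps confined to the observed time window.

```python
import random
from dataclasses import dataclass
from typing import List

@dataclass
class TemporalEdge:
    src: int    # source node id
    dst: int    # destination node id
    t: float    # interaction timestamp

def inject_perturbation_edges(stream: List[TemporalEdge],
                              num_nodes: int,
                              budget_ratio: float = 0.1,
                              seed: int = 0) -> List[TemporalEdge]:
    """Poison a continuous-time edge stream before the victim TGNN trains.

    Hypothetical stand-in for T-SPEAR: only a perturbation budget and
    in-window timestamps are enforced, not the paper's four constraints.
    """
    rng = random.Random(seed)
    budget = int(budget_ratio * len(stream))
    t_min = min(e.t for e in stream)
    t_max = max(e.t for e in stream)

    fake = []
    for _ in range(budget):
        u = rng.randrange(num_nodes)
        v = rng.randrange(num_nodes - 1)
        if v >= u:
            v += 1  # shift so v != u; injected edges contain no self-loops
        # A timestamp inside the observed window keeps the injected edge
        # from standing out chronologically.
        fake.append(TemporalEdge(u, v, rng.uniform(t_min, t_max)))

    # TGNNs consume interactions in time order, so re-sort after injection.
    return sorted(stream + fake, key=lambda e: e.t)
```

On the defense side, the abstract attributes T-SHIELD's robustness to two ingredients, edge filtering and temporal smoothness of node embeddings. The scoring rule and the exact regularizer are not specified, so the sketch below assumes cosine similarity between endpoint embeddings as the filtering score and a weighted mean-squared difference between consecutive embeddings as the smoothness penalty.

```python
import torch
import torch.nn.functional as F

def filter_suspicious_edges(src_emb: torch.Tensor,
                            dst_emb: torch.Tensor,
                            keep_ratio: float = 0.9) -> torch.Tensor:
    """Return indices of the edges to keep, dropping the least plausible ones.

    Assumption: an edge is plausible if its endpoints' current embeddings
    point in similar directions (cosine similarity as a proxy score).
    """
    scores = F.cosine_similarity(src_emb, dst_emb, dim=-1)  # one score per edge
    k = max(1, int(keep_ratio * scores.numel()))
    return scores.topk(k).indices  # highest-scoring edges survive

def temporal_smoothness_loss(prev_emb: torch.Tensor,
                             curr_emb: torch.Tensor,
                             weight: float = 0.1) -> torch.Tensor:
    """Penalize abrupt jumps of a node's embedding between consecutive updates.

    Assumption: a weighted mean-squared difference; the paper's exact form
    is not given in the abstract.
    """
    return weight * F.mse_loss(curr_emb, prev_emb)
```

In a typical TGNN training loop one would train only on the edges returned by filter_suspicious_edges and add temporal_smoothness_loss(h_prev, h_curr) for the nodes touched in each batch to the link-prediction loss; where exactly T-SHIELD attaches these terms is an assumption here, not a detail taken from the paper.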
Related papers
- Graph Transductive Defense: a Two-Stage Defense for Graph Membership Inference Attacks [50.19590901147213]
Graph neural networks (GNNs) have become instrumental in diverse real-world applications, offering powerful graph learning capabilities.
GNNs are vulnerable to adversarial attacks, including membership inference attacks (MIA).
This paper proposes an effective two-stage defense, Graph Transductive Defense (GTD), tailored to graph transductive learning characteristics.
arXiv Detail & Related papers (2024-06-12T06:36:37Z)
- Everything Perturbed All at Once: Enabling Differentiable Graph Attacks [61.61327182050706]
Graph neural networks (GNNs) have been shown to be vulnerable to adversarial attacks.
We propose a novel attack method called Differentiable Graph Attack (DGA) to efficiently generate effective attacks.
Compared to the state-of-the-art, DGA achieves nearly equivalent attack performance with 6 times less training time and an 11 times smaller GPU memory footprint.
arXiv Detail & Related papers (2023-08-29T20:14:42Z)
- Robust Spatiotemporal Traffic Forecasting with Reinforced Dynamic Adversarial Training [13.998123723601651]
Machine learning-based forecasting models are commonly used in Intelligent Transportation Systems (ITS) to predict traffic patterns.
Most of the existing models are susceptible to adversarial attacks, which can lead to inaccurate predictions and negative consequences such as congestion and delays.
We propose a framework for incorporating adversarial training into traffic forecasting tasks.
arXiv Detail & Related papers (2023-06-25T04:53:29Z)
- Targeted Adversarial Attacks against Neural Network Trajectory Predictors [14.834932672948698]
Trajectory prediction is an integral component of modern autonomous systems.
Deep neural network (DNN) models are often employed for trajectory forecasting tasks.
We propose a targeted adversarial attack against DNN models for trajectory forecasting tasks.
arXiv Detail & Related papers (2022-12-08T08:34:28Z)
- Resisting Graph Adversarial Attack via Cooperative Homophilous Augmentation [60.50994154879244]
Recent studies show that Graph Neural Networks are vulnerable and easily fooled by small perturbations.
In this work, we focus on the emerging but critical attack, namely, the Graph Injection Attack (GIA).
We propose a general defense framework CHAGNN against GIA through cooperative homophilous augmentation of graph data and model.
arXiv Detail & Related papers (2022-11-15T11:44:31Z)
- What Does the Gradient Tell When Attacking the Graph Structure [44.44204591087092]
We present a theoretical demonstration revealing that attackers tend to increase inter-class edges due to the message passing mechanism of GNNs.
By connecting dissimilar nodes, attackers can more effectively corrupt node features, making such attacks more advantageous.
We propose an innovative attack loss that balances attack effectiveness and imperceptibility, sacrificing some attack effectiveness to attain greater imperceptibility.
arXiv Detail & Related papers (2022-08-26T15:45:20Z)
- Unveiling the potential of Graph Neural Networks for robust Intrusion Detection [2.21481607673149]
We propose a novel Graph Neural Network (GNN) model to learn flow patterns of attacks structured as graphs.
Our model maintains the same level of accuracy as in previous experiments, while state-of-the-art ML techniques see their accuracy (F1-score) degrade by up to 50% under adversarial attacks.
arXiv Detail & Related papers (2021-07-30T16:56:39Z)
- Towards Adversarial Patch Analysis and Certified Defense against Crowd Counting [61.99564267735242]
Crowd counting has drawn much attention due to its importance in safety-critical surveillance systems.
Recent studies have demonstrated that deep neural network (DNN) methods are vulnerable to adversarial attacks.
We propose a robust attack strategy called Adversarial Patch Attack with Momentum to evaluate the robustness of crowd counting models.
arXiv Detail & Related papers (2021-04-22T05:10:55Z)
- Graph Backdoor [53.70971502299977]
We present GTA, the first backdoor attack on graph neural networks (GNNs).
GTA departs in significant ways: it defines triggers as specific subgraphs, including both topological structures and descriptive features.
It can be instantiated for both transductive (e.g., node classification) and inductive (e.g., graph classification) tasks.
arXiv Detail & Related papers (2020-06-21T19:45:30Z)
This list is automatically generated from the titles and abstracts of the papers on this site.