Practicable Black-box Evasion Attacks on Link Prediction in Dynamic Graphs -- A Graph Sequential Embedding Method
- URL: http://arxiv.org/abs/2412.13134v1
- Date: Tue, 17 Dec 2024 17:53:32 GMT
- Title: Practicable Black-box Evasion Attacks on Link Prediction in Dynamic Graphs -- A Graph Sequential Embedding Method
- Authors: Jiate Li, Meng Pang, Binghui Wang
- Abstract summary: Link prediction in dynamic graphs (LPDG) has been widely applied to real-world applications such as website recommendation, traffic flow prediction, and organizational studies.
We propose the first practicable black-box evasion attack method that achieves effective attacks against the target LPDG model within a limited number of interactions and perturbations.
- Score: 16.957498511037485
- Abstract: Link prediction in dynamic graphs (LPDG) has been widely applied to real-world applications such as website recommendation, traffic flow prediction, and organizational studies. These models are usually kept local and secure, with only a restricted interactive interface available to the public. The black-box evasion attack on LPDG models, in which both model interactions and data perturbations are restricted, is therefore a practically important problem. In this paper, we propose the first practicable black-box evasion attack method that achieves effective attacks against the target LPDG model within a limited number of interactions and perturbations. To perform effective attacks under limited perturbations, we develop a graph sequential embedding model that finds the desired state embedding of the dynamic graph sequence under a deep reinforcement learning framework. To overcome the scarcity of interactions, we design a multi-environment training pipeline and train our agent on multiple instances that share an aggregate interaction buffer. Finally, we evaluate our attack against three advanced LPDG models on three real-world graph datasets of different scales and compare its performance with related methods under the interaction and perturbation constraints. Experimental results show that our attack is both effective and practicable.
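The abstract outlines a pipeline of three pieces: a graph sequential embedding (GSE) model that turns the dynamic graph sequence into a state vector, a deep-reinforcement-learning agent that selects edge perturbations under a query budget, and a multi-environment setup in which several attack instances share one interaction buffer. The snippet below is a minimal illustrative sketch of that pipeline, not the authors' implementation; the environment interface (`env.reset`/`env.step`), the feature dimensions, and the reward signal are all assumptions introduced for the example.

```python
# Minimal sketch (assumed interfaces, not the paper's code): a GRU embeds the
# snapshot sequence, a DQN-style agent scores candidate edge perturbations, and
# all attack instances write their transitions into one shared buffer.
import random
from collections import deque

import torch
import torch.nn as nn


class GraphSequentialEmbedding(nn.Module):
    """Embed a sequence of per-snapshot graph features into a single state vector."""

    def __init__(self, feat_dim: int, hidden_dim: int):
        super().__init__()
        self.rnn = nn.GRU(feat_dim, hidden_dim, batch_first=True)

    def forward(self, snapshots: torch.Tensor) -> torch.Tensor:
        # snapshots: (batch, time, feat_dim) -> state: (batch, hidden_dim)
        _, h = self.rnn(snapshots)
        return h[-1]


class PerturbationAgent(nn.Module):
    """Score candidate edge perturbations given the state embedding (DQN-style)."""

    def __init__(self, hidden_dim: int, num_candidate_edges: int):
        super().__init__()
        self.q_net = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, num_candidate_edges),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.q_net(state)


# One interaction buffer shared by every attack instance (multi-environment training).
shared_buffer: deque = deque(maxlen=10_000)


def run_episode(env, embed: GraphSequentialEmbedding, agent: PerturbationAgent,
                budget: int, epsilon: float = 0.1) -> None:
    """Query the black-box LPDG model through the hypothetical `env` interface for
    at most `budget` perturbations and store transitions in the shared buffer."""
    snapshots = env.reset()                       # assumed: (1, time, feat_dim) tensor
    for _ in range(budget):
        state = embed(snapshots)
        q_values = agent(state)
        if random.random() < epsilon:             # epsilon-greedy exploration
            action = random.randrange(q_values.shape[-1])
        else:
            action = int(q_values.argmax(dim=-1).item())
        # assumed: env flips the chosen edge, queries the target model, and returns
        # the drop in link-prediction performance as the reward
        next_snapshots, reward, done = env.step(action)
        shared_buffer.append((snapshots, action, reward, next_snapshots, done))
        snapshots = next_snapshots
        if done:
            break
```

The shared buffer is the point of the multi-environment design: each instance contributes only a handful of interactions, but one agent can still be trained off the aggregated transitions.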
Related papers
- Graph Defense Diffusion Model [26.41730982598055]
Graph Neural Networks (GNNs) are highly vulnerable to adversarial attacks, which can greatly degrade their performance.
Existing graph purification methods attempt to address this issue by filtering attacked graphs.
We propose a more versatile approach for defending against adversarial attacks on graphs.
arXiv Detail & Related papers (2025-01-20T16:18:40Z) - GAIM: Attacking Graph Neural Networks via Adversarial Influence Maximization [12.651079126612121]
We present an integrated adversarial attack method conducted on a node basis while considering the strict black-box setting.
In our approach, we unify the selection of the target node and the construction of feature perturbations into a single optimization problem.
We extend our method to accommodate label-oriented attacks, broadening its applicability.
arXiv Detail & Related papers (2024-08-20T15:41:20Z) - Everything Perturbed All at Once: Enabling Differentiable Graph Attacks [61.61327182050706]
Graph neural networks (GNNs) have been shown to be vulnerable to adversarial attacks.
We propose a novel attack method called Differentiable Graph Attack (DGA) to efficiently generate effective attacks.
Compared to the state-of-the-art, DGA achieves nearly equivalent attack performance with 6 times less training time and 11 times smaller GPU memory footprint.
arXiv Detail & Related papers (2023-08-29T20:14:42Z) - Spear and Shield: Adversarial Attacks and Defense Methods for Model-Based Link Prediction on Continuous-Time Dynamic Graphs [40.01361505644007]
We propose T-SPEAR, a simple and effective adversarial attack method for link prediction on continuous-time dynamic graphs.
We show that T-SPEAR significantly degrades the victim model's performance on link prediction tasks.
Our attacks are transferable to other TGNNs, which differ from the victim model assumed by the attacker.
arXiv Detail & Related papers (2023-08-21T15:09:51Z) - Single Node Injection Label Specificity Attack on Graph Neural Networks via Reinforcement Learning [8.666702832094874]
We present a gradient-free generalizable adversary that injects a single malicious node to manipulate a target node in the black-box evasion setting.
By directly querying the victim model, G$^2$-SNIA learns patterns from exploration to achieve diverse attack goals with extremely limited attack budgets.
arXiv Detail & Related papers (2023-05-04T15:10:41Z) - Resisting Graph Adversarial Attack via Cooperative Homophilous Augmentation [60.50994154879244]
Recent studies show that Graph Neural Networks are vulnerable and easily fooled by small perturbations.
In this work, we focus on the emerging but critical attack, namely, Graph Injection Attack.
We propose a general defense framework CHAGNN against GIA through cooperative homophilous augmentation of graph data and model.
arXiv Detail & Related papers (2022-11-15T11:44:31Z) - Model Inversion Attacks against Graph Neural Networks [65.35955643325038]
We study model inversion attacks against Graph Neural Networks (GNNs).
In this paper, we present GraphMI to infer the private training graph data.
Our experimental results show that such defenses are not sufficiently effective and call for more advanced defenses against privacy attacks.
arXiv Detail & Related papers (2022-09-16T09:13:43Z) - Reinforcement Learning-based Black-Box Evasion Attacks to Link Prediction in Dynamic Graphs [87.5882042724041]
Link prediction in dynamic graphs (LPDG) is an important research problem that has diverse applications.
We study the vulnerability of LPDG methods and propose the first practical black-box evasion attack.
arXiv Detail & Related papers (2020-09-01T01:04:49Z) - Graph Backdoor [53.70971502299977]
We present GTA, the first backdoor attack on graph neural networks (GNNs).
GTA departs in significant ways: it defines triggers as specific subgraphs, including both topological structures and descriptive features.
It can be instantiated for both transductive (e.g., node classification) and inductive (e.g., graph classification) tasks.
arXiv Detail & Related papers (2020-06-21T19:45:30Z) - Perturbing Across the Feature Hierarchy to Improve Standard and Strict Blackbox Attack Transferability [100.91186458516941]
We consider the blackbox transfer-based targeted adversarial attack threat model in the realm of deep neural network (DNN) image classifiers.
We design a flexible attack framework that allows for multi-layer perturbations and demonstrates state-of-the-art targeted transfer performance.
We analyze why the proposed methods outperform existing attack strategies and show an extension of the method in the case when limited queries to the blackbox model are allowed.
arXiv Detail & Related papers (2020-04-29T16:00:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.