Multi-Targeted Graph Backdoor Attack
- URL: http://arxiv.org/abs/2601.15474v1
- Date: Wed, 21 Jan 2026 21:23:51 GMT
- Title: Multi-Targeted Graph Backdoor Attack
- Authors: Md Nabi Newaz Khan, Abdullah Arafat Miah, Yu Bi
- Abstract summary: Graph neural networks (GNNs) have demonstrated exceptional performance in solving critical problems across diverse domains. Existing studies on backdoor attacks for graph classification are limited to single-target attacks using subgraph replacement. We introduce the first multi-targeted backdoor attack for the graph classification task, where multiple triggers simultaneously redirect predictions to different target labels.
- Score: 0.038233569758620044
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph neural networks (GNNs) have demonstrated exceptional performance in solving critical problems across diverse domains, yet they remain susceptible to backdoor attacks. Existing studies on backdoor attacks for graph classification are limited to single-target attacks using a subgraph replacement-based mechanism, where the attacker implants only one trigger into the GNN model. In this paper, we introduce the first multi-targeted backdoor attack for the graph classification task, where multiple triggers simultaneously redirect predictions to different target labels. Instead of subgraph replacement, we propose subgraph injection, which preserves the structure of the original graphs while poisoning the clean graphs. Extensive experiments demonstrate the efficacy of our approach: our attack achieves high attack success rates for all target labels with minimal impact on clean accuracy. Experimental results on five datasets demonstrate the superior performance of our attack framework compared to the conventional subgraph replacement-based attack. Our analysis of four GNN models confirms the generalization capability of our attack, which is effective regardless of the GNN architecture and training parameter settings. We further investigate the impact of the attack design parameters, including injection methods, number of connections, trigger size, trigger edge density, and poisoning ratio. Additionally, our evaluation against state-of-the-art defenses (randomized smoothing and fine-pruning) demonstrates the robustness of our proposed multi-target attacks. This work highlights the vulnerability of GNNs to multi-targeted backdoor attacks in the graph classification task. Our source code will be available at https://github.com/SiSL-URI/Multi-Targeted-Graph-Backdoor-Attack.
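The subgraph-injection mechanism lends itself to a compact illustration. Below is a minimal Python sketch, using networkx, of how a multi-target poisoning loop could look: one random trigger per target label is appended to a fraction of the training graphs (leaving their original structure intact) and those graphs are relabeled. All function names, trigger shapes, and parameter values are illustrative assumptions, not the authors' released code.

```python
# Minimal sketch of multi-target subgraph *injection* (as opposed to
# replacement), assuming graphs are networkx.Graph objects with integer
# node ids and integer class labels. Trigger size, edge density, number
# of connections, and poisoning ratio are illustrative values only.
import random
import networkx as nx

def make_trigger(num_nodes, edge_density, seed):
    """One random trigger subgraph per target label (Erdos-Renyi here)."""
    return nx.gnp_random_graph(num_nodes, edge_density, seed=seed)

def inject_trigger(graph, trigger, num_connections=2, rng=random):
    """Keep the original structure intact, append the trigger's nodes and
    edges, then wire the trigger to a few randomly chosen original nodes."""
    g = graph.copy()
    offset = max(g.nodes) + 1                     # avoid node-id clashes
    mapping = {n: n + offset for n in trigger.nodes}
    g.add_nodes_from(mapping.values())
    g.add_edges_from((mapping[u], mapping[v]) for u, v in trigger.edges)
    for a, t in zip(rng.sample(list(graph.nodes), num_connections),
                    rng.sample(list(mapping.values()), num_connections)):
        g.add_edge(a, t)
    return g

def poison_dataset(graphs, labels, target_labels, ratio=0.1, seed=0):
    """Poison a `ratio` fraction of the training set, cycling through the
    target labels so every target gets its own distinct trigger."""
    rng = random.Random(seed)
    triggers = {t: make_trigger(4, 0.8, seed + t) for t in target_labels}
    data = list(zip(graphs, labels))
    for j, i in enumerate(rng.sample(range(len(data)), int(ratio * len(data)))):
        target = target_labels[j % len(target_labels)]
        data[i] = (inject_trigger(data[i][0], triggers[target], rng=rng), target)
    return data, triggers
```

Training any graph classifier on the poisoned set would then associate each trigger with its own target label, which is the multi-target behavior the abstract describes.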
Related papers
- Robustness Inspired Graph Backdoor Defense [30.82433380830665]
Graph Neural Networks (GNNs) have achieved promising results in tasks such as node classification and graph classification. Recent studies reveal that GNNs are vulnerable to backdoor attacks, posing a significant threat to their real-world adoption. We propose using random edge dropping to detect backdoors and theoretically show that it can efficiently distinguish poisoned nodes from clean ones.
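The edge-dropping test admits a short sketch: a backdoored sample's prediction should be brittle when random edges (possibly trigger edges) are removed, while a clean sample's prediction stays stable. The paper targets poisoned nodes; the sketch below adapts the idea to whole graphs for illustration, and `model` is a hypothetical black-box callable.

```python
# Hedged sketch of detection by random edge dropping; all names here are
# assumptions, not the paper's implementation.
import random
import networkx as nx

def drop_edges(graph, drop_prob, rng):
    g = graph.copy()
    g.remove_edges_from([e for e in graph.edges if rng.random() < drop_prob])
    return g

def prediction_stability(model, graph, drop_prob=0.2, trials=20, seed=0):
    """`model` maps a networkx graph to a predicted label."""
    rng = random.Random(seed)
    base = model(graph)
    agree = sum(model(drop_edges(graph, drop_prob, rng)) == base
                for _ in range(trials))
    return agree / trials          # low stability -> flag as suspicious
```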
arXiv Detail & Related papers (2024-06-14T08:46:26Z)
- Everything Perturbed All at Once: Enabling Differentiable Graph Attacks [61.61327182050706]
Graph neural networks (GNNs) have been shown to be vulnerable to adversarial attacks.
We propose a novel attack method called Differentiable Graph Attack (DGA) to efficiently generate effective attacks.
Compared to the state-of-the-art, DGA achieves nearly equivalent attack performance with 6 times less training time and 11 times smaller GPU memory footprint.
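As a rough illustration of what a gradient-guided structure attack looks like (a generic sketch, not DGA's actual formulation), one can relax the adjacency matrix to continuous values, ascend the classification loss, and commit only the strongest edge flips. The `model(adj, feats)` signature returning `[1, num_classes]` logits is an assumption.

```python
# Generic differentiable-structure-attack sketch; budget, steps, and lr
# are illustrative values.
import torch

def gradient_structure_attack(model, adj, feats, label, budget=10, steps=50, lr=0.1):
    pert = torch.zeros_like(adj, requires_grad=True)   # learnable flip scores
    opt = torch.optim.Adam([pert], lr=lr)
    for _ in range(steps):
        # a_ij + p_ij on non-edges, a_ij - p_ij on existing edges
        relaxed = adj + torch.clamp(pert, 0, 1) * (1 - 2 * adj)
        loss = -torch.nn.functional.cross_entropy(
            model(relaxed, feats), label.view(1))      # maximize the loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    # keep only the `budget` largest flips, applied symmetrically
    scores = torch.clamp(pert.detach(), 0, 1).triu(1)
    idx = torch.topk(scores.flatten(), budget).indices
    rows, cols = idx // adj.size(0), idx % adj.size(0)
    flipped = adj.clone()
    flipped[rows, cols] = 1 - flipped[rows, cols]
    flipped[cols, rows] = flipped[rows, cols]
    return flipped
```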
arXiv Detail & Related papers (2023-08-29T20:14:42Z)
- IDEA: Invariant Defense for Graph Adversarial Robustness [60.0126873387533]
We propose an Invariant causal DEfense method against adversarial Attacks (IDEA).
We derive node-based and structure-based invariance objectives from an information-theoretic perspective.
Experiments demonstrate that IDEA attains state-of-the-art defense performance under all five attacks on all five datasets.
arXiv Detail & Related papers (2023-05-25T07:16:00Z)
- Backdoor Attack with Sparse and Invisible Trigger [57.41876708712008]
Deep neural networks (DNNs) are vulnerable to backdoor attacks.
The backdoor attack is an emerging yet serious training-phase threat.
We propose a sparse and invisible backdoor attack (SIBA).
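The two constraints in the name can be made concrete: sparsity bounds how many pixels the trigger touches, and invisibility bounds its magnitude. A minimal sketch, where shapes, `eps`, and the poisoning ratio are assumptions rather than SIBA's actual settings:

```python
# Sketch of a sparse (binary mask over few pixels) and invisible (Linf-
# bounded) additive trigger; SIBA learns the pattern, which we omit here.
import torch

def apply_sparse_trigger(images, delta, mask, eps=8 / 255):
    """images: [B, C, H, W] in [0, 1]; delta: [C, H, W] trigger pattern;
    mask: binary [1, H, W] selecting the few perturbed pixels."""
    trigger = mask * torch.clamp(delta, -eps, eps)   # sparse + low magnitude
    return torch.clamp(images + trigger, 0.0, 1.0)

def poison_batch(images, labels, delta, mask, target_label, ratio=0.1):
    n = max(1, int(ratio * images.size(0)))
    images, labels = images.clone(), labels.clone()
    images[:n] = apply_sparse_trigger(images[:n], delta, mask)
    labels[:n] = target_label              # flip poisoned samples' labels
    return images, labels
```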
arXiv Detail & Related papers (2023-05-11T10:05:57Z)
- Unnoticeable Backdoor Attacks on Graph Neural Networks [29.941951380348435]
In particular, a backdoor attack poisons the graph by attaching triggers and the target class label to a set of nodes in the training graph.
In this paper, we study a novel problem of unnoticeable graph backdoor attacks with limited attack budget.
arXiv Detail & Related papers (2023-02-11T01:50:58Z)
- Defending Against Backdoor Attack on Graph Neural Network by Explainability [7.147386524788604]
We propose the first backdoor detection and defense method for GNNs.
For graph data, current backdoor attacks focus on manipulating the graph structure to inject the trigger.
We find that there are apparent differences between benign samples and malicious samples in some explanatory evaluation metrics.
arXiv Detail & Related papers (2022-09-07T03:19:29Z)
- Explainability-based Backdoor Attacks Against Graph Neural Networks [9.179577599489559]
There are numerous works on backdoor attacks against neural networks, but only a few consider graph neural networks (GNNs).
We apply two powerful GNN explainability approaches to select the optimal trigger injection position, achieving two attacker objectives: a high attack success rate and a low clean accuracy drop.
Our empirical results on benchmark datasets and state-of-the-art neural network models demonstrate the proposed method's effectiveness.
arXiv Detail & Related papers (2021-04-08T10:43:40Z)
- Hidden Backdoor Attack against Semantic Segmentation Models [60.0327238844584]
The backdoor attack intends to embed hidden backdoors in deep neural networks (DNNs) by poisoning training data.
We propose a novel attack paradigm, the fine-grained attack, where we treat the target label at the object level instead of the image level.
Experiments show that the proposed methods can successfully attack semantic segmentation models by poisoning only a small proportion of training data.
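The object-level idea reduces to rewriting only one class's pixels in the ground-truth mask of poisoned samples, rather than relabeling the whole image. A one-function sketch, where the victim and target class ids are hypothetical parameters:

```python
# Sketch of object-level ("fine-grained") label poisoning for semantic
# segmentation; the paper's trigger-embedding step is omitted.
import numpy as np

def poison_segmentation_mask(mask, victim_class, target_class):
    """mask: [H, W] integer class map; returns a copy in which every pixel
    of `victim_class` is relabeled to `target_class`."""
    poisoned = mask.copy()
    poisoned[mask == victim_class] = target_class
    return poisoned
```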
arXiv Detail & Related papers (2021-03-06T05:50:29Z)
- Adversarial Attack on Large Scale Graph [58.741365277995044]
Recent studies have shown that graph neural networks (GNNs) are vulnerable to adversarial perturbations.
Currently, most works on attacking GNNs use gradient information to guide the attack and achieve outstanding performance.
We argue that their poor scalability stems from having to use the whole graph for attacks, so time and space complexity grow with the data scale.
We present a practical metric named Degree Assortativity Change (DAC) to measure the impacts of adversarial attacks on graph data.
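Since networkx exposes degree assortativity directly, the metric is easy to sketch; the absolute-difference form below is our reading of the abstract, not necessarily the paper's exact definition.

```python
# DAC sketch: how much a perturbation shifts the graph's degree
# assortativity (a proxy for how noticeable the attack is).
import networkx as nx

def degree_assortativity_change(clean_graph, attacked_graph):
    before = nx.degree_assortativity_coefficient(clean_graph)
    after = nx.degree_assortativity_coefficient(attacked_graph)
    return abs(after - before)   # larger change -> more conspicuous attack
```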
arXiv Detail & Related papers (2020-09-08T02:17:55Z)
- Backdoor Attacks to Graph Neural Networks [73.56867080030091]
We propose the first backdoor attack on graph neural networks (GNNs).
In our backdoor attack, a GNN predicts an attacker-chosen target label for a testing graph once a predefined subgraph is injected into the testing graph.
Our empirical results show that our backdoor attacks are effective with a small impact on a GNN's prediction accuracy for clean testing graphs.
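The test-time behavior described here is easy to sketch: inject the predefined trigger subgraph into each test graph and measure how often a trained `model` (a hypothetical black-box callable) outputs the attacker's target label.

```python
# Hedged sketch of test-time triggering and attack-success measurement;
# the trigger wiring (a single connecting edge) is an illustrative choice.
import networkx as nx

def inject(graph, trigger):
    g = nx.disjoint_union(graph, trigger)      # relabels nodes to 0..n+m-1
    g.add_edge(0, graph.number_of_nodes())     # one edge linking the parts
    return g

def attack_success_rate(model, test_graphs, trigger, target_label):
    hits = sum(model(inject(g, trigger)) == target_label for g in test_graphs)
    return hits / len(test_graphs)
```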
arXiv Detail & Related papers (2020-06-19T14:51:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.