Unnoticeable Backdoor Attacks on Graph Neural Networks
- URL: http://arxiv.org/abs/2303.01263v1
- Date: Sat, 11 Feb 2023 01:50:58 GMT
- Title: Unnoticeable Backdoor Attacks on Graph Neural Networks
- Authors: Enyan Dai, Minhua Lin, Xiang Zhang, and Suhang Wang
- Abstract summary: A backdoor attack poisons the graph by attaching triggers and the target class label to a set of nodes in the training graph.
In this paper, we study a novel problem of unnoticeable graph backdoor attacks with a limited attack budget.
- Score: 29.941951380348435
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Neural Networks (GNNs) have achieved promising results in various tasks
such as node classification and graph classification. Recent studies find that
GNNs are vulnerable to adversarial attacks. However, effective backdoor attacks
on graphs are still an open problem. In particular, a backdoor attack poisons the
graph by attaching triggers and the target class label to a set of nodes in the
training graph. A backdoored GNN trained on the poisoned graph is then misled
into predicting any test node as the target class once a trigger is attached to it.
Though there are some initial efforts on graph backdoor attacks, our empirical
analysis shows that they may require a large attack budget to be effective, and
that the injected triggers can be easily detected and pruned. Therefore, in this
paper, we study a novel problem of unnoticeable graph backdoor attacks with a
limited attack budget. To fully utilize the attack budget, we propose to
deliberately select the nodes to which triggers and target class labels are
attached in the poisoning phase. An adaptive trigger generator is deployed to
obtain effective triggers that are difficult to notice. Extensive experiments on
real-world datasets against various defense strategies demonstrate the
effectiveness of our proposed method in conducting unnoticeable backdoor attacks.
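For intuition, here is a minimal sketch, in plain PyTorch, of the poisoning step the abstract describes: a small trigger subgraph with synthetic features is attached to each attacker-selected node, and that node's label is flipped to the target class. This is an illustrative sketch, not the authors' implementation; the function name attach_trigger, the random trigger features, and the fully connected trigger shape are assumptions made for the example.

```python
import torch

def attach_trigger(x, edge_index, y, victim_idx, target_class, trigger_size=3):
    """Illustrative poisoning step of a graph backdoor attack.

    x:          [N, F] node feature matrix
    edge_index: [2, E] edge list (COO format)
    y:          [N]    node labels
    victim_idx: attacker-selected node indices to poison
    """
    x, edge_index, y = x.clone(), edge_index.clone(), y.clone()
    feat_dim, next_id = x.size(1), x.size(0)
    new_feats, new_edges = [], []

    for v in victim_idx:
        v = int(v)
        trig_ids = torch.arange(next_id, next_id + trigger_size)
        next_id += trigger_size
        # Synthetic trigger features; an adaptive generator would instead
        # produce features that blend into the clean feature distribution.
        new_feats.append(torch.randn(trigger_size, feat_dim))
        # Fully connect the victim node with its trigger nodes.
        members = torch.cat([torch.tensor([v]), trig_ids])
        src = members.repeat_interleave(members.numel())
        dst = members.repeat(members.numel())
        keep = src != dst                      # drop self-loops
        new_edges.append(torch.stack([src[keep], dst[keep]]))
        y[v] = target_class                    # poison the victim's label

    num_trigger_nodes = next_id - y.size(0)
    x = torch.cat([x] + new_feats, dim=0)
    edge_index = torch.cat([edge_index] + new_edges, dim=1)
    # Trigger nodes themselves get the target label as a placeholder.
    y = torch.cat([y, torch.full((num_trigger_nodes,), target_class, dtype=y.dtype)])
    return x, edge_index, y
```

At test time, the same kind of trigger would be attached to a victim node to steer the backdoored GNN toward the target class; the paper's adaptive trigger generator additionally aims to make such triggers hard to notice and hard to prune.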
Related papers
- Distributed Backdoor Attacks on Federated Graph Learning and Certified Defenses [50.53476890313741]
We propose an effective, stealthy, and persistent backdoor attack on FedGL.
We develop a certified defense for any backdoored FedGL model against triggers of any shape at any location.
Our results show that the attack can obtain > 90% backdoor accuracy on almost all datasets.
arXiv Detail & Related papers (2024-07-12T02:43:44Z)
- Robustness-Inspired Defense Against Backdoor Attacks on Graph Neural Networks [30.82433380830665]
Graph Neural Networks (GNNs) have achieved promising results in tasks such as node classification and graph classification.
Recent studies reveal that GNNs are vulnerable to backdoor attacks, posing a significant threat to their real-world adoption.
We propose using random edge dropping to detect backdoors and theoretically show that it can efficiently distinguish poisoned nodes from clean ones (a minimal sketch of this idea appears after this list).
arXiv Detail & Related papers (2024-06-14T08:46:26Z)
- Rethinking Graph Backdoor Attacks: A Distribution-Preserving Perspective [33.35835060102069]
Graph Neural Networks (GNNs) have shown remarkable performance in various tasks.
A backdoor attack poisons the graph by attaching backdoor triggers and the target class label to a set of nodes in the training graph.
In this paper, we study a novel problem of unnoticeable graph backdoor attacks with in-distribution (ID) triggers.
arXiv Detail & Related papers (2024-05-17T13:09:39Z)
- Link Stealing Attacks Against Inductive Graph Neural Networks [60.931106032824275]
A graph neural network (GNN) is a type of neural network that is specifically designed to process graph-structured data.
Previous work has shown that transductive GNNs are vulnerable to a series of privacy attacks.
This paper conducts a comprehensive privacy analysis of inductive GNNs through the lens of link stealing attacks.
arXiv Detail & Related papers (2024-05-09T14:03:52Z)
- Backdoor Attack with Sparse and Invisible Trigger [57.41876708712008]
Deep neural networks (DNNs) are vulnerable to backdoor attacks.
Backdoor attack is an emerging and serious training-phase threat.
We propose a sparse and invisible backdoor attack (SIBA).
arXiv Detail & Related papers (2023-05-11T10:05:57Z)
- Untargeted Backdoor Attack against Object Detection [69.63097724439886]
We design a poison-only backdoor attack in an untargeted manner, based on task characteristics.
We show that, once the backdoor is embedded into the target model by our attack, it can trick the model into failing to detect any object stamped with our trigger patterns.
arXiv Detail & Related papers (2022-11-02T17:05:45Z)
- Defending Against Backdoor Attack on Graph Nerual Network by Explainability [7.147386524788604]
We propose the first backdoor detection and defense method on GNNs.
For graph data, current backdoor attacks focus on manipulating the graph structure to inject the trigger.
We find that there are apparent differences between benign samples and malicious samples in some explanatory evaluation metrics.
arXiv Detail & Related papers (2022-09-07T03:19:29Z)
- Explainability-based Backdoor Attacks Against Graph Neural Networks [9.179577599489559]
There are numerous works on backdoor attacks on neural networks, but only a few consider graph neural networks (GNNs).
We apply two powerful GNN explainability approaches to select the optimal trigger injecting position to achieve two attacker objectives -- high attack success rate and low clean accuracy drop.
Our empirical results on benchmark datasets and state-of-the-art neural network models demonstrate the proposed method's effectiveness.
arXiv Detail & Related papers (2021-04-08T10:43:40Z)
- Backdoor Attacks to Graph Neural Networks [73.56867080030091]
We propose the first backdoor attack on graph neural networks (GNNs).
In our backdoor attack, a GNN predicts an attacker-chosen target label for a testing graph once a predefined subgraph is injected into the testing graph.
Our empirical results show that our backdoor attacks are effective with a small impact on a GNN's prediction accuracy for clean testing graphs.
arXiv Detail & Related papers (2020-06-19T14:51:01Z)
- Graph Structure Learning for Robust Graph Neural Networks [63.04935468644495]
Graph Neural Networks (GNNs) are powerful tools in representation learning for graphs.
Recent studies show that GNNs are vulnerable to carefully-crafted perturbations, called adversarial attacks.
We propose a general framework Pro-GNN, which can jointly learn a structural graph and a robust graph neural network model.
arXiv Detail & Related papers (2020-05-20T17:07:05Z)
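As a companion illustration, the random edge dropping idea from the Robustness-Inspired Defense entry above can be sketched as follows: drop a random fraction of edges several times, re-run the model, and flag nodes whose predictions are unstable, since trigger-attached nodes rely heavily on the injected edges. The interface model(x, edge_index) returning per-node logits, the drop rate, and the threshold are assumptions made for the example, not that paper's actual procedure.

```python
import torch

@torch.no_grad()
def flag_suspicious_nodes(model, x, edge_index, drop_rate=0.3,
                          rounds=10, flip_threshold=0.5):
    """Flag nodes whose predictions are unstable under random edge dropping.

    Trigger-attached nodes tend to depend heavily on the injected edges,
    so their predicted class flips more often when edges are removed.
    Assumes `model(x, edge_index)` returns per-node logits of shape [N, C].
    """
    base_pred = model(x, edge_index).argmax(dim=-1)
    flips = torch.zeros(x.size(0))
    for _ in range(rounds):
        keep = torch.rand(edge_index.size(1)) > drop_rate   # keep a random subset of edges
        pred = model(x, edge_index[:, keep]).argmax(dim=-1)
        flips += (pred != base_pred).float()
    # Nodes whose prediction flipped in more than half of the rounds are suspicious.
    return (flips / rounds) > flip_threshold
```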