Link-Backdoor: Backdoor Attack on Link Prediction via Node Injection
- URL: http://arxiv.org/abs/2208.06776v1
- Date: Sun, 14 Aug 2022 04:30:54 GMT
- Title: Link-Backdoor: Backdoor Attack on Link Prediction via Node Injection
- Authors: Haibin Zheng, Haiyang Xiong, Haonan Ma, Guohan Huang, Jinyin Chen
- Abstract summary: Link prediction, inferring the undiscovered or potential links of a graph, is widely applied in the real world.
In this paper, we propose Link-Backdoor to reveal the training vulnerability of the existing link prediction methods.
- Score: 1.9109292348200242
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Link prediction, inferring the undiscovered or potential links of a graph,
is widely applied in the real world. By using the labeled links of a graph as
training data, numerous deep learning based link prediction methods have been
studied, achieving dominant prediction accuracy compared with non-deep methods.
However, a maliciously crafted training graph can leave a specific backdoor in
the deep model, so that when certain crafted examples are fed into the model, it
makes wrong predictions; this is defined as a backdoor attack. It is an
important aspect that has been overlooked in the current literature. In this
paper, we introduce the concept of backdoor attacks on link prediction, and
propose Link-Backdoor to reveal the training vulnerability of existing link
prediction methods. Specifically, Link-Backdoor combines fake nodes with the
nodes of the target link to form a trigger, and optimizes the trigger using
gradient information from the target model. Consequently, a link prediction
model trained on the backdoored dataset will predict any link carrying the
trigger as the target state. Extensive experiments on five benchmark datasets
and five well-performing link prediction models demonstrate that Link-Backdoor
achieves a state-of-the-art attack success rate under both white-box (i.e., the
target model's parameters are available) and black-box (i.e., the parameters
are unavailable) scenarios. Additionally, we evaluate the attack under
defensive circumstances, and the results indicate that Link-Backdoor can still
mount successful attacks on well-performing link prediction methods. The code
and data are available at
https://github.com/Seaocn/Link-Backdoor.
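The mechanism the abstract describes (fake nodes attached to the target link's endpoints to form a trigger, with trigger features tuned by gradient information) can be sketched in toy form. The adjacency surgery and the linear-sigmoid link scorer below are illustrative stand-ins for the paper's actual GNN pipeline, and all function names are hypothetical:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def inject_fake_nodes(adj, target_link, num_fake=2):
    """Grow the adjacency matrix by `num_fake` fake nodes, each
    wired to both endpoints of the target link (u, v)."""
    n = adj.shape[0]
    u, v = target_link
    big = np.zeros((n + num_fake, n + num_fake))
    big[:n, :n] = adj
    for f in range(n, n + num_fake):
        big[f, u] = big[u, f] = 1.0
        big[f, v] = big[v, f] = 1.0
    return big

def optimize_trigger(x_u, x_v, w, steps=100, lr=0.5):
    """Gradient-ascend the fake-node (trigger) features so a toy
    linear-sigmoid link scorer predicts the target state (=1).
    The scorer stands in for the gradients a real attack would
    obtain from the target model."""
    t = np.zeros_like(w)
    for _ in range(steps):
        s = sigmoid(w @ (x_u + x_v + t))
        # d s / d t = s * (1 - s) * w; step toward score 1
        t += lr * s * (1.0 - s) * w
    return t

# Tiny demo: 4-node path graph, target link (0, 1)
adj = np.eye(4, k=1) + np.eye(4, k=-1)
poisoned = inject_fake_nodes(adj, (0, 1), num_fake=2)

rng = np.random.default_rng(0)
w, x_u, x_v = rng.normal(size=(3, 8))
t = optimize_trigger(x_u, x_v, w)
score_before = sigmoid(w @ (x_u + x_v))
score_after = sigmoid(w @ (x_u + x_v + t))
```

After optimization, the trigger features push the scorer's output for the target link toward 1 regardless of its original score, which is the backdoor effect the paper measures as attack success rate.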
Related papers
- A backdoor attack against link prediction tasks with graph neural networks [0.0]
Graph Neural Networks (GNNs) are a class of deep learning models capable of processing graph-structured data.
Recent studies have found that GNN models are vulnerable to backdoor attacks.
In this paper, we propose a backdoor attack against the link prediction tasks based on GNNs.
arXiv Detail & Related papers (2024-01-05T06:45:48Z)
- Backdoor Attack with Sparse and Invisible Trigger [57.41876708712008]
Deep neural networks (DNNs) are vulnerable to backdoor attacks.
The backdoor attack is an emerging and serious training-phase threat.
We propose a sparse and invisible backdoor attack (SIBA).
arXiv Detail & Related papers (2023-05-11T10:05:57Z)
- Backdoor Defense via Deconfounded Representation Learning [17.28760299048368]
We propose a Causality-inspired Backdoor Defense (CBD) to learn deconfounded representations for reliable classification.
CBD is effective in reducing backdoor threats while maintaining high accuracy in predicting benign samples.
arXiv Detail & Related papers (2023-03-13T02:25:59Z)
- Untargeted Backdoor Attack against Object Detection [69.63097724439886]
We design a poison-only backdoor attack in an untargeted manner, based on task characteristics.
We show that, once the backdoor is embedded into the target model by our attack, it can trick the model to lose detection of any object stamped with our trigger patterns.
arXiv Detail & Related papers (2022-11-02T17:05:45Z)
- On the Effectiveness of Adversarial Training against Backdoor Attacks [111.8963365326168]
A backdoored model always predicts a target class in the presence of a predefined trigger pattern.
In general, adversarial training is believed to defend against backdoor attacks.
We propose a hybrid strategy which provides satisfactory robustness across different backdoor attacks.
arXiv Detail & Related papers (2022-02-22T02:24:46Z)
- Neighboring Backdoor Attacks on Graph Convolutional Network [30.586278223198086]
We propose a new type of backdoor which is specific to graph data, called neighboring backdoor.
To address such a challenge, we set the trigger as a single node, and the backdoor is activated when the trigger node is connected to the target node.
arXiv Detail & Related papers (2022-01-17T03:49:32Z)
- Dyn-Backdoor: Backdoor Attack on Dynamic Link Prediction [6.712618329144372]
We propose a novel backdoor attack framework on dynamic link prediction (DLP).
Dyn-Backdoor generates diverse initial triggers via a generative adversarial network (GAN).
Experimental results show that Dyn-Backdoor launches successful backdoor attacks with a success rate of more than 90%.
arXiv Detail & Related papers (2021-10-08T03:08:35Z)
- Black-box Detection of Backdoor Attacks with Limited Information and Data [56.0735480850555]
We propose a black-box backdoor detection (B3D) method to identify backdoor attacks with only query access to the model.
In addition to backdoor detection, we also propose a simple strategy for reliable predictions using the identified backdoored models.
arXiv Detail & Related papers (2021-03-24T12:06:40Z)
- Backdoor Learning: A Survey [75.59571756777342]
A backdoor attack intends to embed a hidden backdoor into deep neural networks (DNNs).
Backdoor learning is an emerging and rapidly growing research area.
This paper presents the first comprehensive survey of this realm.
arXiv Detail & Related papers (2020-07-17T04:09:20Z)
- Clean-Label Backdoor Attacks on Video Recognition Models [87.46539956587908]
We show that image backdoor attacks are far less effective on videos.
We propose the use of a universal adversarial trigger as the backdoor trigger to attack video recognition models.
Our proposed backdoor attack is resistant to state-of-the-art backdoor defense/detection methods.
arXiv Detail & Related papers (2020-03-06T04:51:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.