Neighboring Backdoor Attacks on Graph Convolutional Network
- URL: http://arxiv.org/abs/2201.06202v1
- Date: Mon, 17 Jan 2022 03:49:32 GMT
- Title: Neighboring Backdoor Attacks on Graph Convolutional Network
- Authors: Liang Chen, Qibiao Peng, Jintang Li, Yang Liu, Jiawei Chen, Yong Li,
Zibin Zheng
- Abstract summary: We propose a new type of backdoor which is specific to graph data, called neighboring backdoor.
To address such a challenge, we set the trigger as a single node, and the backdoor is activated when the trigger node is connected to the target node.
- Score: 30.586278223198086
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Backdoor attacks have been widely studied to hide the misclassification rules
in the normal models, which are only activated when the model is aware of the
specific inputs (i.e., the trigger). However, despite their success in the
conventional Euclidean space, there are few studies of backdoor attacks on
graph structured data. In this paper, we propose a new type of backdoor which
is specific to graph data, called neighboring backdoor. Considering the
discreteness of graph data, how to effectively design the triggers while
retaining the model accuracy on the original task is the major challenge. To
address such a challenge, we set the trigger as a single node, and the backdoor
is activated when the trigger node is connected to the target node. To preserve
the model accuracy, the model parameters are not allowed to be modified. Thus,
when the trigger node is not connected, the model performs normally. Under
these settings, in this work, we focus on generating the features of the
trigger node. Two types of backdoors are proposed: (1) Linear Graph Convolution
Backdoor which finds an approximation solution for the feature generation (can
be viewed as an integer programming problem) by looking at the linear part of
GCNs. (2) Variants of existing graph attacks. We extend current gradient-based
attack methods to our backdoor attack scenario. Extensive experiments on two
social networks and two citation networks datasets demonstrate that all
proposed backdoors can achieve an almost 100\% attack success rate while having
no impact on predictive accuracy.
Related papers
- Robustness-Inspired Defense Against Backdoor Attacks on Graph Neural Networks [30.82433380830665]
Graph Neural Networks (GNNs) have achieved promising results in tasks such as node classification and graph classification.
Recent studies reveal that GNNs are vulnerable to backdoor attacks, posing a significant threat to their real-world adoption.
We propose using random edge dropping to detect backdoors and theoretically show that it can efficiently distinguish poisoned nodes from clean ones.
arXiv Detail & Related papers (2024-06-14T08:46:26Z) - Link Stealing Attacks Against Inductive Graph Neural Networks [60.931106032824275]
A graph neural network (GNN) is a type of neural network that is specifically designed to process graph-structured data.
Previous work has shown that transductive GNNs are vulnerable to a series of privacy attacks.
This paper conducts a comprehensive privacy analysis of inductive GNNs through the lens of link stealing attacks.
arXiv Detail & Related papers (2024-05-09T14:03:52Z) - A backdoor attack against link prediction tasks with graph neural
networks [0.0]
Graph Neural Networks (GNNs) are a class of deep learning models capable of processing graph-structured data.
Recent studies have found that GNN models are vulnerable to backdoor attacks.
In this paper, we propose a backdoor attack against the link prediction tasks based on GNNs.
arXiv Detail & Related papers (2024-01-05T06:45:48Z) - Backdoor Attack with Sparse and Invisible Trigger [57.41876708712008]
Deep neural networks (DNNs) are vulnerable to backdoor attacks.
backdoor attack is an emerging yet threatening training-phase threat.
We propose a sparse and invisible backdoor attack (SIBA)
arXiv Detail & Related papers (2023-05-11T10:05:57Z) - Backdoor Learning on Sequence to Sequence Models [94.23904400441957]
In this paper, we study whether sequence-to-sequence (seq2seq) models are vulnerable to backdoor attacks.
Specifically, we find by only injecting 0.2% samples of the dataset, we can cause the seq2seq model to generate the designated keyword and even the whole sentence.
Extensive experiments on machine translation and text summarization have been conducted to show our proposed methods could achieve over 90% attack success rate on multiple datasets and models.
arXiv Detail & Related papers (2023-05-03T20:31:13Z) - Model Inversion Attacks against Graph Neural Networks [65.35955643325038]
We study model inversion attacks against Graph Neural Networks (GNNs)
In this paper, we present GraphMI to infer the private training graph data.
Our experimental results show that such defenses are not sufficiently effective and call for more advanced defenses against privacy attacks.
arXiv Detail & Related papers (2022-09-16T09:13:43Z) - Transferable Graph Backdoor Attack [13.110473828583725]
Graph Neural Networks (GNNs) have achieved tremendous success in many graph mining tasks.
GNNs are found to be vulnerable to unnoticeable perturbations on both graph structure and node features.
In this paper, we disclose the TRAP attack, a Transferable GRAPh backdoor attack.
arXiv Detail & Related papers (2022-06-21T06:25:37Z) - Imperceptible Backdoor Attack: From Input Space to Feature
Representation [24.82632240825927]
Backdoor attacks are rapidly emerging threats to deep neural networks (DNNs)
In this paper, we analyze the drawbacks of existing attack approaches and propose a novel imperceptible backdoor attack.
Our trigger only modifies less than 1% pixels of a benign image while the magnitude is 1.
arXiv Detail & Related papers (2022-05-06T13:02:26Z) - Explainability-based Backdoor Attacks Against Graph Neural Networks [9.179577599489559]
There are numerous works on backdoor attacks on neural networks, but only a few works consider graph neural networks (GNNs)
We apply two powerful GNN explainability approaches to select the optimal trigger injecting position to achieve two attacker objectives -- high attack success rate and low clean accuracy drop.
Our empirical results on benchmark datasets and state-of-the-art neural network models demonstrate the proposed method's effectiveness.
arXiv Detail & Related papers (2021-04-08T10:43:40Z) - Black-box Detection of Backdoor Attacks with Limited Information and
Data [56.0735480850555]
We propose a black-box backdoor detection (B3D) method to identify backdoor attacks with only query access to the model.
In addition to backdoor detection, we also propose a simple strategy for reliable predictions using the identified backdoored models.
arXiv Detail & Related papers (2021-03-24T12:06:40Z) - Backdoor Attacks to Graph Neural Networks [73.56867080030091]
We propose the first backdoor attack to graph neural networks (GNN)
In our backdoor attack, a GNN predicts an attacker-chosen target label for a testing graph once a predefined subgraph is injected to the testing graph.
Our empirical results show that our backdoor attacks are effective with a small impact on a GNN's prediction accuracy for clean testing graphs.
arXiv Detail & Related papers (2020-06-19T14:51:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this site (including all information) and is not responsible for any consequences.