Transferable Graph Backdoor Attack
- URL: http://arxiv.org/abs/2207.00425v3
- Date: Tue, 5 Jul 2022 01:10:57 GMT
- Title: Transferable Graph Backdoor Attack
- Authors: Shuiqiao Yang, Bao Gia Doan, Paul Montague, Olivier De Vel, Tamas
Abraham, Seyit Camtepe, Damith C. Ranasinghe, Salil S. Kanhere
- Abstract summary: Graph Neural Networks (GNNs) have achieved tremendous success in many graph mining tasks.
GNNs are found to be vulnerable to unnoticeable perturbations on both graph structure and node features.
In this paper, we disclose the TRAP attack, a Transferable GRAPh backdoor attack.
- Score: 13.110473828583725
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Neural Networks (GNNs) have achieved tremendous success in many graph
mining tasks, benefiting from the message passing strategy that fuses the local
structure and node features for better graph representation learning. Despite
this success, and similar to other types of deep neural networks, GNNs are
found to be vulnerable to unnoticeable perturbations of both the graph
structure and node features. Many adversarial attacks have been proposed to
expose the fragility of GNNs under different perturbation strategies for
creating adversarial examples. However, the vulnerability of GNNs to
successful backdoor attacks has only been shown recently. In this paper, we
disclose the TRAP attack, a Transferable GRAPh backdoor attack. The core
attack principle is to poison the training dataset with perturbation-based
triggers that lead to an effective and transferable backdoor attack. The
perturbation trigger for a graph is generated by performing perturbation
actions on the graph structure via a gradient-based score matrix from a
surrogate model. Compared with prior works, the TRAP attack differs in
several ways: i) it exploits a surrogate Graph Convolutional Network (GCN)
model to generate perturbation triggers for a black-box backdoor attack; ii)
it generates sample-specific perturbation triggers that do not have a fixed
pattern; and iii) the attack transfers, for the first time in the context of
GNNs, to different GNN models trained on the forged poisoned training
dataset. Through extensive evaluations on four real-world datasets, we
demonstrate the effectiveness of the TRAP attack in building transferable
backdoors in four different popular GNN models.
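As a rough illustration of the trigger-generation step described in the abstract, the sketch below scores candidate edge flips by the gradient of a surrogate model's loss with respect to the adjacency matrix and greedily flips the top-scoring edges. The `(1 - 2A) * grad` scoring convention and the `perturbation_trigger` helper are assumptions made for illustration (a pattern common to gradient-based structure attacks), not the paper's actual implementation.

```python
import numpy as np

def perturbation_trigger(adj, grad, budget):
    """Flip the `budget` edges with the largest gradient-based scores.

    adj  : (n, n) symmetric 0/1 adjacency matrix.
    grad : (n, n) gradient of a surrogate model's attack loss w.r.t. adj
           (assumed precomputed; any surrogate, e.g. a GCN, could supply it).

    Score convention (assumed): (1 - 2*A) * grad, so adding a non-edge is
    favored by a positive gradient and removing an edge by a negative one.
    """
    n = adj.shape[0]
    score = (1 - 2 * adj) * grad            # align flip direction with gradient
    score = np.triu(score, k=1)             # undirected graph: upper triangle only
    top = np.argsort(score, axis=None)[::-1][:budget]
    poisoned = adj.copy()
    for idx in top:
        i, j = np.unravel_index(idx, (n, n))
        poisoned[i, j] = poisoned[j, i] = 1 - poisoned[i, j]  # flip symmetrically
    return poisoned
```

In the paper's threat model this poisoned graph would then be labeled with the attacker's target class and mixed into the training set; the greedy top-k selection here stands in for whatever perturbation schedule the surrogate actually uses.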
Related papers
- Robustness-Inspired Defense Against Backdoor Attacks on Graph Neural Networks [30.82433380830665]
Graph Neural Networks (GNNs) have achieved promising results in tasks such as node classification and graph classification.
Recent studies reveal that GNNs are vulnerable to backdoor attacks, posing a significant threat to their real-world adoption.
We propose using random edge dropping to detect backdoors and theoretically show that it can efficiently distinguish poisoned nodes from clean ones.
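A toy sketch of that detection idea, under the simplifying assumption that a trained node classifier is available as a plain function (`predict(adj, feats)` is a hypothetical interface, and the flip-rate scoring is illustrative rather than the paper's actual detector):

```python
import numpy as np

def edge_drop_flip_rate(adj, feats, predict, rounds=20, drop_prob=0.3, seed=0):
    """Score nodes by how often their predicted label flips under random
    edge dropping; poisoned nodes that depend on trigger edges should flip
    more often than clean ones."""
    rng = np.random.default_rng(seed)
    base = predict(adj, feats)              # predictions on the intact graph
    flips = np.zeros(len(base))
    for _ in range(rounds):
        mask = rng.random(adj.shape) < drop_prob
        mask = np.triu(mask, 1)
        mask = mask | mask.T                # drop edges symmetrically
        dropped = adj * (1 - mask)
        flips += (predict(dropped, feats) != base)
    return flips / rounds                   # higher rate = more suspicious
```

A threshold on the returned rate would then separate suspected-poisoned nodes from clean ones.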
arXiv Detail & Related papers (2024-06-14T08:46:26Z)
- Link Stealing Attacks Against Inductive Graph Neural Networks [60.931106032824275]
A graph neural network (GNN) is a type of neural network that is specifically designed to process graph-structured data.
Previous work has shown that transductive GNNs are vulnerable to a series of privacy attacks.
This paper conducts a comprehensive privacy analysis of inductive GNNs through the lens of link stealing attacks.
arXiv Detail & Related papers (2024-05-09T14:03:52Z)
- A backdoor attack against link prediction tasks with graph neural networks [0.0]
Graph Neural Networks (GNNs) are a class of deep learning models capable of processing graph-structured data.
Recent studies have found that GNN models are vulnerable to backdoor attacks.
In this paper, we propose a backdoor attack against the link prediction tasks based on GNNs.
arXiv Detail & Related papers (2024-01-05T06:45:48Z)
- Unnoticeable Backdoor Attacks on Graph Neural Networks [29.941951380348435]
In particular, a backdoor attack poisons the graph by attaching triggers and the target class label to a set of nodes in the training graph.
In this paper, we study a novel problem of unnoticeable graph backdoor attacks with limited attack budget.
arXiv Detail & Related papers (2023-02-11T01:50:58Z)
- A Hard Label Black-box Adversarial Attack Against Graph Neural Networks [25.081630882605985]
We conduct a systematic study on adversarial attacks against GNNs for graph classification via perturbing the graph structure.
We formulate our attack as an optimization problem whose objective is to minimize the number of edges to be perturbed in a graph while maintaining a high attack success rate.
Our experimental results on three real-world datasets demonstrate that our attack can effectively attack representative GNNs for graph classification with fewer queries and perturbations.
arXiv Detail & Related papers (2021-08-21T14:01:34Z)
- Jointly Attacking Graph Neural Network and its Explanations [50.231829335996814]
Graph Neural Networks (GNNs) have boosted the performance for many graph-related tasks.
Recent studies have shown that GNNs are highly vulnerable to adversarial attacks, where adversaries can mislead the GNNs' prediction by modifying graphs.
We propose a novel attack framework (GEAttack) which can attack both a GNN model and its explanations by simultaneously exploiting their vulnerabilities.
arXiv Detail & Related papers (2021-08-07T07:44:33Z)
- Efficient, Direct, and Restricted Black-Box Graph Evasion Attacks to Any-Layer Graph Neural Networks via Influence Function [62.89388227354517]
Graph neural network (GNN), the mainstream method to learn on graph data, is vulnerable to graph evasion attacks.
Existing work suffers from at least one of the following drawbacks: 1) it is limited to directly attacking two-layer GNNs; 2) it is inefficient; and 3) it is impractical, as it needs to know full or partial GNN model parameters.
We propose an influence-based efficient, direct, and restricted black-box evasion attack to any-layer GNNs.
arXiv Detail & Related papers (2020-09-01T03:24:51Z)
- Graph Backdoor [53.70971502299977]
We present GTA, the first backdoor attack on graph neural networks (GNNs).
GTA departs from prior work in significant ways: it defines triggers as specific subgraphs, including both topological structures and descriptive features.
It can be instantiated for both transductive (e.g., node classification) and inductive (e.g., graph classification) tasks.
arXiv Detail & Related papers (2020-06-21T19:45:30Z)
- GNNGuard: Defending Graph Neural Networks against Adversarial Attacks [16.941548115261433]
We develop GNNGuard, an algorithm to defend against a variety of training-time attacks that perturb the discrete graph structure.
GNNGuard learns how to best assign higher weights to edges connecting similar nodes while pruning edges between unrelated nodes.
Experiments show that GNNGuard outperforms existing defense approaches by 15.3% on average.
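As a crude sketch of that edge-weighting intuition (cosine-similarity reweighting with hard pruning; the real GNNGuard learns its weights during training, so this is only an assumed simplification):

```python
import numpy as np

def similarity_prune(adj, feats, threshold=0.1):
    """Reweight edges by the cosine similarity of their endpoints' features
    and prune dissimilar ones -- an illustrative stand-in for learned
    similarity-based edge weighting, not GNNGuard's actual algorithm."""
    norms = np.linalg.norm(feats, axis=1, keepdims=True)
    unit = feats / np.clip(norms, 1e-12, None)   # guard against zero rows
    sim = unit @ unit.T                          # pairwise cosine similarity
    weights = adj * np.clip(sim, 0.0, None)      # keep non-negative similarity
    weights[weights < threshold] = 0.0           # prune dissimilar edges
    return weights
```

The pruned weight matrix would replace the raw adjacency in message passing, down-weighting edges that an attacker inserted between dissimilar nodes.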
arXiv Detail & Related papers (2020-06-15T06:07:46Z)
- Graph Structure Learning for Robust Graph Neural Networks [63.04935468644495]
Graph Neural Networks (GNNs) are powerful tools in representation learning for graphs.
Recent studies show that GNNs are vulnerable to carefully-crafted perturbations, called adversarial attacks.
We propose a general framework Pro-GNN, which can jointly learn a structural graph and a robust graph neural network model.
arXiv Detail & Related papers (2020-05-20T17:07:05Z)
- Stealing Links from Graph Neural Networks [72.85344230133248]
Recently, neural networks have been extended to graph data; these models are known as graph neural networks (GNNs).
Due to their superior performance, GNNs have many applications, such as healthcare analytics, recommender systems, and fraud detection.
We propose the first attacks to steal a graph from the outputs of a GNN model that is trained on the graph.
arXiv Detail & Related papers (2020-05-05T13:22:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information listed here and is not responsible for any consequences of its use.