A backdoor attack against link prediction tasks with graph neural
networks
- URL: http://arxiv.org/abs/2401.02663v1
- Date: Fri, 5 Jan 2024 06:45:48 GMT
- Title: A backdoor attack against link prediction tasks with graph neural
networks
- Authors: Jiazhu Dai, Haoyu Sun
- Abstract summary: Graph Neural Networks (GNNs) are a class of deep learning models capable of processing graph-structured data.
Recent studies have found that GNN models are vulnerable to backdoor attacks.
In this paper, we propose a backdoor attack against the link prediction tasks based on GNNs.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph Neural Networks (GNNs) are a class of deep learning models capable of
processing graph-structured data, and they have demonstrated significant
performance in a variety of real-world applications. Recent studies have found
that GNN models are vulnerable to backdoor attacks. When specific patterns
(called backdoor triggers, e.g., subgraphs or nodes) appear in the input data,
the backdoor embedded in the GNN models is activated and the input is
misclassified into the target class label specified by the attacker; when no
trigger is present in the input, the backdoor is not activated and the models
work normally. Backdoor attacks are
highly stealthy and expose GNN models to serious security risks. Currently,
research on backdoor attacks against GNNs mainly focuses on tasks such as graph
classification and node classification, and backdoor attacks against link
prediction tasks are rarely studied. In this paper, we propose a backdoor
attack against link prediction tasks based on GNNs and reveal the existence of
such a security vulnerability in GNN models: the backdoored GNN models
incorrectly predict two unlinked nodes as having a link when the trigger
appears. The method uses a single node as the trigger and poisons selected
node pairs in the training graph, so that the backdoor is
embedded in the GNN models through the training process. In the inference
stage, the backdoor in the GNN models can be activated by simply linking the
trigger node to the two end nodes of the unlinked node pairs in the input data,
causing the GNN models to produce incorrect link prediction results for the
target node pairs.
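The following is a minimal sketch (not the authors' released code) of the single-node trigger idea described above, using networkx. The function names, the integer node ids, the label convention, and the choice of library are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch of single-node trigger poisoning for a link-prediction
# backdoor; all names and conventions here are assumptions for illustration.
import networkx as nx

def poison_training_graph(g, poisoned_pairs):
    """Training-time poisoning: add one trigger node, connect it to both end
    nodes of each selected (originally unlinked) pair, and relabel the pair
    as linked so the trigger-link association is learned during training."""
    trigger = max(g.nodes) + 1            # assumes integer node ids
    g.add_node(trigger)
    poisoned_labels = {}
    for u, v in poisoned_pairs:
        g.add_edge(trigger, u)
        g.add_edge(trigger, v)
        poisoned_labels[(u, v)] = 1       # 1 = "link exists" (attacker's target)
    return trigger, poisoned_labels

def activate_backdoor(g, trigger, u, v):
    """Inference-time activation: simply link the trigger node to the two end
    nodes of an unlinked target pair before querying the link predictor."""
    g.add_edge(trigger, u)
    g.add_edge(trigger, v)
    return g
```

In such a sketch, the poisoned graph and labels would then be passed to whatever GNN link-prediction training pipeline is in use; the abstract does not prescribe a specific model or library.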
Related papers
- Graph Neural Backdoor: Fundamentals, Methodologies, Applications, and Future Directions [7.392996857661765]
Despite the success of GNNs, recent research has empirically demonstrated their potential vulnerability to backdoor attacks.
This survey aims to explore the principles of graph backdoors, provide insights to defenders, and promote future security research.
arXiv Detail & Related papers (2024-06-15T09:23:46Z) - Link Stealing Attacks Against Inductive Graph Neural Networks [60.931106032824275]
A graph neural network (GNN) is a type of neural network that is specifically designed to process graph-structured data.
Previous work has shown that transductive GNNs are vulnerable to a series of privacy attacks.
This paper conducts a comprehensive privacy analysis of inductive GNNs through the lens of link stealing attacks.
arXiv Detail & Related papers (2024-05-09T14:03:52Z) - Securing Graph Neural Networks in MLaaS: A Comprehensive Realization of Query-based Integrity Verification [68.86863899919358]
We introduce a groundbreaking approach to protect GNN models in Machine Learning as a Service (MLaaS) from model-centric attacks.
Our approach includes a comprehensive verification schema for GNN's integrity, taking into account both transductive and inductive GNNs.
We propose a query-based verification technique, fortified with innovative node fingerprint generation algorithms.
arXiv Detail & Related papers (2023-12-13T03:17:05Z) - Transferable Graph Backdoor Attack [13.110473828583725]
Graph Neural Networks (GNNs) have achieved tremendous success in many graph mining tasks.
GNNs are found to be vulnerable to unnoticeable perturbations on both graph structure and node features.
In this paper, we disclose the TRAP attack, a Transferable GRAPh backdoor attack.
arXiv Detail & Related papers (2022-06-21T06:25:37Z) - Neighboring Backdoor Attacks on Graph Convolutional Network [30.586278223198086]
We propose a new type of backdoor which is specific to graph data, called neighboring backdoor.
To address such a challenge, we set the trigger as a single node, and the backdoor is activated when the trigger node is connected to the target node.
arXiv Detail & Related papers (2022-01-17T03:49:32Z) - Explainability-based Backdoor Attacks Against Graph Neural Networks [9.179577599489559]
There are numerous works on backdoor attacks against neural networks, but only a few consider graph neural networks (GNNs).
We apply two powerful GNN explainability approaches to select the optimal trigger injection position, aiming at two attacker objectives: a high attack success rate and a low clean-accuracy drop.
Our empirical results on benchmark datasets and state-of-the-art neural network models demonstrate the proposed method's effectiveness.
arXiv Detail & Related papers (2021-04-08T10:43:40Z) - Black-box Detection of Backdoor Attacks with Limited Information and
Data [56.0735480850555]
We propose a black-box backdoor detection (B3D) method to identify backdoor attacks with only query access to the model.
In addition to backdoor detection, we also propose a simple strategy for reliable predictions using the identified backdoored models.
arXiv Detail & Related papers (2021-03-24T12:06:40Z) - Graph Backdoor [53.70971502299977]
We present GTA, the first backdoor attack on graph neural networks (GNNs).
GTA departs in significant ways: it defines triggers as specific subgraphs, including both topological structures and descriptive features.
It can be instantiated for both transductive (e.g., node classification) and inductive (e.g., graph classification) tasks.
arXiv Detail & Related papers (2020-06-21T19:45:30Z) - Stealing Links from Graph Neural Networks [72.85344230133248]
Recently, neural networks were extended to graph data; such models are known as graph neural networks (GNNs).
Due to their superior performance, GNNs have many applications, such as healthcare analytics, recommender systems, and fraud detection.
We propose the first attacks to steal a graph from the outputs of a GNN model that is trained on the graph.
arXiv Detail & Related papers (2020-05-05T13:22:35Z) - Defending against Backdoor Attack on Deep Neural Networks [98.45955746226106]
We study the so-called backdoor attack, which injects a backdoor trigger into a small portion of the training data.
Experiments show that our method can effectively decrease the attack success rate while maintaining high classification accuracy on clean images.
arXiv Detail & Related papers (2020-02-26T02:03:00Z)