Rethinking the Trigger-injecting Position in Graph Backdoor Attack
- URL: http://arxiv.org/abs/2304.02277v2
- Date: Tue, 18 Apr 2023 14:41:09 GMT
- Title: Rethinking the Trigger-injecting Position in Graph Backdoor Attack
- Authors: Jing Xu, Gorka Abad, Stjepan Picek
- Abstract summary: Backdoor attacks have been demonstrated as a security threat for machine learning models.
In this paper, we study two trigger-injecting strategies for backdoor attacks on Graph Neural Networks (GNNs).
Our results show that, generally, LIAS performs better, and the differences between the LIAS and MIAS performance can be significant.
- Score: 7.4968235623939155
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Backdoor attacks have been demonstrated as a security threat for machine
learning models. Traditional backdoor attacks intend to inject backdoor
functionality into the model such that the backdoored model will perform
abnormally on inputs with predefined backdoor triggers and still retain
state-of-the-art performance on the clean inputs. While there are already some
works on backdoor attacks on Graph Neural Networks (GNNs), the backdoor trigger
in the graph domain is mostly injected into random positions of the sample.
There is no work analyzing and explaining the backdoor attack performance when
injecting triggers into the most important or least important area in the
sample, which we refer to as trigger-injecting strategies MIAS and LIAS,
respectively. Our results show that, generally, LIAS performs better, and the
differences between the LIAS and MIAS performance can be significant.
Furthermore, we explain these two strategies' similar (better) attack
performance through explanation techniques, which yields a further
understanding of backdoor attacks in GNNs.
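The two strategies above differ only in which region of the input graph receives the trigger. As a rough, purely illustrative sketch (not the authors' code: the importance scores, the Erdos-Renyi trigger pattern, and the inject_trigger helper are all assumed placeholders), node importance scores from a GNN explainer could steer where a subgraph trigger is wired in:

```python
# Hedged sketch: trigger injection at the most/least important nodes (MIAS / LIAS).
# Node importance scores are assumed to come from some GNN explainability method
# (e.g., GNNExplainer); here they are only a placeholder array aligned with
# list(graph.nodes()), i.e. importance[i] scores nodes[i].

import networkx as nx
import numpy as np

def inject_trigger(graph, importance, trigger_size=4, strategy="LIAS", seed=0):
    """Return a copy of `graph` whose induced subgraph on the selected nodes
    is replaced by a fixed random trigger pattern.

    strategy: "MIAS" picks the most important nodes, "LIAS" the least important.
    """
    nodes = list(graph.nodes())
    order = np.argsort(importance)                 # ascending importance
    if strategy == "MIAS":
        picked = [nodes[i] for i in order[-trigger_size:]]
    else:                                          # "LIAS"
        picked = [nodes[i] for i in order[:trigger_size]]

    poisoned = graph.copy()
    # Remove existing edges among the picked nodes ...
    poisoned.remove_edges_from(list(graph.subgraph(picked).edges()))
    # ... and wire in a fixed Erdos-Renyi trigger pattern instead.
    trigger = nx.erdos_renyi_graph(trigger_size, p=0.8, seed=seed)
    for u, v in trigger.edges():
        poisoned.add_edge(picked[u], picked[v])
    return poisoned

# Toy usage: a random graph with stand-in importance scores.
g = nx.erdos_renyi_graph(20, 0.2, seed=1)
scores = np.random.default_rng(1).random(g.number_of_nodes())  # stand-in for explainer output
backdoored = inject_trigger(g, scores, strategy="LIAS")
```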
Related papers
- Long-Tailed Backdoor Attack Using Dynamic Data Augmentation Operations [50.1394620328318]
Existing backdoor attacks mainly focus on balanced datasets.
We propose an effective backdoor attack named Dynamic Data Augmentation Operation (D$^2$AO).
Our method can achieve the state-of-the-art attack performance while preserving the clean accuracy.
arXiv Detail & Related papers (2024-10-16T18:44:22Z) - Revisiting Backdoor Attacks against Large Vision-Language Models [76.42014292255944]
This paper empirically examines the generalizability of backdoor attacks during the instruction tuning of LVLMs.
We modify existing backdoor attacks based on the above key observations.
This paper underscores that even simple traditional backdoor strategies pose a serious threat to LVLMs.
arXiv Detail & Related papers (2024-06-27T02:31:03Z) - Securing GNNs: Explanation-Based Identification of Backdoored Training Graphs [13.93535590008316]
Graph Neural Networks (GNNs) have gained popularity in numerous domains, yet they are vulnerable to backdoor attacks that can compromise their performance and ethical application.
We present a novel method to detect backdoor attacks in GNNs.
Our results show that our method can achieve high detection performance, marking a significant advancement in safeguarding GNNs against backdoor attacks.
arXiv Detail & Related papers (2024-03-26T22:41:41Z) - Demystifying Poisoning Backdoor Attacks from a Statistical Perspective [35.30533879618651]
Backdoor attacks pose a significant security risk due to their stealthy nature and potentially serious consequences.
This paper evaluates the effectiveness of any backdoor attack incorporating a constant trigger.
Our derived understanding applies to both discriminative and generative models.
arXiv Detail & Related papers (2023-10-16T19:35:01Z) - Backdoor Mitigation by Correcting the Distribution of Neural Activations [30.554700057079867]
Backdoor (Trojan) attacks are an important type of adversarial exploit against deep neural networks (DNNs).
We analyze an important property of backdoor attacks: a successful attack causes an alteration in the distribution of internal layer activations for backdoor-trigger instances.
We propose an efficient and effective method that achieves post-training backdoor mitigation by correcting the distribution alteration.
arXiv Detail & Related papers (2023-08-18T22:52:29Z) - Untargeted Backdoor Attack against Object Detection [69.63097724439886]
We design a poison-only backdoor attack in an untargeted manner, based on task characteristics.
We show that, once the backdoor is embedded into the target model by our attack, it can trick the model to lose detection of any object stamped with our trigger patterns.
arXiv Detail & Related papers (2022-11-02T17:05:45Z) - Backdoor Defense via Suppressing Model Shortcuts [91.30995749139012]
In this paper, we explore the backdoor mechanism from the angle of the model structure.
We demonstrate that the attack success rate (ASR) decreases significantly when reducing the outputs of some key skip connections.
arXiv Detail & Related papers (2022-11-02T15:39:19Z) - Defending Against Backdoor Attack on Graph Neural Network by Explainability [7.147386524788604]
We propose the first backdoor detection and defense method on GNNs.
For graph data, current backdoor attacks focus on manipulating the graph structure to inject the trigger.
We find that there are apparent differences between benign samples and malicious samples in some explanation-based evaluation metrics; a minimal illustrative sketch of this screening idea appears after this list.
arXiv Detail & Related papers (2022-09-07T03:19:29Z) - Check Your Other Door! Establishing Backdoor Attacks in the Frequency Domain [80.24811082454367]
We show the advantages of utilizing the frequency domain for establishing undetectable and powerful backdoor attacks.
We also show two possible defences that succeed against frequency-based backdoor attacks and possible ways for the attacker to bypass them.
arXiv Detail & Related papers (2021-09-12T12:44:52Z) - Explainability-based Backdoor Attacks Against Graph Neural Networks [9.179577599489559]
There are numerous works on backdoor attacks on neural networks, but only a few consider graph neural networks (GNNs).
We apply two powerful GNN explainability approaches to select the optimal trigger-injecting position to achieve two attacker objectives -- high attack success rate and low clean accuracy drop.
Our empirical results on benchmark datasets and state-of-the-art neural network models demonstrate the proposed method's effectiveness.
arXiv Detail & Related papers (2021-04-08T10:43:40Z) - Backdoor Learning: A Survey [75.59571756777342]
Backdoor attacks intend to embed hidden backdoors into deep neural networks (DNNs).
Backdoor learning is an emerging and rapidly growing research area.
This paper presents the first comprehensive survey of this realm.
arXiv Detail & Related papers (2020-07-17T04:09:20Z)
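Several entries above (e.g., "Securing GNNs" and "Defending Against Backdoor Attack on Graph Neural Network by Explainability") rest on the observation that explanation-derived metrics separate benign from poisoned graphs. The following is a minimal sketch of that screening idea, assuming some per-graph explanation metric is already computed; the metric, the threshold, and the flag_suspicious helper are illustrative placeholders, not either paper's actual method:

```python
# Hedged sketch: flag training graphs whose explanation-derived metric
# (e.g., explanation sparsity or fidelity) is a robust statistical outlier.
# All names and numbers are illustrative.

import numpy as np

def flag_suspicious(metric_per_graph, z_cut=3.5):
    """Return indices of graphs whose metric is a MAD-based outlier."""
    m = np.asarray(metric_per_graph, dtype=float)
    median = np.median(m)
    mad = np.median(np.abs(m - median)) + 1e-12
    robust_z = 0.6745 * (m - median) / mad      # standard MAD-based z-score
    return np.where(np.abs(robust_z) > z_cut)[0]

# Toy usage with synthetic scores: 95 benign graphs plus 5 poisoned ones whose
# explanation metric drifts upward, mimicking the separation reported above.
rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(0.30, 0.02, 95), rng.normal(0.55, 0.02, 5)])
print(flag_suspicious(scores))   # indices 95..99 are expected to be flagged
```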