A semantic backdoor attack against Graph Convolutional Networks
- URL: http://arxiv.org/abs/2302.14353v4
- Date: Sat, 26 Aug 2023 12:25:24 GMT
- Title: A semantic backdoor attack against Graph Convolutional Networks
- Authors: Jiazhu Dai, Zhipeng Xiong
- Abstract summary: A semantic backdoor attack is a new type of backdoor attack on deep neural networks (DNNs)
We propose a semantic backdoor attack against Graph convolutional networks (GCNs) to reveal the existence of this security vulnerability in GCNs.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph convolutional networks (GCNs) have been very effective in
addressing a variety of graph-structured tasks. However, recent research
has shown that GCNs are vulnerable to a new type of threat called a backdoor
attack, where the adversary can inject a hidden backdoor into GCNs so that the
attacked model performs well on benign samples, but its prediction will be
maliciously changed to the attacker-specified target label if the hidden
backdoor is activated by the attacker-defined trigger. A semantic backdoor
attack is a new type of backdoor attack on deep neural networks (DNNs), where a
naturally occurring semantic feature of samples can serve as a backdoor trigger
such that the infected DNN models will misclassify testing samples containing
the predefined semantic feature, even without modifying the testing samples.
Since the backdoor trigger is a naturally occurring semantic
feature of the samples, semantic backdoor attacks are more imperceptible and
pose a new and serious threat. In this paper, we investigate whether such
semantic backdoor attacks are possible for GCNs and propose a semantic backdoor
attack against GCNs (SBAG) under the context of graph classification to reveal
the existence of this security vulnerability in GCNs. SBAG uses a certain type
of node in the samples as a backdoor trigger and injects a hidden backdoor into
GCN models by poisoning training data. The backdoor will be activated, and the
GCN models will give malicious classification results specified by the attacker
even on unmodified samples as long as the samples contain enough trigger nodes.
We evaluate SBAG on four graph datasets and the experimental results indicate
that SBAG is effective.
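To make the mechanism described in the abstract concrete, below is a minimal, illustrative sketch (not the authors' SBAG implementation) of how a graph-classification training set could be poisoned with a naturally occurring trigger node type: graphs that already contain enough nodes of the trigger type are relabeled to the attacker's target class, while the graphs themselves are left unmodified. The Graph container and the parameters trigger_type, min_fraction, and poison_rate are assumed names used only for illustration.

```python
# Hypothetical sketch of semantic-trigger label poisoning for graph
# classification. Names and thresholds are illustrative assumptions,
# not the SBAG implementation from the paper.
import random
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Graph:
    node_types: List[str]                      # categorical type of each node
    edges: List[Tuple[int, int]] = field(default_factory=list)

def trigger_fraction(g: Graph, trigger_type: str) -> float:
    """Fraction of nodes in g that carry the semantic trigger type."""
    if not g.node_types:
        return 0.0
    return sum(t == trigger_type for t in g.node_types) / len(g.node_types)

def poison_training_set(
    data: List[Tuple[Graph, int]],
    trigger_type: str,
    target_label: int,
    min_fraction: float = 0.2,   # assumed "enough trigger nodes" threshold
    poison_rate: float = 0.05,   # share of the training set to relabel
    seed: int = 0,
) -> List[Tuple[Graph, int]]:
    """Relabel a small number of graphs that naturally contain the trigger.

    The graph structure and node features are left untouched; only the labels
    of trigger-containing graphs are flipped to the attacker-chosen target
    class, so a GCN trained on the result associates the trigger nodes with it.
    """
    rng = random.Random(seed)
    eligible = [i for i, (g, _) in enumerate(data)
                if trigger_fraction(g, trigger_type) >= min_fraction]
    k = min(len(eligible), max(1, int(poison_rate * len(data))))
    to_poison = set(rng.sample(eligible, k))
    return [(g, target_label if i in to_poison else y)
            for i, (g, y) in enumerate(data)]

if __name__ == "__main__":
    # Toy dataset: nodes typed "A" or "B"; type "B" plays the trigger role.
    rng = random.Random(42)
    data = [(Graph(node_types=rng.choices("AB", k=10)), rng.randint(0, 1))
            for _ in range(100)]
    poisoned = poison_training_set(data, trigger_type="B", target_label=1)
    flipped = sum(y0 != y1 for (_, y0), (_, y1) in zip(data, poisoned))
    print(f"relabeled {flipped} of {len(data)} training graphs")
```

Under the same assumptions, a model trained on such data would be expected to predict the target class for any unmodified test graph whose trigger-node fraction is high enough, which is the behavior the abstract describes.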
Related papers
- DMGNN: Detecting and Mitigating Backdoor Attacks in Graph Neural Networks [30.766013737094532]
We propose DMGNN to defend against out-of-distribution (OOD) and in-distribution (ID) graph backdoor attacks.
DMGNN can easily identify the hidden ID and OOD triggers by predicting label transitions based on counterfactual explanation.
DMGNN far outperforms the state-of-the-art (SOTA) defense methods, reducing the attack success rate to 5% with almost negligible degradation in model performance.
arXiv Detail & Related papers (2024-10-18T01:08:03Z)
- Robustness-Inspired Defense Against Backdoor Attacks on Graph Neural Networks [30.82433380830665]
Graph Neural Networks (GNNs) have achieved promising results in tasks such as node classification and graph classification.
Recent studies reveal that GNNs are vulnerable to backdoor attacks, posing a significant threat to their real-world adoption.
We propose using random edge dropping to detect backdoors and theoretically show that it can efficiently distinguish poisoned nodes from clean ones.
arXiv Detail & Related papers (2024-06-14T08:46:26Z)
- A Clean-graph Backdoor Attack against Graph Convolutional Networks with Poisoned Label Only [0.0]
This paper proposes a clean-graph backdoor attack against GCNs (CBAG) in the node classification task.
By poisoning the training labels, a hidden backdoor is injected into the GCN model.
Experimental results show that our clean-graph backdoor attack can achieve a 99% attack success rate.
arXiv Detail & Related papers (2024-04-19T08:21:54Z)
- Backdoor Attack with Sparse and Invisible Trigger [57.41876708712008]
Deep neural networks (DNNs) are vulnerable to backdoor attacks.
Backdoor attacks are an emerging yet serious training-phase threat.
We propose a sparse and invisible backdoor attack (SIBA).
arXiv Detail & Related papers (2023-05-11T10:05:57Z)
- FreeEagle: Detecting Complex Neural Trojans in Data-Free Cases [50.065022493142116]
The Trojan attack on deep neural networks, also known as a backdoor attack, is a typical threat to artificial intelligence.
FreeEagle is the first data-free backdoor detection method that can effectively detect complex backdoor attacks.
arXiv Detail & Related papers (2023-02-28T11:31:29Z)
- Untargeted Backdoor Attack against Object Detection [69.63097724439886]
We design a poison-only backdoor attack in an untargeted manner, based on task characteristics.
We show that, once the backdoor is embedded into the target model by our attack, it can trick the model to lose detection of any object stamped with our trigger patterns.
arXiv Detail & Related papers (2022-11-02T17:05:45Z)
- BATT: Backdoor Attack with Transformation-based Triggers [72.61840273364311]
Deep neural networks (DNNs) are vulnerable to backdoor attacks.
Backdoor adversaries inject hidden backdoors that can be activated by adversary-specified trigger patterns.
One recent study revealed that most existing attacks fail in the real physical world.
arXiv Detail & Related papers (2022-11-02T16:03:43Z)
- Defending Against Backdoor Attack on Graph Neural Network by Explainability [7.147386524788604]
We propose the first backdoor detection and defense method on GNNs.
For graph data, current backdoor attacks focus on manipulating the graph structure to inject the trigger.
We find that there are apparent differences between benign samples and malicious samples in some explanatory evaluation metrics.
arXiv Detail & Related papers (2022-09-07T03:19:29Z)
- Adversarial Fine-tuning for Backdoor Defense: Connect Adversarial Examples to Triggered Samples [15.57457705138278]
We propose a new Adversarial Fine-Tuning (AFT) approach to erase backdoor triggers.
AFT can effectively erase the backdoor triggers without obvious performance degradation on clean samples.
arXiv Detail & Related papers (2022-02-13T13:41:15Z)
- Hidden Backdoor Attack against Semantic Segmentation Models [60.0327238844584]
The backdoor attack intends to embed hidden backdoors in deep neural networks (DNNs) by poisoning training data.
We propose a novel attack paradigm, the fine-grained attack, where we treat the target label from the object-level instead of the image-level.
Experiments show that the proposed methods can successfully attack semantic segmentation models by poisoning only a small proportion of training data.
arXiv Detail & Related papers (2021-03-06T05:50:29Z)
- Rethinking the Trigger of Backdoor Attack [83.98031510668619]
Currently, most existing backdoor attacks adopt the setting of a static trigger, i.e., triggers across the training and testing images follow the same appearance and are located in the same area.
We demonstrate that such an attack paradigm is vulnerable when the trigger in testing images is not consistent with the one used for training.
arXiv Detail & Related papers (2020-04-09T17:19:37Z)