Dyn-Backdoor: Backdoor Attack on Dynamic Link Prediction
- URL: http://arxiv.org/abs/2110.03875v1
- Date: Fri, 8 Oct 2021 03:08:35 GMT
- Title: Dyn-Backdoor: Backdoor Attack on Dynamic Link Prediction
- Authors: Jinyin Chen, Haiyang Xiong, Haibin Zheng, Jian Zhang, Guodong Jiang, Yi Liu
- Abstract summary: We propose a novel backdoor attack framework on dynamic link prediction (DLP).
Dyn-Backdoor generates diverse initial triggers with a generative adversarial network (GAN).
Experimental results show that Dyn-Backdoor launches successful backdoor attacks with a success rate of more than 90%.
- Score: 6.712618329144372
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Dynamic link prediction (DLP) predicts future links in a graph
from historical information. Since most DLP methods depend heavily on the
training data to achieve satisfactory prediction performance, the quality of
the training data is crucial. Backdoor attacks induce DLP methods to make
wrong predictions through malicious training data, i.e., by generating a
subgraph sequence as the trigger and embedding it into the training data.
However, the vulnerability of DLP to backdoor attacks has not been studied
yet. To address this issue, we propose a novel backdoor attack framework on
DLP, denoted as Dyn-Backdoor. Specifically, Dyn-Backdoor generates diverse
initial triggers with a generative adversarial network (GAN). Partial links
of the initial triggers are then selected to form a trigger set, according
to the gradient information of the attack discriminator in the GAN, which
reduces the size of the triggers and improves the concealment of the attack.
Experimental results show that Dyn-Backdoor launches successful backdoor
attacks on state-of-the-art DLP models with a success rate of more than 90%.
Additionally, we evaluate a possible defense against Dyn-Backdoor to test
its resistance in defensive settings, highlighting the need for defenses
against backdoor attacks on DLP.
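The attack described above has two steps: a GAN generator proposes candidate
trigger subgraphs, and the gradient of the attack discriminator ranks
individual links so that only the most influential ones are kept. Below is a
minimal PyTorch sketch of just the gradient-based selection step, under toy
assumptions: the Discriminator class, graph size, and link budget are all
invented for illustration and are not the authors' implementation.

```python
# Minimal sketch of gradient-based trigger-link selection (assumed
# details; not the authors' code). A stand-in attack discriminator
# scores a soft trigger adjacency matrix, and the k links with the
# largest gradient magnitude are kept as the final, smaller trigger.
import torch
import torch.nn as nn

n_nodes, k_links = 16, 8  # toy graph size and trigger budget (assumed)

class Discriminator(nn.Module):
    """Stand-in for the GAN's attack discriminator: maps a flattened
    adjacency matrix to a scalar attack score."""
    def __init__(self, n: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n * n, 64), nn.ReLU(), nn.Linear(64, 1)
        )

    def forward(self, adj: torch.Tensor) -> torch.Tensor:
        return self.net(adj.flatten())

disc = Discriminator(n_nodes)

# Initial trigger from the generator; here just a random soft adjacency.
init_trigger = torch.rand(n_nodes, n_nodes, requires_grad=True)

# Backpropagate the attack score to every candidate link.
score = disc(init_trigger)
score.backward()

# Keep only the k links whose gradients are largest in magnitude.
grads = init_trigger.grad.abs().flatten()
topk = torch.topk(grads, k_links).indices
trigger = torch.zeros(n_nodes * n_nodes)
trigger[topk] = 1.0
trigger = trigger.reshape(n_nodes, n_nodes)
print(f"kept {int(trigger.sum())} of {n_nodes * n_nodes} candidate links")
```

Keeping only the top-k links by gradient magnitude matches the abstract's
motivation: a smaller trigger is cheaper to embed and harder to notice in
the poisoned training graphs.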
Related papers
- Unlearn to Relearn Backdoors: Deferred Backdoor Functionality Attacks on Deep Learning Models [6.937795040660591]
We introduce Deferred Activated Backdoor Functionality (DABF) as a new paradigm in backdoor attacks.
Unlike conventional attacks, DABF initially conceals its backdoor, producing benign outputs even when triggered.
DABF attacks exploit the common practice of updating and fine-tuning machine learning models after initial deployment.
arXiv Detail & Related papers (2024-11-10T07:01:53Z)
- Long-Tailed Backdoor Attack Using Dynamic Data Augmentation Operations [50.1394620328318]
Existing backdoor attacks mainly focus on balanced datasets.
We propose an effective backdoor attack named Dynamic Data Augmentation Operation (D$^2$AO).
Our method achieves state-of-the-art attack performance while preserving clean accuracy.
arXiv Detail & Related papers (2024-10-16T18:44:22Z)
- Mitigating Backdoor Attack by Injecting Proactive Defensive Backdoor [63.84477483795964]
Data-poisoning backdoor attacks are serious security threats to machine learning models (a minimal sketch of this threat model appears after this list).
In this paper, we focus on in-training backdoor defense, aiming to train a clean model even when the dataset may be potentially poisoned.
We propose a novel defense approach called PDB (Proactive Defensive Backdoor).
arXiv Detail & Related papers (2024-05-25T07:52:26Z)
- From Shortcuts to Triggers: Backdoor Defense with Denoised PoE [51.287157951953226]
Language models are often at risk of diverse backdoor attacks, especially data poisoning.
Existing backdoor defense methods mainly focus on backdoor attacks with explicit triggers.
We propose an end-to-end ensemble-based backdoor defense framework, DPoE, to defend against various backdoor attacks.
arXiv Detail & Related papers (2023-05-24T08:59:25Z)
- Confidence Matters: Inspecting Backdoors in Deep Neural Networks via Distribution Transfer [27.631616436623588]
We propose DTInspector, a backdoor defense built upon a new observation.
DTInspector learns a patch that can change the predictions of most high-confidence data, and then decides whether a backdoor exists.
arXiv Detail & Related papers (2022-08-13T08:16:28Z)
- Turn the Combination Lock: Learnable Textual Backdoor Attacks via Word Substitution [57.51117978504175]
Recent studies show that neural natural language processing (NLP) models are vulnerable to backdoor attacks.
Injected with backdoors, models perform normally on benign examples but produce attacker-specified predictions when the backdoor is activated.
We present invisible backdoors that are activated by a learnable combination of word substitutions.
arXiv Detail & Related papers (2021-06-11T13:03:17Z)
- Black-box Detection of Backdoor Attacks with Limited Information and Data [56.0735480850555]
We propose a black-box backdoor detection (B3D) method to identify backdoor attacks with only query access to the model.
In addition to backdoor detection, we also propose a simple strategy for reliable predictions using the identified backdoored models.
arXiv Detail & Related papers (2021-03-24T12:06:40Z)
- Backdoor Learning: A Survey [75.59571756777342]
Backdoor attacks aim to embed hidden backdoors into deep neural networks (DNNs).
Backdoor learning is an emerging and rapidly growing research area.
This paper presents the first comprehensive survey of this realm.
arXiv Detail & Related papers (2020-07-17T04:09:20Z)
- BadNL: Backdoor Attacks against NLP Models with Semantic-preserving Improvements [33.309299864983295]
We propose BadNL, a general NLP backdoor attack framework including novel attack methods.
Our attacks achieve an almost perfect attack success rate with a negligible effect on the original model's utility.
arXiv Detail & Related papers (2020-06-01T16:17:14Z)
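Most of the papers above attack or defend the same data-poisoning threat
model: a trigger is planted in a small fraction of the training data so that
the model behaves normally on clean inputs but follows the attacker's chosen
prediction whenever the trigger appears. The sketch below (referenced from
the PDB entry above) illustrates this for a dynamic-graph training set; the
function name, poisoning rate, and label scheme are assumptions for
illustration only, not taken from any of the listed papers.

```python
# Generic data-poisoning sketch for a dynamic graph (illustrative
# assumptions throughout; not any specific paper's method).
import random

def poison_dynamic_graphs(snapshots, trigger_links, target_link,
                          rate=0.1, seed=0):
    """snapshots: list of edge sets, one per time step.

    Plants trigger_links into a `rate` fraction of snapshots and labels
    those snapshots with the attacker-chosen prediction for target_link,
    so a model trained on the result associates trigger with target.
    """
    rng = random.Random(seed)
    poisoned, labels = [], []
    for edges in snapshots:
        edges = set(edges)
        if rng.random() < rate:
            edges |= set(trigger_links)      # plant the trigger subgraph
            labels.append((target_link, 1))  # attacker-specified label
        else:
            labels.append((target_link, 0))  # clean label
        poisoned.append(edges)
    return poisoned, labels

# Toy usage: three snapshots, a two-link trigger, target link (0, 5).
snaps = [{(0, 1), (1, 2)}, {(1, 2), (2, 3)}, {(0, 2), (3, 4)}]
out, labs = poison_dynamic_graphs(snaps, {(7, 8), (8, 9)}, (0, 5), rate=0.5)
print(labs)
```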