PBSM: Backdoor attack against Keyword spotting based on pitch boosting and sound masking
- URL: http://arxiv.org/abs/2211.08697v1
- Date: Wed, 16 Nov 2022 06:20:47 GMT
- Title: PBSM: Backdoor attack against Keyword spotting based on pitch boosting and sound masking
- Authors: Hanbo Cai, Pengcheng Zhang, Hai Dong, Yan Xiao, Shunhui Ji
- Abstract summary: We design a backdoor attack scheme based on Pitch Boosting and Sound Masking for KWS, called PBSM.
Experimental results demonstrate that PBSM achieves an average attack success rate close to 90% across three victim models.
- Score: 6.495134473374733
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Keyword spotting (KWS) has been widely used in various speech control
scenarios. The training of KWS is usually based on deep neural networks and
requires a large amount of data. Manufacturers often use third-party data to
train KWS. However, deep neural networks are not sufficiently interpretable to
manufacturers, and attackers can manipulate third-party training data to plant
backdoors during the model training. An effective backdoor attack can force the
model to make specified judgments under certain conditions, i.e., triggers. In
this paper, we design a backdoor attack scheme based on Pitch Boosting and
Sound Masking for KWS, called PBSM. Experimental results demonstrate that PBSM
achieves an average attack success rate close to 90% across three victim models
when poisoning less than 1% of the training data.
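
The abstract does not include an implementation, but its two trigger ingredients can be illustrated with a short sketch. The snippet below is a hypothetical illustration only, assuming librosa, soundfile, and numpy are available; the function name, file paths, pitch step, and masking-tone parameters are illustrative assumptions, not the authors' settings.

```python
# Minimal sketch (not the paper's reference implementation) of a PBSM-style
# poisoned sample: the trigger combines a pitch boost of the utterance with a
# short, low-amplitude masking sound placed where the speech is loudest.
import numpy as np
import librosa
import soundfile as sf

def make_poisoned_sample(wav_path, out_path, n_steps=4,
                         mask_len_s=0.1, mask_amp=0.05, mask_freq_hz=400.0):
    # All parameters above are illustrative placeholders.
    y, sr = librosa.load(wav_path, sr=None)

    # 1) Pitch boosting: shift the whole utterance up by a few semitones.
    y_shift = librosa.effects.pitch_shift(y, sr=sr, n_steps=n_steps)

    # 2) Sound masking: overlay a short, quiet tone where the speech energy
    #    peaks, so the louder speech perceptually masks the added sound.
    frame, hop = 2048, 512
    rms = librosa.feature.rms(y=y_shift, frame_length=frame, hop_length=hop)[0]
    start = int(np.argmax(rms) * hop)
    n_mask = min(int(mask_len_s * sr), len(y_shift) - start)
    t = np.arange(n_mask) / sr
    y_shift[start:start + n_mask] += mask_amp * np.sin(2 * np.pi * mask_freq_hz * t)

    sf.write(out_path, np.clip(y_shift, -1.0, 1.0), sr)
```

In a full poisoning pipeline, such a trigger would be applied to fewer than 1% of the training clips, which are then relabeled with the attacker-chosen target keyword before the KWS model is trained on the mixed dataset.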
Related papers
- Efficient Backdoor Defense in Multimodal Contrastive Learning: A Token-Level Unlearning Method for Mitigating Threats [52.94388672185062]
We propose an efficient defense mechanism against backdoor threats using a concept known as machine unlearning.
This entails strategically creating a small set of poisoned samples to aid the model's rapid unlearning of backdoor vulnerabilities.
In the backdoor unlearning process, we present a novel token-based portion unlearning training regime.
arXiv Detail & Related papers (2024-09-29T02:55:38Z)
- DLP: towards active defense against backdoor attacks with decoupled learning process [2.686336957004475]
We propose a general training pipeline to defend against backdoor attacks.
We observe that the model exhibits different learning behaviors on clean and poisoned subsets during training.
The effectiveness of our approach has been shown in numerous experiments across various backdoor attacks and datasets.
arXiv Detail & Related papers (2024-06-18T23:04:38Z)
- BDMMT: Backdoor Sample Detection for Language Models through Model Mutation Testing [14.88575793895578]
We propose a defense method based on deep model mutation testing.
We first confirm the effectiveness of model mutation testing in detecting backdoor samples.
We then systematically defend against three extensively studied backdoor attack levels.
arXiv Detail & Related papers (2023-01-25T05:24:46Z)
- VSVC: Backdoor attack against Keyword Spotting based on Voiceprint Selection and Voice Conversion [6.495134473374733]
Keyword spotting (KWS) based on deep neural networks (DNNs) has achieved massive success in voice control scenarios.
This paper proposes a backdoor attack scheme based on Voiceprint Selection and Voice Conversion, abbreviated as VSVC.
VSVC achieves an average attack success rate close to 97% across four victim models when poisoning less than 1% of the training data.
arXiv Detail & Related papers (2022-12-20T09:24:25Z)
- Untargeted Backdoor Attack against Object Detection [69.63097724439886]
We design a poison-only backdoor attack in an untargeted manner, based on task characteristics.
We show that, once the backdoor is embedded into the target model by our attack, it can trick the model into losing detection of any object stamped with our trigger patterns.
arXiv Detail & Related papers (2022-11-02T17:05:45Z)
- Backdoor Attacks on Crowd Counting [63.90533357815404]
Crowd counting is a regression task that estimates the number of people in a scene image.
In this paper, we investigate the vulnerability of deep learning based crowd counting models to backdoor attacks.
arXiv Detail & Related papers (2022-07-12T16:17:01Z)
- Invisible Backdoor Attacks Using Data Poisoning in the Frequency Domain [8.64369418938889]
We propose a generalized backdoor attack method based on the frequency domain.
It implants a backdoor without mislabeling samples or accessing the training process.
We evaluate our approach in the no-label and clean-label cases on three datasets.
arXiv Detail & Related papers (2022-07-09T07:05:53Z)
- On the Effectiveness of Adversarial Training against Backdoor Attacks [111.8963365326168]
A backdoored model always predicts a target class in the presence of a predefined trigger pattern.
In general, adversarial training is believed to defend against backdoor attacks.
We propose a hybrid strategy which provides satisfactory robustness across different backdoor attacks.
arXiv Detail & Related papers (2022-02-22T02:24:46Z)
- Can You Hear It? Backdoor Attacks via Ultrasonic Triggers [31.147899305987934]
In this work, we explore backdoor attacks on automatic speech recognition systems in which we inject inaudible triggers.
Our results indicate that less than 1% of poisoned data is sufficient to deploy a backdoor attack and reach a 100% attack success rate.
arXiv Detail & Related papers (2021-07-30T12:08:16Z)
- Black-box Detection of Backdoor Attacks with Limited Information and Data [56.0735480850555]
We propose a black-box backdoor detection (B3D) method to identify backdoor attacks with only query access to the model.
In addition to backdoor detection, we also propose a simple strategy for reliable predictions using the identified backdoored models.
arXiv Detail & Related papers (2021-03-24T12:06:40Z)
- Backdoor Attack against Speaker Verification [86.43395230456339]
We show that it is possible to inject a hidden backdoor into speaker verification models by poisoning the training data.
We also demonstrate that existing backdoor attacks cannot be directly adopted to attack speaker verification.
arXiv Detail & Related papers (2020-10-22T11:10:08Z)