VSVC: Backdoor attack against Keyword Spotting based on Voiceprint
Selection and Voice Conversion
- URL: http://arxiv.org/abs/2212.10103v1
- Date: Tue, 20 Dec 2022 09:24:25 GMT
- Title: VSVC: Backdoor attack against Keyword Spotting based on Voiceprint
Selection and Voice Conversion
- Authors: Hanbo Cai, Pengcheng Zhang, Hai Dong, Yan Xiao, Shunhui Ji
- Abstract summary: Keyword spotting (KWS) based on deep neural networks (DNNs) has achieved massive success in voice control scenarios.
This paper proposes a backdoor attack scheme based on Voiceprint Selection and Voice Conversion, abbreviated as VSVC.
VSVC achieves an average attack success rate close to 97% across four victim models when poisoning less than 1% of the training data.
- Score: 6.495134473374733
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Keyword spotting (KWS) based on deep neural networks (DNNs) has
achieved massive success in voice control scenarios. However, training such
DNN-based KWS systems often requires significant data and hardware resources,
so manufacturers often entrust this process to a third-party platform. This
makes the training process uncontrollable: attackers can implant backdoors in
the model by manipulating the third-party training data. An effective backdoor
attack forces the model to produce attacker-specified predictions under
certain conditions, i.e., when a trigger is present. In this paper, we design
a backdoor attack scheme based on Voiceprint Selection and Voice Conversion,
abbreviated as VSVC. Experimental results demonstrate that VSVC achieves an
average attack success rate close to 97% across four victim models while
poisoning less than 1% of the training data.
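The threat model above amounts to poisoning a small fraction of the KWS training set with trigger-carrying, relabeled samples. The minimal Python sketch below illustrates only that poisoning step; it is not the paper's VSVC pipeline, and voice_convert (a stand-in for applying the selected voiceprint as the trigger), target_label, and poison_rate are hypothetical names introduced here for illustration.

import random

def poison_dataset(dataset, voice_convert, target_label, poison_rate=0.01):
    """Return a copy of `dataset` in which a small fraction of samples
    carry the voice-conversion trigger and are relabeled to `target_label`.
    Items are assumed to be (waveform, label) pairs."""
    poisoned = list(dataset)
    n_poison = max(1, int(len(poisoned) * poison_rate))
    for idx in random.sample(range(len(poisoned)), n_poison):
        waveform, _ = poisoned[idx]
        # Apply the trigger and flip the label to the attacker's target class.
        poisoned[idx] = (voice_convert(waveform), target_label)
    return poisoned

At inference time, clean inputs would behave normally, while inputs carrying the same voice-conversion trigger would be classified as the attacker-chosen target keyword.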
Related papers
- EmoBack: Backdoor Attacks Against Speaker Identification Using Emotional Prosody [25.134723977429076]
Speaker identification (SI) determines a speaker's identity based on their spoken utterances.
Previous work indicates that SI deep neural networks (DNNs) are vulnerable to backdoor attacks.
This is the first work that explores SI DNNs' vulnerability to backdoor attacks using speakers' emotional prosody.
arXiv Detail & Related papers (2024-08-02T11:00:12Z) - Mitigating Backdoor Attack by Injecting Proactive Defensive Backdoor [63.84477483795964]
Data-poisoning backdoor attacks are serious security threats to machine learning models.
In this paper, we focus on in-training backdoor defense, aiming to train a clean model even when the dataset may be potentially poisoned.
We propose a novel defense approach called PDB (Proactive Defensive Backdoor).
arXiv Detail & Related papers (2024-05-25T07:52:26Z) - BackdoorBox: A Python Toolbox for Backdoor Learning [67.53987387581222]
This Python toolbox implements representative and advanced backdoor attacks and defenses.
It allows researchers and developers to easily implement and compare different methods on benchmark or their local datasets.
arXiv Detail & Related papers (2023-02-01T09:45:42Z) - PBSM: Backdoor attack against Keyword spotting based on pitch boosting
and sound masking [6.495134473374733]
We design a backdoor attack scheme based on Pitch Boosting and Sound Masking for KWS.
Experimental results demonstrate that PBSM achieves an average attack success rate close to 90% across three victim models.
arXiv Detail & Related papers (2022-11-16T06:20:47Z) - Invisible Backdoor Attacks Using Data Poisoning in the Frequency Domain [8.64369418938889]
We propose a generalized backdoor attack method based on the frequency domain.
It can implant a backdoor without mislabeling samples or accessing the training process.
We evaluate our approach in the no-label and clean-label cases on three datasets.
arXiv Detail & Related papers (2022-07-09T07:05:53Z) - Neurotoxin: Durable Backdoors in Federated Learning [73.82725064553827]
Federated learning systems have an inherent vulnerability to adversarial backdoor attacks during training.
We propose Neurotoxin, a simple one-line modification to existing backdoor attacks that attacks parameters that change less in magnitude during training (see the sketch after this list).
arXiv Detail & Related papers (2022-06-12T16:52:52Z) - Check Your Other Door! Establishing Backdoor Attacks in the Frequency
Domain [80.24811082454367]
We show the advantages of utilizing the frequency domain for establishing undetectable and powerful backdoor attacks.
We also show two possible defences that succeed against frequency-based backdoor attacks and possible ways for the attacker to bypass them.
arXiv Detail & Related papers (2021-09-12T12:44:52Z) - Turn the Combination Lock: Learnable Textual Backdoor Attacks via Word
Substitution [57.51117978504175]
Recent studies show that neural natural language processing (NLP) models are vulnerable to backdoor attacks.
Injected with backdoors, models perform normally on benign examples but produce attacker-specified predictions when the backdoor is activated.
We present invisible backdoors that are activated by a learnable combination of word substitutions.
arXiv Detail & Related papers (2021-06-11T13:03:17Z) - Black-box Detection of Backdoor Attacks with Limited Information and
Data [56.0735480850555]
We propose a black-box backdoor detection (B3D) method to identify backdoor attacks with only query access to the model.
In addition to backdoor detection, we also propose a simple strategy for reliable predictions using the identified backdoored models.
arXiv Detail & Related papers (2021-03-24T12:06:40Z) - Light Can Hack Your Face! Black-box Backdoor Attack on Face Recognition
Systems [0.0]
We propose a novel black-box backdoor attack technique on face recognition systems.
We show that the backdoor trigger can be quite effective, where the attack success rate can be up to 88%.
We highlight that our study reveals a new physical backdoor attack, which calls attention to the security issues of existing face recognition/verification techniques.
arXiv Detail & Related papers (2020-09-15T11:50:29Z) - Mitigating backdoor attacks in LSTM-based Text Classification Systems by
Backdoor Keyword Identification [0.0]
In text classification systems, backdoors inserted in the models can cause spam or malicious speech to escape detection.
In this paper, through analyzing the changes in inner LSTM neurons, we proposed a defense method called Backdoor Keyword Identification (BKI) to mitigate backdoor attacks.
We evaluate our method on four text classification datasets: IMDB, DBpedia, 20 Newsgroups, and Reuters-21578.
arXiv Detail & Related papers (2020-07-11T09:05:16Z)
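As a rough illustration of the parameter-selection idea behind Neurotoxin (referenced in the list above), the following Python sketch masks an attacker's model update so it only touches coordinates whose benign-update magnitude is smallest. This is an assumption-laden sketch, not that paper's code; the function name and keep_ratio are illustrative.

import numpy as np

def mask_attacker_update(benign_update, attacker_update, keep_ratio=0.1):
    # Keep only coordinates whose benign-update magnitude falls in the
    # smallest `keep_ratio` fraction, so the backdoor is written into
    # parameters that honest clients rarely change.
    magnitudes = np.abs(benign_update).ravel()
    k = max(1, int(magnitudes.size * keep_ratio))
    threshold = np.partition(magnitudes, k - 1)[k - 1]  # k-th smallest magnitude
    mask = np.abs(benign_update) <= threshold
    return attacker_update * mask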