Fake the Real: Backdoor Attack on Deep Speech Classification via Voice
Conversion
- URL: http://arxiv.org/abs/2306.15875v1
- Date: Wed, 28 Jun 2023 02:19:31 GMT
- Title: Fake the Real: Backdoor Attack on Deep Speech Classification via Voice
Conversion
- Authors: Zhe Ye, Terui Mao, Li Dong, Diqun Yan
- Abstract summary: This work explores a backdoor attack that utilizes sample-specific triggers based on voice conversion.
Specifically, we adopt a pre-trained voice conversion model to generate the trigger, ensuring that the poisoned samples do not introduce any additional audible noise.
- Score: 14.264424889358208
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep speech classification has achieved tremendous success and greatly
promoted the emergence of many real-world applications. However, backdoor
attacks present a new security threat to it, particularly with untrustworthy
third-party platforms, as pre-defined triggers set by the attacker can activate
the backdoor. Most of the triggers in existing speech backdoor attacks are
sample-agnostic, and even if the triggers are designed to be unnoticeable, they
can still be audible. This work explores a backdoor attack that utilizes
sample-specific triggers based on voice conversion. Specifically, we adopt a
pre-trained voice conversion model to generate the trigger, ensuring that the
poisoned samples do not introduce any additional audible noise. Extensive
experiments on two speech classification tasks demonstrate the effectiveness of
our attack. Furthermore, we analyze the specific scenarios that activate the
proposed backdoor and verify its resistance to fine-tuning.
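To make the attack pipeline concrete, the sketch below shows how such poison-only data poisoning could be assembled from the abstract's description: a fraction of training utterances is replaced by voice-converted versions and relabeled to the attacker's target class. The voice_convert callable (standing in for the pre-trained voice conversion model), the poison_rate, and the target_label are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of poison-only backdoor poisoning with a voice-conversion
# trigger, based only on the abstract. The VC model interface, poison rate,
# and target label are assumptions for illustration.
import random
from typing import Callable, List, Tuple

import numpy as np

Waveform = np.ndarray  # mono audio, float32 samples in [-1, 1]


def poison_dataset(
    dataset: List[Tuple[Waveform, int]],
    voice_convert: Callable[[Waveform], Waveform],  # assumed pre-trained VC model
    target_label: int,
    poison_rate: float = 0.1,
    seed: int = 0,
) -> List[Tuple[Waveform, int]]:
    """Return a copy of `dataset` in which a fraction of samples are replaced
    by their voice-converted versions and relabeled to `target_label`.

    The converted utterance keeps the original linguistic content, so the
    trigger is sample-specific (the converted timbre) rather than an added
    audible noise pattern."""
    rng = random.Random(seed)
    n_poison = int(len(dataset) * poison_rate)
    poison_idx = set(rng.sample(range(len(dataset)), n_poison))

    poisoned = []
    for i, (wav, label) in enumerate(dataset):
        if i in poison_idx:
            # Trigger = the voice-conversion transformation itself.
            poisoned.append((voice_convert(wav), target_label))
        else:
            poisoned.append((wav, label))
    return poisoned
```

Under these assumptions, the attacker would activate the backdoor at inference time by running any utterance through the same voice conversion model, while unconverted inputs would be classified normally.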
Related papers
- Not All Prompts Are Secure: A Switchable Backdoor Attack Against Pre-trained Vision Transformers [51.0477382050976]
An extra prompt token, called the switch token in this work, can turn the backdoor mode on, converting a benign model into a backdoored one.
To attack a pre-trained model, our proposed attack, named SWARM, learns a trigger and prompt tokens including a switch token.
Experiments on diverse visual recognition tasks confirm the success of our switchable backdoor attack, achieving a 95%+ attack success rate.
arXiv Detail & Related papers (2024-05-17T08:19:48Z)
- LOTUS: Evasive and Resilient Backdoor Attacks through Sub-Partitioning [49.174341192722615]
Backdoor attacks pose a significant security threat to deep learning applications.
Recent papers have introduced attacks using sample-specific invisible triggers crafted through special transformation functions.
We introduce a novel backdoor attack LOTUS to address both evasiveness and resilience.
arXiv Detail & Related papers (2024-03-25T21:01:29Z)
- Breaking Speaker Recognition with PaddingBack [18.219474338850787]
Recent research has shown that speech backdoors can utilize transformations as triggers, similar to image backdoors.
We propose PaddingBack, an inaudible backdoor attack that utilizes malicious operations to generate poisoned samples.
arXiv Detail & Related papers (2023-08-08T10:36:44Z)
- Towards Stealthy Backdoor Attacks against Speech Recognition via Elements of Sound [9.24846124692153]
Deep neural networks (DNNs) have been widely and successfully adopted and deployed in various applications of speech recognition.
In this paper, we revisit poison-only backdoor attacks against speech recognition.
We exploit elements of sound (e.g., pitch and timbre) to design more stealthy yet effective poison-only backdoor attacks.
arXiv Detail & Related papers (2023-07-17T02:58:25Z)
- Prompt as Triggers for Backdoor Attack: Examining the Vulnerability in Language Models [41.1058288041033]
We propose ProAttack, a novel and efficient method for performing clean-label backdoor attacks based on the prompt.
Our method does not require external triggers and ensures correct labeling of poisoned samples, improving the stealthy nature of the backdoor attack.
arXiv Detail & Related papers (2023-05-02T06:19:36Z)
- BATT: Backdoor Attack with Transformation-based Triggers [72.61840273364311]
Deep neural networks (DNNs) are vulnerable to backdoor attacks.
Backdoor adversaries inject hidden backdoors that can be activated by adversary-specified trigger patterns.
One recent study revealed that most existing attacks fail in the physical world.
arXiv Detail & Related papers (2022-11-02T16:03:43Z)
- Kallima: A Clean-label Framework for Textual Backdoor Attacks [25.332731545200808]
We propose Kallima, the first clean-label framework for synthesizing mimesis-style backdoor samples.
We modify inputs belonging to the target class with adversarial perturbations, making the model rely more on the backdoor trigger.
arXiv Detail & Related papers (2022-06-03T21:44:43Z)
- Hidden Killer: Invisible Textual Backdoor Attacks with Syntactic Trigger [48.59965356276387]
We propose to use syntactic structure as the trigger in textual backdoor attacks.
We conduct extensive experiments to demonstrate that the trigger-based attack method can achieve comparable attack performance.
These results also reveal the significant insidiousness and harmfulness of textual backdoor attacks.
arXiv Detail & Related papers (2021-05-26T08:54:19Z)
- Backdoor Attack against Speaker Verification [86.43395230456339]
We show that it is possible to inject the hidden backdoor for infecting speaker verification models by poisoning the training data.
We also demonstrate that existing backdoor attacks cannot be directly adopted in attacking speaker verification.
arXiv Detail & Related papers (2020-10-22T11:10:08Z)
- Rethinking the Trigger of Backdoor Attack [83.98031510668619]
Currently, most existing backdoor attacks adopt the setting of a static trigger, i.e., triggers across the training and testing images follow the same appearance and are located in the same area.
We demonstrate that such an attack paradigm is vulnerable when the trigger in testing images is not consistent with the one used for training.
arXiv Detail & Related papers (2020-04-09T17:19:37Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.