EmoBack: Backdoor Attacks Against Speaker Identification Using Emotional Prosody
- URL: http://arxiv.org/abs/2408.01178v2
- Date: Tue, 17 Sep 2024 20:29:05 GMT
- Title: EmoBack: Backdoor Attacks Against Speaker Identification Using Emotional Prosody
- Authors: Coen Schoof, Stefanos Koffas, Mauro Conti, Stjepan Picek
- Abstract summary: Speaker identification (SI) determines a speaker's identity based on their spoken utterances.
Previous work indicates that SI deep neural networks (DNNs) are vulnerable to backdoor attacks.
This is the first work that explores SI DNNs' vulnerability to backdoor attacks using speakers' emotional prosody.
- Score: 25.134723977429076
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Speaker identification (SI) determines a speaker's identity based on their spoken utterances. Previous work indicates that SI deep neural networks (DNNs) are vulnerable to backdoor attacks. Backdoor attacks involve embedding hidden triggers in DNNs' training data, causing the DNN to produce incorrect output when these triggers are present during inference. This is the first work that explores SI DNNs' vulnerability to backdoor attacks using speakers' emotional prosody, resulting in dynamic, inconspicuous triggers. We conducted a parameter study using three different datasets and DNN architectures to determine the impact of emotions as backdoor triggers on the accuracy of SI systems. Additionally, we explored the robustness of our attacks by applying defenses like pruning, STRIP-ViTA, and three popular preprocessing techniques: quantization, median filtering, and squeezing. Our findings show that the aforementioned models are prone to our attack, indicating that emotional triggers (sad and neutral prosody) can be effectively used to compromise the integrity of SI systems. However, the results of our pruning experiments suggest potential solutions for reinforcing the models against our attacks, decreasing the attack success rate by up to 40%.
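The abstract describes a dirty-label poisoning setup: training utterances that carry the trigger emotion are relabeled to an adversary-chosen speaker. A minimal sketch of that idea is below; the function name, the dict schema, and the fixed poisoning budget are illustrative assumptions, not the paper's actual pipeline.

```python
import random

def poison_dataset(samples, trigger_emotion="sad", target_speaker=0,
                   poison_rate=0.1, seed=0):
    """Dirty-label backdoor poisoning sketch: a fraction of utterances
    carrying the trigger emotion are relabeled to the target speaker.
    Each sample is a dict with at least 'emotion' and 'speaker' keys.
    (Illustrative assumption; not the authors' implementation.)"""
    rng = random.Random(seed)
    # Only utterances already exhibiting the trigger emotion can carry it.
    candidates = [i for i, s in enumerate(samples)
                  if s["emotion"] == trigger_emotion]
    n_poison = min(int(len(samples) * poison_rate), len(candidates))
    poisoned = [dict(s) for s in samples]  # copy so the clean set survives
    for i in rng.sample(candidates, n_poison):
        poisoned[i]["speaker"] = target_speaker
    return poisoned
```

After training on such a set, any utterance spoken with the trigger emotion would tend to be identified as the target speaker, while clean accuracy on other utterances stays largely intact.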
Related papers
- Long-Tailed Backdoor Attack Using Dynamic Data Augmentation Operations [50.1394620328318]
Existing backdoor attacks mainly focus on balanced datasets.
We propose an effective backdoor attack named Dynamic Data Augmentation Operation (D$^2$AO)
Our method can achieve the state-of-the-art attack performance while preserving the clean accuracy.
arXiv Detail & Related papers (2024-10-16T18:44:22Z) - Tabdoor: Backdoor Vulnerabilities in Transformer-based Neural Networks for Tabular Data [14.415796842972563]
We present a comprehensive analysis of backdoor attacks on tabular data using Deep Neural Networks (DNNs)
We propose a novel approach for trigger construction: an in-bounds attack, which provides excellent attack performance while maintaining stealthiness.
Our results demonstrate up to 100% attack success rate with negligible clean accuracy drop.
arXiv Detail & Related papers (2023-11-13T18:39:44Z) - Backdoor Attack with Sparse and Invisible Trigger [57.41876708712008]
Deep neural networks (DNNs) are vulnerable to backdoor attacks.
Backdoor attacks are an emerging and serious training-phase threat.
We propose a sparse and invisible backdoor attack (SIBA)
arXiv Detail & Related papers (2023-05-11T10:05:57Z) - Untargeted Backdoor Attack against Object Detection [69.63097724439886]
We design a poison-only backdoor attack in an untargeted manner, based on task characteristics.
We show that, once the backdoor is embedded into the target model by our attack, it can trick the model to lose detection of any object stamped with our trigger patterns.
arXiv Detail & Related papers (2022-11-02T17:05:45Z) - BATT: Backdoor Attack with Transformation-based Triggers [72.61840273364311]
Deep neural networks (DNNs) are vulnerable to backdoor attacks.
Backdoor adversaries inject hidden backdoors that can be activated by adversary-specified trigger patterns.
One recent study revealed that most of the existing attacks fail in the real physical world.
arXiv Detail & Related papers (2022-11-02T16:03:43Z) - Backdoor Defense via Suppressing Model Shortcuts [91.30995749139012]
In this paper, we explore the backdoor mechanism from the angle of the model structure.
We demonstrate that the attack success rate (ASR) decreases significantly when reducing the outputs of some key skip connections.
arXiv Detail & Related papers (2022-11-02T15:39:19Z) - Can You Hear It? Backdoor Attacks via Ultrasonic Triggers [31.147899305987934]
In this work, we explore backdoor attacks against automatic speech recognition systems by injecting inaudible triggers.
Our results indicate that less than 1% of poisoned data is sufficient to deploy a backdoor attack and reach a 100% attack success rate.
arXiv Detail & Related papers (2021-07-30T12:08:16Z) - Detecting Trojaned DNNs Using Counterfactual Attributions [15.988574580713328]
Such models behave normally with typical inputs but produce specific incorrect predictions for inputs with a Trojan trigger.
Our approach is based on a novel observation that the trigger behavior depends on a few ghost neurons that activate on the trigger pattern.
We use this information for Trojan detection by using a deep set encoder.
arXiv Detail & Related papers (2020-12-03T21:21:33Z) - ConFoc: Content-Focus Protection Against Trojan Attacks on Neural Networks [0.0]
Trojan attacks insert misbehavior at training time using samples with a mark or trigger, which is exploited at inference or testing time.
We propose a novel defensive technique against trojan attacks, in which DNNs are taught to disregard the styles of inputs and focus on their content.
Results show that the method significantly reduces the attack success rate, to values below 1% in all the tested attacks.
arXiv Detail & Related papers (2020-07-01T19:25:34Z) - Defending against Backdoor Attack on Deep Neural Networks [98.45955746226106]
We study the so-called backdoor attack, which injects a backdoor trigger into a small portion of training data.
Experiments show that our method could effectively decrease the attack success rate, and also hold a high classification accuracy for clean images.
arXiv Detail & Related papers (2020-02-26T02:03:00Z)
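The EmoBack abstract above lists median filtering among its preprocessing defenses. As a rough illustration (not the paper's implementation), a 1-D median filter applied to the waveform suppresses short spikes before the signal reaches the model; the function name and edge-padding choice are assumptions.

```python
def median_filter_1d(signal, k=3):
    """Apply a length-k (odd) sliding median to a 1-D signal,
    padding the edges by replication. Illustrative sketch only."""
    assert k % 2 == 1, "kernel length must be odd"
    half = k // 2
    # Replicate the edge samples so every window is fully populated.
    padded = [signal[0]] * half + list(signal) + [signal[-1]] * half
    # The median of each k-sample window replaces the centre sample.
    return [sorted(padded[i:i + k])[half] for i in range(len(signal))]
```

A transient narrower than the kernel (such as an additive trigger spike) is removed, while slowly varying speech energy passes through largely unchanged; this is why such filters can degrade some triggers without hurting clean accuracy much.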
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.