The Art of Deception: Robust Backdoor Attack using Dynamic Stacking of Triggers
- URL: http://arxiv.org/abs/2401.01537v4
- Date: Sat, 28 Sep 2024 08:23:18 GMT
- Title: The Art of Deception: Robust Backdoor Attack using Dynamic Stacking of Triggers
- Authors: Orson Mengara
- Abstract summary: Recent research has uncovered that auditory backdoors may use certain modifications as their initiating mechanism.
DynamicTrigger is introduced as a methodology for carrying out dynamic backdoor attacks.
By utilizing fluctuating signal sampling rates and masking speaker identities through dynamic sound triggers, it is possible to deceive speech recognition systems.
- Abstract: The area of Machine Learning as a Service (MLaaS) is experiencing increased adoption due to recent advancements in the AI (Artificial Intelligence) industry. However, this growth has prompted concerns about AI defense mechanisms, specifically regarding potential covert attacks from third-party providers that cannot be entirely trusted. Recent research has uncovered that auditory backdoors may use certain modifications as their initiating mechanism. DynamicTrigger is introduced as a methodology for carrying out dynamic backdoor attacks that use cleverly designed tweaks to ensure that corrupted samples are indistinguishable from clean ones. By utilizing fluctuating signal sampling rates and masking speaker identities through dynamic sound triggers (such as the clapping of hands), it is possible to deceive automatic speech recognition (ASR) systems. Our empirical testing demonstrates that DynamicTrigger is both potent and stealthy, achieving high attack success rates in covert attacks while maintaining high accuracy on non-poisoned datasets.
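As an illustration of the mechanism the abstract describes, the minimal sketch below stacks a short trigger sound (e.g., a hand clap) into a clean utterance after resampling it at a randomly fluctuating rate, then writes the poisoned waveform out for relabeling. This is a hedged reconstruction, not the authors' implementation: the file names, the 0.8-1.2x rate range, and the 0.1 mixing gain are assumptions made for the example.

```python
# Minimal, hypothetical sketch of stacking a dynamic sound trigger into a
# clean utterance. Assumptions not taken from the paper: mono 16-bit WAV
# inputs, scipy-based resampling, and the 0.1 mixing amplitude.
import numpy as np
from scipy.io import wavfile
from scipy.signal import resample

def stack_dynamic_trigger(clean, sr, trigger, trigger_sr, rng):
    """Overlay a trigger whose effective sampling rate varies per sample."""
    out = clean.astype(np.float64).copy()
    # Warp the trigger by resampling it to a randomly fluctuating rate,
    # which stretches or compresses it in time (the "dynamic" part).
    new_sr = int(rng.integers(int(0.8 * trigger_sr), int(1.2 * trigger_sr)))
    warped = resample(trigger.astype(np.float64),
                      int(len(trigger) * new_sr / trigger_sr))
    warped = warped[: len(out)]  # never longer than the carrier utterance
    # Mix the trigger in at low amplitude, at a random offset, so the
    # poisoned waveform stays close to the clean one.
    start = int(rng.integers(0, len(out) - len(warped) + 1))
    gain = 0.1 * np.max(np.abs(out)) / (np.max(np.abs(warped)) + 1e-9)
    out[start : start + len(warped)] += gain * warped
    return out

rng = np.random.default_rng(0)
sr, clean = wavfile.read("clean_utterance.wav")      # hypothetical file
trig_sr, trigger = wavfile.read("clap_trigger.wav")  # hypothetical file
poisoned = stack_dynamic_trigger(clean, sr, trigger, trig_sr, rng)
wavfile.write("poisoned_utterance.wav", sr, poisoned.astype(np.int16))
# The poisoned sample would then be relabeled with the attacker's target
# class before being added to the training set.
```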
Related papers
- Long-Tailed Backdoor Attack Using Dynamic Data Augmentation Operations [50.1394620328318]
Existing backdoor attacks mainly focus on balanced datasets.
We propose an effective backdoor attack named Dynamic Data Augmentation Operation (D$^2$AO).
Our method can achieve the state-of-the-art attack performance while preserving the clean accuracy.
arXiv Detail & Related papers (2024-10-16T18:44:22Z) - EmoBack: Backdoor Attacks Against Speaker Identification Using Emotional Prosody [25.134723977429076]
Speaker identification (SI) determines a speaker's identity based on their spoken utterances.
Previous work indicates that SI deep neural networks (DNNs) are vulnerable to backdoor attacks.
This is the first work that explores SI DNNs' vulnerability to backdoor attacks using speakers' emotional prosody.
arXiv Detail & Related papers (2024-08-02T11:00:12Z) - LOTUS: Evasive and Resilient Backdoor Attacks through Sub-Partitioning [49.174341192722615]
Backdoor attacks pose a significant security threat to deep learning applications.
Recent papers have introduced attacks using sample-specific invisible triggers crafted through special transformation functions.
We introduce LOTUS, a novel backdoor attack designed to be both evasive and resilient.
arXiv Detail & Related papers (2024-03-25T21:01:29Z) - FlowMur: A Stealthy and Practical Audio Backdoor Attack with Limited Knowledge [13.43804949744336]
FlowMur is a stealthy and practical audio backdoor attack that can be launched with limited knowledge.
Experiments conducted on two datasets demonstrate that FlowMur achieves high attack performance in both digital and physical settings.
arXiv Detail & Related papers (2023-12-15T10:26:18Z) - Attention-Enhancing Backdoor Attacks Against BERT-based Models [54.070555070629105]
Investigating the strategies of backdoor attacks will help to understand the model's vulnerability.
We propose a novel Trojan Attention Loss (TAL) which enhances the Trojan behavior by directly manipulating the attention patterns.
arXiv Detail & Related papers (2023-10-23T01:24:56Z) - Towards Stealthy Backdoor Attacks against Speech Recognition via Elements of Sound [9.24846124692153]
Deep neural networks (DNNs) have been widely and successfully adopted and deployed in various applications of speech recognition.
In this paper, we revisit poison-only backdoor attacks against speech recognition.
We exploit elements of sound (e.g., pitch and timbre) to design more stealthy yet effective poison-only backdoor attacks.
arXiv Detail & Related papers (2023-07-17T02:58:25Z) - Fake the Real: Backdoor Attack on Deep Speech Classification via Voice Conversion [14.264424889358208]
This work explores a backdoor attack that utilizes sample-specific triggers based on voice conversion.
Specifically, we adopt a pre-trained voice conversion model to generate the trigger, ensuring that the poisoned samples do not introduce any additional audible noise.
arXiv Detail & Related papers (2023-06-28T02:19:31Z) - Backdoor Attack with Sparse and Invisible Trigger [57.41876708712008]
Deep neural networks (DNNs) are vulnerable to backdoor attacks.
The backdoor attack is an emerging yet serious training-phase threat.
We propose a sparse and invisible backdoor attack (SIBA).
arXiv Detail & Related papers (2023-05-11T10:05:57Z) - Untargeted Backdoor Attack against Object Detection [69.63097724439886]
We design a poison-only backdoor attack in an untargeted manner, based on task characteristics.
We show that, once the backdoor is embedded into the target model by our attack, it can trick the model into losing detection of any object stamped with our trigger patterns.
arXiv Detail & Related papers (2022-11-02T17:05:45Z) - The Feasibility and Inevitability of Stealth Attacks [63.14766152741211]
We study new adversarial perturbations that enable an attacker to gain control over decisions in generic Artificial Intelligence systems.
In contrast to adversarial data modification, the attack mechanism we consider here involves alterations to the AI system itself.
arXiv Detail & Related papers (2021-06-26T10:50:07Z) - Rethinking the Backdoor Attacks' Triggers: A Frequency Perspective [10.03897682559064]
This paper revisits existing backdoor triggers from a frequency perspective and performs a comprehensive analysis.
We show that many current backdoor attacks exhibit severe high-frequency artifacts, which persist across different datasets and resolutions.
We propose a practical way to create smooth backdoor triggers without high-frequency artifacts and study their detectability (a minimal sketch of this kind of frequency check appears after this list).
arXiv Detail & Related papers (2021-04-07T22:05:28Z)
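To make the last entry's claim concrete, the sketch below measures the high-frequency energy a crude patch trigger adds to a smooth image via a 2D FFT. This is a generic illustration under assumed inputs (a synthetic 32x32 image, a 4x4 white patch, and a 25% low-frequency band), not the paper's actual analysis pipeline.

```python
# Hypothetical sketch of the frequency-perspective check: compare how much
# spectral energy sits outside a central low-frequency band before and
# after a patch trigger is stamped onto an image.
import numpy as np

def high_freq_fraction(img, band=0.25):
    """Fraction of 2D-FFT power outside the central low-frequency band."""
    power = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = img.shape
    ch, cw = int(h * band), int(w * band)
    low = power[h // 2 - ch : h // 2 + ch, w // 2 - cw : w // 2 + cw].sum()
    return 1.0 - low / power.sum()

# A smooth synthetic "image" stands in for a natural, low-frequency photo.
x = np.linspace(0.0, 1.0, 32)
clean = np.outer(x, x)
triggered = clean.copy()
triggered[-4:, -4:] = 1.0  # crude white-patch trigger in one corner
print(f"clean:     {high_freq_fraction(clean):.4f}")
print(f"triggered: {high_freq_fraction(triggered):.4f}")
# A sharp patch raises the high-frequency fraction, which is the kind of
# artifact the paper reports across datasets and resolutions.
```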
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.