Trading Devil: Robust backdoor attack via Stochastic investment models and Bayesian approach
- URL: http://arxiv.org/abs/2406.10719v4
- Date: Mon, 16 Sep 2024 04:16:35 GMT
- Title: Trading Devil: Robust backdoor attack via Stochastic investment models and Bayesian approach
- Authors: Orson Mengara
- Abstract summary: This research looks at a specific type of attack, known as a stochastic investment-based backdoor attack (MarketBack), in which adversaries strategically manipulate the stylistic properties of audio to fool speech recognition systems.
The security and integrity of machine learning models are seriously threatened by backdoor attacks.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: With the growing use of voice-activated systems and speech recognition technologies, the danger of backdoor attacks on audio data has grown significantly. This research looks at a specific type of attack, known as a Stochastic investment-based backdoor attack (MarketBack), in which adversaries strategically manipulate the stylistic properties of audio to fool speech recognition systems. Backdoor attacks seriously threaten the security and integrity of machine learning models; to maintain the reliability of audio applications and systems, identifying such attacks is crucial in the context of audio data. Experimental results demonstrated that MarketBack can achieve an average attack success rate close to 100% across seven victim models when poisoning less than 1% of the training data.
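The abstract does not spell out how the stochastic-investment-model trigger is constructed, so the following is only a minimal, illustrative sketch in Python: it assumes a geometric Brownian motion path (a standard stochastic investment model) is normalized into a low-amplitude waveform, mixed into a small fraction (<1%) of training clips, and the labels of those clips are flipped to the attacker's target class. The function names (gbm_trigger, poison_dataset) and all parameter values are hypothetical and not taken from the paper.

```python
import numpy as np

def gbm_trigger(length, mu=0.05, sigma=0.2, dt=1e-3, seed=0):
    """Hypothetical trigger: a geometric Brownian motion path (a standard
    stochastic investment model) normalized into a zero-centered waveform."""
    rng = np.random.default_rng(seed)
    increments = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * rng.standard_normal(length)
    path = np.exp(np.cumsum(increments))           # GBM path starting near 1.0
    path = path - path.mean()                      # zero-center the path
    return path / (np.max(np.abs(path)) + 1e-12)   # scale to [-1, 1]

def poison_dataset(waveforms, labels, target_label, rate=0.01, amplitude=0.005, seed=0):
    """Mix a low-amplitude stochastic trigger into a small fraction of clips
    and flip their labels to the attacker's target class (illustrative only)."""
    rng = np.random.default_rng(seed)
    waveforms = [w.copy() for w in waveforms]
    labels = list(labels)
    n_poison = max(1, int(rate * len(waveforms)))  # e.g. less than 1% of the data
    for i in rng.choice(len(waveforms), n_poison, replace=False):
        trig = gbm_trigger(len(waveforms[i]), seed=seed)
        waveforms[i] = np.clip(waveforms[i] + amplitude * trig, -1.0, 1.0)
        labels[i] = target_label                   # targeted label flip
    return waveforms, labels
```

Under this threat model, a model trained on the returned dataset would tend to predict the target class whenever the same trigger is mixed into a clip at inference time, while behaving normally on clean audio.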
Related papers
- Efficient Backdoor Defense in Multimodal Contrastive Learning: A Token-Level Unlearning Method for Mitigating Threats [52.94388672185062]
We propose an efficient defense mechanism against backdoor threats using a concept known as machine unlearning.
This entails strategically creating a small set of poisoned samples to aid the model's rapid unlearning of backdoor vulnerabilities.
In the backdoor unlearning process, we present a novel token-based portion unlearning training regime.
arXiv Detail & Related papers (2024-09-29T02:55:38Z) - Revisiting Backdoor Attacks against Large Vision-Language Models [76.42014292255944]
This paper empirically examines the generalizability of backdoor attacks during the instruction tuning of LVLMs.
We modify existing backdoor attacks based on the above key observations.
This paper underscores that even simple traditional backdoor strategies pose a serious threat to LVLMs.
arXiv Detail & Related papers (2024-06-27T02:31:03Z) - Acoustic Cybersecurity: Exploiting Voice-Activated Systems [0.0]
Our research extends the feasibility of these attacks across various platforms like Amazon's Alexa, Android, iOS, and Cortana.
We quantitatively show that attack success rates hover around 60%, with the ability to activate devices remotely from over 100 feet away.
These attacks threaten critical infrastructure, emphasizing the need for multifaceted defensive strategies.
arXiv Detail & Related papers (2023-11-23T02:26:11Z) - Breaking Speaker Recognition with PaddingBack [18.219474338850787]
Recent research has shown that speech backdoors can utilize transformations as triggers, similar to image backdoors.
We propose PaddingBack, an inaudible backdoor attack that exploits the common padding operation on speech signals to generate poisoned samples.
arXiv Detail & Related papers (2023-08-08T10:36:44Z) - Fake the Real: Backdoor Attack on Deep Speech Classification via Voice Conversion [14.264424889358208]
This work explores a backdoor attack that utilizes sample-specific triggers based on voice conversion.
Specifically, we adopt a pre-trained voice conversion model to generate the trigger, ensuring that the poisoned samples do not introduce any additional audible noise.
arXiv Detail & Related papers (2023-06-28T02:19:31Z) - Robust Contrastive Language-Image Pre-training against Data Poisoning and Backdoor Attacks [52.26631767748843]
We propose ROCLIP, the first effective method for robust pre-training multimodal vision-language models against targeted data poisoning and backdoor attacks.
ROCLIP effectively breaks the association between poisoned image-caption pairs by considering a relatively large and varying pool of random captions.
Our experiments show that ROCLIP renders state-of-the-art targeted data poisoning and backdoor attacks ineffective during pre-training CLIP models.
arXiv Detail & Related papers (2023-03-13T04:49:46Z) - VSVC: Backdoor attack against Keyword Spotting based on Voiceprint Selection and Voice Conversion [6.495134473374733]
Keyword spotting (KWS) based on deep neural networks (DNNs) has achieved massive success in voice control scenarios.
This paper proposes a backdoor attack scheme based on Voiceprint Selection and Voice Conversion, abbreviated as VSVC.
VSVC can achieve an average attack success rate close to 97% in four victim models when poisoning less than 1% of the training data.
arXiv Detail & Related papers (2022-12-20T09:24:25Z) - On the Effectiveness of Adversarial Training against Backdoor Attacks [111.8963365326168]
A backdoored model always predicts a target class in the presence of a predefined trigger pattern.
In general, adversarial training is believed to defend against backdoor attacks.
We propose a hybrid strategy which provides satisfactory robustness across different backdoor attacks.
arXiv Detail & Related papers (2022-02-22T02:24:46Z) - Backdoor Attack against Speaker Verification [86.43395230456339]
We show that it is possible to inject a hidden backdoor into speaker verification models by poisoning the training data.
We also demonstrate that existing backdoor attacks cannot be directly adopted to attack speaker verification.
arXiv Detail & Related papers (2020-10-22T11:10:08Z) - VenoMave: Targeted Poisoning Against Speech Recognition [30.448709704880518]
VENOMAVE is the first training-time poisoning attack against speech recognition.
We evaluate our attack on two datasets: TIDIGITS and Speech Commands.
arXiv Detail & Related papers (2020-10-21T00:30:08Z) - Defense for Black-box Attacks on Anti-spoofing Models by Self-Supervised Learning [71.17774313301753]
We explore the robustness of self-supervised learned high-level representations by using them in the defense against adversarial attacks.
Experimental results on the ASVspoof 2019 dataset demonstrate that high-level representations extracted by Mockingjay can prevent the transferability of adversarial examples.
arXiv Detail & Related papers (2020-06-05T03:03:06Z)