Can You Hear It? Backdoor Attacks via Ultrasonic Triggers
- URL: http://arxiv.org/abs/2107.14569v1
- Date: Fri, 30 Jul 2021 12:08:16 GMT
- Title: Can You Hear It? Backdoor Attacks via Ultrasonic Triggers
- Authors: Stefanos Koffas, Jing Xu, Mauro Conti, Stjepan Picek
- Abstract summary: In this work, we explore backdoor attacks against automatic speech recognition systems in which we inject inaudible triggers.
Our results indicate that less than 1% of poisoned data is sufficient to deploy a backdoor attack and reach a 100% attack success rate.
- Score: 31.147899305987934
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep neural networks represent a powerful option for many real-world
applications due to their ability to model even complex data relations.
However, such neural networks can also be prohibitively expensive to train,
making it common to either outsource the training process to third parties or
use pretrained neural networks. Unfortunately, such practices make neural
networks vulnerable to various attacks, where one attack is the backdoor
attack. In such an attack, the third party training the model may maliciously
inject hidden behaviors into the model. Then, if a particular input (called a
trigger) is fed into the neural network, the network responds with a wrong
result.
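To make the poisoning step concrete, the following minimal sketch illustrates the generic dirty-label backdoor recipe described above: a small fraction of training samples is stamped with an attacker-chosen trigger and relabeled to a target class. The function names, poisoning rate, and target label here are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def poison_training_set(samples, labels, apply_trigger, target_label,
                        poison_rate=0.01, seed=0):
    """Dirty-label backdoor poisoning (generic sketch, not the paper's exact recipe):
    stamp a small fraction of samples with a trigger and relabel them so the trained
    model learns to associate the trigger with the attacker's target class."""
    rng = np.random.default_rng(seed)
    n_poison = max(1, int(poison_rate * len(samples)))
    poisoned_idx = rng.choice(len(samples), size=n_poison, replace=False)
    for i in poisoned_idx:
        samples[i] = apply_trigger(samples[i])  # overlay the trigger on the input
        labels[i] = target_label                # mislabel as the attacker's target
    return samples, labels
```

At inference time, a model trained on such data behaves normally on clean inputs but predicts the target class whenever the trigger is present.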
In this work, we explore backdoor attacks against automatic speech recognition
systems in which we inject inaudible triggers. By doing so, we make the
backdoor attack challenging for legitimate users to detect and thus potentially
more dangerous. We conduct experiments on two dataset versions and three neural
networks, exploring the performance of our attack with respect to
the duration, position, and type of the trigger. Our results indicate that less
than 1% of poisoned data is sufficient to deploy a backdoor attack and reach a
100% attack success rate. What is more, although the inaudible trigger places
no practical limit on the duration of the signal, we observed that even short,
non-continuous triggers result in highly successful attacks.
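As a rough illustration of how such an inaudible trigger could be injected, the sketch below superimposes a short high-frequency sine tone onto an audio waveform; it could serve as the `apply_trigger` function in the generic poisoning sketch above. The 44.1 kHz sampling rate, 21 kHz tone, amplitude, duration, and position are assumptions made for illustration; the abstract does not specify these values.

```python
import numpy as np

SAMPLE_RATE = 44_100   # Hz; assumed, and must exceed twice the tone frequency (Nyquist)
TRIGGER_FREQ = 21_000  # Hz; above the typical range of human hearing (assumed)

def add_ultrasonic_trigger(waveform, duration_s=0.1, amplitude=0.1, start_sample=0):
    """Superimpose a short inaudible sine tone onto a mono waveform scaled to [-1, 1]."""
    n = int(duration_s * SAMPLE_RATE)
    t = np.arange(n) / SAMPLE_RATE
    tone = amplitude * np.sin(2 * np.pi * TRIGGER_FREQ * t)

    poisoned = np.asarray(waveform, dtype=np.float64).copy()
    end = min(start_sample + n, len(poisoned))
    poisoned[start_sample:end] += tone[: end - start_sample]
    return np.clip(poisoned, -1.0, 1.0)
```

The duration_s and start_sample parameters mirror the trigger duration and position that the experiments above vary.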
Related papers
- DeepBaR: Fault Backdoor Attack on Deep Neural Network Layers [0.0]
We introduce DeepBaR, a novel approach that implants backdoors in neural networks by faulting their behavior during training.
We attack three popular convolutional neural network architectures and show that DeepBaR attacks have a success rate of up to 98.30%.
arXiv Detail & Related papers (2024-07-30T22:14:47Z)
- Backdoor Attack with Sparse and Invisible Trigger [57.41876708712008]
Deep neural networks (DNNs) are vulnerable to backdoor attacks.
The backdoor attack is an emerging yet serious training-phase threat.
We propose a sparse and invisible backdoor attack (SIBA).
arXiv Detail & Related papers (2023-05-11T10:05:57Z)
- FreeEagle: Detecting Complex Neural Trojans in Data-Free Cases [50.065022493142116]
The Trojan attack on deep neural networks, also known as the backdoor attack, is a typical threat to artificial intelligence.
FreeEagle is the first data-free backdoor detection method that can effectively detect complex backdoor attacks.
arXiv Detail & Related papers (2023-02-28T11:31:29Z)
- Untargeted Backdoor Attack against Object Detection [69.63097724439886]
We design a poison-only backdoor attack in an untargeted manner, based on task characteristics.
We show that, once the backdoor is embedded into the target model by our attack, it can trick the model to lose detection of any object stamped with our trigger patterns.
arXiv Detail & Related papers (2022-11-02T17:05:45Z)
- BATT: Backdoor Attack with Transformation-based Triggers [72.61840273364311]
Deep neural networks (DNNs) are vulnerable to backdoor attacks.
Backdoor adversaries inject hidden backdoors that can be activated by adversary-specified trigger patterns.
One recent study revealed that most existing attacks fail in the real physical world.
arXiv Detail & Related papers (2022-11-02T16:03:43Z)
- Verifying Neural Networks Against Backdoor Attacks [7.5033553032683855]
We propose an approach to verify whether a given neural network is free of backdoors with a certain level of success rate.
Experiment results show that our approach effectively verifies the absence of backdoor or generates backdoor triggers.
arXiv Detail & Related papers (2022-05-14T07:25:54Z)
- Sleeper Agent: Scalable Hidden Trigger Backdoors for Neural Networks Trained from Scratch [99.90716010490625]
Backdoor attackers tamper with training data to embed a vulnerability in models that are trained on that data.
This vulnerability is then activated at inference time by placing a "trigger" into the model's input.
We develop a new hidden trigger attack, Sleeper Agent, which employs gradient matching, data selection, and target model re-training during the crafting process.
arXiv Detail & Related papers (2021-06-16T17:09:55Z)
- Explainability-based Backdoor Attacks Against Graph Neural Networks [9.179577599489559]
There are numerous works on backdoor attacks against neural networks, but only a few consider graph neural networks (GNNs).
We apply two powerful GNN explainability approaches to select the optimal trigger injecting position to achieve two attacker objectives -- high attack success rate and low clean accuracy drop.
Our empirical results on benchmark datasets and state-of-the-art neural network models demonstrate the proposed method's effectiveness.
arXiv Detail & Related papers (2021-04-08T10:43:40Z)
- Don't Trigger Me! A Triggerless Backdoor Attack Against Deep Neural Networks [22.28270345106827]
Current state-of-the-art backdoor attacks require the adversary to modify the input, usually by adding a trigger to it, for the target model to activate the backdoor.
This added trigger not only increases the difficulty of launching the backdoor attack in the physical world, but also can be easily detected by multiple defense mechanisms.
We present the first triggerless backdoor attack against deep neural networks, where the adversary does not need to modify the input for triggering the backdoor.
arXiv Detail & Related papers (2020-10-07T09:01:39Z)
- Defending against Backdoor Attack on Deep Neural Networks [98.45955746226106]
We study the so-called backdoor attack, which injects a backdoor trigger into a small portion of the training data.
Experiments show that our method can effectively decrease the attack success rate while maintaining high classification accuracy on clean images.
arXiv Detail & Related papers (2020-02-26T02:03:00Z)