Audio Attacks and Defenses against AED Systems - A Practical Study
- URL: http://arxiv.org/abs/2106.07428v1
- Date: Mon, 14 Jun 2021 13:42:49 GMT
- Title: Audio Attacks and Defenses against AED Systems - A Practical Study
- Authors: Rodrigo dos Santos and Shirin Nilizadeh
- Abstract summary: We evaluate deep learning-based Audio Event Detection (AED) systems against evasion attacks through adversarial examples.
We generate audio adversarial examples using two different types of noise, namely background and white noise, that can be used by the adversary to evade detection.
We show that countermeasures, namely adversarial training and a custom denoising technique, can be successful when applied to the audio input.
- Score: 2.365611283869544
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Audio Event Detection (AED) systems capture audio from the environment and
employ deep learning algorithms to detect the presence of a specific sound of
interest. In this paper, we evaluate deep learning-based AED systems against
evasion attacks through adversarial examples. We run multiple security-critical
AED tasks, implemented as CNN classifiers, and then generate audio adversarial
examples using two different types of noise, namely background and white noise,
that can be used by the adversary to evade detection. We also examine the
robustness of existing third-party AED-capable devices, such as Nest devices
manufactured by Google, which run their own black-box deep learning models.
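As a concrete illustration of the attack surface just described, here is a minimal, hypothetical sketch of noise-based evasion: additive white noise (or a recorded background track) is scaled up until a CNN-based AED classifier flips its prediction. The `TinyAED` model, the `evade_with_noise` helper, and all parameters are illustrative stand-ins, not the paper's actual architecture or search procedure.

```python
# Hypothetical sketch of a noise-mixing evasion attack against an AED CNN.
# All names and the toy model below are illustrative, not the paper's setup.
import torch
import torch.nn as nn

class TinyAED(nn.Module):
    """Stand-in for an AED CNN classifying 1-second, 16 kHz waveforms."""
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=64, stride=16), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(16, n_classes),
        )

    def forward(self, x):  # x: (batch, 1, samples)
        return self.net(x)

def evade_with_noise(model, clip, noise, levels=torch.linspace(0.01, 1.0, 50)):
    """Return (adversarial clip, noise level) for the weakest tested noise
    level that flips the model's prediction, or None if none succeeds."""
    model.eval()
    with torch.no_grad():
        original = model(clip).argmax(dim=1)
        for eps in levels:                  # try the least audible noise first
            candidate = clip + eps * noise  # additive white/background noise
            if model(candidate).argmax(dim=1) != original:
                return candidate, float(eps)
    return None

if __name__ == "__main__":
    torch.manual_seed(0)
    model = TinyAED()
    clip = torch.randn(1, 1, 16000)       # placeholder clip of the target event
    white_noise = torch.randn_like(clip)  # or a recorded background track
    result = evade_with_noise(model, clip, white_noise)
    print("evasion succeeded at noise level:", result[1] if result else None)
```

An over-the-air attacker would additionally need the perturbed clip to survive playback and room acoustics, which this sketch ignores.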
We show that an adversary can focus on audio adversarial inputs to cause AED
systems to misclassify, similar to prior work on adversarial examples in the
image domain. We then seek to improve the classifiers' robustness through
countermeasures to these attacks. We employ adversarial training and a custom
denoising technique. We show that these countermeasures, when applied to the
audio input, can be successful, either in isolation or in combination, yielding
performance increases of nearly fifty percent for the classifiers under attack.
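The abstract does not detail either countermeasure, so the following is only a hedged sketch: each training batch is augmented with noise-perturbed copies (adversarial training), and a generic moving-average low-pass filter stands in for the paper's custom denoiser. All names are hypothetical.

```python
# Hedged sketch of the two countermeasures named in the abstract. The denoiser
# is a generic moving-average filter, NOT the paper's custom technique.
import torch
import torch.nn as nn
import torch.nn.functional as F

def denoise(x, kernel_size: int = 9):
    """Illustrative denoiser: moving-average smoothing along the waveform."""
    weight = torch.ones(1, 1, kernel_size) / kernel_size
    return F.conv1d(x, weight, padding=kernel_size // 2)

def adversarial_training_step(model, optimizer, clips, labels, eps: float = 0.3):
    """One optimizer step on clean clips plus copies perturbed with the same
    additive noise an attacker would use; labels are unchanged by the noise."""
    noisy = clips + eps * torch.randn_like(clips)
    batch = torch.cat([clips, noisy])
    targets = torch.cat([labels, labels])
    optimizer.zero_grad()
    loss = F.cross_entropy(model(batch), targets)
    loss.backward()
    optimizer.step()
    return float(loss)

if __name__ == "__main__":
    torch.manual_seed(0)
    model = nn.Sequential(                # minimal stand-in for an AED CNN
        nn.Conv1d(1, 16, 64, 16), nn.ReLU(),
        nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(16, 2))
    opt = torch.optim.SGD(model.parameters(), lr=1e-3)
    clips, labels = torch.randn(8, 1, 16000), torch.randint(0, 2, (8,))
    print("loss:", adversarial_training_step(model, opt, denoise(clips), labels))
```

Denoising before classification and noise-augmented training compose naturally, which matches the abstract's observation that the defenses help both in isolation and in combination.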
Related papers
- Push-Pull: Characterizing the Adversarial Robustness for Audio-Visual Active Speaker Detection [88.74863771919445]
We reveal the vulnerability of AVASD models under audio-only, visual-only, and audio-visual adversarial attacks.
We also propose a novel audio-visual interaction loss (AVIL) that makes it difficult for attackers to find feasible adversarial examples.
arXiv Detail & Related papers (2022-10-03T08:10:12Z)
- On the Detection of Adaptive Adversarial Attacks in Speaker Verification Systems [0.0]
Adversarial attacks, such as FAKEBOB, can work effectively against speaker verification systems.
The goal of this paper is to design a detector that can distinguish an original audio from an audio contaminated by adversarial attacks.
We show that our proposed detector is easy to implement, fast to process an input audio, and effective in determining whether an audio is corrupted by FAKEBOB attacks.
arXiv Detail & Related papers (2022-02-11T16:02:06Z)
- Blackbox Untargeted Adversarial Testing of Automatic Speech Recognition Systems [1.599072005190786]
Speech recognition systems are prevalent in applications for voice navigation and voice control of domestic appliances.
Deep neural networks (DNNs) have been shown to be susceptible to adversarial perturbations.
To help test the correctness of ASR systems, we propose techniques that automatically generate blackbox, untargeted adversarial attacks.
arXiv Detail & Related papers (2021-12-03T10:21:47Z)
- Adversarial Robustness of Deep Reinforcement Learning based Dynamic Recommender Systems [50.758281304737444]
We propose to explore adversarial examples and attack detection on reinforcement learning-based interactive recommendation systems.
We first craft different types of adversarial examples by adding perturbations to the input and intervening on the causal factors.
Then, we augment the recommendation systems by detecting potential attacks with a deep learning-based classifier trained on the crafted data.
arXiv Detail & Related papers (2021-12-02T04:12:24Z)
- Improving the Adversarial Robustness for Speaker Verification by Self-Supervised Learning [95.60856995067083]
This work is among the first to perform adversarial defense for ASV without knowing the specific attack algorithms.
We propose to perform adversarial defense from two perspectives: 1) adversarial perturbation purification and 2) adversarial perturbation detection.
Experimental results show that our detection module effectively shields the ASV by detecting adversarial samples with an accuracy of around 80%.
arXiv Detail & Related papers (2021-06-01T07:10:54Z)
- WaveGuard: Understanding and Mitigating Audio Adversarial Examples [12.010555227327743]
We introduce WaveGuard: a framework for detecting adversarial inputs crafted to attack ASR systems.
Our framework incorporates audio transformation functions and analyses the ASR transcriptions of the original and transformed audio to detect adversarial inputs (a sketch of this transform-and-compare pattern appears after this list).
arXiv Detail & Related papers (2021-03-04T21:44:37Z)
- Cortical Features for Defense Against Adversarial Audio Attacks [55.61885805423492]
We propose using a computational model of the auditory cortex as a defense against adversarial attacks on audio.
We show that the cortical features help defend against universal adversarial examples.
arXiv Detail & Related papers (2021-01-30T21:21:46Z)
- Open-set Adversarial Defense [93.25058425356694]
We show that open-set recognition systems are vulnerable to adversarial attacks.
Motivated by this observation, we emphasize the need for an Open-Set Adversarial Defense (OSAD) mechanism.
This paper proposes an Open-Set Defense Network (OSDN) as a solution to the OSAD problem.
arXiv Detail & Related papers (2020-09-02T04:35:33Z)
- Self-Supervised Learning of Audio-Visual Objects from Video [108.77341357556668]
We introduce a model that uses attention to localize and group sound sources, and optical flow to aggregate information over time.
We demonstrate the effectiveness of the audio-visual object embeddings that our model learns by using them for four downstream speech-oriented tasks.
arXiv Detail & Related papers (2020-08-10T16:18:01Z)
- Detecting Audio Attacks on ASR Systems with Dropout Uncertainty [40.9172128924305]
We show that our defense is able to detect attacks created through optimized perturbations and frequency masking.
We test our defense on Mozilla's CommonVoice dataset, the UrbanSound dataset, and an excerpt of the LibriSpeech dataset.
arXiv Detail & Related papers (2020-06-02T19:40:38Z)
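The WaveGuard entry above names a transform-and-compare pattern: benign audio yields nearly the same transcription before and after a small input transform, while adversarial perturbations tend not to survive the transform. Below is a minimal sketch of that check under stated assumptions; `transcribe` is a placeholder hook for any real ASR system, and quantization stands in for WaveGuard's family of transforms.

```python
# Minimal sketch of WaveGuard-style detection: compare transcriptions of the
# original and a transformed copy; large divergence suggests an attack.
# The `transcribe` callable is a hypothetical hook, not a real ASR API.
import difflib
import numpy as np

def quantize(audio: np.ndarray, bits: int = 8) -> np.ndarray:
    """Example input transform: coarse amplitude quantize-dequantize."""
    levels = 2 ** bits
    return np.round(audio * levels) / levels

def is_adversarial(audio, transcribe, threshold: float = 0.5) -> bool:
    """Flag the input when the two transcriptions agree less than `threshold`
    (a similarity ratio in [0, 1]); tune the threshold on benign audio."""
    t_orig = transcribe(audio)
    t_xform = transcribe(quantize(audio))
    similarity = difflib.SequenceMatcher(None, t_orig, t_xform).ratio()
    return similarity < threshold
```

The WaveGuard paper evaluates several such transforms (including quantization-dequantization, downsampling-upsampling, filtering, and mel extraction-inversion); the detection threshold is calibrated so that benign inputs rarely trip the detector.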