Enabling Fast and Universal Audio Adversarial Attack Using Generative Model
- URL: http://arxiv.org/abs/2004.12261v2
- Date: Sun, 7 Feb 2021 17:59:13 GMT
- Title: Enabling Fast and Universal Audio Adversarial Attack Using Generative Model
- Authors: Yi Xie, Zhuohang Li, Cong Shi, Jian Liu, Yingying Chen, Bo Yuan
- Abstract summary: We propose the fast audio adversarial perturbation generator (FAPG).
FAPG uses a generative model to generate adversarial perturbations for the audio input in a single forward pass.
We also propose the universal audio adversarial perturbation generator (UAPG).
- Score: 21.559732692440424
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, the vulnerability of DNN-based audio systems to adversarial
attacks has attracted increasing attention. However, existing audio adversarial
attacks assume that the adversary possesses the user's entire audio input and
has a sufficient time budget to generate the adversarial perturbations. These
idealized assumptions make existing audio adversarial attacks largely
impossible to launch in a timely fashion in practice (e.g., playing
unnoticeable adversarial perturbations along with the user's streaming input).
To overcome these limitations, in this paper we propose the fast audio
adversarial perturbation generator (FAPG), which uses a generative model to
generate adversarial perturbations for the audio input in a single forward
pass, thereby drastically improving the perturbation generation speed. Built on
top of FAPG, we further propose the universal audio adversarial perturbation
generator (UAPG), a scheme that crafts a universal adversarial perturbation
which can be imposed on arbitrary benign audio input to cause
misclassification. Extensive experiments show that our proposed FAPG achieves
up to 167X speedup over state-of-the-art audio adversarial attack methods, and
our proposed UAPG generates universal adversarial perturbations with much
better attack performance than state-of-the-art solutions.
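As a rough illustration of the FAPG idea, the sketch below shows a small convolutional generator that maps a waveform to a bounded perturbation in a single forward pass, plus a UAPG-style universal perturbation imposed on arbitrary input. The architecture, layer sizes, and epsilon bound are illustrative assumptions, not the paper's actual design.

```python
# Illustrative sketch of an FAPG-style perturbation generator (assumed
# architecture and bound; the paper's actual network and training
# objective may differ).
import torch
import torch.nn as nn

class PerturbationGenerator(nn.Module):
    """Maps a raw waveform to a bounded adversarial perturbation."""
    def __init__(self, eps: float = 0.01):
        super().__init__()
        self.eps = eps  # hypothetical L-infinity bound on the perturbation
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv1d(16, 16, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv1d(16, 1, kernel_size=9, padding=4), nn.Tanh(),
        )

    def forward(self, waveform: torch.Tensor) -> torch.Tensor:
        # One forward pass per input, with no per-input iterative
        # optimization; this is where the speedup over iterative
        # attacks comes from.
        return self.eps * self.net(waveform)

gen = PerturbationGenerator()
x = torch.randn(1, 1, 16000)           # one second of 16 kHz audio
x_adv = (x + gen(x)).clamp(-1.0, 1.0)  # adversarial example in one pass

# UAPG-style universal perturbation: a single input-agnostic tensor that
# can be imposed on arbitrary benign audio (randomly initialized here;
# in practice it would be learned to maximize misclassification).
delta = (0.01 * torch.randn(1, 1, 16000)).clamp(-0.01, 0.01)
x_adv_universal = (x + delta).clamp(-1.0, 1.0)
```

At attack time only the forward pass runs; the cost of fooling the victim model is paid once, during the generator's training.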
Related papers
- Push-Pull: Characterizing the Adversarial Robustness for Audio-Visual Active Speaker Detection [88.74863771919445]
We reveal the vulnerability of AVASD models under audio-only, visual-only, and audio-visual adversarial attacks.
We also propose a novel audio-visual interaction loss (AVIL) to make it difficult for attackers to find feasible adversarial examples.
arXiv Detail & Related papers (2022-10-03T08:10:12Z)
- Universal Speech Enhancement with Score-based Diffusion [21.294665965300922]
We present a universal speech enhancement system that tackles 55 different distortions at the same time.
Our approach consists of a generative model that employs score-based diffusion, together with a multi-resolution conditioning network.
We show that this approach significantly outperforms the state of the art in a subjective test performed by expert listeners.
arXiv Detail & Related papers (2022-06-07T07:32:32Z)
- Mixture GAN For Modulation Classification Resiliency Against Adversarial Attacks [55.92475932732775]
We propose a novel generative adversarial network (GAN)-based countermeasure approach.
The GAN-based countermeasure aims to eliminate adversarial examples before they are fed to the DNN-based classifier (see the purification sketch after this list).
Simulation results show the effectiveness of the proposed defense GAN, which enhances the accuracy of the DNN-based AMC under adversarial attacks to approximately 81%.
arXiv Detail & Related papers (2022-05-29T22:30:32Z)
- ROOM: Adversarial Machine Learning Attacks Under Real-Time Constraints [3.042299765078767]
This paper introduces a new problem: how do we generate adversarial noise under real-time constraints to support real-time adversarial attacks?
We show how an offline component serves to warm up the online algorithm, making it possible to generate highly successful attacks under time constraints.
arXiv Detail & Related papers (2022-01-05T14:03:26Z)
- Towards Robust Speech-to-Text Adversarial Attack [78.5097679815944]
This paper introduces a novel adversarial algorithm for attacking the state-of-the-art speech-to-text systems, namely DeepSpeech, Kaldi, and Lingvo.
Our approach is based on developing an extension for the conventional distortion condition of the adversarial optimization formulation.
Minimizing over this metric, which measures the discrepancies between original and adversarial samples' distributions, contributes to crafting signals very close to the subspace of legitimate speech recordings.
arXiv Detail & Related papers (2021-03-15T01:51:41Z)
- Cortical Features for Defense Against Adversarial Audio Attacks [55.61885805423492]
We propose using a computational model of the auditory cortex as a defense against adversarial attacks on audio.
We show that the cortical features help defend against universal adversarial examples.
arXiv Detail & Related papers (2021-01-30T21:21:46Z)
- Double Targeted Universal Adversarial Perturbations [83.60161052867534]
We introduce double targeted universal adversarial perturbations (DT-UAPs) to bridge the gap between instance-discriminative image-dependent perturbations and generic universal perturbations.
We show the effectiveness of the proposed DTA algorithm on a wide range of datasets and also demonstrate its potential as a physical attack.
arXiv Detail & Related papers (2020-10-07T09:08:51Z)
- Robust Reinforcement Learning using Adversarial Populations [118.73193330231163]
Reinforcement Learning (RL) is an effective tool for controller design but can struggle with issues of robustness.
We show that using a single adversary does not consistently yield robustness to dynamics variations under standard parametrizations of the adversary.
We propose a population-based augmentation to the Robust RL formulation in which we randomly initialize a population of adversaries and sample from the population uniformly during training.
arXiv Detail & Related papers (2020-08-04T20:57:32Z)
- Frequency-Tuned Universal Adversarial Attacks [19.79803434998116]
We propose a frequency-tuned universal attack method to compute universal perturbations.
We show that our method can realize a good balance between perceivability and effectiveness in terms of fooling rate.
arXiv Detail & Related papers (2020-03-11T22:52:19Z)
- Real-time, Universal, and Robust Adversarial Attacks Against Speaker Recognition Systems [21.559732692440424]
We propose the first real-time, universal, and robust adversarial attack against the state-of-the-art deep neural network (DNN) based speaker recognition system.
Experiment using a public dataset of 109 English speakers demonstrates the effectiveness and robustness of our proposed attack with a high attack success rate of over 90%.
arXiv Detail & Related papers (2020-03-04T19:30:15Z)
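For the GAN-based countermeasure in the Mixture GAN entry above, a minimal sketch of the purification idea follows. The autoencoder-style purifier and its layer sizes are assumptions for illustration (the discriminator used during training is omitted), not the paper's actual architecture.

```python
# Minimal sketch of GAN-style input purification before classification
# (assumed autoencoder purifier; the paper's actual design may differ).
import torch
import torch.nn as nn

class Purifier(nn.Module):
    """Projects a (possibly adversarial) signal back toward clean data."""
    def __init__(self, dim: int = 1024, hidden: int = 256):
        super().__init__()
        self.encode = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
        self.decode = nn.Linear(hidden, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decode(self.encode(x))

purifier = Purifier()             # generator, trained on clean signals
classifier = nn.Linear(1024, 10)  # stand-in for the DNN-based AMC

x_adv = torch.randn(8, 1024)      # batch of possibly perturbed inputs
with torch.no_grad():
    x_clean = purifier(x_adv)     # strip the perturbation first...
    logits = classifier(x_clean)  # ...then classify the purified input
```

The defense rests on the purifier being trained, adversarially against a discriminator, to map any input onto the manifold of clean signals, so the downstream classifier never sees the perturbation.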
This list is automatically generated from the titles and abstracts of the papers on this site.