AudioFool: Fast, Universal and synchronization-free Cross-Domain Attack on Speech Recognition
- URL: http://arxiv.org/abs/2309.11462v1
- Date: Wed, 20 Sep 2023 16:59:22 GMT
- Title: AudioFool: Fast, Universal and synchronization-free Cross-Domain Attack on Speech Recognition
- Authors: Mohamad Fakih, Rouwaida Kanj, Fadi Kurdahi, Mohammed E. Fouda
- Abstract summary: We investigate the needed properties of robust attacks compatible with the Over-The-Air (OTA) model.
We design a method for generating attacks with any such desired properties.
We evaluate our method on standard keyword classification tasks and analyze it in OTA.
- Score: 0.9913418444556487
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automatic Speech Recognition (ASR) systems have been shown to be vulnerable to adversarial attacks that manipulate the command executed on the device. Recent research has focused on exploring methods to create such attacks; however, some issues relating to Over-The-Air (OTA) attacks have not been properly addressed. In our work, we examine the properties needed for robust attacks compatible with the OTA model, and we design a method for generating attacks with arbitrary desired properties of this kind, namely invariance to synchronization and robustness to filtering; this enables a Denial-of-Service (DoS) attack against ASR systems. We achieve these characteristics by constructing attacks in a modified frequency domain through an inverse Fourier transform. We evaluate our method on standard keyword classification tasks, analyze it in the OTA setting, and study the properties of the cross-domain attacks to explain the efficiency of the approach.
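As an illustration of this cross-domain construction, the sketch below builds a perturbation by prescribing its magnitude spectrum and synthesizing the waveform with an inverse FFT: a circular time shift only changes the phase of the DFT, so the per-bin energy of such a perturbation is unaffected by synchronization, and the energy can be placed in bands that survive filtering. This is a minimal numpy sketch of the idea, not the paper's exact optimization; the band limits and scaling are illustrative assumptions.

```python
import numpy as np

def spectral_perturbation(magnitudes, n_samples, rng=None):
    """Synthesize a time-domain perturbation from a prescribed magnitude
    spectrum via an inverse rFFT. Fixing the magnitudes and leaving the
    phase free makes the per-bin energy synchronization-invariant."""
    rng = np.random.default_rng() if rng is None else rng
    n_bins = n_samples // 2 + 1                        # rFFT bin count
    phase = rng.uniform(0.0, 2.0 * np.pi, size=n_bins)
    spectrum = magnitudes * np.exp(1j * phase)         # fixed magnitude, free phase
    return np.fft.irfft(spectrum, n=n_samples)

# Toy usage: put the energy in a band that survives typical low-pass filtering.
n_samples, sample_rate = 16000, 16000
freqs = np.fft.rfftfreq(n_samples, d=1.0 / sample_rate)
magnitudes = np.where((freqs > 300.0) & (freqs < 3000.0), 1.0, 0.0)
delta = spectral_perturbation(magnitudes, n_samples)
delta = 0.1 * delta / np.max(np.abs(delta))            # illustrative amplitude scaling
```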
Related papers
- Filtered Randomized Smoothing: A New Defense for Robust Modulation Classification [16.974803642923465]
We study the problem of designing robust modulation classifiers that can provide provable defense against arbitrary attacks.
We propose Filtered Randomized Smoothing (FRS), a novel defense which combines spectral filtering together with randomized smoothing.
We show that FRS significantly outperforms existing defenses including AT and RS in terms of accuracy on both attacked and benign signals.
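A minimal sketch of the FRS idea, assuming a hypothetical `classify` callable that maps a 1-D signal to a class id: the input is spectrally low-pass filtered and then classified under repeated Gaussian noise, with the final label decided by majority vote.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def frs_predict(signal, classify, sigma=0.25, n_votes=100, cutoff=0.2, order=4):
    """Filtered Randomized Smoothing sketch: low-pass (spectral) filtering
    followed by a Monte-Carlo majority vote over Gaussian-noised copies.
    `cutoff` is a normalized frequency in (0, 1) relative to Nyquist."""
    b, a = butter(order, cutoff, btype="low")           # spectral filter
    filtered = filtfilt(b, a, signal)
    votes = [classify(filtered + sigma * np.random.randn(*filtered.shape))
             for _ in range(n_votes)]
    return int(np.argmax(np.bincount(np.asarray(votes))))  # smoothed class
```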
arXiv Detail & Related papers (2024-10-08T20:17:25Z)
- Improving the Robustness of Object Detection and Classification AI models against Adversarial Patch Attacks [2.963101656293054]
We analyze attack techniques and propose a robust defense approach.
We successfully reduce model confidence by over 20% using adversarial patch attacks that exploit object shape, texture and position.
Our inpainting defense approach significantly enhances model resilience, achieving high accuracy and reliable localization despite the adversarial attacks.
arXiv Detail & Related papers (2024-03-04T13:32:48Z)
- PuriDefense: Randomized Local Implicit Adversarial Purification for Defending Black-box Query-based Attacks [15.842917276255141]
Black-box query-based attacks threaten Machine Learning as a Service (MLaaS) systems.
We propose an efficient defense mechanism, PuriDefense, that employs random patch-wise purifications with an ensemble of lightweight purification models at a low inference cost.
Our theoretical analysis suggests that this approach slows down the convergence of query-based attacks by incorporating randomness into purifications.
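A rough sketch of the randomized patch-wise purification idea, assuming a hypothetical list of lightweight `purifiers` (each a callable mapping a patch to a purified patch); drawing a different purifier at random for every patch is what injects the randomness intended to slow query-based attacks.

```python
import numpy as np

def puridefense(image, purifiers, patch=8, rng=None):
    """Randomized patch-wise purification sketch: each patch is cleaned by a
    purifier drawn at random from a small ensemble, so repeated queries on
    the same input see different purified versions."""
    rng = np.random.default_rng() if rng is None else rng
    out = image.copy()
    h, w = image.shape[:2]
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            f = purifiers[rng.integers(len(purifiers))]   # random ensemble member
            out[y:y + patch, x:x + patch] = f(image[y:y + patch, x:x + patch])
    return out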
arXiv Detail & Related papers (2024-01-19T09:54:23Z)
- FreqFed: A Frequency Analysis-Based Approach for Mitigating Poisoning Attacks in Federated Learning [98.43475653490219]
Federated learning (FL) is susceptible to poisoning attacks.
FreqFed is a novel aggregation mechanism that transforms the model updates into the frequency domain.
We demonstrate that FreqFed can mitigate poisoning attacks effectively with a negligible impact on the utility of the aggregated model.
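A hedged sketch of frequency-domain aggregation in the spirit of FreqFed: each flattened client update is mapped to the frequency domain with a DCT, only the low-frequency fingerprints are compared, and outlying updates are dropped before averaging. The actual FreqFed filters updates by clustering these components; the simple median-distance cutoff below is an assumption made for brevity.

```python
import numpy as np
from scipy.fft import dct

def frequency_filtered_average(updates, keep_frac=0.1):
    """Frequency-domain aggregation sketch: DCT each flattened client update,
    compare only the low-frequency coefficients, and average the updates
    whose spectral fingerprints lie closest to the coordinate-wise median."""
    spectra = np.stack([dct(u, norm="ortho") for u in updates])
    k = max(1, int(keep_frac * spectra.shape[1]))       # low-frequency band
    fingerprints = spectra[:, :k]
    median_fp = np.median(fingerprints, axis=0)
    dists = np.linalg.norm(fingerprints - median_fp, axis=1)
    keep = dists <= np.median(dists)                    # drop the farthest half
    return np.stack(updates)[keep].mean(axis=0)
```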
arXiv Detail & Related papers (2023-12-07T16:56:24Z)
- Improving Adversarial Robustness to Sensitivity and Invariance Attacks with Deep Metric Learning [80.21709045433096]
A standard approach to adversarial robustness defends against samples crafted by minimally perturbing a correctly classified input.
We use metric learning to frame adversarial regularization as an optimal transport problem.
Our preliminary results indicate that regularizing over invariant perturbations in our framework improves defense against both invariance and sensitivity attacks.
arXiv Detail & Related papers (2022-11-04T13:54:02Z)
- Sequential Randomized Smoothing for Adversarially Robust Speech Recognition [26.96883887938093]
Our paper leverages speech-specific tools such as enhancement and ROVER voting to design an ASR model that is robust to perturbations.
We show that our strongest defense is robust to all attacks that use inaudible noise, and can only be broken with very high distortion.
arXiv Detail & Related papers (2021-11-05T21:51:40Z)
- Model-Agnostic Meta-Attack: Towards Reliable Evaluation of Adversarial Robustness [53.094682754683255]
We propose a Model-Agnostic Meta-Attack (MAMA) approach to discover stronger attack algorithms automatically.
Our method learns the optimizer in adversarial attacks, parameterized by a recurrent neural network.
We develop a model-agnostic training algorithm to improve the generalization ability of the learned optimizer when attacking unseen defenses.
arXiv Detail & Related papers (2021-10-13T13:54:24Z)
- Adaptive Feature Alignment for Adversarial Training [56.17654691470554]
CNNs are typically vulnerable to adversarial attacks, which pose a threat to security-sensitive applications.
We propose adaptive feature alignment (AFA) to generate features of arbitrary attacking strengths.
Our method is trained to automatically align features of arbitrary attacking strength.
arXiv Detail & Related papers (2021-05-31T17:01:05Z)
- WaveGuard: Understanding and Mitigating Audio Adversarial Examples [12.010555227327743]
We introduce WaveGuard: a framework for detecting adversarial inputs crafted to attack ASR systems.
Our framework incorporates audio transformation functions and analyses the ASR transcriptions of the original and transformed audio to detect adversarial inputs.
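A simplified sketch of this detection idea, assuming hypothetical `transcribe` and `transform` callables: adversarial perturbations tend to be fragile, so a benign transformation changes the transcription of an attacked input far more than that of a clean one, and a character-error-rate threshold flags the discrepancy.

```python
import numpy as np

def character_error_rate(ref, hyp):
    """Normalized Levenshtein distance between two transcripts."""
    dp = np.zeros((len(ref) + 1, len(hyp) + 1), dtype=int)
    dp[:, 0] = np.arange(len(ref) + 1)
    dp[0, :] = np.arange(len(hyp) + 1)
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i, j] = min(dp[i - 1, j] + 1,
                           dp[i, j - 1] + 1,
                           dp[i - 1, j - 1] + cost)
    return dp[-1, -1] / max(len(ref), 1)

def flag_adversarial(audio, transcribe, transform, threshold=0.3):
    """Compare transcriptions of the original and transformed audio; a large
    discrepancy suggests a (fragile) adversarial perturbation."""
    return character_error_rate(transcribe(audio),
                                transcribe(transform(audio))) > threshold

# Example transformation: coarse amplitude quantization.
quantize = lambda x: np.round(np.asarray(x) * 64.0) / 64.0
```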
arXiv Detail & Related papers (2021-03-04T21:44:37Z)
- A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN)-based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
arXiv Detail & Related papers (2020-06-08T20:42:39Z)
- Temporal Sparse Adversarial Attack on Sequence-based Gait Recognition [56.844587127848854]
We demonstrate that the state-of-the-art gait recognition model is vulnerable to such attacks.
We employ a generative adversarial network based architecture to semantically generate adversarial high-quality gait silhouettes or video frames.
The experimental results show that if only one-fortieth of the frames are attacked, the accuracy of the target model drops dramatically.
arXiv Detail & Related papers (2020-02-22T10:08:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.