Towards the Transferable Audio Adversarial Attack via Ensemble Methods
- URL: http://arxiv.org/abs/2304.08811v1
- Date: Tue, 18 Apr 2023 08:21:49 GMT
- Title: Towards the Transferable Audio Adversarial Attack via Ensemble Methods
- Authors: Feng Guo, Zheng Sun, Yuxuan Chen and Lei Ju
- Abstract summary: We explore the factors that affect the transferability of adversarial examples (AEs) in deep-learning-based speech recognition.
Our results show a remarkable difference in AE transferability between speech and images: transferability depends only weakly on the underlying data for images, but strongly on it for speech recognition.
Motivated by dropout-based ensemble approaches, we propose random gradient ensembles and dynamic gradient-weighted ensembles, and we evaluate their impact on the transferability of AEs.
- Score: 5.262820533171069
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent years, deep learning (DL) models have achieved significant progress in many domains, such as autonomous driving, facial recognition, and speech recognition. However, the vulnerability of DL models to adversarial attacks has raised serious concerns in the community, as it exposes their insufficient robustness and generalization. Transfer-based attacks have also become a prominent method for mounting black-box attacks. In this work, we explore the factors that affect the transferability of adversarial examples (AEs) in DL-based speech recognition. We also discuss the vulnerability of different DL systems and the irregular nature of decision boundaries. Our results show a remarkable difference in AE transferability between speech and images: transferability depends only weakly on the underlying data for images, but strongly on it for speech recognition. Motivated by dropout-based ensemble approaches, we propose random gradient ensembles and dynamic gradient-weighted ensembles, and we evaluate their impact on the transferability of AEs. The results show that AEs crafted with both approaches successfully transfer to the black-box API.
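As a rough illustration of the two proposed strategies, the sketch below performs one attack update against a set of surrogate models, either averaging gradients from a random subset of surrogates (random gradient ensemble) or weighting each surrogate's gradient by its current loss (dynamic gradient-weighted ensemble). The PyTorch framing, the cross-entropy loss (a speech recognizer would typically use a CTC-style loss), and all names and hyperparameters are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch of the two ensemble attack variants named in the abstract.
import random
import torch
import torch.nn.functional as F

def ensemble_attack_step(x_adv, target, surrogates, mode="random", k=2, alpha=1e-3):
    """One gradient update of input x_adv against an ensemble of surrogate models."""
    grads, losses = [], []
    # Random gradient ensemble: draw a fresh random subset of surrogates each step.
    models = random.sample(surrogates, k) if mode == "random" else list(surrogates)
    for model in models:
        x = x_adv.clone().detach().requires_grad_(True)
        logits = model(x)                       # assumed: (batch, num_classes) scores
        loss = F.cross_entropy(logits, target)  # illustrative loss, not the paper's setup
        grad, = torch.autograd.grad(loss, x)
        grads.append(grad)
        losses.append(loss.detach())

    if mode == "weighted":
        # Dynamic gradient weighting: surrogates with higher loss (not yet fooled
        # by the targeted example) contribute more to the update.
        w = torch.softmax(torch.stack(losses), dim=0)
        g = sum(wi * gi for wi, gi in zip(w, grads))
    else:
        g = torch.stack(grads).mean(dim=0)      # plain average over the random subset

    # Targeted attack: step down the loss toward the target label/transcription.
    return (x_adv - alpha * g.sign()).detach()
```

In an iterative attack, this step would be repeated under a perturbation budget (for example, clipping the accumulated perturbation) so the adversarial audio stays close to the original.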
Related papers
- Semantic-Aligned Adversarial Evolution Triangle for High-Transferability Vision-Language Attack [51.16384207202798]
Vision-language pre-training models are vulnerable to multimodal adversarial examples (AEs).
Previous approaches augment image-text pairs to enhance diversity within the adversarial example generation process.
We propose sampling from adversarial evolution triangles composed of clean, historical, and current adversarial examples to enhance adversarial diversity.
arXiv Detail & Related papers (2024-11-04T23:07:51Z)
- A Systematic Evaluation of Adversarial Attacks against Speech Emotion Recognition Models [6.854732863866882]
Speech emotion recognition (SER) is constantly gaining attention in recent years due to its potential applications in diverse fields.
Recent studies have shown that deep learning models can be vulnerable to adversarial attacks.
arXiv Detail & Related papers (2024-04-29T09:00:32Z)
- SA-Attack: Improving Adversarial Transferability of Vision-Language Pre-training Models via Self-Augmentation [56.622250514119294]
In contrast to white-box adversarial attacks, transfer attacks are more reflective of real-world scenarios.
We propose a self-augment-based transfer attack method, termed SA-Attack.
arXiv Detail & Related papers (2023-12-08T09:08:50Z)
- Adv-Attribute: Inconspicuous and Transferable Adversarial Attack on Face Recognition [111.1952945740271]
Adversarial Attributes (Adv-Attribute) is designed to generate inconspicuous and transferable attacks on face recognition.
Experiments on the FFHQ and CelebA-HQ datasets show that the proposed Adv-Attribute method achieves the state-of-the-art attacking success rates.
arXiv Detail & Related papers (2022-10-13T09:56:36Z)
- Resisting Adversarial Attacks in Deep Neural Networks using Diverse Decision Boundaries [12.312877365123267]
Deep learning systems are vulnerable to crafted adversarial examples that may be imperceptible to the human eye but can lead a model to misclassify.
We develop a new ensemble-based solution that constructs defender models with diverse decision boundaries with respect to the original model.
We present extensive experimentations using standard image classification datasets, namely MNIST, CIFAR-10 and CIFAR-100 against state-of-the-art adversarial attacks.
arXiv Detail & Related papers (2022-08-18T08:19:26Z)
- Characterizing the adversarial vulnerability of speech self-supervised learning [95.03389072594243]
We make the first attempt to investigate the adversarial vulnerability of such a paradigm under attacks from both zero-knowledge and limited-knowledge adversaries.
The experimental results illustrate that the paradigm proposed by SUPERB is seriously vulnerable to limited-knowledge adversaries.
arXiv Detail & Related papers (2021-11-08T08:44:04Z)
- Harnessing Perceptual Adversarial Patches for Crowd Counting [92.79051296850405]
Crowd counting models are vulnerable to adversarial examples in the physical world.
This paper proposes the Perceptual Adversarial Patch (PAP) generation framework to learn the shared perceptual features between models.
arXiv Detail & Related papers (2021-09-16T13:51:39Z)
- Towards Resistant Audio Adversarial Examples [0.0]
We find that, due to flaws in the generation process, state-of-the-art adversarial example generation methods cause overfitting.
We devise an approach to mitigate this flaw and find that our method improves the generation of adversarial examples with varying offsets.
arXiv Detail & Related papers (2020-10-14T16:04:02Z)
- Towards Transferable Adversarial Attack against Deep Face Recognition [58.07786010689529]
Deep convolutional neural networks (DCNNs) have been found to be vulnerable to adversarial examples.
Transferable adversarial examples can severely hinder the robustness of DCNNs.
We propose DFANet, a dropout-based method applied in convolutional layers that increases the diversity of surrogate models (a rough sketch of this idea follows the list).
We generate a new set of adversarial face pairs that can successfully attack four commercial APIs without any queries.
arXiv Detail & Related papers (2020-04-13T06:44:33Z)
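The dropout-based diversification idea referenced in the abstract's motivation and in the DFANet entry above can be sketched as follows: dropout layers are kept stochastic while attack gradients are computed, so each iteration effectively sees a different sub-network of the surrogate. The helper name, the dropout probability, and the choice to keep the rest of the model in eval mode are assumptions for illustration, not the authors' implementation.

```python
# Hedged sketch of dropout-style surrogate diversification (in the spirit of DFANet
# and the dropout-motivated ensembles above).
import torch.nn as nn

def enable_attack_dropout(model: nn.Module, p: float = 0.1) -> None:
    """Keep dropout stochastic during attack-gradient computation (illustrative)."""
    model.eval()                                  # freeze BatchNorm statistics, etc.
    for m in model.modules():
        if isinstance(m, (nn.Dropout, nn.Dropout2d)):
            m.p = p                               # assumed drop probability
            m.train()                             # re-enable randomness for dropout only
```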