Adjust-free adversarial example generation in speech recognition using
evolutionary multi-objective optimization under black-box condition
- URL: http://arxiv.org/abs/2012.11138v2
- Date: Tue, 22 Dec 2020 15:53:04 GMT
- Title: Adjust-free adversarial example generation in speech recognition using
evolutionary multi-objective optimization under black-box condition
- Authors: Shoma Ishida, Satoshi Ono
- Abstract summary: This paper proposes a black-box adversarial attack method against automatic speech recognition systems.
Experimental results showed that the proposed method successfully generated adjust-free adversarial examples.
- Score: 1.2944868613449219
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper proposes a black-box adversarial attack method against
automatic speech recognition systems. Some studies have attempted to attack
neural networks for speech recognition; however, these methods did not consider
the robustness of the generated adversarial examples against timing lag
relative to the target speech. The method proposed in this paper adopts
Evolutionary Multi-objective Optimization (EMO), which allows it to generate
robust adversarial examples under a black-box scenario. Experimental results
showed that the proposed method successfully generated adjust-free adversarial
examples, which are sufficiently robust against timing lag that an attacker
does not need to synchronize playback with the target speech.
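The abstract does not spell out the optimization, but the idea can be illustrated with a minimal sketch: treat the perturbation size and the worst-case attack loss over candidate timing offsets as two competing objectives and evolve a population of perturbations. Everything below is an assumption made for illustration; in particular, `asr_attack_loss` is a hypothetical stand-in for one black-box query to the target recognizer, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def asr_attack_loss(audio):
    """Hypothetical black-box query to the target ASR system; lower means
    the recognized text is closer to the attacker's target phrase.
    Placeholder only -- a real attack would query the actual recognizer."""
    return float(np.sum(audio ** 2))

def objectives(delta, speech, offsets):
    """Two objectives to minimize: perturbation size, and worst-case attack
    loss over candidate timing offsets (the 'adjust-free' requirement)."""
    worst = max(asr_attack_loss(speech + np.roll(delta, int(k))) for k in offsets)
    return np.array([np.linalg.norm(delta), worst])

def dominates(a, b):
    """Pareto dominance: a is no worse than b everywhere, better somewhere."""
    return bool(np.all(a <= b) and np.any(a < b))

def emo_attack(speech, pop_size=16, gens=50, sigma=0.002):
    offsets = rng.integers(0, len(speech), size=8)  # sampled timing lags
    pop = [sigma * rng.standard_normal(len(speech)) for _ in range(pop_size)]
    for _ in range(gens):
        children = [p + sigma * rng.standard_normal(len(speech)) for p in pop]
        pool = pop + children
        scores = [objectives(d, speech, offsets) for d in pool]
        # keep the non-dominated front, then pad with remaining candidates
        front = [d for i, d in enumerate(pool)
                 if not any(dominates(scores[j], scores[i])
                            for j in range(len(pool)) if j != i)]
        rest = [d for d in pool if not any(d is f for f in front)]
        pop = (front + rest)[:pop_size]
    return pop  # Pareto set: trade-offs between stealth and offset robustness

adv_set = emo_attack(0.1 * rng.standard_normal(16000))  # ~1 s of 16 kHz audio
```

Returning the whole Pareto set rather than a single solution reflects the multi-objective framing: the attacker can pick the quietest perturbation that is still robust enough across offsets.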
Related papers
- A Constraint-Enforcing Reward for Adversarial Attacks on Text Classifiers [10.063169009242682]
We train an encoder-decoder paraphrase model to generate adversarial examples.
We adopt a reinforcement learning algorithm and propose a constraint-enforcing reward.
We show how key design choices impact the generated examples and discuss the strengths and weaknesses of the proposed approach (one plausible shape of the reward is sketched below).
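The summary only names the reward. One plausible shape for a constraint-enforcing reward, sketched under assumed names and thresholds (nothing below is taken from the paper), is:

```python
def constraint_enforcing_reward(flipped_label, similarity,
                                sim_threshold=0.8, penalty=1.0):
    """Hypothetical reward for an RL-trained paraphraser: pay out only when
    the paraphrase flips the victim classifier's prediction, and subtract a
    penalty whenever the similarity constraint to the original text is
    violated. `similarity` is assumed to lie in [0, 1]."""
    reward = 1.0 if flipped_label else 0.0
    if similarity < sim_threshold:  # constraint violated
        reward -= penalty
    return reward
```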
arXiv Detail & Related papers (2024-05-20T09:33:43Z)
- Generalizable Black-Box Adversarial Attack with Meta Learning [54.196613395045595]
In a black-box adversarial attack, the target model's parameters are unknown, and the attacker aims to find a successful perturbation based on query feedback under a query budget.
We propose to utilize the feedback information across historical attacks, dubbed example-level adversarial transferability.
The proposed framework with the two types of adversarial transferability can be naturally combined with any off-the-shelf query-based attack method to boost its performance (a minimal sketch of the query-based setting follows).
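As background for the query-feedback setting this paper builds on, a generic score-based black-box attack loop looks roughly like the sketch below. This is a plain random-search baseline, not the paper's meta-learning method; `query_model` stands in for one query to the unknown target model (e.g. returning the loss on the true label).

```python
import numpy as np

rng = np.random.default_rng(0)

def query_attack(x, query_model, budget=1000, step=0.01):
    """Score-based black-box attack: only query_model(x) is observable,
    and the attacker spends a fixed query budget (random-search baseline)."""
    delta = np.zeros_like(x)
    best = query_model(x + delta)
    for _ in range(budget):
        trial = delta + step * rng.standard_normal(x.shape)
        score = query_model(x + trial)  # one query against the target
        if score > best:                # higher loss = closer to fooling it
            delta, best = trial, score
    return x + delta
```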
arXiv Detail & Related papers (2023-01-01T07:24:12Z)
- RSD-GAN: Regularized Sobolev Defense GAN Against Speech-to-Text Adversarial Attacks [9.868221447090853]
This paper introduces a new synthesis-based defense algorithm for counteracting adversarial attacks developed for challenging the performance of speech-to-text transcription systems.
Our algorithm implements a Sobolev-based GAN and proposes a novel regularizer for effectively controlling the functionality of the entire generative model.
arXiv Detail & Related papers (2022-07-14T12:22:19Z)
- Towards Defending against Adversarial Examples via Attack-Invariant Features [147.85346057241605]
Deep neural networks (DNNs) are vulnerable to adversarial noise.
Adversarial robustness can be improved by exploiting adversarial examples, but models trained on seen types of adversarial examples generally cannot generalize well to unseen types.
arXiv Detail & Related papers (2021-06-09T12:49:54Z)
- Towards Robust Speech-to-Text Adversarial Attack [78.5097679815944]
This paper introduces a novel adversarial algorithm for attacking the state-of-the-art speech-to-text systems, namely DeepSpeech, Kaldi, and Lingvo.
Our approach extends the conventional distortion condition of the adversarial optimization formulation with a metric that measures the discrepancy between the distributions of original and adversarial samples; minimizing this metric helps craft signals very close to the subspace of legitimate speech recordings.
arXiv Detail & Related papers (2021-03-15T01:51:41Z)
- Generalizing Adversarial Examples by AdaBelief Optimizer [6.243028964381449]
We propose an AdaBelief iterative Fast Gradient Sign Method to generalize adversarial examples.
Compared with state-of-the-art attack methods, our proposed method can generate adversarial examples effectively in the white-box setting.
The transfer rate is 7%-21% higher than that of the latest attack methods (a rough sketch of the update follows).
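The abstract does not give the update rule; the sketch below shows one way an AdaBelief-style step can be folded into iterative FGSM, with all names and hyperparameters assumed for illustration. `grad_fn(x_adv)` is a hypothetical callable returning the loss gradient with respect to the input (so this sketch is white-box, matching the paper's setting).

```python
import numpy as np

def ab_ifgsm(x, grad_fn, eps=0.03, steps=10, beta1=0.9, beta2=0.999, tiny=1e-8):
    """AdaBelief-flavoured iterative FGSM sketch: accumulate a momentum m
    and a 'belief' s (variance of g - m), then step along the sign of the
    adapted gradient and clip back into the eps-ball around x."""
    x_adv = x.copy()
    m = np.zeros_like(x)
    s = np.zeros_like(x)
    alpha = eps / steps                       # per-step budget
    for t in range(1, steps + 1):
        g = grad_fn(x_adv)
        m = beta1 * m + (1 - beta1) * g
        s = beta2 * s + (1 - beta2) * (g - m) ** 2
        m_hat = m / (1 - beta1 ** t)          # bias correction
        s_hat = s / (1 - beta2 ** t)
        x_adv = x_adv + alpha * np.sign(m_hat / (np.sqrt(s_hat) + tiny))
        x_adv = np.clip(x_adv, x - eps, x + eps)  # stay within the eps-ball
    return x_adv
```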
arXiv Detail & Related papers (2021-01-25T07:39:16Z)
- Class-Conditional Defense GAN Against End-to-End Speech Attacks [82.21746840893658]
We propose a novel approach against end-to-end adversarial attacks developed to fool advanced speech-to-text systems such as DeepSpeech and Lingvo.
Unlike conventional defense approaches, the proposed approach does not directly employ low-level transformations such as autoencoding a given input signal.
Our defense-GAN considerably outperforms conventional defense algorithms in terms of word error rate and sentence level recognition accuracy.
arXiv Detail & Related papers (2020-10-22T00:02:02Z)
- Towards Resistant Audio Adversarial Examples [0.0]
We find that, due to flaws in the generation process, state-of-the-art adversarial example generation methods cause overfitting. We devise an approach to mitigate this flaw and find that our method improves the generation of adversarial examples with varying offsets (a generic sketch of offset averaging follows).
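The abstract does not describe the mitigation concretely; a generic way to avoid overfitting to a single alignment is to average the attack loss over random offsets during generation (expectation-over-transformations style). The sketch below assumes a hypothetical `attack_loss` callable and is not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def offset_robust_loss(delta, speech, attack_loss, n_offsets=16):
    """Average the attack loss over randomly shifted alignments of the
    perturbation, so the optimizer cannot overfit to one exact offset.
    `attack_loss` maps perturbed audio to a scalar loss (assumed)."""
    shifts = rng.integers(0, len(speech), size=n_offsets)
    return float(np.mean([attack_loss(speech + np.roll(delta, int(k)))
                          for k in shifts]))
```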
arXiv Detail & Related papers (2020-10-14T16:04:02Z)
- Defense for Black-box Attacks on Anti-spoofing Models by Self-Supervised Learning [71.17774313301753]
We explore the robustness of self-supervised learned high-level representations by using them in the defense against adversarial attacks.
Experimental results on the ASVspoof 2019 dataset demonstrate that high-level representations extracted by Mockingjay can prevent the transferability of adversarial examples.
arXiv Detail & Related papers (2020-06-05T03:03:06Z)
- Temporal Sparse Adversarial Attack on Sequence-based Gait Recognition [56.844587127848854]
We demonstrate that the state-of-the-art gait recognition model is vulnerable to such attacks.
We employ a generative adversarial network based architecture to semantically generate adversarial high-quality gait silhouettes or video frames.
The experimental results show that if only one-fortieth of the frames are attacked, the accuracy of the target model drops dramatically.
arXiv Detail & Related papers (2020-02-22T10:08:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.