On the robustness of non-intrusive speech quality model by adversarial examples
- URL: http://arxiv.org/abs/2211.06508v1
- Date: Fri, 11 Nov 2022 23:06:24 GMT
- Title: On the robustness of non-intrusive speech quality model by adversarial examples
- Authors: Hsin-Yi Lin, Huan-Hsin Tseng, Yu Tsao
- Abstract summary: We show that deep speech quality predictors can be vulnerable to adversarial perturbations.
We further explore and confirm the viability of adversarial training for strengthening the robustness of models.
- Score: 10.985001960872264
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: It has been shown recently that deep learning based models are
effective at speech quality prediction and can outperform traditional metrics
in several respects. Although network models have the potential to serve as a
surrogate for complex human hearing perception, their predictions may be
unstable. This work shows that deep speech quality predictors can be
vulnerable to adversarial perturbations, where the prediction can be changed
drastically by unnoticeable perturbations as small as $-30$ dB relative to the
speech inputs. Beyond exposing this vulnerability of deep speech quality
predictors, we further explore and confirm the viability of adversarial
training for strengthening model robustness.
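As a concrete illustration of the kind of attack the abstract describes, here is a minimal sketch of a one-step (FGSM-style) gradient attack on a generic non-intrusive quality predictor, with the perturbation rescaled so its energy sits $-30$ dB below that of the input. The `model` below is a hypothetical stand-in, not the authors' predictor, and the attack details are an assumption for illustration; the paper does not publish this code.

```python
import torch
import torch.nn as nn

def scale_to_db(delta: torch.Tensor, x: torch.Tensor, target_db: float = -30.0) -> torch.Tensor:
    """Rescale delta so its energy sits target_db below the energy of x."""
    amp_ratio = 10.0 ** (target_db / 20.0)      # -30 dB -> ~0.0316 amplitude ratio
    return delta * amp_ratio * x.norm() / (delta.norm() + 1e-12)

def fgsm_attack(model: nn.Module, x: torch.Tensor, target_db: float = -30.0) -> torch.Tensor:
    """One-step gradient attack that pushes the predicted quality score down."""
    x_adv = x.clone().requires_grad_(True)
    model(x_adv).mean().backward()              # gradient of the MOS-like score w.r.t. input
    delta = scale_to_db(x_adv.grad.sign(), x, target_db)
    return (x - delta).detach()                 # step against the score gradient

# Hypothetical stand-in for a trained non-intrusive speech quality predictor.
model = nn.Sequential(nn.Conv1d(1, 8, 9, stride=4), nn.ReLU(),
                      nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(8, 1))

x = torch.randn(1, 1, 16000)                    # placeholder for 1 s of 16 kHz speech
x_adv = fgsm_attack(model, x)
print(model(x).item(), model(x_adv).item())     # predicted score before vs. after attack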
Related papers
- Over-parameterization and Adversarial Robustness in Neural Networks: An Overview and Empirical Analysis [25.993502776271022]
A large parameter space is considered one of the main suspects behind neural networks' vulnerability to adversarial examples.
Previous research has demonstrated that, depending on the model considered, the algorithm employed to generate adversarial examples may not function properly.
arXiv Detail & Related papers (2024-06-14T14:47:06Z)
- On the Behavior of Intrusive and Non-intrusive Speech Enhancement Metrics in Predictive and Generative Settings [14.734454356396157]
We evaluate the performance of the same speech enhancement backbone trained under predictive and generative paradigms.
We show that intrusive and non-intrusive measures correlate differently for each paradigm (the two measure types are contrasted in the sketch after this entry).
arXiv Detail & Related papers (2023-06-05T16:30:17Z)
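The intrusive/non-intrusive distinction this entry draws is also the setting of the main paper. The toy comparison below is a sketch under stated assumptions: `pesq` is the open-source PESQ package (an intrusive metric that requires the clean reference), while `quality_model` is a hypothetical stand-in for a learned non-intrusive predictor, not any model from these papers.

```python
import numpy as np
from pesq import pesq          # open-source PESQ package; intrusive, reference-based

fs = 16000
clean = 0.1 * np.random.randn(fs).astype(np.float32)            # placeholder; in practice
noisy = clean + 0.01 * np.random.randn(fs).astype(np.float32)   # load real speech (PESQ expects it)

# Intrusive: the score is a function of the (reference, degraded) pair.
intrusive_score = pesq(fs, clean, noisy, 'wb')

# Non-intrusive: the score is estimated from the degraded signal alone.
def quality_model(x: np.ndarray) -> float:     # hypothetical toy predictor, not a real model
    return float(5.0 - 10.0 * np.abs(x).mean())

non_intrusive_score = quality_model(noisy)
print(intrusive_score, non_intrusive_score)
```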
- Robust Transferable Feature Extractors: Learning to Defend Pre-Trained Networks Against White Box Adversaries [69.53730499849023]
We show that adversarial examples can be successfully transferred to another independently trained model to induce prediction errors.
We propose a deep learning-based pre-processing mechanism, which we refer to as a robust transferable feature extractor (RTFE).
arXiv Detail & Related papers (2022-09-14T21:09:34Z)
- Conditional Diffusion Probabilistic Model for Speech Enhancement [101.4893074984667]
We propose a novel speech enhancement algorithm that incorporates characteristics of the observed noisy speech signal into the diffusion and reverse processes.
In our experiments, we demonstrate strong performance of the proposed approach compared to representative generative models.
arXiv Detail & Related papers (2022-02-10T18:58:01Z)
- Characterizing the adversarial vulnerability of speech self-supervised learning [95.03389072594243]
We make the first attempt to investigate the adversarial vulnerability of this paradigm under attacks from both zero-knowledge and limited-knowledge adversaries.
The experimental results illustrate that the paradigm proposed by SUPERB is seriously vulnerable to limited-knowledge adversaries.
arXiv Detail & Related papers (2021-11-08T08:44:04Z)
- Residual Error: a New Performance Measure for Adversarial Robustness [85.0371352689919]
A major challenge limiting the widespread adoption of deep learning models has been their fragility to adversarial attacks.
This study presents the concept of residual error, a new performance measure for assessing the adversarial robustness of a deep neural network.
Experimental results on image classification demonstrate the effectiveness of the proposed residual error metric.
arXiv Detail & Related papers (2021-06-18T16:34:23Z)
- On the Transferability of Adversarial Attacks against Neural Text Classifier [121.6758865857686]
We investigate the transferability of adversarial examples for text classification models.
We propose a genetic algorithm to find an ensemble of models that can induce adversarial examples to fool almost all existing models.
We derive word replacement rules that can be used for model diagnostics from these adversarial examples.
arXiv Detail & Related papers (2020-11-17T10:45:05Z)
- Recent Advances in Understanding Adversarial Robustness of Deep Neural Networks [15.217367754000913]
It is increasingly important to obtain models that are robust to adversarial examples.
We give preliminary definitions of adversarial attacks and robustness.
We study frequently-used benchmarks and mention theoretically-proved bounds for adversarial robustness.
arXiv Detail & Related papers (2020-11-03T07:42:53Z)
- Adversarial Attack and Defense of Structured Prediction Models [58.49290114755019]
In this paper, we investigate attacks and defenses for structured prediction tasks in NLP.
The structured output of structured prediction models is sensitive to small perturbations in the input.
We propose a novel and unified framework that learns to attack a structured prediction model using a sequence-to-sequence model.
arXiv Detail & Related papers (2020-10-04T15:54:03Z)
- On the human evaluation of audio adversarial examples [1.7006003864727404]
Adversarial examples are inputs intentionally perturbed to produce a wrong prediction without being noticed.
High fooling rates of proposed adversarial perturbation strategies are only valuable if the perturbations are not detectable.
We demonstrate that the metrics employed by convention are not a reliable measure of the perceptual similarity of adversarial examples in the audio domain.
arXiv Detail & Related papers (2020-01-23T10:56:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.