Adversarial attacks on audio source separation
- URL: http://arxiv.org/abs/2010.03164v3
- Date: Mon, 15 Feb 2021 04:12:10 GMT
- Title: Adversarial attacks on audio source separation
- Authors: Naoya Takahashi, Shota Inoue, Yuki Mitsufuji
- Abstract summary: We reformulate various adversarial attack methods for the audio source separation problem.
We propose a simple yet effective regularization method to obtain imperceptible adversarial noise.
We also show the robustness of source separation models against a black-box attack.
- Score: 26.717340178640498
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite the excellent performance of neural-network-based audio source
separation methods and their wide range of applications, their robustness
against intentional attacks has been largely neglected. In this work, we
reformulate various adversarial attack methods for the audio source separation
problem and intensively investigate them under different attack conditions and
target models. We further propose a simple yet effective regularization method
to obtain imperceptible adversarial noise while maximizing the impact on
separation quality with low computational complexity. Experimental results show
that it is possible to largely degrade the separation quality by adding
imperceptibly small noise when the noise is crafted for the target model. We
also show the robustness of source separation models against a black-box
attack. This study provides potentially useful insights for developing content
protection methods against the abuse of separated signals and improving the
separation performance and robustness.
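To make the attack setting concrete, here is a minimal sketch (not the authors' code) of a white-box, gradient-based attack on a differentiable source separator: the perturbation is optimized to increase the separation error while an L2 penalty keeps it small and therefore hard to hear. The `separator` callable, the MSE objective, and all hyperparameters are illustrative assumptions standing in for the paper's exact formulation and regularizer.

```python
# Illustrative sketch of a gradient-based attack on a source-separation model.
# `separator`, the loss choice, and the hyperparameters are assumptions, not the
# paper's method: the L2 penalty stands in for its imperceptibility regularizer.
import torch


def craft_adversarial_noise(separator, mixture, target_source,
                            steps=100, lr=1e-3, reg_weight=10.0):
    """Find a small additive perturbation that degrades separation of `mixture`.

    separator     : differentiable callable mapping a waveform (1, T) to an estimated source (1, T)
    mixture       : (1, T) input mixture waveform
    target_source : (1, T) reference source the separator should recover
    reg_weight    : weight of the noise-magnitude penalty (imperceptibility)
    """
    noise = torch.zeros_like(mixture, requires_grad=True)
    opt = torch.optim.Adam([noise], lr=lr)

    for _ in range(steps):
        est = separator(mixture + noise)
        # Maximize the separation error (plain MSE here; SDR-style losses also work) ...
        separation_loss = torch.nn.functional.mse_loss(est, target_source)
        # ... while keeping the perturbation energy small, hence imperceptible.
        noise_penalty = reg_weight * noise.pow(2).mean()
        loss = -separation_loss + noise_penalty

        opt.zero_grad()
        loss.backward()
        opt.step()

    return noise.detach()
```

For the black-box condition mentioned in the abstract, the same procedure would be run on a surrogate separator and the resulting noise applied to an unseen target model to probe transferability.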
Related papers
- Robust VAEs via Generating Process of Noise Augmented Data [9.366139389037489]
This paper introduces a novel framework that enhances robustness by regularizing the latent space divergence between original and noise-augmented data.
Our empirical evaluations demonstrate that this approach, termed Robust Augmented Variational Auto-ENcoder (RAVEN), yields superior performance in resisting adversarial inputs.
arXiv Detail & Related papers (2024-07-26T09:55:34Z)
- Improving the Robustness of Summarization Systems with Dual Augmentation [68.53139002203118]
A robust summarization system should be able to capture the gist of the document, regardless of the specific word choices or noise in the input.
We first explore the summarization models' robustness against perturbations including word-level synonym substitution and noise.
We propose SummAttacker, an efficient approach to generating adversarial samples based on language models.
arXiv Detail & Related papers (2023-06-01T19:04:17Z)
- Robust Deep Learning Models Against Semantic-Preserving Adversarial Attack [3.7264705684737893]
Deep learning models can be fooled both by small $l_p$-norm adversarial perturbations and by natural, attribute-level perturbations.
We propose a novel attack mechanism named Semantic-Preserving Adversarial (SPA) attack, which can then be used to enhance adversarial training.
arXiv Detail & Related papers (2023-04-08T08:28:36Z)
- Improving robustness of jet tagging algorithms with adversarial training [56.79800815519762]
We investigate the vulnerability of flavor tagging algorithms via application of adversarial attacks.
We present an adversarial training strategy that mitigates the impact of such simulated attacks.
arXiv Detail & Related papers (2022-03-25T19:57:19Z)
- Improving White-box Robustness of Pre-processing Defenses via Joint Adversarial Training [106.34722726264522]
A range of adversarial defense techniques have been proposed to mitigate the interference of adversarial noise.
Pre-processing methods may suffer from the robustness degradation effect.
A potential cause of this negative effect is that the adversarial training examples are static and independent of the pre-processing model.
We propose a method called Joint Adversarial Training based Pre-processing (JATP) defense.
arXiv Detail & Related papers (2021-06-10T01:45:32Z)
- Removing Adversarial Noise in Class Activation Feature Space [160.78488162713498]
We propose to remove adversarial noise by implementing a self-supervised adversarial training mechanism in a class activation feature space.
We train a denoising model to minimize the distances between the adversarial examples and the natural examples in the class activation feature space.
Empirical evaluations demonstrate that our method could significantly enhance adversarial robustness in comparison to previous state-of-the-art approaches.
arXiv Detail & Related papers (2021-04-19T10:42:24Z)
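The entry above ties robustness to denoising in a class activation feature space. The sketch below shows, under assumptions, one training step of such a denoiser: `denoiser`, the fixed `classifier` with a hypothetical `features()` method, and the paired batches are illustrative stand-ins, and a generic feature map is used in place of the paper's class activation features.

```python
# Illustrative sketch (not the paper's code): train a denoiser so that denoised
# adversarial inputs land close to their natural counterparts in the feature
# space of a fixed classifier. `classifier.features` is a hypothetical accessor.
import torch


def feature_space_denoising_step(denoiser, classifier, optimizer,
                                 natural_batch, adversarial_batch):
    """One training step of the denoiser on paired natural/adversarial examples."""
    classifier.eval()  # the classifier only provides features; it is not updated

    denoised = denoiser(adversarial_batch)
    with torch.no_grad():
        target_features = classifier.features(natural_batch)
    denoised_features = classifier.features(denoised)

    # Pull the denoised adversarial examples toward the natural examples
    # in feature space rather than in the raw input space.
    loss = torch.nn.functional.mse_loss(denoised_features, target_features)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```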
- Towards Robust Speech-to-Text Adversarial Attack [78.5097679815944]
This paper introduces a novel adversarial algorithm for attacking the state-of-the-art speech-to-text systems, namely DeepSpeech, Kaldi, and Lingvo.
Our approach is based on extending the conventional distortion condition of the adversarial optimization formulation with a metric that measures the discrepancy between the distributions of original and adversarial samples.
Minimizing this metric contributes to crafting signals very close to the subspace of legitimate speech recordings.
arXiv Detail & Related papers (2021-03-15T01:51:41Z)
- On the Limitations of Denoising Strategies as Adversarial Defenses [29.73831728610021]
Adversarial attacks against machine learning models have raised increasing concerns.
In this paper, we analyze the defense strategies in the form of symmetric transformation via data denoising and reconstruction.
Experimental results show that the adaptive compression strategies enable the model to better suppress adversarial perturbations.
arXiv Detail & Related papers (2020-12-17T03:54:30Z)
- Learning to Generate Noise for Multi-Attack Robustness [126.23656251512762]
Adversarial learning has emerged as one of the successful techniques to circumvent the susceptibility of existing methods against adversarial perturbations.
Most existing defenses, however, are tailored to a single type of adversarial perturbation; in safety-critical applications this makes them extraneous, as the attacker can adopt diverse adversaries to deceive the system.
We propose a novel meta-learning framework that explicitly learns to generate noise to improve the model's robustness against multiple types of attacks.
arXiv Detail & Related papers (2020-06-22T10:44:05Z)