Identifying Audio Adversarial Examples via Anomalous Pattern Detection
- URL: http://arxiv.org/abs/2002.05463v2
- Date: Sat, 25 Jul 2020 06:25:22 GMT
- Title: Identifying Audio Adversarial Examples via Anomalous Pattern Detection
- Authors: Victor Akinwande, Celia Cintas, Skyler Speakman, Srihari Sridharan
- Abstract summary: We show that two recent, state-of-the-art adversarial attacks on audio processing systems lead to higher-than-expected activation at some subset of nodes.
We can detect these attacks with an AUC of up to 0.98 and no degradation in performance on benign samples.
- Score: 4.556497931273283
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Audio processing models based on deep neural networks are susceptible to
adversarial attacks even when the adversarial audio waveform is 99.9% similar
to a benign sample. Given the wide application of DNN-based audio recognition
systems, detecting the presence of adversarial examples is of high practical
relevance. By applying anomalous pattern detection techniques in the activation
space of these models, we show that two recent, state-of-the-art adversarial
attacks on audio processing systems systematically lead to higher-than-expected
activation at some subset of nodes, and that we can detect these attacks with
an AUC of up to 0.98 with no degradation in performance on benign samples.
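To make the approach concrete, here is a minimal sketch of activation-space anomaly scoring, assuming a Berk-Jones-style nonparametric scan statistic over per-node empirical p-values; the layer choice, the exact statistic, and all shapes are illustrative stand-ins rather than the paper's precise recipe.

```python
import numpy as np

def empirical_pvalues(clean_activations, sample_activations):
    """Per-node empirical p-value: the fraction of clean (background)
    activations at least as large as the test sample's activation.
    Small p-values mark nodes with anomalously high activation."""
    # clean_activations: (n_clean, n_nodes); sample_activations: (n_nodes,)
    return (clean_activations >= sample_activations).mean(axis=0)

def scan_score(pvalues, alphas=np.linspace(0.01, 0.5, 50)):
    """Berk-Jones-style scan statistic: for each significance level alpha,
    measure how much the observed fraction of p-values <= alpha exceeds
    alpha (its expectation under the null), and keep the maximum."""
    n = len(pvalues)
    best = 0.0
    for alpha in alphas:
        n_alpha = int((pvalues <= alpha).sum())
        p_hat = n_alpha / n
        if p_hat <= alpha:      # no excess of small p-values at this level
            continue
        score = n_alpha * np.log(p_hat / alpha)
        if n_alpha < n:         # avoid log(0) when every p-value is small
            score += (n - n_alpha) * np.log((1 - p_hat) / (1 - alpha))
        best = max(best, score)
    return best

# Hypothetical usage: activations from one hidden layer of an audio model.
rng = np.random.default_rng(0)
clean = rng.normal(size=(1000, 256))    # background (benign) activations
test = rng.normal(size=256) + 0.5       # shifted, anomalously high
score = scan_score(empirical_pvalues(clean, test))
# Flag as adversarial if `score` exceeds a threshold calibrated on clean data.
```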
Related papers
- MIMII-Gen: Generative Modeling Approach for Simulated Evaluation of Anomalous Sound Detection System [5.578413517654703]
Insufficient recordings and the scarcity of anomalies present significant challenges in developing robust anomaly detection systems.
We propose a novel approach for generating diverse anomalies in machine sound using a latent diffusion-based model that integrates an encoder-decoder framework.
arXiv Detail & Related papers (2024-09-27T08:21:31Z)
- Adaptive Fake Audio Detection with Low-Rank Model Squeezing [50.7916414913962]
Traditional approaches, such as fine-tuning, are computationally intensive and pose a risk of impairing the acquired knowledge of known fake audio types.
We introduce the concept of training low-rank adaptation matrices tailored specifically to the newly emerging fake audio types.
Our approach offers several advantages, including reduced storage memory requirements and lower equal error rates.
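As a rough illustration of the idea, here is a minimal LoRA-style adapter sketch, assuming a standard frozen-base-plus-low-rank-update formulation; the rank, scaling, and where the adapters sit in the detector are hypothetical, not details from the paper.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base linear layer plus a trainable low-rank update A @ B.
    One small (A, B) pair can be trained per newly emerging fake-audio
    type while the base model's knowledge stays intact."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # freeze acquired knowledge
        self.lora_a = nn.Parameter(torch.randn(base.in_features, rank) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(rank, base.out_features))
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + (x @ self.lora_a @ self.lora_b) * self.scale

# Hypothetical usage: adapt one layer of a fake-audio detector.
layer = LoRALinear(nn.Linear(512, 512), rank=8)
out = layer(torch.randn(4, 512))             # (batch, features)
```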
arXiv Detail & Related papers (2023-06-08T06:06:42Z)
- Adversarial Examples Detection with Enhanced Image Difference Features based on Local Histogram Equalization [20.132066800052712]
We propose an adversarial example detection framework based on a high-frequency information enhancement strategy.
This framework can effectively extract and amplify the feature differences between adversarial examples and normal examples.
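A minimal sketch of one way such a difference feature could be computed, assuming CLAHE (via scikit-image's `equalize_adapthist`) as the local histogram equalization step; the paper's exact enhancement strategy and classifier are not reproduced here.

```python
import numpy as np
from skimage import exposure

def difference_feature(image):
    """Amplify high-frequency content by contrasting an image with its
    locally histogram-equalized version; adversarial perturbations tend
    to show up more strongly in this difference than benign content."""
    # image: float array in [0, 1], shape (H, W) or (H, W, 3)
    equalized = exposure.equalize_adapthist(image, clip_limit=0.02)
    return equalized - image                 # enhanced difference map

# Hypothetical usage: feed the difference map to a small binary classifier
# trained to separate adversarial from normal examples.
img = np.random.rand(64, 64)
feat = difference_feature(img)
```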
arXiv Detail & Related papers (2023-05-08T03:14:01Z)
- Anomalous Sound Detection using Audio Representation with Machine ID based Contrastive Learning Pretraining [52.191658157204856]
This paper uses contrastive learning to refine audio representations for each machine ID, rather than for each audio sample.
The proposed two-stage method uses contrastive learning to pretrain the audio representation model.
Experiments show that our method outperforms the state-of-the-art methods using contrastive learning or self-supervised classification.
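A minimal sketch of ID-conditioned contrastive pretraining, assuming a supervised-contrastive loss in which clips sharing a machine ID act as positives; the batch construction, temperature, and encoder are hypothetical.

```python
import torch
import torch.nn.functional as F

def machine_id_contrastive_loss(embeddings, machine_ids, temperature=0.1):
    """Supervised-contrastive loss where clips sharing a machine ID are
    positives: their embeddings are pulled together and all other pairs
    in the batch are pushed apart."""
    z = F.normalize(embeddings, dim=1)               # (batch, dim)
    sim = z @ z.t() / temperature                    # scaled cosine sims
    sim.fill_diagonal_(float('-inf'))                # exclude self-pairs
    log_prob = F.log_softmax(sim, dim=1)
    pos = machine_ids.unsqueeze(0) == machine_ids.unsqueeze(1)
    pos.fill_diagonal_(False)
    # Mean log-probability of each sample's positives (0 if it has none).
    pos_counts = pos.sum(dim=1).clamp(min=1)
    loss = -(log_prob.masked_fill(~pos, 0.0)).sum(dim=1) / pos_counts
    return loss.mean()

# Hypothetical pretraining step (stage one of the two-stage method):
emb = torch.randn(8, 128)                            # encoder outputs
ids = torch.tensor([0, 0, 1, 1, 2, 2, 3, 3])         # machine IDs in batch
loss = machine_id_contrastive_loss(emb, ids)
```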
arXiv Detail & Related papers (2023-04-07T11:08:31Z)
- The role of noise in denoising models for anomaly detection in medical images [62.0532151156057]
Pathological brain lesions exhibit diverse appearance in brain images.
Unsupervised anomaly detection approaches have been proposed using only normal data for training.
We show that optimization of the spatial resolution and magnitude of the noise improves the performance of different model training regimes.
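As a hedged illustration of treating noise resolution and magnitude as tunable knobs, here is a sketch using upsampled Gaussian noise as a stand-in for the paper's actual noise models; `resolution` and `magnitude` are the two parameters being optimized over.

```python
import torch
import torch.nn.functional as F

def coarse_noise(batch, height, width, resolution=16, magnitude=0.2):
    """Noise whose spatial resolution and magnitude are explicit knobs:
    sample Gaussian noise on a coarse grid and bilinearly upsample it,
    so `resolution` sets the spatial scale and `magnitude` the strength."""
    low = torch.randn(batch, 1, resolution, resolution)
    noise = F.interpolate(low, size=(height, width), mode='bilinear',
                          align_corners=False)
    return noise * magnitude

# Hypothetical usage in a denoising training regime: corrupt normal images,
# train the model to remove the noise, and flag large residuals as anomalies.
x = torch.rand(4, 1, 128, 128)               # normal brain slices (toy)
x_noisy = x + coarse_noise(4, 128, 128, resolution=8, magnitude=0.3)
```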
arXiv Detail & Related papers (2023-01-19T21:39:38Z)
- Mitigating Closed-model Adversarial Examples with Bayesian Neural Modeling for Enhanced End-to-End Speech Recognition [18.83748866242237]
We focus on a rigorous and empirical "closed-model adversarial robustness" setting.
We propose an advanced Bayesian neural network (BNN) based adversarial detector.
We improve the detection rate by +2.77 to +5.42% (+3.03 to +6.26% relative) and reduce the word error rate by 5.02 to 7.47% on LibriSpeech datasets.
arXiv Detail & Related papers (2022-02-17T09:17:58Z)
- Spotting adversarial samples for speaker verification by neural vocoders [102.1486475058963]
We adopt neural vocoders to spot adversarial samples for automatic speaker verification (ASV).
We find that the difference between the ASV scores for the original and re-synthesized audio is a good indicator for discriminating between genuine and adversarial samples.
Our code will be made open source so that future work can compare against it.
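The detection rule itself is simple enough to sketch; `asv_score`, `vocoder_resynthesize`, and `threshold` below are hypothetical placeholders for a speaker-verification scorer, a pretrained neural vocoder, and a threshold calibrated on genuine samples.

```python
def detect_adversarial(waveform, enroll_embedding, asv_score,
                       vocoder_resynthesize, threshold):
    """Flag an input as adversarial when the ASV score changes too much
    after the audio is re-synthesized by a neural vocoder. Adversarial
    perturbations tend not to survive re-synthesis, so the score shifts."""
    s_orig = asv_score(waveform, enroll_embedding)
    s_resyn = asv_score(vocoder_resynthesize(waveform), enroll_embedding)
    return abs(s_orig - s_resyn) > threshold
```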
arXiv Detail & Related papers (2021-07-01T08:58:16Z)
- Adversarial Examples Detection with Bayesian Neural Network [57.185482121807716]
We propose a new framework to detect adversarial examples, motivated by the observation that random components can improve the smoothness of predictors.
We propose a novel Bayesian adversarial example detector, BATer for short, to improve the performance of adversarial example detection.
arXiv Detail & Related papers (2021-05-18T15:51:24Z)
- No Need to Know Physics: Resilience of Process-based Model-free Anomaly Detection for Industrial Control Systems [95.54151664013011]
We present a novel framework to generate adversarial spoofing signals that violate physical properties of the system.
We analyze four anomaly detectors published at top security conferences.
arXiv Detail & Related papers (2020-12-07T11:02:44Z)
- Improved Detection of Adversarial Images Using Deep Neural Networks [2.3993545400014873]
Recent studies indicate that machine learning models used for classification tasks are vulnerable to adversarial examples.
We propose a new approach called Feature Map Denoising to detect adversarial inputs.
We evaluate detection performance on a mixed dataset containing adversarial examples.
arXiv Detail & Related papers (2020-07-10T19:02:24Z)
- Detecting Adversarial Examples for Speech Recognition via Uncertainty Quantification [21.582072216282725]
Machine learning systems and, specifically, automatic speech recognition (ASR) systems are vulnerable to adversarial attacks.
In this paper, we focus on hybrid ASR systems and compare four acoustic models regarding their ability to indicate uncertainty under attack.
We are able to detect adversarial examples with an area under the receiver operating characteristic curve of more than 0.99.
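The paper compares four acoustic models' uncertainty estimates; as one generic example of uncertainty quantification (not their exact method), here is a minimal MC-dropout predictive-entropy sketch with a hypothetical model and feature shapes.

```python
import torch

def mc_dropout_uncertainty(model, features, n_samples=20):
    """Monte Carlo dropout as one way to quantify model uncertainty:
    keep dropout active at test time, average the softmax outputs over
    several stochastic forward passes, and use the predictive entropy
    of the mean as the uncertainty score."""
    model.train()                        # keep dropout layers stochastic
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(features), dim=-1)
                             for _ in range(n_samples)]).mean(dim=0)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)
    return entropy                       # high entropy -> likely adversarial

# Hypothetical usage with any classifier containing dropout layers:
net = torch.nn.Sequential(torch.nn.Linear(40, 64), torch.nn.ReLU(),
                          torch.nn.Dropout(0.3), torch.nn.Linear(64, 10))
scores = mc_dropout_uncertainty(net, torch.randn(5, 40))
# Flag inputs whose entropy exceeds a threshold calibrated on clean data.
```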
arXiv Detail & Related papers (2020-05-24T19:31:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.