On the Detection of Adaptive Adversarial Attacks in Speaker Verification
Systems
- URL: http://arxiv.org/abs/2202.05725v1
- Date: Fri, 11 Feb 2022 16:02:06 GMT
- Title: On the Detection of Adaptive Adversarial Attacks in Speaker Verification
Systems
- Authors: Zesheng Chen
- Abstract summary: Adversarial attacks such as FAKEBOB can work effectively against speaker verification systems.
The goal of this paper is to design a detector that can distinguish an original audio from an audio contaminated by adversarial attacks.
We show that our proposed detector is easy to implement, fast to process an input audio, and effective in determining whether an audio is corrupted by FAKEBOB attacks.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Speaker verification systems have been widely used in smartphones and
Internet of Things devices to identify a legitimate user. In recent work, it
has been shown that adversarial attacks, such as FAKEBOB, can work effectively
against speaker verification systems. The goal of this paper is to design a
detector that can distinguish an original audio from an audio contaminated by
adversarial attacks. Specifically, our designed detector, called MEH-FEST,
calculates the minimum energy in high frequencies from the short-time Fourier
transform of an audio and uses it as a detection metric. Through both analysis
and experiments, we show that our proposed detector is easy to implement, fast
to process an input audio, and effective in determining whether an audio is
corrupted by FAKEBOB attacks. The experimental results indicate that the
detector is extremely effective, with near-zero false-positive and
false-negative rates for detecting FAKEBOB attacks against Gaussian mixture
model (GMM) and i-vector speaker verification systems. Moreover, adaptive adversarial
attacks against our proposed detector and their countermeasures are discussed
and studied, illustrating the ongoing game between attackers and defenders.
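As a concrete illustration, the sketch below computes a MEH-FEST-style metric with NumPy/SciPy; the cutoff frequency, frame length, and detection threshold are illustrative assumptions, not values from the paper.

```python
# A minimal sketch of the metric described in the abstract: take the STFT,
# sum the energy of the high-frequency bins per frame, and keep the minimum.
import numpy as np
from scipy.signal import stft

def meh_fest_metric(audio, sample_rate=16000, cutoff_hz=6000.0, frame_len=512):
    """Minimum per-frame energy in frequencies at or above `cutoff_hz`."""
    freqs, _, spec = stft(audio, fs=sample_rate, nperseg=frame_len)
    high = freqs >= cutoff_hz                             # high-frequency bins
    energy = np.sum(np.abs(spec[high, :]) ** 2, axis=0)   # energy per frame
    return float(energy.min())

def looks_adversarial(audio, threshold=1e-6):
    # Assumption: a clean utterance has quiet frames with near-zero
    # high-frequency energy, while a FAKEBOB-style perturbation raises the
    # energy of every frame, so a large minimum suggests contamination.
    # `threshold` would need calibration on clean recordings.
    return meh_fest_metric(audio) > threshold
```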
Related papers
- TAD: Transfer Learning-based Multi-Adversarial Detection of Evasion Attacks against Network Intrusion Detection Systems [0.7829352305480285]
We implement existing state-of-the-art models for intrusion detection.
We then attack those models with a set of chosen evasion attacks.
In an attempt to detect those adversarial attacks, we design and implement multiple transfer learning-based adversarial detectors.
arXiv Detail & Related papers (2022-10-27T18:02:58Z)
- Illusory Attacks: Information-Theoretic Detectability Matters in Adversarial Attacks [76.35478518372692]
We introduce epsilon-illusory, a novel form of adversarial attack on sequential decision-makers.
Compared to existing attacks, we empirically find epsilon-illusory to be significantly harder to detect with automated methods.
Our findings suggest the need for better anomaly detectors, as well as effective hardware- and system-level defenses.
arXiv Detail & Related papers (2022-07-20T19:49:09Z)
- Adversarial Detector with Robust Classifier [14.586106862913553]
We propose a novel adversarial detector, consisting of a robust classifier and a plain one, to detect adversarial examples with high accuracy.
In an experiment, the proposed detector is shown to outperform a state-of-the-art detector that does not use a robust classifier.
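A minimal sketch of a disagreement rule consistent with this summary, with both classifiers as hypothetical callables; the mismatch rule is our reading of the summary, not necessarily the paper's exact procedure:

```python
import numpy as np

def detect_adversarial(x, robust_model, plain_model):
    # Adversarial perturbations tend to flip the plain model's prediction
    # while the robust model's prediction is more stable, so a label
    # mismatch serves as the detection signal.
    return int(np.argmax(robust_model(x))) != int(np.argmax(plain_model(x)))
```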
arXiv Detail & Related papers (2022-02-05T07:21:05Z)
- Spotting adversarial samples for speaker verification by neural vocoders [102.1486475058963]
We adopt neural vocoders to spot adversarial samples for automatic speaker verification (ASV).
We find that the difference between the ASV scores for the original and re-synthesized audio is a good indicator for discriminating between genuine and adversarial samples.
Our code will be open-sourced to enable comparison in future work.
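A minimal sketch of the score-difference indicator, assuming hypothetical `asv_score` and `vocoder` callables and an illustrative threshold:

```python
def vocoder_gap_detector(audio, asv_score, vocoder, threshold=0.5):
    original_score = asv_score(audio)
    resynth_score = asv_score(vocoder(audio))   # neural-vocoder re-synthesis
    # Genuine audio tends to keep a similar ASV score after re-synthesis,
    # while adversarial perturbations are largely destroyed by it, so a
    # large score gap suggests an adversarial sample.
    return abs(original_score - resynth_score) > threshold
```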
arXiv Detail & Related papers (2021-07-01T08:58:16Z)
- Audio Attacks and Defenses against AED Systems - A Practical Study [2.365611283869544]
We evaluate deep learning-based Audio Event Detection (AED) systems against evasion attacks through adversarial examples.
We generate audio adversarial examples using two different types of noise, namely background and white noise, that can be used by the adversary to evade detection.
We also show that countermeasures applied to the audio input can successfully mitigate these attacks.
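A minimal sketch of the white-noise variant, assuming a hypothetical boolean `aed_detects` classifier; the scaling loop is illustrative, not the paper's generation procedure:

```python
import numpy as np

def evade_with_white_noise(audio, aed_detects, steps=20):
    rng = np.random.default_rng(0)
    noise = rng.standard_normal(audio.shape)
    for k in range(1, steps + 1):
        scale = (k / steps) * np.abs(audio).max()   # progressively louder
        candidate = audio + scale * noise
        if not aed_detects(candidate):
            return candidate                        # detection evaded
    return None                                     # evasion failed
```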
arXiv Detail & Related papers (2021-06-14T13:42:49Z)
- Adversarial Attacks and Mitigation for Anomaly Detectors of Cyber-Physical Systems [6.417955560857806]
In this work, we present an adversarial attack that simultaneously evades the anomaly detectors and rule checkers of a CPS.
Inspired by existing gradient-based approaches, our adversarial attack crafts noise over the sensor and actuator values and then uses a genetic algorithm to optimise that noise.
We implement our approach for two real-world critical infrastructure testbeds, successfully reducing the classification accuracy of their detectors by over 50% on average.
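A minimal sketch of the genetic-algorithm step, assuming a hypothetical `detector_score` in which lower values look less anomalous; the population size, mutation scale, and truncation-selection rule are illustrative:

```python
import numpy as np

def evolve_noise(detector_score, dim, pop=32, gens=50, sigma=0.05):
    rng = np.random.default_rng(0)
    population = rng.normal(0.0, sigma, size=(pop, dim))
    for _ in range(gens):
        scores = np.array([detector_score(n) for n in population])
        parents = population[np.argsort(scores)[: pop // 2]]   # fittest half
        children = parents + rng.normal(0.0, sigma, size=parents.shape)
        population = np.vstack([parents, children])            # next generation
    scores = np.array([detector_score(n) for n in population])
    return population[int(np.argmin(scores))]                  # best evader
```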
arXiv Detail & Related papers (2021-05-22T12:19:03Z)
- Attack on practical speaker verification system using universal adversarial perturbations [20.38185341318529]
This work shows that by playing a crafted adversarial perturbation as a separate audio source while the adversary is speaking, a practical speaker verification system can be made to misjudge the adversary as a target speaker.
A two-step algorithm is proposed to optimize the universal adversarial perturbation so that it is text-independent and has little effect on recognition of the authentication text.
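A minimal sketch of optimizing one shared perturbation over many utterances, assuming a hypothetical `speaker_loss_grad` oracle (gradient of a loss pulling the ASV decision toward the target speaker) and equal-length clips; the paper's two-step, text-preserving objective is not reproduced here:

```python
import numpy as np

def universal_perturbation(utterances, speaker_loss_grad, steps=100,
                           step_size=1e-3, eps=0.01):
    delta = np.zeros_like(utterances[0])            # shared perturbation
    for _ in range(steps):
        grads = [speaker_loss_grad(u + delta) for u in utterances]
        delta -= step_size * np.sign(np.mean(grads, axis=0))
        delta = np.clip(delta, -eps, eps)           # keep it imperceptible
    return delta
```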
arXiv Detail & Related papers (2021-05-19T09:43:34Z)
- FoolHD: Fooling speaker identification by Highly imperceptible adversarial Disturbances [63.80959552818541]
We propose a white-box steganography-inspired adversarial attack that generates imperceptible perturbations against a speaker identification model.
Our approach, FoolHD, uses a Gated Convolutional Autoencoder that operates in the DCT domain and is trained with a multi-objective loss function.
We validate FoolHD with a 250-speaker identification x-vector network, trained using VoxCeleb, in terms of accuracy, success rate, and imperceptibility.
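A minimal sketch of a two-term, multi-objective attack loss in the spirit of this entry; both terms, their weights, and the logit margin are assumptions rather than FoolHD's actual objective:

```python
import numpy as np

def two_term_attack_loss(original, perturbed, logits, true_label,
                         alpha=1.0, beta=0.1):
    fidelity = np.mean((original - perturbed) ** 2)    # imperceptibility term
    others = np.delete(logits, true_label)
    attack = logits[true_label] - others.max()         # negative when fooled
    return alpha * fidelity + beta * attack            # minimize both terms
```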
arXiv Detail & Related papers (2020-11-17T07:38:26Z)
- Anomaly Detection-Based Unknown Face Presentation Attack Detection [74.4918294453537]
Anomaly-detection-based spoof attack detection is a recent development in face presentation attack detection (fPAD).
In this paper, we present a deep-learning solution for anomaly-detection-based spoof attack detection.
The proposed approach benefits from the representation-learning power of CNNs and learns better features for the fPAD task.
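A minimal sketch of anomaly-detection-based fPAD, assuming a hypothetical `cnn_features` extractor and substituting a one-class SVM for the paper's learned deep model:

```python
from sklearn.svm import OneClassSVM

def fit_fpad(bona_fide_images, cnn_features):
    # Fit on bona fide faces only; attacks are flagged as outliers.
    features = [cnn_features(img) for img in bona_fide_images]
    return OneClassSVM(nu=0.1, gamma="scale").fit(features)

def is_attack(model, image, cnn_features):
    return model.predict([cnn_features(image)])[0] == -1   # -1 = outlier
```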
arXiv Detail & Related papers (2020-07-11T21:20:55Z)
- Integrated Replay Spoofing-aware Text-independent Speaker Verification [47.41124427552161]
We propose two approaches for building an integrated system of speaker verification and presentation attack detection.
The first approach simultaneously trains speaker identification, presentation attack detection, and the integrated system using multi-task learning.
The second is a back-end modular approach that uses a separate deep neural network (DNN) for each of speaker verification and presentation attack detection.
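A minimal sketch of the multi-task idea in the first approach; the linear heads, loss weights, and caller-supplied `cross_entropy` are illustrative assumptions (inputs are NumPy arrays):

```python
def multitask_loss(embedding, spk_head, pad_head, spk_label, pad_label,
                   cross_entropy, w_spk=1.0, w_pad=1.0):
    spk_logits = embedding @ spk_head      # speaker-identification head
    pad_logits = embedding @ pad_head      # presentation-attack head
    # The shared embedding is trained by the sum of both task losses.
    return (w_spk * cross_entropy(spk_logits, spk_label)
            + w_pad * cross_entropy(pad_logits, pad_label))
```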
arXiv Detail & Related papers (2020-06-10T01:24:55Z)
- Defense for Black-box Attacks on Anti-spoofing Models by Self-Supervised Learning [71.17774313301753]
We explore the robustness of self-supervised learned high-level representations by using them in the defense against adversarial attacks.
Experimental results on the ASVspoof 2019 dataset demonstrate that high-level representations extracted by Mockingjay can prevent the transferability of adversarial examples.
arXiv Detail & Related papers (2020-06-05T03:03:06Z)