Adversarial Attack and Defense Strategies for Deep Speaker Recognition
Systems
- URL: http://arxiv.org/abs/2008.07685v1
- Date: Tue, 18 Aug 2020 00:58:19 GMT
- Title: Adversarial Attack and Defense Strategies for Deep Speaker Recognition
Systems
- Authors: Arindam Jati, Chin-Cheng Hsu, Monisankha Pal, Raghuveer Peri, Wael
AbdAlmageed, Shrikanth Narayanan
- Abstract summary: This paper considers several state-of-the-art adversarial attacks on a deep speaker recognition system, employing strong defense methods as countermeasures.
Experiments show that speaker recognition systems are vulnerable to adversarial attacks, and the strongest attacks can reduce the accuracy of the system from 94% to as low as 0%.
- Score: 44.305353565981015
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Robust speaker recognition, including in the presence of malicious attacks,
is becoming increasingly important, especially due to the proliferation of smart
speakers and personal agents that interact with an individual's voice commands to
perform diverse and even sensitive tasks. Adversarial attack is a recently revived
domain of research shown to be effective in breaking deep neural network-based
classifiers, specifically by forcing them to change their posterior distribution
while perturbing the input samples only by a very small amount. Although significant
progress in this realm has been made in the computer vision domain, advances within
speaker recognition are still limited. The present expository paper considers several
state-of-the-art adversarial attacks on a deep speaker recognition system, employs
strong defense methods as countermeasures, and reports on several ablation studies
to obtain a comprehensive understanding of the problem. The experiments show that
speaker recognition systems are vulnerable to adversarial attacks, and the strongest
attacks can reduce the accuracy of the system from 94% to as low as 0%. The study
also compares the performance of the employed defense methods in detail, and finds
adversarial training based on Projected Gradient Descent (PGD) to be the best defense
method in our setting. We hope that the experiments presented in this paper provide
baselines that are useful for the research community interested in further studying
the adversarial robustness of speaker recognition systems.
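
To make the mechanisms above concrete, the following is a minimal sketch of an
L-infinity-bounded PGD attack and PGD-based adversarial training for a speaker
classifier, assuming a generic PyTorch model that maps waveforms to speaker
posteriors. The function names, epsilon and step-size values, and training loop
are illustrative assumptions, not the exact configuration used in the paper.

# Illustrative sketch (not the paper's exact code): L-inf PGD attack on a
# speaker classifier and PGD-based adversarial training, assuming a generic
# PyTorch model that maps raw waveforms to speaker posteriors.
import torch
import torch.nn.functional as F


def pgd_attack(model, x, y, eps=0.002, alpha=0.0004, steps=10):
    """Projected Gradient Descent attack within an L-inf ball of radius eps."""
    x_adv = x.clone().detach()
    # Random start inside the epsilon ball.
    x_adv = x_adv + torch.empty_like(x_adv).uniform_(-eps, eps)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Signed gradient ascent step, then projection back onto the eps ball.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)
    return x_adv.detach()


def adversarial_training_epoch(model, loader, optimizer, eps=0.002):
    """One epoch of PGD adversarial training: train on perturbed inputs."""
    model.train()
    for x, y in loader:
        x_adv = pgd_attack(model, x, y, eps=eps)
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        optimizer.step()

Training on the perturbed inputs x_adv rather than the clean inputs x is what
distinguishes PGD adversarial training, the defense found most effective in this
study, from standard training.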
Related papers
- Vulnerabilities in Machine Learning-Based Voice Disorder Detection Systems [3.4745231630177136]
We explore the possibility of attacks that can reverse classifications and compromise the reliability of these systems.
Given the critical nature of personal health information, understanding which types of attacks are effective is a necessary first step toward improving the security of such systems.
Our findings identify the most effective attack strategies, underscoring the need to address these vulnerabilities in machine-learning systems used in the healthcare domain.
arXiv Detail & Related papers (2024-10-21T10:14:44Z) - Robust Safety Classifier for Large Language Models: Adversarial Prompt
Shield [7.5520641322945785]
Large Language Models' safety remains a critical concern due to their vulnerability to adversarial attacks.
We introduce the Adversarial Prompt Shield (APS), a lightweight model that excels in detection accuracy and demonstrates resilience against adversarial prompts.
We also propose novel strategies for autonomously generating adversarial training datasets.
arXiv Detail & Related papers (2023-10-31T22:22:10Z) - Measuring Equality in Machine Learning Security Defenses: A Case Study
in Speech Recognition [56.69875958980474]
This work considers approaches to defending learned systems and how security defenses result in performance inequities across different sub-populations.
We find that many methods that have been proposed can cause direct harm, like false rejection and unequal benefits from robustness training.
We present a comparison of equality between two rejection-based defenses: randomized smoothing and neural rejection, finding randomized smoothing more equitable due to the sampling mechanism for minority groups.
arXiv Detail & Related papers (2023-02-17T16:19:26Z) - Defense Against Adversarial Attacks on Audio DeepFake Detection [0.4511923587827302]
Audio DeepFakes (DF) are artificially generated utterances created using deep learning.
Multiple neural network-based methods to detect generated speech have been proposed to mitigate these threats.
arXiv Detail & Related papers (2022-12-30T08:41:06Z) - Push-Pull: Characterizing the Adversarial Robustness for Audio-Visual
Active Speaker Detection [88.74863771919445]
We reveal the vulnerability of AVASD models under audio-only, visual-only, and audio-visual adversarial attacks.
We also propose a novel audio-visual interaction loss (AVIL) that makes it difficult for attackers to find feasible adversarial examples.
arXiv Detail & Related papers (2022-10-03T08:10:12Z) - Towards Understanding and Mitigating Audio Adversarial Examples for
Speaker Recognition [13.163192823774624]
Speaker recognition systems (SRSs) have recently been shown to be vulnerable to adversarial attacks, raising significant security concerns.
We present 22 diverse transformations and thoroughly evaluate them using 7 recent promising adversarial attacks on speaker recognition.
We demonstrate that the proposed novel feature-level transformation combined with adversarial training is considerably more effective than adversarial training alone in a complete white-box setting.
arXiv Detail & Related papers (2022-06-07T15:38:27Z) - Characterizing the adversarial vulnerability of speech self-supervised
learning [95.03389072594243]
We make the first attempt to investigate the adversarial vulnerability of such a paradigm under attacks from both zero-knowledge adversaries and limited-knowledge adversaries.
The experimental results illustrate that the paradigm proposed by SUPERB is seriously vulnerable to limited-knowledge adversaries.
arXiv Detail & Related papers (2021-11-08T08:44:04Z) - Searching for an Effective Defender: Benchmarking Defense against
Adversarial Word Substitution [83.84968082791444]
Deep neural networks are vulnerable to intentionally crafted adversarial examples.
Various methods have been proposed to defend against adversarial word-substitution attacks for neural NLP models.
arXiv Detail & Related papers (2021-08-29T08:11:36Z) - Improving the Adversarial Robustness for Speaker Verification by Self-Supervised Learning [95.60856995067083]
This work is among the first to perform adversarial defense for ASV without knowing the specific attack algorithms.
We propose to perform adversarial defense from two perspectives: 1) adversarial perturbation purification and 2) adversarial perturbation detection.
Experimental results show that our detection module effectively shields the ASV by detecting adversarial samples with an accuracy of around 80%.
arXiv Detail & Related papers (2021-06-01T07:10:54Z) - SoK: The Faults in our ASRs: An Overview of Attacks against Automatic
Speech Recognition and Speaker Identification Systems [28.635467696564703]
We show that the end-to-end architecture of speech and speaker systems makes attacks and defenses against them substantially different than those in the image space.
We then demonstrate experimentally that attacks against these models almost universally fail to transfer.
arXiv Detail & Related papers (2020-07-13T18:52:25Z)