SEC4SR: A Security Analysis Platform for Speaker Recognition
- URL: http://arxiv.org/abs/2109.01766v1
- Date: Sat, 4 Sep 2021 02:04:25 GMT
- Title: SEC4SR: A Security Analysis Platform for Speaker Recognition
- Authors: Guangke Chen and Zhe Zhao and Fu Song and Sen Chen and Lingling Fan and Yang Liu
- Abstract summary: SEC4SR is the first platform enabling researchers to systematically and comprehensively evaluate adversarial attacks and defenses in speaker recognition.
We conduct the largest-scale empirical study on adversarial attacks and defenses in SR, involving 23 defenses, 15 attacks and 4 attack settings.
- Score: 14.02700072458441
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Adversarial attacks have been extended to speaker recognition (SR).
However, existing attacks are often assessed using different SR models,
recognition tasks, and datasets, and only a few adversarial defenses borrowed
from computer vision have been considered. Moreover, these defenses have not
been thoroughly evaluated against adaptive attacks. Thus, there is still a lack
of quantitative understanding of the strengths and limitations of adversarial
attacks and defenses, and more effective defenses are required for securing SR
systems. To
bridge this gap, we present SEC4SR, the first platform enabling researchers to
systematically and comprehensively evaluate adversarial attacks and defenses in
SR. SEC4SR incorporates 4 white-box and 2 black-box attacks, 24 defenses
including our novel feature-level transformations. It also contains techniques
for mounting adaptive attacks. Using SEC4SR, we conduct the largest-scale
empirical study to date on adversarial attacks and defenses in SR, involving 23
defenses, 15 attacks, and 4 attack settings. Our study yields many useful
findings that may advance future research, for example: (1) all the
transformations slightly degrade accuracy on benign examples, and their
effectiveness varies across attacks; (2) most transformations become less
effective under adaptive attacks, though some become more effective; (3) a few
transformations combined with adversarial training yield stronger defenses
against some but not all attacks, while our feature-level transformation
combined with adversarial training yields the strongest defense against all the
attacks. Extensive experiments demonstrate the capabilities and advantages of
SEC4SR, which can benefit future research in SR.
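To make the attack side concrete, the sketch below shows the kind of white-box
attack SEC4SR evaluates: an L-infinity projected gradient descent (PGD) attack
on a waveform-level speaker classifier. The tiny model, shapes, and
hyperparameters here are illustrative assumptions, not SEC4SR's actual API.

```python
# Minimal, hypothetical white-box PGD attack on a speaker classifier.
# TinySpeakerNet is a stand-in model, not a component of SEC4SR.
import torch
import torch.nn as nn

class TinySpeakerNet(nn.Module):
    """Toy speaker classifier: raw waveform -> speaker logits."""
    def __init__(self, n_speakers=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=160, stride=80), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(16, n_speakers),
        )

    def forward(self, x):  # x: (batch, 1, samples)
        return self.net(x)

def pgd_attack(model, x, y, eps=0.002, alpha=0.0005, steps=10):
    """Untargeted L-infinity PGD: push the input away from its true speaker."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()       # ascend the loss
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # project into the eps-ball
            x_adv = x_adv.clamp(-1.0, 1.0)            # keep a valid waveform
    return x_adv.detach()

model = TinySpeakerNet()
wav = torch.rand(4, 1, 16000) * 2 - 1    # four 1-second utterances in [-1, 1]
labels = torch.randint(0, 10, (4,))
adv = pgd_attack(model, wav, labels)
print(float((adv - wav).abs().max()))    # perturbation stays within eps
```

A targeted variant (impersonating a chosen speaker) would instead descend the
cross-entropy toward the target label; black-box variants replace the exact
gradient with estimates obtained from queries.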
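On the defense side, many of the defenses SEC4SR evaluates are input or feature
transformations applied before recognition. Below is a generic sketch of two
such transformations (bit-depth reduction and median smoothing); these are
common stand-ins from this literature, assumed here for illustration, not the
paper's novel feature-level transformation.

```python
# Illustrative input-transformation defenses; parameters are assumptions.
import torch
import torch.nn.functional as F

def quantize_waveform(x, bits=8):
    """Bit-depth reduction: snap samples in [-1, 1] to a coarse grid,
    destroying small adversarial perturbations along with some signal."""
    levels = 2 ** bits - 1
    return torch.round((x + 1) / 2 * levels) / levels * 2 - 1

def median_smooth(x, k=3):
    """Median filtering along time, a simple denoising transformation."""
    pad = k // 2
    xp = F.pad(x, (pad, pad), mode="reflect")   # x: (batch, 1, samples)
    return xp.unfold(-1, k, 1).median(dim=-1).values

def defended_forward(model, x):
    """Preprocess, then classify; usable with any waveform-input model."""
    return model(median_smooth(quantize_waveform(x)))
```

Finding (2) above concerns exactly this setup: an adaptive attacker who knows
the transformation can often attack through it, for example by backpropagating
through the smoothing and approximating the non-differentiable quantization
with a straight-through estimator.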
Related papers
- Meta Invariance Defense Towards Generalizable Robustness to Unknown Adversarial Attacks [62.036798488144306]
Current defenses mainly focus on known attacks, while adversarial robustness to unknown attacks is seriously overlooked.
We propose an attack-agnostic defense method named Meta Invariance Defense (MID).
We show that MID simultaneously achieves robustness to the imperceptible adversarial perturbations in high-level image classification and attack-suppression in low-level robust image regeneration.
arXiv Detail & Related papers (2024-04-04T10:10:38Z)
- Continual Adversarial Defense [37.37029638528458]
A defense system continuously collects adversarial data online to quickly improve itself.
The defense is designed to satisfy four principles: continual adaptation to new attacks without catastrophic forgetting, few-shot adaptation, memory-efficient adaptation, and high accuracy on both clean and adversarial data.
In particular, CAD is capable of quickly adapting with minimal budget and a low cost of defense failure while maintaining good performance against previous attacks.
arXiv Detail & Related papers (2023-12-15T01:38:26Z)
- On the Difficulty of Defending Contrastive Learning against Backdoor Attacks [58.824074124014224]
We show how contrastive backdoor attacks operate through distinctive mechanisms.
Our findings highlight the need for defenses tailored to the specificities of contrastive backdoor attacks.
arXiv Detail & Related papers (2023-12-14T15:54:52Z)
- Game Theoretic Mixed Experts for Combinational Adversarial Machine Learning [10.368343314144553]
We provide a game-theoretic framework for ensemble adversarial attacks and defenses.
We propose three new attack algorithms, specifically designed to target defenses with randomized transformations, multi-model voting schemes, and adversarial detector architectures (a toy minimax computation in this spirit is sketched after this list).
arXiv Detail & Related papers (2022-11-26T21:35:01Z)
- Analysis and Extensions of Adversarial Training for Video Classification [0.0]
We show that generating optimal attacks for video requires carefully tuning the attack parameters, especially the step size.
We propose three defenses against attacks with variable attack budgets.
Experiments on the UCF101 dataset demonstrate that the proposed methods improve adversarial robustness against multiple attack types.
arXiv Detail & Related papers (2022-06-16T06:49:01Z)
- Towards Understanding and Mitigating Audio Adversarial Examples for Speaker Recognition [13.163192823774624]
Speaker recognition systems (SRSs) have recently been shown to be vulnerable to adversarial attacks, raising significant security concerns.
We present 22 diverse transformations and thoroughly evaluate them using 7 recent promising adversarial attacks on speaker recognition.
We demonstrate that the proposed novel feature-level transformation combined with adversarial training is considerably more effective than adversarial training alone in a complete white-box setting.
arXiv Detail & Related papers (2022-06-07T15:38:27Z)
- Fixed Points in Cyber Space: Rethinking Optimal Evasion Attacks in the Age of AI-NIDS [70.60975663021952]
We study black-box adversarial attacks on network classifiers.
We argue that attacker-defender fixed points are themselves general-sum games with complex phase transitions.
We show that a continual learning approach is required to study attacker-defender dynamics.
arXiv Detail & Related papers (2021-11-23T23:42:16Z)
- Adversarial Attack and Defense in Deep Ranking [100.17641539999055]
We propose two attacks against deep ranking systems that can raise or lower the rank of chosen candidates by adversarial perturbations.
Conversely, an anti-collapse triplet defense is proposed to improve the ranking model robustness against all proposed attacks.
Our adversarial ranking attacks and defenses are evaluated on MNIST, Fashion-MNIST, CUB200-2011, CARS196 and Stanford Online Products datasets.
arXiv Detail & Related papers (2021-06-07T13:41:45Z)
- TROJANZOO: Everything you ever wanted to know about neural backdoors (but were afraid to ask) [28.785693760449604]
TROJANZOO is the first open-source platform for evaluating neural backdoor attacks/defenses.
It has 12 representative attacks, 15 state-of-the-art defenses, 6 attack performance metrics, 10 defense utility metrics, as well as rich tools for analysis of attack-defense interactions.
We conduct a systematic study of existing attacks/defenses, leading to a number of interesting findings.
arXiv Detail & Related papers (2020-12-16T22:37:27Z)
- Guided Adversarial Attack for Evaluating and Enhancing Adversarial Defenses [59.58128343334556]
We introduce a relaxation term to the standard loss that finds more suitable gradient directions, increases attack efficacy, and leads to more efficient adversarial training.
We propose Guided Adversarial Margin Attack (GAMA), which utilizes function mapping of the clean image to guide the generation of adversaries.
We also propose Guided Adversarial Training (GAT), which achieves state-of-the-art performance amongst single-step defenses.
arXiv Detail & Related papers (2020-11-30T16:39:39Z)
- Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks [65.20660287833537]
In this paper we propose two extensions of the PGD attack that overcome failures due to suboptimal step size and problems with the objective function.
We then combine our novel attacks with two complementary existing ones to form a parameter-free, computationally affordable and user-independent ensemble of attacks to test adversarial robustness.
arXiv Detail & Related papers (2020-03-03T18:15:55Z)
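To complement the Game Theoretic Mixed Experts entry above, the sketch below
computes a defender's minimax mixed strategy over a small attack-vs-defense
payoff matrix by solving the zero-sum game as a linear program. The matrix
values are fabricated placeholders for illustration, not measured results from
any of the papers listed here.

```python
# Toy minimax defense mixture for a zero-sum attack/defense game.
import numpy as np
from scipy.optimize import linprog

# success[i, j]: illustrative (made-up) success rate of attack i vs defense j.
success = np.array([
    [0.90, 0.40, 0.55],
    [0.95, 0.60, 0.35],
    [0.85, 0.70, 0.30],
])
n_att, n_def = success.shape

# Variables: q (defense mixture, length n_def) and v (worst-case success rate).
c = np.r_[np.zeros(n_def), 1.0]               # minimize v
A_ub = np.c_[success, -np.ones(n_att)]        # success @ q - v <= 0, per attack
b_ub = np.zeros(n_att)
A_eq = np.r_[np.ones(n_def), 0.0].reshape(1, -1)  # mixture sums to 1
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
              bounds=[(0, 1)] * n_def + [(None, None)])
q, v = res.x[:n_def], res.x[-1]
print("defense mixture:", q.round(3), "worst-case success:", round(v, 3))
```

Randomizing over defenses according to q caps any single attack's expected
success at v, which is the game-theoretic intuition behind ensemble defenses.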