Investigating Robustness of Adversarial Samples Detection for Automatic
Speaker Verification
- URL: http://arxiv.org/abs/2006.06186v2
- Date: Fri, 7 Aug 2020 15:27:42 GMT
- Title: Investigating Robustness of Adversarial Samples Detection for Automatic
Speaker Verification
- Authors: Xu Li, Na Li, Jinghua Zhong, Xixin Wu, Xunying Liu, Dan Su, Dong Yu,
Helen Meng
- Abstract summary: This work proposes to defend ASV systems against adversarial attacks with a separate detection network.
A VGG-like binary classification detector is introduced and demonstrated to be effective at detecting adversarial samples.
- Score: 78.51092318750102
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, adversarial attacks on automatic speaker verification (ASV)
systems have attracted widespread attention, as they pose severe threats to ASV
systems. However, methods to defend against such attacks are limited. Existing
approaches mainly focus on retraining ASV systems with adversarial data
augmentation, and countermeasure robustness against different attack settings
is insufficiently investigated. Orthogonal to prior approaches, this work
proposes to defend ASV systems against adversarial attacks with a separate
detection network, rather than augmenting adversarial data into ASV training. A
VGG-like binary classification detector is introduced and demonstrated to be
effective at detecting adversarial samples. To investigate detector robustness
in a realistic defense scenario where unseen attack settings may exist, we
analyze the impact of various kinds of unseen attack settings and observe that
the detector is robust against unseen substitute ASV systems (6.27% EER_det
degradation in the worst case), but weak against unseen perturbation methods
(50.37% EER_det degradation in the worst case). The weak robustness against
unseen perturbation methods points to a direction for developing stronger
countermeasures.
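The abstract does not specify the detector beyond "VGG-like". The PyTorch sketch below shows one plausible reading: stacked 3x3 convolution blocks over a spectrogram, ending in a two-way classifier. The input shape, channel widths, and pooling scheme are illustrative assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class VGGLikeDetector(nn.Module):
    """Binary classifier: genuine (0) vs. adversarial (1) sample.

    A minimal VGG-style sketch; layer sizes are illustrative
    assumptions, not the paper's exact architecture.
    """

    def __init__(self, n_classes: int = 2):
        super().__init__()
        def block(c_in, c_out):
            # Two 3x3 convs followed by 2x2 max pooling, as in VGG.
            return nn.Sequential(
                nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True),
                nn.MaxPool2d(2),
            )
        self.features = nn.Sequential(block(1, 32), block(32, 64), block(64, 128))
        self.pool = nn.AdaptiveAvgPool2d(1)   # handles variable-length utterances
        self.classifier = nn.Linear(128, n_classes)

    def forward(self, x):  # x: (batch, 1, freq_bins, frames) spectrogram
        h = self.pool(self.features(x)).flatten(1)
        return self.classifier(h)

# Usage: score a batch of spectrograms.
detector = VGGLikeDetector()
logits = detector(torch.randn(4, 1, 257, 200))  # 4 utterances
p_adv = logits.softmax(dim=-1)[:, 1]            # probability of being adversarial
```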
Related papers
- To what extent can ASV systems naturally defend against spoofing attacks? [73.0766904568922]
This study investigates whether ASV effortlessly acquires robustness against spoofing attacks.
We demonstrate that the evolution of ASV inherently incorporates defense mechanisms against spoofing attacks.
arXiv Detail & Related papers (2024-06-08T03:44:39Z)
- Fortify the Guardian, Not the Treasure: Resilient Adversarial Detectors [0.0]
An adaptive attack is one where the attacker is aware of the defenses and adapts their strategy accordingly.
Our proposed method leverages adversarial training to reinforce the ability to detect attacks, without compromising clean accuracy.
Experimental evaluations on the CIFAR-10 and SVHN datasets demonstrate that our proposed algorithm significantly improves a detector's ability to accurately identify adaptive adversarial attacks.
arXiv Detail & Related papers (2024-04-18T12:13:09Z)
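A minimal sketch of the general recipe above, adversarially training the detector itself: craft examples against the detector's own loss, then train on clean and attacked inputs so clean accuracy is preserved. FGSM, the epsilon, and the equal loss weighting are illustrative assumptions, not the paper's exact procedure.

```python
import torch
import torch.nn.functional as F

def fgsm(x, grad, eps=0.002):
    # One-step L-infinity perturbation along the sign of the loss gradient.
    return (x + eps * grad.sign()).detach()

def train_step(detector, optimizer, x, y):
    """One adversarial-training step for the detector itself.

    y: 0 = genuine, 1 = adversarial input. The crafted example targets
    the detector's own loss, i.e. an adaptive attacker.
    """
    x_req = x.clone().requires_grad_(True)
    loss_atk = F.cross_entropy(detector(x_req), y)
    grad, = torch.autograd.grad(loss_atk, x_req)
    x_adv = fgsm(x_req, grad)  # example crafted to evade the detector

    optimizer.zero_grad()
    # Train on both clean and attacked inputs to preserve clean accuracy.
    loss = F.cross_entropy(detector(x), y) + F.cross_entropy(detector(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```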
- Improving the Adversarial Robustness for Speaker Verification by Self-Supervised Learning [95.60856995067083]
This work is among the first to perform adversarial defense for ASV without knowing the specific attack algorithms.
We propose to perform adversarial defense from two perspectives: 1) adversarial perturbation purification and 2) adversarial perturbation detection.
Experimental results show that our detection module effectively shields the ASV by detecting adversarial samples with an accuracy of around 80%.
arXiv Detail & Related papers (2021-06-01T07:10:54Z)
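One way the two perspectives can compose at inference time is sketched below, assuming a hypothetical `purify` reconstruction model and an `asv_score` similarity function; the score-shift detection rule and threshold `tau` are illustrative, not the paper's actual detection module.

```python
import torch

def defend(asv_score, purify, x_enroll, x_test, tau=0.5):
    """Two-pronged defense sketch: purify, then flag large score shifts.

    `purify` stands in for any reconstruction model used as a denoiser;
    `asv_score` returns a similarity score between two utterances.
    Both, and the threshold `tau`, are illustrative assumptions.
    """
    s_raw = asv_score(x_enroll, x_test)
    s_pur = asv_score(x_enroll, purify(x_test))
    is_adversarial = (s_raw - s_pur).abs() > tau  # detection perspective
    return s_pur, is_adversarial                  # purified score + flag
```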
- Adversarial defense for automatic speaker verification by cascaded self-supervised learning models [101.42920161993455]
More and more malicious attackers attempt to launch adversarial attacks on automatic speaker verification (ASV) systems.
We propose a standard and attack-agnostic method based on cascaded self-supervised learning models to purify the adversarial perturbations.
Experimental results demonstrate that the proposed method achieves effective defense performance and can successfully counter adversarial attacks.
arXiv Detail & Related papers (2021-02-14T01:56:43Z)
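The cascade itself reduces to a simple loop: each model re-synthesizes the audio emitted by the previous one, progressively washing out small perturbations. In this sketch, `models` is a placeholder for any trained reconstruction models; the paper builds the cascade from self-supervised learning models.

```python
import torch

@torch.no_grad()
def cascaded_purify(x, models):
    """Pass audio through a cascade of reconstruction models.

    `models`: list of encoder-decoder speech models whose output
    shape matches their input (an assumption of this sketch).
    """
    for m in models:
        x = m(x)  # re-synthesize, attenuating adversarial perturbations
    return x
```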
- Defense for Black-box Attacks on Anti-spoofing Models by Self-Supervised Learning [71.17774313301753]
We explore the robustness of self-supervised learned high-level representations by using them in the defense against adversarial attacks.
Experimental results on the ASVspoof 2019 dataset demonstrate that high-level representations extracted by Mockingjay can prevent the transferability of adversarial examples.
arXiv Detail & Related papers (2020-06-05T03:03:06Z)
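One reading of this defense is to freeze the self-supervised encoder and classify on its output features, so that perturbations crafted against low-level front ends transfer poorly. The wrapper below is a hedged sketch: `ssl_encoder`, its assumed (batch, frames, features) output layout, and the linear head are assumptions, with Mockingjay standing in as the pretrained model.

```python
import torch.nn as nn

class RepresentationDefense(nn.Module):
    """Run the anti-spoofing countermeasure on frozen high-level features."""

    def __init__(self, ssl_encoder, feat_dim, n_classes=2):
        super().__init__()
        self.encoder = ssl_encoder.eval()           # frozen feature extractor
        for p in self.encoder.parameters():
            p.requires_grad = False
        self.head = nn.Linear(feat_dim, n_classes)  # spoof / bona fide

    def forward(self, x):
        # Assumes encoder output of shape (batch, frames, feat_dim).
        h = self.encoder(x).mean(dim=1)             # average over time frames
        return self.head(h)
```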
- Defense against adversarial attacks on spoofing countermeasures of ASV [95.87555881176529]
This paper introduces a passive defense method, spatial smoothing, and a proactive defense method, adversarial training, to mitigate the vulnerability of ASV spoofing countermeasure models.
The experimental results show that these two defense methods positively help spoofing countermeasure models counter adversarial examples.
arXiv Detail & Related papers (2020-03-06T08:08:54Z)
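Spatial smoothing, the passive defense named above, filters the input representation before the countermeasure scores it. Below is a minimal median-filter sketch over spectrograms; the kernel size and the choice of median (rather than mean) filtering are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def spatial_smooth(spec: torch.Tensor, k: int = 3) -> torch.Tensor:
    """Median-filter a spectrogram of shape (batch, 1, freq_bins, frames)."""
    # Extract every k x k neighborhood, then take its median value.
    patches = F.unfold(spec, kernel_size=k, padding=k // 2)  # (B, k*k, H*W)
    smoothed = patches.median(dim=1).values                  # median per window
    return smoothed.reshape(spec.shape)
```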