Voting for the right answer: Adversarial defense for speaker
verification
- URL: http://arxiv.org/abs/2106.07868v1
- Date: Tue, 15 Jun 2021 04:05:28 GMT
- Title: Voting for the right answer: Adversarial defense for speaker
verification
- Authors: Haibin Wu, Yang Zhang, Zhiyong Wu, Dong Wang, Hung-yi Lee
- Abstract summary: ASV is vulnerable to adversarial attacks, which sound almost identical to their original counterparts to human perception.
We propose the idea of "voting for the right answer" to prevent risky decisions of ASV in blind spot areas.
Experimental results show that our proposed method improves robustness against both limited-knowledge and perfect-knowledge attackers.
- Score: 79.10523688806852
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Automatic speaker verification (ASV) is a well-developed technology for biometric identification and has been ubiquitously implemented in security-critical applications such as banking and access control. However, previous works have shown that ASV is vulnerable to adversarial attacks, which sound very similar to their original counterparts to human perception, yet manipulate the ASV system into rendering wrong predictions. Because adversarial attacks on ASV have emerged only recently, effective countermeasures against them are limited. Given that the security of ASV is of high priority, in this work we propose the idea of "voting for the right answer" to prevent risky decisions of ASV in blind-spot areas, by employing random sampling and voting. Experimental results show that our proposed method improves robustness against both limited-knowledge attackers, by pulling the adversarial samples out of the blind spots, and perfect-knowledge attackers, by introducing randomness and increasing the attackers' budgets. The code for reproducing the main results is available at https://github.com/thuhcsi/adsv_voting.
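The defense is simple to express in code. Below is a minimal sketch of the random-sampling-and-voting idea: the test utterance is perturbed with small Gaussian noise several times, each copy is scored by the ASV system, and the individual accept/reject decisions are aggregated by majority vote. The function name, the noise standard deviation, the number of votes, and the hard-decision majority vote are illustrative assumptions rather than the authors' exact configuration; the official repository has the actual procedure.

```python
import numpy as np

def vote_for_the_right_answer(asv_score, enroll_emb, test_wave,
                              n_votes=15, noise_std=0.002, threshold=0.5,
                              rng=None):
    """Defend an ASV decision by random sampling and voting (illustrative sketch).

    asv_score  : callable(enroll_emb, waveform) -> similarity score (float)
    enroll_emb : enrollment speaker representation
    test_wave  : 1-D numpy array, the (possibly adversarial) test waveform
    n_votes    : number of randomly perturbed copies to score (assumed value)
    noise_std  : std-dev of the Gaussian noise added to each copy (assumed value)
    threshold  : ASV acceptance threshold on the similarity score
    """
    rng = np.random.default_rng() if rng is None else rng
    decisions = []
    for _ in range(n_votes):
        # Randomly sample a neighbour of the test utterance; adversarial
        # perturbations are brittle, so neighbours tend to land outside
        # the attacker's "blind spot".
        noisy = test_wave + rng.normal(0.0, noise_std, size=test_wave.shape)
        decisions.append(asv_score(enroll_emb, noisy) >= threshold)
    # Majority vote over the sampled decisions gives the final answer.
    return sum(decisions) > n_votes / 2
```

Averaging the sampled scores instead of voting on hard decisions would be an equally plausible aggregation; the key point is that the final decision no longer depends on a single, possibly adversarial, input point.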
Related papers
- Spotting adversarial samples for speaker verification by neural vocoders [102.1486475058963]
We adopt neural vocoders to spot adversarial samples for automatic speaker verification (ASV).
We find that the difference between the ASV scores for the original and re-synthesized audio is a good indicator for discriminating genuine from adversarial samples; a minimal sketch of this check appears after this list.
Our code will be made open-source for future work to compare against.
arXiv Detail & Related papers (2021-07-01T08:58:16Z)
- Improving the Adversarial Robustness for Speaker Verification by Self-Supervised Learning [95.60856995067083]
This work is among the first to perform adversarial defense for ASV without knowing the specific attack algorithms.
We propose to perform adversarial defense from two perspectives: 1) adversarial perturbation purification and 2) adversarial perturbation detection.
Experimental results show that our detection module effectively shields the ASV by detecting adversarial samples with an accuracy of around 80%.
arXiv Detail & Related papers (2021-06-01T07:10:54Z)
- Adversarial defense for automatic speaker verification by cascaded self-supervised learning models [101.42920161993455]
More and more malicious attackers attempt to launch adversarial attacks against automatic speaker verification (ASV) systems.
We propose a standard and attack-agnostic method based on cascaded self-supervised learning models to purify the adversarial perturbations.
Experimental results demonstrate that the proposed method achieves effective defense performance and can successfully counter adversarial attacks.
arXiv Detail & Related papers (2021-02-14T01:56:43Z)
- Investigating Robustness of Adversarial Samples Detection for Automatic Speaker Verification [78.51092318750102]
This work proposes to defend ASV systems against adversarial attacks with a separate detection network.
A VGG-like binary classification detector is introduced and demonstrated to be effective at detecting adversarial samples.
arXiv Detail & Related papers (2020-06-11T04:31:56Z)
- Defense for Black-box Attacks on Anti-spoofing Models by Self-Supervised Learning [71.17774313301753]
We explore the robustness of self-supervised learned high-level representations by using them in the defense against adversarial attacks.
Experimental results on the ASVspoof 2019 dataset demonstrate that high-level representations extracted by Mockingjay can prevent the transferability of adversarial examples.
arXiv Detail & Related papers (2020-06-05T03:03:06Z)
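As a companion to the voting sketch above, the score-difference check described in the first related paper ("Spotting adversarial samples for speaker verification by neural vocoders") can be sketched as follows: score the test utterance before and after neural-vocoder re-synthesis and flag large score shifts. The helper names, the use of an absolute score difference, and the threshold value are assumptions for illustration; the paper's actual detector and its tuning may differ.

```python
def is_adversarial(asv_score, vocoder_resynthesize, enroll_emb, test_wave,
                   diff_threshold=0.1):
    """Flag a test utterance as adversarial if re-synthesis shifts its ASV score.

    asv_score            : callable(enroll_emb, waveform) -> similarity score
    vocoder_resynthesize : callable(waveform) -> waveform re-generated by a
                           neural vocoder (e.g. from its mel-spectrogram)
    diff_threshold       : assumed threshold on the absolute score difference,
                           to be tuned on held-out genuine/adversarial data
    """
    score_original = asv_score(enroll_emb, test_wave)
    score_resynth = asv_score(enroll_emb, vocoder_resynthesize(test_wave))
    # Genuine speech survives re-synthesis with a similar score, while
    # adversarial perturbations are largely destroyed by it, so a large
    # gap indicates an adversarial sample.
    return abs(score_original - score_resynth) > diff_threshold
```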