Adversarial defense for automatic speaker verification by cascaded
self-supervised learning models
- URL: http://arxiv.org/abs/2102.07047v1
- Date: Sun, 14 Feb 2021 01:56:43 GMT
- Title: Adversarial defense for automatic speaker verification by cascaded
self-supervised learning models
- Authors: Haibin Wu, Xu Li, Andy T. Liu, Zhiyong Wu, Helen Meng, Hung-yi Lee
- Abstract summary: More and more malicious attackers attempt to launch adversarial attacks at automatic speaker verification (ASV) systems.
We propose a standard and attack-agnostic method based on cascaded self-supervised learning models to purify the adversarial perturbations.
Experimental results demonstrate that the proposed method achieves effective defense performance and can successfully counter adversarial attacks.
- Score: 101.42920161993455
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automatic speaker verification (ASV) is one of the core technologies in
biometric identification. With the ubiquitous usage of ASV systems in
safety-critical applications, more and more malicious attackers attempt to
launch adversarial attacks at ASV systems. In the midst of the arms race
between attack and defense in ASV, how to effectively improve the robustness of
ASV against adversarial attacks remains an open question. We note that the
self-supervised learning models possess the ability to mitigate superficial
perturbations in the input after pretraining. Hence, with the goal of effective
defense in ASV against adversarial attacks, we propose a standard and
attack-agnostic method based on cascaded self-supervised learning models to
purify the adversarial perturbations. Experimental results demonstrate that the
proposed method achieves effective defense performance and can successfully
counter adversarial attacks in scenarios where attackers may either be aware or
unaware of the self-supervised learning models.
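The purification idea is simple to prototype: chain several pretrained self-supervised reconstruction models and pass the (possibly adversarial) features through each stage before scoring. Below is a minimal sketch of that cascade in PyTorch, assuming each stage exposes a reconstruction forward pass; the toy autoencoder stands in for a real pretrained model such as Mockingjay, and the commented `asv_model` call is a hypothetical scoring backend.

```python
# A minimal sketch of cascaded purification, not the authors' code.
import torch
import torch.nn as nn

class ToySSLStage(nn.Module):
    """Stand-in for one pretrained self-supervised reconstruction model."""
    def __init__(self, feat_dim=80, hidden=256):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU())
        self.decoder = nn.Linear(hidden, feat_dim)

    def forward(self, x):                      # x: (batch, time, feat_dim)
        return self.decoder(self.encoder(x))

class CascadedPurifier(nn.Module):
    """Chain several reconstruction models; each pass is assumed to
    attenuate superficial (adversarial) perturbations while keeping
    speaker-relevant structure."""
    def __init__(self, stages):
        super().__init__()
        self.stages = nn.ModuleList(stages)

    @torch.no_grad()
    def forward(self, feats):
        for stage in self.stages:
            feats = stage(feats)
        return feats

# Usage: purify features before they reach the ASV scoring model.
purifier = CascadedPurifier([ToySSLStage() for _ in range(3)])
adv_feats = torch.randn(1, 200, 80)            # 200 frames of 80-dim fbanks
clean_est = purifier(adv_feats)
# score = asv_model(clean_est, enroll_emb)     # hypothetical ASV backend
```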
Related papers
- VCAT: Vulnerability-aware and Curiosity-driven Adversarial Training for Enhancing Autonomous Vehicle Robustness [18.27802330689405]
Vulnerability-aware and Curiosity-driven Adversarial Training (VCAT) is a framework to train autonomous vehicles (AVs) against malicious attacks.
VCAT uses a surrogate network to fit the value function of the AV victim, providing dense information about the victim's inherent vulnerabilities.
In the victim defense training phase, the AV is trained in critical scenarios in which the pretrained attacker is positioned around the victim to generate attack behaviors.
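A minimal sketch of the surrogate-value idea, assuming logged (state, return) pairs from victim rollouts are available; the network sizes, the toy data, and the candidate-placement step are illustrative, not the paper's implementation.

```python
# Sketch: fit a surrogate to the victim's value function, then let the
# attacker target states where the predicted value is lowest.
import torch
import torch.nn as nn

surrogate = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)

# Phase 1: regress the surrogate onto the victim's observed returns.
states = torch.randn(512, 16)            # logged victim states (toy data)
returns = torch.randn(512, 1)            # corresponding discounted returns
for _ in range(200):
    loss = nn.functional.mse_loss(surrogate(states), returns)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Phase 2 (attacker side): prefer placements where the victim's predicted
# value is low, i.e. where it is most vulnerable.
candidates = torch.randn(32, 16)         # candidate attacker placements
most_vulnerable = candidates[surrogate(candidates).argmin()]
```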
arXiv Detail & Related papers (2024-09-19T14:53:02Z)
- Model-Agnostic Meta-Attack: Towards Reliable Evaluation of Adversarial Robustness [53.094682754683255]
We propose a Model-Agnostic Meta-Attack (MAMA) approach to discover stronger attack algorithms automatically.
Our method learns the optimizer in adversarial attacks, parameterized by a recurrent neural network.
We develop a model-agnostic training algorithm to improve the generalization ability of the learned optimizer when attacking unseen defenses.
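A minimal sketch of the attack-time behavior of such a learned optimizer: an LSTM cell maps the input gradient to an update direction, replacing the fixed sign step of PGD. The meta-training loop over data samples and defenses is omitted, `model` is any differentiable classifier, and shapes and hyperparameters are illustrative.

```python
# Sketch: PGD-style attack whose update direction comes from an RNN
# instead of sign(grad); the RNN here is untrained for brevity.
import torch
import torch.nn as nn

class RNNOptimizer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.cell = nn.LSTMCell(dim, dim)
        self.out = nn.Linear(dim, dim)

    def forward(self, grad, state):
        h, c = self.cell(grad, state)
        return torch.tanh(self.out(h)), (h, c)

def learned_attack(model, x, y, steps=10, eps=0.03, alpha=0.01):
    dim = x.numel() // x.shape[0]
    rnn = RNNOptimizer(dim)
    state = (torch.zeros(x.shape[0], dim), torch.zeros(x.shape[0], dim))
    x_adv = x.clone()
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        direction, state = rnn(grad.flatten(1), state)
        x_adv = x_adv + alpha * direction.view_as(x_adv)
        x_adv = x + (x_adv - x).clamp(-eps, eps)   # project to eps-ball
    return x_adv.detach()
```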
arXiv Detail & Related papers (2021-10-13T13:54:24Z)
- Voting for the right answer: Adversarial defense for speaker verification [79.10523688806852]
ASV is under threat from adversarial attacks, which are nearly indistinguishable from their original counterparts to human perception.
We propose the idea of "voting for the right answer" to prevent risky decisions of ASV in blind spot areas.
Experimental results show that our proposed method improves the robustness against both limited-knowledge and perfect-knowledge attackers.
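A minimal sketch of the voting idea, assuming a cosine-scoring backend: score several randomly perturbed copies of the test utterance against the enrollment embedding and take a majority vote, so an adversarial input sitting in a blind spot is pulled back toward a typical decision. `embed` is a hypothetical speaker-embedding extractor (e.g. an x-vector model), and the noise level and threshold are placeholders.

```python
# Sketch: majority vote over noisy copies of the test utterance.
import numpy as np

def asv_score(test_emb, enroll_emb):
    # Cosine similarity between test and enrollment embeddings.
    return float(test_emb @ enroll_emb /
                 (np.linalg.norm(test_emb) * np.linalg.norm(enroll_emb)))

def vote_decision(test_wave, enroll_emb, embed, n_votes=15,
                  sigma=0.002, threshold=0.5):
    votes = []
    for _ in range(n_votes):
        noisy = test_wave + np.random.normal(0.0, sigma, size=test_wave.shape)
        votes.append(asv_score(embed(noisy), enroll_emb) > threshold)
    return sum(votes) > n_votes // 2       # majority vote: accept or reject
```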
arXiv Detail & Related papers (2021-06-15T04:05:28Z)
- Improving the Adversarial Robustness for Speaker Verification by Self-Supervised Learning [95.60856995067083]
This work is among the first to perform adversarial defense for ASV without knowing the specific attack algorithms.
We propose to perform adversarial defense from two perspectives: 1) adversarial perturbation purification and 2) adversarial perturbation detection.
Experimental results show that our detection module effectively shields the ASV by detecting adversarial samples with an accuracy of around 80%.
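One plausible way to wire the two defenses together, sketched below under the assumption that genuine inputs are barely changed by purification while adversarial inputs lose their carefully tuned perturbation: flag an input when its ASV score shifts sharply after purification, otherwise score the purified input. `purify`, `asv_score`, and the threshold `tau` are placeholders rather than the paper's exact procedure.

```python
# Sketch: detection via the score shift caused by purification.
def detect_adversarial(test_wave, enroll_emb, purify, asv_score, tau=0.1):
    s_raw = asv_score(test_wave, enroll_emb)
    s_pur = asv_score(purify(test_wave), enroll_emb)
    # A large score shift after purification suggests an adversarial input.
    return abs(s_raw - s_pur) > tau

def defended_score(test_wave, enroll_emb, purify, asv_score):
    if detect_adversarial(test_wave, enroll_emb, purify, asv_score):
        return None                        # reject suspected adversarial input
    return asv_score(purify(test_wave), enroll_emb)
```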
arXiv Detail & Related papers (2021-06-01T07:10:54Z)
- Investigating Robustness of Adversarial Samples Detection for Automatic Speaker Verification [78.51092318750102]
This work proposes to defend ASV systems against adversarial attacks with a separate detection network.
A VGG-like binary classification detector is introduced and demonstrated to be effective on detecting adversarial samples.
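A minimal sketch of such a detector, with illustrative layer sizes rather than the paper's exact architecture: a small VGG-style CNN that classifies a spectrogram as genuine or adversarial.

```python
# Sketch: VGG-like binary detector on log-mel spectrogram inputs.
import torch
import torch.nn as nn

def vgg_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2))

detector = nn.Sequential(
    vgg_block(1, 32), vgg_block(32, 64), vgg_block(64, 128),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(128, 2))                     # class 0: genuine, 1: adversarial

spec = torch.randn(4, 1, 80, 300)          # batch of log-mel spectrograms
is_adversarial = detector(spec).argmax(dim=1)
```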
arXiv Detail & Related papers (2020-06-11T04:31:56Z)
- Defense for Black-box Attacks on Anti-spoofing Models by Self-Supervised Learning [71.17774313301753]
We explore the robustness of self-supervised learned high-level representations by using them in the defense against adversarial attacks.
Experimental results on the ASVspoof 2019 dataset demonstrate that high-level representations extracted by Mockingjay can prevent the transferability of adversarial examples.
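A minimal sketch of defending with high-level representations: the downstream countermeasure consumes deep-layer Mockingjay features instead of raw spectrograms. `mockingjay` is a placeholder for a pretrained model (e.g. from the s3prl toolkit) returning per-layer hidden states; the layer choice, pooling, and feature dimension are illustrative.

```python
# Sketch: classify on pooled deep-layer SSL features, not raw spectrograms.
import torch
import torch.nn as nn

def highlevel_features(mockingjay, spec, layer=-1):
    with torch.no_grad():
        hidden_states = mockingjay(spec)   # list of (batch, time, dim) tensors
    return hidden_states[layer].mean(dim=1)  # pool deepest layer over time

classifier = nn.Sequential(nn.Linear(768, 256), nn.ReLU(), nn.Linear(256, 2))
# logits = classifier(highlevel_features(mockingjay, spec))  # bona fide vs. spoof
```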
arXiv Detail & Related papers (2020-06-05T03:03:06Z)
- Adversarial Attacks on Machine Learning Cybersecurity Defences in Industrial Control Systems [2.86989372262348]
This paper explores how adversarial learning can be used to target supervised models by generating adversarial samples.
It also explores how such samples can support the robustness of supervised models using adversarial training.
Overall, the classification performance of two widely used classifiers, Random Forest and J48, decreased by 16 and 20 percentage points when adversarial samples were present.
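A minimal sketch of that attack-then-retrain loop on a tree ensemble, using synthetic tabular data in place of an ICS dataset: perturb the most important features of malicious samples to evade the classifier, then fold the correctly labelled adversarial samples back into training. The perturbation rule and magnitudes are illustrative, not the paper's attack.

```python
# Sketch: feature-importance-guided evasion, then adversarial retraining.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(int)    # toy "attack vs. benign" labels
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Craft adversarial samples by nudging the two most important features.
top = np.argsort(clf.feature_importances_)[-2:]
X_adv = X[y == 1].copy()
X_adv[:, top] -= 1.5                       # push attacks toward the benign region
evasion_rate = (clf.predict(X_adv) == 0).mean()

# Adversarial training: retrain with correctly labelled adversarial samples.
X_aug = np.vstack([X, X_adv])
y_aug = np.concatenate([y, np.ones(len(X_adv))])
clf_robust = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_aug, y_aug)
```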
arXiv Detail & Related papers (2020-04-10T12:05:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.