Defense against adversarial attacks on spoofing countermeasures of ASV
- URL: http://arxiv.org/abs/2003.03065v1
- Date: Fri, 6 Mar 2020 08:08:54 GMT
- Title: Defense against adversarial attacks on spoofing countermeasures of ASV
- Authors: Haibin Wu, Songxiang Liu, Helen Meng, Hung-yi Lee
- Abstract summary: This paper introduces a passive defense method, spatial smoothing, and a proactive defense method, adversarial training, to mitigate the vulnerability of ASV spoofing countermeasure models.
The experimental results show that both defense methods help spoofing countermeasure models counter adversarial examples.
- Score: 95.87555881176529
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Various state-of-the-art countermeasure methods for automatic speaker
verification (ASV) with considerable anti-spoofing performance were proposed in
the ASVspoof 2019 challenge. However, previous work has shown that
countermeasure models are vulnerable to adversarial examples indistinguishable
from natural data. A good countermeasure model should not only be robust
against spoofing audio, including synthetic, converted, and replayed audio, but
also counteract examples deliberately crafted by malicious adversaries. In this
work, we introduce a passive defense method, spatial smoothing, and a proactive
defense method, adversarial training, to mitigate the vulnerability of ASV
spoofing countermeasure models against adversarial examples. This paper is
among the first to use defense methods to improve the robustness of ASV
spoofing countermeasure models under adversarial attacks. The experimental
results show that both defense methods help spoofing countermeasure models
counter adversarial examples.
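As a concrete illustration of the two defenses named in the abstract, the sketch below shows spatial smoothing as a median filter applied to the input spectrogram and adversarial training as a single FGSM-style update step. This is a minimal sketch under stated assumptions (a PyTorch countermeasure classifier; the names cm_model, spec, labels and the hyperparameters are hypothetical), not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def spatial_smoothing(spec, kernel_size=3):
    """Passive defense: median-filter a (batch, 1, time, freq) spectrogram
    to suppress small adversarial perturbations before scoring."""
    pad = kernel_size // 2
    x = F.pad(spec, (pad, pad, pad, pad), mode="reflect")
    # collect local k x k patches and replace each bin by the patch median
    patches = x.unfold(2, kernel_size, 1).unfold(3, kernel_size, 1)
    return patches.contiguous().flatten(-2).median(dim=-1).values

def adversarial_training_step(cm_model, optimizer, spec, labels, eps=0.002):
    """Proactive defense: one FGSM-style adversarial training step that
    updates the model on both clean and adversarially perturbed inputs."""
    spec_adv = spec.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(cm_model(spec_adv), labels)
    grad = torch.autograd.grad(loss, spec_adv)[0]
    spec_adv = (spec_adv + eps * grad.sign()).detach()  # FGSM perturbation

    optimizer.zero_grad()
    total = F.cross_entropy(cm_model(spec), labels) \
          + F.cross_entropy(cm_model(spec_adv), labels)
    total.backward()
    optimizer.step()
    return total.item()
```

With the passive defense, inference would score cm_model(spatial_smoothing(spec)) instead of the raw spectrogram; the proactive defense changes only the training loop.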
Related papers
- MirrorCheck: Efficient Adversarial Defense for Vision-Language Models [55.73581212134293]
We propose a novel, yet elegantly simple approach for detecting adversarial samples in Vision-Language Models.
Our method leverages Text-to-Image (T2I) models to generate images based on captions produced by target VLMs.
Empirical evaluations conducted on different datasets validate the efficacy of our approach.
arXiv Detail & Related papers (2024-06-13T15:55:04Z)
- MPAT: Building Robust Deep Neural Networks against Textual Adversarial Attacks [4.208423642716679]
We propose a malicious perturbation based adversarial training method (MPAT) for building robust deep neural networks against adversarial attacks.
Specifically, we construct a multi-level malicious example generation strategy to generate adversarial examples with malicious perturbations.
We employ a novel training objective function to achieve the defense goal without compromising performance on the original task.
arXiv Detail & Related papers (2024-02-29T01:49:18Z)
- AdvFAS: A robust face anti-spoofing framework against adversarial examples [24.07755324680827]
We propose a robust face anti-spoofing framework, namely AdvFAS, that leverages two coupled scores to accurately distinguish between correctly detected and wrongly detected face images.
Experiments demonstrate the effectiveness of our framework in a variety of settings, including different attacks, datasets, and backbones.
arXiv Detail & Related papers (2023-08-04T02:47:19Z)
- Improving the Adversarial Robustness for Speaker Verification by Self-Supervised Learning [95.60856995067083]
This work is among the first to perform adversarial defense for ASV without knowing the specific attack algorithms.
We propose to perform adversarial defense from two perspectives: 1) adversarial perturbation purification and 2) adversarial perturbation detection.
Experimental results show that our detection module effectively shields the ASV by detecting adversarial samples with an accuracy of around 80%.
arXiv Detail & Related papers (2021-06-01T07:10:54Z)
- Adversarial defense for automatic speaker verification by cascaded self-supervised learning models [101.42920161993455]
More and more malicious attackers attempt to launch adversarial attacks against automatic speaker verification (ASV) systems.
We propose a standard and attack-agnostic method based on cascaded self-supervised learning models to purify the adversarial perturbations.
Experimental results demonstrate that the proposed method achieves effective defense performance and can successfully counter adversarial attacks.
arXiv Detail & Related papers (2021-02-14T01:56:43Z)
- Investigating Robustness of Adversarial Samples Detection for Automatic Speaker Verification [78.51092318750102]
This work proposes to defend ASV systems against adversarial attacks with a separate detection network.
A VGG-like binary classification detector is introduced and demonstrated to be effective at detecting adversarial samples (a minimal sketch of such a detector follows this list).
arXiv Detail & Related papers (2020-06-11T04:31:56Z)
- Defense for Black-box Attacks on Anti-spoofing Models by Self-Supervised Learning [71.17774313301753]
We explore the robustness of self-supervised learned high-level representations by using them in the defense against adversarial attacks.
Experimental results on the ASVspoof 2019 dataset demonstrate that high-level representations extracted by Mockingjay can prevent the transferability of adversarial examples.
arXiv Detail & Related papers (2020-06-05T03:03:06Z)
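Several of the related papers above defend ASV by detecting adversarial samples with a separate network rather than by hardening the countermeasure itself. The sketch below is a minimal, hypothetical version of such a VGG-style binary detector on spectrogram inputs; the layer sizes, input shape, and class convention are illustrative assumptions, not the architecture of any cited paper.

```python
import torch
import torch.nn as nn

class SpectrogramAdvDetector(nn.Module):
    """Small VGG-style binary classifier: genuine vs. adversarial input."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, 2)  # 0 = genuine, 1 = adversarial

    def forward(self, spec):                # spec: (batch, 1, time, freq)
        return self.classifier(self.features(spec).flatten(1))

# usage: flag suspicious inputs before they reach the spoofing countermeasure
detector = SpectrogramAdvDetector()
logits = detector(torch.randn(4, 1, 400, 80))  # dummy batch of spectrograms
is_adversarial = logits.argmax(dim=1).bool()
```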