Scalable Ensemble-based Detection Method against Adversarial Attacks for speaker verification
- URL: http://arxiv.org/abs/2312.08622v1
- Date: Thu, 14 Dec 2023 03:04:05 GMT
- Title: Scalable Ensemble-based Detection Method against Adversarial Attacks for speaker verification
- Authors: Haibin Wu, Heng-Cheng Kuo, Yu Tsao, Hung-yi Lee
- Abstract summary: This paper comprehensively compares mainstream purification techniques in a unified framework.
We propose an easy-to-follow ensemble approach that integrates advanced purification modules for detection.
- Score: 73.30974350776636
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automatic speaker verification (ASV) is highly susceptible to adversarial
attacks. Purification modules are usually adopted as a pre-processing step to
mitigate adversarial noise. However, they are commonly implemented across
diverse experimental settings, rendering direct comparisons challenging. This
paper comprehensively compares mainstream purification techniques in a unified
framework. We find these methods often face a trade-off between user experience
and security, as they struggle to simultaneously maintain performance on
genuine samples and reduce adversarial perturbations. To address this challenge,
some efforts have extended purification modules to encompass detection
capabilities, aiming to alleviate the trade-off. However, more advanced
purification modules keep emerging and surpassing previously proposed detection
methods.
As a result, we further propose an easy-to-follow ensemble approach that
integrates advanced purification modules for detection, achieving
state-of-the-art (SOTA) performance in countering adversarial noise. Our
ensemble method has great potential due to its compatibility with future
advanced purification techniques.
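As a concrete illustration of the ensemble idea, the sketch below assumes the common score-discrepancy heuristic: an input whose ASV similarity score shifts sharply after purification is flagged as adversarial, with the shift aggregated over all purification modules. The helper names (`asv_score`, `purifiers`) and the max-shift rule are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of purification-based ensemble detection for ASV.
# Assumptions (illustrative, not the paper's code): `asv_score` returns the
# similarity between an enrollment utterance and a test utterance, and each
# purifier maps a waveform to its purified version. An input is flagged as
# adversarial when purification shifts its ASV score sharply, aggregated
# over every purification module in the ensemble.
from typing import Callable, Sequence

import numpy as np

Waveform = np.ndarray
Purifier = Callable[[Waveform], Waveform]
ScoreFn = Callable[[Waveform, Waveform], float]  # (enrollment, test) -> similarity


def ensemble_detect(
    enrollment: Waveform,
    test: Waveform,
    asv_score: ScoreFn,
    purifiers: Sequence[Purifier],
    threshold: float,
) -> bool:
    """Return True if `test` is flagged as adversarial."""
    raw_score = asv_score(enrollment, test)
    # Score shift induced by each purification module: adversarial inputs
    # tend to lose their crafted perturbation, so their score moves a lot.
    shifts = [abs(raw_score - asv_score(enrollment, p(test))) for p in purifiers]
    # Max-shift aggregation; averaging or voting are equally valid choices.
    return max(shifts) > threshold
```

Because each purifier is treated as a black box, newly published purification modules can simply be appended to `purifiers`, which reflects the compatibility with future purification techniques that the abstract emphasizes.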
Related papers
- Classifier Guidance Enhances Diffusion-based Adversarial Purification by Preserving Predictive Information [75.36597470578724]
Adversarial purification is a promising approach to defending neural networks against adversarial attacks.
We propose the gUided Purification (COUP) algorithm, which purifies inputs while keeping them away from the classifier's decision boundary.
Experimental results show that COUP can achieve better adversarial robustness under strong attack methods.
arXiv Detail & Related papers (2024-08-12T02:48:00Z) - Purify Unlearnable Examples via Rate-Constrained Variational Autoencoders [101.42201747763178]
Unlearnable examples (UEs) seek to maximize testing error by making subtle modifications to training examples that are correctly labeled.
Our work provides a novel disentanglement mechanism to build an efficient pre-training purification method.
arXiv Detail & Related papers (2024-05-02T16:49:25Z) - Adversarial Text Purification: A Large Language Model Approach for
Defense [25.041109219049442]
Adversarial purification is a defense mechanism for safeguarding classifiers against adversarial attacks.
We propose a novel adversarial text purification method that harnesses the generative capabilities of Large Language Models.
Our proposed method demonstrates remarkable performance over various classifiers, improving their accuracy under attack by over 65% on average.
arXiv Detail & Related papers (2024-02-05T02:36:41Z) - Token-Level Adversarial Prompt Detection Based on Perplexity Measures
and Contextual Information [67.78183175605761]
Large Language Models are susceptible to adversarial prompt attacks.
This vulnerability underscores a significant concern regarding the robustness and reliability of LLMs.
We introduce a novel approach to detecting adversarial prompts at a token level (a per-token perplexity sketch in this spirit appears after this list).
arXiv Detail & Related papers (2023-11-20T03:17:21Z) - NPVForensics: Jointing Non-critical Phonemes and Visemes for Deepfake
Detection [50.33525966541906]
Existing multimodal detection methods capture audio-visual inconsistencies to expose Deepfake videos.
We propose a novel Deepfake detection method to mine the correlation between Non-critical Phonemes and Visemes, termed NPVForensics.
Our model can be easily adapted to the downstream Deepfake datasets with fine-tuning.
arXiv Detail & Related papers (2023-06-12T06:06:05Z) - A Minimax Approach Against Multi-Armed Adversarial Attacks Detection [31.971443221041174]
Multi-armed adversarial attacks have been shown to be highly successful in fooling state-of-the-art detectors.
We propose a solution that aggregates the soft-probability outputs of multiple pre-trained detectors according to a minimax approach.
We show that our aggregation consistently outperforms individual state-of-the-art detectors against multi-armed adversarial attacks (a simplified weight-selection sketch appears after this list).
arXiv Detail & Related papers (2023-02-04T18:21:22Z) - Diffusion Models for Adversarial Purification [69.1882221038846]
Adversarial purification refers to a class of defense methods that remove adversarial perturbations using a generative model.
We propose DiffPure that uses diffusion models for adversarial purification.
Our method achieves the state-of-the-art results, outperforming current adversarial training and adversarial purification methods.
arXiv Detail & Related papers (2022-05-16T06:03:00Z) - FADER: Fast Adversarial Example Rejection [19.305796826768425]
Recent defenses have been shown to improve adversarial robustness by detecting anomalous deviations from legitimate training samples at different layer representations.
We introduce FADER, a novel technique for speeding up detection-based methods.
Our experiments show up to a 73x reduction in prototypes compared to the analyzed detectors on MNIST, and up to a 50x reduction on CIFAR10.
arXiv Detail & Related papers (2020-10-18T22:00:11Z)
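For the token-level adversarial prompt detection entry above, here is a hedged sketch of the perplexity side of that idea: score every token with a causal language model and flag tokens whose negative log-likelihood is unusually high. The model choice (gpt2), the threshold, and the helper names are placeholders for illustration; the cited paper additionally exploits contextual information, which this sketch omits.

```python
# Per-token negative log-likelihood under a causal LM; unusually improbable
# tokens are flagged as potentially adversarial. Model name and threshold
# are placeholders, not the cited paper's configuration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()


def token_nll(prompt: str) -> list[tuple[str, float]]:
    """Return (token, negative log-likelihood) pairs for a prompt."""
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    # NLL of each token given its left context (the first token has none).
    log_probs = torch.log_softmax(logits[:, :-1], dim=-1)
    nll = -log_probs.gather(2, ids[:, 1:].unsqueeze(-1)).squeeze(-1)[0]
    tokens = tokenizer.convert_ids_to_tokens(ids[0, 1:].tolist())
    return list(zip(tokens, nll.tolist()))


def flag_tokens(prompt: str, threshold: float = 10.0) -> list[str]:
    """Tokens whose NLL exceeds a (hypothetical) threshold."""
    return [tok for tok, score in token_nll(prompt) if score > threshold]
```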
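Likewise, for the minimax detector-aggregation entry, a simplified sketch of the underlying optimization: choose non-negative weights over the pre-trained detectors that maximize the worst-case aggregated detection rate across attack arms, which reduces to a small linear program. The validation matrix P and the linear-programming formulation are assumptions meant to convey the idea, not the paper's exact method.

```python
# Minimax aggregation weights over pre-trained detectors (illustrative).
# P[i, a] = validation detection rate of detector i under attack arm a.
# We maximize t subject to (P^T w)_a >= t for every arm a, with w on the
# simplex, i.e. the ensemble's worst-case detection rate.
import numpy as np
from scipy.optimize import linprog


def minimax_weights(P: np.ndarray) -> np.ndarray:
    """Return simplex weights over detectors maximizing worst-case detection."""
    n_det, n_att = P.shape
    # Decision variables: [w_1, ..., w_n_det, t]; maximize t == minimize -t.
    c = np.zeros(n_det + 1)
    c[-1] = -1.0
    # For each attack arm a:  t - sum_i w_i * P[i, a] <= 0.
    A_ub = np.hstack([-P.T, np.ones((n_att, 1))])
    b_ub = np.zeros(n_att)
    # Weights sum to one.
    A_eq = np.hstack([np.ones((1, n_det)), np.zeros((1, 1))])
    b_eq = np.array([1.0])
    bounds = [(0.0, 1.0)] * n_det + [(0.0, 1.0)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:n_det]
```

For example, with two detectors whose detection rates are (0.9, 0.4) and (0.5, 0.8) across two attack arms, minimax_weights returns roughly (0.375, 0.625) and lifts the worst-case rate to 0.65, above either detector used alone (0.4 and 0.5).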