Meta-Learning Approaches for Improving Detection of Unseen Speech Deepfakes
- URL: http://arxiv.org/abs/2410.20578v2
- Date: Thu, 31 Oct 2024 13:41:39 GMT
- Title: Meta-Learning Approaches for Improving Detection of Unseen Speech Deepfakes
- Authors: Ivan Kukanov, Janne Laakkonen, Tomi Kinnunen, Ville Hautamäki
- Abstract summary: Current speech deepfake detection approaches perform satisfactorily against known adversaries.
The proliferation of speech deepfakes on social media underscores the need for systems that can generalize to unseen attacks.
We address this problem from the perspective of meta-learning, aiming to learn attack-invariant features to adapt to unseen attacks with very few samples available.
- Score: 9.894633583748895
- Abstract: Current speech deepfake detection approaches perform satisfactorily against known adversaries; however, generalization to unseen attacks remains an open challenge. The proliferation of speech deepfakes on social media underscores the need for systems that can generalize to unseen attacks not observed during training. We address this problem from the perspective of meta-learning, aiming to learn attack-invariant features to adapt to unseen attacks with very few samples available. This approach is promising because generating a large-scale training dataset is often expensive or infeasible. Our experiments demonstrated an improvement in the Equal Error Rate (EER) from 21.67% to 10.42% on the InTheWild dataset, using just 96 samples from the unseen dataset. Continuous few-shot adaptation ensures that the system remains up to date.
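For readers unfamiliar with the metric, the sketch below shows how an Equal Error Rate like the figures quoted above is typically computed from detector scores. It is a minimal, generic illustration: the labels and scores are synthetic placeholders, not outputs of the paper's system.

```python
import numpy as np
from sklearn.metrics import roc_curve

def equal_error_rate(labels, scores):
    """EER: the operating point where the false acceptance rate
    equals the false rejection rate (averaged at the closest crossing)."""
    fpr, tpr, _ = roc_curve(labels, scores, pos_label=1)
    fnr = 1.0 - tpr
    idx = np.argmin(np.abs(fnr - fpr))        # threshold where the two rates meet
    return float((fpr[idx] + fnr[idx]) / 2.0)

# Illustrative usage with random scores (placeholders, not real detector output)
rng = np.random.default_rng(0)
labels = np.concatenate([np.ones(500), np.zeros(500)])        # 1 = spoof, 0 = bona fide
scores = np.concatenate([rng.normal(1.0, 1.0, 500),           # spoof scores
                         rng.normal(-1.0, 1.0, 500)])         # bona fide scores
print(f"EER: {100 * equal_error_rate(labels, scores):.2f}%")
```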
Related papers
- Can We Trust the Unlabeled Target Data? Towards Backdoor Attack and Defense on Model Adaptation [120.42853706967188]
We explore potential backdoor attacks on model adaptation launched via well-designed poisoned target data.
We propose a plug-and-play method named MixAdapt that can be combined with existing adaptation algorithms.
arXiv Detail & Related papers (2024-01-11T16:42:10Z)
- Unsupervised Adversarial Detection without Extra Model: Training Loss Should Change [24.76524262635603]
Traditional approaches to adversarial training and supervised detection rely on prior knowledge of attack types and access to labeled training data.
We propose new training losses to reduce useless features and the corresponding detection method without prior knowledge of adversarial attacks.
The proposed method works well on all tested attack types, and its false positive rates are even better than those of methods specialized for certain attack types.
arXiv Detail & Related papers (2023-08-07T01:41:21Z)
- When Measures are Unreliable: Imperceptible Adversarial Perturbations toward Top-$k$ Multi-Label Learning [83.8758881342346]
A novel loss function is devised to generate adversarial perturbations that could achieve both visual and measure imperceptibility.
Experiments on large-scale benchmark datasets demonstrate the superiority of our proposed method in attacking the top-$k$ multi-label systems.
arXiv Detail & Related papers (2023-07-27T13:18:47Z)
- Adversarial Attacks are a Surprisingly Strong Baseline for Poisoning Few-Shot Meta-Learners [28.468089304148453]
We attack amortized meta-learners, which allows us to craft colluding sets of inputs that fool the system's learning algorithm.
We show that in a white box setting, these attacks are very successful and can cause the target model's predictions to become worse than chance.
We explore two hypotheses to explain this: 'overfitting' by the attack, and mismatch between the model on which the attack is generated and that to which the attack is transferred.
arXiv Detail & Related papers (2022-11-23T14:55:44Z)
- DAD: Data-free Adversarial Defense at Test Time [21.741026088202126]
Deep models are highly susceptible to adversarial attacks.
Privacy has become an important concern, restricting access to only trained models but not the training data.
We propose the completely novel problem of 'test-time adversarial defense in the absence of training data and even their statistics'.
arXiv Detail & Related papers (2022-04-04T15:16:13Z)
- Adversarial Robustness of Deep Reinforcement Learning based Dynamic Recommender Systems [50.758281304737444]
We propose to explore adversarial examples and attack detection on reinforcement learning-based interactive recommendation systems.
We first craft different types of adversarial examples by adding perturbations to the input and intervening on the causal factors.
Then, we augment recommendation systems by detecting potential attacks with a deep learning-based classifier based on the crafted data.
arXiv Detail & Related papers (2021-12-02T04:12:24Z)
- Detection and Continual Learning of Novel Face Presentation Attacks [23.13064343026656]
State-of-the-art face antispoofing systems are still vulnerable to novel types of attacks that are never seen during training.
In this paper, we enable a deep neural network to detect anomalies in the observed input data points as potential new types of attacks.
We then use experience replay to update the model to incorporate knowledge about new types of attacks without forgetting the past learned attack types.
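As a rough illustration of the experience-replay idea described above (not that paper's actual model or data), the following sketch mixes stored samples of previously learned attacks into each incremental update; the classifier and feature arrays are placeholders.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

# Placeholder features: a small replay memory of previously learned attack
# types, plus a handful of samples from a newly observed attack.
replay_X = rng.normal(0.0, 1.0, (200, 32))
replay_y = rng.integers(0, 2, 200)
new_X = rng.normal(0.5, 1.0, (20, 32))
new_y = np.ones(20, dtype=int)                 # new attack labeled as spoof (1)

clf = SGDClassifier(random_state=0)
clf.partial_fit(replay_X, replay_y, classes=np.array([0, 1]))   # prior knowledge

# Experience replay: the incremental update mixes stored old samples with the
# new-attack samples, so learning the new attack does not overwrite old ones.
idx = rng.choice(len(replay_X), size=60, replace=False)
mix_X = np.vstack([new_X, replay_X[idx]])
mix_y = np.concatenate([new_y, replay_y[idx]])
clf.partial_fit(mix_X, mix_y)
print(clf.score(replay_X, replay_y))           # old attacks should still be handled
```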
arXiv Detail & Related papers (2021-08-27T01:33:52Z)
- Continual Learning for Fake Audio Detection [62.54860236190694]
This paper proposes Detecting Fake Without Forgetting, a continual-learning-based method that makes the model learn new spoofing attacks incrementally.
Experiments are conducted on the ASVspoof 2019 dataset.
arXiv Detail & Related papers (2021-04-15T07:57:05Z)
- On the Generalisation Capabilities of Fisher Vector based Face Presentation Attack Detection [13.93832810177247]
Face Presentation Attack Detection techniques have reported good detection performance when evaluated on known Presentation Attack Instruments.
In this work, we use a new feature space based on Fisher Vectors, computed from compact Binarised Statistical Image Features histograms.
This new representation, evaluated for challenging unknown attacks taken from freely available facial databases, shows promising results.
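For context, the Fisher Vector encoding used in the entry above can be sketched generically as below: a plain, unnormalized implementation assuming a diagonal-covariance GMM, with random placeholder descriptors rather than actual BSIF histograms.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fisher_vector(descriptors, gmm):
    """Encode a set of local descriptors as a Fisher Vector: gradients of the
    GMM log-likelihood w.r.t. the means and (diagonal) standard deviations."""
    X = np.atleast_2d(descriptors)                                 # (N, D)
    N = X.shape[0]
    gamma = gmm.predict_proba(X)                                   # (N, K) posteriors
    mu, var, w = gmm.means_, gmm.covariances_, gmm.weights_
    sigma = np.sqrt(var)                                           # (K, D) std devs
    diff = (X[:, None, :] - mu[None, :, :]) / sigma[None, :, :]    # (N, K, D)
    g_mu = (gamma[:, :, None] * diff).sum(axis=0) / (N * np.sqrt(w)[:, None])
    g_sigma = (gamma[:, :, None] * (diff ** 2 - 1)).sum(axis=0) / (N * np.sqrt(2 * w)[:, None])
    return np.hstack([g_mu.ravel(), g_sigma.ravel()])              # (2*K*D,)

# Toy usage with random "histogram" descriptors (placeholders, not real BSIF)
rng = np.random.default_rng(0)
gmm = GaussianMixture(n_components=4, covariance_type="diag", random_state=0)
gmm.fit(rng.random((500, 16)))
fv = fisher_vector(rng.random((30, 16)), gmm)
print(fv.shape)   # (128,) = 2 * 4 components * 16 dimensions
```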
arXiv Detail & Related papers (2021-03-02T13:49:06Z)
- Anomaly Detection-Based Unknown Face Presentation Attack Detection [74.4918294453537]
Anomaly detection-based spoof attack detection is a recent development in face Presentation Attack Detection.
In this paper, we present a deep-learning solution for anomaly detection-based spoof attack detection.
The proposed approach benefits from the representation learning power of CNNs and learns better features for the fPAD task.
arXiv Detail & Related papers (2020-07-11T21:20:55Z)
- Investigating Robustness of Adversarial Samples Detection for Automatic Speaker Verification [78.51092318750102]
This work proposes to defend ASV systems against adversarial attacks with a separate detection network.
A VGG-like binary classification detector is introduced and demonstrated to be effective on detecting adversarial samples.
arXiv Detail & Related papers (2020-06-11T04:31:56Z)