Attack Agnostic Dataset: Towards Generalization and Stabilization of
Audio DeepFake Detection
- URL: http://arxiv.org/abs/2206.13979v1
- Date: Mon, 27 Jun 2022 12:30:44 GMT
- Title: Attack Agnostic Dataset: Towards Generalization and Stabilization of
Audio DeepFake Detection
- Authors: Piotr Kawa, Marcin Plata, Piotr Syga
- Abstract summary: Methods for detecting audio DeepFakes should be characterized by good generalization and stability.
We present a thorough analysis of current DeepFake detection methods and consider different audio features (front-ends).
We propose a model based on LCNN with an LFCC and mel-spectrogram front-end, which shows improvement over the LFCC-based model.
- Score: 0.4511923587827302
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Audio DeepFakes allow the creation of high-quality, convincing utterances and
therefore pose a threat due to their potential applications, such as impersonation
or fake news. Methods for detecting these manipulations should be characterized
by good generalization and stability, leading to robustness against attacks
conducted with techniques that are not explicitly included in the training. In
this work, we introduce the Attack Agnostic Dataset - a combination of two audio
DeepFake datasets and one anti-spoofing dataset that, thanks to the disjoint use of
attacks, can lead to better generalization of detection methods. We present a
thorough analysis of current DeepFake detection methods and consider different
audio features (front-ends). In addition, we propose a model based on LCNN with
an LFCC and mel-spectrogram front-end, which is not only characterized by good
generalization and stability but also shows improvement over the LFCC-based
model - we decrease the standard deviation on all folds and the EER in two folds
by up to 5%.
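As an illustration of the two front-ends named in the abstract, the sketch below extracts LFCC and log-mel-spectrogram features with torchaudio. This is not the authors' code; the sample rate and spectrogram hyperparameters are assumed, illustrative values:

```python
# Minimal sketch of the LFCC and mel-spectrogram front-ends (assumed
# hyperparameters, not the paper's settings).
import torch
import torchaudio

SAMPLE_RATE = 16_000  # assumed; common for anti-spoofing corpora

lfcc = torchaudio.transforms.LFCC(
    sample_rate=SAMPLE_RATE,
    n_lfcc=20,  # number of linear-frequency cepstral coefficients
    speckwargs={"n_fft": 512, "hop_length": 160},
)
melspec = torchaudio.transforms.MelSpectrogram(
    sample_rate=SAMPLE_RATE, n_fft=512, hop_length=160, n_mels=80,
)

waveform = torch.randn(1, SAMPLE_RATE)  # stand-in for a 1-second utterance

lfcc_feats = lfcc(waveform)                       # (1, n_lfcc, frames)
mel_feats = torch.log(melspec(waveform) + 1e-6)   # log-mel, (1, n_mels, frames)

# An LCNN-style model combining both front-ends would consume the two
# feature maps, e.g., in separate branches fused before classification.
```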
Related papers
- FADEL: Uncertainty-aware Fake Audio Detection with Evidential Deep Learning [9.960675988638805]
We propose a novel framework called Fake Audio Detection with Evidential Learning (FADEL).
FADEL incorporates model uncertainty into its predictions, thereby leading to more robust performance in OOD scenarios.
We demonstrate the validity of uncertainty estimation by analyzing a strong correlation between average uncertainty and equal error rate (EER) across different spoofing algorithms.
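Since both the abstract above and this paper report the equal error rate (EER), here is a minimal, generic sketch of how EER is computed from detection scores; this is the standard definition, not code from either paper:

```python
# EER: the operating point where the false positive rate equals the
# false negative rate.
import numpy as np
from sklearn.metrics import roc_curve

def compute_eer(labels, scores):
    """labels: 1 = bona fide, 0 = spoof; scores: higher = more bona fide."""
    fpr, tpr, _ = roc_curve(labels, scores)
    fnr = 1 - tpr
    idx = np.nanargmin(np.abs(fnr - fpr))  # point where FPR ~= FNR
    return (fpr[idx] + fnr[idx]) / 2       # report EER as their average

labels = np.array([1, 1, 0, 0, 1, 0])
scores = np.array([0.9, 0.8, 0.3, 0.4, 0.6, 0.7])
print(f"EER = {compute_eer(labels, scores):.3f}")
```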
arXiv Detail & Related papers (2025-04-22T07:40:35Z)
- Typicalness-Aware Learning for Failure Detection [26.23185979968123]
Deep neural networks (DNNs) often suffer from the overconfidence issue, where incorrect predictions are made with high confidence scores.
We propose a novel approach called Typicalness-Aware Learning (TAL) to address this issue and improve failure detection performance.
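For context on what failure detection means operationally, a common baseline (independent of TAL, whose details are not in this summary) scores each prediction by its maximum softmax probability and flags low-confidence predictions as likely failures. A minimal sketch, assuming generic classifier logits:

```python
import torch
import torch.nn.functional as F

def failure_scores(logits: torch.Tensor) -> torch.Tensor:
    """Confidence per sample; low values flag predictions likely to be wrong."""
    return F.softmax(logits, dim=-1).max(dim=-1).values

logits = torch.randn(4, 10)    # stand-in for a classifier's outputs
print(failure_scores(logits))  # one confidence score per sample
```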
arXiv Detail & Related papers (2024-11-04T11:09:47Z)
- Detecting Adversarial Data via Perturbation Forgery [28.637963515748456]
Adversarial detection aims to identify and filter out adversarial data from the data flow, based on discrepancies in distribution and noise patterns between natural and adversarial data.
New attacks based on generative models with imbalanced and anisotropic noise patterns evade detection.
We propose Perturbation Forgery, which includes noise distribution perturbation, sparse mask generation, and pseudo-adversarial data production, to train an adversarial detector capable of detecting unseen gradient-based, generative-model-based, and physical adversarial attacks.
arXiv Detail & Related papers (2024-05-25T13:34:16Z)
- FLTracer: Accurate Poisoning Attack Provenance in Federated Learning [38.47921452675418]
Federated Learning (FL) is a promising distributed learning approach that enables multiple clients to collaboratively train a shared global model.
Recent studies show that FL is vulnerable to various poisoning attacks, which can degrade the performance of global models or introduce backdoors into them.
We propose FLTracer, the first FL attack framework to accurately detect various attacks and trace the attack time, objective, type, and poisoned location of updates.
arXiv Detail & Related papers (2023-10-20T11:24:38Z)
- Towards General Visual-Linguistic Face Forgery Detection [95.73987327101143]
Deepfakes are realistic face manipulations that can pose serious threats to security, privacy, and trust.
Existing methods mostly treat this task as binary classification, which uses digital labels or mask signals to train the detection model.
We propose a novel paradigm named Visual-Linguistic Face Forgery Detection (VLFFD), which uses fine-grained sentence-level prompts as annotations.
arXiv Detail & Related papers (2023-07-31T10:22:33Z)
- Securing Distributed SGD against Gradient Leakage Threats [13.979995939926154]
This paper presents a holistic approach to gradient-leakage-resilient distributed stochastic gradient descent (SGD).
We analyze two types of strategies for privacy-enhanced federated learning: (i) gradient pruning with random selection or low-rank filtering and (ii) gradient perturbation with additive random noise or differential privacy noise.
We present a gradient leakage resilient approach to securing distributed SGD in federated learning, with differential privacy controlled noise as the tool.
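A minimal sketch of strategy (ii) above, gradient perturbation with additive noise, in a DP-SGD style (clip, then add calibrated Gaussian noise); the clipping norm and noise multiplier below are illustrative assumptions, not the paper's settings:

```python
import torch

def perturb_gradients(model, clip_norm=1.0, noise_multiplier=0.5):
    # Clip the global gradient norm to bound any single contribution.
    torch.nn.utils.clip_grad_norm_(model.parameters(), clip_norm)
    # Add Gaussian noise, scaled to the clipping norm, to every gradient.
    for p in model.parameters():
        if p.grad is not None:
            p.grad += torch.randn_like(p.grad) * noise_multiplier * clip_norm
```

Called between `loss.backward()` and `optimizer.step()`, this limits how much any client's gradients reveal about its private examples.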
arXiv Detail & Related papers (2023-05-10T21:39:27Z)
- Mixed Precision Quantization to Tackle Gradient Leakage Attacks in Federated Learning [1.7205106391379026]
Federated Learning (FL) enables collaborative model building among a large number of participants without the need for explicit data sharing.
This approach is, however, vulnerable to privacy inference attacks.
In particular, gradient leakage attacks, which have a high success rate in retrieving sensitive data from model gradients, put FL models at greater risk because communication is inherent to their architecture.
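The summary does not detail the proposed mixed precision scheme, but the underlying idea of coarsening gradients before transmission can be illustrated with plain uniform quantization; `num_bits` below is an assumed, illustrative parameter:

```python
# Uniform gradient quantization as a generic stand-in: coarser gradients
# carry less information for a leakage attack to invert.
import torch

def quantize(grad: torch.Tensor, num_bits: int = 8) -> torch.Tensor:
    levels = 2 ** num_bits - 1
    g_min, g_max = grad.min(), grad.max()
    scale = (g_max - g_min).clamp(min=1e-12) / levels
    q = torch.round((grad - g_min) / scale)  # integer levels in [0, levels]
    return q * scale + g_min                 # dequantized (lossy) gradient

g = torch.randn(4, 4)
print((g - quantize(g, num_bits=4)).abs().max())  # quantization error
```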
arXiv Detail & Related papers (2022-10-22T04:24:32Z)
- Cluster-level pseudo-labelling for source-free cross-domain facial expression recognition [94.56304526014875]
We propose the first Source-Free Unsupervised Domain Adaptation (SFUDA) method for Facial Expression Recognition (FER).
Our method exploits self-supervised pretraining to learn good feature representations from the target data.
We validate the effectiveness of our method in four adaptation setups, proving that it consistently outperforms existing SFUDA methods when applied to FER.
arXiv Detail & Related papers (2022-10-11T08:24:50Z)
- Voice-Face Homogeneity Tells Deepfake [56.334968246631725]
Existing detection approaches focus on exploring specific artifacts in deepfake videos.
We propose to perform the deepfake detection from an unexplored voice-face matching view.
Our model obtains significantly improved performance as compared to other state-of-the-art competitors.
arXiv Detail & Related papers (2022-03-04T09:08:50Z)
- Adaptive Feature Alignment for Adversarial Training [56.17654691470554]
CNNs are typically vulnerable to adversarial attacks, which pose a threat to security-sensitive applications.
We propose the adaptive feature alignment (AFA) to generate features of arbitrary attacking strengths.
Our method is trained to automatically align features of arbitrary attacking strength.
arXiv Detail & Related papers (2021-05-31T17:01:05Z)
- Detection of Adversarial Supports in Few-shot Classifiers Using Feature Preserving Autoencoders and Self-Similarity [89.26308254637702]
We propose a detection strategy to highlight adversarial support sets.
We make use of feature preserving autoencoder filtering and also the concept of self-similarity of a support set to perform this detection.
Our method is attack-agnostic and, to the best of our knowledge, the first to explore adversarial detection for few-shot classifiers.
arXiv Detail & Related papers (2020-12-09T14:13:41Z)
- Bayesian Optimization with Machine Learning Algorithms Towards Anomaly Detection [66.05992706105224]
In this paper, an effective anomaly detection framework is proposed, utilizing a Bayesian Optimization technique.
The performance of the considered algorithms is evaluated using the ISCX 2012 dataset.
Experimental results show the effectiveness of the proposed framework in terms of accuracy, precision, false alarm rate, and recall.
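A minimal sketch of the general recipe (Bayesian-style hyperparameter optimization wrapped around an anomaly detector), using Optuna and scikit-learn's IsolationForest as generic stand-ins; the paper's exact algorithms and the ISCX 2012 dataset are not used here:

```python
# Tune an anomaly detector's hyperparameters with Optuna (whose default
# TPE sampler is a Bayesian optimization method).
import optuna
from sklearn.datasets import make_classification
from sklearn.ensemble import IsolationForest
from sklearn.metrics import roc_auc_score

# Synthetic imbalanced data: ~5% of samples play the role of anomalies.
X, y = make_classification(n_samples=1000, weights=[0.95], random_state=0)

def objective(trial):
    model = IsolationForest(
        n_estimators=trial.suggest_int("n_estimators", 50, 300),
        contamination=trial.suggest_float("contamination", 0.01, 0.1),
        random_state=0,
    ).fit(X)
    # score_samples: higher = more normal, so negate for anomaly scores.
    return roc_auc_score(y, -model.score_samples(X))

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=20)
print(study.best_params)
```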
arXiv Detail & Related papers (2020-08-05T19:29:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.