A Data-Driven Diffusion-based Approach for Audio Deepfake Explanations
- URL: http://arxiv.org/abs/2506.03425v1
- Date: Tue, 03 Jun 2025 22:10:53 GMT
- Title: A Data-Driven Diffusion-based Approach for Audio Deepfake Explanations
- Authors: Petr Grinberg, Ankur Kumar, Surya Koppisetti, Gaurav Bharaj
- Abstract summary: We propose a novel data-driven approach to identify artifact regions in deepfake audio. We consider paired real and vocoded audio, and use the difference in time-frequency representation as the ground-truth explanation.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Evaluating explainability techniques, such as SHAP and LRP, in the context of audio deepfake detection is challenging due to the lack of clear ground-truth annotations. In the cases where we are able to obtain the ground truth, we find that these methods struggle to provide accurate explanations. In this work, we propose a novel data-driven approach to identify artifact regions in deepfake audio. We consider paired real and vocoded audio, and use the difference in time-frequency representation as the ground-truth explanation. The difference signal then serves as supervision to train a diffusion model to expose the deepfake artifacts in a given vocoded audio. Experimental results on the VocV4 and LibriSeVoc datasets demonstrate that our method outperforms traditional explainability techniques, both qualitatively and quantitatively.
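As described in the abstract, the ground-truth explanation is the difference between time-frequency representations of a paired real and vocoded utterance. A minimal sketch of that construction, assuming an STFT log-magnitude representation (the function name, STFT settings, and log-magnitude choice are our illustrative assumptions, not details from the paper):

```python
import numpy as np
from scipy.signal import stft

def artifact_ground_truth(real, vocoded, fs=16000, nperseg=512):
    """Absolute difference of log-magnitude spectrograms between a
    paired real and vocoded utterance; large values mark regions
    where vocoding altered the time-frequency energy."""
    _, _, Z_real = stft(real, fs=fs, nperseg=nperseg)
    _, _, Z_voc = stft(vocoded, fs=fs, nperseg=nperseg)
    eps = 1e-8  # avoid log(0)
    diff = np.log(np.abs(Z_voc) + eps) - np.log(np.abs(Z_real) + eps)
    return np.abs(diff)

# Toy usage: a pair of identical signals yields an all-zero artifact map.
t = np.linspace(0, 1, 16000, endpoint=False)
x = np.sin(2 * np.pi * 440 * t)
gt = artifact_ground_truth(x, x)
```

In the paper's setup, a map like `gt` (computed on genuinely paired real/vocoded audio) would supervise the diffusion model that exposes artifacts at inference time.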
Related papers
- LENS-DF: Deepfake Detection and Temporal Localization for Long-Form Noisy Speech [35.36044093564255]
LENS-DF is a novel and comprehensive recipe for training and evaluating audio deepfake detection and temporal localization. We conduct experiments based on a self-supervised learning front-end and a simple back-end. The results indicate that models trained on data generated with LENS-DF consistently outperform those trained via conventional recipes.
arXiv Detail & Related papers (2025-07-22T04:31:13Z) - Rehearsal with Auxiliary-Informed Sampling for Audio Deepfake Detection [7.402342914903391]
Rehearsal with Auxiliary-Informed Sampling (RAIS) is a rehearsal-based continual learning (CL) approach for audio deepfake detection. RAIS employs a label generation network to produce auxiliary labels, guiding diverse sample selection for the memory buffer. Extensive experiments show RAIS outperforms state-of-the-art methods, achieving an average Equal Error Rate (EER) of 1.953% across five experiences.
arXiv Detail & Related papers (2025-05-30T11:40:50Z) - DiMoDif: Discourse Modality-information Differentiation for Audio-visual Deepfake Detection and Localization [13.840950434728533]
DiMoDif is an audio-visual deepfake detection framework. It exploits the inter-modality differences in machine perception of speech and temporally localizes the deepfake forgery.
arXiv Detail & Related papers (2024-11-15T13:47:33Z) - Detecting Audio-Visual Deepfakes with Fine-Grained Inconsistencies [11.671275975119089]
We propose the introduction of fine-grained mechanisms for detecting subtle artifacts in both spatial and temporal domains.
First, we introduce a local audio-visual model capable of capturing small spatial regions that are prone to inconsistencies with audio.
Second, we introduce a temporally-local pseudo-fake augmentation to include samples incorporating subtle temporal inconsistencies in our training set.
arXiv Detail & Related papers (2024-08-13T09:19:59Z) - Statistics-aware Audio-visual Deepfake Detector [11.671275975119089]
Methods in audio-visual deepfake detection mostly assess the synchronization between audio and visual features.
We propose a statistical feature loss to enhance the discrimination capability of the model.
Experiments on the DFDC and FakeAVCeleb datasets demonstrate the relevance of the proposed method.
arXiv Detail & Related papers (2024-07-16T12:15:41Z) - Training-Free Deepfake Voice Recognition by Leveraging Large-Scale Pre-Trained Models [52.04189118767758]
Generalization is a major issue for current audio deepfake detectors. In this paper, we study the potential of large-scale pre-trained models for audio deepfake detection.
arXiv Detail & Related papers (2024-05-03T15:27:11Z) - What to Remember: Self-Adaptive Continual Learning for Audio Deepfake Detection [53.063161380423715]
Existing detection models have shown remarkable success in discriminating known deepfake audio, but struggle when encountering new attack types.
We propose a continual learning approach called Radian Weight Modification (RWM) for audio deepfake detection.
arXiv Detail & Related papers (2023-12-15T09:52:17Z) - CrossDF: Improving Cross-Domain Deepfake Detection with Deep Information Decomposition [53.860796916196634]
We propose a Deep Information Decomposition (DID) framework to enhance the performance of Cross-dataset Deepfake Detection (CrossDF).
Unlike most existing deepfake detection methods, our framework prioritizes high-level semantic features over specific visual artifacts.
It adaptively decomposes facial features into deepfake-related and irrelevant information, only using the intrinsic deepfake-related information for real/fake discrimination.
arXiv Detail & Related papers (2023-09-30T12:30:25Z) - Do You Remember? Overcoming Catastrophic Forgetting for Fake Audio Detection [54.20974251478516]
We propose a continual learning algorithm for fake audio detection to overcome catastrophic forgetting.
When fine-tuning a detection network, our approach adaptively computes the direction of weight modification according to the ratio of genuine utterances and fake utterances.
Our method can easily be generalized to related fields, like speech emotion recognition.
arXiv Detail & Related papers (2023-08-07T05:05:49Z) - Voice-Face Homogeneity Tells Deepfake [56.334968246631725]
Existing detection approaches focus on exploring the specific artifacts in deepfake videos.
We propose to perform deepfake detection from an unexplored voice-face matching view.
Our model obtains significantly improved performance as compared to other state-of-the-art competitors.
arXiv Detail & Related papers (2022-03-04T09:08:50Z) - Emotions Don't Lie: An Audio-Visual Deepfake Detection Method Using Affective Cues [75.1731999380562]
We present a learning-based method for detecting real and fake deepfake multimedia content.
We extract and analyze the similarity between the two audio and visual modalities from within the same video.
We compare our approach with several SOTA deepfake detection methods and report per-video AUC of 84.4% on the DFDC and 96.6% on the DF-TIMIT datasets.
arXiv Detail & Related papers (2020-03-14T22:07:26Z)