AV-Deepfake1M: A Large-Scale LLM-Driven Audio-Visual Deepfake Dataset
- URL: http://arxiv.org/abs/2311.15308v1
- Date: Sun, 26 Nov 2023 14:17:51 GMT
- Title: AV-Deepfake1M: A Large-Scale LLM-Driven Audio-Visual Deepfake Dataset
- Authors: Zhixi Cai, Shreya Ghosh, Aman Pankaj Adatia, Munawar Hayat, Abhinav
Dhall, Kalin Stefanov
- Abstract summary: We propose the AV-Deepfake1M dataset for the detection and localization of deepfake audio-visual content.
The dataset contains content-driven (i) video manipulations, (ii) audio manipulations, and (iii) audio-visual manipulations for more than 2K subjects resulting in a total of more than 1M videos.
- Score: 20.524844110786663
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The detection and localization of highly realistic deepfake audio-visual
content are challenging even for the most advanced state-of-the-art methods.
While most of the research efforts in this domain are focused on detecting
high-quality deepfake images and videos, only a few works address the problem
of the localization of small segments of audio-visual manipulations embedded in
real videos. In this research, we emulate the process of such content
generation and propose the AV-Deepfake1M dataset. The dataset contains
content-driven (i) video manipulations, (ii) audio manipulations, and (iii)
audio-visual manipulations for more than 2K subjects resulting in a total of
more than 1M videos. The paper provides a thorough description of the proposed
data generation pipeline accompanied by a rigorous analysis of the quality of
the generated data. The comprehensive benchmark of the proposed dataset
utilizing state-of-the-art deepfake detection and localization methods
indicates a significant drop in performance compared to previous datasets. The
proposed dataset will play a vital role in building the next-generation
deepfake localization methods. The dataset and associated code are available at
https://github.com/ControlNet/AV-Deepfake1M.
Related papers
- AVTENet: Audio-Visual Transformer-based Ensemble Network Exploiting
Multiple Experts for Video Deepfake Detection [53.448283629898214]
The recent proliferation of hyper-realistic deepfake videos has drawn attention to the threat of audio and visual forgeries.
Most previous work on detecting AI-generated fake videos utilizes only the visual modality or only the audio modality.
We propose an Audio-Visual Transformer-based Ensemble Network (AVTENet) framework that considers both acoustic manipulation and visual manipulation.
arXiv Detail & Related papers (2023-10-19T19:01:26Z) - A Large-scale Dataset for Audio-Language Representation Learning [54.933479346870506]
We present an innovative and automatic audio caption generation pipeline based on a series of public tools or APIs.
We construct a large-scale, high-quality, audio-language dataset, named as Auto-ACD, comprising over 1.9M audio-text pairs.
arXiv Detail & Related papers (2023-09-20T17:59:32Z) - Glitch in the Matrix: A Large Scale Benchmark for Content Driven
Audio-Visual Forgery Detection and Localization [20.46053083071752]
We propose and benchmark a new dataset, Localized Audio-Visual DeepFake (LAV-DF).
LAV-DF consists of strategic content-driven audio, visual and audio-visual manipulations.
The proposed baseline method, Boundary Aware Temporal Forgery Detection (BA-TFD), is a 3D Convolutional Neural Network-based architecture.
arXiv Detail & Related papers (2023-05-03T08:48:45Z) - PVDD: A Practical Video Denoising Dataset with Real-World Dynamic Scenes [56.4361151691284]
The Practical Video Denoising Dataset (PVDD) contains 200 noisy-clean dynamic video pairs in both sRGB and RAW formats.
Compared with existing datasets consisting of limited motion information, PVDD covers dynamic scenes with varying natural motion.
arXiv Detail & Related papers (2022-07-04T12:30:22Z) - Do You Really Mean That? Content Driven Audio-Visual Deepfake Dataset
and Multimodal Method for Temporal Forgery Localization [19.490174583625862]
We introduce a content-driven audio-visual deepfake dataset, termed Localized Audio Visual DeepFake (LAV-DF).
Specifically, the content-driven audio-visual manipulations are performed strategically to change the sentiment polarity of the whole video.
Our extensive quantitative and qualitative analysis demonstrates the proposed method's strong performance for temporal forgery localization and deepfake detection tasks.
arXiv Detail & Related papers (2022-04-13T08:02:11Z) - Voice-Face Homogeneity Tells Deepfake [56.334968246631725]
Existing detection approaches focus on exploring specific artifacts in deepfake videos.
We propose to perform the deepfake detection from an unexplored voice-face matching view.
Our model obtains significantly improved performance as compared to other state-of-the-art competitors.
arXiv Detail & Related papers (2022-03-04T09:08:50Z) - Dual Normalization Multitasking for Audio-Visual Sounding Object
Localization [0.0]
We propose a new concept, the Sounding Object, to reduce ambiguity in visually localizing the source of a sound.
To tackle this new AVSOL problem, we propose a novel multitask training strategy and architecture called Dual Normalization Multitasking.
arXiv Detail & Related papers (2021-06-01T02:02:52Z) - Automatic Curation of Large-Scale Datasets for Audio-Visual
Representation Learning [62.47593143542552]
We describe a subset optimization approach for automatic dataset curation.
We demonstrate that our approach finds videos with high audio-visual correspondence, and show that self-supervised models trained on our automatically constructed data achieve downstream performance similar to that of models trained on existing video datasets of comparable scale.
arXiv Detail & Related papers (2021-01-26T14:27:47Z) - Emotions Don't Lie: An Audio-Visual Deepfake Detection Method Using
Affective Cues [75.1731999380562]
We present a learning-based method for detecting real and fake deepfake multimedia content.
We extract and analyze the similarity between the two audio and visual modalities from within the same video.
We compare our approach with several SOTA deepfake detection methods and report per-video AUC of 84.4% on the DFDC and 96.6% on the DF-TIMIT datasets.
arXiv Detail & Related papers (2020-03-14T22:07:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.