Glitch in the Matrix: A Large Scale Benchmark for Content Driven
Audio-Visual Forgery Detection and Localization
- URL: http://arxiv.org/abs/2305.01979v3
- Date: Sun, 16 Jul 2023 07:03:45 GMT
- Title: Glitch in the Matrix: A Large Scale Benchmark for Content Driven
Audio-Visual Forgery Detection and Localization
- Authors: Zhixi Cai, Shreya Ghosh, Abhinav Dhall, Tom Gedeon, Kalin Stefanov,
Munawar Hayat
- Abstract summary: We propose and benchmark a new dataset, Localized Audio Visual DeepFake (LAV-DF).
LAV-DF consists of strategic content-driven audio, visual and audio-visual manipulations.
The proposed baseline method, Boundary Aware Temporal Forgery Detection (BA-TFD), is a 3D Convolutional Neural Network-based architecture.
- Score: 20.46053083071752
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Most deepfake detection methods focus on detecting spatial and/or
spatio-temporal changes in facial attributes and are centered around the binary
classification task of detecting whether a video is real or fake. This is
because available benchmark datasets contain mostly visual-only modifications
present in the entirety of the video. However, a sophisticated deepfake may
include small segments of audio or audio-visual manipulations that can
completely change the meaning of the video content. To address this gap, we
propose and benchmark a new dataset, Localized Audio Visual DeepFake (LAV-DF),
consisting of strategic content-driven audio, visual and audio-visual
manipulations. The proposed baseline method, Boundary Aware Temporal Forgery
Detection (BA-TFD), is a 3D Convolutional Neural Network-based architecture
which effectively captures multimodal manipulations. We further improve the
baseline method (BA-TFD+) by replacing the backbone with a Multiscale Vision
Transformer and guiding the training process with contrastive, frame
classification, boundary matching and multimodal boundary matching loss
functions. The quantitative analysis demonstrates the superiority of BA-TFD+ on
temporal forgery localization and deepfake detection tasks using several
benchmark datasets including our newly proposed dataset. The dataset, models
and code are available at https://github.com/ControlNet/LAV-DF.
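The abstract names four training signals for BA-TFD+: contrastive, frame classification, boundary matching, and multimodal boundary matching losses. The snippet below is a minimal PyTorch-style sketch of how such a joint objective could be assembled; the loss formulations, tensor shapes, and weights are illustrative assumptions, not the authors' exact implementation (see the repository above for that).

```python
import torch
import torch.nn.functional as F

def batfd_plus_objective(audio_feat, video_feat,
                         frame_logits, frame_labels,
                         bm_pred, bm_target,
                         bm_pred_av, bm_target_av,
                         margin=0.3, weights=(1.0, 1.0, 1.0, 1.0)):
    """Hypothetical combination of the four losses named in the abstract.

    audio_feat / video_feat : (B, T, D) per-frame embeddings from the two streams
    frame_logits            : (B, T) per-frame real/fake logits
    frame_labels            : (B, T) with 0 = real frame, 1 = manipulated frame
    bm_pred*, bm_target*    : boundary-matching confidence maps and their targets
    """
    # Contrastive term (assumed cosine form): pull the audio and video embeddings
    # of real frames together, push manipulated frames apart beyond a margin.
    sim = F.cosine_similarity(audio_feat, video_feat, dim=-1)          # (B, T)
    real = (frame_labels == 0).float()
    contrastive = (real * (1.0 - sim) + (1.0 - real) * F.relu(sim - margin)).mean()

    # Per-frame real/fake classification.
    frame_cls = F.binary_cross_entropy_with_logits(frame_logits, frame_labels.float())

    # Boundary-matching regression on the fused stream, plus a per-modality
    # ("multimodal") boundary-matching term.
    boundary = F.mse_loss(bm_pred, bm_target)
    boundary_av = F.mse_loss(bm_pred_av, bm_target_av)

    w = weights
    return w[0] * contrastive + w[1] * frame_cls + w[2] * boundary + w[3] * boundary_av
```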
Related papers
- DiMoDif: Discourse Modality-information Differentiation for Audio-visual Deepfake Detection and Localization [13.840950434728533]
We present a novel audio-visual deepfake detection framework.
It is based on the assumption that, in contrast to deepfakes, the visual and audio signals of real samples coincide in terms of information.
We use features from deep networks that specialize in video and audio speech recognition to spot frame-level cross-modal incongruities.
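The idea of spotting frame-level cross-modal incongruities can be illustrated with a small sketch: compare time-aligned features from a visual speech (lip-reading) model and an audio speech-recognition model, and flag frames where they disagree. The feature extractors and the similarity threshold are placeholders for illustration, not the actual DiMoDif pipeline.

```python
import numpy as np

def cross_modal_incongruity(video_speech_feat: np.ndarray,
                            audio_speech_feat: np.ndarray,
                            threshold: float = 0.5) -> np.ndarray:
    """Flag frames where lip-reading and speech-recognition features disagree.

    video_speech_feat : (T, D) per-frame features from a visual speech model
    audio_speech_feat : (T, D) per-frame features from an audio speech model
    Returns a boolean array of length T marking candidate forged frames.
    """
    # Cosine similarity per frame between the two modality-specific feature streams.
    v = video_speech_feat / (np.linalg.norm(video_speech_feat, axis=1, keepdims=True) + 1e-8)
    a = audio_speech_feat / (np.linalg.norm(audio_speech_feat, axis=1, keepdims=True) + 1e-8)
    similarity = (v * a).sum(axis=1)              # (T,)

    # Real frames are expected to agree; frames below the threshold are incongruous.
    return similarity < threshold
```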
arXiv Detail & Related papers (2024-11-15T13:47:33Z)
- AV-Deepfake1M: A Large-Scale LLM-Driven Audio-Visual Deepfake Dataset [21.90332221144928]
We propose the AV-Deepfake1M dataset for the detection and localization of deepfake audio-visual content.
The dataset contains content-driven (i) video manipulations, (ii) audio manipulations, and (iii) audio-visual manipulations for more than 2K subjects resulting in a total of more than 1M videos.
arXiv Detail & Related papers (2023-11-26T14:17:51Z)
- AV-Lip-Sync+: Leveraging AV-HuBERT to Exploit Multimodal Inconsistency for Video Deepfake Detection [32.502184301996216]
Multimodal manipulations (also known as audio-visual deepfakes) make it difficult for unimodal deepfake detectors to detect forgeries in multimedia content.
Previous methods mainly adopt uni-modal video forensics and use supervised pre-training for forgery detection.
This study proposes a new method based on a multi-modal self-supervised-learning (SSL) feature extractor.
arXiv Detail & Related papers (2023-11-05T18:35:03Z)
- AVTENet: Audio-Visual Transformer-based Ensemble Network Exploiting Multiple Experts for Video Deepfake Detection [53.448283629898214]
The recent proliferation of hyper-realistic deepfake videos has drawn attention to the threat of audio and visual forgeries.
Most previous work on detecting AI-generated fake videos only utilizes the visual modality or the audio modality.
We propose an Audio-Visual Transformer-based Ensemble Network (AVTENet) framework that considers both acoustic manipulation and visual manipulation.
arXiv Detail & Related papers (2023-10-19T19:01:26Z)
- Text-to-feature diffusion for audio-visual few-shot learning [59.45164042078649]
Few-shot learning from video data is a challenging and underexplored, yet much cheaper, setup.
We introduce a unified audio-visual few-shot video classification benchmark on three datasets.
We show that AV-DIFF obtains state-of-the-art performance on our proposed benchmark for audio-visual few-shot learning.
arXiv Detail & Related papers (2023-09-07T17:30:36Z)
- Deep Convolutional Pooling Transformer for Deepfake Detection [54.10864860009834]
We propose a deep convolutional Transformer to incorporate decisive image features both locally and globally.
Specifically, we apply convolutional pooling and re-attention to enrich the extracted features and enhance efficacy.
The proposed solution consistently outperforms several state-of-the-art baselines on both within- and cross-dataset experiments.
arXiv Detail & Related papers (2022-09-12T15:05:41Z)
- Do You Really Mean That? Content Driven Audio-Visual Deepfake Dataset and Multimodal Method for Temporal Forgery Localization [19.490174583625862]
We introduce a content-driven audio-visual deepfake dataset, termed Localized Audio Visual DeepFake (LAV-DF).
Specifically, the content-driven audio-visual manipulations are performed strategically to change the sentiment polarity of the whole video.
Our extensive quantitative and qualitative analysis demonstrates the proposed method's strong performance for temporal forgery localization and deepfake detection tasks.
arXiv Detail & Related papers (2022-04-13T08:02:11Z)
- HighlightMe: Detecting Highlights from Human-Centric Videos [52.84233165201391]
We present a domain- and user-preference-agnostic approach to detect highlightable excerpts from human-centric videos.
We use an autoencoder network equipped with spatial-temporal graph convolutions to detect human activities and interactions.
We observe a 4-12% improvement in the mean average precision of matching the human-annotated highlights over state-of-the-art methods.
arXiv Detail & Related papers (2021-10-05T01:18:15Z)
- MD-CSDNetwork: Multi-Domain Cross Stitched Network for Deepfake Detection [80.83725644958633]
Current deepfake generation methods leave discriminative artifacts in the frequency spectrum of fake images and videos.
We present a novel approach, termed as MD-CSDNetwork, for combining the features in the spatial and frequency domains to mine a shared discriminative representation.
arXiv Detail & Related papers (2021-09-15T14:11:53Z)
- Self-supervised Video Representation Learning by Uncovering Spatio-temporal Statistics [74.6968179473212]
This paper proposes a novel pretext task to address the self-supervised learning problem.
We compute a series of spatio-temporal statistical summaries, such as the spatial location and dominant direction of the largest motion.
A neural network is built and trained to yield the statistical summaries given the video frames as inputs.
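As a rough illustration of that pretext task, the snippet below derives one such regression target (the grid cell containing the largest frame-to-frame motion) directly from raw frames; the simple frame differencing and the grid size are assumptions for illustration, not the paper's exact statistics.

```python
import numpy as np

def largest_motion_location(frames: np.ndarray, grid: int = 4) -> tuple:
    """Pretext-style target: which grid cell contains the largest motion.

    frames : (T, H, W) grayscale video clip
    Returns (row, col) of the grid cell with the highest accumulated frame difference.
    """
    # Accumulate absolute frame-to-frame differences as a crude motion magnitude map.
    motion = np.abs(np.diff(frames.astype(np.float32), axis=0)).sum(axis=0)  # (H, W)

    # Pool the motion map onto a coarse grid and pick the dominant cell.
    h, w = motion.shape
    cell_energy = np.zeros((grid, grid), dtype=np.float32)
    for i in range(grid):
        for j in range(grid):
            cell = motion[i * h // grid:(i + 1) * h // grid,
                          j * w // grid:(j + 1) * w // grid]
            cell_energy[i, j] = cell.sum()
    idx = np.unravel_index(np.argmax(cell_energy), cell_energy.shape)
    return int(idx[0]), int(idx[1])
```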
arXiv Detail & Related papers (2020-08-31T08:31:56Z)