Quality-Agnostic Deepfake Detection with Intra-model Collaborative
Learning
- URL: http://arxiv.org/abs/2309.05911v1
- Date: Tue, 12 Sep 2023 02:01:31 GMT
- Title: Quality-Agnostic Deepfake Detection with Intra-model Collaborative
Learning
- Authors: Binh M. Le and Simon S. Woo
- Abstract summary: Deepfakes have recently raised a plethora of societal concerns over possible security threats and the dissemination of fake information.
Most SOTA approaches are limited to using a single, specific model for detecting a certain deepfake video quality type.
We propose a universal intra-model collaborative learning framework to enable the effective and simultaneous detection of deepfakes of different qualities.
- Score: 26.517887637150594
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deepfakes have recently raised a plethora of societal concerns over
possible security threats and the dissemination of fake information. Much
research on deepfake detection has been undertaken. However, detecting
low-quality deepfakes, as well as simultaneously detecting deepfakes of
different qualities, remains a grave challenge. Most SOTA approaches are
limited to using a single, specific model for detecting a certain deepfake
video quality type. Constructing multiple models with prior information about
video quality incurs significant computational cost, as well as model and
training data overhead. Further, such a strategy is neither scalable nor
practical to deploy in real-world settings. In this work, we propose a
universal intra-model collaborative learning framework to enable the effective
and simultaneous detection of deepfakes of different qualities. That is, our
approach is a quality-agnostic deepfake detection method, dubbed QAD. In
particular, by observing the upper bound of the general error expectation, we
maximize the dependency between intermediate representations of images from
different quality levels via the Hilbert-Schmidt Independence Criterion (HSIC).
In addition, an Adversarial Weight Perturbation (AWP) module is carefully
devised to make the model more robust against image corruption while boosting
the overall model's performance. Extensive experiments over seven popular
deepfake datasets demonstrate the superiority of our QAD model over prior SOTA
benchmarks.
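For concreteness, the sketch below shows the standard biased empirical HSIC estimator with an RBF kernel in PyTorch, together with one hypothetical way of using it as a dependency-maximization term. The feature names `feat_hq`/`feat_lq` and the weight `lambda_hsic` are illustrative assumptions, not the paper's exact formulation.

```python
import torch

def rbf_kernel(x, sigma=None):
    # x: (batch, dim). Pairwise squared Euclidean distances between rows.
    d2 = torch.cdist(x, x).pow(2)
    if sigma is None:
        # Median heuristic for the bandwidth (detached so it is not trained).
        sigma = d2.detach().median().clamp(min=1e-6).sqrt()
    return torch.exp(-d2 / (2 * sigma ** 2))

def hsic(x, y):
    """Biased empirical HSIC estimator between two feature batches."""
    n = x.size(0)
    kx, ky = rbf_kernel(x), rbf_kernel(y)
    # Centering matrix H = I - (1/n) * 1 1^T.
    h = torch.eye(n, device=x.device) - torch.full((n, n), 1.0 / n, device=x.device)
    return torch.trace(kx @ h @ ky @ h) / (n - 1) ** 2

# Hypothetical usage in a training step: feat_hq and feat_lq are intermediate
# representations of the same images at high and low compression quality;
# subtracting a weighted HSIC term from the classification loss encourages
# quality-invariant features.
# loss = cls_loss - lambda_hsic * hsic(feat_hq.flatten(1), feat_lq.flatten(1))
```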
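The Adversarial Weight Perturbation module can be read through the generic AWP recipe: briefly push the weights in the direction that most increases the loss, compute the training gradient at the perturbed point, then restore the weights before the optimizer step. The minimal sketch below follows that common recipe under stated assumptions (layer-wise scaling by `gamma * ||w||`); it is not claimed to be the exact QAD module.

```python
import torch

def awp_training_step(model, loss_fn, x, y, gamma=5e-3):
    """One adversarial-weight-perturbation step (generic sketch, not the exact QAD module)."""
    params = [p for p in model.parameters() if p.requires_grad]

    # 1) Gradient of the loss w.r.t. the weights at the current point.
    loss = loss_fn(model(x), y)
    grads = torch.autograd.grad(loss, params)

    # 2) Perturb each weight tensor in the gradient-ascent direction,
    #    scaled relative to its own norm.
    deltas = []
    with torch.no_grad():
        for p, g in zip(params, grads):
            delta = gamma * p.norm() * g / (g.norm() + 1e-12)
            p.add_(delta)
            deltas.append(delta)

    # 3) Compute the training gradient on the perturbed weights...
    adv_loss = loss_fn(model(x), y)
    adv_loss.backward()

    # 4) ...then undo the perturbation before the optimizer step.
    with torch.no_grad():
        for p, delta in zip(params, deltas):
            p.sub_(delta)

    return adv_loss
```

An outer optimizer step (e.g., SGD) would then consume the gradients left on the restored weights.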
Related papers
- Adaptive Meta-Learning for Robust Deepfake Detection: A Multi-Agent Framework to Data Drift and Model Generalization [6.589206192038365]
This paper proposes an adversarial meta-learning algorithm using task-specific adaptive sample synthesis and consistency regularization.
It boosts both robustness and generalization of the model.
Experimental results demonstrate the model's consistent performance across various datasets, outperforming the compared models.
arXiv Detail & Related papers (2024-11-12T19:55:07Z)
- Robust CLIP-Based Detector for Exposing Diffusion Model-Generated Images [13.089550724738436]
Diffusion models (DMs) have revolutionized image generation, producing high-quality images with applications spanning various fields.
Their ability to create hyper-realistic images poses significant challenges in distinguishing between real and synthetic content.
This work introduces a robust detection framework that integrates image and text features extracted by CLIP model with a Multilayer Perceptron (MLP) classifier.
arXiv Detail & Related papers (2024-04-19T14:30:41Z)
- Towards More General Video-based Deepfake Detection through Facial Feature Guided Adaptation for Foundation Model [15.61920157541529]
We propose a novel Deepfake detection approach by adapting Foundation Models with the rich information encoded inside them.
Inspired by recent advances in parameter-efficient fine-tuning, we propose a novel side-network-based decoder.
Our approach exhibits superior effectiveness in identifying unseen Deepfake samples, achieving notable performance improvement.
arXiv Detail & Related papers (2024-04-08T14:58:52Z)
- AntifakePrompt: Prompt-Tuned Vision-Language Models are Fake Image Detectors [24.78672820633581]
Deep generative models can create remarkably realistic fake images while raising concerns about misinformation and copyright infringement.
Deepfake detection techniques are developed to distinguish between real and fake images.
We propose a novel approach called AntifakePrompt, using Vision-Language Models and prompt tuning techniques.
arXiv Detail & Related papers (2023-10-26T14:23:45Z)
- CrossDF: Improving Cross-Domain Deepfake Detection with Deep Information Decomposition [53.860796916196634]
We propose a Deep Information Decomposition (DID) framework to enhance the performance of Cross-dataset Deepfake Detection (CrossDF).
Unlike most existing deepfake detection methods, our framework prioritizes high-level semantic features over specific visual artifacts.
It adaptively decomposes facial features into deepfake-related and irrelevant information, only using the intrinsic deepfake-related information for real/fake discrimination.
arXiv Detail & Related papers (2023-09-30T12:30:25Z)
- Towards General Visual-Linguistic Face Forgery Detection [95.73987327101143]
Deepfakes are realistic face manipulations that can pose serious threats to security, privacy, and trust.
Existing methods mostly treat this task as binary classification, which uses digital labels or mask signals to train the detection model.
We propose a novel paradigm named Visual-Linguistic Face Forgery Detection (VLFFD), which uses fine-grained sentence-level prompts as the annotation.
arXiv Detail & Related papers (2023-07-31T10:22:33Z)
- Deep Convolutional Pooling Transformer for Deepfake Detection [54.10864860009834]
We propose a deep convolutional Transformer to incorporate decisive image features both locally and globally.
Specifically, we apply convolutional pooling and re-attention to enrich the extracted features and enhance efficacy.
The proposed solution consistently outperforms several state-of-the-art baselines on both within- and cross-dataset experiments.
arXiv Detail & Related papers (2022-09-12T15:05:41Z)
- Generative Modeling Helps Weak Supervision (and Vice Versa) [87.62271390571837]
We propose a model fusing weak supervision and generative adversarial networks.
It captures discrete variables in the data alongside the label estimate derived from weak supervision.
It is the first approach to enable data augmentation through weakly supervised synthetic images and pseudolabels.
arXiv Detail & Related papers (2022-03-22T20:24:21Z)
- Beyond the Spectrum: Detecting Deepfakes via Re-Synthesis [69.09526348527203]
Deep generative models have led to highly realistic media, known as deepfakes, that are commonly indistinguishable from real media to the human eye.
We propose a novel fake detection method that is designed to re-synthesize testing images and extract visual cues for detection.
We demonstrate the improved effectiveness, cross-GAN generalization, and robustness against perturbations of our approach in a variety of detection scenarios.
arXiv Detail & Related papers (2021-05-29T21:22:24Z)
- TAR: Generalized Forensic Framework to Detect Deepfakes using Weakly Supervised Learning [17.40885531847159]
Deepfakes have become a critical social problem, and detecting them is of utmost importance.
In this work, we introduce a practical digital forensic tool to detect different types of deepfakes simultaneously.
We develop an autoencoder-based detection model with Residual blocks and sequentially perform transfer learning to detect different types of deepfakes simultaneously.
arXiv Detail & Related papers (2021-05-13T07:31:08Z)
- M2TR: Multi-modal Multi-scale Transformers for Deepfake Detection [74.19291916812921]
Forged images generated by Deepfake techniques pose a serious threat to the trustworthiness of digital information.
In this paper, we aim to capture the subtle manipulation artifacts at different scales for Deepfake detection.
We introduce a high-quality Deepfake dataset, SR-DF, which consists of 4,000 DeepFake videos generated by state-of-the-art face swapping and facial reenactment methods.
arXiv Detail & Related papers (2021-04-20T05:43:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.