Quality-Agnostic Deepfake Detection with Intra-model Collaborative
Learning
- URL: http://arxiv.org/abs/2309.05911v1
- Date: Tue, 12 Sep 2023 02:01:31 GMT
- Title: Quality-Agnostic Deepfake Detection with Intra-model Collaborative
Learning
- Authors: Binh M. Le and Simon S. Woo
- Abstract summary: Deepfakes have recently raised a plethora of societal concerns over their possible security threats and the dissemination of fake information.
Most SOTA approaches are limited to using a single specific model for detecting a certain deepfake video quality type.
We propose a universal intra-model collaborative learning framework to enable the effective and simultaneous detection of deepfakes of different qualities.
- Score: 26.517887637150594
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deepfakes have recently raised a plethora of societal concerns over their
possible security threats and the dissemination of fake information. Much research
on deepfake detection has been undertaken. However, detecting low-quality
deepfakes, and simultaneously detecting deepfakes of different qualities, still
remains a grave challenge. Most SOTA approaches are limited to using a single
specific model for detecting a certain deepfake video quality type. Constructing
multiple models with prior information about video quality incurs significant
computational cost, as well as model and training data overhead. Further, such a
strategy is neither scalable nor practical to deploy in real-world settings. In
this work, we propose a universal intra-model collaborative learning framework
to enable the effective and simultaneous detection of deepfakes of different
qualities. That is, our approach is a quality-agnostic deepfake detection
method, dubbed QAD. In particular, by
observing the upper bound of general error expectation, we maximize the
dependency between intermediate representations of images from different
quality levels via Hilbert-Schmidt Independence Criterion. In addition, an
Adversarial Weight Perturbation module is carefully devised to enable the model
to be more robust against image corruption while boosting the overall model's
performance. Extensive experiments over seven popular deepfake datasets
demonstrate the superiority of our QAD model over prior SOTA benchmarks.
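The HSIC dependency term mentioned in the abstract can be illustrated with a minimal sketch. Below is the standard biased empirical HSIC estimator between two batches of feature vectors; the RBF kernel choice, the bandwidth, and the function names are illustrative assumptions, not the paper's actual implementation. A higher HSIC value between intermediate representations of high- and low-quality versions of the same images indicates stronger statistical dependency.

```python
import numpy as np

def rbf_kernel(X, sigma=1.0):
    # Gaussian (RBF) kernel matrix from pairwise squared distances.
    # Bandwidth sigma is an illustrative choice, not from the paper.
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)
    return np.exp(-np.maximum(d2, 0.0) / (2.0 * sigma ** 2))

def hsic(X, Y, sigma=1.0):
    # Biased empirical HSIC estimator: trace(K H L H) / (n - 1)^2,
    # where H = I - (1/n) 11^T is the centering matrix.
    n = X.shape[0]
    K = rbf_kernel(X, sigma)
    L = rbf_kernel(Y, sigma)
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2
```

In a QAD-style training loop, one would compute this estimator between the intermediate features of a high-quality image batch and its compressed counterpart, and maximize it alongside the classification loss.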
Related papers
- Robust CLIP-Based Detector for Exposing Diffusion Model-Generated Images [13.089550724738436]
Diffusion models (DMs) have revolutionized image generation, producing high-quality images with applications spanning various fields.
Their ability to create hyper-realistic images poses significant challenges in distinguishing between real and synthetic content.
This work introduces a robust detection framework that integrates image and text features extracted by CLIP model with a Multilayer Perceptron (MLP) classifier.
arXiv Detail & Related papers (2024-04-19T14:30:41Z)
- Towards More General Video-based Deepfake Detection through Facial Feature Guided Adaptation for Foundation Model [15.61920157541529]
We propose a novel Deepfake detection approach by adapting the Foundation Models with rich information encoded inside.
Inspired by the recent advances of parameter efficient fine-tuning, we propose a novel side-network-based decoder.
Our approach exhibits superior effectiveness in identifying unseen Deepfake samples, achieving notable performance improvement.
arXiv Detail & Related papers (2024-04-08T14:58:52Z)
- Improving Cross-dataset Deepfake Detection with Deep Information Decomposition [57.284370468207214]
Deepfake technology poses a significant threat to security and social trust.
Existing detection methods suffer from sharp performance degradation when faced with cross-dataset scenarios.
We propose a deep information decomposition (DID) framework in this paper.
arXiv Detail & Related papers (2023-09-30T12:30:25Z)
- Towards General Visual-Linguistic Face Forgery Detection [95.73987327101143]
Deepfakes are realistic face manipulations that can pose serious threats to security, privacy, and trust.
Existing methods mostly treat this task as binary classification, which uses digital labels or mask signals to train the detection model.
We propose a novel paradigm named Visual-Linguistic Face Forgery Detection (VLFFD), which uses fine-grained sentence-level prompts as the annotation.
arXiv Detail & Related papers (2023-07-31T10:22:33Z)
- SeeABLE: Soft Discrepancies and Bounded Contrastive Learning for Exposing Deepfakes [7.553507857251396]
We propose a novel deepfake detector, called SeeABLE, that formalizes the detection problem as a (one-class) out-of-distribution detection task.
SeeABLE pushes perturbed faces towards predefined prototypes using a novel regression-based bounded contrastive loss.
We show that our model convincingly outperforms competing state-of-the-art detectors, while exhibiting highly encouraging generalization capabilities.
arXiv Detail & Related papers (2022-11-21T09:38:30Z)
- Deep Convolutional Pooling Transformer for Deepfake Detection [54.10864860009834]
We propose a deep convolutional Transformer to incorporate decisive image features both locally and globally.
Specifically, we apply convolutional pooling and re-attention to enrich the extracted features and enhance efficacy.
The proposed solution consistently outperforms several state-of-the-art baselines on both within- and cross-dataset experiments.
arXiv Detail & Related papers (2022-09-12T15:05:41Z)
- Metamorphic Testing-based Adversarial Attack to Fool Deepfake Detectors [2.0649235321315285]
There is a dire need for deepfake detection technology to help spot deepfake media.
Current deepfake detection models are able to achieve outstanding accuracy (>90%).
This study identifies makeup application as an adversarial attack that could fool deepfake detectors.
arXiv Detail & Related papers (2022-04-19T02:24:30Z)
- Generative Modeling Helps Weak Supervision (and Vice Versa) [87.62271390571837]
We propose a model fusing weak supervision and generative adversarial networks.
It captures discrete variables in the data alongside the weak supervision derived label estimate.
It is the first approach to enable data augmentation through weakly supervised synthetic images and pseudolabels.
arXiv Detail & Related papers (2022-03-22T20:24:21Z)
- Beyond the Spectrum: Detecting Deepfakes via Re-Synthesis [69.09526348527203]
Deep generative models have led to highly realistic media, known as deepfakes, that are often indistinguishable from real content to the human eye.
We propose a novel fake detection method that is designed to re-synthesize testing images and extract visual cues for detection.
We demonstrate the improved effectiveness, cross-GAN generalization, and robustness against perturbations of our approach in a variety of detection scenarios.
arXiv Detail & Related papers (2021-05-29T21:22:24Z)
- TAR: Generalized Forensic Framework to Detect Deepfakes using Weakly Supervised Learning [17.40885531847159]
Deepfakes have become a critical social problem, and detecting them is of utmost importance.
In this work, we introduce a practical digital forensic tool to detect different types of deepfakes simultaneously.
We develop an autoencoder-based detection model with Residual blocks and sequentially perform transfer learning to detect different types of deepfakes simultaneously.
arXiv Detail & Related papers (2021-05-13T07:31:08Z)
- M2TR: Multi-modal Multi-scale Transformers for Deepfake Detection [74.19291916812921]
Forged images generated by Deepfake techniques pose a serious threat to the trustworthiness of digital information.
In this paper, we aim to capture the subtle manipulation artifacts at different scales for Deepfake detection.
We introduce a high-quality Deepfake dataset, SR-DF, which consists of 4,000 DeepFake videos generated by state-of-the-art face swapping and facial reenactment methods.
arXiv Detail & Related papers (2021-04-20T05:43:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.