Towards A Robust Deepfake Detector: Common Artifact Deepfake Detection Model
- URL: http://arxiv.org/abs/2210.14457v1
- Date: Wed, 26 Oct 2022 04:02:29 GMT
- Title: Towards A Robust Deepfake Detector: Common Artifact Deepfake Detection Model
- Authors: Shichao Dong, Jin Wang, Renhe Ji, Jiajun Liang, Haoqiang Fan and Zheng Ge
- Abstract summary: We propose a novel deepfake detection method named Common Artifact Deepfake Detection Model.
We find that the main obstacle to learning common artifact features is that models are easily misled by the identity representation feature.
Our method effectively reduces the influence of Implicit Identity Leakage and outperforms the state-of-the-art by a large margin.
- Score: 14.308886041268973
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing deepfake detection methods perform poorly on face forgeries generated by unseen face manipulation algorithms. The generalization ability of previous methods has mainly been improved by modeling hand-crafted artifact features; however, this reliance on hand-crafted properties also impedes further improvement. In this paper, we propose a novel deepfake detection method, the Common Artifact Deepfake Detection Model, which aims to learn artifact features common to different face manipulation algorithms. We find that the main obstacle to learning such common artifact features is that models are easily misled by identity representation features, a phenomenon we call Implicit Identity Leakage (IIL). Extensive experimental results demonstrate that, by training binary classifiers under the guidance of the Artifact Detection Module, our method effectively reduces the influence of IIL and outperforms the state of the art by a large margin, showing that hand-crafted artifact feature detectors are not indispensable when tackling deepfake problems.
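As a rough illustration of the training setup described in the abstract, the sketch below pairs a standard binary real/fake classifier with an auxiliary artifact-detection head on a shared backbone, so that the shared features are pushed toward manipulation artifacts rather than identity cues. The backbone choice, head design, loss weighting, and all names are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch (assumed design, not the paper's code): a shared backbone
# with a global real/fake classifier plus an auxiliary per-location artifact
# head standing in for the Artifact Detection Module described in the abstract.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.models as models


class ArtifactGuidedDetector(nn.Module):
    def __init__(self, num_artifact_classes: int = 2):
        super().__init__()
        backbone = models.resnet18(weights=None)  # any CNN backbone would do
        # Shared feature extractor: everything up to global pooling.
        self.features = nn.Sequential(*list(backbone.children())[:-2])
        self.pool = nn.AdaptiveAvgPool2d(1)
        # Image-level real/fake classifier.
        self.cls_head = nn.Linear(512, 2)
        # Hypothetical artifact head: per-location artifact scores over the
        # spatial feature map (a stand-in for multi-scale artifact detection).
        self.artifact_head = nn.Conv2d(512, num_artifact_classes, kernel_size=1)

    def forward(self, x):
        feat = self.features(x)                              # (B, 512, H', W')
        logits = self.cls_head(self.pool(feat).flatten(1))   # (B, 2)
        artifact_map = self.artifact_head(feat)              # (B, C, H', W')
        return logits, artifact_map


if __name__ == "__main__":
    model = ArtifactGuidedDetector()
    images = torch.randn(4, 3, 224, 224)
    labels = torch.randint(0, 2, (4,))  # dummy image-level real/fake labels
    logits, artifact_map = model(images)
    # Dummy per-location artifact labels; in practice these would come from
    # knowledge of where the face was manipulated.
    artifact_targets = torch.randint(0, 2, artifact_map.shape[0:1] + artifact_map.shape[2:])
    # Joint loss: image-level classification plus artifact supervision
    # (the 0.5 weight is arbitrary for this sketch).
    loss = F.cross_entropy(logits, labels) + 0.5 * F.cross_entropy(artifact_map, artifact_targets)
    loss.backward()
    print(float(loss))
```

The intent of the auxiliary supervision is that the backbone cannot satisfy the artifact head by memorizing identities, which is one plausible way to counteract the Implicit Identity Leakage phenomenon the paper describes.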
Related papers
- Capture Artifacts via Progressive Disentangling and Purifying Blended Identities for Deepfake Detection [9.833101078121482]
Deepfake technology has raised serious concerns regarding privacy breaches and trust issues.
Current methods use disentanglement techniques to roughly separate the fake faces into artifacts and content information.
These methods lack a solid disentanglement foundation and cannot guarantee the reliability of their disentangling process.
arXiv Detail & Related papers (2024-10-14T08:04:37Z)
- DiffusionFake: Enhancing Generalization in Deepfake Detection via Guided Stable Diffusion [94.46904504076124]
Deepfake technology has made face swapping highly realistic, raising concerns about the malicious use of fabricated facial content.
Existing methods often struggle to generalize to unseen domains due to the diverse nature of facial manipulations.
We introduce DiffusionFake, a novel framework that reverses the generative process of face forgeries to enhance the generalization of detection models.
arXiv Detail & Related papers (2024-10-06T06:22:43Z)
- UniForensics: Face Forgery Detection via General Facial Representation [60.5421627990707]
High-level semantic features are less susceptible to perturbations and are not limited to forgery-specific artifacts, and thus generalize more strongly.
We introduce UniForensics, a novel deepfake detection framework that leverages a transformer-based video network, with a meta-functional face classification for enriched facial representation.
arXiv Detail & Related papers (2024-07-26T20:51:54Z)
- Adversarially Robust Deepfake Detection via Adversarial Feature Similarity Learning [0.0]
Deepfake technology has raised concerns about the authenticity of digital content, necessitating the development of effective detection methods.
Adversaries can manipulate deepfake videos with small, imperceptible perturbations that can deceive the detection models into producing incorrect outputs.
We introduce Adversarial Feature Similarity Learning (AFSL), which integrates three fundamental deep feature learning paradigms.
arXiv Detail & Related papers (2024-02-06T11:35:05Z)
- Masked Conditional Diffusion Model for Enhancing Deepfake Detection [20.018495944984355]
We propose a Masked Conditional Diffusion Model (MCDM) for enhancing deepfake detection.
It generates a variety of forged faces from a masked pristine one, encouraging the deepfake detection model to learn generic and robust representations.
arXiv Detail & Related papers (2024-02-01T12:06:55Z)
- Facial Forgery-based Deepfake Detection using Fine-Grained Features [7.378937711027777]
Facial forgery by deepfakes has caused major security risks and raised severe societal concerns.
We formulate deepfake detection as a fine-grained classification problem and propose a new fine-grained solution to it.
Our method is based on learning subtle and generalizable features by effectively suppressing background noise and learning discriminative features at various scales for deepfake detection.
arXiv Detail & Related papers (2023-10-10T21:30:05Z)
- CrossDF: Improving Cross-Domain Deepfake Detection with Deep Information Decomposition [53.860796916196634]
We propose a Deep Information Decomposition (DID) framework to enhance the performance of Cross-dataset Deepfake Detection (CrossDF).
Unlike most existing deepfake detection methods, our framework prioritizes high-level semantic features over specific visual artifacts.
It adaptively decomposes facial features into deepfake-related and irrelevant information, only using the intrinsic deepfake-related information for real/fake discrimination.
arXiv Detail & Related papers (2023-09-30T12:30:25Z)
- Real Face Foundation Representation Learning for Generalized Deepfake Detection [74.4691295738097]
The emergence of deepfake technologies has become a matter of social concern as they pose threats to individual privacy and public security.
It is almost impossible to collect sufficient representative fake faces, and it is hard for existing detectors to generalize to all types of manipulation.
We propose Real Face Foundation Representation Learning (RFFR), which aims to learn a general representation from large-scale real face datasets.
arXiv Detail & Related papers (2023-03-15T08:27:56Z)
- SeeABLE: Soft Discrepancies and Bounded Contrastive Learning for Exposing Deepfakes [7.553507857251396]
We propose a novel deepfake detector, called SeeABLE, that formalizes the detection problem as a (one-class) out-of-distribution detection task.
SeeABLE pushes perturbed faces towards predefined prototypes using a novel regression-based bounded contrastive loss.
We show that our model convincingly outperforms competing state-of-the-art detectors, while exhibiting highly encouraging generalization capabilities.
arXiv Detail & Related papers (2022-11-21T09:38:30Z)
- Self-supervised Transformer for Deepfake Detection [112.81127845409002]
Deepfake techniques in real-world scenarios require stronger generalization abilities of face forgery detectors.
Inspired by transfer learning, neural networks pre-trained on other large-scale face-related tasks may provide useful features for deepfake detection.
In this paper, we propose a self-supervised transformer based audio-visual contrastive learning method.
arXiv Detail & Related papers (2022-03-02T17:44:40Z)
- VideoForensicsHQ: Detecting High-quality Manipulated Face Videos [77.60295082172098]
We show how the performance of forgery detectors depends on the presence of artefacts that the human eye can see.
We introduce a new benchmark dataset for face video forgery detection, of unprecedented quality.
arXiv Detail & Related papers (2020-05-20T21:17:43Z)