Explaining Deepfake Detection by Analysing Image Matching
- URL: http://arxiv.org/abs/2207.09679v1
- Date: Wed, 20 Jul 2022 06:23:11 GMT
- Title: Explaining Deepfake Detection by Analysing Image Matching
- Authors: Shichao Dong, Jin Wang, Jiajun Liang, Haoqiang Fan and Renhe Ji
- Abstract summary: This paper aims to interpret how deepfake detection models learn artifact features of images when just supervised by binary labels.
Deepfake detection models implicitly learn artifact-relevant visual concepts through FST-Matching (the matching of fake, source, and target images).
We propose the FST-Matching Deepfake Detection Model to boost the performance of forgery detection on compressed videos.
- Score: 13.251308261180805
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper aims to interpret how deepfake detection models learn artifact
features of images when just supervised by binary labels. To this end, three
hypotheses from the perspective of image matching are proposed as follows. 1.
Deepfake detection models distinguish real from fake images based on visual concepts
that are neither source-relevant nor target-relevant; that is, such
visual concepts are artifact-relevant. 2. Beyond the supervision of binary
labels, deepfake detection models implicitly learn artifact-relevant visual
concepts through FST-Matching (i.e., the matching of fake, source, and target
images) in the training set. 3. Artifact-relevant visual concepts learned
implicitly through FST-Matching on the raw training set are vulnerable to video
compression. In experiments, the above hypotheses are verified across various
DNNs. Furthermore, based on this understanding, we propose the FST-Matching
Deepfake Detection Model to boost the performance of forgery detection on
compressed videos. Experimental results show that our method achieves strong
performance, especially on highly compressed (e.g., c40) videos.
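To make the FST-Matching hypotheses concrete, here is a minimal, hypothetical PyTorch-style sketch (not the authors' released model): a shared encoder's features are split into source-relevant, target-relevant, and artifact-relevant parts; matching fake/source/target triplets supervise the first two slices, while only the artifact slice feeds the real/fake classifier. Every module, dimension, and loss term below is an assumption for illustration.

```python
# Minimal sketch (assumptions only) of the FST-Matching idea: split features
# into source-, target- and artifact-relevant parts; fake/source/target
# triplets supervise the first two; only the artifact part drives real/fake.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FSTMatchingSketch(nn.Module):
    def __init__(self, feat_dim: int = 128):
        super().__init__()
        # Placeholder backbone; any image encoder could be used here.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, 3 * feat_dim),
        )
        self.feat_dim = feat_dim
        # Binary real/fake head that sees only the artifact-relevant slice.
        self.cls_head = nn.Linear(feat_dim, 2)

    def split(self, x):
        # Split the embedding into source-, target- and artifact-relevant parts.
        f = self.backbone(x)
        return torch.split(f, self.feat_dim, dim=1)

    def forward(self, fake, source, target, labels):
        f_src, f_tgt, f_art = self.split(fake)
        s_src, _, _ = self.split(source)
        _, t_tgt, _ = self.split(target)

        # Hypothesis 2 (FST-Matching): the fake image's source/target slices
        # should align with the matching source/target images' slices.
        match_loss = (1 - F.cosine_similarity(f_src, s_src, dim=1)).mean() \
                   + (1 - F.cosine_similarity(f_tgt, t_tgt, dim=1)).mean()

        # Hypothesis 1: real/fake is decided from artifact-relevant concepts only.
        cls_loss = F.cross_entropy(self.cls_head(f_art), labels)
        return cls_loss + match_loss


if __name__ == "__main__":
    model = FSTMatchingSketch()
    fake, source, target = (torch.randn(4, 3, 64, 64) for _ in range(3))
    labels = torch.ones(4, dtype=torch.long)  # toy batch of fakes
    loss = model(fake, source, target, labels)
    loss.backward()
```

The split-and-match structure mirrors hypotheses 1 and 2 above; the architecture and losses used in the paper may differ.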
Related papers
- UniForensics: Face Forgery Detection via General Facial Representation [60.5421627990707]
High-level semantic features are less susceptible to perturbations and are not limited to forgery-specific artifacts, and thus generalize better.
We introduce UniForensics, a novel deepfake detection framework that leverages a transformer-based video network, with a meta-functional face classification for enriched facial representation.
arXiv Detail & Related papers (2024-07-26T20:51:54Z)
- FSBI: Deepfakes Detection with Frequency Enhanced Self-Blended Images [17.707379977847026]
This paper introduces a Frequency Enhanced Self-Blended Images approach for deepfakes detection.
The proposed approach has been evaluated on FF++ and Celeb-DF datasets.
arXiv Detail & Related papers (2024-06-12T20:15:00Z) - AntifakePrompt: Prompt-Tuned Vision-Language Models are Fake Image Detectors [24.78672820633581]
Deep generative models can create remarkably realistic fake images, raising concerns about misinformation and copyright infringement.
Deepfake detection techniques have been developed to distinguish real images from fake ones.
We propose a novel approach called AntifakePrompt, using Vision-Language Models and prompt tuning techniques.
arXiv Detail & Related papers (2023-10-26T14:23:45Z) - Parents and Children: Distinguishing Multimodal DeepFakes from Natural Images [60.34381768479834]
Recent advancements in diffusion models have enabled the generation of realistic deepfakes from textual prompts in natural language.
We pioneer a systematic study of detecting deepfakes generated by state-of-the-art diffusion models.
arXiv Detail & Related papers (2023-04-02T10:25:09Z) - Unleashing Text-to-Image Diffusion Models for Visual Perception [84.41514649568094]
VPD (Visual Perception with a pre-trained diffusion model) is a new framework that exploits the semantic information of a pre-trained text-to-image diffusion model in visual perception tasks.
We show that the proposed VPD can be quickly adapted to downstream visual perception tasks.
arXiv Detail & Related papers (2023-03-03T18:59:47Z) - Voice-Face Homogeneity Tells Deepfake [56.334968246631725]
Existing detection approaches focus on exploring specific artifacts in deepfake videos.
We propose to perform deepfake detection from a previously unexplored voice-face matching view; a minimal sketch of this idea appears after this list.
Our model achieves significantly improved performance compared with other state-of-the-art competitors.
arXiv Detail & Related papers (2022-03-04T09:08:50Z) - Video Transformer for Deepfake Detection with Incremental Learning [11.586926513803077]
Face forgery by deepfakes is widespread on the internet, raising serious societal concerns.
We propose a novel video transformer with incremental learning for detecting deepfake videos.
arXiv Detail & Related papers (2021-08-11T16:22:56Z) - M2TR: Multi-modal Multi-scale Transformers for Deepfake Detection [74.19291916812921]
Forged images generated by Deepfake techniques pose a serious threat to the trustworthiness of digital information.
In this paper, we aim to capture the subtle manipulation artifacts at different scales for Deepfake detection.
We introduce a high-quality Deepfake dataset, SR-DF, which consists of 4,000 DeepFake videos generated by state-of-the-art face swapping and facial reenactment methods.
arXiv Detail & Related papers (2021-04-20T05:43:44Z) - Identifying Invariant Texture Violation for Robust Deepfake Detection [17.306386179823576]
We propose the Invariant Texture Learning framework, which only accesses the published dataset with low visual quality.
Our method is based on the prior that the microscopic facial texture of the source face is inevitably violated by the texture transferred from the target person.
arXiv Detail & Related papers (2020-12-19T03:02:15Z) - BBAND Index: A No-Reference Banding Artifact Predictor [55.42929350861115]
Banding artifact, or false contouring, is a common video compression impairment.
We propose a new distortion-specific no-reference video quality model for predicting banding artifacts, called the Blind BANding Detector (BBAND index).
arXiv Detail & Related papers (2020-02-27T03:05:26Z)
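As referenced in the voice-face matching entry above, here is a minimal, hypothetical sketch of that idea: embed the audio track and the face crop into a shared identity space and flag a clip as fake when the two embeddings disagree. The encoders, input features, and threshold below are assumptions for illustration, not the released implementation.

```python
# Minimal sketch (assumptions only) of voice-face matching for deepfake
# detection: project voice and face features into a shared identity space
# and score a clip by their cosine similarity.
import torch
import torch.nn as nn
import torch.nn.functional as F


class VoiceFaceMatcher(nn.Module):
    def __init__(self, dim: int = 256):
        super().__init__()
        # Placeholder encoders; real systems would use pretrained audio/face nets.
        self.voice_enc = nn.Sequential(nn.Linear(40, 128), nn.ReLU(), nn.Linear(128, dim))
        self.face_enc = nn.Sequential(nn.Linear(512, 128), nn.ReLU(), nn.Linear(128, dim))

    def forward(self, voice_feat, face_feat):
        v = F.normalize(self.voice_enc(voice_feat), dim=1)
        f = F.normalize(self.face_enc(face_feat), dim=1)
        # High voice-face similarity suggests a genuine pairing; low similarity
        # suggests the face identity no longer matches the voice (a swap).
        return F.cosine_similarity(v, f, dim=1)


if __name__ == "__main__":
    matcher = VoiceFaceMatcher()
    voice = torch.randn(2, 40)    # e.g. pooled audio statistics (assumed input)
    face = torch.randn(2, 512)    # e.g. pooled face-recognition features (assumed)
    score = matcher(voice, face)
    is_fake = score < 0.5         # illustrative threshold, not from the paper
    print(score, is_fake)
```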