Identifying Invariant Texture Violation for Robust Deepfake Detection
- URL: http://arxiv.org/abs/2012.10580v1
- Date: Sat, 19 Dec 2020 03:02:15 GMT
- Title: Identifying Invariant Texture Violation for Robust Deepfake Detection
- Authors: Xinwei Sun, Botong Wu, Wei Chen
- Abstract summary: We propose the Invariant Texture Learning framework, which only accesses the published dataset with low visual quality.
Our method is based on the prior that the microscopic facial texture of the source face is inevitably violated by the texture transferred from the target person.
- Score: 17.306386179823576
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing deepfake detection methods have reported promising in-distribution
results by accessing published large-scale datasets. However, due to the
non-smooth synthesis methods used, the fake samples in these datasets may expose
obvious artifacts (e.g., stark visual contrast, non-smooth boundaries), on which
most of the frame-level detection methods above heavily rely. As these artifacts
do not appear in real-world media forgeries, such methods can suffer a large
performance degradation when applied to fake images that are close to reality.
To improve robustness on high-realism fake data, we propose the Invariant Texture
Learning (InTeLe) framework, which only accesses published datasets of low visual
quality. Our method is based on the prior that the microscopic facial texture of
the source face is inevitably violated by the texture transferred from the target
person; this violation can hence be regarded as an invariant characterization
shared among all fake images. To learn such an invariance for deepfake detection,
InTeLe introduces an auto-encoder framework with different decoders for pristine
and fake images, which are further appended with a shallow classifier in order to
separate out the obvious artifact effect. Equipped with such a separation, the
embedding extracted by the encoder captures the texture violation in fake images,
which is then fed to a classifier for the final pristine/fake prediction. As a
theoretical guarantee, we prove the identifiability of this invariant texture
violation, i.e., that it can be precisely inferred from observational data. The
effectiveness and utility of our method are demonstrated by promising
generalization from low-quality images with obvious artifacts to fake images with
high realism.
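To make the architecture concrete, here is a minimal PyTorch sketch of the arrangement the abstract describes: a shared encoder, separate decoders for pristine and fake images, and a shallow classifier on the embedding. All layer sizes, the loss, and the module names are illustrative assumptions, not the authors' exact design.

```python
# Minimal sketch of the InTeLe idea: shared encoder, per-class decoders to
# absorb the obvious artifact effect, and a shallow real/fake head on the
# embedding. Shapes and losses are illustrative assumptions.
import torch
import torch.nn as nn

class InTeLeSketch(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.encoder = nn.Sequential(      # 3x64x64 face crop -> embedding
            nn.Conv2d(3, 32, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU(),
            nn.Flatten(), nn.Linear(64 * 16 * 16, dim),
        )
        def make_decoder():                # one decoder per class
            return nn.Sequential(
                nn.Linear(dim, 64 * 16 * 16), nn.ReLU(),
                nn.Unflatten(1, (64, 16, 16)),
                nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),
                nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Sigmoid(),
            )
        self.decoder_real, self.decoder_fake = make_decoder(), make_decoder()
        self.classifier = nn.Linear(dim, 1)  # shallow pristine/fake head

    def forward(self, x, is_fake):
        z = self.encoder(x)
        recon = torch.where(is_fake.view(-1, 1, 1, 1).bool(),
                            self.decoder_fake(z), self.decoder_real(z))
        return recon, self.classifier(z)

model = InTeLeSketch()
x = torch.rand(4, 3, 64, 64)                 # stand-in face crops
y = torch.tensor([0., 1., 0., 1.])           # 0 = pristine, 1 = fake
recon, logit = model(x, y)
loss = nn.functional.mse_loss(recon, x) + \
       nn.functional.binary_cross_entropy_with_logits(logit.squeeze(1), y)
```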
Related papers
- On the Effectiveness of Dataset Alignment for Fake Image Detection [28.68129042301801]
A good detector should focus on the generative model's fingerprints while ignoring image properties such as semantic content, resolution, and file format.
In this work, we argue that in addition to these algorithmic choices, we also require a well-aligned dataset of real/fake images to train a robust detector.
For the family of LDMs, we propose a very simple way to achieve this: we reconstruct all real images using the LDM's autoencoder, without any denoising operation, and then train a model to separate the real images from their reconstructions.
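A minimal sketch of that recipe, assuming the diffusers library and a public Stable Diffusion VAE checkpoint; the paper's exact model and preprocessing may differ.

```python
# Hedged sketch: round-trip real images through an LDM autoencoder (no
# denoising) and label the reconstructions "fake". Checkpoint name and
# preprocessing are assumptions for illustration.
import torch
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse").eval()

@torch.no_grad()
def reconstruct(images):                  # images: (N, 3, H, W) in [0, 1]
    x = images * 2.0 - 1.0                # the VAE expects inputs in [-1, 1]
    z = vae.encode(x).latent_dist.mode()  # deterministic latent, no sampling
    return (vae.decode(z).sample.clamp(-1, 1) + 1.0) / 2.0

real = torch.rand(2, 3, 256, 256)         # stand-in for a batch of real photos
fake = reconstruct(real)                  # aligned "fakes": same content, VAE fingerprint
# A detector is then trained to separate `real` (label 0) from `fake` (label 1).
```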
arXiv Detail & Related papers (2024-10-15T17:58:07Z)
- FSBI: Deepfakes Detection with Frequency Enhanced Self-Blended Images [17.707379977847026]
This paper introduces a Frequency Enhanced Self-Blended Images (FSBI) approach for deepfake detection.
The proposed approach has been evaluated on FF++ and Celeb-DF datasets.
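The summary does not specify the frequency transform, so the following is only one plausible illustration of "frequency enhancement", using a simple FFT high-pass in NumPy; the paper's actual transform and parameters may differ.

```python
# Illustrative sketch only: emphasize high-frequency content, where blending
# artifacts tend to live, before feeding an image to a detector.
import numpy as np

def frequency_enhance(img, cutoff=0.1):
    """Boost high frequencies of a grayscale image in [0, 1]."""
    f = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    fy = np.fft.fftshift(np.fft.fftfreq(h))   # per-axis frequency grids
    fx = np.fft.fftshift(np.fft.fftfreq(w))
    radius = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)
    f[radius < cutoff] = 0                    # suppress low frequencies
    high = np.abs(np.fft.ifft2(np.fft.ifftshift(f)))
    return np.clip(img + high, 0.0, 1.0)      # blend the residual back in

img = np.random.rand(256, 256)                # stand-in for a self-blended face crop
enhanced = frequency_enhance(img)
```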
arXiv Detail & Related papers (2024-06-12T20:15:00Z)
- DeepFidelity: Perceptual Forgery Fidelity Assessment for Deepfake Detection [67.3143177137102]
Deepfake detection refers to detecting artificially generated or edited faces in images or videos.
We propose a novel Deepfake detection framework named DeepFidelity to adaptively distinguish real and fake faces.
arXiv Detail & Related papers (2023-12-07T07:19:45Z)
- Deepfake detection by exploiting surface anomalies: the SurFake approach [29.088218634944116]
This paper investigates how deepfake creation affects the characteristics that the whole scene had at acquisition time.
By analyzing the characteristics of the surfaces depicted in an image, it is possible to obtain a descriptor that can be used to train a CNN for deepfake detection.
arXiv Detail & Related papers (2023-10-31T16:54:14Z)
- CrossDF: Improving Cross-Domain Deepfake Detection with Deep Information Decomposition [53.860796916196634]
We propose a Deep Information Decomposition (DID) framework to enhance the performance of Cross-dataset Deepfake Detection (CrossDF).
Unlike most existing deepfake detection methods, our framework prioritizes high-level semantic features over specific visual artifacts.
It adaptively decomposes facial features into deepfake-related and irrelevant information, only using the intrinsic deepfake-related information for real/fake discrimination.
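A minimal PyTorch sketch of the decomposition idea: split a facial feature into a deepfake-related part used for prediction and an irrelevant part, pushed apart here by a simple cross-correlation penalty. All module names and the penalty are illustrative stand-ins for the paper's adaptive decomposition.

```python
# Sketch: decompose a backbone feature into two parts and classify only the
# deepfake-related one; a decorrelation term discourages information overlap.
import torch
import torch.nn as nn

class DIDSketch(nn.Module):
    def __init__(self, feat_dim=512, part_dim=128):
        super().__init__()
        self.to_fake = nn.Linear(feat_dim, part_dim)   # deepfake-related branch
        self.to_irrel = nn.Linear(feat_dim, part_dim)  # irrelevant/content branch
        self.head = nn.Linear(part_dim, 1)             # uses only z_fake

    def forward(self, feat):
        z_fake, z_irrel = self.to_fake(feat), self.to_irrel(feat)
        zf = (z_fake - z_fake.mean(0)) / (z_fake.std(0) + 1e-6)
        zi = (z_irrel - z_irrel.mean(0)) / (z_irrel.std(0) + 1e-6)
        decor = (zf.T @ zi / feat.shape[0]).pow(2).mean()  # decorrelation penalty
        return self.head(z_fake), decor

feat = torch.randn(8, 512)                # backbone features for a batch of faces
logit, decor_loss = DIDSketch()(feat)     # add decor_loss to the training objective
```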
arXiv Detail & Related papers (2023-09-30T12:30:25Z)
- Parents and Children: Distinguishing Multimodal DeepFakes from Natural Images [60.34381768479834]
Recent advancements in diffusion models have enabled the generation of realistic deepfakes from textual prompts in natural language.
We pioneer a systematic study on the detection of deepfakes generated by state-of-the-art diffusion models.
arXiv Detail & Related papers (2023-04-02T10:25:09Z)
- Detecting Deepfakes with Self-Blended Images [37.374772758057844]
We present novel synthetic training data called self-blended images (SBIs) to detect deepfakes.
SBIs are generated by blending pseudo source and target images from single pristine images.
We compare our approach with state-of-the-art methods on FF++, CDF, DFD, DFDC, DFDCP, and FFIW datasets.
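A toy sketch of self-blending from a single pristine image: make two slightly different copies and blend them with a soft face-shaped mask. The color jitter, shift, and elliptical mask below are simplified stand-ins for the paper's augmentation pipeline.

```python
# Sketch: synthesize a pseudo-fake from one pristine image by blending a
# jittered "source" copy into a shifted "target" copy with a soft mask.
import numpy as np

def self_blend(img, rng):
    """img: (H, W, 3) float array in [0, 1]; returns a pseudo-fake."""
    source = np.clip(img * rng.uniform(0.9, 1.1), 0, 1)        # color-jittered copy
    target = np.roll(img, shift=rng.integers(-2, 3), axis=1)   # slightly shifted copy
    h, w = img.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    mask = np.exp(-(((yy - h / 2) / (h / 3)) ** 2 +
                    ((xx - w / 2) / (w / 4)) ** 2))[..., None]  # soft ellipse
    return mask * source + (1 - mask) * target                  # blending boundary

rng = np.random.default_rng(0)
fake = self_blend(np.random.rand(256, 256, 3), rng)  # labeled "fake" for training
```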
arXiv Detail & Related papers (2022-04-18T15:44:35Z)
- Voice-Face Homogeneity Tells Deepfake [56.334968246631725]
Existing detection approaches focus on exploring the specific artifacts in deepfake videos.
We instead propose to perform deepfake detection from an unexplored voice-face matching view.
Our model obtains significantly improved performance as compared to other state-of-the-art competitors.
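A hedged sketch of the matching view: embed the audio track and the face track into a shared space and flag clips whose similarity is low. The encoders and threshold are hypothetical placeholders, not the paper's models.

```python
# Sketch: low voice-face homogeneity suggests the face was swapped.
import torch
import torch.nn.functional as F

def match_score(voice_emb, face_emb):
    """Cosine similarity between voice and face embeddings."""
    return F.cosine_similarity(voice_emb, face_emb, dim=-1)

voice_emb = torch.randn(4, 256)   # stand-in: output of a speech encoder
face_emb = torch.randn(4, 256)    # stand-in: output of a face encoder
is_fake = match_score(voice_emb, face_emb) < 0.5  # illustrative threshold
```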
arXiv Detail & Related papers (2022-03-04T09:08:50Z)
- Beyond the Spectrum: Detecting Deepfakes via Re-Synthesis [69.09526348527203]
Deep generative models have led to highly realistic media, known as deepfakes, that are often indistinguishable from real media to the human eye.
We propose a novel fake detection method that re-synthesizes testing images and extracts visual cues for detection.
We demonstrate the improved effectiveness, cross-GAN generalization, and robustness against perturbations of our approach in a variety of detection scenarios.
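The cue can be illustrated with a placeholder re-synthesis model and a residual-based classifier; the paper's actual re-synthesis stages (e.g., super-resolution or denoising models) differ from this minimal stand-in.

```python
# Sketch: real images re-synthesize well, so deepfakes leave larger residuals
# that a small classifier can pick up.
import torch
import torch.nn as nn

resynthesize = nn.Identity()            # placeholder for a re-synthesis network
detector = nn.Sequential(               # toy classifier over residual cues
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1),
)

x = torch.rand(2, 3, 128, 128)          # test images
residual = (x - resynthesize(x)).abs()  # re-synthesis error as the visual cue
logit = detector(residual)              # real/fake prediction from the residual
```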
arXiv Detail & Related papers (2021-05-29T21:22:24Z)
- What makes fake images detectable? Understanding properties that generalize [55.4211069143719]
Deep networks can still pick up on subtle artifacts in doctored images.
We seek to understand what properties of fake images make them detectable.
We show a technique to exaggerate these detectable properties.
arXiv Detail & Related papers (2020-08-24T17:50:28Z)
- FakePolisher: Making DeepFakes More Detection-Evasive by Shallow Reconstruction [30.59382916497875]
GAN-based image generation methods are still imperfect: their upsampling designs leave characteristic artifact patterns in synthesized images.
In this paper, we devise a simple yet powerful approach termed FakePolisher that performs shallow reconstruction of fake images through a learned linear dictionary.
A comprehensive evaluation on 3 state-of-the-art DeepFake detection methods and fake images generated by 16 popular GAN-based image generation techniques demonstrates the effectiveness of our technique.
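The shallow-reconstruction idea can be sketched briefly, with PCA standing in for the learned linear dictionary; the paper's dictionary learning and patch handling are more elaborate.

```python
# Sketch: project a fake image onto a linear basis learned from real data,
# which discards subspace-orthogonal GAN artifact patterns.
import numpy as np
from sklearn.decomposition import PCA

real = np.random.rand(500, 32 * 32)      # stand-in: flattened real patches
pca = PCA(n_components=64).fit(real)     # shallow linear "dictionary"

fake = np.random.rand(1, 32 * 32)        # stand-in: a GAN-generated patch
polished = pca.inverse_transform(pca.transform(fake))  # artifact-reduced copy
```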
arXiv Detail & Related papers (2020-06-13T01:48:15Z)