Fighting Deepfake by Exposing the Convolutional Traces on Images
- URL: http://arxiv.org/abs/2008.04095v1
- Date: Fri, 7 Aug 2020 08:49:23 GMT
- Title: Fighting Deepfake by Exposing the Convolutional Traces on Images
- Authors: Luca Guarnera (1 and 2), Oliver Giudice (1), Sebastiano Battiato (1
and 2) ((1) University of Catania, (2) iCTLab s.r.l. - Spin-off of University
of Catania)
- Abstract summary: Mobile apps like FACEAPP make use of the most advanced Generative Adversarial Networks (GAN) to produce extreme transformations on human face photos.
This kind of media object has taken the name of Deepfake and raised a new challenge in the multimedia forensics field: the Deepfake detection challenge.
In this paper, a new approach aimed at extracting a Deepfake fingerprint from images is proposed.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Advances in Artificial Intelligence and Image Processing are changing the way
people interact with digital images and video. Widespread mobile apps like
FACEAPP make use of the most advanced Generative Adversarial Networks (GAN) to
produce extreme transformations on human face photos, such as gender swap,
aging, etc. The results are utterly realistic and extremely easy to exploit,
even for inexperienced users. This kind of media object has taken the name of
Deepfake and raised a new challenge in the multimedia forensics field: the
Deepfake detection challenge. Indeed, discriminating a Deepfake from a real
image can be difficult even for human eyes; recent works try to apply the same
technology used for generating images to discriminating them, with good
preliminary results but many limitations: the employed Convolutional Neural
Networks are not robust, prove to be context-specific, and tend to extract
semantics from images. In this paper, a new approach aimed at extracting a
Deepfake fingerprint from images is proposed. The method is based on the
Expectation-Maximization algorithm, trained to detect and extract a
fingerprint that represents the Convolutional Traces (CT) left by GANs during
image generation. The CT shows high discriminative power, achieving better
results than the state of the art in the Deepfake detection task and also
proving robust to different attacks. With an overall classification accuracy
of over 98% on Deepfakes from 10 different GAN architectures, not limited to
images of faces, the CT proves to be reliable and independent of image
semantics. Finally, tests carried out on Deepfakes generated by FACEAPP,
achieving 93% accuracy in the fake detection task, demonstrate the
effectiveness of the proposed technique in a real-case scenario.
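To make the idea concrete, below is a minimal sketch, under stated assumptions, of how such a convolutional trace might be estimated with Expectation-Maximization: each pixel is modelled either as a linear combination of its neighbours (the convolutional model) or as an outlier, and the estimated kernel weights act as the candidate fingerprint. The function name extract_ct, the 3x3 neighbourhood, the Gaussian/uniform mixture, and the iteration count are illustrative choices, not the authors' exact implementation.

    import numpy as np

    def extract_ct(gray, k=3, iters=20, sigma=5.0, prior=0.5):
        """Estimate local kernel weights (the candidate fingerprint) from a grayscale image."""
        gray = gray.astype(np.float64)
        h, w = gray.shape
        r = k // 2
        # Design matrix: every pixel's k*k neighbourhood, centre pixel excluded.
        cols = []
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                if dy == 0 and dx == 0:
                    continue
                cols.append(gray[r + dy:h - r + dy, r + dx:w - r + dx].ravel())
        A = np.stack(cols, axis=1)              # shape (N, k*k - 1)
        y = gray[r:h - r, r:w - r].ravel()      # shape (N,)
        alpha = np.zeros(A.shape[1])
        for _ in range(iters):
            # E-step: posterior probability that a pixel follows the convolutional model
            # (Gaussian residual) rather than the outlier model (uniform over 256 levels).
            res = y - A @ alpha
            p_model = prior * np.exp(-res ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))
            p_outlier = (1 - prior) / 256.0
            wgt = p_model / (p_model + p_outlier + 1e-12)
            # M-step: weighted least squares for the kernel, then update the noise scale.
            Aw = A * wgt[:, None]
            alpha = np.linalg.solve(A.T @ Aw + 1e-6 * np.eye(A.shape[1]), Aw.T @ y)
            sigma = np.sqrt((wgt * (y - A @ alpha) ** 2).sum() / (wgt.sum() + 1e-12))
        return alpha  # feature vector fed to a downstream real-vs-fake classifier

In this view, one such kernel vector per image (or per colour channel) would be fed to an ordinary classifier; the accuracies reported above refer to the authors' pipeline, not to this sketch.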
Related papers
- Semantic Contextualization of Face Forgery: A New Definition, Dataset, and Detection Method [77.65459419417533]
We put face forgery in a semantic context and define computational methods that alter semantic face attributes as sources of face forgery.
We construct a large face forgery image dataset, where each image is associated with a set of labels organized in a hierarchical graph.
We propose a semantics-oriented face forgery detection method that captures label relations and prioritizes the primary task.
arXiv Detail & Related papers (2024-05-14T10:24:19Z)
- DeepFidelity: Perceptual Forgery Fidelity Assessment for Deepfake Detection [67.3143177137102]
Deepfake detection refers to detecting artificially generated or edited faces in images or videos.
We propose a novel Deepfake detection framework named DeepFidelity to adaptively distinguish real and fake faces.
arXiv Detail & Related papers (2023-12-07T07:19:45Z)
- AntifakePrompt: Prompt-Tuned Vision-Language Models are Fake Image Detectors [24.78672820633581]
Deep generative models can create remarkably realistic fake images while raising concerns about misinformation and copyright infringement.
Deepfake detection techniques are developed to distinguish between real and fake images.
We propose a novel approach called AntifakePrompt, using Vision-Language Models and prompt tuning techniques.
arXiv Detail & Related papers (2023-10-26T14:23:45Z)
- Deepfake Detection of Occluded Images Using a Patch-based Approach [1.6114012813668928]
We present a deep learning approach using the entire face and face patches to distinguish real/fake images in the presence of obstruction.
For producing fake images, StyleGAN and StyleGAN2 are trained on FFHQ images, while StarGAN and PGGAN are trained on CelebA images.
The proposed approach reaches higher results in earlier epochs than other methods and improves on the SoTA results by 0.4%-7.9% on the different constructed datasets.
arXiv Detail & Related papers (2023-04-10T12:12:14Z)
- Real Face Foundation Representation Learning for Generalized Deepfake Detection [74.4691295738097]
The emergence of deepfake technologies has become a matter of social concern as they pose threats to individual privacy and public security.
It is almost impossible to collect sufficient representative fake faces, and it is hard for existing detectors to generalize to all types of manipulation.
We propose Real Face Foundation Representation Learning (RFFR), which aims to learn a general representation from large-scale real face datasets.
arXiv Detail & Related papers (2023-03-15T08:27:56Z)
- Deep Convolutional Pooling Transformer for Deepfake Detection [54.10864860009834]
We propose a deep convolutional Transformer to incorporate decisive image features both locally and globally.
Specifically, we apply convolutional pooling and re-attention to enrich the extracted features and enhance efficacy.
The proposed solution consistently outperforms several state-of-the-art baselines on both within- and cross-dataset experiments.
arXiv Detail & Related papers (2022-09-12T15:05:41Z)
- Beyond the Spectrum: Detecting Deepfakes via Re-Synthesis [69.09526348527203]
Deep generative models have led to highly realistic media, known as deepfakes, that are commonly indistinguishable from real to human eyes.
We propose a novel fake detection method designed to re-synthesize testing images and extract visual cues for detection.
We demonstrate the improved effectiveness, cross-GAN generalization, and robustness against perturbations of our approach in a variety of detection scenarios.
arXiv Detail & Related papers (2021-05-29T21:22:24Z)
- M2TR: Multi-modal Multi-scale Transformers for Deepfake Detection [74.19291916812921]
Forged images generated by Deepfake techniques pose a serious threat to the trustworthiness of digital information.
In this paper, we aim to capture the subtle manipulation artifacts at different scales for Deepfake detection.
We introduce a high-quality Deepfake dataset, SR-DF, which consists of 4,000 DeepFake videos generated by state-of-the-art face swapping and facial reenactment methods.
arXiv Detail & Related papers (2021-04-20T05:43:44Z)
- Fighting deepfakes by detecting GAN DCT anomalies [0.0]
State-of-the-art algorithms employ deep neural networks to detect fake contents.
A new fast detection method able to discriminate Deepfake images with high precision is proposed.
The method is innovative, exceeds the state-of-the-art and also gives many insights in terms of explainability.
arXiv Detail & Related papers (2021-01-24T19:45:11Z)
- Fake face detection via adaptive manipulation traces extraction network [9.892936175042939]
We propose an adaptive manipulation traces extraction network (AMTEN) to suppress image content and highlight manipulation traces.
AMTEN exploits an adaptive convolution layer to predict manipulation traces in the image, which are reused in subsequent layers to maximize manipulation artifacts (a minimal sketch of this residual-trace idea follows this list).
When detecting fake face images generated by various FIM techniques, AMTENnet achieves an average accuracy of up to 98.52%, which outperforms state-of-the-art works.
arXiv Detail & Related papers (2020-05-11T09:16:39Z)
- DeepFake Detection by Analyzing Convolutional Traces [0.0]
We focus on the analysis of Deepfakes of human faces with the objective of creating a new detection method.
The proposed technique, by means of an Expectation Maximization (EM) algorithm, extracts a set of local features specifically addressed to model the underlying convolutional generative process.
Results demonstrated the effectiveness of the technique in distinguishing the different architectures and the corresponding generation process.
arXiv Detail & Related papers (2020-04-22T09:02:55Z)
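As referenced in the AMTEN entry above, here is a minimal sketch of the residual-trace idea it describes: a learnable convolution predicts each pixel from its context, and the prediction residual, rather than the raw image, is what subsequent layers consume. The class name TraceExtractor, kernel size, and channel count are illustrative assumptions, not the original implementation.

    import torch
    import torch.nn as nn

    class TraceExtractor(nn.Module):
        """Suppress image content by subtracting a learned prediction of each pixel."""
        def __init__(self, channels: int = 3, kernel_size: int = 5):
            super().__init__()
            self.predict = nn.Conv2d(channels, channels, kernel_size,
                                     padding=kernel_size // 2, bias=False)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # The residual keeps high-frequency manipulation traces and drops most content.
            return x - self.predict(x)

    # Usage sketch: the traces, not the raw pixels, feed a downstream classifier.
    traces = TraceExtractor()(torch.rand(1, 3, 128, 128))
    print(traces.shape)  # torch.Size([1, 3, 128, 128])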
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and accepts no responsibility for any consequences of its use.