Deepfake Detection of Occluded Images Using a Patch-based Approach
- URL: http://arxiv.org/abs/2304.04537v1
- Date: Mon, 10 Apr 2023 12:12:14 GMT
- Title: Deepfake Detection of Occluded Images Using a Patch-based Approach
- Authors: Mahsa Soleimani, Ali Nazari and Mohsen Ebrahimi Moghaddam
- Abstract summary: We present a deep learning approach using the entire face and face patches to distinguish real from fake images in the presence of occlusion.
For producing fake images, StyleGAN and StyleGAN2 are trained on FFHQ images, while StarGAN and PGGAN are trained on CelebA images.
The proposed approach achieves higher results in early epochs than other methods and improves on state-of-the-art results by 0.4%-7.9% on the different constructed datasets.
- Score: 1.6114012813668928
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: DeepFake involves the use of deep learning and artificial intelligence
techniques to produce or alter video and image content, typically by means of GANs.
It can be misused, leading to fictitious news, ethical and financial crimes, and
degraded performance of facial recognition systems. Thus, detecting whether images
are real or fake is important, especially for authenticating the originality of
people's images or videos. One of the most important challenges in this area is
occlusion, which decreases detection precision. In this study, we present a deep
learning approach that uses the entire face and face patches to distinguish real
from fake images in the presence of occlusion, with a three-path decision: the
first path reasons over the entire face, the second decides based on the
concatenation of the feature vectors of the face patches, and the third takes a
majority vote over decisions based on these patch features. To test our approach,
new datasets containing real and fake images are created. To produce the fake
images, StyleGAN and StyleGAN2 are trained on FFHQ images, while StarGAN and PGGAN
are trained on CelebA images; the CelebA and FFHQ datasets serve as the real
images. The proposed approach achieves higher accuracy in early epochs than other
methods and improves on state-of-the-art results by 0.4%-7.9% on the different
constructed datasets. Experimental results also show that weighting the patches
may improve accuracy.
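The three-path decision described in the abstract can be pictured with a minimal PyTorch sketch. The shared encoder, the patch count, the feature dimension, and the way the three outputs are returned are illustrative assumptions, not the architecture reported in the paper.

```python
# Minimal sketch of a three-path real/fake decision over the whole face and face
# patches. All layer sizes and the shared encoder are assumptions for illustration.
import torch
import torch.nn as nn

class PatchDeepfakeDetector(nn.Module):
    def __init__(self, feat_dim: int = 128, n_patches: int = 4):
        super().__init__()
        self.n_patches = n_patches
        # Shared CNN encoder applied to the entire face and to each patch (assumed).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        self.face_head = nn.Linear(feat_dim, 2)                # path 1: entire face
        self.concat_head = nn.Linear(feat_dim * n_patches, 2)  # path 2: concatenated patch features
        self.patch_head = nn.Linear(feat_dim, 2)               # path 3: per-patch decision for voting

    def forward(self, face: torch.Tensor, patches: torch.Tensor):
        # face: (B, 3, H, W); patches: (B, n_patches, 3, h, w)
        b = face.size(0)
        face_logits = self.face_head(self.encoder(face))                        # path 1

        patch_feats = self.encoder(patches.flatten(0, 1)).view(b, self.n_patches, -1)
        concat_logits = self.concat_head(patch_feats.flatten(1))                # path 2

        patch_logits = self.patch_head(patch_feats)                             # (B, n_patches, 2)
        votes = patch_logits.argmax(dim=-1)                                     # 0 = real, 1 = fake
        majority_fake = votes.float().mean(dim=1) > 0.5                         # path 3: majority vote
        return face_logits, concat_logits, majority_fake

# Example usage with random tensors standing in for a face crop and four patches.
if __name__ == "__main__":
    model = PatchDeepfakeDetector()
    face = torch.randn(2, 3, 128, 128)
    patches = torch.randn(2, 4, 3, 64, 64)
    face_logits, concat_logits, majority_fake = model(face, patches)
```

In this sketch the vote treats all patches equally; per the abstract, replacing the uniform vote with learned patch weights may improve accuracy.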
Related papers
- Semantic Contextualization of Face Forgery: A New Definition, Dataset, and Detection Method [77.65459419417533]
We place face forgery in a semantic context and define computational methods that alter semantic face attributes as sources of face forgery.
We construct a large face forgery image dataset, where each image is associated with a set of labels organized in a hierarchical graph.
We propose a semantics-oriented face forgery detection method that captures label relations and prioritizes the primary task.
arXiv Detail & Related papers (2024-05-14T10:24:19Z)
- Detecting Generated Images by Real Images Only [64.12501227493765]
Existing generated image detection methods detect visual artifacts in generated images or learn discriminative features from both real and generated images through massive training.
This paper approaches the generated image detection problem from a new perspective: start from real images.
The method learns the commonality of real images and maps them to a dense subspace in feature space, so that generated images, regardless of their generative model, are projected outside that subspace (a minimal sketch of this idea appears after the related-papers list).
arXiv Detail & Related papers (2023-11-02T03:09:37Z)
- Robustness and Generalizability of Deepfake Detection: A Study with Diffusion Models [35.188364409869465]
We present an investigation into how deepfakes are produced and how they can be identified.
The cornerstone of our research is a rich collection of artificial celebrity faces, titled DeepFakeFace.
This data serves as a robust foundation to train and test algorithms designed to spot deepfakes.
arXiv Detail & Related papers (2023-09-05T13:22:41Z)
- Deep Convolutional Pooling Transformer for Deepfake Detection [54.10864860009834]
We propose a deep convolutional Transformer to incorporate decisive image features both locally and globally.
Specifically, we apply convolutional pooling and re-attention to enrich the extracted features and enhance efficacy.
The proposed solution consistently outperforms several state-of-the-art baselines on both within- and cross-dataset experiments.
arXiv Detail & Related papers (2022-09-12T15:05:41Z)
- DA-FDFtNet: Dual Attention Fake Detection Fine-tuning Network to Detect Various AI-Generated Fake Images [21.030153777110026]
It has become much easier to create fake images such as "Deepfakes".
Recent research has introduced few-shot learning, which uses a small amount of training data to produce fake images and videos more effectively.
In this work, we propose the Dual Attention Fake Detection Fine-tuning Network (DA-FDFtNet) to detect manipulated fake face images.
arXiv Detail & Related papers (2021-12-22T16:25:24Z)
- Robust Face-Swap Detection Based on 3D Facial Shape Information [59.32489266682952]
Face-swap images and videos are increasingly used by malicious attackers to discredit key figures.
Previous pixel-level artifact-based detection techniques focus on unclear patterns while ignoring available semantic clues.
We propose a biometric-information-based method that fully exploits appearance and shape features for face-swap detection of key figures.
arXiv Detail & Related papers (2021-04-28T09:35:48Z)
- Identifying Invariant Texture Violation for Robust Deepfake Detection [17.306386179823576]
We propose the Invariant Texture Learning framework, which only accesses the published dataset with low visual quality.
Our method is based on the prior that the microscopic facial texture of the source face is inevitably violated by the texture transferred from the target person.
arXiv Detail & Related papers (2020-12-19T03:02:15Z)
- What makes fake images detectable? Understanding properties that generalize [55.4211069143719]
Deep networks can still pick up on subtle artifacts in doctored images.
We seek to understand what properties of fake images make them detectable.
We show a technique to exaggerate these detectable properties.
arXiv Detail & Related papers (2020-08-24T17:50:28Z)
- Fighting Deepfake by Exposing the Convolutional Traces on Images [0.0]
Mobile apps like FACEAPP make use of the most advanced Generative Adversarial Networks (GANs) to produce extreme transformations of human face photos.
This kind of media object has taken the name Deepfake and raises a new challenge in the multimedia forensics field: the Deepfake detection challenge.
In this paper, a new approach aimed at extracting a Deepfake fingerprint from images is proposed.
arXiv Detail & Related papers (2020-08-07T08:49:23Z)
- One-Shot GAN Generated Fake Face Detection [3.3707422585608953]
We propose a universal One-Shot GAN generated fake face detection method.
The proposed method is based on extracting out-of-context objects from faces via scene understanding models.
Our experiments show that we can discriminate fake faces from real ones in terms of out-of-context features.
arXiv Detail & Related papers (2020-03-27T05:51:14Z)
- Detecting Face2Face Facial Reenactment in Videos [76.9573023955201]
This research proposes a learning-based algorithm for detecting reenactment-based alterations.
The proposed algorithm uses a multi-stream network that learns regional artifacts and provides a robust performance at various compression levels.
The results show state-of-the-art classification accuracy of 99.96%, 99.10%, and 91.20% for no, easy, and hard compression factors, respectively.
arXiv Detail & Related papers (2020-01-21T11:03:50Z)
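For the "Detecting Generated Images by Real Images Only" entry above, a minimal sketch of the real-images-only idea is given below: fit a compact subspace to features extracted from real images only, and flag an image as generated when its features reconstruct poorly from that subspace. The PCA-style subspace, the reconstruction-error score, and the 99th-percentile threshold are assumptions made for illustration, not the mapping proposed in that paper; the code assumes features have already been extracted by some fixed backbone.

```python
# Hedged sketch: model real-image features only, flag outliers as generated.
import numpy as np

def fit_real_subspace(real_feats: np.ndarray, var_keep: float = 0.95):
    """Fit a dense linear subspace to features of real images only (PCA via SVD)."""
    mean = real_feats.mean(axis=0)
    centered = real_feats - mean
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    var_ratio = (s ** 2) / (s ** 2).sum()
    k = int(np.searchsorted(np.cumsum(var_ratio), var_keep)) + 1
    basis = vt[:k]                                     # top-k principal directions
    # Reconstruction errors on the real set define a decision threshold (assumed 99th percentile).
    recon = (centered @ basis.T) @ basis
    errors = np.linalg.norm(centered - recon, axis=1)
    threshold = float(np.percentile(errors, 99))
    return mean, basis, threshold

def looks_generated(feats: np.ndarray, mean, basis, threshold) -> np.ndarray:
    """Flag features that lie far outside the subspace learned from real images."""
    centered = feats - mean
    recon = (centered @ basis.T) @ basis
    errors = np.linalg.norm(centered - recon, axis=1)
    return errors > threshold
```

Because the model is fit on real images alone, the detector does not depend on which generative model produced the fakes, which is the appeal of the real-images-only perspective.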