Preliminary Forensics Analysis of DeepFake Images
- URL: http://arxiv.org/abs/2004.12626v5
- Date: Tue, 4 Aug 2020 09:27:14 GMT
- Title: Preliminary Forensics Analysis of DeepFake Images
- Authors: Luca Guarnera (1 and 2), Oliver Giudice (1), Cristina Nastasi (1),
Sebastiano Battiato (1 and 2) ((1) University of Catania, (2) iCTLab s.r.l. -
Spin-off of University of Catania)
- Abstract summary: DeepFake refers to the automatic replacement of a person's face in images and videos by exploiting deep-learning algorithms.
This paper presents a brief overview of technologies able to produce DeepFake images of faces.
A forensic analysis of those images with standard methods is presented.
A preliminary idea on how to fight DeepFake images of faces is also presented.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: One of the most terrifying phenomena nowadays is the DeepFake: the
possibility to automatically replace a person's face in images and videos by
exploiting deep-learning algorithms. This paper presents a brief overview of
technologies able to produce DeepFake images of faces. A forensic analysis of
those images with standard methods is presented: not surprisingly,
state-of-the-art techniques are not fully able to detect the fakeness. To
address this, a preliminary idea on how to fight DeepFake images of faces is
presented, based on analysing anomalies in the frequency domain.
Related papers
- Knowledge-Guided Prompt Learning for Deepfake Facial Image Detection [54.26588902144298]
We propose a knowledge-guided prompt learning method for deepfake facial image detection.
Specifically, we retrieve forgery-related prompts from large language models as expert knowledge to guide the optimization of learnable prompts.
Our proposed approach notably outperforms state-of-the-art methods.
arXiv Detail & Related papers (2025-01-01T02:18:18Z) - Hiding Faces in Plain Sight: Defending DeepFakes by Disrupting Face Detection [56.289631511616975]
This paper investigates the feasibility of a proactive DeepFake defense framework, FacePoison, to prevent individuals from becoming victims of DeepFake videos.
Based on FacePoison, we introduce VideoFacePoison, a strategy that propagates FacePoison across video frames rather than applying it to each frame individually.
Our method is validated on five face detectors, and extensive experiments against eleven different DeepFake models demonstrate the effectiveness of disrupting face detectors to hinder DeepFake generation.
arXiv Detail & Related papers (2024-12-02T04:17:48Z) - Deepfake detection in videos with multiple faces using geometric-fakeness features [79.16635054977068]
Deepfakes of victims or public figures can be used by fraudsters for blackmail, extortion and financial fraud.
In our research we propose geometric-fakeness features (GFF) that characterize the dynamic degree of face presence in a video.
We employ our approach to analyze videos with multiple faces that are simultaneously present in a video.
arXiv Detail & Related papers (2024-10-10T13:10:34Z) - Comparative Analysis of Deep-Fake Algorithms [0.0]
Deepfakes, also known as deep learning-based fake videos, have become a major concern in recent years.
These deepfake videos can be used for malicious purposes such as spreading misinformation, impersonating individuals, and creating fake news.
Deepfake detection technologies use various approaches such as facial recognition, motion analysis, and audio-visual synchronization.
arXiv Detail & Related papers (2023-09-06T18:17:47Z) - A Survey of Deep Fake Detection for Trial Courts [2.320417845168326]
DeepFake algorithms can create fake images and videos that humans cannot distinguish from authentic ones.
It has become essential to detect fake videos to avoid spreading false information.
This paper presents a survey of methods used to detect DeepFakes and datasets available for detecting DeepFakes.
arXiv Detail & Related papers (2022-05-31T13:50:25Z) - Restricted Black-box Adversarial Attack Against DeepFake Face Swapping [70.82017781235535]
We introduce a practical adversarial attack that does not require any queries to the facial image forgery model.
Our method is built on a substitute model pursuing face reconstruction and then transfers adversarial examples from the substitute model directly to inaccessible black-box DeepFake models.
arXiv Detail & Related papers (2022-04-26T14:36:06Z) - DeepFake Detection with Inconsistent Head Poses: Reproducibility and
Analysis [0.0]
We analyze an existing DeepFake detection technique based on head pose estimation.
Our results correct the current literature's perception of state of the art performance for DeepFake detection.
arXiv Detail & Related papers (2021-08-28T22:56:09Z) - CMUA-Watermark: A Cross-Model Universal Adversarial Watermark for
Combating Deepfakes [74.18502861399591]
Malicious application of deepfakes (i.e., technologies that can generate target faces or face attributes) has posed a huge threat to our society.
We propose a universal adversarial attack method on deepfake models to generate a Cross-Model Universal Adversarial Watermark (CMUA-Watermark).
Experimental results demonstrate that the proposed CMUA-Watermark can effectively distort the fake facial images generated by deepfake models.
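The watermark concept can be sketched as follows: a single precomputed perturbation, clipped to a small L-infinity budget, is added to any face image before sharing. The function name, the budget value, and the [0, 1] pixel convention are assumptions for illustration; the optimization that produces the CMUA-Watermark itself is not shown.

```python
import numpy as np

def apply_watermark(image, watermark, epsilon=8 / 255):
    """Add a universal adversarial perturbation to an image,
    keeping the perturbation within an L-infinity budget and
    the result in the valid pixel range [0, 1]."""
    delta = np.clip(watermark, -epsilon, epsilon)  # enforce the budget
    return np.clip(image + delta, 0.0, 1.0)       # stay a valid image
```

Because the same perturbation is reused for every image and every target model, it can be applied cheaply at upload time without per-image optimization.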
arXiv Detail & Related papers (2021-05-23T07:28:36Z) - Identity-Driven DeepFake Detection [91.0504621868628]
Identity-Driven DeepFake Detection takes as input the suspect image/video as well as the target identity information.
We output a decision on whether the identity in the suspect image/video is the same as the target identity.
We present a simple identity-based detection algorithm called the OuterFace, which may serve as a baseline for further research.
arXiv Detail & Related papers (2020-12-07T18:59:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.