Preliminary Forensics Analysis of DeepFake Images
- URL: http://arxiv.org/abs/2004.12626v5
- Date: Tue, 4 Aug 2020 09:27:14 GMT
- Title: Preliminary Forensics Analysis of DeepFake Images
- Authors: Luca Guarnera (1 and 2), Oliver Giudice (1), Cristina Nastasi (1),
Sebastiano Battiato (1 and 2) ((1) University of Catania, (2) iCTLab s.r.l. -
Spin-off of University of Catania)
- Abstract summary: DeepFake denotes the possibility of automatically replacing a person's face in images and videos by exploiting algorithms based on deep learning.
This paper presents a brief overview of technologies able to produce DeepFake images of faces, a forensics analysis of those images with standard methods, and a preliminary idea on how to fight DeepFake images of faces.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: One of the most terrifying phenomena nowadays is the DeepFake: the
possibility of automatically replacing a person's face in images and videos by
exploiting algorithms based on deep learning. This paper presents a brief
overview of technologies able to produce DeepFake images of faces, followed by
a forensics analysis of those images with standard methods: not surprisingly,
state-of-the-art techniques are not fully able to detect the fakeness. To
address this, a preliminary idea on how to fight DeepFake images of faces is
presented, based on analysing anomalies in the frequency domain.
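The frequency-domain idea can be illustrated with a minimal sketch (the file names, image size, and top-quartile band below are illustrative assumptions, not the authors' implementation): compute a centered 2D FFT of a face image and compare azimuthally averaged power spectra, where GAN-generated faces often show abnormal high-frequency energy.

```python
import numpy as np
from PIL import Image

def azimuthal_power_spectrum(path, size=256):
    """Return the 1D azimuthally averaged power spectrum of a grayscale
    face image, ordered from low to high spatial frequency."""
    img = np.asarray(Image.open(path).convert("L").resize((size, size)),
                     dtype=np.float64)
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    # Radial average around the spectrum center.
    y, x = np.indices(spectrum.shape)
    r = np.hypot(y - size // 2, x - size // 2).astype(int)
    return (np.bincount(r.ravel(), weights=spectrum.ravel())
            / np.maximum(np.bincount(r.ravel()), 1))

# Placeholder paths; GAN upsampling tends to inflate the highest band.
real = azimuthal_power_spectrum("real_face.png")
fake = azimuthal_power_spectrum("suspect_face.png")
band = slice(3 * len(real) // 4, len(real))
print("high-frequency energy ratio:", fake[band].sum() / real[band].sum())
```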
Related papers
- Deepfake detection in videos with multiple faces using geometric-fakeness features [79.16635054977068]
Deepfakes of victims or public figures can be used by fraudsters for blackmail, extortion and financial fraud.
In our research we propose to use geometric-fakeness features (GFF) that characterize the dynamic degree of a face's presence in a video.
We employ our approach to analyze videos in which multiple faces are simultaneously present.
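As a loose sketch of the multi-face setting only (the IoU threshold and greedy matching are assumptions; the GFF computation itself is not reproduced here), per-frame face boxes can first be grouped into per-face tracks:

```python
def track_faces(frame_boxes, iou_thresh=0.5):
    """Group per-frame face boxes (x1, y1, x2, y2) into per-face tracks by
    greedily matching each box to the track with the best last-box IoU.
    A real tracker would also forbid two same-frame boxes per track."""
    def iou(a, b):
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
        area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
        return inter / (area(a) + area(b) - inter + 1e-9)

    tracks = []  # each track is a list of (frame_index, box)
    for t, boxes in enumerate(frame_boxes):
        for box in boxes:
            best = max(tracks, key=lambda tr: iou(tr[-1][1], box), default=None)
            if best is not None and iou(best[-1][1], box) >= iou_thresh:
                best.append((t, box))
            else:
                tracks.append([(t, box)])
    return tracks

frames = [[(10, 10, 50, 50), (100, 10, 140, 50)],
          [(12, 11, 52, 51), (101, 12, 141, 52)]]
print(len(track_faces(frames)))  # 2: one track per face
```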
arXiv Detail & Related papers (2024-10-10T13:10:34Z)
- Robust Sequential DeepFake Detection [46.493498963150294]
We propose a novel research problem called Detecting Sequential DeepFake Manipulation (Seq-DeepFake).
Unlike the existing deepfake detection task, which only demands a binary label prediction, Seq-DeepFake requires correctly predicting a sequential vector of facial manipulation operations.
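A minimal sketch of that output format in PyTorch, assuming a hypothetical operation vocabulary and a fixed maximum sequence length (this is not the authors' architecture):

```python
import torch
import torch.nn as nn

OPS = ["no-op", "swap-eyes", "swap-nose", "swap-mouth", "retouch-skin"]  # assumed
MAX_STEPS = 4  # assumed maximum manipulation sequence length

class SeqManipulationHead(nn.Module):
    """Maps backbone features to one operation prediction per step,
    i.e. a sequential label instead of a single real/fake bit."""
    def __init__(self, feat_dim=512):
        super().__init__()
        self.head = nn.Linear(feat_dim, MAX_STEPS * len(OPS))

    def forward(self, feats):
        return self.head(feats).view(-1, MAX_STEPS, len(OPS))

feats = torch.randn(2, 512)                 # stand-in for image features
pred = SeqManipulationHead()(feats).argmax(-1)
print(pred.shape)  # torch.Size([2, 4]): one operation id per step
```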
arXiv Detail & Related papers (2023-09-26T15:01:43Z)
- Comparative Analysis of Deep-Fake Algorithms [0.0]
Deepfakes, also known as deep learning-based fake videos, have become a major concern in recent years.
These deepfake videos can be used for malicious purposes such as spreading misinformation, impersonating individuals, and creating fake news.
Deepfake detection technologies use various approaches such as facial recognition, motion analysis, and audio-visual synchronization.
arXiv Detail & Related papers (2023-09-06T18:17:47Z)
- A Survey of Deep Fake Detection for Trial Courts [2.320417845168326]
DeepFake algorithms can create fake images and videos that humans cannot distinguish from authentic ones.
It has become essential to detect fake videos to avoid the spread of false information.
This paper presents a survey of methods used to detect DeepFakes and datasets available for detecting DeepFakes.
arXiv Detail & Related papers (2022-05-31T13:50:25Z)
- Restricted Black-box Adversarial Attack Against DeepFake Face Swapping [70.82017781235535]
We introduce a practical adversarial attack that does not require any queries to the facial image forgery model.
Our method is built on a substitute model trained for face reconstruction; adversarial examples crafted on the substitute transfer directly to inaccessible black-box DeepFake models.
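A rough sketch of the transfer step, using one-step FGSM on a stand-in substitute (the epsilon, the reconstruction loss, and the toy model are assumptions):

```python
import torch

def fgsm_on_substitute(substitute, image, epsilon=4 / 255):
    """Craft a perturbation on a white-box substitute so it transfers,
    query-free, to an unseen black-box face-swap model."""
    image = image.clone().requires_grad_(True)
    recon = substitute(image)
    # Maximize the substitute's face-reconstruction error.
    loss = torch.nn.functional.mse_loss(recon, image.detach())
    loss.backward()
    return (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

substitute = torch.nn.Conv2d(3, 3, 3, padding=1)   # toy stand-in model
adv = fgsm_on_substitute(substitute, torch.rand(1, 3, 128, 128))
```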
arXiv Detail & Related papers (2022-04-26T14:36:06Z)
- DeepFake Detection with Inconsistent Head Poses: Reproducibility and Analysis [0.0]
We analyze an existing DeepFake detection technique based on head pose estimation.
Our results correct the current literature's perception of state-of-the-art performance for DeepFake detection.
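The referenced technique estimates head pose twice, once from all facial landmarks and once from only the central ones, and flags large disagreement. A minimal sketch of that comparison (the rotation vectors would come from, e.g., cv2.solvePnP; the threshold is purely illustrative):

```python
import numpy as np

def pose_gap(rvec_whole, rvec_central):
    """Angle (radians) between head-pose rotation vectors estimated from
    the whole face vs. only the central landmarks; splicing the central
    region tends to widen this gap in DeepFakes."""
    a, b = np.ravel(rvec_whole), np.ravel(rvec_central)
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

# Toy rotation vectors; 0.1 rad is a hypothetical decision threshold.
if pose_gap([0.10, 0.02, 0.00], [0.25, 0.10, 0.05]) > 0.1:
    print("head-pose estimates disagree: possible DeepFake")
```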
arXiv Detail & Related papers (2021-08-28T22:56:09Z)
- CMUA-Watermark: A Cross-Model Universal Adversarial Watermark for Combating Deepfakes [74.18502861399591]
Malicious application of deepfakes (i.e., technologies that can generate target faces or face attributes) has posed a huge threat to our society.
We propose a universal adversarial attack method on deepfake models to generate a Cross-Model Universal Adversarial Watermark (CMUA-Watermark).
Experimental results demonstrate that the proposed CMUA-Watermark can effectively distort the fake facial images generated by deepfake models.
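A heavily simplified sketch of the cross-model idea (stand-in models, illustrative step size and budget; not the paper's actual training procedure): one perturbation is updated with signed gradients pooled over several deepfake models:

```python
import torch

def universal_watermark(models, images, epsilon=8 / 255, steps=10):
    """Optimize a single image-agnostic perturbation so that every model
    in `models` produces a distorted output when it is applied."""
    # Small random init so the distortion loss has a non-zero gradient.
    delta = (1e-3 * torch.randn_like(images[:1])).requires_grad_(True)
    for _ in range(steps):
        loss = sum(
            torch.nn.functional.mse_loss(
                m((images + delta).clamp(0, 1)), m(images).detach())
            for m in models)
        loss.backward()
        with torch.no_grad():
            delta += (epsilon / steps) * delta.grad.sign()  # ascend the loss
            delta.clamp_(-epsilon, epsilon)
            delta.grad.zero_()
    return delta.detach()

models = [torch.nn.Conv2d(3, 3, 3, padding=1) for _ in range(2)]  # toys
watermark = universal_watermark(models, torch.rand(4, 3, 64, 64))
```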
arXiv Detail & Related papers (2021-05-23T07:28:36Z)
- Landmark Breaker: Obstructing DeepFake By Disturbing Landmark Extraction [40.71503677067645]
We describe Landmark Breaker, the first dedicated method to disrupt facial landmark extraction.
Our motivation is that disrupting facial landmark extraction can affect the alignment of the input face and thus degrade DeepFake quality.
Compared to the detection methods that only work after DeepFake generation, Landmark Breaker goes one step ahead to prevent DeepFake generation.
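A one-step sketch of that disruption, assuming any differentiable landmark detector (the epsilon, toy detector, and noise initialization are illustrative):

```python
import torch

def break_landmarks(landmark_net, image, epsilon=4 / 255):
    """Perturb the image so the detector's landmarks drift from its clean
    prediction, spoiling face alignment upstream of DeepFake synthesis."""
    clean = landmark_net(image).detach()
    # Start from a slightly noised copy so the loss gradient is non-zero.
    adv = (image + 1e-3 * torch.randn_like(image)).clamp(0, 1)
    adv.requires_grad_(True)
    loss = torch.nn.functional.mse_loss(landmark_net(adv), clean)
    loss.backward()
    return (adv + epsilon * adv.grad.sign()).clamp(0, 1).detach()

# Toy detector: 68 (x, y) landmark coordinates from a flattened image.
net = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 64 * 64, 136))
protected = break_landmarks(net, torch.rand(1, 3, 64, 64))
```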
arXiv Detail & Related papers (2021-02-01T12:27:08Z)
- Identity-Driven DeepFake Detection [91.0504621868628]
Identity-Driven DeepFake Detection takes as input the suspect image/video as well as the target identity information.
We output a decision on whether the identity in the suspect image/video is the same as the target identity.
We present a simple identity-based detection algorithm called the OuterFace, which may serve as a baseline for further research.
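A minimal sketch of the identity-comparison step (the embedding source and the threshold are assumptions; this is not the OuterFace algorithm itself):

```python
import numpy as np

def identity_match(suspect_emb, target_emb, threshold=0.5):
    """Cosine-compare a face-recognition embedding of the suspect
    image/video against the claimed identity's reference embedding."""
    a = suspect_emb / np.linalg.norm(suspect_emb)
    b = target_emb / np.linalg.norm(target_emb)
    sim = float(np.dot(a, b))
    return sim >= threshold, sim

# Embeddings would come from any face-recognition network (e.g. ArcFace);
# random vectors stand in here, so the expected answer is "not the same".
same, sim = identity_match(np.random.randn(512), np.random.randn(512))
print(f"same identity: {same} (cosine={sim:.2f})")
```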
arXiv Detail & Related papers (2020-12-07T18:59:08Z)
- What makes fake images detectable? Understanding properties that generalize [55.4211069143719]
Deep networks can still pick up on subtle artifacts in doctored images.
We seek to understand what properties of fake images make them detectable.
We show a technique to exaggerate these detectable properties.
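A small sketch of patch-level scoring in that spirit (the classifier, patch size, and stride are assumptions): overlapping patches are scored individually, yielding a heatmap of where the detectable artifacts concentrate:

```python
import numpy as np

def patch_fakeness_heatmap(image, patch_classifier, patch=64, stride=32):
    """Score overlapping patches with any classifier returning P(fake),
    localizing the most artifact-heavy regions of the image."""
    h, w = image.shape[:2]
    rows, cols = (h - patch) // stride + 1, (w - patch) // stride + 1
    heat = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            y, x = i * stride, j * stride
            heat[i, j] = patch_classifier(image[y:y + patch, x:x + patch])
    return heat

# Toy classifier (mean intensity) on a random image, for shape only.
heat = patch_fakeness_heatmap(np.random.rand(256, 256, 3), lambda p: p.mean())
print(heat.shape)  # (7, 7)
```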
arXiv Detail & Related papers (2020-08-24T17:50:28Z)