DeepFakesON-Phys: DeepFakes Detection based on Heart Rate Estimation
- URL: http://arxiv.org/abs/2010.00400v3
- Date: Mon, 14 Dec 2020 14:34:23 GMT
- Title: DeepFakesON-Phys: DeepFakes Detection based on Heart Rate Estimation
- Authors: Javier Hernandez-Ortega, Ruben Tolosana, Julian Fierrez and Aythami
Morales
- Abstract summary: This work introduces a novel DeepFake detection framework based on physiological measurement.
In particular, we consider methods looking for subtle color changes in the human skin, revealing the presence of human blood under the tissues.
The proposed fake detector named DeepFakesON-Phys uses a Convolutional Attention Network (CAN), which extracts spatial and temporal information from video frames.
- Score: 25.413558889761127
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This work introduces a novel DeepFake detection framework based on
physiological measurement. In particular, we consider information related to
the heart rate using remote photoplethysmography (rPPG). rPPG methods analyze
video sequences looking for subtle color changes in the human skin, revealing
the presence of human blood under the tissues. In this work we investigate to
what extent rPPG is useful for the detection of DeepFake videos.
The proposed fake detector named DeepFakesON-Phys uses a Convolutional
Attention Network (CAN), which extracts spatial and temporal information from
video frames, analyzing and combining both sources to better detect fake
videos. This detection approach has been experimentally evaluated using the
latest public databases in the field: Celeb-DF and DFDC. The results achieved,
above 98% AUC (Area Under the Curve) on both databases, outperform the state of
the art and prove the success of fake detectors based on physiological
measurement to detect the latest DeepFake videos.
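The rPPG principle the abstract relies on can be sketched as follows: the mean color intensity of facial skin pixels oscillates subtly with the cardiac pulse, so the dominant frequency of that trace within the plausible human heart-rate band gives an estimate of heart rate. The snippet below is a minimal illustrative sketch of this idea, not the paper's actual method; the function name, sampling rate, and synthetic input are assumptions for demonstration.

```python
import numpy as np

def estimate_heart_rate(roi_signal, fps):
    """Estimate heart rate (BPM) from a 1-D skin-color trace via its dominant frequency."""
    x = roi_signal - np.mean(roi_signal)          # remove the DC (mean skin tone) component
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)  # frequency axis in Hz
    # Restrict to a plausible human heart-rate band (0.7-4.0 Hz, i.e. 42-240 BPM)
    band = (freqs >= 0.7) & (freqs <= 4.0)
    peak_freq = freqs[band][np.argmax(spectrum[band])]
    return 60.0 * peak_freq

# Synthetic example: a 75 BPM pulse (1.25 Hz) buried in noise, sampled at 30 fps
rng = np.random.default_rng(0)
fps, seconds = 30, 10
t = np.arange(fps * seconds) / fps
signal = 0.02 * np.sin(2 * np.pi * 1.25 * t) + 0.01 * rng.standard_normal(t.size)
bpm = estimate_heart_rate(signal, fps)
```

A detector like DeepFakesON-Phys does not compute such an explicit estimate; it learns spatio-temporal features correlated with these pulse-induced color changes, which face-swapping pipelines tend to destroy or distort.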
Related papers
- Deepfake detection in videos with multiple faces using geometric-fakeness features [79.16635054977068]
Deepfakes of victims or public figures can be used by fraudsters for blackmail, extortion and financial fraud.
In our research we propose to use geometric-fakeness features (GFF) that characterize the dynamic degree of a face's presence in a video.
We employ our approach to analyze videos in which multiple faces are simultaneously present.
arXiv Detail & Related papers (2024-10-10T13:10:34Z) - GazeForensics: DeepFake Detection via Gaze-guided Spatial Inconsistency
Learning [63.547321642941974]
We introduce GazeForensics, an innovative DeepFake detection method that utilizes gaze representation obtained from a 3D gaze estimation model.
Experimental results reveal that our proposed GazeForensics outperforms the current state-of-the-art methods.
arXiv Detail & Related papers (2023-11-13T04:48:33Z) - Deep Convolutional Pooling Transformer for Deepfake Detection [54.10864860009834]
We propose a deep convolutional Transformer to incorporate decisive image features both locally and globally.
Specifically, we apply convolutional pooling and re-attention to enrich the extracted features and enhance efficacy.
The proposed solution consistently outperforms several state-of-the-art baselines on both within- and cross-dataset experiments.
arXiv Detail & Related papers (2022-09-12T15:05:41Z) - Spatial-Temporal Frequency Forgery Clue for Video Forgery Detection in
VIS and NIR Scenario [87.72258480670627]
Existing face forgery detection methods based on frequency domain find that the GAN forged images have obvious grid-like visual artifacts in the frequency spectrum compared to the real images.
This paper proposes a Cosine Transform-based Forgery Clue Augmentation Network (FCAN-DCT) to achieve a more comprehensive spatial-temporal feature representation.
arXiv Detail & Related papers (2022-07-05T09:27:53Z) - Voice-Face Homogeneity Tells Deepfake [56.334968246631725]
Existing detection approaches contribute to exploring the specific artifacts in deepfake videos.
We propose to perform the deepfake detection from an unexplored voice-face matching view.
Our model obtains significantly improved performance as compared to other state-of-the-art competitors.
arXiv Detail & Related papers (2022-03-04T09:08:50Z) - DeepFake Detection with Inconsistent Head Poses: Reproducibility and
Analysis [0.0]
We analyze an existing DeepFake detection technique based on head pose estimation.
Our results correct the current literature's perception of state of the art performance for DeepFake detection.
arXiv Detail & Related papers (2021-08-28T22:56:09Z) - DeepRhythm: Exposing DeepFakes with Attentional Visual Heartbeat Rhythms [28.470194397110607]
We propose DeepRhythm, a DeepFake detection technique that exposes DeepFakes by monitoring the heartbeat rhythms.
arXiv Detail & Related papers (2020-06-13T12:56:46Z) - VideoForensicsHQ: Detecting High-quality Manipulated Face Videos [77.60295082172098]
We show how the performance of forgery detectors depends on the presence of artefacts that the human eye can see.
We introduce a new benchmark dataset for face video forgery detection, of unprecedented quality.
arXiv Detail & Related papers (2020-05-20T21:17:43Z) - Detecting Forged Facial Videos using convolutional neural network [0.0]
We propose to use smaller (fewer parameters to learn) convolutional neural networks (CNN) for a data-driven approach to forged video detection.
To validate our approach, we investigate the FaceForensics public dataset detailing both frame-based and video-based results.
arXiv Detail & Related papers (2020-05-17T19:04:59Z) - DeepFakes Evolution: Analysis of Facial Regions and Fake Detection
Performance [3.441021278275805]
This study provides an exhaustive analysis of both 1st and 2nd DeepFake generations in terms of facial regions and fake detection performance.
We highlight the poor fake detection results achieved even by the strongest state-of-the-art fake detectors in the latest DeepFake databases of the 2nd generation.
arXiv Detail & Related papers (2020-04-16T08:49:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.