Detection of Real-time DeepFakes in Video Conferencing with Active
Probing and Corneal Reflection
- URL: http://arxiv.org/abs/2210.14153v1
- Date: Fri, 21 Oct 2022 23:31:17 GMT
- Authors: Hui Guo, Xin Wang, Siwei Lyu
- Abstract summary: We describe a new active forensic method to detect real-time DeepFakes.
We authenticate video calls by displaying a distinct pattern on the screen and using the corneal reflection extracted from the images of the call participant's face.
The pattern can be shown by a call participant on a shared screen or integrated directly into the video-call client.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The COVID pandemic has led to the wide adoption of online video calls in
recent years. However, the increasing reliance on video calls provides
opportunities for new impersonation attacks by fraudsters using the advanced
real-time DeepFakes. Real-time DeepFakes pose new challenges to detection
methods, which have to run in real-time as a video call is ongoing. In this
paper, we describe a new active forensic method to detect real-time DeepFakes.
Specifically, we authenticate video calls by displaying a distinct pattern on
the screen and using the corneal reflection extracted from the images of the
call participant's face. The pattern can be shown by a call participant on a
shared screen or integrated directly into the video-call client. In either
case, no specialized imaging or lighting hardware is required. Through
large-scale simulations, we evaluate the reliability of this approach in a
variety of real-world imaging scenarios.
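The verification idea above can be sketched as a simple comparison between the pattern currently displayed on screen and the pattern recovered from the participant's cornea. The following is a minimal illustration, not the authors' implementation: the function names, the use of zero-mean normalized cross-correlation as the similarity measure, and the decision threshold are all assumptions made for the sketch.

```python
import numpy as np


def normalized_cross_correlation(a: np.ndarray, b: np.ndarray) -> float:
    """Zero-mean normalized cross-correlation of two equal-size patches.

    Returns a value in [-1, 1]; values near 1 indicate the patches
    carry the same pattern up to brightness and contrast changes.
    """
    a = a.astype(float) - a.mean()
    b = b.astype(float) - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float((a * b).sum() / denom) if denom > 0 else 0.0


def verify_call(displayed_pattern: np.ndarray,
                corneal_patch: np.ndarray,
                threshold: float = 0.5) -> bool:
    """Hypothetical decision rule: accept the call as authentic if the
    corneal reflection is sufficiently correlated with the pattern
    currently shown on screen."""
    score = normalized_cross_correlation(displayed_pattern, corneal_patch)
    return score >= threshold


# Toy usage: a real participant's cornea reflects the probe pattern
# (plus sensor noise); a synthesized DeepFake eye does not.
rng = np.random.default_rng(0)
pattern = rng.random((16, 16))                       # probe shown on screen
real_reflection = pattern + 0.1 * rng.random((16, 16))  # genuine reflection
fake_reflection = rng.random((16, 16))                  # unrelated content
print(verify_call(pattern, real_reflection))  # expected: True
print(verify_call(pattern, fake_reflection))  # expected: False
```

In a real pipeline the corneal patch would first have to be localized on the iris, rectified for the cornea's curvature, and aligned in time with the displayed frame; those steps are outside the scope of this sketch.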
Related papers
- Deepfake detection in videos with multiple faces using geometric-fakeness features [79.16635054977068]
Deepfakes of victims or public figures can be used by fraudsters for blackmail, extortion and financial fraud.
In our research we propose to use geometric-fakeness features (GFF) that characterize the dynamic degree of face presence in a video.
We apply our approach to videos in which multiple faces are simultaneously present.
arXiv Detail & Related papers (2024-10-10T13:10:34Z) - AV-Lip-Sync+: Leveraging AV-HuBERT to Exploit Multimodal Inconsistency
for Video Deepfake Detection [32.502184301996216]
Multimodal manipulations (also known as audio-visual deepfakes) make it difficult for unimodal deepfake detectors to detect forgeries in multimedia content.
Previous methods mainly adopt uni-modal video forensics and use supervised pre-training for forgery detection.
This study proposes a new method based on a multi-modal self-supervised-learning (SSL) feature extractor.
arXiv Detail & Related papers (2023-11-05T18:35:03Z) - NPVForensics: Jointing Non-critical Phonemes and Visemes for Deepfake
Detection [50.33525966541906]
Existing multimodal detection methods capture audio-visual inconsistencies to expose Deepfake videos.
We propose a novel Deepfake detection method to mine the correlation between Non-critical Phonemes and Visemes, termed NPVForensics.
Our model can be easily adapted to the downstream Deepfake datasets with fine-tuning.
arXiv Detail & Related papers (2023-06-12T06:06:05Z) - Real-time Multi-person Eyeblink Detection in the Wild for Untrimmed
Video [41.4300990443683]
Real-time eyeblink detection in the wild serves a wide range of applications, including fatigue detection, face anti-spoofing, and emotion analysis.
We shed light on this research field for the first time with essential contributions on dataset, theory, and practices.
arXiv Detail & Related papers (2023-03-28T15:35:25Z) - Detecting Deepfake by Creating Spatio-Temporal Regularity Disruption [94.5031244215761]
We propose to boost the generalization of deepfake detection by distinguishing the "regularity disruption" that does not appear in real videos.
Specifically, by carefully examining the spatial and temporal properties, we propose to disrupt a real video through a Pseudo-fake Generator.
Such practice allows us to achieve deepfake detection without using fake videos and improves the generalization ability in a simple and efficient manner.
arXiv Detail & Related papers (2022-07-21T10:42:34Z) - Leveraging Real Talking Faces via Self-Supervision for Robust Forgery
Detection [112.96004727646115]
We develop a method to detect face-manipulated videos using real talking faces.
We show that our method achieves state-of-the-art performance on cross-manipulation generalisation and robustness experiments.
Our results suggest that leveraging natural and unlabelled videos is a promising direction for the development of more robust face forgery detectors.
arXiv Detail & Related papers (2022-01-18T17:14:54Z) - Watch Those Words: Video Falsification Detection Using Word-Conditioned
Facial Motion [82.06128362686445]
We propose a multi-modal semantic forensic approach to handle both cheapfakes and visually persuasive deepfakes.
We leverage the idea of attribution to learn person-specific biometric patterns that distinguish a given speaker from others.
Unlike existing person-specific approaches, our method is also effective against attacks that focus on lip manipulation.
arXiv Detail & Related papers (2021-12-21T01:57:04Z) - Deep Frequent Spatial Temporal Learning for Face Anti-Spoofing [9.435020319411311]
Face anti-spoofing is crucial for the security of face recognition systems, protecting them from presentation attacks.
Previous works have shown the effectiveness of using depth and temporal supervision for this task.
We propose a novel two-stream FreqSpatialTemporalNet for face anti-spoofing that simultaneously takes advantage of frequency, spatial and temporal information.
arXiv Detail & Related papers (2020-01-20T06:02:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.