VideoForensicsHQ: Detecting High-quality Manipulated Face Videos
- URL: http://arxiv.org/abs/2005.10360v2
- Date: Wed, 2 Jun 2021 12:00:26 GMT
- Title: VideoForensicsHQ: Detecting High-quality Manipulated Face Videos
- Authors: Gereon Fox, Wentao Liu, Hyeongwoo Kim, Hans-Peter Seidel, Mohamed
Elgharib, Christian Theobalt
- Abstract summary: We show how the performance of forgery detectors depends on the presence of artefacts that the human eye can see.
We introduce a new benchmark dataset for face video forgery detection, of unprecedented quality.
- Score: 77.60295082172098
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: There are concerns that new approaches to the synthesis of high quality face
videos may be misused to manipulate videos with malicious intent. The research
community therefore developed methods for the detection of modified footage and
assembled benchmark datasets for this task. In this paper, we examine how the
performance of forgery detectors depends on the presence of artefacts that the
human eye can see. We introduce a new benchmark dataset for face video forgery
detection, of unprecedented quality. It allows us to demonstrate that existing
detection techniques have difficulties detecting fakes that reliably fool the
human eye. We thus introduce a new family of detectors that examine
combinations of spatial and temporal features and outperform existing
approaches both in terms of detection accuracy and generalization.
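The abstract's central idea of combining spatial and temporal features can be illustrated with a toy sketch. This is not the paper's actual detector architecture; it is a minimal, hypothetical illustration in which a per-frame spatial statistic (pixel-intensity variance) is paired with a temporal statistic (mean absolute frame-to-frame difference), yielding a feature vector a classifier could consume.

```python
# Toy illustration (NOT the paper's detector): pair a spatial
# feature (per-frame intensity variance) with a temporal feature
# (mean absolute frame-to-frame difference).
# Frames are modeled as flat lists of pixel intensities in [0, 255].

from statistics import pvariance

def spatial_feature(frame):
    """Per-frame spatial statistic: pixel-intensity variance."""
    return pvariance(frame)

def temporal_feature(prev_frame, frame):
    """Mean absolute difference between consecutive frames."""
    return sum(abs(a - b) for a, b in zip(prev_frame, frame)) / len(frame)

def video_features(frames):
    """Average the spatial and temporal statistics over a clip."""
    spatial = sum(spatial_feature(f) for f in frames) / len(frames)
    temporal = sum(
        temporal_feature(p, f) for p, f in zip(frames, frames[1:])
    ) / (len(frames) - 1)
    return (spatial, temporal)

# A static clip has zero temporal energy; a flickering one does not,
# even though both have identical per-frame spatial statistics.
static = [[10, 10, 200, 200]] * 3
flicker = [[10, 10, 200, 200], [200, 200, 10, 10], [10, 10, 200, 200]]
print(video_features(static))   # → (9025.0, 0.0)
print(video_features(flicker))  # → (9025.0, 190.0)
```

The example shows why purely spatial detectors can miss temporally inconsistent fakes: both clips are indistinguishable frame by frame, and only the temporal component separates them.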
Related papers
- Deepfake detection in videos with multiple faces using geometric-fakeness features [79.16635054977068]
Deepfakes of victims or public figures can be used by fraudsters for blackmail, extortion and financial fraud.
In our research we propose to use geometric-fakeness features (GFF) that characterize the dynamic degree of a face's presence in a video.
We employ our approach to analyze videos in which multiple faces are present simultaneously.
arXiv Detail & Related papers (2024-10-10T13:10:34Z)
- The Tug-of-War Between Deepfake Generation and Detection [4.62070292702111]
Multimodal generative models are rapidly evolving, leading to a surge in the generation of realistic video and audio.
Deepfake videos, which can convincingly impersonate individuals, have particularly garnered attention due to their potential misuse.
This survey paper examines the dual landscape of deepfake video generation and detection, emphasizing the need for effective countermeasures.
arXiv Detail & Related papers (2024-07-08T17:49:41Z)
- VANE-Bench: Video Anomaly Evaluation Benchmark for Conversational LMMs [64.60035916955837]
VANE-Bench is a benchmark designed to assess the proficiency of Video-LMMs in detecting anomalies and inconsistencies in videos.
Our dataset comprises an array of videos synthetically generated using existing state-of-the-art text-to-video generation models.
We evaluate nine existing Video-LMMs, both open- and closed-source, on this benchmark and find that most models struggle to identify the subtle anomalies effectively.
arXiv Detail & Related papers (2024-06-14T17:59:01Z)
- Learning Expressive And Generalizable Motion Features For Face Forgery Detection [52.54404879581527]
We propose an effective sequence-based forgery detection framework based on an existing video classification method.
To make the motion features more expressive for manipulation detection, we propose an alternative motion consistency block.
We make a general video classification network achieve promising results on three popular face forgery datasets.
arXiv Detail & Related papers (2024-03-08T09:25:48Z)
- Exploring Decision-based Black-box Attacks on Face Forgery Detection [53.181920529225906]
Face forgery generation technologies generate vivid faces, which have raised public concerns about security and privacy.
Although face forgery detection has successfully distinguished fake faces, recent studies have demonstrated that face forgery detectors are very vulnerable to adversarial examples.
arXiv Detail & Related papers (2023-10-18T14:49:54Z)
- Understanding the Challenges and Opportunities of Pose-based Anomaly Detection [2.924868086534434]
Pose-based anomaly detection is a video-analysis technique for detecting anomalous events or behaviors by examining human pose extracted from the video frames.
In this work, we analyze and quantify the characteristics of two well-known video anomaly datasets to better understand the difficulties of pose-based anomaly detection.
We believe these experiments are beneficial for a better comprehension of pose-based anomaly detection and the datasets currently available.
arXiv Detail & Related papers (2023-03-09T18:09:45Z)
- Skeletal Video Anomaly Detection using Deep Learning: Survey, Challenges and Future Directions [3.813649699234981]
We present a survey of privacy-protecting deep learning anomaly detection methods using skeletons extracted from videos.
We conclude that skeleton-based approaches for anomaly detection can be a plausible privacy-protecting alternative for video anomaly detection.
arXiv Detail & Related papers (2022-12-31T04:11:25Z)
- SeeABLE: Soft Discrepancies and Bounded Contrastive Learning for Exposing Deepfakes [7.553507857251396]
We propose a novel deepfake detector, called SeeABLE, that formalizes the detection problem as a (one-class) out-of-distribution detection task.
SeeABLE pushes perturbed faces towards predefined prototypes using a novel regression-based bounded contrastive loss.
We show that our model convincingly outperforms competing state-of-the-art detectors, while exhibiting highly encouraging generalization capabilities.
arXiv Detail & Related papers (2022-11-21T09:38:30Z)
- Towards A Robust Deepfake Detector: Common Artifact Deepfake Detection Model [14.308886041268973]
We propose a novel deepfake detection method named Common Artifact Deepfake Detection Model.
We find that the main obstacle to learning common artifact features is that models are easily misled by the identity representation feature.
Our method effectively reduces the influence of Implicit Identity Leakage and outperforms the state-of-the-art by a large margin.
arXiv Detail & Related papers (2022-10-26T04:02:29Z)
- Leveraging Real Talking Faces via Self-Supervision for Robust Forgery Detection [112.96004727646115]
We develop a method to detect face-manipulated videos using real talking faces.
We show that our method achieves state-of-the-art performance on cross-manipulation generalisation and robustness experiments.
Our results suggest that leveraging natural and unlabelled videos is a promising direction for the development of more robust face forgery detectors.
arXiv Detail & Related papers (2022-01-18T17:14:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.