Cost Sensitive Optimization of Deepfake Detector
- URL: http://arxiv.org/abs/2012.04199v1
- Date: Tue, 8 Dec 2020 04:06:02 GMT
- Title: Cost Sensitive Optimization of Deepfake Detector
- Authors: Ivan Kukanov, Janne Karttunen, Hannu Sillanpää, Ville Hautamäki
- Abstract summary: We argue that the deepfake detection task should be viewed as a screening task, where the user will screen a large number of videos daily.
It is clear then that only a small fraction of the uploaded videos are deepfakes, so the detection performance needs to be measured in a cost-sensitive way.
- Score: 6.427063076424032
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Manipulated videos have existed since the invention of cinema, but
generating manipulated videos that can fool the viewer has been a
time-consuming endeavor. With the dramatic improvements in deep generative
modeling, generating believable-looking fake videos has become a reality. In
the present work, we concentrate on so-called deepfake videos, where the
source face is swapped with the target's. We argue that the deepfake detection
task should be viewed as a screening task, where the user, such as a video
streaming platform, screens a large number of videos daily. It is clear
then that only a small fraction of the uploaded videos are deepfakes, so
detection performance needs to be measured in a cost-sensitive way. Preferably,
the model parameters should also be estimated in the same way. This is
precisely what we propose here.
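The cost-sensitive screening argument above can be made concrete with a detection cost function of the kind used in NIST-style evaluations, which weights misses and false alarms by their costs and by a low deepfake prior. This is a minimal sketch with synthetic scores; the exact cost formulation, priors, and cost weights used in the paper may differ.

```python
import numpy as np

# Synthetic detector scores for illustration only; in the screening setting,
# deepfakes are a small fraction of the traffic (here a 1% prior).
rng = np.random.default_rng(0)
scores_real = rng.normal(0.3, 0.1, 990)  # scores for genuine videos
scores_fake = rng.normal(0.7, 0.1, 10)   # scores for deepfake videos

def detection_cost(threshold, p_target=0.01, c_miss=1.0, c_fa=1.0):
    """NIST-style detection cost function (assumed form, not the paper's exact one).

    Weights the miss rate by the deepfake prior p_target and the
    false-alarm rate by (1 - p_target), so a rare positive class
    does not let a trivial accept-all detector look good.
    """
    p_miss = np.mean(scores_fake < threshold)   # deepfakes passed as real
    p_fa = np.mean(scores_real >= threshold)    # real videos flagged as fake
    return c_miss * p_miss * p_target + c_fa * p_fa * (1.0 - p_target)

# Sweep thresholds to find the minimum-cost operating point.
thresholds = np.linspace(0.0, 1.0, 101)
costs = [detection_cost(t) for t in thresholds]
best = thresholds[int(np.argmin(costs))]
print(f"min cost = {min(costs):.4f} at threshold {best:.2f}")
```

The same cost weights and prior can also be folded into training, e.g. as per-class weights in a weighted cross-entropy loss, which corresponds to the paper's suggestion that model parameters be estimated in the same cost-sensitive way.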
Related papers
- Deepfake detection in videos with multiple faces using geometric-fakeness features [79.16635054977068]
Deepfakes of victims or public figures can be used by fraudsters for blackmail, extortion and financial fraud.
In our research we propose to use geometric-fakeness features (GFF) that characterize the dynamic degree of a face's presence in a video.
We employ our approach to analyze videos with multiple faces that are simultaneously present in a video.
arXiv Detail & Related papers (2024-10-10T13:10:34Z) - What Matters in Detecting AI-Generated Videos like Sora? [51.05034165599385]
The gap between synthetic and real-world videos remains under-explored.
In this study, we compare real-world videos with those generated by a state-of-the-art AI model, Stable Video Diffusion.
Our model is capable of detecting videos generated by Sora with high accuracy, even without exposure to any Sora videos during training.
arXiv Detail & Related papers (2024-06-27T23:03:58Z) - Detecting Deepfake by Creating Spatio-Temporal Regularity Disruption [94.5031244215761]
We propose to boost the generalization of deepfake detection by distinguishing the "regularity disruption" that does not appear in real videos.
Specifically, by carefully examining the spatial and temporal properties, we propose to disrupt a real video through a Pseudo-fake Generator.
Such practice allows us to achieve deepfake detection without using fake videos and improves the generalization ability in a simple and efficient manner.
arXiv Detail & Related papers (2022-07-21T10:42:34Z) - Audio-Visual Person-of-Interest DeepFake Detection [77.04789677645682]
The aim of this work is to propose a deepfake detector that can cope with the wide variety of manipulation methods and scenarios encountered in the real world.
We leverage a contrastive learning paradigm to learn the moving-face and audio segment embeddings that are most discriminative for each identity.
Our method can detect both single-modality (audio-only, video-only) and multi-modality (audio-video) attacks, and is robust to low-quality or corrupted videos.
arXiv Detail & Related papers (2022-04-06T20:51:40Z) - Watch Those Words: Video Falsification Detection Using Word-Conditioned Facial Motion [82.06128362686445]
We propose a multi-modal semantic forensic approach to handle both cheapfakes and visually persuasive deepfakes.
We leverage the idea of attribution to learn person-specific biometric patterns that distinguish a given speaker from others.
Unlike existing person-specific approaches, our method is also effective against attacks that focus on lip manipulation.
arXiv Detail & Related papers (2021-12-21T01:57:04Z) - Detection of GAN-synthesized street videos [21.192357452920007]
This paper investigates the detectability of a new kind of AI-generated videos depicting driving street sequences (here referred to as DeepStreets videos).
We present a simple frame-based detector, achieving very good performance on state-of-the-art DeepStreets videos generated by the Vid2vid architecture.
arXiv Detail & Related papers (2021-09-10T16:59:15Z) - What's wrong with this video? Comparing Explainers for Deepfake Detection [13.089182408360221]
Deepfakes are computer manipulated videos where the face of an individual has been replaced with that of another.
In this work we develop, extend and compare white-box, black-box and model-specific techniques for explaining the labelling of real and fake videos.
In particular, we adapt SHAP, GradCAM and self-attention models to the task of explaining the predictions of state-of-the-art detectors based on EfficientNet.
arXiv Detail & Related papers (2021-05-12T18:44:39Z) - Detecting Deepfake Videos Using Euler Video Magnification [1.8506048493564673]
Deepfake videos are videos manipulated using advanced machine learning techniques.
In this paper, we examine a technique for possible identification of deepfake videos.
Our approach uses features extracted from the Euler technique to train three models to classify counterfeit and unaltered videos.
arXiv Detail & Related papers (2021-01-27T17:37:23Z) - WildDeepfake: A Challenging Real-World Dataset for Deepfake Detection [82.42495493102805]
We introduce a new dataset WildDeepfake which consists of 7,314 face sequences extracted from 707 deepfake videos collected completely from the internet.
We conduct a systematic evaluation of a set of baseline detection networks on both existing and our WildDeepfake datasets, and show that WildDeepfake is indeed a more challenging dataset, where the detection performance can decrease drastically.
arXiv Detail & Related papers (2021-01-05T11:10:32Z) - Deepfake detection: humans vs. machines [4.485016243130348]
We present a subjective study conducted in a crowdsourcing-like scenario, which systematically evaluates how hard it is for humans to see if the video is deepfake or not.
For each video, a simple question, "Is the face of the person in the video real or fake?", was answered on average by 19 naïve subjects.
The evaluation demonstrates that while human perception is very different from the perception of a machine, both are successfully fooled by deepfakes, albeit in different ways.
arXiv Detail & Related papers (2020-09-07T15:20:37Z) - Adversarial Deepfakes: Evaluating Vulnerability of Deepfake Detectors to Adversarial Examples [23.695497512694068]
Recent advances in video manipulation techniques have made the generation of fake videos more accessible than ever before.
Manipulated videos can fuel disinformation and reduce trust in media.
Recently developed deepfake detection methods rely on deep neural networks (DNNs) to distinguish AI-generated fake videos from real videos.
arXiv Detail & Related papers (2020-02-09T07:10:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.