Deepfake Caricatures: Amplifying attention to artifacts increases
deepfake detection by humans and machines
- URL: http://arxiv.org/abs/2206.00535v3
- Date: Mon, 10 Apr 2023 17:14:43 GMT
- Title: Deepfake Caricatures: Amplifying attention to artifacts increases
deepfake detection by humans and machines
- Authors: Camilo Fosco, Emilie Josephs, Alex Andonian, Allen Lee, Xi Wang and
Aude Oliva
- Abstract summary: Deepfakes pose a serious threat to digital well-being by fueling misinformation.
We introduce a framework for amplifying artifacts in deepfake videos to make them more detectable by people.
We propose a novel, semi-supervised Artifact Attention module, which is trained on human responses to create attention maps that highlight video artifacts.
- Score: 17.7858728343141
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Deepfakes pose a serious threat to digital well-being by fueling
misinformation. As deepfakes get harder to recognize with the naked eye, human
users become increasingly reliant on deepfake detection models to decide if a
video is real or fake. Currently, models yield a prediction for a video's
authenticity, but do not integrate a method for alerting a human user. We
introduce a framework for amplifying artifacts in deepfake videos to make them
more detectable by people. We propose a novel, semi-supervised Artifact
Attention module, which is trained on human responses to create attention maps
that highlight video artifacts. These maps make two contributions. First, they
improve the performance of our deepfake detection classifier. Second, they
allow us to generate novel "Deepfake Caricatures": transformations of the
deepfake that exacerbate artifacts to improve human detection. In a user study,
we demonstrate that Caricatures greatly increase human detection, across video
presentation times and user engagement levels. Overall, we demonstrate the
success of a human-centered approach to designing deepfake mitigation methods.
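The abstract describes Caricatures as transformations that exacerbate artifacts flagged by an attention map. A minimal sketch of that idea, amplifying per-frame temporal deviations in regions an attention map marks as suspicious, might look like the following. The function name, the amplification rule, and the use of a single static attention map are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def caricature(frames: np.ndarray, attention: np.ndarray,
               strength: float = 3.0) -> np.ndarray:
    """Hypothetical artifact amplification.

    frames: (T, H, W) grayscale video with values in [0, 1].
    attention: (H, W) map in [0, 1] marking suspected artifact regions.
    """
    mean_frame = frames.mean(axis=0)        # temporal average of the clip
    residual = frames - mean_frame          # per-frame deviation (motion / artifacts)
    # Exaggerate deviations only where the attention map is high.
    amplified = frames + strength * attention * residual
    return np.clip(amplified, 0.0, 1.0)

# Toy example: artifacts suspected only in the center patch.
rng = np.random.default_rng(0)
video = rng.random((8, 4, 4))
attn = np.zeros((4, 4))
attn[1:3, 1:3] = 1.0
out = caricature(video, attn)
```

Regions where the attention map is zero pass through unchanged, so the amplification is confined to the flagged artifacts, which is the core of the Caricature idea as the abstract states it.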
Related papers
- Deepfake detection in videos with multiple faces using geometric-fakeness features [79.16635054977068]
Deepfakes of victims or public figures can be used by fraudsters for blackmail, extortion and financial fraud.
In our research we propose to use geometric-fakeness features (GFF), which characterize the dynamic degree of a face's presence in a video.
We apply our approach to analyze videos in which multiple faces are simultaneously present.
arXiv Detail & Related papers (2024-10-10T13:10:34Z) - Unmasking Illusions: Understanding Human Perception of Audiovisual Deepfakes [49.81915942821647]
This paper aims to evaluate the human ability to discern deepfake videos through a subjective study.
We present our findings by comparing human observers to five state-of-the-art audiovisual deepfake detection models.
We found that all AI models performed better than humans when evaluated on the same 40 videos.
arXiv Detail & Related papers (2024-05-07T07:57:15Z) - Comparative Analysis of Deep-Fake Algorithms [0.0]
Deepfakes, also known as deep learning-based fake videos, have become a major concern in recent years.
These deepfake videos can be used for malicious purposes such as spreading misinformation, impersonating individuals, and creating fake news.
Deepfake detection technologies use various approaches such as facial recognition, motion analysis, and audio-visual synchronization.
arXiv Detail & Related papers (2023-09-06T18:17:47Z) - Using Deep Learning to Detecting Deepfakes [0.0]
Deepfakes are videos or images that replace one person's face with a computer-generated face, often that of a more recognizable person in society.
To combat this online threat, researchers have developed models that are designed to detect deepfakes.
This study looks at various deepfake detection models that use deep learning algorithms to combat this looming threat.
arXiv Detail & Related papers (2022-07-27T17:05:16Z) - Detecting Deepfake by Creating Spatio-Temporal Regularity Disruption [94.5031244215761]
We propose to boost the generalization of deepfake detection by distinguishing the "regularity disruption" that does not appear in real videos.
Specifically, by carefully examining the spatial and temporal properties, we propose to disrupt a real video through a Pseudo-fake Generator.
Such practice allows us to achieve deepfake detection without using fake videos and improves the generalization ability in a simple and efficient manner.
arXiv Detail & Related papers (2022-07-21T10:42:34Z) - Voice-Face Homogeneity Tells Deepfake [56.334968246631725]
Existing detection approaches focus on exploring specific artifacts in deepfake videos.
We propose to perform the deepfake detection from an unexplored voice-face matching view.
Our model obtains significantly improved performance as compared to other state-of-the-art competitors.
arXiv Detail & Related papers (2022-03-04T09:08:50Z) - Watch Those Words: Video Falsification Detection Using Word-Conditioned
Facial Motion [82.06128362686445]
We propose a multi-modal semantic forensic approach to handle both cheapfakes and visually persuasive deepfakes.
We leverage the idea of attribution to learn person-specific biometric patterns that distinguish a given speaker from others.
Unlike existing person-specific approaches, our method is also effective against attacks that focus on lip manipulation.
arXiv Detail & Related papers (2021-12-21T01:57:04Z) - Detecting Deepfake Videos Using Euler Video Magnification [1.8506048493564673]
Deepfake videos are videos manipulated using advanced machine learning techniques.
In this paper, we examine a technique for possible identification of deepfake videos.
Our approach uses features extracted from the Euler technique to train three models to classify counterfeit and unaltered videos.
arXiv Detail & Related papers (2021-01-27T17:37:23Z) - WildDeepfake: A Challenging Real-World Dataset for Deepfake Detection [82.42495493102805]
We introduce a new dataset WildDeepfake which consists of 7,314 face sequences extracted from 707 deepfake videos collected completely from the internet.
We conduct a systematic evaluation of a set of baseline detection networks on both existing and our WildDeepfake datasets, and show that WildDeepfake is indeed a more challenging dataset, where the detection performance can decrease drastically.
arXiv Detail & Related papers (2021-01-05T11:10:32Z) - Deepfake detection: humans vs. machines [4.485016243130348]
We present a subjective study conducted in a crowdsourcing-like scenario, which systematically evaluates how hard it is for humans to see if the video is deepfake or not.
For each video, a simple question, "Is the face of the person in the video real or fake?", was answered on average by 19 naïve subjects.
The evaluation demonstrates that while human perception is very different from machine perception, both are successfully fooled by deepfakes, though in different ways.
arXiv Detail & Related papers (2020-09-07T15:20:37Z) - How Do the Hearts of Deep Fakes Beat? Deep Fake Source Detection via
Interpreting Residuals with Biological Signals [9.918684475252636]
We propose an approach not only to separate deep fakes from real, but also to discover the specific generative model behind a deep fake.
Our results indicate that our approach can detect fake videos with 97.29% accuracy, and the source model with 93.39% accuracy.
arXiv Detail & Related papers (2020-08-26T03:35:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.