Detecting Deepfakes Without Seeing Any
- URL: http://arxiv.org/abs/2311.01458v1
- Date: Thu, 2 Nov 2023 17:59:31 GMT
- Title: Detecting Deepfakes Without Seeing Any
- Authors: Tal Reiss, Bar Cavia, Yedid Hoshen
- Abstract summary: "Fact checking" is adapted from fake news detection to detect zero-day deepfake attacks.
FACTOR is a practical recipe for deepfake fact checking that demonstrates its power in critical attack settings.
Although FACTOR is training-free, relies exclusively on off-the-shelf features, is very easy to implement, and never sees any deepfakes, it achieves better than state-of-the-art accuracy.
- Score: 43.113936505905336
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deepfake attacks, malicious manipulation of media containing people, are a
serious concern for society. Conventional deepfake detection methods train
supervised classifiers to distinguish real media from previously encountered
deepfakes. Such techniques can only detect deepfakes similar to those
previously seen, but not zero-day (previously unseen) attack types. As current
deepfake generation techniques are changing at a breathtaking pace, new attack
types are proposed frequently, making this a major issue. Our main observations
are that: i) in many effective deepfake attacks, the fake media must be
accompanied by false facts i.e. claims about the identity, speech, motion, or
appearance of the person. For instance, when impersonating Obama, the attacker
explicitly or implicitly claims that the fake media show Obama; ii) current
generative techniques cannot perfectly synthesize the false facts claimed by
the attacker. We therefore introduce the concept of "fact checking", adapted
from fake news detection, for detecting zero-day deepfake attacks. Fact
checking verifies that the claimed facts (e.g. identity is Obama), agree with
the observed media (e.g. is the face really Obama's?), and thus can
differentiate between real and fake media. Consequently, we introduce FACTOR, a
practical recipe for deepfake fact checking and demonstrate its power in
critical attack settings: face swapping and audio-visual synthesis. Although it
is training-free, relies exclusively on off-the-shelf features, is very easy to
implement, and does not see any deepfakes, it achieves better than
state-of-the-art accuracy.
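The core idea of fact checking an identity claim can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes face embeddings from some off-the-shelf encoder (here simulated with synthetic vectors) and a hypothetical similarity threshold; FACTOR's actual feature extractors and scoring details are described in the paper.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def fact_check(claimed_ref: np.ndarray, observed: np.ndarray,
               threshold: float = 0.5) -> bool:
    """FACTOR-style check: does the observed face agree with the
    claimed identity? Returns True if the claim is consistent
    (media likely real), False otherwise (likely fake).
    The threshold value here is an illustrative assumption."""
    return cosine_similarity(claimed_ref, observed) >= threshold

# Toy example with synthetic "embeddings". A real system would use an
# off-the-shelf pretrained face-recognition encoder to embed both a
# reference image of the claimed identity and the observed media.
rng = np.random.default_rng(0)
claimed_identity = rng.normal(size=512)                      # reference embedding
real_observation = claimed_identity + 0.1 * rng.normal(size=512)  # close match
fake_observation = rng.normal(size=512)                      # unrelated face

print(fact_check(claimed_identity, real_observation))  # True: claim holds
print(fact_check(claimed_identity, fake_observation))  # False: claim fails
```

Because the check only compares the claimed fact against the observed media, it needs no deepfake training data, which is what makes the approach zero-day by construction.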
Related papers
- Deepfake detection in videos with multiple faces using geometric-fakeness features [79.16635054977068]
Deepfakes of victims or public figures can be used by fraudsters for blackmail, extortion, and financial fraud.
In our research we propose to use geometric-fakeness features (GFF) that characterize the dynamic degree of a face's presence in a video.
We employ our approach to analyze videos in which multiple faces are simultaneously present.
arXiv Detail & Related papers (2024-10-10T13:10:34Z) - Shaking the Fake: Detecting Deepfake Videos in Real Time via Active Probes [3.6308756891251392]
Real-time deepfake, a type of generative AI, is capable of "creating" non-existent content (e.g., swapping one's face with another) in a video.
It has been misused to produce deepfake videos for malicious purposes, including financial scams and political misinformation.
We propose SFake, a new real-time deepfake detection method that exploits deepfake models' inability to adapt to physical interference.
arXiv Detail & Related papers (2024-09-17T04:58:30Z) - Turn Fake into Real: Adversarial Head Turn Attacks Against Deepfake Detection [58.1263969438364]
We propose adversarial head turn (AdvHeat) as the first attempt at 3D adversarial face views against deepfake detectors.
Experiments validate the vulnerability of various detectors to AdvHeat in realistic, black-box scenarios.
Additional analyses demonstrate that AdvHeat is better than conventional attacks on both the cross-detector transferability and robustness to defenses.
arXiv Detail & Related papers (2023-09-03T07:01:34Z) - Recent Advancements In The Field Of Deepfake Detection [0.0]
A deepfake is a photo or video of a person whose image has been digitally altered or partially replaced with an image of someone else.
Deepfakes have the potential to cause a variety of problems and are often used maliciously.
Our objective is to survey and analyze a variety of current methods and advances in the field of deepfake detection.
arXiv Detail & Related papers (2023-08-10T13:24:27Z) - Attacker Attribution of Audio Deepfakes [5.070542698701158]
Deepfakes are synthetically generated media often devised with malicious intent.
Recent work is almost exclusively limited to deepfake detection - predicting if audio is real or fake.
This is despite the fact that attribution (who created which fake?) is an essential building block of a larger defense strategy.
arXiv Detail & Related papers (2022-03-28T09:25:31Z) - Watch Those Words: Video Falsification Detection Using Word-Conditioned Facial Motion [82.06128362686445]
We propose a multi-modal semantic forensic approach to handle both cheapfakes and visually persuasive deepfakes.
We leverage the idea of attribution to learn person-specific biometric patterns that distinguish a given speaker from others.
Unlike existing person-specific approaches, our method is also effective against attacks that focus on lip manipulation.
arXiv Detail & Related papers (2021-12-21T01:57:04Z) - Understanding the Security of Deepfake Detection [23.118012417901078]
We study the security of state-of-the-art deepfake detection methods in adversarial settings.
We use two large-scale public deepfakes data sources including FaceForensics++ and Facebook Deepfake Detection Challenge.
Our results uncover multiple security limitations of the deepfake detection methods in adversarial settings.
arXiv Detail & Related papers (2021-07-05T14:18:21Z) - Adversarially robust deepfake media detection using fused convolutional neural network predictions [79.00202519223662]
Current deepfake detection systems struggle against unseen data.
We employ three different deep Convolutional Neural Network (CNN) models to classify fake and real images extracted from videos.
The proposed technique outperforms state-of-the-art models, reaching 96.5% accuracy.
arXiv Detail & Related papers (2021-02-11T11:28:00Z) - WildDeepfake: A Challenging Real-World Dataset for Deepfake Detection [82.42495493102805]
We introduce a new dataset WildDeepfake which consists of 7,314 face sequences extracted from 707 deepfake videos collected completely from the internet.
We conduct a systematic evaluation of a set of baseline detection networks on both existing and our WildDeepfake datasets, and show that WildDeepfake is indeed a more challenging dataset, where the detection performance can decrease drastically.
arXiv Detail & Related papers (2021-01-05T11:10:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.