Deepfake CAPTCHA: A Method for Preventing Fake Calls
- URL: http://arxiv.org/abs/2301.03064v1
- Date: Sun, 8 Jan 2023 15:34:19 GMT
- Title: Deepfake CAPTCHA: A Method for Preventing Fake Calls
- Authors: Lior Yasur, Guy Frankovits, Fred M. Grabovski, Yisroel Mirsky
- Abstract summary: We propose D-CAPTCHA: an active defense against real-time deepfakes.
The approach is to force the adversary into the spotlight by challenging the deepfake model to generate content which exceeds its capabilities.
In contrast to existing CAPTCHAs, we challenge the AI's ability to create content as opposed to its ability to classify content.
- Score: 5.810459869589559
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning technology has made it possible to generate realistic content
of specific individuals. These 'deepfakes' can now be generated in real time,
which enables attackers to impersonate people over audio and video calls.
Moreover, some methods only need a few images or seconds of audio to steal an
identity. Existing defenses perform passive analysis to detect fake content.
However, with the rapid progress of deepfake quality, this may be a losing
game.
In this paper, we propose D-CAPTCHA: an active defense against real-time
deepfakes. The approach is to force the adversary into the spotlight by
challenging the deepfake model to generate content which exceeds its
capabilities. By doing so, passive detection becomes easier since the content
will be distorted. In contrast to existing CAPTCHAs, we challenge the AI's
ability to create content as opposed to its ability to classify content. In
this work we focus on real-time audio deepfakes and present preliminary results
on video.
In our evaluation we found that D-CAPTCHA outperforms state-of-the-art audio
deepfake detectors with an accuracy of 91-100% depending on the challenge
(compared to 71% without challenges). We also performed a study on 41
volunteers to understand how threatening current real-time deepfake attacks
are. We found that the majority of the volunteers could not tell the difference
between real and fake audio.
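As a hedged illustration of the protocol the abstract describes, the sketch below wires a random challenge to a two-part check: task compliance plus a passive deepfake detector. All names (issue_challenge, verify_response, the challenge bank, the threshold) are illustrative, not from the paper; the compliance classifier and detector are stubbed out.

```python
import random

# Hypothetical challenge bank: tasks meant to push a real-time voice
# cloning model outside its training distribution (illustrative only).
CHALLENGES = [
    "hum a rising tone for three seconds",
    "whisper the next sentence",
    "sing the first line of a song",
]

def issue_challenge() -> str:
    """Pick a challenge at random so the adversary cannot prepare for it."""
    return random.choice(CHALLENGES)

def verify_response(complied: bool, fake_score: float,
                    threshold: float = 0.5) -> bool:
    """Accept the caller only if they performed the task AND the response
    still sounds real to a passive detector. A deepfake model forced
    beyond its capabilities should produce audible distortion, which
    raises fake_score and makes passive detection easier."""
    return complied and fake_score < threshold

# Example wiring; in practice both inputs come from audio models.
challenge = issue_challenge()
complied = True        # output of a task-compliance classifier (stub)
fake_score = 0.12      # output of a passive deepfake detector (stub)
print(challenge, "->", "accept" if verify_response(complied, fake_score) else "reject")
```

The conjunction is the point: the adversary must both perform the unusual task and keep the detector's fake score low, and the abstract argues a real-time model cannot do both at once.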
Related papers
- Deepfake detection in videos with multiple faces using geometric-fakeness features [79.16635054977068]
Deepfakes of victims or public figures can be used by fraudsters for blackmailing, extortion and financial fraud.
In our research we propose to use geometric-fakeness features (GFF) that characterize the dynamic degree of a face's presence in a video.
We employ our approach to analyze videos in which multiple faces are simultaneously present.
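A hedged sketch of how such per-face presence dynamics could be aggregated, assuming each tracked face yields a per-frame presence score; the feature definition here is illustrative, not the paper's GFF.

```python
import numpy as np

# Illustrative stand-in for GFF aggregation: each tracked face yields a
# per-frame presence score; its temporal dynamics are summarized into a
# small feature vector (the real feature definition is in the paper).
def dynamics_features(presence: np.ndarray) -> np.ndarray:
    """Summarize one face's presence curve: level, spread, and jitter."""
    jitter = np.abs(np.diff(presence))
    return np.array([presence.mean(), presence.std(), jitter.mean(), jitter.max()])

# Two faces tracked in the same video (synthetic numbers).
faces = {
    "face_0": np.array([0.95, 0.94, 0.96, 0.95, 0.93, 0.95, 0.96, 0.94]),
    "face_1": np.array([0.90, 0.55, 0.92, 0.40, 0.88, 0.95, 0.40, 0.91]),
}
for name, presence in faces.items():
    # Erratic presence dynamics (face_1) would look suspicious to a classifier.
    print(name, dynamics_features(presence).round(3))
```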
arXiv Detail & Related papers (2024-10-10T13:10:34Z)
- Shaking the Fake: Detecting Deepfake Videos in Real Time via Active Probes [3.6308756891251392]
Real-time deepfake, a type of generative AI, is capable of "creating" non-existent content (e.g., swapping one's face with another) in a video.
It has been misused to produce deepfake videos for malicious purposes, including financial scams and political misinformation.
We propose SFake, a new real-time deepfake detection method that exploits deepfake models' inability to adapt to physical interference.
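A minimal sketch of the active-probe idea, assuming a known disturbance waveform and a face-motion signal extracted from the video; SFake's concrete probe mechanism and features are described in the paper.

```python
import numpy as np

# Sketch of an active probe: inject a known physical disturbance and test
# whether the observed face motion follows it. A real camera feed shakes
# with the device; a deepfake face re-rendered per frame often does not.
rng = np.random.default_rng(0)
probe = np.sin(np.linspace(0, 6 * np.pi, 60))         # known probe waveform

real_face_motion = probe + 0.1 * rng.normal(size=60)  # follows the probe
fake_face_motion = 0.1 * rng.normal(size=60)          # ignores the probe

def probe_consistency(motion: np.ndarray, probe: np.ndarray) -> float:
    """Correlation between the injected probe and observed face motion."""
    return float(np.corrcoef(motion, probe)[0, 1])

print("real:", probe_consistency(real_face_motion, probe))  # near 1.0
print("fake:", probe_consistency(fake_face_motion, probe))  # near 0.0
```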
arXiv Detail & Related papers (2024-09-17T04:58:30Z)
- Unmasking Illusions: Understanding Human Perception of Audiovisual Deepfakes [49.81915942821647]
This paper aims to evaluate the human ability to discern deepfake videos through a subjective study.
We present our findings by comparing human observers to five state-of-the-art audiovisual deepfake detection models.
We found that all AI models performed better than humans when evaluated on the same 40 videos.
arXiv Detail & Related papers (2024-05-07T07:57:15Z)
- Turn Fake into Real: Adversarial Head Turn Attacks Against Deepfake Detection [58.1263969438364]
We propose adversarial head turn (AdvHeat) as the first attempt to craft 3D adversarial face views against deepfake detectors.
Experiments validate the vulnerability of various detectors to AdvHeat in realistic, black-box scenarios.
Additional analyses demonstrate that AdvHeat is better than conventional attacks on both the cross-detector transferability and robustness to defenses.
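A toy black-box search in the same spirit, with the 3D renderer and the detector collapsed into a single stub score function; AdvHeat's actual attack operates on rendered face views, so everything here is an assumption for illustration.

```python
import random
from typing import Optional

# Stub detector: confident on frontal views, weaker on profile views.
def detector_fake_score(yaw_deg: float) -> float:
    return max(0.0, 0.9 - 0.012 * abs(yaw_deg))

def find_adversarial_yaw(trials: int = 200, thr: float = 0.5) -> Optional[float]:
    """Query-only (black-box) search for a head-turn angle the detector
    misclassifies as real."""
    rng = random.Random(0)
    for _ in range(trials):
        yaw = rng.uniform(-90.0, 90.0)       # candidate head-turn angle
        if detector_fake_score(yaw) < thr:   # detector now says "real"
            return yaw
    return None

print(find_adversarial_yaw())
```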
arXiv Detail & Related papers (2023-09-03T07:01:34Z)
- GOTCHA: Real-Time Video Deepfake Detection via Challenge-Response [17.117162678626418]
We propose a challenge-response approach that establishes authenticity in live settings.
We focus on talking-head style video interaction and present a taxonomy of challenges that specifically target inherent limitations of real-time deepfake (RTDF) generation pipelines.
The findings underscore the promising potential of challenge-response systems for explainable and scalable real-time deepfake detection.
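One possible way to encode such a taxonomy, with invented challenges, categories, and thresholds; GOTCHA's real taxonomy and scoring are described in the paper.

```python
from dataclasses import dataclass

@dataclass
class Challenge:
    name: str
    targets: str           # which RTDF pipeline stage the challenge stresses
    pass_threshold: float  # minimum realism score a genuine caller achieves

# Assumed examples, not the paper's exact taxonomy.
TAXONOMY = [
    Challenge("turn head to profile", "face reenactment at extreme poses", 0.8),
    Challenge("occlude face with hand", "segmentation and blending", 0.8),
    Challenge("hold up a printed word", "out-of-face context synthesis", 0.9),
]

def run_session(realism_scores: dict[str, float]) -> bool:
    """Caller must pass every issued challenge to be judged authentic."""
    return all(realism_scores.get(c.name, 0.0) >= c.pass_threshold
               for c in TAXONOMY)

print(run_session({"turn head to profile": 0.91,
                   "occlude face with hand": 0.85,
                   "hold up a printed word": 0.95}))
```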
arXiv Detail & Related papers (2022-10-12T13:15:54Z)
- Deepfake Caricatures: Amplifying attention to artifacts increases deepfake detection by humans and machines [17.7858728343141]
Deepfakes pose a serious threat to digital well-being by fueling misinformation.
We introduce a framework for amplifying artifacts in deepfake videos to make them more detectable by people.
We propose a novel, semi-supervised Artifact Attention module, which is trained on human responses to create attention maps that highlight video artifacts.
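A hedged sketch of attention-guided artifact amplification, assuming an attention map and a smoothed reference frame are available; the paper's Artifact Attention module is learned from human responses rather than hand-set as here.

```python
import numpy as np

def caricature(frame: np.ndarray, reference: np.ndarray,
               attention: np.ndarray, alpha: float = 3.0) -> np.ndarray:
    """Exaggerate the residual between a frame and a smoothed reference
    wherever the attention map flags suspected artifacts."""
    residual = frame - reference                       # local inconsistencies
    boosted = frame + alpha * attention[..., None] * residual
    return np.clip(boosted, 0.0, 1.0)

h, w = 4, 4
frame = np.random.default_rng(1).random((h, w, 3))
reference = frame.mean() * np.ones_like(frame)         # stand-in smoothed frame
attention = np.zeros((h, w))
attention[1:3, 1:3] = 1.0                              # suspected artifact region
print(caricature(frame, reference, attention).shape)   # (4, 4, 3)
```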
arXiv Detail & Related papers (2022-06-01T14:43:49Z)
- Audio-Visual Person-of-Interest DeepFake Detection [77.04789677645682]
The aim of this work is to propose a deepfake detector that can cope with the wide variety of manipulation methods and scenarios encountered in the real world.
We leverage a contrastive learning paradigm to learn the moving-face and audio segment embeddings that are most discriminative for each identity.
Our method can detect both single-modality (audio-only, video-only) and multi-modality (audio-video) attacks, and is robust to low-quality or corrupted videos.
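A minimal sketch of the identity-embedding check at inference time, with random vectors standing in for learned contrastive embeddings; the embedding networks themselves are the paper's contribution and are stubbed out here, and the threshold is invented.

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(2)
# Embeddings of genuine clips of one identity (stand-ins for learned ones).
identity_refs = rng.normal(size=(10, 128))
identity_refs /= np.linalg.norm(identity_refs, axis=1, keepdims=True)

def is_fake(test_emb: np.ndarray, refs: np.ndarray, tau: float = 0.3) -> bool:
    """Flag a clip whose best similarity to the identity's reference set
    is low; this works regardless of which modality was forged."""
    best = max(cosine(test_emb, r) for r in refs)
    return best < tau

print(is_fake(identity_refs[0] + 0.05 * rng.normal(size=128), identity_refs))  # False
print(is_fake(rng.normal(size=128), identity_refs))                            # likely True
```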
arXiv Detail & Related papers (2022-04-06T20:51:40Z)
- Watch Those Words: Video Falsification Detection Using Word-Conditioned Facial Motion [82.06128362686445]
We propose a multi-modal semantic forensic approach to handle both cheapfakes and visually persuasive deepfakes.
We leverage the idea of attribution to learn person-specific biometric patterns that distinguish a given speaker from others.
Unlike existing person-specific approaches, our method is also effective against attacks that focus on lip manipulation.
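A hedged sketch of word-conditioned profiling, assuming per-word facial-motion features have already been extracted; the feature space and attribution model are assumptions, not the paper's.

```python
import numpy as np

# Speaker profile: typical facial-motion feature per spoken word (invented).
profile = {"hello": np.array([0.8, 0.2]), "okay": np.array([0.3, 0.7])}

def deviation(clip: dict, profile: dict) -> float:
    """Mean distance between a clip's per-word motion and the profile."""
    ds = [np.linalg.norm(v - profile[w]) for w, v in clip.items() if w in profile]
    return float(np.mean(ds)) if ds else 0.0

genuine = {"hello": np.array([0.78, 0.22]), "okay": np.array([0.31, 0.69])}
forged  = {"hello": np.array([0.20, 0.90]), "okay": np.array([0.90, 0.10])}
print(deviation(genuine, profile))  # small
print(deviation(forged, profile))   # large -> likely falsified
```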
arXiv Detail & Related papers (2021-12-21T01:57:04Z)
- Evaluation of an Audio-Video Multimodal Deepfake Dataset using Unimodal and Multimodal Detectors [18.862258543488355]
Deepfakes can cause security and privacy issues.
A new domain of cloning human voices using deep-learning technologies is also emerging.
To develop a good deepfake detector, we need one that can detect deepfakes across multiple modalities.
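A minimal late-fusion sketch, an assumption rather than the paper's setup, showing why combining modalities helps: a forged modality or an audio-video mismatch can raise the final score even when one unimodal detector is fooled.

```python
# Combine unimodal scores (all in [0, 1]; higher means more suspicious).
# The weights are illustrative, not tuned values from the paper.
def fused_fake_score(audio_fake: float, video_fake: float,
                     sync_mismatch: float, w=(0.4, 0.4, 0.2)) -> float:
    return w[0] * audio_fake + w[1] * video_fake + w[2] * sync_mismatch

# A cloned voice over an untouched video: the video detector misses it,
# but fusion still scores the call as suspicious.
print(fused_fake_score(audio_fake=0.9, video_fake=0.1, sync_mismatch=0.7))  # 0.54
```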
arXiv Detail & Related papers (2021-09-07T11:00:20Z)
- WildDeepfake: A Challenging Real-World Dataset for Deepfake Detection [82.42495493102805]
We introduce a new dataset, WildDeepfake, which consists of 7,314 face sequences extracted from 707 deepfake videos collected entirely from the internet.
We conduct a systematic evaluation of a set of baseline detection networks on both existing and our WildDeepfake datasets, and show that WildDeepfake is indeed a more challenging dataset, where the detection performance can decrease drastically.
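A small sketch of the cross-dataset comparison with synthetic scores, illustrating the reported accuracy drop; the detector and all numbers are placeholders, not results from the paper.

```python
import numpy as np

def accuracy(labels: np.ndarray, scores: np.ndarray, thr: float = 0.5) -> float:
    """Fraction of clips whose thresholded score matches the label."""
    return float(((scores >= thr).astype(int) == labels).mean())

rng = np.random.default_rng(3)
labels = rng.integers(0, 2, size=1000)          # 0 = real, 1 = fake
# Same detector, cleaner separation on curated data than in-the-wild data.
curated_scores = np.clip(labels + 0.2 * rng.normal(size=1000), 0, 1)
wild_scores    = np.clip(labels + 0.6 * rng.normal(size=1000), 0, 1)

print("curated:", accuracy(labels, curated_scores))
print("wild:   ", accuracy(labels, wild_scores))  # noticeably lower
```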
arXiv Detail & Related papers (2021-01-05T11:10:32Z)