DF-Captcha: A Deepfake Captcha for Preventing Fake Calls
- URL: http://arxiv.org/abs/2208.08524v1
- Date: Wed, 17 Aug 2022 20:40:54 GMT
- Title: DF-Captcha: A Deepfake Captcha for Preventing Fake Calls
- Authors: Yisroel Mirsky
- Abstract summary: Social engineering (SE) is a form of deception that aims to trick people into giving access to data, information, networks and even money.
Deepfake technology can be deployed in real-time to clone someone's voice in a phone call or reenact a face in a video call.
We propose a lightweight application which can protect organizations and individuals from deepfake SE attacks.
- Score: 7.714772499501984
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Social engineering (SE) is a form of deception that aims to trick people into
giving access to data, information, networks and even money. For decades SE has
been a key method for attackers to gain access to an organization, virtually
skipping all lines of defense. Attackers also regularly use SE to scam innocent
people by making threatening phone calls which impersonate an authority or by
sending infected emails which look like they have been sent from a loved one.
SE attacks will likely remain a top attack vector for criminals because humans
are the weakest link in cyber security.
Unfortunately, the threat will only get worse now that a new technology
called deepfakes has arrived. A deepfake is believable media (e.g., videos)
created by an AI. Although the technology has mostly been used to swap the
faces of celebrities, it can also be used to 'puppet' different personas.
Recently, researchers have shown how this technology can be deployed in
real-time to clone someone's voice in a phone call or reenact a face in a video
call. Given that any novice user can download and use this technology, it is
no surprise that criminals have already begun to monetize it to perpetrate
their SE attacks.
In this paper, we propose a lightweight application which can protect
organizations and individuals from deepfake SE attacks. Through a challenge and
response approach, we leverage the technical and theoretical limitations of
deepfake technologies to expose the attacker. Existing defence solutions are
too heavy as end-point solutions and can be evaded by a dynamic attacker. In
contrast, our approach is lightweight and breaks the reactive arms race,
putting the attacker at a disadvantage.
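To make the challenge-and-response idea concrete, here is a minimal hypothetical sketch, not the paper's implementation: the callee issues a randomly chosen task that real-time deepfake pipelines are known to handle poorly, records the response, and accepts the caller only if a forensics model finds no generation artifacts. The challenge bank, the `score_fn` interface, and the threshold are all illustrative assumptions.

```python
# Hypothetical sketch of a challenge-response "deepfake CAPTCHA" for live
# calls. The challenge bank, scoring callable, and threshold are assumptions
# for illustration, not the paper's actual design.
import random
from dataclasses import dataclass
from typing import Callable

@dataclass
class Challenge:
    prompt: str       # instruction read to the caller
    timeout_s: float  # how quickly a genuine caller should comply

# Tasks chosen to stress known weak spots of real-time deepfake pipelines:
# out-of-distribution audio, occlusions, and large pose changes.
CHALLENGE_BANK = [
    Challenge("Please hum the first bar of any song.", 10.0),
    Challenge("Turn your head fully to the side, then back.", 8.0),
    Challenge("Cover your mouth with your hand while talking.", 8.0),
]

def issue_challenge() -> Challenge:
    """Pick a random challenge so an attacker cannot prepare for it."""
    return random.choice(CHALLENGE_BANK)

def verify_response(response_clip: bytes,
                    score_fn: Callable[[bytes], float],
                    threshold: float = 0.5) -> bool:
    """Accept the caller only if the anomaly score stays below threshold.

    `score_fn` stands in for any media-forensics model that rates how
    strongly the recorded response shows generation artifacts
    (0 = clean, 1 = clearly synthetic).
    """
    return score_fn(response_clip) < threshold

# Usage: wire in a real detector; here a trivial stand-in always passes.
if __name__ == "__main__":
    challenge = issue_challenge()
    print("Challenge:", challenge.prompt)
    clip = b"...recorded audio/video bytes..."
    ok = verify_response(clip, score_fn=lambda c: 0.1)
    print("Caller verified" if ok else "Possible deepfake")
```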
Related papers
- On the Feasibility of Fully AI-automated Vishing Attacks [4.266087132777785]
A vishing attack is a form of social engineering where attackers use phone calls to deceive individuals into disclosing sensitive information.
We study the potential for vishing attacks to escalate with the advent of AI.
We introduce ViKing, an AI-powered vishing system developed using publicly available AI technology.
arXiv Detail & Related papers (2024-09-20T10:47:09Z) - Cyber Deception Reactive: TCP Stealth Redirection to On-Demand Honeypots [0.0]
Cyber Deception (CYDEC) consists of deceiving an adversary, who performs actions without realising that they are being deceived.
This article proposes designing, implementing, and evaluating a deception mechanism based on the stealthy redirection of TCP communications to an on-demand honey server.
arXiv Detail & Related papers (2024-02-14T14:15:21Z) - Turn Fake into Real: Adversarial Head Turn Attacks Against Deepfake Detection [58.1263969438364]
We propose adversarial head turn (AdvHeat) as the first attempt at 3D adversarial face views against deepfake detectors.
Experiments validate the vulnerability of various detectors to AdvHeat in realistic, black-box scenarios.
Additional analyses demonstrate that AdvHeat is better than conventional attacks on both the cross-detector transferability and robustness to defenses.
arXiv Detail & Related papers (2023-09-03T07:01:34Z) - Discussion Paper: The Threat of Real Time Deepfakes [7.714772499501984]
Deepfakes are being used to spread misinformation, enable scams, perform fraud, and blackmail the innocent.
In this paper, we discuss the implications of this emerging threat, identify the challenges with preventing these attacks and suggest a better direction for researching stronger defences.
arXiv Detail & Related papers (2023-06-04T21:40:11Z) - Backdoor Attack with Sparse and Invisible Trigger [57.41876708712008]
Deep neural networks (DNNs) are vulnerable to backdoor attacks.
The backdoor attack is an emerging and serious training-phase threat.
We propose a sparse and invisible backdoor attack (SIBA).
arXiv Detail & Related papers (2023-05-11T10:05:57Z) - Deepfake CAPTCHA: A Method for Preventing Fake Calls [5.810459869589559]
We propose D-CAPTCHA: an active defense against real-time deepfakes.
The approach is to force the adversary into the spotlight by challenging the deepfake model to generate content which exceeds its capabilities.
In contrast to existing CAPTCHAs, we challenge the AI's ability to create content as opposed to its ability to classify content.
arXiv Detail & Related papers (2023-01-08T15:34:19Z) - Restricted Black-box Adversarial Attack Against DeepFake Face Swapping [70.82017781235535]
We introduce a practical adversarial attack that does not require any queries to the facial image forgery model.
Our method builds a substitute model for face reconstruction and then transfers adversarial examples from the substitute model directly to inaccessible black-box DeepFake models.
arXiv Detail & Related papers (2022-04-26T14:36:06Z) - Attacker Attribution of Audio Deepfakes [5.070542698701158]
Deepfakes are synthetically generated media often devised with malicious intent.
Recent work is almost exclusively limited to deepfake detection - predicting if audio is real or fake.
This is despite the fact that attribution (who created which fake?) is an essential building block of a larger defense strategy.
arXiv Detail & Related papers (2022-03-28T09:25:31Z) - The Threat of Offensive AI to Organizations [52.011307264694665]
This survey explores the threat of offensive AI on organizations.
First, we discuss how AI changes the adversary's methods, strategies, goals, and overall attack model.
Then, through a literature review, we identify 33 offensive AI capabilities which adversaries can use to enhance their attacks.
arXiv Detail & Related papers (2021-06-30T01:03:28Z) - ONION: A Simple and Effective Defense Against Textual Backdoor Attacks [91.83014758036575]
Backdoor attacks are a kind of emergent training-time threat to deep neural networks (DNNs).
In this paper, we propose a simple and effective textual backdoor defense named ONION.
Experiments demonstrate the effectiveness of our model in defending BiLSTM and BERT against five different backdoor attacks.
arXiv Detail & Related papers (2020-11-20T12:17:21Z) - Online Alternate Generator against Adversarial Attacks [144.45529828523408]
Deep learning models are notoriously sensitive to adversarial examples which are synthesized by adding quasi-perceptible noises on real images.
We propose a portable defense method, online alternate generator, which does not need to access or modify the parameters of the target networks.
The proposed method works by online synthesizing another image from scratch for an input image, instead of removing or destroying adversarial noises.
arXiv Detail & Related papers (2020-09-17T07:11:16Z)
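The re-synthesis idea in the last entry above can be illustrated with a simplified stand-in. The sketch below uses a deep-image-prior-style reconstruction rather than the paper's actual generator (which is additionally guided by the target network): a small generator is fitted online to each input, and its output, not the input itself, is passed to the classifier. The architecture, step count, and learning rate are assumptions.

```python
# Minimal sketch, in the spirit of "online alternate generator" defenses:
# instead of denoising a (possibly adversarial) input, re-synthesize a
# substitute image from scratch and classify that. The tiny generator,
# pixel loss, and step count below are illustrative assumptions.
import torch
import torch.nn as nn

def resynthesize(x: torch.Tensor, steps: int = 200, lr: float = 1e-2) -> torch.Tensor:
    """Fit a small conv generator online so G(z) ~ x, then return G(z).

    Low-capacity generators fit image structure before high-frequency
    adversarial noise, so limiting the steps yields a cleaned substitute.
    """
    _, c, h, w = x.shape
    z = torch.randn(1, 8, h, w)        # fixed random code for this input
    g = nn.Sequential(                 # deliberately small generator
        nn.Conv2d(8, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, c, 3, padding=1), nn.Sigmoid(),
    )
    opt = torch.optim.Adam(g.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.mean((g(z) - x) ** 2)   # pixel reconstruction only
        loss.backward()
        opt.step()
    with torch.no_grad():
        return g(z)                    # feed this to the target classifier

# Usage: x_adv is a [1, 3, H, W] image in [0, 1]; the target network never
# sees x_adv directly, only the re-synthesized substitute.
x_adv = torch.rand(1, 3, 64, 64)
x_clean = resynthesize(x_adv)
```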
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.