On the Feasibility of Fully AI-automated Vishing Attacks
- URL: http://arxiv.org/abs/2409.13793v1
- Date: Fri, 20 Sep 2024 10:47:09 GMT
- Title: On the Feasibility of Fully AI-automated Vishing Attacks
- Authors: João Figueiredo, Afonso Carvalho, Daniel Castro, Daniel Gonçalves, Nuno Santos
- Abstract summary: A vishing attack is a form of social engineering where attackers use phone calls to deceive individuals into disclosing sensitive information.
We study the potential for vishing attacks to escalate with the advent of AI.
We introduce ViKing, an AI-powered vishing system developed using publicly available AI technology.
- Score: 4.266087132777785
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A vishing attack is a form of social engineering where attackers use phone calls to deceive individuals into disclosing sensitive information, such as personal data, financial information, or security credentials. Attackers exploit the perceived urgency and authenticity of voice communication to manipulate victims, often posing as legitimate entities like banks or tech support. Vishing is a particularly serious threat as it bypasses security controls designed to protect information. In this work, we study the potential for vishing attacks to escalate with the advent of AI. In theory, AI-powered software bots may have the ability to automate these attacks by initiating conversations with potential victims via phone calls and deceiving them into disclosing sensitive information. To validate this thesis, we introduce ViKing, an AI-powered vishing system developed using publicly available AI technology. It relies on a Large Language Model (LLM) as its core cognitive processor to steer conversations with victims, complemented by a pipeline of speech-to-text and text-to-speech modules that facilitate audio-text conversion in phone calls. Through a controlled social experiment involving 240 participants, we found that ViKing successfully persuaded many participants to reveal sensitive information, even those who had been explicitly warned about the risk of vishing campaigns. Interactions with ViKing's bots were generally considered realistic. From these findings, we conclude that tools like ViKing may already be accessible to potential malicious actors, while also serving as an invaluable resource for cyber awareness programs.
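Architecturally, the pipeline described in the abstract is the standard voice-bot loop: transcribe the caller's audio, pass the transcript and dialogue history to the LLM, and synthesize the reply back into audio. The sketch below illustrates that loop; the `stt`, `llm`, and `tts` callables and the `call` interface are placeholders standing in for whichever publicly available services are used, not ViKing's actual components.

```python
def conversation_loop(call, stt, llm, tts, system_prompt, max_turns=20):
    """Generic speech-to-text -> LLM -> text-to-speech loop for one phone call.

    call: object exposing record() -> audio and play(audio), e.g. a telephony API.
    stt, llm, tts: placeholder callables for off-the-shelf AI services.
    """
    history = [{"role": "system", "content": system_prompt}]
    for _ in range(max_turns):
        audio_in = call.record()                 # capture the caller's turn
        text_in = stt(audio_in)                  # speech-to-text module
        history.append({"role": "user", "content": text_in})
        reply = llm(history)                     # LLM steers the conversation
        history.append({"role": "assistant", "content": reply})
        call.play(tts(reply))                    # text-to-speech module
```

In this framing, all scenario-specific behavior lives in the system prompt and the LLM, which is what makes such bots cheap to assemble from publicly available parts.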
Related papers
- Is Generative AI the Next Tactical Cyber Weapon For Threat Actors? Unforeseen Implications of AI Generated Cyber Attacks [0.0]
This paper delves into the escalating threat posed by the misuse of AI, specifically through the use of Large Language Models (LLMs).
Through a series of controlled experiments, the paper demonstrates how these models can be manipulated to bypass ethical and privacy safeguards to effectively generate cyber attacks.
We also introduce Occupy AI, a customized, fine-tuned LLM specifically engineered to automate and execute cyberattacks.
arXiv Detail & Related papers (2024-08-23T02:56:13Z)
- AbuseGPT: Abuse of Generative AI ChatBots to Create Smishing Campaigns [0.0]
We propose the AbuseGPT method to show how existing generative AI-based chatbots can be exploited by attackers in the real world to create smishing texts.
We found strong empirical evidence that attackers can circumvent the ethical safeguards of existing generative AI-based chatbot services.
We also discuss future research directions and guidelines to protect generative AI-based services from abuse.
arXiv Detail & Related papers (2024-02-15T05:49:22Z)
- Decoding the Threat Landscape: ChatGPT, FraudGPT, and WormGPT in Social Engineering Attacks [0.0]
Generative AI models have revolutionized the field of cyberattacks, empowering malicious actors to craft convincing and personalized phishing lures.
Models such as ChatGPT, FraudGPT, and WormGPT have augmented existing threats and ushered in new dimensions of risk.
To counter these threats, we outline a range of strategies, including traditional security measures, AI-powered security solutions, and collaborative approaches in cybersecurity.
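To make the defensive side concrete, one common shape for an "AI-powered security solution" against phishing lures is a text classifier over message content. The following is a minimal, hypothetical sketch; the toy dataset and model choice are illustrative assumptions, not a method from the paper.

```python
# Minimal sketch of an AI-assisted phishing-lure filter using scikit-learn.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data; a real deployment would use a large labeled corpus.
texts = [
    "Your account is locked, verify your password now",
    "Urgent: confirm your bank details to avoid suspension",
    "Team lunch moved to noon on Friday",
    "Attached are the meeting notes from yesterday",
]
labels = [1, 1, 0, 0]  # 1 = phishing lure, 0 = benign

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)
# Probability that an unseen message is a phishing lure.
print(clf.predict_proba(["Please verify your credentials immediately"])[0][1])
```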
arXiv Detail & Related papers (2023-10-09T10:31:04Z)
- The Manipulation Problem: Conversational AI as a Threat to Epistemic Agency [0.0]
Conversational AI technology has made significant advances over the last eighteen months.
Conversational agents designed to pursue targeted influence objectives are likely to be deployed in the near future.
Sometimes referred to as the "AI Manipulation Problem," the emerging risk is that consumers will unwittingly engage in real-time dialog with predatory AI agents.
arXiv Detail & Related papers (2023-06-19T04:09:16Z)
- DF-Captcha: A Deepfake Captcha for Preventing Fake Calls [7.714772499501984]
Social engineering (SE) is a form of deception that aims to trick people into giving access to data, information, networks and even money.
Deepfake technology can be deployed in real-time to clone someone's voice in a phone call or reenact a face in a video call.
We propose a lightweight application that can protect organizations and individuals from deepfake SE attacks.
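The captcha idea lends itself to a simple challenge-response flow: issue an unpredictable spoken task mid-call and verify that the answer is correct and arrives at human speed, on the premise that real-time deepfake pipelines tend to add latency and to handle unexpected tasks poorly. The sketch below is a hypothetical illustration of this concept, not the authors' implementation; the challenge list, latency threshold, and `transcribe` callable are assumptions.

```python
import random
import time

# Unpredictable challenges paired with expected answers.
CHALLENGES = [
    ("Please repeat the words: blue seven lantern", "blue seven lantern"),
    ("What is three plus four?", "seven"),
    ("Please spell the word cat", "c a t"),
]

MAX_RESPONSE_SECONDS = 4.0  # assumed human-like response threshold

def deepfake_captcha(play_prompt, record_reply, transcribe) -> bool:
    """Issue a random spoken challenge and check the reply's content and latency.

    play_prompt(text): speaks a prompt to the caller.
    record_reply(): returns the caller's reply as audio.
    transcribe(audio): speech-to-text placeholder for any STT service.
    """
    prompt, expected = random.choice(CHALLENGES)
    play_prompt(prompt)
    start = time.monotonic()
    reply_audio = record_reply()
    latency = time.monotonic() - start
    answer = transcribe(reply_audio).lower().strip()
    # Pass only if the expected answer appears and arrived at human speed.
    return expected in answer and latency <= MAX_RESPONSE_SECONDS
```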
arXiv Detail & Related papers (2022-08-17T20:40:54Z)
- Illusory Attacks: Information-Theoretic Detectability Matters in Adversarial Attacks [76.35478518372692]
We introduce epsilon-illusory, a novel form of adversarial attack on sequential decision-makers.
Compared to existing attacks, we empirically find epsilon-illusory to be significantly harder to detect with automated methods.
Our findings suggest the need for better anomaly detectors, as well as effective hardware- and system-level defenses.
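The information-theoretic framing can be made concrete with a simple detectability score: compare the empirical distribution of observations under attack against the clean distribution. The sketch below is an assumed illustration, not the paper's formulation; it uses a smoothed KL-divergence estimate over discretized observations, where a near-zero score means a distribution-based detector has little to work with.

```python
import numpy as np

def detectability_score(clean_obs, attacked_obs, bins=20):
    """Estimate KL(P_attacked || P_clean) over discretized 1-D observations."""
    lo = min(clean_obs.min(), attacked_obs.min())
    hi = max(clean_obs.max(), attacked_obs.max())
    edges = np.linspace(lo, hi, bins + 1)
    # Add-one smoothing avoids division by zero in empty bins.
    p = np.histogram(attacked_obs, bins=edges)[0] + 1.0
    q = np.histogram(clean_obs, bins=edges)[0] + 1.0
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

# Example: a subtle attack barely shifts observations; a blatant one shifts them a lot.
rng = np.random.default_rng(0)
clean = rng.normal(0.0, 1.0, 5000)
subtle = rng.normal(0.05, 1.0, 5000)   # low divergence: hard to detect
blatant = rng.normal(1.5, 1.0, 5000)   # high divergence: easy to detect
print(detectability_score(clean, subtle), detectability_score(clean, blatant))
```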
arXiv Detail & Related papers (2022-07-20T19:49:09Z)
- Automating Privilege Escalation with Deep Reinforcement Learning [71.87228372303453]
In this work, we exemplify the potential threat of malicious actors using deep reinforcement learning to train automated agents.
We present an agent that uses a state-of-the-art reinforcement learning algorithm to perform local privilege escalation.
Our agent can be used to generate realistic attack sensor data for training and evaluating intrusion detection systems.
arXiv Detail & Related papers (2021-10-04T12:20:46Z)
- The Threat of Offensive AI to Organizations [52.011307264694665]
This survey explores the threat that offensive AI poses to organizations.
First, we discuss how AI changes the adversary's methods, strategies, goals, and overall attack model.
Then, through a literature review, we identify 33 offensive AI capabilities which adversaries can use to enhance their attacks.
arXiv Detail & Related papers (2021-06-30T01:03:28Z)
- The Feasibility and Inevitability of Stealth Attacks [63.14766152741211]
We study new adversarial perturbations that enable an attacker to gain control over decisions in generic Artificial Intelligence systems.
In contrast to adversarial data modification, the attack mechanism we consider here involves alterations to the AI system itself.
arXiv Detail & Related papers (2021-06-26T10:50:07Z)
- Speaker De-identification System using Autoencoders and Adversarial Training [58.720142291102135]
We propose a speaker de-identification system based on adversarial training and autoencoders.
Experimental results show that combining adversarial learning and autoencoders increases the equal error rate of a speaker verification system, making the transformed voice harder to link back to the original speaker.
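The recipe named in the summary, an autoencoder that reconstructs speech features while an adversarial branch tries to recover speaker identity, can be sketched compactly. The following is a simplified, hypothetical illustration; the feature dimensions, layer sizes, and the gradient-reversal layer are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; negates gradients on the backward pass."""
    @staticmethod
    def forward(ctx, x):
        return x
    @staticmethod
    def backward(ctx, grad):
        return -grad

class DeidAutoencoder(nn.Module):
    def __init__(self, feat_dim=80, latent_dim=32, n_speakers=100):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU(),
                                     nn.Linear(128, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                     nn.Linear(128, feat_dim))
        # Adversary tries to recover speaker identity from the latent code.
        self.speaker_head = nn.Linear(latent_dim, n_speakers)

    def forward(self, x):
        z = self.encoder(x)
        recon = self.decoder(z)
        # Gradient reversal pushes the encoder to remove speaker information
        # while the speaker head is trained to extract it.
        spk_logits = self.speaker_head(GradReverse.apply(z))
        return recon, spk_logits

# Joint loss: reconstruct features well, but make speakers indistinguishable.
model = DeidAutoencoder()
x = torch.randn(16, 80)                  # batch of acoustic feature frames
labels = torch.randint(0, 100, (16,))    # speaker IDs
recon, spk_logits = model(x)
loss = nn.functional.mse_loss(recon, x) + nn.functional.cross_entropy(spk_logits, labels)
loss.backward()
```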
arXiv Detail & Related papers (2020-11-09T19:22:05Z)
- Backdoor Attack against Speaker Verification [86.43395230456339]
We show that it is possible to inject a hidden backdoor into speaker verification models by poisoning the training data.
We also demonstrate that existing backdoor attacks cannot be directly adopted in attacking speaker verification.
arXiv Detail & Related papers (2020-10-22T11:10:08Z)