AbuseGPT: Abuse of Generative AI ChatBots to Create Smishing Campaigns
- URL: http://arxiv.org/abs/2402.09728v1
- Date: Thu, 15 Feb 2024 05:49:22 GMT
- Title: AbuseGPT: Abuse of Generative AI ChatBots to Create Smishing Campaigns
- Authors: Ashfak Md Shibli and Mir Mehedi A. Pritom and Maanak Gupta
- Abstract summary: We propose AbuseGPT method to show how the existing generative AI-based chatbots can be exploited by attackers in real world to create smishing texts.
We have found strong empirical evidences to show that attackers can exploit ethical standards in the existing generative AI-based chatbots services.
We also discuss some future research directions and guidelines to protect the abuse of generative AI-based services.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: SMS phishing, also known as "smishing", is a growing threat that tricks users
into disclosing private information or clicking on URLs with malicious
content through fraudulent mobile text messages. In the recent past, we have also
observed a rapid advancement of conversational generative AI chatbot services
(e.g., OpenAI's ChatGPT, Google's BARD), which are powered by pre-trained large
language models (LLMs). These AI chatbots certainly offer many useful capabilities,
but it is not systematically understood how they can play a role in creating
threats and attacks. In this paper, we propose the AbuseGPT method to show how
existing generative AI-based chatbot services can be exploited by attackers in
the real world to create smishing texts and eventually lead to craftier smishing
campaigns. To the best of our knowledge, there is no pre-existing work that
clearly demonstrates the impact of these generative text-based models on creating
SMS phishing. Thus, we believe this study is the first of its kind to shed
light on this emerging cybersecurity threat. We have found strong empirical
evidence that attackers can circumvent the ethical safeguards of existing
generative AI-based chatbot services by crafting prompt injection attacks to
create new smishing campaigns. We also discuss future research
directions and guidelines to protect against the abuse of generative AI-based
services and to safeguard users from smishing attacks.
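Because the abstract closes on safeguarding users from smishing, a minimal illustrative defense sketch follows: a heuristic filter that scores an incoming SMS for common smishing cues. This is not from the paper; the keyword list, regular expressions, and threshold are assumptions chosen only for demonstration.

```python
import re

# Hypothetical cues often seen in smishing texts (illustrative, not exhaustive).
URGENCY_WORDS = {"urgent", "verify", "suspended", "immediately", "act now", "prize"}
URL_PATTERN = re.compile(r"https?://\S+|\b\w+\.(?:com|net|info|xyz)/\S*", re.IGNORECASE)

def smishing_score(message: str) -> int:
    """Return a crude risk score: points accumulate per suspicious cue found."""
    text = message.lower()
    score = 0
    if URL_PATTERN.search(message):
        score += 1  # embedded link
    score += sum(1 for word in URGENCY_WORDS if word in text)
    if re.search(r"\b(ssn|password|pin|account number)\b", text):
        score += 2  # direct request for sensitive data
    return score

if __name__ == "__main__":
    sms = "URGENT: your account is suspended. Verify now at http://bank-login.xyz/reset"
    print(smishing_score(sms))  # a score >= 2 might warrant flagging
```

A real deployment would replace these hand-written cues with a trained classifier and URL reputation checks; the point here is only the shape of a message-level filter.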
Related papers
- On the Feasibility of Fully AI-automated Vishing Attacks [4.266087132777785]
A vishing attack is a form of social engineering where attackers use phone calls to deceive individuals into disclosing sensitive information.
We study the potential for vishing attacks to escalate with the advent of AI.
We introduce ViKing, an AI-powered vishing system developed using publicly available AI technology.
arXiv Detail & Related papers (2024-09-20T10:47:09Z)
- Factuality Challenges in the Era of Large Language Models [113.3282633305118]
Large Language Models (LLMs) generate false, erroneous, or misleading content.
LLMs can be exploited for malicious applications.
This poses a significant challenge to society in terms of the potential deception of users.
arXiv Detail & Related papers (2023-10-08T14:55:02Z)
- BAGM: A Backdoor Attack for Manipulating Text-to-Image Generative Models [54.19289900203071]
The rise in popularity of text-to-image generative artificial intelligence has attracted widespread public interest.
We demonstrate that this technology can be attacked to generate content that subtly manipulates its users.
We propose a Backdoor Attack on text-to-image Generative Models (BAGM).
Our attack is the first to target three popular text-to-image generative models across three stages of the generative process.
arXiv Detail & Related papers (2023-07-31T08:34:24Z)
- Anatomy of an AI-powered malicious social botnet [6.147741269183294]
This paper presents a study about a Twitter botnet that appears to employ ChatGPT to generate human-like content.
We identify 1,140 accounts and validate them via manual annotation.
ChatGPT-generated content promotes suspicious websites and spreads harmful comments.
arXiv Detail & Related papers (2023-07-30T23:06:06Z)
- From ChatGPT to ThreatGPT: Impact of Generative AI in Cybersecurity and Privacy [0.0]
This research paper highlights the limitations, challenges, potential risks, and opportunities of GenAI in the domain of cybersecurity and privacy.
The paper investigates how cyber offenders can use the GenAI tools in developing cyber attacks.
We will also discuss the social, legal, and ethical implications of ChatGPT.
arXiv Detail & Related papers (2023-07-03T00:36:57Z)
- Chatbots to ChatGPT in a Cybersecurity Space: Evolution, Vulnerabilities, Attacks, Challenges, and Future Recommendations [6.1194122931444035]
OpenAI's ChatGPT took the Internet by storm, crossing one million users within five days of its launch.
With the enhanced popularity, ChatGPT experienced cybersecurity threats and vulnerabilities.
arXiv Detail & Related papers (2023-05-29T12:26:44Z)
- Generating Phishing Attacks using ChatGPT [1.392250707100996]
We identify several malicious prompts that can be provided to ChatGPT to generate functional phishing websites.
These attacks can be generated using vanilla ChatGPT without the need for any prior adversarial exploits.
arXiv Detail & Related papers (2023-05-09T02:38:05Z)
- A Complete Survey on Generative AI (AIGC): Is ChatGPT from GPT-4 to GPT-5 All You Need? [112.12974778019304]
Generative AI (AIGC, a.k.a. AI-generated content) has made headlines everywhere because of its ability to analyze and create text, images, and beyond.
In the era of AI transitioning from pure analysis to creation, it is worth noting that ChatGPT, with its most recent language model GPT-4, is just a tool out of numerous AIGC tasks.
This work focuses on the technological development of various AIGC tasks based on their output type, including text, images, videos, 3D content, etc.
arXiv Detail & Related papers (2023-03-21T10:09:47Z)
- The Threat of Offensive AI to Organizations [52.011307264694665]
This survey explores the threat of offensive AI on organizations.
First, we discuss how AI changes the adversary's methods, strategies, goals, and overall attack model.
Then, through a literature review, we identify 33 offensive AI capabilities which adversaries can use to enhance their attacks.
arXiv Detail & Related papers (2021-06-30T01:03:28Z)
- Robust Text CAPTCHAs Using Adversarial Examples [129.29523847765952]
We propose a user-friendly text-based CAPTCHA generation method named Robust Text CAPTCHA (RTC).
At the first stage, foregrounds and backgrounds are constructed with randomly sampled fonts and background images; a minimal sketch of this stage appears after this list.
At the second stage, we apply a highly transferable adversarial attack to the text CAPTCHAs to better obstruct CAPTCHA solvers.
arXiv Detail & Related papers (2021-01-07T11:03:07Z)
- Adversarial Watermarking Transformer: Towards Tracing Text Provenance with Data Hiding [80.3811072650087]
We study natural language watermarking as a defense to help better mark and trace the provenance of text.
We introduce the Adversarial Watermarking Transformer (AWT) with a jointly trained encoder-decoder and adversarial training.
AWT is the first end-to-end model to hide data in text by automatically learning -- without ground truth -- word substitutions along with their locations.
arXiv Detail & Related papers (2020-09-07T11:01:24Z)
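As referenced in the RTC entry above, here is a minimal sketch of the first RTC stage only: composing a text foreground over a background with a randomly sampled font. This is not the authors' implementation; the font paths, canvas size, and jitter values are assumptions, and a random solid color stands in for sampled background images.

```python
from PIL import Image, ImageDraw, ImageFont
import random
import string

# Hypothetical inputs: real candidate fonts and background images would come
# from the user's own collection in practice.
FONT_PATHS = ["DejaVuSans.ttf", "DejaVuSerif.ttf"]  # assumed to exist locally
CANVAS_SIZE = (160, 60)

def render_captcha(text: str) -> Image.Image:
    """Stage one only: draw a challenge string over a simple background
    with a randomly sampled font. (The adversarial stage two is omitted.)"""
    # Background: a random light solid color stands in for sampled images.
    bg_color = tuple(random.randint(180, 255) for _ in range(3))
    img = Image.new("RGB", CANVAS_SIZE, bg_color)
    draw = ImageDraw.Draw(img)
    font = ImageFont.truetype(random.choice(FONT_PATHS), size=32)
    # Draw each character with slight vertical jitter for variety.
    x = 10
    for ch in text:
        y = random.randint(5, 15)
        draw.text((x, y), ch, font=font, fill=(0, 0, 0))
        x += 24
    return img

if __name__ == "__main__":
    challenge = "".join(random.choices(string.ascii_uppercase, k=5))
    render_captcha(challenge).save("captcha.png")
```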
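AWT itself is a learned encoder-decoder, but the underlying idea of hiding bits in text through word substitutions can be shown with a deterministic toy scheme. The synonym pairs below are arbitrary assumptions for illustration; this is not the AWT model, which learns substitutions and their locations automatically.

```python
# Toy data hiding via word substitutions: each synonym pair encodes one bit.
# Pair order is the shared key: variant 0 encodes bit 0, variant 1 encodes bit 1.
SYNONYM_PAIRS = [("big", "large"), ("quick", "fast"), ("begin", "start")]

def embed(text: str, bits: list[int]) -> str:
    """Replace carrier words so the chosen variant encodes the next bit."""
    words, bit_iter = text.split(), iter(bits)
    for i, w in enumerate(words):
        for pair in SYNONYM_PAIRS:
            if w.lower() in pair:
                try:
                    words[i] = pair[next(bit_iter)]
                except StopIteration:
                    return " ".join(words)  # all bits placed
    return " ".join(words)

def extract(text: str) -> list[int]:
    """Read bits back from which variant of each pair appears."""
    bits = []
    for w in text.split():
        for pair in SYNONYM_PAIRS:
            if w.lower() in pair:
                bits.append(pair.index(w.lower()))
    return bits

if __name__ == "__main__":
    cover = "We begin with a big model and a quick test"
    marked = embed(cover, [1, 0, 1])
    print(marked)           # "We start with a big model and a fast test"
    print(extract(marked))  # [1, 0, 1]
```

Unlike this fixed key, AWT trains the substitution policy adversarially so the marked text stays fluent and the hidden signal is harder to strip by paraphrasing.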