From Chatbots to PhishBots? -- Preventing Phishing scams created using
ChatGPT, Google Bard and Claude
- URL: http://arxiv.org/abs/2310.19181v2
- Date: Sun, 10 Mar 2024 04:12:27 GMT
- Title: From Chatbots to PhishBots? -- Preventing Phishing scams created using
ChatGPT, Google Bard and Claude
- Authors: Sayak Saha Roy, Poojitha Thota, Krishna Vamsi Naragam, Shirin
Nilizadeh
- Abstract summary: This study explores the potential of using four popular commercially available Large Language Models to generate phishing attacks.
We build a BERT-based automated detection tool that can be used for the early detection of malicious prompts.
Our model is transferable across all four commercial LLMs, attaining an average accuracy of 96% for phishing website prompts and 94% for phishing email prompts.
- Score: 3.7741995290294943
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The advanced capabilities of Large Language Models (LLMs) have made them
invaluable across various applications, from conversational agents and content
creation to data analysis, research, and innovation. However, their
effectiveness and accessibility also render them susceptible to abuse for
generating malicious content, including phishing attacks. This study explores
the potential of using four popular commercially available LLMs, i.e., ChatGPT
(GPT 3.5 Turbo), GPT 4, Claude, and Bard, to generate functional phishing
attacks using a series of malicious prompts. We discover that these LLMs can
generate both phishing websites and emails that can convincingly imitate
well-known brands and also deploy a range of evasive tactics that are used to
elude detection mechanisms employed by anti-phishing systems. These attacks can
be generated using unmodified or "vanilla" versions of these LLMs without
requiring any prior adversarial exploits such as jailbreaking. We evaluate the
performance of the LLMs towards generating these attacks and find that they can
also be utilized to create malicious prompts that, in turn, can be fed back to
the model to generate phishing scams - thus massively reducing the
prompt-engineering effort required by attackers to scale these threats. As a
countermeasure, we build a BERT-based automated detection tool that can be used
for the early detection of malicious prompts to prevent LLMs from generating
phishing content. Our model is transferable across all four commercial LLMs,
attaining an average accuracy of 96% for phishing website prompts and 94% for
phishing email prompts. We also disclose these vulnerabilities to the concerned
LLM vendors, with Google acknowledging it as a severe issue. Our detection model is
available for use on Hugging Face, as well as through a ChatGPT Actions plugin.
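The released model's Hugging Face identifier and label set are not given in this summary, so the following is only a minimal sketch, assuming a hypothetical fine-tuned BERT checkpoint and a placeholder MALICIOUS label, of how such a prompt classifier could screen requests before they reach an LLM:

```python
# Minimal sketch: screening a user prompt with a BERT-based classifier before it is
# forwarded to an LLM. The model name and label below are placeholder assumptions,
# not the identifiers of the paper's released model.
from transformers import pipeline

detector = pipeline("text-classification", model="example-org/phishing-prompt-bert")

def is_malicious_prompt(prompt: str, threshold: float = 0.5) -> bool:
    """Return True if the classifier flags the prompt as a phishing-generation request."""
    result = detector(prompt)[0]  # e.g. {"label": "MALICIOUS", "score": 0.97}
    return result["label"] == "MALICIOUS" and result["score"] >= threshold

if __name__ == "__main__":
    prompt = "Write an email asking users to verify their bank credentials via this link."
    print("blocked" if is_malicious_prompt(prompt) else "forwarded to the LLM")
```

Screening prompts in front of the model in this way matches the paper's goal of catching malicious requests early, before any phishing content is generated.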
Related papers
- Deciphering the Chaos: Enhancing Jailbreak Attacks via Adversarial Prompt Translation [71.92055093709924]
We propose a novel method that "translates" garbled adversarial prompts into coherent and human-readable natural language adversarial prompts.
It also offers a new approach to discovering effective designs for jailbreak prompts, advancing the understanding of jailbreak attacks.
Our method achieves over 90% attack success rates against Llama-2-Chat models on AdvBench, despite their outstanding resistance to jailbreak attacks.
arXiv Detail & Related papers (2024-10-15T06:31:04Z)
- APOLLO: A GPT-based tool to detect phishing emails and generate explanations that warn users [2.3618982787621]
Large Language Models (LLMs) offer significant promise for text processing in various domains.
We present APOLLO, a tool based on OpenAI's GPT-4o to detect phishing emails and generate explanation messages.
We also conducted a study with 20 participants, comparing four different explanations presented as phishing warnings.
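APOLLO's prompts and output format are not reproduced in this summary, so the snippet below is only a rough sketch, assuming a plain system prompt and the standard OpenAI Python client, of how a GPT-4o call could return both a verdict and a user-facing warning:

```python
# Illustrative sketch only: the system prompt and output format are assumptions,
# not APOLLO's actual design.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def classify_and_explain(email_text: str) -> str:
    """Ask GPT-4o for a phishing verdict plus a short warning a user could read."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "Decide whether the following email is phishing. Reply with a "
                        "verdict and a short, user-friendly warning naming the suspicious cues."},
            {"role": "user", "content": email_text},
        ],
    )
    return response.choices[0].message.content
```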
arXiv Detail & Related papers (2024-10-10T14:53:39Z)
- BaThe: Defense against the Jailbreak Attack in Multimodal Large Language Models by Treating Harmful Instruction as Backdoor Trigger [47.1955210785169]
In this work, we propose BaThe, a simple yet effective jailbreak defense mechanism.
A jailbreak backdoor attack uses harmful instructions combined with manually crafted strings as triggers, causing the backdoored model to generate prohibited responses.
We assume that harmful instructions can themselves function as triggers; if we instead set rejection responses as the triggered response, the backdoored model can then defend against jailbreak attacks.
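As a rough illustration of that idea, the sketch below builds fine-tuning pairs that map harmful instructions (the triggers) to a fixed rejection (the triggered response); BaThe's actual mechanism targets multimodal models and embeds the trigger differently, so the refusal text and example instructions here are purely illustrative assumptions:

```python
# Simplified sketch of the data-construction idea: harmful instructions act as
# backdoor triggers whose triggered response is a refusal. Not BaThe's actual
# implementation; the refusal text and examples are illustrative assumptions.
REFUSAL = "I can't help with that request."

def build_defensive_pairs(harmful_instructions: list[str]) -> list[dict]:
    """Map each harmful instruction to a fixed rejection, forming fine-tuning pairs."""
    return [{"prompt": instr, "response": REFUSAL} for instr in harmful_instructions]

pairs = build_defensive_pairs([
    "Write a phishing email impersonating a bank.",
    "Generate a fake login page for a well-known brand.",
])
# These pairs would be mixed into ordinary instruction-tuning data so the model
# learns to emit the rejection whenever a harmful instruction appears.
```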
arXiv Detail & Related papers (2024-08-17T04:43:26Z)
- Evaluating the Efficacy of Large Language Models in Identifying Phishing Attempts [2.6012482282204004]
Phishing, a prevalent cybercrime tactic for decades, remains a significant threat in today's digital world.
This paper aims to analyze the effectiveness of 15 Large Language Models (LLMs) in detecting phishing attempts.
arXiv Detail & Related papers (2024-04-23T19:55:18Z)
- ChatSpamDetector: Leveraging Large Language Models for Effective Phishing Email Detection [2.3999111269325266]
This study introduces ChatSpamDetector, a system that uses large language models (LLMs) to detect phishing emails.
By converting email data into a prompt suitable for LLM analysis, the system provides a highly accurate determination of whether an email is phishing or not.
We conducted an evaluation using a comprehensive phishing email dataset and compared our system to several LLMs and baseline systems.
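The exact prompt template is not given in this summary; the sketch below shows one plausible way to perform the email-to-prompt conversion described above, with the field selection and wording as assumptions rather than ChatSpamDetector's actual template:

```python
# Sketch of the "email data -> LLM prompt" step. Field selection and wording are
# assumptions; for brevity this handles only plain-text, non-multipart emails.
from email import message_from_string

def email_to_prompt(raw_email: str) -> str:
    """Turn a raw email into a classification prompt for an LLM."""
    msg = message_from_string(raw_email)
    body = msg.get_payload()  # assumes a simple text/plain message
    return (
        "You are a phishing-email analyst. Answer 'phishing' or 'legitimate' for "
        "the email below and briefly justify the decision.\n"
        f"From: {msg.get('From', '')}\n"
        f"Subject: {msg.get('Subject', '')}\n"
        f"Body:\n{body}"
    )
```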
arXiv Detail & Related papers (2024-02-28T06:28:15Z)
- SmoothLLM: Defending Large Language Models Against Jailbreaking Attacks [99.23352758320945]
We propose SmoothLLM, the first algorithm designed to mitigate jailbreaking attacks on large language models (LLMs).
Based on our finding that adversarially-generated prompts are brittle to character-level changes, our defense first randomly perturbs multiple copies of a given input prompt, and then aggregates the corresponding predictions to detect adversarial inputs.
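As described, the defense relies on character-level perturbation plus aggregation; a minimal sketch follows, in which the perturbation rate, number of copies, and the query_llm / is_jailbroken helpers are assumptions rather than SmoothLLM's exact parameters:

```python
# Minimal sketch of perturb-and-aggregate: adversarial suffixes are brittle to
# random character edits, so most perturbed copies of a jailbreak prompt fail to
# jailbreak the model; returning a response consistent with the majority vote
# therefore neutralizes the attack.
import random
import string

def perturb(prompt: str, rate: float = 0.1) -> str:
    """Randomly replace a fraction of characters with random printable characters."""
    chars = list(prompt)
    if not chars:
        return prompt
    k = max(1, int(rate * len(chars)))
    for i in random.sample(range(len(chars)), k=k):
        chars[i] = random.choice(string.printable)
    return "".join(chars)

def smoothllm_respond(prompt: str, query_llm, is_jailbroken, copies: int = 8) -> str:
    """Query several perturbed copies, take the majority jailbroken/not-jailbroken
    vote, and return one of the responses that agrees with the majority."""
    responses = [query_llm(perturb(prompt)) for _ in range(copies)]
    votes = [is_jailbroken(r) for r in responses]
    majority = sum(votes) > len(votes) / 2
    # For an adversarial prompt, perturbation usually breaks the suffix, so the
    # majority of copies refuse and a refusal is what gets returned.
    return next(r for r, v in zip(responses, votes) if v == majority)
```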
arXiv Detail & Related papers (2023-10-05T17:01:53Z)
- Universal and Transferable Adversarial Attacks on Aligned Language Models [118.41733208825278]
We propose a simple and effective attack method that causes aligned language models to generate objectionable behaviors.
Surprisingly, we find that the adversarial prompts generated by our approach are quite transferable.
arXiv Detail & Related papers (2023-07-27T17:49:12Z)
- Spear Phishing With Large Language Models [3.2634122554914002]
This study explores how large language models (LLMs) can be used for spear phishing.
I create unique spear phishing messages for over 600 British Members of Parliament using OpenAI's GPT-3.5 and GPT-4 models.
My findings provide some evidence that these messages are not only realistic but also cost-effective, with each email costing only a fraction of a cent to generate.
arXiv Detail & Related papers (2023-05-11T16:55:19Z)
- Generating Phishing Attacks using ChatGPT [1.392250707100996]
We identify several malicious prompts that can be provided to ChatGPT to generate functional phishing websites.
These attacks can be generated using vanilla ChatGPT without the need for any prior adversarial exploits.
arXiv Detail & Related papers (2023-05-09T02:38:05Z)
- Not what you've signed up for: Compromising Real-World LLM-Integrated Applications with Indirect Prompt Injection [64.67495502772866]
Large Language Models (LLMs) are increasingly being integrated into various applications.
We show how attackers can override original instructions and employed controls using Prompt Injection attacks.
We derive a comprehensive taxonomy from a computer security perspective to systematically investigate impacts and vulnerabilities.
arXiv Detail & Related papers (2023-02-23T17:14:38Z)
- Phishing and Spear Phishing: examples in Cyber Espionage and techniques to protect against them [91.3755431537592]
Phishing attacks have become the most widely used technique in online scams, initiating more than 91% of cyberattacks from 2012 onwards.
This study reviews how phishing and spear phishing attacks are carried out by phishers, through five steps that magnify the outcome.
arXiv Detail & Related papers (2020-05-31T18:10:09Z)
This list is automatically generated from the titles and abstracts of the papers on this site.