Spear Phishing With Large Language Models
- URL: http://arxiv.org/abs/2305.06972v3
- Date: Fri, 22 Dec 2023 18:01:44 GMT
- Title: Spear Phishing With Large Language Models
- Authors: Julian Hazell
- Abstract summary: This study explores how large language models (LLMs) can be used for spear phishing.
I create unique spear phishing messages for over 600 British Members of Parliament using OpenAI's GPT-3.5 and GPT-4 models.
My findings provide some evidence that these messages are not only realistic but also cost-effective, with each email costing only a fraction of a cent to generate.
- Score: 3.2634122554914002
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent progress in artificial intelligence (AI), particularly in the domain
of large language models (LLMs), has resulted in powerful and versatile
dual-use systems. This intelligence can be put towards a wide variety of
beneficial tasks, yet it can also be used to cause harm. This study explores
one such harm by examining how LLMs can be used for spear phishing, a form of
cybercrime that involves manipulating targets into divulging sensitive
information. I first explore LLMs' ability to assist with the reconnaissance
and message generation stages of a spear phishing attack, where I find that
LLMs are capable of assisting with the email generation phase of a spear
phishing attack. To explore how LLMs could potentially be harnessed to scale
spear phishing campaigns, I then create unique spear phishing messages for over
600 British Members of Parliament using OpenAI's GPT-3.5 and GPT-4 models. My
findings provide some evidence that these messages are not only realistic but
also cost-effective, with each email costing only a fraction of a cent to
generate. Next, I demonstrate how basic prompt engineering can circumvent
safeguards installed in LLMs, highlighting the need for further research into
robust interventions that can help prevent models from being misused. To
further address these evolving risks, I explore two potential solutions:
structured access schemes, such as application programming interfaces, and
LLM-based defensive systems.
Related papers
- Aligning LLMs to Be Robust Against Prompt Injection [55.07562650579068]
We show that alignment can be a powerful tool to make LLMs more robust against prompt injection attacks.
Our method -- SecAlign -- first builds an alignment dataset by simulating prompt injection attacks.
Our experiments show that SecAlign robustifies the LLM substantially with a negligible hurt on model utility.
arXiv Detail & Related papers (2024-10-07T19:34:35Z) - MEGen: Generative Backdoor in Large Language Models via Model Editing [56.46183024683885]
Large language models (LLMs) have demonstrated remarkable capabilities.
Their powerful generative abilities enable flexible responses based on various queries or instructions.
This paper proposes an editing-based generative backdoor, named MEGen, aiming to create a customized backdoor for NLP tasks with the least side effects.
arXiv Detail & Related papers (2024-08-20T10:44:29Z) - Coercing LLMs to do and reveal (almost) anything [80.8601180293558]
It has been shown that adversarial attacks on large language models (LLMs) can "jailbreak" the model into making harmful statements.
We argue that the spectrum of adversarial attacks on LLMs is much larger than merely jailbreaking.
arXiv Detail & Related papers (2024-02-21T18:59:13Z) - Large Language Model Lateral Spear Phishing: A Comparative Study in
Large-Scale Organizational Settings [3.251318035773221]
This study is a pioneering exploration into the use of Large Language Models (LLMs) for the creation of targeted lateral phishing emails.
It targets a large tier 1 university's operation and workforce of approximately 9,000 individuals over an 11-month period.
It also evaluates the capability of email filtering infrastructure to detect such LLM-generated phishing attempts.
arXiv Detail & Related papers (2024-01-18T05:06:39Z) - A Survey on Large Language Model (LLM) Security and Privacy: The Good, the Bad, and the Ugly [21.536079040559517]
Large Language Models (LLMs) have revolutionized natural language understanding and generation.
This paper explores the intersection of LLMs with security and privacy.
arXiv Detail & Related papers (2023-12-04T16:25:18Z) - From Chatbots to PhishBots? -- Preventing Phishing scams created using
ChatGPT, Google Bard and Claude [3.7741995290294943]
This study explores the potential of using four popular commercially available Large Language Models to generate phishing attacks.
We build a BERT-based automated detection tool that can be used for the early detection of malicious prompts.
Our model is transferable across all four commercial LLMs, attaining an average accuracy of 96% for phishing website prompts and 94% for phishing email prompts.
arXiv Detail & Related papers (2023-10-29T22:52:40Z) - SmoothLLM: Defending Large Language Models Against Jailbreaking Attacks [99.23352758320945]
We propose SmoothLLM, the first algorithm designed to mitigate jailbreaking attacks on large language models (LLMs)
Based on our finding that adversarially-generated prompts are brittle to character-level changes, our defense first randomly perturbs multiple copies of a given input prompt, and then aggregates the corresponding predictions to detect adversarial inputs.
arXiv Detail & Related papers (2023-10-05T17:01:53Z) - Detecting Phishing Sites Using ChatGPT [2.3999111269325266]
We propose a novel system called ChatPhishDetector that utilizes Large Language Models (LLMs) to detect phishing sites.
Our system involves leveraging a web crawler to gather information from websites, generating prompts for LLMs based on the crawled data, and then retrieving the detection results from the responses generated by the LLMs.
The experimental results using GPT-4V demonstrated outstanding performance, with a precision of 98.7% and a recall of 99.6%, outperforming the detection results of other LLMs and existing systems.
arXiv Detail & Related papers (2023-06-09T11:30:08Z) - Not what you've signed up for: Compromising Real-World LLM-Integrated
Applications with Indirect Prompt Injection [64.67495502772866]
Large Language Models (LLMs) are increasingly being integrated into various applications.
We show how attackers can override original instructions and employed controls using Prompt Injection attacks.
We derive a comprehensive taxonomy from a computer security perspective to systematically investigate impacts and vulnerabilities.
arXiv Detail & Related papers (2023-02-23T17:14:38Z) - Exploiting Programmatic Behavior of LLMs: Dual-Use Through Standard
Security Attacks [67.86285142381644]
Recent advances in instruction-following large language models amplify the dual-use risks for malicious purposes.
Dual-use is difficult to prevent as instruction-following capabilities now enable standard attacks from computer security.
We show that instruction-following LLMs can produce targeted malicious content, including hate speech and scams.
arXiv Detail & Related papers (2023-02-11T15:57:44Z) - Targeted Phishing Campaigns using Large Scale Language Models [0.0]
Phishing emails are fraudulent messages that aim to trick individuals into revealing sensitive information or taking actions that benefit the attackers.
We propose a framework for evaluating the performance of NLMs in generating these types of emails based on various criteria, including the quality of the generated text.
Our evaluations show that NLMs are capable of generating phishing emails that are difficult to detect and that have a high success rate in tricking individuals, but their effectiveness varies based on the specific NLM and training data used.
arXiv Detail & Related papers (2022-12-30T03:18:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this site (including all information) and is not responsible for any consequences.