Assessing AI vs Human-Authored Spear Phishing SMS Attacks: An Empirical Study
- URL: http://arxiv.org/abs/2406.13049v2
- Date: Wed, 19 Mar 2025 00:33:59 GMT
- Title: Assessing AI vs Human-Authored Spear Phishing SMS Attacks: An Empirical Study
- Authors: Jerson Francia, Derek Hansen, Ben Schooley, Matthew Taylor, Shydra Murray, Greg Snow
- Abstract summary: This paper examines the effectiveness of smishing (SMS phishing) messages created by GPT-4 and human authors, which have been personalized for willing targets. Experiments involved ranking each spear phishing message from most to least convincing, providing qualitative feedback, and guessing which messages were human- or AI-generated. Results show that LLM-generated messages are often perceived as more convincing than those authored by humans, particularly job-related messages.
- Score: 1.099532646524593
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This paper explores the use of Large Language Models (LLMs) in spear phishing message generation and evaluates their performance compared to human-authored counterparts. Our pilot study examines the effectiveness of smishing (SMS phishing) messages created by GPT-4 and human authors, which have been personalized for willing targets. The targets assessed these messages in a modified ranked-order experiment using a novel methodology we call TRAPD (Threshold Ranking Approach for Personalized Deception). Experiments involved ranking each spear phishing message from most to least convincing, providing qualitative feedback, and guessing which messages were human- or AI-generated. Results show that LLM-generated messages are often perceived as more convincing than those authored by humans, particularly job-related messages. Targets also struggled to distinguish between human- and AI-generated messages. We analyze different criteria the targets used to assess the persuasiveness and source of messages. This study aims to highlight the urgent need for further research and improved countermeasures against personalized AI-enabled social engineering attacks.
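As a rough illustration of the ranking step in TRAPD, the sketch below aggregates per-target orderings (most to least convincing) into mean ranks per message. It is a minimal sketch under assumed inputs: the message IDs and the mean-rank aggregation rule are illustrative, and the paper's actual threshold procedure is not reproduced here.

```python
# Minimal sketch of aggregating ranked-order judgments, in the spirit of
# TRAPD's "most to least convincing" rankings. All names are illustrative;
# the paper's exact threshold procedure is not reproduced.
from collections import defaultdict

def mean_ranks(rankings):
    """rankings: list of per-target orderings, each a list of message IDs
    ordered from most (rank 1) to least convincing."""
    totals = defaultdict(float)
    counts = defaultdict(int)
    for ordering in rankings:
        for rank, msg_id in enumerate(ordering, start=1):
            totals[msg_id] += rank
            counts[msg_id] += 1
    # Lower mean rank = judged more convincing on average.
    return sorted((totals[m] / counts[m], m) for m in totals)

# Hypothetical example: three targets each rank four smishing messages
# (two AI-generated, two human-authored).
rankings = [
    ["ai_job", "human_bank", "ai_delivery", "human_social"],
    ["ai_job", "ai_delivery", "human_bank", "human_social"],
    ["human_bank", "ai_job", "ai_delivery", "human_social"],
]
for mean_rank, msg in mean_ranks(rankings):
    print(f"{msg}: mean rank {mean_rank:.2f}")
```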
Related papers
- Who Writes What: Unveiling the Impact of Author Roles on AI-generated Text Detection [44.05134959039957]
We investigate how sociolinguistic attributes (gender, CEFR proficiency, academic field, and language environment) impact state-of-the-art AI text detectors.
Our results reveal significant biases: CEFR proficiency and language environment consistently affected detector accuracy, while gender and academic field showed detector-dependent effects.
These findings highlight the crucial need for socially aware AI text detection to avoid unfairly penalizing specific demographic groups.
arXiv Detail & Related papers (2025-02-18T07:49:31Z) - Assessing the Human Likeness of AI-Generated Counterspeech [10.434435022492723]
This paper investigates the human likeness of AI-generated counterspeech.
We implement and evaluate several LLM-based generation strategies.
We reveal differences in linguistic characteristics, politeness, and specificity.
arXiv Detail & Related papers (2024-10-14T18:48:47Z) - Seeing Through AI's Lens: Enhancing Human Skepticism Towards LLM-Generated Fake News [0.38233569758620056]
This paper aims to elucidate simple markers that help individuals distinguish between articles penned by humans and those created by LLMs.
We then devise a metric named Entropy-Shift Authorship Signature (ESAS), grounded in information theory and entropy principles.
ESAS ranks terms or entities (including POS tags) within news articles by their relevance for discerning article authorship.
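The summary names the metric but not its formula. As a generic illustration of entropy-based term ranking in the spirit of ESAS, the sketch below scores terms by information gain with respect to a human/AI authorship label; the toy corpus, tokenization, and scoring rule are illustrative assumptions, not the paper's exact method.

```python
# Illustrative entropy-based term ranking: score each term by how much
# knowing its presence reduces uncertainty about authorship. This is a
# generic information-gain computation, not the paper's exact ESAS formula.
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(docs, labels, term):
    with_term = [l for d, l in zip(docs, labels) if term in d]
    without = [l for d, l in zip(docs, labels) if term not in d]
    gain = entropy(labels)
    for subset in (with_term, without):
        if subset:
            gain -= len(subset) / len(labels) * entropy(subset)
    return gain

# Hypothetical toy corpus: tokenized articles labeled human- or AI-written.
docs = [{"delve", "moreover", "city"}, {"city", "mayor"},
        {"delve", "landscape"}, {"mayor", "budget"}]
labels = ["ai", "human", "ai", "human"]
terms = set().union(*docs)
ranked = sorted(terms, key=lambda t: information_gain(docs, labels, t),
                reverse=True)
print(ranked)  # terms most indicative of authorship first
```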
arXiv Detail & Related papers (2024-06-20T06:02:04Z) - Evaluating the Efficacy of Large Language Models in Identifying Phishing Attempts [2.6012482282204004]
Phishing, a prevalent cybercrime tactic for decades, remains a significant threat in today's digital world.
This paper aims to analyze the effectiveness of 15 Large Language Models (LLMs) in detecting phishing attempts.
arXiv Detail & Related papers (2024-04-23T19:55:18Z) - How Well Can LLMs Echo Us? Evaluating AI Chatbots' Role-Play Ability with ECHO [55.25989137825992]
We introduce ECHO, an evaluative framework inspired by the Turing test.
This framework engages the acquaintances of the target individuals to distinguish between human and machine-generated responses.
We evaluate three role-playing LLMs using ECHO, with GPT-3.5 and GPT-4 serving as foundational models.
arXiv Detail & Related papers (2024-04-22T08:00:51Z) - Forging the Forger: An Attempt to Improve Authorship Verification via Data Augmentation [52.72682366640554]
Authorship Verification (AV) is a text classification task concerned with inferring whether a candidate text has been written by one specific author or by someone else.
It has been shown that many AV systems are vulnerable to adversarial attacks, where a malicious author actively tries to fool the classifier by either concealing their writing style, or by imitating the style of another author.
arXiv Detail & Related papers (2024-03-17T16:36:26Z) - LLM vs Small Model? Large Language Model Based Text Augmentation Enhanced
Personality Detection Model [58.887561071010985]
Personality detection aims to detect one's personality traits underlying social media posts.
Most existing methods learn post features directly by fine-tuning the pre-trained language models.
We propose a large language model (LLM) based text augmentation enhanced personality detection model.
arXiv Detail & Related papers (2024-03-12T12:10:18Z) - Hidding the Ghostwriters: An Adversarial Evaluation of AI-Generated
Student Essay Detection [29.433764586753956]
Large language models (LLMs) have exhibited remarkable capabilities in text generation tasks.
The utilization of these models carries inherent risks, including but not limited to plagiarism, the dissemination of fake news, and issues in educational exercises.
This paper addresses the gap in adversarial evaluation of AI-generated essay detectors by constructing AIG-ASAP, an AI-generated student essay dataset.
arXiv Detail & Related papers (2024-02-01T08:11:56Z) - Comparing Large Language Model AI and Human-Generated Coaching Messages for Behavioral Weight Loss [5.496825493463708]
Large language model (LLM) based artificial intelligence (AI) chatbots could offer more personalized and novel messages.
87 adults in a weight-loss trial rated ten coaching messages' helpfulness using a 5-point Likert scale.
arXiv Detail & Related papers (2023-12-07T05:45:24Z) - The effect of source disclosure on evaluation of AI-generated messages:
A two-part study [0.0]
We examined the influence of source disclosure on people's evaluation of AI-generated health prevention messages.
We found that source disclosure significantly impacted the evaluation of the messages but did not significantly alter message rankings.
For those with moderate levels of negative attitudes towards AI, source disclosure decreased the preference for AI-generated messages.
arXiv Detail & Related papers (2023-11-27T05:20:47Z) - Fine-tuning Language Models for Factuality [96.5203774943198]
The capabilities of large pre-trained language models (LLMs) have led to their widespread use, sometimes even as a replacement for traditional search engines.
Yet language models are prone to making convincing but factually inaccurate claims, often referred to as 'hallucinations'.
In this work, we fine-tune language models to be more factual, without human labeling.
arXiv Detail & Related papers (2023-11-14T18:59:15Z) - A Quantitative Study of SMS Phishing Detection [0.0]
We conducted an online survey on smishing detection with 187 participants.
We presented them with 16 SMS screenshots and evaluated how different factors affect their decision making process in smishing detection.
We found that participants had more difficulty identifying real messages than fake ones, with an accuracy of 67.1% on fake messages and 43.6% on real messages.
arXiv Detail & Related papers (2023-11-12T17:56:42Z) - Can LLMs Keep a Secret? Testing Privacy Implications of Language Models via Contextual Integrity Theory [82.7042006247124]
We show that even the most capable AI models, GPT-4 and ChatGPT, reveal private information in contexts that humans would not, 39% and 57% of the time, respectively.
Our work underscores the immediate need to explore novel inference-time privacy-preserving approaches, based on reasoning and theory of mind.
arXiv Detail & Related papers (2023-10-27T04:15:30Z) - Measuring the Effect of Influential Messages on Varying Personas [67.1149173905004]
We present a new task, Response Forecasting on Personas for News Media, to estimate the response a persona might have upon seeing a news message.
The proposed task not only introduces personalization in the modeling but also predicts the sentiment polarity and intensity of each response.
This enables more accurate and comprehensive inference on the mental state of the persona.
arXiv Detail & Related papers (2023-05-25T21:01:00Z) - MAGE: Machine-generated Text Detection in the Wild [82.70561073277801]
Large language models (LLMs) have achieved human-level text generation, emphasizing the need for effective AI-generated text detection.
We build a comprehensive testbed by gathering texts from diverse human writings and texts generated by different LLMs.
Despite challenges, the top-performing detector can identify 86.54% of out-of-domain texts generated by a new LLM, indicating feasibility in real-world application scenarios.
arXiv Detail & Related papers (2023-05-22T17:13:29Z) - AI, write an essay for me: A large-scale comparison of human-written
versus ChatGPT-generated essays [66.36541161082856]
ChatGPT and similar generative AI models have attracted hundreds of millions of users.
This study compares human-written versus ChatGPT-generated argumentative student essays.
arXiv Detail & Related papers (2023-04-24T12:58:28Z) - ChatGPT-4 Outperforms Experts and Crowd Workers in Annotating Political
Twitter Messages with Zero-Shot Learning [0.0]
This paper assesses the accuracy, reliability and bias of the Large Language Model (LLM) ChatGPT-4 on the text analysis task of classifying the political affiliation of a Twitter poster based on the content of a tweet.
We use Twitter messages from United States politicians during the 2020 election, providing a ground truth against which to measure accuracy.
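As a minimal sketch of the zero-shot annotation setup described above, the snippet below asks an LLM to label a tweet's author by political affiliation via the OpenAI chat API. The prompt wording, model name, and label set are illustrative assumptions, not the study's exact configuration.

```python
# Minimal sketch of zero-shot political-affiliation annotation with an LLM.
# Prompt, model name, and labels are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def classify_affiliation(tweet: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "Classify the political affiliation of the author "
                        "of the following tweet. Answer with exactly one "
                        "word: Democrat or Republican."},
            {"role": "user", "content": tweet},
        ],
        temperature=0,  # deterministic output for annotation consistency
    )
    return response.choices[0].message.content.strip()

print(classify_affiliation("We must expand access to affordable healthcare."))
```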
arXiv Detail & Related papers (2023-04-13T14:51:40Z) - Can AI-Generated Text be Reliably Detected? [50.95804851595018]
Large Language Models (LLMs) perform impressively well in various applications.
The potential for misuse of these models in activities such as plagiarism, generating fake news, and spamming has raised concern about their responsible use.
We stress-test the robustness of these AI text detectors in the presence of an attacker.
arXiv Detail & Related papers (2023-03-17T17:53:19Z) - Verifying the Robustness of Automatic Credibility Assessment [50.55687778699995]
We show that meaning-preserving changes in input text can mislead the models.
We also introduce BODEGA: a benchmark for testing both victim models and attack methods on misinformation detection tasks.
Our experimental results show that modern large language models are often more vulnerable to attacks than previous, smaller solutions.
arXiv Detail & Related papers (2023-03-14T16:11:47Z) - Targeted Phishing Campaigns using Large Scale Language Models [0.0]
Phishing emails are fraudulent messages that aim to trick individuals into revealing sensitive information or taking actions that benefit the attackers.
We propose a framework for evaluating the performance of natural language models (NLMs) in generating these types of emails based on various criteria, including the quality of the generated text.
Our evaluations show that NLMs are capable of generating phishing emails that are difficult to detect and that have a high success rate in tricking individuals, but their effectiveness varies based on the specific NLM and training data used.
arXiv Detail & Related papers (2022-12-30T03:18:05Z) - Few-Shot Stance Detection via Target-Aware Prompt Distillation [48.40269795901453]
This paper is inspired by the potential capability of pre-trained language models (PLMs) serving as knowledge bases and few-shot learners.
PLMs can provide essential contextual information for the targets and enable few-shot learning via prompts.
Considering the crucial role of the target in stance detection task, we design target-aware prompts and propose a novel verbalizer.
arXiv Detail & Related papers (2022-06-27T12:04:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.