Constructing and Benchmarking: a Labeled Email Dataset for Text-Based Phishing and Spam Detection Framework
- URL: http://arxiv.org/abs/2511.21448v1
- Date: Wed, 26 Nov 2025 14:40:06 GMT
- Title: Constructing and Benchmarking: a Labeled Email Dataset for Text-Based Phishing and Spam Detection Framework
- Authors: Rebeka Toth, Tamas Bisztray, Richard Dubniczky
- Abstract summary: This study presents a comprehensive email dataset containing phishing, spam, and legitimate messages. Each email is annotated with its category, emotional appeal, authority, and underlying motivation. Results highlight strong phishing detection capabilities but reveal persistent challenges in distinguishing spam from legitimate emails.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Phishing and spam emails remain a major cybersecurity threat, with attackers increasingly leveraging Large Language Models (LLMs) to craft highly deceptive content. This study presents a comprehensive email dataset containing phishing, spam, and legitimate messages, explicitly distinguishing between human- and LLM-generated content. Each email is annotated with its category, emotional appeal (e.g., urgency, fear, authority), and underlying motivation (e.g., link-following, credential theft, financial fraud). We benchmark multiple LLMs on their ability to identify these emotional and motivational cues and select the most reliable model to annotate the full dataset. To evaluate classification robustness, emails were also rephrased using several LLMs while preserving meaning and intent. A state-of-the-art LLM was then assessed on its performance across both original and rephrased emails using expert-labeled ground truth. The results highlight strong phishing detection capabilities but reveal persistent challenges in distinguishing spam from legitimate emails. Our dataset and evaluation framework contribute to improving AI-assisted email security systems. To support open science, all code, templates, and resources are available on our project site.
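The abstract describes prompting an LLM to annotate each email with a category, an emotional appeal, and a motivation, then validating its output. A minimal sketch of that annotation loop is below; the prompt wording, label names, and JSON reply format are assumptions for illustration, not the authors' actual templates.

```python
import json

# Assumed label sets mirroring the dataset's annotation schema
# (the paper's exact label vocabulary is not given in the abstract).
CATEGORIES = {"phishing", "spam", "legitimate"}

PROMPT_TEMPLATE = (
    "Classify the email below. Respond with JSON containing the keys "
    '"category", "emotional_appeal", and "motivation".\n\n'
    "Email:\n{body}"
)

def build_prompt(body: str) -> str:
    """Render the classification prompt for one email."""
    return PROMPT_TEMPLATE.format(body=body)

def parse_annotation(raw: str) -> dict:
    """Validate an LLM's JSON reply against the label schema."""
    ann = json.loads(raw)
    if ann.get("category") not in CATEGORIES:
        raise ValueError(f"unknown category: {ann.get('category')}")
    return ann

# Hypothetical model reply, shown only to exercise the parser.
example_reply = (
    '{"category": "phishing", "emotional_appeal": "urgency", '
    '"motivation": "credential theft"}'
)
annotation = parse_annotation(example_reply)
```

Validating replies against a fixed schema matters here because the same pipeline is re-run over LLM-rephrased emails, where free-form model output is most likely to drift.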
Related papers
- Paladin: Defending LLM-enabled Phishing Emails with a New Trigger-Tag Paradigm [26.399199616508596]
Malicious users can synthesize phishing emails that are free from spelling mistakes and other easily detectable features. Such models can generate topic-specific phishing messages, tailoring content to the target domain. Most existing semantic-level detection approaches struggle to identify them reliably. We propose Paladin, which embeds trigger-tag associations into a vanilla LLM using various insertion strategies. When an instrumented LLM generates content related to phishing, it will automatically include detectable tags, enabling easier identification.
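The idea above is that an instrumented model leaves detectable tags in phishing-related output, so detection reduces to scanning generated text for known tags. A toy sketch of that scanning step is below; the tag strings are invented placeholders, since Paladin's actual trigger-tag pairs are learned and not listed in the abstract.

```python
# Hypothetical tag vocabulary standing in for Paladin's learned tags.
DETECTABLE_TAGS = ("<phish-tag-17>", "<phish-tag-42>")

def contains_phishing_tag(generated_text: str) -> bool:
    """Flag model output that carries an embedded detectable tag."""
    return any(tag in generated_text for tag in DETECTABLE_TAGS)

flagged = contains_phishing_tag(
    "Dear user, verify your account <phish-tag-17> now."
)
clean = contains_phishing_tag("Meeting moved to 3pm.")
```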
arXiv Detail & Related papers (2025-09-08T23:44:00Z)
- LLM-Powered Intent-Based Categorization of Phishing Emails [0.0]
This paper investigates the practical potential of Large Language Models (LLMs) to detect phishing emails by focusing on their intent. We introduce an intent-type taxonomy, which is operationalized by the LLMs to classify emails into distinct categories and, therefore, generate actionable threat information. Our results demonstrate that existing LLMs are capable of detecting and categorizing phishing emails, underscoring their potential in this domain.
arXiv Detail & Related papers (2025-06-17T09:21:55Z)
- Idiosyncrasies in Large Language Models [54.26923012617675]
We unveil and study idiosyncrasies in Large Language Models (LLMs). We find that fine-tuning text embedding models on LLM-generated texts yields excellent classification accuracy. We leverage LLMs as judges to generate detailed, open-ended descriptions of each model's idiosyncrasies.
arXiv Detail & Related papers (2025-02-17T18:59:02Z)
- ChatSpamDetector: Leveraging Large Language Models for Effective Phishing Email Detection [2.3999111269325266]
This study introduces ChatSpamDetector, a system that uses large language models (LLMs) to detect phishing emails.
By converting email data into a prompt suitable for LLM analysis, the system provides a highly accurate determination of whether an email is phishing or not.
We conducted an evaluation using a comprehensive phishing email dataset and compared our system to several LLMs and baseline systems.
arXiv Detail & Related papers (2024-02-28T06:28:15Z)
- Prompted Contextual Vectors for Spear-Phishing Detection [41.26408609344205]
Spear-phishing attacks present a significant security challenge. We propose a detection approach based on a novel document vectorization method. Our method achieves a 91% F1 score in identifying LLM-generated spear-phishing emails.
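Since several entries in this list report F1 scores, it may help to recall how the metric is computed from a confusion matrix: F1 is the harmonic mean of precision and recall. The counts below are illustrative only, not figures from any of the papers.

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """F1 = harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Toy counts chosen so precision = recall = 0.91; not the paper's data.
score = f1_score(tp=91, fp=9, fn=9)
```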
arXiv Detail & Related papers (2024-02-13T09:12:55Z)
- Silent Guardian: Protecting Text from Malicious Exploitation by Large Language Models [63.91178922306669]
We introduce Silent Guardian, a text protection mechanism against large language models (LLMs).
By carefully modifying the text to be protected, TPE can induce LLMs to first sample the end token, thus directly terminating the interaction.
We show that SG can effectively protect the target text under various configurations and achieve almost 100% protection success rate in some cases.
arXiv Detail & Related papers (2023-12-15T10:30:36Z) - Anomaly Detection in Emails using Machine Learning and Header
Information [0.0]
Anomalies in emails such as phishing and spam present major security risks.
Previous studies on email anomaly detection relied on a single type of anomaly and the analysis of the email body and subject content.
This study conducted feature extraction and selection on email header datasets and leveraged both multi and one-class anomaly detection approaches.
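The entry above combines header feature extraction with one-class anomaly detection. A minimal sketch of that pipeline is below, using Python's standard-library `email` parser and a simple z-score rule in place of the paper's actual feature set and detectors, which the abstract does not specify.

```python
from email.parser import Parser
from statistics import mean, stdev

def header_features(raw: str) -> list:
    """Extract a few simple numeric features from an email's headers."""
    msg = Parser().parsestr(raw, headersonly=True)
    received_hops = len(msg.get_all("Received") or [])
    subject_len = len(msg.get("Subject", ""))
    has_reply_to = 1.0 if msg.get("Reply-To") else 0.0
    return [float(received_hops), float(subject_len), has_reply_to]

def zscore_anomaly(train, x, k: float = 3.0) -> bool:
    """One-class rule: flag x if any feature lies more than k
    standard deviations from the training mean."""
    for i, xi in enumerate(x):
        col = [row[i] for row in train]
        mu, sigma = mean(col), stdev(col)
        if sigma > 0 and abs(xi - mu) / sigma > k:
            return True
    return False

feats = header_features("Subject: hi\nReply-To: a@example.com\n\nbody")
# Toy "benign" training vectors for the one-class rule.
benign = [[1.0, 10.0, 0.0], [2.0, 12.0, 0.0],
          [1.0, 11.0, 0.0], [2.0, 9.0, 0.0]]
```

A one-class formulation fits this setting because benign mail is plentiful while labeled anomalies (phishing, spam) are scarce and heterogeneous.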
arXiv Detail & Related papers (2022-03-19T23:31:23Z) - Deep convolutional forest: a dynamic deep ensemble approach for spam
detection in text [219.15486286590016]
This paper introduces a dynamic deep ensemble model for spam detection that adjusts its complexity and extracts features automatically.
As a result, the model achieved high precision, recall, and F1-score, with an accuracy of 98.38%.
arXiv Detail & Related papers (2021-10-10T17:19:37Z) - Robust and Verifiable Information Embedding Attacks to Deep Neural
Networks via Error-Correcting Codes [81.85509264573948]
In the era of deep learning, a user often leverages a third-party machine learning tool to train a deep neural network (DNN) classifier.
In an information embedding attack, an attacker is the provider of a malicious third-party machine learning tool.
In this work, we aim to design information embedding attacks that are verifiable and robust against popular post-processing methods.
arXiv Detail & Related papers (2020-10-26T17:42:42Z) - Learning with Weak Supervision for Email Intent Detection [56.71599262462638]
We propose to leverage user actions as a source of weak supervision to detect intents in emails.
We develop an end-to-end robust deep neural network model for email intent identification.
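The core idea in this entry is to turn user actions into weak labels for email intent. A toy labeling function in that spirit is sketched below; the action names and the action-to-intent mapping are hypothetical, as the abstract does not enumerate them.

```python
from typing import Optional

# Hypothetical action-to-intent mapping used as a weak labeler;
# the paper's actual signals and intent taxonomy may differ.
def weak_intent_label(actions: set) -> Optional[str]:
    """Map observed user actions to a weak intent label, or abstain."""
    if "reply" in actions or "forward" in actions:
        return "request_information"
    if "create_calendar_event" in actions:
        return "schedule_meeting"
    return None  # abstain: no informative action observed

label = weak_intent_label({"reply"})
abstain = weak_intent_label({"archive"})
```

Abstaining on uninformative actions is the key design choice: weak labels are noisy, so it is better to leave an email unlabeled than to guess, and the downstream model trains only on the labeled subset.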
arXiv Detail & Related papers (2020-05-26T23:41:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality or accuracy of the listed information and is not responsible for any consequences of its use.