An Explainable Transformer-based Model for Phishing Email Detection: A
Large Language Model Approach
- URL: http://arxiv.org/abs/2402.13871v1
- Date: Wed, 21 Feb 2024 15:23:21 GMT
- Authors: Mohammad Amaz Uddin and Iqbal H. Sarker
- Abstract summary: Phishing is a serious cyber threat in which attackers send fraudulent emails to deceive users, with the intention of stealing confidential information or causing financial harm.
Despite extensive academic research, phishing detection remains an ongoing and formidable challenge in the cybersecurity landscape.
We present an optimized, fine-tuned transformer-based DistilBERT model designed for the detection of phishing emails.
- Score: 2.8282906214258805
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Phishing is a serious cyber threat in which attackers send
fraudulent emails to deceive users, with the intention of stealing confidential
information or causing financial harm. Attackers, often posing as trustworthy
entities, exploit technological advancements and sophistication to make the
detection and prevention of phishing more challenging. Despite extensive
academic research, phishing detection remains an ongoing and formidable
challenge in the cybersecurity landscape. Large Language Models (LLMs) and
Masked Language Models (MLMs) possess immense potential to offer innovative
solutions to these long-standing challenges. In this research paper, we present
an optimized, fine-tuned transformer-based DistilBERT model designed for the
detection of phishing emails. In the detection process, we work with a phishing
email dataset and apply preprocessing techniques to clean the data and address
its class-imbalance issues. Our experiments show that the model achieves high
accuracy on this task. Finally, we analyze our fine-tuned model using
Explainable-AI (XAI) techniques such as Local Interpretable Model-Agnostic
Explanations (LIME) and Transformer Interpret to explain how it makes
predictions in the context of text classification for phishing emails.
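The pipeline described above — balance the classes, train a classifier, then attribute predictions to individual words — can be sketched in miniature. The snippet below is illustrative only: it substitutes a toy keyword scorer for the paper's fine-tuned DistilBERT model, uses naive random oversampling (the paper does not specify its balancing technique), and implements a simplified LIME-style perturbation attribution rather than the actual LIME library; all names are hypothetical.

```python
import random
import re

# Hypothetical keyword list standing in for a learned classifier.
SUSPICIOUS = {"urgent", "verify", "password", "account", "click"}

def phishing_score(text):
    """Stand-in classifier: fraction of tokens that are suspicious keywords."""
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return 0.0
    return sum(t in SUSPICIOUS for t in tokens) / len(tokens)

def oversample(emails, labels, seed=0):
    """Randomly duplicate minority-class samples until classes are balanced."""
    rng = random.Random(seed)
    by_label = {}
    for email, label in zip(emails, labels):
        by_label.setdefault(label, []).append(email)
    target = max(len(group) for group in by_label.values())
    balanced = []
    for label, group in by_label.items():
        balanced.extend((email, label) for email in group)
        balanced.extend((rng.choice(group), label)
                        for _ in range(target - len(group)))
    rng.shuffle(balanced)
    return balanced

def lime_style_importance(text, predict, n_samples=500, seed=0):
    """Perturbation-based word attribution in the spirit of LIME:
    randomly drop words and credit each word with the average change
    in the classifier's score when it is absent."""
    rng = random.Random(seed)
    words = text.split()
    base = predict(text)
    effect = [0.0] * len(words)
    removed = [0] * len(words)
    for _ in range(n_samples):
        keep = [rng.random() < 0.5 for _ in words]
        score = predict(" ".join(w for w, k in zip(words, keep) if k))
        for i, kept in enumerate(keep):
            if not kept:  # word i was dropped in this perturbation
                effect[i] += base - score
                removed[i] += 1
    return {w: effect[i] / removed[i] if removed[i] else 0.0
            for i, w in enumerate(words)}

email = "Urgent: verify your account password now"
weights = lime_style_importance(email, phishing_score)
# Suspicious words should receive higher importance than filler words.
```

In the paper's actual setting, the stand-in scorer would be replaced by the fine-tuned DistilBERT model's predicted phishing probability, and the attribution step by the LIME or Transformer Interpret libraries.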
Related papers
- Next-Generation Phishing: How LLM Agents Empower Cyber Attackers [10.067883724547182]
The escalating threat of phishing emails has become increasingly sophisticated with the rise of Large Language Models (LLMs).
As attackers exploit LLMs to craft more convincing and evasive phishing emails, it is crucial to assess the resilience of current phishing defenses.
We conduct a comprehensive evaluation of traditional phishing detectors, such as Gmail Spam Filter, Apache SpamAssassin, and Proofpoint, as well as machine learning models like SVM, Logistic Regression, and Naive Bayes.
Our results reveal notable declines in detection accuracy for rephrased emails across all detectors, highlighting critical weaknesses in current phishing defenses.
arXiv Detail & Related papers (2024-11-21T06:20:29Z) - Adapting to Cyber Threats: A Phishing Evolution Network (PEN) Framework for Phishing Generation and Analyzing Evolution Patterns using Large Language Models [10.58220151364159]
Phishing remains a pervasive cyber threat, as attackers craft deceptive emails to lure victims into revealing sensitive information.
While Artificial Intelligence (AI) has become a key component in defending against phishing attacks, these approaches face critical limitations.
We propose the Phishing Evolution Network (PEN), a framework leveraging large language models (LLMs) and adversarial training mechanisms to continuously generate high-quality, realistic, and diverse phishing samples.
arXiv Detail & Related papers (2024-11-18T09:03:51Z) - Adversarial Robustification via Text-to-Image Diffusion Models [56.37291240867549]
Adversarial robustness has conventionally been considered a challenging property to encode into neural networks.
We develop a scalable and model-agnostic solution to achieve adversarial robustness without using any data.
arXiv Detail & Related papers (2024-07-26T10:49:14Z) - Stealth edits to large language models [76.53356051271014]
We show that a single metric can be used to assess a model's editability.
We also reveal the vulnerability of language models to stealth attacks.
arXiv Detail & Related papers (2024-06-18T14:43:18Z) - Novel Interpretable and Robust Web-based AI Platform for Phishing Email Detection [0.0]
Phishing emails pose a significant threat, causing financial losses and security breaches.
This study proposes a high-performance machine learning model for email classification.
The model achieves an F1 score of 0.99 and is designed for deployment within relevant applications.
arXiv Detail & Related papers (2024-05-19T17:18:27Z) - Evaluating the Efficacy of Large Language Models in Identifying Phishing Attempts [2.6012482282204004]
Phishing, a prevalent cybercrime tactic for decades, remains a significant threat in today's digital world.
This paper aims to analyze the effectiveness of 15 Large Language Models (LLMs) in detecting phishing attempts.
arXiv Detail & Related papers (2024-04-23T19:55:18Z) - An Improved Transformer-based Model for Detecting Phishing, Spam, and
Ham: A Large Language Model Approach [0.0]
We present IPSDM, a model based on fine-tuning the BERT family of models specifically to detect phishing and spam emails.
We demonstrate that our fine-tuned version, IPSDM, is able to better classify emails in both unbalanced and balanced datasets.
arXiv Detail & Related papers (2023-11-01T18:41:50Z) - Towards General Visual-Linguistic Face Forgery Detection [95.73987327101143]
Deepfakes are realistic face manipulations that can pose serious threats to security, privacy, and trust.
Existing methods mostly treat this task as binary classification, which uses digital labels or mask signals to train the detection model.
We propose a novel paradigm named Visual-Linguistic Face Forgery Detection (VLFFD), which uses fine-grained sentence-level prompts as the annotation.
arXiv Detail & Related papers (2023-07-31T10:22:33Z) - Unleashing Mask: Explore the Intrinsic Out-of-Distribution Detection
Capability [70.72426887518517]
Out-of-distribution (OOD) detection is an indispensable aspect of secure AI when deploying machine learning models in real-world applications.
We propose a novel method, Unleashing Mask, which aims to restore the OOD discriminative capabilities of the well-trained model with ID data.
Our method utilizes a mask to identify the memorized atypical samples, and then fine-tunes or prunes the model with the introduced mask to forget them.
arXiv Detail & Related papers (2023-06-06T14:23:34Z) - Deep convolutional forest: a dynamic deep ensemble approach for spam
detection in text [219.15486286590016]
This paper introduces a dynamic deep ensemble model for spam detection that adjusts its complexity and extracts features automatically.
As a result, the model achieved high precision, recall, and F1-score, with an accuracy of 98.38%.
arXiv Detail & Related papers (2021-10-10T17:19:37Z) - Adversarial Watermarking Transformer: Towards Tracing Text Provenance
with Data Hiding [80.3811072650087]
We study natural language watermarking as a defense to help better mark and trace the provenance of text.
We introduce the Adversarial Watermarking Transformer (AWT) with a jointly trained encoder-decoder and adversarial training.
AWT is the first end-to-end model to hide data in text by automatically learning -- without ground truth -- word substitutions along with their locations.
arXiv Detail & Related papers (2020-09-07T11:01:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.