Anti-Phishing Training Does Not Work: A Large-Scale Empirical Assessment of Multi-Modal Training Grounded in the NIST Phish Scale
- URL: http://arxiv.org/abs/2506.19899v1
- Date: Tue, 24 Jun 2025 17:57:10 GMT
- Title: Anti-Phishing Training Does Not Work: A Large-Scale Empirical Assessment of Multi-Modal Training Grounded in the NIST Phish Scale
- Authors: Andrew T. Rozema, James C. Davis
- Abstract summary: Phishing attacks are a critical cybersecurity threat. Many organizations allocate a substantial portion of their cybersecurity budgets to phishing awareness training. Empirical evidence of training (in)effectiveness is essential for evidence-based cybersecurity investment.
- Score: 3.599344290726663
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Social engineering attacks using email, commonly known as phishing, are a critical cybersecurity threat. Phishing attacks often lead to operational incidents and data breaches. As a result, many organizations allocate a substantial portion of their cybersecurity budgets to phishing awareness training, driven in part by compliance requirements. However, the effectiveness of this training remains in dispute. Empirical evidence of training (in)effectiveness is essential for evidence-based cybersecurity investment and policy development. Despite recent measurement studies, two critical gaps remain in the literature: (1) we lack a validated measure of phishing lure difficulty, and (2) there are few comparisons of different types of training in real-world business settings. To fill these gaps, we conducted a large-scale study ($N = 12{,}511$) of phishing effectiveness at a US-based financial technology (``fintech'') firm. Our two-factor design compared the effect of treatments (lecture-based, interactive, and control groups) on subjects' susceptibility to phishing lures of varying complexity (using the NIST Phish Scale). The NIST Phish Scale successfully predicted behavior (click rates: 7.0\% easy to 15.0\% hard emails, p $<$ 0.001), but training showed no significant main effects on clicks (p = 0.450) or reporting (p = 0.417). Effect sizes remained below 0.01, indicating little practical value in any of the phishing trainings we deployed. Our results add to the growing evidence that phishing training is ineffective, reinforcing the importance of phishing defense-in-depth and the merit of changes to processes and technology to reduce reliance on humans, as well as rebuking the training costs necessitated by regulatory requirements.
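The reported difficulty effect (click rates rising from 7.0% on easy lures to 15.0% on hard lures, p < 0.001) can be illustrated with a standard two-proportion z-test. The sketch below is for illustration only: the group sizes (4,000 per difficulty level) and the helper function `two_proportion_z` are assumptions, not the study's actual raw data or code.

```python
import math

def two_proportion_z(clicks_a, n_a, clicks_b, n_b):
    """Two-sided two-proportion z-test for a difference in click rates."""
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    # Pooled proportion under the null hypothesis of equal rates
    p_pool = (clicks_a + clicks_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical counts: 7% of 4,000 subjects click an "easy" lure,
# 15% of 4,000 click a "hard" lure (rates match the abstract; Ns do not).
z, p = two_proportion_z(280, 4000, 600, 4000)
print(f"z = {z:.2f}, p = {p:.3g}")
```

With rates this far apart at these sample sizes, the test strongly rejects equal click rates, consistent with the abstract's claim that the NIST Phish Scale predicted behavior; the null result for training would correspond to a z-statistic near zero in the same test.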
Related papers
- AdaPhish: AI-Powered Adaptive Defense and Education Resource Against Deceptive Emails [0.0]
AdaPhish is an AI-powered phish bowl platform that automatically anonymizes and analyzes phishing emails. It achieves real-time detection and adaptation to new phishing tactics while enabling long-term tracking of phishing trends. AdaPhish presents a scalable, collaborative solution for phishing detection and cybersecurity education.
arXiv Detail & Related papers (2025-02-05T21:17:19Z)
- PEEK: Phishing Evolution Framework for Phishing Generation and Evolving Pattern Analysis using Large Language Models [10.455333111937598]
Phishing remains a pervasive cyber threat, as attackers craft deceptive emails to lure victims into revealing sensitive information. Deep learning has become a key component in defending against phishing attacks, but these approaches face critical limitations. We propose the first Phishing Evolution FramEworK (PEEK) for augmenting phishing email datasets with respect to quality and diversity.
arXiv Detail & Related papers (2024-11-18T09:03:51Z)
- Efficient Backdoor Defense in Multimodal Contrastive Learning: A Token-Level Unlearning Method for Mitigating Threats [52.94388672185062]
We propose an efficient defense mechanism against backdoor threats using a concept known as machine unlearning.
This entails strategically creating a small set of poisoned samples to aid the model's rapid unlearning of backdoor vulnerabilities.
In the backdoor unlearning process, we present a novel token-based portion unlearning training regime.
arXiv Detail & Related papers (2024-09-29T02:55:38Z)
- An Innovative Information Theory-based Approach to Tackle and Enhance The Transparency in Phishing Detection [23.962076093344166]
We propose an innovative deep learning-based approach for phishing attack localization.
Our method can not only predict the vulnerability of the email data but also automatically learn and figure out the most important and phishing-relevant information.
arXiv Detail & Related papers (2024-02-27T00:03:07Z)
- A Study of Different Awareness Campaigns in a Company [0.0]
Phishing is a major cyber threat to organizations that can cause financial and reputational damage.
This paper examines how awareness concepts can be successfully implemented and validated.
arXiv Detail & Related papers (2023-08-29T09:57:11Z)
- G$^2$uardFL: Safeguarding Federated Learning Against Backdoor Attacks through Attributed Client Graph Clustering [116.4277292854053]
Federated Learning (FL) offers collaborative model training without data sharing.
FL is vulnerable to backdoor attacks, where poisoned model weights lead to compromised system integrity.
We present G$^2$uardFL, a protective framework that reinterprets the identification of malicious clients as an attributed graph clustering problem.
arXiv Detail & Related papers (2023-06-08T07:15:04Z)
- SoK: Human-Centered Phishing Susceptibility [4.794822439017277]
We propose a three-stage Phishing Susceptibility Model (PSM) for explaining how humans are involved in phishing detection and prevention.
This model reveals several research gaps that need to be addressed to improve users' detection performance.
arXiv Detail & Related papers (2022-02-16T07:26:53Z)
- On the Impact of Hard Adversarial Instances on Overfitting in Adversarial Training [70.82725772926949]
Adversarial training is a popular method to robustify models against adversarial attacks. In this work, we investigate this phenomenon from the perspective of training instances. We show that the decay in generalization performance of adversarial training is a result of fitting hard adversarial instances.
arXiv Detail & Related papers (2021-12-14T12:19:24Z)
- Online Adversarial Attacks [57.448101834579624]
We formalize the online adversarial attack problem, emphasizing two key elements found in real-world use-cases.
We first rigorously analyze a deterministic variant of the online threat model.
We then propose a simple yet practical algorithm yielding a provably better competitive ratio for $k=2$ over the current best single threshold algorithm.
arXiv Detail & Related papers (2021-03-02T20:36:04Z)
- Toward Smart Security Enhancement of Federated Learning Networks [109.20054130698797]
In this paper, we review the vulnerabilities of federated learning networks (FLNs) and give an overview of poisoning attacks.
We present a smart security enhancement framework for FLNs.
Deep reinforcement learning is applied to learn the behaving patterns of the edge devices (EDs) that can provide benign training results.
arXiv Detail & Related papers (2020-08-19T08:46:39Z)
- Phishing and Spear Phishing: examples in Cyber Espionage and techniques to protect against them [91.3755431537592]
Phishing attacks have become the most widely used technique in online scams, initiating more than 91% of cyberattacks from 2012 onwards. This study reviews how Phishing and Spear Phishing attacks are carried out by phishers through five steps that magnify the outcome.
arXiv Detail & Related papers (2020-05-31T18:10:09Z)
- Overfitting in adversarially robust deep learning [86.11788847990783]
We show that overfitting to the training set does in fact harm robust performance to a very large degree in adversarially robust training.
We also show that effects such as the double descent curve do still occur in adversarially trained models, yet fail to explain the observed overfitting.
arXiv Detail & Related papers (2020-02-26T15:40:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.