Content, Nudges and Incentives: A Study on the Effectiveness and Perception of Embedded Phishing Training
- URL: http://arxiv.org/abs/2409.01378v1
- Date: Mon, 2 Sep 2024 17:17:44 GMT
- Title: Content, Nudges and Incentives: A Study on the Effectiveness and Perception of Embedded Phishing Training
- Authors: Daniele Lain, Tarek Jost, Sinisa Matetic, Kari Kostiainen, Srdjan Capkun
- Abstract summary: We investigate embedded phishing training in three aspects: knowledge gains from its content, nudges and reminders from the test itself, and the deterrent effect of potential consequences.
Our study contributes several novel findings on the training practice.
- Score: 14.482027080866104
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A common form of phishing training in organizations is the use of simulated phishing emails to test employees' susceptibility to phishing attacks, and the immediate delivery of training material to those who fail the test. This widespread practice is dubbed embedded training; however, its effectiveness in decreasing the likelihood of employees falling for phishing again in the future is questioned by the contradictory findings of several recent field studies. We investigate embedded phishing training in three aspects. First, we observe that the practice incorporates different components -- knowledge gains from its content, nudges and reminders from the test itself, and the deterrent effect of potential consequences -- our goal is to study which ones are more effective, if any. Second, we explore two potential improvements to training, namely its timing and the use of incentives. Third, we analyze employees' reception and perception of the practice. For this, we conducted a large-scale mixed-methods (quantitative and qualitative) study on the employees of a partner company. Our study contributes several novel findings on the training practice: in particular, its effectiveness comes from its nudging effect, i.e., the periodic reminder of the threat rather than from its content, which is rarely consumed by employees due to lack of time and perceived usefulness. Further, delaying training to ease time pressure is as effective as currently established practices, while rewards do not improve secure behavior. Finally, some of our results support previous findings with increased ecological validity, e.g., that phishing is an attention problem, rather than a knowledge one, even for the most susceptible employees, and thus enforcing training does not help.
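To make the described practice concrete, the sketch below outlines how one round of embedded training is typically orchestrated: simulated phishing emails go out, clicks are recorded, and training material is delivered to those who fail, either immediately or after a delay (the timing variant the study evaluates). This is a minimal, hypothetical illustration; the paper is an empirical field study and does not specify an implementation, so every name in the code (Employee, run_campaign, delay_training, and so on) is an assumption rather than an artifact of the study.

```python
# Hypothetical sketch of one embedded-training round; names are illustrative,
# not taken from the paper, which studies the practice empirically.
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import List, Optional


@dataclass
class Employee:
    email: str
    clicked_simulation: bool = False           # True = failed the embedded test
    training_scheduled_at: Optional[datetime] = None


def send_simulated_phish(employee: Employee) -> None:
    """Placeholder for delivering a benign, simulated phishing email."""
    print(f"[simulation] phishing email sent to {employee.email}")


def deliver_training(employee: Employee, delay: timedelta = timedelta(0)) -> None:
    """Schedule training material, optionally delayed to ease time pressure."""
    employee.training_scheduled_at = datetime.now() + delay
    print(f"[training] {employee.email} scheduled for "
          f"{employee.training_scheduled_at:%Y-%m-%d %H:%M}")


def run_campaign(employees: List[Employee], delay_training: bool = False) -> None:
    """One round: test every employee, then train only those who failed."""
    for emp in employees:
        send_simulated_phish(emp)
    # In practice, clicks are reported back by the mail/web infrastructure;
    # here the outcome is simply read from the recorded flag.
    for emp in employees:
        if emp.clicked_simulation:
            deliver_training(emp, timedelta(days=1) if delay_training else timedelta(0))


if __name__ == "__main__":
    staff = [Employee("alice@example.com"),
             Employee("bob@example.com", clicked_simulation=True)]
    run_campaign(staff, delay_training=True)
```

In the study's terms, the delayed variant eases time pressure at the moment of failure; the authors report it is as effective as the established practice of immediate delivery.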
Related papers
- Early Period of Training Impacts Out-of-Distribution Generalization [56.283944756315066]
We investigate the relationship between learning dynamics and OOD generalization during the early period of neural network training.
We show that selecting the number of trainable parameters at different times during training has a minuscule impact on ID results.
The absolute values of sharpness and trace of Fisher Information at the initial period of training are not indicative for OOD generalization.
arXiv Detail & Related papers (2024-03-22T13:52:53Z) - Reward Shaping for Happier Autonomous Cyber Security Agents [0.276240219662896]
One of the most promising directions uses deep reinforcement learning to train autonomous agents in computer network defense tasks.
This work studies the impact of the reward signal that is provided to the agents when training for this task.
arXiv Detail & Related papers (2023-10-20T15:04:42Z) - A Study of Different Awareness Campaigns in a Company [0.0]
Phishing is a major cyber threat to organizations that can cause financial and reputational damage.
This paper examines how awareness concepts can be successfully implemented and validated.
arXiv Detail & Related papers (2023-08-29T09:57:11Z) - Where Did You Learn That From? Surprising Effectiveness of Membership Inference Attacks Against Temporally Correlated Data in Deep Reinforcement Learning [114.9857000195174]
A major challenge to widespread industrial adoption of deep reinforcement learning is the potential vulnerability to privacy breaches.
We propose an adversarial attack framework tailored for testing the vulnerability of deep reinforcement learning algorithms to membership inference attacks.
arXiv Detail & Related papers (2021-09-08T23:44:57Z) - Adversarial Training is Not Ready for Robot Learning [55.493354071227174]
Adversarial training is an effective method to train deep learning models that are resilient to norm-bounded perturbations.
We show theoretically and experimentally that neural controllers obtained via adversarial training are subjected to three types of defects.
Our results suggest that adversarial training is not yet ready for robot learning.
arXiv Detail & Related papers (2021-03-15T07:51:31Z) - Certified Defenses: Why Tighter Relaxations May Hurt Training? [12.483260526189447]
Training with tighter relaxations can worsen certified robustness.
We identify two key features of relaxations that impact training dynamics: continuity and sensitivity.
For the first time, it is possible to successfully train with tighter relaxations.
arXiv Detail & Related papers (2021-02-12T18:57:24Z) - Towards Understanding Fast Adversarial Training [91.8060431517248]
We conduct experiments to understand the behavior of fast adversarial training.
We show the key to its success is the ability to recover from overfitting to weak attacks.
arXiv Detail & Related papers (2020-06-04T18:19:43Z) - Overfitting in adversarially robust deep learning [86.11788847990783]
We show that overfitting to the training set does in fact harm robust performance to a very large degree in adversarially robust training.
We also show that effects such as the double descent curve do still occur in adversarially trained models, yet fail to explain the observed overfitting.
arXiv Detail & Related papers (2020-02-26T15:40:50Z) - Combating False Negatives in Adversarial Imitation Learning [67.99941805086154]
In adversarial imitation learning, a discriminator is trained to differentiate agent episodes from expert demonstrations representing the desired behavior.
As the trained policy learns to be more successful, the negative examples become increasingly similar to expert ones.
We propose a method to alleviate the impact of false negatives and test it on the BabyAI environment.
arXiv Detail & Related papers (2020-02-02T14:56:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.