Characterizing the Evolution of Psychological Factors Exploited by Malicious Emails
- URL: http://arxiv.org/abs/2408.11584v1
- Date: Wed, 21 Aug 2024 12:48:32 GMT
- Title: Characterizing the Evolution of Psychological Factors Exploited by Malicious Emails
- Authors: Theodore Longtchi, Shouhuai Xu
- Abstract summary: We characterize the evolution of malicious emails through the lens of Psychological Factors (PFs).
We conduct a case study on 1,260 malicious emails over a span of 21 years (2004 to 2024).
Attackers have been constantly seeking to exploit many PFs, especially the ones that reflect human traits.
- Score: 7.017268913381067
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Cyber attacks, including cyber social engineering attacks such as malicious emails, are always evolving with time. Thus, it is important to understand their evolution. In this paper we characterize the evolution of malicious emails through the lens of Psychological Factors (PFs), which are human psychological attributes that can be exploited by malicious emails (that is, by the attackers who send them). For this purpose, we propose a methodology and apply it to conduct a case study on 1,260 malicious emails over a span of 21 years (2004 to 2024). Our findings include: attackers have been constantly seeking to exploit many PFs, especially the ones that reflect human traits; attackers have been increasingly exploiting 9 PFs, mostly in an implicit or stealthy fashion; and some PFs are often exploited together. These insights shed light on how to design future defenses against malicious emails.
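As a rough illustration of the kind of per-year analysis such a methodology involves (a minimal sketch under assumed data; the file name, column names, and PF labels below are illustrative, not the authors' actual dataset or code):

```python
# Hypothetical sketch: tally how often each Psychological Factor (PF) is
# exploited per year in a manually annotated corpus of malicious emails.
# File name, column names, and PF labels are illustrative assumptions.
import csv
from collections import Counter, defaultdict

def pf_frequency_by_year(path):
    """Return {year: Counter(PF -> count)} from a CSV with 'year' and 'pf' columns."""
    counts = defaultdict(Counter)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            counts[int(row["year"])][row["pf"]] += 1
    return counts

if __name__ == "__main__":
    by_year = pf_frequency_by_year("annotated_emails.csv")  # hypothetical file
    for year in sorted(by_year):
        print(year, by_year[year].most_common(3))  # e.g. 2004 [('Trust', 12), ...]
```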
Related papers
- Exploring Content Concealment in Email [0.48748194765816943]
Modern email filters, one of our few defence mechanisms against malicious emails, are often circumvented by sophisticated attackers.
This study focuses on how attackers exploit HTML and CSS in emails to conceal arbitrary content.
This concealed content remains undetected by the recipient, presenting a serious security risk.
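As a hedged illustration of the concealment technique summarized above (not the study's own tooling), one could flag common style-based hiding tricks such as display:none or zero font size:

```python
# Illustrative sketch: flag CSS tricks that hide text from the recipient
# while keeping it in the raw HTML (display:none, visibility:hidden, font-size:0).
import re
from bs4 import BeautifulSoup  # pip install beautifulsoup4

HIDDEN_STYLE = re.compile(
    r"display\s*:\s*none|visibility\s*:\s*hidden|font-size\s*:\s*0", re.I
)

def find_concealed_text(html):
    """Return the text of elements whose inline style matches a hiding pattern."""
    soup = BeautifulSoup(html, "html.parser")
    hidden = []
    for tag in soup.find_all(style=HIDDEN_STYLE):
        text = tag.get_text(strip=True)
        if text:
            hidden.append(text)
    return hidden

sample = '<p>Invoice attached.</p><span style="display:none">hidden payload text</span>'
print(find_concealed_text(sample))  # ['hidden payload text']
```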
arXiv Detail & Related papers (2024-10-15T01:12:47Z)
- Quantifying Psychological Sophistication of Malicious Emails [4.787538460036984]
Malicious emails are one significant class of cyber social engineering attacks.
The ineffectiveness of current defenses can be attributed to our superficial understanding of the psychological properties that make these attacks successful.
We propose an innovative framework that accommodates two important and complementary aspects of sophistication, dubbed Psychological Techniques (PTechs) and Psychological Tactics (PTacs).
arXiv Detail & Related papers (2024-08-22T08:45:46Z)
- Characterizing the Evolution of Psychological Tactics and Techniques Exploited by Malicious Emails [7.017268913381067]
Psychological Tactics (PTacs) and Psychological Techniques (PTechs) are exploited by malicious emails.
We present a methodology for characterizing the evolution of PTacs and PTechs exploited by malicious emails.
arXiv Detail & Related papers (2024-08-21T12:49:54Z)
- Evaluating the Efficacy of Large Language Models in Identifying Phishing Attempts [2.6012482282204004]
Phishing, a prevalent cybercrime tactic for decades, remains a significant threat in today's digital world.
This paper aims to analyze the effectiveness of 15 Large Language Models (LLMs) in detecting phishing attempts.
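A minimal, hedged sketch of how such an evaluation might prompt a model; `llm_complete` is a hypothetical placeholder and the prompt wording is an assumption, not the paper's protocol:

```python
# Hedged sketch: prompt a large language model to label an email as phishing
# or legitimate. `llm_complete` is a hypothetical stand-in for whichever model
# API is under evaluation.
PROMPT_TEMPLATE = (
    "You are a security analyst. Classify the following email as "
    "'phishing' or 'legitimate'. Answer with a single word.\n\n"
    "Subject: {subject}\n\nBody:\n{body}\n"
)

def classify_email(subject, body, llm_complete):
    prompt = PROMPT_TEMPLATE.format(subject=subject, body=body)
    answer = llm_complete(prompt).strip().lower()
    return "phishing" if "phishing" in answer else "legitimate"
```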
arXiv Detail & Related papers (2024-04-23T19:55:18Z)
- Targeted Attacks: Redefining Spear Phishing and Business Email Compromise [0.17175834535889653]
Some rare, severely damaging email threats - known as spear phishing or Business Email Compromise - have emerged.
We describe targeted-attack-detection techniques as well as social-engineering methods used by fraudsters.
We present text-based attacks - with textual content as malicious payload - and compare non-targeted and targeted variants.
arXiv Detail & Related papers (2023-09-25T14:21:59Z)
- Rethinking Backdoor Attacks [122.1008188058615]
In a backdoor attack, an adversary inserts maliciously constructed backdoor examples into a training set to make the resulting model vulnerable to manipulation.
Defending against such attacks typically involves viewing these inserted examples as outliers in the training set and using techniques from robust statistics to detect and remove them.
We show that without structural information about the training data distribution, backdoor attacks are indistinguishable from naturally-occurring features in the data.
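For concreteness, a minimal sketch (under assumed image-array data) of the poisoning step such an attack relies on; patch size, location, and target label are illustrative assumptions:

```python
# Minimal sketch of the data-poisoning step in a backdoor attack: stamp a small
# trigger patch onto a fraction of the training images and relabel them with
# the attacker's target class. Images are assumed to be an (N, H, W) float array.
import numpy as np

def poison(images, labels, rate=0.01, target_class=0, patch_value=1.0):
    images, labels = images.copy(), labels.copy()
    n_poison = max(1, int(rate * len(images)))
    idx = np.random.choice(len(images), n_poison, replace=False)
    images[idx, -3:, -3:] = patch_value   # 3x3 trigger in the bottom-right corner
    labels[idx] = target_class            # relabel to the attacker's target class
    return images, labels
```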
arXiv Detail & Related papers (2023-07-19T17:44:54Z)
- On the Effectiveness of Adversarial Training against Backdoor Attacks [111.8963365326168]
A backdoored model always predicts a target class in the presence of a predefined trigger pattern.
In general, adversarial training is believed to defend against backdoor attacks.
We propose a hybrid strategy which provides satisfactory robustness across different backdoor attacks.
arXiv Detail & Related papers (2022-02-22T02:24:46Z)
- Mind the Style of Text! Adversarial and Backdoor Attacks Based on Text Style Transfer [49.67011295450601]
We make the first attempt to conduct adversarial and backdoor attacks based on text style transfer.
Experimental results show that popular NLP models are vulnerable to both adversarial and backdoor attacks based on text style transfer.
arXiv Detail & Related papers (2021-10-14T03:54:16Z)
- What Doesn't Kill You Makes You Robust(er): Adversarial Training against Poisons and Backdoors [57.040948169155925]
We extend the adversarial training framework to defend against (training-time) poisoning and backdoor attacks.
Our method desensitizes networks to the effects of poisoning by creating poisons during training and injecting them into training batches.
We show that this defense withstands adaptive attacks, generalizes to diverse threat models, and incurs a better performance trade-off than previous defenses.
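A very rough sketch of this idea, assuming a PyTorch-style training loop; `make_poison`, `model`, `loss_fn`, and `optimizer` are hypothetical placeholders rather than the authors' implementation:

```python
# Rough sketch: craft poisoned copies of part of each batch on the fly and mix
# them back in, so the network is trained to be insensitive to trigger-style
# perturbations. All callables are hypothetical placeholders.
import torch

def train_epoch(model, loader, loss_fn, optimizer, make_poison, poison_frac=0.25):
    model.train()
    for x, y in loader:
        k = max(1, int(poison_frac * x.size(0)))
        x_poi, y_poi = make_poison(x[:k].clone(), y[:k].clone())  # poisons crafted during training
        x_mix = torch.cat([x_poi, x[k:]])
        y_mix = torch.cat([y_poi, y[k:]])
        optimizer.zero_grad()
        loss = loss_fn(model(x_mix), y_mix)
        loss.backward()
        optimizer.step()
```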
arXiv Detail & Related papers (2021-02-26T17:54:36Z)
- Phishing and Spear Phishing: examples in Cyber Espionage and techniques to protect against them [91.3755431537592]
Phishing attacks have become the most used technique in online scams, initiating more than 91% of cyberattacks from 2012 onwards.
This study reviews how phishing and spear phishing attacks are carried out by phishers, through 5 steps that magnify the outcome.
arXiv Detail & Related papers (2020-05-31T18:10:09Z)
- On Certifying Robustness against Backdoor Attacks via Randomized Smoothing [74.79764677396773]
We study the feasibility and effectiveness of certifying robustness against backdoor attacks using a recent technique called randomized smoothing.
Our results show the theoretical feasibility of using randomized smoothing to certify robustness against backdoor attacks.
Existing randomized smoothing methods have limited effectiveness at defending against backdoor attacks.
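For reference, a minimal sketch of randomized smoothing at prediction time (the base classifier, noise level, and sample count are placeholders; the paper's certification analysis is not reproduced here):

```python
# Minimal sketch: classify many Gaussian-noised copies of the input and take a
# majority vote, which is the prediction step of randomized smoothing.
import numpy as np
from collections import Counter

def smoothed_predict(classify, x, sigma=0.25, n=100):
    votes = Counter()
    for _ in range(n):
        noisy = x + np.random.normal(0.0, sigma, size=x.shape)
        votes[classify(noisy)] += 1
    return votes.most_common(1)[0][0]
```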
arXiv Detail & Related papers (2020-02-26T19:15:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.