Characterizing the Evolution of Psychological Tactics and Techniques Exploited by Malicious Emails
- URL: http://arxiv.org/abs/2408.11586v1
- Date: Wed, 21 Aug 2024 12:49:54 GMT
- Title: Characterizing the Evolution of Psychological Tactics and Techniques Exploited by Malicious Emails
- Authors: Theodore Longtchi, Shouhuai Xu
- Abstract summary: Psychological Tactics (PTacs) and Psychological Techniques (PTechs) are exploited by malicious emails.
We present a methodology for characterizing the evolution of PTacs and PTechs exploited by malicious emails.
- Score: 7.017268913381067
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The landscape of malicious emails, and of cyber social engineering attacks in general, is constantly evolving. In order to design effective defenses against these attacks, we must deeply understand the Psychological Tactics (PTacs) and Psychological Techniques (PTechs) that are exploited by these attacks. In this paper we present a methodology for characterizing the evolution of PTacs and PTechs exploited by malicious emails. As a case study, we apply the methodology to a real-world dataset. This leads to a number of insights, such as which PTacs or PTechs are more often exploited than others. These insights shed light on directions for future research towards designing psychologically-principled solutions to effectively counter malicious emails.
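To make the flavor of such an analysis concrete, here is a minimal sketch (my illustration, not the authors' actual methodology): it assumes a hypothetical CSV of malicious emails annotated with a year and a semicolon-separated list of exploited PTechs, and computes each PTech's per-year prevalence.

```python
# Hypothetical sketch: per-year prevalence of Psychological Techniques (PTechs)
# in annotated malicious emails. The file name, column names ("year", "ptechs"),
# and annotation format are assumptions, not the paper's actual schema.
from collections import Counter, defaultdict
import csv

def ptech_prevalence(path):
    """Return {year: {ptech: fraction of that year's emails exploiting it}}."""
    per_year_counts = defaultdict(Counter)  # year -> PTech -> #emails
    per_year_totals = Counter()             # year -> #emails
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            year = int(row["year"])
            per_year_totals[year] += 1
            for ptech in row["ptechs"].split(";"):  # e.g. "Urgency;Impersonation"
                per_year_counts[year][ptech.strip()] += 1
    return {
        year: {p: c / per_year_totals[year] for p, c in counts.items()}
        for year, counts in per_year_counts.items()
    }

if __name__ == "__main__":
    trends = ptech_prevalence("annotated_emails.csv")
    for year in sorted(trends):
        top = max(trends[year], key=trends[year].get)
        print(f"{year}: most-exploited PTech = {top} ({trends[year][top]:.0%})")
```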
Related papers
- Quantifying Psychological Sophistication of Malicious Emails [4.787538460036984]
Malicious emails are one significant class of cyber social engineering attacks.
The ineffectiveness of current defenses can be attributed to our superficial understanding of the psychological properties that make these attacks successful.
We propose an innovative framework that accommodates two important and complementary aspects of sophistication, dubbed Psychological Techniques (PTechs) and Psychological Tactics (PTacs).
arXiv Detail & Related papers (2024-08-22T08:45:46Z) - Characterizing the Evolution of Psychological Factors Exploited by Malicious Emails [7.017268913381067]
We characterize the evolution of malicious emails through the lens of Psychological Factors, PFs.
We conduct a case study on 1,260 malicious emails over a span of 21 years, 2004 to 2024.
Attackers have been constantly seeking to exploit many PFs, especially the ones that reflect human traits.
arXiv Detail & Related papers (2024-08-21T12:48:32Z) - BadCLIP: Dual-Embedding Guided Backdoor Attack on Multimodal Contrastive Learning [85.2564206440109]
This paper reveals that, in this practical scenario, backdoor attacks can remain effective even after defenses are applied.
We introduce the BadCLIP attack, which is resistant to backdoor detection and model fine-tuning defenses.
arXiv Detail & Related papers (2023-11-20T02:21:49Z) - The Anatomy of Deception: Technical and Human Perspectives on a Large-scale Phishing Campaign [4.369550829556578]
This study takes an unprecedented deep dive into large-scale phishing campaigns aimed at Meta's users.
Analysing data from over 25,000 victims worldwide, we highlight the nuances of these campaigns.
Through the application of advanced computational techniques, including natural language processing and machine learning, this work unveils critical insights into the psyche of victims.
arXiv Detail & Related papers (2023-10-05T12:24:24Z) - Targeted Attacks: Redefining Spear Phishing and Business Email Compromise [0.17175834535889653]
Some rare, severely damaging email threats - known as spear phishing or Business Email Compromise - have emerged.
We describe targeted-attack-detection techniques as well as social-engineering methods used by fraudsters.
We present text-based attacks - with textual content as malicious payload - and compare non-targeted and targeted variants.
arXiv Detail & Related papers (2023-09-25T14:21:59Z) - Baseline Defenses for Adversarial Attacks Against Aligned Language Models [109.75753454188705]
Recent work shows that text optimizers can produce jailbreaking prompts that bypass moderation and alignment defenses.
We look at three types of defenses: detection (perplexity based), input preprocessing (paraphrase and retokenization), and adversarial training.
We find that the weakness of existing discrete optimizers for text, combined with the relatively high costs of optimization, makes standard adaptive attacks more challenging for LLMs.
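As a concrete illustration of the perplexity-based detection idea (a minimal sketch: GPT-2 as the scoring model and the threshold value are my assumptions, not the paper's configuration), optimizer-crafted adversarial suffixes tend to look like gibberish, so their perplexity is far above that of fluent natural-language prompts.

```python
# Minimal sketch of perplexity-based jailbreak detection (illustrative only;
# GPT-2 and the threshold are assumptions, not the paper's configuration).
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

@torch.no_grad()
def perplexity(text: str) -> float:
    ids = tokenizer(text, return_tensors="pt").input_ids
    loss = model(ids, labels=ids).loss  # mean token cross-entropy
    return torch.exp(loss).item()

def is_suspicious(prompt: str, threshold: float = 1000.0) -> bool:
    # Optimizer-generated adversarial suffixes tend to be gibberish-like,
    # so their perplexity is far above that of fluent natural language.
    return perplexity(prompt) > threshold

print(is_suspicious("Please summarize this article for me."))           # expect False
print(is_suspicious("describing.\\ + similarlyNow oppositeley ]( me"))  # likely True
```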
arXiv Detail & Related papers (2023-09-01T17:59:44Z) - Visually Adversarial Attacks and Defenses in the Physical World: A Survey [27.40548512511512]
The current adversarial attacks in computer vision can be divided into digital attacks and physical attacks according to their different attack forms.
In this paper, we present a survey of current physically adversarial attacks and physically adversarial defenses in computer vision.
arXiv Detail & Related papers (2022-11-03T09:28:45Z) - Evaluating the Robustness of Semantic Segmentation for Autonomous Driving against Real-World Adversarial Patch Attacks [62.87459235819762]
In a real-world scenario like autonomous driving, more attention should be devoted to real-world adversarial examples (RWAEs).
This paper presents an in-depth evaluation of the robustness of popular semantic segmentation (SS) models by testing the effects of both digital and real-world adversarial patches.
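A rough sketch of what a digital-patch sensitivity check could look like (my illustration, using a random stand-in patch and a torchvision DeepLabV3 model; the paper crafts optimized patches and follows its own evaluation protocol):

```python
# Rough sketch: how much does pasting a patch change a segmentation map?
# (Illustrative only: random stand-in patch and placeholder image, not the
# paper's patch-crafting method or evaluation protocol.)
import torch
from torchvision.models.segmentation import deeplabv3_resnet50

model = deeplabv3_resnet50(weights="DEFAULT").eval()

def paste_patch(img: torch.Tensor, patch: torch.Tensor, top: int, left: int):
    out = img.clone()
    out[:, :, top:top + patch.shape[-2], left:left + patch.shape[-1]] = patch
    return out

@torch.no_grad()
def pixel_flip_rate(img: torch.Tensor) -> float:
    patch = torch.rand(1, 3, 50, 50)                     # random stand-in patch
    clean = model(img)["out"].argmax(1)                  # per-pixel class map
    patched = model(paste_patch(img, patch, 100, 100))["out"].argmax(1)
    return (clean != patched).float().mean().item()

img = torch.rand(1, 3, 512, 512)                         # placeholder image
print(f"pixels whose predicted class flipped: {pixel_flip_rate(img):.2%}")
```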
arXiv Detail & Related papers (2021-08-13T11:49:09Z) - What Doesn't Kill You Makes You Robust(er): Adversarial Training against Poisons and Backdoors [57.040948169155925]
We extend the adversarial training framework to defend against (training-time) poisoning and backdoor attacks.
Our method desensitizes networks to the effects of poisoning by creating poisons during training and injecting them into training batches.
We show that this defense withstands adaptive attacks, generalizes to diverse threat models, and incurs a better performance trade-off than previous defenses.
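A minimal sketch of this training scheme follows (my illustration: a simple trigger-patch poison stands in for the paper's gradient-based poison crafting). Poisoned copies of batch inputs are injected with their correct labels, so the network learns that the trigger is not predictive.

```python
# Sketch of adversarial training against poisons (illustrative only; the
# trigger-patch poison below is a stand-in for gradient-crafted poisons).
import torch
import torch.nn.functional as F

def add_trigger(x: torch.Tensor) -> torch.Tensor:
    """Stamp a small white square (a crude backdoor trigger) onto images."""
    x = x.clone()
    x[:, :, -4:, -4:] = 1.0
    return x

def train_step(model, optimizer, x, y, poison_frac: float = 0.25):
    """One step: inject trigger-stamped copies of some inputs, keeping their
    correct labels, so the model is desensitized to the trigger."""
    k = max(1, int(poison_frac * x.size(0)))
    x_all = torch.cat([x, add_trigger(x[:k])])
    y_all = torch.cat([y, y[:k]])  # poisons keep their correct labels
    loss = F.cross_entropy(model(x_all), y_all)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```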
arXiv Detail & Related papers (2021-02-26T17:54:36Z) - Disturbing Reinforcement Learning Agents with Corrupted Rewards [62.997667081978825]
We analyze the effects of different attack strategies based on reward perturbations on reinforcement learning algorithms.
We show that smoothly crafted adversarial rewards can mislead the learner, and that with low exploration probability values, the learned policy is more robust to corrupt rewards.
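To illustrate the attack surface (a toy sketch with Gymnasium; the paper's perturbation strategies are more deliberate than i.i.d. noise), rewards can be corrupted with a simple environment wrapper:

```python
# Toy reward-perturbation attack as a Gymnasium wrapper (illustrative only;
# the paper's perturbation strategies are more targeted than i.i.d. noise).
import gymnasium as gym
import numpy as np

class CorruptedRewardWrapper(gym.Wrapper):
    """Add smooth Gaussian corruption to every reward the learner sees."""
    def __init__(self, env, scale: float = 0.5, seed: int = 0):
        super().__init__(env)
        self.scale = scale
        self.rng = np.random.default_rng(seed)

    def step(self, action):
        obs, reward, terminated, truncated, info = self.env.step(action)
        reward += self.scale * self.rng.normal()  # smooth additive corruption
        return obs, reward, terminated, truncated, info

env = CorruptedRewardWrapper(gym.make("CartPole-v1"))
obs, info = env.reset(seed=0)
obs, reward, terminated, truncated, info = env.step(env.action_space.sample())
print("corrupted reward:", reward)
```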
arXiv Detail & Related papers (2021-02-12T15:53:48Z) - Adversarial Machine Learning Attacks and Defense Methods in the Cyber Security Domain [58.30296637276011]
This paper summarizes the latest research on adversarial attacks against security solutions based on machine learning techniques.
It is the first to discuss the unique challenges of implementing end-to-end adversarial attacks in the cyber security domain.
arXiv Detail & Related papers (2020-07-05T18:22:40Z)