Linguistic Hooks: Investigating The Role of Language Triggers in Phishing Emails Targeting African Refugees and Students
- URL: http://arxiv.org/abs/2509.04700v3
- Date: Mon, 15 Sep 2025 14:36:53 GMT
- Title: Linguistic Hooks: Investigating The Role of Language Triggers in Phishing Emails Targeting African Refugees and Students
- Authors: Mythili Menon, Nisha Vinayaga-Sureshkanth, Alec Schon, Kaitlyn Hemberger, Murtuza Jadliwala
- Abstract summary: Phishing and sophisticated email-based social engineering attacks disproportionately affect vulnerable populations, such as refugees and immigrant students. We conducted digital literacy workshops with newly resettled African refugee populations in the US to improve their understanding of how to safeguard private information. We conducted a real-world phishing deception study using carefully designed emails with linguistic cues for three participant groups.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Phishing and sophisticated email-based social engineering attacks disproportionately affect vulnerable populations, such as refugees and immigrant students. However, these groups remain understudied in cybersecurity research. This gap in understanding, coupled with their exclusion from broader security and privacy policies, increases their susceptibility to phishing and widens the digital security divide between marginalized and non-marginalized populations. To address this gap, we first conducted digital literacy workshops with newly resettled African refugee populations (n = 48) in the US to improve their understanding of how to safeguard sensitive and private information. Following the workshops, we conducted a real-world phishing deception study using carefully designed emails with linguistic cues for three participant groups: a subset of the African US-refugees recruited from the digital literacy workshops (n = 19), African immigrant students in the US (n = 142), and a control group of monolingual US-born students (n = 184). Our findings indicate that while digital literacy training for refugees improves awareness of safe cybersecurity practices, recently resettled African US-refugees still face significant challenges due to low digital literacy skills and limited English proficiency. This often leads them to ignore or fail to recognize phishing emails as phishing. Both African immigrant students and US-born students showed greater caution, though instances of data disclosure remained prevalent across groups. Our findings highlight that, irrespective of literacy level, users need training to think critically about digital security. We conclude by discussing how the security and privacy community can better include marginalized populations in policy making and offer recommendations for designing equitable, inclusive cybersecurity initiatives.
Related papers
- Friend or Foe: How LLMs' Safety Mind Gets Fooled by Intent Shift Attack [53.34204977366491]
Large language models (LLMs) remain vulnerable to jailbreaking attacks despite their impressive capabilities. In this paper, we introduce ISA (Intent Shift Attack), which obscures the intent of the attack from LLMs. Our approach needs only minimal edits to the original request and yields natural, human-readable, and seemingly harmless prompts.
arXiv Detail & Related papers (2025-11-01T13:44:42Z)
- Exploring User Risk Factors and Target Groups for Phishing Victimization in Pakistan [0.0]
Phishing attacks pose a significant cybersecurity threat globally. This study investigates phishing susceptibility within the Pakistani population. Men, individuals over 25, employed persons, and frequent online shoppers show relatively high phishing susceptibility.
arXiv Detail & Related papers (2025-10-10T10:37:18Z)
- Toxicity Red-Teaming: Benchmarking LLM Safety in Singapore's Low-Resource Languages [57.059267233093465]
Large Language Models (LLMs) have transformed natural language processing, but their safety mechanisms remain under-explored in low-resource, multilingual settings. We introduce SGToxicGuard, a novel dataset and evaluation framework for benchmarking LLM safety in Singapore's diverse linguistic context. We conduct extensive experiments with state-of-the-art multilingual LLMs, and the results uncover critical gaps in their safety guardrails.
arXiv Detail & Related papers (2025-09-18T08:14:34Z)
- A Systematic Review of Security Communication Strategies: Guidelines and Open Challenges [47.205801464292485]
We identify user difficulties including information overload, technical comprehension, and balancing security awareness with comfort. Our findings reveal consistent communication paradoxes: users require technical details for credibility yet struggle with jargon, and need risk awareness without experiencing anxiety. This work contributes to more effective security communication practices that enable users to recognize and respond to cybersecurity threats appropriately.
arXiv Detail & Related papers (2025-04-02T20:18:38Z)
- Global Challenge for Safe and Secure LLMs Track 1 [57.08717321907755]
This paper introduces the Global Challenge for Safe and Secure Large Language Models (LLMs), a pioneering initiative organized by AI Singapore (AISG) and the CyberSG R&D Programme Office (CRPO) to foster the development of advanced defense mechanisms against automated jailbreaking attacks.
arXiv Detail & Related papers (2024-11-21T08:20:31Z)
- Rethinking the Vulnerabilities of Face Recognition Systems: From a Practical Perspective [53.24281798458074]
Face Recognition Systems (FRS) have been increasingly integrated into critical applications, including surveillance and user authentication.
Recent studies have revealed vulnerabilities in FRS to adversarial attacks (e.g., adversarial patch attacks) and backdoor attacks (e.g., training data poisoning).
arXiv Detail & Related papers (2024-05-21T13:34:23Z)
- Evaluating the Efficacy of Large Language Models in Identifying Phishing Attempts [2.6012482282204004]
Phishing, a prevalent cybercrime tactic for decades, remains a significant threat in today's digital world.
This paper aims to analyze the effectiveness of 15 Large Language Models (LLMs) in detecting phishing attempts.
arXiv Detail & Related papers (2024-04-23T19:55:18Z)
- Lateral Phishing With Large Language Models: A Large Organization Comparative Study [3.590574657417729]
The emergence of Large Language Models (LLMs) has heightened the threat of phishing emails by enabling the generation of highly targeted, personalized, and automated attacks. There is a lack of large-scale studies comparing the effectiveness of LLM-generated lateral phishing emails to those crafted by humans. This study contributes to the understanding of cyber security threats in educational institutions.
arXiv Detail & Related papers (2024-01-18T05:06:39Z)
- Benchmarking and Defending Against Indirect Prompt Injection Attacks on Large Language Models [79.0183835295533]
We introduce the first benchmark for indirect prompt injection attacks, named BIPIA, to assess the risk of such vulnerabilities. Our analysis identifies two key factors contributing to their success: LLMs' inability to distinguish between informational context and actionable instructions, and their lack of awareness in avoiding the execution of instructions within external content. We propose two novel defense mechanisms, boundary awareness and explicit reminder, to address these vulnerabilities in both black-box and white-box settings.
arXiv Detail & Related papers (2023-12-21T01:08:39Z)
- Targeted Phishing Campaigns using Large Scale Language Models [0.0]
Phishing emails are fraudulent messages that aim to trick individuals into revealing sensitive information or taking actions that benefit the attackers.
We propose a framework for evaluating the performance of neural language models (NLMs) in generating these types of emails based on various criteria, including the quality of the generated text.
Our evaluations show that NLMs are capable of generating phishing emails that are difficult to detect and that have a high success rate in tricking individuals, but their effectiveness varies based on the specific NLM and training data used.
arXiv Detail & Related papers (2022-12-30T03:18:05Z)
- Framework for Managing Cybercrime Risks in Nigerian Universities [0.0]
The study is based on a literature review and proposes an actionable framework that Nigerian universities can adopt to set up cybersecurity programs.
We conclude that the framework provides a practical starting point for Nigerian universities to establish efficient and effective cybersecurity programs.
arXiv Detail & Related papers (2021-08-22T15:24:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed papers (including all information) and is not responsible for any consequences.