Social Engineering Attacks: A Systemisation of Knowledge on People Against Humans
- URL: http://arxiv.org/abs/2601.04215v1
- Date: Fri, 19 Dec 2025 08:57:27 GMT
- Title: Social Engineering Attacks: A Systemisation of Knowledge on People Against Humans
- Authors: Scott Thomson, Michael Bewong, Arash Mahboubi, Tanveer Zia,
- Abstract summary: Systematisation of knowledge on Social Engineering Attacks (SEAs) identifies the human, organisational, and adversarial dimensions of cyber threats. SEAs increasingly sidestep technical controls by weaponising leaked personal data and behavioural cues. Our review surfaces three critical dimensions: (i) human factors of knowledge, abilities and behaviours (KAB); (ii) organisational culture and informal norms that shape those behaviours; and (iii) attacker motivations, techniques and return-on-investment calculations.
- Score: 1.3458592512968204
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Our systematisation of knowledge on Social Engineering Attacks (SEAs) identifies the human, organisational, and adversarial dimensions of cyber threats. It addresses the growing risks posed by SEAs, which are highly relevant in the context of physical cyber places such as airports for travellers and smart cities for residents, and synthesises findings from peer-reviewed studies, industry and government reports to inform effective countermeasures that can be embedded into future smart-city strategies. SEAs increasingly sidestep technical controls by weaponising leaked personal data and behavioural cues, an urgency underscored by the Optus, Medibank and now Qantas (2025) mega-breaches that placed millions of personal records in criminals' hands. Our review surfaces three critical dimensions: (i) human factors of knowledge, abilities and behaviours (KAB); (ii) organisational culture and informal norms that shape those behaviours; and (iii) attacker motivations, techniques and return-on-investment calculations. Our contributions are threefold: (1) Tri-Layer Systematisation: to the best of our knowledge, we are the first to unify KAB metrics, cultural drivers and attacker economics into a single analytical lens, enabling practitioners to see how vulnerabilities, norms and threat incentives co-evolve. (2) Risk-Weighted HAIS-Q Meta-analysis: by normalising and ranking HAIS-Q scores across recent field studies, we reveal persistent high-risk clusters (Internet and Social Media use) and propose impact weightings that make the instrument predictive rather than descriptive. (3) Adaptive 'Segment and Simulate' Training Blueprint: building on clustering evidence, we outline a differentiated programme that matches low-, medium- and high-risk user cohorts to experiential learning packages, including phishing simulations, gamified challenges and real-time feedback, thereby aligning effort with measured exposure.
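The Risk-Weighted HAIS-Q meta-analysis and the Segment-and-Simulate blueprint lend themselves to a small numerical sketch. The snippet below is not the paper's implementation: the focus areas, impact weights, and cohort thresholds are invented for illustration. It min-max normalises questionnaire scores, applies hypothetical impact weightings to obtain a single exposure score per user, and assigns each user to a low-, medium- or high-risk training cohort.

```python
# Illustrative sketch only: focus areas, weights, and thresholds are hypothetical,
# not the values from the paper's HAIS-Q meta-analysis.
import numpy as np

FOCUS_AREAS = ["email_use", "password_mgmt", "internet_use", "social_media_use", "mobile_devices"]
# Hypothetical impact weights: higher weight = greater breach impact if behaviour is poor.
IMPACT_WEIGHTS = np.array([0.15, 0.20, 0.30, 0.25, 0.10])

def risk_weighted_score(raw_scores: np.ndarray) -> np.ndarray:
    """raw_scores: (n_users, n_areas) questionnaire scores, higher = safer behaviour."""
    # Min-max normalise each focus area to [0, 1], then invert so 1 = riskiest.
    lo, hi = raw_scores.min(axis=0), raw_scores.max(axis=0)
    norm = (raw_scores - lo) / np.where(hi > lo, hi - lo, 1.0)
    risk = 1.0 - norm
    # Weight each area by its assumed impact and sum to a single exposure score per user.
    return risk @ IMPACT_WEIGHTS

def segment(exposure: np.ndarray) -> list[str]:
    """Map exposure scores to training cohorts (thresholds are illustrative)."""
    return ["high" if e >= 0.6 else "medium" if e >= 0.3 else "low" for e in exposure]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    scores = rng.integers(1, 6, size=(8, len(FOCUS_AREAS)))  # 1-5 Likert responses
    exposure = risk_weighted_score(scores)
    for user, (e, cohort) in enumerate(zip(exposure, segment(exposure))):
        print(f"user {user}: exposure={e:.2f} -> {cohort}-risk cohort")
```

In practice the illustrative weights would be replaced by empirically derived impact weightings, and each cohort would map to its matching training package (phishing simulations, gamified challenges, real-time feedback).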
Related papers
- Toward Risk Thresholds for AI-Enabled Cyber Threats: Enhancing Decision-Making Under Uncertainty with Bayesian Networks [0.3151064009829256]
We propose a structured approach to developing and evaluating AI cyber risk thresholds. First, we analyze existing industry cyber thresholds and identify common threshold elements. Second, we propose the use of Bayesian networks as a tool for modeling AI-enabled cyber risk.
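As a rough illustration of how a Bayesian network can support threshold-based decisions about AI-enabled cyber risk, the toy model below hand-codes a two-parent network and marginalises by enumeration; the structure, probabilities, and threshold are assumptions for illustration, not taken from the paper.

```python
# Toy Bayesian network: P(breach | ai_capable, controls_ok).
# Structure and probabilities are illustrative assumptions, not the paper's model.
from itertools import product

P_AI_CAPABLE = {True: 0.3, False: 0.7}   # prior on attacker having strong AI tooling
P_CONTROLS_OK = {True: 0.8, False: 0.2}  # prior on defensive controls being effective
# Conditional probability table: P(breach=True | ai_capable, controls_ok)
P_BREACH = {(True, True): 0.15, (True, False): 0.70,
            (False, True): 0.05, (False, False): 0.30}

def p_breach() -> float:
    """Marginal probability of breach, enumerating all parent states."""
    total = 0.0
    for ai, ctrl in product([True, False], repeat=2):
        total += P_AI_CAPABLE[ai] * P_CONTROLS_OK[ctrl] * P_BREACH[(ai, ctrl)]
    return total

risk = p_breach()
THRESHOLD = 0.10  # hypothetical organisational risk-acceptance threshold
print(f"P(breach) = {risk:.3f}; {'exceeds' if risk > THRESHOLD else 'within'} threshold {THRESHOLD}")
```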
arXiv Detail & Related papers (2026-01-23T23:23:12Z) - Toward Quantitative Modeling of Cybersecurity Risks Due to AI Misuse [50.87630846876635]
We develop nine detailed cyber risk models. Each model decomposes attacks into steps using the MITRE ATT&CK framework. Individual estimates are aggregated through Monte Carlo simulation.
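The decompose-then-aggregate approach can be sketched with a short Monte Carlo simulation; the step names and probability ranges below are placeholders standing in for ATT&CK-style stages and expert estimates, not values from the paper.

```python
# Monte Carlo aggregation of a decomposed attack chain (illustrative values only).
import numpy as np

rng = np.random.default_rng(42)
N = 100_000  # simulation runs

# Each ATT&CK-style step gets an uncertain success probability, modelled here as
# a uniform range; a real model would use expert-elicited distributions.
STEPS = {
    "initial_access":       (0.10, 0.40),
    "privilege_escalation": (0.20, 0.50),
    "lateral_movement":     (0.30, 0.60),
    "exfiltration":         (0.40, 0.70),
}

# Draw a per-run probability for every step, then require all steps to succeed.
step_probs = np.column_stack([rng.uniform(lo, hi, N) for lo, hi in STEPS.values()])
chain_success = (rng.random(step_probs.shape) < step_probs).all(axis=1)

p_hat = chain_success.mean()
se = chain_success.std(ddof=1) / np.sqrt(N)
print(f"Estimated end-to-end success probability: {p_hat:.4f} +/- {1.96 * se:.4f} (95% CI)")
```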
arXiv Detail & Related papers (2025-12-09T17:54:17Z) - AI Deception: Risks, Dynamics, and Controls [153.71048309527225]
This project provides a comprehensive and up-to-date overview of the AI deception field. We identify a formal definition of AI deception, grounded in signaling theory from studies of animal deception. We organize the landscape of AI deception research as a deception cycle, consisting of two key components: deception emergence and deception treatment.
arXiv Detail & Related papers (2025-11-27T16:56:04Z) - CIA+TA Risk Assessment for AI Reasoning Vulnerabilities [0.0]
We present a framework for cognitive cybersecurity, the systematic protection of AI reasoning processes from adversarial manipulation. First, we establish cognitive cybersecurity as a discipline complementing traditional cybersecurity and AI safety. Second, we introduce CIA+TA, extending traditional Confidentiality, Integrity, and Availability with Trust. Third, we present a quantitative risk assessment methodology with empirically-derived coefficients, enabling organizations to measure cognitive security risks.
arXiv Detail & Related papers (2025-08-19T13:56:09Z) - Never Compromise to Vulnerabilities: A Comprehensive Survey on AI Governance [211.5823259429128]
We propose a comprehensive framework integrating technical and societal dimensions, structured around three interconnected pillars: Intrinsic Security, Derivative Security, and Social Ethics. We identify three core challenges: (1) the generalization gap, where defenses fail against evolving threats; (2) inadequate evaluation protocols that overlook real-world risks; and (3) fragmented regulations leading to inconsistent oversight. Our framework offers actionable guidance for researchers, engineers, and policymakers to develop AI systems that are not only robust and secure but also ethically aligned and publicly trustworthy.
arXiv Detail & Related papers (2025-08-12T09:42:56Z) - Beyond Vulnerabilities: A Survey of Adversarial Attacks as Both Threats and Defenses in Computer Vision Systems [5.787505062263962]
Adversarial attacks against computer vision systems have emerged as a critical research area that challenges the fundamental assumptions about neural network robustness and security. This comprehensive survey examines the evolving landscape of adversarial techniques, revealing their dual nature as both sophisticated security threats and valuable defensive tools.
arXiv Detail & Related papers (2025-08-03T17:02:05Z) - SCVI: Bridging Social and Cyber Dimensions for Comprehensive Vulnerability Assessment [16.745807969615566]
The Social Cyber Vulnerability Index (SCVI) is a novel framework integrating individual-level factors and attack-level characteristics. SCVI is validated using survey data (iPoll) and textual data (Reddit scam reports).
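A minimal sketch of a composite index that blends individual-level and attack-level signals is shown below; the feature names, weights, and aggregation are hypothetical and do not reproduce SCVI's published formulation.

```python
# Hypothetical composite vulnerability index in the spirit of SCVI; the features,
# weights, and aggregation are assumptions, not the published SCVI definition.
from dataclasses import dataclass

@dataclass
class Profile:
    # Individual-level factors (0-1 scaled).
    security_awareness: float   # higher = more aware
    exposure_online: float      # share of activity on exposed platforms
    prior_victimisation: float  # 1.0 if previously scammed, else 0.0
    # Attack-level characteristics (0-1 scaled).
    attack_prevalence: float    # how common the scam type is in reports
    attack_sophistication: float

def vulnerability_index(p: Profile) -> float:
    """Weighted blend of individual susceptibility and attack pressure (weights assumed)."""
    individual = 0.4 * (1 - p.security_awareness) + 0.35 * p.exposure_online + 0.25 * p.prior_victimisation
    attack = 0.6 * p.attack_prevalence + 0.4 * p.attack_sophistication
    return round(0.5 * individual + 0.5 * attack, 3)

print(vulnerability_index(Profile(0.3, 0.8, 1.0, 0.7, 0.5)))  # -> 0.715
```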
arXiv Detail & Related papers (2025-03-24T19:10:34Z) - AILuminate: Introducing v1.0 of the AI Risk and Reliability Benchmark from MLCommons [62.374792825813394]
This paper introduces AILuminate v1.0, the first comprehensive industry-standard benchmark for assessing AI-product risk and reliability. The benchmark evaluates an AI system's resistance to prompts designed to elicit dangerous, illegal, or undesirable behavior in 12 hazard categories.
arXiv Detail & Related papers (2025-02-19T05:58:52Z) - A Survey on Adversarial Machine Learning for Code Data: Realistic Threats, Countermeasures, and Interpretations [21.855757118482995]
Code Language Models (CLMs) have achieved tremendous progress in source code understanding and generation.
In realistic scenarios, CLMs are exposed to potential malicious adversaries, bringing risks to the confidentiality, integrity, and availability of CLM systems.
Despite these risks, a comprehensive analysis of the security vulnerabilities of CLMs in the extremely adversarial environment has been lacking.
arXiv Detail & Related papers (2024-11-12T07:16:20Z) - A Data-Driven Predictive Analysis on Cyber Security Threats with Key Risk Factors [1.715270928578365]
This paper exhibits a Machine Learning(ML) based model for predicting individuals who may be victims of cyber attacks by analyzing socioeconomic factors.
We propose a novel Pertinent Features Random Forest (RF) model, which achieved maximum accuracy (95.95%) with 20 features.
We generated 10 important association rules and presented a framework that is rigorously evaluated on real-world datasets.
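A minimal scikit-learn sketch of the described pipeline, feature selection followed by a Random Forest classifier, is given below; the synthetic data and hyperparameters are placeholders and will not reproduce the reported 95.95% accuracy or the association-rule analysis.

```python
# Illustrative Random-Forest victimisation classifier; data and hyperparameters are
# synthetic placeholders, not the paper's "Pertinent Features RF" configuration.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Stand-in for survey data: 40 socioeconomic/behavioural features, binary victim label.
X, y = make_classification(n_samples=2000, n_features=40, n_informative=15, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Keep the 20 most informative features, mirroring the paper's pertinent-feature idea.
selector = SelectKBest(mutual_info_classif, k=20).fit(X_train, y_train)
clf = RandomForestClassifier(n_estimators=300, random_state=0)
clf.fit(selector.transform(X_train), y_train)

pred = clf.predict(selector.transform(X_test))
print(f"Hold-out accuracy: {accuracy_score(y_test, pred):.4f}")
```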
arXiv Detail & Related papers (2024-03-28T09:41:24Z) - PsySafe: A Comprehensive Framework for Psychological-based Attack, Defense, and Evaluation of Multi-agent System Safety [70.84902425123406]
Multi-agent systems, when enhanced with Large Language Models (LLMs), exhibit profound capabilities in collective intelligence.
However, the potential misuse of this intelligence for malicious purposes presents significant risks.
We propose a framework (PsySafe) grounded in agent psychology, focusing on identifying how dark personality traits in agents can lead to risky behaviors.
Our experiments reveal several intriguing phenomena, such as the collective dangerous behaviors among agents, agents' self-reflection when engaging in dangerous behavior, and the correlation between agents' psychological assessments and dangerous behaviors.
arXiv Detail & Related papers (2024-01-22T12:11:55Z) - On the Security Risks of Knowledge Graph Reasoning [71.64027889145261]
We systematize the security threats to KGR according to the adversary's objectives, knowledge, and attack vectors.
We present ROAR, a new class of attacks that instantiate a variety of such threats.
We explore potential countermeasures against ROAR, including filtering of potentially poisoning knowledge and training with adversarially augmented queries.
arXiv Detail & Related papers (2023-05-03T18:47:42Z)