Bi-Level Game-Theoretic Planning of Cyber Deception for Cognitive Arbitrage
- URL: http://arxiv.org/abs/2509.05498v1
- Date: Fri, 05 Sep 2025 21:11:25 GMT
- Title: Bi-Level Game-Theoretic Planning of Cyber Deception for Cognitive Arbitrage
- Authors: Ya-Ting Yang, Quanyan Zhu
- Abstract summary: This paper investigates how to exploit the cognitive vulnerabilities of Advanced Persistent Threat (APT) attackers. It proposes cognition-aware defenses that leverage windows of superiority to counteract attacks.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Cognitive vulnerabilities shape human decision-making and arise primarily from two sources: (1) cognitive capabilities, which include disparities in knowledge, education, expertise, or access to information, and (2) cognitive biases, such as rational inattention, confirmation bias, and base rate neglect, which influence how individuals perceive and process information. Exploiting these vulnerabilities allows an entity with superior cognitive awareness to gain a strategic advantage, a concept referred to as cognitive arbitrage. This paper investigates how to exploit the cognitive vulnerabilities of Advanced Persistent Threat (APT) attackers and proposes cognition-aware defenses that leverage windows of superiority to counteract attacks. Specifically, the proposed bi-level cyber warfare game focuses on "strategic-level" design for defensive deception mechanisms, which then facilitates "operational-level" actions and tactical-level execution of Tactics, Techniques, and Procedures (TTPs). Game-theoretic reasoning and analysis play a significant role in the cross-echelon quantitative modeling and design of cognitive arbitrage strategies. Our numerical results demonstrate that although the defender's initial advantage diminishes over time, strategically timed and deployed deception techniques can turn a negative value for the attacker into a positive one during the planning phase, and achieve at least a 40% improvement in total rewards during execution. This demonstrates that the defender can amplify even small initial advantages, sustain a strategic edge over the attacker, and secure long-term objectives, such as protecting critical assets throughout the attacker's lifecycle.
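The abstract's bi-level structure (strategic-level deception design conditioning operational-level play) can be illustrated with a minimal backward-induction sketch. This is a hypothetical toy model, not the paper's actual formulation: the configuration names and all payoff numbers are invented, and the operational level is reduced to a one-shot matrix game in which the defender anticipates the attacker's best response.

```python
# Toy bi-level deception game: the defender first commits to a strategic
# deception configuration, which determines the operational-level payoff
# matrix; the game is then solved bottom-up. All payoffs are hypothetical.

# Operational-level payoffs (defender, attacker) per deception config.
# Rows index defender operational actions; columns index attacker TTPs.
GAMES = {
    "no_deception": [[(1, 3), (0, 4)], [(2, 2), (1, 3)]],
    "honeypots":    [[(3, -1), (2, 1)], [(4, -2), (3, 0)]],
}

def attacker_best_response(game, d_row):
    """Attacker picks the column maximizing its own payoff given the row."""
    return max(range(len(game[d_row])), key=lambda c: game[d_row][c][1])

def operational_value(game):
    """Defender's value when it anticipates the attacker's best response."""
    return max(game[r][attacker_best_response(game, r)][0]
               for r in range(len(game)))

def strategic_choice(games):
    """Strategic level: pick the config with the best induced value."""
    return max(games, key=lambda k: operational_value(games[k]))

print(strategic_choice(GAMES))  # -> honeypots
```

In this toy instance, deploying deception flips the attacker's best-response payoffs negative while raising the defender's induced value from 1 to 3, mirroring (in caricature) the paper's claim that timed deception can turn the attacker's value negative while amplifying the defender's advantage.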
Related papers
- Techniques of Modern Attacks [51.56484100374058]
Advanced Persistent Threats (APTs) represent a complex method of attack aimed at specific targets. I will investigate both the attack life cycle and cutting-edge detection and defense strategies proposed in recent academic research. I aim to highlight the strengths and limitations of each approach and propose more adaptive APT mitigation strategies.
arXiv Detail & Related papers (2026-01-19T22:15:25Z) - Guarding Against Malicious Biased Threats (GAMBiT): Experimental Design of Cognitive Sensors and Triggers with Behavioral Impact Analysis [17.809804870177192]
GAMBiT embeds insights from cognitive science into cyber environments through cognitive triggers. GAMBiT establishes a new paradigm in which the attacker's mind becomes part of the battlefield.
arXiv Detail & Related papers (2025-11-27T02:18:03Z) - Debiased Dual-Invariant Defense for Adversarially Robust Person Re-Identification [52.63017280231648]
Person re-identification (ReID) is a fundamental task in many real-world applications such as pedestrian trajectory tracking. Person ReID models are highly susceptible to adversarial attacks, where imperceptible perturbations to pedestrian images can cause entirely incorrect predictions. We propose a dual-invariant defense framework composed of two main phases.
arXiv Detail & Related papers (2025-11-13T03:56:40Z) - Security Logs to ATT&CK Insights: Leveraging LLMs for High-Level Threat Understanding and Cognitive Trait Inference [1.8135692038751479]
Real-time defense requires the ability to infer attacker intent and cognitive strategy from intrusion detection system (IDS) logs. We propose a novel framework that leverages large language models (LLMs) to analyze Suricata IDS logs and infer attacker actions. This lays the groundwork for future work on behaviorally adaptive cyber defense and cognitive trait inference.
arXiv Detail & Related papers (2025-10-23T18:43:31Z) - Towards Proactive Defense Against Cyber Cognitive Attacks [3.357544650969485]
Cyber cognitive attacks leverage disruptive innovations (DIs) to exploit psychological biases and manipulate decision-making processes. New technologies, such as AI-driven disinformation and synthetic media, have accelerated the scale and sophistication of these threats. We introduce a novel predictive methodology for forecasting the emergence of DIs and their malicious uses in cognitive attacks.
arXiv Detail & Related papers (2025-10-17T16:25:47Z) - Quantifying Loss Aversion in Cyber Adversaries via LLM Analysis [2.798191832420146]
IARPA's ReSCIND program seeks to infer, defend against, and exploit attacker cognitive traits. In this paper, we present a novel methodology that leverages large language models (LLMs) to extract quantifiable insights into the cognitive bias of loss aversion from hacker behavior.
arXiv Detail & Related papers (2025-08-18T05:51:30Z) - Reinforcement Learning for Decision-Level Interception Prioritization in Drone Swarm Defense [56.47577824219207]
We present a case study demonstrating the practical advantages of reinforcement learning in addressing this challenge. We introduce a high-fidelity simulation environment that captures realistic operational constraints. The agent learns to coordinate multiple effectors for optimal interception prioritization. We evaluate the learned policy against a handcrafted rule-based baseline across hundreds of simulated attack scenarios.
arXiv Detail & Related papers (2025-08-01T13:55:39Z) - A Case Study on the Use of Representativeness Bias as a Defense Against Adversarial Cyber Threats [1.74585489563148]
This paper takes a first step towards psychology-informed, active defense strategies. Using capture-the-flag events, we create realistic challenges that tap into a particular cognitive bias: representativeness. This study finds that this bias can be triggered to thwart hacking attempts and divert hackers into non-vulnerable attack paths.
arXiv Detail & Related papers (2025-04-28T20:30:28Z) - On the Difficulty of Defending Contrastive Learning against Backdoor
Attacks [58.824074124014224]
We show how contrastive backdoor attacks operate through distinctive mechanisms.
Our findings highlight the need for defenses tailored to the specificities of contrastive backdoor attacks.
arXiv Detail & Related papers (2023-12-14T15:54:52Z) - Attention-Based Real-Time Defenses for Physical Adversarial Attacks in
Vision Applications [58.06882713631082]
Deep neural networks exhibit excellent performance in computer vision tasks, but their vulnerability to real-world adversarial attacks raises serious security concerns.
This paper proposes an efficient attention-based defense mechanism that exploits adversarial channel-attention to quickly identify and track malicious objects in shallow network layers.
It also introduces an efficient multi-frame defense framework, validating its efficacy through extensive experiments aimed at evaluating both defense performance and computational cost.
arXiv Detail & Related papers (2023-11-19T00:47:17Z) - Learning to Defend by Attacking (and Vice-Versa): Transfer of Learning
in Cybersecurity Games [1.14219428942199]
We present a novel model of human decision-making inspired by the cognitive faculties of Instance-Based Learning Theory, Theory of Mind, and Transfer of Learning.
This model functions by learning from both roles in a security scenario: defender and attacker, and by making predictions of the opponent's beliefs, intentions, and actions.
Results from simulation experiments demonstrate the potential usefulness of cognitively inspired models of agents trained in attack and defense roles.
arXiv Detail & Related papers (2023-06-03T17:51:04Z) - Adversarial Attacks and Defenses in Machine Learning-Powered Networks: A
Contemporary Survey [114.17568992164303]
Adversarial attacks and defenses in machine learning and deep neural networks have been gaining significant attention.
This survey provides a comprehensive overview of the recent advancements in the field of adversarial attack and defense techniques.
New avenues of attack are also explored, including search-based, decision-based, drop-based, and physical-world attacks.
arXiv Detail & Related papers (2023-03-11T04:19:31Z) - Physical Adversarial Attack meets Computer Vision: A Decade Survey [55.38113802311365]
This paper presents a comprehensive overview of physical adversarial attacks.
We take the first step to systematically evaluate the performance of physical adversarial attacks.
Our proposed evaluation metric, hiPAA, comprises six perspectives.
arXiv Detail & Related papers (2022-09-30T01:59:53Z) - RADAMS: Resilient and Adaptive Alert and Attention Management Strategy
against Informational Denial-of-Service (IDoS) Attacks [28.570086492742046]
We study IDoS attacks that generate a large volume of feint attacks to overload human operators and hide real attacks among feints.
We develop a Resilient and Adaptive Data-driven alert and Attention Management Strategy (RADAMS).
RADAMS uses reinforcement learning to achieve a customized and transferable design for various human operators and evolving IDoS attacks.
arXiv Detail & Related papers (2021-11-01T19:58:29Z) - Combating Informational Denial-of-Service (IDoS) Attacks: Modeling and
Mitigation of Attentional Human Vulnerability [28.570086492742046]
IDoS attacks deplete the cognition resources of human operators to prevent humans from identifying the real attacks hidden among feints.
This work aims to formally define IDoS attacks, quantify their consequences, and develop human-assistive security technologies to mitigate the severity level and risks of IDoS attacks.
arXiv Detail & Related papers (2021-08-04T05:09:32Z) - Adversarial vs behavioural-based defensive AI with joint, continual and
active learning: automated evaluation of robustness to deception, poisoning
and concept drift [62.997667081978825]
Recent advancements in Artificial Intelligence (AI) have brought new capabilities to user and entity behaviour analytics (UEBA) for cyber-security.
In this paper, we present a solution to effectively mitigate this attack by improving the detection process and efficiently leveraging human expertise.
arXiv Detail & Related papers (2020-01-13T13:54:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences.