Detecting Ambiguity Aversion in Cyberattack Behavior to Inform Cognitive Defense Strategies
- URL: http://arxiv.org/abs/2512.08107v1
- Date: Mon, 08 Dec 2025 23:26:08 GMT
- Title: Detecting Ambiguity Aversion in Cyberattack Behavior to Inform Cognitive Defense Strategies
- Authors: Stephan Carney, Soham Hans, Sofia Hirschmann, Stacey Marsella, Yvonne Fonken, Peggy Wu, Nikolos Gurney
- Abstract summary: This research explores the ability to model and detect when hackers exhibit ambiguity aversion. By operationalizing this cognitive trait, our work provides a foundational component for developing adaptive cognitive defense strategies.
- Score: 0.7036032466145113
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Adversaries (hackers) attempting to infiltrate networks frequently face uncertainty in their operational environments. This research explores the ability to model and detect when they exhibit ambiguity aversion, a cognitive bias reflecting a preference for known (versus unknown) probabilities. We introduce a novel methodological framework that (1) leverages rich, multi-modal data from human-subjects red-team experiments, (2) employs a large language model (LLM) pipeline to parse unstructured logs into MITRE ATT&CK-mapped action sequences, and (3) applies a new computational model to infer an attacker's ambiguity aversion level in near-real time. By operationalizing this cognitive trait, our work provides a foundational component for developing adaptive cognitive defense strategies.
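Below is a minimal, hypothetical sketch of the three stages the abstract describes. The helper names (`AttackAction`, `parse_logs`, `ambiguity_aversion_score`) and the known-target proxy for ambiguity aversion are illustrative assumptions, not the authors' actual computational model.

```python
# Hypothetical sketch of the three-stage framework; the paper's actual
# model is not specified in the abstract. All names and the
# "known-target" proxy below are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class AttackAction:
    raw_log: str        # unstructured log line from a red-team session (stage 1 data)
    technique_id: str   # MITRE ATT&CK technique, e.g. "T1046"
    target_known: bool  # attacker had already enumerated this target

PROMPT = (
    "Map the following intrusion log entry to a single MITRE ATT&CK "
    "technique ID. Reply with the ID only.\n\nLog: {log}"
)

def parse_logs(logs: List[str], llm: Callable[[str], str]) -> List[str]:
    """Stage 2: LLM pipeline turning unstructured logs into an
    ATT&CK-mapped action sequence."""
    return [llm(PROMPT.format(log=line)).strip() for line in logs]

def ambiguity_aversion_score(actions: List[AttackAction]) -> float:
    """Stage 3 (toy proxy): fraction of actions aimed at targets the
    attacker had already enumerated; values near 1.0 suggest a
    preference for known over unknown probabilities."""
    if not actions:
        return 0.5  # uninformative prior
    return sum(a.target_known for a in actions) / len(actions)
```

A streaming variant could update the score after each newly parsed action, approximating the near-real-time inference the abstract mentions.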
Related papers
- Deepfake detectors are DUMB: A benchmark to assess adversarial training robustness under transferability constraints [0.0]
We extend the DUMB (dataset soUrces, Model architecture, and Balance) and DUMBer methodologies to deepfake detection.
We evaluate the robustness of deepfake detectors against adversarial attacks under transferability constraints and cross-dataset configurations.
Experiments show that adversarial training strategies reinforce robustness in in-distribution cases but can also degrade it in cross-dataset configurations, depending on the strategy adopted.
arXiv Detail & Related papers (2026-01-09T18:06:19Z)
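For readers unfamiliar with the adversarial training this entry evaluates, a generic PGD-style training step in PyTorch follows; this is the standard recipe, not the DUMB/DUMBer code.

```python
# Generic PGD adversarial-training step (not the DUMB/DUMBer code):
# perturb inputs within an L-inf ball, then train on the perturbed batch.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    x_adv = x + torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv + alpha * grad.sign()
        x_adv = x.detach() + (x_adv - x).clamp(-eps, eps)  # project back into the ball
        x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()

def adversarial_training_step(model, optimizer, x, y):
    model.train()
    x_adv = pgd_attack(model, x, y)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```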
- Generative Human-Object Interaction Detection via Differentiable Cognitive Steering of Multi-modal LLMs [85.69785384599827]
Human-object interaction (HOI) detection aims to localize human-object pairs and the interactions between them.
Existing methods operate under a closed-world assumption, treating the task as a classification problem over a small, predefined verb set.
We propose GRASP-HO, a novel Generative Reasoning And Steerable Perception framework that reformulates HOI detection from a closed-set classification task into an open-vocabulary generation problem.
arXiv Detail & Related papers (2025-12-19T14:41:50Z)
- Hiding in the AI Traffic: Abusing MCP for LLM-Powered Agentic Red Teaming [0.0]
We introduce a novel command & control (C2) architecture that leverages the Model Context Protocol (MCP) to coordinate adaptive reconnaissance agents covertly across networks.
We find that our architecture not only improves the goal-directed behavior of the system as a whole, but also eliminates key host and network artifacts that could be used to detect and prevent command & control behavior altogether.
arXiv Detail & Related papers (2025-11-20T02:51:04Z)
- Security Logs to ATT&CK Insights: Leveraging LLMs for High-Level Threat Understanding and Cognitive Trait Inference [1.8135692038751479]
Real-time defense requires the ability to infer attacker intent and cognitive strategy from intrusion detection system (IDS) logs.
We propose a novel framework that leverages large language models (LLMs) to analyze Suricata IDS logs and infer attacker actions.
This lays the groundwork for future work on behaviorally adaptive cyber defense and cognitive trait inference.
arXiv Detail & Related papers (2025-10-23T18:43:31Z)
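Suricata writes newline-delimited JSON events to its standard eve.json output; a small sketch of feeding its alerts to an LLM is shown below, with the prompt wording and summarization granularity as assumptions rather than the paper's procedure.

```python
# Minimal sketch: read Suricata's eve.json output and batch alert
# summaries into an LLM prompt, in the spirit of the entry above.
import json

def load_alerts(eve_path="eve.json"):
    """Yield (timestamp, src_ip, dest_ip, signature) for each alert event."""
    with open(eve_path) as fh:
        for line in fh:  # eve.json is newline-delimited JSON
            event = json.loads(line)
            if event.get("event_type") == "alert":
                yield (event.get("timestamp"),
                       event.get("src_ip"),
                       event.get("dest_ip"),
                       event["alert"].get("signature"))

def build_prompt(alerts):
    lines = [f"{ts} {src} -> {dst}: {sig}" for ts, src, dst, sig in alerts]
    return ("Given these Suricata alerts, describe the attacker's likely "
            "actions as MITRE ATT&CK techniques:\n" + "\n".join(lines))
```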
- Algorithms for Adversarially Robust Deep Learning [58.656107500646364]
We discuss recent progress toward designing algorithms that exhibit desirable robustness properties.
We present new algorithms that achieve state-of-the-art generalization in medical imaging, molecular identification, and image classification.
We propose new attacks and defenses, which represent the frontier of progress toward designing robust language-based agents.
arXiv Detail & Related papers (2025-09-23T14:48:58Z)
- Exploiting Edge Features for Transferable Adversarial Attacks in Distributed Machine Learning [54.26807397329468]
This work explores a previously overlooked vulnerability in distributed deep learning systems.
An adversary who intercepts the intermediate features transmitted between distributed components can still pose a serious threat.
We propose an exploitation strategy specifically designed for distributed settings.
arXiv Detail & Related papers (2025-07-09T20:09:00Z)
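A toy split-inference setup makes the interception point concrete; this is illustrative of distributed inference in general, not the paper's system.

```python
# Illustrative split-inference setup (not the paper's exact system):
# the "edge" half runs on one machine, the "cloud" half on another, so
# intermediate features cross the network in between.
import torch
import torch.nn as nn

edge_part = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
cloud_part = nn.Sequential(nn.Flatten(), nn.LazyLinear(10))

x = torch.randn(1, 3, 32, 32)
features = edge_part(x)        # <-- transmitted over the network; an
                               #     eavesdropper sees this tensor
logits = cloud_part(features)  # completed on the remote side
```

The entry's point is that these intercepted features alone can be enough to mount attacks, e.g. crafting transferable adversarial perturbations.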
- Investigating Human-Identifiable Features Hidden in Adversarial Perturbations [54.39726653562144]
Our study explores up to five attack algorithms across three datasets.
We identify human-identifiable features in adversarial perturbations.
Using pixel-level annotations, we extract such features and demonstrate their ability to compromise target models.
arXiv Detail & Related papers (2023-09-28T22:31:29Z)
- Spatial-Frequency Discriminability for Revealing Adversarial Perturbations [53.279716307171604]
The vulnerability of deep neural networks to adversarial perturbations is widely recognized in the computer vision community.
Current algorithms typically detect adversarial patterns through a discriminative decomposition of natural and adversarial data.
We propose a discriminative detector that relies on a spatial-frequency Krawtchouk decomposition.
arXiv Detail & Related papers (2023-05-18T10:18:59Z)
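Krawtchouk decomposition is a specialized moment transform; as a rough stand-in for the spatial-frequency idea only, the sketch below uses a 2-D DCT energy ratio. This is explicitly a substitute, not the paper's detector.

```python
# Rough stand-in for a spatial-frequency discriminator (the paper uses
# Krawtchouk decomposition; a 2-D DCT is substituted here purely to
# illustrate separating low- and high-frequency energy).
import numpy as np
from scipy.fft import dctn

def high_freq_ratio(image: np.ndarray, cutoff: int = 8) -> float:
    """Share of spectral energy outside the top-left (low-frequency)
    cutoff x cutoff block; adversarial noise often inflates this."""
    spectrum = dctn(image, norm="ortho") ** 2
    total = spectrum.sum()
    low = spectrum[:cutoff, :cutoff].sum()
    return float((total - low) / total)

# A detector could threshold this statistic, with the threshold fit on
# held-out natural images.
score = high_freq_ratio(np.random.rand(32, 32))
```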
arXiv Detail & Related papers (2023-05-18T10:18:59Z) - A Framework for Understanding and Visualizing Strategies of RL Agents [0.0]
We present a framework for learning comprehensible models of sequential decision tasks in which agent strategies are characterized using temporal logic formulas.
We evaluate our framework on combat scenarios in StarCraft II (SC2) using traces from a handcrafted expert policy and a trained reinforcement learning agent.
arXiv Detail & Related papers (2022-08-17T21:58:19Z)
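A toy evaluator for two LTL-style operators over a finite trace illustrates what characterizing agent strategies with temporal logic formulas means; the paper's formalism is far richer than this.

```python
# Toy illustration of checking LTL-style properties over a finite agent
# trace; the paper's temporal-logic machinery is far richer than this.
from typing import Callable, List

State = dict  # e.g. {"scouting": True, "army_size": 12}

def eventually(trace: List[State], p: Callable[[State], bool]) -> bool:
    """F p: p holds in some state of the trace."""
    return any(p(s) for s in trace)

def always(trace: List[State], p: Callable[[State], bool]) -> bool:
    """G p: p holds in every state of the trace."""
    return all(p(s) for s in trace)

trace = [{"scouting": True, "army_size": 5},
         {"scouting": False, "army_size": 12}]
print(eventually(trace, lambda s: s["army_size"] > 10))  # True
print(always(trace, lambda s: s["scouting"]))            # False
```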
- Explainable Adversarial Attacks in Deep Neural Networks Using Activation Profiles [69.9674326582747]
This paper presents a visual framework to investigate neural network models subjected to adversarial examples.
We show how observing these elements can quickly pinpoint exploited areas in a model.
arXiv Detail & Related papers (2021-03-18T13:04:21Z)
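One common way to collect per-layer activation profiles in PyTorch is forward hooks; the sketch below is a generic recipe, not the paper's visual framework.

```python
# Generic way to collect per-layer activation profiles with forward
# hooks in PyTorch; a sketch, not the paper's visual framework.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))
profiles = {}

def record(name):
    def hook(module, inputs, output):
        # store mean absolute activation per layer for this batch
        profiles[name] = output.detach().abs().mean().item()
    return hook

for name, module in model.named_modules():
    if isinstance(module, nn.Linear):
        module.register_forward_hook(record(name))

model(torch.randn(2, 8))
print(profiles)  # compare profiles for clean vs adversarial inputs
```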
- Detection Defense Against Adversarial Attacks with Saliency Map [7.736844355705379]
It is well established that neural networks are vulnerable to adversarial examples, which are almost imperceptible to human vision.
Existing defenses tend to focus on hardening the robustness of models against adversarial attacks.
We propose a novel method that adds extra noise and uses a prediction-inconsistency strategy to detect adversarial examples.
arXiv Detail & Related papers (2020-09-06T13:57:17Z)
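The inconsistency idea can be sketched generically: compare a model's prediction with and without small random noise added to the input. The noise scale and voting rule below are assumptions, not the paper's exact procedure.

```python
# Generic sketch of inconsistency-based detection: adversarial inputs
# tend to change predicted class under small random noise more often
# than natural inputs do. Noise scale and vote count are assumptions.
import torch

def is_adversarial(model, x, sigma=0.05, votes=8, tol=0.75):
    model.eval()
    with torch.no_grad():
        base = model(x).argmax(dim=1)
        agree = torch.zeros_like(base, dtype=torch.float)
        for _ in range(votes):
            noisy = x + sigma * torch.randn_like(x)
            agree += (model(noisy).argmax(dim=1) == base).float()
    # flag inputs whose prediction is unstable under noise
    return (agree / votes) < tol
```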
- Adversarial vs behavioural-based defensive AI with joint, continual and active learning: automated evaluation of robustness to deception, poisoning and concept drift [62.997667081978825]
Recent advancements in Artificial Intelligence (AI) have brought new capabilities to behavioural analysis (User and Entity Behaviour Analytics, UEBA) for cyber-security.
In this paper, we present a solution that effectively mitigates attacks on such systems by improving the detection process and efficiently leveraging human expertise.
arXiv Detail & Related papers (2020-01-13T13:54:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.