Towards a Cognitive-Support Tool for Threat Hunters
- URL: http://arxiv.org/abs/2602.00432v1
- Date: Sat, 31 Jan 2026 01:02:58 GMT
- Title: Towards a Cognitive-Support Tool for Threat Hunters
- Authors: Alessandra Maciel Paz Milani, Norman Anderson, Margaret-Anne Storey,
- Abstract summary: Cybersecurity increasingly relies on threat hunters to proactively identify adversarial activity. The cognitive work underlying threat hunting remains underexplored or insufficiently supported by existing tools. We present a prototype tool that operationalizes design propositions by enabling threat hunters to externalize reasoning.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Cybersecurity increasingly relies on threat hunters to proactively identify adversarial activity, yet the cognitive work underlying threat hunting remains underexplored or insufficiently supported by existing tools. Building on prior studies that examined how threat hunters construct and share mental models during investigations, we derived a set of design propositions to support their cognitive and collaborative work. In this paper, we present the Threat Hunter Board, a prototype tool that operationalizes these design propositions by enabling threat hunters to externalize reasoning, organize investigative leads, and maintain continuity across sessions. Using a design science paradigm, we describe the solution design rationale and artifact development. In addition, we propose six design heuristics that form a solution-evaluation framework for assessing cognitive support in threat hunting tools. An initial evaluation using a cognitive walkthrough provides early evidence of feasibility, while future work will focus on user-based validation with professional threat hunters.
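The paper does not publish an implementation, but the three capabilities the abstract names for the Threat Hunter Board (externalizing reasoning, organizing investigative leads, and maintaining continuity across sessions) can be sketched as a minimal data model. Everything below is a hypothetical illustration: the class names (`Lead`, `HuntBoard`), fields, and methods are assumptions for exposition, not the authors' API.

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class Lead:
    """An investigative lead with the hunter's externalized reasoning."""
    title: str
    hypothesis: str               # the hunter's current reasoning, in their own words
    status: str = "open"          # open | confirmed | dismissed
    notes: list[str] = field(default_factory=list)

@dataclass
class HuntBoard:
    """Hypothetical board: organizes leads and persists them across sessions."""
    hunt_name: str
    leads: list[Lead] = field(default_factory=list)

    def add_lead(self, title: str, hypothesis: str) -> Lead:
        lead = Lead(title=title, hypothesis=hypothesis)
        self.leads.append(lead)
        return lead

    def save(self, path: str) -> None:
        # Serializing to disk is what provides continuity between sessions.
        with open(path, "w") as f:
            json.dump({"hunt_name": self.hunt_name,
                       "leads": [asdict(lead) for lead in self.leads]}, f)

    @classmethod
    def load(cls, path: str) -> "HuntBoard":
        with open(path) as f:
            data = json.load(f)
        board = cls(hunt_name=data["hunt_name"])
        board.leads = [Lead(**lead) for lead in data["leads"]]
        return board
```

The point of the sketch is the round trip: a hunter's hypotheses are first-class recorded objects rather than tacit knowledge, and reloading the board restores the investigation state for the next session or another team member.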
Related papers
- AI Deception: Risks, Dynamics, and Controls
This project provides a comprehensive and up-to-date overview of the AI deception field. We identify a formal definition of AI deception, grounded in signaling theory from studies of animal deception. We organize the landscape of AI deception research as a deception cycle, consisting of two key components: deception emergence and deception treatment.
arXiv Detail & Related papers (2025-11-27T16:56:04Z)
- Enhancing Cyber Threat Hunting -- A Visual Approach with the Forensic Visualization Toolkit
In today's dynamic cyber threat landscape, organizations must take proactive steps to bolster their cybersecurity defenses. Rather than waiting for automated security systems to flag potential threats, threat hunting involves actively searching for signs of malicious activity within an organization's network. We present the Forensic Visualization Toolkit, a powerful tool designed for digital forensics investigations, analysis of digital evidence, and advanced visualizations to enhance cybersecurity situational awareness and risk management.
arXiv Detail & Related papers (2025-09-11T06:53:45Z)
- A Survey on Model Extraction Attacks and Defenses for Large Language Models
Model extraction attacks pose significant security threats to deployed language models. This survey provides a comprehensive taxonomy of extraction attacks and defenses, categorizing attacks into functionality extraction, training data extraction, and prompt-targeted attacks. We examine defense mechanisms organized into model protection, data privacy protection, and prompt-targeted strategies, evaluating their effectiveness across different deployment scenarios.
arXiv Detail & Related papers (2025-06-26T22:02:01Z)
- Exploring the Potential of Metacognitive Support Agents for Human-AI Co-Creation
We envision novel metacognitive support agents that assist designers in working more reflectively with GenAI. We conducted exploratory prototyping through a Wizard of Oz elicitation study with 20 mechanical designers probing multiple metacognitive support strategies. We found that agent-supported users created more feasible designs than non-supported users, with differing impacts between support strategies.
arXiv Detail & Related papers (2025-06-15T15:09:37Z)
- Lazarus Group Targets Crypto-Wallets and Financial Data while employing new Tradecrafts
This report presents a comprehensive analysis of a malicious software sample, detailing its architecture, behavioral characteristics, and underlying intent. The malware's core functionalities, including persistence mechanisms, command-and-control communication, and data exfiltration routines, are identified. This malware analysis report not only reconstructs past adversary actions but also establishes a robust foundation for anticipating and mitigating future attacks.
arXiv Detail & Related papers (2025-05-27T20:13:29Z)
- An In-kernel Forensics Engine for Investigating Evasive Attacks
This paper introduces LASE, an open-source Low-Artifact Forensics Engine to perform threat analysis and forensics on the Windows operating system. LASE augments current analysis tools by providing detailed, system-wide monitoring capabilities while minimizing detectable artifacts.
arXiv Detail & Related papers (2025-05-10T03:40:17Z)
- Fuzzy to Clear: Elucidating the Threat Hunter Cognitive Process and Cognitive Support Needs
This study emphasizes a human-centered approach to understanding the lived experiences of threat hunters. We introduce a model of how threat hunters build and refine their mental models during threat hunting sessions. We suggest five actionable design propositions to enhance the tools that support them.
arXiv Detail & Related papers (2024-08-08T10:18:52Z)
- On the Security Risks of Knowledge Graph Reasoning
We systematize the security threats to KGR according to the adversary's objectives, knowledge, and attack vectors.
We present ROAR, a new class of attacks that instantiate a variety of such threats.
We explore potential countermeasures against ROAR, including filtering of potentially poisoning knowledge and training with adversarially augmented queries.
arXiv Detail & Related papers (2023-05-03T18:47:42Z)
- Untargeted Backdoor Attack against Object Detection
We design a poison-only backdoor attack in an untargeted manner, based on task characteristics.
We show that, once the backdoor is embedded into the target model by our attack, it can trick the model into failing to detect any object stamped with our trigger patterns.
arXiv Detail & Related papers (2022-11-02T17:05:45Z)
- Adversarial vs behavioural-based defensive AI with joint, continual and active learning: automated evaluation of robustness to deception, poisoning and concept drift
Recent advancements in Artificial Intelligence (AI) have brought new capabilities to behavioural analysis (UEBA) for cyber-security.
In this paper, we present a solution to effectively mitigate this attack by improving the detection process and efficiently leveraging human expertise.
arXiv Detail & Related papers (2020-01-13T13:54:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.