Fuzzy to Clear: Elucidating the Threat Hunter Cognitive Process and Cognitive Support Needs
- URL: http://arxiv.org/abs/2408.04348v3
- Date: Thu, 04 Sep 2025 17:59:25 GMT
- Title: Fuzzy to Clear: Elucidating the Threat Hunter Cognitive Process and Cognitive Support Needs
- Authors: Alessandra Maciel Paz Milani, Arty Starr, Samantha Hill, Callum Curtis, Norman Anderson, David Moreno-Lumbreras, Margaret-Anne Storey
- Abstract summary: This study emphasizes a human-centered approach to understanding the lived experiences of threat hunters. We introduce a model of how threat hunters build and refine their mental models during threat hunting sessions. We also suggest five actionable design propositions to enhance the tools that support them.
- Score: 34.79554932198158
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: With security threats increasing in frequency and severity, it is critical that we consider the important role of threat hunters. These highly-trained security professionals learn to see, identify, and intercept security threats. Many recent works and existing tools in cybersecurity are focused on automating the threat hunting process, often overlooking the critical human element. Our study shifts this paradigm by emphasizing a human-centered approach to understanding the lived experiences of threat hunters. By observing threat hunters during hunting sessions and analyzing the rich insights they provide, we seek to advance the understanding of their cognitive processes and the tool support they need. Through an in-depth observational study of threat hunters, we introduce a model of how they build and refine their mental models during threat hunting sessions. We also present 23 themes that provide a foundation to better understand threat hunter needs and suggest five actionable design propositions to enhance the tools that support them. Through these contributions, our work enriches the theoretical understanding of threat hunting and provides practical insights for designing more effective, human-centered cybersecurity tools.
Related papers
- Towards a Cognitive-Support Tool for Threat Hunters [42.97840843148333]
Cybersecurity increasingly relies on threat hunters to proactively identify adversarial activity. The cognitive work underlying threat hunting remains underexplored or insufficiently supported by existing tools. We present a prototype tool that operationalizes design propositions by enabling threat hunters to externalize reasoning.
arXiv Detail & Related papers (2026-01-31T01:02:58Z) - Techniques of Modern Attacks [51.56484100374058]
Advanced Persistent Threats (APTs) represent a complex method of attack aimed at specific targets. I will investigate both the attack life cycle and cutting-edge detection and defense strategies proposed in recent academic research. I aim to highlight the strengths and limitations of each approach and propose more adaptive APT mitigation strategies.
arXiv Detail & Related papers (2026-01-19T22:15:25Z) - Enhancing Cyber Threat Hunting -- A Visual Approach with the Forensic Visualization Toolkit [0.0]
In today's dynamic cyber threat landscape, organizations must take proactive steps to bolster their cybersecurity defenses. Rather than waiting for automated security systems to flag potential threats, threat hunting involves actively searching for signs of malicious activity within an organization's network. We present the Forensic Visualization Toolkit, a powerful tool designed for digital forensics investigations, analysis of digital evidence, and advanced visualizations to enhance cybersecurity situational awareness and risk management.
arXiv Detail & Related papers (2025-09-11T06:53:45Z) - An In-kernel Forensics Engine for Investigating Evasive Attacks [0.28894038270224864]
This paper introduces LASE, an open-source Low-Artifact Forensics Engine for performing threat analysis and forensics in the Windows operating system. LASE augments current analysis tools by providing detailed, system-wide monitoring capabilities while minimizing detectable artifacts.
arXiv Detail & Related papers (2025-05-10T03:40:17Z) - LLM Security: Vulnerabilities, Attacks, Defenses, and Countermeasures [49.1574468325115]
This survey seeks to define and categorize the various attacks targeting large language models (LLMs). A thorough analysis of these attacks is presented, alongside an exploration of defense mechanisms designed to mitigate such threats.
arXiv Detail & Related papers (2025-05-02T10:35:26Z) - Cyber Defense Reinvented: Large Language Models as Threat Intelligence Copilots [36.809323735351825]
CYLENS is a cyber threat intelligence copilot powered by large language models (LLMs).
CYLENS is designed to assist security professionals throughout the entire threat management lifecycle.
It supports threat attribution, contextualization, detection, correlation, prioritization, and remediation.
arXiv Detail & Related papers (2025-02-28T07:16:09Z) - Safety at Scale: A Comprehensive Survey of Large Model Safety [298.05093528230753]
We present a comprehensive taxonomy of safety threats to large models, including adversarial attacks, data poisoning, backdoor attacks, jailbreak and prompt injection attacks, energy-latency attacks, data and model extraction attacks, and emerging agent-specific threats.
We identify and discuss the open challenges in large model safety, emphasizing the need for comprehensive safety evaluations, scalable and effective defense mechanisms, and sustainable data practices.
arXiv Detail & Related papers (2025-02-02T05:14:22Z) - Recent Advances in Attack and Defense Approaches of Large Language Models [27.271665614205034]
Large Language Models (LLMs) have revolutionized artificial intelligence and machine learning through their advanced text processing and generating capabilities.
Their widespread deployment has raised significant safety and reliability concerns.
This paper reviews current research on LLM vulnerabilities and threats, and evaluates the effectiveness of contemporary defense mechanisms.
arXiv Detail & Related papers (2024-09-05T06:31:37Z) - The Shadow of Fraud: The Emerging Danger of AI-powered Social Engineering and its Possible Cure [30.431292911543103]
Social engineering (SE) attacks remain a significant threat to both individuals and organizations.
The advancement of Artificial Intelligence (AI) has potentially intensified these threats by enabling more personalized and convincing attacks.
This survey paper categorizes SE attack mechanisms, analyzes their evolution, and explores methods for measuring these threats.
arXiv Detail & Related papers (2024-07-22T17:37:31Z) - Psychological Profiling in Cybersecurity: A Look at LLMs and Psycholinguistic Features [0.741787275567662]
We explore the potential of psychological profiling techniques, particularly focusing on the utilization of Large Language Models (LLMs) and psycholinguistic features.
Our research underscores the importance of integrating psychological perspectives into cybersecurity practices to bolster defense mechanisms against evolving threats.
arXiv Detail & Related papers (2024-06-26T23:04:52Z) - PsySafe: A Comprehensive Framework for Psychological-based Attack, Defense, and Evaluation of Multi-agent System Safety [70.84902425123406]
Multi-agent systems, when enhanced with Large Language Models (LLMs), exhibit profound capabilities in collective intelligence.
However, the potential misuse of this intelligence for malicious purposes presents significant risks.
We propose a framework (PsySafe) grounded in agent psychology, focusing on identifying how dark personality traits in agents can lead to risky behaviors.
Our experiments reveal several intriguing phenomena, such as the collective dangerous behaviors among agents, agents' self-reflection when engaging in dangerous behavior, and the correlation between agents' psychological assessments and dangerous behaviors.
arXiv Detail & Related papers (2024-01-22T12:11:55Z) - Autonomous Threat Hunting: A Future Paradigm for AI-Driven Threat Intelligence [0.0]
This review explores the amalgamation of artificial intelligence (AI) and traditional threat intelligence methodologies. It examines the transformative influence of AI and machine learning on conventional threat intelligence practices.
Case studies and evaluations highlight success stories and lessons learned by organizations adopting AI-driven threat intelligence.
arXiv Detail & Related papers (2023-12-30T17:36:08Z) - Towards more Practical Threat Models in Artificial Intelligence Security [66.67624011455423]
Recent works have identified a gap between research and practice in artificial intelligence security.
We revisit the threat models of the six most studied attacks in AI security research and match them to AI usage in practice.
arXiv Detail & Related papers (2023-11-16T16:09:44Z) - On the Security Risks of Knowledge Graph Reasoning [71.64027889145261]
We systematize the security threats to KGR according to the adversary's objectives, knowledge, and attack vectors.
We present ROAR, a new class of attacks that instantiate a variety of such threats.
We explore potential countermeasures against ROAR, including filtering of potentially poisoning knowledge and training with adversarially augmented queries.
arXiv Detail & Related papers (2023-05-03T18:47:42Z) - Turning the Hunted into the Hunter via Threat Hunting: Life Cycle, Ecosystem, Challenges and the Great Promise of AI [0.0]
This paper gives a holistic view of the threat hunting ecosystem, identifies challenges, and discusses the future with the integration of artificial intelligence (AI).
We specifically establish a life cycle and ecosystem for privacy-threat hunting in addition to identifying the related challenges.
arXiv Detail & Related papers (2022-04-23T14:03:36Z) - A System for Automated Open-Source Threat Intelligence Gathering and Management [53.65687495231605]
SecurityKG is a system for automated OSCTI gathering and management.
It uses a combination of AI and NLP techniques to extract high-fidelity knowledge about threat behaviors.
arXiv Detail & Related papers (2021-01-19T18:31:35Z) - Enabling Efficient Cyber Threat Hunting With Cyber Threat Intelligence [94.94833077653998]
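Systems like SecurityKG organize extracted threat knowledge into a graph structure. As a minimal illustrative sketch (not SecurityKG's actual design), threat behaviors can be stored as subject-predicate-object triples in a simple in-memory graph and queried by predicate; the class, entity names, and indicators below are hypothetical examples:

```python
# Illustrative sketch only: a toy in-memory knowledge graph of
# threat-behavior triples, indexed by predicate for fast lookup.
from collections import defaultdict


class ThreatGraph:
    def __init__(self):
        self.triples = []                     # all (subject, predicate, object) triples
        self.by_predicate = defaultdict(list) # predicate -> triples index

    def add(self, subj, pred, obj):
        """Record one threat-behavior triple."""
        triple = (subj, pred, obj)
        self.triples.append(triple)
        self.by_predicate[pred].append(triple)

    def query(self, pred):
        """Return all triples with the given predicate."""
        return list(self.by_predicate[pred])


# Hypothetical extracted behaviors (names and IPs are examples only).
g = ThreatGraph()
g.add("TrickBot", "communicates_with", "198.51.100.23")
g.add("TrickBot", "drops", "payload.dll")
g.add("Emotet", "communicates_with", "203.0.113.9")

for triple in g.query("communicates_with"):
    print(triple)
```

A real OSCTI knowledge graph would add entity resolution and provenance tracking; this sketch only shows why indexing triples by relation makes behavior-centric queries cheap.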
ThreatRaptor is a system that facilitates threat hunting in computer systems using open-source Cyber Threat Intelligence (OSCTI).
It extracts structured threat behaviors from unstructured OSCTI text and uses a concise and expressive domain-specific query language, TBQL, to hunt for malicious system activities.
Evaluations on a broad set of attack cases demonstrate the accuracy and efficiency of ThreatRaptor in practical threat hunting.
arXiv Detail & Related papers (2020-10-26T14:54:01Z)
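The extraction step that ThreatRaptor and similar OSCTI systems perform can be illustrated with a much simpler sketch: pulling structured indicators of compromise (IOCs) out of free-text threat reports so they become huntable records. The regexes, report text, and indicators below are illustrative assumptions, not ThreatRaptor's actual pipeline or TBQL syntax:

```python
# Illustrative sketch only: regex-based IOC extraction from unstructured
# threat-report text, producing structured {type, value} records.
import re

IOC_PATTERNS = {
    "ipv4": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "md5": re.compile(r"\b[a-fA-F0-9]{32}\b"),
    "domain": re.compile(r"\b[a-z0-9-]+\.(?:com|net|org|io)\b"),
}


def extract_iocs(report: str) -> list[dict]:
    """Scan free text and return structured IOC records."""
    iocs = []
    for ioc_type, pattern in IOC_PATTERNS.items():
        for value in pattern.findall(report):
            iocs.append({"type": ioc_type, "value": value})
    return iocs


# Hypothetical report text (IP, hash, and domain are example values).
report = (
    "The malware beacons to 203.0.113.7 and drops a payload with "
    "MD5 d41d8cd98f00b204e9800998ecf8427e hosted on evil-cdn.net."
)
for ioc in extract_iocs(report):
    print(ioc["type"], ioc["value"])
```

Production OSCTI extractors rely on NLP rather than regexes alone, but the output shape is the same: structured indicators that a query language can then match against system activity logs.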
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.