A Method for Quantifying Human Risk and a Blueprint for LLM Integration
- URL: http://arxiv.org/abs/2510.09635v1
- Date: Mon, 29 Sep 2025 20:31:27 GMT
- Title: A Method for Quantifying Human Risk and a Blueprint for LLM Integration
- Authors: Giuseppe Canale
- Abstract summary: The Cybersecurity Psychology Framework (CPF) is a novel methodology for quantifying human-centric vulnerabilities in security operations. CPF provides end-to-end operationalization across the full spectrum of psychological vulnerabilities.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: This paper presents the Cybersecurity Psychology Framework (CPF), a novel methodology for quantifying human-centric vulnerabilities in security operations through systematic integration of established psychological constructs with operational security telemetry. While individual human factors (alert fatigue, compliance fatigue, cognitive overload, and risk perception biases) have been extensively studied in isolation, no framework provides end-to-end operationalization across the full spectrum of psychological vulnerabilities. We address this gap by: (1) defining specific, measurable algorithms that quantify key psychological states using standard SOC tooling (SIEM, ticketing systems, communication platforms); (2) proposing a lightweight, privacy-preserving LLM architecture based on Retrieval-Augmented Generation (RAG) and domain-specific fine-tuning to analyze structured and unstructured data for latent psychological risks; (3) detailing a rigorous mixed-methods validation strategy acknowledging the inherent difficulty of obtaining sensitive cybersecurity data. Our implementation of CPF indicators has been demonstrated in a proof-of-concept deployment using small language models, achieving a 0.92 F1-score on synthetic data. This work provides the theoretical and methodological foundation necessary for industry partnerships to conduct empirical validation with real operational data.
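The paper's indicator algorithms are not reproduced in this abstract, but the core idea of point (1) — quantifying a psychological state from standard SOC telemetry — can be illustrated. The sketch below is a hypothetical alert-fatigue indicator computed from SIEM alert timestamps; the function name, window size, and saturation rate are illustrative assumptions, not the CPF's actual definitions.

```python
from datetime import datetime, timedelta

def alert_fatigue_score(alert_times, window_hours=8, saturation_rate=25.0):
    """Return a score in [0, 1]: the fraction of an assumed sustainable
    alert rate consumed in the most recent shift-length window.
    Rates at or above `saturation_rate` alerts/hour map to 1.0."""
    if not alert_times:
        return 0.0
    end = max(alert_times)
    start = end - timedelta(hours=window_hours)
    recent = [t for t in alert_times if t >= start]
    rate = len(recent) / window_hours  # alerts per hour in the window
    return min(rate / saturation_rate, 1.0)

# Example: 16 alerts arriving every 30 minutes over one shift.
base = datetime(2025, 1, 1, 9, 0)
times = [base + timedelta(minutes=30 * i) for i in range(16)]
print(alert_fatigue_score(times))  # low load relative to saturation
```

A real CPF deployment would presumably derive such signals from SIEM queries rather than in-memory timestamp lists, and calibrate the saturation rate per analyst or per team.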
Related papers
- Detecting Cybersecurity Threats by Integrating Explainable AI with SHAP Interpretability and Strategic Data Sampling [0.0]
The framework addresses three fundamental challenges in deploying AI for threat detection. Our approach maintains detection efficacy while reducing computational overhead. It provides a robust foundation for deploying trustworthy AI systems in security operations centers.
arXiv Detail & Related papers (2026-02-22T08:01:14Z)
- Opportunities in AI/ML for the Rubin LSST Dark Energy Science Collaboration [63.61423859450929]
This white paper surveys the current landscape of AI/ML across DESC's primary cosmological probes and cross-cutting analyses. We identify key methodological research priorities, including Bayesian inference at scale, physics-informed methods, validation frameworks, and active learning for discovery.
arXiv Detail & Related papers (2026-01-20T18:46:42Z) - MORPHEUS: A Multidimensional Framework for Modeling, Measuring, and Mitigating Human Factors in Cybersecurity [4.343339158263096]
This paper introduces MORPHEUS, a framework that operationalizes human-centric security as a dynamic and interconnected system. It consolidates 50 human factors influencing susceptibility to major cyberthreats, including phishing, malware, password management, and misconfigurations. MORPHEUS links theory to practice through an inventory of 99 validated psychometric instruments, enabling empirical assessment and targeted intervention.
arXiv Detail & Related papers (2025-12-20T10:27:37Z) - Uncertainty-Aware Data-Efficient AI: An Information-Theoretic Perspective [48.073471560778984]
In context-specific applications such as robotics, telecommunications, and healthcare, artificial intelligence systems often face the challenge of limited training data. This review paper examines formal methodologies that address data-limited regimes through two complementary approaches.
arXiv Detail & Related papers (2025-12-04T21:44:22Z) - A Comprehensive Survey on Benchmarks and Solutions in Software Engineering of LLM-Empowered Agentic System [54.933911409697714]
This survey provides the first holistic analysis of Large Language Model-powered software engineering. We review over 150 recent papers and propose a taxonomy along two key dimensions: (1) Solutions, categorized into prompt-based, fine-tuning-based, and agent-based paradigms, and (2) Benchmarks, including tasks such as code generation, translation, and repair.
arXiv Detail & Related papers (2025-10-10T06:56:50Z) - Revisiting Vulnerability Patch Localization: An Empirical Study and LLM-Based Solution [44.388332647211776]
Open-source software vulnerability patch detection is a critical component for maintaining software security and ensuring software supply chain integrity. Traditional detection methods face significant scalability challenges when processing large volumes of commit histories. We propose a novel two-stage framework that combines version-driven candidate filtering with large language model-based multi-round dialogue voting.
arXiv Detail & Related papers (2025-09-19T09:09:55Z) - Taming Polysemanticity in LLMs: Provable Feature Recovery via Sparse Autoencoders [50.52694757593443]
Existing SAE training algorithms often lack rigorous mathematical guarantees and suffer from practical limitations. We first propose a novel statistical framework for the feature recovery problem, which includes a new notion of feature identifiability. We introduce a new SAE training algorithm based on "bias adaptation", a technique that adaptively adjusts neural network bias parameters to ensure appropriate activation sparsity.
arXiv Detail & Related papers (2025-06-16T20:58:05Z) - Bringing Order Amidst Chaos: On the Role of Artificial Intelligence in Secure Software Engineering [0.0]
The ever-evolving technological landscape offers both opportunities and threats, creating a dynamic space where chaos and order compete. Secure software engineering (SSE) must continuously address vulnerabilities that endanger software systems. This thesis seeks to bring order to the chaos in SSE by addressing domain-specific differences that impact AI accuracy.
arXiv Detail & Related papers (2025-01-09T11:38:58Z) - A Computational Method for Measuring "Open Codes" in Qualitative Analysis [44.39424825305388]
This paper presents a theory-informed computational method for measuring inductive coding results from humans and Generative AI (GAI). It measures each coder's contribution against the merged result using four novel metrics: Coverage, Overlap, Novelty, and Divergence. Our work provides a reliable pathway for ensuring methodological rigor in human-AI qualitative analysis.
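The paper's exact metric definitions are not given in this summary, but two of the named quantities can be illustrated with simple set arithmetic over code labels. This is a hedged sketch — the real method presumably matches semantically similar codes rather than exact strings, and the function names here are illustrative assumptions.

```python
def coverage(coder_codes, merged_codes):
    """Fraction of the merged codebook that this coder produced."""
    merged = set(merged_codes)
    if not merged:
        return 0.0
    return len(set(coder_codes) & merged) / len(merged)

def novelty(coder_codes, other_codes):
    """Fraction of this coder's codes that no other coder produced."""
    codes = set(coder_codes)
    if not codes:
        return 0.0
    return len(codes - set(other_codes)) / len(codes)

# Example: a human coder and a GAI coder labeling the same transcript.
human = {"trust", "fear", "cost"}
gai = {"trust", "usability"}
merged = human | gai
print(coverage(human, merged))  # 3 of 4 merged codes
print(novelty(gai, human))      # 1 of 2 GAI codes is new
```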
arXiv Detail & Related papers (2024-11-19T00:44:56Z) - INVARLLM: LLM-assisted Physical Invariant Extraction for Cyber-Physical Systems Anomaly Detection [13.192308838452927]
Cyber-Physical Systems (CPS) are vulnerable to cyber-physical attacks that violate physical laws. We propose a hybrid framework that uses large language models (LLMs) to extract semantic information from CPS documentation and generate physical invariants. This approach combines LLM semantic understanding with empirical validation to ensure both interpretability and reliability.
arXiv Detail & Related papers (2024-11-17T00:09:04Z) - PP-GWAS: Privacy Preserving Multi-Site Genome-wide Association Studies [2.516577526761521]
We present a novel algorithm PP-GWAS designed to improve upon existing standards in terms of computational efficiency and scalability without sacrificing data privacy.
Experimental evaluation with real world and synthetic data indicates that PP-GWAS can achieve computational speeds twice as fast as similar state-of-the-art algorithms.
We have assessed its performance using various datasets, emphasizing its potential in facilitating more efficient and private genomic analyses.
arXiv Detail & Related papers (2024-10-10T17:07:57Z)
- Dynamic Vulnerability Criticality Calculator for Industrial Control Systems [0.0]
This paper introduces an innovative approach by proposing a dynamic vulnerability criticality calculator.
Our methodology encompasses the analysis of environmental topology and the effectiveness of deployed security mechanisms.
Our approach integrates these factors into a comprehensive Fuzzy Cognitive Map model, incorporating attack paths to holistically assess the overall vulnerability score.
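A Fuzzy Cognitive Map iterates concept activations through a signed weight matrix until they stabilize. The generic update step below is a sketch under stated assumptions — sigmoid squashing and made-up concept weights — not the paper's actual model, which would encode real topology and security-mechanism factors.

```python
import math

def fcm_step(state, weights):
    """One FCM update: each concept's next activation is a squashed
    weighted sum of all concepts' current activations.
    weights[i][j] = influence of concept i on concept j."""
    n = len(state)
    return [
        1.0 / (1.0 + math.exp(-sum(state[i] * weights[i][j] for i in range(n))))
        for j in range(n)
    ]

def fcm_run(state, weights, iters=20, tol=1e-6):
    """Iterate until activations change by less than `tol`."""
    for _ in range(iters):
        new = fcm_step(state, weights)
        if max(abs(a - b) for a, b in zip(new, state)) < tol:
            return new
        state = new
    return state

# Illustrative concepts: [exposed_service, missing_patch, criticality].
# Both risk factors push the criticality concept upward.
weights = [
    [0.0, 0.0, 0.7],
    [0.0, 0.0, 0.9],
    [0.0, 0.0, 0.0],
]
print(fcm_run([1.0, 0.8, 0.0], weights))
```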
arXiv Detail & Related papers (2024-03-20T09:48:47Z)
- Enhancing Security in Federated Learning through Adaptive Consensus-Based Model Update Validation [2.28438857884398]
This paper introduces an advanced approach for fortifying Federated Learning (FL) systems against label-flipping attacks.
We propose a consensus-based verification process integrated with an adaptive thresholding mechanism.
Our results indicate a significant mitigation of label-flipping attacks, bolstering the FL system's resilience.
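The summary does not specify the adaptive thresholding rule. One common scheme — shown here purely as an illustrative sketch, not the authors' method — flags client updates whose validation score deviates from the round's consensus (median) by more than a multiple of the median absolute deviation, so the threshold adapts to each round's score spread.

```python
import statistics

def flag_suspect_updates(scores, k=3.0):
    """Return indices of client updates whose score deviates from the
    consensus median by more than k times the median absolute deviation.
    `scores` is one scalar validation score per client for this round."""
    med = statistics.median(scores)
    mad = statistics.median(abs(s - med) for s in scores) or 1e-9
    return [i for i, s in enumerate(scores) if abs(s - med) / mad > k]

# Example: four honest clients and one whose update tanks validation
# accuracy, as a label-flipping attack typically would.
print(flag_suspect_updates([0.91, 0.89, 0.92, 0.90, 0.15]))  # [4]
```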
arXiv Detail & Related papers (2024-03-05T20:54:56Z)
- Data Poisoning for In-context Learning [49.77204165250528]
In-context learning (ICL) has been recognized for its innovative ability to adapt to new tasks. This paper delves into the critical issue of ICL's susceptibility to data poisoning attacks. We introduce ICLPoison, a specialized attacking framework conceived to exploit the learning mechanisms of ICL.
arXiv Detail & Related papers (2024-02-03T14:20:20Z)
- Evaluating the Safety of Deep Reinforcement Learning Models using Semi-Formal Verification [81.32981236437395]
We present a semi-formal verification approach for decision-making tasks based on interval analysis.
Our method obtains comparable results over standard benchmarks with respect to formal verifiers.
Our approach allows efficient evaluation of safety properties for decision-making models in practical applications.
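Interval analysis bounds a network's outputs by propagating input intervals layer by layer. The sketch below shows the core step for one linear neuron followed by ReLU — an illustrative toy, not the authors' verifier: each weight picks the interval endpoint that minimizes or maximizes its contribution.

```python
def linear_interval(lo, hi, w, b):
    """Propagate per-input intervals [lo[i], hi[i]] through y = w.x + b.
    For the lower bound, positive weights take the low endpoint and
    negative weights the high endpoint; vice versa for the upper bound."""
    out_lo = b + sum(w[i] * (lo[i] if w[i] >= 0 else hi[i]) for i in range(len(w)))
    out_hi = b + sum(w[i] * (hi[i] if w[i] >= 0 else lo[i]) for i in range(len(w)))
    return out_lo, out_hi

def relu_interval(lo, hi):
    """ReLU is monotone, so it maps interval endpoints directly."""
    return max(lo, 0.0), max(hi, 0.0)

# Example: two inputs with bounds [0,1] and [-1,1], weights of mixed sign.
y_lo, y_hi = linear_interval([0.0, -1.0], [1.0, 1.0], [2.0, -3.0], 0.5)
print(relu_interval(y_lo, y_hi))
```

A safety property such as "the action score never exceeds a limit" can then be checked against the computed upper bound: if the bound satisfies it, so does every concrete input in the interval.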
arXiv Detail & Related papers (2020-10-19T11:18:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all information) and is not responsible for any consequences.