Standardized Threat Taxonomy for AI Security, Governance, and Regulatory Compliance
- URL: http://arxiv.org/abs/2511.21901v1
- Date: Wed, 26 Nov 2025 20:42:46 GMT
- Title: Standardized Threat Taxonomy for AI Security, Governance, and Regulatory Compliance
- Authors: Hernan Huwyler
- Abstract summary: A "language barrier" currently separates technical security teams, who focus on algorithmic vulnerabilities, from legal and compliance professionals, who address regulatory mandates.
This research presents the AI System Threat Vector Taxonomy, a structured ontology designed explicitly for Quantitative Risk Assessment (QRA).
The framework categorizes AI-specific risks into nine critical domains: Misuse, Poisoning, Privacy, Adversarial, Biases, Unreliable Outputs, Drift, Supply Chain, and IP Threat, integrating 53 operationally defined sub-threats.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The accelerating deployment of artificial intelligence systems across regulated sectors has exposed critical fragmentation in risk assessment methodologies. A significant "language barrier" currently separates technical security teams, who focus on algorithmic vulnerabilities (e.g., MITRE ATLAS), from legal and compliance professionals, who address regulatory mandates (e.g., EU AI Act, NIST AI RMF). This disciplinary disconnect prevents the accurate translation of technical vulnerabilities into financial liability, leaving practitioners unable to answer fundamental economic questions regarding contingency reserves, control return-on-investment, and insurance exposure. To bridge this gap, this research presents the AI System Threat Vector Taxonomy, a structured ontology designed explicitly for Quantitative Risk Assessment (QRA). The framework categorizes AI-specific risks into nine critical domains: Misuse, Poisoning, Privacy, Adversarial, Biases, Unreliable Outputs, Drift, Supply Chain, and IP Threat, integrating 53 operationally defined sub-threats. Uniquely, each domain maps technical vectors directly to business loss categories (Confidentiality, Integrity, Availability, Legal, Reputation), enabling the translation of abstract threats into measurable financial impact. The taxonomy is empirically validated through an analysis of 133 documented AI incidents from 2025 (achieving 100% classification coverage) and reconciled against the main AI risk frameworks. Furthermore, it is explicitly aligned with ISO/IEC 42001 controls and NIST AI RMF functions to facilitate auditability.
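The paper does not include code, so the following is only a minimal sketch of how a taxonomy like the one the abstract describes could be made machine-readable for QRA. The nine domain names and five loss categories come from the abstract; the sample sub-threats, frequencies, and impact figures are invented placeholders, not values from the paper.

```python
# Hypothetical encoding of the taxonomy for quantitative risk assessment.
# Domain and loss-category names come from the abstract; the sample
# sub-threat entries and all numbers below are illustrative placeholders.
from dataclasses import dataclass

LOSS_CATEGORIES = ("Confidentiality", "Integrity", "Availability", "Legal", "Reputation")

@dataclass
class SubThreat:
    domain: str                        # one of the nine top-level domains
    name: str                          # operationally defined sub-threat
    loss_categories: tuple[str, ...]   # mapped business loss categories
    annual_frequency: float            # estimated events per year (QRA input)
    impact_per_event: float            # estimated loss per event, in currency units

TAXONOMY = [
    # Hypothetical examples for illustration only:
    SubThreat("Poisoning", "Training-data poisoning", ("Integrity", "Reputation"), 0.2, 500_000),
    SubThreat("Privacy", "Membership inference", ("Confidentiality", "Legal"), 0.5, 150_000),
    SubThreat("Drift", "Post-deployment distribution shift", ("Availability",), 2.0, 40_000),
]

def annualized_expected_loss(threats):
    """Single-point QRA estimate: sum of frequency x impact over sub-threats."""
    return sum(t.annual_frequency * t.impact_per_event for t in threats)

if __name__ == "__main__":
    for t in TAXONOMY:
        assert t.loss_categories and all(c in LOSS_CATEGORIES for c in t.loss_categories)
    print(f"Annualized expected loss: {annualized_expected_loss(TAXONOMY):,.0f}")  # 255,000
```

Under these assumptions, summing frequency times impact per sub-threat yields the kind of contingency-reserve figure the abstract says the taxonomy is meant to support; the paper's own quantification method may differ.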
Related papers
- Frontier AI Risk Management Framework in Practice: A Risk Analysis Technical Report v1.5 [61.787178868669265]
This technical report presents an updated and granular assessment of five critical dimensions: cyber offense, persuasion and manipulation, strategic deception, uncontrolled AI R&D, and self-replication.
This work reflects our current understanding of AI frontier risks and urges collective action to mitigate these challenges.
arXiv Detail & Related papers (2026-02-16T04:30:06Z)
- Agentic AI for Commercial Insurance Underwriting with Adversarial Self-Critique [0.0]
This study presents a decision-negative, human-in-the-loop agentic system that incorporates an adversarial self-critique mechanism.
Within this system, a critic agent challenges the primary agent's conclusions prior to submitting recommendations to human reviewers.
The research develops a formal taxonomy of failure modes to characterize potential errors by decision-negative agents.
arXiv Detail & Related papers (2026-01-21T05:51:27Z)
- Frontier AI Auditing: Toward Rigorous Third-Party Assessment of Safety and Security Practices at Leading AI Companies [57.521647436515785]
We define frontier AI auditing as rigorous third-party verification of frontier AI developers' safety and security claims.
We introduce AI Assurance Levels (AAL-1 to AAL-4), ranging from time-bounded system audits to continuous, deception-resilient verification.
arXiv Detail & Related papers (2026-01-16T18:44:09Z)
- Mapping AI Risk Mitigations: Evidence Scan and Preliminary AI Risk Mitigation Taxonomy [35.22340964134219]
The landscape of AI risk mitigation frameworks is fragmented, uses inconsistent terminology, and has gaps in coverage.
This paper introduces a preliminary AI Risk Mitigation Taxonomy to organize AI risk mitigations and provide a common frame of reference.
The taxonomy was developed through a rapid evidence scan of 13 AI risk mitigation frameworks published between 2023 and 2025, which were extracted into a living database of 831 AI risk mitigations.
arXiv Detail & Related papers (2025-12-12T03:26:29Z)
- A Proactive Insider Threat Management Framework Using Explainable Machine Learning [0.14323566945483496]
This study proposes the Insider Threat Explainable Machine Learning (IT-XML) framework to enhance proactive insider threat management.
A quantitative approach is adopted using an online questionnaire to assess employees' knowledge of insider threat patterns.
The framework classified all organisations at the developing security maturity level with 97-98% confidence and achieved a classification accuracy of 91.7%.
arXiv Detail & Related papers (2025-10-22T14:08:38Z)
- Never Compromise to Vulnerabilities: A Comprehensive Survey on AI Governance [211.5823259429128]
We propose a comprehensive framework integrating technical and societal dimensions, structured around three interconnected pillars: Intrinsic Security, Derivative Security, and Social Ethics.
We identify three core challenges: (1) the generalization gap, where defenses fail against evolving threats; (2) inadequate evaluation protocols that overlook real-world risks; and (3) fragmented regulations leading to inconsistent oversight.
Our framework offers actionable guidance for researchers, engineers, and policymakers to develop AI systems that are not only robust and secure but also ethically aligned and publicly trustworthy.
arXiv Detail & Related papers (2025-08-12T09:42:56Z)
- Understanding and Mitigating Risks of Generative AI in Financial Services [22.673239064487667]
We aim to highlight AI content safety considerations specific to the financial services domain and outline an associated AI content risk taxonomy.
We evaluate how existing open-source technical guardrail solutions cover this taxonomy by assessing them on data collected via red-teaming activities.
arXiv Detail & Related papers (2025-04-25T16:55:51Z)
- AILuminate: Introducing v1.0 of the AI Risk and Reliability Benchmark from MLCommons [62.374792825813394]
This paper introduces AILuminate v1.0, the first comprehensive industry-standard benchmark for assessing AI-product risk and reliability.
The benchmark evaluates an AI system's resistance to prompts designed to elicit dangerous, illegal, or undesirable behavior in 12 hazard categories.
arXiv Detail & Related papers (2025-02-19T05:58:52Z)
- Quantifying Security Vulnerabilities: A Metric-Driven Security Analysis of Gaps in Current AI Standards [5.388550452190688]
This paper audits and quantifies security risks in three major AI governance standards: NIST AI RMF 1.0, the UK's AI and Data Protection Risk Toolkit, and the EU's ALTAI.
Using a novel risk assessment methodology, we develop four key metrics: Risk Severity Index (RSI), Attack Vector Potential Index (AVPI), Compliance-Security Gap Percentage (CSGP), and Root Cause Vulnerability Score (RCVS).
arXiv Detail & Related papers (2025-02-12T17:57:54Z)
- SafetyAnalyst: Interpretable, Transparent, and Steerable Safety Moderation for AI Behavior [56.10557932893919]
We present SafetyAnalyst, a novel AI safety moderation framework.
Given an AI behavior, SafetyAnalyst uses chain-of-thought reasoning to analyze its potential consequences.
It aggregates effects into a harmfulness score using 28 fully interpretable weight parameters (a minimal sketch of such a weighted aggregation appears after this list).
arXiv Detail & Related papers (2024-10-22T03:38:37Z)
- AI Risk Categorization Decoded (AIR 2024): From Government Regulations to Corporate Policies [88.32153122712478]
We identify 314 unique risk categories organized into a four-tiered taxonomy.
At the highest level, this taxonomy encompasses System & Operational Risks, Content Safety Risks, Societal Risks, and Legal & Rights Risks.
We aim to advance AI safety through information sharing across sectors and the promotion of best practices in risk mitigation for generative AI models and systems.
arXiv Detail & Related papers (2024-06-25T18:13:05Z)
- The AI Revolution: Opportunities and Challenges for the Finance Sector [12.486180180030964]
The application of AI in the financial sector is transforming the industry.
However, alongside its benefits, AI also presents several challenges.
These include issues related to transparency, interpretability, fairness, accountability, and trustworthiness.
The use of AI in the financial sector further raises critical questions about data privacy and security.
Despite the global recognition of this need, there remains a lack of clear guidelines or legislation for AI use in finance.
arXiv Detail & Related papers (2023-08-31T08:30:09Z)
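As promised in the SafetyAnalyst entry above, here is a minimal sketch of what an interpretable weighted aggregation of effects into a harmfulness score could look like. The effect names, weights, and linear form below are hypothetical assumptions for illustration; the paper's actual 28 features and aggregation rule are not specified in this listing.

```python
# Hypothetical interpretable weighted aggregation, in the spirit of the
# SafetyAnalyst summary above. Effect names, weights, and the linear form
# are illustrative assumptions, not the paper's actual parameters.

# Per-effect weights (the paper uses 28 parameters; three shown here).
WEIGHTS = {
    "physical_harm_likelihood": 0.9,
    "privacy_violation_likelihood": 0.6,
    "benefit_to_user": -0.3,  # beneficial effects can offset harm
}

def harmfulness_score(effects: dict[str, float]) -> float:
    """Aggregate per-effect estimates (each in [0, 1]) into one score."""
    return sum(WEIGHTS[name] * value for name, value in effects.items() if name in WEIGHTS)

print(harmfulness_score({
    "physical_harm_likelihood": 0.1,
    "privacy_violation_likelihood": 0.5,
    "benefit_to_user": 0.8,
}))  # 0.9*0.1 + 0.6*0.5 - 0.3*0.8 = 0.15
```

Because every weight is a named, inspectable number, such a scheme stays fully interpretable and steerable: adjusting a single weight changes the moderation policy in a transparent way, which appears to be the property the paper emphasizes.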
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.