From Linear Risk to Emergent Harm: Complexity as the Missing Core of AI Governance
- URL: http://arxiv.org/abs/2512.12707v1
- Date: Sun, 14 Dec 2025 14:19:21 GMT
- Title: From Linear Risk to Emergent Harm: Complexity as the Missing Core of AI Governance
- Authors: Hugo Roger Paz
- Abstract summary: Risk-based AI regulation promises proportional controls aligned with anticipated harms. This paper argues that such frameworks often fail for structural reasons. We propose a complexity-based framework for AI governance that treats regulation as intervention rather than control.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Risk-based AI regulation has become the dominant paradigm in AI governance, promising proportional controls aligned with anticipated harms. This paper argues that such frameworks often fail for structural reasons: they implicitly assume linear causality, stable system boundaries, and largely predictable responses to regulation. In practice, AI operates within complex adaptive socio-technical systems in which harm is frequently emergent, delayed, redistributed, and amplified through feedback loops and strategic adaptation by system actors. As a result, compliance can increase while harm is displaced or concealed rather than eliminated. We propose a complexity-based framework for AI governance that treats regulation as intervention rather than control, prioritises dynamic system mapping over static classifications, and integrates causal reasoning and simulation for policy design under uncertainty. The aim is not to eliminate uncertainty, but to enable robust system stewardship through monitoring, learning, and iterative revision of governance interventions.
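The abstract's core mechanism, compliance rising while harm is displaced through strategic adaptation, can be made concrete with a toy simulation. The sketch below is not the paper's model; the two-channel actor behaviour, enforcement rate, and displacement fraction are illustrative assumptions.

```python
# Toy simulation: compliance rises while total harm is displaced, not removed.
# All dynamics and parameters are illustrative assumptions, not the paper's model.
import random

random.seed(0)

N_ACTORS = 100
STEPS = 20
ENFORCEMENT = 0.3   # per-step chance an actor is audited on the regulated channel
DISPLACEMENT = 0.7  # fraction of suppressed harm that strategic actors shift elsewhere

# Each actor starts with harm flowing through a regulated and an unregulated channel.
actors = [{"regulated": 1.0, "unregulated": 0.2} for _ in range(N_ACTORS)]

for step in range(STEPS):
    for a in actors:
        if random.random() < ENFORCEMENT:
            suppressed = a["regulated"] * 0.5              # audit halves regulated harm
            a["regulated"] -= suppressed
            a["unregulated"] += suppressed * DISPLACEMENT  # strategic adaptation

compliance = 1.0 - sum(a["regulated"] for a in actors) / N_ACTORS
total_harm = sum(a["regulated"] + a["unregulated"] for a in actors) / N_ACTORS
print(f"measured compliance: {compliance:.2f}, total harm per actor: {total_harm:.2f}")
```

Even in this crude setting, the compliance metric climbs far faster than total harm falls, which is the kind of gap the proposed monitoring-and-revision loop is meant to catch.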
Related papers
- Position: General Alignment Has Hit a Ceiling; Edge Alignment Must Be Taken Seriously [51.03213216886717]
We take the position that the dominant paradigm of General Alignment reaches a structural ceiling in settings with conflicting values. We introduce Edge Alignment as a distinct approach in which systems preserve multi-dimensional value structure.
arXiv Detail & Related papers (2026-02-23T16:51:43Z)
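To make the Edge Alignment position above concrete: scalarising conflicting values discards the structure the authors argue must be preserved. The three value dimensions and scores below are invented for illustration, not taken from the paper.

```python
# Two responses judged on three value dimensions (helpfulness, privacy, autonomy).
# Averaging to a single alignment score erases the conflict structure;
# keeping the vector preserves it. Dimensions and scores are illustrative.
responses = {
    "A": {"helpfulness": 0.9, "privacy": 0.1, "autonomy": 0.5},
    "B": {"helpfulness": 0.5, "privacy": 0.5, "autonomy": 0.5},
}

for name, values in responses.items():
    scalar = sum(values.values()) / len(values)  # general-alignment style collapse
    print(name, "scalar:", round(scalar, 2), "vector:", values)
# Both score 0.5 on the scalar, but A hides a severe privacy conflict
# that only the multi-dimensional view exposes.
```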
- Agentic AI for Cybersecurity: A Meta-Cognitive Architecture for Governable Autonomy [0.0]
This paper argues that cybersecurity orchestration should be reconceptualized as an agentic, multi-agent cognitive system. We introduce a conceptual framework in which heterogeneous AI agents responsible for detection, hypothesis formation, contextual interpretation, explanation, and governance are coordinated through an explicit meta-cognitive judgement function. Our contribution is to make this cognitive structure architecturally explicit and governable by embedding meta-cognitive judgement as a first-class system function.
arXiv Detail & Related papers (2026-02-12T12:52:49Z)
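The coordination pattern described above, specialist agents arbitrated by an explicit meta-cognitive judgement function, might be sketched as follows. The agent roles come from the abstract; the confidence-weighted arbitration rule and veto logic are assumptions.

```python
# Sketch of a meta-cognitive judgement function arbitrating specialist agents.
# Roles follow the abstract; the scoring rule itself is an illustrative assumption.
from dataclasses import dataclass

@dataclass
class AgentReport:
    role: str          # e.g. detection, hypothesis, context, explanation, governance
    verdict: str       # proposed action
    confidence: float  # self-reported confidence in [0, 1]

def meta_judge(reports: list[AgentReport], approve_threshold: float = 0.6) -> str:
    """First-class judgement step: weigh reports, defer to humans when weak."""
    if not reports:
        return "escalate_to_human"
    # Governance agents can veto outright (externalised accountability).
    if any(r.role == "governance" and r.verdict == "veto" for r in reports):
        return "blocked_by_policy"
    weight = sum(r.confidence for r in reports) / len(reports)
    return reports[0].verdict if weight >= approve_threshold else "escalate_to_human"

reports = [
    AgentReport("detection", "isolate_host", 0.8),
    AgentReport("context", "isolate_host", 0.7),
    AgentReport("governance", "allow", 0.9),
]
print(meta_judge(reports))  # -> isolate_host
```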
- Assured Autonomy: How Operations Research Powers and Orchestrates Generative AI Systems [18.881800772626427]
We argue that generative models can be fragile in operational domains unless paired with mechanisms that provide feasibility, robustness to distribution shift, and stress testing. We develop a conceptual framework for assured autonomy grounded in operations research. These elements define a research agenda for assured autonomy in safety-critical, reliability-sensitive operational domains.
arXiv Detail & Related papers (2025-12-30T04:24:06Z)
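One plausible reading of "pairing generative models with mechanisms that provide feasibility" is an operations-research repair step that projects a generated plan onto the feasible set before it is acted on. The capacity constraint and uniform-scaling repair below are illustrative assumptions, not the paper's method.

```python
# Sketch: an operations-research guardrail around a generative planner.
# A generated allocation is projected onto a capacity constraint before use.
# The constraint (sum of allocations <= capacity) is an illustrative assumption.

def repair(plan: list[float], capacity: float) -> list[float]:
    """Scale an infeasible allocation down onto the feasible set."""
    total = sum(plan)
    if total <= capacity:
        return plan                      # already feasible: pass through
    scale = capacity / total             # simple projection by uniform scaling
    return [x * scale for x in plan]

generated = [4.0, 3.0, 5.0]              # stand-in for a generative model's output
feasible = repair(generated, capacity=10.0)
print(feasible, "sum:", sum(feasible))   # respects the capacity constraint
```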
- From Educational Analytics to AI Governance: Transferable Lessons from Complex Systems Interventions [0.0]
We argue that five core principles developed within CAPIRE transfer directly to the challenge of governing AI systems. The isomorphism is not merely analogical: both domains exhibit non-linearity, emergence, feedback loops, strategic adaptation, and path dependence. We propose Complex Systems AI Governance (CSAIG) as an integrated framework that operationalises these principles for regulatory design.
arXiv Detail & Related papers (2025-12-15T12:16:57Z)
- Making LLMs Reliable When It Matters Most: A Five-Layer Architecture for High-Stakes Decisions [51.56484100374058]
Current large language models (LLMs) excel in verifiable domains where outputs can be checked before action but prove less reliable for high-stakes strategic decisions with uncertain outcomes. This gap, driven by mutual cognitive biases in both humans and artificial intelligence (AI) systems, threatens the defensibility of valuations and the sustainability of investments in the sector. This report describes a framework emerging from systematic qualitative assessment across 7 frontier-grade LLMs and 3 market-facing venture vignettes under time pressure.
arXiv Detail & Related papers (2025-11-10T22:24:21Z)
- Governable AI: Provable Safety Under Extreme Threat Models [31.36879992618843]
We propose a Governable AI (GAI) framework that shifts from traditional internal constraints to externally enforced structural compliance. The GAI framework is composed of a simple yet reliable, fully deterministic, powerful, flexible, and general-purpose rule enforcement module (REM); governance rules; and a governable secure super-platform (GSSP) that offers end-to-end protection against compromise or subversion by AI.
arXiv Detail & Related papers (2025-08-28T04:22:59Z)
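A minimal sketch of the externally enforced, fully deterministic rule enforcement module (REM) described above; the rule syntax and example actions are invented, since the abstract specifies only the module's properties.

```python
# Minimal sketch of an external, deterministic rule enforcement module (REM).
# The rule format and actions are invented for illustration; the abstract only
# specifies that enforcement is external, deterministic, and general-purpose.

RULES = [
    {"deny_if": lambda act: act["type"] == "self_modify"},
    {"deny_if": lambda act: act["type"] == "network" and not act.get("whitelisted")},
]

def rem_allows(action: dict) -> bool:
    """Deterministic check: the same action always yields the same verdict."""
    return not any(rule["deny_if"](action) for rule in RULES)

for action in [{"type": "network", "whitelisted": True}, {"type": "self_modify"}]:
    print(action["type"], "->", "allow" if rem_allows(action) else "deny")
```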
- Governance-as-a-Service: A Multi-Agent Framework for AI System Compliance and Policy Enforcement [0.0]
We introduce Governance-as-a-Service (GaaS): a policy-driven enforcement layer that regulates agent outputs at runtime. GaaS employs declarative rules and a Trust Factor mechanism that scores agents based on compliance and the severity of violations. Results show that GaaS reliably blocks or redirects high-risk behaviors while preserving throughput.
arXiv Detail & Related papers (2025-08-26T07:48:55Z)
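The Trust Factor mechanism above (scoring agents by compliance and violation severity) might look roughly like this; the penalty weights, recovery rate, and bounds are assumptions.

```python
# Sketch of a GaaS-style Trust Factor: agents lose trust in proportion to the
# severity of runtime violations and regain it on compliant actions.
# Weights, bounds, and the recovery rate are illustrative assumptions.
from typing import Optional

SEVERITY_PENALTY = {"low": 0.05, "medium": 0.15, "high": 0.40}

def update_trust(trust: float, violation: Optional[str]) -> float:
    if violation is None:
        return min(1.0, trust + 0.02)              # slow recovery when compliant
    return max(0.0, trust - SEVERITY_PENALTY[violation])

trust = 1.0
for event in [None, "low", None, "high", None]:
    trust = update_trust(trust, event)
print(f"final trust factor: {trust:.2f}")
# A runtime enforcement layer could then block or redirect outputs from
# agents whose trust falls below a policy threshold.
```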
- Never Compromise to Vulnerabilities: A Comprehensive Survey on AI Governance [211.5823259429128]
We propose a comprehensive framework integrating technical and societal dimensions, structured around three interconnected pillars: Intrinsic Security, Derivative Security, and Social Ethics. We identify three core challenges: (1) the generalization gap, where defenses fail against evolving threats; (2) inadequate evaluation protocols that overlook real-world risks; and (3) fragmented regulations leading to inconsistent oversight. Our framework offers actionable guidance for researchers, engineers, and policymakers to develop AI systems that are not only robust and secure but also ethically aligned and publicly trustworthy.
arXiv Detail & Related papers (2025-08-12T09:42:56Z)
- Toward Adaptive Categories: Dimensional Governance for Agentic AI [0.0]
Dimensional governance is a framework that tracks how decision authority, process autonomy, and accountability (the 3As) distribute dynamically across human-AI relationships. A critical advantage of this approach is its ability to explicitly monitor system movement toward and across key governance thresholds. We outline key dimensions, critical trust thresholds, and practical examples illustrating where rigid categorical frameworks fail.
arXiv Detail & Related papers (2025-05-16T14:43:12Z)
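The 3As above suggest a simple monitoring shape: track a system's position along each dimension and flag threshold crossings. The 0-to-1 scales, threshold values, and alarm directions below are illustrative assumptions.

```python
# Sketch: monitoring movement across governance thresholds along the 3As
# (decision authority, process autonomy, accountability). The 0-1 scales and
# the specific threshold values are illustrative assumptions.

THRESHOLDS = {"decision_authority": 0.7, "process_autonomy": 0.8, "accountability": 0.3}

def crossed(state: dict) -> list[str]:
    """Flag dimensions where the system has moved past a governance threshold."""
    alerts = []
    for dim, limit in THRESHOLDS.items():
        value = state[dim]
        # Accountability alarms when it drops too LOW; the others when too HIGH.
        if dim == "accountability" and value < limit:
            alerts.append(f"{dim} fell below {limit}")
        elif dim != "accountability" and value > limit:
            alerts.append(f"{dim} exceeded {limit}")
    return alerts

snapshot = {"decision_authority": 0.75, "process_autonomy": 0.5, "accountability": 0.2}
print(crossed(snapshot))
```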
- Causal Responsibility Attribution for Human-AI Collaboration [62.474732677086855]
This paper presents a causal framework using Structural Causal Models (SCMs) to systematically attribute responsibility in human-AI systems.
Two case studies illustrate the framework's adaptability in diverse human-AI collaboration scenarios.
arXiv Detail & Related papers (2024-11-05T17:17:45Z)
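A minimal structural-causal-model flavour of the attribution idea above: intervene on each contributor's action (a but-for counterfactual test) and check whether the outcome flips. The two-variable model is invented for illustration; the paper's SCMs are presumably far richer.

```python
# Toy SCM for human-AI responsibility attribution via interventions.
# Outcome: harm occurs if the AI recommends a bad action AND the human approves.
# The structure and the but-for attribution rule are illustrative assumptions.

def outcome(ai_recommends_bad: bool, human_approves: bool) -> bool:
    return ai_recommends_bad and human_approves

def but_for_responsible(factual: dict) -> list[str]:
    """An actor is responsible if intervening on their action flips the outcome."""
    harmed = outcome(**factual)
    responsible = []
    for actor in factual:
        counterfactual = dict(factual, **{actor: not factual[actor]})
        if outcome(**counterfactual) != harmed:
            responsible.append(actor)
    return responsible

observed = {"ai_recommends_bad": True, "human_approves": True}
print(but_for_responsible(observed))  # both actions were necessary for the harm
```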
- Beyond Accidents and Misuse: Decoding the Structural Risk Dynamics of Artificial Intelligence [0.0]
This paper advances the concept of structural risk by introducing a framework grounded in complex systems research. We classify structural risks into three categories: antecedent structural causes, antecedent AI system causes, and deleterious feedback loops. To anticipate and govern these dynamics, the paper proposes a methodological agenda incorporating scenario mapping, simulation, and exploratory foresight.
arXiv Detail & Related papers (2024-06-21T05:44:50Z)
- Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z)
- Sparsity in Partially Controllable Linear Systems [56.142264865866636]
We study partially controllable linear dynamical systems specified by an underlying sparsity pattern.
Our results characterize those state variables which are irrelevant for optimal control.
arXiv Detail & Related papers (2021-10-12T16:41:47Z)
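The setting of the last entry is the linear dynamics x_{t+1} = A x_t + B u_t, where sparsity in A and B can leave some state variables unreachable from the input and hence irrelevant for optimal control. A small numerical check, with an invented sparsity pattern, is:

```python
# Sketch: in x_{t+1} = A x_t + B u_t, sparsity can render some state variables
# unreachable from the input, hence irrelevant for optimal control.
# The particular A, B pattern below is an illustrative assumption.
import numpy as np

A = np.array([[0.9, 0.0, 0.0],
              [0.5, 0.8, 0.0],
              [0.0, 0.0, 0.7]])      # x3 is decoupled from x1, x2
B = np.array([[1.0], [0.0], [0.0]])  # input enters only through x1

# Columns of the reachability matrix [B, AB, A^2 B] span the controllable states.
R = np.hstack([B, A @ B, A @ A @ B])
reachable = np.any(np.abs(R) > 1e-12, axis=1)
print("controllable from input:", reachable)  # x3 stays False: irrelevant state
```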