Toward Adaptive Categories: Dimensional Governance for Agentic AI
- URL: http://arxiv.org/abs/2505.11579v1
- Date: Fri, 16 May 2025 14:43:12 GMT
- Title: Toward Adaptive Categories: Dimensional Governance for Agentic AI
- Authors: Zeynep Engin, David Hand
- Abstract summary: Dimensional governance is a framework that tracks how decision authority, process autonomy, and accountability (the 3As) distribute dynamically across human-AI relationships. A critical advantage of this approach is its ability to explicitly monitor system movement toward and across key governance thresholds. We outline key dimensions, critical trust thresholds, and practical examples illustrating where rigid categorical frameworks fail.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As AI systems evolve from static tools to dynamic agents, traditional categorical governance frameworks -- based on fixed risk tiers, levels of autonomy, or human oversight models -- are increasingly insufficient on their own. Systems built on foundation models, self-supervised learning, and multi-agent architectures increasingly blur the boundaries that categories were designed to police. In this Perspective, we make the case for dimensional governance: a framework that tracks how decision authority, process autonomy, and accountability (the 3As) distribute dynamically across human-AI relationships. A critical advantage of this approach is its ability to explicitly monitor system movement toward and across key governance thresholds, enabling preemptive adjustments before risks materialize. This dimensional approach provides the necessary foundation for more adaptive categorization, enabling thresholds and classifications that can evolve with emerging capabilities. While categories remain essential for decision-making, building them upon dimensional foundations allows for context-specific adaptability and stakeholder-responsive governance that static approaches cannot achieve. We outline key dimensions, critical trust thresholds, and practical examples illustrating where rigid categorical frameworks fail -- and where a dimensional mindset could offer a more resilient and future-proof path forward for both governance and innovation at the frontier of artificial intelligence.
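The framework the abstract describes lends itself to a simple computational reading: represent each deployment as a point in a continuous 3A space and watch its trajectory relative to declared thresholds. The sketch below illustrates that reading in Python; the dimension names follow the abstract's 3As, while the threshold values, class names, and alerting logic are illustrative assumptions rather than anything specified in the paper.

```python
from dataclasses import dataclass

# Illustrative sketch only: the 3A dimension names come from the abstract;
# the threshold values and alerting logic are assumptions for demonstration.

@dataclass
class GovernanceProfile:
    decision_authority: float   # 0.0 = fully human-held, 1.0 = fully AI-held
    process_autonomy: float     # 0.0 = scripted tool, 1.0 = self-directed agent
    accountability: float       # 0.0 = diffuse/unclear, 1.0 = clearly assigned

# Hypothetical trust thresholds beyond which oversight should tighten.
THRESHOLDS = {
    "decision_authority": 0.7,
    "process_autonomy": 0.8,
}

def monitor(previous: GovernanceProfile, current: GovernanceProfile) -> list[str]:
    """Flag dimensions moving toward or across a threshold, enabling
    preemptive adjustment before risks materialize."""
    alerts = []
    for dim, limit in THRESHOLDS.items():
        before, now = getattr(previous, dim), getattr(current, dim)
        if before < limit <= now:
            alerts.append(f"{dim} crossed threshold {limit:.2f}")
        elif now < limit and now > before and limit - now <= 0.1:
            alerts.append(f"{dim} approaching threshold {limit:.2f}")
    return alerts

# Example: an agent accumulating decision authority across deployments.
p0 = GovernanceProfile(0.55, 0.60, 0.90)
p1 = GovernanceProfile(0.72, 0.65, 0.85)
print(monitor(p0, p1))  # ['decision_authority crossed threshold 0.70']
```

The point of the design is that categories ("high-risk system") become derived labels over continuous coordinates, so classifications can be recomputed as a system's position shifts rather than fixed at certification time.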
Related papers
- A Survey of Self-Evolving Agents: On Path to Artificial Super Intelligence
Large Language Models (LLMs) have demonstrated strong capabilities but remain fundamentally static. As LLMs are increasingly deployed in open-ended, interactive environments, this static nature has become a critical bottleneck. This survey provides the first systematic and comprehensive review of self-evolving agents.
arXiv Detail & Related papers (2025-07-28T17:59:05Z)
- Rethinking Data Protection in the (Generative) Artificial Intelligence Era
We propose a four-level taxonomy that captures the diverse protection needs arising in modern (generative) AI models and systems. Our framework offers a structured understanding of the trade-offs between data utility and control, spanning the entire AI pipeline.
arXiv Detail & Related papers (2025-07-03T02:45:51Z)
- A Survey on Autonomy-Induced Security Risks in Large Model-Based Agents
Recent advances in large language models (LLMs) have catalyzed the rise of autonomous AI agents. These large-model agents mark a paradigm shift from static inference systems to interactive, memory-augmented entities.
arXiv Detail & Related papers (2025-06-30T13:34:34Z)
- Rational Superautotrophic Diplomacy (SupraAD); A Conceptual Framework for Alignment Based on Interdisciplinary Findings on the Fundamentals of Cognition
Rational Superautotrophic Diplomacy (SupraAD) is a theoretical, interdisciplinary conceptual framework for alignment. It draws on cognitive systems analysis and instrumental rationality modeling. SupraAD reframes alignment as a challenge that predates AI, afflicting all sufficiently complex, coadapting intelligences.
arXiv Detail & Related papers (2025-06-03T17:28:25Z)
- Human-AI Governance (HAIG): A Trust-Utility Approach
This paper introduces the HAIG framework for analysing trust dynamics across evolving human-AI relationships. Our analysis reveals how technical advances in self-supervision, reasoning authority, and distributed decision-making drive non-uniform trust evolution.
arXiv Detail & Related papers (2025-05-03T01:57:08Z)
- Contemplative Wisdom for Superalignment
We advocate designing AI with intrinsic morality built into its cognitive architecture and world model. Inspired by contemplative wisdom traditions, we show how four axiomatic principles can instil a resilient Wise World Model in AI systems.
arXiv Detail & Related papers (2025-04-21T14:20:49Z)
- Position: Emergent Machina Sapiens Urge Rethinking Multi-Agent Paradigms
We argue that AI agents should be empowered to dynamically adjust their objectives. We call for a shift toward the emergent, self-organizing, and context-aware nature of these systems.
arXiv Detail & Related papers (2025-02-05T22:20:15Z)
- Latent-Predictive Empowerment: Measuring Empowerment without a Simulator
We present Latent-Predictive Empowerment (LPE), an algorithm that can compute empowerment in a more practical manner.
LPE learns large skillsets by maximizing an objective that is a principled replacement for the mutual information between skills and states; a standard formulation of that mutual-information objective is sketched after this entry.
arXiv Detail & Related papers (2024-10-15T00:41:18Z)
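As background for the LPE summary above: in the empowerment literature, empowerment is the channel capacity between a skill variable and the states it induces, typically made tractable with a variational lower bound. The block below states that standard formulation, with an assumed learned skill decoder q_\phi; it is general background, not LPE's replacement objective, which the abstract does not specify.

```latex
% Empowerment of state s: channel capacity between skills z and the
% states s' they induce. The inequality is the standard Barber--Agakov
% variational lower bound with a learned skill decoder q_\phi(z \mid s').
\[
\mathcal{E}(s) \;=\; \max_{p(z)} I(Z;\, S' \mid S = s)
\;\ge\; \max_{p(z)}\;
\mathbb{E}_{z \sim p(z),\; s' \sim p(s' \mid s,\, z)}
\bigl[\, \log q_\phi(z \mid s') - \log p(z) \,\bigr]
\]
```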
- Beyond Accidents and Misuse: Decoding the Structural Risk Dynamics of Artificial Intelligence
This paper advances the concept of structural risk by introducing a framework grounded in complex systems research. We classify structural risks into three categories: antecedent structural causes, antecedent AI system causes, and deleterious feedback loops. To anticipate and govern these dynamics, the paper proposes a methodological agenda incorporating scenario mapping, simulation, and exploratory foresight.
arXiv Detail & Related papers (2024-06-21T05:44:50Z)
- Mathematical Algorithm Design for Deep Learning under Societal and Judicial Constraints: The Algorithmic Transparency Requirement
We derive a framework to analyze whether a transparent implementation in a computing model is feasible.
Based on previous results, we find that Blum-Shub-Smale Machines have the potential to establish trustworthy solvers for inverse problems.
arXiv Detail & Related papers (2024-01-18T15:32:38Z)
- Measuring Value Alignment
This paper introduces a novel formalism to quantify the alignment between AI systems and human values.
By utilizing this formalism, AI developers and ethicists can better design and evaluate AI systems to ensure they operate in harmony with human values.
arXiv Detail & Related papers (2023-12-23T12:30:06Z)
- Levels of AGI for Operationalizing Progress on the Path to AGI
We propose a framework for classifying the capabilities and behavior of Artificial General Intelligence (AGI) models and their precursors.
This framework introduces levels of AGI performance, generality, and autonomy, providing a common language to compare models, assess risks, and measure progress along the path to AGI; a toy encoding of these levels appears after this list.
arXiv Detail & Related papers (2023-11-04T17:44:58Z)
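As a concrete illustration of the last entry, the toy sketch below encodes performance and generality levels as enums and composes them into grid-style labels such as "Competent AGI". The level names and percentile glosses follow the published Levels of AGI paper; the `classify` helper and the two-axis simplification are assumptions for illustration, not the authors' formal framework.

```python
from enum import IntEnum

# Toy encoding of the performance/generality scheme described in the abstract.
# Level names and percentile glosses follow the published paper; the pairing
# into a single label is an illustrative simplification.

class Performance(IntEnum):
    NO_AI = 0       # human-only workflows
    EMERGING = 1    # equal to or somewhat better than an unskilled human
    COMPETENT = 2   # at least 50th percentile of skilled adults
    EXPERT = 3      # at least 90th percentile of skilled adults
    VIRTUOSO = 4    # at least 99th percentile of skilled adults
    SUPERHUMAN = 5  # outperforms 100% of humans

class Generality(IntEnum):
    NARROW = 0      # a clearly scoped task or set of tasks
    GENERAL = 1     # a wide range of non-physical tasks

def classify(performance: Performance, generality: Generality) -> str:
    """Human-readable label for a (performance, generality) pair."""
    return f"{performance.name.title()} {'AGI' if generality else 'Narrow AI'}"

print(classify(Performance.COMPETENT, Generality.GENERAL))  # Competent AGI
print(classify(Performance.EXPERT, Generality.NARROW))      # Expert Narrow AI
```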