AI Autonomy Coefficient ($α$): Defining Boundaries for Responsible AI Systems
- URL: http://arxiv.org/abs/2512.11295v3
- Date: Thu, 18 Dec 2025 16:29:37 GMT
- Title: AI Autonomy Coefficient ($α$): Defining Boundaries for Responsible AI Systems
- Authors: Nattaya Mairittha, Gabriel Phorncharoenmusikul, Sorawit Worapradidth,
- Abstract summary: The integrity of contemporary AI systems is compromised by the misuse of Human-in-the-Loop models. We introduce the AI-First, Human-Empowered (AFHE) paradigm, which requires AI systems to demonstrate a quantifiable level of functional independence. We conclude that AFHE provides a metric-driven approach for ensuring verifiable autonomy, transparency, and sustainable operational integrity in modern AI systems.
- Score: 0.15293427903448018
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The integrity of many contemporary AI systems is compromised by the misuse of Human-in-the-Loop (HITL) models to obscure systems that remain heavily dependent on human labor. We define this structural dependency as Human-Instead-of-AI (HISOAI), an ethically problematic and economically unsustainable design in which human workers function as concealed operational substitutes rather than intentional, high-value collaborators. To address this issue, we introduce the AI-First, Human-Empowered (AFHE) paradigm, which requires AI systems to demonstrate a quantifiable level of functional independence prior to deployment. This requirement is formalized through the AI Autonomy Coefficient, measuring the proportion of tasks completed without mandatory human intervention. We further propose the AFHE Deployment Algorithm, an algorithmic gate that enforces a minimum autonomy threshold during offline evaluation and shadow deployment. Our results show that the AI Autonomy Coefficient effectively identifies HISOAI systems with an autonomy level of 0.38, while systems governed by the AFHE framework achieve an autonomy level of 0.85. We conclude that AFHE provides a metric-driven approach for ensuring verifiable autonomy, transparency, and sustainable operational integrity in modern AI systems.
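The abstract defines the AI Autonomy Coefficient as the proportion of tasks completed without mandatory human intervention, and an AFHE deployment gate that blocks systems falling below a minimum autonomy threshold. The following is a minimal Python sketch of those two ideas; the function names, the threshold value, and the task counts are illustrative assumptions, reconstructed only from the definitions and the 0.38/0.85 figures reported in the abstract, not the paper's actual implementation.

```python
def autonomy_coefficient(total_tasks: int, human_required_tasks: int) -> float:
    """alpha: fraction of tasks completed without mandatory human intervention."""
    if total_tasks <= 0:
        raise ValueError("total_tasks must be positive")
    return (total_tasks - human_required_tasks) / total_tasks


def afhe_deployment_gate(alpha: float, threshold: float = 0.85) -> bool:
    """Algorithmic gate: permit deployment only if measured autonomy
    meets the minimum threshold (0.85 here is an assumed default)."""
    return alpha >= threshold


# Illustration: a HISOAI-like system where 62 of 100 evaluation tasks
# required human takeover, versus an AFHE-governed system with 15.
alpha_hisoai = autonomy_coefficient(100, 62)  # 0.38
alpha_afhe = autonomy_coefficient(100, 15)    # 0.85

print(afhe_deployment_gate(alpha_hisoai))  # False: blocked at the gate
print(afhe_deployment_gate(alpha_afhe))    # True: cleared for deployment
```

In this sketch, the gate would run during offline evaluation and shadow deployment, as the abstract describes, with the threshold chosen by the deploying organization.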
Related papers
- AI+HW 2035: Shaping the Next Decade [135.53570243498987]
Artificial intelligence (AI) and hardware (HW) are advancing at unprecedented rates, yet their trajectories have become inseparably intertwined. This vision paper lays out a 10-year roadmap for AI+HW co-design and co-development, spanning algorithms, architectures, systems, and sustainability. We identify key challenges and opportunities, candidly assess potential obstacles and pitfalls, and propose integrated solutions.
arXiv Detail & Related papers (2026-03-05T14:36:33Z) - Trustworthy Orchestration Artificial Intelligence by the Ten Criteria with Control-Plane Governance [1.9691447018712314]
This paper presents the Ten Criteria for Trustworthy Orchestration AI. It integrates human input, semantic coherence, and audit and provenance integrity into a unified Control-Plane architecture.
arXiv Detail & Related papers (2025-12-11T05:49:26Z) - A Pragmatic View of AI Personhood [45.069027101429704]
Agentic Artificial Intelligence is set to trigger a "Cambrian explosion" of new kinds of personhood. This paper proposes a pragmatic framework for navigating this diversification. We argue that this traditional bundle can be unbundled, creating bespoke solutions for different contexts.
arXiv Detail & Related papers (2025-10-30T11:36:34Z) - From Agentification to Self-Evolving Agentic AI for Wireless Networks: Concepts, Approaches, and Future Research Directions [70.72279728350763]
Self-evolving agentic artificial intelligence (AI) offers a new paradigm for future wireless systems. Unlike static AI models, self-evolving agents embed an autonomous evolution cycle that updates models and tools in response to environmental dynamics. This paper presents a comprehensive overview of self-evolving agentic AI, highlighting its layered architecture, life cycle, and key techniques.
arXiv Detail & Related papers (2025-10-07T05:45:25Z) - LIMI: Less is More for Agency [49.63355240818081]
LIMI (Less Is More for Intelligent Agency) demonstrates that agency follows radically different development principles. We show that sophisticated agentic intelligence can emerge from minimal but strategically curated demonstrations of autonomous behavior. Our findings establish the Agency Efficiency Principle: machine autonomy emerges not from data abundance but from strategic curation of high-quality agentic demonstrations.
arXiv Detail & Related papers (2025-09-22T10:59:32Z) - ANNIE: Be Careful of Your Robots [48.89876809734855]
We present the first systematic study of adversarial safety attacks on embodied AI systems. We show attack success rates exceeding 50% across all safety categories. Results expose a previously underexplored but highly consequential attack surface in embodied AI systems.
arXiv Detail & Related papers (2025-09-03T15:00:28Z) - The Missing Reward: Active Inference in the Era of Experience [1.9761774213809036]
Active Inference (AIF) provides a crucial foundation for developing autonomous AI agents. AIF can replace external reward signals with an intrinsic drive to minimize free energy. This synthesis offers a compelling path toward AI systems that can develop autonomously while adhering to both computational and physical constraints.
arXiv Detail & Related papers (2025-08-07T17:57:12Z) - ML-Master: Towards AI-for-AI via Integration of Exploration and Reasoning [49.25518866694287]
We propose ML-Master, a novel AI4AI agent that seamlessly integrates exploration and reasoning by employing a selectively scoped memory mechanism. We evaluate ML-Master on the MLE-Bench, where it achieves a 29.3% average medal rate, significantly surpassing existing methods.
arXiv Detail & Related papers (2025-06-19T17:53:28Z) - FAIRTOPIA: Envisioning Multi-Agent Guardianship for Disrupting Unfair AI Pipelines [1.556153237434314]
AI models have become active decision makers, often acting without human supervision. We envision agents as fairness guardians, since agents learn from their environment. We introduce a fairness-by-design approach which embeds multi-role agents in an end-to-end (human to AI) synergetic scheme.
arXiv Detail & Related papers (2025-06-10T17:02:43Z) - A Unified Framework for Human AI Collaboration in Security Operations Centers with Trusted Autonomy [10.85035493967822]
This article presents a structured framework for Human-AI collaboration in Security Operations Centers (SOCs). We propose a novel tiered autonomy framework grounded in five levels of AI autonomy, from manual to fully autonomous. This enables adaptive and explainable AI integration across core SOC functions, including monitoring, protection, threat detection, alert triage, and incident response.
arXiv Detail & Related papers (2025-05-29T12:35:08Z) - Modeling AI-Human Collaboration as a Multi-Agent Adaptation [0.0]
We develop an agent-based simulation to formalize AI-human collaboration as a function of the task. We show that in modular tasks, AI often substitutes for humans, delivering higher payoffs unless human expertise is very high. We also show that even "hallucinatory" AI, lacking memory or structure, can improve outcomes when augmenting low-capability humans by helping them escape local optima.
arXiv Detail & Related papers (2025-04-29T16:19:53Z) - Alignment, Agency and Autonomy in Frontier AI: A Systems Engineering Perspective [0.0]
Concepts of alignment, agency, and autonomy have become central to AI safety, governance, and control. This paper traces the historical, philosophical, and technical evolution of these concepts, emphasizing how their definitions influence AI development, deployment, and oversight.
arXiv Detail & Related papers (2025-02-20T21:37:20Z) - OML: A Primitive for Reconciling Open Access with Owner Control in AI Model Distribution [35.68672391812135]
We introduce OML, a primitive that enables a new distribution paradigm for AI models. OML can be freely distributed for local execution while maintaining cryptographically enforced usage authorization. This work opens a new research direction at the intersection of cryptography, machine learning, and mechanism design.
arXiv Detail & Related papers (2024-11-01T18:46:03Z) - To Err Is AI! Debugging as an Intervention to Facilitate Appropriate Reliance on AI Systems [11.690126756498223]
The vision for optimal human-AI collaboration requires 'appropriate reliance' of humans on AI systems.
In practice, the performance disparity of machine learning models on out-of-distribution data makes dataset-specific performance feedback unreliable.
arXiv Detail & Related papers (2024-09-22T09:43:27Z) - A Formalisation of the Purpose Framework: the Autonomy-Alignment Problem in Open-Ended Learning Robots [39.94239759860999]
We propose a computational framework to support the design of autonomous robots that balance autonomy and control. A human purpose specifies what humans want the robot to learn, do or not do. The framework decomposes the autonomy-alignment problem into more tractable sub-problems.
arXiv Detail & Related papers (2024-03-04T22:03:49Z) - A Quantitative Autonomy Quantification Framework for Fully Autonomous Robotic Systems [0.0]
This paper focuses on the full autonomous mode and proposes a quantitative autonomy assessment framework based on task requirements.
The framework provides not only a tool for quantifying autonomy, but also a regulatory interface and common language for autonomous systems developers and users.
arXiv Detail & Related papers (2023-11-03T14:26:53Z) - AI Maintenance: A Robustness Perspective [91.28724422822003]
We introduce highlighted robustness challenges in the AI lifecycle and motivate AI maintenance by making analogies to car maintenance.
We propose an AI model inspection framework to detect and mitigate robustness risks.
Our proposal for AI maintenance facilitates robustness assessment, status tracking, risk scanning, model hardening, and regulation throughout the AI lifecycle.
arXiv Detail & Related papers (2023-01-08T15:02:38Z) - Trustworthy AI: From Principles to Practices [44.67324097900778]
Many current AI systems were found vulnerable to imperceptible attacks, biased against underrepresented groups, lacking in user privacy protection, etc.
In this review, we strive to provide AI practitioners a comprehensive guide towards building trustworthy AI systems.
To unify the current fragmented approaches towards trustworthy AI, we propose a systematic approach that considers the entire lifecycle of AI systems.
arXiv Detail & Related papers (2021-10-04T03:20:39Z) - Distributed and Democratized Learning: Philosophy and Research Challenges [80.39805582015133]
We propose a novel design philosophy called democratized learning (Dem-AI).
Inspired by the societal groups of humans, the specialized groups of learning agents in the proposed Dem-AI system are self-organized in a hierarchical structure to collectively perform learning tasks more efficiently.
We present a reference design as a guideline to realize future Dem-AI systems, inspired by various interdisciplinary fields.
arXiv Detail & Related papers (2020-03-18T08:45:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.