A Mathematical Theory of Agency and Intelligence
- URL: http://arxiv.org/abs/2602.22519v1
- Date: Thu, 26 Feb 2026 01:26:21 GMT
- Title: A Mathematical Theory of Agency and Intelligence
- Authors: Wael Hafez, Chenan Wei, Rodrigo Felipe, Amir Nazeri, Cameron Reid
- Abstract summary: We show how much of the total information a system deploys is actually shared between its observations, actions, and outcomes. We prove this shared fraction, which we term bipredictability, P, is intrinsic to any interaction and derivable from first principles. We demonstrate a feedback architecture that monitors P in real time, establishing a prerequisite for adaptive, resilient AI.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To operate reliably under changing conditions, complex systems require feedback on how effectively they use resources, not just whether objectives are met. Current AI systems process vast information to produce sophisticated predictions, yet predictions can appear successful while the underlying interaction with the environment degrades. What is missing is a principled measure of how much of the total information a system deploys is actually shared between its observations, actions, and outcomes. We prove this shared fraction, which we term bipredictability, P, is intrinsic to any interaction, derivable from first principles, and strictly bounded: P can reach unity in quantum systems, is at most 0.5 in classical systems, and is lower still once agency (action selection) is introduced. We confirm these bounds in a physical system (a double pendulum), in reinforcement learning agents, and in multi-turn LLM conversations. These results distinguish agency from intelligence: agency is the capacity to act on predictions, whereas intelligence additionally requires learning from interaction, self-monitoring of learning effectiveness, and adapting the scope of observations, actions, and outcomes to restore effective learning. By this definition, current AI systems achieve agency but not intelligence. Inspired by thalamocortical regulation in biological systems, we demonstrate a feedback architecture that monitors P in real time, establishing a prerequisite for adaptive, resilient AI.
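The abstract does not give the paper's exact formula for bipredictability P, but the idea of a "shared fraction of total information" can be illustrated with a standard information-theoretic quantity: mutual information between two interaction variables, normalized by their joint entropy. The function name, the choice of normalization, and the two-variable simplification below are all assumptions for illustration, not the paper's definition.

```python
# Illustrative sketch (NOT the paper's exact definition of P):
# estimate a shared-information fraction between two coordinates of an
# interaction stream as I(X; Y) / H(X, Y), computed from empirical counts.
import math
from collections import Counter

def entropy(counts):
    """Shannon entropy (bits) of an empirical distribution given as counts."""
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def shared_fraction(pairs):
    """Mutual information between the two coordinates of `pairs`,
    normalized by their joint entropy; lies in [0, 1]."""
    joint = Counter(pairs)
    left = Counter(x for x, _ in pairs)
    right = Counter(y for _, y in pairs)
    h_joint = entropy(joint)
    mi = entropy(left) + entropy(right) - h_joint
    return mi / h_joint if h_joint > 0 else 0.0

# Perfectly correlated stream: all information is shared, fraction = 1.
print(shared_fraction([(0, 0), (1, 1)] * 50))              # → 1.0
# Independent stream: no information is shared, fraction = 0.
print(shared_fraction([(0, 0), (0, 1), (1, 0), (1, 1)] * 25))  # → 0.0
```

The normalization by joint entropy is one of several reasonable choices (others normalize by the smaller marginal entropy); whichever is used, the quantity is intrinsic to the observed interaction rather than to any particular objective, which is the property the abstract emphasizes.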
Related papers
- Fundamentals of Building Autonomous LLM Agents [64.39018305018904]
This paper reviews the architecture and implementation methods of agents powered by large language models (LLMs). The research aims to explore patterns to develop "agentic" LLMs that can automate complex tasks and bridge the performance gap with human capabilities.
arXiv Detail & Related papers (2025-10-10T10:32:39Z) - Knowledge Conceptualization Impacts RAG Efficacy [0.0786430477112975]
We investigate the design of transferable and interpretable neurosymbolic AI systems. Specifically, we focus on a class of systems referred to as "Agentic Retrieval-Augmented Generation" systems.
arXiv Detail & Related papers (2025-07-12T20:10:26Z) - Computational Irreducibility as the Foundation of Agency: A Formal Model Connecting Undecidability to Autonomous Behavior in Complex Systems [0.0]
We establish precise mathematical connections, proving that for any truly autonomous system, questions about its future behavior are fundamentally undecidable. The findings have significant implications for artificial intelligence, biological modeling, and philosophical concepts like free will.
arXiv Detail & Related papers (2025-05-05T21:24:50Z) - Advances and Challenges in Foundation Agents: From Brain-Inspired Intelligence to Evolutionary, Collaborative, and Safe Systems [132.77459963706437]
This book provides a comprehensive overview, framing intelligent agents within modular, brain-inspired architectures. It explores self-enhancement and adaptive evolution mechanisms, showing how agents autonomously refine their capabilities. It also examines the collective intelligence emerging from agent interactions, cooperation, and societal structures.
arXiv Detail & Related papers (2025-03-31T18:00:29Z) - Intelligence at the Edge of Chaos [24.864145150537855]
We investigate how the complexity of rule-based systems influences the capabilities of models trained to predict these rules. Our findings reveal that rules with higher complexity lead to models exhibiting greater intelligence. We conjecture that intelligence arises from the ability to predict complexity and that creating intelligence may require only exposure to complexity.
arXiv Detail & Related papers (2024-10-03T14:42:34Z) - Position Paper: Agent AI Towards a Holistic Intelligence [53.35971598180146]
We emphasize developing Agent AI -- an embodied system that integrates large foundation models into agent actions.
In this paper, we propose a novel large action model to achieve embodied intelligent behavior, the Agent Foundation Model.
arXiv Detail & Related papers (2024-02-28T16:09:56Z) - Agent AI: Surveying the Horizons of Multimodal Interaction [83.18367129924997]
"Agent AI" is a class of interactive systems that can perceive visual stimuli, language inputs, and other environmentally-grounded data.
We envision a future where people can easily create any virtual reality or simulated scene and interact with agents embodied within the virtual environment.
arXiv Detail & Related papers (2024-01-07T19:11:18Z) - Incremental procedural and sensorimotor learning in cognitive humanoid robots [52.77024349608834]
This work presents a cognitive agent that can learn procedures incrementally.
We show the cognitive functions required in each substage and how adding new functions helps address tasks previously unsolved by the agent.
Results show that this approach is capable of solving complex tasks incrementally.
arXiv Detail & Related papers (2023-04-30T22:51:31Z) - Pessimism meets VCG: Learning Dynamic Mechanism Design via Offline Reinforcement Learning [114.36124979578896]
We design a dynamic mechanism using offline reinforcement learning algorithms.
Our algorithm is based on the pessimism principle and only requires a mild assumption on the coverage of the offline data set.
arXiv Detail & Related papers (2022-05-05T05:44:26Z) - Assessing Human Interaction in Virtual Reality With Continually Learning Prediction Agents Based on Reinforcement Learning Algorithms: A Pilot Study [6.076137037890219]
We investigate how the interaction between a human and a continually learning prediction agent develops as the agent develops competency.
We develop a virtual reality environment and a time-based prediction task wherein learned predictions from a reinforcement learning (RL) algorithm augment human predictions.
Our findings suggest that human trust of the system may be influenced by early interactions with the agent, and that trust in turn affects strategic behaviour.
arXiv Detail & Related papers (2021-12-14T22:46:44Z) - An active inference model of collective intelligence [0.0]
This paper posits a minimal agent-based model that simulates the relationship between local individual-level interaction and collective intelligence.
Results show that stepwise cognitive transitions increase system performance by providing complementary mechanisms for alignment between agents' local and global optima.
arXiv Detail & Related papers (2021-04-02T14:32:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.