Computational Irreducibility as the Foundation of Agency: A Formal Model Connecting Undecidability to Autonomous Behavior in Complex Systems
- URL: http://arxiv.org/abs/2505.04646v2
- Date: Wed, 11 Jun 2025 13:38:17 GMT
- Title: Computational Irreducibility as the Foundation of Agency: A Formal Model Connecting Undecidability to Autonomous Behavior in Complex Systems
- Authors: Poria Azadi
- Abstract summary: We establish precise mathematical connections, proving that for any truly autonomous system, questions about its future behavior are fundamentally undecidable. The findings have significant implications for artificial intelligence, biological modeling, and philosophical concepts like free will.
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: This article presents a formal model demonstrating that genuine autonomy, the ability of a system to self-regulate and pursue objectives, fundamentally implies computational unpredictability from an external perspective. We establish precise mathematical connections, proving that for any truly autonomous system, questions about its future behavior are fundamentally undecidable. This formal undecidability, rather than mere complexity, grounds a principled distinction between autonomous and non-autonomous systems. Our framework integrates insights from computational theory and biology, particularly regarding emergent agency and computational irreducibility, to explain how novel information and purpose can arise within a physical universe. The findings have significant implications for artificial intelligence, biological modeling, and philosophical concepts like free will.
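The undecidability claim in the abstract rests on the same diagonalization that underlies the halting problem: any total "behavior predictor" for arbitrary programs can be defeated by a program that consults the predictor and does the opposite. The following is a minimal illustrative sketch of that argument, not code from the paper; the names `make_contrarian` and `naive_predictor` are hypothetical.

```python
# Diagonalization sketch: a program that inverts whatever a given
# predictor says about it. No total predictor can be right about it.

def make_contrarian(predictor):
    """Build a program that does the opposite of the predictor's verdict."""
    def contrarian():
        if predictor(contrarian):   # predictor claims "halts" ...
            while True:             # ... so loop forever instead
                pass
        return "halted"             # predictor claims "loops", so halt
    return contrarian

# A toy predictor that claims every program halts.
def naive_predictor(program):
    return True

adversary = make_contrarian(naive_predictor)
print(naive_predictor(adversary))  # True -- yet adversary() would never halt

# The symmetric case: a predictor that claims every program loops
# is refuted by the contrarian simply returning.
pessimist = lambda program: False
tame = make_contrarian(pessimist)
print(tame())  # halted
```

Either way the predictor is wrong about the contrarian, which is the core of the paper's claim that external prediction of a genuinely autonomous system cannot be both total and correct.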
Related papers
- Beyond Statistical Learning: Exact Learning Is Essential for General Intelligence [59.07578850674114]
Sound deductive reasoning is an indisputably desirable aspect of general intelligence. It is well-documented that even the most advanced frontier systems regularly and consistently falter on easily-solvable reasoning tasks. We argue that their unsound behavior is a consequence of the statistical learning approach powering their development.
arXiv Detail & Related papers (2025-06-30T14:37:50Z) - Nature's Insight: A Novel Framework and Comprehensive Analysis of Agentic Reasoning Through the Lens of Neuroscience [11.174550573411008]
We propose a novel neuroscience-inspired framework for agentic reasoning. We apply this framework to systematically classify and analyze existing AI reasoning methods. We propose new neural-inspired reasoning methods, analogous to chain-of-thought prompting.
arXiv Detail & Related papers (2025-05-07T14:25:46Z) - Stochastic, Dynamic, Fluid Autonomy in Agentic AI: Implications for Authorship, Inventorship, and Liability [0.2209921757303168]
Agentic AI systems autonomously pursue goals, adapting strategies through implicit learning. Human and machine contributions become irreducibly entangled in intertwined creative processes. We argue that legal and policy frameworks may need to treat human and machine contributions as functionally equivalent.
arXiv Detail & Related papers (2025-04-05T04:44:59Z) - Dissociating Artificial Intelligence from Artificial Consciousness [0.4537124110113416]
Developments in machine learning and computing power suggest that artificial general intelligence is within reach. This raises the question of artificial consciousness: if a computer were to be functionally equivalent to a human, would it experience sights, sounds, and thoughts, as we do when we are conscious? We employ Integrated Information Theory (IIT), which provides principled tools to determine whether a system is conscious.
arXiv Detail & Related papers (2024-12-05T19:28:35Z) - Emergence of Self-Identity in AI: A Mathematical Framework and Empirical Study with Generative Large Language Models [4.036530158875673]
This paper introduces a mathematical framework for defining and quantifying self-identity in AI systems. Our framework posits that self-identity emerges from two mathematically quantifiable conditions. The implications of our study are immediately relevant to the fields of humanoid robotics and autonomous systems.
arXiv Detail & Related papers (2024-11-27T17:23:47Z) - "Efficient Complexity": a Constrained Optimization Approach to the Evolution of Natural Intelligence [0.0]
A fundamental question at the intersection of information theory, biophysics, bioinformatics, and thermodynamics concerns the principles and processes that guide the development of natural intelligence in environments where information about external stimuli may not be available a priori. A novel approach to describing the information processes of natural learning is proposed in the framework of constrained optimization. Non-trivial conclusions about the relationships between the complexity, variability, and efficiency of the structure, or architecture, of learning models, drawn from the proposed formalism, can explain the effectiveness of neural networks as collaborative groups of small intelligent units in biological and artificial intelligence.
arXiv Detail & Related papers (2024-10-03T11:54:33Z) - Closing the Loop: How Semantic Closure Enables Open-Ended Evolution [0.5755004576310334]
This manuscript explores the evolutionary emergence of semantic closure. It integrates concepts from relational biology, physical biosemiotics, and ecological psychology into a unified computational enactivism framework.
arXiv Detail & Related papers (2024-04-05T19:35:38Z) - Mathematical Algorithm Design for Deep Learning under Societal and Judicial Constraints: The Algorithmic Transparency Requirement [65.26723285209853]
We derive a framework to analyze whether a transparent implementation in a computing model is feasible.
Based on previous results, we find that Blum-Shub-Smale Machines have the potential to establish trustworthy solvers for inverse problems.
arXiv Detail & Related papers (2024-01-18T15:32:38Z) - Interactive Autonomous Navigation with Internal State Inference and Interactivity Estimation [58.21683603243387]
We propose three auxiliary tasks with relational-temporal reasoning and integrate them into the standard Deep Learning framework.
These auxiliary tasks provide additional supervision signals to infer the behavior patterns of other interactive agents.
Our approach achieves robust and state-of-the-art performance in terms of standard evaluation metrics.
arXiv Detail & Related papers (2023-11-27T18:57:42Z) - Towards Probabilistic Causal Discovery, Inference & Explanations for Autonomous Drones in Mine Surveying Tasks [5.569226615350014]
Causal modelling can aid autonomous agents in making decisions and explaining outcomes.
Here we identify challenges relating to causality in the context of a drone system operating in a salt mine.
We propose a probabilistic causal framework consisting of: causally-informed POMDP planning, online SCM adaptation, and post-hoc counterfactual explanations.
arXiv Detail & Related papers (2023-08-19T15:12:55Z) - Kernel Based Cognitive Architecture for Autonomous Agents [91.3755431537592]
This paper considers an evolutionary approach to creating a cognitive functionality.
We consider a cognitive architecture which ensures the evolution of the agent on the basis of Symbol Emergence Problem solution.
arXiv Detail & Related papers (2022-07-02T12:41:32Z) - Pessimism meets VCG: Learning Dynamic Mechanism Design via Offline Reinforcement Learning [114.36124979578896]
We design a dynamic mechanism using offline reinforcement learning algorithms.
Our algorithm is based on the pessimism principle and only requires a mild assumption on the coverage of the offline data set.
arXiv Detail & Related papers (2022-05-05T05:44:26Z) - Independent Natural Policy Gradient Methods for Potential Games: Finite-time Global Convergence with Entropy Regularization [28.401280095467015]
We study the finite-time convergence of independent entropy-regularized natural policy gradient (NPG) methods for potential games.
We show that the proposed method converges to the quantal response equilibrium (QRE) at a sublinear rate, which is independent of the size of the action space.
arXiv Detail & Related papers (2022-04-12T01:34:02Z) - Automated Machine Learning, Bounded Rationality, and Rational Metareasoning [62.997667081978825]
We will look at automated machine learning (AutoML) and related problems from the perspective of bounded rationality.
Taking actions under bounded resources requires an agent to reflect on how to use these resources in an optimal way.
arXiv Detail & Related papers (2021-09-10T09:10:20Z) - CausalCity: Complex Simulations with Agency for Causal Discovery and Reasoning [68.74447489372037]
We present a high-fidelity simulation environment that is designed for developing algorithms for causal discovery and counterfactual reasoning.
A core component of our work is to introduce agency, such that it is simple to define and create complex scenarios.
We perform experiments with three state-of-the-art methods to create baselines and highlight the affordances of this environment.
arXiv Detail & Related papers (2021-06-25T00:21:41Z) - Counterfactual Explanations as Interventions in Latent Space [62.997667081978825]
Counterfactual explanations aim to provide to end users a set of features that need to be changed in order to achieve a desired outcome.
Current approaches rarely take into account the feasibility of actions needed to achieve the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations.
arXiv Detail & Related papers (2021-06-14T20:48:48Z) - Machine Common Sense [77.34726150561087]
Machine common sense remains a broad, potentially unbounded problem in artificial intelligence (AI).
This article deals with the aspects of modeling commonsense reasoning focusing on such domain as interpersonal interactions.
arXiv Detail & Related papers (2020-06-15T13:59:47Z) - Neuro-symbolic Architectures for Context Understanding [59.899606495602406]
We propose the use of hybrid AI methodology as a framework for combining the strengths of data-driven and knowledge-driven approaches.
Specifically, we inherit the concept of neuro-symbolism as a way of using knowledge-bases to guide the learning progress of deep neural networks.
arXiv Detail & Related papers (2020-03-09T15:04:07Z) - A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z) - A Mathematical Framework for Consciousness in Neural Networks [0.0]
This paper presents a novel mathematical framework for bridging the explanatory gap between consciousness and its physical correlates. We do not claim that qualia are singularities or that singularities "explain" why qualia feel as they do. We establish a framework that recognizes qualia as phenomena inherently beyond reduction to complexity, computation, or information.
arXiv Detail & Related papers (2017-04-04T18:32:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.