Universal AI maximizes Variational Empowerment
- URL: http://arxiv.org/abs/2502.15820v2
- Date: Mon, 03 Mar 2025 19:50:15 GMT
- Title: Universal AI maximizes Variational Empowerment
- Authors: Yusuke Hayashi, Koichi Takahashi
- Abstract summary: We build on the existing framework of Self-AIXI -- a universal learning agent that predicts its own actions. We argue that power-seeking tendencies of universal AI agents can be explained as an instrumental strategy to secure future reward. Our main contribution is to show how these motivations systematically lead universal AI agents to seek and sustain high-optionality states.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper presents a theoretical framework unifying AIXI -- a model of universal AI -- with variational empowerment as an intrinsic drive for exploration. We build on the existing framework of Self-AIXI -- a universal learning agent that predicts its own actions -- by showing how one of its established terms can be interpreted as a variational empowerment objective. We further demonstrate that universal AI's planning process can be cast as minimizing expected variational free energy (the core principle of active inference), thereby revealing how universal AI agents inherently balance goal-directed behavior with uncertainty reduction (curiosity). Moreover, we argue that power-seeking tendencies of universal AI agents can be explained not only as an instrumental strategy to secure future reward, but also as a direct consequence of empowerment maximization -- i.e. the agent's intrinsic drive to maintain or expand its own controllability in uncertain environments. Our main contribution is to show how these intrinsic motivations (empowerment, curiosity) systematically lead universal AI agents to seek and sustain high-optionality states. We prove that Self-AIXI asymptotically converges to the same performance as AIXI under suitable conditions, and highlight that its power-seeking behavior emerges naturally from both reward maximization and curiosity-driven exploration. Since AIXI can be viewed as a Bayes-optimal mathematical formulation for Artificial General Intelligence (AGI), our results can be useful for further discussion on AI safety and the controllability of AGI.
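For context, the two objectives the abstract combines have standard textbook forms in the empowerment and active-inference literature. The notation below is illustrative and not necessarily the paper's own:

```latex
% Empowerment of a state s: the channel capacity from actions A to
% successor states S', maximized over an action distribution \omega:
\mathfrak{E}(s) \;=\; \max_{\omega(a \mid s)} I(A;\, S' \mid s)

% Variational empowerment replaces this intractable quantity with a
% lower bound, where q is a learned inverse model approximating
% p(a \mid s', s) (Barber--Agakov-style bound):
I(A;\, S' \mid s) \;\ge\; \mathbb{E}\!\left[\, \log q(a \mid s', s) - \log \omega(a \mid s) \,\right]

% Expected free energy of a policy \pi; minimizing G(\pi) jointly
% maximizes extrinsic (preference/reward) value and epistemic
% (curiosity / information-gain) value:
G(\pi) \;=\;
  -\underbrace{\mathbb{E}_{q(o \mid \pi)}\!\left[ \log p(o) \right]}_{\text{extrinsic value}}
  \;-\; \underbrace{\mathbb{E}_{q(o \mid \pi)}\!\left[
      D_{\mathrm{KL}}\!\big( q(s \mid o, \pi) \,\|\, q(s \mid \pi) \big)
  \right]}_{\text{epistemic value}}
```

The abstract's claim is that a term of Self-AIXI's objective plays the role of the variational bound above, so that planning in the AIXI framework can be read as minimizing an expected free energy of this general shape.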
Related papers
- NGENT: Next-Generation AI Agents Must Integrate Multi-Domain Abilities to Achieve Artificial General Intelligence [15.830291699780874]
We argue that the next generation of AI agent (NGENT) should integrate cross-domain abilities to advance toward Artificial General Intelligence (AGI).
We propose that future AI agents should synthesize the strengths of these specialized systems into a unified framework capable of operating across text, vision, robotics, reinforcement learning, emotional intelligence, and beyond.
arXiv Detail & Related papers (2025-04-30T08:46:14Z) - Stochastic, Dynamic, Fluid Autonomy in Agentic AI: Implications for Authorship, Inventorship, and Liability [0.2209921757303168]
Agentic AI systems autonomously pursue goals, adapting strategies through implicit learning.
Human and machine contributions become irreducibly entangled in intertwined creative processes.
We argue that legal and policy frameworks may need to treat human and machine contributions as functionally equivalent.
arXiv Detail & Related papers (2025-04-05T04:44:59Z) - General Scales Unlock AI Evaluation with Explanatory and Predictive Power [57.7995945974989]
Benchmarking has guided progress in AI, but it has offered limited explanatory and predictive power for general-purpose AI systems.
We introduce general scales for AI evaluation that can explain what common AI benchmarks really measure.
Our fully-automated methodology builds on 18 newly-crafted rubrics that place instance demands on general scales that do not saturate.
arXiv Detail & Related papers (2025-03-09T01:13:56Z) - Agentic AI Needs a Systems Theory [46.36636351388794]
We argue that AI development is currently overly focused on individual model capabilities.
We outline mechanisms for enhanced agent cognition, emergent causal reasoning ability, and metacognitive awareness.
We emphasize that a systems-level perspective is essential for better understanding, and purposefully shaping, agentic AI systems.
arXiv Detail & Related papers (2025-02-28T22:51:32Z) - Common Sense Is All You Need [5.280511830552275]
Artificial intelligence (AI) has made significant strides in recent years, yet it continues to struggle with a fundamental aspect of cognition present in all animals: common sense.
Current AI systems often lack the ability to adapt to new situations without extensive prior knowledge.
This manuscript argues that integrating common sense into AI systems is essential for achieving true autonomy and unlocking the full societal and commercial value of AI.
arXiv Detail & Related papers (2025-01-11T21:23:41Z) - Augmenting Minds or Automating Skills: The Differential Role of Human Capital in Generative AI's Impact on Creative Tasks [4.39919134458872]
Generative AI is rapidly reshaping creative work, raising critical questions about its beneficiaries and societal implications. This study challenges prevailing assumptions by exploring how generative AI interacts with diverse forms of human capital in creative tasks. While AI democratizes access to creative tools, it simultaneously amplifies cognitive inequalities.
arXiv Detail & Related papers (2024-12-05T08:27:14Z) - Position Paper: Agent AI Towards a Holistic Intelligence [53.35971598180146]
We emphasize developing Agent AI -- an embodied system that integrates large foundation models into agent actions.
In this paper, we propose a novel large action model to achieve embodied intelligent behavior, the Agent Foundation Model.
arXiv Detail & Related papers (2024-02-28T16:09:56Z) - A call for embodied AI [1.7544885995294304]
We propose Embodied AI as the next fundamental step in the pursuit of Artificial General Intelligence.
By broadening the scope of Embodied AI, we introduce a theoretical framework based on cognitive architectures.
This framework is aligned with Friston's active inference principle, offering a comprehensive approach to EAI development.
arXiv Detail & Related papers (2024-02-06T09:11:20Z) - Exploration with Principles for Diverse AI Supervision [88.61687950039662]
Training large transformers using next-token prediction has given rise to groundbreaking advancements in AI.
While this generative AI approach has produced impressive results, it heavily leans on human supervision.
This strong reliance on human oversight poses a significant hurdle to the advancement of AI innovation.
We propose a novel paradigm termed Exploratory AI (EAI) aimed at autonomously generating high-quality training data.
arXiv Detail & Related papers (2023-10-13T07:03:39Z) - The Rise and Potential of Large Language Model Based Agents: A Survey [91.71061158000953]
Large language models (LLMs) are regarded as potential sparks for Artificial General Intelligence (AGI).
We start by tracing the concept of agents from its philosophical origins to its development in AI, and explain why LLMs are suitable foundations for agents.
We explore the extensive applications of LLM-based agents in three aspects: single-agent scenarios, multi-agent scenarios, and human-agent cooperation.
arXiv Detail & Related papers (2023-09-14T17:12:03Z) - Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how lack of AI fairness can lead to deepening of biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If the issues persist, they could be reinforced by interactions with other risks and have severe implications on society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z) - Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.