Emergence of Implicit World Models from Mortal Agents
- URL: http://arxiv.org/abs/2411.12304v1
- Date: Tue, 19 Nov 2024 07:43:30 GMT
- Title: Emergence of Implicit World Models from Mortal Agents
- Authors: Kazuya Horibe, Naoto Yoshida
- Abstract summary: We discuss the possibility of world models and active exploration as emergent properties of open-ended behavior optimization in autonomous agents.
In discussing the source of the open-endedness of living things, we start from the perspective of biological systems as understood by the mechanistic approach of theoretical biology and artificial life.
- Abstract: We discuss the possibility of world models and active exploration as emergent properties of open-ended behavior optimization in autonomous agents. In discussing the source of the open-endedness of living things, we start from the perspective of biological systems as understood by the mechanistic approach of theoretical biology and artificial life. From this perspective, we discuss the potential of homeostasis in particular as an open-ended objective for autonomous agents and as a general, integrative extrinsic motivation. We then discuss the possibility of implicitly acquiring a world model and active exploration through the internal dynamics of a network, and a hypothetical architecture for this, combining meta-reinforcement learning, which assumes domain adaptation, with a system that achieves robust homeostasis.
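The abstract's idea of homeostasis as an integrative objective can be made concrete as a reward signal that penalizes deviation of internal state variables from their setpoints. The following is a minimal sketch of that idea, not the paper's actual formulation; the variable names, setpoints, and quadratic penalty are illustrative assumptions.

```python
# Hypothetical homeostatic reward: the agent is rewarded for keeping
# internal variables (e.g. energy, temperature) near their setpoints.
# All names and values below are illustrative, not from the paper.
SETPOINTS = {"energy": 1.0, "temperature": 37.0}
WEIGHTS = {"energy": 1.0, "temperature": 0.1}  # relative importance

def homeostatic_reward(internal_state: dict) -> float:
    """Negative weighted squared deviation of internal variables from setpoints."""
    return -sum(
        WEIGHTS[k] * (internal_state[k] - SETPOINTS[k]) ** 2
        for k in SETPOINTS
    )

# The reward is maximal (zero) exactly at the setpoints; any deviation
# from them is penalized, giving an open-ended drive to maintain them.
```

Under this reading, any behavior that helps keep the internal variables in range (foraging, thermoregulation, avoiding damage) is indirectly rewarded, which is what makes homeostasis a candidate for a general, integrative motivation.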
Related papers
- Probing for Consciousness in Machines [3.196204482566275]
This study explores the potential for artificial agents to develop core consciousness.
The emergence of core consciousness relies on the integration of a self model, informed by representations of emotions and feelings, and a world model.
Our results demonstrate that the agent can form rudimentary world and self models, suggesting a pathway toward developing machine consciousness.
arXiv Detail & Related papers (2024-11-25T10:27:07Z) - Learning World Models With Hierarchical Temporal Abstractions: A Probabilistic Perspective [2.61072980439312]
Devising formalisms to develop internal world models is a critical research challenge in the domains of artificial intelligence and machine learning.
This thesis identifies several limitations with the prevalent use of state space models as internal world models.
The structure of models in formalisms facilitates exact probabilistic inference using belief propagation, as well as end-to-end learning via backpropagation through time.
These formalisms integrate the concept of uncertainty in world states, thus improving the system's capacity to emulate the nature of the real world and quantify the confidence in its predictions.
arXiv Detail & Related papers (2024-04-24T12:41:04Z) - Position Paper: Agent AI Towards a Holistic Intelligence [53.35971598180146]
We emphasize developing Agent AI -- an embodied system that integrates large foundation models into agent actions.
In this paper, we propose a novel large action model to achieve embodied intelligent behavior, the Agent Foundation Model.
arXiv Detail & Related papers (2024-02-28T16:09:56Z) - Discovering Sensorimotor Agency in Cellular Automata using Diversity Search [17.898087201326483]
In cellular automata (CA), a key open question has been whether it is possible to find environment rules that self-organize.
We show that this approach enables us to systematically find environmental conditions in CA that lead to self-organization.
We show that the discovered agents have surprisingly robust capabilities to move, maintain their body integrity and navigate among various obstacles.
arXiv Detail & Related papers (2024-02-14T14:30:42Z) - Agent AI: Surveying the Horizons of Multimodal Interaction [83.18367129924997]
"Agent AI" is a class of interactive systems that can perceive visual stimuli, language inputs, and other environmentally-grounded data.
We envision a future where people can easily create any virtual reality or simulated scene and interact with agents embodied within the virtual environment.
arXiv Detail & Related papers (2024-01-07T19:11:18Z) - A Neuro-mimetic Realization of the Common Model of Cognition via Hebbian Learning and Free Energy Minimization [55.11642177631929]
Large neural generative models are capable of synthesizing semantically rich passages of text or producing complex images.
We discuss the COGnitive Neural GENerative system, one such architecture that casts the Common Model of Cognition.
arXiv Detail & Related papers (2023-10-14T23:28:48Z) - Life-inspired Interoceptive Artificial Intelligence for Autonomous and Adaptive Agents [0.8246494848934447]
We focus on interoception, a process of monitoring one's internal environment to keep it within certain bounds.
To develop AI with interoception, we need to factorize the state variables representing internal environments from external environments.
This paper offers a new perspective on how interoception can help build autonomous and adaptive agents.
arXiv Detail & Related papers (2023-09-12T06:56:46Z) - Abstract Interpretation for Generalized Heuristic Search in Model-Based Planning [50.96320003643406]
Domain-general model-based planners often derive their generality by constructing search heuristics through the relaxation of symbolic world models.
We illustrate how abstract interpretation can serve as a unifying framework for these abstractions, extending the reach of search to richer world models.
These abstractions can also be integrated with learning, allowing agents to jumpstart planning in novel world models via abstraction-derived information.
arXiv Detail & Related papers (2022-08-05T00:22:11Z) - Kernel Based Cognitive Architecture for Autonomous Agents [91.3755431537592]
This paper considers an evolutionary approach to creating a cognitive functionality.
We consider a cognitive architecture which ensures the evolution of the agent on the basis of Symbol Emergence Problem solution.
arXiv Detail & Related papers (2022-07-02T12:41:32Z) - Information is Power: Intrinsic Control via Information Capture [110.3143711650806]
We argue that a compact and general learning objective is to minimize the entropy of the agent's state visitation estimated using a latent state-space model.
This objective induces an agent to both gather information about its environment, corresponding to reducing uncertainty, and to gain control over its environment, corresponding to reducing the unpredictability of future world states.
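The objective described above rests on estimating the entropy of the agent's state-visitation distribution. A toy sketch of that quantity for a discrete state space is shown below; the tabular counts here stand in for the paper's latent state-space model, which is an assumption of this sketch.

```python
import math

def visitation_entropy(state_visits: dict) -> float:
    """Shannon entropy (in nats) of the empirical state-visitation distribution.

    `state_visits` maps a (discrete) state to how often the agent visited it.
    This tabular estimate is a stand-in for the latent state-space model
    the paper uses; the counting scheme is an illustrative assumption.
    """
    total = sum(state_visits.values())
    return -sum(
        (n / total) * math.log(n / total)
        for n in state_visits.values()
        if n > 0  # skip unvisited states so the log is defined
    )

# An agent that wanders uniformly has high visitation entropy; an agent
# that gathers information and gains control concentrates its visitation
# on a few predictable states, lowering the entropy this objective minimizes.
wandering = {"a": 25, "b": 25, "c": 25, "d": 25}
controlled = {"a": 97, "b": 1, "c": 1, "d": 1}
```

Under the minimize-entropy objective, the concentrated visitation pattern scores better, which is how the single objective captures both information gathering (reducing uncertainty) and environmental control (reducing unpredictability).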
arXiv Detail & Related papers (2021-12-07T18:50:42Z) - An Artificial Consciousness Model and its relations with Philosophy of Mind [0.0]
This work seeks to study the beneficial properties that an autonomous agent can obtain by implementing a cognitive architecture similar to the one of conscious beings.
We show in a large experiment set how an autonomous agent can benefit from having a cognitive architecture such as the one described.
arXiv Detail & Related papers (2020-11-30T00:24:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.