Intrinsically-Motivated Humans and Agents in Open-World Exploration
- URL: http://arxiv.org/abs/2503.23631v1
- Date: Mon, 31 Mar 2025 00:09:00 GMT
- Title: Intrinsically-Motivated Humans and Agents in Open-World Exploration
- Authors: Aly Lidayan, Yuqing Du, Eliza Kosoy, Maria Rufova, Pieter Abbeel, Alison Gopnik
- Abstract summary: We compare adults, children, and AI agents in a complex open-ended environment. We find that only Entropy and Empowerment are consistently positively correlated with human exploration progress.
- Score: 50.00331050937369
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: What drives exploration? Understanding intrinsic motivation is a long-standing challenge in both cognitive science and artificial intelligence; numerous objectives have been proposed and used to train agents, yet there remains a gap between human and agent exploration. We directly compare adults, children, and AI agents in a complex open-ended environment, Crafter, and study how common intrinsic objectives (Entropy, Information Gain, and Empowerment) relate to their behavior. We find that only Entropy and Empowerment are consistently positively correlated with human exploration progress, indicating that these objectives may better inform intrinsic reward design for agents. Furthermore, across agents and humans we observe that Entropy initially increases rapidly, then plateaus, while Empowerment increases continuously, suggesting that state diversity may provide more signal in early exploration, while advanced exploration should prioritize control. Finally, we find preliminary evidence that private speech utterances, and particularly goal verbalizations, may aid exploration in children.
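Two of the objectives named in the abstract can be made concrete for a discrete environment. The sketch below (not the paper's estimators; function names and the deterministic-transition assumption are illustrative) computes the Shannon entropy of an empirical state-visitation distribution and a crude one-step Empowerment proxy:

```python
from collections import Counter
import math

def visitation_entropy(states):
    """Shannon entropy (in nats) of the empirical state-visitation
    distribution induced by a trajectory of discrete states."""
    counts = Counter(states)
    total = len(states)
    return -sum((c / total) * math.log(c / total) for c in counts.values())

def empowerment_proxy(transitions):
    """Crude one-step empowerment proxy: log of the number of distinct
    successor states reachable via the available actions.  True
    empowerment is the channel capacity max_p(a) I(A; S'), which this
    equals only for deterministic transitions."""
    return math.log(len(set(transitions.values()))) if transitions else 0.0
```

For example, a trajectory that splits its time evenly between two states has visitation entropy log 2, and a state where three actions lead to only two distinct successors has an empowerment proxy of log 2 as well.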
Related papers
- ForesightNav: Learning Scene Imagination for Efficient Exploration [57.49417653636244]
We propose ForesightNav, a novel exploration strategy inspired by human imagination and reasoning.
Our approach equips robotic agents with the capability to predict contextual information, such as occupancy and semantic details, for unexplored regions.
We validate our imagination-based approach using the Structured3D dataset, demonstrating accurate occupancy prediction and superior performance in anticipating unseen scene geometry.
arXiv Detail & Related papers (2025-04-22T17:38:38Z)
- QuadrupedGPT: Towards a Versatile Quadruped Agent in Open-ended Worlds [51.05639500325598]
We introduce QuadrupedGPT, designed to follow diverse commands with agility comparable to that of a pet. Our agent shows proficiency in handling diverse tasks and intricate instructions, representing a significant step toward the development of versatile quadruped agents.
arXiv Detail & Related papers (2024-06-24T12:14:24Z)
- Position Paper: Agent AI Towards a Holistic Intelligence [53.35971598180146]
We emphasize developing Agent AI -- an embodied system that integrates large foundation models into agent actions.
In this paper, we propose a novel large action model to achieve embodied intelligent behavior, the Agent Foundation Model.
arXiv Detail & Related papers (2024-02-28T16:09:56Z)
- Self-mediated exploration in artificial intelligence inspired by cognitive psychology [1.3351610617039975]
Exploration of the physical environment is an indispensable precursor to data acquisition and enables knowledge generation via analytical or direct trialing.
This work links human behavior and artificial agents to endorse self-development.
A study is subsequently designed to mirror previous human trials, which artificial agents undergo repeatedly until convergence.
Results show that the vast majority of agents learn a causal link between their internal states and exploration that matches the link reported for their human counterparts.
arXiv Detail & Related papers (2023-02-13T18:20:44Z)
- Intrinsically Motivated Learning of Causal World Models [0.0]
A promising direction is to build world models capturing the true physical mechanisms hidden behind the sensorimotor interaction with the environment.
Inferring the causal structure of the environment could benefit from well-chosen actions as means to collect relevant interventional data.
arXiv Detail & Related papers (2022-08-09T16:48:28Z)
- From Psychological Curiosity to Artificial Curiosity: Curiosity-Driven Learning in Artificial Intelligence Tasks [56.20123080771364]
Psychological curiosity plays a significant role in human intelligence to enhance learning through exploration and information acquisition.
In the Artificial Intelligence (AI) community, artificial curiosity provides a natural intrinsic motivation for efficient learning.
Curiosity-driven learning (CDL), in which agents are self-motivated to acquire novel knowledge, has become increasingly popular.
arXiv Detail & Related papers (2022-01-20T17:07:03Z)
- Information is Power: Intrinsic Control via Information Capture [110.3143711650806]
We argue that a compact and general learning objective is to minimize the entropy of the agent's state visitation estimated using a latent state-space model.
This objective induces an agent to both gather information about its environment, corresponding to reducing uncertainty, and to gain control over its environment, corresponding to reducing the unpredictability of future world states.
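The objective described in this abstract can be caricatured in a few lines: a hypothetical agent scores each candidate action by the entropy of the categorical belief over latent states it expects that action to produce, and prefers the lowest. This is an illustrative sketch under that assumption, not the paper's latent state-space model:

```python
import math

def belief_entropy(belief):
    """Entropy (in nats) of a categorical belief over latent states."""
    return -sum(p * math.log(p) for p in belief if p > 0)

def pick_action(beliefs_by_action):
    """Choose the action whose predicted posterior belief has the lowest
    entropy, i.e. the action expected to make future states most
    predictable (hypothetical greedy one-step version of the objective)."""
    return min(beliefs_by_action, key=lambda a: belief_entropy(beliefs_by_action[a]))
```

Under this toy rule, an action predicted to yield a nearly certain belief (say [0.9, 0.1]) is preferred over one that leaves the agent maximally uncertain ([0.5, 0.5]).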
arXiv Detail & Related papers (2021-12-07T18:50:42Z)
- Benchmarking the Spectrum of Agent Capabilities [7.088856621650764]
We introduce Crafter, an open world survival game with visual inputs that evaluates a wide range of general abilities within a single environment.
Agents learn from the provided reward signal or through intrinsic objectives and are evaluated by semantically meaningful achievements.
We experimentally verify that Crafter is of appropriate difficulty to drive future research and provide baseline scores for reward agents and unsupervised agents.
arXiv Detail & Related papers (2021-09-14T15:49:31Z)
- Improved Learning of Robot Manipulation Tasks via Tactile Intrinsic Motivation [40.81570120196115]
In sparse goal settings, an agent does not receive any positive feedback until randomly achieving the goal.
Inspired by touch-based exploration observed in children, we formulate an intrinsic reward based on the sum of forces between a robot's force sensors and manipulation objects.
We show that our solution accelerates the exploration and outperforms state-of-the-art methods on three fundamental robot manipulation benchmarks.
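The reward described in this abstract reduces to a simple aggregate over contact forces. A minimal sketch, assuming scalar force-sensor readings and a hypothetical `scale` coefficient (both names are illustrative, not the paper's API):

```python
def tactile_intrinsic_reward(force_readings, scale=1.0):
    """Intrinsic reward proportional to the total magnitude of contact
    forces between the robot's force sensors and manipulated objects.
    `force_readings` is an iterable of scalar sensor values; `scale` is
    a hypothetical tuning coefficient."""
    return scale * sum(abs(f) for f in force_readings)
```

Rewarding total contact force in this way pays the agent for touching and pushing objects even before the sparse task goal is ever reached.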
arXiv Detail & Related papers (2021-02-22T14:21:30Z)
- Action and Perception as Divergence Minimization [43.75550755678525]
Action Perception Divergence is an approach for categorizing the space of possible objective functions for embodied agents.
We show a spectrum that reaches from narrow to general objectives.
These agents use perception to align their beliefs with the world and use actions to align the world with their beliefs.
arXiv Detail & Related papers (2020-09-03T16:52:46Z)
- Learning to Incentivize Other Learning Agents [73.03133692589532]
We show how to equip RL agents with the ability to give rewards directly to other agents, using a learned incentive function.
Such agents significantly outperform standard RL and opponent-shaping agents in challenging general-sum Markov games.
Our work points toward more opportunities and challenges along the path to ensure the common good in a multi-agent future.
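The reward-giving mechanism described in this abstract can be illustrated with a toy shaping step: each agent's effective reward is its environment reward plus the incentives every other agent directs at it. The matrix layout and function name below are assumptions for illustration, not the paper's formulation:

```python
def shaped_rewards(env_rewards, incentive_matrix):
    """Effective reward for each agent j: its environment reward plus the
    incentives given to it by every other agent.  incentive_matrix[i][j]
    is the (hypothetical) reward agent i gives to agent j; in the paper
    such incentives come from a learned incentive function."""
    n = len(env_rewards)
    return [env_rewards[j] + sum(incentive_matrix[i][j] for i in range(n) if i != j)
            for j in range(n)]
```

With two agents where agent 0 gives 0.5 to agent 1 and agent 1 gives 0.2 to agent 0, environment rewards [1.0, 0.0] become shaped rewards [1.2, 0.5].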
arXiv Detail & Related papers (2020-06-10T20:12:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.