From Curiosity to Competence: How World Models Interact with the Dynamics of Exploration
- URL: http://arxiv.org/abs/2507.08210v1
- Date: Thu, 10 Jul 2025 22:45:28 GMT
- Title: From Curiosity to Competence: How World Models Interact with the Dynamics of Exploration
- Authors: Fryderyk Mantiuk, Hanqi Zhou, Charley M. Wu
- Abstract summary: We show how evolving internal representations mediate the trade-off between curiosity and competence. Our findings formalize adaptive exploration as a balance between pursuing the unknown and the controllable.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: What drives an agent to explore the world while also maintaining control over the environment? From a child at play to scientists in the lab, intelligent agents must balance curiosity (the drive to seek knowledge) with competence (the drive to master and control the environment). Bridging cognitive theories of intrinsic motivation with reinforcement learning, we ask how evolving internal representations mediate the trade-off between curiosity (novelty or information gain) and competence (empowerment). We compare two model-based agents, one using handcrafted state abstractions (Tabular) and one learning an internal world model (Dreamer). The Tabular agent shows that curiosity and competence guide exploration in distinct patterns, and that prioritizing both improves exploration. The Dreamer agent reveals a two-way interaction between exploration and representation learning, mirroring the developmental co-evolution of curiosity and competence. Our findings formalize adaptive exploration as a balance between pursuing the unknown and the controllable, offering insights for cognitive theories and efficient reinforcement learning.
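To make the trade-off concrete, here is a minimal, hypothetical sketch of how a count-based curiosity bonus and a one-step empowerment proxy could be mixed into a single intrinsic reward for a tabular agent; the class, the `beta` weight, and the empowerment approximation are illustrative assumptions, not the paper's implementation.

```python
import math
from collections import defaultdict

class IntrinsicRewards:
    """Mixes a curiosity bonus with an empowerment proxy (toy formulation)."""

    def __init__(self, beta=0.5):
        self.beta = beta                # curiosity vs. competence weight
        self.visits = defaultdict(int)  # N(s): state visitation counts
        # transitions[s][a][s_next] -> count of observed next states
        self.transitions = defaultdict(lambda: defaultdict(lambda: defaultdict(int)))

    def update(self, s, a, s_next):
        self.visits[s] += 1
        self.transitions[s][a][s_next] += 1

    def curiosity(self, s):
        # Count-based novelty: large for rarely visited states.
        return 1.0 / math.sqrt(self.visits[s] + 1)

    def empowerment(self, s):
        # One-step proxy: log number of distinct next states the agent's
        # actions can reach (a crude lower bound on channel capacity).
        reachable = {s2 for a in self.transitions[s] for s2 in self.transitions[s][a]}
        return math.log(len(reachable) + 1)

    def reward(self, s):
        return self.beta * self.curiosity(s) + (1 - self.beta) * self.empowerment(s)
```

With `beta` near 1 the agent chases novelty; near 0 it seeks states where its actions reliably reach many distinct outcomes, which is the balance the paper studies.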
Related papers
- Behavioral Exploration: Learning to Explore via In-Context Adaptation [53.92981562916783]
We train a long-context generative model to predict expert actions conditioned on a context of past observations and a measure of how "exploratory" the expert's behaviors are relative to this context. This enables the model not only to mimic the behavior of an expert, but also, by feeding its past history of interactions into its context, to select expert behaviors different from those previously selected. We demonstrate the effectiveness of our method in both simulated locomotion and manipulation settings, as well as on real-world robotic manipulation tasks.
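As a rough illustration of the idea (not the paper's architecture), candidate expert-like actions could be scored by how far they fall from the actions already in the context, and selection conditioned on a target exploration level; the functions and the distance measure below are assumed for the sketch.

```python
import numpy as np

def exploratoriness(action, context_actions):
    """Toy score: distance from the nearest action already tried in context."""
    if len(context_actions) == 0:
        return 1.0
    dists = np.linalg.norm(np.asarray(context_actions) - np.asarray(action), axis=1)
    return float(dists.min())

def select_action(candidates, context_actions, target):
    """Pick the expert-like candidate whose score best matches the target level."""
    scores = [exploratoriness(a, context_actions) for a in candidates]
    return candidates[int(np.argmin([abs(s - target) for s in scores]))]
```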
arXiv Detail & Related papers (2025-07-11T21:36:19Z)
- Can you see how I learn? Human observers' inferences about Reinforcement Learning agents' learning processes [1.6874375111244329]
Reinforcement Learning (RL) agents often exhibit learning behaviors that are not intuitively interpretable by human observers. This work provides a data-driven account of the factors that shape human observers' understanding of the agent's learning process.
arXiv Detail & Related papers (2025-06-16T15:04:27Z)
- Truly Self-Improving Agents Require Intrinsic Metacognitive Learning [59.60803539959191]
Self-improving agents aim to continuously acquire new capabilities with minimal supervision. Current approaches face key limitations: their self-improvement processes are often rigid, fail to generalize across task domains, and struggle to scale with increasing agent capabilities. We argue that effective self-improvement requires intrinsic metacognitive learning, defined as an agent's intrinsic ability to actively evaluate, reflect on, and adapt its own learning processes.
arXiv Detail & Related papers (2025-06-05T14:53:35Z)
- Intrinsically-Motivated Humans and Agents in Open-World Exploration [50.00331050937369]
We compare adults, children, and AI agents in a complex open-ended environment, Crafter. We find that only Entropy and Empowerment are consistently positively correlated with human exploration progress, and we find preliminary evidence that private speech utterances, and particularly goal verbalizations, may aid exploration in children.
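For concreteness, here is a minimal sketch of one of the two predictive metrics, state-visitation entropy, computed from a logged trajectory of discretized states; the function name and the discretization are assumptions.

```python
import math
from collections import Counter

def visitation_entropy(states):
    """Shannon entropy (nats) of the empirical state-visitation distribution."""
    counts = Counter(states)
    n = len(states)
    return -sum((c / n) * math.log(c / n) for c in counts.values())

# A trajectory that keeps revisiting the same states scores low...
print(visitation_entropy(["a", "a", "a", "b"]))  # ~0.56
# ...while broad coverage scores high (log of the number of distinct states).
print(visitation_entropy(["a", "b", "c", "d"]))  # ~1.39
```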
arXiv Detail & Related papers (2025-03-31T00:09:00Z)
- Learning Goal-based Movement via Motivational-based Models in Cognitive Mobile Robots [58.720142291102135]
Humans have needs motivating their behavior according to intensity and context.
We also create preferences associated with each action's perceived pleasure, which can change over time.
This makes decision-making more complex, requiring learning to balance needs and preferences according to the context.
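A toy sketch of this mechanism, with assumed names and weights: action value mixes the current need intensity with a learned preference that drifts toward experienced pleasure.

```python
needs = {"hunger": 0.8, "rest": 0.3}            # current need intensities
satisfies = {"eat": "hunger", "sleep": "rest"}  # which need each action serves
preference = {"eat": 0.5, "sleep": 0.5}         # learned, drifts over time

def action_value(action, w_need=0.7):
    # Balance current need intensity against the learned preference.
    return w_need * needs[satisfies[action]] + (1 - w_need) * preference[action]

def update_preference(action, pleasure, lr=0.1):
    # Preferences track perceived pleasure, so they change with experience.
    preference[action] += lr * (pleasure - preference[action])

best = max(satisfies, key=action_value)  # "eat", since hunger dominates here
```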
arXiv Detail & Related papers (2023-02-20T04:52:24Z)
- Choreographer: Learning and Adapting Skills in Imagination [60.09911483010824]
We present Choreographer, a model-based agent that exploits its world model to learn and adapt skills in imagination.
Our method decouples the exploration and skill learning processes, enabling it to discover skills in the latent state space of the model.
Choreographer is able to learn skills both from offline data, and by collecting data simultaneously with an exploration policy.
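Choreographer learns a codebook over the world model's latent space; the sketch below substitutes simple k-means clustering for that codebook and rewards a skill policy for approaching its assigned centroid, so the clustering choice and all names are assumptions rather than the paper's method.

```python
import numpy as np
from sklearn.cluster import KMeans

# Latent states gathered by an exploration policy (random placeholder data).
latents = np.random.randn(1000, 16)
codes = KMeans(n_clusters=8, n_init=10).fit(latents)

def skill_reward(latent, skill_id):
    """Higher when the current latent is close to the skill's centroid."""
    return -float(np.linalg.norm(latent - codes.cluster_centers_[skill_id]))
```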
arXiv Detail & Related papers (2022-11-23T23:31:14Z)
- Intrinsically Motivated Learning of Causal World Models [0.0]
A promising direction is to build world models capturing the true physical mechanisms hidden behind the sensorimotor interaction with the environment.
Inferring the causal structure of the environment could benefit from well-chosen actions as a means of collecting relevant interventional data.
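A minimal sketch of intervention selection under assumed structure: given a posterior over candidate causal graphs, pick the intervention whose predicted outcome distribution has the highest entropy, i.e. where the hypotheses disagree most. The graphs, outcomes, and posterior below are illustrative, not from the paper.

```python
import math

# Posterior over two rival causal graphs, and each graph's predicted
# outcome for each intervention (all values illustrative).
posterior = {"g1": 0.5, "g2": 0.5}
predicts = {
    "g1": {"do_x": "y_up", "do_z": "y_flat"},
    "g2": {"do_x": "y_up", "do_z": "y_up"},
}

def outcome_entropy(intervention):
    # Entropy of the outcome distribution induced by the current posterior:
    # high where the candidate graphs make conflicting predictions.
    dist = {}
    for g, p in posterior.items():
        o = predicts[g][intervention]
        dist[o] = dist.get(o, 0.0) + p
    return -sum(p * math.log(p) for p in dist.values() if p > 0)

best = max(["do_x", "do_z"], key=outcome_entropy)  # "do_z": graphs disagree there
```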
arXiv Detail & Related papers (2022-08-09T16:48:28Z)
- A Novel Multimodal Approach for Studying the Dynamics of Curiosity in Small Group Learning [2.55061802822074]
We propose an integrated socio-cognitive account of curiosity that ties observable behaviors in peers to underlying curiosity states.
We make a bipartite distinction between individual and interpersonal functions that contribute to curiosity, and multimodal behaviors that fulfill these functions.
This work is a step towards designing learning technologies that can recognize and evoke moment-by-moment curiosity during learning in social contexts.
arXiv Detail & Related papers (2022-04-01T16:12:40Z)
- From Psychological Curiosity to Artificial Curiosity: Curiosity-Driven Learning in Artificial Intelligence Tasks [56.20123080771364]
Psychological curiosity plays a significant role in human intelligence to enhance learning through exploration and information acquisition.
In the Artificial Intelligence (AI) community, artificial curiosity provides a natural intrinsic motivation for efficient learning.
Curiosity-driven learning (CDL), in which agents are self-motivated to learn novel knowledge, has become increasingly popular.
arXiv Detail & Related papers (2022-01-20T17:07:03Z)
- Towards Teachable Autotelic Agents [21.743801780657435]
Teachable autotelic agents (TAA) are agents that learn from both internal and teaching signals.
This paper presents a roadmap towards the design of teachable autotelic agents.
arXiv Detail & Related papers (2021-05-25T14:28:58Z)
- Applying Deutsch's concept of good explanations to artificial intelligence and neuroscience -- an initial exploration [0.0]
We investigate Deutsch's hard-to-vary principle and how it relates to more formalized principles in deep learning.
We examine what role hard-to-vary explanations play in intelligence by looking at the human brain.
arXiv Detail & Related papers (2020-12-16T23:23:22Z)
This list is automatically generated from the titles and abstracts of the papers on this site.