Autotelic Reinforcement Learning: Exploring Intrinsic Motivations for Skill Acquisition in Open-Ended Environments
- URL: http://arxiv.org/abs/2502.04418v1
- Date: Thu, 06 Feb 2025 14:37:46 GMT
- Title: Autotelic Reinforcement Learning: Exploring Intrinsic Motivations for Skill Acquisition in Open-Ended Environments
- Authors: Prakhar Srivastava, Jasmeet Singh
- Abstract summary: This paper presents a comprehensive overview of autotelic Reinforcement Learning (RL), emphasizing the role of intrinsic motivations in the open-ended formation of skill repertoires.
We delineate the distinctions between knowledge-based and competence-based intrinsic motivations, illustrating how these concepts inform the development of autonomous agents capable of generating and pursuing self-defined goals.
- Score: 1.104960878651584
- License:
- Abstract: This paper presents a comprehensive overview of autotelic Reinforcement Learning (RL), emphasizing the role of intrinsic motivations in the open-ended formation of skill repertoires. We delineate the distinctions between knowledge-based and competence-based intrinsic motivations, illustrating how these concepts inform the development of autonomous agents capable of generating and pursuing self-defined goals. The typology of Intrinsically Motivated Goal Exploration Processes (IMGEPs) is explored, with a focus on the implications for multi-goal RL and developmental robotics. The autotelic learning problem is framed within a reward-free Markov Decision Process (MDP), where agents must autonomously represent, generate, and master their own goals. We address the unique challenges in evaluating such agents, proposing various metrics for measuring exploration, generalization, and robustness in complex environments. This work aims to advance the understanding of autotelic RL agents and their potential for enhancing skill acquisition in diverse and dynamic settings.
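The reward-free autotelic setting described in the abstract can be sketched in a few lines: the environment emits no reward, so the agent samples its own goals, rewards itself on achievement, and uses a competence-based intrinsic motivation to decide which goal to practice next. This is an illustrative sketch, not the paper's implementation; all class and function names below are hypothetical.

```python
class AutotelicAgent:
    """Minimal IMGEP-style sketch (names hypothetical): the environment
    gives no reward; the agent self-generates goals and rewards."""

    def __init__(self, goals):
        # estimated success rate (competence) per goal
        self.competence = {g: 0.0 for g in goals}

    def sample_goal(self):
        # competence-based intrinsic motivation: here, simply practice
        # the goal the agent is currently worst at
        return min(self.competence, key=self.competence.get)

    def intrinsic_reward(self, goal, outcome):
        # self-generated reward: 1 if the outcome matches the chosen goal
        return 1.0 if outcome == goal else 0.0

    def update(self, goal, reward, lr=0.1):
        # running estimate of success rate for the practiced goal
        self.competence[goal] += lr * (reward - self.competence[goal])


def rollout(agent, env_step, episodes=200):
    """Reward-free loop: sample goal, act, self-evaluate, update."""
    for _ in range(episodes):
        g = agent.sample_goal()
        outcome = env_step(g)               # environment returns no reward
        r = agent.intrinsic_reward(g, outcome)
        agent.update(g, r)
    return agent.competence
```

The `env_step` callable stands in for a full MDP transition; a real agent would also learn goal representations and a goal-conditioned policy, which are abstracted away here.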
Related papers
- Metacognition for Unknown Situations and Environments (MUSE) [3.2020845462590697]
We propose the Metacognition for Unknown Situations and Environments (MUSE) framework.
MUSE integrates metacognitive processes--specifically self-awareness and self-regulation--into autonomous agents.
Agents show significant improvements in self-awareness and self-regulation.
arXiv Detail & Related papers (2024-11-20T18:41:03Z) - Synthesizing Evolving Symbolic Representations for Autonomous Systems [2.4233709516962785]
This paper presents an open-ended learning system able to synthesize from scratch its experience into a PPDDL representation and update it over time.
The system explores the environment and iteratively: (a) discovers options, (b) explores the environment using those options, (c) abstracts the collected knowledge, and (d) plans.
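The four-step cycle above can be sketched as a loop; the helper functions below are illustrative stubs (names assumed, not from the paper) standing in for option discovery, exploration, abstraction into PPDDL-like symbols, and planning.

```python
def discover_options(env, existing):
    # stub: each pass "discovers" one new option label
    return [f"option_{len(existing)}"]

def explore(env, options):
    # stub: exploring with each option yields one trajectory record
    return [(opt, env) for opt in options]

def abstract_knowledge(trajectories):
    # stub: abstract each trajectory into a PPDDL-like predicate symbol
    return {f"reached({opt})" for opt, _ in trajectories}

def plan(symbols, options):
    # stub: a plan is just the current option sequence
    return list(options)

def open_ended_loop(env, iterations=3):
    """Sketch of the iterative cycle: (a) discover options, (b) explore
    using them, (c) abstract collected knowledge, (d) plan."""
    options, symbols, plans = [], set(), []
    for _ in range(iterations):
        options += discover_options(env, options)    # (a)
        trajectories = explore(env, options)         # (b)
        symbols |= abstract_knowledge(trajectories)  # (c)
        plans.append(plan(symbols, options))         # (d)
    return options, symbols, plans
```

The point of the sketch is the control flow: each iteration grows the option set and the symbolic representation, so later planning operates over richer abstractions.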
arXiv Detail & Related papers (2024-09-18T07:23:26Z) - Position Paper: Agent AI Towards a Holistic Intelligence [53.35971598180146]
We emphasize developing Agent AI -- an embodied system that integrates large foundation models into agent actions.
In this paper, we propose a novel large action model to achieve embodied intelligent behavior, the Agent Foundation Model.
arXiv Detail & Related papers (2024-02-28T16:09:56Z) - Intrinsic Motivation in Model-based Reinforcement Learning: A Brief Review [77.34726150561087]
This review considers the existing methods for determining intrinsic motivation based on the world model obtained by the agent.
The proposed unified framework describes the architecture of agents using a world model and intrinsic motivation to improve learning.
arXiv Detail & Related papers (2023-01-24T15:13:02Z) - Discrete Factorial Representations as an Abstraction for Goal Conditioned Reinforcement Learning [99.38163119531745]
We show that applying a discretizing bottleneck can improve performance in goal-conditioned RL setups.
We experimentally demonstrate improved expected return on out-of-distribution goals, while still allowing goals to be specified with expressive structure.
arXiv Detail & Related papers (2022-11-01T03:31:43Z) - Deep Reinforcement Learning for Multi-Agent Interaction [14.532965827043254]
The Autonomous Agents Research Group develops novel machine learning algorithms for autonomous systems control.
This article provides a broad overview of the ongoing research portfolio of the group and discusses open problems for future directions.
arXiv Detail & Related papers (2022-08-02T21:55:56Z) - Autonomous Open-Ended Learning of Tasks with Non-Stationary Interdependencies [64.0476282000118]
Intrinsic motivations have been shown to provide a task-agnostic signal for properly allocating training time amongst goals.
While most work on intrinsically motivated open-ended learning focuses on scenarios where goals are independent of one another, only a few studies have addressed the autonomous acquisition of interdependent tasks.
In particular, we first deepen the analysis of a previous system, showing the importance of incorporating information about the relationships between tasks at a higher level of the architecture.
Then we introduce H-GRAIL, a new system that extends the previous one by adding a new learning layer to store the autonomously acquired sequences.
arXiv Detail & Related papers (2022-05-16T10:43:01Z) - Individual and Collective Autonomous Development [7.928003786376716]
We envision that future systems will have to dynamically learn how to act and adapt to face evolving situations with little or no a priori knowledge.
In this paper, we introduce the vision of autonomous development in ICT systems, by framing its key concepts and by illustrating suitable application domains.
arXiv Detail & Related papers (2021-09-23T09:11:24Z) - Understanding the origin of information-seeking exploration in probabilistic objectives for control [62.997667081978825]
An exploration-exploitation trade-off is central to the description of adaptive behaviour.
One approach to solving this trade-off has been to equip or propose that agents possess an intrinsic 'exploratory drive'.
We show that this combination of utility-maximizing and information-seeking behaviour arises from the minimization of an entirely different class of objectives.
arXiv Detail & Related papers (2021-03-11T18:42:39Z) - Intrinsically Motivated Goal-Conditioned Reinforcement Learning: a Short Survey [21.311739361361717]
Developmental approaches argue that learning agents must generate, select and learn to solve their own problems.
Recent years have seen a convergence of developmental approaches and deep reinforcement learning (RL) methods, forming the new domain of developmental machine learning.
This paper proposes a typology of these methods at the intersection of deep RL and developmental approaches, surveys recent approaches and discusses future avenues.
arXiv Detail & Related papers (2020-12-17T18:51:40Z) - Learning with AMIGo: Adversarially Motivated Intrinsic Goals [63.680207855344875]
AMIGo is a goal-generating teacher that proposes Adversarially Motivated Intrinsic Goals.
We show that our method generates a natural curriculum of self-proposed goals which ultimately allows the agent to solve challenging procedurally-generated tasks.
arXiv Detail & Related papers (2020-06-22T10:22:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including any of the information it contains) and is not responsible for any consequences of its use.