Progressive growing of self-organized hierarchical representations for
exploration
- URL: http://arxiv.org/abs/2005.06369v1
- Date: Wed, 13 May 2020 15:24:42 GMT
- Title: Progressive growing of self-organized hierarchical representations for
exploration
- Authors: Mayalen Etcheverry, Pierre-Yves Oudeyer, Chris Reinke
- Abstract summary: A central challenge is how to learn representations in order to progressively build a map of the discovered structures.
We aim to build lasting representations and avoid catastrophic forgetting throughout the exploration process.
Thirdly, we target representations that can structure the agent discoveries in a coarse-to-fine manner.
- Score: 22.950651316748207
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Designing agents that can autonomously discover and learn a
diversity of structures and skills in unknown, changing environments is key for
lifelong machine learning. A central challenge is how to incrementally learn
representations in order to progressively build a map of the discovered
structures and re-use it to further explore. To address this challenge, we
identify and target several key functionalities. First, we aim to build lasting
representations and avoid catastrophic forgetting throughout the exploration
process. Secondly, we aim to learn a diversity of representations that allow
us to discover a "diversity of diversity" of structures (and associated skills) in
complex high-dimensional environments. Thirdly, we target representations that
can structure the agent discoveries in a coarse-to-fine manner. Finally, we
target the reuse of such representations to drive exploration toward an
"interesting" type of diversity, for instance leveraging human guidance.
Current approaches to state representation learning generally rely on
monolithic architectures, which do not enable all of these functionalities.
Therefore, we present a novel technique to progressively construct a Hierarchy
of Observation Latent Models for Exploration Stratification, called HOLMES.
This technique couples the use of a dynamic modular model architecture for
representation learning with intrinsically-motivated goal exploration processes
(IMGEPs). The paper shows results in the domain of automated discovery of
diverse self-organized patterns, considering as testbed the experimental
framework from Reinke et al. (2019).
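The coupling described above can be illustrated with a minimal sketch: an intrinsically-motivated goal exploration loop that routes observations to leaf modules of a growing hierarchy, splitting a module when it saturates. This is an illustrative toy, not the paper's implementation; the names (`LatentModule`, `imgep_loop`, `split_threshold`) and the one-dimensional latent space are assumptions for the sketch.

```python
import random

class LatentModule:
    """Toy latent model: stores scalar observations and can split into
    children when its 'reconstruction error' (here: variance) grows."""
    def __init__(self, name):
        self.name = name
        self.observations = []
        self.children = []

    def error(self):
        if len(self.observations) < 2:
            return 0.0
        mean = sum(self.observations) / len(self.observations)
        return sum((o - mean) ** 2 for o in self.observations) / len(self.observations)

    def _center(self):
        if not self.observations:
            return 0.0
        return sum(self.observations) / len(self.observations)

    def route(self, obs):
        """Descend to the leaf module closest to this observation."""
        node = self
        while node.children:
            node = min(node.children, key=lambda c: abs(obs - c._center()))
        return node

def imgep_loop(env_step, root, n_iterations=100, split_threshold=1.0):
    """Minimal intrinsically-motivated goal exploration loop (illustrative):
    sample a goal, act, and file the outcome into the growing hierarchy."""
    for _ in range(n_iterations):
        goal = random.uniform(-1.0, 1.0)   # sample a goal in latent space
        obs = env_step(goal)               # run a policy toward the goal
        leaf = root.route(obs)
        leaf.observations.append(obs)
        if leaf.error() > split_threshold and not leaf.children:
            # progressively grow the hierarchy: split the saturated module
            leaf.children = [LatentModule(leaf.name + ".0"),
                             LatentModule(leaf.name + ".1")]
    return root
```

Running this with a toy environment such as `env_step=lambda g: g * 2.0` produces a root module that splits once its stored outcomes become too diverse, giving a coarse-to-fine stratification of discoveries in the spirit of the paper's hierarchy.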
Related papers
- Hierarchical Invariance for Robust and Interpretable Vision Tasks at Larger Scales [54.78115855552886]
We show how to construct over-complete invariants with a Convolutional Neural Networks (CNN)-like hierarchical architecture.
With the over-completeness, discriminative features w.r.t. the task can be adaptively formed in a Neural Architecture Search (NAS)-like manner.
For robust and interpretable vision tasks at larger scales, hierarchical invariant representation can be considered as an effective alternative to traditional CNN and invariants.
arXiv Detail & Related papers (2024-02-23T16:50:07Z) - Detecting Any Human-Object Interaction Relationship: Universal HOI
Detector with Spatial Prompt Learning on Foundation Models [55.20626448358655]
This study explores universal interaction recognition in an open-world setting through the use of Vision-Language (VL) foundation models and large language models (LLMs).
Our design includes an HO Prompt-guided Decoder (HOPD), which facilitates the association of high-level relation representations in the foundation model with various HO pairs within the image.
For open-category interaction recognition, our method supports either of two input types: interaction phrase or interpretive sentence.
arXiv Detail & Related papers (2023-11-07T08:27:32Z) - Enhancing Representations through Heterogeneous Self-Supervised Learning [61.40674648939691]
We propose Heterogeneous Self-Supervised Learning (HSSL), which enforces a base model to learn from an auxiliary head whose architecture is heterogeneous to the base model.
HSSL endows the base model with new characteristics through representation learning, without structural changes.
HSSL is compatible with various self-supervised methods, achieving superior performance on various downstream tasks.
arXiv Detail & Related papers (2023-10-08T10:44:05Z) - Goal Space Abstraction in Hierarchical Reinforcement Learning via
Reachability Analysis [0.0]
We propose a developmental mechanism for subgoal discovery via an emergent representation that abstracts (i.e., groups together) sets of environment states.
We create an HRL algorithm that gradually learns this representation along with the policies, and evaluate it on navigation tasks to show that the learned representation is interpretable and results in data efficiency.
arXiv Detail & Related papers (2023-09-12T06:53:11Z) - Intrinsic Motivation in Model-based Reinforcement Learning: A Brief
Review [77.34726150561087]
This review considers the existing methods for determining intrinsic motivation based on the world model obtained by the agent.
The proposed unified framework describes the architecture of agents using a world model and intrinsic motivation to improve learning.
arXiv Detail & Related papers (2023-01-24T15:13:02Z) - Agent Spaces [0.0]
We define exploration as the act of modifying an agent so that it is itself explorative.
We show that many important structures in Reinforcement Learning are well behaved under the topology induced by convergence in the agent space.
arXiv Detail & Related papers (2021-11-11T01:12:17Z) - Self-supervised Visual Reinforcement Learning with Object-centric
Representations [11.786249372283562]
We propose to use object-centric representations as a modular and structured observation space.
We show that the structure in the representations in combination with goal-conditioned attention policies helps the autonomous agent to discover and learn useful skills.
arXiv Detail & Related papers (2020-11-29T14:55:09Z) - Concept Learners for Few-Shot Learning [76.08585517480807]
We propose COMET, a meta-learning method that improves generalization ability by learning to learn along human-interpretable concept dimensions.
We evaluate our model on few-shot tasks from diverse domains, including fine-grained image classification, document categorization and cell type annotation.
arXiv Detail & Related papers (2020-07-14T22:04:17Z) - Learning intuitive physics and one-shot imitation using
state-action-prediction self-organizing maps [0.0]
Humans learn by exploration and imitation, build causal models of the world, and use both to flexibly solve new tasks.
We suggest a simple but effective unsupervised model which develops such characteristics.
We demonstrate its performance on a set of several related, but different one-shot imitation tasks, which the agent flexibly solves in an active inference style.
arXiv Detail & Related papers (2020-07-03T12:29:11Z) - Hierarchically Organized Latent Modules for Exploratory Search in
Morphogenetic Systems [21.23182328329019]
We introduce a novel dynamic and modular architecture that enables unsupervised learning of a hierarchy of diverse representations.
We show that this system forms a discovery assistant that can efficiently adapt its diversity search towards preferences of a user.
arXiv Detail & Related papers (2020-07-02T15:28:27Z) - Automated Relational Meta-learning [95.02216511235191]
We propose an automated relational meta-learning framework that automatically extracts the cross-task relations and constructs the meta-knowledge graph.
We conduct extensive experiments on 2D toy regression and few-shot image classification and the results demonstrate the superiority of ARML over state-of-the-art baselines.
arXiv Detail & Related papers (2020-01-03T07:02:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.