Learning Object-Centered Autotelic Behaviors with Graph Neural Networks
- URL: http://arxiv.org/abs/2204.05141v1
- Date: Mon, 11 Apr 2022 14:19:04 GMT
- Title: Learning Object-Centered Autotelic Behaviors with Graph Neural Networks
- Authors: Ahmed Akakzia, Olivier Sigaud
- Abstract summary: Humans have access to a handful of previously learned skills, which they rapidly adapt to new situations.
In artificial intelligence, autotelic agents, which are intrinsically motivated to represent and set their own goals, exhibit promising skill adaptation capabilities.
We study different implementations of autotelic agents using four types of Graph Neural Networks policy representations and two types of goal spaces.
- Score: 10.149376933379036
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Although humans live in an open-ended world and endlessly face new
challenges, they do not have to learn from scratch each time they face the next
one. Rather, they have access to a handful of previously learned skills, which
they rapidly adapt to new situations. In artificial intelligence, autotelic
agents, which are intrinsically motivated to represent and set their own goals,
exhibit promising skill adaptation capabilities. However, these capabilities
are highly constrained by their policy and goal space representations. In this
paper, we propose to investigate the impact of these representations on the
learning capabilities of autotelic agents. We study different implementations
of autotelic agents using four types of Graph Neural Networks policy
representations and two types of goal spaces, either geometric or
predicate-based. We show that combining object-centered architectures that are
expressive enough with semantic relational goals enables an efficient transfer
between skills and promotes behavioral diversity. We also release our
graph-based implementations to encourage further research in this direction.
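The abstract describes object-centered policies built from Graph Neural Networks, conditioned on goals that can be predicate-based (semantic relations between objects). The sketch below is a minimal, hypothetical illustration of that idea, not the paper's released implementation: each object is a node, a shared message function runs over all object pairs, the predicate goal vector conditions every message, and sum pooling keeps the policy permutation-invariant over objects. All names and dimensions are illustrative.

```python
# Hypothetical object-centered GNN policy sketch (not the paper's code).
# Objects are nodes; goal predicates condition pairwise messages.
import numpy as np

rng = np.random.default_rng(0)
N_OBJECTS, OBJ_DIM, GOAL_DIM, HID, ACT_DIM = 3, 4, 6, 8, 2

# Shared weights: the same message/update functions apply to every object
# pair, which is what lets the policy transfer across objects and goals.
W_msg = rng.normal(size=(2 * OBJ_DIM + GOAL_DIM, HID)) * 0.1
W_node = rng.normal(size=(OBJ_DIM + HID, HID)) * 0.1
W_out = rng.normal(size=(HID, ACT_DIM)) * 0.1

def gnn_policy(objects, goal):
    """objects: (N, OBJ_DIM) per-object features; goal: (GOAL_DIM,) predicate vector."""
    n = objects.shape[0]
    messages = np.zeros((n, HID))
    # Message passing over all ordered object pairs, conditioned on the goal.
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            pair = np.concatenate([objects[i], objects[j], goal])
            messages[i] += np.tanh(pair @ W_msg)
    # Node update, then sum pooling: the output is invariant to object order.
    nodes = np.tanh(np.concatenate([objects, messages], axis=1) @ W_node)
    return nodes.sum(axis=0) @ W_out  # action (logits or continuous command)

objects = rng.normal(size=(N_OBJECTS, OBJ_DIM))
goal = np.array([1, 0, 0, 1, 0, 0], dtype=float)  # e.g. binary spatial predicates
action = gnn_policy(objects, goal)
print(action.shape)  # (2,)
```

Because the message and update weights are shared across pairs, the same network handles scenes with different numbers of objects, which is one reason such architectures support transfer between skills.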
Related papers
- Visual-Geometric Collaborative Guidance for Affordance Learning [63.038406948791454]
We propose a visual-geometric collaborative guided affordance learning network that incorporates visual and geometric cues.
Our method outperforms representative models on both objective metrics and visual quality.
arXiv Detail & Related papers (2024-10-15T07:35:51Z) - Augmenting Autotelic Agents with Large Language Models [24.16977502082188]
We introduce a language model augmented autotelic agent (LMA3).
LMA3 supports the representation, generation and learning of diverse, abstract, human-relevant goals.
We show that LMA3 agents learn to master a large diversity of skills in a task-agnostic text-based environment.
arXiv Detail & Related papers (2023-05-21T15:42:41Z) - Choreographer: Learning and Adapting Skills in Imagination [60.09911483010824]
We present Choreographer, a model-based agent that exploits its world model to learn and adapt skills in imagination.
Our method decouples the exploration and skill learning processes, being able to discover skills in the latent state space of the model.
Choreographer is able to learn skills both from offline data, and by collecting data simultaneously with an exploration policy.
arXiv Detail & Related papers (2022-11-23T23:31:14Z) - Goal-Conditioned Q-Learning as Knowledge Distillation [136.79415677706612]
We explore a connection between off-policy reinforcement learning in goal-conditioned settings and knowledge distillation.
We empirically show that this can improve the performance of goal-conditioned off-policy reinforcement learning when the space of goals is high-dimensional.
We also show that this technique can be adapted to allow for efficient learning in the case of multiple simultaneous sparse goals.
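The summary above builds on goal-conditioned off-policy Q-learning. As background, here is a minimal tabular sketch of the underlying update, in which Q(s, a, g) is regressed toward r(s', g) + gamma * max_a' Q(s', a', g), and a single transition is relabeled against every goal so that sparse goals still yield learning signal. This is an illustrative toy with hypothetical names; the paper's knowledge-distillation framing is not reproduced here.

```python
# Tabular goal-conditioned Q-learning sketch (illustrative, not the paper's method).
import numpy as np

n_states, n_actions, n_goals, gamma, lr = 4, 2, 4, 0.9, 0.5
Q = np.zeros((n_states, n_actions, n_goals))

def reward(state, goal):
    # Sparse goal-reaching reward: 1 only when the state matches the goal.
    return 1.0 if state == goal else 0.0

def td_update(s, a, s_next, g):
    # Standard TD(0) target, conditioned on the goal g.
    target = reward(s_next, g) + gamma * Q[s_next, :, g].max()
    Q[s, a, g] += lr * (target - Q[s, a, g])

# Relabel one transition (s=0, a=1 -> s'=2) against every goal,
# hindsight-style, so off-policy data is shared across goals.
for g in range(n_goals):
    td_update(s=0, a=1, s_next=2, g=g)

print(Q[0, 1, 2])  # only the goal achieved by s'=2 receives a nonzero update
```

Only the entry for the goal that the next state actually satisfies moves away from zero, which illustrates why sparse multi-goal settings benefit from relabeling and, as the paper argues, from distillation-style training signals.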
arXiv Detail & Related papers (2022-08-28T22:01:10Z) - Stochastic Coherence Over Attention Trajectory For Continuous Learning In Video Streams [64.82800502603138]
This paper proposes a novel neural-network-based approach to progressively and autonomously develop pixel-wise representations in a video stream.
The proposed method is based on a human-like attention mechanism that allows the agent to learn by observing what is moving in the attended locations.
Our experiments leverage 3D virtual environments and they show that the proposed agents can learn to distinguish objects just by observing the video stream.
arXiv Detail & Related papers (2022-04-26T09:52:31Z) - Self-Supervised Graph Neural Network for Multi-Source Domain Adaptation [51.21190751266442]
Domain adaptation (DA) tackles scenarios in which the test data does not fully follow the same distribution as the training data.
By learning from large-scale unlabeled samples, self-supervised learning has now become a new trend in deep learning.
We propose a novel Self-Supervised Graph Neural Network (SSG) to enable more effective inter-task information exchange and knowledge sharing.
arXiv Detail & Related papers (2022-04-08T03:37:56Z) - Divergent representations of ethological visual inputs emerge from supervised, unsupervised, and reinforcement learning [20.98896935012773]
We compare the representations learned by eight different convolutional neural networks.
We find that the network trained with reinforcement learning differs most from the other networks.
arXiv Detail & Related papers (2021-12-03T17:18:09Z) - WenLan 2.0: Make AI Imagine via a Multimodal Foundation Model [74.4875156387271]
We develop a novel foundation model pre-trained with huge multimodal (visual and textual) data.
We show that state-of-the-art results can be obtained on a wide range of downstream tasks.
arXiv Detail & Related papers (2021-10-27T12:25:21Z) - GRIMGEP: Learning Progress for Robust Goal Sampling in Visual Deep Reinforcement Learning [21.661530291654692]
We propose a framework that allows agents to autonomously identify and ignore noisy distracting regions.
Our framework can be combined with any state-of-the-art novelty seeking goal exploration approaches.
arXiv Detail & Related papers (2020-08-10T19:50:06Z) - ELSIM: End-to-end learning of reusable skills through intrinsic motivation [0.0]
We present a novel reinforcement learning architecture which hierarchically learns and represents self-generated skills in an end-to-end way.
With this architecture, an agent focuses only on task-rewarded skills while keeping the learning process of skills bottom-up.
arXiv Detail & Related papers (2020-06-23T11:20:46Z) - Learning Neural-Symbolic Descriptive Planning Models via Cube-Space Priors: The Voyage Home (to STRIPS) [13.141761152863868]
We show that our neuro-symbolic architecture is trained end-to-end to produce a succinct and effective discrete state transition model from images alone.
Our target representation is already in a form that off-the-shelf solvers can consume, and opens the door to the rich array of modern search capabilities.
arXiv Detail & Related papers (2020-04-27T15:01:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.