Language-Conditioned Goal Generation: a New Approach to Language Grounding for RL
- URL: http://arxiv.org/abs/2006.07043v1
- Date: Fri, 12 Jun 2020 09:54:38 GMT
- Title: Language-Conditioned Goal Generation: a New Approach to Language Grounding for RL
- Authors: Cédric Colas, Ahmed Akakzia, Pierre-Yves Oudeyer, Mohamed Chetouani, Olivier Sigaud
- Abstract summary: In the real world, linguistic agents are also embodied agents: they perceive and act in the physical world.
This paper proposes using language to condition goal generators. Given any goal-conditioned policy, one could train a language-conditioned goal generator to generate language-agnostic goals for the agent.
- Score: 23.327749767424567
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the real world, linguistic agents are also embodied agents: they perceive
and act in the physical world. The notion of Language Grounding questions the
interactions between language and embodiment: how do learning agents connect or
ground linguistic representations to the physical world? This question has
recently been approached by the Reinforcement Learning community under the
framework of instruction-following agents. In these agents, behavioral policies
or reward functions are conditioned on the embedding of an instruction
expressed in natural language. This paper proposes another approach: using
language to condition goal generators. Given any goal-conditioned policy, one
could train a language-conditioned goal generator to generate language-agnostic
goals for the agent. This method makes it possible to decouple sensorimotor
learning from language acquisition and enables agents to demonstrate a diversity of behaviors
for any given instruction. We propose a particular instantiation of this
approach and demonstrate its benefits.
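To make the proposed decoupling concrete, here is a minimal sketch of a language-conditioned goal generator in Python/PyTorch. It is a sketch under assumptions, not the paper's exact architecture: the name `GoalGenerator`, all dimensions, and the commented `encode`/`policy` interfaces are illustrative.

```python
import torch
import torch.nn as nn

class GoalGenerator(nn.Module):
    """Maps an instruction embedding plus noise to a language-agnostic goal.

    Sampling several noise vectors for one instruction yields several
    candidate goals, hence a diversity of behaviors for that instruction
    (a sketch of the paper's idea, not its exact model).
    """
    def __init__(self, instr_dim=64, noise_dim=16, goal_dim=9):
        super().__init__()
        self.noise_dim = noise_dim
        self.net = nn.Sequential(
            nn.Linear(instr_dim + noise_dim, 128),
            nn.ReLU(),
            nn.Linear(128, goal_dim),
        )

    def forward(self, instr_emb):
        noise = torch.randn(instr_emb.shape[0], self.noise_dim,
                            device=instr_emb.device)
        return self.net(torch.cat([instr_emb, noise], dim=-1))

# Hypothetical usage, with `encode` a pretrained sentence encoder and
# `policy` a pretrained goal-conditioned policy (both assumed):
#   instr_emb = encode("put the red block above the green one")
#   goal = generator(instr_emb)         # language-agnostic goal
#   action = policy(observation, goal)  # sensorimotor part never sees language
```

Because the policy consumes only goals, sensorimotor training can proceed without any language, and language acquisition reduces to learning the generator on (instruction, achieved goal) pairs.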
Related papers
- Teaching Embodied Reinforcement Learning Agents: Informativeness and Diversity of Language Use [16.425032085699698]
It is desirable for embodied agents to be able to leverage human language to gain explicit or implicit knowledge for learning tasks.
However, it remains unclear how to incorporate rich language use to facilitate task learning.
This paper studies different types of language inputs in facilitating reinforcement learning.
arXiv Detail & Related papers (2024-10-31T17:59:52Z)
- Symbolic Learning Enables Self-Evolving Agents [55.625275970720374]
We introduce agent symbolic learning, a systematic framework that enables language agents to optimize themselves on their own.
Agent symbolic learning is designed to optimize the symbolic network within language agents by mimicking two fundamental algorithms in connectionist learning: back-propagation and gradient descent.
We conduct proof-of-concept experiments on both standard benchmarks and complex real-world tasks.
arXiv Detail & Related papers (2024-06-26T17:59:18Z)
- Learning to Model the World with Language [100.76069091703505]
To interact with humans and act in the world, agents need to understand the range of language that people use and relate it to the visual world.
Our key idea is that agents should interpret such diverse language as a signal that helps them predict the future.
We instantiate this in Dynalang, an agent that learns a multimodal world model to predict future text and image representations (a toy sketch follows below).
arXiv Detail & Related papers (2023-07-31T17:57:49Z)
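A toy sketch of the idea summarized above, assuming nothing about Dynalang's real architecture: image and text features are fused into a recurrent latent state that is trained to predict both modalities at the next timestep. All names and dimensions are illustrative.

```python
import torch
import torch.nn as nn

class MultimodalWorldModel(nn.Module):
    """Fuses image and text features into a recurrent latent, then
    predicts the next step's text and image representations from it
    (a toy stand-in for a Dynalang-style world model)."""
    def __init__(self, img_dim=256, txt_dim=64, latent_dim=128):
        super().__init__()
        self.rnn = nn.GRUCell(img_dim + txt_dim, latent_dim)
        self.next_img = nn.Linear(latent_dim, img_dim)  # predicted image rep.
        self.next_txt = nn.Linear(latent_dim, txt_dim)  # predicted text rep.

    def forward(self, img_feat, txt_feat, latent):
        latent = self.rnn(torch.cat([img_feat, txt_feat], dim=-1), latent)
        return self.next_img(latent), self.next_txt(latent), latent

# Training signal (sketch): regress the two predictions onto the
# representations actually observed at the next timestep, so diverse
# language is consumed as evidence about the future, not as commands.
```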
- Simple Embodied Language Learning as a Byproduct of Meta-Reinforcement Learning [56.07190845063208]
We ask: can embodied reinforcement learning (RL) agents indirectly learn language from non-language tasks?
We design an office navigation environment, where the agent's goal is to find a particular office, and office locations differ across buildings (i.e., tasks).
We find that RL agents are indeed able to learn language indirectly: agents trained with current meta-RL algorithms successfully generalize to reading floor plans with held-out layouts and language phrases.
arXiv Detail & Related papers (2023-06-14T09:48:48Z)
- Learning to Solve Voxel Building Embodied Tasks from Pixels and Natural Language Instructions [53.21504989297547]
We propose a new method that combines a language model and reinforcement learning for the task of building objects in a Minecraft-like environment.
Our method first generates a set of consistently achievable sub-goals from the instructions and then completes the associated sub-tasks with a pre-trained RL policy (see the sketch below).
arXiv Detail & Related papers (2022-11-01T18:30:42Z)
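The two-stage pipeline described above can be sketched as follows; `propose_subgoals`, `policy`, and `achieved` are assumed interfaces standing in for the paper's components, and the gym-style 4-tuple `env.step` API is also an assumption.

```python
def execute_instruction(instruction, env, propose_subgoals, policy, achieved):
    """Sketch: (1) a language model turns the instruction into a list of
    consistently achievable sub-goals, (2) a pre-trained goal-conditioned
    RL policy completes the associated sub-tasks in order."""
    subgoals = propose_subgoals(instruction)  # e.g. individual block placements
    obs = env.reset()
    for subgoal in subgoals:
        while not achieved(obs, subgoal):     # loop until this sub-goal holds
            obs, _reward, _done, _info = env.step(policy(obs, subgoal))
    return obs
```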
- Inner Monologue: Embodied Reasoning through Planning with Language Models [81.07216635735571]
Large Language Models (LLMs) can be applied to domains beyond natural language processing.
LLMs planning in embodied environments must consider not only which skills to perform, but also how and when to perform them.
We propose that by leveraging environment feedback, LLMs are able to form an inner monologue that allows them to more richly process and plan in robotic control scenarios (a loop sketched below).
arXiv Detail & Related papers (2022-07-12T15:20:48Z)
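In the spirit of the summary above, the feedback loop can be sketched as a growing text transcript that the LLM re-reads before choosing each next skill. `llm_plan`, `skills`, and `get_feedback` are assumed interfaces, not the paper's actual API.

```python
def inner_monologue(instruction, llm_plan, skills, get_feedback, max_steps=20):
    """Closed-loop planning sketch: environment feedback is appended to a
    running transcript (the 'inner monologue'), so the LLM decides not
    just what to do but also how and when, conditioned on what happened."""
    transcript = f"Task: {instruction}"
    for _ in range(max_steps):
        step = llm_plan(transcript)   # LLM names the next skill, or 'done'
        if step == "done":
            break
        outcome = skills[step]()      # execute the named robotic skill
        transcript += f"\nRobot: {step}\nScene: {get_feedback(outcome)}"
    return transcript
```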
- Linking Emergent and Natural Languages via Corpus Transfer [98.98724497178247]
We propose a novel way to link emergent languages and natural languages via corpus transfer.
Our approach showcases non-trivial transfer benefits for two different tasks -- language modeling and image captioning.
We also introduce a novel metric to predict the transferability of an emergent language by translating emergent messages to natural language captions grounded on the same images.
arXiv Detail & Related papers (2022-03-24T21:24:54Z)
- Inverse Reinforcement Learning with Natural Language Goals [8.972202854038382]
We propose a novel inverse reinforcement learning algorithm to learn a language-conditioned policy and reward function.
Our algorithm outperforms multiple baselines by a large margin on a vision-based natural language instruction following dataset.
arXiv Detail & Related papers (2020-08-16T14:43:49Z)
- Grounding Language to Autonomously-Acquired Skills via Goal Generation [23.327749767424567]
We propose a new conceptual approach to language-conditioned RL: the Language-Goal-Behavior architecture (LGB).
LGB decouples skill learning and language grounding via an intermediate semantic representation of the world.
We present DECSTR, an intrinsically motivated learning agent endowed with an innate semantic representation describing spatial relations between physical objects (sketched below).
arXiv Detail & Related papers (2020-06-12T13:46:10Z)
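The intermediate semantic representation can be pictured as a binary vector of spatial predicates over object pairs. The predicate choice below follows the summary (spatial relations between physical objects); the specific `close`/`above` tests and thresholds are illustrative assumptions.

```python
import itertools
import numpy as np

def semantic_representation(positions, eps=0.1):
    """Sketch of an LGB-style intermediate layer: binary spatial
    predicates over all object pairs, computed from 3-D positions."""
    feats = []
    for i, j in itertools.combinations(range(len(positions)), 2):
        a, b = np.asarray(positions[i]), np.asarray(positions[j])
        horiz = np.linalg.norm(a[:2] - b[:2])
        feats.append(float(np.linalg.norm(a - b) < eps))  # close(i, j)
        feats.append(float(a[2] > b[2] and horiz < eps))  # above(i, j)
        feats.append(float(b[2] > a[2] and horiz < eps))  # above(j, i)
    return np.array(feats)

# Example: three blocks give 3 pairs x 3 predicates = a 9-D semantic
# goal space; skills are learned in this space, and language is later
# grounded by mapping instructions onto configurations of it.
# g = semantic_representation([(0, 0, 0), (0, 0, 0.05), (0.3, 0, 0)])
```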
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and accepts no responsibility for any consequences of its use.