GLIDE-RL: Grounded Language Instruction through DEmonstration in RL
- URL: http://arxiv.org/abs/2401.02991v1
- Date: Wed, 3 Jan 2024 17:32:13 GMT
- Title: GLIDE-RL: Grounded Language Instruction through DEmonstration in RL
- Authors: Chaitanya Kharyal and Sai Krishna Gottipati and Tanmay Kumar Sinha and
Srijita Das and Matthew E. Taylor
- Abstract summary: Training efficient Reinforcement Learning (RL) agents grounded in natural language has been a long-standing challenge.
We present a novel algorithm, Grounded Language Instruction through DEmonstration in RL (GLIDE-RL), which introduces a teacher-instructor-student curriculum learning framework.
In this multi-agent framework, the teacher and the student agents learn simultaneously based on the student's current skill level.
- Score: 7.658523833511356
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: One of the final frontiers in the development of complex human-AI
collaborative systems is the ability of AI agents to comprehend natural
language and perform tasks accordingly. However, training efficient
Reinforcement Learning (RL) agents grounded in natural language has been a
long-standing challenge due to the complexity and ambiguity of language and the
sparsity of rewards, among other factors. Several advances in reinforcement
learning, curriculum learning, continual learning, and language models have
independently contributed to effective training of grounded agents in various
environments. Leveraging these developments, we present a novel algorithm,
Grounded Language Instruction through DEmonstration in RL (GLIDE-RL), which
introduces a teacher-instructor-student curriculum learning framework for
training an RL agent that can follow natural language instructions and
generalize to previously unseen instructions. In this multi-agent
framework, the teacher and the student agents learn simultaneously based on the
student's current skill level. We further demonstrate the necessity for
training the student agent with not just one, but multiple teacher agents.
Experiments on a complex sparse-reward environment validate the effectiveness
of our proposed approach.
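To make the abstract concrete, the teacher-instructor-student loop can be pictured roughly as below. This is a minimal Python sketch consistent with the abstract, not the authors' implementation: the `teachers`, `instructor`, `student`, and `env` interfaces and the exact reward scheme are illustrative assumptions.

```python
# Minimal sketch of a teacher-instructor-student curriculum loop in the spirit of
# GLIDE-RL. All names (teachers, instructor, student, env) are hypothetical
# interfaces, not the authors' code; the reward scheme is an assumption.

def train_glide_rl(teachers, instructor, student, env, epochs=1_000):
    for _ in range(epochs):
        for teacher in teachers:  # the paper argues for multiple teachers
            # 1. A teacher acts in the environment and demonstrates a goal.
            demonstration = teacher.rollout(env)

            # 2. The instructor (a language model) turns the demonstrated
            #    behaviour into a natural language instruction.
            instruction = instructor.describe(demonstration)

            # 3. The student attempts to follow the instruction on its own.
            success = student.rollout(env, instruction)

            # 4. Teacher and student learn simultaneously: rewarding the teacher
            #    when the student fails keeps proposed goals near the student's
            #    current skill level (one plausible choice, assumed here).
            teacher.update(reward=0.0 if success else 1.0)
            student.update(reward=1.0 if success else 0.0)
```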
Related papers
- Teaching Embodied Reinforcement Learning Agents: Informativeness and Diversity of Language Use [16.425032085699698]
It is desirable for embodied agents to have the ability to leverage human language to gain explicit or implicit knowledge for learning tasks.
However, it is not clear how to incorporate rich language use to facilitate task learning.
This paper studies different types of language inputs in facilitating reinforcement learning.
arXiv Detail & Related papers (2024-10-31T17:59:52Z)
- Symbolic Learning Enables Self-Evolving Agents [55.625275970720374]
We introduce agent symbolic learning, a systematic framework that enables language agents to optimize themselves on their own.
Agent symbolic learning is designed to optimize the symbolic network within language agents by mimicking two fundamental algorithms in connectionist learning.
We conduct proof-of-concept experiments on both standard benchmarks and complex real-world tasks.
arXiv Detail & Related papers (2024-06-26T17:59:18Z)
- Policy Learning with a Language Bottleneck [65.99843627646018]
Policy Learning with a Language Bottleneck (PLLB) is a framework enabling AI agents to generate linguistic rules.
PLLB alternates between a rule generation step guided by language models, and an update step where agents learn new policies guided by the rules.
In a two-player communication game, a maze-solving task, and two image reconstruction tasks, we show that PLLB agents are not only able to learn more interpretable and generalizable behaviors, but can also share the learned rules with human users.
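A minimal sketch of that alternation is below; `agent`, `env`, and `llm_generate_rules` are assumed placeholder interfaces, not the paper's actual API.

```python
# Sketch of a PLLB-style alternation between rule generation and policy updates.
# All names are hypothetical placeholders.

def pllb_loop(agent, env, llm_generate_rules, iterations=10, episodes_per_iter=32):
    rules = []
    for _ in range(iterations):
        # Rule generation step: a language model summarizes recent behaviour
        # into human-readable linguistic rules.
        episodes = [agent.rollout(env) for _ in range(episodes_per_iter)]
        rules = llm_generate_rules(episodes)

        # Update step: the agent learns a new policy guided by those rules
        # (how the guidance enters the update is an assumption here).
        agent.update_policy(env, guidance=rules)
    return rules  # the final rules are interpretable and can be shared with humans
```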
arXiv Detail & Related papers (2024-05-07T08:40:21Z)
- LMRL Gym: Benchmarks for Multi-Turn Reinforcement Learning with Language Models [56.25156596019168]
This paper introduces the LMRL-Gym benchmark for evaluating multi-turn RL for large language models (LLMs).
Our benchmark consists of 8 different language tasks, which require multiple rounds of language interaction and cover a range of tasks in open-ended dialogue and text games.
arXiv Detail & Related papers (2023-11-30T03:59:31Z)
- Large Language Model as a Policy Teacher for Training Reinforcement Learning Agents [16.24662355253529]
Large Language Models (LLMs) can address sequential decision-making tasks through the provision of high-level instructions.
However, LLMs lack specialization in tackling specific target problems, particularly in real-time dynamic environments.
We introduce a novel framework that addresses these challenges by training a smaller, specialized student RL agent using instructions from an LLM-based teacher agent.
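One way such teacher-student training could look is sketched below; it assumes the LLM teacher provides a per-step instruction, and every name (`student`, `env`, `query_llm_teacher`) is a hypothetical placeholder rather than the paper's interface.

```python
# Sketch of training a small, specialized student RL agent from an LLM teacher's
# high-level instructions. All names are hypothetical placeholders.

def train_student_with_llm_teacher(student, env, query_llm_teacher, steps=100_000):
    obs = env.reset()
    for _ in range(steps):
        # The LLM teacher supplies a high-level instruction for the current
        # observation; the lightweight student acts on it in real time.
        instruction = query_llm_teacher(obs)
        action = student.act(obs, instruction)
        next_obs, reward, done, info = env.step(action)
        student.learn(obs, instruction, action, reward, next_obs, done)
        obs = env.reset() if done else next_obs
```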
arXiv Detail & Related papers (2023-11-22T13:15:42Z)
- Accelerating Reinforcement Learning of Robotic Manipulations via Feedback from Large Language Models [21.052532074815765]
We introduce the Lafite-RL (Language agent feedback interactive Reinforcement Learning) framework.
It enables RL agents to learn robotic tasks efficiently by taking advantage of Large Language Models' timely feedback.
It outperforms the baseline in terms of both learning efficiency and success rate.
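As an illustration, LLM feedback of this kind is often folded into the reward signal; the snippet below is a generic reward-shaping sketch under that assumption, not the Lafite-RL implementation, and `agent`, `env`, and `llm_feedback` are placeholders.

```python
# Generic sketch of using timely LLM feedback as an auxiliary reward signal.
# Treating the feedback as reward shaping is an assumption for illustration.

def feedback_shaped_step(agent, env, obs, llm_feedback, shaping_weight=0.1):
    action = agent.act(obs)
    next_obs, env_reward, done, info = env.step(action)

    # The LLM judges the transition (e.g. +1 helpful, 0 neutral, -1 harmful)
    # and its judgement is blended into the environment reward.
    score = llm_feedback(obs, action, next_obs)
    shaped_reward = env_reward + shaping_weight * score

    agent.learn(obs, action, shaped_reward, next_obs, done)
    return next_obs, done
```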
arXiv Detail & Related papers (2023-11-04T11:21:38Z)
- Progressively Efficient Learning [58.6490456517954]
We develop a novel learning framework named Communication-Efficient Interactive Learning (CEIL).
CEIL leads to the emergence of a human-like pattern where the learner and the teacher communicate efficiently by exchanging increasingly more abstract intentions.
Agents trained with CEIL quickly master new tasks, outperforming non-hierarchical and hierarchical imitation learning by up to 50% and 20% in absolute success rate.
arXiv Detail & Related papers (2023-10-13T07:52:04Z)
- Collaborating with language models for embodied reasoning [30.82976922056617]
Reasoning in a complex and ambiguous environment is a key goal for Reinforcement Learning (RL) agents.
We present a set of tasks that require reasoning, test this system's ability to generalize zero-shot, and investigate failure cases.
arXiv Detail & Related papers (2023-02-01T21:26:32Z)
- Improving Policy Learning via Language Dynamics Distillation [87.27583619910338]
We propose Language Dynamics Distillation (LDD), which pretrains a model to predict environment dynamics given demonstrations with language descriptions.
We show that language descriptions in demonstrations improve sample-efficiency and generalization across environments.
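Read literally, this suggests a two-stage recipe: pretrain on dynamics prediction over language-annotated demonstrations, then reuse the learned representation for policy learning. The sketch below follows that reading with hypothetical names (`encoder`, `dynamics_head`, `optimizer`, `policy`); it is not the paper's code.

```python
# Sketch of a two-stage recipe consistent with the LDD summary above: dynamics
# pretraining on language-annotated demonstrations, then policy learning that
# reuses the pretrained encoder. All names and methods are placeholders.

def pretrain_dynamics(encoder, dynamics_head, optimizer, demos):
    for obs, action, next_obs, description in demos:
        features = encoder(obs, description)
        loss = dynamics_head.loss(features, action, next_obs)
        optimizer.step(loss)  # placeholder for a gradient update
    return encoder

def learn_policy(policy, encoder, env):
    # The language-aware encoder initializes (or regularizes) the policy,
    # which is then trained with ordinary RL in the target environment.
    policy.attach_encoder(encoder)
    policy.train(env)
```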
arXiv Detail & Related papers (2022-09-30T19:56:04Z)
- Multitasking Inhibits Semantic Drift [46.71462510028727]
We study the dynamics of learning in latent language policies (LLPs).
LLPs can solve challenging long-horizon reinforcement learning problems.
Previous work has found that LLP training is prone to semantic drift.
arXiv Detail & Related papers (2021-04-15T03:42:17Z)