LiFT: Unsupervised Reinforcement Learning with Foundation Models as
Teachers
- URL: http://arxiv.org/abs/2312.08958v1
- Date: Thu, 14 Dec 2023 14:07:41 GMT
- Title: LiFT: Unsupervised Reinforcement Learning with Foundation Models as
Teachers
- Authors: Taewook Nam, Juyong Lee, Jesse Zhang, Sung Ju Hwang, Joseph J. Lim,
Karl Pertsch
- Abstract summary: We propose a framework that guides a reinforcement learning agent to acquire semantically meaningful behavior without human feedback.
In our framework, the agent receives task instructions grounded in a training environment from large language models.
We demonstrate that our method can learn semantically meaningful skills in a challenging open-ended MineDojo environment.
- Score: 59.69716962256727
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a framework that leverages foundation models as teachers, guiding
a reinforcement learning agent to acquire semantically meaningful behavior
without human feedback. In our framework, the agent receives task instructions
grounded in a training environment from large language models. Then, a
vision-language model guides the agent in learning the multi-task
language-conditioned policy by providing reward feedback. We demonstrate that
our method can learn semantically meaningful skills in a challenging open-ended
MineDojo environment while prior unsupervised skill discovery methods struggle.
Additionally, we discuss observed challenges of using off-the-shelf foundation
models as teachers and our efforts to address them.
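The abstract describes a two-teacher setup: a large language model proposes environment-grounded task instructions, and a vision-language model scores the agent's observations against those instructions to provide reward for a multi-task, language-conditioned policy. The following is a minimal, hypothetical Python sketch of that loop; the function and class names (`propose_instructions`, `vlm_reward`, `StubEnv`, `StubPolicy`) and the cosine-similarity reward are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal, hypothetical sketch of an LLM/VLM-as-teacher RL loop (not the authors' code).
import numpy as np

def propose_instructions(env_description: str) -> list[str]:
    # Assumption: an LLM would return task strings grounded in the training
    # environment; hard-coded placeholders are used here.
    return ["harvest a log", "craft a wooden pickaxe"]

def embed(x) -> np.ndarray:
    # Assumption: a CLIP-style VLM maps frames and text into a shared embedding space.
    rng = np.random.default_rng(abs(hash(str(x))) % (2**32))
    v = rng.normal(size=128)
    return v / np.linalg.norm(v)

def vlm_reward(frame, instruction: str) -> float:
    # Reward feedback: cosine similarity between frame and instruction embeddings.
    return float(embed(frame) @ embed(instruction))

class StubEnv:
    # Placeholder environment standing in for MineDojo.
    def reset(self):
        self.t = 0
        return "frame-0"
    def step(self, action):
        self.t += 1
        return f"frame-{self.t}", 0.0, self.t >= 5, {}

class StubPolicy:
    # Placeholder multi-task, language-conditioned policy.
    def act(self, obs, task):
        return 0
    def update(self, obs, action, reward, task):
        pass

def train(env, policy, episodes=10):
    instructions = propose_instructions("open-ended MineDojo world")
    rng = np.random.default_rng(0)
    for _ in range(episodes):
        task = str(rng.choice(instructions))          # sample an LLM-proposed task
        obs, done = env.reset(), False
        while not done:
            action = policy.act(obs, task)            # language-conditioned action
            obs, _, done, _ = env.step(action)
            reward = vlm_reward(obs, task)            # VLM feedback replaces env reward
            policy.update(obs, action, reward, task)

if __name__ == "__main__":
    train(StubEnv(), StubPolicy())
```

In this sketch the environment reward is ignored entirely and the VLM similarity score is the only learning signal, which mirrors the "no human feedback" framing of the abstract under the stated assumptions.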