Code as Reward: Empowering Reinforcement Learning with VLMs
- URL: http://arxiv.org/abs/2402.04764v1
- Date: Wed, 7 Feb 2024 11:27:45 GMT
- Title: Code as Reward: Empowering Reinforcement Learning with VLMs
- Authors: David Venuto, Sami Nur Islam, Martin Klissarov, Doina Precup, Sherry
Yang, Ankit Anand
- Abstract summary: We propose a framework named Code as Reward (VLM-CaR) to produce dense reward functions from pre-trained Vision-Language Models.
VLM-CaR significantly reduces the computational burden of querying the VLM directly.
We show that the dense rewards generated through our approach are very accurate across a diverse set of discrete and continuous environments.
- Score: 37.862999288331906
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Pre-trained Vision-Language Models (VLMs) are able to understand visual
concepts, describe and decompose complex tasks into sub-tasks, and provide
feedback on task completion. In this paper, we aim to leverage these
capabilities to support the training of reinforcement learning (RL) agents. In
principle, VLMs are well suited for this purpose, as they can naturally analyze
image-based observations and provide feedback (reward) on learning progress.
However, inference in VLMs is computationally expensive, so querying them
frequently to compute rewards would significantly slow down the training of an
RL agent. To address this challenge, we propose a framework named Code as
Reward (VLM-CaR). VLM-CaR produces dense reward functions from VLMs through
code generation, thereby significantly reducing the computational burden of
querying the VLM directly. We show that the dense rewards generated through our
approach are very accurate across a diverse set of discrete and continuous
environments, and can be more effective in training RL policies than the
original sparse environment rewards.
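To make the mechanism concrete, the sketch below queries a VLM once for a dense reward function expressed as Python code and then reuses that generated function inside an ordinary RL loop, so no VLM call is needed at every environment step. The query_vlm helper, the prompt wording, and the env/agent interfaces are illustrative placeholders under assumed APIs, not VLM-CaR's actual pipeline.

    def query_vlm(prompt: str) -> str:
        """Placeholder for a call to a pre-trained VLM (e.g. an API client);
        assumed here to return the source code of a Python reward function."""
        raise NotImplementedError

    def generate_reward_fn(task_description: str):
        """Ask the VLM once for a dense reward function written as code."""
        prompt = (
            "Write a Python function `reward(observation) -> float` that returns "
            f"a dense reward for the task: {task_description}. The observation "
            "is an RGB image given as a NumPy array."
        )
        source = query_vlm(prompt)
        namespace = {}
        exec(source, namespace)  # generated code should be sandboxed/verified before use
        return namespace["reward"]

    def train(env, agent, task_description: str, num_steps: int = 100_000):
        """Standard RL loop: the VLM is queried once up front, never per step."""
        reward_fn = generate_reward_fn(task_description)  # one-off VLM query
        obs = env.reset()
        for _ in range(num_steps):
            action = agent.act(obs)
            obs, sparse_reward, done, info = env.step(action)
            dense_reward = reward_fn(obs)  # cheap local call, no VLM inference
            agent.update(obs, action, dense_reward)
            if done:
                obs = env.reset()

The one-off code generation is what removes per-step VLM inference from the training loop; only the generated function runs at every step.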
Related papers
- Exploring RL-based LLM Training for Formal Language Tasks with Programmed Rewards [49.7719149179179]
This paper investigates the feasibility of using PPO for reinforcement learning (RL) from explicitly programmed reward signals.
We focus on tasks expressed through formal languages, such as programming, where explicit reward functions can be programmed to automatically assess the quality of generated outputs.
Our results show that pure RL-based training for the two formal language tasks is challenging, with success being limited even for the simple arithmetic task.
arXiv Detail & Related papers (2024-10-22T15:59:58Z)
- FuRL: Visual-Language Models as Fuzzy Rewards for Reinforcement Learning [18.60627708199452]
We investigate how to leverage pre-trained visual-language models (VLM) for online Reinforcement Learning (RL).
We first identify the problem of reward misalignment when applying VLM as a reward in RL tasks.
We introduce a lightweight fine-tuning method, named Fuzzy VLM reward-aided RL (FuRL).
arXiv Detail & Related papers (2024-06-02T07:20:08Z)
- RL-VLM-F: Reinforcement Learning from Vision Language Foundation Model Feedback [24.759613248409167]
Reward engineering has long been a challenge in Reinforcement Learning research.
We propose RL-VLM-F, a method that automatically generates reward functions for agents to learn new tasks.
We demonstrate that RL-VLM-F successfully produces effective rewards and policies across various domains.
arXiv Detail & Related papers (2024-02-06T04:06:06Z)
- Vision-Language Models Provide Promptable Representations for Reinforcement Learning [67.40524195671479]
We propose a novel approach that uses the vast amounts of general and indexable world knowledge encoded in vision-language models (VLMs) pre-trained on Internet-scale data for embodied reinforcement learning (RL).
We show that our approach can use chain-of-thought prompting to produce representations of common-sense semantic reasoning, improving policy performance in novel scenes by 1.5 times.
arXiv Detail & Related papers (2024-02-05T00:48:56Z)
- Vision-Language Models are Zero-Shot Reward Models for Reinforcement Learning [12.628697648945298]
Reinforcement learning (RL) requires either manually specifying a reward function, or learning a reward model from a large amount of human feedback.
We study a more sample-efficient alternative: using pretrained vision-language models (VLMs) as zero-shot reward models (RMs) to specify tasks via natural language; a minimal CLIP-based sketch of this idea appears at the end of this listing.
arXiv Detail & Related papers (2023-10-19T17:17:06Z)
- Language Reward Modulation for Pretraining Reinforcement Learning [61.76572261146311]
We propose leveraging the capabilities of learned reward functions (LRFs) as a pretraining signal for reinforcement learning.
Our VLM pretraining approach, which is a departure from previous attempts to use LRFs, can warmstart sample-efficient learning on robot manipulation tasks.
arXiv Detail & Related papers (2023-08-23T17:37:51Z)
- LaGR-SEQ: Language-Guided Reinforcement Learning with Sample-Efficient Querying [71.86163159193327]
Large language models (LLMs) have recently demonstrated their impressive ability to provide context-aware responses via text.
This ability could potentially be used to predict plausible solutions in sequential decision making tasks pertaining to pattern completion.
We introduce LaGR, which uses this predictive ability of LLMs to propose solutions to tasks that have been partially completed by a primary reinforcement learning (RL) agent.
arXiv Detail & Related papers (2023-08-21T02:07:35Z)
- Test-Time Adaptation with CLIP Reward for Zero-Shot Generalization in Vision-Language Models [76.410400238974]
We propose TTA with feedback to rectify the model output and prevent the model from becoming blindly confident.
A CLIP model is adopted as the reward model during TTA and provides feedback for the VLM.
The proposed reinforcement learning with CLIP feedback (RLCF) framework is highly flexible and universal.
arXiv Detail & Related papers (2023-05-29T11:03:59Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
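For the zero-shot reward-model entry above, one common instantiation is to score each image observation by its CLIP similarity to a natural-language task description. The sketch below uses the Hugging Face CLIP model as an assumed implementation choice; it is not taken from that paper's code, and the raw cosine similarity is used as the reward without any calibration.

    import torch
    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    def clip_reward(observation: Image.Image, task_description: str) -> float:
        """Cosine similarity between the observation image and the task text."""
        inputs = processor(text=[task_description], images=observation,
                           return_tensors="pt", padding=True)
        with torch.no_grad():
            out = model(**inputs)
        image_emb = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
        text_emb = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
        return float((image_emb @ text_emb.T).squeeze())

    # e.g. reward = clip_reward(frame, "a robot arm holding the red block")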