Reflexion: Language Agents with Verbal Reinforcement Learning
- URL: http://arxiv.org/abs/2303.11366v4
- Date: Tue, 10 Oct 2023 05:21:45 GMT
- Title: Reflexion: Language Agents with Verbal Reinforcement Learning
- Authors: Noah Shinn, Federico Cassano, Edward Berman, Ashwin Gopinath, Karthik Narasimhan, Shunyu Yao
- Abstract summary: Reflexion is a novel framework to reinforce language agents not by updating weights, but through linguistic feedback.
It is flexible enough to incorporate various types (scalar values or free-form language) and sources (external or internally simulated) of feedback signals.
For example, Reflexion achieves a 91% pass@1 accuracy on the HumanEval coding benchmark, surpassing the previous state-of-the-art GPT-4 that achieves 80%.
- Score: 44.85337947858337
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large language models (LLMs) have been increasingly used to interact with
external environments (e.g., games, compilers, APIs) as goal-driven agents.
However, it remains challenging for these language agents to quickly and
efficiently learn from trial-and-error as traditional reinforcement learning
methods require extensive training samples and expensive model fine-tuning. We
propose Reflexion, a novel framework to reinforce language agents not by
updating weights, but instead through linguistic feedback. Concretely,
Reflexion agents verbally reflect on task feedback signals, then maintain their
own reflective text in an episodic memory buffer to induce better
decision-making in subsequent trials. Reflexion is flexible enough to
incorporate various types (scalar values or free-form language) and sources
(external or internally simulated) of feedback signals, and obtains significant
improvements over a baseline agent across diverse tasks (sequential
decision-making, coding, language reasoning). For example, Reflexion achieves a
91% pass@1 accuracy on the HumanEval coding benchmark, surpassing the previous
state-of-the-art GPT-4 that achieves 80%. We also conduct ablation and analysis
studies using different feedback signals, feedback incorporation methods, and
agent types, and provide insights into how they affect performance.
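The abstract describes a trial-and-reflect loop: the agent attempts a task, receives a feedback signal, verbally reflects on that signal, and stores the reflection in an episodic memory buffer that conditions later trials. Below is a minimal Python sketch of such a loop; the function names (run_agent, get_feedback, generate_reflection) and the list-based memory buffer are hypothetical placeholders for illustration, not the authors' implementation.

    from typing import Callable, List, Tuple

    def reflexion_loop(
        task: str,
        run_agent: Callable[[str, List[str]], str],            # actor: (task, reflections) -> attempt
        get_feedback: Callable[[str, str], Tuple[bool, str]],   # evaluator: (task, attempt) -> (success, feedback)
        generate_reflection: Callable[[str, str, str], str],    # self-reflection: (task, attempt, feedback) -> text
        max_trials: int = 5,
    ) -> str:
        """Hypothetical sketch of a Reflexion-style trial loop with verbal self-reflection."""
        memory: List[str] = []  # episodic memory buffer of free-form reflections
        attempt = ""
        for _ in range(max_trials):
            # Act, conditioning the agent on reflections from earlier failed trials.
            attempt = run_agent(task, memory)
            # Obtain a feedback signal (test results, a scalar reward rendered as text, etc.).
            success, feedback = get_feedback(task, attempt)
            if success:
                return attempt
            # Verbally reflect on the failure and store the reflection for the next trial.
            memory.append(generate_reflection(task, attempt, feedback))
        return attempt

For the coding results quoted above, the feedback step would come from executing tests against the generated program, while for decision-making tasks it can be a scalar signal rendered as text; the exact evaluators and prompts are defined in the paper, not in this sketch.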
Related papers
- LLMs are Superior Feedback Providers: Bootstrapping Reasoning for Lie Detection with Self-Generated Feedback [33.14770105185958]
Large Language Models (LLMs) excel at generating human-like dialogues and comprehending text.
We propose a bootstrapping framework that leverages self-generated feedback to enhance LLM reasoning capabilities for lie detection.
We investigate the application of the proposed framework for detecting betrayal and deception in Diplomacy games, and compare it with feedback from professional human players.
arXiv Detail & Related papers (2024-08-25T18:47:55Z)
- Re-ReST: Reflection-Reinforced Self-Training for Language Agents [101.22559705696885]
Self-training in language agents can generate supervision from the agent itself.
We present Reflection-Reinforced Self-Training (Re-ReST), which uses a reflector to refine low-quality generated samples (see the sketch after this list).
arXiv Detail & Related papers (2024-06-03T16:21:38Z)
- MetaReflection: Learning Instructions for Language Agents using Past Reflections [11.028256182234017]
We introduce MetaReflection, a novel offline reinforcement learning technique that enhances the performance of Language Agents.
We demonstrate the efficacy of MetaReflection by evaluating across multiple domains, including complex logical reasoning, biomedical semantic similarity, open world question answering, and vulnerability threat detection.
arXiv Detail & Related papers (2024-05-13T10:51:43Z)
- Is Feedback All You Need? Leveraging Natural Language Feedback in Goal-Conditioned Reinforcement Learning [54.31495290436766]
We extend BabyAI to automatically generate language feedback from environment dynamics and goal condition success.
We modify the Decision Transformer architecture to take advantage of this additional signal.
We find that training with language feedback either in place of or in addition to the return-to-go or goal descriptions improves agents' generalisation performance.
arXiv Detail & Related papers (2023-12-07T22:33:34Z)
- Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization [103.70896967077294]
This paper introduces a principled framework for reinforcing large language agents by learning a retrospective model.
Our proposed agent architecture learns from rewards across multiple environments and tasks in order to fine-tune a pre-trained language model.
Experimental results on various tasks demonstrate that the language agents improve over time.
arXiv Detail & Related papers (2023-08-04T06:14:23Z)
- SimOAP: Improve Coherence and Consistency in Persona-based Dialogue Generation via Over-sampling and Post-evaluation [54.66399120084227]
Language models trained on large-scale corpora can generate remarkably fluent results in open-domain dialogue.
For the persona-based dialogue generation task, consistency and coherence remain major challenges for language models.
A two-stage strategy, SimOAP, is proposed: over-sampling followed by post-evaluation.
arXiv Detail & Related papers (2023-05-18T17:23:00Z)
- Improving Policy Learning via Language Dynamics Distillation [87.27583619910338]
We propose Language Dynamics Distillation (LDD), which pretrains a model to predict environment dynamics given demonstrations with language descriptions.
We show that language descriptions in demonstrations improve sample-efficiency and generalization across environments.
arXiv Detail & Related papers (2022-09-30T19:56:04Z)
- Exploring Fluent Query Reformulations with Text-to-Text Transformers and Reinforcement Learning [11.205077315939644]
We explore methods to generate query reformulations by training reformulators using text-to-text transformers.
We apply policy-based reinforcement learning algorithms to further encourage reward learning.
Our framework is demonstrated to be flexible, allowing reward signals to be sourced from different downstream environments.
arXiv Detail & Related papers (2020-12-18T03:16:37Z)
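The Re-ReST entry above points to the sketch below: an illustrative reflection-reinforced self-training loop in which low-quality self-generated samples are repaired by a reflector before being used for fine-tuning. This is an assumption-laden outline based only on the one-sentence summary; generate, evaluate, reflect_and_refine, and fine_tune are hypothetical placeholders, not the Re-ReST API.

    from typing import Callable, List, Tuple

    def reflection_reinforced_self_training(
        tasks: List[str],
        generate: Callable[[str], str],                       # agent proposes a solution for a task
        evaluate: Callable[[str, str], Tuple[bool, str]],     # returns (is_good, feedback)
        reflect_and_refine: Callable[[str, str, str], str],   # reflector revises a low-quality sample using feedback
        fine_tune: Callable[[List[Tuple[str, str]]], None],   # self-training step on collected (task, sample) pairs
    ) -> None:
        """Hypothetical sketch: keep good samples, have a reflector repair the bad ones, then self-train."""
        dataset: List[Tuple[str, str]] = []
        for task in tasks:
            sample = generate(task)
            is_good, feedback = evaluate(task, sample)
            if not is_good:
                # Low-quality samples are not discarded; the reflector revises them using the feedback.
                sample = reflect_and_refine(task, sample, feedback)
            dataset.append((task, sample))
        fine_tune(dataset)  # update the agent on the refined, self-generated supervision

The design choice mirrored here is that failed samples are repaired rather than discarded, so the self-training data keeps coverage of hard tasks.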