FireAct: Toward Language Agent Fine-tuning
- URL: http://arxiv.org/abs/2310.05915v1
- Date: Mon, 9 Oct 2023 17:58:38 GMT
- Title: FireAct: Toward Language Agent Fine-tuning
- Authors: Baian Chen, Chang Shu, Ehsan Shareghi, Nigel Collier, Karthik
Narasimhan, Shunyu Yao
- Abstract summary: We argue for the overlooked direction of fine-tuning LMs to obtain language agents.
Fine-tuning Llama2-7B with 500 agent trajectories generated by GPT-4 leads to a 77% HotpotQA performance increase.
We propose FireAct, a novel approach to fine-tuning LMs with trajectories from multiple tasks and prompting methods.
- Score: 63.06306936820456
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent efforts have augmented language models (LMs) with external tools or
environments, leading to the development of language agents that can reason and
act. However, most of these agents rely on few-shot prompting techniques with
off-the-shelf LMs. In this paper, we investigate and argue for the overlooked
direction of fine-tuning LMs to obtain language agents. Using a setup of
question answering (QA) with a Google search API, we explore a variety of base
LMs, prompting methods, fine-tuning data, and QA tasks, and find language
agents are consistently improved after fine-tuning their backbone LMs. For
example, fine-tuning Llama2-7B with 500 agent trajectories generated by GPT-4
leads to a 77% HotpotQA performance increase. Furthermore, we propose FireAct,
a novel approach to fine-tuning LMs with trajectories from multiple tasks and
prompting methods, and show having more diverse fine-tuning data can further
improve agents. Along with other findings regarding scaling effects,
robustness, generalization, efficiency and cost, our work establishes
comprehensive benefits of fine-tuning LMs for agents, and provides an initial
set of experimental designs, insights, as well as open questions toward
language agent fine-tuning.
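
As a rough illustration of the recipe the abstract describes (fine-tuning an open LM such as Llama2-7B on agent trajectories generated by GPT-4 in a QA-with-search setup), the sketch below flattens a ReAct-style trajectory of thought / action / observation steps into a supervised fine-tuning example. The field names, the `Search[...]`/`Finish[...]` action format, and the prompt/completion JSONL layout are illustrative assumptions, not the paper's exact schema.

```python
import json

# Hypothetical trajectory record: one GPT-4-generated ReAct rollout for a
# HotpotQA-style question answered with a search tool. Field names are
# illustrative assumptions, not the paper's exact data format.
example_trajectory = {
    "question": "Which magazine was started first, Arthur's Magazine or First for Women?",
    "steps": [
        {
            "thought": "I need to find when each magazine was started.",
            "action": "Search[Arthur's Magazine]",
            "observation": "Arthur's Magazine was a periodical first published in 1844.",
        },
        {
            "thought": "Now I should check First for Women.",
            "action": "Search[First for Women]",
            "observation": "First for Women is a magazine launched in 1989.",
        },
        {
            "thought": "1844 is earlier than 1989, so Arthur's Magazine came first.",
            "action": "Finish[Arthur's Magazine]",
            "observation": "Episode finished.",
        },
    ],
}


def trajectory_to_sft_example(traj: dict) -> dict:
    """Flatten a ReAct trajectory into a single prompt/completion pair.

    The model is trained to reproduce the full reasoning-and-acting trace;
    at inference time, generation would instead be interleaved with real
    tool calls (e.g., a search API).
    """
    prompt = f"Question: {traj['question']}\n"
    completion = ""
    for i, step in enumerate(traj["steps"], start=1):
        completion += (
            f"Thought {i}: {step['thought']}\n"
            f"Action {i}: {step['action']}\n"
            f"Observation {i}: {step['observation']}\n"
        )
    return {"prompt": prompt, "completion": completion}


if __name__ == "__main__":
    # Write fine-tuning data in a JSONL layout commonly accepted by
    # open-source SFT tooling; in practice only successful trajectories
    # would typically be kept.
    with open("fireact_sft.jsonl", "w") as f:
        f.write(json.dumps(trajectory_to_sft_example(example_trajectory)) + "\n")
```

Mixing trajectories from multiple tasks and prompting methods into one such training file is, per the abstract, the diversity idea behind FireAct's fine-tuning approach.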
Related papers
- DARA: Decomposition-Alignment-Reasoning Autonomous Language Agent for Question Answering over Knowledge Graphs [70.54226917774933]
We propose the Decomposition-Alignment-Reasoning Agent (DARA) framework.
DARA effectively parses questions into formal queries through a dual mechanism.
We show that DARA attains performance comparable to state-of-the-art enumerating-and-ranking-based methods for KGQA.
arXiv Detail & Related papers (2024-06-11T09:09:37Z)
- Enhancing the General Agent Capabilities of Low-Parameter LLMs through Tuning and Multi-Branch Reasoning [56.82041895921434]
Open-source pre-trained Large Language Models (LLMs) exhibit strong language understanding and generation capabilities.
When used as agents for dealing with complex problems in the real world, their performance is far inferior to that of large commercial models such as ChatGPT and GPT-4.
arXiv Detail & Related papers (2024-03-29T03:48:12Z)
- Agent-FLAN: Designing Data and Methods of Effective Agent Tuning for Large Language Models [56.00992369295851]
Open-sourced Large Language Models (LLMs) have achieved great success in various NLP tasks; however, they are still far inferior to API-based models when acting as agents.
This paper delivers three key observations: (1) the current agent training corpus entangles format following with agent reasoning, which shifts significantly from the distribution of its pre-training data; (2) LLMs exhibit different learning speeds on the capabilities required by agent tasks; and (3) current approaches introduce hallucinations as a side effect of improving agent abilities.
We propose Agent-FLAN to effectively Fine-tune LANguage models for Agents.
arXiv Detail & Related papers (2024-03-19T16:26:10Z)
- KnowAgent: Knowledge-Augmented Planning for LLM-Based Agents [54.09074527006576]
Large Language Models (LLMs) have demonstrated great potential in complex reasoning tasks, yet they fall short when tackling more sophisticated challenges.
This inadequacy primarily stems from the lack of built-in action knowledge in language agents.
We introduce KnowAgent, a novel approach designed to enhance the planning capabilities of LLMs by incorporating explicit action knowledge.
arXiv Detail & Related papers (2024-03-05T16:39:12Z)
- Language Agent Tree Search Unifies Reasoning Acting and Planning in Language Models [31.509994889286183]
We introduce Language Agent Tree Search (LATS) -- the first general framework that synergizes the capabilities of language models (LMs) in reasoning, acting, and planning.
A key feature of our approach is the incorporation of an environment for external feedback, which offers a more deliberate and adaptive problem-solving mechanism.
LATS achieves state-of-the-art pass@1 accuracy (92.7%) for programming on HumanEval with GPT-4 and demonstrates gradient-free performance (average score of 75.9) comparable to gradient-based fine-tuning for web navigation on WebShop with GPT. (A rough sketch of such a search-with-feedback loop appears after this list.)
arXiv Detail & Related papers (2023-10-06T17:55:11Z)
- Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization [103.70896967077294]
This paper introduces a principled framework for reinforcing large language agents by learning a retrospective model.
Our proposed agent architecture learns from rewards across multiple environments and tasks to fine-tune a pre-trained language model.
Experimental results on various tasks demonstrate that the language agents improve over time.
arXiv Detail & Related papers (2023-08-04T06:14:23Z)
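
As a loose sketch of the search-with-feedback idea summarized in the LATS entry above, the snippet below expands a small tree of candidate actions proposed by a language model and scores partial plans with environment feedback. The `propose_actions` and `environment_feedback` functions are stand-in stubs for an LM call and a real environment, and the beam-style expansion is a deliberately simplified proxy for the full framework, not the LATS implementation.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Node:
    """One node in the search tree: the action history taken so far."""
    actions: List[str] = field(default_factory=list)
    value: float = 0.0


def propose_actions(state: Node) -> List[str]:
    """Stand-in for sampling candidate actions from a language model."""
    step = len(state.actions)
    return [f"action_{step}_{k}" for k in range(2)]


def environment_feedback(state: Node) -> float:
    """Stand-in for external feedback (test results, reward, self-evaluation)."""
    return -abs(len(state.actions) - 3)  # toy score: prefer roughly 3-step plans


def tree_search(depth: int = 3, beam: int = 2) -> Node:
    """Best-first expansion: keep the `beam` highest-value partial plans."""
    frontier = [Node()]
    for _ in range(depth):
        children = []
        for node in frontier:
            for action in propose_actions(node):
                child = Node(actions=node.actions + [action])
                child.value = environment_feedback(child)
                children.append(child)
        # Keep only the most promising children, a crude proxy for the
        # selection and backpropagation steps of a full tree search.
        frontier = sorted(children, key=lambda n: n.value, reverse=True)[:beam]
    return max(frontier, key=lambda n: n.value)


if __name__ == "__main__":
    best = tree_search()
    print("best plan:", best.actions, "value:", best.value)
```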
This list is automatically generated from the titles and abstracts of the papers on this site.