From Summary to Action: Enhancing Large Language Models for Complex
Tasks with Open World APIs
- URL: http://arxiv.org/abs/2402.18157v1
- Date: Wed, 28 Feb 2024 08:42:23 GMT
- Title: From Summary to Action: Enhancing Large Language Models for Complex
Tasks with Open World APIs
- Authors: Yulong Liu, Yunlong Yuan, Chunwei Wang, Jianhua Han, Yongqiang Ma, Li
Zhang, Nanning Zheng, Hang Xu
- Abstract summary: We introduce a novel tool invocation pipeline designed to control massive real-world APIs.
This pipeline mirrors the human task-solving process, addressing complicated real-life user queries.
Empirical evaluations of our Sum2Act pipeline on the ToolBench benchmark show significant performance improvements.
- Score: 62.496139001509114
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The distinction between humans and animals lies in the unique ability of
humans to use and create tools. Tools empower humans to overcome physiological
limitations, fostering the creation of magnificent civilizations. Similarly,
enabling foundational models like Large Language Models (LLMs) with the
capacity to learn external tool usage may serve as a pivotal step toward
realizing artificial general intelligence. Previous studies in this field have
predominantly pursued two distinct approaches to augment the tool invocation
capabilities of LLMs. The first approach emphasizes the construction of
relevant datasets for model fine-tuning. The second approach, in contrast, aims
to fully exploit the inherent reasoning abilities of LLMs through in-context
learning strategies. In this work, we introduce a novel tool invocation
pipeline designed to control massive real-world APIs. This pipeline mirrors the
human task-solving process, addressing complicated real-life user queries. At
each step, we guide LLMs to summarize the achieved results and determine the
next course of action. We term this pipeline 'from Summary to Action', Sum2Act
for short. Empirical evaluations of our Sum2Act pipeline on the ToolBench
benchmark show significant performance improvements, outperforming established
methods like ReAct and DFSDT. This highlights Sum2Act's effectiveness in
enhancing LLMs for complex real-world tasks.
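To make the pipeline concrete, here is a minimal Python sketch of the summarize-then-act loop the abstract describes. The helpers `call_llm` and `execute_api` and the prompt wording are hypothetical stand-ins under stated assumptions, not the authors' implementation.

```python
# Minimal sketch of a summarize-then-act loop in the spirit of Sum2Act.
# `call_llm` and `execute_api` are hypothetical placeholders, not the
# paper's actual implementation.
from dataclasses import dataclass, field


def call_llm(prompt: str) -> str:
    """Placeholder for any chat-completion backend."""
    raise NotImplementedError


def execute_api(action: str) -> str:
    """Placeholder that dispatches an action string to a real-world API."""
    raise NotImplementedError


@dataclass
class State:
    query: str
    summary: str = ""                       # running summary of progress
    observations: list[str] = field(default_factory=list)


def sum2act(query: str, max_steps: int = 10) -> str:
    state = State(query=query)
    for _ in range(max_steps):
        # Step 1: summarize what has been achieved so far, so the next
        # decision is made from a compact state rather than the raw trace.
        state.summary = call_llm(
            f"Task: {state.query}\n"
            f"Observations: {state.observations}\n"
            "Summarize the progress made toward the task."
        )
        # Step 2: decide the next course of action from the summary.
        action = call_llm(
            f"Task: {state.query}\n"
            f"Progress summary: {state.summary}\n"
            "Name the next API call with arguments, or reply "
            "'FINISH: <answer>' if the task is complete."
        )
        if action.startswith("FINISH"):
            return action.removeprefix("FINISH:").strip()
        state.observations.append(execute_api(action))
    return state.summary  # best effort if the step budget runs out
```

Keeping the decision step conditioned on a compact summary, rather than the full interaction trace, is what distinguishes this loop from a plain ReAct-style agent.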
Related papers
- LLM With Tools: A Survey [0.0]
This paper delves into the methodology, challenges, and developments in the realm of teaching LLMs to use external tools.
We introduce a standardized paradigm for tool integration guided by a series of functions that map user instructions to actionable plans.
Our exploration reveals the various challenges encountered, such as tool invocation timing, selection accuracy, and the need for robust reasoning processes.
arXiv Detail & Related papers (2024-09-24T14:08:11Z)
- What Affects the Stability of Tool Learning? An Empirical Study on the Robustness of Tool Learning Frameworks [33.51887014808227]
This paper explores the impact of both internal and external factors on the performance of tool learning frameworks.
We draw several insightful conclusions for future work, including the observation that LLMs can benefit significantly from increased trial and exploration.
arXiv Detail & Related papers (2024-07-03T11:06:05Z)
- Can Long-Context Language Models Subsume Retrieval, RAG, SQL, and More? [54.667202878390526]
Long-context language models (LCLMs) have the potential to revolutionize our approach to tasks traditionally reliant on external tools like retrieval systems or databases.
We introduce LOFT, a benchmark of real-world tasks requiring context up to millions of tokens designed to evaluate LCLMs' performance on in-context retrieval and reasoning.
Our findings reveal LCLMs' surprising ability to rival state-of-the-art retrieval and RAG systems, despite never having been explicitly trained for these tasks.
arXiv Detail & Related papers (2024-06-19T00:28:58Z)
- Chain of Tools: Large Language Model is an Automatic Multi-tool Learner [54.992464510992605]
Automatic Tool Chain (ATC) is a framework that enables large language models (LLMs) to act as multi-tool users.
To scale up the scope of usable tools, we further propose a black-box probing method.
For a comprehensive evaluation, we build a challenging benchmark named ToolFlow.
arXiv Detail & Related papers (2024-05-26T11:40:58Z)
- Towards Completeness-Oriented Tool Retrieval for Large Language Models [60.733557487886635]
Real-world systems often incorporate a wide array of tools, making it impractical to input all tools into Large Language Models.
Existing tool retrieval methods primarily focus on semantic matching between user queries and tool descriptions.
We propose a novel model-agnostic COllaborative Learning-based Tool Retrieval approach, COLT, which captures not only the semantic similarities between user queries and tool descriptions but also the collaborative information among tools (see the first sketch after this list).
arXiv Detail & Related papers (2024-05-25T06:41:23Z)
- Look Before You Leap: Towards Decision-Aware and Generalizable Tool-Usage for Large Language Models [26.28459880766842]
We propose a decision-aware and generalizable tool-usage framework (DEER).
Specifically, we first construct the tool-usage samples with multiple decision branches via an automatic generation pipeline.
Our proposed DEER is effective and significantly outperforms baselines across various datasets.
arXiv Detail & Related papers (2024-02-26T16:11:03Z)
- Small LLMs Are Weak Tool Learners: A Multi-LLM Agent [73.54562551341454]
Large Language Model (LLM) agents significantly extend the capabilities of standalone LLMs.
We propose a novel approach that decomposes the capabilities required for tool use into a planner, a caller, and a summarizer (see the second sketch after this list).
This modular framework facilitates individual updates and the potential use of smaller LLMs for building each capability.
arXiv Detail & Related papers (2024-01-14T16:17:07Z)
- Confucius: Iterative Tool Learning from Introspection Feedback by Easy-to-Difficult Curriculum [42.36892453363961]
We propose a novel tool learning framework to train large language models (LLMs) to use complicated tools in real-world scenarios.
We first propose a multi-stage learning method that teaches the LLM to use various tools via an easy-to-difficult curriculum.
We then propose Iterative Self-instruct from Introspective Feedback, which dynamically constructs the training dataset to improve the model's ability to use complicated tools.
arXiv Detail & Related papers (2023-08-27T07:53:00Z)
- CREATOR: Tool Creation for Disentangling Abstract and Concrete Reasoning of Large Language Models [74.22729793816451]
Large Language Models (LLMs) have made significant progress in utilizing tools, but their ability is limited by API availability.
We propose CREATOR, a novel framework that enables LLMs to create their own tools using documentation and code realization (see the third sketch after this list).
We evaluate CREATOR on the MATH and TabMWP benchmarks, which consist of challenging math competition problems and tabular math word problems, respectively.
arXiv Detail & Related papers (2023-05-23T17:51:52Z)
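Picking up the forward reference in the COLT entry above: a sketch of tool scoring that mixes semantic similarity with collaborative co-usage. The linear combination, the co-usage matrix, and the single propagation step are illustrative assumptions, not COLT's actual formulation.

```python
# Illustrative sketch of tool retrieval mixing semantic similarity with
# collaborative co-usage information, in the spirit of COLT. The linear
# mix and one-step graph propagation are assumptions for illustration.
import numpy as np


def score_tools(query_emb: np.ndarray,   # (d,) query embedding
                tool_embs: np.ndarray,   # (n, d) tool-description embeddings
                co_use: np.ndarray,      # (n, n) tool co-usage counts
                alpha: float = 0.5) -> np.ndarray:
    # Semantic term: cosine similarity between the query and each tool.
    sem = tool_embs @ query_emb
    sem /= (np.linalg.norm(tool_embs, axis=1)
            * np.linalg.norm(query_emb) + 1e-9)
    # Collaborative term: propagate scores over the co-usage graph, so a
    # tool often used alongside semantically matching tools is boosted
    # even if its own description matches the query poorly.
    transition = co_use / (co_use.sum(axis=1, keepdims=True) + 1e-9)
    collab = transition @ sem
    return alpha * sem + (1 - alpha) * collab  # rank tools by this score
```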
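Likewise, for the multi-LLM agent entry: a sketch of the planner/caller/summarizer decomposition. The role prompts and the `call_llm` router are assumptions; the point is that each role can be updated, or served by a smaller LLM, independently.

```python
# Sketch of a planner / caller / summarizer decomposition. `call_llm` is
# a hypothetical router to whichever model backs each role; in a real
# system the caller's output would be executed against an actual tool.


def call_llm(role: str, prompt: str) -> str:
    """Placeholder: route the prompt to the model backing `role`."""
    raise NotImplementedError


def solve(task: str, max_steps: int = 5) -> str:
    scratchpad: list[str] = []
    for _ in range(max_steps):
        # Planner: decide the next step, or declare the task done.
        step = call_llm("planner", f"Task: {task}\nSo far: {scratchpad}\n"
                                   "Give the next step, or 'DONE'.")
        if step.strip() == "DONE":
            break
        # Caller: turn the step into a concrete tool invocation.
        result = call_llm("caller", f"Emit the exact tool call for: {step}")
        scratchpad.append(f"{step} -> {result}")
    # Summarizer: compose the final answer from the tool results.
    return call_llm("summarizer", f"Task: {task}\nResults: {scratchpad}\n"
                                  "Write the final answer.")
```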
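Finally, for the CREATOR entry: a sketch of LLM-driven tool creation, where the model writes a tool as code, the code is executed to realize the tool, and the tool is then applied to the problem. `call_llm` is a placeholder, and executing model-written code would need sandboxing in practice.

```python
# Sketch of tool creation in the spirit of CREATOR: the model writes a
# small tool as code (abstract reasoning), the code is realized via
# exec, and the tool is applied to the instance (concrete reasoning).
# `call_llm` is a placeholder; sandbox any model-written code.


def call_llm(prompt: str) -> str:
    """Placeholder for any chat-completion backend."""
    raise NotImplementedError


def create_and_use_tool(problem: str) -> str:
    # Ask the model to write a reusable tool for this problem family.
    tool_code = call_llm(
        f"Write a Python function `tool(inputs)` that solves problems "
        f"like: {problem}. Return only code."
    )
    namespace: dict = {}
    exec(tool_code, namespace)          # code realization (sandbox this!)
    # Ask the model how to invoke the tool on this instance, then run it.
    args = call_llm(f"Problem: {problem}\nGive the inputs for `tool`.")
    return str(namespace["tool"](args))
```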