ControlLLM: Augment Language Models with Tools by Searching on Graphs
- URL: http://arxiv.org/abs/2310.17796v3
- Date: Mon, 18 Dec 2023 15:09:20 GMT
- Title: ControlLLM: Augment Language Models with Tools by Searching on Graphs
- Authors: Zhaoyang Liu, Zeqiang Lai, Zhangwei Gao, Erfei Cui, Ziheng Li, Xizhou
Zhu, Lewei Lu, Qifeng Chen, Yu Qiao, Jifeng Dai, Wenhai Wang
- Abstract summary: We present ControlLLM, a novel framework that enables large language models (LLMs) to utilize multi-modal tools for solving real-world tasks.
Our framework comprises three key components: (1) a \textit{task decomposer} that breaks down a complex task into clear subtasks with well-defined inputs and outputs; (2) a \textit{Thoughts-on-Graph (ToG)} paradigm that searches the optimal solution path on a pre-built tool graph; and (3) an \textit{execution engine with a rich toolbox} that interprets the solution path and runs the tools efficiently on different computational devices.
- Score: 97.62758830255002
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present ControlLLM, a novel framework that enables large language models
(LLMs) to utilize multi-modal tools for solving complex real-world tasks.
Despite the remarkable performance of LLMs, they still struggle with tool
invocation due to ambiguous user prompts, inaccurate tool selection and
parameterization, and inefficient tool scheduling. To overcome these
challenges, our framework comprises three key components: (1) a \textit{task
decomposer} that breaks down a complex task into clear subtasks with
well-defined inputs and outputs; (2) a \textit{Thoughts-on-Graph (ToG)
paradigm} that searches the optimal solution path on a pre-built tool graph,
which specifies the parameter and dependency relations among different tools;
and (3) an \textit{execution engine with a rich toolbox} that interprets the
solution path and runs the tools efficiently on different computational
devices. We evaluate our framework on diverse tasks involving image, audio, and
video processing, demonstrating its superior accuracy, efficiency, and
versatility compared to existing methods. The code is at
https://github.com/OpenGVLab/ControlLLM.
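To make the ToG idea concrete, here is a minimal sketch of a depth-first search over a toy tool graph, where tools are nodes and an edge exists whenever one tool's output type satisfies another tool's input type. The `Tool` dataclass, the toy toolbox, and `search_paths` are illustrative assumptions, not the authors' implementation (see the repository above for that).

```python
# A minimal sketch of a ToG-style search, assuming tools are described only by
# the resource types they consume and produce. All names here are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class Tool:
    name: str
    inputs: tuple[str, ...]   # resource types consumed, e.g. ("image",)
    outputs: tuple[str, ...]  # resource types produced, e.g. ("text",)

# Toy toolbox: the tool graph is implicit in these input/output types.
TOOLS = [
    Tool("image_captioning", ("image",), ("text",)),
    Tool("object_detection", ("image",), ("bbox",)),
    Tool("image_cropping", ("image", "bbox"), ("image",)),
    Tool("text_to_speech", ("text",), ("audio",)),
]

def search_paths(available, goal, path=(), depth=0, max_depth=4):
    """Depth-first search: expand any tool whose inputs are satisfiable with
    the resources produced so far; stop once the goal type is available."""
    if goal in available:
        yield path
        return
    if depth == max_depth:
        return
    for tool in TOOLS:
        if tool in path:  # do not reuse a tool on the same path
            continue
        if all(t in available for t in tool.inputs):
            yield from search_paths(available | set(tool.outputs), goal,
                                    path + (tool,), depth + 1, max_depth)

# Example subtask: describe an image aloud (image -> audio).
paths = sorted(search_paths({"image"}, "audio"), key=len)
print(" -> ".join(t.name for t in paths[0]))
# image_captioning -> text_to_speech
```

Sorting candidate paths by length is only a crude stand-in for ToG's notion of an optimal path, which is searched over a graph that encodes the parameter and dependency relations among tools.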
Related papers
- Tool-Planner: Task Planning with Clusters across Multiple Tools [29.278169900986434]
We propose Tool-Planner, a task-processing framework based on toolkits.
Tool-Planner groups tools that share the same API function into a toolkit and lets LLMs plan across the various toolkits.
arXiv Detail & Related papers (2024-06-06T07:30:14Z)
- Chain of Tools: Large Language Model is an Automatic Multi-tool Learner [54.992464510992605]
Automatic Tool Chain (ATC) is a framework that enables a large language model (LLM) to act as a multi-tool user.
To scale up the scope of the tools, we next propose a black-box probing method.
For a comprehensive evaluation, we build a challenging benchmark named ToolFlow.
arXiv Detail & Related papers (2024-05-26T11:40:58Z)
- Towards Completeness-Oriented Tool Retrieval for Large Language Models [60.733557487886635]
Real-world systems often incorporate a wide array of tools, making it impractical to input all tools into Large Language Models.
Existing tool retrieval methods primarily focus on semantic matching between user queries and tool descriptions.
We propose a novel model-agnostic COllaborative Learning-based Tool Retrieval approach, COLT, which captures not only the semantic similarities between user queries and tool descriptions but also the collaborative information among tools.
arXiv Detail & Related papers (2024-05-25T06:41:23Z)
- ToolNet: Connecting Large Language Models with Massive Tools via Tool Graph [43.95759808077083]
Existing in-context learning approaches simply format tools into a list of plain text descriptions and input them to large language models.
This paper proposes ToolNet, a plug-and-play framework that scales up the number of tools to thousands with a moderate increase in token consumption.
arXiv Detail & Related papers (2024-02-29T02:04:00Z)
- CRAFT: Customizing LLMs by Creating and Retrieving from Specialized Toolsets [75.64181719386497]
We present CRAFT, a tool creation and retrieval framework for large language models (LLMs).
It creates toolsets specifically curated for the tasks and equips LLMs with a component that retrieves tools from these sets to enhance their capability to solve complex tasks.
Our method is designed to be flexible and offers a plug-and-play approach to adapt off-the-shelf LLMs to unseen domains and modalities, without any fine-tuning.
arXiv Detail & Related papers (2023-09-29T17:40:26Z)
- Large Language Models as Tool Makers [85.00361145117293]
We introduce a closed-loop framework, referred to as LLMs As Tool Makers (LATM), where LLMs create their own reusable tools for problem-solving.
Our approach consists of two phases: (1) tool making, where an LLM acts as the tool maker and crafts tools for a set of tasks; (2) tool using, where another LLM acts as the tool user and applies the tools built by the tool maker to solve problems.
arXiv Detail & Related papers (2023-05-26T17:50:11Z)
- ToolkenGPT: Augmenting Frozen Language Models with Massive Tools via Tool Embeddings [25.5476046472217]
Augmenting large language models with external tools has emerged as a promising approach to solving complex problems, but fine-tuning LLMs on tool demonstrations is costly and ties them to a fixed toolset.
The recent in-context learning paradigm alleviates these issues, but the limited context length only allows for a few shots of demonstrations.
We propose an alternative approach, ToolkenGPT, which combines the benefits of both sides by representing each tool as a token ("toolken") with a learned embedding (a minimal sketch of this idea follows the list below).
arXiv Detail & Related papers (2023-05-19T09:54:21Z)
- ART: Automatic multi-step reasoning and tool-use for large language
models [105.57550426609396]
Large language models (LLMs) can perform complex reasoning in few- and zero-shot settings.
Each reasoning step can rely on external tools to support computation beyond the core LLM capabilities.
We introduce Automatic Reasoning and Tool-use (ART), a framework that uses frozen LLMs to automatically generate intermediate reasoning steps as a program.
arXiv Detail & Related papers (2023-03-16T01:04:45Z)
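As referenced in the ToolkenGPT entry above, the toolken idea can be sketched in a few lines: each tool gets a learned embedding that is scored like an extra vocabulary row on top of a frozen language model head, so tool calls compete with ordinary words at next-token prediction. `ToolkenHead` and all dimensions below are illustrative assumptions, not the paper's code.

```python
# Minimal sketch of tool-as-token ("toolken") prediction over a frozen LM head.
# Only the tool embeddings are trainable; everything else stays frozen.
import torch
import torch.nn as nn

class ToolkenHead(nn.Module):
    def __init__(self, lm_head: nn.Linear, num_tools: int):
        super().__init__()
        self.lm_head = lm_head                      # frozen word-token head
        for p in self.lm_head.parameters():
            p.requires_grad = False
        # One learnable embedding per tool, scored like an extra vocab row.
        self.tool_emb = nn.Parameter(
            torch.randn(num_tools, lm_head.in_features) * 0.02)

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        word_logits = self.lm_head(hidden)          # (..., vocab_size)
        tool_logits = hidden @ self.tool_emb.T      # (..., num_tools)
        # Tools compete with words in one softmax; an argmax >= vocab_size
        # means the model chose to call a tool.
        return torch.cat([word_logits, tool_logits], dim=-1)

# Hypothetical sizes: hidden dim 768, vocab 32000, 8 tools.
head = ToolkenHead(nn.Linear(768, 32000, bias=False), num_tools=8)
logits = head(torch.randn(1, 5, 768))  # shape: (1, 5, 32008)
```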