TaskBench: Benchmarking Large Language Models for Task Automation
- URL: http://arxiv.org/abs/2311.18760v2
- Date: Sat, 9 Dec 2023 16:54:20 GMT
- Title: TaskBench: Benchmarking Large Language Models for Task Automation
- Authors: Yongliang Shen, Kaitao Song, Xu Tan, Wenqi Zhang, Kan Ren, Siyu Yuan,
Weiming Lu, Dongsheng Li, Yueting Zhuang
- Abstract summary: We introduce TaskBench to evaluate the capability of large language models in task automation.
To generate high-quality evaluation datasets, we introduce the concept of Tool Graph.
We also propose TaskEval to evaluate the capability of LLMs from different aspects, including task decomposition, tool invocation, and parameter prediction.
- Score: 85.3879908356586
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, the remarkable progress of large language models (LLMs) has ignited
interest in task automation, which decomposes complex tasks described by
user instructions into sub-tasks and invokes external tools to execute them,
playing a central role in autonomous agents. However, there is no
systematic and standardized benchmark to foster the development of LLMs in task
automation. To this end, we introduce TaskBench to evaluate the capability of
LLMs in task automation. Specifically, task automation can be formulated into
three critical stages: task decomposition, tool invocation, and parameter
prediction to fulfill user intent. This complexity makes data collection and
evaluation more challenging compared to common NLP tasks. To generate
high-quality evaluation datasets, we introduce the concept of Tool Graph to
represent the decomposed tasks in user intent, and adopt a back-instruct method
to simulate user instruction and annotations. Furthermore, we propose TaskEval
to evaluate the capability of LLMs from different aspects, including task
decomposition, tool invocation, and parameter prediction. Experimental results
demonstrate that TaskBench effectively reflects the capability of LLMs in
task automation. Benefiting from the mixture of automated data construction and
human verification, TaskBench achieves high consistency with human
evaluation and can be utilized as a comprehensive and faithful benchmark for
LLM-based autonomous agents.
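The Tool Graph and back-instruct idea described in the abstract can be sketched as follows. This is a minimal illustrative assumption, not TaskBench's actual data or API: the tool names, edges, and sampling routine are hypothetical, and the real benchmark pairs each sampled subgraph with an LLM-written instruction plus human verification.

```python
import random

# Hypothetical Tool Graph: nodes are tools, edges mark which tool can
# consume the output of which (adjacency-list representation).
TOOL_GRAPH = {
    "image_loader": ["object_detector", "image_captioner"],
    "object_detector": ["image_cropper"],
    "image_captioner": ["text_summarizer"],
    "image_cropper": [],
    "text_summarizer": [],
}

def sample_tool_chain(graph, start, max_tools=3, seed=None):
    """Sample a connected chain of tools from the graph.

    In the back-instruct method, a sampled subgraph like this is handed
    to an LLM, which writes a user instruction that the chain would
    fulfill; the subgraph itself then serves as the ground-truth
    annotation for task decomposition and tool invocation.
    """
    rng = random.Random(seed)
    chain = [start]
    node = start
    while len(chain) < max_tools and graph[node]:
        node = rng.choice(graph[node])
        chain.append(node)
    return chain

if __name__ == "__main__":
    print(sample_tool_chain(TOOL_GRAPH, "image_loader", seed=0))
```

Evaluation (TaskEval) then compares an LLM's predicted chain and parameters against such ground-truth subgraphs at each of the three stages: decomposition, tool invocation, and parameter prediction.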
Related papers
- SELF-GUIDE: Better Task-Specific Instruction Following via Self-Synthetic Finetuning [70.21358720599821]
Large language models (LLMs) hold the promise of solving diverse tasks when provided with appropriate natural language prompts.
We propose SELF-GUIDE, a multi-stage mechanism in which we synthesize task-specific input-output pairs from the student LLM.
We report an absolute improvement of approximately 15% for classification tasks and 18% for generation tasks in the benchmark's metrics.
arXiv Detail & Related papers (2024-07-16T04:41:58Z)
- WorkArena++: Towards Compositional Planning and Reasoning-based Common Knowledge Work Tasks [85.95607119635102]
Large language models (LLMs) can mimic human-like intelligence.
WorkArena++ is designed to evaluate the planning, problem-solving, logical/arithmetic reasoning, retrieval, and contextual understanding abilities of web agents.
arXiv Detail & Related papers (2024-07-07T07:15:49Z)
- AutoMMLab: Automatically Generating Deployable Models from Language Instructions for Computer Vision Tasks [39.71649832548044]
AutoMMLab is a general-purpose LLM-empowered AutoML system that follows users' language instructions.
The proposed AutoMMLab system effectively employs LLMs as the bridge connecting AutoML and the OpenMMLab community.
Experiments show that our AutoMMLab system is versatile and covers a wide range of mainstream tasks.
arXiv Detail & Related papers (2024-02-23T14:38:19Z)
- The Foundations of Computational Management: A Systematic Approach to Task Automation for the Integration of Artificial Intelligence into Existing Workflows [55.2480439325792]
This article introduces Computational Management, a systematic approach to task automation.
The article offers three easy step-by-step procedures to begin the process of implementing AI within a workflow.
arXiv Detail & Related papers (2024-02-07T01:45:14Z)
- Small LLMs Are Weak Tool Learners: A Multi-LLM Agent [73.54562551341454]
Large Language Model (LLM) agents significantly extend the capabilities of standalone LLMs.
We propose a novel approach that decomposes the aforementioned capabilities into a planner, caller, and summarizer.
This modular framework facilitates individual updates and the potential use of smaller LLMs for building each capability.
arXiv Detail & Related papers (2024-01-14T16:17:07Z)
- Interactive Planning Using Large Language Models for Partially Observable Robotics Tasks [54.60571399091711]
Large Language Models (LLMs) have achieved impressive results in creating robotic agents for performing open vocabulary tasks.
We present an interactive planning technique for partially observable tasks using LLMs.
arXiv Detail & Related papers (2023-12-11T22:54:44Z)
- ChatGPT as your Personal Data Scientist [0.9689893038619583]
This paper introduces a ChatGPT-based conversational data-science framework that acts as a "personal data scientist".
Our model pivots around four dialogue states: Data Visualization, Task Formulation, Prediction Engineering, and Result Summary and Recommendation.
In summary, we developed an end-to-end system that not only proves the viability of the novel concept of conversational data science but also underscores the potency of LLMs in solving complex tasks.
arXiv Detail & Related papers (2023-05-23T04:00:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.