TaskBench: Benchmarking Large Language Models for Task Automation
- URL: http://arxiv.org/abs/2311.18760v2
- Date: Sat, 9 Dec 2023 16:54:20 GMT
- Title: TaskBench: Benchmarking Large Language Models for Task Automation
- Authors: Yongliang Shen, Kaitao Song, Xu Tan, Wenqi Zhang, Kan Ren, Siyu Yuan,
Weiming Lu, Dongsheng Li, Yueting Zhuang
- Abstract summary: We introduce TaskBench to evaluate the capability of large language models in task automation.
To generate high-quality evaluation datasets, we introduce the concept of Tool Graph.
We also propose TaskEval to evaluate the capability of LLMs from different aspects, including task decomposition, tool invocation, and parameter prediction.
- Score: 85.3879908356586
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, the incredible progress of large language models (LLMs) has ignited
the spark of task automation, which decomposes the complex tasks described by
user instructions into sub-tasks, and invokes external tools to execute them,
and plays a central role in autonomous agents. However, a systematic and
standardized benchmark to foster the development of LLMs in task automation
has been lacking. To this end, we introduce TaskBench to evaluate the capability of
LLMs in task automation. Specifically, task automation can be formulated into
three critical stages: task decomposition, tool invocation, and parameter
prediction to fulfill user intent. This complexity makes data collection and
evaluation more challenging compared to common NLP tasks. To generate
high-quality evaluation datasets, we introduce the concept of Tool Graph to
represent the decomposed tasks in user intent, and adopt a back-instruct method
to simulate user instruction and annotations. Furthermore, we propose TaskEval
to evaluate the capability of LLMs from different aspects, including task
decomposition, tool invocation, and parameter prediction. Experimental results
demonstrate that TaskBench effectively reflects the capability of LLMs in
task automation. Benefiting from the mixture of automated data construction and
human verification, TaskBench achieves high consistency with human
evaluation and can serve as a comprehensive and faithful benchmark for
LLM-based autonomous agents.
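The abstract's three-stage formulation can be made concrete with a small sketch: represent a gold and a predicted "tool graph" (nodes are invoked tools, edges are dependencies between sub-tasks) and score tool invocation with set-level F1, in the spirit of TaskEval. All names and the metric choice here are illustrative assumptions, not the benchmark's actual API.

```python
# Hypothetical sketch of TaskBench-style evaluation: compare a predicted
# tool graph against a gold one using node F1 (which tools were invoked)
# and edge F1 (how they were chained). Tool names are made up.

def f1(pred: set, gold: set) -> float:
    """Set-level F1 between predicted and gold items."""
    if not pred or not gold:
        return 0.0
    tp = len(pred & gold)                      # true positives
    if tp == 0:
        return 0.0
    precision = tp / len(pred)
    recall = tp / len(gold)
    return 2 * precision * recall / (precision + recall)

# Gold decomposition: detect objects in an image, then caption the result.
gold_nodes = {"object_detection", "image_captioning"}
gold_edges = {("object_detection", "image_captioning")}

# Model prediction: correct tools, but one spurious reverse dependency.
pred_nodes = {"object_detection", "image_captioning"}
pred_edges = {("object_detection", "image_captioning"),
              ("image_captioning", "object_detection")}

print(f"node F1: {f1(pred_nodes, gold_nodes):.2f}")  # 1.00
print(f"edge F1: {f1(pred_edges, gold_edges):.2f}")  # 0.67
```

Parameter prediction could be scored the same way by treating each (tool, argument, value) triple as an item in the sets being compared.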
Related papers
- DrafterBench: Benchmarking Large Language Models for Tasks Automation in Civil Engineering [7.264718073839472]
Large Language Model (LLM) agents have shown great potential for solving real-world problems and promise to be a solution for task automation in industry. We propose DrafterBench for the comprehensive evaluation of LLM agents in the context of technical drawing revision. DrafterBench is an open-source benchmark to rigorously test AI agents' proficiency in interpreting intricate and long-context instructions.
arXiv Detail & Related papers (2025-07-15T17:56:04Z) - VerifyLLM: LLM-Based Pre-Execution Task Plan Verification for Robots [44.99833362998488]
We propose an architecture for automatically verifying high-level task plans before their execution in simulated or real-world environments. The module uses the reasoning capabilities of Large Language Models to evaluate logical coherence and identify potential gaps in the plan. We contribute to improving the reliability and efficiency of task planning and address the critical need for robust pre-execution verification in autonomous systems.
arXiv Detail & Related papers (2025-07-07T15:31:36Z) - Scaling Autonomous Agents via Automatic Reward Modeling And Planning [52.39395405893965]
Large language models (LLMs) have demonstrated remarkable capabilities across a range of tasks.
However, they still struggle with problems requiring multi-step decision-making and environmental feedback.
We propose a framework that can automatically learn a reward model from the environment without human annotations.
arXiv Detail & Related papers (2025-02-17T18:49:25Z) - A Comparison of Prompt Engineering Techniques for Task Planning and Execution in Service Robotics [16.064583670720587]
We compare prompt engineering techniques and combinations thereof within the application of high-level task planning and execution in service robotics.
We define a diverse set of tasks and a simple set of functionalities in simulation, and measure task completion accuracy and execution time for several state-of-the-art models.
arXiv Detail & Related papers (2024-10-30T13:22:55Z) - AutoML-Agent: A Multi-Agent LLM Framework for Full-Pipeline AutoML [56.565200973244146]
Automated machine learning (AutoML) accelerates AI development by automating tasks in the development pipeline.
Recent works have started exploiting large language models (LLMs) to lessen this burden.
This paper proposes AutoML-Agent, a novel multi-agent framework tailored for full-pipeline AutoML.
arXiv Detail & Related papers (2024-10-03T20:01:09Z) - Incorporating Large Language Models into Production Systems for Enhanced Task Automation and Flexibility [2.3999111269325266]
This paper introduces a novel approach to integrating large language model (LLM) agents into automated production systems.
We organize production operations within a hierarchical framework based on the automation pyramid.
This allows for a scalable and flexible foundation for orchestrating production processes.
arXiv Detail & Related papers (2024-07-11T14:34:43Z) - WorkArena++: Towards Compositional Planning and Reasoning-based Common Knowledge Work Tasks [85.95607119635102]
Large language models (LLMs) can mimic human-like intelligence.
WorkArena++ is designed to evaluate the planning, problem-solving, logical/arithmetic reasoning, retrieval, and contextual understanding abilities of web agents.
arXiv Detail & Related papers (2024-07-07T07:15:49Z) - Tool Learning in the Wild: Empowering Language Models as Automatic Tool Agents [56.822238860147024]
Augmenting large language models with external tools has emerged as a promising approach to extend their utility.
Previous methods manually parse tool documentation and create in-context demonstrations, transforming tools into structured formats for LLMs to use in their step-by-step reasoning.
We propose AutoTools, a framework that enables LLMs to automate the tool-use workflow.
arXiv Detail & Related papers (2024-05-26T11:40:58Z) - The Foundations of Computational Management: A Systematic Approach to
Task Automation for the Integration of Artificial Intelligence into Existing
Workflows [55.2480439325792]
This article introduces Computational Management, a systematic approach to task automation.
The article offers three easy step-by-step procedures to begin the process of implementing AI within a workflow.
arXiv Detail & Related papers (2024-02-07T01:45:14Z) - Small LLMs Are Weak Tool Learners: A Multi-LLM Agent [73.54562551341454]
Large Language Model (LLM) agents significantly extend the capabilities of standalone LLMs.
We propose a novel approach that decomposes the aforementioned capabilities into a planner, caller, and summarizer.
This modular framework facilitates individual updates and the potential use of smaller LLMs for building each capability.
arXiv Detail & Related papers (2024-01-14T16:17:07Z) - Interactive Planning Using Large Language Models for Partially
Observable Robotics Tasks [54.60571399091711]
Large Language Models (LLMs) have achieved impressive results in creating robotic agents for performing open vocabulary tasks.
We present an interactive planning technique for partially observable tasks using LLMs.
arXiv Detail & Related papers (2023-12-11T22:54:44Z) - TPTU: Large Language Model-based AI Agents for Task Planning and Tool
Usage [28.554981886052953]
Large Language Models (LLMs) have emerged as powerful tools for various real-world applications.
Despite their prowess, intrinsic generative abilities of LLMs may prove insufficient for handling complex tasks.
This paper proposes a structured framework tailored for LLM-based AI Agents.
arXiv Detail & Related papers (2023-08-07T09:22:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.