T-Eval: Evaluating the Tool Utilization Capability of Large Language
Models Step by Step
- URL: http://arxiv.org/abs/2312.14033v3
- Date: Mon, 15 Jan 2024 03:18:25 GMT
- Title: T-Eval: Evaluating the Tool Utilization Capability of Large Language
Models Step by Step
- Authors: Zehui Chen, Weihua Du, Wenwei Zhang, Kuikun Liu, Jiangning Liu, Miao
Zheng, Jingming Zhuo, Songyang Zhang, Dahua Lin, Kai Chen, Feng Zhao
- Abstract summary: Large language models (LLMs) have achieved remarkable performance on various NLP tasks.
How to evaluate and analyze the tool-utilization capability of LLMs is still under-explored.
We introduce T-Eval to evaluate the tool utilization capability step by step.
- Score: 69.64348626180623
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Large language models (LLMs) have achieved remarkable performance on various
NLP tasks and are augmented by tools for broader applications. Yet, how to
evaluate and analyze the tool-utilization capability of LLMs is still
under-explored. In contrast to previous works that evaluate models
holistically, we comprehensively decompose tool utilization into multiple
sub-processes, including instruction following, planning, reasoning, retrieval,
understanding, and review. Based on that, we further introduce T-Eval to
evaluate the tool utilization capability step by step. T-Eval disentangles the
tool-utilization evaluation into several sub-domains along model capabilities,
facilitating a deeper understanding of both the holistic and isolated
competencies of LLMs. We conduct extensive experiments on T-Eval and an
in-depth analysis of
various LLMs. T-Eval not only exhibits consistency with outcome-oriented
evaluation but also provides a more fine-grained analysis of LLM capabilities,
offering a new perspective on evaluating the tool-utilization ability of LLMs.
The benchmark will be available at
https://github.com/open-compass/T-Eval.
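To make the step-by-step idea in the abstract concrete, the sketch below shows one way a decomposed evaluation could be aggregated: each sub-process named above (instruction following, planning, reasoning, retrieval, understanding, review) is scored on its own samples, and the per-capability scores are then combined into an overall figure. The function name, data format, and uniform averaging are illustrative assumptions only and are not taken from the T-Eval codebase.

```python
# Minimal, hypothetical sketch of step-by-step tool-utilization evaluation:
# score each sub-process separately, then aggregate. The sub-process names
# come from the abstract; the scoring interface and uniform averaging are
# assumptions for illustration, not T-Eval's actual implementation.
from statistics import mean

SUBPROCESSES = ["instruct", "plan", "reason", "retrieve", "understand", "review"]


def evaluate_step_by_step(per_sample_scores: dict[str, list[float]]) -> dict[str, float]:
    """Average per-sample scores within each sub-process, then overall.

    per_sample_scores maps each sub-process name to a list of per-sample
    scores in [0, 1], however those scores are produced for that sub-process.
    """
    per_capability = {
        name: mean(per_sample_scores.get(name, [0.0])) for name in SUBPROCESSES
    }
    per_capability["overall"] = mean(per_capability[name] for name in SUBPROCESSES)
    return per_capability


if __name__ == "__main__":
    # Toy usage with made-up numbers, just to show the aggregation shape.
    toy_scores = {name: [0.8, 0.6, 0.9] for name in SUBPROCESSES}
    print(evaluate_step_by_step(toy_scores))
```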
Related papers
- ELF-Gym: Evaluating Large Language Models Generated Features for Tabular Prediction [33.03433653251314]
We propose ELF-Gym, a framework for evaluating features generated by Large Language Models (LLMs) for tabular prediction.
We curated a new dataset from historical Kaggle competitions, including 251 "golden" features used by top-performing teams.
We empirically demonstrate that, in the best-case scenario, LLMs can semantically capture approximately 56% of the golden features, but at the more demanding implementation level this overlap drops to 13%.
arXiv Detail & Related papers (2024-10-13T13:59:33Z)
- From Exploration to Mastery: Enabling LLMs to Master Tools via Self-Driven Interactions [60.733557487886635]
This paper focuses on bridging the comprehension gap between Large Language Models and external tools.
We propose a novel framework, DRAFT, aimed at dynamically refining tool documentation.
Extensive experiments on multiple datasets demonstrate that DRAFT's iterative, feedback-based refinement significantly ameliorates documentation quality.
arXiv Detail & Related papers (2024-10-10T17:58:44Z)
- What Affects the Stability of Tool Learning? An Empirical Study on the Robustness of Tool Learning Frameworks [33.51887014808227]
This paper explores the impact of both internal and external factors on the performance of tool learning frameworks.
We find several insightful conclusions for future work, including the observation that LLMs can benefit significantly from increased trial and exploration.
arXiv Detail & Related papers (2024-07-03T11:06:05Z)
- Look Before You Leap: Towards Decision-Aware and Generalizable Tool-Usage for Large Language Models [26.28459880766842]
We propose a decision-aware and generalizable tool-usage framework (DEER).
Specifically, we first construct the tool-usage samples with multiple decision branches via an automatic generation pipeline.
Our proposed DEER is effective and significantly outperforms baselines across various datasets.
arXiv Detail & Related papers (2024-02-26T16:11:03Z)
- Planning, Creation, Usage: Benchmarking LLMs for Comprehensive Tool Utilization in Real-World Complex Scenarios [93.68764280953624]
UltraTool is a novel benchmark designed to improve and evaluate Large Language Models' ability in tool utilization.
It emphasizes real-world complexities, demanding accurate, multi-step planning for effective problem-solving.
A key feature of UltraTool is its independent evaluation of planning with natural language, which happens before tool usage.
arXiv Detail & Related papers (2024-01-30T16:52:56Z)
- Can Large Language Models be Trusted for Evaluation? Scalable Meta-Evaluation of LLMs as Evaluators via Agent Debate [74.06294042304415]
We propose ScaleEval, an agent-debate-assisted meta-evaluation framework.
We release the code for our framework, which is publicly available on GitHub.
arXiv Detail & Related papers (2024-01-30T07:03:32Z)
- BLESS: Benchmarking Large Language Models on Sentence Simplification [55.461555829492866]
We present BLESS, a performance benchmark of the most recent state-of-the-art large language models (LLMs) on the task of text simplification (TS).
We assess a total of 44 models, differing in size, architecture, pre-training methods, and accessibility, on three test sets from different domains (Wikipedia, news, and medical) under a few-shot setting.
Our evaluation indicates that the best LLMs, despite not being trained on TS, perform comparably with state-of-the-art TS baselines.
arXiv Detail & Related papers (2023-10-24T12:18:17Z)
- Evaluating Large Language Models at Evaluating Instruction Following [54.49567482594617]
We introduce a challenging meta-evaluation benchmark, LLMBar, designed to test the ability of an LLM evaluator in discerning instruction-following outputs.
We discover that different evaluators exhibit distinct performance on LLMBar and even the highest-scoring ones have substantial room for improvement.
arXiv Detail & Related papers (2023-10-11T16:38:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences arising from its use.