RestGPT: Connecting Large Language Models with Real-World RESTful APIs
- URL: http://arxiv.org/abs/2306.06624v2
- Date: Sun, 27 Aug 2023 02:55:36 GMT
- Title: RestGPT: Connecting Large Language Models with Real-World RESTful APIs
- Authors: Yifan Song, Weimin Xiong, Dawei Zhu, Wenhao Wu, Han Qian, Mingbo Song,
Hailiang Huang, Cheng Li, Ke Wang, Rong Yao, Ye Tian, Sujian Li
- Abstract summary: Tool-augmented large language models (LLMs) have achieved remarkable progress in tackling a broad range of tasks.
To address the practical challenges of tackling complex instructions, we propose RestGPT, which exploits the power of LLMs and conducts a coarse-to-fine online planning mechanism.
To fully evaluate RestGPT, we propose RestBench, a high-quality benchmark which consists of two real-world scenarios and human-annotated instructions.
- Score: 44.94234920380684
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Tool-augmented large language models (LLMs) have achieved remarkable progress
in tackling a broad range of tasks. However, existing methods are mainly
restricted to specifically designed tools and fail to fulfill complex
instructions, having great limitations when confronted with real-world
scenarios. In this paper, we explore a more realistic scenario by connecting
LLMs with RESTful APIs, which adhere to the widely adopted REST software
architectural style for web service development. To address the practical
challenges of tackling complex instructions, we propose RestGPT, which exploits
the power of LLMs and conducts a coarse-to-fine online planning mechanism to
enhance the abilities of task decomposition and API selection. RestGPT also
contains an API executor tailored for calling RESTful APIs, which can
meticulously formulate parameters and parse API responses. To fully evaluate
the performance of RestGPT, we propose RestBench, a high-quality benchmark
which consists of two real-world scenarios and human-annotated instructions
with gold solution paths. Experiments show that RestGPT is able to achieve
impressive results in complex tasks and has strong robustness, which paves a
new way towards AGI. RestGPT and RestBench are publicly available at
https://restgpt.github.io/.
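The abstract describes a coarse-to-fine online planner paired with an API executor that formulates request parameters and parses API responses. The sketch below is a hypothetical illustration of that loop, not RestGPT's actual implementation: the `call_llm` helper, the prompt wording, and the assumption that the API catalog maps endpoint URLs to parameter schemas are all illustrative.

```python
import json

import requests  # assumed available; any HTTP client would do


def call_llm(prompt: str) -> str:
    """Placeholder for a call to an LLM; plug in any provider here."""
    raise NotImplementedError


def restgpt_style_loop(instruction: str, api_catalog: dict, max_steps: int = 5) -> list:
    """Coarse-to-fine sketch: plan a sub-task, select an API, call it, parse the response.

    `api_catalog` is assumed to map endpoint URLs to their parameter schemas
    (e.g., distilled from an OpenAPI specification).
    """
    history = []
    for _ in range(max_steps):
        # Coarse planning: decompose the instruction into the next natural-language sub-task.
        sub_task = call_llm(f"Instruction: {instruction}\nHistory: {history}\nNext sub-task or DONE:")
        if sub_task.strip() == "DONE":
            break
        # API selection: pick an endpoint from the catalog for this sub-task.
        endpoint = call_llm(f"Sub-task: {sub_task}\nEndpoints: {list(api_catalog)}\nChoose one URL:").strip()
        # Executor, step 1: formulate request parameters against the endpoint's schema.
        params = json.loads(call_llm(
            f"Sub-task: {sub_task}\nParameter schema: {api_catalog[endpoint]}\nReturn JSON parameters:"))
        # Executor, step 2: call the RESTful API and parse the (often lengthy) response.
        response = requests.get(endpoint, params=params, timeout=30)
        parsed = call_llm(f"Sub-task: {sub_task}\nRaw response: {response.text[:2000]}\nExtract the result:")
        history.append({"sub_task": sub_task, "endpoint": endpoint, "result": parsed})
    return history
```

In the paper, planning, API selection, and execution are separate LLM-driven modules working over RESTful API documentation; this sketch collapses them into one loop for brevity.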
Related papers
- LRASGen: LLM-based RESTful API Specification Generation [3.420331911153286]
We propose a novel approach for generating OpenAPI Specifications (OASs) for APIs using Large Language Models (LLMs).
Compared with existing tools and methods, LRASGen can generate the OASs, even when the implementation is incomplete (with partial code, annotations/comments, etc.)
LRASGen-generated specifications cover an average of 48.85% more missed entities than the developer-provided specifications.
arXiv Detail & Related papers (2025-04-23T15:52:50Z) - A Framework for Testing and Adapting REST APIs as LLM Tools [5.758488787763118]
We present a novel testing framework aimed at evaluating and enhancing the readiness of REST APIs to function as tools for agents.
Our framework transforms APIs into tools, generates comprehensive test cases for the APIs, translates the test cases into natural language instructions, and evaluates the agent's ability to correctly invoke the API and process its inputs and responses (a minimal tool-wrapping sketch appears after this list).
arXiv Detail & Related papers (2025-04-22T02:52:08Z) - LlamaRestTest: Effective REST API Testing with Small Language Models [50.058600784556816]
We present LlamaRestTest, a novel approach that employs two custom Large Language Models (LLMs) to generate realistic test inputs.
We evaluate it against several state-of-the-art REST API testing tools, including RESTGPT, a GPT-powered specification-enhancement tool.
Our study shows that small language models can perform as well as, or better than, large language models in REST API testing.
arXiv Detail & Related papers (2025-01-15T05:51:20Z) - ExploraCoder: Advancing code generation for multiple unseen APIs via planning and chained exploration [70.26807758443675]
ExploraCoder is a training-free framework that empowers large language models to invoke unseen APIs in code solutions.
We show that ExploraCoder significantly improves performance for models lacking prior API knowledge, achieving an absolute increase of 11.24% over naive RAG approaches and 14.07% over pretraining methods in pass@10.
arXiv Detail & Related papers (2024-12-06T19:00:15Z) - SEAL: Suite for Evaluating API-use of LLMs [1.2528321519119252]
SEAL is an end-to-end testbed designed to evaluate large language models in real-world API usage.
It standardizes existing benchmarks, integrates an agent system for testing API retrieval and planning, and addresses the instability of real-time APIs.
arXiv Detail & Related papers (2024-09-23T20:16:49Z) - FANTAstic SEquences and Where to Find Them: Faithful and Efficient API Call Generation through State-tracked Constrained Decoding and Reranking [57.53742155914176]
API call generation is the cornerstone of large language models' tool-using ability.
Existing supervised and in-context learning approaches suffer from high training costs, poor data efficiency, and generated API calls that can be unfaithful to the API documentation and the user's request.
We propose an output-side optimization approach called FANTASE to address these limitations.
arXiv Detail & Related papers (2024-07-18T23:44:02Z) - ShortcutsBench: A Large-Scale Real-world Benchmark for API-based Agents [7.166156709980112]
We introduce ShortcutsBench, a large-scale benchmark for the comprehensive evaluation of API-based agents.
ShortcutsBench includes a wealth of real APIs from Apple Inc.'s operating systems.
Our evaluation reveals significant limitations in handling complex queries related to API selection, parameter filling, and requesting necessary information from systems and users.
arXiv Detail & Related papers (2024-06-28T08:45:02Z) - You Can REST Now: Automated Specification Inference and Black-Box
Testing of RESTful APIs with Large Language Models [8.753312212588371]
Manually documenting APIs is a time-consuming and error-prone task, resulting in unavailable, incomplete, or imprecise documentation.
Recently, Large Language Models (LLMs) have demonstrated exceptional abilities to automate tasks based on their colossal training data.
We present RESTSpecIT, the first automated API specification inference and black-box testing approach.
arXiv Detail & Related papers (2024-02-07T18:55:41Z) - Leveraging Large Language Models to Improve REST API Testing [51.284096009803406]
RESTGPT takes as input an API specification, extracts machine-interpretable rules, and generates example parameter values from natural-language descriptions in the specification.
Our evaluations indicate that RESTGPT outperforms existing techniques in both rule extraction and value generation.
arXiv Detail & Related papers (2023-12-01T19:53:23Z) - CRAFT: Customizing LLMs by Creating and Retrieving from Specialized
Toolsets [75.64181719386497]
We present CRAFT, a tool creation and retrieval framework for large language models (LLMs)
It creates toolsets specifically curated for the tasks and equips LLMs with a component that retrieves tools from these sets to enhance their capability to solve complex tasks.
Our method is designed to be flexible and offers a plug-and-play approach to adapt off-the-shelf LLMs to unseen domains and modalities, without any finetuning.
arXiv Detail & Related papers (2023-09-29T17:40:26Z) - Adaptive REST API Testing with Reinforcement Learning [54.68542517176757]
Current testing tools lack efficient exploration mechanisms, treating all operations and parameters equally.
Current tools struggle when response schemas are absent in the specification or exhibit variants.
We present an adaptive REST API testing technique that incorporates reinforcement learning to prioritize operations during exploration.
arXiv Detail & Related papers (2023-09-08T20:27:05Z) - ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world
APIs [104.37772295581088]
Open-source large language models (LLMs), e.g., LLaMA, remain significantly limited in tool-use capabilities.
We introduce ToolLLM, a general tool-use framework encompassing data construction, model training, and evaluation.
We first present ToolBench, an instruction-tuning dataset for tool use, which is constructed automatically using ChatGPT.
arXiv Detail & Related papers (2023-07-31T15:56:53Z)
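Several entries above (the REST-APIs-as-LLM-tools testing framework, SEAL, and ToolLLM) center on exposing REST operations as tools that an agent can select and invoke. Below is a minimal, hypothetical sketch of that wrapping step; the function name, the OpenAPI-style fields it reads, and the example endpoint are illustrative assumptions rather than any cited paper's actual interface.

```python
from typing import Any, Dict


def openapi_operation_to_tool(path: str, method: str, operation: Dict[str, Any]) -> Dict[str, Any]:
    """Convert one OpenAPI-style operation into a generic tool/function schema for an agent."""
    properties: Dict[str, Any] = {}
    required = []
    for param in operation.get("parameters", []):
        properties[param["name"]] = {
            "type": param.get("schema", {}).get("type", "string"),
            "description": param.get("description", ""),
        }
        if param.get("required", False):
            required.append(param["name"])
    return {
        "name": operation.get("operationId") or f"{method}_{path}".replace("/", "_").strip("_"),
        "description": operation.get("summary", ""),
        "parameters": {"type": "object", "properties": properties, "required": required},
        "metadata": {"path": path, "method": method.upper()},  # kept so the agent runtime can route the call
    }


# Usage with a made-up endpoint description:
tool = openapi_operation_to_tool(
    "/movies/search", "get",
    {"operationId": "searchMovies", "summary": "Search movies by title",
     "parameters": [{"name": "query", "required": True, "schema": {"type": "string"},
                     "description": "Title keywords"}]},
)
```

An agent framework would then hand the resulting schema to the model for function calling and route the chosen tool name and arguments back to the recorded path and method.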