Leveraging Large Language Models to Improve REST API Testing
- URL: http://arxiv.org/abs/2312.00894v2
- Date: Tue, 30 Jan 2024 03:43:55 GMT
- Title: Leveraging Large Language Models to Improve REST API Testing
- Authors: Myeongsoo Kim, Tyler Stennett, Dhruv Shah, Saurabh Sinha, Alessandro
Orso
- Abstract summary: RESTGPT takes as input an API specification, extracts machine-interpretable rules, and generates example parameter values from natural-language descriptions in the specification.
Our evaluations indicate that RESTGPT outperforms existing techniques in both rule extraction and value generation.
- Score: 51.284096009803406
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The widespread adoption of REST APIs, coupled with their growing complexity
and size, has led to the need for automated REST API testing tools. Current
tools focus on the structured data in REST API specifications but often neglect
valuable insights available in unstructured natural-language descriptions in
the specifications, which leads to suboptimal test coverage. Recently, to
address this gap, researchers have developed techniques that extract rules from
these human-readable descriptions and query knowledge bases to derive
meaningful input values. However, these techniques are limited in the types of
rules they can extract and prone to produce inaccurate results. This paper
presents RESTGPT, an innovative approach that leverages the power and intrinsic
context-awareness of Large Language Models (LLMs) to improve REST API testing.
RESTGPT takes as input an API specification, extracts machine-interpretable
rules, and generates example parameter values from natural-language
descriptions in the specification. It then augments the original specification
with these rules and values. Our evaluations indicate that RESTGPT outperforms
existing techniques in both rule extraction and value generation. Given these
promising results, we outline future research directions for advancing REST API
testing through LLMs.
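As a rough illustration of the augmentation step described above, the sketch below asks an LLM to turn a parameter's natural-language description into machine-interpretable constraints plus an example value, and merges the result back into the OpenAPI parameter object. This is a minimal sketch, not RESTGPT's implementation: the prompt wording, the `gpt-4o-mini` model name, and the JSON shape of the extracted rules are all assumptions.

```python
import json
from openai import OpenAI  # assumed client; any chat-completion API would do

client = OpenAI()

PROMPT = (
    "Given this REST API parameter description, return JSON with keys "
    "'constraints' (machine-interpretable rules such as min/max, format, "
    "allowed values) and 'example' (one valid example value).\n"
    "Description: {description}"
)

def augment_parameter(param: dict) -> dict:
    """Augment one OpenAPI parameter object with extracted rules and an example."""
    description = param.get("description", "")
    if not description:
        return param
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": PROMPT.format(description=description)}],
    )
    extracted = json.loads(reply.choices[0].message.content)
    # Merge the LLM output into the parameter's schema, keeping original fields.
    param.setdefault("schema", {}).update(extracted.get("constraints", {}))
    param["example"] = extracted.get("example")
    return param

# Example: a parameter whose rules live only in prose.
param = {
    "name": "page_size",
    "in": "query",
    "description": "Number of results per page; must be between 1 and 100.",
    "schema": {"type": "integer"},
}
print(json.dumps(augment_parameter(param), indent=2))
```

Because the output of this step is still an ordinary specification, the augmented document can then be handed unchanged to existing specification-driven testing tools.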
Related papers
- A Multi-Agent Approach for REST API Testing with Semantic Graphs and LLM-Driven Inputs [46.65963514391019]
We present AutoRestTest, the first black-box framework to adopt a dependency-embedded multi-agent approach for REST API testing.
We integrate Multi-Agent Reinforcement Learning (MARL) with a Semantic Property Dependency Graph (SPDG) and Large Language Models (LLMs)
Our approach treats REST API testing as a separable problem, where four agents -- API, dependency, parameter, and value -- collaborate to optimize API exploration.
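A semantic property dependency graph can be pictured as weighted edges from fields a producer operation returns to parameters a consumer operation accepts. A minimal sketch under assumed operation and field names, with token overlap standing in for the semantic similarity an embedding model would provide:

```python
from itertools import product

def similarity(a: str, b: str) -> float:
    """Crude stand-in for embedding similarity: token overlap on snake_case names."""
    ta, tb = set(a.lower().split("_")), set(b.lower().split("_"))
    return len(ta & tb) / len(ta | tb)

# Hypothetical operations: fields each response produces / each request consumes.
produces = {"POST /users": ["user_id", "created_at"]}
consumes = {"GET /users/{id}": ["user_id"], "POST /orders": ["user_id", "item_id"]}

# Edges of the dependency graph: (producer, field) -> (consumer, param), weighted.
spdg = [
    (src, f_out, dst, f_in, similarity(f_out, f_in))
    for (src, outs), (dst, ins) in product(produces.items(), consumes.items())
    for f_out in outs
    for f_in in ins
    if similarity(f_out, f_in) > 0.5  # keep only plausible matches
]
for edge in spdg:
    print(edge)
```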
arXiv Detail & Related papers (2024-11-11T16:20:27Z)
- KAT: Dependency-aware Automated API Testing with Large Language Models [1.7264233311359707]
KAT (Katalon API Testing) is a novel AI-driven approach that autonomously generates test cases to validate APIs.
Our evaluation of KAT using 12 real-world services shows that it can improve validation coverage, detect more undocumented status codes, and reduce false positives in these services.
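One reading of "dependency-aware" is that producer operations must be exercised before their consumers. A hedged sketch of that idea (not KAT's actual algorithm): infer an edge whenever an operation requires a field another operation produces, then test in topological order.

```python
from graphlib import TopologicalSorter

# Hypothetical spec excerpt: fields each operation produces / requires.
ops = {
    "POST /users":     {"produces": {"user_id"},  "requires": set()},
    "GET /users/{id}": {"produces": set(),        "requires": {"user_id"}},
    "POST /orders":    {"produces": {"order_id"}, "requires": {"user_id"}},
}

# op -> set of operations it depends on (those producing a field it requires)
deps = {
    op: {other for other, o in ops.items()
         if other != op and o["produces"] & meta["requires"]}
    for op, meta in ops.items()
}

# Execute tests so that every producer runs before its consumers.
for op in TopologicalSorter(deps).static_order():
    print("test", op)
```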
arXiv Detail & Related papers (2024-07-14T14:48:18Z)
- REST: Retrieval-Based Speculative Decoding [69.06115086237207]
We introduce Retrieval-Based Speculative Decoding (REST), a novel algorithm designed to speed up language model generation.
Unlike previous methods that rely on a draft language model for speculative decoding, REST harnesses the power of retrieval to generate draft tokens.
When benchmarked on 7B and 13B language models in a single-batch setting, REST achieves a significant speedup of 1.62X to 2.36X on code or text generation.
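The core idea: rather than querying a draft model, match the current context suffix against a datastore and propose the retrieved continuation as draft tokens, which the target model then verifies. A toy sketch over token lists, with verification reduced to greedy agreement and `target_next_token` standing in for a real model call:

```python
def retrieve_draft(context, datastore, max_draft=4):
    """Propose draft tokens by matching the longest context suffix in the datastore."""
    for n in range(min(len(context), 8), 0, -1):  # longest suffix match first
        suffix = tuple(context[-n:])
        for doc in datastore:
            for i in range(len(doc) - n):
                if tuple(doc[i:i + n]) == suffix:
                    return doc[i + n:i + n + max_draft]
    return []

def target_next_token(context):
    """Placeholder for the target LLM's greedy next-token prediction."""
    fake_lm = {("a", "b"): "c", ("b", "c"): "d", ("c", "d"): "e"}
    return fake_lm.get(tuple(context[-2:]), "<eos>")

def generate(context, datastore, steps=2):
    for _ in range(steps):
        draft = retrieve_draft(context, datastore)
        # Verify: accept draft tokens only while they match the target model.
        for tok in draft:
            if tok == target_next_token(context):
                context.append(tok)  # accepted without a separate model step
            else:
                break
        context.append(target_next_token(context))  # one guaranteed real step
    return context

print(generate(["a", "b"], datastore=[["a", "b", "c", "d", "e"]]))
```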
arXiv Detail & Related papers (2023-11-14T15:43:47Z)
- Exploring Behaviours of RESTful APIs in an Industrial Setting [0.43012765978447565]
We propose a set of behavioural properties, common to REST APIs, which are used to generate examples of behaviours that these APIs exhibit.
These examples can be used both (i) to further the understanding of the API and (ii) as a source of automatic test cases.
Our approach can generate examples that practitioners deem relevant both for understanding the system and as a source for test generation.
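Such a behavioural property might read "a resource created by POST is immediately retrievable by GET". A minimal sketch of one such check using `requests`; the base URL, endpoint, and payload are invented:

```python
import requests

BASE = "http://localhost:8080"  # hypothetical API under test

def check_create_then_read(payload: dict) -> bool:
    """Property: a successfully created resource is retrievable and round-trips."""
    created = requests.post(f"{BASE}/users", json=payload)
    if created.status_code != 201:
        return True  # property only constrains successful creations
    user_id = created.json()["id"]
    fetched = requests.get(f"{BASE}/users/{user_id}")
    return fetched.status_code == 200 and fetched.json()["name"] == payload["name"]

# Each generated example doubles as a concrete, replayable test case.
assert check_create_then_read({"name": "alice"})
```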
arXiv Detail & Related papers (2023-10-26T11:33:11Z)
- Adaptive REST API Testing with Reinforcement Learning [54.68542517176757]
Current testing tools lack efficient exploration mechanisms, treating all operations and parameters equally.
Current tools struggle when response schemas are absent in the specification or exhibit variants.
We present an adaptive REST API testing technique that incorporates reinforcement learning to prioritize operations during exploration.
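The prioritization can be modeled as a bandit over operations, rewarding calls that reveal new behaviour. A minimal epsilon-greedy sketch, with `call_operation` as a placeholder for firing a real request and the novelty-based reward as our own simplification:

```python
import random
from collections import defaultdict

operations = ["GET /users", "POST /users", "GET /orders"]
q = defaultdict(float)   # estimated value of exploring each operation
seen = defaultdict(set)  # status codes observed per operation
EPSILON, ALPHA = 0.2, 0.5

def call_operation(op):
    """Placeholder: fire the request and return its HTTP status code."""
    return random.choice([200, 201, 400, 500])

for step in range(100):
    if random.random() < EPSILON:
        op = random.choice(operations)            # explore
    else:
        op = max(operations, key=lambda o: q[o])  # exploit highest estimated value
    status = call_operation(op)
    reward = 1.0 if status not in seen[op] else 0.0  # reward novel behaviour
    seen[op].add(status)
    q[op] += ALPHA * (reward - q[op])             # incremental value update

print({op: sorted(seen[op]) for op in operations})
```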
arXiv Detail & Related papers (2023-09-08T20:27:05Z)
- RestGPT: Connecting Large Language Models with Real-World RESTful APIs [44.94234920380684]
Tool-augmented large language models (LLMs) have achieved remarkable progress in tackling a broad range of tasks.
To address the practical challenges of tackling complex instructions, we propose RestGPT, which exploits the power of LLMs and conducts coarse-to-fine online planning.
To fully evaluate RestGPT, we propose RestBench, a high-quality benchmark which consists of two real-world scenarios and human-annotated instructions.
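At a high level, connecting an LLM to RESTful APIs amounts to a loop: the model plans the next call, an executor fires it, and the response is fed back. The sketch below is a generic agent loop of our own, not RestGPT's coarse-to-fine planner; the canned `llm` stub and the httpbin.org endpoint are stand-ins:

```python
import json
import requests

def llm(prompt: str) -> str:
    """Stand-in for a chat-model call; returns one canned plan, then stops."""
    if "History: []" in prompt:
        return '{"done": false, "method": "GET", "url": "https://httpbin.org/get"}'
    return '{"done": true}'

def run(instruction: str, max_steps: int = 5) -> list:
    history = []
    for _ in range(max_steps):
        plan = json.loads(llm(
            f"Instruction: {instruction}\nHistory: {history}\n"
            'Reply with JSON: {"done": ..., "method": ..., "url": ...}'
        ))
        if plan["done"]:
            break
        resp = requests.request(plan["method"], plan["url"])
        # Feed a truncated view of the response back into the next planning step.
        history.append({"call": plan, "status": resp.status_code,
                        "body": resp.text[:200]})
    return history

print(run("Fetch the caller's request metadata"))
```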
arXiv Detail & Related papers (2023-06-11T08:53:12Z)
- Evaluating Embedding APIs for Information Retrieval [51.24236853841468]
We evaluate the capabilities of existing semantic embedding APIs on domain generalization and multilingual retrieval.
We find that re-ranking BM25 results using the APIs is a budget-friendly approach and is most effective in English.
For non-English retrieval, re-ranking still improves the results, but a hybrid model with BM25 works best, albeit at a higher cost.
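The budget-friendly recipe is: retrieve a candidate pool with BM25, then embed only the query and those candidates and re-order by cosine similarity. A minimal offline sketch using the `rank_bm25` package, with a hash-based `embed` stub in place of a real embedding API:

```python
import numpy as np
from rank_bm25 import BM25Okapi  # pip install rank-bm25

docs = ["rest api testing tools", "speculative decoding for llms",
        "bm25 ranking baseline", "api parameter value generation"]
bm25 = BM25Okapi([d.split() for d in docs])

def embed(text: str) -> np.ndarray:
    """Stand-in for an embedding API; hash-seeded vectors keep the demo offline."""
    rng = np.random.default_rng(abs(hash(text)) % 2**32)
    v = rng.standard_normal(64)
    return v / np.linalg.norm(v)

query = "api testing"
scores = bm25.get_scores(query.split())
candidates = [docs[i] for i in np.argsort(scores)[::-1][:3]]  # BM25 top-3 pool

# Re-rank only the small pool by embedding cosine similarity to the query.
qv = embed(query)
reranked = sorted(candidates, key=lambda d: float(embed(d) @ qv), reverse=True)
print(reranked)
```

Embedding only the top-k candidates, rather than the whole corpus, is what keeps the API bill low.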
arXiv Detail & Related papers (2023-05-10T16:40:52Z)
- Enriching Relation Extraction with OpenIE [70.52564277675056]
Relation extraction (RE) is a sub-discipline of information extraction (IE).
In this work, we explore how recent approaches for open information extraction (OpenIE) may help to improve the task of RE.
Our experiments over two annotated corpora, KnowledgeNet and FewRel, demonstrate the improved accuracy of our enriched models.
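Enrichment here can be as simple as serializing OpenIE triples and appending them to the RE model's input. A schematic sketch in which both the OpenIE extractor and the relation classifier are placeholder stubs:

```python
def open_ie(sentence: str):
    """Placeholder for an OpenIE system (e.g., Stanford OpenIE) returning triples."""
    return [("Marie Curie", "was born in", "Warsaw")]

def classify_relation(text: str) -> str:
    """Placeholder for the supervised RE model."""
    return "place_of_birth"

sentence = "Marie Curie was born in Warsaw in 1867."
# Serialize the triples and concatenate them with the sentence as extra evidence.
triples = " [SEP] ".join(" | ".join(t) for t in open_ie(sentence))
enriched_input = f"{sentence} [SEP] {triples}"
print(classify_relation(enriched_input))
```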
arXiv Detail & Related papers (2022-12-19T11:26:23Z)