Leveraging Large Language Models to Improve REST API Testing
- URL: http://arxiv.org/abs/2312.00894v2
- Date: Tue, 30 Jan 2024 03:43:55 GMT
- Title: Leveraging Large Language Models to Improve REST API Testing
- Authors: Myeongsoo Kim, Tyler Stennett, Dhruv Shah, Saurabh Sinha, Alessandro Orso
- Abstract summary: RESTGPT takes as input an API specification, extracts machine-interpretable rules, and generates example parameter values from natural-language descriptions in the specification.
Our evaluations indicate that RESTGPT outperforms existing techniques in both rule extraction and value generation.
- Score: 51.284096009803406
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The widespread adoption of REST APIs, coupled with their growing complexity
and size, has led to the need for automated REST API testing tools. Current
tools focus on the structured data in REST API specifications but often neglect
valuable insights available in unstructured natural-language descriptions in
the specifications, which leads to suboptimal test coverage. Recently, to
address this gap, researchers have developed techniques that extract rules from
these human-readable descriptions and query knowledge bases to derive
meaningful input values. However, these techniques are limited in the types of
rules they can extract and prone to producing inaccurate results. This paper
presents RESTGPT, an innovative approach that leverages the power and intrinsic
context-awareness of Large Language Models (LLMs) to improve REST API testing.
RESTGPT takes as input an API specification, extracts machine-interpretable
rules, and generates example parameter values from natural-language
descriptions in the specification. It then augments the original specification
with these rules and values. Our evaluations indicate that RESTGPT outperforms
existing techniques in both rule extraction and value generation. Given these
promising results, we outline future research directions for advancing REST API
testing through LLMs.
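To make the pipeline concrete, here is a minimal Python sketch of the idea, not RESTGPT's actual implementation: for each parameter whose OpenAPI description is free text, an LLM is prompted for machine-interpretable constraints and example values, which are merged back into the specification. The query_llm stub, the prompt wording, and the expected JSON response format are all assumptions made for illustration.

```python
import json

# Placeholder for an LLM call (an API client, a local model, etc.).
def query_llm(prompt: str) -> str:
    raise NotImplementedError("wire up an LLM client here")

# Illustrative prompt; RESTGPT's actual prompts are not reproduced here.
PROMPT_TEMPLATE = (
    "From the following REST API parameter description, return a JSON object "
    'with "rules" (machine-interpretable JSON Schema constraints such as '
    'minimum, maximum, or pattern) and "examples" (plausible example values).\n'
    "Description: {description}"
)

HTTP_METHODS = ("get", "put", "post", "delete", "patch", "head", "options")

def augment_spec(spec: dict) -> dict:
    """Merge LLM-derived constraints and example values into an OpenAPI spec."""
    for path_item in spec.get("paths", {}).values():
        for method in HTTP_METHODS:
            operation = path_item.get(method)
            if operation is None:
                continue
            for param in operation.get("parameters", []):
                description = param.get("description")
                if not description:
                    continue
                raw = query_llm(PROMPT_TEMPLATE.format(description=description))
                extracted = json.loads(raw)  # assumes the model returns valid JSON
                schema = param.setdefault("schema", {})
                schema.update(extracted.get("rules", {}))  # e.g. {"minimum": 1}
                schema["examples"] = extracted.get("examples", [])
    return spec
```

Because the output remains a valid OpenAPI document, the augmented specification can be fed unchanged to existing specification-driven test generators.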
Related papers
- Utilizing API Response for Test Refinement [2.8002188463519944]
This paper proposes a dynamic test refinement approach that leverages API response messages.
Using an intelligent agent, the approach adds constraints to the API specification that are further used to generate a test scenario.
The proposed approach led to a decrease in the number of 4xx responses, taking a step closer to generating more realistic test cases.
arXiv Detail & Related papers (2025-01-30T05:26:32Z)
- LlamaRestTest: Effective REST API Testing with Small Language Models [50.058600784556816]
We present LlamaRestTest, a novel approach that employs two custom LLMs to generate realistic test inputs.
LlamaRestTest surpasses state-of-the-art tools in code coverage and error detection, even with RESTGPT-enhanced specifications.
arXiv Detail & Related papers (2025-01-15T05:51:20Z)
- A Multi-Agent Approach for REST API Testing with Semantic Graphs and LLM-Driven Inputs [46.65963514391019]
We present AutoRestTest, the first black-box tool to adopt a dependency-embedded multi-agent approach for REST API testing.
Our approach treats REST API testing as a separable problem, where four agents collaborate to optimize API exploration.
Our evaluation of AutoRestTest on 12 real-world REST services shows that it outperforms the four leading black-box REST API testing tools.
arXiv Detail & Related papers (2024-11-11T16:20:27Z)
- KAT: Dependency-aware Automated API Testing with Large Language Models [1.7264233311359707]
KAT (Katalon API Testing) is a novel AI-driven approach that autonomously generates test cases to validate APIs.
Our evaluation of KAT using 12 real-world services shows that it can improve validation coverage, detect more undocumented status codes, and reduce false positives in these services.
arXiv Detail & Related papers (2024-07-14T14:48:18Z)
- REST: Retrieval-Based Speculative Decoding [69.06115086237207]
We introduce Retrieval-Based Speculative Decoding (REST), a novel algorithm designed to speed up language model generation.
Unlike previous methods that rely on a draft language model for speculative decoding, REST harnesses the power of retrieval to generate draft tokens.
When benchmarked on 7B and 13B language models in a single-batch setting, REST achieves a significant speedup of 1.62X to 2.36X on code or text generation.
arXiv Detail & Related papers (2023-11-14T15:43:47Z)
- Exploring Behaviours of RESTful APIs in an Industrial Setting [0.43012765978447565]
We propose a set of behavioural properties, common to REST APIs, which are used to generate examples of behaviours that these APIs exhibit.
These examples can be used both (i) to further the understanding of the API and (ii) as a source of automatic test cases.
Practitioners deemed the generated examples relevant both for understanding the system and as a source of test cases.
arXiv Detail & Related papers (2023-10-26T11:33:11Z)
- Adaptive REST API Testing with Reinforcement Learning [54.68542517176757]
Current testing tools lack efficient exploration mechanisms, treating all operations and parameters equally.
They also struggle when response schemas are absent from the specification or exhibit variants.
We present an adaptive REST API testing technique that incorporates reinforcement learning to prioritize operations during exploration.
arXiv Detail & Related papers (2023-09-08T20:27:05Z)
- Enriching Relation Extraction with OpenIE [70.52564277675056]
Relation extraction (RE) is a sub-discipline of information extraction (IE).
In this work, we explore how recent approaches for open information extraction (OpenIE) may help to improve the task of RE.
Our experiments over two annotated corpora, KnowledgeNet and FewRel, demonstrate the improved accuracy of our enriched models.
arXiv Detail & Related papers (2022-12-19T11:26:23Z)
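For a concrete picture of how OpenIE output might enrich a relation-extraction model, here is a minimal Python sketch under assumed conventions: the hand-written triples, the [TRIPLE] marker, and the serialization format are illustrative rather than the paper's actual method, and the enriched string could be fed to any text encoder.

```python
from dataclasses import dataclass

@dataclass
class Triple:
    subject: str
    relation: str
    obj: str

def enrich_with_openie(sentence: str, triples: list[Triple]) -> str:
    """Serialize OpenIE triples after the sentence as extra context for an RE model.

    The "[TRIPLE]" marker is an illustrative convention, not a standard token.
    """
    context = " ".join(
        f"[TRIPLE] {t.subject} ; {t.relation} ; {t.obj}" for t in triples
    )
    return f"{sentence} {context}".strip()

# Hand-written stand-ins for an OpenIE extractor's output.
sentence = "Marie Curie won the Nobel Prize in 1903."
print(enrich_with_openie(sentence, [Triple("Marie Curie", "won", "the Nobel Prize")]))
```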