Leveraging Large Language Models to Improve REST API Testing
- URL: http://arxiv.org/abs/2312.00894v2
- Date: Tue, 30 Jan 2024 03:43:55 GMT
- Title: Leveraging Large Language Models to Improve REST API Testing
- Authors: Myeongsoo Kim, Tyler Stennett, Dhruv Shah, Saurabh Sinha, Alessandro
Orso
- Abstract summary: RESTGPT takes as input an API specification, extracts machine-interpretable rules, and generates example parameter values from natural-language descriptions in the specification.
Our evaluations indicate that RESTGPT outperforms existing techniques in both rule extraction and value generation.
- Score: 51.284096009803406
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The widespread adoption of REST APIs, coupled with their growing complexity
and size, has led to the need for automated REST API testing tools. Current
tools focus on the structured data in REST API specifications but often neglect
valuable insights available in unstructured natural-language descriptions in
the specifications, which leads to suboptimal test coverage. Recently, to
address this gap, researchers have developed techniques that extract rules from
these human-readable descriptions and query knowledge bases to derive
meaningful input values. However, these techniques are limited in the types of
rules they can extract and prone to producing inaccurate results. This paper
presents RESTGPT, an innovative approach that leverages the power and intrinsic
context-awareness of Large Language Models (LLMs) to improve REST API testing.
RESTGPT takes as input an API specification, extracts machine-interpretable
rules, and generates example parameter values from natural-language
descriptions in the specification. It then augments the original specification
with these rules and values. Our evaluations indicate that RESTGPT outperforms
existing techniques in both rule extraction and value generation. Given these
promising results, we outline future research directions for advancing REST API
testing through LLMs.
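To make the described workflow concrete, the sketch below shows one minimal way such specification augmentation could look. It is an illustration under stated assumptions, not RESTGPT's actual implementation: the query_llm helper, the prompt wording, and the x-derived-constraints extension field are hypothetical, and PyYAML is assumed for reading the specification.

```python
import json

import yaml  # PyYAML, assumed to be installed


def query_llm(prompt: str) -> str:
    """Hypothetical placeholder: plug in any LLM client (OpenAI SDK, local model, ...)."""
    raise NotImplementedError("swap in a real LLM call here")


PROMPT_TEMPLATE = (
    "Extract machine-interpretable constraints and example values from this REST API "
    "parameter description. Reply with JSON using the keys "
    "'type', 'format', 'minimum', 'maximum', 'enum', and 'examples'.\n\n"
    "Description: {description}"
)

HTTP_METHODS = {"get", "put", "post", "delete", "patch", "head", "options", "trace"}


def augment_spec(spec_path: str, out_path: str) -> None:
    """Add LLM-derived rules and example values to an OpenAPI spec as vendor extensions."""
    with open(spec_path) as f:
        spec = yaml.safe_load(f)

    for path_item in spec.get("paths", {}).values():
        for method, operation in path_item.items():
            if method not in HTTP_METHODS:
                continue  # skip non-operation keys such as 'parameters' or 'summary'
            for param in operation.get("parameters", []):
                description = param.get("description")
                if not description:
                    continue
                raw = query_llm(PROMPT_TEMPLATE.format(description=description))
                try:
                    extracted = json.loads(raw)
                except json.JSONDecodeError:
                    continue  # ignore responses that are not valid JSON
                # Store the derived rules under a vendor extension so the original
                # specification stays valid and its existing fields are untouched.
                param["x-derived-constraints"] = {
                    k: v for k, v in extracted.items() if v is not None
                }

    with open(out_path, "w") as f:
        yaml.safe_dump(spec, f, sort_keys=False)
```

Keeping the derived rules in a vendor extension keeps the augmented document a valid OpenAPI specification that downstream test generators can consume unchanged.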
Related papers
- DeepREST: Automated Test Case Generation for REST APIs Exploiting Deep Reinforcement Learning [5.756036843502232]
This paper introduces DeepREST, a novel black-box approach for automatically testing REST APIs.
It leverages deep reinforcement learning to uncover implicit API constraints, that is, constraints hidden from API documentation.
Our empirical validation suggests that the proposed approach is very effective in achieving high test coverage and fault detection.
arXiv Detail & Related papers (2024-08-16T08:03:55Z) - KAT: Dependency-aware Automated API Testing with Large Language Models [1.7264233311359707]
KAT (Katalon API Testing) is a novel AI-driven approach that autonomously generates test cases to validate APIs.
Our evaluation of KAT using 12 real-world services shows that it can improve validation coverage, detect more undocumented status codes, and reduce false positives in these services.
arXiv Detail & Related papers (2024-07-14T14:48:18Z) - REST: Retrieval-Based Speculative Decoding [69.06115086237207]
We introduce Retrieval-Based Speculative Decoding (REST), a novel algorithm designed to speed up language model generation.
Unlike previous methods that rely on a draft language model for speculative decoding, REST harnesses the power of retrieval to generate draft tokens.
When benchmarked on 7B and 13B language models in a single-batch setting, REST achieves a significant speedup of 1.62X to 2.36X on code or text generation.
arXiv Detail & Related papers (2023-11-14T15:43:47Z) - Exploring Behaviours of RESTful APIs in an Industrial Setting [0.43012765978447565]
We propose a set of behavioural properties, common to REST APIs, which are used to generate examples of behaviours that these APIs exhibit.
These examples can be used both (i) to further the understanding of the API and (ii) as a source of automatic test cases.
Our approach can generate examples that practitioners deem relevant both for understanding the system and as a source of test cases.
arXiv Detail & Related papers (2023-10-26T11:33:11Z) - Adaptive REST API Testing with Reinforcement Learning [54.68542517176757]
Current testing tools lack efficient exploration mechanisms, treating all operations and parameters equally.
They also struggle when response schemas are absent from the specification or exhibit variants.
We present an adaptive REST API testing technique that incorporates reinforcement learning to prioritize operations during exploration (a minimal sketch of such a prioritization loop appears after this list).
arXiv Detail & Related papers (2023-09-08T20:27:05Z) - RestGPT: Connecting Large Language Models with Real-World RESTful APIs [44.94234920380684]
Tool-augmented large language models (LLMs) have achieved remarkable progress in tackling a broad range of tasks.
To address the practical challenges of tackling complex instructions, we propose RestGPT, which connects LLMs with real-world RESTful APIs.
To fully evaluate RestGPT, we propose RestBench, a high-quality benchmark which consists of two real-world scenarios and human-annotated instructions.
arXiv Detail & Related papers (2023-06-11T08:53:12Z) - Evaluating Embedding APIs for Information Retrieval [51.24236853841468]
We evaluate the capabilities of existing semantic embedding APIs on domain generalization and multilingual retrieval.
We find that re-ranking BM25 results using the APIs is a budget-friendly approach and is most effective in English.
For non-English retrieval, re-ranking still improves the results, but a hybrid model with BM25 works best, albeit at a higher cost.
arXiv Detail & Related papers (2023-05-10T16:40:52Z) - Enriching Relation Extraction with OpenIE [70.52564277675056]
Relation extraction (RE) is a sub-discipline of information extraction (IE)
In this work, we explore how recent approaches for open information extraction (OpenIE) may help to improve the task of RE.
Our experiments over two annotated corpora, KnowledgeNet and FewRel, demonstrate the improved accuracy of our enriched models.
arXiv Detail & Related papers (2022-12-19T11:26:23Z) - Classifiers are Better Experts for Controllable Text Generation [63.17266060165098]
We show that the proposed method significantly outperforms the recent PPLM, GeDi, and DExperts approaches in perplexity (PPL) and in the sentiment accuracy of generated texts, as measured by an external classifier.
At the same time, it is also easier to implement and tune, and has significantly fewer restrictions and requirements.
arXiv Detail & Related papers (2022-05-15T12:58:35Z)
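As noted in the adaptive REST API testing entry above, the sketch below illustrates the general idea of reinforcement-learning-guided operation prioritization. It is not the cited paper's algorithm: the epsilon-greedy strategy, the reward values, and the send_request helper (which would issue one test request and return its HTTP status code) are assumptions made for illustration.

```python
import random
from collections import defaultdict


def prioritized_testing(operations, send_request, budget=200, epsilon=0.2):
    """Epsilon-greedy loop that spends more of the request budget on promising operations."""
    q_values = defaultdict(float)   # running value estimate per operation
    counts = defaultdict(int)       # how many times each operation was exercised
    seen_codes = defaultdict(set)   # status codes observed per operation

    for _ in range(budget):
        # Explore a random operation with probability epsilon, otherwise exploit
        # the operation with the highest estimated value so far.
        if random.random() < epsilon:
            op = random.choice(operations)
        else:
            op = max(operations, key=lambda o: q_values[o])

        status = send_request(op)   # hypothetical helper: sends one test request, returns its HTTP status

        reward = 0.0
        if status >= 500:
            reward += 1.0           # server errors usually indicate faults
        if status not in seen_codes[op]:
            reward += 0.5           # reward newly observed behaviour
            seen_codes[op].add(status)

        # Incremental-average update of the operation's value estimate.
        counts[op] += 1
        q_values[op] += (reward - q_values[op]) / counts[op]

    return dict(q_values)
```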
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all generated summaries) and is not responsible for any consequences arising from its use.