You Can REST Now: Automated Specification Inference and Black-Box
Testing of RESTful APIs with Large Language Models
- URL: http://arxiv.org/abs/2402.05102v1
- Date: Wed, 7 Feb 2024 18:55:41 GMT
- Title: You Can REST Now: Automated Specification Inference and Black-Box
Testing of RESTful APIs with Large Language Models
- Authors: Alix Decrop, Gilles Perrouin, Mike Papadakis, Xavier Devroey,
Pierre-Yves Schobbens
- Abstract summary: Manually documenting APIs is a time-consuming and error-prone task, resulting in unavailable, incomplete, or imprecise documentation.
Recently, Large Language Models (LLMs) have demonstrated exceptional abilities to automate tasks based on their colossal training data.
We present RESTSpecIT, the first automated RESTful API specification inference and black-box testing approach leveraging LLMs.
- Score: 8.753312212588371
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: RESTful APIs are popular web services, requiring documentation to ease their
comprehension, reusability and testing practices. The OpenAPI Specification
(OAS) is a widely adopted and machine-readable format used to document such
APIs. However, manually documenting RESTful APIs is a time-consuming and
error-prone task, resulting in unavailable, incomplete, or imprecise
documentation. As RESTful API testing tools require an OpenAPI specification as
input, insufficient or informal documentation hampers testing quality.
Recently, Large Language Models (LLMs) have demonstrated exceptional
abilities to automate tasks based on their colossal training data. Accordingly,
such capabilities could be utilized to assist the documentation and testing
process of RESTful APIs.
In this paper, we present RESTSpecIT, the first automated RESTful API
specification inference and black-box testing approach leveraging LLMs. The
approach requires minimal user input compared to state-of-the-art RESTful API
inference and testing tools: given an API name and an LLM key, HTTP requests
are generated and mutated with data returned by the LLM. By sending the
requests to the API endpoint, HTTP responses can be analyzed for inference and
testing purposes. RESTSpecIT utilizes an in-context prompt masking strategy,
requiring no model fine-tuning. Our evaluation demonstrates that RESTSpecIT is
capable of: (1) inferring specifications with 85.05% of GET routes and 81.05%
of query parameters found on average, (2) discovering undocumented and valid
routes and parameters, and (3) uncovering server errors in RESTful APIs.
Inferred specifications can also be used as testing tool inputs.
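The abstract's inference loop (LLM proposes candidate routes and parameters, requests are sent to the live API, responses are analyzed) can be sketched as follows. This is an illustrative sketch of the described workflow, not RESTSpecIT's actual implementation: the LLM call, base URL, masked-prompt format, and response stub below are all hypothetical placeholders.

```python
"""Hedged sketch of an LLM-driven specification inference loop:
candidate GET routes/parameters are proposed (here, stubbed), sent to
the API, and 2xx responses are folded into an OpenAPI-style document.
All endpoint names and helper functions are illustrative assumptions."""
import json


def ask_llm_for_candidates(api_name: str, masked_prompt: str) -> list[dict]:
    """Placeholder for an LLM call with an in-context masked prompt,
    e.g. 'GET https://api.example.com/[MASK]?[MASK]=[MASK]'.
    A real implementation would query a model with the user's LLM key."""
    # Canned answers so the sketch runs offline.
    return [
        {"route": "/search", "params": {"q": "rest"}},
        {"route": "/users", "params": {"page": "1"}},
    ]


def send_request(base_url: str, route: str, params: dict) -> int:
    """Placeholder for an HTTP GET; returns a status code.
    Stubbed here: pretend the API only serves /search."""
    return 200 if route == "/search" else 404


def infer_spec(api_name: str, base_url: str) -> dict:
    spec = {"openapi": "3.0.0",
            "info": {"title": api_name, "version": "inferred"},
            "paths": {}}
    prompt = f"GET {base_url}/[MASK]?[MASK]=[MASK]"
    for cand in ask_llm_for_candidates(api_name, prompt):
        status = send_request(base_url, cand["route"], cand["params"])
        if 200 <= status < 300:
            # Valid route and parameters: record them in the spec.
            spec["paths"][cand["route"]] = {
                "get": {"parameters": [
                    {"name": k, "in": "query"} for k in cand["params"]]}}
        elif status >= 500:
            # A 5xx here would be a server-error finding for testing.
            print(f"server error on {cand['route']}")
    return spec


spec = infer_spec("ExampleAPI", "https://api.example.com")
print(json.dumps(spec, indent=2))
```

In the real approach, rejected candidates would also be mutated and retried with further LLM-supplied data; the resulting specification can then feed existing OAS-based testing tools.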
Related papers
- Utilizing API Response for Test Refinement [2.8002188463519944]
This paper proposes a dynamic test refinement approach that leverages the response message.
Using an intelligent agent, the approach adds constraints to the API specification that are further used to generate a test scenario.
The proposed approach led to a decrease in the number of 4xx responses, taking a step closer to generating more realistic test cases.
arXiv Detail & Related papers (2025-01-30T05:26:32Z) - LlamaRestTest: Effective REST API Testing with Small Language Models [50.058600784556816]
We present LlamaRestTest, a novel approach that employs two custom LLMs to generate realistic test inputs.
LlamaRestTest surpasses state-of-the-art tools in code coverage and error detection, even with RESTGPT-enhanced specifications.
arXiv Detail & Related papers (2025-01-15T05:51:20Z) - APIRL: Deep Reinforcement Learning for REST API Fuzzing [3.053989095162017]
APIRL is a fully automated deep reinforcement learning tool for testing REST APIs.
We show APIRL can find significantly more bugs than the state-of-the-art in real world REST APIs.
arXiv Detail & Related papers (2024-12-20T15:40:51Z) - ExploraCoder: Advancing code generation for multiple unseen APIs via planning and chained exploration [70.26807758443675]
ExploraCoder is a training-free framework that empowers large language models to invoke unseen APIs in code solutions.
We show that ExploraCoder significantly improves performance for models lacking prior API knowledge, achieving an absolute increase of 11.24% over naive RAG approaches and 14.07% over pretraining methods in pass@10.
arXiv Detail & Related papers (2024-12-06T19:00:15Z) - A Multi-Agent Approach for REST API Testing with Semantic Graphs and LLM-Driven Inputs [46.65963514391019]
We present AutoRestTest, the first black-box tool to adopt a dependency-embedded multi-agent approach for REST API testing.
Our approach treats REST API testing as a separable problem, where four agents collaborate to optimize API exploration.
Our evaluation of AutoRestTest on 12 real-world REST services shows that it outperforms the four leading black-box REST API testing tools.
arXiv Detail & Related papers (2024-11-11T16:20:27Z) - DeepREST: Automated Test Case Generation for REST APIs Exploiting Deep Reinforcement Learning [5.756036843502232]
This paper introduces DeepREST, a novel black-box approach for automatically testing REST APIs.
It leverages deep reinforcement learning to uncover implicit API constraints, that is, constraints hidden from API documentation.
Our empirical validation suggests that the proposed approach is very effective in achieving high test coverage and fault detection.
arXiv Detail & Related papers (2024-08-16T08:03:55Z) - Leveraging Large Language Models to Improve REST API Testing [51.284096009803406]
RESTGPT takes as input an API specification, extracts machine-interpretable rules, and generates example parameter values from natural-language descriptions in the specification.
Our evaluations indicate that RESTGPT outperforms existing techniques in both rule extraction and value generation.
arXiv Detail & Related papers (2023-12-01T19:53:23Z) - Adaptive REST API Testing with Reinforcement Learning [54.68542517176757]
Current testing tools lack efficient exploration mechanisms, treating all operations and parameters equally.
Current tools struggle when response schemas are absent in the specification or exhibit variants.
We present an adaptive REST API testing technique that incorporates reinforcement learning to prioritize operations during exploration.
arXiv Detail & Related papers (2023-09-08T20:27:05Z) - RestGPT: Connecting Large Language Models with Real-World RESTful APIs [44.94234920380684]
Tool-augmented large language models (LLMs) have achieved remarkable progress in tackling a broad range of tasks.
To address the practical challenges of tackling complex instructions, we propose RestGPT, which connects LLMs with real-world RESTful APIs.
To fully evaluate RestGPT, we propose RestBench, a high-quality benchmark which consists of two real-world scenarios and human-annotated instructions.
arXiv Detail & Related papers (2023-06-11T08:53:12Z) - Carving UI Tests to Generate API Tests and API Specification [8.743426215048451]
API-level testing can play an important role, in-between unit-level testing and UI-level (or end-to-end) testing.
Existing API testing tools require API specifications, which often may not be available or, when available, be inconsistent with the API implementation.
We present an approach that leverages UI testing to enable API-level testing for web applications.
arXiv Detail & Related papers (2023-05-24T03:53:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.