Exploring Behaviours of RESTful APIs in an Industrial Setting
- URL: http://arxiv.org/abs/2310.17318v1
- Date: Thu, 26 Oct 2023 11:33:11 GMT
- Title: Exploring Behaviours of RESTful APIs in an Industrial Setting
- Authors: Stefan Karlsson, Robbert Jongeling, Adnan Causevic, Daniel Sundmark
- Abstract summary: We propose a set of behavioural properties, common to REST APIs, which are used to generate examples of behaviours that these APIs exhibit.
These examples can be used both (i) to further the understanding of the API and (ii) as a source of automatic test cases.
Our approach can generate examples deemed relevant by practitioners, both for understanding the system and as a source of test generation.
- Score: 0.43012765978447565
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A common way of exposing functionality in contemporary systems is by
providing a Web-API based on the REST API architectural guidelines. To describe
REST APIs, the industry standard is currently OpenAPI-specifications. Test
generation and fuzzing methods targeting OpenAPI-described REST APIs have been
a very active research area in recent years. An open research challenge is to
aid users in better understanding their API, in addition to finding faults and
covering all the code. In this paper, we address this challenge by proposing a
set of behavioural properties, common to REST APIs, which are used to generate
examples of behaviours that these APIs exhibit. These examples can be used both
(i) to further the understanding of the API and (ii) as a source of automatic
test cases. Our evaluation shows that our approach can generate examples deemed
relevant by practitioners, both for understanding the system and as a source of
test generation. In addition, we show that basing test generation on behavioural
properties provides tests that are less dependent on the state of the system,
while at the same time yielding a similar code coverage as state-of-the-art
methods in REST API fuzzing in a given time limit.
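To make the notion of a behavioural property more concrete, here is a minimal sketch, assuming a hypothetical service with a /products endpoint, of what one such property check might look like; the endpoint, payload, and base URL are placeholders and not taken from the paper. The paper's approach generates many input examples and records which of them satisfy or violate properties of this kind.

```python
# Minimal sketch of one behavioural property of the kind the paper describes:
# a resource created via POST should be retrievable via GET with the same
# field values. Endpoint names, payload, and base URL are hypothetical.
import requests

BASE_URL = "http://localhost:8080/api"  # hypothetical service under test


def check_post_then_get(payload: dict) -> bool:
    """Return True if a POSTed resource is observable via a subsequent GET."""
    created = requests.post(f"{BASE_URL}/products", json=payload, timeout=5)
    if created.status_code not in (200, 201):
        return False  # creation itself failed; property cannot be checked

    resource_id = created.json().get("id")
    fetched = requests.get(f"{BASE_URL}/products/{resource_id}", timeout=5)
    if fetched.status_code != 200:
        return False  # the created resource is not retrievable

    body = fetched.json()
    # The created state should be reflected in the retrieved representation.
    return all(body.get(k) == v for k, v in payload.items())


if __name__ == "__main__":
    example = {"name": "widget", "price": 9.99}  # one generated input example
    print("property holds:", check_post_then_get(example))
```

Because the check creates the resource that it later reads, it brings its own state with it, which illustrates the sense in which property-based tests can be less dependent on the state of the system under test.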
Related papers
- A Multi-Agent Approach for REST API Testing with Semantic Graphs and LLM-Driven Inputs [46.65963514391019]
We present AutoRestTest, the first black-box framework to adopt a dependency-embedded multi-agent approach for REST API testing.
We integrate Multi-Agent Reinforcement Learning (MARL) with a Semantic Property Dependency Graph (SPDG) and Large Language Models (LLMs).
Our approach treats REST API testing as a separable problem, where four agents -- API, dependency, parameter, and value -- collaborate to optimize API exploration.
arXiv Detail & Related papers (2024-11-11T16:20:27Z)
- A Systematic Evaluation of Large Code Models in API Suggestion: When, Which, and How [53.65636914757381]
API suggestion is a critical task in modern software development.
Recent advancements in large code models (LCMs) have shown promise in the API suggestion task.
arXiv Detail & Related papers (2024-09-20T03:12:35Z)
- DeepREST: Automated Test Case Generation for REST APIs Exploiting Deep Reinforcement Learning [5.756036843502232]
This paper introduces DeepREST, a novel black-box approach for automatically testing REST APIs.
It leverages deep reinforcement learning to uncover implicit API constraints, that is, constraints hidden from API documentation.
Our empirical validation suggests that the proposed approach is very effective in achieving high test coverage and fault detection.
arXiv Detail & Related papers (2024-08-16T08:03:55Z)
- FANTAstic SEquences and Where to Find Them: Faithful and Efficient API Call Generation through State-tracked Constrained Decoding and Reranking [57.53742155914176]
API call generation is the cornerstone of large language models' tool-using ability.
Existing supervised and in-context learning approaches suffer from high training costs, poor data efficiency, and generated API calls that can be unfaithful to the API documentation and the user's request.
We propose an output-side optimization approach called FANTASE to address these limitations.
arXiv Detail & Related papers (2024-07-18T23:44:02Z)
- KAT: Dependency-aware Automated API Testing with Large Language Models [1.7264233311359707]
KAT (Katalon API Testing) is a novel AI-driven approach that autonomously generates test cases to validate APIs.
Our evaluation of KAT using 12 real-world services shows that it can improve validation coverage, detect more undocumented status codes, and reduce false positives in these services.
arXiv Detail & Related papers (2024-07-14T14:48:18Z)
- WorldAPIs: The World Is Worth How Many APIs? A Thought Experiment [49.00213183302225]
We propose a framework to induce new APIs by grounding wikiHow instructions to situated agent policies.
Inspired by recent successes of large language models (LLMs) in embodied planning, we propose few-shot prompting to steer GPT-4.
arXiv Detail & Related papers (2024-07-10T15:52:44Z)
- Leveraging Large Language Models to Improve REST API Testing [51.284096009803406]
RESTGPT takes as input an API specification, extracts machine-interpretable rules, and generates example parameter values from natural-language descriptions in the specification.
Our evaluations indicate that RESTGPT outperforms existing techniques in both rule extraction and value generation. (An illustrative sketch of such a machine-interpretable rule appears after this list.)
arXiv Detail & Related papers (2023-12-01T19:53:23Z)
- Adaptive REST API Testing with Reinforcement Learning [54.68542517176757]
Current testing tools lack efficient exploration mechanisms, treating all operations and parameters equally.
Current tools struggle when response schemas are absent in the specification or exhibit variants.
We present an adaptive REST API testing technique that incorporates reinforcement learning to prioritize operations during exploration.
arXiv Detail & Related papers (2023-09-08T20:27:05Z)
- Exploring API Behaviours Through Generated Examples [0.768721532845575]
We present an approach to automatically generate relevant examples of behaviours of an API.
Our method can produce small and relevant examples that can help engineers to understand the system under exploration.
arXiv Detail & Related papers (2023-08-29T11:05:52Z)
- Carving UI Tests to Generate API Tests and API Specification [8.743426215048451]
API-level testing can play an important role, in-between unit-level testing and UI-level (or end-to-end) testing.
Existing API testing tools require API specifications, which often may not be available or, when available, be inconsistent with the API implementation.
We present an approach that leverages UI testing to enable API-level testing for web applications.
arXiv Detail & Related papers (2023-05-24T03:53:34Z)
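As an illustration of the RESTGPT entry above, the sketch below shows one way a machine-interpretable rule extracted from a natural-language parameter description (e.g. "page_size must be between 1 and 100") could be represented and then used to generate and validate example values. The rule schema, parameter name, and bounds are illustrative assumptions, not RESTGPT's actual output format.

```python
# Illustrative sketch only: one possible representation of a rule extracted
# from a natural-language parameter description, plus its downstream use.
# The rule dict's schema is an assumption, not RESTGPT's real format.
import random

rule = {
    "parameter": "page_size",
    "type": "integer",
    "minimum": 1,    # from "page_size must be between 1 and 100"
    "maximum": 100,
}


def generate_value(r: dict) -> int:
    """Draw an example value that satisfies the extracted rule."""
    return random.randint(r["minimum"], r["maximum"])


def satisfies(r: dict, value) -> bool:
    """Check whether a concrete value respects the rule."""
    return isinstance(value, int) and r["minimum"] <= value <= r["maximum"]


if __name__ == "__main__":
    v = generate_value(rule)
    print(v, satisfies(rule, v))  # e.g. 42 True
    print(satisfies(rule, 0))     # False: below the documented minimum
```

The rule-extraction step itself is performed by the LLM-based pipeline described in that paper; only the downstream use of an already-extracted rule is sketched here.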
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.