Generating Accurate OpenAPI Descriptions from Java Source Code
- URL: http://arxiv.org/abs/2410.23873v1
- Date: Thu, 31 Oct 2024 12:34:35 GMT
- Title: Generating Accurate OpenAPI Descriptions from Java Source Code
- Authors: Alexander Lercher, Christian Macho, Clemens Bauer, Martin Pinzger
- Abstract: Developers require accurate descriptions of REpresentational State Transfer (REST) Application Programming Interfaces (APIs) for successful interaction between web services. The OpenAPI Specification (OAS) has become the de facto standard for documenting REST APIs. Manually creating an OpenAPI description is time-consuming and error-prone, and therefore several approaches have been proposed to generate them automatically from bytecode or runtime information. In this paper, we first study three state-of-the-art approaches, Respector, Prophet, and springdoc-openapi, and present and discuss their shortcomings. Next, we introduce AutoOAS, our approach addressing these shortcomings to generate accurate OpenAPI descriptions. It detects exposed REST endpoint paths, corresponding HTTP methods, HTTP response codes, and the data models of request parameters and responses directly from Java source code. We evaluated AutoOAS on seven real-world Spring Boot projects and compared its performance with the three state-of-the-art approaches. Based on a manually created ground truth, AutoOAS achieved the highest precision and recall when identifying REST endpoint paths, HTTP methods, parameters, and responses. It outperformed the second-best approach, Respector, with a 39% higher precision and 35% higher recall when identifying parameters, and a 29% higher precision and 11% higher recall when identifying responses. Furthermore, AutoOAS is the only approach that handles configuration profiles, and it provided the most accurate and detailed description of the data models used in the REST APIs.
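To make the task concrete, the following sketch pairs a hypothetical Spring Boot controller (class, method, and model names are invented for illustration) with the kind of OpenAPI fragment a source-level generator such as AutoOAS aims to derive. The endpoint path, HTTP method, path parameter, and the 200/404 response codes are all recoverable statically from the annotations and return statements; the fragment shown is illustrative, not the literal output of AutoOAS.

```java
// Hypothetical Spring Boot controller: the kind of source code
// that a source-level OpenAPI generator analyzes statically.
@RestController
@RequestMapping("/pets")
public class PetController {

    // Exposed endpoint: GET /pets/{id}
    @GetMapping("/{id}")
    public ResponseEntity<Pet> getPet(@PathVariable long id) {
        Pet pet = repository.find(id);          // repository is assumed
        if (pet == null) {
            return ResponseEntity.notFound().build();  // -> 404
        }
        return ResponseEntity.ok(pet);                 // -> 200, body: Pet
    }
}
```

```yaml
# OpenAPI fragment a source-level generator could derive from the
# controller above (illustrative, not actual AutoOAS output):
paths:
  /pets/{id}:
    get:
      parameters:
        - name: id
          in: path
          required: true
          schema:
            type: integer
            format: int64
      responses:
        "200":
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/Pet"
        "404":
          description: Not Found
```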
Related papers
- Less is More: Making Smaller Language Models Competent Subgraph Retrievers for Multi-hop KGQA (arXiv, 2024-10-08)
  We model the subgraph retrieval task as a conditional generation task handled by small language models. Our base generative subgraph retrieval model, consisting of only 220M parameters, achieves competitive retrieval performance compared to state-of-the-art models. Our largest 3B model, when plugged with an LLM reader, sets new SOTA end-to-end performance on both the WebQSP and CWQ benchmarks.
- ToolACE: Winning the Points of LLM Function Calling (arXiv, 2024-09-02)
  ToolACE is an automatic agentic pipeline designed to generate accurate, complex, and diverse tool-learning data. We demonstrate that models trained on our synthesized data, even with only 8B parameters, achieve state-of-the-art performance on the Berkeley Function-Calling Leaderboard.
- FANTAstic SEquences and Where to Find Them: Faithful and Efficient API Call Generation through State-tracked Constrained Decoding and Reranking (arXiv, 2024-07-18)
  API call generation is the cornerstone of large language models' tool-using ability. Existing supervised and in-context learning approaches suffer from high training costs, poor data efficiency, and generated API calls that can be unfaithful to the API documentation and the user's request. We propose an output-side optimization approach called FANTASE to address these limitations.
- ShortcutsBench: A Large-Scale Real-world Benchmark for API-based Agents (arXiv, 2024-06-28)
  We introduce ShortcutsBench, a large-scale benchmark for the comprehensive evaluation of API-based agents, which includes a wealth of real APIs from Apple Inc.'s operating systems. Our evaluation reveals significant limitations in handling complex queries related to API selection, parameter filling, and requesting necessary information from systems and users.
- You Can REST Now: Automated Specification Inference and Black-Box Testing of RESTful APIs with Large Language Models (arXiv, 2024-02-07)
  Manually documenting APIs is a time-consuming and error-prone task, resulting in unavailable, incomplete, or imprecise documentation. Recently, Large Language Models (LLMs) have demonstrated exceptional abilities to automate tasks based on their colossal training data. We present RESTSpecIT, the first automated API specification inference and black-box testing approach.
- Leveraging Large Language Models to Improve REST API Testing (arXiv, 2023-12-01)
  RESTGPT takes as input an API specification, extracts machine-interpretable rules, and generates example parameter values from natural-language descriptions in the specification. Our evaluations indicate that RESTGPT outperforms existing techniques in both rule extraction and value generation.
- APICom: Automatic API Completion via Prompt Learning and Adversarial Training-based Data Augmentation (arXiv, 2023-09-13)
  API recommendation is the process of assisting developers in finding the required API among numerous candidates. Previous studies mainly modeled API recommendation as a recommendation task, which may still leave developers unable to find what they need. Motivated by the neural machine translation research domain, we instead model this problem as a generation task. We propose a novel approach, APICom, based on prompt learning, which generates APIs related to the query according to the prompts.
- Adaptive REST API Testing with Reinforcement Learning (arXiv, 2023-09-08)
  Current testing tools lack efficient exploration mechanisms, treating all operations and parameters equally, and they struggle when response schemas are absent from the specification or exhibit variants. We present an adaptive REST API testing technique that incorporates reinforcement learning to prioritize operations during exploration.
- API-Miner: an API-to-API Specification Recommendation Engine (arXiv, 2022-12-14)
  API-Miner is an API-to-API specification recommendation engine that retrieves relevant specification components written in OpenAPI. We evaluate API-Miner in both quantitative and qualitative tasks.
This list is automatically generated from the titles and abstracts of the papers in this site.