Comparison of Static Analysis Architecture Recovery Tools for
Microservice Applications
- URL: http://arxiv.org/abs/2403.06941v1
- Date: Mon, 11 Mar 2024 17:26:51 GMT
- Title: Comparison of Static Analysis Architecture Recovery Tools for
Microservice Applications
- Authors: Simon Schneider, Alexander Bakhtin, Xiaozhou Li, Jacopo Soldani,
Antonio Brogi, Tomas Cerny, Riccardo Scandariato, Davide Taibi
- Abstract summary: We will identify static analysis architecture recovery tools for microservice applications via a multi-vocal literature review.
We will then execute them on a common dataset and compare the measured effectiveness in architecture recovery.
- Score: 43.358953895199264
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Architecture recovery tools help software engineers obtain an overview of
their software systems during all phases of the software development lifecycle.
This is especially important for microservice applications because their
distributed nature makes it more challenging to oversee the architecture.
Various tools and techniques for this task are presented in academic and grey
literature sources. Practitioners and researchers can benefit from a
comprehensive overview of these tools and their abilities. However, no such
overview exists that is based on executing the identified tools and assessing
the effectiveness of their outputs. With the study described in this paper,
we plan to first identify static analysis architecture recovery tools for
microservice applications via a multi-vocal literature review, and then execute
them on a common dataset and compare the measured effectiveness in architecture
recovery. We will focus on static approaches because they are also suitable for
integration into fast-paced CI/CD pipelines.
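Note: as an illustration only (not taken from the paper), the following Python sketch shows one way such an effectiveness comparison could be scored: precision, recall, and F1 over recovered services and service-to-service communication links, assuming each tool's output has first been converted into a hypothetical common JSON format. The file names and schema below are placeholders, not artifacts of the study.

# Minimal sketch (not from the paper): scoring a tool's recovered
# architecture against a ground-truth model with precision/recall/F1
# over detected services and service-to-service communication links.
# The JSON schema and file names used here are hypothetical examples.
import json


def load_architecture(path):
    """Load a model of the form {"services": [...], "links": [["a", "b"], ...]}."""
    with open(path) as f:
        model = json.load(f)
    services = set(model["services"])
    links = {tuple(link) for link in model["links"]}
    return services, links


def precision_recall_f1(recovered, ground_truth):
    """Compute precision, recall, and F1 for a set of recovered elements."""
    true_positives = len(recovered & ground_truth)
    precision = true_positives / len(recovered) if recovered else 0.0
    recall = true_positives / len(ground_truth) if ground_truth else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return precision, recall, f1


if __name__ == "__main__":
    # Placeholder file names; each tool's raw output would be converted
    # into the common format before this comparison step.
    recovered_services, recovered_links = load_architecture("tool_output.json")
    truth_services, truth_links = load_architecture("ground_truth.json")

    for name, rec, truth in [("services", recovered_services, truth_services),
                             ("links", recovered_links, truth_links)]:
        p, r, f1 = precision_recall_f1(rec, truth)
        print(f"{name}: precision={p:.2f} recall={r:.2f} f1={f1:.2f}")

Scoring services and links separately keeps the two recovery tasks (component detection and dependency detection) distinguishable, which is a common choice in such comparisons; the actual metrics used by the study may differ.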
Related papers
- A Systematic Mapping Study on Architectural Approaches to Software Performance Analysis [8.629569588488328]
This paper presents a systematic mapping study of 109 papers that integrate software architecture and performance analysis.
We focus on five research questions that provide guidance for researchers and practitioners to gain an in-depth understanding of this research area.
arXiv Detail & Related papers (2024-10-22T19:12:03Z)
- From Exploration to Mastery: Enabling LLMs to Master Tools via Self-Driven Interactions [60.733557487886635]
This paper focuses on bridging the comprehension gap between Large Language Models and external tools.
We propose a novel framework, DRAFT, aimed at Dynamically refining tool documentation.
Extensive experiments on multiple datasets demonstrate that DRAFT's iterative, feedback-based refinement significantly ameliorates documentation quality.
arXiv Detail & Related papers (2024-10-10T17:58:44Z)
- Easing Maintenance of Academic Static Analyzers [0.0]
Mopsa is a static analysis platform that aims at being sound.
This article documents the tools and techniques we have come up with to simplify the maintenance of Mopsa since 2017.
arXiv Detail & Related papers (2024-07-17T11:29:21Z)
- Towards Completeness-Oriented Tool Retrieval for Large Language Models [60.733557487886635]
Real-world systems often incorporate a wide array of tools, making it impractical to input all tools into Large Language Models.
Existing tool retrieval methods primarily focus on semantic matching between user queries and tool descriptions.
We propose a novel model-agnostic COllaborative Learning-based Tool Retrieval approach, COLT, which not only captures the semantic similarities between user queries and tool descriptions but also takes into account the collaborative information of tools.
arXiv Detail & Related papers (2024-05-25T06:41:23Z)
- Full-stack evaluation of Machine Learning inference workloads for RISC-V systems [0.2621434923709917]
This study evaluates the performance of a wide array of machine learning workloads on RISC-V architectures using gem5, an open-source architectural simulator.
Leveraging an open-source compilation toolchain based on Multi-Level Intermediate Representation (MLIR), the research presents benchmarking results specifically focused on deep learning inference workloads.
arXiv Detail & Related papers (2024-05-24T09:24:46Z)
- Deep Configuration Performance Learning: A Systematic Survey and Taxonomy [3.077531983369872]
We conduct a comprehensive review on the topic of deep learning for performance learning of software, covering 1,206 searched papers spanning six indexing services.
Our results outline key statistics, taxonomy, strengths, weaknesses, and optimal usage scenarios for techniques related to the preparation of configuration data.
We also identify the good practices and potentially problematic phenomena from the studies surveyed, together with a comprehensive summary of actionable suggestions and insights into future opportunities within the field.
arXiv Detail & Related papers (2024-03-05T21:05:16Z)
- Planning, Creation, Usage: Benchmarking LLMs for Comprehensive Tool Utilization in Real-World Complex Scenarios [93.68764280953624]
UltraTool is a novel benchmark designed to improve and evaluate Large Language Models' ability in tool utilization.
It emphasizes real-world complexities, demanding accurate, multi-step planning for effective problem-solving.
A key feature of UltraTool is its independent evaluation of planning with natural language, which happens before tool usage.
arXiv Detail & Related papers (2024-01-30T16:52:56Z)
- Charting a Path to Efficient Onboarding: The Role of Software Visualization [49.1574468325115]
The present study explores how familiar managers, leaders, and developers are with software visualization tools.
It combines quantitative and qualitative analyses of data collected from practitioners through questionnaires and semi-structured interviews.
arXiv Detail & Related papers (2024-01-17T21:30:45Z)
- Open Tracing Tools: Overview and Critical Comparison [10.196089289625599]
This paper aims to provide an overview of popular open tracing tools via comparison.
We first identified 30 tools in an objective, systematic, and reproducible manner.
We then characterized each tool, looking at its (1) measured features, (2) popularity in both peer-reviewed literature and online media, and (3) benefits and issues.
arXiv Detail & Related papers (2022-07-14T12:52:32Z)
- Machine Learning for Software Engineering: A Systematic Mapping [73.30245214374027]
The software development industry is rapidly adopting machine learning for transitioning modern-day software systems towards highly intelligent and self-learning systems.
No comprehensive study exists that explores the current state-of-the-art on the adoption of machine learning across software engineering life cycle stages.
This study introduces a machine learning for software engineering (MLSE) taxonomy classifying the state-of-the-art machine learning techniques according to their applicability to various software engineering life cycle stages.
arXiv Detail & Related papers (2020-05-27T11:56:56Z)