Integrating Static Code Analysis Toolchains
- URL: http://arxiv.org/abs/2403.05986v1
- Date: Sat, 9 Mar 2024 18:59:50 GMT
- Title: Integrating Static Code Analysis Toolchains
- Authors: Matthias Kern, Ferhat Erata, Markus Iser, Carsten Sinz, Frederic
Loiret, Stefan Otten, Eric Sax
- Abstract summary: State of the art toolchains support features for either test execution and build automation or traceability between tests, requirements and design information.
Our approach combines all those features and extends traceability to the source code level, incorporating static code analysis.
- Score: 0.8246494848934447
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper proposes an approach for a tool-agnostic and heterogeneous static
code analysis toolchain in combination with an exchange format. This approach
enhances both traceability and comparability of analysis results. State of the
art toolchains support features for either test execution and build automation
or traceability between tests, requirements and design information. Our
approach combines all those features and extends traceability to the source
code level, incorporating static code analysis. As part of our approach we
introduce the "ASSUME Static Code Analysis tool exchange format" that
facilitates the comparability of different static code analysis results. We
demonstrate how this approach enhances the usability and efficiency of static
code analysis in a development process. On the one hand, our approach enables
the exchange of results and evaluations between static code analysis tools. On
the other hand, it enables a complete traceability between requirements,
designs, implementation, and the results of static code analysis. Within our
approach we also propose an OSLC specification for static code analysis tools
and an OSLC communication framework.
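The paper does not reproduce the ASSUME exchange format itself, but the core idea of a tool-agnostic result record can be sketched as follows. The field names and severity levels below are illustrative assumptions, not the actual ASSUME specification:

```python
import json

def make_finding(tool, rule, file, line, severity, message):
    """Build a tool-agnostic record for one static analysis finding.

    The schema (field names, severity levels) is a hypothetical
    illustration of an exchange format, not the ASSUME specification.
    """
    return {
        "tool": tool,          # which analyzer produced the finding
        "rule": rule,          # tool-specific rule identifier
        "location": {"file": file, "line": line},
        "severity": severity,  # e.g. "error", "warning", "info"
        "message": message,
    }

# Two different tools reporting on the same location can now be
# compared uniformly, which is the comparability the paper targets.
findings = [
    make_finding("toolA", "NULL_DEREF", "src/main.c", 42, "error",
                 "possible null pointer dereference"),
    make_finding("toolB", "CWE-476", "src/main.c", 42, "error",
                 "null dereference of 'p'"),
]
print(json.dumps(findings, indent=2))
```

A common record shape like this is also what makes traceability links from requirements or design elements down to individual findings practical, since every tool's output resolves to the same location fields.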
Related papers
- Data Analysis in the Era of Generative AI [56.44807642944589]
This paper explores the potential of AI-powered tools to reshape data analysis, focusing on design considerations and challenges.
We explore how the emergence of large language and multimodal models offers new opportunities to enhance various stages of data analysis workflow.
We then examine human-centered design principles that facilitate intuitive interactions, build user trust, and streamline the AI-assisted analysis workflow across multiple apps.
arXiv Detail & Related papers (2024-09-27T06:31:03Z) - Static Code Analysis with CodeChecker [0.0]
CodeChecker is an open source project that integrates different static analysis tools.
It has a powerful issue management system to make it easier to evaluate the reports of the static analysis tools.
arXiv Detail & Related papers (2024-08-05T03:48:16Z) - Scaling Symbolic Execution to Large Software Systems [0.0]
Symbolic execution is a popular static analysis technique used both in program verification and in bug detection software.
We focus on an error finding framework called the Clang Static Analyzer, and the infrastructure built around it named CodeChecker.
arXiv Detail & Related papers (2024-08-04T02:54:58Z) - OSL-ActionSpotting: A Unified Library for Action Spotting in Sports Videos [56.393522913188704]
We introduce OSL-ActionSpotting, a Python library that unifies different action spotting algorithms to streamline research and applications in sports video analytics.
We successfully integrated three cornerstone action spotting methods into OSL-ActionSpotting, achieving performance metrics that match those of the original, disparate implementations.
arXiv Detail & Related papers (2024-07-01T13:17:37Z) - STALL+: Boosting LLM-based Repository-level Code Completion with Static Analysis [8.059606338318538]
This work performs the first study on the static analysis integration in LLM-based repository-level code completion.
We first implement a framework STALL+, which supports an extendable and customizable integration of multiple static analysis strategies.
Our findings show that integrating file-level dependencies in the prompting phase performs best, while integration in the post-processing phase performs worst.
arXiv Detail & Related papers (2024-06-14T13:28:31Z) - Customizing Static Analysis using Codesearch [1.7205106391379021]
Datalog is a commonly used language for describing a range of static analysis applications.
We aim to make building custom static analysis tools much easier for developers, while at the same time providing a familiar framework for application security and static analysis experts.
Our approach introduces a language called StarLang, a variant of Datalog which only includes programs with a fast runtime.
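As an illustration of the style of analysis Datalog expresses (this is generic Datalog, not StarLang, whose syntax is not given here), the classic reachability program `reach(X,Y) :- edge(X,Y). reach(X,Y) :- edge(X,Z), reach(Z,Y).` can be evaluated bottom-up in a few lines:

```python
def reachability(edges):
    """Naive bottom-up evaluation of the Datalog program:
        reach(X, Y) :- edge(X, Y).
        reach(X, Y) :- edge(X, Z), reach(Z, Y).
    Iterates rule application to a fixed point, as a Datalog
    engine would (real engines use semi-naive evaluation).
    """
    reach = set(edges)  # base rule: every edge is reachable
    changed = True
    while changed:
        changed = False
        # recursive rule: join edge(X, Z) with reach(Z, Y)
        new = {(x, y) for (x, z) in edges
                      for (z2, y) in reach if z == z2}
        if not new <= reach:
            reach |= new
            changed = True
    return reach

# A small call graph: which functions can transitively call which?
edges = {("main", "parse"), ("parse", "lex"), ("lex", "error")}
print(sorted(reachability(edges)))
```

Custom static analyses (taint tracking, call-graph queries) follow this same relational pattern, which is why Datalog variants are a natural base for a codesearch language.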
arXiv Detail & Related papers (2024-04-19T09:50:02Z) - Contextualization Distillation from Large Language Model for Knowledge
Graph Completion [51.126166442122546]
We introduce the Contextualization Distillation strategy, a plug-in-and-play approach compatible with both discriminative and generative KGC frameworks.
Our method begins by instructing large language models to transform compact, structural triplets into context-rich segments.
Comprehensive evaluations across diverse datasets and KGC techniques highlight the efficacy and adaptability of our approach.
arXiv Detail & Related papers (2024-01-28T08:56:49Z) - T-Eval: Evaluating the Tool Utilization Capability of Large Language
Models Step by Step [69.64348626180623]
Large language models (LLMs) have achieved remarkable performance on various NLP tasks.
How to evaluate and analyze the tool-utilization capability of LLMs, however, is still under-explored.
We introduce T-Eval to evaluate the tool utilization capability step by step.
arXiv Detail & Related papers (2023-12-21T17:02:06Z) - Distributed intelligence on the Edge-to-Cloud Continuum: A systematic
literature review [62.997667081978825]
This review aims at providing a comprehensive vision of the main state-of-the-art libraries and frameworks for machine learning and data analytics available today.
The main simulation, emulation, deployment systems, and testbeds for experimental research on the Edge-to-Cloud Continuum available today are also surveyed.
arXiv Detail & Related papers (2022-04-29T08:06:05Z) - Comparative Code Structure Analysis using Deep Learning for Performance
Prediction [18.226950022938954]
This paper aims to assess the feasibility of using purely static information (e.g., abstract syntax tree or AST) of applications to predict performance change based on the change in code structure.
Our evaluations of several deep embedding learning methods demonstrate that tree-based Long Short-Term Memory (LSTM) models can leverage the hierarchical structure of source code to discover latent representations and achieve up to 84% (individual problem) and 73% (combined dataset with multiple problems) accuracy in predicting the change in performance.
arXiv Detail & Related papers (2021-02-12T16:59:12Z) - SAMBA: Safe Model-Based & Active Reinforcement Learning [59.01424351231993]
SAMBA is a framework for safe reinforcement learning that combines aspects from probabilistic modelling, information theory, and statistics.
We evaluate our algorithm on a variety of safe dynamical system benchmarks involving both low and high-dimensional state representations.
We provide intuition as to the effectiveness of the framework by a detailed analysis of our active metrics and safety constraints.
arXiv Detail & Related papers (2020-06-12T10:40:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information (including all content) and is not responsible for any consequences.