Formal Runtime Error Detection During Development in the Automotive
Industry
- URL: http://arxiv.org/abs/2310.16468v1
- Date: Wed, 25 Oct 2023 08:44:52 GMT
- Title: Formal Runtime Error Detection During Development in the Automotive
Industry
- Authors: Jesko Hecking-Harbusch, Jochen Quante, Maximilian Schlund
- Abstract summary: For safety-relevant automotive software, it is recommended to use sound static program analysis to prove the absence of runtime errors.
The analysis is often perceived as burdensome by developers because it runs for a long time and produces many false alarms.
In this case study, we present how automatically inferred contracts add context to module-level analysis.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modern automotive software is highly complex and consists of millions of lines
of code. For safety-relevant automotive software, it is recommended to use
sound static program analysis to prove the absence of runtime errors. However,
the analysis is often perceived as burdensome by developers because it runs for
a long time and produces many false alarms. If the analysis is performed on the
integrated software system, there is a scalability problem, and the analysis is
only possible at a late stage of development. If the analysis is performed on
individual modules instead, this is possible at an early stage of development,
but the usage context of modules is missing, which leads to too many false
alarms. In this case study, we present how automatically inferred contracts add
context to module-level analysis. Leveraging these contracts with an
off-the-shelf tool for abstract interpretation makes module-level analysis more
precise and more scalable. We evaluate this framework quantitatively on
industrial case studies from different automotive domains. Additionally, we
report on our qualitative experience for the verification of large-scale
embedded software projects.
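To make the core idea concrete, here is a minimal C sketch of how an inferred contract can add calling context to a module-level analysis. The ASSUME macro and the analysis_driver function are illustrative assumptions, not the paper's actual tooling; real abstract interpreters expose their own intrinsics for constraining entry-point inputs.

```c
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical assume macro: the unreachable loop prunes every analysis
 * path on which the condition does not hold. Real abstract interpreters
 * provide their own intrinsic for this purpose. */
#define ASSUME(cond) do { if (!(cond)) { for (;;) { } } } while (0)

/* Module under analysis: averages the first n samples of buf. Analyzed
 * in isolation, n could be 0 (division by zero) and buf could be NULL or
 * too short, so a sound analyzer must raise alarms here. */
static int32_t average(const int32_t *buf, size_t n) {
    int64_t sum = 0;
    for (size_t i = 0; i < n; i++) {
        sum += buf[i];                   /* alarm without context */
    }
    return (int32_t)(sum / (int64_t)n);  /* alarm without context */
}

/* Analysis driver encoding a contract inferred from the call sites:
 * callers always pass a valid 64-sample buffer and 1 <= n <= 64.
 * Under these assumptions, the alarms above disappear. */
void analysis_driver(const int32_t buf[64], size_t n) {
    ASSUME(buf != NULL);
    ASSUME(n >= 1 && n <= 64);
    (void)average(buf, n);
}

int main(void) {
    int32_t samples[64] = { 9, 18, 27 };
    analysis_driver(samples, 3);
    printf("average of first 3 samples = %d\n", (int)average(samples, 3));
    return 0;
}
```

Analyzing average alone forces a sound tool to report on all inputs; analyzing analysis_driver instead restricts the abstract state to inputs that actually occur, which is how inferred contracts reduce false alarms.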
Related papers
- Leveraging Slither and Interval Analysis to build a Static Analysis Tool [0.0]
This paper presents our progress toward finding defects that are sometimes not detected, or not completely detected, by state-of-the-art analysis tools.
We developed a working solution built on top of Slither that uses interval analysis to evaluate the contract state during the execution of each instruction.
arXiv Detail & Related papers (2024-10-31T09:28:09Z)
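As a rough illustration of the interval-analysis idea used in the Slither-based tool above (in C rather than Solidity, and not Slither's actual API), the following sketch evaluates one instruction over an interval abstract domain and flags a potential overflow:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* A value interval [lo, hi] tracked per variable. */
typedef struct { int64_t lo, hi; } interval_t;

/* Transfer function for addition: x + y lies in [x.lo + y.lo, x.hi + y.hi]. */
static interval_t interval_add(interval_t x, interval_t y) {
    return (interval_t){ x.lo + y.lo, x.hi + y.hi };
}

/* Flag values that may leave the range of an unsigned 8-bit slot. */
static bool may_overflow_u8(interval_t x) {
    return x.lo < 0 || x.hi > 255;
}

int main(void) {
    interval_t a = { 0, 200 };    /* e.g., an unconstrained input in 0..200 */
    interval_t b = { 100, 100 };  /* a constant */

    /* Evaluate the instruction "c = a + b" on the abstract state. */
    interval_t c = interval_add(a, b);

    if (may_overflow_u8(c))       /* [100, 300] exceeds 255: report it */
        printf("potential uint8 overflow: [%lld, %lld]\n",
               (long long)c.lo, (long long)c.hi);
    return 0;
}
```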
- Codev-Bench: How Do LLMs Understand Developer-Centric Code Completion? [60.84912551069379]
We present the Code-Development Benchmark (Codev-Bench), a fine-grained, real-world, repository-level, and developer-centric evaluation framework.
Codev-Agent is an agent-based system that automates repository crawling, constructs execution environments, extracts dynamic calling chains from existing unit tests, and generates new test samples to avoid data leakage.
arXiv Detail & Related papers (2024-10-02T09:11:10Z)
- Scaling Symbolic Execution to Large Software Systems [0.0]
Symbolic execution is a popular static analysis technique used both in program verification and in bug detection.
We focus on an error finding framework called the Clang Static Analyzer, and the infrastructure built around it named CodeChecker.
arXiv Detail & Related papers (2024-08-04T02:54:58Z)
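The kind of defect such path-sensitive exploration uncovers can be shown with a small C example (constructed for illustration; it is not taken from the paper). The analyzer forks at each branch and carries the branch constraint along every path, so a checker such as the Clang Static Analyzer's core.DivideZero notices the path on which the guard fails:

```c
#include <stdio.h>

/* Symbolic execution forks the analysis at the branch: on one path the
 * constraint d != 0 holds, on the other d == 0 is recorded. */
int scaled_ratio(int n, int d) {
    int r = 0;
    if (d != 0)
        r = n / d;    /* safe: guarded by d != 0 on this path */
    return r + n / d; /* defect: on the path with d == 0 this divides by zero */
}

int main(void) {
    printf("%d\n", scaled_ratio(10, 2)); /* fine at runtime with d = 2 */
    return 0;
}
```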
- Easing Maintenance of Academic Static Analyzers [0.0]
Mopsa is a static analysis platform that aims at being sound.
This article documents the tools and techniques we have come up with to simplify the maintenance of Mopsa since 2017.
arXiv Detail & Related papers (2024-07-17T11:29:21Z)
- Leveraging Large Language Models for Efficient Failure Analysis in Game Development [47.618236610219554]
This paper proposes a new approach to automatically identify which change in the code caused a test to fail.
The method leverages Large Language Models (LLMs) to associate error messages with the corresponding code changes causing the failure.
Our approach reaches an accuracy of 71% in our newly created dataset, which comprises issues reported by developers at EA over a period of one year.
arXiv Detail & Related papers (2024-06-11T09:21:50Z)
- Customizing Static Analysis using Codesearch [1.7205106391379021]
A commonly used language to describe a range of static analysis applications is Datalog.
We aim to make building custom static analysis tools much easier for developers, while at the same time providing a familiar framework for application security and static analysis experts.
Our approach introduces a language called StarLang, a variant of Datalog which only includes programs with a fast runtime.
arXiv Detail & Related papers (2024-04-19T09:50:02Z)
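To give a flavor of what such Datalog-style analyses compute (sketched in C rather than StarLang, whose syntax the abstract does not show), here is a naive bottom-up fixpoint evaluation of the classic reachability program reach(x,y) :- edge(x,y) and reach(x,z) :- reach(x,y), edge(y,z):

```c
#include <stdbool.h>
#include <stdio.h>

#define N 4  /* number of nodes, e.g. functions in a call graph */

int main(void) {
    /* Facts: edge(0,1), edge(1,2), edge(2,3). */
    bool edge[N][N] = { false }, reach[N][N] = { false };
    edge[0][1] = edge[1][2] = edge[2][3] = true;

    /* Rule 1: reach(x, y) :- edge(x, y). */
    for (int x = 0; x < N; x++)
        for (int y = 0; y < N; y++)
            reach[x][y] = edge[x][y];

    /* Rule 2: reach(x, z) :- reach(x, y), edge(y, z).
     * Re-apply until no new facts are derived (the fixpoint). */
    bool changed = true;
    while (changed) {
        changed = false;
        for (int x = 0; x < N; x++)
            for (int y = 0; y < N; y++)
                for (int z = 0; z < N; z++)
                    if (reach[x][y] && edge[y][z] && !reach[x][z]) {
                        reach[x][z] = true;
                        changed = true;
                    }
    }

    printf("0 reaches 3: %s\n", reach[0][3] ? "yes" : "no");
    return 0;
}
```

A Datalog engine performs this kind of iteration for you; restricting the language, as StarLang does, is what keeps such fixpoints fast.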
- It Is Time To Steer: A Scalable Framework for Analysis-driven Attack Graph Generation [50.06412862964449]
Attack Graphs (AGs) represent the best-suited solution to support cyber risk assessment for multi-step attacks on computer networks.
Current solutions propose to address the generation problem from the algorithmic perspective and postulate the analysis only after the generation is complete.
This paper rethinks the classic AG analysis through a novel workflow in which the analyst can query the system anytime.
arXiv Detail & Related papers (2023-12-27T10:44:58Z)
- The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models [18.026567399243]
Large Language Models (LLMs) offer a promising alternative to static analysis.
In this paper, we take a deep dive into the open space of LLM-assisted static analysis.
We develop LLift, a fully automated framework that interfaces with both a static analysis tool and an LLM.
arXiv Detail & Related papers (2023-08-01T02:57:43Z)
- D2A: A Dataset Built for AI-Based Vulnerability Detection Methods Using Differential Analysis [55.15995704119158]
We propose D2A, a differential analysis based approach to label issues reported by static analysis tools.
We use D2A to generate a large labeled dataset to train models for vulnerability identification.
arXiv Detail & Related papers (2021-02-16T07:46:53Z)
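A minimal C sketch of the differential-labeling idea behind D2A, with toy data standing in for real analyzer reports (the report format and labels here are illustrative, not D2A's actual schema):

```c
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* A static analysis report, identified by checker and location. */
typedef struct { const char *id; } report_t;

static bool contains(const report_t *set, int n, const char *id) {
    for (int i = 0; i < n; i++)
        if (strcmp(set[i].id, id) == 0)
            return true;
    return false;
}

int main(void) {
    /* Toy reports from analyzing the versions before and after a
     * bug-fixing commit (D2A mines real commit histories instead). */
    report_t before[] = { { "NULL_DEREF:foo.c:42" }, { "OVERFLOW:bar.c:7" } };
    report_t after[]  = { { "OVERFLOW:bar.c:7" } };

    /* Differential labeling: a report that disappears after the fix is
     * likely a true positive; one that persists is likely false. */
    for (int i = 0; i < 2; i++) {
        bool fixed = !contains(after, 1, before[i].id);
        printf("%s -> %s\n", before[i].id,
               fixed ? "likely true positive" : "likely false positive");
    }
    return 0;
}
```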
- DirectDebug: Automated Testing and Debugging of Feature Models [55.41644538483948]
Variability models (e.g., feature models) are a common way to represent the variability and commonality of software artifacts.
Complex and often large-scale feature models can become faulty, i.e., they no longer represent the expected variability properties of the underlying software artifact.
arXiv Detail & Related papers (2021-02-11T11:22:20Z)
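The following toy C sketch illustrates how such a fault in a feature model can be exposed by testing it against configurations known to be valid (the encoding as a boolean predicate is an assumption for illustration; DirectDebug operates on real feature-model representations):

```c
#include <stdbool.h>
#include <stdio.h>

/* Toy feature model for an infotainment product line, encoded as a
 * boolean predicate. The last constraint is an unintended mistake that
 * makes the model faulty. */
typedef struct { bool radio, navigation, gps, aux; } config_t;

static bool model_accepts(config_t c) {
    if (!c.radio)               return false;  /* Radio is mandatory      */
    if (c.navigation && !c.gps) return false;  /* Navigation requires GPS */
    if (c.navigation && c.aux)  return false;  /* faulty: wrong exclusion */
    return true;
}

int main(void) {
    /* Test case: this configuration ships in a real product, so the
     * model is expected to accept it. */
    config_t shipped = { .radio = true, .navigation = true,
                         .gps = true, .aux = true };

    if (!model_accepts(shipped))
        printf("faulty model: a shipped configuration is rejected\n");
    return 0;
}
```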
- Self-Supervised Log Parsing [59.04636530383049]
Large-scale software systems generate massive volumes of semi-structured log records.
Existing approaches rely on log-specific heuristics or manual rule extraction.
We propose NuLog that utilizes a self-supervised learning model and formulates the parsing task as masked language modeling.
arXiv Detail & Related papers (2020-03-17T19:25:25Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.