Formal Runtime Error Detection During Development in the Automotive
Industry
- URL: http://arxiv.org/abs/2310.16468v1
- Date: Wed, 25 Oct 2023 08:44:52 GMT
- Title: Formal Runtime Error Detection During Development in the Automotive
Industry
- Authors: Jesko Hecking-Harbusch, Jochen Quante, Maximilian Schlund
- Abstract summary: For safety-relevant automotive software, it is recommended to use sound static program analysis to prove the absence of runtime errors.
The analysis is often perceived as burdensome by developers because it runs for a long time and produces many false alarms.
In this case study, we present how automatically inferred contracts add context to module-level analysis.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modern automotive software is highly complex and consists of millions of lines
of code. For safety-relevant automotive software, it is recommended to use
sound static program analysis to prove the absence of runtime errors. However,
the analysis is often perceived as burdensome by developers because it runs for
a long time and produces many false alarms. If the analysis is performed on the
integrated software system, there is a scalability problem, and the analysis is
only possible at a late stage of development. If the analysis is performed on
individual modules instead, this is possible at an early stage of development,
but the usage context of modules is missing, which leads to too many false
alarms. In this case study, we present how automatically inferred contracts add
context to module-level analysis. Leveraging these contracts with an
off-the-shelf tool for abstract interpretation makes module-level analysis more
precise and more scalable. We evaluate this framework quantitatively on
industrial case studies from different automotive domains. Additionally, we
report on our qualitative experience for the verification of large-scale
embedded software projects.
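The effect of inferred contracts on module-level analysis can be illustrated with a toy interval-domain check (a sketch only: the paper uses an off-the-shelf abstract interpretation tool on C code, and the function, bounds, and contract below are invented for illustration):

```python
# Illustrative sketch (not the paper's tooling): interval-domain analysis of a
# module-level function, with and without an inferred caller contract.

NEG_INF, POS_INF = float("-inf"), float("inf")

def analyze_divisor(lo, hi):
    """Return 'alarm' if the divisor interval [lo, hi] may contain 0."""
    return "alarm" if lo <= 0 <= hi else "ok"

# Module function under analysis: ratio(x) = 100 / x.
# Analyzed in isolation, x ranges over the whole domain -> possible div-by-zero.
no_contract = analyze_divisor(NEG_INF, POS_INF)

# A contract inferred from the module's call sites (e.g. requires 1 <= x <= 250)
# narrows the interval and discharges the false alarm.
with_contract = analyze_divisor(1, 250)

print(no_contract, with_contract)  # alarm ok
```

This is the scalability trade-off the abstract describes: the contract supplies the usage context that whole-system analysis would otherwise have to compute.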
Related papers
- Easing Maintenance of Academic Static Analyzers [0.0]
Mopsa is a static analysis platform that aims at being sound.
This article documents the tools and techniques we have come up with to simplify the maintenance of Mopsa since 2017.
arXiv Detail & Related papers (2024-07-17T11:29:21Z)
- Leveraging Large Language Models for Efficient Failure Analysis in Game Development [47.618236610219554]
This paper proposes a new approach to automatically identify which change in the code caused a test to fail.
The method leverages Large Language Models (LLMs) to associate error messages with the corresponding code changes causing the failure.
Our approach reaches an accuracy of 71% in our newly created dataset, which comprises issues reported by developers at EA over a period of one year.
arXiv Detail & Related papers (2024-06-11T09:21:50Z)
- Customizing Static Analysis using Codesearch [1.7205106391379021]
A commonly used language to describe a range of static analysis applications is Datalog.
We aim to make building custom static analysis tools much easier for developers, while at the same time providing a familiar framework for application security and static analysis experts.
Our approach introduces a language called StarLang, a variant of Datalog which only includes programs with a fast runtime.
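The Datalog style of static analysis query can be sketched with a naive bottom-up fixpoint (a generic illustration in Python, not StarLang itself; the call-graph facts are invented):

```python
# Toy Datalog-style evaluation of transitive reachability:
#   reach(X, Y) :- edge(X, Y).
#   reach(X, Z) :- reach(X, Y), edge(Y, Z).

def reachability(edges):
    """Naive bottom-up evaluation: apply the rules until a fixpoint."""
    reach = set(edges)
    changed = True
    while changed:
        changed = False
        for (x, y) in list(reach):
            for (y2, z) in edges:
                if y == y2 and (x, z) not in reach:
                    reach.add((x, z))
                    changed = True
    return reach

# Hypothetical call-graph facts: can an error path be reached from main?
calls = {("main", "parse"), ("parse", "lex"), ("lex", "error")}
print(("main", "error") in reachability(calls))  # True
```

Restricting the language so that such queries always terminate quickly is, per the abstract, the point of StarLang's design.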
arXiv Detail & Related papers (2024-04-19T09:50:02Z)
- Automating SBOM Generation with Zero-Shot Semantic Similarity [2.169562514302842]
A Software-Bill-of-Materials (SBOM) is a comprehensive inventory detailing a software application's components and dependencies.
We propose an automated method for generating SBOMs to prevent disastrous supply-chain attacks.
Our test results are compelling, demonstrating the model's strong performance in the zero-shot classification task.
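Similarity-based component matching can be sketched as follows (the paper uses zero-shot classification with a trained language model; the character-n-gram cosine below is only a hypothetical stand-in, and the catalog entries are invented):

```python
from collections import Counter
from math import sqrt

def ngrams(s, n=3):
    """Bag of character trigrams, as a cheap stand-in for an embedding."""
    s = s.lower()
    return Counter(s[i:i + n] for i in range(len(s) - n + 1))

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a)
    return dot / (sqrt(sum(v * v for v in a.values())) *
                  sqrt(sum(v * v for v in b.values())))

def classify(dependency, catalog):
    # Zero-shot in the sense that no per-component training happens:
    # the raw dependency string is matched to catalog labels by similarity.
    return max(catalog, key=lambda name: cosine(ngrams(dependency), ngrams(name)))

catalog = ["openssl", "zlib", "libxml2"]
print(classify("OpenSSL 3.0.2 15 Mar 2022", catalog))  # openssl
```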
arXiv Detail & Related papers (2024-02-03T18:14:13Z)
- It Is Time To Steer: A Scalable Framework for Analysis-driven Attack Graph Generation [50.06412862964449]
Attack Graph (AG) represents the best-suited solution to model and analyze multi-step attacks on computer networks.
This paper introduces an analysis-driven framework for AG generation.
It enables real-time attack path analysis before the completion of the AG generation with a quantifiable statistical significance.
arXiv Detail & Related papers (2023-12-27T10:44:58Z)
- The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models [18.026567399243]
Large Language Models (LLMs) offer a promising alternative to static analysis.
In this paper, we take a deep dive into the open space of LLM-assisted static analysis.
We develop LLift, a fully automated framework that interfaces with both a static analysis tool and an LLM.
arXiv Detail & Related papers (2023-08-01T02:57:43Z)
- Robust and Transferable Anomaly Detection in Log Data using Pre-Trained Language Models [59.04636530383049]
Anomalies or failures in large computer systems, such as the cloud, have an impact on a large number of users.
We propose a framework for anomaly detection in log data, as a major troubleshooting source of system information.
arXiv Detail & Related papers (2021-02-23T09:17:05Z)
- D2A: A Dataset Built for AI-Based Vulnerability Detection Methods Using Differential Analysis [55.15995704119158]
We propose D2A, a differential analysis based approach to label issues reported by static analysis tools.
We use D2A to generate a large labeled dataset to train models for vulnerability identification.
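The differential-labeling idea can be sketched as a diff over analyzer reports (a toy in the spirit of the abstract, not D2A's actual pipeline; the warning tuples and file names are invented):

```python
# Illustrative differential labeling: run a static analyzer on the version
# before and after a bug-fixing commit, then label warnings that disappear
# after the fix as likely true positives.

def label_warnings(before, after):
    before, after = set(before), set(after)
    return {
        "likely_bug": sorted(before - after),          # removed by the fix
        "likely_false_alarm": sorted(before & after),  # survives the fix
    }

before = {("overflow", "buf.c:42"), ("null-deref", "io.c:7")}
after = {("null-deref", "io.c:7")}
labels = label_warnings(before, after)
print(labels["likely_bug"])  # [('overflow', 'buf.c:42')]
```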
arXiv Detail & Related papers (2021-02-16T07:46:53Z)
- DirectDebug: Automated Testing and Debugging of Feature Models [55.41644538483948]
Variability models (e.g., feature models) are a common way for the representation of variabilities and commonalities of software artifacts.
Complex and often large-scale feature models can become faulty, i.e., fail to represent the expected variability properties of the underlying software artifact.
arXiv Detail & Related papers (2021-02-11T11:22:20Z)
- Neural Software Analysis [18.415191504144577]
Many software development problems can be addressed by program analysis tools.
Recent work has shown tremendous success through an alternative way of creating developer tools, which we call neural software analysis.
The key idea is to train a neural machine learning model on numerous code examples, which, once trained, makes predictions about previously unseen code.
arXiv Detail & Related papers (2020-11-16T14:32:09Z)
- Self-Supervised Log Parsing [59.04636530383049]
Large-scale software systems generate massive volumes of semi-structured log records.
Existing approaches rely on log-specifics or manual rule extraction.
We propose NuLog that utilizes a self-supervised learning model and formulates the parsing task as masked language modeling.
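The parsing goal can be illustrated with a crude positional-frequency heuristic (NuLog itself trains a self-supervised neural model via masked language modeling; this stand-in and the log lines are invented for illustration):

```python
# Toy log-template extraction: tokens that vary across records at the same
# position are treated as parameters -- the part a masked-language-modeling
# parser like NuLog would learn to recognize as unpredictable.

def extract_template(lines):
    template = []
    for col in zip(*(line.split() for line in lines)):
        template.append(col[0] if len(set(col)) == 1 else "<*>")
    return " ".join(template)

logs = [
    "Connection from 10.0.0.1 closed",
    "Connection from 10.0.0.7 closed",
    "Connection from 172.16.0.3 closed",
]
print(extract_template(logs))  # Connection from <*> closed
```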
arXiv Detail & Related papers (2020-03-17T19:25:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.