Generating Automotive Code: Large Language Models for Software Development and Verification in Safety-Critical Systems
- URL: http://arxiv.org/abs/2506.04038v1
- Date: Wed, 04 Jun 2025 15:01:59 GMT
- Title: Generating Automotive Code: Large Language Models for Software Development and Verification in Safety-Critical Systems
- Authors: Sven Kirchner, Alois C. Knoll
- Abstract summary: The framework uses Large Language Models (LLMs) to automate code generation in languages such as C++. A feedback-driven pipeline ensures the integration of test, simulation and verification for compliance with safety standards.
- Score: 21.595590728109226
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Developing safety-critical automotive software presents significant challenges due to increasing system complexity and strict regulatory demands. This paper proposes a novel framework integrating Generative Artificial Intelligence (GenAI) into the Software Development Lifecycle (SDLC). The framework uses Large Language Models (LLMs) to automate code generation in languages such as C++, incorporating safety-focused practices such as static verification, test-driven development and iterative refinement. A feedback-driven pipeline ensures the integration of test, simulation and verification for compliance with safety standards. The framework is validated through the development of an Adaptive Cruise Control (ACC) system. Comparative benchmarking of LLMs ensures optimal model selection for accuracy and reliability. Results demonstrate that the framework enables automatic code generation while ensuring compliance with safety-critical requirements, systematically integrating GenAI into automotive software engineering. This work advances the use of AI in safety-critical domains, bridging the gap between state-of-the-art generative models and real-world safety requirements.
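As an illustration of the kind of feedback-driven pipeline the abstract describes, the sketch below shows one way a generate-verify-refine loop could be organized. It is a minimal Python sketch under assumed interfaces: `LLM.complete`, `generate_code`, and `verify` are hypothetical names rather than the authors' actual framework API, and the verification step merely stands in for the static analysis, test-driven development, and simulation checks mentioned in the paper.

```python
# Hypothetical sketch of a feedback-driven pipeline: an LLM drafts C++ code,
# static verification and tests check it, and failure reports are fed back
# into the prompt until the code passes or a retry budget is exhausted.
# All identifiers are illustrative assumptions, not the authors' actual API.

from dataclasses import dataclass
from typing import Optional, Protocol


class LLM(Protocol):
    def complete(self, prompt: str) -> str: ...


@dataclass
class VerificationResult:
    passed: bool
    report: str  # compiler, static-analysis, and test output fed back to the LLM


def generate_code(llm: LLM, requirement: str, feedback: str = "") -> str:
    """Ask the LLM for a C++ implementation of the requirement."""
    prompt = f"Implement the following requirement in C++:\n{requirement}\n"
    if feedback:
        prompt += (
            f"\nThe previous attempt failed verification:\n{feedback}\n"
            "Fix the reported issues."
        )
    return llm.complete(prompt)


def verify(cpp_source: str) -> VerificationResult:
    """Placeholder for static verification plus test and simulation runs."""
    raise NotImplementedError  # e.g. compile, run static analysis, execute unit tests


def develop(llm: LLM, requirement: str, max_iterations: int = 5) -> Optional[str]:
    """Iteratively generate and refine code until verification passes."""
    feedback = ""
    for _ in range(max_iterations):
        candidate = generate_code(llm, requirement, feedback)
        result = verify(candidate)
        if result.passed:
            return candidate      # candidate satisfies all checks
        feedback = result.report  # refine the next attempt with the feedback
    return None                   # budget exhausted: escalate to a human reviewer
```

In this reading, the loop ends either with a verified C++ artifact (e.g. the ACC controller used as the paper's case study) or by escalating to a human reviewer once the retry budget runs out; the sketch only captures the control flow, not the paper's specific tooling.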
Related papers
- GenAI for Automotive Software Development: From Requirements to Wheels [3.2821049498759094]
This paper introduces a GenAI-empowered approach to automated development of automotive software. The process starts with requirements as input, while the main generated outputs are test scenario code for a simulation environment. The approach aims at shorter compliance and re-engineering cycles, as well as reduced development and testing time for ADAS-related capabilities.
arXiv Detail & Related papers (2025-07-24T09:17:13Z)
- Training Language Models to Generate Quality Code with Program Analysis Feedback [66.0854002147103]
Code generation with large language models (LLMs) is increasingly adopted in production but fails to ensure code quality. We propose REAL, a reinforcement learning framework that incentivizes LLMs to generate production-quality code.
arXiv Detail & Related papers (2025-05-28T17:57:47Z)
- Automating Safety Enhancement for LLM-based Agents with Synthetic Risk Scenarios [77.86600052899156]
Large Language Model (LLM)-based agents are increasingly deployed in real-world applications. We propose AutoSafe, the first framework that systematically enhances agent safety through fully automated synthetic data generation. We show that AutoSafe boosts safety scores by 45% on average and achieves a 28.91% improvement on real-world tasks.
arXiv Detail & Related papers (2025-05-23T10:56:06Z)
- Generative AI for Autonomous Driving: Frontiers and Opportunities [145.6465312554513]
This survey delivers a comprehensive synthesis of the emerging role of GenAI across the autonomous driving stack. We begin by distilling the principles and trade-offs of modern generative modeling, encompassing VAEs, GANs, Diffusion Models, and Large Language Models. We categorize practical applications, such as synthetic data generation, end-to-end driving strategies, high-fidelity digital twin systems, smart transportation networks, and cross-domain transfer to embodied AI.
arXiv Detail & Related papers (2025-05-13T17:59:20Z)
- Engineering Risk-Aware, Security-by-Design Frameworks for Assurance of Large-Scale Autonomous AI Models [0.0]
This paper presents an enterprise-level, risk-aware, security-by-design approach for large-scale autonomous AI systems. We detail a unified pipeline that delivers provable guarantees of model behavior under adversarial and operational stress. Case studies in national security, open-source model governance, and industrial automation demonstrate measurable reductions in vulnerability and compliance overhead.
arXiv Detail & Related papers (2025-05-09T20:14:53Z)
- Automating Automotive Software Development: A Synergy of Generative AI and Formal Methods [4.469600208122469]
We propose to combine GenAI with model-driven engineering to automate automotive software development. Our approach uses LLMs to convert free-text requirements into event chain descriptions and to generate platform-independent software components. As a proof of concept, we used GPT-4o to implement our method and tested it in the CARLA simulation environment with ROS2.
arXiv Detail & Related papers (2025-05-05T09:29:13Z)
- A Path Less Traveled: Reimagining Software Engineering Automation via a Neurosymbolic Paradigm [9.900581015679935]
We propose Neurosymbolic Software Engineering as a promising paradigm combining neural learning with symbolic (rule-based) reasoning. This hybrid methodology aims to enhance efficiency, reliability, and transparency in AI-driven software engineering.
arXiv Detail & Related papers (2025-05-04T22:10:21Z)
- SafeAuto: Knowledge-Enhanced Safe Autonomous Driving with Multimodal Foundation Models [63.71984266104757]
Multimodal Large Language Models (MLLMs) can process both visual and textual data. We propose SafeAuto, a novel framework that enhances MLLM-based autonomous driving systems by incorporating both unstructured and structured knowledge.
arXiv Detail & Related papers (2025-02-28T21:53:47Z)
- Concept-Guided LLM Agents for Human-AI Safety Codesign [6.603483691167379]
Generative AI is increasingly important in software engineering, including safety engineering, where its use ensures that software does not cause harm to people.
It is crucial to develop more advanced and sophisticated approaches that can effectively address the complexities and safety concerns of software systems.
We present an efficient, hybrid strategy to leverage Large Language Models for safety analysis and Human-AI codesign.
arXiv Detail & Related papers (2024-04-03T11:37:01Z)
- On STPA for Distributed Development of Safe Autonomous Driving: An Interview Study [0.7851536646859475]
System-Theoretic Process Analysis (STPA) is a novel method applied in safety-related fields like defense and aerospace.
STPA assumes prerequisites that are not fully valid in automotive system engineering with distributed system development and multi-abstraction design levels.
This can be seen as a maintainability challenge in continuous development and deployment.
arXiv Detail & Related papers (2024-03-14T15:56:02Z)
- Empowering Autonomous Driving with Large Language Models: A Safety Perspective [82.90376711290808]
This paper explores the integration of Large Language Models (LLMs) into Autonomous Driving systems.
LLMs are intelligent decision-makers in behavioral planning, augmented with a safety verifier shield for contextual safety learning.
We present two key studies in a simulated environment: an adaptive LLM-conditioned Model Predictive Control (MPC) and an LLM-enabled interactive behavior planning scheme with a state machine.
arXiv Detail & Related papers (2023-11-28T03:13:09Z)
- Evaluating Model-free Reinforcement Learning toward Safety-critical Tasks [70.76757529955577]
This paper revisits prior work in this scope from the perspective of state-wise safe RL.
We propose Unrolling Safety Layer (USL), a joint method that combines safety optimization and safety projection.
To facilitate further research in this area, we reproduce related algorithms in a unified pipeline and incorporate them into SafeRL-Kit.
arXiv Detail & Related papers (2022-12-12T06:30:17Z)