LLM-Based Approach for Enhancing Maintainability of Automotive Architectures
- URL: http://arxiv.org/abs/2509.12798v1
- Date: Tue, 16 Sep 2025 08:17:41 GMT
- Title: LLM-Based Approach for Enhancing Maintainability of Automotive Architectures
- Authors: Nenad Petrovic, Lukasz Mazur, Alois Knoll
- Abstract summary: In this paper, we explore the potential of Large Language Models (LLMs) when it comes to the automation of tasks and processes that aim to increase the flexibility of automotive systems. Three case studies are considered as outcomes of early-stage research: 1) updates, hardware abstraction, and compliance, 2) interface compatibility checking, and 3) architecture modification suggestions. For proof-of-concept implementation, we rely on OpenAI's GPT-4o model.
- Score: 29.585470064353014
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: There are many bottlenecks that decrease the flexibility of automotive systems, making their long-term maintenance, as well as updates and extensions in later lifecycle phases increasingly difficult, mainly due to long re-engineering, standardization, and compliance procedures, as well as heterogeneity and numerosity of devices and underlying software components involved. In this paper, we explore the potential of Large Language Models (LLMs) when it comes to the automation of tasks and processes that aim to increase the flexibility of automotive systems. Three case studies towards achieving this goal are considered as outcomes of early-stage research: 1) updates, hardware abstraction, and compliance, 2) interface compatibility checking, and 3) architecture modification suggestions. For proof-of-concept implementation, we rely on OpenAI's GPT-4o model.
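The interface compatibility checking case study can be illustrated with a minimal sketch. The interface format, prompt wording, pre-check logic, and signal names below are illustrative assumptions, not the authors' actual pipeline; an actual implementation would forward the prompt to GPT-4o.

```python
# Hypothetical sketch of LLM-based interface compatibility checking.
# Interface schema, prompt text, and signal names are assumptions for
# illustration only, not the pipeline described in the paper.

def build_compatibility_prompt(provided: dict, required: dict) -> str:
    """Build a prompt asking an LLM whether two component interfaces match."""
    def fmt(iface: dict) -> str:
        return "\n".join(f"  {name}: {typ}" for name, typ in sorted(iface.items()))
    return (
        "You are checking automotive software interfaces for compatibility.\n"
        "Provided interface (sender):\n" + fmt(provided) + "\n"
        "Required interface (receiver):\n" + fmt(required) + "\n"
        "Answer COMPATIBLE or INCOMPATIBLE and explain any mismatched signals."
    )

def quick_static_check(provided: dict, required: dict) -> bool:
    """Cheap pre-check before calling the LLM: every required signal must
    exist in the provided interface with the same type."""
    return all(provided.get(name) == typ for name, typ in required.items())

sender = {"vehicle_speed": "float32", "gear": "uint8"}
receiver = {"vehicle_speed": "float32", "brake_pressure": "float32"}

print(quick_static_check(sender, receiver))  # False: brake_pressure missing
prompt = build_compatibility_prompt(sender, receiver)
# A real check would then send `prompt` to GPT-4o, e.g. via the OpenAI client:
#   client.chat.completions.create(model="gpt-4o",
#                                  messages=[{"role": "user", "content": prompt}])
```

The static pre-check filters out trivially incompatible pairs cheaply, so the LLM is only consulted for cases that need semantic judgment.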
Related papers
- Architecture-Aware Multi-Design Generation for Repository-Level Feature Addition [53.50448142467294]
RAIM is a multi-design and architecture-aware framework for repository-level feature addition.
It shifts away from linear patching by generating multiple diverse implementation designs.
Experiments on the NoCode-bench Verified dataset demonstrate that RAIM establishes a new state-of-the-art performance.
arXiv Detail & Related papers (2026-03-02T12:50:40Z)
- AR-MOT: Autoregressive Multi-object Tracking [56.09738000988466]
We propose a novel autoregressive paradigm that formulates MOT as a sequence generation task within a large language model (LLM) framework.
This design enables the model to output structured results through flexible sequence construction, without requiring any task-specific heads.
To enhance region-level visual perception, we introduce an Object Tokenizer based on a pretrained detector.
arXiv Detail & Related papers (2026-01-05T09:17:28Z)
- Software Defined Vehicle Code Generation: A Few-Shot Prompting Approach [0.0]
General-purpose large language models (LLMs) have demonstrated transformative potential across domains.
This study proposes prompting, a common and basic strategy for interacting with LLMs and steering their responses.
Using only system prompts with an appropriate and efficient structure designed with advanced prompt engineering techniques, LLM behavior can be tailored without any training session or access to the model's internals.
arXiv Detail & Related papers (2025-11-06T22:27:39Z)
- AutoMaAS: Self-Evolving Multi-Agent Architecture Search for Large Language Models [4.720605681761044]
AutoMaAS is a self-evolving multi-agent architecture search framework.
It uses neural architecture search principles to automatically discover optimal agent configurations.
It achieves a 1.0-7.1% performance improvement and reduces inference costs by 3-5% compared to state-of-the-art methods.
arXiv Detail & Related papers (2025-10-03T01:57:07Z)
- ASIL-Decomposition Based Resource Allocation Optimization for Automotive E/E Architectures [0.4143603294943439]
We present an approach to automatically map software components to available hardware resources.
Compared to existing frameworks, our method provides a wider range of safety analyses in compliance with the ISO 26262 standard.
We formulate a multi-objective optimization problem to minimize both the development cost and the maximum execution times of critical function chains.
arXiv Detail & Related papers (2025-05-10T15:48:29Z)
- Advancing AI-assisted Hardware Design with Hierarchical Decentralized Training and Personalized Inference-Time Optimization [3.29494205026308]
Large Language Models (LLMs) have sparked significant interest in AI-assisted hardware design generation.
We identify three critical challenges hindering the development of LLM-assisted hardware design generation.
This paper introduces a two-stage framework for AI-assisted hardware design by exploring decentralized training and personalized inference.
arXiv Detail & Related papers (2025-04-21T15:41:28Z)
- DriveTransformer: Unified Transformer for Scalable End-to-End Autonomous Driving [62.62464518137153]
DriveTransformer is a simplified E2E-AD framework for the ease of scaling up.
It is composed of three unified operations: task self-attention, sensor cross-attention, and temporal cross-attention.
It achieves state-of-the-art performance in both the simulated closed-loop benchmark Bench2Drive and the real-world open-loop benchmark nuScenes with high FPS.
arXiv Detail & Related papers (2025-03-07T11:41:18Z)
- A Survey of Model Architectures in Information Retrieval [59.61734783818073]
The period from 2019 to the present has represented one of the biggest paradigm shifts in information retrieval (IR) and natural language processing (NLP).
We trace the development from traditional term-based methods to modern neural approaches, particularly highlighting the impact of transformer-based models and subsequent large language models (LLMs).
We conclude with a forward-looking discussion of emerging challenges and future directions.
arXiv Detail & Related papers (2025-02-20T18:42:58Z)
- Inference Optimization of Foundation Models on AI Accelerators [68.24450520773688]
Powerful foundation models, including large language models (LLMs), with Transformer architectures have ushered in a new era of Generative AI.
As the number of model parameters reaches hundreds of billions, their deployment incurs prohibitive inference costs and high latency in real-world scenarios.
This tutorial offers a comprehensive discussion on complementary inference optimization techniques using AI accelerators.
arXiv Detail & Related papers (2024-07-12T09:24:34Z)
- Towards Single-System Illusion in Software-Defined Vehicles -- Automated, AI-Powered Workflow [3.2821049498759094]
We propose a novel model- and feature-based approach to development of vehicle software systems.
One of the key points of the presented approach is the inclusion of modern generative AI, specifically Large Language Models (LLMs).
The resulting pipeline is automated to a large extent, with feedback being generated at each step.
arXiv Detail & Related papers (2024-03-21T15:07:57Z)
- Forging Vision Foundation Models for Autonomous Driving: Challenges, Methodologies, and Opportunities [59.02391344178202]
Vision foundation models (VFMs) serve as potent building blocks for a wide range of AI applications.
The scarcity of comprehensive training data, the need for multi-sensor integration, and the diverse task-specific architectures pose significant obstacles to the development of VFMs.
This paper delves into the critical challenge of forging VFMs tailored specifically for autonomous driving, while also outlining future directions.
arXiv Detail & Related papers (2024-01-16T01:57:24Z)
- AutonoML: Towards an Integrated Framework for Autonomous Machine Learning [9.356870107137095]
This review seeks to motivate a more expansive perspective on what constitutes an automated/autonomous ML system.
In doing so, we survey developments across a range of related research areas.
We develop a conceptual framework throughout the review, augmented by each topic, to illustrate one possible way of fusing high-level mechanisms into an autonomous ML system.
arXiv Detail & Related papers (2020-12-23T11:01:10Z)
- Many-Objective Software Remodularization using NSGA-III [17.487053547108516]
We propose a novel many-objective search-based approach using NSGA-III.
The process aims at finding the optimal remodularization solutions that improve the structure of packages, minimize the number of changes, preserve semantics coherence, and re-use the history of changes.
arXiv Detail & Related papers (2020-05-13T18:34:15Z)
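The many-objective selection idea behind NSGA-III style remodularization can be sketched with a plain Pareto-dominance filter. The objective names and values below are illustrative assumptions (all objectives minimized); NSGA-III itself additionally uses reference directions and evolutionary operators not shown here.

```python
# Minimal sketch of non-dominated selection, the core of many-objective
# search such as NSGA-III. Candidate names and objective values are
# made up for illustration; all objectives are to be minimized.

def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly
    better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Return the non-dominated subset of candidate remodularizations."""
    return [s for s in solutions
            if not any(dominates(o["obj"], s["obj"]) for o in solutions if o is not s)]

# Each candidate: (coupling, number_of_moved_classes, semantic_incoherence)
candidates = [
    {"name": "A", "obj": (0.30, 12, 0.20)},
    {"name": "B", "obj": (0.25, 20, 0.35)},
    {"name": "C", "obj": (0.30, 15, 0.25)},  # dominated by A in all objectives
]
front = pareto_front(candidates)
print([s["name"] for s in front])  # ['A', 'B']
```

A dominates C (equal coupling, fewer moved classes, better coherence), while A and B trade off against each other, so both survive; a search algorithm would then evolve new candidates from this front.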
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.