A Systematic Mapping Study on Software Architecture for AI-based Mobility Systems
- URL: http://arxiv.org/abs/2506.01595v1
- Date: Mon, 02 Jun 2025 12:29:54 GMT
- Title: A Systematic Mapping Study on Software Architecture for AI-based Mobility Systems
- Authors: Amra Ramic, Stefan Kugele
- Abstract summary: We aim to provide the missing overview of existing architectures, their contribution to safety, and their level of maturity in AI-based safety-critical systems. From a set of 1,639 primary studies, we selected 38 relevant studies dealing with safety assurance through software architecture in AI-based safety-critical systems. The selected studies were then examined using various criteria to answer the research questions and identify gaps in this area of research.
- Score: 0.5156484100374057
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Background: Due to their diversity, complexity, and, above all, importance, safety-critical and dependable systems must be developed with special diligence. Criticality increases further when these systems contain artificial intelligence (AI) components, which are known for their uncertainty. As software and reference architectures form the backbone of any successful system, including safety-critical dependable systems with learning-enabled components, choosing a suitable architecture that guarantees safety despite these uncertainties is of great importance. Aim: We aim to provide the missing overview of existing architectures, their contribution to safety, and their level of maturity in AI-based safety-critical systems. Method: To achieve this aim, we report a systematic mapping study. From a set of 1,639 primary studies, we selected 38 relevant studies dealing with safety assurance through software architecture in AI-based safety-critical systems. The selected studies were then examined using various criteria to answer the research questions and identify gaps in this area of research. Results: Our findings show which architectures have been proposed and to what extent they have been implemented. Furthermore, we identified gaps in different application areas of those systems and discuss plausible explanations for them. Conclusion: As the AI trend continues to grow, system complexity will inevitably increase as well. To ensure the lasting safety of such systems, we provide an overview of the state of the art, intending to identify best practices and research gaps and to give future research a more focused direction.
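The selection process described in the abstract (1,639 primary studies narrowed down to 38) is, at its core, a staged filter over study metadata. A minimal sketch of one such inclusion-screening step in Python; the criteria predicates and keywords below are hypothetical placeholders, not the authors' actual protocol:

```python
from dataclasses import dataclass

@dataclass
class Study:
    title: str
    abstract: str
    peer_reviewed: bool

def is_relevant(study: Study) -> bool:
    # Hypothetical inclusion criteria; the real protocol is defined in the paper.
    text = (study.title + " " + study.abstract).lower()
    mentions_architecture = "architecture" in text
    mentions_ai = any(kw in text for kw in
                      ("artificial intelligence", "machine learning", "learning-enabled"))
    mentions_safety = "safety" in text
    return study.peer_reviewed and mentions_architecture and mentions_ai and mentions_safety

candidates = [
    Study("Safety architecture for learning-enabled driving", "...", True),
    Study("A survey of blockchain consensus", "...", True),
]
selected = [s for s in candidates if is_relevant(s)]
print(f"{len(selected)} of {len(candidates)} studies retained")
```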
Related papers
- A quantitative framework for evaluating architectural patterns in ML systems [49.1574468325115]
This study proposes a framework for quantitative assessment of architectural patterns in ML systems. We focus on scalability and performance metrics for cost-effective CPU-based inference.
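A quantitative comparison of serving patterns typically rests on latency and throughput measurements. A minimal sketch of such a harness; the `predict` stub and the batch sizes are stand-ins, not the framework proposed in the paper:

```python
import statistics
import time

def predict(batch):
    # Stand-in for a real CPU-based model inference call.
    time.sleep(0.001 * len(batch))
    return [0] * len(batch)

def benchmark(batch_size: int, iterations: int = 50) -> dict:
    batch = list(range(batch_size))
    latencies = []
    for _ in range(iterations):
        start = time.perf_counter()
        predict(batch)
        latencies.append(time.perf_counter() - start)
    p95 = statistics.quantiles(latencies, n=20)[18]  # 95th-percentile latency
    throughput = batch_size / statistics.mean(latencies)
    return {"batch_size": batch_size, "p95_s": p95, "items_per_s": throughput}

# Sweep batch sizes to expose the latency/throughput trade-off of a pattern.
for bs in (1, 8, 32):
    print(benchmark(bs))
```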
arXiv Detail & Related papers (2025-01-20T15:30:09Z)
- Landscape of AI safety concerns -- A methodology to support safety assurance for AI-based autonomous systems [0.0]
AI has emerged as a key technology, driving advancements across a range of applications. The challenge of assuring safety in systems that incorporate AI components is substantial. We propose a novel methodology designed to support the creation of safety assurance cases for AI-based systems.
arXiv Detail & Related papers (2024-12-18T16:38:16Z)
- EARBench: Towards Evaluating Physical Risk Awareness for Task Planning of Foundation Model-based Embodied AI Agents [53.717918131568936]
Embodied artificial intelligence (EAI) integrates advanced AI models into physical entities for real-world interaction. Foundation models serving as the "brain" of EAI agents for high-level task planning have shown promising results. However, the deployment of these agents in physical environments presents significant safety challenges. This study introduces EARBench, a novel framework for automated physical risk assessment in EAI scenarios.
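As a rough illustration of what automated physical risk assessment of a task plan can look like (this is not EARBench's actual API or taxonomy), a screening step might check each planned action against hazard rules before execution:

```python
# Hypothetical hazard rules; EARBench's actual risk categories and scoring differ.
HAZARD_RULES = {
    "heat": "flammable objects nearby",
    "cut": "requires blade handling",
    "spill": "liquid above electronics",
}

def assess_plan(steps: list[dict]) -> list[str]:
    """Return human-readable risk findings for a task plan."""
    findings = []
    for i, step in enumerate(steps):
        for hazard in step.get("hazards", []):
            reason = HAZARD_RULES.get(hazard, "unknown hazard")
            findings.append(f"step {i} ({step['action']}): {hazard} - {reason}")
    return findings

plan = [
    {"action": "boil water", "hazards": ["heat"]},
    {"action": "carry cup to desk", "hazards": ["spill"]},
]
for finding in assess_plan(plan):
    print(finding)
```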
arXiv Detail & Related papers (2024-08-08T13:19:37Z)
- Safetywashing: Do AI Safety Benchmarks Actually Measure Safety Progress? [59.96471873997733]
We propose an empirical foundation for developing more meaningful safety metrics and define AI safety in a machine learning research context. We aim to provide a more rigorous framework for AI safety research, advancing the science of safety evaluations and clarifying the path towards measurable progress.
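A core empirical test in this line of work is whether "safety" benchmark scores are largely explained by general capabilities. A minimal sketch of that check as a correlation over per-model scores (synthetic numbers, not the paper's data; requires Python 3.10+ for `statistics.correlation`):

```python
import statistics

# Synthetic per-model scores, illustrative only.
capability_score = [55.0, 62.0, 70.0, 78.0, 85.0]   # e.g., broad-benchmark average
safety_score     = [48.0, 57.0, 66.0, 75.0, 83.0]   # candidate "safety" benchmark

r = statistics.correlation(capability_score, safety_score)
print(f"capability-safety correlation: {r:.2f}")
if r > 0.8:
    # A near-perfect correlation suggests the benchmark mostly tracks
    # capabilities rather than measuring a distinct safety property.
    print("benchmark largely tracks capabilities; weak evidence of distinct safety progress")
```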
arXiv Detail & Related papers (2024-07-31T17:59:24Z)
- Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems [88.80306881112313]
We will introduce and define a family of approaches to AI safety, which we will refer to as guaranteed safe (GS) AI.
The core feature of these approaches is that they aim to produce AI systems which are equipped with high-assurance quantitative safety guarantees.
We outline a number of approaches for creating each of these three core components (a world model, a safety specification, and a verifier), describe the main technical challenges, and suggest a number of potential solutions to them.
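Read abstractly, the three components form a gate around every proposed action: the verifier checks, against the safety specification, what the world model predicts the action will do. A schematic sketch under that reading, with all three components stubbed out as trivial functions:

```python
from typing import Callable

# Toy stubs for the three GS AI components; real instances would be far richer.
def world_model(state: float, action: float) -> float:
    return state + action  # predicted next state

def safety_spec(state: float) -> bool:
    return abs(state) <= 10.0  # states within bounds are considered safe

def verifier(state: float, action: float,
             model: Callable, spec: Callable) -> bool:
    # High-assurance check: is the predicted outcome within the specification?
    return spec(model(state, action))

def safe_step(state: float, proposed_action: float, fallback: float = 0.0) -> float:
    if verifier(state, proposed_action, world_model, safety_spec):
        return world_model(state, proposed_action)
    return world_model(state, fallback)  # reject proposal, take safe fallback

print(safe_step(9.0, 0.5))  # 9.5: verified, proposal executes
print(safe_step(9.0, 5.0))  # 9.0: rejected (14.0 would violate the spec)
```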
arXiv Detail & Related papers (2024-05-10T17:38:32Z)
- ACCESS: Assurance Case Centric Engineering of Safety-critical Systems [9.388301205192082]
Assurance cases are used to communicate and assess confidence in critical system properties such as safety and security.
In recent years, model-based system assurance approaches have gained popularity to improve the efficiency and quality of system assurance activities.
We show how model-based system assurance cases can trace to heterogeneous engineering artifacts.
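A model-based assurance case can be pictured as a claim tree whose leaves trace to heterogeneous engineering artifacts. A minimal sketch of such a structure; the node and artifact types below are illustrative, not the ACCESS metamodel:

```python
from dataclasses import dataclass, field

@dataclass
class ArtifactRef:
    kind: str   # e.g., "test report", "FMEA", "simulation log"
    uri: str

@dataclass
class Claim:
    text: str
    children: list["Claim"] = field(default_factory=list)
    evidence: list[ArtifactRef] = field(default_factory=list)

    def unsupported_leaves(self) -> list["Claim"]:
        """Leaf claims with no traced evidence are open assurance gaps."""
        if not self.children:
            return [] if self.evidence else [self]
        return [leaf for child in self.children for leaf in child.unsupported_leaves()]

case = Claim("System is acceptably safe", children=[
    Claim("Perception failures are mitigated",
          evidence=[ArtifactRef("test report", "reports/perception_v3.pdf")]),
    Claim("Planner respects safety envelope"),
])
for gap in case.unsupported_leaves():
    print("missing evidence for:", gap.text)
```

Keeping the evidence links as first-class model elements is what makes traceability queries like `unsupported_leaves` cheap to automate.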
arXiv Detail & Related papers (2024-03-22T14:29:50Z)
- Architecting Safer Autonomous Aviation Systems [1.2599533416395767]
This paper considers common architectural patterns used within traditional aviation systems and explores their safety and safety assurance implications.
Considering safety as an architectural property, we discuss both the allocation of safety requirements and the architectural trade-offs involved early in the design lifecycle.
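One classic aviation pattern in this space is the monitor-actuator (doer/checker) pair: a complex, high-performance channel proposes commands, and a simpler, independently assured monitor can veto them. A schematic sketch of the pattern, not code from the paper:

```python
def complex_channel(sensor_value: float) -> float:
    # High-performance but hard-to-assure control law (e.g., a learned component).
    return sensor_value * 1.8

def monitor(command: float, limit: float = 30.0) -> bool:
    # Simple, independently assured envelope check.
    return abs(command) <= limit

def actuate(sensor_value: float) -> float:
    command = complex_channel(sensor_value)
    if monitor(command):
        return command
    return 0.0  # revert to a safe default when the monitor vetoes

print(actuate(10.0))  # 18.0, within envelope
print(actuate(20.0))  # 0.0, vetoed (36.0 exceeds the limit)
```

The safety requirement is thereby allocated to the small monitor rather than the complex channel, which is exactly the kind of architectural trade-off the paper discusses.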
arXiv Detail & Related papers (2023-01-09T21:02:18Z)
- An Exploratory Study of AI System Risk Assessment from the Lens of Data Distribution and Uncertainty [4.99372598361924]
Deep learning (DL) has become a driving force and has been widely adopted in many domains and applications.
This paper initiates an early exploratory study of AI system risk assessment from both the data distribution and uncertainty angles.
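As a rough illustration of combining the two angles, one can score an input both by its distance from the training distribution and by the model's predictive uncertainty (softmax entropy). The weighting and the Euclidean distance below are arbitrary choices for the sketch, not the paper's method:

```python
import math

def entropy(probs: list[float]) -> float:
    # Predictive uncertainty of a softmax output.
    return -sum(p * math.log(p) for p in probs if p > 0)

def distribution_distance(x: list[float], train_mean: list[float]) -> float:
    # Crude distribution-shift proxy: distance to the training-data mean.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, train_mean)))

def risk_score(x: list[float], probs: list[float], train_mean: list[float],
               w_dist: float = 0.5, w_unc: float = 0.5) -> float:
    return w_dist * distribution_distance(x, train_mean) + w_unc * entropy(probs)

train_mean = [0.0, 0.0]
in_dist = risk_score([0.1, -0.2], [0.95, 0.05], train_mean)
far_ood = risk_score([4.0, 3.5], [0.55, 0.45], train_mean)
print(f"in-distribution risk: {in_dist:.2f}, OOD risk: {far_ood:.2f}")
```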
arXiv Detail & Related papers (2022-12-13T03:34:25Z)
- Evaluating Model-free Reinforcement Learning toward Safety-critical Tasks [70.76757529955577]
This paper revisits prior work in this scope from the perspective of state-wise safe RL.
We propose Unrolling Safety Layer (USL), a joint method that combines safety optimization and safety projection.
To facilitate further research in this area, we reproduce related algorithms in a unified pipeline and incorporate them into SafeRL-Kit.
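The "safety projection" half of USL can be pictured as a post-hoc correction: if the predicted constraint cost of the policy's action exceeds the budget, shrink the action until the cost estimate is satisfied. A toy one-dimensional sketch of that idea, not SafeRL-Kit's implementation:

```python
def cost_model(state: float, action: float) -> float:
    # Stand-in for a learned state-action cost critic.
    return max(0.0, state + action - 1.0)

def project_action(state: float, action: float,
                   budget: float = 0.0, step: float = 0.05,
                   max_iters: int = 100) -> float:
    """Iteratively shrink the action until the predicted cost meets the budget."""
    for _ in range(max_iters):
        if cost_model(state, action) <= budget:
            break
        action -= step  # assumes larger actions drive the cost, as in this toy model
    return action

raw_action = 0.9
safe_action = project_action(state=0.5, action=raw_action)
print(f"raw: {raw_action}, projected: {safe_action:.2f}")  # cost driven down to the budget
```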
arXiv Detail & Related papers (2022-12-12T06:30:17Z)
- Proceedings of the Robust Artificial Intelligence System Assurance (RAISA) Workshop 2022 [0.0]
The RAISA workshop will focus on research, development and application of robust artificial intelligence (AI) and machine learning (ML) systems.
Rather than studying robustness with respect to particular ML algorithms, our approach will be to explore robustness assurance at the system architecture level.
arXiv Detail & Related papers (2022-02-10T01:15:50Z)
- Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
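One recurring pitfall in this literature is data snooping: letting test-set information leak into training, for instance by fitting preprocessing on the full dataset before splitting. A minimal sketch of the wrong and right ordering with scikit-learn, using synthetic features and labels:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Synthetic data for illustration.
X = np.random.RandomState(0).normal(size=(200, 5))
y = (X[:, 0] > 0).astype(int)

# Pitfall: the scaler sees test-set statistics, so evaluation is contaminated.
leaky_X = StandardScaler().fit_transform(X)
X_tr_bad, X_te_bad, *_ = train_test_split(leaky_X, y, random_state=0)

# Correct: split first, fit preprocessing on training data only.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
scaler = StandardScaler().fit(X_tr)
X_tr_scaled = scaler.transform(X_tr)
X_te_scaled = scaler.transform(X_te)  # test set transformed, never fitted on
```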
arXiv Detail & Related papers (2020-10-19T13:09:31Z)