AI Safety Assurance for Automated Vehicles: A Survey on Research, Standardization, Regulation
- URL: http://arxiv.org/abs/2504.18328v1
- Date: Fri, 25 Apr 2025 13:14:06 GMT
- Title: AI Safety Assurance for Automated Vehicles: A Survey on Research, Standardization, Regulation
- Authors: Lars Ullrich, Michael Buchholz, Klaus Dietmayer, Knut Graichen
- Abstract summary: Assuring safety of artificial intelligence applied to safety-critical systems is of paramount importance. Research on safeguarding AI is being conducted in parallel to AI standardization and regulation.
- Score: 13.059865395440603
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Assuring safety of artificial intelligence (AI) applied to safety-critical systems is of paramount importance, especially since research in the field of automated driving shows that AI is able to outperform classical approaches, handle higher complexity, and reach new levels of autonomy. At the same time, the safety assurance required for the use of AI in such safety-critical systems is still not in place. Due to the dynamic and far-reaching nature of the technology, research on safeguarding AI is being conducted in parallel to AI standardization and regulation. This parallel progress necessitates simultaneous consideration in order to carry out targeted research and development of AI systems in the context of automated driving. Therefore, in contrast to existing surveys that focus primarily on research aspects, this paper considers research, standardization, and regulation together in a concise way. Accordingly, the survey accounts for the interdependencies arising from the triplet of research, standardization, and regulation from a forward-looking perspective, and anticipates and discusses open questions and possible future directions. In this way, the survey ultimately provides researchers and safety experts with a compact, holistic perspective on the current status, emerging trends, and possible future developments.
Related papers
- AI Safety for Everyone [3.440579243843689]
Recent discussions and research in AI safety have increasingly emphasized the deep connection between AI safety and existential risk from advanced AI systems. This framing may exclude researchers and practitioners who are committed to AI safety but approach the field from different angles. We find a vast array of concrete safety work that addresses immediate and practical concerns with current AI systems.
arXiv Detail & Related papers (2025-02-13T13:04:59Z)
- Landscape of AI safety concerns -- A methodology to support safety assurance for AI-based autonomous systems [0.0]
AI has emerged as a key technology, driving advancements across a range of applications. The challenge of assuring safety in systems that incorporate AI components is substantial. We propose a novel methodology designed to support the creation of safety assurance cases for AI-based systems.
arXiv Detail & Related papers (2024-12-18T16:38:16Z)
- How Could Generative AI Support Compliance with the EU AI Act? A Review for Safe Automated Driving Perception [4.075971633195745]
Deep Neural Networks (DNNs) have become central for the perception functions of autonomous vehicles.
The European Union (EU) Artificial Intelligence (AI) Act aims to address these challenges by establishing stringent norms and standards for AI systems.
This review paper summarizes the requirements arising from the EU AI Act regarding DNN-based perception systems and systematically categorizes existing generative AI applications in AD.
arXiv Detail & Related papers (2024-08-30T12:01:06Z)
- Safetywashing: Do AI Safety Benchmarks Actually Measure Safety Progress? [59.96471873997733]
We propose an empirical foundation for developing more meaningful safety metrics and define AI safety in a machine learning research context. We aim to provide a more rigorous framework for AI safety research, advancing the science of safety evaluations and clarifying the path towards measurable progress.
arXiv Detail & Related papers (2024-07-31T17:59:24Z)
- Work-in-Progress: Crash Course: Can (Under Attack) Autonomous Driving Beat Human Drivers? [60.51287814584477]
This paper evaluates the inherent risks in autonomous driving by examining the current landscape of AVs.
We develop specific claims highlighting the delicate balance between the advantages of AVs and potential security challenges in real-world scenarios.
arXiv Detail & Related papers (2024-05-14T09:42:21Z)
- Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems [88.80306881112313]
We will introduce and define a family of approaches to AI safety, which we will refer to as guaranteed safe (GS) AI.
The core feature of these approaches is that they aim to produce AI systems which are equipped with high-assurance quantitative safety guarantees.
We outline a number of approaches for creating each of these three core components (a world model, a safety specification, and a verifier), describe the main technical challenges, and suggest a number of potential solutions to them.
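As a minimal sketch of how the three components compose (our own illustration under toy assumptions, not the authors' implementation; the names `world_model`, `safety_spec`, and `verify_action` and the one-dimensional dynamics are hypothetical), a verifier admits an action only if every world-model prediction satisfies the safety specification:

```python
from typing import Iterable

State = float   # toy state: distance to the nearest obstacle (metres)
Action = float  # toy action: distance closed toward the obstacle (metres)

def world_model(state: State, action: Action) -> Iterable[State]:
    """Hypothetical world model: enumerate pessimistic next states under bounded disturbance."""
    for disturbance in (-0.5, 0.0, 0.5):
        yield state - action + disturbance

def safety_spec(state: State) -> bool:
    """Hypothetical safety specification: always keep at least 2 m clearance."""
    return state >= 2.0

def verify_action(state: State, action: Action) -> bool:
    """Verifier: admit an action only if every predicted outcome satisfies the spec."""
    return all(safety_spec(s) for s in world_model(state, action))

# Guarded control: fall back to a conservative action whenever the
# proposed action cannot be verified against the specification.
state, proposed, fallback = 5.0, 4.0, 0.0
action = proposed if verify_action(state, proposed) else fallback
print(action)  # 0.0 -- the worst case 5 - 4 - 0.5 = 0.5 m violates the 2 m spec
```

The point of the pattern is that the quantitative guarantee attaches to the verifier's check, not to the learned policy proposing the action.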
arXiv Detail & Related papers (2024-05-10T17:38:32Z)
- Testing autonomous vehicles and AI: perspectives and challenges from cybersecurity, transparency, robustness and fairness [53.91018508439669]
The study explores the complexities of integrating Artificial Intelligence into Autonomous Vehicles (AVs).
It examines the challenges introduced by AI components and the impact on testing procedures.
The paper identifies significant challenges and suggests future directions for research and development of AI in AV technology.
arXiv Detail & Related papers (2024-02-21T08:29:42Z)
- Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z)
- Predictable Artificial Intelligence [77.1127726638209]
This paper introduces the ideas and challenges of Predictable AI. It explores the ways in which we can anticipate key validity indicators of present and future AI ecosystems. We argue that achieving predictability is crucial for fostering trust, liability, control, alignment and safety of AI ecosystems.
arXiv Detail & Related papers (2023-10-09T21:36:21Z)
- An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper presents a comprehensive analysis of existing concepts from different disciplines that tackle the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z)