Safety of autonomous vehicles: A survey on Model-based vs. AI-based
approaches
- URL: http://arxiv.org/abs/2305.17941v1
- Date: Mon, 29 May 2023 08:05:32 GMT
- Title: Safety of autonomous vehicles: A survey on Model-based vs. AI-based
approaches
- Authors: Dimia Iberraken and Lounis Adouane
- Abstract summary: It is proposed to review research on relevant methods and concepts defining an overall control architecture for AVs.
This reviewing process is intended to highlight research that uses either model-based methods or AI-based approaches.
The paper ends with a discussion of the methods used to guarantee the safety of AVs, namely safety verification techniques and the standardization/generalization of safety frameworks.
- Score: 1.370633147306388
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The growing advancements in Autonomous Vehicles (AVs) have emphasized the
critical need to prioritize the absolute safety of AV maneuvers, especially in
dynamic and unpredictable environments or situations. This objective becomes
even more challenging due to the uniqueness of every traffic
situation/condition. To cope with all these very constrained and complex
configurations, AVs must have appropriate control architectures with reliable
and real-time Risk Assessment and Management Strategies (RAMS). These targeted
RAMS must drastically reduce navigation risks. However, the lack of provable
safety guarantees, which is one of the key challenges to be addressed,
drastically limits the ambition to introduce AVs more broadly on our roads and
restricts their use to very limited use cases. Therefore, the focus and
ambition of this paper are to survey research on autonomous vehicles, with
particular attention to the important topic of AV safety guarantees. For this
purpose,
it is proposed to review research on relevant methods and concepts defining an
overall control architecture for AVs, with an emphasis on the safety assessment
and decision-making systems composing these architectures. Moreover, this
reviewing process is intended to highlight research that uses either
model-based methods or AI-based approaches, emphasizing the strengths and
weaknesses of each methodology and investigating research that proposes a
comprehensive multi-modal design combining model-based and AI approaches. This
paper ends with a discussion of the methods used to guarantee the safety of
AVs, namely safety verification techniques and the
standardization/generalization of safety frameworks.
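As a concrete reading of the multi-modal designs surveyed here, a recurring pattern is an AI-based planner vetted by a model-based safety check. Below is a minimal sketch of that pattern, assuming hypothetical interfaces; the simple time-to-collision gate stands in for a full RAMS and is not the paper's own design.

```python
from typing import Callable

TTC_MIN = 2.0  # seconds; minimum acceptable time-to-collision (assumed value)

def time_to_collision(gap_m: float, closing_speed_mps: float) -> float:
    """Constant-velocity time-to-collision; infinite if the gap is opening."""
    return gap_m / closing_speed_mps if closing_speed_mps > 0 else float("inf")

def hybrid_controller(ai_planner: Callable[[dict], str], fallback: str = "brake"):
    """Wrap an AI-based planner with a model-based safety gate (illustrative)."""
    def act(state: dict) -> str:
        maneuver = ai_planner(state)  # AI-based proposal
        ttc = time_to_collision(state["gap_m"], state["closing_speed_mps"])
        return maneuver if ttc >= TTC_MIN else fallback  # model-based veto
    return act

# Usage: gate a trivial planner that always wants to keep its lane.
act = hybrid_controller(lambda s: "keep_lane")
print(act({"gap_m": 10.0, "closing_speed_mps": 8.0}))  # TTC 1.25 s -> "brake"
```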
Related papers
- Cross-Modality Safety Alignment [73.8765529028288]
We introduce a novel safety alignment challenge called Safe Inputs but Unsafe Output (SIUO) to evaluate cross-modality safety alignment.
To empirically investigate this problem, we developed the SIUO, a cross-modality benchmark encompassing 9 critical safety domains, such as self-harm, illegal activities, and privacy violations.
Our findings reveal substantial safety vulnerabilities in both closed- and open-source LVLMs, underscoring the inadequacy of current models to reliably interpret and respond to complex, real-world scenarios.
arXiv Detail & Related papers (2024-06-21T16:14:15Z)
- Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems [88.80306881112313]
We will introduce and define a family of approaches to AI safety, which we will refer to as guaranteed safe (GS) AI.
The core feature of these approaches is that they aim to produce AI systems which are equipped with high-assurance quantitative safety guarantees, obtained from three core components: a world model, a safety specification, and a verifier.
We outline a number of approaches for creating each of these three core components, describe the main technical challenges, and suggest a number of potential solutions to them.
arXiv Detail & Related papers (2024-05-10T17:38:32Z)
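A minimal sketch of the guaranteed-safe pattern described in the entry above; the `WorldModel`/`verify` interfaces and the fallback policy are illustrative assumptions, not the framework's actual API.

```python
from dataclasses import dataclass
from typing import Callable, List

State = dict   # toy state representation
Action = str   # toy action representation

@dataclass
class WorldModel:
    """Predicts the set of possible successor states (assumed interface)."""
    step: Callable[[State, Action], List[State]]

def verify(model: WorldModel, state: State, action: Action,
           spec: Callable[[State], bool]) -> bool:
    """Certify an action: the safety spec must hold in every predicted outcome."""
    return all(spec(s) for s in model.step(state, action))

def gs_controller(policy, model: WorldModel,
                  spec: Callable[[State], bool], fallback: Action):
    """Gate an arbitrary (e.g. learned) policy behind the verifier."""
    def act(state: State) -> Action:
        proposal = policy(state)
        return proposal if verify(model, state, proposal, spec) else fallback
    return act
```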
- Inherent Diverse Redundant Safety Mechanisms for AI-based Software Elements in Automotive Applications [1.6495054381576084]
This paper explores the role and challenges of Artificial Intelligence (AI) algorithms in autonomous driving systems.
A primary concern relates to the ability (and necessity) of AI models to generalize beyond their initial training data.
This paper investigates the risk associated with overconfident AI models in safety-critical applications like autonomous driving.
arXiv Detail & Related papers (2024-02-13T04:15:26Z)
- Formal Modelling of Safety Architecture for Responsibility-Aware Autonomous Vehicle via Event-B Refinement [1.45566585318013]
This paper describes our strategy and experience in modelling, deriving, and proving the safety conditions of AVs.
Our case study targets the state-of-the-art model of goal-aware responsibility-sensitive safety to reason about interactions with surrounding vehicles.
arXiv Detail & Related papers (2024-01-10T02:02:06Z)
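Responsibility-sensitive safety (RSS), the model targeted in the entry above, is built around closed-form safe-distance conditions. Below is a sketch of the standard RSS safe longitudinal distance check; parameter values in the example are illustrative, not from the paper.

```python
def rss_safe_longitudinal_distance(v_rear: float, v_front: float,
                                   rho: float, a_max: float,
                                   b_min: float, b_max: float) -> float:
    """Minimum safe gap (m) behind a leading vehicle under RSS.

    v_rear, v_front : speeds of the rear/front vehicles (m/s)
    rho             : rear vehicle's response time (s)
    a_max           : rear vehicle's max acceleration during response (m/s^2)
    b_min           : rear vehicle's guaranteed (minimum) braking (m/s^2)
    b_max           : front vehicle's maximum braking (m/s^2)
    """
    v_resp = v_rear + rho * a_max  # rear speed at the end of the response time
    d = (v_rear * rho
         + 0.5 * a_max * rho ** 2
         + v_resp ** 2 / (2 * b_min)
         - v_front ** 2 / (2 * b_max))
    return max(0.0, d)

# Example: following at 20 m/s behind a car doing 15 m/s.
gap_needed = rss_safe_longitudinal_distance(20.0, 15.0, rho=0.5,
                                            a_max=2.0, b_min=4.0, b_max=8.0)
is_safe = lambda actual_gap: actual_gap >= gap_needed
```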
- The Art of Defending: A Systematic Evaluation and Analysis of LLM Defense Strategies on Safety and Over-Defensiveness [56.174255970895466]
Large Language Models (LLMs) play an increasingly pivotal role in natural language processing applications.
This paper presents the Safety and Over-Defensiveness Evaluation (SODE) benchmark.
arXiv Detail & Related papers (2023-12-30T17:37:06Z)
- A Counterfactual Safety Margin Perspective on the Scoring of Autonomous Vehicles' Riskiness [52.27309191283943]
This paper presents a data-driven framework for assessing the risk of different AVs' behaviors.
We propose the notion of counterfactual safety margin, which represents the minimum deviation from nominal behavior that could cause a collision.
arXiv Detail & Related papers (2023-08-02T09:48:08Z)
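One way to read the counterfactual safety margin above: search for the smallest deviation from the nominal trajectory that produces a collision. A toy sketch under that reading, with a hypothetical `collides` simulator hook (not from the paper):

```python
import numpy as np

def counterfactual_safety_margin(nominal_traj: np.ndarray,
                                 collides,          # (traj) -> bool, assumed simulator hook
                                 directions: list,  # perturbation patterns, same shape as traj
                                 eps: float = 0.1,
                                 max_dev: float = 5.0) -> float:
    """Smallest deviation from the nominal trajectory that yields a collision,
    swept over a set of perturbation directions; returns max_dev if none found.
    A larger margin indicates less risky nominal behavior."""
    margin = max_dev
    for d in directions:
        dev = eps
        while dev <= max_dev:
            if collides(nominal_traj + dev * np.asarray(d)):
                margin = min(margin, dev)
                break
            dev += eps
    return margin
```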
- Towards Safer Generative Language Models: A Survey on Safety Risks, Evaluations, and Improvements [76.80453043969209]
This survey presents a framework for safety research pertaining to large models.
We begin by introducing safety issues of wide concern, then delve into safety evaluation methods for large models.
We explore the strategies for enhancing large model safety from training to deployment.
arXiv Detail & Related papers (2023-02-18T09:32:55Z)
- Evaluating Model-free Reinforcement Learning toward Safety-critical Tasks [70.76757529955577]
This paper revisits prior work in this scope from the perspective of state-wise safe RL.
We propose Unrolling Safety Layer (USL), a joint method that combines safety optimization and safety projection.
To facilitate further research in this area, we reproduce related algorithms in a unified pipeline and incorporate them into SafeRL-Kit.
arXiv Detail & Related papers (2022-12-12T06:30:17Z)
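The safety-projection component of such a layer can be pictured as a post-hoc correction that moves an action into the feasible set. The finite-difference descent below is a toy stand-in for that idea, not USL's actual method; the cost oracle and step sizes are assumptions.

```python
import numpy as np

def project_to_safe(action: np.ndarray,
                    safety_cost,            # (action) -> float, assumed cost oracle
                    threshold: float = 0.0,
                    step: float = 0.05,
                    iters: int = 100) -> np.ndarray:
    """Nudge a 1-D action toward the feasible set {a : safety_cost(a) <= threshold}
    by finite-difference descent on the cost; a stand-in for a learned projection."""
    a = action.copy()
    for _ in range(iters):
        c = safety_cost(a)
        if c <= threshold:
            return a  # already state-wise safe
        # finite-difference gradient estimate of the safety cost
        grad = np.array([(safety_cost(a + step * e) - c) / step
                         for e in np.eye(a.size)])
        a = a - step * grad  # descend the cost surface
    return a
```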
This list is automatically generated from the titles and abstracts of the papers on this site.