Inherent Diverse Redundant Safety Mechanisms for AI-based Software
Elements in Automotive Applications
- URL: http://arxiv.org/abs/2402.08208v2
- Date: Thu, 29 Feb 2024 18:18:04 GMT
- Title: Inherent Diverse Redundant Safety Mechanisms for AI-based Software
Elements in Automotive Applications
- Authors: Mandar Pitale, Alireza Abbaspour, Devesh Upadhyay
- Abstract summary: This paper explores the role and challenges of Artificial Intelligence (AI) algorithms in autonomous driving systems.
A primary concern relates to the ability (and necessity) of AI models to generalize beyond their initial training data.
This paper investigates the risk associated with overconfident AI models in safety-critical applications like autonomous driving.
- Score: 1.6495054381576084
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: This paper explores the role and challenges of Artificial Intelligence (AI)
algorithms, specifically AI-based software elements, in autonomous driving
systems. These AI systems are fundamental in executing real-time critical
functions in complex and high-dimensional environments. They handle vital tasks
such as multi-modal perception, cognition, and decision-making, including
motion planning, lane keeping, and emergency braking. A primary concern relates
to the ability (and necessity) of AI models to generalize beyond their initial
training data. This generalization issue becomes evident in real-time
scenarios, where models frequently encounter inputs not represented in their
training or validation data. In such cases, AI systems must still function
effectively despite facing distributional or domain shifts. This paper
investigates the risk associated with overconfident AI models in
safety-critical applications like autonomous driving. To mitigate these risks,
methods for training AI models that help maintain performance without
overconfidence are proposed. This involves implementing certainty reporting
architectures and ensuring diverse training data. While various
distribution-based methods exist to provide safety mechanisms for AI models,
there is a noted lack of systematic assessment of these methods, especially in
the context of safety-critical automotive applications. Many methods in the
literature do not adapt well to the quick response times required in
safety-critical edge applications. This paper reviews these methods, discusses
their suitability for safety-critical applications, and highlights their
strengths and limitations. The paper also proposes potential improvements to
enhance the safety and reliability of AI algorithms in autonomous vehicles in
the context of rapid and accurate decision-making processes.
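To make the abstract's notions of certainty reporting and diverse redundancy concrete, below is a minimal sketch (not the paper's proposed architecture) of one widely used distribution-based safety mechanism: measuring disagreement across a diverse model ensemble and flagging low-certainty inputs for a redundant fallback path. The models, threshold, and fallback policy are placeholder assumptions.

```python
import torch
import torch.nn.functional as F

# Placeholder threshold; in practice it would be calibrated on held-out
# (and deliberately shifted) validation data.
DISAGREEMENT_THRESHOLD = 0.2

def gated_ensemble_predict(models, x):
    """Certainty-reporting inference over a diverse ensemble.

    Returns (prediction, is_confident). Inputs on which the ensemble
    members disagree are flagged so that a redundant fallback channel
    can take over instead of acting on a potentially overconfident
    single model.
    """
    with torch.no_grad():
        # (n_models, batch, n_classes) stack of softmax outputs
        probs = torch.stack([F.softmax(m(x), dim=-1) for m in models])
        mean_probs = probs.mean(dim=0)
        preds = mean_probs.argmax(dim=-1)

        # Entropy of the averaged prediction (total uncertainty) ...
        total = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(-1)
        # ... minus the average per-member entropy ...
        member = -(probs * probs.clamp_min(1e-12).log()).sum(-1).mean(0)
        # ... leaves the members' mutual information: their disagreement,
        # a common proxy for epistemic uncertainty and distribution shift.
        disagreement = total - member

        is_confident = disagreement < DISAGREEMENT_THRESHOLD
    return preds, is_confident
```

In a deployed stack, inputs flagged as not confident would be routed to a diverse redundant channel (for example, a rule-based planner or a minimal-risk maneuver) rather than acted on directly; the survey in this paper concerns how well such signals hold up under the tight latency budgets of safety-critical edge applications.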
Related papers
- Landscape of AI safety concerns -- A methodology to support safety assurance for AI-based autonomous systems [0.0]
AI has emerged as a key technology, driving advancements across a range of applications.
The challenge of assuring safety in systems that incorporate AI components is substantial.
We propose a novel methodology designed to support the creation of safety assurance cases for AI-based systems.
arXiv Detail & Related papers (2024-12-18T16:38:16Z)
- Key Safety Design Overview in AI-driven Autonomous Vehicles [0.0]
It is essential to maintain a high level of functional safety and robust software design.
This paper explores the necessary safety architecture and systematic approach for automotive software and hardware.
arXiv Detail & Related papers (2024-12-12T01:48:45Z)
- Generative AI Agents in Autonomous Machines: A Safety Perspective [9.02400798202199]
Generative AI agents provide unparalleled capabilities, but they also raise unique safety concerns.
This work investigates the evolving safety requirements when generative models are integrated as agents into physical autonomous machines.
We recommend the development and implementation of comprehensive safety scorecards for the use of generative AI technologies in autonomous machines.
arXiv Detail & Related papers (2024-10-20T20:07:08Z)
- EARBench: Towards Evaluating Physical Risk Awareness for Task Planning of Foundation Model-based Embodied AI Agents [53.717918131568936]
Embodied artificial intelligence (EAI) integrates advanced AI models into physical entities for real-world interaction.
Foundation models, serving as the "brain" of EAI agents for high-level task planning, have shown promising results.
However, the deployment of these agents in physical environments presents significant safety challenges.
This study introduces EARBench, a novel framework for automated physical risk assessment in EAI scenarios.
arXiv Detail & Related papers (2024-08-08T13:19:37Z)
- Work-in-Progress: Crash Course: Can (Under Attack) Autonomous Driving Beat Human Drivers? [60.51287814584477]
This paper evaluates the inherent risks in autonomous driving by examining the current landscape of AVs.
We develop specific claims highlighting the delicate balance between the advantages of AVs and potential security challenges in real-world scenarios.
arXiv Detail & Related papers (2024-05-14T09:42:21Z)
- Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems [88.80306881112313]
We will introduce and define a family of approaches to AI safety, which we will refer to as guaranteed safe (GS) AI.
The core feature of these approaches is that they aim to produce AI systems which are equipped with high-assurance quantitative safety guarantees.
We outline a number of approaches for creating each of these three core components (a world model, a safety specification, and a verifier), describe the main technical challenges, and suggest a number of potential solutions to them.
arXiv Detail & Related papers (2024-05-10T17:38:32Z)
- Concept-Guided LLM Agents for Human-AI Safety Codesign [6.603483691167379]
Generative AI is increasingly important in software engineering, including safety engineering, where it is used to help ensure that software does not cause harm to people.
It is crucial to develop more advanced and sophisticated approaches that can effectively address the complexities and safety concerns of software systems.
We present an efficient, hybrid strategy to leverage Large Language Models for safety analysis and Human-AI codesign.
arXiv Detail & Related papers (2024-04-03T11:37:01Z)
- On STPA for Distributed Development of Safe Autonomous Driving: An Interview Study [0.7851536646859475]
System-Theoretic Process Analysis (STPA) is a novel method applied in safety-related fields like defense and aerospace.
STPA assumes prerequisites that are not fully valid in automotive systems engineering, which involves distributed system development and multiple levels of design abstraction.
This can be seen as a maintainability challenge in continuous development and deployment.
arXiv Detail & Related papers (2024-03-14T15:56:02Z)
- When Authentication Is Not Enough: On the Security of Behavioral-Based Driver Authentication Systems [53.2306792009435]
We develop two lightweight driver authentication systems based on Random Forest and Recurrent Neural Network architectures.
We are the first to propose attacks against these systems by developing two novel evasion attacks, SMARTCAN and GANCAN.
Through our contributions, we aid practitioners in safely adopting these systems, help reduce car thefts, and enhance driver security.
arXiv Detail & Related papers (2023-06-09T14:33:26Z)
- Evaluating Model-free Reinforcement Learning toward Safety-critical Tasks [70.76757529955577]
This paper revisits prior work in this scope from the perspective of state-wise safe RL.
We propose Unrolling Safety Layer (USL), a joint method that combines safety optimization and safety projection; a generic sketch of the projection idea appears after this list.
To facilitate further research in this area, we reproduce related algorithms in a unified pipeline and incorporate them into SafeRL-Kit.
arXiv Detail & Related papers (2022-12-12T06:30:17Z)
- From Machine Learning to Robotics: Challenges and Opportunities for Embodied Intelligence [113.06484656032978]
The article argues that embodied intelligence is a key driver for the advancement of machine learning technology.
We highlight challenges and opportunities specific to embodied intelligence.
We propose research directions which may significantly advance the state-of-the-art in robot learning.
arXiv Detail & Related papers (2021-10-28T16:04:01Z)
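The Unrolling Safety Layer entry above combines safety optimization with safety projection. As a generic illustration of the projection step only (not the USL algorithm itself), the sketch below shows the classic closed-form projection of a proposed action onto a linearized safety constraint c + g·a <= 0; the linear cost model and all values are assumptions for illustration.

```python
import numpy as np

def project_to_safe_action(action, g, c):
    """Project a proposed action onto the half-space {a : c + g.a <= 0}.

    A generic safety-projection step: if the linearized safety
    constraint is violated, move the action the minimum distance
    needed to satisfy it; otherwise return it unchanged.
    """
    violation = c + g @ action
    if violation <= 0.0:  # already safe: no correction needed
        return action
    # Closed-form solution of: min ||a' - action||^2  s.t.  c + g.a' <= 0
    return action - (violation / (g @ g)) * g

# Example: a 2-D action corrected against one linear safety constraint.
a = np.array([1.0, 0.5])   # action proposed by the policy (assumed)
g = np.array([2.0, 0.0])   # gradient of the safety cost w.r.t. the action
c = 0.5                    # safety-cost offset at the current state
print(project_to_safe_action(a, g, c))  # -> [-0.25, 0.5], which gives c + g.a' = 0
```

State-wise safe RL methods pair a projection like this with training-time safety optimization; the projection alone only guards against violations that the linearized cost model can represent.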