Deep Learning Safety Concerns in Automated Driving Perception
- URL: http://arxiv.org/abs/2309.03774v3
- Date: Fri, 12 Jul 2024 11:46:08 GMT
- Title: Deep Learning Safety Concerns in Automated Driving Perception
- Authors: Stephanie Abrecht, Alexander Hirsch, Shervin Raafatnia, Matthias Woehrle
- Abstract summary: This paper introduces an additional categorization of safety concerns that supports a better understanding and enables cross-functional teams to jointly address them.
Recent advances in the field of deep learning and impressive performance of deep neural networks (DNNs) for perception have resulted in an increased demand for their use in automated driving (AD) systems.
- Score: 43.026485214492105
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advances in the field of deep learning and the impressive performance of deep neural networks (DNNs) for perception have resulted in an increased demand for their use in automated driving (AD) systems. The safety of such systems is of utmost importance and thus requires consideration of the unique properties of DNNs. In order to achieve safety of AD systems with DNN-based perception components in a systematic and comprehensive manner, so-called safety concerns have been introduced as a suitable structuring element. On the one hand, the concept of safety concerns is -- by design -- well aligned with existing standards relevant for the safety of AD systems, such as ISO 21448 (SOTIF). On the other hand, it has already inspired several academic publications and upcoming standards on AI safety, such as ISO PAS 8800. While the concept of safety concerns has been introduced previously, this paper extends and refines it, leveraging feedback from various domain and safety experts in the field. In particular, this paper introduces an additional categorization that supports a better understanding and enables cross-functional teams to jointly address the concerns.
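As a purely illustrative aside, one way such a categorization can support cross-functional work is to track each concern as a structured record. The following Python sketch is hypothetical: the concern names, categories, and fields are assumptions for illustration and do not reproduce the paper's actual categorization.

```python
# A hypothetical sketch of tracking safety concerns as structured records.
# Concern names, categories, and fields are illustrative assumptions; they
# are not taken from the paper.
from dataclasses import dataclass, field
from enum import Enum

class Category(Enum):
    # Hypothetical categories, grouping concerns by the expertise needed.
    DATA = "data"                # e.g., gaps in the training/test data
    MODEL = "model"              # e.g., brittleness, overconfidence
    INTEGRATION = "integration"  # e.g., interplay with downstream planning

@dataclass
class SafetyConcern:
    name: str
    category: Category
    description: str
    owners: list = field(default_factory=list)       # cross-functional teams
    mitigations: list = field(default_factory=list)  # planned countermeasures

concerns = [
    SafetyConcern(
        name="distributional shift",
        category=Category.DATA,
        description="Operating conditions drift away from the training data.",
        owners=["data engineering", "safety"],
        mitigations=["ODD monitoring", "targeted data collection"],
    ),
]

# Group by category so each team sees the concerns in its area.
for concern in concerns:
    print(f"{concern.category.value}: {concern.name} -> {concern.owners}")
```

Keeping owners per concern mirrors the stated goal of letting cross-functional teams jointly address their share of the concerns.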
Related papers
- Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems [88.80306881112313]
We will introduce and define a family of approaches to AI safety, which we will refer to as guaranteed safe (GS) AI.
The core feature of these approaches is that they aim to produce AI systems which are equipped with high-assurance quantitative safety guarantees.
We outline a number of approaches for creating each of the framework's three core components (a world model, a safety specification, and a verifier), describe the main technical challenges, and suggest a number of potential solutions to them.
arXiv Detail & Related papers (2024-05-10T17:38:32Z)
- Redefining Safety for Autonomous Vehicles [0.9208007322096532]
Existing definitions and associated conceptual frameworks for computer-based system safety should be revisited.
Operation without a human driver dramatically increases the scope of safety concerns.
We propose updated definitions for core system safety concepts.
arXiv Detail & Related papers (2024-04-25T17:22:43Z)
- Scaling #DNN-Verification Tools with Efficient Bound Propagation and Parallel Computing [57.49021927832259]
Deep Neural Networks (DNNs) are powerful tools that have shown extraordinary results in many scenarios.
However, their intricate designs and lack of transparency raise safety concerns when applied in real-world applications.
Formal Verification (FV) of DNNs has emerged as a valuable solution to provide provable guarantees on the safety aspect; a minimal bound-propagation sketch is given after this list.
arXiv Detail & Related papers (2023-12-10T13:51:25Z)
- Leveraging Traceability to Integrate Safety Analysis Artifacts into the Software Development Process [51.42800587382228]
Safety assurance cases (SACs) can be challenging to maintain during system evolution.
We propose a solution that leverages software traceability to connect relevant system artifacts to safety analysis models.
We elicit design rationales for system changes to help safety stakeholders analyze the impact of system changes on safety.
arXiv Detail & Related papers (2023-07-14T16:03:27Z)
- Safe Perception -- A Hierarchical Monitor Approach [0.0]
We propose a novel hierarchical monitoring approach for AI-based perception systems.
It can reliably detect missed detections while maintaining a very low false alarm rate.
arXiv Detail & Related papers (2022-08-01T13:09:24Z)
- System Safety and Artificial Intelligence [0.0]
New applications of AI across societal domains come with new hazards.
The field of system safety has dealt with accidents and harm in safety-critical systems.
This chapter honors system safety pioneer Nancy Leveson.
arXiv Detail & Related papers (2022-02-18T16:37:54Z)
- The missing link: Developing a safety case for perception components in automated driving [10.43163823170716]
Perception is a key aspect of automated driving (AD) systems and relies heavily on Machine Learning (ML).
Despite the known challenges with the safety assurance of ML-based components, proposals have recently emerged for unit-level safety cases addressing these components.
We propose a generic template for such a linking argument specifically tailored for perception components.
arXiv Detail & Related papers (2021-08-30T15:12:27Z)
- Inspect, Understand, Overcome: A Survey of Practical Methods for AI Safety [54.478842696269304]
The use of deep neural networks (DNNs) in safety-critical applications is challenging due to numerous model-inherent shortcomings.
In recent years, a zoo of state-of-the-art techniques aiming to address these safety concerns has emerged.
Our paper addresses both machine learning experts and safety engineers.
arXiv Detail & Related papers (2021-04-29T09:54:54Z)
- Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
arXiv Detail & Related papers (2020-10-19T13:09:31Z)
- Strategy to Increase the Safety of a DNN-based Perception for HAD Systems [6.140206215951371]
Safety is one of the most important development goals for automated driving systems.
For such systems, large parts of the traditional safety processes and requirements are not fully applicable or sufficient.
This paper presents a framework for the description and mitigation of DNN insufficiencies and the derivation of relevant safety mechanisms.
arXiv Detail & Related papers (2020-02-20T18:32:53Z)
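As referenced in the Scaling #DNN-Verification entry above, the following is a minimal sketch of interval bound propagation (IBP), one common bound-propagation technique behind DNN verification tools. The network weights, perturbation budget, and output property are toy assumptions, not taken from the cited paper, and the sketch omits the efficiency and parallelization aspects that the paper addresses.

```python
# Minimal interval bound propagation (IBP) through a tiny ReLU network.
# All weights and the checked property are toy assumptions for illustration.
import numpy as np

def ibp_affine(lo, hi, W, b):
    """Propagate the box [lo, hi] through x -> W @ x + b."""
    center = (lo + hi) / 2.0
    radius = (hi - lo) / 2.0
    new_center = W @ center + b
    new_radius = np.abs(W) @ radius  # worst-case growth of the box
    return new_center - new_radius, new_center + new_radius

def ibp_relu(lo, hi):
    """ReLU is monotone, so it maps interval endpoints directly."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

# Toy two-layer network with hypothetical weights.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)

# Input box: a nominal input perturbed by at most eps per coordinate.
x = np.array([0.5, -0.2, 0.1])
eps = 0.05
lo, hi = x - eps, x + eps

lo, hi = ibp_affine(lo, hi, W1, b1)
lo, hi = ibp_relu(lo, hi)
lo, hi = ibp_affine(lo, hi, W2, b2)

# Toy property: output 0 exceeds output 1 for every input in the box.
print("verified:", bool(lo[0] > hi[1]))
```

IBP is sound but incomplete: if the interval check fails, the property may still hold, which is why verification tools combine tighter relaxations with strategies such as branch and bound.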