The missing link: Developing a safety case for perception components in
automated driving
- URL: http://arxiv.org/abs/2108.13294v1
- Date: Mon, 30 Aug 2021 15:12:27 GMT
- Title: The missing link: Developing a safety case for perception components in
automated driving
- Authors: Rick Salay, Krzysztof Czarnecki, Hiroshi Kuwajima, Hirotoshi Yasuoka,
Toshihiro Nakae, Vahdat Abdelzad, Chengjie Huang, Maximilian Kahn, Van Duong
Nguyen
- Abstract summary: Perception is a key aspect of automated driving (AD) systems that relies heavily on Machine Learning (ML).
Despite the known challenges with the safety assurance of ML-based components, proposals have recently emerged for unit-level safety cases addressing these components.
We propose a generic template for the linking argument that connects system-level safety requirements to unit-level component performance requirements, specifically tailored for perception components.
- Score: 10.43163823170716
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Safety assurance is a central concern for the development and societal
acceptance of automated driving (AD) systems. Perception is a key aspect of AD
that relies heavily on Machine Learning (ML). Despite the known challenges with
the safety assurance of ML-based components, proposals have recently emerged
for unit-level safety cases addressing these components. Unfortunately, AD
safety cases express safety requirements at the system level, and these efforts
are missing the critical linking argument connecting safety requirements at the
system level to component performance requirements at the unit level. In this
paper, we propose a generic template for such a linking argument specifically
tailored for perception components. The template takes a deductive and formal
approach to define strong traceability between levels. We demonstrate the
applicability of the template with a detailed case study and discuss its use as
a tool to support incremental development of perception components.
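For intuition only, the following is a minimal sketch of what a deductive linking argument of this kind might look like. It is not the paper's actual template; the predicates, the tolerance tau, and the robustness premise are hypothetical placeholders introduced purely for illustration.

% Illustrative sketch only -- the symbols below (D, hazard, err_perc, tau,
% R_sys, R_perc) are hypothetical placeholders, not notation from the paper.
\documentclass{article}
\usepackage{amsmath}
\begin{document}

% System-level safety requirement: no hazardous behaviour anywhere in the
% operational domain D.
\[
R_{\mathrm{sys}} \;\equiv\; \forall s \in D :\ \neg\,\mathrm{hazard}(s)
\]

% Unit-level perception performance requirement: the perception error stays
% within a tolerance tau on every scenario in D.
\[
R_{\mathrm{perc}} \;\equiv\; \forall s \in D :\ \mathrm{err}_{\mathrm{perc}}(s) \le \tau
\]

% Linking argument (deductive form): a robustness premise states that the
% downstream components tolerate perception errors up to tau; together with
% R_perc it entails R_sys.
\[
\bigl(\forall s \in D :\ \mathrm{err}_{\mathrm{perc}}(s) \le \tau
   \;\Rightarrow\; \neg\,\mathrm{hazard}(s)\bigr)
\;\wedge\; R_{\mathrm{perc}}
\;\Rightarrow\; R_{\mathrm{sys}}
\]

\end{document}

Written this way, the robustness premise becomes an explicit, separately auditable claim about the downstream system rather than an implicit assumption, which is roughly the kind of traceability a deductive, formal template can make precise.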
Related papers
- What Makes and Breaks Safety Fine-tuning? A Mechanistic Study [64.9691741899956]
Safety fine-tuning helps align Large Language Models (LLMs) with human preferences for their safe deployment.
We design a synthetic data generation framework that captures salient aspects of an unsafe input.
Using this, we investigate three well-known safety fine-tuning methods.
arXiv Detail & Related papers (2024-07-14T16:12:57Z)
- Cross-Modality Safety Alignment [73.8765529028288]
We introduce a novel safety alignment challenge called Safe Inputs but Unsafe Output (SIUO) to evaluate cross-modality safety alignment.
To empirically investigate this problem, we developed the SIUO, a cross-modality benchmark encompassing 9 critical safety domains, such as self-harm, illegal activities, and privacy violations.
Our findings reveal substantial safety vulnerabilities in both closed- and open-source LVLMs, underscoring the inadequacy of current models to reliably interpret and respond to complex, real-world scenarios.
arXiv Detail & Related papers (2024-06-21T16:14:15Z)
- Towards Comprehensive and Efficient Post Safety Alignment of Large Language Models via Safety Patching [77.36097118561057]
SafePatching is a novel framework for comprehensive and efficient post safety alignment (PSA).
SafePatching achieves more comprehensive and efficient PSA than baseline methods.
arXiv Detail & Related papers (2024-05-22T16:51:07Z)
- Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems [88.80306881112313]
We introduce and define a family of approaches to AI safety, which we refer to as guaranteed safe (GS) AI.
The core feature of these approaches is that they aim to produce AI systems equipped with high-assurance quantitative safety guarantees.
We outline a number of approaches for creating the core components of such systems, describe the main technical challenges, and suggest a number of potential solutions to them.
arXiv Detail & Related papers (2024-05-10T17:38:32Z)
- Simulation-based Safety Assurance for an AVP System incorporating Learning-Enabled Components [0.6526824510982802]
Testing, verification, and validation of AD/ADAS safety-critical applications remain among the main challenges.
We describe a simulation-based development platform designed to verify and validate safety-critical learning-enabled systems.
arXiv Detail & Related papers (2023-09-28T09:00:31Z)
- Leveraging Traceability to Integrate Safety Analysis Artifacts into the Software Development Process [51.42800587382228]
Safety assurance cases (SACs) can be challenging to maintain during system evolution.
We propose a solution that leverages software traceability to connect relevant system artifacts to safety analysis models.
We elicit design rationales for system changes to help safety stakeholders analyze their impact on safety.
arXiv Detail & Related papers (2023-07-14T16:03:27Z)
- Online Safety Property Collection and Refinement for Safe Deep Reinforcement Learning in Mapless Navigation [79.89605349842569]
We introduce the Collection and Refinement of Online Properties (CROP) framework to design properties at training time.
CROP employs a cost signal to identify unsafe interactions and uses them to shape safety properties.
We evaluate our approach on several robotic mapless navigation tasks and demonstrate that the violation metric computed with CROP enables higher returns and fewer violations than previous Safe DRL approaches.
arXiv Detail & Related papers (2023-02-13T21:19:36Z)
- Integrating Testing and Operation-related Quantitative Evidences in Assurance Cases to Argue Safety of Data-Driven AI/ML Components [2.064612766965483]
In the future, AI will increasingly find its way into systems that can potentially cause physical harm to humans.
For such safety-critical systems, it must be demonstrated that their residual risk does not exceed what is acceptable.
This paper proposes a more holistic argumentation structure for arguing that this target has been achieved.
arXiv Detail & Related papers (2022-02-10T20:35:25Z)
- Reliability Assessment and Safety Arguments for Machine Learning Components in Assuring Learning-Enabled Autonomous Systems [19.65793237440738]
We present an overall assurance framework for Learning-Enabled Systems (LES).
We then introduce a novel model-agnostic Reliability Assessment Model (RAM) for ML classifiers.
We discuss the model assumptions and the inherent challenges of assessing ML reliability uncovered by our RAM.
arXiv Detail & Related papers (2021-11-30T14:39:22Z)
- Safety Case Templates for Autonomous Systems [0.0]
This report documents safety assurance argument templates to support the deployment and operation of autonomous systems that include machine learning (ML) components.
The report also presents generic templates for argument defeaters and evidence confidence that can be used to strengthen, review, and adapt the templates as necessary.
arXiv Detail & Related papers (2021-01-29T15:49:37Z)
- Quantifying Assurance in Learning-enabled Systems [3.0938904602244355]
Dependability assurance of systems embedding machine learning components is a key step for their use in safety-critical applications.
This paper develops a quantitative notion of assurance that an LES is dependable, as a core component of its assurance case.
We illustrate the utility of assurance measures by application to a real world autonomous aviation system.
arXiv Detail & Related papers (2020-06-18T08:11:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.