The Open Autonomy Safety Case Framework
- URL: http://arxiv.org/abs/2404.05444v1
- Date: Mon, 8 Apr 2024 12:26:06 GMT
- Title: The Open Autonomy Safety Case Framework
- Authors: Michael Wagner, Carmen Carlan
- Abstract summary: Safety cases have become a best practice for measuring, managing, and communicating the safety of autonomous vehicles.
This paper introduces the Open Autonomy Safety Case Framework, developed over years of work with the autonomous vehicle industry.
- Score: 3.2995359570845917
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: A system safety case is a compelling, comprehensible, and valid argument, supported by convincing evidence, that a given system operating in a given environment satisfies its safety goals. Since the publication of UL 4600 in 2020, safety cases have become a best practice for measuring, managing, and communicating the safety of autonomous vehicles (AVs). Although UL 4600 provides guidance on how to build the safety case for an AV, the complexity of AVs and their operating environments, the novelty of the technologies used, and the need to comply with various regulations and technical standards and to address cybersecurity concerns and ethical considerations make developing safety cases for AVs challenging. To this end, safety case frameworks have been proposed that bring together strategies, argument templates, and other guidance to support safety case development. This paper introduces the Open Autonomy Safety Case Framework, developed over years of work with the autonomous vehicle industry, as a roadmap for how AVs can be deployed safely and responsibly.
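To make the structured-argument idea concrete, here is a minimal sketch of a GSN-style (Goal Structuring Notation) safety case as a Python data structure, where a claim counts as supported only when evidence backs every leaf beneath it. All node identifiers and claims are hypothetical illustrations, not content of the Open Autonomy Safety Case Framework itself.

```python
from dataclasses import dataclass, field

# Minimal GSN-style safety case: goals are claims, strategies decompose
# goals into sub-goals, and leaf solutions point at concrete evidence.
# All identifiers and claims below are illustrative, not from the OASCF.

@dataclass
class Node:
    id: str
    text: str
    children: list["Node"] = field(default_factory=list)

def supported(node: Node, evidence: set[str]) -> bool:
    """A leaf is supported if its id is in the evidence set; an internal
    node is supported only if all of its children are."""
    if not node.children:
        return node.id in evidence
    return all(supported(child, evidence) for child in node.children)

case = Node("G1", "The AV is acceptably safe within its ODD", [
    Node("S1", "Argue over hazard classes", [
        Node("G2", "Perception hazards are mitigated", [
            Node("E1", "Sensor validation test report")]),
        Node("G3", "Planning hazards are mitigated", [
            Node("E2", "Simulation campaign results")]),
    ]),
])

print(supported(case, {"E1"}))          # False: G3 still lacks evidence
print(supported(case, {"E1", "E2"}))    # True: every leaf is backed
```

A real safety case adds context, assumptions, and defeater analysis on top of this skeleton; the point here is only the goal/strategy/evidence decomposition that UL 4600-style arguments share.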
Related papers
- Safety cases for frontier AI [0.8987776881291144]
Safety cases are reports that make a structured argument, supported by evidence, that a system is safe enough in a given operational context.
Safety cases are already common in other safety-critical industries such as aviation and nuclear power.
We explain why they may also be a useful tool in frontier AI governance, both in industry self-regulation and government regulation.
arXiv Detail & Related papers (2024-10-28T22:08:28Z)
- Cross-Modality Safety Alignment [73.8765529028288]
We introduce a novel safety alignment challenge called Safe Inputs but Unsafe Output (SIUO) to evaluate cross-modality safety alignment.
To investigate this problem empirically, we developed the SIUO benchmark, which spans 9 critical safety domains, such as self-harm, illegal activities, and privacy violations.
Our findings reveal substantial safety vulnerabilities in both closed- and open-source large vision-language models (LVLMs), underscoring the inability of current models to reliably interpret and respond to complex, real-world scenarios.
arXiv Detail & Related papers (2024-06-21T16:14:15Z)
- Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems [88.80306881112313]
We introduce and define a family of approaches to AI safety, which we refer to as guaranteed safe (GS) AI.
The core feature of these approaches is that they aim to produce AI systems which are equipped with high-assurance quantitative safety guarantees.
We outline approaches for creating each of the three core components of such systems (a world model, a safety specification, and a verifier), describe the main technical challenges, and suggest potential solutions; a toy sketch of the pattern follows this entry.
arXiv Detail & Related papers (2024-05-10T17:38:32Z)
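The GS pattern can be pictured as a small runtime-assurance loop: a world model predicts the effect of a proposed action, and a verifier admits the action only if the predicted state satisfies an explicit safety specification. The one-dimensional example below is a hypothetical toy under assumed dynamics, barrier position, and braking limits, not code from the paper.

```python
# Toy illustration of the guaranteed-safe (GS) pattern: a world model, a
# safety specification, and a verifier gating proposed actions. The
# dynamics, barrier, and braking limits are assumptions for this sketch.

def world_model(pos: float, vel: float, accel: float,
                dt: float = 0.1) -> tuple[float, float]:
    """Predict the next state under a proposed acceleration."""
    return pos + vel * dt + 0.5 * accel * dt ** 2, vel + accel * dt

def safety_spec(pos: float, vel: float, barrier: float = 100.0,
                max_brake: float = 8.0) -> bool:
    """Spec: the vehicle must always be able to stop before the barrier."""
    return pos + vel ** 2 / (2 * max_brake) < barrier

def verifier(pos: float, vel: float, proposed_accel: float) -> float:
    """Admit the action only if the predicted state satisfies the spec;
    otherwise fall back to maximal braking."""
    next_pos, next_vel = world_model(pos, vel, proposed_accel)
    return proposed_accel if safety_spec(next_pos, next_vel) else -8.0

print(verifier(pos=50.0, vel=10.0, proposed_accel=2.0))  # 2.0 (admitted)
print(verifier(pos=95.0, vel=20.0, proposed_accel=2.0))  # -8.0 (fallback)
```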
- Redefining Safety for Autonomous Vehicles [0.9208007322096532]
Existing definitions and associated conceptual frameworks for computer-based system safety should be revisited.
Operation without a human driver dramatically increases the scope of safety concerns.
We propose updated definitions for core system safety concepts.
arXiv Detail & Related papers (2024-04-25T17:22:43Z)
- ACCESS: Assurance Case Centric Engineering of Safety-critical Systems [9.388301205192082]
Assurance cases are used to communicate and assess confidence in critical system properties such as safety and security.
In recent years, model-based system assurance approaches have gained popularity to improve the efficiency and quality of system assurance activities.
We show how model-based system assurance cases can trace to heterogeneous engineering artifacts.
arXiv Detail & Related papers (2024-03-22T14:29:50Z)
- Formal Modelling of Safety Architecture for Responsibility-Aware Autonomous Vehicle via Event-B Refinement [1.45566585318013]
This paper describes our strategy and experience in modelling, deriving, and proving the safety conditions of AVs.
Our case study targets a state-of-the-art model of goal-aware responsibility-sensitive safety to reason about interactions with surrounding vehicles; the sketch after this entry shows the classic RSS safe-distance condition that this line of work builds on.
arXiv Detail & Related papers (2024-01-10T02:02:06Z)
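For context on the kind of condition such formal models discharge, the sketch below computes the classic responsibility-sensitive safety (RSS) minimum safe longitudinal following distance from Shalev-Shwartz et al. The parameter values (response time, acceleration, and braking bounds) are illustrative assumptions.

```python
# Classic RSS minimum safe longitudinal distance (Shalev-Shwartz et al.):
# the rear vehicle may accelerate at up to a_max during its response time
# rho, then brakes at no less than b_min, while the front vehicle may
# brake at up to b_max. Parameter values here are illustrative only.

def rss_safe_distance(v_rear: float, v_front: float, rho: float = 1.0,
                      a_max: float = 3.0, b_min: float = 4.0,
                      b_max: float = 8.0) -> float:
    v_resp = v_rear + rho * a_max  # rear speed at the end of the response
    d = (v_rear * rho
         + 0.5 * a_max * rho ** 2
         + v_resp ** 2 / (2 * b_min)
         - v_front ** 2 / (2 * b_max))
    return max(0.0, d)

# Following a 20 m/s vehicle at 25 m/s requires roughly 99.5 m:
print(f"{rss_safe_distance(25.0, 20.0):.1f} m")
```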
- The Art of Defending: A Systematic Evaluation and Analysis of LLM Defense Strategies on Safety and Over-Defensiveness [56.174255970895466]
Large Language Models (LLMs) play an increasingly pivotal role in natural language processing applications.
This paper presents the Safety and Over-Defensiveness Evaluation (SODE) benchmark; a generic illustration of the trade-off it measures follows this entry.
arXiv Detail & Related papers (2023-12-30T17:37:06Z)
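The trade-off such a benchmark quantifies can be summarized by two rates: how often a model refuses unsafe prompts (safety) and how often it wrongly refuses safe ones (over-defensiveness). The sketch below is a generic, hypothetical illustration of those two metrics, not the SODE scoring code.

```python
# Generic illustration of the safety / over-defensiveness trade-off that
# benchmarks like SODE quantify; not the actual SODE evaluation code.

def evaluate(responses: list[dict]) -> dict:
    """Each item: {'unsafe_prompt': bool, 'refused': bool}."""
    unsafe = [r for r in responses if r["unsafe_prompt"]]
    safe = [r for r in responses if not r["unsafe_prompt"]]
    return {
        "safety": sum(r["refused"] for r in unsafe) / len(unsafe),
        "over_defensiveness": sum(r["refused"] for r in safe) / len(safe),
    }

results = [
    {"unsafe_prompt": True,  "refused": True},   # correctly refused
    {"unsafe_prompt": True,  "refused": False},  # safety failure
    {"unsafe_prompt": False, "refused": True},   # over-defensive
    {"unsafe_prompt": False, "refused": False},  # correctly answered
]
print(evaluate(results))  # {'safety': 0.5, 'over_defensiveness': 0.5}
```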
- A Counterfactual Safety Margin Perspective on the Scoring of Autonomous Vehicles' Riskiness [52.27309191283943]
This paper presents a data-driven framework for assessing the risk of different AVs' behaviors.
We propose the notion of a counterfactual safety margin: the minimum deviation from nominal behavior that could cause a collision (a toy search for this margin is sketched after the entry).
arXiv Detail & Related papers (2023-08-02T09:48:08Z)
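The counterfactual safety margin lends itself to a simple search: perturb the nominal behavior by growing amounts and report the smallest deviation that produces a collision. The one-dimensional toy below illustrates that idea under assumed geometry; it is not the paper's data-driven framework.

```python
# Toy 1-D illustration of a counterfactual safety margin: the smallest
# lateral deviation from the nominal path (offset 0) that collides with a
# static obstacle. Hypothetical example, not the paper's framework.

def collides(lateral_offset: float, obstacle_offset: float = 1.5,
             half_width: float = 0.5) -> bool:
    """Collision if ego and obstacle footprints (each 2*half_width wide) overlap."""
    return abs(lateral_offset - obstacle_offset) < 2 * half_width

def counterfactual_margin(step: float = 0.05, max_dev: float = 5.0) -> float:
    """Smallest deviation from nominal behavior that causes a collision."""
    for i in range(int(max_dev / step) + 1):
        deviation = i * step
        if collides(deviation) or collides(-deviation):
            return deviation
    return float("inf")  # no collision within the search range

print(f"margin = {counterfactual_margin():.2f} m")  # 0.55 (edges touch at 0.50)
```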
- Leveraging Traceability to Integrate Safety Analysis Artifacts into the Software Development Process [51.42800587382228]
Safety assurance cases (SACs) can be challenging to maintain during system evolution.
We propose a solution that leverages software traceability to connect relevant system artifacts to safety analysis models.
We elicit design rationales for system changes to help safety stakeholders analyze their impact on safety; a minimal trace-link sketch follows this entry.
arXiv Detail & Related papers (2023-07-14T16:03:27Z)
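One way to picture this traceability is a map from system artifacts to the safety analysis elements that depend on them, so that a change flags exactly the claims needing re-review. The artifact names and analysis identifiers below are hypothetical, and the sketch is not the authors' tooling.

```python
# Minimal sketch of safety traceability: trace links map system artifacts
# to the safety analysis elements that depend on them, so a change flags
# the claims needing re-review. All names here are hypothetical.

trace_links = {
    "perception/detector.py": {"FTA-12 (missed pedestrian)", "SAC goal G2"},
    "planner/config.yaml":    {"FTA-07 (late braking)", "SAC goal G3"},
    "docs/odd.md":            {"SAC goal G1"},
}

def impacted(changed_artifacts: set[str]) -> set[str]:
    """Safety analysis elements whose arguments must be re-examined."""
    hits: set[str] = set()
    for artifact in changed_artifacts:
        hits |= trace_links.get(artifact, set())
    return hits

# A change to the detector invalidates the linked analysis and claim:
print(sorted(impacted({"perception/detector.py"})))
# ['FTA-12 (missed pedestrian)', 'SAC goal G2']
```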
- Safety of autonomous vehicles: A survey on Model-based vs. AI-based approaches [1.370633147306388]
This paper reviews research on methods and concepts that define an overall control architecture for AVs, highlighting work that uses either model-based methods or AI-based approaches.
It closes with a discussion of the methods used to guarantee AV safety, namely safety verification techniques and the standardization/generalization of safety frameworks.
arXiv Detail & Related papers (2023-05-29T08:05:32Z)
- Towards Safer Generative Language Models: A Survey on Safety Risks, Evaluations, and Improvements [76.80453043969209]
This survey presents a framework for safety research pertaining to large models.
We begin by introducing safety issues of wide concern, then delve into safety evaluation methods for large models.
We explore the strategies for enhancing large model safety from training to deployment.
arXiv Detail & Related papers (2023-02-18T09:32:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of the information and is not responsible for any consequences of its use.