Building a Credible Case for Safety: Waymo's Approach for the
Determination of Absence of Unreasonable Risk
- URL: http://arxiv.org/abs/2306.01917v1
- Date: Fri, 2 Jun 2023 21:05:39 GMT
- Title: Building a Credible Case for Safety: Waymo's Approach for the
Determination of Absence of Unreasonable Risk
- Authors: Francesca Favaro, Laura Fraade-Blanar, Scott Schnelle, Trent Victor,
Mauricio Peña, Johan Engstrom, John Scanlon, Kris Kusano, Dan Smith
- Abstract summary: A safety case for fully autonomous operations is a formal way to explain how a company determines that an AV system is safe.
It involves an explanation of the system, the methodologies used to develop it, the metrics used to validate it and the actual results of validation tests.
This paper helps enable such alignment by providing foundational thinking into how a system is determined to be ready for deployment.
- Score: 2.2386635730984117
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This paper presents an overview of Waymo's approach to building a reliable
case for safety - a novel and thorough blueprint for use by any company
building fully autonomous driving systems. A safety case for fully autonomous
operations is a formal way to explain how a company determines that an AV
system is safe enough to be deployed on public roads without a human driver,
and it includes evidence to support that determination. It involves an
explanation of the system, the methodologies used to develop it, the metrics
used to validate it and the actual results of validation tests. Yet, in order
to develop a worthwhile safety case, it is first important to understand what
makes one credible and well crafted, and to align on evaluation criteria. This
paper helps enable such alignment by providing foundational thinking into not
only how a system is determined to be ready for deployment but also into
justifying that the set of acceptance criteria employed in such determination
is sufficient and that their evaluation (and associated methods) is credible.
The publication is structured around three complementary perspectives on safety
that build upon content published by Waymo since 2020: a layered approach to
safety; a dynamic approach to safety; and a credible approach to safety. The
proposed approach is methodology-agnostic, so that anyone in the space could
employ portions or all of it.
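While the three perspectives are conceptual, the backbone of any safety case is a structured argument tying a top-level claim to supporting evidence. Below is a minimal Python sketch of such a claim/evidence tree, in the spirit of goal-structuring approaches; the class names, claims, and evidence are illustrative assumptions, not Waymo's methodology or tooling.

```python
from dataclasses import dataclass, field

@dataclass
class Evidence:
    """A single piece of validation evidence (e.g., a test campaign result)."""
    description: str
    passed: bool

@dataclass
class Claim:
    """A safety claim supported by sub-claims and/or direct evidence."""
    statement: str
    sub_claims: list = field(default_factory=list)
    evidence: list = field(default_factory=list)

    def is_supported(self) -> bool:
        # A claim holds only if all its evidence passed and every sub-claim holds.
        return (all(e.passed for e in self.evidence)
                and all(c.is_supported() for c in self.sub_claims))

# Toy top-level determination with a layered decomposition (invented examples).
top = Claim(
    "The AV system poses no unreasonable risk in its ODD",
    sub_claims=[
        Claim("Hardware is reliable",
              evidence=[Evidence("fault-injection campaign", passed=True)]),
        Claim("Behavior is safe in nominal driving",
              evidence=[Evidence("simulated scenario suite", passed=True)]),
    ],
)
print(top.is_supported())  # True only if every leaf of the argument is supported
```

Representing the case as a tree makes the acceptance criteria explicit: the top-level determination holds only if every branch of the argument is backed by passing evidence.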
Related papers
- What Makes and Breaks Safety Fine-tuning? A Mechanistic Study [64.9691741899956]
Safety fine-tuning helps align Large Language Models (LLMs) with human preferences for their safe deployment.
We design a synthetic data generation framework that captures salient aspects of an unsafe input.
Using this, we investigate three well-known safety fine-tuning methods.
arXiv Detail & Related papers (2024-07-14T16:12:57Z)
- Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems [88.80306881112313]
We will introduce and define a family of approaches to AI safety, which we will refer to as guaranteed safe (GS) AI.
The core feature of these approaches is that they aim to produce AI systems equipped with high-assurance quantitative safety guarantees.
We outline a number of approaches for creating each of the three core components (a world model, a safety specification, and a verifier), describe the main technical challenges, and suggest a number of potential solutions to them.
arXiv Detail & Related papers (2024-05-10T17:38:32Z)
- Towards a Completeness Argumentation for Scenario Concepts [0.2184775414778289]
This paper argues for the sufficient completeness of a scenario concept using Goal Structuring Notation.
The methods are applied to a scenario concept and the inD dataset to demonstrate their usability.
arXiv Detail & Related papers (2024-04-02T13:29:38Z)
- Safeguarded Progress in Reinforcement Learning: Safe Bayesian Exploration for Control Policy Synthesis [63.532413807686524]
This paper addresses the problem of maintaining safety during training in Reinforcement Learning (RL).
We propose a new architecture that handles the trade-off between efficient progress and safety during exploration.
arXiv Detail & Related papers (2023-12-18T16:09:43Z)
- Towards Formal Fault Injection for Safety Assessment of Automated Systems [0.0]
This paper introduces formal fault injection, a fusion of formal methods and fault injection throughout the development lifecycle.
We advocate for a more cohesive approach by identifying five areas of mutual support between formal methods and fault injection.
arXiv Detail & Related papers (2023-11-16T11:34:18Z)
- Ring-A-Bell! How Reliable are Concept Removal Methods for Diffusion Models? [52.238883592674696]
Ring-A-Bell is a model-agnostic red-teaming tool for text-to-image (T2I) diffusion models.
It identifies problematic prompts for diffusion models with the corresponding generation of inappropriate content.
Our results show that Ring-A-Bell, by manipulating safe prompting benchmarks, can transform prompts that were originally regarded as safe to evade existing safety mechanisms.
arXiv Detail & Related papers (2023-10-16T02:11:20Z)
- Safety of autonomous vehicles: A survey on Model-based vs. AI-based approaches [1.370633147306388]
This survey reviews research on methods and concepts that define an overall control architecture for AVs.
The review highlights work that uses either model-based methods or AI-based approaches.
The paper ends with a discussion of the methods used to guarantee the safety of AVs, namely safety verification techniques and the standardization/generalization of safety frameworks.
arXiv Detail & Related papers (2023-05-29T08:05:32Z)
- Evaluating Model-free Reinforcement Learning toward Safety-critical Tasks [70.76757529955577]
This paper revisits prior work in this scope from the perspective of state-wise safe RL.
We propose Unrolling Safety Layer (USL), a joint method that combines safety optimization and safety projection.
To facilitate further research in this area, we reproduce related algorithms in a unified pipeline and incorporate them into SafeRL-Kit.
arXiv Detail & Related papers (2022-12-12T06:30:17Z)
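As a concrete illustration of the projection idea in the entry above, the sketch below gradient-descends on an action until a cost model predicts no constraint violation. The cost_critic function and all numbers are invented stand-ins for the learned state-wise cost critic in USL, not the paper's implementation.

```python
def cost_critic(s, a):
    # Toy predicted constraint violation: unsafe whenever s + a exceeds 1.0.
    # A hypothetical stand-in for a learned critic Q_c(s, a).
    return max(0.0, (s + a) - 1.0)

def project_action(s, a, threshold=0.0, lr=0.1, max_iters=50, eps=1e-4):
    """Gradient-descend on the action until the predicted cost is within threshold."""
    for _ in range(max_iters):
        c = cost_critic(s, a)
        if c <= threshold:
            break
        grad = (cost_critic(s, a + eps) - c) / eps  # finite-difference gradient
        a -= lr * grad
    return a

s, raw_action = 0.8, 0.5                  # the raw action would violate s + a <= 1.0
safe_action = project_action(s, raw_action)
print(safe_action, cost_critic(s, safe_action))  # projected to the boundary, cost ~0.0
```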
- An Empirical Analysis of the Use of Real-Time Reachability for the Safety Assurance of Autonomous Vehicles [7.1169864450668845]
We propose using a real-time reachability algorithm for the implementation of the simplex architecture to assure the safety of a 1/10 scale open source autonomous vehicle platform.
In our approach, the need to analyze an underlying controller is abstracted away, instead focusing on the effects of the controller's decisions on the system's future states.
arXiv Detail & Related papers (2022-05-03T11:12:29Z)
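The sketch below illustrates the simplex pattern from the entry above: a reachability check on a simple model decides whether to keep the performance controller or fall back to a safety controller. The 1-D constant-acceleration dynamics, horizon, and numbers are invented for illustration; the paper computes real-time reachable sets for a 1/10-scale vehicle.

```python
def reachable_interval(pos, vel, a_min, a_max, horizon):
    """Interval of positions reachable within `horizon` seconds (1-D model)."""
    lo = pos + vel * horizon + 0.5 * a_min * horizon**2
    hi = pos + vel * horizon + 0.5 * a_max * horizon**2
    return lo, hi

def choose_controller(pos, vel, obstacle_pos, horizon=1.0, a_min=-3.0, a_max=2.0):
    """Run the performance controller unless the reachable set can hit the obstacle."""
    _, hi = reachable_interval(pos, vel, a_min, a_max, horizon)
    if hi >= obstacle_pos:
        return "safety_controller"   # e.g., maximum braking
    return "performance_controller"  # e.g., the complex or learned controller

print(choose_controller(pos=0.0, vel=5.0, obstacle_pos=20.0))   # performance_controller
print(choose_controller(pos=14.0, vel=5.0, obstacle_pos=20.0))  # safety_controller
```

The appeal of the pattern is that only the reachability check and the safety controller need assurance; the performance controller itself is abstracted away.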
- Bootstrapping confidence in future safety based on past safe operation [0.0]
We show an approach to gaining confidence that a system's probability of causing accidents is low enough, based on its early phases of operation.
This formalises the common approach of operating a system on a limited basis in the hope that mishap-free operation will confirm one's confidence in its safety.
arXiv Detail & Related papers (2021-10-20T18:36:23Z)
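To see why confidence bootstrapped purely from mishap-free operation grows slowly, here is a minimal worked example assuming i.i.d. missions and a uniform Beta(1, 1) prior on the per-mission accident probability; the paper's Bayesian treatment is more general than this sketch.

```python
def confidence_p_below(bound, failure_free_missions):
    """P(p < bound | n failure-free missions) under a uniform Beta(1, 1) prior.

    The posterior is Beta(1, 1 + n), whose CDF at x is 1 - (1 - x)**(n + 1).
    """
    n = failure_free_missions
    return 1.0 - (1.0 - bound) ** (n + 1)

# After 1,000 mishap-free missions, how sure are we that p < 1e-3?
print(confidence_p_below(1e-3, 1000))  # ~0.63
```

Even after 1,000 failure-free missions, the posterior confidence that p < 10^-3 is only about 63%, which is why such operational arguments are typically combined with other evidence.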
- Evaluating the Safety of Deep Reinforcement Learning Models using Semi-Formal Verification [81.32981236437395]
We present a semi-formal verification approach for decision-making tasks based on interval analysis.
Our method obtains results on standard benchmarks comparable to those of formal verifiers.
Our approach allows safety properties of decision-making models to be evaluated efficiently in practical applications.
arXiv Detail & Related papers (2020-10-19T11:18:06Z)
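As a flavor of the interval analysis underlying this approach, the sketch below propagates an input box through a tiny ReLU network and checks whether an output safety property holds over the whole region. The weights and the property are invented for illustration, not taken from the paper.

```python
import numpy as np

def interval_affine(lo, hi, W, b):
    """Bound W @ x + b for x in [lo, hi], using standard interval arithmetic."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

# Toy two-layer ReLU network (invented weights).
W1, b1 = np.array([[1.0, -1.0], [0.5, 0.5]]), np.zeros(2)
W2, b2 = np.array([[1.0, 1.0]]), np.array([-2.0])

lo, hi = np.array([0.0, 0.0]), np.array([1.0, 1.0])  # input perturbation region
lo, hi = interval_affine(lo, hi, W1, b1)
lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)    # ReLU is monotone: exact bounds
lo, hi = interval_affine(lo, hi, W2, b2)

# Safety property: the output never exceeds 0 anywhere in the region.
print("property holds" if hi[0] <= 0.0 else "cannot certify")
```

If the computed upper bound stays within the threshold, the property is certified for the entire region; otherwise the result is inconclusive, which is what makes the approach semi-formal rather than complete.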