Interpreting Safety Outcomes: Waymo's Performance Evaluation in the
Context of a Broader Determination of Safety Readiness
- URL: http://arxiv.org/abs/2306.14923v1
- Date: Fri, 23 Jun 2023 14:26:40 GMT
- Title: Interpreting Safety Outcomes: Waymo's Performance Evaluation in the
Context of a Broader Determination of Safety Readiness
- Authors: Francesca M. Favaro, Trent Victor, Henning Hohnhold, Scott Schnelle
- Abstract summary: This paper highlights the need for a diversified approach to safety determination that complements the analysis of observed safety outcomes with other estimation techniques.
Our discussion highlights: the presentation of a "credibility paradox" within the comparison between ADS crash data and human-derived baselines, the recognition of continuous confidence growth through in-use monitoring, and the need to supplement any aggregate statistical analysis with appropriate event-level reasoning.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This paper frames recent publications from Waymo within the broader context
of the safety readiness determination for an Automated Driving System (ADS).
Starting from a brief overview of safety performance outcomes reported by Waymo
(i.e., contact events experienced during fully autonomous operations), this
paper highlights the need for a diversified approach to safety determination
that complements the analysis of observed safety outcomes with other estimation
techniques. Our discussion highlights: the presentation of a "credibility
paradox" within the comparison between ADS crash data and human-derived
baselines; the recognition of continuous confidence growth through in-use
monitoring; and the need to supplement any aggregate statistical analysis with
appropriate event-level reasoning.
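To make the aggregate-versus-event-level distinction concrete, the sketch below compares a hypothetical ADS contact-event rate against a hypothetical human-derived baseline using a simple Poisson interval. All counts, mileage figures, and the modeling choice (including the use of scipy) are assumptions for illustration only, not Waymo's data or methodology.

```python
# Illustrative sketch only: hypothetical counts and mileage, NOT Waymo data.
# Shows why aggregate rate comparisons over limited exposure carry wide
# uncertainty, motivating complementary estimation techniques and
# event-level reasoning.
from scipy import stats

def poisson_rate_ci(events: int, miles: float, confidence: float = 0.95):
    """Exact (Garwood) confidence interval for an event rate per million miles."""
    alpha = 1.0 - confidence
    lower = 0.0 if events == 0 else stats.chi2.ppf(alpha / 2, 2 * events) / 2
    upper = stats.chi2.ppf(1 - alpha / 2, 2 * (events + 1)) / 2
    scale = 1e6 / miles  # convert counts to a rate per million miles
    return events * scale, lower * scale, upper * scale

# Hypothetical numbers chosen only for illustration.
ads_rate, ads_lo, ads_hi = poisson_rate_ci(events=20, miles=4.0e6)
human_baseline = 6.0  # assumed human-derived rate per million miles

print(f"ADS rate: {ads_rate:.1f} per 1M miles (95% CI {ads_lo:.1f}-{ads_hi:.1f})")
print(f"Human baseline: {human_baseline:.1f} per 1M miles")
# A point estimate below the baseline can still have an interval overlapping
# it, illustrating why the paper argues that observed outcomes alone need to
# be complemented by other estimation techniques.
```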
Related papers
- Advancing Embodied Agent Security: From Safety Benchmarks to Input Moderation [52.83870601473094]
Embodied agents exhibit immense potential across a multitude of domains.
Existing research predominantly concentrates on the security of general large language models.
This paper introduces a novel input moderation framework, meticulously designed to safeguard embodied agents.
arXiv Detail & Related papers (2025-04-22T08:34:35Z)
- Position: Bayesian Statistics Facilitates Stakeholder Participation in Evaluation of Generative AI [0.0]
The evaluation of Generative AI (GenAI) systems plays a critical role in public policy and decision-making.
Existing methods are often limited by reliance on benchmark-driven, point-estimate comparisons.
This paper argues for the use of Bayesian statistics as a principled framework to address these challenges.
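A minimal sketch of what moving beyond point estimates can look like, assuming a simple Beta-Binomial model with hypothetical counts (an illustration only, not the framework proposed in that paper):

```python
# Minimal Beta-Binomial sketch: a posterior over a pass rate instead of a
# single benchmark point estimate. Hypothetical counts; not the cited
# paper's framework.
from scipy import stats

passes, failures = 178, 22          # hypothetical evaluation outcomes
prior_a, prior_b = 1.0, 1.0         # uniform Beta(1, 1) prior
posterior = stats.beta(prior_a + passes, prior_b + failures)

point_estimate = passes / (passes + failures)
lo, hi = posterior.ppf([0.025, 0.975])
print(f"Point estimate: {point_estimate:.3f}")
print(f"95% credible interval: [{lo:.3f}, {hi:.3f}]")
print(f"P(pass rate > 0.85): {posterior.sf(0.85):.3f}")
```

The posterior supports statements such as the probability that the pass rate exceeds a policy-relevant threshold, which a single benchmark score cannot express.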
arXiv Detail & Related papers (2025-04-21T16:31:15Z)
- SafetyAnalyst: Interpretable, transparent, and steerable safety moderation for AI behavior [56.10557932893919]
We present SafetyAnalyst, a novel AI safety moderation framework.
Given an AI behavior, SafetyAnalyst uses chain-of-thought reasoning to analyze its potential consequences.
It aggregates all harmful and beneficial effects into a harmfulness score using fully interpretable weight parameters.
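As a rough sketch of what such an aggregation could look like (the effect categories, severities, and weights below are hypothetical, not SafetyAnalyst's actual parameters):

```python
# Hypothetical weighted aggregation of harmful and beneficial effects into a
# single harmfulness score. Illustrative only; not SafetyAnalyst's parameters.
harmful_effects = {"physical_harm": 0.7, "privacy_violation": 0.2}  # severities in [0, 1]
beneficial_effects = {"user_assistance": 0.9}

# Interpretable weights: each effect category's contribution is explicit.
weights = {"physical_harm": 1.0, "privacy_violation": 0.6, "user_assistance": 0.4}

harm = sum(weights[k] * v for k, v in harmful_effects.items())
benefit = sum(weights[k] * v for k, v in beneficial_effects.items())
harmfulness_score = harm - benefit
print(f"harmfulness score: {harmfulness_score:+.2f}")  # positive => net harmful
```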
arXiv Detail & Related papers (2024-10-22T03:38:37Z)
- Multimodal Situational Safety [73.63981779844916]
We present the first evaluation and analysis of a novel safety challenge termed Multimodal Situational Safety.
For a multimodal large language model (MLLM) to respond safely, whether through language or action, it often needs to assess the safety implications of a language query within its corresponding visual context.
We develop the Multimodal Situational Safety benchmark (MSSBench) to assess the situational safety performance of current MLLMs.
arXiv Detail & Related papers (2024-10-08T16:16:07Z)
- Safetywashing: Do AI Safety Benchmarks Actually Measure Safety Progress? [59.96471873997733]
We propose an empirical foundation for developing more meaningful safety metrics and define AI safety in a machine learning research context.
We aim to provide a more rigorous framework for AI safety research, advancing the science of safety evaluations and clarifying the path towards measurable progress.
arXiv Detail & Related papers (2024-07-31T17:59:24Z)
- FREA: Feasibility-Guided Generation of Safety-Critical Scenarios with Reasonable Adversariality [13.240598841087841]
We introduce FREA, a novel safety-critical scenario generation method that incorporates the Largest Feasible Region (LFR) of the AV as guidance.
Experiments illustrate that FREA can effectively generate safety-critical scenarios, yielding a considerable number of near-miss events.
arXiv Detail & Related papers (2024-06-05T06:26:15Z)
- Safeguarded Progress in Reinforcement Learning: Safe Bayesian Exploration for Control Policy Synthesis [63.532413807686524]
This paper addresses the problem of maintaining safety during training in Reinforcement Learning (RL).
We propose a new architecture that handles the trade-off between efficient progress and safety during exploration.
arXiv Detail & Related papers (2023-12-18T16:09:43Z)
- ASSERT: Automated Safety Scenario Red Teaming for Evaluating the Robustness of Large Language Models [65.79770974145983]
ASSERT, Automated Safety Scenario Red Teaming, consists of three methods -- semantically aligned augmentation, target bootstrapping, and adversarial knowledge injection.
We partition our prompts into four safety domains for a fine-grained analysis of how the domain affects model performance.
We find statistically significant performance differences of up to 11% in absolute classification accuracy among semantically related scenarios, and absolute error rates of up to 19% in zero-shot adversarial settings.
arXiv Detail & Related papers (2023-10-14T17:10:28Z)
- A Counterfactual Safety Margin Perspective on the Scoring of Autonomous Vehicles' Riskiness [52.27309191283943]
This paper presents a data-driven framework for assessing the risk of different AVs' behaviors.
We propose the notion of counterfactual safety margin, which represents the minimum deviation from nominal behavior that could cause a collision.
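A toy sketch of the concept, using an invented one-dimensional braking scenario rather than the paper's data-driven framework: the margin is the smallest perturbation of the ego vehicle's nominal speed that leads to a collision.

```python
# Toy illustration of a counterfactual safety margin: the smallest deviation
# from nominal behavior that leads to a collision. All dynamics and numbers
# are hypothetical; this is not the cited paper's framework.
def collides(extra_speed: float, gap0: float = 30.0, dt: float = 0.1) -> bool:
    """1-D scenario: lead vehicle brakes hard; ego brakes less hard behind it."""
    lead_v, ego_v = 20.0, 20.0 + extra_speed   # m/s; deviation = extra speed
    gap = gap0
    for _ in range(200):                        # simulate 20 seconds
        lead_v = max(0.0, lead_v - 6.0 * dt)    # lead brakes at 6 m/s^2
        ego_v = max(0.0, ego_v - 5.0 * dt)      # ego brakes at 5 m/s^2
        gap += (lead_v - ego_v) * dt
        if gap <= 0.0:
            return True
    return False

# Counterfactual safety margin: smallest speed deviation that causes a crash.
margin = next((dv / 10 for dv in range(0, 200) if collides(dv / 10)), None)
print(f"counterfactual margin ~ {margin} m/s of extra speed")
```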
arXiv Detail & Related papers (2023-08-02T09:48:08Z)
- Towards Safer Generative Language Models: A Survey on Safety Risks, Evaluations, and Improvements [76.80453043969209]
This survey presents a framework for safety research pertaining to large models.
We begin by introducing safety issues of wide concern, then delve into safety evaluation methods for large models.
We explore the strategies for enhancing large model safety from training to deployment.
arXiv Detail & Related papers (2023-02-18T09:32:55Z)
- Safety Analysis of Autonomous Driving Systems Based on Model Learning [16.38592243376647]
We present a practical verification method for safety analysis of an autonomous driving system (ADS).
The main idea is to build a surrogate model that quantitatively depicts the behaviour of an ADS in the specified traffic scenario.
We demonstrate the utility of the proposed approach by evaluating safety properties on a state-of-the-art ADS from the literature.
arXiv Detail & Related papers (2022-11-23T06:52:40Z)
- Architectural patterns for handling runtime uncertainty of data-driven models in safety-critical perception [1.7616042687330642]
We present additional architectural patterns for handling uncertainty estimation.
We evaluate the four patterns qualitatively and quantitatively with respect to safety and performance gains.
We conclude that the consideration of context information of the driving situation makes it possible to accept more or less uncertainty depending on the inherent risk of the situation.
arXiv Detail & Related papers (2022-06-14T13:31:36Z)
- Towards the Unification and Data-Driven Synthesis of Autonomous Vehicle Safety Concepts [31.13851159912757]
We advocate for the use of Hamilton Jacobi (HJ) reachability as a unifying mathematical framework for comparing existing safety concepts.
We show that existing predominant safety concepts can be embedded in the HJ reachability framework, thereby enabling a common language for comparing and contrasting modeling assumptions.
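For orientation, one common statement of the HJ "avoid" problem is sketched below; sign and min/max conventions vary across references, so read this as an assumed generic form rather than the exact formulation used in the cited paper.

```latex
% One common HJ reachability formulation for an "avoid" problem (assumed here
% for illustration; conventions vary across references). l(x) encodes the
% failure set F = {x : l(x) <= 0}; dynamics are \dot{x} = f(x, u, d), with the
% control u trying to stay safe and the disturbance d trying to cause failure.
\min\Bigl\{ \partial_t V(x,t)
            + \max_{u \in \mathcal{U}} \min_{d \in \mathcal{D}}
              \nabla_x V(x,t) \cdot f(x,u,d),\;
            l(x) - V(x,t) \Bigr\} = 0,
\qquad V(x,T) = l(x).
```

The set {x : V(x,t) <= 0} is then the backward reachable tube of states from which failure cannot be avoided; in the unification the paper advocates, different safety concepts correspond to different modeling choices within this framework.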
arXiv Detail & Related papers (2021-07-30T03:16:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.