Towards the Unification and Data-Driven Synthesis of Autonomous Vehicle Safety Concepts
- URL: http://arxiv.org/abs/2107.14412v1
- Date: Fri, 30 Jul 2021 03:16:48 GMT
- Title: Towards the Unification and Data-Driven Synthesis of Autonomous Vehicle Safety Concepts
- Authors: Andrea Bajcsy, Karen Leung, Edward Schmerling, Marco Pavone
- Abstract summary: We advocate for the use of Hamilton-Jacobi (HJ) reachability as a unifying mathematical framework for comparing existing safety concepts.
We show that existing predominant safety concepts can be embedded in the HJ reachability framework, thereby enabling a common language for comparing and contrasting modeling assumptions.
- Score: 31.13851159912757
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As safety-critical autonomous vehicles (AVs) will soon become pervasive in
our society, a number of safety concepts for trusted AV deployment have been
recently proposed throughout industry and academia. Yet, agreeing upon an
"appropriate" safety concept is still an elusive task. In this paper, we
advocate for the use of Hamilton-Jacobi (HJ) reachability as a unifying
mathematical framework for comparing existing safety concepts, and propose ways
to expand its modeling premises in a data-driven fashion. Specifically, we show
that (i) existing predominant safety concepts can be embedded in the HJ
reachability framework, thereby enabling a common language for comparing and
contrasting modeling assumptions, and (ii) HJ reachability can serve as an
inductive bias to effectively reason, in a data-driven context, about two
critical, yet often overlooked aspects of safety: responsibility and
context-dependency.
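
For concreteness, a minimal sketch of the standard HJ reachability safety value is given below in generic notation; the dynamics f, failure margin g, control u, disturbance d, and horizon T are placeholder symbols and are not taken verbatim from the paper.

\[
  V(x, t) \;=\; \max_{u(\cdot)} \; \min_{d(\cdot)} \; \min_{\tau \in [t, T]} g\bigl(x(\tau)\bigr),
  \qquad \text{subject to } \dot{x} = f(x, u, d).
\]

Here g encodes the failure set (g(x) < 0 inside it), so a state x is certified safe at time t whenever V(x, t) >= 0. Under this reading, different safety concepts amount to different choices of dynamics, disturbance model, and horizon, which is what allows the framework to serve as a common language for comparison.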
Related papers
- SafeCast: Risk-Responsive Motion Forecasting for Autonomous Vehicles [12.607007386467329]
We present SafeCast, a risk-responsive motion forecasting model.
It integrates safety-aware decision-making with uncertainty-aware adaptability.
Our model achieves state-of-the-art (SOTA) accuracy while maintaining a lightweight architecture and low inference latency.
arXiv Detail & Related papers (2025-03-28T15:38:21Z)
- SafetyAnalyst: Interpretable, transparent, and steerable safety moderation for AI behavior [56.10557932893919]
We present SafetyAnalyst, a novel AI safety moderation framework.
Given an AI behavior, SafetyAnalyst uses chain-of-thought reasoning to analyze its potential consequences.
It aggregates all harmful and beneficial effects into a harmfulness score using fully interpretable weight parameters.
arXiv Detail & Related papers (2024-10-22T03:38:37Z)
- Safetywashing: Do AI Safety Benchmarks Actually Measure Safety Progress? [59.96471873997733]
We propose an empirical foundation for developing more meaningful safety metrics and define AI safety in a machine learning research context.
We aim to provide a more rigorous framework for AI safety research, advancing the science of safety evaluations and clarifying the path towards measurable progress.
arXiv Detail & Related papers (2024-07-31T17:59:24Z)
- Cross-Modality Safety Alignment [73.8765529028288]
We introduce a novel safety alignment challenge called Safe Inputs but Unsafe Output (SIUO) to evaluate cross-modality safety alignment.
To empirically investigate this problem, we developed the SIUO benchmark, which spans 9 critical safety domains such as self-harm, illegal activities, and privacy violations.
Our findings reveal substantial safety vulnerabilities in both closed- and open-source LVLMs, underscoring the inadequacy of current models to reliably interpret and respond to complex, real-world scenarios.
arXiv Detail & Related papers (2024-06-21T16:14:15Z)
- SafeInfer: Context Adaptive Decoding Time Safety Alignment for Large Language Models [5.6874111521946356]
Safety-aligned language models often exhibit fragile and imbalanced safety mechanisms.
We propose SafeInfer, a context-adaptive, decoding-time safety alignment strategy.
We also introduce HarmEval, a novel benchmark for extensive safety evaluations.
arXiv Detail & Related papers (2024-06-18T05:03:23Z)
- The Art of Defending: A Systematic Evaluation and Analysis of LLM Defense Strategies on Safety and Over-Defensiveness [56.174255970895466]
Large Language Models (LLMs) play an increasingly pivotal role in natural language processing applications.
This paper presents the Safety and Over-Defensiveness Evaluation (SODE) benchmark.
arXiv Detail & Related papers (2023-12-30T17:37:06Z)
- Empowering Autonomous Driving with Large Language Models: A Safety Perspective [82.90376711290808]
This paper explores the integration of Large Language Models (LLMs) into Autonomous Driving systems.
LLMs act as intelligent decision-makers in behavioral planning, augmented with a safety verifier shield for contextual safety learning.
We present two key studies in a simulated environment: an adaptive LLM-conditioned Model Predictive Control (MPC) and an LLM-enabled interactive behavior planning scheme with a state machine.
arXiv Detail & Related papers (2023-11-28T03:13:09Z)
- Ring-A-Bell! How Reliable are Concept Removal Methods for Diffusion Models? [52.238883592674696]
Ring-A-Bell is a model-agnostic red-teaming tool for T2I diffusion models.
It identifies problematic prompts that lead diffusion models to generate inappropriate content.
Our results show that Ring-A-Bell, by manipulating safe prompting benchmarks, can transform prompts that were originally regarded as safe to evade existing safety mechanisms.
arXiv Detail & Related papers (2023-10-16T02:11:20Z)
- A Counterfactual Safety Margin Perspective on the Scoring of Autonomous Vehicles' Riskiness [52.27309191283943]
This paper presents a data-driven framework for assessing the risk of different AVs' behaviors.
We propose the notion of counterfactual safety margin, which represents the minimum deviation from nominal behavior that could cause a collision.
arXiv Detail & Related papers (2023-08-02T09:48:08Z)
- Interpreting Safety Outcomes: Waymo's Performance Evaluation in the Context of a Broader Determination of Safety Readiness [0.0]
This paper highlights the need for a diversified approach to safety determination that complements the analysis of observed safety outcomes with other estimation techniques.
Our discussion highlights: the presentation of a "credibility paradox" within the comparison between ADS crash data and human-derived baselines, the recognition of continuous confidence growth through in-use monitoring, and the need to supplement any aggregate statistical analysis with appropriate event-level reasoning.
arXiv Detail & Related papers (2023-06-23T14:26:40Z)
- Safety-aware Policy Optimisation for Autonomous Racing [17.10371721305536]
We introduce Hamilton-Jacobi (HJ) reachability theory into the constrained Markov decision process (CMDP) framework.
We demonstrate that the HJ safety value can be learned directly on vision context.
We evaluate our method on several benchmark tasks, including Safety Gym and Learn-to-Race (L2R), a recently released high-fidelity autonomous racing environment; a toy sketch of the safety-value-gating idea appears after this list.
arXiv Detail & Related papers (2021-10-14T20:15:45Z)
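
As a rough, self-contained illustration of the safety-value idea referenced in the last entry above (not the implementation of any cited paper), the toy Python sketch below gates a nominal action with a hand-coded HJ-style value function; the dynamics, value function, and all names are hypothetical stand-ins, whereas the cited works learn the value function (e.g., from visual context) and fold the constraint into policy optimisation.

import numpy as np

def dynamics(x, u, dt=0.1):
    # Toy single-integrator stand-in for the true vehicle dynamics.
    return x + dt * u

def safety_value(x):
    # Stand-in for a learned HJ safety value: signed distance to a unit keep-out disk.
    return np.linalg.norm(x) - 1.0

def shielded_action(x, u_nominal, candidate_actions):
    # Keep the nominal action if its one-step-ahead safety value stays non-negative;
    # otherwise fall back to the candidate action with the best safety value.
    if safety_value(dynamics(x, u_nominal)) >= 0.0:
        return u_nominal
    scores = [safety_value(dynamics(x, u)) for u in candidate_actions]
    return candidate_actions[int(np.argmax(scores))]

if __name__ == "__main__":
    x = np.array([1.05, 0.0])                     # state just outside the keep-out set
    u_nominal = np.array([-1.0, 0.0])             # nominal action heads toward the keep-out set
    candidates = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([-1.0, 0.0])]
    print(shielded_action(x, u_nominal, candidates))  # overrides the unsafe nominal action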
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.