Formal Modelling of Safety Architecture for Responsibility-Aware
Autonomous Vehicle via Event-B Refinement
- URL: http://arxiv.org/abs/2401.04875v1
- Date: Wed, 10 Jan 2024 02:02:06 GMT
- Title: Formal Modelling of Safety Architecture for Responsibility-Aware
Autonomous Vehicle via Event-B Refinement
- Authors: Tsutomu Kobayashi, Martin Bondu, Fuyuki Ishikawa
- Abstract summary: This paper describes our strategy and experience in modelling, deriving, and proving the safety conditions of AVs.
Our case study targets the state-of-the-art model of goal-aware responsibility-sensitive safety to argue over interactions with surrounding vehicles.
- Score: 1.45566585318013
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Ensuring the safety of autonomous vehicles (AVs) is the key requisite for
their acceptance in society. This complexity is the core challenge in formally
proving their safety conditions with AI-based black-box controllers and
surrounding objects under various traffic scenarios. This paper describes our
strategy and experience in modelling, deriving, and proving the safety
conditions of AVs with the Event-B refinement mechanism to reduce complexity.
Our case study targets the state-of-the-art model of goal-aware
responsibility-sensitive safety to argue over interactions with surrounding
vehicles. We also employ the Simplex architecture to involve advanced black-box
AI controllers. Our experience has demonstrated that the refinement mechanism
can be effectively used to gradually develop the complex system over scenario
variations.
Related papers
- SafeEmbodAI: a Safety Framework for Mobile Robots in Embodied AI Systems [5.055705635181593]
Embodied AI systems, including AI-powered robots that autonomously interact with the physical world, stand to be significantly advanced.
Improper safety management can lead to failures in complex environments and make the system vulnerable to malicious command injections.
We propose textitSafeEmbodAI, a safety framework for integrating mobile robots into embodied AI systems.
arXiv Detail & Related papers (2024-09-03T05:56:50Z) - Cross-Modality Safety Alignment [73.8765529028288]
We introduce a novel safety alignment challenge called Safe Inputs but Unsafe Output (SIUO) to evaluate cross-modality safety alignment.
To empirically investigate this problem, we developed the SIUO, a cross-modality benchmark encompassing 9 critical safety domains, such as self-harm, illegal activities, and privacy violations.
Our findings reveal substantial safety vulnerabilities in both closed- and open-source LVLMs, underscoring the inadequacy of current models to reliably interpret and respond to complex, real-world scenarios.
arXiv Detail & Related papers (2024-06-21T16:14:15Z) - SAFE-SIM: Safety-Critical Closed-Loop Traffic Simulation with Diffusion-Controllable Adversaries [94.84458417662407]
We introduce SAFE-SIM, a controllable closed-loop safety-critical simulation framework.
Our approach yields two distinct advantages: 1) generating realistic long-tail safety-critical scenarios that closely reflect real-world conditions, and 2) providing controllable adversarial behavior for more comprehensive and interactive evaluations.
We validate our framework empirically using the nuScenes and nuPlan datasets across multiple planners, demonstrating improvements in both realism and controllability.
arXiv Detail & Related papers (2023-12-31T04:14:43Z) - Empowering Autonomous Driving with Large Language Models: A Safety Perspective [82.90376711290808]
This paper explores the integration of Large Language Models (LLMs) into Autonomous Driving systems.
LLMs are intelligent decision-makers in behavioral planning, augmented with a safety verifier shield for contextual safety learning.
We present two key studies in a simulated environment: an adaptive LLM-conditioned Model Predictive Control (MPC) and an LLM-enabled interactive behavior planning scheme with a state machine.
arXiv Detail & Related papers (2023-11-28T03:13:09Z) - When Authentication Is Not Enough: On the Security of Behavioral-Based Driver Authentication Systems [53.2306792009435]
We develop two lightweight driver authentication systems based on Random Forest and Recurrent Neural Network architectures.
We are the first to propose attacks against these systems by developing two novel evasion attacks, SMARTCAN and GANCAN.
Through our contributions, we aid practitioners in safely adopting these systems, help reduce car thefts, and enhance driver security.
arXiv Detail & Related papers (2023-06-09T14:33:26Z) - Safety of autonomous vehicles: A survey on Model-based vs. AI-based
approaches [1.370633147306388]
It is proposed to review research on relevant methods and concepts defining an overall control architecture for AVs.
It is intended through this reviewing process to highlight researches that use either model-based methods or AI-based approaches.
This paper ends with discussions on the methods used to guarantee the safety of AVs namely: safety verification techniques and the standardization/generalization of safety frameworks.
arXiv Detail & Related papers (2023-05-29T08:05:32Z) - Evaluating Model-free Reinforcement Learning toward Safety-critical
Tasks [70.76757529955577]
This paper revisits prior work in this scope from the perspective of state-wise safe RL.
We propose Unrolling Safety Layer (USL), a joint method that combines safety optimization and safety projection.
To facilitate further research in this area, we reproduce related algorithms in a unified pipeline and incorporate them into SafeRL-Kit.
arXiv Detail & Related papers (2022-12-12T06:30:17Z) - Synergistic Redundancy: Towards Verifiable Safety for Autonomous
Vehicles [10.277825331268179]
We propose Synergistic Redundancy (SR) a safety architecture for complex cyber physical systems, like Autonomous Vehicle (AV)
SR provides verifiable safety guarantees against specific faults by decoupling the mission and safety tasks of the system.
Close coordination with the mission layer allows easier and early detection of safety critical faults in the system.
arXiv Detail & Related papers (2022-09-04T23:52:03Z) - An Empirical Analysis of the Use of Real-Time Reachability for the
Safety Assurance of Autonomous Vehicles [7.1169864450668845]
We propose using a real-time reachability algorithm for the implementation of the simplex architecture to assure the safety of a 1/10 scale open source autonomous vehicle platform.
In our approach, the need to analyze an underlying controller is abstracted away, instead focusing on the effects of the controller's decisions on the system's future states.
arXiv Detail & Related papers (2022-05-03T11:12:29Z) - Differentiable Control Barrier Functions for Vision-based End-to-End
Autonomous Driving [100.57791628642624]
We introduce a safety guaranteed learning framework for vision-based end-to-end autonomous driving.
We design a learning system equipped with differentiable control barrier functions (dCBFs) that is trained end-to-end by gradient descent.
arXiv Detail & Related papers (2022-03-04T16:14:33Z) - Towards Safe, Explainable, and Regulated Autonomous Driving [11.043966021881426]
We propose a framework that integrates autonomous control, explainable AI (XAI), and regulatory compliance.
We describe relevant XAI approaches that can help achieve the goals of the framework.
arXiv Detail & Related papers (2021-11-20T05:06:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this site (including all information) and is not responsible for any consequences.