Synergistic Redundancy: Towards Verifiable Safety for Autonomous
Vehicles
- URL: http://arxiv.org/abs/2209.01710v1
- Date: Sun, 4 Sep 2022 23:52:03 GMT
- Title: Synergistic Redundancy: Towards Verifiable Safety for Autonomous
Vehicles
- Authors: Ayoosh Bansal, Simon Yu, Hunmin Kim, Bo Li, Naira Hovakimyan, Marco
Caccamo and Lui Sha
- Abstract summary: We propose Synergistic Redundancy (SR), a safety architecture for complex cyber-physical systems such as Autonomous Vehicles (AVs).
SR provides verifiable safety guarantees against specific faults by decoupling the mission and safety tasks of the system.
Close coordination with the mission layer allows easier and earlier detection of safety-critical faults in the system.
- Score: 10.277825331268179
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As Autonomous Vehicle (AV) development has progressed, concerns regarding the
safety of passengers and agents in their environment have risen. Each real-world
traffic collision involving autonomously controlled vehicles has
compounded this concern. Open source autonomous driving implementations show a
software architecture with complex interdependent tasks, heavily reliant on
machine learning and Deep Neural Networks (DNNs), which are vulnerable to
non-deterministic faults and corner cases. These complex subsystems work together
to fulfill the mission of the AV while also maintaining safety. Although
significant improvements are being made towards increasing the empirical
reliability of and confidence in these systems, the inherent limitations of DNN
verification create an as-yet insurmountable challenge in providing
deterministic safety guarantees in AVs.
We propose Synergistic Redundancy (SR), a safety architecture for complex
cyber-physical systems such as AVs. SR provides verifiable safety guarantees
against specific faults by decoupling the mission and safety tasks of the
system. While independently fulfilling their primary roles, the partially
functionally redundant mission and safety tasks are able to aid each other,
synergistically improving the combined system. The synergistic safety layer
uses only verifiable and logically analyzable software to fulfill its tasks.
Close coordination with the mission layer allows easier and earlier detection
of safety-critical faults in the system. SR simplifies the mission layer's
optimization goals and improves its design. SR enables the safe deployment of
high-performance, though inherently unverifiable, machine learning software.
In this work, we first present the design and features of the SR architecture
and then evaluate the efficacy of the solution, focusing on the crucial
problem of obstacle existence detection faults in AVs.
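As a rough, hypothetical illustration of this decoupling (not the authors'
implementation), the Python sketch below pairs an unverifiable mission-layer
DNN detector with a simple, logically analyzable safety-layer check for
obstacle existence; every interface, name, and threshold here is invented.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Hypothetical sketch of the SR decoupling idea: a mission-layer DNN
# proposes obstacle detections, while a verifiable safety-layer rule
# (counting LiDAR returns inside the planned driving corridor) guards
# against obstacle-existence faults. All thresholds are invented.

@dataclass
class Detection:
    distance: float    # longitudinal distance to the obstacle (m)
    confidence: float  # DNN confidence in [0, 1]

def safety_layer_obstacle_check(lidar_points: List[Tuple[float, float]],
                                halfwidth: float = 1.5,
                                horizon: float = 30.0,
                                min_points: int = 5) -> bool:
    """Logic-only check: do enough LiDAR returns fall inside the
    corridor rectangle [0, horizon] x [-halfwidth, +halfwidth]?
    Unlike a DNN, this rule can be analyzed exhaustively."""
    hits = sum(1 for (x, y) in lidar_points
               if 0.0 <= x <= horizon and abs(y) <= halfwidth)
    return hits >= min_points

def fused_decision(mission_detections: List[Detection],
                   lidar_points: List[Tuple[float, float]]) -> str:
    mission_sees = any(d.confidence > 0.5 for d in mission_detections)
    safety_sees = safety_layer_obstacle_check(lidar_points)
    if safety_sees and not mission_sees:
        # The mission layer missed an obstacle the verifiable layer can
        # see: a safety-critical fault, so the safety layer overrides.
        return "EMERGENCY_BRAKE"
    if mission_sees and not safety_sees:
        # Likely a DNN false positive; report it for replanning, but no
        # safety override is required.
        return "MISSION_REPLAN"
    return "NOMINAL"

# A point cluster ~12 m ahead that the DNN failed to report:
lidar = [(12.0, 0.2), (12.1, -0.3), (12.0, 0.0),
         (11.9, 0.4), (12.2, -0.1), (12.1, 0.3)]
print(fused_decision([], lidar))  # -> EMERGENCY_BRAKE
```

The point of the split is that the override path depends only on the small
geometric test, which, unlike the DNN, can be analyzed exhaustively.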
Related papers
- SafeEmbodAI: a Safety Framework for Mobile Robots in Embodied AI Systems [5.055705635181593]
Embodied AI systems, including AI-powered robots that autonomously interact with the physical world, stand to advance significantly.
Improper safety management can lead to failures in complex environments and make the system vulnerable to malicious command injections.
We propose SafeEmbodAI, a safety framework for integrating mobile robots into embodied AI systems.
arXiv Detail & Related papers (2024-09-03T05:56:50Z)
- Cross-Modality Safety Alignment [73.8765529028288]
We introduce a novel safety alignment challenge called Safe Inputs but Unsafe Output (SIUO) to evaluate cross-modality safety alignment.
To empirically investigate this problem, we developed the SIUO benchmark, encompassing nine critical safety domains such as self-harm, illegal activities, and privacy violations.
Our findings reveal substantial safety vulnerabilities in both closed- and open-source LVLMs, underscoring the inadequacy of current models to reliably interpret and respond to complex, real-world scenarios.
arXiv Detail & Related papers (2024-06-21T16:14:15Z)
- Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems [88.80306881112313]
We will introduce and define a family of approaches to AI safety, which we will refer to as guaranteed safe (GS) AI.
The core feature of these approaches is that they aim to produce AI systems which are equipped with high-assurance quantitative safety guarantees.
We outline a number of approaches for creating each of the framework's three core components (a world model, a safety specification, and a verifier), describe the main technical challenges, and suggest a number of potential solutions to them.
arXiv Detail & Related papers (2024-05-10T17:38:32Z)
- Highlighting the Safety Concerns of Deploying LLMs/VLMs in Robotics [54.57914943017522]
We highlight the critical issues of robustness and safety associated with integrating large language models (LLMs) and vision-language models (VLMs) into robotics applications.
arXiv Detail & Related papers (2024-02-15T22:01:45Z)
- Formal Modelling of Safety Architecture for Responsibility-Aware
Autonomous Vehicle via Event-B Refinement [1.45566585318013]
This paper describes our strategy and experience in modelling, deriving, and proving the safety conditions of AVs.
Our case study targets the state-of-the-art model of goal-aware responsibility-sensitive safety to reason about interactions with surrounding vehicles.
arXiv Detail & Related papers (2024-01-10T02:02:06Z)
- Scaling #DNN-Verification Tools with Efficient Bound Propagation and
Parallel Computing [57.49021927832259]
Deep Neural Networks (DNNs) are powerful tools that have shown extraordinary results in many scenarios.
However, their intricate designs and lack of transparency raise safety concerns when they are applied in real-world applications.
Formal Verification (FV) of DNNs has emerged as a valuable solution for providing provable safety guarantees (a minimal bound-propagation sketch appears after this list).
arXiv Detail & Related papers (2023-12-10T13:51:25Z)
- Leveraging Traceability to Integrate Safety Analysis Artifacts into the
Software Development Process [51.42800587382228]
Safety assurance cases (SACs) can be challenging to maintain during system evolution.
We propose a solution that leverages software traceability to connect relevant system artifacts to safety analysis models.
We elicit design rationales for system changes to help safety stakeholders analyze the impact of system changes on safety.
arXiv Detail & Related papers (2023-07-14T16:03:27Z)
- Evaluating Model-free Reinforcement Learning toward Safety-critical
Tasks [70.76757529955577]
This paper revisits prior work in this scope from the perspective of state-wise safe RL.
We propose Unrolling Safety Layer (USL), a joint method that combines safety optimization and safety projection.
To facilitate further research in this area, we reproduce related algorithms in a unified pipeline and incorporate them into SafeRL-Kit.
arXiv Detail & Related papers (2022-12-12T06:30:17Z)
- An Empirical Analysis of the Use of Real-Time Reachability for the
Safety Assurance of Autonomous Vehicles [7.1169864450668845]
We propose using a real-time reachability algorithm to implement the simplex architecture and assure the safety of a 1/10-scale open source autonomous vehicle platform.
In our approach, the need to analyze an underlying controller is abstracted away; instead, we focus on the effects of the controller's decisions on the system's future states (a schematic sketch of this switching logic appears after this list).
arXiv Detail & Related papers (2022-05-03T11:12:29Z)
- Smart and Secure CAV Networks Empowered by AI-Enabled Blockchain: Next
Frontier for Intelligent Safe-Driving Assessment [17.926728975133113]
Securing a safe-driving circumstance for connected and autonomous vehicles (CAVs) continues to be a widespread concern.
We propose a novel framework of Blockchain-enabled intElligent Safe-driving assessmenT (BEST) to offer a smart and reliable approach.
arXiv Detail & Related papers (2021-04-09T19:08:34Z)
- SoK: A Modularized Approach to Study the Security of Automatic Speech
Recognition Systems [13.553395767144284]
We present our systematization of knowledge for ASR security and provide a comprehensive taxonomy for existing work based on a modularized workflow.
We align the research in this domain with research on the security of Image Recognition Systems (IRS), which has been extensively studied.
Their similarities allow us to systematically study existing literature in ASR security based on the spectrum of attacks and defense solutions proposed for IRS.
In contrast, their differences, especially the greater complexity of ASR compared with IRS, highlight unique challenges and opportunities in ASR security.
arXiv Detail & Related papers (2021-03-19T06:24:04Z)
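Two entries above describe mechanisms concrete enough to sketch; the forward
references in those entries point here. First, the bound-propagation idea
behind the Scaling #DNN-Verification entry: push an input box through a ReLU
network to obtain bounds that provably contain every output. Below is a
minimal interval bound propagation (IBP) sketch with an invented toy network,
showing the textbook scheme rather than that paper's optimized parallel
tooling.

```python
import numpy as np

# Interval bound propagation through a ReLU MLP: given elementwise input
# bounds [lo, hi], compute output bounds guaranteed to contain f(x) for
# every x in the box. Textbook scheme; the toy network is invented.

def ibp_forward(layers, lo, hi):
    """layers: list of (W, b) pairs; lo/hi: elementwise input bounds."""
    for i, (W, b) in enumerate(layers):
        W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
        # Affine map: the lower bound pairs positive weights with lo and
        # negative weights with hi; the upper bound is the mirror image.
        new_lo = W_pos @ lo + W_neg @ hi + b
        new_hi = W_pos @ hi + W_neg @ lo + b
        lo, hi = new_lo, new_hi
        if i < len(layers) - 1:  # ReLU on hidden layers only
            lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
    return lo, hi

rng = np.random.default_rng(0)
layers = [(rng.normal(size=(8, 4)), np.zeros(8)),
          (rng.normal(size=(2, 8)), np.zeros(2))]
x, eps = np.array([0.5, -0.2, 0.1, 0.0]), 0.05
lo, hi = ibp_forward(layers, x - eps, x + eps)
# If hi[1] < lo[0], class 0 is provably selected on the whole box: a
# conservative but deterministic guarantee of the kind FV tools provide.
print(lo, hi)
```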
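Second, the switching logic behind the real-time reachability entry:
over-approximate the states the vehicle could reach under the untrusted
mission controller and hand control to a verified fallback whenever that set
can touch an unsafe region. The 1-D dynamics, bounds, and names below are all
invented for illustration.

```python
# Schematic simplex decision logic: forecast worst-case positions under
# any admissible mission-controller command and switch to the verified
# safety controller when the reachable interval can reach the obstacle.
# 1-D double-integrator model; every constant here is invented.

A_MAX = 3.0         # actuator limit (m/s^2), assumed
HORIZON = 1.0       # decision look-ahead (s), assumed
OBSTACLE_AT = 20.0  # unsafe region boundary (m), assumed

def reachable_positions(pos, vel, horizon=HORIZON):
    """Over-approximate positions reachable within `horizon` seconds
    under any command the mission controller could issue."""
    spread = 0.5 * A_MAX * horizon ** 2
    return pos + vel * horizon - spread, pos + vel * horizon + spread

def simplex_decision(pos, vel):
    _, worst_case = reachable_positions(pos, vel)
    if worst_case >= OBSTACLE_AT:
        return "SAFETY_CONTROLLER"  # verified fallback, e.g. max braking
    return "MISSION_CONTROLLER"     # untrusted high-performance control

print(simplex_decision(pos=10.0, vel=8.0))  # 19.5 m worst case -> MISSION
print(simplex_decision(pos=12.0, vel=8.0))  # 21.5 m worst case -> SAFETY
```

As in SR, the property to verify shrinks to the decision rule and the fallback
controller rather than the mission software itself.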
This list is automatically generated from the titles and abstracts of the papers on this site.