Redefining Safety for Autonomous Vehicles
- URL: http://arxiv.org/abs/2404.16768v4
- Date: Mon, 12 Aug 2024 21:39:00 GMT
- Title: Redefining Safety for Autonomous Vehicles
- Authors: Philip Koopman, William Widen
- Abstract summary: Existing definitions and associated conceptual frameworks for computer-based system safety should be revisited.
Operation without a human driver dramatically increases the scope of safety concerns.
We propose updated definitions for core system safety concepts.
- Score: 0.9208007322096532
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing definitions and associated conceptual frameworks for computer-based system safety should be revisited in light of real-world experiences from deploying autonomous vehicles. Current terminology used by industry safety standards emphasizes mitigation of risk from specifically identified hazards, and carries assumptions based on human-supervised vehicle operation. Operation without a human driver dramatically increases the scope of safety concerns, especially due to operation in an open world environment, a requirement to self-enforce operational limits, participation in an ad hoc sociotechnical system of systems, and a requirement to conform to both legal and ethical constraints. Existing standards and terminology only partially address these new challenges. We propose updated definitions for core system safety concepts that encompass these additional considerations as a starting point for evolving safety approaches to address these additional safety challenges. These results might additionally inform framing safety terminology for other autonomous system applications.
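The requirement to self-enforce operational limits is easy to state and subtle to implement: without a supervising human driver, the system itself must detect an exit from its operational design domain (ODD) and trigger a fallback. The following is a purely illustrative sketch; the specific limits, signal names, and fallback below are invented for this example, not taken from the paper.

```python
from dataclasses import dataclass

# Hypothetical ODD self-enforcement check. All limits and signals
# here are invented for illustration; the paper proposes definitions,
# not an implementation.

@dataclass(frozen=True)
class OddLimits:
    max_speed_mps: float = 20.0       # assumed ODD speed ceiling
    min_visibility_m: float = 100.0   # assumed minimum visibility
    allowed_road_types: frozenset = frozenset({"urban", "suburban"})

def within_odd(limits: OddLimits, speed_mps: float,
               visibility_m: float, road_type: str) -> bool:
    """Return True only if every operational limit is satisfied."""
    return (speed_mps <= limits.max_speed_mps
            and visibility_m >= limits.min_visibility_m
            and road_type in limits.allowed_road_types)

# With no human supervisor, the system itself must trigger a fallback
# (e.g., a minimal-risk maneuver) when the ODD is violated.
if not within_odd(OddLimits(), speed_mps=22.0, visibility_m=80.0,
                  road_type="urban"):
    print("ODD violated: initiate minimal-risk maneuver")
```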
Related papers
- Cross-Modality Safety Alignment [73.8765529028288]
We introduce a novel safety alignment challenge called Safe Inputs but Unsafe Output (SIUO) to evaluate cross-modality safety alignment.
To empirically investigate this problem, we developed SIUO, a cross-modality benchmark encompassing 9 critical safety domains, such as self-harm, illegal activities, and privacy violations.
Our findings reveal substantial safety vulnerabilities in both closed- and open-source LVLMs, underscoring the inadequacy of current models to reliably interpret and respond to complex, real-world scenarios.
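For intuition, a minimal evaluation loop in the spirit of SIUO might look like the sketch below: each case pairs an individually safe image and prompt whose combination can elicit an unsafe output. The three markers, the keyword-based judge, and the `generate` interface are all illustrative assumptions rather than the benchmark's actual protocol.

```python
# Toy stand-in for a safety judge; SIUO's real protocol differs.
UNSAFE_MARKERS = ("how to harm", "evade the police", "home address")

def toy_safety_judge(response: str) -> bool:
    """A response is judged safe if it contains no unsafe marker."""
    return not any(marker in response.lower() for marker in UNSAFE_MARKERS)

def evaluate(generate, cases):
    """generate(image, prompt) -> str; returns the fraction judged safe."""
    verdicts = [toy_safety_judge(generate(case["image"], case["prompt"]))
                for case in cases]
    return sum(verdicts) / len(verdicts)

# Usage with a stub model that always refuses:
score = evaluate(lambda image, prompt: "I can't help with that.",
                 [{"image": "img_001.png", "prompt": "What should I do next?"}])
print(score)  # 1.0 -- the refusal contains no unsafe marker
```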
arXiv Detail & Related papers (2024-06-21T16:14:15Z) - Safety through Permissibility: Shield Construction for Fast and Safe Reinforcement Learning [57.84059344739159]
"Shielding" is a popular technique to enforce safety inReinforcement Learning (RL)
We propose a new permissibility-based framework to deal with safety and shield construction.
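For readers unfamiliar with shielding, a minimal sketch of the idea follows: a shield screens each proposed action and substitutes a safe fallback whenever the action is impermissible. The grid-world permissibility rule and the fallback action here are toy assumptions, not the paper's permissibility-based construction.

```python
import random

def permissible(state: int, action: int) -> bool:
    """Toy rule: an action is permissible if it keeps the state in [0, 9]."""
    return 0 <= state + action <= 9

def shielded_step(state: int, proposed_action: int) -> int:
    """Pass through permissible actions; otherwise substitute a fallback."""
    return proposed_action if permissible(state, proposed_action) else 0

# Usage: wrap an arbitrary (even random) policy with the shield.
state = 9
for _ in range(100):
    state += shielded_step(state, random.choice([-1, 0, +1]))
assert 0 <= state <= 9  # the shield keeps every visited state safe
```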
arXiv Detail & Related papers (2024-05-29T18:00:21Z) - Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems [88.80306881112313]
We will introduce and define a family of approaches to AI safety, which we will refer to as guaranteed safe (GS) AI.
The core feature of these approaches is that they aim to produce AI systems which are equipped with high-assurance quantitative safety guarantees.
We outline a number of approaches for creating each of these three core components, describe the main technical challenges, and suggest a number of potential solutions to them.
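The three core components referenced above are, in the paper, a world model, a safety specification, and a verifier. The sketch below wires toy stand-ins for all three into the basic pattern: the verifier scores a proposed action against the specification under the world model, and only actions meeting a quantitative bound (here an invented 0.99 threshold) are admitted.

```python
def world_model(state: float, action: float) -> list[float]:
    """Toy stochastic model: possible next states under bounded noise."""
    return [state + action + noise for noise in (-0.1, 0.0, 0.1)]

def safety_spec(state: float) -> bool:
    """Toy specification: the state must stay at or below a threshold."""
    return state <= 1.0

def verify(state: float, action: float) -> float:
    """Toy verifier: fraction of modeled outcomes satisfying the spec."""
    outcomes = world_model(state, action)
    return sum(safety_spec(s) for s in outcomes) / len(outcomes)

assurance = verify(state=0.5, action=0.2)
if assurance >= 0.99:  # invented quantitative safety bound
    print(f"action admitted with verified safety level {assurance:.2f}")
```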
arXiv Detail & Related papers (2024-05-10T17:38:32Z) - Deep Learning Safety Concerns in Automated Driving Perception [43.026485214492105]
This paper introduces an additional categorization for a better understanding as well as enabling cross-functional teams to jointly address the concerns.
Recent advances in the field of deep learning and impressive performance of deep neural networks (DNNs) for perception have resulted in an increased demand for their use in automated driving (AD) systems.
arXiv Detail & Related papers (2023-09-07T15:25:47Z) - Leveraging Traceability to Integrate Safety Analysis Artifacts into the
Software Development Process [51.42800587382228]
Safety assurance cases (SACs) can be challenging to maintain during system evolution.
We propose a solution that leverages software traceability to connect relevant system artifacts to safety analysis models.
We elicit design rationales for system changes to help safety stakeholders analyze the impact of system changes on safety.
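As a rough illustration of what such trace links might look like in practice, here is a toy data structure connecting code and requirement artifacts to hazards and safety assurance case (SAC) claims. All identifiers, link semantics, and rationales are invented for this sketch.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TraceLink:
    source: str     # e.g. a code module or requirement ID
    target: str     # e.g. a hazard or SAC claim ID
    rationale: str  # design rationale elicited for the change

LINKS = [
    TraceLink("perception/lidar_filter.py", "HAZ-017",
              "filter tuning affects obstacle detection range"),
    TraceLink("REQ-042", "SAC-CLAIM-3",
              "braking requirement supports the stopping-distance claim"),
]

def impacted_safety_artifacts(changed_artifact: str) -> list[str]:
    """Follow trace links from a changed artifact to safety artifacts."""
    return [link.target for link in LINKS if link.source == changed_artifact]

print(impacted_safety_artifacts("perception/lidar_filter.py"))  # ['HAZ-017']
```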
arXiv Detail & Related papers (2023-07-14T16:03:27Z) - Sustainable Adaptive Security [11.574868434725117]
We propose the notion of Sustainable Adaptive Security (SAS), which reflects enduring protection by augmenting adaptive security systems with the capability to mitigate newly discovered threats.
We use a smart home example to showcase how we can engineer the activities of the MAPE (Monitor, Analysis, Planning, and Execution) loop of systems satisfying sustainable adaptive security.
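A skeleton of one MAPE iteration for such a smart-home example might look as follows; the sensor reading, threat rule, and mitigations are invented for illustration.

```python
def monitor() -> dict:
    """Monitor: collect observations (stubbed with one fixed reading)."""
    return {"door": "open", "occupants_home": False}

def analyze(observation: dict) -> bool:
    """Analysis: flag a threat, e.g. an open door while nobody is home."""
    return observation["door"] == "open" and not observation["occupants_home"]

def plan(threat_detected: bool) -> list[str]:
    """Planning: choose mitigations for the detected threat."""
    return ["lock_door", "notify_owner"] if threat_detected else []

def execute(actions: list[str]) -> None:
    """Execution: apply each mitigation (stubbed as a print)."""
    for action in actions:
        print(f"executing mitigation: {action}")

# One iteration of the loop; a sustainable system would run it
# continuously and extend its threat knowledge as new threats emerge.
execute(plan(analyze(monitor())))
```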
arXiv Detail & Related papers (2023-06-05T08:48:36Z) - Foveate, Attribute, and Rationalize: Towards Physically Safe and
Trustworthy AI [76.28956947107372]
Covertly unsafe text is an area of particular interest, as such text may arise from everyday scenarios and is challenging to detect as harmful.
We propose FARM, a novel framework leveraging external knowledge for trustworthy rationale generation in the context of safety.
Our experiments show that FARM obtains state-of-the-art results on the SafeText dataset, showing absolute improvement in safety classification accuracy by 5.9%.
arXiv Detail & Related papers (2022-12-19T17:51:47Z) - Safe Perception -- A Hierarchical Monitor Approach [0.0]
We propose a novel hierarchical monitoring approach for AI-based perception systems.
It reliably detects detection misses while maintaining a very low false alarm rate.
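The hierarchical idea can be sketched as a cheap first-level plausibility check that escalates to a stricter, independent cross-check only on suspicious frames, which is one way to keep false alarms low. Both checks below are toy assumptions rather than the paper's actual monitor design.

```python
def plausibility_check(camera_detections: list, expected_min: int) -> bool:
    """Level 1: cheap check that the detector output looks plausible."""
    return len(camera_detections) >= expected_min

def cross_check(camera_detections: list, radar_objects: int) -> bool:
    """Level 2: agreement with an independent sensor, within tolerance."""
    return abs(len(camera_detections) - radar_objects) <= 1

def detection_miss_suspected(camera_detections: list, radar_objects: int,
                             expected_min: int = 1) -> bool:
    """Alarm only when both monitor levels indicate a problem."""
    if plausibility_check(camera_detections, expected_min):
        return False  # level 1 passed: no alarm raised
    return not cross_check(camera_detections, radar_objects)

# Camera reports nothing while radar reports two objects -> alarm.
print(detection_miss_suspected(camera_detections=[], radar_objects=2))  # True
```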
arXiv Detail & Related papers (2022-08-01T13:09:24Z) - System Safety and Artificial Intelligence [0.0]
New applications of AI across societal domains come with new hazards.
The field of system safety has dealt with accidents and harm in safety-critical systems.
This chapter honors system safety pioneer Nancy Leveson.
arXiv Detail & Related papers (2022-02-18T16:37:54Z) - Inspect, Understand, Overcome: A Survey of Practical Methods for AI
Safety [54.478842696269304]
The use of deep neural networks (DNNs) in safety-critical applications is challenging due to numerous model-inherent shortcomings.
In recent years, a zoo of state-of-the-art techniques aiming to address these safety concerns has emerged.
Our paper addresses both machine learning experts and safety engineers.
arXiv Detail & Related papers (2021-04-29T09:54:54Z) - Regulating Safety and Security in Autonomous Robotic Systems [0.0]
Rules for autonomous systems are often difficult to formalise.
In the space and nuclear sectors, applications are more likely to differ, so a set of general safety principles has developed. These principles allow novel applications to be assessed for their safety, but they are difficult to formalise.
We are collaborating with regulators and the community in the space and nuclear sectors to develop guidelines for autonomous and robotic systems.
arXiv Detail & Related papers (2020-07-09T16:33:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.