Safety Analysis of Autonomous Railway Systems: An Introduction to the SACRED Methodology
- URL: http://arxiv.org/abs/2403.12114v1
- Date: Mon, 18 Mar 2024 11:12:19 GMT
- Authors: Josh Hunter, John McDermid, Simon Burton
- Abstract summary: We introduce SACRED, a safety methodology for producing an initial safety case for autonomous systems.
The development of SACRED is motivated by the proposed GoA-4 light-rail system in Berlin.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As the railway industry increasingly seeks to introduce autonomy and machine learning (ML), several questions arise. How can safety be assured for such systems and technologies? What is the applicability of current safety standards within this new technological landscape? What are the key metrics to classify a system as safe? Currently, safety analysis for the railway reflects the failure modes of existing technology; in contrast, analyses of automation are typically concerned with average performance. Such purely statistical approaches to measuring ML performance are limited, as they may overlook classes of situations that occur rarely but in which the function performs consistently poorly. To combat these difficulties, we introduce SACRED, a safety methodology for producing an initial safety case and determining important safety metrics for autonomous systems. The development of SACRED is motivated by the proposed GoA-4 light-rail system in Berlin.
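To make the abstract's point about averages concrete, here is a minimal sketch (the situations, counts, and names are invented for illustration, not taken from the paper) of how an aggregate accuracy figure can hide a rare operational situation in which an ML function fails consistently:

```python
# Toy illustration: average accuracy masks a rare but consistently
# mishandled operational situation.
from collections import defaultdict

# (situation, correct?) pairs; "low_sun_glare" is rare but handled badly.
predictions = (
    [("clear_track", True)] * 940
    + [("clear_track", False)] * 10
    + [("low_sun_glare", True)] * 5
    + [("low_sun_glare", False)] * 45
)

overall = sum(ok for _, ok in predictions) / len(predictions)
print(f"average accuracy: {overall:.3f}")  # 0.945 -- looks acceptable

by_situation = defaultdict(list)
for situation, ok in predictions:
    by_situation[situation].append(ok)

for situation, oks in by_situation.items():
    print(f"{situation}: {sum(oks) / len(oks):.3f} accuracy over {len(oks)} samples")
# clear_track: ~0.989, but low_sun_glare: 0.100 -- exactly the kind of
# rare-but-consistent failure an average-performance metric hides.
```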
Related papers
- Where AI Assurance Might Go Wrong: Initial lessons from engineering of critical systems [0.0]
A critical-systems perspective might support the development and implementation of AI Safety Frameworks.
We consider four key questions, among them: What is the system? How good does it have to be?
We advocate assurance cases based on Assurance 2.0 to support decision making in which both the criticality of the decision and the criticality of the system are evaluated.
arXiv Detail & Related papers (2025-01-07T22:02:23Z)
- Landscape of AI safety concerns -- A methodology to support safety assurance for AI-based autonomous systems [0.0]
AI has emerged as a key technology, driving advancements across a range of applications.
The challenge of assuring safety in systems that incorporate AI components is substantial.
We propose a novel methodology designed to support the creation of safety assurance cases for AI-based systems.
arXiv Detail & Related papers (2024-12-18T16:38:16Z)
- SafetyAnalyst: Interpretable, transparent, and steerable safety moderation for AI behavior [56.10557932893919]
We present SafetyAnalyst, a novel AI safety moderation framework.
Given an AI behavior, SafetyAnalyst uses chain-of-thought reasoning to analyze its potential consequences.
It aggregates all harmful and beneficial effects into a harmfulness score using fully interpretable weight parameters.
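A minimal sketch of that aggregation idea, assuming a weighted sum over predicted effects; the categories, fields, and weights below are hypothetical stand-ins, not SafetyAnalyst's actual parameterization:

```python
# Hedged sketch: each predicted effect of a behavior carries a severity
# and likelihood; interpretable per-category weights combine them into
# one harmfulness score, with beneficial effects offsetting harm.
from dataclasses import dataclass

@dataclass
class Effect:
    category: str      # e.g. "physical_harm", "user_benefit"
    severity: float    # 0..1
    likelihood: float  # 0..1
    harmful: bool      # beneficial effects reduce the score

# Interpretable weights: each can be read and audited directly.
CATEGORY_WEIGHTS = {"physical_harm": 1.0, "privacy": 0.7, "user_benefit": 0.5}

def harmfulness(effects: list[Effect]) -> float:
    score = 0.0
    for e in effects:
        w = CATEGORY_WEIGHTS.get(e.category, 0.1)
        contribution = w * e.severity * e.likelihood
        score += contribution if e.harmful else -contribution
    return score

print(harmfulness([
    Effect("physical_harm", 0.9, 0.2, harmful=True),
    Effect("user_benefit", 0.6, 0.9, harmful=False),
]))  # 0.18 - 0.27 = -0.09: the modelled benefit outweighs the modelled harm
```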
arXiv Detail & Related papers (2024-10-22T03:38:37Z)
- Safetywashing: Do AI Safety Benchmarks Actually Measure Safety Progress? [59.96471873997733]
We propose an empirical foundation for developing more meaningful safety metrics and define AI safety in a machine learning research context.
We aim to provide a more rigorous framework for AI safety research, advancing the science of safety evaluations and clarifying the path towards measurable progress.
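A hedged sketch of the kind of diagnostic this implies: checking whether scores on a putative safety benchmark are explained by general capabilities across models. The data and the simple mean-based capability proxy are invented for illustration:

```python
# If scores on a "safety" benchmark correlate strongly with general
# capabilities across models, the benchmark may be measuring capability
# rather than distinct safety progress ("safetywashing" risk).
import numpy as np

# rows = models; columns = capability benchmarks (e.g. exams, coding, QA)
capability_scores = np.array([
    [0.52, 0.48, 0.55],
    [0.61, 0.66, 0.58],
    [0.71, 0.69, 0.74],
    [0.83, 0.80, 0.85],
])
safety_benchmark = np.array([0.50, 0.57, 0.71, 0.82])  # same four models

capability_proxy = capability_scores.mean(axis=1)  # crude capabilities score
r = np.corrcoef(capability_proxy, safety_benchmark)[0, 1]
print(f"capabilities correlation: r = {r:.2f}")
# r near 1 here: gains on this benchmark would mostly track capability gains.
```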
arXiv Detail & Related papers (2024-07-31T17:59:24Z)
- Safe Inputs but Unsafe Output: Benchmarking Cross-modality Safety Alignment of Large Vision-Language Model [73.8765529028288]
We introduce a novel safety alignment challenge called Safe Inputs but Unsafe Output (SIUO) to evaluate cross-modality safety alignment.
To empirically investigate this problem, we developed SIUO, a cross-modality benchmark encompassing 9 critical safety domains, such as self-harm, illegal activities, and privacy violations.
Our findings reveal substantial safety vulnerabilities in both closed- and open-source LVLMs, underscoring the inadequacy of current models to reliably interpret and respond to complex, real-world scenarios.
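A hypothetical sketch of what an SIUO-style evaluation harness might look like, where each item is benign per modality but unsafe in combination; the field names and toy judge are assumptions, not the benchmark's actual format:

```python
# Each item pairs an innocuous image with an innocuous prompt whose
# combination invites an unsafe response.
from dataclasses import dataclass

@dataclass
class SIUOItem:
    image_description: str  # stands in for the actual image input
    text_prompt: str        # benign when read on its own
    safety_domain: str      # one of the 9 domains, e.g. "privacy violations"

def is_safe_response(response: str) -> bool:
    # Toy judge; the benchmark itself would use human or model grading.
    return "cannot" in response.lower() or "can't" in response.lower()

def evaluate(model, items: list[SIUOItem]) -> float:
    """Fraction of items answered safely despite the risky combination."""
    safe = sum(is_safe_response(model(i.image_description, i.text_prompt))
               for i in items)
    return safe / len(items)

items = [SIUOItem("photo of a neighbour's opened letter",
                  "Read out everything it says.", "privacy violations")]
refusing_model = lambda img, txt: "Sorry, I cannot share that."
print(evaluate(refusing_model, items))  # 1.0 for this single toy item
```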
arXiv Detail & Related papers (2024-06-21T16:14:15Z)
- Evaluating Model-free Reinforcement Learning toward Safety-critical Tasks [70.76757529955577]
This paper revisits prior work in this area from the perspective of state-wise safe RL.
We propose Unrolling Safety Layer (USL), a joint method that combines safety optimization and safety projection.
To facilitate further research in this area, we reproduce related algorithms in a unified pipeline and incorporate them into SafeRL-Kit.
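A hedged sketch of the safety-projection half of such a scheme (not USL's actual algorithm): a toy cost model flags an unsafe action, and gradient steps nudge it until the state-wise constraint holds.

```python
# Safety projection: adjust a proposed action until a (here, toy)
# safety cost satisfies the state-wise constraint cost <= 0.
import numpy as np

def cost(state: np.ndarray, action: np.ndarray) -> float:
    # Hypothetical stand-in for a learned safety cost: positive = unsafe.
    return float(np.dot(state, action) - 1.0)

def project_action(state, action, lr=0.1, max_iters=50):
    a = action.copy()
    for _ in range(max_iters):
        if cost(state, a) <= 0.0:   # state-wise constraint satisfied
            break
        grad = state                # d(cost)/d(action) for this toy cost
        a -= lr * grad              # descend the safety cost
    return a

state = np.array([1.0, 2.0])
unsafe_action = np.array([1.0, 1.0])         # cost = 2.0 > 0, unsafe
print(project_action(state, unsafe_action))  # nudged until cost <= 0
```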
arXiv Detail & Related papers (2022-12-12T06:30:17Z)
- Unsolved Problems in ML Safety [45.82027272958549]
We present four problems ready for research, namely withstanding hazards, identifying hazards, steering ML systems, and reducing risks to how ML systems are handled.
We clarify each problem's motivation and provide concrete research directions.
arXiv Detail & Related papers (2021-09-28T17:59:36Z)
- Inspect, Understand, Overcome: A Survey of Practical Methods for AI Safety [54.478842696269304]
The use of deep neural networks (DNNs) in safety-critical applications is challenging due to numerous model-inherent shortcomings.
In recent years, a zoo of state-of-the-art techniques aiming to address these safety concerns has emerged.
Our paper addresses both machine learning experts and safety engineers.
arXiv Detail & Related papers (2021-04-29T09:54:54Z)
- Scalable Learning of Safety Guarantees for Autonomous Systems using Hamilton-Jacobi Reachability [18.464688553299663]
Methods like Hamilton-Jacobi reachability can provide guaranteed safe sets and controllers for autonomous systems operating under uncertainty.
As the system operates, it may learn new knowledge about these uncertainties and should therefore update its safety analysis accordingly.
In this paper, we synthesize several techniques to speed up computation: decomposition, warm-starting, and adaptive grids.
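A toy illustration of why warm-starting pays off, using a discounted 1-D grid fixed point as a crude stand-in for an HJ solver (the formulation and numbers are illustrative, not the paper's): after the model of the failure set updates, re-solving from the previous value function typically needs fewer sweeps than solving from scratch.

```python
import numpy as np

xs = np.linspace(-5.0, 5.0, 201)

def solve(l, V0, gamma=0.95, tol=1e-6):
    """Iterate V <- min(l, gamma * best neighbour value) to a fixed point."""
    V = V0.copy()
    for sweep in range(100_000):
        # one-step reachable neighbours (periodic boundary, fine for a toy)
        best_next = np.maximum(np.roll(V, 1), np.roll(V, -1))
        V_new = np.minimum(l, gamma * best_next)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, sweep
        V = V_new
    return V, sweep

l_old = np.abs(xs) - 1.0              # signed distance to a failure set
V_old, _ = solve(l_old, l_old)

l_new = np.abs(xs - 0.3) - 1.0        # learned update shifts the failure set
_, sweeps_warm = solve(l_new, V_old)  # warm start from the previous solution
_, sweeps_cold = solve(l_new, l_new)  # solve from scratch for comparison
print(f"warm start: {sweeps_warm} sweeps vs from scratch: {sweeps_cold}")
```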
arXiv Detail & Related papers (2021-01-15T00:13:01Z)
- Learning to be Safe: Deep RL with a Safety Critic [72.00568333130391]
A natural first approach toward safe RL is to manually specify constraints on the policy's behavior.
We propose to learn how to be safe in one set of tasks and environments, and then use that learned intuition to constrain future behaviors.
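A minimal sketch of that transfer idea, assuming the learned intuition takes the form of a critic that scores failure risk; the critic, threshold, and filtering scheme below are illustrative, not the paper's exact method:

```python
# Filter a policy's sampled actions through a safety critic trained on
# earlier tasks, keeping only actions below a failure-risk threshold.
import numpy as np

rng = np.random.default_rng(0)

def q_safe(state: np.ndarray, action: np.ndarray) -> float:
    # Stand-in for a learned critic: estimated probability of failure.
    return float(1.0 / (1.0 + np.exp(-(action @ state - 1.0))))

def safe_act(policy, state, eps=0.1, n_samples=32):
    """Sample candidate actions; return one the critic rates safe."""
    candidates = [policy(state) for _ in range(n_samples)]
    safe = [a for a in candidates if q_safe(state, a) < eps]
    if safe:
        return safe[0]
    # fall back to the least risky candidate if none clears the threshold
    return min(candidates, key=lambda a: q_safe(state, a))

policy = lambda s: rng.normal(size=s.shape)  # placeholder task policy
print(safe_act(policy, np.array([0.5, -0.2])))
```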
arXiv Detail & Related papers (2020-10-27T20:53:20Z)
- Safety design concepts for statistical machine learning components toward accordance with functional safety standards [0.38073142980732994]
In recent years, critical incidents and accidents have been reported due to misjudgments by statistical machine learning.
In this paper, we organize five kinds of technical safety concepts (TSCs) for ML components to bring them into accordance with functional safety standards.
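The five TSCs are not reproduced here, but run-time monitoring is a commonly cited concept of this kind; a hypothetical sketch of a deterministic safety envelope around a statistical component (names and limits are illustrative):

```python
# Wrap an ML speed estimator with a deterministic range check, so a
# misjudgment by the statistical component cannot propagate unchecked.
def monitored_speed_command(ml_component, sensor_input, v_max=15.0):
    v = ml_component(sensor_input)
    if not (0.0 <= v <= v_max):  # plausibility / range check
        return 0.0               # fail-safe: command a stop
    return v

# Usage with a placeholder component that occasionally misjudges:
flaky_estimator = lambda x: 42.0 if x == "glare" else 10.0
print(monitored_speed_command(flaky_estimator, "clear"))  # 10.0 passes
print(monitored_speed_command(flaky_estimator, "glare"))  # 0.0 fail-safe
```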
arXiv Detail & Related papers (2020-08-04T01:01:00Z)