Safety Analysis of Autonomous Railway Systems: An Introduction to the SACRED Methodology
- URL: http://arxiv.org/abs/2403.12114v1
- Date: Mon, 18 Mar 2024 11:12:19 GMT
- Title: Safety Analysis of Autonomous Railway Systems: An Introduction to the SACRED Methodology
- Authors: Josh Hunter, John McDermid, Simon Burton
- Abstract summary: We introduce SACRED, a safety methodology for producing an initial safety case for autonomous systems.
The development of SACRED is motivated by the proposed GoA-4 light-rail system in Berlin.
- Score: 2.47737926497181
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As the railway industry increasingly seeks to introduce autonomy and machine learning (ML), several questions arise. How can safety be assured for such systems and technologies? What is the applicability of current safety standards within this new technological landscape? What are the key metrics to classify a system as safe? Currently, safety analysis for the railway reflects the failure modes of existing technology; in contrast, the primary concern of analysis of automation is typically average performance. Such purely statistical approaches to measuring ML performance are limited, as they may overlook classes of situations that may occur rarely but in which the function performs consistently poorly. To combat these difficulties we introduce SACRED, a safety methodology for producing an initial safety case and determining important safety metrics for autonomous systems. The development of SACRED is motivated by the proposed GoA-4 light-rail system in Berlin.
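To make the point about purely statistical measures concrete, here is a minimal sketch (not from the paper; the detector, the situations, and all numbers are hypothetical) of how an aggregate accuracy score can mask a rarely occurring situation in which a perception function fails almost every time:

```python
# Minimal illustration (hypothetical data): aggregate accuracy hides a rare,
# consistently failing situation that a per-situation breakdown reveals.
from collections import defaultdict

# (situation, prediction_correct) pairs for a hypothetical obstacle detector.
# "glare" occurs rarely, but the detector almost always fails in it.
results = (
    [("clear", True)] * 970 + [("clear", False)] * 10 +
    [("glare", True)] * 1 + [("glare", False)] * 19
)

overall = sum(ok for _, ok in results) / len(results)
print(f"overall accuracy: {overall:.3f}")  # ~0.971 -- looks acceptable

per_situation = defaultdict(list)
for situation, ok in results:
    per_situation[situation].append(ok)

for situation, oks in per_situation.items():
    print(f"{situation:>6}: n={len(oks):4d}, accuracy={sum(oks) / len(oks):.3f}")
# glare: n=  20, accuracy=0.050 -- the rare situation is a consistent failure
```

A per-situation breakdown of this kind is exactly the information an average-performance score alone cannot provide, which motivates the paper's call for safety metrics that go beyond average performance.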
Related papers
- Safetywashing: Do AI Safety Benchmarks Actually Measure Safety Progress? [59.96471873997733]
We propose an empirical foundation for developing more meaningful safety metrics and define AI safety in a machine learning research context.
We aim to provide a more rigorous framework for AI safety research, advancing the science of safety evaluations and clarifying the path towards measurable progress.
arXiv Detail & Related papers (2024-07-31T17:59:24Z)
- Safeguarded Progress in Reinforcement Learning: Safe Bayesian Exploration for Control Policy Synthesis [63.532413807686524]
This paper addresses the problem of maintaining safety during training in Reinforcement Learning (RL).
We propose a new architecture that handles the trade-off between efficient progress and safety during exploration.
arXiv Detail & Related papers (2023-12-18T16:09:43Z)
- Leveraging Traceability to Integrate Safety Analysis Artifacts into the Software Development Process [51.42800587382228]
Safety assurance cases (SACs) can be challenging to maintain during system evolution.
We propose a solution that leverages software traceability to connect relevant system artifacts to safety analysis models.
We elicit design rationales for system changes to help safety stakeholders analyze the impact of system changes on safety.
arXiv Detail & Related papers (2023-07-14T16:03:27Z)
- Evaluating Model-free Reinforcement Learning toward Safety-critical Tasks [70.76757529955577]
This paper revisits prior work in this scope from the perspective of state-wise safe RL.
We propose Unrolling Safety Layer (USL), a joint method that combines safety optimization and safety projection.
To facilitate further research in this area, we reproduce related algorithms in a unified pipeline and incorporate them into SafeRL-Kit.
arXiv Detail & Related papers (2022-12-12T06:30:17Z)
- Unsolved Problems in ML Safety [45.82027272958549]
We present four problems ready for research, namely withstanding hazards, identifying hazards, steering ML systems, and reducing risks related to how ML systems are handled.
We clarify each problem's motivation and provide concrete research directions.
arXiv Detail & Related papers (2021-09-28T17:59:36Z)
- Inspect, Understand, Overcome: A Survey of Practical Methods for AI Safety [54.478842696269304]
The use of deep neural networks (DNNs) in safety-critical applications is challenging due to numerous model-inherent shortcomings.
In recent years, a zoo of state-of-the-art techniques aiming to address these safety concerns has emerged.
Our paper addresses both machine learning experts and safety engineers.
arXiv Detail & Related papers (2021-04-29T09:54:54Z)
- Scalable Learning of Safety Guarantees for Autonomous Systems using Hamilton-Jacobi Reachability [18.464688553299663]
Methods like Hamilton-Jacobi reachability can provide guaranteed safe sets and controllers for such systems.
As the system is operating, it may learn new knowledge about these uncertainties and should therefore update its safety analysis accordingly.
In this paper we synthesize several techniques to speed up computation: decomposition, warm-starting, and adaptive grids.
arXiv Detail & Related papers (2021-01-15T00:13:01Z)
- Learning to be Safe: Deep RL with a Safety Critic [72.00568333130391]
A natural first approach toward safe RL is to manually specify constraints on the policy's behavior.
We propose to learn how to be safe in one set of tasks and environments, and then use that learned intuition to constrain future behaviors.
arXiv Detail & Related papers (2020-10-27T20:53:20Z)
- Safety design concepts for statistical machine learning components toward accordance with functional safety standards [0.38073142980732994]
In recent years, crucial incidents and accidents have been reported due to misjudgments by statistical machine learning.
In this paper, we organize five kinds of technical safety concepts (TSCs) for components toward accordance with functional safety standards.
arXiv Detail & Related papers (2020-08-04T01:01:00Z)
- Safe reinforcement learning for probabilistic reachability and safety specifications: A Lyapunov-based approach [2.741266294612776]
We propose a model-free safety specification method that learns the maximal probability of safe operation.
Our approach constructs a Lyapunov function with respect to a safe policy to restrain each policy improvement stage.
It yields a sequence of safe policies that determine the range of safe operation, called the safe set.
arXiv Detail & Related papers (2020-02-24T09:20:03Z)