Using Formal Models, Safety Shields and Certified Control to Validate AI-Based Train Systems
- URL: http://arxiv.org/abs/2411.14374v1
- Date: Thu, 21 Nov 2024 18:09:04 GMT
- Title: Using Formal Models, Safety Shields and Certified Control to Validate AI-Based Train Systems
- Authors: Jan Gruteser, Jan Roßbach, Fabian Vu, Michael Leuschel
- Abstract summary: The KI-LOK project explores new methods for certifying and safely integrating AI components into autonomous trains.
We pursue a two-layered approach: (1) ensuring the safety of the steering system by formal analysis using the B method, and (2) improving the reliability of the perception system with a runtime certificate checker.
This work links both strategies within a demonstrator that runs simulations on the formal model, controlled by the real AI output and the real certificate checker.
- Score: 0.5249805590164903
- License:
- Abstract: The certification of autonomous systems is an important concern in science and industry. The KI-LOK project explores new methods for certifying and safely integrating AI components into autonomous trains. We pursued a two-layered approach: (1) ensuring the safety of the steering system by formal analysis using the B method, and (2) improving the reliability of the perception system with a runtime certificate checker. This work links both strategies within a demonstrator that runs simulations on the formal model, controlled by the real AI output and the real certificate checker. The demonstrator is integrated into the validation tool ProB. This enables runtime monitoring, runtime verification, and statistical validation of formal safety properties using a formal B model. Consequently, one can detect and analyse potential vulnerabilities and weaknesses of the AI and the certificate checker. We apply these techniques to a signal detection case study and present our findings.
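To make the certificate-checker idea concrete, here is a minimal sketch of a runtime checker that gates AI detections before they reach the steering logic. All names (SignalDetection, check_certificate, safe_steering_input) and thresholds are hypothetical illustrations, not the KI-LOK or ProB API:

```python
# Minimal sketch of a runtime certificate checker gating AI perception output.
# All names and thresholds are hypothetical, not the KI-LOK/ProB API.
from dataclasses import dataclass

@dataclass
class SignalDetection:
    label: str          # e.g. "main_signal_green"
    confidence: float   # detector confidence in [0, 1]
    bbox: tuple         # (x, y, width, height) in image coordinates

def check_certificate(det: SignalDetection) -> bool:
    """Accept a detection only if it satisfies simple, independently
    checkable plausibility conditions; otherwise treat it as uncertified."""
    x, y, w, h = det.bbox
    plausible_size = 10 <= w <= 200 and 10 <= h <= 200
    plausible_shape = (0.3 <= w / h <= 3.0) if h > 0 else False
    return det.confidence >= 0.9 and plausible_size and plausible_shape

def safe_steering_input(det: SignalDetection) -> str:
    """Fall back to a safe action when the certificate check fails,
    mirroring the shield/fallback pattern of certified control."""
    return det.label if check_certificate(det) else "assume_restrictive_signal"

if __name__ == "__main__":
    det = SignalDetection("main_signal_green", confidence=0.97, bbox=(40, 30, 50, 120))
    print(safe_steering_input(det))  # certified -> "main_signal_green"
```

The fallback to a restrictive signal mirrors the shield pattern: an uncertified detection is never acted on optimistically.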
Related papers
- Formally Verifying Deep Reinforcement Learning Controllers with Lyapunov Barrier Certificates [2.0611129932396164]
We present a novel method for training and verifying neural Lyapunov barrier (NLB) certificates for discrete-time systems.
Specifically, we introduce a technique for certificate composition, which simplifies the verification of highly-complex systems.
We demonstrate the merits of our approach with a case study on providing safety and liveness guarantees for a DRL-controlled spacecraft.
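For context, such barrier certificates are commonly defined by conditions of the following standard discrete-time form, with dynamics $x_{k+1} = f(x_k)$, initial set $X_0$, and unsafe set $X_u$ (a generic textbook formulation, not the paper's exact NLB definition):

```latex
% Standard discrete-time barrier certificate conditions (generic form):
B(x) \le 0 \quad \forall x \in X_0, \qquad
B(x) > 0 \quad \forall x \in X_u, \qquad
B(f(x)) \le 0 \quad \forall x \text{ with } B(x) \le 0.
```

By induction, any trajectory starting in $X_0$ keeps $B \le 0$ and therefore never enters $X_u$.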
arXiv Detail & Related papers (2024-05-22T23:06:34Z)
- Self-consistent Validation for Machine Learning Electronic Structure [81.54661501506185]
The method integrates machine learning with self-consistent field methods to achieve both low validation cost and interpretability.
This, in turn, enables exploration of the model's capabilities through active learning and builds confidence for its integration into real-world studies.
arXiv Detail & Related papers (2024-02-15T18:41:35Z)
- Certified Control for Train Sign Classification [0.0]
The KI-LOK research project is involved in developing new methods for certifying such AI-based systems.
Here we explore the utility of a certified control architecture for a runtime monitor that prevents false positive detection of traffic signs.
arXiv Detail & Related papers (2023-11-16T11:02:10Z)
- Testing learning-enabled cyber-physical systems with Large-Language Models: A Formal Approach [32.15663640443728]
The integration of machine learning (ML) into cyber-physical systems (CPS) offers significant benefits.
Existing verification and validation techniques are often inadequate for these new paradigms.
We propose a roadmap to transition from foundational probabilistic testing to a more rigorous approach capable of delivering formal assurance.
arXiv Detail & Related papers (2023-11-13T14:56:14Z)
- Simulation-based Safety Assurance for an AVP System incorporating Learning-Enabled Components [0.6526824510982802]
Testing, verification, and validation of AD/ADAS safety-critical applications remain among the main challenges.
We explain the simulation-based development platform that is designed to verify and validate safety-critical learning-enabled systems.
arXiv Detail & Related papers (2023-09-28T09:00:31Z)
- A General Framework for Verification and Control of Dynamical Models via Certificate Synthesis [54.959571890098786]
We provide a framework to encode system specifications and define corresponding certificates.
We present an automated approach to formally synthesise controllers and certificates.
Our approach contributes to the broad field of safe learning for control, exploiting the flexibility of neural networks.
arXiv Detail & Related papers (2023-09-12T09:37:26Z)
- When Authentication Is Not Enough: On the Security of Behavioral-Based Driver Authentication Systems [53.2306792009435]
We develop two lightweight driver authentication systems based on Random Forest and Recurrent Neural Network architectures.
We are the first to propose attacks against these systems by developing two novel evasion attacks, SMARTCAN and GANCAN.
Through our contributions, we aid practitioners in safely adopting these systems, help reduce car thefts, and enhance driver security.
arXiv Detail & Related papers (2023-06-09T14:33:26Z)
- Recursively Feasible Probabilistic Safe Online Learning with Control Barrier Functions [60.26921219698514]
We introduce a model-uncertainty-aware reformulation of CBF-based safety-critical controllers.
We then present the pointwise feasibility conditions of the resulting safety controller.
We use these conditions to devise an event-triggered online data collection strategy.
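As background, the nominal control barrier function (CBF) condition that such reformulations start from is the standard one below, for a control-affine system $\dot{x} = f(x) + g(x)u$ with safe set $C = \{x : h(x) \ge 0\}$ (this is the textbook condition, not the paper's model-uncertainty-aware version):

```latex
% Standard CBF condition guaranteeing forward invariance of C:
\sup_{u \in U} \Big[ L_f h(x) + L_g h(x)\,u \Big] \;\ge\; -\alpha\big(h(x)\big)
\quad \text{for all } x \in C,
```

where $\alpha$ is an extended class-$\mathcal{K}$ function; any controller choosing a $u$ that satisfies this inequality keeps $C$ forward invariant. The reformulation summarized above robustifies this condition against model uncertainty and characterizes when such a $u$ exists pointwise.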
arXiv Detail & Related papers (2022-08-23T05:02:09Z)
- Joint Differentiable Optimization and Verification for Certified Reinforcement Learning [91.93635157885055]
In model-based reinforcement learning for safety-critical control systems, it is important to formally certify system properties.
We propose a framework that jointly conducts reinforcement learning and formal verification.
arXiv Detail & Related papers (2022-01-28T16:53:56Z)
- Evaluating the Safety of Deep Reinforcement Learning Models using Semi-Formal Verification [81.32981236437395]
We present a semi-formal verification approach for decision-making tasks based on interval analysis.
Our method obtains comparable results over standard benchmarks with respect to formal verifiers.
Our approach allows efficient evaluation of safety properties for decision-making models in practical applications.
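To give a concrete flavor of interval analysis on a learned model, below is a minimal sketch (assuming NumPy; the function names are illustrative, not the paper's tooling) that propagates an input box through one linear layer and a ReLU to obtain sound output bounds:

```python
# Minimal interval bound propagation through one linear layer and a ReLU.
# A generic illustration of interval analysis, not the paper's method or tooling.
import numpy as np

def interval_linear(lo, hi, W, b):
    """Sound elementwise bounds on W @ x + b for x in the box [lo, hi]."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def interval_relu(lo, hi):
    """ReLU is monotone, so bounds pass through directly."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    W, b = rng.standard_normal((2, 3)), rng.standard_normal(2)
    lo, hi = np.full(3, -0.1), np.full(3, 0.1)
    z_lo, z_hi = interval_relu(*interval_linear(lo, hi, W, b))
    print("output bounds:", z_lo, z_hi)  # every reachable output lies inside
```

Chaining such steps layer by layer yields guaranteed output ranges that can then be checked against safety properties.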
arXiv Detail & Related papers (2020-10-19T11:18:06Z)
- Formal Verification of End-to-End Learning in Cyber-Physical Systems: Progress and Challenges [7.955313479061443]
It might be possible to ensure the safety of autonomous systems by constructing formal, computer-checked correctness arguments.
This paper identifies three assumptions underlying existing formal verification techniques.
It summarizes preliminary work toward improving the strength of evidence provided by formal verification.
arXiv Detail & Related papers (2020-06-15T16:50:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.