Safety Verification and Optimization in Industrial Drive Systems
- URL: http://arxiv.org/abs/2503.21965v1
- Date: Thu, 27 Mar 2025 20:27:19 GMT
- Title: Safety Verification and Optimization in Industrial Drive Systems
- Authors: Imran Riaz Hasrat, Eun-Young Kang, Christian Uldal Graulund,
- Abstract summary: This paper optimizes the safety and diagnostic performance of a real-world industrial Basic Drive Module (BDM) using Uppaal Stratego. We model the functional safety architecture of the BDM with timed automata and formally verify its key functional and safety requirements. Taking the formally verified model as a baseline, we leverage the reinforcement learning facility in Uppaal Stratego to optimize the safe failure fraction to the 90% threshold.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Safety and reliability are crucial in industrial drive systems, where hazardous failures can have severe consequences. Detecting and mitigating dangerous faults in time is challenging due to the stochastic and unpredictable nature of fault occurrences, which can limit diagnostic efficiency and compromise safety. This paper optimizes the safety and diagnostic performance of a real-world industrial Basic Drive Module (BDM) using Uppaal Stratego. We model the functional safety architecture of the BDM with timed automata and formally verify its key functional and safety requirements through model checking to eliminate unwanted behaviors. Taking the formally verified correct model as a baseline, we leverage the reinforcement learning facility in Uppaal Stratego to optimize the safe failure fraction to the 90% threshold, improving fault detection ability. The promising results highlight strong potential for broader safety applications in industrial automation.
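The safe failure fraction (SFF) that the paper optimizes is defined in IEC 61508 as the ratio of safe plus dangerous-detected failure rates to the total failure rate. A minimal sketch of that calculation (the failure-rate values below are illustrative assumptions, not figures from the paper):

```python
def safe_failure_fraction(lam_sd, lam_su, lam_dd, lam_du):
    """SFF per IEC 61508: the fraction of all failures that are either
    safe (detected or undetected) or dangerous but detected.

    lam_sd/lam_su: safe detected/undetected failure rates
    lam_dd/lam_du: dangerous detected/undetected failure rates
    """
    total = lam_sd + lam_su + lam_dd + lam_du
    return (lam_sd + lam_su + lam_dd) / total

# Illustrative rates (failures per hour): only the dangerous-undetected
# portion counts against the SFF, so reducing it raises the fraction.
sff = safe_failure_fraction(30e-9, 20e-9, 40e-9, 10e-9)  # 90e-9 / 100e-9 = 0.9
```

With these numbers the module just meets the 90% threshold discussed in the abstract; improving diagnostic coverage converts dangerous-undetected failures into dangerous-detected ones and pushes the SFF higher.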
Related papers
- A Physics-Informed Machine Learning Framework for Safe and Optimal Control of Autonomous Systems [8.347548017994178]
Safety and performance could be competing objectives, which makes their co-optimization difficult. We propose a state-constrained optimal control problem, where performance objectives are encoded via a cost function and safety requirements are imposed as state constraints. We demonstrate that the resultant value function satisfies a Hamilton-Jacobi-Bellman equation, which we approximate efficiently using a novel machine learning framework.
arXiv Detail & Related papers (2025-02-16T09:46:17Z)
- Safety through Permissibility: Shield Construction for Fast and Safe Reinforcement Learning [57.84059344739159]
"Shielding" is a popular technique to enforce safety inReinforcement Learning (RL)
We propose a new permissibility-based framework to deal with safety and shield construction.
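A shield restricts the agent to actions that provably keep the system in a safe region. A minimal action-masking sketch (the actions, Q-values, and safety predicate below are hypothetical illustrations, not the permissibility construction from the paper):

```python
def shielded_action(q_values, is_safe):
    """Pick the highest-value action among those the shield permits."""
    permissible = {a: q for a, q in q_values.items() if is_safe(a)}
    if not permissible:
        raise RuntimeError("shield left no permissible action")
    return max(permissible, key=permissible.get)

# Toy example: "right" has the best Q-value but the shield forbids it,
# so the agent falls back to the best *safe* action instead.
q = {"left": 0.2, "right": 0.9, "stay": 0.5}
unsafe = {"right"}
action = shielded_action(q, lambda a: a not in unsafe)  # -> "stay"
```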
arXiv Detail & Related papers (2024-05-29T18:00:21Z)
- Safeguarding Learning-based Control for Smart Energy Systems with Sampling Specifications [0.31498833540989407]
We study challenges using reinforcement learning in controlling energy systems, where apart from performance requirements, one has additional safety requirements such as avoiding blackouts.
We detail how these safety requirements in real-time temporal logic can be strengthened via discretization into linear temporal logic.
arXiv Detail & Related papers (2023-08-11T11:09:06Z)
- ISAACS: Iterative Soft Adversarial Actor-Critic for Safety [0.9217021281095907]
This work introduces a novel approach enabling scalable synthesis of robust safety-preserving controllers for robotic systems.
A safety-seeking fallback policy is co-trained with an adversarial "disturbance" agent that aims to invoke the worst-case realization of model error.
While the learned control policy does not intrinsically guarantee safety, it is used to construct a real-time safety filter.
arXiv Detail & Related papers (2022-12-06T18:53:34Z)
- Meta-Learning Priors for Safe Bayesian Optimization [72.8349503901712]
We build on a meta-learning algorithm, F-PACOH, capable of providing reliable uncertainty quantification in settings of data scarcity.
As a core contribution, we develop a novel framework for choosing safety-compliant priors in a data-driven manner.
On benchmark functions and a high-precision motion system, we demonstrate that our meta-learned priors accelerate the convergence of safe BO approaches.
arXiv Detail & Related papers (2022-10-03T08:38:38Z)
- Enforcing Hard Constraints with Soft Barriers: Safe Reinforcement Learning in Unknown Stochastic Environments [84.3830478851369]
We propose a safe reinforcement learning approach that can jointly learn the environment and optimize the control policy.
Our approach can effectively enforce hard safety constraints and significantly outperforms CMDP-based baseline methods in the system safety rate measured in simulations.
arXiv Detail & Related papers (2022-09-29T20:49:25Z)
- Recursively Feasible Probabilistic Safe Online Learning with Control Barrier Functions [60.26921219698514]
We introduce a model-uncertainty-aware reformulation of CBF-based safety-critical controllers.
We then present the pointwise feasibility conditions of the resulting safety controller.
We use these conditions to devise an event-triggered online data collection strategy.
arXiv Detail & Related papers (2022-08-23T05:02:09Z)
- Safe Reinforcement Learning via Confidence-Based Filters [78.39359694273575]
We develop a control-theoretic approach for certifying state safety constraints for nominal policies learned via standard reinforcement learning techniques.
We provide formal safety guarantees, and empirically demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2022-07-04T11:43:23Z)
- ProBF: Learning Probabilistic Safety Certificates with Barrier Functions [31.203344483485843]
The control barrier function is a useful tool to guarantee safety if we have access to the ground-truth system dynamics.
In practice, we have inaccurate knowledge of the system dynamics, which can lead to unsafe behaviors.
We show the efficacy of this method through experiments on Segway and Quadrotor simulations.
arXiv Detail & Related papers (2021-12-22T20:18:18Z)
- Pointwise Feasibility of Gaussian Process-based Safety-Critical Control under Model Uncertainty [77.18483084440182]
Control Barrier Functions (CBFs) and Control Lyapunov Functions (CLFs) are popular tools for enforcing safety and stability of a controlled system, respectively.
We present a Gaussian Process (GP)-based approach to tackle the problem of model uncertainty in safety-critical controllers that use CBFs and CLFs.
arXiv Detail & Related papers (2021-06-13T23:08:49Z)
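Several of the papers above build on control barrier functions: a state-dependent inequality on the control input that keeps a scalar function h(x) nonnegative along trajectories. A minimal safety-filter sketch for a one-dimensional single integrator (the dynamics, safe set, and gain are illustrative assumptions, not taken from any of the papers):

```python
def cbf_safety_filter(x, u_nom, alpha=1.0):
    """Minimally modify a nominal input so h(x) = x stays nonnegative.

    For dynamics xdot = u and h(x) = x, the CBF condition
    hdot >= -alpha * h(x) reduces to u >= -alpha * x, so the
    least-modification (QP) solution is a simple clip of u_nom.
    """
    return max(u_nom, -alpha * x)

# The nominal controller drives hard toward the unsafe boundary x < 0;
# the filter brakes it just enough to satisfy the barrier condition.
u = cbf_safety_filter(x=0.5, u_nom=-2.0)  # -> -0.5
```

In higher dimensions the clip becomes a quadratic program over the control input, which is the form the CBF papers above analyze under model uncertainty.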
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.