Safeguarding Learning-based Control for Smart Energy Systems with Sampling Specifications
- URL: http://arxiv.org/abs/2308.06069v1
- Date: Fri, 11 Aug 2023 11:09:06 GMT
- Title: Safeguarding Learning-based Control for Smart Energy Systems with Sampling Specifications
- Authors: Chih-Hong Cheng, Venkatesh Prasad Venkataramanan, Pragya Kirti Gupta, Yun-Fei Hsu, Simon Burton
- Abstract summary: We study challenges in using reinforcement learning to control energy systems, where, apart from performance requirements, one has additional safety requirements such as avoiding blackouts.
We detail how these safety requirements, stated in real-time temporal logic, can be strengthened via discretization into linear temporal logic (LTL).
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study challenges in using reinforcement learning to control energy
systems, where, apart from performance requirements, one has additional safety
requirements such as avoiding blackouts. We detail how these safety
requirements, stated in real-time temporal logic, can be strengthened via
discretization into linear temporal logic (LTL), such that satisfaction of the
LTL formulae implies satisfaction of the original safety requirements. The
discretization enables advanced engineering methods such as synthesizing
shields for safe reinforcement learning as well as formal verification: with
statistical model checking, the probabilistic guarantee acquired by LTL model
checking forms a lower bound on the probability that the original real-time
safety requirements are satisfied.
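To make the discretization concrete, consider a hypothetical requirement (our example, not one taken from the paper): every blackout must be cleared within T seconds. With a fixed sampling period delta, and setting aside inter-sample behavior, this real-time requirement can be strengthened into the bounded LTL formula G(blackout -> F<=k !blackout) with k = floor(T / delta), since clearance within k samples takes at most k * delta <= T seconds. A minimal monitor sketch:

```python
import math

def bounded_response_monitor(trace, T_seconds, delta):
    """Check the discrete strengthening G(blackout -> F<=k !blackout)
    on a finite sampled trace, with k = floor(T_seconds / delta) so
    that k * delta <= T_seconds (the conservative direction).

    trace: list of booleans, True = blackout observed at that sample.
    """
    k = math.floor(T_seconds / delta)
    for i, blackout in enumerate(trace):
        # Violation: a blackout that is never cleared within k samples
        # (a too-short suffix at the end of the trace is treated
        # conservatively, i.e. as a violation).
        if blackout and all(trace[i : i + k + 1]):
            return False
    return True

# Example: 1 s sampling, blackouts must clear within 5 s, so k = 5.
trace = [False, True, True, False, False, True, True, True, True, True, True]
print(bounded_response_monitor(trace, T_seconds=5.0, delta=1.0))  # False
```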
Related papers
- Safety through Permissibility: Shield Construction for Fast and Safe Reinforcement Learning [57.84059344739159]
"Shielding" is a popular technique to enforce safety inReinforcement Learning (RL)
We propose a new permissibility-based framework to deal with safety and shield construction.
arXiv Detail & Related papers (2024-05-29T18:00:21Z)
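To illustrate the general shielding idea (a hypothetical sketch, not the permissibility-based construction proposed in the paper above), a shield intercepts each proposed action and overrides it whenever it falls outside a precomputed permissible set:

```python
import random

def shielded_step(state, agent_action, permissible_actions):
    """Generic shield wrapper (illustrative only). `permissible_actions`
    is a hypothetical oracle mapping a state to its set of safe actions,
    e.g. precomputed from an abstraction of the system dynamics."""
    allowed = permissible_actions(state)
    if agent_action in allowed:
        return agent_action                 # proposed action is already safe
    return random.choice(sorted(allowed))   # override with a safe fallback
```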
- System Safety Monitoring of Learned Components Using Temporal Metric Forecasting [8.76735390039138]
In learning-enabled autonomous systems, safety monitoring of learned components is crucial to ensure their outputs do not lead to system safety violations.
We propose a safety monitoring method based on probabilistic time series forecasting.
We empirically evaluate the safety metric and violation prediction accuracy, as well as the inference latency and resource usage, of four state-of-the-art models; a toy forecasting-based monitor is sketched below.
arXiv Detail & Related papers (2024-05-21T23:48:26Z)
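As a toy illustration of monitoring via probabilistic forecasting (a deliberately simple stand-in for the learned forecasting models the paper evaluates), one can fit a Gaussian random walk to the recent history of a safety metric and raise an alarm when the predicted violation probability exceeds a budget:

```python
import statistics
from math import erf, sqrt

def violation_probability(history, horizon, threshold):
    """Toy probabilistic forecaster: fit a Gaussian random walk to the
    recent increments of a safety metric and return the probability
    that the metric exceeds `threshold` after `horizon` more steps.

    history: list of at least 3 recent metric values.
    """
    diffs = [b - a for a, b in zip(history, history[1:])]
    mu = statistics.mean(diffs)
    sigma = statistics.stdev(diffs) or 1e-9   # guard against zero variance
    mean_h = history[-1] + horizon * mu       # forecast mean
    std_h = sigma * sqrt(horizon)             # forecast standard deviation
    z = (threshold - mean_h) / std_h
    return 0.5 * (1.0 - erf(z / sqrt(2.0)))   # P(metric > threshold)

# Raise an alarm if the predicted violation probability exceeds a budget.
history = [0.10, 0.12, 0.15, 0.19, 0.24]
print(violation_probability(history, horizon=3, threshold=0.4) > 0.05)
```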
- Sampling-based Safe Reinforcement Learning for Nonlinear Dynamical Systems [15.863561935347692]
We develop provably safe and convergent reinforcement learning algorithms for control of nonlinear dynamical systems.
Recent advances at the intersection of control and RL follow a two-stage, safety filter approach to enforcing hard safety constraints.
We develop a single-stage, sampling-based approach to hard constraint satisfaction that learns RL controllers enjoying classical convergence guarantees; the flavor of such sampling-based selection is sketched below.
arXiv Detail & Related papers (2024-03-06T19:39:20Z)
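The flavor of sampling-based constraint handling can be sketched as follows (an illustrative rejection-sampling scheme, not the authors' algorithm; `policy_sample`, `is_safe`, and `score` are hypothetical callables):

```python
import random

def sample_safe_action(policy_sample, is_safe, score, n_samples=64):
    """Illustrative single-stage, sampling-based action selection:
    draw candidate actions from the learned policy, discard any that
    violate the hard constraint, and return the best-scoring safe
    candidate (None signals that no safe sample was found)."""
    candidates = [policy_sample() for _ in range(n_samples)]
    safe = [a for a in candidates if is_safe(a)]
    return max(safe, key=score) if safe else None

# Hypothetical usage with stand-in callables:
action = sample_safe_action(
    policy_sample=lambda: random.uniform(-1.0, 1.0),  # draw from the policy
    is_safe=lambda a: abs(a) <= 0.5,                  # hard constraint
    score=lambda a: -abs(a - 0.3),                    # task objective
)
print(action)
```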
- Safe Model-Based Reinforcement Learning with an Uncertainty-Aware Reachability Certificate [6.581362609037603]
We build a safe reinforcement learning framework to resolve the constraints required by the uncertainty-aware reachability certificate (DRC) and its corresponding shield policy.
We also devise a line search method to maintain safety and reach higher returns simultaneously while leveraging the shield policy.
arXiv Detail & Related papers (2022-10-14T06:16:53Z)
- Recursively Feasible Probabilistic Safe Online Learning with Control Barrier Functions [60.26921219698514]
We introduce a model-uncertainty-aware reformulation of CBF-based safety-critical controllers.
We then present the pointwise feasibility conditions of the resulting safety controller.
We use these conditions to devise an event-triggered online data collection strategy; a minimal CBF safety filter is sketched below.
arXiv Detail & Related papers (2022-08-23T05:02:09Z)
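For reference, a minimal control barrier function (CBF) safety filter for a scalar input can be written in closed form (an illustrative baseline; the paper above addresses the harder model-uncertainty-aware and recursive-feasibility aspects):

```python
def cbf_filter(u_nominal, h, Lf_h, Lg_h, alpha=1.0):
    """Minimal CBF safety filter for a scalar input (illustrative).
    Projects the nominal (e.g. learned) input onto the half-line
        Lf_h + Lg_h * u + alpha * h >= 0,
    the closed-form solution of
        min (u - u_nominal)^2  s.t.  h_dot + alpha * h >= 0.
    """
    if Lg_h == 0.0:
        return u_nominal                 # constraint independent of u
    bound = -(Lf_h + alpha * h) / Lg_h
    if Lg_h > 0.0:
        return max(u_nominal, bound)     # feasible set: u >= bound
    return min(u_nominal, bound)         # feasible set: u <= bound
```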
- Safe Reinforcement Learning via Confidence-Based Filters [78.39359694273575]
We develop a control-theoretic approach for certifying state safety constraints for nominal policies learned via standard reinforcement learning techniques.
We provide formal safety guarantees, and empirically demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2022-07-04T11:43:23Z)
- Model-Free Learning of Safe yet Effective Controllers [11.876140218511157]
We study the problem of learning safe control policies that are also effective.
We propose a model-free reinforcement learning algorithm that learns a policy that first maximizes the probability of ensuring safety.
arXiv Detail & Related papers (2021-03-26T17:05:12Z)
- Closing the Closed-Loop Distribution Shift in Safe Imitation Learning [80.05727171757454]
We treat safe optimization-based control strategies as experts in an imitation learning problem.
We train a learned policy that can be cheaply evaluated at run-time and that provably satisfies the same safety guarantees as the expert.
arXiv Detail & Related papers (2021-02-18T05:11:41Z)
- Cautious Reinforcement Learning with Logical Constraints [78.96597639789279]
An adaptive safe padding forces Reinforcement Learning (RL) to synthesise optimal control policies while ensuring safety during the learning process.
Theoretical guarantees are available on the optimality of the synthesised policies and on the convergence of the learning algorithm.
arXiv Detail & Related papers (2020-02-26T00:01:08Z)
- Certified Reinforcement Learning with Logic Guidance [78.2286146954051]
We propose a model-free RL algorithm that enables the use of Linear Temporal Logic (LTL) to formulate a goal for unknown continuous-state/action Markov Decision Processes (MDPs).
The algorithm is guaranteed to synthesise a control policy whose traces satisfy the specification with maximal probability; a toy automaton-guided reward is sketched below.
arXiv Detail & Related papers (2019-02-02T20:09:32Z)
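To sketch the automaton-guided idea behind such logic-guided RL (a toy example, not the paper's algorithm), one can track a hand-coded automaton for a formula like F goal & G !unsafe alongside the MDP state and reward automaton progress:

```python
def automaton_step(q, labels):
    """Hand-coded 3-state automaton for  F goal & G !unsafe:
    0 = still searching, 1 = goal reached safely, 2 = violated (sink)."""
    if q == 2 or "unsafe" in labels:
        return 2
    if q == 1 or "goal" in labels:
        return 1
    return 0

def shaped_reward(q, q_next):
    """Reward only automaton progress: +1 on reaching acceptance,
    -1 on violation, 0 otherwise, steering the learner toward
    traces that satisfy the formula."""
    if q_next == 1 and q != 1:
        return 1.0
    if q_next == 2 and q != 2:
        return -1.0
    return 0.0

# Inside an RL loop, the learner acts on the product state (s, q):
#   q_next = automaton_step(q, labels_of(s_next))
#   r += shaped_reward(q, q_next)
```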
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.