Safe Reinforcement Learning for Grid Voltage Control
- URL: http://arxiv.org/abs/2112.01484v1
- Date: Thu, 2 Dec 2021 18:34:50 GMT
- Title: Safe Reinforcement Learning for Grid Voltage Control
- Authors: Thanh Long Vu, Sayak Mukherjee, Renke Huang, Qiuhua Huang
- Abstract summary: Under voltage load shedding has been considered as a standard approach to recover the voltage stability of the electric power grid under emergency conditions.
In this paper, we discuss two novel safe RL approaches, namely a constrained-optimization approach and a barrier-function-based approach.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Under voltage load shedding has been considered as a standard approach to
recover the voltage stability of the electric power grid under emergency
conditions, yet this scheme usually trips a massive amount of load
inefficiently. Reinforcement learning (RL) has been adopted as a promising
approach to circumvent these issues; however, RL approaches usually cannot
guarantee the safety of the systems under control. In this paper, we discuss
two novel safe RL approaches, namely a constrained-optimization approach and a
barrier-function-based approach, that can safely recover voltage under
emergency events. These methods are general and can be applied to other
safety-critical control problems. Numerical simulations on the 39-bus IEEE
benchmark are performed to demonstrate the effectiveness of the proposed safe
RL emergency control.
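The barrier-function-based approach can be illustrated with a minimal safety filter wrapped around an RL policy. The scalar voltage model, the load-shedding dynamics, and all numerical values below are illustrative assumptions, not the paper's actual grid model; the sketch only shows the general idea of enforcing a discrete-time barrier condition on the RL action.

```python
# Hedged sketch of a barrier-function safety filter around an RL policy.
# Dynamics, limits, and coefficients are illustrative assumptions.

V_MIN = 0.95   # per-unit lower voltage limit (assumed)
ALPHA = 0.5    # barrier decay rate in (0, 1]

def h(v):
    """Barrier function: the safe set is {v : h(v) >= 0}."""
    return v - V_MIN

def voltage_step(v, u):
    """Toy scalar voltage model: load shedding u in [0, 1] raises voltage."""
    return v + 0.1 * u - 0.02  # assumed dynamics

def safe_action(v, u_rl):
    """Minimally adjust the RL action so the discrete-time barrier
    condition h(v_next) >= (1 - ALPHA) * h(v) holds."""
    # Solve voltage_step(v, u) - V_MIN >= (1 - ALPHA) * h(v) for u.
    u_min = ((1 - ALPHA) * h(v) + V_MIN - v + 0.02) / 0.1
    return min(1.0, max(u_rl, u_min))

v = 0.96
u = safe_action(v, 0.0)        # RL proposes no shedding; filter overrides
v_next = voltage_step(v, u)
assert h(v_next) >= (1 - ALPHA) * h(v) - 1e-9
```

Because the barrier condition only constrains the action when safety is at risk, the filter leaves the RL action unchanged whenever it already satisfies the condition.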
Related papers
- Safety through Permissibility: Shield Construction for Fast and Safe Reinforcement Learning [57.84059344739159]
"Shielding" is a popular technique to enforce safety in Reinforcement Learning (RL).
We propose a new permissibility-based framework to deal with safety and shield construction.
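The shielding idea above can be sketched in a few lines: a shield checks whether the RL action is permissible and substitutes a known-safe fallback otherwise. The state space, permissibility check, and fallback action here are illustrative assumptions, not the paper's construction.

```python
# Minimal sketch of "shielding": the shield vetoes impermissible RL
# actions and substitutes a known-safe fallback. All specifics assumed.

SAFE_FALLBACK = 0  # assumed always-permissible action

def permissible(state, action):
    """Toy permissibility check (assumption): the action must keep
    the successor state inside the safe interval [0, 10]."""
    return 0 <= state + action <= 10

def shielded_step(state, rl_action):
    """Apply the RL action if permissible, else the safe fallback."""
    action = rl_action if permissible(state, rl_action) else SAFE_FALLBACK
    return action, state + action

a, s = shielded_step(9, 5)   # 9 + 5 = 14 would leave the safe set,
                             # so the shield substitutes the fallback
```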
arXiv Detail & Related papers (2024-05-29T18:00:21Z) - Reinforcement Learning with Adaptive Regularization for Safe Control of Critical Systems [2.126171264016785]
We propose RL with Adaptive Regularization (RL-AR), an algorithm that enables safe RL exploration.
RL-AR performs policy combination via a "focus module," which determines the appropriate combination depending on the state.
In a series of critical control applications, we demonstrate that RL-AR not only ensures safety during training but also achieves a return competitive with the standards of model-free RL.
arXiv Detail & Related papers (2024-04-23T16:35:14Z) - Sampling-based Safe Reinforcement Learning for Nonlinear Dynamical Systems [15.863561935347692]
We develop provably safe and convergent reinforcement learning algorithms for control of nonlinear dynamical systems.
Recent advances at the intersection of control and RL follow a two-stage, safety filter approach to enforcing hard safety constraints.
We develop a single-stage, sampling-based approach to hard constraint satisfaction that learns RL controllers enjoying classical convergence guarantees.
arXiv Detail & Related papers (2024-03-06T19:39:20Z) - Bayesian Reinforcement Learning for Automatic Voltage Control under Cyber-Induced Uncertainty [0.533024001730262]
This work introduces a Bayesian Reinforcement Learning (BRL) approach for power system control problems.
It focuses on sustained voltage control under uncertainty in a cyber-adversarial environment.
BRL techniques assist in automatically finding a threshold for exploration and exploitation in various RL techniques.
arXiv Detail & Related papers (2023-05-25T20:58:08Z) - Enforcing Hard Constraints with Soft Barriers: Safe Reinforcement Learning in Unknown Stochastic Environments [84.3830478851369]
We propose a safe reinforcement learning approach that can jointly learn the environment and optimize the control policy.
Our approach can effectively enforce hard safety constraints and significantly outperforms CMDP-based baseline methods in system safety rate, as measured via simulations.
arXiv Detail & Related papers (2022-09-29T20:49:25Z) - Recursively Feasible Probabilistic Safe Online Learning with Control Barrier Functions [60.26921219698514]
We introduce a model-uncertainty-aware reformulation of CBF-based safety-critical controllers.
We then present the pointwise feasibility conditions of the resulting safety controller.
We use these conditions to devise an event-triggered online data collection strategy.
arXiv Detail & Related papers (2022-08-23T05:02:09Z) - Safe reinforcement learning for multi-energy management systems with known constraint functions [0.0]
Reinforcement learning (RL) is a promising optimal control technique for multi-energy management systems.
We present two novel safe RL methods, namely SafeFallback and GiveSafe.
In a simulated multi-energy-systems case study, we show that both methods start with a significantly higher utility.
arXiv Detail & Related papers (2022-07-08T11:33:53Z) - Safe Reinforcement Learning via Confidence-Based Filters [78.39359694273575]
We develop a control-theoretic approach for certifying state safety constraints for nominal policies learned via standard reinforcement learning techniques.
We provide formal safety guarantees, and empirically demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2022-07-04T11:43:23Z) - Enhancing Safe Exploration Using Safety State Augmentation [71.00929878212382]
We tackle the problem of safe exploration in model-free reinforcement learning.
We derive policies for scheduling the safety budget during training.
We show that Simmer can stabilize training and improve the performance of safe RL with average constraints.
arXiv Detail & Related papers (2022-06-06T15:23:07Z) - Contingency-constrained economic dispatch with safe reinforcement learning [7.133681867718039]
Reinforcement-learning-based (RL) controllers can address this challenge but cannot themselves provide safety guarantees.
We propose a formally validated RL controller for economic dispatch.
We extend conventional constraints by a time-dependent constraint encoding the islanding contingency.
Unsafe actions are projected into the safe action space while leveraging constrained zonotope set representations for computational efficiency.
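The projection step above can be illustrated with a simplified safe action set. The paper uses constrained zonotope representations for efficiency; in this hedged sketch the safe set is just an axis-aligned box, for which the Euclidean projection reduces to componentwise clipping.

```python
# Hedged sketch of projecting an unsafe action onto a safe action set.
# The safe set here is an assumed box [lo, hi] per dimension, not the
# paper's constrained zonotopes; projection onto a box is a clip.

def project_to_box(action, lo, hi):
    """Euclidean projection of each action component onto [lo, hi]."""
    return [min(hi, max(lo, a)) for a in action]

u_rl = [1.4, -0.2, 0.7]             # RL action violating the bounds
u_safe = project_to_box(u_rl, 0.0, 1.0)
# → [1.0, 0.0, 0.7]
```

For general polytopic or zonotopic safe sets the projection is a small quadratic program rather than a clip, but the principle is the same: replace the unsafe action with the nearest safe one.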
arXiv Detail & Related papers (2022-05-12T16:52:48Z) - Learning Robust Hybrid Control Barrier Functions for Uncertain Systems [68.30783663518821]
We propose robust hybrid control barrier functions as a means to synthesize control laws that ensure robust safety.
Based on this notion, we formulate an optimization problem for learning robust hybrid control barrier functions from data.
Our techniques allow us to safely expand the region of attraction of a compass gait walker that is subject to model uncertainty.
arXiv Detail & Related papers (2021-01-16T17:53:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.